Andy Williams used to croon that this is “The Most Wonderful Time of the Year.” For me, it’s time to update the curriculum for my class on Electronic Discovery and Digital Evidence at the University of Texas in the graduate schools of Law, Computer Science and Information Science. I’ve long built the course around a Workbook I wrote with readings and some two dozen exercises. But, when I last taught the course a year ago, generative AI was hardly a twinkle in Santa’s eye. Now, of course, AI is the topic that’s eaten all others. So, I’ve had to fashion a policy for student use of AI. I elected to embrace student use of AI tools, in part because legal scholarship is artful plagiarism termed “precedent” and, let’s face it, students are going to use LLMs whatever I say. So, here’s what I’ve come up with. I’ll be grateful for your feedback as comments, most especially if you are an educator facing the same issues with advice born of experience.

Use of Generative Large Language Models to Assist with Exercises

1. Explicit Disclosure Requirement

  • It is a violation of the honor code to misrepresent work that is not your own as your own. Students may use generative LLMs, such as ChatGPT or Bard, for assistance in completing Workbook exercises; however, they must explicitly disclose the use of these tools by providing a brief note or acknowledgment in their submissions. Transparency is mandatory.

2. Verification and Cross-Checking

  • Students may use generative LLMs during Workbook exercises but are required to independently verify and cross-check the information generated by these models through additional research using alternate, reliable sources.

3. Accountability

  • While generative LLMs are permitted tools, students are held accountable for the accuracy and completeness of the information obtained from these models. Any errors or omissions resulting from the use of LLMs are considered the responsibility of the student. This policy underscores the importance of independent verification and personal accountability.

4. Prohibited for Quizzes and Exams

  • Notwithstanding the foregoing, you may not consult any source of information, including AI resources, when completing quizzes or the final exam.

POSTSCRIPT: I add this a day after the foregoing, after reading that the Fifth Circuit has proposed a rule change requiring counsel and pro se litigants to certify, for any filed document, that “no generative artificial intelligence program was used in drafting the document…or to the extent such a program was used, all generated text, including all citations and legal analysis, has been reviewed for accuracy and approved by a human.” I recall shaking my head at how foolish it was when a grandstanding district court judge made headlines by requiring such certifications following a high-profile gaffe in New York. “Of course a lawyer must verify the accuracy of legal analysis and citations! Lawyers shouldn’t need to certify that we did what we are required to do!”

Yet, here I am requiring my students to do much the same. I feel confident in advising students that, if they use AI, they must verify the information and sink or swim based on what they submit, even if the AI hallucinates or misleads. Back in the day, lawyers knew they had to “Shepardize” citations to verify that the cases cited were still solid. Proffering a made-up citation was beyond comprehension.

So, am I right to require explicit disclosure of generative AI? Or will AI soon be woven into so many sources of information that requiring disclosure will feel as foolish as it would have been forty years ago to make students disclose that they used a word processor instead of a typewriter? I’m struggling with this. What do you think?