UC National Center for Free Speech and Civic Engagement: “Ask the Experts: Artificial Intelligence and Education”
What should universities be doing to educate students about how to responsibly use generative artificial intelligence (AI) technologies in college and after they graduate?
What I try to teach my students about AI-generated text is that they cannot rely on it being correct. It may be useful if they are not confident about their grammar, but that’s about as far as it goes. We already train students, when they use material from the internet, to assess its sources; AI technologies call for the same scrutiny, except that we can no longer see the sources!
In my most recent class, I set an exercise that required students to use ChatGPT to generate an essay and then fact-check it. Many students professed a prior familiarity with the technology (although most were coy about how they had acquired it), and others enthused about its usefulness. Almost uniformly, though, the essays, which spanned a wide range of topics, included major errors of fact. My students caught some but by no means all. The errors often have a superficial plausibility and, of course, being generated by artificial “intelligence,” they seem reliable. I also take students through a dialogue I had with the same system in which, even after correction, it consistently returned to the same inaccuracies. It seems inevitable that, once they graduate, students will continue to rely on these technologies. The danger is that they lose their inclination to examine the products of AI critically and begin to assume that the output must be true.
Paul Dourish, Chancellor’s Professor & Steckler Endowed Chair of Information & Computer Science, UC Irvine and Director of the Steckler Center for Responsible, Ethical, and Accessible Technology
Read more responses captured by the UC National Center for Free Speech and Civic Engagement.