Siva Vaidhyanathan 

My students are using AI to cheat. Here’s why it’s a teachable moment

Ignoring ChatGPT and its cousins won’t get us anywhere. In fact, these systems reveal issues we too often miss
  
  

‘We have been dealing with cheating methods and technologies as long as we have been asking students to prove their knowledge.’ Photograph: izusek/Getty Images

In my spring lecture course of 120 students, my teaching assistants caught four instances of students using artificial-intelligence-driven language programs like ChatGPT to complete short essays. In each case, the students confessed to using such systems and agreed to rewrite the assignments themselves.

With all the panic about how students might use these systems to get around the burden of actually learning, we often forget that as of 2023, the systems don’t work well at all. It was easy to spot these fraudulent essays. They contained spectacular errors. They used text that did not respond to the prompt we had issued to students. Or they just sounded unlike what a human would write.

Our policy, given that this was the first wave of such cheating we encountered (and with full consideration that all students at the University of Virginia pledge to follow an “honour code” when they enrol), was to start a conversation with each student. We decided to make this moment work toward the goal of learning.

We asked them why they were tempted to use these services rather than their own efforts, which, for a two-page essay, would have been minimal. In each case, they said they were overwhelmed by the demands of other courses and of life itself.

We asked them to consider whether the results reflected well on their goal of becoming educated citizens. Of course they did not.

We also asked them why they thought we would be so inattentive as to let such submissions pass. They had no answer to that. But I hope we at least sparked some thought and self-examination. Sometimes that’s all we can hope for as teachers.

The course I teach is called Democracy in Danger. It was designed to get students to consider the historical roots of threats to democracy around the world. So it was not the proper forum to urge students to consider how thoughtlessly we use new technologies and what goes on behind the screen of a machine-learning system. But those are the most interesting questions to ask about artificial intelligence and large language models. I can’t wait to put them to my next group of students.

That’s why I am excited about the instant popularity of large language models in our lives. As long as they remain terrible at what they purport to do, they are perfect for study. They reveal so many of the issues we too often let lurk below our frenetic attention.

For decades, I have been searching for ways to get students to delve deeply into the nature of language and communication. What models of language do human minds and communities use? What models of language do computers use? How are they different? Why would we want computers to mimic humans? What are the costs and benefits of such mimicking? Is artificial intelligence actually intelligent? What does it mean that these systems look like they are producing knowledge, when they are actually only faking it?

Now, thanks to a few recent, significant leaps in language-based machine learning, students are invested in these questions. My community of scholars in media and communication, human-computer interaction, science-and-technology studies, and data science has for decades been following the development of these and other machine-learning systems embedded in various areas of life. Finally, the public seems to care.

As my University of Virginia School of Data Science colleague Rafael Alvarado has argued, these systems work by generating plausible prose based on the vast index of language we have produced for the world wide web and that companies like Google have scanned in from books. They fake what looks like knowledge by producing strings of text that statistically make sense.

They don’t correspond to reality in any direct way. When they get something right (and they often seem to), it’s by coincidence. These systems have consumed so much human text that they can predict which sentence looks good following another, and what combination of sentences and statements looks appropriate in response to a prompt or question.

“It’s really the work of the library that we are witnessing here,” Alvarado said. It’s a library without librarians, consisting of content disembodied and decontextualized, severed from the meaningful work of authors, submitted to gullible readers. These systems are, in Alvarado’s words, “good at form; bad at content”.
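To see what “good at form; bad at content” means in practice, here is a deliberately tiny sketch, my own toy illustration rather than anything Alvarado or the chatbot companies actually build: a short Python script that chains words together purely by counting which word has followed which in a small sample of text. Real large language models are vastly larger and more sophisticated, but the basic move is recognisable: pick what statistically looks right next, with no regard for whether it is true.

import random
from collections import defaultdict

# A toy "library": a few sentences, split into words.
corpus = (
    "students write essays about democracy . "
    "students write code about data . "
    "professors read essays about danger ."
).split()

# Count which words have followed which.
following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

def generate(start, length=8):
    # Walk forward, always picking a word that has followed the
    # current one somewhere in the corpus. Plausible form, no content.
    word, out = start, [start]
    for _ in range(length):
        if word not in following:
            break
        word = random.choice(following[word])
        out.append(word)
    return " ".join(out)

print(generate("students"))
# Might print: students write essays about data . professors read code
# Grammatical-looking, statistically sensible, indifferent to truth.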

The prospect of fooling a professor is always tempting. I’m old enough to remember when search engines themselves gave students vast troves of potential content to pass off as their own. We have been dealing with cheating methods and technologies as long as we have been asking students to prove their knowledge to us. Each time students deploy a new method, we respond and correct for it. And each time, we get better at designing tasks that can help students learn better. Writing, after all, is learning. So are speaking, arguing, and teaching. So are designing games, writing code, creating databases, and making art.

So going forward I will demand some older forms of knowledge creation to challenge my students and help them learn. I will require in-class writing. This won’t just take them away from screens, search engines, and large language models. It will demand that they think fluidly in the moment. Writing in real time demands clarity and concision. I will also assign more group presentations and insist that other students ask questions of the presenters, generating deeper real-time understanding of a subject.

Crucially, I will also ask students to use large language models in class to generate text and then assess its value and validity. I might ask one to “write an essay about AI in the classroom written in the style of Siva Vaidhyanathan”. Then, as a class, we would look up the sources of the claims, text, and citations, and assess the overall result.

One of the reasons so many people suddenly care about artificial intelligence is that we love panicking about things we don’t understand. Misunderstanding allows us to project spectacular dangers on to the future. Many of the very people responsible for developing these models (and who have enriched themselves doing so) warn us about artificial intelligence systems achieving some sort of sentience and taking control of important areas of life. Others warn of massive job displacement from these systems. All of these predictions assume that commercially deployed artificial intelligence would actually work as designed. Fortunately, most things don’t.

That does not mean we should ignore the present and serious dangers of poorly designed and deployed systems. For years, predictive modelling has distorted police work and sentencing procedures in American criminal justice, surveilling and punishing Black people disproportionately. Machine-learning systems are at work in insurance and healthcare, mostly without transparency, accountability, oversight or regulation.

We are committing two grave errors at the same time. We are hiding from artificial intelligence because it seems too mysterious and complicated, which renders its current, harmful uses invisible and undiscussed. And we are fretting about future worst-case scenarios that resemble the movie The Matrix more than any world we would actually create for ourselves. Both habits allow the companies that irresponsibly deploy these systems to exploit us. We can do better. I will do my part by teaching better in the future, not by ignoring these systems and their presence in our lives.

 
