
What is ChatGPT, how does it work, and how is it impacting academia?

ChatGPT is advanced software that uses artificial intelligence to formulate answers and participate in discussions in a strikingly realistic, conversational way. The problem? It is so good at its job that its output is often indistinguishable from what a human would type or say. That has had a major impact in academia, as students are using ChatGPT, which can answer just about any question posed to it, to perform tasks for them instead of doing the work themselves.

I recently chatted with Western Washington University Associate Professor of Computer Science Brian Hutchinson about ChatGPT and how it works, and about some of the questions gripping higher education around the world as it grapples with how this technology is impacting, and will continue to impact, the way faculty members teach and assess their students.

Can you tell us the basics of how ChatGPT works and why it was developed?

BH: ChatGPT is a specialized large language model. Language models are designed to model the probability of sequences, typically word sequences. Using machine learning techniques applied to large collections of text, they “learn” which words are likely to follow other words. (Machine learning falls under the umbrella of artificial intelligence, and accounts for many of artificial intelligence’s most impressive feats.) Language models have been used for decades in spoken and natural language processing tasks; e.g., speech recognition (speech to text). In recent years, researchers have developed a series of large language models that are massive neural networks (often containing billions of parameters to learn) that learn from massive datasets (often hundreds of billions of words) using a massive amount of computing power on supercomputing infrastructure.
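To make that idea concrete, here is a minimal sketch of a bigram language model in Python: it "learns" which words are likely to follow other words simply by counting adjacent word pairs in a tiny corpus. This is a toy illustration only; as Hutchinson notes, models like ChatGPT are massive neural networks trained on far more data, and the corpus and function names here are invented for the example.

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Estimate P(next_word | word) by counting adjacent word pairs."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for current, nxt in zip(words, words[1:]):
            counts[current][nxt] += 1
    # Normalize the counts into conditional probabilities.
    model = {}
    for word, followers in counts.items():
        total = sum(followers.values())
        model[word] = {w: c / total for w, c in followers.items()}
    return model

# A two-sentence stand-in for the "massive datasets" real models learn from.
corpus = [
    "the model predicts the next word",
    "the model learns from text",
]
model = train_bigram_model(corpus)
print(model["the"])  # {'model': 0.666..., 'next': 0.333...}
```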

The primary motivation for building large language models is to advance natural language understanding by computers; for example, to allow computers to answer questions or reason about text passages, to infer whether one statement contradicts another, or whether two statements mean the same thing. Once the parameters have been learned, these models can also “write” by repeatedly sampling words based on the probabilities they have learned. By cleverly prompting such models (by giving them a well-chosen initial snippet of text) and having them write the rest of the document, researchers can also use these models for numerous language processing tasks, including summarizing documents, correcting grammatical mistakes, and solving math story problems.
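The "write by repeatedly sampling words" step can be sketched the same way. Continuing the toy bigram example above (so `model` is the dictionary returned by `train_bigram_model`), the sketch below starts from a one-word prompt and repeatedly samples a next word from the learned probabilities; real large language models do something analogous with a neural network's output distribution over a huge vocabulary.

```python
import random

def generate(model, prompt_word, max_words=10):
    """Repeatedly sample a next word from the learned probabilities."""
    words = [prompt_word]
    for _ in range(max_words):
        followers = model.get(words[-1])  # learned P(next | current)
        if not followers:  # no observed continuation; stop generating
            break
        choices, probs = zip(*followers.items())
        words.append(random.choices(choices, weights=probs)[0])
    return " ".join(words)

print(generate(model, "the"))  # e.g. "the model learns from text"
```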

ChatGPT is a large language model that has been optimized for dialog: rather than use clever prompting to get the model to do what you want, you can simply tell it what to do. The convenience of this mode of interaction partially explains the sheer amount of publicity ChatGPT has received compared to prior large language models.
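The difference Hutchinson describes is easiest to see side by side. The two strings below are illustrative prompts only; the translation task is a stock example, not something from the interview:

```python
# "Clever prompting": frame the task as a document for the model to
# continue, so the most probable continuation is the answer you want.
completion_style = (
    "Translate English to French:\n"
    "sea otter => loutre de mer\n"
    "cheese =>"
)

# A dialog-optimized model like ChatGPT lets you state the task directly.
instruction_style = "Translate the word 'cheese' into French."
```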

How is this app particularly divisive or controversial in academia?

BH: ChatGPT and subsequent models have the potential to be very disruptive. On one hand, many faculty, students and staff will find benign ways to use it to boost their productivity; for example, as a proofreading aid or brainstorming tool. On the other hand, there is the obvious and serious risk of plagiarism. Numerous reports are surfacing across the country of university students attempting to pass off ChatGPT-authored work as their own, undermining the ability of instructors to gauge their students’ understanding of course content. Beyond the potential for plagiarism, there are other sources of controversy: large language models are known for reproducing the biases of the data they learned from, and they have a bad habit of flat-out fabricating things. It is also far from settled to what extent these types of models actually “understand” language.

How can faculty members protect the integrity of their coursework in the age of ChatGPT?

BH: I am aware of several strategies, and have no doubt many more have been and will be devised. To protect assignments that could be undermined by ChatGPT, faculty should be explicit about what role, if any, ChatGPT and similar models may play in student work before their use constitutes academic dishonesty. Some organizations are banning its use entirely. As with other forms of plagiarism, there are detection tools that predict whether a passage was written by a human or by a large language model. I expect that a stream of such tools will be released over the coming months and years.
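One common statistical idea behind such detectors is worth sketching: score how predictable a passage is to a language model (its perplexity), since model-generated text tends to be more predictable than human writing. Below is a minimal sketch using the openly available GPT-2 model via the Hugging Face transformers library; GPT-2 here is an assumption for illustration (it is not the model behind ChatGPT or any particular detector), and the threshold is an invented placeholder.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text):
    """How predictable the passage is to GPT-2 (lower = more predictable)."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # With labels equal to the input ids, the model returns the mean
        # cross-entropy of predicting each token; exponentiating it
        # gives the perplexity.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return torch.exp(loss).item()

THRESHOLD = 40.0  # invented placeholder; real detectors calibrate carefully

ppl = perplexity("Language models assign probabilities to word sequences.")
label = "more likely machine-generated" if ppl < THRESHOLD else "more likely human-written"
print(f"perplexity {ppl:.1f}: {label}")
```

Real detectors combine signals like this with trained classifiers, which is one reason their accuracy, and the arms race with evasion tools Hutchinson mentions next, remains an open question.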

Unfortunately, I also expect tools designed to assist in evading detection to become more sophisticated over time. Faculty could certainly change the way writing assignments are administered; for example, having students write by hand and/or in controlled environments where access to ChatGPT is restricted. Lastly, I suspect there will be much discussion about how assessment can or should evolve — either to reduce vulnerability to ChatGPT or to incorporate tools like ChatGPT into the learning process without losing the ability to assess learning outcomes. There is some protection simply due to the current limitations of ChatGPT (for example, it does not cite sources), but faculty should expect these tools to become increasingly sophisticated in the coming years.

There are now examples of the use of generative artificial intelligence to do things like create original artwork. Where do you see the uses for this type of programming going in the future?

BH: Beyond language models, I anticipate rapid progress on problems adjacent to the image generation example you mention, including video, audio, and 3D object generation. There will be countless related problems embedded in specific domains.

As an example, my research students, collaborators and I have had recent success developing generative models able to rapidly produce realistic spatiotemporal precipitation and temperature data under different climate scenarios, with the goal of allowing climate scientists to explore the effects of climate change on extreme weather events. I also suspect we will see progress in generating multiple data modalities; for example, audio-video or text-with-figures.

Unfortunately, I suspect we will find ourselves facing new controversies spawned by some of these future models, so the debate about machine learning and artificial intelligence in higher education may continue for quite some time.

Brian Hutchinson is an associate professor of Computer Science at Western, and holds a joint appointment as a staff scientist in the Foundational Data Science Group at Pacific Northwest National Laboratory. He has taught at WWU since Fall 2013, and received his doctorate in Electrical Engineering from the University of Washington in 2013.