Higher Ed Reactions to ChatGPT Run the Gamut

As some K-12 schools have moved to ban a new AI chatbot that can write essays and answer complex questions, higher ed experts are weighing the pros and cons. They all agree on one thing: Education is going to change.

Most chatbots to date have barely been able to hold basic conversations, but one of the latest leaps forward in artificial intelligence — ChatGPT — can answer complex questions and even write coherent essays, prompting widespread concern among educators about its potential to enable academic dishonesty. While the New York City Department of Education and Seattle Public Schools recently blocked ChatGPT on school-owned networks and devices, higher education administrators and educators are still weighing how to approach the issues the technology raises amid the ongoing digitization of teaching and learning.

According to Robert Cummings, executive director of academic innovation and associate professor of writing and rhetoric at the University of Mississippi, programs such as ChatGPT are, like other emerging technologies, a double-edged sword. He noted the tool could be helpful to student learning, depending on how it’s used, adding that students may also be able to use the program to combat writer’s block and brainstorm essay topics.

“We built a local team of writing faculty to engage with the tools and to explore pedagogical possibilities. We want to empower our students as writers and thinkers, and we know that AI will play a role in their futures,” he wrote in a recent statement for the Chronicle of Higher Education.

Cummings told Government Technology it’s important for higher ed faculty to familiarize themselves with new chatbot capabilities, as well as signs that a student may be passing off AI-generated content as their own. Some instructors have also changed their approach to course content and the types of assignments they give, in some cases emphasizing participation in class, where a student’s contributions are more apparent.

“Our attitude is basically this: We don’t have a choice. Our students will go to work in a world where they’re expected to use these tools to be more productive, so we owe it to our students to help explore these tools and help them understand what they can and can’t do well with them. … The main complication is that technologies are changing so rapidly on a day-to-day basis,” Cummings said. “We think it’s more risky to turn away from the technology than engage it.”

Cummings noted that ChatGPT could also be helpful for research purposes. However, one of the major challenges presented by programs like ChatGPT is that many of the anti-plagiarism tools used in higher ed today cannot reliably detect AI-generated content.

While there are some programs in development that can help to detect AI-generated writing, Cummings said, the challenges that come with this kind of technology are indicative of an ongoing “tech arms race” between anti-plagiarism tools and chatbots that can write coherent prose.
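
To illustrate the kind of signal these detection tools lean on: text sampled from a language model tends to be more statistically predictable, or lower in “perplexity,” than human prose when scored by a similar model. The sketch below is a minimal, hypothetical example of that idea, assuming the open-source Hugging Face transformers library and the public GPT-2 model; it is not the method of any particular detection product, and the threshold is arbitrary.

```python
# Illustrative perplexity check: a simplified stand-in for the statistical
# signals AI-text detectors rely on. Assumes `torch` and `transformers`
# are installed; this is not any specific product's actual method.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Score how predictable `text` is under GPT-2 (lower = more predictable)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels equal to input_ids, the model returns the mean
        # cross-entropy loss over the sequence.
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

# Machine-generated prose often scores lower than human writing, but the
# signal is noisy and easily evaded; 40.0 is a demonstration value only.
if perplexity("The essay you want to screen goes here.") < 40.0:
    print("Statistically predictable -- possibly machine-generated.")
```

Signals like this are brittle: a student who lightly edits model output can push the score back toward human-typical ranges, which is exactly the arms-race dynamic Cummings describes.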

“It’s a disruption for higher ed because it’s a disruption to fundamental human literacy,” he said. “We could reasonably assume whenever we read text that there was a human involved [in previous years]. We’re past that now. That is no longer going to be a reasonable assumption when you look at a piece of writing. That’s a big shift, and I think that’s what we’re looking at now.”

Some educators and tech experts — including AI researchers themselves — warn that caution is needed with chatbot programs, partly because of the technology’s current limitations. Despite constant improvements, chatbots still tend to produce writing that feels off, according to Neil Heffernan, an AI researcher at Worcester Polytechnic Institute in Massachusetts and developer of ASSISTments, an AI-driven homework feedback program for K-12 students. He said he generally supports moves to keep these programs out of schools and classrooms, adding that he is particularly worried about AI-driven tools being used for things like social-emotional support.

“No student should be directly exposed to [ChatGPT],” Heffernan said in an email to Government Technology.

As programs like ChatGPT come into the higher ed limelight, universities such as American University, Stanford University and Florida International University have hosted expert panels and discussions with their campus communities about how to approach the issues raised by AI-based digital learning resources.

Despite the tech’s current limitations, experts from Northwestern University said in a recent news release that chatbot programs could be used to create fake abstracts for research studies. The release cited recent research indicating that tech tools like ChatGPT may be able to “fool” human reviewers more than 30 percent of the time.

According to Mara Bianco, an IT program manager and adjunct lecturer at Baruch College in the City University of New York system, recent developments in AI ed-tech tools are uncharted territory, and institutions will need to adapt to the challenges they present. She recommended against punitive approaches such as bans and disciplinary action.

“This technology is forcing many into an uncomfortable arena and we have to adapt through training and development, and academic administrations need to be nimble and provide development opportunities,” she told Government Technology. “My role is to make my students successful. If I detect challenges, we talk about it. What’s the value of reporting them to an academic tribunal?”

Bianco said colleges and universities should consider ways that new tools like ChatGPT might help them serve their core mission, and adjust accordingly.

“Faculty just need to adapt to different learning styles and new technologies, especially when enrollments are declining,” she said. “Moreover, we need to adapt to students’ life schedules, shorter semesters and quicker turnaround times to remain competitive.”

As with most technologies, the challenges these tools pose lie in what the end user does with them, Cummings and Bianco said. ChatGPT, for instance, was not developed to encourage academic dishonesty but as a helpful resource for academics and students, according to the company.

“ChatGPT is available as a research preview to learn from real-world use, which we believe is a critical part of developing and deploying capable, safe AI systems. We are constantly incorporating feedback and lessons learned,” OpenAI spokesperson Alex Beck told Government Technology. “We’ve always called for transparency around the use of AI-generated text. Our policies require that users be upfront with their audience when using our API and creative tools like DALL-E and GPT-3.”
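
For readers unfamiliar with the API the quote refers to: during the research preview, developers typically reached GPT-3 through OpenAI’s Python library, roughly as sketched below. The prompt and the environment-variable key are placeholders, and the snippet reflects the pre-1.0 library interface of that era, not a current recommendation.

```python
# Minimal GPT-3 completion request via OpenAI's Python library as it
# existed around the ChatGPT research preview (openai < 1.0).
import os
import openai

# Placeholder: read the key from the environment rather than hard-coding it.
openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3-era completion model
    prompt="Suggest three essay topics on the history of higher education.",
    max_tokens=150,
    temperature=0.7,
)

print(response.choices[0].text.strip())
```
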
Brandon Paykamian is a staff writer for Government Technology. He has a bachelor's degree in journalism from East Tennessee State University and years of experience as a multimedia reporter, mainly focusing on public education and higher ed.