
OpenAI Unveils Tool to Address ChatGPT Cheating Concerns

Responding to concerns about students using chatbot programs like ChatGPT to do their homework for them, OpenAI developed a classifier tool that can, with limited accuracy, identify text generated by an AI chatbot.

Image: A person typing on a laptop with "ChatGPT" and a robot head hovering above the keyboard to indicate AI. (Shutterstock)
The artificial intelligence company OpenAI unveiled a tool on Tuesday to help educators detect AI-generated text, aiming to address concerns about students passing off AI-generated writing as their own.

According to a blog post on the company’s website, OpenAI trained a language-model program, which it calls a “classifier,” to distinguish between text written by a human and text written by any of several chatbot programs. The new tool arrives about two months after OpenAI launched ChatGPT, an AI-driven chatbot that can generate text in response to writing prompts, including those given to K-12 and college students for assignments and exams, well enough that students could pass its output off as their own work.

The blog post warned that the tool is not reliable in all cases, adding that it’s “impossible to reliably detect all AI-written text.” It said that so far, the classifier correctly identifies 26 percent of AI-written text as “likely AI-written,” while incorrectly flagging human-written text as AI-written 9 percent of the time, a 9 percent false positive rate.
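A rough back-of-the-envelope calculation shows why those two numbers, taken together, make the classifier unsuitable as a sole arbiter. The class size and share of AI-written essays below are made-up figures for illustration; only the 26 percent and 9 percent rates come from OpenAI's post.

```python
# Hypothetical scenario: a class submits 100 essays, 20 of them AI-written.
# The detection rates are the only figures taken from OpenAI's announcement.
ai_written = 20
human_written = 80

true_positive_rate = 0.26   # share of AI-written text flagged "likely AI-written"
false_positive_rate = 0.09  # share of human-written text wrongly flagged

flagged_ai = ai_written * true_positive_rate         # essays correctly flagged
flagged_human = human_written * false_positive_rate  # essays wrongly flagged

# Precision: of all flagged essays, the fraction that are actually AI-written.
precision = flagged_ai / (flagged_ai + flagged_human)
print(f"Correctly flagged: {flagged_ai:.1f}, wrongly flagged: {flagged_human:.1f}")
print(f"Precision: {precision:.0%}")
```

Under these assumed numbers, more of the flagged essays are honest student work than AI output, which is consistent with OpenAI's caution that the classifier should only supplement, not drive, academic-integrity decisions.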

OpenAI’s announcement added that the classifier “should not be used as a primary decision-making tool” for educators to determine academic dishonesty, but rather as a supplemental tool to help detect instances where a student might be taking credit for AI-generated content.

“Our classifier’s reliability typically improves as the length of the input text increases. Compared to our previously released classifier, this new classifier is significantly more reliable on text from more recent AI systems,” the blog post said. “We’re making this classifier publicly available to get feedback on whether imperfect tools like this one are useful. Our work on the detection of AI-generated text will continue, and we hope to share improved methods in the future.”

According to the announcement, OpenAI has developed a preliminary resource for educators outlining some of the classifier’s limitations; the company expects the classifier will also be used by journalists and misinformation watchdogs. In addition, the announcement asked teachers, administrators, parents, students and education service providers to try the classifier and submit feedback through the company’s website to help improve the tool moving forward.

“We are engaging with educators in the U.S. to learn what they are seeing in their classrooms and to discuss ChatGPT’s capabilities and limitations, and we will continue to broaden our outreach as we learn. These are important conversations to have as part of our mission is to deploy large language models safely, in direct contact with affected communities,” the blog post said.