Anti-Cheating Software Drawing Criticism at Universities

Some education officials view anti-cheating software as an important part of maintaining the integrity of exams during remote learning, but the tools have raised privacy concerns among students and digital rights activists.

When universities across the country first went virtual last year in response to COVID-19, administrators increased their use of anti-cheating programs, which monitor students through their webcams with artificial intelligence-based facial recognition. With these tools, officials hoped to deter cheating on tests during remote learning, but the shift hasn't come without backlash.

In March, the University of Wisconsin-Madison disabled facial recognition features offered through Honorlock, an online exam proctoring service, after three students with darker skin tones said the program failed to recognize their facial features and paused the exam. University spokesperson Meredith McGlone said students also raised privacy concerns about the tool in focus groups and surveys conducted last fall.

“We shared the concerns that emerged here with Honorlock, and they indicated that they have no data from our campus or others to indicate that the tool has difficulty recognizing certain skin colors,” she said in an email to Government Technology. “Nonetheless, because of this and other concerns and out of an abundance of caution, we asked them to disable the exam pause feature for our campus and they did so.”

Honorlock CEO Michael Hemlepp said the exam pause feature sometimes pauses online exams when students look away from the camera or use low lighting. He said developers are working to refine the program's AI features, which also watch for behaviors deemed suspicious during testing.

“Unlike other proctoring platforms, Honorlock works by using AI technology to monitor exams, watching for unexpected behaviors,” he explained. “Only when a flag is triggered does the system notify a certified online test proctor who has the option to ‘pop-in’ to the exam and communicate with the student directly via live chat.”
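
Hemlepp's description amounts to a debounced flagging loop: score each webcam frame, and escalate to a human proctor only after a sustained run of suspect frames. The Python sketch below illustrates that general pattern; it is not Honorlock's actual code, and the detector output, thresholds and frame rate are invented for illustration.

# Hypothetical sketch of the flag-then-escalate flow described above.
# This is not Honorlock's code; the thresholds, names and the
# simulated detector are illustrative assumptions only.
from dataclasses import dataclass

FACE_CONFIDENCE_MIN = 0.6   # assumed cutoff for "face visible"
BRIGHTNESS_MIN = 40         # assumed cutoff for adequate lighting (0-255)
CONSECUTIVE_FRAMES = 30     # assumed debounce: ~1 second at 30 fps

@dataclass
class Frame:
    face_confidence: float  # output of some face detector, 0.0-1.0
    brightness: float       # mean pixel brightness, 0-255

def monitor(frames):
    """Yield a flag only after a sustained run of suspect frames,
    so a brief glance away does not interrupt the exam."""
    suspect_run = 0
    for i, frame in enumerate(frames):
        looked_away = frame.face_confidence < FACE_CONFIDENCE_MIN
        too_dark = frame.brightness < BRIGHTNESS_MIN
        if looked_away or too_dark:
            suspect_run += 1
        else:
            suspect_run = 0
        if suspect_run == CONSECUTIVE_FRAMES:
            # Rather than pausing the exam outright, escalate to a
            # human proctor, as the company says its workflow does.
            yield {"frame": i, "reason": "looked_away" if looked_away else "low_light"}

# Simulated feed: normal frames, then a stretch of low-light frames.
feed = [Frame(0.9, 120)] * 50 + [Frame(0.9, 20)] * 40
for flag in monitor(feed):
    print("notify proctor:", flag)

The debounce constant is the design tension Hemlepp alludes to: set it too low and a glance at the keyboard interrupts the exam, too high and genuine problems go unflagged.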

Hemlepp denied hearing any explicit reports about the company’s facial recognition technology flagging students due to their race, adding that any new technology “gets better with time.”

“We acknowledge that AI technology is imperfect, and flags are not always an indication of problematic behavior,” he said. “We have not been informed by UW-Madison, or any other customer using Honorlock, that the pause feature was disabled due to the test taker’s ethnicity.”

McGlone said officials expect other digital tools will continue to play a role at UW-Madison, but the university had not yet entered into a digital proctoring contract for the 2021-22 academic year as of this week.

Digital rights organization Fight for the Future has railed against the use of various facial recognition technologies in both the public and private sectors for years. Campaigns and Communications Director Lia Holland said the reports about problems faced by students of color at UW-Madison were troubling to digital privacy advocates.

“It seems like almost every week that we hear another story of a student of color being told that they aren’t there, being told that they don’t have a face, being told that they are cheating,” she said. “We find that incredibly problematic and concerning.”

Holland said “experimental” surveillance technologies are largely inaccurate and inefficient at what they set out to do, noting errors such as those reported at UW-Madison. She believes these tools also cultivate a culture of distrust and suspicion within institutions between professors, administrators and students. This culture, she said, has also found its way into K-12 schools, where administrators have used surveillance software tools that provide granular analytics on how students’ devices are used. 

Holland said questions remain about how AI proctoring programs like Honorlock interact with neurodiverse students, adding that perceptions of what’s considered “normal” or “suspicious” can be subjective.

“It also does things like tracking your eye movements and whether you’re looking away,” she said. “There are huge ableism concerns in the same vein as the racial [concerns], and those are just as important and urgent too.”

In a broader context, digital rights activists have pressed the Biden administration to impose restrictions on the use of facial recognition technology by law enforcement and federal agencies.

Holland believes officials should ban these and other digital monitoring tools as companies selling surveillance software look to new markets in education and telework, where she said there is little need for them.

“It’s [about] privacy concerns overall, because all of these ‘move fast, break things’ Silicon Valley-type companies are creating solutions for problems that don’t really exist,” she said. “It truly is an awful situation, and we’re working very hard to change that.”

Students taking online courses at San Diego State University have also voiced complaints regarding the university's use of Respondus, another exam proctoring program with similar functions. According to a university statement to Government Technology, SDSU officials have taken note of student concerns about the widely used ed-tech tool, which can lock students' browsers and flag suspicious activity during exams.

“The university recently decided to phase out campus support of the software,” the statement read. “The move is in response to national concerns regarding privacy, equity and the efficacy of the software.”

As some universities ditch their anti-cheating programs, others such as the University of North Carolina at Greensboro continue using tools like Respondus. UNC-Greensboro Dean of Undergraduate Studies Andrew Hamilton said there has so far been little backlash over proctoring tools among students, many of whom have enjoyed the convenience of online courses.

“It matters that students are fairly assessed for what they know and can do,” he said. “We have received feedback from students about what they take to be the invasive nature of electronic monitoring of remote exams; these are in the tens or dozens in number, while in the fall 2020 term alone we served more than 30,000 assessments to nearly 8,700 students using lockdown browser and monitoring technologies.

“For the overwhelming majority of our students, convenience and safety eclipse any concerns they may have about steps we and they must take to ensure the integrity of our courses and grading,” he said.

Though discussions about surveillance and privacy in general often weigh privacy concerns against a desire for security, Holland said many ed-tech surveillance tools can backfire and create new vulnerabilities for students. She cited a data breach last year that affected more than 440,000 individuals using exam proctoring program ProctorU as one example.

“Anyone can be behind a camera, using an algorithm or AI for their own purposes,” she said.

Editor's note: This story originally misquoted Lia Holland's description of "'move fast, break things' Silicon Valley-type companies."

Brandon Paykamian is a staff writer for Government Technology. He has a bachelor's degree in journalism from East Tennessee State University and years of experience as a multimedia reporter, mainly focusing on public education and higher ed.