
How AI Is Changing College Assessments of Proficiency

Artificial intelligence is causing college instructors to move more meaningful examinations back to the classroom, and connect the dots with students on why learning matters.

When David Hummels returned to teaching international economics at Indiana’s Purdue University in 2023 after nine years as business school dean, he found that assignments he once relied on no longer worked.

Previously, many of his assessments asked students to apply an algorithm to a given situation.

“That doesn’t work anymore, at all,” he said. “Especially for any kind of assessment that takes place out of my sight.”

Generative AI tools can now execute those algorithms quickly and convincingly. Hummels initially tried workarounds, like asking students to draw or analyze graphs, but those proved short-lived. As AI models have rapidly improved, his response has grown more comprehensive: like many college instructors, he is redesigning how and where he assesses student learning.

SHIFTING BACK TO THE CLASSROOM


One of the more immediate changes has been a move toward in-person or synchronous assessment.

Hummels began relocating graded work into the classroom and decreasing reliance on take-home problem sets. At Grand Valley State University (GVSU) in Michigan, mathematics professor Robert Talbert reached a similar conclusion.

Talbert has used mastery-based grading for nearly a decade. Students demonstrate proficiency on clearly defined learning targets, with opportunities to revise their work.

When he noticed changes in take-home submissions that indicated AI use, he reconsidered whether grades of any kind on take-home work were still valuable.

“I basically decided I cannot grade anything if it’s outside the classroom,” he said. “It conveys no information to me anymore, because I really have no way of knowing who or what is creating the content that is getting the grade.”

Now, any work that counts toward mastery must be completed in class. He still issues problem sets every couple of weeks to be completed outside class, but students earn credit for a good-faith effort rather than correctness. Only in-class exams are graded for comprehension.

The shift means much more class time is spent on assessment, and Talbert said it is an imperfect solution. It can make the classroom feel more stressful: even when students are allowed retakes and corrections, test days carry an inherent negative feeling, he said.

MAKING THE LEARNING PROCESS MORE EXPLICIT


Alongside changes in location, both professors have put greater emphasis on how learning happens.

Talbert devotes the first week of class to discussing what it means to learn mathematics.

“We make this big point on the first day of class that anything that you have ever learned that is of any significance was done through repetition, mindful repetition, in a feedback loop with a trusted third party,” he said.

He still uses out-of-class weekly quizzes through GVSU’s learning management system, but he has tweaked them to cover foundational definitions and calculations. These quizzes count for little of the final grade, so a student who completes them with AI gains almost nothing, and the gap in understanding will likely catch up with them on higher-value assessments.

Hummels similarly frames assessment around effort, comprehension and signaling performance. He has grown concerned that traditional algorithm-based assignments risk training students to perform tasks that AI can now automate.

In a course on economic growth and innovation, he brings frontier research into the classroom and asks students to develop hypotheses about unsolved questions. He asks them to explain why scholars frame questions the way they do and what counts as evidence in the discipline.

These processes are important for students to generate new knowledge and embark on their own research, Hummels said. While this process is typically reserved for graduate students, he said the path to knowledge generation needs to be accelerated as entry-level jobs change and demand more than information retrieval and summary.

WORKING WITH AI


In addition to showing students why learning matters, the professors said it is essential to teach students how to use AI tools to augment their learning, just as they will be expected to use AI tools to augment their work upon graduation.

Hummels is working with students in a pilot independent study project to explore research questions using AI chatbots. Students submit full transcripts of their chatbot exchanges, which allows him to see how students’ ideas develop.

He uses AI himself to help analyze those transcripts and generate targeted follow-up questions. In one case, AI suggested Hummels ask about cost-plus versus fixed-price contracts to probe a student’s understanding of political and economic incentives in defense spending. He said this process is efficient and could be scaled beyond a small independent study.

Talbert sees this kind of AI incorporation as his next step.

“I feel like I’ve got a pretty good sort of firewall against AI, but what I haven’t found yet is a really meaningful way to include AI in the classroom process,” he said. “I don't want to just shut it out.”
Abby Sourwine is a staff writer for the Center for Digital Education. She has a bachelor's degree in journalism from the University of Oregon and worked in local news before joining the e.Republic team. She is currently located in San Diego, California.