University of Georgia Advancing AI to Assess Creativity

Researchers at the University of Georgia's Mary Frances Early College of Education are working on an AI system to more accurately rate open-ended responses on creativity assessments for children.


According to a news release on the university's website, to help train this AI, a team led by associate professor of educational psychology Denis Dumas recently conducted a study of when human judges rate kids' creativity differently on assessments. The study, which was funded by the U.S. Department of Education and included collaborators from the University of Denver and the University of North Texas, analyzed more than 10,000 student responses to questions on a 30-minute creativity assessment. The team found judges tended to disagree most when rating responses from younger or male students; responses that were more elaborate or "less original"; "highly original" responses from exceptionally gifted students; responses from Latino students who were English-language learners; and responses from Asian students who took a lot of time.

“Our judges didn’t know who the kids were and did not know their specific demographics,” Dumas said in the news release. “There wasn’t an explicit bias, but something about the way some students responded made their responses harder for our team to rate reliably.”

Noting that creativity assessments tend to be time-consuming to evaluate, Dumas said in an email to Government Technology that the goal of the research is to develop an AI system with less grading bias, making creativity assessments a more accessible and reliable tool for schools. He said his team has been working to develop an automatic scoring system based on a large language model (LLM), which “effectively 'reads' the responses that children write and assigns them a numerical score.”
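To illustrate the general idea of automatically scoring open-ended responses, here is a minimal toy sketch. It is not the research team's actual LLM-based system; it uses a simple "statistical infrequency" heuristic (responses built from rarer words across the response pool score as more original), a classic stand-in for the kind of numerical originality score an automated grader might assign. The function name and scoring rule are illustrative assumptions.

```python
from collections import Counter

def originality_score(response: str, corpus: list[str]) -> float:
    """Toy originality scorer (hypothetical, not the UGA system):
    responses using words that are rare across the pool of all
    responses receive higher scores."""
    # Word frequencies across the whole response pool.
    pool = Counter(w for r in corpus for w in r.lower().split())
    total = sum(pool.values())
    words = response.lower().split()
    if not words:
        return 0.0
    # Average rarity: 1 minus each word's relative frequency in the pool.
    return sum(1 - pool[w] / total for w in words) / len(words)
```

In practice, a trained model would replace this heuristic, but the interface is the same: text in, numerical creativity score out, calibrated against human judges' ratings.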

Dumas said researchers’ near-term goals are “mainly about improving methodology for psychological research.”

“Already I see us making an impact there, with other psychologists using the tools we are making and citing us in the peer-reviewed literature," he wrote. "Longer term, we hope to influence school-based assessment, but it might take some time until we are adopted by a school system."

Dumas said that understanding where rating disagreements come from can help retrain AI grading systems and make assessments more accurate. He added that the research ultimately aims to improve confidence in creativity assessments and allow schools to make better decisions based on their results.

“We are essentially thinking, ‘How can we observe [a certain] psychological attribute in people, and observe it in a way that we can quantify it?’" he wrote. "I think answering that question, especially in the area of creativity, is wonderful work."
Brandon Paykamian is a staff writer for Government Technology. He has a bachelor's degree in journalism from East Tennessee State University and years of experience as a multimedia reporter, mainly focusing on public education and higher ed.