“I don’t think (AI) knows what a border means,” one kid mumbled under his breath as his eyes darted back and forth across the screen.
“This is so wrong, (AI) is so awful, it is so bad,” a girl a few feet away at a different computer chanted. “This AI is not good! It isn’t working!”
The students were playing a game that shows them they are smarter than AI. Developed by UW researchers Aayushi Dangol, Jason Yip and Julie Kientz, the game asks humans to solve a simple puzzle. Once the user solves it, the game prompts them to ask AI to do the same. AI consistently fails, even when the user types in hints on how to solve the puzzle.
“When it comes to AI technologies in general, there is a huge sense of trust that kids have with these devices,” Dangol said. “For a lot of kids, especially elementary and middle schoolers, they almost see AI as magical, that it knows everything.”
While adults who can read and have background knowledge can fact-check AI and catch its mistakes, kids who can't yet read or don't know much about a subject rarely get to see AI be wrong. That's where the puzzle game comes in: it is a visual demonstration of what AI gets wrong.
The researchers developed the game because they saw an opportunity to show kids who couldn't yet read that AI fails. Playing the game lets kids see those failures for themselves and then have conversations about the strengths and limitations of AI.
The students trying out the game this week are at the KidsTeam UW camp, a design lab run by Yip at UW's Information School that partners with kids to test and develop technology.
“Clearly, there is this discrepancy between what (AI) is saying and what (AI) is producing,” Dangol said. “Mistakes like that actually prompt children to go deeper into why AI makes this type of mistake and how it is different from the type of reasoning humans do.”
Solving the puzzle, which anyone can try online, involves abstraction and reasoning — two things humans have but AI models don’t yet.
“I think it is actually important for people to know even at a young age that the machines aren’t smarter than you, they just do different things and different tasks in this way,” Yip said. “You as a kid have different talents and skills that are just different.”
The game is an application of the ARC Puzzle, short for Abstraction and Reasoning Corpus, which was created by AI researcher François Chollet in 2019. He designed it to be easy for humans but hard for machines, and it has served as a benchmark for measuring the capabilities of AI models. The UW researchers made it accessible for kids and added various AI models that can attempt the puzzle, showing children that different models can fail.
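For readers curious what an ARC-style task looks like, here is a minimal illustrative sketch in Python. This is an invented example in the spirit of the benchmark, not one of Chollet's actual puzzles: each task supplies a few input/output grids, and the solver must infer the hidden transformation and apply it to a new grid.

```python
# Minimal sketch of an ARC-style task (illustrative only; not an actual
# puzzle from Chollet's corpus). Numbers stand for colored cells in a grid.
# The hidden rule assumed here: mirror each grid left-to-right.

train_pairs = [
    ([[1, 0, 0],
      [2, 0, 0]],
     [[0, 0, 1],
      [0, 0, 2]]),
    ([[0, 3],
      [4, 0]],
     [[3, 0],
      [0, 4]]),
]

test_input = [[5, 0, 0],
              [0, 6, 0]]

def mirror(grid):
    """Apply the inferred rule: flip each row left-to-right."""
    return [list(reversed(row)) for row in grid]

# A person often spots the mirroring rule from just two examples;
# a model that only pattern-matches on data it has seen before may not.
assert all(mirror(inp) == out for inp, out in train_pairs)
print(mirror(test_input))   # [[0, 0, 5], [0, 6, 0]]
```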
“Humans are excellent at trying to make those really deeper connections that others can’t see, including machines,” Yip said.
Back at KidsTeam UW, Zoe Blumenthal, 10, typed hints into a text box in hopes of helping AI solve a puzzle she could complete fairly easily.
“It feels disappointing and frustrating,” Blumenthal said. “AI is supposed to be this magical computer mind that can do anything, and instead it is this.”
The puzzle game has made her think about the power of her creativity.
“Creativity is something the mind makes up. It doesn’t have to be told information (like AI),” Blumenthal said.
© 2025 The Seattle Times. Distributed by Tribune Content Agency, LLC.