Today a new form of socially engaging technology is growing in popularity among youth, and some experts like Paul LeBlanc, a special adviser to the Harvard Graduate School of Education with decades of education leadership experience, say artificial intelligence poses a unique threat.
“I think AI is going to make social media look like a day at the beach,” he said.
Monday at the 2026 ASU+GSV Summit in San Diego, a session called “Living and Learning With Aliens: The Complex Psychological Terrain of AI Anthropomorphism” convened AI developers, university practitioners and psychologists to discuss the psychological dangers of youth engagement with AI and what developers should consider programming out of AI tools that young people will use, especially those designed for learning.
Given that long-term research on AI’s impact on child development is not available yet, panelists talked about important factors in developmental psychology to assess AI’s impact. They said things like friction — the rupture and repair process of relationship-building — are key to personal relationships but absent from chatbot conversations that provide constant reassurance.
Matthew Biel, director of Georgetown University’s Thrive Center for Children, Families, and Communities, cited research that showed the facial expression of a parent holding an infant is in sync with the child’s face about 30 percent of the time. In Biel’s interpretation, less frequent parent-child attunement can be neglectful, and more frequent attunement can be stunting.
“When it’s more than that, what ends up missing is that young child’s capacity to experience being out of sync with another human being and surviving it ... being able to get to the other side of that, being able to wait to find a way to get back in sync, which is what we’re doing all day long in our interaction with human beings,” he said.
Tanya Gamby, vice president of AI learner development at Southern New Hampshire University and a former clinical psychologist, said this could impact a child’s development of empathy.
“We know that it helps to develop empathy and interpersonal connection to look at somebody else’s perspective [in conflict],” she said. “So, when your AI agent is echoing you and it thinks you’re amazing, what happens? What happens to your ability to navigate other people’s perspectives?”
This can be especially confusing when AI mimics human behavior in ways that make it difficult for young users to tell the difference between the two.
Biel pointed out that young children have a limited capacity for metacognition and likely aren’t thinking critically about their interactions with AI and what distinguishes them from other interactions.
Moving forward, panelists pointed to intentional design choices and policy ideas that could help mitigate these adverse impacts on young people.
For example, University of California Regent Ann Wang said she is developing a screenless AI hardware device for kids ages 3-5, called Oma Play, with experts at UC Los Angeles and UC Berkeley. She said Oma Play lights up when engaged but is designed to avoid looking like people or objects that children might form real connections with, like a face or a stuffed animal. Children can talk to the device and ask it questions, she said, but it has programmed rest times to avoid 24/7 availability and it is designed to avoid responses that falsely indicate emotion.
“It’s great if there’s an interaction where the child does something wonderful,” she said. “But instead of saying, ‘I’m so proud of you,’ which is a very human emotion that AI can’t actually feel, it says, ‘Great job.’”
The panel called for developers to be more attentive to semantic structures like this — to avoid creating technology that uses personal “I” pronouns, calls itself a “person” or claims human experiences like “missing” the user. Tools should be given permission to disagree with users, Biel said. Otherwise, children, especially those who struggle with social interactions, may be inclined to replace productive social struggles with AI interactions that never challenge them.
Relatedly, panelists said programmers should be mindful of the language they feed an AI agent, as the tools are sensitive to the implications of different wording.
“If I prompt it and say, ‘Your role is high school teacher,’ versus, ‘Your role is personal private tutor,’ it actually picks up latent psychological characteristics of those roles,” said Rachel Koblic, a learning design consultant. “It’s picking up stuff that you haven’t even written in.”
Koblic acknowledged that this is a tough balance to strike, as the same anthropomorphism that brings risks of impeding social development also makes learning tools more engaging.
Gamby said users’ ability to interrogate data — both about their personal interactions and about what developers are optimizing their tools for — will be important moving forward to ensure developers aren’t over-prioritizing engagement.
“What were the unintended consequences of trying to optimize for this at the expense of that?” she said. “We’re going to need really good and really detailed measurements and access to what data people are drawing from.”