The report, titled “Building a Human Resilience Infrastructure for the AI Age,” comes from Elon University’s Imagining the Digital Future Center and surveyed 386 global experts, including academics, technologists and business leaders.
About 82 percent said AI will play a significantly larger role in shaping people’s lives within the next 10 years.
“The existential danger to people may not come from AI becoming too intelligent, but from humans becoming dangerously reliant on systems they do not understand,” wrote Roger Spitz, founder of the Disruptive Futures Institute in San Francisco.
He called the condition “superstupidity” — and warned the 2006 comedy film Idiocracy “is prophetic.”
Experts are calling for an institutions-first response, arguing that governments, businesses, educators and communities must act together now rather than leaving individuals to adapt on their own.
Across more than 160 essays, researchers identified four key themes businesses and individuals should be aware of.
LOSS OF HUMAN AGENCY
Report co-author Janna Anderson warned that accelerating AI adoption will lead to a slow, cumulative erosion of human agency — a drift that can look like progress but steadily weakens human judgment, accountability and shared truth.
Anderson wrote that experts fear “accelerated AI use will lead to a cumulative reallocation of human agency, until people and institutions find it harder to question, contest or even notice what has changed.”
In other words, as people increasingly defer decisions to AI systems, they may gradually lose the habit — and ability — to think critically for themselves.
WORK, IDENTITY AND AI
As AI displaces workers, experts warn the damage won’t be purely financial — it will be psychological.
The report calls on businesses to prioritize human augmentation over replacement and create deliberate “human-only zones” where AI is intentionally off-limits.
Alf Rehn, professor of innovation and design management at the University of Southern Denmark, cautioned that “the most dangerous kind of resilience is the kind that looks like stability but is actually surrender.”
THE COLLAPSE OF SHARED REALITY
Alison Poltock, co-founder of AI Commons UK, described a moment of “epistemic shift” in which the frameworks shaping identity and social orientation are changing with no shared civic conversation to process it.
“We are operating on outdated institutional architecture,” she wrote, “strapping jetpacks to systems built for another age.”
Stephan Adelson, president of Adelson Consulting Services, predicted the consequences could extend to mental health.
He warned that “AI psychosis and other forms of mental illness will arise” as the erosion of a stable foundational reality creates new vulnerabilities — and that entirely new approaches to diagnosing and treating mental illness will be needed as a result.
CHANGING SOCIAL LIFE
Paul Saffo, a prominent Silicon Valley forecaster, offered one of the report’s starker predictions: AI will eliminate solitude.
The temptation to interact with AI will prove so powerful, he wrote, that people will choose never to be alone — and realize too late what they have lost.
Salman Khatani, manager of Pakistan’s IMAGINE Institute of Futures Studies, put the stakes plainly: “The window for proactive intervention is now — we have perhaps five to 10 years to establish new resilience-building practices and norms before AI’s role becomes too entrenched to reshape.”
©2026 Advance Local Media LLC. Distributed by Tribune Content Agency, LLC.