Published this week, the report found that while teachers and students are embracing AI tools more than ever before, their deployment in schools is accompanied by rising concerns about privacy, academic integrity and social well-being. It also outlines four main risks that grow more likely as AI use increases: data breaches or ransomware attacks, tech-enabled bullying and sexual harassment, systems that fail to function as intended, and troubling interactions between students and companion chatbots.
Elizabeth Laird, CDT’s director of equity in civic technology, said the goal of the 2025 report was to determine the frequency and ways in which AI is being used, identify verifiable risks associated with its use, and raise awareness so AI tools do not inadvertently harm students.
“There’s a lot of nuance. AI is not a monolith,” Laird said. “I hope school leaders can use some of our research as a starting point to, at a minimum, even break down what are the possible range of topics that AI literacy can cover.”
SCOPE OF USE
According to the report — which surveyed 1,030 high school students, 806 middle and high school teachers, and 1,018 middle and high school parents between June and August 2025 — 85 percent of teachers and 86 percent of students said they used AI during the 2024-2025 school year, with uses ranging from lesson planning and grading to tutoring and personal conversations. Laird said this data shows the highest levels of AI usage among students and teachers since the organization began tracking it in 2020.
EFFECTS ON TEACHING AND LEARNING
The report also highlights emerging tensions over what constitutes quality instruction in the age of AI.
Eighty percent of teachers said AI enabled more personalized instruction for diverse learners, and 76 percent said it gave them more time to work directly with students. Yet half of parents and students question whether teachers who use AI are really “doing their job.”
Teachers voiced concerns of their own: many said AI made it harder to verify whether student work was original, and they worried it could erode students’ critical thinking skills.
Nearly 75 percent of educators surveyed worry that AI use creates distractions or complicates grading and assessment.
IMPACT ON RELATIONSHIPS
The technology’s impact extends beyond academics: CDT found evidence that AI tools are influencing how students relate to teachers, peers and family members.
“Along with increased uses of AI in schools, you’re seeing that affect real-life relationships,” Laird said, referencing the report’s findings: 56 percent of students said they feel less connected to their teachers when using AI, yet 52 percent said they prefer working with AI over a teacher.
According to the report, students increasingly use AI for emotional or social support, with 42 percent saying they’ve used it for mental health support and nearly 1 in 5 saying they used it to form a romantic relationship.
The report also links AI use to safety risks when safeguards are absent: “The more that teachers and students report that their school uses AI, the more likely they are to report having heard of a deepfake and/or deepfake non-consensual intimate imagery (NCII) that depicts someone associated with their school in the last school year,” the report found.
LAGGING AI LITERACY
Despite the widespread use of AI, CDT found that formal training and guidance have not kept pace.
Fewer than half of teachers and students surveyed said they had received any AI-related training or information from their school. Among those who did, most found it helpful — but few said it covered how to respond to biased or inaccurate AI outputs, or how to address potential harm to students’ well-being.
“School policies are trending toward permitting its use and moving away from banning it,” Laird said. “And you have to couple that, which is why we call the report 'Hand in Hand.' These things need to go together, that as you’re promoting the benefits of it, you’re also equipping people with how to use it responsibly.”
Only about 1 in 5 teachers and students reported receiving instruction on AI risks such as bias, misinformation or overreliance. The report also found a disconnect among groups: Teachers prioritized learning how to detect AI-generated work, while students and parents wanted more information about privacy, fairness and how to use AI responsibly.
Moving forward, Laird recommended that schools invest in AI literacy to avert harm.
“Those who did get [training] found it to be really helpful, and I think it speaks to how the adoption of this technology has well outpaced the guidance and information provided by schools,” Laird said. “But when they get it, you know they are really hungry and demanding more information.”