As they explore practical uses for AI, some educators are turning to AI teaching assistants to support their pedagogy. With AI assistants and tools becoming more common, some educators say, the real test will be whether schools can harness their efficiency and creativity without losing sight of what makes teaching and learning human.
WHAT ARE AI TEACHING ASSISTANTS?
In contrast with student-facing chatbots, AI teaching assistants like MagicSchool and Khan Academy’s Khanmigo are designed to help educators with daily professional needs such as planning, grading, communication and administrative tasks. They're marketed specifically for classroom use, promising to save teachers time while improving student outcomes.
They are also becoming increasingly common. A report published earlier this year by Gallup and the Walton Family Foundation found that 60 percent of teachers used an AI tool for their work during the 2024-2025 school year, with high school and early-career teachers especially likely to use them. And earlier this month, a Stanford University study of 9,000 U.S. teachers using SchoolAI found that the more teachers use the platform, the more they gravitate toward teacher-facing functions for tasks like lesson planning and grading.
But as educators experiment with these tools, a risk assessment published this month by the nonprofit Common Sense Media, which typically reviews the suitability of media and technology for children, urges caution and clear guardrails to ensure safety and equity for all students.
“AI teacher assistants can be powerful productivity assistants when used with proper oversight and built on high-quality curricula,” the risk assessment said. “But they require experienced educators to evaluate outputs and clear district policies to prevent them from becoming 'invisible influencers' that undermine learning quality.”
POTENTIAL RISKS
While AI teaching assistants can streamline teachers’ workflows, some experts caution against overreliance on the technology. Common Sense Media's risk assessment describes AI as an “invisible influencer” that risks introducing bias, one-sided perspectives or misinformation that can subtly shape teaching and learning.
“When AI generates lesson content, discussion questions, or feedback, it is shaping young minds in ways that aren’t always obvious,” the assessment said. “[M]any tools have improved in how they address obvious harmful stereotypes and blatant misinformation, but subtle and less obvious stereotypes and biases remain.”
According to Common Sense Media, sometimes AI's promise of personalized educational content falls flat; effective differentiation, the assessment said, requires deep knowledge of each student, which AI fundamentally lacks.
“Effective personalization requires understanding how students learn, what they've mastered, and what scaffolding they need,” the assessment said. “Without this context from teachers, AI-generated materials could leave some students without access to grade-level content or provide work that's inappropriately challenging.”
Common Sense’s assessment also warns against high-stakes applications, such as drafting individualized education programs (IEPs) or behavior plans, where professional-looking but inaccurate outputs could cause legal or educational harm. For example, while AI teaching assistants can support educators with tasks like note-taking or translation during an IEP meeting, AI-generating the official documents themselves carries a high risk of legal and ethical problems.
Additionally, Common Sense Media notes that AI teaching assistants can pose risks for novice educators who lack the experience to use the tools appropriately.
“Experienced teachers can typically spot when content doesn't align with learning progressions, contains subtle biases, or misses important nuances,” the assessment said. “Novice teachers are more likely to rely too heavily on AI-generated content without the expertise to catch these issues.”
The Gallup/Walton Family Foundation research also showed that higher-poverty schools were less likely to receive district-provided guidance on generative AI tools like teaching assistants, raising concerns about equity.
HOW TEACHERS APPROACH USING AI
When seventh grade English teacher Matthew Fuchs first learned about ChatGPT nearly three years ago, he had the same reaction as many educators and district leaders: alarm.
“I was like, 'oh my gosh,'” Fuchs, now entering his 16th year of teaching in Pennsylvania’s Trinity Area School District, recalled. “It’s going to change everything. It’s going to ruin education.”
Yet after familiarizing himself with the tool — asking it to write a short story in the voice of Dr. Seuss, or an essay about turtles as though it were a second grader — his perspective shifted.
“Eventually I came to the realization that this isn’t going away,” he said. “One of my philosophies as a teacher is to prepare my students for the future. But I don’t see this [AI] not being a part of their future now.”
From there, Fuchs decided it was critical to incorporate ChatGPT and other AI tools into his pedagogy.
Last year, Fuchs piloted an AI-assisted grammar unit. He and a fellow seventh grade English teacher typically create content for three levels of learners, but Fuchs wanted to use AI to provide extra guidance.
He said that, crucially, students had the choice to opt into the AI-assisted lesson. Out of 110 students, 11 signed up — but only six stuck with it, and notably, all six were girls, he said.
Fuchs compared the exploration process he and his colleagues are going through with learning how to drive.
“Cars, for example — great technology, very useful,” he said. “You don't let a 2-year-old or a second grader get behind the wheel, not only because they would have difficulty reaching the pedals, but they're not developmentally ready for it.
“We should introduce [AI] to younger kids. We should teach older ones how to use it ethically,” Fuchs continued. “By the time they're ready to move on to college, to technical school, to whatever their future holds for them — maybe right into the workforce where they can very well be thrust into a role where they're dealing with AI from the jump — they’ll be prepared.”
But AI does not necessarily need to be embedded into curricula in order to be useful to educators.
Eoin Gronningsater, a former middle school math specialist at a Harlem charter school in New York City, said he primarily uses AI on the back end to automate arduous tasks and streamline his workload.
For example, Gronningsater said he and his colleagues use AI to analyze both structured and unstructured data and generate ideas for lesson plans and activities.
“When I needed to differentiate problem sets to target a specific skill for kids, I could use AI to generate those problem sets much more quickly and make them align exactly with the content needs of the student I was working with,” he said.
Both Fuchs and Gronningsater emphasized a common refrain in AI guidance for educators: the technology is meant to augment, not replace, the human teacher, and it is essential to learn how to use the tools properly before integrating them into systems and curricula.
“Keep yourself in the loop. You are the human in the loop. You need to be fully involved. You are the student’s teacher,” Fuchs said. “It should make your academic offerings better. Without cheapening the process, cheapening the experience for students, for all that education is and can and should be, use it for yourself as well. Explore, play with things.”