The term “AI agent” has only recently entered mainstream education conversations, gaining traction around 2023 and 2024. According to Nate Ober, senior ed-tech and AI/machine learning leader at Amazon Web Services (AWS), it refers to a system that can use tools, like databases or learning management systems (LMSs), to plan a sequence of steps, take action, observe results and adjust, operating in a loop until a desired goal is completed.
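The plan-act-observe loop Ober describes can be sketched in a few lines of code. The example below is a deliberately toy illustration, not any vendor's actual system: the "tools" are trivial stand-in functions, and the goal is just reaching a target number.

```python
# Toy sketch of an agent loop: plan a step, act with a tool, observe the
# result, and adjust, repeating until the goal is met. The tools and goal
# here are trivial stand-ins for real actions like database queries.

def run_agent(goal, state, tools, max_steps=20):
    trace = []
    for _ in range(max_steps):
        if state >= goal:                  # observe: goal reached, stop
            break
        # plan: pick a tool based on how far the goal still is
        action = "add_small" if goal - state < 5 else "add_big"
        state = tools[action](state)       # act via the chosen tool
        trace.append((action, state))      # record what happened
    return state, trace

tools = {"add_big": lambda s: s + 5, "add_small": lambda s: s + 1}
final, trace = run_agent(goal=12, state=0, tools=tools)
print(final, [a for a, _ in trace])  # 12 ['add_big', 'add_big', 'add_small', 'add_small']
```

The point of the loop structure is the one Ober highlights: each pass re-plans based on what the last action actually produced, rather than executing a fixed script.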
Early pilots struggled with workflows that included more than a few steps, making those implementations look much like conventional generative AI: retrieval-based Q&A and content generation, Ober said in an email to the Center for Digital Education. Now, agents’ capabilities have evolved to allow them to work independently for hours.
“We are in the earliest days,” said Nicole Engelbert, vice president of product strategy for student systems at Oracle. “Take a side eye on what anyone is saying about what’s happening in a pervasive way.”
EASY WINS: ADMINISTRATIVE SYSTEMS
Experts say the strongest easy wins for agentic AI are administrative, where the work is high-volume and repetitive.
At Highline College in Washington state, for example, a financial aid status tracker introduced in 2023 reduced emails, phone calls and in-person visits about application status by 75 percent.
These systems are also appearing inside LMSs. For example, Instructure’s Canvas system now has an agentic tool that follows natural language prompting from instructors, like “grant this student an extension.” Ryan Lufkin, vice president of global academic strategy at Instructure, said this agent will update the assignment due date, generate a reminder for that student to send out a day before the updated due date, and remind the instructor to grade the extended assignment separately.
“It’s possible to actually go in and trigger those things, which would have been individual steps on their own,” he said. “It takes a fraction of the time when you have an agent actually performing those tasks.”
Engelbert said advising is a particularly promising administrative use case, as agentic AI systems can generate optimized student course schedules, test different degree pathways, then route those options to human advisers and schedule follow-up meetings.
TEACHING AND LEARNING RESISTANCE
All of these early use cases focus on the administrative side of university roles. For the teaching and learning side, many experts agree that the conversation is more complicated. While the friction of administrative tasks is widely considered unnecessary, the struggle involved in learning is not. In discussions of AI’s impact on learning, preserving “productive struggle” is often a priority.
“Education is specifically different than your normal institutional tasks,” said Jake Burley, a researcher at the Applied Ethics Center at the University of Massachusetts, Boston. “There’s a strong sense that there’s something personal or powerful about the educational experience.”
The risk of AI co-opting that struggle is shown in cases like the Einstein agent, created by the startup Companion, which integrates directly into Canvas and completes assignments automatically. Experts say the teaching and learning applications of AI invite broader questions of why we educate the way we do, as tools like Einstein are more enticing for students who don’t understand why they should complete assignments themselves.
A few approaches have emerged to try to thread this needle. Instructors can use custom GPTs to create course-specific AI tutors trained on their own learning materials, as one California instructor has done.
The University of Luxembourg adopted AWS’ framework that uses AI agents across the instructional cycle, from lecture preparation to real-time transcription and translation during lectures and finally to post-lecture analysis and feedback.
MOVING FORWARD
As institutions consider where to implement agents, Ober said to consider reliability. While agentic AI is tackling longer workflows, the margin of error compounds with each step.
“What’s the cost of being wrong 10 percent of the time?” Ober wrote. “If it’s ‘a student gets a slightly suboptimal course suggestion,’ agent is fine. If it’s ‘a student gets the wrong financial aid amount,’ no.”
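The compounding Ober describes is easy to quantify with a back-of-the-envelope calculation (the specific numbers below are illustrative, not from the article): an agent that is 95 percent reliable at each individual step completes a full 10-step workflow correctly only about 60 percent of the time.

```python
# Illustrative arithmetic: per-step reliability compounds multiplicatively
# across a multi-step workflow, assuming each step must succeed and steps
# fail independently.

def workflow_success_rate(per_step_reliability, steps):
    return per_step_reliability ** steps

rate = workflow_success_rate(0.95, 10)
print(round(rate, 2))  # prints 0.6
```

This is why the length of a workflow matters as much as the quality of the model running it: small per-step error rates turn into large end-to-end ones.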
Additionally, Lufkin said that some newer AI vendors lack experience with education-specific regulations like the Family Educational Rights and Privacy Act and the Children's Online Privacy Protection Act, which can be especially problematic in the agentic AI space.
“We’ve got to be very selective in who we partner with,” he said.
To this end, Ober said that audits and the ability to track each step of an agent’s decision-making process will be key.
Some experts predict the fastest growth in agentic AI for education will take place in areas where the value is clear, like advising and administrative workflows. Farther out, Burley sees a future in which agents act as research collaborators and teaching assistants.
As agentic AI expands, experts say training staff on responsible use will be key and should be incorporated into professional development efforts around AI.
“A large AI investment returns nothing if faculty and staff can’t use it confidently,” Ober wrote.