At the 2025 EDUCAUSE conference, Ashley Dockens, associate provost of digital learning at Lamar University, and Cindy Blackwell, director of academic faculty development at Texas A&M University, warned that higher-education leaders and teachers may be holding students to an unreasonable standard: expecting them to know intuitively when AI use is appropriate and, when it isn’t, to keep a perfect track record of resisting temptation.
“We give them a lot of expectation for doing the right thing, and I appreciate that, but we’ve all failed at something, and they are going to fail,” Dockens said. “And if you can’t fail in a safe space while you’re still at a university and learning, where can you?”
AI ANXIETY
Dockens said students feel uncertain about when AI is allowed, often juggling policies that differ by class or even by assignment. Compounding this uncertainty are AI detection tools, which often mistakenly flag human-written work as machine-generated. Dockens’ own dissertation, for example, was flagged as 98 percent AI-generated despite being written before the technology existed.
While some institutions, including Lamar, have banned AI detection software, more than 16,000 institutions worldwide use detection services from Turnitin, according to the company’s website.
Dockens said AI cheating accusations are distinct from other academic integrity issues a student might face because the stakes are so high right now: Some students have lost their visas and ended up in national news over accusations.
“If you don’t have 100 percent certainty that this student has definitely cheated, are you willing to accept them as collateral damage?” she said.
WHY STUDENTS TURN TO AI
Dockens said that traditional-age college students have not fully developed the prefrontal cortex, the part of the brain responsible for judgment, impulse control and long-term thinking, which makes it difficult for them to fully grasp the repercussions of using AI for schoolwork.
Combine that with academic pressure, outside responsibilities like jobs and inconsistent institutional messaging, and it makes sense that students turn to AI for help, Dockens said. Most misuse, she added, isn’t malicious and could even be viewed as rational.
In using AI, students disconnect their behavior from their long-term identities as learners with justifications like “everybody’s doing it,” “I’m super busy” and “I’m just doing it on one assignment,” she said.
This understanding of student behavior and motivations can help inform AI policy and the consequences for misuse, and Dockens argued for restorative processes rather than punitive ones.
RESTORATIVE RESPONSES
Policy clarity is an important first step, and Dockens and Blackwell recommended a tiered approach. Institutional policies set broad expectations, program-level policies can reflect disciplinary differences, and course- or assignment-specific rules can spell out when and how AI may be used. Dockens said instructors could include students in this process, asking them why they think AI should or should not be allowed.
Policymaking and communication should also include transparency around how instructors are using AI, because students may feel betrayed if teachers disallow AI but use it themselves, as seen in one recent case at Northeastern University.
“If [the instructor] had a conversation to say: ‘This is foundational knowledge. You have to have this first. You can’t use AI yet because you don’t have it yet. I can use it because I have the foundational knowledge, but I’m vetting it and I’m checking it and making sure it’s accurate,’ that might have looked very different,” Dockens said.
Instructors can also take an open-minded approach to AI, adapting their teaching and assessments to help students better understand why they need to actively engage instead of leaning on a chatbot. For example, Blackwell said she likes to let students give themselves feedback on assignments before revealing their grades, helping them reflect on their effort.
Instructors can also make the learning process more visible, rather than assuming AI use equals no learning. This can include asking students to keep a log of their research process, prompts and verification steps. Students can also use this log as a defense against accusations of inappropriate AI use, which may assuage their anxieties, Dockens said.
If instructors and institutions shift their goal to making school a safe place to fail, responses to AI infractions might include AI ethics workshops, redo opportunities for partial credit and structured reflection on what led to misuse, Dockens said.
Ultimately, Dockens said it is important not to get bogged down in the technology but to remember the personal side.
“Because humans are involved, we still need to remember that empathy is needed,” she said.