
Opinion: Framing Academic Integrity for the Age of AI

Educators should welcome new conversations about academic integrity, and the chance to teach the concept as a positive, desirable principle to strive toward, rather than a litany of rules with negative consequences.

Watching ChatGPT work is mesmerizing. Its cursor plows forward without pauses or backspaces as it produces an essay that would get a decent grade in many college courses.

With good reason, the debate surrounding the application of generative AI in higher education has centered on addressing the risk that students use it to cheat. Generative AI makes it all too easy to take shortcuts that undermine learning.

This renewed concern over academic integrity is a welcome development. But the way the conversation is typically framed, with academic integrity standing as a bulwark against classroom chaos, is much too limited. Too often, academic integrity policies are built around “thou shalt nots”: thou shalt not cheat, fabricate, plagiarize or collaborate without permission. This approach fixates on a litany of rules rather than on instruction that teaches students to do their work with honesty and integrity.

The AI conversation should be informed instead by a much deeper and more critical understanding of academic integrity. The impulse to portray academic integrity as something to be defended gives too much power to the language of dishonesty. Academic integrity should not be about closing doors to an unwanted intruder to protect what’s inside. Instead, academic integrity — particularly in the age of generative AI — must be seen as something we consistently work toward. To act with academic integrity means asking questions, trusting your instincts and taking risks. It requires bravery.

Academic integrity as defined by the International Center for Academic Integrity is a commitment to six core values — honesty, trust, fairness, respect, responsibility and courage — that enable academic communities to translate ideals into action. In the classroom, academic integrity often revolves around attribution — having students do their own work and not claim the work of others as their own.

Long before ChatGPT, many instructors erected guardrails around academic integrity to prevent students from co-opting someone else’s thoughts, and punished students who crossed a line. Today, instructors are deploying AI tools to sniff out AI-written text, but this technology has proven to be unreliable and rife with bias. The measures erected to defend academic integrity have built an adversarial relationship between faculty and students. We find no fault with instructors not wanting students to claim someone else’s ideas as their own. The task at hand, then, is to create a classroom culture that incentivizes academic integrity over dishonesty.

College campuses should be spaces for creating and exchanging ideas for the public good. Integrity describes the character of instructors and students and the impact of knowledge on real lives. Academic integrity frames how we practice this goal within our communities. By using academic integrity as a teaching opportunity, we can support students rather than punish them, and promote not an adversarial relationship between teachers and students but a collaborative one.

Using academic integrity in this new AI age starts with defining critical AI literacies. Faculty should help their students understand what generative AI is and how, why and when (or when not) to use it. Students should learn to be skeptical and understand the potential harms associated with AI. In the tradition of critical pedagogy — a way of teaching that encourages students to think critically and question the information they receive — students should learn how issues of democracy, social justice and power are intertwined with AI’s use and deployment. Instead of interrogating students about their use of AI, let’s interrogate AI and its creators.

These new forms of technical literacy should reflect the six academic integrity values. Because these values are not neutral, faculty also should make space for the racial, ethnic and community values students bring with them to class. Instructors should identify these values at the start of each term and use them as a rubric for coursework. If a student wishes to use AI to help write a paper, they should be honest about it and take responsibility for it according to established classroom values. The engaged pedagogy that bell hooks describes in her book Teaching to Transgress, which bases decisions on underlying values and objectives rather than a strict set of rules, can guide awkward conversations with students about inappropriate generative AI use.

Faculty should create a classroom environment that facilitates critical inquiry. Students sometimes violate norms of academic integrity when they lack the time to complete an assignment, do not yet fully understand the material, or fear failure. It’s OK to fail; success often comes from learning from failure. Instructors should design authentic, meaningful assignments with failure and flexibility built in.

When classroom activities are done with generative AI tools, students should provide evidence for their work and give proper credit to whoever owns the work. Engaging with others honestly and equitably and applying rules and policies consistently demonstrates fairness. Standing up for what you believe in and being willing to take risks — and perhaps fail — is evidence of courage.

Whether or not students use generative AI, the end should be the same: Faculty should produce scholars who create new knowledge. The point of scholarship is for scholars to use the available tools to produce new ideas and let those ideas shine through. Instructors who actively embrace academic integrity values can deploy AI in a human-centered, collaborative way, throwing open the doors to new knowledge, new scholarship and new ways of solving problems that are honest, true and fair.

Antonio Byrd is an assistant professor of English at the University of Missouri-Kansas City. Sean Michael Morris is vice president of academics at the online learning company Course Hero.