
Differing Rules on AI Use Confuse Texas College Students, Staff

Colleges and universities are addressing AI use with a patchwork of policies, with many professors setting their own rules, leaving both students and instructors unsure where appropriate AI use ends and cheating begins.

Illustration of a person examining "AI" through a magnifying glass, alongside education symbols including a graduation cap, a paper document and a robot. (Adobe Stock)
(TNS) — University of Houston freshman Ava Romero doesn't use artificial intelligence much for classwork — but when she does, her professors call the shots.

In English and government, she must stick to approved tools and stay within the school's 20 percent threshold, measured by special AI detection software. But her history professor bans AI entirely. There, Romero can't touch it — or she'll risk violating UH's academic honesty policy.

Across college campuses from UH to Rice to Texas A&M, those shifting rules show how AI is already upending teaching and learning in higher education.

How widespread AI use had become really hit Romero one day in the back of history class, when she saw ChatGPT beaming from laptops around her.

"Even though I feel like it is our future, I don't really trust it," Romero said. "I like using my own brain."

The AI boom has turned college classrooms into a patchwork of policies, with many professors setting their own rules for everything from multiple-choice homework to essays — leaving both students and instructors unsure where appropriate AI use ends and cheating begins.

"It can almost create a culture of paranoia for students who are living in constant fear of being called out for possible AI use, when they're trying their best not to," said Lauren Zentz, who chairs the UH English department and reviews academic integrity cases. "It's just a little bit of a minefield."

Ten students spoke with the Houston Chronicle about their AI habits, with most saying they still value generating and writing their own ideas. Professors said many students go out of their way to follow the rules.

Still, some students cheat — and professors say those cases have made things harder for everyone.

"AI has become a temptation for some students," said Lois Parkinson Zamora, a UH English professor. "I have this extra thing I have to look into."

Policies vary widely. On one end of the spectrum: teachers who've embraced the technology, weaving it into assignments so students can learn how to use the tools responsibly. Others allow AI for homework or essays, as long as students show proof of their prompts. Some say AI is only OK for grammar and spelling edits.

And some, like UH history professor Robert Zaretsky, have gone old-school with handwritten essays in blue books.

"You can't find your voice as a writer through AI," Zaretsky said.

The result can be confusing for students, though that may be unavoidable. Higher-ed guidelines stress the importance of faculty input in AI policies to preserve academic freedom. And administrators in Houston doubt that blanket policies would be effective.

"The reason we leave it up to the faculty is because it's really for them to know what are the learning outcomes," Rice University Provost Amy Dittmar said. "This is a tool that, if used properly, may help you towards those learning outcomes, if not used properly, could hinder you towards those learning outcomes."

STUDENTS TEST AI LIMITS


Sophomore Andre Orta says he's still figuring out how far to trust AI tools. For calculus, it's helpful to organize complicated material. But not so much in physics, he said.

It's a big experiment as he works toward his degree.

If he were to use generative AI just to pop out answers without learning the material, Orta says it would catch up with him on tests, which seem to increasingly require students to show proof of their knowledge.

"They want to make sure that everybody gets it," he said.

At Rice, associate teaching professor Risa Myers lets students use generative AI in homework.

There are some rules: Students can't use computer science methods that Myers hasn't taught, and they must provide her with their AI prompts and a four-sentence reflection each time they use the technology.

Myers said it's a tradeoff for students: homework counts for less, so now there are more quizzes.

"It makes them keep up with the class," Myers said.

Professors like Myers say students might be better prepared if they learn to harness AI, which is becoming ingrained in the workforce and Americans' daily lives. Some say it's helpful in certain academic settings, like speeding up the time-consuming process of writing code or helping students who aren't native English speakers edit their work.

But Myers has also noticed that as students rely more on AI to code, they become less proficient at coding themselves.

While several STEM professors worried about AI potentially undercutting foundational skills, humanities professors told the Chronicle they have a more existential concern: What does it mean to learn?

"People are worried that (AI) will replace writing — and more specifically, what does it mean to replace writing?" said Zentz with UH's English department. "It means reducing opportunities for students to learn how to think and reason in structured and democratic ways."

OVER-DETECTING AI USE


Zaretsky said his return to blue books happened after a disastrous experience teaching one asynchronous course called "Film and Existentialism."

He found that students who couldn't pass the multiple-choice quizzes would submit "thoughtful, seamless" papers.

"I knew it simply wasn't them," he said. "I learned two lessons: never to try asking for papers written again, and the second is never to teach an asynchronous class."

Zaretsky said he might have continued assigning essays outside the classroom if he could trust the AI detector Turnitin. UH has a contract with the software, which automatically checks students' work for plagiarism when they submit written assignments in Canvas, a learning management system used by many colleges.

He and several other professors said that, in multiple instances, Turnitin over-detected students' AI use in their classes, leading to allegations of academic dishonesty that cost students time and peace of mind and could result in a failing grade or even expulsion.

Zentz said in one case, a student wrote a paper with a tutor at UH's writing center, submitted it and then received a warning that the work was 100 percent AI.

Zentz said many students have grown so wary of false cheating accusations that they send her videos of their editing history as they write their papers.

"Your own personal work can get flagged as AI ... I'm having it happen to me too," UH senior Juan Lopez said. "It's kind of scary in that regard."

Zentz said that since ChatGPT arrived in late 2022, she has reviewed more accusations, but she's not sure whether they have resulted in more provable cases. This semester, she has reviewed 10 cases and determined that two involved AI, and only one could be considered cheating.

Some Texas universities have chosen to avoid the problems that might arise from AI detection software. UT-Austin doesn't have a contract with any such vendor, partly because the tools haven't been proven effective, according to Julie Schell, UT-Austin's assistant vice provost of academic technology.

"It's not always clear what's ours and what's produced through the AI," Schell said. "We need to be more thoughtful and intentional about how we work with students to help them understand how to be transparent and intentional in their work."

At Rice, Dittmar said that AI checkers are allowed but aren't enough to prove plagiarism or academic dishonesty on their own. She said Rice has also seen increases in honor code violations in recent years, but she declined to attribute them to AI, citing enrollment increases since 2023.

'LEFT BEHIND'


Some students say it's in their best interest to master AI, while others resist it for environmental reasons or because they feel their work is more original without the tool.

UH junior Diego Gonzalez said his early attempts to use AI to cut corners were unhelpful. He's learned better ways to leverage it, like uploading professors' lecture notes and slides to create quizzes, flash cards or even podcasts.

Shalom McNeil, a Texas Southern University senior, said he's noticed a difference in his productivity in the pre- and post-ChatGPT era. It's not useful when he's shooting and writing news packages for his broadcast journalism courses. But he does use generative AI to organize his notes or break down complicated math problems.

"If you demonize it, you're getting left behind," he said.

Several students said it's tempting to use AI in ways professors might not approve of — especially in classes outside their majors or during late-night study crunches. But they also agreed that it isn't worth the risk.

Meanwhile, several professors have become adept at detecting AI themselves, spotting its tone and patterns. And they've gotten crafty to catch AI cheaters.

Gonzalez said he's heard of professors hiding instructions in invisible text, such as white-on-white font, in their assignments, so students who generate their essays with AI end up with nonsense words in their copy. Some of Gonzalez's instructors have also stopped uploading professionally developed instructional materials to Canvas, drawing problems by hand instead so AI can't read them.

"Professors are getting smarter," Gonzalez said.

© 2025 the Houston Chronicle. Distributed by Tribune Content Agency, LLC.