San Antonio Professors Launch Research, Workshops on ChatGPT

Like most schools, the University of Texas at San Antonio has yet to clearly define how students can use AI chatbots that can answer essay prompts and math problems, but professors hope the strategy isn't a simple ban.

Shutterstock
(TNS) — Faculty in the mechanical engineering department at the University of Texas at San Antonio have been chewing on a new discussion topic: ChatGPT, an app that can attempt to solve math problems and imitate conversational English to explain concepts and write simple yet often nuanced sentences.

Students might already be using the increasingly popular chatbot to complete assignments, but no campuswide policy defines how that might be considered cheating. Some instructors are looking for ways they could introduce the technology in their classes as an educational tool, said Chris Combs, an assistant professor who joined the conversations.

"As academics, we are obligated to make sure students are trained in the tools of the modern age as best as possible. It's out there and people are going to use it," Combs said in an interview this week. "In the distant future, we would look back on that and say, 'It's like looking at a calculator like it's cheating in math.'"

Professors at other San Antonio-area institutions of higher learning — also working without formal policies on the use of AI text and image generators — have launched their own research to familiarize themselves with ChatGPT, knowing it is gaining traction with their students.

Some are putting together workshops open to all faculty. Some are having department-focused meetings on how to respond. Amid the obvious concern over how students will choose to employ the app, some see the technology as an opportunity to teach in a different way.

"Understandably, there's a little bit of panic happening because of ChatGPT. But I think that perspective limits us in what we can do with our students," said Scott Gage, who directs the First Year Composition Program at Texas A&M University-San Antonio.

"I come to it from the perspective of, 'OK, so we have this new technology, let's learn about it. What can it do? What can't it do? And how do we work, not against the technology, but how do we work with the technology?'" Gage added.

Across the United States, teachers and administrators are grappling with how to handle the widening use of AI-based programs in and out of the classroom.

Some school systems have cut off access to ChatGPT over worries about cheating and are weighing what the easy availability of advanced tech means for learning. New York City public schools this month blocked ChatGPT on school computers and networks. Schools in Los Angeles, Seattle and Baltimore have restricted access.

But schools in San Antonio and across Texas have not yet banned ChatGPT or defined how students can use AI, leaving professors, for now, to adapt their teaching methods to the most advanced tech available to the common student.

"We clearly have policies around academic integrity and to hold students accountable for any types of academic dishonesty," said Melissa Vito, vice provost for academic innovation at UTSA. "We don't have a specific policy addressing AI and ChatGPT. Whether we will in a year or two years, or six months, is yet to be determined."

A UTSA professor said his colleagues suspect students used ChatGPT during final exams in December, though none have been able to prove instances of plagiarism. Vito said she hasn't yet received reports of potential cheating but "wasn't shocked" by the idea that students are using the app to complete their coursework.

For now, Vito doesn't want to rush a policy and supports faculty attempts to learn about ChatGPT and look at ways to use it.

UTSA this week launched a website featuring ChatGPT-specific "instructional strategies," with recorded presentations and workshops, guides and articles. The university is also bringing in tech experts to speak on ChatGPT and developing a "faculty learning community" of local professors to gauge the effects of AI apps in classrooms.

GOOD BUT STILL LIMITED

ChatGPT, released in November by the artificial intelligence lab OpenAI, is a large language model trained on vast amounts of text pulled from the Internet, which it uses to generate responses to user prompts. The company has been fine-tuning the app's ability to predict words in sentences and find patterns.

While a growing number of users have prompted ChatGPT to write poetry, fan fiction, raps and computer code, researchers found that the app can just as easily generate propaganda and disinformation.

In recent interviews, area professors said the app performs well when fed simple prompts. It can generate text on the history of San Antonio, for example, but it struggles when asked detailed technical questions or prompted to provide opinions about academic topics.

"It's sort of like Wikipedia," Combs said, referring to the free, Internet-based encyclopedia. "It's usually right, but be wary — you need to do some self-evaluation. Sometimes it's wrong."

For all its limitations, ChatGPT is creating a stir among academics.

University of Minnesota law professors recently published a study showing that the app could earn passing grades on graduate-level exams. A professor at the University of Pennsylvania's Wharton School found that ChatGPT passed a business management exam there. The researchers were impressed but noted that the app struggled with advanced questions and that its grades were in the B- to C+ range.

Professors in San Antonio said there's a race to understand ChatGPT and similar AI-fueled apps — because they're going to improve. Businesses are pouring money into San Francisco-based OpenAI, which began as a nonprofit research company in 2015.

Last month, Microsoft said it was making a "multiyear, multibillion-dollar investment" in the company and its tools.

IN THE CLASSROOM

Ronni Gura Sadovsky, assistant professor at Trinity University's philosophy department, has been playing with ChatGPT, asking it questions she would normally ask her students for writing assignments.

Like other professors, she was impressed — up to a point.

"Although ChatGPT was not doing a great job at getting the right answer, it was doing a great job of demonstrating that it would give it the 'old college try,'" Sadovsky said. "It does a very good job at using the terminology, structuring an essay according to a formula that works very well for short, college essay-type writing."

Two things immediately came to mind for Sadovsky. First, she might be able to use these AI-generated essays as a lesson on writing structure for her undergraduate students. Second, there's an obvious concern that they might not learn anything when relying on the technology, she said.

Sadovsky and her colleagues are putting together a workshop through The Collaborative for Learning and Teaching at Trinity, where any faculty member can find out more about these tools.

"What competencies do we worry our students will miss out on if they use ChatGPT to complete their work?" the workshop description asks.

"In a world where ChatGPT is available, how can we find a different route to build these competencies? And if we're feeling optimistic, what competencies might we help them build by incorporating generative chatbots like ChatGPT into our teaching?"

Abe Gibson, an assistant professor of history at UTSA, has introduced earlier text-generating programs to his students. Last semester, he tasked students in his History of Technology course with experimenting with GPT-3, an earlier, less polished OpenAI text generator.

Now, with many of his students aspiring to become teachers, Gibson said, there's an immediate need to think through the balancing act of using ChatGPT in the classroom. This semester, he's using the app in a master's degree course called Historical Methods.

"It's very important that they know about synthetic media and text generators," he said. "Is it a harmless, potentially good accelerant? Or is it an insidious tool for misinformation, or something in between? That's what we'll try to figure out."

In the rapidly accelerating AI sphere, professors like Gibson said they're just trying to keep up with tech advances — and they know more are coming. Last month, Sam Altman, the CEO of OpenAI, told StrictlyVC, a tech newsletter, that the company is planning to release the next version of its language model, called GPT-4.

"We need to meet this AI challenge head on," Gibson said. "We need to demystify it so that we know exactly what we're dealing with and what it can accomplish and what is the best, most responsible and ethical way to use this new, emerging technology."

HOW TO POLICE? AND WHEN?

Some professors said they have plenty of experience getting in front of tech that can be used to cheat.

For years, UTSA's Combs has plugged exam questions into the website of the tech company Chegg to see whether its database of 46 million textbook and exam problems can provide the answers. If it can, he changes his questions.

By comparison, he said, ChatGPT "is like a calculator which struggles to make things personal and give opinions." Combs believes he can craft questions to beat the app.

"It's out in the wild now," he said, adding that it's the responsibility of professors to fine-tune their assignments to challenge the app. "If a student can just use ChatGPT to do the assignment, maybe it's not an assignment you should be giving right now."

Gage of A&M-San Antonio agreed. His most immediate concern is the obvious risk of plagiarism, but focusing on that would mean starting from a place of distrust, and good teaching makes a better safety net, he said.

"There's a difference between assigning writing and teaching writing," Gage said. "If we are teaching writing, we are engaging with our students' voices, we are engaging with their identities as writers and where they are in that moment as writers, we are working with them as they develop.

"Through that type of engagement of student writers, it can become apparent and it can be detected if a student is suddenly using ChatGPT to write."

Sadovsky and Gage said they would be interested in helping their universities shape policy on how to use or restrict chatbots — and policies should emphasize responsible use, they said.

"I would be very disappointed if our response was just policing," Sadovsky said. "If what we try to do is just to get better at catching whether a student's answer was crafted using ChatGPT, then I think we are not doing our job well."

Yet administrators and professors also noted the need for tech that can identify plagiarism in both text and images. Some said they have used ChatGPT enough to recognize when a student is relying on it but fear the technology's improvements eventually will make that impossible.

In a blog post Tuesday, OpenAI said it had launched its new AI Text Classifier tool to help educators detect whether a student or an app wrote an assignment. The company stressed that the new tool "is not fully reliable" but is better than what was previously available.

Combs said Wednesday that he tried the new tool on text he wrote for an academic proposal and that it "correctly identified" the text as not written by AI.

But then Combs used ChatGPT to generate AI text and pasted it into the tool.

"It wasn't sure if it was AI," he said.

©2023 the San Antonio Express-News. Distributed by Tribune Content Agency, LLC.