University of Northern Iowa Panel Gauges Challenges of AI

A recent series of roundtable discussions hosted by a University of Northern Iowa professor offered insight into the challenges of artificial intelligence currently playing out in classrooms, workplaces and statehouses.

Image: A person holds out a palm with "AI" and "ChatGPT" hovering above it, alongside symbols like a computer chip and graphs. (Shutterstock/SomYuZu)
(TNS) — Generative artificial intelligence — words that conjure up images ranging from "The Jetsons" to "The Terminator." While it would be difficult to overstate the expected benefits of generative AI, there are drawbacks that need to be considered.

Jim O'Loughlin, who heads the University of Northern Iowa department of literature and languages, recently hosted a series of roundtable discussions focused on generative AI, offering insight into the challenges currently playing out in classrooms, workplaces and statehouses across the country.

"A lot of things are happening really quickly with AI, and while we realize the impact of this technology is likely to be widespread, it's also likely to be uneven, having a substantial impact in some fields, less of an impact in others," O'Loughlin said. "Those of us in education have a particular obligation to prepare our students for the world they're going to be entering when they graduate."

Part of that challenge includes introducing generative AI tools into the classroom and deciding on the best ways to use them. O'Loughlin calls this the "throw spaghetti at the wall and see what sticks" phase.

"There (is) a wide array of tools made available often for free to attract early interested users," he said.

One of the first AI tools to break into the mainstream was Chat Generative Pre-Trained Transformer, or ChatGPT. It is an artificial intelligence technology that can process natural human language and generate responses. There are concerns with ChatGPT and similar AI tools over inaccuracies and their potential to perpetuate biases, spread misinformation and enable plagiarism.

One of the simplest uses and biggest improvements O'Loughlin sees is in the area of Internet searches.

For example, when using Google, users don't get an answer; they get a list of sites to comb through.

"I don't think that's going to be good enough anymore," O'Loughlin said. "I don't think we're going to put up with that level of inexact information anymore."

ChatGPT can also generate human-sounding prose when composing letters, emails or even research papers.

"I immediately thought I have to try this with my students," said Disa Cornish, associate professor in UNI's department of nursing and public health. "I view my classroom as a fun place to try things out. My students are usually really game, and we have a good back and forth."

ACCURACY CONCERNS


Cornish had her students use ChatGPT to write an essay about the epidemiology of a health condition.

She then had them do a deep fact check on their finished product.

They discovered "varying levels of accuracy," she said. "Most of the references had been made up, completely just fabricated. They looked like good references, but they were just fake.

"They had high expectations of this amazing tool to do everything perfectly, and it didn't deliver for them. They were alarmed that it was inaccurate.

"What are the implications if you as a public health advocate, as a source of information for public health in your community, if you just pass on misinformation or disinformation? You are breaking the code of ethics for health educators," she said.

"There's a reliability issue with a lot of the information you get, and it can often look good and not be right," O'Loughlin said.

Similarly, Doug Shaw, of UNI's department of mathematics, got good results when using ChatGPT to develop course objectives or write an email, but not when he entered a mathematical proof.

"It was completely wrong," he said. "I confess, I felt a little good about the fact that it was not able to do it correctly, (because) I still want to have a job."

The overall conclusion with ChatGPT was, "try it, don't rely on it."

O'Loughlin mentioned Google's NotebookLM as a possibly more reliable alternative to ChatGPT, as the user inputs the resources the tool pulls information from.

"The thing we want (students) to understand (is) AI is something you can use as a tool," said Jason Paulson, marketing director of Harmonic Hospitality. "Understand it, embrace it, and feel comfortable being able to grow with it because ... it's a part of our new world."

COMMERCIAL IMPACT


While AI's transformative possibilities in the workplace cannot be denied, it is important to recognize how job categories may be affected and to acknowledge people's legitimate concerns going forward, O'Loughlin said.

"There is the fear that jobs will be lost to robots," he said. "The Pew Trust estimates 19 percent of American job holders are highly exposed ... and may find that much of the work they've been doing can either be done by or enhanced by artificial intelligence."

Still, in most industries, workers see ways in which AI will help them do their jobs rather than take away what they do, O'Loughlin said.

He refers to generative AI as disruptive technology.

"It's likely to significantly change the way that businesses and consumers operate," he said.

But that doesn't have to be a bad thing.

"We're working on merging the physical and digital worlds largely through computer vision," said Brad Dwyer, chief technical officer for Roboflow Inc., a computer application company based in Des Moines. "The core insight we have is AI really lets computers interact directly with their environment.

"In the olden days, it used to be a human that would have to consume things through their eyes, process them with their brains, then put them into a computer with their fingers. And we have things like robotics and all sorts of automated machines that can use cameras to directly ingest information about their environment, use AI to understand what they're looking at and then you can build applications on top of that to do all sorts of things from self-driving cars to automating assembly line quality assurance to monitoring security cameras for threats in schools."

"I think it's really clear that AI is going to have a similar-size impact across every industry, similar to the way the PC or the Internet did," Dwyer said. "I think this is part of a much broader trend of AI seeping its way into everything we do.

"If you look back at all sorts of disruptive technology like this that, even though it is hard to predict in what ways it's going to create jobs, increases in productivity have always led to job increases.

"I'm pretty excited about what AI is going to do for the economy," Dwyer said. "People are already starting to see some of the effect. In industries where AI has really come into play and had an effect on productivity, you see prices fall. ... Technology has a very deflationary effect on the things that it touches.

Dwyer uses automotive manufacturing as an example.

"There's a concern that automation will mean they need less people to do the same job, but in fact, it's going to mean they can produce more cars for the same price, and (people) will buy more cars if you can drive the price down ... and so these productivity gains make their way through and end up as a net benefit for everybody," he said.

"I wouldn't worry about jobs being lost. I'd be much more excited about all the ... things that we're going to be able to produce given the (AI) available to us.

Dwyer said one of the biggest surprises he has seen over the last few years is the drastic increase in U.S. manufacturing investment.

"Since the pandemic, it's really taken off ... because an increase in productivity in the United States, especially powered by technology, (has) meant that it makes sense to invest here again versus going for the lowest cost of labor overseas, because we can utilize these technological gains to get increased productivity."

AI AND GOVERNANCE


While there is an executive order in place that addresses national security and safety, consumer fraud and data privacy, algorithmic discrimination, and jobs and innovation, there currently is no overarching federal legislation on generative AI, O'Loughlin said.

This has led to individual states taking up the task of addressing AI themselves.

"We've got technology developing at the speed of sound ... and we've got the glacial pace of government," said Sen. Chris Cournoyer, chair of the Iowa State Senate Technology Committee. "We are way behind the curve. Something that is important to me as a policymaker is to protect the rights and privacy and data of our citizens. We also don't want to stifle the innovation and creativity of our technology sector. That is definitely a fine line.

"We need to be very careful we don't come in too heavy handed on something that does not need to be heavily regulated, but it is important that we develop those ethical standards that really need to be developed globally because we don't want to have a patchwork of regulation at the state level where businesses don't know what they're doing in each state," she said.

"The Senate Technology Committee just formed in the Iowa Senate last year. I am thankful that leadership saw the importance of a technology committee to address issues like cybersecurity and consumer privacy and how technology is being used in cyber crime," she said.

©2024 Waterloo-Cedar Falls Courier (Waterloo, Iowa). Distributed by Tribune Content Agency, LLC.