Professors React to Call for ‘Pause’ on AI Research

Whether or not they agree with calls to halt innovation, many professors and computer scientists in higher education share tech leaders’ concerns that emerging AI needs oversight and regulation.

The speed at which generative AI technologies like ChatGPT are advancing and finding new use cases has prompted several well-publicized calls from industry leaders for a “pause,” citing concerns about how AI could change, or eventually even end, human life. In January, religious and political leaders joined several tech leaders in signing a “Rome Call for AI Ethics.” On March 22, tech leaders including Apple co-founder Steve Wozniak, UC Berkeley computer science professor Stuart Russell, Stability AI CEO Emad Mostaque and Tesla CEO Elon Musk signed an open letter asking “all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4.” In May, OpenAI CEO Sam Altman, whose company makes ChatGPT, urged Congress to create an agency to license AI companies. More recently, Altman joined Bill Gates and other AI experts and policymakers in signing a one-sentence statement comparing the seriousness of AI risk to that of pandemics and nuclear war.

A common theme in these statements is a desire for governing bodies to seriously and deliberately study the implications of the emerging technology and create frameworks for its responsible use — a concern that appears to be shared by many subject matter experts in higher education.

According to a news release emailed to Government Technology, the University of Florida was among several universities that recently signed onto the Rome Call for AI Ethics, which calls on the field to commit itself to “technological progress that serves human genius and creativity and not their gradual replacement.” Among those siding with calls for more responsible AI development is My T. Thai, associate director of the Nelms Institute for the Connected World and part of an expert panel at the university’s Herbert Wertheim College of Engineering.

“There appears to be a race on this such development where no one can fully understand how these powerful systems work, how they will be used/misused, and how to assess their risks. Thus, these powerful AI systems should be developed only once we are confident that their effects to society and humanity will be positive and their safety can be verifiable,” she said in a public statement. “I think the call is more or less to bring back our full attention on responsible AI — to tell us slow down on the race of building a more powerful AI system than [GPT-4], and to refocus on making modern AI systems more understandable, safe, self-assessable and trustworthy. The [Rome] call is not actually calling for a pause itself. Six months is obviously not long enough to accomplish such a complex goal.”

According to Junfeng Jiao, a professor at the University of Texas at Austin and a signatory of the March 22 open letter, tech companies today are afraid of missing out on the AI gold rush amid growing interest in tools like ChatGPT. He said he believes universities should play a bigger role in AI research and development efforts moving forward.

“There are many unknown parts of generative AI [GAI] and large language modeling [LLM], such as how exactly each node in the model transfers signals during the training, like how exactly our brain cells process signals when learning,” he said. “We have to really understand what LLM or GAI can do or cannot do ... We do need some guidance on what type of data can be trained and what can be answered from these LLM models.”

Jiao added that most work on GAI and LLMs is being done in private industry because of exorbitant hardware costs, and that more support is needed for academic research.

“Maybe OpenAI can be more open and support more professors,” he said.

However, Shannon French, a professor of ethics at Case Western Reserve University, is more skeptical about the March 22 open letter’s true aims. She said that while calls for the responsible development of AI tools are needed, she believes this call for a “pause” on AI development is a clever way for private tech industry leaders to “direct people’s attention in a panicky sense toward AI.”

Noting machine-learning biases in programs that make important decisions — such as enrollment algorithms in university admissions or programs that screen job applicants — she said the problem isn’t the hypothetical threats AI might pose, but rather that “AI is hurting people right now,” and that “people need to be working on that instead.”

“It’s a form of ‘AI hype.’ They are trying to continue the money flow toward AI projects and research that goes into and supports the use of AI by suggesting there’s this huge existential threat that’s going to come from the systems they’re building. For example, [saying that] the defense and government organizations need to flood more money that way and need to be worried about making sure they’re ‘winning the AI arms race’ and all that kind of language,” she said. “Meanwhile, while they’re trying to focus attention on that, what they’re not doing is fixing the actual problems with AI … AI is being rushed into use — in a great many fields and industries and government uses — before it is ready.”

French singled out bias as the most important of AI’s many flaws.

“That bias is getting baked into these systems to the point where people can’t even see into the black box and recognize that these algorithms have bias in them, and the data sets they were trained on have bias in them, and then they treat the [results from] AI systems as if they’re objective,” she said.

Whatever the motivation, Paul Root Wolpe, director of the Emory University Center for Ethics, stressed the broader need for regulation in the field. He said, “There’s no question we should ‘pause.’”

“When things take time and are thought out constantly as well, every new iteration is a new opportunity to correct mistakes and solve problems. When the goal is to get something out as quickly as possible, there isn’t a chance for self-correction. The damage is already done,” he said. “It’s a remarkable thing when leaders of technology or leaders of any industry call for regulation early in that industry. ... When these leaders are calling for regulation, we should listen to them, because every incentive of industry is not to be regulated, so they’re sending a powerful message when they say regulation is required here.”
Brandon Paykamian is a staff writer for Government Technology. He has a bachelor's degree in journalism from East Tennessee State University and years of experience as a multimedia reporter, mainly focusing on public education and higher ed.