
Schumer Navigates Path to Artificial Intelligence Regulations

Senate Majority Leader Charles Schumer has been consulting experts on the best way to regulate the advanced technology. Under his framework, independent experts would have to test new AI technologies before they are publicly released or updated.

Senate Majority Leader Charles Schumer (Shutterstock)
(TNS) — The Senate leader who's famous for using a flip phone is now hard at work trying to figure out how to regulate some of the most advanced technology known to man.

Senate Majority Leader Charles E. Schumer, mindful of the vast changes — and dangers — lurking in the coming era of artificial intelligence, wants Congress to get a handle on the technology to both make the most of it and, when necessary, to rein it in. That's why he's been meeting for months with tech leaders like Tesla CEO Elon Musk to develop a regulatory framework to guide a burgeoning field that's big at the University at Buffalo and that has the potential to remake society.

"You cannot deny that the age of AI is here, and it's transformative," Schumer, a New York Democrat, said in a recent interview. "It could be more transformative than almost anything that's come along in centuries, even. And it's going to revolutionize science and medicine and technology and almost everything else."

But Schumer — like many scientists and some of his fellow lawmakers — also acknowledged that there is risk in a technology that can already write computer code but that can also be used to make fake video clips of politicians saying things they never said.

"You know, the fact is that AI can write a seemingly well-researched article in just a few seconds," Schumer said. "Isn't that amazing? But it also is a cause for concern as it can easily become a source of disinformation and be dangerous."


THE AI ERA


We've been surrounded by AI for years without calling it that. If you ask Alexa to play the Goo Goo Dolls or watch your Roomba vacuum your living room without vacuuming your cat, you're witnessing artificial intelligence at work. You're witnessing what happens when scientists program machines to behave as humans would.

But the possibilities and risks posed by AI became much clearer last fall with the release of ChatGPT, a sophisticated chatbot that mines a knowledge database and then does what you tell it to do, be it write a thank-you note or computer code or a research paper. And that's just the start. More sophisticated AI tools could conceivably create artwork or music or give medical professionals shortcuts that can save lives.

To see the full range of AI's positive possibilities, look no further than the University at Buffalo, where David Doermann directs the Institute for Artificial Intelligence and Data Science. Click on the web page that spells out the institute's primary research areas, and you'll find a dozen of them, ranging from robotics to health sciences to natural language processing.

Doermann sees the institute as a place that can bring together researchers from different disciplines to look at problems and solve them and, in the end, make life better.

"We can do real out-of-the-box type of applications in absolutely every domain that you have, from education, finance, medicine, materials, all types of engineering, all these types of things," he said. "And people want to do that."

Elsewhere at UB, the National AI Institute for Exceptional Education was established earlier this year with a $20 million National Science Foundation grant. That institute aims to find ways of using AI to help children who have difficulty speaking or understanding language.

"This project is a great example of how we can harness the opportunities that AI technologies can offer to enhance the services that our nation can offer the American people," said Fengfeng Ke, program director at the National Science Foundation.

LOOMING RISKS


But even at UB, the coming age of artificial intelligence is prompting angst as well as excitement. UB's Center for Information Integrity promotes news articles that warn of AI's dangers: articles with headlines like "Deepfakes Could Destroy the 2024 Election" and "If We Don't Master AI, It Will Master Us."

What's more, the center, which goes by the acronym CII, has developed what it calls "Deception Awareness and Resilience Training" to help seniors avoid scams, and its leaders regularly opine on the dangers posed by the Internet and AI.

"In the initial stages of development, no one was thinking seriously about the dark corners of this remarkable technical development," Siwei Lyu, the center's co-director, said in a UB newsletter last year. "Now we're left with a problem to fix and CII can confront existing disinformation and help users navigate the disinformation that's yet to come."

Yet plenty of people at UB and beyond worry that disinformation may be the least of the dangers wrought by AI. The New York City and Los Angeles public schools have banned ChatGPT, fearing it will allow students to write essays without really learning to write. College professors bemoan what they call "CheatGPT" — and now, predictably, there's even an app by that name, which bills itself as "Your GPT-4 study assistant with superpowers."

And the Future of Life Institute, formed to minimize the risks to society that technology poses, in March published an open letter calling for a six-month moratorium on the development of AI systems more powerful than the latest iteration of ChatGPT. As of Friday, 27,565 people — including Musk and Apple co-founder Steve Wozniak — had signed it. So did Ferdinand Schweser, an associate professor of biomedical engineering at UB, and Sonjoy Das, an adjunct associate professor of mathematics at SUNY Buffalo State.

"Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth?" the letter says. "Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?"

A REGULATORY FRAMEWORK?


In addition to calling for a pause in AI development, that letter also suggests development of a set of safety protocols overseen by outside experts — and that sounds remarkably like what Schumer wants to do.

Under his framework, which will eventually take the form of legislation, independent experts would have to test new AI technologies before they are publicly released or updated. The results of those tests would then guide government regulation aimed at preventing the release of AI technologies that could do public harm.

"In terms of job loss, we have to be very careful here," Schumer told The Buffalo News. "Second, there's a potential for destructiveness, and you've got to be very careful and you have to thread the needle. How do you move this forward and make sure China doesn't get ahead of us on this, but at the same time, don't be so precipitous that something dangerous could occur that you don't move to prevent?"

House Speaker Kevin McCarthy, a Republican, recently told Fox News that he's interested in AI, too. Mocking Schumer's use of a flip phone, McCarthy said it's more important for Congress to learn more about the issue before moving forward with the sort of legislation Schumer is suggesting. That's why McCarthy recently set up a bipartisan session for lawmakers with AI experts from the Massachusetts Institute of Technology.

"You can never go wrong with Congress being ... educated on subjects, and especially subjects that are going to harbor into the future," McCarthy said.

Buffalo-area lawmakers are taking a long and hard look at AI, as well.

Rep. Brian Higgins, a Buffalo Democrat, said legislation guiding the future of AI is likely necessary, "but I don't know exactly what that looks like now, and I don't know that anybody else does. ... How do you regulate something that most people, including me, do not fully understand?"

Meanwhile, Rep. Claudia Tenney, a Canandaigua Republican who represents parts of Western New York, said in a statement that Congress must find a way to both promote AI and, when necessary, regulate it.

"AI presents opportunity and pitfalls, and I look forward to continuing to work in a bipartisan manner to address both head on," said Tenney, a member of the House Science, Space and Technology Committee.

As for Schumer, he said he recently had a cordial hourlong meeting with Musk to talk about AI.

"He has a lot of knowledge and some thoughts and we shared them and we had a very good conversation that lasted about an hour," said Schumer, who noted that he and Musk also had a productive discussion about Buffalo's Tesla plant.

"He was very optimistic about growth of his manufacturing in Buffalo," Schumer said.

When not meeting with tech moguls or otherwise running the Senate, Schumer has gotten a little hands-on AI training.

Asked if he had ever used ChatGPT, Schumer replied: "Yeah, on a couple of little experimental bases. We actually asked it to write 10 Sunday Chuck Schumer press releases."

©2023 The Buffalo News, Distributed by Tribune Content Agency, LLC.