What Do AI Experts Think of Biden’s Executive Order?

The White House unveiled a sweeping executive order that aims to minimize the risks of artificial intelligence while simultaneously maximizing its potential. AI policy experts have mixed views on the double-edged order.

President Joe Biden. (Shutterstock/archna nautiyal)
(TNS) — The White House unveiled a sweeping executive order that aims to minimize the risks of artificial intelligence (AI) while simultaneously maximizing its potential. AI policy experts have mixed views on the effectiveness of the wide-ranging, double-edged order.

The executive order, released on Oct. 30, sets new guidelines for AI safety, including requiring innovators to share critical information — like safety test results — with the federal government. It also sets standards for AI as it relates to privacy, civil rights, education and health care.

Additionally, the order calls for expanded AI research to promote innovation and ensure American companies remain at the forefront of the technology’s development.

The applications of AI “are almost infinitely broad, and every single one of them has a bright side and a dark side,” White House Office of Science and Technology Policy Director Arati Prabhakar said in a video posted on X.

“Our task with AI is to figure out how to get all the benefits that this powerful technology is going to deliver and yet manage and mitigate the risks that are going to come with that,” Prabhakar said.

WHAT ARE AI EXPERTS SAYING?


The executive order “seems on track to represent a remarkable, whole-of-government effort to support the responsible development and governance of AI,” Alexandra Reeve Givens, the president of the Center for Democracy and Technology, said in a statement provided to McClatchy News.

It demonstrates a “comprehensive commitment” across the government to prioritize the supervision of AI development, Hodan Omaar, a policy analyst at the Center for Data Innovation, told McClatchy News.

“This unified stance is crucial because it shows that the United States is dedicated to addressing AI issues, preventing a scenario where policymakers from other regions take the lead on tech matters,” Omaar said.

The lightning-speed development of AI systems — including those powering ChatGPT and Bard — has left governments scrambling to settle on the proper regulatory approach to a technology that many view skeptically.

Seventy-five percent of American adults believe AI will reduce job opportunities, and 79% believe businesses can’t be trusted to responsibly implement AI, according to a September Gallup poll.

“AI systems with human-competitive intelligence can pose profound risks to society and humanity,” according to an open letter published in March by the Future of Life Institute. The letter, which called for at least a six-month pause on AI experimentation, was signed by over 33,000 people, including Elon Musk, Steve Wozniak and other Silicon Valley leaders.

“There’s broad agreement that the federal government needs to take action to protect citizens from AI’s harms and risks, while also promoting innovation and capturing the technology’s benefits,” Helen Toner, the director of strategy at the Center for Security and Emerging Technology, told McClatchy News.

“This executive order is clearly trying to play to both sides of this coin,” Toner said, “though the White House is limited in how much they can do without Congress.”

WHAT CONGRESS, AGENCIES NEED TO DO


Most of the provisions in the executive order affect executive branch agencies overseen by the president. However, the order explicitly calls on Congress to pass bipartisan legislation aimed at safeguarding the data of American citizens in the face of AI.

“The Administration has laid out a very ambitious agenda, but figuring out how to implement it is left to a swath of different federal agencies,” Toner said. “Especially in the absence of dedicated funding from Congress, it will likely be a stretch to get all of this done.”

While much of the executive order is in theory uncontroversial, its implementation could become an issue, Adam Thierer, a senior fellow at the R Street Institute, told McClatchy News.

“There is a danger of reading (the guidelines) too broadly to authorize administrative agencies to aggressively regulate in ways that Congress has not yet authorized for artificial intelligence,” Thierer said.

Thierer, who is in favor of a lighter-touch approach to AI regulation, wrote in an analysis of the executive order that AI policy has largely been driven by “dystopian narratives” and “worst-case scenarios.”

“We shouldn’t treat algorithmic innovators as guilty until proven innocent,” Thierer said. “We should wait and see what the problems are that develop and address them in a piecemeal, iterative, and flexible fashion to ensure we maximize innovation.”

©2023 The Charlotte Observer, Distributed by Tribune Content Agency, LLC.