
Risk of Extinction from AI? Tech Execs Issue Vague Warning

Executives from some of the leading companies in the AI space have issued an intentionally vague warning meant to “open up discussion” around the rapidly evolving technology. The statement is another in a long line of warnings about the potential dangers of unchecked AI.

OpenAI CEO Sam Altman speaks during a keynote address announcing ChatGPT integration for Bing at Microsoft in Redmond, Washington, on Feb. 7, 2023. Microsoft is fusing ChatGPT-like technology into its search engine Bing, transforming an internet service that now trails far behind Google into a new way of communicating with artificial intelligence. (Jason Redmond/AFP/Getty Images/TNS)
(TNS) — As artificial intelligence races toward everyday adoption, experts have come together — again — to express worry over the technology's potential power to harm — or even end — human life.

Months after Elon Musk and numerous others working in the field signed a letter in March seeking a pause in AI development, another group of hundreds of business leaders and academics involved in AI signed on to a new statement from the Center for AI Safety that serves to "voice concerns about some of advanced AI's most severe risks."

The new statement, only a sentence long, is meant to "open up discussion" and highlight the rising level of concern among those most versed in the technology, according to the nonprofit's website. The full statement reads: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Notable signatories of the document include Demis Hassabis, chief executive of Google DeepMind, and Sam Altman, chief executive of OpenAI.

Though proclamations of impending doom from artificial intelligence are not new, recent developments in generative AI, such as OpenAI's public-facing tool ChatGPT, have pushed the technology into the public consciousness.

The Center for AI Safety divides the risks of AI into eight categories. Among the dangers it foresees are AI-designed chemical weapons, personalized disinformation campaigns, humans becoming completely dependent on machines and synthetic minds evolving past the point where humans can control them.

Geoffrey Hinton, an AI pioneer who signed the new statement, quit Google earlier this year, saying he wanted to be free to speak about his concerns about potential harm from systems like those he helped to design.

"It is hard to see how you can prevent the bad actors from using it for bad things," he told the New York Times.

The March letter did not include the support of executives from the major AI players, and went significantly further than the newer statement in calling for a voluntary six-month pause in development. After the letter was published, Musk was reported to be backing his own ChatGPT competitor, "TruthGPT."

Tech writer Alex Kantrowitz noted on Twitter that the Center for AI Safety's funding was opaque, speculating that the media campaign around the danger of AI might be linked to AI executives' calls for more regulation. In the past, social media companies such as Facebook used a similar playbook: ask for regulation, then get a seat at the table when the laws are written.

The Center for AI Safety did not immediately respond to a request for comment on the sources of its funding.

Whether the technology actually poses a major risk is up for debate, Times tech columnist Brian Merchant wrote in March. He argued that, for someone in Altman's position, "apocalyptic doomsaying about the terrifying power of AI serves your marketing strategy."

©2023 Los Angeles Times, Distributed by Tribune Content Agency, LLC.