
Biden Uses Social Media as Cautionary Tale for AI Laws

President Biden said that artificial intelligence has “enormous promise” but it also comes with risks such as fueling disinformation and job losses — dangers his administration wants to tackle.

(TNS) — President Joe Biden said Tuesday that artificial intelligence has “enormous promise” but that it also comes with risks such as fueling disinformation and job losses — dangers his administration wants to tackle.

Biden, meeting in San Francisco with AI experts, researchers and advocates, said the technology is already driving “change in every part of American life, often in ways we don’t notice.” AI helps people search the internet, find directions — and has the potential to disrupt how people teach and learn.

“In seizing this moment, we need to manage the risks to our society, to our economy and our national security,” Biden said to reporters before the closed-door meeting with AI experts at the Fairmont Hotel.

Pointing to the rise of social media, Biden said people have already seen the harm powerful technology can do without the proper guardrails. Still, he acknowledged he has a lot to learn about AI.

The meeting came as Biden is ramping up efforts to raise money for his 2024 reelection bid, including from tech billionaires. While visiting Silicon Valley on Monday, he attended two fundraisers, including one co-hosted by entrepreneur Reid Hoffman, who has numerous ties to AI businesses. The venture capitalist was an early investor in OpenAI, which built the popular ChatGPT app, and sits on the board of tech companies including Microsoft that are investing heavily in AI.

The experts Biden met with Tuesday included some of Big Tech’s loudest critics. The list included children’s advocate Jim Steyer, who founded and leads Common Sense Media; Tristan Harris, executive director and co-founder of the Center for Humane Technology; Joy Buolamwini, founder of the Algorithmic Justice League; and Fei-Fei Li, co-director of Stanford’s Human-Centered AI Institute. California Gov. Gavin Newsom also joined Biden at the AI event.

Some of the experts have experience working inside major tech companies. Harris, a former Google product manager and design ethicist, has spoken out about how social media companies like Facebook and Twitter can harm people’s mental health and amplify misinformation.

Biden’s meetings with AI researchers and tech executives underscore how the president is engaging both sides as his campaign tries to attract wealthy donors while his administration examines the risks of the fast-growing technology. While Biden has been critical of tech giants, executives and workers from companies such as Apple, Microsoft, Google and Facebook’s parent company Meta contributed millions of dollars to his 2020 presidential campaign.

“AI is a top priority for the president and his team. Generative AI tools have increased significantly in the past several months and we don’t want to solve yesterday’s problem,” a White House official said in a statement.

So the Biden administration has been focusing on AI’s risks. Last year, the administration released a “Blueprint for an AI Bill of Rights,” outlining five principles developers should keep in mind before they release new AI-powered tools. The administration also met with tech executives, announced steps the federal government had taken to address AI risks, and advanced other efforts to “promote responsible American innovation.”

Lina Khan, the Federal Trade Commission chairperson appointed by Biden, said in a May op-ed published in the New York Times that the rise of tech platforms like Facebook and Google costs users their privacy and security.

“As the use of AI becomes more widespread, public officials have a responsibility to ensure this hard-learned history doesn’t repeat itself,” Khan said.

Tech giants use AI in various products to recommend videos, power virtual assistants and transcribe audio.

While artificial intelligence has been around for decades, the popularity of an AI chatbot known as ChatGPT intensified a race between big tech players like Microsoft, Google and Meta. Launched in 2022 by OpenAI, ChatGPT can answer questions, generate text and complete a variety of tasks.

The rush to advance AI technology has made tech workers, researchers, lawmakers and regulators uneasy about whether new products might be released before they’re safe. In March, Tesla, SpaceX and Twitter Chief Executive Elon Musk, Apple co-founder Steve Wozniak and other technology leaders called for AI labs to pause the training of advanced AI systems, and urged developers to work with policymakers. AI pioneer Geoffrey Hinton, 75, quit his job at Google so he could speak about AI’s risks more openly.

As technology rapidly advances, lawmakers and regulators have struggled to keep up. In California, Newsom has signaled he wants to tread carefully with state-level AI regulation. He said at a Los Angeles conference in May that “the biggest mistake” politicians can make is asserting themselves “without first seeking to understand.”

California lawmakers have floated several ideas, including legislation that would combat algorithmic discrimination, establish an office of artificial intelligence and create a working group to provide a report on AI to the Legislature.

Writers and artists are worried that companies could use AI to replace workers. The use of AI to generate text and art comes with ethical questions, including concerns about plagiarism and copyright infringement. The Writers Guild of America, which remains on strike, proposed rules in March on how Hollywood studios can use AI. Any text generated by AI chatbots, for example, “cannot be considered in determining writing credits” under the proposed rules.

The potential abuse of AI to spread political propaganda and conspiracy theories, a problem that has plagued social media, is another top concern among disinformation researchers. They fear AI tools that can spit out text and images will make it easier and cheaper for bad actors to spread misleading information.

AI is already being deployed in some mainstream political ads. The Republican National Committee posted an AI-generated video ad depicting a dystopian future that would supposedly become reality if Biden wins reelection.

AI tools have also been used to create fake audio clips of politicians and celebrities making remarks they didn’t actually say. The campaign of GOP presidential candidate and Florida Gov. Ron DeSantis shared a video of what appeared to be AI-generated images of former President Trump hugging Dr. Anthony Fauci — a villain to believers of COVID-19 conspiracy theories.

Tech companies are not opposed to putting guardrails around AI. They say they welcome regulation but also want to help shape it. In May, Microsoft released a 42-page report about governing AI, noting that no company is above the law. The report includes a “blueprint for the public governance of AI” that outlines five points, including the creation of “safety brakes” for AI systems that control the electric grid, water systems and other critical infrastructure.

That same month, OpenAI CEO Sam Altman testified before Congress and called for AI regulation.

“My worst fear is that we, the technology industry, cause significant harm to the world,” he told lawmakers. “If this technology goes wrong, it can go quite wrong.”

Altman, who has met with world leaders in Europe, Asia, Africa, the Middle East and beyond, also joined scientists and other leaders in signing a one-sentence letter in May that warned AI poses a “risk of extinction” for humanity.

© 2023 Los Angeles Times. Distributed by Tribune Content Agency, LLC.