
Feds Could Be OK With Political Deepfakes — If Labeled

A proposal from the U.S. FCC would not ban AI-generated content from political ads, but would require that it be marked as such. If approved, rules would still have to be created.

(TNS) — Federal officials said Wednesday they might be OK with deepfakes and other AI-generated content in political advertising — as long as they are labeled.

That is according to a proposal from the U.S. Federal Communications Commission, the first step in requiring advertisers to carry a disclosure that artificial intelligence was used to create a political ad, as the U.S. ramps up for its first presidential elections of the AI era.

The proposal is a preliminary step. If voted through, it would allow the agency to take comment and craft rules on requiring the disclosure of AI-generated content in political ads.

The issue has become increasingly important at the state and now federal level, as more powerful and widely available AI technologies make it easier to create falsified audio, images and video capable of making candidates appear to do or say something they did not.

"As artificial intelligence tools become more accessible, the Commission wants to make sure consumers are fully informed when the technology is used," said FCC Chairwoman Jessica Rosenworcel in a statement. "Today, I've shared with my colleagues a proposal that makes clear consumers have a right to know when AI tools are being used in the political ads they see, and I hope they swiftly act on this issue."

The commission said it expects AI and deepfakes to "play a substantial role in the creation of political ads in 2024."

In a statement, the FCC said the proposed plan would not prohibit AI-generated content in political ads, but would only require that it be marked as such.

The FCC rulemaking comes after multiple bills on the topic have been proposed in the California legislature.

Some would require the labeling of AI-produced election content by social media companies, while others prohibit distributing deceptive audio or visual material of a political candidate in the months before and after an election.

"We applaud the FCC for opening a rulemaking process because action is desperately needed, but an executive branch agency taking this step underscores that we can't count on Congress to act," said a statement from Jonathan Mehta Stein, co-founder of the California Initiative for Technology and Democracy, which launched last year to fight the potential for AI-generated content to distort political messaging and confuse voters.

"We are glad that California is not waiting — it is advancing a legislative package that can protect our democracy from deep fakes and AI disinformation, and help lead the nation in doing so," Mehta Stein added.

©2024 the San Francisco Chronicle, Distributed by Tribune Content Agency, LLC.