New Alabama Deepfake Law Includes Criminal Sanctions

Signed by the state’s governor, the new law criminalizes distributing “materially deceptive media.” A first violation would be a misdemeanor, but subsequent violations would be treated as felonies. The law takes effect Oct. 1.

(Photo: Technology maps the lower half of a mannequin's face.)
(TNS) — Campaign ads that mislead voters are a political ploy older than bumper stickers, but the use of artificial intelligence to generate lifelike images and speech creates a vast new potential for disinformation.

Alabama lawmakers are putting restrictions in place to try to get ahead of what figures to be an evolving and challenging area of regulation.

The Legislature passed HB172 by Rep. Prince Chestnut, D-Selma, which sets up criminal and civil remedies intended to stop the use of AI to falsely depict candidates.

“Obviously it’s the first time we’ve done something like this in Alabama,” Chestnut said. “So I want it to be as broad as possible and hopefully we can deter this technology being used this way, in a way that none of us want to see it used.”

More than 100 bills on regulating AI campaign ads have been proposed in 40 states, according to the Voting Rights Lab, a nonpartisan organization that tracks election laws.

“We’re really seeing it as an emerging issue this year,” said Megan Bellamy, vice president of law and policy for the Voting Rights Lab. “Misinformation and disinformation are widespread in elections anyway, especially during federal election cycles. It turns up as false rumors and misconceptions about elections. And disinformation, specifically, can be targeted messages spread to purposely mislead voters.”

Before the presidential primary in New Hampshire in February, thousands of that state’s voters received robocalls that used a false AI-generated voice impersonating President Biden, the Voting Rights Lab reported.

The Federal Communications Commission will consider a proposal to require political advertisers to disclose the use of artificial intelligence in broadcast and radio ads, the Associated Press reported Wednesday.

HB172 passed the Alabama Legislature without a dissenting vote and was signed into law by Gov. Kay Ivey. It takes effect Oct. 1.

The bill defines “materially deceptive media” as an image, audio, or video produced by artificial intelligence that “falsely depicts an individual engaging in speech or conduct in which the depicted individual did not in fact engage” and that “a reasonable viewer or listener would incorrectly believe that the depicted individual engaged in the speech or conduct depicted.”

The bill makes it a crime for a person to distribute “materially deceptive media” if they know the depiction is false, if it occurs within 90 days before an election, if the intent is to harm the reputation of the candidate, and if the intent is to deceive voters and change votes.

A violation is a Class A misdemeanor, punishable by up to a year in jail. A subsequent violation is a Class D felony.

The distribution of “materially deceptive media” is not a crime if the ad carries disclaimers informing the viewer or listener that the ad has been manipulated by technical means and depicts speech or conduct that did not occur. The law includes specific requirements for the disclaimers. For example, if the material is a video, the disclaimer must appear throughout the video and be clearly visible and readable for the average viewer.

The new law also carries some exceptions, including for news outlets and for material that is satire or parody.

In addition to the criminal sanctions, the new law carries a civil remedy. It says the attorney general, the person depicted, a candidate who has been injured or is likely to be injured, and any entity that represents the interests of voters can ask a court for an injunction to block the use of the material.

Chestnut said it was important to include the criminal sanctions as a deterrent.

“There’s some people who may decide if they know there is punishment for it, they’ll say that maybe I won’t do this,” Chestnut said. “Maybe I’ll just do some negative ads, but I won’t put out completely false information or depict this person in a false light.”

Chestnut said there is a difference between traditional-style negative campaign ads and the worst examples of what could be done with AI-generated video or images. A traditional negative ad, for example, might cherry-pick a few lines from a 100-page bill to characterize a lawmaker differently than a broader look at their voting record would show.

“The person has the opportunity to come back and say, yeah but also in that 100-page bill I voted to give teachers a pay raise,” Chestnut said. “So that’s something that the record can be supplemented.

“But what do you do when someone takes your face, and it looks like you, and they have you on there kicking and beating a woman or maybe running over a person? This is a video of a hit-and-run. And it’s somebody who did engage in a hit and run. But it’s not you.”

Bellamy said voters have learned to expect negative campaign ads. But she said the potential impact of disinformation spread by AI-generated images falls in a different category.

“Voters have become really accustomed to the rhetoric and the mailers and the TV ads and radio ads really intensifying as you approach an election day,” Bellamy said. “The difference about AI-generated content is that it’s so compelling and it’s very realistic, and it could take a while and actually impact a voter’s thinking around the election and around a candidate before they realize that this is not real content. So that’s where we want to be really mindful about the impacts.”

Bellamy said legislation in other states, like Alabama’s, generally includes some requirement for a disclaimer that an ad includes media generated or manipulated by AI. Bellamy said the details of that disclaimer will be important.

“Is it really going to catch the voter’s attention?” Bellamy said. “Is it going to be more of an afterthought? Are folks hanging around? Is the disclaimer going to have to be present in a certain size on the ad and for the duration of the ad?

“There’s just a lot of particulars that I think would weigh into whether even a politically savvy and a more sophisticated voter would be able to identify this as AI-generated content, much less folks who are less politically savvy and really up to speed on AI and what it can do.”

Candidates are not the only potential targets of disinformation from AI-generated ads, Bellamy said. The technology could be abused to falsely depict the work of state and local agencies that manage elections, undermining voter confidence in the process.

“I think that this is just the beginning of legislation and we know we are seeing lawmakers across the country, both sides of the aisle, really trying to take this on and tackle it, understanding that it is going to impact the elections in some way,” Bellamy said. “It’s already impacting the election landscape.”

“I think that states and lawmakers are going to continue to grapple with the appropriate parameters to put on AI-generated content,” Bellamy said.

Chestnut said economic competition and national security are driving forces that ensure that technology like AI will continue to evolve and grow more powerful. He said that carries great potential benefit for the public. But the increasing ability to create convincing deepfake messages that spread disinformation causes deep concern that goes beyond political campaigns.

“I think with these technologies that we have, we have the ability to undermine everything that we’ve come to know to be true in this country,” Chestnut said. “Our institutions really are at stake. So I think we have to set up some type of parameters, some type of guardrails, some civil, some criminal, where we’re saying this type of behavior is not acceptable for us as a society.”

©2024 Advance Local Media LLC, Distributed by Tribune Content Agency, LLC.