According to some, the new AI-powered videos have the power to confuse and mislead voters, potentially compromising election integrity. But there isn't much in the way of legislation at the state level to address them.
A new form of online disinformation has some government officials uneasy about its potential effects on upcoming political campaigns and elections, but policy efforts to address it are sparse.
“Deepfakes” — videos altered with the help of AI that can make people (typically celebrities or politicians) appear to do and say things they actually did not — are not only weird, uncanny manifestations of a new era of technological progress; according to some, they’re also a national security threat.
Last November, the Council on Foreign Relations hosted a public roundtable discussion of the new online phenomenon, where panelists lamented the potential these videos have for deployment by hostile foreign actors. Similarly, the Pentagon and its research agency, the Defense Advanced Research Projects Agency (DARPA), recently announced their commitment to researching various ways to combat the new phenomenon.
Deepfakes are created by AI algorithms that observe and record movement patterns in a subject's face from actual video, then recreate and simulate those patterns to make the subject appear to do or say something they did not. Many are concerned that such videos will be used to manipulate public perceptions of public figures during elections.
With the globalized reach of “fake news,” disinformation has gone from being a mostly federal issue to one with state and local relevance as well. Still, states have been slow to adopt legislation that could combat these potential forms of election interference — in no small part because the technology is still so new and untested in its ability to sow confusion among voters.
So far, while a number of federal bills targeting deepfakes have been introduced, only a handful of state bills have emerged, namely in California, Texas and Massachusetts.
Perhaps the most prominent bill introduced so far has been in Texas, where Sen. Bryan Hughes, R-Mineola, introduced SB 751.
The bill grew out of discussions that followed Lt. Gov. Dan Patrick's appointment of a committee on election integrity. The committee, tasked with researching various forms of election interference, discussed deepfakes at length, said Hughes, who sat on the committee.
Hughes’ bill narrowly defines a deepfake as “a video created with artificial intelligence that, with the intent to deceive, appears to depict a real person performing an action that did not occur in reality,” while criminalizing the act of creating such a video to “injure a candidate or influence the result of an election.”
“We’ve been taking testimony and investigating all kinds of election problems [in the committee] and ran across this technology and read about the potential it has and really became concerned,” Hughes said in an interview with Government Technology.
Hughes’ legislation, which is currently being considered by the state’s House Elections Committee, has been criticized by civil rights groups, who argue that it would infringe upon free speech rights.
“Obviously there are going to be some free speech concerns, so for our purposes we wanted to make this pretty narrow and focus on elections,” Hughes said. “Our attempt here is to create a pretty narrow remedy so that if you use this technology with the intent to influence an election then you will be held accountable for it.”
Another bill seeking to combat deepfakes was recently introduced in the California Legislature but failed to pass.
AB 1280 was introduced by the Organization for Social Media Safety (OSMS), a relatively new nonprofit that describes itself as committed to combating forms of online bullying and other hazards related to social media. Though it was voted down, the bill was granted an opportunity for reconsideration next year.
“Even though it’s still early, we feel that there should be some sort of legislative response,” said Marc Berkman, executive director for OSMS.
While Berkman’s bill focused heavily on the potential for videos to be used as a form of bullying or ostracization, it would have also made it a felony or a misdemeanor to “prepare, produce, or develop, any deepfake [within 60 days of an election] with the intent that the deepfake coerce or deceive any voter into voting for or against a candidate or measure in that election.”
Those who opposed the bill argued that deepfakes are still a largely theoretical threat to the integrity of elections, and that laws already exist to assist potential victims of such videos. Some civil liberties advocacy groups like the ACLU also voiced opposition to the bill, seeing it as an infringement on rights to freedom of expression.
“It was helpful to see where everyone’s at and there were some good words spoken about it at the hearing. Some more work needs to be done,” Berkman said.