Senate Committee Advances Bill to Create Deepfake Task Force

The proposed legislation would create a task force to examine technology- and policy-based approaches to detecting and combating maliciously deployed deepfakes. It marks yet another attempt to legislate the controversial technology.

The U.S. Senate Committee on Homeland Security and Governmental Affairs voted unanimously on Wednesday to advance the Deepfake Task Force Act, reporting it favorably to the Senate. The bill would establish a public-private team charged with investigating policy and technology strategies for curbing the harms of deceptively used deepfake technology.

Deepfakes modify original audiovisual content in convincing ways that can be difficult to distinguish from genuine footage. The technology has often come under fire because it can be used to create falsified pornographic content of individuals without their consent and to spread disinformation through videos in which public officials appear to say things they never did.

The bill would establish a “National Deepfake and Digital Provenance Task Force” called on to propose policies and standards that could help reduce the spread and impact of falsified digital content. That would include identifying methods that members of the public could use to verify whether content is authentic and free of tampering.
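To make the verification idea concrete, below is a minimal sketch of one common provenance check: recomputing a file’s cryptographic hash and comparing it against a digest the original publisher distributed. This illustrates the general technique, not anything the bill prescribes, and the file name and digest value are hypothetical placeholders.

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 65536) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_provenance(path: str, published_digest: str) -> bool:
    """Return True if the file's digest matches the publisher's.

    Any edit to the file -- even one pixel or audio sample --
    changes the digest, so a mismatch signals tampering (or simply
    a different file), though it cannot say what was altered.
    """
    return sha256_of_file(path) == published_digest.lower()

# Hypothetical usage; both the file name and digest are placeholders.
if verify_provenance("speech.mp4", "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"):
    print("Digest matches the published value; the file is unaltered.")
else:
    print("Digest mismatch: the file differs from the published original.")
```

A check like this only helps if the authentic digest reaches viewers through a channel they trust, one reason provenance standards matter alongside detection technology.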

The legislation suggests that this work requires researching and developing new technologies, rules and best practices for differentiating between content distorted with the intention to disinform and content that is unmanipulated or has undergone harmless, or even helpful, alterations.

The Department of Homeland Security secretary and the director of the Office of Science and Technology Policy or their appointees would co-chair the task force, whose membership would be drawn from federal government, higher education and private or nonprofit organizations. Members would need relevant background, such as in artificial intelligence, cryptography, digital forensics or media manipulation.

The bill acknowledges that mitigating abusive deepfakes is more than a technological problem. It calls for the task force to address the wider dis- and misinformation ecosystem of social media, promote digital literacy among the public and examine any relevant “privacy and civil liberty requirements” raised by efforts to identify and restrict the spread of deepfakes.

This is far from the only attempt to legislate against some of the more harmful uses of realistic audiovisual distortions, and various state and federal policymakers have taken a stab at the issue. Such policy proposals often hit stumbling blocks over how the text — or the policy’s enforcers — distinguishes between malicious disinformation and protected free speech. Still, several states have enacted such laws, though not without controversy.

TEXAS AND CALIFORNIA’S FORAYS

California and Texas passed laws in 2019 aimed at preventing deepfakes that misrepresent candidates during the lead-up to elections. A September 2019 Texas law forbids spreading or releasing deceptive deepfake videos intended to sway voters or damage candidates within 30 days of an election. The following month, California made it illegal to disseminate deepfakes intended to trick voters or harm candidates within 60 days of an election.

The California law included exceptions for deepfakes used satirically or as part of news publications’ coverage, so long as they clearly identified the content as faked. But these exclusions did not go far enough for opponents, who feared the law would chill free speech.

An analyst warned at the time that the California law’s broad language could plausibly require candidates to publish image manipulation disclaimers before adjusting the brightness of online photos of themselves interacting with voters.

The Texas legislation prompted similar warnings, with the Electronic Frontier Foundation (EFF) saying its loose definition of “deepfakes” could be interpreted as banning many standard political ads that use some level of dramatization, including music or sound effects.

Another wrinkle: Texas’ law is meant to block only content adjustments that are intentionally deceptive, rather than those meant to condense and clarify, but pinning down motives is tricky. Prosecutors and officials who already wield political power would be the ones in charge of enforcement, EFF notes.

Policymakers in other states have considered following in the footsteps of Texas and California, perhaps most recently with Pennsylvania Rep. Nick Pisciottano’s July 31, 2021, announcement of plans to file a comparable bill.

HISTORY OF FEDERAL ATTEMPTS

Federal lawmakers have tried to tackle deepfakes in recent years as well, with the National Defense Authorization Act for Fiscal Year 2020 turning attention to the potential for adversary nations to use deepfakes as part of election disinformation campaigns. That law calls for national intelligence officials to report to Congress on the issue, and it follows on the heels of several less successful efforts to legislate deepfakes.

Deepfake Task Force Act sponsor Sen. Rob Portman, R-Ohio, previously attempted to pass a 2019 bill that would have had DHS produce annual reports on the state of deepfakes, their threats to national security and other related matters.

That year also brought another defeated effort, the Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act, or DEEP FAKES Accountability Act, which called for identifying altered content through disclosures and digital watermarking. This proposal, too, would have created a DHS task force, in this case responsible for developing ways to identify and mitigate deepfakes and associated national security threats.
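As a rough illustration of the watermarking idea that bill invoked, the sketch below hides a short provenance tag in the least significant bits of an image’s pixels and reads it back out. This is a deliberately naive scheme chosen for brevity, not anything the bill specified; the function names and tag string are hypothetical, and a production watermark would need to survive compression and re-encoding, which this one would not.

```python
import numpy as np

def embed_watermark(pixels: np.ndarray, message: bytes) -> np.ndarray:
    """Write the message bits into the least significant bit of each pixel."""
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    flat = pixels.flatten()  # flatten() returns a copy, so the input is untouched
    if bits.size > flat.size:
        raise ValueError("image too small to hold this message")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # clear each LSB, then set it
    return flat.reshape(pixels.shape)

def extract_watermark(pixels: np.ndarray, length: int) -> bytes:
    """Read `length` bytes back out of the pixels' least significant bits."""
    bits = pixels.flatten()[: length * 8] & 1
    return np.packbits(bits).tobytes()

# Hypothetical usage on a random 64x64 grayscale "image".
tag = b"provenance:newsroom-cam-01"
image = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
marked = embed_watermark(image, tag)
assert extract_watermark(marked, len(tag)) == tag
```

Because flipping a pixel’s lowest bit changes its value by at most one, the mark is imperceptible to viewers, but it is also destroyed by almost any re-encoding; robust watermarking schemes trade some of that invisibility for durability.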

The DEEP FAKES Accountability Act would have safeguarded some applications of deepfakes, such as for artistic and parody purposes, while condemning their use to influence elections and public policy debate, harass individuals and create pornography of them, incite violence and perpetrate identity fraud. The bill ultimately failed, possibly because of its expansive scope, which some opponents feared infringed on First Amendment rights, as well as various loopholes and exceptions that other objectors said undermined its effectiveness, writes Missouri Law graduate Lindsey Wilkerson.

An earlier attempt, the Malicious Deep Fake Prohibition Act of 2018, never left committee, but it would have criminalized making or knowingly spreading realistic-seeming faked digital audio or visual content, except where doing so is protected by the First Amendment.

Jule Pattison-Gordon is a senior staff writer for Government Technology. She previously wrote for PYMNTS and The Bay State Banner, and holds a B.A. in creative writing from Carnegie Mellon. She’s based outside Boston.