
Deepfakes Are on the Rise — How Should Government Respond?

Bad actors are increasingly using artificial intelligence to manipulate images to misrepresent their subjects. As states work to regulate deepfake technology, perhaps a federal approach would be better.

A woman with dots on her face connected by lines to depict facial recognition. (Shutterstock/Mihai Surdu)
The past year has seen a remarkable rise in both the quality and quantity of deepfakes: realistic-looking images and videos, produced with artificial intelligence, that portray someone doing or saying something that never actually happened, such as President Nixon delivering the contingency speech prepared in case the moon landing failed. As the tools for producing this synthetic media advance, policymakers are scrambling to address public concerns, and state lawmakers in particular have put forward several proposals this year to respond to deepfakes.

One of the top concerns is that deepfakes will be used in misinformation campaigns to influence elections. For example, researchers at an MIT conference demonstrated how the technology could be used to create a real-time fake interview with Russian President Vladimir Putin. In response to such concerns, Texas passed a law in September that criminalizes publishing or distributing, within 30 days of an election, deepfake videos intended to harm a candidate or influence the results. California followed in October with a law that makes it illegal to intentionally distribute, within 60 days of an election, deepfakes intended to deceive voters or harm a candidate's reputation. The California law exempts news broadcasters, as well as videos made for satire or parody and videos clearly labeled as fake. These laws are good steps toward preventing campaigns from using deepfakes to attack their opponents, but they will do nothing to stop foreign political interference, and some First Amendment advocates worry they might unduly restrict free speech.

Another major concern is that deepfake technology is being used to create pornographic images and videos of individuals, mostly female celebrities, without their consent. In a September 2019 study, Deeptrace, an Amsterdam-based company that detects and tracks deepfakes on the Internet, found 14,678 deepfake videos on popular streaming websites, nearly double the number from December 2018, and discovered that 96 percent of them involved nonconsensual pornography. These videos are popular, having received approximately 134 million views. So far only one state, California, has passed a law addressing this issue. In October, Gov. Gavin Newsom signed a law that allows individuals to sue someone who has created a deepfake that places their likeness in pornographic images or videos, even if the content is labeled as fake. The law tries to balance free speech concerns by excluding material of legitimate public interest, such as newsworthy content. While the law will give victims some recourse, it will not help them when the source of the material is anonymous or outside the state's jurisdiction, nor will it stop the distribution of the content.

The last major issue lawmakers are grappling with is how to protect individuals' rights to control the commercial use of their image and identity. Deepfake technology is advancing to the point that performers may have their likenesses fully re-created in digital form, allowing their images to be used in projects in which they have no direct involvement, even after their death. Celebrities typically charge for commercial use of their likeness, and these rights can be enormously valuable, so many want to ensure they retain them as the technology evolves. The New York state Legislature considered, but ultimately did not pass, legislation supported by the Screen Actors Guild that would have established a new right of publicity for individuals. In particular, it would have extended this right to 40 years past an individual's death and prohibited use of a "digital replica" of an individual without their (or their heirs') consent.

These laws generally take the right approach: they make it unlawful to distribute deepfakes with malicious intent, and they create recourse for residents who have been harmed by bad actors. However, lawmakers must craft these laws carefully so as not to erode free speech rights or undermine legitimate uses of the technology. Other states considering similar legislation should proceed cautiously, recognizing that deepfake technology is changing rapidly. And state laws will only be a first step: websites will also need to take down this content, and the rules for that may need to be set at the federal level.

Daniel Castro is the vice president of the Information Technology and Innovation Foundation (ITIF) and director of the Center for Data Innovation. Before joining ITIF, he worked at the Government Accountability Office, where he audited IT security and management controls.