Deepfakes: The Next Big Threat to American Democracy?

As anxieties about foreign interference in the 2020 presidential election grow, so do concerns about other vectors of misinformation. Deepfakes, realistic video forgeries, have some of the most damaging potential.

(Image: Shutterstock/Niyazz)
At an RSA 2020 seminar on deepfakes, McAfee researchers showed audience members a series of eight pictures, all of which appeared to be average, everyday people. Asked to identify which were authentic photos, however, audience members failed to recognize that half of the photos were computer-generated fakes. 

"This technology is so realistic it's actually making people think twice about whether seeing is actually believing," said Sherin Mathews, senior data scientist for McAfee. 

The past year has seen an ever-growing array of these fabricated videos: Richard Nixon giving a speech he never gave, President Trump saying "Epstein didn't kill himself," Brad Pitt's face replacing Tommy Wiseau's in The Room. Deepfake videos have essentially been normalized as legitimate online entertainment.

Yet, in a world where social media plays increasingly pivotal and unexpected roles in politics and culture, deepfakes are viewed as the next vector for disinformation, driven by political and commercial motivations.  

"Imagine a dark web economy where deepfakers might produce misleading content that they can release to the world to influence what car you may buy or what supermarket you may go to. Deepfakers may [eventually] touch every area of our lives," Mathews said. 

On the public-sector side, government officials and lawmakers worry that deepfakes, once introduced into political arenas like state and local elections, will sow confusion and discord in the democratic process and contribute to general political unrest.

Deep learning and data  

Deepfakes were propelled to scale by the field of deep learning: artificial intelligence algorithms that can create highly realistic artificial images from large troves of real ones.

One of the most prevalent forms of this tech is the generative adversarial network (GAN), a system in which two artificial neural networks are pitted against each other: one generates fake images while the other tries to distinguish them from real ones, and each improves through the contest. GANs are already being used in a number of industries, including film and fashion, where simulated humans can replace real ones, cutting company costs.
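To make the GAN idea concrete, here is a minimal training-loop sketch in PyTorch. It assumes small fully connected networks and flattened 28x28 grayscale images; real deepfake generators are far larger and convolutional, so every layer size and hyperparameter below is an illustrative assumption, not any production system.

```python
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28  # assumed noise size and flattened image size

# Generator: turns random noise into a synthetic image.
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, image_dim), nn.Tanh())

# Discriminator: scores how likely an image is to be real.
D = nn.Sequential(nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())

bce = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    fakes = G(torch.randn(batch, latent_dim))

    # Discriminator step: learn to label real images 1 and fakes 0.
    opt_d.zero_grad()
    d_loss = (bce(D(real_images), torch.ones(batch, 1)) +
              bce(D(fakes.detach()), torch.zeros(batch, 1)))
    d_loss.backward()
    opt_d.step()

    # Generator step: learn to make the discriminator call fakes real.
    opt_g.zero_grad()
    g_loss = bce(D(fakes), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()
```

The key dynamic is the contest itself: the discriminator's feedback is the only training signal the generator receives, which is why the fakes keep improving until people, like the RSA audience, can no longer spot them.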

An expert on this subject is Hao Li, an associate professor in computer graphics at the University of Southern California, who is sometimes referred to as a deepfake "pioneer" and "artist." Li, who runs his own lab at USC and has already enjoyed a lengthy career in computer graphics for media and entertainment companies, has consistently been at the forefront of this technology and its evolution. 

"My work has focused on [the question of] how do we scale this process? How do we make facial capture and facial rendering more efficient and accessible?" he said, in an interview with Government Technology.

The process, which was historically the work of highly specialized media technicians, began to evolve around 2016, when data and artificial intelligence were applied to the field, Li said.

By now, the technology has advanced to the point where deepfake face-swapping is used in a number of mainstream apps, and companies like Topaz Labs and software like Deepface can fake videos and audio with ease.

"It's basically a software suite ... that gives people the ability to alter or swap faces inside a video without requiring a lot of expertise in computer graphics ... all you need is to collect a lot of data and then trade models to create these kinds of effects," said Li. 

From entertainment to propaganda 

"This kind of technology basically started with visual effects," Li said. "Hollywood has spent a lot of time trying to find new methods of storytelling, ones where [they can] bring people to life who have passed away, or can create a younger version of someone."

As an example, Weta Digital, the company behind the special effects for the Lord of the Rings trilogy and one of Li's former workplaces, has been a pioneer in this kind of media trickery.

Similarly, Li and his team have contributed to important early examples of this technology, such as digitally inserting actor Paul Walker back into the Fast and Furious franchise after his untimely death in 2013.

Other recent examples, like Netflix's gangster epic The Irishman (which uses digital de-aging technology on its stars), Dove's use of Audrey Hepburn's likeness to sell chocolate, and Finding Jack, an upcoming Vietnam War movie that plans to resurrect the likeness of James Dean, show the extent of the tech's current capabilities and its commercial appeal.

The technology is also set to advance exponentially in the years to come. Very soon it may be impossible to tell most real and fake images apart, Li said. While this will offer new and exciting opportunities in the world of entertainment, it will also complicate the relationship between audiences and media content, creating an opportunity for actors, both good and bad, to mediate the reality of images.

The disinformation game 

"Disinformation itself, without deepfakes, is of great concern in terms of politics, especially since the elections are coming," Li said, while noting that the visual nature of deepfakes makes them especially advantageous to propagandists.  

If officials in the U.S. are just starting to grapple with the ways new technologies can interfere in politics, deepfakes are already an unfortunate part of the dialogue in other countries, Li explained.

A Brazilian gubernatorial candidate claimed he had been the victim of a disinformation campaign after a sex tape involving him emerged ahead of the election. In another case, a video of the president of Gabon, who had recently been absent from the public eye, led dissident political forces to allege that the footage was fake and part of a cover-up; those forces later attempted a coup against the administration on the grounds that it was fraudulent and illegitimate.

"I don't think the deepfake technology is actually the catalyst of disinformation; it's social media," Li said, explaining that these online networks are where fictions are spread.  

Indeed, experts believe that deepfakes will likely have the biggest impact on developing nations, where political situations are less stable and digital literacy is lower. Here in the U.S., however, legislators are interested in creating solutions to what could soon be a domestic problem.  

Regulations: a lost cause? 

So far, governments have largely taken two routes to address deepfakes: bills penalizing deepfake use, and investment in research to better identify and label such videos.

Recent legislative attempts include several bills from California: one to criminalize deepfake dissemination with the intent to manipulate an election's results, and another that would've appropriated $25 million in taxpayer money for state universities to study effective methods to identify and combat "inappropriate use of deepfake technology."

Meanwhile, federal legislation introduced last year would criminalize the creation and distribution of such videos without labels identifying them as fakes, while a whole slew of other bills focus on limiting the technology's use in the pornography industry.

All of these bills hit something of a dead end when it comes to enforcement, however, said Alex Engler, a researcher with the Brookings Institution, for the simple fact that attribution is difficult in an online setting. Engler, who studies artificial intelligence and emergent technologies, said that lawmakers will likely not be able to legislate their way out of this problem. 

"The majority of these [disinformation campaigns] are not public[ly identifiable], they are not ... domestic [in origin] ... and so it is not totally clear that that's going to be a big deterrent. It might still be the right decision ... but I don't know how much of the problem it will ultimately address," he said. 

Officials are also pouring money into efforts to create detection methods, searching for nuanced approaches to identifying and labeling fraudulent media. These methods look for telltale artifacts, such as unnatural blinking patterns, inconsistent head poses, or anomalies in optical flow (the apparent motion of pixels between frames), to determine whether a video may be real or artificially generated.
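As one concrete illustration of the optical-flow signal mentioned above, the hedged sketch below computes the average frame-to-frame motion in a video with OpenCV. A real detector would feed features like these, ideally restricted to the face region, into a trained classifier; the function name and the bare-bones approach here are illustrative assumptions, not any agency's actual method.

```python
import cv2

def mean_flow_magnitudes(video_path: str) -> list[float]:
    """Return the average optical-flow magnitude for each consecutive frame pair."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    magnitudes = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Farneback dense optical flow between consecutive frames.
        flow = cv2.calcOpticalFlowFarneback(
            prev_gray, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        magnitudes.append(float(mag.mean()))
        prev_gray = gray
    cap.release()
    return magnitudes
```

The intuition is that synthesized faces sometimes move in ways that are too smooth or subtly inconsistent with the rest of the frame, so unusual spikes or flatness in these per-frame statistics can flag a clip for closer review.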

Yet even here, both Engler and Li are skeptical that detection methods will ever truly cut it. 

"If you have an automated detection system — you could imagine Facebook or YouTube doing this, though they haven't yet — it's going to be good enough to get a lot of the commercial deepfake software ... It's not that that is going to be undetectable," Engler said. "It's that it is currently technically possible to make them perfect, in which it is literally impossible to tell the difference between a synthetic video and a real video. That's going to be a really big challenge." 

Lucas Ropek is a former staff writer for Government Technology.