4 in 5 People Want Protection Against AI Deepfakes

A new survey from Boston University revealed that respondents support enacting protections against deepfakes — AI-generated images or videos depicting something that did not happen. Their backing is bipartisan.

A significant and bipartisan majority of people in the U.S. support protections against deepfakes, according to a survey out of Boston University.

Deepfakes, or AI-generated images misrepresenting reality, are becoming increasingly common, posing new risks related to security and fraud. Many state lawmakers have explored potential measures to protect against these and other risks. In May, the TAKE IT DOWN Act became federal law, making it a crime to share nonconsensual deepfake images of a sexual nature, but experts say this is a “drop in the bucket” of risks.

More than four in five respondents (84 percent) agreed or strongly agreed that individuals should be protected from unauthorized use of their voice and likeness, according to the survey. The support is bipartisan: 84 percent of Republicans and 90 percent of Democrats back such protections.

“In this confusing environment, one principle has strong bipartisan support: the public overwhelmingly agrees that everyone’s voice and image should be protected from unauthorized AI-generated recreations,” said Michelle Amazeen, associate dean of research at Boston University’s College of Communication and director of its Communication Research Center (CRC). In her statement, she pointed to the rapid spread of disinformation fueled by the scaling back of content moderation on social media platforms.

The survey also revealed significant support, at 84 percent, for labeling and watermarking AI-generated content on social platforms. A majority of respondents also support requiring social media platforms to remove unauthorized deepfakes and to provide a transparent appeals process for contesting them. Three in four respondents support giving individuals the right to license their voice and visual likeness, strengthening control over the use of their digital identity.

The survey was designed by the CRC at Boston University’s College of Communication and conducted by Ipsos.

The TAKE IT DOWN Act received bipartisan support, but some lawmakers want to go further: the NO FAKES Act, introduced this spring, would protect individuals’ digital identities against unauthorized use of their likeness.

However, with the U.S. Congress slow to act on AI-related safeguards for individuals, President Donald Trump’s AI Action Plan homes in on states’ autonomy to enact their own regulations protecting people from AI risks; states that do so may risk losing federal funding.

This is not the first instance of the public demonstrating significant bipartisan support for specific policies — ranging from deepfake regulation to an autonomous weapons ban — to safeguard people from AI technologies’ risks.