Deepfakes, or AI-generated images, video and audio that misrepresent reality, are becoming increasingly common, posing new risks related to security and fraud. Many state lawmakers have explored potential measures to protect against these and other risks. In May, the TAKE IT DOWN Act became federal law, making it a crime to share nonconsensual deepfake images of a sexual nature, but experts say it addresses only a "drop in the bucket" of the risks.
More than four in five respondents, or 84 percent, agreed or strongly agreed that individuals should be protected from unauthorized use of their voice and likeness, according to the survey. The support is bipartisan: 84 percent of Republicans and 90 percent of Democrats back such protections.
The survey also revealed significant support for labeling and watermarking AI-generated content on social platforms, a measure favored by 84 percent of respondents. A majority of respondents also support social media platforms removing unauthorized deepfakes and providing a transparent appeals process for contesting them. Three in four respondents support giving individuals a right to license their voice and visual likeness, strengthening their control over how their digital identity is used.
The survey was designed by the Communication Research Center (CRC) at Boston University's College of Communication and conducted by Ipsos.
The TAKE IT DOWN Act received bipartisan support, but some lawmakers want to enact further protections, as demonstrated by this spring's introduction of the NO FAKES Act, which would protect individuals' digital identities against the unauthorized use of their voice and likeness.
However, while the U.S. Congress has been slow to act on AI-related safeguards for individuals, President Donald Trump's AI Action Plan takes aim at states' autonomy to enact their own regulations protecting people from AI risks; states that do so may risk losing federal funding.
This is not the first time the public has demonstrated significant bipartisan support for specific policies to safeguard people from the risks of AI technologies, ranging from deepfake regulation to a ban on autonomous weapons.