How Generative AI Is Boosting Propaganda, Disinformation

Governments and political actors around the world, in democracies and autocracies, are using AI to generate texts, images and video to manipulate public opinion in their favor and automatically censor critical content.

The word “disinformation” highlighted in a dictionary. (Shutterstock/Casimiro PT)
(TNS) — Artificial intelligence has turbocharged state efforts to crack down on internet freedoms over the past year.

Governments and political actors around the world, in both democracies and autocracies, are using AI to generate texts, images, and video to manipulate public opinion in their favor and to automatically censor critical online content. In a new report released by Freedom House, a human rights advocacy group, researchers documented the use of generative AI in 16 countries “to sow doubt, smear opponents, or influence public debate.”

The annual report, Freedom on the Net, scores and ranks countries according to their relative degree of internet freedom, as measured by a host of factors like internet shutdowns, laws limiting online expression, and retaliation for online speech. The 2023 edition, released on October 4, found that global internet freedom declined for the 13th consecutive year, driven in part by the proliferation of artificial intelligence.

“Internet freedom is at an all-time low, and advances in AI are actually making this crisis even worse,” says Allie Funk, one of the report’s researchers. Funk says one of the team’s most important findings this year concerns changes in how governments use AI, though researchers are only beginning to learn how the technology is boosting digital oppression.

Funk identified two primary factors behind these changes: the affordability and accessibility of generative AI are lowering the barrier to entry for disinformation campaigns, and automated systems are enabling governments to conduct more precise and more subtle forms of online censorship.

Disinformation and deepfakes

As generative AI tools grow more sophisticated, political actors are continuing to deploy the technology to amplify disinformation.

Venezuelan state media outlets, for example, spread pro-government messages through AI-generated videos of news anchors from a nonexistent international English-language channel; the videos were created with Synthesia, a company that produces custom deepfakes. And in the United States, AI-manipulated videos and images of political leaders have made the rounds on social media, including a video depicting President Biden making transphobic comments and an image of Donald Trump hugging Anthony Fauci.

In addition to generative AI tools, governments persisted with older tactics, like using a combination of human and bot campaigns to manipulate online discussions. At least 47 governments deployed commentators to spread propaganda in 2023—double the number a decade ago.

And though these developments are not necessarily surprising, Funk says one of the most interesting findings is that the widespread accessibility of generative AI can undermine trust in verifiable facts. As AI-generated content on the internet becomes normalized, “it’s going to allow for political actors to cast doubt about reliable information,” says Funk. The phenomenon, known as the “liar’s dividend,” describes how wariness of fabrication makes people more skeptical of true information, particularly in times of crisis or political conflict, when false information can run rampant.

For example, in April 2023, leaked recordings of Palanivel Thiagarajan, a prominent Indian official, sparked controversy after they showed the politician disparaging fellow party members. And while Thiagarajan denounced the audio clips as machine generated, independent researchers determined that at least one of the recordings was authentic.

Chatbots and censorship

Authoritarian regimes, in particular, are using AI to make censorship more widespread and effective.

Freedom House researchers documented 22 countries that passed laws requiring or incentivizing internet platforms to use machine learning to remove unfavorable online speech. Chatbots in China, for example, have been programmed not to answer questions about Tiananmen Square. And in India, authorities in Prime Minister Narendra Modi’s administration ordered YouTube and Twitter to restrict access to a documentary about violence during Modi’s tenure as chief minister of the state of Gujarat, orders that in turn encourage the tech companies to filter content with AI-based moderation tools.

In all, a record high of 41 governments blocked websites for political, social, and religious speech last year, which “speaks to the deepening of censorship around the world,” says Funk.

Iran suffered the biggest annual drop in Freedom House’s rankings after authorities shut down internet service, blocked WhatsApp and Instagram, and stepped up surveillance in response to the historic antigovernment protests of fall 2022. Myanmar and China have the most restrictive internet censorship, according to the report, a distinction China has held for nine consecutive years.

© Copyright 2023 Technology Review, Inc. Distributed by TRIBUNE CONTENT AGENCY, LLC.