According to a new study, online actors may be pushing false narratives through social media to sow chaos. While the exact source of the disinformation remains unclear, it poses a danger to people seeking reliable information.
Disinformation may be making the COVID-19 crisis much worse, with online groups attempting to spread false narratives, incorrect health information and other propaganda while governments rush to lock down communities and treat the sick.
That's according to a new study released by Blackbird.AI, a vendor that uses artificial intelligence and data analysis to provide insights into online media.
In recent weeks, governments have struggled to mitigate a swell of factually incorrect information related to the ongoing pandemic — and some of this has been intentionally created and promoted, according to the report.
The company collected and analyzed nearly 50 million tweets from some 13 million unique users, with its AI-based algorithms looking for suspicious patterns of propagation or sharing. The analysis found what appeared to be multiple campaigns intentionally crafted to push divisive or destabilizing narratives throughout online communities between Feb. 27 and March 12, 2020.
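The report doesn't detail Blackbird.AI's proprietary algorithms, but the general idea — flagging suspicious propagation patterns — can be illustrated with a toy sketch. The heuristic below (identical text posted by many distinct accounts in a tight burst) is a hypothetical simplification for illustration, not the company's actual method; the function name, thresholds, and tuple format are all assumptions.

```python
from collections import defaultdict

def flag_coordinated(tweets, min_accounts=3, window_secs=600):
    """Flag message texts pushed by many distinct accounts in a short window.

    `tweets` is a list of (timestamp_secs, account_id, text) tuples --
    a toy stand-in for real tweet metadata. This is an illustrative
    heuristic only, not Blackbird.AI's algorithm.
    """
    by_text = defaultdict(list)
    for ts, account, text in tweets:
        by_text[text].append((ts, account))

    flagged = []
    for text, posts in by_text.items():
        posts.sort()  # order by timestamp
        accounts = {account for _, account in posts}
        span = posts[-1][0] - posts[0][0]
        # Many distinct accounts posting identical text within a tight
        # burst is one simple signature of coordinated amplification.
        if len(accounts) >= min_accounts and span <= window_secs:
            flagged.append(text)
    return flagged

tweets = [
    (0,   "acct1", "the virus is a hoax"),
    (60,  "acct2", "the virus is a hoax"),
    (120, "acct3", "the virus is a hoax"),
    (0,   "acct4", "stay safe everyone"),
]
print(flag_coordinated(tweets))  # → ['the virus is a hoax']
```

Real systems layer many more signals on top of this — account age, follower graphs, posting cadence — but the core task is the same: separating organic sharing from engineered bursts.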
Coordinated networks of fake profiles, bots and online actors spread the disinformation through social media, the report shows.
For instance, one recent campaign attempted to spread negative sentiment about congressional Democrats, characterizing them as fearmongers who were exaggerating the severity of the virus, said Wasim Khaled, CEO of Blackbird.AI, speaking with Government Technology.
“That campaign was made up of something like 84,000 manipulated tweets. ...They were pushing stuff like conspiracy theories ... there were all types of narratives that got pulled into this specific messaging, just to make people think coronavirus is fake. Telling them it's not a concern,” he said.
Other campaigns included attempts to delegitimize the news media as an accurate source of information about COVID-19, as well as conspiracy theories about the origins of the virus — namely, that it is some sort of escaped bio-weapon created by China, Iran or the U.S.
Traditionally, there have been three types of threat actors that might engage in this sort of behavior, Khaled said. These include state-backed groups, which have ties to a specific government; enterprise groups, which peddle disinformation-as-a-service and are often hired by companies to undermine corporate rivals; and "lone wolves," who just want to create havoc for their own amusement.
While Khaled said his company doesn't deal in attribution, he noted that Blackbird.AI frequently passes its findings to federal intelligence agencies, which are better suited to tracing the origins of such activity.
“We pass off our information to various agencies who have an interest in finding out where this comes from,” he said, explaining that his company is more focused on the intent of a specific narrative, how it affects an online media ecosystem and what might be done to mitigate it.
As disinformation is injected into online communities, members of those communities pick it up and spread it organically, said Khaled: "These people don't actually know that they're being manipulated. They don't know that what they're seeing is [not] real."
Combating this type of informational attack is difficult and requires the ability to trace and analyze large amounts of data.
Part of the problem is that social media remains a largely unregulated space where this type of fraudulent activity can flourish. Big companies like Twitter and Facebook have recently announced commitments to fighting these kinds of campaigns, but it's unclear whether those efforts will be enough.