Sponsor Content

Protecting Elections from Disinformation Threats and Domestic Extremists


Over the past year, we have seen all too clearly the harm that purposefully false information, or disinformation, can cause. During the 2020 presidential election, we watched coordinated narratives and conspiracy theories shape the campaign trail, contribute to voter suppression and Election Day disruption, and ultimately erode public trust in the electoral system. The result: one December poll suggested that just half of Americans were confident the 2020 presidential election was fair.

With the Biden administration imposing sanctions against Russia earlier this year in response to attempted interference in the 2020 elections, international state influence campaigns are unsurprisingly a topic of heightened interest. However, evidence that the information war is being fought even more intensely within U.S. borders came on Jan. 6, when a conspiracy-fueled, disinformation-driven event conducted by domestic actors turned fatal.

U.S. authorities have more recently acknowledged conspiracy-driven domestic extremism as a very real threat to the U.S. The New Jersey Office of Homeland Security and Preparedness has predicted that “domestic extremists - primarily anarchist, anti-government and racially motivated - will continue to manipulate national incidents” in 2021. Conspiracy groups such as QAnon, which gained particular traction and attention during the 2020 election, are still attracting huge followings. A recent study found an 843.4 percent increase in articles discussing QAnon from March to November 2020 compared with the preceding eight months. QAnon and associated groups remain actively engaged in “information warfare,” intent on undermining the legitimacy of democratic processes and institutions both in the U.S. and around the world.

Information threats are evolving, and policymakers need to find new ways to counter them

The information threats to election integrity evolve throughout the electoral process. Last year’s elections saw false allegations made about candidates before Election Day, threats against election officials, rumors about COVID-19 outbreaks at polling stations and suggestions of postal voter fraud distorting results during the voting period. There were also accusations of a rigged election in the days and months afterward; even four months after Election Day, QAnon supporters theorized that Donald Trump would return to the White House on March 4, the historical inauguration date for U.S. presidents.

We have seen how quickly disinformation and incendiary content spread across social and digital media, with groups such as QAnon branching out en masse from online echo chambers to hijack conversations and hashtags. Social media platforms have been able to add some friction by suspending accounts and banning certain groups, and calls for platforms to be held more accountable for harmful and illegal online content are a step in the right direction.

However, relying solely on social media platforms to deplatform bad actors is insufficient and unsustainable, as these individuals and groups will simply continue their activities elsewhere. Protecting elections from disinformation requires a broader, more comprehensive strategy that harnesses every available tool. By establishing a robust framework now, one that uses the latest technical capabilities and effective countermeasures, state election officials have the opportunity to tackle this problem head-on.

Technological solutions exist to meet this challenge

Advances in artificial intelligence (AI) and machine learning (ML) have provided the pace and scale required for an early warning system that gives officials time to implement countermeasures. While there is still an important place for human analysis and complex investigations into areas such as campaign origins and identification of bad actors, complementing this work with powerful AI is crucial if we are to keep up with the speed and extent to which content can spread.

Logically worked with the office of the secretary of state of a key battleground state in 2020, applying innovative technology to identify domestically driven mis- and disinformation intended to spread false narratives that would reduce voter turnout or maliciously influence voter behavior.

Logically’s threat intelligence platform, Logically Intelligence, monitored millions of sources of online information for harmful content, automatically analyzing and classifying information in real time based on toxicity and threat-level assessments. After identifying and analyzing over 40,000 threats, Logically Intelligence then provided the ability to triage and escalate credible threats, as well as to deploy countermeasures such as flagging content for platform review, countering it with countermessaging and conducting deep-dive, expert-led investigations into bad actors and false narratives.
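The monitor-classify-triage workflow described above can be sketched in outline. This is a minimal, hypothetical illustration of that pattern, not Logically Intelligence's actual API: the class names, thresholds and scoring rules here are all assumptions invented for the example, and a real system would draw toxicity scores from trained ML models rather than hard-coded fields.

```python
from dataclasses import dataclass
from enum import Enum

class ThreatLevel(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class Post:
    source: str
    text: str
    toxicity: float  # 0.0-1.0 score from an upstream classifier (assumed input)
    reach: int       # estimated audience size (assumed input)

def classify(post: Post) -> ThreatLevel:
    """Combine toxicity and reach into a coarse threat level (illustrative thresholds)."""
    if post.toxicity >= 0.8 and post.reach >= 10_000:
        return ThreatLevel.HIGH
    if post.toxicity >= 0.5:
        return ThreatLevel.MEDIUM
    return ThreatLevel.LOW

def triage(posts: list[Post]) -> list[Post]:
    """Drop low-level posts and surface the most credible threats first for analysts."""
    flagged = [p for p in posts if classify(p) is not ThreatLevel.LOW]
    return sorted(flagged, key=lambda p: (classify(p).value, p.reach), reverse=True)

posts = [
    Post("forum-a", "benign chatter", toxicity=0.1, reach=500),
    Post("platform-b", "targeted false voting info", toxicity=0.9, reach=50_000),
    Post("platform-c", "misleading rumor", toxicity=0.6, reach=2_000),
]
queue = triage(posts)
print([p.source for p in queue])  # highest-threat post appears first
```

The point of the sketch is the ordering of work: automated scoring filters the firehose down to a ranked queue, and human analysts then handle escalation and countermeasures from the top of that queue.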

For example, Logically uncovered a campaign targeting the state’s Black community with convoluted messaging designed to create confusion about what form of identification was required to vote, with the goal of deterring people from showing up at the polls. In response to this discovery, harmful posts were flagged for removal, counter-narratives were deployed, and communities such as the Proud Boys and militia groups were monitored for Election Day disruptions, as were polling stations for further risks.

As a result of its monitoring, Logically was also able to escalate threats it had identified that originated outside the U.S. to federal bodies, as well as escalating threats to the safety of election officials to their offices and local law enforcement for investigation.
The Logically Intelligence dashboard depicting a top-line overview of identified threats

A data visualization of linked posts and communities as displayed by the Explore function

Early detection is critical to neutralize potential threats

As the threats shift from the campaign trail to everyday socio-political issues, so too do the specific countermeasures that will be most effective. Given the speed at which online content can go viral, early intervention is key at every stage to disrupt disinformation campaigns before they gain traction and spill over into real-world action. By identifying and triaging disinformation as early as possible, government officials have more time to ensure the correct countermessaging is promoted and targeted communities are reliably informed.

How states can stay ahead

Disinformation campaigns created and spread by both domestic and foreign actors pose a significant risk to the election process, but these can be mitigated with the right tools. The technology industry offers innovative and scalable solutions to the public sector, supporting state governments as they monitor and guard against the ever-changing threat landscape, helping to restore trust in government.

Amid the current online threat landscape, and ahead of the 2022 midterm elections, states have the opportunity to take more proactive measures to prevent harmful and misleading information from gaining ground, and potentially spilling over into real-world adverse events such as violent protest or voter suppression efforts. This in turn will give state officials the ability to create lasting confidence in both the election process and ensuing results.