
New Tool Aims to Help Government Fight COVID Misinformation

The Washington, D.C.-based company AlphaVu recently filed for patent protection for two algorithms: one dedicated to assessing public sentiment on COVID-related issues, the other to identifying misinformation.

When the data analytics and messaging company AlphaVu launched in 2009, it filled, as founder and CEO Scott Wilkinson recalls, a “sleepy niche” for technology that helps government agencies assess public opinion and craft messaging. Over half of the Washington, D.C.-based company’s business since then has been in the transportation sector, helping public transit or tolling agencies with outreach regarding road construction and multibillion-dollar toll projects.

But in the midst of a global pandemic exacerbated by relentless misinformation on social media, the company believes its software could have a role to play in public health.

Wilkinson told Government Technology last week that AlphaVu recently filed for patent protection for a pair of algorithms to help local health departments counter misinformation on social media: one analyzes sentiment, and its output feeds into the other, which identifies misinformation. Wilkinson said the ability to gauge public opinion and respond with carefully tailored messaging has until now been a rare luxury for local governments, but the sheer volume of misinformation online, combined with political polarization and growing distrust in government, has made a strong case for it as part of the fight against COVID-19.

“If you are a local government agency … you don’t have the advertising budget of a consumer product company, you legally and ethically must be very much tied to the facts … and you can’t be overly persuasive in your messaging. It turns out that as we’ve gone along, helping the public attach to fact-based information is increasingly the exciting, cutting edge of the communications industry,” he said. “When the pandemic hit, we very quickly realized there was a need for our services in public health, and that’s what our big focus has been the last seven months.”

Wilkinson said AlphaVu’s public health software collects data only from the public domain — public posts on Facebook, Instagram, Twitter and Reddit. The software has two subcomponents: one assesses sentiment, or whether someone is saying something positive or negative; the other assesses word similarity, or how far their choice of words diverges from what the Centers for Disease Control and Prevention has posted online about COVID-19. Taken together, if a comment’s sentiment is negative and its wording is very different from how the CDC talks about COVID-19, it likely represents misinformation. The program uses these criteria to apply what the company calls a Misinformation Identification Risk (MIR) score to the conversations it finds.
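AlphaVu has not published its algorithms, but the two-signal logic Wilkinson describes can be illustrated with a minimal sketch. Everything below — the function name, the score ranges and the way the two signals are combined — is an illustrative assumption, not the company’s actual method.

```python
# Hypothetical sketch of the scoring logic described above. AlphaVu's
# algorithms are unpublished; the names, ranges and formula here are
# illustrative assumptions, not the company's method.

def misinformation_risk(sentiment: float, cdc_similarity: float) -> float:
    """Combine two signals into a 0-1 risk score.

    sentiment      -- -1.0 (very negative) to 1.0 (very positive)
    cdc_similarity --  0.0 (nothing like CDC language) to 1.0 (matches it)
    """
    negativity = max(0.0, -sentiment)   # only negative sentiment raises risk
    divergence = 1.0 - cdc_similarity   # distance from CDC phrasing
    return negativity * divergence      # high only when both signals fire

# A negative post phrased nothing like CDC guidance scores high ...
print(misinformation_risk(sentiment=-0.8, cdc_similarity=0.1))  # 0.72
# ... while a negative post echoing CDC language scores low.
print(misinformation_risk(sentiment=-0.8, cdc_similarity=0.9))  # ~0.08
```

The multiplication captures the stated criteria: a post flags as risky only when it is both negative in tone and far from CDC phrasing, so critical posts that quote official guidance score low.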

No doubt health departments don’t have time for an endless game of whack-a-mole with Internet trolls, but Wilkinson said isolated posts or brief spikes of misinformation aren’t the issue. AlphaVu is more concerned with several-day rolling averages of high misinformation scores, because those predict more endemic and problematic behavior. Wilkinson said the software can also glean which geographic area commenters are from, or are talking about, so the program can focus on comments relevant to a client’s jurisdiction.
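The company’s implementation is private, but the idea of tracking multi-day trends rather than single posts is straightforward to sketch; the three-day window and the daily scores below are assumed values for illustration.

```python
# Illustrative only: a trailing multi-day average of daily misinformation
# scores, reflecting the idea that sustained highs matter more than spikes.
from collections import deque

def rolling_average(daily_scores, window=3):
    """Yield the mean of the most recent `window` daily scores."""
    recent = deque(maxlen=window)
    for score in daily_scores:
        recent.append(score)
        yield sum(recent) / len(recent)

# A one-day spike (day 3) fades quickly; days 5-7 keep the average elevated.
daily = [0.1, 0.1, 0.9, 0.1, 0.8, 0.9, 0.8]
print([round(avg, 2) for avg in rolling_average(daily)])
# [0.1, 0.1, 0.37, 0.37, 0.6, 0.6, 0.83]
```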

“If you can’t measure it, you can’t know how, where or when to act. And frankly, there’s not nearly enough measurement,” he said. “We’re measuring so many more things than we used to even a year or two years ago, but the effort to actually quantify misinformation is unfortunately pretty new.”

Wilkinson said most off-the-shelf algorithms for gauging sentiment are generic, and therefore inaccurate. To make AlphaVu’s software as accurate as possible, he said, the company’s team undertook an “extraordinarily arduous” four-month process of hand-coding more than 70,000 social media posts about COVID-19 as positive or negative, then used them to train a machine learning algorithm to tell the difference.
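Wilkinson did not detail the model itself, but the workflow he describes — hand-labeled examples teaching an algorithm to distinguish positive from negative posts — is standard supervised learning. The sketch below uses scikit-learn as a generic stand-in; the tiny training set and pipeline choices are illustrative, not AlphaVu’s.

```python
# Generic supervised-learning sketch of the process described above.
# AlphaVu's model is proprietary; scikit-learn is a stand-in here, and
# two example posts stand in for the 70,000+ that were hand-coded.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "Masks are a simple way to protect your neighbors",  # hand-coded positive
    "Mask mandates are tyranny and masks do nothing",    # hand-coded negative
]
labels = [1, 0]  # 1 = positive sentiment, 0 = negative

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)
print(model.predict(["masks protect your neighbors"]))  # expected: [1]
```

The point of hand-coding domain-specific posts, as Wilkinson notes, is that a model trained on generic text misreads COVID-specific language; training on labeled posts about the actual subject is what makes the sentiment signal reliable enough to feed the misinformation score.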

“It’s very important because sentiment, when you approach it the right way, is essentially a proxy for intent,” he said. “When somebody’s expressing negative sentiment toward mask-wearing, we know that’s potentially a problem.”

AlphaVu’s misinformation identification tools are now in use at the Virginia Department of Health, according to a case study in which the company collected and analyzed COVID-related conversations from more than 200,000 Virginians. AlphaVu used these to report back to the department with MIR scores for conversations among at-risk and marginalized communities, as well as how often people discussed specific topics such as school and business reopening, contact tracing, personal protective equipment, social distancing, vaccinations and an exposure notification app deployed by the state. It’s unclear what the Virginia Department of Health did with this information; spokespeople from the department did not respond to requests for comment before deadline.

Wilkinson said that if a public health department knows what kind of misinformation is circulating among its residents, how intense and common it is, where it’s prevalent and potentially where it’s coming from, the department can craft better messaging. He said it can also work with AlphaVu to target specific populations on social media with facts and outreach, the success of which will depend on finding the right messenger.

“First of all, elected officials in leadership positions, of course, need and should be out front. But there are also times and places where what we’re doing is identifying — in particularly higher-risk, hard-to-reach, marginalized communities — who local micro-community influencers are, who can help share information to help enhance public trust. Because when information comes from a source that a community knows, or feels that it knows, there is less likely to be that spike in misinformation,” he said. “Misinformation is not new. It’s always been present. But our clients have never been overly concerned about it because it was never life or death. Now it’s life or death, so it has become one of the most critical things.”

Andrew Westrope is managing editor of the Center for Digital Education. Before that, he was a staff writer for Government Technology, and previously was a reporter and editor at community newspapers. He has a bachelor’s degree in physiology from Michigan State University and lives in Northern California.