Social Media Companies, Researchers Face Off Over Algorithms

A Senate committee hearing earlier this week pitted researchers against three major social media companies over the question of whether algorithms are to blame for harmful content on the platforms.

Monika Bickert, Vice President for Content Policy at Facebook, testifies remotely during a Senate Judiciary Subcommittee on Privacy, Technology, and the Law hearing on April 27, 2021, on Capitol Hill in Washington, D.C. The committee is hearing testimony on the effect social media companies' algorithms and design choices have on users and discourse. (Al Drago/Pool/Getty Images/TNS)
(TNS) — A Senate hearing on Tuesday pitted three powerful social media companies against researchers who testified that the algorithms the platforms use to generate revenue by keeping users engaged pose existential threats to individual thought and to democracy itself.

The hearing before the Judiciary Subcommittee on Privacy, Technology and the Law featured a bipartisan approach to the issue from the new chairman, Democratic Sen. Chris Coons of Delaware, and ranking member, GOP Sen. Ben Sasse of Nebraska. Algorithms can be useful, the senators agreed, but they also amplify harmful content and may need to be regulated.

Government relations and content policy executives from Facebook, YouTube, and Twitter described for the senators how their algorithms help them identify and remove content in violation of their terms of use, including hateful or harassing speech and disinformation. And they said their algorithms have begun “downranking,” or suppressing, “borderline” content.

Monika Bickert, Facebook’s vice president for content policy, said it would be “self-defeating” for social media companies to direct users toward extreme content.

But Tristan Harris, a former industry executive who became a design ethicist and now runs the Center for Humane Technology, told the committee that no matter what steps the companies took, their core business would still depend on steering users into individual “rabbit holes of reality.”

“It’s almost like having the heads of Exxon, BP, and Shell here and asking about what you’re doing to responsibly stop climate change,” Harris said. “Their business model is to create a society that’s addicted, outraged, polarized, performative and disinformed.”

“While they can try to skim the major harm off the top and do what they can — and we want to celebrate that, we really do — it’s just that they are fundamentally trapped in something they cannot change,” Harris continued.

Joan Donovan, the research director at the Harvard Kennedy School’s Shorenstein Center on Media, Politics and Public Policy, said the platforms should be required to offer users a “public interest” version of their news feeds or timelines and provide robust tools to moderate content.

“We didn’t build airports overnight but tech companies are flying the planes with nowhere to land,” Donovan said. “The cost of doing nothing is nothing short of democracy’s end.”

Coons and Sasse commended the platforms for steps taken to curb the spread of hate speech but questioned whether they would do enough if left to their own devices. Coons noted that Facebook recently took special measures to limit misinformation and violent content ahead of the verdict in the trial of former Minneapolis police officer Derek Chauvin, who was convicted in the May 2020 murder of George Floyd, a Black man.

“My question for you is why wouldn’t Facebook always limit the rapid spread of content likely to violate your standards?” Coons asked Bickert.

Bickert responded that such measures, in addition to removing harmful content, might also limit the spread of “false-positive” content that would not violate the company’s policies.

“So there is a cost to taking action on that content,” Bickert said. “But in situations where we know there is extreme or finite risk, such as an election in a country experiencing unrest or in Minneapolis with the Chauvin trial, we’ll put in a temporary measure where we’ll de-emphasize content that the technology, the algorithms, say is likely to violate [company policy].”

Coons said the hearing was a learning opportunity for both him and Sasse. He said he had no specific regulatory agenda but believes the issue demands urgent attention, and he would consider supporting voluntary, regulatory or legislative remedies.

Sasse said the piecemeal approaches by each company were “irreconcilable” with the broad challenges described by Harris.

“He’s making a big argument and we’re hearing responses that I think are only around the margins,” Sasse said.

©2021 CQ Roll Call, Distributed by Tribune Content Agency, LLC.