What’s Missing, Misunderstood in Section 230 Debates

Does the federal law allow you to sue social media if their algorithms spread disinformation? Are some would-be social media reforms targeting the First Amendment? Is a three-word phrase a dangerous loophole or useful catch-all?

Section 230 co-author former Rep. Chris Cox discusses the law during an AEI panel. (Screenshot)
State and federal politicians increasingly train their crosshairs on Section 230 of the Communications Decency Act, as they attempt to combat everything from the spread of disinformation to the perception that content moderators demonstrate political biases.

Among other things, that piece of legislation says that online services can host user-created content without fear of being sued over what those users say (with a few exceptions for things like copyright violations). It’s often seen as the reason websites, social media platforms and others allow people to make posts, and to do so without stringent limitations.

Section 230 also says online services can’t be sued in civil court for well-intentioned attempts to remove particularly problematic content, such as that which is “excessively violent” or “obscene.”

During an April 11 panel hosted by the public policy think tank American Enterprise Institute (AEI), Section 230 co-author former Rep. Chris Cox and several legal experts discussed whether reforms to the law could actually achieve such goals.

ARE SOCIAL MEDIA COMPANIES LIABLE FOR THEIR ALGORITHMS?


Debates over reforming Section 230 to curb disinformation typically home in on certain portions of the law, including the so-called “26 words that created the Internet.”

That provision essentially says online platforms and users won’t be regarded as the publishers or originators of material from “another information content provider,” so won’t face the associated liabilities. A digital newspaper publisher can be sued for printing an author’s defamatory content, but newspaper comment sections and social media platforms aren’t liable for defamatory user-posted content.

Some researchers focused on combating disinformation on social media say that the greatest threat isn’t users posting falsehoods, but rather platform algorithms amplifying that bad content by pushing it into newsfeeds and recommending it to other users.

That platform behavior might not necessarily be protected from lawsuits, Cox said.

That’s because of another clause in the law stating that, “the term ‘information content provider’ means any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service.”

Cox explained, “It says that, if the online portal becomes involved in either the creation of that content or its subsequent development, then it’s the content creator itself, and it has no protection … under Section 230.”

Platforms that boost content might feasibly be regarded as “developing” that content.

“Once it’s algorithmically amplified, is that a version of an editorial control issue? Is the design of the algorithm something that is an editorial judgment?” asked Daniel Lyons, AEI nonresident fellow and Boston College professor of law and associate dean of academic affairs.

No individual is deciding which specific content items to boost into a newsfeed, but companies are choosing to deploy and configure the algorithms that make those decisions for them.

Cox said that companies must be held accountable for the behavior of their algorithms. Letting firms shift responsibility off themselves and onto their tools would mean that, as society becomes increasingly automated, “nobody’d be liable for anything.”

There’s some precedent for courts treating platforms as content co-creators, although the situation isn’t quite the same, speakers noted.

In 2008, an appeals court ruled that the roommate-matching platform Roommates.com could be sued for housing discrimination. The platform required users to provide certain profile details by selecting options from drop-down boxes, including protected characteristics like age, gender and race, and it let users filter roommate matches based on those criteria.

The platform would have had liability protection if it had simply given users open space to write whatever they wanted about themselves and their desired roommates. But including these features made Roommates.com an active “developer” of the information, and therefore open to lawsuits, the court held, according to Wired.

Whether platforms can be sued for harmful content curation algorithms seems an open question.

“OTHERWISE OBJECTIONABLE”


Section 230 also shields platform providers from civil liability over limiting access to material they or their users believe to be “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.”

If Meta chooses to take down Facebook posts featuring pornography or cyberbullying, the company doesn’t have to worry about angry users suing it, for example.

Jeffrey Rosen, former U.S. acting attorney general and deputy AG and a current AEI nonresident fellow, said the “otherwise objectionable” phrase is ripe for misuse, however. Its vagueness allows for “arbitrary” application, and it should be replaced with a specific list of all targeted problematic content, he said.

Cox pushed back, saying the phrase was written in as a catch-all to cover similar content types that legislators might have failed to list specifically. He said courts ought to follow his interpretation: the phrase refers to material objectionable on grounds similar to the violent or sexual content already enumerated, not to just any theoretically objectionable content.

TARGETING CONTENT MODERATION


Some are indeed concerned that social media platforms may remove more than just illicit, violent or sexual content. A 2020 Pew Research Center survey found 73 percent of U.S. adults believed it “somewhat” or “very” likely that social media platforms “intentionally censor political viewpoints they find objectionable.”

Two states recently attempted to legislate against such potential activity.

In 2021, Texas passed a law allowing social media users to sue if they or their posts are removed due to the viewpoints expressed. Florida took a stab too, with a 2021 law fining social media firms for “censoring or de-platforming” political candidates near an election or for suppressing visibility of candidate-related content.

Both policies ran into preliminary injunctions that halted them from taking effect.

A federal judge temporarily blocked Florida’s policy on the grounds that it’s discriminatory, violates Section 230 and might violate the Constitution, per Vox.

Public officials, as arms of government, are barred by the First Amendment from interfering with private platforms’ choices about removing such posts, said Cox, who sits on the board of directors of NetChoice, a trade association that opposed both policies.

“The reason platforms get to decide what goes on their platforms is not Section 230 … When a platform takes things down for political reasons, it’s exercising its First Amendment right,” Cox said.

Officials who object to current levels of content moderation are also unlikely to benefit from scaling back Section 230’s civil liability protections. Doing so would allow more individuals to sue, likely prompting platforms to limit their legal risk by restricting users’ speech more severely, Cox said. Otherwise, major social media platforms serving massive numbers of users would risk being perpetually embroiled in lawsuits, which can take years to resolve.

Lyons said that all political leanings benefit from a level of content moderation, although particular approaches may better appeal to different target audiences.

“[Parler] quickly learned that allowing anybody to post at all times quickly turns every social media site into 4chan,” Lyons said.

Jule Pattison-Gordon is a senior staff writer for Government Technology. She previously wrote for PYMNTS and The Bay State Banner, and holds a B.A. in creative writing from Carnegie Mellon. She’s based outside Boston.