What Is Section 230? An Expert on Internet Law and Regulation Explains

A terse piece of legislation from 1996 has been credited with creating the internet as we know it — and blamed for the flood of misinformation and other ills that have come with it.

Almost any article you read about Section 230 reminds you that it contains the most important 26 words in tech and that it is the law that made the modern internet. This is all true, but Section 230 is also the most significant obstacle to stopping misinformation online.

Section 230 is part of the Communications Decency Act, a 1996 law passed while the internet was still embryonic and downright terrifying to some lawmakers for what it could unleash, particularly with regard to pornography.

Section 230 states that internet platforms — dubbed “interactive computer services” in the statute — cannot be treated as publishers or speakers of content provided by their users. This means that just about anything a user posts on a platform’s website will not create legal liability for the platform, even if the post is defamatory, dangerous, abhorrent or otherwise unlawful. That immunity extends to posts encouraging terrorism, promoting dangerous medical misinformation or sharing revenge porn.

Platforms, including today’s social media giants Facebook, Twitter and Google, therefore have broad control over what information Americans see on their services.


The Communications Decency Act was the brainchild of Sen. James Exon, Democrat of Nebraska, who wanted to remove and prevent “filth” on the internet. Because of its overreaching nature, much of the law was struck down on First Amendment grounds shortly after the act’s passage. Ironically, what remains is the provision that allowed filth and other truly damaging content to metastasize on the internet.

Section 230’s inclusion in the CDA was a last-ditch effort by then-Rep. Ron Wyden, Democrat of Oregon, and Rep. Chris Cox, Republican of California, to save the nascent internet and its economic potential. They were deeply concerned by a 1995 case that found Prodigy, an online bulletin board operator, liable for a defamatory post by one of its users because Prodigy lightly moderated user content. Wyden and Cox wanted to preempt the court’s decision with Section 230. Without it, platforms would have faced an untenable choice: If they did anything to moderate user content, they would be held liable for that content, and if they did nothing, who knew what unchecked horrors would be released.


When Section 230 was enacted, less than 8% of Americans had access to the internet, and those who did went online for an average of just 30 minutes a month. The law’s anachronistic nature and brevity left it wide open for interpretation. Case by case, courts have used its words to give platforms broad rather than narrow immunity.

As a result, Section 230 is disliked on both sides of the aisle. Democrats argue that Section 230 allows platforms to get away with too much, particularly with regard to misinformation that threatens public health and democracy. Republicans, by contrast, argue that platforms censor user content to Republicans’ political disadvantage. Former President Trump even attempted to pressure Congress into repealing Section 230 completely by threatening to veto the unrelated annual defense spending bill.

As criticisms of Section 230 and technology platforms mount, it is possible Congress could reform Section 230 in the near future. Already, Democrats and Republicans have proposed over 20 reforms – from piecemeal changes to complete repeal. However, free speech and innovation advocates are worried that any of the proposed changes could be harmful.

Facebook has suggested changes, and Google similarly advocates for some Section 230 reform. It remains to be seen how much influence the tech giants will be able to exert on the reform process, and what, if any, reform can emerge from a sharply divided Congress.

This article was originally published by The Conversation.