Twenty-six words written in 1996 created the internet as you know it. Section 230 of the Communications Decency Act protects platforms from being sued for what their users post. Without it, Facebook couldn't host your cousin's conspiracy theories. YouTube couldn't let you upload videos. Reddit couldn't exist at all.
Now both political parties want Section 230 gone — or at least fundamentally rewritten. Republicans claim it lets platforms censor conservative speech. Democrats say it shields companies that profit from disinformation and hate. Tech companies warn that changing it would break the internet. All three are partly right.
The fight over Section 230 is really a fight over who controls online speech: platforms, governments, or users themselves. The answer will determine whether the internet remains a space for open expression or becomes something closer to broadcast television — where everything is vetted, sanitized, and legally defensible before it reaches your screen.
What is Section 230?
Section 230 is a federal law that says online platforms are not legally responsible for content their users post. Its key provision is remarkably short: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."
That single sentence means Twitter is not liable when someone tweets defamation. Yelp is not liable when a reviewer lies about a restaurant. Amazon is not liable when a third-party seller posts a fraudulent product listing. The user who posted the content is liable. The platform is not.
Section 230 also includes a second protection that gets less attention but matters just as much: platforms can moderate content without losing their liability shield. A website can delete posts, ban users, or set community standards — and doing so does not make them legally responsible for the content they don't remove.
This second provision was designed to solve a problem courts created in the mid-1990s. In Stratton Oakmont, Inc. v. Prodigy Services Co., decided in 1995, a New York court ruled that because Prodigy moderated some content on its message boards, it could be held liable for defamatory posts it failed to catch. The ruling created a perverse incentive: moderate nothing, and you're safe. Moderate anything, and you're liable for everything.
Section 230 reversed that logic. Under the law, platforms can moderate content in good faith without becoming publishers. They can remove spam, harassment, and illegal content without accepting legal responsibility for the posts they leave up.
Why was Section 230 created?
Section 230 was written to encourage platforms to moderate harmful content — not to protect them from having to moderate at all. That distinction has been lost in the current debate.
The law emerged from two competing concerns in the mid-1990s. Congress wanted to protect children from online pornography, which led to the Communications Decency Act's original provisions criminalizing "indecent" online speech. (The Supreme Court struck down those provisions in 1997 as unconstitutional.) But lawmakers also wanted to encourage platforms to self-regulate rather than forcing the government to police every website.
Representatives Chris Cox and Ron Wyden, the law's authors, framed Section 230 as pro-moderation legislation. Their goal was to let platforms remove objectionable content without fear of being sued for inconsistent enforcement. As Wyden explained in a 2019 statement, the law was meant to give platforms "the freedom to develop new blocking and filtering techniques" so parents and users could control what they saw online.
The law assumed that platforms would act as responsible intermediaries — removing illegal content, protecting users from harassment, and creating spaces for legitimate expression. It did not anticipate platforms with three billion users, algorithmic amplification of extremism, or business models built on maximizing engagement regardless of social harm.
Section 230 was written for the internet of 1996: message boards, early chat rooms, and websites with user comments. It now governs an internet dominated by a handful of corporations whose content moderation decisions shape global politics, public health information, and the boundaries of acceptable speech.
How platforms use Section 230 today
Platforms invoke Section 230 to avoid liability for an enormous range of user behavior — some of it genuinely beyond their control, some of it directly enabled by their design choices.
Facebook has used Section 230 to avoid liability for housing discrimination in targeted ads, even though Facebook built the targeting system. Airbnb has used it to avoid responsibility when hosts discriminate against Black guests. YouTube has used it to avoid liability for recommending videos that radicalize users toward extremism, even though YouTube's algorithm decides which videos to recommend.
The law protects platforms not just from defamation claims, but from nearly any lawsuit arguing that user-generated content caused harm. Courts have applied Section 230 to shield platforms from liability for sex trafficking ads, revenge porn, cyberstalking, and harassment campaigns — even when victims argue the platform's design facilitated the harm.
In Herrick v. Grindr, a man's ex-boyfriend created fake Grindr profiles that sent more than 1,000 strangers to his home and workplace. Grindr argued it had no duty to remove the fake profiles because Section 230 protected it from liability for user content. Courts agreed. The case was dismissed.
Platforms also use Section 230 to justify inconsistent moderation. Because the law protects their right to remove content "in good faith," platforms can ban some users for violating terms of service while allowing others to break the same rules. There is no requirement that moderation be consistent, transparent, or even rational.
This has created a system where platforms have near-total discretion over what speech is allowed — but almost no legal accountability when their moderation decisions cause harm or when their failure to moderate enables abuse. Critics on both the left and right argue this gives platforms too much power with too little responsibility.
Why Republicans want to change Section 230
Republicans claim Section 230 enables political censorship of conservative voices. Their proposed changes would condition liability protection on platforms remaining politically neutral — or strip protection entirely from platforms that moderate content based on viewpoint.
The core Republican argument is that platforms use their moderation power to silence conservative speech while allowing liberal speech to flourish. As evidence, they point to Twitter's decision to ban Donald Trump, Facebook's restrictions on certain conservative news outlets, and YouTube's removal of videos questioning the 2020 election results.
Republican proposals generally fall into two categories. Some would require platforms to prove their moderation is "politically neutral" to keep Section 230 protection. Others would eliminate the law's protections for large platforms entirely, making them liable for user content unless they adopt a strict hands-off approach.
Texas and Florida both passed laws attempting to restrict platform moderation of political speech. Federal courts blocked both laws from taking effect, and in 2024 the Supreme Court, ruling in Moody v. NetChoice, sent the cases back to the lower courts without a final decision on their constitutionality, while making clear that a platform's curation of user content is editorial judgment protected by the First Amendment.
The Republican position contains an internal contradiction. Platforms moderate content because Section 230 allows them to do so without becoming liable for everything users post. Eliminating that protection would force platforms to moderate more aggressively, not less — because the safest legal strategy would be to remove anything potentially controversial before it results in a lawsuit.
Conservative complaints about platform bias also ignore that the most-engaged content on Facebook is consistently dominated by conservative outlets. A 2021 New York Times analysis found that right-leaning pages generated significantly more engagement than left-leaning ones, and that Facebook's most popular link posts were routinely from conservative sources.
Why Democrats want to change Section 230
Democrats argue Section 230 lets platforms profit from disinformation, hate speech, and algorithmic amplification of harmful content without accountability. Their proposals would narrow the law's protections — particularly for algorithmic recommendations and paid advertising.
The Democratic critique focuses on platform design, not content moderation. The problem is not that platforms remove too much speech, but that they amplify dangerous speech because it drives engagement. Facebook's algorithm promotes conspiracy theories because conspiracy theories keep users scrolling. YouTube recommends progressively more extreme videos because extremism is engaging. TikTok's "For You" page is optimized for addiction, not accuracy.
Senator Mark Warner and others have proposed limiting Section 230 protection for algorithmically amplified content. Under this approach, platforms would still be protected from liability for user posts that appear in chronological feeds — but not for posts the algorithm actively recommends. If YouTube's algorithm recommends a video promoting medical disinformation and someone dies following that advice, YouTube could be sued.
Democrats have also targeted paid advertising. Current law treats a paid ad the same as an organic user post: the platform is not liable for its content. Critics argue this makes no sense. When Facebook sells an ad, Facebook is not a neutral intermediary — it's a publisher selling ad space. Removing Section 230 protection for paid ads would force platforms to vet advertising the way newspapers and television networks do.
Some Democratic proposals would condition Section 230 protection on platforms meeting basic transparency and accountability standards: publishing moderation policies, explaining how algorithms work, and allowing independent audits. Platforms that refuse would lose liability protection.
The progressive position is that Section 230 was meant to protect small websites and emerging platforms from being crushed by litigation. It was not meant to shield trillion-dollar corporations from accountability for design choices that maximize profit at the expense of public health, democracy, and user safety. As Tinsel News has reported, some lawsuits are now bypassing Section 230 entirely by targeting addictive design rather than content.
What would happen without Section 230?
Eliminating Section 230 would not return the internet to some imagined era of free expression. It would make platforms far more restrictive — and would likely destroy smaller platforms entirely.
Without liability protection, every platform would face potential lawsuits for anything a user posts. The rational response would be aggressive pre-moderation: require all posts to be reviewed before going live, ban any content that could plausibly result in litigation, and prohibit user anonymity so platforms can identify who to sue if something slips through.
Large platforms like Facebook and Google could survive this regime. They have the resources to hire tens of thousands of moderators, build sophisticated filtering systems, and absorb the cost of litigation. Small platforms could not. Reddit, Discord, Mastodon, and any startup trying to build a social network would face existential legal risk from day one.
The result would be further consolidation. A handful of massive platforms would dominate because only they could afford the legal infrastructure to operate. The internet would look less like a diverse ecosystem of communities and more like broadcast television: a few giant gatekeepers deciding what speech is safe enough to allow.
Eliminating Section 230 would also make platforms more aggressive about removing controversial speech — the opposite of what many critics want. Platforms would ban political debate, activist organizing, and anything that could result in a defamation claim. The safest legal strategy would be to allow only the blandest, most commercially safe content.
Some critics argue this would be an improvement. If platforms can't handle the responsibility of moderating billions of users, perhaps they shouldn't exist at that scale. Perhaps the internet would be healthier with smaller, more accountable communities rather than globe-spanning monopolies optimized for engagement.
That argument has merit — but eliminating Section 230 is not the way to get there. Antitrust enforcement, data privacy regulation, and algorithmic transparency requirements could break up platform power without destroying the legal foundation that makes user-generated content possible. Removing Section 230 would punish small platforms and startups while entrenching the dominance of the very giants the repeal is meant to constrain.
The real question is not whether Section 230 should exist, but whether a law written for the internet of 1996 should apply unchanged to the internet of 2025. Platforms are not neutral conduits. They are designed systems that make choices about what to amplify, what to suppress, and how to monetize human attention. Treating them as passive intermediaries was always a legal fiction. The fiction is now untenable.
Reforming Section 230 is necessary. Eliminating it would be catastrophic. The challenge is writing rules that hold platforms accountable for their design choices without making it legally impossible to let users speak at all.