Section 230 Wasn’t Written for This

Do you remember what the Internet was like in 1996? Slow. Difficult to navigate. Random. Text-based and small. A few hundred thousand websites, dial-up modems, online forums where users typed messages to each other. That was the year Congress wrote a law called Section 230, to ensure that a small online forum could not be sued out of existence every time a user posted something defamatory. It made sense in 1996.

The law was not written to cover today’s social media. Today’s social media companies have built recommendation engines, tuned them to keep people on the platform, and aimed them at children. Those same companies ran internal studies to determine exactly what the engines were doing to kids’ mental health, restricted the worst findings to a group of 66 employees, allocated zero new safety funding the following year, and kept going.

In the three decades since Section 230 was enacted, courts have stretched the statute well beyond what Congress intended. The clause meant to keep small forums from being treated as publishers of every comment posted on them was interpreted, again and again, to immunize a platform from liability for any harm its moderation and curation choices produced. The reading was always a stretch. It looks even stranger now that the choices in question are made by recommendation systems shaping what billions of people see every day.

A few members of Congress are starting to address this. Last November, Senators Mark Kelly (D-AZ) and John Curtis (R-UT) introduced a bipartisan bill called the Algorithm Accountability Act, with a House companion from Representatives Mike Kennedy (R-UT) and April McClain Delaney (D-MD). It would amend Section 230 to impose a duty of care on companies that use recommendation-based algorithms: a duty to design, train, test, and operate those algorithms so they don’t produce foreseeable bodily injury or death. It would also give injured users a clear path to federal court, even when the platform’s terms of service require mandatory arbitration. The bill is a starting point. What it gets right is putting the recommendation algorithm at the center of the legal question, where it now belongs.

Two juries this spring agreed that design is the right frame. So did the Third Circuit federal appeals court last August, in Anderson v. TikTok, a case in which a ten-year-old died imitating a viral challenge that TikTok’s algorithm had pushed to her. The court held that when an algorithm is the thing doing the recommending, the platform is the speaker, not just the host. Section 230 doesn’t immunize a company from speech the company itself is making.

Here is where the platform-defender argument quietly contradicts itself.

In 2024, the same industry argued the opposite position to the Supreme Court. Texas and Florida had passed laws trying to regulate content moderation, and the industry’s trade group, NetChoice, went to court arguing that algorithmic curation is the platform’s own First Amendment-protected speech: its expressive product, its editorial judgment, its constitutionally protected voice. The Court largely agreed, in Moody v. NetChoice. The platforms got the win.

Then families started suing those same platforms over those same algorithms. Suddenly the companies argued that the algorithms weren’t their expressive speech anymore; now they were neutral hosting tools, indistinguishable from a comments section, and Section 230 applied. The companies are trying to have it both ways. If algorithmic curation is speech worth First Amendment protection, it is also speech the company is responsible for. The Third Circuit caught the trick. A New York appellate court, in Patterson v. Meta, ruled the other way. The disagreement is going to keep moving up the courts, because the central question has shifted from Section 230 immunity to product design.

If this argument feels familiar, it should. Every time an industry faces product liability for the first time, the same script plays out. Tobacco companies said liability would kill the corner store. Auto manufacturers said crashworthiness liability would make cars unaffordable. Pharma said opioid liability would chill drug development. Chemical companies said pollution liability would destroy manufacturing. In every case, the courts managed to draw lines: between reasonable design choices and defective ones, between foreseeable harm and unforeseeable harm, between companies acting in good faith and companies that suppressed their own safety research. A small forum running a chronological feed is not going to face the same liability as a company that ran internal studies documenting what its product was doing to teenagers and chose to bury them. Courts have a century of doctrine for telling the difference.

I spent more than a decade at Facebook. I argued some version of the open-internet defense myself, and I believed it. What changed my mind was something quieter: quarterly goals to hit, deadlines to chase, debates over which feature mattered most. The rhythms were ordinary, and we all recognized them. What was harder to live with was watching that ordinary work get used to defend choices that hurt the people on the other end of the product. Most of us went there to build something good. The trouble was that the system rewarded us for working like mercenaries when the people using the product needed caretakers. When you spend long enough inside that gap, "the open internet" starts to sound like a phrase the company reaches for whenever it happens to align with the bottom line.

The worry about platform-accountability law is not crazy. Badly drafted liability has a long history of hurting small operators while leaving the giants untouched. People who fear a clumsy regulatory backlash over platform harm are worried about something real, and that worry deserves an answer. The answer is to insist that the distinctions be drawn carefully: between hosting and product design, between negligence and willful concealment, between a small community forum and an engagement-optimization system that knows what it is doing to children. Courts can do that work. Two juries this spring just did.

This week in Santa Fe, a judge will begin deciding whether the New Mexico verdict becomes a redesigned product or another line item that Meta absorbs and forgets. A federal bellwether trial begins in California in June, with school districts as plaintiffs. More than forty state attorneys general have lawsuits pending. The legal map of platform accountability is being redrawn in real time.

You don’t have to be a lawyer to follow this. You just have to know that when someone tells you Section 230 protects every decision a platform makes, and that any new liability is the end of the open internet, they are using a 1996 statute to answer a question nobody was asking until very recently. The courts that have looked closely at the question have begun to give a different answer.