
Business Maverick

Business reflection

Crossed Wires: The social media industry finally got bludgeoned in court — Why now?

The claim is not that Instagram or YouTube merely hosted a harmful video or cruel posts. The claim is that they engineered systems intended to keep young users attached.

Illustrative image: In a significant victory for social media users, US juries have found Meta and Google liable for causing social harm, but whether these court judgments will rewire the industry remains to be seen. (Image: Mariia Shalabaieva for Unsplash)

For roughly three decades, America’s social media industry has operated behind one of the most extraordinary legal force fields ever granted to any industry.

It is called Section 230 of the Communications Decency Act, passed in 1996. In plain English, it says that if somebody else posts the words, the platform usually cannot be treated as the publisher or speaker of those words. In statute-speak: “No provider or user of an interactive computer service shall be treated as the publisher or speaker” of content supplied by another party.

In practice, that became the 26-word moat around the modern internet.

Courts repeatedly threw out cases against platforms because the harm was traced, ultimately, to user content, and not to the platform that hosted it.

Andrew Chung noted in a 2023 Reuters article that many lawsuits had been “scuttled” by this immunity, and the Supreme Court in 2023 left Section 230 essentially untouched. (I wrote about Section 230 in August 2025 in a column called The Internet Cesspool.)

Which is why last week’s pair of courtroom blows landed with such force. In Los Angeles, a jury found Meta and Google liable for harming a young woman through the design of Instagram and YouTube, awarding $6-million. In New Mexico, a jury found Meta liable under state consumer-protection law and imposed a $375-million penalty over allegations that its platforms enabled child sexual exploitation and misled users about safety. Those are very different legal cases, but they share the same strategic legal innovation: the plaintiffs did not say, “your users posted bad things” – which had never convinced any judge. They said, “you built a product that works in a dangerously addictive and foreseeably harmful way”.

That shift matters enormously. For years, suing a platform often looked like suing a telephone company because of what somebody said across its pipes. Section 230 stopped that argument cold. But going after a “platform architecture” is different.

Architecture is what the company itself designed. The claim is not that Instagram or YouTube merely hosted a harmful video or cruel posts. The claim is that they engineered systems intended to keep young users attached: infinite scroll, autoplay, recommendation loops, notifications timed to pull them back in, the shiny but devious back-end mechanics of behavioural compulsion.

Social media’s Big Tobacco moment

A number of commentators have compared this to the Big Tobacco moment, when prosecutors secured a $206-billion Master Settlement Agreement against cigarette companies in 1998 – the largest civil litigation settlement in US history. There are indeed echoes in the legal strategy.

The “content” is the toxin in tobacco; the “platform” is the cigarette delivery system. Both tobacco companies and social media companies knew that their products were addictive and that their delivery systems were engineered to optimise that addiction. That is why terms such as “like button”, “infinite scroll” and “algorithmic recommendation” became the ordnance that won the plaintiffs’ cases last week.

The Los Angeles plaintiff, identified in court only as KGM and referred to during the trial as Kaley, is now 20. She told the court that she had started using YouTube at the age of six and Instagram at the age of nine. By the time she finished elementary school, she had posted 284 videos on YouTube. She said that her anxiety and depression began at the age of 10, that she was later diagnosed with both, and that she developed body dysmorphia as a result of her social media use. She also described ceasing to engage with her family because she was spending so much time on the platforms, and testified to spending up to 16 hours in a single day on Instagram.

In the New Mexico case, the state – which was the plaintiff – alleged that Meta misled consumers about safety and enabled child sexual exploitation on Facebook and Instagram. A jury found Meta liable for violating consumer-protection laws and endangering children, with a second phase still to come that could produce court-ordered design changes. The allegations included failures around crime reporting, age verification and broader child-safety systems. This was not a sad-teenager case but a child-exploitation case, which is legally and morally more combustible.

So what happens now? Are we looking at a reforming and rewiring of the social media industry? I would not be too sure.

First, Meta and Google have said they will appeal. Reuters described the Los Angeles case as a “bellwether” for thousands of similar claims, but bellwethers are just trial balloons. A trial jury is one thing. An appeals court, with a cooler eye and a taste for doctrinal tidiness, is another. And the Supreme Court is, of course, another again.

Section 230 lives on

The second reason is Section 230 itself, which is not dead at all. These verdicts sidestep it; they do not repeal it. That distinction is crucial. On appeal, the companies will almost certainly argue that the plaintiffs are trying to perform a legal costume change – dressing up content claims as design claims.

Judges may yet agree. We have seen this film before. In 2023, the Supreme Court had a chance, in a case called Gonzalez v Google, to grapple with “algorithmic recommendations” and Section 230, and instead more or less backed away, leaving the shield intact.

That leaves the current Supreme Court, which looks unlikely to blow up the internet’s legal plumbing in one grand gesture, given its generally conservative makeup. But some justices have plainly shown an interest in narrowing immunity. The 2023 Reuters article also reported that members of the court questioned whether algorithmic recommendations are really just organisation of content. But the same court also displayed real anxiety about where such reasoning would stop. If every feed, ranking system and recommendation engine becomes potential evidence of product liability, then much of the modern internet starts living on borrowed time.

So the real significance of last week’s decisions is not that Meta and Google have been decisively defeated. It is that plaintiffs have finally found a path around the fortress wall. After decades of losing to Section 230, they have stopped charging the ramparts and started tunnelling underneath. Whether the tunnel reaches daylight depends on the appeals courts, and perhaps eventually the Supreme Court. But for the first time in a long time, Silicon Valley’s most familiar courtroom incantation – we are not the publisher – no longer sounds impenetrable.

Kaley’s lawyers called it a “referendum from a jury to an entire industry”. That may be right. But referenda, as we are regularly reminded in US politics, do not always survive the law. DM

Steven Boykey Sidley is a professor of practice at JBS, University of Johannesburg, a partner at Bridge Capital and a columnist-at-large at Daily Maverick. His new book, It’s Mine: How the Crypto Industry is Redefining Ownership, is published by Maverick451 in SA and Legend Times Group in the UK/EU, available now.
