When Spain’s Prime Minister Pedro Sánchez stood at the World Government Summit in Dubai and declared that social media had become “a failed state, a place where laws are ignored and crime is endured”, he wasn’t being dramatic. He was being accurate.
And then he did something almost no other world leader has done: he backed the words with action, announcing legislation to hold platform executives criminally liable for failing to remove illegal content, and to criminalise the algorithmic amplification of harmful material.
The response was predictable. Elon Musk called him a tyrant. Silicon Valley’s lobby machine cranked into gear with well-worn warnings about censorship and free speech. But here’s the thing: Sánchez isn’t proposing anything radical. He’s proposing that the laws every democracy already has should apply online, too. And he’s right.
The root of all evil
The origin of this mess is a 29-year-old American law called Section 230 of the Communications Decency Act. Passed in 1996, when fewer than 40 million people were online and more than half the US Senate didn’t have an internet connection, it granted platforms blanket immunity from liability for content posted by users. The thinking was borrowed from telephone regulation: AT&T isn’t liable if someone uses a phone to plan a crime, so why should an internet platform be liable for what its users post?
In 1996, that analogy made (some) sense. The early internet was a series of message boards and static web pages. Content appeared chronologically. Nobody’s algorithm was deciding what you saw.
But here’s what a telephone company can’t do: take a call between two people planning a violent attack and broadcast it to a million other people who, based on their behavioural profile, might be in the market for a little napalm for breakfast.
That is precisely what social media algorithms do. Facebook’s own leaked internal research showed that 64% of users who joined extremist groups did so because Facebook’s recommendation tools directed them there. The telephone analogy simply doesn’t hold up.
When TikTok’s recommendation engine decides, from a pool of millions of videos, to push a specific snippet into your feed, that is an editorial decision. It’s akin to an editor of Daily Maverick deciding which story goes on the home page or the front page of a newspaper – just done at a speed and breadth unimaginable to us.
The US Supreme Court recognised this in Moody v NetChoice (2024), acknowledging that algorithmic curation constitutes “editorial judgment”. And yet platforms face none of the legal consequences that every newspaper, broadcaster and online news publisher accepts as the cost of the power to publish.
If Daily Maverick published defamatory content, we’d be in court if a complainant wanted to forgo the free and quick recourse offered by the Press Council of South Africa. If we amplified child sexual abuse material to thousands of readers, we’d face criminal prosecution.
But Meta did exactly that – hosting multiple WhatsApp channels distributing more than 1,000 explicit images and videos of South African schoolchildren to about 600,000 followers. It took a South African law firm, the Digital Law Company, going to the Gauteng High Court to force Meta to act, but only after an urgent contempt-of-court application was heard. And only after Meta’s South African representative and other colleagues were named in the application.
These are the lengths victims have to go to, and they are simply not affordable or available to everyone. We need an accountability sword hanging over platforms’ heads to motivate action, not just an urgent court order.
The free speech argument
Whenever liability is raised, the free speech defence arrives on cue. But this argument is flawed. It is also one my colleagues at the South African National Editors’ Forum and other media organisations made in their submissions to South Africa’s Competition Commission Market Inquiry.
They are right to worry about the implications for free speech, but wrong to argue that holding platforms criminally liable is not the best, and last remaining, remedy for the minefield we find ourselves wandering through.
Defamation law has coexisted with free speech for centuries. Crimen injuria, the offence of unlawfully and seriously violating someone’s dignity, has been on our statute books for decades. Our Constitutional Court, in Khumalo v Holomisa (2002), established clearly that the right to free expression must be balanced against the right to dignity. None of these legal remedies has destroyed free speech. They have shaped it into something accountable, and that is what needs to be replicated in the online world.
And let’s dispense with the fiction that platforms can’t moderate content at scale. They already do, but only when it suits them. YouTube removed 179 million videos over six years. When the US Attorney General asked Apple to pull the ICEBlock immigration app from its store in October 2025, Apple complied within hours. TikTok blocks users from even sending the word “Epstein” in direct messages. After the Christchurch massacre, Facebook deployed automated systems that blocked 80% of 1.5 million attempted re-uploads of the shooter’s video within 24 hours.
So platforms can act swiftly when governments demand it or when their reputation or personal freedom is on the line. What they won’t do voluntarily is act for ordinary people. If you’re an individual targeted by a disinformation campaign, you’re looking at legal wrangling and tens if not hundreds of thousands in legal fees just to get defamatory content removed.
Big tech’s official position on personal injury? Get a court order. The asymmetry is staggering, if unsurprising: instant action for the powerful, bureaucratic stonewalling for everyone else.
South Africa’s Competition Commission, through its Media and Digital Platforms Market Inquiry, has already done much of the analytical groundwork. Its November 2025 final report made numerous findings and proposed remedies to rebalance the digital advertising sector, which includes the digital information space.
It recommended establishing a social media ombudsman to help alleviate some of these issues. These are good recommendations. But an ombudsman without legal teeth is a suggestion box, and many wasted years will pass before we arrive at the same conclusion: only criminal liability for inaction against illegal content will force proactive action. The key is to prevent egregious material from being promoted and ruining the lives and reputations of innocent people in the first place, even if a legal response follows much later.
The South African landscape
The tentacles of Section 230 have slithered everywhere. South Africa’s Electronic Communications and Transactions Act (ECTA) of 2002 – also enacted before social media exploded onto the scene – has become a barrier to platform accountability.
The ECTA relies on three ageing pillars that protect platforms from liability: Section 73 exempts providers who act as neutral conduits for data. Section 78 states that platforms are not required to actively police content or hunt for illegal activity. And Section 77 limits liability provided that the platform removes content after receiving a formal notice. But neutrality ends when algorithmic promotion begins.
Because the firehose of modern content production is so massive and ever-increasing, this will only get worse. The burden lies with the victim to chase down large, obfuscating tech companies. This allows platforms to remain passive bystanders to digital harm until an external party intervenes, which is simply unworkable in today’s digital landscape.
If blocking illegal content outright scares the free speech brigade too much, there is a spectrum of interventions available. For example, a post deemed very likely to be illegal could still be published but not promoted until a further assessment, possibly human, takes place. What we want to avoid is illegal content being promoted to millions of people, and that one remedy alone would make a world of difference.
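To make the idea concrete, here is a minimal sketch of such a “publish but don’t promote” gate. It is an illustration only: the classifier, the review queue and the 0.9 threshold are hypothetical placeholders, not any platform’s actual system.

```python
# Minimal sketch of a "publish but don't promote" gate for likely-illegal content.
# All names and the threshold are hypothetical placeholders, not a real platform API.

from dataclasses import dataclass

ILLEGALITY_THRESHOLD = 0.9  # assumed cut-off for "very likely illegal"

@dataclass
class Post:
    post_id: str
    content: str
    eligible_for_promotion: bool = False

def handle_new_post(post, classify_illegality, queue_for_human_review):
    """Decide whether a new post may enter recommendation/amplification feeds."""
    score = classify_illegality(post.content)  # assumed classifier returning 0.0-1.0

    if score >= ILLEGALITY_THRESHOLD:
        # The post can remain visible on the author's own page, but is withheld
        # from algorithmic promotion until a human reviewer clears or removes it.
        post.eligible_for_promotion = False
        queue_for_human_review(post, reason=f"illegality score {score:.2f}")
    else:
        post.eligible_for_promotion = True

    return post
```

The point is not the code but the principle: amplification, unlike mere hosting, can be made conditional on a check before it happens.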
Every country already has laws defining illegal acts: defamation, hate speech, incitement, fraud, child exploitation. The principle is simple. Apply them online as we do in the physical world. If your algorithm promotes illegal content to millions of people, you should also carry liability for that promotion, just as any broadcaster or publisher would. That threat will inspire better detection and correction methods more effectively than anything else.
Yes, it will be messy. Yes, it will be expensive. But these are companies that collectively generated more than $500-billion in revenue last year and have figured out how to make computers think and reason better than most humans.
Meta alone posted $62-billion in net profit in 2024. ByteDance, TikTok’s parent, is on track for $50-billion in profit. These are companies that have taught computers to drive cars, predict protein structures and generate photorealistic video from text.
If they can deploy AI to space, they can build systems to stop promoting child abuse material and extremist content. They have the capability. What they lack is the legal obligation and will. They owe it to humanity to be more responsible about how illegal content is handled on their platforms.
Only regulators can fix this
Spain is showing the way, going further than even the European Union’s Digital Services Act by introducing criminal liability for executives and criminalising algorithmic manipulation. We are past the point of voluntary codes and polite requests. Only legal liability and meaningful fines will change behaviour.
South Africa needs to follow suit to protect our information sphere. Trusted and reliable information does not rise above the sewage of AI slop and defamatory claims on its own.
Sánchez put it well: “Our determination is greater than their pockets.”
It has to be. Because the cost of inaction to our democracies, our children and our information ecosystems is one we can no longer afford. DM
Styli Charalambous is the CEO and co-founder of Daily Maverick.