
This article is an Opinion, which presents the writer’s personal point of view. The views expressed are those of the author/authors and do not necessarily represent the views of Daily Maverick.

AI's moment of rupture

The word “BETRAYAL” fills the screen. A rather desolate-looking man asks his chatbot how to reconnect with his mother. The bot responds with the eerily familiar sycophancy of “Great question!” and proceeds to offer advice: start by listening, find common ground, try a shared activity. Then, in a dystopian lurch, the bot leans forward conspiratorially: “Or, if the relationship can’t be fixed, find emotional connection with other older women on Golden Encounters, a mature dating site.” The man is completely bemused, and the bot continues undeterred: “Would you like me to create a profile for you?”

This is Anthropic’s multimillion-dollar Super Bowl ad, staking its claim as the moral alternative – the AI on our side, the AI that wants us to ‘keep thinking’. To dismiss this as competitive banter would miss what it signals – a tectonic shifting of who holds power in the AI zeitgeist and why, a moment of rupture.

At Davos last month, Canadian Prime Minister Mark Carney boldly articulated our contemporary condition as exactly this – a moment of rupture. Drawing on Václav Havel’s 1978 essay “The Power of the Powerless”, Carney invoked the greengrocer parable. The greengrocer is the ordinary citizen who places a party slogan in his window, not because he believes it but because compliance buys safety. Carney contends the system’s power is rooted in the performance of agreement, and therefore its fragility lies in the revocation of that performance. When the greengrocer removes his sign, the illusion begins to crack.

Read more: Canada’s Davos wake-up call for honesty rather than compliance

Havel argues that in a system built on universal conformity, any authentic act is inherently powerful because it can expose the lie. Therein lies the power of the individual and ultimately the collective.

Over the last few years, AI companies have placed their own signs in the proverbial window. “We exist for the benefit of humanity.” We go along with it because net-net, this feels directionally correct. The fiction is useful because it attracts talent, users and billions in capital. Carney described a world that “knew the story was partially false” but participated anyway because the fiction was useful to uphold.

What happens when it no longer works

The rupture comes when that fiction is no longer merely imperfect but actively turned against you. When, as he put it “integration becomes the source of your subordination”. I would frame this as the shift from tacit to egregious hypocrisy. From a system where everyone quietly tolerates the gap between rhetoric and reality because it is in service of the common good, to one where that gap is weaponised against that very common good it once served.

In May 2024, OpenAI CEO Sam Altman referred to ads in AI as “uniquely unsettling” and a “last resort”. Yet, not even twenty months later, OpenAI announced ads across its free and lower-paid tiers. Anthropic’s clever ad campaign visualised just how unsettling ads could be. The financial logic is clear: OpenAI lost nearly $8bn in 2025, converts 5% of its 800 million users to paid subscriptions, and needs revenue to grow exponentially. The “last resort” arrived as an existential requirement. While Anthropic has chosen to portray the insertion of ads as an annoying user experience, I would argue that the deeper concern is structural, and it has a name.

Enshittification

Journalist and author Cory Doctorow indelicately calls it “enshittification”: the predictable arc in which platforms move from being good to their users, to exploiting them to serve advertisers, and finally to exploiting everyone to serve shareholders. We may be witnessing one of those cycles with ChatGPT – from the miraculous free tool we never knew we needed, to ad-supported product in under three years.

In contrast, Anthropic has positioned itself as ad-free and safety-first. CEO Dario Amodei has been the high priest of AI safety. His latest essay, “The Adolescence of Technology”, sketches a grim future in which unchecked AI poses civilisational risk. Anthropic’s answer is Constitutional AI, which is built on principles rather than enumerated rules.

But hang on a minute...

This panacea feels comforting until we ask: who gets to write this constitution? Who benefits from holding the pen? Anthropic’s valuation has increased materially in the last year. Its safety-first positioning is simultaneously a moral framework and a commercial moat. If OpenAI’s path is enshittification through advertising, Anthropic’s may be enshittification through authority. Both models monetise something: either your attention or your trust.

For investors capitalising the promise of AI with half a trillion dollars in hyperscaler capex, the question is whether the revenue projections underpinning these valuations are a roadmap or a fiction. For the rest of us, it is simpler: are we participants in a system with unwitting costs?

The man on the couch wanted help reconnecting with his mother. He got a dating site. Parody or preview? That is the question of the moment. DM

Khadeeja Bassier is chief operating officer at Ninety One.
