Last week, the AI Impact Summit was held in Delhi. It attracted many bigwigs, from political leaders to NGO giants to tech CEOs. One of these was UN Secretary-General António Guterres, who said (in probably the most understated quote of the conference), “The future of AI cannot be decided by a few countries or left to the whims of a few billionaires.”
Yes, well, here’s the rub. That’s exactly what is happening, and there seem to be few brakes in sight.
Tristan Harris, head of the Center for Humane Technology, has been writing and proselytising obsessively about these matters for more than two decades: first about social media and now, with even greater urgency, about the dangers of unchecked AI. A statement he made in a recent podcast brings the problem into sharp relief:
“We’re about to live in a world where basically six people are determining the future for eight billion people without their consent. If you talk to the very top AI lab leaders, they’ll say they believe there’s an 80% chance of utopia and a 20% chance that all of humanity gets wiped out. 20%.
“But they say they’re willing to take that bet.
“Did they ask us? Did they ask eight billion people? Do eight billion people know that that’s what they believe?”
The raging and inescapable debate around AI and what it will mean for all of us seems to have got lost in the noise of small things:
- When will hallucinations be eliminated?
- When will AGI (artificial general intelligence) arrive, and how will we recognise it?
- When will the overheated market crash?
- Which LLM (large language model) is the smartest?
- How much capital is being sucked up by AI?
- Will Nvidia hold its lead?
- How much energy will AI data centres consume?
- Will China outpace the West?
Of course, these are not small things; each is vitally important on one level or another. And while the topic of tech billionaires having too much money and power is well trodden, it seems to me that a much darker cloud hangs overhead, one without historical precedent, one that has eluded the best intentions of democracy and may be creating an orthogonal governing system blind to the Enlightenment heritage of civics and society.
Tristan Harris’s quote amplifies this problem — the “consent” problem.
A handful of men are deploying the most powerful cognitive machinery ever assembled. They are not just launching fancy new apps. They are shipping systems that can mediate knowledge, automate judgment, influence attention and increasingly stand between citizens and reality itself. They are building the interfaces through which billions of people will search, write, learn, organise, work, flirt, panic, campaign and vote.
Transformative technology
No referendum was held on deploying these systems across billions of people’s lives. Sam Altman has said explicitly that OpenAI is building “one of the most transformative and potentially dangerous technologies in human history”, and yet the governance framework is a private board. Let that settle: one of the “most transformative and potentially dangerous technologies in human history” is largely outside the remit of public governance.
Historically, emperors and kings had total coercive power within their territories, but limited reach beyond their borders and no ability to shape the cognition of subjects at scale. A Roman emperor couldn’t influence what a merchant in China thought. Musk can influence what 600 million X users across every continent believe, in real time, in their own languages, optimised by algorithmic amplification he controls.
And so the qualitative difference between then and now is scale without borders and influence over cognition. That sounds dramatic, but only because we are still using the old categories. We think in terms of “companies” and “states,” “products” and “policies,” “markets.” Those distinctions made sense when the richest men in the world controlled railroads, oil wells, newspapers, or car factories and when governments were the last word in governance and coercion. It is harder to use those same labels when a private company controls a model that can shape what millions of schoolchildren believe is true.
Why should there be a referendum? Why should any of this be subject to the voice of voters?
History tells us that companies build products and citizens decide whether they are worth the price; the markets decide. Occasionally, the government steps in to say “no poisons”, “no child labour”, “no unsafe conditions for employees”. It can do that because these harms are relatively easy to measure and monitor.
Not this time.
The frontier AI industry is selling something soft and squishy. If citizens are nudged towards a certain worldview, opinion, understanding or action, tracing the nudge back to its source is impossible; that is the nature of the AI “black box” problem, sometimes called the “interpretability” problem. So attempts to keep AI on ethical rails have been hobbled: if you can’t find the source of the trouble, how do you police it? Not to mention that Western ethical rails run on a different gauge from China’s.
It is not clear how this will be resolved: there is more than one fork in the road before we become serfs to a small handful of billionaires.
Let me offer this, then: we need to recover a nearly forgotten idea. If a system shapes the lives of billions, the public has a legitimate claim on how that system is governed. This is not anti-technology; it is anti-monarchy.
Democracy was never designed for machine gods built by a few people. True, it was never designed for industrial monopolies or mass broadcasting either, and it adapted, messily. Perhaps it can adapt again. The question is whether it will do so before the current path hardens into destiny.
If democracy is to survive, we need to put it to the vote. I understand the objections: China! Underinformed voters! The Constitution! Private markets! I have no idea what you would even ask on the ballot paper.
But even so… DM
Steven Boykey Sidley is a professor of practice at JBS, University of Johannesburg, a partner at Bridge Capital and a columnist-at-large at Daily Maverick. His new book, It’s Mine: How the Crypto Industry is Redefining Ownership, is published by Maverick451 in SA and Legend Times Group in the UK/EU, available now.

There’s a compelling need for a referendum on transformative technologies that affect all of humanity. (Photo: Element5 Digital on Unsplash)