
There is a peculiar joy in witnessing a trillion-dollar disagreement unfold in real time. On one side are the techno-optimists, utterly convinced that artificial intelligence is the next electricity and that we are building the substrate for a future of miracles. On the other side are sceptics who see “irrational exuberance” (a phrase coined during a different mania) — one likely to crash the world’s financial system when reality finally bites. How you answer the question “Are we in an AI bubble?” depends largely on how you define a bubble, how rapid the deflation will be, and whether it will have been worth it anyway.
Tech companies are projected to spend around $400-billion (R7-trillion) this year on AI infrastructure — more than the inflation-adjusted cost of the entire Apollo moon programme, except instead of unfolding over a decade, it’s happening every 10 months. Amazon, Microsoft, Google and Meta alone plan to pour around $325-billion into AI in 2025, up from roughly $230-billion in 2024. That’s an increase of more than 40% in a single year.
To put this in perspective: during the dotcom frenzy, telecom companies invested more than $500-billion in fibre-optic cables between 1996 and 2001, largely on the back of internet promises. In today’s dollars, that’s not too far from what we’re seeing in AI investment — except this is happening much faster, in a more concentrated period, and there seems to be no end in sight. Various forecasts bandy about annual investment reaching a trillion dollars by 2030.
Then there is this eyebrow-raising observation: the amount of capital allocated to AI build-out (data centres and their power requirements, GPUs, software development and deployment, etc.) now accounts for 40% of US GDP growth and 80% of growth in US stocks. In short, the US economy is currently pretty much entirely propped up by AI spending.
That’s a sobering thought even for those drunk on optimism.
It reminds one of the internet, circa 2000, right? But here’s the twist: the dotcom bubble didn’t build the internet. The over-investment was actually the catalyst that made it all worthwhile. It left behind a vast physical and digital foundation — “dark fibre” that would later light up and carry Netflix binges, remote work, online banking and everything we now casually call “the economy”. The bubble burst, but the infrastructure only grew more valuable.
The optimists argue that AI is now in its own necessary over-building phase, and the pay-off will arrive — slowly, then all at once. Productivity will surge. Every knowledge worker will gain a digital assistant that eliminates drudgery. R&D cycles will compress. Science will accelerate. Even the global energy transition might get help from AI-optimised efficiency.
The productivity gains could be astonishing. McKinsey and other analysts forecast trillions in incremental annual value from generative AI alone once the systems mature and are integrated into everyday business operations. The notion that these are mere parlour tricks fades when you see CFOs’ delight at reduced headcounts and faster cycles. (A recent report finds that 92% of companies plan to increase their AI investments in the next three years, signalling strong forward momentum.)
The true zealots — the Marc Andreessen and Jensen Huang crowd — are less concerned with corporate productivity, as nice as that would be. Their optimism is based on a grander vision: that this is the most important human invention of all time. It is no less than the birth of a knowledge machine, and in time, a wisdom machine. There can be no price on it, and no upper bound to spending.
Seen this way, the spending is an investment in prosperity, not a speculative folly.
A darker script
The pessimists, however, have a darker script. It goes like this: enterprises play with AI pilots but struggle to scale them. Organisational resistance is obstinate and immovable. Power grids can’t keep up. Costs refuse to fall. Regulatory brakes tighten. Margins stay stubbornly thin. Model “intelligence” improvements stall, and the underlying inference wizardry hits a wall.
And we wake in 2030 to the news that the world is awash in spare graphics processing units (GPUs) looking for something meaningful to do — the great “GPU glut”. The infrastructure still exists, of course. It just becomes a slightly embarrassing monument to premature exuberance. Financial historians will give it a punchy label, and business school case studies will follow.
Both outcomes are plausible. And both may happen at once: a short-term valuation hangover followed by long-term triumph, just like the internet. The real risk isn’t that AI is meaningless — it’s that we’ve priced its value too early.
So are we in a bubble? Probably. But that isn’t necessarily a condemnation. Bubbles are often what it looks like when humanity prepares for a step change. They are capital markets’ way of saying, “We’re not sure yet — but build it anyway.” If AI fulfils even half of its promise (or even just finds a cure for cancer), future generations will thank us for the excess.
And if it doesn’t, at least we’ll have plenty of GPUs, fancy data centres and their nuclear-fuelled power supplies to use for other stuff — the world’s most expensive reminder that innovation rarely moves in straight, sensible lines. DM
Steven Boykey Sidley is a professor of practice at JBS, University of Johannesburg, a partner at Bridge Capital and a columnist-at-large at Daily Maverick. His new book, It’s Mine: How the Crypto Industry is Redefining Ownership, is published by Maverick451 in SA and Legend Times Group in the UK/EU, available now.
(Image: reve.art)