

This article is an Opinion, which presents the writer’s personal point of view. The views expressed are those of the author/authors and do not necessarily represent the views of Daily Maverick.

The great schism — Where does AI go from here?

We are witnessing the birth of a new paradigm: artificial intelligence as infrastructure, not product. 

There is a quiet revolution unfolding – not in silicon, but in sovereignty. And this is not so much about national sovereignty as about consumer sovereignty. The future of artificial intelligence (AI) is being decided not by which company trains the biggest model, but by who owns – and, above all, who gets to use – the recipe behind the model.

At the heart of this shift lies a simple distinction… and a great schism: open-weight versus closed-weight models.

Think of an AI model as a chef’s Beef Wellington. In a closed-weight system, the dish arrives on your plate – perfectly cooked, delicious but still a mystery. You don’t know the ingredients or their proportions or how it was made. That’s ChatGPT, Gemini, Claude: proprietary, polished, profitable. They are first-class travel: expensive, exclusive, highly choreographed.

In an open-weight system, you receive the recipe: the quantities of beef, duxelles and pastry – even the cooking time. And you can tweak them. Or make a vegetarian version. Or even channel Heston Blumenthal and concoct bacon-and-egg ice cream! That’s DeepSeek, Qwen, Llama: transparent, adaptable, free. They are more like business class: smarter, much cheaper, yet still high quality and often far better suited to your needs as a consumer.
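To make the recipe metaphor concrete, here is a minimal Python sketch of what receiving the "recipe" means in practice, using the open-source Hugging Face transformers library. The model ID is just one illustrative open-weight checkpoint; any other open-weight model would do, and this is a sketch of the principle rather than any vendor's recommended workflow.

```python
# Sketch: with an open-weight model, the weights themselves can be
# downloaded, inspected and modified. The model ID is illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-0.5B-Instruct"  # an open-weight "recipe"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The "ingredients" are yours to inspect...
total_params = sum(p.numel() for p in model.parameters())
print(f"{total_params / 1e6:.0f}M parameters, all visible and editable")

# ...and to tweak: fine-tune, quantise, prune, or merge with other models.
prompt = "Rewrite Beef Wellington as a vegetarian dish:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```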

This isn’t merely technical – it’s geopolitical. The US leads in closed-weight AI. China leads in open-weight. And the crux isn’t speed or scale – it’s survival. And, as Darwin taught us, it is adaptability that beats the fastest and the most massive.

America’s AI giants are betting everything on energy-hungry data centres, billions of dollars of Nvidia chips and the near-monopoly clouds of AWS and Azure. But electricity is the new oil – and America is running short of this new “oil”. While US power prices have surged 27% since 2020, supply has barely risen. Meanwhile, China has built enough renewable capacity to power not just its factories but its entire AI ecosystem – and still keep the lights on at home… all without power prices rising.

China isn’t winning because it has better chips. It’s winning because it must. When the US cut off access to the latest GPUs, China didn’t surrender – it reinvented, adapted. Instead of chasing H100s, Chinese engineers trained state-of-the-art models such as DeepSeek-R1 on older H800s – a mere 2,000 of them – and yet still achieved performance levels that stunned the world. Their secret? Open-weight architecture. By releasing weights – the “recipe” – they let thousands of developers improve the model together, adapting it to fit new circumstances and new uses. A thousand flowers bloomed.

Meanwhile, US firms doubled down on secrecy. They hoarded data, locked models behind paywalls and built moats around their profits. Google and Microsoft now benefit from a “data flywheel”: more users generate more data, which yields better models, which attract more users. But this closed loop is becoming brittle. And the one thing brittleness fears most is adaptability.

Enter the muddle-in-the-middle. The binary divide between open and closed is collapsing. In its place rise two new forces: small language models (SLMs) and hybrids.

SLMs – tiny, efficient, embedded on phones and edge devices – are mostly open-weight: Google’s Gemma, Microsoft’s Phi-3, the Llama-derived TinyLlama. These aren’t replacements for GPT-5; they’re nimble, easy-to-use adaptations. They don’t need cloud servers or $20,000 chips. They run on $50 processors. And they’re everywhere.
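As a rough illustration of how little hardware an SLM needs, here is a hedged sketch of running a quantised small model entirely on a laptop or single-board CPU, using the open-source llama-cpp-python bindings. The model file name is a placeholder for any downloaded open-weight SLM in GGUF format, not a specific product.

```python
# Sketch: on-device inference with a small open-weight model.
# No cloud, no GPU; a few GB of RAM on a modest CPU is enough.
# "phi-3-mini-q4.gguf" is a placeholder for any quantised SLM file.
from llama_cpp import Llama

llm = Llama(model_path="phi-3-mini-q4.gguf", n_ctx=2048, n_threads=4)

result = llm(
    "Summarise in one sentence why open-weight models matter:",
    max_tokens=64,
    temperature=0.7,
)
print(result["choices"][0]["text"].strip())
```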

Hybrids blend both worlds. Some, such as Mistral’s Mixtral 8x22B or Alibaba’s Qwen3, use “mixture-of-experts” technology – activating only the parts of the model a task needs, much as you open a single app rather than run every app on your phone at once. Others, such as IBM’s Granite or Baidu’s Ernie Speed, are enterprise-grade hybrids – tailored for banks, hospitals and governments that need privacy, compliance and control.
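The “mixture-of-experts” idea is easier to see in code than in prose. Below is a deliberately toy PyTorch sketch, not Mixtral’s or Qwen’s actual code, of a router that sends each token to only the top two of eight expert networks, so most of the model’s parameters stay idle on any given input.

```python
# Toy mixture-of-experts layer: a router scores all experts per token,
# but only the top-k (here 2 of 8) are actually run. This illustrates
# the principle, not any production model's implementation.
import torch
import torch.nn as nn

class ToyMoE(nn.Module):
    def __init__(self, dim=64, n_experts=8, top_k=2):
        super().__init__()
        self.router = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
             for _ in range(n_experts)]
        )
        self.top_k = top_k

    def forward(self, x):                                   # x: (tokens, dim)
        scores = self.router(x).softmax(dim=-1)             # (tokens, n_experts)
        weights, chosen = scores.topk(self.top_k, dim=-1)   # best experts per token
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = chosen[:, slot] == e                  # tokens routed to expert e
                if mask.any():
                    w = weights[mask, slot].unsqueeze(-1)    # routing weight per token
                    out[mask] += w * expert(x[mask])
        return out

tokens = torch.randn(5, 64)       # five token embeddings
print(ToyMoE()(tokens).shape)     # torch.Size([5, 64]); only 2 of 8 experts ran per token
```

Production models do this at far larger scale, which is roughly why a model with tens of billions of parameters can answer with the compute cost of a much smaller one.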

These aren’t niche experiments. They’re the future. Indeed, increasingly they are the now. And they’re cheaper. More efficient. More accessible. And, above all, more adaptable.

Wall Street still bets on the incumbent triumvirate: closed-weight LLMs, Nvidia chips and hyperscale clouds. But history warns us: when Microsoft dominated the 1990s with Windows and Office, it thought the game was over and won. Then came open-source Linux… which begat open-source Android. Today, Android accounts for roughly 45% of operating systems in use worldwide; Microsoft has 27%, mostly on laptops and desktops; Apple has 21%. To its credit, Microsoft has survived since 2000 by adapting its business into completely new areas... including closed-source AI and cloud computing. Out of the frying pan?

Microsoft’s millennial reckoning could yet happen to AI. 

Today’s AI titans — Alphabet, Meta, Oracle, Nvidia, and yes Microsoft — are valued on the belief that proprietary models are the only path to dominance. But what if the next trillion-dollar AI company isn’t built in Silicon Valley?

What if it’s built in Hangzhou – where Alibaba gives away Qwen so freely, even rural clinics in Bolivia can use it?  

What if it’s built in Hyderabad – where a start-up fine-tunes a 7-billion-parameter model on local dialects, serving millions who speak Tamil, Telugu or Bengali?

What if it’s built by a university lab in Ghana, using a hybrid model trained on solar-powered edge devices?

AI is no longer about scale. It’s about using adaptability to promote access.

China’s advantage isn’t just its factories or its data – it’s its philosophy. By embracing openness, it turned sanctions into innovation. It adapted. The US responded with gates. But gates rust. Walls crumble… whereas openness multiplies. And adapts.

Even regulation is shifting. The EU’s AI Act demands transparency. California requires risk disclosures. China enforces data sovereignty. None of these laws favours secrecy. But, critically, they all encourage adaptability. 

And then there’s energy. If AI becomes the world’s largest consumer of electricity – and we’re already on that path – then efficiency wins. SLMs use 1/1,000th the power of GPT-4. Hybrids slice inference costs by 90%. Cloud computing may become optional. Data centres may become relics.

We are witnessing the birth of a new paradigm: AI as infrastructure, not product. The winners won’t be the companies that sell the most powerful models. They’ll be the ones that give away the best recipes. Because once you open the kitchen, everyone becomes a chef. And when everyone can cook, good meals become universal.

This is why the greatest threat to today’s AI giants isn’t another model. It’s a mindset. In 1991, Linux didn’t outperform Windows on paper. It just didn’t need permission to grow. Today, open-weight models don’t need Nvidia’s latest chip or AWS’s cloud to thrive. They just need access.

And access – unlike capital, code or chips – cannot be easily embargoed. Or taxed. Or owned. But it can be readily shared.

So yes, the valuations of today’s AI titans may one day look as inflated as Microsoft’s did in 1999. 

But the real reckoning won’t come from competition among suppliers. It will come from the choices made by consumers. So when a farmer in Kenya chooses a free, Swahili-speaking AI model over a $100-a-month API to analyse flower prices at Aalsmeer, the market shifts. When a hospital in Nigeria uses a Yoruba-fluent hybrid model, trained on its own patient data, to tackle river blindness, the monopoly breaks. When a high school student in Jakarta modifies an SLM to teach her grandmother Mandarin, the revolution accelerates.

The Aladdin’s Cave of AI isn’t locked. It never was. The world just thought it was.

And just in case you still don’t believe me, universal keys to that cave are being freely handed out to us consumers, one open-weight model at a time. DM

Comments

John Cartwright Sep 22, 2025, 03:53 PM

Excellent and fundamental distinction. The US's exclusivist corporate obsession is coming back to bite them, and this is good news for democracy, social justice and plain effectiveness.

Michael Power Sep 22, 2025, 06:20 PM

Thank you, John. There is a (small) part of me that worries that when this gargantuan contraption that is US AI starts to unravel, there may be worldwide fall-out and a lot of innocent bystanders may be negatively impacted. But in the end I think the price - whilst costly - will be worth paying for the reasons you state.

kanu sukha Sep 25, 2025, 12:33 PM

Would we not have to thank the self-proclaimed 'stable genius' in the White House for his extraordinary contribution to the situation? And the many not-so-stable geniuses who buy into the 'colonialist' mindset. And we thought colonialism was in the past .. like the KKK!

Michael Power Sep 30, 2025, 08:46 AM

Civilizations die from suicide, not by murder. Arnold Toynbee

Hari Seldon Sep 22, 2025, 05:42 PM

Great analysis. The market isn't worth trillions of dollars with cranky, unreliable, unintelligent LLMs. ChatGPT is only marginally better than DeepSeek. Investors are going to lose a lot of money when the bubble bursts.

Michael Power Sep 23, 2025, 05:36 AM

I fear you are right, Hari... it will not be pretty. And try Qwen; far better than ChatGPT-5...

Johan Buys Sep 23, 2025, 08:43 AM

Imho the real problem is that AI of any kind cannot generate the sort of returns that the spending now coloured "AI investment" would demand. Spend runs about $300 billion per quarter, call it a trillion per year. Since 2025's AI will be obsolete by 2027 and investors want a return, the 2025 AI should be generating cash-backed incremental operating profits of more than $1 trillion. That is impossible - a dozen Googles' worth of profits from AI????

Michael Power Sep 23, 2025, 11:35 AM

Johan. Spot on 100%. This is the Red Queen Dilemma: the evolutionary arms race where species must constantly evolve to keep up with their rivals, yet they don't necessarily improve their overall fitness. Translated into the world of AI this reads: the likes of Nvidia must constantly improve their offering to keep ahead of their rivals, yet they don't necessarily improve their own ROIs. Eventually the Red Queen Dilemma means a company can trip over its lackluster ROI...

Johan Buys Sep 23, 2025, 06:58 PM

As much as I like a nice conspiracy theory or pending investor disaster, I think the core of this issue is plain old PR. The same trillion that was spent on big datacenters and other boring plain old IT in 2024 is now being coloured "invested in AI". Back in the '90s we built neural networks that ran on a Compaq 486 PC to optimise VERY large and complex supply chain models (automotive and pharmaceutical). We called it clever, not artificial. HKGK though

Dagmar Timler Sep 23, 2025, 09:25 AM

You hit the nail on the head - the US tried to restrict China but under-estimated them and their capacity to innovate. I am mildly concerned about the doom AI could unleash on the world, so I support some kind of regulation - but I think the powers that be need to think smarter about it because AI is not controlled as easily as nuclear power. And I actually wonder if the tech oligarchy could be the problem, causing mass unemployment and gross inequality.

Michael Power Sep 23, 2025, 12:01 PM

Dagmar, I too have a sense of trepidation about what AI means for us all. And this is far FAR more than the usual SHOCK OF THE NEW unease. I also worry about the tech oligarchy's motives though I take some comfort from the fact that the trends I herald may lead to their "come-downance".... The tech oligarchy inequality is already there...but I do not think it is sustainable. I am reminded of the salutary tale that, in the 1940s, Simón Iturri Patiño - a Bolivian tin miner - was said to be the richest man in the world. It did not last. Aluminium came along. "Sic transit gloria mundi"!!!

Hari Seldon Sep 23, 2025, 04:21 PM

Dagmar, I don't think AI is even remotely close to AGI. Sam Altman is a snake oil salesman. Scaling of LLMs has just about reached its limit. Despite this, an LLM (like ChatGPT) is totally incapable of RELIABLY booking your holiday for you or even planning your travel route. LLMs simply cannot perform complex tasks that require planning, staying on task, and an understanding of the physical world. I don't see major job losses to AIs based on LLMs. Hence the trillion-USD-plus market is not there.

Dagmar Timler Sep 24, 2025, 08:24 AM

Hari, you are right that scaling LLMs will hit a ceiling and we can't just keep adding chips, but there is still plenty of room for optimisations, and specialised SLMs are proving more powerful than LLMs (as described in Michael's article). Regardless, I always defer to Amara’s law - “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” When Facebook emerged in 2004, who would have thought we'd have computers in our pockets and be getting our news through social media? I hope I am wrong about the job losses.

kanu sukha Sep 25, 2025, 12:45 PM

Are we going to get AI to sweep our streets, pack the supermarket shelves (not warehouses, where automation seems to be taking over), fix my leaking tap or shower or loose cupboard door, or broken window pane, etc?

Dagmar Timler Sep 26, 2025, 11:29 AM

kanu, I don’t think anyone is claiming every job will vanish. My point is that even partial automation - as we’ve already seen in factories, warehouses, and call centres - can still cause major shifts in employment and inequality. So rather than list exceptions, I think the bigger question is how we manage those changes fairly.

Johan Buys Sep 26, 2025, 08:56 AM

In the last few weeks, OpenAI is raising a few billion that would value the entire company at over $500 billion. I’d love to see how that valuation is reached. Also in the last week or so, Anthropic paid $1.5 billion to settle a court case over its ‘learning’ from copyrighted material. That precedent opens up a tens-of-billions court case between the NYT and OpenAI, which repeated copyrighted material verbatim as its own - to PAID clients. That is fraud, not just theft.