There is a quiet revolution unfolding – not in silicon, but in sovereignty. And this is not so much about national sovereignty as about consumer sovereignty. The future of artificial intelligence (AI) is being decided not by which company trains the biggest model, but by who owns – and, above all, who uses – the recipe behind it.
At the heart of this shift lies a simple distinction – and a great schism: open-weight versus closed-weight models.
Think of an AI model as a chef’s Beef Wellington. In a closed-weight system, the dish arrives on your plate – perfectly cooked, delicious but still a mystery. You don’t know the ingredients or their proportions or how it was made. That’s ChatGPT, Gemini, Claude: proprietary, polished, profitable. They are first-class travel: expensive, exclusive, highly choreographed.
In an open-weight system, you receive the recipe. The quantities of beef, duxelles, pastry – even the cooking time. But you can tweak them. Or make a vegetarian version. Or even be a Heston Blumenthal and concoct egg-and-bacon ice cream! That’s DeepSeek, Qwen, Llama: transparent, adaptable, free. They are more like business class: smarter, much cheaper yet still high quality and often far better suited to the needs of you, the consumer.
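To make the recipe metaphor concrete: an open-weight release publishes the model's learned parameters, and anyone can load them and nudge them to fit local data. The toy below is a sketch of that basic move – a real model has billions of weights, this one has two, and every number in it is invented for illustration – but the fine-tuning loop is the same idea in miniature.

```python
# Toy illustration of what an open-weight release enables: the
# published parameters are yours to inspect and adjust. A real model
# has billions of weights; this "model" has two, but the fine-tuning
# move is the same. All numbers here are invented for illustration.

weights = {"w": 2.0, "b": 0.5}  # the "recipe": published parameters

def predict(x):
    return weights["w"] * x + weights["b"]

def fine_tune(data, lr=0.05, steps=500):
    """Plain gradient descent on your own data: the 'tweak the recipe' step."""
    n = len(data)
    for _ in range(steps):
        grad_w = grad_b = 0.0
        for x, y in data:
            err = predict(x) - y   # mean-squared-error gradient terms
            grad_w += 2 * err * x / n
            grad_b += 2 * err / n
        weights["w"] -= lr * grad_w
        weights["b"] -= lr * grad_b

# The "vegetarian version": adapt the published weights to local data
local_data = [(1.0, 4.0), (2.0, 7.0), (3.0, 10.0)]  # behaves like y = 3x + 1
fine_tune(local_data)
```

After the loop, the weights have drifted from the published recipe (w = 2, b = 0.5) towards the local one (w ≈ 3, b ≈ 1) – the same adaptation, writ small, that open-weight licences permit at scale.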
This isn’t merely technical – it’s geopolitical. The US leads in closed-weight AI. China leads in open-weight. And the crux isn’t speed or scale – it’s survival. And, as Darwin taught us, it is not the fastest or the most massive that survive, but the most adaptable.
America’s AI giants are betting everything on energy-hungry data centres, billion-dollar chips from Nvidia and near cloud monopolies from AWS and Azure. But electricity is the new oil – and America is running short of this “oil”. While US power prices have surged 27% since 2020, supply has barely risen. Meanwhile, China has built enough renewable capacity to power not just its factories but its entire AI ecosystem – and still keep the lights on at home, all without power prices rising.
China isn’t winning because it has better chips. It’s winning because it must. When the US cut off access to the latest GPUs, China didn’t surrender – it reinvented, adapted. Instead of chasing H100s, Chinese engineers trained state-of-the-art models such as DeepSeek-R1 on older H800s – a mere 2,000 of them – and yet still achieved performance levels that stunned the world. Their secret? Open-weight architecture. By releasing weights – the “recipe” – they let thousands of developers improve the model together, adapting it to fit new circumstances and new uses. A thousand flowers bloomed.
Meanwhile, US firms doubled down on secrecy. They hoarded data, locked models behind paywalls and built moats around their profits. Google and Microsoft now benefit from a “data flywheel”: more users create more data, which makes better models, which attract more users. But this closed loop is becoming brittle. And the one thing brittleness fears most is adaptability.
Enter the muddle-in-the-middle. The binary divide between open and closed is collapsing. In its place rise two new forces: small language models (SLMs) and hybrids.
SLMs – tiny, efficient, embedded on phones and edge devices – are mostly open-weight: Google’s Gemma, Microsoft’s Phi-3, the open-source TinyLlama. These aren’t replacements for GPT-5; they’re lean, adaptable alternatives. They don’t need cloud servers or $20,000 chips. They run on $50 processors. And they’re everywhere.
Hybrids blend both worlds. Some, such as Mistral’s Mixtral 8x22B or Alibaba’s Qwen3, use “mixture-of-experts” tech – activating only the parts of the model a task needs, rather like using WhatsApp without powering up everything else on your phone. Others, such as IBM’s Granite or Baidu’s Ernie Speed, are enterprise-grade hybrids – tailored for banks, hospitals and governments that need privacy, compliance and control.
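That “mixture-of-experts” idea can be sketched in a few lines: a small gate inspects each request and routes it to one specialist sub-network, so only a fraction of the model’s parameters does any work per query. The toy below is purely illustrative – the keyword gate and expert names are invented, and real models learn the routing rather than matching keywords – but the routing shape is the same.

```python
# Toy mixture-of-experts: a gate routes each request to one "expert",
# so only that expert's weights are exercised per query -- the rest
# stay idle, like opening one app without powering up the whole phone.

calls = {"code": 0, "cooking": 0, "general": 0}  # track which experts ran

def expert_code(text):
    calls["code"] += 1
    return f"[code expert] answering: {text}"

def expert_cooking(text):
    calls["cooking"] += 1
    return f"[cooking expert] answering: {text}"

def expert_general(text):
    calls["general"] += 1
    return f"[general expert] answering: {text}"

EXPERTS = {"code": expert_code, "cooking": expert_cooking,
           "general": expert_general}

def gate(text):
    """Crude keyword gate; real MoE models learn this routing function."""
    lowered = text.lower()
    if any(k in lowered for k in ("python", "bug", "compile")):
        return "code"
    if any(k in lowered for k in ("recipe", "bake", "duxelles")):
        return "cooking"
    return "general"

def moe_forward(text):
    # Top-1 routing: activate only the selected expert
    return EXPERTS[gate(text)](text)

answer = moe_forward("How do I make the duxelles for Beef Wellington?")
```

Only the cooking expert runs for that query; the other two never execute. In a production MoE model the “experts” are large feed-forward blocks and the gate typically picks the top one or two of them per token, which is where the inference savings come from.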
These aren’t niche experiments. They’re the future. Indeed, increasingly they are the now. And they’re cheaper. More efficient. More accessible. And, above all, more adaptable.
Wall Street still bets on the incumbent triumvirate: closed-weight LLMs, Nvidia chips and hyperscale clouds. But history warns us: when Microsoft dominated the 1990s with Windows and Office, it thought the game was over and already won. Then came open-source Linux… which begat open-source Android. Today, Android holds 45% of the world’s operating-system market; Microsoft has 27%, mostly on laptops and desktops; Apple has 21%. To its credit, post-2000 Microsoft has survived by adapting its business into completely new areas… including closed-source AI and cloud computing. Out of the frying pan?
Microsoft’s millennial reckoning could yet repeat itself in AI.
Today’s AI titans – Alphabet, Meta, Oracle, Nvidia and, yes, Microsoft – are valued on the belief that proprietary models are the only path to dominance. But what if the next trillion-dollar AI company isn’t built in Silicon Valley?
What if it’s built in Hangzhou – where Alibaba gives away Qwen so freely that even rural clinics in Bolivia can use it?
What if it’s built in Hyderabad – where a start-up fine-tunes a 7-billion-parameter model on local dialects, serving millions who speak Tamil, Telugu or Bengali?
What if it’s built by a university lab in Ghana, using a hybrid model trained on solar-powered edge devices?
AI is no longer about scale. It’s about using adaptability to promote access.
China’s advantage isn’t just its factories or its data – it’s its philosophy. By embracing openness, it turned sanctions into innovation. It adapted. The US responded with gates. But gates rust. Walls crumble… whereas openness multiplies. And adapts.
Even regulation is shifting. The EU’s AI Act demands transparency. California requires risk disclosures. China enforces data sovereignty. None of these laws favours secrecy. But, critically, they all encourage adaptability.
And then there’s energy. If AI becomes the world’s largest consumer of electricity – and we’re already on that path – then efficiency wins. SLMs use 1/1,000th the power of GPT-4. Hybrids slice inference costs by 90%. Cloud computing may become optional. Data centres may become relics.
We are witnessing the birth of a new paradigm: AI as infrastructure, not product. The winners won’t be the companies that sell the most powerful models. They’ll be the ones that give away the best recipes. Because once you open the kitchen, everyone becomes a chef. And when everyone can cook, good meals become universal.
This is why the greatest threat to today’s AI giants isn’t another model. It’s a mindset. In 1991, Linux didn’t outperform Windows on paper. It just didn’t need permission to grow. Today, open-weight models don’t need Nvidia’s latest chip or AWS’s cloud to thrive. They just need access.
And access – unlike capital, code or chips – cannot be easily embargoed. Or taxed. Or owned. But it can be readily shared.
So yes, the valuations of today’s AI titans may one day look as inflated as Microsoft’s did in 1999.
But the real reckoning won’t come from competition among suppliers. It will come from the choices made by consumers. When a farmer in Kenya chooses a free, Swahili-speaking AI model over a $100/month API to analyse flower prices at Aalsmeer, the market shifts. When a hospital in Nigeria uses a Yoruba-fluent hybrid model, trained on its own patient data, to tackle river blindness, the monopoly breaks. When a high-school student in Jakarta modifies an SLM to teach her grandmother Mandarin, the revolution accelerates.
The Aladdin’s Cave of AI isn’t locked. It never was. The world just thought it was.
And just in case you still don’t believe me, universal keys to that cave are being freely handed out to us consumers, one open-weight model at a time. DM
