This is not a factory in the old sense. It is a future AI data centre, booked out years in advance by a handful of companies that most people did not know existed a decade ago. Inside will be racks of GPUs, rented in multibillion-dollar blocks to labs with names that have become oddly familiar: OpenAI, Anthropic, Google DeepMind, xAI.
Now pan the camera. In Brussels, officials are arguing over the wording of the EU AI Act, trying to pin down obligations for systems that can already write, code, summarise, translate and improvise video.
In Johannesburg and Mumbai, teenagers are using a Google model nicknamed Nano Banana Pro, with prompts shaped by ChatGPT 5.2 (released this month), to turn their faces into candy-coloured figurines and hyper-styled avatars. In offices from London to Lagos, marketers are pasting their brand guidelines into chat windows, asking a patient machine to write campaigns, segment audiences and optimise ad spend.
This is not a normal tech upgrade. It is several upheavals landing at once: computation, media, search, commerce, labour, regulation, identity. A year in which valuations sit at the trillion-dollar mark, data centres are costed like national infrastructure, and the word “intelligence” quietly loses its reassuring human tone.
It is disorienting. And the questions hover: How safe is this? What do jobs look like in a few years? What should my kids study?
This is the year AI went bananas.
From clever chatbot to ambient operating system
Before we dive in, it is important to be clear that this article focuses on generative AI. I say this because AI and machine learning more broadly offer many reasons to be encouraged, and many opportunities for the marginalised, though those opportunities need champions to realise them. AI tutors offering personalised instruction to children with inconsistent access to education. Translation tools lowering barriers for immigrants and cross-border collaboration. Medical systems catching anomalies earlier than tired clinicians. Enormous gains in accessibility for people with disabilities. And in South Africa, where 81% of Grade 4 pupils cannot read for meaning, the right leadership and investment could put those children in one-to-one AI tutoring environments.
Generative AI, though, is where the whiplash happened. When ChatGPT first appeared in November 2022, it felt like a very smart sidekick that lived in a browser tab. Ask a question, get a few paragraphs; hey presto, a quick tap of the wand and it felt like magic. In 2025, that magic compounded so fast that it spun us in dizzying circles.
OpenAI has been shedding its chatbot label, moving into a multimodal ecosystem – image generation (GPT Image), video generation (Sora 2), coding (Codex), browsing (Atlas) and deep research (umm, also known as “deep research”). With new reasoning models, longer context windows and an expanding agent framework, it moved from “type your prompt” to “connect your email, files, calendar, CRM and apps, and let our system orchestrate tasks across all of them”.
Book travel, summarise meetings, follow up on leads, rewrite Excel formulas, draft legal letters, plan a project. Other players followed with their own agent platforms, and the assistant on your screen started to look far more like a colleague. I am invited to beta-test new products quite often; recently I started using Nume, an AI CFO solution that links to Xero.
There is also a nagging reality tucked underneath all this: if software can read everything, write most things and coordinate large parts of your day, how much of your working identity is left untouched? That is not a question any of us should take lightly; we will need to reposition ourselves. As mentioned earlier, we need to think about our kids’ futures too. Last year I suggested my daughter change her university course. After all, our kids are creeping along a broad (and important) learning journey against a backdrop of machines that can deliver niche expertise in a nanosecond. The playing fields are not remotely level.
The year moving pictures stopped needing cameras
Gen AI could already generate photorealistic still images; we were in awe of that just a year ago. In 2025, video properly joined the front line.
In September, OpenAI’s Sora moved from showpiece to tool, capable of stitching prompts into short films and cinematic sequences that would once have required equipment, crews and weeks of post-production. Google’s Veo 3 arrived inside YouTube Shorts. Typing a sentence was enough to generate eight seconds of vertical video with motion, lighting and soundtrack that would not look out of place in a brand campaign.
Runway, Pika, Luma and others raced to give creators finer control: camera paths, character consistency, physics that at least pretend to obey gravity. Meta quietly rebuilt the creative plumbing inside Reels and Instagram.
For creators, this broke an old equation. High quality had always meant high cost or at least high effort. Suddenly, the marginal cost of another piece of video moved alarmingly close to zero. The constraint shifted from budget to imagination and time.
It also broke an old assumption for citizens. For most of the last century, a moving image of an event was strong evidence that something happened. By the end of 2025, that instinct was visibly out of date.
Nano Banana and the aesthetics of unreality
In August, a faintly silly phrase escaped a lab and turned into a global trend. Google’s Gemini 2.5 Flash Image, better known as Nano Banana, let anyone feed in a selfie and turn themselves into a glossy figurine, a vinyl doll, an anime warrior or a runway model.
There is technical sophistication behind the fun. The model runs quickly and cheaply enough to serve millions of users from an ordinary phone. What mattered culturally is that it made AI-mediated identity utterly casual.
Your “real” face became just one version among many. Your digital self blurred into a wardrobe of costumes that could be swapped in seconds. For a young user, this is both playful and expressive. For an older one, it may feel like the ground shifting. When every portrait is editable at that speed, it raises the question of what a face means in public life: in a job application, in a dating profile, in a news report. Visual reality is blurring more than ever before in history.
From 10 blue links to one synthetic paragraph
While many were distracted by avatars, the foundations of the web quietly changed.
Search results stopped being just lists of links. Google’s AI-driven overviews and rival “answer engines” like Perplexity stepped onto centre stage. Ask for travel advice, a technical explanation or a product comparison and the first thing you see is likely to be a synthesised answer written by a model, with source links tucked around it like footnotes.
For ordinary users, this often feels like a gift. Less sifting, fewer tabs, more clarity. For publishers, it is more ambiguous. If an AI can absorb your content, rewrite it and present it without users ever needing to click through, what happens to your economics?
For brands, the game of visibility is changing. It was once about ranking high on a search results page. Now the question is more brutal. Does the AI quote you at all? Do you appear in its synthesis? That shift is giving rise to a new craft: answer engine optimisation (AEO). It is less about keyword density and more about what questions people have, authority signals, structure and machine legibility.
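What machine legibility means in practice is still being worked out, but one concrete tactic is publishing structured data that answer engines can parse without scraping your page layout. Here is a minimal sketch in Python; the schema.org FAQPage vocabulary is a real, widely supported standard, while the retailer and its questions are invented for illustration:

```python
import json

# Invented FAQ content for a hypothetical retailer; the schema.org
# FAQPage vocabulary itself is a real, widely supported standard.
faqs = [
    ("Do you deliver nationwide?", "Yes, delivery takes 2-4 working days."),
    ("What is your returns policy?", "Unworn items can be returned within 30 days."),
]

# Build JSON-LD structured data that crawlers and answer engines can
# parse directly, instead of inferring Q&A pairs from the HTML.
json_ld = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": question,
            "acceptedAnswer": {"@type": "Answer", "text": answer},
        }
        for question, answer in faqs
    ],
}

# The output belongs in the page head inside a
# <script type="application/ld+json"> tag.
print(json.dumps(json_ld, indent=2))
```

None of this guarantees that a model quotes you, but it removes one excuse for it not to.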
The centre of power moves up one layer, from websites to models. The entity that writes the answer stands between people and information in a way that search engines never quite did.
When your browser starts shopping for you
The same logic is creeping into commerce.
Retailers have been using machine learning quietly for years: recommendations, dynamic pricing, fraud checks. In 2025, the experiments moved closer to the customer’s fingertips. AI commerce began to assemble baskets of goods based on natural-language instructions and a blend of profile data, past behaviour and external signals.
Instead of browsing a dozen sites, you can ask an assistant for “a full winter wardrobe for Cape Town, budget under this number, no leather, neutral colours, deliver by Friday”. The agent then compares options, weighs reviews and either proposes a short list or submits the order. Expect much more capability in 2026; this area will expand fast.
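Strip away the marketing and the agent’s first step is mundane: turn the natural-language brief into hard constraints, filter a catalogue against them and rank what survives. A minimal sketch, assuming an invented catalogue and a brief already parsed into parameters (a real agent would also juggle retailer APIs, reviews and payment):

```python
from dataclasses import dataclass

@dataclass
class Product:
    name: str
    price: float          # in rand
    material: str
    delivery_days: int
    rating: float         # average review score out of 5

# Invented catalogue entries standing in for live retailer feeds.
catalogue = [
    Product("Wool overcoat", 2400, "wool", 3, 4.6),
    Product("Leather jacket", 3100, "leather", 2, 4.8),
    Product("Puffer jacket", 1800, "polyester", 5, 4.2),
]

def shortlist(items, budget, banned_materials, max_delivery_days):
    """Apply the brief's hard constraints, then rank the rest by rating."""
    ok = [
        p for p in items
        if p.price <= budget
        and p.material not in banned_materials
        and p.delivery_days <= max_delivery_days
    ]
    return sorted(ok, key=lambda p: p.rating, reverse=True)

# “Budget under R2,500, no leather, deliver by Friday (four days away).”
for p in shortlist(catalogue, budget=2500,
                   banned_materials={"leather"}, max_delivery_days=4):
    print(p.name, p.price)
```

The hard part is everything around this loop: which catalogues the agent is allowed to see, and who pays for a place in them.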
Convenient, certainly. But it puts a new middleman between you and the merchant. That raises hard questions. Which platforms does the agent favour? Which offers does it see or ignore? Can retailers pay their way into its recommendations? (This is coming for sure.) When a bug or a skewed data set directs a flow of purchases towards one vendor, who is accountable?
It is not just about speed. It is about who controls the moment of choice.
Steel, silicon and the electricity bill
None of this appears by magic. It runs on hardware and energy that can no longer be swept under the warm carpet of “the cloud”.
The past year turned data centres into political objects. Projects like Stargate, the proposed collaboration between OpenAI, Oracle and SoftBank, hinted at AI infrastructure on a scale previously associated with airports or large power plants. Cloud providers signed multibillion-dollar, multiyear contracts to guarantee GPU supply for frontier labs. Governments and regulators started to worry (out loud) about grid capacity and climate goals.
There is something slightly surreal in hearing a company speak about “training runs” in the same breath as a regional government speaks about electricity rationing. Yet that is where we are headed if demand rises faster than capacity and policy.
We have spent a few years telling ourselves that AI is weightless. In 2025, the weight started to show.
Consolidation at the top, revolt from below
Above this infrastructure, the software stack began to settle into an interesting pattern.
Enterprise vendors moved quickly. Oracle, Microsoft, Salesforce, SAP and others wove generative assistants into every product line. Instead of selling AI as a separate add-on, they simply turned it into standard functionality. Log into your CRM and a panel suggests which deals to chase. Open your HR system and a summariser offers a draft for feedback reports. Workflows that used to require human triage are now quietly pre-processed.
What looked at first like a binary contest between one or two dominant closed models and a fragmented open hobbyist scene is turning into something more layered. Big suites with embedded AI at the top. Regional and sector-specific models in the middle. Specialist open systems at the edge.
On-device intelligence and the privacy story
In the background, Apple pursued a different path. Its pitch was not “the biggest model” but “your model, near your data”.
By running smaller models on phones, tablets and laptops, and reserving the cloud for heavier lifting behind a privacy curtain, Apple presented AI as an intimate, personal layer rather than a new ad funnel. Translate that call, summarise that message thread, clean up that note, recognise what is in that photograph, all without shuttling every detail to a distant data centre.
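Apple’s actual implementation is private, but the routing pattern is simple enough to sketch. A toy illustration, with both model calls as invented stand-ins; the point is only the split between on-device work and escalation:

```python
def run_on_device(prompt: str) -> str:
    # Stand-in for a small local model; in reality this would run a
    # compact, quantised model on the phone's neural hardware.
    return f"[on-device answer to: {prompt[:40]}...]"

def run_private_cloud(prompt: str) -> str:
    # Stand-in for a larger model behind a privacy layer, used only
    # when the local model is out of its depth.
    return f"[private-cloud answer to: {prompt[:40]}...]"

def answer(prompt: str, needs_deep_reasoning: bool) -> str:
    """Keep simple, short tasks on the device; escalate the rest."""
    if not needs_deep_reasoning and len(prompt) < 500:
        return run_on_device(prompt)
    return run_private_cloud(prompt)

print(answer("Summarise this message thread: ...", needs_deep_reasoning=False))
```

Deciding what “needs deep reasoning” is itself a hard modelling problem, which is one reason the privacy curtain matters as much as the models on either side of it.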
It is not a perfect story, and it certainly does not eliminate cloud dependence, but it marks a clear philosophical split. One future imagines very large models in a few places, with the rest of the world as a set of dependent terminals. The other imagines more modest models spread across billions of devices, each with some autonomy.
Which vision wins will not just be a technical question. It will reflect user trust, regulation, and how citizens feel about their own data after a decade of social media extraction.
Evidence that no longer comforts
The most unsettling change may be the erosion of what counts as proof. Text online has been suspect for years. Images joined it once deepfakes became mainstream, and now we are watching video and audio follow. A convincing clip of a chief executive making a market-moving admission, or a politician delivering a racist rant, or an activist calling for violence, no longer guarantees that any of those events happened. Even Donald Trump circulates AI-generated clips to press a point from time to time.
Banks and governments are updating their verification procedures. Newsrooms are scrambling to stress-test footage. Platforms are experimenting with watermarking and provenance standards. But every time a technical fix appears, creative adversaries get to work on ways around it. Hydra’s heads.
The deeper issue is psychological. For centuries, humans took “I saw it with my own eyes” as the gold standard of evidence. We are being forced, in real time, to replace that instinct with something more forensic. That adjustment is stressful and not evenly distributed.
In a world already hooked on “rage bait” – 2025’s Word of the Year by Oxford University Press – this is not a niche concern. It goes to the heart of whether democratic societies can still agree on a basic picture of reality.
Law trying to catch a bullet train
Into this maelstrom, legislators and regulators are attempting to draw lines.
The EU AI Act is the most visible effort, with a catalogue of risk categories, documentation requirements, transparency duties and enforcement powers. Elsewhere, regulation is emerging through sector rules, privacy law, competition cases and soft guidelines. Industry groups are drafting voluntary codes, some of which are signed with enthusiasm and others politely ignored.
The common feature is lag. Models move faster than institutions. A system that takes a year to agree on wording can be overtaken three times over by the pace of new capability and deployment. That lag is not only procedural. It is conceptual. Much of our legal thinking about responsibility, liability and harm is grounded in a world where identifiable humans take discrete actions; systems that generate and act autonomously at scale strain that frame.
This is why so many of the world’s most respected researchers and policymakers keep returning to the same message: we are not just regulating tools, we are drafting a new theory of agency.
The valuation surge and its shadow
Money poured into the sector at a pace that felt less like investment and more like gravitational attraction, pulling in names as varied as Disney, Amazon and Oracle. Chipmakers broke historic valuation curves. Cloud providers signed decade-long compute supply deals. Frontier labs raised sums once reserved for national infrastructure projects.
At the top, huge valuations made sense if you believed AI would reshape every knowledge-based job, and looked fragile if you assumed a slower productivity payoff. Beneath the frontier labs, hundreds of start-ups rushed to repaint themselves as “AI-enabled”, hoping to surf the wave of capital before it crested.
It is easy to call this a bubble. But bubbles usually involve speculative hope disconnected from underlying capability. In AI’s case, the capability is undeniably real. Models are learning faster than organisations are able to absorb them. That creates market distortion, but also genuine structural change.
This is what makes 2025 feel so unusual. It is neither a clean boom nor a classic bubble. It is a moment where the underlying technology is surging ahead of the human, legal and economic systems designed to contain it.
The human question at the centre
Strip away the noise and one question remains. What is generative AI doing to society?
Alongside the opportunities described earlier, there are reasons to pause. Quiet displacement in administrative roles. Pressure on entry-level creative jobs. A rising dependence on systems that few people truly understand. Emotional bonds forming between users and chat-based systems that feel conversationally human. The subtle erosion of agency when recommendations, drafts and decisions are prepared before we have articulated our own thoughts.
Then there is the deeper psychological layer. We are a species wired for narrative, evidence, rhythm and shared social cues. Generative AI can provide limitless narratives, limitless evidence, limitless cues. It can mirror our preferences back to us with uncanny accuracy. That creates something intoxicating: technology that feels personally attentive. But it also creates a vulnerability: technology that understands our impulses well enough to shape our decisions.
The people worth listening to
With so much noise, you look for voices that hold on to balance. Yoshua Bengio continues to combine optimism about scientific progress with clear warnings about concentration of power and system-level harm. Demis Hassabis blends ambition with a researcher’s respect for uncertainty. Fei-Fei Li anchors the field in human values and civic responsibility. Geoffrey Hinton has become a rare figure: a pioneer willing to say out loud that some of the trajectories worry him. Timnit Gebru and others remind the world that harms do not fall evenly, and that any credible approach must address social and political context, not just benchmarks. Gary Marcus plays the sceptic who insists on robustness and transparency. Some of these voices were the field’s original architects; now they are among its most concerned.
Standing at the edge of the curve
So what do we make of the banana year?
We are living with tools that accelerate at a rate human institutions simply do not match. It is not simply that the guardrails are incomplete. It is that the pace of change is outrunning our collective ability to decide what we want from it.
The challenge for 2026 is not to slow the entire field, nor to surrender to it, but to build the civic, legal and industrial structures that allow human beings to stay central in the loop. To define what kind of augmentation we want. To decide where autonomy ends and accountability begins. To recognise that intelligence without alignment is not wisdom. We failed dismally with social media.
We have built machines that can generate almost anything, except consensus. And machines that can simulate almost anyone, except leadership.
Gen AI went utterly bananas this year. The question for 2026 is not whether AI will change our world; it is how governments and regulators respond – poorly and greedily, I expect, as they harness the capability for power moves and tax revenue – and whether humanity can hold its nerve as the jobs fallout becomes more visible. DM
Dean McCoubrey is the co-founder of Humaine and founder of MySociaLife.com.
A 33-megawatt data centre with a closed-loop cooling system in Vernon, California, on 20 October 2025. A surge in demand for AI infrastructure is fuelling a boom in data centres across the US and around the globe. (Photo: Mario Tama / Getty Images)