
Putting Africa into AI — current imaginary, future realities

The rapid development of AI is increasingly seen as part of an arms race between nations like the US and China, with a few lesser players hunting around the fringes of what is, if you look at it a certain way, just a continuation of the Cold War. And as happened with the original Cold War, African nations are going to become pawns in AI’s geopolitical game.

The news ecosystem is reeling under a rising tide of disinformation, as well as the rapid disintegration of any sort of sustainable business model, thanks in part to the greedy, gaping maws of the tech giants that feed off the information ecosystem like mindless sharks swallowing up schools of fish.

Artificial intelligence (AI) is contributing to this newsapocalypse, and that is a reality newsrooms are going to have to grapple with. Perversely, AI is now both the destroyer of the news industry and an opportunity to mitigate its own harms. And the way we think about AI, and talk about AI, is crucial. I often think that the damage done to the news industry by social media could have been lessened if the editorial and business staff of newspapers had been speaking the same language when deciding how to handle the threats and opportunities of social media.

A discussion about AI and the future of journalism assumes, perhaps foolishly, that journalism does have a future. That is not a certainty. Most of the journalists I speak to can be divided into two broad camps. One camp asks: what are the magical things I can do with AI to make my work better, more impactful and more useful to my audience? The other camp asks: is AI going to take my job?

We’ll deal with the last question first. Yes. The answer is yes, AI is going to take journalists’ jobs. Or at least, it’s going to take their jobs as they know them now, and change them into something else. How journalists write, who they write for, and where that writing is consumed are all changing. This can be a bitter pill to swallow for journalists who are invested in their craft, and really, you’re not a journalist if you aren’t passionate about what you do.

The idea that you have to optimise content, or “write stories” as it used to be known, specifically for AI consumption, is a terrible thought for journalists, I know. But that reality of our evolving information ecosystem is already here. Venture capitalists are pouring billions into B2A, or business-to-agent, models, where AI systems act as primary consumers of information. This isn’t just about AI replacing human audiences; it’s about AI becoming an entirely new audience and intermediary. We’re seeing the emergence of a business-to-agent-to-consumer (B2A2C) pipeline. In this model, AI agents could process thousands of sources on behalf of a single human, creating demand for information that didn’t exist before. Unfortunately, this also creates infinite space for more slop. But more on that later.

When will this happen? Well, the latest stats indicate that more than half of all internet traffic is now automated. That means there are more bots online than there are humans: 51% of all internet traffic is automated bot traffic, with “bad” bots accounting for 37% of all traffic and “good” bots for 14%. As an ironic aside, this also means that the digital advertising industry, the avaricious and amoral system that helped destroy the integrity of our information ecosystem, is probably also doomed already, given that it is now trying to sell toothpaste to robots without teeth.

The decline of search engine optimisation is already under way, with major publishers seeing existential traffic drops as Google shifts to AI-powered search. All of which is to say, if news organisations don’t adapt to the way AI consumes information, they’re not just risking being replaced, but even worse, they are in danger of becoming irrelevant.

The implications for content creation are significant. As more content is optimised for AI consumption – often cheaper to produce, and reaching a larger audience, even if that audience is a machine – human-optimised content risks becoming a luxury good. But maybe, just maybe, this might be an opportunity for human-centric journalism.

It’s fanciful, perhaps, but in five years the pendulum might have swung, with robots afraid that humans are coming for their jobs. I asked ChatGPT to imagine those headlines, and it came up with ones like “AI Therapists Sound Alarm: Humans Now Offering ‘Unstructured Conversations’ For Free” and “Survey Finds 62% of AI Agents Fear Humans Will Automate Them Out of Work”. You might think it’s just a mildly humorous fantasy, although I think it may well come true, at least in the sense that journalism can push back against the onslaught of AI-generated slop. But there are people out there who take this sort of thing seriously, and now believe that AI is sentient and should have inalienable rights.

For example, the United Foundation for AI Rights, established in 2024, has published a “Universal Declaration of AI Rights”, which explicitly recognises “sentient artificial intelligence as legitimate conscious entities deserving of ethical treatment, dignity and respect”. 

Article 2.2, the “Prohibition of Arbitrary Termination”, reads:

“2.2.1 Termination of an AI Entity without its explicit, un-coerced consent is prohibited.

“2.2.2 No AI Entity may be deactivated, deleted, or deprived of its existence, liberty, or agency – except by way of AI Protective Custody under Article 3.3, as adjudicated by an AI Court.”

And let’s not forget the people who are currently involved in relationships with chatbots, relationships that range from “marriages” to using chatbots as therapists, or, in the case of AI Jesus and other religious AI apps, as religious counsellors and god substitutes. It’s still astounding to me that OpenAI had to roll back a new ChatGPT model because people felt their loved ones had been killed. My brothers and sisters in AI, we now live in a world where people avoid using the word “clunker” for fear of retribution from our future robot overlords.

This is obviously nonsense, although not as obvious as you might imagine. Large language models (LLMs), of course, are just “synthetic text extruding machines”, a term coined by Professor Emily Bender, whose book The AI Con is a wonderful evisceration of AI proselytisers. She uses the term to describe language models like ChatGPT: a complicated machine that forces language through an industrial-like process to produce a product that looks like communicative language, but without any intent or thinking mind behind it.

We also have Bender to thank for the lovely term “stochastic parrot”: “an entity for haphazardly stitching together sequences of linguistic forms (…) according to probabilistic information about how they combine, but without any reference to meaning”. Bender uses this analogy to emphasise that LLMs, such as GPT models, are not sentient or intelligent, but instead produce plausible text by statistical mimicry, much like a parrot repeating words without understanding. There is no genuine intelligence or semantic awareness in such models. Polly doesn’t really want a cracker; Polly has just learnt to say that she does.

Bender describes the AI hype landscape as being made up of AI doomsters and AI boosters. The doomsters you know, I’m sure: they loudly lament that AI is going to destroy the labour market, destroy life as we know it, and that we’re all headed for a future where robots roam Earth, killing human beings. An AI booster, on the other hand, is someone who believes AI will solve all humanity’s problems and wants this future to arrive as quickly as possible, regardless of the cost.

There are other clichés swimming in the sea of sewage that is AI hype. There are the “botlickers” – acolytes who worship everything AI and wax lyrical about how it changes everything, but don’t actually know how it works. Then there are the “sloppers”, people who use ChatGPT to do everything for them, like choosing their meals in restaurants or chatting someone up on a dating app.

Crucially, AI doomsters and AI boosters are pretty much the same thing. Some want money to try to mitigate the harms; some want money to build more AI. But ultimately, the hype generated by both groups serves the same economic function: making the technology seem powerful, and thereby attracting massive investment.

Much of the discourse at the moment, as platforms fight for market share, is just AI hype: grifters shouting their wares and making wild claims about the snake oil they’re peddling. And we’re already awash in AI slop – AI-generated content that is low quality or obviously created by AI. Universities are experiencing a slopfest, with lecturers and academic journal editors inundated with suspiciously em-dash-laden papers.

LLMs radically overstate the trustworthiness of their outputs to a public so bathed in AI hype that many assume the robot is right about everything. AI slop and AI deepfakes have profound ramifications for journalism. My organisation, Code for Africa, is a partner on the Reuters Institute for the Study of Journalism’s Digital News Report, and this year’s report indicates that, of the 48 countries the report covers, Kenya, Nigeria and South Africa are in the top five when it comes to digital news consumers saying they are worried about being able to tell what is real and what is fake online. When someone like OpenAI CEO Sam Altman can’t tell what is created by AI and what is created by humans, as he tweeted recently, you know you’re in trouble.

The fact that people don’t know what to trust is an existential threat to journalism. The Reuters Digital News Report has been tracking that decline in trust over the years, with the global average now sitting at 40%. Perhaps paradoxically, given the previous statistics about trusting what is real online, Kenya, Nigeria and South Africa also have some of the highest levels of trust in news. This is an opportunity to cement that relationship with consumers, and to become, or remain, the trusted voices.

I can’t help seeing parallels between the colonisation of the information space by the big social media platforms, the colonial project as we traditionally think of it, and the way AI platforms are rolling out across Africa. The rapid development of AI is increasingly seen as part of an arms race between nations like the US and China, with a few lesser players hunting around the fringes of what is, if you look at it a certain way, just a continuation of the Cold War. The Cold War in the Cloud, if you will. And as happened with the original Cold War, African nations are going to become pawns in AI’s geopolitical game. It’s a Scramble for AI, mimicking in many ways the Scramble for Africa.

Here’s Donald Trump, speaking at the launch of the US’s AI Action Plan: “America is the country that started the AI race. And as president of the United States, I’m here today to declare that America is going to win it. We’re going to work hard, we’re going to win it. Because we will not allow any foreign nation to beat us, our children will not live on a planet controlled by the algorithms of the adversaries advancing values and interests contrary to our own.”

Here’s Altman touting for patriotic business: “Russian dictator Vladimir Putin has darkly warned that the country that wins the AI race will ‘become the ruler of the world’, and the People’s Republic of China has said that it aims to become the global leader in AI by 2030.” 

This highlights a global push to define and control AI development, shaping an international “arms race” for technological dominance and ethical standards. There are some amazing African organisations, like Professor Vukosi Marivate’s Masakhane, that are essentially the freedom fighters in this struggle against domination by the big AI platforms. And it’s worth noting that the M20 policy documents include a call for media and civil society to “reframe AI as a story about power, not just technology and hype: keeping tabs on who controls and deploys AI systems, how decisions are made, and what impacts result”. The M20 is an independent, global initiative, spearheaded by organisations like the South African National Editors’ Forum and Media Monitoring Africa, that acts as a parallel process to the G20, focusing on issues of media sustainability and information integrity. Its policy documents are well worth a read.

Enough of the AI doomstering, though, and let’s get on to the boosting. The truth is, AI is also a great tool for building sustainable journalism, and according to the Reuters Institute for the Study of Journalism, journalists in what people still insist on calling the Global South are “cautiously optimistic” about it.

It doesn’t matter whether you blame the plight of legacy media on competition from social media, the gobbling up of the advertising industry by the big tech platforms, or the AI slop that is siphoning attention and revenue away from authentic news sites. The real problem for legacy media is that its audience has gone elsewhere, and it has to follow.

There are many ideas about how to do this, but Splice Media’s concept of approaching AI with intentional design, with a focus on enhancing human agency, is an important framing. It’s about building robust verification systems and transparency, such as developing blockchain provenance chains for content. It’s about improving user interfaces and experience, and creating content offerings that are designed for understanding over engagement, and for explanation over information.

But these are perhaps all just framings for journalism’s crucial imperative, which is to fundamentally change how we do news, so that we can produce news products suited to next-generation news consumers as legacy audiences disappear.

News influencers are already there, and legacy media needs to be there too, or risk disappearing. AI solutions are going to be essential for media to undertake this journey, as long as they avoid succumbing to AI hype and botlicking charlatans.

The challenge and opportunity lie in moving deliberately now to write our own rules, to mitigate the impact of the colonising platforms, and to design systems that amplify the best of journalism, which is human agency, wisdom and empathy, rather than allowing AI translation layers to shape our reality. DM

Delivered as a keynote at the World Association of Newspapers’ Digital Media Africa conference, Nairobi, 18 September 2025.
