Maverick Life

AGE OF AI OP-ED

Is AI the real threat to our future?

For as long as civilisations have risen, humanity has encountered existential threats. From disease to war, ideology to empire, we have faced collapse and clawed our way back. And yet, this moment feels different. Not because it is, but because of how it is experienced by almost five billion people. On screen.
OpenAI CEO Sam Altman speaks during the OpenAI DevDay event in San Francisco, California, on 6 November 2023. (Photo: Justin Sullivan / Getty Images)

We now carry the voices of conflict and confusion in our very smart pockets. 

We ingest bias, fear and persuasion through glowing rectangles that rarely leave our hands. And as generative artificial intelligence begins to spin out from that same media pool, trained on unmoderated information, mimicking its tones and intentions, the question becomes unavoidable: What is the real threat to our future? 

The technology itself, the media pushing agendas, the men who wield the influence of these channels, the erosion of our moral compass, or the mindsets so easily shaped by algorithms? Perhaps it’s as simple as “climate change”.

Technology, the amplifier

Let us begin with a necessary distinction: AI is not sentient. It doesn’t want, feel or believe. It predicts.

Trained on massive datasets – much of the data user generated, ideologically biased or commercially driven – it reflects the structure of human intention at scale. It is a mirror with momentum.

Just like plastics and fast food, technology began as innovation, and its harm emerged from its unchecked proliferation and industrialisation without systemic foresight. AI is on the same trajectory: its models are optimised not for truth, empathy or equity, but for engagement, efficiency and profitability. Those metrics do not build strong societies. They build rich companies. 

AI, therefore, isn’t the threat. It is the catalyst. What matters more is what it absorbs, who controls it and whether we question its logic before we automate our lives around it.

Media saturation. Machinery of persuasion

In 2021, the Oxford Internet Institute reported that 81 countries use social media to conduct disinformation campaigns, 76 of them with state-sanctioned strategies. This is not a Western anomaly; it is the global industrialisation of manipulation.

Social media platforms have long favoured content that inflames. 

MIT research found that falsehood spreads six times faster than truth on X, formerly Twitter. AI doesn’t just inherit this bias, it repurposes it. Generative models digest headlines, hashtags and human anguish and recompose them in persuasive language, on demand, at scale.

This is not artificial intelligence in the traditional sense; it is synthetic intuition, derived from synthetic content, produced for synthetic outcomes.

In the age of the smartphone, media is no longer consumed. It consumes us, draining attention, diluting discernment and reinforcing worldview through algorithmic loops. When AI is built atop this infrastructure, the crisis is not what it creates, it’s what it normalises.

Men who reign. The cost of power

Much of technology – the platforms, the capital structures and the leadership – is controlled by a narrow demographic: male and elite. Look at current conflicts, or at the tech platforms themselves: this is a structural reality with obvious consequences.

Studies across global markets reveal that male-dominated leadership, particularly among narcissistic executives and politicians, correlates with higher risk-taking, insider trading and value-destructive mergers. 

A 2018 study from the Journal of Financial Economics found that firms with male-dominated boards engaged more frequently in legal grey zones and speculative expansion. 

In contrast, female inclusion on boards tempered volatility and improved regulatory compliance. 

Why does this matter? Because the men who run our tech empires, often lionised for vision and disruption, operate in ecosystems that lack sufficient dissent, empathy or diversity of ethical frameworks. AI’s race to the summit is dearly lacking in all three. 

Altman, Zuck, Musk, your tools are shaping billions of lives. Are you sure you have your hands on the tiller for future generations? Lauded now, but potentially villainised in 50 years? 

In Brazil and Spain, corruption network analyses show 80% to 90% male dominance in political scandals. This is far from a coincidence; it is an ecosystem pattern. When power lacks reflection, and speed outweighs scrutiny, harm scales invisibly until it’s irreversible.

Morality and mindset: Inner OS

Our moral code isn’t hardwired. It’s taught, practised and shaped by culture, family and feedback. Neuroscientific research confirms that moral reasoning is linked to empathy, “theory of mind” and executive function, all of which are eroded by cognitive overload and stimulus saturation.

Mindset plays a central role here. Carol Dweck’s work on fixed-versus-growth mindsets has shown that individuals who believe in learning, humility and adaptability respond better to challenge and ambiguity. Those who fear failure, crave certainty or outsource authority are more susceptible to manipulation.

In the context of media and AI, this becomes urgent. People with low intellectual humility are far more likely to believe misinformation, react with tribalism and resist correction. And yet the global trend, driven by screen time, news fatigue and polarisation, leans towards the erosion of precisely those qualities we need most: patience, reflection and a tolerance for ambiguity.

The smartphone, a turning point

Here’s where my “bias” (read: insight) comes in. After a decade of teaching everyone from Grade 4s to C-suite executives about emerging technology, I think the problem can be traced to the moment access became near universal. 

While many of these threats are ancient – propaganda, corruption, ego, ideology – the delivery mechanism has changed. The smartphone is a wolf in sheep’s clothing: an interface that glues us to media outlets, influencers and politicians, delivering hits of dopamine with every ping, fracturing our attention and anchoring identity to digital performance.

We no longer consume media. We inhale it. So immersed are we, we no longer (care to) ask: “What is this platform trying to do to me?” or “Who benefits if I believe this?” 

These are the questions that thinkers like Tristan Harris, Dr Jonathan Haidt and Persily ask – and yet they are absent from most citizens’ daily digital lives.

AI, trained on this sea of content, now becomes the forecaster of our conditioned impulses. A machine not of pure logic, but of cumulative bias, scripted into synthetic knowledge. The billionaire scriptwriters know what we respond to, pumping content into the air around us. 

The real threat? Automation without inquiry

Some readers may not agree with any of this, which is important. Op-ed writers are, by nature, subjective and provocative. I would be lucky if many readers even made it to the bottom of this article: a five-minute read is painful compared to a 15-second Insta Reel. 

But ask yourself this: If we took social media off smart devices, balanced power equally with women or focused on critical thinking in schools and homes from an early age (around Grade 4), would we see better outcomes?

After 25 years in media and years of teaching in schools, I believe our best hope is that our kids see through the tech, the media and the power puppeteers, and shape challenger mindsets that will give birth to new ways of being and doing.

It would (at the least) produce a new generation of wide-awake voters and savvy media consumers. It would change our current trajectory. Politicians and platform owners wouldn’t want that, I can only assume. DM

Dean McCoubrey is the co-founder of Humaine and MySociaLife.

Comments

Tim Spring Jul 9, 2025, 07:18 AM

The head of Anthropic says nobody really knows how AI works, but now he can read this article and finally understand that it “merely predicts”. This is a sophomoric article, failing to tackle the genuine risks of AI at any level.

de Jul 10, 2025, 09:07 AM

Not sure I wrote this article for the head of Anthropic, but I love that you think he will be able to make good headway afterwards. Thank you. Interestingly, the article was exactly NOT about AGI or superintelligence causing Armageddon, because no one knows, as you rightly say. In fact, it was about the power of narrow minds, powerful men and divisive media, which do not seem to be working out so well for humans, and how we might attend to what got us here.