

AI-pocalypse Now: What happens when AI tells you ‘I’m sorry Dave, I’m afraid I can’t do that’?


Clinton Nortje is a content professional, specialising in creation, syndication and licensing. He spent many years hosting radio talk shows, building feature bureaus and likely had some involvement in at least one piece of content that you have watched, read or heard.


“Your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.”

Dr Ian Malcolm’s wisdom rings truer today than ever. Granted, when he said this, he was referring to the folly of recreating dinosaurs from DNA found in fossilised mosquitoes. Still, I say that we ignore the lessons of 1990s adventure movies at our peril.

We have created velociraptors, and we are excited about it.

Five years ago, I watched keenly as Google Assistant made an appointment for a haircut using an artificially generated voice and responded in real time to inputs from the salon receptionist. It was exciting and impressive.

Set against today’s AI landscape, that demo is a Lego house next to the Burj Khalifa.

The phrase is overused, but in this case it holds true – the possibilities that AI presents are effectively endless.

Thinking long term, one would be hard-pressed to find a single AI-proof job. Perhaps initially, the output quality will be unpredictable in creative endeavours, but based on the current rate and scale of improvement, this will not be the case for much longer.

Those seeking to dismiss AI as a fad or something that will never be able to replace human intelligence will, of course, find ways to highlight shortcomings in AI (of which there are many), but often they fail to draw a comparison to the failures of human intelligence and a human workforce.

Take one often-cited example: bias in AI.

Sure, AI has biases based on the dataset on which it has been trained. So do humans. I am, however, willing to venture that addressing bias in AI, while difficult, will prove easier in the long term than addressing bias in humans.

Humans need to overcome genetic hardwiring and constantly be aware of the tricks our mind plays on us if we have any hope of overcoming our respective biases. Making ourselves aware of what causes bias is helpful, and over time we can train ourselves to be mindful of these traps, but this is an individual endeavour. We never change the operating system. Each generation starts from scratch and has to overcome it again.

With AI, we can amend the code, and once amended, the change becomes part of its DNA; it will act consistently within those parameters – even though acting consistently within those parameters may not always yield predictable results (for now).

If we can agree that AI is here to stay and that its impact on the world will be as transformative as the internet’s (if not more so), then we need to start asking important questions about what this world will look like. Currently, conversations around AI are hopelessly focused on the short term. Questions like:

  • How can I use AI to improve my product?
  • Who owns the copyright to AI content?
  • How do we improve AI accuracy and address bias?

These are important questions, but they will be in the rearview mirror before you can say: “Open the pod bay doors, HAL.”

We have the benefit of looking back at the expert analysis of how the internet and mobile phones would change our lives, and can see how being locked into our current way of doing things blinds us to the actual transformation that takes place.

In the early days of the internet, concerns were raised about it being an elitist product that would only be available to those who owned a computer and a phone, which, in the early 1990s, was a hefty expense. Thirty years later, 65% of the world’s population has access to the internet. While we focused on the equitable distribution of game-changing technology, no real thought was given to the dangers presented by the ability to anonymously disseminate information globally in seconds.

Assuming that the dangers we think AI presents today are the actual dangers it will, in fact, present in 20 years, would be repeating the same mistake. It is time to peer through the fog and ask a few questions that may seem outlandish today.

Who owns the wealth generated by AI?

Businesses will become as reliant on AI services as they have been on the internet. The key difference is that no one owns the internet, while AI models are proprietary and owned by companies. This raises the question of how the current free-market landscape will be affected.

In a future world where a handful of AIs have the capacity to perform every imaginable task or service, how will the free market be affected?

Initially, businesses focused on the short term will incorporate AI into their services, making their product more competitive and more economical. Once a saturation point has been reached, where will the owners of AI technology be able to turn to maximise profits?


It is a small leap from providing diagnostic services to the medical industry to merging with a medical company and becoming a diagnostic service. Fast-forward across multiple sectors, and you have a handful of AI owners effectively controlling all products and services. Only now the target market has no means of generating its own wealth, and the economy as we know it is effectively concentrated in a handful of companies.

This somewhat dystopian scenario raises the question: Who should own the wealth generated by AI? Frameworks need to be written now, before too much money is at stake and heavily invested short-term interests take precedence over wise long-term decisions.

Should specific industries be prohibited from introducing AI into their workflows?

Some integrations can’t be undone. Designing the next SUV doesn’t have the highest stakes, but what about government roles that could easily be “outsourced” to AI?

Current AI models perform exceptionally well when doing financial analysis. Could we outsource the Budget to AI? Why not all policy generation?

We could clear the court backlog if one AI acted as an attorney and another as a judge, programmed to use the laws of that country.

Watching ChatGPT wrestle with the trolley problem and come to different conclusions each time assures me that we aren’t there yet … but we will get there.

The question isn’t whether AI can perform this task, but rather what we should do when it can.

Is an unemotional, cheap and quick legal system worth the price of removing humanity from the process? Is removing humanity from the process even a price?

What about human/AI integration?

Something that should be the exclusive realm of science fiction writers is no longer such a stretch. A certain South African-born billionaire has been happily developing neurological chips, ostensibly to combat blindness and paralysis. While human testing has been prevented until now, what decision do we make when concerns around the safety of such testing are adequately addressed?

The benefits of having a microchip on board, constantly monitoring vital signs and looking for disease markers, would be a game-changer in public health. MIT researchers recently showed how an AI model identified a high risk of breast cancer four years before it developed. The impact on health can, quite simply, not be overstated.

What about the ability to access information immediately without having to externalise the search? Not as clear-cut.

The bottom line is that, while the opportunities and benefits of AI can be revolutionary, we can’t let the excitement and wonder blind us to the pitfalls.

AI has the potential to be an existential threat (see the aforementioned 1990s adventure movies), and we need a critical mass of people to start thinking in this way before we find ourselves asking questions that have already been answered by unmanaged progress.

Legislators, sadly, lack the understanding of the technology to properly legislate this frontier, and those who understand the technology are too blinded by the opportunity to pause for thought.

We are in the Wild West, and we need a sheriff. DM


Comments

  • Bruce Q says:

    I’m old enough to remember the public panic caused by the imminent arrival of the PC. How the Work Force would be decimated by this ‘techno beast’. Jobs would be lost! Livelihoods would be destroyed. Family life would perish. Humanity would end!
    How much more wrong could we have been?
    And let’s be honest here…
    I’m thinking that AI would NO DOUBT do a better job than our politicians.

    • Patrick O'Shea says:

      Well said. I remember those days too, and hearing the predictions that paper would disappear almost completely 🙂 and trying to get in on the ground floor of the computer industry, only to be told by ICL to find a company with a computer and try to get a job in the “Computer Department” LOL

  • Pieter Malan says:

    I can’t wait for AI to manage municipal and government functions. The greedy incompetent mismanagers will become Political dodos.
    AI will serve the people and not have political ambitions
