ARTIFICIAL INTELLIGENCE OP-ED
It’s time to stop giving AI a free pass and bring structure to a chaotic world
Should we be fearful of artificial intelligence and the pace at which it is progressing? Will we lose control over our future or will AI complement and augment human intelligence?
Most of us are familiar with many artificial intelligence (AI) applications, although we may not know when we use them. For example, Google search, spam filtering on email, Netflix film recommendations and Facebook feeds all use AI technology.
While we are nowhere near creating a machine with a mind of its own, called artificial general intelligence (AGI), we are witnessing the emergence of a new era of intelligent machines that can learn, adapt and solve problems independently.
The Turing Test, proposed in 1950 by Royal Society fellow Alan Turing, has long been a benchmark for machine intelligence. It poses the question, “Can machines think?”
The idea is that a machine could be presumed to think if its responses could fool a human into believing they came from another human. In reality, that may say more about human gullibility than it does about machine intelligence.
AI as a technology isn’t intrinsically good or evil. That decision is up to us, the users of the technology. We can use it well, or we can misuse it.
There are risks from poorly designed AI systems, particularly ones pursuing incorrectly specified objectives.
The myth of King Midas illustrates the danger of stating an objective that is not genuinely aligned with what was meant.
The legend goes that King Midas wished, “I want everything I touch to turn to gold,” and he got exactly what he asked for. His wish was the objective he put into the machine, so to speak. As a result, his food, drink and relatives turned to gold, and he died in misery and starvation.
Similarly, nowadays, many people, even experts such as doctors or judges, get information from an AI system and treat it as if it is from a trusted colleague. This unconditional trust in AI is worrying because of how often AI gets it wrong.
AI systems can encode powerful, hidden biases, for example when identifying objects in images. AI researchers pride themselves on the accuracy of their results. However, unless they specifically investigate, researchers and developers may be unaware of the biases in the data fed to their algorithms.
On 22 March 2023, many prominent AI researchers, engineers and entrepreneurs signed an open letter calling for a six-month pause on developing AI systems more powerful than OpenAI’s latest language model, GPT-4. They argued that AI is advancing so quickly and unpredictably that it could eliminate countless jobs and flood society with disinformation, among other nasty things.
However, the letter did not pause AI development or even slow it down to a more measured pace. Instead, companies have accelerated their efforts to build more advanced AI. Elon Musk, one of the most prominent signatories, didn’t wait long to ignore his own call for a slowdown. In July, he announced xAI, a new company seeking to surpass existing AI and compete with OpenAI, Google and Microsoft.
There is a positive side to technological development.
It can help us create more efficient models for solving problems and tackling challenges in new ways to benefit society generally. That is why many researchers worldwide have been working to help develop sophisticated artificial intelligence models to address some of our biggest challenges.
Nevertheless, some believe we have given poorly designed AI systems a free pass for too long.
This has happened before. There was a time when we gave pharmaceuticals a free pass.
There was no Food and Drug Administration or other agency regulating medicines. Hundreds of thousands were killed and injured by poorly formulated or fake medicines. Eventually, over about a century, a regulatory system for medicines was developed. Although it is expensive, most people believe its benefits outweigh the costs.
On 17 May 2023, Sam Altman, then CEO of OpenAI, Christina Montgomery of IBM and New York University professor Gary Marcus testified before the US Congress on regulating AI models.
They unanimously recommended that AI be regulated at a global level, while cautioning that regulation should not slow down development or innovation.
They suggested forming a new agency that would license any effort above a particular scale of capability, could revoke that licence and would ensure compliance with safety standards. They also proposed creating a set of safety standards, for example testing whether a model can self-replicate.
In addition, independent audits of whether a model complies with the stated safety thresholds and performance benchmarks should be compulsory.
And finally, there should be transparency around these models, specifically disclosure of the data used to train the AI model. It is unclear when such regulations will be implemented, if ever.
Alliance with AI
Our entire civilisation, everything that we value, is based on human intelligence.
If we have access to a lot more intelligence, then there’s no limit to what humans can do. Yet, there is a fear that AI systems, once highly advanced, could operate beyond our control, potentially causing unintended harm or making morally problematic decisions.
This uncertainty may tell us that our relationship with AI is just as meaningful as the relative intelligence of the machine itself.
As the use of artificial intelligence grows and spreads throughout society, how do we feel about it making decisions on our behalf? Let us approach this question with a healthy scepticism of these systems and opportunistic confidence. Not with fear.
Calling on everyone to stop innovating, and winding each other up about what may go wrong, seems pointless. Instead, let us focus on our alliance with AI globally.
Being sceptical of AI-sourced information and setting regulations on a global level – sooner rather than later – can help mitigate the risks and bring structure to a seemingly uncontrollable tech world.
Structure gives confidence. DM
Lisa Esterhuyzen is a lecturer in the Department of Business Management at Stellenbosch University. She writes in her personal capacity.