CRAZED NEW WORLD OP-ED

Open letters, AI hysteria, and airstrikes against data centres – why is the Tech Nobility freaking out?

In eight short years the tech industry seems to have moved from hype to hysteria – calling not for further research to advance artificial intelligence, but instead for airstrikes to destroy ‘rogue’ data centres.

Is the further development of artificial intelligence (AI) worth the trouble? On 29 March 2023, in an open letter published on the Future of Life Institute’s website, about 1,800 scientists, historians, philosophers, billionaires and others – let us call them the Tech Nobility – called for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4 […]. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”

In a reaction to this letter, decision theorist Eliezer Yudkowsky wrote that the open letter’s call did not go far enough, and insisted that governments should:

“Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs… Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue data centre by airstrike.”

Calls for such extreme measures against AI are based on the fear that AI poses an existential risk to humanity. Following the release of large language models (LLMs) by OpenAI (GPT-4) and Microsoft (Bing), there is growing concern that further versions could move us towards an AI “singularity” – the point at which AI becomes as smart as humans and can improve itself. The result would be runaway intelligence: an intelligence explosion.

Hypotheses for catastrophes

There are many ways in which this could spell doom for humanity. Proponents of AI doom argue that all of them are unavoidable, because we do not know how to align AI with human interests (the “alignment problem”) or how to control how AI is used (the “control problem”).

A 2020 paper lists 25 ways in which AI poses an existential risk. We can summarise these into four main hypothetical consequences that would be catastrophic.

One is that such a superintelligence causes an accident or does something with the unintended side-effect of curtailing humanity’s potential. The classic example is the thought experiment of the paperclip maximiser: an AI instructed to make paperclips converts all available resources, humanity included, into paperclips.

A second is that a superintelligent AI may strike pre-emptively against humanity, seeing humanity as its biggest threat.

A third is that a superintelligent AI takes over world government, merges all corporations into one “ascended” corporation, and rules forever as a singleton – locking humanity into a potential North Korean dystopia until the end of time.

A fourth is that a superintelligent AI may wire-head humans (as we wire-head mice) – somewhat akin to Aldous Huxley’s Brave New World, where humans are kept pacified, and accepting of their tech-ruled existence, through a drug called Soma.

Issuing highly publicised open letters on AI – like that of 29 March – is nothing new in the tech industry, the main beneficiary of AI. On 28 October 2015 we saw a similar grand public signing by much the same Tech Nobility, also published as an open letter on the Future of Life Institute’s website. That letter did not, however, call for a pause in AI research; instead it stated that “we recommend expanded research” and that the “potential benefits are huge, since everything that civilisation has to offer is a product of human intelligence”.

In eight short years the tech industry seems to have moved from hype to hysteria – calling not for further research to advance AI, but instead for airstrikes to destroy “rogue” data centres.

What is happening?

First, the hysteria surrounding AI has steadily risen to exceed the hype. This was to be expected, given humans’ cognitive bias towards bad news. After all, the fear that AI will pose an existential threat to humanity is deep-seated. Samuel Butler wrote an essay in 1863, “Darwin Among the Machines”, in which he predicted that intelligent machines would come to dominate:

“The machines are gaining ground upon us; day by day we are becoming more subservient to them… that the time will come when the machines will hold the real supremacy over the world and its inhabitants is what no person of a truly philosophic mind can for a moment question.”

Not much different from Eliezer Yudkowsky writing in 2023. That the hysteria surrounding AI has risen to exceed the hype is, however, due not only to human bias and deep-seated fears of “The Machine”, but also to public distrust of AI growing between 2015 and 2023.

None of the benefits touted in the 2015 open letter have materialised. Instead, AI proved of little value during the global Covid-19 crisis, a select few rich corporations have gained more monopoly power and grown richer on the back of harvesting people’s private data, and we have seen the rise of the surveillance state.

At the same time, productivity, research efficiency, tech progress and science have all declined in the most advanced economies. People are more likely to believe the worst about AI, and the establishment of several institutes that earn their living by peddling existential risk feeds the stream of newspaper articles that drives the hysteria.

The second reason for the tech industry’s flip from hype to hysteria between 2015 and 2023 is that another AI winter – or at least an AI autumn – may be approaching. The Tech Nobility is freaking out.

Not only are they facing growing public distrust and increasing scrutiny by governments, but the tech industry has taken serious knocks in recent months. These include more than 100,000 industry job cuts, the collapse of Silicon Valley Bank – the second-largest bank failure in US history – declining stock prices and growing fears that the tech bubble is about to burst.

Underlying these cutbacks and declines is a growing realisation that new technologies have failed to meet expectations.

The job cuts, bank failures and tech-bubble problems compound the market’s assessment of an AI industry whose costs increasingly exceed its benefits.

AI is expensive. Developing and rolling out LLMs such as GPT-4 and Bing requires serious investment: infrastructure costs run into the billions of dollars and training costs into the millions. GPT-4 is rumoured to have 100 trillion parameters, and its total training compute has been estimated at about 18 billion petaflops; by comparison, the famous AlphaGo, which beat the best human Go player, needed less than a million petaflops.
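To get a feel for the scale gap those estimates imply, here is a minimal back-of-the-envelope sketch in Python. The two compute figures are the estimates quoted above, not official disclosures, and the variable names are illustrative:

    # Rough comparison of estimated training compute, using the figures
    # quoted in this article (estimates, not official disclosures).
    GPT4_COMPUTE_PETAFLOPS = 18e9      # ~18 billion petaflops (estimated)
    ALPHAGO_COMPUTE_PETAFLOPS = 1e6    # <1 million petaflops (estimated)

    ratio = GPT4_COMPUTE_PETAFLOPS / ALPHAGO_COMPUTE_PETAFLOPS
    print(f"GPT-4 needed roughly {ratio:,.0f} times AlphaGo's training compute")
    # prints: GPT-4 needed roughly 18,000 times AlphaGo's training compute

In other words, even taking the rumoured figures with a pinch of salt, the training-compute bill has grown by about four orders of magnitude in well under a decade.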

Slow uptake

The point is that these recent LLMs push against the boundaries of what can be thrown at deep-learning methods, and they put sophisticated AI systems out of reach for most firms – and even most governments. Not surprisingly, then, adoption of AI systems by firms in the US, arguably the country most advanced in AI, has been very low: a US Census Bureau survey of 800,000 firms found that only 2.9% were using machine learning as recently as 2018.

AI’s existential risk exists at present only in the philosophical and literary realms. This does not mean that the narrow AI we have cannot cause serious harm – there are many examples of Awful AI – and we should continue to be vigilant.

It also does not mean that the existential risk will never become real some day – but we are still too far from that point to know how to do anything sensible about it. The open letter’s call to “pause” AI for at least six months is more likely a response born of desperation in an industry that is running out of steam.

It is a perfect example of a virtue signal and an advertisement for GPT-4 (called a tool of “hi-tech plagiarism” by Noam Chomsky and a failure by Gary Marcus) – all rolled into one grand publicity stunt. DM

Wim Naudé is Visiting Professor in Technology, Innovation, Marketing and Entrepreneurship at RWTH Aachen University, Germany; Distinguished Visiting Professor at the University of Johannesburg; a Fellow of the African Studies Centre, Leiden University, the Netherlands; and an AI Expert at the OECD’s AI Policy Observatory, Paris, France.
