ARTIFICIAL INTELLIGENCE OP-ED
To pause AI or not to pause AI? That is the question — we are at a transformative moment in human history

The emergence of artificial intelligence and the possible development of artificial general intelligence represent a transformative chapter in human history. The ‘Pause AI’ letter, with its call for a collective pause in AI progression, raises fundamental questions about the future of AI and the ethical responsibilities that accompany its development.
We are living in a moment in which anyone consuming media of any type is getting a healthy weekly ration of artificial intelligence (AI). Amid the seemingly constant headlines announcing stunning technological breakthroughs, several voices of alarm have also sounded, most recently that of Geoffrey Hinton, one of the so-called “fathers of AI”, who recently resigned from Google.
The publication of the so-called “Pause AI” open letter – signed by scientists, technological pioneers and entrepreneurs – has added fuel to the debates currently raging, with some hailing the letter’s warnings as overdue and others labelling it a misguided and counterproductive overreaction to AI’s progression.
And so we ask, practically and ethically, what are the stakes and implications of an AI research pause?
The emergence of AI: From the dream of passing the Turing Test to large language models
The foundations of AI were laid in the mid-1930s by several early computer scientists, most notably Alan Turing and his celebrated concept of the “general-purpose computer”. Turing’s groundbreaking abstraction, the Turing Machine, laid the foundation for programmable computers: those with functions written by users rather than manufacturers.
The machine was the catalyst for conversations theorising about a potential future age in which computers would match or surpass humans in what were then thought to be uniquely human skills. In other words, the age of what we have come to call artificial intelligence.
Marking a crucial (if still hypothetical) milestone in defining human-like intelligence in machines, the “Imitation Game” or “Turing Test” was born. That is, the idea that a machine might be iteratively improved to achieve proficiency in human-like reasoning to the extent that a third party could not distinguish between human and machine responses.
In the 1950s, Christopher Strachey and Dietrich Prinz developed the first successful AI programmes, enabling machines to play board games like checkers and chess. Since then, computer storage and processing capabilities have increased near-exponentially.
In parallel to this rapid advancement in microprocessor architecture was the development and expansion of the internet, which facilitated global communication among computers and collaboration among their users – thus continuously expanding available digital data, and leading to conversations about “big data”.
The amalgamation of powerful, parallel computing and the availability of big data allowed the development of machine learning, a subfield of AI. Previously regarded as a largely theoretical concept, machine learning (which often relies on large datasets to “train” algorithms) paved the way for advanced AI applications, which today can comfortably pass the Turing Test as originally envisaged.
Other, more recent breakthroughs came in 2016, with Google Brain’s research on “Neural Machine Translation”, and again in 2017, when transformers (a type of deep-learning architecture) demonstrated the potential of such models to generate human-like language translations.
These achievements led to the development of tools like OpenAI’s GPT series which, through simple-to-use interfaces like ChatGPT (built on the GPT-3.5 model), are revolutionising the utility and popularity of AI applications as we speak.
Built on deep-learning algorithms and the transformer architecture, so-called “large language models” (LLMs) aim to mimic, in a rudimentary and abstract way, the workings of our brain’s synapses and cognitive processes. Through this infrastructure (and using the copious amounts of data available for processing), LLMs can in some cases return accurate and relevant responses to complex questions asked in natural language (as opposed to computer code).
The result? Vast numbers of people are adopting these models for everyday use – ChatGPT gained over 100 million users in its first two months – all of them also helping to train future models.
Should we be concerned?
While social media feeds brim with stories of ChatGPT’s (or other LLMs’) impressive feats, funny mistakes or disturbing hallucinations, the companies behind these models are focused on a goal far beyond clever chatbots, crafty homework-solvers or supercharged search engines: theirs is the pursuit of artificial general intelligence (AGI).
AGI refers to a (currently hypothetical) AI capability matching or exceeding human intelligence. In other words, an artificial agent capable of understanding and learning any cognitive task, and able to take action autonomously to improve itself. No longer limited to a specific human-assigned task or set of tasks, such an agent would be comparable to, or “smarter” than, a human.
As recently as the early 2000s, the timeline for such a technology was understood to be long. Cognitive scientists such as Ben Goertzel, writing in 2007, suggested that AGI would arrive in a century. More recently, the futurist Ray Kurzweil suggested that a date before 2045 was plausible.
However, following the release of GPT-4 earlier this year (an improved version of GPT-3.5, its predecessor in the same LLM family, released in 2022), Microsoft Research published a paper claiming that the current GPT-4 model already exhibited human-level performance in tasks across several fields of expertise, including law, computer programming and mathematics.
Controversially, some experts claim that this model might be considered an early, incomplete version of AGI, showing “sparks” of real intelligence.
The letter
The possibility of the hypothesised AGI becoming realised has fuelled intense scientific and ethical debates about the nature of intelligence, the value of AGI, and indeed whether it is a worthy (or perhaps reckless) pursuit in the first instance.
It was against this backdrop that the technologist and futurist think tank, the Future of Life Institute, published the “Pause AI” letter – an open call for a six-month moratorium on research pursuing advanced, AGI-like systems. Referring to “the training of AI systems more powerful than GPT-4”, the letter gives credence to the claims of the aforementioned Microsoft paper. It concludes that, should advanced AI be misaligned with societal values, the risks it poses represent an irreparable and existential threat to society.
As a hopeful remedy, the authors (and signatories, of whom there were more than 33,000 as of July 2023) argue that the six-month hiatus should be used to research AI safety and alignment.
Prominent signatories included Eliezer Yudkowsky and Elon Musk. Yudkowsky, founder of the Machine Intelligence Research Institute, a nonprofit research institute focused on identifying “existential risks from AGI”, warned that we have a single opportunity to implement AGI safely, as beyond a certain threshold the technology becomes uncontrollable.
Musk, also a cofounder of OpenAI, echoed this sentiment, highlighting the existential threat posed by misaligned AI and arguing that AGI should be progressed only if its safety can be assured beyond reasonable doubt.
Among other concerns regarding AI’s power and reach, “Pause” supporters point to recent events such as the widespread layoffs by tech conglomerates in Silicon Valley, which illustrate how AI adoption in preparation for an economic downturn can lead to job cuts.
Pointing also to AI’s potential impact on art and culture, these critics argue that unregulated AI progress could both disrupt the job market and redefine art.
Additionally, AI ethicists have questioned the very premise of AGI as an ideal, linking it (and many of the prominent advocates of AGI) to eugenic and transhumanist projects – movements oriented around improving the quality of a human population through genetic selection or by combining biological and synthetic technologies as a way of evolving our species.
These critics include notable engineers and pioneers of AI ethics, such as Timnit Gebru, Margaret Mitchell and Emily Bender. As such, Pause advocates say a moratorium on research would allow a more thoughtful and inclusive approach to be developed, giving regulations and laws time to “catch up” with this new technological reality.
While the letter claims to be in the interest of all humanity, its opponents include experts in the fields of both AI and AI ethics, who say there are ethical reasons not to pause.
For those who see technological advances as necessarily positive, a pause is viewed as an impediment to an ultimately beneficial outcome. That is, by permitting more efficient use of time and resources, superhuman AI will bring about increased economic output, socioeconomic innovation and human flourishing, and it would thus be unwise to pause its development.
Voices in this camp include scholars and tech luminaries Andrew Ng and Yann LeCun, the latter another of the “founding fathers of AI” and current Chief AI Scientist at Meta.
Compounding such losses, opponents argue, “The Pause” would likely encourage nefarious actors – for example, companies interested in making breakthroughs but unencumbered by ethical considerations – to press ahead during the proposed voluntary hiatus.
Meanwhile, recognising its potential to confer economic and military advantage, the world’s nations are also racing to achieve AI supremacy. Mindful of the momentous historical shifts in geopolitical dynamics brought about by the Naval and Industrial Revolutions, political leaders may act to gain power regardless of proposed regulations or laws – further exacerbating both the political tensions of the “AI arms race” and the discrepancies between technologically affluent and poor nations.
This side also claims that near-term estimates of the AGI timeline are inaccurate, making the pause an unnecessary overreaction.
Finally, while raising concerns both ethical (the exploitation of human “ghost labour”: the poorly paid data labellers working for machine-learning companies) and environmental (the extraordinary energy consumption of LLMs), some AI scholars argue that LLMs are, by nature, ethics- and truth-blind: mere (if sophisticated) statistical models rather than self-reasoning agents, regardless of the safety “guardrails” that platforms might install to prevent the sharing of nefarious content.
The moral choices: Teleological and deontological perspectives
Helping to disentangle humanity’s latest existential dilemma are two age-old ethical approaches: teleology and deontology.
Teleological ethics, also known as consequentialism, focuses on the outcomes and consequences of actions. A teleological position is useful because it asks “where are we heading?” – whether the intended outcome will be deemed worthwhile.
This approach would thus find it morally desirable to pursue AGI, should AGI lead to unalloyed economic prosperity, social harmony and environmental sustainability – all owing to the plans hatched by super-smart AI, which would revolutionise jobs and enhance productivity, create new industries and rejuvenate economic growth, innovate healthcare, and transform transportation and countless other industries for the benefit of humankind, improving quality of life.
However, our collective destination may instead be one in which we have surrendered decisions to a supremely impressive yet biased and discriminatory algorithm (its flaws rendered invisible by the black-box nature of neural networks, and by the contentious fact that OpenAI has refused to reveal its training data) and are stuck with the potential catastrophes accompanying that scenario – ranging from mass unemployment, the exacerbation of societal inequalities and a monopolisation of AI power and control, all the way to “losing control of civilisation”. In that case, sound minds would demand support of “the pause”, and perhaps insist on a freeze in such research (and on finding ways to enforce the pause on potential non-pausing rogues).
Deontological ethics, in contrast, places importance on adherence to moral principles and duties. A deontological position is useful because it asks “how will we get there?” – whether the means to the end are moral.
Taking up this mantle, we should ask whether human autonomy (people’s decision-making rights and abilities), justice (in the sense of fairness) or beneficence (or general welfare) will be maintained or violated in the pursuit of AGI.
Deontology thus offers a way to show that actions that might lead towards a prosperous future cannot justify violating people’s rights today. Accordingly, the likely social, economic, political or environmental harms that may befall people, communities or ecosystems on our journey towards AGI (however beneficial we might deem the eventual destination) cannot be brushed off as “paying our dues”.
That said, the deontological approach would also caution against depriving AI researchers of their autonomy and freedom if no potential harms can be identified. From this viewpoint, algorithmic oppression, exploitation and dispossession are significant concerns in the AI progression debate.
Present LLMs, including ChatGPT, have been shown to perpetuate biases and discrimination absorbed from their training data, which often includes large parts of the internet (home to many grim places). Without effective regulation and oversight, AI algorithms could thus reinforce ingrained inequalities and marginalise vulnerable groups.
This leads to the contentious dispute over open-source (openly available to the public) versus closed-source software, and which of the two is the better mechanism for aligning AI.
Furthermore, the potential rights and responsibilities regarding AGI need to be considered, as well as the processes by which it is pursued – including debates about the sentience of a super-intelligent AGI and the nature of intelligence itself (another discussion altogether).
Conclusion
The emergence of AI and the possible development of AGI represent a transformative chapter in human history. The “Pause AI” letter, with its call for a collective pause in AI progression, raises fundamental questions about the future of AI and the ethical responsibilities that accompany its development.
While the call to pause will be neither enforced nor heeded, we would do well to consider how all human values might be woven into the tapestry of the mind of a superhuman “intelligence”, should it ever manifest. DM
Daron Sender is a BSc Engineering Information Honours student at the University of the Witwatersrand, with an interest in machine learning and software development.
Dr Martin Bekker is a lecturer of Social Sciences and the Ethics of AI at the School of Electrical and Information Engineering at the University of the Witwatersrand.
