I have long held the belief that we humans are on a collision course with ourselves because of our stubbornness and stupidity and, in more tangible terms, because scientific and technological achievements can end up in the wrong hands. I have never doubted this. I have been especially perturbed by the dangers of nuclear power, war (again) and the climate crisis, which have, I believe, run away from us, with no apparent way back.
With respect to scientific achievements, there is no way that we can “undiscover” anything, or retreat from the frontiers of, say, physics, which we have pushed so far that some people have (wrongly) suggested the discipline may have reached a dead end. This notwithstanding, the stand-out example of scientific achievements ending up in the “wrong hands” and being used to sow death and destruction is the way in which the Manhattan Project led to the US killing and maiming of hundreds of thousands of Japanese at Hiroshima and Nagasaki.
In 1936, the physicist Francis Aston warned about the dangers of atomic research being used for destructive purposes.
“There are those about us who say that [atomic] research should be stopped by law, alleging that man’s destructive powers are already large enough… Personally, I think there is no doubt that subatomic energy is all around us, and that one day man will release and control its almost infinite power. We cannot prevent him from doing so and can only hope that he will not use it exclusively in blowing up his next door neighbour,” Aston said.
There is enough evidence to suggest that the climate crisis may be beyond the point where we can reverse the damage, and, well, there are wars around the globe, with the US and China in a showdown over who has the right to be the next global hegemon. The US has the military might, and China has the arc of history on its side. But never mind all that – and this is not hyperbole – there is a clearer and more ominous threat facing humanity with artificial intelligence and robotics. For brevity I will refer to both as AI&R.
AI and the world we don’t want but may get anyway
There is many a critic, public intellectual or crusty Marxist professor who hasn’t noticed that the world has changed significantly since the early 19th century. There are also folk who have real fears that AI&R will take away jobs. The ones we can ignore are those who insist that we should not worry about the future because today’s problems – not 19th-century imaginings – are overwhelming enough. We cannot, however, dismiss the real concerns about job displacement, or (today’s) problems of hunger and need. Many things happen in the world concurrently. It is possible to pay attention to, and invest in, the development of vaccines, or AI&R, without negating the very real problems of hunger, homelessness and unemployment.
One of the problems we face with AI, at least in my mind, centres on ethics and application – when scientific or technological achievements end up in the wrong hands; when AI&R inevitably overtake human intelligence and carry out tasks faster, more accurately and more efficiently; and when they start making decisions without any human input. Put more bluntly: can we create robots without losing control over them?
Are we another step closer to self-destruction?
There is a range of responses to the progress and achievements in AI&R. Some folk are dismissive, others are sceptical, others think it is some conspiracy, and then there are political knuckle draggers who would consider it an improbability, because we can’t even maintain a steady flow of electricity, or “they can’t even run SOEs”… I am not going to contest any of these claims and assertions, though I will insist that we should not be dismissive of advances in technology. To those who are dismissive, I have these questions: How much faster and smarter is your cellphone than it was a decade ago? How much smarter is your computer than it was two decades ago? I am willing to wager that everyone who has a cellphone or a computer wishes it were faster or smarter. This is precisely what drives research and development in AI&R. Of course, workers may have cause for concern when robots can (and already do) assemble cars faster and more accurately than people on assembly lines do. Again, these are legitimate concerns.
But it’s impossible to conceive of a world where all advances in AI&R (and technology in general) are stopped. Though it’s really difficult to wrap your mind around it, we could, some day, produce robots that improve themselves, or that “don’t need” humans. We could reach the point at which, as the statistician IJ Good remarked back in 1965, we create an ultra-intelligent machine “that can far surpass all the intellectual activities of any man, however clever”. Such a machine would design “even better machines; there would then unquestionably be an ‘intelligence explosion’, and the intelligence of man would be left far behind”. Such a machine, he suggested, may turn out to be the “last invention that man need ever make”.
At that point, a bigger question arises: of what use are people to a utilitarian society controlled by Homo economicus? A bigger, more frightening development would be when these machines we create decide, for themselves, what may be better for them – and, by extension, better for us. The problem, such as it may be, is that we purposefully give robots the power and ability to process information – hence the “intelligence” in AI. Can we, should we, or would we have the foresight to place limitations on machines acquiring or developing general intelligence – the ability to think across domains? I doubt that very much, because, notwithstanding potholes or faulty traffic lights, scientific and technological advances in medicine continue, and we want AI&R to help us detect and fight diseases – and provide assistance with what seem like humdrum medical and nursing routines.
This topic is as marvellous as it is exciting, and as foreboding as it is both. The one thing I would dare say is that we cannot possibly stop, or reel back, the expansion of intellectual endeavour. We can’t uninvent robots because they’re taking someone’s job – not unless you’re stuck, intellectually, in the English Midlands of the mid-1800s.
My fears, as explained in the opening passages of this essay, are that we humans will, in the next 100 years, cause our own extinction. It may be by nuclear war, unconventional war (with the use of drones), technological warfare (when we simply destroy the technological infrastructure of our enemies), or it may be that the climate crisis will decimate whatever we hold precious in life.
As for AI&R, and technology in general, there may come a time, within the next three to four decades, when we have robots that are smarter and more independent than we want them to be. Already there are machines that repair code faster than any human. It will not be long before intelligent machines will be able to do every mundane daily task or high-level statistical computation better, faster and more cheaply than humans.
“Rightly or wrongly, one industry after another is falling under its spell, even though few have benefited significantly so far. And that raises an interesting question: when will artificial intelligence exceed human performance? More specifically, when will a machine do your job better than you?” the MIT Technology Review asked, somewhat rhetorically. DM