
Opinionista

Artificial intelligence and robotics: Be careful what you wish for


Ismail Lagardien is a writer, columnist and political economist with extensive exposure and experience in global political economic affairs. He was educated at the London School of Economics, and holds a PhD in International Political Economy.

There may come a time, within the next three to four decades, when we have robots that are smarter and more independent than we want them to be.

I have long held the belief that we humans are on a collision course with ourselves because of our stubbornness and stupidity and, in more tangible terms, because scientific and technological achievements can end up in the wrong hands. I have never doubted any of these beliefs. I have been especially perturbed by the dangers of nuclear power, war (again) and the climate crisis, which have, I believe, run away from us, with no apparent way back.

With respect to scientific achievements, there is no way that we can “undiscover” or retreat from the frontiers of, say, physics, that we have expanded to the point where people have (wrongfully) suggested that physics may have reached a dead end. This notwithstanding, the stand-out example of scientific achievements ending up in the “wrong hands” and being used to sow death and destruction is the way in which the Manhattan Project led to the US killing and maiming of at least a million Japanese at Hiroshima and Nagasaki.

In 1936, the physicist Francis Aston warned about the dangers of atomic research being used for destructive purposes:

“There are those about us who say that [atomic] research should be stopped by law, alleging that man’s destructive powers are already large enough… Personally, I think there is no doubt that subatomic energy is all around us, and that one day man will release and control its almost infinite power. We cannot prevent him from doing so and can only hope that he will not use it exclusively in blowing up his next door neighbour,” Aston said.

There is enough evidence to suggest that the climate crisis may be beyond the point where we can reverse the damage, and, well, there are wars around the globe, with the US and China in a showdown over who has the right to be the next global hegemon. The US has the military might, but China has the arc of history on its side. Never mind all that, though, because – and this is not hyperbole – there is a clearer and more ominous threat facing humanity in artificial intelligence and robotics. For brevity, I will refer to both as AI&R.

AI and the world we don’t want but may get anyway

Many a critic, public intellectual or crusty Marxist professor has not noticed that the world has changed significantly since the early 19th century. There are also folk who have real fears that AI&R will take away jobs. The ones we can ignore are those who insist that we should not worry about the future because today's problems – not 19th-century imaginings – are overwhelming. We cannot ignore the real concerns about job displacement, or (today's) problems of hunger and need. There are, however, very many things happening in the world concurrently. It is possible to pay attention and invest in the development of vaccines, or AI&R, without negating the very real problems of hunger, homelessness and unemployment.

One of the problems we face with AI, at least in my mind, centres on ethics and application: what happens when scientific or technological achievements end up in the wrong hands; when AI&R inevitably overtake human intelligence and carry out tasks faster, more accurately and more efficiently; and when they start making decisions without any human input. Put more bluntly: can we create robots without losing control over them?

Are we another step closer to self-destruction?

There is a range of responses to the progress and achievements in AI&R. Some folk are dismissive, others are sceptical, others think it is some conspiracy, and then there are political knuckle-draggers who would consider it an improbability, because we can't even maintain a steady flow of electricity, or "they can't even run SOEs"… I am not going to contest any of these claims and assertions. I will, however, insist that we should not be dismissive of advances in technology. To those who are dismissive, I have these questions: How much faster and smarter is your cellphone than it was a decade ago? How much smarter is your computer than it was two decades ago? I am willing to wager that everyone who has a cellphone or a computer wishes it were faster or smarter. This is precisely what drives research and development in AI&R. Of course, workers may have cause for concern when robots can (and already do) assemble cars faster and more accurately than people on assembly lines. Again, these are legitimate concerns.

But it's impossible to conceive of a world where all advances in AI&R (and technology in general) are stopped. Though it's really difficult to wrap your mind around it, we could, some day, produce robots that improve themselves, or that "don't need" humans. We could reach the point at which, as the statistician IJ Good remarked back in 1965, we create an ultra-intelligent machine "that can far surpass all the intellectual activities of any man, however clever". Such a machine would design "even better machines; there would then unquestionably be an 'intelligence explosion', and the intelligence of man would be left far behind". Such a machine, he suggested, may turn out to be the "last invention that man need ever make".

At that point, a bigger question arises: of what use are people to a utilitarian society controlled by Homo economicus? A bigger, more frightening development would be when the machines we create decide, for themselves, what may be better for them – and, by extension, better for us. The problem, such as it may be, is that we purposefully give robots the power and ability to process information – hence the "intelligence" in AI. Can we, should we, or would we have the foresight to place limitations on machines acquiring or developing general intelligence – the ability to think across domains? I doubt that very much, because, notwithstanding potholes or faulty traffic lights, scientific and technological advances in medicine continue, and we want AI&R to help us fight or detect diseases – and to provide assistance with what seem like humdrum medical nursing routines.

This topic is as marvellous as it is exciting, and as foreboding. The one thing I would dare say is that we cannot possibly stop, or roll back, the expansion of intellectual endeavour. We can't uninvent robots because they're taking someone's job – not unless you're stuck, intellectually, in the English Midlands of the mid-1800s.

My fears, as explained in the opening passages of this essay, are that we humans will, in the next 100 years, cause our own extinction. It may be by nuclear war, unconventional war (with the use of drones) or technological warfare (when we simply destroy the technological infrastructure of our enemies), or it may be that the climate crisis will decimate whatever we hold precious in life.

As for AI&R, and technology in general, there may come a time, within the next three to four decades, when we have robots that are smarter and more independent than we want them to be. Already there are machines that repair code faster than any human. It will not be long before intelligent machines will be able to do every mundane daily task or high-level statistical computation better, faster and more cheaply than humans.

“Rightly or wrongly, one industry after another is falling under its spell, even though few have benefited significantly so far. And that raises an interesting question: when will artificial intelligence exceed human performance? More specifically, when will a machine do your job better than you?” the MIT Technology Review asked, somewhat rhetorically. DM


Comments

  • Kanu Sukha says:

This article reminds me of the claim by China to become ‘carbon neutral’ by 2050! Biden yesterday referred to that date also for the USA! The two biggest polluters on the globe want another 30 years to reach that “target”! Just another 10-15 years at the current rate of global pollution is going to bring the earth to its knees by becoming uninhabitable simply in terms of temperature increases. The mantra ‘growth, growth, and more growth’ and unbridled consumption by the rich (in material things) are going to ensure that. The next ‘extinction’ of civilisations such as the ancient ones of Egypt, the Inca, Mohenjo-daro and Harappa is closer than Ismail thinks. Is it any wonder that savants like Greta Thunberg are seeing the writing on the wall… particularly for their generation? I don’t think AI is going to solve that one!

    • Rodney Weidemann says:

      On the other hand, if we have machines intelligent enough to make faster and better decisions than humans, and if they have the capability of implementing such decisions without human intervention, there is the possibility that the machines will not only work out what needs to be done to solve the climate crisis, but actually set the ball in motion, regardless of what the rich, consumptive societies want…

      …maybe putting the machines in charge will be better for us, in the long run?

  • John Cartwright says:

I have no doubt that – assuming climate change, nuclear war or catastrophic gene manipulation don’t get us first – AIs will quite quickly outpace human intelligence and proceed with their own evolution. This need not be a bad thing for us humans and other non-human persons – why would advanced AIs (what Iain M Banks refers to as “Minds”) be interested in messing around with us? Surely it would be in their interests to see to it that our destructive tendencies are limited to some extent? Beyond that, why would such a Mind bother to show us who’s boss? That’s what humans do, and look where it’s got us.

  • Rodney Weidemann says:

Two quick points: you mention how ‘the Manhattan Project led to the US killing and maiming of at least a million Japanese at Hiroshima and Nagasaki.’ As an amateur WWII historian, it’s worth mentioning that Allied projections for casualties that would occur during the planned invasion of Japan (and these were the low end of the scale) were at least a million Allied casualties, and a minimum of five times as many Japanese (again, a low projection, given the nation’s existing mindset of ‘death before dishonour’). So in all likelihood, Little Boy and Fat Man SAVED millions of lives on both sides, despite the horror of being the first nuclear attacks in history.

Secondly, it’s worth remembering that every major advance in history – from the original Industrial Revolution to now – has led to a net increase in jobs. Yes, certain job categories are lost, but a host of other new ones open up. When the motor car first became popular, most horseshoe salesmen went out of business. However, the clever ones reinvented themselves as tyre salesmen, or mechanics. Robots may be able to build a car faster than a production-line human could, but who (currently) does the maintenance on these robots? Who performs final quality checks at the end? People, of course.

Lastly, the Singularity (when machines overtake human brains and can start evolving on their own) may well lead to the machine overlords taking over fully 🙂 But I prefer to think of it this way: Space travel has always confounded humanity, because the distances are so vast that it would take many generations to travel even to our nearest star neighbours. However, if we could send machines as emissaries and explorers, which did not age or fall apart (they would likely be self-repairing), it would matter far less if it took them centuries to cross the gulf. In fact, we may reach a point where we can download our own consciousness into a machine – would that not be the ideal next step in evolution?
    Creating bodies that never fail, never age, self-repair – wouldn’t that be an interesting turn of events for the human race?

  • District Six says:

Isaac Asimov’s now dated Three Laws of Robotics (the first law is that a robot shall not harm a human, or by inaction allow a human to come to harm; the second, that a robot shall obey any instruction given to it by a human; and the third, that a robot shall avoid actions or situations that could cause it to come to harm itself) have now been surpassed by AI itself. AI may well conclude that humans are a threat to the planet. If I read your first sentence correctly, then human stupidity and greed (or, as you say, “stubbornness”) could well be our undoing.

  • alan Beadle says:

    Always challenging and thought provoking to read Ismail Lagardien’s articles.

  • Louis Potgieter says:

    Am I confused, or is this the guy who wrote a climate denial book?

  • alan Beadle says:

    When Henry Ford took the union leaders on a tour of his new efficient assembly line they asked ‘Who would purchase those cars?’
    A long way since the time of the Luddites!
