GUEST ESSAY

Artificial Intelligence and the great ethics cage fight of 2023

AI has arrived in the public sphere faster than anyone (even its most enthusiasm-drunk proponents) had expected or even hoped. And it took about three seconds for everyone to start saying, wait a minute, just wait a darn minute, let’s think about this first.

Well, that didn’t take long. People I know who have never, ever had a serious conversation about ethics are suddenly screeching and throwing punches at each other. Dinner-party and private conversations are degrading into insults, snark and fractured friendships. I exaggerate of course, but not by much.  

All because of a piece of statistical magic almost no one had ever heard of before 30 November 2022, called GPT (Generative Pre-trained Transformer).

At the risk of being redundant (because everyone is an expert now), the original ChatGPT was built on a large language model called GPT-3.5, upgraded mere months later to the orders-of-magnitude more powerful GPT-4, which handles images as well as language, and which is expected to be succeeded later this year by a still more powerful GPT-5. To say nothing of the tens of billions being spent by everyone from Google to Meta to Adobe to IBM to Nvidia to Bloomberg to cash-flush startups trying to get ahead of ChatGPT.

You get the point. AI has arrived in the public sphere faster than anyone (even its most enthusiasm-drunk proponents) had expected or even hoped. And it took about three seconds for everyone to start saying, wait a minute, just wait a darn minute, let’s think about this first.    

Actually, the story of ethics and artificial intelligence goes back a long time. We could start around 800 AD with a chap named Jabir ibn Hayyan who developed the Arabic alchemical theory of takwin, the artificial creation of life in the laboratory, up to and including human life. 

Or around 1580 AD when Rabbi Judah Loew ben Bezalel of Prague is said to have invented the golem, a clay man brought to life. Of course, neither of them succeeded, but I am sure the ethics debates around their aspirations were, well, robust.  

Popular culture also brims with this stuff. All the way back to Jonathan Swift’s Gulliver’s Travels, Mary Shelley’s Frankenstein, Neal Stephenson’s Snow Crash, the movies Blade Runner, The Matrix...  

The first laws of robot ethics were conceived by the science fiction writer Isaac Asimov (in conjunction with his editor, John W Campbell) in the short story Runaround, later collected in I, Robot. His three laws were (as far as I know) the first stab at this fraught subject, albeit in a work of fiction:

“A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.” 

This is a great start to the ethical debate, but any half-decent litigator would take it apart quickly. Define “injure”. Define “come to harm” (does that include being defamed, insulted, neglected?). What if the robot is faced with the trolley problem and forced to choose between two harmful acts? (The trolley problem is a thought experiment in ethics about a fictional scenario in which an onlooker has the choice to save five people in danger of being hit by a trolley, by diverting the trolley to kill just one person.) 

Ethics debate heats up

In the wake of Asimov and along the long and winding road of AI research, there were plenty of ivory tower debates about these matters, most of them occurring outside the public sphere. But when the 21st century arrived, and machine intelligence research started producing real results, the ethics debate started heating up, although still not publicly; no one was yet sure when AI would spill noisily into our lives.

Enough gravitas had surrounded the project that by 2017 we saw the first fully fledged conference on AI ethics — the Asilomar Conference on Beneficial AI, held at Asilomar in California. At the end of a few days of philosophical, sociological and technical pow-wowing, 23 principles were promulgated, all very noble and high-minded. Like principle 11: “Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.”

Well, yes. Easy to say. Kumbaya, right?  

In the past few years, we have seen these principles embedded in embryonic national pre-legislative frameworks and proposals, including those of the European Commission and the UK, the latter of which includes this proposal: “Fairness: AI should be used in a way which complies with the UK’s existing laws, for example, the Equality Act 2010 or UK GDPR, and must not discriminate against individuals or create unfair commercial outcomes.”

Uh-huh.  

You see the problem here? These laws and regulations will be complete in, I don’t know, a year, two, four? Putin or Xi or some religious fundamentalist terrorist will use AI to shut down the air traffic control towers at a major airport one week, develop a bioweapon the next and open the sluice gates at Hoover Dam the week after. There is simply no chance that the leader of a Russia that thinks it is okay to kidnap 30,000 children and send them to a foreign country, or a China that thinks nothing of repressing an entire ethnic group, will embrace the Western world’s concept of “ethics”. None whatsoever.

Which finally brings us to two news items that set the internet aflame over the past couple of weeks. The first was an open letter from the Future of Life Institute on 23 March. This letter has now been signed by 1,300 big thinkers and luminaries from diverse fields, like Yuval Noah Harari, Steve Wozniak, Elon Musk, Tristan Harris and Lawrence Krauss. The letter is short; it basically says: we have no idea what we are building here and we don’t know what nastiness may emerge. 

It ends with this plea: “Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”  

This letter percolated for a few days, while others hummed and hawed, presumably thinking about terrorists and Putin and Xi, and whether they were giggling and downloading Alpaca, a large language model from Stanford that cost about $600 to train and which rivals ChatGPT.

But then a fellow named Eliezer Yudkowsky chimed in a few days later. He is a co-founder of the Machine Intelligence Research Institute and one of the earliest and most vocal researchers into the risks of artificial general intelligence. His contribution? Steel yourself:

“…the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in ‘maybe possibly some remote chance’, but as in ‘that is the obvious thing that would happen’.”

Some smart people got scared. Others said surely not. Others scoffed. Others said: you’re an idiot.  

The cage fight is now open for business. DM

Steven Boykey Sidley is a professor of practice at JBS, University of Johannesburg.
