GUEST ESSAY

AI – President Joe Biden and the many-headed water monster

One of the more confounding matters swirling around AI is the confused tangle of risks, both real and imagined, currently being heatedly debated in both the public and private spheres. Metaphors illustrating various dire predictions of unconstrained AI careen promiscuously around the media, often crossing into hyperbole, such as comparisons with the many-headed water monster Hydra (from Greek mythology) or the more literal ‘horizon of harm’ (heard recently on a podcast called Your Undivided Attention).

On 30 October, partly in response to growing AI fears, President Joe Biden issued an executive order relating to “safe, secure and trustworthy artificial intelligence”. This was a big deal. 

It had been in the works for more than six months and, in contrast to the bickering that usually attends these things, Biden had in this case listened to some very smart advisors and a stellar research team before grasping the nettle and signing the broad-ranging 60-page document.

Why was this a big deal? Because few of these risks have yet materialised as events. They are mostly just guesses at this point. We have little evidence on which to base an assessment of their actual probability.

It helps to label the risks, and there have been many attempts to do so by some very smart people whose job it is to gaze into the future.

For instance, here is a neat list culled from an article by tech writer Mike Thomas on the online tech community site Builtin.com, in which he outlines the AI risk landscape under the title “12 Risks and Dangers of Artificial Intelligence”.

They are:

  1. Lack of AI explainability (e.g. no ability to check veracity).
  2. Job losses.
  3. Social manipulation through algorithms (e.g. deep fakes).
  4. Social surveillance (e.g. China’s facial recognition-driven citizen surveillance).
  5. Data privacy (e.g. secretly harvesting individual user behaviour from interactions with chatbots).
  6. Discrimination (e.g. gender and race bias stemming from training on unfiltered datasets).
  7. Socioeconomic inequality (e.g. stemming from biased recruiting algorithms).
  8. Weakening ethics and goodwill (e.g. from tendentious opinions spread by AI).
  9. Autonomous weapons.
  10. Financial crises.
  11. Loss of human influence (like empathy).
  12. Uncontrollable self-aware AI.

I read this with a slightly sceptical eye. Not because the list is inaccurate (it isn’t), but because a few of these are simply restatements of previous concerns about digital technologies. 

Sure, they may well end up being amplified by AI, but anxieties about stuff like data privacy or “social manipulation” are not new.

In any event, it’s a grab bag of scary things; one feels that it should be prioritised. The threat of autonomous weapons, for instance, to my mind looms much larger than the somewhat more speculative hazard of “weakening ethics and goodwill”.

I turned instead to the Center for Humane Technology, co-founded by Tristan Harris and Aza Raskin. The analysis and mitigation of technology threats and risks is their raison d’être – they have been at the forefront of this field for five years, since well before the current public interest in AI. In April this year, they released an hour-long video titled “The AI Dilemma”, which articulately teased out the issues.

Harris and Raskin distilled the threats as follows: 

The first is “loss of control”, meaning that, as these systems become more and more complex, humans progressively lose the ability to control or understand them. This could result in AIs making decisions that are not in humanity’s interest, potentially without us even knowing about these decisions until the harm is done.

The second is the “alignment problem”. How do we align AIs with human values? This strikes me as an impossible task, given that we humans sometimes cannot even align our values with those of our next-door neighbours.

The third is “existential risk”. AIs may become so smart, so well beyond our intelligence that we might appear to them to be at best an irrelevance and at worst a hindrance, using up energy and molecules that could be (from the perspective of the AIs) put to better use. We all know where that leads. 

The fourth is “surveillance state” (also tabled by Mike Thomas). If anyone has doubts about this one, just look at what China is already doing with AI-fuelled facial recognition. Not only does the state surveil its citizens, but it sanctions those who step out of some party-mandated line. 

The fifth is “dehumanisation”. If AI starts doing things that we have long done for ourselves (including higher-skill tasks like teaching, law or medicine), where does our sense of purpose go? A meaty threat, perhaps, but one that seems to me to be pretty wild conjecture. Our sense of purpose might change, but it is unlikely to shrivel and die.

Harris and Raskin were two of the many experts asked to work on the paper that eventually ended up as a presidential directive in the Oval Office, and Raskin was in the room when it was signed. He later noted – with impressive restraint – that the directive was “written in broad strokes” and that it would be up to the Biden administration to “flesh out the details” of its implementation.

So, what was in this directive? 

It directs the US government to do a bunch of common-sense things and some ambitious things, most of them in a “watchdog” capacity, with none having a sharp set of enforcement teeth (that will have to wait for Congress to pass laws).

Included in the list are reporting requirements (such as when the computing power used to train an AI exceeds some predetermined level); the requirement to share safety test results with government; the setting up of a government AI standards committee (with special emphasis on bio-engineering); and the establishment of best practices for detecting AI-generated content.

It goes on like this for pages, directing the US government to protect Americans from potential AI harm to civil rights, labour and the general consumer. 

And then there are fine and full-breasted paragraphs about fostering innovation and competition, as well as using AI to advance US leadership abroad and improve government services at home. 

I am not cynical about any of this. The directive is a proper and carefully considered attempt to “begin the beginning” of regulation and support for this young and unpredictable technology, notwithstanding our inability to properly see where it is heading.

Yet I have one nagging concern. 

Government oversight of something this potentially uplifting and potentially harmful to everyone on the planet can only work if every country agrees to stick by the same rules. 

Sadly, US efforts to codify value alignment, human dignity, virtue and weapons control are going to be met with delighted derision in the private corridors of Chinese, Russian and Saudi Arabian power. 

Anything that the US and the West do to inject caution and prudence into the dizzying forward momentum of AI simply gives democracy’s competitors a chance to pull ahead, leaving national security and careful regulation on opposite sides of the table. DM

Steven Boykey Sidley is a professor of practice at JBS, University of Johannesburg. His new book It’s Mine: How the Crypto Industry is Redefining Ownership is published by Maverick451. It can be ordered directly from the DM store here or on Kindle. It’s also available at bookstores.

Comments

  • Peter Atkins says:

    There are some great SF books about AI. One of these is Avogadro Corp. It is about a smart messaging system, similar to Google’s, which accidentally gets out of its IT sandbox before its inventors can react. The AI was programmed with an objective which it enthusiastically tries to achieve – with many unintended consequences. I found it scary and all too feasible.
    I noticed that the question “has this AI achieved general intelligence?” was not on any of the risk lists.

  • Alan Stevens says:

    The world is racing to an abyss. As computing power increases, so does decryption improve. A networking system that, step by step, takes control of every digital device in the world will be powerful enough to overcome the strongest encryption. Whoever controls this controls everything.

    And what about the “bad players” using AI for their own nefarious purposes? I don’t need to elaborate on that.

    And even this pales next to the threat of Artificial General Intelligence.

  • jason du toit says:

    the thought that the US will make any policy decisions that don’t advance their power over others is worthy of derision.
