AI – President Joe Biden and the many-headed water monster
One of the more confounding matters swirling around AI is the confused tangle of risks, both real and imagined, currently being heatedly debated in both the public and private spheres. Metaphors illustrating various dire predictions of unconstrained AI careen promiscuously around the media, often crossing into hyperbole, such as comparisons with the many-headed water monster Hydra (from Greek mythology) or the more literal ‘horizon of harm’ (heard recently on a podcast called Your Undivided Attention).
On 30 October, partly in response to growing AI fears, President Joe Biden issued an executive order relating to “safe, secure and trustworthy artificial intelligence”. This was a big deal.
It had been in the works for more than six months and, unlike the bickering that usually attends these things, Biden had, in this case, listened to some very smart advisors and a stellar research team before grasping the nettle and signing the broad-ranging 60-page document.
Why was this a big deal? Because few of these risks have yet materialised as actual events. They are mostly just guesses at this point, and we have little evidence on which to base an assessment of their real probability.
It helps to label the risks, and there have been many attempts to do so, by some very smart people whose job it is to gaze into the future.
For instance, here is a neat list culled from an article by tech writer Mike Thomas on the online tech community site Builtin.com, titled “12 Risks and Dangers of Artificial Intelligence”. He outlines the AI risk landscape as follows:
- Lack of AI explainability (e.g. no ability to check veracity).
- Job losses.
- Social manipulation through algorithms (e.g. deep fakes).
- Social surveillance (e.g. China’s facial recognition-driven citizen surveillance).
- Data privacy (e.g. secretly harvesting individual user behaviour from interactions with chatbots).
- Discrimination (e.g. gender and race bias stemming from training on unfiltered datasets).
- Socioeconomic inequality (e.g. stemming from biased recruiting algorithms).
- Weakening ethics and goodwill (e.g. from tendentious opinions spread by AI).
- Autonomous weapons.
- Financial crises.
- Loss of human influence (like empathy).
- Uncontrollable self-aware AI.
I read this with a slightly sceptical eye. Not because the list is inaccurate (it isn’t), but because a few of these are simply restatements of previous concerns about digital technologies.
Sure, they may well end up being amplified by AI, but anxieties about stuff like data privacy or “social manipulation” are not new.
In any event, it’s a grab bag of scary things; one feels that the list should be ranked by severity. The threat of autonomous weapons, for instance, to my mind looms much larger than the somewhat more speculative hazard of “weakening ethics and goodwill”.
I turned instead to the Center for Humane Technology, co-founded by Tristan Harris and Aza Raskin. The analysis and mitigation of technology threats and risks is their raison d’être – they have been at the forefront of this field for five years, since well before the current public interest in AI. In April this year, they released an hour-long video titled, “The AI Dilemma”, which articulately teased out the issues.
Harris and Raskin distilled the threats as follows:
The first is “loss of control”, meaning that, as these systems become more and more complex, humans lose the ability to control or understand them. This could result in AIs making decisions that are not in humanity’s interest, potentially without us even knowing about those decisions until the harm is done.
The second is the “alignment problem”. How do we align AIs with human values? This strikes me as an impossible task, given that we humans sometimes cannot even align our values with our next-door neighbours.
The third is “existential risk”. AIs may become so smart, so well beyond our intelligence that we might appear to them to be at best an irrelevance and at worst a hindrance, using up energy and molecules that could be (from the perspective of the AIs) put to better use. We all know where that leads.
The fourth is “surveillance state” (also tabled by Mike Thomas). If anyone has doubts about this one, just look at what China is already doing with AI-fuelled facial recognition. Not only does the state surveil its citizens, but it sanctions those who step out of some party-mandated line.
The fifth is “dehumanisation”. If AI is to start doing things that we have long done for ourselves (including higher-skill tasks like teaching, law or medicine), where does our sense of purpose go? A meaty threat, perhaps, but one that seems to me to be pretty wild conjecture. Our sense of purpose might change, but it is unlikely to shrivel and die.
Harris and Raskin were two of the many experts asked to work on the paper that eventually ended up as a presidential directive in the Oval Office, and Raskin was in the room when it was signed. He later noted – with impressive restraint – that the directive was “written in broad strokes” and that it would be up to the Biden administration to “flesh out the details” of its implementation.
So, what was in this directive?
It directs the US government to do a bunch of common-sense things and some ambitious things, most of them in a “watchdog” capacity, with none having a sharp set of enforcement teeth to accompany them (that will have to wait for Congress to pass laws).
Included in the list are reporting requirements (such as when the computer power used to train an AI exceeds some predetermined level); the requirement to share safety test results with government; the setting up of a government AI standards committee (with special emphasis on bio-engineering); and the establishment of best practices for detecting AI-generated content.
It goes on like this for pages, directing the US government to protect Americans from potential AI harm to civil rights, labour and the general consumer.
And then there are fine and full-throated paragraphs about fostering innovation and competition, as well as using AI to advance US leadership abroad and improve government services at home.
I am not cynical about any of this. The directive is a proper and carefully considered attempt to “begin the beginning” of regulation and support for this young and unpredictable technology, notwithstanding our inability to properly see where it is heading.
Yet I have one nagging concern.
Government oversight of something this potentially uplifting and potentially harmful to everyone on the planet can only work if every country agrees to stick by the same rules.
Sadly, US efforts to codify value alignment, human dignity, virtue and weapons control are going to be met with delighted derision in the private corridors of Chinese, Russian and Saudi Arabian power.
Anything that the US and the West do to inject caution and prudence into the dizzying forward momentum of AI simply gives democracy’s competitors a chance to pull ahead, leaving national security and careful regulation on opposite sides of the table. DM
Steven Boykey Sidley is a professor of practice at JBS, University of Johannesburg. His new book It’s Mine: How the Crypto Industry is Redefining Ownership is published by Maverick451. It can be ordered directly from the DM store here or on Kindle. It’s also available at bookstores.