
Putting tech in its place


As human relations become digitised, the controls and human values that underpin technology must be preserved.

First published by ISS Today

When J Robert Oppenheimer, the man hailed as the father of the first atomic bomb, realised in the 1940s that the tool he had created had been put to devastating effect in Japan, he committed his later years to advocating controls on nuclear proliferation. It is a useful reminder of the power of tech and of the need to “tame” technology, or at least to retain a human dimension in its development and application.

Representatives from South Africa and 22 other states, including the P5 (the five permanent members of the UN Security Council), are scheduled to meet this month to mull over these moral dilemmas as part of the United Nations General Assembly’s Group of Governmental Experts on Advancing responsible State behaviour in cyberspace in the context of international security.

Their task is to help devise norms for states on how to apply existing international law principles to developments in information and communications technology. They must also determine whether the global community needs new rules of engagement or whether existing legal frameworks are sufficient. It’s a tough task, not least because of divergent opinions among states on how to balance human rights, data security and privacy.

Given the growing digitisation of human relations, and people’s seeming inability to “opt out” of this new digital ecosystem, there is much talk about “putting tech in its place”. The current debate is about how to ensure that the human safeguards, controls and values that underpin technology are not lost.

Our human-centred international order is designed to keep power in check and to hold states and individuals to account. Autonomous technologies are disrupting that. How, for example, do you hold a so-called killer robot or lethal autonomous weapon to account when things go wrong?

During the 2003 Iraq war, people watched in horror as a pre-programmed United States Patriot missile battery, part of a missile defence shield, took down a British Royal Air Force Tornado jet at the air base where I was embedded. It killed the aircraft’s pilot and navigator, with whom I had shared a coffee just hours earlier. It was a dreadful “accident”.

The plane was mistaken for an incoming missile, and the pre-programmed machine reacted. This was before the days of artificial intelligence as we know it, and it serves as a salutary reminder that machines cannot always distinguish the nuances that shape us as human beings.

In this digital age, the need to consider revising the rules of the global game, and how interactions take place, is no longer confined to state-on-state behaviour. It also touches on how industry, militaries, governments, armed groups, civil society and the media rub up against one another.

A recent conference in Stockholm was dominated by the question of how to ensure that, at a time of increased machine autonomy, human control and decision-making retain primacy in policymaking. Among the vexing questions is what balance to strike between human and artificial intelligence. Specifically, how can we audit artificial intelligence, or subject it to the rule of law, when we increasingly rely on it in vital decision-making?

The debate about putting the “human dimension” back into tech focuses on questions of control. Surrendering complete control to machines in an era of artificial intelligence also affects how government policy is conducted in peacetime. New autonomous technologies may shape policing or counter-terrorism strategies that decide, on the basis of our “score”, whether we are considered a threat, and determine who should be detained or targeted and who should not.

The algorithms already used in decision-making, including decisions that help save lives, have a flip side that arguably intrudes on our privacy. They can help determine whether we are likely to reoffend, or whether we display personality traits that suggest we could easily be radicalised.

But machines can’t read between the lines or operate in the grey zone of uncertainty. The international community is confronted with the challenge of setting limits on how much autonomy society is prepared to cede to machines while still protecting human security. And in terms of checks and balances, how do we define privacy at both the international and domestic levels?

Experts remind us that as technology has developed, so too have the legal definitions of what constitutes public and private space. For African countries like South Africa that seek to centre human rights in their policy, there is a case for asserting themselves at a time when states with divergent views on privacy and security are deepening their business interests in Africa.

At the Stockholm conference, UN High Representative for Disarmament Affairs Izumi Nakamitsu warned that the growing use of unmanned aerial vehicles, or drones, and increased autonomy could lead to perceptions of casualty-free warfare. She also cautioned that “the possibility of third parties with malicious intent interfering in control systems to incite conflict cannot be discounted”.

Without human controls, Nakamitsu said, artificial intelligence in the digital space threatens to “exacerbate political divisions … even in the most benign of international environments”. Emerging technologies in the digital sphere may thus act as an accelerant to existing simmering tensions, leaving governments unable to react as quickly as machines.

A recent report by the International Committee of the Red Cross warns that “[artificial intelligence] and machine-learning systems remain tools that must be used to serve human actors, and augment human decision makers, not replace them”.

In a world where the internet of things could enable a refrigerator or any other wireless domestic appliance to be remotely captured, weaponised and used to cause mass destruction, the need for human-centred technology grows more pressing. DM

Karen Allen is a Senior Research Adviser, Emerging Threats in Africa, ISS Pretoria
