
Opinionista

Artificial Intelligence: An inhumane future?


Professor Tshilidzi Marwala is the seventh Rector of the United Nations University (UNU) and a UN Under-Secretary-General.

Artificial Intelligence, like a tale of two cities, will usher in both the good and the bad, hope and despair, as well as poverty and wealth. What we have to do as agents of positive change is to ensure that AI ushers in more good than bad.

This week I was invited to speak at the Oxford Union in England on the topic: AI: An inhumane future? The panel included Kenneth Cukier, a senior editor of The Economist; Dr Catherine Havasi, a Massachusetts Institute of Technology (MIT) scientist; and Professor Sir Adrian Smith, the director of The Alan Turing Institute.

As I was preparing for this talk, I was reminded of the famous British author Charles Dickens, who in his classic novel, A Tale of Two Cities, famously characterised the era of the French Revolution as follows: “It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of Light, it was the season of Darkness, it was the spring of hope, it was the winter of despair.”

Artificial Intelligence (AI) is catalysing another revolution, not the French one but the fourth industrial revolution (4IR). While the French Revolution ushered in new and improved ways of human relations, it also ushered in the cruelty of political factions.

AI is the art of making machines think, at the very least, like human beings. When AI systems become more intelligent than human beings, this is characterised in scientific terms as “reaching singularity”. Hegel wrote somewhere that everything happens in pairs, and these pairs occur as antitheses of each other. In Hegelian dialectics this is called contradiction. Extreme wealth coexists with extreme poverty, good exists alongside evil, and so on. Some prophets of the asymmetry of things even claim that the pairing of opposites is necessary, otherwise the world would be unbalanced, unstable and would ultimately collapse. AI, like a tale of two cities, will usher in both the good and the bad, hope and despair, as well as poverty and wealth. What we have to do as agents of positive change is to ensure that AI ushers in more good than bad.

In our book Militarized Conflict Modeling Using Computational Intelligence, we use AI to predict, resolve and control interstate conflict. The AI system used information such as the distance between the two countries’ capitals, whether they share a border, the difference in their levels of militarisation and the amount of trade between them. The AI system was able to take this information and predict whether the two countries would encounter conflict or peace.
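For readers who want a concrete picture, the sketch below shows in broad strokes how such a prediction can be set up: a binary classifier trained on dyadic features of country pairs. This is not the book’s actual model or data; the feature values, labels and the choice of scikit-learn’s LogisticRegression are illustrative assumptions only.

    # Minimal sketch (not the book's actual model or data): a binary classifier
    # that maps features of a country pair to a conflict/peace prediction.
    # All numbers below are invented purely for illustration.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Each row describes one hypothetical pair of countries:
    # [capital distance (thousand km), shared border (0/1),
    #  militarisation difference, bilateral trade (USD bn)]
    X = np.array([
        [0.5, 1, 0.8,  1.2],
        [8.0, 0, 0.1, 40.0],
        [1.2, 1, 0.6,  0.3],
        [9.5, 0, 0.2, 15.0],
    ])
    # 1 = the pair has experienced conflict, 0 = peace
    y = np.array([1, 0, 1, 0])

    model = LogisticRegression().fit(X, y)

    # Predict the outcome for a new, hypothetical pair of countries
    new_pair = np.array([[0.7, 1, 0.5, 0.9]])
    print(model.predict(new_pair))        # predicted class: conflict (1) or peace (0)
    print(model.predict_proba(new_pair))  # associated probabilities

The point of the sketch is only that the prediction reduces to learning a mapping from a handful of measurable dyadic features to an outcome; the book itself explores richer computational-intelligence models than this toy classifier.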

In contrast to the good that AI can do in conflict resolution, this technology is being deployed in weapons to make them intelligent, effective and autonomous. Autonomous weapons are designed with a human being in the loop, on the loop or out of the loop.

Human in the loop means that a person is part of the deployment of the weapon, and therefore the weapon cannot be deployed without the consent of that human being. Human on the loop means that the autonomous weapon is overseen by a human being, who can intervene in its deployment if the need arises. Human out of the loop means that the weapon operates without any human involvement. In this scenario, responsibility and accountability for any mistakes remain an open question. Military experts claim that human-out-of-the-loop autonomous weapons minimise collateral damage and maximise lethal potency, thereby ultimately saving lives. However, the lack of transparency of current state-of-the-art AI makes accountability elusive.

AI is changing the field of medical sciences. At the University of Johannesburg, we have built AI systems that are able to detect epilepsy better than human doctors and to diagnose pulmonary embolism better than human doctors, and we have used machines to restore speech in people who have lost their voice boxes to cancer.

These advances are revolutionising medical care, bringing down healthcare costs and adding capabilities in impoverished areas. The disadvantage of this technology is that it is trained on data gathered in affluent areas, and it therefore does not work optimally when these devices are used in poor areas that are not well represented in the data sets used to train these machines.

In essence, AI systems are potentially reproducing rather than resolving the economic privileges that exist in our society. We, therefore, need to train the AI systems to reflect the reality as we plan it rather than as it exists. Moreover, these systems assume that data is complete, perfect and precise, a situation that is hard to attain for a developing country like South Africa.

The rise of automation is relieving humans of the burden of work. Tasks that used to be executed by human beings are increasingly performed by machines. Last year, three firefighters perished fighting a fire in downtown Johannesburg. If automated robots had been used to deal with that fire, those three firefighters would not have lost their lives.

But the downside of automation for the world of work is significant. Automation has three consequences for the labour market: existing jobs will be changed, some jobs will be replaced, and new types of jobs will be created. The consequences of this change in the labour market are that unemployment and inequality will increase, and consequently the aggregate demand for goods and services, as well as tax collection, will decrease. The decrease in tax collection will curtail the ability of governments to pursue redistributive strategies such as universal basic income.

Recent studies have shown that machine-learning algorithms discriminate against people of Sub-Saharan African descent and work best on people of European and Asian descent. This is because the machine-learning algorithms currently in use are largely trained on data gathered in North America, Europe and Asia, and they are therefore biased. When these data are used to design the next generation of machine-learning algorithms, the machines will not only be biased because they were trained on biased data; they can also be biased by design, and this will create its own dual world of the included and the excluded.
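To make the mechanism concrete, the sketch below trains a single classifier on data dominated by one synthetic group and then measures its accuracy separately for each group; the under-represented group comes out markedly worse. The groups, features and numbers are entirely invented for illustration and are not drawn from the studies mentioned above.

    # Minimal sketch of how training-data imbalance becomes a performance gap.
    # Entirely synthetic data; group names and sizes are illustrative only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)

    def make_group(n, w):
        """Synthetic group whose labels follow a group-specific rule w."""
        X = rng.normal(size=(n, 2))
        y = (X @ w > 0).astype(int)
        return X, y

    # Well-represented group: 1,000 training samples, decision rule [1, 1]
    X_a, y_a = make_group(1000, np.array([1.0, 1.0]))
    # Under-represented group: only 20 training samples, different rule [1, -1]
    X_b, y_b = make_group(20, np.array([1.0, -1.0]))

    # Train one model on the pooled, imbalanced data
    model = LogisticRegression().fit(np.vstack([X_a, X_b]),
                                     np.concatenate([y_a, y_b]))

    # Evaluate on fresh samples from each group
    X_a_test, y_a_test = make_group(500, np.array([1.0, 1.0]))
    X_b_test, y_b_test = make_group(500, np.array([1.0, -1.0]))
    print("accuracy, well-represented group:",
          accuracy_score(y_a_test, model.predict(X_a_test)))
    print("accuracy, under-represented group:",
          accuracy_score(y_b_test, model.predict(X_b_test)))

The model fits the majority group almost perfectly while performing close to chance on the minority group, which is the same pattern the studies describe, produced here simply by who is and is not in the training data.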

The 4IR is creating winner-takes-all monopolies. In the previous industrial revolutions, every product or company had direct competition; Coca-Cola, for example, has Pepsi as a competitor. Facebook, by contrast, has no direct competitor in the English-speaking world. This phenomenon is creating monopolistic companies that capture customers because they have access to their data.

For example, iCloud, which stores information from the iPhone, makes it difficult for customers to leave Apple products, because if they move to another provider such as Samsung they lose control of their iCloud data. One idea that has been bandied about to facilitate choice is that data should be nationalised. This could be done by ensuring that, after a certain period, data is distributed to the entire industry, allowing customers to move easily and promoting competition.

To answer the question posed in this article: AI, like the tale of two cities, will offer both negatives and positives. For us to magnify the positives and minimise the negatives, we need to democratise technology and data. We also need to build into the devices of the 4IR the character of the world as we desire it, and not make these devices reflect the biases, prejudices and unequal economic spaces that currently exist. DM
