OP-ED

Artificial Intelligence: Where are the users?

One argument that is reaching the status of common sense is that Artificial Intelligence is catalysing a new revolution, the Fourth Industrial Revolution. Fearing being left behind as the rest of the world revolutionises, South Africa is scrambling madly to catch up.

Claims abound that Artificial Intelligence (AI) can rescue our ailing retail and manufacturing sectors. President Cyril Ramaphosa has even appointed a Commission on the Fourth Industrial Revolution to promote what he calls an “entrepreneurial state… [which will] assist government in taking advantage of the opportunities presented by the digital industrial revolution”.

The one voice largely missing from the noise about AI is that of the users of AI-driven systems – and by now, that includes most of us. Users are an important constituency, as these systems are generally trained on our data, yet the ways in which they use it are opaque.

Automated decisions made using AI are difficult to challenge, which makes them ripe for abuse in ways that threaten basic rights and freedoms. Elections can be distorted through AI-powered disinformation, and people can be falsely accused of crimes if they are profiled incorrectly.

Yet, despite the dangers, information regulators are struggling to defend users’ rights as AI challenges traditional notions of data protection. In South Africa’s case, the regulator is struggling to get up and running. So, as things stand, it is open season on people’s data.

In an important development, the United Nations Educational, Scientific and Cultural Organisation (Unesco) has stepped into the policy breach and proposed some basic principles that should guide AI take-up by its member states.

This means that Unesco member states – including South Africa – will have to report on what they are doing to protect AI users’ rights and promote what Unesco has called a more human-centred approach to AI.

So, for instance, states will have to account for how they are using AI to address current inequalities in access to knowledge and in the diversity of cultural expression, and to eliminate technological divides within and between countries.

In making these proposals, Unesco has elaborated on principles it developed for the internet as a universal resource. Unesco’s work raises an important question: what rights should people insist on when it comes to AI?

At least four rights are threatened by the unregulated growth of AI: freedom of expression, privacy, equality and political participation.

It is particularly important to highlight the dangers for collective rights, such as the rights to organise and assemble. This is because in particular contexts where AI may limit rights substantially, such as in criminal justice and national security matters, governments are likely to argue that individual rights must give way to the national interest.

Yet it is in these very domains that the dangers of AI are particularly acute, because of the negative implications of incorrect decisions (profiling people incorrectly as terrorism suspects, for instance). However, if it can be shown that collective rights, and in fact democracy itself, are being eroded by unaccountable uses of AI, then the arguments for a rights-based AI become stronger.

The human element in AI decision-making needs to be made visible, and the decision-makers need to be held to account. This means challenging the technological determinism of Fourth Industrial Revolution (4IR) arguments, and the problematic assumption that the 4IR does, in fact, qualify as a social revolution.

There needs to be open engagement between AI engineers and AI users about the rules of the game engineered into AI-driven platforms and websites, to which users currently have no access.

This engagement should shift the discussion away from blaming AI systems when they are rights-insensitive – which can come across as technophobic, even Luddite – and towards blaming the humans who design these systems, or who deploy them for undemocratic purposes.

There need to be clearly defined circumstances in which AI-powered automated decision-making is limited. For instance, there should be a prohibition (with narrow exceptions) on solely automated decisions where those decisions have legal or other significant effects. Decisions that stand to limit fundamental rights should involve a human being (a “human-in-the-loop”).
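By way of illustration only – the decision service, field names and review flow below are all hypothetical – a “human-in-the-loop” gate might look something like this in software:

```python
# Purely illustrative sketch of a "human-in-the-loop" gate: decisions with
# legal or other significant effects are never finalised automatically.
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str
    significant_effect: bool  # e.g. affects legal status, credit, liberty

def finalise(decision: Decision, human_review) -> str:
    if decision.significant_effect:
        # Route to a human reviewer instead of acting on the model's output.
        return human_review(decision)
    return decision.outcome

# Hypothetical usage: an automated "decline" on a loan is held for review.
result = finalise(
    Decision(outcome="decline", significant_effect=True),
    human_review=lambda d: f"queued for human review (model suggested: {d.outcome})",
)
print(result)
```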

Also, algorithms only amplify existing human behaviour, and AI users need to understand what these behaviours are and how to alter them. Doing so often means following the money or, in the case of media content, asking which content is likely to attract sponsorship, because it is this content that is likely to be boosted.

Understanding the potentially anti-democratic uses of AI is particularly important in an era when democracy itself cannot be taken for granted as a political system. It is also important to identify those actors who are most at risk and unpack the rights implications for them. There are many other instances where democratic rights could be affected by unaccountable AI uses.

For instance, it is well recognised that AI can be used to de-anonymise and re-identify individuals, allowing them to be tracked and jeopardising their right to communicate anonymously.

These threats apply not just to individuals, but to groups. Protesters mobilising around the climate emergency, for instance, could be identified and tracked. This impacts not only their democratic rights, as they could be profiled and targeted, but also jeopardises their ability to organise around possibly the single biggest issue for the future of our planet.

AI could reinforce rather than eliminate discrimination, unfairness and bias. We cannot assume that rights have been taken into account in the design of AI systems, even when they claim to limit human bias.

In fact, automated decision-making may lead to people being misidentified and discriminated against on the basis of race, age or other characteristics, owing to in-built biases in the algorithms. This is because data samples may be skewed towards particular population groups: a now well-recognised problem with the facial recognition systems used for public space surveillance, which tend to be biased against black and young people.
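To make the mechanism concrete, here is a minimal, purely illustrative sketch (the groups, features and numbers are all invented): a toy classifier trained on a sample skewed towards one group ends up performing close to chance on the under-represented group, with no malice required on the designer’s part.

```python
# Purely illustrative sketch: a toy classifier trained on skewed data.
# All groups, features and numbers here are invented for demonstration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Each group's "positive" cases cluster at a different offset, standing in
    # for group-specific patterns the model would need enough data to learn.
    y = rng.integers(0, 2, n)
    X = rng.normal(0.0, 1.0, (n, 2)) + np.outer(y, [shift, shift])
    return X, y

# Skewed training sample: 1,000 examples from group A, only 50 from group B.
Xa, ya = make_group(1000, shift=2.0)
Xb, yb = make_group(50, shift=-2.0)  # group B's pattern points the other way

model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

# Fresh test data per group: accuracy is high for the over-represented group
# and close to chance for the under-represented one.
for name, shift in [("group A", 2.0), ("group B", -2.0)]:
    Xt, yt = make_group(500, shift)
    print(f"{name} accuracy: {model.score(Xt, yt):.2f}")
```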

It could be (and has been) argued that people cannot have a reasonable expectation of privacy in public spaces. But this argument is outdated in an era of smart surveillance where individuals can be identified from large datasets.

There are so many problems with automated facial recognition for law enforcement purposes that it should be banned. In San Francisco, it already has been. Yet, seemingly oblivious to these controversies, the City of Joburg is proceeding with its facial recognition plans.

Personalised algorithmic models that rank and curate information can lead to the development of filter bubbles. As things stand, though, the available research points in the opposite direction, with the search engines of companies like Google exposing internet users to a greater diversity of news sources than they would encounter ordinarily.

Even social media users can reap the unintended benefits of incidental exposure to news they would otherwise not look at. Greater AI-enabled content personalisation could amplify these dangers in time to come, though, so these concerns shouldn’t be taken off the table.

Democracy could also be threatened by the discriminatory or biased aspects of AI. For instance, populist politicians who continue to deny or downplay the seriousness of the climate emergency may continue to enjoy support that they don’t deserve.

The rise of the right is a threat to democracy globally. Some right-wing actors have become publishers skilled at computational propaganda, in which AI is used to amplify disinformation. They have also become adept at mobilising social groups that are increasingly closed and inward-looking, rewarding polarising speech and benefiting from a curated worldview.

Individualised targeting raises the question of where the public sphere can still be located: the places (physical, mediated or virtual) where we go to have the conversations that affect all of us.

Internet companies must be challenged to be transparent about their automated filtering of content, which can easily violate the requirement for moderators to take context into account in limiting freedom of expression.

In practising self-regulation, companies tend to be risk-averse and err on the side of caution in moderating “extreme” content; they need to do better than relying on the stock response that content “violates community standards” when taking it down.

It must become standard practice for companies and public authorities to inform individuals about the existence of automated decision-making. Labelling of AI-enabled services and devices must become standard practice. AI users must be able to challenge the bases for decision-making and their consequences.

AI can make profiling opaque and secretive. But AI also enables mass surveillance; in fact, mass surveillance would not be possible without AI, as huge datasets could not otherwise be sifted and analysed. Yet mass surveillance is in itself disproportionate, as it involves monitoring people even where there is no reasonable suspicion of criminality.

Individuals can be targeted from huge datasets, based on patterns in their communications. Those doing the targeting need to disclose the bases on which these patterns are identified, and the selectors they use should be approved by a judge.

People’s data may be stored, analysed and shared in ways they did not consent to, or do not fully understand, leading to data exploitation. This is especially so as smart, connected devices are increasingly integrated into everyday life.

Increasingly, AI users are confronted with the “black box” problem: algorithms can make life-changing decisions about people, but people don’t know how those decisions were arrived at. People will not trust decisions if the bases for them are not transparent. At the same time, companies do not want to reveal their secret sauce.

Information regulators need to step in and ensure that people can examine the input data, as well as the outputs, as this data should help to expose some of the inner workings of the “box”. AI designers also need to do impact assessments of their products, with user participation, to mitigate risks.
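One simple form such an examination of inputs and outputs could take is a counterfactual probe: re-running the decision system with a single attribute changed and comparing the outcomes. The sketch below is purely illustrative – the scoring model is a hypothetical stand-in for any opaque system, with a bias deliberately built in so that the probe has something to find.

```python
# Purely illustrative sketch of a black-box input/output audit.
# 'scoring_model' is a hypothetical stand-in for an opaque decision system;
# the auditor sees only inputs and outputs, never the internals.

def scoring_model(applicant):
    # Toy opaque model. A bias is deliberately encoded here (a penalty for
    # one suburb) so that the external probe below has something to detect.
    score = 0.5 + 0.01 * applicant["income_k"]
    if applicant["suburb"] == "B":  # proxy attribute leaking into the decision
        score -= 0.3
    return score >= 0.6  # True = approved, False = declined

def counterfactual_probe(model, applicant, attribute, values):
    # Re-run the black box with only one attribute changed; if the outputs
    # differ, that attribute is influencing the decision.
    return {v: model({**applicant, attribute: v}) for v in values}

applicant = {"income_k": 20, "suburb": "A"}
print(counterfactual_probe(scoring_model, applicant, "suburb", ["A", "B"]))
# {'A': True, 'B': False} -> the decision flips on suburb alone.
```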

Governments must also ensure that their data protection laws and policies are fit for purpose. Many governments will benefit from opacity in this area as it will allow them to enlist companies in censorship and surveillance efforts, increasing the potential for negative AI.

The Unesco intervention is valuable as it challenges us to view AI as a social, cultural and political resource, rather than being driven purely by technophiles, business and governments. However, as Unesco’s Guy Berger has pointed out, this shift in thinking will happen only if ordinary AI users control the trajectory of AI.

It is a challenge that South Africa has not even begun to confront yet. DM

Jane Duncan is a professor and Head of Department of Journalism, Film and Television. She is author of ‘Stopping the Spies: Constructing and Resisting the Surveillance State in South Africa’ (Wits University Press, 2018). This article is based on an input to a panel organised by Unesco on ‘Steering AI for knowledge societies’, at the 2019 conference of the International Association for Media and Communication Research, Madrid, Spain.
