DAILY MAVERICK WEBINAR

Plethora of concerns on issues of accountability, transparency and privacy in AI content moderation

From left: Former BBC foreign correspondent and Institute for Security Studies' emerging threats consultant Karen Allen | Attorney and Director of ALT Advisory, Avani Singh | CEO and co-founder of Utopia Analytics, Dr Mari-Sanna Paukkeri. (Photos: Supplied)

In the second instalment of the cybersecurity webinar series with former BBC foreign correspondent Karen Allen on Wednesday, participants heard about the ‘promise and perils’ of artificial intelligence and content moderation on social media platforms.

As South Africa talks about the Fourth Industrial Revolution, the use of artificial intelligence (AI) is going to become more and more prevalent. While AI can be a powerful tool for moderating online social interactions, there is a plethora of concerns about accountability, transparency and privacy in AI content moderation.

As Dr Mari-Sanna Paukkeri, CEO and co-founder of the Finnish tech firm Utopia Analytics, and Avani Singh, a South African data-rights activist and lawyer, pointed out in conversation with the Institute for Security Studies' senior research adviser, Karen Allen, the need for ethical AI is more pressing than ever.

“What do we mean by ethical artificial intelligence?” Allen asked the AI experts.

Ethical AI would aim to ensure "the protection and promotion of fundamental rights like freedom of expression, access to information, privacy, equality, non-discrimination and access to a meaningful remedy," Singh responded.

These may be the minimum requirements of what constitutes ethical AI, but the confluence of other elements, such as "transparency, accountability, openness and processes of due diligence", also speaks to ethical artificial intelligence, she added.

In South Africa, AI is being used by media companies to sift through large volumes of data to produce articles and, in some instances, to write articles from scratch, said Singh. 

But AI is increasingly “being used to moderate or curate platforms to determine what is put out”, said Singh. This has tremendous benefits, she added, but AI content moderation also poses tremendous risks, including issues around biases and restrictions on free speech.

AI content moderation

For Utopia Analytics, described as a "text analytics and content moderation" firm, AI content moderation simply refers to the curation of online content by an automated tool that has learnt from human moderators, said Paukkeri.

For example, if a platform is looking to restrict hate speech, Utopia Analytics' AI moderator could be used to curate and eliminate improper content.

The firm's AI moderator takes human moderation decisions as training data, learns what made the human moderator accept or decline each comment, and then mimics those decisions on a larger scale, said Paukkeri.

“This works for any language and any dialect in the world, [and] it takes only two weeks to get the AI model working for a new social platform”, said Paukkeri. 
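
Utopia Analytics does not publish the internals of its moderator, but the workflow Paukkeri outlines, training on human accept/decline decisions and then applying the learned policy at scale, is a standard supervised-learning setup. The sketch below illustrates only that general idea in Python with scikit-learn; the data, features and model choices are hypothetical and are not a description of Utopia's actual system.

```python
# Rough illustration only: a generic supervised moderation pipeline trained on
# human accept/decline decisions. This is NOT Utopia Analytics' system; the
# library choices, features and data are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Historical decisions made by human moderators: 1 = declined, 0 = accepted.
comments = [
    "Thanks for sharing, this was really helpful",
    "You people are subhuman and should be driven out",
    "I disagree with the article but the data is interesting",
    "Go back to where you came from",
]
decisions = [0, 1, 0, 1]

# Character n-grams avoid language-specific word lists, which is one common way
# to make the same approach workable across languages and dialects.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(comments, decisions)  # learn the human moderators' policy

# The trained model then mimics those decisions on new comments at scale.
new_comments = ["Interesting point, thank you", "People like you don't belong here"]
print(model.predict(new_comments))  # e.g. [0 1]
```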

Singh commended Utopia Analytics for respecting the United Nations' Universal Declaration of Human Rights, but raised concerns about other AI service providers not following similar guidelines or codes of conduct, leaving the public at the mercy of AI moderators.

Because the AI moderator learns from human decision-making, it inherits what humans define as, for example, hate speech.

“[But] when we talk about moderating hate speech, there is no internationally accepted definition of what constitutes hate speech and so legitimate content may be removed based on the very cautious or overbearing nature of the platforms themselves, which creates huge implications for the right to freedom of expression,” explained Singh. 

AI moderation also raises privacy concerns regarding data collection and storage. 

Utopia Analytics adheres to EU privacy legislation, said Paukkeri. The company does not collect any data itself; it receives data from its customer companies, provided by users of their platforms.

If the firm no longer has a relationship with a customer, EU privacy legislation requires that the AI-moderated data be deleted, she said.

Human and machine biases

Another challenge in moderating content through AI is the human bias that AI moderating tools effectively acquire from their human training data. According to Paukkeri, there are many ways of preventing built-in human biases from being passed on to AI moderators.

“Every time we build a new AI model from the data we receive from the customer, we [assess it] and if there’s something that shouldn’t be there we will let the customer know and ask them to re-moderate that content,” said Paukkeri.

Utopia Analytics’ AI also learns a “general moderation policy” and therefore notices if there are outliers or if different humans have been moderating in a different way, which happens very often, said Paukkeri. 
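
The webinar did not detail how such outliers are detected. One simple, hypothetical way to surface moderators who deviate from a shared policy is to compare each moderator's decisions on the same items against the majority decision, as in the sketch below; the data and the flagging threshold are invented for illustration.

```python
# Hypothetical sketch: flag human moderators whose decisions deviate strongly
# from the overall ("general") moderation policy. Data and threshold are made up.
from collections import Counter, defaultdict

# (comment_id, moderator, decision) where decision is "accept" or "decline".
logs = [
    (1, "mod_a", "accept"), (1, "mod_b", "accept"), (1, "mod_c", "decline"),
    (2, "mod_a", "decline"), (2, "mod_b", "decline"), (2, "mod_c", "accept"),
    (3, "mod_a", "accept"), (3, "mod_b", "accept"), (3, "mod_c", "accept"),
]

# The majority decision per comment stands in for the learned general policy.
by_comment = defaultdict(list)
for comment_id, _, decision in logs:
    by_comment[comment_id].append(decision)
majority = {cid: Counter(ds).most_common(1)[0][0] for cid, ds in by_comment.items()}

# Agreement rate of each moderator with the majority decision.
agree, total = Counter(), Counter()
for comment_id, moderator, decision in logs:
    total[moderator] += 1
    agree[moderator] += decision == majority[comment_id]

for moderator in total:
    rate = agree[moderator] / total[moderator]
    flag = " <- outlier, ask the customer to re-moderate" if rate < 0.7 else ""
    print(f"{moderator}: {rate:.0%} agreement{flag}")
```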

However, many AI moderation solutions use reputation as part of their model, which can work to perpetuate biases. For example, if a user with a history of bad online behaviour types a message identical to one typed by a user with a history of good online behaviour, only the badly behaved user's message would be eliminated, said Paukkeri.
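
No specific vendors were named, and the snippet below is only a toy illustration of the problem Paukkeri describes: once a user's reputation feeds into the moderation score, identical text can receive different outcomes. The scoring formula, weights and threshold are invented.

```python
# Toy illustration of reputation-driven bias: if a user's past behaviour
# ("reputation") feeds into the moderation score, two identical messages can
# be treated differently. Weights and threshold are invented for illustration.
def moderation_score(text_toxicity: float, user_reputation: float) -> float:
    # Lower reputation pushes the score towards removal, regardless of the text.
    return 0.7 * text_toxicity + 0.3 * (1.0 - user_reputation)

text_toxicity = 0.5            # same message, same score for the text itself
good_user_reputation = 0.9     # long history of accepted posts
bad_user_reputation = 0.1      # long history of declined posts

for label, reputation in [("good-history user", good_user_reputation),
                          ("bad-history user", bad_user_reputation)]:
    score = moderation_score(text_toxicity, reputation)
    verdict = "removed" if score > 0.6 else "kept"
    print(f"{label}: score={score:.2f} -> {verdict}")
```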

‘How do you hold an algorithm to account?’

Because of the increasing presence of AI content moderation, it has become necessary for professional bodies to update declarations and codes of conduct to deal with the issue of accountability. “Accountability remains one of the biggest gaps when we talk about artificial intelligence and that’s partly because of the voluntary codes that don’t necessarily establish accountability measures and perhaps a slight reluctance [sic],” said Singh. 

Earlier this year, the European Commission published its draft regulation on "the harmonisation of rules [regarding] AI", she said. The commission seeks to establish an EU AI board, and users would be subject to significant fines of up to €30-million for the most egregious violations, she added.

While we are seeing methods to achieve accountability and legislation moving swiftly in certain parts of the world such as the EU, there is a lag in the African context, said Singh. 

In South Africa, the Protection of Personal Information Act achieves some accountability, but the act is largely outdated when it comes to AI and machine learning, she said.

“I fully support the benefits that AI can provide and I take nothing away from the potential that it offers to achieve really meaningful solutions,” said Singh. “I just think we are on a very concerning path at the moment, which is an unaccountable, poorly transparent one. And so, I think there is a lot of work that needs to be done.” DM

