Algorithmic systems have the potential to reproduce and amplify social injustice: We need ethical AI


Algorithmic systems are composed of sets of instructions that collect and analyse data. But data is not pristine or free of human folly: data bias can reinforce and entrench existing inequalities.

Historically, the popular imagination about the risks of artificial intelligence (AI) approximated some Hollywood pastiche of humanoid robots exacting ruthless vengeance, à la The Terminator.

Now, as algorithmic systems become integrated into our daily lives, our anxieties are no longer limited to the existential threat of the totally unknown; rather, there is an increasing awareness that algorithmic systems have the capacity to reproduce and amplify social injustice.

In brief, algorithmic systems are composed of sets of instructions that collect and analyse data. But data is not pristine or free of human folly: without due care, data sets can import the biases and values of the societal sample from which they are drawn. Indeed, "data bias" reflects historical experience, and hence can reinforce and entrench existing inequalities.
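A toy sketch makes the mechanism concrete. The figures below are invented for illustration: if past hiring decisions favoured one group, a system trained simply to imitate those decisions will reproduce the same skew.

```python
# Hypothetical illustration: a system that imitates historical decisions
# inherits whatever bias those decisions contained.

# Invented historical records: (group, hired) pairs with a skewed outcome.
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 40 + [("B", False)] * 60

def hire_rate(records, group):
    """Fraction of applicants from `group` who were hired."""
    outcomes = [hired for g, hired in records if g == group]
    return sum(outcomes) / len(outcomes)

# A model trained to reproduce past decisions will recommend candidates
# at roughly these historical rates -- bias included.
print(hire_rate(history, "A"))  # 0.8
print(hire_rate(history, "B"))  # 0.4
```

Nothing in the data flags group "B" as less qualified; the disparity lives entirely in the historical outcomes the system learns from.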

Even the most sophisticated systems run this risk: in 2018, Amazon was forced to discontinue its use of a recruitment algorithm that had a bias against women applying for jobs with the tech market leader.

The bias problem is exacerbated by AI's transparency problem. As algorithmic systems become increasingly ubiquitous in our decision making, there is an urgent need for transparency. For many people affected by algorithmic systems, they are effectively a black box: one sees their output, but the process remains shrouded in mathematical mystery.

Suppose then that you are retrenched because your company’s AI determines that you are superfluous; your parole application is denied because the government’s algorithm flags you for a risk of recidivism; or you are investigated for fraud because a revenue collector’s algorithm says you are likely a criminal. If we are to challenge the decisions that affect our lives, then we must have the information that allows us to know why they are made.

Accurate as algorithms might be, their lack of transparency and explainability means that the decisions of algorithmic systems effectively remain arbitrary to those they affect.

As AI becomes central to our decision-making, it is important for algorithmic systems not only to do justice, but also for them to be seen to do justice. Even when algorithms turn out to be safe, their lack of transparency is itself a source of anxiety that erodes public trust. In extreme cases, it fuels the kind of conspiracy mongering that undermines democracy. 

South African AI anxiety 

Two recent South African cases illustrate this anxiety. In the space of about a year, two high-profile algorithmic systems have been drawn into controversy by allegations of impropriety. In neither case has there been conclusive evidence of algorithmic error, but both cases suggest a need to shore up public trust in algorithmic systems.

The more recent controversy concerns the South African Revenue Service (SARS). SARS uses an algorithmic system to identify individuals who are at high risk of tax noncompliance. The algorithm flags approximately 20% of taxpayers, who are then subject to verification by SARS auditors.
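In the abstract, such a selection step might look like the sketch below. This is purely illustrative: SARS's actual model is not public, and the random scores here are placeholders for whatever risk signals a real system would use.

```python
# Illustrative threshold-based selection: score each taxpayer for
# noncompliance risk, then flag the top ~20% for human verification.
# Scores are random placeholders; nothing about SARS's system is public.
import random

random.seed(0)
taxpayers = [{"id": i, "risk": random.random()} for i in range(1000)]

# The cutoff is the 80th-percentile risk score.
cutoff = sorted(t["risk"] for t in taxpayers)[int(0.8 * len(taxpayers))]

# Everyone at or above the cutoff is referred to auditors.
flagged = [t for t in taxpayers if t["risk"] >= cutoff]
print(len(flagged))  # 200 of 1,000 taxpayers flagged
```

The point of the sketch is that the human auditors only ever see the flagged 20%: whether that selection is fair depends entirely on how the risk scores are produced.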

Critics of the system allege that it is unfairly imposing audits (and accompanying anxiety) on ordinary middle-class taxpayers, while the most avaricious noncompliers remain unchecked. The as-yet unproven accusation, in other words, is one of bias.

This case has echoes of the recent medical scheme inquiry. In January 2021, a panel led by Tembeka Ngcukaitobi SC, appointed by the Council for Medical Schemes, found that the process by which medical schemes investigated fraud, wastage, and abuse by doctors had been tainted by systemic bias against black doctors.

Again, it was an algorithmic system that recommended doctors for investigation, and human investigators who would verify the algorithm's findings. After a lengthy inquiry, Ngcukaitobi could not find algorithmic error in the systems his panel assessed but found against the medical schemes nonetheless, because the outcomes of the investigations were racially disparate.

In cases like this, time, effort, money, and reputations have been spent investigating and debating the trustworthiness of algorithms – to some rather unsatisfying conclusions.

What if we could prevent all this strife by effectively vetting algorithms ex ante for bias and other risks? How could we ensure the safety of algorithmic systems in a way that builds public confidence? How do we safeguard algorithmic systems against both bias and the perception of bias? How do we avoid yet more ex post facto inquiries in a state already saturated with them?

Algorithmic auditing

To these ends, we envision a new field: algorithmic auditing. Algorithmic auditing is the process of assessing and mapping the risks of an algorithmic system, recommending mitigation strategies for those risks, and monitoring their development (Koshiyama et al., 2020). It is a field of assurance: organisations that use AI will be able to have their algorithmic systems accredited by third parties to certify that they are responsible, legally compliant, and safe to use.
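One common check in a bias audit is the disparate-impact ratio: the selection rate of the least-favoured group divided by that of the most-favoured group. A ratio below 0.8 (the US "four-fifths rule") is a widely used red flag. The groups, counts, and threshold below are illustrative assumptions, not part of any specific standard named in this article.

```python
# Minimal sketch of a disparate-impact check with invented numbers.

def selection_rate(selected, total):
    """Fraction of a group that the system selected."""
    return selected / total

def disparate_impact(rates):
    """Ratio of the lowest group selection rate to the highest."""
    return min(rates.values()) / max(rates.values())

rates = {
    "group_a": selection_rate(45, 100),  # 0.45
    "group_b": selection_rate(27, 100),  # 0.27
}

ratio = disparate_impact(rates)
print(round(ratio, 2))       # 0.6
flagged = ratio < 0.8        # True: below the four-fifths threshold
```

A real audit goes far beyond one metric, but even this simple check could have surfaced the skewed outcomes in the cases above before they became public controversies.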

This intervention is crucial. First, many cases of algorithmic bias have come to light because the results of the system were notably skewed. But it is not sufficient simply to correct problems as and when they arise. Soon, there will be billions of algorithms reaching every aspect of our lives. If we rely simply on correcting problems retroactively, we will be caught in an endless game of whack-a-mole.

It is imperative, therefore, that we make sure that algorithms are responsible and trustworthy before they cause harm. Second, an ex ante process of assurance prevents not only moral risk but the perception of risk: it prevents speculative accusations, public inquiries, and conspiracy theories.

At Holistic AI, we have been researching and conducting algorithmic audits for corporate and government clients. But our ambitions are grander: we are also building the functionality to scale this process, allowing clients to test their algorithms en masse at low cost, checking them for bias, transparency, robustness, and impact risks. 

The next time the government contemplates a patchwork inquiry, or spars with journalists about its algorithms, we invite them instead to seek out a holistic solution to the ethical quandaries and anxieties of AI, and to consider an algorithmic audit. Algorithmic auditing will ensure the safety of systems before they cause harm, it will preempt costly controversies, and – as capacity grows – it will create trust at scale. DM

Dr Emre Kazim is a leader in the field of AI Ethics, and is the co-founder and COO of Holistic AI, a London-based company specialising in algorithmic auditing. 

Markus Trengove is a senior research fellow at Holistic AI. 


