

I rest my case, Honourable Algorithm: Artificial intelligence could help eliminate judicial bias


Prof Tshilidzi Marwala is the Vice-Chancellor and Principal of the University of Johannesburg and the author of ‘Leadership Lessons From the Books I Have Read’. He is on Twitter at @txm1971.

Studies in the US have shown that artificial intelligence can predict the outcomes of court cases better than lawyers can. There are also indications that AI could make better judgments. In general, AI systems make more rational decisions than judges.

When I was an undergraduate student in Cleveland, Ohio in the US, my senior at the university, Joseph Makhari, and I visited Oberlin College. Oberlin is a liberal arts college that educated the first president of the ANC, John Langalibalele Dube.

As we were about to get onto the highway, Makhari skipped a red robot and was stopped by a traffic cop. He was given a traffic fine, and his licence was suspended because his traffic score was high. He contested the offence, and the matter had to be settled by the courts. I was a witness in the case, and so we went to court. When eventually our case was heard, the judge asked Makhari to make a plea, and he pleaded not guilty.

Realising that Makhari had a characteristically un-American accent, the judge inquired where his interesting accent came from. When Makhari answered that the accent was South African, the judge complimented him on his “impressive accent” and dismissed the case.

Though it has been almost three decades since that incident, I have wondered ever since what prompted the judge to dismiss the case. Constantly playing in my mind have been the words of US Judge Oliver Wendell Holmes Jr, who once remarked during a case: “This is a court of law, young man, not a court of justice.” Was it because Makhari had an impressive accent that was considered more educated than the African-American accent? Was it because the judge felt sympathetic to Makhari because he came from apartheid South Africa? Was it because the traffic cop who had written the fine was late?

Counterfactuals are statements about an alternative reality that did not happen. One can construct all manner of counterfactuals to try to rationalise the judge’s decision, but the actual reason remains unknown. Perhaps the most niggling question in Makhari’s case is whether the judge was biased.

A few weeks ago, the suspended secretary-general of the ANC, Ace Magashule, accused the courts of being biased. As he put it, the judgment had been “littered with countless examples of indications or pointers of actual and/or perceived bias” against him and that there was “a desire to produce or justify a predetermined outcome in favour of the respondents and/or against the applicant”. Given the gravity of the accusations that Magashule faces, many of us dismissed his views. Nevertheless, the episode raises the question of bias in judgments, and that question is not wholly unfounded.

As Gerry Spence, the American trial lawyer, wrote in his 1995 book How to Argue and Win Every Time: “If logic and reason, the hard, cold products of the mind, can be relied upon to deliver justice or produce the truth, how is it that these brain-heavy judges rarely agree? Five-to-four decisions are the rule, not the exception. Nearly half of the court must be unjust and wrong nearly half of the time. Each decision, whether the majority or minority, exudes logic and reason like the obfuscating ink from a jellyfish, and in language as opaque.”

In 2011, researchers from Ben Gurion University in Israel and Columbia University in the US studied 1,000 cases heard by eight judges in Israel. They found that the judges were harsher just before lunch than just after it, with 65% favourable rulings after lunch and none just before lunch. The same pattern held across the day: the judges were more generous at the beginning of the day than at the end. If something as incidental as the time of day influences judges’ rulings, we should be very afraid. In effect, the study showed that how hungry or how tired a judge is can have drastic consequences for the accused.

We have also seen the impact of demographics on judgments. In 1998, a study by Welch et al in the American Journal of Political Science found that race played a significant role in sentencing: black judges were more even-handed in their treatment of black defendants, while white judges were somewhat more lenient towards white defendants. The statistics in the US are certainly damning. The US Sentencing Commission estimated that black defendants get sentences 19% longer than white defendants do for the same offence. Studies have shown that human beings, whether judges or not, are fundamentally biased.

In fact, bias is not limited to our courts but also extends to many areas, including our hospitals. Bias is not the only thing that clouds decision-making. Another is noise, or unwanted variability. For example, in a study, 100 medical radiologists were given 100 images to diagnose. Their diagnoses were stored, and they were given the same images two weeks later without their original diagnoses. Half of them changed their minds. This phenomenon is called noise. Why did half of them change their minds? Was it because of their skill levels? Was it because they slept better or worse the day before? We may never know the correct answer, but noise is an essential factor to deal with.

Recently, Daniel Kahneman, Olivier Sibony and Cass Sunstein published Noise, a book detailing the effect of noise and bias on decisions. Consider 70 people who are asked which continent Johannesburg is on. If all of them say it is in Europe, the response is biased: it is systematically far from the truth, and wrong. If sets of 10 say Asia, Africa, North America, South America, Antarctica, Europe and Australia respectively, the response is noisy, and the outcome is still wrong. If 46 say Africa and the rest spread their answers evenly across the other six continents, the typical response is correct but noisy. If all of them say Africa, the response is unbiased and free of noise.
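The four scenarios above can be sketched in a few lines of Python. This is an illustrative toy, not a statistical tool: bias is read off as the typical (most common) answer differing from the truth, and noise as disagreement among respondents, with the 80% agreement cut-off being an arbitrary choice made for illustration.

```python
from collections import Counter

CONTINENTS = ["Asia", "Africa", "North America", "South America",
              "Antarctica", "Europe", "Australia"]
TRUTH = "Africa"

def describe(responses):
    """Classify a group's answers as (biased, noisy).

    Biased: the most common answer differs from the truth.
    Noisy:  fewer than 80% of respondents agree on one answer
            (an arbitrary illustrative threshold).
    """
    counts = Counter(responses)
    modal_answer, modal_count = counts.most_common(1)[0]
    biased = modal_answer != TRUTH
    noisy = modal_count / len(responses) < 0.8
    return biased, noisy

all_europe = ["Europe"] * 70                            # biased, not noisy
ten_each = [c for c in CONTINENTS for _ in range(10)]   # noisy; no clear typical answer
mostly_africa = (["Africa"] * 46
                 + [c for c in CONTINENTS if c != TRUTH
                    for _ in range(4)])                 # correct on balance, but noisy
all_africa = ["Africa"] * 70                            # unbiased, not noisy

for panel in (all_europe, ten_each, mostly_africa, all_africa):
    print(describe(panel))
```

The point of the sketch is that bias and noise are separate failures: a group can be wrong in unison, wrong in all directions, or right on average while still disagreeing wildly.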

Given that noise and bias are detrimental to decision-making, what is to be done? First, we can use “the wisdom of the crowd”, in which the collective opinion of a group is sought. Going back to Magashule, he seems to have understood this principle when, in appealing his suspension, he asked for a full Bench to hear his case. The wisdom of the crowd can minimise both noise and bias, provided the crowd is sufficiently large and diverse.

For the Magashule case, the wisdom of the crowd was derived from the three judges who heard it: Jody Kollapen, Sharise Weiner and Edwin Molahlehi. This panel was small but diverse, consisting of Indian, white and black African judges. Diversity in group decision-making is an important factor in the transformation of all our sectors, including the judiciary, to ensure fair, unbiased and noise-free decisions.

The second strategy to minimise noise and bias is to be as quantitative as possible. Studies in the US have shown that AI can predict the outcomes of court cases better than lawyers can. There are also indications that it could make better judgments. Judges assess risks to hand down sentences and verdicts, based on features such as the evidence, the presentation of the attorneys and the testimony of witnesses. In general, AI systems make more rational decisions. Here, a judgment could be made by considering the evidence, the defendant’s history and a score that estimates the likelihood of reoffending, for instance, to determine a sentence.
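A reoffending score of the kind described could, in principle, look something like the toy sketch below. Every feature and weight here is invented purely for illustration; real risk-assessment tools are far more complex, and they are only as fair as the data and weights behind them.

```python
import math

def reoffending_score(prior_convictions, age, employed, violent_history):
    """Toy logistic risk score in (0, 1).

    All features and weights are invented for illustration only.
    """
    z = (-1.0
         + 0.45 * prior_convictions       # each prior conviction raises risk
         - 0.03 * (age - 18)              # risk assumed to fall with age
         - 0.60 * (1 if employed else 0)  # employment assumed protective
         + 0.80 * (1 if violent_history else 0))
    return 1.0 / (1.0 + math.exp(-z))     # squash to a 0-1 "probability"

# A first-time, employed 35-year-old versus an unemployed 22-year-old
# with four priors and a violent history: a low score versus a high one.
print(round(reoffending_score(0, 35, True, False), 2))
print(round(reoffending_score(4, 22, False, True), 2))
```

The appeal of such a score is that, unlike a hungry judge, it gives the same answer at 11.55am and 1.05pm; the danger is that any bias in its design or training data is applied with the same mechanical consistency.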

To mitigate this risk, judges should start using technology that can take in all the documents and proceedings of a case and produce a recommendation on it. Safeguards will have to be built in to ensure that the AI system itself is free from bias. This could be a solution for those entrusted with making judgments and arriving at decisions that have far-reaching consequences. It could become part of our arsenal in our quest for justice. So the future of justice lies in the Judge-Machine System! DM


Comments

  • In Gladwell’s latest book “Talking to strangers” there is a story about a few thousand bail cases in New York being fed to a computer (the outcomes were known). The computer scored better than the judges once trained. It turns out judges are not so good at judging character as to who will offend when on bail. Less information can be better. (Doctors often make the same mistake with things like heart disease.) However, computers are still a long way from counting as real artificial intelligence.

    As for bias the AI will be only as good as the training set. Which itself will be biased and incomplete. Will Skynet be more just in the end? We already know the answer to that one.

  • Depends on the bias of the originator. Artificial Intelligence is a contradiction in terms. Pull the battery and let’s see what happens.

  • Perhaps add a feedback element to the AI algorithms – such as following the outcomes of each decision (lots of invasions of privacy!). Or preempt bias accusations by making the algorithm tend towards a uniform distribution across ethnicity, gender, age, wealth etc. At least that can be measured, even if it’s wrong.
