
This article is an Opinion, which presents the writer’s personal point of view. The views expressed are those of the author/authors and do not necessarily represent the views of Daily Maverick.

AI ‘hallucinations’ are threatening the administration of justice in SA

There have already been three incidents in which non-existent cases have been used in court documents. The lawyers involved were required to explain how these fictitious cases came to be cited.

The legal profession stands at a crossroads. Our courts have been confronted with alarming incidents in which lawyers, often relying on artificial intelligence (AI) tools, refer to cases in court documents that simply do not exist.

These court proceedings reveal serious cracks in the justice system: the citation of non-existent cases risks undermining public confidence in the legal profession and eroding the administration of justice itself.

This problem is not hypothetical: it has already surfaced in our courts, in three separate incidents in which non-existent cases were cited in court documents.

In all of these cases, the lawyers involved were required to explain how these fictitious cases came to be cited. What connects these incidents is not only the existence of fabricated legal citations, but also the growing reliance on AI tools that produce content that appears authoritative but may be entirely fictional.

The term for such fabrications is “hallucinations”, referring to AI systems that generate text (including legal references) with no basis in law or fact.

Lawyers who copy such AI outputs into court documents without verification not only risk personal embarrassment, but may also commit serious breaches of their professional duties.

Lawyers are officers of the court, and courts rely on the information they provide. When a lawyer uses AI tools for legal research without proper verification, as seen in recent cases, that conduct directly undermines the legitimacy of the legal system and, in turn, erodes public trust in it.

Ultimately, this type of conduct burdens an already overstretched court system as courts are now forced to waste precious judicial time investigating non-existent material. More gravely, court decisions might be influenced by or depend on false cases or legal references, leading to a potential miscarriage of justice.

This crisis of reliance on AI tools for legal research is a symptom of a legal profession under growing strain. Economic pressures, high volumes of litigation, and an overburdened court system tempt lawyers to cut corners in the name of efficiency. One such shortcut is using AI tools as substitutes for professional judgement, without proper verification of their output.

No clear regulatory guidelines

Currently, there is no clear regulatory guidance on how lawyers should use AI tools in their legal practice. The legal profession’s regulatory bodies have yet to issue binding rules or guidelines governing the verification and use of AI-generated legal content.

The result is a regulatory vacuum in which serious errors can occur without clear accountability mechanisms for lawyers. In such a vacuum, clients may unknowingly suffer from poor representation and incorrect legal advice.

The consequences of leaving this vacuum unaddressed are severe. In a legal system built on precedent, courts depend on accurate citations and reliable legal arguments. False citations in court documents destroy the predictability and stability that are hallmarks of the rule of law.

Also, public faith in the judiciary depends on confidence in its impartiality and in the legal foundations of its decisions. Once this confidence is lost, restoring it is not easy.

There is, however, a path forward. Legislative and regulatory bodies must act with urgency. At a minimum, lawyers should be required to verify all legal authorities sourced from AI tools, and law firms should adopt clear policies on AI use.

Additional training on AI’s capabilities, limitations and risks should become compulsory for lawyers. Courts should also adopt disclosure protocols for matters in which lawyers use AI to prepare court documents. Ultimately, there should be a collaborative effort to develop practical, enforceable standards for integrating AI responsibly into legal practice.

The legal profession must act swiftly to restore its own standards. A failure to do so poses serious dangers to the administration of justice.

After all, the rule of law and the achievement of justice in individual matters demand more than court decisions that rest on AI-generated hallucinations.

And so, too, do the people whose rights depend on such decisions. DM

Comments

John Bewsey Jul 15, 2025, 04:18 PM

If it is so simple and quick for "lawyers" to create fake court rulings using AI, it will be equally simple and accurate to check on these fables, also using AI. It can be done in around 5 seconds while the storyteller is creating its fabrication - while in court as well. No sympathy for those who don't want to use the technology.

Earl Grey Jul 15, 2025, 04:26 PM

This comment shows a lack of understanding of how the technology works. You can't use AI to fact-check AI, because it is not designed to produce facts but rather plausible strings of words. No amount of RAG will get it away from its nature, which is to generate new things. Fortunately, fact checking by opening up your digital repository of actual case reports works just as well to fact check AI-generated pleadings.

Anne Swart Jul 16, 2025, 02:07 PM

No sympathy for a professional using AI. If lawyers present fictitious cases, the courts should sanction them for a period of time.