The legal profession stands at a crossroads. Our courts have been confronted with alarming incidents in which lawyers, often relying on artificial intelligence (AI) tools, have cited cases in court documents that simply do not exist.
These incidents reveal cracks in the justice system serious enough that reliance on such non-existent cases risks undermining public confidence in the legal profession and eroding the administration of justice itself.
This problem is not hypothetical; it has already surfaced in our courts. There have been three incidents in which non-existent cases were cited in court documents, namely:
- Parker v Forsyth NNO and Others (1585/20) [2023] ZAGPRD 1 (29 June 2023)
- Mavundla v MEC: Department of Co-Operative Government and Traditional Affairs KwaZulu-Natal and Others (7940/2024P) [2025] ZAKZPHC 2; 2025 (3) SA 534 (KZP) (8 January 2025)
- Northbound Processing (Pty) Ltd v South African Diamond and Precious Metals Regulator and Others (2025/072038) [2025] ZAGPJHC 661 (30 June 2025)
In all of these cases, the lawyers involved were required to explain how the fictitious cases came to be cited. What connects these incidents is not only the fabricated citations themselves, but also the growing reliance on AI tools producing content that appears authoritative but may be entirely fictional.
The term for such fabrications is “hallucinations”, referring to AI systems generating text (including legal references) that has no basis in law or fact.
Lawyers who copy such AI outputs into court documents without verification not only expose themselves to personal embarrassment, but may also commit serious breaches of their professional duties.
Lawyers are officers of the court, and their use of AI tools for legal research without proper verification (as seen in the recent cases) directly affects the legitimacy of the legal system, because courts rely on the information lawyers provide. This, in turn, erodes public trust in the administration of justice.
Ultimately, such conduct burdens an already overstretched court system, as judges are forced to waste precious time investigating non-existent material. More gravely, court decisions might be influenced by, or depend on, false cases or legal references, leading to a potential miscarriage of justice.
The crisis of relying on AI tools for legal research is a symptom of a legal profession under growing strain. Economic pressures, high volumes of litigation, and an overburdened court system tempt lawyers to cut corners in search of efficiencies. One such shortcut is treating AI tools as substitutes for professional judgement, without proper verification of their output.
No clear regulatory guidelines
Currently, there is no clear regulatory guidance on how lawyers should use AI tools in their legal practice. The legal profession’s regulatory bodies have yet to issue binding rules or guidelines governing the verification and use of AI-generated legal content.
The result is a regulatory vacuum in which serious errors can occur without clear accountability mechanisms for lawyers, and in which clients may unknowingly suffer poor representation and incorrect legal advice.
The consequences of leaving this vacuum unaddressed are severe. In a legal system built on precedent, courts depend on accurate citations and reliable legal arguments. False citations in court documents destroy the predictability and stability that are the hallmarks of the rule of law.
Moreover, public faith in the judiciary depends on confidence in its impartiality and in the legal foundations of its decisions. Once that confidence is lost, it is not easily restored.
There is, however, a path forward. Legislative and regulatory bodies must act with urgency. At a minimum, lawyers should be required to verify all legal authorities sourced from AI tools, and law firms should adopt clear policies on AI use.
In addition, training on AI’s capabilities, limitations and risks should become compulsory for lawyers. Courts should also adopt protocols for disclosure where lawyers use AI in preparing court documents. Ultimately, there should be a collaborative effort to develop practical, enforceable standards for integrating AI responsibly into legal practice.
The legal profession must act swiftly to restore its own standards. A failure to do so poses serious dangers to the administration of justice.
After all, the rule of law and the achievement of justice in individual matters demand more than court decisions that rest on AI-generated hallucinations.
And so, too, do the people whose rights depend on such decisions. DM