
Business Maverick

MACHINE UN-LEARNING

Malatsi withdraws draft AI policy after hallucination revelations

It was a horrible Freedom Day weekend for Communications Minister Solly Malatsi as an explosive media report unmasked the AI hallucinations underpinning the very policy meant to address exactly that problem.

Minister of Digital Communications and Technologies Solly Malatsi speaks at a B20 side-event in Cape Town on 17 September 2025 following the Digital Transformation Task Force’s recommendations to the G20. (Photo: Supplied / RTC Studios)

Dumisani Sondlo, the Department of Communications and Digital Technologies’ AI policy lead, revealed to Daily Maverick at GovTech 2025 that the National AI Policy’s development was “an act of acknowledging that we don’t know enough”.

Now, as reported by News24, it appears that the admission did not translate into introspection or a broadening of the collective knowledge base. Instead, researchers leaned too heavily on AI tools.

On LinkedIn this weekend Sondlo went from being grateful for having “gained massively-irreplaceable experience in this process” – he was the man who presented and championed the policy – to promoting his new book on leadership strategy, Lion Among Sheep, though the timing may be merely coincidental rather than a deliberate distancing from the crisis.

That said, the leadership questions now all rest on Minister Solly Malatsi, who is getting no quarter on the social media streets.

Big man, big moment

Malatsi has been nothing if not consistent in his approach to governance since joining the Government of National Unity. He has been the adult in the room between the SABC and Sentech, a champion of affordable smartphones, and refreshingly realistic about the role of the Post Office in a modern society.

There is, of course, the matter of the equity equivalent investment programme loophole through which, given the documented determination that Daily Maverick has reported on, he is quite obviously trying to squeeze Starlink. That effort may now be severely compromised.

“Following revelations that the draft national artificial intelligence policy published for public comment contains various fictitious sources in its reference list, we have initiated internal questions which have now confirmed that this was the case,” Malatsi said in a media statement following a public outcry over the weekend.

“This failure is not a mere technical issue but has compromised the integrity and credibility of the draft policy… As such, I’m withdrawing the draft national policy. South Africans deserve better.”

The minister has committed to internal accountability, stating that “there will be consequence management for those responsible for drafting and quality assurance”. The department was treating the matter with the “gravity it deserves” and acknowledged it as a lesson taken with humility.

Diabolus ex machina

The devil was in the details for three of the six pillars the policy rests on, namely Capacity and Talent Development, Economic Transformation and Responsible Governance – affecting over a third of the policy.

Malatsi revealed that “the most plausible explanation is that AI-generated citations were included without proper verification”.

A telling example lies in the last of these pillars: because the draft policy loosely modelled the EU AI Act’s tiered regulatory system, the authors needed European legal scholarship.

The policy justified the categorisation of high-risk AI, data sovereignty frameworks and regulatory sandboxes by citing Müller Schmidt 2024 in the European Journal of Law and Technology.

While the journal and scholars with the surname Schmidt exist, the Large Language Model (LLM) used to compile that policy section combined real author names with a real journal to fabricate a paper that perfectly fit the narrative requirements.

That compilation was most likely done without access to paywalled academic libraries, so when prompted to, say, justify youth empowerment and AI in Africa… the LLM invented evidence that perfectly mirrors the prompt’s assumptions, with every additional prompt in the same session only compounding the errors.

A crisis of confidence

The minister, however, did use the circumstance of the errors as an opportunity to underline the need for the policy: “In fact, this unacceptable lapse proves why vigilant human oversight over the use of artificial intelligence is critical.”

But that structural irony does not remove his signature on the Government Gazette entry or erase the online threads of his critics, regardless of who in the broader team was responsible for the quality checks.

In response to the crisis, the Department of Communications and Digital Technologies has taken immediate remedial actions. The first step was the complete withdrawal of the Draft National Artificial Intelligence Policy to prevent further public comment on a structurally compromised document.

The next step will be the most crucial to restoring trust in Malatsi: visible accountability. Only after that can the country take seriously an AI policy that proposes enforcement against irresponsible AI. DM
