“A policy framework without credible, technically capable, adequately resourced enforcement is not governance – it is theatre,” said former journalist and now founder of Ubiquity AI, Kaveer Beharee.
He was commenting on Communications Minister Solly Malatsi’s unveiling of a draft national AI policy that blends ubuntu philosophy with innovative frameworks, including a proposed AI Insurance Superfund reminiscent of the Road Accident Fund.
The axe Beharee is grinding concerns the state's reliance on existing regulators (like the Information Regulator and Icasa) to oversee AI; he points to their current lack of staff and technical literacy in enforcing existing laws, such as the Protection of Personal Information Act, as evidence. He makes a good point.
“Without an independent, properly funded AI regulatory authority with prosecutorial power, this policy, honestly, is a joke,” said Beharee.
But a closer reading reveals a policy that aligns with global frameworks such as the OECD AI Principles and Unesco’s ethics recommendations.
In its draft form (public comment is open for 60 days), the document contains several novel, deeply localised and genuinely surprising interventions that distinguish it from Western regulatory models.
A national insurance fund for AI?
Perhaps the most legally and structurally anomalous proposal, especially given Malatsi's largely libertarian approach to equity regulations, is the establishment of an AI Insurance Superfund.
The policy authors acknowledge that advanced AI models often operate in ambiguous decision-making spaces where liability for harm is difficult to trace back to a single developer or entity. This fund is cast in the same mould as the Road Accident Fund (RAF), providing a state-backed safety net to compensate individuals or entities harmed by AI-driven outcomes.
Because the policy is still in draft, there is little indication of the fund’s accounting practices. But before we get lost in the weeds of state-backed compensation, it is crucial to understand what the Department of Communications and Digital Technologies (DCDT) is trying to achieve here.
Malatsi frames it as a necessary step in deliberately shaping South Africa’s digital future in the public interest. “We must create the conditions for innovation, growth, and better service delivery while also protecting people’s rights and ensuring that AI does not deepen inequality or exclusion,” he said on Friday.
It’s a noble sentiment grounded in the Constitution. However, the tech ecosystem tasked with building this future is inherently sceptical of the state’s ability to get out of its own way.
Hitting the procurement wall
Joshua Harvey, the CEO of local tech startup enabler Specno, argues that while the policy takes a sensible “middle of the road” approach, its success rests entirely on whether the government will structurally reform its engagement with local innovators.
“Modernisation without procurement reform is just a document,” Harvey warns. His point is that the traditional public sector tendering process systematically shuts out agile AI startups in favour of large enterprises.
His advice to the state is to issue problem-framing challenges rather than rigid technical tenders: “‘How can we use AI to reduce queue times at Home Affairs?’ is a better brief than a 60-page RFP [request for proposal].”
Harvey also flags the critical risk of imported bias. Because public sector data remains siloed, South African developers are forced to rely on datasets from the Global North.
To fix this, he proposes a National Data Trust to unlock anonymised public data for local researchers. Fortunately, the DCDT draft does nod in this direction, proactively advocating for non-private data to be treated as a public good.
The black box paradox
Where the policy truly stumbles, however, is in its attempt to be everything to everyone. The DCDT’s explanatory note explicitly admits, “There are various interventions that have wording that is purposefully open-ended”.
While this is framed as collaborative, it does create a large loophole, delaying hard regulatory stances and rendering the immediate framework legally ambiguous.
To illustrate, let’s look at this policy statement:
“Because advanced and sensitive systems may have less transparent decision-making processes (i.e., black box scenarios), South Africa should carefully select use cases which require regulatory oversight (and not reject every system which does not have transparent processes).”
This hands developers of opaque, complex models a potential free pass from transparency requirements: they need only claim their systems are too advanced to be regulated.
Which brings us back to Beharee’s core critique: who will police this?
The draft promises to establish an independent AI Regulatory Authority to monitor compliance, perform audits and issue certifications. In the same breath, it insists on a multi-regulator model in which oversight is distributed across Icasa, the Information Regulator and the Competition Commission.
By spreading responsibility across existing bodies that, to the opening point, lack machine-learning auditing capacity, the state risks creating a fragmented bureaucratic maze where no one is accountable.
And the practical independence of the proposed National AI Commission and AI Ethics Board remains vague. The policy language concedes that their relationship with the DCDT will determine whether they operate as entities under government influence or with true independence, leaving the door wide open for political interference and regulatory capture.
Ubuntu, but for AI
Despite these glaring structural gaps, the draft includes several deeply localised mandates. It formally integrates the Rainbow Nation's foundational philosophy of ubuntu as both a beacon and a lens for AI governance, demanding that systems serve the common good rather than focusing only on Western, individualistic data privacy rights.
It also takes a surprising swing at international labour laws, adopting a Data Justice approach that requires equitable wages for local gig-workers employed by multinationals and even mandates psychological care for the human data-labellers who train AI systems.
Malatsi’s policy also includes a provision requiring developers to ensure that consumers have the option to bypass AI and engage with humans, but it stops short of a firm mandate by attaching a “where feasible” qualification.
The pièce de résistance is a massive, state-directed technological mandate to achieve real-time language translation across all 12 official languages to overcome national communication barriers.
These are ambitious, uniquely South African goals. But as the public comment window ticks down toward the 10 July deadline, the tech sector must aggressively interrogate how to turn these open-ended options into enforceable realities. DM

Solly Malatsi during the DA Federal Congress 2026 on 12 April in Johannesburg. (Photo: Felix Dlangamandla) 