The age of artificial intelligence (AI) calls for widespread change far beyond technology. This week, it emerged that China has outlined policies to regulate its domestic AI sector in a bid to balance state control with support for companies to become viable global competitors. This has been billed as the most comprehensive set of AI guidelines outlined so far.
In stark contrast, Africa has been accused of being far too slow to act – and South Africa is no exception.
As we see regulations emerge in pockets elsewhere, the pertinent question is, what should the regulatory trajectory be in the South African context?
Important in this discussion is to first understand why regulating AI is necessary.
While AI has paved the way for significant advancement, the caveat is that it is also increasingly being used for harm. The proliferation of autonomous weapons, the spread of dangerous social media rhetoric, entrenched algorithmic bias and technology’s ability to exacerbate our inequalities demonstrate the peril.
At the “AI for Good Summit” in July this year, UN Secretary-General António Guterres said that developing AI “for the good of all” requires a framework grounded in human rights, transparency and accountability.
What is essential is that regulation strikes a balance between innovation and responsible use, ensuring that AI technologies benefit humanity while minimising potential risks and negative impacts. Ethical AI and regulations around this are essential to ensure fairness, maintain transparency, establish accountability, protect privacy and build public trust.
The goal is for AI development and deployment to align with societal values and long-term wellbeing.
The progress made in AI technology highlights the need for legal and ethical guidelines, but history has shown that laws are usually reactive rather than proactive.
South Africa’s regulatory context cannot be divorced from that of the wider continent. There has been some progress.
In 2021, Africa’s AI blueprint was launched. The proposal suggested establishing regional AI Centres of Excellence to encourage collaboration across various AI fields in Africa.
It emphasised the importance of ethical considerations in AI adoption and highlighted human development as a primary focus. The document also underscored the economic value of data and advocated for its effective management in service of economic growth. It identified key sectors that could benefit from AI adoption and outlined a potential roadmap for member states to navigate AI implementation.
Additionally, the proposal suggested policy and regulatory approaches to address AI challenges.
Similarly, in 2021, the African Commission on Human and Peoples’ Rights adopted Resolution 473, outlining the need to address the implications for human rights of AI, robotics and other new and emerging technologies in Africa.
Yet, despite these various initiatives, little has been actioned so far. We need to draw the line between rhetoric and policy.
Currently, there is no specific legislation in South Africa regarding AI.
The Presidential Commission on the Fourth Industrial Revolution has recommended reviewing and creating policies and legislation to empower stakeholders to use technology responsibly. The focus is on data privacy and protection laws and digital taxation, as well as on building a science-literate judiciary.
Although this is a helpful starting point for establishing a legal framework, we have been slow to adopt these recommendations, and their scope may not be comprehensive enough to keep up with the rapid pace of AI developments.
As the University of KwaZulu-Natal’s Dr Dusty-Lee Donelly argues, “While a core set of general principles for the ethical development of AI has emerged, those principles must still be operationalised through legal regulations…
“Thus, existing legal principles must be adapted, or new principles developed to mitigate the risks to human wellbeing while not stifling innovation and leading to non-compliance.”
So, what is to be done?
AI researcher Kingsley Owadara argues that regulating AI should be approached from three angles: first, laws enacted by a parliament; second, strategies for the adoption of AI; and, finally, policies.
As he further argues, Africa is not ready to action regulation in the same way as the European Union, for example. This is because any legal framework needs to be meticulously developed to accurately represent the continent’s current circumstances and future ambitions.
It can thus be argued that to effectively regulate AI, South Africa (and Africa, by extension) should engage diverse stakeholders to establish a comprehensive legal framework and prioritise ethical guidelines tailored to the local context.
For instance, are we taking into account our infrastructure or the digital divide?
As Emile Ormond outlined, “Most of the South African-specific risks are more sociotechnical, manifesting the country’s environment. An absence of policy and regulation, for example, is not an inherent feature of AI. It is a symptom of the country being on the periphery of technology development and related policy formulation.”
This has to be a consideration in our regulatory approach. What is apparent is that we need to build capacity, create regulatory bodies and raise public awareness.
South Africa certainly must catch up in many instances, but regulation should not wait.
As Matt Sheehan, a researcher in the field, states, “[We] can learn from Chinese regulators to be targeted and iterative” in our approach.
And, I would add, be proactive. DM