BEYOND RISK OP-ED
Why South Africa needs a more holistic and contextual approach to AI regulation
South Africa is a developing country with a history of inequality and discrimination. Based on this alone, there is a real possibility that AI could be a strategic developmental tool while also exacerbating existing inequalities and discriminatory practices. This requires a context-specific approach to effective regulation and a national AI strategy. At the moment, we have neither.
The regulation of artificial intelligence (AI) has come to the fore as people become increasingly aware of the risks associated with AI use in different contexts. The European Union is the first regional body to attempt to regulate AI, having developed a fairly comprehensive AI Act that aims to ensure that “AI systems are developed and used in a safe and trustworthy manner, in line with the EU’s values and principles”.
The act takes a prescriptive approach to regulating AI systems, setting out a detailed list of requirements for all AI systems, with the most stringent requirements applying to systems that pose the highest risks to people’s safety and fundamental rights.
The EU law categorises and places restrictions on AI systems along a four-tier risk scale: unacceptable risk, high risk, limited risk and minimal risk. The gradation in risk classes is based on how much potential an AI system has to negatively affect human interests or rights.
For example, systems that are used for social scoring or that use subliminal or manipulative techniques are considered unacceptable and are totally banned by the act.
The AI Act imposes strict requirements for systems identified as high-risk AI systems, requiring human oversight, technical documentation and impact assessments.
The act also imposes transparency requirements for all AI systems, including the provision of information about the system’s functioning and how it was developed, and establishes a system for market surveillance and enforcement, with the European Commission and national authorities empowered to take action against AI systems that do not comply with the law.
The AI Act is expected to be adopted in the coming months, after which it will be the world’s first major piece of legislation on AI.
On the other hand, the UK has published a white paper detailing its plans for what it claims is a “pro-innovation approach” to AI regulation. It takes what it calls a “risk-based” approach, focusing on regulating AI systems known to pose a high risk to people or society, and covers many of the same topics and proposed solutions as the EU AI Act.
However, it emphasises the UK’s commitment to innovation: the intended legislation is designed to promote innovation in AI while ensuring that it is used safely and responsibly, including by funding AI research and development and by creating a regulatory environment supportive of innovation.
Ultimately, the UK approach to AI regulation is more focused on promoting innovation than the EU AI Act is. In particular, the EU AI Act has been criticised for being so restrictive that it could stifle innovation.
Further, in light of newer generative AI systems such as ChatGPT, the prescriptive approach to AI regulation breaks down: these kinds of AI are not built for a specific context or conditions of use, and their openness and ease of use allow deployment at an unprecedented scale.
Thus, even where the producers of an AI system do not intend it to be used for a restricted purpose, others can repurpose it and engage in exactly those uses.
What approach to adopt in SA?
Looking at South Africa, the EU and UK approaches have some merit, but contextually they may not be easily emulated. For example, both the prescriptive and risk-based approaches assume that all stakeholders have access to the same level of information and can accurately assess AI risks. In South Africa, this is far from the truth.
As a developing country with a history of inequality and discrimination, South Africa faces a real possibility that AI could serve as a strategic developmental tool while simultaneously exacerbating existing inequalities and discriminatory practices.
As such, a contextual, holistic approach to AI seems more feasible for South Africa.
Such an approach to AI regulation would promote the responsible development and use of AI, while also protecting people’s rights and interests and levelling the field for participation in the AI revolution.
The main advantage of a contextual, holistic approach is that it takes into account the broader societal and ethical implications of AI technologies, as well as their potential benefits. It is also more attractive in context because it focuses on creating a regulatory framework suitable for a wide range of AI applications and use cases, while promoting responsible innovation in the AI sector.
A holistic, contextual approach is preferred because it would be grounded in establishing public confidence in AI, along with the participation of a wide range of stakeholders, including government, corporations, civil society and academia, who should make up an advisory council.
The first step towards adopting a holistic, contextual approach to AI regulation would be to adopt a national AI strategy that is tailor-made for the South African context and that outlines the country’s vision for the development and use of AI.
South Africa does not have a national AI strategy yet.
Such a strategy should be grounded in the realisation that AI has the potential to be a powerful tool for social and economic development, but also that there are risks associated with AI, such as bias, discrimination and (potentially massive) job displacement.
The strategy should also lay out guidelines for how AI should be developed and used in South Africa. These should include human-centred development and use of AI, guarantees of fairness and security in AI systems, and mechanisms for accountability and transparency.
The lack of a national AI strategy in South Africa has been criticised widely.
Read more in Daily Maverick: South Africa’s AI blind spot may have repercussions for the economy in our changing digital world
The main criticism is that, in the absence of an AI development strategy, South Africa cannot take full advantage of AI-assisted innovation.
Conversely, it also means that the general public is left exposed to the dangers of interacting with and using AI technologies that may not have their best interests at heart.
What we do have, though, is the Presidential Commission on the Fourth Industrial Revolution, which consists of leaders from academia, business and civil society. The commission’s only key contribution to date has been identifying the development and advancement of AI as a key focus area in South Africa’s digital economic development strategy, but the need for regulation has now arrived.
Perhaps it would be prudent to re-task the commission to develop an AI strategy or to convene a smaller, nimbler AI advisory council.
Once a national AI strategy has materialised, framework legislation should be developed. Such legislation should set out principles, rules and standards for the development and use of AI and, as importantly, empower either a responsible minister or a digital regulator, advised by the AI council, to rapidly develop new rules in response to change.
Ideally, this law should combine prescriptive and risk-based approaches, paired with a comprehensive system for monitoring and enforcing the regulatory framework, and such a system should be independent and transparent.
Such a law should also address issues surrounding the legal personality of AI, such as authorship in copyright and inventorship in patent law. Dr Andrew Rens and I have written about some of the considerations.
Now is the time to act.
The foundation is in place, and the nation must shift from a reactive legislative approach to one that anticipates the opportunities and risks associated with AI.
Such a move would also send a strong message to the rest of the world – establishing African nations as active participants in AI development and use rather than merely consumers. DM
Hanani Hlomani is a research fellow at Research ICT Africa, a senior researcher at the Intaka Centre for law and technology and a PhD candidate in law at the University of Cape Town. The views expressed in this article are those of the author and do not necessarily reflect the official policy or position of any of the organisations with which he is affiliated.