AI LEGISLATION OP-ED

South Africa faces many challenges in regulating the use of artificial intelligence

There are currently no laws in South Africa specifically regulating artificial intelligence (AI). While the country may choose to use foreign legislation as the basis for drafting its own AI legislation, it will have to be adapted to meet local challenges.

Artificial intelligence (AI) has seen rapid growth in recent years. The release of ChatGPT in November 2022 and several other AI developments have created a frenzy where individuals and businesses are seeking to deploy and leverage AI in their everyday lives.

However, the rate at which AI is being developed far exceeds the creation of AI regulations.

The rapid development and deployment of AI without regulation is cause for concern for many, including well-known technology experts such as Elon Musk and Steve Wozniak, who were among a long list of industry leaders to sign an open letter on 22 March 2023 calling for a halt on AI research and development.

The letter called for a six-month freeze on AI development, both to allow alignment on how to properly regulate AI tools before they become even more powerful than they already are, and to provide legal tools and guidelines to mitigate the obvious risks associated with AI.

Many countries have already started establishing draft acts and legislation to regulate AI.

The European Union has taken a risk-based approach under the European Union AI Act, which classifies AI tools into defined risk categories, each of which prescribes development and use requirements commensurate with the allocated risk.

On 29 March 2023, the United Kingdom’s Department for Science, Innovation and Technology published a white paper on AI regulation. The UK White Paper sets out five principles to guide the growth, development, and use of AI across sectors, namely:

  • Principle 1: Safety, security, and robustness. This principle requires potential risks to be robustly and securely identified and managed;
  • Principle 2: Appropriate transparency and explainability. Transparency requires that channels be created for communication and dissemination of information on the AI tool. The concept of explainability, as referred to in the UK White Paper, requires that people, to the extent possible, should have access to and be able to interpret the AI tool’s decision-making process;
  • Principle 3: Fairness. AI tools should treat their users fairly and must not discriminate or lead to unfair outcomes;
  • Principle 4: Accountability and governance. Measures must be deployed to ensure oversight over the AI tool and steps must be taken to ensure compliance with the principles set out in the UK White Paper; and
  • Principle 5: Contestability and redress. Users of an AI tool should be entitled to contest and seek redress against adverse decisions made by AI.

In South Africa, there are currently no laws regulating AI specifically. South Africa may choose to use foreign legislation as the basis for drafting its own AI legislation, but it is difficult to say at this early stage in the regulatory process.

Inasmuch as it may be beneficial for South Africa to base its AI regulatory framework on existing principles and legislation formulated by other countries, we suspect that South Africa will face the following challenges in respect of establishing AI regulations:

  • Data privacy: AI tools process vast amounts of data and information, and the extent to which personal information (if any) is processed remains unknown. The unregulated use of AI tools could result in the personal information of data subjects being processed without their knowledge or consent. It could also place an organisation in breach of its obligations under the Protection of Personal Information Act (Popia) if its employees are not trained on the acceptable use of AI tools;
  • Cyberattacks: AI tools are susceptible to cyberattacks, and there is an immediate need for regulations that impose adequate security measures on the use of AI tools. After a data breach affecting ChatGPT, Italy’s data protection authority imposed a temporary ban on the use of ChatGPT in Italy, as well as a ban on OpenAI’s processing of the personal information of Italian users. This is an example of the ramifications of deploying AI without an adequate regulatory framework in place;
  • Inequality and unemployment: South Africans are particularly concerned about AI tools automating jobs that would otherwise create job opportunities in the country, thereby worsening the record-high unemployment and poverty rates currently being experienced in South Africa. Our legislation will need to weigh the advantages of AI tools against South Africa’s existing challenges and determine ways in which we can use AI tools to improve our current situation. Furthermore, data bias can lead to decisions that are not equitable and that perpetuate existing social injustices;
  • Lack of understanding and awareness of AI: AI is technical, and the most common issue among rule-makers is the lack of understanding of how AI tools operate, and therefore how to safely and effectively regulate the use of such AI tools. Our rule-makers will need to consult and collaborate with technology experts to ensure that all risks are identified and addressed under South Africa’s AI laws and regulations;
  • Inappropriate use: AI tools could be deployed for criminal purposes, such as money laundering, fraud and corruption, or otherwise used to promote terrorist activities. Any AI laws and regulations that are established for South Africa will need to align with the existing legislation that is currently regulating such criminal behaviour, to avoid further risks and a rise in criminal activity; and
  • Accountability and recourse: South Africa’s AI laws and regulations will need to be clear in respect of accountability, and provide guidelines to assist in determining who would be held accountable for adverse decisions generated by AI tools, as well as the escalation procedure for appealing or contesting an adverse AI decision.

The future of AI regulation in South Africa is unclear at this stage. However, AI tools, like any new technological development, present real risks that should be mitigated through laws and regulations. For now, users of AI tools should be aware of the associated risks and take steps to protect themselves against those risks. DM

Kayla Casillo is Senior Associate in the Corporate Commercial Department of law firm ENSafrica. Alex Powell is a Candidate Legal Practitioner in the department.
