Artificial intelligence is no longer a future ambition for South Africa’s financial sector. It is already embedded in customer interactions, fraud detection systems and internal operations — from chatbots to call centres to integrated “super apps” on your phone.
But as adoption accelerates, a clear message emerged from a panel discussion at the Financial Sector Conduct Authority’s (FSCA’s) 2026 conference this week: governance, skills and regulation are struggling to keep pace.
Or, as moderator Ayanda Ngcebetsha, director of data and AI at Microsoft South Africa, put it bluntly: “AI is indeed here. We can no longer wish it away.”
From pilot projects to real decisions
The panel made it clear that AI in financial services has moved well beyond experimentation. Darren Franks, co-founder of the FinTech Association of South Africa, shared fresh industry data showing that adoption is already widespread, even if still uneven.
Franks said fintech firms are already well into AI adoption, with an average maturity score of 3.45 out of five. While 14% have live use cases in production and a further 32% are scaling, not a single firm has reached full deployment. More concerning is the governance gap: 20% of firms have no AI governance, and only 5% report mature oversight frameworks, even as 86% say AI will be critical to their business in the next five years.
“This is not sci-fi. AI is here, and it’s being used in businesses today,” warned Franks.
From customer service chatbots to fraud detection and compliance monitoring, AI is increasingly being deployed in live environments and, in many cases, it is already shaping real customer outcomes.
More importantly, the role of AI is evolving fast.
“AI in finance is moving from decision support to decision delegation,” said panellist Fatos Koc, head of the financial markets unit at the Organisation for Economic Co-operation and Development (OECD), who joined online, speaking from an international regulatory perspective.
“Models are no longer just advising humans. They are executing decisions autonomously at scale,” she said.
A case study of AI use in financial services
In a separate interview, Discovery Bank’s CEO, Hylton Kallner, told this journalist that the bank is using AI to:
🤖Build a behavioural fingerprint for every client, based on how much they usually spend, who they usually pay, when they make payments and where they transact from (geolocation).
🤖Monitor every single transaction in real time against that fingerprint.
🤖Use network effects to see if:
- Other clients are paying the same beneficiary in similar ways; and
- That beneficiary/account looks like a possible fraudulent or mule account.
Based on the AI’s risk assessment, the Discovery Bank system can do three things, in real time:
🚨Show a red alert in‑app if a transaction looks highly suspicious and ask the client to double‑check.
⏳Delay the payment in a small number of high‑risk cases to confirm with the client.
🔒Lock down the app entirely if it appears the client may be under duress or a phone has been compromised.
A key design goal Kallner highlighted is minimising false positives so that the bank protects clients without “creating chaos” by blocking too many legitimate transactions.
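For readers curious about the mechanics, the approach Kallner describes can be sketched very loosely as a score-and-thresholds flow. Everything below — the features, weights, thresholds and names — is a hypothetical illustration for explanation only, not Discovery Bank's actual system:

```python
# Toy sketch of a behavioural-fingerprint risk check. All weights and
# thresholds are invented assumptions; a real system would learn these
# from data and tune them to minimise false positives.
from dataclasses import dataclass

@dataclass
class Fingerprint:
    typical_spend: float      # client's usual transaction size
    known_beneficiaries: set  # accounts the client usually pays
    usual_locations: set      # geolocations the client transacts from

def risk_score(fp: Fingerprint, amount: float, beneficiary: str,
               location: str, beneficiary_flagged: bool) -> float:
    """Return 0.0-1.0: how far this transaction deviates from the fingerprint."""
    score = 0.0
    if amount > 3 * fp.typical_spend:
        score += 0.3   # unusually large payment
    if beneficiary not in fp.known_beneficiaries:
        score += 0.2   # first-time beneficiary
    if location not in fp.usual_locations:
        score += 0.2   # unfamiliar geolocation
    if beneficiary_flagged:
        score += 0.4   # network signal: possible mule account
    return min(score, 1.0)

def decide(score: float) -> str:
    """Map the score to one of the three real-time actions described above."""
    if score >= 0.8:
        return "LOCK_APP"            # possible duress or compromised device
    if score >= 0.5:
        return "DELAY_AND_CONFIRM"   # hold the payment, check with the client
    if score >= 0.3:
        return "RED_ALERT"           # in-app warning: ask client to double-check
    return "ALLOW"

fp = Fingerprint(typical_spend=500.0,
                 known_beneficiaries={"acc-123"},
                 usual_locations={"Johannesburg"})
action = decide(risk_score(fp, amount=5000.0, beneficiary="acc-999",
                           location="Lagos", beneficiary_flagged=True))
print(action)  # LOCK_APP: every deviation signal fires at once
```

The thresholds are where the false-positive trade-off Kallner mentions lives: raise them and fewer legitimate transactions get blocked, but more fraud slips through.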
Governance is lagging behind
The problem is that while adoption is accelerating, oversight is not keeping up.
Even among those that do have AI governance measures in place, many rely on inconsistent or ad hoc policies.
Inside organisations, innovation is moving faster than the frameworks designed to manage it. Ngcebetsha summed up the situation succinctly: “Innovation is forging ahead, and policy is catching up.”
The risk is not just internal. Regulators face the same challenge at a system-wide level: how to oversee technologies that are evolving faster than traditional rule-making processes.
The black box problem
One of the most pressing concerns raised during the panel was explainability.
Nolwazi Hlophe, a senior specialist in the fintech department at the FSCA, highlighted a critical gap between technical performance and real-world accountability. “Your model can be accurate, and you still might not know how it works,” she pointed out.
This creates a dilemma for financial institutions. A model may pass technical validation tests, but still fail the basic requirement of explaining outcomes to customers.
“Explainability is critical, because you must be able to tell a consumer how a decision about them was made,” she said.
The issue goes beyond theory. Poor explainability raises the risk of bias, unfair outcomes and reputational damage — especially in areas such as credit decisions, insurance pricing and fraud detection.
As one audience member put it, “If you can’t explain it, you can’t audit it properly.”
Risks are amplifying, not disappearing
The panel identified a cluster of risks that are becoming more pronounced as AI adoption deepens.
According to the FSCA’s market study, the most significant risks include:
- Data privacy and protection;
- Cybersecurity vulnerabilities;
- Data quality and representativeness;
- Third-party dependencies; and
- Model hallucinations and errors.
These risks are not isolated. They are interconnected and can scale rapidly. “Small errors can be amplified into systemic risks,” warned Koc.
The increasing use of large, general-purpose AI models adds another layer of complexity, particularly when institutions do not fully understand the assumptions embedded within them.
Jobs: replaced or reinvented?
As is often the case in conversations about AI, its impact on jobs sparked one of the most animated discussions.
The consensus was that AI is more likely to reshape work than eliminate it — but only if organisations actively manage the transition.
“We need to upskill our people so we don’t see a wave of people losing their jobs due to AI implementation,” said Hlophe.
Franks pointed to early shifts already visible in the market: “Interaction with our members shows that in the last nine months alone, there’s been a 42% decline in demand for software engineers, and a 69% increase in demand for commercial roles.”
That suggests a rebalancing rather than a collapse, with growing demand for client-facing, strategic and decision-making skills.
Ngcebetsha framed it as a responsibility for leaders: “Are we repurposing jobs, retraining people, and shifting them to higher-value tasks? That’s on all of us.”
Africa’s opportunity — and risk
From a continental perspective, the stakes are even higher.
Ambassador Lavina Ramkissoon of the African Union argued that Africa has a narrow window to use AI to drive growth and inclusion, but risks missing it without clear direction.
“We focus too much on fear and not enough on opportunity,” she said.
She pointed to three critical foundations for responsible AI at scale:
- Infrastructure;
- Computational capacity; and
- Intelligence (human and machine).
“Intelligence will become a utility,” she added.
But without a coordinated strategy, AI could just as easily deepen fragmentation and inequality across the continent.
The regulatory balancing act
For regulators, the challenge is to enable innovation without allowing risks to spiral.
The panel leaned toward more adaptive approaches, including:
- Principle-based regulation;
- Regulatory sandboxes and experimentation; and
- Activity-based oversight rather than entity-based rules.
“What works is not slowing innovation, but adapting oversight to its pace,” said Koc.
“We want to ensure great outcomes for citizens while allowing the industry to innovate, create jobs and build new products,” concluded Ngcebetsha.
If there was one unifying message from the discussion, it was this: AI has already crossed into the core of financial services. The question is no longer whether it will transform the industry, but whether institutions and regulators can keep up. DM

AI is already making financial decisions in South Africa, but the guardrails are not keeping pace. (Photo: iStock)