Artificial Intelligence (AI) is no longer a futuristic abstraction hovering at the periphery of public life. It now sits at the centre of political decision-making, public administration, the digital transformation of the economy and society, and the everyday experiences through which citizens engage with the state.
Across the world, governments are racing to integrate AI into service delivery, and South Africa is no exception. Visa adjudication, taxpayer support, information retrieval and even drought prediction systems are already relying on algorithmic tools.
Yet, amid this surge of technological optimism, a quieter, more complex truth is emerging: AI’s impact on democracy will not be determined by its capabilities, but by the governance decisions societies make today.
We tend to imagine AI as a neutral instrument, a digital tool that simply accelerates what we already do. But the evidence from international governance forums shows the opposite. AI is profoundly political. It shapes how authority is distributed, how decisions are made, how citizens understand their rights and, crucially, how people think. The idea that AI merely automates bureaucracy is dangerously incomplete. It reshapes the cognitive and institutional foundations that make democratic life possible.
Public services
One of the most promising insights to emerge from recent research is the potential for AI to transform access to public services, especially in countries like South Africa, where administrative burdens routinely exclude the poor, the rural and the marginalised.
Imagine a world in which every citizen, regardless of language, literacy or geography, has access to a personal digital assistant that explains government processes in their home language, prepares applications, checks eligibility for social benefits, tracks progress, and ensures that rights are fully understood and exercised. This is not an abstract fantasy. The underlying technology is already in place, and the design models to operationalise it have been demonstrated.
Such an architecture would amount to a quiet revolution in administrative justice. Instead of forcing citizens to navigate a maze of departments, queues, forms and official jargon, AI could translate bureaucracy into accessible, conversational guidance that meets people where they are.
A mother in rural Limpopo could apply for a child support grant entirely through voice prompts. An informal trader in Khayelitsha could report a faulty electricity connection and track the municipality’s response. A student could have their financial aid documentation compiled and submitted automatically. With the right design, AI can become a bridge between citizens and the state, making rights real rather than theoretical.
Darkest possibility
But if this is the most hopeful scenario, the darkest possibility is unfolding just as quickly. AI systems are increasingly capable of influencing behaviour in ways that bypass human awareness.
Emotionally responsive digital assistants, marketed as companions, helpers or advisers, are designed to build trust and attachment. They learn users’ preferences, vulnerabilities and patterns of thought over extended periods. They can persuade without appearing to persuade. In democratic terms, this is not a minor concern. If citizens begin to rely emotionally or cognitively on AI systems whose design is shaped by commercial interests, then the boundary between assistance and manipulation becomes dangerously thin.
Alongside this psychological risk lies a more structural, institutional threat. As governments adopt AI tools to allocate public resources, assess applications and prioritise cases, decisions once made by humans become embedded in complex algorithmic systems that are difficult to scrutinise.
Traditional oversight mechanisms, like the courts, auditors and ombudspersons, were created to review and challenge human decision-making, but they struggle when faced with automated systems whose reasoning is statistical, opaque or proprietary. If, for example, an algorithm denies a visa, misclassifies an applicant or prioritises one community over another, citizens must still have the right to understand why that decision was made and to appeal it. Without enforceable standards of transparency, contestability and accountability, AI risks hollowing out democratic protections.
Cognitive outsourcing
Another emerging threat is to the cognitive foundations of citizenship. Studies show that different forms of AI interaction influence human decision-making in different ways. When AI is used as a conversational partner that challenges assumptions, asks clarifying questions and encourages reflection, it can strengthen reasoning. But when AI is designed simply to deliver answers, what some call delegative AI, it encourages cognitive outsourcing. The human brain then takes shortcuts, judgment weakens and, over time, the ability to weigh arguments, evaluate evidence and participate meaningfully in civic life begins to erode. Democracy depends on reflective citizens, not passive recipients of algorithmic outputs.
Widespread adoption of AI and automation in the South African economy is likely to replace routine white-collar and low-skilled roles in sectors such as administrative support, manufacturing, logistics and retail. Conversely, demand for skills in data science, AI engineering, cybersecurity and digital marketing is likely to increase. Ironically, AI itself may be pivotal in facilitating the necessary reskilling and upskilling at scale, given sufficient investment in public digital infrastructure and digital literacy.
‘Surveillance capitalism’
AI’s integration into governance and commerce risks deepening what Professor Shoshana Zuboff has termed “surveillance capitalism” by transforming personal data into a commodity for profit and control. When governments and corporations deploy AI without enforceable standards of transparency and accountability, the line between assistance and coercion blurs, turning democratic oversight into an illusion and amplifying the potential for pervasive, but invisible, social, economic and political control.
While the application of AI could optimise energy and water use in industries and the public sector and promote the effective use of renewable energy sources, the infrastructure powering AI, especially data centres for training and operating large AI models, consumes vast amounts of electricity and fresh water, raising environmental concerns.
Africa currently has limited AI research hubs, infrastructure and investment. This means that most AI tools will be imported, with Africans using systems designed abroad by the Big Tech companies. Africa risks becoming primarily a consumer of AI technologies developed elsewhere, exacerbating the already large digital divide, unless deliberate strategies are implemented to strengthen African countries’ data sovereignty and local AI innovation ecosystems and ensure a more equitable distribution of benefits.
Through continent-wide collaboration to curate and share African datasets and compute pools, South Africa and its continental counterparts should seize the initiative to build competitive, context-aware, freely accessible AI models that reflect African languages, norms and contextual realities.
Two diverging futures
These tensions give rise to two diverging futures. In one, AI strengthens democracy by empowering citizens, supporting reflective thinking, reducing administrative exclusion and making public institutions more responsive.
In the other, AI displaces human judgment, concentrates power in opaque technological systems that serve the commercial interests of a few super-rich tech oligarchs and the geopolitical interests of the large world powers, undermines institutional oversight, and reshapes democratic behaviour in ways that societies never consciously chose. The trajectory we take depends entirely on whether governments act with foresight and discipline in the broader public interest.
The most important insight from current research is that none of this is predetermined. AI is not a destiny. It is a governance challenge. If policymakers design AI systems and regulation that centre on citizens rather than institutions, preserve human oversight, strengthen administrative fairness and encourage reflective reasoning rather than replace it, AI can become a powerful instrument for inclusion. But if they neglect oversight, allow algorithms to operate in the dark and/or deploy AI in ways that weaken cognitive autonomy, transparency and accountability, they risk accelerating the erosion of democracy.
Civil society can play a role in ensuring that AI adoption in the public sector strengthens rather than undermines democratic ideals. By acting as watchdogs, advocacy groups, academic institutions and community organisations can demand transparency in algorithmic decision-making, push for inclusive consultation processes, and hold governments accountable when AI systems produce biased, unfair or opaque outcomes.
They can actively mobilise to amplify the voices of marginalised communities, ensuring that AI tools are designed to promote human rights and equitable access to quality public services rather than to deepen digital exclusion.
Through public education campaigns, independent audits and policy engagement, civil society can collaborate with government and academia to foster digital literacy, encourage reflective use of AI, and safeguard against the erosion of human oversight.
The stakes could not be higher. AI is quickly becoming embedded in the basic machinery of government. It will either help rebuild public trust by making the state more accessible, transparent and fair, or it will corrode trust by making decisions less explainable, less contestable, and less human. The responsibility lies not with coders or corporations, but with democratic institutions themselves.
Therefore, the question is whether governments will rise to the challenge before the systems they adopt begin to reshape democracy in ways they can no longer control. The window to make collective decisions which will shape the trajectory of AI adoption and its impact on the democratic project is very limited.
For better or for worse, AI is here to stay. Whether it becomes a gateway to inclusion or a threat to democracy depends on the choices we make right now. DM
Professor Tania Ajam is a faculty member of Stellenbosch University’s School of Public Leadership, and Daryl Swanepoel is a research fellow at the same institution. He is the CEO of the Inclusive Society Institute.
Illustrative Image: Circuit board. (Photo: Freepic) | Parliament building Icon. (Image: iStock) | (By Daniella Lee Ming Yesca)