
STATE INTELLIGENCE OP-ED

We need to stop the spies abusing Silicon Valley’s runaway AI in the name of national security

Intelligence has become about obtaining and maintaining global dominance rather than just protecting countries against imminent threats, and it is now too risky to leave intelligence agencies to police themselves on AI uses. The dangers of ‘hallucinations’, the generation of non-existent knowledge, mean that misinformation, manipulation, bias and discrimination can infect intelligence work, while the struggle for intelligence accountability is losing ground.

Despite its huge potential to improve our quality of life, artificial intelligence (AI) is threatening global stability and security in dangerous ways, and Africa is particularly at risk.

State intelligence agencies can use AI for legitimate public purposes, such as for threat detection, analysis and defence. At the same time, the risk of unaccountable and destabilising abuses is growing.

These agencies can, for instance, use AI to mount cyberattacks by scanning networks for vulnerabilities and exploiting them, and can generate deepfakes and other forms of disinformation. They can use AI to decide who to surveil, who to profile as a terrorism suspect or to select as a military target, with limited to no external oversight and redress. They can also use autonomous weapons in dangerous, conflict-escalating ways, citing nebulous national security interests, and the shift to agentic AI is amplifying the danger.

What are some of the issues around how AI can be used, and is being used, in intelligence work for national security purposes? What needs to be done to prevent dangerous, even destabilising, abuses by the spies?

Too much executive power and secrecy on national security

Countries across the democracy spectrum allow too much executive discretion on national security. When former US National Security Agency contractor Edward Snowden leaked intelligence documents detailing massive surveillance abuses, the resulting “Snowden moment” created pressure for intelligence reform. However, since then the struggle for intelligence accountability has lost ground.

Intelligence accountability gaps are being widened by AI, as few meaningful restrictions are being placed on intelligence agencies’ uses of the technology. This is creating the danger of what Ashley Deeks has referred to as the double black box problem, where the algorithms themselves are black boxes, and they are being introduced into national security operations that are already shrouded in secrecy.

This problem risks making national security decision-making even more opaque than it already is, and makes explainable AI even harder to achieve. At the same time, AI can automate and expand national security activities in life-threatening ways.

Broadening definitions of national security

We’ve seen a major shift globally towards broader definitions of national security to include national interests, and not just threats. Intelligence has become about obtaining and maintaining global dominance rather than just protecting countries against imminent threats. Intelligence has also become tethered increasingly to narrow nationalist interests, with the rules of the game (or the lack of them) being determined by the most powerful countries.

Increasingly, the growing geopolitical rivalry between the US, China and Russia is determining the priorities of the major intelligence agencies. The US is battling it out with China in the race for artificial general intelligence, and the US has framed the need to win this race at all costs as a national security imperative.

At the same time, the second Donald Trump administration is expanding its operating definition of national security to include matters such as trade. As it seeks to outcompete China and other perceived adversaries, this administration is boosting its uses of strategic intelligence to that end.

As Emily Kilcrease, former US official and trade and security expert, has argued: “There is no set of norms, rules or institutions to guide these interventions, now that we have blown open the barn doors using national security justification… There is a real risk of calling everything national security and using it to justify doing whatever you want.”

Overbroad definitions of national security are a recipe for runaway autonomous AI in this most sensitive area of government.

Rise of authoritarian nationalism

The political ground is shifting increasingly to the right, making it more difficult to set up and maintain democratic oversight institutions, including external intelligence oversight institutions. As things stand, these institutions may have limited to no experience in conducting oversight of AI uses in national security work.

It is too risky to leave intelligence agencies to police themselves on AI uses. Authoritarian nationalism can also intersect with techno-nationalism, leading to these agencies viewing AI as a strategic asset for enhancing national security, sovereignty, and competitiveness. This approach can undermine attempts at global collaboration to address common issues like AI safety.

Silicon Valley’s relentless race for commercially driven AI

The Donald Trump administration’s approach to AI policy focuses heavily on accelerating innovation, removing regulatory barriers, and prioritising market-led development. As detailed by Karen Hao, Silicon Valley has free rein to pursue an “AI at all costs” approach, in which the race for AI development outweighs any concerns about ethics, safety or governance.

The military AI industry and global defence spending are both booming, creating real risks that AI-powered weapons and dual-use goods are rolled out while safety problems are shelved for a later date. But when it comes to defence, war and the intelligence that powers it, the harms produced by this kind of risk-taking could be devastating.

For instance, there are dangers of “hallucinations”, the generation of non-existent knowledge, which can lead to misinformation, manipulation, bias and discrimination infecting intelligence work. The lack of adequate human oversight can lead to miscalculations in security strategies based not on real threats, but on manufactured or non-existent ones.

The dominant global tech companies and their governments have largely been writing their own rules, which has (unsurprisingly) led to a desperately under-regulated AI industry.

Africa peripheral to AI capabilities

The means of producing AI are not distributed evenly. Africa holds less than 1% of global data centre capacity, while the US and China together account for about 90%. This means the continent has minimal infrastructure and resources for hosting the computational power necessary to build and run AI models.

As things stand, AI risks perpetuating a form of neocolonialism, in which AI reinforces historical forms of unequal exchange between the continent and more powerful countries. Already, African labour is being exploited to conduct the basic work of data labelling, and this labour is often hidden, belying claims of automation. African knowledge is particularly vulnerable to theft, erasing indigenous knowledge in the process.

The AI boom is also being powered by extractivism, where many of the basic resources needed to power this boom are extracted from the continent, and not to the benefit of the countries involved. Digital inequality, data injustice and economic disparities are being intensified in the process.

AI models trained on inaccurate or incomplete datasets are more susceptible to bias, a risk that is greater for global south countries, which remain underrepresented in the datasets used to train these models. These gaps and silences increase the dangers of using AI in national security functions in Africa.

African countries that antagonise the major AI powers through their foreign policy stances could be cut off from critical AI developments. South Africa’s principled stance against genocide in Gaza and the colonial occupation of Palestinian land is a case in point. Yet, as Emile Ormond has warned, South Africa hasn’t even started to grapple with the risks of foreign dependency on AI supply chains in its recent national security strategy, leaving the country dangerously exposed.

In the longer term, job losses and growing peripheralisation may lead to more social instability, already apparent in the “Gen Z” protests spreading across the continent. In the absence of safeguards against abuses, authoritarian governments may use AI to crush protests in ever more brutal ways, misidentifying “instigators” even in situations where protests are organic.

The law and policy lag

There is also the added problem of the law lag, where governments allow the technological capabilities to collect intelligence and conduct surveillance to run ahead of the ability to oversee them democratically.

Questions that often remain unanswered about AI applications in national security include the following: What mechanisms exist for assessing the quality of the data, to ensure biases and discrimination do not creep into intelligence assessments? Are autonomous decisions allowed, for instance?

Law and policy gaps are apparent in many African countries where basic protections are weak to non-existent, and where the law lag is growing by the day. The increasing use of AI in more national security functions is compounding this problem.

Existing international principles do not really cover national security uses adequately. These include the OECD AI principles, which emphasise robust, secure and safe AI systems, but don’t mention national security explicitly. What is meant to be the gold standard of AI law, the EU AI Act, has national security exemptions. These gaps mean that countries can pursue their national security interests unhindered by AI regulations.

No African country has a comprehensive AI law, but several, such as Mauritius and South Africa, have AI strategies, and other initiatives and bills are in development. The African Union has an AI strategy, which attempts to leverage AI to the advantage of the continent, and expects countries to develop their own AI strategies.

These strategies tend to be confined to governance and ethical frameworks, integrating AI into various sectors of the economy and building AI capabilities, and are largely silent on national security issues.

Existing data protections are inadequate to the task of regulating AI, as they struggle to deal with complex data processing and have little to say about the black box problem, systemic issues like bias in datasets, or issues associated with synthetic data.

What is needed? Where to start?

There is a need for outright prohibitions of dangerous AI uses in national security functions. For instance, as suggested in a (now shelved) memorandum produced during the Joe Biden administration, AI should not be used to target individuals for exercising basic democratic rights and freedoms such as free speech and the right to protest, and there needs to be a human in the loop for such critical decisions.

AI should not be used to discriminate unfairly against an individual, to target and track individuals based on inferences about their emotional state, or to make a final determination about a person’s immigration status.

Then we need to define high-risk uses, where AI outputs form the basis of decisions, or where AI controls or significantly influences intelligence activities. These uses could create national security risks or significant human rights violations if the AI fails, and they should be allowed only if proper checks and balances are in place.

Such uses could include classifying an individual as a national security threat, or using AI alone to generate and disseminate intelligence products. Where AI has been used to do so, this should be disclosed.

We also need a notice, redress and reparations system for people harmed by AI systems, as well as whistleblower protection. These protections shouldn’t apply nationally only. AI uses should also be subject to independent oversight. Intrusive measures should require judicial decision-making in which the bases for decisions are interrogated and their use is confined to the investigation of serious offences only.

In the longer term, African countries need to collaborate to reduce reliance on foreign AI systems and build domestic capacities without reverting to narrow economic nationalism.

If these basic safeguards do not exist, then AI should not be deployed in national security functions, and it should be banned for export to countries that lack these safeguards. Using AI to automate and expand national security operations poses some of the greatest dangers to people’s lives and civil liberties.

We need to get out of the mindset that national security, and how to protect it, falls within the discretion of the executive arm of government only. We need to define and popularise global norms on what are acceptable and unacceptable AI uses in national security functions. Doing so would be for the sake of common global security and stability, in which we should all have a stake. DM

Jane Duncan is Professor of Digital Society at the University of Glasgow, and a Visiting Professor at the University of Johannesburg (UJ). Before joining UJ, she held the Chair in Media and Information Society in the School of Journalism and Media Studies at Rhodes University. She worked for the Freedom of Expression Institute for 15 years, having served as its Executive Director for eight. She has produced several peer-reviewed books and is a regular contributor to a range of journalistic publications.

This article is based on the author’s presentation to a roundtable entitled ‘Equitable and Just AI in education, health and justice systems in Africa, Canada, Spain, Germany and the UK’, organised for the Festival of Data Science and AI, University of Glasgow, on 28 October 2025.
