Militarisation of AI has severe implications for global security and warfare


Professor Tshilidzi Marwala is the seventh Rector of the United Nations (UN) University and UN Under Secretary-General.

Artificial Intelligence has quickly become an integral part of our daily existence, influencing fields as diverse as healthcare, education, finance, and entertainment. As AI continues to evolve, the need for effective governance mechanisms to manage its use and mitigate potential hazards grows.

AI systems, especially those employing machine learning, have the potential to have a profound effect on society. They can make decisions or predictions that affect individuals and communities, raising ethical concerns.

How, for example, can we guarantee that AI systems are impartial and do not perpetuate existing biases? How can we ensure that AI decision-making mechanisms are transparent? How can we safeguard privacy when AI systems frequently rely on vast data? How do we develop people who understand both the technology and the regulatory frameworks? How do we bridge the gap between the AI haves and the AI have-nots?

These considerations are an essential aspect of AI governance. The sober news is that fixing a defective AI system is far easier than fixing a broken human.

Regulating AI is challenging due to its technical complexity, rapid evolution, and widespread applicability across multiple industries. Regulatory frameworks must balance fostering innovation and preventing potential harm to society. They should ensure accountability, transparency, and impartiality in AI systems while promoting competition and preventing misuse.

Adaptive AI governance

Due to the global nature of AI development and the rapidity with which it evolves compared to traditional legislative procedures, it is difficult to create such frameworks. Therefore, an adaptive AI governance framework is essential, and we can draw many lessons from genetic algorithms, an AI method based on the principles of natural evolution.
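To make the analogy concrete: a genetic algorithm keeps a population of candidate solutions and repeatedly applies selection, crossover and mutation, so that the population adapts over time rather than being fixed at the outset. The following is a minimal illustrative sketch, not any specific governance tool; the objective function and all parameters are invented for the example.

```python
import random

def fitness(x):
    # Illustrative objective with a single peak at x = 3.
    return -(x - 3.0) ** 2

def evolve(generations=100, pop_size=30, mutation_rate=0.3, seed=42):
    rng = random.Random(seed)
    # Start from a random population of candidate solutions.
    population = [rng.uniform(-10.0, 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population.
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        # Crossover and mutation: children blend two parents and are
        # occasionally perturbed -- the adaptive step the analogy invokes.
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            child = (a + b) / 2.0
            if rng.random() < mutation_rate:
                child += rng.gauss(0.0, 0.5)
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

best = evolve()  # converges close to the optimum at 3
```

The point of the analogy is the loop itself: rules that are periodically evaluated, varied and re-selected can track a moving target in a way that a one-off, static rulebook cannot.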

AI governance is a global issue, not just a national one. Effective governance of AI systems necessitates international cooperation, as AI systems and their effects do not respect national boundaries. Countries must collaborate to establish global AI usage standards and norms.

International organisations, such as the United Nations, can play a vital role in facilitating dialogue and cooperation regarding AI governance. Cooperation between governments, private sector companies, and civil society is essential to guarantee a comprehensive approach to AI governance.

What are some of the essential elements that should be taken into account to create global AI governance? We need to standardise AI. The need for standardisation increases as AI evolves and becomes more integrated into society. Standardisation can ensure AI systems’ consistency, dependability, and fairness while fostering innovation and competition.

Standardisation in AI is essential for a variety of reasons. First, it can ensure the interoperability of AI systems, allowing them to communicate and collaborate effectively. This is crucial as AI becomes more prevalent in vital infrastructure, such as healthcare and transportation systems.

Second, standardisation can promote AI system transparency and confidence. By adhering to recognised standards, developers can demonstrate the predictability and dependability of their AI systems. Standardisation can assist in addressing ethical and societal issues associated with AI, such as bias, privacy, and accountability.

The swift evolution of AI technology is one of the most significant obstacles to standardisation. Frequently, the rate of AI development outpaces the rate at which standards can be developed and implemented, resulting in a never-ending game of catch-up. In addition, the complexity and variety of AI technologies make it challenging to develop universal standards. There are also concerns that excessively rigid standards could stifle innovation and competition in the AI industry.

Despite these obstacles, the prospects for standardising AI are promising. Several groups, including the International Organization for Standardization (ISO) and the Institute of Electrical and Electronics Engineers (IEEE), are actively developing AI standards. These efforts concentrate on terminology, ethical considerations, and technical specifications for AI systems.

In addition, there is a growing recognition of the significance of involving a broad range of stakeholders in the standardisation process, including AI developers, consumers, regulators, and civil society.

Machine learning governance

The dominant version of AI technology that urgently requires governance is machine learning. AI systems, especially those based on machine learning, rely significantly on data. This data’s quality, diversity, and quantity can considerably affect the performance and behaviour of AI systems. Consequently, regulating data utilised for AI training is a crucial concern.

Privacy protection is a primary challenge in regulating AI training data. Data protection and privacy concerns are raised because AI systems frequently require large quantities of personal data. To safeguard individuals’ data privacy, regulations such as the General Data Protection Regulation (GDPR) have been implemented in the European Union. How do we do the same in the global South?

However, striking a balance between the need for data to train AI systems and the need to safeguard privacy remains a formidable challenge. Data bias is a further crucial concern. If the data used to train AI systems is biased, the AI systems can inherit that bias and produce unfair or discriminatory results.

Consequently, it is crucial to regulate data to ensure it is representative and bias-free. Nevertheless, identifying and eliminating bias in data can be difficult and complex. Transparency in data collection, storage, and use for AI training is an additional crucial aspect of regulation.
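One narrow but checkable facet of representativeness is whether groups in a dataset appear in roughly comparable proportions. The sketch below is a deliberately crude illustration of that single facet; the attribute name, the parity baseline and the tolerance threshold are all invented for the example, and real bias auditing goes far beyond head counts.

```python
from collections import Counter

def representation_report(records, attribute, tolerance=0.2):
    """Flag groups whose share of the data falls more than `tolerance`
    below an equal-shares baseline -- a crude under-representation check."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    expected = 1.0 / len(counts)  # parity baseline: equal shares per group
    report = {}
    for group, n in counts.items():
        share = n / total
        report[group] = {
            "share": round(share, 3),
            "under_represented": share < expected - tolerance,
        }
    return report

# Hypothetical toy dataset: an 80/20 split between two regions.
data = [{"region": "north"}] * 80 + [{"region": "south"}] * 20
report = representation_report(data, "region")
# "south" holds a 0.2 share against a 0.5 parity baseline, so it is flagged.
```

Even a check this simple makes the regulatory point: representativeness can be measured and reported, which is a precondition for setting standards about it.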

Transparency can aid in developing confidence in AI systems and ensuring accountability. However, providing transparency without compromising privacy or confidential information can be difficult.

There are significant opportunities for regulating data for AI training despite these obstacles. Regulations can aid in establishing data privacy, bias, and transparency standards, thereby fostering the responsible and ethical use of AI. Additionally, they can promote innovation by levelling the playing field and fostering competition.

As AI continues to advance, the need to effectively regulate these algorithms becomes more crucial. Regulation of AI training algorithms is crucial for multiple reasons. First, it can help ensure the fairness and transparency of AI systems. Inadvertently, algorithms can perpetuate or amplify biases present in training data, resulting in unjust outcomes.

Regulation can aid in ensuring that algorithms are created and utilised in a manner that mitigates these biases. The traditional training, testing and validation process, though necessary, is not sufficient on its own.
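The traditional process referred to here begins by partitioning data into training, validation and test subsets, so that a model is tuned and evaluated on data it has not seen. A minimal sketch of such a split follows; the fractions and seed are illustrative. Note what the split does and does not do: it estimates predictive performance, but says nothing by itself about fairness or downstream harm, which is why it is necessary but not sufficient.

```python
import random

def train_val_test_split(data, val_frac=0.15, test_frac=0.15, seed=0):
    """Shuffle the data and partition it into train/validation/test subsets."""
    items = list(data)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_test = int(n * test_frac)
    n_val = int(n * val_frac)
    test = items[:n_test]                     # held out for final evaluation
    val = items[n_test:n_test + n_val]        # used for tuning decisions
    train = items[n_test + n_val:]            # used to fit the model
    return train, val, test

train, val, test = train_val_test_split(range(100))
# With 100 items and 15% fractions: 70 train, 15 validation, 15 test.
```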

Second, regulation can aid in establishing accountability. As AI systems become more complex, it can be challenging to comprehend how they make decisions. Regulation can help ensure that algorithms are transparent and interpretable, so that their decisions can be explained and those responsible for them held accountable.

The technical complexity of these algorithms is one of the primary challenges. Regulatory experts often treat them as black boxes and have little understanding of their inner workings, sometimes with devastating consequences. Effective governance of algorithms requires an understanding of neural networks, optimisation and backpropagation; a high level of technical expertise is needed to comprehend how they operate and how they should be regulated.
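The core mechanism a regulator would need to grasp can be shown at its smallest possible scale. The sketch below is a one-weight "network" trained by gradient descent; the target value and learning rate are invented for illustration. Real systems apply this same chain-rule update across millions or billions of weights, which is precisely why their behaviour is hard to inspect from the outside.

```python
def gradient_step(w, x, y, lr=0.1):
    """One forward pass, one backpropagated gradient, one optimiser update."""
    y_hat = w * x                  # forward pass: the model's prediction
    loss = (y_hat - y) ** 2        # squared-error loss
    grad = 2 * (y_hat - y) * x     # backpropagation: dLoss/dw via the chain rule
    return w - lr * grad, loss     # optimisation: gradient-descent update

w = 0.0
for _ in range(50):
    w, loss = gradient_step(w, x=1.0, y=3.0)
# The weight converges towards 3.0, the value that drives the loss to zero.
```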

In addition, the swift development of AI can make it challenging for regulations to keep up. There is also the possibility that excessively restrictive regulations will inhibit AI innovation.

Despite these obstacles, there are promising directions for algorithm regulation. Creating technical standards for AI algorithms can guide their design and implementation. Third-party audits are another method for evaluating the impartiality and transparency of algorithms.

In addition, there is a growing recognition of the need for the participation of multiple stakeholders in algorithm regulation, including AI developers, consumers, regulators, and affected communities.

Use and activity

Finally, we need to regulate the use of AI. Each sector will have to develop its own regulatory standards. For example, the World Health Organization has developed ethical guidelines for the use of AI in health. However, one area that requires global regulation is the weaponisation of AI.

In addition to its numerous benefits, AI poses potential dangers, especially when weaponised. The weaponisation of AI refers to using AI in military and warfare contexts, such as autonomous weapons and cyber warfare.

The militarisation of AI has profound implications for global security and warfare. AI can improve military capabilities by allowing quicker decision-making, more accurate targeting, and more efficient resource allocation. AI-powered autonomous weapons can operate without human intervention, potentially reducing the danger to human soldiers.

However, these developments raise concerns regarding the escalation of conflicts, the possibility of autonomous weapons being compromised or misused, and the possibility of an AI arms race.

The regulation of AI weaponisation presents significant difficulties. The rapid development of AI and its technical complexity make it challenging for regulations to keep up. International cooperation is also difficult: effective regulation requires consensus among nations, which can be elusive given divergent national interests. Moreover, the dual-use nature of AI technology, i.e., its use for both civilian and military purposes, complicates regulation.

The weaponisation of AI raises numerous ethical concerns. Can autonomous weapons, as mandated by international law, distinguish between combatants and civilians? Who bears responsibility if an AI-powered weapon causes inadvertent harm? Is it ethical to delegate decisions concerning life and death to machines? These concerns highlight the need for an ethical framework that governs the use of AI in warfare.

In conclusion, even though machine learning is an area of AI that requires urgent regulation, it should be noted that machine learning is usually one component of a larger system.

For example, a robot is much more than machine learning, as it involves actuators, electric motors, materials and other essential components, each requiring its own regulation and standardisation. As we move forward with the governance of AI, sector by sector, we should take a holistic and global view of this reality.

Much of the governance of AI is based on what has happened in the past rather than on what lies ahead. Given the pace of developments in AI, we must adopt an adaptive and evolving regulatory framework rather than a static regulatory approach. DM

