UNIDIR wants governance of military artificial intelligence

Discover the six key priorities identified for AI governance in the military domain at the 2024 RAISE meeting, with a strategic and scientific focus.

In March 2024, UNIDIR, in collaboration with Microsoft, launched the RAISE initiative to address the governance of AI in security and defence, with a focus on military applications. The event identified six key priorities: creating a knowledge base, building trust, the role of humans in AI use, data management practices, lifecycle management, and destabilisation risks. These priorities, developed through multi-sector cooperation, will serve as a framework for future policy recommendations. This article explores each of them in detail.

Launch of the RAISE initiative

In March 2024, the Roundtable on Artificial Intelligence, Security and Ethics (RAISE) was officially launched in Bellagio, Italy. This initiative, led by UNIDIR (United Nations Institute for Disarmament Research), in partnership with Microsoft, aims to create a neutral and inclusive platform to discuss the impact of artificial intelligence (AI) in the fields of security and defence. The project is designed to bring together players from various sectors, including research, industry and government.

The aim of RAISE is to respond to the rapid rise in the use of AI in military contexts, an area that is constantly expanding. According to a report by MarketsandMarkets, the global military AI market is projected to reach €11.6 billion by 2027, with a compound annual growth rate (CAGR) of 14.6%. This rapid growth underlines the urgency of defining solid standards and policy frameworks to avoid the risks of misuse of this technology.

Analysis of AI applications in security and defence

The inaugural RAISE event reviewed the current state of AI applications in security and defence contexts, with a particular focus on the military domain. In this sector, AI is being used to improve several aspects of military operations, such as:

  • Autonomous drone management: Integrating AI into drone systems reduces human dependency and increases mission accuracy. For example, autonomous drones can carry out reconnaissance or strike missions with minimal human intervention.
  • Predictive analysis: AI algorithms are used to analyse large amounts of data in real time and predict enemy movements, increasing the effectiveness of military strategies.

The major challenges in the use of AI in the military environment concern the reliability of systems and interoperability between different countries and organisations. An international cooperation framework is essential to avoid scenarios where competing AI systems lead to unintended escalations in conflict.

The six strategic priorities identified

One of the key outcomes of the RAISE meeting was the identification of six priorities to guide the development of AI governance in the military domain:

1. Creation of a knowledge base

The first priority is to create a robust and shared knowledge base, where stakeholders can access the latest research and developments in military AI. The absence of such a base increases the risk of unregulated use of AI.

This database should include technical information, case studies and ethical frameworks to ensure responsible application. For example, studying the management of autonomous drones in war zones can provide crucial information for implementing safety rules.

2. Building trust

Trust between the various players – industrial, military and civilian – is crucial to ensure the safe deployment of AI. Transparency of the algorithms, data and decisions taken by AI systems is a central element. An opaque system can lead to a loss of trust and exacerbate international tensions.

A practical example is the use of AI systems in facial recognition, which can generate errors if the algorithms are biased. It is essential to ensure that these systems are tested and validated to avoid costly errors.

3. The role of humans in the use of AI

Even with increased automation, the human element remains crucial in military AI supervision and decision making. The concept of the ‘human in the loop’ means that humans always have the last word in critical situations, such as drone strikes.

This principle is vital to ensure that military actions respect international law and human rights. It is also necessary to train human operators to work with complex AI systems, which could represent an investment of several million euros in technical and ethical training.

4. Data management practices

Data management is a major priority, as the effectiveness of AI systems depends directly on the quality and accessibility of the data. Incorrect or biased data can distort the decisions of military AI systems, with potentially dangerous consequences.

For example, AI systems used in target recognition need to be fed with accurate data to avoid collateral damage. Poor data management could also undermine the international credibility of AI systems.

5 and 6. Lifecycle management and destabilisation risks

Finally, the management of the life cycle of AI systems, from their design to their decommissioning, and the destabilisation linked to the adoption of new technologies are crucial aspects to consider. AI systems evolve rapidly and their long-term use raises issues of maintenance, updating and security.

For example, integrating new functionalities into autonomous weapons systems requires regular monitoring to avoid security breaches. In addition, destabilisation can result from the uncontrolled dissemination of these technologies, particularly if they fall into the hands of non-state or malicious actors.

Global consequences and implications

The six priorities identified at the RAISE meeting have the potential to reshape the way AI is governed in the military domain. The implications of these developments are vast, affecting not only global security but also geopolitical relations.

The lack of governance and clear rules could lead to serious abuses, particularly in asymmetric conflicts, where non-state actors could exploit poorly regulated AI systems. Governments and industries must work together to define a framework that minimises the risks while maximising the benefits of AI for collective security.

War Wings Daily is an independent magazine.