AI, the weapon of choice in modern conflicts

AI targeting

AI systems generate daily bombing targets for the Israeli army in Gaza.

In an era when artificial intelligence (AI) is revolutionizing entire sectors, its use in armed conflict raises profound ethical and strategic questions. Notably, the Israeli army uses AI to identify up to 100 bombing targets daily in the Gaza Strip, a practice that puts the technology's impact on modern warfare into sharp focus.

AI on the battlefield: a tactical and ethical revolution

The adoption of AI by the Israeli armed forces to generate bombing targets in Gaza marks a turning point in how military operations are conducted. The system, named Habsora (Hebrew for "the Gospel"), illustrates AI's ability to flag, at unprecedented speed, people and sites it identifies as Hamas militants in Gaza. This capacity to analyze massive volumes of data and propose potential targets raises concerns about the accuracy of strikes and compliance with international humanitarian law, notably the protection of civilians in conflict zones.

Technological and geopolitical context

The use of AI in armed conflict is part of a wider militarization of the technology. Countries like Israel, known for their technological edge and capacity for innovation, are increasingly integrating AI into their defense strategies. The same trend can be observed in Ukraine, where tech companies such as Google and Palantir have taken to the field since the outbreak of hostilities, turning the country into a laboratory for AI warfare. These developments point to an era in which war is increasingly decided by technological superiority, posing unprecedented challenges of regulation and accountability.
Ethical implications and debates

Using AI to designate military targets adds a layer of complexity to the evaluation of strike decisions, compounded by the "black box" problem: the system's decision-making processes remain opaque. This raises concerns about the ability to distinguish combatants from non-combatants, a fundamental principle of international humanitarian law. The interaction between human decision-makers and automated AI recommendations also raises the question of how far critical decisions can be delegated to algorithmic systems, with implications for liability in the event of errors or violations of the laws of war.

Towards regulation of military AI?

The proliferation of AI on the battlefield calls for urgent, fit-for-purpose regulation. Although the EU, the UK and the US have taken steps toward regulating AI, these initiatives remain insufficient in the face of the pace of technological change. A human-centered regulatory framework, backed by enforceable sanctions, is becoming imperative to govern the use of AI in armed conflicts and prevent dystopian abuses.

The integration of AI into military strategies, as illustrated by the Israeli army's use of it in Gaza, represents both a major technological advance and a considerable ethical and legal challenge. The ability to regulate and oversee this technology will largely determine the nature of future conflicts and the protection of civilian populations in war zones.

War Wings Daily is an independent magazine.