Armies replace soldiers with AI-powered robots

Experts warn that the rise of autonomous AI on the battlefield could lead to fatal errors and pose major ethical challenges.

The world’s armed forces are increasingly relying on autonomous robots to replace human soldiers in combat. While today’s drones still require human approval before opening fire, rapid advances in artificial intelligence (AI) could soon make that approval optional. The consequences of this autonomy could include fatal errors, the escalation of conflicts and major ethical problems, particularly concerning the human role in lethal decisions. This article explores the technical, economic and moral challenges raised by the growing autonomy of military AI.

Military autonomy: a technological evolution in progress

The rise of autonomous armed robots is driven by a logic of operational efficiency, in which reaction speed and reduced risk to soldiers are the priorities. From the First World War, when long-range artillery reduced direct engagement, to today’s armed drones, technology has enabled armies to move ever further from the front line. In 2020, autonomous drones deployed in Libya reportedly attacked convoys without human intervention, a first step towards automated combat.

However, this autonomy raises concerns: an AI that selects its own targets introduces the risk of potentially disastrous errors. An autonomous system could, for example, mistakenly target a civilian, with major diplomatic and military consequences. Samuel Bendett, a researcher at the Center for a New American Security, argues that autonomy could become cost-effective for armies in the coming decades, with complete military robot systems commanding budgets of several hundred million euros.

The economic implications of autonomous AI for armies

The implementation of autonomous AI systems in the military domain involves massive investment in research and development. Currently, worldwide spending on military AI is estimated at around €10 billion a year, a figure that could rise to €20 billion by 2030. Countries are investing not only in drones, but also in autonomous land and sea vehicles, such as unmanned warships capable of patrolling vast expanses of sea.

These technologies, though expensive, aim to deliver strategic advantages by minimizing casualties and increasing responsiveness to threats. Russia, for example, experimented with unmanned ground vehicles in Ukraine in 2024, underlining a trend towards greater autonomy in high-intensity conflicts. For powers such as the United States, China and Russia, this kind of spending is part of a global technological race to maintain military advantage.

Ethical challenges: the human role in lethal decisions

Experts agree that life-and-death decisions must remain under human control. Autonomous weapons expert Zach Kallenborn insists that current machine-vision systems remain prone to error. If a drone mistakenly attacks a non-military target, it could provoke an unwanted escalation. States deploying such technologies therefore need to keep a human in the loop to monitor and control these autonomous systems.
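
To make the idea of a "human in the loop" concrete, the sketch below shows one simple way such a gate could work in software: the machine may propose a target, but nothing is authorized until a human operator explicitly confirms. This is a minimal illustrative sketch in Python, not a description of any real weapon system; the class name, the confidence threshold and the confirmation prompt are all assumptions invented for this example.

# Minimal human-in-the-loop gate (illustrative only; all names are
# hypothetical and do not describe any real system).
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.95  # assumed minimum machine-vision confidence

@dataclass
class TargetProposal:
    label: str         # what the vision model thinks it sees
    confidence: float  # model confidence, between 0 and 1

def human_confirms(proposal: TargetProposal) -> bool:
    # Block until a human operator explicitly approves or rejects.
    answer = input(f"Engage {proposal.label} "
                   f"(confidence {proposal.confidence:.0%})? [y/N] ")
    return answer.strip().lower() == "y"

def decide(proposal: TargetProposal) -> str:
    # The machine can only propose; low confidence is rejected outright,
    # and even a high-confidence proposal still needs human sign-off.
    if proposal.confidence < CONFIDENCE_THRESHOLD:
        return "rejected: confidence below threshold"
    if not human_confirms(proposal):
        return "rejected: operator declined"
    return "authorized by human operator"

if __name__ == "__main__":
    print(decide(TargetProposal(label="armored vehicle", confidence=0.97)))

The point of the structure is that the authorization path cannot be reached without the human_confirms call: removing that single step is what "taking the human out of the loop" means in practice.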

The ethical dimension is central to the debate on military autonomy. By eliminating human intervention, the moral weight of the decision to kill disappears, giving way to the coldness of algorithmic logic. This could generate critical incidents, especially if software errors or biases in the AI interfere with real-time decision-making. Moreover, liability in the event of a blunder remains an unresolved challenge: who, or what, would be held responsible for civilian casualties?

The potential consequences of total autonomy in conflicts

Once AI systems are advanced enough to attack without human intervention, battlefield decisions could become purely mechanical, multiplying the risk of fatal errors. An autonomous drone acting without human oversight could, for example, strike civilians, with serious diplomatic repercussions. In a climate of heightened tension, such an incident could trigger retaliation and aggravate existing conflicts.

In view of these risks, some experts believe that autonomy should be confined to domains such as maritime defense, where civilians are less exposed. Kallenborn points out that using autonomous weapons in theaters where non-combatants are rare, such as the high seas, could limit the ethical risks, whereas their use in populated areas is far more dangerous.

A technological race between risks and strategic advantages

The militarization of AI and the growing autonomy of combat systems pose a strategic dilemma for states. Advances in AI enable faster responses and reduce the number of soldiers exposed on the battlefield, but they also introduce a high risk of uncontrolled errors. Shrinking the human role as the regulator of lethal decisions carries major ethical and operational risks.

The current trajectory towards an army of autonomous machines calls for in-depth reflection on safety protocols and control frameworks. In the face of growing investment, militaries will need to establish rigorous standards to ensure that autonomous systems do not compromise global security, and that warfare retains an ethical framework, where human life remains a priority in every decision.
