Israel militarizes AI: when artificial intelligence decides who lives or dies


Israel is deploying AI-driven automated targeting systems against its enemies, raising major ethical risks.

The Israeli army now uses commercial artificial intelligence tools (Microsoft, OpenAI) to sift surveillance data, identify targets, and recommend strikes in real time. Two systems, “Gospel” for buildings and “Lavender” for suspected individuals, speed up the production of target lists, sometimes to as many as a hundred per day. These technologies have significantly increased the pace of operations, but they also raise the risk of misidentification and expose the civilian population to unjustified strikes. International organizations are sounding the alarm about the lack of transparency, algorithmic bias, and the absence of accountability in the event of civilian casualties.


The use of artificial intelligence by the Israeli army

Since late 2023, the Israeli army has been using AI models developed by US companies such as Microsoft and OpenAI to analyze masses of surveillance data and intercepted communications in order to identify potential targets in real time. Two major systems are operational: Gospel, which automatically identifies structures to be bombed, and Lavender, which compiles lists of people suspected of links to armed groups such as Hamas or Islamic Jihad.

Gospel can generate up to 100 target recommendations per day, whereas human analysts produced only about 50 per year before its introduction. Lavender has flagged between 30,000 and 37,000 people, with an estimated accuracy rate of around 90%, although the methodology behind that figure remains opaque. These systems drastically speed up target identification, but they also reduce human oversight and increase the likelihood of errors. The training data is biased, and there have been documented cases of civilians killed by mistake.
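To make these orders of magnitude concrete, here is a back-of-envelope calculation using only the figures cited above. The variable names and the arithmetic are illustrative assumptions for scale, not a description of how Gospel or Lavender actually work.

```python
# Back-of-envelope arithmetic based only on the figures reported above.
# Illustrates scale; it is not a model of the targeting systems themselves.

flagged_people = 37_000        # upper end of the reported 30,000-37,000 list
reported_accuracy = 0.90       # reported ~90% accuracy of the flags

false_flags = flagged_people * (1 - reported_accuracy)
print(f"Implied wrongly flagged individuals: {false_flags:,.0f}")
# -> roughly 3,700 people misidentified even at the reported accuracy

gospel_targets_per_day = 100   # reported Gospel output
analyst_targets_per_year = 50  # reported pre-AI human output
speedup = gospel_targets_per_day * 365 / analyst_targets_per_year
print(f"Approximate speed-up over human analysts: x{speedup:,.0f}")
# -> a throughput increase of roughly 700x
```

Even at the claimed accuracy, the sheer size of the list implies thousands of people wrongly flagged, which is the core of the civilian-risk argument.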

The ethical and legal issues of military AI

The use of AI in targeted attacks raises several questions:

  1. Unclear responsibility: the algorithm recommends, a human validates, but in the event of an error, who is to blame?
  2. Dependence on data: the near-absence of negative examples in the training data reinforces biases and can lead to systemic errors (see the sketch after this list).
  3. Risk to civilians: Human Rights Watch and the UN have warned about strikes caused by technological errors, sometimes described as potential violations of international humanitarian law.
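As a rough illustration of point 2, the toy sketch below uses synthetic data, hypothetical feature names, and a standard scikit-learn classifier; it has no relation to any real military system. It shows how a model trained with almost no negative examples ends up flagging nearly everyone, regardless of the underlying behaviour.

```python
# Minimal, hypothetical sketch of the "missing negative data" problem.
# Synthetic data only; not a description of Lavender or any real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two behavioural "features" drawn from the same distribution for everyone,
# so the features carry no real signal in this toy example.
X_train = rng.normal(size=(1_000, 2))
y_train = np.ones(1_000, dtype=int)   # almost all examples labelled "threat"...
y_train[:20] = 0                      # ...only 2% confirmed non-threats

clf = LogisticRegression().fit(X_train, y_train)

# Evaluate on a fresh population that is, by construction, indistinguishable
# from the training set: the model still flags nearly everyone.
X_test = rng.normal(size=(10_000, 2))
flagged_share = clf.predict(X_test).mean()
print(f"Share of the test population flagged as threats: {flagged_share:.0%}")
# The skew comes from the labels, not from the individuals themselves.
```

The point is that when a dataset contains almost no verified "innocent" cases, the model's errors are systematic rather than random, which is exactly the bias concern raised here.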

The UN Secretary-General has expressed deep concern, denouncing a practice that removes humans from the decision to kill. Some experts have gone as far as describing it as AI-assisted genocide.

The operational impact of AI on the pace of strikes

The acceleration is dramatic: thanks to Gospel, target generation in Gaza has risen from a few dozen per year to around a hundred per day, a pace that far exceeds what human analysts can handle.

According to an AP investigation, the widespread use of AI has enabled the Israeli military to process thousands of hours of data every day to quickly select targets, increasing the effectiveness of strikes while reducing decision-making time.

But technical efficiency does not guarantee sound outcomes: several major errors have resulted in civilian deaths, including the tragic case of three young girls killed by mistake in Lebanon. Gospel has mistakenly identified civilian structures as targets, while Lavender has classified non-combatants as suspects.

Geopolitical consequences and potential abuses

The introduction of these systems has several lasting effects:

  • A learning model for other states: Israel sells or shares these technologies with other partner countries, amplifying their spread.
  • Pressure on international legislation: Many NGOs are campaigning for a moratorium on lethal AI, arguing that it undermines the distinction between combatants and civilians.
  • Erosion of the rules of war: Algorithmic autonomy weakens traditional legal frameworks, creating a dangerous precedent for future conflicts.

Comparison with drone warfare in Ukraine: another form of AI in combat

At the same time, the conflict in Ukraine illustrates a different form of technological militarization: FPV drones fitted with lightweight AI guidance kits are transforming the battlefield. These inexpensive drones (around $500 each) carry out precision strikes in large numbers.

In 2024, Ukraine is said to have produced more than 2.2 million UAVs, with a target of 4.5 million by 2025. In June 2025, the Unmanned Systems Forces, a dedicated branch created in June 2024, struck 19,600 targets, destroying more than 5,000 Russian vehicles (88 tanks, 129 armored vehicles, etc.) and neutralizing or killing 4,500 Russian soldiers.

These drones face similar technological challenges: electronic warfare, jamming and dedicated jammers, and the need for simple onboard algorithms that maintain effectiveness despite interference.

Towards widespread militarization of AI

Both cases show that military AI is now a central factor in modern conflicts:

  • Israel uses centralized AI systems to sort data and recommend strikes;
  • Ukraine embeds simple AI in mass-produced drones to strike at the tactical level.

Both models raise challenges: algorithmic errors, human disempowerment, risks to civilians, and data bias.

The global challenge is to adapt international regulations to these technologies. Uncontrolled use can lead to serious abuses, including unjustified strikes or automated decision-making without real human oversight.

Challenging traditional modes of combat

The militarization of AI is permanently transforming combat strategies:

  • Accelerated decision-making: AI compresses the time between intelligence and action.
  • Expanded strike range: hundreds of targets can be processed every day.
  • Ethical pressure: repeated errors heighten international criticism.
  • Reassessment of military doctrines: traditional forces must respond to the rise of autonomous systems.

Israel and Ukraine are following two different paths: one centralized and dependent on big data and advanced models, the other distributed across inexpensive but effective drones using simpler algorithms. These two approaches are shaping the future of armed conflict: fast, automated, but also dangerous if left unchecked.

War Wings Daily is an independent magazine.