Trust in military AI requires transparency

Building trust in military AI requires explainable, transparent, and rigorously tested models, not opaque black boxes.

The challenge of opaque models in a military context

The futuristic rhetoric of tech leaders is appealing. They promise ultra-modern artificial intelligence systems capable of anticipating threats autonomously. Their argument is compelling: against tech-savvy rivals, only the most advanced technology will do. But the narrative often owes more to showmanship than to operational reality.

There is a clear gap between these promises and operational complexity. In mission planning platforms, for example, complex trade-offs must be computed between performance, adaptability, usability, and human responsibility. This is not science fiction: it is the daily reality of researchers. Trust cannot be decreed; it must be built.

Bayesian optimization explained simply

Bayesian optimization is an effective mathematical method for improving complex systems through successive trials. It works by trying a configuration, learning from the result, and then choosing the next trial more intelligently. It typically relies on a Gaussian process surrogate to estimate which regions of the search space are most promising.

It is well suited to military systems where information is partial or uncertain, because it enables iterative learning even in dynamic environments. However, its assumptions, such as a smooth relationship between inputs and outputs, can clash with the sudden disruptions and uncertainties of real-world operations. To be useful, such a system must remain adaptive, interpretable, able to handle partial or noisy data, and always understandable to a human operator.
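
For readers who want to see the mechanics, here is a minimal sketch of the loop described above, written in Python with scikit-learn's Gaussian process regressor and an expected-improvement rule. The objective function, search interval, and noise level are illustrative placeholders, not a real mission-planning model.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def objective(x):
    # Placeholder for an expensive, noisy evaluation (e.g. a planning simulation).
    return -(x - 0.6) ** 2 + 0.05 * rng.normal()

# A few initial trials over the search interval [0, 1].
X = rng.uniform(0, 1, size=(4, 1))
y = np.array([objective(x[0]) for x in X])

# Matern kernel: a common choice when the response is assumed only moderately smooth;
# alpha models observation noise so partial or noisy data does not break the fit.
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-2, normalize_y=True)

candidates = np.linspace(0, 1, 200).reshape(-1, 1)

for _ in range(10):
    gp.fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    # Expected improvement: balances exploiting good regions and exploring uncertain ones.
    best = y.max()
    z = (mu - best) / np.maximum(sigma, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = candidates[np.argmax(ei)]
    # Run the next trial and fold the result back into the model.
    X = np.vstack([X, [x_next]])
    y = np.append(y, objective(x_next[0]))

print("Best configuration found:", X[np.argmax(y)][0], "score:", y.max())
```

Each pass through the loop is one "trial, learn, try again" iteration: the Gaussian process summarizes everything observed so far, and the acquisition rule decides where the next trial is most worthwhile.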

The importance of explainability in military AI

The military sector cannot be satisfied with opaque algorithms. XAI (eXplainable AI) tools aim to lift this veil by making decisions understandable and verifiable. They make it possible to know why a decision was made, based on what data and according to what rules.
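
One common family of XAI techniques is post-hoc feature attribution, which answers the "based on what data" question. The sketch below uses permutation importance from scikit-learn on a synthetic classifier; the feature names and data are hypothetical and serve only to illustrate the kind of answer such tools provide, not any specific military system.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for operational data: the features could be sensor readings, tracks, etc.
X, y = make_classification(n_samples=500, n_features=6, n_informative=3, random_state=0)
feature_names = [f"sensor_{i}" for i in range(X.shape[1])]  # hypothetical names

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: how much does shuffling each input degrade the model?
# Large drops point to the data the decisions actually relied on.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```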

Initiatives such as the DARPA XAI program seek to combine performance and transparency. The program aims to create models that justify their choices so that human users can judge them.

This requirement is vital to comply with legal frameworks such as international humanitarian law, which requires clear accountability. Opaque models complicate audits and the traceability of errors or deviations, and undermine the legitimacy of decisions.

Recent technical advances for greater trust

Teams of military veterans have filed a patent for explainable, hallucination-resistant AI. The approach is meant to keep systems reliable in critical missions by minimizing errors and avoiding deadlocks. The idea is to open up the “black box” to instill confidence where failure cannot be tolerated.

In the field of optimization, the CoExBO method offers a collaborative approach: AI makes suggestions while explaining them at each iteration. It also allows human experts to intervene when they identify an error.
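
The sketch below is not the published CoExBO algorithm, only a simplified illustration of the collaborative pattern it describes: the optimizer proposes a candidate, explains the proposal in terms of predicted score and uncertainty, and a human expert can accept or override it.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def suggest_with_explanation(gp, candidates):
    """Propose the candidate with the best predicted score and explain why."""
    mu, sigma = gp.predict(candidates, return_std=True)
    idx = int(np.argmax(mu))
    explanation = (f"candidate {candidates[idx].round(2)} has the highest predicted "
                   f"score ({mu[idx]:.2f}) with uncertainty +/- {sigma[idx]:.2f}")
    return candidates[idx], explanation

def human_review(suggestion, explanation, override=None):
    """The expert sees the suggestion and its rationale, and may substitute their own."""
    print("AI suggests:", suggestion, "because", explanation)
    return override if override is not None else suggestion

# Toy usage: fit a surrogate on past trials, then run one collaborative iteration.
X = np.array([[0.1], [0.5], [0.9]])
y = np.array([0.2, 0.8, 0.3])
gp = GaussianProcessRegressor().fit(X, y)
candidates = np.linspace(0, 1, 50).reshape(-1, 1)

suggestion, why = suggest_with_explanation(gp, candidates)
chosen = human_review(suggestion, why)          # the expert accepts
# chosen = human_review(suggestion, why, override=np.array([0.7]))  # or intervenes
```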

Another approach, TNTRules, generates rule-based explanations after optimization. It provides a clear interpretation of the AI's decisions even when the underlying model is sophisticated.
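
As an illustration rather than the TNTRules method itself, one generic way to obtain rule-based explanations after optimization is to fit a shallow decision tree to the trial results and print it as readable if/then rules:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

# Synthetic data standing in for post-optimization results:
# each row is a tried configuration, each score its observed outcome.
rng = np.random.default_rng(1)
configs = rng.uniform(0, 1, size=(200, 2))          # e.g. two tunable parameters
scores = -(configs[:, 0] - 0.6) ** 2 - (configs[:, 1] - 0.3) ** 2

# A shallow tree distills the results into a handful of human-readable rules.
tree = DecisionTreeRegressor(max_depth=2).fit(configs, scores)
print(export_text(tree, feature_names=["param_a", "param_b"]))
```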

Practical challenges for the responsible adoption of AI

High expectations and commercial enthusiasm should not obscure the reality of technological limitations. AI tools are already being used in the US military. But their unreliability—hallucinations, code errors, vulnerabilities—creates real risks for critical infrastructure. The current margins of error (40 to 70%) are unacceptable in these contexts.

Regular legal reviews, operational logs, and rigorous adversarial testing must also be incorporated prior to deployment. This ensures that a human remains in control of the process and that accountability can always be traced.
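
What an operational log entry might look like in practice is sketched below; the fields and values are assumptions, chosen only to show how a recommendation, its explanation, and the human's response can be recorded for later audit.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One traceable entry in an operational log (fields are illustrative)."""
    timestamp: str
    model_version: str
    inputs_digest: str        # hash of the data the model saw
    recommendation: str
    explanation: str
    human_decision: str       # accept / override / abort
    operator_id: str

record = DecisionRecord(
    timestamp=datetime.now(timezone.utc).isoformat(),
    model_version="planner-1.4.2",
    inputs_digest="sha256:<hash-of-inputs>",
    recommendation="route_bravo",
    explanation="lowest predicted exposure given current sensor picture",
    human_decision="accept",
    operator_id="op-117",
)

# Append-only JSON lines make later audits and error tracing straightforward.
with open("decision_log.jsonl", "a") as log:
    log.write(json.dumps(asdict(record)) + "\n")
```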

Some concrete examples

The US Army’s use of Project Maven illustrates this point. This targeting AI identifies targets in satellite imagery, with the final decision resting with a human operator. During one exercise, Maven processed 80 targets per hour compared with 30 without AI, using twenty operators instead of two thousand. That gain, however, always comes with human validation.

The systems developed by Helsing AI provide a real-time visual interface for understanding the situation. AI assists, but does not decide. Humans remain at the center of the decision-making process.

AI does not replace the operator

Military actors should not shy away from innovation. But giving in blindly to marketing promises leads to disaster. Effectiveness requires patiently building internal expertise, coupled with deliberate, critical partnerships with the commercial and academic sectors.

Governments must resist the temptation to buy by default, without understanding what they are buying. Teams must be trained, systems must be technically evaluated, clear explanations must be demanded, audits must be imposed, and contractual commitments on transparency must be required. Without this, artificial intelligence will remain a sales pitch, not a reliable defense tool.

War Wings Daily is an independent magazine.