Why is it more difficult to fly a fighter jet during AI-simulated combat than during actual training?

Why does AI-simulated aerial combat put more pressure on pilots than a conventional training flight? Technical analysis, figures, examples.

Summary

AI-simulated combat is shaking up air training. It imposes a machine-speed tempo, compresses the OODA loop, saturates the information space, and removes the tacit cues of real flight. In an LVC (Live-Virtual-Constructive) environment and new-generation simulators such as the JSE, algorithm-driven opponents can perform thousands of iterations, coordinate “swarms,” vary their rules of engagement, and reappear without fatigue, safety, or cost constraints. Pilots then face a higher cognitive load, with fewer sensory signals, more electromagnetic ambiguities, and classified scenarios that are impossible to reproduce in the open air. Recent tests on the X-62A VISTA, Project VENOM, and AlphaDogfight have shown that AI imposes a tactical density rarely achieved in training flights. When well designed, these environments force anticipation, refine multi-sensor management, and prepare pilots for collaborative operations with loyal wingman drones, while requiring teaching methods adapted to the limitations and biases of digital technology.

The changing scale of modern training

Live training remains essential, but threats now evolve faster than firing ranges can replicate them. Studies estimate that upgrading a threat representation takes four to five years in a virtual environment and seven to ten years in a live one, while adversaries refresh their capabilities more quickly still. The armed forces are therefore converging on LVC architectures that combine real flight, distributed simulators, and constructive forces to multiply tactical combinations while controlling costs and security. At Nellis, the Nevada Test and Training Range offers 12,000 square nautical miles (≈ 41,000 km²) of airspace, but some scenarios remain too dense, too classified, or too dangerous for the open air. Next-generation simulators such as the Joint Simulation Environment (JSE) fill this gap by representing air, surface-to-air, and electromagnetic-spectrum threats with high fidelity, at densities that would be impossible to stage in reality.

The rise of AI-simulated combat

Since AlphaDogfight, where an AI agent beat an F-16 pilot five rounds to zero, maturity has progressed: AI now flies for real on the X-62A VISTA and faces humans in controlled close combat. In Europe, Saab and Helsing have validated beyond-visual-range (BVR) engagements with an agent trained on massive volumes of simulation. These campaigns show that AI can learn effective tactical patterns and execute them without fatigue or stress. For the trainee, this means opponents that optimize energy and positioning in milliseconds, never “forget” their parameters, and constantly vary their responses.

Security constraints that do not exist for AI

In real BFM/ACM (basic fighter maneuvers and air combat maneuvering), safety requires bubbles and minimum distances: a 500 ft (≈ 152 m) bubble, attack termination at 1,000 ft (≈ 305 m) against certain aircraft, and “knock-it-off” procedures. These rules save lives, but they also limit exploration of marginal tactical areas. In AI simulation, those safeguards disappear: the agent flies trajectories a human would never dare attempt in real life. The trainee pilot faces an opponent with no safety inhibitions, one that exploits angular windows and extreme energy regimes. This asymmetry makes victory psychologically and technically more costly than in a conventional training flight.

The density and fidelity of the “synthetic battlespace”

Hyper-realistic simulators make the environment denser than the open air: several hundred transmitters, enemy aircraft and missile models aligned with the latest intelligence, and a detailed representation of electronic warfare. The JSE can integrate thousands of virtual entities and reproduce enemy detections and firings at realistic ranges, something that many pilots never see in stand-alone or firing range training. Accelerated repetition also contributes to the difficulty: entire classes complete more than 350 simulated sorties in a few days, with rapid level progression and less “downtime.” In this format, “survival” requires impeccable sensor-weapon discipline, from briefing to post-mission review.

Compression of the decision loop through machine speed

A human reacts to a visual stimulus in ≈ 250 ms on average, and more slowly under heavy load. AI computes and decides at machine speed, producing attack plans hundreds of times faster than human staffs; even if not everything it produces is usable, the excess of options imposes a cognitive race against time on crews. In AI-simulated combat, the OODA loop contracts: “observe, orient, decide, act” shifts from a human tempo (seconds) to an algorithmic tempo (milliseconds). Crews must filter, prioritize, and delegate to the system, or risk getting bogged down in delayed decisions.
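The tempo gap above can be made concrete with a back-of-the-envelope sketch. This is an illustrative calculation, not a flight model: the 250 ms and 1 ms latencies are the article's order-of-magnitude figures, and the 10-second engagement window is an assumption.

```python
# Illustrative sketch: how many complete OODA cycles can a human
# and an AI agent fit into one engagement window, given per-cycle
# decision latency? Figures are order-of-magnitude assumptions.

def ooda_cycles(engagement_ms: int, cycle_latency_ms: int) -> int:
    """Number of complete observe-orient-decide-act cycles."""
    return engagement_ms // cycle_latency_ms

ENGAGEMENT_MS = 10_000     # assumed 10 s merge-to-shot window
HUMAN_LATENCY_MS = 250     # ~250 ms visual reaction, best case
AI_LATENCY_MS = 1          # machine-speed update, ~milliseconds

human = ooda_cycles(ENGAGEMENT_MS, HUMAN_LATENCY_MS)   # 40 cycles
agent = ooda_cycles(ENGAGEMENT_MS, AI_LATENCY_MS)      # 10,000 cycles
print(f"human: {human} cycles, agent: {agent} cycles "
      f"({agent // human}x faster tempo)")
```

Even under these generous assumptions for the human, the agent completes hundreds of decision cycles for every one the pilot does, which is the arithmetic behind the “race against time.”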

Increased cognitive load and lack of tacit reference points

Flying in real life provides sensory anchors: G-forces, vibrations, peripheral vision, smells, noises, and micro-clues that help estimate energy/attitude status. In simulation, these cues are diminished. At the same time, AI increases the density of the display: multi-sensor tracks, electromagnetic threats, decoys, and false positives. As a result, cognitive load increases, attention narrows, reaction times lengthen, and sorting errors multiply. Recent studies show that an increase in visual information and simultaneous tasks degrades performance and lengthens response times. Hence the interest in digital co-pilots and adaptive interfaces capable of modulating assistance according to the mental state of the crew.

Classified scenarios that only virtual reality allows

Certain 5th generation aircraft tactics, certain sensor settings, and certain weapon reactions cannot be exposed in the open air without revealing secrets. The JSE was designed precisely to train and test these sensitive modes: near-peer threats, realistic electromagnetic density, credible firing and detection. For trainees, this translates into situations that are more aggressive and ambiguous than those encountered on a firing range, and the need to quickly master “black modes.” This makes the exercise more demanding than standard live training, where these functions remain disabled.

Adaptive and tireless AI adversaries

An AI agent can “play” the ideal adversary: it does not tire, does not suffer from hypoxia or stress, does not violate its own instructions, and learns from each iteration. In Europe, an agent trained on Gripen has been “fed” the equivalent of decades of experience per week, solely through simulation. In the United States, Project VENOM equips F-16s with autothrottles and autonomy instruments to test agents in real flight, before moving on to large-scale collaborative combat drones. When facing these adversaries, energy, sensor, and weapon management must be optimal from the first turn to the last; the agent immediately punishes the slightest latency.

The risks specific to AI that pilots must master

AI-simulated combat is not magic. Recent tests show “plans” generated very quickly but sometimes inapplicable. Algorithms can also “hallucinate” correlations or generalize poorly outside the training domain. Hence the need for robust human supervision, clear rules of use, and cross-validation in a mixed environment. Paradoxically, these limitations complicate the exercise: the crew must detect AI biases while taking advantage of its speed. This additional vigilance increases the mental effort compared to a conventional training flight where the enemy does not “cheat” with modeling.

Educational levers to take advantage of the difficulty

Scenario design

Systematically vary the rules of engagement, introduce sensor “noise,” mix conventional and AI adversaries, and gradually increase the density of threats. Measure performance not only by “kill ratio,” but also by sensor sorting quality, adherence to priorities, ammunition economy, and time spent in stable states.
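A multi-metric debrief score like the one described above could be sketched as follows. The metric names and weights are hypothetical, chosen only to illustrate scoring a sortie on more than kill ratio; they are not drawn from any official syllabus.

```python
# Hypothetical debrief scoring: weight lethality alongside sensor
# sorting, ammunition economy, and time in stable states, as the
# section suggests. All names and weights are illustrative.
from dataclasses import dataclass

@dataclass
class SortieMetrics:
    kills: int
    losses: int
    tracks_correctly_sorted: int
    tracks_total: int
    missiles_fired: int
    missiles_hit: int
    seconds_stable: float
    sortie_seconds: float

def sortie_score(m: SortieMetrics) -> float:
    """Weighted 0-to-1 score; max(1, ...) guards divide-by-zero."""
    kill_ratio = m.kills / max(1, m.kills + m.losses)
    sorting = m.tracks_correctly_sorted / max(1, m.tracks_total)
    economy = m.missiles_hit / max(1, m.missiles_fired)
    stability = m.seconds_stable / max(1.0, m.sortie_seconds)
    # Illustrative weights: lethality matters, but so does discipline.
    return round(0.4 * kill_ratio + 0.25 * sorting
                 + 0.2 * economy + 0.15 * stability, 3)

m = SortieMetrics(kills=2, losses=1, tracks_correctly_sorted=18,
                  tracks_total=20, missiles_fired=4, missiles_hit=2,
                  seconds_stable=300.0, sortie_seconds=600.0)
print(sortie_score(m))  # 0.667
```

The design point is that a pilot with a perfect kill ratio but sloppy sorting and wasted shots scores below one who fought with discipline, which matches the training goal stated above.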

Load regulation

Instrument sessions (eye tracking, research EEG, telemetry), set cognitive load thresholds, and impose brief but regular “cognitive breaks.” The goal is to optimize learning: too little stress does not train, too much stress freezes.
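The threshold logic implied here can be sketched minimally. The assumption is a normalized cognitive-load index in [0, 1] derived from the instrumentation mentioned above (eye tracking, EEG, telemetry); the 0.3 and 0.8 thresholds are illustrative, not doctrine.

```python
# Sketch of load regulation: map a measured, normalized cognitive-
# load index to a session adjustment. Thresholds are assumptions.

LOW, HIGH = 0.3, 0.8   # under-stimulated vs. overload (assumed)

def session_action(load_index: float) -> str:
    """Return the instructor-station action for a given load index."""
    if load_index < LOW:
        return "increase threat density"   # too little stress does not train
    if load_index > HIGH:
        return "cognitive break"           # too much stress freezes
    return "hold scenario"                 # productive learning zone

print(session_action(0.85))  # cognitive break
```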

Integration into reality

Linking the virtual to the live: joint briefings, joint debriefings, replaying weak moments from live flights in the simulator, applying lessons learned from AI to flight. Professional panels emphasize that LVC must prepare for “high-end fight,” not replace it.

Operational openness

The major benefit of this “additional” difficulty is clear: it forces crews to think “mission” rather than “platform,” to delegate to systems, and to orchestrate mixed formations with collaborative drones. The X-62A VISTA and Project VENOM programs herald this future, in which the pilot becomes less a gunner than the conductor of a tactical network. Pilots must therefore learn to win against an adversary that plays faster, longer, and more densely than any “bandit” in conventional training flights.

War Wings Daily is an independent magazine.