Modern war has changed from a chess match between people into a high-speed race between computers. During Operation Epic Fury, the first few days of strikes on Iran suggested that enemy soldiers are no longer the primary threat; instead, a military system like the Maven Smart System (MSS) may be doing the heavy lifting. The system acts like a giant wartime filter: it ingests millions of photos and signals, then generates a target list.
AI use in war creates a problem known as Decision Compression: people have almost no time to think. In the past, a commander might spend hours studying a map before a strike. Now, because the computer locates so many targets so fast, analysts face drastically shorter review times.
Based on the 1,000 strikes reported in the first 24 hours, back-of-the-envelope math suggests the time to check each target could be as short as 20 to 45 seconds. That is like deciding whether a person is a friend or an enemy in the time it takes to tie your shoes.
The Math of the Kill Chain: Why Seconds Matter

The numbers behind this new approach are straightforward, yet they reveal a significant shift in who is accountable for a strike. To hit over 1,000 targets on the first day of Operation Epic Fury, the Maven Smart System, if it was indeed used, had to work at a speed humans cannot match alone.
In the past, hitting 1,000 targets in one day would have required a staff of roughly 2,000 people. According to the CSET Policy Brief "Building the Tech Coalition," the transition to AI-driven targeting has reduced staffing requirements from 2,000 people to about 20 officers.
When you do the math, the human workload is impossible:
1,000 targets ÷ 20 officers = 50 targets per officer per day
Note: The exact number of officers is undisclosed. The figure of 20 is speculation derived from other reports.
Across an 8-hour shift, that works out to roughly 6 life-or-death decisions every hour. However, targets do not appear one by one; they arrive in large clusters during war, so an officer may have as little as 20 to 45 seconds to check a target. In that window, they must verify that the AI has not mistaken a civilian bus for a military truck or a hospital for an arms depot. The sketch below makes these timing assumptions explicit.
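To make the arithmetic concrete, here is a minimal back-of-the-envelope model in Python. Every input is a speculative estimate consistent with the figures above; the officer count is undisclosed in reality, and the surge assumptions are invented purely for illustration.

```python
# Back-of-the-envelope model of per-target review time.
# All inputs are speculative estimates from this article, not confirmed data.

TARGETS_PER_DAY = 1_000   # reported strike total for the first 24 hours
OFFICERS = 20             # speculated staffing (the real number is undisclosed)
SHIFT_HOURS = 8

targets_per_officer = TARGETS_PER_DAY / OFFICERS                    # 50.0
decisions_per_hour = targets_per_officer / SHIFT_HOURS              # ~6.25
avg_seconds_per_target = SHIFT_HOURS * 3600 / targets_per_officer   # 576 s

# Targets rarely arrive evenly. If 80% of a shift's targets land inside a
# single 30-minute surge (an invented assumption), the window collapses:
surge_targets = targets_per_officer * 0.80          # 40 targets
surge_seconds_per_target = 30 * 60 / surge_targets  # 45 s

print(f"Decisions per hour: {decisions_per_hour:.1f}")
print(f"Even pace:  {avg_seconds_per_target:.0f} s per target")
print(f"Surge pace: {surge_seconds_per_target:.0f} s per target")
```

At an even pace, each officer would have nearly ten minutes per target; it is the clustered arrival of targets that collapses the window toward the 45-second mark.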
Under DoD Directive 3000.09, these systems are classified as Semi-Autonomous because a human still manually selects each target. And while DoDD 3000.09 defines a human-supervised autonomous weapon system as one where operators can intervene and terminate engagements before unacceptable levels of damage occur, it never specifies how much time that human must be given.
Because the directive sets no minimum review time for operators, the line between a human-led strike and a machine-driven one becomes vanishingly thin.
If an officer has only 45 seconds to parse a complicated screen before saying "YES," they are no longer exercising judgment. They are acting as a rubber stamp, which invites Automation Bias: because the machine is fast and usually correct, humans stop double-checking the data. In a 45-second system, the human may no longer be the pilot, merely a witness.
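One way to see what the directive leaves open: a review workflow could enforce a minimum dwell time before any approval counts. The sketch below is purely hypothetical; nothing like it is mandated by DoDD 3000.09, and the function name and the 120-second floor are invented for illustration.

```python
import time

# Hypothetical safeguard: a fail-closed gate that enforces a minimum human
# review time. DoDD 3000.09 mandates nothing like this; the function name
# and the 120-second floor are invented for illustration.

MIN_REVIEW_SECONDS = 120  # hypothetical floor; the directive sets none

def human_approval_gate(target_id: str, presented_at: float,
                        approved: bool) -> bool:
    """Reject any approval issued faster than the review-time floor."""
    dwell = time.monotonic() - presented_at  # seconds since target appeared
    if dwell < MIN_REVIEW_SECONDS:
        # A near-instant "yes" is a rubber stamp, not judgment:
        # fail closed (no strike) rather than fail open.
        print(f"{target_id}: blocked after only {dwell:.0f}s of review")
        return False
    return approved
```

The design choice worth noting is failing closed: an approval issued too quickly is treated as a rubber stamp and blocks the strike, rather than passing through by default.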
As the 18th Airborne Corps transitions to "AI-native warfighting," the 45-second window creates a dangerous Responsibility Vacuum. Under the Law of Armed Conflict, a commander is liable for a machine's mistake if they fail to provide meaningful human control, yet that standard appears mathematically impossible to meet at the speeds full-scale war demands.
The speed problem extends beyond targeting: in March 2026, AI-generated misinformation falsely claiming the death of Israeli Prime Minister Benjamin Netanyahu briefly circulated.
The Tech Fight: Anthropic vs. OpenAI

On February 28, 2026, a major split opened between the government and the tech world: the Pentagon officially banned the AI company Anthropic after labeling the firm a Supply Chain Risk because it refused to remove its "red lines" against using AI for lethal targeting.
Almost immediately, OpenAI stepped in to fill the gap. In a public statement on March 2, 2026, the company detailed a new multi-layered agreement to put GPT on classified networks. Unlike Anthropic, OpenAI accepted a Lawful Use rule, allowing the military to use its AI for any purpose consistent with U.S. law and operational requirements.
To protect its remaining red lines, OpenAI argues its deal is technically more enforceable because it is cloud-only. By retaining full discretion over its Safety Stack and keeping cleared OpenAI personnel in the loop, the company maintains a physical off-switch. This architecture is designed to support military missions while technically preventing the mass domestic surveillance or fully autonomous strikes that led to the Pentagon’s split with Anthropic.
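Why would cloud-only be more enforceable? Here is a minimal sketch of the idea under stated assumptions: in a hosted deployment, every request passes through a server-side policy check the vendor controls, which it could not do if the model weights were shipped on-premises. The category names and functions below are hypothetical and do not reflect OpenAI's actual Safety Stack.

```python
# Hypothetical toy illustrating server-side enforcement in a cloud-only
# deployment. None of these names come from OpenAI's real systems.

BLOCKED_CATEGORIES = {"autonomous_strike", "mass_domestic_surveillance"}

def model_generate(prompt: str) -> str:
    """Stand-in for the hosted model call."""
    return f"[model output for: {prompt[:40]}]"

def serve_request(prompt: str, mission_tag: str) -> str:
    # Because the model only runs on vendor servers, the vendor can refuse,
    # log, or revoke access centrally -- a literal off-switch that shipping
    # weights to an air-gapped network would surrender.
    if mission_tag in BLOCKED_CATEGORIES:
        raise PermissionError(f"Request blocked by policy: {mission_tag}")
    return model_generate(prompt)
```

The enforcement lives with the vendor rather than the customer; that is the whole argument for cloud-only.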
Historical Mistakes and Modern Risks

History shows us that trusting machines too much can lead to accidents. In 1983, a Soviet officer named Stanislav Petrov saw a computer warning that the U.S. had launched nuclear missiles. He chose to wait and realized it was a glitch. If he had trusted the machine blindly, the world might have ended. In 2026, we are losing that “waiting time.” Everything moves so fast that there is no room for a “Petrov” to stop and think.
Another example is the Patriot missile fratricides of 2003, when the system told soldiers that friendly aircraft were enemy missiles. Because the crews had only about a minute to decide, they trusted the screen and shot them down.
In Operation Epic Fury, the time to decide has dropped even lower. We are building a system where the computer’s Success Probability is the main focus, which can lead to tragic mistakes if a human cannot intervene in time.
The AI Proximity Bias Problem

AI models today are built to be efficient: they are optimized to maximize the probability of hitting their target. Because a person is easier to track at home than moving in a car, patterns in recent strikes suggest a Proximity Bias in the AI.
The computer may identify a house as a high-probability location for a target, yet fail to fully account for the families or neighbors nearby. This is why observers report more civilian areas being affected by these high-tech strikes; the toy model below shows how the skew arises.
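To illustrate the mechanism, here is a toy scoring model in Python. All numbers are invented, and nothing here reflects how the Maven Smart System actually scores targets; the point is only that optimizing hit probability alone systematically favors homes.

```python
# Toy model of "Proximity Bias": entirely hypothetical numbers, meant only
# to show how optimizing hit probability alone skews strikes toward homes.

candidates = [
    # (label, hit_probability, expected_civilians_nearby)
    ("target at home, night",    0.92, 6),
    ("target in moving vehicle", 0.55, 1),
    ("target at remote depot",   0.70, 0),
]

# A scorer that maximizes hit probability alone picks the home every time.
best_by_hit_prob = max(candidates, key=lambda c: c[1])

# A scorer that also penalizes expected civilian presence picks differently.
def civilian_aware_score(c, penalty_per_civilian=0.10):
    label, p_hit, civilians = c
    return p_hit - penalty_per_civilian * civilians

best_civilian_aware = max(candidates, key=civilian_aware_score)

print("Hit-probability only:", best_by_hit_prob[0])      # home
print("Civilian-aware score:", best_civilian_aware[0])   # depot
```

Adding even a crude penalty for expected civilian presence flips the choice away from the house, which is exactly the term a pure Success Probability objective omits.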
In 2026, the human officer is still in the loop, but the speed of the Maven Smart System makes their job much harder. We are essentially asking humans to keep up with the speed of an algorithm, but humans are not built for that. If we do not find a way to slow down Decision Compression, the war of the future will be won by the fastest computer, not necessarily the best cause.
Transparency Box: How This Was Researched
This analysis was compiled by cross-referencing public strike totals from the week of February 28, 2026, with performance benchmarks for the Maven Smart System (MSS). Data regarding the Anthropic blacklist and OpenAI's contract were drawn from public reporting and OpenAI's March 2, 2026 disclosure.
TODAY’S UPDATE: As of March 11, 2026, official and independent reports confirm the conflict has reached a catastrophic scale. Estimated numbers:
| Nation / Group | Reported Fatalities |
| --- | --- |
| Iran | 1,255 – 1,787+ |
| Lebanon | 570+ |
| Israel | 26 – 28 |
| United States | 9 |
| Gulf Neighbors | 10 – 12 |
