
The Algorithmic Front: How AI and LLMs Are Being Used in the Iran War

Updated: Mar 18

As of March 2026, the US Department of War (DOW) and Central Command (CENTCOM) have integrated AI into the "kill chain" at an unprecedented scale. Central to this is the Maven Smart System, an evolved version of the original Project Maven. For the first time in active combat, the DOW is utilizing Large Language Models (LLMs), specifically a classified version of Anthropic’s Claude and OpenAI’s latest models, to handle "decision compression."


TL;DR

Decision Compression via LLMs: The US Department of War (DOW) is using Large Language Models to synthesize massive amounts of data, reducing the time to identify and approve targets from days to minutes.


Project Maven Evolution: The "Maven Smart System" is being used at an unprecedented scale to identify over 1,000 targets in the first 24 hours of Operation Epic Fury.


Automated Legal/Strategic Auditing: AI agents are now providing real-time recommendations on munition types and verifying whether specific strikes comply with the international laws of armed conflict.


Autonomous Counter-Drone Warfare: New "Merops" systems use edge-AI to hunt and intercept Iranian Shahed drones even during total electronic warfare and GPS blackouts.


The "Human-in-the-Loop" Crisis: While humans still give the final "fire" command, the velocity of AI recommendations is creating a "rubber-stamping" environment where human oversight is being pushed to its breaking point.

1. Automated Targeting at Scale


On the first day of Operation Epic Fury (February 28, 2026), reports indicated that the Maven Smart System, powered by LLMs, identified and prioritized over 1,000 targets within the first 24 hours. These systems synthesize mountains of data—satellite imagery, signals intelligence (SIGINT), and drone feeds—to produce GPS coordinates and strike recommendations in seconds, a process that previously took days.
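The fusion-and-prioritization step described above can be sketched in miniature. This is a hypothetical illustration, not the Maven Smart System's actual logic: the field names, weights, and two-source corroboration rule are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """A candidate target assembled from multiple intelligence feeds.

    All field names are illustrative, not a real Maven data schema.
    """
    target_id: str
    imagery_conf: float  # satellite-imagery match confidence, 0-1
    sigint_conf: float   # signals-intelligence corroboration, 0-1
    drone_conf: float    # live drone-feed confirmation, 0-1
    priority: float = 0.0

def fuse_and_rank(candidates: list[Candidate]) -> list[Candidate]:
    """Combine per-source confidences into one priority score and rank.

    A real system would use learned models over raw sensor data; here we
    use a simple weighted average, and require corroboration from at
    least two independent sources before a candidate is ranked at all.
    """
    ranked = []
    for c in candidates:
        sources = [c.imagery_conf, c.sigint_conf, c.drone_conf]
        corroborated = sum(1 for s in sources if s >= 0.5)
        if corroborated >= 2:  # never surface a single-feed detection
            c.priority = (0.5 * c.imagery_conf
                          + 0.3 * c.sigint_conf
                          + 0.2 * c.drone_conf)
            ranked.append(c)
    return sorted(ranked, key=lambda c: c.priority, reverse=True)
```

The point of the sketch is the shape of the pipeline, not the numbers: many weak detections go in, a short ranked list comes out, and that compression is what collapses a days-long process into seconds.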


2. The Role of LLMs in Decision Support


The US DOW is using LLMs to act as "Strategic Reasoners." Unlike traditional algorithms that just flag objects, these LLMs provide:


Weaponry Recommendations: Suggesting the most efficient munition based on current stockpiles and target durability.


Legal Justification: Automated reasoning to evaluate whether a strike complies with the international laws of armed conflict.


Post-Strike Assessment: Using generative AI to analyze "before and after" imagery to confirm target neutralization.
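One plausible safeguard around outputs like the three above is to force the model into a structured response and reject anything incomplete before it reaches a human reviewer. The schema below is purely an assumption for illustration; it is not a documented DOW format, and the field names are invented.

```python
import json

# Hypothetical response schema for an LLM "decision support" answer.
# None of these field names come from a real military system.
REQUIRED_FIELDS = {"munition", "legal_assessment",
                   "collateral_estimate", "confidence"}

def validate_recommendation(raw: str) -> dict:
    """Parse and sanity-check a model's JSON recommendation.

    Rejecting malformed or incomplete output at this boundary is one
    way to keep a "strategic reasoner" from silently skipping the
    legal-review step it was asked to perform.
    """
    rec = json.loads(raw)
    missing = REQUIRED_FIELDS - rec.keys()
    if missing:
        raise ValueError(f"incomplete recommendation: missing {sorted(missing)}")
    if rec["legal_assessment"] not in {"compliant", "review_required", "prohibited"}:
        raise ValueError("legal_assessment must be an explicit tri-state value")
    return rec
```

Forcing an explicit tri-state legal verdict, rather than free text, is the kind of design choice that makes the later "rubber-stamping" problem at least auditable.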


3. Counter-Drone Autonomy


The conflict has seen the deployment of the Merops anti-drone system. This AI-driven platform uses autonomous "interceptor drones" to hunt down Iranian-made Shahed loitering munitions. These systems are designed to function even during electronic warfare blackouts, relying on edge-AI to recognize and ram enemy targets without a constant satellite link.
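The degraded-mode behavior described above, keep hunting even when GPS and the satellite link are gone, can be sketched as a simple mode selector. This is not Merops code; the mode names and fallback order are assumptions made to illustrate the idea of edge autonomy.

```python
def guidance_mode(gps_ok: bool, link_ok: bool, visual_lock: bool) -> str:
    """Pick an interceptor guidance mode under degraded conditions.

    Illustrative only: the real fallback logic of an edge-AI
    interceptor is classified and certainly more elaborate.
    """
    if gps_ok and link_ok:
        return "networked_intercept"  # full datalink plus satellite nav
    if visual_lock:
        return "onboard_visual"       # local model tracks the target itself
    return "loiter_search"            # no lock: fly a search pattern and retry
```

The essential property is that the bottom two branches need no off-board input at all, which is what lets the interceptor keep functioning through a total electronic-warfare blackout.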


Our Thoughts


The 2026 conflict marks a turning point where war moves at the "speed of thought." While the US DOW emphasizes that a "human is always in the loop," the sheer velocity of AI-generated target lists creates a "rubber-stamping" risk, where human officers struggle to keep pace with algorithmic recommendations. As Operation Epic Fury continues, the ethical debate intensifies: we have successfully automated the "how" of warfare, but the "why" remains a high-stakes human burden.
