Palm Jumeirah Strike
Smoke billows above Dubai's Palm Jumeirah on 28 February 2026 after an Iranian drone struck the Fairmont The Palm hotel during Tehran's retaliatory campaign across Gulf states. Screenshot from X/Twitter/@KunwarVeer805

The US military reportedly used Anthropic's Claude AI during a major strike in the Middle East against Iran, just hours after US President Donald Trump ordered all federal agencies to stop using the technology.

According to the Wall Street Journal, officials said the AI helped with intelligence analysis, spotting potential targets, and running 'what-if' battle scenarios.

Even though Trump had called for a halt, Claude was still deeply embedded in military systems. AI has now quietly become a part of US military operations, helping commanders make sense of huge amounts of information in real time.

How the Military Is Using Claude AI

Claude is not a robot firing missiles. It helps the military turn a huge amount of information into strategy: the AI digests satellite images, intercepted communications, and troop movements, spotting patterns and highlighting what deserves attention.

During operations, it helps identify potential targets, runs simulations to show what might happen if different strategies are used, and even assists with the paperwork required for each mission, creating slide decks and briefings automatically.

Some online discussion has offered insight into what this looks like in practice. One person described it as handling enormous numbers of signals from many sources, organising them, checking for conflicts and spotting patterns.

They said, '...they want to run some agentic workflow that manages ranking, dedupe, corroboration, contradiction, patterns for which opus 4.x is the model.' Another observer pictured a more mundane role, saying, 'I'm imagining they have some bureaucratic requirement to make slide decks before and after each mission and Claude automates it.'
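In general terms, the 'ranking, dedupe, corroboration, contradiction' workflow described in that quote is a standard signal-aggregation pattern. The Python sketch below is purely illustrative, using invented signal records and a hypothetical `analyse` function; it shows the kind of bookkeeping such a pipeline might perform, not any actual military system.

```python
from collections import defaultdict

# Hypothetical signal reports as (source, subject, claim) tuples.
# All records here are invented for illustration only.
signals = [
    ("sat-imagery", "site-A", "active"),
    ("intercept-1", "site-A", "active"),
    ("intercept-2", "site-A", "inactive"),
    ("sat-imagery", "site-B", "active"),
    ("sat-imagery", "site-B", "active"),   # exact duplicate report
]

def analyse(signals):
    # Dedupe: drop exact repeats of the same (source, subject, claim).
    unique = list(dict.fromkeys(signals))

    # Group claims by subject to check corroboration and contradiction.
    by_subject = defaultdict(list)
    for source, subject, claim in unique:
        by_subject[subject].append(claim)

    report = {}
    for subject, claims in by_subject.items():
        report[subject] = {
            # Corroboration: how many independent reports back the
            # most-supported claim about this subject.
            "corroboration": max(claims.count(c) for c in set(claims)),
            # Contradiction: do sources disagree about this subject?
            "contradiction": len(set(claims)) > 1,
        }
    return report

print(analyse(signals))
```

Running this flags site-A as contradicted (sources disagree on its status) while the duplicate site-B report is collapsed before counting, which is the kind of triage a human analyst would otherwise do by hand.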

While Claude is not making lethal decisions on its own, it is doing much of the behind-the-scenes work that human commanders would otherwise spend hours or days doing. It organises the data, points out trends, and presents the findings in a way that lets humans act more efficiently and confidently.

Trump's Ban and the Pentagon Challenge

On 27 February 2026, President Trump instructed all federal agencies to stop using Anthropic's AI, citing national security risks. Agencies were given a six-month window to fully remove the system, but the reality on the ground is more complicated.

Claude is so tightly integrated into military operations that shutting it off immediately could disrupt ongoing missions. This is why it reportedly remained in use during the Iran strike.

For context, Trump's order did not come out of nowhere or stem from expired contracts. It grew out of a dispute between the Pentagon and Anthropic over the limits of AI use.

The military wants wide access for any lawful purpose, while Anthropic has insisted that its AI not be used for mass surveillance or as an autonomous weapon. This tension has made it unclear how federal orders can be enforced in practice when complex operations rely on AI for data analysis and planning.

Anthropic AI's Replacement in the US Military

Anthropic's Claude is currently the only AI model fully approved for use inside the US military's classified systems, where it helps with tasks such as analysing intelligence and planning operations. That is why pulling it out will not happen overnight.

But after Trump ordered all federal agencies to stop using Anthropic's technology, and the Pentagon labelled the company a supply‑chain risk, the military is already looking at alternatives.

Almost immediately after the Anthropic ban was announced, the Pentagon reached a new deal with another big AI lab — OpenAI, the company behind ChatGPT. Under this agreement, OpenAI's AI models will be used in some classified military systems.

The contract reportedly includes built-in safety rules that prevent the technology from being used for purposes such as mass domestic surveillance or fully autonomous weapons, the same types of restrictions that led to the dispute with Anthropic. Unlike Anthropic, however, OpenAI agreed to terms that allow the military to access its AI models for operational use.

This means the Pentagon can now use the AI for planning, intelligence analysis and mission support. There are also reports that the Pentagon is engaging with other major AI companies, including Google and Elon Musk's xAI, which operates an AI called Grok.

When Will the Switch Happen?

The president's ban does not immediately cut off Anthropic's AI, as it includes a six‑month phase‑out period. The intention is to give the military time to reduce its reliance on Claude and transition to other systems without disrupting operations. That effectively means the military could still use Claude for several months while backup systems are brought online.

Shifting from one AI provider to another is not straightforward, as the Pentagon's systems are closely integrated with AI tools. Replacing them requires time, testing and careful integration to ensure the new models operate safely and reliably within classified networks.

The most likely scenario is that OpenAI's models will begin to be used more widely in military support roles in the coming weeks or months. At the same time, Google's and xAI's tools are being prepared for potential use, while Claude is gradually phased out over the six‑month period.