Anthropic's Claude AI Deployed in Iran Strikes as Donald Trump, Pentagon Feud Deepens
The Pentagon's reliance on AI tools like Claude raises questions about political messaging and military strategy.

Amid the escalating and chaotic war in the Middle East, the United States military reportedly used Anthropic's Claude AI tool during operations linked to Iran, even as President Donald Trump publicly cut ties with the company. The decision has exposed a widening gap between political rhetoric and what actually happens behind closed doors.
Claude was reportedly used in support roles for military planning and analysis tied to the recent airstrikes in Iran. Reports said the tool remained in play despite Trump's move to sever all ties with Anthropic and its artificial intelligence systems.
Trump had earlier branded Anthropic as a 'radical left AI company run by people who have no idea what the real world is all about,' a remark that deepened tensions between the White House and Silicon Valley. Yet inside the Pentagon, the demand for advanced AI tools appears to have continued regardless of the political noise.
Pentagon Reliance Despite Political Backlash
The US Department of Defence had access to Claude under existing arrangements, even after Trump signalled a break with the company, The Guardian reported. Officials did not spell out exactly how the system was used, but defence sources said AI tools helped process information and support analysis.
Relations between Anthropic and the Pentagon had grown tense in recent months. The disagreement centred on access: the Defence Department wanted broader, less restricted use of Anthropic's systems, while the company resisted handing over full, unrestricted control.
That standoff created an uneasy partnership. On one side stood a military establishment eager to expand its technological edge. On the other stood an AI firm cautious about losing oversight of how its systems were used and how its models were deployed.
Trump's criticism added pressure to an already strained relationship. His comments painted Anthropic as politically biased, raising questions about whether AI tools can truly remain neutral in the national security arena.
Worsening Ties and Questions of Control
Pentagon officials held talks not only with Anthropic but also with other AI companies as part of a broader effort to bring artificial intelligence into defence operations, according to The New York Times. The goal was simple: systems that could handle huge amounts of intelligence data quickly and efficiently.
Anthropic, however, pushed back against what it viewed as overreach. The company refused to grant the Defence Department blanket access to its technology, citing concerns about safeguards and responsible use.
That refusal tested trust. The gap between what the Pentagon wanted and what Anthropic felt comfortable providing grew wider over time.
Against that background, reports that Claude was used in connection with Iran strikes raised eyebrows in Washington. Critics began to question whether political messaging truly matched what was happening in practice.
For service members and analysts, the debate feels far less theoretical. AI tools can quickly scan satellite images, review intercepted messages and highlight patterns that might take human teams much longer to spot. That speed can influence decisions in real time, especially during active conflict.
From Battlefield Tools to Personal Companions
The controversy comes as Europe wrestles with a different concern about artificial intelligence. Several European Parliament lawmakers recently urged the European Commission to investigate whether companion-style chatbots should face tighter limits under EU AI law.
Experts told Politico that AI chatbots are not your friends. They warned that some users develop emotional attachments to systems designed to simulate care, understanding and empathy. The concern centres on mental health and vulnerability.
The timing has sharpened the wider debate. On one hand, governments rely on artificial intelligence to help guide military decisions. On the other, regulators worry about the emotional and psychological impact of similar tools in daily life.
This dual use leads to a simple but unsettling question. If AI can shape choices in war rooms, what might it mean for how it shapes everyday thinking at home?
For now, the Pentagon has not publicly explained the full scope of Claude's role in Iran-related operations. Anthropic has stood by its safety policies and the limits it sets.
The clash between Trump and the company may intensify. Yet the larger issue may prove harder to settle.
Artificial intelligence is no longer a distant idea. It now sits inside command centres and living rooms alike. As governments and ordinary people test its boundaries, the line between tool and influence continues to blur.