The Pentagon. (David B. Gleason/Wikimedia Commons)

War Secretary Pete Hegseth has issued a stern ultimatum to Anthropic's CEO, Dario Amodei: remove all safety restrictions on Claude AI for military use by Friday evening, or face serious repercussions.

The stakes are significant. Claude is currently the sole advanced commercial AI model integrated within the Pentagon's classified networks, under a lucrative contract worth up to $200 million, awarded last summer.

Hegseth cautioned that failure to comply by the 5pm Friday deadline could lead to termination of the contract, designation of the company as a 'supply chain risk', or invocation of the Defence Production Act to compel access, sources told Axios.

This 'supply chain risk' label, usually reserved for foreign adversaries, would essentially blacklist Anthropic from all future federal contracts.

What The Pentagon Demands

Pentagon officials are seeking complete access to Claude AI for all lawful military purposes, bypassing the need for Anthropic's approval for each mission.

Hegseth drew a pointed analogy, CBS News reported: when the government buys Boeing aircraft, the manufacturer does not control military operations. The same principle, he argued, should apply to AI technology.

A senior Pentagon official asserted that the matter does not concern mass surveillance or autonomous targeting, stressing the importance of human involvement and adherence to legal norms.

In clear terms, the Pentagon seeks to remove Anthropic's power to veto the use of its technology in military applications.

Anthropic's Firm Stance

Amodei has stood his ground. According to a source close to the meeting, as reported by Reuters, the CEO emphasised two non-negotiable principles: Claude AI must not be used for fully autonomous weapons that select and strike targets without human oversight, and it must not be used for mass surveillance of American citizens.

The company has consistently maintained these positions.

Amodei reassured Hegseth that these restrictions have not impeded lawful military operations and that no field operators have reported issues due to them, the source relayed.

In a subsequent statement, an Anthropic spokesperson stated, 'We continue our good-faith discussions regarding usage policy to ensure Anthropic can support the government's national security mission in alignment with what our models are capable of achieving reliably and responsibly.'

The meeting, described by attendees as cordial, did not lead to any resolution or change in positions on either side.

The Maduro Raid That Sparked The Crisis

The confrontation did not emerge in a vacuum. Tensions escalated sharply after Claude was reportedly used during the US military operation that captured Venezuelan leader Nicolás Maduro on 3 January, through Anthropic's partnership with defence contractor Palantir.

Pentagon officials alleged that a senior Anthropic executive contacted Palantir to ask whether Claude had been used in the raid — an inquiry the department interpreted as disapproval. The Palantir executive was said to be alarmed and reported the exchange to the Pentagon, NBC News reported.

Anthropic flatly denied raising concerns about the operation. The company said it had not discussed the use of Claude for specific operations with the department or with Palantir.

Separately, Semafor reported a previously undisclosed exchange from early December. Under Secretary of War Emil Michael posed a hypothetical: if hypersonic missiles were heading for American soil and Claude could help stop them, would Anthropic refuse? Pentagon officials claimed Amodei suggested the department should check with Anthropic first. The company called that characterisation 'patently false' and said it had offered to make a missile defence carveout.

Actions by Anthropic's Competitors

Anthropic's competitors have taken a different approach. OpenAI, Google, and Elon Musk's xAI have all secured Pentagon contracts valued at up to $200 million each, signed concurrently with Anthropic's agreement last July.

All three have agreed to allow their models to be used for 'all lawful purposes'. Musk's xAI reached a deal this week to deploy its Grok model on classified networks, NPR reported.

That leaves Anthropic isolated.

'Anthropic's peers, including Meta, Google and xAI, have been willing to comply with the department's policy on using models for all lawful applications,' said a fellow at Georgetown University's Centre for Security and Emerging Technology. 'So the company's bargaining power here is limited.'

The administration has labelled Anthropic's safety stance 'woke AI'. White House AI adviser David Sacks helped draft an executive order last year targeting firms over such alleged ideological bias.

What Happens Next

Anthropic has until 5pm on Friday to respond. If it declines, the Pentagon has three options on the table: terminate the contract, designate the company a supply chain risk, or invoke the Defence Production Act (DPA) to force compliance.

Using the DPA to compel changes to AI software would be legally untested. Experts say Anthropic could challenge any such order in court.

But cutting ties carries risks for the Pentagon too. One official acknowledged the problem privately. 'The only reason we are still talking to these people is we need them, and we need them now,' the official told Axios.

The department has been accelerating talks with OpenAI and Google about moving their models into classified systems. For now, though, Claude remains ahead in several applications the military considers critical, including offensive cyber capabilities, sources said.

The decision Anthropic makes by Friday may redefine how Silicon Valley's most advanced technology is integrated into the world's most powerful military.