US Military Used Anthropic’s Claude AI in Iran Strikes Hours After Trump Ban

The United States military used an artificial intelligence model developed by Anthropic during operations linked to strikes on Iran, even though a federal order suspending use of the company’s technology had been issued hours earlier, according to a report by The Guardian citing sources familiar with the matter.

US Central Command reportedly relied on Anthropic’s Claude model for intelligence assessments, target identification and combat scenario simulations. The system had been embedded within classified defence and intelligence infrastructure through partnerships with Palantir Technologies and Amazon Web Services under contracts valued at up to $200 million.

The development came despite a directive from President Donald Trump ordering federal agencies to immediately halt use of Claude. In a post on Truth Social, Trump criticised Anthropic, describing it as a “Radical Left AI company.”

Contract Dispute and Security Concerns

The controversy stems from a contractual dispute between Anthropic and the Pentagon over safeguards governing military applications of the AI model. Anthropic reportedly refused to remove restrictions preventing use of its system for mass domestic surveillance or fully autonomous weapons — conditions it said were non-negotiable.

Following the breakdown in negotiations, the Defense Department designated Anthropic a national security supply-chain risk. The company termed the move unprecedented for a US firm and signalled its intention to challenge the designation in court.

Despite the ban, the Pentagon confirmed Anthropic would continue limited services for up to six months during a transition to alternative providers. Meanwhile, OpenAI has announced a separate agreement with the Defense Department covering classified military networks.

Embedded AI in Military Systems

Experts note that removing AI systems deeply integrated into defence infrastructure is complex and costly, involving retraining, re-certification, security testing and parallel deployment across classified networks.

The episode has intensified debate in Washington over oversight, governance and control of military AI systems — underscoring broader tensions between Silicon Valley developers and defence agencies as artificial intelligence becomes increasingly embedded in national security operations.
