Trump Administration Deploys Anthropic’s Claude in Iran Strikes Hours After Ban
While Trump publicly banned Anthropic’s Claude as “radical left AI,” the administration still turned to the tool for its Iran strikes hours later, SFist reports.
Key Facts
- Key company: Anthropic
The administration’s decision to tap Anthropic’s Claude for the Iran strikes came after a rapid cascade of public statements and contract reassignments on Friday. According to the Wall Street Journal, the Department of Defense’s targeting team relied on Claude’s natural-language processing to parse real-time intelligence feeds and generate strike recommendations, despite President Trump’s Truth Social proclamation earlier that day that “every Federal Agency … must immediately cease all use of Anthropic’s technology.” The Journal report, later echoed by Axios, indicates that the tool was employed in the final planning stages of the operation, which began at 8 p.m. local time, just hours after the ban was announced.
The apparent contradiction sparked an immediate response from OpenAI, which announced on its corporate blog that it was ready to assume any federal contracts vacated by Anthropic. The Verge confirmed that OpenAI’s leadership reached out to the Pentagon that evening, offering its own large‑language model as a “drop‑in replacement” for Claude. However, the Pentagon’s internal procurement timeline did not allow for a seamless switch before the strikes were executed, forcing the military to rely on the existing Claude integration that had been in place for months under a separate, classified contract. Reuters later reported that the State Department is now transitioning all agency AI use to OpenAI, a move that underscores the logistical challenges of abruptly cutting off a vendor in the middle of an operational cycle.
Anthropic’s own statements, as reported by The Guardian, suggest that the company had previously raised ethical concerns about the potential misuse of its models for “AI-based mass murder and domestic surveillance.” Those warnings were part of the backdrop for Trump’s public denunciation of the firm as a “radical left AI company.” Nevertheless, the internal military briefing cited by the Wall Street Journal shows that Claude’s ability to synthesize disparate data streams (satellite imagery, signals intelligence, and open-source reports) was deemed critical for the rapid decision-making required in the high-stakes theater of the Middle East. The report notes that the model’s output was reviewed by human analysts before any kinetic action was authorized, a standard safeguard that the Pentagon says remains in place regardless of the underlying AI provider.
The episode has reignited a broader debate about AI governance in defense settings. Wired’s coverage of the Trump administration’s ban highlighted the tension between political rhetoric and operational necessity, noting that the Department of Defense has long maintained a “dual-use” policy that permits the use of commercial AI tools under strict oversight. The Guardian adds that Anthropic’s popularity may actually rise among defense contractors who view the company’s willingness to engage with ethical constraints as a competitive advantage, especially as other vendors scramble to fill the sudden vacuum. Industry analysts, though none are quoted directly in the available sources, will be watching how the Treasury’s recent decision to end all use of Anthropic products, reported by Reuters, affects the firm’s market position and its relationships with other federal agencies.
In the short term, the fallout appears to be administrative rather than technical. The Treasury’s cessation of Anthropic services, the State Department’s pivot to OpenAI, and the Pentagon’s continued reliance on Claude for the Iran operation illustrate a fragmented approach to AI procurement that may prompt a reevaluation of contract structures and contingency planning. If the administration’s ban was intended to send a political signal, the practical outcome (continued use of the very technology it condemned) underscores the difficulty of aligning policy pronouncements with the realities of modern warfare. The incident is a cautionary tale for future administrations: without a clear, enforceable framework for AI usage across all branches, rapid policy shifts risk creating operational blind spots that could compromise both strategic objectives and ethical standards.
This article was created using AI technology and reviewed by the SectorHQ editorial team for accuracy and quality.