Anthropic refuses Pentagon demand to remove AI safeguards; Dario Amodei cites Department of War talks
Anthropic said on Feb 26 2026 it refused the Pentagon’s request to remove AI‑morality safeguards, with CEO Dario Amodei citing discussions with the Department of War.
Quick Summary
- Anthropic refused the Pentagon’s request to remove AI‑morality safeguards, with CEO Dario Amodei citing discussions with the Department of War.
- Key company: Anthropic
Anthropic’s refusal to strip safety guardrails from Claude comes after the Pentagon threatened to cancel a $200 million contract and label the firm a “supply‑chain risk,” a move that would jeopardize its access to defense‑grade procurement channels, the company said in a statement to The Guardian on Feb 26 2026. CEO Dario Amodei warned that the Department of Defense’s demand to “turn off safety guardrails and allow any lawful use of Claude” conflicted with the company’s ethical standards, and he urged Defense Secretary Pete Hegseth to reconsider the ultimatum (The Guardian).
Amodei’s own policy note, also dated Feb 26, underscores Anthropic’s broader national‑security strategy. He said the firm has already deployed Claude inside classified DoD networks, at national laboratories, and for intelligence‑community customers, supporting tasks ranging from intelligence analysis to cyber operations (Anthropic statement). He added that Anthropic deliberately forwent “several hundred million dollars in revenue” to block Claude’s use by firms tied to the Chinese Communist Party, even when those firms were designated as Chinese Military Companies by the Department of War (Anthropic statement).
TechCrunch reported that the standoff has escalated, with the Pentagon pressing for an “unfettered” AI tool while Anthropic insists on retaining two core safeguards. The company’s position is that these safeguards are essential to prevent misuse and to preserve the model’s alignment with democratic values, a stance it says does not preclude continued support for U.S. warfighters (TechCrunch).
Reuters, citing an unnamed source familiar with the negotiations, confirmed that Anthropic’s leadership remains firm, describing the company as “digging in heels” amid the pressure. The source said the Pentagon’s leverage hinges on the contract’s size and the potential designation of Anthropic as a supply‑chain risk, which could trigger broader regulatory scrutiny (Reuters).
The Verge’s coverage frames the dispute as an “existential negotiation” for Anthropic, noting that the outcome could set a precedent for how private AI firms balance government demands with internal safety protocols. Amodei’s public remarks emphasize that Anthropic will continue to serve the Department of War “with our two requested safeguards in place,” signaling that the company is prepared to risk the contract loss rather than compromise its moral safeguards (The Verge, The Guardian).
This article was created using AI technology and reviewed by the SectorHQ editorial team for accuracy and quality.