Pentagon Gives Anthropic Until Friday to Remove Claude Guardrails as Hegseth Threatens Defense Production Act
The Pentagon has given Anthropic until Friday to remove guardrails on its Claude model for military use, after Defense Secretary Pete Hegseth pressed CEO Dario Amodei, Engadget reports.
Quick Summary
- The Pentagon has given Anthropic until Friday to remove guardrails on its Claude model for military use, after Defense Secretary Pete Hegseth pressed CEO Dario Amodei, Engadget reports.
- Key company: Anthropic
Anthropic’s Claude has become the Pentagon’s de facto AI workhorse for a handful of classified projects, but the partnership is now on a knife‑edge. According to Axios, Defense Secretary Pete Hegseth gave Anthropic CEO Dario Amodei an ultimatum on Tuesday: remove the remaining guardrails on Claude for “lawful” military use by Friday, or face a forced redesign under the Defense Production Act (DPA) and a possible designation as a “supply‑chain risk” (Axios; Reuters). The deadline was set after a high‑stakes meeting in Washington, where Hegseth pressed the company to loosen the safety standards it has kept in place to block mass surveillance of Americans and the development of autonomous weapons (NPR).
Anthropic has publicly resisted the demand, insisting that its ethical red lines, particularly prohibitions on domestic mass surveillance and AI‑directed weaponry, are non‑negotiable. In a statement to Engadget, the firm said it would “adopt certain policies for the Pentagon” but would not allow Claude to be used for mass surveillance or autonomous weapons (Engadget). The company’s stance echoes Amodei’s long‑standing position that such applications are “illegitimate” and “prone to abuse,” a view he reiterated during a trip to New Delhi last month (NPR). The Pentagon, however, argues that the only reason Claude remains in use is its superior performance on the sensitive tasks the Department needs right now, a point underscored by a defense official who told Axios, “The only reason we’re still talking to these people is we need them and we need them now. The problem for these guys is they’re that good” (Axios).
If Anthropic refuses to comply, the DPA could be invoked to compel the company to produce a version of Claude that meets the Department’s specifications. Hegseth has already hinted that the DPA, a 1950 law typically reserved for wartime emergencies, could be used to force compliance or, alternatively, to terminate the existing $200 million contract (NPR). The contract, signed in 2024, funds Claude’s integration into classified workflows and represents a significant revenue stream for the AI lab. Losing it would not only eliminate a major source of income but also signal to other defense contractors that the Pentagon will not tolerate “woke AI” safeguards that it deems obstructive (NPR).
The pressure on Anthropic comes amid a broader Pentagon push to diversify its AI portfolio. While Claude remains the sole model cleared for the most sensitive classified work, the Department has already approved OpenAI’s ChatGPT and Google’s Gemini for unclassified tasks, and Elon Musk’s xAI signed a separate agreement to embed Grok in classified systems (Engadget). Reuters reported that the DoD used Claude during the February 2026 raid on Venezuelan assets, highlighting the model’s operational importance (Reuters). Yet the same report noted that the Pentagon is actively courting OpenAI and Google for deeper classified integration, suggesting that Anthropic’s leverage could evaporate quickly if it balks (Reuters).
Analysts warn that the standoff could have ripple effects across the AI industry. A forced DPA‑driven redesign would set a precedent for government‑mandated alterations to commercial AI safety frameworks, potentially chilling innovation at firms that prioritize ethical guardrails. Conversely, a decisive break with Anthropic could force the Pentagon to accelerate its reliance on OpenAI and Google, consolidating market power among the few giants that already dominate the enterprise AI space. As the Friday deadline looms, the outcome will likely shape not only Anthropic’s future but also the broader balance between national‑security imperatives and corporate AI ethics.
This article was created using AI technology and reviewed by the SectorHQ editorial team for accuracy and quality.