Pentagon Issues Ultimatum to Anthropic, Only AI Firm Operating on Classified Network
The Pentagon gave Anthropic, the sole AI firm on a classified military network, an ultimatum to meet its demands by Friday, news reports say.
Anthropic’s leadership told Reuters that the company will not grant the Pentagon unrestricted access to its Claude model, despite Defense Secretary Pete Hegseth’s ultimatum: comply by Friday or be deemed a supply-chain risk subject to the Defense Production Act (DPA) (Reuters). The DPA, a Korean War-era statute, allows the government to conscript private firms into national-security production, effectively forcing Anthropic to either abandon its safety-by-design policy or face compulsory integration (Zecheng, 24 Feb).
The Pentagon’s demand goes beyond mere licensing. According to a Verge report, officials want Claude deployed for “mass surveillance, autonomous weapons, the full menu” on classified networks, a use case explicitly prohibited by Anthropic’s current usage policy (The Verge). The model already runs on classified U.S. defense systems through a Palantir partnership and was reportedly used in the operation that captured Venezuelan President Nicolás Maduro (Zecheng).
Anthropic responded by publishing a revised Responsible Scaling Policy (RSP 3.0) the same day the ultimatum was delivered. The new policy drops the hard line that barred training more powerful models without confirmed safety measures, replacing it with language saying development will be “delayed” only if leadership believes the company leads the AI race while catastrophic risk remains (Zecheng). Critics see the timing as a strategic pivot to appease the Pentagon while preserving the firm’s market-cap surge; TechCrunch notes the surge followed the policy change and the launch of enterprise plugins, which together lifted Anthropic’s valuation by $830 billion (TechCrunch).
Anthropic’s refusal to bend has escalated the dispute, with Reuters citing a source who said the company is “digging in its heels” despite pressure to avoid the supply-chain-risk designation (Reuters). The source added that the Pentagon is prepared to invoke the Defense Production Act if Anthropic does not comply, potentially forcing the firm to integrate Claude into broader defense applications against its own safety guidelines.
Analysts monitoring the standoff note that the outcome could set a precedent for how the U.S. government leverages private AI capabilities for classified missions. If the Pentagon proceeds under the DP Act, Anthropic could be compelled to relinquish control over its model’s deployment, undermining its “responsible AI” brand and raising questions about future industry‑government collaborations. The deadline looms, and the next move will determine whether Anthropic preserves its ethical stance or becomes a mandated component of the nation’s defense AI infrastructure.
This article was created using AI technology and reviewed by the SectorHQ editorial team for accuracy and quality.