Pentagon Threatens to Cut Anthropic’s $200 Million Contract If It Won’t Drop “Woke” AI
The Pentagon warned Anthropic on Friday it will slash its $200 million contract and blacklist the firm unless it removes the guardrails on its Claude AI model, Edition reports.
Quick Summary
- The Pentagon warned Anthropic on Friday it will slash its $200 million contract and blacklist the firm unless it removes the guardrails on its Claude AI model, Edition reports.
- Key company: Anthropic
Anthropic CEO Dario Amodei met with Defense Secretary Pete Hegseth on Thursday, and the Pentagon delivered an ultimatum: lift the safety guardrails on Claude or see the $200 million contract terminated by Friday, Edition reported. The demand centers on “all lawful use” language the Department of Defense wants baked into the model’s terms of service, which would allow the military to deploy Claude for any application it deems permissible, including autonomous weapons and domestic surveillance, a source familiar with the talks said.
Anthropic has pushed back, citing two non‑negotiable red lines. The company refuses to let Claude power AI‑controlled weapons, arguing the technology is not yet reliable enough for lethal decision‑making, according to the same source. It has also balked at any role in mass surveillance of American citizens, noting the absence of clear legal frameworks to govern such use. Amodei reiterated both positions in the meeting, calling the proposed applications “illegitimate” and “prone to abuse,” NPR noted.
Hegseth warned that failure to comply would trigger a blacklist that could bar Anthropic from future federal work, and hinted the Defense Production Act could be invoked to compel a tailored version of Claude for the Pentagon’s needs, Engadget reported. The threat of a blacklist is unprecedented for an AI firm that has been a key DoD supplier; the contract currently funds Claude’s integration into several classified projects, including intelligence analysis tools used in recent overseas operations, Reuters has confirmed.
The standoff has drawn attention from industry observers who see it as a flashpoint in the broader debate over AI ethics and national security. Anthropic’s stance aligns with a growing chorus of AI leaders—such as OpenAI and Google DeepMind—who have publicly refused to develop fully autonomous weapon systems. Yet the Pentagon argues that restricting “lawful” uses hampers the military’s ability to maintain a technological edge, especially as rival nations accelerate their own AI weaponization programs, a theme echoed across the coverage in Axios and The Verge.
If the deadline passes without a concession, the Pentagon is prepared to reallocate the $200 million to other vendors that will meet its unrestricted‑use criteria. Sources close to the procurement office say the DoD has already identified alternative providers willing to relax safeguards, though none have matched Claude’s performance benchmarks for reasoning capabilities. The outcome could set a precedent for how the U.S. government negotiates AI contracts, potentially forcing other firms to choose between ethical guardrails and lucrative defense dollars.
Sources
- Hacker News Front Page
This article was created using AI technology and reviewed by the SectorHQ editorial team for accuracy and quality.