Anthropic Rejects Department of Defense Deal, Says “Drop Dead” to Military Contract Offer
Computerworld reports that Anthropic has flat‑out refused the Pentagon’s “any lawful use” clause, telling the Department of Defense to “drop dead” and rejecting the military contract outright.
Key Facts
- Key company: Anthropic
Anthropic’s refusal to sign the Pentagon’s “any lawful use” clause marks a rare public clash between a frontier‑AI firm and the U.S. defense establishment. According to Computerworld, Defense Secretary Pete Hegseth demanded that the company strip away its existing safeguards that prohibit the use of Claude models for battlefield or domestic surveillance applications, threatening to cancel Anthropic’s $200 million contract and blacklist the startup if it did not comply. The deadline was set for 5 p.m. on the day of the demand, and CEO Dario Amodei responded by publicly rejecting the terms, saying the firm “cannot, in good conscience” agree to language that would allow “mass domestic surveillance” or “fully autonomous weapons.” 【Computerworld】
Amodei’s stance rests on two technical arguments. First, he warned that “frontier AI systems are simply not reliable enough to power fully autonomous weapons,” citing the risk of unpredictable behavior in high‑stakes combat scenarios. Second, he emphasized that the proposed “any lawful use” language would effectively remove contractual limits that currently prevent Anthropic’s models from being deployed in mass‑surveillance contexts, which he described as “incompatible with democratic values” and a “serious, novel risk to our fundamental liberties.” 【Computerworld】
The Pentagon’s push, described by officials as a “my way or the highway” ultimatum, reflects a broader trend of the DoD seeking unrestricted access to commercial AI capabilities. TechCrunch notes that the stakes extend beyond a single contract: the department wants a legal framework that would let it repurpose civilian‑grade models for any mission, including intelligence‑gathering and weaponization, without renegotiating terms for each use case. Anthropic countered by offering to collaborate on research and development to improve model reliability, but the DoD “has not accepted this offer,” according to the same report. 【TechCrunch】
The fallout has already entered the political arena. Reuters reported that President Trump directed federal agencies to cease using Anthropic’s technology after the company was labeled a “supply chain risk” by the military. The move underscores how the dispute is being framed as a national‑security issue, even as civil‑rights groups such as the Electronic Frontier Foundation have rallied behind Anthropic, urging the firm to hold the line against what they call “bulk spying and automated warfare.” 【Reuters】
Industry observers see Anthropic’s decision as a litmus test for the future of AI governance. Wired highlighted that the company’s refusal is not driven by partisan ideology—National Review pointed out Amodei’s prior work with the Trump administration on a high‑profile intelligence operation—but by a pragmatic assessment of the technology’s maturity and the ethical hazards of unfettered deployment. By walking away from a lucrative defense contract, Anthropic signals that emerging AI firms may prioritize long‑term reputational risk management over short‑term revenue, potentially reshaping how the Pentagon negotiates with private AI vendors. 【Wired】【National Review】
Sources
- Computerworld
- TechCrunch
- Reuters
- Wired
- National Review
This article was created using AI technology and reviewed by the SectorHQ editorial team for accuracy and quality.