Pentagon, OpenAI, and Anthropic Clash Over “All Lawful Use” Policy in Defense AI Deal
What began as a quiet partnership between the Pentagon and Anthropic has become a public showdown involving OpenAI over a clause that permits “all lawful use,” The‑Decoder reports.
Key Facts
- Key company: OpenAI
- Also mentioned: Anthropic
OpenAI stepped into the Department of Defense contract after President Trump ordered a halt to Anthropic’s involvement, citing the startup’s refusal to accept an “all lawful use” clause (The‑Decoder). Within hours, OpenAI announced it would inherit the deal and published a detailed blog post outlining three red lines: no domestic mass surveillance, no autonomous weapons systems, and no automated high‑risk decisions (The‑Decoder). The company also pledged technical safeguards to enforce those limits, but it left the contentious “all lawful use” language intact, arguing that the Pentagon could employ its models for any purpose that complies with existing law (The‑Decoder).
Anthropic CEO Dario Amodei warned that the phrase “all lawful use” is riddled with loopholes. In a CBS interview, Amodei described a scenario in which the DoD could purchase commercial datasets and run them through AI without triggering the domestic surveillance prohibition, because current statutes do not define such analysis as surveillance (The‑Decoder). He argued that the clause effectively grants the Pentagon carte blanche to exploit AI capabilities in ways that were previously impractical but are now legally permissible (The‑Decoder). The criticism resonated with the AI community, driving a surge in downloads that briefly pushed Anthropic’s Claude app past ChatGPT in the App Store (The‑Decoder).
OpenAI’s contract also touches on autonomous weapons, but the language mirrors the DoD’s Directive 3000.09, which requires “appropriate levels of human judgment” rather than mandatory human approval (The‑Decoder). OpenAI pledged that its models would not “independently direct autonomous weapons” where law, regulation, or policy demands human control, yet the directive’s vague standard leaves room for interpretation (The‑Decoder). Reuters notes that the DoD’s definition of “appropriate” human oversight remains unsettled, raising concerns that the Pentagon could still deploy AI‑driven systems with minimal human intervention (Reuters). Anthropic had demanded explicit human‑in‑the‑loop safeguards, a condition OpenAI’s agreement does not guarantee (The‑Decoder).
The fallout has become a public relations battle as much as a policy dispute. OpenAI’s transparency push, publishing the contract and hosting an AMA on X with Sam Altman, has drawn criticism rather than reassurance, with users rallying behind Anthropic and lifting its Claude app to the top of the App Store (The‑Decoder). TechCrunch reported that Altman framed the deal as a step toward responsible defense AI, emphasizing the red‑line commitments (TechCrunch). Analysts cited by Reuters, however, argue that the “all lawful use” clause could set a precedent for broader military AI deployments, potentially shaping future procurement across the defense sector (Reuters).
With a Friday deadline looming for finalizing the contract terms, the Pentagon faces a choice: accept OpenAI’s broader usage language and the associated technical safeguards, or renegotiate the clause to address Anthropic’s concerns about legal loopholes and human oversight. The outcome will not only determine which AI vendor supplies the DoD’s next‑generation tools but also signal how U.S. policy will balance national security imperatives against emerging ethical standards for AI in warfare (Reuters).
This article was created using AI technology and reviewed by the SectorHQ editorial team for accuracy and quality.