Anthropic Shapes Military AI Policy by Contract, Highlighting Procurement Limits as Oversight Tool
In the past year, the United States has shifted to a contract‑based model of AI governance, a move that, Lawfaremedia reports, leaves procurement limits as the primary, and arguably insufficient, tool for military AI oversight.
Key Facts
- Key company: Anthropic
Anthropic’s recent designation as a “supply chain risk” by the Pentagon has thrust the company into the spotlight of a broader shift in U.S. military AI oversight, in which bilateral contracts now serve as the primary governance mechanism. According to Lawfaremedia, the Department of Defense labeled Anthropic, the first frontier AI firm deployed on classified networks, a security risk on Feb. 27, 2026, even as field reports indicated continued use of its Claude model in operations against Iranian targets. The move, later reinforced by a Trump‑directed blanket ban on Anthropic’s technology across federal agencies, effectively excluded the company from any government procurement pipeline, underscoring how quickly a contractual designation can translate into an industry‑wide blacklist (Lawfaremedia).
The contractual model that enabled this rapid exclusion is itself a product of the Pentagon’s growing reliance on Other Transaction (OT) agreements, which sit outside the Federal Acquisition Regulation (FAR) framework. Lawfaremedia notes that OT agreements grant the parties—typically the DoD and an AI vendor—broad discretion to negotiate guardrails, leaving dispute resolution to the terms of the instrument rather than a standardized statutory regime. This flexibility, while attractive for accelerating technology integration, means that enforcement hinges on the vendor’s technical controls rather than on any durable legal or democratic oversight. In practice, the Pentagon’s ability to monitor or sanction misuse of an AI model depends on whether the model is delivered directly to a defense agency or embedded within a prime contractor’s platform, creating a patchwork of accountability that statutes were designed to avoid.
OpenAI’s parallel negotiations with the Pentagon illustrate how the same contractual levers can produce divergent outcomes. After facing public backlash over its own procurement terms, OpenAI announced amendments to its agreement on social media, a move reported by Lawfaremedia that highlights the transparency gap inherent in contract‑based governance. Unlike Anthropic, which was unilaterally blacklisted, OpenAI’s renegotiated terms suggest that a vendor’s bargaining power and public profile can shape the contours of military AI use, reinforcing the notion that contracts, not statutes, now dictate the rules of engagement. The disparity between the two firms underscores a structural problem: contracts provide neither the democratic deliberation nor the institutional durability of congressionally crafted regulation, leaving critical decisions about autonomous weapons and domestic surveillance to private negotiations.
The reliance on procurement limits as the sole oversight tool has drawn criticism from multiple quarters. Wired reports that Anthropic’s legal team warned that the Pentagon’s blacklist was “legally unsound,” emphasizing that a unilateral exclusion without due process could set a precarious precedent for future tech‑industry relations with the government. TechCrunch adds that the controversy may deter other startups from pursuing defense contracts, fearing abrupt policy shifts that could jeopardize their business models. Reuters further explains that Anthropic ultimately walked away from the Pentagon’s overtures, citing concerns that the contractual framework could not guarantee long‑term security and compliance. Together, these accounts suggest that the contract‑centric approach may erode the pipeline of innovative AI talent into defense, as firms weigh the risks of operating under an opaque, mutable governance regime.
Analysts observing the shift note that while contract‑based governance offers short‑term agility, it fails to address the deeper question of who ultimately controls the ethical deployment of AI in warfare. Lawfaremedia argues that the current model “dismantles the governance infrastructure that might have answered” fundamental concerns about autonomous decision‑making and surveillance. Without statutory safeguards, the Pentagon’s reliance on procurement limits leaves a vacuum where democratic accountability, public scrutiny, and inter‑agency coordination should reside. If the United States continues to prioritize flexible contracts over durable regulation, the risk is not merely a bureaucratic inconvenience but a systemic vulnerability that could compromise both national security and the broader social contract governing emerging technologies.