Anthropic Cannot Sabotage AI Tools in Wartime, Company Executive Says

Published by
SectorHQ Editorial

While the Trump administration warned that Anthropic could pull the plug on military AI, Wired reports the company’s public‑sector head says it has no ability to disable Claude in wartime.

Key Facts

  • Anthropic says it has no technical means to disable, alter, or remotely access Claude inside military systems
  • Defense Secretary Pete Hegseth designated Anthropic a “supply‑chain risk” in early March, barring DoD contracts
  • A hearing on Anthropic’s emergency request is set for March 24 in a San Francisco federal district court

Anthropic’s courtroom filing on Friday underscores a technical reality that has become a flashpoint in the Pentagon’s broader debate over AI‑enabled weapons systems. Thiyagu Ramasamy, the company’s head of public‑sector affairs, wrote that Anthropic “has never had the ability to cause Claude to stop working, alter its functionality, shut off access, or otherwise influence or imperil military operations” – a direct rebuttal to the Trump administration’s claim that the firm could “pull the plug” on its own generative model during combat ([Wired]). Ramasamy emphasized that Anthropic does not maintain a back‑door or remote “kill switch” and that its engineers cannot log into Department of War (DoW) systems to modify or disable Claude in real time. The filing further notes that the model’s behavior can only be updated with explicit approval from the government and its cloud provider, implicitly Amazon Web Services, though the company did not name the provider outright.

The Pentagon’s concerns stem from a series of high‑level designations that have effectively barred the Department of Defense from using Claude. In early March, Defense Secretary Pete Hegseth labeled Anthropic a “supply‑chain risk,” a status that precludes DoD contracts and forces contractors to seek alternatives ([Wired]). Federal agencies beyond the DoD have followed suit, prompting Anthropic to file two lawsuits challenging the constitutionality of the ban and to request an emergency order for a temporary reversal. A hearing is set for March 24 in a San Francisco federal district court, and the judge could issue provisional relief shortly thereafter ([Wired]).

From the government’s perspective, the risk is not merely theoretical. Government attorneys argue in filings that the DoD “is not required to tolerate the risk that critical military systems will be jeopardized at pivotal moments for national defense and active military operations” ([Wired]). Claude has already been deployed to analyze intelligence data, draft briefing memos, and even assist in drafting battle plans, according to the same Wired report. Officials fear that, if Anthropic were to withhold updates or push a harmful version of the model, it could disrupt active operations or, worse, be used to influence tactical decisions without human oversight. Anthropic counters that it lacks any “kill switch” and cannot access the prompts or data entered by military users, thereby eliminating the possibility of covert manipulation ([Wired]).

Anthropic’s legal team has attempted to assuage these fears through contract language. Sarah Heck, head of policy, submitted a proposal on March 4 that explicitly states the license “does not grant or confer any right to control or veto lawful Department of War operational decision‑making” ([Wired]). The same filing indicates that Anthropic was prepared to incorporate provisions addressing the company’s concerns about Claude being used for lethal strikes without human supervision. Nevertheless, negotiations collapsed, and the DoD has moved to “mitigate the supply chain risk” by working with third‑party cloud providers to ensure continuity of service independent of Anthropic ([Wired]).

The dispute highlights a broader tension between rapid AI adoption in defense and the governance structures needed to manage the technology’s inherent opacity. While Anthropic argues that its architecture simply does not permit remote disabling or unilateral model changes, the Pentagon’s risk‑aversion reflects a policy environment where any perceived single‑point failure can trigger a strategic liability. If the court grants a temporary injunction, Anthropic could regain access to DoD contracts and potentially resume its role in augmenting military analysis. Conversely, a ruling that upholds the ban would force the defense establishment to accelerate the migration to alternative AI platforms, reshaping the competitive landscape for AI vendors seeking government business.

Sources

  • Wired (primary source)
