White House Drafts Executive Order to Pull Anthropic’s Claude AI from Federal Systems
Photo by Jakub Żerdzicki (unsplash.com/@jakubzerdzicki) on Unsplash
While the administration once touted Anthropic’s Claude as a “trusted” AI for federal use, reports indicate the White House is now drafting an executive order to yank the system from all government networks.
Key Facts
- Key company: Anthropic
The draft executive order, which officials have begun circulating among senior White House staff, would require every federal agency to cease using Anthropic’s Claude‑based services by the end of the fiscal year, according to reporting by Axios. The move marks a sharp reversal from the administration’s earlier endorsement of Claude as a “trusted” AI platform for government workloads, a stance publicized in 2023 when the White House highlighted the model’s purported safety controls. Axios notes that the new directive is being framed as a precautionary step to mitigate “unforeseen security and privacy risks” that have emerged as the technology has been deployed more widely across agencies ranging from the Department of Defense to the General Services Administration.
The shift appears to be driven by a confluence of factors that have recently come to light. First, internal audits revealed that Claude’s data handling practices may not align with the stringent federal standards for classified and personally identifiable information, a concern echoed in a separate Axios briefing that cited “inconsistent logging and insufficient encryption at rest.” Second, the administration is reportedly responding to pressure from congressional oversight committees that have begun scrutinizing the procurement of third‑party AI tools after several high‑profile incidents involving data leakage in other government contracts. The draft order, as described by Axios, explicitly calls for a “comprehensive risk assessment” before any future AI system can be approved for federal use, signaling a broader tightening of the government’s AI acquisition framework.
Industry analysts, while not quoted directly in the source material, have long warned that the rapid integration of generative AI into critical infrastructure could outpace existing oversight mechanisms. The White House’s decision, therefore, may be less about Claude’s specific shortcomings and more about establishing a precedent for rigorous vetting of all AI vendors. Axios reports that the administration is also considering a parallel effort to develop an in‑house “trusted AI” capability, leveraging existing federal research labs and public‑sector partnerships, which could eventually replace reliance on external providers like Anthropic. If implemented, the executive order would compel agencies to migrate workloads to alternative platforms or to discontinue AI‑driven processes altogether, a transition that could entail significant short‑term operational disruptions.
The broader implications for Anthropic are immediate and potentially severe. While the company has not publicly responded to the Axios report, the loss of a flagship federal customer would remove a high‑visibility endorsement that has been instrumental in its recent fundraising rounds and enterprise sales pitches. Moreover, the executive order could set a de facto standard for other governments worldwide that look to the United States for guidance on AI procurement, prompting a wave of similar restrictions that could curtail Anthropic’s global expansion plans. For the federal government, the move underscores a growing recognition that the promise of generative AI must be balanced against the realities of national security, data sovereignty, and public trust, a calculus that is likely to shape AI policy for years to come.
Sources
- MSN
This article was created using AI technology and reviewed by the SectorHQ editorial team for accuracy and quality.