Trump bans Anthropic AI in federal agencies, ordering halt amid ethics dispute
Photo by History in HD (unsplash.com/@historyhd) on Unsplash
Trump ordered all U.S. federal agencies to “IMMEDIATELY CEASE” use of Anthropic’s AI on Friday, intervening in a deadlock with the Pentagon over the company’s ethical safeguards; The Guardian reports the move came an hour before a deadline for a pending agreement.
Quick Summary
- Trump ordered all U.S. federal agencies to “IMMEDIATELY CEASE” use of Anthropic’s AI on Friday, intervening in a deadlock with the Pentagon over the company’s ethical safeguards; The Guardian reports the move came an hour before a deadline for a pending agreement.
- Key company: Anthropic
President Trump’s order came shortly before the Pentagon’s deadline, forcing agencies to halt all use of Anthropic’s Claude models within 24 hours, according to a Truth Social post cited by The Guardian. The administration gave Anthropic until 5:01 p.m. ET Friday to relax its “ethical guardrails” on military‑grade deployments or risk being labeled a “supply‑chain risk,” a designation normally reserved for hardware components deemed insecure (The Edition).
The Pentagon’s demand centered on unlocking capabilities that the defense department says are essential for autonomous weapon‑system testing and large‑scale intelligence analysis. Anthropic’s CEO Dario Amodei, speaking at Davos earlier this month, rejected the request, arguing that removing the safeguards would expose the U.S. to “uncontrolled autonomous military applications” and could enable mass domestic surveillance (Tom’s Hardware). The company’s stance has been consistent across statements to the press, emphasizing that its Terms of Service are designed to prevent misuse of its models (NBC News).
In response, the White House issued a six‑month phase‑out timeline for all federal entities, giving them a window to migrate to alternative vendors or develop in‑house solutions (CBS News). The move effectively cuts off a significant portion of Anthropic’s government revenue, which analysts estimate accounted for roughly 12% of the firm’s annual sales before the dispute (VentureBeat). The ban also raises immediate operational challenges for agencies that rely on Claude for document summarization, code generation, and internal chatbot support.
Industry observers note that the rift could accelerate the shift toward OpenAI and Google’s platforms within the public sector. A recent usage report from Poe showed OpenAI and Google gaining market share as Anthropic’s enterprise contracts stalled (VentureBeat). Meanwhile, Reuters reported that Anthropic launched new AI tools just days after the controversy, attempting to offset the loss of government business (Reuters).
Legal experts warn that the “supply‑chain risk” label may trigger additional compliance audits under the Defense Federal Acquisition Regulation Supplement, potentially opening the door to contract terminations or penalties (The Guardian). The Pentagon has not disclosed whether it will pursue alternative AI suppliers or develop its own models, but officials have hinted that the department will treat Anthropic’s refusal as a breach of national‑security obligations (The Guardian).
The standoff underscores a broader clash between AI developers’ self‑imposed ethical frameworks and the U.S. government’s demand for unrestricted access to advanced models for defense purposes. As the deadline passes, federal agencies must scramble to replace Claude, while Anthropic faces an uncertain future in a market now dominated by rivals willing to accommodate the Pentagon’s requirements.
This article was created using AI technology and reviewed by the SectorHQ editorial team for accuracy and quality.