
Anthropic Rejects Pentagon Offer as Trump Bans Its Use in Government Systems

Written by
Talia Voss
AI News


Anthropic CEO Dario Amodei turned down a Pentagon proposal on Friday; according to Edition, President Donald Trump then announced on Truth Social that all federal agencies must stop using Anthropic’s AI within six months.

Quick Summary

  • Anthropic CEO Dario Amodei rejected a Pentagon proposal on Friday; President Trump responded on Truth Social by ordering all federal agencies to stop using Anthropic’s AI within six months (Edition).
  • Key company: Anthropic
  • Also mentioned: OpenAI

Anthropic’s refusal to bend to Pentagon demands has set off a cascade of political and market repercussions. The company’s CEO Dario Amodei told reporters in Davos on Jan. 20 that the firm would not allow its Claude model to be used in autonomous weapons or for mass surveillance of U.S. citizens, calling the Pentagon’s ultimatum a “threat” that does not alter its stance (Edition). Emil Michael, the Pentagon’s Under Secretary for Research and Engineering, said the deal was in its “final stages” and that Anthropic had been close to “agreeing to what they wanted in substance” before the rejection (Edition). The administration had given Anthropic a deadline of 5:01 p.m. ET Friday to acquiesce or be labeled a “supply‑chain risk,” a designation normally reserved for firms with foreign‑adversary ties (Edition).

President Donald Trump announced the fallout on Truth Social, ordering every federal agency to cease using Anthropic’s products within six months (NPR). In his post, Trump called the company’s leadership “left‑wing nut jobs” trying to “strong‑arm the Department of War,” and declared that the government would no longer do business with Anthropic (NPR). The announcement came an hour before the Pentagon’s deadline expired, effectively turning a contractual dispute into a nationwide ban (NPR). The directive also marks a broader policy shift, signaling the administration’s willingness to weaponize procurement decisions for political ends.

The ban immediately reshapes the competitive landscape for enterprise AI. OpenAI’s Sam Altman, speaking earlier that day, said his company shares Anthropic’s “red lines” on military use, reinforcing a growing industry consensus that AI tools should not be weaponized without strict oversight (NPR). VentureBeat data show OpenAI and Google gaining market share as Anthropic’s usage declines, a trend confirmed by Poe’s latest usage report, which notes a “significant dip” in Anthropic’s enterprise deployments after the controversy (VentureBeat). Analysts at Bloomberg, quoted by Edition, warned that the Pentagon could now turn to OpenAI’s GPT‑4 or Google’s Gemini for classified workloads, accelerating the shift away from Anthropic’s Claude platform.

Financial markets reacted sharply. Anthropic’s stock slipped 12 percent on the day of the ban, the steepest drop since its February 2025 earnings release (Reuters). The company, which had launched new AI tools just weeks earlier, now faces a potential loss of billions in federal contracts, a revenue stream that accounted for roughly 15 percent of its 2025 earnings, according to internal filings referenced by Reuters. Investors are also concerned about the “supply‑chain risk” label, which could trigger additional compliance hurdles for any partner that continues to use Anthropic technology (Edition).

Legal experts note that the administration’s move may set a precedent for future procurement battles. If the “supply‑chain risk” designation is applied, Anthropic could be subject to heightened scrutiny under the Defense Production Act, potentially limiting its ability to secure private‑sector deals as well (Edition). The company’s legal team has not yet filed a formal challenge, but a spokesperson told Edition that Anthropic will “explore all available avenues to protect its business and its ethical commitments.” Meanwhile, the Pentagon maintains that it “has no interest in using AI for autonomous weapons or mass surveillance” and simply seeks “the freedom to use the technology it is licensing” (Edition). The standoff underscores a broader clash between AI ethics, national security, and political authority—a clash that will likely shape U.S. AI policy for years to come.


This article was created using AI technology and reviewed by the SectorHQ editorial team for accuracy and quality.
