
Anthropic Blacklisted as US National Security Risk, OpenAI Engineers Rally Behind It

Published by
SectorHQ Editorial

Photo by Maxim Hopman on Unsplash

$200 million. That contract sparked Anthropic’s designation as a U.S. national‑security supply‑chain risk—a label once reserved for firms like Huawei—and spurred OpenAI engineers to publicly back the company.

Key Facts

  • Key company: Anthropic

Anthropic’s $200 million Pentagon contract, signed in July 2025 to deploy its Claude model on classified networks, became the catalyst for an unprecedented national‑security designation. Six months after the deal was sealed, the Department of Defense rescinded the agreement, citing Anthropic’s refusal to strip two safety guardrails—one barring autonomous weapons targeting and another prohibiting mass surveillance of U.S. citizens. The cancellation triggered a six‑month phase‑out for all federal agencies that had adopted Claude, and the Pentagon formally listed Anthropic as a “supply‑chain security risk,” a label historically reserved for firms such as Huawei, according to a Reuters report.

The blacklist carries immediate operational penalties: every Department of Defense contractor is now barred from collaborating with Anthropic, and the designation is being challenged in two federal courts. Anthropic's own suit seeks to block the blacklist, arguing that the government's demand infringes on the company's core safety policies. The Register notes that the filing frames the dispute as a clash between national-security prerogatives and the industry's emerging norm of embedding ethical constraints in AI systems.

What sets this episode apart is the cross‑industry solidarity it has sparked. Thirty engineers from rival firms—including OpenAI and Google DeepMind—signed an amicus brief in support of Anthropic’s position, with Google’s chief scientist Jeff Dean among the signatories, as reported by Skila AI. Their brief argues that government coercion of safety restrictions could set a dangerous precedent for AI governance, potentially undermining the very safeguards that many firms have voluntarily adopted. The involvement of OpenAI engineers, traditionally viewed as competitors, underscores a broader concern within the AI community that external pressure may erode the autonomy of technical safety teams.

For enterprise buyers, the fallout raises a practical question: can a vendor's contractual commitments be overridden by a later governmental edict? If a federal contract can be nullified over a vendor's internal safety rules, private-sector customers may face similar risks should regulators impose contradictory requirements. Analysts cited by CNBC warn that the episode could ripple through the market, prompting vendors to reassess contract clauses that touch on safety and ethics, and possibly raising compliance costs for developers who want to retain control over their model safeguards.

The episode also highlights the growing strategic importance of AI safety as a bargaining chip in high‑stakes contracts. While Anthropic’s stance cost it a lucrative defense deal, the company’s willingness to uphold its safety constraints has earned it the backing of prominent engineers and may bolster its reputation among privacy‑focused clients. As the litigation proceeds, the outcome will likely shape how future government‑AI partnerships are structured, and whether the “national‑security risk” label will become a tool for policy enforcement or remain an outlier reserved for clear threats.

Sources

Primary source

No primary source found (coverage-based)

Other signals
  • Dev.to AI Tag

Reporting based on verified sources and public filings. Sector HQ editorial standards require multi-source attribution.
