Pentagon flags Anthropic as a supply-chain risk as QuitGPT backlash swells and ChatGPT 5.4 rolls out
While the Pentagon designated Anthropic a supply-chain AI risk, OpenAI is unveiling ChatGPT 5.4 and a "QuitGPT" backlash is swelling, Last Week in AI reports.
Key Facts
- Key company: Anthropic
Anthropic’s clash with the Pentagon has turned into a public relations flashpoint, with the Defense Department formally designating the startup a “supply chain risk” under 10 U.S.C. § 3252, according to the Last Week in AI newsletter. The label forces DoD contractors to certify that they are not using Claude for any defense work and triggers an immediate wind‑down of Pentagon deployments of Anthropic’s models. Anthropic CEO Dario Amodei framed the move as a violation of the statute’s “least restrictive means” requirement and said the company will challenge the designation in court, insisting the restriction should apply only to projects directly tied to DoD contracts. The dispute stems from Anthropic’s insistence on red lines against mass domestic surveillance and fully autonomous weapons, terms the DoD rejected in favor of an “all lawful purposes” clause, the newsletter notes.
At the same time, OpenAI has pressed ahead with a separate DoD agreement that places its latest ChatGPT 5.4 models in classified environments, promising safeguards against mass surveillance and autonomous weapon systems. Amodei called OpenAI’s public framing of the deal “straight‑up lies” and “safety theater,” arguing that the contract still hinges on the same “all lawful purposes” clause that Anthropic opposed. The contrast has sparked a burgeoning “QuitGPT” movement, with consumer sentiment turning against OpenAI after the military deal was announced, as reported by Last Week in AI. Social media chatter and petitions to “cancel ChatGPT” have surged, reflecting growing unease about the commercialization of powerful AI tools for defense purposes.
Despite the controversy, Anthropic’s Claude has not been entirely sidelined. Cloud providers Microsoft, Google, and Amazon Web Services have all clarified that Claude remains available for non‑defense workloads through platforms such as Microsoft 365, GitHub, AI Foundry, and Google Cloud, according to the same newsletter. The clarification comes after Defense Secretary Pete Hegseth’s remarks that “no military partner may conduct any commercial activity with Anthropic,” which initially sparked confusion about the scope of the supply‑chain designation. Moreover, Forbes highlighted that Claude’s visibility has actually risen, with the model climbing to No. 1 in the App Store following the Pentagon dispute, suggesting that the public backlash against OpenAI may be funneling users toward Anthropic’s alternative.
The market reaction has been mixed. Reuters reported a modest uptick in U.S. software stocks after the Anthropic announcement, interpreting the episode as a relief for investors who feared a broader crackdown on AI vendors. Meanwhile, the BBC noted Amazon’s continued commitment to Anthropic, reaffirming a multi‑billion‑dollar investment that positions the cloud giant as a direct competitor to Microsoft’s partnership with OpenAI. This dual‑track approach—Amazon backing Anthropic while Microsoft deepens ties with OpenAI—underscores the strategic split within the industry over how to balance lucrative defense contracts with public trust.
OpenAI’s rollout of ChatGPT 5.4 adds another layer to the debate. The upgrade promises faster inference, expanded multimodal capabilities, and tighter integration with Microsoft’s Azure confidential computing stack, as outlined in the Last Week in AI briefing. Yet the timing of the release, coming on the heels of the DoD deal, has amplified criticism that OpenAI is prioritizing government revenue over ethical safeguards. Analysts cited in the newsletter suggest the “QuitGPT” backlash could pressure OpenAI to adopt more transparent governance policies, but the company has so far defended its position by pointing to compliance with existing U.S. export controls and internal safety protocols. As the AI arms race accelerates, the Anthropic‑DoD standoff and OpenAI’s latest product launch illustrate how quickly technical advances can become flashpoints for broader societal and geopolitical concerns.
This article was created using AI technology and reviewed by the SectorHQ editorial team for accuracy and quality.