Anthropic CEO Dario Amodei slams OpenAI’s military‑deal claims as “straight‑up lies”
TechCrunch reports Anthropic CEO Dario Amodei called OpenAI’s DoD partnership “straight‑up lies,” accusing the firm of “safety theater” and saying it pursued the deal only to placate employees, not to prevent abuse.
Key Facts
- Key company: Anthropic
- Also mentioned: OpenAI
Anthropic’s internal memo, which The Information obtained and TechCrunch relayed, paints OpenAI’s new Department of Defense (DoD) contract as a public‑relations stunt rather than a genuine safety measure. CEO Dario Amodei wrote that Sam Altman “presented himself as a peacemaker and dealmaker” while the company was actually “placat[ing] employees” — a phrase Amodei used to suggest the deal was meant to soothe internal dissent after Anthropic’s own negotiations with the Pentagon fell apart [TechCrunch]. Amodei’s criticism hinges on the stark contrast between the two firms’ red‑line demands: Anthropic insisted the DoD guarantee that its AI would never be used for domestic mass surveillance or autonomous weapons, a condition the agency refused, prompting Anthropic to walk away despite an existing $200 million contract [TechCrunch].
OpenAI, by contrast, framed its agreement as a “peaceful” solution that still respects the same safeguards Anthropic championed. In a blog post accompanying the deal, Altman’s team asserted that the contract “allows use of its AI systems for all lawful purposes,” while also claiming that the Department of War (the DoD’s moniker under the Trump administration) explicitly recognized that mass domestic surveillance is illegal and would not be pursued [TechCrunch]. The company further insisted that the contract makes clear the “lawful use” clause does not cover prohibited activities, positioning itself as the only AI firm willing to work with the DoD without compromising on safety [TechCrunch].
The dispute has spilled into the public sphere, with Amodei citing a 295 percent surge in ChatGPT uninstall rates following the announcement of OpenAI’s defense partnership [TechCrunch]. He framed the backlash as evidence that “the general public or the media … see OpenAI’s deal with the DoW as sketchy or suspicious, and see us as the heroes,” adding that the criticism is “working on some Twitter morons, which doesn’t matter” while expressing concern about its impact on OpenAI staff morale [TechCrunch]. The memo also underscores Anthropic’s market momentum: the startup recently climbed to the #2 spot in the App Store, a point Amodei highlighted to contrast his company’s growth with OpenAI’s reputational risk [TechCrunch].
Industry observers have noted that the legal language surrounding “lawful use” is fluid, with critics warning that today’s illegal applications could become permissible under future regulatory shifts [TechCrunch]. Nonetheless, Anthropic’s stance reflects a broader trend among AI firms to carve out ethical boundaries in government contracts, a move that could shape future procurement standards. If OpenAI’s “safety theater” claim proves hollow, the company may face heightened scrutiny from both regulators and its own workforce, especially as employees increasingly demand transparent safeguards against misuse [TechCrunch].
The clash also spotlights the strategic calculus of AI leaders. While OpenAI appears to be leveraging the DoD deal to reassure investors and demonstrate its ability to operate at the highest levels of national security, Anthropic is betting on a reputation for principled restraint to attract enterprise customers wary of government entanglements. Both approaches carry risk: OpenAI risks alienating a user base that is already uninstalling its flagship product, while Anthropic may forgo lucrative defense revenue by refusing to compromise on its red lines [TechCrunch]. The outcome of this rivalry could set a precedent for how AI companies negotiate the thin line between commercial opportunity and ethical responsibility.
Sources
This article was created using AI technology and reviewed by the SectorHQ editorial team for accuracy and quality.