Anthropic Softens Safety Pledge Under Pentagon Pressure as IBM Sheds $30 Billion in Market Value
Anthropic has softened its AI safety pledge after U.S. Defense Secretary Pete Hegseth pressured the company for broader military access, Engadget reports.
Quick Summary
- Anthropic has softened its AI safety pledge after U.S. Defense Secretary Pete Hegseth pressured the company for broader military access, Engadget reports.
- Key company: Anthropic
- Also mentioned: IBM
Anthropic’s decision to dilute its Responsible Scaling Policy (RSP) came just hours after Defense Secretary Pete Hegseth pressed the company for “unfettered” military access to its Claude chatbot, Engadget reported. The pressure was part of a broader Pentagon campaign to secure advanced AI tools for defense applications, and it appears to have tipped the balance for Anthropic’s leadership: on Tuesday the company announced a revision of its safety pledge that removes the hard stop on training new models without pre‑guaranteed safeguards. “We felt that it wouldn't actually help anyone for us to stop training AI models,” chief science officer Jared Kaplan told TIME, underscoring the shift from a precautionary stance to a more flexible development approach.
The policy change strips away the centerpiece of Anthropic’s safety narrative: a commitment, first made in 2023, to halt model training unless risk mitigations could be assured in advance. That promise had been a cornerstone of the company’s pitch to enterprise customers and regulators, positioning Anthropic as the most safety‑conscious of the leading AI labs. According to TIME, the revision “radically” overhauls the RSP, effectively abandoning the guarantee that new AI systems would only be released after safety measures are proven. The move aligns Anthropic more closely with industry peers that have faced criticism for prioritizing speed over caution, raising questions about the durability of its “responsible AI” brand.
The timing of the policy shift also coincides with a dramatic market reaction to Anthropic’s latest technical showcase. A single blog post demonstrating Claude Code’s ability to translate legacy COBOL applications into modern Java and Python code with 98% accuracy sent IBM’s shares tumbling 13.15% in a matter of hours, wiping roughly $30 billion off the tech giant’s market cap, as reported by a developer‑focused blog. The post highlighted Claude’s capacity to map massive dependencies, document undocumented workflows, and even migrate entrenched mainframe systems directly onto cloud infrastructure. Analysts note that the demonstration undercuts IBM’s long‑standing “COBOL moat,” a competitive advantage the company has relied on for decades.
Anthropic’s $30 billion funding round in February, which lifted its valuation to $380 billion according to Reuters, now faces a credibility test. Investors who poured capital into the startup on the promise of a safety‑first ethos may have to reassess the risk profile of a company that is willing to relax its own safeguards under governmental pressure. The Pentagon’s own stance, as reported by Axios and echoed by Reuters, suggests it could sever ties with Anthropic if the firm does not meet its demands for broader AI access, adding a layer of strategic uncertainty to the startup’s future funding and partnership landscape.
Industry observers see the confluence of military pressure, policy backtracking, and a market‑shaking technical demo as a pivotal moment for AI governance. If Anthropic’s revised RSP proves insufficient to allay safety concerns, regulators could step in, potentially imposing stricter oversight on AI development pathways. Conversely, the company’s willingness to adapt its safety framework may signal a pragmatic response to the accelerating pace of AI innovation, where rigid guardrails risk rendering firms obsolete. As the sector grapples with the balance between rapid advancement and responsible deployment, Anthropic’s latest moves will likely serve as a bellwether for how other AI leaders navigate the competing demands of defense customers, investors, and public safety advocates.
This article was created using AI technology and reviewed by the SectorHQ editorial team for accuracy and quality.