Anthropic Reveals Its Identity, Urging Public Trust in Its Claims
While analysts expected Anthropic to stay low‑key after a Pentagon ultimatum, the company instead rewrote its safety policy and unveiled enterprise plugins that sent partner software stocks sharply higher, a bold move meant to cement public trust.
Quick Summary
- While analysts expected Anthropic to stay low‑key after a Pentagon ultimatum, the company instead rewrote its safety policy and unveiled enterprise plugins that sent partner software stocks sharply higher, a bold move meant to cement public trust.
- Key company: Anthropic
Anthropic’s sudden policy overhaul and product rollout on Feb. 24 appear to be a coordinated response to a direct threat from the Pentagon, according to a detailed post by Zecheng on lizecheng.net. The post reports that Defense Secretary Pete Hegseth met with Anthropic co‑founder Dario Amodei and warned that the company must grant the Department of Defense unrestricted access to its Claude model by the following Friday or risk being designated a “supply‑chain risk” and compelled into production under the Defense Production Act, a Korean‑War‑era statute that allows the government to conscript private firms for national‑security work. Claude is already the only AI model running on classified U.S. defense systems, reportedly deployed via Palantir in the operation that captured Venezuelan President Nicolás Maduro. The Pentagon’s demand would expand that footprint into mass surveillance, autonomous weapons and other classified applications that Anthropic’s existing usage policy explicitly bans. Faced with a binary choice, to comply and abandon its “responsible AI” branding or to resist and be compelled by law, Anthropic chose to shift its public stance.
The same day, the company released “Responsible Scaling Policy 3.0,” which replaces the previous hard‑line rule barring the training of more powerful models until safety measures were verified. The new policy, as Zecheng explains, mandates a delay only if two conditions hold simultaneously: Anthropic must be leading the AI race, and the catastrophic risk must be deemed significant. Chief Scientist Jared Kaplan’s rationale (“If competitors are blazing ahead, pausing wouldn’t help—it would result in a less safe world”) effectively turns the policy into a permission structure: with OpenAI and Google in the field, Anthropic can satisfy it simply by arguing that it is not the clear leader. The change came just two weeks after Anthropic closed a $30 billion financing round that valued the firm at roughly $380 billion and lifted its annualized revenue to $14 billion, with Claude‑Code alone contributing $2.5 billion, according to the same source. The timing suggests the capital influx gave the company the leeway to relax its guardrails while still maintaining investor confidence.
Concurrently, Anthropic launched a suite of enterprise plugins under the “Claude Cowork” brand, integrating the assistant with Google Workspace, Slack, DocuSign, FactSet, LegalZoom, SimilarWeb, WordPress, S&P Global, MSCI and other platforms. The rollout enables multi‑step workflows across Excel and PowerPoint with context passing between apps, and private plugin marketplaces for developers. Market reaction was immediate: Salesforce shares rose 4%, Thomson Reuters 11%, FactSet 6%, Intapp 7.1%, while the S&P 500 gained 0.77% to 6,890.07 and the Nasdaq climbed 1.04% to 22,863.68, as reported by Zecheng. The post also references a prior “legal plugin” release that triggered an $830 billion sell‑off in global software stocks, underscoring how Anthropic’s product moves can swing market sentiment in both directions.
Analysts observing the pattern argue that Anthropic is not merely pivoting away from safety but graduating from it. By coupling a massive capital raise with a softened safety policy and an aggressive enterprise expansion, the company appears to be leveraging its “responsible AI” reputation as a credential rather than an operating principle. Zecheng’s assessment—that the brand of caution was “a stage, not an identity”—captures this shift. The strategy mirrors a broader industry trend where firms use safety narratives to secure funding, then relax constraints once the capital is in hand, a dynamic also noted in recent coverage by Ars Technica and VentureBeat, which have highlighted Anthropic’s internal research on Claude’s moral code and its broader implications for AI governance.
The implications for the AI ecosystem are significant. If Anthropic proceeds with unrestricted Claude deployments for the Pentagon, it could set a precedent for other AI firms to prioritize lucrative government contracts over self‑imposed safety limits. Moreover, the new Responsible Scaling Policy effectively ties Anthropic’s development pace to competitive dynamics rather than absolute risk assessments, a move that could accelerate the race toward more capable—and potentially hazardous—models. Investors appear comfortable with this gamble, as evidenced by the $30 billion raise and the market’s positive reaction to the enterprise plugins, but regulators and civil‑society groups may push back, especially given the Pentagon’s explicit desire to expand Claude’s role in classified and combat‑related applications. The coming weeks will likely reveal whether Anthropic’s recalibrated identity can sustain both its commercial ambitions and the public trust it once cultivated.
Sources
No primary source found (coverage-based)
- Dev.to AI Tag
This article was created using AI technology and reviewed by the SectorHQ editorial team for accuracy and quality.