Anthropic Reports China Attempted to Steal Claude's AI Capabilities
Three major Chinese AI firms—MiniMax, DeepSeek and Moonshot—are accused of attempting to steal Anthropic’s Claude model, Torbenkopp reports, sparking a fierce debate over global AI espionage.
Quick Summary
- Three major Chinese AI firms—MiniMax, DeepSeek and Moonshot—are accused of attempting to steal Anthropic’s Claude model, Torbenkopp reports, sparking a fierce debate over global AI espionage.
- Key company: Anthropic
Anthropic’s technical post‑mortem reveals that the three Chinese firms deployed a coordinated “hydra‑cluster” of roughly 24,000 fabricated user accounts, generating more than 16 million API calls to Claude over a six‑month window. The volume and distribution of requests were deliberately engineered to evade Anthropic’s rate‑limiting and anomaly‑detection systems, allowing the attackers to harvest Claude’s most resource‑intensive capabilities—agentic reasoning, tool use, and code generation—through a process known as model distillation. Distillation itself is a standard practice when a lab trains a smaller, cheaper model to mimic a larger one, but Anthropic argues that the Chinese operatives used it to shortcut the multi‑year, multi‑billion‑dollar effort required to develop those functions in‑house (Torbenkopp).
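The distillation concept described above can be sketched in miniature: a "student" is fit purely to a "teacher's" outputs, never to its internals. The toy models and function names below are invented for illustration and bear no relation to Claude or any real training pipeline.

```python
# Toy illustration of model distillation: query a "teacher" repeatedly,
# then fit a "student" to reproduce its outputs. All names and models
# here are hypothetical stand-ins, not any real API or lab's pipeline.

def teacher(x: float) -> float:
    """Stand-in for a large, expensive model."""
    return 3.0 * x + 1.0

def harvest(n: int) -> list[tuple[float, float]]:
    """The 'API harvesting' step: collect (input, output) pairs."""
    return [(float(i), teacher(float(i))) for i in range(n)]

def fit_student(data: list[tuple[float, float]]) -> tuple[float, float]:
    """Fit a tiny linear student y = a*x + b by least squares."""
    n = len(data)
    sx = sum(x for x, _ in data)
    sy = sum(y for _, y in data)
    sxx = sum(x * x for x, _ in data)
    sxy = sum(x * y for x, y in data)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

data = harvest(100)
a, b = fit_student(data)
# The student now mimics the teacher's behavior without ever seeing
# how the teacher was built -- the shortcut Anthropic alleges was taken.
```

The point of the sketch is the asymmetry: building the teacher is the expensive step, while fitting the student requires only enough queries, which is why the reported 16 million API calls matter.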
The report details how the perpetrators leveraged “hydra‑cluster architectures,” a term borrowed from cyber‑crime jargon to describe a sprawling network of accounts that route traffic through multiple proxies and cloud regions. By fragmenting the request stream, the attackers masked the true origin of the queries and avoided triggering Anthropic’s automated detection heuristics, which typically flag sudden spikes from single IP blocks. Anthropic says it identified the offending infrastructure through a combination of IP‑address correlation, request‑metadata analysis, and corroborating evidence from partner cloud providers (Torbenkopp). The firm claims a high degree of confidence that the clusters can be traced back to MiniMax, DeepSeek, and Moonshot, although it has not yet taken formal legal action.
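The evasion-and-detection dynamic described here can be sketched with a few lines of Python: per-IP rate limits miss traffic spread thinly across many addresses, while correlating a shared metadata fingerprint across accounts exposes the cluster. The IPs, threshold, and "fingerprint" field are all hypothetical.

```python
# Sketch of why fragmenting traffic defeats naive per-IP rate limits,
# and how cross-account metadata correlation can still reveal a cluster.
# All IPs, thresholds, and fingerprints below are invented.
from collections import Counter

requests = (
    # (source_ip, metadata_fingerprint) -- e.g. a hash of headers + timing
    [(f"10.0.0.{i}", "fp-cluster-A") for i in range(200)]  # 1 req per IP
    + [("203.0.113.9", f"fp-{i}") for i in range(20)]      # ordinary user
)

PER_IP_LIMIT = 50

# Naive heuristic: flag any single IP exceeding the limit.
per_ip = Counter(ip for ip, _ in requests)
flagged_by_rate = {ip for ip, n in per_ip.items() if n > PER_IP_LIMIT}
# The distributed cluster stays under the per-IP limit, so nothing fires.

# Correlation: group by shared request-metadata fingerprint instead.
per_fp = Counter(fp for _, fp in requests)
flagged_by_correlation = {fp for fp, n in per_fp.items() if n > PER_IP_LIMIT}
# The 200 one-request accounts collapse into a single correlated cluster.
```

The design point is that the attacker controls the source addresses but not every incidental similarity between requests, which is why metadata analysis and cloud-provider corroboration feature in Anthropic's account.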
In response, Anthropic has rolled out a suite of defensive upgrades aimed at “distillation‑attack” signatures. New classifiers scan API traffic for patterns consistent with large‑scale output harvesting, while behavioral fingerprinting tags accounts that exhibit the rapid, repetitive querying typical of the reported campaign. Anthropic is also sharing the technical indicators—such as specific request headers and timing anomalies—with other AI labs and major cloud operators to build a cross‑industry early‑warning system (Torbenkopp). Additionally, the company has tightened verification procedures for academic and research accounts, a segment that had previously enjoyed relatively lax onboarding to encourage open‑science collaborations.
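The behavioral-fingerprinting idea reported above, flagging the rapid, repetitive querying typical of harvesting, can be illustrated with a minimal timing heuristic. The thresholds and sample data are invented; real classifiers would use far richer features.

```python
# Minimal sketch of a behavioral-fingerprinting heuristic: flag accounts
# whose request timing is both fast and machine-regular. Thresholds and
# sample timestamps are hypothetical, for illustration only.
from statistics import mean, pstdev

def is_harvesting_pattern(timestamps: list[float],
                          min_rate: float = 1.0,
                          max_jitter: float = 0.05) -> bool:
    """Heuristic: many requests per second with near-constant spacing."""
    if len(timestamps) < 10:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    rate = 1.0 / mean(gaps)      # requests per second
    jitter = pstdev(gaps)        # regularity of the spacing
    return rate >= min_rate and jitter <= max_jitter

bot = [i * 0.2 for i in range(100)]            # 5 req/s, clockwork spacing
human = [0.0, 3.1, 9.4, 15.0, 22.8, 30.2,
         41.7, 55.3, 61.0, 70.9, 84.4]         # sporadic, bursty
```

A single-feature rule like this is trivially evadable by adding random delays, which is presumably why Anthropic pairs it with traffic classifiers and cross-industry indicator sharing rather than relying on any one signal.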
The episode arrives amid heightened scrutiny of Anthropic’s relationship with the U.S. defense establishment. Bloomberg notes that the company is currently negotiating a contentious contract with the Pentagon, a deal that hinges on the firm’s ability to guarantee the integrity of its models against hostile exploitation (Bloomberg). If the Chinese distillation effort proves successful, it could erode confidence in Anthropic’s security posture and give U.S. policymakers a lever to press for stricter compliance clauses or even reconsider the partnership altogether.
Industry observers see the incident as a bellwether for a broader “AI espionage” arms race. VentureBeat reports that the scale of the operation—tens of thousands of fake accounts and millions of interactions—suggests state‑backed resources rather than a rogue hackathon (VentureBeat). The targeted capabilities—agentic planning and tool‑use—are precisely the functions that give large language models a competitive edge in enterprise automation, software development, and autonomous decision‑making. By siphoning these abilities, the Chinese firms could accelerate their own model roadmaps without bearing the massive compute costs that companies like Anthropic incur to train next‑generation systems.
Anthropic’s CEO has warned that if such “industrial‑scale” theft becomes commonplace, the AI ecosystem could see a fragmentation of innovation, with rival labs racing to protect their intellectual property rather than collaborating on safety standards. The company’s latest defensive measures, combined with its willingness to share threat intelligence, may set a precedent for collective security in a field where the line between legitimate research and competitive sabotage is increasingly blurred. For now, Anthropic’s findings underscore that the battle for AI supremacy is being fought not only in chip fabs and data centers, but also in the shadowy realm of API abuse and model distillation.
This article was created using AI technology and reviewed by the SectorHQ editorial team for accuracy and quality.