Anthropic Exposes Chinese AI Firms Using 24,000 Fake Accounts for Distillation Attacks
Chinese AI firms allegedly deployed more than 24,000 fraudulent accounts to launch “distillation attacks” against Anthropic’s Claude models, according to a recent report on the misuse of AI resources.
Key Facts
- Key company: Anthropic
Anthropic’s security team uncovered a coordinated effort by three Chinese AI startups—DeepSeek, Moonshot and MiniMax—to harvest data from its Claude models, according to a detailed internal report shared with The Information. The firms allegedly created more than 24,000 fraudulent user accounts on Anthropic’s platform, each one programmed to submit a steady stream of prompts designed to elicit high‑quality model outputs. By aggregating those responses, the attackers could perform “distillation attacks,” a technique that extracts knowledge from a proprietary model and repackages it into a competing system. Anthropic says the operation was large enough to raise alarms about the scale of data theft and the potential for accelerated model development in rival labs.
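To illustrate the mechanics behind the reported technique: distillation trains a “student” model purely on the outputs of a “teacher” model, never on the teacher’s training data or weights. The sketch below is a deliberately toy example (a linear “teacher” and a least-squares “student”); the function names and setup are illustrative assumptions, not a description of the systems involved in the reported incident.

```python
import random

random.seed(0)

def teacher(x):
    # Stand-in for a proprietary model queried through an API.
    return 3.0 * x + 1.0

# Step 1: send many queries to the black-box teacher and record its responses,
# as the fraudulent accounts were allegedly used to do at scale.
queries = [random.uniform(-1, 1) for _ in range(200)]
responses = [teacher(x) for x in queries]

# Step 2: fit a student (simple linear regression) on the harvested
# query/response pairs alone.
n = len(queries)
mean_x = sum(queries) / n
mean_y = sum(responses) / n
slope = sum((x - mean_x) * (y - mean_y)
            for x, y in zip(queries, responses)) \
        / sum((x - mean_x) ** 2 for x in queries)
intercept = mean_y - slope * mean_x

# The student now reproduces the teacher's behavior without ever seeing
# its internals.
print(slope, intercept)
```

Real distillation attacks work the same way at vastly larger scale: enough prompt/response pairs let a rival train a comparable model while skipping the original data collection and much of the compute.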
Forbes corroborated the claim, noting that the Chinese startups were “mining Claude for data” in order to shortcut their own research pipelines. The article points out that the stolen outputs can be used to train new models without the need for costly compute or original data collection, effectively giving the attackers a shortcut to comparable performance. While Anthropic did not disclose the exact impact on its revenue or user base, the report suggests that the stolen data could be integrated into the Chinese firms’ own products, potentially narrowing the gap between them and Western AI leaders.
Anthropic’s response was swift: the company disabled the fraudulent accounts, tightened its API authentication mechanisms, and began a forensic review of the compromised interactions. In a statement to VentureBeat, Anthropic said the three firms “used Claude data to improve their own models,” underscoring the seriousness of the breach. The company also announced plans to roll out additional watermarking and usage‑tracking features to make future distillation attempts more detectable. These measures reflect a broader industry trend toward hardening model APIs against abuse, as other providers have recently introduced similar safeguards.
The incident arrives at a moment when Chinese AI firms are rapidly scaling, buoyed by generous state funding and a growing domestic market. Analysts cited by The Information have warned that such aggressive data‑acquisition tactics could intensify competition for talent and compute resources worldwide. If the stolen Claude outputs are indeed being incorporated into the rivals’ models, the competitive advantage gained could translate into faster product rollouts and more aggressive pricing, pressuring Western incumbents to defend both their intellectual property and market share.
Anthropic’s disclosure also raises questions about the efficacy of current legal frameworks for cross‑border AI theft. While the company is exploring potential legal recourse, the lack of clear jurisdiction over digital assets complicates enforcement. As the AI arms race accelerates, the episode underscores the need for coordinated international standards on model protection and data provenance, a point echoed in multiple industry commentaries but not yet codified into policy.
Sources
- MSN