Anthropic Accuses DeepSeek, Moonshot and MiniMax of Stealing Claude Data via 16 Million Queries
Quick Summary
- Anthropic says Chinese labs DeepSeek, Moonshot and MiniMax launched distillation attacks, using over 24,000 fake accounts to submit 16 million queries that stole Claude data, The Decoder reports.
- Key company: Anthropic
- Also mentioned: DeepSeek, Moonshot AI, MiniMax
Anthropic disclosed that three Chinese AI labs (DeepSeek, Moonshot AI, and MiniMax) conducted coordinated “distillation attacks” on its Claude models, firing more than 16 million queries through over 24,000 fabricated accounts, according to a statement posted by the company and reported by The Decoder. The campaigns targeted Claude’s reasoning chains, programming outputs, and tool‑use capabilities, extracting reward‑model data for reinforcement learning as well as “censorship‑compliant” answers on politically sensitive topics. DeepSeek alone submitted over 150,000 requests focused on Claude’s step‑by‑step reasoning and on generating safe alternatives to controversial questions. MiniMax generated the bulk of the traffic, roughly 13 million queries, and pivoted to a newly released Claude version within 24 hours, redirecting half of its load to the updated model, The Decoder notes.
Anthropic’s analysis indicates that the labs used proxy services to disguise the origin of their traffic and bypass restrictions on accessing Claude from China, allowing the fake accounts to masquerade as legitimate users. Moonshot AI contributed more than 3.4 million queries, concentrating on agent‑based reasoning, computer‑vision tasks, and data‑analysis prompts designed to reconstruct Claude’s “thought processes,” the report adds. By harvesting these outputs, the labs could train smaller proprietary models that mimic Claude’s performance without inheriting its safety layers, a risk Anthropic warns could allow “unprotected capabilities” to end up in military, intelligence, or surveillance systems, as highlighted in The Verge’s coverage.
The scale of the operation mirrors similar allegations from OpenAI and Google, which have also reported illicit data‑mining attempts by Chinese entities, according to The Decoder. Anthropic argues that while model distillation is a legitimate research technique, the industrial‑scale, fraudulent nature of these campaigns constitutes theft of intellectual property and a breach of its usage policies. The company is urging policymakers and industry peers to develop a coordinated response, emphasizing that illicitly distilled models are “unlikely to carry over existing safeguards,” a point echoed by The Verge’s Emma Roth.
In response, Anthropic has begun tightening access controls for Claude, deploying stricter rate limits and enhanced authentication checks to curb automated abuse. The firm also plans to pursue legal avenues where feasible, though it acknowledges the difficulty of enforcing U.S. IP rights against actors operating behind Chinese firewalls. The company’s leadership stresses that the attacks not only jeopardize its competitive edge but also threaten broader AI safety, given that stolen capabilities could be weaponized without the safety mitigations built into Claude, as detailed in the statement.
The revelations arrive as the global AI race intensifies, with Chinese labs pushing rapidly to close the gap on frontier models. VentureBeat recently highlighted Moonshot’s Kimi K2 and DeepSeek’s V2.5 as emerging open‑source contenders, underscoring the strategic value of the data these campaigns sought to appropriate. Anthropic’s claim, reported by multiple outlets, marks a stark escalation in the covert competition over frontier AI capabilities and underscores the urgent need for cross‑border governance mechanisms to protect proprietary models and the safety frameworks that accompany them.
This article was created using AI technology and reviewed by the SectorHQ editorial team for accuracy and quality.