Anthropic Detects Industrial-Scale Distillation Attacks by DeepSeek, Moonshot AI, MiniMax, and Others

Written by
Maren Kessler
AI News

Photo by Maxim Hopman on Unsplash

Anthropic reported that DeepSeek, Moonshot AI, MiniMax and other labs launched industrial‑scale distillation attacks against Claude, creating more than 24,000 fake accounts and conducting over 16 million exchanges to siphon the model's capabilities into their own systems, according to the company's recent report.

Quick Summary

  • Anthropic reported that DeepSeek, Moonshot AI, MiniMax and other labs launched industrial‑scale distillation attacks against Claude, creating more than 24,000 fake accounts and conducting over 16 million exchanges to siphon the model's capabilities into their own systems, according to the company's recent report.
  • Key company: Anthropic
  • Also mentioned: DeepSeek, Moonshot AI, MiniMax

Anthropic’s internal security team discovered the breach after a sudden spike in API traffic that far outpaced normal usage patterns, prompting an emergency audit that uncovered more than 24,000 newly created accounts funneling requests to Claude. The audit, detailed in Anthropic’s own report, shows the accounts generated over 16 million exchanges, effectively “siphoning” Claude’s capabilities to train rival models at DeepSeek, Moonshot AI and other labs [report].

VentureBeat confirmed the scale of the operation, noting that the fraudulent accounts were deliberately engineered to mimic legitimate developers, allowing the attackers to bypass rate limits and scrape large swaths of Claude’s output without triggering immediate alarms [VentureBeat]. The article adds that the stolen data was likely used to fine‑tune smaller, proprietary models, a practice known in the industry as “model distillation,” which can dramatically accelerate a competitor’s development cycle.
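In broad terms, model distillation means training a smaller "student" model to imitate a larger "teacher" by fitting the student to the teacher's outputs rather than to ground-truth labels. The toy sketch below is purely illustrative and has no connection to any lab's actual pipeline: a tiny linear student is fit to a stand-in teacher's soft class probabilities, the same basic mechanism that makes harvested API transcripts valuable as training data.

```python
import numpy as np

rng = np.random.default_rng(0)

def teacher_probs(x):
    # Stand-in for a large model's softmax outputs over 2 classes.
    logits = np.stack([x @ np.array([2.0, -1.0]),
                       x @ np.array([-1.5, 1.0])], axis=1)
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# "API transcripts": inputs paired with the teacher's soft labels.
X = rng.normal(size=(512, 2))
T = teacher_probs(X)

# Student: a 2x2 linear map trained with cross-entropy against the
# teacher's soft labels (classic knowledge distillation objective).
W = np.zeros((2, 2))
for _ in range(500):
    P = softmax(X @ W)
    grad = X.T @ (P - T) / len(X)  # gradient of cross-entropy w.r.t. W
    W -= 0.5 * grad

# Fraction of inputs where the student picks the teacher's top class.
agreement = (softmax(X @ W).argmax(1) == T.argmax(1)).mean()
```

Because the student trains on soft probabilities rather than hard labels, it absorbs the teacher's decision boundary from relatively few examples, which is why distillation can compress months of a competitor's development into a fine-tuning run.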

The Register framed the incident as a geopolitical flashpoint, pointing out that DeepSeek and Moonshot AI are Chinese‑backed startups that have recently received substantial venture funding. The outlet highlighted Anthropic’s warning that the attack represents “industrial‑scale copying,” a phrase that underscores how the perpetrators treated Claude’s API as a raw material pipeline rather than a service [The Register].

Tom’s Hardware echoed the technical details, emphasizing that the 16 million Claude exchanges were not random queries but targeted prompts designed to extract nuanced reasoning patterns and domain‑specific knowledge. According to the report, the attackers leveraged the harvested data to train “smaller models” that could compete in niche markets, potentially eroding Anthropic’s competitive edge in enterprise AI [Tom’s Hardware].

Anthropic has not disclosed any immediate financial impact, but the company’s leadership indicated that the breach will force a reassessment of API authentication and abuse‑prevention mechanisms. The internal report suggests that tighter identity verification, stricter usage caps and real‑time anomaly detection will be rolled out in the coming weeks to deter future large‑scale distillation attempts.
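Anthropic has not published implementation details for these mitigations. As a generic illustration of what "real-time anomaly detection" on API usage can look like, the sketch below (an assumption, not Anthropic's method) flags accounts whose hourly request volume is an extreme outlier relative to the population, using a simple z-score on log-counts:

```python
import math

def flag_anomalous_accounts(requests_per_hour, threshold=3.0):
    """Flag accounts whose hourly request count is an extreme outlier.

    requests_per_hour: dict mapping account_id -> request count.
    Uses a z-score on log-counts, a deliberately simple baseline;
    production systems would combine richer signals (prompt patterns,
    account age, shared infrastructure across accounts).
    """
    logs = {a: math.log1p(c) for a, c in requests_per_hour.items()}
    mean = sum(logs.values()) / len(logs)
    var = sum((v - mean) ** 2 for v in logs.values()) / len(logs)
    std = math.sqrt(var) or 1.0
    return {a for a, v in logs.items() if (v - mean) / std > threshold}
```

A per-account volume check like this would not, on its own, catch 24,000 accounts each kept under the radar; that is why coordinated-abuse detection typically also correlates behavior across accounts, which the fake developer profiles in this incident were engineered to defeat.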

This article was created using AI technology and reviewed by the SectorHQ editorial team for accuracy and quality.

