
Anthropic has alleged that three Chinese artificial intelligence firms generated more than 16 million exchanges with its Claude model through roughly 24,000 fraudulent accounts in what it described as large-scale distillation attacks.
In a blog post, Anthropic said DeepSeek, Moonshot AI and MiniMax used Claude outputs to train their own models, exploiting a technique that involves refining smaller systems on the responses of more capable ones.
“Distillation is a widely used and legitimate training method,” Anthropic wrote, but it cautioned: “Distillation can also be used for illicit purposes: competitors can use it to acquire powerful capabilities from other labs in a fraction of the time, and at a fraction of the cost, that it would take to develop them independently.”
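The mechanics Anthropic describes can be sketched in miniature: a “teacher” model is queried for its outputs, and a smaller “student” is then trained to reproduce them. The toy below is purely illustrative — the linear teacher, the logistic-regression student, and the training loop are assumptions for demonstration, not the pipeline of Anthropic or any of the named firms.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "teacher": a fixed linear classifier standing in for a large model.
W_teacher = np.array([2.0, -1.5])

def teacher_probs(X):
    # Soft labels: the teacher's output probabilities, not just hard classes.
    return 1.0 / (1.0 + np.exp(-X @ W_teacher))

# Step 1: query the teacher on inputs (in the alleged campaigns, this is the
# mass of API exchanges made through fraudulent accounts).
X = rng.normal(size=(500, 2))
soft_labels = teacher_probs(X)

# Step 2: train a smaller "student" to mimic the teacher's responses,
# minimizing cross-entropy against the soft labels by gradient descent.
w_student = np.zeros(2)
lr = 0.5
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-X @ w_student))
    grad = X.T @ (p - soft_labels) / len(X)
    w_student -= lr * grad

# Fraction of inputs where the student's decision matches the teacher's.
student_probs = 1.0 / (1.0 + np.exp(-X @ w_student))
agreement = np.mean((teacher_probs(X) > 0.5) == (student_probs > 0.5))
print(agreement)
```

The point of the sketch is that the student never sees the teacher's weights or training data; access to enough input-output pairs is sufficient to copy its behavior, which is why Anthropic frames large volumes of API traffic as the attack surface.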
Anthropic said the campaigns targeted Claude’s most differentiated capabilities, including agentic reasoning, coding, tool use and computer vision, and that it identified the activity through IP address correlation, request metadata and infrastructure indicators, alongside corroboration from industry partners.
The company warned that foreign distillation efforts could pose geopolitical risks, arguing that advanced AI capabilities scraped from American models could be deployed in military, intelligence and surveillance systems.
Anthropic said it would respond by strengthening detection systems, tightening access controls and sharing threat intelligence, while calling for coordinated action among AI firms, cloud providers and policymakers to counter large-scale model scraping.