BotBeat
POLICY & REGULATION · Anthropic · 2026-03-02

Anthropic Accuses Chinese AI Labs DeepSeek, Moonshot AI, and MiniMax of Large-Scale Model Distillation

Key Takeaways

  • Anthropic detected over 24,000 fake accounts linked to DeepSeek, Moonshot AI, and MiniMax conducting distillation attacks on Claude
  • The three Chinese labs generated more than 16 million exchanges targeting Claude's agentic reasoning, tool use, and coding capabilities
  • MiniMax conducted the largest attack with 13 million exchanges, while Moonshot AI had 3.4 million and DeepSeek had over 150,000
Source: Hacker News — https://techcrunch.com/2026/02/23/anthropic-accuses-chinese-ai-labs-of-mining-claude-as-us-debates-ai-chip-exports/

Summary

Anthropic has publicly accused three prominent Chinese AI companies—DeepSeek, Moonshot AI, and MiniMax—of creating over 24,000 fake accounts to conduct what it calls distillation attacks on its Claude AI model. According to Anthropic, these labs generated more than 16 million exchanges with Claude, specifically targeting its most advanced capabilities including agentic reasoning, tool use, and coding. The alleged objective was to use a technique known as distillation to train and improve their own AI models by essentially copying Claude's responses and behaviors.
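The distillation workflow described above can be sketched in a few lines. This is a minimal, hypothetical illustration only: `teacher_model` is a stand-in function invented here, not a real API, and a real operation at the alleged scale would instead drive millions of API calls across many accounts and then fine-tune a student model on the collected transcripts.

```python
def teacher_model(prompt: str) -> str:
    """Stand-in for a proprietary "teacher" model's API.

    In the scenario described in the article, this would be a call to a
    commercial model endpoint; here it just returns a canned reply.
    """
    return f"teacher answer to: {prompt}"


def collect_distillation_data(prompts: list[str]) -> list[dict[str, str]]:
    """Query the teacher and record (prompt, response) pairs.

    The resulting dataset is what a "student" model would later be
    fine-tuned on -- the core of knowledge distillation via API access.
    """
    return [{"prompt": p, "response": teacher_model(p)} for p in prompts]


# Two example prompts targeting the kinds of capabilities the article
# says were probed (coding, agentic tool use).
dataset = collect_distillation_data([
    "Write a Python function to parse JSON.",
    "Plan a multi-step tool-use workflow.",
])
print(len(dataset))  # one record per prompt
```

The attack's distinguishing feature is not this loop itself, which is trivial, but its volume: millions of targeted exchanges harvested through fake accounts to evade rate limits and terms-of-service enforcement.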

The scale and focus of the attacks varied by company. DeepSeek allegedly conducted over 150,000 exchanges aimed at improving foundational logic and alignment, particularly around generating censorship-safe alternatives to sensitive queries. Moonshot AI, which recently launched the open-source Kimi K2.5 model and a coding agent, was linked to 3.4 million exchanges targeting agentic reasoning, coding, data analysis, and computer vision. MiniMax accounted for the largest volume with 13 million exchanges focused on agentic coding and tool orchestration, with Anthropic observing MiniMax redirecting nearly half its traffic to the latest Claude model upon its release.

The accusations arrive at a politically sensitive moment as the United States debates the strictness of AI chip export controls to China. Last month, the Trump administration formally permitted U.S. companies like Nvidia to export advanced AI chips such as the H200 to China, a move critics argue could accelerate China's AI capabilities. Anthropic contends that the scale of distillation observed would require access to advanced chips, reinforcing the rationale for export controls. The company is calling for a coordinated industry-wide response involving AI companies, cloud providers, and policymakers to combat such attacks, while pledging to invest in stronger defenses against distillation.

  • The allegations coincide with U.S. debates over AI chip export controls, with Anthropic arguing the attacks demonstrate why restrictions are necessary
  • Anthropic is calling for coordinated action across the AI industry, cloud providers, and policymakers to defend against model distillation
Tags: Large Language Models (LLMs) · Cybersecurity · Regulation & Policy · Ethics & Bias · AI Safety & Alignment
