Description:
Anthropic publicly named three Chinese AI laboratories—DeepSeek, Moonshot AI, and Minimax—accusing them of conducting industrial-scale distillation attacks against Claude models. According to Anthropic’s blog post, the three labs collectively created over 24,000 fraudulent API accounts and generated more than 16 million exchanges with Claude to extract its capabilities and chain-of-thought reasoning for use in training their own models. DeepSeek was attributed roughly 150,000 exchanges targeting reasoning behavior; Moonshot AI accumulated 3.4 million exchanges focused on agentic tool use and computer vision; and Minimax, the largest operation, conducted 13 million exchanges centered on agent coding and orchestration. Anthropic says it detected the campaign while it was still active and tracked its evolution in real time.
Matthew Berman explains how legitimate model distillation differs from what Anthropic alleges: rather than collecting and curating training data independently, the labs allegedly bypassed that process by querying Claude at scale for high-quality question-answer pairs and explicit chain-of-thought outputs. Anthropic frames the practice as a national security risk, arguing that illicitly distilled models strip away safety guardrails and could be directed toward military, intelligence, or cyberattack applications—and that it undermines GPU export controls designed to preserve the United States’ lead in AI development.
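To make the mechanics concrete, the pattern Berman describes—querying a stronger "teacher" model at scale to collect question–answer pairs plus explicit reasoning traces, then using those as training data for a "student" model—can be sketched as below. This is an illustrative outline only; every function and field name here is hypothetical, no real model API is called, and the stub stands in for whatever teacher model a distillation pipeline would query.

```python
# Hypothetical sketch of the data-collection step in model distillation.
# No real API is used; query_teacher is a stub standing in for a call
# to a large "teacher" model that returns an answer plus its
# chain-of-thought reasoning trace.

def query_teacher(prompt: str) -> dict:
    """Stand-in for a teacher-model API call.

    A real pipeline would send `prompt` to the teacher model and parse
    its response; this stub returns canned text so the sketch runs.
    """
    return {
        "prompt": prompt,
        "chain_of_thought": f"Reasoning trace for: {prompt}",
        "answer": f"Answer to: {prompt}",
    }

def build_distillation_dataset(prompts: list[str]) -> list[dict]:
    """Collect (prompt, reasoning, answer) triples.

    In distillation, these triples become supervised fine-tuning data
    for a smaller student model, replacing independently curated data.
    """
    return [query_teacher(p) for p in prompts]

if __name__ == "__main__":
    dataset = build_distillation_dataset(
        ["What is 2 + 2?", "Explain gradient descent."]
    )
    for row in dataset:
        print(row["prompt"], "->", row["answer"])
```

The point of the sketch is the asymmetry Berman highlights: the expensive part of training (curating high-quality data with reasoning traces) is outsourced to the teacher model, which is why providers meter and police API access to begin with.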
The video gives substantial attention to the backlash Anthropic received. Community notes on the original post cited Anthropic’s own $1.5 billion settlement for using pirated books and a $3 billion copyright lawsuit over music, leading many commentators—including Elon Musk—to call the accusation hypocritical. Berman presents both sides, leaving viewers to weigh the legitimacy of Anthropic’s complaint against its own data provenance history.
📺 Source: Matthew Berman · Published February 25, 2026
🏷️ Format: News Analysis
