Descriptions:
Matt Wolfe breaks down Anthropic’s blog post accusing three Chinese AI companies — Deepseek, Moonshot AI, and Minimax — of running a coordinated, industrial-scale campaign to steal Claude’s capabilities through unauthorized model distillation. According to Anthropic, the operation involved approximately 24,000 fake accounts, proxy networks to evade usage restrictions, and over 16 million total exchanges with Claude models — all used to generate training data for competing systems.
The video explains how model distillation works — the student-teacher process where a smaller model learns from a larger model’s outputs and chain-of-thought reasoning — and why Anthropic frames the unauthorized version as both a terms-of-service violation and a national security concern. Per-company specifics are notable: Moonshot AI logged 3.4 million exchanges targeting agentic reasoning, tool use, and computer use; Minimax accumulated over 13 million exchanges and reportedly pivoted to Anthropic’s newest model within 24 hours of its release, redirecting nearly half their traffic mid-campaign.
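The student-teacher mechanic described above can be sketched in a few lines. This is an illustrative toy, not Anthropic's or any company's actual pipeline: the student is trained to match the teacher's temperature-softened output distribution by minimizing the KL divergence between the two (the standard distillation loss; function names here are the author's own).

```python
import math

def softmax(logits, temperature=1.0):
    """Softened probabilities; a higher temperature flattens the distribution,
    exposing more of the teacher's 'dark knowledge' about wrong answers."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions.
    Training the student to minimize this makes it mimic the teacher's
    outputs -- the core of the student-teacher distillation process."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A student whose logits already match the teacher's incurs ~zero loss;
# a divergent student incurs a positive loss it would train to reduce.
teacher = [3.0, 1.0, 0.2]
aligned = distillation_loss(teacher, [3.0, 1.0, 0.2])   # ~0.0
diverged = distillation_loss(teacher, [0.2, 1.0, 3.0])  # > 0
```

In a real distillation run this loss would be computed over millions of prompts against the teacher's logged responses, which is why the exchange volumes cited in the video (3.4M and 13M+) map directly onto training-data collection.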
Wolfe also raises a pointed counterargument: Anthropic settled a $1.5 billion copyright infringement suit with authors in September 2025 over training data it scraped without permission, and faced a separate lawsuit from Reddit in June 2025. The video presents a clear-eyed look at the legal, ethical, and geopolitical tensions now defining frontier AI development — where the line between legitimate knowledge transfer and IP theft is actively contested by the same companies on both sides of it.
📺 Source: Matt Wolfe · Published February 25, 2026
🏷️ Format: News Analysis
