Description:
DeepSeek V4 Pro arrives as the largest open-source AI model ever released: 1.6 trillion total parameters with approximately 47 billion activated per token via mixture-of-experts routing, a 1-million-token context window, and a suite of architectural innovations, including compressed sparse attention, MHA residual modifications, and multi-tier on-policy distillation borrowed from Kimi 2.5. David Ondrej breaks down what makes the model significant both technically and geopolitically.
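
The parameter-activation pattern described above is standard top-k mixture-of-experts routing: a small router scores every expert per token and only the k highest-scoring experts actually run. Below is a minimal sketch of such a layer in PyTorch; the dimensions, expert count, and top-k value are illustrative placeholders, not DeepSeek V4 Pro's actual configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Minimal top-k mixture-of-experts layer (illustrative only).

    All sizes below are placeholders; DeepSeek V4 Pro's real expert
    count, hidden sizes, and routing scheme are not reproduced here.
    """

    def __init__(self, d_model=1024, d_ff=4096, n_experts=64, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts)  # per-token expert scores
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):  # x: (tokens, d_model)
        scores = self.router(x)                         # (tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)  # keep top-k experts
        weights = F.softmax(weights, dim=-1)            # normalize over chosen k
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e in idx[:, k].unique().tolist():
                mask = idx[:, k] == e                   # tokens routed to expert e
                out[mask] += weights[mask, k:k+1] * self.experts[e](x[mask])
        return out
```

Only top_k of the n_experts MLPs execute for each token, which is how a 1.6-trillion-parameter model can activate only about 47 billion parameters per forward pass.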
On benchmarks, DeepSeek V4 Pro matches or surpasses GPT-5.4 and Claude Opus 4.6 on several major evaluations, including SimpleQA, LiveCodeBench, and Codeforces, while trailing on long-context tasks. The cost differential is substantial: DeepSeek runs at a fraction of the price of comparable OpenAI or Anthropic models, making the performance-per-dollar case compelling for high-volume agentic coding. Ondrej sets this against the geopolitical backdrop: the model was trained despite U.S. export controls restricting China's access to modern GPUs, ASML EUV lithography machines, and advanced chip supply chains, a constraint that makes its benchmark performance all the more striking.
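
To make the performance-per-dollar claim concrete, a back-of-the-envelope comparison helps. The per-million-token prices below are hypothetical placeholders chosen only to illustrate the shape of the calculation; they are not rates quoted in the video.

```python
def monthly_cost(tokens_in_m, tokens_out_m, price_in, price_out):
    """Monthly cost in USD for a given volume of agentic coding traffic.

    tokens_in_m / tokens_out_m: millions of input/output tokens per month.
    price_in / price_out: USD per million tokens (placeholders below).
    """
    return tokens_in_m * price_in + tokens_out_m * price_out

# Hypothetical rates for illustration only, not figures from the video:
deepseek = monthly_cost(500, 100, price_in=0.30, price_out=1.20)   # $270
frontier = monthly_cost(500, 100, price_in=5.00, price_out=15.00)  # $4,000
print(f"DeepSeek: ${deepseek:,.2f} vs frontier: ${frontier:,.2f} "
      f"({frontier / deepseek:.0f}x)")
```

At high agentic-coding volume, even a modest per-token price gap compounds into an order-of-magnitude monthly difference, which is the core of the performance-per-dollar argument.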
For deployment, Ondrej demonstrates Open Code (opencode.ai) as the recommended agentic coding interface, since DeepSeek V4 Pro is not yet natively available in Claude Code or Codex. He walks through the Open Code Go subscription ($5/month), model configuration, and running four coding builds in parallel across terminals at near-negligible cost. The video concludes with live build demos illustrating the model's long-horizon agentic coding capabilities.
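
For a rough sense of what "four parallel builds" looks like outside the Open Code TUI, the sketch below fans four coding tasks out concurrently against an OpenAI-compatible endpoint. The base URL, model identifier, and task list are assumptions for illustration; the video drives this workflow through Open Code terminals, not raw API calls.

```python
import os
from concurrent.futures import ThreadPoolExecutor

from openai import OpenAI  # pip install openai

# Assumed OpenAI-compatible endpoint; the actual DeepSeek V4 Pro
# model identifier may differ from this placeholder.
client = OpenAI(
    base_url="https://api.deepseek.com",
    api_key=os.environ["DEEPSEEK_API_KEY"],
)

def build(task: str) -> str:
    """Run one coding task as a single chat completion."""
    resp = client.chat.completions.create(
        model="deepseek-v4-pro",  # hypothetical model name
        messages=[{"role": "user", "content": task}],
    )
    return resp.choices[0].message.content

tasks = [  # stand-ins for the four builds run in the video
    "Scaffold a FastAPI todo service with tests",
    "Write a CLI that batch-renames files by regex",
    "Build a minimal markdown-to-HTML converter",
    "Create a websocket chat server with rooms",
]

# Four workers, mirroring the four parallel terminals in the demo.
with ThreadPoolExecutor(max_workers=4) as pool:
    for task, result in zip(tasks, pool.map(build, tasks)):
        print(f"--- {task} ---\n{result[:200]}\n")
```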
📺 Source: David Ondrej · Published April 24, 2026
🏷️ Format: Review
