Description:
Anthropic CEO Dario Amodei sits down with Dwarkesh Patel in February 2026 for an in-depth interview covering the current state of AI scaling, Anthropic’s economics, and what Amodei describes as the public’s surprising failure to recognize how close AI is to a major inflection point.
Amodei revisits his "Big Blob of Compute Hypothesis" (a framework he first articulated in 2017, before GPT-1), which holds that raw compute, data quantity and distribution, training duration, and the right objective function matter far more than clever techniques. He confirms that pre-training scaling laws are still delivering gains and, critically, that Anthropic is now observing the same log-linear improvement dynamics in RL training that it previously saw in pre-training, not just on math benchmarks like AIME but across a wide variety of RL tasks. This is one of the most specific public confirmations of broad RL scaling from any frontier-lab CEO.
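For context on the log-linear claim (generic notation for illustration, not a formula quoted in the interview), the pattern being described is a roughly constant capability gain per multiplicative increase in compute:

$$\mathrm{Score}(C) \;\approx\; \alpha + \beta \log C, \qquad \beta > 0,$$

so each doubling of RL compute $C$ adds roughly $\beta \log 2$ to the score, mirroring how pre-training loss improves by a near-constant increment per order of magnitude of compute.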
On economics, Amodei sketches a model in which roughly half of a lab's compute serves inference (at gross margins above 50%) and half serves training. He projects the AI industry reaching multiple trillions of dollars in annual revenue by 2028–2029, with individual top-tier labs potentially spending $100 billion per year on compute by then. He explains why profitability by 2028 is consistent with continued aggressive reinvestment: in his framing, profitability follows from forecasting demand accurately, not from a strategic choice to slow growth (a toy version of that arithmetic is sketched below). The conversation also engages Rich Sutton's "Bitter Lesson" and the tension between general-purpose scaling and the need for bespoke RL environments.
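A back-of-the-envelope sketch of that structure, using only the 50/50 compute split, the "above 50%" margin, and the $100 billion spend level mentioned above; the exact 55% margin is an illustrative assumption, not a figure from the interview:

```python
# Toy P&L for a frontier lab, following the structure Amodei describes:
# ~half of compute serves inference at gross margins above 50%, ~half
# serves training. The 55% margin is an illustrative assumption; the
# $100B/year spend is the level he floats for top-tier labs.

total_compute_spend = 100e9                   # $/year on compute
inference_cost = 0.5 * total_compute_spend    # half of compute serves inference
training_cost = 0.5 * total_compute_spend     # half serves training

gross_margin = 0.55                           # "above 50%" -- assumed 55% here
# With margin m, revenue R satisfies (R - cost) / R = m, so R = cost / (1 - m).
inference_revenue = inference_cost / (1 - gross_margin)

gross_profit = inference_revenue - inference_cost
net_after_training = gross_profit - training_cost

print(f"inference revenue:    ${inference_revenue / 1e9:,.1f}B")
print(f"gross profit:         ${gross_profit / 1e9:,.1f}B")
print(f"after training spend: ${net_after_training / 1e9:,.1f}B")
```

Under these placeholder numbers the inference business (about $111B of revenue, $61B of gross profit) more than covers the $50B training budget, which is the sense in which profitability can coexist with aggressive reinvestment; overestimate demand and the same structure produces a loss, which is why Amodei frames profitability as a forecasting problem.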
📺 Source: Dwarkesh Patel · Published February 13, 2026
🏷️ Format: Interview
