Description:
Alphabet’s Q4 2025 earnings revealed revenues exceeding $400 billion for the first time in company history, alongside a 2026 capital expenditure plan of $175–185 billion: roughly 50% above analyst expectations of $120 billion and nearly double the $91 billion Google spent in 2025. The stock initially dropped 7%. Nate B Jones argues the market’s instinct was wrong and explains why $185 billion may still not be enough.
The video traces how the consensus 2025 narrative (that AI infrastructure spending had decoupled from reality, as argued in Goldman Sachs research notes and Sequoia’s widely cited “$600 billion question” analysis) collapsed once enterprise agent deployments began generating inference demand at scales nobody had modeled. Anthropic’s Claude Code plugins automating legal contract review wiped 16% off Thomson Reuters in a single repricing event. OpenAI’s Frontier enterprise agent platform signed HP, Intuit, Oracle, State Farm, and Uber as production customers. Coding agents such as Cursor, Codex, and Claude Code moved from autocomplete to generating thousands of production commits annually.
Jones draws a crucial distinction between the training-focused first wave of AI infrastructure spending (2023 through mid-2025) and the current inference-focused second wave. Training is expensive but bursty and front-loaded; inference from agents runs continuously, 24 hours a day, at potentially 1,000x the token consumption of a human user. The railroad and fiber-optic analogies commonly invoked to predict an AI infrastructure bust are challenged on the grounds that AI infrastructure is vertically integrated with the intelligence product itself—making the economics fundamentally different from dumb-pipe buildouts.
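The 1,000x figure can be sanity-checked with a back-of-envelope calculation. A minimal sketch, where every parameter is an illustrative assumption (not a figure from the video): a human chats in short bursts, while an always-on agent streams tokens around the clock.

```python
# Back-of-envelope: daily token consumption, always-on agent vs. human user.
# Every constant below is an assumed, illustrative value.

HUMAN_SESSIONS_PER_DAY = 10      # assumed chat interactions per day
TOKENS_PER_SESSION = 2_000       # assumed prompt + response tokens

AGENT_TOKENS_PER_SECOND = 250    # assumed: long contexts, parallel tool calls
SECONDS_PER_DAY = 24 * 60 * 60

human_daily = HUMAN_SESSIONS_PER_DAY * TOKENS_PER_SESSION
agent_daily = AGENT_TOKENS_PER_SECOND * SECONDS_PER_DAY

print(f"human: {human_daily:,} tokens/day")
print(f"agent: {agent_daily:,} tokens/day")
print(f"ratio: {agent_daily / human_daily:,.0f}x")
```

Under these assumptions the agent consumes on the order of 1,000x more tokens per day, and unlike a training run, that load never stops, which is the core of the inference-wave argument.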
📺 Source: Nate B Jones · Published February 14, 2026
🏷️ Format: News Analysis
