Description:
Nate B Jones presents a systematic analysis of what he frames as a structural crisis in AI compute infrastructure: global enterprise AI consumption is growing at roughly 10x annually, driven by heavy per-worker usage and the explosive proliferation of agentic systems, while physical supply constraints are unlikely to ease before 2028.
The video walks through compounding bottlenecks: TSMC's near-monopoly on advanced AI chip fabrication, with 3-4 year lead times for new capacity; Nvidia's roughly 80% market share, with H100 and Blackwell GPUs sold out and 6+ month lead times for large orders; and the strategic hoarding behavior of the major hyperscalers—Google, Microsoft, Amazon, and Meta—who have locked up compute allocations through multi-year purchase agreements worth hundreds of billions of dollars. TrendForce projections cited in the video suggest memory costs alone could rise 40-60% in the first half of 2026, with effective inference costs potentially doubling or tripling within 18 months.
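The "doubling or tripling" projection is easy to sanity-check with back-of-envelope compounding. The sketch below is illustrative only: the memory cost share and the rise in the rest of the cost stack are assumptions for the sake of the example, not figures from the video.

```python
def effective_cost_multiplier(memory_rise: float, memory_share: float,
                              other_rise: float) -> float:
    """Weighted blend of a memory price increase with a rise in the
    rest of the inference cost stack (compute, power, networking)."""
    return memory_share * (1 + memory_rise) + (1 - memory_share) * (1 + other_rise)

# Hypothetical scenario: memory is 30% of inference cost, memory prices
# rise 50% per half-year (midpoint of the cited 40-60% range for H1 2026),
# and scarcity lifts the rest of the stack 25% per half-year.
half_years = 3  # 18 months
m = 1.0
for _ in range(half_years):
    m *= effective_cost_multiplier(0.50, 0.30, 0.25)

print(f"Effective cost multiplier over 18 months: {m:.2f}x")
```

Under these assumed inputs the compounded multiplier lands between 2x and 3x, consistent with the range the video cites; different shares or rates would shift the result.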
A key structural argument: AWS, Azure, and Google Cloud are not neutral infrastructure vendors—they are AI product companies that compete directly with the enterprise customers they serve. When compute is scarce, every GPU allocated to an enterprise is one not powering Gemini, Copilot, or Alexa. Jones argues enterprises still relying on traditional capex models are running out of time to secure allocations before the crisis peaks.
📺 Source: Nate B Jones · Published February 08, 2026
🏷️ Format: Deep Dive
