The Era of Vertical AI Models

Description:

The AI Daily Brief examines whether vertical AI models have finally arrived as a legitimate competitive force, anchored by Intercom’s announcement of Apex—a customer service-specific model that CEO Eoghan McCabe and CPO Paul Adams claim outperforms GPT and Claude Opus 4.5 on resolution rate, speed, and cost, trained on billions of proprietary customer service interactions.

Host Nathaniel Whittemore frames this against Rich Sutton’s 2019 “bitter lesson” essay—the argument that general compute-scaling approaches always beat domain-specific ones—tracing its history from chess and BloombergGPT through to recent evidence from Cursor’s Composer 2, where reinforcement learning on proprietary coding data reportedly vaulted an open-weight Kimi K2.5 fine-tune above Opus 4.6 on coding benchmarks. The core question is whether last-mile usage data—interaction data captured at the actual product edge—represents a qualitatively different input that could change the calculus for vertical models.

Adams argues that successful companies will increasingly need to operate at the app layer, AI layer, and model layer simultaneously, with durable differentiation migrating down to proprietary models as the app layer becomes easier to clone. The episode collects reactions from AI researchers and operators—including an OpenAI board member—and unpacks what Intercom’s claim means for foundation model providers, as well as which enterprise verticals with large labeled interaction datasets are sitting on untapped fine-tuning assets.

📺 Source: The AI Daily Brief: Artificial Intelligence News · Published March 29, 2026
🏷️ Format: News Analysis