Why AI Moats Still Matter (And How They’ve Changed)

Description:

In this panel discussion from Andreessen Horowitz (a16z), partners examine whether the competitive moats of traditional software still hold in the AI era, and conclude that they largely do, with one critical structural shift: software now competes for labor spend, not just IT budgets. The conversation opens with a clear distinction between differentiation (what AI makes possible, like a voice agent operating in 50 languages around the clock) and defensibility (what actually protects a business long-term).

The panel argues that the classic moat sources—owning end-to-end workflows, accumulating data network effects, becoming a system of record, and deeply embedding within customer operations—remain the right framework. However, scale thresholds matter more than ever. Using anti-fraud as an analogy, one partner explains that data advantages are nearly invisible at small scale and only become gravitational at very large scale, which means momentum—raising capital and acquiring customers faster than competitors—is often the only viable path to a durable position.

The discussion also addresses the “GPT wrapper” criticism directly, noting that overlap between model capabilities and application capabilities is a genuine risk, but that many markets previously unattractive for software (plaintiff law, auto-loan servicing) are now wide open. a16z portfolio company Salient is cited as a concrete example: a voice-agent business collecting on auto loans across 50 US states in multiple languages, achieving meaningfully higher collection rates than human agents. For founders and investors, the key takeaway is that the question “will OpenAI build this?” requires an honest assessment of how much application-layer differentiation exists beyond raw model capability.


📺 Source: a16z · Published December 03, 2025
🏷️ Format: Deep Dive

🏢 Companies: Salient