Cursor, Claude Code and Codex all have a BIG problem


Description:

Theo (t3.gg) argues that Cursor, Claude Code, and Codex share a structural problem that goes deeper than UX complaints: they were built using the same early AI models they now help developers work with, and those models produced low-quality foundational code that compounds over time. Disclosing early investments in Cursor and an indirect Anthropic connection upfront, Theo frames the critique as coming from someone with genuine stakes in being wrong.

The core concept is “codebase inertia” — the observation that bad patterns established early in a codebase tend to expand exponentially while good patterns only grow linearly, because AI agents like Codex preferentially copy whatever patterns are most visible in the existing code. Teams that bet on early models like Sonnet 3.5 or older GPT variants ended up with codebases that even modern models like Opus 4.6 and Codex 5.3 struggle to rehabilitate. As a concrete example, Theo describes Cursor reportedly acquiring a developer to rewrite their Electron app in Zig just to address a 2GB RAM footprint — a symptom of accumulated technical debt.
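The compounding dynamic behind "codebase inertia" can be sketched as a toy simulation (my illustration, not code from the video): assume an AI agent adopts an existing pattern with probability proportional to how often that pattern already appears, a rich-get-richer process. The function name and parameters are hypothetical.

```python
import random

def simulate_inertia(bad_start, good_start, edits, seed=0):
    """Toy model of codebase inertia: each new change copies an existing
    pattern with probability proportional to that pattern's current
    prevalence, so early imbalances tend to persist and compound."""
    rng = random.Random(seed)
    bad, good = bad_start, good_start
    for _ in range(edits):
        if rng.random() < bad / (bad + good):
            bad += 1   # agent imitates the more visible (bad) pattern
        else:
            good += 1  # agent imitates the rarer (good) pattern
    return bad, good

# A codebase seeded 3:1 with a bad pattern rarely digs itself out,
# because every copy of the bad pattern makes the next copy more likely.
bad, good = simulate_inertia(bad_start=3, good_start=1, edits=1000)
print(f"bad={bad} good={good} ({bad / (bad + good):.0%} bad)")
```

Under this assumption the expected share of the bad pattern never falls below its starting share; only a deliberate rewrite (like the Zig effort described above) resets the distribution.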

The video is a technically grounded analysis of how early AI tooling decisions create long-tail engineering costs, with clear implications for teams evaluating AI-assisted development environments. Anyone building on or choosing between Cursor, Claude Code, or Codex will find the mechanism Theo describes worth understanding before committing to a codebase direction.
📺 Source: Theo – t3.gg · Published March 01, 2026
🏷️ Format: Deep Dive