Reacting to “Why AI is so smart but also so dumb?”


Description:

Matthew Berman reacts to an exclusive talk by Andrej Karpathy delivered at Sequoia Capital’s annual AI gathering, working through some of the most important conceptual frameworks for understanding why modern large language models perform so unevenly across tasks. Karpathy — founding member of OpenAI, former director of AI at Tesla (where he led the Autopilot program), and the person who coined the term “vibe coding” — describes the LLM as a new computing paradigm in which the context window functions as RAM and the model weights function as a CPU, replacing the traditional operating system layer entirely.
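The computer analogy can be made concrete with a toy sketch. This is an illustration of the idea only, not code from the talk: `ContextWindow`, `ModelWeights`, and their methods are hypothetical names invented here to show the RAM/CPU split — a small, mutable token buffer per task versus a frozen model that only reads from it.

```python
# Hypothetical sketch of the "LLM as computer" analogy:
# the context window behaves like RAM (bounded, mutable per task),
# while frozen weights act like the CPU (fixed at inference time).
from dataclasses import dataclass, field


@dataclass
class ContextWindow:
    """'RAM': a bounded token buffer; oldest tokens are evicted when full."""
    capacity: int
    tokens: list = field(default_factory=list)

    def load(self, new_tokens):
        self.tokens.extend(new_tokens)
        # Evict from the front when over capacity, like a sliding window.
        overflow = len(self.tokens) - self.capacity
        if overflow > 0:
            self.tokens = self.tokens[overflow:]


@dataclass(frozen=True)
class ModelWeights:
    """'CPU': immutable during inference; it only reads the context."""
    name: str

    def step(self, context: ContextWindow) -> str:
        # Stand-in for a forward pass: emit output conditioned on context.
        return f"<token conditioned on {len(context.tokens)} tokens>"


ram = ContextWindow(capacity=4)
cpu = ModelWeights(name="frontier-llm")
ram.load(["read", "the", "user", "prompt", "carefully"])
print(ram.tokens)   # only the 4 most recent tokens survive eviction
print(cpu.step(ram))
```

The design choice the analogy highlights: everything task-specific lives in the small, fast, disposable buffer, while the expensive component stays fixed — exactly how RAM and a CPU divide labor in a conventional computer.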

The centerpiece of Karpathy’s argument is a “verifiability” theory of AI capability. AI systems dominate coding and mathematics because outcomes in those domains can be verified automatically and rapidly — code either runs or throws an error, math answers are provably right or wrong — creating tight reinforcement learning feedback loops. Domains like creative writing lack this automated verification signal, which limits training effectiveness and produces the jagged capability profile observed across frontier models today.
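The asymmetry can be sketched in a few lines. This is a toy illustration of the verifiability argument, not anything from the talk: `code_reward` and `math_reward` are hypothetical reward functions showing that code and math admit cheap automated pass/fail checks, while no equivalent oracle exists for prose quality.

```python
# Toy reward functions illustrating why verifiable domains produce
# tight RL feedback loops: each sample yields an automatic score.
import ast


def code_reward(source: str) -> float:
    """Automated verifier: 1.0 if the snippet parses and runs cleanly."""
    try:
        ast.parse(source)   # syntax check
        exec(source, {})    # execution check (toy; unsafe on untrusted code)
        return 1.0
    except Exception:
        return 0.0


def math_reward(answer: float, expected: float) -> float:
    """Automated verifier: compare against a known correct answer."""
    return 1.0 if answer == expected else 0.0


# Creative writing has no program that can decide "is this prose good?",
# so there is no comparably cheap reward signal to optimize against.

print(code_reward("x = 2 + 2"))   # 1.0 — runs without error
print(code_reward("x = 2 +"))     # 0.0 — syntax error caught instantly
print(math_reward(4, 2 + 2))      # 1.0 — provably correct
```

Millions of such binary signals per day are what make reinforcement learning effective in these domains; the missing verifier for subjective domains is, on this theory, the source of the jagged capability profile.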

Karpathy also explains how frontier labs’ revenue incentives reinforce this pattern: enterprise demand for AI-assisted coding is enormous, which drives labs to concentrate training data and optimization effort there. Berman contextualizes these points with his own observations about the qualitative shift in agentic coding that became apparent in December 2024, connecting Karpathy’s theoretical framework to the lived experience of developers who use these tools.


📺 Source: Matthew Berman · Published May 01, 2026
🏷️ Format: Reaction
