Description:
Dwarkesh Patel interviews Adam Marblestone — a neuroscientist and AI researcher — to explore one of the field’s deepest open questions: what does the human brain do that large language models fundamentally cannot, and why does throwing more data and compute at LLMs still leave them far short of human-level generality?
Marblestone argues that the field has systematically underestimated the role of loss functions. While machine learning favors mathematically simple objectives like next-token prediction, he suggests evolution may have encoded rich, stage-dependent cost functions into the brain — effectively a learned curriculum that different cortical regions use at different developmental stages. He draws on the work of Yann LeCun, Peter Dayan, and DeepMind’s temporal difference learning research to sketch an alternative architecture: a cortex that performs omnidirectional prediction across arbitrary subsets of its inputs, more akin to a probabilistic energy-based model than a unidirectional sequence predictor.
The conversation also covers the basal ganglia as a simple RL system layered beneath a more general cortical world model, how dopamine encodes reward prediction error rather than raw reward, and why empowering neuroscience as a field — rather than just hiring more ML researchers — may be the critical bottleneck to answering these questions. Marblestone frames the problem as potentially the most important question in science, while remaining candid about the limits of current understanding.
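The reward-prediction-error idea discussed above maps onto classic temporal-difference learning. As a minimal illustrative sketch (not code from the episode), the learning signal in TD(0) is the prediction error δ = r + γV(s′) − V(s), not the raw reward r:

```python
# Minimal TD(0) sketch: the update is driven by the prediction error
# delta = r + gamma * V(s_next) - V(s), not by the raw reward itself.
# This mirrors the claim that dopamine signals reward prediction error.
def td_update(V, s, r, s_next, gamma=0.9, alpha=0.1):
    delta = r + gamma * V[s_next] - V[s]  # reward prediction error
    V[s] += alpha * delta                 # value estimate moves toward the target
    return delta

# Toy example: a reward of 1.0 on the transition from state 0 to state 1.
V = {0: 0.0, 1: 0.0}
first = td_update(V, 0, 1.0, 1)   # large error: the reward is unexpected
for _ in range(100):
    td_update(V, 0, 1.0, 1)
later = td_update(V, 0, 1.0, 1)   # near-zero error: the reward is now predicted
```

Once the value estimate converges, a fully predicted reward produces almost no error signal, which is why dopamine responses shift from the reward itself to the cue that predicts it.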
📺 Source: Dwarkesh Patel · Published December 30, 2025
🏷️ Format: Interview
