Description:
Machine Learning Street Talk hosts a wide-ranging interview with Jeremy Howard — deep learning pioneer, fast.ai founder, and Kaggle grandmaster — on the risks and realities of AI-assisted coding. Howard argues that the dominant narrative around tools like Cursor and Claude Code, particularly claims of dramatically increased software output, is not supported by actual shipping data: he references an Anthropic study showing only a “tiny uptick” in what teams are actually delivering to production, directly contradicting public statements from Anthropic’s own CEO.
Howard’s core concern is what he calls “understanding debt” — the cognitive equivalent of a self-driving car disengaging the driver. When developers stop actively wrestling with code and instead delegate to AI agents, they stop building the mental models that compound into genuine engineering skill over time. He distinguishes sharply between LLMs “cosplaying understanding” through statistical pattern matching and the real insight that comes from interactive, iterative problem-solving in a notebook or REPL. For organizations betting their futures on AI replacing engineering judgment, Howard sees a serious risk of teams that can neither grow nor course-correct when AI-generated code fails in production.
The conversation covers Howard’s ULMFiT paper and the early history of transfer learning for NLP, the philosophy of mind debates (Dennett, Searle’s Chinese Room) now playing out in practical AI contexts, and the specific failure modes of current agentic coding workflows — including the “slot machine” dynamic where developers feel control through prompt crafting but ultimately can’t reason about the code that emerges. Howard is not anti-AI; he argues these tools can be powerful learning accelerators when used intentionally, but warns the default mode of use actively undermines the skills they are supposed to augment.
📺 Source: Machine Learning Street Talk · Published March 03, 2026
🏷️ Format: Interview
