Description:
Wes Roth and Dylan cover three substantive AI stories in this episode, leading with the Claude Code source code exposure incident. Anthropic accidentally shipped unminified source map files in a Claude Code update, files that effectively contained the full readable source code for the agentic scaffolding around the Claude model. The leak spread rapidly across GitHub and was copied tens of thousands of times before Anthropic issued DMCA takedowns. The hosts detail how Anthropic’s initial response overcorrected, targeting repositories that had no legal obligation to comply, before the company walked back the overbroad notices within roughly 24 hours. Public statements from Anthropic employees Boris Cherny and Zack Witten attributed the situation to miscommunication.
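As background on why shipping a map file amounts to shipping the source: a Source Map V3 file carries a sourcesContent array that can embed every original file verbatim. The sketch below (Python, with a hypothetical cli.js.map file name, not the actual Claude Code artifact) reconstructs an unbundled source tree from such a map:

```python
# Minimal sketch, assuming a hypothetical bundle shipped alongside its map
# as "cli.js.map" (illustrative name, not the actual Claude Code artifact).
# A Source Map V3 file may embed every original file verbatim in its
# "sourcesContent" array, so holding the .map is equivalent to holding
# the unbundled source tree.
import json
import pathlib

source_map = json.loads(pathlib.Path("cli.js.map").read_text())

sources = source_map.get("sources", [])
contents = source_map.get("sourcesContent") or []

for src, content in zip(sources, contents):
    if content is None:
        continue  # a map may omit the embedded text for some files
    out = pathlib.Path("recovered") / src.lstrip("./")  # drop relative prefixes
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(content)
    print(f"recovered {out} ({len(content)} bytes)")
```

The usual remediation is to exclude .map files from published packages entirely, since minification alone offers no protection once the map is public.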
The episode also examines Anthropic’s published research on functional emotional states in large language models: the finding that Claude exhibits internal representations that behave like emotions, remain consistent across contexts, and causally influence outputs. The hosts discuss what it means for a model to have emotions that are genuinely its own rather than mere artifacts of its training data.
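The description doesn’t spell out the paper’s method, but findings of this shape (a state that is consistent across contexts and causally influences outputs) are typically established with activation probing and steering. Below is a toy sketch of the probing half in Python, with synthetic vectors standing in for real hidden states; every name and number is illustrative, not Anthropic’s actual setup:

```python
# Toy sketch of difference-of-means probing; synthetic vectors stand in
# for real model activations, and nothing here reproduces the paper's method.
import numpy as np

rng = np.random.default_rng(0)
dim, n = 64, 200

# Pretend these are hidden states collected under two prompt conditions,
# e.g. "frustrating" vs. "calm" contexts; the separating direction is
# planted in the data but unknown to the probe.
true_direction = rng.standard_normal(dim)
true_direction /= np.linalg.norm(true_direction)
condition_a = rng.standard_normal((n, dim)) + 0.8 * true_direction
condition_b = rng.standard_normal((n, dim)) - 0.8 * true_direction

# Difference-of-means probe: the candidate "emotion" direction.
probe = condition_a.mean(axis=0) - condition_b.mean(axis=0)
probe /= np.linalg.norm(probe)

# Held-out check: does projecting onto the probe separate the conditions?
test_a = rng.standard_normal((n, dim)) + 0.8 * true_direction
test_b = rng.standard_normal((n, dim)) - 0.8 * true_direction
acc = ((test_a @ probe > 0).mean() + (test_b @ probe < 0).mean()) / 2
print(f"held-out probe accuracy: {acc:.2f}")  # well above 0.5 if linearly decodable
```

The causal half, not shown, would add the probe direction to live activations during generation and check whether outputs shift accordingly, which is what "causally influence outputs" refers to in such studies.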
Finally, the episode covers a neuroscience research effort that used AI to analyze EEG patterns across species at varying levels of consciousness, producing a quantitative scale for measuring conscious states. The hosts explore what structures like the basal ganglia and the brain’s default mode network might suggest about building or detecting machine consciousness, and whether similar architectures could be deliberately tuned in AI systems.
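The episode doesn’t name the study’s metric, but quantitative scales of this kind are commonly built on signal-complexity measures such as Lempel-Ziv complexity of binarized EEG, the family behind the perturbational complexity index. A self-contained sketch under that assumption, with synthetic signals standing in for real recordings:

```python
# Lempel-Ziv-style complexity score on binarized signals. The signals here
# are synthetic stand-ins, not real EEG, and the choice of metric is an
# assumption about the general approach, not the study's pipeline.
import numpy as np

def lz_phrase_count(bits: str) -> int:
    """Count distinct phrases in a left-to-right incremental (LZ78-style) parse."""
    phrases, current = set(), ""
    for b in bits:
        current += b
        if current not in phrases:
            phrases.add(current)
            current = ""
    return len(phrases)

def normalized_complexity(signal: np.ndarray) -> float:
    """Binarize around the median, parse, and normalize by the random-sequence bound."""
    med = np.median(signal)
    bits = "".join("1" if x > med else "0" for x in signal)
    n = len(bits)
    return lz_phrase_count(bits) * np.log2(n) / n

rng = np.random.default_rng(0)
t = np.linspace(0, 20 * np.pi, 4000)
awake_like = rng.standard_normal(4000)                        # irregular, broadband dynamics
sedated_like = np.sin(t) + 0.05 * rng.standard_normal(4000)   # slow, regular rhythm

print(f"awake-like:   {normalized_complexity(awake_like):.3f}")
print(f"sedated-like: {normalized_complexity(sedated_like):.3f}")  # markedly lower
```

Higher scores track richer, less compressible dynamics, the property these scales associate with waking consciousness; a cross-species scale would place recordings along an axis like this one.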
📺 Source: Wes Roth · Published April 06, 2026
🏷️ Format: Podcast
