Description:
Ajeya Cotra — senior risk assessment researcher at METR, former head of technical AI safety grant-making at Open Philanthropy, and a top-three finisher among 400+ participants in the AI Digest 2025 forecasting survey — joins host Nathan Labenz for a wide-ranging conversation on AI timelines, recursive self-improvement, and what she calls “crunch time”: the approaching window in which AI systems become powerful enough to dramatically accelerate AI R&D while still remaining partially under human control.
Cotra walks through several plausible paths to transformative AI, including scenarios in which narrow AI systems specialized in machine learning research hit upon architectural breakthroughs before broader general capabilities emerge. She explains why OpenAI, Anthropic, and Google DeepMind are all converging on a strategy of using each AI generation to help align and control its successor — and why that strategy succeeds only if interpretability and oversight techniques keep pace with raw capability gains. She also addresses the asymmetry in AI’s current skill profile: systems are far more capable at ML research and software engineering than at the philosophical and social reasoning that safety work often demands.
Labenz frames the episode by noting that Cotra’s January 2026 forecasts — the backdrop for this conversation — were already showing early signs of being met within months, and he contextualizes the discussion against Anthropic’s Mythos model and its reported discovery of zero-day exploits across major operating systems and browsers. Whether listeners are optimistic or pessimistic about AI development trajectories, the conversation makes a strong case that aggressive adoption of current AI tools is now essential for staying meaningfully engaged with the situation.
📺 Source: Cognitive Revolution “How AI Changes Everything” · Published April 11, 2026
🏷️ Format: Podcast
