Description:
Ryan Kidd, co-executive director of MATS (Machine Learning Alignment Theory Scholars), joins the Cognitive Revolution for an inside look at the AI safety research field and its primary talent pipeline. With 446 alumni now working across organizations including Anthropic, DeepMind, Redwood Research, Goodfire, and Apollo Research, MATS is widely regarded as the largest and most impactful AI safety training program in existence.
The conversation opens with AGI timelines and the current state of safety thinking. Kidd observes that even among the most technically sophisticated researchers, uncertainty remains extremely high — a finding that supports portfolio-style research investment rather than concentration on any single paradigm. He then describes three researcher archetypes MATS has identified through its experience: connectors, who define new research agendas and often found organizations; iterators, who systematically develop those agendas through experiments and analysis; and amplifiers, who help scale research teams. Iterators have historically been in highest demand, but that balance is shifting as organizations grow and AI coding agents lower the engineering barrier to safety research.
Kidd addresses why breaking into AI safety remains difficult despite active hiring, explains why some tangible research output is a near-requirement for MATS admission regardless of formal credentials or age, and discusses which research directions require frontier model access versus commodity compute. Applications for the summer 2026 cohort close January 18th at matsprogram.org/tcr.
📺 Source: Cognitive Revolution “How AI Changes Everything” · Published January 04, 2026
🏷️ Format: Interview
