Descriptions:
Professor Stuart Russell — 40-year Berkeley faculty member, co-author of the world’s most widely used artificial intelligence textbook, and a signatory of the October statement in which over 850 experts, including Geoffrey Hinton and Richard Branson, called for a ban on AI superintelligence — joins The Diary of a CEO for a systematic examination of AI existential risk and governance failure. Russell draws on decades of AI research to explain why large language models represent a genuine capability threshold, one that Turing himself identified as potentially decisive.
Central to Russell’s argument is what he calls the Midas touch problem: AI labs are pursuing systems of extraordinary capability without any framework that can guarantee alignment with human values or intent. He names a specific and underreported regulatory gap: the European Union’s AI regulations explicitly exempt military applications, rendering them largely ineffective for the most dangerous use cases. He also recounts a private conversation with a leading AI CEO who suggested that a Chernobyl-scale disaster may be necessary before the public and policymakers treat AI safety as a genuine emergency.
The discussion ranges from the engineering logic of humanoid versus non-humanoid robot design — including a debate about the uncanny valley — to why Russell, despite having the option to retire after 50 years, now works 80 to 100 hours a week on AI safety. He offers measured optimism, arguing that a technically feasible path to safe AI remains open, but only if pursued with the seriousness the stakes demand.
📺 Source: The Diary Of A CEO
🏷️ Format: Interview