Description:
Dr. Roman Yampolskiy, an associate professor of computer science and one of the longest-serving researchers in AI safety, delivers a series of stark and specific predictions in this Diary of a CEO interview: by 2027, AI systems will have the capability to replace most humans in most occupations; within five years, unemployment could reach levels the world has never seen, not 10% but potentially 99%, even before the arrival of superintelligence. His framing is deliberate: these projections require no science-fiction scenarios, only the continuation of trends already underway.
Yampolskiy draws a sharp and technically grounded distinction between two separate problems: making AI more capable, which is well understood and accelerating rapidly thanks to scaling compute and data, and making AI safe, which remains an unsolved problem at the frontier. He is pointed in his criticism of leading labs, arguing that OpenAI has violated every established guardrail for responsible AI development and that the legal obligations of AI companies run to investors, not to humanity. He maps specific pathways to catastrophic harm, including AI-assisted design of biological, chemical, radiological, and nuclear weapons, and notes that these risks are near-term and concrete, not speculative.
The conversation also addresses simulation theory, the feasibility of international coordination to slow AI development, and why Yampolskiy frames his goal not as permanent prevention but as buying time: shifting a potentially catastrophic threshold from five years away to fifty. He offers little optimism but insists that every additional decade of preparation meaningfully improves humanity's odds.
📺 Source: The Diary Of A CEO
🏷️ Format: Interview
