Description:
Wes Roth sits down with Stephen Wolfram — mathematician, creator of Mathematica and Wolfram Alpha, and architect of the Wolfram Physics Project — for a wide-ranging conversation on the computational foundations of both intelligence and physical reality. Wolfram, who first experimented with neural networks in the early 1980s, offers a distinctive perspective on why modern deep learning works the way it does and what its fundamental limits might be.
A central theme is the contrast between neural networks and symbolic/computational systems: Wolfram characterizes neural nets as ‘broad but shallow’ pattern matchers that succeed by searching a vast space of computational possibilities for whatever happens to work, rather than discovering human-interpretable mechanisms. He draws a direct parallel to biological evolution — both processes produce systems that function but resist narrative explanation. This framing has implications for AI interpretability and safety research, as it suggests the opacity of large models may be structural rather than incidental.
The conversation also covers Wolfram’s Ruliad framework, which models the universe as an evolving hypergraph of discrete elements, and how quantum entanglement and general relativity emerge naturally from this picture. He discusses ongoing work in ‘infrageometry’ — extending Euclidean geometry to spaces with non-integer dimensions — as foundational mathematics required to validate the theory. For technically oriented viewers, the interview provides rare first-person insight into how one of the field’s longest-standing thinkers connects fundamental physics, computation, and the current generation of AI systems.
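For readers curious what an "evolving hypergraph of discrete elements" looks like in practice, the following is a toy sketch, not the Physics Project's actual machinery: it applies the simple growth rule {{x,y}} → {{x,y},{y,z}} (with z a fresh node), one of the elementary example rules Wolfram uses to illustrate hypergraph rewriting. All function names here are invented for illustration.

```python
def step(edges, next_id):
    """Apply the rule {{x,y}} -> {{x,y},{y,z}} to every edge once.

    edges: list of (x, y) tuples representing directed hyperedges
    next_id: first unused node id (fresh z nodes are drawn from here)
    Returns the rewritten edge list and the updated next_id.
    """
    new_edges = []
    for (x, y) in edges:
        new_edges.append((x, y))        # keep the original edge
        new_edges.append((y, next_id))  # attach a fresh node z to y
        next_id += 1
    return new_edges, next_id

# Start from a single edge and evolve for a few steps.
edges, next_id = [(0, 1)], 2
for _ in range(3):
    edges, next_id = step(edges, next_id)

print(len(edges))  # edge count doubles each step: 1 -> 2 -> 4 -> 8
```

Even this trivial rule shows the flavor of the framework: simple local rewrites, applied everywhere, generate rapidly growing combinatorial structure, and the interesting physics questions concern what emerges in the large-scale limit.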
📺 Source: Wes Roth · Published February 23, 2026
🏷️ Format: Interview
