Description:
Chris Lattner—creator of LLVM, the Clang compiler, and the Swift programming language, and now co-founder of Modular—returns to the Lex Fridman podcast for his third appearance to discuss the AI infrastructure company he is building and Mojo, the new programming language he co-created. Mojo is designed as a superset of Python optimized for machine learning workloads, aiming to deliver C/C++-level performance while preserving Python's ergonomics; benchmarks cited in the conversation show over 30,000x speedups over standard Python on targeted workloads.
Lattner articulates the core problem Modular is trying to solve: as AI hardware proliferates—GPUs, TPUs, NPUs, custom ASICs—and AI frameworks have grown to support thousands of operators, the gap between theoretical hardware capability and what software developers can actually use has become enormous. Mojo and Modular's infrastructure aim to provide a universal compilation and deployment platform that abstracts over this hardware fragmentation without sacrificing performance, enabling researchers and engineers to write code once and run it optimally across accelerators without constant rewrites.
The technical depth of the conversation is substantial, covering MLIR compiler infrastructure, distributed training and inference scheduling framed as a hardware placement optimization problem, the role of reinforcement learning and genetic algorithms in autotuning, and the philosophical case for simple and predictable execution layers as the foundation for higher-level automation. Lattner also draws on his prior work building TPUs at Google and autopilot software at Tesla to contextualize why he believes a universal AI infrastructure layer is one of the most important unsolved problems in computing today.
📺 Source: Lex Fridman
🏷️ Format: Interview
