Description:
Raia Hadsell, VP of Research at Google DeepMind, oversees approximately 1,200 scientists and engineers across ten labs. In this keynote at the AI Engineer London conference, she covers three active research frontiers her organization is betting on, all of them deliberately outside the large-language-model mainstream.
The first focus is a new class of embedding models built around the neuroscience concept of “Jennifer Aniston cells”: sparse neuron clusters that fire for a specific entity regardless of modality (text, image, audio). DeepMind’s goal is to replicate this unified semantic space artificially, enabling fast cross-modal retrieval and recognition with as few as 256 dimensions, which can then be widened when more expressiveness is needed.

The second is GraphCast, a spherical graph neural network that predicts the global atmospheric state across 100 variables up to 15 days out. Hadsell highlights that GraphCast predicted Hurricane Lee’s Nova Scotia landfall nine days in advance, three days earlier than the best physics-based models. Its follow-on, GenCast, is probabilistic and designed for uncertainty quantification across ensemble forecasts.
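The retrieve-with-few-dimensions idea can be illustrated with a toy sketch: score candidates by cosine similarity using only the first 256 dimensions of a longer embedding, keeping the full vector available for later, more expressive comparison. This is a minimal illustration under assumptions, not DeepMind’s actual model; the functions, the 1024-dimension width, and the random “corpus” are all hypothetical.

```python
import numpy as np

def truncate_and_renormalize(emb, dims):
    """Keep the first `dims` dimensions and re-normalize to unit length."""
    t = emb[..., :dims]
    return t / np.linalg.norm(t, axis=-1, keepdims=True)

def retrieve(query, corpus, dims=256, top_k=3):
    """Coarse retrieval by cosine similarity in the truncated space."""
    q = truncate_and_renormalize(query, dims)
    c = truncate_and_renormalize(corpus, dims)
    scores = c @ q                      # cosine similarity (unit vectors)
    return np.argsort(-scores)[:top_k]  # indices of best matches

# Toy corpus: unit-norm vectors standing in for cross-modal embeddings.
rng = np.random.default_rng(0)
corpus = rng.normal(size=(1000, 1024))
corpus /= np.linalg.norm(corpus, axis=1, keepdims=True)

# A query that is a lightly perturbed copy of item 42; even the 256-dim
# truncation should rank it first among 1,000 candidates.
query = corpus[42] + 0.05 * rng.normal(size=1024)
hits = retrieve(query, corpus, dims=256)
```

The point of the pattern is that in high-dimensional spaces a short prefix of the embedding already separates entities well, so the cheap 256-dimension pass handles fast lookup and the full vector is consulted only when finer discrimination is required.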
Hadsell frames DeepMind’s research philosophy around finding “root nodes” — deep unsolved problems whose solutions unlock broad downstream impact — rather than incrementally improving existing systems. She also discusses her role as a UK AI Ambassador, bridging government, academia, and industry. For AI engineers tracking frontier lab research priorities, this talk provides rare directional visibility into what Google DeepMind considers its highest-leverage bets beyond large language models.
📺 Source: AI Engineer · Published April 18, 2026
🏷️ Format: Keynote Launch
