Google's AI Boss Reveals What AI In 2026 Looks Like


Description:

In an Axios interview, Google DeepMind CEO Demis Hassabis laid out his vision for AI’s trajectory in 2026, and this TheAIGRID video breaks down every major theme. The centerpiece of Hassabis’s outlook is the convergence of modalities — the development of full omnimodels spanning robotics, image, video, audio, 3D, and text within a unified architecture — an area where he argues Gemini has a structural head start given its multimodal design from day one.

The video examines each modality in turn. Gemini Robotics 1.5 now runs the same model across different robot form factors without per-hardware fine-tuning, and it has gained internet access to answer questions mid-task, demonstrated with an Aloha robot sorting objects according to San Francisco's recycling guidelines. On image generation, the Nano Banana Pro model uses an iterative, agent-like process of generating, evaluating, and refining, which the creator argues explains its unusual accuracy with spatial and factual content. Veo 3.1 advances the video-generation front, while Project Astra continues developing the real-time universal-assistant concept, shown here guiding a user through a full BMW 335i oil change with accurate torque specifications.

Hassabis also addresses the challenge of ambient AI integration — embedding AI into Android devices, smart glasses, and everyday contexts — and why the convergence of these modalities within a single model family gives Google a compounding advantage that pure-play AI companies will find difficult to replicate at scale.


📺 Source: TheAIGRID · Published December 13, 2025
🏷️ Format: News Analysis
