Build & deploy AI-powered apps — Paige Bailey, Google DeepMind

Description:

Paige Bailey, Developer Relations Lead at Google DeepMind, presents a demo-heavy overview of the latest Gemini model family and AI Studio capabilities at the AI Engineer conference. The talk covers a rapid succession of releases: Gemini 3.1 Flash Live, Gemini 3.1 Pro and Flash Lite, Nano Banana 2 for image generation and editing, a multimodal embeddings model that unifies video, images, audio, text, and code in a single embedding space, Lyria 3 for music generation, Genie 3 for world-model building, Gemma 4 (the latest open model), and Veo 3.1 Light for video generation at a competitive cost profile. Bailey notes that Augment Code recently migrated its entire agent system to default to Gemini 3.1 Pro, citing performance and cost advantages.

A central theme is Gemini’s native multimodal input and output capabilities — unlike most competing models that handle only text and code as outputs, Gemini can produce text, code, audio, and images, including interleaved image-text outputs. Bailey demonstrates Gemini Live with real-time video understanding, where the model accurately identifies hand gestures through a live camera feed, and highlights the low cost profile relative to manually stitching together speech-to-text, LLM, and text-to-speech pipelines.
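As a rough illustration of the interleaved text-and-image output described above, the sketch below builds a request body in the shape of the public Gemini `generateContent` REST API, using `responseModalities` to ask for both text and image parts in one response. The model name is a placeholder, not one of the models announced in the talk, and the field names follow the published v1beta API rather than anything shown on stage:

```python
import json

# Placeholder model name (assumption, not from the talk): substitute
# whatever image-output-capable Gemini model is current.
MODEL = "gemini-image-capable-model"

def build_request(prompt: str) -> dict:
    """Build a generateContent request body asking for interleaved
    text + image output in a single response."""
    return {
        "contents": [{"role": "user", "parts": [{"text": prompt}]}],
        "generationConfig": {
            # Request both text and image parts from the model.
            "responseModalities": ["TEXT", "IMAGE"],
        },
    }

body = build_request("Illustrate each step of making pour-over coffee.")
print(json.dumps(body, indent=2))
```

In practice this body would be POSTed (with an API key) to `https://generativelanguage.googleapis.com/v1beta/models/{MODEL}:generateContent`; the response's candidate parts can then alternate between text and inline image data, which is the interleaved output the talk highlights.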

The session also showcases AI Studio’s new “Build” feature — comparable to v0.dev or Lovable — for creating and deploying full-stack apps, now with integrated Firestore database support and Firebase authentication. Bailey walks through creating an app from scratch using voice input, with all generated code inspectable and exportable directly from the UI.


📺 Source: AI Engineer · Published April 29, 2026
🏷️ Format: Keynote Launch
