Description:
Cole Medin introduces generative UI — a design pattern where AI agents dynamically decide both the layout and the components of a frontend interface rather than simply populating a fixed template — and walks through a working implementation built around a personalized research dashboard.
The tech stack layers three pieces: the AGUI protocol, which connects the Pydantic AI backend agent to the frontend; Google’s A2UI specification, which defines the components the agent can select from; and CopilotKit, which renders the React frontend and manages agent interaction. The agent picks from a predefined library of components and arranges them into a custom layout based on the content it receives, producing a different dashboard each time depending on the input. Medin clearly distinguishes three generative UI approaches: static (preconfigured templates), open-ended (arbitrary JSX generation), and declarative (the agent chooses from a bounded component library), and makes the case for declarative as the most practical and secure option for production.
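To make the declarative idea concrete, here is a minimal sketch of the pattern in Python: the agent may only emit components drawn from a small, predefined library, and anything outside it is rejected before rendering. The component names and required props below are hypothetical illustrations, not taken from the video's actual A2UI component set or repo.

```python
# Hypothetical bounded component library: component type -> required props.
COMPONENT_LIBRARY = {
    "metric_card": {"title", "value"},
    "bar_chart": {"title", "labels", "values"},
    "text_block": {"title", "body"},
}

def validate_dashboard(spec: dict) -> dict:
    """Accept an agent-produced layout spec only if every component
    comes from the bounded library with all required props present."""
    for comp in spec.get("components", []):
        kind = comp.get("type")
        if kind not in COMPONENT_LIBRARY:
            raise ValueError(f"unknown component type: {kind!r}")
        missing = COMPONENT_LIBRARY[kind] - comp.keys()
        if missing:
            raise ValueError(f"{kind} missing props: {sorted(missing)}")
    return spec

# A valid agent response passes through unchanged...
layout = validate_dashboard({
    "components": [
        {"type": "metric_card", "title": "Sources found", "value": "12"},
        {"type": "bar_chart", "title": "Topics",
         "labels": ["AI", "UI"], "values": [8, 4]},
    ]
})

# ...while arbitrary markup (the open-ended approach) is refused.
try:
    validate_dashboard({"components": [{"type": "raw_jsx", "jsx": "<script/>"}]})
except ValueError as err:
    print("rejected:", err)
```

The frontend then maps each validated component type to a pre-built React component, which is what makes this approach safer than letting the model generate JSX directly.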
The video is aimed at developers building the next generation of AI-powered web applications. Medin argues that generative UI represents a fundamental shift in software, pointing toward a near future where platforms like Amazon and Google render personalized interfaces for every user. A GitHub repo is provided as a starting point for developers who want to build on the demo stack directly.
📺 Source: Cole Medin · Published February 05, 2026
🏷️ Format: Hands-On Build
