Description:
Samuel Colvin, creator of Pydantic and now CEO of Pydantic the company, presents a live demonstration at AI Engineer showing how to combine two tools — Jepper and Pydantic Logfire’s managed variables — to iteratively optimize agent prompts in production environments. Pydantic’s stack includes Pydantic AI (the agent framework), Pydantic validation, and Logfire (an OpenTelemetry-based observability platform that Colvin positions as general observability with AI eval capabilities layered on top, not a standalone AI observability category).
Jepper is a genetic algorithm optimization library that treats prompts as strings to be evolved: it samples from a Pareto frontier of high-performing candidates, mixes them, proposes new variants via a dedicated proposer agent, evaluates each against a test suite, and iterates toward better solutions — analogous to breeding racehorses by always selecting from the best performers. Managed variables in Logfire extend prompt management beyond text to any Pydantic-typed object, editable live from the platform without redeployment. The combination allows Jepper to propose new system prompts, evaluate them using Pydantic AI’s eval tooling with real API calls (at roughly $2 per 400-call run), and push winning prompts back into the managed variable — closing a production optimization loop.
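The evolve-evaluate-iterate loop described above can be sketched in miniature. This is a toy illustration, not Jepper's actual API: the evaluator here scores a prompt by keyword coverage instead of running 400 real API calls, and the "proposer agent" is replaced by a string-mixing function.

```python
import random

random.seed(0)

# Toy stand-in for the eval suite: score a prompt by how many target
# traits it exhibits (a real run would call the model and grade outputs).
KEYWORDS = ["concise", "cite sources", "step by step", "UK politics"]

def evaluate(prompt: str) -> float:
    return sum(kw in prompt for kw in KEYWORDS) / len(KEYWORDS)

def propose(parent_a: str, parent_b: str) -> str:
    # Stand-in for the proposer agent: mix fragments of two high-scoring
    # parents, occasionally injecting a new trait (a "mutation").
    frags = parent_a.split(". ") + parent_b.split(". ")
    child = ". ".join(random.sample(frags, k=min(3, len(frags))))
    if random.random() < 0.5:
        child += ". " + random.choice(KEYWORDS)
    return child

population = ["You are a helpful analyst.", "Answer step by step."]
scores = {p: evaluate(p) for p in population}

for _ in range(30):
    # Sample parents from the frontier of current top performers
    # ("breeding racehorses from the best").
    frontier = sorted(scores, key=scores.get, reverse=True)[:3]
    a, b = random.choice(frontier), random.choice(frontier)
    child = propose(a, b)
    scores[child] = evaluate(child)  # one eval pass per candidate

best = max(scores, key=scores.get)
print(best, round(scores[best], 2))
```

In the production loop the talk describes, the winning candidate would then be written back into the Logfire managed variable so the live agent picks it up without a redeploy.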
The concrete use case is a Pydantic AI agent analyzing Wikipedia articles for UK MPs to detect political dynasty connections, originally built to answer a listener question on the podcast The Rest is Politics. Colvin walks through the async/sync friction between Jepper and Pydantic AI, the proposer agent’s system prompt, and how Logfire traces make the optimization passes inspectable in real time.
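The async/sync friction mentioned here is an instance of a generic problem: calling an async agent API from a synchronous caller such as an optimizer that expects a plain scoring function. A minimal stdlib sketch of the usual bridge, with a stand-in coroutine rather than Pydantic AI's actual agent API:

```python
import asyncio

# Stand-in for an async agent call (agent frameworks are typically
# async-first); a real call would hit the model API.
async def run_agent(prompt: str) -> str:
    await asyncio.sleep(0)  # simulate awaiting the model
    return f"analyzed: {prompt}"

def run_agent_sync(prompt: str) -> str:
    # Bridge for sync callers: spin up an event loop per call.
    # asyncio.run() raises if invoked from inside an already-running
    # loop, which is one common source of this kind of friction.
    return asyncio.run(run_agent(prompt))

print(run_agent_sync("Wikipedia article for an MP"))
```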
📺 Source: AI Engineer · Published May 07, 2026
🏷️ Format: Hands On Build
