⚡️GPT5-Codex-Max: Training Agents with Personality, Tools & Trust — Brian Fioca + Bill Chen, OpenAI


Description:

Brian Fioca and Bill Chen from OpenAI join the Latent Space podcast at AI Engineer World’s Fair to unpack the training philosophy behind Codex Max and share what the team learned during GPT-5’s development. Central to their discussion is the insight that model personality — not just raw capability — is what determines whether developers actually trust and adopt a coding agent over the long run.

The OpenAI team describes how they translated software engineering best practices into concrete behavioral benchmarks: communication (keeping the developer informed during runs), planning (gathering context before acting), and verification (checking its own work). These became explicit training objectives, with the goal of producing a model that behaves like a reliable pair programmer. Codex Max is designed for autonomous runs lasting 24 hours or more and optimizes for both consistency and speed on complex tasks.

A significant portion of the conversation covers how the abstraction layer in AI development is shifting upward — from raw model APIs toward packaged agents. Rather than chasing every model release with a new integration, developers can now plug in a full agent like Codex and build on top of it, a pattern already adopted by Zed and VS Code. The episode offers a candid look at how OpenAI approaches the intersection of model training, developer trust, and the emerging era of long-running agentic coding systems.


📺 Source: Latent Space · Published December 26, 2025
🏷️ Format: Interview
