Shipmas Day 14: Can AI Agents "Dream" In a Simulation?


Description:

All About AI’s Shipmas Day 14 entry presents a working prototype of a multi-agent social simulation built on Gemini 3 Flash, asking a genuinely novel question: can AI agents be made to “dream”? Three agents — Jack (barista at The Daily Grind), Claude (barista at Bean There), and Erica (a customer who visits both) — interact through a custom web UI, each maintaining separate conversation histories that update across turns. Jack and Claude cannot communicate directly; Erica acts as a social bridge who inadvertently shares each barista’s words with the other.
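The routing described above — per-relationship conversation histories, with Erica as the only bridge between the two baristas — can be sketched in a few lines of Python. The class and function names here are illustrative assumptions, not taken from the video's code:

```python
from collections import defaultdict

class Agent:
    """Minimal agent that keeps a separate conversation history
    per counterpart, as described in the simulation."""
    def __init__(self, name):
        self.name = name
        self.histories = defaultdict(list)  # counterpart name -> list of utterances

    def hear(self, speaker, text):
        self.histories[speaker].append(text)

def visit(customer, barista, line):
    """A customer visits a barista; each records the exchange
    in the history keyed to the other agent. (Hypothetical helper.)"""
    barista.hear(customer.name, line)
    customer.hear(barista.name, f"(reply from {barista.name})")

jack, claude, erica = Agent("Jack"), Agent("Claude"), Agent("Erica")

# Jack and Claude never exchange messages directly;
# Erica carries each barista's words to the other.
visit(erica, jack, "The barista at Bean There mentioned a new roast.")
visit(erica, claude, "Jack at The Daily Grind has a holiday blend.")
```

Note that `jack.histories` never gains a `"Claude"` key: the only path between the two baristas is whatever Erica chooses to repeat.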

The standout feature is a mental image pipeline: each agent’s internal monologue is captured as text and fed to Said Image Turbo, a fast, low-cost image generation model, to produce a visual representation of the agent’s thoughts at each turn — mimicking human visualization during reflection. Memory is split into two tiers: a sliding window of the five most recent mental images, and a larger conversational memory per agent relationship.
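The two-tier memory split maps naturally onto a bounded deque for the image window and a per-relationship dictionary for conversations. This is a minimal sketch under those assumptions; the field names are hypothetical:

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class AgentMemory:
    """Two-tier memory: a sliding window of the five most recent
    mental-image captions, plus an unbounded conversation log
    keyed by relationship."""
    mental_images: deque = field(default_factory=lambda: deque(maxlen=5))
    conversations: dict = field(default_factory=dict)

    def add_mental_image(self, caption):
        # deque(maxlen=5) silently evicts the oldest caption
        # once a sixth is appended
        self.mental_images.append(caption)

    def remember(self, partner, utterance):
        self.conversations.setdefault(partner, []).append(utterance)

mem = AgentMemory()
for turn in range(7):
    mem.add_mental_image(f"image from turn {turn}")
# only turns 2-6 remain in the window
```

The fixed-size window keeps the image-generation context cheap per turn, while the conversational tier grows with each relationship.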

The creator frames this as an early prototype, with plans to scale toward a headless simulation involving more agents, richer interaction graphs, and cross-agent event summarization. For developers interested in persistent agent identity, emergent social dynamics, or novel memory architectures, the video offers a concrete and reproducible starting point rather than pure theory.


📺 Source: All About AI · Published December 18, 2025
🏷️ Format: Hands-On Build
