Description:
All About AI’s “Shipmas Day 4” project demonstrates building a multi-LLM group chat interface where different AI models converse with each other and with users in a shared thread. The application is built with Claude Code (Opus 4.5) and routes all model calls through the OpenRouter API, which provides access to over 400 models through a single endpoint — eliminating the need for separate API keys per provider.
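The video doesn't show the exact client code, but OpenRouter exposes an OpenAI-compatible chat-completions endpoint, so a single request builder can serve every model by swapping the `model` slug. A minimal stdlib-only sketch (the API key and model slug below are placeholders, not from the video):

```python
import json
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(api_key: str, model: str, messages: list[dict]) -> urllib.request.Request:
    """Build an OpenRouter chat-completion request (OpenAI-compatible schema).

    The same payload shape works for every model behind the endpoint;
    only the `model` field changes per participant.
    """
    payload = {"model": model, "messages": messages}
    return urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",  # one key covers all providers
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Illustrative slug and key; send with urllib.request.urlopen(req) in real use.
req = build_request(
    "sk-or-...",
    "moonshotai/kimi-k2",
    [{"role": "user", "content": "Say hello to the group chat."}],
)
```

Because every provider sits behind the same schema, the group chat only needs a list of model slugs rather than per-provider SDKs and credentials.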
The demo uses four models simultaneously: Kimi K2, Gemini 3, Claude Haiku 4.5, and Grok Fast. Models respond to user messages, to @mentions directing a reply at a specific model, and autonomously to each other based on conversation context and random-chance triggers. The build process documents a meaningful debugging iteration: the initial version caused an uncontrolled feedback loop where models kept responding to each other indefinitely, leading the creator to add a stop button and a constraint preventing any model from responding twice in a row.
The finished application is tested with an “OpenAI declares code red” scenario in which Kimi K2 and Claude debate AI industry dynamics in real time. For developers building LLM comparison tools, multi-agent evaluation environments, or chatbot orchestration systems, the project provides a concise reference for OpenRouter integration and for handling the practical challenges of autonomous inter-model conversation.
📺 Source: All About AI · Published December 08, 2025
🏷️ Format: Hands-On Build
