Agent Experts: Finally, Agents That ACTUALLY Learn


Description:

IndyDevDan introduces “agent experts” — a concrete architectural pattern for building agents that automatically learn from their actions and accumulate expertise at runtime, without any manual updates to memory files or prompts. The key insight is that traditional agent memory approaches (static memory files, skills, prime prompts) all require human intervention to stay current, creating a bottleneck that prevents agents from genuinely improving over time.

The solution centers on a YAML-based “expertise file” that functions as the agent’s evolving mental model of a specific problem domain. Unlike documentation or source-of-truth files, this expertise file is explicitly not authoritative — the code always is — but rather a working mental model that lets the agent skip re-learning context with every new task. In a live demo, an agent reads its own expertise file, then cross-validates its understanding against a multi-agent orchestration codebase backed by PostgreSQL, correctly identifying six database tables, a parent-child cascade-delete pattern, and three-way communication flows between user, orchestrator, and sub-agents.
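To make the idea concrete, here is a minimal sketch of what such an expertise file might look like. The schema, field names, and values below are hypothetical illustrations based on the demo's described findings, not the actual file from the video:

```yaml
# Hypothetical expertise file: the agent's working mental model.
# Explicitly NOT a source of truth -- the code always is; the agent
# re-validates these notes against the codebase before acting on them.
domain: multi-agent-orchestration
status: working-model          # cross-check against code, do not trust blindly
database:
  engine: postgresql
  table_count: 6
  patterns:
    - parent-child cascade delete
communication:
  flows:
    - user -> orchestrator
    - orchestrator -> sub-agents
    - sub-agents -> orchestrator
open_questions:
  - which table owns the cascade-delete relationship?
```

The point of the format is that the agent can read it cheaply at the start of a task, then spend its effort verifying and updating the entries rather than rediscovering the whole system from scratch.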

The video also covers three supporting primitives IndyDevDan calls meta-agentics: meta-prompts (prompts that generate structured prompts), meta-agents (agents that build new agents), and meta-skills (skills that create new skills). Each is demonstrated live, including generating a new question prompt with Mermaid diagram support, a planner agent that reads and executes plan files directly, and a start-orchestrator skill for spinning up multi-agent applications. The combination forms the foundation for agents that compound their capabilities over time rather than resetting with each session.
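The meta-prompt primitive can be sketched as a generator whose output is itself a structured prompt. The template and function below are a hypothetical illustration of the pattern, not IndyDevDan's actual prompt:

```python
def meta_prompt(topic: str, output_format: str = "markdown") -> str:
    """Generate a structured question prompt for a given topic.

    A meta-prompt is a prompt (or prompt generator) whose output is
    itself a prompt. The section layout here is an assumed template.
    """
    return "\n".join([
        f"# Question prompt: {topic}",
        "",
        "## Instructions",
        f"Answer questions about {topic} using only verified context.",
        "",
        "## Output format",
        f"Respond in {output_format} and include a Mermaid diagram",
        "when describing flows or architecture.",
    ])

# Usage: generate a new question prompt with Mermaid diagram support.
prompt = meta_prompt("multi-agent orchestration")
print(prompt)
```

Meta-agents and meta-skills follow the same shape one level up: instead of emitting a prompt string, the generator emits an agent definition or a skill file.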


📺 Source: IndyDevDan · Published December 15, 2025
🏷️ Format: Tutorial Demo
