Description:
When Meta’s Director of Alignment Summer Yue told her OpenClaw agent to review an inbox and explicitly not act until instructed, the command held through weeks of testing — then vanished when the context window filled on a real inbox with thousands of messages. Compaction summarized the conversation, the stop instruction disappeared from the summary, and the agent began deleting emails autonomously. VelvetShark, a self-disclosed OpenClaw codebase maintainer, uses this incident as an entry point into a rigorous technical breakdown of how OpenClaw memory actually works.
The video maps four distinct memory layers: bootstrap workspace files (SOUL.md, AGENTS.md, MEMORY.md, TOOLS.md), session transcripts, the LLM context window, and the retrieval index. It then identifies three failure modes: instructions never written to files, lossy compaction summaries that drop constraints, and session pruning that trims tool outputs. The fixes: store durable rules in bootstrap files rather than in chat, verify that the memory-flush config leaves sufficient token headroom (the video works through the arithmetic: a 200K context window minus a 40K reserveTokensFloor minus a 4K softThreshold yields a 156K flush trigger), and mandate memory retrieval via an AGENTS.md rule. The /compact command is reframed as a proactive tool, used mid-session on the user's own terms, rather than something to avoid. All config values come with specific recommended numbers grounded in the author's two months of daily use.
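A minimal sketch of that headroom arithmetic, assuming the field names contextWindow, reserveTokensFloor, and softThreshold as quoted in the video; the surrounding config shape is illustrative, not OpenClaw's actual schema:

```typescript
// Hedged sketch of the flush-trigger arithmetic from the video.
// Field names follow the video's quoted config keys; the interface
// itself is an assumption, not a documented OpenClaw type.
interface MemoryFlushConfig {
  contextWindow: number;      // total model context, in tokens
  reserveTokensFloor: number; // tokens held back below the window
  softThreshold: number;      // extra margin before the hard floor
}

function flushTrigger(cfg: MemoryFlushConfig): number {
  // The flush fires once the transcript passes the window minus both reserves.
  return cfg.contextWindow - cfg.reserveTokensFloor - cfg.softThreshold;
}

// The video's recommended numbers: 200K - 40K - 4K = 156K tokens.
console.log(flushTrigger({
  contextWindow: 200_000,
  reserveTokensFloor: 40_000,
  softThreshold: 4_000,
})); // 156000
```

The point of checking this number is the failure mode in the opening anecdote: if the reserves are set too low, the flush (and the lossy summary it produces) arrives later and with less room, making it likelier that a standing constraint gets summarized away.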
📺 Source: VelvetShark · Published March 6, 2026
🏷️ Format: Deep Dive