OpenClaw’s Memory Sucks and the fix is simple — Dhravya Shah, Supermemory

Description:

Dhravya Shah, founder of Supermemory, joins the Latent Space podcast to explain why AI memory is far harder than plugging text into a vector database — and how his infrastructure-first approach addresses the gaps that trip up most implementations. Shah traces Supermemory’s evolution from a 2023 consumer bookmarking app to a foundational memory layer used by companies building AI agents, growing to 100,000 users while running on just $5 per month on Cloudflare and reaching 10,000 GitHub stars shortly after open-sourcing.

The conversation digs into why naive RAG-based memory breaks down at scale and in production. Shah identifies four capabilities that modern AI memory actually requires: knowledge updates (invalidating stale information rather than simply appending), temporal reasoning (giving agents a sense of how much time has passed and how context ages), selective forgetfulness (dropping irrelevant data rather than accumulating everything indefinitely), and persistent user profiles (a compact representation consulted on every LLM turn, not just at retrieval time).
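The four capabilities can be sketched as a toy data model (a minimal illustration only, not Supermemory's actual implementation; names like `MemoryRecord`, `update`, and `max_age` are invented here for clarity):

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class MemoryRecord:
    text: str
    created_at: datetime
    # Knowledge updates: stale facts are invalidated, not deleted or merely appended to.
    invalidated_at: Optional[datetime] = None

@dataclass
class MemoryStore:
    records: list = field(default_factory=list)
    # Persistent user profile: a compact dict consulted on every LLM turn,
    # separate from retrieval-time search.
    profile: dict = field(default_factory=dict)

    def update(self, key: str, text: str, now: datetime) -> None:
        # Invalidate any prior record for the same key instead of
        # accumulating a contradiction alongside it.
        for r in self.records:
            if r.text.startswith(key) and r.invalidated_at is None:
                r.invalidated_at = now
        self.records.append(MemoryRecord(f"{key}: {text}", now))

    def recall(self, now: datetime, max_age: timedelta) -> list:
        # Temporal reasoning: each result carries its age, so the agent
        # knows how much time has passed since the fact was written.
        # Selective forgetting: records past max_age are dropped.
        out = []
        for r in self.records:
            if r.invalidated_at is not None:
                continue
            age = now - r.created_at
            if age > max_age:
                continue
            out.append((r.text, age))
        return out
```

For example, updating `employer` twice leaves only the newer fact recallable, with its age attached; a naive append-only store would return both and let the model guess.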

Shah also addresses file-system-based memory approaches used by tools like Claude Code and Cursor, acknowledging their merits while pointing out key limitations: files grow unboundedly, lack update logic, and require slow agentic discovery to traverse. The discussion covers hook-based versus tool-use-based retrieval strategies, and why Cloudflare’s infrastructure gives Supermemory a meaningful scalability and cost edge. Anyone building or evaluating memory layers for production AI agents will find this one of the more technically grounded and honest assessments of the current state of the art.
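The hook-based versus tool-use-based distinction can be contrasted in a few lines (a hypothetical sketch; `retrieve`, `llm`, and `Reply` are stand-ins for whatever search and chat-completion calls an agent actually uses, not any real API):

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Reply:
    text: str
    tool_query: Optional[str] = None  # set when the model asks to search memory

def hook_based_turn(user_msg: str,
                    retrieve: Callable[[str], str],
                    llm: Callable[[str], Reply]) -> Reply:
    # Hook-based: retrieval runs unconditionally on every turn, and the
    # results are injected before the model ever sees the prompt.
    context = retrieve(user_msg)
    return llm(f"Relevant memory:\n{context}\n\nUser: {user_msg}")

def tool_based_turn(user_msg: str,
                    retrieve: Callable[[str], str],
                    llm: Callable[[str], Reply]) -> Reply:
    # Tool-use-based: the model decides whether to search, at the cost of
    # an extra round trip whenever it does.
    reply = llm(user_msg)
    if reply.tool_query is not None:
        context = retrieve(reply.tool_query)
        reply = llm(f"Tool result:\n{context}\n\nUser: {user_msg}")
    return reply
```

The trade-off is visible in the control flow: hooks guarantee memory is present but spend a retrieval on every turn, while tool use saves retrievals the model deems unnecessary but adds latency when it does search.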


📺 Source: Latent Space · Published March 09, 2026
🏷️ Format: Interview