Lossless-Claw: Infinite Memory for Your OpenClaw – Run Locally with Ollama and GLM


Description:

Fahd Mirza demonstrates Lossless-Claw, a community plugin for OpenClaw that solves a fundamental problem in long agentic sessions: the silent loss of context as the model's window fills up. Standard OpenClaw handles context limits by sliding the window forward, discarding earlier messages, including architecture decisions, debugging history, and user constraints. Lossless-Claw replaces this default compaction with a DAG (directed acyclic graph) summarization system backed by a SQLite database in which nothing is ever deleted.
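To make the problem concrete, here is a minimal sketch of the sliding-window compaction the plugin replaces. The message shape, token counts, and function name are illustrative assumptions, not OpenClaw's actual API:

```typescript
// Sketch of default sliding-window compaction: keep only the most recent
// messages that fit the token budget; everything older is silently dropped.
interface Message {
  role: string;
  text: string;
  tokens: number; // assumed pre-counted for simplicity
}

function slideWindow(history: Message[], budget: number): Message[] {
  const kept: Message[] = [];
  let used = 0;
  // Walk backwards from the newest message until the budget is exhausted.
  for (let i = history.length - 1; i >= 0; i--) {
    if (used + history[i].tokens > budget) break;
    kept.unshift(history[i]);
    used += history[i].tokens;
  }
  return kept; // earlier messages are gone for good
}
```

Anything that falls off the front of the window, such as an early architecture decision, is unrecoverable, which is exactly the failure mode the DAG approach avoids.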

The plugin organizes conversation history across three layers: raw messages compact into depth-0 nodes capturing exact decisions and technical details; those accumulate into depth-1 arc summaries, which eventually condense into a depth-2 narrative of the full session. The agent can call an expand function on any node and trace it all the way back to the original raw messages on demand. This is what makes the system "lossless": unlike traditional flat-summary compaction, every message remains retrievable.
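The three layers and the expand operation can be sketched as a small DAG walk. The real plugin persists nodes in SQLite; an in-memory Map stands in here, and all names (`DagNode`, `expand`) are illustrative assumptions rather than the plugin's actual schema:

```typescript
// Minimal sketch of the three-layer summary DAG.
interface DagNode {
  id: string;
  depth: number;      // 0 = detail node, 1 = arc summary, 2 = session narrative
  summary: string;
  children: string[]; // ids of lower-depth nodes or raw messages
}

const rawMessages = new Map<string, string>();
const nodes = new Map<string, DagNode>();

// Example session: two raw messages roll up through depth 0 into depth 1.
rawMessages.set("m1", "use SQLite for persistence");
rawMessages.set("m2", "never delete old rows");
nodes.set("d0", { id: "d0", depth: 0, summary: "storage decisions", children: ["m1", "m2"] });
nodes.set("d1", { id: "d1", depth: 1, summary: "persistence arc", children: ["d0"] });

// expand(): trace any node back down to the raw messages it summarizes.
// Because nothing is ever deleted, every leaf stays reachable -- "lossless".
function expand(id: string): string[] {
  const node = nodes.get(id);
  if (!node) {
    const raw = rawMessages.get(id);
    return raw !== undefined ? [raw] : [];
  }
  return node.children.flatMap(expand);
}
```

Calling `expand("d1")` here recovers both original messages verbatim, which is the property a flat rolling summary cannot provide.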

Mirza walks through the full installation on Ubuntu with an Nvidia RTX 6000 and Ollama running a GLM 4.7 flash model as the local LLM backend. The plugin installs as a TypeScript module that registers itself as a context engine within OpenClaw’s plugin architecture, replacing built-in behavior without modifying core code. The video also discusses known drawbacks of the DAG approach, making it a balanced resource for developers running extended agentic workflows who need reliable long-term memory.
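The registration pattern described above, a module that swaps in its own context engine without touching core code, might look roughly like the following. OpenClaw's real plugin interface is not shown in the video description, so every name here (`ContextEngine`, `registerContextEngine`) is a hypothetical stand-in:

```typescript
// Hypothetical sketch of a plugin registering as a context engine.
interface ContextEngine {
  name: string;
  // Compact a history down to at most `budget` entries.
  compact(history: string[], budget: number): string[];
}

const engines = new Map<string, ContextEngine>();

function registerContextEngine(engine: ContextEngine): void {
  // Registering under a name replaces whatever engine held it before,
  // which is how a plugin can override built-in behavior without core edits.
  engines.set(engine.name, engine);
}

registerContextEngine({
  name: "lossless-claw",
  compact: (history, budget) =>
    history.length > budget
      ? [`[summary of ${history.length - budget} older messages]`, ...history.slice(-budget)]
      : history,
});
```

The key design point is that the host exposes a narrow interface and the plugin supplies a conforming implementation, so the summarization strategy can change without any modification to OpenClaw itself.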


📺 Source: Fahd Mirza · Published March 18, 2026
🏷️ Format: Tutorial Demo
