Google’s New Breakthrough Brings AGI Even Closer – Titans and Miras

Description:

Google published two research papers—Titans and Miras—targeting one of the most persistent limitations in large language models: the inability to maintain and actively update long-term memory during inference. TheAIGRID provides a structured walkthrough of both papers, explaining the architectural decisions and the theoretical framework that connects them.

Titans introduces the MAC (Memory as Context) architecture, which organizes memory into three layers. The long-term memory module is a multilayer perceptron, a small neural network inside the larger model, rather than a simple matrix of stored vectors, so it actively learns patterns and associations during inference instead of passively recording input. A “surprise metric” drives the updates: the system prioritizes novel information and deprioritizes routine input, analogous to how human memory filters experience. This layer works in tandem with a standard attention-based short-term layer and a fixed persistent memory learned during training. Google reports that the architecture scales to context windows beyond two million tokens while maintaining accuracy across long documents.
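To make that mechanism concrete, here is a minimal sketch, assuming a toy PyTorch setup, of how an MLP memory that keeps learning at inference with a gradient-based surprise signal could look. The class name, shapes, and hyperparameters (lr, momentum, decay) are illustrative assumptions, not the paper's exact configuration.

```python
# Hypothetical sketch of a Titans-style long-term memory module.
import torch
import torch.nn as nn

class NeuralMemory(nn.Module):
    """Long-term memory as a small MLP that keeps learning during inference."""

    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(dim, hidden), nn.SiLU(), nn.Linear(hidden, dim)
        )

    def forward(self, keys: torch.Tensor) -> torch.Tensor:
        # Recall: map a key to the value the memory has associated with it.
        return self.mlp(keys)

    def update(self, keys, values, lr=1e-2, momentum=0.9, decay=1e-2, state=None):
        # "Surprise" = gradient of the recall loss ||M(k) - v||^2.
        # Novel pairs the memory predicts poorly yield large gradients
        # (big updates); routine input barely changes the weights.
        loss = (self(keys) - values).pow(2).mean()
        grads = torch.autograd.grad(loss, self.mlp.parameters())
        if state is None:
            state = [torch.zeros_like(p) for p in self.mlp.parameters()]
        with torch.no_grad():
            for p, g, s in zip(self.mlp.parameters(), grads, state):
                s.mul_(momentum).sub_(lr * g)   # momentum carries "past surprise"
                p.mul_(1.0 - decay).add_(s)     # weight decay acts as forgetting
        return loss.item(), state

# Toy usage: stream chunks through the memory at inference time.
mem = NeuralMemory(dim=32)
state = None
for _ in range(3):
    k, v = torch.randn(8, 32), torch.randn(8, 32)
    surprise, state = mem.update(k, v, state=state)
```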

Miras is the theoretical framework underlying Titans. It argues that every major sequence-model architecture, from Transformers to RNNs to state-space models, performs the same four operations: choosing a memory structure, applying an attentional bias, applying a retention gate, and running a memory-update algorithm. Most current models implicitly use mean squared error for both the attentional bias and the retention gate; Miras opens the design space to alternatives, and the paper introduces three experimental models (including Yaad and Moneta) that swap in Huber loss and stricter mathematical constraints.
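As a rough illustration of that four-part decomposition, the sketch below uses a plain matrix as the memory structure and treats the attentional bias as a pluggable loss, swapping mean squared error for Huber loss. The function names and hyperparameters are assumptions for illustration, not code from the paper.

```python
# Hypothetical sketch of the Miras view of a single memory update.
import torch
import torch.nn.functional as F

def mse_bias(pred, target):
    # The attentional bias most current models implicitly use.
    return F.mse_loss(pred, target)

def huber_bias(pred, target, delta=1.0):
    # An alternative bias explored by the Miras variants: quadratic for
    # small errors, linear for large ones, so outliers dominate less.
    return F.huber_loss(pred, target, delta=delta)

def memory_step(M, k, v, bias_fn=mse_bias, retain=0.99, lr=0.1):
    """One online update combining the four Miras choices:
    memory structure  -> M, here a dim x dim matrix
    attentional bias  -> bias_fn, how recall error is measured
    retention gate    -> retain, how much old memory survives
    update algorithm  -> one step of plain gradient descent
    """
    M = M.detach().requires_grad_(True)
    loss = bias_fn(k @ M, v)                 # how badly memory recalls v from k
    (grad,) = torch.autograd.grad(loss, M)
    return retain * M.detach() - lr * grad  # decay old memory, absorb new signal

# Toy usage: the same update rule with two different attentional biases.
M = torch.zeros(16, 16)
k, v = torch.randn(4, 16), torch.randn(4, 16)
M = memory_step(M, k, v, bias_fn=mse_bias)
M = memory_step(M, k, v, bias_fn=huber_bias)
```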


📺 Source: TheAIGRID · Published December 05, 2025
🏷️ Format: Deep Dive
