Description:
Jacob Lauritzen, CTO of Legora — a collaborative AI workspace serving over 1,000 law firms across 50+ markets — delivers a conference talk on the evolving challenges of building production-grade vertical AI agents. His central argument: as AI makes execution cheap, the real bottleneck has shifted to planning and reviewing work, and chat-based interfaces were never designed for long-running, multi-step agentic workflows.
Lauritzen introduces the “verifier’s rule” — the principle that AI excels at tasks where success is easy to verify but struggles in domains like litigation strategy where no objective ground truth exists. He maps this spectrum across legal and coding tasks to show where agents can run autonomously and where human judgment remains irreplaceable. The talk also addresses the practical problem of context rot, where agents working over long horizons lose coherence as their context window fills.
The second half covers concrete mechanisms for improving agent-human collaboration: using proxy verification (comparing new contracts against known-good examples), decomposing hard tasks into verifiable sub-tasks, adding guardrails to limit agent scope, and using an upfront planning phase to align on approach before execution begins. Legora’s framing — treating agent work as a DAG with human checkpoints at high-stakes nodes — offers a grounded model for engineers building agents in regulated, high-stakes industries.
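The DAG-with-checkpoints framing can be made concrete with a small sketch. This is an illustrative toy, not Legora's actual implementation: agent work is modeled as dependency-ordered nodes, and high-stakes nodes (marked `needs_review`) pause for human approval before the run proceeds. All names here (`Node`, `run`, `approve`) are hypothetical.

```python
# Illustrative sketch only: agent work as a DAG with human checkpoints
# at high-stakes nodes (not Legora's implementation).
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    needs_review: bool = False          # human checkpoint at high-stakes steps
    deps: list = field(default_factory=list)

def run(dag, approve):
    """Execute nodes in dependency order; gate reviewed nodes on `approve`."""
    done, order = set(), []

    def visit(node):
        if node.name in done:
            return
        for dep in node.deps:           # resolve prerequisites first
            visit(dep)
        if node.needs_review and not approve(node.name):
            raise RuntimeError(f"human rejected step: {node.name}")
        done.add(node.name)
        order.append(node.name)

    for node in dag:
        visit(node)
    return order

# Example: drafting and proxy verification run autonomously;
# the final high-stakes step requires human sign-off.
draft = Node("draft_contract")
check = Node("proxy_verify", deps=[draft])   # compare against known-good examples
file_ = Node("file_with_court", needs_review=True, deps=[check])

print(run([draft, check, file_], approve=lambda name: True))
# → ['draft_contract', 'proxy_verify', 'file_with_court']
```

Swapping in `approve=lambda name: False` shows the other half of the model: the pipeline halts at the checkpoint instead of letting the agent proceed past a step that human judgment has not cleared.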
📺 Source: AI Engineer · Published April 22, 2026
🏷️ Format: Deep Dive