Description:
Nate Herk breaks down four distinct retrieval methods for RAG agents, explaining when each approach outperforms the others. The video opens with a frank critique of chunk-based vector retrieval—the default choice for most builders—showing concretely why splitting documents into small pieces causes agents to lose broader context, leading to wrong answers when summarizing full documents or performing aggregations on tabular sales data.
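The aggregation failure described above can be sketched in a few lines of Python (hypothetical data, not from the video): each chunk only sees a slice of the table, so a top-k retriever hands the model a partial view and any sum computed over it is wrong.

```python
# Why chunked retrieval breaks aggregation questions: no single
# retrieved chunk contains the whole table. Data is illustrative.

sales_rows = [("Jan", 120), ("Feb", 95), ("Mar", 210), ("Apr", 80)]

# Naive splitter: two rows per chunk.
chunks = [sales_rows[i:i + 2] for i in range(0, len(sales_rows), 2)]

# A top-1 retriever returns just one chunk for the query
# "what are total sales?".
retrieved = chunks[0]

partial_total = sum(amount for _, amount in retrieved)  # sum over one chunk
true_total = sum(amount for _, amount in sales_rows)    # sum over the table

print(partial_total, true_total)  # 215 505
```

The gap between the two numbers is exactly the failure mode the video demonstrates: the model can only aggregate what retrieval gives it.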
The four methods covered are: filter-based retrieval (explicit equality checks on structured data inside n8n), SQL query agents (connecting to a Postgres database on Supabase to push aggregations and sorting into the query itself), full-context retrieval, and traditional vector database search. For each approach, Herk runs a live n8n workflow and executes real queries against sample datasets of 20 to 50 rows, showing both correct results and the edge cases where each method breaks down.
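The SQL-agent approach can be sketched with Python's built-in sqlite3 standing in for the video's Postgres-on-Supabase setup (table name and rows are illustrative): the agent emits one aggregate query, the database does the math and sorting, and only the small result ever reaches the LLM.

```python
import sqlite3

# Sketch of pushing aggregation into the query itself.
# sqlite3 substitutes for Postgres; schema and data are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (rep TEXT, region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("Ann", "East", 500.0), ("Ben", "West", 750.0),
     ("Ann", "East", 300.0), ("Cat", "West", 200.0)],
)

# "Who sold the most?" becomes a single GROUP BY query, so the model
# never performs arithmetic over raw rows.
query = """
    SELECT rep, SUM(amount) AS total
    FROM sales
    GROUP BY rep
    ORDER BY total DESC
    LIMIT 1
"""
top_rep, top_total = conn.execute(query).fetchone()
print(top_rep, top_total)  # Ann 800.0
```

Because the database returns one row instead of fifty, the answer is exact and cheap regardless of table size.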
Key takeaways include the importance of supplying agents with valid filter values and schema details upfront—since agents won’t discover these on their own—and the advantage of SQL agents for math-heavy questions because the database handles computation before the LLM ever sees the result. The video is practical and well-suited to anyone already building AI automations in n8n who wants to improve retrieval accuracy on structured or semi-structured data.
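One way to apply the "supply valid filter values upfront" takeaway is to generate that part of the system prompt from the data itself, so the agent filters on exact values rather than guessing. A minimal sketch with invented field names:

```python
# Build a system-prompt fragment listing each filterable field and its
# exact allowed values. Rows and field names are illustrative.

rows = [
    {"region": "East", "status": "closed", "amount": 500},
    {"region": "West", "status": "open", "amount": 750},
    {"region": "East", "status": "open", "amount": 300},
]

filterable = ["region", "status"]

def build_filter_hint(rows, filterable):
    """Return a prompt fragment enumerating valid filter values."""
    lines = []
    for field in filterable:
        values = sorted({r[field] for r in rows})
        lines.append(f"- {field}: one of {values}")
    return "Valid filters (use these exact values):\n" + "\n".join(lines)

hint = build_filter_hint(rows, filterable)
print(hint)
```

Regenerating this fragment whenever the data changes keeps the agent's filters aligned with what the dataset actually contains.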
📺 Source: Nate Herk · Published January 05, 2026
🏷️ Format: Tutorial Demo
