Agent Skills: Code Beats Markdown (Here’s Why)

Description:

Sam Witteveen breaks down why agent skills—also known as Claude Skills—have become a foundational pattern for reliable AI agent performance, and specifically why embedding executable code scripts inside skill packages dramatically outperforms pure markdown instruction sets. The video explains how skills work architecturally: a SKILL.md metadata file handles progressive disclosure so models only load full instructions when needed, while a scripts directory contains runnable code that agents can execute in sandboxes like those available in Claude Code.
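The package layout described above can be sketched as follows. This is an illustration, not a copy of the video's files; the frontmatter fields shown match the commonly documented SKILL.md convention, where a lightweight `name` and `description` are always visible to the model and the full instruction body loads only when the skill is invoked:

```
web-scraper-skill/
├── SKILL.md          # metadata + full instructions, loaded progressively
└── scripts/
    └── scrape.py     # runnable code the agent executes in a sandbox

# SKILL.md starts with minimal frontmatter the model always sees:
---
name: web-scraper
description: Fetch and clean web pages without runaway token usage
---
# Everything below the frontmatter loads only when the skill is triggered…
```

Because only the frontmatter is loaded up front, dozens of skills can be installed without each one consuming context on every turn.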

The practical demonstration focuses on web scraping skills, where Witteveen identifies two expensive mistakes. First, using the Claude Code web fetch tool on a raw HTML page returns approximately 34,000 characters and 8,000+ tokens for a single page like Hacker News; filtering out script, style, nav, and footer tags drops this below 1,000 tokens—roughly a 90% reduction. Second, having the model re-derive CSS class selectors on every run wastes tokens that could be eliminated by hard-coding the structure once, using Claude itself to extract the selectors in a one-time setup step.
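The first fix, stripping boilerplate tags before handing HTML to the model, can be sketched with nothing but the standard library's `html.parser`. The video uses its own script inside the skill; the tag set below matches the ones named above, while the class and function names are illustrative:

```python
from html.parser import HTMLParser

# Tags whose entire contents are dropped before the model sees the page.
SKIP_TAGS = {"script", "style", "nav", "footer"}

class TextExtractor(HTMLParser):
    """Collect visible text, skipping boilerplate tags entirely."""

    def __init__(self):
        super().__init__()
        self.depth = 0      # nesting depth inside skipped tags
        self.chunks = []    # surviving text fragments

    def handle_starttag(self, tag, attrs):
        if tag in SKIP_TAGS:
            self.depth += 1

    def handle_endtag(self, tag):
        if tag in SKIP_TAGS and self.depth:
            self.depth -= 1

    def handle_data(self, data):
        # Keep text only when we are outside every skipped tag.
        if self.depth == 0 and data.strip():
            self.chunks.append(data.strip())

def clean_text(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    return "\n".join(parser.chunks)

html = """<html><head><style>body{color:red}</style></head>
<body><nav>Home | About</nav><p>Top story: example item</p>
<script>track()</script><footer>(c) 2026</footer></body></html>"""

print(clean_text(html))  # only the <p> text survives
```

On a real page like Hacker News, where scripts, styles, and chrome dominate the raw HTML, this kind of filtering is what produces the roughly 90% token reduction described above.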
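The second fix, deriving selectors once and hard-coding them, amounts to persisting the structure to a small file at setup time and reading it on every subsequent run. A minimal sketch, in which the cache filename and the example selectors are assumptions for illustration (the video has Claude extract the real ones in a one-time step):

```python
import json
import pathlib

# Hypothetical cache file written once during skill setup.
SELECTOR_CACHE = pathlib.Path("selectors.json")

def save_selectors(cache: dict) -> None:
    """One-time setup step: persist the selectors the model derived."""
    SELECTOR_CACHE.write_text(json.dumps(cache, indent=2))

def load_selectors(site: str) -> dict:
    """Every later run reads hard-coded structure instead of asking the
    model to re-derive CSS selectors from raw HTML."""
    if not SELECTOR_CACHE.exists():
        return {}
    return json.loads(SELECTOR_CACHE.read_text()).get(site, {})

# Example selectors for illustration only.
save_selectors({
    "news.ycombinator.com": {
        "title": "span.titleline > a",
        "score": "span.score",
    }
})

print(load_selectors("news.ycombinator.com")["title"])
```

The design point is that the expensive, model-driven step runs once; afterwards the scraper is plain deterministic code, so the per-run token cost of re-analyzing page structure drops to zero.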

Witteveen also notes that agent skills have become an open standard adopted by OpenAI and DeepMind’s Gemini CLI team, with emerging marketplaces like skills.sh and skillsmp.com. DataImpulse is mentioned as a proxy solution for scraping from specific IP locations. The video is essential viewing for anyone building agent harnesses or scraping pipelines who wants to avoid runaway token costs at scale.


📺 Source: Sam Witteveen · Published March 27, 2026
🏷️ Format: Deep Dive
