Description:
Web Dev Cody makes the case that most AI coding agents — including Claude Code and Cursor — represent a genuine security risk when run with default settings, and demonstrates how Docker sandboxing addresses the core vulnerabilities. The discussion is grounded in his own tool, Automaker, which wraps Claude Code, Cursor, and the Anthropic Agent SDK.
The central concern is prompt injection: when an agent operates with broad file system access (such as Claude Code's `--dangerously-skip-permissions` flag), a single malicious instruction embedded in a pull request, dependency script, or document could direct the agent to delete files, install a rootkit, or begin keylogging. Cody notes he has received pull requests with up to 180 file changes that he would never run locally without isolation, and he frames autonomous agentic operation, where developers step away and let agents work unattended, as the scenario that makes this risk acute.
The mitigation Cody demonstrates is running the Automaker API inside a Docker container with a tightly scoped volume mount that limits the agent's file system access to a single workspace directory. He walks through the Docker Compose configuration and local override pattern in detail, explaining how the Electron frontend connects to the containerized API without itself needing broad permissions. He frames this Docker-first architecture as a necessary shift for the entire agentic coding ecosystem, not just Automaker, noting that tools like OpenCode and Codex are also planned for integration.
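The compose-plus-override pattern described above can be sketched roughly as follows. This is a minimal illustration, not Automaker's actual configuration: the service name, image, port, and paths are assumptions. The key idea is that the only bind mount is a single workspace directory, so even an agent running with permissions skipped inside the container cannot reach the rest of the host file system.

```yaml
# docker-compose.yml (illustrative sketch, not Automaker's real config)
services:
  automaker-api:
    build: .
    ports:
      - "3000:3000"        # the Electron frontend talks to the API here
    volumes:
      # Tightly scoped mount: the agent sees ONLY this one directory.
      - ./workspace:/workspace
    environment:
      - WORKSPACE_DIR=/workspace

# docker-compose.override.yml (kept local / gitignored; Compose merges
# it automatically on `docker compose up`)
# services:
#   automaker-api:
#     environment:
#       - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
```

Compose merges `docker-compose.override.yml` into the base file by default, which is what makes the pattern useful: machine-specific secrets and paths stay out of version control while the scoped-mount guarantee lives in the shared base file.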
📺 Source: Web Dev Cody · Published January 05, 2026
🏷️ Format: Deep Dive