Agentic coding has some major issues…

Description:

Web Dev Cody lays out the real costs of agentic coding from the perspective of an open-source maintainer, using his project Automaker as a case study. The core problem: AI tools like Claude Code and Cursor let contributors ship pull requests with 45 to 185 file changes in minutes, but human review hasn't sped up at all, creating a review bottleneck that falls entirely on core maintainers who are serious about code quality.

The most technically valuable segment covers a drive-by attack vulnerability discovered in Automaker through a Claude Code security audit. Because Automaker exposes its settings API on localhost port 3008 without authentication, a malicious website visited by a user running Automaker can POST a crafted MCP server configuration, which the Electron app then executes as an arbitrary local command. Cody walks through exactly how MCP server configs work (using Context7 as an example), why the attack surface exists, and the fix: generating an API key known only to the UI layer so external sites cannot reach the endpoint.
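The described fix can be sketched roughly as follows. This is a minimal illustration of the idea (a per-session secret shared only with the app's own UI), not Automaker's actual code; every name here is an assumption:

```typescript
// Sketch of the mitigation described in the video: the Electron main
// process generates a random API key at startup and hands it only to
// its bundled UI. Requests to the localhost settings API must present
// the key; a drive-by web page can still POST to localhost:3008, but
// it has no way to read the key, so its request is rejected.
// `API_KEY` and `requireApiKey` are illustrative names.
import { randomBytes, timingSafeEqual } from "node:crypto";

// Generated once per app launch; never persisted, never served to
// external origins.
export const API_KEY = randomBytes(32).toString("hex");

// Returns true only for the exact per-session key. timingSafeEqual
// needs equal-length buffers and avoids leaking bytes via timing.
export function requireApiKey(headerValue: string | undefined): boolean {
  if (!headerValue) return false;
  const presented = Buffer.from(headerValue);
  const expected = Buffer.from(API_KEY);
  return (
    presented.length === expected.length &&
    timingSafeEqual(presented, expected)
  );
}
```

The HTTP handler for the settings endpoint would call `requireApiKey(req.headers["x-api-key"])` (header name assumed) and return 401 on failure, closing the drive-by path while leaving the UI unaffected.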

His security review workflow, piping git diff into Claude Code with an "ultrathink" prompt targeting MCP-related changes, is shown in detail, including follow-up questioning to verify the model isn't hallucinating the vulnerability. The broader takeaway is that LLM-generated code requires systematic security auditing, not just functional review, and that tools like CodeRabbit and Gemini Code Assist still miss real issues that warrant manual Claude Code prompting.
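A pipeline of that shape might look like the sketch below. The diff range and prompt wording are assumptions, not Cody's verbatim command; `claude -p` is Claude Code's non-interactive print mode, which reads piped stdin as context:

```shell
# Review only what a PR changed: diff against the base branch and pipe
# the patch into Claude Code non-interactively. Prompt is illustrative.
git diff main...HEAD | claude -p "ultrathink: audit this diff for \
security issues, especially anything touching MCP server configs or \
the localhost settings API. For each finding, cite the exact changed \
lines and explain how an attacker would reach that code path."
```

Per the workflow in the video, the output is then interrogated with follow-up prompts (e.g. asking the model to spell out the exact request an attacker would send) to confirm a reported vulnerability is real rather than hallucinated.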


📺 Source: Web Dev Cody · Published December 29, 2025
🏷️ Format: Workflow Case Study
