Descriptions:
NetworkChuck teams up with Jason Haddix — who wrote the AI pentesting methodology and runs the CANAM security research team — to demonstrate what real-world AI hacking looks like beyond simple prompt-injection party tricks. The video is structured around CANAM’s open-source AI Security Resource Hub on GitHub, which provides 23 active labs covering prompt injection, agent manipulation, and adversarial inputs against LLM-enabled applications.
The centerpiece is Agent Breaker, a platform that simulates realistic business applications — portfolio advisors, trip planners, corporate messaging systems — rather than toy examples. Haddix demonstrates live exploitation attempts against these apps, highlighting a key difference from traditional CTFs: LLM unpredictability means testers must iterate through multiple variations rather than applying a single payload. The video also walks through the Auto Parts CTF, built directly from a real client pentest engagement, which can be self-hosted via Docker and contains five flags spanning prompt injection and system prompt extraction.
Practical career context is woven throughout: Anthropic, OpenAI, and Google all run public bug bounty programs for model vulnerabilities, and AI security competitions offer cash prizes. Haddix and NetworkChuck also touch on the offensive side — AI-generated phishing emails, deepfake voice calls, and synthetic texts that have rendered traditional “look for typos” detection advice obsolete — framing AI security as both an offensive and defensive discipline that is rapidly growing in demand.
📺 Source: NetworkChuck · Published February 20, 2026
🏷️ Format: Tutorial Demo







