Description:
Craig Hewitt demonstrates a three-command terminal setup that connects OpenClaw’s autonomous agent framework to OpenAI’s GPT 5.3 Codex model using a standard $20/month ChatGPT subscription, so frontier model access doesn’t require paying separate per-token API costs. The tutorial covers the exact commands needed (openclaw onboard, openclaw models set, and openclaw model status) and explains what each does in the context of OpenClaw’s model configuration layer.
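The three commands from the tutorial, sketched as a terminal session. Only the command names come from the description; the comments describe their likely roles, and any arguments or interactive prompts are assumptions, not confirmed by the video:

```shell
# One-time setup: authenticate OpenClaw against your ChatGPT subscription
# (assumed to be an interactive login flow)
openclaw onboard

# Select the model OpenClaw routes requests to, e.g. GPT 5.3 Codex
# (whether this takes an argument or prompts interactively is not shown in the source)
openclaw models set

# Confirm which model is currently active in the configuration layer
openclaw model status
```

Run them in this order: onboarding must complete before a model can be set, and the status check is a quick way to verify the subscription-backed model is actually the one the agent will use.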
Hewitt makes the case for using frontier models rather than cheaper open-source alternatives on practical grounds: stronger models like GPT 5.3 Codex and Claude Opus 4.6 produce better outputs in fewer steps, reducing total token consumption despite higher per-token pricing, and are less susceptible to prompt injection attacks targeting always-on agent clusters. He also shares his own model routing preferences—Codex for coding tasks, Claude Code with Opus 4.6 for marketing and operations—and compares alternatives including Kimi K2.5 and MiniMax M2.5 via OpenRouter.
A key nuance the video addresses: while OpenAI has publicly confirmed that using ChatGPT plans to power OpenClaw is acceptable, Anthropic has not issued equivalent guidance, meaning users who connect Claude plans to OpenClaw risk account cancellation. Hewitt credits a tweet thread from Andrew Warner and OpenClaw’s original creator Peter Steinberger for surfacing this policy distinction, making it an important consideration for anyone building on Anthropic’s ecosystem.
📺 Source: Craig Hewitt · Published February 23, 2026
🏷️ Format: Tutorial Demo
