ComfyUI + Google Anti Gravity: Connect ComfyUI to LLMs: Build Your Own Image Agent!

Description:

This Veteran AI tutorial shows how to connect ComfyUI directly to a large language model using Google Anti Gravity, effectively turning image generation into a natural-language-driven skill. The end result is an LLM agent that can generate multiple cover images at different aspect ratios (16:9, 4:3, 3:4) from a single conversational prompt, with no manual workflow editing required.
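The multi-aspect-ratio part boils down to turning a ratio like 16:9 into concrete latent dimensions. A minimal sketch of how an agent might do this, assuming a ~1 MP pixel budget and a 64-pixel step size (a common constraint for SDXL-class models; both numbers are assumptions, not values stated in the video):

```python
import math

def dims_for_ratio(ratio_w: int, ratio_h: int,
                   target_pixels: int = 1024 * 1024,
                   step: int = 64) -> tuple[int, int]:
    """Pick a width/height near a pixel budget, snapped to the model's step size."""
    # Ideal width for the requested aspect ratio at the pixel budget.
    w = math.sqrt(target_pixels * ratio_w / ratio_h)
    h = w * ratio_h / ratio_w
    # Snap both sides to multiples of `step`, never below one step.
    snap = lambda v: max(step, round(v / step) * step)
    return snap(w), snap(h)

for r in [(16, 9), (4, 3), (3, 4)]:
    print(r, dims_for_ratio(*r))
```

With these defaults, 16:9 resolves to 1344×768 and 4:3 to 1152×896; the agent can then feed each pair into the workflow's resolution node.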

The integration uses RunningHub’s cloud ComfyUI as the backend, accessed via its public API. A key technique covered is how to reliably feed API documentation to the LLM: rather than pasting raw HTML, the presenter points to the llm.txt file available at the bottom of RunningHub’s documentation pages, which provides clean Markdown-formatted links the model can parse accurately. The agent is then given the RunningHub API key, the local Python interpreter path, and a workflow ID (extracted from the URL of any RunningHub workflow), after which it can autonomously download the workflow JSON, analyze which nodes control resolution and prompt inputs, and expose those as natural language parameters.
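The "analyze which nodes control resolution and prompt inputs" step can be sketched against ComfyUI's standard API-format JSON (`{node_id: {"class_type": ..., "inputs": {...}}}`). `EmptyLatentImage` and `CLIPTextEncode` are real stock ComfyUI node types, but the tiny workflow template below is a made-up example, not RunningHub's actual payload:

```python
import json

def apply_params(workflow: dict, prompt: str, width: int, height: int) -> dict:
    """Return a copy of a ComfyUI API-format workflow with the prompt and
    resolution inputs overridden, leaving the template untouched."""
    wf = json.loads(json.dumps(workflow))  # cheap deep copy
    for node in wf.values():
        if node.get("class_type") == "EmptyLatentImage":
            node["inputs"]["width"] = width
            node["inputs"]["height"] = height
        elif node.get("class_type") == "CLIPTextEncode":
            node["inputs"]["text"] = prompt
    return wf

# Hypothetical two-node template standing in for a downloaded workflow JSON.
template = {
    "1": {"class_type": "CLIPTextEncode",
          "inputs": {"text": ""}},
    "2": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 512, "height": 512, "batch_size": 1}},
}

cover = apply_params(template, "a retro sci-fi magazine cover", 1344, 768)
```

In the video's setup the agent performs this mapping itself after reading the workflow JSON; the function above just makes the resulting "natural language parameters → node inputs" translation concrete.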

The video also addresses a common pitfall: if the LLM agent starts web-searching for API documentation instead of reading the supplied materials, it should be stopped immediately and redirected. The technique generalizes to any cloud ComfyUI workflow, making it a practical foundation for building AI-powered creative tools that non-technical users can operate through plain language.


πŸ“Ί Source: Veteran AI Β· Published February 09, 2026
🏷️ Format: Hands On Build
