Run LTX 2.3 Video Generation AI Model Locally with ComfyUI – Easy Guide


Description:

Fahd Mirza covers the installation and local testing of LTX 2.3, the latest video generation model from Lightricks, running through ComfyUI on a self-hosted GPU. The tutorial walks through the downloads step by step: the 22-billion-parameter LTX 2.3 safetensors checkpoint, a dedicated latent upscaler, and the JMA text encoder, each saved into its specific ComfyUI model subdirectory. A quantized version is noted for lower-VRAM systems.
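The video does not list the exact destination folders here, but ComfyUI's conventional model layout suggests the placement sketched below. The subfolder names (`checkpoints`, `upscale_models`, `text_encoders`) are standard ComfyUI conventions, and the install path and filenames are assumptions; check the video for the exact paths.

```python
# Sketch of the expected ComfyUI model folder layout for the three downloads.
# Folder names follow common ComfyUI conventions; verify against the tutorial.
from pathlib import Path

comfy = Path.home() / "ComfyUI"  # assumed install location
targets = {
    "LTX 2.3 checkpoint (.safetensors)": comfy / "models" / "checkpoints",
    "latent upscaler":                   comfy / "models" / "upscale_models",
    "text encoder":                      comfy / "models" / "text_encoders",
}
for name, folder in targets.items():
    folder.mkdir(parents=True, exist_ok=True)  # create the folders if missing
    print(f"{name} -> {folder}")
```

After placing the files, restarting ComfyUI (or refreshing the node definitions) makes them selectable in the corresponding loader nodes.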

LTX 2.3 is notable for generating synchronized video and audio in a single unified pass from a text prompt, rather than running separate video and audio models and combining their outputs afterwards. Mirza demonstrates the model on an Nvidia H100 GPU with 80GB of VRAM; consumption measured 58GB at model load and peaked at 71GB during active generation. The video also explains the model's multi-stage upscaler pipeline, which uses separate spatial and temporal upscaler models to increase resolution and frame rate, respectively, after the initial generation pass.
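The effect of that multi-stage pipeline can be sketched with simple arithmetic: the spatial upscaler multiplies width and height, while the temporal upscaler multiplies frame rate. The base resolution, frame rate, and 2x factors below are illustrative assumptions, not values stated in the video.

```python
# Illustrative sketch of a spatial + temporal upscaler pipeline.
# All numbers are assumptions chosen to show the structure, not LTX 2.3 specs.
base = {"width": 768, "height": 512, "fps": 25}  # assumed base generation
spatial_factor = 2    # assumed spatial upscaler factor (width/height)
temporal_factor = 2   # assumed temporal upscaler factor (frame rate)

final = {
    "width":  base["width"]  * spatial_factor,
    "height": base["height"] * spatial_factor,
    "fps":    base["fps"]    * temporal_factor,
}
print(final)  # e.g. {'width': 1536, 'height': 1024, 'fps': 50}
```

Running the two upscalers after generation keeps the expensive diffusion pass at a small resolution, which is why peak VRAM sits at generation rather than at the upscale stages.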

Quality is assessed honestly: the model produces usable results with detailed text prompts that specify both scene content and camera movement, but Mirza notes that open-source video generation still trails closed-source alternatives by roughly a year. The tutorial is a practical reference for anyone wanting to run LTX 2.3 locally via ComfyUI without cloud API costs.


📺 Source: Fahd Mirza · Published March 06, 2026
🏷️ Format: Tutorial Demo
