Description:
Veteran AI walks through a complete ComfyUI workflow for video face swapping using LTX Video 2.3 and a purpose-built LoRA called ‘Best Face Swap Video’ (v3), hosted and run on RunningHub’s cloud-based ComfyUI environment. This is the first documented end-to-end face swap pipeline built specifically for LTX 2.3, filling a notable gap given the model’s otherwise strong position for creative video generation.
The workflow centers on three core components: Kijai's LTX 2.3 model loaded with the face swap LoRA at weight 1.0; the BFS Node extension, which generates a split-canvas guide video compositing the reference face and the driving video; and Kijai's 'Add Guide Multi' node from the KJNodes extension, which injects spatial constraints into the latent space before sampling. A critical finding from testing: the official custom sigma sampling method causes face distortion and blurring, while switching to a specific third-party scheduler (linked in the video) with 9 sampling steps produces substantially cleaner results. Audio is handled by discarding LTX's output audio entirely and splicing the original track back in, which avoids electronic interference artifacts.
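The audio splice at the end of the pipeline (drop LTX's generated audio, restore the original track) can also be reproduced outside ComfyUI with a plain ffmpeg remux. A minimal sketch, assuming ffmpeg is installed; the file names are placeholders, not the workflow's actual outputs:

```python
import subprocess

def splice_original_audio(generated_video: str, original_audio: str, output_path: str) -> list[str]:
    """Build an ffmpeg command that keeps the generated clip's video stream
    untouched and muxes the original audio track back in."""
    return [
        "ffmpeg", "-y",
        "-i", generated_video,  # input 0: LTX output (video kept, audio discarded)
        "-i", original_audio,   # input 1: the original audio track
        "-map", "0:v:0",        # take the video stream from input 0
        "-map", "1:a:0",        # take the audio stream from input 1
        "-c:v", "copy",         # no re-encode of the video stream
        "-shortest",            # stop at the shorter of the two streams
        output_path,
    ]

cmd = splice_original_audio("ltx_faceswap.mp4", "source_audio.wav", "final.mp4")
# subprocess.run(cmd, check=True)  # uncomment to execute (requires ffmpeg on PATH)
print(" ".join(cmd))
```

Copying the video stream (`-c:v copy`) avoids a lossy re-encode, so the only change to the generated clip is the replaced audio track.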
The channel ran five systematic tests varying camera angle, resolution, environmental conditions, scene transitions, and prompt phrasing. Overall character consistency lands at roughly 70 out of 100, consistent with known LTX 2.3 limitations. Camera angle proves to be the single most impactful variable: close-up reference shots with a front-facing face image yield the best fidelity. The workflow is available on RunningHub for direct import.
📺 Source: Veteran AI · Published March 25, 2026
🏷️ Format: Hands-On Build
