Description:
Veteran AI introduces Wan Move, a trajectory-guided video generation model that lets users draw motion paths directly in ComfyUI to control how objects and cameras move within generated clips. Unlike earlier trajectory tools such as ByteDance’s ATI, Wan Move supports single-object movement across large displacements, multi-object trajectory control, action transfer, 3D object rendering, and camera animation — all within a unified framework that runs through Kijai’s Wan Video ComfyUI extension.
The tutorial walks through the complete workflow starting with the Spline Editor node, where users sketch motion paths over a reference image background. Those trajectories are then encoded as an embedding and injected directly into the latent space via a dedicated pose injection node, so the sampling process is trajectory-aware from the start. Model loading uses the Wan Move FP8 checkpoint alongside a LightX2V acceleration model with block swapping enabled at 25 blocks, and sampling is configured for just 4 steps — keeping generation fast while still producing accurate motion.
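To make the trajectory step concrete, here is a minimal sketch of what "sketching a motion path" amounts to under the hood: a handful of spline control points get densified into one (x, y) position per video frame, and that dense path is what a trajectory-conditioning node can then encode. The function name, linear interpolation, and shapes below are illustrative assumptions, not the actual API of Kijai's Wan Video extension.

```python
# Hypothetical sketch (not Kijai's node API): densify a drawn spline's
# control points into per-frame (x, y) coordinates, the raw material a
# trajectory-conditioning node would turn into an embedding.

def sample_trajectory(control_points, num_frames):
    """Linearly interpolate control points into one (x, y) per frame."""
    if num_frames < 2 or len(control_points) < 2:
        raise ValueError("need at least 2 frames and 2 control points")
    segments = len(control_points) - 1
    path = []
    for f in range(num_frames):
        t = f / (num_frames - 1) * segments  # position along the whole path
        i = min(int(t), segments - 1)        # which segment we are on
        u = t - i                            # local parameter in [0, 1]
        (x0, y0), (x1, y1) = control_points[i], control_points[i + 1]
        path.append((x0 + u * (x1 - x0), y0 + u * (y1 - y0)))
    return path

# A path dragging an object from top-left toward bottom-right,
# sampled over 17 frames (a typical short-clip frame count):
traj = sample_trajectory([(50, 60), (300, 200), (620, 420)], 17)
```

A real spline editor would use smooth curves (e.g. Catmull-Rom) rather than straight segments, but the principle is the same: the sampler never sees the sparse control points, only the dense per-frame path.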
The presenter is candid about the gap between the model’s advertised capabilities and what is currently reproducible inside ComfyUI: features like 3D object rendering and some camera animations from the GitHub demo page are not yet fully supported by available nodes. The video focuses on demonstrating single-object trajectory control and basic camera movement, with enough workflow detail — node connections, resolution settings, spline editing controls — that viewers can reproduce the results immediately after updating Kijai’s extension.
📺 Source: Veteran AI · Published December 12, 2025
🏷️ Format: Tutorial Demo
