Description:
Youri van Hofwegen walks through a complete end-to-end workflow for creating anime-style video using Seedance 2.0, combined with Claude for prompt refinement, Nano Banana Pro for reference image generation, and the Higgsfield platform as a unified access layer for both models.
The workflow centers on image-to-video generation rather than text-to-video, using consistent character and location reference images to prevent visual drift between scenes. Van Hofwegen demonstrates how to generate those reference images by writing plain-language character descriptions, using Claude to reformat them into optimized Nano Banana Pro prompts, and iterating until the output matches the intended aesthetic. The resulting images then anchor every subsequent Seedance 2.0 generation, eliminating the credit-burning cycle of repeated text-to-video attempts.
The video also introduces a structured prompting framework—covering multi-shot composition, camera movement, character action, and in-clip audio—that breaks 15-second generations into three to six discrete shots with explicit angle and motion cues for each. Van Hofwegen argues this structure is what separates cinematic-feeling sequences from static or randomly animated output. The full workflow is demonstrated with multiple locations, aerial establishing shots, close-ups, and ambient audio tracks generated directly within Seedance 2.0, reportedly completing a full anime scene in under 15 minutes.
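The shot-by-shot structure described above can be sketched as a simple prompt assembler. This is an illustrative sketch only: the field names (`angle`, `motion`, `action`, `audio`) and the example shot content are assumptions for demonstration, not actual Seedance 2.0 syntax or the exact framework from the video.

```python
# Hypothetical prompt assembler for a multi-shot, image-to-video generation.
# Field names and shot content are illustrative, not Seedance 2.0 syntax.

def build_prompt(shots):
    """Join discrete shot cues (angle, motion, action, audio) into one prompt."""
    lines = []
    for i, shot in enumerate(shots, 1):
        lines.append(
            f"Shot {i}: {shot['angle']}, {shot['motion']}; "
            f"action: {shot['action']}; audio: {shot['audio']}"
        )
    return "\n".join(lines)

# Three example shots for a ~15-second clip, following the
# establishing-shot -> medium -> close-up pattern mentioned above.
shots = [
    {"angle": "aerial establishing shot", "motion": "slow push-in",
     "action": "anime city skyline at dusk", "audio": "ambient wind and traffic"},
    {"angle": "medium shot", "motion": "static camera",
     "action": "character turns toward the window", "audio": "soft rain"},
    {"angle": "close-up", "motion": "slight handheld sway",
     "action": "character narrows their eyes", "audio": "distant thunder"},
]

print(build_prompt(shots))
```

Keeping each shot's angle and motion cue explicit, rather than writing one free-form paragraph, is the core of the structure the video advocates.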
📺 Source: Youri van Hofwegen · Published April 28, 2026
🏷️ Format: Tutorial Demo