Why My AI Videos look Ultra Realistic – Higgsfield AI


Description:

Dan Kieft breaks down his complete workflow for creating ultra-realistic AI videos, arguing that cinematic quality depends almost entirely on the source image rather than the video generator itself. His core tools are Higgsfield AI (specifically the Soul and Soul 2 custom models) and Nano Banana 2 for image generation; Kieft says he spends 99% of his production time perfecting reference imagery before any video generation begins.

The video covers four cinematic fundamentals that separate convincing AI generations from generic output: motivated lighting with direction, shadow, and contrast; compositional structure using the rule of thirds and leading lines; foreground/midground/background depth layering; and deliberate color prompting. Kieft trained a custom character model inside Higgsfield on 70 personal photos to maintain a consistent identity across shots. He then walks through a layered pipeline: generating a base shot in Higgsfield Soul for aesthetic reference, then importing that into Nano Banana 2 with detailed identity-preservation prompts that specify outfit changes while locking pose, camera angle, facial features, and hairstyle.
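The identity-preservation step is prompt-driven, and the summary doesn't reproduce Kieft's actual prompt text. As a hypothetical sketch only, a prompt of that shape (change one attribute, lock the rest) could be assembled like this; the function name, field list, and wording are all my assumptions, not Kieft's prompts:

```python
# Hypothetical identity-preservation prompt builder. The attribute list
# mirrors what the video reportedly locks; the phrasing is illustrative.
LOCKED_ATTRIBUTES = ["pose", "camera angle", "facial features", "hairstyle"]

def build_identity_prompt(new_outfit: str, locked=LOCKED_ATTRIBUTES) -> str:
    """Compose a prompt that changes the outfit while locking identity cues."""
    locked_clause = ", ".join(locked)
    return (
        f"Change the subject's outfit to {new_outfit}. "
        f"Keep the {locked_clause} exactly as in the reference image. "
        "Do not alter the subject's identity."
    )

print(build_identity_prompt("a charcoal wool overcoat"))
```

The point of templating rather than freehand prompting is consistency: the locked attributes stay identical across every shot in a sequence, which is the property the character-consistency workflow depends on.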

A particularly replicable technique is Kieft's method for reverse-engineering Higgsfield Soul's internal prompts: the platform embeds its generation prompts in output metadata, which can be copied and reused directly in other tools. For creators trying to move past obviously AI-generated footage toward something that could plausibly pass for live-action, this video offers concrete, step-by-step techniques built around specific commercial tools.
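The video doesn't specify where in the file Higgsfield stores the prompt. A common convention among image generators is to embed it in standard PNG `tEXt`/`iTXt` chunks, so as a minimal sketch under that assumption (the chunk keyword and Higgsfield's actual layout are not confirmed by the source), the metadata can be read with the standard library alone:

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def extract_text_chunks(data: bytes) -> dict:
    """Walk a PNG byte stream and return its tEXt/iTXt metadata as a dict.

    Assumes the generator writes prompts into standard text chunks;
    whether Higgsfield does so (and under which keyword) is unverified.
    """
    assert data[:8] == PNG_SIG, "not a PNG file"
    out, pos = {}, 8
    while pos + 8 <= len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            # tEXt: keyword, NUL separator, Latin-1 text
            key, _, val = body.partition(b"\x00")
            out[key.decode("latin-1")] = val.decode("latin-1")
        elif ctype == b"iTXt":
            # iTXt: keyword NUL comp_flag comp_method lang NUL trans_kw NUL text
            key, _, rest = body.partition(b"\x00")
            comp_flag = rest[0]
            rest = rest[2:]                      # skip flag + method bytes
            _, _, rest = rest.partition(b"\x00")  # skip language tag
            _, _, text = rest.partition(b"\x00")  # skip translated keyword
            if comp_flag:
                text = zlib.decompress(text)
            out[key.decode("latin-1")] = text.decode("utf-8")
        pos += 8 + length + 4  # header + body + CRC
        if ctype == b"IEND":
            break
    return out
```

Usage would be `extract_text_chunks(open("shot.png", "rb").read())`; if the prompt isn't in a text chunk, the fallback is checking EXIF fields, which a library such as Pillow exposes via `Image.info`.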


📺 Source: Dan Kieft · Published March 07, 2026
🏷️ Format: Workflow Case Study
