Description:
Dylan Davis identifies and addresses a systematic weakness in how AI models like Claude, ChatGPT, and Gemini respond to requests for multiple options: what looks like distinct alternatives is usually one answer reformulated with different words. Davis calls this the “gravity problem” — the model first locks onto its single best answer, then generates variations orbiting that same answer rather than genuinely divergent perspectives. The result is that asking for three options typically yields one option in three costumes.
The video presents three named techniques to force genuinely distinct outputs. The first is MECE (Mutually Exclusive, Collectively Exhaustive), borrowed from consulting practice — including this term in a prompt pushes the model to ensure options don’t overlap and together cover the full problem space. The second is persona rotation, where each option is assigned a named persona with an explicitly stated core belief, forcing the model to reason from fundamentally different starting assumptions. The third is dimension locking, where only one specific component of a response is varied per version (opening hook, argument structure, or call to action) while everything else stays fixed — isolating the variable of interest.
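The three techniques above are prompt-construction patterns, so they can be sketched as small template builders. This is a minimal illustration, not Davis's exact templates: the function names, persona fields, and instruction wording are assumptions chosen to show the shape of each technique.

```python
# Illustrative sketches of the three divergence techniques.
# The wording is an assumption, not the video's verbatim templates.

def mece_prompt(task: str, n: int = 3) -> str:
    """MECE: options must not overlap and must cover the whole space."""
    return (
        f"{task}\n"
        f"Give {n} options that are MECE (Mutually Exclusive, Collectively "
        f"Exhaustive): no two options may overlap, and together they must "
        f"cover the full problem space."
    )

def persona_rotation_prompt(task: str, personas: dict) -> str:
    """Persona rotation: each option reasons from a named persona's core belief."""
    lines = [task, "Write one version per persona, reasoning strictly from "
                   "that persona's core belief:"]
    for name, belief in personas.items():
        lines.append(f"- {name}: core belief = {belief}")
    return "\n".join(lines)

def dimension_lock_prompt(task: str, dimension: str, n: int = 3) -> str:
    """Dimension locking: vary only one component; hold everything else fixed."""
    return (
        f"{task}\n"
        f"Produce {n} versions that differ ONLY in the {dimension}; "
        f"keep every other element identical across versions."
    )
```

For example, `dimension_lock_prompt("Draft a launch email.", "opening hook")` yields a prompt that permits variation in the opening hook alone, which is what isolates that variable across versions.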
Two bonus tactics close the video: a self-verification prompt that asks the AI to explain in one sentence what makes each version fundamentally different from the others, and a devil’s advocate instruction to force deliberate counterargument framing. All techniques are demonstrated with copyable prompt templates and are positioned as applicable to research, data analysis, strategic planning, and proposal writing — not just the email examples used throughout.
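The two bonus tactics are follow-up instructions rather than standalone prompts, so they might be sketched as short builders in the same spirit. The phrasing here is an assumption that paraphrases the description, not the video's copyable templates.

```python
# Illustrative sketches of the two bonus tactics; wording is assumed.

def self_verification_prompt(n: int = 3) -> str:
    """Ask the model to justify the distinctness of each version."""
    return (
        f"For each of the {n} versions above, explain in one sentence "
        f"what makes it fundamentally different from the others."
    )

def devils_advocate_prompt(task: str) -> str:
    """Force deliberate counterargument framing."""
    return (
        f"{task}\n"
        f"Now play devil's advocate: write one version that deliberately "
        f"frames the strongest counterargument to the position above."
    )
```

Sending the self-verification prompt after the initial request gives a quick check: if the model cannot state a one-sentence difference, the versions are likely the same answer in different costumes.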
📺 Source: Dylan Davis · Published March 14, 2026
🏷️ Format: Tutorial Demo
