GimbalDiffusion: Gravity-Aware Camera Control for Video Generation
Frédéric Fortier-Chouinard     Yannick Hold-Geoffroy
Valentin Deschaintre     Matheus Gadelha     Jean-François Lalonde



"A quiet mountain village during snowfall as smoke rises from chimneys and lights glow along the snowy street. Many llamas are walking around in the street."

Camera motion: Pitch from +80° to -80°, no translation.

"A tropical beach at sunrise with palm trees swaying gently while small waves roll onto the golden sand."

Camera motion: Roll from -90° to +90°, move forward by 8 meters.
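Because these trajectories are specified in an absolute, gravity-aligned frame, each one can be decoded into per-frame camera poses without reference to previous frames. The minimal numpy sketch below illustrates one such decoding for the sweeps above; the Euler-angle order, axis conventions, sign choices, and frame count are illustrative assumptions, not the paper's actual parameterization.

```python
import numpy as np

def rotation_from_gravity_angles(yaw, pitch, roll):
    """Camera orientation from absolute angles measured against the
    gravity-aligned world frame (Z up; camera looks along +X at zero angles).
    Axis order and signs are illustrative conventions, not the paper's."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])  # yaw about gravity axis
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])  # pitch about lateral axis
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])  # roll about view axis
    return Rz @ Ry @ Rx

def trajectory(n_frames, pitch_start_deg, pitch_end_deg,
               roll_start_deg=0.0, roll_end_deg=0.0, forward_m=0.0):
    """Linearly interpolate absolute angles and forward translation per frame."""
    poses = []
    for t in np.linspace(0.0, 1.0, n_frames):
        pitch = np.deg2rad((1 - t) * pitch_start_deg + t * pitch_end_deg)
        roll = np.deg2rad((1 - t) * roll_start_deg + t * roll_end_deg)
        R = rotation_from_gravity_angles(0.0, pitch, roll)
        position = np.array([t * forward_m, 0.0, 0.0])  # advance along world +X
        poses.append((R, position))
    return poses

# First teaser: pitch sweep from +80° to -80°, no translation
# (49 frames is just an example clip length).
poses = trajectory(n_frames=49, pitch_start_deg=80.0, pitch_end_deg=-80.0)
```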



[Paper]
[Supplementary & Videos]

Abstract

Recent progress in text-to-video generation has achieved remarkable realism, yet fine-grained control over camera motion and orientation remains elusive, especially for extreme trajectories (e.g., a 180-degree turnaround, or looking directly up or down). Existing approaches typically encode camera trajectories using relative or ambiguous representations, which hinders precise geometric control and offers little support for large rotations. We introduce GimbalDiffusion, a framework that enables camera control grounded in physical-world coordinates, using gravity as a global reference. Instead of describing motion relative to previous frames, our method defines camera trajectories in an absolute coordinate system, allowing accurate, interpretable control over camera parameters. By training on panoramic 360-degree videos, we cover the full sphere of possible viewpoints, including combinations of extreme pitch and roll that are out of distribution for conventional video data. To improve camera guidance, we introduce null-pitch conditioning, a strategy that prevents the model from overriding camera specifications in the presence of conflicting prompt content (e.g., generating grass while the camera points toward the sky). Finally, we propose new benchmarks for gravity-aware camera-controlled video generation, assessing models' ability to generate extreme camera angles and quantifying their entanglement with the input prompt.
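The abstract does not spell out how null-pitch conditioning operates. One plausible reading, sketched below under that assumption, is a classifier-free-guidance-style scheme: the pitch signal is sometimes replaced by a learned null embedding during training, so that at sampling time the prediction can be steered toward the specified pitch and away from prompt-driven defaults. Here `denoiser`, `NULL_PITCH`, and the guidance form are hypothetical stand-ins, not the paper's implementation.

```python
import numpy as np

NULL_PITCH = None  # stand-in for a learned "null" pitch embedding (hypothetical)

def denoiser(x_t, t, text_emb, pitch):
    """Toy stand-in for the video diffusion backbone (not a real model)."""
    bias = 0.0 if pitch is None else 0.05 * pitch
    return (1.0 - 0.1 * t) * x_t + bias

def guided_prediction(x_t, t, text_emb, pitch, scale=3.0):
    # Classifier-free-guidance-style combination: amplify the difference
    # between the pitch-conditioned and null-pitch predictions, so the
    # camera specification dominates conflicting prompt content.
    pred_cond = denoiser(x_t, t, text_emb, pitch)
    pred_null = denoiser(x_t, t, text_emb, NULL_PITCH)
    return pred_null + scale * (pred_cond - pred_null)

# Example: guide one denoising step toward an 80-degree upward pitch.
x = guided_prediction(np.zeros(4), t=0.5, text_emb=None, pitch=np.deg2rad(80.0))
```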



Acknowledgements

This research was supported by Adobe and a Natural Sciences and Engineering Research Council of Canada (NSERC) scholarship, reference number 600578. Computing resources were provided by Adobe and the Digital Research Alliance of Canada. The authors thank Yohan Poirier-Ginter, Qitao Zhao, Rahul Sajnani, Jack Hilliard and Jonathan Roussel for discussions and proofreading help.