Guided Co-Modulated GAN for 360° Field of View Extrapolation
Mohammad Reza Karimi Dastjerdi     Yannick Hold-Geoffroy     Jonathan Eisenmann     Siavash Khodadadeh    
Jean-François Lalonde    


Accepted as an oral presentation at the International Conference on 3D Vision (3DV), 2022!
Accepted at the Sixth Workshop on Computer Vision for AR/VR (CV4ARVR), 2022!

This work is featured at Adobe Max Sneaks 2022!
Media Coverage:
  • Adobe Blog
  • Popular Science
  • PetaPixel
  • DigitalCameraWorld

    Abstract

    We propose a method to extrapolate a 360° field of view from a single image that allows for user-controlled synthesis of the out-painted content. To do so, we improve an existing GAN-based in-painting architecture to out-paint panoramic image representations. Our method obtains state-of-the-art results, outperforming previous methods on standard image quality metrics. To allow controlled synthesis of the out-painted content, we introduce a novel guided co-modulation framework, which drives the image generation process with a common pretrained discriminative model. Doing so maintains the high visual quality of the generated panoramas while enabling user-controlled semantic content in the extrapolated field of view. We demonstrate the state-of-the-art results of our method on field-of-view extrapolation both qualitatively and quantitatively, providing a thorough analysis of our novel editing capabilities. Finally, we demonstrate that our approach benefits the photorealistic virtual insertion of highly glossy objects in photographs.
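    To make the out-painting setup concrete: a perspective photograph covers only a small portion of the full equirectangular panorama, and everything outside that region must be synthesized. The NumPy sketch below computes the visibility mask of a forward-facing perspective camera on the equirectangular grid (the resolution and field-of-view values are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def fov_mask(pano_h=256, pano_w=512, h_fov_deg=90.0, v_fov_deg=60.0):
    """Boolean mask of the equirectangular pixels seen by a forward-facing
    perspective camera; everything outside the mask must be out-painted."""
    # Spherical coordinates of each panorama pixel center (longitude, latitude).
    lon = (np.arange(pano_w) + 0.5) / pano_w * 2 * np.pi - np.pi   # [-pi, pi)
    lat = np.pi / 2 - (np.arange(pano_h) + 0.5) / pano_h * np.pi   # (pi/2, -pi/2)
    lon, lat = np.meshgrid(lon, lat)

    # Unit direction vectors (x right, y up, z forward).
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)

    # A direction is visible if it projects inside the image plane at z = 1.
    in_front = z > 0
    half_w = np.tan(np.radians(h_fov_deg) / 2)
    half_h = np.tan(np.radians(v_fov_deg) / 2)
    with np.errstate(divide="ignore", invalid="ignore"):
        u = np.where(in_front, x / z, np.inf)
        v = np.where(in_front, y / z, np.inf)
    return in_front & (np.abs(u) <= half_w) & (np.abs(v) <= half_h)

mask = fov_mask()
print(f"input covers {mask.mean():.1%} of the panorama pixels")
```

    With these example values, the input image occupies well under a tenth of the panorama's pixels, which illustrates how much content a 360° extrapolation method has to generate.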

    Paper and Supplementary Material

    Mohammad Reza Karimi Dastjerdi, Yannick Hold-Geoffroy, Jonathan Eisenmann, Siavash Khodadadeh, Jean-François Lalonde
    Guided Co-Modulated GAN for 360° Field of View Extrapolation
    (hosted on arXiv)


    Video - Adobe Max Sneaks 2022

    Video - 3DV


    This work was partially supported by NSERC grant ALLRP 557208-20. We would like to thank Vova Kim, Sohrab Amirghodsi, Eli Shechtman, and Kuldeep Kulkarni for the helpful discussions and comments. In addition, thanks to everyone at the Laboratoire de Vision et Systèmes Numériques of Université Laval who helped with proofreading.