Casual Indoor HDR Radiance Capture from Omnidirectional Images
Pulkit Gera     Mohammad Reza Karimi Dastjerdi    
Charles Renaud     P. J. Narayanan     Jean-François Lalonde    




[Paper]
[Supplementary]
[Video]
[Code]
[Presentation]
[Dataset]

Accepted at the 33rd British Machine Vision Conference, BMVC 2022
Presented at the 3rd OmniCV Workshop, CVPR-W 2022


Abstract

We present PanoHDR-NeRF, a novel pipeline to casually capture a plausible full HDR radiance field of a large indoor scene without elaborate setups or complex capture protocols. First, a user captures a low dynamic range (LDR) omnidirectional video of the scene by freely waving an off-the-shelf camera around the scene. Then, an LDR2HDR network uplifts the captured LDR frames to HDR, which are subsequently used to train a tailored NeRF++ model. The resulting PanoHDR-NeRF pipeline can estimate full HDR panoramas from any location in the scene. Through experiments on a novel test dataset of a variety of real scenes with ground truth HDR radiance captured at locations not seen during training, we show that PanoHDR-NeRF predicts plausible radiance from any scene point. We also show that the HDR images produced by PanoHDR-NeRF can synthesize correct lighting effects, enabling the augmentation of indoor scenes with synthetic objects that are lit correctly.
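To give a feel for the LDR-to-HDR uplift step, the toy sketch below shows the classical inverse-tone-mapping baseline: undoing display gamma to approximate linear radiance. This is only an illustrative assumption, not the paper's LDR2HDR network, which is a learned model that additionally hallucinates the radiance lost in saturated (clipped) pixels, something a fixed inverse gamma cannot recover.

```python
import numpy as np

def naive_ldr_to_hdr(ldr, gamma=2.2):
    """Toy inverse tone mapping: invert display gamma to approximate
    linear radiance. Saturated pixels (value 1.0) remain clipped at 1.0;
    recovering their true intensity is what a learned LDR2HDR network
    (as in PanoHDR-NeRF) is trained to do."""
    ldr = np.clip(ldr, 0.0, 1.0)          # LDR values are in [0, 1]
    return ldr ** gamma                    # linearize gamma-encoded values

# A few example pixel values from a hypothetical LDR panorama frame
ldr_frame = np.array([0.0, 0.25, 0.5, 1.0])
hdr_frame = naive_ldr_to_hdr(ldr_frame)
```

Linearized frames like these (one per video frame, with camera poses) would then serve as training images for the radiance field model.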


Paper and Supplementary Material

Pulkit Gera, Mohammad Reza Karimi Dastjerdi, Charles Renaud, P. J. Narayanan, Jean-François Lalonde
Casual Indoor HDR Radiance Capture from Omnidirectional Images
(hosted on arXiv)


[Bibtex]

Video




Presentation @ OmniCV-2022, CVPR-W


Dataset


Scene 001 Scene 002 Scene 003 Scene 004



Acknowledgements

This research was supported by NSERC grant RGPIN-2020-04799, Compute Canada, and a MITACS Globalink internship to Pulkit Gera. The authors thank Bowei Chen for his early work on the project, and David Ibarzabal for his help with data capture. We also thank Yohan Poirier-Ginter and Jinsong Zhang for their help.