We propose a real-time method to estimate spatially-varying indoor lighting from a single RGB image. Given an image and a 2D location in that image, our CNN estimates a 5th-order spherical harmonic representation of the lighting at the given location in less than 20ms on a laptop's mobile graphics card. While existing approaches estimate a single, global lighting representation or require depth as input, our method reasons about local lighting without requiring any geometry information. We demonstrate, through quantitative experiments including a user study, that our results achieve lower lighting estimation errors and are preferred by users over the state-of-the-art. Our approach can be used directly for augmented reality applications, where a virtual object is relit realistically at any position in the scene in real-time.
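To give a concrete sense of the lighting representation, the sketch below evaluates a real spherical harmonic basis up to order 5 (36 coefficients per color channel) and reconstructs the incoming radiance in a given direction from a coefficient vector. This is a generic SH illustration, not the paper's network or code; the function names (`real_sh`, `sh_basis`) and the zeroed example coefficients are our own.

```python
import math

def assoc_legendre(l, m, x):
    # Associated Legendre polynomial P_l^m(x), m >= 0,
    # via the standard three-term recurrence (no Condon-Shortley phase).
    pmm = 1.0
    if m > 0:
        somx2 = math.sqrt(max(0.0, 1.0 - x * x))
        fact = 1.0
        for _ in range(m):
            pmm *= fact * somx2
            fact += 2.0
    if l == m:
        return pmm
    pmmp1 = x * (2 * m + 1) * pmm
    if l == m + 1:
        return pmmp1
    pll = 0.0
    for ll in range(m + 2, l + 1):
        pll = ((2 * ll - 1) * x * pmmp1 - (ll + m - 1) * pmm) / (ll - m)
        pmm, pmmp1 = pmmp1, pll
    return pll

def real_sh(l, m, theta, phi):
    # Real-valued spherical harmonic Y_l^m for polar angle theta, azimuth phi.
    k = math.sqrt((2 * l + 1) / (4 * math.pi) *
                  math.factorial(l - abs(m)) / math.factorial(l + abs(m)))
    p = assoc_legendre(l, abs(m), math.cos(theta))
    if m > 0:
        return math.sqrt(2.0) * k * math.cos(m * phi) * p
    if m < 0:
        return math.sqrt(2.0) * k * math.sin(-m * phi) * p
    return k * p

def sh_basis(theta, phi, order=5):
    # Basis vector for all (l, m) with l <= order: 36 values for order 5.
    return [real_sh(l, m, theta, phi)
            for l in range(order + 1) for m in range(-l, l + 1)]

def reconstruct_radiance(coeffs, theta, phi):
    # L(omega) ~= sum_i c_i * Y_i(omega): lighting in direction (theta, phi).
    return sum(c * y for c, y in zip(coeffs, sh_basis(theta, phi)))
```

In this representation, the network's output for a 2D query location would be one such 36-dimensional coefficient vector per color channel, which a renderer can then use to shade a virtual object placed at that location.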
This dataset contains 20 indoor scenes with 79 HDR light probes distributed throughout the scenes. Download here!
For more results, check out the [Supplementary material].
Videos

The position of the bunny and the RGB image are used as input to the neural network, and the prediction is used to light the bunny in real-time.
Note that depth is only used to scale the 3D model.
- Adobe Systems Inc. for the generous gift funding;
- NSERC/Creaform Industrial Research Chair on 3D Scanning: CREATION 3D for the funding;
- NVIDIA Corporation for the donation of the Tesla K40 and Titan X GPUs used for this research.