This page provides supplementary material for the paper Fast Spatially-Varying Indoor Lighting Estimation. We provide the evaluation dataset from sec. 5.2, the renders of the comparison with Gardner et al. (fig. 6), the renders of the comparison with Barron et al. (fig. 7), the full demo video of fig. 9, a supplementary demo on various synthetic and real scenes, and more failure cases (fig. 8).
Interactive demo captured with a Kinect V2, with the illumination computed in real time (complement to fig. 9).
The depth is only used to scale the virtual object; the network sees only the RGB image to estimate the lighting.
Note that in this demo the shadows are not composited onto the surfaces.
Moving the object in the scene:
Moving the light source:
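To illustrate how depth can set an object's on-screen scale without touching the lighting estimate, here is a minimal sketch under a pinhole camera model. This is not the paper's code; the function name, focal length, and object size are hypothetical, and the lighting network itself would consume only the RGB image.

```python
# Hypothetical sketch (not from the paper): depth is used only to set the
# projected size of the virtual object, assuming a pinhole camera.

def object_scale(depth_m, focal_px, object_size_m=0.2):
    """Projected size in pixels of an object of fixed metric size
    `object_size_m` placed at `depth_m` meters, for a camera with
    focal length `focal_px` (all values are illustrative)."""
    return focal_px * object_size_m / depth_m

# An object placed twice as far away appears half as large on screen:
near = object_scale(1.0, focal_px=600.0)
far = object_scale(2.0, focal_px=600.0)
assert abs(near - 2.0 * far) < 1e-9
```

The key point is that depth only affects this geometric scale factor; the lighting prediction is a function of the RGB image alone.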
Several videos from the synthetic test set. The illumination is updated interactively; the depth is only used to scale the virtual object, and the network sees only the RGB image to estimate the lighting.
Note that in these videos the shadows are not composited onto the surfaces.
Several videos on real images. The illumination is computed interactively. On most images, a plane was fitted to the scene to simulate depth.
[image source]
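Fitting a plane to stand in for real depth can be done with a simple least-squares fit. The following is a hedged sketch, not the paper's implementation: the function names and the sample points are made up for illustration.

```python
# Hypothetical sketch (not from the paper): fit a plane z = a*x + b*y + c
# to 3D points by least squares, then query it as simulated depth.
import numpy as np

def fit_plane(points):
    """points: (N, 3) array of (x, y, z). Returns plane coefficients (a, b, c)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([x, y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
    return a, b, c

def plane_depth(a, b, c, x, y):
    """Simulated depth at location (x, y) on the fitted plane."""
    return a * x + b * y + c

# Points sampled exactly from the plane z = 0.5*x - 0.25*y + 2:
pts = np.array([[0, 0, 2.0], [1, 0, 2.5], [0, 1, 1.75], [1, 1, 2.25]])
a, b, c = fit_plane(pts)
```

Such a plane gives a plausible depth value at any image location, which is all that is needed to place and scale the virtual object.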