Fast Spatially-Varying Indoor Lighting Estimation
(paper ID: 4701)

This document complements the paper Fast Spatially-Varying Indoor Lighting Estimation. We provide the evaluation dataset from sec. 5.2, renders for the comparison with Gardner et al. (fig. 6), renders for the comparison with Barron et al. (fig. 7), the full demo video of fig. 9, supplementary demos of various synthetic/real scenes, and more failure cases (fig. 8).

Table of contents

  • Evaluation dataset: backgrounds and probes
  • Comparison with Gardner et al. 2017
  • Comparison with Barron et al.
  • Demo
  • Videos
  • Failure cases

Evaluation dataset: backgrounds and probes

Scenes used for the quantitative evaluation and the user study (see section 5.2 in the main paper).
  • Hover over the image to see the probes; center probes are green and off-center probes are red.
  • Click to enlarge the image.
  • Note that the center probes (green) were selected empirically, by determining which probes best reflect the average lighting in the room (see the sketch after this list).
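
The selection itself was done by hand; as a rough illustration of the criterion only, here is a hypothetical sketch that ranks probes by how closely their captured lighting matches the average over all probes in the scene (the input format is an assumption, not from the paper):

    import numpy as np

    def pick_center_probe(probe_envmaps):
        """Rank probes by how well they reflect the room's average lighting.

        probe_envmaps: list of HxWx3 float arrays (HDR environment maps),
        one per probe location in the scene (hypothetical input format).
        Returns the index of the probe closest to the mean lighting.
        """
        stack = np.stack(probe_envmaps)            # (N, H, W, 3)
        mean_light = stack.mean(axis=0)            # average lighting in the room
        # RMSE of each probe's environment map against the room average
        rmse = np.sqrt(((stack - mean_light) ** 2).mean(axis=(1, 2, 3)))
        return int(np.argmin(rmse))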

Comparison with Gardner et al. 2017

Renders of a diffuse object using the estimate from each method (complement to fig. 6).
  • Hover over the ground-truth image to see the virtual object mask.
  • Click to enlarge the image.

[Image grid columns: Ground truth | Ours | Gardner et al. Local | Gardner et al. Global]

Comparison with Barron et al. (complement to fig. 7)


[Image grid columns: Barron et al. | Ours]

Demo

Interactive demo captured with a Kinect V2, where the illumination is computed in real time (complement to fig. 9).
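
The paper represents lighting as spherical harmonics (SH) coefficients predicted at the insertion point. As a minimal sketch of how a diffuse virtual object can then be shaded in real time from such coefficients, here is the standard order-2 SH irradiance formula of Ramamoorthi and Hanrahan (2001); the paper itself predicts higher-order coefficients, so this low-order version only illustrates the idea:

    import numpy as np

    def sh_irradiance(normal, L):
        """Diffuse irradiance from order-2 SH lighting coefficients.

        normal: (3,) unit surface normal.
        L: (9, 3) SH coefficients per RGB channel, ordered
           [L00, L1-1, L10, L11, L2-2, L2-1, L20, L21, L22].
        Constants from Ramamoorthi & Hanrahan (2001).
        """
        x, y, z = normal
        c1, c2, c3, c4, c5 = 0.429043, 0.511664, 0.743125, 0.886227, 0.247708
        return (c4 * L[0]
                + 2.0 * c2 * (L[3] * x + L[1] * y + L[2] * z)
                + c1 * L[8] * (x * x - y * y)
                + c3 * L[6] * z * z - c5 * L[6]
                + 2.0 * c1 * (L[4] * x * y + L[7] * x * z + L[5] * y * z))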


Moving the object in the scene:



Moving the light source:


Videos

Synthetic scenes

Several videos from the synthetic test set. The illumination is updated interactively; the depth is used to scale the virtual object (see the sketch below).
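
A minimal sketch of the depth-based scaling, assuming a pinhole camera with known intrinsics (all names below are hypothetical): the on-screen size of the virtual object shrinks in inverse proportion to the depth at the insertion pixel, so that its metric size stays constant.

    import numpy as np

    def object_pixel_scale(depth_m, object_size_m, focal_px):
        """On-screen size (pixels) of an object of a given metric size at a given depth."""
        return focal_px * object_size_m / depth_m

    def backproject(u, v, depth_m, fx, fy, cx, cy):
        """3D camera-frame position of the insertion point behind pixel (u, v)."""
        return np.array([(u - cx) * depth_m / fx,
                         (v - cy) * depth_m / fy,
                         depth_m])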



Real scenes

Several videos on real images. The illumination is computed interactively; a plane was fitted to approximate the depth in most images (see the sketch below).
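
A minimal sketch of how a fitted plane can stand in for depth, assuming a pinhole camera and a plane n·X + d = 0 expressed in the camera frame (plane_n, plane_d and the intrinsics below are hypothetical names):

    import numpy as np

    def depth_from_plane(u, v, plane_n, plane_d, fx, fy, cx, cy):
        """Depth at pixel (u, v) implied by the fitted plane n.X + d = 0."""
        ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])  # ray through the pixel
        denom = ray.dot(plane_n)
        if abs(denom) < 1e-8:                  # ray parallel to the plane
            return None
        t = -plane_d / denom                   # intersection along the ray
        return t * ray[2]                      # z-component; equals t since ray z = 1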


Failure cases (complement to fig. 8)

