We present additional results complementing the main paper. In particular, we show interactive relighting results,
using pre-computed images generated by our method and baselines.
Hover your mouse over the images to see animated relighting results!
We present 20 additional randomly selected results on the "user-controlled" dataset, extending the qualitative results from fig. 5.
Here, we rendered 8 light directions instead of the 5 used
for the user study, in order to also show results where the shadow falls behind the object.
Move your mouse from left to right over the images to see the light direction change.
We show how the two main parameters of our method can be adjusted for enhanced artistic control, extending sec. 4.6. Here, each parameter is modified separately, but both could be modified simultaneously for optimal results.
Here, we show control over the guidance scale γ, extending fig. 6.
Increasing the guidance scale generally makes the dominant light source stronger.
Move the mouse vertically to adjust the guidance scale, and horizontally to adjust the light azimuth angle.
The current light azimuth angle is 0°.
The current guidance scale is γ=3.0.
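For intuition on how such a scale typically acts, below is a minimal sketch of classifier-free-style guidance in a diffusion sampler. This is a generic illustration, not necessarily SpotLight's exact conditioning; the function and tensor names are our own.

```python
import torch

def apply_guidance(eps_uncond: torch.Tensor, eps_cond: torch.Tensor,
                   gamma: float) -> torch.Tensor:
    """Generic classifier-free-style guidance (illustrative sketch).

    Moves the noise prediction along the conditional direction:
    gamma = 1 recovers the conditional prediction, while a larger gamma
    amplifies the conditioning signal (here, the dominant light source).
    """
    return eps_uncond + gamma * (eps_cond - eps_uncond)
```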
Here, we show control over the latent mask weight β, extending fig. 7.
Increasing the latent mask weight makes the dominant shadow darker.
Move the mouse vertically to adjust the latent mask weight, and horizontally to adjust the light azimuth angle.
The current light azimuth angle is 0°.
The current latent weight is β=0.05.
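For intuition only, one plausible way such a weight can act is a convex blend, within the shadow mask, between the current latent and a latent encoding the coarse shadow. This sketch does not reproduce SpotLight's actual latent guidance (see the paper for the exact formulation); all names below are hypothetical.

```python
import torch

def blend_latent_mask(z: torch.Tensor, z_shadow: torch.Tensor,
                      mask: torch.Tensor, beta: float = 0.05) -> torch.Tensor:
    """Hypothetical sketch of a latent mask weight.

    Inside the shadow mask, pull the latent toward a shadow-encoding
    latent with weight beta: beta = 0 leaves the latent untouched,
    while a larger beta darkens the dominant shadow more.
    """
    return z * (1.0 - beta * mask) + z_shadow * (beta * mask)
```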
We adjust the light source radius used to generate the coarse shadow fed to SpotLight.
Small radii lead to hard shadows, whereas larger radii generate more diffuse shadows.
Move the mouse vertically to adjust the light source radius, and horizontally to adjust the light azimuth angle.
The current light azimuth angle is 0°.
The current light radius is 1 (default).
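For context, the softening effect of a larger light radius can be understood through the standard area-light approximation: averaging hard shadows cast from points sampled within the light's radius. The sketch below illustrates this approximation; `hard_shadow_fn` is a hypothetical point-light shadow renderer, and the coarse-shadow renderer we actually use may differ.

```python
import numpy as np

def soft_shadow_mask(hard_shadow_fn, light_pos, radius,
                     n_samples=16, seed=0):
    """Approximate an area light by averaging jittered hard shadows.

    `hard_shadow_fn(pos)` is a hypothetical renderer returning a binary
    (H, W) shadow mask for a point light at `pos`. A radius of 0
    reproduces a hard shadow; larger radii widen the penumbra.
    """
    rng = np.random.default_rng(seed)
    acc = None
    for _ in range(n_samples):
        # Uniform sample inside a disk of the given radius
        # (simplified here to the xy-plane).
        r = radius * np.sqrt(rng.uniform())
        theta = rng.uniform(0.0, 2.0 * np.pi)
        offset = np.array([r * np.cos(theta), r * np.sin(theta), 0.0])
        mask = hard_shadow_fn(light_pos + offset).astype(np.float64)
        acc = mask if acc is None else acc + mask
    return acc / n_samples
```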
In our paper, we analyze all the methods by conditioning on a single dominant light source, an assumption that is sufficiently realistic in most cases.
Here, we show that we can combine outputs from SpotLight at different light directions to simulate multiple light sources.
We combine a static light direction (shadow to the right of the object) with a dynamic one (hover over the images to move this virtual light). The two lightings are combined
in linear space (assuming a gamma of 2.2) using the following equation:
$$x_{\text{combined}}=(0.5 \times {x_{\text{light 1}}}^{2.2} + 0.5 \times {x_{\text{light 2}}}^{2.2})^{\frac{1}{2.2}}.$$
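As a minimal sketch, this blend is a few lines of NumPy (assuming both renderings are gamma-encoded float images in [0, 1]):

```python
import numpy as np

def combine_lightings(img_light1: np.ndarray, img_light2: np.ndarray,
                      gamma: float = 2.2) -> np.ndarray:
    """Average two relit images in linear space, as in the equation above."""
    linear = 0.5 * img_light1 ** gamma + 0.5 * img_light2 ** gamma
    return linear ** (1.0 / gamma)
```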
Notice how the static shadow and shading have a clear effect on the combined output.
In our work, we use SpotLight for object relighting. We experiment with extending SpotLight to full-scene relighting using the following approach.
In step 1, we use the same ZeroComp checkpoint as in all other experiments. For steps 2 and 3, however, we found that this checkpoint had limited full-scene relighting capabilities and could not properly relight the background. We hypothesize that this is because the neural renderer backbone is trained to relight only a small region of an image (circular and rectangular masks at training time). We therefore train a separate model from scratch for 270K iterations on inverted shading masks, taking the inverse of the circular and rectangular masks for shading masking and keeping only the largest connected component of the resulting mask. This yields training examples where only a small region of the shading is known and the full background lighting must be inferred.
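The inverted-mask construction can be sketched as follows, using connected-component labeling from scipy; this illustrates the procedure described above and is not the exact training code.

```python
import numpy as np
from scipy import ndimage

def inverted_shading_mask(mask: np.ndarray) -> np.ndarray:
    """Invert a binary shading mask and keep its largest connected component.

    `mask` is a binary (H, W) circular or rectangular shading mask.
    The result marks everything except the original region, so only a
    small part of the shading remains known during training.
    """
    inverted = ~mask.astype(bool)
    labels, n_components = ndimage.label(inverted)
    if n_components == 0:
        return inverted  # nothing to keep
    sizes = np.bincount(labels.ravel())  # index 0 is the background
    largest = sizes[1:].argmax() + 1
    return labels == largest
```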
Here, we show the user interface used to perform our human perceptual study, as described in sec. 4.5 of the paper. The user is first shown an instructions page, followed by the question page (whose layout is identical for all 120 2AFC questions).