The purpose of this document is to extend the dataset and results presented in the main paper and to provide additional analysis of our proposed model. Specifically, we provide examples from our panorama dataset as well as additional lighting estimation and object insertion results. We also include the images used in our user study. Lastly, we analyze our model with two experiments demonstrating its generalization capabilities: first on sky in-filling, then on semantic mapping of the learned sky parameters.
In this section, we show sample panoramas from our evaluation dataset introduced in sec. 6.1.
This section presents additional lighting estimation results, extending those shown in fig. 7.
We now present the images used for our user study (sec. 6.4). The bird inserted using the lighting estimated by our method is highlighted in green.
To make our learned sky model robust to buildings and other sky occluders, we train it to reconstruct the sky appearance over the whole sky hemisphere, even when some input pixels are masked. Here, we examine the model's ability to extrapolate plausible lighting distributions in regions missing from the input panorama, effectively performing sky in-filling. Given partially visible skies (fig. 4.1, left), the reconstructions (right) remain plausible, suggesting that our sky model has learned a strong prior over sky appearance.
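The training strategy above can be sketched as follows. This is a minimal, dependency-free illustration, not our actual model: the panorama size, the occluder mask, and the identity "model" standing in for the sky autoencoder are all assumptions for the sketch. The key point is that the reconstruction loss is computed over the full hemisphere, masked pixels included, so the model is pushed to in-fill sky behind occluders.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, C = 32, 64, 3  # hypothetical low-res equirectangular sky hemisphere
sky = rng.random((H, W, C)).astype(np.float32)

# Simulate an occluder (e.g. a building near the horizon): the mask
# removes pixels from the INPUT only, never from the training target.
mask = np.ones((H, W, 1), dtype=np.float32)
mask[16:, 20:40] = 0.0
masked_input = sky * mask  # what the model sees

# Stand-in for the sky model: identity on the masked input.
recon = masked_input.copy()

# Loss over the FULL hemisphere (masked pixels included) forces
# the model to hallucinate plausible sky in the occluded region.
full_loss = float(np.mean((recon - sky) ** 2))

# For contrast: a loss restricted to visible pixels would let the
# model ignore occluded regions entirely.
visible_loss = float(np.mean(((recon - sky) * mask) ** 2))
```

With the identity stand-in, `visible_loss` is zero while `full_loss` is not: only the full-hemisphere loss penalizes the missing region, which is why training uses it.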
We further analyze our sky model by applying UMAP to the sky parameters z from our test set and showing the reconstructed skies at the resulting coordinates in fig. 5.1. This mapping suggests that our sky model captures high-level sky semantics, as clear trends such as sun elevation and intensity emerge across the embedding.
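The construction of such a figure can be sketched as below. The sky codes here are random placeholders, and since the umap-learn package may not be installed, the sketch substitutes a PCA projection (via SVD) for the UMAP call, which is shown in a comment; the grid-placement step, which snaps each embedded sky to a cell so its reconstruction can be tiled at its 2-D coordinates, is the same either way.

```python
import numpy as np

rng = np.random.default_rng(42)
N, D = 200, 64  # hypothetical: 200 test skies, 64-D sky code z
z = rng.normal(size=(N, D)).astype(np.float32)

# The paper's figure uses UMAP, e.g. with the umap-learn package:
#   import umap
#   coords = umap.UMAP(n_components=2).fit_transform(z)
# PCA stand-in (first two principal directions) to stay self-contained:
zc = z - z.mean(axis=0)
_, _, vt = np.linalg.svd(zc, full_matrices=False)
coords = zc @ vt[:2].T  # (N, 2) embedding coordinates

# Snap each embedded sky to a cell of a G x G grid so that the
# reconstructed sky thumbnails can be tiled at their coordinates.
G = 8
lo, hi = coords.min(axis=0), coords.max(axis=0)
cells = ((coords - lo) / (hi - lo + 1e-8) * (G - 1)).round().astype(int)
```

Each row of `cells` gives the grid position where that test sky's reconstruction would be drawn; semantic trends (sun elevation, intensity) then appear as gradients across the grid.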