Advances in high dynamic range (HDR) lighting estimation from a single image have opened new
possibilities for augmented reality (AR) applications. Predicting complex lighting environments from a
single input image allows for the realistic rendering and compositing of virtual objects. In this work,
we investigate the color robustness of such methods---an often-overlooked yet critical factor for
achieving visual realism. While most evaluations conflate color with other lighting attributes (e.g.,
intensity, direction), we isolate color as the primary variable of interest. Rather than introducing a
new lighting estimation algorithm, we explore whether simple adaptation techniques can enhance the color
accuracy of existing models. Using a novel HDR dataset featuring diverse lighting colors, we
systematically evaluate several adaptation strategies. Our results show that preprocessing the input
image with a pre-trained white balance network improves color robustness, outperforming other strategies
across all tested scenarios. Notably, this approach requires no retraining of the lighting estimation
model. We further validate the generality of this finding by applying the technique to three
state-of-the-art lighting estimation methods from recent literature.
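The adaptation strategy described above can be sketched as a simple two-stage pipeline: normalize the input image's color first, then feed the result to the unmodified lighting estimator. The snippet below is a minimal illustration of this idea only; `white_balance` and `estimate_lighting` are hypothetical toy stand-ins (a gray-world correction and a mean-color estimator), not the pre-trained networks or lighting estimation methods evaluated in the paper.

```python
import numpy as np

def white_balance(image):
    """Toy gray-world white balance: scale each channel so its mean
    matches the global mean. A learned white balance network would
    replace this step in the actual pipeline."""
    channel_means = image.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / np.maximum(channel_means, 1e-8)
    return np.clip(image * gains, 0.0, 1.0)

def estimate_lighting(image):
    """Placeholder lighting estimator returning a dominant light color.
    A real model would predict a full HDR environment map."""
    return image.reshape(-1, 3).mean(axis=0)

def color_robust_estimate(image):
    # The adaptation: white-balance the input, then run the frozen
    # lighting estimation model with no retraining.
    return estimate_lighting(white_balance(image))
```

Because the correction happens purely at the input, any existing estimator can be wrapped this way without touching its weights, which is what makes the strategy attractive in practice.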