Improving the color accuracy of lighting estimation models
Zitian Zhang     Joshua Urban Davis     Jeanne Phuong Anh Vu     Jiangtao Kuang     Jean-François Lalonde



Getting the "color right" is critical for realistic virtual object insertion. Left: input image. Middle: virtual object (sofa) relit with estimated lighting from a baseline model. Right: a lighting estimation model with higher color accuracy results in a more realistic composite.


[Paper]

Accepted as an oral presentation at the Color and Imaging Conference, 2025!

Abstract

Advances in high dynamic range (HDR) lighting estimation from a single image have opened new possibilities for augmented reality (AR) applications. Predicting complex lighting environments from a single input image allows for the realistic rendering and compositing of virtual objects. In this work, we investigate the color robustness of such methods---an often overlooked yet critical factor for achieving visual realism. While most evaluations conflate color with other lighting attributes (e.g., intensity, direction), we isolate color as the primary variable of interest. Rather than introducing a new lighting estimation algorithm, we explore whether simple adaptation techniques can enhance the color accuracy of existing models. Using a novel HDR dataset featuring diverse lighting colors, we systematically evaluate several adaptation strategies. Our results show that preprocessing the input image with a pre-trained white balance network improves color robustness, outperforming other strategies across all tested scenarios. Notably, this approach requires no retraining of the lighting estimation model. We further validate the generality of this finding by applying the technique to three state-of-the-art lighting estimation methods from recent literature.
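
The best-performing strategy reported above is a drop-in composition: white-balance the input image with a pre-trained network, then pass the corrected image to the lighting estimator, whose weights are never touched. The sketch below, assuming PyTorch, illustrates that composition; `wb_net`, `lighting_net`, and `ColorRobustLightingEstimator` are hypothetical stand-ins, since this page does not name the concrete networks used in the paper.

```python
import torch
import torch.nn as nn

# Sketch (assumptions: PyTorch; wb_net / lighting_net are placeholders
# for any pre-trained white balance and lighting estimation networks).
class ColorRobustLightingEstimator(nn.Module):
    def __init__(self, wb_net: nn.Module, lighting_net: nn.Module):
        super().__init__()
        self.wb_net = wb_net.eval()              # color-cast correction
        self.lighting_net = lighting_net.eval()  # frozen: no retraining
        for p in self.parameters():
            p.requires_grad_(False)

    @torch.no_grad()
    def forward(self, image: torch.Tensor) -> torch.Tensor:
        balanced = self.wb_net(image)       # 1) neutralize the color cast
        return self.lighting_net(balanced)  # 2) estimate HDR lighting

# Toy check with identity stand-ins for the two networks.
model = ColorRobustLightingEstimator(nn.Identity(), nn.Identity())
envmap = model(torch.rand(1, 3, 256, 256))  # e.g., an HDR environment map
```

Because the estimator itself is left untouched, the same wrapper can in principle be placed around any of the three state-of-the-art lighting estimation methods evaluated in the paper.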

Pipeline