Towards a Perceptual Evaluation Framework for Lighting Estimation
Justine Giroux     Mohammad Reza Karimi Dastjerdi    
Yannick Hold-Geoffroy     Javier Vazquez-Corral     Jean-François Lalonde    

[Paper]
[Supplementary]
[Code]
[Poster]
[Video]


Accepted at the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024!
Best Interactive Paper Award winner at the Color & Imaging Conference (CIC32), 2024!

Abstract

Progress in lighting estimation is tracked by computing existing image quality assessment (IQA) metrics on images from standard datasets. While this may appear to be a reasonable approach, we demonstrate that doing so does not correlate with human preference when the estimated lighting is used to relight a virtual scene composited into a real photograph. To study this, we design a controlled psychophysical experiment in which human observers must choose their preferred image among rendered scenes lit using a set of lighting estimation algorithms selected from the recent literature, and we use it to analyse how these algorithms perform according to human perception. We then demonstrate that none of the most popular IQA metrics from the literature, taken individually, correctly represents human perception. Finally, we show that by learning a combination of existing IQA metrics, we can more accurately represent human preference. This provides a new perceptual framework to help evaluate future lighting estimation algorithms.
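
The core idea of the proposed framework is to learn weights over several existing IQA metrics so that their combination predicts the observers' pairwise choices. The sketch below illustrates this idea on synthetic data, using PSNR and SSIM as stand-in metrics and a logistic regression over per-metric score differences; the metric set, the combiner, and the data generation are assumptions for illustration and do not reproduce the authors' released code.

    # Sketch: learn a weighted combination of IQA metrics that predicts
    # human pairwise (two-alternative forced choice) preferences.
    # Illustrative assumptions: PSNR/SSIM as the metric bank, logistic
    # regression as the combiner, synthetic images as stand-in study data.
    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity
    from sklearn.linear_model import LogisticRegression

    def iqa_features(img, ref):
        # Score one rendered image against the reference photograph.
        return np.array([
            peak_signal_noise_ratio(ref, img, data_range=1.0),
            structural_similarity(ref, img, channel_axis=-1, data_range=1.0),
        ])

    # Synthetic stand-in for the user study: each trial shows two renders
    # of the same scene; the observer picks the one closer to the reference.
    rng = np.random.default_rng(0)
    ref = rng.random((64, 64, 3))
    comparisons = []
    for _ in range(40):
        sigma_a, sigma_b = rng.uniform(0.02, 0.3, size=2)
        img_a = np.clip(ref + rng.normal(0.0, sigma_a, ref.shape), 0.0, 1.0)
        img_b = np.clip(ref + rng.normal(0.0, sigma_b, ref.shape), 0.0, 1.0)
        comparisons.append((img_a, img_b, sigma_a < sigma_b))

    # Feature per trial: difference of per-metric scores between the two
    # candidates; label: which candidate the observer preferred.
    X = np.stack([iqa_features(a, ref) - iqa_features(b, ref)
                  for a, b, _ in comparisons])
    y = np.array([int(chose_a) for _, _, chose_a in comparisons])

    # Fit weights so the combined metric agrees with human choices.
    model = LogisticRegression().fit(X, y)
    print("learned metric weights:", model.coef_)

In the paper's setting, the synthetic trials would be replaced by the psychophysical study data and the two stand-in metrics by a larger bank of IQA metrics; the learned weights then define a single perceptual score for ranking lighting estimation algorithms.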


Citation

@InProceedings{giroux2024towards,
    author    = {Giroux, Justine and Dastjerdi, Mohammad Reza Karimi and Hold-Geoffroy, Yannick and Vazquez-Corral, Javier and Lalonde, Jean-Fran\c{c}ois},
    title     = {Towards a Perceptual Evaluation Framework for Lighting Estimation},
    booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    year      = {2024}
}


Acknowledgements

This research was supported by Sentinel North and NSERC grant RGPIN 2020-04799. The authors thank all members of the lab for discussions and help with the paper, as well as all participants for taking part in our user study. JVC was supported by Grant PID2021-128178OB-I00, funded by MCIN/AEI/10.13039/501100011033 and ERDF "A way of making Europe"; by the Departament de Recerca i Universitats of the Generalitat de Catalunya (ref. 2021SGR01499); and by the "Ayudas para la recualificación del sistema universitario español" financed by the European Union-NextGenerationEU.