Abstract

We present a method to estimate the depth of field effect from a single image. Most existing methods for this task provide per-pixel estimates of blur, depth, or both. Instead, we go further and propose a lens-based representation that models the depth of field using two parameters: the blur factor and the focus disparity. These two parameters, along with our new signed defocus representation, result in a more intuitive and linear representation directly solvable through linear least squares. Furthermore, our method explicitly enforces consistency between the estimated defocus blur, the lens parameters, and the depth map. Finally, we train our deep-learning-based model on a mix of real images with synthetic depth of field and fully synthetic images. These improvements result in a more robust and accurate method, as demonstrated by our state-of-the-art results. In particular, our lens parametrization enables several applications, such as 3D staging for AR environments and seamless object compositing.
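To give an intuition for the linearity mentioned above, here is a minimal sketch (not the paper's implementation) of how a signed defocus that is linear in disparity, c = A·(d − d_f), lets both lens parameters be recovered by ordinary linear least squares. The function name, the synthetic data, and the exact parametrization are illustrative assumptions.

```python
import numpy as np

def fit_lens_parameters(disparity, signed_defocus):
    """Fit blur factor A and focus disparity d_f from per-pixel estimates,
    assuming the linear model c = A * (d - d_f) = A*d + b with b = -A*d_f."""
    d = np.asarray(disparity, dtype=float).ravel()
    c = np.asarray(signed_defocus, dtype=float).ravel()
    # Design matrix for the affine model c = A*d + b.
    X = np.stack([d, np.ones_like(d)], axis=1)
    (A, b), *_ = np.linalg.lstsq(X, c, rcond=None)
    return A, -b / A  # blur factor, focus disparity

# Synthetic check: noisy per-pixel signed defocus under a known lens.
rng = np.random.default_rng(0)
d = rng.uniform(0.0, 1.0, 500)                       # per-pixel disparities
c = 2.5 * (d - 0.4) + rng.normal(0.0, 0.01, d.shape) # A = 2.5, d_f = 0.4
A_hat, df_hat = fit_lens_parameters(d, c)
```

Under this toy model, `A_hat` and `df_hat` recover the ground-truth blur factor and focus disparity up to the injected noise.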

Video presentation

Dataset

Coming soon.

BibTeX

@inproceedings{pichemeunier2023lens,
    title = {Lens Parameter Estimation for Realistic Depth of Field Modeling},
    booktitle = {IEEE/CVF International Conference on Computer Vision (ICCV)},
    author = {Pich{\'e}-Meunier, Dominique and Hold-Geoffroy, Yannick and Zhang, Jianming and Lalonde, Jean-Fran{\c c}ois},
    year = {2023},
}