Overparameterization Improves StyleGAN Inversion
Yohan Poirier-Ginter, Alexandre Lessard, Ryan Smith, Jean-François Lalonde





Presented at the AI for Content Creation Workshop, 2022!


Abstract

Deep generative models like StyleGAN hold the promise of semantic image editing: modifying images by their content, rather than their pixel values. Unfortunately, working with arbitrary images requires inverting the StyleGAN generator, which has remained challenging so far. Existing inversion approaches obtain promising yet imperfect results, having to trade off between reconstruction quality and downstream editability. To improve quality, these approaches must resort to various techniques that extend the model latent space after training. Taking a step back, we observe that these methods essentially all propose, in one way or another, to increase the number of free parameters. This suggests that inversion might be difficult because it is underconstrained. In this work, we address this directly and dramatically overparameterize the latent space, before training, with simple changes to the original StyleGAN architecture. Our overparameterization increases the available degrees of freedom, which in turn facilitates inversion. We show that this allows us to obtain near-perfect image reconstruction without the need for encoders or for altering the latent space after training. Our approach also retains editability, which we demonstrate by realistically interpolating between images.
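
The core idea is that inversion reduces to optimization over a latent with enough degrees of freedom to match the target image. As a rough illustration only (not the paper's method, which overparameterizes the StyleGAN architecture itself before training), the Python sketch below inverts a pretrained generator by directly optimizing an overparameterized latent with one independent vector per synthesis layer; the `generator` call signature, the layer count, and the pixel-space loss are all placeholder assumptions.

```python
# Illustrative sketch: optimization-based inversion with an
# overparameterized latent (one free vector per synthesis layer,
# rather than a single shared latent). Placeholder assumptions:
# `generator` maps a (1, num_layers, latent_dim) tensor to an image.

import torch
import torch.nn.functional as F

def invert(generator, target, num_layers=18, latent_dim=512,
           steps=1000, lr=0.01):
    """Reconstruct `target` by optimizing an overparameterized latent."""
    device = target.device
    # One free latent per layer increases the degrees of freedom
    # available to the optimizer.
    latents = torch.zeros(1, num_layers, latent_dim, device=device,
                          requires_grad=True)
    opt = torch.optim.Adam([latents], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        recon = generator(latents)        # assumed generator signature
        loss = F.mse_loss(recon, target)  # pixel loss; perceptual losses
        loss.backward()                   # (e.g., LPIPS) help in practice
        opt.step()
    return latents.detach()
```

Given two inverted latents, editability can then be probed by decoding linear interpolations between them, (1 − t)·w_a + t·w_b, and checking that the intermediate images remain realistic.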


Paper and Supplementary Material

Yohan Poirier-Ginter, Alexandre Lessard, Ryan Smith, Jean-François Lalonde
Overparameterization Improves StyleGAN Inversion
(hosted on arXiv)


[BibTeX]



Acknowledgements

This research was supported by NSERC grants CRDPJ 537961-18 and RGPIN-2020-04799, and by Compute Canada.