GAN-based image restoration inverts the generative process to repair images corrupted by known degradations. Existing unsupervised methods must be carefully tuned for each task and degradation level. In this work, we make StyleGAN image restoration robust: a single set of hyperparameters works across a wide range of degradation levels. This makes it possible to handle combinations of several degradations, without the need to retune. Our proposed approach relies on a 3-phase progressive latent space extension and a conservative optimizer, which avoids the need for any additional regularization terms. Extensive experiments demonstrate robustness on inpainting, upsampling, denoising, and deartifacting at varying degradation levels, outperforming other StyleGAN-based inversion techniques. Our approach also compares favorably to diffusion-based restoration by yielding much more realistic inversion results.
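To make the core idea concrete, the sketch below illustrates GAN-inversion-based restoration in its simplest form: optimize a latent code so that the known degradation applied to the generator's output matches the corrupted observation. This is a minimal illustration only, not the paper's method: `ToyGenerator`, the `degrade` operator, and all hyperparameters are placeholder assumptions standing in for a pretrained StyleGAN and the actual degradation models, and it omits the 3-phase progressive latent extension and conservative optimizer described above.

```python
import torch

# Placeholder standing in for a pretrained StyleGAN generator:
# maps a latent code w to an RGB image.
class ToyGenerator(torch.nn.Module):
    def __init__(self, latent_dim=512, image_size=64):
        super().__init__()
        self.fc = torch.nn.Linear(latent_dim, 3 * image_size * image_size)
        self.image_size = image_size

    def forward(self, w):
        x = self.fc(w)
        return x.view(-1, 3, self.image_size, self.image_size)

def degrade(x):
    # Known degradation operator; here, 4x downsampling (an upsampling task).
    return torch.nn.functional.avg_pool2d(x, 4)

G = ToyGenerator()
y = degrade(torch.rand(1, 3, 64, 64))        # observed degraded image
w = torch.zeros(1, 512, requires_grad=True)  # latent code to optimize

opt = torch.optim.Adam([w], lr=0.05)
for step in range(200):
    opt.zero_grad()
    # Data-fidelity loss: degraded generator output vs. observation.
    loss = torch.nn.functional.mse_loss(degrade(G(w)), y)
    loss.backward()
    opt.step()

restored = G(w).detach()  # restored image = generator output at the optimum
```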