Published on Wed Jun 05 2019

Style Generator Inversion for Image Enhancement and Animation

Aviv Gabbay, Yedid Hoshen

Adversarial learning often experiences mode collapse, which manifests in generators that cannot generate some modes of the target distribution. We show that style generators, unlike earlier GANs, are easy to invert and outperform both other GANs and Deep Image Prior as priors for image enhancement tasks.

Abstract

One of the main motivations for training high quality image generative models is their potential use as tools for image manipulation. Recently, generative adversarial networks (GANs) have been able to generate images of remarkable quality. Unfortunately, adversarially-trained unconditional generator networks have not been successful as image priors. One of the main requirements for a network to act as a generative image prior is being able to generate every possible image from the target distribution. Adversarial learning often experiences mode collapse, which manifests in generators that cannot generate some modes of the target distribution. Another requirement often not satisfied is invertibility, i.e., having an efficient way of finding a valid input latent code given a required output image. In this work, we show that, unlike earlier GANs, the very recently proposed style generators are quite easy to invert. We use this important observation to propose style generators as general purpose image priors. We show that style generators outperform other GANs as well as Deep Image Prior as priors for image enhancement tasks. The latent space spanned by style generators satisfies linear identity-pose relations. This latent space linearity, combined with invertibility, allows us to animate still facial images without supervision. Extensive experiments are performed to support the main contributions of this paper.
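The abstract describes two operations: inverting a style generator (recovering a latent code whose output matches a given image) and exploiting the linearity of the latent space to edit that code, e.g. for pose animation. The sketch below illustrates the general idea only; it is not the authors' implementation. `TinyGenerator`, `invert`, and `pose_direction` are hypothetical stand-ins, and the loss, optimizer, and step counts are assumptions chosen so the snippet runs end to end.

```python
# Minimal sketch (not the paper's code): invert a style generator by gradient
# descent on the latent code, then edit the recovered code along a pose
# direction. TinyGenerator is a stand-in so the example is self-contained;
# in practice it would be a pre-trained StyleGAN-like synthesis network.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyGenerator(nn.Module):
    """Stand-in generator mapping a latent code w to a small RGB image."""

    def __init__(self, latent_dim: int = 512, image_size: int = 32):
        super().__init__()
        self.image_size = image_size
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 3 * image_size * image_size),
            nn.Tanh(),
        )

    def forward(self, w: torch.Tensor) -> torch.Tensor:
        x = self.net(w)
        return x.view(-1, 3, self.image_size, self.image_size)


def invert(generator: nn.Module, target: torch.Tensor,
           latent_dim: int = 512, steps: int = 300, lr: float = 0.05) -> torch.Tensor:
    """Optimize a latent code w so that generator(w) reconstructs target."""
    w = torch.zeros(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Simple pixel loss; perceptual losses are also commonly used.
        loss = F.mse_loss(generator(w), target)
        loss.backward()
        opt.step()
    return w.detach()


if __name__ == "__main__":
    G = TinyGenerator()
    target = torch.rand(1, 3, 32, 32) * 2 - 1   # dummy target image in [-1, 1]
    w_hat = invert(G, target)                   # inversion: image -> latent code
    print("reconstruction MSE:", F.mse_loss(G(w_hat), target).item())

    # Latent-space linearity: a pose edit is a step along a direction vector.
    # The direction here is random purely for illustration; the paper relies on
    # linear identity-pose relations in the learned latent space.
    pose_direction = torch.randn(1, 512)
    animated = G(w_hat + 0.5 * pose_direction)
    print("animated frame shape:", tuple(animated.shape))
```

Animating a still face then amounts to sweeping the coefficient on the pose direction and generating one frame per step from the edited latent codes.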

Wed Dec 12 2018
Neural Networks
A Style-Based Generator Architecture for Generative Adversarial Networks
We propose an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature. The new architecture leads to an automatically learned, unsupervised separation of high-level attributes and stochastic variation in the generated images.
Tue Jun 04 2019
Computer Vision
Example-Guided Style Consistent Image Synthesis from Semantic Labeling
Example-guided image synthesis aims to synthesize an image from a semantic label map and an exemplary image indicating style. We use the term "style" in this problem to refer to implicit characteristics of images.
Mon Sep 14 2020
Computer Vision
Improving Inversion and Generation Diversity in StyleGAN using a Gaussianized Latent Space
Modern Generative Adversarial Networks are capable of creating artificial images from latent vectors living in a low-dimensional learned latent space. However, the reconstructed latent vectors are unstable and small perturbations result in significant image distortions. We propose to explicitly model the data distribution in latent space.
Sun May 26 2019
Computer Vision
Disentangling Style and Content in Anime Illustrations
Existing methods for AI-generated artworks still struggle with generating high-quality stylized content. We show the ability to generate high-fidelity anime portraits with fixed content and a large variety of styles from over a thousand artists.
Thu Nov 19 2020
Computer Vision
Style Intervention: How to Achieve Spatial Disentanglement with Style-based Generators?
Generative Adversarial Networks (GANs) with style-based generators (e.g., StyleGAN) successfully enable semantic control over image synthesis. However, traveling in the latent space leads to "spatially entangled changes" in the corresponding images.
Wed Feb 24 2021
Computer Vision
AniGAN: Style-Guided Generative Adversarial Networks for Unsupervised Anime Face Generation
AniGAN is a novel GAN-based translator that synthesizes high-quality anime faces. We propose a double-branch discriminator to learn both domain-specific and domain-shared distributions. Extensive experiments are performed on selfie2anime and a new face2anime dataset.