Published on Tue Jul 13 2021

Deep Neural Networks are Surprisingly Reversible: A Baseline for Zero-Shot Inversion

Xin Dong, Hongxu Yin, Jose M. Alvarez, Jan Kautz, Pavlo Molchanov

The new method scales zero-shot direct inversion to deep architectures and complex datasets. Inversion of generators in GANs unveils the latent code of a given synthesized face image at 128x128px.

Abstract

Understanding the behavior and vulnerability of pre-trained deep neural networks (DNNs) can help to improve them. Analysis can be performed by reversing the network's flow to generate inputs from internal representations. Most existing work relies on priors or data-intensive optimization to invert a model, yet struggles to scale to deep architectures and complex datasets. This paper presents a zero-shot direct model inversion framework that recovers the input to the trained model given only the internal representation. The crux of our method is to invert the DNN in a divide-and-conquer manner while re-syncing the inverted layers via cycle-consistency guidance with the help of synthesized data. As a result, we obtain a single feed-forward model capable of inversion with a single forward pass, without seeing any real data of the original task. With the proposed approach, we scale zero-shot direct inversion to deep architectures and complex datasets. We empirically show that modern classification models on ImageNet can, surprisingly, be inverted, allowing an approximate recovery of the original 224x224px images from a representation after more than 20 layers. Moreover, inversion of generators in GANs unveils the latent code of a given synthesized face image at 128x128px, which can even, in turn, improve defective synthesized images from GANs.
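
As an illustration of the divide-and-conquer idea described in the abstract, the PyTorch-style sketch below pairs each frozen block of a model with a small trainable decoder and trains the decoders on synthesized inputs, using a per-block reconstruction term plus a cycle-consistency term that re-encodes the decoded signal. This is a minimal sketch of the general recipe, not the authors' implementation; `blocks`, `make_decoder`, and `synth_batch` are hypothetical placeholders.

```python
# Illustrative sketch only: `blocks`, `make_decoder`, and `synth_batch` are
# hypothetical placeholders, not part of the authors' released code.
import torch
import torch.nn.functional as F

def train_inverters(blocks, make_decoder, synth_batch, steps=1000, lam=1.0):
    """blocks: the frozen target model split into stages f_1..f_L.
    make_decoder(i): builds a trainable module g_i approximating the inverse of f_i.
    synth_batch(): yields synthesized inputs (no real data from the original task)."""
    decoders = [make_decoder(i) for i in range(len(blocks))]
    opt = torch.optim.Adam([p for g in decoders for p in g.parameters()], lr=1e-4)
    for _ in range(steps):
        x = synth_batch()
        feats = [x]
        with torch.no_grad():                 # the pre-trained model stays frozen
            for f in blocks:
                feats.append(f(feats[-1]))
        loss = 0.0
        for i, (f, g) in enumerate(zip(blocks, decoders)):
            x_hat = g(feats[i + 1])                       # invert one block independently
            rec = F.mse_loss(x_hat, feats[i])             # reconstruct the block input
            cyc = F.mse_loss(f(x_hat), feats[i + 1])      # cycle-consistency: re-encode
            loss = loss + rec + lam * cyc
        opt.zero_grad()
        loss.backward()
        opt.step()
    return decoders

def invert(decoders, h):
    """Single feed-forward inversion: run the per-block decoders in reverse order."""
    for g in reversed(decoders):
        h = g(h)
    return h
```

Chaining the trained per-block decoders in reverse order, as in `invert`, is what yields the single feed-forward inversion model mentioned above.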

Thu Nov 26 2020
Artificial Intelligence
Omni-GAN: On the Secrets of cGANs and Beyond
Omni-GAN is a variant of cGAN that reveals the devil in designing a proper discriminator for training the model. The key is to ensure that the discriminator receives strong supervision to perceive the concepts and moderate regularization to avoid collapse.
Tue Apr 06 2021
Computer Vision
ReStyle: A Residual-Based StyleGAN Encoder via Iterative Refinement
The power of unconditional image synthesis has significantly advanced through the use of Generative Adversarial Networks (GANs). Instead of directly predicting the latent code of a given real image using a single pass, the encoder is tasked with predicting a residual with respect to the current estimate.
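The iterative refinement scheme described above can be sketched in a few lines; this is a generic illustration under assumed names (`encoder`, `generator`, `avg_latent`), not ReStyle's actual interface.

```python
# Illustrative sketch under assumed names; not ReStyle's actual interface.
import torch

def iterative_invert(encoder, generator, image, avg_latent, n_iters=5):
    """Refine a latent estimate: the encoder sees the target image together with
    the current reconstruction and predicts a residual update to the latent."""
    w = avg_latent.clone()        # start from an average latent code
    y = generator(w)              # initial reconstruction
    for _ in range(n_iters):
        delta = encoder(torch.cat([image, y], dim=1))  # predict a residual, not w itself
        w = w + delta             # update the current estimate
        y = generator(w)          # re-synthesize with the refined latent
    return w, y
```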
Mon Dec 23 2019
Computer Vision
CNN-generated images are surprisingly easy to spot... for now
In this work we ask whether it is possible to create a "universal" detector for telling apart real images from those generated by a CNN. To test this, we collect a dataset consisting of fake images generated by 11 different CNN-based image generator models. We demonstrate that, with careful pre- and post-processing and data augmentation, a standard image classifier trained on only one specific CNN generator generalizes surprisingly well to unseen architectures, datasets, and training methods.
Mon Sep 17 2018
Neural Networks
FermiNets: Learning generative machines to generate efficient neural networks via generative synthesis
The success of deep learning is often offset by substantial architectural and computational complexity. Can we learn generative machines to automatically generate deep neural networks with efficient network architectures? This study introduces the idea of generative synthesis.
Mon May 10 2021
Machine Learning
Robust Training Using Natural Transformation
Thu Feb 07 2019
Computer Vision
Reversible GANs for Memory-efficient Image-to-Image Translation
The Pix2pix and CycleGAN losses have vastly improved the qualitative and quantitative visual quality of results in image-to-image translation tasks. We are able to demonstrate superior quantitative output on the Cityscapes and Maps datasets at a near-constant memory budget.
Fri Sep 28 2018
Machine Learning
Large Scale GAN Training for High Fidelity Natural Image Synthesis
We train Generative Adversarial Networks at the largest scale yet attempted. We find that applying orthogonal regularization to the generator renders it amenable to a simple "truncation trick". Our modifications lead to models which set the new state of the art in class-conditional image synthesis.
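The "truncation trick" referenced above amounts to sampling latents from a truncated normal, trading sample variety for per-sample fidelity; a minimal generic sketch (not BigGAN's implementation):

```python
# Generic truncated sampling, not BigGAN's implementation.
import torch

def truncated_normal(shape, threshold=0.5):
    """Resample entries of a standard normal that fall outside [-threshold, threshold].
    Smaller thresholds trade sample variety for per-sample fidelity."""
    z = torch.randn(shape)
    while True:
        mask = z.abs() > threshold
        if not mask.any():
            return z
        z[mask] = torch.randn(int(mask.sum()))
```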
Mon Jan 11 2021
Computer Vision
RepVGG: Making VGG-style ConvNets Great Again
We present a simple but powerful convolutional neural network architecture. The training-time model has a multi-branch topology that is re-parameterized into a plain stack of 3x3 convolutions at inference time. On ImageNet, RepVGG reaches over 80% top-1 accuracy.
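A simplified sketch of that structural re-parameterization, with batch-norm fusion omitted and equal input/output channels assumed (illustrative only, not the RepVGG reference code):

```python
# Simplified re-parameterization sketch (no batch norm, equal in/out channels).
import torch
import torch.nn.functional as F

def fuse_branches(w3x3, b3x3, w1x1, b1x1, channels):
    """Fold a 1x1 conv branch and an identity branch into one 3x3 conv so the
    multi-branch training-time block becomes a single conv at inference."""
    w1x1_padded = F.pad(w1x1, [1, 1, 1, 1])          # place the 1x1 kernel at the 3x3 center
    w_identity = torch.zeros(channels, channels, 3, 3)
    for c in range(channels):
        w_identity[c, c, 1, 1] = 1.0                 # identity branch as a centered 3x3 kernel
    return w3x3 + w1x1_padded + w_identity, b3x3 + b1x1
```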
Fri Nov 02 2018
Machine Learning
Invertible Residual Networks
We show that standard ResNet architectures can be made invertible. This allows the same model to be used for classification, density estimation, and generation. Invertible ResNets perform competitively with state-of-the-art image classifiers and flow-based generative models.
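Invertible residual blocks of the form y = x + g(x) can be inverted by fixed-point iteration when the residual branch g is constrained to be contractive (in i-ResNets, via spectral normalization); a minimal sketch, assuming g is a PyTorch module and y a tensor:

```python
# Minimal fixed-point inversion of a residual block y = x + g(x).
def invert_residual_block(g, y, n_iters=50):
    """Converges when g is contractive (Lipschitz constant < 1), which i-ResNets
    enforce by constraining the spectral norm of the residual branch."""
    x = y.clone()
    for _ in range(n_iters):
        x = y - g(x)
    return x
```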
Sat Jun 13 2020
Machine Learning
Bootstrap your own latent: A new approach to self-supervised Learning
Bootstrap Your Own Latent (BYOL) is a new approach to self-supervised image representation learning. BYOL relies on two neural networks, referred to as online and target networks, that interact and learn from each other.
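A minimal sketch of how the target network typically tracks the online network in this setup, via an exponential moving average of parameters (an illustration, not the BYOL reference code):

```python
# Illustration of the exponential-moving-average target update; not BYOL's reference code.
import torch

@torch.no_grad()
def update_target(online, target, tau=0.996):
    """The target network never receives gradients; its parameters track an
    exponential moving average of the online network's parameters."""
    for p_online, p_target in zip(online.parameters(), target.parameters()):
        p_target.mul_(tau).add_(p_online, alpha=1.0 - tau)
```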
Tue Dec 03 2019
Computer Vision
Analyzing and Improving the Image Quality of StyleGAN
The style-based GAN architecture (StyleGAN) yields state-of-the-art results in data-driven unconditional generative image modeling. We expose and analyze several of its characteristic artifacts, and propose changes in both model architecture and training methods to address them.
Wed Jun 17 2020
Machine Learning
Big Self-Supervised Models are Strong Semi-Supervised Learners
A key ingredient of our approach is the use of big (deep and wide) networks during pretraining and fine-tuning. We find that the fewer the labels, the more this approach (task-agnostic use of unlabeled data) benefits from a bigger network. This procedure achieves 73.9% ImageNet top-1 accuracy with 1% of the labels.