Published on Tue Oct 02 2018

Deep Decoder: Concise Image Representations from Untrained Non-convolutional Networks

Reinhard Heckel, Paul Hand

Abstract

Deep neural networks, in particular convolutional neural networks, have become highly effective tools for compressing images and solving inverse problems including denoising, inpainting, and reconstruction from few and noisy measurements. This success can be attributed in part to their ability to represent and generate natural images well. Contrary to classical tools such as wavelets, image-generating deep neural networks have a large number of parameters---typically a multiple of their output dimension---and need to be trained on large datasets. In this paper, we propose an untrained simple image model, called the deep decoder, which is a deep neural network that can generate natural images from very few weight parameters. The deep decoder has a simple architecture with no convolutions and fewer weight parameters than the output dimensionality. This underparameterization enables the deep decoder to compress images into a concise set of network weights, which we show is on par with wavelet-based thresholding. Further, underparameterization provides a barrier to overfitting, allowing the deep decoder to have state-of-the-art performance for denoising. The deep decoder is simple in the sense that each layer has an identical structure that consists of only one upsampling unit, pixel-wise linear combination of channels, ReLU activation, and channelwise normalization. This simplicity makes the network amenable to theoretical analysis, and it sheds light on the aspects of neural networks that enable them to form effective signal representations.
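
The layer structure described above can be written down compactly. The following is a minimal PyTorch-style sketch, not the authors' reference implementation: the choice of bilinear upsampling, k = 64 channels, 5 layers, BatchNorm2d as a stand-in for channelwise normalization, and the final 1x1 layer with a sigmoid are assumptions made for illustration. With these settings the network has roughly num_layers * k^2, i.e. about 20,000, weight parameters, well below the 3 x 512 x 512 output dimension, matching the underparameterization discussed in the abstract.

```python
import torch
import torch.nn as nn

def deep_decoder(k=64, num_layers=5, out_channels=3):
    """Sketch of a deep-decoder-style network: every layer is an upsampling
    unit, a pixel-wise (1x1) linear combination of channels, a ReLU, and a
    channelwise normalization; there are no standard spatial convolutions."""
    layers = []
    for _ in range(num_layers):
        layers += [
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(k, k, kernel_size=1, bias=False),  # pixel-wise channel mixing
            nn.ReLU(),
            nn.BatchNorm2d(k),  # stand-in for channelwise normalization
        ]
    layers += [nn.Conv2d(k, out_channels, kernel_size=1), nn.Sigmoid()]
    return nn.Sequential(*layers)

# Typical untrained usage: keep a random low-resolution input fixed and
# optimize only the network weights to fit a single target image.
net = deep_decoder()
z = torch.randn(1, 64, 16, 16)  # fixed random input tensor
img = net(z)                    # output of shape 1 x 3 x 512 x 512
```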

Thu Oct 31 2019
Machine Learning
Denoising and Regularization via Exploiting the Structural Bias of Convolutional Generators
Convolutional Neural Networks (CNNs) have emerged as highly successful tools for image generation, recovery, and restoration. A major contributing factor to their success is that convolutional networks impose strong prior assumptions about natural images. A surprising experiment highlights this architectural bias.
Tue Jul 23 2019
Machine Learning
Convolutional Dictionary Learning in Hierarchical Networks
We propose an alternating minimization algorithm for learning the filters in this hierarchical model. The algorithm alternates between a coefficient-estimation step and a filter update step. The coefficient-estimation step performs sparse (detail) coding and, when unfolded, leads to a deep neural network.
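
The alternating structure described in this snippet can be illustrated with a small, generic dictionary-learning sketch. Everything below is an assumption for illustration: a plain (non-convolutional, non-hierarchical) dictionary, an ISTA loop for the sparse-coding step, and a single gradient step with column renormalization for the filter update. It shows the alternation pattern, not the paper's actual algorithm.

```python
import numpy as np

def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def alternating_min(Y, num_atoms=32, lam=0.1, outer=20, inner=50, lr=1e-2, seed=0):
    """Alternate a coefficient-estimation step (sparse coding via ISTA) with a
    filter (dictionary) update step, for min ||Y - D X||_F^2 + lam ||X||_1."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], num_atoms))
    D /= np.linalg.norm(D, axis=0, keepdims=True)
    X = np.zeros((num_atoms, Y.shape[1]))
    for _ in range(outer):
        # coefficient-estimation step: ISTA iterations with step size 1 / ||D||_2^2
        step = 1.0 / np.linalg.norm(D, 2) ** 2
        for _ in range(inner):
            X = soft_threshold(X + step * D.T @ (Y - D @ X), step * lam)
        # filter update step: one gradient step on D, then renormalize its columns
        D -= lr * (D @ X - Y) @ X.T
        D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-12
    return D, X
```
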
Wed Jun 24 2020
Machine Learning
Hierarchically Compositional Tasks and Deep Convolutional Networks
The main success stories of deep learning, starting with ImageNet, depend on deep convolutional networks. These networks perform significantly better than traditional shallow classifiers, such as support vector machines, and also better than deep fully connected networks. Recent results in approximation theory have proved an exponential advantage of deep convolutional networks.
Wed Nov 14 2018
Computer Vision
Deep Learning in the Wavelet Domain
This paper examines the possibility of, and the potential advantages of, learning the filters of convolutional neural networks (CNNs) for image analysis. We are motivated by both Mallat's scattering transform and the idea of filtering in the Fourier domain.
Tue Jan 19 2016
Machine Learning
Understanding Deep Convolutional Networks
Deep convolutional networks provide state-of-the-art classification and regression results. We review their architecture, which scatters data with a cascade of linear filter weights and non-linearities. A mathematical framework is introduced to analyze their properties.
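
The cascade mentioned here can be sketched schematically. The 1-D toy signal, the random placeholder filters, and the modulus non-linearity below are assumptions chosen only to show the structure of a scattering-type cascade, not the specific operators analyzed in the paper.

```python
import numpy as np

def cascade(x, filter_banks):
    """At each level, convolve every current signal with each linear filter in
    the bank and apply a modulus non-linearity, as in scattering-type cascades."""
    signals = [x]
    for bank in filter_banks:
        signals = [np.abs(np.convolve(s, h, mode="same")) for s in signals for h in bank]
    return signals

rng = np.random.default_rng(0)
x = rng.standard_normal(256)  # toy 1-D signal
banks = [[rng.standard_normal(9) for _ in range(4)] for _ in range(2)]  # placeholder filters
features = cascade(x, banks)  # 4 * 4 = 16 scattered signals at depth two
```
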
Mon Oct 28 2019
Machine Learning
On approximating with neural networks
We prove a theorem that a neural network with more than one hidden layer can only represent one feature in its first hidden layer. The proof of the theorem is straightforward; two backward paths and a weight-tying matrix play the key roles.