Published on Sat Dec 28 2013

Rate-Distortion Auto-Encoders

Luis G. Sanchez Giraldo, Jose C. Principe

The goal is to learn a representation that is minimally committed to the input data, but rich enough to reconstruct the inputs up to a certain level of distortion. The proposed algorithm uses a measure of entropy based on infinitely divisible matrices that avoids plug-in estimation of densities.

Abstract

A rekindled interest in auto-encoder algorithms has been spurred by recent work on deep learning. Current efforts have been directed towards effective training of auto-encoder architectures with a large number of coding units. Here, we propose a learning algorithm for auto-encoders based on a rate-distortion objective that minimizes the mutual information between the inputs and the outputs of the auto-encoder subject to a fidelity constraint. The goal is to learn a representation that is minimally committed to the input data, but that is rich enough to reconstruct the inputs up to a certain level of distortion. Minimizing the mutual information acts as a regularization term, whereas the fidelity constraint can be understood as a risk functional in the conventional statistical learning setting. The proposed algorithm uses a recently introduced measure of entropy based on infinitely divisible matrices that avoids the plug-in estimation of densities. Experiments using over-complete bases show that rate-distortion auto-encoders can learn a regularized input-output mapping in an implicit manner.
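
As a rough illustration of the two ingredients the abstract names, the sketch below computes the matrix-based entropy from the eigenvalues of a normalized kernel Gram matrix, forms a mutual-information estimate between inputs and reconstructions, and combines it with a squared-error fidelity term in a Lagrangian form of the rate-distortion objective. The Gaussian kernel, the entropy order alpha = 2, the kernel width sigma, and the trade-off weight lam are illustrative assumptions, not the paper's exact choices.

import numpy as np

def gram_matrix(X, sigma=1.0):
    # Gaussian-kernel Gram matrix, normalized to unit trace
    # (k(x, x) = 1, so dividing by n gives trace 1).
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    K = np.exp(-d2 / (2.0 * sigma**2))
    return K / K.shape[0]

def matrix_entropy(A, alpha=2.0):
    # S_alpha(A) = 1/(1-alpha) * log2( sum_i lambda_i(A)^alpha ),
    # computed from the eigenvalues of the unit-trace matrix A.
    eigvals = np.clip(np.linalg.eigvalsh(A), 0.0, None)
    return np.log2(np.sum(eigvals**alpha)) / (1.0 - alpha)

def matrix_mutual_information(X, Y, sigma=1.0, alpha=2.0):
    # I_alpha(X; Y) = S(A) + S(B) - S(A∘B / tr(A∘B)),
    # where ∘ is the Hadamard (element-wise) product.
    A, B = gram_matrix(X, sigma), gram_matrix(Y, sigma)
    AB = A * B
    AB = AB / np.trace(AB)
    return matrix_entropy(A, alpha) + matrix_entropy(B, alpha) - matrix_entropy(AB, alpha)

def rate_distortion_loss(X, X_hat, lam=1.0, sigma=1.0):
    # Lagrangian relaxation of "minimize rate subject to a fidelity constraint":
    # mutual information between inputs and reconstructions plus lam times
    # the mean squared distortion.
    distortion = np.mean(np.sum((X - X_hat)**2, axis=1))
    return matrix_mutual_information(X, X_hat, sigma) + lam * distortion

In an actual training loop the reconstructions X_hat would come from the encoder/decoder pair, and the loss above would be minimized with respect to their parameters; the sketch only shows how the rate and distortion terms fit together.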

Sun Mar 30 2014
Neural Networks
Auto-encoders: reconstruction versus compression
Minimizing a codelength for the data using an auto-encoder is equivalent to minimizing the reconstruction error plus some correcting terms. These terms have an interpretation as either a denoising or contractive property of the decoding function.
Wed May 06 2020
Machine Learning
Stochastic Bottleneck: Rateless Auto-Encoder for Flexible Dimensionality Reduction
Mon Apr 15 2019
Machine Learning
Exact Rate-Distortion in Autoencoders via Echo Noise
Compression is at the heart of effective representation learning. However, lossy compression is typically achieved through simple parametric models like Gaussian noise. We introduce a new noise channel, Echo noise, that admits a simple expression for mutual information for arbitrary input distributions.
Wed Jan 16 2013
Machine Learning
Switched linear encoding with rectified linear autoencoders
This paper explores in depth an autoencoder model that is constructed using rectified linear activations on its hidden units. Our analysis builds on recent results to further unify the world of sparse linear coding models.
Sun Sep 20 2020
Machine Learning
Deep Autoencoders: From Understanding to Generalization Guarantees
Deep Autoencoders (AEs) are a mainstream deep learning solution for learning compressed, interpretable, and structured data representations. We take a step towards a better understanding of the underlying phenomena of AEs.
Sat Jun 05 2021
Computer Vision
Principal Bit Analysis: Autoencoding with Schur-Concave Loss
We consider a linear autoencoder in which the latent variables are quantized, corrupted by noise, and the constraint is Schur-concave. Although finding the optimal encoder/decoder pair for this setup is a nonconvex optimization problem, we show that …