Published on Wed May 06 2020

Stochastic Bottleneck: Rateless Auto-Encoder for Flexible Dimensionality Reduction

Toshiaki Koike-Akino, Ye Wang
Abstract

We propose a new concept of rateless auto-encoders (RL-AEs) that enable a flexible latent dimensionality, which can be seamlessly adjusted for varying distortion and dimensionality requirements. In the proposed RL-AEs, instead of a deterministic bottleneck architecture, we use an over-complete representation that is stochastically regularized with weighted dropouts, in a manner analogous to sparse AEs (SAEs). Unlike SAEs, our RL-AEs employ monotonically increasing dropout rates across the latent representation nodes, so that the latent variables become sorted by importance, as in principal component analysis (PCA). This is motivated by the rateless property of conventional PCA, where the least important principal components can be discarded to realize variable-rate dimensionality reduction that gracefully degrades the distortion. In contrast, since the latent variables of conventional AEs are equally important for data reconstruction, they cannot simply be discarded to further reduce the dimensionality after the AE model is trained. Our proposed stochastic bottleneck framework enables seamless rate adaptation with high reconstruction performance, without requiring a predetermined latent dimensionality at training time. We experimentally demonstrate that the proposed RL-AEs achieve variable-rate dimensionality reduction with lower distortion than conventional AEs.
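As a rough illustration of the idea (not the authors' exact architecture), the sketch below implements a stochastic bottleneck in PyTorch: an over-complete latent layer whose nodes are dropped with monotonically increasing rates, so low-index nodes carry the most important features and trailing nodes can be discarded after training. The linear dropout ramp, layer sizes, and inverted-dropout scaling are illustrative assumptions rather than details taken from the paper.

```python
import torch
import torch.nn as nn

class StochasticBottleneck(nn.Module):
    """Drop latent node i with a monotonically increasing rate p_i, so that
    low-index nodes are kept more often and learn the most important features
    (a PCA-like ordering). The linear ramp of rates is an illustrative
    assumption, not the paper's exact schedule."""
    def __init__(self, latent_dim, p_min=0.0, p_max=0.9):
        super().__init__()
        rates = torch.linspace(p_min, p_max, latent_dim)
        self.register_buffer("keep_prob", 1.0 - rates)

    def forward(self, z):
        if self.training:
            mask = torch.bernoulli(self.keep_prob.expand_as(z))
            return z * mask / self.keep_prob  # inverted-dropout scaling
        return z

class RatelessAE(nn.Module):
    def __init__(self, in_dim=784, latent_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU(),
                                     nn.Linear(512, latent_dim))
        self.bottleneck = StochasticBottleneck(latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                                     nn.Linear(512, in_dim))

    def forward(self, x, keep_dims=None):
        z = self.bottleneck(self.encoder(x))
        if keep_dims is not None:
            # Rateless use: zero out (discard) the least important trailing
            # latent nodes at test time, without retraining the model.
            z = z.clone()
            z[:, keep_dims:] = 0.0
        return self.decoder(z)

# Example: one trained model serves multiple latent dimensionalities.
model = RatelessAE()
x = torch.randn(32, 784)
full_recon = model(x)                  # use all 256 latent nodes
reduced_recon = model(x, keep_dims=32) # keep only the 32 most important nodes
```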

Sat Dec 28 2013
Machine Learning
Rate-Distortion Auto-Encoders
The goal is to learn a representation that is minimally committed to the input data, but rich enough to reconstruct the inputs up to a certain level of distortion. The proposed algorithm uses a measure of entropy based on infinitely divisible matrices that avoids plug-in estimation of densities.
Sat Jun 05 2021
Computer Vision
Principal Bit Analysis: Autoencoding with Schur-Concave Loss
We consider a linear autoencoder in which the latent variables are quantized, corrupted by noise, and the constraint is Schur-concave. Although finding the optimal encoder/decoder pair for this setup is a nonconvex optimization problem, we show that…
Mon May 27 2019
Machine Learning
Quantization-Based Regularization for Autoencoders
Autoencoders provide unsupervised models for learning low-dimensional representations for downstream tasks. Without proper regularization, autoencoder models are susceptible to overfitting. We introduce a quantization-based regularizer in the bottleneck stage to learn meaningful latent representations.
Tue Jan 22 2019
Machine Learning
CAE-ADMM: Implicit Bitrate Optimization via ADMM-based Pruning in Compressive Autoencoders
The ADMM-pruned Compressive AutoEncoder (CAE-ADMM) uses the Alternating Direction Method of Multipliers (ADMM) to optimize the trade-off between distortion and efficiency in lossy image compression.
Sun Mar 30 2014
Neural Networks
Auto-encoders: reconstruction versus compression
Minimizing a codelength for the data using an auto-encoder is equivalent to minimizing the reconstruction error plus some correcting terms. These terms have an interpretation as either a denoising or contractive property of the decoding function.
Mon Apr 15 2019
Machine Learning
Exact Rate-Distortion in Autoencoders via Echo Noise
Compression is at the heart of effective representation learning. However, lossy compression is typically achieved through simple parametric models like Gaussian noise. We introduce a new noise channel, Echo noise, that admits a simple expression for mutual information for arbitrary input distributions.