Published on Fri Dec 19 2014

Gradual training of deep denoising auto encoders

Alexander Kalmanovich, Gal Chechik

Abstract

Stacked denoising autoencoders (DAEs) are well known to learn useful deep representations, which can be used to improve supervised training by initializing a deep network. We investigate a training scheme for a deep DAE, where DAE layers are gradually added and keep adapting as additional layers are added. We show that in the regime of mid-sized datasets, this gradual training provides a small but consistent improvement over stacked training in both reconstruction quality and classification error on the MNIST and CIFAR datasets.
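
As a concrete illustration of the scheme the abstract describes, here is a minimal PyTorch sketch (not the authors' code): each time a new encoder/decoder pair is appended, the whole network keeps training on the denoising reconstruction objective, rather than freezing the layers trained so far. The layer widths, masking-noise level, and optimizer settings are illustrative assumptions, not the paper's.

```python
# Minimal sketch of gradual DAE training, assuming masking noise and MSE
# reconstruction; all hyperparameters here are illustrative, not the paper's.
import torch
import torch.nn as nn

def batches(n=64, d=784):
    while True:
        yield torch.rand(n, d)              # stand-in for MNIST-like data

def corrupt(x, p=0.3):
    return x * (torch.rand_like(x) > p)     # masking noise: zero a fraction p

def train_all_layers(encoders, decoders, data, steps=500):
    # Gradual training: every layer added so far keeps adapting.
    params = [p for m in encoders + decoders for p in m.parameters()]
    opt = torch.optim.Adam(params, lr=1e-3)
    for _ in range(steps):
        x = next(data)
        h = corrupt(x)
        for enc in encoders:                # encode through all current layers
            h = torch.sigmoid(enc(h))
        for dec in reversed(decoders):      # decode back to input space
            h = torch.sigmoid(dec(h))
        loss = nn.functional.mse_loss(h, x)
        opt.zero_grad(); loss.backward(); opt.step()

sizes = [784, 500, 250, 100]                # hypothetical layer widths
encoders, decoders, data = [], [], batches()
for d_in, d_out in zip(sizes, sizes[1:]):
    encoders.append(nn.Linear(d_in, d_out))
    decoders.append(nn.Linear(d_out, d_in))
    train_all_layers(encoders, decoders, data)  # keep training everything after each addition
```

Stacked training, by contrast, would optimize only the newest pair's parameters at each stage; the abstract's claim is that letting earlier layers keep adapting yields small but consistent gains on mid-sized datasets.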

Sat Apr 11 2015
Neural Networks
Gradual Training Method for Denoising Auto Encoders
Stacked denoising autoencoders (DAEs) are well known to learn useful deep representations. We investigate a training scheme for a deep DAE, where DAE layers are gradually added and keep adapting as additional layers are added. We show that in the regime of mid-sized datasets, this gradual training provides a small but consistent improvement over stacked training in both reconstruction quality and classification error.

Tue Feb 16 2021
Artificial Intelligence
Training Stacked Denoising Autoencoders for Representation Learning
We implement stacked denoising autoencoders, a class of neural networks that can learn powerful representations of high-dimensional data. We describe stochastic gradient descent for unsupervised training of autoencoders, as well as a novel genetic-algorithm-based approach.

Thu May 05 2011
Artificial Intelligence
Rapid Feature Learning with Stacked Linear Denoisers
Stacked Denoising Autoencoders (SdAs) can lead to significant improvements in accuracy. In contrast to SdAs, our algorithm requires no training through gradient descent and can be implemented in less than 20 lines of code.

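The "less than 20 lines" claim above lends itself to a sketch. Below is a minimal, hedged reconstruction of the idea of a stacked linear denoiser, assuming masking noise and an ordinary-least-squares solve; the paper's exact algorithm may differ.

```python
# Sketch of one stacked linear-denoiser layer: learn a matrix W that maps
# corrupted inputs back to clean ones in closed form, with no gradient descent.
# The noise level, regularizer, and tanh nonlinearity are assumptions.
import numpy as np

def linear_denoiser_layer(X, p=0.5, n_copies=5, reg=1e-5, seed=0):
    rng = np.random.default_rng(seed)
    # Build several corrupted copies of the data (masking noise).
    Xc = np.vstack([X * (rng.random(X.shape) > p) for _ in range(n_copies)])
    Xs = np.vstack([X] * n_copies)           # clean targets, aligned row-wise
    # Closed-form least squares: W^T = (Xc^T Xc + reg*I)^{-1} Xc^T Xs
    W = np.linalg.solve(Xc.T @ Xc + reg * np.eye(X.shape[1]), Xc.T @ Xs).T
    return np.tanh(X @ W.T)                  # nonlinearity between stacked layers

# Stacking: feed each layer's output into the next.
H = np.random.rand(256, 64)                  # toy data
for _ in range(3):
    H = linear_denoiser_layer(H)
```

Because each layer is a single regularized least-squares solve, training is fast and deterministic, which is the contrast with gradient-trained SdAs that the snippet draws.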
Thu Nov 23 2017
Machine Learning
A Pitfall of Unsupervised Pre-Training
A Stacked Convolutional Auto-Encoder is good at reconstructing pictures, but it is not necessarily good at discriminating their classes. When using Auto-Encoders, one intuitively assumes that features which are good for reconstruction will also lead to high classification accuracy.

Fri Dec 07 2012
Neural Networks
Layer-wise learning of deep generative models
When using deep, multi-layered architectures to build generative models, it is difficult to train all layers at once. We propose a layer-wise training procedure that admits a performance guarantee relative to the global optimum.