Published on Thu May 21 2015

Why Regularized Auto-Encoders learn Sparse Representation?

Devansh Arpit, Yingbo Zhou, Hung Ngo, Venu Govindaraju

The authors of Batch Normalization (BN) identify and address an important problem involved in training deep networks, but BN depends on batch statistics for layerwise input normalization during training. Our approach instead uses a data-independent parametric estimate of the mean and standard deviation in every layer.

Abstract

While the authors of Batch Normalization (BN) identify and address an important problem involved in training deep networks, namely Internal Covariate Shift, the current solution has certain drawbacks. For instance, BN depends on batch statistics for layerwise input normalization during training, which makes the estimates of the mean and standard deviation of the input (distribution) to hidden layers inaccurate due to shifting parameter values (especially during initial training epochs). Another fundamental problem with BN is that it cannot be used with a batch size of 1 during training. We address these drawbacks of BN by proposing a non-adaptive normalization technique for removing covariate shift that we call Normalization Propagation. Our approach does not depend on batch statistics, but rather uses a data-independent parametric estimate of the mean and standard deviation in every layer, and is thus computationally faster than BN. We exploit the observation that the pre-activations before Rectified Linear Units follow a Gaussian distribution in deep networks, and that once the first- and second-order statistics of any given dataset are normalized, we can forward-propagate this normalization without the need to recalculate the approximate statistics for hidden layers.
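To make the propagation idea concrete, here is a minimal NumPy sketch under the Gaussian assumption stated in the abstract. It is not the paper's full method (learnable parameters and the treatment of the input layer are omitted), and the layer sizes and the helper name normprop_layer are illustrative: if a layer's input is approximately standard normal per unit and the weight rows are rescaled to unit norm, the post-ReLU mean and standard deviation follow in closed form, so no batch statistics are needed.

```python
import numpy as np

# For z ~ N(0, 1): E[ReLU(z)] = 1/sqrt(2*pi), Var[ReLU(z)] = 1/2 - 1/(2*pi).
RELU_MEAN = 1.0 / np.sqrt(2.0 * np.pi)
RELU_STD = np.sqrt(0.5 - 1.0 / (2.0 * np.pi))

def normprop_layer(x, W):
    """One hidden layer with a propagated, data-independent normalization.

    Assumes the incoming activations are approximately N(0, 1) per unit.
    Rows of W are rescaled to unit L2 norm so the pre-activations remain
    approximately standard normal; the post-ReLU output is then shifted and
    scaled by the closed-form Gaussian constants above instead of by batch
    statistics.
    """
    W = W / np.linalg.norm(W, axis=1, keepdims=True)   # unit-norm rows
    pre = x @ W.T                                      # ~ N(0, 1) per unit
    post = np.maximum(pre, 0.0)                        # ReLU
    return (post - RELU_MEAN) / RELU_STD               # re-standardize

# Illustrative check on random, already-normalized input (sizes are arbitrary):
rng = np.random.default_rng(0)
x = rng.standard_normal((10000, 64))
h = normprop_layer(x, rng.standard_normal((32, 64)))
print(h.mean(), h.std())   # close to 0 and 1 without any batch statistics
```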

Wed Nov 21 2018
Machine Learning
Regularizing by the Variance of the Activations' Sample-Variances
Normalization techniques play an important role in supporting efficient and often more effective training of deep neural networks. The new loss term encourages the variance of the activations to be stable and not vary from one random mini-batch to the next.
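As a rough illustration of the summarized idea (an assumption about the mechanism, not this paper's exact loss), the NumPy sketch below computes the per-unit sample variance of a layer's activations for several mini-batches and penalizes how much those sample variances differ from one mini-batch to the next; the function name, batch shapes, and the final averaging are placeholder choices.

```python
import numpy as np

def activation_variance_penalty(activation_batches):
    """Penalize instability of per-unit activation variance across mini-batches.

    `activation_batches` is a list of arrays of shape (batch_size, units).
    For each mini-batch, take the per-unit sample variance of the activations,
    then measure how much those variances vary across mini-batches and average
    the result over units.
    """
    per_batch_var = np.stack([a.var(axis=0) for a in activation_batches])
    return per_batch_var.var(axis=0).mean()

# Illustrative usage with random "activations"; in training, such a penalty
# would be scaled by a coefficient and added to the task loss.
rng = np.random.default_rng(0)
batches = [rng.standard_normal((32, 128)) for _ in range(4)]
print(activation_variance_penalty(batches))
```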
Fri Mar 04 2016
Machine Learning
Normalization Propagation: A Parametric Technique for Removing Internal Covariate Shift in Deep Networks
The authors of Batch Normalization (BN) identify and address an important problem involved in training deep networks. BN depends on batch statistics for layerwise input normalization during training, which makes the estimates of the mean and standard deviation of the inputs to hidden layers inaccurate. We address these drawbacks by proposing a non-adaptive normalization technique that we call Normalization Propagation.
Fri Jun 01 2018
Artificial Intelligence
Understanding Batch Normalization
Batch normalization (BN) is a technique to normalize activations in intermediate layers of deep neural networks. Its tendency to improve accuracy and speed up training has established BN as a favorite technique in deep learning.
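For reference, a bare-bones NumPy version of the training-time BN transform discussed here: each unit is normalized by its mini-batch mean and standard deviation, then rescaled and shifted by learned parameters gamma and beta. Running statistics, the convolutional case, and the backward pass are left out, and the shapes below are arbitrary.

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Training-time batch normalization for activations of shape (batch, units)."""
    mean = x.mean(axis=0)                      # per-unit mini-batch mean
    var = x.var(axis=0)                        # per-unit mini-batch variance
    x_hat = (x - mean) / np.sqrt(var + eps)    # normalize each unit
    return gamma * x_hat + beta                # learned scale and shift

rng = np.random.default_rng(0)
x = 3.0 + 2.0 * rng.standard_normal((64, 16))
y = batch_norm_forward(x, gamma=np.ones(16), beta=np.zeros(16))
print(y.mean(axis=0)[:3], y.std(axis=0)[:3])   # each unit ends up ~0 mean, ~1 std
```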
Thu Nov 01 2018
Machine Learning
Stochastic Normalizations as Bayesian Learning
Batch Normalization (BN) improves the generalization performance of deep networks. We argue that one major reason is the randomness of batch statistics. We apply this idea to other (deterministic) normalization techniques that are oblivious to the batch size.
Mon Jun 07 2021
Machine Learning
Batch Normalization Orthogonalizes Representations in Deep Random Networks
This paper underlines a subtle property of batch normalization. Successive batch normalizations with random linear transformations make hidden representations increasingly orthogonal across layers of a deep neural network. The deviation of the representations from orthogonality rapidly decays with depth.
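A small, self-contained numerical illustration of the stated property (an independent sketch, not this paper's experiment or its exact orthogonality measure): stack random linear layers, each followed by batch normalization, start from strongly correlated inputs, and track how far the normalized Gram matrix of the hidden representations is from a scaled identity. The depth, width, and distance measure used here are arbitrary choices.

```python
import numpy as np

def bn(x, eps=1e-5):
    """Per-unit batch normalization (no learned scale/shift)."""
    return (x - x.mean(axis=0)) / (x.std(axis=0) + eps)

def orthogonality_gap(h):
    """Frobenius distance of the normalized Gram matrix from a scaled identity."""
    g = h @ h.T
    g = g / np.linalg.norm(g)                  # Frobenius-normalize
    n = h.shape[0]
    return np.linalg.norm(g - np.eye(n) / np.sqrt(n))

rng = np.random.default_rng(0)
base = rng.standard_normal(256)
x = base + 0.1 * rng.standard_normal((32, 256))   # 32 strongly correlated samples
h = x
print(0, round(float(orthogonality_gap(h)), 4))
for depth in range(1, 11):
    W = rng.standard_normal((256, 256)) / np.sqrt(256)    # random linear layer
    h = bn(h @ W)
    print(depth, round(float(orthogonality_gap(h)), 4))   # gap tends to shrink with depth
```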
Tue Aug 18 2020
Machine Learning
Training Deep Neural Networks Without Batch Normalization
Batch normalization was developed to combat covariate shift inside networks. Empirically it is known to work, but there is a lack of theoretical understanding about its effectiveness. This work studies batch normalization in detail, while comparing it with other methods.