Published on Tue Oct 10 2017

High-dimensional dynamics of generalization error in neural networks

Madhu S. Advani, Andrew M. Saxe

The dynamics of gradient descent learning naturally protect against overtraining. Overtraining is worst at intermediate network sizes. Low generalization error requires starting with small initial weights. Making networks very large does not harm their generalization performance.

Abstract

We perform an average case analysis of the generalization dynamics of large neural networks trained using gradient descent. We study the practically-relevant "high-dimensional" regime where the number of free parameters in the network is on the order of or even larger than the number of examples in the dataset. Using random matrix theory and exact solutions in linear models, we derive the generalization error and training error dynamics of learning and analyze how they depend on the dimensionality of data and signal to noise ratio of the learning problem. We find that the dynamics of gradient descent learning naturally protect against overtraining and overfitting in large networks. Overtraining is worst at intermediate network sizes, when the effective number of free parameters equals the number of samples, and thus can be reduced by making a network smaller or larger. Additionally, in the high-dimensional regime, low generalization error requires starting with small initial weights. We then turn to non-linear neural networks, and show that making networks very large does not harm their generalization performance. On the contrary, it can in fact reduce overtraining, even without early stopping or regularization of any sort. We identify two novel phenomena underlying this behavior in overcomplete models: first, there is a frozen subspace of the weights in which no learning occurs under gradient descent; and second, the statistical properties of the high-dimensional regime yield better-conditioned input correlations which protect against overtraining. We demonstrate that naive application of worst-case theories such as Rademacher complexity are inaccurate in predicting the generalization performance of deep neural networks, and derive an alternative bound which incorporates the frozen subspace and conditioning effects and qualitatively matches the behavior observed in simulation.
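
A minimal numerical sketch of the linear setting described in the abstract (the student-teacher setup, sizes, and step size below are illustrative assumptions, not the paper's exact analysis): under gradient descent on an overparameterized linear regression, weight components orthogonal to the row space of the data receive zero gradient and keep their initial values, which is the frozen subspace, while the training error is driven to zero.

```python
# Sketch only: overparameterized linear student-teacher problem trained by
# gradient descent, illustrating the frozen subspace. All constants are
# illustrative assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
N, P = 100, 400                       # samples and parameters, with P > N
X = rng.normal(size=(N, P)) / np.sqrt(P)
w_star = 2.0 * rng.normal(size=P)     # teacher weights (assumed setup)
y = X @ w_star + rng.normal(size=N)   # noisy teacher labels

w = 1e-3 * rng.normal(size=P)         # small initial weights
w0 = w.copy()
lr, steps = 5.0, 2000
for _ in range(steps):
    grad = X.T @ (X @ w - y) / N      # gradient lies in the row space of X
    w -= lr * grad

# Directions orthogonal to the row space of X are never updated:
# this is the frozen subspace, where the weights keep their initial values.
_, _, Vt = np.linalg.svd(X, full_matrices=True)
frozen = Vt[N:]                       # P - N null-space directions of X
print("movement in frozen subspace:", np.linalg.norm(frozen @ (w - w0)))  # ~0

X_test = rng.normal(size=(1000, P)) / np.sqrt(P)
y_test = X_test @ w_star + rng.normal(size=1000)
print("train MSE:", np.mean((X @ w - y) ** 2))
print("test  MSE:", np.mean((X_test @ w - y_test) ** 2))
```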

Tue Dec 25 2018
Machine Learning
Overparameterized Nonlinear Learning: Gradient Descent Takes the Shortest Path?
Many modern learning tasks involve fitting nonlinear models to data in an overparameterized regime where the number of model parameters exceeds the size of the training dataset. The training loss may then have infinitely many global minima, and it is critical to understand the properties of the solutions found by first-order optimization.
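
One concrete instance of this question, as a hedged sketch rather than the paper's result: for an underdetermined least-squares problem, gradient descent started from zero converges to the minimum-norm global minimum (the pseudo-inverse solution), one precise sense in which first-order optimization selects a particular solution among infinitely many. Problem sizes and step size below are illustrative.

```python
# Sketch: gradient descent from zero on an underdetermined least-squares
# problem recovers the minimum-norm interpolating solution.
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 200                         # fewer equations than unknowns
A = rng.normal(size=(n, p))
b = rng.normal(size=n)

w = np.zeros(p)                        # start at zero
lr = 1.0 / np.linalg.norm(A, 2) ** 2   # safely below 2 / largest eigenvalue of A^T A
for _ in range(500):
    w -= lr * A.T @ (A @ w - b)

w_min_norm = np.linalg.pinv(A) @ b     # the minimum-norm global minimum
print("residual norm:", np.linalg.norm(A @ w - b))                        # ~0
print("distance to min-norm solution:", np.linalg.norm(w - w_min_norm))   # ~0
```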
Sat Dec 30 2017
Machine Learning
Theory of Deep Learning III: explaining the non-overfitting puzzle
A main puzzle of deep networks revolves around the absence of overfitting despite large overparametrization. The proposition depends on the qualitative theory of dynamical systems and is supported by numerical results.
Mon Jun 08 2020
Machine Learning
The Heavy-Tail Phenomenon in SGD
In recent years, various notions of capacity and complexity have been proposed for characterizing the generalization properties of stochastic gradient descent. We argue that these seemingly unrelated perspectives on generalization are deeply linked to each other. We show that even in a simple linear regression problem, the iterates can be heavy-tailed with infinite variance.
Mon Oct 30 2017
Machine Learning
Stochastic gradient descent performs variational inference, converges to limit cycles for deep networks
Stochastic gradient descent (SGD) is widely believed to perform implicit regularization when used to train deep neural networks. We prove that SGD minimizes an average potential over the posterior distribution of weights along with an entropic regularization term. This potential is, however, not the original loss function in general.
Thu Sep 03 2015
Machine Learning
Train faster, generalize better: Stability of stochastic gradient descent
We show that parametric models trained by a stochastic gradient method (SGM) with few iterations have vanishing generalization error. We prove our results by arguing that SGM is algorithmically stable in the sense of Bousquet and Elisseeff.
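
A rough numerical illustration of the stability notion (not the paper's proof; the model, constants, and helper below are hypothetical choices): run SGD with shared sampling randomness on two training sets that differ in a single example and measure how far the learned parameters drift apart. Uniform stability in the sense of Bousquet and Elisseeff controls generalization through exactly this kind of sensitivity.

```python
# Sketch: sensitivity of SGD on linear regression to replacing one training
# example, as a proxy for algorithmic stability. Illustrative constants only.
import numpy as np

rng = np.random.default_rng(2)
n, d = 200, 20
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

X2, y2 = X.copy(), y.copy()
X2[0], y2[0] = rng.normal(size=d), rng.normal()   # swap out a single example

def sgd(X, y, epochs, lr=0.001, seed=3):
    order_rng = np.random.default_rng(seed)       # shared sampling randomness
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in order_rng.permutation(len(y)):
            w -= lr * (X[i] @ w - y[i]) * X[i]    # squared-loss gradient step
    return w

for epochs in (1, 5, 20):
    gap = np.linalg.norm(sgd(X, y, epochs) - sgd(X2, y2, epochs))
    print(f"epochs={epochs:2d}  parameter gap = {gap:.4f}")
```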
Tue May 28 2019
Neural Networks
SGD on Neural Networks Learns Functions of Increasing Complexity
We show that in the initial epochs, almost all of the improvement of the classifier obtained by SGD can be explained by a linear classifier. More generally, we give evidence for the hypothesis that, as iterations progress, SGD learns functions of increasing complexity.