Published on Sat Jan 16 2021

Phases of learning dynamics in artificial neural networks: with or without mislabeled data

Yu Feng, Yuhai Tu

Abstract

Despite the tremendous success of deep neural networks in machine learning, the underlying reason for their superior learning capability remains unclear. Here, we present a framework based on statistical physics to study the dynamics of stochastic gradient descent (SGD), which drives learning in neural networks. Using the minibatch gradient ensemble, we construct order parameters to characterize the dynamics of weight updates in SGD. Without mislabeled data, we find that the SGD learning dynamics transitions from a fast learning phase to a slow exploration phase, a transition associated with large changes in the order parameters that characterize the alignment of the SGD gradients and their mean amplitude. With randomly mislabeled samples, the SGD learning dynamics falls into four distinct phases. The system first finds solutions for the correctly labeled samples in phase I; it then wanders around these solutions in phase II until it finds a direction along which to learn the mislabeled samples during phase III, after which it finds solutions that satisfy all training samples during phase IV. Correspondingly, the test error decreases during phase I and remains low during phase II; however, it increases during phase III and reaches a high plateau during phase IV. The transitions between phases can be understood from changes in the order parameters that characterize the alignment of the mean gradients for the correctly and incorrectly labeled samples and their (relative) strengths during learning. We find that the individual sample losses for the two datasets are most separated during phase II, which suggests a cleaning process that eliminates mislabeled samples to improve generalization.
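The order parameters described above can be illustrated with a small sketch. The toy model below (linear regression with squared loss, a stand-in since the abstract does not specify an architecture) builds a minibatch gradient ensemble and computes two quantities in the spirit of the paper's order parameters: the alignment of each minibatch gradient with the ensemble mean, and the mean gradient amplitude. The setup, function names, and hyperparameters are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem: linear regression with squared loss.
n, d = 256, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)
w = np.zeros(d)

def minibatch_grads(w, batch_size=32):
    """Per-minibatch gradients of the squared loss (the minibatch gradient ensemble)."""
    idx = rng.permutation(n)
    grads = []
    for s in range(0, n, batch_size):
        b = idx[s:s + batch_size]
        err = X[b] @ w - y[b]
        grads.append(X[b].T @ err / len(b))
    return np.array(grads)

def order_parameters(grads):
    """Alignment R (mean cosine similarity of each minibatch gradient with the
    ensemble mean) and the amplitude of the mean gradient."""
    g_mean = grads.mean(axis=0)
    amp = np.linalg.norm(g_mean)
    cos = grads @ g_mean / (np.linalg.norm(grads, axis=1) * amp + 1e-12)
    return cos.mean(), amp

# Train with the mean minibatch gradient; in the fast learning phase the
# minibatch gradients are well aligned (R near 1), while near a solution the
# alignment and the mean amplitude both drop (the slow exploration phase).
for step in range(200):
    w -= 0.05 * minibatch_grads(w).mean(axis=0)

R, amp = order_parameters(minibatch_grads(w))
```

Tracking `(R, amp)` over training steps, rather than only at the end as here, is what reveals the phase transitions the abstract describes.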

Tue Oct 17 2017
Artificial Intelligence
A Bayesian Perspective on Generalization and Stochastic Gradient Descent
We consider two questions at the heart of machine learning. How can we predict if a minimum will generalize to the test set? And why does stochastic gradient descent find minima that generalize well?
Tue May 02 2017
Machine Learning
A Strategy for an Uncompromising Incremental Learner
We show that phantom sampling helps avoid catastrophic forgetting during incremental learning. We apply these strategies to competitive multi-class learning of deep neural networks. We further put our strategy to test on challenging cases, including cross-domain increments and incrementing on a novel label space.
Fri Jun 11 2021
Machine Learning
Label Noise SGD Provably Prefers Flat Global Minimizers
In overparametrized models, the noise in stochastic gradient descent (SGD) implicitly regularizes the optimization trajectory. We study the implicit regularization effect of SGD with label noise. We also prove extensions to classification with general loss functions.
Mon Feb 24 2020
Neural Networks
The Early Phase of Neural Network Training
We examine the changes that deep neural networks undergo during this early phase of training. We find that weight distributions are highly non-independent even after only a few hundred iterations. Pre-training with blurred inputs or an auxiliary self-supervised task can approximate the changes in networks.
Thu Apr 08 2021
Machine Learning
A Theoretical Analysis of Learning with Noisily Labeled Data
Fri Sep 28 2018
Machine Learning
SIGUA: Forgetting May Make Learning with Noisy Labels More Robust
Over-parameterized deep networks can gradually memorize the data, and fit everything in the end. Many learning methods in this area still suffer overfitting due to undesired memorization. We propose stochastic integrated gradient underweighted ascent.