Published on Tue Aug 15 2017

Improved Regularization of Convolutional Neural Networks with Cutout

Terrance DeVries, Graham W. Taylor

Convolutional neural networks are capable of learning powerful representational spaces. However, due to the model capacity required to capture such representations, they are often susceptible to overfitting. A simple regularization technique called cutout can be used to improve performance.

Abstract

Convolutional neural networks are capable of learning powerful representational spaces, which are necessary for tackling complex learning tasks. However, due to the model capacity required to capture such representations, they are often susceptible to overfitting and therefore require proper regularization in order to generalize well. In this paper, we show that the simple regularization technique of randomly masking out square regions of input during training, which we call cutout, can be used to improve the robustness and overall performance of convolutional neural networks. Not only is this method extremely easy to implement, but we also demonstrate that it can be used in conjunction with existing forms of data augmentation and other regularizers to further improve model performance. We evaluate this method by applying it to current state-of-the-art architectures on the CIFAR-10, CIFAR-100, and SVHN datasets, yielding new state-of-the-art results of 2.56%, 15.20%, and 1.30% test error respectively. Code is available at https://github.com/uoguelph-mlrg/Cutout
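The masking operation the abstract describes can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' released implementation (see the linked repository for that); the function name, the 16-pixel default, and the zero fill value are assumptions for the sketch. Following the paper's description, the square is centred at a uniformly random location and clipped at the image border, so the visible occluded area varies.

```python
import numpy as np

def cutout(image, mask_size=16, rng=None):
    """Zero out a random square region of an H x W x C image.

    mask_size=16 is an illustrative default; the paper tunes the
    square size per dataset.
    """
    if rng is None:
        rng = np.random.default_rng()
    h, w = image.shape[:2]
    # Pick the mask centre uniformly over the image; the square may
    # extend past the border, where it is clipped.
    cy = int(rng.integers(h))
    cx = int(rng.integers(w))
    y1, y2 = max(0, cy - mask_size // 2), min(h, cy + mask_size // 2)
    x1, x2 = max(0, cx - mask_size // 2), min(w, cx + mask_size // 2)
    out = image.copy()
    out[y1:y2, x1:x2, :] = 0
    return out
```

In practice this would be applied per image during training only, after any other augmentation such as random crops and flips, exactly as one more transform in the input pipeline.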

Related Papers

Wed Jun 26 2019
Machine Learning
Further advantages of data augmentation on convolutional neural networks
Data augmentation is a popular technique widely used to enhance the training of convolutional neural networks. Many of its benefits are well known to deep learning researchers and practitioners, but its implicit regularization effects remain largely unstudied.
Thu Sep 18 2014
Neural Networks
Deeply-Supervised Nets
We make an attempt to boost the classification performance by studying a new formulation in deep networks. Our proposed deeply-supervised nets (DSN) method simultaneously minimizes classification error while making the learning process of hidden layers direct and transparent.
Mon Oct 08 2018
Computer Vision
Diagnosing Convolutional Neural Networks using their Spectral Response
Convolutional Neural Networks (CNNs) are a class of artificial neural networks. CNNs use convolution, together with other linear and non-linear operations, to perform classification or regression. This paper explores the spectral response of CNNs and their potential use in diagnosing problems.
Thu Mar 11 2021
Computer Vision
Preprint: Norm Loss: An efficient yet effective regularization method for deep neural networks
Thu Feb 18 2016
Computer Vision
RandomOut: Using a convolutional gradient norm to rescue convolutional filters
The 28x28 Inception-V3 model fails to train 26% of the time when varying the random seed alone. This is a problem that affects the trial-and-error process of designing a network. We propose to evaluate and replace specific convolutional filters.
Tue Oct 22 2019
Machine Learning
Explicitly Bayesian Regularizations in Deep Learning
Generalization is essential for deep learning. We demonstrate explicitly Bayesian regularizations in a specific category of DNNs, i.e., Convolutional Neural Networks (CNNs). In addition, we clarify two recently observed empirical phenomena.