Published on Tue Oct 24 2017

Improving Accuracy of Nonparametric Transfer Learning via Vector Segmentation

Vincent Gripon, Ghouthi B. Hacene, Matthias Löwe, Franck Vermet

Transfer learning using deep neural networks as feature extractors has become increasingly popular over the past few years. In this paper, we show that the features extracted using deep neural networks have specific properties which can be used to improve the accuracy of nonparametric learning methods.

Abstract

Transfer learning using deep neural networks as feature extractors has become increasingly popular over the past few years. It makes it possible to obtain state-of-the-art accuracy on datasets too small to train a deep neural network on their own, and it provides cutting-edge descriptors that, combined with nonparametric learning methods, allow rapid and flexible deployment of high-performing solutions in computationally restricted settings. In this paper, we show that the features extracted using deep neural networks have specific properties which can be used to improve the accuracy of downstream nonparametric learning methods. Namely, we demonstrate that for some distributions where information is embedded in a few coordinates, segmenting feature vectors can lead to better accuracy. We show how this model can be applied to real datasets by performing experiments using three mainstream deep neural network feature extractors and four databases, in vision and audio.
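The segmentation idea described in the abstract can be illustrated with a minimal sketch: split each extracted feature vector into contiguous segments, run a plain k-NN classifier on each segment independently, and combine the per-segment predictions by majority vote. This is a hypothetical illustration of the general approach, not the paper's exact procedure; the function name, the contiguous splitting scheme, and the voting rule are assumptions.

```python
import numpy as np
from collections import Counter

def segmented_knn_predict(train_feats, train_labels, query, n_segments=2, k=3):
    """Classify `query` by splitting feature vectors into contiguous
    segments, running Euclidean k-NN on each segment independently,
    and taking a majority vote over the per-segment predictions.

    Hypothetical sketch of segmentation-based nonparametric learning;
    the paper's actual segmentation and aggregation may differ.
    """
    train_segs = np.array_split(train_feats, n_segments, axis=1)
    query_segs = np.array_split(query, n_segments)
    votes = []
    for tr, q in zip(train_segs, query_segs):
        # Plain Euclidean k-NN restricted to this segment's coordinates.
        dists = np.linalg.norm(tr - q, axis=1)
        nearest = np.argsort(dists)[:k]
        seg_labels = [train_labels[i] for i in nearest]
        votes.append(Counter(seg_labels).most_common(1)[0][0])
    # Aggregate per-segment decisions by majority vote.
    return Counter(votes).most_common(1)[0][0]
```

When the discriminative information lives in a few coordinates, segments that contain those coordinates can vote correctly even if distances computed over the full vector are dominated by uninformative dimensions.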

Tue Mar 30 2021
Artificial Intelligence
Exploiting Invariance in Training Deep Neural Networks
Thu May 26 2016
Neural Networks
Discrete Deep Feature Extraction: A Theory and New Architectures
First steps towards a mathematical theory of deep convolutional neural networks for feature extraction were made in 2012 and 2015. We investigate how certain structural properties of the input signal are reflected in the corresponding feature vectors. Experiments on handwritten digit classification and facial landmark recognition complement the theoretical findings.
Tue Apr 18 2017
Machine Learning
Unsupervised Learning by Predicting Noise
This paper introduces a generic framework to train deep networks, end-to-end, with no supervision. The proposed representations perform on par with state-of-the-art unsupervised methods on ImageNet and Pascal VOC.
Thu Aug 29 2019
Machine Learning
Estimation of a function of low local dimensionality by deep neural networks
Deep neural networks (DNNs) achieve impressive results for complicated tasks like object detection on images and speech recognition. There is a strong interest in showing good theoretical properties of DNNs. The aim of this paper is to contribute to the current statistical theory of DNNs.
Mon Nov 23 2015
Machine Learning
Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
The "exponential linear unit" (ELU) speeds up learning in deep neural networks. In contrast to ReLUs, ELUs take negative values, which allows them to push mean unit activations closer to zero. ELU networks significantly outperform ReLU networks with batch normalization.
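The ELU described in this snippet has a simple closed form: identity for positive inputs, and a smooth exponential saturating at -alpha for negative inputs. A minimal NumPy sketch:

```python
import numpy as np

def elu(x, alpha=1.0):
    """Exponential Linear Unit.

    Returns x for x > 0, and alpha * (exp(x) - 1) for x <= 0.
    Unlike ReLU, the output can go negative (bounded below by -alpha),
    which is what lets mean activations sit closer to zero.
    """
    x = np.asarray(x, dtype=float)
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))
```

For large negative inputs the output saturates near -alpha rather than being clipped to zero as with ReLU.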
Fri Jun 20 2014
Neural Networks
Caffe: Convolutional Architecture for Fast Feature Embedding
Caffe is a BSD-licensed C++ library with Python and MATLAB bindings. It powers ongoing research projects, large-scale industrial applications, and startup prototypes in vision, speech, and multimedia. Caffe fits industry and internet-scale media needs.