Published on Wed Oct 04 2017

Mechanisms of dimensionality reduction and decorrelation in deep neural networks

Haiping Huang

Abstract

Deep neural networks are widely used in various domains. However, the nature of the computations performed at each layer of these networks is far from well understood. Increasing the interpretability of deep neural networks is therefore important. Here, we construct a mean-field framework to understand how compact representations develop across layers, not only in deterministic deep networks with random weights but also in generative deep networks where unsupervised learning is carried out. Our theory shows that the deep computation implements dimensionality reduction while maintaining a finite level of weak correlation between neurons, which may support feature extraction. Mechanisms of dimensionality reduction and decorrelation are thus unified in a single framework. This work may pave the way toward understanding how a sensory hierarchy works.
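
The dimensionality reduction and decorrelation described above can also be probed numerically, without the mean-field machinery. Below is a minimal sketch, assuming a fully connected network with i.i.d. Gaussian random weights and a tanh nonlinearity (illustrative assumptions, not necessarily the paper's exact setup): it propagates random inputs through the layers and tracks the participation-ratio dimensionality and the mean absolute pairwise correlation of the neurons at each layer.

```python
import numpy as np

rng = np.random.default_rng(0)

n_samples, width, depth = 2000, 200, 8
X = rng.standard_normal((n_samples, width))  # random high-dimensional inputs

def participation_ratio(H):
    """Effective dimensionality (sum lambda)^2 / sum lambda^2 of the
    covariance spectrum; equals the layer width for isotropic activity."""
    lam = np.linalg.eigvalsh(np.cov(H, rowvar=False))
    return lam.sum() ** 2 / (lam ** 2).sum()

def mean_abs_correlation(H):
    """Mean |Pearson correlation| over distinct neuron pairs."""
    C = np.corrcoef(H, rowvar=False)
    return np.abs(C[~np.eye(len(C), dtype=bool)]).mean()

H = X
for layer in range(1, depth + 1):
    # random Gaussian weights with the usual 1/sqrt(width) scaling
    W = rng.standard_normal((width, width)) / np.sqrt(width)
    H = np.tanh(H @ W)  # deterministic layer with tanh nonlinearity
    print(f"layer {layer}: PR dimension = {participation_ratio(H):6.1f}, "
          f"mean |corr| = {mean_abs_correlation(H):.3f}")
```

Printing these two statistics layer by layer lets one check whether the effective dimensionality of the representation shrinks with depth while neuron-neuron correlations remain weak but finite, which is the behaviour the abstract's theory describes.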

Related papers

Wed May 29 2019
Machine Learning
Intrinsic dimension of data representations in deep neural networks
Deep neural networks progressively transform their inputs across multiple layers. What are the geometrical properties of the representations learned by these networks? We study the intrinsic dimensionality of data representations; a sketch of one such intrinsic-dimension estimator appears at the end of this list.
Sat Jun 20 2020
Machine Learning
Weakly-correlated synapses promote dimension reduction in deep neural networks
Mon Jun 08 2020
Neural Networks
Complexity for deep neural networks and other characteristics of deep feature representations
We define a notion of complexity that quantifies the nonlinearity of the computation of a neural network. The introduced observables can be applied, without modification, to the analysis of biological neuronal systems.
Tue Jan 14 2020
Artificial Intelligence
High-Dimensional Brain in a High-Dimensional World: Blessing of Dimensionality
High-dimensional data and high-dimensional representations of reality are inherent features of modern Artificial Intelligence systems and applications of machine learning. There is a fundamental tradeoff between complexity and simplicity in high-dimensional spaces.
Mon Feb 15 2016
Neural Networks
Efficient Representation of Low-Dimensional Manifolds using Deep Networks
Deep neural networks can efficiently extract the intrinsic, low-dimensional coordinates of data. The first two layers of a deep network can exactly embed points lying on a monotonic chain. Remarkably, the network can do this using an almost optimal number of parameters.
Tue Nov 26 2019
Machine Learning
Representation Learning: A Statistical Perspective
Learning representations of data is an important problem in statistics and machine learning. The origin of learning representations can be traced back to factor analysis and multidimensional scaling in statistics. Learning representations has important applications in computer vision and computational neuroscience.
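
As a companion to the "Intrinsic dimension of data representations in deep neural networks" entry above, the following is a minimal sketch of a two-nearest-neighbour (TwoNN-style) intrinsic-dimension estimator in the spirit of Facco et al.; the maximum-likelihood form and the synthetic data below are illustrative assumptions, not necessarily the cited paper's exact procedure.

```python
import numpy as np
from scipy.spatial.distance import cdist

def twonn_dimension(X):
    """TwoNN-style estimator: for each point, mu = r2 / r1 is the ratio of
    its second to its first nearest-neighbour distance; modelling mu as
    Pareto(d) gives the maximum-likelihood estimate d = n / sum(log mu)."""
    D = cdist(X, X)
    np.fill_diagonal(D, np.inf)  # a point is not its own neighbour
    D.sort(axis=1)
    mu = D[:, 1] / D[:, 0]       # second over first nearest-neighbour distance
    return len(mu) / np.log(mu).sum()

# usage: 500 points on a 2-D linear subspace embedded in 50 dimensions
rng = np.random.default_rng(0)
Z = rng.standard_normal((500, 2))
A = rng.standard_normal((2, 50))
print(twonn_dimension(Z @ A))    # close to 2, despite the 50-D ambient space
```

Because the estimator uses only ratios of nearest-neighbour distances, it recovers the manifold's intrinsic dimension (here about 2) regardless of the dimensionality of the ambient space.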