Published on Wed Jan 27 2021

Learning Abstract Representations through Lossy Compression of Multi-Modal Signals

Charles Wilmot, Jochen Triesch

Abstract representations ignore specific details and facilitate generalization. We show that generic lossy compression of multi-modal sensory input extracts abstract representations that strip away modality-specific details. We propose an architecture to learn abstract representations by identifying and retaining only the information that is shared across multiple modalities.

Abstract

A key competence for open-ended learning is the formation of increasingly abstract representations useful for driving complex behavior. Abstract representations ignore specific details and facilitate generalization. Here we consider the learning of abstract representations in a multi-modal setting with two or more input modalities. We treat the problem as a lossy compression problem and show that generic lossy compression of multi-modal sensory input naturally extracts abstract representations that tend to strip away modality-specific details and preferentially retain information that is shared across the different modalities. Furthermore, we propose an architecture to learn abstract representations by identifying and retaining only the information that is shared across multiple modalities while discarding any modality-specific information.
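The idea can be illustrated with a minimal sketch, not the authors' exact architecture: a two-modality autoencoder with a single narrow shared bottleneck, trained to reconstruct both modalities from that one code. All layer sizes, the fusion rule, and the choice of PyTorch are assumptions made for the example; the point is only that a low-capacity shared code rewards keeping information common to both modalities and dropping modality-specific detail.

```python
# Sketch only: shared-bottleneck autoencoder over two toy modalities.
import torch
import torch.nn as nn

class SharedBottleneckAE(nn.Module):
    def __init__(self, dim_a=64, dim_b=32, dim_shared=8):
        super().__init__()
        # Per-modality encoders map each input into the same small shared space.
        self.enc_a = nn.Sequential(nn.Linear(dim_a, 32), nn.ReLU(), nn.Linear(32, dim_shared))
        self.enc_b = nn.Sequential(nn.Linear(dim_b, 32), nn.ReLU(), nn.Linear(32, dim_shared))
        # Per-modality decoders must reconstruct each input from the shared code alone.
        self.dec_a = nn.Sequential(nn.Linear(dim_shared, 32), nn.ReLU(), nn.Linear(32, dim_a))
        self.dec_b = nn.Sequential(nn.Linear(dim_shared, 32), nn.ReLU(), nn.Linear(32, dim_b))

    def forward(self, x_a, x_b):
        # Fuse the two modality codes; the narrow bottleneck makes the compression lossy.
        z = 0.5 * (self.enc_a(x_a) + self.enc_b(x_b))
        return self.dec_a(z), self.dec_b(z), z

model = SharedBottleneckAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x_a, x_b = torch.randn(128, 64), torch.randn(128, 32)  # stand-in multi-modal batch

for _ in range(100):
    rec_a, rec_b, _ = model(x_a, x_b)
    # Reconstructing *both* modalities from one shared code favors retaining
    # whatever information the modalities have in common.
    loss = nn.functional.mse_loss(rec_a, x_a) + nn.functional.mse_loss(rec_b, x_b)
    opt.zero_grad()
    loss.backward()
    opt.step()
```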

Sat Jun 23 2018
Machine Learning
The Sparse Manifold Transform
The sparse manifold transform is an unsupervised and generative framework. It explicitly and simultaneously models the sparse discreteness and low-dimensional manifold structure found in natural scenes.
Thu Jun 02 2011
Artificial Intelligence
Learning Hierarchical Sparse Representations using Iterative Dictionary Learning and Dimension Reduction
This paper introduces an elemental building block which combines Dictionary Learning and Dimension Reduction. We show how this foundational element can be used to construct a Hierarchical Sparse Representation. The ultimate goal is building a mathematically rigorous, integrated theory of intelligence.
Tue Jun 30 2020
Computer Vision
Data-driven Regularization via Racecar Training for Generalizing Neural Networks
We propose a novel training approach for improving the generalization in neural networks. We show that in contrast to regular constraints for orthogonality, our approach represents a data-dependent orthogonality.
Thu Mar 19 2015
Machine Learning
On Invariance and Selectivity in Representation Learning
We discuss data representations which can be learned automatically from data. Such a representation is selective, in the sense that two points have the same representation only if one is a transformation of the other.
Sun Nov 19 2017
Machine Learning
Compression-Based Regularization with an Application to Multi-Task Learning
This paper investigates, on information-theoretic grounds, a learning problem based on the principle that any regularity in a given dataset can be exploited to extract compact features from data. An important property of this algorithm is that it provides a natural safeguard against overfitting.
Mon Jan 27 2020
Artificial Intelligence
Structural Information Learning Machinery: Learning from Observing, Associating, Optimizing, Decoding, and Abstracting
A SiLeM machine learns the laws or rules of nature. It observes data points of the real world, builds connections among the observed data, and constructs a data space. The principle is to choose the connections among data points so that the decoding information of the data space is maximized.