Published on Thu Sep 01 2016

Neural Coarse-Graining: Extracting slowly-varying latent degrees of freedom with neural networks

Nicholas Guttenberg, Martin Biehl, Ryota Kanai

Abstract

We present a loss function for neural networks that encompasses an idea of trivial versus non-trivial predictions, such that the network jointly determines its own prediction goals and learns to satisfy them. This permits the network to choose the subsets of a problem that are most amenable to its abilities, focusing on solving those while discarding 'distracting' elements that interfere with its learning. To do this, the network first transforms the raw data into a higher-level categorical representation, and then trains a predictor from that new time series to its future. To prevent the trivial solution of mapping the signal to zero, we introduce a measure of non-triviality via a contrast between the prediction error of the learned model and that of a naive model of the overall signal statistics. The transform can learn to discard uninformative and unpredictable components of the signal in favor of features which are both highly predictive and highly predictable. This creates a coarse-grained model of the time-series dynamics, focusing on predicting the slowly varying latent parameters which control the statistics of the time series rather than predicting the fast details directly. The result is a semi-supervised algorithm capable of extracting latent parameters, segmenting sections of time series with differing statistics, and building a higher-level representation of the underlying dynamics from unlabeled data.
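The contrast described in the abstract — the learned predictor's error measured against a naive model of the overall signal statistics — can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: the encoder `f` producing the categorical codes is assumed and not shown, `predictor` stands in for the learned forward model, and the naive baseline is taken here to be the time-averaged code (one simple choice of "overall signal statistics").

```python
import numpy as np

def coarse_grain_loss(z, predictor):
    """Contrastive predictability loss (sketch).

    z: (T, K) array of soft categorical codes z_t = f(x_t) from a
       hypothetical encoder f (not shown here).
    predictor: callable mapping z_t to a prediction of z_{t+1}.

    Returns the learned model's prediction error minus the error of
    a naive baseline that always predicts the time-averaged code.
    """
    z_now, z_next = z[:-1], z[1:]
    model_err = np.mean((predictor(z_now) - z_next) ** 2)
    naive_err = np.mean((z.mean(axis=0) - z_next) ** 2)
    # Negative values mean the learned predictor beats the naive
    # statistics of the signal, so the representation is non-trivial;
    # collapsing z to a constant drives both errors together and
    # yields no advantage.
    return model_err - naive_err
```

On a slowly switching code sequence, even a persistence predictor (predict that z stays the same) beats the time-averaged baseline, so the loss goes negative; a constant (collapsed) code cannot achieve this, which is the sense in which the contrast blocks the trivial solution.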
