Published on Fri Jun 20 2014

Predicting the Future Behavior of a Time-Varying Probability Distribution

Christoph H. Lampert

Abstract

We study the problem of predicting the future, though only in the probabilistic sense of estimating a future state of a time-varying probability distribution. This is not only an interesting academic problem: solving this extrapolation problem also has many practical applications, e.g. for training classifiers that have to operate under time-varying conditions. Our main contribution is a method for predicting the next step of the time-varying distribution from a given sequence of sample sets from earlier time steps. For this we rely on two recent machine learning techniques: embedding probability distributions into a reproducing kernel Hilbert space, and learning operators by vector-valued regression. We illustrate the working principles and the practical usefulness of our method by experiments on synthetic and real data. We also highlight an exemplary application: training a classifier in a domain adaptation setting without having access to examples from the test-time distribution at training time.
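The two ingredients named above can be illustrated with a minimal sketch: represent each observed sample set by its empirical kernel mean embedding, then learn an operator that maps the embedding at time t to the embedding at time t+1. The RBF kernel, the fixed grid of landmark points used to represent embeddings as vectors, and plain ridge regression as the vector-valued regressor are illustrative assumptions here, not the paper's exact construction:

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    # Pairwise RBF kernel matrix between the rows of X and the rows of Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Sample sets from a distribution that drifts over time (mean moves right).
rng = np.random.default_rng(0)
T, n = 8, 200
sample_sets = [rng.normal(loc=0.5 * t, scale=1.0, size=(n, 1)) for t in range(T)]

# Empirical kernel mean embedding of each sample set, evaluated on a
# fixed grid of landmark points (a finite-dimensional proxy for the RKHS).
landmarks = np.linspace(-2.0, 6.0, 50)[:, None]
mu = np.stack([rbf(landmarks, S).mean(axis=1) for S in sample_sets])  # (T, 50)

# Vector-valued ridge regression: learn a linear operator A such that
# mu[t+1] is approximately A @ mu[t].
X, Y = mu[:-1], mu[1:]
lam = 1e-3
A = Y.T @ X @ np.linalg.inv(X.T @ X + lam * np.eye(X.shape[1]))

# Extrapolate: predicted embedding of the next, unobserved distribution.
mu_next = A @ mu[-1]
```

The predicted vector `mu_next` is an estimate of the kernel mean embedding of the distribution at the next time step; downstream tasks (e.g. training a classifier under the anticipated test-time distribution) can then be phrased in terms of this embedding rather than in terms of samples.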
