Published on Fri Mar 26 2021

Online learning with exponential weights in metric spaces

Quentin Paris
Abstract

This paper addresses the problem of online learning in metric spaces using exponential weights. We extend the analysis of the exponentially weighted average forecaster, traditionally studied in a Euclidean setting, to a more abstract framework. Our results rely on the notion of barycenters, a suitable version of Jensen's inequality, and a synthetic notion of lower curvature bound in metric spaces known as the measure contraction property. We also adapt the online-to-batch conversion principle to apply our results to a statistical learning framework.
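To fix ideas, the following is a minimal sketch of the exponentially weighted average forecaster in the classical Euclidean setting, where the barycenter under the exponential weights reduces to a weighted average of the experts; the names (`exponential_weights`, `experts`, `loss_stream`, `eta`) are illustrative and not taken from the paper.

```python
import numpy as np

def exponential_weights(experts, loss_stream, eta=0.5):
    """Exponentially weighted average forecaster (Euclidean special case).

    experts     : (K, d) array of candidate points in R^d.
    loss_stream : iterable whose t-th element is a length-K array giving
                  the loss of each expert at round t.
    eta         : learning rate.

    Yields the forecaster's prediction at the start of each round: the
    weighted average of the experts, i.e. their Euclidean barycenter
    under the current exponential weights.
    """
    experts = np.asarray(experts, dtype=float)
    cum_loss = np.zeros(experts.shape[0])
    for losses in loss_stream:
        w = np.exp(-eta * (cum_loss - cum_loss.min()))  # shifted for numerical stability
        w /= w.sum()
        yield w @ experts                               # weighted barycenter
        cum_loss += np.asarray(losses, dtype=float)
```

In the metric setting studied in the paper, the weighted average `w @ experts` would be replaced by a barycenter with respect to the metric, which is where the version of Jensen's inequality and the measure contraction property mentioned above come into play.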

Tue Mar 07 2017
Machine Learning
Online Learning Without Prior Information
The vast majority of optimization and online learning algorithms today require some prior information. We describe a frontier of new lower bounds on the performance of such algorithms. We construct a family of algorithms whose performance matches any desired point on this frontier.
Fri Aug 21 2015
Machine Learning
Adaptive Online Learning
We propose a general framework for studying adaptive regret bounds in the online learning framework. Given a data- or model-dependent bound, we ask, "Does there exist some algorithm achieving this bound?" We show that modifications to recently introduced sequential complexity measures can be used to answer this question.
Tue Oct 11 2011
Machine Learning
The Generalization Ability of Online Algorithms for Dependent Data
We study the generalization performance of online learning algorithms trained on samples coming from a dependent source of data. We show that the generalization error of any stable online algorithm concentrates around its regret.
Sun Mar 03 2019
Machine Learning
Anytime Online-to-Batch Conversions, Optimism, and Acceleration
A standard way to obtain convergence guarantees in stochastic convex optimization is to run an online learning algorithm and then output the average of its iterates. We show that combining our anytime online-to-batch conversions with optimistic online learning algorithms yields a fast convergence rate.
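For reference, here is a minimal sketch of the plain (non-anytime) conversion described in the first sentence: run online gradient descent on stochastic gradients and output the running average of its iterates. The names (`online_to_batch`, `gradient_oracle`, `lr`) are illustrative; the paper's anytime variant and its combination with optimism are not reproduced here.

```python
import numpy as np

def online_to_batch(gradient_oracle, x0, steps=1000, lr=0.1):
    """Plain online-to-batch conversion: run online (sub)gradient descent
    on stochastic gradients and return the average of the iterates."""
    x = np.asarray(x0, dtype=float)
    avg = np.zeros_like(x)
    for t in range(1, steps + 1):
        avg += (x - avg) / t                  # running average of x_1, ..., x_t
        g = gradient_oracle(x)                # stochastic gradient at x_t
        x = x - (lr / np.sqrt(t)) * g         # online gradient step
    return avg
```

For instance, `online_to_batch(lambda x: 2 * x + 0.1 * np.random.randn(*x.shape), np.ones(3))` approximately minimizes the loss ||x||^2 despite the noisy gradients.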
Tue Jun 29 2021
Machine Learning
Optimal Rates for Random Order Online Optimization
We study online convex optimization in the random order model. The loss functions may be chosen by an adversary, but are then presented to the online algorithm in a uniformly random order. Our analysis relies on novel connections between algorithmic stability and generalization.
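A sketch of the random order protocol itself, under illustrative names: the adversary commits to a finite sequence of loss functions in advance, and the learner then receives them in a uniformly random order, with performance typically measured in expectation over that permutation.

```python
import numpy as np

def random_order_stream(loss_functions, seed=None):
    """Yield a fixed, possibly adversarially chosen, list of loss functions
    in a uniformly random order, as in the random order model."""
    rng = np.random.default_rng(seed)
    for i in rng.permutation(len(loss_functions)):
        yield loss_functions[i]
```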
Fri Feb 12 2021
Machine Learning
MetaGrad: Adaptation using Multiple Learning Rates in Online Learning
We provide a new adaptive method for online convex optimization, MetaGrad, that is robust to general convex losses but achieves faster rates for a broad class of special functions. We prove this by drawing a connection to the Bernstein condition, known to imply fast rates in offline statistical learning.
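The sketch below illustrates only the general idea of running several learning rates in parallel and letting an experts algorithm pick among them; it substitutes a plain exponential-weights master for MetaGrad's actual tilted, second-order aggregation, and all names (`multi_eta_forecaster`, `grad_oracle`, `master_lr`) are illustrative.

```python
import numpy as np

def multi_eta_forecaster(grad_oracle, x0, etas, master_lr=1.0, steps=100):
    """Run one online gradient descent copy per learning rate in `etas`
    and combine their iterates with exponential weights on the linearised
    losses they incur (a simplified stand-in for MetaGrad's master)."""
    xs = [np.asarray(x0, dtype=float).copy() for _ in etas]
    cum = np.zeros(len(etas))                            # linearised loss of each copy
    for _ in range(steps):
        w = np.exp(-master_lr * (cum - cum.min()))
        w /= w.sum()
        x_play = sum(wi * xi for wi, xi in zip(w, xs))   # point actually played
        g = grad_oracle(x_play)                          # single gradient query per round
        for i, eta in enumerate(etas):
            cum[i] += g @ (xs[i] - x_play)               # charge linearised loss
            xs[i] = xs[i] - eta * g                      # copy-specific step
    w = np.exp(-master_lr * (cum - cum.min()))
    w /= w.sum()
    return sum(wi * xi for wi, xi in zip(w, xs))
```

A typical choice for `etas` is a geometric grid, e.g. `etas = [2.0 ** -k for k in range(10)]`, so that some copy is always close to the learning rate that hindsight would have chosen.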