Published on Wed Feb 25 2015

Strongly Adaptive Online Learning

Amit Daniely, Alon Gonen, Shai Shalev-Shwartz

Abstract

Strongly adaptive algorithms are algorithms whose performance on every time interval is close to optimal. We present a reduction that can transform standard low-regret algorithms into strongly adaptive ones. As a consequence, we derive simple, yet efficient, strongly adaptive algorithms for a handful of problems.
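The interval-based reduction can be illustrated with a toy sketch. To be clear, this is not the paper's exact construction: the sketch below runs a fresh copy of a black-box low-regret learner (here, projected online gradient descent on [0, 1] under squared loss) on each interval of a dyadic-style covering, and mixes the currently active copies with exponential weights. All names, the choice of loss, and the constant mixing rate `eta` are illustrative assumptions.

```python
import math

class OGD:
    """Black-box low-regret base learner: projected online gradient
    descent on [0, 1]. A stand-in; any low-regret algorithm could be
    plugged into the reduction."""
    def __init__(self):
        self.x = 0.5
        self.t = 0
    def predict(self):
        return self.x
    def update(self, grad):
        self.t += 1
        step = 1.0 / math.sqrt(self.t)
        self.x = min(1.0, max(0.0, self.x - step * grad))

def intervals_starting_at(t, horizon):
    """Dyadic-style covering: start an interval [t, t + 2^j) for every
    j with 2^j dividing t (illustrative variant of a geometric cover)."""
    out = []
    j = 0
    while 2 ** j <= horizon:
        if t % (2 ** j) == 0:
            out.append((t, min(t + 2 ** j, horizon)))
        j += 1
    return out

def strongly_adaptive(ys, eta=1.0):
    """Run a fresh base-learner copy on each covering interval and mix
    the active copies with exponential weights (eta is an illustrative
    constant, not a tuned rate). Squared loss against targets ys."""
    horizon = len(ys)
    active = {}  # interval -> [learner, log_weight]
    preds = []
    for t, y in enumerate(ys):
        for iv in intervals_starting_at(t, horizon):
            active[iv] = [OGD(), 0.0]
        # Drop copies whose interval has ended.
        active = {iv: e for iv, e in active.items() if iv[1] > t}
        z = sum(math.exp(w) for _, w in active.values())
        pred = sum(math.exp(w) / z * lrn.predict()
                   for lrn, w in active.values())
        preds.append(pred)
        for entry in active.values():
            lrn, w = entry
            loss = (lrn.predict() - y) ** 2
            entry[1] = w - eta * loss              # exponential-weights update
            lrn.update(2.0 * (lrn.predict() - y))  # gradient to this copy
    return preds
```

Because each copy only sees the losses inside its interval, the mixture can track the best copy on any interval of the cover, which is the intuition behind interval-wise (strongly adaptive) guarantees.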

Fri Aug 21 2015
Machine Learning
Adaptive Online Learning
We propose a general framework for studying adaptive regret bounds in the online learning framework. Given a data- or model-dependent bound we ask, "Does there exist some algorithm achieving this bound?" We show that modifications to recently introduced sequential complexity measures can be used to answer this question.
Thu Jan 26 2017
Machine Learning
Dynamic Regret of Strongly Adaptive Methods
The dynamic regret can be expressed in terms of the adaptive regret and the functional variation. This observation implies that strongly adaptive algorithms can be directly leveraged to minimize the dynamic regret.
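The observation can be written schematically as follows; the notation here is illustrative, not the paper's exact statement.

```latex
% For any partition of [1, T] into intervals I_1, \dots, I_k,
% roughly:
\mathrm{D\text{-}Regret}_T \;\le\; \min_{I_1,\dots,I_k}
  \sum_{i=1}^{k} \Big( \mathrm{SA\text{-}Regret}(I_i)
  \;+\; 2\,|I_i|\, V(I_i) \Big),
```

where $V(I)$ denotes the functional variation of the loss sequence on the interval $I$. A strongly adaptive algorithm controls every $\mathrm{SA\text{-}Regret}(I_i)$ term simultaneously, so choosing the partition to balance the two terms bounds the dynamic regret.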
Sun Nov 14 2010
Machine Learning
Online Learning: Beyond Regret
We study online learnability of a wide class of problems. Our framework simultaneously captures such well-known notions as internal and general Phi-regret. We show that learnability in all these situations is due to control of the same quantities.
Mon Feb 10 2020
Machine Learning
Adaptive Online Learning with Varying Norms
We provide an online convex optimization algorithm that outputs points in some domain in response to convex losses. Our method does not require parameter tuning and allows for an arbitrary convex domain. We apply this result to obtain…
Tue Mar 07 2017
Machine Learning
Online Learning Without Prior Information
The vast majority of optimization and online learning algorithms today require some prior information. We describe a frontier of new lower bounds on the performance of such algorithms. We construct a family of algorithms whose performance matches any desired point.
Sun Feb 24 2019
Machine Learning
Combining Online Learning Guarantees
We show how to take any two parameter-free online learning algorithms with different regret guarantees and obtain a single algorithm whose regret is the minimum of the two base algorithms. We then proceed to show how a variant on this idea yields a black-box procedure for generating optimistic online learning algorithms.
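One way to combine two learners, sketched below, is to play the sum of their iterates and feed both the same gradients; this is an illustrative reading of the combining idea, not the paper's exact construction, and the class names, step sizes, and toy loss are invented for the example. The intuition is that the combined regret against a comparator splits into one learner's regret against that comparator plus the other's regret against the origin, and parameter-free algorithms keep the latter term small.

```python
class OGD:
    """Toy gradient-descent learner standing in for a parameter-free
    base algorithm (illustrative only)."""
    def __init__(self, lr):
        self.x = 0.0
        self.lr = lr
    def predict(self):
        return self.x
    def update(self, grad):
        self.x -= self.lr * grad

class SumCombiner:
    """Play the sum of two learners' iterates; pass each the same
    gradient stream."""
    def __init__(self, a, b):
        self.a, self.b = a, b
    def predict(self):
        return self.a.predict() + self.b.predict()
    def update(self, grad):
        self.a.update(grad)
        self.b.update(grad)

# Usage: minimize the fixed loss (x - 3)^2 online with the combiner.
learner = SumCombiner(OGD(0.05), OGD(0.10))
for _ in range(200):
    g = 2.0 * (learner.predict() - 3.0)  # gradient at the played point
    learner.update(g)                    # combined iterate drifts toward 3.0
```

Note that the combiner is itself an online learner with the same interface, so combinations can be nested if more than two guarantees are wanted.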