Published on Thu Oct 22 2020

Regret Bounds without Lipschitz Continuity: Online Learning with Relative-Lipschitz Losses

Yihan Zhou, Victor S. Portella, Mark Schmidt, Nicholas J. A. Harvey

Abstract

In online convex optimization (OCO), Lipschitz continuity of the functions is commonly assumed in order to obtain sublinear regret. Moreover, many algorithms have only logarithmic regret when these functions are also strongly convex. Recently, researchers in convex optimization proposed the notions of "relative Lipschitz continuity" and "relative strong convexity", both of which generalize their classical counterparts. It has been shown that subgradient methods in the relative setting achieve performance analogous to that in the classical setting. In this work, we consider OCO for relative Lipschitz and relative strongly convex functions. We extend the known regret bounds for classical OCO algorithms to the relative setting. Specifically, we show regret bounds for follow-the-regularized-leader (FTRL) algorithms and a variant of online mirror descent. Due to the generality of these methods, the results yield regret bounds for a wide variety of OCO algorithms. Furthermore, we extend the results to algorithms with extra regularization, such as regularized dual averaging.
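For context, the "relative" notions replace the norm with the Bregman divergence of a convex reference function $R$. The following is a sketch of the standard formalization from the relative-continuity and relative-smoothness literature (Lu; Lu, Freund, and Nesterov), written from my reading of that line of work rather than quoted from this paper: with $B_R(y, x) = R(y) - R(x) - \langle \nabla R(x), y - x \rangle$ the Bregman divergence, a convex function $f$ is $L$-Lipschitz continuous relative to $R$ if $\langle g, x - y \rangle \le L \sqrt{2 B_R(y, x)}$ for all $x, y$ and every subgradient $g \in \partial f(x)$, and $f$ is $K$-strongly convex relative to $R$ if $f(y) \ge f(x) + \langle g, y - x \rangle + K B_R(y, x)$. Taking $R = \tfrac{1}{2}\|\cdot\|_2^2$ recovers the classical definitions.

The FTRL and online mirror descent templates mentioned in the abstract are built on this Bregman-divergence machinery. The sketch below is a generic, textbook online mirror descent loop with the negative-entropy reference function over the probability simplex (i.e. exponentiated gradient); it is only meant to make the OMD template concrete and is not the paper's specific variant. The step size and the toy linear losses are illustrative choices, not taken from the paper.

```python
import numpy as np

def kl_bregman(y, x):
    """Bregman divergence of the negative-entropy reference function,
    i.e. the KL divergence KL(y || x), for points on the simplex."""
    return float(np.sum(y * np.log(y / x)))

def omd_step(x, g, eta):
    """One online mirror descent step with negative-entropy reference
    function (exponentiated gradient): x_{t+1} proportional to x_t * exp(-eta * g)."""
    w = x * np.exp(-eta * g)
    return w / w.sum()

rng = np.random.default_rng(0)
x = np.full(3, 1.0 / 3.0)        # start at the uniform distribution
eta = 0.1                        # illustrative step size, not from the paper
total_loss = 0.0
for t in range(100):
    g = rng.normal(size=3)       # subgradient of a toy linear loss <g, x>
    total_loss += float(g @ x)
    x_next = omd_step(x, g, eta)
    # kl_bregman(x_next, x) measures how far the update moved in Bregman geometry
    x = x_next
print(total_loss, x)
```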

Mon Feb 10 2020
Machine Learning
Adaptive Online Learning with Varying Norms
We provide an online convex optimization algorithm that outputs points in some domain in response to convex losses. Our method does not require tuning to the value of the comparator and allows for an arbitrary convex domain. We apply this result to obtain new regret bounds.
Fri Apr 29 2016
Machine Learning
MetaGrad: Multiple Learning Rates in Online Learning
MetaGrad can achieve logarithmic regret on unregularized hinge loss, even though it has no curvature, if the data come from a favourable probability distribution. Unlike previous methods, its learning rates are not monotonically decreasing over time.
Wed May 15 2019
Machine Learning
Adaptivity and Optimality: A Universal Algorithm for Online Convex Optimization
Existing universal methods are limited in the sense that they are optimal for only a subclass of loss functions. We propose a novel online method, namely Maler, which enjoys the optimal regret bounds for convex, exponentially concave, and strongly convex functions.
Wed Jun 26 2019
Machine Learning
Dual Adaptivity: A Universal Algorithm for Minimizing the Adaptive Regret of Convex Functions
The paper presents the first universal algorithm for minimizing the adaptive regret of convex functions. The algorithm automatically adapts to the property of functions as well as the nature of environments. It also allows the type of functions to switch between rounds.
Sat Aug 13 2016
Machine Learning
Improved Dynamic Regret for Non-degenerate Functions
The dynamic regret of strongly convex functions can be upper bounded by the minimum of the path-length and the squared path-length. We extend this theoretical guarantee to functions that are semi-strongly convex or self-concordant.
Sun Feb 24 2019
Machine Learning
Artificial Constraints and Lipschitz Hints for Unconstrained Online Learning
Previous algorithms dispense with the $O(\|u\|^3)$ term at the expense of one or both of these parameters, while a lower bound shows that some additional penalty term is necessary. The same techniques allow us to design algorithms that adapt optimally to the unknown value of $\|u\|$.