Published on Wed Dec 23 2015

Adaptive Algorithms for Online Convex Optimization with Long-term Constraints

Rodolphe Jenatton, Jim Huang, Cédric Archambeau

Abstract

We present an adaptive online gradient descent algorithm to solve online convex optimization problems with long-term constraints, which are constraints that need to be satisfied when accumulated over a finite number of rounds T, but can be violated in intermediate rounds. For some user-defined trade-off parameter β ∈ (0, 1), the proposed algorithm achieves cumulative regret bounds of O(T^max{β, 1−β}) and O(T^(1−β/2)) for the loss and the constraint violations, respectively. Our results hold for convex losses and can handle arbitrary convex constraints without requiring knowledge of the number of rounds in advance. Our contributions improve over the best known cumulative regret bounds by Mahdavi et al. (2012), which are respectively O(T^1/2) and O(T^3/4) for general convex domains, and respectively O(T^2/3) and O(T^2/3) when further restricting to polyhedral domains. We supplement the analysis with experiments validating the performance of our algorithm in practice.
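At a high level, algorithms in this family replace the hard constraint with a primal-dual update on an augmented Lagrangian: gradient descent on the decision variable and gradient ascent on a multiplier that grows whenever the constraint is violated. Below is a minimal sketch of that template (following the Mahdavi et al. (2012) scheme the paper builds on); the function names and the step-size and regularization choices are illustrative assumptions, not the paper's exact adaptive schedule.

import numpy as np

def primal_dual_ogd(grad_f, g, grad_g, x0, radius, T, beta=0.5):
    # Online convex optimization with one long-term constraint g(x) <= 0.
    # Iterates stay in a Euclidean ball of radius `radius` containing the
    # feasible set; g itself may be violated at individual rounds.
    x = np.asarray(x0, dtype=float)
    lam = 0.0                  # dual variable for the constraint
    eta = T ** (-beta)         # step size (illustrative choice)
    theta = T ** (-beta)       # dual regularization (illustrative choice)
    plays = []
    for t in range(T):
        plays.append(x.copy())
        gx = g(x)
        # Descent step on L_t(x, lam) = f_t(x) + lam*g(x) - (theta/2)*lam^2.
        x = x - eta * (grad_f(t, x) + lam * grad_g(x))
        norm = np.linalg.norm(x)
        if norm > radius:      # project back onto the enclosing ball
            x *= radius / norm
        # Ascent step on the same Lagrangian, keeping lam nonnegative.
        lam = max(0.0, lam + eta * (gx - theta * lam))
    return plays

For instance, with linear losses f_t(x) = <c_t, x> and the single constraint g(x) = sum(x) − 1 ≤ 0, one would pass grad_f = lambda t, x: c[t], g = lambda x: x.sum() - 1.0 and grad_g = lambda x: np.ones_like(x). Because the multiplier decays through theta, the constraint can be violated at intermediate rounds while the accumulated violation stays sublinear, which is the long-term trade-off governed by β.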

Related papers

Fri Nov 25 2011
Machine Learning
Trading Regret for Efficiency: Online Convex Optimization with Long Term Constraints
In this paper we propose a framework for solving constrained online convex optimization problems. Instead of requiring that decisions belong to the convex set, we only require that the constraints which define the set be satisfied in the long run. We show that our framework can be utilized to solve a relaxed version of online learning.
Tue Feb 19 2019
Machine Learning
Online Learning with Continuous Variations: Dynamic Regret and Reductions
Online learning is a powerful tool for analyzing iterative algorithms. The classic adversarial setup sometimes fails to capture certain regularity in online problems in practice. We establish a new setup, called Continuous Online Learning (COL), where the gradient of the online loss function changes continuously across rounds.
Sat Jun 06 2020
Machine Learning
Unconstrained Online Optimization: Dynamic Regret Analysis of Strongly Convex and Smooth Problems
Mon Oct 05 2015
Machine Learning
On the Online Frank-Wolfe Algorithms for Convex and Non-convex Optimizations
In this paper, the online variants of the classical Frank-Wolfe algorithm are considered. The online algorithms only require simple iterative updates and a non-adaptive step size rule.
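Concretely, a projection-free online Frank-Wolfe round calls a linear-minimization oracle over the feasible set K and mixes its answer into the current iterate with a fixed schedule. A minimal sketch under assumed names (grad_est stands in for whatever gradient estimate the variant uses, e.g. of the running average of observed losses, and lmo for the oracle):

import numpy as np

def online_frank_wolfe(grad_est, lmo, x0, T):
    # One linear-minimization-oracle call per round, no projection,
    # with the non-adaptive step-size rule gamma_t = 1/t.
    x = np.asarray(x0, dtype=float)
    plays = []
    for t in range(1, T + 1):
        plays.append(x.copy())
        v = lmo(grad_est(t, x))      # argmin over v in K of <gradient, v>
        x = x + (1.0 / t) * (v - x)  # convex combination stays inside K
    return plays

def l1_lmo(d):
    # Oracle for the unit l1 ball: the minimizer is a signed coordinate vertex.
    v = np.zeros_like(d)
    i = int(np.argmax(np.abs(d)))
    v[i] = -np.sign(d[i])
    return v

Every play is a convex combination of points returned by the oracle, so feasibility holds by construction even though no projection is ever computed.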
Tue Dec 31 2019
Machine Learning
A Modern Introduction to Online Learning
In this monograph, I introduce the basic concepts of Online Learning through a modern view of Online Convex Optimization. I present first-order and second-order algorithms for online learning with convex losses, in Euclidean and non-Euclidean settings.
Mon Feb 19 2018
Machine Learning
Online convex optimization for cumulative constraints
We propose algorithms for online convex optimization that bound the cumulative squared constraint violations, of the form Σ_{t=1}^T (max{0, g(x_t)})². Unlike the long-term constraint Σ_{t=1}^T g(x_t), where strictly feasible rounds can offset violated ones, this form penalizes large constraint violations and cancellation effects cannot occur. For convex objectives, our regret bounds generalize existing bounds, and for strongly convex objectives we give improved regret bounds.