Published on Thu Jan 30 2020

Faster Projection-free Online Learning

Elad Hazan, Edgar Minasyan

In many online learning problems the computational bottleneck is the projection operation. For this reason, the most efficient algorithms are based on the Frank-Wolfe method. In the general case, online projection-free methods require more iterations than projection-based methods.

Abstract

In many online learning problems the computational bottleneck for gradient-based methods is the projection operation. For this reason, in many problems the most efficient algorithms are based on the Frank-Wolfe method, which replaces projections by linear optimization. In the general case, however, online projection-free methods require more iterations than projection-based methods: the best known regret bound scales as O(T^{3/4}). Despite significant work on various variants of the Frank-Wolfe method, this bound has remained unchanged for a decade. In this paper we give an efficient projection-free algorithm that guarantees O(T^{2/3}) regret for general online convex optimization with smooth cost functions and one linear optimization computation per iteration. As opposed to previous Frank-Wolfe approaches, our algorithm is derived using the Follow-the-Perturbed-Leader method and is analyzed using an online primal-dual framework.
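The abstract describes the two ingredients at a high level: a linear optimization oracle used in place of projection, and a Follow-the-Perturbed-Leader update. The sketch below is only illustrative and is not the paper's algorithm: it assumes an l2-ball feasible set (so the hypothetical linear_opt_oracle has a closed form), uses a single exponential perturbation per round, and omits the smoothing and primal-dual analysis that yield the stated regret bound.

```python
import numpy as np

def linear_opt_oracle(direction, radius=1.0):
    """Illustrative linear optimization oracle over an l2 ball:
    returns argmin_{||x|| <= radius} <direction, x>."""
    norm = np.linalg.norm(direction)
    if norm == 0.0:
        return np.zeros_like(direction)
    return -radius * direction / norm

def ftpl_projection_free(loss_grads, dim, eta=0.1, seed=0):
    """Sketch of a Follow-the-Perturbed-Leader loop that touches the
    feasible set only through the linear optimization oracle (no projection).

    loss_grads: list of callables; loss_grads[t](x) returns the gradient
    of the round-t loss at the played point x."""
    rng = np.random.default_rng(seed)
    grad_sum = np.zeros(dim)
    plays = []
    for grad in loss_grads:
        # Perturb the cumulative gradient, then ask the linear oracle
        # for the best response to the perturbed linear objective.
        perturbation = rng.exponential(scale=1.0 / eta, size=dim)
        x_t = linear_opt_oracle(grad_sum - perturbation)
        plays.append(x_t)
        grad_sum += grad(x_t)
    return plays

# Tiny usage example: online quadratic losses f_t(x) = ||x - y_t||^2 / 2.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    targets = [rng.normal(size=5) for _ in range(100)]
    grads = [lambda x, y=y: x - y for y in targets]
    plays = ftpl_projection_free(grads, dim=5)
    print("final play:", np.round(plays[-1], 3))
```

The point of the sketch is that the only access to the feasible set is the single oracle call per round, which is the sense in which such methods are projection-free.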

Thu Oct 15 2020
Machine Learning
Revisiting Projection-free Online Learning: the Strongly Convex Case
Projection-free optimization algorithms are mostly based on the classical Frank-Wolfe method. The online Frank-Wolfe (OFW) method achieves a faster rate of O(T^{2/3}) on strongly convex functions. This is surprising since it is known that for offline optimization, strong convexity does not lead to faster rates.
Mon Jun 17 2013
Machine Learning
Online Alternating Direction Method (longer version)
The alternating direction method (ADM) can solve online convex optimization under linear constraints where the objective could be non-smooth. We introduce new proof techniques for ADM in the batch setting.
Mon Oct 21 2019
Machine Learning
Efficient Projection-Free Online Methods with Stochastic Recursive Gradient
Existing projection-free methods either achieve suboptimal regret bounds or have high per-iteration computational costs. Two efficient projection-free online methods called ORGFW and MORGFW are proposed for solving stochastic and adversarial OCO problems.
Tue Oct 08 2019
Machine Learning
Improved Regret Bounds for Projection-free Bandit Convex Optimization
We revisit the challenge of designing online algorithms for the bandit convex optimization problem (BCO) which are also scalable to high-dimensional problems. We present the first such algorithm that attains Õ(T^{3/4}) expected regret.
Fri Oct 16 2020
Machine Learning
Projection-free Online Learning over Strongly Convex Sets
Projection-free algorithms including online Frank-Wolfe (OFW) and its variants have received significant interest recently. In the general case, existing projection-free algorithms only achieved the regret bound of O(T^{3/4}) over strongly convex sets.
Mon Jun 18 2012
Machine Learning
Projection-free Online Learning
The computational bottleneck in applying online learning to massive data sets is usually the projection step. We present efficient online learning algorithms that eschew projections in favor of much more efficient linear optimization steps.