Published on Sun Sep 10 2017

### Efficient Online Linear Optimization with Approximation Algorithms

###### Abstract

We revisit the problem of *online linear optimization* in the case where the set of feasible actions is accessible through an approximated linear optimization oracle with a factor-$\alpha$ multiplicative approximation guarantee. This setting is in particular interesting since it captures natural online extensions of well-studied *offline* linear optimization problems which are NP-hard, yet admit efficient approximation algorithms. The goal here is to minimize the *$\alpha$-regret*, which is the natural extension of the standard *regret* in *online learning* to this setting. We present new algorithms with significantly improved oracle complexity for both the full-information and bandit variants of the problem. Mainly, for both variants, we present $\alpha$-regret bounds of $O(T^{-1/3})$, where $T$ is the number of prediction rounds, using only $O(\log T)$ calls to the approximation oracle per iteration, on average. These are the first results to obtain both an average oracle complexity of $O(\log T)$ (or even poly-logarithmic in $T$) and an $\alpha$-regret bound of $O(T^{-c})$ for a constant $c > 0$, for both variants.
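To make the benchmark concrete, the $\alpha$-regret is commonly defined as follows (a standard formulation for linear losses, not quoted from this abstract; the symbols $c_t$, $x_t$, and $\mathcal{K}$ are illustrative):

```latex
% alpha-regret after T rounds: the learner's cumulative linear loss
% is compared against alpha times the best fixed action in hindsight,
% since an alpha-approximation oracle only guarantees a point within
% factor alpha of the true offline optimum.
\mathrm{Regret}_{\alpha}(T)
  \;=\; \sum_{t=1}^{T} \langle c_t, x_t \rangle
  \;-\; \alpha \cdot \min_{x \in \mathcal{K}} \sum_{t=1}^{T} \langle c_t, x \rangle
```

Dividing by $T$ gives the average per-round quantity that the $O(T^{-1/3})$ bound above refers to.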