Published on Fri May 04 2018

Combinatorial Pure Exploration with Continuous and Separable Reward Functions and Its Applications (Extended Version)

Weiran Huang, Jungseul Ok, Liang Li, Wei Chen

Abstract

We study the Combinatorial Pure Exploration problem with Continuous and Separable reward functions (CPE-CS) in the stochastic multi-armed bandit setting. In a CPE-CS instance, we are given several stochastic arms with unknown distributions, as well as a collection of possible decisions. Each decision yields a reward determined by the distributions of the arms. The goal is to identify the decision with the maximum reward, using as few arm samples as possible. The problem generalizes the combinatorial pure exploration problem with linear rewards, which has attracted significant attention in recent years. In this paper, we propose an adaptive learning algorithm for the CPE-CS problem and analyze its sample complexity. In particular, we introduce a new hardness measure called the consistent optimality hardness, and give both upper and lower bounds on the sample complexity. Moreover, we give examples to demonstrate that our solution can handle non-linear reward functions.
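To make the setting concrete, here is a minimal sketch of the CPE-CS problem interface (not the paper's adaptive algorithm): a naive uniform-sampling baseline that pulls every arm the same number of times, estimates the arm means, and then picks the decision maximizing the estimated reward. The arm means, decision names, and reward functions below are hypothetical; decision "B" illustrates a non-linear but separable reward.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical instance: Bernoulli arms with means unknown to the learner.
true_means = np.array([0.3, 0.6, 0.8])

# Each decision maps the vector of arm means to a reward.
decisions = {
    "A": lambda mu: mu[0] + mu[1],               # linear reward
    "B": lambda mu: mu[1] ** 2 + np.sin(mu[2]),  # non-linear, separable
}

# Uniform-sampling baseline: pull every arm T times and estimate its mean.
T = 2000
estimates = np.array([rng.binomial(1, p, size=T).mean() for p in true_means])

# Recommend the decision with the highest estimated reward.
best = max(decisions, key=lambda d: decisions[d](estimates))
print(best, {d: f(estimates) for d, f in decisions.items()})

The adaptive algorithm studied in the paper replaces this fixed per-arm budget with data-dependent sampling; the sketch only illustrates the problem interface of arms, decisions, and separable rewards.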

Mon Oct 16 2017
Machine Learning
Fully adaptive algorithm for pure exploration in linear bandits
We propose the first fully-adaptive algorithm for pure exploration in linear bandits. We show that our sample complexity matches the achievable lower bound up to a constant factor in an extreme case. We evaluate the performance of the methods through simulations based on both synthetic settings and real-world data.
Sat May 08 2021
Machine Learning
Pure Exploration Bandit Problem with General Reward Functions Depending on Full Distributions
Wed Mar 03 2021
Machine Learning
Combinatorial Bandits without Total Order for Arms
Sun Jun 04 2017
Machine Learning
Nearly Optimal Sampling Algorithms for Combinatorial Pure Exploration
We study the combinatorial pure exploration problem Best-Set in stochastic multi-armed bandits. The problem generalizes the classical best arm identification problem and the top-k arm identification problem. We provide a novel instance-wise lower bound for the sample complexity of the problem.
Tue Feb 19 2008
Machine Learning
Pure Exploration for Multi-Armed Bandit Problems
The paper considers the possibilities and limitations of forecasters that perform an on-line exploration of the arms. These forecasters are assessed in terms of their simple regret, a regret notion that captures the fact that exploration is only constrained by the number of available rounds.
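For context, the standard definition from the pure-exploration literature (notation is ours, not quoted from the paper): if the forecaster recommends arm J_n after n exploration rounds, its simple regret is

r_n = \mu^\star - \mu_{J_n}, \qquad \mu^\star = \max_i \mu_i,

i.e., the gap between the best achievable mean reward and the mean of the recommended arm, regardless of the rewards accumulated while exploring.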
Wed Jul 14 2010
Machine Learning
Online Algorithms for the Multi-Armed Bandit Problem with Markovian Rewards
We consider the classical multi-armed bandit problem with Markovian rewards. The player receives a state-dependent reward each time it plays an arm. We show that a sample mean based index policy achieves logarithmic regret uniformly over the total number of trials.
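As a rough illustration of a sample-mean-based index policy (a sketch for the simpler i.i.d. Bernoulli case; the paper's analysis concerns Markovian rewards, and the arm means below are hypothetical): each arm's index is its empirical mean plus a confidence bonus, and the arm with the highest index is played, in the style of UCB1.

import math
import random

means = [0.2, 0.5, 0.7]          # unknown Bernoulli means (hypothetical)
counts = [0] * len(means)        # number of pulls per arm
sums = [0.0] * len(means)        # cumulative reward per arm

for t in range(1, 5001):
    if t <= len(means):
        arm = t - 1              # play each arm once to initialize
    else:
        # Index = empirical mean + confidence bonus.
        arm = max(range(len(means)),
                  key=lambda i: sums[i] / counts[i]
                  + math.sqrt(2 * math.log(t) / counts[i]))
    reward = 1.0 if random.random() < means[arm] else 0.0
    counts[arm] += 1
    sums[arm] += reward

print(counts)  # pulls concentrate on the best arm, giving logarithmic regret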