Published on Wed Jun 27 2012

An Online Boosting Algorithm with Theoretical Justifications

Shang-Tse Chen, Hsuan-Tien Lin, Chi-Jen Lu

Abstract

We study the task of online boosting: combining online weak learners into an online strong learner. While batch boosting has a sound theoretical foundation, online boosting deserves more study from the theoretical perspective. In this paper, we carefully compare the differences between online and batch boosting, and propose a novel and reasonable assumption for the online weak learner. Based on the assumption, we design an online boosting algorithm with a strong theoretical guarantee by adapting from the offline SmoothBoost algorithm that matches the assumption closely. We further tackle the task of deciding the number of weak learners using established theoretical results for online convex programming and predicting with expert advice. Experiments on real-world data sets demonstrate that the proposed algorithm compares favorably with existing online boosting algorithms.
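
As a rough illustration of the approach described above, the following is a minimal sketch of an online SmoothBoost-style booster, not the authors' exact algorithm: each incoming example is passed through a chain of online weak learners, with a smooth, capped importance weight that decays once the partial ensemble already has margin on the example. The weak-learner interface (predict/update), the edge parameter gamma, and the derived margin target theta are illustrative assumptions.

```python
class OnlineSmoothBoost:
    """Sketch of an online SmoothBoost-style booster (binary labels in {-1, +1}).

    Assumed weak-learner interface (hypothetical): predict(x) -> {-1, +1}
    and update(x, y, weight) for an importance-weighted online update.
    """

    def __init__(self, weak_learners, gamma=0.1):
        self.learners = weak_learners        # list of online weak learners
        self.theta = gamma / (2.0 + gamma)   # margin target below the assumed edge gamma

    def predict(self, x):
        # Strong prediction: majority vote of the weak learners.
        votes = sum(h.predict(x) for h in self.learners)
        return 1 if votes >= 0 else -1

    def update(self, x, y):
        z = 0.0  # running theta-shifted margin of the partial ensemble on (x, y)
        for h in self.learners:
            # Smooth, capped weight: stays at 1 while the partial ensemble
            # lacks margin, decays geometrically once z grows positive.
            w = min(1.0, (1.0 - self.theta) ** (z / 2.0))
            h.update(x, y, w)
            z += h.predict(x) * y - self.theta
```

The smoothness of the weights (each capped at 1) is the property that makes the batch algorithm a natural fit for the online setting, since no single example is ever assigned an overwhelming share of the weight.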

Mon Feb 09 2015
Machine Learning
Optimal and Adaptive Algorithms for Online Boosting
Online boosting is the task of converting any weak online learner into a strong one. We develop two online boosting algorithms. Both algorithms work with base learners that can handle example importance weights.
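
To make the base-learner requirement above concrete, here is a toy importance-weighted online weak learner; this is a hypothetical sketch rather than anything from the paper: a perceptron whose mistake-driven update is scaled by the weight the booster supplies.

```python
class WeightedOnlinePerceptron:
    """Toy online weak learner that accepts example importance weights."""

    def __init__(self, dim, lr=0.1):
        self.w = [0.0] * dim  # weight vector
        self.lr = lr          # learning rate

    def predict(self, x):
        score = sum(wi * xi for wi, xi in zip(self.w, x))
        return 1 if score >= 0 else -1

    def update(self, x, y, weight=1.0):
        # Mistake-driven perceptron step, scaled by the booster's weight.
        if self.predict(x) != y:
            for i, xi in enumerate(x):
                self.w[i] += self.lr * weight * y * xi
```
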
Thu Feb 23 2017
Machine Learning
Online Multiclass Boosting
An optimal boosting algorithm requires the minimal number of weak learners to achieve a certain accuracy. We propose an adaptive algorithm which is near optimal and enjoys excellent performance on real data.
Thu Sep 25 2014
Machine Learning
A Boosting Framework on Grounds of Online Learning
We present a boosting framework which proves to be extremely powerful by employing the vast knowledge available in the online learning area. Using this framework, we develop various algorithms to address multiple practically and theoretically interesting questions.
Mon Mar 02 2020
Machine Learning
Online Agnostic Boosting via Regret Minimization
The algorithm is based on an abstract (and simple) reduction to online convex optimization, which efficiently converts an arbitrary online convex optimizer into an online booster. This reduction extends to the statistical as well as the online realizable settings.
Tue Jun 16 2015
Machine Learning
Online Gradient Boosting
We extend the theory of boosting for regression problems to the online learning setting. Our main result is an online gradient boosting algorithm. We also give a simpler boosting algorithm that converts a weak online learning algorithm into a strong one.
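
As a loose illustration of that idea (an assumed sketch, not the paper's algorithm): in an online analogue of gradient boosting for squared loss, each weak online regressor is fed the residual of the partial ensemble's prediction, i.e. the negative gradient of the loss at that point. The predict/update interface and the shrinkage parameter lr are hypothetical.

```python
class OnlineGradientBoost:
    """Sketch of online gradient boosting for regression with squared loss.

    Assumed weak-regressor interface (hypothetical): predict(x) -> float
    and update(x, target) for an online regression step.
    """

    def __init__(self, weak_regressors, lr=0.5):
        self.learners = weak_regressors
        self.lr = lr  # shrinkage applied to each learner's contribution

    def predict(self, x):
        return self.lr * sum(h.predict(x) for h in self.learners)

    def update(self, x, y):
        pred = 0.0
        for h in self.learners:
            residual = y - pred       # negative gradient of squared loss at pred
            h.update(x, residual)     # weak learner chases the current residual
            pred += self.lr * h.predict(x)
```
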
Thu Jan 16 2020
Machine Learning
Better Boosting with Bandits for Online Learning
Probability estimates generated by boosting ensembles are poorly calibrated because of the margin-maximizing nature of the algorithm. In batch learning, calibration is achieved by reserving part of the training data for training the calibrator.