Published on Wed Jan 30 2019

On the Calibration of Multiclass Classification with Rejection

Chenri Ni, Nontawat Charoenphakdee, Junya Honda, Masashi Sugiyama

We investigate the problem of multiclass classification with rejection, where a classifier can choose not to make a prediction to avoid critical misclassification. We propose rejection criteria for more general losses under the confidence-based approach and guarantee calibration to the Bayes-optimal solution.

Abstract

We investigate the problem of multiclass classification with rejection, where a classifier can choose not to make a prediction to avoid critical misclassification. First, we consider an approach based on simultaneous training of a classifier and a rejector, which achieves state-of-the-art performance in the binary case. We analyze this approach for the multiclass case and derive a general condition for calibration to the Bayes-optimal solution, which suggests that calibration is hard to achieve with general loss functions, unlike in the binary case. Next, we consider another, traditional approach based on confidence scores, for which existing work focuses on a specific class of losses. We propose rejection criteria for more general losses under this approach and guarantee calibration to the Bayes-optimal solution. Finally, we conduct experiments to validate the relevance of our theoretical findings.
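To make the confidence-based approach concrete, the sketch below implements the classical rejection rule for the 0-1 loss with a fixed rejection cost (Chow's rule), assuming estimated class-posterior probabilities are available. The function and variable names are illustrative, and this is not the criterion proposed in the paper, which covers more general losses.

```python
# Minimal sketch of confidence-based classification with rejection (Chow's rule).
# Assumes `probs` is an (n_samples, n_classes) array of estimated class posteriors
# and `cost` is the rejection cost; this is an illustration, not the paper's method.
import numpy as np

REJECT = -1  # sentinel label returned when the classifier abstains


def predict_with_rejection(probs: np.ndarray, cost: float) -> np.ndarray:
    """Predict the most probable class, or abstain when confidence is too low.

    Under the 0-1 loss with rejection cost `cost`, the Bayes-optimal rule
    rejects whenever max_y p(y | x) < 1 - cost.
    """
    confidence = probs.max(axis=1)   # max_y p(y | x) for each sample
    labels = probs.argmax(axis=1)    # argmax_y p(y | x)
    return np.where(confidence >= 1.0 - cost, labels, REJECT)


# Hypothetical posterior estimates for 3 classes with rejection cost 0.3:
probs = np.array([[0.90, 0.05, 0.05],   # confident -> predict class 0
                  [0.40, 0.35, 0.25]])  # ambiguous -> reject
print(predict_with_rejection(probs, cost=0.3))  # [ 0 -1]
```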

Tue Sep 20 2016
Machine Learning
Multiclass Classification Calibration Functions
Calibration functions are a powerful tool for converting excess risk bounds for a surrogate loss into excess risk bounds for the target loss. They are particularly suitable in non-parametric settings, where the approximation error can be controlled. We devise a streamlined analysis that simplifies the process of deriving calibration functions for a large number of surrogate losses.
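As an illustration of what such a conversion provides (a standard formulation stated here for concreteness, not notation taken from the paper): writing R_ℓ for the surrogate risk, R for the target risk, and starred quantities for their Bayes-optimal values, a calibration function δ guarantees that a small excess surrogate risk implies a small excess target risk:

```latex
% Standard calibration-function guarantee (illustrative notation):
% a bound on the excess surrogate risk yields a bound on the excess target risk.
\[
  R_\ell(f) - R_\ell^{*} \le \delta(\varepsilon)
  \;\Longrightarrow\;
  R(f) - R^{*} \le \varepsilon
  \qquad \text{for all } \varepsilon > 0.
\]
```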
Fri Apr 10 2015
Machine Learning
Performance measures for classification systems with rejection
Classifiers with rejection are essential in real-world applications where misclassifications and their effects are critical. However, there are no established measures to assess the performance of such classifiers. We introduce a set of desired properties for performance measures for classifiers with rejection.
Tue May 26 2020
Machine Learning
Class-Weighted Classification: Trade-offs and Robust Approaches
Wed Jun 16 2021
Machine Learning
Multi-Class Classification from Single-Class Data with Confidences
Without assumptions on the loss functions, models, and optimizers, we can successfully learn a multi-class classifier from only data of a single class. We further show, theoretically and experimentally, that our method can be Bayes-consistent with a simple modification.
Tue Dec 18 2018
Machine Learning
Consistent Robust Adversarial Prediction for General Multiclass Classification
We propose a robust adversarial prediction framework for general multiclass classification. Our method seeks predictive distributions that robustly optimize non-convex and non-continuous multiclass loss metrics against the worst-case conditional label distributions.
Mon Jun 21 2021
Machine Learning
Benign Overfitting in Multiclass Classification: All Roads Lead to Interpolation
The growing literature on "benign overfitting" in overparameterized models has been mostly restricted to regression or binary classification settings; however, most success stories of modern machine learning have been recorded in multiclass settings.