Published on Thu Dec 01 2016

Learning in an Uncertain World: Representing Ambiguity Through Multiple Hypotheses

Christian Rupprecht, Iro Laina, Robert DiPietro, Maximilian Baust, Federico Tombari, Nassir Navab, Gregory D. Hager

Many prediction tasks contain uncertainty. We propose a framework for reformulating existing single-prediction models as multiple hypothesis prediction (MHP) models. We find that MHP models outperform their single-hypothesis counterparts in all cases.

Abstract

Many prediction tasks contain uncertainty. In some cases, uncertainty is inherent in the task itself. In future prediction, for example, many distinct outcomes are equally valid. In other cases, uncertainty arises from the way data is labeled. For example, in object detection, many objects of interest often go unlabeled, and in human pose estimation, occluded joints are often labeled with ambiguous values. In this work we focus on a principled approach for handling such scenarios. In particular, we propose a framework for reformulating existing single-prediction models as multiple hypothesis prediction (MHP) models and an associated meta loss and optimization procedure to train them. To demonstrate our approach, we consider four diverse applications: human pose estimation, future prediction, image classification and segmentation. We find that MHP models outperform their single-hypothesis counterparts in all cases, and that MHP models simultaneously expose valuable insights into the variability of predictions.
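The abstract describes the MHP reformulation and its meta loss only at a high level. As a concrete illustration, the sketch below shows one way a multi-head MHP model and a relaxed winner-takes-all meta loss could be written in PyTorch; the backbone architecture, the number of hypotheses, and the weighting constant eps are assumptions for this sketch, not details taken from the paper.

```python
# Hypothetical MHP sketch: a single-prediction backbone is wrapped with M heads,
# and a relaxed winner-takes-all meta loss routes most of the gradient to the
# hypothesis closest to the (possibly ambiguous) label. Illustrative only.
import torch
import torch.nn as nn


class MHPRegressor(nn.Module):
    """Wraps a single-prediction backbone with M prediction heads."""

    def __init__(self, in_dim: int, out_dim: int, num_hypotheses: int = 5):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
        self.heads = nn.ModuleList(
            [nn.Linear(64, out_dim) for _ in range(num_hypotheses)]
        )

    def forward(self, x):
        h = self.backbone(x)
        # Shape: (batch, num_hypotheses, out_dim)
        return torch.stack([head(h) for head in self.heads], dim=1)


def mhp_meta_loss(hypotheses, target, eps=0.05):
    """Relaxed winner-takes-all meta loss.

    The hypothesis closest to the target receives weight 1 - eps, while the
    remaining hypotheses share eps so that no head is starved of gradient.
    """
    # Per-hypothesis squared error, shape (batch, num_hypotheses)
    per_hyp = ((hypotheses - target.unsqueeze(1)) ** 2).mean(dim=-1)
    num_hyp = per_hyp.size(1)
    best = per_hyp.argmin(dim=1)  # index of the winning hypothesis per sample

    weights = torch.full_like(per_hyp, eps / (num_hyp - 1))
    weights.scatter_(1, best.unsqueeze(1), 1.0 - eps)
    return (weights * per_hyp).sum(dim=1).mean()


# Usage: each input now yields M hypotheses; only the closest hypothesis is
# pushed strongly toward each label, so distinct heads can capture distinct modes.
model = MHPRegressor(in_dim=10, out_dim=2, num_hypotheses=5)
x, y = torch.randn(32, 10), torch.randn(32, 2)
loss = mhp_meta_loss(model(x), y)
loss.backward()
```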

Mon Apr 19 2021
Computer Vision
Bayesian Uncertainty and Expected Gradient Length - Regression: Two Sides Of The Same Coin?
Expected Gradient Length has been successfully used for classification and regression. Instead of computing multiple possible inferences per input, we leverage previously annotated samples to quantify the probability of previous labels being the true label. We show that expected gradient length in regression is equivalent to Bayesian uncertainty.
Wed Jun 06 2018
Machine Learning
Localized Structured Prediction
Key to structured prediction is exploiting the problem structure to simplify the learning process. Data exhibit a local structure that can be leveraged to better approximate the relation between (parts of) the input and (parts of) the output. We derive a novel approach to deal with these problems.
Mon Jun 18 2012
Artificial Intelligence
Modeling Latent Variable Uncertainty for Loss-based Learning
We consider the problem of parameter estimation from weakly supervised data. We propose a novel framework that separates the demands of the two tasks, modeling latent variable uncertainty and predicting the output, using two distributions. Our approach generalizes latent SVM in two important ways. We demonstrate the efficacy of our approach on two challenging problems.
Thu May 14 2020
Machine Learning
Taskology: Utilizing Task Relations at Scale
Many computer vision tasks address the problem of scene understanding and are naturally interrelated, e.g. object classification, detection, scene segmentation, and depth estimation. We show that we can leverage the inherent relationships among collections of tasks by training them jointly.
Wed Jun 27 2012
Machine Learning
Efficient Structured Prediction with Latent Variables for General Graphical Models
In this paper we propose a unified framework for structured prediction. This includes hidden conditional random fields and latent structured support vector machines as special cases. We describe a local entropy approximation for this general formulation using duality.
Thu Jan 07 2021
Machine Learning
Distribution-Free, Risk-Controlling Prediction Sets
We propose a framework for constructing risk-controlling prediction sets that enables simple, distribution-free, rigorous error control for many tasks. We demonstrate it in five large-scale machine learning problems. We discuss extensions to uncertainty quantification for ranking, metric learning and other tasks.