Many prediction tasks contain uncertainty. In some cases, uncertainty is
inherent in the task itself. In future prediction, for example, many distinct
outcomes are equally valid. In other cases, uncertainty arises from the way
data is labeled. For example, in object detection, many objects of interest
often go unlabeled, and in human pose estimation, occluded joints are often
labeled with ambiguous values. In this work we focus on a principled approach
for handling such scenarios. In particular, we propose a framework for
reformulating existing single-prediction models as multiple hypothesis
prediction (MHP) models and an associated meta loss and optimization procedure
to train them. To demonstrate our approach, we consider four diverse
applications: human pose estimation, future prediction, image classification
and segmentation. We find that MHP models outperform their single-hypothesis
counterparts in all cases, while also exposing valuable insight into the
variability of the predictions.
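The abstract does not specify the form of the meta loss, but a common choice for multiple hypothesis prediction is a relaxed winner-takes-all objective, in which the best-matching hypothesis receives most of the gradient weight and the remaining hypotheses share a small residual weight. The sketch below illustrates this idea under that assumption; the function name `mhp_meta_loss`, the relaxation parameter `eps`, and the scalar example are all illustrative, not taken from the paper.

```python
import numpy as np

def mhp_meta_loss(hypotheses, target, base_loss, eps=0.05):
    """Hedged sketch of a relaxed winner-takes-all meta loss (an assumption,
    not necessarily the paper's exact formulation).

    hypotheses: list of M predictions from the MHP model's heads
    target:     the single ground-truth label
    base_loss:  the original single-prediction loss, applied per hypothesis
    eps:        relaxation weight shared by the non-winning hypotheses
    """
    M = len(hypotheses)
    losses = np.array([base_loss(h, target) for h in hypotheses])
    best = int(np.argmin(losses))          # winning hypothesis
    weights = np.full(M, eps / max(M - 1, 1))
    weights[best] = 1.0 - eps              # winner gets most of the gradient
    return float(np.sum(weights * losses))

# Toy usage: three scalar hypotheses, squared-error base loss
sq = lambda p, t: (p - t) ** 2
loss = mhp_meta_loss([0.9, 2.0, -1.0], target=1.0, base_loss=sq)
```

Because every hypothesis always receives a nonzero weight, all heads keep training even when they are far from the target, which avoids the dead-hypothesis problem of hard winner-takes-all training.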