Published on Sun Jun 07 2020

Uncertainty-Aware Deep Classifiers using Generative Models

Murat Sensoy, Lance Kaplan, Federico Cerutti, Maryam Saleki
Abstract

Deep neural networks are often ignorant about what they do not know and overconfident when they make uninformed predictions. Some recent approaches quantify classification uncertainty directly by training the model to output high uncertainty for data samples close to class boundaries or from outside the training distribution. These approaches use an auxiliary data set during training to represent out-of-distribution samples. However, selecting or creating such an auxiliary data set is non-trivial, especially for high-dimensional data such as images. In this work, we develop a novel neural network model that is able to express both aleatoric and epistemic uncertainty, distinguishing decision-boundary regions from out-of-distribution regions of the feature space. To this end, variational autoencoders and generative adversarial networks are incorporated to automatically generate out-of-distribution exemplars for training. Through extensive analysis, we demonstrate that the proposed approach provides better uncertainty estimates for in-distribution samples, out-of-distribution samples, and adversarial examples on well-known data sets, compared against state-of-the-art approaches including recent Bayesian approaches for neural networks and anomaly detection methods.
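As a rough illustration of the kind of uncertainty such Dirichlet-output classifiers express (a minimal sketch, not the authors' code: the `dirichlet_uncertainty` helper and the example evidence vectors are assumptions), the subjective-logic "vacuity" u = K / S is high exactly when the network has gathered little evidence, as for a generated out-of-distribution exemplar:

```python
# Sketch (assumed, simplified): subjective-logic uncertainty from the
# non-negative evidence vector a Dirichlet-output classifier emits.
# For K classes with evidence e_k, the Dirichlet strength is
# S = K + sum(e), belief masses are b_k = e_k / S, and vacuity u = K / S,
# so the beliefs and the vacuity always sum to 1.

def dirichlet_uncertainty(evidence):
    k = len(evidence)
    s = k + sum(evidence)
    beliefs = [e / s for e in evidence]
    vacuity = k / s  # epistemic uncertainty: high when total evidence is low
    return beliefs, vacuity

# An in-distribution sample yields strong evidence for one class...
_, u_in = dirichlet_uncertainty([40.0, 1.0, 1.0])
# ...while an out-of-distribution exemplar yields near-zero evidence
# everywhere, pushing the vacuity toward 1.
_, u_ood = dirichlet_uncertainty([0.1, 0.2, 0.1])
```

Training against generated OOD exemplars then amounts to penalizing the model whenever it assigns substantial evidence to such samples, driving their vacuity up.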

Tue Aug 20 2019
Machine Learning
Density estimation in representation space to predict model uncertainty
Deep learning models frequently make incorrect predictions with high confidence when presented with test examples that are not well represented in their training dataset. We propose a novel and straightforward approach to estimate prediction uncertainty in a pre-trained neural network model.
Tue Feb 11 2020
Machine Learning
Fine-grained Uncertainty Modeling in Neural Networks
The method corrects overconfident NN decisions, detects outlier points and learns to say "I don't know" when uncertain about a critical point. The method sits on top of a given Neural Network and requires a single scan of training data to estimate class distribution statistics.
Mon Dec 16 2019
Machine Learning
On-manifold Adversarial Data Augmentation Improves Uncertainty Calibration
On-Manifold Adversarial Data Augmentation or OMADA attempts to generate the most challenging examples by following an on-manifold attack path in the latent space of an autoencoder-based generative model. On a variety of datasets as well as on multiple diverse
Tue Jun 05 2018
Machine Learning
Evidential Deep Learning to Quantify Classification Uncertainty
Deterministic neural nets have been shown to learn effective predictors on a wide range of machine learning problems. We treat predictions of a neural net as subjective opinions and learn the function that collects the evidence leading to these opinions by a deterministic neural net.
Tue Jul 06 2021
Machine Learning
Logit-based Uncertainty Measure in Classification
We introduce a new, reliable, and model-agnostic uncertainty measure for classification tasks called logit uncertainty. It is based on the logit outputs of neural networks. We show that this new uncertainty measure yields superior performance compared to existing uncertainty measures.
Sun Nov 18 2018
Artificial Intelligence
A Variational Dirichlet Framework for Out-of-Distribution Detection
Deep neural networks have been widely adopted in many real-life applications. However, they are known to have little control over their uncertainty for unseen examples. In this paper, we propose a higher-order uncertainty metric for deep neural networks. We also propose an objective function to discriminate against adversarial examples.