Published on Tue Apr 13 2021

Fast Hierarchical Games for Image Explanations

Jacopo Teneggi, Alexandre Luster, Jeremias Sulam

The current lack of interpretability often undermines the deployment of accurate machine learning tools. We present a model-agnostic explanation method for image classification based on a hierarchical extension of Shapley coefficients.

Abstract

As modern complex neural networks keep breaking records and solving harder problems, their predictions also become less and less intelligible. The current lack of interpretability often undermines the deployment of accurate machine learning tools in sensitive settings. In this work, we present a model-agnostic explanation method for image classification based on a hierarchical extension of Shapley coefficients, Hierarchical Shap (h-Shap), that resolves some of the limitations of current approaches. Unlike other Shapley-based explanation methods, h-Shap is scalable and can be computed without the need for approximation. Under certain distributional assumptions, such as those common in multiple instance learning, h-Shap retrieves the exact Shapley coefficients with an exponential improvement in computational complexity. We compare our hierarchical approach with popular Shapley-based and non-Shapley-based methods on a synthetic dataset, a medical imaging scenario, and a general computer vision problem, showing that h-Shap outperforms the state of the art in both accuracy and runtime. Code and experiments are made publicly available.
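The hierarchical recursion the abstract describes is simple to sketch. The following is a minimal illustration, not the authors' released implementation: it assumes a scalar-output classifier, a quadrant split, masking of absent regions with a fixed baseline image, and hypothetical `tau` (relevance threshold) and `min_size` (leaf size) parameters.

```python
import itertools
import math

import numpy as np


def masked_prediction(model, image, regions, baseline):
    """Evaluate the model with only `regions` visible; the rest is set to the baseline."""
    x = baseline.copy()
    for (r0, r1, c0, c1) in regions:
        x[r0:r1, c0:c1] = image[r0:r1, c0:c1]
    return model(x)


def shapley_quadrants(model, image, baseline, bbox):
    """Exact Shapley coefficients for the 4-player game among the quadrants of `bbox`."""
    r0, r1, c0, c1 = bbox
    rm, cm = (r0 + r1) // 2, (c0 + c1) // 2
    quads = [(r0, rm, c0, cm), (r0, rm, cm, c1), (rm, r1, c0, cm), (rm, r1, cm, c1)]
    n = len(quads)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in itertools.combinations(others, k):
                w = math.factorial(k) * math.factorial(n - k - 1) / math.factorial(n)
                with_i = masked_prediction(model, image, [quads[j] for j in S] + [quads[i]], baseline)
                without_i = masked_prediction(model, image, [quads[j] for j in S], baseline)
                phi[i] += w * (with_i - without_i)
    return quads, phi


def h_shap(model, image, baseline, bbox, tau=0.0, min_size=8):
    """Recurse only into quadrants whose Shapley coefficient exceeds `tau`."""
    quads, phi = shapley_quadrants(model, image, baseline, bbox)
    relevant = []
    for q, p in zip(quads, phi):
        if p <= tau:
            continue  # prune irrelevant branches immediately
        if q[1] - q[0] <= min_size or q[3] - q[2] <= min_size:
            relevant.append((q, p))  # leaf: report this region as relevant
        else:
            relevant.extend(h_shap(model, image, baseline, q, tau, min_size))
    return relevant
```

Because each visited node is a game with only four players, it needs on the order of 2^4 distinct coalition evaluations (with caching), and irrelevant branches are pruned at once; this is where the exponential improvement over computing a Shapley coefficient for every pixel comes from.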

Related Papers

Mon Jan 27 2020
Machine Learning
Black Box Explanation by Learning Image Exemplars in the Latent Feature Space
The proposed method first generates exemplar images in the latent feature space and learns a decision tree classifier. Then, it selects and decodes exemplars respecting local decision rules. Finally, it visualizes them in a manner that shows the user how the exemplars can be modified.
Mon May 29 2017
Artificial Intelligence
Contextual Explanation Networks
Contextual explanation networks (CENs) learn to predict and explain simultaneously. CENs generate parameters for intermediate graphical models which are further used for prediction and play the role of explanations.
Mon May 31 2021
Computer Vision
Bounded logit attention: Learning to explain image classifiers
A bounded logit attention (BLA) module learns to select a subset of the convolutional feature map for each input instance. BLA can also be employed as a post-hoc add-on to trained classifiers.
Mon Apr 05 2021
Machine Learning
Explainability-aided Domain Generalization for Image Classification
Wed Oct 14 2020
Artificial Intelligence
Human-interpretable model explainability on high-dimensional data
The importance of explainability in machine learning continues to grow, and unique challenges arise when a model's input features become high-dimensional: principled model-agnostic approaches to explainability become too computationally expensive.
Mon May 04 2020
Artificial Intelligence
LIMEtree: Interactively Customisable Explanations Based on Local Surrogate Multi-output Regression Trees
Systems based on artificial intelligence and machine learning models should be transparent, yet many explainers are only capable of outputting a single, one-size-fits-all explanation. LIMEtree can produce consistent explanations on which an interactive exploratory process can be built.
Tue Feb 16 2016
Machine Learning
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Despite widespread adoption, machine learning models remain mostly black boxes. LIME is a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner.
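LIME's core mechanism is a local surrogate: sample perturbations around the instance, query the black box, and fit a simple weighted linear model whose coefficients serve as the explanation. The sketch below is a simplified tabular variant under illustrative assumptions (binary on/off perturbations, a zero baseline for absent features, a Gaussian proximity kernel, and a ridge surrogate); the function name `lime_tabular` is hypothetical, and this is not the lime library's API.

```python
import numpy as np
from sklearn.linear_model import Ridge


def lime_tabular(model, x, n_samples=1000, kernel_width=0.75):
    """Fit a proximity-weighted linear surrogate to `model` around the instance `x`."""
    d = x.shape[0]
    Z = np.random.binomial(1, 0.5, size=(n_samples, d))  # interpretable on/off features
    X_pert = Z * x                                       # "off" features set to a zero baseline (an assumption)
    y = model(X_pert)                                    # query the black box on the perturbations
    dist = np.sqrt(((Z - 1) ** 2).sum(axis=1)) / np.sqrt(d)
    w = np.exp(-(dist ** 2) / kernel_width ** 2)         # weight samples by proximity to x
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(Z, y, sample_weight=w)
    return surrogate.coef_                               # per-feature local importance


# Toy black box: a fixed linear-plus-interaction function of 4 features.
model = lambda X: X[:, 0] + 2 * X[:, 1] + X[:, 2] * X[:, 3]
print(lime_tabular(model, np.array([1.0, 1.0, 1.0, 1.0])))
```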
Fri Oct 07 2016
Computer Vision
Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization
Gradient-weighted Class Activation Mapping (Grad-CAM) uses the gradients of any target concept flowing into the final convolutional layer to produce a coarse localization map. It is applicable to CNNs with fully-connected layers and CNNs used for structured outputs, without any architectural changes or re-training.
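The mechanism fits in a few lines of PyTorch. The sketch below uses a tiny stand-in network and forward/backward hooks to capture the final conv layer's activations and gradients; the hook-based bookkeeping is one common way to implement the idea, not the authors' reference code.

```python
import torch
import torch.nn.functional as F
from torch import nn

# A tiny stand-in classifier; any CNN with a final convolutional layer works the same way.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # model[3] is the final conv layer
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10),
)
target_layer = model[3]

# Hooks record the layer's activations on the forward pass and gradients on the backward pass.
activations, gradients = {}, {}
target_layer.register_forward_hook(lambda m, i, o: activations.update(a=o))
target_layer.register_full_backward_hook(lambda m, gi, go: gradients.update(g=go[0]))

x = torch.randn(1, 3, 32, 32)  # stand-in input image
class_idx = 3                  # target concept / class
score = model(x)[0, class_idx]
model.zero_grad()
score.backward()

# Grad-CAM: global-average-pool the gradients to get per-channel weights,
# take the weighted sum of the activation maps, and keep only positive evidence.
weights = gradients["g"].mean(dim=(2, 3), keepdim=True)   # (1, C, 1, 1)
cam = F.relu((weights * activations["a"]).sum(dim=1))     # (1, H', W')
cam = F.interpolate(cam.unsqueeze(1), size=x.shape[-2:], mode="bilinear")
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
```

The resulting class-discriminative map highlights the image regions whose activations most increase the target class score.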
Mon Dec 22 2014
Machine Learning
Adam: A Method for Stochastic Optimization
Adam is an algorithm for first-order gradient-based optimization of stochastic objective functions. The method is straightforward to implement and has low memory requirements. It is well suited for problems that are large in terms of data and parameters.
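The update rule itself is compact. Below is the algorithm from the paper with its default hyperparameters, applied to a toy quadratic objective for illustration (the objective is an assumption for the demo).

```python
import numpy as np


def adam_update(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam step: moment estimates, bias correction, parameter update."""
    m = beta1 * m + (1 - beta1) * grad       # first moment (running mean of gradients)
    v = beta2 * v + (1 - beta2) * grad ** 2  # second moment (running uncentered variance)
    m_hat = m / (1 - beta1 ** t)             # bias correction for zero initialization
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v


# Minimize f(theta) = ||theta||^2 as a toy objective.
theta = np.array([2.0, -3.0])
m = v = np.zeros_like(theta)
for t in range(1, 501):
    grad = 2 * theta  # gradient of the toy objective
    theta, m, v = adam_update(theta, grad, m, v, t)
```

The bias-corrected moment estimates give per-parameter step sizes that adapt to the gradient's scale, which is what makes the method robust across large, sparse problems.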
Thu Aug 22 2019
Artificial Intelligence
The many Shapley values for model explanation
The Shapley value has become a popular method to attribute the prediction of a machine-learning model on an input to its base features. There is a multiplicity of ways in which the Shapley value is operationalized in the attribution problem.
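The classical definition is easy to state in code. The sketch below computes exact Shapley values for a toy three-feature game; the value function, which decides what a coalition of "present" features is worth, is exactly where the multiplicity arises, since different methods define it differently (for example, in how absent features are imputed). The toy model and its interaction term are illustrative assumptions.

```python
import itertools
import math


def shapley_values(value, n):
    """Exact Shapley values for an n-player game given its value function on coalitions."""
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in itertools.combinations(others, k):
                w = math.factorial(k) * math.factorial(n - k - 1) / math.factorial(n)
                phi[i] += w * (value(set(S) | {i}) - value(set(S)))
    return phi


# Toy attribution: the "prediction" is the sum of present features' contributions,
# plus an interaction between features 0 and 1.
x = [1.0, 2.0, 3.0]

def value(S):
    v = sum(x[j] for j in S)
    if {0, 1} <= S:
        v += 0.5  # interaction term, split evenly between players 0 and 1 by symmetry
    return v

print(shapley_values(value, 3))  # -> approximately [1.25, 2.25, 3.0]
```

Note that the three attributions sum to the full prediction 6.5, an instance of the efficiency axiom that makes Shapley-based attribution attractive.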
Mon May 27 2019
Machine Learning
A Rate-Distortion Framework for Explaining Neural Network Decisions
We formalise the widespread idea of interpreting neural network decisions as an explicit optimisation problem in a rate-distortion framework, and develop a heuristic solution strategy for deep ReLU neural networks. We present numerical experiments for two image classification data sets where we outperform established methods.
Mon May 22 2017
Artificial Intelligence
A Unified Approach to Interpreting Model Predictions
Highly complex models are often difficult to interpret, creating a tension between accuracy and interpretability. To address this problem, we present SHAP, a unified framework for interpreting predictions.