Published on Mon Jun 08 2020

Evaluation of Similarity-based Explanations

Kazuaki Hanawa, Sho Yokoi, Satoshi Hara, Kentaro Inui

The study investigated relevance metrics that can provide reasonable explanations to users. The cosine similarity of the gradients of the loss performed best and would be the recommended choice in practice. Some metrics performed poorly in the tests, and the authors analyzed the reasons for their failure.

Abstract

Explaining the predictions made by complex machine learning models helps users to understand and accept the predicted outputs with confidence. One promising way is to use similarity-based explanations that provide similar instances as evidence to support model predictions. Several relevance metrics are used for this purpose. In this study, we investigated relevance metrics that can provide reasonable explanations to users. Specifically, we adopted three tests to evaluate whether the relevance metrics satisfy the minimal requirements for similarity-based explanation. Our experiments revealed that the cosine similarity of the gradients of the loss performs best, which would be a recommended choice in practice. In addition, we showed that some metrics perform poorly in our tests and analyzed the reasons for their failure. We expect our insights to help practitioners in selecting appropriate relevance metrics and also aid further research on designing better relevance metrics for explanations.
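The recommended metric, the cosine similarity of loss gradients, scores a training instance by how closely the direction of its loss gradient matches that of the test instance. The following is a minimal sketch of that idea, not the authors' implementation; it assumes a PyTorch classifier `model`, a loss function `loss_fn`, and an iterable `train_set` of (input, label) pairs, and all names here are illustrative:

```python
# Sketch of a gradient-cosine relevance metric:
#   relevance(x_test, x_train) = cos( grad_theta L(x_test), grad_theta L(x_train) )
# Assumes single-instance tensors without a batch dimension and that the full
# flattened parameter gradient fits in memory.
import torch

def loss_gradient(model, loss_fn, x, y):
    """Flattened gradient of the loss at (x, y) w.r.t. all trainable parameters."""
    params = [p for p in model.parameters() if p.requires_grad]
    loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
    grads = torch.autograd.grad(loss, params)
    return torch.cat([g.reshape(-1) for g in grads])

def gradient_cosine_relevance(model, loss_fn, x_test, y_test, train_set):
    """Score each training instance by cosine similarity of loss gradients."""
    g_test = loss_gradient(model, loss_fn, x_test, y_test)
    scores = []
    for x_train, y_train in train_set:
        g_train = loss_gradient(model, loss_fn, x_train, y_train)
        scores.append(
            torch.nn.functional.cosine_similarity(g_test, g_train, dim=0).item()
        )
    return scores  # higher score = more relevant training instance
```

In practice the model's predicted label is often used as `y_test`, and the highest-scoring training instances are shown to the user as the similarity-based explanation.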

Related Papers

Wed Jun 09 2021
Machine Learning
A general approach for Explanations in terms of Middle Level Features
There is growing interest in making Machine Learning (ML) systems more understandable and trustworthy to general users. This paper suggests a general XAI approach that allows producing explanations for an ML system's behaviour in terms of different and user-selected input features.
Sat Sep 12 2020
Artificial Intelligence
MeLIME: Meaningful Local Explanation for Machine Learning Models
Most state-of-the-art machine learning algorithms induce black-box models. MeLIME generalizes the LIME method, allowing more flexibility in perturbation sampling.
Wed Nov 20 2019
Artificial Intelligence
Towards a Unified Evaluation of Explanation Methods without Ground Truth
This paper proposes a set of criteria to evaluate the objectivity of explanation methods for neural networks. The core challenge is that people usually cannot obtain ground-truth explanations of the neural network.
Fri Feb 02 2018
Artificial Intelligence
How do Humans Understand Explanations from Machine Learning Systems? An Evaluation of the Human-Interpretability of Explanation
Machine learning systems can provide a human-understandable rationale for their predictions or decisions. Exactly what kinds of explanation are truly human-interpretable remains poorly understood. This work advances our understanding of what makes explanations interpretable.
Tue Jul 16 2019
Artificial Intelligence
Evaluating Explanation Without Ground Truth in Interpretable Machine Learning
Interpretable Machine Learning (IML) has become increasingly important in many real-world applications, such as autonomous cars and medical diagnosis. Having a sense of explanation quality not only matters for assessing system boundaries, but also helps to realize the true benefits to human users in practical settings.
Sun May 27 2018
Artificial Intelligence
Semantic Explanations of Predictions
The main objective of explanations is to transmit knowledge to humans. This work proposes to construct informative explanations for predictions made from machine learning models. The main feature of the approach is that knowledge about explanations is captured in the form of ontological concepts.