Published on Thu Apr 01 2021

Coalitional strategies for efficient individual prediction explanation

Gabriel Ferrettini, Elodie Escriva, Julien Aligon, Jean-Baptiste Excoffier, Chantal Soulé-Dupuy
Abstract

As Machine Learning (ML) is now widely applied in many domains, in both research and industry, understanding what happens inside the black box is an increasingly common demand, especially from non-experts of these models. Several approaches have thus been developed to provide clear insights into a model's prediction for a particular observation, but at the cost of long computation times or restrictive hypotheses that do not fully account for interactions between attributes. This paper provides methods based on the detection of relevant groups of attributes -- named coalitions -- influencing a prediction and compares them with the literature. Our results show that these coalitional methods are more efficient than existing ones such as SHapley Additive exPlanation (SHAP): computation time is shortened while an acceptable accuracy of individual prediction explanations is preserved. This enables wider practical use of explanation methods, increasing trust between developed ML models, end-users, and anyone impacted by a decision in which these models played a role.
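The paper's exact coalition-detection algorithm is not given here, but the underlying idea -- measuring how much a *group* of attributes, rather than a single attribute, influences one prediction -- can be sketched as follows. This is a naive illustration under stated assumptions (a scikit-learn classifier and marginalization over background data), not the authors' method:

```python
import numpy as np
from itertools import combinations
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Toy setup (assumption: any scikit-learn classifier with predict_proba works here).
X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

def group_influence(model, X_background, x, group, n_samples=100, rng=None):
    """Influence of the attribute group `group` on the prediction for `x`:
    the drop in the predicted probability of x's class when the group's
    values are replaced by values drawn from the background data."""
    rng = np.random.default_rng(rng)
    cls = model.predict([x])[0]
    p_full = model.predict_proba([x])[0][cls]
    # Replace the coalition's attributes with values from random background rows.
    idx = rng.integers(len(X_background), size=n_samples)
    X_perturbed = np.tile(x, (n_samples, 1))
    X_perturbed[:, list(group)] = X_background[idx][:, list(group)]
    p_marg = model.predict_proba(X_perturbed)[:, cls].mean()
    return p_full - p_marg

x = X[0]
# Influence of every size-2 "coalition" of attributes on this one prediction.
for group in combinations(range(X.shape[1]), 2):
    print(group, round(group_influence(model, X, x, group, rng=0), 3))
```

Unlike exact Shapley-value computation, which enumerates all attribute subsets, restricting attention to a few relevant coalitions keeps the number of perturbed model evaluations small -- which is the efficiency gain the abstract refers to.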

Mon Dec 07 2020
Machine Learning
Explainable Artificial Intelligence: How Subsets of the Training Data Affect a Prediction
Sun Dec 20 2020
Artificial Intelligence
Biased Models Have Biased Explanations
We study fairness in Machine Learning through the lens of attribute-based explanations generated for machine learning models. Our hypothesis is: Biased Models have Biased Explanations. We propose a novel way of detecting (un)fairness for any black box model.
Fri Jun 12 2020
Machine Learning
Generalized SHAP: Generating multiple types of explanations in machine learning
Generalized Shapley Additive Explanations (G-SHAP) produces many additional types of explanations, including general classification explanations, intergroup differences, and model failure explanations. We formally define these explanation types and illustrate their practical use on real data.
Tue Jun 26 2018
Artificial Intelligence
Open the Black Box: Data-Driven Explanation of Black Box Decision Systems
Black box systems for automated decision making are often based on machine learning over (big) data. This is problematic not only for the lack of transparency, but also for possible biases hidden in the algorithms. We introduce the local-to-global framework for black box explanation.
Wed Aug 18 2021
Machine Learning
CARE: Coherent Actionable Recourse based on Sound Counterfactual Explanations
Counterfactual explanation methods interpret the outputs of a machine learning model in the form of "what-if scenarios". They explain how to obtain a desired prediction from the model by recommending small changes to the input features. This paper introduces CARE, a modular explanation framework.
Fri Apr 09 2021
Artificial Intelligence
Individual Explanations in Machine Learning Models: A Survey for Practitioners