As Machine Learning (ML) is now widely applied across many domains, in both
research and industry, understanding what happens inside these black-box
models is a growing demand, especially from non-experts.
Several approaches have thus been developed to provide clear insights into a
model's prediction for a particular observation, but at the cost of long
computation times or restrictive hypotheses that do not fully account for
interactions between attributes. This paper presents methods based on the
detection of relevant groups of attributes -- named coalitions -- that
influence a prediction, and compares them with the literature. Our results
show that these
coalitional methods are more efficient than existing ones such as SHapley
Additive exPlanations (SHAP). Computation time is shortened while acceptable
accuracy of individual prediction explanations is preserved. This therefore
enables wider practical use of explanation methods, increasing trust between
ML models, their end-users, and anyone impacted by a decision in which these
models played a role.