Published on Tue May 11 2021

Rationalization through Concepts

Diego Antognini, Boi Faltings

ConRAT extracts a set of text snippets as concepts and infers which ones are described in the document. Then, it explains the outcome with a linear aggregation of concepts.

Abstract

Automated predictions require explanations to be interpretable by humans. One type of explanation is a rationale, i.e., a selection of input features such as relevant text snippets from which the model computes the outcome. However, a single overall selection does not provide a complete explanation, e.g., weighing several aspects for decisions. To this end, we present a novel self-interpretable model called ConRAT. Inspired by how human explanations for high-level decisions are often based on key concepts, ConRAT extracts a set of text snippets as concepts and infers which ones are described in the document. Then, it explains the outcome with a linear aggregation of concepts. Two regularizers drive ConRAT to build interpretable concepts. In addition, we propose two techniques to boost the rationale and predictive performance further. Experiments on both single- and multi-aspect sentiment classification tasks show that ConRAT is the first to generate concepts that align with human rationalization while using only the overall label. Further, it outperforms state-of-the-art methods trained on each aspect label independently.
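The abstract's final step, predicting the outcome from a linear aggregation of concept presence scores, can be illustrated with a minimal sketch. All names and numbers below are illustrative assumptions, not the authors' code; it assumes the per-concept scores have already been inferred from the document.

```python
import math

def predict_from_concepts(concept_scores, weights, bias=0.0):
    """Hypothetical sketch: aggregate per-concept presence scores
    (each in [0, 1]) with a learned linear layer, then squash the
    result to a probability for the overall label."""
    z = sum(w * c for w, c in zip(weights, concept_scores)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid

# Illustrative example: three concepts (say "service", "food",
# "ambience"), each detected in the document with some confidence.
scores = [0.9, 0.2, 0.7]
weights = [1.5, -0.5, 1.0]  # learned importance of each concept
p = predict_from_concepts(scores, weights)
```

Because the aggregation is linear, each weight-times-score term can be read off directly as that concept's contribution to the prediction, which is what makes the explanation interpretable.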

Tue May 28 2019
Machine Learning
EDUCE: Explaining model Decisions through Unsupervised Concepts Extraction
We propose a new self-interpretable model that performs output prediction and simultaneously provides an explanation in terms of the presence of particular concepts in the input. We experimentally demonstrate the relevance of our approach on text classification and multi-sentiment analysis tasks.
Mon Oct 28 2019
Machine Learning
A Game Theoretic Approach to Class-wise Selective Rationalization
We show theoretically in a simplified scenario how the game drives the solution towards meaningful class-dependent rationales. The proposed method is able to identify both factual (justifying the ground truth label) and counterfactual (countering the ground truth label) rationales consistent with human rationalization.
Mon Jan 11 2021
Artificial Intelligence
Explain and Predict, and then Predict Again
A desirable property of learning systems is to be both effective and interpretable. We propose a novel yet simple approach, ExPred, that uses multi-task learning in the explanation-generation phase, and then uses another prediction network on just the extracted explanations to optimize task performance.
Mon Aug 19 2019
NLP
Fine-grained Sentiment Analysis with Faithful Attention
The general task of textual sentiment classification has been widely studied. Much less research looks specifically at sentiment between a specified source and target. We found that despite reasonable performance, the model's attention was often systematically misaligned with the words that contribute to sentiment.
Mon Jun 13 2016
Neural Networks
Rationalizing Neural Predictions
Prediction without justification has limited applicability. Our approach combines two modular components, generator and encoder. The generator specifies a distribution over text fragments as candidate rationales. These are passed through the encoder for prediction.
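The generator-encoder pipeline described above can be sketched with toy stand-ins. The lexicon, example sentence, and scoring rule below are illustrative assumptions; in the actual model both components are neural networks trained jointly, with the generator sampling a binary mask over the text.

```python
SENTIMENT_WORDS = {"great", "amazing", "terrible", "awful"}  # illustrative lexicon

def generator(tokens):
    """Toy generator: a binary mask marking the tokens selected as
    the rationale (the real model learns this distribution)."""
    return [1 if t in SENTIMENT_WORDS else 0 for t in tokens]

def encoder(tokens, mask):
    """Toy encoder: predicts from the rationale only. A count of
    positive vs. negative selected words stands in for a neural net."""
    positive = {"great", "amazing"}
    selected = [t for t, m in zip(tokens, mask) if m]
    score = sum(1 if t in positive else -1 for t in selected)
    return "positive" if score >= 0 else "negative"

tokens = "the food was great but service terrible terrible".split()
mask = generator(tokens)       # only sentiment-bearing words survive
label = encoder(tokens, mask)  # prediction uses the rationale alone
```

The key property is that the encoder never sees the unselected words, so the mask is a faithful rationale for the prediction by construction.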
Mon Apr 27 2020
NLP
Octa: Omissions and Conflicts in Target-Aspect Sentiment Analysis
Sentiments in opinionated text are often determined by both aspects and target words. We observe that targets and aspects interrelate in subtle ways, often yielding conflicting sentiments. We propose Octa, an approach that jointly considers aspects and targets when inferring sentiments.
Mon Jun 12 2017
NLP
Attention Is All You Need
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train.
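The attention mechanism at the core of the Transformer is scaled dot-product attention. A minimal pure-Python sketch, with toy vectors as illustrative inputs:

```python
import math

def softmax(xs):
    m = max(xs)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attention(queries, keys, values):
    """Scaled dot-product attention: each query scores all keys,
    and its output is the softmax-weighted average of the values."""
    d_k = len(keys[0])
    out = []
    for q in queries:
        scores = [dot(q, k) / math.sqrt(d_k) for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[i] for w, v in zip(weights, values))
                    for i in range(len(values[0]))])
    return out

# Toy example: the query aligns with the first key, so the output
# leans toward the first value vector.
q = [[1.0, 0.0]]
k = [[1.0, 0.0], [0.0, 1.0]]
v = [[10.0, 0.0], [0.0, 10.0]]
result = attention(q, k, v)
```

Because the softmax weights sum to one, the output is always a convex combination of the value vectors, weighted by query-key similarity.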
Mon Mar 09 2015
Machine Learning
Distilling the Knowledge in a Neural Network
We introduce a new type of ensemble composed of one or more full models and many specialist models. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.
Tue Feb 16 2016
Machine Learning
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Despite widespread adoption, machine learning models remain mostly black boxes. LIME is a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner.
Mon Sep 01 2014
NLP
Neural Machine Translation by Jointly Learning to Align and Translate
Neural machine translation is a recently proposed approach to machine translation. Unlike traditional statistical machine translation, neural machine translation aims at building a single neural network that can be jointly tuned to maximize translation performance.
Thu Jul 09 2020
Machine Learning
Concept Bottleneck Models
State-of-the-art models today do not typically support the manipulation of concepts like "the existence of bone spurs." We revisit the classic idea of first predicting concepts that are provided at training time, and then using these concepts to predict the label.
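The two-stage structure of a concept bottleneck can be sketched in a few lines. The thresholds, feature names, and decision rule below are illustrative assumptions (only the "bone spurs" concept comes from the blurb), not the paper's trained models:

```python
def predict_concepts(x):
    """Toy concept predictor: maps raw features to human-interpretable
    concept scores (illustrative thresholds, not a trained model)."""
    return {"bone_spur": 1.0 if x["spur_width_mm"] > 2.0 else 0.0,
            "joint_space_narrowing": 1.0 if x["joint_gap_mm"] < 3.0 else 0.0}

def predict_label(concepts):
    """Toy label predictor: sees only the concept bottleneck, so
    editing a concept directly changes the prediction."""
    return "arthritis" if sum(concepts.values()) >= 1.0 else "healthy"

x = {"spur_width_mm": 3.5, "joint_gap_mm": 4.0}
c = predict_concepts(x)
label = predict_label(c)

# Intervention: an expert corrects a mispredicted concept and
# re-predicts, without retraining anything.
c["bone_spur"] = 0.0
corrected = predict_label(c)
```

The point of the bottleneck is exactly this intervention: because the label depends only on the concepts, correcting a concept propagates to the final prediction.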
Mon Dec 22 2014
Machine Learning
Adam: A Method for Stochastic Optimization
Adam is an algorithm for first-order gradient-based optimization of stochastic objective functions. The method is straightforward to implement and requires little memory. It is well suited for problems that are large in terms of data and parameters.
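A single Adam update is short enough to sketch in full. This follows the update rule from the paper (moving averages of the gradient and its square, bias correction, scaled step); the toy objective at the end is an illustrative assumption.

```python
import math

def adam_step(theta, grad, m, v, t, lr=0.001,
              beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: keep exponential moving averages of the
    gradient (m) and its square (v), correct their initialization
    bias, then take a step scaled per parameter."""
    m = [beta1 * mi + (1 - beta1) * g for mi, g in zip(m, grad)]
    v = [beta2 * vi + (1 - beta2) * g * g for vi, g in zip(v, grad)]
    m_hat = [mi / (1 - beta1 ** t) for mi in m]  # bias-corrected
    v_hat = [vi / (1 - beta2 ** t) for vi in v]
    theta = [p - lr * mh / (math.sqrt(vh) + eps)
             for p, mh, vh in zip(theta, m_hat, v_hat)]
    return theta, m, v

# Toy usage: minimize f(x) = x^2 from x = 1.0; the gradient is 2x.
theta, m, v = [1.0], [0.0], [0.0]
for t in range(1, 501):
    grad = [2.0 * theta[0]]
    theta, m, v = adam_step(theta, grad, m, v, t)
```

Note the per-parameter scaling by `sqrt(v_hat)`: the effective step size is roughly bounded by `lr` regardless of the gradient's magnitude, which is what makes the method robust to scale.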