Published on Thu Oct 15 2020

Interpreting Deep Learning Model Using Rule-based Method

Xiaojian Wang, Jingyuan Wang, Ke Tang


Abstract

Deep learning models are favored in many research and industry areas, having reached accuracy that approaches or even surpasses human level. However, they have long been regarded by researchers as black-box models because of their complicated nonlinear behavior. In this paper, we propose a multi-level decision framework to provide comprehensive interpretation for deep neural network models. Within this framework, a multi-level decision structure (MLD) is first constructed by fitting a decision tree to each neuron and aggregating the trees together; the MLD approximates the behavior of the target neural network with high efficiency and high fidelity. For local explanation of individual samples, two algorithms are proposed on top of the MLD structure: a forward decision generation algorithm that produces sample-level decisions, and a backward rule induction algorithm that recursively extracts the rule mapping for a sample. For global explanation, frequency-based and out-of-bag-based methods are proposed to identify the features that matter most in the network's decisions. Furthermore, experiments on the MNIST and National Free Pre-Pregnancy Check-up (NFPC) datasets demonstrate the effectiveness and interpretability of the MLD framework. In the evaluation, both functionally-grounded and human-grounded methods are used to ensure credibility.
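To make the construction concrete, here is a minimal sketch of the per-neuron tree-fitting idea behind an MLD-style surrogate, assuming a toy two-layer MLP. The network weights, layer sizes, tree depths, and the chaining in `mld_predict` are illustrative assumptions, not the authors' exact algorithm.

```python
# Sketch: fit one shallow decision tree per neuron, then chain the trees
# level by level to mimic the network's forward pass. Toy MLP assumed.
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)

# Toy "trained" network: x -> h = relu(W1 x + b1) -> y = W2 h + b2
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)

def hidden(X):
    return np.maximum(X @ W1.T + b1, 0.0)

def output(H):
    return H @ W2.T + b2

# Sample inputs and record every neuron's activation.
X = rng.normal(size=(2000, 3))
H = hidden(X)
Y = output(H)

# One tree per hidden neuron, predicting its activation from the inputs;
# one tree for the output neuron, predicting from the hidden activations.
hidden_trees = [DecisionTreeRegressor(max_depth=3).fit(X, H[:, j])
                for j in range(H.shape[1])]
output_tree = DecisionTreeRegressor(max_depth=3).fit(H, Y[:, 0])

def mld_predict(X_new):
    """Forward decision generation: each level's tree outputs feed the next."""
    H_hat = np.column_stack([t.predict(X_new) for t in hidden_trees])
    return output_tree.predict(H_hat)

# Fidelity check: how closely the surrogate tracks the network on new data.
X_test = rng.normal(size=(500, 3))
r = np.corrcoef(mld_predict(X_test), output(hidden(X_test))[:, 0])[0, 1]
print(f"surrogate/network correlation: {r:.3f}")

# Rule view of one neuron's surrogate, in the spirit of rule induction.
print(export_text(hidden_trees[0], feature_names=["x0", "x1", "x2"]))
```

Printing a path through one neuron's tree loosely mirrors the backward rule induction step: a sample's decision can be traced from the output tree back through the hidden-layer trees as a conjunction of threshold rules.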

Thu Mar 04 2021
Machine Learning
Learning Accurate and Interpretable Decision Rule Sets from Neural Networks
This paper proposes a new paradigm for learning a set of independent logical rules in disjunctive normal form as an interpretable model for classification. We consider the problem of learning an interpretable decision rule set as training a neural network in a specific, yet very simple two-layer architecture.
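As a concrete illustration of the paradigm this entry describes, here is a minimal sketch of a DNF rule set expressed as a two-layer network of hard-threshold units; the specific rules and the literal encoding are illustrative assumptions, not the paper's training procedure.

```python
# Sketch: a DNF rule set as a two-layer network. Hidden units are AND
# gates over literals; the output unit is an OR gate. Rules are assumed.
import numpy as np

def step(z):
    return (z > 0).astype(float)

# Literal vector: [x0, x1, x2, NOT x0, NOT x1, NOT x2]
def literals(X):
    return np.hstack([X, 1.0 - X])

# Two conjunctions (hidden layer), each selecting its literals:
#   r1 = x0 AND NOT x2, r2 = x1 AND x2
W = np.array([[1, 0, 0, 0, 0, 1],
              [0, 1, 1, 0, 0, 0]], dtype=float)
k = W.sum(axis=1)  # number of literals in each rule

def rule_set(X):
    h = step(literals(X) @ W.T - (k - 0.5))  # AND layer fires iff all literals hold
    return step(h.sum(axis=1) - 0.5)         # OR output fires iff any rule fires

X = np.array([[1, 0, 0],
              [0, 1, 1],
              [1, 0, 1],
              [0, 0, 0]], dtype=float)
print(rule_set(X))  # [1. 1. 0. 0.]
```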
Mon Aug 23 2021
Machine Learning
Explaining Bayesian Neural Networks
Explainable AI (XAI) aims to provide interpretations of DNNs' predictions. Bayesian approaches such as Bayesian Neural Networks (BNNs) have so far offered only a limited form of transparency (model transparency). Moreover, BNNs implicitly employ multiple heterogeneous prediction strategies.
Sun Oct 02 2011
Neural Networks
Eclectic Extraction of Propositional Rules from Neural Networks
Neural networks have a well-known drawback: they are "black box" learners whose reasoning is not comprehensible to users. This lack of transparency makes them unsuitable for many high-risk tasks such as medical diagnosis. Rule extraction methods attempt to curb this limitation by extracting comprehensible propositional rules from the trained network.
Wed Apr 10 2019
Machine Learning
Enhancing Decision Tree based Interpretation of Deep Neural Networks through L1-Orthogonal Regularization
Machine learning models such as neural networks are often difficult to explain because their inner workings are opaque. Training with L1-orthogonal regularization, however, preserves the accuracy of the NN while allowing it to be closely approximated by small decision trees.
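For intuition, here is a minimal sketch of one plausible form of an L1-orthogonality penalty on a layer's weight matrix; the exact regularizer, coefficient, and layer choice used in the paper may differ.

```python
# Sketch: an L1-orthogonality penalty, one plausible reading of the
# regularizer named in the title. Push W W^T toward the identity and
# penalize deviations with an L1 norm; usage during training is assumed.
import torch

def l1_orthogonality_penalty(W: torch.Tensor) -> torch.Tensor:
    gram = W @ W.T                                   # row-wise inner products
    eye = torch.eye(W.shape[0], device=W.device)
    return (gram - eye).abs().sum()                  # L1 distance to orthonormality

W = torch.randn(8, 16, requires_grad=True)
loss = l1_orthogonality_penalty(W)  # would be added to the task loss
loss.backward()
print(float(loss), W.grad.shape)
```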
Tue Aug 13 2019
Machine Learning
Regional Tree Regularization for Interpretability in Black Box Models
The lack of interpretability remains a barrier to the adoption of deep neural networks. We propose regional tree regularization, which encourages a deep model to be well approximated by several separate decision trees. Practitioners can define these regions based on domain knowledge of the relevant contexts.
Fri Oct 09 2020
Artificial Intelligence
Explaining Clinical Decision Support Systems in Medical Imaging using Cycle-Consistent Activation Maximization
Clinical decision support using deep neural networks has become a topic of growing interest. We propose a novel decision explanation scheme based on CycleGAN activation maximization, which generates high-quality visualizations of classifier decisions even on smaller data sets.