Published on Thu May 31 2018

DeepMiner: Discovering Interpretable Representations for Mammogram Classification and Explanation

Jimmy Wu, Bolei Zhou, Diondra Peck, Scott Hsieh, Vandana Dialani, Lester Mackey, Genevieve Patterson

We propose DeepMiner, a framework for discovering interpretable representations in deep neural networks. We show that many individual units in the final convolutional layer of a CNN respond strongly to diseased tissue concepts, and our method can generate explanations for CNN mammogram classifications.

Abstract

We propose DeepMiner, a framework to discover interpretable representations in deep neural networks and to build explanations for medical predictions. By probing convolutional neural networks (CNNs) trained to classify cancer in mammograms, we show that many individual units in the final convolutional layer of a CNN respond strongly to diseased tissue concepts specified by the BI-RADS lexicon. After expert annotation of the interpretable units, our proposed method is able to generate explanations for CNN mammogram classification that are correlated with ground truth radiology reports on the DDSM dataset. We show that DeepMiner not only enables better understanding of the nuances of CNN classification decisions, but may also discover new visual knowledge relevant to medical diagnosis.
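To make the probing step concrete, here is a minimal sketch (not the authors' code) of how one might record per-unit responses in the final convolutional layer and rank units for annotation. The ResNet-18 backbone, the random input batch, and the helper name unit_responses are all placeholder assumptions for illustration.

```python
import torch
import torchvision.models as models

# Hedged sketch of unit probing in the spirit of DeepMiner (not the
# authors' implementation): record each final-conv-layer unit's peak
# activation per image, then rank units by average response.
model = models.resnet18(weights=None)  # stand-in for the mammogram CNN
model.eval()

activations = {}

def hook(module, inputs, output):
    # output has shape (batch, units, h, w); keep each unit's spatial max
    activations["feat"] = output.detach().amax(dim=(2, 3))

model.layer4.register_forward_hook(hook)  # final conv block in ResNet

def unit_responses(images):
    """Return per-unit peak activations, shape (batch, num_units)."""
    with torch.no_grad():
        model(images)
    return activations["feat"]

# Toy usage: rank units over a random batch standing in for mammograms.
batch = torch.randn(8, 3, 224, 224)
mean_response = unit_responses(batch).mean(dim=0)
top_units = torch.topk(mean_response, k=10).indices
print("Units to show annotators first:", top_units.tolist())
```

In the paper's pipeline, the highest-responding units would then be shown to expert radiologists for concept annotation; the sketch only covers the ranking step.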

Mon Jul 12 2021
Computer Vision
Interpretable Mammographic Image Classification using Cased-Based Reasoning and Deep Learning
Inherently interpretable networks address the need for explanation by exposing the rationale behind each decision while maintaining equal or higher accuracy. The network first detects the clinically relevant semantic features of each image, then uses those clinical features to predict malignancy.
Tue Mar 13 2018
Computer Vision
Expert identification of visual primitives used by CNNs during mammogram classification
This work interprets the internal representations of deep neural networks trained to classify diseased tissue in 2D mammograms. Expert radiologists find that the patterns detected by the units correlate with meaningful medical phenomena, such as mass tissue and calcified vessels.
Fri Mar 19 2021
Computer Vision
XProtoNet: Diagnosis in Chest Radiography with Global and Local Explanations
Thu Jul 22 2021
Computer Vision
Explainable artificial intelligence (XAI) in deep learning-based medical image analysis
This survey presents an overview of eXplainable Artificial Intelligence (XAI) used in deep learning-based medical image analysis. A framework of XAI criteria is introduced to classify deep learning-based medical image analysis methods, and papers on XAI techniques in medical image analysis are then surveyed.
Wed May 05 2021
Artificial Intelligence
Explainable Artificial Intelligence for Human Decision-Support System in Medical Domain
This paper presents the potential of Explainable Artificial Intelligence methods for decision support in medical image analysis scenarios. By applying three types of explainable methods to the same medical image data, the aim is to improve the comprehensibility of the decisions provided by a convolutional neural network (CNN).
Mon Jul 19 2021
Computer Vision
Improving Interpretability of Deep Neural Networks in Medical Diagnosis by Investigating the Individual Units
This paper demonstrates the effectiveness of recent attribution techniques in explaining medical diagnostic decisions. By visualizing the relevant factors, it is possible to confirm that the decision criteria align with the learning strategy. A minimal sketch of such a gradient-based visualization follows this list.
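Several of the papers above rely on attribution maps to highlight the image regions driving a prediction. As a rough illustration of the idea (not any one paper's method), here is a hedged vanilla-gradient saliency sketch; the ResNet-18 model, the random input, and the helper name saliency_map are placeholder assumptions.

```python
import torch
import torchvision.models as models

# Hedged sketch of a simple gradient-based attribution map: the
# absolute gradient of the class score with respect to each pixel,
# reduced over color channels. Model and input are placeholders.
model = models.resnet18(weights=None)
model.eval()

def saliency_map(image, target_class):
    """Vanilla-gradient saliency: |d score / d pixel|, max over channels."""
    image = image.clone().requires_grad_(True)
    score = model(image.unsqueeze(0))[0, target_class]
    score.backward()
    return image.grad.abs().amax(dim=0)  # (H, W) relevance map

# Toy usage with a random image standing in for a radiograph.
img = torch.randn(3, 224, 224)
relevance = saliency_map(img, target_class=1)
print(relevance.shape)  # torch.Size([224, 224])
```

More refined attribution methods (e.g., Grad-CAM or integrated gradients) follow the same pattern of tracing a prediction back to input evidence; the sketch shows only the simplest variant.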