Published on Fri May 22 2020

Semi-supervised Medical Image Classification with Global Latent Mixing

Prashnna Kumar Gyawali, Sandesh Ghimire, Pradeep Bajracharya, Zhiyuan Li, Linwei Wang


Abstract

Computer-aided diagnosis via deep learning relies on large-scale annotated data sets, which can be costly when involving expert knowledge. Semi-supervised learning (SSL) mitigates this challenge by leveraging unlabeled data. One effective SSL approach is to regularize the local smoothness of neural functions via perturbations around single data points. In this work, we argue that regularizing the global smoothness of neural functions by filling the void in between data points can further improve SSL. We present a novel SSL approach that trains the neural network on linear mixing of labeled and unlabeled data, in both the input and latent spaces, in order to regularize different portions of the network. We evaluated the presented model on two distinct medical image data sets for semi-supervised classification of thoracic disease and skin lesions, demonstrating its improved performance over SSL with local perturbations and SSL with global mixing at the input space only. Our code is available at https://github.com/Prasanna1991/LatentMixing.
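The mixing the abstract describes is the mixup interpolation x̃ = λ·x_i + (1−λ)·x_j (and likewise for the labels), applied either to raw inputs or to intermediate hidden representations. A minimal NumPy sketch of that operation is below; the function names and the Beta parameter are illustrative choices, not the authors' exact settings (those are in the linked repository).

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def mix(a, b, lam):
    """Convex combination of two arrays with coefficient lam in [0, 1]."""
    return lam * a + (1.0 - lam) * b

def mixup_pair(x1, y1, x2, y2, alpha=0.75):
    """Mix a pair of examples and their one-hot labels.

    lam is drawn from Beta(alpha, alpha), following the standard mixup
    formulation; alpha=0.75 is an illustrative value.
    """
    lam = rng.beta(alpha, alpha)
    return mix(x1, x2, lam), mix(y1, y2, lam), lam

# Input-space mixing of two (flattened) images with one-hot labels.
x1, x2 = np.zeros(16), np.ones(16)
y1, y2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
x_mix, y_mix, lam = mixup_pair(x1, y1, x2, y2)

# Latent-space mixing: the identical operation applied to hidden
# representations h1, h2 produced by an encoder, instead of raw inputs.
h1, h2 = rng.normal(size=8), rng.normal(size=8)
h_mix = mix(h1, h2, lam)
```

In an SSL setting, the labels for unlabeled examples are typically the model's own soft predictions, so the same interpolation applies to labeled and unlabeled pairs alike.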

Related papers

Wed Nov 11 2020
Machine Learning
Interpretable and synergistic deep learning for visual explanation and statistical estimations of segmentation of disease features from medical images
Deep learning (DL) models for disease classification or segmentation from medical images are increasingly trained using transfer learning (TL) from natural-world images. We report detailed comparisons and rigorous statistical analysis of widely used DL architectures.
Wed Oct 28 2020
Machine Learning
MultiMix: Sparingly Supervised, Extreme Multitask Learning From Medical Images
Semi-supervised learning from limited quantities of labeled data has been investigated as an alternative to fully supervised approaches. We propose a novel multitask learning model, namely MultiMix, which jointly learns disease classification and anatomical segmentation.
Fri May 15 2020
Computer Vision
Semi-supervised Medical Image Classification with Relation-driven Self-ensembling Model
In medical image analysis, obtaining high-quality labels for the data is laborious and expensive. We introduce a novel sample relation consistency (SRC) paradigm to effectively exploit unlabeled data. We have conducted extensive experiments to evaluate our method on two benchmarks.
Mon Jul 22 2019
Machine Learning
Semi-Supervised Learning by Disentangling and Self-Ensembling Over Stochastic Latent Space
The success of deep learning in medical imaging is mostly achieved at the cost of a large labeled data set. Semi-supervised learning (SSL) provides a promising solution by leveraging the structure of unlabeled data. Self-ensembling is a simple approach that encourages consensus among ensemble predictions.
Wed Jan 13 2021
Machine Learning
Big Self-Supervised Models Advance Medical Image Classification
Fri Jan 25 2019
Computer Vision
Surrogate Supervision for Medical Image Analysis: Effective Deep Learning From Limited Quantities of Labeled Data
We investigate the effectiveness of a simple solution to the common problem of deep learning in medical image analysis with limited quantities of labeled training data. The underlying idea is to assign artificial labels to abundantly available unlabeled medical images and pre-train a deep neural network model for the target task.