Published on Fri Apr 23 2021

Predicting Distant Metastases in Soft-Tissue Sarcomas from PET-CT scans using Constrained Hierarchical Multi-Modality Feature Learning

Yige Peng, Lei Bi, Ashnil Kumar, Michael Fulham, Dagan Feng, Jinman Kim
Abstract

Distant metastases (DM) refer to the dissemination of a tumor, usually beyond the organ where the tumor originated. They are the leading cause of death in patients with soft-tissue sarcomas (STSs). Positron emission tomography-computed tomography (PET-CT) is regarded as the imaging modality of choice for the management of STSs, but it is difficult to determine from imaging studies which STS patients will develop metastases. 'Radiomics' refers to the extraction and analysis of quantitative features from medical images, and it has been employed to help identify such tumors. The state-of-the-art in radiomics is based on convolutional neural networks (CNNs). Most CNNs are designed for single-modality imaging data (CT or PET alone) and do not exploit the information embedded in PET-CT, which combines an anatomical and a functional imaging modality. Furthermore, most radiomic methods rely on manual input from imaging specialists for tumor delineation and for the definition and selection of radiomic features. This approach, however, may not be scalable to tumors with complex boundaries or to patients with multiple other sites of disease. We outline a new 3D CNN to help predict DM in STS patients from PET-CT data. The 3D CNN uses a constrained feature learning module and a hierarchical multi-modality feature learning module that leverage the complementary information from the two modalities to focus on semantically important regions. Our results on a public PET-CT dataset of STS patients show that multi-modal information improves the ability to identify those patients who develop DM. Furthermore, our method outperformed all other related state-of-the-art methods.
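To make the general idea concrete, below is a minimal PyTorch sketch of a two-branch 3D CNN for PET-CT: separate branches encode the co-registered CT and PET volumes, a fusion pathway combines their features at every level of the hierarchy, and a small classification head predicts DM. The module names, channel widths, and fusion strategy (channel-wise concatenation followed by a 3D convolution) are illustrative assumptions only and do not reproduce the authors' constrained or hierarchical feature learning modules.

import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # 3D convolution -> batch norm -> ReLU -> downsampling: the basic unit
    # of each branch in this sketch.
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
        nn.MaxPool3d(2),
    )


class HierarchicalPETCTNet(nn.Module):
    # Separate CT and PET encoder branches; their features are fused at
    # every level, and the previous fused features feed the next fusion step.

    def __init__(self, num_classes=2, widths=(16, 32, 64)):
        super().__init__()
        self.ct_blocks = nn.ModuleList()
        self.pet_blocks = nn.ModuleList()
        self.fuse_blocks = nn.ModuleList()
        ct_in, pet_in, fuse_in = 1, 1, 0
        for w in widths:
            self.ct_blocks.append(conv_block(ct_in, w))
            self.pet_blocks.append(conv_block(pet_in, w))
            # Fusion compresses the concatenated CT/PET (and previous fused)
            # features back to w channels.
            self.fuse_blocks.append(conv_block(2 * w + fuse_in, w))
            ct_in, pet_in, fuse_in = w, w, w
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Flatten(),
            nn.Linear(widths[-1], num_classes),  # e.g. DM vs. no DM
        )

    def forward(self, ct, pet):
        fused = None
        for ct_block, pet_block, fuse_block in zip(
                self.ct_blocks, self.pet_blocks, self.fuse_blocks):
            ct, pet = ct_block(ct), pet_block(pet)
            feats = [ct, pet] if fused is None else [ct, pet, fused]
            fused = fuse_block(torch.cat(feats, dim=1))
        return self.head(fused)


if __name__ == "__main__":
    model = HierarchicalPETCTNet()
    ct = torch.randn(1, 1, 64, 64, 64)   # co-registered CT volume
    pet = torch.randn(1, 1, 64, 64, 64)  # co-registered PET volume
    print(model(ct, pet).shape)          # torch.Size([1, 2])

Feeding each level's fused features into the next fusion block is what makes the fusion hierarchical rather than a single late concatenation of the two branches.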

Sun Jul 12 2020
Machine Learning
Multi-Modality Information Fusion for Radiomics-based Neural Architecture Search
'Radiomics' is a method that extracts mineable quantitative features from medical images. These features can then be used to determine prognosis, for example, predicting the development of distant metastases (DM). Existing radiomics methods require complex manual effort, including the design and selection of hand-crafted radiomic features.
Mon Mar 18 2019
Machine Learning
Deep Learning Enables Automatic Detection and Segmentation of Brain Metastases on Multi-Sequence MRI
Detecting and segmenting brain metastases is a tedious and time-consuming task for many radiologists. This study demonstrates automated detection and segmentation of brain metastases on multi-sequence MRI using a deep learning approach based on a fully convolutional neural network.
Fri Nov 15 2019
Computer Vision
Deep radiomic features from MRI scans predict survival outcome of recurrent glioblastoma
This paper proposes to use deep radiomic features (DRFs) to model fine-grained texture signatures in the radiomic analysis of recurrent glioblastoma (rGBM). We use DRFs to predict the survival of rGBM patients from preoperative T1-weighted MRI.
Fri Oct 05 2018
Computer Vision
Co-Learning Feature Fusion Maps from PET-CT Images of Lung Cancer
The analysis of multi-modality positron emission tomography and computed tomography (PET-CT) images for computer aided diagnosis applications requires combining the sensitivity of PET to detect abnormal regions with anatomical localization from CT. Current methods for PET-CT image analysis either process the modalities separately or fuse information from each modality based on knowledge about the task.
Tue Mar 26 2019
Artificial Intelligence
Deep segmentation networks predict survival of non-small cell lung cancer
Non-small-cell lung cancer (NSCLC) represents approximately 80-85% of lung cancer diagnoses. A CNN trained to perform the tumor segmentation task, with no other information than physician contours, identified survival-related image features with remarkable prognostic value.
Thu Mar 21 2019
Computer Vision
Deep Radiomics for Brain Tumor Detection and Classification from Multi-Sequence MRI
Glioma constitutes 80% of malignant primary brain tumors and is usually classified as high-grade glioma (HGG) or low-grade glioma (LGG). LGG tumors are less aggressive, with a slower growth rate as compared to HGG, and are responsive to therapy. We propose novel ConvNet models, which are trained from scratch, on MRI patches, slices, and multi-planar volumetric slices.