Feature learning using stacked autoencoder for shared and multimodal fusion of medical images

Vikas Singh, Nishchal K. Verma, Zeeshan Ul Islam, Yan Cui

Research output: Chapter in Book/Report/Conference proceeding › Chapter

Abstract

In recent years, deep learning has become a powerful tool for medical image analysis, mainly because of its ability to automatically extract abstract features from large training datasets. Current methods for multiple modalities are mostly conventional machine learning approaches that rely on handcrafted features, which are difficult to construct for large training sets. Deep learning, an advancement in machine learning, automatically extracts relevant features from the data. In this paper, we use a deep learning model for multimodal data, with stacked autoencoders for the multiple modalities as the basic building blocks of the network. The performance of deep learning-based models with and without multimodal fusion and shared learning is compared. The results indicate that multimodal fusion and shared learning help to improve deep learning-based medical image analysis.
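As a rough illustration of the approach the abstract describes, the sketch below builds one stacked autoencoder per modality, concatenates the learned codes through a shared fusion layer, and attaches a classifier head for the fusion-versus-no-fusion comparison the abstract mentions. This is a minimal PyTorch sketch under stated assumptions, not the authors' implementation: the two-modality setup, the layer sizes, the sigmoid activations, and all names (ModalityAutoencoder, MultimodalFusionNet) are illustrative choices.

import torch
import torch.nn as nn

class ModalityAutoencoder(nn.Module):
    # Two-layer ("stacked") autoencoder for one imaging modality.
    def __init__(self, in_dim, hid_dim, code_dim):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hid_dim), nn.Sigmoid(),
            nn.Linear(hid_dim, code_dim), nn.Sigmoid(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, hid_dim), nn.Sigmoid(),
            nn.Linear(hid_dim, in_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        code = self.encoder(x)
        return code, self.decoder(code)

class MultimodalFusionNet(nn.Module):
    # Fuses the per-modality codes through a shared layer, then classifies.
    def __init__(self, in_dims=(1024, 1024), hid_dim=256, code_dim=64,
                 shared_dim=32, n_classes=2):
        super().__init__()
        self.autoencoders = nn.ModuleList(
            [ModalityAutoencoder(d, hid_dim, code_dim) for d in in_dims])
        self.shared = nn.Sequential(
            nn.Linear(code_dim * len(in_dims), shared_dim), nn.Sigmoid())
        self.classifier = nn.Linear(shared_dim, n_classes)

    def forward(self, xs):  # xs: one flattened image tensor per modality
        outs = [ae(x) for ae, x in zip(self.autoencoders, xs)]
        codes = [code for code, _ in outs]
        recons = [recon for _, recon in outs]
        fused = self.shared(torch.cat(codes, dim=1))
        return self.classifier(fused), recons

# Usage: two modalities, e.g. image patches flattened to 1024-dim vectors.
model = MultimodalFusionNet(in_dims=(1024, 1024))
x1, x2 = torch.rand(8, 1024), torch.rand(8, 1024)
logits, recons = model([x1, x2])
# Training would typically pretrain each autoencoder with a reconstruction
# loss, then fine-tune the whole network with cross-entropy on the labels.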

Language: English (US)
Title of host publication: Advances in Intelligent Systems and Computing
Publisher: Springer Verlag
Pages: 53-66
Number of pages: 14
DOIs: https://doi.org/10.1007/978-981-13-1132-1_5
State: Published - Jan 1 2019

Publication series

Name: Advances in Intelligent Systems and Computing
Volume: 798
ISSN (Print): 2194-5357

All Science Journal Classification (ASJC) codes

  • Control and Systems Engineering
  • Computer Science (all)

Cite this

Singh, V., Verma, N. K., Ul Islam, Z., & Cui, Y. (2019). Feature learning using stacked autoencoder for shared and multimodal fusion of medical images. In Advances in Intelligent Systems and Computing (pp. 53-66). (Advances in Intelligent Systems and Computing; Vol. 798). Springer Verlag. https://doi.org/10.1007/978-981-13-1132-1_5

Feature learning using stacked autoencoder for shared and multimodal fusion of medical images. / Singh, Vikas; Verma, Nishchal K.; Ul Islam, Zeeshan; Cui, Yan.

Advances in Intelligent Systems and Computing. Springer Verlag, 2019. p. 53-66 (Advances in Intelligent Systems and Computing; Vol. 798).

Research output: Chapter in Book/Report/Conference proceeding › Chapter

Singh, V, Verma, NK, Ul Islam, Z & Cui, Y 2019, Feature learning using stacked autoencoder for shared and multimodal fusion of medical images. in Advances in Intelligent Systems and Computing. Advances in Intelligent Systems and Computing, vol. 798, Springer Verlag, pp. 53-66. https://doi.org/10.1007/978-981-13-1132-1_5
Singh V, Verma NK, Ul Islam Z, Cui Y. Feature learning using stacked autoencoder for shared and multimodal fusion of medical images. In Advances in Intelligent Systems and Computing. Springer Verlag. 2019. p. 53-66. (Advances in Intelligent Systems and Computing). https://doi.org/10.1007/978-981-13-1132-1_5
Singh, Vikas ; Verma, Nishchal K. ; Ul Islam, Zeeshan ; Cui, Yan. / Feature learning using stacked autoencoder for shared and multimodal fusion of medical images. Advances in Intelligent Systems and Computing. Springer Verlag, 2019. pp. 53-66 (Advances in Intelligent Systems and Computing).
@inbook{2234674dd5e44f99ba7ec96c02b93621,
title = "Feature learning using stacked autoencoder for shared and multimodal fusion of medical images",
abstract = "In recent years, deep learning has become a powerful tool for medical image analysis mainly because of their ability to automatically extract more abstract features from large training data. The current methods used for multiple modalities are mostly conventional machine learning, in which people use the handcrafted feature, which is very difficult to construct for large training sizes. Deep learning which is an advancement in the machine learning automatically extracts relevant features from the data. In this paper, we have used deep learning model for the multimodal data. The basic building blocks of the network are stacked autoencoder for the multiple modalities. The performance of deep learning-based models with and without multimodal fusion and shared learning are compared. The results indicates that the use of multimodal fusion and shared learning help to improve deep learning-based medical image analysis.",
author = "Vikas Singh and Verma, {Nishchal K.} and {Ul Islam}, Zeeshan and Yan Cui",
year = "2019",
month = "1",
day = "1",
doi = "10.1007/978-981-13-1132-1_5",
language = "English (US)",
series = "Advances in Intelligent Systems and Computing",
publisher = "Springer Verlag",
pages = "53--66",
booktitle = "Advances in Intelligent Systems and Computing",
address = "Germany",

}

TY - CHAP

T1 - Feature learning using stacked autoencoder for shared and multimodal fusion of medical images

AU - Singh, Vikas

AU - Verma, Nishchal K.

AU - Ul Islam, Zeeshan

AU - Cui, Yan

PY - 2019/1/1

Y1 - 2019/1/1

N2 - In recent years, deep learning has become a powerful tool for medical image analysis, mainly because of its ability to automatically extract abstract features from large training datasets. Current methods for multiple modalities are mostly conventional machine learning approaches that rely on handcrafted features, which are difficult to construct for large training sets. Deep learning, an advancement in machine learning, automatically extracts relevant features from the data. In this paper, we use a deep learning model for multimodal data, with stacked autoencoders for the multiple modalities as the basic building blocks of the network. The performance of deep learning-based models with and without multimodal fusion and shared learning is compared. The results indicate that multimodal fusion and shared learning help to improve deep learning-based medical image analysis.

AB - In recent years, deep learning has become a powerful tool for medical image analysis, mainly because of its ability to automatically extract abstract features from large training datasets. Current methods for multiple modalities are mostly conventional machine learning approaches that rely on handcrafted features, which are difficult to construct for large training sets. Deep learning, an advancement in machine learning, automatically extracts relevant features from the data. In this paper, we use a deep learning model for multimodal data, with stacked autoencoders for the multiple modalities as the basic building blocks of the network. The performance of deep learning-based models with and without multimodal fusion and shared learning is compared. The results indicate that multimodal fusion and shared learning help to improve deep learning-based medical image analysis.

UR - http://www.scopus.com/inward/record.url?scp=85051187486&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85051187486&partnerID=8YFLogxK

U2 - 10.1007/978-981-13-1132-1_5

DO - 10.1007/978-981-13-1132-1_5

M3 - Chapter

T3 - Advances in Intelligent Systems and Computing

SP - 53

EP - 66

BT - Advances in Intelligent Systems and Computing

PB - Springer Verlag

ER -