Bio-Medical Multimodal Methods for Diagnosis, Prognosis, and Outcome Prediction

A special issue of Journal of Imaging (ISSN 2313-433X). This special issue belongs to the section "Medical Imaging".

Deadline for manuscript submissions: closed (31 May 2022) | Viewed by 17237

Special Issue Editors


Prof. Dr. Elena Casiraghi
Guest Editor
Anacleto Lab. & MIPS Lab., Computer Science Department “Giovanni degli Antoni”, Università degli Studi di Milano, 20133 Milan, Italy
Interests: medical and biomedical image and signal processing; artificial intelligence; explainable artificial intelligence; digital twins; pattern recognition; data analysis; scientific visualization

Dr. Marco Notaro
Guest Editor
Computer Science Department “Giovanni degli Antoni”, Università degli Studi di Milano, 20133 Milan, Italy
Interests: computational biology; machine learning; systems biology; personalized and precision medicine; network medicine; data and text mining; biological networks; bioinformatics

Dr. Alessandro Petrini
Guest Editor
Computer Science Department “Giovanni degli Antoni”, Università degli Studi di Milano, 20133 Milan, Italy
Interests: high-performance computing (heterogeneous, accelerated, large scale); machine learning; bioinformatics; personalized and precision medicine; image processing and compression; bio-medical imaging; network medicine; systems biology

Prof. Dr. Giorgio Valentini
Guest Editor
Computer Science Department “Giovanni degli Antoni”, Università degli Studi di Milano, 20133 Milan, Italy
Interests: computational biology; machine learning; systems biology; personalized and precision medicine; network medicine; graph representation and learning

Special Issue Information

Dear Colleagues,

Over the past two years, a plethora of automated COVID-19 diagnosis and prognosis prediction models, based on the analysis of pulmonary images acquired by X-ray or computed tomography scanners, has highlighted the power of automated bio-medical image processing. Such models can improve patient care by decreasing waiting times in emergency rooms while increasing the accuracy of diagnostic and prognostic procedures.

Moreover, in the past twenty years, research interest in the development of automated bio-medical image processing has produced significant results concerning, for example, the diagnosis and malignancy assessment of tumors affecting organs such as the breast, lung, or liver. The advent of digital microscopy scanners has further motivated the development of applications for processing histological images in the field of immuno-pathological image analysis, with the aim of aiding the development of novel (personalized) chemotherapeutic drugs.

Though imaging itself carries crucial information, several state-of-the-art results in the bioinformatics field show that prediction accuracy and precision may be improved by multimodal techniques that integrate imaging with other information sources, e.g., patient demographics and clinical and/or genome-level descriptions.

This Special Issue aims to gather papers describing research efforts in the field of bio-medical image processing, with particular interest in works that improve knowledge in the field of personalized and precision medicine or that handle multimodal datasets containing imaging as one of the data inputs.


Prof. Dr. Elena Casiraghi
Dr. Marco Notaro
Dr. Alessandro Petrini
Prof. Dr. Giorgio Valentini
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Bio-medical image processing
  • Computer-aided diagnosis, prognosis, outcome prediction
  • Bio-medical systems for personalized and/or precision medicine
  • Multimodal bio-medical systems
  • Machine learning and artificial intelligence applied to biomedical imaging
  • Machine learning and artificial intelligence applied to multimodal data integration

Published Papers (6 papers)


Research


15 pages, 420 KiB  
Article
A Multimodal Ensemble Driven by Multiobjective Optimisation to Predict Overall Survival in Non-Small-Cell Lung Cancer
by Camillo Maria Caruso, Valerio Guarrasi, Ermanno Cordelli, Rosa Sicilia, Silvia Gentile, Laura Messina, Michele Fiore, Claudia Piccolo, Bruno Beomonte Zobel, Giulio Iannello, Sara Ramella and Paolo Soda
J. Imaging 2022, 8(11), 298; https://doi.org/10.3390/jimaging8110298 - 02 Nov 2022
Cited by 5 | Viewed by 1905
Abstract
Lung cancer accounts for more deaths worldwide than any other cancer disease. In order to provide patients with the most effective treatment for these aggressive tumours, multimodal learning is emerging as a new and promising field of research that aims to extract complementary information from the data of different modalities for prognostic and predictive purposes. This knowledge could be used to optimise current treatments and maximise their effectiveness. To predict overall survival, in this work, we investigate the use of multimodal learning on the CLARO dataset, which includes CT images and clinical data collected from a cohort of non-small-cell lung cancer patients. Our method allows the identification of the optimal set of classifiers to be included in the ensemble in a late fusion approach. Specifically, after training unimodal models on each modality, it selects the best ensemble by solving a multiobjective optimisation problem that maximises both the recognition performance and the diversity of the predictions. In the ensemble, the labels of each sample are assigned using the majority voting rule. As further validation, we show that the proposed ensemble outperforms the models learning a single modality, obtaining state-of-the-art results on the task at hand.
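The late-fusion scheme summarised in the abstract — train unimodal models, pick the subset of classifiers that balances recognition performance against prediction diversity, and combine them by majority vote — can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the diversity measure (mean pairwise disagreement) and the scalarised score standing in for the true multiobjective search are assumptions.

```python
from itertools import combinations
import numpy as np

def majority_vote(preds):
    # preds: (n_models, n_samples) array of 0/1 labels
    return (preds.mean(axis=0) >= 0.5).astype(int)

def disagreement(preds):
    # mean pairwise disagreement: a simple proxy for prediction diversity
    pairs = list(combinations(range(len(preds)), 2))
    return np.mean([(preds[i] != preds[j]).mean() for i, j in pairs]) if pairs else 0.0

def select_ensemble(preds, y, alpha=0.5):
    # exhaustively score every odd-sized subset (odd sizes avoid voting ties)
    # on a weighted sum of accuracy and diversity, standing in for a Pareto search
    best, best_score = None, -1.0
    for k in range(1, len(preds) + 1, 2):
        for subset in combinations(range(len(preds)), k):
            sub = preds[list(subset)]
            acc = (majority_vote(sub) == y).mean()
            score = acc + alpha * disagreement(sub)
            if score > best_score:
                best, best_score = subset, score
    return best

# toy validation-set predictions from three hypothetical unimodal models
y = np.array([0, 1, 1, 0, 1])
preds = np.array([
    [0, 1, 1, 0, 1],   # strong model
    [0, 1, 0, 0, 1],   # weaker model
    [1, 1, 1, 0, 0],   # weaker model that errs on different samples
])
chosen = select_ensemble(preds, y)
```

On this toy data the full trio is selected: it matches the best single model's accuracy while adding diversity, which is exactly the trade-off the score rewards.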

13 pages, 1009 KiB  
Article
Time Synchronization of Multimodal Physiological Signals through Alignment of Common Signal Types and Its Technical Considerations in Digital Health
by Ran Xiao, Cheng Ding and Xiao Hu
J. Imaging 2022, 8(5), 120; https://doi.org/10.3390/jimaging8050120 - 21 Apr 2022
Cited by 5 | Viewed by 1976
Abstract
Background: Despite advancements in digital health, it remains challenging to obtain precise time synchronization of multimodal physiological signals collected through different devices. Existing algorithms mainly rely on specific physiological features that restrict the use cases to certain signal types. The present study aims to complement previous algorithms and solve a niche time alignment problem when a common signal type is available across different devices. Methods: We proposed a simple time alignment approach based on the direct cross-correlation of temporal amplitudes, making it agnostic and thus generalizable to different signal types. The approach was tested on a public electrocardiographic (ECG) dataset to simulate the synchronization of signals collected from an ECG watch and an ECG patch. The algorithm was evaluated considering key practical factors, including sample durations, signal quality index (SQI), resilience to noise, and varying sampling rates. Results: The proposed approach requires a short sample duration (30 s) to operate, and demonstrates stable performance across varying sampling rates and resilience to common noise. The lowest synchronization delay achieved by the algorithm is 0.13 s with the integration of SQI thresholding. Conclusions: Our findings help improve the time alignment of multimodal signals in digital health and advance healthcare toward precise remote monitoring and disease prevention.
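The core of the approach described in the abstract — estimating the offset between two devices by cross-correlating the temporal amplitudes of a common signal type — can be sketched in a few lines. This is a minimal illustration under assumed names and parameters (a simulated "watch" and "patch" trace at roughly 250 Hz), not the authors' implementation:

```python
import numpy as np

def estimate_lag(ref, sig):
    """Return k such that ref[n + k] ≈ sig[n], i.e. how far sig lags behind ref."""
    ref = (ref - ref.mean()) / ref.std()   # standardise: amplitude-scale agnostic
    sig = (sig - sig.mean()) / sig.std()
    corr = np.correlate(ref, sig, mode="full")
    # shift the peak index so that 0 means the signals are already aligned
    return int(np.argmax(corr)) - (len(sig) - 1)

# simulate one common ECG-like channel recorded by two devices with a known offset
t = np.linspace(0, 30, 30 * 250)                    # ~30 s at ~250 Hz
ecg = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.sin(2 * np.pi * 10 * t)
true_lag = 37                                       # "patch" starts 37 samples late
watch, patch = ecg[:-true_lag], ecg[true_lag:]
estimated = estimate_lag(watch, patch)              # recovers the 37-sample offset
```

Because only temporal amplitude is used, no ECG-specific features are needed, which is what makes the approach signal-type agnostic; per the abstract, SQI thresholding would first discard low-quality segments before correlating.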

18 pages, 6116 KiB  
Article
Novel Hypertrophic Cardiomyopathy Diagnosis Index Using Deep Features and Local Directional Pattern Techniques
by Anjan Gudigar, U. Raghavendra, Jyothi Samanth, Chinmay Dharmik, Mokshagna Rohit Gangavarapu, Krishnananda Nayak, Edward J. Ciaccio, Ru-San Tan, Filippo Molinari and U. Rajendra Acharya
J. Imaging 2022, 8(4), 102; https://doi.org/10.3390/jimaging8040102 - 06 Apr 2022
Cited by 6 | Viewed by 3051
Abstract
Hypertrophic cardiomyopathy (HCM) is a genetic disorder that exhibits a wide spectrum of clinical presentations, including sudden death. Early diagnosis and intervention may avert the latter. Left ventricular hypertrophy on heart imaging is an important diagnostic criterion for HCM, and the most common imaging modality is heart ultrasound (US). The US is operator-dependent, and its interpretation is subject to human error and variability. We proposed an automated computer-aided diagnostic tool to discriminate HCM from healthy subjects on US images. We used a local directional pattern and the ResNet-50 pretrained network to classify heart US images acquired from 62 known HCM patients and 101 healthy subjects. Deep features were ranked using Student’s t-test, and the most significant feature (SigFea) was identified. An integrated index derived from the simulation was defined as 100·log10(SigFea/2) in each subject, and a diagnostic threshold value was empirically calculated as the mean of the minimum and maximum integrated indices among HCM and healthy subjects, respectively. An integrated index above a threshold of 0.5 separated HCM from healthy subjects with 100% accuracy in our test dataset.
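The index and threshold described in the abstract reduce to simple arithmetic. The sketch below uses invented feature values purely for illustration — the real SigFea values come from ranked ResNet-50 deep features, which are not reproduced here:

```python
import numpy as np

def integrated_index(sig_fea):
    # index = 100 * log10(SigFea / 2), as defined in the abstract
    return 100 * np.log10(sig_fea / 2)

# hypothetical SigFea values of the most significant deep feature per subject
hcm_features     = np.array([2.10, 2.25, 2.40])   # HCM patients
healthy_features = np.array([1.80, 1.90, 1.95])   # healthy subjects

hcm_idx = integrated_index(hcm_features)
healthy_idx = integrated_index(healthy_features)
# threshold: mean of the minimum HCM index and the maximum healthy index
threshold = (hcm_idx.min() + healthy_idx.max()) / 2
# classify two new subjects by comparing their indices against the threshold
predictions = integrated_index(np.array([2.2, 1.85])) > threshold
```

With these invented values the threshold lands near 0.5, matching the cutoff reported in the abstract, and the two new subjects fall on opposite sides of it.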

16 pages, 3707 KiB  
Article
A Pipelined Tracer-Aware Approach for Lesion Segmentation in Breast DCE-MRI
by Antonio Galli, Stefano Marrone, Gabriele Piantadosi, Mario Sansone and Carlo Sansone
J. Imaging 2021, 7(12), 276; https://doi.org/10.3390/jimaging7120276 - 14 Dec 2021
Cited by 5 | Viewed by 2197
Abstract
The recent spread of Deep Learning (DL) in medical imaging is pushing researchers to explore its suitability for lesion segmentation in Dynamic Contrast-Enhanced Magnetic-Resonance Imaging (DCE-MRI), a complementary imaging procedure increasingly used in breast-cancer analysis. Despite some promising proposed solutions, we argue that a “naive” use of DL may have limited effectiveness as the presence of a contrast agent results in the acquisition of multimodal 4D images requiring thorough processing before training a DL model. We thus propose a pipelined approach where each stage is intended to deal with or to leverage a peculiar characteristic of breast DCE-MRI data: the use of a breast-masking pre-processing to remove non-breast tissues; the use of Three-Time-Points (3TP) slices to effectively highlight contrast agent time course; the application of a motion-correction technique to deal with patient involuntary movements; the leverage of a modified U-Net architecture tailored on the problem; and the introduction of a new “Eras/Epochs” training strategy to handle the unbalanced dataset while performing a strong data augmentation. We compared our pipelined solution against some literature works. The results show that our approach outperforms the competitors by a large margin (+9.13% over our previous solution) while also showing a higher generalization ability.

12 pages, 2722 KiB  
Article
Abdominal Computed Tomography Imaging Findings in Hospitalized COVID-19 Patients: A Year-Long Experience and Associations Revealed by Explainable Artificial Intelligence
by Alice Scarabelli, Massimo Zilocchi, Elena Casiraghi, Pierangelo Fasani, Guido Giovanni Plensich, Andrea Alessandro Esposito, Elvira Stellato, Alessandro Petrini, Justin Reese, Peter Robinson, Giorgio Valentini and Gianpaolo Carrafiello
J. Imaging 2021, 7(12), 258; https://doi.org/10.3390/jimaging7120258 - 01 Dec 2021
Cited by 2 | Viewed by 2507
Abstract
The aim of this retrospective study is to assess any association between abdominal CT findings and the radiological stage of COVID-19 pneumonia, pulmonary embolism and patient outcomes. We included 158 adult hospitalized COVID-19 patients between 1 March 2020 and 1 March 2021 who underwent 206 abdominal CTs. Two radiologists reviewed all CT images. Pathological findings were classified as acute or not. A subset of patients with inflammatory pathology in ACE2 organs (bowel, biliary tract, pancreas, urinary system) was identified. The radiological stage of COVID pneumonia, pulmonary embolism, overall days of hospitalization, ICU admission and outcome were registered. Univariate statistical analysis coupled with explainable artificial intelligence (AI) techniques was used to discover associations between variables. The most frequent acute findings were bowel abnormalities (n = 58), abdominal fluid (n = 42), hematomas (n = 28) and acute urologic conditions (n = 8). According to univariate statistical analysis, pneumonia stage > 2 was significantly associated with increased frequency of hematomas, active bleeding and fluid-filled colon. The presence of at least one hepatobiliary finding was associated with all the COVID-19 stages > 0. Free abdominal fluid, acute pathologies in ACE2 organs and fluid-filled colon were associated with ICU admission; free fluid was also associated with poor patient outcomes. Hematomas and active bleeding were associated with at least a progressive stage of COVID pneumonia. The explainable AI techniques found no strong relationships between variables.

Review


17 pages, 861 KiB  
Review
Generative Adversarial Networks in Brain Imaging: A Narrative Review
by Maria Elena Laino, Pierandrea Cancian, Letterio Salvatore Politi, Matteo Giovanni Della Porta, Luca Saba and Victor Savevski
J. Imaging 2022, 8(4), 83; https://doi.org/10.3390/jimaging8040083 - 23 Mar 2022
Cited by 12 | Viewed by 4718
Abstract
Artificial intelligence (AI) is expected to have a major effect on radiology as it demonstrated remarkable progress in many clinical tasks, mostly regarding the detection, segmentation, classification, monitoring, and prediction of diseases. Generative Adversarial Networks have been proposed as one of the most exciting applications of deep learning in radiology. GANs are a new approach to deep learning that leverages adversarial learning to tackle a wide array of computer vision challenges. Brain radiology was one of the first fields where GANs found their application. In neuroradiology, indeed, GANs open unexplored scenarios, allowing new processes such as image-to-image and cross-modality synthesis, image reconstruction, image segmentation, image synthesis, data augmentation, disease progression models, and brain decoding. In this narrative review, we will provide an introduction to GANs in brain imaging, discussing the clinical potential of GANs, future clinical applications, as well as pitfalls that radiologists should be aware of.
