Quantitative and Intelligent Analysis of Medical Imaging

A special issue of Diagnostics (ISSN 2075-4418). This special issue belongs to the section "Machine Learning and Artificial Intelligence in Diagnostics".

Deadline for manuscript submissions: closed (30 November 2022) | Viewed by 15,548

Special Issue Editor


Prof. Dr. Magalie Viallon
Guest Editor
1. UJM-Saint-Etienne, INSA, CNRS UMR 5520, INSERM U1206, CREATIS, University of Lyon, F-42023 Saint-Etienne, France
2. UCBL, INSA, UJM-Saint Etienne, CNRS UMR 5520, INSERM U1206, CREATIS, University of Lyon, F-69100 Lyon, France
Interests: magnetic resonance imaging; radiology; cardiology; sports; nutrition

Special Issue Information

Dear Colleagues,

Medical imaging allows the observation of the internal characteristics of a body through images for clinical analysis and medical interventions.

This field is undergoing rapid development, improving both the quality of the images and the number of observable features. Moreover, its democratization is making medical image data widely available across almost all pathologies. However, it remains crucial to extract useful and robust information for targeted medical analysis and decision-making. Faced with this influx of data, the processing methods that perform this extraction must be as automatic as possible, robust, and aligned with physicians' needs, so as to improve the efficiency of medical analysis.

Nevertheless, although hundreds of articles describing semi-automatic or automatic quantification methods are published each year, they unfortunately lack a reference implementation (i.e., source code). Authors and vendors are by nature reluctant to share their algorithms, because there is simply no practical way to (1) share the algorithms and (2) evaluate their performance fairly on a common database built from realistic data derived from routine examinations. As a consequence, and as many researchers have pointed out, the available methods, from simple to sophisticated algorithms, are "not as objective as one might think": they require user input or final supervision to discard artifact and/or noise voxels, i.e., useless information.

As a consequence, despite the huge number of papers describing over-performing isolated solutions and a growing number of black-box services, a large community of physicians and clinical researchers still lacks satisfactory automatic quantification tools to segment the anatomy and extract quantitative indicators, with quality control available to establish their advances and limitations. State-of-the-art solutions are published but remain unavailable and unsuitable for worldwide deployment in the clinical (or clinical research) environment, where they could be tested in broader patient populations, improved, and made rapidly available to the entire physician and developer community. Also missing are widely available clinical databases and common numerical datasets that would enable the community to test and evaluate new algorithms easily and rapidly, especially in a world of limited resources, where an urgent need therefore emerges for more durable and coordinated research.

In this Special Issue, I invite all colleagues and researchers who share these concerns, and who are developing approaches to address them, to submit papers describing their solutions for more reproducible and useful research that can be quickly transferred to clinical research.

The objective of this Special Issue is to collect papers of paramount importance for our future that offer solutions to this critical need: (i) methods that can be used on image data acquired with any scanner, independently of the manufacturer, and that address the abovementioned concerns; (ii) intelligent methods that allow unified performance testing on numerical datasets while preserving confidentiality; (iii) smart ways to create shared databases with expert-referenced knowledge that the community could feed and use to demonstrate the performance of new algorithms; and (iv) computer processing methods able to enrich diagnosis by extracting objective and clinically useful information from medical images.

Prof. Dr. Magalie Viallon
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Diagnostics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Image segmentation
  • Image registration
  • Data mining
  • Reproducible and open research

Published Papers (8 papers)


Research

19 pages, 24308 KiB  
Article
Smart Visualization of Medical Images as a Tool in the Function of Education in Neuroradiology
by Aleksandar Simović, Maja Lutovac-Banduka, Snežana Lekić and Valentin Kuleto
Diagnostics 2022, 12(12), 3208; https://doi.org/10.3390/diagnostics12123208 - 17 Dec 2022
Viewed by 1594
Abstract
The smart visualization of medical images (SVMI) model is based on multi-detector computed tomography (MDCT) data sets and can provide a clearer view of changes in the brain, such as tumors (expansive changes), bleeding, and ischemia, on native imaging (i.e., a non-contrast MDCT scan). The SVMI method provides a more precise representation of the brain image by hiding pixels that carry no information and by rescaling and coloring the range of pixels essential for detecting and visualizing the disease. In addition, SVMI can be used to avoid additional exposure of patients to ionizing radiation and the allergic reactions that can follow contrast media administration. Results of the SVMI model were compared with the final diagnosis of the disease after additional diagnostics and confirmation by neuroradiologists, highly trained physicians with many years of experience. The presented SVMI model can optimize the engagement of material, medical, and human resources and has the potential for general application in medical training, education, and clinical research.
(This article belongs to the Special Issue Quantitative and Intelligent Analysis of Medical Imaging)
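
The pixel-hiding and rescaling step described in the abstract amounts to an intensity windowing operation. A minimal NumPy sketch follows; the Hounsfield-unit window bounds and the toy values are illustrative only and are not the thresholds or color mapping used by the SVMI model:

```python
import numpy as np

def window_and_mask(hu, lo=0, hi=80):
    """Map HU values in [lo, hi] to [0, 255] and hide everything else.

    The window bounds here are illustrative, not those of SVMI.
    """
    hu = np.asarray(hu, dtype=float)
    vis = np.clip((hu - lo) / (hi - lo), 0.0, 1.0) * 255.0
    vis[(hu < lo) | (hu > hi)] = 0  # suppress pixels carrying no information
    return vis.astype(np.uint8)

# toy 1-D "image": air (-1000 HU), brain tissue (~40 HU), bone (1000 HU)
print(window_and_mask([-1000, 40, 1000]))  # brain tissue survives; air and bone are hidden
```

A color lookup table would then be applied to the surviving range, as the abstract describes.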

14 pages, 1848 KiB  
Article
Quantification of Significant Aortic Stenosis by Echocardiography versus Four-Dimensional Cardiac Computed Tomography: A Multi-Modality Imaging Study
by Tom Kai Ming Wang, Ossama K. Abou Hassan, Zoran B. Popović, Brian P. Griffin and Luis Leonardo Rodriguez
Diagnostics 2022, 12(12), 3106; https://doi.org/10.3390/diagnostics12123106 - 09 Dec 2022
Cited by 1 | Viewed by 1116
Abstract
Transthoracic echocardiography (TTE) grading of aortic stenosis (AS) is challenging when parameters are discrepant, and four-dimensional cardiac computed tomography (4D-CCT) is increasingly utilized for transcatheter intervention workup. We compared TTE and 4D-CCT measures contributing to AS quantification. AS patients (n = 80, age 86 ± 10 years, 71% men) referred for transcatheter replacement in 2014–2017 were retrospectively studied, 20 each with high-gradient AS (HG-AS), classical and paradoxical low-flow low-gradient AS (CLFLG-AS and PLFLG-AS), and normal-flow low-gradient AS (NFLG-AS). Correlation and Bland–Altman analyses were performed between TTE and 4D-CCT parameters. There were moderate-to-high TTE versus 4D-CCT correlations for left ventricular volumes, function, mass, and outflow tract dimensions (r = 0.51–0.88), though values were mostly significantly higher by 4D-CCT (p < 0.001). Compared with 4D-CCT planimetry of aortic valve area (AVA), TTE estimates had modest correlation (r = 0.37–0.43) but were significantly lower (by 0.15–0.32 cm2). The 4D-CCT estimate of left ventricular stroke volume index (LVSVi) led to significant reclassification of the AS subtype defined by TTE. In conclusion, 4D-CCT quantified values were higher than TTE for the left ventricle and AVA, and the AS subtype was reclassified based on LVSVi by 4D-CCT, warranting further research to establish its clinical implications and optimal thresholds in severe AS management.
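
The Bland–Altman analysis mentioned above reduces each pair of methods to a bias (mean difference) and 95% limits of agreement. A minimal sketch; the paired AVA values below are made up for illustration and are not the study's data:

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman agreement between two measurement methods.

    Returns the mean difference (bias) and the 95% limits of agreement.
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)  # 95% limits assume roughly normal differences
    return bias, bias - half_width, bias + half_width

# hypothetical paired AVA measurements (cm^2): TTE vs. 4D-CCT planimetry
tte = [0.70, 0.80, 0.90, 1.00, 0.75]
cct = [0.90, 1.00, 1.20, 1.20, 1.00]
bias, lo, hi = bland_altman(tte, cct)
print(f"bias={bias:.2f} cm^2, LoA=({lo:.2f}, {hi:.2f})")  # negative bias: TTE reads lower
```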

16 pages, 4116 KiB  
Article
Quantitative Airway Assessment of Diffuse Idiopathic Pulmonary Neuroendocrine Cell Hyperplasia (DIPNECH) on CT as a Novel Biomarker
by Cormac O’Brien, John A. Duignan, Margaret Gleeson, Orla O’Carroll, Alessandro N. Franciosi, Dermot O’Toole, Aurelie Fabre, Rachel K. Crowley, Cormac McCarthy, Jonathan D. Dodd and David J. Murphy
Diagnostics 2022, 12(12), 3096; https://doi.org/10.3390/diagnostics12123096 - 08 Dec 2022
Cited by 2 | Viewed by 1303
Abstract
Objectives: Diffuse idiopathic pulmonary neuroendocrine cell hyperplasia (DIPNECH) occurs due to abnormal proliferation of pulmonary neuroendocrine cells. We hypothesized that quantitative analysis of airway features on chest CT may reveal differences from matched controls, which could ultimately provide an imaging biomarker. Methods: A retrospective quantitative analysis of chest CTs in patients with DIPNECH and age-matched controls was carried out using semi-automated post-processing software. Paired segmental airway and artery diameters were measured for each bronchopulmonary segment, and the airway:artery (AA) ratio, airway wall thickness:artery ratio (AWTA ratio) and wall area percentage (WAP) were calculated. Nodule number, size, shape and location were recorded. Correlation between CT measurements and pulmonary function testing was performed. Results: 16 DIPNECH and 16 control subjects were analysed (all female, mean age 61.7 ± 11.8 years), giving a combined total of 425 bronchopulmonary segments. The mean AWTA ratio, AA ratio and WAP for the DIPNECH group were 0.57, 1.18 and 68.8%, respectively, compared with 0.38, 1.03 and 58.3% in controls (p < 0.001, <0.001 and 0.03, respectively). DIPNECH patients had more nodules than controls (22.4 ± 32.6 vs. 3.6 ± 3.6, p = 0.03). The AA ratio correlated with FVC (R2 = 0.47, p = 0.02). A multivariable model incorporating nodule number, AA ratio and AWTA ratio demonstrated good performance for discriminating DIPNECH from controls (AUC 0.971; 95% CI: 0.925–1.0). Conclusions: Quantitative CT airway analysis in patients with DIPNECH demonstrates increased airway wall thickness and airway:artery ratio compared to controls. Advances in knowledge: Quantitative CT measurement of airway wall thickening offers a potential imaging biomarker for treatment response.
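
The three per-segment indices are simple geometric ratios of the measured diameters. A plain sketch under the usual circular-cross-section assumption; the diameters are hypothetical, and the paper's software may use different measurement conventions:

```python
import math

def airway_metrics(lumen_d, outer_d, artery_d):
    """AA ratio, AWTA ratio, and wall area percentage from diameters (mm)."""
    wall_thickness = (outer_d - lumen_d) / 2.0
    aa_ratio = lumen_d / artery_d             # airway:artery ratio
    awta_ratio = wall_thickness / artery_d    # wall thickness:artery ratio
    lumen_area = math.pi * (lumen_d / 2.0) ** 2
    total_area = math.pi * (outer_d / 2.0) ** 2
    wap = 100.0 * (total_area - lumen_area) / total_area  # wall area %
    return aa_ratio, awta_ratio, wap

# hypothetical segment: 2.0 mm lumen, 3.5 mm outer airway, 2.2 mm artery
aa, awta, wap = airway_metrics(lumen_d=2.0, outer_d=3.5, artery_d=2.2)
```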

13 pages, 3787 KiB  
Article
Effect of Coupling Medium on Penetration Depth in Microwave Medical Imaging
by Wenyi Shao and Beibei Zhou
Diagnostics 2022, 12(12), 2906; https://doi.org/10.3390/diagnostics12122906 - 22 Nov 2022
Viewed by 929
Abstract
In microwave medical imaging, the human skin reflects most of the microwave energy due to the impedance mismatch between the air and the body. As a result, only a small portion of the microwave energy can enter the body and serve a medical purpose. One solution to this issue is to use a coupling (or matching) medium, which can reduce unwanted reflections at the skin and meanwhile improve spatial imaging resolution. Several types of fluids were measured in this work for their dielectric properties between 500 MHz and 13.5 GHz. Measurements were performed with a Keysight programmable network analyzer (PNA) and a dielectric probe kit, and the dielectric constant and conductivity of the fluids are presented. Quantitative computations were then carried out to determine the attenuation due to reflection at the skin and to losses in each coupling medium, based on the measured dielectric values. Finally, electromagnetic simulations verified that the coupling liquid allows more microwave energy to enter the body, enabling a more efficient medical examination.
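
The impedance-mismatch effect can be illustrated with the standard normal-incidence reflection coefficient for a planar interface between two low-loss dielectrics, where the intrinsic impedance scales as 1/sqrt(eps_r). The permittivity values below are rough textbook figures, not the paper's measured data:

```python
import math

def power_reflectance(eps1, eps2):
    """Fraction of incident power reflected at a planar interface
    between two low-loss dielectrics (normal incidence, lossless approximation)."""
    gamma = (math.sqrt(eps1) - math.sqrt(eps2)) / (math.sqrt(eps1) + math.sqrt(eps2))
    return gamma ** 2

# air (eps_r ~ 1) vs. a water-based coupling medium (eps_r ~ 40), each against skin (eps_r ~ 38)
print(power_reflectance(1.0, 38.0))   # large mismatch: over half the power is reflected
print(power_reflectance(40.0, 38.0))  # near-matched: almost no reflection
```

A near-matched coupling medium thus lets far more power cross the skin boundary, which is the effect the paper quantifies with measured dielectric data.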

9 pages, 4004 KiB  
Article
Quantitative Measurement of Pneumothorax Using Artificial Intelligence Management Model and Clinical Application
by Dohun Kim, Jae-Hyeok Lee, Si-Wook Kim, Jong-Myeon Hong, Sung-Jin Kim, Minji Song, Jong-Mun Choi, Sun-Yeop Lee, Hongjun Yoon and Jin-Young Yoo
Diagnostics 2022, 12(8), 1823; https://doi.org/10.3390/diagnostics12081823 - 29 Jul 2022
Cited by 6 | Viewed by 1825
Abstract
Artificial intelligence (AI) techniques can be a solution for delayed or misdiagnosed pneumothorax. This study developed a deep-learning-based AI model to estimate the pneumothorax amount on a chest radiograph and applied it to a treatment algorithm developed by experienced thoracic surgeons. A U-Net performed semantic segmentation and classification of pneumothorax and non-pneumothorax areas. The pneumothorax amount was measured using chest computed tomography (volume ratio, gold standard) and chest radiographs (area ratio, true label) and calculated using the AI model (area ratio, predicted label). Each value was compared and analyzed based on clinical outcomes. The study included 96 patients, of whom 67 comprised the training set and the others the test set. The AI model showed an accuracy of 97.8%, a sensitivity of 69.2%, a negative predictive value of 99.1%, and a Dice similarity coefficient of 61.8%. In the test set, the average amount of pneumothorax was 15%, 16%, and 13% in the gold standard, predicted, and true labels, respectively. The predicted label was not significantly different from the gold standard (p = 0.11) but inferior to the true label (difference in MAE: 3.03%). The amount of pneumothorax in thoracostomy patients was 21.6% in predicted cases and 18.5% in true cases.
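
The Dice similarity coefficient reported above is the standard overlap measure between a predicted and a reference binary mask. A minimal sketch on tiny toy masks (the paper computes it on full chest-radiograph segmentations):

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0  # both empty: perfect agreement

pred  = [[1, 1, 0], [0, 1, 0]]
truth = [[1, 0, 0], [0, 1, 1]]
print(dice(pred, truth))  # → 0.666... (2 overlapping pixels, 3 + 3 total)
```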

17 pages, 2702 KiB  
Article
Deep Learning and Domain-Specific Knowledge to Segment the Liver from Synthetic Dual Energy CT Iodine Scans
by Usman Mahmood, David D. B. Bates, Yusuf E. Erdi, Lorenzo Mannelli, Giuseppe Corrias and Christopher Kanan
Diagnostics 2022, 12(3), 672; https://doi.org/10.3390/diagnostics12030672 - 10 Mar 2022
Cited by 2 | Viewed by 2589
Abstract
We map single energy CT (SECT) scans to synthetic dual-energy CT (synth-DECT) material density iodine (MDI) scans using deep learning (DL) and demonstrate their value for liver segmentation. A 2D pix2pix (P2P) network was trained on 100 abdominal DECT scans to infer synth-DECT MDI scans from SECT scans. The source and target domain were paired with DECT monochromatic 70 keV and MDI scans. The trained P2P algorithm then transformed 140 public SECT scans to synth-DECT scans. We split 131 scans into 60% train, 20% tune, and 20% held-out test to train four existing liver segmentation frameworks. The remaining nine low-dose SECT scans tested system generalization. Segmentation accuracy was measured with the dice coefficient (DSC). The DSC per slice was computed to identify sources of error. With synth-DECT (and SECT) scans, an average DSC score of 0.93±0.06 (0.89±0.01) and 0.89±0.01 (0.81±0.02) was achieved on the held-out and generalization test sets. Synth-DECT-trained systems required less data to perform as well as SECT-trained systems. Low DSC scores were primarily observed around the scan margin or due to non-liver tissue or distortions within ground-truth annotations. In general, training with synth-DECT scans resulted in improved segmentation performance with less data.

15 pages, 3295 KiB  
Article
Convolutional Neural Network-Based Automatic Analysis of Chest Radiographs for the Detection of COVID-19 Pneumonia: A Prioritizing Tool in the Emergency Department, Phase I Study and Preliminary “Real Life” Results
by Davide Tricarico, Marco Calandri, Matteo Barba, Clara Piatti, Carlotta Geninatti, Domenico Basile, Marco Gatti, Massimiliano Melis and Andrea Veltri
Diagnostics 2022, 12(3), 570; https://doi.org/10.3390/diagnostics12030570 - 23 Feb 2022
Cited by 4 | Viewed by 2153
Abstract
The aim of our study is the development of an automatic tool for the prioritization of the COVID-19 diagnostic workflow in the emergency department by analyzing chest X-rays (CXRs). The Convolutional Neural Network (CNN)-based method we propose has been tested retrospectively on a single-center set of 542 CXRs evaluated by experienced radiologists. The SARS-CoV-2-positive dataset (n = 234) consists of CXRs collected between March and April 2020, with the COVID-19 infection confirmed by an RT-PCR test within 24 h. The SARS-CoV-2-negative dataset (n = 308) includes CXRs from 2019, therefore prior to the pandemic. For each image, the CNN computes COVID-19 risk indicators, identifying COVID-19 cases and prioritizing the urgent ones. After installing the software in the hospital RIS, a preliminary comparison between local daily COVID-19 cases and the predicted risk indicators for 2918 CXRs in the same period was performed. Significant improvements were obtained for both prioritization and identification using the proposed method. Mean Average Precision (MAP) increased (p < 1.21 × 10⁻²¹) from 43.79% with random sorting to 71.75% with our method. CNN sensitivity was 78.23%, higher than the radiologists' 61.1%; specificity was 64.20%. In the real-life setting, the method's risk indicators showed a correlation of 0.873 with local daily COVID-19 cases. The proposed CNN-based system effectively prioritizes CXRs according to COVID-19 risk in an experimental setting; preliminary real-life results revealed high concordance with local pandemic incidence.
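
The sensitivity and specificity figures quoted above are the usual confusion-matrix ratios. A minimal sketch; the counts below are hypothetical, chosen only to roughly match the reported 78.23% / 64.20% given the 234 positive and 308 negative CXRs, and are not the paper's actual confusion matrix:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)  # fraction of true positives detected
    specificity = tn / (tn + fp)  # fraction of true negatives rejected
    return sensitivity, specificity

# hypothetical counts over 234 positives and 308 negatives
sens, spec = sens_spec(tp=183, fn=51, tn=198, fp=110)
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}")
```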

15 pages, 2751 KiB  
Article
Deep-Learning Segmentation of Epicardial Adipose Tissue Using Four-Chamber Cardiac Magnetic Resonance Imaging
by Pierre Daudé, Patricia Ancel, Sylviane Confort Gouny, Alexis Jacquier, Frank Kober, Anne Dutour, Monique Bernard, Bénédicte Gaborit and Stanislas Rapacchi
Diagnostics 2022, 12(1), 126; https://doi.org/10.3390/diagnostics12010126 - 06 Jan 2022
Cited by 8 | Viewed by 2429
Abstract
In magnetic resonance imaging (MRI), epicardial adipose tissue (EAT) overload often remains overlooked due to the tedious manual contouring of images. Automated four-chamber EAT area quantification was proposed, leveraging deep-learning segmentation using multi-frame fully convolutional networks (FCN). The investigation involved 100 subjects, comprising healthy, obese, and diabetic patients, who underwent 3T cardiac cine MRI. An optimized U-Net and an FCN (denoted FCNB) were trained on three consecutive cine frames to segment the central frame using a Dice loss. Networks were trained using 4-fold cross-validation (n = 80) and evaluated on an independent dataset (n = 20). Segmentation performance was compared to inter- and intra-observer bias using the Dice similarity coefficient (DSC) and relative surface error (RSE). Both systolic and diastolic four-chamber areas were correlated with total EAT volume (r = 0.77 and 0.74, respectively). The networks' performance was equivalent to the inter-observer bias (EAT: DSCInter = 0.76, DSCU-Net = 0.77, DSCFCNB = 0.76). U-Net outperformed FCNB on all metrics (p < 0.0001). Eventually, the proposed multi-frame U-Net provided automated EAT area quantification with 14.2% precision for the clinically relevant upper three quarters of the EAT area range, scaling patients' risk of EAT overload with 70% accuracy. Exploiting the multi-frame U-Net in standard cine MRI provided automated EAT quantification over a wide range of EAT quantities. The method is made available to the community through an FSLeyes plugin.
