Clinical Diagnosis Using Deep Learning

A special issue of Diagnostics (ISSN 2075-4418). This special issue belongs to the section "Machine Learning and Artificial Intelligence in Diagnostics".

Deadline for manuscript submissions: closed (1 July 2021) | Viewed by 63093

Special Issue Editor


Dr. Leonid Chepelev
Guest Editor
Department of Radiology, School of Medicine, Stanford University, Stanford, CA, USA
Interests: medical 3D printing and advanced visualization; artificial intelligence; semantic web technologies; bioinformatics; medical informatics

Special Issue Information

Dear Colleagues,

Numerous clinically validated applications of deep learning have already revolutionized, and continue to disrupt at an ever-accelerating pace, key facets of medical diagnostics, including laboratory, clinical-assessment-based, and image-based solutions. For example, in clinical medicine, deep learning promises novel combinations of treatment protocols that ensure the most relevant and cost-effective workups and precisely target therapies in a personalized manner. In medical imaging, deep learning holds the potential to make existing imaging modalities, ranging from optical analysis to radiographic imaging and nuclear medicine, safer, faster, and more affordable. Supporting and augmenting the work of numerous clinical specialists, image-based deep learning is currently used for i) image acquisition and augmentation, ii) automated semantic segmentation, and iii) image classification. While numerous clinically validated examples of algorithms in each of these categories exist, combinations of these applications have further potential to uncover and facilitate entirely new diagnostic approaches and instruments.

The relatively straightforward translation of consumer-grade deep learning algorithms has so far been tremendously helpful in facilitating numerous proof-of-principle studies. To enable productive integration into medical diagnostics, the scope and clinical relevance of such applications require rigorous debate and meticulous clinical reality checks in all facets of implementation. This Special Issue is intended to lay the foundation for clinical deep learning applications with a focus on case studies in image analysis, to discuss several example applications of deep learning diagnostics with an emphasis on enabling personalized medicine, and to provide an overview of frameworks for the broader integration of disparate deep learning algorithms into clinical practice.

Dr. Leonid Chepelev
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, authors can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Diagnostics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Deep Learning
  • Medical Informatics
  • Automated Diagnostics
  • Software Integration
  • Clinical Validation
  • Medical Image Augmentation
  • Advanced Visualization
  • Personalized Medicine

Published Papers (18 papers)

Research

12 pages, 1851 KiB  
Article
Machine Learning-Based Modeling of Ovarian Response and the Quantitative Evaluation of Comprehensive Impact Features
by Liu Liu, Fujin Shen, Hua Liang, Zhe Yang, Jing Yang and Jiao Chen
Diagnostics 2022, 12(2), 492; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12020492 - 14 Feb 2022
Cited by 4 | Viewed by 1996
Abstract
An appropriate ovarian response to the controlled ovarian stimulation strategy is the premise for a good outcome of the in vitro fertilization cycle. With the boom in artificial intelligence, machine learning is becoming a popular and promising approach for tailoring a controlled ovarian stimulation strategy. Most current machine learning-based tailoring strategies aim to broadly classify the controlled ovarian stimulation outcome and lack the capacity to precisely predict the outcome and evaluate the impact features. Based on a clinical cohort of 1365 women and two machine learning methods, an artificial neural network and support vector regression, a regression model predicting the number of oocytes retrieved is trained, validated, and selected. Given the proposed model, an index called the normalized mean impact value is defined and calculated to reflect the importance of each impact feature. The proposed models estimate the number of oocytes retrieved with high precision, with a regression coefficient of 0.882 and 89.84% of the instances having a prediction deviation ≤ 5. Among the impact features, the antral follicle count has the highest importance, followed by the E2 level on the human chorionic gonadotropin day, age, and the anti-Müllerian hormone level, with normalized mean impact values > 0.3. Based on the proposed model, the prognostic results for ovarian response can be predicted, enabling scientific clinical decision support for customized controlled ovarian stimulation strategies and, eventually, better in vitro fertilization outcomes. Full article
(This article belongs to the Special Issue Clinical Diagnosis Using Deep Learning)
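
As a rough illustration of the normalized mean impact value described in the abstract above, the sketch below perturbs each input feature of a trained regressor and normalizes the resulting output shifts. The ±10% perturbation, the synthetic data, and the scikit-learn SVR stand-in are assumptions for illustration, not the authors' implementation.

```python
# Sketch: mean-impact-value (MIV) style feature importance for a trained
# regressor, normalized so the most influential feature equals 1.
import numpy as np
from sklearn.svm import SVR

def normalized_miv(model, X, delta=0.10):
    miv = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        X_up, X_down = X.copy(), X.copy()
        X_up[:, j] *= (1.0 + delta)     # increase feature j by 10%
        X_down[:, j] *= (1.0 - delta)   # decrease feature j by 10%
        miv[j] = np.mean(model.predict(X_up) - model.predict(X_down))
    miv = np.abs(miv)
    return miv / miv.max()

# Synthetic data standing in for the clinical cohort.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = 2 * X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.1, size=200)
model = SVR().fit(X, y)
print(normalized_miv(model, X))
```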

15 pages, 3450 KiB  
Article
Weakly Supervised Ternary Stream Data Augmentation Fine-Grained Classification Network for Identifying Acute Lymphoblastic Leukemia
by Yunfei Liu, Pu Chen, Junran Zhang, Nian Liu and Yan Liu
Diagnostics 2022, 12(1), 16; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12010016 - 22 Dec 2021
Cited by 5 | Viewed by 2763
Abstract
Due to the high incidence of acute lymphoblastic leukemia (ALL) worldwide and its rapid and fatal progression, timely microscopy screening of peripheral blood smears is essential for the rapid diagnosis of ALL. However, manual screening is time-consuming and tedious and may lead to missed diagnosis or misdiagnosis due to subjective bias; on the other hand, artificially intelligent diagnostic algorithms are constrained by the limited sample size of the data and are prone to overfitting, which limits their application. Conventional data augmentation is commonly adopted to expand the amount of training data, avoid overfitting, and improve the performance of deep models. However, random data augmentation, such as random image cropping or erasing, rarely reflects what realistically occurs in a specific task and may instead introduce substantial background noise that alters the actual data distribution, thereby degrading model performance. In this paper, to assist in the early and accurate diagnosis of acute lymphoblastic leukemia, we present a ternary stream-driven weakly supervised data augmentation classification network (WT-DFN) to identify lymphoblasts at a fine-grained scale using microscopic images of peripheral blood smears. Concretely, for each training image, we first generate attention maps that represent the distinguishable parts of the target by weakly supervised learning. Then, guided by these attention maps, we produce the other two streams via attention cropping and attention erasing to obtain fine-grained distinctive features. The proposed WT-DFN improves classification accuracy in two ways: (1) the model sees details, since cropping the attention regions provides the accurate location of the object, ensuring that the model looks at the object more closely and discovers certain detailed features; (2) the model sees more, since the attention-erasing mechanism forces it to extract features from additional discriminative parts. Validation suggests that the proposed method is capable of addressing the high intraclass variance within lymphocyte classes, as well as the low interclass variance between lymphoblasts and other normal or reactive lymphocytes. The proposed method yields the best performance among competitive methods on both the public dataset and the real clinical dataset. Full article
(This article belongs to the Special Issue Clinical Diagnosis Using Deep Learning)
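
The attention-guided cropping and erasing streams described above can be pictured with the following minimal PyTorch sketch; the tensor shapes, the 0.5 threshold, and the upsampling choices are assumptions, not the WT-DFN authors' settings.

```python
# Sketch of the two attention-guided streams: crop the high-attention region
# (to see details) and erase it (to force the model to find other parts).
import torch
import torch.nn.functional as F

def attention_crop_and_erase(image, attn, thresh=0.5):
    # image: (C, H, W); attn: (h, w) attention map from the backbone.
    attn = F.interpolate(attn[None, None], size=image.shape[1:],
                         mode="bilinear", align_corners=False)[0, 0]
    attn = (attn - attn.min()) / (attn.max() - attn.min() + 1e-8)
    mask = attn > thresh
    ys, xs = torch.nonzero(mask, as_tuple=True)
    y0, y1 = int(ys.min()), int(ys.max()) + 1
    x0, x1 = int(xs.min()), int(xs.max()) + 1
    cropped = F.interpolate(image[None, :, y0:y1, x0:x1],
                            size=image.shape[1:], mode="bilinear",
                            align_corners=False)[0]      # "see details" stream
    erased = image * (~mask).float()                     # "see more" stream
    return cropped, erased

img = torch.rand(3, 224, 224)
attn_map = torch.rand(14, 14)
crop_stream, erase_stream = attention_crop_and_erase(img, attn_map)
```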

15 pages, 3641 KiB  
Article
Automated Final Lesion Segmentation in Posterior Circulation Acute Ischemic Stroke Using Deep Learning
by Riaan Zoetmulder, Praneeta R. Konduri, Iris V. Obdeijn, Efstratios Gavves, Ivana Išgum, Charles B.L.M. Majoie, Diederik W.J. Dippel, Yvo B.W.E.M. Roos, Mayank Goyal, Peter J. Mitchell, Bruce C. V. Campbell, Demetrius K. Lopes, Gernot Reimann, Tudor G. Jovin, Jeffrey L. Saver, Keith W. Muir, Phil White, Serge Bracard, Bailiang Chen, Scott Brown, Wouter J. Schonewille, Erik van der Hoeven, Volker Puetz and Henk A. Marquering
Diagnostics 2021, 11(9), 1621; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics11091621 - 04 Sep 2021
Cited by 2 | Viewed by 3212
Abstract
Final lesion volume (FLV) is a surrogate outcome measure in anterior circulation stroke (ACS). In posterior circulation stroke (PCS), this relation is understudied, plausibly due to a lack of methods that automatically quantify FLV. The applicability of deep learning approaches to PCS is limited by its lower incidence compared to ACS. We evaluated strategies for developing a convolutional neural network (CNN) for PCS lesion segmentation using image data from both ACS and PCS patients. We included follow-up non-contrast computed tomography scans of 1018 patients with ACS and 107 patients with PCS. First, to assess whether an ACS lesion segmentation model generalizes to PCS, a CNN was trained on ACS data (ACS-CNN). Second, to evaluate the performance of including only PCS patients, a CNN was trained on PCS data. Third, to evaluate the performance of combining the datasets, a CNN was trained on both datasets. Finally, to evaluate the performance of transfer learning, the ACS-CNN was fine-tuned using PCS patients. The transfer learning strategy outperformed the other strategies in volume agreement, with an intra-class correlation of 0.88 (95% CI: 0.83–0.92) vs. 0.55 to 0.83, and in lesion detection rate, with 87% vs. 41–77% for the other strategies. Hence, transfer learning improved the FLV quantification and detection rate of PCS lesions compared to the other strategies. Full article
(This article belongs to the Special Issue Clinical Diagnosis Using Deep Learning)
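
A minimal sketch of the transfer-learning strategy that performed best, assuming a tiny stand-in segmentation network, synthetic tensors in place of the follow-up NCCT data, and an illustrative learning rate; the authors' actual CNN and training setup are not reproduced here.

```python
import torch
import torch.nn as nn

# Tiny stand-in segmentation network; the study's CNN is far larger.
seg_net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
# In practice the ACS-trained weights would be loaded here, e.g.:
# seg_net.load_state_dict(torch.load("acs_cnn.pt"))

optimizer = torch.optim.Adam(seg_net.parameters(), lr=1e-5)  # low LR for fine-tuning
criterion = nn.BCEWithLogitsLoss()

# Synthetic batch standing in for PCS follow-up NCCT slices and FLV masks.
ncct = torch.rand(4, 1, 128, 128)
flv_mask = (torch.rand(4, 1, 128, 128) > 0.9).float()

seg_net.train()
for step in range(10):                      # fine-tune on the small PCS set
    optimizer.zero_grad()
    loss = criterion(seg_net(ncct), flv_mask)
    loss.backward()
    optimizer.step()
```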

11 pages, 10598 KiB  
Article
Generating Virtual Short Tau Inversion Recovery (STIR) Images from T1- and T2-Weighted Images Using a Conditional Generative Adversarial Network in Spine Imaging
by Johannes Haubold, Aydin Demircioglu, Jens Matthias Theysohn, Axel Wetter, Alexander Radbruch, Nils Dörner, Thomas Wilfried Schlosser, Cornelius Deuschl, Yan Li, Kai Nassenstein, Benedikt Michael Schaarschmidt, Michael Forsting, Lale Umutlu and Felix Nensa
Diagnostics 2021, 11(9), 1542; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics11091542 - 25 Aug 2021
Cited by 11 | Viewed by 2541
Abstract
Short tau inversion recovery (STIR) sequences are frequently used in magnetic resonance imaging (MRI) of the spine. However, STIR sequences require a significant amount of scanning time. The purpose of the present study was to generate virtual STIR (vSTIR) images from non-contrast, non-fat-suppressed T1- and T2-weighted images using a conditional generative adversarial network (cGAN). The training dataset comprised 612 studies from 514 patients, and the validation dataset comprised 141 studies from 133 patients. For validation, 100 original STIR and respective vSTIR series were presented to six senior radiologists (blinded to the STIR type) in independent A/B-testing sessions. Additionally, for 141 real or vSTIR sequences, the testers were required to produce a structured report of 15 different findings. In the A/B-test, most testers could not reliably identify the real STIR (mean error of testers 1–6: 41%; 44%; 58%; 48%; 39%; 45%). In the evaluation of the structured reports, vSTIR was equivalent to real STIR in 13 of 15 categories. For the number of STIR-hyperintense vertebral bodies (p = 0.08) and the diagnosis of bone metastases (p = 0.055), equivalence narrowly missed statistical significance. By virtually generating STIR images of diagnostic quality from T1- and T2-weighted images using a cGAN, examination times can be shortened and throughput increased. Full article
(This article belongs to the Special Issue Clinical Diagnosis Using Deep Learning)
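
The conditional image-to-image translation idea can be sketched as a pix2pix-style training step, with a stacked (T1, T2) input and an acquired STIR target; the tiny networks, the L1 weight of 100, and the data tensors below are illustrative assumptions, not the study's architecture.

```python
import torch
import torch.nn as nn

G = nn.Sequential(                                  # T1+T2 (2 ch) -> vSTIR (1 ch)
    nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1), nn.Tanh(),
)
D = nn.Sequential(                                  # judges (T1, T2, STIR) triplets
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 1, 3, padding=1),                 # PatchGAN-style logits
)
adv, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

t1t2 = torch.rand(4, 2, 128, 128)                   # stacked T1- and T2-weighted slices
stir = torch.rand(4, 1, 128, 128) * 2 - 1           # acquired STIR, scaled to [-1, 1]

# One training step: discriminator first, then generator.
fake = G(t1t2)
d_real = D(torch.cat([t1t2, stir], dim=1))
d_fake = D(torch.cat([t1t2, fake.detach()], dim=1))
loss_d = adv(d_real, torch.ones_like(d_real)) + adv(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

d_fake = D(torch.cat([t1t2, fake], dim=1))
loss_g = adv(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake, stir)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```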

11 pages, 2889 KiB  
Article
Automated Adenoid Hypertrophy Assessment with Lateral Cephalometry in Children Based on Artificial Intelligence
by Tingting Zhao, Jiawei Zhou, Jiarong Yan, Lingyun Cao, Yi Cao, Fang Hua and Hong He
Diagnostics 2021, 11(8), 1386; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics11081386 - 31 Jul 2021
Cited by 10 | Viewed by 6347
Abstract
Adenoid hypertrophy may lead to pediatric obstructive sleep apnea and mouth breathing. The routine screening of adenoid hypertrophy in dental practice is helpful for preventing relevant craniofacial and systemic consequences. The purpose of this study was to develop an automated assessment tool for adenoid hypertrophy based on artificial intelligence. A clinical dataset containing 581 lateral cephalograms was used to train the convolutional neural network (CNN). According to Fujioka’s method for adenoid hypertrophy assessment, the regions of interest were defined with four keypoint landmarks. The adenoid ratio based on the four landmarks was used for adenoid hypertrophy assessment. Another dataset consisting of 160 patients’ lateral cephalograms was used for evaluating the performance of the network. Diagnostic performance was evaluated with statistical analysis. The developed system exhibited high sensitivity (0.906, 95% confidence interval [CI]: 0.750–0.980), specificity (0.938, 95% CI: 0.881–0.973) and accuracy (0.919, 95% CI: 0.877–0.961) for adenoid hypertrophy assessment. The area under the receiver operating characteristic curve was 0.987 (95% CI: 0.974–1.000). These results indicated that the proposed system is able to assess adenoid hypertrophy accurately. The CNN-incorporated system showed high accuracy and stability in the detection of adenoid hypertrophy from children’s lateral cephalograms, implying the feasibility of automated adenoid hypertrophy screening utilizing a deep neural network model. Full article
(This article belongs to the Special Issue Clinical Diagnosis Using Deep Learning)
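
The Fujioka-style adenoid ratio that drives the assessment can be computed from the four detected landmarks roughly as follows; the landmark roles, the geometry, and the 0.7 cut-off reflect one common formulation and are assumptions here, not necessarily the paper's exact definition.

```python
import numpy as np

def adenoid_ratio(adenoid_pt, basiocciput_p1, basiocciput_p2, pns):
    b1, b2 = np.asarray(basiocciput_p1, float), np.asarray(basiocciput_p2, float)
    a, n0 = np.asarray(adenoid_pt, float), np.asarray(pns, float)
    line, rel = b2 - b1, np.asarray(adenoid_pt, float) - b1
    # A: perpendicular distance from the point of maximal adenoid convexity
    # to the line along the anterior margin of the basiocciput.
    A = abs(line[0] * rel[1] - line[1] * rel[0]) / np.linalg.norm(line)
    # N: nasopharyngeal distance from the posterior nasal spine to the
    # basiocciput reference point.
    N = np.linalg.norm(n0 - b1)
    return A / N

ratio = adenoid_ratio((112, 80), (140, 60), (150, 95), (95, 110))  # pixel coordinates
print(ratio, "-> adenoid hypertrophy" if ratio > 0.7 else "-> within normal range")
```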

14 pages, 3019 KiB  
Article
Prediction of In-Hospital Cardiac Arrest Using Shallow and Deep Learning
by Minsu Chae, Sangwook Han, Hyowook Gil, Namjun Cho and Hwamin Lee
Diagnostics 2021, 11(7), 1255; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics11071255 - 13 Jul 2021
Cited by 12 | Viewed by 2389
Abstract
Sudden cardiac arrest can cause serious brain damage or lead to death, so it is very important to predict it before it occurs. However, early warning score systems, including the National Early Warning Score, are associated with low sensitivity and false positives. To overcome these limitations, we applied shallow and deep learning to predict cardiac arrest. We evaluated the effect of the Synthetic Minority Oversampling Technique (SMOTE) ratio on performance, and compared a decision tree, a random forest, logistic regression, a long short-term memory (LSTM) model, a gated recurrent unit (GRU) model, and LSTM–GRU hybrid models. Our proposed logistic regression model demonstrated a higher positive predictive value and sensitivity than traditional early warning systems. Full article
(This article belongs to the Special Issue Clinical Diagnosis Using Deep Learning)
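
A minimal sketch of the imbalance-handling step, assuming synthetic vital-sign features, an imbalanced-learn SMOTE at an illustrative 0.5 ratio, and two scikit-learn classifiers standing in for the models compared in the study.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 12))                    # stand-in vital-sign / lab features
y = (rng.random(5000) < 0.02).astype(int)          # ~2% arrest events (rare class)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample the minority class at a chosen SMOTE ratio.
X_res, y_res = SMOTE(sampling_strategy=0.5, random_state=0).fit_resample(X_tr, y_tr)

for clf in (LogisticRegression(max_iter=1000), RandomForestClassifier()):
    clf.fit(X_res, y_res)
    pred = clf.predict(X_te)
    print(type(clf).__name__,
          "PPV:", precision_score(y_te, pred, zero_division=0),
          "sensitivity:", recall_score(y_te, pred))
```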

15 pages, 1681 KiB  
Article
An End-to-End Pipeline for Early Diagnosis of Acute Promyelocytic Leukemia Based on a Compact CNN Model
by Yifan Qiao, Yi Zhang, Nian Liu, Pu Chen and Yan Liu
Diagnostics 2021, 11(7), 1237; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics11071237 - 11 Jul 2021
Cited by 6 | Viewed by 2179
Abstract
Timely microscopy screening of peripheral blood smears is essential for the diagnosis of acute promyelocytic leukemia (APL) due to the occurrence of early death (ED) before or during initial therapy. Manual screening is time-consuming and tedious, and may lead to missed diagnosis or misdiagnosis because of subjective bias. To address these problems, we develop a three-step pipeline to help in the early diagnosis of APL from peripheral blood smears. The pipeline consists of leukocyte focusing, cell classification, and diagnostic opinions. As its key component, a compact classification model based on attention-embedded convolutional neural network blocks is proposed to distinguish promyelocytes from normal leukocytes. The compact classification model is validated on both the combination of two public datasets, APL-Cytomorphology_LMU and APL-Cytomorphology_JHH, and a clinical dataset, yielding precisions of 96.53% and 99.20%, respectively. The results indicate that our model outperforms the other evaluated popular classification models owing to its better accuracy and smaller size. Furthermore, the entire pipeline is validated on realistic patient data. The proposed method promises to act as an assistant tool for APL diagnosis. Full article
(This article belongs to the Special Issue Clinical Diagnosis Using Deep Learning)

13 pages, 4687 KiB  
Article
Accuracy of New Deep Learning Model-Based Segmentation and Key-Point Multi-Detection Method for Ultrasonographic Developmental Dysplasia of the Hip (DDH) Screening
by Si-Wook Lee, Hee-Uk Ye, Kyung-Jae Lee, Woo-Young Jang, Jong-Ha Lee, Seok-Min Hwang and Yu-Ran Heo
Diagnostics 2021, 11(7), 1174; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics11071174 - 28 Jun 2021
Cited by 15 | Viewed by 6093
Abstract
Hip joint ultrasonographic (US) imaging is the gold standard for developmental dysplasia of the hip (DDH) screening. However, the effectiveness of this technique is subject to interoperator and intraobserver variability. Thus, a multi-detection deep learning artificial intelligence (AI)-based computer-aided diagnosis (CAD) system was developed and evaluated. The deep learning model used a two-stage training process to segment the four key anatomical structures and extract their respective key points. In addition, the check angle of the ilium body balancing level was set to evaluate the system’s cognitive ability. Hence, only images with visible key anatomical points and a check angle within ±5° were used in the analysis. Of the original 921 images, 320 (34.7%) were deemed appropriate for screening by both the system and the human observer. Moderate agreement (80.9%) was seen in the check angles of the appropriate group (Cohen’s κ = 0.525). Similarly, there was excellent agreement between measurers for the alpha angle (intraclass correlation coefficient, ICC = 0.764) and good agreement for the beta angle (ICC = 0.743). The developed system performed similarly to experienced medical experts; thus, it could further aid the effectiveness and speed of DDH diagnosis. Full article
(This article belongs to the Special Issue Clinical Diagnosis Using Deep Learning)
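
Once the key anatomical points are detected, Graf-style alpha and beta angles can be derived as angles between landmark-defined lines, as in the sketch below; the point coordinates and line definitions are simplified assumptions for illustration.

```python
import numpy as np

def angle_between(p1, p2, q1, q2):
    """Angle in degrees between line p1->p2 and line q1->q2."""
    u = np.asarray(p2, float) - np.asarray(p1, float)
    v = np.asarray(q2, float) - np.asarray(q1, float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

baseline = ((100, 40), (100, 160))        # iliac wing line (roughly vertical)
bony_roof = ((100, 120), (160, 150))      # lower iliac edge -> bony acetabular roof
cartilage_roof = ((100, 120), (150, 90))  # toward the labrum

alpha = angle_between(*baseline, *bony_roof)
beta = angle_between(*baseline, *cartilage_roof)
print(f"alpha = {alpha:.1f} deg, beta = {beta:.1f} deg")  # e.g., alpha >= 60 deg suggests Graf type I
```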

17 pages, 903 KiB  
Article
Method for Diagnosing the Bone Marrow Edema of Sacroiliac Joint in Patients with Axial Spondyloarthritis Using Magnetic Resonance Image Analysis Based on Deep Learning
by Kang Hee Lee, Sang Tae Choi, Guen Young Lee, You Jung Ha and Sang-Il Choi
Diagnostics 2021, 11(7), 1156; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics11071156 - 24 Jun 2021
Cited by 23 | Viewed by 2528
Abstract
Axial spondyloarthritis (axSpA) is a chronic inflammatory disease of the sacroiliac joints. In this study, we develop a method for detecting bone marrow edema using magnetic resonance (MR) images of the sacroiliac joints and a deep-learning network. A total of 815 MR images of the sacroiliac joints were obtained from 60 patients diagnosed with axSpA and 19 healthy subjects. Gadolinium-enhanced fat-suppressed T1-weighted oblique coronal images were used for deep learning. Active sacroiliitis was defined as bone marrow edema, and the following processes were performed: setting the region of interest (ROI) and normalizing it to a size suitable for input to a deep-learning network, determining bone marrow edema using a convolutional-neural-network-based deep-learning network for individual MR images, and determining sacroiliac arthritis in subject examinations based on the classification results of individual MR images. About 70% of the patients and normal subjects were randomly selected for the training dataset, and the remaining 30% formed the test dataset. This process was repeated five times to calculate the average classification rate of the five-fold sets. The gradient-weighted class activation mapping method was used to validate the classification results. In the performance analysis of the ResNet18-based classification network for individual MR images, use of the ROI showed excellent bone marrow edema detection performance, with 93.55 ± 2.19% accuracy, 92.87 ± 1.27% recall, and 94.69 ± 3.03% precision. The overall performance was further improved by using a median filter to incorporate context information. Finally, active sacroiliitis was diagnosed in individual subjects with 96.06 ± 2.83% accuracy, 100% recall, and 94.84 ± 3.73% precision. This is a pilot study to diagnose bone marrow edema by deep learning based on MR images, and the results suggest that MR analysis using deep learning can be a useful complementary means for clinicians to diagnose bone marrow edema. Full article
(This article belongs to the Special Issue Clinical Diagnosis Using Deep Learning)
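
The slice-level-to-subject-level step described above (median filtering of per-image results, then a subject decision) might look roughly like this; the kernel size, threshold, and subject-level rule are assumptions, not the study's exact post-processing.

```python
import numpy as np
from scipy.signal import medfilt

slice_probs = np.array([0.1, 0.2, 0.9, 0.15, 0.8, 0.85, 0.9, 0.3])  # per-MR-image edema probability
slice_pred = (slice_probs > 0.5).astype(int)        # raw per-slice decisions
smoothed = medfilt(slice_pred, kernel_size=3)       # use neighbouring-slice context

subject_positive = smoothed.sum() >= 2              # illustrative subject-level rule
print(slice_pred, smoothed, subject_positive)
```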

14 pages, 1318 KiB  
Article
Hyoid Bone Tracking in a Videofluoroscopic Swallowing Study Using a Deep-Learning-Based Segmentation Network
by Hyun-Il Kim, Yuna Kim, Bomin Kim, Dae Youp Shin, Seong Jae Lee and Sang-Il Choi
Diagnostics 2021, 11(7), 1147; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics11071147 - 23 Jun 2021
Cited by 8 | Viewed by 2543
Abstract
Kinematic analysis of the hyoid bone in a videofluoroscopic swallowing study (VFSS) is important for assessing dysphagia. However, calibrating the hyoid bone movement is time-consuming, and its reliability shows wide variation. Computer-assisted analysis has been studied to improve the efficiency and accuracy of hyoid bone identification and tracking, but its performance is limited. In this study, we aimed to design a robust network that can track hyoid bone movement automatically without human intervention. Using 69,389 frames from 197 VFSS files as the data set, a deep learning model for detection and trajectory prediction was constructed and trained with the BiFPN-U-Net(T) network. The present model showed improved performance when compared with the previous models: an area under the curve (AUC) of 0.998 for pixelwise accuracy, an object detection accuracy of 99.5%, and a Dice similarity of 90.9%. The bounding box detection performance for the hyoid bone and reference objects was superior to that of other models, with a mean average precision of 95.9%. The estimation of the distance of hyoid bone movement also showed higher accuracy. The deep learning model proposed in this study could be used to detect and track the hyoid bone more efficiently and accurately in VFSS analysis. Full article
(This article belongs to the Special Issue Clinical Diagnosis Using Deep Learning)
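
Turning per-frame hyoid masks into a calibrated movement estimate can be sketched as centroid tracking plus pixel-to-millimetre conversion via a reference object of known size; the mask sizes, reference dimensions, and calibration values below are invented for illustration.

```python
import numpy as np

def centroid(mask):
    ys, xs = np.nonzero(mask)
    return np.array([xs.mean(), ys.mean()])

frames = [np.zeros((512, 512), bool) for _ in range(3)]
frames[0][300:310, 250:260] = True      # hyoid mask, frame 0
frames[1][290:300, 255:265] = True
frames[2][280:290, 260:270] = True

ref_diameter_px = 47.0                  # detected reference object, in pixels
ref_diameter_mm = 23.0                  # its known physical size
mm_per_px = ref_diameter_mm / ref_diameter_px

path = np.array([centroid(m) for m in frames])
step_mm = np.linalg.norm(np.diff(path, axis=0), axis=1) * mm_per_px
print("total excursion: %.1f mm" % step_mm.sum())
```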

17 pages, 422 KiB  
Article
Forecasting the Walking Assistance Rehabilitation Level of Stroke Patients Using Artificial Intelligence
by Kanghyeon Seo, Bokjin Chung, Hamsa Priya Panchaseelan, Taewoo Kim, Hyejung Park, Byungmo Oh, Minho Chun, Sunjae Won, Donkyu Kim, Jaewon Beom, Doyoung Jeon and Jihoon Yang
Diagnostics 2021, 11(6), 1096; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics11061096 - 15 Jun 2021
Cited by 11 | Viewed by 3673
Abstract
Cerebrovascular accidents (CVA) cause a range of impairments in coordination, such as a spectrum of walking impairments ranging from mild gait imbalance to complete loss of mobility. Patients with CVA need personalized approaches tailored to their degree of walking impairment for effective rehabilitation. This paper aims to evaluate the validity of using various machine learning (ML) and deep learning (DL) classification models (support vector machine, decision tree, perceptron, Light Gradient Boosting Machine, AutoGluon, SuperTML, and TabNet) for the automated classification of walking assistant devices for CVA patients. We reviewed prescription data for eight different walking assistant devices from 383 CVA patients (1623 observations) across five hospitals. Among the classification models, the advanced tree-based classification models (LightGBM and the tree models in AutoGluon) achieved classification results of over 90% accuracy, recall, precision, and F1-score. In particular, AutoGluon not only presented the highest predictive performance (almost 92% in accuracy, recall, precision, and F1-score, and 86.8% in balanced accuracy) but also demonstrated that the classification performances of the tree-based models were higher than those of the other models on its leaderboard. Therefore, we believe that tree-based classification models have potential as practical diagnosis tools for medical rehabilitation. Full article
(This article belongs to the Special Issue Clinical Diagnosis Using Deep Learning)
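
A minimal sketch of an AutoGluon-style tabular experiment of the kind described above, using an invented synthetic table in place of the prescription records; exact AutoGluon behaviour and options may vary by version.

```python
import numpy as np
import pandas as pd
from autogluon.tabular import TabularPredictor

# Synthetic stand-in for the prescription records (feature names are invented).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "age": rng.integers(30, 90, 400),
    "onset_days": rng.integers(1, 365, 400),
    "motor_score": rng.integers(0, 100, 400),
    "device": rng.integers(0, 8, 400),          # one of eight walking aids
})
train_df, test_df = df.iloc[:300], df.iloc[300:]

predictor = TabularPredictor(label="device", eval_metric="accuracy").fit(train_df)
print(predictor.leaderboard(test_df))            # compare tree-based models vs. the rest
print(predictor.evaluate(test_df))
```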

24 pages, 1400 KiB  
Article
Intelligent Bone Age Assessment: An Automated System to Detect a Bone Growth Problem Using Convolutional Neural Networks with Attention Mechanism
by Mohd Asyraf Zulkifley, Nur Ayuni Mohamed, Siti Raihanah Abdani, Nor Azwan Mohamed Kamari, Asraf Mohamed Moubark and Ahmad Asrul Ibrahim
Diagnostics 2021, 11(5), 765; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics11050765 - 24 Apr 2021
Cited by 14 | Viewed by 7775
Abstract
Skeletal bone age assessment using X-ray images is a standard clinical procedure to detect any anomaly in bone growth among children and infants. The assessed bone age indicates the actual level of growth, whereby a large discrepancy between the assessed and chronological age might point to a growth disorder. Hence, skeletal bone age assessment is used to screen for the possibility of growth abnormalities, genetic problems, and endocrine disorders. Usually, manual screening is performed on X-ray images of the non-dominant hand using the Greulich–Pyle (GP) or Tanner–Whitehouse (TW) approach. The GP approach uses a standard hand atlas as the reference for predicting the bone age of a patient, while the TW approach uses a scoring mechanism to assess the bone age using information from several regions of interest. However, both approaches are heavily dependent on individual domain knowledge and expertise, which is prone to high bias in inter- and intra-observer results. Hence, an automated bone age assessment system, referred to as the Attention-Xception Network (AXNet), is proposed to predict the bone age accurately and automatically. The proposed AXNet consists of two parts: an image normalization module and a bone age regression module. The image normalization module transforms each X-ray image into a standardized form so that the regressor network can be trained on better input images. This module first extracts the hand region from the background, which is then rotated to an upright position using the angle calculated from four key points of interest. The masked and rotated hand image is then aligned so that it is positioned in the middle of the image. Both the masked and the rotated images are obtained through existing state-of-the-art deep learning methods. The last module then predicts the bone age through the Attention-Xception network, which incorporates multiple layers of spatial attention to emphasize the important features for more accurate bone age prediction. From the experimental results, the proposed AXNet achieves the lowest mean absolute error and mean squared error of 7.699 months and 108.869 months², respectively. Therefore, the proposed AXNet has demonstrated its potential for practical clinical use, with an error of less than one year, to assist experts or radiologists in evaluating bone age objectively. Full article
(This article belongs to the Special Issue Clinical Diagnosis Using Deep Learning)
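
The spatial-attention idea embedded in the regression backbone can be sketched with a CBAM-style layer as below; this particular formulation, the kernel size, and the feature-map shapes are assumptions rather than the paper's exact block.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)            # channel-average map
        mx, _ = x.max(dim=1, keepdim=True)           # channel-max map
        mask = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * mask                               # reweight informative hand/bone regions

features = torch.rand(2, 64, 28, 28)                  # backbone feature maps
print(SpatialAttention()(features).shape)             # torch.Size([2, 64, 28, 28])
```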

30 pages, 53519 KiB  
Article
Improved Semantic Segmentation of Tuberculosis—Consistent Findings in Chest X-rays Using Augmented Training of Modality-Specific U-Net Models with Weak Localizations
by Sivaramakrishnan Rajaraman, Les R. Folio, Jane Dimperio, Philip O. Alderson and Sameer K. Antani
Diagnostics 2021, 11(4), 616; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics11040616 - 30 Mar 2021
Cited by 27 | Viewed by 5702
Abstract
Deep learning (DL) has drawn tremendous attention for object localization and recognition in both natural and medical images. U-Net segmentation models have demonstrated superior performance compared to conventional hand-crafted feature-based methods. Medical image modality-specific DL models are better at transferring domain knowledge to a relevant target task than those pretrained on stock photography images. This characteristic helps improve model adaptation, generalization, and class-specific region of interest (ROI) localization. In this study, we train chest X-ray (CXR) modality-specific U-Nets and other state-of-the-art U-Net models for semantic segmentation of tuberculosis (TB)-consistent findings. Automated segmentation of such manifestations could help radiologists reduce errors and supplement decision-making while improving patient care and productivity. Our approach uses the publicly available TBX11K CXR dataset with weak TB annotations, typically provided as bounding boxes, to train a set of U-Net models. Next, we improve the results by augmenting the training data with weak localization, postprocessed into an ROI mask, from a DL classifier trained to classify CXRs as showing normal lungs or suspected TB manifestations. Test data are individually derived from the TBX11K CXR training distribution and other cross-institutional collections, including the Shenzhen TB and Montgomery TB CXR datasets. We observe that our augmented training strategy helped the CXR modality-specific U-Net models achieve superior performance with test data derived from the TBX11K CXR training distribution and cross-institutional collections (p < 0.05). We believe that this is the first study to i) use CXR modality-specific U-Nets for semantic segmentation of TB-consistent ROIs and ii) evaluate the segmentation performance while augmenting the training data with weak TB-consistent localizations. Full article
(This article belongs to the Special Issue Clinical Diagnosis Using Deep Learning)
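
Rasterizing the weak, box-level TB annotations into masks for U-Net training, before augmenting them with classifier-derived localizations, might look like the following; the box coordinates and image size are illustrative.

```python
import numpy as np

def boxes_to_mask(shape, boxes):
    """Turn (x0, y0, x1, y1) boxes into a binary segmentation target."""
    mask = np.zeros(shape, dtype=np.uint8)
    for x0, y0, x1, y1 in boxes:
        mask[y0:y1, x0:x1] = 1
    return mask

cxr_shape = (512, 512)
weak_boxes = [(120, 200, 210, 300), (330, 150, 400, 260)]   # TB-consistent regions
target = boxes_to_mask(cxr_shape, weak_boxes)
print(target.sum(), "positive pixels out of", target.size)
```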

14 pages, 1403 KiB  
Article
An Improved UNet++ Model for Congestive Heart Failure Diagnosis Using Short-Term RR Intervals
by Meng Lei, Jia Li, Ming Li, Liang Zou and Han Yu
Diagnostics 2021, 11(3), 534; https://doi.org/10.3390/diagnostics11030534 - 16 Mar 2021
Cited by 21 | Viewed by 3092
Abstract
Congestive heart failure (CHF), a progressive and complex syndrome caused by ventricular dysfunction, is difficult to detect at an early stage. Heart rate variability (HRV) was proposed as a prognostic indicator for CHF. Inspired by the success of 2-D UNet++ in medical image segmentation, in this paper, we introduce an end-to-end encoder-decoder model to detect CHF using HRV signals. The developed model enhances the UNet++ model with Squeeze-and-Excitation (SE) residual blocks to extract deep features hierarchically and distinguish CHF patients from normal subjects. Two open-source databases are utilized for evaluating the proposed method, and three segment lengths of intervals between successive R-peaks are employed in comparison with state-of-the-art methods. The proposed method achieves an accuracy of 85.64%, 86.65% and 88.79% when 500, 1000 and 2000 RR intervals are utilized, respectively. It demonstrates that HRV evaluation based on deep learning can be an important tool for early detection of CHF, and may assist clinicians in achieving timely and accurate diagnoses. Full article
(This article belongs to the Special Issue Clinical Diagnosis Using Deep Learning)
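
A 1-D squeeze-and-excitation residual block of the kind used to enhance the UNet++ encoder for RR-interval input can be sketched as follows; the channel count, reduction ratio, and segment length are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SEResBlock1d(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(channels, channels, 3, padding=1), nn.BatchNorm1d(channels), nn.ReLU(),
            nn.Conv1d(channels, channels, 3, padding=1), nn.BatchNorm1d(channels),
        )
        self.se = nn.Sequential(                       # squeeze-and-excitation gate
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        y = self.conv(x)
        w = self.se(y).unsqueeze(-1)                   # per-channel weights
        return torch.relu(x + y * w)                   # residual connection

rr = torch.rand(8, 16, 500)                            # batch of 500-beat RR segments, 16 channels
print(SEResBlock1d(16)(rr).shape)                      # torch.Size([8, 16, 500])
```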

9 pages, 2649 KiB  
Article
Artificial Intelligence in Fractured Dental Implant Detection and Classification: Evaluation Using Dataset from Two Dental Hospitals
by Dong-Woon Lee, Sung-Yong Kim, Seong-Nyum Jeong and Jae-Hong Lee
Diagnostics 2021, 11(2), 233; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics11020233 - 03 Feb 2021
Cited by 27 | Viewed by 3507
Abstract
Fracture of a dental implant (DI) is a rare mechanical complication that is a critical cause of DI failure and explantation. The purpose of this study was to evaluate the reliability and validity of three different deep convolutional neural network (DCNN) architectures (VGGNet-19, GoogLeNet Inception-v3, and automated DCNN) for the detection and classification of fractured DIs using panoramic and periapical radiographic images. A total of 21,398 DIs were reviewed at two dental hospitals, and 251 intact and 194 fractured DI radiographic images were identified and included as the dataset in this study. All three DCNN architectures achieved a fractured DI detection and classification accuracy of over 0.80 AUC. In particular, the automated DCNN architecture using periapical images showed the highest and most reliable detection (AUC = 0.984, 95% CI = 0.900–1.000) and classification (AUC = 0.869, 95% CI = 0.778–0.929) accuracy performance compared to the fine-tuned and pre-trained VGGNet-19 and GoogLeNet Inception-v3 architectures. The three DCNN architectures showed acceptable accuracy in the detection and classification of fractured DIs, with the best accuracy performance achieved by the automated DCNN architecture using only periapical images. Full article
(This article belongs to the Special Issue Clinical Diagnosis Using Deep Learning)

Other

2 pages, 170 KiB  
Reply
Reply to Çiftci, S.; Aydin, B.K. Comment on “Lee et al. Accuracy of New Deep Learning Model-Based Segmentation and Key-Point Multi-Detection Method for Ultrasonographic Developmental Dysplasia of the Hip (DDH) Screening. Diagnostics 2021, 11, 1174”
by Si-Wook Lee, Hee-Uk Ye, Kyung-Jae Lee, Woo-Young Jang, Jong-Ha Lee, Seok-Min Hwang and Yu-Ran Heo
Diagnostics 2022, 12(7), 1739; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12071739 - 18 Jul 2022
Viewed by 824
Abstract
We thank Dr. Sadettin Çiftci for his comment on the key-point issues in measuring the alpha and beta angles with the Graf method. We appreciate his feedback [...] Full article
(This article belongs to the Special Issue Clinical Diagnosis Using Deep Learning)
2 pages, 168 KiB  
Comment
Comment on Lee et al. Accuracy of New Deep Learning Model-Based Segmentation and Key-Point Multi-Detection Method for Ultrasonographic Developmental Dysplasia of the Hip (DDH) Screening. Diagnostics 2021, 11, 1174
by Sadettin Çiftci and Bahattin Kerem Aydin
Diagnostics 2022, 12(7), 1738; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12071738 - 18 Jul 2022
Cited by 1 | Viewed by 878
Abstract
We have read the article titled “Accuracy of New Deep Learning Model-Based Segmentation and Key-Point Multi-Detection Method for Ultrasonographic Developmental Dysplasia of the Hip (DDH) Screening” by Lee et al. [...] Full article
(This article belongs to the Special Issue Clinical Diagnosis Using Deep Learning)
18 pages, 3150 KiB  
Protocol
Content-Based Medical Image Retrieval and Intelligent Interactive Visual Browser for Medical Education, Research and Care
by Camilo G. Sotomayor, Marcelo Mendoza, Víctor Castañeda, Humberto Farías, Gabriel Molina, Gonzalo Pereira, Steffen Härtel, Mauricio Solar and Mauricio Araya
Diagnostics 2021, 11(8), 1470; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics11081470 - 13 Aug 2021
Cited by 5 | Viewed by 3066
Abstract
Medical imaging is essential nowadays throughout medical education, research, and care. Accordingly, international efforts have been made to set large-scale image repositories for these purposes. Yet, to date, browsing of large-scale medical image repositories has been troublesome, time-consuming, and generally limited by text search engines. A paradigm shift, by means of a query-by-example search engine, would alleviate these constraints and beneficially impact several practical demands throughout the medical field. The current project aims to address this gap in medical imaging consumption by developing a content-based image retrieval (CBIR) system, which combines two image processing architectures based on deep learning. Furthermore, a first-of-its-kind intelligent visual browser was designed that interactively displays a set of imaging examinations with similar visual content on a similarity map, making it possible to search for and efficiently navigate through a large-scale medical imaging repository, even if it has been set with incomplete and curated metadata. Users may, likewise, provide text keywords, in which case the system performs a content- and metadata-based search. The system was fashioned with an anonymizer service and designed to be fully interoperable according to international standards, to stimulate its integration within electronic healthcare systems and its adoption for medical education, research and care. Professionals of the healthcare sector, by means of a self-administered questionnaire, underscored that this CBIR system and intelligent interactive visual browser would be highly useful for these purposes. Further studies are warranted to complete a comprehensive assessment of the performance of the system through case description and protocolized evaluations by medical imaging specialists. Full article
(This article belongs to the Special Issue Clinical Diagnosis Using Deep Learning)
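
The query-by-example retrieval at the core of the system can be sketched as CNN embedding plus nearest-neighbour search; the ResNet-18 backbone, cosine metric, and random stand-in images below are assumptions for illustration, not the project's actual architecture.

```python
import numpy as np
import torch
from torchvision import models, transforms
from sklearn.neighbors import NearestNeighbors
from PIL import Image

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()                 # use the 512-d pooled features
backbone.eval()
prep = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

def embed(img: Image.Image) -> np.ndarray:
    with torch.no_grad():
        return backbone(prep(img.convert("RGB"))[None]).squeeze(0).numpy()

# Index a stand-in repository and query it with an example examination.
repository = [Image.fromarray(np.uint8(np.random.rand(256, 256) * 255)) for _ in range(20)]
index = NearestNeighbors(n_neighbors=5, metric="cosine").fit(np.stack([embed(im) for im in repository]))
dist, idx = index.kneighbors(embed(repository[0])[None])
print("most similar studies:", idx[0])
```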
