Machine Learning for Computer-Aided Diagnosis in Biomedical Imaging

A special issue of Diagnostics (ISSN 2075-4418). This special issue belongs to the section "Machine Learning and Artificial Intelligence in Diagnostics".

Deadline for manuscript submissions: closed (31 March 2022) | Viewed by 61752

Special Issue Editors

Prof. Dr. Seong K. Mun
Arlington Innovation Center, Health Research, Virginia Tech-NCR, 900 N. Glebe Road, Arlington, VA 22203, USA
Interests: diagnostic imaging; radiation oncology; biomedical imaging; artificial intelligence imaging; computer aided diagnosis; telemedicine; health informatics; electronic health record; quantitative imaging
Special Issues, Collections and Topics in MDPI journals
Prof. Dr. Dow-Mu Koh
Department of Diagnostic Radiology, The Royal Marsden NHS Foundation Trust, London, UK
Interests: diagnostic radiology; oncological imaging; functional cancer imaging; body diffusion-weighted MRI; whole body MRI; hepatobiliary and pancreatic cancers; malignant bone disease; radiomics and artificial intelligence

Special Issue Information

Dear Colleagues,

The radiology imaging community has been very active in developing computer-aided diagnosis (CAD) tools since the early 1990s, long before the promise of artificial intelligence (AI) fueled unbounded expectations in medicine and in other industries. Today, there are more than 50 FDA-approved AI imaging products in the US, but clinical adoption of these products has been slow. Nevertheless, the successful adoption of some AI imaging tools indicates that AI can positively impact medical imaging services. This Special Issue focuses on topics and issues that can make AI more meaningfully intelligent for radiology and pathology.

Imaging diagnostic services are far more than radiologists or pathologists making a diagnosis: the imaging workflow is a complex, labor-intensive process, often carried out under stressful conditions. Past CAD research focused on the diagnosis process alone; AI tools will have to address the entire workflow beyond diagnosis. In AI development, we have seen successes in the laboratory that did not translate into real-world successes. The root causes of such failures have spurred the development of better science beyond traditional convolutional neural networks (CNNs), including better tools for managing poor-quality input data and insufficient data volumes for training and testing. Experience in other industries shows that various deep learning approaches (supervised, unsupervised, and others) can address the complex issues of workflow and productivity improvement. AI is also being applied to image acquisition to improve image quality or accelerate acquisitions, thereby enhancing diagnostic performance while shortening examination times. Recent advances in quantitative imaging, in terms of radiomics and pathomics, offer new tools for more integrated diagnosis, especially between pathology and radiology.
In short, through this Special Issue, we would like to bring forth next-generation AI theories, tools, and solutions for radiology and pathology imaging that can ultimately lead to dramatic improvements in patient care.

Prof. Dr. Seong K. Mun
Prof. Dr. Dow-Mu Koh
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Diagnostics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (16 papers)


Editorial

3 pages, 183 KiB  
Editorial
Special Issue: “Machine Learning for Computer-Aided Diagnosis in Biomedical Imaging”
by Seong K. Mun and Dow-Mu Koh
Diagnostics 2022, 12(6), 1331; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12061331 - 27 May 2022
Viewed by 1061
Abstract
The radiology imaging community has been developing computer-aided diagnosis (CAD) tools since the early 1990s, before the imagination of artificial intelligence (AI) fueled many unbound expectations in healthcare and other industries [...] Full article
(This article belongs to the Special Issue Machine Learning for Computer-Aided Diagnosis in Biomedical Imaging)

Research

17 pages, 1864 KiB  
Article
Deep Learning-Based Total Kidney Volume Segmentation in Autosomal Dominant Polycystic Kidney Disease Using Attention, Cosine Loss, and Sharpness Aware Minimization
by Anish Raj, Fabian Tollens, Laura Hansen, Alena-Kathrin Golla, Lothar R. Schad, Dominik Nörenberg and Frank G. Zöllner
Diagnostics 2022, 12(5), 1159; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12051159 - 07 May 2022
Cited by 19 | Viewed by 2538
Abstract
Early detection of the autosomal dominant polycystic kidney disease (ADPKD) is crucial as it is one of the most common causes of end-stage renal disease (ESRD) and kidney failure. The total kidney volume (TKV) can be used as a biomarker to quantify disease progression. The TKV calculation requires accurate delineation of kidney volumes, which is usually performed manually by an expert physician. However, this is time-consuming and automated segmentation is warranted. Furthermore, the scarcity of large annotated datasets hinders the development of deep learning solutions. In this work, we address this problem by implementing three attention mechanisms into the U-Net to improve TKV estimation. Additionally, we implement a cosine loss function that works well on image classification tasks with small datasets. Lastly, we apply a technique called sharpness aware minimization (SAM) that helps improve the generalizability of networks. Our results show significant improvements (p-value < 0.05) over the reference kidney segmentation U-Net. We show that the attention mechanisms and/or the cosine loss with SAM can achieve a dice score (DSC) of 0.918, a mean symmetric surface distance (MSSD) of 1.20 mm with the mean TKV difference of −1.72%, and R2 of 0.96 while using only 100 MRI datasets for training and testing. Furthermore, we tested four ensembles and obtained improvements over the best individual network, achieving a DSC and MSSD of 0.922 and 1.09 mm, respectively. Full article
(This article belongs to the Special Issue Machine Learning for Computer-Aided Diagnosis in Biomedical Imaging)
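As an editorial aside for readers new to segmentation metrics: the Dice score (DSC) reported above can be computed directly from a predicted and a reference binary mask. A minimal NumPy sketch (the masks and values below are illustrative toys, not data from the study):

```python
import numpy as np

def dice_score(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Toy 2D masks: the prediction overlaps most of the reference region.
ref = np.zeros((8, 8), dtype=int)
ref[2:6, 2:6] = 1            # 16 reference voxels
pred = np.zeros((8, 8), dtype=int)
pred[2:6, 3:7] = 1           # shifted by one column -> 12 voxels overlap
print(round(dice_score(pred, ref), 3))  # 2*12 / (16+16) = 0.75
```

The same formula extends unchanged to 3D volumes, since only voxel counts enter the ratio.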

24 pages, 9503 KiB  
Article
Deep Learning in Multi-Class Lung Diseases’ Classification on Chest X-ray Images
by Sungyeup Kim, Beanbonyka Rim, Seongjun Choi, Ahyoung Lee, Sedong Min and Min Hong
Diagnostics 2022, 12(4), 915; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12040915 - 06 Apr 2022
Cited by 33 | Viewed by 4556
Abstract
Chest X-ray radiographic (CXR) imagery enables earlier and easier lung disease diagnosis. Therefore, in this paper, we propose a deep learning method using a transfer learning technique to classify lung diseases on CXR images to improve the efficiency and accuracy of computer-aided diagnostic systems’ (CADs’) diagnostic performance. Our proposed method is a one-step, end-to-end learning, which means that raw CXR images are directly inputted into a deep learning model (EfficientNet v2-M) to extract their meaningful features in identifying disease categories. We experimented using our proposed method on three classes of normal, pneumonia, and pneumothorax of the U.S. National Institutes of Health (NIH) data set, and achieved validation performances of loss = 0.6933, accuracy = 82.15%, sensitivity = 81.40%, and specificity = 91.65%. We also experimented on the Cheonan Soonchunhyang University Hospital (SCH) data set on four classes of normal, pneumonia, pneumothorax, and tuberculosis, and achieved validation performances of loss = 0.7658, accuracy = 82.20%, sensitivity = 81.40%, and specificity = 94.48%; testing accuracy of normal, pneumonia, pneumothorax, and tuberculosis classes was 63.60%, 82.30%, 82.80%, and 89.90%, respectively. Full article
(This article belongs to the Special Issue Machine Learning for Computer-Aided Diagnosis in Biomedical Imaging)
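The per-class sensitivity and specificity figures quoted above follow from a one-vs-rest view of the confusion matrix. A minimal sketch (the label values are illustrative, not the NIH or SCH data):

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred, positive):
    """One-vs-rest sensitivity (recall) and specificity for one class."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    pos = y_true == positive
    neg = ~pos
    tp = np.sum(pos & (y_pred == positive))   # true positives
    tn = np.sum(neg & (y_pred != positive))   # true negatives
    sens = tp / pos.sum() if pos.sum() else 0.0
    spec = tn / neg.sum() if neg.sum() else 0.0
    return sens, spec

y_true = ["normal", "pneumonia", "pneumonia", "pneumothorax", "normal"]
y_pred = ["normal", "pneumonia", "normal", "pneumothorax", "normal"]
sens, spec = sensitivity_specificity(y_true, y_pred, "pneumonia")
print(sens, spec)  # 0.5 1.0
```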

12 pages, 2497 KiB  
Article
Deep Learning Model Based on 3D Optical Coherence Tomography Images for the Automated Detection of Pathologic Myopia
by So-Jin Park, Taehoon Ko, Chan-Kee Park, Yong-Chan Kim and In-Young Choi
Diagnostics 2022, 12(3), 742; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12030742 - 18 Mar 2022
Cited by 15 | Viewed by 2182
Abstract
Pathologic myopia causes vision impairment and blindness, and therefore, necessitates a prompt diagnosis. However, there is no standardized definition of pathologic myopia, and its interpretation by 3D optical coherence tomography images is subjective, requiring considerable time and money. Therefore, there is a need for a diagnostic tool that can automatically and quickly diagnose pathologic myopia in patients. This study aimed to develop an algorithm that uses 3D optical coherence tomography volumetric images (C-scan) to automatically diagnose patients with pathologic myopia. The study was conducted using 367 eyes of patients who underwent optical coherence tomography tests at the Ophthalmology Department of Incheon St. Mary’s Hospital and Seoul St. Mary’s Hospital from January 2012 to May 2020. To automatically diagnose pathologic myopia, a deep learning model was developed using 3D optical coherence tomography images. The model was developed using transfer learning based on four pre-trained convolutional neural networks (ResNet18, ResNext50, EfficientNetB0, EfficientNetB4). Grad-CAM was used to visualize features affecting the detection of pathologic myopia. The performance of each model was evaluated and compared based on accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUROC). The model based on EfficientNetB4 showed the best performance (95% accuracy, 93% sensitivity, 96% specificity, and 98% AUROC) in identifying pathologic myopia. Full article
(This article belongs to the Special Issue Machine Learning for Computer-Aided Diagnosis in Biomedical Imaging)
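The AUROC used to compare the four backbones above is equivalent to the normalized Mann-Whitney U statistic over the model's scores. A minimal NumPy sketch (scores and labels are made up for illustration):

```python
import numpy as np

def auroc(scores, labels):
    """Area under the ROC curve via the rank (Mann-Whitney U) statistic."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    order = scores.argsort()
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    for s in np.unique(scores):        # average ranks over ties
        tie = scores == s
        ranks[tie] = ranks[tie].mean()
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    u = ranks[labels].sum() - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)

scores = [0.9, 0.8, 0.35, 0.6, 0.2]   # model probabilities of "pathologic"
labels = [1, 1, 0, 1, 0]              # ground truth
print(auroc(scores, labels))  # 1.0: every positive outranks every negative
```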

15 pages, 1933 KiB  
Article
An Efficient Multi-Level Convolutional Neural Network Approach for White Blood Cells Classification
by César Cheuque, Marvin Querales, Roberto León, Rodrigo Salas and Romina Torres
Diagnostics 2022, 12(2), 248; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12020248 - 20 Jan 2022
Cited by 38 | Viewed by 5983
Abstract
The evaluation of white blood cells is essential to assess the quality of the human immune system; however, the assessment of the blood smear depends on the pathologist’s expertise. Most machine learning tools make a one-level classification for white blood cell classification. This work presents a two-stage hybrid multi-level scheme that efficiently classifies four cell groups: lymphocytes and monocytes (mononuclear) and segmented neutrophils and eosinophils (polymorphonuclear). At the first level, a Faster R-CNN network is applied for the identification of the region of interest of white blood cells, together with the separation of mononuclear cells from polymorphonuclear cells. Once separated, two parallel convolutional neural networks with the MobileNet structure are used to recognize the subclasses in the second level. The results obtained using Monte Carlo cross-validation show that the proposed model has a performance metric of around 98.4% (accuracy, recall, precision, and F1-score). The proposed model represents a good alternative for computer-aided diagnosis (CAD) tools for supporting the pathologist in the clinical laboratory in assessing white blood cells from blood smear images. Full article
(This article belongs to the Special Issue Machine Learning for Computer-Aided Diagnosis in Biomedical Imaging)
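The two-level routing described above (detector separates mononuclear from polymorphonuclear cells, then a dedicated classifier per branch) can be sketched as plain control flow. The stubs below only stand in for the Faster R-CNN and MobileNet models; all function names are hypothetical:

```python
# Sketch of the two-level routing logic; stage_one stands in for the
# Faster R-CNN detector, the branch classifiers for the two MobileNets.
def stage_one(cell_image):
    """Stub: returns 'mononuclear' or 'polymorphonuclear'."""
    return cell_image["family"]          # placeholder for the real detector

def mono_classifier(cell_image):
    return cell_image["hint"]            # 'lymphocyte' or 'monocyte'

def poly_classifier(cell_image):
    return cell_image["hint"]            # 'neutrophil' or 'eosinophil'

def classify_cell(cell_image):
    family = stage_one(cell_image)       # level 1: coarse group
    if family == "mononuclear":          # level 2: route to a specialist
        return mono_classifier(cell_image)
    return poly_classifier(cell_image)

cell = {"family": "polymorphonuclear", "hint": "eosinophil"}
print(classify_cell(cell))  # eosinophil
```

The design choice is that each second-level network only has to discriminate two visually similar subclasses, which is an easier problem than a flat four-way classification.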

16 pages, 1417 KiB  
Article
Fully Automatic Knee Bone Detection and Segmentation on Three-Dimensional MRI
by Rania Almajalid, Ming Zhang and Juan Shan
Diagnostics 2022, 12(1), 123; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12010123 - 06 Jan 2022
Cited by 20 | Viewed by 3721
Abstract
In the medical sector, three-dimensional (3D) images such as computed tomography (CT) and magnetic resonance imaging (MRI) scans are commonly used. The 3D MRI is a non-invasive method of studying the soft-tissue structures in a knee joint for osteoarthritis studies. It can greatly improve the accuracy of segmenting structures such as cartilage, bone marrow lesion, and meniscus by identifying the bone structure first. U-net is a convolutional neural network that was originally designed to segment biological images with limited training data. The input of the original U-net is a single 2D image and the output is a binary 2D image. In this study, we modified the U-net model to identify the knee bone structures using 3D MRI, which is a sequence of 2D slices. A fully automatic model has been proposed to detect and segment knee bones. The proposed model was trained, tested, and validated using 99 knee MRI cases, where each case consists of 160 2D slices for a single knee scan. To evaluate the model’s performance, the similarity, dice coefficient (DICE), and area error metrics were calculated. Separate models were trained using different knee bone components, including tibia, femur, and patella, as well as a combined model for segmenting all the knee bones. Using the whole MRI sequence (160 slices), the method was able to detect the beginning and ending bone slices first, and then segment the bone structures for all the slices in between. On the testing set, the detection model accomplished 98.79% accuracy and the segmentation model achieved DICE 96.94% and similarity 93.98%. The proposed method outperforms several state-of-the-art methods, i.e., it outperforms U-net by 3.68%, SegNet by 14.45%, and FCN-8 by 2.34%, in terms of DICE score using the same dataset. Full article
(This article belongs to the Special Issue Machine Learning for Computer-Aided Diagnosis in Biomedical Imaging)

11 pages, 1485 KiB  
Article
Cancer Diagnosis of Microscopic Biopsy Images Using a Social Spider Optimisation-Tuned Neural Network
by Prasanalakshmi Balaji and Kumarappan Chidambaram
Diagnostics 2022, 12(1), 11; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12010011 - 22 Dec 2021
Cited by 5 | Viewed by 2240
Abstract
One of the most dangerous diseases that threaten people is cancer. If diagnosed in earlier stages, cancer, with its life-threatening consequences, has the possibility of eradication. In addition, accuracy in prediction plays a significant role. Hence, developing a reliable model that contributes much towards the medical community in the early diagnosis of biopsy images with perfect accuracy comes to the forefront. This article aims to develop better predictive models using multivariate data and high-resolution diagnostic tools in clinical cancer research. This paper proposes the social spider optimisation (SSO) algorithm-tuned neural network to classify microscopic biopsy images of cancer. The significance of the proposed model relies on the effective tuning of the weights of the neural network classifier by the SSO algorithm. The performance of the proposed strategy is analysed with performance metrics such as accuracy, sensitivity, specificity, and MCC measures, and the attained results are 95.9181%, 94.2515%, 97.125%, and 97.68%, respectively, which shows the effectiveness of the proposed method for cancer disease diagnosis. Full article
(This article belongs to the Special Issue Machine Learning for Computer-Aided Diagnosis in Biomedical Imaging)

12 pages, 19713 KiB  
Article
Deep Learning-Based Artificial Intelligence System for Automatic Assessment of Glomerular Pathological Findings in Lupus Nephritis
by Zhaohui Zheng, Xiangsen Zhang, Jin Ding, Dingwen Zhang, Jihong Cui, Xianghui Fu, Junwei Han and Ping Zhu
Diagnostics 2021, 11(11), 1983; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics11111983 - 26 Oct 2021
Cited by 12 | Viewed by 2134
Abstract
Accurate assessment of renal histopathology is crucial for the clinical management of patients with lupus nephritis (LN). However, the current classification system has poor interpathologist agreement. This paper proposes a deep convolutional neural network (CNN)-based system that detects and classifies glomerular pathological findings in LN. A dataset of 349 renal biopsy whole-slide images (WSIs) (163 patients with LN, periodic acid-Schiff stain, 3906 glomeruli) annotated by three expert nephropathologists was used. The CNN models YOLOv4 and VGG16 were employed to localise the glomeruli and classify glomerular lesions (slight/severe impairments or sclerotic lesions). An additional 321 unannotated WSIs from 161 patients were used for performance evaluation at the per-patient kidney level. The proposed model achieved an accuracy of 0.951 and Cohen’s kappa of 0.932 (95% CI 0.915–0.949) for the entire test set for classifying the glomerular lesions. For multiclass detection at the glomerular level, the mean average precision of the CNN was 0.807, with ‘slight’ and ‘severe’ glomerular lesions being easily identified (F1: 0.924 and 0.952, respectively). At the per-patient kidney level, the model achieved a high agreement with nephropathologist (linear weighted kappa: 0.855, 95% CI: 0.795–0.916, p < 0.001; quadratic weighted kappa: 0.906, 95% CI: 0.873–0.938, p < 0.001). The results suggest that deep learning is a feasible assistive tool for the objective and automatic assessment of pathological LN lesions. Full article
(This article belongs to the Special Issue Machine Learning for Computer-Aided Diagnosis in Biomedical Imaging)
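Cohen's kappa, which the study uses to measure model-pathologist agreement beyond chance, has a short closed form. A minimal sketch of the unweighted variant (the toy label sequences are illustrative, not the study's annotations):

```python
import numpy as np

def cohens_kappa(a, b):
    """Unweighted Cohen's kappa between two raters' label sequences."""
    a, b = np.asarray(a), np.asarray(b)
    labels = np.unique(np.concatenate([a, b]))
    p_obs = np.mean(a == b)                       # observed agreement
    p_exp = sum(np.mean(a == l) * np.mean(b == l)  # chance agreement
                for l in labels)
    return (p_obs - p_exp) / (1 - p_exp)

model  = ["slight", "severe", "sclerotic", "slight", "severe", "slight"]
expert = ["slight", "severe", "sclerotic", "severe", "severe", "slight"]
print(round(cohens_kappa(model, expert), 3))  # 0.739
```

The weighted variants reported in the abstract additionally penalize disagreements by their distance on an ordinal scale, but the observed-vs-expected structure is the same.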

18 pages, 33173 KiB  
Article
Analysis of Brain MRI Images Using Improved CornerNet Approach
by Marriam Nawaz, Tahira Nazir, Momina Masood, Awais Mehmood, Rabbia Mahum, Muhammad Attique Khan, Seifedine Kadry and Orawit Thinnukool
Diagnostics 2021, 11(10), 1856; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics11101856 - 08 Oct 2021
Cited by 28 | Viewed by 3511
Abstract
The brain tumor is a deadly disease that is caused by the abnormal growth of brain cells, which affects the human blood cells and nerves. Timely and precise detection of brain tumors is an important task to avoid complex and painful treatment procedures, as it can assist doctors in surgical planning. Manual brain tumor detection is a time-consuming activity and highly dependent on the availability of area experts. Therefore, it is a need of the hour to design accurate automated systems for the detection and classification of various types of brain tumors. However, the exact localization and categorization of brain tumors is a challenging job due to extensive variations in their size, position, and structure. To deal with the challenges, we have presented a novel approach, namely, DenseNet-41-based CornerNet framework. The proposed solution comprises three steps. Initially, we develop annotations to locate the exact region of interest. In the second step, a custom CornerNet with DenseNet-41 as a base network is introduced to extract the deep features from the suspected samples. In the last step, the one-stage detector CornerNet is employed to locate and classify several brain tumors. To evaluate the proposed method, we have utilized two databases, namely, the Figshare and Brain MRI datasets, and attained an average accuracy of 98.8% and 98.5%, respectively. Both qualitative and quantitative analysis show that our approach is more proficient and consistent with detecting and classifying various types of brain tumors than other latest techniques. Full article
(This article belongs to the Special Issue Machine Learning for Computer-Aided Diagnosis in Biomedical Imaging)

15 pages, 4565 KiB  
Article
A Deep Learning Based Approach for Patient Pulmonary CT Image Screening to Predict Coronavirus (SARS-CoV-2) Infection
by Parag Verma, Ankur Dumka, Rajesh Singh, Alaknanda Ashok, Aman Singh, Hani Moaiteq Aljahdali, Seifedine Kadry and Hafiz Tayyab Rauf
Diagnostics 2021, 11(9), 1735; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics11091735 - 21 Sep 2021
Cited by 13 | Viewed by 2269
Abstract
The novel coronavirus (nCoV-2019) is responsible for the acute respiratory disease in humans known as COVID-19. This infection was found in the Wuhan and Hubei provinces of China in the month of December 2019, after which it spread all over the world. By March, 2020, this epidemic had spread to about 117 countries and its different variants continue to disturb human life all over the world, causing great damage to the economy. Through this paper, we have attempted to identify and predict the novel coronavirus from influenza-A viral cases and healthy patients without infection through applying deep learning technology over patient pulmonary computed tomography (CT) images, as well as by the model that has been evaluated. The CT image data used under this method has been collected from various radiopedia data from online sources with a total of 548 CT images, of which 232 are from 12 patients infected with COVID-19, 186 from 17 patients with influenza A virus, and 130 are from 15 healthy candidates without infection. From the results of examination of the reference data determined from the point of view of CT imaging cases in general, the accuracy of the proposed model is 79.39%. Thus, this deep learning model will help in establishing early screening of COVID-19 patients and thus prove to be an analytically robust method for clinical experts. Full article
(This article belongs to the Special Issue Machine Learning for Computer-Aided Diagnosis in Biomedical Imaging)

11 pages, 1206 KiB  
Article
Using Machine Learning Algorithms to Predict Hospital Acquired Thrombocytopenia after Operation in the Intensive Care Unit: A Retrospective Cohort Study
by Yisong Cheng, Chaoyue Chen, Jie Yang, Hao Yang, Min Fu, Xi Zhong, Bo Wang, Min He, Zhi Hu, Zhongwei Zhang, Xiaodong Jin, Yan Kang and Qin Wu
Diagnostics 2021, 11(9), 1614; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics11091614 - 03 Sep 2021
Cited by 4 | Viewed by 1760
Abstract
Hospital acquired thrombocytopenia (HAT) is a common hematological complication after surgery. This research aimed to develop and compare the performance of seven machine learning (ML) algorithms for predicting patients that are at risk of HAT after surgery. We conducted a retrospective cohort study which enrolled adult patients transferred to the intensive care unit (ICU) after surgery in West China Hospital of Sichuan University from January 2016 to December 2018. All subjects were randomly divided into a derivation set (70%) and test set (30%). ten-fold cross-validation was used to estimate the hyperparameters of ML algorithms during the training process in the derivation set. After ML models were developed, the sensitivity, specificity, area under the curve (AUC), and net benefit (decision analysis curve, DCA) were calculated to evaluate the performances of ML models in the test set. A total of 10,369 patients were included and in 1354 (13.1%) HAT occurred. The AUC of all seven ML models exceeded 0.7, the two highest were Gradient Boosting (GB) (0.834, 0.814–0.853, p < 0.001) and Random Forest (RF) (0.828, 0.807–0.848, p < 0.001). There was no difference between GB and RF (0.834 vs. 0.828, p = 0.293); however, these two were better than the remaining five models (p < 0.001). The DCA revealed that all ML models had high net benefits with a threshold probability approximately less than 0.6. In conclusion, we found that ML models constructed by multiple preoperative variables can predict HAT in patients transferred to ICU after surgery, which can improve risk stratification and guide management in clinical practice. Full article
(This article belongs to the Special Issue Machine Learning for Computer-Aided Diagnosis in Biomedical Imaging)
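The evaluation protocol above (random 70/30 derivation/test split, then ten-fold cross-validation inside the derivation set for hyperparameter tuning) can be sketched at the index level. The seed and the plain round-robin fold split are assumptions for illustration; the study does not specify them:

```python
import numpy as np

rng = np.random.default_rng(0)          # assumed seed, for reproducibility
n = 10_369                              # cohort size from the study
idx = rng.permutation(n)
split = int(n * 0.7)
derivation, test = idx[:split], idx[split:]   # 70% / 30%

# Ten (train, validation) index pairs over the derivation set.
folds = np.array_split(derivation, 10)
cv_pairs = [
    (np.concatenate(folds[:k] + folds[k + 1:]), folds[k])
    for k in range(10)
]
print(len(derivation), len(test), len(cv_pairs))  # 7258 3111 10
```

In practice a stratified split (preserving the 13.1% HAT rate in each partition) would be preferable for an outcome this imbalanced.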

38 pages, 17199 KiB  
Article
Brain Hemorrhage Classification in CT Scan Images Using Minimalist Machine Learning
by José-Luis Solorio-Ramírez, Magdalena Saldana-Perez, Miltiadis D. Lytras, Marco-Antonio Moreno-Ibarra and Cornelio Yáñez-Márquez
Diagnostics 2021, 11(8), 1449; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics11081449 - 11 Aug 2021
Cited by 10 | Viewed by 2899
Abstract
Over time, a myriad of applications have been generated for pattern classification algorithms. Several case studies include parametric classifiers such as the Multi-Layer Perceptron (MLP) classifier, which is one of the most widely used today. Others use non-parametric classifiers: Support Vector Machine (SVM), K-Nearest Neighbors (K-NN), Naïve Bayes (NB), Adaboost, and Random Forest (RF). However, there is still little work directed toward a new trend in Artificial Intelligence (AI), which is known as eXplainable Artificial Intelligence (X-AI). This new trend seeks to make Machine Learning (ML) algorithms increasingly simple and easy to understand for users. Therefore, following this new wave of knowledge, in this work, the authors develop a new pattern classification methodology, based on the implementation of the novel Minimalist Machine Learning (MML) paradigm and a higher relevance attribute selection algorithm, which we call dMeans. We examine and compare the performance of this methodology with MLP, NB, KNN, SVM, Adaboost, and RF classifiers to perform the task of classification of Computed Tomography (CT) brain images. These grayscale images have an area of 128 × 128 pixels, and there are two classes available in the dataset: CT without Hemorrhage and CT with Intra-Ventricular Hemorrhage (IVH), which were classified using the Leave-One-Out Cross-Validation method. Most of the models tested by Leave-One-Out Cross-Validation performed between 50% and 75% accuracy, while sensitivity and specificity ranged between 58% and 86%. The experiments performed using our methodology matched the best classifier observed, with 86.50% accuracy, and they outperformed all state-of-the-art algorithms in specificity with 91.60%. This performance is achieved with simple and practical methods, in keeping with the trend of generating easily explainable algorithms. Full article
(This article belongs to the Special Issue Machine Learning for Computer-Aided Diagnosis in Biomedical Imaging)
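Leave-One-Out Cross-Validation, the protocol used throughout the study above, fits on all samples but one and tests on the held-out sample, cycling through the whole dataset. A minimal sketch using a 1-nearest-neighbour classifier as a stand-in model (the toy feature vectors are illustrative, not the CT data):

```python
import numpy as np

def loocv_accuracy(X, y):
    """Leave-one-out CV accuracy of a 1-nearest-neighbour classifier."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    correct = 0
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)  # distances to every sample
        d[i] = np.inf                         # exclude the held-out sample
        correct += y[d.argmin()] == y[i]      # predict its neighbour's label
    return correct / len(X)

# Two well-separated toy clusters (stand-ins for image feature vectors).
X = [[0, 0], [0, 1], [1, 0], [10, 10], [10, 11], [11, 10]]
y = [0, 0, 0, 1, 1, 1]
print(loocv_accuracy(X, y))  # 1.0
```

LOOCV is attractive for small datasets like the one above because every sample is used for both training and testing, at the cost of fitting the model n times.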

22 pages, 6490 KiB  
Article
Colon Tissues Classification and Localization in Whole Slide Images Using Deep Learning
by Pushpanjali Gupta, Yenlin Huang, Prasan Kumar Sahoo, Jeng-Fu You, Sum-Fu Chiang, Djeane Debora Onthoni, Yih-Jong Chern, Kuo-Yu Chao, Jy-Ming Chiang, Chien-Yuh Yeh and Wen-Sy Tsai
Diagnostics 2021, 11(8), 1398; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics11081398 - 02 Aug 2021
Cited by 19 | Viewed by 3865
Abstract
Colorectal cancer is one of the leading causes of cancer-related death worldwide. The early diagnosis of colon cancer not only reduces mortality but also reduces the burden related to the treatment strategies such as chemotherapy and/or radiotherapy. However, when the microscopic examination of the suspected colon tissue sample is carried out, it becomes a tedious and time-consuming job for the pathologists to find the abnormality in the tissue. In addition, there may be interobserver variability that might lead to conflict in the final diagnosis. As a result, there is a crucial need of developing an intelligent automated method that can learn from the patterns themselves and assist the pathologist in making a faster, accurate, and consistent decision for determining the normal and abnormal region in the colorectal tissues. Moreover, the intelligent method should be able to localize the abnormal region in the whole slide image (WSI), which will make it easier for the pathologists to focus on only the region of interest making the task of tissue examination faster and lesser time-consuming. As a result, artificial intelligence (AI)-based classification and localization models are proposed for determining and localizing the abnormal regions in WSI. The proposed models achieved F-score of 0.97, area under curve (AUC) 0.97 with pretrained Inception-v3 model, and F-score of 0.99 and AUC 0.99 with customized Inception-ResNet-v2 Type 5 (IR-v2 Type 5) model. Full article
(This article belongs to the Special Issue Machine Learning for Computer-Aided Diagnosis in Biomedical Imaging)

15 pages, 3366 KiB  
Article
TransMed: Transformers Advance Multi-Modal Medical Image Classification
by Yin Dai, Yifan Gao and Fayu Liu
Diagnostics 2021, 11(8), 1384; https://doi.org/10.3390/diagnostics11081384 - 31 Jul 2021
Cited by 148 | Viewed by 10805
Abstract
Over the past decade, convolutional neural networks (CNNs) have shown very competitive performance in medical image analysis tasks such as disease classification, tumor segmentation, and lesion detection. CNNs excel at extracting local image features but, owing to the locality of the convolution operation, cannot model long-range relationships well. Recently, transformers have been applied to computer vision and have achieved remarkable success on large-scale datasets. Compared with natural images, multi-modal medical images have explicit and important long-range dependencies, and effective multi-modal fusion strategies can greatly improve the performance of deep models. This prompted us to study transformer-based structures and apply them to multi-modal medical images. Existing transformer-based network architectures require large-scale datasets to achieve good performance, but medical imaging datasets are relatively small, which makes it difficult to apply pure transformers to medical image analysis. Therefore, we propose TransMed for multi-modal medical image classification. TransMed combines the advantages of CNNs and transformers to efficiently extract low-level image features and establish long-range dependencies between modalities. We evaluated our model on two datasets, parotid gland tumor classification and knee injury classification. Combining our contributions, we achieve improvements of 10.1% and 1.9% in average accuracy, respectively, outperforming other state-of-the-art CNN-based models. The results of the proposed method are promising and have tremendous potential for application to a wide range of medical image analysis tasks. To the best of our knowledge, this is the first work to apply transformers to multi-modal medical image classification.
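The long-range dependency modelling that the abstract attributes to transformers comes from self-attention, in which every token (here, a CNN-derived patch embedding) is re-expressed as a weighted mix of all tokens. A minimal NumPy sketch of single-head, unparameterized self-attention follows; the shapes and the absence of learned query/key/value projections are simplifications for illustration, not TransMed's actual architecture:

```python
import numpy as np

def self_attention(x):
    """x: (tokens, dim) array of patch embeddings, e.g. pooled CNN
    features with one token per image patch or modality (hypothetical)."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                   # pairwise token affinities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ x                              # context-mixed embeddings
```

Because every output token attends to every input token, tokens drawn from different imaging modalities can interact directly, which is the fusion mechanism the abstract exploits.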

Review

21 pages, 1067 KiB  
Review
Deep Learning Applications in Computed Tomography Images for Pulmonary Nodule Detection and Diagnosis: A Review
by Rui Li, Chuda Xiao, Yongzhi Huang, Haseeb Hassan and Bingding Huang
Diagnostics 2022, 12(2), 298; https://doi.org/10.3390/diagnostics12020298 - 25 Jan 2022
Cited by 48 | Viewed by 6393
Abstract
Lung cancer has one of the highest mortality rates of all cancers and poses a severe threat to people’s health. Diagnosing lung nodules at an early stage is therefore crucial to improving patient survival rates, and numerous computer-aided diagnosis (CAD) systems have been developed to detect and classify such nodules early. Current CAD systems for pulmonary nodules comprise data acquisition, pre-processing, lung segmentation, nodule detection, false-positive reduction, nodule segmentation, and classification. A number of review articles have considered various components of such systems, but this review focuses on the segmentation and classification stages. Specifically, it categorizes the segmentation literature by lung nodule type and by network architecture, i.e., general neural network versus multiview convolutional neural network (CNN) architectures. It likewise organizes the classification literature by task: nodule versus non-nodule, and benign versus malignant. The essential CT lung datasets and evaluation metrics used in the detection and diagnosis of lung nodules are also systematically summarized. This review thus provides a baseline understanding of the topic for interested readers.
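The stages this review enumerates form a strictly sequential pipeline, each stage consuming the previous stage's output. That composition can be sketched with stub stages; every function name and return value below is an illustrative placeholder, not taken from any cited system:

```python
from functools import reduce

# Illustrative stubs for the CAD stages listed above; a real system
# would plug a trained model into each step.
def preprocess(ct):       return {"scan": ct}                  # resample, normalize HU
def segment_lungs(s):     s["lungs"] = s["scan"]; return s     # lung-field mask
def detect_nodules(s):    s["candidates"] = ["c1", "c2"]; return s
def reduce_false_pos(s):  s["nodules"] = s["candidates"][:1]; return s
def classify(s):          s["label"] = "benign"; return s      # benign vs malignant

PIPELINE = [preprocess, segment_lungs, detect_nodules, reduce_false_pos, classify]

def run_cad(ct_volume):
    # Thread the evolving state dict through each stage in order.
    return reduce(lambda state, stage: stage(state), PIPELINE, ct_volume)
```

The sequential structure explains why the review treats segmentation and classification separately: errors in an early stage (e.g. lung segmentation) propagate into every later one.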

26 pages, 1018 KiB  
Review
Automatic Segmentation of Pelvic Cancers Using Deep Learning: State-of-the-Art Approaches and Challenges
by Reza Kalantar, Gigin Lin, Jessica M. Winfield, Christina Messiou, Susan Lalondrelle, Matthew D. Blackledge and Dow-Mu Koh
Diagnostics 2021, 11(11), 1964; https://doi.org/10.3390/diagnostics11111964 - 22 Oct 2021
Cited by 27 | Viewed by 3472
Abstract
The recent rise of deep learning (DL) and its promising capabilities in capturing non-explicit detail from large datasets have attracted substantial research attention in the field of medical image processing. DL provides grounds for the technological development of computer-aided diagnosis and segmentation in radiology and radiation oncology. Among the anatomical locations where recent auto-segmentation algorithms have been employed, the pelvis remains one of the most challenging due to large intra- and inter-patient soft-tissue variability. This review provides a comprehensive, non-systematic, and clinically oriented overview of 74 DL-based segmentation studies, published between January 2016 and December 2020, for bladder, prostate, cervical, and rectal cancers on computed tomography (CT) and magnetic resonance imaging (MRI), highlighting the key findings, challenges, and limitations.
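Segmentation studies such as those surveyed here are most commonly evaluated with the Dice similarity coefficient (DSC), an overlap metric between the predicted and reference masks. A minimal NumPy sketch (the function name and the smoothing term `eps` are our own choices, not from the review):

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    # Dice similarity coefficient between two binary masks:
    # 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap.
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)
```

The large soft-tissue variability the review highlights shows up directly in this metric: small boundary disagreements on deformable pelvic organs can markedly lower the DSC even when the organ is correctly located.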
