Advances in Machine Learning for Computer-Aided Diagnosis in Biomedical Imaging

A special issue of Diagnostics (ISSN 2075-4418). This special issue belongs to the section "Machine Learning and Artificial Intelligence in Diagnostics".

Deadline for manuscript submissions: closed (31 December 2023)

Special Issue Editors


Guest Editor
Department of Pathology and Clinical Bioinformatics, Erasmus Medical Center, 3015 GD Rotterdam, The Netherlands
Interests: deep learning; radiomics; histopathology; medical image analysis; image segmentation; image classification; CAD systems

Guest Editor
Tissue Hybridisation & Digital Pathology, Precision Medicine Centre of Excellence, Queen’s University Belfast, Northern Ireland, 97 Lisburn Rd, Belfast BT9 7AE, UK
Interests: breast cancer; breast density; deep learning; mammograms; generative adversarial networks; convolutional neural network; COVID-19; ct slices; image segmentation

Special Issue Information

Dear Colleagues,

Cancer ranks as the second most common cause of death in many countries, following cardiovascular diseases [1]. Early detection and diagnosis are therefore crucial for improving the 5-year survival rate [2]. Screening examinations play an essential role in diagnosing diseases [3], requiring physicians to interpret large numbers of medical images. However, human interpretation is limited by inaccuracy, distraction, and fatigue, which can produce false positives and false negatives and, in turn, improper treatment. Computer-aided diagnosis (CAD) systems are therefore needed as second-opinion tools for ambiguous cases.

Computer-aided diagnosis (CAD) systems use classical image processing, computer vision, machine learning, and deep learning methods for image analysis. Using image classification or segmentation algorithms, they identify a region of interest (ROI) at a specific location within a given image, or produce an outcome of interest in the form of a label indicating a diagnosis or prognosis. This Special Issue focuses on advanced CAD methods that use artificial intelligence (AI) approaches across imaging modalities, such as X-ray, computed tomography (CT), positron emission tomography (PET), ultrasound, MRI, immunohistochemistry, and hematoxylin and eosin (H&E) whole slide images (WSIs), toward a final diagnosis or prognosis.
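As an illustrative sketch only (not taken from any paper in this issue), the two outputs described above — a localized ROI and a diagnostic label — can be mimicked on a toy grayscale image with simple thresholding and a hypothetical intensity cutoff:

```python
# Toy CAD pipeline sketch: localize an ROI by intensity thresholding,
# then emit a label from a simple feature of that ROI. Thresholds and
# the "suspicious"/"benign" rule are illustrative assumptions.

def find_roi(image, threshold):
    """Return the bounding box (r0, c0, r1, c1) of pixels above threshold."""
    coords = [(r, c) for r, row in enumerate(image)
              for c, v in enumerate(row) if v > threshold]
    if not coords:
        return None
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    return (min(rows), min(cols), max(rows), max(cols))

def classify_roi(image, box, cutoff):
    """Label the ROI 'suspicious' if its mean intensity exceeds a cutoff."""
    r0, c0, r1, c1 = box
    vals = [image[r][c] for r in range(r0, r1 + 1) for c in range(c0, c1 + 1)]
    return "suspicious" if sum(vals) / len(vals) > cutoff else "benign"

image = [
    [0, 0, 0, 0],
    [0, 9, 8, 0],
    [0, 7, 9, 0],
    [0, 0, 0, 0],
]
box = find_roi(image, threshold=5)        # bounding box of the bright region
label = classify_roi(image, box, cutoff=6)
```

Real CAD systems replace both steps with learned models (segmentation networks for the ROI, classifiers for the label), but the input/output contract is the same.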

[1] Huang, X.; Xiao, R.; Pan, S.; Yang, X.; Yuan, W.; Tu, Z.; et al. Uncovering the roles of long non-coding RNAs in cancer stem cells. J. Hematol. Oncol. 2017, 10, 62. doi: 10.1186/s13045-017-0428-9.

[2] Mohaghegh, P.; Rockall, A.G. Imaging strategy for early ovarian cancer: Characterization of adnexal masses with conventional and advanced imaging techniques. Radiographics 2012, 32, 1751-1773.

[3] Sarigoz, T.; Ertan, T.; Topuz, O.; Sevim, Y.; Cihan, Y. Role of digital infrared thermal imaging in the diagnosis of breast mass: A pilot study. Infrared Phys. Technol. 2018, 91, 214-219.

Dr. Farhan Akram
Dr. Vivek Kumar Singh
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Diagnostics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • cancer diagnosis 
  • medical images 
  • electronic health records 
  • machine learning 
  • deep learning 
  • artificial intelligence 
  • explainable AI models 
  • multi-modal analysis 
  • federated learning 
  • CAD systems

Published Papers (15 papers)


Research


15 pages, 2052 KiB  
Article
Remote Diagnosis on Upper Respiratory Tract Infections Based on a Neural Network with Few Symptom Words—A Feasibility Study
by Chung-Hung Tsai, Kuan-Hung Liu and Da-Chuan Cheng
Diagnostics 2024, 14(3), 329; https://doi.org/10.3390/diagnostics14030329 - 02 Feb 2024
Abstract
This study explores the feasibility of using neural networks (NNs) and deep learning to diagnose three common respiratory diseases from few symptom words. These three diseases are nasopharyngitis, upper respiratory infection, and bronchitis/bronchiolitis. Through natural language processing, the symptom word vectors are encoded by GPT-2 and classified by the last linear layer of the NN. The experimental results are promising, showing that this model achieves high performance in predicting all three diseases, reaching 90% accuracy, which highlights its potential use in assisting patients' understanding of their conditions via remote diagnosis. Unlike previous studies that have focused on extracting various categories of information from medical records, this study directly extracts sequential features from unstructured text data, reducing the effort required for data pre-processing.
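The final classification step described in this abstract — embedding vectors fed through a last linear layer — can be sketched in miniature. The 2-D vectors, weights, and disease order below are toy stand-ins, not the paper's GPT-2 embeddings or trained parameters:

```python
import math

# Hypothetical sketch: average symptom-word vectors, apply one linear
# layer (logits = W x + b), and take the softmax argmax as the diagnosis.

DISEASES = ["nasopharyngitis", "upper respiratory infection",
            "bronchitis/bronchiolitis"]

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def classify(word_vectors, W, b):
    """Average the word vectors, apply the linear layer, return the top class."""
    dim = len(word_vectors[0])
    x = [sum(v[i] for v in word_vectors) / len(word_vectors) for i in range(dim)]
    logits = [sum(W[k][i] * x[i] for i in range(dim)) + b[k]
              for k in range(len(W))]
    probs = softmax(logits)
    return DISEASES[probs.index(max(probs))], probs

# Toy 2-D "embeddings" for two symptom words, and hand-picked weights.
vectors = [[1.0, 0.0], [0.8, 0.2]]
W = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
b = [0.0, 0.0, 0.0]
label, probs = classify(vectors, W, b)
```

In the actual study the embedding step is GPT-2 and the weights are learned; only the linear-layer-plus-softmax structure is shown here.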

21 pages, 6612 KiB  
Article
Early Detection of Lung Nodules Using a Revolutionized Deep Learning Model
by Durgesh Srivastava, Santosh Kumar Srivastava, Surbhi Bhatia Khan, Hare Ram Singh, Sunil K. Maakar, Ambuj Kumar Agarwal, Areej A. Malibari and Eid Albalawi
Diagnostics 2023, 13(22), 3485; https://doi.org/10.3390/diagnostics13223485 - 20 Nov 2023
Abstract
According to the World Health Organization (WHO), lung cancer is the leading cause of cancer deaths globally. In 2020, more than 2.2 million people were diagnosed with lung cancer worldwide, accounting for 11.4% of all new cancer cases, and lung cancer was the largest driver of cancer-related mortality, with an estimated 1.8 million fatalities. Statistics on lung cancer rates are not uniform among geographic areas, demographic subgroups, or age groups. The chance of an effective treatment outcome and the likelihood of patient survival can be greatly improved by the early identification of lung cancer. Lung cancer identification in medical images such as CT scans and MRIs is an area where deep learning (DL) algorithms have shown great potential. This study uses a Hybridized Faster R-CNN (HFRCNN) to identify lung cancer at an early stage. Faster R-CNN has been put to good use in identifying critical entities in medical imagery, such as MRIs and CT scans, and many recent studies have examined techniques to detect lung nodules (possible indicators of lung cancer) in scanned images, which may help in the early identification of lung cancer. HFRCNN is a two-stage, region-based entity detector: it first generates a collection of proposed regions, which are subsequently classified and refined with the aid of a convolutional neural network (CNN). A distinct dataset is used in the model's training process, producing valuable outcomes. The proposed model achieved more than 97% detection accuracy, making it far more accurate than several previously reported methods.
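A standard ingredient of the two-stage detection pipeline this abstract describes is filtering overlapping region proposals by intersection-over-union (IoU) with non-maximum suppression. The sketch below uses toy boxes and scores, not the paper's proposals:

```python
# IoU-based non-maximum suppression (NMS), as used when a region-proposal
# stage hands overlapping candidate boxes to the classification stage.
# Boxes are (x0, y0, x1, y1); the threshold 0.5 is an illustrative choice.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, thresh):
    """Greedily keep high-scoring boxes, dropping heavy overlaps with kept ones."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= thresh for j in keep):
            keep.append(i)
    return keep

proposals = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
kept = nms(proposals, scores, thresh=0.5)  # the near-duplicate box is suppressed
```

Production detectors apply this per class after the CNN refines and scores each proposal; the geometry is identical.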

12 pages, 1420 KiB  
Article
Predicting Non-Small-Cell Lung Cancer Survival after Curative Surgery via Deep Learning of Diffusion MRI
by Jung Won Moon, Ehwa Yang, Jae-Hun Kim, O Jung Kwon, Minsu Park and Chin A Yi
Diagnostics 2023, 13(15), 2555; https://doi.org/10.3390/diagnostics13152555 - 01 Aug 2023
Abstract
Background: The objective of this study is to evaluate the predictive power of a survival model using deep learning of diffusion-weighted images (DWI) in patients with non-small-cell lung cancer (NSCLC). Methods: DWI at b-values of 0, 100, and 700 s/mm2 (DWI0, DWI100, DWI700) were preoperatively obtained for 100 NSCLC patients who underwent curative surgery (57 men, 43 women; mean age, 62 years). The ADC0-100 (perfusion-sensitive ADC), ADC100-700 (perfusion-insensitive ADC), ADC0-100-700, and demographic features were collected as input data, and 5-year survival was collected as output data. The survival model adopted transfer learning from a pre-trained VGG-16 network, whereby the softmax layer was replaced with a binary classification layer for the prediction of 5-year survival. Three channels of input data were selected in combination from the DWIs and ADC images, and their accuracies and AUCs were compared for the best performance during 10-fold cross-validation. Results: 66 patients survived, and 34 patients died. The predictive performance was best for the combination DWI0-ADC0-100-ADC0-100-700 (accuracy: 92%; AUC: 0.904), followed by DWI0-DWI700-ADC0-100-700, DWI0-DWI100-DWI700, and DWI0-DWI0-DWI0 (accuracy: 91%, 81%, 76%; AUC: 0.889, 0.763, 0.711, respectively). Survival prediction models trained with ADC performed significantly better than the one trained with DWI only (p-values < 0.05). Survival prediction improved when demographic features were added to the model with only DWIs, but the benefit of clinical information was not prominent when added to the best-performing model using both DWI and ADC. Conclusions: Deep learning may play a role in the survival prediction of lung cancer, and performance can be enhanced by inputting proven functional parameters such as the ADC instead of the original DWI data only.
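The ADC maps used as inputs above follow from the standard mono-exponential diffusion model S(b) = S0 · exp(−b · ADC), so the ADC between two b-values is ln(S1/S2)/(b2 − b1). The signal values below are synthetic, generated from an assumed ADC to show the computation round-trips:

```python
import math

# Apparent diffusion coefficient (ADC) between two b-values (s/mm^2),
# from the mono-exponential decay model. Signals here are synthetic.

def adc(s1, s2, b1, b2):
    """ADC in mm^2/s from signals s1, s2 acquired at b-values b1 < b2."""
    return math.log(s1 / s2) / (b2 - b1)

s0, true_adc = 1000.0, 1.5e-3           # assumed baseline signal and ADC
s100 = s0 * math.exp(-100 * true_adc)   # simulated DWI100 signal
s700 = s0 * math.exp(-700 * true_adc)   # simulated DWI700 signal

adc_0_100 = adc(s0, s100, 0, 100)       # perfusion-sensitive estimate
adc_100_700 = adc(s100, s700, 100, 700) # perfusion-insensitive estimate
```

With noise-free synthetic signals both estimates recover the assumed ADC exactly; on real data the low-b estimate is inflated by perfusion, which is precisely why the study separates ADC0-100 from ADC100-700.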

15 pages, 3099 KiB  
Article
Generalist Vision Foundation Models for Medical Imaging: A Case Study of Segment Anything Model on Zero-Shot Medical Segmentation
by Peilun Shi, Jianing Qiu, Sai Mu Dalike Abaxi, Hao Wei, Frank P.-W. Lo and Wu Yuan
Diagnostics 2023, 13(11), 1947; https://doi.org/10.3390/diagnostics13111947 - 02 Jun 2023
Abstract
Medical image analysis plays an important role in clinical diagnosis. In this paper, we examine the recent Segment Anything Model (SAM) on medical images, and report both quantitative and qualitative zero-shot segmentation results on nine medical image segmentation benchmarks, covering various imaging modalities, such as optical coherence tomography (OCT), magnetic resonance imaging (MRI), and computed tomography (CT), as well as different applications including dermatology, ophthalmology, and radiology. These benchmarks are representative and commonly used in model development. Our experimental results indicate that while SAM presents remarkable segmentation performance on images from the general domain, its zero-shot segmentation ability remains restricted for out-of-distribution images, e.g., medical images. In addition, SAM exhibits inconsistent zero-shot segmentation performance across different unseen medical domains. For certain structured targets, e.g., blood vessels, the zero-shot segmentation of SAM completely failed. In contrast, simple fine-tuning with a small amount of data could lead to remarkable improvement in segmentation quality, showing the great potential and feasibility of using fine-tuned SAM to achieve accurate medical image segmentation for precision diagnostics. Our study indicates the versatility of generalist vision foundation models on medical imaging, and their great potential to achieve desired performance through fine-tuning, eventually addressing the challenges associated with accessing large and diverse medical datasets in support of clinical diagnostics.
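Segmentation quality in studies like this one is typically scored with the Dice similarity coefficient, Dice = 2|A ∩ B| / (|A| + |B|) over binary masks. A minimal sketch on toy masks (not the paper's benchmarks):

```python
# Dice similarity coefficient between a predicted and a ground-truth
# binary mask, each given as a list of rows of 0/1 values.

def dice(pred, truth):
    """2 * |intersection| / (|pred| + |truth|); 1.0 for two empty masks."""
    inter = sum(p & t for row_p, row_t in zip(pred, truth)
                for p, t in zip(row_p, row_t))
    total = sum(sum(r) for r in pred) + sum(sum(r) for r in truth)
    return 2 * inter / total if total else 1.0

pred  = [[1, 1, 0],
         [0, 1, 0]]
truth = [[1, 0, 0],
         [0, 1, 1]]
score = dice(pred, truth)   # 2 * 2 / (3 + 3)
```

Zero-shot vs. fine-tuned comparisons, such as the one reported above, reduce to comparing this score across model variants on the same masks.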

19 pages, 3751 KiB  
Article
Detection and Classification of Histopathological Breast Images Using a Fusion of CNN Frameworks
by Ahsan Rafiq, Alexander Chursin, Wejdan Awad Alrefaei, Tahani Rashed Alsenani, Ghadah Aldehim, Nagwan Abdel Samee and Leila Jamel Menzli
Diagnostics 2023, 13(10), 1700; https://doi.org/10.3390/diagnostics13101700 - 11 May 2023
Abstract
Breast cancer is responsible for the deaths of thousands of women each year. The diagnosis of breast cancer (BC) frequently makes use of several imaging techniques. On the other hand, incorrect identification might occasionally result in unnecessary therapy and diagnosis. Therefore, the accurate identification of breast cancer can save a significant number of patients from undergoing unnecessary surgery and biopsy procedures. As a result of recent developments in the field, deep learning systems used for medical image processing have shown significant benefits. Deep learning (DL) models have found widespread use for extracting important features from histopathologic BC images, which has helped to improve classification performance and has assisted in automating the process. In recent times, both convolutional neural networks (CNNs) and hybrid deep learning models have demonstrated impressive performance. In this research, three different types of CNN models are proposed: a straightforward CNN model (1-CNN), a fusion CNN model (2-CNN), and a three-CNN model (3-CNN). The findings of the experiment demonstrate that the techniques based on the 3-CNN algorithm performed the best in terms of accuracy (90.10%), recall (89.90%), precision (89.80%), and F1-score (89.90%). In conclusion, the developed CNN-based approaches are contrasted with more modern machine learning and deep learning models. The application of CNN-based methods has resulted in a significant increase in the accuracy of BC classification.

17 pages, 3274 KiB  
Article
Deep-EEG: An Optimized and Robust Framework and Method for EEG-Based Diagnosis of Epileptic Seizure
by Waseem Ahmad Mir, Mohd Anjum, Izharuddin and Sana Shahab
Diagnostics 2023, 13(4), 773; https://doi.org/10.3390/diagnostics13040773 - 17 Feb 2023
Abstract
Detecting brain disorders using deep learning methods has attracted considerable attention during the last few years. Increased depth can lead to greater computational efficiency, accuracy, and optimization and less loss. Epilepsy is one of the most common chronic neurological disorders, characterized by repeated seizures. We have developed a deep learning model using a Deep Convolutional Autoencoder with Bidirectional Long Short-Term Memory for Epileptic Seizure Detection (DCAE-ESD-Bi-LSTM) for the automatic detection of seizures from EEG data. The significant feature of our model is that it contributes to the accurate and optimized diagnosis of epilepsy in both ideal and real-life situations. The results on the benchmark (CHB-MIT) dataset and the dataset collected by the authors show the relevance of the proposed approach over baseline deep learning techniques, achieving an accuracy of 99.8%, classification accuracy of 99.7%, sensitivity of 99.8%, specificity and precision of 99.9%, and F1 score of 99.6%. Our approach can contribute to the accurate and optimized detection of seizures while scaling the design rules and increasing performance without changing the network's depth.
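The battery of metrics this abstract reports (accuracy, sensitivity, specificity, precision, F1) all derive from one binary confusion matrix. A small sketch on toy seizure/no-seizure labels, unrelated to the paper's EEG data:

```python
# Binary classification metrics from a confusion matrix.
# y_true / y_pred use 1 = seizure, 0 = no seizure (toy values).

def metrics(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    sens = tp / (tp + fn)          # sensitivity (recall)
    spec = tn / (tn + fp)          # specificity
    prec = tp / (tp + fp)          # precision
    acc = (tp + tn) / len(y_true)  # accuracy
    f1 = 2 * prec * sens / (prec + sens)
    return {"accuracy": acc, "sensitivity": sens,
            "specificity": spec, "precision": prec, "f1": f1}

m = metrics([1, 1, 0, 0, 1, 0],
            [1, 0, 0, 0, 1, 1])
```

Reporting sensitivity and specificity separately, as the paper does, matters in seizure detection because missed seizures (false negatives) and false alarms (false positives) carry different clinical costs.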

22 pages, 5400 KiB  
Article
An Intelligent Auxiliary Framework for Bone Malignant Tumor Lesion Segmentation in Medical Image Analysis
by Xiangbing Zhan, Jun Liu, Huiyun Long, Jun Zhu, Haoyu Tang, Fangfang Gou and Jia Wu
Diagnostics 2023, 13(2), 223; https://doi.org/10.3390/diagnostics13020223 - 07 Jan 2023
Abstract
Bone malignant tumors are metastatic and aggressive, with poor treatment outcomes and prognosis. Rapid and accurate diagnosis is crucial for limb salvage and increasing the survival rate. There is a lack of research on deep learning methods for segmenting bone malignant tumor lesions in medical images with complex backgrounds and blurred boundaries. Therefore, we propose a new intelligent auxiliary framework for the medical image segmentation of bone malignant tumor lesions, which consists of a supervised edge-attention guidance segmentation network (SEAGNET). We design a boundary key-points selection module to supervise the learning of edge attention in the model to retain fine-grained edge feature information. We precisely locate malignant tumors with instance segmentation networks while extracting feature maps of tumor lesions in medical images. The rich context-dependent information in the feature map is captured by mixed attention to better understand the uncertainty and ambiguity of the boundary, and edge attention learning is used to guide the segmentation network to focus on the fuzzy boundary of the tumor region. We conduct extensive experiments on real-world medical data to validate our model. The results confirm the superiority of our method over the latest segmentation methods, achieving the best performance in terms of the Dice similarity coefficient (0.967), precision (0.968), and accuracy (0.996), and demonstrate the framework's contribution to assisting doctors in improving diagnostic accuracy and clinical efficiency.

20 pages, 4838 KiB  
Article
A Novel Computer-Aided Detection/Diagnosis System for Detection and Classification of Polyps in Colonoscopy
by Chia-Pei Tang, Hong-Yi Chang, Wei-Chun Wang and Wei-Xuan Hu
Diagnostics 2023, 13(2), 170; https://doi.org/10.3390/diagnostics13020170 - 04 Jan 2023
Abstract
Using a deep learning algorithm in the development of a computer-aided system for colon polyp detection is effective in reducing the miss rate. This study aimed to develop a system for colon polyp detection and classification. We used a data augmentation technique and a conditional GAN to generate polyp images for YOLO training to improve polyp detection. After testing the model five times, a model with 300 GAN-generated images (GAN 300) achieved the highest average precision (AP) of 54.60% for SSA and 75.41% for TA. These results were better than those of the data augmentation method, which showed an AP of 53.56% for SSA and 72.55% for TA. The AP, mAP, and IoU of the GAN 300 model for HP were 80.97%, 70.07%, and 57.24%, compared with 76.98%, 67.70%, and 55.26%, respectively, for the data augmentation technique. We also used Gaussian blurring to simulate the blurred images encountered during colonoscopy and then applied DeblurGAN-v2 to deblur the images. Further, we trained the dataset using YOLO to classify polyps. After using DeblurGAN-v2, the mAP increased from 25.64% to 30.74%. This method effectively improved the accuracy of polyp detection and classification.

15 pages, 3070 KiB  
Article
HaTU-Net: Harmonic Attention Network for Automated Ovarian Ultrasound Quantification in Assisted Pregnancy
by Vivek Kumar Singh, Elham Yousef Kalafi, Eugene Cheah, Shuhang Wang, Jingchao Wang, Arinc Ozturk, Qian Li, Yonina C. Eldar, Anthony E. Samir and Viksit Kumar
Diagnostics 2022, 12(12), 3213; https://doi.org/10.3390/diagnostics12123213 - 18 Dec 2022
Abstract
Antral Follicle Count (AFC) is a non-invasive biomarker used to assess ovarian reserves through transvaginal ultrasound (TVUS) imaging. Antral follicles' diameters are usually in the range of 2–10 mm. The primary aim of ovarian reserve monitoring is to measure the size of ovarian follicles and the number of antral follicles. Manual follicle measurement is inhibited by operator time, expertise, and the subjectivity of delineating the two axes of the follicles. This necessitates an automated framework capable of quantifying follicle size and count in a clinical setting. This paper proposes a novel Harmonic Attention-based U-Net network, HaTU-Net, to precisely segment the ovary and follicles in ultrasound images. We replace the standard convolution operation with a harmonic block that convolves the features with a window-based discrete cosine transform (DCT). Additionally, we propose a harmonic attention mechanism that promotes the extraction of rich features. The suggested technique allows for capturing the most relevant features, such as boundaries, shape, and textural patterns, in the presence of various noise sources (i.e., shadows, poor contrast between tissues, and speckle noise). We evaluated the proposed model on our in-house private dataset of 197 patients undergoing TVUS exams. The experimental results on an independent test set confirm that HaTU-Net achieved a Dice coefficient score of 90% for ovaries and 81% for antral follicles, an improvement of 2% and 10%, respectively, compared to a standard U-Net. Further, we accurately measured follicle size, yielding recall and precision rates of 91.01% and 76.49%, respectively.
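The window-based DCT at the heart of the harmonic block described above can be shown in one dimension. This is only a minimal analogue of the paper's 2-D in-convolution DCT, on a toy signal window:

```python
import math

# 1-D DCT-II of a signal window: X[k] = sum_n x[n] * cos(pi/N * (n + 0.5) * k).
# A harmonic block convolves features against such cosine basis windows
# instead of learned spatial filters alone.

def dct2(x):
    """Type-II discrete cosine transform (unnormalized)."""
    n_len = len(x)
    return [sum(x[n] * math.cos(math.pi / n_len * (n + 0.5) * k)
                for n in range(n_len))
            for k in range(n_len)]

window = [1.0, 1.0, 1.0, 1.0]
coeffs = dct2(window)   # a constant window puts all energy in the DC term
```

The appeal for ultrasound, as the abstract suggests, is that fixed cosine bases respond to boundaries and textural patterns while being less sensitive to speckle-like noise than raw intensities.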

16 pages, 7408 KiB  
Article
Efficient Staining-Invariant Nuclei Segmentation Approach Using Self-Supervised Deep Contrastive Network
by Mohamed Abdel-Nasser, Vivek Kumar Singh and Ehab Mahmoud Mohamed
Diagnostics 2022, 12(12), 3024; https://doi.org/10.3390/diagnostics12123024 - 02 Dec 2022
Abstract
Existing nuclei segmentation methods face challenges with hematoxylin and eosin (H&E) whole slide imaging (WSI) due to the variations in staining methods and in nuclei shapes and sizes. Most existing approaches require a stain normalization step that may cause loss of source information, and they fail to handle the inter-scanner feature instability problem. To mitigate these issues, this article proposes an efficient staining-invariant nuclei segmentation method based on self-supervised contrastive learning and an effective weighted hybrid dilated convolution (WHDC) block. In particular, we propose a staining-invariant encoder (SIE) that includes convolution and transformer blocks. We also propose the WHDC block, allowing the network to learn multi-scale nuclei-relevant features to handle the variation in the sizes and shapes of nuclei. The SIE network is trained on five unlabeled WSI datasets using self-supervised contrastive learning and then used as a backbone for the downstream nuclei segmentation network. Our method outperforms existing approaches on challenging multiple WSI datasets without stain color normalization.
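The self-supervised contrastive pretraining described here typically optimizes an InfoNCE/NT-Xent-style objective: pull an augmented "positive" view toward its anchor and push "negative" views away. The sketch below uses toy 2-D vectors and a hypothetical temperature, not the paper's encoder outputs or loss settings:

```python
import math

# InfoNCE-style contrastive loss on toy embeddings: lower loss when the
# positive view is similar to the anchor and negatives are dissimilar.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def info_nce(anchor, positive, negatives, tau=0.5):
    """-log of the softmax probability assigned to the positive pair."""
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    exps = [math.exp(s / tau) for s in sims]
    return -math.log(exps[0] / sum(exps))

anchor, positive = [1.0, 0.0], [0.9, 0.1]
negatives = [[-1.0, 0.0], [0.0, 1.0]]
loss_aligned = info_nce(anchor, positive, negatives)
loss_misaligned = info_nce(anchor, [-1.0, 0.1], negatives)  # positive points away
```

For staining invariance, the two "views" would be differently stained or color-perturbed crops of the same tissue, so the encoder learns features that ignore stain variation.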

27 pages, 6371 KiB  
Article
Identifying Severity Grading of Knee Osteoarthritis from X-ray Images Using an Efficient Mixture of Deep Learning and Machine Learning Models
by Sozan Mohammed Ahmed and Ramadhan J. Mstafa
Diagnostics 2022, 12(12), 2939; https://doi.org/10.3390/diagnostics12122939 - 24 Nov 2022
Abstract
Recently, many diseases have negatively impacted people's lifestyles. Among these, knee osteoarthritis (OA) has been regarded as the primary cause of activity restriction and impairment, particularly in older people. Therefore, quick, accurate, and low-cost computer-based tools for the early prediction of knee OA are urgently needed. In this paper, as part of addressing this issue, we developed a new method to efficiently diagnose and classify knee osteoarthritis severity from X-ray images, in both binary and multiclass settings, in order to study the impact of different class configurations, which has not yet been addressed in previous studies. This will provide physicians with a variety of deployment options in the future. Our proposed models are divided into two frameworks based on applying pre-trained convolutional neural networks (CNNs) for feature extraction and fine-tuning the pre-trained CNN using the transfer learning (TL) method; in addition, a traditional machine learning (ML) classifier is used to exploit the enriched feature space for better knee OA classification performance. In the first framework, we developed a five-class model using a pre-trained CNN for feature extraction, principal component analysis (PCA) for dimensionality reduction, and a support vector machine (SVM) for classification. In the second framework, with a few changes to the steps of the first, TL was used to fine-tune the pre-trained CNN from the first framework to fit two-class, three-class, and four-class models. The proposed models are evaluated on X-ray data, and their performance is compared with existing state-of-the-art models. The conducted experimental analysis demonstrates the efficacy of the proposed approach in improving classification accuracy in both the multiclass and binary settings of the OA case study. Nonetheless, the empirical results revealed that the fewer class labels used, the better the performance achieved, with the binary class labels outperforming all others, reaching a 90.8% accuracy rate. Furthermore, the proposed models demonstrated their value for early classification in the first stage of the disease, helping to reduce its progression and improve people's quality of life.

23 pages, 6937 KiB  
Article
HRU-Net: A Transfer Learning Method for Carotid Artery Plaque Segmentation in Ultrasound Images
by Yanchao Yuan, Cancheng Li, Ke Zhang, Yang Hua and Jicong Zhang
Diagnostics 2022, 12(11), 2852; https://doi.org/10.3390/diagnostics12112852 - 17 Nov 2022
Abstract
Carotid artery stenotic plaque segmentation in ultrasound images is a crucial means for the analysis of plaque components and vulnerability. However, segmentation of severely stenotic plaques remains a challenging task because of inter-plaque and intra-plaque heterogeneity and the obscure boundaries of plaques. In this paper, we propose an automated HRU-Net transfer learning method for segmenting carotid plaques using limited images. HRU-Net is based on the U-Net encoder–decoder paradigm, and cross-domain knowledge is transferred for plaque segmentation by fine-tuning a pretrained ResNet-50. A cropped-blood-vessel image augmentation is customized for the plaque position constraint during training only. In addition, hybrid atrous convolutions (HACs) are designed to derive diverse long-range dependencies for refined plaque segmentation; they are used on high-level semantic layers to exploit implicit discriminative features. The experiments are performed on 115 images. First, 10-fold cross-validation using 40 images with severely stenotic plaques shows that the proposed method outperforms several state-of-the-art CNN-based methods on Dice, IoU, Acc, and modified Hausdorff distance (MHD) metrics; the improvements on the Dice and MHD metrics are statistically significant (p < 0.05). Furthermore, our HRU-Net transfer learning method shows fine generalization performance on 75 new images with varying degrees of plaque stenosis, and it may be used clinically as an alternative for automatic segmentation of noisy plaques in carotid ultrasound images.
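The atrous (dilated) convolutions in the HACs described above space their kernel taps `dilation` samples apart, enlarging the receptive field without adding weights. A 1-D sketch with a toy signal and kernel (the paper's HACs are 2-D and learned):

```python
# 1-D dilated (atrous) convolution, "valid" mode: taps are spaced
# `dilation` samples apart, so the same 3-tap kernel can span a wider
# context. Signal and kernel values are illustrative.

def dilated_conv1d(signal, kernel, dilation):
    span = (len(kernel) - 1) * dilation   # input samples covered by the kernel
    out = []
    for i in range(len(signal) - span):
        out.append(sum(kernel[j] * signal[i + j * dilation]
                       for j in range(len(kernel))))
    return out

signal = [1, 2, 3, 4, 5, 6]
plain  = dilated_conv1d(signal, [1, 1, 1], dilation=1)  # spans 3 samples
atrous = dilated_conv1d(signal, [1, 1, 1], dilation=2)  # spans 5 samples
```

Mixing several dilation rates, as "hybrid" atrous designs do, lets a network see both fine boundary detail and long-range context, which is the motivation given above for refining fuzzy plaque boundaries.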

17 pages, 3720 KiB  
Article
BCNet: A Deep Learning Computer-Aided Diagnosis Framework for Human Peripheral Blood Cell Identification
by Channabasava Chola, Abdullah Y. Muaad, Md Belal Bin Heyat, J. V. Bibal Benifa, Wadeea R. Naji, K. Hemachandran, Noha F. Mahmoud, Nagwan Abdel Samee, Mugahed A. Al-Antari, Yasser M. Kadah and Tae-Seong Kim
Diagnostics 2022, 12(11), 2815; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12112815 - 16 Nov 2022
Cited by 17 | Viewed by 7186
Abstract
Blood cells carry important information that can be used to represent a person’s current state of health. Identifying different types of blood cells in a timely and precise manner is essential to reducing the infection risks that people face on a daily basis. BCNet is an artificial intelligence (AI)-based deep learning (DL) framework that exploits transfer learning with a convolutional neural network to rapidly and automatically identify blood cells in an eight-class identification scenario: basophil, eosinophil, erythroblast, immature granulocyte, lymphocyte, monocyte, neutrophil, and platelet. To establish the dependability and viability of BCNet, exhaustive experiments consisting of five-fold cross-validation tests were carried out. Using the transfer learning strategy, we conducted in-depth experiments on the proposed BCNet architecture and tested it with three optimizers: ADAM, RMSprop (RMSP), and stochastic gradient descent (SGD). Meanwhile, the performance of BCNet was directly compared on the same dataset with the state-of-the-art deep learning models DenseNet, ResNet, Inception, and MobileNet. Among the optimizers, the BCNet framework demonstrated better classification performance with ADAM and RMSP; the best evaluation performance was achieved with RMSP, at 98.51% accuracy and a 96.24% F1-score. Compared with the baseline model, BCNet improved the prediction accuracy by 1.94%, 3.33%, and 1.65% with the ADAM, RMSP, and SGD optimizers, respectively. The proposed BCNet model also outperformed DenseNet, ResNet, Inception, and MobileNet in the testing time for a single blood cell image, by 10.98, 4.26, 2.03, and 0.21 ms, respectively. In comparison with the most recent deep learning models, BCNet generates encouraging outcomes. Such a recognition rate, improving the detection performance for blood cells, is essential for the advancement of healthcare facilities.
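For reference, the accuracy and F1-score reported above are standard multi-class classification metrics. A minimal plain-Python sketch (not the authors' implementation) computing accuracy and macro-averaged F1 from hypothetical true and predicted label lists:

```python
def accuracy(y_true, y_pred):
    """Fraction of labels predicted correctly."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 scores (macro averaging)."""
    f1s = []
    for c in sorted(set(y_true) | set(y_pred)):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

# Hypothetical two-class example (labels are illustrative only):
y_true, y_pred = [0, 0, 1, 1], [0, 1, 1, 1]
print(accuracy(y_true, y_pred))  # 0.75
```

Macro averaging weights all eight cell classes equally, which matters when rare classes such as erythroblasts are much less frequent than neutrophils.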

12 pages, 3137 KiB  
Article
Automated Detection of Cervical Carotid Artery Calcifications in Cone Beam Computed Tomographic Images Using Deep Convolutional Neural Networks
by Maryam Ajami, Pavani Tripathi, Haibin Ling and Mina Mahdian
Diagnostics 2022, 12(10), 2537; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12102537 - 19 Oct 2022
Cited by 5 | Viewed by 2130
Abstract
The aim of this study was to determine whether a convolutional neural network (CNN) can be trained to automatically detect and localize cervical carotid artery calcifications (CACs) in CBCT. A total of 56 CBCT studies (15,257 axial slices) were utilized to train, validate, and test the deep learning model. The study comprised two steps. Step 1: localizing axial slices below the C2–C3 disc space; for this step, the openly available Inception V3 architecture, trained on the ImageNet dataset of real-world images, was retrained on 40 CBCT studies. Step 2: detecting CACs in the slices from Step 1; for this step, two methods were implemented. Method A: a segmentation neural network trained on small patches at random coordinates of the original axial slices. Method B: a segmentation neural network trained on two larger patches at fixed coordinates of the original axial slices, with an improved loss function to account for class imbalance. Our approach achieved 94.2% sensitivity and 96.5% specificity. The mean intersection over union metric for Method A was 76.26%, and Method B improved this metric to 82.51%. The proposed CNN model shows the feasibility of deep learning for the detection and localization of CACs in CBCT images.
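For reference, the intersection over union (IoU) used above, and the closely related Dice coefficient, both measure overlap between a predicted and a ground-truth segmentation mask. A minimal plain-Python sketch (not the study's code) over flattened binary masks:

```python
def iou(pred, target):
    """Intersection over union of two flattened binary masks (0/1 values)."""
    inter = sum(p & t for p, t in zip(pred, target))
    union = sum(p | t for p, t in zip(pred, target))
    return inter / union if union else 1.0

def dice(pred, target):
    """Dice coefficient; related to IoU by Dice = 2*IoU / (1 + IoU)."""
    inter = sum(p & t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 2 * inter / total if total else 1.0

# Illustrative 4-pixel masks: overlap of 1 pixel, union of 3 pixels.
print(dice([1, 1, 0, 0], [0, 1, 1, 0]))  # 0.5
```

Because Dice weights the intersection twice, it is less harsh than IoU on small boundary errors, which is why segmentation papers often report both.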

Review


13 pages, 3339 KiB  
Review
The Potential Role of Artificial Intelligence in Lung Cancer Screening Using Low-Dose Computed Tomography
by Philippe A. Grenier, Anne Laure Brun and François Mellot
Diagnostics 2022, 12(10), 2435; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12102435 - 8 Oct 2022
Cited by 11 | Viewed by 2878
Abstract
Two large randomized controlled trials of low-dose CT (LDCT)-based lung cancer screening (LCS) in high-risk smoker populations have shown a reduction in the number of lung cancer deaths in the screening group compared with a control group. Although various countries are currently considering the implementation of LCS programs, recurring doubts and fears persist about potentially high false-positive rates, cost-effectiveness, and the availability of radiologists for scan interpretation. Artificial intelligence (AI) can potentially increase the efficiency of LCS. The objective of this article is to review the performance of AI algorithms developed for the different tasks that make up the interpretation of LCS CT scans, and to assess how these algorithms may be used as a second reader. Despite the reduction in lung cancer mortality due to LCS with LDCT, many smokers die of comorbid smoking-related diseases. Identifying CT features associated with these comorbidities could increase the value of screening with minimal impact on LCS programs. Because these smoking-related conditions are not systematically assessed in current LCS programs, AI can identify individuals with evidence of previously undiagnosed cardiovascular disease, emphysema, or osteoporosis, offering an opportunity for treatment and prevention.
