Artificial Intelligence in Clinical Medical Imaging Analysis

A special issue of Diagnostics (ISSN 2075-4418). This special issue belongs to the section "Machine Learning and Artificial Intelligence in Diagnostics".

Deadline for manuscript submissions: closed (30 April 2023) | Viewed by 110209

Special Issue Editor

Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei 11031, Taiwan
Interests: machine learning; deep learning; artificial intelligence; medicine; meta-analysis; clinical decision support system; evidence-based medicine; pharmacoepidemiology; cancer; observational study; retrospective study

Special Issue Information

Dear Colleagues,

Artificial intelligence (AI) has generated tremendous global attention in recent years and has emerged as a major frontier in healthcare. It has been widely applied to medical imaging analysis and is revolutionizing current procedures for disease diagnosis and treatment. Indeed, AI has shown excellent performance in identifying imaging abnormalities that are not easily amenable to human identification, and previous evidence has shown that AI models can outperform radiologists in disease screening. This performance demonstrates that AI can improve work efficiency and enable radiologists to be more productive. AI-based tools are therefore poised to become mainstream for typical medical imaging analysis tasks such as diagnosis, segmentation, and classification. The aim of this Special Issue is to assess the diagnostic accuracy of AI techniques (deep learning/machine learning) for disease diagnosis and detection using medical imaging analysis.

Dr. Md Mohaimenul Islam
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Diagnostics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Artificial intelligence
  • Deep learning
  • Machine learning
  • Medical imaging
  • Disease diagnosis
  • Early detection

Published Papers (47 papers)


Research


13 pages, 5457 KiB  
Article
Computer-Based Diagnosis of Celiac Disease by Quantitative Processing of Duodenal Endoscopy Images
by Adriana Molder, Daniel Vasile Balaban, Cristian-Constantin Molder, Mariana Jinga and Antonin Robin
Diagnostics 2023, 13(17), 2780; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics13172780 - 28 Aug 2023
Cited by 3 | Viewed by 1138
Abstract
Celiac disease (CD) is a lifelong chronic autoimmune systemic disease that primarily affects the small bowel of genetically susceptible individuals. The diagnosis of adult CD currently relies on specific serology and the histological assessment of duodenal mucosa on samples taken by upper digestive endoscopy. Because of several pitfalls associated with duodenal biopsy sampling and histopathology, and considering the pediatric no-biopsy diagnostic criteria, a biopsy-avoiding strategy has also been proposed for adult CD diagnosis. Several endoscopic changes have been reported in the duodenum of CD patients as markers of villous atrophy (VA), with good correlation with serology. In this setting, an opportunity lies in the automated detection of these endoscopic markers during routine endoscopy examinations, as potential case-finding of unsuspected CD. We collected duodenal endoscopy images from 18 newly diagnosed CD patients and 16 non-CD controls and applied machine learning (ML) and deep learning (DL) algorithms to image patches for the detection of VA. Using histology as the standard, high diagnostic accuracy was seen for all algorithms tested, with the layered convolutional neural network (CNN) having the best performance, with 99.67% sensitivity and 98.07% positive predictive value. In this pilot study, we provide an accurate algorithm for the automated detection of mucosal changes associated with VA in CD patients, compared with normal-appearing, non-atrophic mucosa in non-CD controls, using histology as a reference.
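The authors' layered CNN is not specified in the abstract; as a rough illustration of the patch-level classification setup it describes, here is a minimal PyTorch sketch (the architecture, patch size, and labels are assumptions for illustration, not the paper's model):

```python
import torch
import torch.nn as nn

# A small patch-level CNN: input is a 64x64 RGB endoscopy patch,
# output is a logit for "villous atrophy" vs. "normal mucosa".
class PatchCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

model = PatchCNN()
patches = torch.randn(16, 3, 64, 64)              # a batch of image patches
probs = torch.sigmoid(model(patches)).squeeze(1)  # per-patch VA probability
print(probs.shape)  # torch.Size([16])
```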

28 pages, 1004 KiB  
Article
A Deep Learning Framework for the Characterization of Thyroid Nodules from Ultrasound Images Using Improved Inception Network and Multi-Level Transfer Learning
by O. A. Ajilisa, V. P. Jagathy Raj and M. K. Sabu
Diagnostics 2023, 13(14), 2463; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics13142463 - 24 Jul 2023
Cited by 1 | Viewed by 1281
Abstract
In the past few years, deep learning has gained increasingly widespread attention and has been applied to diagnosing benign and malignant thyroid nodules. It is difficult to acquire sufficient medical images, however, and the resulting scarcity of data hinders the development of efficient deep-learning models. In this paper, we developed a deep-learning-based characterization framework to differentiate malignant from benign nodules in thyroid ultrasound images. The approach improves the recognition accuracy of the Inception network by combining squeeze-and-excitation networks with the Inception modules. We also integrated multi-level transfer learning, using breast ultrasound images as a bridge dataset; this addresses the domain differences between natural images and ultrasound images that arise during transfer learning. This paper aimed to investigate how the entire framework could help radiologists improve diagnostic performance and avoid unnecessary fine-needle aspiration. The proposed approach based on multi-level transfer learning and improved Inception blocks achieved higher precision (0.9057 for the benign class and 0.9667 for the malignant class), recall (0.9796 for the benign class and 0.8529 for the malignant class), and F1-score (0.9412 for the benign class and 0.9062 for the malignant class). It also obtained an AUC value of 0.9537, higher than that of the single-level transfer learning method. The experimental results show that this model achieves classification accuracy comparable to that of experienced radiologists, saving time and effort and offering potential clinical application value.
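The abstract's key architectural idea, inserting squeeze-and-excitation (SE) gating into Inception modules, can be illustrated with a generic SE block. This sketch shows the standard SE mechanism only, not the authors' exact configuration:

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: reweight feature channels by a learned gate."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)           # global spatial average
        self.excite = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                                # per-channel gate in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.squeeze(x).view(b, c)      # (B, C) channel descriptors
        w = self.excite(w).view(b, c, 1, 1)
        return x * w                        # rescale each channel map

# Example: gate the output of an inception-style branch
feats = torch.randn(2, 64, 32, 32)
print(SEBlock(64)(feats).shape)  # torch.Size([2, 64, 32, 32])
```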

23 pages, 4945 KiB  
Article
An Improved Multimodal Medical Image Fusion Approach Using Intuitionistic Fuzzy Set and Intuitionistic Fuzzy Cross-Correlation
by Maruturi Haribabu and Velmathi Guruviah
Diagnostics 2023, 13(14), 2330; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics13142330 - 10 Jul 2023
Viewed by 1226
Abstract
Multimodal medical image fusion (MMIF) is the process of merging different modalities of medical images into a single output image (fused image) that carries a significant quantity of information to improve clinical applicability. It enables a better diagnosis and makes the diagnostic process easier. In medical image fusion (MIF), an intuitionistic fuzzy set (IFS) plays a role in enhancing the quality of the image, which is useful for medical diagnosis. In this article, a new approach to intuitionistic fuzzy set-based MMIF is proposed. Initially, the input medical images are fuzzified to create intuitionistic fuzzy images (IFIs). Intuitionistic fuzzy entropy plays a major role in calculating the optimal values for three degrees, namely membership, non-membership, and hesitation. After that, the IFIs are decomposed into small blocks and the fusion rule is applied. Finally, the enhanced fused image is obtained by the defuzzification process. The proposed method is tested on various medical image datasets in terms of subjective and objective analysis. The proposed algorithm provides a better-quality fused image and is superior to other existing methods such as PCA, DWT-PCA, contourlet transform (CONT), DWT with fuzzy logic, Sugeno's intuitionistic fuzzy set, Chaira's intuitionistic fuzzy set, and PC-NSCT. The fused image is evaluated with various performance metrics such as average pixel intensity (API), standard deviation (SD), average gradient (AG), spatial frequency (SF), modified spatial frequency (MSF), cross-correlation (CC), mutual information (MI), and fusion symmetry (FS).
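As a simplified illustration of the fuzzify-decompose-fuse-defuzzify flow described above, the following NumPy sketch selects blocks by local contrast rather than by intuitionistic fuzzy entropy; that substitution, and the block size, are deliberate simplifications, not the paper's rule:

```python
import numpy as np

def fuzzify(img: np.ndarray) -> np.ndarray:
    """Map intensities to membership degrees in [0, 1] (min-max fuzzification)."""
    img = img.astype(np.float64)
    return (img - img.min()) / (img.max() - img.min() + 1e-12)

def fuse_blocks(a: np.ndarray, b: np.ndarray, block: int = 8) -> np.ndarray:
    """Block-wise fusion: keep the block with higher local contrast (std)."""
    mu_a, mu_b = fuzzify(a), fuzzify(b)
    fused = np.zeros_like(mu_a)
    h, w = mu_a.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            pa = mu_a[i:i + block, j:j + block]
            pb = mu_b[i:i + block, j:j + block]
            fused[i:i + block, j:j + block] = pa if pa.std() >= pb.std() else pb
    return fused

def defuzzify(mu: np.ndarray, out_max: int = 255) -> np.ndarray:
    """Map membership degrees back to the display intensity range."""
    return np.round(mu * out_max).astype(np.uint8)

# Two co-registered modalities of the same anatomy (random stand-ins here)
ct, mri = np.random.rand(64, 64), np.random.rand(64, 64)
fused = defuzzify(fuse_blocks(ct, mri))
```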

12 pages, 1252 KiB  
Article
Artificial Intelligence in Dementia: A Bibliometric Study
by Chieh-Chen Wu, Chun-Hsien Su, Md. Mohaimenul Islam and Mao-Hung Liao
Diagnostics 2023, 13(12), 2109; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics13122109 - 19 Jun 2023
Cited by 1 | Viewed by 1536
Abstract
The applications of artificial intelligence (AI) in dementia research have garnered significant attention, prompting the planning of various research endeavors in current and future studies. The objective of this study is to provide a comprehensive overview of the research landscape regarding AI and dementia within scholarly publications and to suggest further studies for this emerging research field. A search was conducted in the Web of Science database to collect all relevant and highly cited articles on AI-related dementia research published in English until 16 May 2023. Utilizing bibliometric indicators, a search strategy was developed to assess the eligibility of titles, with abstracts and full texts consulted as necessary. The Bibliometrix tool, a statistical package in R, was used to produce and visualize networks depicting the co-occurrence of authors, research institutions, countries, citations, and keywords. We obtained a total of 1094 relevant articles published between 1997 and 2023. The number of annual publications demonstrated an increasing trend over the past 27 years. Journal of Alzheimer's Disease (39/1094, 3.56%), Frontiers in Aging Neuroscience (38/1094, 3.47%), and Scientific Reports (26/1094, 2.37%) were the most common journals for this domain. The United States (283/1094, 25.86%), China (222/1094, 20.29%), India (150/1094, 13.71%), and England (96/1094, 8.77%) were the most productive countries of origin. In terms of institutions, Boston University, Columbia University, and the University of Granada demonstrated the highest productivity. As for author contributions, Gorriz JM, Ramirez J, and Salas-Gonzalez D were the most active researchers. While the initial period saw a relatively low number of articles focusing on AI applications for dementia, there has been a noticeable upsurge in research within this domain in recent years (2018–2023). The present analysis sheds light on the key contributors in terms of researchers, institutions, countries, and trending topics that have propelled the advancement of AI in dementia research. These findings collectively underscore that integrating AI with conventional treatment approaches enhances the effectiveness of dementia diagnosis, prediction, classification, and monitoring of treatment progress.

16 pages, 12319 KiB  
Article
Deep Learning-Based Recognition of Cervical Squamous Interepithelial Lesions
by Huimin An, Liya Ding, Mengyuan Ma, Aihua Huang, Yi Gan, Danli Sheng, Zhinong Jiang and Xin Zhang
Diagnostics 2023, 13(10), 1720; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics13101720 - 12 May 2023
Cited by 1 | Viewed by 1273
Abstract
Cervical squamous intraepithelial lesions (SILs) are precursor lesions of cervical cancer, and their accurate diagnosis enables patients to be treated before malignancy manifests. However, the identification of SILs is usually laborious and has low diagnostic consistency due to the high similarity of pathological SIL images. Although artificial intelligence (AI), especially deep learning algorithms, has drawn a lot of attention for its good performance in cervical cytology tasks, the use of AI for cervical histology is still in its early stages. The feature extraction and representation capabilities of existing models, and their use of p16 immunohistochemistry (IHC), are inadequate. Therefore, in this study, we first designed a squamous epithelium segmentation algorithm and assigned the corresponding labels. Second, p16-positive areas of IHC slides were extracted with Whole Image Net (WI-Net), followed by mapping the p16-positive areas back to the H&E slides and generating a p16-positive mask for training. Finally, the p16-positive areas were input into Swin-B and ResNet-50 to classify the SILs. The dataset comprised 6171 patches from 111 patients; patches from 80% of the 90 patients were used for the training set. The accuracy of the proposed Swin-B method for high-grade squamous intraepithelial lesions (HSIL) was 0.914 [0.889–0.928]. The ResNet-50 model for HSIL achieved an area under the receiver operating characteristic curve (AUC) of 0.935 [0.921–0.946] at the patch level, and the accuracy, sensitivity, and specificity were 0.845, 0.922, and 0.829, respectively. Therefore, our model can accurately identify HSIL, assisting the pathologist in solving actual diagnostic issues and even directing the follow-up treatment of patients.

24 pages, 11309 KiB  
Article
Mobile-HR: An Ophthalmologic-Based Classification System for Diagnosis of Hypertensive Retinopathy Using Optimized MobileNet Architecture
by Muhammad Zaheer Sajid, Imran Qureshi, Qaisar Abbas, Mubarak Albathan, Kashif Shaheed, Ayman Youssef, Sehrish Ferdous and Ayyaz Hussain
Diagnostics 2023, 13(8), 1439; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics13081439 - 17 Apr 2023
Cited by 6 | Viewed by 2367
Abstract
Hypertensive retinopathy (HR) is a serious eye disease in which high blood pressure causes changes to the retinal arteries. Cotton-wool patches, retinal hemorrhage, and retinal artery constriction are characteristic lesions of HR. An ophthalmologist often diagnoses eye-related diseases by analyzing fundus images to identify the stages and symptoms of HR, and early detection of HR can significantly decrease the likelihood of vision loss. In the past, a few computer-aided diagnosis (CADx) systems were developed to automatically detect HR using machine learning (ML) and deep learning (DL) techniques. Compared with ML methods, the DL techniques used by these CADx systems require hyperparameter tuning, domain expert knowledge, a huge training dataset, and a high learning rate. Such systems are good at automating the extraction of complex features but suffer from class imbalance and overfitting, and state-of-the-art efforts have pursued performance gains while overlooking the small size of available HR datasets, high computational complexity, and the lack of lightweight feature descriptors. In this study, a pretrained transfer learning (TL)-based MobileNet architecture is developed by integrating dense blocks to optimize the network for the diagnosis of HR. We developed a lightweight HR diagnosis system, known as Mobile-HR, by integrating a pretrained model and dense blocks, and applied a data augmentation technique to increase the size of the training and test datasets. The outcomes of the experiments show that the suggested approach outperformed alternatives in many cases. The Mobile-HR system achieved an accuracy of 99% and an F1 score of 0.99 on different datasets, and the results were verified by an expert ophthalmologist. These results indicate that the Mobile-HR CADx model produces positive outcomes and outperforms state-of-the-art HR systems in terms of accuracy.
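The pretrained-MobileNet-plus-augmentation setup can be sketched with torchvision. Note that the dense blocks that define Mobile-HR are omitted here, so this is only the vanilla transfer learning baseline the paper builds on:

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# Augmentation pipeline to enlarge the effective training set
# (applied inside a Dataset; the choice of transforms is illustrative)
train_tf = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Pretrained MobileNetV2 backbone with a new binary head (HR vs. normal)
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False                     # keep ImageNet features frozen
model.classifier[1] = nn.Linear(model.last_channel, 2)

x = torch.randn(4, 3, 224, 224)                 # a batch of fundus images
logits = model(x)                               # shape: (4, 2)
```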

11 pages, 1254 KiB  
Article
Validation of an Automated Cardiothoracic Ratio Calculation for Hemodialysis Patients
by Hsin-Hsu Chou, Jin-Yi Lin, Guan-Ting Shen and Chih-Yuan Huang
Diagnostics 2023, 13(8), 1376; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics13081376 - 09 Apr 2023
Viewed by 1632
Abstract
Background: Cardiomegaly is associated with poor clinical outcomes and is assessed by routine monitoring of the cardiothoracic ratio (CTR) from chest X-rays (CXRs). Judgment of the margins of the heart and lungs is subjective and may vary between different operators. Methods: Patients aged >19 years in our hemodialysis unit from March 2021 to October 2021 were enrolled. The borders of the lungs and heart on CXRs were labeled by two nephrologists as the ground truth (nephrologist-defined mask). We implemented AlbuNet-34, a U-Net variant, to predict the heart and lung margins from CXR images and to automatically calculate the CTRs. Results: The coefficient of determination (R2) obtained using the neural network model was 0.96, compared with an R2 of 0.90 obtained by nurse practitioners. The mean difference between the CTRs calculated by the nurse practitioners and senior nephrologists was 1.52 ± 1.46%, and that between the neural network model and the nephrologists was 0.83 ± 0.87% (p < 0.001). The mean CTR calculation duration was 85 s using the manual method and less than 2 s using the automated method (p < 0.001). Conclusions: Our study confirmed the validity of automated CTR calculations. By achieving high accuracy and saving time, our model can be implemented in clinical practice.
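The CTR itself is a simple ratio once the heart and lung masks are available. A plausible NumPy computation from binary segmentation masks (the mask shapes and the exact width definition are assumptions, not the paper's implementation) could look like this:

```python
import numpy as np

def cardiothoracic_ratio(heart: np.ndarray, lungs: np.ndarray) -> float:
    """CTR = widest horizontal heart diameter / widest thoracic diameter.
    Both inputs are binary masks (H, W) from the segmentation model."""
    def max_width(mask: np.ndarray) -> int:
        rows = np.where(mask.any(axis=1))[0]
        widths = []
        for r in rows:
            cols = np.where(mask[r])[0]
            widths.append(cols.max() - cols.min() + 1)  # left-to-right extent
        return max(widths)
    return max_width(heart) / max_width(lungs)

# Toy rectangular masks standing in for model output
heart = np.zeros((512, 512), bool); heart[250:350, 200:330] = True
lungs = np.zeros((512, 512), bool); lungs[100:400, 120:420] = True
print(round(cardiothoracic_ratio(heart, lungs), 3))  # 0.433
```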

13 pages, 2215 KiB  
Article
Deep Learning-Based Algorithm for Automatic Detection of Pulmonary Embolism in Chest CT Angiograms
by Philippe A. Grenier, Angela Ayobi, Sarah Quenet, Maxime Tassy, Michael Marx, Daniel S. Chow, Brent D. Weinberg, Peter D. Chang and Yasmina Chaibi
Diagnostics 2023, 13(7), 1324; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics13071324 - 03 Apr 2023
Cited by 9 | Viewed by 2186
Abstract
Purpose: Since the prompt recognition of acute pulmonary embolism (PE) and the immediate initiation of treatment can significantly reduce the risk of death, we developed a deep learning (DL)-based application designed to automatically detect PEs on chest computed tomography angiograms (CTAs) and alert radiologists for an urgent interpretation. Convolutional neural networks (CNNs) were used to design the application, and the associated algorithm used a hybrid 3D/2D UNet topology. The training phase was performed on datasets adequately distributed in terms of vendors, patient age, slice thickness, and kVp. The objective of this study was to validate the performance of the algorithm in detecting suspected PEs on CTAs. Methods: The validation dataset included 387 anonymized real-world chest CTAs from multiple clinical sites (228 U.S. cities). The data were acquired on 41 different scanner models from five different scanner makers. The ground truth (presence or absence of PE on CTA images) was established by three independent U.S. board-certified radiologists. Results: The algorithm correctly identified 170 of 186 exams positive for PE (sensitivity 91.4% [95% CI: 86.4–95.0%]) and 184 of 201 exams negative for PE (specificity 91.5% [95% CI: 86.8–95.0%]), leading to an accuracy of 91.5%. False negatives were either chronic PEs or PEs at the limit of the subsegmental arteries, close to partial volume effect artifacts. Most false positives were due to contrast agent-related fluid artifacts, pulmonary veins, and lymph nodes. Conclusions: The DL-based algorithm has a high degree of diagnostic accuracy, with balanced sensitivity and specificity, for the detection of PE on CTAs.
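The reported sensitivity and specificity follow directly from the stated counts. A short sketch recomputes them with Wilson score intervals (the abstract does not state which interval method was used, so the bounds only approximately match the published ones):

```python
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96):
    """Point estimate plus Wilson score 95% interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return p, center - half, center + half

# Counts reported in the abstract: 170/186 PE-positive, 184/201 PE-negative exams
for name, k, n in [("sensitivity", 170, 186), ("specificity", 184, 201)]:
    p, lo, hi = wilson_ci(k, n)
    print(f"{name}: {p:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```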

12 pages, 2390 KiB  
Article
DTBV: A Deep Transfer-Based Bone Cancer Diagnosis System Using VGG16 Feature Extraction
by G. Suganeshwari, R. Balakumar, Kalimuthu Karuppanan, Sahaya Beni Prathiba, Sudha Anbalagan and Gunasekaran Raja
Diagnostics 2023, 13(4), 757; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics13040757 - 16 Feb 2023
Cited by 1 | Viewed by 2349
Abstract
Among the many different types of cancer, bone cancer is the most lethal and least prevalent, and more cases are reported each year. Early diagnosis of bone cancer is crucial, since it helps limit the spread of malignant cells and reduce mortality, but manual detection is cumbersome and requires specialized knowledge. A deep transfer-based bone cancer diagnosis (DTBV) system using VGG16 feature extraction is proposed to address these issues. The proposed DTBV system uses a transfer learning (TL) approach in which a pre-trained convolutional neural network (CNN) model is used to extract features from the pre-processed input image, and a support vector machine (SVM) model is trained on these features to distinguish between cancerous and healthy bone. A CNN is applied to the image datasets because deeper feature-extraction layers yield more accurate image recognition. In the proposed DTBV system, the VGG16 model extracts the features from the input X-ray image. A mutual information statistic that measures the dependency between the different features is then used to select the best features; this is the first time this method has been used for detecting bone cancer. Once the best features are selected, they are fed into the SVM classifier, which classifies the given testing dataset into malignant and benign categories. A comprehensive performance evaluation demonstrated that the proposed DTBV system is highly efficient in detecting bone cancer, with an accuracy of 93.9%, which is higher than that of other existing systems.
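The feature-selection-plus-SVM stage maps naturally onto scikit-learn. In this sketch, random vectors stand in for the VGG16 deep features, and the feature dimension and k are illustrative values, not the paper's settings:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-ins for VGG16 deep features extracted from each X-ray
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 512))      # 200 images x 512 features
y = rng.integers(0, 2, size=200)     # 0 = healthy, 1 = cancerous

clf = make_pipeline(
    StandardScaler(),
    SelectKBest(mutual_info_classif, k=64),  # keep features most informative of y
    SVC(kernel="rbf"),
)
print(cross_val_score(clf, X, y, cv=5).mean())
```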

12 pages, 2168 KiB  
Article
Artificial Intelligence of Object Detection in Skeletal Scintigraphy for Automatic Detection and Annotation of Bone Metastases
by Chiung-Wei Liao, Te-Chun Hsieh, Yung-Chi Lai, Yu-Ju Hsu, Zong-Kai Hsu, Pak-Ki Chan and Chia-Hung Kao
Diagnostics 2023, 13(4), 685; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics13040685 - 12 Feb 2023
Cited by 2 | Viewed by 1432
Abstract
Background: When cancer has metastasized to bone, doctors must identify the site of the metastases for treatment, and in radiation therapy, damage to healthy areas, or missing areas requiring treatment, should be avoided. It is therefore necessary to locate the precise bone metastasis area. The bone scan is a commonly applied diagnostic tool for this purpose, but its accuracy is limited by the nonspecific character of radiopharmaceutical accumulation. This study evaluated object detection techniques to improve the efficacy of bone metastasis detection on bone scans. Methods: We retrospectively examined the data of 920 patients, aged 23 to 95 years, who underwent bone scans between May 2009 and December 2019. After reviewing the image reports written by physicians, nursing staff members annotated the bone metastasis sites as ground truths for training. Each set of bone scans contained anterior and posterior images with resolutions of 1024 × 256 pixels, and the images were examined using an object detection algorithm. Results: The optimal Dice similarity coefficient (DSC) in our study was 0.6640, which differs by only 0.04 from the optimal DSC of different physicians (0.7040). Conclusions: Object detection can help physicians notice bone metastases efficiently, decrease physician workload, and improve patient care.
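The DSC used to score the detections is straightforward to compute from binary masks. A minimal version, with toy masks standing in for a model detection and a physician annotation:

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0  # both empty: perfect agreement

a = np.zeros((256, 1024), bool); a[100:140, 300:360] = True  # model detection
b = np.zeros((256, 1024), bool); b[110:150, 310:370] = True  # physician annotation
print(round(dice(a, b), 4))  # 0.625 for these toy masks
```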

20 pages, 3182 KiB  
Article
Feature Extraction Using a Residual Deep Convolutional Neural Network (ResNet-152) and Optimized Feature Dimension Reduction for MRI Brain Tumor Classification
by Suganya Athisayamani, Robert Singh Antonyswamy, Velliangiri Sarveshwaran, Meshari Almeshari, Yasser Alzamil and Vinayakumar Ravi
Diagnostics 2023, 13(4), 668; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics13040668 - 10 Feb 2023
Cited by 14 | Viewed by 2452
Abstract
Brain tumors are among the top causes of mortality worldwide. Today, biopsy is regarded as the cornerstone of cancer diagnosis, but it faces difficulties, including low sensitivity, hazards during the biopsy procedure, and a protracted waiting period for findings. In this context, developing non-invasive computational methods for identifying and treating brain cancers is crucial. The classification of tumors from MRI is crucial for making a variety of medical diagnoses, but MRI analysis typically requires much time, and the primary challenge is that brain tissues look very similar to one another. Numerous scientists have created techniques for identifying and categorizing tumors, but most eventually fall short due to their limitations. In that context, this work presents a novel way of classifying multiple types of brain tumors and introduces a segmentation algorithm known as Canny Mayfly. The enhanced chimpanzee optimization algorithm (EChOA) is used to select features by reducing the dimension of the retrieved features, and ResNet-152 with a softmax classifier is then used to perform the feature classification. The proposed method was implemented in Python and evaluated on the Figshare dataset using accuracy, specificity, and sensitivity, among other metrics. According to the final evaluation results, our proposed strategy outperformed existing methods, with an accuracy of 98.85%.

12 pages, 3486 KiB  
Article
EfficientNetV2 Based Ensemble Model for Quality Estimation of Diabetic Retinopathy Images from DeepDRiD
by Sudhakar Tummala, Venkata Sainath Gupta Thadikemalla, Seifedine Kadry, Mohamed Sharaf and Hafiz Tayyab Rauf
Diagnostics 2023, 13(4), 622; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics13040622 - 08 Feb 2023
Cited by 3 | Viewed by 2017
Abstract
Diabetic retinopathy (DR) is one of the major complications caused by diabetes and is usually identified from retinal fundus images. Screening for DR from digital fundus images can be time-consuming and error-prone for ophthalmologists. For efficient DR screening, good fundus image quality is essential, as it reduces diagnostic errors. Hence, in this work, an automated method for the quality estimation (QE) of digital fundus images is proposed, using an ensemble of recent state-of-the-art EfficientNetV2 deep neural network models. The ensemble method was cross-validated and tested on one of the largest openly available datasets, the Deep Diabetic Retinopathy Image Dataset (DeepDRiD). We obtained a test accuracy of 75% for QE, outperforming the existing methods on the DeepDRiD. Hence, the proposed ensemble method may be a potential tool for the automated QE of fundus images and could assist ophthalmologists.
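Ensembling of this kind typically amounts to averaging the members' class probabilities. A generic PyTorch sketch, with tiny linear models standing in for the EfficientNetV2 variants (the fusion rule is an assumption; the paper may combine members differently):

```python
import torch

def ensemble_predict(models, x: torch.Tensor) -> torch.Tensor:
    """Average softmax probabilities across ensemble members,
    then take the argmax as the final quality label."""
    probs = [torch.softmax(m(x), dim=1) for m in models]
    return torch.stack(probs).mean(dim=0).argmax(dim=1)

# Stand-in members; the paper ensembles EfficientNetV2 variants instead
members = [torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 2))
           for _ in range(3)]
batch = torch.randn(8, 3, 64, 64)
print(ensemble_predict(members, batch))  # tensor of 0/1 quality labels
```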

17 pages, 1314 KiB  
Article
A Hybrid System of Braden Scale and Machine Learning to Predict Hospital-Acquired Pressure Injuries (Bedsores): A Retrospective Observational Cohort Study
by Odai Y. Dweekat, Sarah S. Lam and Lindsay McGrath
Diagnostics 2023, 13(1), 31; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics13010031 - 22 Dec 2022
Cited by 5 | Viewed by 3210
Abstract
Background: The Braden Scale is commonly used to assess risk for Hospital-Acquired Pressure Injuries (HAPI). However, the volume of patients identified as being at risk stretches already limited resources, and caregivers are limited by the number of factors they can reasonably assess during patient care. In the last decade, machine learning techniques have been used to predict HAPI from related risk factors; nevertheless, none of these studies consider the change in patient status from admission until discharge. Objectives: To develop an integrated system of Braden and machine learning to predict HAPI and assist with resource allocation for early interventions. The proposed approach captures the change in patients' risk by assessing factors three times across hospitalization. Design: Retrospective observational cohort study. Setting(s): This research was conducted at ChristianaCare hospital in Delaware, United States. Participants: Patients discharged between May 2020 and February 2022. Patients with HAPI were identified from nursing documents (N = 15,889). Methods: A Support Vector Machine (SVM) was adopted to predict patients' risk of developing HAPI using multiple risk factors in addition to Braden. Multiple performance metrics were used to compare the results of the integrated system versus the Braden Scale alone. Results: The HAPI rate was 3%. The integrated system achieved better sensitivity (74.29 ± 1.23) and detection prevalence (24.27 ± 0.16) than the Braden Scale alone (sensitivity 66.90 ± 4.66; detection prevalence 41.96 ± 1.35). The most important risk factors for predicting HAPI were the Braden sub-factors, the overall Braden score, visiting the ICU during hospitalization, and the Glasgow Coma Score. Conclusions: The integrated system, which combines SVM with Braden, offers better performance than Braden alone and reduces the number of patients identified as at risk, allowing for better allocation of resources to high-risk patients and resulting in cost savings and better utilization of resources. Relevance to clinical practice: The developed model provides an automated system for predicting HAPI patients in real time and allows for ongoing intervention for patients identified as at risk. Moreover, the integrated system can be used to determine the number of nurses needed for early interventions. Reporting method: EQUATOR guidelines (TRIPOD) were adopted in this research to develop the prediction model. Patient or public contribution: This research was based on a secondary analysis of patients' Electronic Health Records; the dataset was de-identified, and patient identifiers were removed before processing and modeling.
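With a ~3% event rate, class weighting is one standard way to keep an SVM sensitive to the rare class. This sketch uses synthetic data with a comparable imbalance; the features, labels, and tuning are stand-ins, not the authors' cohort or model:

```python
import numpy as np
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic cohort with roughly a 3% positive rate, mirroring the HAPI rate
rng = np.random.default_rng(1)
n = 4000
X = rng.normal(size=(n, 12))                  # Braden sub-scores + other factors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=2.0, size=n) > 4.2).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = SVC(kernel="rbf", class_weight="balanced")  # reweight the rare HAPI class
clf.fit(X_tr, y_tr)
print("sensitivity:", round(recall_score(y_te, clf.predict(X_te)), 3))
```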

15 pages, 4673 KiB  
Article
Contextual Features and Information Bottleneck-Based Multi-Input Network for Breast Cancer Classification from Contrast-Enhanced Spectral Mammography
by Xinmeng Li, Jia Cui, Jingqi Song, Mingyu Jia, Zhenxing Zou, Guocheng Ding and Yuanjie Zheng
Diagnostics 2022, 12(12), 3133; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12123133 - 12 Dec 2022
Cited by 2 | Viewed by 1588
Abstract
In computer-aided diagnosis methods for breast cancer, deep learning has been shown to be an effective way to determine whether lesions are present in tissue. However, traditional methods only classify masses as benign or malignant, without considering the contextual features between them and their adjacent tissues, and for contrast-enhanced spectral mammography (CESM), existing studies have performed feature extraction on only a single image per breast. In this paper, we propose a multi-input deep learning network for automatic breast cancer classification. Specifically, we simultaneously input four images of each breast, carrying different feature information, into the network. We then process the feature maps in both horizontal and vertical directions, preserving the pixel-level contextual information within the neighborhood of the tumor during the pooling operation. Furthermore, we designed a novel loss function according to the information bottleneck theory to optimize our multi-input network and ensure that the common information in the multiple input images can be fully exploited. Our experiments on 488 images (256 benign and 232 malignant) from 122 patients show that the method's accuracy, precision, sensitivity, specificity, and F1-score are 0.8806, 0.8803, 0.8810, 0.8801, and 0.8806, respectively. The qualitative, quantitative, and ablation results show that our method significantly improves the accuracy of breast cancer classification and reduces the false positive rate of diagnosis. It can reduce misdiagnosis rates and unnecessary biopsies, helping doctors determine accurate clinical diagnoses of breast cancer from multiple CESM images.

14 pages, 1794 KiB  
Article
Predicting IDH Mutation Status in Low-Grade Gliomas Based on Optimal Radiomic Features Combined with Multi-Sequence Magnetic Resonance Imaging
by Ailing He, Peng Wang, Aihua Zhu, Yankui Liu, Jianhuan Chen and Li Liu
Diagnostics 2022, 12(12), 2995; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12122995 - 30 Nov 2022
Cited by 4 | Viewed by 1216
Abstract
The IDH somatic mutation status is an important basis for the diagnosis and classification of gliomas. We propose a "6-Step" general radiomics model to noninvasively predict IDH mutation status by simultaneously tuning combined multi-sequence MRI and optimizing the full radiomics processing pipeline. Radiomic features (n = 3776) were extracted from multi-sequence MRI (T1, T2, FLAIR, and T1Gd) in low-grade gliomas (LGGs), and a total of 45,360 radiomics pipelines were investigated under different settings. The predictive ability of the general radiomics model was evaluated with regard to accuracy, stability, and efficiency. Based on numerous experiments, we arrived at an optimal pipeline for classifying IDH mutation status: the T2+FLAIR combined multi-sequence with the wavelet image filter, mean data normalization, PCC dimension reduction, RFE feature selection, and an SVM classifier. The mean and standard deviation of the AUC, accuracy, sensitivity, and specificity were 0.873 ± 0.05, 0.876 ± 0.09, 0.875 ± 0.11, and 0.877 ± 0.15, respectively. Furthermore, the 14 radiomic features that best distinguished IDH mutation status on the T2+FLAIR multi-sequence were analyzed, and the gray level co-occurrence matrix (GLCM) features were shown to be of high importance. Apart from the promising prediction of the molecular subtypes, this study also provides a general tool for radiomics investigation.
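The optimal pipeline's normalization, PCC pruning, RFE selection, and SVM stages can be mirrored in scikit-learn. The feature matrix below is a random stand-in for the 3776 radiomic features, and the correlation threshold is an assumption:

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def drop_correlated(X: np.ndarray, thresh: float = 0.9) -> np.ndarray:
    """PCC dimension reduction: drop one feature of every highly correlated pair."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    upper = np.triu(corr, k=1)
    keep = [j for j in range(X.shape[1]) if not (upper[:, j] > thresh).any()]
    return X[:, keep]

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 300))      # stand-in radiomic features per patient
y = rng.integers(0, 2, size=120)     # IDH mutant vs. wild type

X_red = drop_correlated(X)
clf = make_pipeline(
    StandardScaler(),                                    # mean normalization
    RFE(SVC(kernel="linear"), n_features_to_select=14,   # recursive elimination,
        step=0.2),                                       # dropping 20% per round
    SVC(kernel="rbf"),
)
print(cross_val_score(clf, X_red, y, cv=5).mean())
```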

30 pages, 6183 KiB  
Article
Embedded AMIS-Deep Learning with Dialog-Based Object Query System for Multi-Class Tuberculosis Drug Response Classification
by Chutinun Prasitpuriprecha, Rapeepan Pitakaso, Sarayut Gonwirat, Prem Enkvetchakul, Thanawadee Preeprem, Sirima Suvarnakuta Jantama, Chutchai Kaewta, Nantawatana Weerayuth, Thanatkij Srichok, Surajet Khonjun and Natthapong Nanthasamroeng
Diagnostics 2022, 12(12), 2980; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12122980 - 28 Nov 2022
Cited by 7 | Viewed by 2246
Abstract
A person infected with drug-resistant tuberculosis (DR-TB) is one who does not respond to typical TB treatment. DR-TB necessitates a longer treatment period and a more difficult treatment protocol, and it can spread and infect individuals in the same manner as regular TB, although early detection of DR-TB could reduce the cost and length of TB treatment. This study provides a fast and effective classification scheme for the four subtypes of TB: drug-sensitive tuberculosis (DS-TB), drug-resistant tuberculosis (DR-TB), multidrug-resistant tuberculosis (MDR-TB), and extensively drug-resistant tuberculosis (XDR-TB). The drug response classification system (DRCS) was developed as a classification tool for DR-TB subtypes. As the classification method, ensemble deep learning (EDL) was built from two types of image preprocessing methods, four convolutional neural network (CNN) architectures, and three decision fusion methods. The model developed with EDL was then embedded in a dialog-based object query system (DBOQS) to enable the use of DRCS as a classification tool that assists medical professionals in diagnosing DR-TB. EDL yields an improvement of 1.17–43.43% over existing methods for classifying DR-TB and generates 31.25% higher accuracy than classic deep learning. DRCS increased accuracy to 95.8% and user trust to 95.1%, and after the trial period, 99.70% of users were interested in continuing to use the system as a supportive diagnostic tool.

23 pages, 8355 KiB  
Article
AID-U-Net: An Innovative Deep Convolutional Architecture for Semantic Segmentation of Biomedical Images
by Ashkan Tashk, Jürgen Herp, Thomas Bjørsum-Meyer, Anastasios Koulaouzidis and Esmaeil S. Nadimi
Diagnostics 2022, 12(12), 2952; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12122952 - 25 Nov 2022
Cited by 1 | Viewed by 2445
Abstract
Semantic segmentation of biomedical images has found its niche in screening and diagnostic applications. Recent methods based on deep convolutional neural networks have been very effective, since they are readily adaptive to biomedical applications and outperform other competitive segmentation methods. Inspired by the U-Net, we designed a deep learning network with an innovative architecture, hereafter referred to as AID-U-Net. Our network consists of direct contracting and expansive paths, as well as the distinguishing feature of sub-contracting and sub-expansive paths. Implementation results on seven entirely different databases of medical images demonstrated that our proposed network outperforms state-of-the-art solutions with no specific pre-trained backbones for both 2D and 3D biomedical image segmentation tasks. Furthermore, we showed that AID-U-Net dramatically reduces inference time and computational complexity in terms of the number of learnable parameters. The results further show that the proposed AID-U-Net can segment different medical objects, achieving improvements in 2D F1-score and 3D mean BF-score of 3.82% and 2.99%, respectively.

17 pages, 1517 KiB  
Article
Over-the-Counter Breast Cancer Classification Using Machine Learning and Patient Registration Records
by Tengku Muhammad Hanis, Nur Intan Raihana Ruhaiyem, Wan Nor Arifin, Juhara Haron, Wan Faiziah Wan Abdul Rahman, Rosni Abdullah and Kamarul Imran Musa
Diagnostics 2022, 12(11), 2826; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12112826 - 16 Nov 2022
Cited by 2 | Viewed by 1643
Abstract
This study aims to determine the feasibility of using machine learning (ML) and patient registration records to develop an over-the-counter (OTC) screening model for breast cancer risk estimation. Data were retrospectively collected from women who came to the Hospital Universiti Sains Malaysia, Malaysia, with breast-related problems. Eight ML models were used: k-nearest neighbour (kNN), elastic-net logistic regression, multivariate adaptive regression splines, artificial neural network, partial least squares, random forest, support vector machine (SVM), and extreme gradient boosting. The features used to develop the screening models were limited to the information in the patient registration form. The final model was evaluated in terms of performance across mammographic density groups, and its feature importance was assessed using a model-agnostic approach. kNN had the highest Youden J index, precision, and PR-AUC, while SVM had the highest F2 score. The kNN model was selected as the final model; it had balanced sensitivity, specificity, and PR-AUC across the mammographic density groups, and the most important feature was age at examination. In conclusion, this study showed that ML and patient registration information can feasibly be used to build an OTC screening model for breast cancer.
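Model comparison by Youden J index is easy to reproduce in scikit-learn. Synthetic registration-style features stand in for the real form fields here, and the two models shown are only a subset of the eight compared:

```python
import numpy as np
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import cross_val_predict
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 6))    # registration-form fields, numerically encoded
y = (X[:, 0] + rng.normal(scale=1.5, size=500) > 1.2).astype(int)

def youden_j(y_true, y_pred) -> float:
    """Youden J = sensitivity + specificity - 1."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return tp / (tp + fn) + tn / (tn + fp) - 1

for name, model in [("kNN", KNeighborsClassifier(7)), ("SVM", SVC())]:
    pred = cross_val_predict(model, X, y, cv=5)   # out-of-fold predictions
    print(f"{name}: J = {youden_j(y, pred):.3f}")
```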

21 pages, 1647 KiB  
Article
Newborn Cry-Based Diagnostic System to Distinguish between Sepsis and Respiratory Distress Syndrome Using Combined Acoustic Features
by Zahra Khalilzad, Ahmad Hasasneh and Chakib Tadj
Diagnostics 2022, 12(11), 2802; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12112802 - 15 Nov 2022
Cited by 8 | Viewed by 2490
Abstract
Crying is the only means of communication for a newborn baby with its surrounding environment, and it provides significant information about the newborn's health, emotions, and needs. The cries of newborn babies have long been known as a biomarker for the diagnosis of pathologies; however, to the best of our knowledge, discriminating between two pathology groups by means of cry signals is unprecedented. Therefore, this study aimed to distinguish septic newborns from newborns with Neonatal Respiratory Distress Syndrome (RDS) by employing the Machine Learning (ML) methods of Multilayer Perceptron (MLP) and Support Vector Machine (SVM). The cry signal was analyzed from two different perspectives: (1) the musical perspective, by studying the spectral feature set of the Harmonic Ratio (HR), and (2) the speech-processing perspective, using the short-term feature set of Gammatone Frequency Cepstral Coefficients (GFCCs). To assess the role of employing features from both short-term and spectral modalities in distinguishing the two pathology groups, they were fused into one feature set, named the combined features. The hyperparameters (HPs) of the implemented ML approaches were fine-tuned for each experiment. By normalizing and fusing the features originating from the two modalities, the overall performance of the proposed design improved across all evaluation measures, achieving accuracies of 92.49% and 95.3% with the MLP and SVM classifiers, respectively. The SVM outperformed the MLP on all evaluation measures presented in this study except for the area under the curve of the receiver operating characteristic (AUC-ROC), which signifies the design's class-separation ability. The achieved results highlight the value of combining features from different levels and modalities for a more powerful analysis of cry signals, as well as of including a neural network (NN)-based classifier. Consequently, attaining 95.3% accuracy for the separation of two entangled pathology groups, RDS and sepsis, elucidates promising potential for further studies with larger datasets and more pathology groups.

12 pages, 819 KiB  
Article
Automated Artificial Intelligence-Based Assessment of Lower Limb Alignment Validated on Weight-Bearing Pre- and Postoperative Full-Leg Radiographs
by Felix Erne, Priyanka Grover, Marcel Dreischarf, Marie K. Reumann, Dominik Saul, Tina Histing, Andreas K. Nüssler, Fabian Springer and Carolin Scholl
Diagnostics 2022, 12(11), 2679; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12112679 - 03 Nov 2022
Cited by 1 | Viewed by 2253
Abstract
The assessment of knee alignment using standing weight-bearing full-leg radiographs (FLR) is a standardized method, but determining the load-bearing axis of the leg requires time-consuming manual measurements. The aim of this study was to develop and validate a novel algorithm based on artificial intelligence (AI) for the automated assessment of lower limb alignment. In the first stage, a customized Mask R-CNN model was trained to automatically detect and segment anatomical structures and implants in FLR. In the second stage, four region-specific neural network models (adaptations of U-Net) were trained to automatically place anatomical landmarks. In the final stage, this information was used to automatically determine five key lower limb alignment angles. For the validation dataset, weight-bearing, antero-posterior FLR were captured preoperatively and 3 months postoperatively. Preoperative images were measured by the operating orthopedic surgeon and an independent physician; postoperative images were measured by the second rater only. The final validation dataset consisted of 95 preoperative and 105 postoperative FLR. The detection rate for the different angles ranged between 92.4% and 98.9%. Human vs. human inter-rater (ICCs: 0.85–0.99) and intra-rater (ICCs: 0.95–1.0) reliability analyses achieved significant agreement. The ICC values of the human vs. AI inter-rater reliability analysis ranged between 0.80 and 1.0 preoperatively and between 0.83 and 0.99 postoperatively (all p < 0.001). An independent, external validation of the proposed algorithm on pre- and postoperative FLR, with excellent reliability against human measurements, was thus demonstrated. Hence, the algorithm might allow for the objective and time-saving analysis of large datasets and support physicians in daily routine.
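Once the landmark networks output coordinates, each alignment angle reduces to vector geometry. A minimal sketch follows; the landmark names and coordinates are hypothetical, and the paper's five specific angles are not reproduced here:

```python
import numpy as np

def angle_deg(vertex: np.ndarray, p1: np.ndarray, p2: np.ndarray) -> float:
    """Angle at `vertex` between rays vertex->p1 and vertex->p2, in degrees."""
    v1, v2 = p1 - vertex, p2 - vertex
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Hypothetical landmark outputs (x, y in pixels) from the landmark networks
hip_center = np.array([512.0, 300.0])
knee_center = np.array([530.0, 1250.0])
ankle_center = np.array([560.0, 2200.0])

# Hip-knee-ankle style angle at the knee between femoral and tibial axes
print(round(angle_deg(knee_center, hip_center, ankle_center), 2))  # ~179.3
```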

13 pages, 1481 KiB  
Article
A Deep Learning Algorithm for Radiographic Measurements of the Hip in Adults—A Reliability and Agreement Study
by Janni Jensen, Ole Graumann, Søren Overgaard, Oke Gerke, Michael Lundemann, Martin Haagen Haubro, Claus Varnum, Lene Bak, Janne Rasmussen, Lone B. Olsen and Benjamin S. B. Rasmussen
Diagnostics 2022, 12(11), 2597; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12112597 - 26 Oct 2022
Cited by 8 | Viewed by 2026
Abstract
Hip dysplasia (HD) is a frequent cause of hip pain in skeletally mature patients and may lead to osteoarthritis (OA). An accurate and early diagnosis may postpone, reduce, or even prevent the onset of OA and, ultimately, hip arthroplasty at a young age. The overall aim of this study was to assess the reliability of an algorithm designed to read pelvic anterior-posterior (AP) radiographs, and to estimate the agreement between the algorithm and human readers for measuring (i) the lateral center edge angle of Wiberg (LCEA) and (ii) the acetabular index angle (AIA). The algorithm was based on deep learning models developed using a modified U-Net architecture and ResNet-34. The newly developed algorithm was found to be highly reliable when identifying the anatomical landmarks used for measuring LCEA and AIA in pelvic radiographs, thus offering highly consistent measurement outputs. The study showed that manual identification of the same landmarks by five specialist readers was subject to variance, and the level of agreement between the algorithm and human readers was consequently poor, with mean measured differences of 0.37° to 9.56° for right LCEA measurements. The algorithm displayed the highest agreement with the senior orthopedic surgeon. With further development, the algorithm may be a good alternative to humans when screening for HD.

19 pages, 4190 KiB  
Article
Classification Framework for Medical Diagnosis of Brain Tumor with an Effective Hybrid Transfer Learning Model
by Nagwan Abdel Samee, Noha F. Mahmoud, Ghada Atteia, Hanaa A. Abdallah, Maali Alabdulhafith, Mehdhar S. A. M. Al-Gaashani, Shahab Ahmad and Mohammed Saleh Ali Muthanna
Diagnostics 2022, 12(10), 2541; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12102541 - 20 Oct 2022
Cited by 18 | Viewed by 3063
Abstract
Brain tumors (BTs) are deadly diseases that can strike people of every age, all over the world. Every year, thousands of people die of brain tumors. Brain-related diagnoses require caution, and even the smallest error in diagnosis can have negative repercussions. Medical errors [...] Read more.
Brain tumors (BTs) are deadly diseases that can strike people of every age, all over the world. Every year, thousands of people die of brain tumors. Brain-related diagnoses require caution, and even the smallest error in diagnosis can have negative repercussions. Medical errors in brain tumor diagnosis are common and frequently result in higher patient mortality rates. Magnetic resonance imaging (MRI) is widely used for tumor evaluation and detection. However, MRI generates large amounts of data, making manual segmentation difficult and laborious, which limits the use of accurate measurements in clinical practice. As a result, automated and dependable segmentation methods are required. Automatic segmentation and early detection of brain tumors are difficult tasks in computer vision due to their high spatial and structural variability, yet early diagnosis and treatment are critical. Various traditional machine learning (ML) techniques have been used to detect brain tumors, but the main issue with these models is that features must be extracted manually. To address these issues, this paper presents a hybrid deep transfer learning model (GN-AlexNet) for BT tri-classification (pituitary, meningioma, and glioma). The proposed model combines the GoogLeNet architecture with the AlexNet model by removing five layers of GoogLeNet and adding ten layers of AlexNet, and it extracts features and classifies them automatically. On the same CE-MRI dataset, the proposed model was compared to transfer learning techniques (VGG-16, AlexNet, SqueezeNet, ResNet, and MobileNet-V2) and other ML/DL approaches. The proposed model outperformed the current methods in terms of accuracy and sensitivity (accuracy of 99.51% and sensitivity of 98.90%). Full article
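The hybrid idea, extracting features from two pretrained backbones and fusing them ahead of a shared classification head, can be sketched in Keras. This is not the GN-AlexNet implementation: Keras ships neither GoogLeNet nor AlexNet, so InceptionV3 and MobileNetV2 stand in, and the head width is an assumption.

```python
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import InceptionV3, MobileNetV2

NUM_CLASSES = 3  # pituitary, meningioma, glioma

inp = layers.Input(shape=(224, 224, 3))
# Stand-in pretrained backbones, truncated to pooled feature vectors and frozen.
b1 = InceptionV3(include_top=False, weights="imagenet",
                 input_shape=(224, 224, 3), pooling="avg")
b2 = MobileNetV2(include_top=False, weights="imagenet",
                 input_shape=(224, 224, 3), pooling="avg")
b1.trainable = False
b2.trainable = False

fused = layers.Concatenate()([b1(inp), b2(inp)])   # hybrid feature fusion
x = layers.Dense(256, activation="relu")(fused)    # head width assumed
out = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = Model(inp, out)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```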

17 pages, 1428 KiB  
Article
Application of Deep Learning to IVC Filter Detection from CT Scans
by Rahul Gomes, Connor Kamrowski, Pavithra Devy Mohan, Cameron Senor, Jordan Langlois and Joseph Wildenberg
Diagnostics 2022, 12(10), 2475; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12102475 - 13 Oct 2022
Cited by 4 | Viewed by 1676
Abstract
IVC filters (IVCF) perform an important function in select patients who have venous blood clots. However, they are usually intended to be temporary, and significant delay in removal can have negative health consequences for the patient. Currently, all Interventional Radiology (IR) practices are tasked with tracking patients in whom IVCF are placed. Due to their small size and location deep within the abdomen, it is common for patients to forget that they have an IVCF, so there can be a significant delay before a new healthcare provider becomes aware of the presence of a filter. Patients may have an abdominopelvic CT scan for many reasons and, fortunately, IVCF are clearly visible on these scans. In this research, a deep learning model capable of segmenting IVCF from CT scan slices along the axial plane is developed. The model achieved a Dice score of 0.82 when trained on 372 CT scan slices. The segmentation model is then integrated with a prediction algorithm capable of flagging an entire CT scan as containing an IVCF. The prediction algorithm utilizing the segmentation model achieved 92.22% accuracy at detecting IVCF in the scans. Full article
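For reference, the Dice score used to evaluate the segmentation, together with a simple scan-level flagging rule of the kind the prediction algorithm could apply; the thresholds in the rule are illustrative assumptions, not the paper's.

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks."""
    pred, target = np.asarray(pred, bool), np.asarray(target, bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def scan_has_filter(slice_masks, min_pixels=20, min_slices=3):
    """Flag a whole CT scan as IVCF-positive when enough axial slices contain
    a sufficiently large predicted filter region (illustrative rule)."""
    positive = sum(int(m.sum()) >= min_pixels for m in slice_masks)
    return positive >= min_slices
```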

17 pages, 2359 KiB  
Article
Deep Learning Assisted Automated Assessment of Thalassaemia from Haemoglobin Electrophoresis Images
by Muhammad Salman Khan, Azmat Ullah, Kaleem Nawaz Khan, Huma Riaz, Yasar Mehmood Yousafzai, Tawsifur Rahman, Muhammad E. H. Chowdhury and Saad Bin Abul Kashem
Diagnostics 2022, 12(10), 2405; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12102405 - 03 Oct 2022
Cited by 4 | Viewed by 4244
Abstract
Haemoglobin (Hb) electrophoresis is a method of blood testing used to detect thalassaemia. However, the interpretation of the electrophoresis test result is itself a complex task. Expert haematologists, specifically in developing countries, are relatively few in number and are usually overburdened. To assist them with their workload, in this paper we present a novel method for the automated assessment of thalassaemia using Hb electrophoresis images. Moreover, in this study we compile a large Hb electrophoresis image dataset, consisting of 103 strips containing 524 electrophoresis images with a clear consensus on the quality of electrophoresis, obtained from 824 subjects. The proposed methodology is split into two parts: (1) single-patient electrophoresis image segmentation by means of a lane-extraction technique, and (2) binary classification (normal or abnormal) of the electrophoresis images using state-of-the-art deep convolutional neural networks (CNNs) and the concept of transfer learning. Image processing techniques, including filtering and morphological operations, are applied for object detection and lane extraction to automatically separate the lanes, which are then classified using CNN models. Seven different CNN models (ResNet18, ResNet50, ResNet101, InceptionV3, DenseNet201, SqueezeNet and MobileNetV2) were investigated in this study. InceptionV3 outperformed the other CNNs in detecting thalassaemia using Hb electrophoresis images. The accuracy, precision, recall, F1-score and specificity in the detection of thalassaemia obtained with the InceptionV3 model were 95.8%, 95.84%, 95.8%, 95.8% and 95.8%, respectively. MobileNetV2 demonstrated an accuracy, precision, recall, F1-score and specificity of 95.72%, 95.73%, 95.72%, 95.7% and 95.72%, respectively; its performance was comparable with that of the best performing model, InceptionV3. As a lightweight network, MobileNetV2 also provides the lowest latency when processing a single patient's image and is thus well suited to mobile applications. The proposed approach, which has shown very high classification accuracy, will assist in the rapid and robust detection of thalassaemia using Hb electrophoresis images. Full article
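The lane-extraction step, thresholding plus morphological operations followed by connected-component analysis, might look as follows in OpenCV; the kernel size and area threshold are illustrative assumptions, and the paper's exact pipeline may differ.

```python
import cv2

def extract_lanes(strip_gray, min_area=500):
    """Split an 8-bit grayscale electrophoresis strip into lane crops
    (illustrative parameters; the paper's exact pipeline may differ)."""
    # Binarise: lanes/bands are darker than the background.
    _, binary = cv2.threshold(strip_gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # Opening with a tall, narrow kernel keeps vertical lanes, removes speckle.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 15))
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    # Connected components approximate the individual lanes.
    n, _, stats, _ = cv2.connectedComponentsWithStats(opened)
    lanes = []
    for i in range(1, n):  # label 0 is the background
        x, y, w, h, area = stats[i]
        if area >= min_area:
            lanes.append(strip_gray[y:y + h, x:x + w])
    return lanes
```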

28 pages, 22867 KiB  
Article
Ensemble Transfer Learning for Fetal Head Analysis: From Segmentation to Gestational Age and Weight Prediction
by Mahmood Alzubaidi, Marco Agus, Uzair Shah, Michel Makhlouf, Khalid Alyafei and Mowafa Househ
Diagnostics 2022, 12(9), 2229; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12092229 - 15 Sep 2022
Cited by 11 | Viewed by 2691
Abstract
Ultrasound is one of the most commonly used imaging methodologies in obstetrics to monitor the growth of a fetus during the gestation period. Specifically, ultrasound images are routinely utilized to gather fetal information, including body measurements, anatomical structure, fetal movements, and pregnancy complications. Recent developments in artificial intelligence and computer vision provide new methods for the automated analysis of medical images in many domains, including ultrasound images. We present a full end-to-end framework for segmenting, measuring, and estimating fetal gestational age and weight based on two-dimensional ultrasound images of the fetal head. Our segmentation framework is based on the following components: (i) eight segmentation architectures (UNet, UNet Plus, Attention UNet, UNet 3+, TransUNet, FPN, LinkNet, and Deeplabv3) fine-tuned with the lightweight EfficientNetB0 backbone, and (ii) a weighted voting method for building an optimized ensemble transfer learning model (ETLM). The ETLM was then used to segment the fetal head and to perform accurate measurements of the head circumference and seven other values of the fetal head, which we incorporated into a multiple regression model for predicting the week of gestational age and the estimated fetal weight (EFW). We finally validated the regression model by comparing our results with expert physician measurements and longitudinal references. We evaluated the performance of our framework on the public domain dataset HC18: we obtained 98.53% mean intersection over union (mIoU) as the segmentation accuracy, surpassing state-of-the-art methods; as measurement accuracy, we obtained a 1.87 mm mean absolute difference (MAD). Finally, we obtained a 0.03% mean square error (MSE) in predicting the week of gestational age and a 0.05% MSE in predicting EFW. Full article
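The weighted-voting ensemble step can be sketched directly: per-pixel probability maps from the individual segmentation models are blended with normalized weights and thresholded. The weights below are illustrative, e.g., proportional to each model's validation IoU.

```python
import numpy as np

def ensemble_mask(prob_maps, weights, threshold=0.5):
    """Weighted soft voting over per-pixel probability maps from several
    segmentation models."""
    prob_maps = np.stack(prob_maps)             # (n_models, H, W)
    w = np.asarray(weights, float)
    w = w / w.sum()                             # normalize the model weights
    fused = np.tensordot(w, prob_maps, axes=1)  # weighted mean, shape (H, W)
    return fused >= threshold

# Illustrative: three models' probability maps for one ultrasound image.
rng = np.random.default_rng(0)
maps = [rng.random((256, 256)) for _ in range(3)]
head_mask = ensemble_mask(maps, weights=[0.90, 0.80, 0.85])
```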

9 pages, 1443 KiB  
Article
Deep Learning Artificial Intelligence to Predict the Need for Tracheostomy in Patients of Deep Neck Infection Based on Clinical and Computed Tomography Findings—Preliminary Data and a Pilot Study
by Shih-Lung Chen, Shy-Chyi Chin and Chia-Ying Ho
Diagnostics 2022, 12(8), 1943; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12081943 - 12 Aug 2022
Cited by 2 | Viewed by 1331
Abstract
Background: Deep neck infection (DNI) can lead to airway obstruction. Rather than intubation, some patients need tracheostomy to secure the airway. However, no study has used deep learning (DL) artificial intelligence (AI) to predict the need for tracheostomy in DNI patients. Thus, the purpose of this study was to develop a DL framework to predict the need for tracheostomy in DNI patients. Methods: A total of 392 patients with DNI were enrolled in this study between August 2016 and April 2022; 80% of the patients (n = 317) were randomly assigned to a training group for model development and validation, and the remaining 20% (n = 75) were assigned to the test group to determine model accuracy. The k-nearest neighbor method was applied to analyze the clinical and computed tomography (CT) data of the patients. The predictions of the model with regard to the need for tracheostomy were compared with actual decisions made by clinical experts. Results: No significant differences were observed in clinical or CT parameters between the training and test groups. The DL model yielded a prediction accuracy of 78.66% (59/75 cases). The sensitivity and specificity values were 62.50% and 80.60%, respectively. Conclusions: We demonstrated a DL framework to predict the need for tracheostomy in DNI patients based on clinical and CT data. The model has potential for clinical application; in particular, it may assist less experienced clinicians in determining whether tracheostomy is necessary in cases of DNI. Full article
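A minimal sketch of the k-nearest-neighbor classification step on tabular clinical/CT features; the feature matrix, k = 5 and the 80/20 split are placeholders mirroring the abstract only loosely.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder tabular features (e.g., age, laboratory values, CT findings).
rng = np.random.default_rng(42)
X = rng.normal(size=(392, 6))
y = rng.integers(0, 2, size=392)  # 1 = tracheostomy needed

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Feature scaling matters for distance-based k-NN.
model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
model.fit(X_tr, y_tr)
print(f"Test accuracy: {model.score(X_te, y_te):.2%}")
```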

11 pages, 4009 KiB  
Article
Diagnostic Value of Fully Automated Artificial Intelligence Powered Coronary Artery Calcium Scoring from 18F-FDG PET/CT
by Claudia Morf, Thomas Sartoretti, Antonio G. Gennari, Alexander Maurer, Stephan Skawran, Andreas A. Giannopoulos, Elisabeth Sartoretti, Moritz Schwyzer, Alessandra Curioni-Fontecedro, Catherine Gebhard, Ronny R. Buechel, Philipp A. Kaufmann, Martin W. Huellner and Michael Messerli
Diagnostics 2022, 12(8), 1876; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12081876 - 03 Aug 2022
Cited by 3 | Viewed by 1861
Abstract
Objectives: The objective of this study was to assess the feasibility and accuracy of a fully automated artificial intelligence (AI)-powered coronary artery calcium scoring (CACS) method on ungated CT in oncologic patients undergoing 18F-FDG PET/CT. Methods: A total of 100 oncologic patients examined between 2007 and 2015 were retrospectively included. All patients underwent 18F-FDG PET/CT and cardiac SPECT myocardial perfusion imaging (MPI) with 99mTc-tetrofosmin within 6 months. CACS was manually performed on non-contrast ECG-gated CT scans obtained from SPECT-MPI (i.e., the reference standard). Additionally, CACS was performed using a cloud-based, user-independent tool (AI-CACS) on ungated CT scans from the 18F-FDG PET/CT examinations. Agatston scores from the manual CACS and AI-CACS were compared. Results: On a per-patient basis, the AI-CACS tool achieved a sensitivity and specificity of 85% and 90% for the detection of CAC. Interscore agreement between manual CACS and AI-CACS was 0.88 (95% CI: 0.827, 0.918). Interclass agreement of risk categories was 0.8 in weighted kappa analysis, with a reclassification rate of 44% and an underestimation of one risk category by AI-CACS in 39% of cases. On a per-vessel basis, interscore agreement of CAC scores ranged from 0.716 for the circumflex artery to 0.863 for the left anterior descending artery. Conclusions: Fully automated AI-CACS performed on non-contrast, free-breathing, ungated CT scans from 18F-FDG PET/CT examinations is feasible and provides an acceptable to good estimation of CAC burden. CAC load on ungated CT is, however, generally underestimated by AI-CACS, which should be taken into account when interpreting imaging findings. Full article
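Agatston scoring itself follows a fixed convention: lesions of at least 130 HU are scored as area times a density weight. The sketch below implements that convention per slice; the lesion handling is simplified relative to clinical software.

```python
from scipy import ndimage

def density_weight(max_hu):
    """Conventional Agatston density factor from a lesion's peak attenuation."""
    if max_hu >= 400: return 4
    if max_hu >= 300: return 3
    if max_hu >= 200: return 2
    return 1  # 130-199 HU

def agatston_slice(hu, pixel_area_mm2, min_area_mm2=1.0):
    """Agatston contribution of one axial slice (simplified lesion handling)."""
    labels, n = ndimage.label(hu >= 130)     # candidate calcified lesions
    score = 0.0
    for i in range(1, n + 1):
        lesion = labels == i
        area = lesion.sum() * pixel_area_mm2
        if area >= min_area_mm2:             # ignore tiny noise specks
            score += area * density_weight(hu[lesion].max())
    return score
```

The conventional score assumes roughly 3 mm slice spacing; summing the per-slice contributions over the whole scan yields the per-patient Agatston score.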

16 pages, 10599 KiB  
Article
HTLML: Hybrid AI Based Model for Detection of Alzheimer’s Disease
by Sarang Sharma, Sheifali Gupta, Deepali Gupta, Ayman Altameem, Abdul Khader Jilani Saudagar, Ramesh Chandra Poonia and Soumya Ranjan Nayak
Diagnostics 2022, 12(8), 1833; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12081833 - 29 Jul 2022
Cited by 15 | Viewed by 2235
Abstract
Alzheimer’s disease (AD) is a degenerative condition of the brain that affects the memory and reasoning abilities of patients. Memory is steadily wiped out by this condition, which gradually affects the brain’s ability to think, recall, and form intentions. In order to properly identify this disease, a variety of manual imaging modalities, including CT, MRI, and PET, are being used. These methods, however, are time-consuming and troublesome in the context of early diagnostics. This is why deep learning models have been devised that are less time-intensive, require less high-tech hardware or human interaction, and continue to improve in performance; they are useful for the prediction of AD, and their outputs can be verified against results obtained by doctors in medical institutions or health care facilities. In this paper, we propose a hybrid AI model that combines transfer learning (TL) with a permutation-based machine learning (ML) voting classifier in two basic phases. The first phase comprises two TL-based models, DenseNet-121 and DenseNet-201, for feature extraction, whereas the second phase applies three different ML classifiers, SVM, Naïve Bayes, and XGBoost, for classification. The final classifier outcomes are evaluated by means of permutations of the voting mechanism. The proposed model achieved an accuracy of 91.75%, a specificity of 96.5%, and an F1-score of 90.25%. The dataset used for training was obtained from Kaggle and contains 6200 images, including 896 classified as mildly demented, 64 as moderately demented, 3200 as non-demented, and 1966 as very mildly demented. The results show that the suggested model outperforms current state-of-the-art models. Based on these results, such models could support clinically viable methods for detecting AD in MRI images. Full article
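The second phase, voting across SVM, Naïve Bayes and XGBoost over CNN-extracted features, can be sketched with scikit-learn's VotingClassifier; the feature vectors here are random placeholders, not the Kaggle data.

```python
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from xgboost import XGBClassifier

# Random placeholders for DenseNet feature vectors and dementia-grade labels.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(600, 128)), rng.integers(0, 4, size=600)

voter = VotingClassifier(
    estimators=[("svm", SVC(probability=True)),
                ("nb", GaussianNB()),
                ("xgb", XGBClassifier())],
    voting="soft",  # average predicted class probabilities
)
voter.fit(X, y)
print(voter.predict(X[:5]))
```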

18 pages, 8479 KiB  
Article
An Enhanced Transfer Learning Based Classification for Diagnosis of Skin Cancer
by Vatsala Anand, Sheifali Gupta, Ayman Altameem, Soumya Ranjan Nayak, Ramesh Chandra Poonia and Abdul Khader Jilani Saudagar
Diagnostics 2022, 12(7), 1628; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12071628 - 05 Jul 2022
Cited by 55 | Viewed by 2370
Abstract
Skin cancer is the most commonly diagnosed and reported malignancy worldwide. To reduce the death rate from cancer, it is essential to diagnose skin cancer at a benign stage as soon as possible; an automated system that can detect skin cancer in its earliest stages is therefore necessary to save lives. For the diagnosis of skin cancer, various researchers have applied deep learning and transfer learning models. However, the existing approaches are limited in terms of their accuracy and rely on troublesome, time-consuming processes. As a result, it is critical to design an automatic system that can deliver a fast judgment and considerably reduce diagnostic mistakes. In this work, a deep learning-based model has been designed for the identification of skin cancer at benign and malignant stages using a transfer learning approach. For this, a pre-trained VGG16 model is extended by adding one flatten layer, two dense layers with LeakyReLU activation, and a final dense layer with sigmoid activation to enhance the accuracy of the model. The proposed model is evaluated on a dataset obtained from Kaggle. Data augmentation techniques are applied to increase the randomness of the input dataset and improve model stability. The proposed model has been validated across several hyperparameters, such as batch sizes of 8, 16, 32, 64, and 128, and different numbers of epochs and optimizers. It performs best, with an overall accuracy of 89.09%, at a batch size of 128 with the Adam optimizer and 10 epochs, and it outperforms state-of-the-art techniques. This model will help dermatologists in the early diagnosis of skin cancers. Full article
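The described architecture, a frozen VGG16 base with a flatten layer, two LeakyReLU dense layers and a sigmoid output, can be sketched in Keras; the dense-layer widths are not given in the abstract and are assumed here.

```python
from tensorflow.keras import Sequential, layers
from tensorflow.keras.applications import VGG16

base = VGG16(include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False  # keep the pre-trained convolutional features fixed

model = Sequential([
    base,
    layers.Flatten(),
    layers.Dense(128), layers.LeakyReLU(),  # widths assumed; the abstract omits them
    layers.Dense(64), layers.LeakyReLU(),
    layers.Dense(1, activation="sigmoid"),  # benign vs. malignant
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```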

15 pages, 2570 KiB  
Article
Are We There Yet? The Value of Deep Learning in a Multicenter Setting for Response Prediction of Locally Advanced Rectal Cancer to Neoadjuvant Chemoradiotherapy
by Barbara D. Wichtmann, Steffen Albert, Wenzhao Zhao, Angelika Maurer, Claus Rödel, Ralf-Dieter Hofheinz, Jürgen Hesser, Frank G. Zöllner and Ulrike I. Attenberger
Diagnostics 2022, 12(7), 1601; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12071601 - 30 Jun 2022
Cited by 4 | Viewed by 1729
Abstract
This retrospective study aims to evaluate the generalizability of a promising state-of-the-art multitask deep learning (DL) model for predicting the response of locally advanced rectal cancer (LARC) to neoadjuvant chemoradiotherapy (nCRT) using a multicenter dataset. To this end, we retrained and validated a Siamese network with two U-Nets joined at multiple layers using pre- and post-therapeutic T2-weighted (T2w) images, diffusion-weighted (DW) images and apparent diffusion coefficient (ADC) maps of 83 LARC patients acquired under study conditions at four different medical centers. To assess the predictive performance of the model, the trained network was then applied to an external clinical routine dataset of 46 LARC patients imaged without study conditions. The training and test datasets differed significantly in terms of their composition, e.g., T-/N-staging and the time interval between initial staging/nCRT/re-staging and surgery, as well as with respect to acquisition parameters, such as resolution, echo/repetition time, flip angle and field strength. We found that even after dedicated data pre-processing, the predictive performance dropped significantly in this multicenter setting compared to a previously published single- or two-center setting. Testing the network on the external clinical routine dataset yielded an area under the receiver operating characteristic curve of 0.54 (95% confidence interval [CI]: 0.41, 0.65) when using only pre- and post-therapeutic T2w images as input, and 0.60 (95% CI: 0.48, 0.71) when using the combination of pre- and post-therapeutic T2w images, DW images, and ADC maps as input. Our study highlights the importance of data quality and harmonization in clinical trials using machine learning. Only in a joint, cross-center effort involving a multidisciplinary team can we generate sufficiently large curated and annotated datasets and develop the necessary pre-processing pipelines for data harmonization to successfully apply DL models clinically. Full article
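External-test AUCs such as those above are usually reported with confidence intervals; the paper does not state its CI method, so the percentile-bootstrap sketch below is only one plausible approach, not the authors' procedure.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_ci(y_true, y_score, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the ROC AUC."""
    rng = np.random.default_rng(seed)
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))
        if len(np.unique(y_true[idx])) < 2:  # resample must contain both classes
            continue
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(aucs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return roc_auc_score(y_true, y_score), (lo, hi)
```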

17 pages, 12058 KiB  
Article
A Radiation-Free Classification Pipeline for Craniosynostosis Using Statistical Shape Modeling
by Matthias Schaufelberger, Reinald Kühle, Andreas Wachter, Frederic Weichel, Niclas Hagen, Friedemann Ringwald, Urs Eisenmann, Jürgen Hoffmann, Michael Engel, Christian Freudlsperger and Werner Nahm
Diagnostics 2022, 12(7), 1516; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12071516 - 21 Jun 2022
Cited by 7 | Viewed by 2070
Abstract
Background: Craniosynostosis is a condition caused by the premature fusion of skull sutures, leading to irregular growth patterns of the head. Three-dimensional photogrammetry is a radiation-free alternative to diagnosis using computed tomography. While statistical shape models have been proposed to quantify head shape, no shape-model-based classification approach has been presented yet. Methods: We present a classification pipeline that enables an automated diagnosis of three types of craniosynostosis. The pipeline is based on a statistical shape model built from photogrammetric surface scans. We made the model and pathology-specific submodels publicly available, making it the first publicly available craniosynostosis-related head model, as well as the first focusing on infants younger than 1.5 years. To the best of our knowledge, we performed the largest classification study for craniosynostosis to date. Results: Our classification approach yields an accuracy of 97.8%, comparable to other state-of-the-art methods using both computed tomography scans and stereophotogrammetry. Regarding the statistical shape model, we demonstrate that our model performs similarly to other statistical shape models of the human head. Conclusion: We present a state-of-the-art shape-model-based classification approach for a radiation-free diagnosis of craniosynostosis. Our publicly available shape model enables the assessment of craniosynostosis on realistic and synthetic data. Full article

23 pages, 1561 KiB  
Article
Predicting Visual Acuity in Patients Treated for AMD
by Beatrice-Andreea Marginean, Adrian Groza, George Muntean and Simona Delia Nicoara
Diagnostics 2022, 12(6), 1504; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12061504 - 20 Jun 2022
Cited by 1 | Viewed by 1885
Abstract
The leading diagnostic tool in modern ophthalmology, Optical Coherence Tomography (OCT), is not yet able to establish the evolution of retinal diseases. Our task is to forecast the progression of retinal diseases by means of machine learning technologies. The aim is to help the ophthalmologist determine when early treatment is needed in order to prevent severe vision impairment or even blindness. The acquired data are made up of sequences of visits from multiple patients with age-related macular degeneration (AMD), which, if not treated at the appropriate time, may result in irreversible blindness. The dataset contains 94 patients with AMD and includes 161 eyes with more than one medical examination. We used various machine learning techniques (linear regression, gradient boosting, random forest and extremely randomised trees, bidirectional recurrent neural networks, LSTM networks, GRU networks) to handle technical challenges such as learning from small-sized time series, handling different time intervals between visits, and learning from different numbers of visits for each patient (1–5 visits). For predicting visual acuity, we performed several experiments with different features. First, by considering only previously measured visual acuity, the best accuracy of 0.96 was obtained with linear regression. Second, by considering numerical OCT features such as previous thickness and volume values in all retinal zones, the LSTM network reached the highest score (R² = 0.99). Third, by considering the fundus scan images represented as embeddings obtained from a convolutional autoencoder, the accuracy increased for all algorithms. The best forecasting results for visual acuity depend on the number of visits and the features used for prediction, i.e., 0.99 for the LSTM based on three visits (monthly resampled series) using numerical OCT values, fundus images, and previous visual acuities. Full article
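A minimal sketch of an LSTM regressor over padded visit sequences, in the spirit of the experiments described; the sequence length, feature count and layer sizes are illustrative assumptions, and the data are random placeholders.

```python
import numpy as np
from tensorflow.keras import Sequential, layers

N_VISITS, N_FEATURES = 3, 8  # e.g., visual acuity plus OCT thickness/volume values

model = Sequential([
    layers.Input(shape=(N_VISITS, N_FEATURES)),
    layers.Masking(mask_value=0.0),  # zero-padded rows mark missing visits
    layers.LSTM(32),
    layers.Dense(1),                 # next-visit visual acuity
])
model.compile(optimizer="adam", loss="mse")

# Illustrative training data: 161 eyes, up to 3 prior visits each.
rng = np.random.default_rng(0)
X = rng.normal(size=(161, N_VISITS, N_FEATURES))
y = rng.normal(size=(161, 1))
model.fit(X, y, epochs=5, batch_size=16, verbose=0)
```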

10 pages, 6563 KiB  
Article
Artificial Intelligence-Based Detection of Pneumonia in Chest Radiographs
by Judith Becker, Josua A. Decker, Christoph Römmele, Maria Kahn, Helmut Messmann, Markus Wehler, Florian Schwarz, Thomas Kroencke and Christian Scheurig-Muenkler
Diagnostics 2022, 12(6), 1465; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12061465 - 14 Jun 2022
Cited by 7 | Viewed by 2326
Abstract
Artificial intelligence is gaining increasing relevance in the field of radiology. This study retrospectively evaluates how a commercially available deep learning algorithm can detect pneumonia in chest radiographs (CR) in emergency departments. The chest radiographs of 948 patients with dyspnea, acquired between 3 February and 8 May 2020 and between 15 October and 15 December 2020, were used. A deep learning algorithm was used to identify opacifications associated with pneumonia, and its performance was evaluated using ROC analysis, sensitivity, specificity, PPV and NPV. Two radiologists assessed all enrolled images for pulmonary infection patterns as the reference standard. If consolidations or opacifications were present, the radiologists classified the pulmonary findings with regard to a possible COVID-19 infection because of the ongoing pandemic. The AUROC value of the deep learning algorithm reached 0.923 when detecting pneumonia in chest radiographs, with a sensitivity of 95.4%, specificity of 66.0%, PPV of 80.2% and NPV of 90.8%. The detection of COVID-19 pneumonia in CR by radiologists was achieved with a sensitivity of 50.6% and a specificity of 73%. The deep learning algorithm proved to be an excellent tool for detecting pneumonia in chest radiographs. Thus, the assessment of suspicious chest radiographs can be purposefully supported, shortening the turnaround time for reporting relevant findings and aiding early triage. Full article

16 pages, 5550 KiB  
Article
Deep Learning-Based Reconstruction vs. Iterative Reconstruction for Quality of Low-Dose Head-and-Neck CT Angiography with Different Tube-Voltage Protocols in Emergency-Department Patients
by Marc Lenfant, Pierre-Olivier Comby, Kevin Guillen, Felix Galissot, Karim Haioun, Anthony Thay, Olivier Chevallier, Frédéric Ricolfi and Romaric Loffroy
Diagnostics 2022, 12(5), 1287; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12051287 - 21 May 2022
Cited by 8 | Viewed by 2127
Abstract
Objective: To compare the image quality of computed tomography angiography of the supra-aortic arteries (CTSA) at different tube voltages in low-dose settings with deep learning-based image reconstruction (DLR) vs. hybrid iterative reconstruction (H-IR). Methods: We retrospectively reviewed 102 patients who underwent CTSA, with all scans systematically reconstructed with both DLR and H-IR. We assessed the image quality both quantitatively and qualitatively at 11 arterial segmental levels and 3 regional levels. Radiation-dose parameters were recorded and the effective dose was calculated. Eighty-six patients were eligible for analysis. Of these patients, 27 were imaged with 120 kVp, 30 with 100 kVp, and 29 with 80 kVp. Results: The effective dose at 120 kVp, 100 kVp and 80 kVp was 1.5 ± 0.4 mSv, 1.1 ± 0.3 mSv and 0.68 ± 0.1 mSv, respectively (p < 0.01). Comparing 80 kVp + DLR vs. 120 and 100 kVp + H-IR CT scans, the mean overall arterial attenuation was about 64% and 34% higher (625.9 ± 118.5 HU vs. 382.3 ± 98.6 HU and 468 ± 118.5 HU; p < 0.01) without a significant difference in terms of image noise (17.7 ± 4.9 HU vs. 17.5 ± 5.2; p = 0.7 and 18.1 ± 5.4; p = 0.3), and the signal-to-noise ratio increased by 59% and 33%, respectively (37.9 ± 12.3 vs. 23.8 ± 9.7 and 28.4 ± 12.5). This protocol also provided superior image quality in terms of qualitative parameters compared to standard-kVp protocols with H-IR. The highest subjective image-quality grades for vascular segments close to the aorta were obtained with the 100 kVp + DLR protocol. Conclusions: DLR significantly reduced image noise and improved the overall image quality of CTSA at both low and standard tube voltages and at all vascular segments. CT acquired at 80 kVp and reconstructed with DLR yielded better overall image quality compared to higher kVp values with H-IR, while reducing the radiation dose by half, but it has limitations for arteries close to the aortic arch. Full article

12 pages, 811 KiB  
Article
CT-Angiography-Based Outcome Prediction on Diabetic Foot Ulcer Patients: A Statistical Learning Approach
by Di Zhang, Wei Dong, Haonan Guan, Aobuliaximu Yakupu, Hanqi Wang, Liuping Chen, Shuliang Lu and Jiajun Tang
Diagnostics 2022, 12(5), 1076; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12051076 - 25 Apr 2022
Cited by 3 | Viewed by 1858
Abstract
The purpose of our study is to predict the occurrence and prognosis of diabetic foot ulcers (DFUs) from the clinical and lower extremity computed tomography angiography (CTA) data of patients using an artificial neural network (ANN) model. DFU is a common complication of diabetes that severely affects the quality of life of patients, leading to amputation and even death. There is a lack of valid techniques for predicting the prognosis of DFU. In clinical practice, the use of scales alone has a large subjective component, leading to significant bias and heterogeneity, and there is a lack of evidence-based support for patients to develop clinical strategies before reaching end-stage outcomes. The present study provides a novel technical tool for predicting the prognosis of DFU. After screening the data, 203 patients with DFUs were analyzed and divided into two subgroups based on their Wagner score (138 patients in the low Wagner score group and 65 patients in the high Wagner score group). Based on clinical and lower extremity CTA data, 10 predictive factors were selected for inclusion in the model. The total dataset was randomly divided into training, testing and holdout samples in a ratio of 3:1:1. After the ANN model was developed on the training and testing samples, the holdout sample was used to assess the accuracy of the model. ANN model analysis shows that the sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV) and area under the curve (AUC) of the overall ANN model were 92.3%, 93.5%, 87.0%, 94.2% and 0.955, respectively. The proposed model performed strongly in the prediction of DFU, with 91.6% accuracy. Evaluated on the holdout sample, the model accuracy, sensitivity, specificity, PPV and NPV were 88.9%, 90.0%, 88.5%, 75.0% and 95.8%, respectively. By contrast, the logistic regression model was inferior to the ANN model. The ANN model can accurately and reliably predict the occurrence and prognosis of a DFU according to clinical and lower extremity CTA data, providing clinicians with a novel technical tool to develop clinical strategies before end-stage outcomes. Full article
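The 3:1:1 train/test/holdout protocol can be reproduced with two successive splits; scikit-learn's MLPClassifier stands in for the authors' ANN, and all sizes below are placeholders.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholders for the 10 clinical/CTA predictors and Wagner-group labels.
rng = np.random.default_rng(1)
X, y = rng.normal(size=(203, 10)), rng.integers(0, 2, size=203)

# 3:1:1 split: carve off the holdout fifth, then a quarter of the rest as test.
X_dev, X_hold, y_dev, y_hold = train_test_split(X, y, test_size=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X_dev, y_dev, test_size=0.25, random_state=0)

ann = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
ann.fit(X_tr, y_tr)
print(f"Test: {ann.score(X_te, y_te):.2%}, holdout: {ann.score(X_hold, y_hold):.2%}")
```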

11 pages, 1852 KiB  
Article
Use of a Feed-Forward Back Propagation Network for the Prediction of Small for Gestational Age Newborns in a Cohort of Pregnant Patients with Thrombophilia
by Petronela Vicoveanu, Ingrid Andrada Vasilache, Ioana Sadiye Scripcariu, Dragos Nemescu, Alexandru Carauleanu, Dragos Vicoveanu, Ana Roxana Covali, Catalina Filip and Demetra Socolov
Diagnostics 2022, 12(4), 1009; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12041009 - 16 Apr 2022
Cited by 9 | Viewed by 1640
Abstract
(1) Background: Fetal growth restriction is a relatively common disorder in pregnant patients with thrombophilia. New artificial intelligence algorithms are a promising option for the prediction of adverse obstetrical outcomes. The aim of this study was to evaluate the predictive performance of a Feed-Forward Back Propagation Network (FFBPN) for the prediction of small for gestational age (SGA) newborns in a cohort of pregnant patients with thrombophilia. (2) Methods: This observational retrospective study included all pregnancies in women with thrombophilia who attended two tertiary maternity hospitals in Romania between January 2013 and December 2020. Bivariate associations between SGA and each predictor variable were evaluated. Clinical and paraclinical predictors were further included in an FFBPN, and its predictive performance was assessed. (3) Results: The model had an area under the curve (AUC) of 0.95, with a true positive rate of 86.7% and a false discovery rate of 10.5%. The overall accuracy of our model was 90%. (4) Conclusion: This is the first study in the literature to evaluate the performance of an FFBPN for identifying pregnant patients with thrombophilia at high risk of giving birth to SGA newborns, and its promising results could lead to tailored prenatal management. Full article

17 pages, 1573 KiB  
Article
Automated Generation of Synoptic Reports from Narrative Pathology Reports in University Malaya Medical Centre Using Natural Language Processing
by Wee-Ming Tan, Kean-Hooi Teoh, Mogana Darshini Ganggayah, Nur Aishah Taib, Hana Salwani Zaini and Sarinder Kaur Dhillon
Diagnostics 2022, 12(4), 879; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12040879 - 01 Apr 2022
Cited by 1 | Viewed by 2923
Abstract
Pathology reports represent a primary source of information for cancer registries. University Malaya Medical Centre (UMMC) is a tertiary hospital responsible for training pathologists; narrative reporting therefore remains important. However, unstructured free-text reports make the information extraction process tedious for clinical audits and data analysis-related research. This study aims to develop an automated natural language processing (NLP) algorithm that summarizes the existing narrative breast pathology reports from UMMC into structured synoptic pathology reports with a checklist-style report template, easing the creation of pathology reports. The rule-based NLP algorithm was developed in the R programming language using 593 pathology specimens from 174 patients provided by the Department of Pathology, UMMC. A pathologist provided specific keywords for the data elements to define the semantic rules of the NLP. The system was evaluated by calculating precision, recall, and F1-score. The proposed NLP algorithm achieved a micro-F1 score of 99.50% and a macro-F1 score of 98.97% on 178 specimens with 25 data elements. This achievement corresponds to clinicians’ needs and could improve communication between pathologists and clinicians. The study presented here is significant, as structured data are easily minable and could generate important insights. Full article
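The rule-based idea, keyword patterns mapping narrative text to synoptic fields, is easy to illustrate. The original algorithm was implemented in R, so the Python sketch below, with made-up rules and field names, is only schematic.

```python
import re

# Hypothetical keyword rules: synoptic field -> regex over the narrative text.
RULES = {
    "tumour_size_mm": re.compile(r"tumou?r\s+measur\w*\s+(\d+(?:\.\d+)?)\s*mm", re.I),
    "laterality": re.compile(r"\b(left|right)\s+breast\b", re.I),
    "margin_status": re.compile(r"margins?\s+(?:are\s+)?(involved|clear|free)", re.I),
}

def to_synoptic(report_text):
    """Map a narrative pathology report to a checklist-style synoptic record."""
    record = {}
    for field, pattern in RULES.items():
        m = pattern.search(report_text)
        record[field] = m.group(1).lower() if m else "not stated"
    return record

print(to_synoptic("Right breast lumpectomy. The tumour measures 23 mm. Margins are clear."))
```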

14 pages, 5368 KiB  
Article
TongueCaps: An Improved Capsule Network Model for Multi-Classification of Tongue Color
by Jinghong Ni, Zhuangzhi Yan and Jiehui Jiang
Diagnostics 2022, 12(3), 653; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12030653 - 08 Mar 2022
Cited by 5 | Viewed by 2053
Abstract
Tongue color is an important part of tongue diagnosis. Changes in tongue color are affected by the pathological state of the body, blood rheology, and other factors; physicians can therefore understand a patient’s condition by observing tongue color. Currently, most studies use traditional machine learning, which is time-consuming and labor-intensive. Other studies use deep learning based on convolutional neural networks (CNNs), but CNNs are not robust to affine transformations and easily lose the spatial relationships between features. Recently, Capsule Networks (CapsNet) have been proposed to overcome these problems. In our work, CapsNet is used for tongue color research for the first time, and an improved model, TongueCaps, is proposed, which combines the advantages of CapsNet with a residual block structure to achieve end-to-end tongue color classification. We conducted experiments on 1371 tongue images; TongueCaps achieved an accuracy of 0.8456, a sensitivity of 0.8474, and a specificity of 0.9586. In addition, the size of TongueCaps is 8.11 M and its FLOP count is 1,335,342, both smaller than those of the CNNs in the comparison models. The experiments confirm that CapsNet can be used for tongue color research and that the improved TongueCaps model proposed in this paper is superior to the comparison models in terms of accuracy, specificity and sensitivity, computational complexity, and model size. Full article

10 pages, 1544 KiB  
Article
Faster and Better: How Anomaly Detection Can Accelerate and Improve Reporting of Head Computed Tomography
by Tom Finck, Julia Moosbauer, Monika Probst, Sarah Schlaeger, Madeleine Schuberth, David Schinz, Mehmet Yiğitsoy, Sebastian Byas, Claus Zimmer, Franz Pfister and Benedikt Wiestler
Diagnostics 2022, 12(2), 452; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12020452 - 10 Feb 2022
Cited by 4 | Viewed by 1868
Abstract
Background: Most artificial intelligence (AI) systems are restricted to solving a pre-defined task, thus limiting their generalizability to unselected datasets. Anomaly detection relieves this shortfall by flagging all pathologies as deviations from a learned norm. Here, we investigate whether diagnostic accuracy and reporting times can be improved by an anomaly detection tool for head computed tomography (CT), tailored to provide patient-level triage and voxel-based highlighting of pathologies. Methods: Four neuroradiologists with 1–10 years of experience each investigated a set of 80 routinely acquired head CTs containing 40 normal scans and 40 scans with common pathologies. In a random order, scans were investigated with and without AI predictions. A 4-week wash-out period between runs was included to prevent a reminiscence effect. Performance metrics for identifying pathologies, reporting times, and subjectively assessed diagnostic confidence were determined for both runs. Results: AI support significantly increased the share of correctly classified scans (normal/pathological) from 309/320 scans to 317/320 scans (p = 0.0045), with a corresponding sensitivity, specificity, negative and positive predictive value of 100%, 98.1%, 98.2% and 100%, respectively. Further, reporting was significantly accelerated with AI support, as evidenced by a 15.7% reduction in reporting times (65.1 ± 8.9 s vs. 54.9 ± 7.1 s; p < 0.0001). Diagnostic confidence was similar in both runs. Conclusion: Our study shows that AI-based triage of CTs can improve diagnostic accuracy and accelerate reporting for experienced and inexperienced radiologists alike. Through ad hoc identification of normal CTs, anomaly detection promises to guide clinicians towards scans requiring urgent attention. Full article

9 pages, 1093 KiB  
Article
Comparison between Deep Learning and Conventional Machine Learning in Classifying Iliofemoral Deep Venous Thrombosis upon CT Venography
by Jung Han Hwang, Jae Won Seo, Jeong Ho Kim, Suyoung Park, Young Jae Kim and Kwang Gi Kim
Diagnostics 2022, 12(2), 274; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12020274 - 21 Jan 2022
Cited by 8 | Viewed by 2771
Abstract
In this study, we aimed to quantitatively compare the performance of two categories of artificial intelligence algorithms for the automated classification of deep vein thrombosis (DVT): deep learning based on convolutional neural networks (CNNs) and conventional machine learning. We retrospectively enrolled 659 participants (DVT patients, 282; normal controls, 377) who were evaluated using contrast-enhanced lower extremity computed tomography (CT) venography. The conventional machine learning methods consisted of logistic regression (LR), support vector machines (SVM), random forests (RF), and extreme gradient boosting (XGB). The CNN-based deep learning models included VGG16, VGG19, ResNet50, and ResNet152. According to the mean generated AUC values, the CNN-based VGG16 model showed a 0.007 higher performance (0.982 ± 0.014) compared with the XGB model (0.975 ± 0.010), which had the highest performance among the conventional machine learning models. In the conventional machine learning-based classifications, the radiomic features with a statistically significant effect were the median values and skewness. We found that the VGG16 model distinguished deep vein thrombosis on CT images most accurately, with slightly higher AUC values than the other AI algorithms used in this study. Our results can help guide research directions and medical practice. Full article

17 pages, 2300 KiB  
Article
Hybrid Deep Learning Model for Endoscopic Lesion Detection and Classification Using Endoscopy Videos
by M Shahbaz Ayyaz, Muhammad Ikram Ullah Lali, Mubbashar Hussain, Hafiz Tayyab Rauf, Bader Alouffi, Hashem Alyami and Shahbaz Wasti
Diagnostics 2022, 12(1), 43; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12010043 - 26 Dec 2021
Cited by 15 | Viewed by 3484
Abstract
In medical imaging, the detection and classification of stomach diseases are challenging due to the resemblance of different symptoms, image contrast, and complex backgrounds. Computer-aided diagnosis (CAD) plays a vital role in the medical imaging field, allowing accurate results to be obtained in minimal time. This article proposes a new hybrid method to detect and classify stomach diseases using endoscopy videos. The proposed methodology comprises seven significant steps: data acquisition, data preprocessing, transfer learning of deep models, feature extraction, feature selection, hybridization, and classification. We selected two different CNN models (VGG19 and AlexNet) to extract features, applying transfer learning techniques before using them as feature extractors. We used a genetic algorithm (GA) for feature selection, due to its adaptive nature, and fused the selected features of both models using a serial-based approach, as sketched below. Finally, the best features were provided to multiple machine learning classifiers for detection and classification. The proposed approach was evaluated on a personally collected dataset of five classes, including gastritis, ulcer, esophagitis, bleeding, and healthy. The proposed technique performed best with a cubic SVM, reaching 99.8% accuracy. To establish the validity of the proposed technique, we considered these statistical measures: classification accuracy, recall, precision, false negative rate (FNR), area under the curve (AUC), and time. In addition, we provide a fair state-of-the-art comparison of our proposed technique with existing techniques that demonstrates its merit. Full article
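The serial-based fusion step concatenates the GA-selected features of both backbones; in the sketch below the genetic algorithm itself is abstracted to boolean masks, which is an assumption for illustration.

```python
import numpy as np

def serial_fuse(feats_a, feats_b, mask_a, mask_b):
    """Serial fusion: concatenate the GA-selected features of two models.
    The genetic algorithm is abstracted to boolean masks over each vector."""
    return np.hstack([feats_a[:, mask_a], feats_b[:, mask_b]])

rng = np.random.default_rng(0)
f_vgg19, f_alexnet = rng.normal(size=(100, 4096)), rng.normal(size=(100, 4096))
m_vgg19 = rng.random(4096) > 0.5   # stand-in for GA output
m_alexnet = rng.random(4096) > 0.5
fused = serial_fuse(f_vgg19, f_alexnet, m_vgg19, m_alexnet)
print(fused.shape)  # (100, n_selected_vgg19 + n_selected_alexnet)
```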

Review

37 pages, 1784 KiB  
Review
Exploring the Intersection of Artificial Intelligence and Clinical Healthcare: A Multidisciplinary Review
by Celina Silvia Stafie, Irina-Georgeta Sufaru, Cristina Mihaela Ghiciuc, Ingrid-Ioana Stafie, Eduard-Constantin Sufaru, Sorina Mihaela Solomon and Monica Hancianu
Diagnostics 2023, 13(12), 1995; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics13121995 - 07 Jun 2023
Cited by 8 | Viewed by 4315
Abstract
Artificial intelligence (AI) plays an increasingly important role in our everyday life due to the advantages it brings, such as 24/7 availability, a very low percentage of errors, the ability to provide real-time insights, and fast analysis. AI is increasingly being used in clinical medical and dental healthcare analyses, with valuable applications that include disease diagnosis, risk assessment, treatment planning, and drug discovery. This paper presents a narrative literature review of AI use in healthcare from a multidisciplinary perspective, specifically in the cardiology, allergology, endocrinology, and dental fields. The paper highlights data from recent research and development efforts in AI for healthcare, as well as the challenges and limitations associated with AI implementation, such as data privacy and security considerations, along with ethical and legal concerns. The regulation of the responsible design, development, and use of AI in healthcare is still in its early stages due to the rapid evolution of the field. However, it is our duty to carefully consider the ethical implications of implementing AI and to respond appropriately. With the potential to reshape healthcare delivery and enhance patient outcomes, AI systems continue to reveal their capabilities. Full article

14 pages, 1243 KiB  
Review
Diagnostic Performance Evaluation of Multiparametric Magnetic Resonance Imaging in the Detection of Prostate Cancer with Supervised Machine Learning Methods
by Hamide Nematollahi, Masoud Moslehi, Fahimeh Aminolroayaei, Maryam Maleki and Daryoush Shahbazi-Gahrouei
Diagnostics 2023, 13(4), 806; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics13040806 - 20 Feb 2023
Cited by 6 | Viewed by 1922
Abstract
Prostate cancer is the second leading cause of cancer-related death in men. Its early and correct diagnosis is of particular importance to controlling and preventing the disease from spreading to other tissues. Artificial intelligence and machine learning have effectively detected and graded several cancers, in particular prostate cancer. The purpose of this review is to show the diagnostic performance (accuracy and area under the curve) of supervised machine learning algorithms in detecting prostate cancer using multiparametric MRI. A comparison was made between the performances of different supervised machine-learning methods. This review study was performed on the recent literature sourced from scientific citation websites such as Google Scholar, PubMed, Scopus, and Web of Science up to the end of January 2023. The findings of this review reveal that supervised machine learning techniques have good performance with high accuracy and area under the curve for prostate cancer diagnosis and prediction using multiparametric MR imaging. Among supervised machine learning methods, deep learning, random forest, and logistic regression algorithms appear to have the best performance. Full article

14 pages, 624 KiB  
Review
The Future Is Coming: Artificial Intelligence in the Treatment of Infertility Could Improve Assisted Reproduction Outcomes—The Value of Regulatory Frameworks
by Sanja Medenica, Dusan Zivanovic, Ljubica Batkoska, Susanna Marinelli, Giuseppe Basile, Antonio Perino, Gaspare Cucinella, Giuseppe Gullo and Simona Zaami
Diagnostics 2022, 12(12), 2979; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12122979 - 28 Nov 2022
Cited by 32 | Viewed by 4592
Abstract
Infertility is a global health issue affecting women and men of reproductive age, with increasing incidence worldwide, in part due to greater awareness and better diagnosis. Assisted reproduction technologies (ART) are considered the ultimate step in the treatment of infertility. Recently, artificial intelligence (AI) has been progressively used in many fields of medicine, integrating knowledge and computer science through machine learning algorithms. AI has the potential to improve infertility diagnosis and ART outcomes, estimated as pregnancy and/or live birth rates, especially in cases of recurrent ART failure. A broad-ranging review was conducted, focusing on clinical AI applications up until September 2022, covering possible applications such as ultrasound monitoring of folliculogenesis, assessment of endometrial receptivity, embryo selection based on quality and viability, and prediction of post-implantation embryo development, with the aim of eliminating potential contributing risk factors. Oocyte morphology assessment is highly relevant to successful fertilization rates, as well as during oocyte freezing for fertility preservation, and is substantially valuable in oocyte donation cycles. AI has great implications for the assessment of male infertility, with computerised semen analysis systems already in use and a broad spectrum of possible AI-based applications in environmental and lifestyle evaluation to predict semen quality. In addition, considerable progress has been made in harnessing AI in cases of idiopathic infertility, to improve the stratification of infertile/fertile couples based on their biological and clinical signatures. With AI as a very powerful tool for the future, our review summarises current AI applications and investigations in contemporary reproductive medicine, mainly focusing on its nonsurgical aspects; in addition, the authors briefly explore the frames of reference and guiding principles for the definition and implementation of legal, regulatory, and ethical standards for AI in healthcare. Full article
(This article belongs to the Special Issue Artificial Intelligence in Clinical Medical Imaging Analysis)

25 pages, 6126 KiB  
Review
Theory and Practice of Integrating Machine Learning and Conventional Statistics in Medical Data Analysis
by Sarinder Kaur Dhillon, Mogana Darshini Ganggayah, Siamala Sinnadurai, Pietro Lio and Nur Aishah Taib
Diagnostics 2022, 12(10), 2526; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12102526 - 18 Oct 2022
Cited by 4 | Viewed by 3557
Abstract
The practice of medical decision making is changing rapidly with the development of innovative computing technologies. Growing interest in data analysis, together with improvements in big data processing methods, raises the question of whether machine learning can be integrated with conventional statistics in health research. To help address this knowledge gap, this paper reviews the conceptual integration of conventional statistics and machine learning, focusing on health research. The similarities and differences between the two are compared using mathematical concepts and algorithms. The comparison indicates that conventional statistics form the fundamental basis of machine learning: the so-called black-box algorithms are derived from basic mathematics but are more advanced in terms of automated analysis, handling big data, and providing interactive visualizations. While the two approaches differ in nature, they are conceptually similar. Based on this review, we conclude that conventional statistics and machine learning are best integrated to develop automated data analysis tools. We also believe that health researchers could explore machine learning to enhance conventional statistics in decision making and to add reliable validation measures. Full article
(This article belongs to the Special Issue Artificial Intelligence in Clinical Medical Imaging Analysis)
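The conceptual overlap the authors describe is easy to see with logistic regression, which lives in both camps. The hedged sketch below, on synthetic data of our own (not material from the paper), fits the same model through a statistical lens (coefficient inference via statsmodels) and a machine-learning lens (cross-validated predictive accuracy via scikit-learn).

```python
# Illustration of the reviewed idea that conventional statistics underpin
# machine learning: one logistic regression, two perspectives.
# Synthetic data; library choices are ours, not the paper's.
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 2))
y = (1.5 * X[:, 0] - X[:, 1] + rng.logistic(size=300) > 0).astype(int)

# Statistical view: inference on coefficients and their p-values
stats_fit = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
print("coefficients:", stats_fit.params)
print("p-values:   ", stats_fit.pvalues)

# Machine-learning view: held-out predictive performance
ml_acc = cross_val_score(LogisticRegression(), X, y, cv=5).mean()
print(f"5-fold CV accuracy: {ml_acc:.2f}")
```

The model and its mathematics are identical in both views; only the question changes, from "which predictors matter, and how confidently?" to "how well does it predict unseen cases?", which is the integration the paper argues for.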

Other


20 pages, 1893 KiB  
Systematic Review
Artificial Intelligence-Based Cervical Cancer Screening on Images Taken during Visual Inspection with Acetic Acid: A Systematic Review
by Roser Viñals, Magali Jonnalagedda, Patrick Petignat, Jean-Philippe Thiran and Pierre Vassilakos
Diagnostics 2023, 13(5), 836; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics13050836 - 22 Feb 2023
Cited by 2 | Viewed by 4103
Abstract
Visual inspection with acetic acid (VIA) is one of the methods recommended by the World Health Organization for cervical cancer screening. VIA is simple and low-cost, but it is highly subjective. We conducted a systematic literature search in PubMed, Google Scholar, and Scopus to identify automated algorithms for classifying images taken during VIA as negative (healthy/benign) or precancerous/cancerous. Of the 2608 studies identified, 11 met the inclusion criteria. The algorithm with the highest accuracy in each study was selected, and some of its key features were analyzed. The algorithms were analyzed and compared in terms of sensitivity and specificity, which ranged from 0.22 to 0.93 and from 0.67 to 0.95, respectively. The quality and risk of bias of each study were assessed following the QUADAS-2 guidelines. Artificial intelligence-based cervical cancer screening algorithms have the potential to become a key tool for supporting cervical cancer screening, especially in settings that lack healthcare infrastructure and trained personnel. However, the presented studies assess their algorithms on small datasets of highly selected images that do not reflect whole screened populations. Large-scale testing under real conditions is required to assess the feasibility of integrating these algorithms into clinical settings. Full article
(This article belongs to the Special Issue Artificial Intelligence in Clinical Medical Imaging Analysis)
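Since this review compares algorithms solely on sensitivity and specificity, a brief worked example of how those two numbers fall out of a binary confusion matrix may help. The labels and predictions below are hypothetical, not data from the review.

```python
# Worked example (hypothetical data, not from the review):
# sensitivity = TP / (TP + FN), specificity = TN / (TN + FP).
from sklearn.metrics import confusion_matrix

y_true = [1, 1, 1, 0, 0, 0, 0, 1]  # 1 = precancerous/cancerous, 0 = negative
y_pred = [1, 1, 0, 0, 0, 1, 0, 1]  # hypothetical classifier output

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)  # fraction of true lesions the algorithm catches
specificity = tn / (tn + fp)  # fraction of healthy cases correctly ruled out
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```

A sensitivity as low as 0.22, as reported for one of the included algorithms, would mean most true lesions are missed, which is why the review stresses the trade-off between the two metrics rather than either in isolation.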

16 pages, 2617 KiB  
Systematic Review
Diagnostic Accuracy of AI for Opportunistic Screening of Abdominal Aortic Aneurysm in CT: A Systematic Review and Narrative Synthesis
by Maria R. Kodenko, Yuriy A. Vasilev, Anton V. Vladzymyrskyy, Olga V. Omelyanskaya, Denis V. Leonov, Ivan A. Blokhin, Vladimir P. Novik, Nicholas S. Kulberg, Andrey V. Samorodov, Olesya A. Mokienko and Roman V. Reshetnikov
Diagnostics 2022, 12(12), 3197; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12123197 - 16 Dec 2022
Cited by 5 | Viewed by 1868
Abstract
In this review, we focused on the applicability of artificial intelligence (AI) for opportunistic abdominal aortic aneurysm (AAA) detection in computed tomography (CT). We used the academic search system PubMed as the primary source for the literature search and Google Scholar as a supplementary source of evidence, covering publications up to 2 February 2022. All studies on automated AAA detection or segmentation in noncontrast abdominal CT were included. For bias assessment, we developed and used an adapted version of the QUADAS-2 checklist. We included eight studies with 355 cases, of which 273 (77%) contained AAA. The highest risk of bias and greatest applicability concerns were observed for the “patient selection” domain, owing to the 100% pathology rate in the majority (75%) of the studies. The mean sensitivity was 95% (95% CI 87–100%), the mean specificity was 96.6% (95% CI 75.7–100%), and the mean accuracy was 95.2% (95% CI 54.5–100%). Half of the included studies estimated diagnostic accuracy, but only one reported all diagnostic accuracy metrics; we therefore conducted a narrative synthesis. Our findings indicate high study heterogeneity, and further research with balanced noncontrast CT datasets and adherence to reporting standards is required to validate the high sensitivity obtained. Full article
(This article belongs to the Special Issue Artificial Intelligence in Clinical Medical Imaging Analysis)
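The wide confidence intervals reported above (e.g., 54.5–100% for accuracy) are what the "high study heterogeneity" conclusion rests on. As a rough illustration of how a pooled mean and 95% CI across a handful of per-study values can be obtained, here is a minimal sketch using a t-interval; the per-study sensitivities are made up, and the review does not specify its exact pooling method.

```python
# Hedged sketch: pooled mean and 95% CI across per-study sensitivities.
# Values are hypothetical; the review's actual pooling method is unstated.
import numpy as np
from scipy import stats

sens = np.array([0.90, 0.97, 0.99, 0.93])  # hypothetical per-study sensitivities
mean = sens.mean()
half_width = stats.t.ppf(0.975, df=len(sens) - 1) * sens.std(ddof=1) / np.sqrt(len(sens))
lo, hi = max(0.0, mean - half_width), min(1.0, mean + half_width)  # clip to [0, 1]
print(f"mean sensitivity {mean:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```

With only eight studies, a few outlying values inflate the interval dramatically, which matches the review's call for balanced datasets and standardized reporting before the headline sensitivity is taken at face value.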
