Computer Aided Diagnosis Sensors

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Biomedical Sensors".

Deadline for manuscript submissions: closed (1 July 2022) | Viewed by 140956

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editors


Guest Editor
University of Louisville, Louisville, United States
Interests: cardiac and vascular mechanics; myocardial recovery; translational research; biomedical device development and sensors

Guest Editor
University of Louisville, USA
Interests: computer vision; image processing; medical imaging; bioengineering

Guest Editor
University of Louisville, USA
Interests: computer vision; image processing; robotics; object detection; tracking; medical imaging; facial biometrics and sensors

Guest Editor
College of Engineering, Abu Dhabi University, Abu Dhabi, United Arab Emirates
Interests: bioimaging; image/video processing; smart systems; machine learning; sensors

Special Issue Information

Dear Colleagues,

Sensors used to diagnose, monitor, or treat diseases in the medical domain are known as medical sensors. Many types of medical sensors serve different applications: temperature probes; force, pressure, and heart rate sensors; oximeters; electrocardiogram (ECG) sensors, which measure the electrical activity of the heart; electroencephalogram (EEG) sensors, which measure the electrical activity of the brain; electromyogram (EMG) sensors, which record the electrical activity produced by skeletal muscles; and respiration rate sensors, which count how many times the chest rises in a minute. The output of these sensors used to be interpreted by humans, which was time consuming and tedious; such interpretation has become far easier with advances in artificial intelligence (AI) techniques and the integration of sensor outputs into computer-aided diagnostic (CAD) systems. This Special Issue presents state-of-the-art AI approaches for diagnosing different diseases and disorders from data collected by medical sensors. The ultimate goal is comprehensive and automated computer-aided diagnosis, with a focus on the machine learning algorithms suited to this purpose as well as novel applications in the medical field.

Prof. Dr. Ayman El-Baz
Prof. Dr. Guruprasad A. Giridharan
Dr. Ahmed Shalaby
Dr. Ali H. Mahmoud
Prof. Dr. Mohammed Ghazal
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • New technologies for medical applications
  • Developing computer-aided diagnosis systems
  • Wearable sensors for assessing health
  • Machine learning approaches for medical images
  • Sensors in medical robotics
  • ECG-based CAD systems
  • Electromyogram sensor in medical image analysis

Published Papers (35 papers)


Editorial


4 pages, 203 KiB  
Editorial
Special Issue “Computer Aided Diagnosis Sensors”
by Ayman El-Baz, Guruprasad A. Giridharan, Ahmed Shalaby, Ali H. Mahmoud and Mohammed Ghazal
Sensors 2022, 22(20), 8052; https://0-doi-org.brum.beds.ac.uk/10.3390/s22208052 - 21 Oct 2022
Cited by 1 | Viewed by 1172
Abstract
Sensors used to diagnose, monitor or treat diseases in the medical domain are known as medical sensors [...] Full article
(This article belongs to the Special Issue Computer Aided Diagnosis Sensors)

Research


37 pages, 3149 KiB  
Article
A New Approach for Detecting Fundus Lesions Using Image Processing and Deep Neural Network Architecture Based on YOLO Model
by Carlos Santos, Marilton Aguiar, Daniel Welfer and Bruno Belloni
Sensors 2022, 22(17), 6441; https://0-doi-org.brum.beds.ac.uk/10.3390/s22176441 - 26 Aug 2022
Cited by 16 | Viewed by 3873
Abstract
Diabetic Retinopathy is one of the main causes of vision loss, and in its initial stages it presents with fundus lesions, such as microaneurysms, hard exudates, hemorrhages, and soft exudates. Computational models capable of detecting these lesions can help in the early diagnosis of the disease and prevent the manifestation of more severe forms of lesions, helping in screening and in defining the best form of treatment. However, detecting these lesions with computerized systems is challenging for numerous reasons: the size and shape characteristics of the lesions, the noise and contrast of the images available in public Diabetic Retinopathy datasets, the number of labeled examples of these lesions available in those datasets, and the difficulty deep learning algorithms have in detecting very small objects in digital images. To overcome these problems, this work proposes a new approach based on image processing techniques, data augmentation, transfer learning, and deep neural networks to assist in the medical diagnosis of fundus lesions. The proposed approach was trained, adjusted, and tested using the public DDR and IDRiD Diabetic Retinopathy datasets and implemented in the PyTorch framework based on the YOLOv5 model. On the DDR dataset, the approach reached an mAP of 0.2630 at an IoU threshold of 0.5 with an F1-score of 0.3485 in the validation stage, and an mAP of 0.1540 at the same threshold with an F1-score of 0.2521 in the test stage. The experimental results demonstrate that the proposed approach outperforms works with the same purpose found in the literature. Full article
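The mAP figures above are computed at an IoU threshold of 0.5. A minimal sketch of the IoU computation underlying that metric (the (x1, y1, x2, y2) box format is an assumption for illustration, not taken from the paper):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

At the 0.5 threshold, a predicted lesion box counts as a true positive only when `iou(pred, truth) >= 0.5`.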

16 pages, 3187 KiB  
Article
On the Automatic Detection and Classification of Skin Cancer Using Deep Transfer Learning
by Mohammad Fraiwan and Esraa Faouri
Sensors 2022, 22(13), 4963; https://0-doi-org.brum.beds.ac.uk/10.3390/s22134963 - 30 Jun 2022
Cited by 38 | Viewed by 3967
Abstract
Skin cancer (melanoma and non-melanoma) is one of the most common cancer types and leads to hundreds of thousands of deaths worldwide every year. It manifests itself through the abnormal growth of skin cells. Early diagnosis drastically increases the chances of recovery. Moreover, it may render surgical, radiographic, or chemical therapies unnecessary or lessen their overall usage, thereby reducing healthcare costs. The process of diagnosing skin cancer starts with dermoscopy, which inspects the general shape, size, and color characteristics of skin lesions; suspected lesions then undergo further sampling and lab tests for confirmation. Image-based diagnosis has undergone great advances recently due to the rise of deep learning in artificial intelligence. The work in this paper examines the applicability of raw deep transfer learning in classifying images of skin lesions into seven possible categories. Using the HAM10000 dataset of dermoscopy images, a system that accepts these images as input without explicit feature extraction or preprocessing was developed using 13 deep transfer learning models. Extensive evaluation revealed the advantages and shortcomings of such a method. Although some cancer types were correctly classified with high accuracy, the imbalance of the dataset, the small number of images in some categories, and the large number of classes reduced the best overall accuracy to 82.9%. Full article
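The abstract attributes the modest overall accuracy partly to dataset imbalance. A small pure-Python illustration (hypothetical class counts, not HAM10000's) of why overall accuracy can mislead on imbalanced data and balanced accuracy is the fairer summary:

```python
def overall_accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls, so each class counts equally."""
    recalls = []
    for c in set(y_true):
        idx = [i for i, t in enumerate(y_true) if t == c]
        recalls.append(sum(y_pred[i] == c for i in idx) / len(idx))
    return sum(recalls) / len(recalls)

# 90 benign vs 10 melanoma samples; a degenerate classifier that
# always predicts the majority class still scores 90% overall
y_true = ["benign"] * 90 + ["melanoma"] * 10
y_pred = ["benign"] * 100
```

Here the overall accuracy is 0.9 while the balanced accuracy is only 0.5, exposing that the minority class is never detected.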

19 pages, 3003 KiB  
Article
A Model for Predicting Cervical Cancer Using Machine Learning Algorithms
by Naif Al Mudawi and Abdulwahab Alazeb
Sensors 2022, 22(11), 4132; https://0-doi-org.brum.beds.ac.uk/10.3390/s22114132 - 29 May 2022
Cited by 37 | Viewed by 5024
Abstract
A growing number of individuals and organizations are turning to machine learning (ML) and deep learning (DL) to analyze massive amounts of data and produce actionable insights. Predicting the early stages of serious illnesses using ML-based schemes, including cancer, kidney failure, and heart attacks, is becoming increasingly common in medical practice. Cervical cancer is one of the most frequent diseases among women, and early diagnosis could be a possible solution for preventing this cancer. Thus, this study presents an astute way to predict cervical cancer with ML algorithms. Research dataset, data pre-processing, predictive model selection (PMS), and pseudo-code are the four phases of the proposed research technique. The PMS section reports experiments with a range of classic machine learning methods, including decision tree (DT), logistic regression (LR), support vector machine (SVM), K-nearest neighbors algorithm (KNN), adaptive boosting, gradient boosting, random forest, and XGBoost. In terms of cervical cancer prediction, the highest classification score of 100% is achieved with random forest (RF), decision tree (DT), adaptive boosting, and gradient boosting algorithms. In contrast, 99% accuracy has been found with SVM. The computational complexity of classic machine learning techniques is computed to assess the efficacy of the models. In addition, 132 Saudi Arabian volunteers were polled as part of this study to learn their thoughts about computer-assisted cervical cancer prediction, to focus attention on the human papillomavirus (HPV). Full article
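K-nearest neighbors, one of the classical methods compared above, can be sketched in a few lines of pure Python (the toy clusters below are illustrative, not the paper's cervical-cancer features):

```python
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among the k nearest training points."""
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    neighbors = sorted(zip(train_X, train_y),
                       key=lambda pair: dist(pair[0], x))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# two well-separated toy clusters, labeled 0 and 1
train_X = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
train_y = [0, 0, 0, 1, 1, 1]
```

A query near either cluster is assigned that cluster's label; the classical methods in the paper (SVM, RF, boosting) replace this vote with their own decision rules over the same feature vectors.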

16 pages, 2237 KiB  
Article
Brain MRI Analysis for Alzheimer’s Disease Diagnosis Using CNN-Based Feature Extraction and Machine Learning
by Duaa AlSaeed and Samar Fouad Omar
Sensors 2022, 22(8), 2911; https://0-doi-org.brum.beds.ac.uk/10.3390/s22082911 - 11 Apr 2022
Cited by 35 | Viewed by 6560
Abstract
Alzheimer’s disease is the most common form of dementia and the fifth-leading cause of death among people over the age of 65. In addition, based on official records, deaths from Alzheimer’s disease have increased significantly. Hence, early diagnosis of Alzheimer’s disease can increase patients’ survival rates. Machine learning methods applied to magnetic resonance imaging have been used to accelerate the diagnosis of Alzheimer’s disease and assist physicians. However, in conventional machine learning techniques, handcrafted feature extraction from MRI images is complicated and requires the involvement of an expert user. Therefore, deep learning as an automatic feature extraction method could minimize the need for manual feature engineering and automate the process. In this study, we propose the pre-trained CNN ResNet50 as an automatic feature extraction method for diagnosing Alzheimer’s disease from MRI images. The performance of the CNN with a conventional Softmax classifier, SVM, and RF was then evaluated using several metrics, including accuracy. The results showed that our model outperformed other state-of-the-art models, with accuracies ranging from 85.7% to 99% on the ADNI MRI dataset. Full article
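The pipeline's pattern, a frozen pretrained CNN used purely as a feature extractor feeding a conventional classifier, can be sketched schematically. Below, a fixed random projection stands in for ResNet50 features and a nearest-centroid rule stands in for the SVM/RF; every name, shape, and number is illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 8))  # stand-in for a frozen CNN backbone

def extract_features(images):
    """Project flattened 'images' (n, 64) to 8-D feature vectors."""
    return images @ W

def fit_centroids(feats, labels):
    return {c: feats[labels == c].mean(axis=0) for c in np.unique(labels)}

def predict(feats, centroids):
    classes = sorted(centroids)
    d = np.stack([np.linalg.norm(feats - centroids[c], axis=1)
                  for c in classes])
    return np.array([classes[i] for i in d.argmin(axis=0)])

# two synthetic "image" classes with different mean intensities
X = np.vstack([rng.normal(0.0, 0.1, (20, 64)),
               rng.normal(1.0, 0.1, (20, 64))])
y = np.array([0] * 20 + [1] * 20)
feats = extract_features(X)
model = fit_centroids(feats, y)
acc = (predict(feats, model) == y).mean()
```

The design point is the separation of concerns: the feature extractor is trained once (or taken pretrained) and frozen, so only the lightweight classifier is fit on the medical dataset.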

15 pages, 5271 KiB  
Article
Automated Diagnosis of Optical Coherence Tomography Angiography (OCTA) Based on Machine Learning Techniques
by Ibrahim Yasser, Fahmi Khalifa, Hisham Abdeltawab, Mohammed Ghazal, Harpal Singh Sandhu and Ayman El-Baz
Sensors 2022, 22(6), 2342; https://0-doi-org.brum.beds.ac.uk/10.3390/s22062342 - 18 Mar 2022
Cited by 8 | Viewed by 2651
Abstract
Diabetic retinopathy (DR) refers to the ophthalmological complications of diabetes mellitus. It is primarily a disease of the retinal vasculature that can lead to vision loss. Optical coherence tomography angiography (OCTA) can detect changes in the retinal vascular system, which can help in the early detection of DR. In this paper, we describe a novel framework that detects DR from OCTA by capturing appearance and morphological markers of the retinal vascular system. The framework consists of the following main steps: (1) extracting the retinal vascular system from OCTA images using a joint Markov-Gibbs Random Field (MGRF) model of OCTA image appearance and (2) estimating the distance map inside the extracted vascular system, which serves as an imaging marker describing the morphology of the retinal vascular (RV) system. The OCTA images, extracted vascular system, and RV-estimated distance map are then combined into a three-dimensional matrix used as input to a convolutional neural network (CNN). The main motivation for this data representation is that it combines low-level data with high-level processed data, allowing the CNN to capture significant features and better distinguish DR from the normal retina. This is applied at multiple scales, covering the original full-dimension images as well as sub-images extracted from the original OCTA images. The proposed approach was tested on in vivo data from 91 patients, qualitatively graded by retinal experts, and quantitatively validated using three metrics: sensitivity, specificity, and overall accuracy. The results demonstrate the capability of the proposed approach, which outperforms current deep learning and feature-based DR detection approaches. Full article
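Step (2) of the framework computes a distance map inside the segmented vessels. The paper's exact transform is not specified here; as a sketch, a multi-source BFS yields a Manhattan-distance map of a binary vessel mask (production pipelines often use a Euclidean transform such as `scipy.ndimage.distance_transform_edt` instead):

```python
from collections import deque

def distance_map(mask):
    """Manhattan distance from each foreground pixel (1) to the nearest
    background pixel (0), computed by multi-source BFS."""
    h, w = len(mask), len(mask[0])
    dist = [[0 if mask[r][c] == 0 else None for c in range(w)]
            for r in range(h)]
    queue = deque((r, c) for r in range(h) for c in range(w)
                  if mask[r][c] == 0)
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    return dist

mask = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
dmap = distance_map(mask)
```

Larger values mark pixels deeper inside the vessel, which is why the map encodes vessel caliber and morphology.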

11 pages, 1434 KiB  
Article
Effect of Strength Training Protocol on Bone Mineral Density for Postmenopausal Women with Osteopenia/Osteoporosis Assessed by Dual-Energy X-ray Absorptiometry (DEXA)
by Iulian Ștefan Holubiac, Florin Valentin Leuciuc, Daniela Maria Crăciun and Tatiana Dobrescu
Sensors 2022, 22(5), 1904; https://0-doi-org.brum.beds.ac.uk/10.3390/s22051904 - 28 Feb 2022
Cited by 9 | Viewed by 5924
Abstract
This study aims to introduce a resistance training protocol (6 repetitions × 70% of 1 maximum repetition (1RM), followed by 6 repetitions × 50% of 1RM within the same set) specifically designed for postmenopausal women with osteopenia/osteoporosis and monitor the effect of the protocol on bone mineral density (BMD) in the lumbar spine, assessed by dual-energy X-ray absorptiometry (DEXA). The subjects included in the study were 29 postmenopausal women (56.5 ± 2.8 years) with osteopenia or osteoporosis; they were separated into two groups: the experimental group (n = 15), in which the subjects participated in the strength training protocol for a period of 6 months; and the control group (n = 14), in which the subjects did not take part in any physical activity. BMD in the lumbar spine was measured by DEXA. The measurements were performed at the beginning and end of the study. A statistically significant increase (Δ% = 1.82%) in BMD was observed at the end of the study for the exercise group (0.778 ± 0.042 at baseline vs. 0.792 ± 0.046 after 6 months, p = 0.018, 95% CI [−0.025, −0.003]); while an increase was observed for the control group (Δ% = 0.14%), the difference was not statistically significant (0.762 ± 0.057 at baseline vs. 0.763 ± 0.059, p = 0.85, 95% CI [−0.013, 0.011]). In conclusion, our strength training protocol seems to be effective in increasing BMD among women with osteopenia/osteoporosis and represents an affordable strategy for preventing future bone loss. Full article
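The Δ% figures follow the ordinary percent-change formula; recomputing from the rounded group means quoted in the abstract recovers approximately the reported value:

```python
def percent_change(baseline, follow_up):
    """Relative change from baseline, in percent."""
    return (follow_up - baseline) / baseline * 100

# exercise group BMD means from the abstract (baseline vs 6 months)
delta = percent_change(0.778, 0.792)
```

This gives roughly 1.80%, close to the reported Δ% = 1.82%; the small gap comes from the group means being rounded to three decimals in the abstract.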

22 pages, 7429 KiB  
Article
A New Framework for Precise Identification of Prostatic Adenocarcinoma
by Sarah M. Ayyad, Mohamed A. Badawy, Mohamed Shehata, Ahmed Alksas, Ali Mahmoud, Mohamed Abou El-Ghar, Mohammed Ghazal, Moumen El-Melegy, Nahla B. Abdel-Hamid, Labib M. Labib, H. Arafat Ali and Ayman El-Baz
Sensors 2022, 22(5), 1848; https://0-doi-org.brum.beds.ac.uk/10.3390/s22051848 - 26 Feb 2022
Cited by 11 | Viewed by 2464
Abstract
Prostate cancer, also known as prostatic adenocarcinoma, is an unconstrained growth of epithelial cells in the prostate and has become one of the leading causes of cancer-related death worldwide. The survival of patients with prostate cancer relies on detection at an early, treatable stage. In this paper, we introduce a new comprehensive framework to precisely differentiate between malignant and benign prostate pathologies. This framework proposes a noninvasive computer-aided diagnosis system that integrates two MR imaging modalities (diffusion-weighted (DW) and T2-weighted (T2W)). For the first time, it combines functional features, represented by apparent diffusion coefficient (ADC) maps estimated from DW-MRI for the whole prostate; texture features, with their first- and second-order representations, extracted from T2W-MRIs of the whole prostate; and shape features, represented by spherical harmonics constructed for the lesion inside the prostate, integrated with PSA screening results. The dataset presented in the paper includes 80 biopsy-confirmed patients with a mean age of 65.7 years (43 benign prostatic hyperplasia, 37 prostatic carcinomas). Experiments were conducted using well-known machine learning approaches, including support vector machine (SVM), random forest (RF), decision tree (DT), and linear discriminant analysis (LDA) classification models, to study the impact of different feature sets on the identification of prostatic adenocarcinoma. Using a leave-one-out cross-validation approach, the diagnostic results obtained with the SVM classification model and the combined feature set after feature selection (88.75% accuracy, 81.08% sensitivity, 95.35% specificity, and 0.8821 AUC) indicated that integrating and reducing different types of feature sets yields enhanced diagnostic performance compared with each individual feature set and the other machine learning classifiers. In addition, the developed system provided consistent diagnostic performance under 10-fold and 5-fold cross-validation, which confirms its reliability, generalization ability, and robustness. Full article
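Leave-one-out cross-validation, as used for the headline results above, can be sketched in a few lines; the nearest-centroid classifier and toy data below are illustrative stand-ins for the paper's SVM and MRI-derived features:

```python
def nearest_centroid_fit(X, y):
    """Per-class mean of the training points."""
    cents = {}
    for c in set(y):
        pts = [x for x, lbl in zip(X, y) if lbl == c]
        cents[c] = tuple(sum(v) / len(pts) for v in zip(*pts))
    return cents

def nearest_centroid_predict(cents, x):
    d2 = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(cents, key=lambda c: d2(cents[c], x))

def leave_one_out_accuracy(X, y):
    """Train on all samples but one, test on the held-out sample, repeat."""
    hits = 0
    for i in range(len(X)):
        model = nearest_centroid_fit(X[:i] + X[i + 1:], y[:i] + y[i + 1:])
        hits += nearest_centroid_predict(model, X[i]) == y[i]
    return hits / len(X)

X = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1),
     (1.0, 1.0), (0.9, 1.1), (1.1, 0.9)]
y = ["benign", "benign", "benign",
     "malignant", "malignant", "malignant"]
acc = leave_one_out_accuracy(X, y)
```

With only 80 patients, leave-one-out makes maximal use of the data: every patient serves as a test case exactly once while the model is fit on the remaining 79.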

28 pages, 4041 KiB  
Article
Feature-Based Fusion Using CNN for Lung and Heart Sound Classification
by Zeenat Tariq, Sayed Khushal Shah and Yugyung Lee
Sensors 2022, 22(4), 1521; https://0-doi-org.brum.beds.ac.uk/10.3390/s22041521 - 16 Feb 2022
Cited by 33 | Viewed by 6404
Abstract
Lung or heart sound classification is challenging due to the complex nature of audio data and its dynamic properties in the time and frequency domains. It is also very difficult to detect lung or heart conditions from small amounts of data or from unbalanced and noisy data. Furthermore, data quality is a considerable pitfall for improving the performance of deep learning. In this paper, we propose a novel feature-based fusion network called FDC-FS for classifying heart and lung sounds. The FDC-FS framework aims to effectively transfer learning from three different deep neural network models built from audio datasets. The innovation of the proposed transfer learning relies on the transformation from audio data to image vectors and from three specific models to one fused model better suited for deep learning. We used two publicly available datasets for this study, i.e., lung sound data from the ICBHI 2017 challenge and heart challenge data. We applied data augmentation techniques, such as noise distortion, pitch shift, and time stretching, to deal with some of the data issues in these datasets. Importantly, we extracted three unique features from the audio samples, i.e., Spectrogram, MFCC, and Chromagram. Finally, we built a fusion of three optimal convolutional neural network models by feeding in the image feature vectors transformed from the audio features. We confirmed the superiority of the proposed fusion model compared to state-of-the-art works. The highest accuracy we achieved with FDC-FS was 99.1% for Spectrogram-based lung sound classification, while Spectrogram- and Chromagram-based heart sound classification reached 97%. Full article
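Spectrogram features, which gave the best results above, are magnitude short-time Fourier transforms of the audio. A minimal numpy sketch (frame and hop sizes are illustrative, not the paper's settings):

```python
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    """Magnitude STFT: Hann-windowed frames -> real FFT -> |.|."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop:i * hop + frame_len] * window
                       for i in range(n_frames)])
    # one row per frame, frame_len // 2 + 1 frequency bins per row
    return np.abs(np.fft.rfft(frames, axis=1))

# pure tone completing exactly 8 cycles per 256-sample frame
n = np.arange(1024)
tone = np.sin(2 * np.pi * 8 * n / 256)
S = spectrogram(tone)
```

Each row of `S` is one time slice; for the pure tone, every slice peaks in frequency bin 8, which is the kind of time-frequency structure the CNN consumes once the matrix is rendered as an image.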

16 pages, 9287 KiB  
Article
The Role of Diffusion Tensor MR Imaging (DTI) of the Brain in Diagnosing Autism Spectrum Disorder: Promising Results
by Yaser ElNakieb, Mohamed T. Ali, Ahmed Elnakib, Ahmed Shalaby, Ahmed Soliman, Ali Mahmoud, Mohammed Ghazal, Gregory Neal Barnes and Ayman El-Baz
Sensors 2021, 21(24), 8171; https://0-doi-org.brum.beds.ac.uk/10.3390/s21248171 - 07 Dec 2021
Cited by 16 | Viewed by 4522
Abstract
Autism spectrum disorder (ASD) is a combination of developmental anomalies that causes social and behavioral impairments, affecting around 2% of US children. Common symptoms include difficulties in communication, interaction, and behavioral disabilities. The onset of symptoms can start in early childhood, yet repeated visits to a pediatric specialist are needed before reaching a diagnosis. Still, this diagnosis is usually subjective, and scores can vary from one specialist to another. Previous literature suggests differences in brain development, environmental factors, and/or genetic factors play a role in developing autism, yet scientists still do not know the exact pathology of this disorder. Currently, the gold standard for diagnosing ASD is a set of diagnostic evaluations, such as the Autism Diagnostic Observation Schedule (ADOS) or the Autism Diagnostic Interview-Revised (ADI-R) report. These gold standard diagnostic instruments are intensive, lengthy, and subjective, involving a set of behavioral and communication tests and clinical history information conducted by a team of qualified clinicians. Emerging advancements in neuroimaging and machine learning techniques can provide a fast and objective alternative to conventional repetitive observational assessments. This paper provides a thorough study of implementing feature engineering tools to find discriminant insights from brain imaging of white matter connectivity and using a machine learning framework for accurate classification of autistic individuals. This work highlights important findings about impacted brain areas that contribute to an autism diagnosis and presents promising accuracy results. We verified our proposed framework on a large publicly available DTI dataset of 225 subjects from the Autism Brain Imaging Data Exchange-II (ABIDE-II) initiative, achieving a global balanced accuracy of up to 99% over the five sites with 5-fold cross-validation. The data used were slightly unbalanced, including 125 autistic subjects and 100 typically developed (TD) ones. The achieved balanced accuracy of the proposed technique is the highest in the literature, which elucidates the importance of the feature engineering steps involved in extracting useful knowledge and the promising potential of adopting neuroimaging for the diagnosis of autism. Full article
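With 125 autistic and 100 TD subjects, stratified folds keep the class ratio constant across the 5-fold cross-validation. A pure-Python sketch of the fold assignment (index bookkeeping only, not the paper's pipeline; real work would also shuffle within each class first):

```python
def stratified_folds(labels, k=5):
    """Assign sample indices to k folds so each fold preserves class balance."""
    folds = [[] for _ in range(k)]
    by_class = {}
    for i, lbl in enumerate(labels):
        by_class.setdefault(lbl, []).append(i)
    for indices in by_class.values():
        for j, i in enumerate(indices):  # deal each class round-robin
            folds[j % k].append(i)
    return folds

labels = ["ASD"] * 125 + ["TD"] * 100
folds = stratified_folds(labels, k=5)
```

Each of the five folds ends up with 25 ASD and 20 TD subjects, so every validation split mirrors the 125:100 cohort ratio.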

26 pages, 7322 KiB  
Article
Brain Strategy Algorithm for Multiple Object Tracking Based on Merging Semantic Attributes and Appearance Features
by Mai S. Diab, Mostafa A. Elhosseini, Mohamed S. El-Sayed and Hesham A. Ali
Sensors 2021, 21(22), 7604; https://0-doi-org.brum.beds.ac.uk/10.3390/s21227604 - 16 Nov 2021
Cited by 3 | Viewed by 2184
Abstract
The human brain can effortlessly perform vision processes using the visual system, which helps solve multi-object tracking (MOT) problems. However, few algorithms simulate human strategies for solving MOT. Devising a method that simulates human visual activity is therefore a promising way to improve MOT results, especially under occlusion. Eight brain strategies were studied from a cognitive perspective and imitated to build a novel algorithm. Two of these strategies, rescue saccades and stimulus attributes, gave our algorithm novel and outstanding results. First, rescue saccades were imitated by detecting the occlusion state in each frame, representing the critical situation that the human brain saccades toward. Then, stimulus attributes were mimicked by using semantic attributes to re-identify the person in these occlusion states. Our algorithm performs favourably on the MOT17 dataset compared to state-of-the-art trackers. In addition, we created a new dataset of 40,000 images, 190,000 annotations, and 4 classes to train the detection model to detect occlusion and semantic attributes. The experimental results demonstrate that our new dataset achieves outstanding performance on the scaled YOLOv4 detection model, achieving an mAP@0.5 of 0.89. Full article
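Re-identifying a person after occlusion ultimately feeds a track-detection association step. A minimal greedy IoU-based association sketch (the paper additionally uses semantic attributes; the function names, boxes, and threshold here are illustrative, showing only the geometric part):

```python
def iou(a, b):
    """Intersection-over-Union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def associate(tracks, detections, threshold=0.3):
    """Greedily match each detection to the unclaimed track of highest IoU."""
    scored = sorted(((iou(t, d), ti, di)
                     for ti, t in enumerate(tracks)
                     for di, d in enumerate(detections)), reverse=True)
    pairs, used_t, used_d = [], set(), set()
    for s, ti, di in scored:
        if s < threshold:
            break
        if ti not in used_t and di not in used_d:
            pairs.append((ti, di))
            used_t.add(ti)
            used_d.add(di)
    return pairs

tracks = [(0, 0, 10, 10), (20, 20, 30, 30)]
detections = [(21, 19, 31, 29), (1, 1, 11, 11)]
pairs = associate(tracks, detections)
```

Detections that overlap no track above the threshold stay unmatched, which is exactly where appearance or semantic cues, like those the paper proposes, must take over.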

15 pages, 3749 KiB  
Article
Non-Contact Spirometry Using a Mobile Thermal Camera and AI Regression
by Luay Fraiwan, Natheer Khasawneh, Khaldon Lweesy, Mennatalla Elbalki, Amna Almarzooqi and Nada Abu Hamra
Sensors 2021, 21(22), 7574; https://0-doi-org.brum.beds.ac.uk/10.3390/s21227574 - 15 Nov 2021
Cited by 5 | Viewed by 3056
Abstract
Non-contact physiological measurements have been under investigation for many years, and among these measurements is non-contact spirometry, which could provide acute and chronic pulmonary disease monitoring and diagnosis. This work presents a feasibility study for non-contact spirometry measurements using a mobile thermal imaging system. Thermal images were acquired from 19 subjects for measuring the respiration rate and the volume of inhaled and exhaled air. A mobile application was built to measure the respiration rate and export the respiration signal to a personal computer. The mobile application acquired thermal video images at a rate of nine frames/second and the OpenCV library was used for localization of the area of interest (nose and mouth). Artificial intelligence regressors were used to predict the inhalation and exhalation air volume. Several regressors were tested and four of them showed excellent performance: random forest, adaptive boosting, gradient boosting, and decision trees. The latter showed the best regression results, with an R-square value of 0.9998 and a mean square error of 0.0023. The results of this study showed that non-contact spirometry based on a thermal imaging system is feasible and provides all the basic measurements that the conventional spirometers support. Full article
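The regression quality above is summarized with an R-square value and a mean squared error; both are standard metrics, sketched here in pure Python (toy numbers, not the paper's data):

```python
def mse(y_true, y_pred):
    """Mean squared error between targets and predictions."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def r_squared(y_true, y_pred):
    """1 - (residual sum of squares / total sum of squares)."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

y_true = [1.0, 2.0, 3.0, 4.0]
y_pred = [1.1, 1.9, 3.0, 4.0]
```

A perfect regressor gives R-square = 1 and MSE = 0, which is why the paper's reported 0.9998 and 0.0023 indicate a near-exact fit of predicted to measured air volumes.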

24 pages, 3236 KiB  
Article
Atrial Fibrillation Classification with Smart Wearables Using Short-Term Heart Rate Variability and Deep Convolutional Neural Networks
by Jayroop Ramesh, Zahra Solatidehkordi, Raafat Aburukba and Assim Sagahyroon
Sensors 2021, 21(21), 7233; https://0-doi-org.brum.beds.ac.uk/10.3390/s21217233 - 30 Oct 2021
Cited by 25 | Viewed by 5847
Abstract
Atrial fibrillation (AF) is a type of cardiac arrhythmia affecting millions of people every year. This disease increases the likelihood of strokes, heart failure, and even death. While dedicated medical-grade electrocardiogram (ECG) devices can enable gold-standard analysis, these devices are expensive and require clinical settings. Recent advances in the capabilities of general-purpose smartphones and wearable technology equipped with photoplethysmography (PPG) sensors increase diagnostic accessibility for most populations. This work aims to develop a single model that can generalize AF classification across the ECG and PPG modalities with a unified knowledge representation. This is enabled by approximating the transformation of signals obtained from low-cost wearable PPG sensors, in terms of pulse rate variability (PRV), to the temporal heart rate variability (HRV) features extracted from medical-grade ECG. This paper proposes a one-dimensional deep convolutional neural network that uses HRV-derived features to classify 30 s heart rhythms as normal sinus rhythm or atrial fibrillation from both ECG- and PPG-based sensors. The model is trained on three MIT-BIH ECG databases and assessed, through transfer learning, on a dataset of unseen PPG signals acquired from wrist-worn wearable devices. The model achieved aggregate binary classification performance of 95.50% accuracy, 94.50% sensitivity, and 96.00% specificity under five-fold cross-validation on the ECG datasets, and 95.10% accuracy, 94.60% sensitivity, and 95.20% specificity on the unseen PPG dataset. The results show considerable promise towards seamless adaptation of gold-standard, ECG-trained models for non-ambulatory AF detection with consumer wearable devices through HRV-based knowledge transfer.
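The bridge between ECG and PPG here is the HRV feature space. A sketch of the standard short-term time-domain HRV features (mean RR, SDNN, RMSSD, pNN50) that can be computed from ECG RR intervals, or approximated from PPG pulse intervals; the paper's exact feature set may differ:

```python
# Time-domain HRV features from RR (or PPG pulse) intervals in milliseconds.
import numpy as np

def hrv_features(rr_ms):
    rr = np.asarray(rr_ms, dtype=float)
    diff = np.diff(rr)
    return {
        "mean_rr": rr.mean(),                  # average beat interval
        "sdnn": rr.std(ddof=1),                # overall variability
        "rmssd": np.sqrt(np.mean(diff ** 2)),  # beat-to-beat variability
        "pnn50": np.mean(np.abs(diff) > 50.0), # fraction of successive diffs > 50 ms
    }

# A regular sinus-like series vs. an irregular (AF-like) series:
regular = [800, 810, 795, 805, 800, 790, 805]
irregular = [650, 920, 700, 1100, 560, 980, 720]
f_reg, f_irr = hrv_features(regular), hrv_features(irregular)
```

AF's hallmark irregularity shows up directly: the irregular series yields much larger SDNN and RMSSD than the sinus-like one.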
(This article belongs to the Special Issue Computer Aided Diagnosis Sensors)

16 pages, 2621 KiB  
Article
A Flow Sensor-Based Suction-Index Control Strategy for Rotary Left Ventricular Assist Devices
by Lixue Liang, Kairong Qin, Ayman S. El-Baz, Thomas J. Roussel, Palaniappan Sethu, Guruprasad A. Giridharan and Yu Wang
Sensors 2021, 21(20), 6890; https://0-doi-org.brum.beds.ac.uk/10.3390/s21206890 - 18 Oct 2021
Cited by 4 | Viewed by 2056
Abstract
Rotary left ventricular assist devices (LVADs) have emerged as a long-term treatment option for patients with advanced heart failure. LVADs need to maintain sufficient physiological perfusion while avoiding left ventricular myocardial damage due to suction at the LVAD inlet. To achieve these objectives, a control algorithm is proposed that utilizes a suction index calculated from measured pump flow (SIMPF). This algorithm maintains a user-defined reference SIMPF value and was evaluated using an in silico model of the human circulatory system coupled to an axial or mixed-flow LVAD, with 5–10% uniformly distributed measurement noise added to the flow sensors. Efficacy of the SIMPF algorithm was compared to the constant pump speed control strategy currently used clinically and to control algorithms proposed in the literature, including differential pump speed control, left ventricular end-diastolic pressure control, mean aortic pressure control, and differential pressure control, during (1) rest and exercise states; (2) rapid, eight-fold augmentation of pulmonary vascular resistance during (1); and (3) rapid changes in physiologic state between rest and exercise. Maintaining SIMPF simultaneously provided sufficient physiological perfusion and avoided ventricular suction. Performance of the SIMPF algorithm was superior to the compared control strategies for both types of LVAD, demonstrating the pump independence of the SIMPF algorithm.
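The general shape of such a setpoint-tracking controller can be sketched as simple feedback on a flow-derived index. The plant model below (suction index rising with pump speed) and the gain are hypothetical stand-ins; the paper's actual SIMPF definition and control law are not reproduced here:

```python
# Hypothetical plant: suction risk (and hence the index) grows with pump
# speed; the flow-derived reading carries uniform measurement noise, echoing
# the 5-10% noise used in the in silico evaluation.
import numpy as np

def suction_index(speed, rng):
    return (speed / 10000.0) * rng.uniform(0.95, 1.05)

rng = np.random.default_rng(1)
reference = 0.4        # user-defined SIMPF setpoint
speed = 2000.0         # initial pump speed (rpm)
gain = 2000.0          # feedback gain (illustrative)

history = []
for _ in range(200):
    si = suction_index(speed, rng)
    speed -= gain * (si - reference)   # slow down when the index exceeds the setpoint
    history.append(si)
```

Despite the measurement noise, the measured index settles around the user-defined reference, which is the behavior the abstract describes.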
(This article belongs to the Special Issue Computer Aided Diagnosis Sensors)

15 pages, 1469 KiB  
Article
Encoder-Decoder Architecture for Ultrasound IMC Segmentation and cIMT Measurement
by Aisha Al-Mohannadi, Somaya Al-Maadeed, Omar Elharrouss and Kishor Kumar Sadasivuni
Sensors 2021, 21(20), 6839; https://0-doi-org.brum.beds.ac.uk/10.3390/s21206839 - 14 Oct 2021
Cited by 9 | Viewed by 2059
Abstract
Cardiovascular diseases (CVDs) have a huge impact on the number of deaths worldwide. Thus, common carotid artery (CCA) segmentation and intima-media thickness (IMT) measurement have been widely implemented for early diagnosis of CVDs by analyzing IMT features. Computer vision algorithms are not widely used on CCA images for this type of diagnosis, due to the complexity of the task and the lack of datasets. The advancement of deep learning techniques has made accurate early diagnosis from images possible. In this paper, a deep-learning-based approach is proposed to perform semantic segmentation of the intima-media complex (IMC) and to calculate the cIMT measurement. To overcome the lack of large-scale datasets, an encoder-decoder-based model using multi-image inputs is proposed, which helps the model learn well from different features. The obtained results were evaluated using several image segmentation metrics, which demonstrate the effectiveness of the proposed architecture. In addition, the IMT is computed, and the experiments showed that the proposed model is robust and fully automated compared with state-of-the-art work.
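Once the IMC is segmented, cIMT can be read off the binary mask. A sketch under the simplifying assumption that thickness is the mean vertical extent of the mask per image column, scaled by pixel spacing (the column-wise rule and spacing value are illustrative, not the paper's exact procedure):

```python
# cIMT from a binary IMC segmentation mask: mean mask height per column
# times the physical pixel spacing.
import numpy as np

def cimt_mm(imc_mask, pixel_spacing_mm):
    counts = imc_mask.sum(axis=0)   # IMC pixels in each image column
    counts = counts[counts > 0]     # ignore columns without IMC
    return counts.mean() * pixel_spacing_mm

# Toy mask: a horizontal band 4 px thick across a 20-column image.
mask = np.zeros((32, 20), dtype=int)
mask[10:14, :] = 1
thickness = cimt_mm(mask, pixel_spacing_mm=0.06)   # 4 px * 0.06 mm = 0.24 mm
```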
(This article belongs to the Special Issue Computer Aided Diagnosis Sensors)

18 pages, 593 KiB  
Article
Determination of Chewing Count from Video Recordings Using Discrete Wavelet Decomposition and Low Pass Filtration
by Sana Alshboul and Mohammad Fraiwan
Sensors 2021, 21(20), 6806; https://0-doi-org.brum.beds.ac.uk/10.3390/s21206806 - 13 Oct 2021
Cited by 5 | Viewed by 2161
Abstract
Several studies have shown the importance of proper chewing and the effect of chewing speed on human health in terms of caloric intake and even cognitive function. This study aims at designing algorithms for determining the chew count from video recordings of subjects consuming food items. A novel algorithm based on image and signal processing techniques was developed to continuously capture the area of interest from the video clips, determine facial landmarks, generate the chewing signal, and process the signal with two methods: a low pass filter and discrete wavelet decomposition. Peak detection was used to determine the chew count from the processed chewing signal. The system was tested using recordings from 100 subjects at three chewing speeds (slow, normal, and fast) without any constraints on gender, skin color, facial hair, or ambience. The low pass filter algorithm achieved the best mean absolute percentage errors of 6.48%, 7.76%, and 8.38% for the slow, normal, and fast chewing speeds, respectively. Performance was also evaluated using a Bland–Altman plot, which showed that most of the points lie within the lines of agreement. Although the algorithm still needs improvement at faster chewing speeds, it surpasses the performance reported in the relevant literature. This research provides a reliable and accurate method for determining the chew count, and the proposed methods facilitate research into chewing behavior in natural settings, using smart devices and without cumbersome hardware that may affect the results.
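The low-pass-filter-plus-peak-detection path above can be sketched on a synthetic chewing signal (a slow oscillation plus high-frequency noise); the sampling rate, cutoff, and peak parameters are illustrative values, not the paper's:

```python
# Chew counting: low-pass filter a noisy chewing signal, then count peaks.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

fs = 30.0                                    # video frame rate (Hz)
t = np.arange(0.0, 10.0, 1.0 / fs)           # a 10 s clip
signal = (np.sin(2 * np.pi * 1.5 * t)        # ~1.5 chews per second
          + 0.3 * np.random.default_rng(0).normal(size=t.size))

# 4th-order Butterworth low-pass at 3 Hz, applied forward-backward
# (zero phase), so peak positions are not shifted.
b, a = butter(4, 3.0 / (fs / 2.0), btype="low")
smooth = filtfilt(b, a, signal)

# Count chews as prominent peaks, assuming at most ~3 chews per second.
peaks, _ = find_peaks(smooth, height=0.5, distance=fs / 3.0)
chew_count = len(peaks)
```

For this clip (1.5 chews/s over 10 s) the count lands near 15, which is the kind of agreement the Bland–Altman analysis quantifies against manual counts.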
(This article belongs to the Special Issue Computer Aided Diagnosis Sensors)

14 pages, 1186 KiB  
Article
A Deep Learning Pipeline for Grade Groups Classification Using Digitized Prostate Biopsy Specimens
by Kamal Hammouda, Fahmi Khalifa, Moumen El-Melegy, Mohamed Ghazal, Hanan E. Darwish, Mohamed Abou El-Ghar and Ayman El-Baz
Sensors 2021, 21(20), 6708; https://0-doi-org.brum.beds.ac.uk/10.3390/s21206708 - 09 Oct 2021
Cited by 13 | Viewed by 2135
Abstract
Prostate cancer is a significant cause of morbidity and mortality in the USA. In this paper, we develop a computer-aided diagnostic (CAD) system for automated grade group (GG) classification using digitized prostate biopsy specimens (PBSs). Our CAD system first classifies the Gleason pattern (GP) and then identifies the Gleason score (GS) and GG. The GP classification pipeline is based on a pyramidal deep learning system that utilizes three convolutional neural networks (CNNs) to produce both patch- and pixel-wise classifications. The analysis starts with sequential preprocessing steps that include histogram equalization to adjust intensity values, followed by edge enhancement of the PBSs. The digitized PBSs are then divided into overlapping patches of three sizes, 100 × 100 (CNNS), 150 × 150 (CNNM), and 200 × 200 (CNNL) pixels, with 75% overlap; these three patch sizes represent the three pyramidal levels. This pyramidal technique extracts rich information: the larger patches give more global information, while the small patches provide local details. The patch-wise stage assigns each overlapping patch a GP label (1 to 5), and majority voting then produces the pixel-wise classification, giving each overlapped pixel a single label. The result is three label images of the same size as the original, one per pyramidal level, and majority voting is applied again across these three images to obtain a single label image. The proposed framework is trained, validated, and tested on 608 whole slide images (WSIs) of digitized PBSs. The overall diagnostic accuracy is evaluated using several metrics: precision, recall, F1-score, accuracy, macro-averaged, and weighted-averaged. CNNL has the best patch classification accuracy among the three CNNs, at 0.76. The macro-averaged and weighted-averaged metrics are around 0.70–0.77. For GG, our CAD achieves about 80% precision, between 60% and 80% for recall and F1-score, and around 94% for accuracy and NPV. To put our CAD system's results in context, we compared its patch-wise classification against the standard ResNet50 and VGG-16, and compared the GG results with those of previous work.
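The overlapping-patch labelling followed by pixel-wise majority voting can be sketched as follows. The "classifier" is a stand-in that labels a patch by its dominant intensity, and the patch size and stride are toy values; in the paper each pyramidal level uses its own CNN:

```python
# Label every overlapping patch, then give each pixel its majority label.
import numpy as np

def patch_majority_vote(image, patch, stride, classify):
    h, w = image.shape
    votes = np.zeros((h, w, 6), dtype=int)           # tally for labels 0..5
    for r in range(0, h - patch + 1, stride):
        for c in range(0, w - patch + 1, stride):
            label = classify(image[r:r + patch, c:c + patch])
            votes[r:r + patch, c:c + patch, label] += 1
    return votes.argmax(axis=2)                      # pixel-wise majority label

# Toy image: left half "pattern 1", right half "pattern 3".
img = np.ones((16, 16), dtype=int)
img[:, 8:] = 3
labels = patch_majority_vote(img, patch=4, stride=2,
                             classify=lambda p: int(np.round(p.mean())))
```

The same voting idea is applied a second time across the three per-level label images to fuse the pyramid into one result.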
(This article belongs to the Special Issue Computer Aided Diagnosis Sensors)

17 pages, 2657 KiB  
Article
Optical Detection of SARS-CoV-2 Utilizing Antigen-Antibody Binding Interactions
by Mahmoud Al Ahmad, Farah Mustafa, Neena Panicker and Tahir A. Rizvi
Sensors 2021, 21(19), 6596; https://0-doi-org.brum.beds.ac.uk/10.3390/s21196596 - 02 Oct 2021
Cited by 4 | Viewed by 2872
Abstract
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the virus responsible for the coronavirus disease (COVID-19) pandemic, is sweeping the world today. This study investigates the optical detection of SARS-CoV-2 through antigen-antibody binding interactions, using a light source from a smartphone and a portable spectrophotometer. The proof of concept is shown by detecting soluble preparations of spike protein subunits from SARS-CoV-2, followed by detection of the actual binding potential of the SARS-CoV-2 proteins with their corresponding antibodies. Binding interactions of the RBD and NCP proteins with their corresponding antibodies were measured and analyzed under different conditions. Based on these observations, a "hump", or spike, in light intensity is observed when a specific molecular interaction takes place between two proteins. The optical responses can be further analyzed using principal component analysis to enhance detection and allow precise identification of a specific target in a multi-protein mixture.
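A sketch of the principal-component step on fabricated "spectra" (a baseline plus a protein-specific intensity hump plus noise); real inputs would be the spectrophotometer intensity curves:

```python
# PCA separating the optical responses of two hypothetical protein preparations.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
wavelengths = np.linspace(400, 700, 150)

def spectrum(center_nm):
    """Baseline + Gaussian 'hump' at a protein-specific wavelength + noise."""
    hump = np.exp(-((wavelengths - center_nm) ** 2) / (2 * 15.0 ** 2))
    return 1.0 + hump + 0.02 * rng.normal(size=wavelengths.size)

# 20 measurements each of protein A (hump at 480 nm) and protein B (620 nm).
X = np.array([spectrum(480) for _ in range(20)] + [spectrum(620) for _ in range(20)])
scores = PCA(n_components=2).fit_transform(X)

# The first principal component separates the two proteins cleanly.
pc1_a, pc1_b = scores[:20, 0], scores[20:, 0]
```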
(This article belongs to the Special Issue Computer Aided Diagnosis Sensors)

15 pages, 4504 KiB  
Article
Portable Ultrasound Research System for Use in Automated Bladder Monitoring with Machine-Learning-Based Segmentation
by Marc Fournelle, Tobias Grün, Daniel Speicher, Steffen Weber, Mehmet Yilmaz, Dominik Schoeb, Arkadiusz Miernik, Gerd Reis, Steffen Tretbar and Holger Hewener
Sensors 2021, 21(19), 6481; https://0-doi-org.brum.beds.ac.uk/10.3390/s21196481 - 28 Sep 2021
Cited by 10 | Viewed by 3552
Abstract
We developed a new mobile ultrasound device for long-term, automated bladder monitoring without user interaction, consisting of 32-channel transmit and receive electronics and a 32-element, 3 MHz phased array transducer. The device architecture is based on data digitization and rapid transfer to a consumer electronics device (e.g., a tablet) for signal reconstruction (e.g., by means of plane wave compounding algorithms) and further image processing. All reconstruction algorithms are implemented on the GPU, allowing real-time reconstruction and imaging. The system and the beamforming algorithms were evaluated with respect to imaging performance on standard sonographic phantoms (CIRS multipurpose ultrasound phantom) by analyzing the resolution, the SNR, and the CNR. Furthermore, machine-learning-based segmentation algorithms were developed and assessed with respect to their ability to reliably segment human bladders at different filling levels. The corresponding CNN was trained on 253 B-mode data sets, and 20 B-mode images were used for evaluation. The quantitative and qualitative results of the bladder segmentation are presented and compared to the ground truth obtained by manual segmentation.
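Of the phantom metrics mentioned, CNR is simple to illustrate. The definition below (absolute mean difference over pooled standard deviation) is a common convention; the paper may use a variant:

```python
# Contrast-to-noise ratio between a target region and background.
import numpy as np

def cnr(target, background):
    return abs(target.mean() - background.mean()) / np.sqrt(
        (target.var() + background.var()) / 2.0)

rng = np.random.default_rng(0)
lesion = rng.normal(loc=120.0, scale=10.0, size=1000)   # brighter region samples
tissue = rng.normal(loc=80.0, scale=10.0, size=1000)    # surrounding background
quality = cnr(lesion, tissue)   # roughly (120 - 80) / 10 = 4
```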
(This article belongs to the Special Issue Computer Aided Diagnosis Sensors)

15 pages, 4922 KiB  
Article
Detection of COVID-19 from Chest X-ray Images Using Deep Convolutional Neural Networks
by Natheer Khasawneh, Mohammad Fraiwan, Luay Fraiwan, Basheer Khassawneh and Ali Ibnian
Sensors 2021, 21(17), 5940; https://0-doi-org.brum.beds.ac.uk/10.3390/s21175940 - 03 Sep 2021
Cited by 41 | Viewed by 5120
Abstract
The COVID-19 global pandemic has wreaked havoc on every aspect of our lives; healthcare systems, in particular, were stretched to their limits and beyond. Advances in artificial intelligence have enabled the implementation of sophisticated applications that can meet clinical accuracy requirements. In this study, customized and pre-trained deep learning models based on convolutional neural networks were used to detect pneumonia caused by COVID-19 respiratory complications. Chest X-ray images from 368 confirmed COVID-19 patients were collected locally, and data from three publicly available datasets were also used. The performance was evaluated in four ways. First, the public dataset was used for training and testing. Second, data from the local and public sources were combined and used to train and test the models. Third, the public dataset was used for training and the local data were used for testing only; this approach adds greater credibility to the detection models and tests their ability to generalize to new data without overfitting to specific samples. Fourth, the combined data were used for training and the local dataset was used for testing. The results show a high detection accuracy of 98.7% with the combined dataset, and most models handled new data with an insignificant drop in accuracy.
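The four evaluation protocols can be sketched as train/test pairings over two sources. The classifier, features, and the "site shift" between sources below are fabricated stand-ins (the real models are CNNs on chest X-rays); only the pairing structure mirrors the text:

```python
# The four train/test protocols over a synthetic "public" and "local" source.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_dataset(n, shift):
    """Two-class data; 'shift' mimics a site-specific acquisition difference."""
    X0 = rng.normal(0.0 + shift, 1.0, size=(n, 5))
    X1 = rng.normal(2.0 + shift, 1.0, size=(n, 5))
    return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

public, y_public = make_dataset(300, shift=0.0)
local, y_local = make_dataset(100, shift=0.2)
combined = np.vstack([public, local])
y_combined = np.concatenate([y_public, y_local])

protocols = {
    "public->public": (public, y_public, public, y_public),
    "combined->combined": (combined, y_combined, combined, y_combined),
    "public->local": (public, y_public, local, y_local),      # generalization test
    "combined->local": (combined, y_combined, local, y_local),
}
results = {}
for name, (Xtr, ytr, Xte, yte) in protocols.items():
    clf = LogisticRegression(max_iter=500).fit(Xtr, ytr)
    results[name] = accuracy_score(yte, clf.predict(Xte))
```

The cross-source rows ("public->local") are the ones that probe generalization rather than memorization, which is the point the abstract makes.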
(This article belongs to the Special Issue Computer Aided Diagnosis Sensors)

15 pages, 5551 KiB  
Article
Electrical Detection of Innate Immune Cells
by Mahmoud Al Ahmad, Rasha A. Nasser, Lillian J. A. Olule and Bassam R. Ali
Sensors 2021, 21(17), 5886; https://0-doi-org.brum.beds.ac.uk/10.3390/s21175886 - 01 Sep 2021
Cited by 1 | Viewed by 2637
Abstract
Accurately classifying the innate immune players is essential to comprehensively and quantitatively evaluate the interactions between the innate and the adaptive immune systems. In addition, accurate classification enables the development of models to predict behavior and improves the prospects for therapeutic manipulation of inflammatory diseases and cancer. Rapid development of technologies that provide an accurate definition of the cell type in action allows the field of innate immunity to take the lead in therapy development. This article presents a novel immunophenotyping technique that uses electrical characterization to differentiate between the two most important cell types of the innate immune system: dendritic cells (DCs) and macrophages (MACs). The electrical characterization is based on capacitance measurements, a reliable marker for cell surface area and hence cell size. We differentiated THP-1 cells into DCs and MACs in vitro and conducted electrical measurements on the three cell types. The results showed average capacitance readings of 0.83 µF, 0.93 µF, and 1.01 µF for THP-1, DCs, and MACs, respectively, corresponding to increasing cell size, since capacitance is directly proportional to area. The results were verified with image processing, which, unlike conventional techniques such as flow cytometry, avoids cross-referencing and bypasses the lack of specificity of the markers used to detect the different cell types.
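Since capacitance tracks cell size, the reported averages already suggest a decision rule. A nearest-mean sketch using the values from the abstract (0.83, 0.93, and 1.01 µF); the rule itself is an illustrative reading, not the paper's classifier:

```python
# Assign a capacitance reading to the cell type with the closest reported mean.
MEAN_CAPACITANCE_UF = {"THP-1": 0.83, "DC": 0.93, "MAC": 1.01}

def classify_cell(capacitance_uf):
    return min(MEAN_CAPACITANCE_UF,
               key=lambda k: abs(MEAN_CAPACITANCE_UF[k] - capacitance_uf))

cell_small, cell_large = classify_cell(0.84), classify_cell(1.00)
```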
(This article belongs to the Special Issue Computer Aided Diagnosis Sensors)

12 pages, 1363 KiB  
Article
A CNN Deep Local and Global ASD Classification Approach with Continuous Wavelet Transform Using Task-Based FMRI
by Reem Haweel, Noha Seada, Said Ghoniemy, Norah Saleh Alghamdi and Ayman El-Baz
Sensors 2021, 21(17), 5822; https://0-doi-org.brum.beds.ac.uk/10.3390/s21175822 - 29 Aug 2021
Cited by 11 | Viewed by 3091
Abstract
Autism spectrum disorder (ASD) is a neurodevelopmental disorder characterized by lingual and social disabilities. The autism diagnostic observation schedule is the current gold standard for ASD diagnosis. Developing objective computer-aided technologies for ASD diagnosis, utilizing brain imaging modalities and machine learning, is one of the main tracks in current studies to understand autism. Task-based fMRI demonstrates functional activation in the brain by measuring blood oxygen level-dependent (BOLD) variations in response to certain tasks, and is believed to hold discriminant features for autism. A novel computer-aided diagnosis (CAD) framework is proposed to classify 50 ASD and 50 typically developed toddlers using CNN deep networks. The CAD system performs both local and global diagnosis in response to a speech task. Spatial dimensionality reduction with region-of-interest selection and clustering is utilized. In addition, the proposed framework performs discriminant feature extraction with the continuous wavelet transform. Local diagnosis on the cingulate gyri, superior temporal gyrus, primary auditory cortex, and angular gyrus achieves accuracies between 71% and 80% with four-fold cross validation. The fused global diagnosis achieves an accuracy of 86%, with 82% sensitivity and 92% specificity. A brain map indicating the ASD severity level for each brain area is created, which contributes to personalized diagnosis and treatment plans.
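The continuous wavelet transform step can be sketched on a BOLD-like time course, computed here by direct convolution with a Ricker ("Mexican hat") wavelet so that no CWT library is required. The scale grid, signal, and per-scale energy summary are illustrative; the paper's wavelet family and features may differ:

```python
# Continuous wavelet transform of a slow, task-locked signal via convolution.
import numpy as np

def ricker(points, a):
    """Ricker wavelet with width parameter a (standard normalization)."""
    t = np.arange(points) - (points - 1) / 2.0
    norm = 2.0 / (np.sqrt(3.0 * a) * np.pi ** 0.25)
    return norm * (1.0 - (t / a) ** 2) * np.exp(-(t ** 2) / (2.0 * a ** 2))

def cwt_ricker(signal, scales):
    """Rows = scales, columns = time samples."""
    return np.array([np.convolve(signal, ricker(10 * a, a), mode="same")
                     for a in scales])

t = np.linspace(0, 60, 240)                 # a 60 s fMRI-like window
bold = np.sin(2 * np.pi * 0.05 * t)         # slow task-locked response
coeffs = cwt_ricker(bold, scales=[2, 4, 8, 16])
features = np.abs(coeffs).mean(axis=1)      # one energy value per scale
```

For this slow signal, coarse scales carry the most energy, which is exactly the kind of scale-resolved discrimination a CWT feature extractor exploits.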
(This article belongs to the Special Issue Computer Aided Diagnosis Sensors)

20 pages, 21440 KiB  
Article
Precise Segmentation of COVID-19 Infected Lung from CT Images Based on Adaptive First-Order Appearance Model with Morphological/Anatomical Constraints
by Ahmed Sharafeldeen, Mohamed Elsharkawy, Norah Saleh Alghamdi, Ahmed Soliman and Ayman El-Baz
Sensors 2021, 21(16), 5482; https://0-doi-org.brum.beds.ac.uk/10.3390/s21165482 - 14 Aug 2021
Cited by 14 | Viewed by 2552
Abstract
A new segmentation technique is introduced for delineating the lung region in 3D computed tomography (CT) images. To accurately model the distribution of Hounsfield values within both the chest and lung regions, a new probabilistic model is developed that depends on a linear combination of Gaussians (LCG). Moreover, we modified the conventional expectation-maximization (EM) algorithm to run sequentially, estimating both the dominant Gaussian components (one for the lung region and one for the chest region) and the subdominant components used to refine the final estimated joint density. To estimate the marginal densities from the mixed density, a modified k-means clustering approach is employed to determine which subdominant Gaussian components belong to the lung and which belong to the chest. The initial LCG-based segmentation is then refined by imposing 3D morphological constraints based on a 3D Markov–Gibbs random field (MGRF) with analytically estimated potentials. The proposed approach was tested on CT data from 32 coronavirus disease 2019 (COVID-19) patients. Segmentation quality was quantitatively evaluated using four metrics: Dice similarity coefficient (DSC), overlap coefficient, 95th-percentile bidirectional Hausdorff distance (BHD), and absolute lung volume difference (ALVD), achieving 95.67±1.83%, 91.76±3.29%, 4.86±5.01, and 2.93±2.39, respectively. The reported results show the capability of the proposed approach to accurately segment healthy lung tissue as well as pathological lung tissue caused by COVID-19, outperforming four current state-of-the-art deep-learning-based lung segmentation approaches.
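The dominant-component idea can be sketched with an off-the-shelf two-component Gaussian mixture fitted by EM (the paper's sequential EM with subdominant refinement and MGRF constraints is considerably more elaborate). Hounsfield means and widths below are plausible illustrative values, not measured ones:

```python
# Separate lung-like and chest-like intensity populations with a 2-component GMM.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
lung = rng.normal(-750.0, 80.0, size=3000)    # air-filled lung voxels (HU)
chest = rng.normal(40.0, 60.0, size=3000)     # soft-tissue chest voxels (HU)
hu = np.concatenate([lung, chest]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(hu)
means = sorted(gmm.means_.ravel())            # [lung-like mean, chest-like mean]
```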
(This article belongs to the Special Issue Computer Aided Diagnosis Sensors)

14 pages, 2777 KiB  
Article
An Automated CAD System for Accurate Grading of Uveitis Using Optical Coherence Tomography Images
by Sayed Haggag, Fahmi Khalifa, Hisham Abdeltawab, Ahmed Elnakib, Mohammed Ghazal, Mohamed A. Mohamed, Harpal Singh Sandhu, Norah Saleh Alghamdi and Ayman El-Baz
Sensors 2021, 21(16), 5457; https://0-doi-org.brum.beds.ac.uk/10.3390/s21165457 - 13 Aug 2021
Cited by 7 | Viewed by 2429
Abstract
Uveitis is one of the leading causes of severe vision loss worldwide and can lead to blindness. Clinical records show that early and accurate detection of vitreous inflammation can potentially reduce the blindness rate. In this paper, a novel framework is proposed for automatic quantification of the vitreous on optical coherence tomography (OCT), with particular application to the grading of vitreous inflammation. The proposed pipeline consists of two stages: vitreous region segmentation followed by a neural network classifier. In the first stage, the vitreous region is automatically segmented using a U-net convolutional neural network (U-CNN). For the input of the U-CNN, we utilize three image descriptors to account for the visual appearance similarity of the vitreous region and other tissues. Namely, we developed an adaptive appearance-based descriptor that utilizes prior shape information from a labeled dataset of manually segmented images; this descriptor is adaptively updated during segmentation and is integrated with the original grayscale image and a distance map descriptor to construct a fused input image for the U-net segmentation stage. In the second stage, a fully connected neural network (FCNN) is proposed as a classifier to assess vitreous inflammation severity. To achieve this task, a novel discriminatory feature of the segmented vitreous region is extracted: the signal intensities of the vitreous are represented by a cumulative distribution function (CDF). The constructed CDFs are then used to train and test the FCNN classifier for grading (grades 0 to 3). The performance of the proposed pipeline is evaluated on a dataset of 200 OCT images. Our segmentation approach achieved higher performance than related methods, as evidenced by a Dice coefficient of 0.988 ± 0.01 and a Hausdorff distance of 0.0003 ± 0.001 mm. The FCNN classifier achieved an average accuracy of 86%, which supports the benefits of the proposed pipeline as an aid for early and objective diagnosis of uveal inflammation.
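The CDF descriptor amounts to sampling the empirical distribution of region intensities on a fixed grid so every region yields an equal-length feature vector. A sketch, with an illustrative 32-level grid:

```python
# Fixed-length empirical-CDF feature vector from a region's intensities.
import numpy as np

def cdf_features(intensities, n_levels=32):
    grid = np.linspace(0, 255, n_levels)
    values = np.sort(np.asarray(intensities, dtype=float))
    # Fraction of region pixels at or below each grid intensity.
    return np.searchsorted(values, grid, side="right") / values.size

quiet = cdf_features(np.full(500, 20.0))          # dark, uninflamed vitreous
hazy = cdf_features(np.linspace(0, 255, 500))     # brighter, hazier signal
```

A dark, quiet vitreous saturates its CDF at low intensities, while a hazy one rises gradually, which is what gives the classifier its discriminative signal.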
(This article belongs to the Special Issue Computer Aided Diagnosis Sensors)

16 pages, 28632 KiB  
Article
A Personalized Computer-Aided Diagnosis System for Mild Cognitive Impairment (MCI) Using Structural MRI (sMRI)
by Fatma El-Zahraa A. El-Gamal, Mohammed Elmogy, Ali Mahmoud, Ahmed Shalaby, Andrew E. Switala, Mohammed Ghazal, Hassan Soliman, Ahmed Atwan, Norah Saleh Alghamdi, Gregory Neal Barnes and Ayman El-Baz
Sensors 2021, 21(16), 5416; https://0-doi-org.brum.beds.ac.uk/10.3390/s21165416 - 11 Aug 2021
Cited by 7 | Viewed by 2696
Abstract
Alzheimer’s disease (AD) is a neurodegenerative disorder that targets the central nervous system (CNS). Statistics show that more than five million people in America face this disease. Several factors hinder diagnosis at an early stage, in particular the divergence of 10–15 years between the onset of the underlying neuropathological changes and patients becoming symptomatic. This study examined patients with mild cognitive impairment (MCI), who were at risk of conversion to AD, with a local/regional-based computer-aided diagnosis system that allows visualization of the disorder's effect on cerebral cortical regions individually. The CAD system consists of four steps: (1) preprocess the scans and extract the cortex, (2) reconstruct the cortex and extract shape-based features, (3) fuse the extracted features, and (4) perform two levels of diagnosis: cortical region-based followed by global. The experimental results showed encouraging performance compared with related work, with a maximum accuracy of 86.30%, a specificity of 88.33%, and a sensitivity of 84.88%. Behavioral and cognitive correlations identified brain regions involved in language, executive function/cognition, and memory in MCI subjects, regions that are also involved in the neuropathology of AD.
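The two-level diagnosis in step (4) can be sketched as a per-region decision followed by a fused global one. The region names, scores, and majority-fusion rule below are illustrative stand-ins; the paper's fusion scheme may differ:

```python
# Region-level calls fused into a global MCI decision by majority vote.
regional_scores = {        # hypothetical per-region probability of MCI
    "precuneus": 0.81,
    "superior temporal": 0.74,
    "entorhinal": 0.66,
    "occipital pole": 0.35,
}

regional_calls = {r: int(p >= 0.5) for r, p in regional_scores.items()}
global_call = int(sum(regional_calls.values()) > len(regional_calls) / 2)
```

Keeping the regional calls alongside the global one is what enables the region-by-region visualization the system provides.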
(This article belongs to the Special Issue Computer Aided Diagnosis Sensors)

20 pages, 8609 KiB  
Article
A Comprehensive Computer-Assisted Diagnosis System for Early Assessment of Renal Cancer Tumors
by Mohamed Shehata, Ahmed Alksas, Rasha T. Abouelkheir, Ahmed Elmahdy, Ahmed Shaffie, Ahmed Soliman, Mohammed Ghazal, Hadil Abu Khalifeh, Reem Salim, Ahmed Abdel Khalek Abdel Razek, Norah Saleh Alghamdi and Ayman El-Baz
Sensors 2021, 21(14), 4928; https://0-doi-org.brum.beds.ac.uk/10.3390/s21144928 - 20 Jul 2021
Cited by 21 | Viewed by 3016
Abstract
Renal cell carcinoma (RCC) is the most common and a highly aggressive type of malignant renal tumor. In this manuscript, we aim to identify and integrate the optimal discriminating morphological, textural, and functional features that best describe the malignancy status of a given renal tumor. The integrated discriminating features may lead to the development of a novel comprehensive renal cancer computer-assisted diagnosis (RC-CAD) system with the ability to discriminate between benign and malignant renal tumors and specify the malignancy subtypes for optimal medical management. Informed consent was obtained from a total of 140 biopsy-proven patients to participate in the study (male = 72 and female = 68, age range = 15 to 87 years). There were 70 patients who had RCC (40 clear cell RCC (ccRCC), 30 nonclear cell RCC (nccRCC)), while the other 70 had benign angiomyolipoma tumors. Contrast-enhanced computed tomography (CE-CT) images were acquired, and renal tumors were segmented for all patients to allow the extraction of discriminating imaging features. The RC-CAD system incorporates the following major steps: (i) applying a new parametric spherical harmonic technique to estimate the morphological features, (ii) modeling a novel angular invariant gray-level co-occurrence matrix to estimate the textural features, and (iii) constructing wash-in/wash-out slopes to estimate the functional features by quantifying enhancement variations across different CE-CT phases. These features were subsequently combined and processed using a two-stage multilayer perceptron artificial neural network (MLP-ANN) classifier to classify the renal tumor as benign or malignant and identify the malignancy subtype as well. 
Using the combined features and a leave-one-subject-out cross-validation approach, the developed RC-CAD system achieved a sensitivity of 95.3%±2.0%, a specificity of 99.9%±0.4%, and a Dice similarity coefficient of 0.98±0.01 in differentiating malignant from benign tumors, as well as an overall accuracy of 89.6%±5.0% in discriminating ccRCC from nccRCC. The diagnostic abilities of the developed RC-CAD system were further validated using a randomly stratified 10-fold cross-validation approach. The obtained results using the proposed MLP-ANN classification model outperformed other machine learning classifiers (e.g., support vector machine, random forests, and relational functional gradient boosting). Hence, integrating morphological, textural, and functional features enhances diagnostic performance, making the proposed system a reliable noninvasive diagnostic tool for renal tumors. Full article
(This article belongs to the Special Issue Computer Aided Diagnosis Sensors)
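The evaluation protocol described above, an MLP classifier scored with leave-one-subject-out cross-validation, can be sketched as follows. The feature matrix, labels, and network size are synthetic placeholders, not the authors' RC-CAD configuration:

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_subjects, n_features = 40, 12          # placeholder cohort size and feature count
X = rng.normal(size=(n_subjects, n_features))
y = rng.integers(0, 2, size=n_subjects)  # 0 = benign, 1 = malignant
X[y == 1] += 1.0                         # shift one class so it is separable

loo = LeaveOneOut()                      # one held-out subject per fold
correct = 0
for train_idx, test_idx in loo.split(X):
    clf = make_pipeline(StandardScaler(),
                        MLPClassifier(hidden_layer_sizes=(16,),
                                      max_iter=2000, random_state=0))
    clf.fit(X[train_idx], y[train_idx])
    correct += int(clf.predict(X[test_idx])[0] == y[test_idx][0])

accuracy = correct / n_subjects
print(f"LOSO accuracy: {accuracy:.2f}")
```

In the full system, each subject's feature vector would combine the morphological, textural, and functional features, and a second MLP stage would separate ccRCC from nccRCC among the cases predicted malignant.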

21 pages, 8066 KiB  
Article
Ambient Healthcare Approach with Hybrid Whale Optimization Algorithm and Naïve Bayes Classifier
by Majed Alwateer, Abdulqader M. Almars, Kareem N. Areed, Mostafa A. Elhosseini, Amira Y. Haikal and Mahmoud Badawy
Sensors 2021, 21(13), 4579; https://0-doi-org.brum.beds.ac.uk/10.3390/s21134579 - 04 Jul 2021
Cited by 11 | Viewed by 2528
Abstract
There is a crucial need to process patients' data immediately so that sound decisions can be made rapidly; such data are very large and contain an excessive number of features. Recently, many cloud-based IoT healthcare systems have been proposed in the literature. However, several challenges remain regarding processing time and overall system efficiency for big healthcare data. This paper introduces a novel approach for processing healthcare data and predicting useful information at minimal computational cost. The main objective is to accept several types of data while improving accuracy and reducing processing time. The proposed approach uses a hybrid algorithm consisting of two phases. The first phase minimizes the number of features in big data by using the Whale Optimization Algorithm as a feature selection technique. The second phase then performs real-time data classification using a Naïve Bayes classifier. The approach is based on fog computing for better business agility, better security, deeper insights with privacy, and reduced operating cost. The experimental results demonstrate that the proposed approach reduces the number of dataset features, improves accuracy, and reduces processing time. Accuracy was enhanced by an average of 3.6% (3.34% for diabetes, 2.94% for heart disease, 3.77% for heart attack prediction, and 4.15% for sonar), and processing time was reduced by an average of 8.7% (28.96% for diabetes, 1.07% for heart disease, 3.31% for heart attack prediction, and 1.4% for sonar). Full article
(This article belongs to the Special Issue Computer Aided Diagnosis Sensors)
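The two-phase idea can be sketched as a simplified binary Whale Optimization Algorithm (continuous whale positions mapped to feature masks through a sigmoid transfer) wrapping a Gaussian Naïve Bayes classifier as its fitness function. The data, population size, iteration count, and penalty weight below are illustrative assumptions, not the paper's setup:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_samples, n_features = 120, 10
X = rng.normal(size=(n_samples, n_features))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # only the first two features are informative

def fitness(mask):
    """Cross-validated NB accuracy, lightly penalising large feature subsets."""
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(GaussianNB(), X[:, mask.astype(bool)], y, cv=3).mean()
    return acc - 0.01 * mask.sum() / n_features

def to_mask(pos):
    return (1 / (1 + np.exp(-pos)) > 0.5).astype(int)  # sigmoid transfer function

n_whales, n_iter = 8, 15
pos = rng.normal(size=(n_whales, n_features))
best_pos = pos[0].copy()
best_fit = fitness(to_mask(best_pos))

for t in range(n_iter):
    a = 2 - 2 * t / n_iter                      # control parameter decays 2 -> 0
    for i in range(n_whales):
        f = fitness(to_mask(pos[i]))
        if f > best_fit:
            best_fit, best_pos = f, pos[i].copy()
    for i in range(n_whales):
        r = rng.random(n_features)
        A, C = 2 * a * r - a, 2 * rng.random(n_features)
        if rng.random() < 0.5:
            if np.abs(A).mean() < 1:            # exploit: encircle the best whale
                pos[i] = best_pos - A * np.abs(C * best_pos - pos[i])
            else:                               # explore: move toward a random whale
                rand = pos[rng.integers(n_whales)]
                pos[i] = rand - A * np.abs(C * rand - pos[i])
        else:                                   # spiral update around the best whale
            l = rng.uniform(-1, 1)
            D = np.abs(best_pos - pos[i])
            pos[i] = D * np.exp(l) * np.cos(2 * np.pi * l) + best_pos

selected = to_mask(best_pos)
print("selected features:", np.flatnonzero(selected))
```

In a fog deployment, only the selected columns would be streamed to the classifier nodes, which is where the reported processing-time savings come from.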

17 pages, 645 KiB  
Article
Validation of a Low-Cost Electrocardiography (ECG) System for Psychophysiological Research
by Ruth Erna Wagner, Hugo Plácido da Silva and Klaus Gramann
Sensors 2021, 21(13), 4485; https://0-doi-org.brum.beds.ac.uk/10.3390/s21134485 - 30 Jun 2021
Cited by 7 | Viewed by 2173
Abstract
Background and Objective: The reliability of low-cost mobile systems for recording electrocardiographic (ECG) data is mostly unknown, posing questions regarding the quality of the recorded data and the validity of the extracted physiological parameters. The present study compared the BITalino toolkit with an established medical-grade ECG system (BrainAmp-ExG). Methods: Participants underwent simultaneous ECG recordings with the two instruments while watching pleasant and unpleasant pictures from the “International Affective Picture System” (IAPS). Common ECG parameters were extracted and compared between the two systems. The Intraclass Correlation Coefficients (ICCs) and the Bland–Altman Limits of Agreement (LoA) method served as criteria for measurement agreement. Results: All but one parameter showed excellent agreement (>80%) between the two devices in the ICC analysis. No criteria for Bland–Altman LoA and bias were found in the literature regarding ECG parameters. Conclusion: The results of the ICC and Bland–Altman methods demonstrate that the BITalino system can be considered an equivalent recording device for stationary ECG recordings in psychophysiological experiments. Full article
(This article belongs to the Special Issue Computer Aided Diagnosis Sensors)
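The two agreement criteria used above can be computed directly. The sketch below derives the Bland–Altman bias and 95% limits of agreement, plus a two-way random-effects single-measure ICC(2,1), for synthetic paired heart-rate readings (the data are placeholders, not the study's recordings):

```python
import numpy as np

rng = np.random.default_rng(2)
true_hr = rng.normal(70, 8, size=30)                    # synthetic per-subject heart rates
device_a = true_hr + rng.normal(0, 1.0, size=30)        # reference-grade system
device_b = true_hr + 0.5 + rng.normal(0, 1.0, size=30)  # low-cost system with a small bias

# Bland–Altman bias and 95% limits of agreement
diff = device_b - device_a
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))

# Two-way random-effects, single-measure ICC(2,1) via the standard ANOVA decomposition
Y = np.column_stack([device_a, device_b])
n, k = Y.shape
m = Y.mean()
row_means, col_means = Y.mean(axis=1), Y.mean(axis=0)
msr = k * ((row_means - m) ** 2).sum() / (n - 1)        # between-subject mean square
msc = n * ((col_means - m) ** 2).sum() / (k - 1)        # between-device mean square
sse = ((Y - row_means[:, None] - col_means[None, :] + m) ** 2).sum()
mse = sse / ((n - 1) * (k - 1))                         # residual mean square
icc = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
print(f"bias={bias:.2f}, LoA=({loa[0]:.2f}, {loa[1]:.2f}), ICC={icc:.3f}")
```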

13 pages, 1039 KiB  
Article
Novel MRI-Based CAD System for Early Detection of Thyroid Cancer Using Multi-Input CNN
by Ahmed Naglah, Fahmi Khalifa, Reem Khaled, Ahmed Abdel Khalek Abdel Razek, Mohammad Ghazal, Guruprasad Giridharan and Ayman El-Baz
Sensors 2021, 21(11), 3878; https://0-doi-org.brum.beds.ac.uk/10.3390/s21113878 - 04 Jun 2021
Cited by 20 | Viewed by 3857
Abstract
Early detection of thyroid nodules can greatly contribute to the prediction of cancer burden and the steering of personalized management. We propose a novel multimodal MRI-based computer-aided diagnosis (CAD) system that differentiates malignant from benign thyroid nodules. The proposed CAD is based on a novel convolutional neural network (CNN)-based texture learning architecture. The main contribution of our system is three-fold. Firstly, our system is the first of its kind to combine T2-weighted MRI and apparent diffusion coefficient (ADC) maps using a CNN to model thyroid cancer. Secondly, it learns independent texture features for each input, giving it more advanced capabilities to simultaneously extract complex texture patterns from both modalities. Finally, the proposed system uses multiple channels for each input to combine, into the deep learning process, multiple scans collected using different values of the configurable diffusion gradient coefficient. Accordingly, the proposed system enables the learning of more advanced radiomics, with the additional advantage of visualizing the texture patterns after learning. We evaluated the proposed system using data collected from a cohort of 49 patients with pathologically proven thyroid nodules. The accuracy of the proposed system was also compared against recent CNN models as well as multiple machine learning (ML) frameworks that use hand-crafted features. Our system achieved the highest performance among all compared methods, with a diagnostic accuracy of 0.87, specificity of 0.97, and sensitivity of 0.69. The results suggest that texture features extracted using deep learning can contribute to cancer diagnosis and treatment protocols and can lead to the advancement of precision medicine. Full article
(This article belongs to the Special Issue Computer Aided Diagnosis Sensors)
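The multi-input arrangement in the third contribution can be illustrated by how the network inputs might be assembled: one single-channel branch for the T2-weighted ROI and one multi-channel branch stacking ADC maps computed at several diffusion-gradient settings. Array sizes, channel counts, and min-max normalization are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(3)
H = W = 64                      # placeholder nodule ROI size
n_b_values = 3                  # ADC maps at several diffusion-gradient values

t2_roi = rng.random((H, W))
adc_maps = [rng.random((H, W)) for _ in range(n_b_values)]

def normalise(img):
    """Min-max scale an image to [0, 1] before feeding it to a network branch."""
    return (img - img.min()) / (img.max() - img.min() + 1e-8)

# Branch 1: single-channel T2 input; Branch 2: multi-channel ADC input.
t2_input = normalise(t2_roi)[None, ...]                        # shape (1, H, W)
adc_input = np.stack([normalise(m) for m in adc_maps], axis=0)  # shape (n_b_values, H, W)
print(t2_input.shape, adc_input.shape)
```

Each branch would then pass through its own convolutional stack so that texture features are learned independently per modality, as the abstract describes.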

22 pages, 7269 KiB  
Article
Diabetic Retinopathy Fundus Image Classification and Lesions Localization System Using Deep Learning
by Wejdan L. Alyoubi, Maysoon F. Abulkhair and Wafaa M. Shalash
Sensors 2021, 21(11), 3704; https://0-doi-org.brum.beds.ac.uk/10.3390/s21113704 - 26 May 2021
Cited by 126 | Viewed by 13699
Abstract
Diabetic retinopathy (DR) is a disease resulting from diabetes complications, causing non-reversible damage to retina blood vessels. DR is a leading cause of blindness if not detected early. The currently available DR treatments are limited to stopping or delaying the deterioration of sight, highlighting the importance of regular scanning using high-efficiency computer-based systems to diagnose cases early. The current work presents fully automatic diagnosis systems that surpass manual techniques in order to avoid misdiagnosis while reducing time, effort, and cost. The proposed system classifies DR images into five stages—no-DR, mild, moderate, severe, and proliferative DR—and localizes the affected lesions on the retina surface. The system comprises two deep learning-based models. The first model (CNN512) uses the whole image as input to the CNN to classify it into one of the five DR stages. It achieved accuracies of 88.6% and 84.1% on the DDR and the APTOS Kaggle 2019 public datasets, respectively, compared to the state-of-the-art results. The second model uses an adapted YOLOv3 model to detect and localize the DR lesions, achieving a 0.216 mAP in lesion localization on the DDR dataset, which improves on the current state-of-the-art results. Finally, both proposed structures, CNN512 and YOLOv3, were fused to classify DR images and localize DR lesions, obtaining an accuracy of 89% with 89% sensitivity and 97.3% specificity, exceeding the current state-of-the-art results. Full article
(This article belongs to the Special Issue Computer Aided Diagnosis Sensors)
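Lesion-localization scores such as mAP are built on the intersection-over-union (IoU) between detected and ground-truth boxes. A minimal IoU sketch, with hypothetical box coordinates, is:

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

gt = (10, 10, 50, 50)       # hypothetical ground-truth lesion box
det = (20, 20, 60, 60)      # hypothetical detector output
print(f"IoU = {iou(gt, det):.3f}")
```

In mAP evaluation, a detection is counted as a true positive when its IoU with an unmatched ground-truth box exceeds a threshold (commonly 0.5), and precision is then averaged over recall levels and lesion classes.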

16 pages, 1685 KiB  
Article
Precise Identification of Prostate Cancer from DWI Using Transfer Learning
by Islam R. Abdelmaksoud, Ahmed Shalaby, Ali Mahmoud, Mohammed Elmogy, Ahmed Aboelfetouh, Mohamed Abou El-Ghar, Moumen El-Melegy, Norah Saleh Alghamdi and Ayman El-Baz
Sensors 2021, 21(11), 3664; https://0-doi-org.brum.beds.ac.uk/10.3390/s21113664 - 25 May 2021
Cited by 13 | Viewed by 2931
Abstract
Background and Objective: The use of computer-aided detection (CAD) systems can help radiologists make objective decisions and reduce the dependence on invasive techniques. In this study, a CAD system that detects and identifies prostate cancer from diffusion-weighted imaging (DWI) is developed. Methods: The proposed system first uses non-negative matrix factorization (NMF) to integrate three different types of features for the accurate segmentation of prostate regions. Then, discriminatory features in the form of apparent diffusion coefficient (ADC) volumes are estimated from the segmented regions. The ADC maps that constitute these volumes are labeled by a radiologist to identify the ADC maps with malignant or benign tumors. Finally, transfer learning is used to fine-tune two different previously-trained convolutional neural network (CNN) models (AlexNet and VGGNet) for detecting and identifying prostate cancer. Results: Multiple experiments were conducted to evaluate the accuracy of different CNN models using DWI datasets acquired at nine distinct b-values that included both high and low b-values. The average accuracy of AlexNet at the nine b-values was 89.2±1.5% with average sensitivity and specificity of 87.5±2.3% and 90.9±1.9%. These results improved with the use of the deeper CNN model (VGGNet). The average accuracy of VGGNet was 91.2±1.3% with sensitivity and specificity of 91.7±1.7% and 90.1±2.8%. Conclusions: The results of the conducted experiments emphasize the feasibility and accuracy of the developed system and the improvement of this accuracy using the deeper CNN. Full article
(This article belongs to the Special Issue Computer Aided Diagnosis Sensors)
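The ADC volumes used as discriminatory features follow from the mono-exponential diffusion model S_b = S_0 · exp(−b · ADC), so an ADC map can be estimated from signals at two b-values. The sketch below uses noise-free synthetic signals and placeholder b-values, not the study's acquisition protocol:

```python
import numpy as np

# Mono-exponential diffusion model: S_b = S_0 * exp(-b * ADC)
rng = np.random.default_rng(4)
adc_true = 1.2e-3                      # mm^2/s, a plausible tissue value
b0, b = 0.0, 800.0                     # s/mm^2, placeholder b-values
s0 = rng.uniform(900, 1100, size=(4, 4))      # baseline signal (b = b0)
sb = s0 * np.exp(-b * adc_true)               # attenuated signal at b

# Invert the model voxel-wise: ADC = ln(S_0 / S_b) / (b - b0)
adc_map = np.log(s0 / np.clip(sb, 1e-6, None)) / (b - b0)
print(adc_map.mean())
```

With real noisy DWI, the same inversion is typically fit across several b-values (the study acquired nine) rather than computed from a single pair.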

13 pages, 33042 KiB  
Article
Quantification of Blood Flow Velocity in the Human Conjunctival Microvessels Using Deep Learning-Based Stabilization Algorithm
by Hang-Chan Jo, Hyeonwoo Jeong, Junhyuk Lee, Kyung-Sun Na and Dae-Yu Kim
Sensors 2021, 21(9), 3224; https://0-doi-org.brum.beds.ac.uk/10.3390/s21093224 - 06 May 2021
Cited by 8 | Viewed by 3461
Abstract
The quantification of blood flow velocity in the human conjunctiva is clinically essential for assessing microvascular hemodynamics. Since the conjunctival microvessel is imaged in several seconds, eye motion during image acquisition causes motion artifacts limiting the accuracy of image segmentation performance and measurement of the blood flow velocity. In this paper, we introduce a novel customized optical imaging system for human conjunctiva with deep learning-based segmentation and motion correction. The image segmentation process is performed by the Attention-UNet structure to achieve high-performance segmentation results in conjunctiva images with motion blur. Motion correction processes with two steps—registration and template matching—are used to correct for large displacements and fine movements. The image displacement values decrease to 4–7 μm during registration (first step) and less than 1 μm during template matching (second step). With the corrected images, the blood flow velocity is calculated for selected vessels considering temporal signal variances and vessel lengths. These methods for resolving motion artifacts contribute insights into studies quantifying the hemodynamics of the conjunctiva, as well as other tissues. Full article
(This article belongs to the Special Issue Computer Aided Diagnosis Sensors)
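The second motion-correction step, template matching, can be sketched with a brute-force normalised cross-correlation search. The frame and template below are synthetic, and the exhaustive search stands in for whatever optimised matcher the authors used:

```python
import numpy as np

def match_template(frame, template):
    """Return the (row, col) position where normalised cross-correlation peaks."""
    th, tw = template.shape
    t = template - template.mean()
    best, best_pos = -np.inf, (0, 0)
    for r in range(frame.shape[0] - th + 1):
        for c in range(frame.shape[1] - tw + 1):
            win = frame[r:r + th, c:c + tw]
            w = win - win.mean()
            denom = np.sqrt((w ** 2).sum() * (t ** 2).sum())
            score = (w * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best

rng = np.random.default_rng(5)
frame = rng.random((40, 40))
template = frame[12:22, 8:18].copy()      # a "vessel patch" cut from the frame itself
pos, score = match_template(frame, template)
print(pos, round(score, 3))
```

Tracking the peak position across frames gives the residual sub-displacement of each vessel patch; the reported sub-micrometre correction corresponds to driving this displacement toward zero after the coarse registration step.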

17 pages, 3081 KiB  
Article
Analysis of the Nosema Cells Identification for Microscopic Images
by Soumaya Dghim, Carlos M. Travieso-González and Radim Burget
Sensors 2021, 21(9), 3068; https://0-doi-org.brum.beds.ac.uk/10.3390/s21093068 - 28 Apr 2021
Cited by 4 | Viewed by 2389
Abstract
The use of image processing tools, machine learning, and deep learning approaches has become very useful and robust in recent years. This paper introduces the detection of the Nosema disease, which is considered to be one of the most economically significant bee diseases today. This work presents a solution for recognizing and identifying Nosema cells among the other objects present in microscopic images. Two main strategies are examined. The first strategy uses image processing tools to extract the most valuable information and features from the dataset of microscopic images. Then, machine learning methods, such as an artificial neural network (ANN) and a support vector machine (SVM), are applied to detect and classify the Nosema disease cells. The second strategy explores deep learning and transfer learning. Several approaches were examined, including a convolutional neural network (CNN) classifier and several transfer learning methods (AlexNet, VGG-16, and VGG-19), which were fine-tuned and applied to the object sub-images in order to distinguish Nosema images from the other object images. The best accuracy, 96.25%, was reached by the fine-tuned VGG-16 pre-trained network. Full article
(This article belongs to the Special Issue Computer Aided Diagnosis Sensors)
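The first strategy, hand-crafted features plus a classical classifier, can be sketched as follows; the per-object shape descriptors and labels are synthetic stand-ins for features that would be extracted from real microscopic images:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(6)
# Hypothetical per-object descriptors: e.g. area, perimeter, eccentricity, mean intensity
n_objects = 200
X = rng.normal(size=(n_objects, 4))
y = (X[:, 2] + 0.5 * X[:, 0] > 0).astype(int)   # 1 = Nosema cell, 0 = other object

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
acc = clf.score(X_test, y_test)
print(f"test accuracy: {acc:.2f}")
```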

Review


37 pages, 2486 KiB  
Review
Electrocardiogram-Based Emotion Recognition Systems and Their Applications in Healthcare—A Review
by Muhammad Anas Hasnul, Nor Azlina Ab. Aziz, Salem Alelyani, Mohamed Mohana and Azlan Abd. Aziz
Sensors 2021, 21(15), 5015; https://0-doi-org.brum.beds.ac.uk/10.3390/s21155015 - 23 Jul 2021
Cited by 69 | Viewed by 9120
Abstract
Affective computing is a field of study that integrates human affects and emotions with artificial intelligence into systems or devices. A system or device with affective computing is beneficial for the mental health and wellbeing of individuals who are stressed, anguished, or depressed. Emotion recognition systems are an important technology that enables affective computing. Currently, there are many ways to build an emotion recognition system using various techniques and algorithms. This review paper focuses on emotion recognition research that adopted electrocardiograms (ECGs) as a unimodal approach as well as part of a multimodal approach for emotion recognition systems. Critical observations of data collection, pre-processing, feature extraction, feature selection and dimensionality reduction, classification, and validation are conducted. This paper also highlights the architectures with accuracies above 90%. The available ECG-inclusive affective databases are reviewed, and a popularity analysis is presented. Additionally, the benefit of emotion recognition systems for healthcare is reviewed. Based on the literature reviewed, a thorough discussion of the subject matter and suggestions for future work conclude the paper. The findings presented here are beneficial for prospective researchers looking for a summary of previous work in the field of ECG-based emotion recognition systems, for identifying gaps in the area, and for developing and designing future applications of emotion recognition systems, especially in improving healthcare. Full article
(This article belongs to the Special Issue Computer Aided Diagnosis Sensors)
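Most ECG-based pipelines of this kind start from R-peak detection followed by heart-rate-variability (HRV) features. A minimal sketch on a synthetic ECG trace, using a simple threshold detector with a refractory period (production systems typically use Pan–Tompkins-style detectors), is:

```python
import numpy as np

fs = 250                                   # Hz, assumed sampling rate
t = np.arange(0, 10, 1 / fs)
rr_true = 0.8                              # s, ~75 bpm synthetic rhythm
# Crude synthetic ECG: narrow Gaussian "R waves" on a noisy baseline
peak_times = np.arange(0.5, 10, rr_true)
ecg = sum(np.exp(-((t - pt) ** 2) / (2 * 0.01 ** 2)) for pt in peak_times)
ecg += 0.05 * np.random.default_rng(7).normal(size=t.size)

# Threshold-based R-peak detection with a 300 ms refractory period
thresh = 0.5 * ecg.max()
peaks = []
for idx in np.flatnonzero(ecg > thresh):
    if not peaks or (idx - peaks[-1]) > 0.3 * fs:
        peaks.append(idx)                   # new beat
    elif ecg[idx] > ecg[peaks[-1]]:
        peaks[-1] = idx                     # keep the local maximum within a beat

rr = np.diff(np.array(peaks)) / fs                  # RR intervals in seconds
sdnn = rr.std(ddof=1) * 1000                        # ms, overall variability
rmssd = np.sqrt(np.mean(np.diff(rr) ** 2)) * 1000   # ms, beat-to-beat variability
print(f"beats={len(peaks)}, SDNN={sdnn:.1f} ms, RMSSD={rmssd:.1f} ms")
```

Features such as SDNN and RMSSD (plus frequency-domain HRV measures) then feed the emotion classifiers that the review compares.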

28 pages, 3841 KiB  
Review
Role of AI and Histopathological Images in Detecting Prostate Cancer: A Survey
by Sarah M. Ayyad, Mohamed Shehata, Ahmed Shalaby, Mohamed Abou El-Ghar, Mohammed Ghazal, Moumen El-Melegy, Nahla B. Abdel-Hamid, Labib M. Labib, H. Arafat Ali and Ayman El-Baz
Sensors 2021, 21(8), 2586; https://0-doi-org.brum.beds.ac.uk/10.3390/s21082586 - 07 Apr 2021
Cited by 32 | Viewed by 5400
Abstract
Prostate cancer is one of the most frequently identified cancers and the second most common cause of cancer-related deaths among men worldwide. Early diagnosis and treatment are essential to stopping or managing the growth and spread of cancer cells in the body. Histopathological image diagnosis is the gold standard for detecting prostate cancer, as it exhibits distinct visual characteristics, but interpreting these types of images requires a high level of expertise and is time consuming. One way to accelerate such an analysis is by employing artificial intelligence (AI) through the use of computer-aided diagnosis (CAD) systems. The recent developments in artificial intelligence, along with its sub-fields of conventional machine learning and deep learning, provide new insights to clinicians and researchers, and an abundance of research has been presented specifically for histopathology images tailored to prostate cancer. However, there is a lack of comprehensive surveys that focus on prostate cancer using histopathology images. In this paper, we provide a comprehensive review of most, if not all, studies that have handled prostate cancer diagnosis using histopathological images. The survey begins with an overview of histopathological image preparation and its challenges. We also briefly review the computing techniques commonly applied in image processing, segmentation, feature selection, and classification that can help detect prostate malignancies in histopathological images. Full article
(This article belongs to the Special Issue Computer Aided Diagnosis Sensors)
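As one example of the segmentation techniques such surveys cover, Otsu's global threshold picks the gray level that maximises between-class variance. A self-contained sketch on a synthetic bimodal image (not real histopathology data) is:

```python
import numpy as np

def otsu_threshold(img, n_bins=256):
    """Gray-level threshold maximising between-class variance (Otsu's method)."""
    hist, edges = np.histogram(img, bins=n_bins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                       # cumulative class-0 probability
    mu = np.cumsum(p * centers)             # cumulative class-0 mean * w0
    mu_t = mu[-1]                           # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    sigma_b = np.nan_to_num(sigma_b)        # empty-class bins contribute nothing
    return centers[np.argmax(sigma_b)]

rng = np.random.default_rng(8)
# Bimodal synthetic image: bright stroma/background vs. darker-stained nuclei
background = rng.normal(0.8, 0.05, size=(64, 64))
nuclei = rng.normal(0.3, 0.05, size=(64, 64))
img = np.where(rng.random((64, 64)) < 0.2, nuclei, background)
th = otsu_threshold(img)
mask = img < th                             # segmented "nuclei" pixels
print(round(th, 2))
```

Real histopathology pipelines usually precede thresholding with stain normalization and follow it with morphological cleanup, which is where the preparation challenges discussed in the survey come in.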
