Computing and Artificial Intelligence Techniques for Healthcare Applications

A special issue of Healthcare (ISSN 2227-9032). This special issue belongs to the section "Artificial Intelligence in Medicine".

Deadline for manuscript submissions: closed (30 September 2021) | Viewed by 40266

Special Issue Editor


Dr. Ahmed Farouk
Guest Editor
Cybersecurity Research Lab, Ryerson University, Toronto, ON, Canada
Interests: information security; quantum computation; quantum communication; quantum cryptography; quantum machine learning

Special Issue Information

Dear Colleagues,

With the rapid growth of biological data in recent years, data-driven computational methods are increasingly needed to quickly and accurately analyze large-scale biological data. In particular, biological and medical technologies are providing explosive volumes of biological and physiological data, such as medical images, electroencephalography signals, and genomic and protein sequences. Learning from these data will facilitate our understanding of human health and disease. Accordingly, computation and machine learning techniques have recently emerged in both academia and industry as “intelligent” methods in many specific healthcare areas to gain insight from medical and biological data. To expand the scope and ease of the applicability of machine learning, it is highly desirable to make learning algorithms less dependent on handcrafted feature engineering, so that novel applications can be constructed faster and, more importantly, progress toward artificial intelligence (AI) can be made.

This Special Issue targets recent computation and machine learning techniques as well as state-of-the-art applications in healthcare areas such as bioinformatics, bioprocess systems, biomedical systems, biomedical physics, and bioecological systems. This Special Issue will consider original research articles and review articles on computational and intelligent methods in healthcare and their applications. We wish to gather relevant contributions introducing new techniques for the study of complex healthcare systems driven by computational methods. Papers on interdisciplinary applications are particularly welcome. We also encourage authors to make their code and experimental data available to the public, so that our Special Issue can be more influential and attractive.

Dr. Ahmed Farouk
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website and completing the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Healthcare is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Quantum machine learning
  • Supervised learning algorithms
  • Unsupervised learning algorithms
  • Imbalanced learning algorithms
  • Multi-view feature learning
  • Deep learning-based feature learning strategies
  • Feature representation optimization algorithms
  • Handcrafted feature representation algorithms
  • Computational and mathematical techniques
  • Image and signal processing
  • Bioinformatics
  • Mental health
  • Bioprocess systems
  • Biomedical systems
  • Biomedical physics
  • Bioecological systems

Published Papers (13 papers)


Research

13 pages, 2221 KiB  
Article
Towards a Mathematical Model for the Viral Progression in the Pharynx
by Raj Kumar Arya, George D. Verros and Devyani Thapliyal
Healthcare 2021, 9(12), 1766; https://doi.org/10.3390/healthcare9121766 - 20 Dec 2021
Viewed by 2069
Abstract
In this work, a comprehensive model for the viral progression in the pharynx has been developed. This one-dimensional model considers both Fickian diffusion and convective flow coupled with chemical reactions, such as virus population growth, infected and uninfected cell accumulation, as well as virus clearance. The effect of a sterilizing agent such as an alcoholic solution on the viral progression in the pharynx was taken into account, and a parametric analysis of the effect of kinetic rate parameters on virus propagation was performed. Moreover, different conditions caused by further medical treatment, such as a decrease in virus yield per infected cell, were examined. It is shown that the infection fails to establish itself when the virus yield per infected cell is decreased. It is believed that this work could be used to further investigate the medical treatment of viral progression in the pharynx.
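
The model above couples one-dimensional Fickian diffusion, convective flow, and reaction terms (virus growth and clearance). Below is a minimal explicit finite-difference sketch of such a convection-diffusion-reaction equation for a viral concentration profile; the grid, time step, parameter values, and simple net growth/clearance term are illustrative assumptions, not the authors' calibrated model.

```python
import numpy as np

# Illustrative parameters (assumed, not the paper's calibrated values)
D = 1e-3        # Fickian diffusion coefficient (cm^2/s)
u = 5e-4        # convective velocity (cm/s)
r = 1e-4        # virus growth rate (1/s)
k_clear = 5e-5  # clearance rate (1/s), e.g., enhanced by a sterilizing agent

L, N = 10.0, 200           # pharynx length (cm) and number of grid points
dx = L / (N - 1)
dt = 0.2 * dx**2 / D       # time step within the diffusive stability limit
V = np.zeros(N)
V[0] = 1.0                 # viral load imposed at the inlet boundary

def step(V):
    """One explicit Euler step of dV/dt = D V_xx - u V_x + (r - k_clear) V."""
    Vn = V.copy()
    diff = D * (V[2:] - 2 * V[1:-1] + V[:-2]) / dx**2
    conv = -u * (V[1:-1] - V[:-2]) / dx          # upwind convection (u > 0)
    react = (r - k_clear) * V[1:-1]
    Vn[1:-1] = V[1:-1] + dt * (diff + conv + react)
    Vn[-1] = Vn[-2]                               # zero-gradient outlet
    return Vn

for _ in range(5000):
    V = step(V)
print("peak viral load after simulation:", V.max())
```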

9 pages, 938 KiB  
Article
A Practical Application for Quantitative Brain Fatigue Evaluation Based on Machine Learning and Ballistocardiogram
by Yanting Xu, Zhengyuan Yang, Gang Li, Jinghong Tian and Yonghua Jiang
Healthcare 2021, 9(11), 1453; https://doi.org/10.3390/healthcare9111453 - 27 Oct 2021
Cited by 8 | Viewed by 1962
Abstract
Brain fatigue is often associated with inattention, mental retardation, prolonged reaction time, decreased work efficiency, increased error rate, and other problems. In addition to the accumulation of fatigue, brain fatigue has become one of the important factors that harm our mental health. Therefore, it is of great significance to explore practical and accurate brain fatigue detection methods, especially for quantitative brain fatigue evaluation. In this study, a biomedical signal, the ballistocardiogram (BCG), which does not require direct contact with the human body, was collected with an optical fiber sensor cushion during the whole process of cognitive tasks for 20 subjects. Heart rate variability (HRV) was calculated based on the BCG signal, and a machine learning classification model based on random forest was built to quantify and recognize brain fatigue. The results showed that: firstly, the heart rate obtained from the BCG signal was consistent with the result displayed by the medical equipment, with an absolute difference of less than 3 beats/min and a mean error of 1.30 ± 0.81 beats/min; secondly, the random forest classifier for brain fatigue evaluation based on HRV could effectively identify the state of brain fatigue, with an accuracy rate of 96.54%; finally, the correlation between HRV and the accuracy was analyzed, and the correlation coefficient was as high as 0.98, which indicates that the accuracy can be used as an indicator for quantitative brain fatigue evaluation during the whole task. The results suggest that the brain fatigue quantification evaluation method based on the optical fiber sensor cushion and machine learning can carry out real-time brain fatigue detection on the human brain without disturbance, reduce the risk of human accidents in human–machine interaction systems, and improve mental health among office and driving personnel.
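
As a rough illustration of the classification stage described above, the sketch below trains and cross-validates a random-forest model on HRV-style features; the feature columns, labels, and data are synthetic placeholders rather than the study's optical-fiber BCG recordings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Placeholder HRV feature matrix: rows = time windows, columns = HRV indices
# (e.g., mean RR, SDNN, RMSSD, LF/HF ratio); labels: 0 = alert, 1 = fatigued.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 1] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=200) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print("cross-validated accuracy: %.3f ± %.3f" % (scores.mean(), scores.std()))
```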

16 pages, 1794 KiB  
Article
A Fusion-Based Machine Learning Approach for the Prediction of the Onset of Diabetes
by Muhammad Waqas Nadeem, Hock Guan Goh, Vasaki Ponnusamy, Ivan Andonovic, Muhammad Adnan Khan and Muzammil Hussain
Healthcare 2021, 9(10), 1393; https://doi.org/10.3390/healthcare9101393 - 18 Oct 2021
Cited by 74 | Viewed by 5655
Abstract
A growing portfolio of research has been reported on the use of machine learning-based architectures and models in the domain of healthcare. The development of data-driven applications and services for the diagnosis and classification of key illness conditions is challenging owing to issues of low-volume, low-quality contextual data for the training and validation of algorithms, which, in turn, compromises the accuracy of the resultant models. Here, a fusion machine learning approach is presented, reporting an improvement in the accuracy of the identification of diabetes and the prediction of the onset of critical events for patients with diabetes (PwD). Globally, the cost of treating diabetes, a prevalent chronic illness condition characterized by high levels of sugar in the bloodstream over long periods, is placing severe demands on health providers, and the proposed solution has the potential to support an increase in the rates of survival of PwD by informing the optimum treatment on an individual patient basis. At the core of the proposed architecture is a fusion of machine learning classifiers (Support Vector Machine and Artificial Neural Network). Results indicate a classification accuracy of 94.67%, exceeding the best machine learning performance reported for diabetes to date by ~1.8%.
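
One simple way to realise the fusion of the two named classifiers (Support Vector Machine and Artificial Neural Network) is a soft-voting ensemble, sketched below; the fusion rule, hyperparameters, and synthetic data are assumptions for illustration, not the architecture reported in the paper.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic stand-in for clinical features of patients with diabetes
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

fusion = VotingClassifier(
    estimators=[
        ("svm", SVC(kernel="rbf", probability=True, random_state=0)),
        ("ann", MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)),
    ],
    voting="soft",  # average the predicted probabilities of both models
)
fusion.fit(X_tr, y_tr)
print("fused accuracy:", fusion.score(X_te, y_te))
```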

14 pages, 5846 KiB  
Article
Traditional versus Microsphere Embolization for Hepatocellular Carcinoma: An Effectiveness Evaluation Using Data Mining
by Pi-Yi Chang, Chen-Yang Cheng, Jau-Shin Hon, Cheng-Ding Kuo, Chieh-Ling Yen and Jyh-Wen Chai
Healthcare 2021, 9(8), 929; https://doi.org/10.3390/healthcare9080929 - 23 Jul 2021
Cited by 1 | Viewed by 1836
Abstract
Background: For hepatocellular carcinoma (“HCC”), the current standard of treatment is hepatic artery embolization, generally through trans-catheter arterial chemoembolization (“TACE”). There are two types: traditional (“conventional” or “cTACE”) and microsphere (“DC bead TACE”). Unfortunately, the literature comparing the relative effectiveness of cTACE versus DC bead TACE is inconclusive, partially due to the complexity of HCC and its response to treatment. Data mining is an excellent method to extract meaning from complex data sets. Purpose: Through the application of data mining techniques, to compare the relative effectiveness of cTACE and DC bead TACE using a large patient database, and to use this comparison to establish usable guidelines for developing treatment plans for HCC patients. Materials and Methods: The data of 372 HCC patients who underwent TACE in Taichung Veterans General Hospital were analyzed. The chi-square test was used to compare the difference in effectiveness of the two therapies. Logistic regression was used to calculate the odds ratios. Furthermore, using the C4.5 decision tree, the two therapies were classified into applicable fields. The chi-square test, the t-test, and logistic regression were used to verify the classification results. Results: In Barcelona Clinic Stages A and B cancers, cTACE was found to be 22.7% more effective than DC bead TACE. Using the decision tree C4.5 as a classifier, the effectiveness of either treatment for small tumors was 8.475 times that for large tumors. DC bead TACE was 3.39 times more successful in treating patients with a single tumor than with multiple tumors. For patients with a single tumor, the chi-square test showed that 100–300 μm microspheres were significantly more effective than 300–500 μm microspheres. While these findings provide a reference for the selection of an appropriate TACE approach, we noted that overall accuracy was somewhat low, possibly due to the limited population. Conclusions: We found that data mining could be applied to develop clear guidelines for physician and researcher use in the case of complex pathologies such as HCC. However, some of our results contradicted those elsewhere in the literature, possibly due to a relatively small sample size. Significantly larger data sets with appropriate levels of granularity could produce more accurate results.
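
A minimal sketch of the statistical workflow, a chi-square test of treatment effectiveness plus an entropy-based decision tree standing in for C4.5 (scikit-learn implements CART with an entropy criterion rather than C4.5 proper), is shown below; the contingency counts and features are invented for illustration.

```python
import numpy as np
from scipy.stats import chi2_contingency
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical 2x2 contingency table: rows = cTACE / DC bead TACE,
# columns = effective / not effective (counts are illustrative only).
table = np.array([[80, 40],
                  [60, 70]])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")

# Entropy-based tree as a stand-in for C4.5: predict treatment response from
# tumour size (cm), tumour count, and treatment type (0 = cTACE, 1 = DC bead).
rng = np.random.default_rng(1)
X = np.column_stack([rng.uniform(1, 10, 300), rng.integers(1, 5, 300), rng.integers(0, 2, 300)])
y = (X[:, 0] < 5).astype(int)   # toy rule: smaller tumours respond better
tree = DecisionTreeClassifier(criterion="entropy", max_depth=3).fit(X, y)
print(export_text(tree, feature_names=["size_cm", "n_tumours", "treatment"]))
```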

16 pages, 4542 KiB  
Article
Smart Health System to Detect Dementia Disorders Using Virtual Reality
by Areej Y. Bayahya, Wadee Alhalabi and Sultan H. AlAmri
Healthcare 2021, 9(7), 810; https://doi.org/10.3390/healthcare9070810 - 28 Jun 2021
Cited by 9 | Viewed by 3222
Abstract
Smart health technology includes physical sensors, intelligent sensors, and output advice to help monitor patients’ health and adjust their behavior. Virtual reality (VR) plays an increasingly large role in improving health outcomes, being used in a variety of medical specialties including robotic surgery, diagnosis of some difficult diseases, and virtual reality pain distraction for severe burn patients. Smart VR health technology acts as a decision support system in the diagnostic testing of patients as they perform real-world tasks in virtual reality (e.g., navigation). In this study, a non-invasive, computerized cognitive test based on 3D virtual environments for detecting the main symptoms of dementia (memory loss, visuospatial defects, and impaired spatial navigation) is proposed. In a recent study, the system was tested on 115 real patients, of whom thirty had dementia, sixty-five were cognitively healthy, and twenty had mild cognitive impairment (MCI). The performance of the VR system was compared with the Mini-Cog test, the latter being used to assess cognitively impaired patients in the traditional diagnostic setting at the clinic. It was observed that the visuospatial and memory recall scores of dementia patients in both the clinical diagnosis and the VR system were lower than those of MCI patients, and the scores of MCI patients were lower than those of the control group. Furthermore, there is perfect agreement between the standard methods of functional evaluation and navigational ability in our system, where the weighted Kappa statistic indicated 100% agreement, and strong agreement between the Mini-Cog clinical diagnosis and the VR scores, where the weighted Kappa statistic indicated 93% agreement.
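
Agreement between the VR system and the standard assessments is reported with a weighted Kappa statistic; the snippet below shows how such an agreement coefficient can be computed with scikit-learn on hypothetical ordinal ratings (0 = healthy, 1 = MCI, 2 = dementia), not the study's data.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical ordinal ratings for ten subjects: 0 = healthy, 1 = MCI, 2 = dementia
clinical = [0, 0, 1, 2, 2, 1, 0, 2, 1, 0]
vr_score = [0, 0, 1, 2, 1, 1, 0, 2, 1, 0]

# Linearly weighted kappa penalises near-misses less than gross disagreements
kappa = cohen_kappa_score(clinical, vr_score, weights="linear")
print(f"weighted kappa = {kappa:.2f}")
```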

23 pages, 6655 KiB  
Article
A Novel Method for COVID-19 Diagnosis Using Artificial Intelligence in Chest X-ray Images
by Yassir Edrees Almalki, Abdul Qayyum, Muhammad Irfan, Noman Haider, Adam Glowacz, Fahad Mohammed Alshehri, Sharifa K. Alduraibi, Khalaf Alshamrani, Mohammad Abd Alkhalik Basha, Alaa Alduraibi, M. K. Saeed and Saifur Rahman
Healthcare 2021, 9(5), 522; https://doi.org/10.3390/healthcare9050522 - 29 Apr 2021
Cited by 60 | Viewed by 4845
Abstract
The Coronavirus disease 2019 (COVID-19) is an infectious disease spreading rapidly and uncontrollably throughout the world. The critical challenge is the rapid detection of Coronavirus-infected people. The techniques currently in use are body-temperature measurement along with anterior nasal swab analysis. However, taking nasal swabs and lab testing are complex, intrusive, and require many resources. Furthermore, the lack of test kits to meet the growing number of cases is also a major limitation. The current challenge is to develop technology to non-intrusively detect suspected Coronavirus patients through Artificial Intelligence (AI) techniques such as deep learning (DL). Another challenge in conducting research in this area is the difficulty of obtaining datasets, due to the limited number of patients giving consent to participate in research studies. Given the efficacy of AI in healthcare systems, it is a great challenge for researchers to develop an AI algorithm that can help health professionals and government officials automatically identify and isolate people with Coronavirus symptoms. Hence, this paper proposes a novel method, CoVIRNet (COVID Inception-ResNet model), which utilizes chest X-rays to diagnose COVID-19 patients automatically. The proposed algorithm has different inception residual blocks that capture information through feature maps of different depths at different scales across the various layers. The features are concatenated at each proposed classification block using the average-pooling layer, and the concatenated features are passed to the fully connected layer. The proposed deep-learning blocks use different regularization techniques to minimize overfitting due to the small COVID-19 dataset. The multiscale features are extracted at different levels of the proposed deep-learning model and then embedded into various machine-learning models to validate the combination of deep-learning and machine-learning models. The proposed CoVIRNet model achieved 95.7% accuracy, and the CoVIRNet feature extractor with a random-forest classifier produced 97.29% accuracy, the highest compared with existing state-of-the-art deep-learning methods. The proposed model would be an automatic solution for the assessment and classification of COVID-19. We predict that the proposed method will demonstrate outstanding performance compared with the state-of-the-art techniques currently in use.
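
Part of the pipeline embeds deep multiscale features into classical machine-learning models, with the random-forest variant giving the best reported accuracy. The sketch below illustrates that general pattern using a pretrained Inception-ResNet-v2 backbone from Keras as a stand-in feature extractor; the backbone, image batch, and labels are assumptions, not the proposed CoVIRNet blocks or the COVID-19 dataset.

```python
import numpy as np
import tensorflow as tf
from sklearn.ensemble import RandomForestClassifier

# Pretrained backbone as a generic feature extractor (stand-in for CoVIRNet blocks)
backbone = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights="imagenet", pooling="avg", input_shape=(299, 299, 3)
)

def extract_features(images):
    """images: float array (N, 299, 299, 3) in the 0-255 range."""
    x = tf.keras.applications.inception_resnet_v2.preprocess_input(images)
    return backbone.predict(x, verbose=0)   # (N, 1536) pooled feature vectors

# Placeholder chest X-ray batch and labels (0 = normal, 1 = COVID-19)
images = np.random.rand(16, 299, 299, 3).astype("float32") * 255.0
labels = np.random.randint(0, 2, size=16)

features = extract_features(images)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(features, labels)
print("training accuracy on the toy batch:", clf.score(features, labels))
```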

14 pages, 398 KiB  
Article
Identification of People with Diabetes Treatment through Lipids Profile Using Machine Learning Algorithms
by Vanessa Alcalá-Rmz, Carlos E. Galván-Tejada, Alejandra García-Hernández, Adan Valladares-Salgado, Miguel Cruz, Jorge I. Galván-Tejada, Jose M. Celaya-Padilla, Huizilopoztli Luna-Garcia and Hamurabi Gamboa-Rosales
Healthcare 2021, 9(4), 422; https://doi.org/10.3390/healthcare9040422 - 6 Apr 2021
Cited by 3 | Viewed by 2280
Abstract
Diabetes incidence has been a problem because, according to the World Health Organization and the International Diabetes Federation, the number of people with this disease is increasing very fast all over the world. Diabetes treatment is important to prevent the development of several complications; monitoring of the lipid profile is also important. For that reason, the aim of this work is the implementation of machine learning algorithms able to classify cases, corresponding to patients diagnosed with diabetes who receive diabetes treatment, and controls, referring to subjects who do not receive diabetes treatment although some of them have diabetes, based on lipid profile levels. Artificial neural networks, logistic regression, K-nearest neighbors, decision trees, and random forests were implemented; all of them were evaluated with accuracy, sensitivity, specificity, and AUC-ROC metrics. The artificial neural network obtained an accuracy of 0.685 and an AUC value of 0.750, logistic regression achieved an accuracy of 0.729 and an AUC value of 0.795, and K-nearest neighbors obtained an accuracy of 0.669 and an AUC value of 0.709; on the other hand, the decision tree reached an accuracy of 0.691 and an AUC value of 0.683, and finally the random forest achieved an accuracy of 0.704 and an AUC value of 0.776. The performance of all models was statistically significant, but the best performing model for this problem corresponds to logistic regression.
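
A condensed sketch of the comparison described above, several classifiers scored with accuracy and AUC on lipid-profile features, is given below; the synthetic lipid values and labels are placeholders for the cohort data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Placeholder lipid profile: total cholesterol, HDL, LDL, triglycerides (mg/dL)
rng = np.random.default_rng(0)
X = rng.normal(loc=[190, 50, 110, 150], scale=[30, 10, 25, 60], size=(400, 4))
y = rng.integers(0, 2, size=400)   # 1 = case (treated diabetes), 0 = control

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "k-nearest neighbors": KNeighborsClassifier(),
    "random forest": RandomForestClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)[:, 1]
    print(f"{name}: accuracy={accuracy_score(y_te, model.predict(X_te)):.3f}, "
          f"AUC={roc_auc_score(y_te, proba):.3f}")
```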

15 pages, 387 KiB  
Article
Automatic Evaluation of Heart Condition According to the Sounds Emitted and Implementing Six Classification Methods
by Manuel A. Soto-Murillo, Jorge I. Galván-Tejada, Carlos E. Galván-Tejada, Jose M. Celaya-Padilla, Huizilopoztli Luna-García, Rafael Magallanes-Quintanar, Tania A. Gutiérrez-García and Hamurabi Gamboa-Rosales
Healthcare 2021, 9(3), 317; https://doi.org/10.3390/healthcare9030317 - 12 Mar 2021
Cited by 5 | Viewed by 1872
Abstract
The main cause of death in Mexico and the world is heart disease, and it will continue to lead the death rate in the next decade according to data from the World Health Organization (WHO) and the National Institute of Statistics and Geography (INEGI). Therefore, the objective of this work is to implement, compare and evaluate machine learning algorithms that are capable of classifying normal and abnormal heart sounds. Three different sounds were analyzed in this study: normal heart sounds, heart murmur sounds and extra systolic sounds, which were labeled as healthy sounds (normal sounds) and unhealthy sounds (murmur and extra systolic sounds). From these sounds, fifty-two features were calculated to create a numerical dataset: thirty-six statistical features, eight Linear Predictive Coding (LPC) coefficients and eight Mel-Frequency Cepstral Coefficients (MFCC). From this dataset, two more were created: one normalized and one standardized. These datasets were analyzed with six classifiers: k-Nearest Neighbors, Naive Bayes, Decision Trees, Logistic Regression, Support Vector Machine and Artificial Neural Networks, all of which were evaluated with six metrics: accuracy, specificity, sensitivity, ROC curve, precision and F1-score. The performances of all the models were statistically significant, but the models that performed best for this problem were logistic regression for the standardized dataset, with a specificity of 0.7500 and a ROC curve of 0.8405; logistic regression for the normalized dataset, with a specificity of 0.7083 and a ROC curve of 0.8407; and Support Vector Machine with a linear kernel for the non-normalized data, with a specificity of 0.6842 and a ROC curve of 0.7703. Both of these metrics are of utmost importance in evaluating the performance of computer-assisted diagnostic systems.
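
The fifty-two features mentioned above mix statistical descriptors with LPC and MFCC coefficients. The snippet below sketches how a reduced version of such a feature vector can be extracted with librosa; the synthetic one-second segment and the pared-down feature set are assumptions, not the paper's recordings or full feature list.

```python
import numpy as np
import librosa

# Synthetic one-second "heart sound" segment (placeholder for a PCG recording)
sr = 4000
t = np.linspace(0, 1, sr, endpoint=False)
y = 0.6 * np.sin(2 * np.pi * 40 * t) * np.exp(-5 * t) + 0.01 * np.random.randn(sr)

# Eight MFCCs averaged over time, eight LPC coefficients, and a few statistics
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=8).mean(axis=1)
lpc = librosa.lpc(y, order=8)[1:]          # drop the leading 1.0 coefficient
stats = np.array([y.mean(), y.std(), y.min(), y.max()])

feature_vector = np.concatenate([stats, lpc, mfcc])
print("feature vector length:", feature_vector.shape[0])   # 4 + 8 + 8 = 20
```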

13 pages, 1040 KiB  
Article
ECG Enhancement and R-Peak Detection Based on Window Variability
by Lu Wu, Xiaoyun Xie and Yinglong Wang
Healthcare 2021, 9(2), 227; https://doi.org/10.3390/healthcare9020227 - 18 Feb 2021
Cited by 7 | Viewed by 2777
Abstract
In ECG applications, the correct recognition of R-peaks is extremely important for detecting abnormalities such as arrhythmia and ventricular hypertrophy. In this work, a novel ECG enhancement and R-peak detection method based on window variability, abbreviated SQRS, is presented. Firstly, the ECG signal corrupted by various high- or low-frequency noise is denoised by moving-average filtering. Secondly, the window variance transform technique is used to enhance the QRS complex and suppress the other components in the ECG, such as P/T waves and noise. Finally, the signal converted by the window variance transform is used to generate R-peak candidates, and decision rules, including amplitude and kurtosis adaptive thresholds, are applied to determine the R-peaks. A special squared window variance transform (SWVT) is proposed to measure the signal variability in a certain time window, and this technique reduces the false detection rate caused by the various types of interference present in ECG signals. For the MIT-BIH arrhythmia database, the sensitivity of R-peak detection reaches 99.6% using the proposed method.
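
A minimal numpy sketch of the central idea, a squared window variance transform followed by peak picking, is shown below; the window length, threshold rule, and synthetic ECG are illustrative assumptions rather than the full SQRS decision logic, which also uses moving-average denoising and kurtosis-adaptive thresholds.

```python
import numpy as np
from scipy.signal import find_peaks

def window_variance(x, win):
    """Moving-window variance: E[x^2] - (E[x])^2 over a sliding window."""
    kernel = np.ones(win) / win
    mean = np.convolve(x, kernel, mode="same")
    mean_sq = np.convolve(x ** 2, kernel, mode="same")
    return mean_sq - mean ** 2

# Synthetic ECG-like signal: narrow spikes (QRS) on a noisy baseline, fs = 360 Hz
fs, n = 360, 3600
ecg = 0.05 * np.random.randn(n)
r_true = np.arange(180, n, 300)            # one beat roughly every 0.83 s
ecg[r_true] += 1.0

wvt = window_variance(ecg, win=int(0.08 * fs)) ** 2   # squared window variance
peaks, _ = find_peaks(wvt, height=0.5 * wvt.max(), distance=int(0.3 * fs))
print("detected R-peaks:", len(peaks), "true beats:", len(r_true))
```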

19 pages, 6540 KiB  
Article
A Framework for AI-Assisted Detection of Patent Ductus Arteriosus from Neonatal Phonocardiogram
by Sergi Gómez-Quintana, Christoph E. Schwarz, Ihor Shelevytsky, Victoriya Shelevytska, Oksana Semenova, Andreea Factor, Emanuel Popovici and Andriy Temko
Healthcare 2021, 9(2), 169; https://doi.org/10.3390/healthcare9020169 - 5 Feb 2021
Cited by 17 | Viewed by 3602
Abstract
The current diagnosis of Congenital Heart Disease (CHD) in neonates relies on echocardiography. Its limited availability requires alternative screening procedures to prioritise newborns awaiting ultrasound. The routine screening for CHD is performed using a multidimensional clinical examination including (but not limited to) auscultation and pulse oximetry. While auscultation might be subjective, with some heart abnormalities not always audible, it increases the ability to detect heart defects. This work aims at developing an objective clinical decision support tool based on machine learning (ML) to facilitate differentiation of sounds with signatures of Patent Ductus Arteriosus (PDA)/CHDs in clinical settings. The heart sounds are pre-processed and segmented, followed by feature extraction. The features are fed into a boosted decision tree classifier to estimate the probability of PDA or CHDs. Several mechanisms to combine information from different auscultation points, as well as consecutive sound cycles, are presented. The system is evaluated on a large clinical dataset of heart sounds from 265 term and late-preterm newborns recorded within the first six days of life. The developed system reaches an area under the curve (AUC) of 78% at detecting CHD and 77% at detecting PDA. The obtained results for PDA detection compare favourably with the level of accuracy achieved by an experienced neonatologist when assessed on the same cohort.
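
A condensed sketch of the classification stage, a boosted decision-tree model whose per-recording probabilities are averaged across auscultation points, is given below; the features, the number of sites, and the simple averaging rule are assumptions rather than the paper's exact combination mechanisms.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_babies, n_sites = 60, 4                      # 4 auscultation points per newborn
X = rng.normal(size=(n_babies * n_sites, 10))  # placeholder heart-sound features
baby_id = np.repeat(np.arange(n_babies), n_sites)
y_baby = rng.integers(0, 2, size=n_babies)     # 1 = PDA/CHD, 0 = healthy
y = y_baby[baby_id]                            # label copied to each recording

clf = GradientBoostingClassifier(random_state=0).fit(X, y)
p_rec = clf.predict_proba(X)[:, 1]             # per-recording probability

# Combine information from the auscultation points by averaging probabilities
p_baby = np.array([p_rec[baby_id == b].mean() for b in range(n_babies)])
print("training AUC at the per-newborn level: %.2f" % roc_auc_score(y_baby, p_baby))
```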

15 pages, 2423 KiB  
Article
Computer Tomography in the Diagnosis of Ovarian Cysts: The Role of Fluid Attenuation Values
by Roxana-Adelina Lupean, Paul-Andrei Ștefan, Mihaela Daniela Oancea, Andrei Mihai Măluțan, Andrei Lebovici, Marius Emil Pușcaș, Csaba Csutak and Carmen Mihaela Mihu
Healthcare 2020, 8(4), 398; https://doi.org/10.3390/healthcare8040398 - 14 Oct 2020
Cited by 6 | Viewed by 2435
Abstract
Pathological analysis of ovarian cysts shows specific fluid characteristics that cannot be standardly evaluated on computer tomography (CT) examinations. This study aimed to assess the ovarian cysts’ fluid attenuation values on the native (Np), arterial (Ap), and venous (Vp) contrast phases in seventy patients with ovarian cysts who underwent CT examinations and were retrospectively included. Patients were divided according to their final diagnosis into a benign group (n = 32) and a malignant group (n = 38; of which 27 were primary and 11 were secondary lesions). Two radiologists measured the fluid attenuation values on each contrast phase, and the average values were used to discriminate between the benign and malignant groups and between primary tumors and metastases via univariate, multivariate, multiple regression, and receiver operating characteristic analyses. The Ap densities (p = 0.0002) were independently associated with malignant cysts. Based on the densities measured on all three phases, neoplastic lesions could be diagnosed with 89.47% sensitivity and 62.5% specificity. The Np densities (p = 0.0005) were able to identify metastases with 90.91% sensitivity and 70.37% specificity, while the combined densities of all three phases diagnosed secondary lesions with 72.73% sensitivity and 92.59% specificity. The ovarian cysts’ fluid densities could function as an adjuvant criterion to the classic CT evaluation of ovarian cysts.
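
The diagnostic use of the attenuation values rests on receiver operating characteristic analysis. The snippet below shows how a density cutoff, with its sensitivity and specificity, can be derived via Youden's index using scikit-learn; the Hounsfield-unit values are invented, not the study's measurements.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Invented arterial-phase fluid densities (HU): 0 = benign cyst, 1 = malignant
rng = np.random.default_rng(0)
hu = np.concatenate([rng.normal(12, 4, 32), rng.normal(20, 5, 38)])
label = np.concatenate([np.zeros(32), np.ones(38)])

fpr, tpr, thresholds = roc_curve(label, hu)
youden = tpr - fpr                              # Youden's J for each cutoff
best = np.argmax(youden)
print(f"AUC = {roc_auc_score(label, hu):.2f}")
print(f"best cutoff = {thresholds[best]:.1f} HU, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```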

15 pages, 1079 KiB  
Article
Classification of Biomedical Texts for Cardiovascular Diseases with Deep Neural Network Using a Weighted Feature Representation Method
by Nizar Ahmed, Fatih Dilmaç and Adil Alpkocak
Healthcare 2020, 8(4), 392; https://doi.org/10.3390/healthcare8040392 - 10 Oct 2020
Cited by 5 | Viewed by 2411
Abstract
This study aims to improve the performance of multiclass classification of biomedical texts for cardiovascular diseases by combining two different feature representation methods, i.e., bag-of-words (BoW) and word embeddings (WE). To hybridize the two feature representations, we investigated a set of possible statistical weighting schemes to combine with each element of the WE vectors, namely term frequency (TF), inverse document frequency (IDF) and class probability (CP) methods. Thus, we built a multiclass classification model using a bidirectional long short-term memory (BLSTM) network with deep neural networks for all investigated combinations of feature vectors. We used MIMIC III and the PubMed dataset for developing the language model. To evaluate the performance of our weighted feature representation approaches, we conducted a set of experiments examining multiclass classification performance with the deep neural network model and other state-of-the-art machine learning (ML) approaches. In all experiments, we used the OHSUMED-400 dataset, which includes PubMed abstracts each related to exactly one of 23 cardiovascular disease categories. Afterwards, we present the results obtained from the experiments and provide a comparison with related research in the literature. The results of the experiments showed that our BLSTM model with the weighting techniques outperformed the baseline and other machine learning approaches in terms of validation accuracy. Finally, our model outperformed the scores of related studies in the literature. This study shows that weighted feature representation improves the performance of multiclass classification.
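
The core idea, weighting each word-embedding vector by a corpus statistic such as IDF before building the document representation, can be illustrated in a few lines; the tiny corpus and the random embedding table below are placeholders (the study trained language models on MIMIC III/PubMed and classified with a BLSTM).

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "myocardial infarction with acute chest pain",
    "chronic heart failure and reduced ejection fraction",
]

# IDF weights from a bag-of-words model over the corpus
vec = TfidfVectorizer()
vec.fit(docs)
idf = dict(zip(vec.get_feature_names_out(), vec.idf_))

# Placeholder 50-dimensional embedding table (stand-in for a trained language model)
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=50) for w in idf}

def doc_vector(text):
    """IDF-weighted average of the word-embedding vectors in a document."""
    words = [w for w in text.lower().split() if w in emb]
    weights = np.array([idf[w] for w in words])
    vectors = np.stack([emb[w] for w in words])
    return (weights[:, None] * vectors).sum(axis=0) / weights.sum()

print(doc_vector(docs[0]).shape)   # (50,) weighted document representation
```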

26 pages, 4651 KiB  
Article
Consistency of Medical Data Using Intelligent Neuron Faster R-CNN Algorithm for Smart Health Care Application
by Seong-Kyu Kim and Jun-Ho Huh
Healthcare 2020, 8(2), 185; https://doi.org/10.3390/healthcare8020185 - 25 Jun 2020
Cited by 12 | Viewed by 3467
Abstract
The purpose of this study stems from the growing interest in health as human life is extended in modern society. Hospitals produce large amounts of medical data (EMR, PACS, OCS, EHR, MRI, X-ray) after treatment, stored as both structured and unstructured data. However, many medical data are affected by errors, omissions and mistakes in the process of reading. Such readings are critical when dealing with human life, and errors sometimes lead to medical accidents caused by physician mistakes. Therefore, this research uses a CNN intelligent agent cloud architecture to verify errors in the reading of existing medical image data. To reduce the error rate when reading medical image data, a Faster R-CNN intelligent agent cloud architecture is proposed; it improves the detection of existing reading errors by more than 1.4 times (140%). In particular, the algorithm analyses existing stored medical data through Conv feature maps using a deep ConvNet and ROI projection. The data were verified using a database of about 120,000 records concerning examinations of human lungs. In addition, the experimental environment was configured for high GPU performance, using NVIDIA SLI, a multi-OS setup, and multiple Quadro GPUs. In this experiment, verification data were randomly extracted from the approximately 120,000 medical records, and similarity to the original data was measured by comparing about 40% of the extracted images. Finally, the aim is to reduce and verify the error rate of medical data reading.
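
As a rough sketch of the detection backbone mentioned above, the snippet below runs a pretrained Faster R-CNN from torchvision on a placeholder image and reads back the proposed regions; the generic COCO-trained model and random input are stand-ins for the intelligent-agent cloud architecture and lung-image database described in the paper.

```python
import torch
import torchvision

# Pretrained Faster R-CNN (generic COCO weights, a stand-in for a medically trained model)
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# Placeholder "medical image": a single 3-channel tensor with values in [0, 1]
image = torch.rand(3, 512, 512)

with torch.no_grad():
    outputs = model([image])        # the model expects a list of image tensors

boxes = outputs[0]["boxes"]         # (N, 4) detected regions as x1, y1, x2, y2
scores = outputs[0]["scores"]       # (N,) confidence per detected region
keep = scores > 0.5
print(f"{int(keep.sum())} regions detected above the 0.5 confidence threshold")
```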
