Artificial Intelligence in Medical Sensors

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Biomedical Sensors".

Deadline for manuscript submissions: closed (15 December 2020) | Viewed by 67828

Special Issue Editors

Guest Editor
Faculty of Engineering and Information Technology (FEIT), University of Technology Sydney, 15 Broadway, Ultimo, NSW 2007, Australia
Interests: rehabilitation engineering; biomedical instrumentation; physiological system modeling; data acquisition and distribution; system control and parameter identification

Guest Editor
Department of Electronic and Information Engineering, The Hong Kong Polytechnic University, Hong Kong, China
Interests: machine learning; computational intelligence

Guest Editor
University of Waterloo, Canada
Interests: artificial intelligence; machine learning; image processing and data analytics; medical imaging; scanning; sensors and devices; drinking water; autonomous and connected cars; automotive; operational artificial intelligence; robotics; smart infrastructure

Special Issue Information

Dear Colleagues,

More and more medical sensors use artificial intelligence to diagnose patients more accurately and to monitor and treat them more effectively. In medicine, artificial intelligence and sensor-based devices have proliferated, especially in medical image analysis and medical monitoring systems, and they raise important new challenges for the useful application of artificial intelligence in medical care. Recent research indicates that artificial intelligence can deliver outstanding performance in many health technology applications, and medical device companies are actively developing artificial intelligence applications within their manufacturing and supply chain operations. Artificial intelligence is widely regarded as a transformative technology: from diagnostic and medical imaging technologies to therapeutic and medical sensor applications, its potential extends to almost every corner of MedTech.

Artificial intelligence has become a powerful auxiliary and support technology for the medical and health industry. It not only provides intelligent identification and analysis for medical sensor-based health applications, but also delivers fast and comprehensive enhancements to medical monitoring systems and to diagnosis.

This Special Issue will bring together researchers to report recent findings in applying artificial intelligence to medical sensor applications.

The main topics of this Special Issue include, but are not limited to, the following:

  • Information fusion and knowledge transfer in biomedical and health technology applications.
  • Big data analytics on medical sensors.
  • Medical imaging devices.
  • Rehabilitation robotics with multi-sensor systems.
  • Therapeutic applications.
  • Analysis of medical data.
  • Advanced modeling, diagnosis, and treatment using AI and biosensors.
  • Bioinformatics and medical applications with multi-sensor networks.
  • Biomedical signal processing using AI.

Dr. Steve Ling
Dr. Steven Su
Dr. Frank H. Leung
Dr. Alexander Wong
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • biosensors
  • medical devices

Published Papers (13 papers)

Research

20 pages, 2314 KiB  
Article
Breast Tumor Classification in Ultrasound Images Using Combined Deep and Handcrafted Features
by Mohammad I. Daoud, Samir Abdel-Rahman, Tariq M. Bdair, Mahasen S. Al-Najar, Feras H. Al-Hawari and Rami Alazrai
Sensors 2020, 20(23), 6838; https://doi.org/10.3390/s20236838 - 30 Nov 2020
Cited by 28 | Viewed by 3528
Abstract
This study aims to enable effective breast ultrasound image classification by combining deep features with conventional handcrafted features to classify the tumors. In particular, the deep features are extracted from a pre-trained convolutional neural network model, namely the VGG19 model, at six different extraction levels. The deep features extracted at each level are analyzed using a features selection algorithm to identify the deep feature combination that achieves the highest classification performance. Furthermore, the extracted deep features are combined with handcrafted texture and morphological features and processed using features selection to investigate the possibility of improving the classification performance. The cross-validation analysis, which is performed using 380 breast ultrasound images, shows that the best combination of deep features is obtained using a feature set, denoted by CONV features that include convolution features extracted from all convolution blocks of the VGG19 model. In particular, the CONV features achieved mean accuracy, sensitivity, and specificity values of 94.2%, 93.3%, and 94.9%, respectively. The analysis also shows that the performance of the CONV features degrades substantially when the features selection algorithm is not applied. The classification performance of the CONV features is improved by combining these features with handcrafted morphological features to achieve mean accuracy, sensitivity, and specificity values of 96.1%, 95.7%, and 96.3%, respectively. Furthermore, the cross-validation analysis demonstrates that the CONV features and the combined CONV and morphological features outperform the handcrafted texture and morphological features as well as the fine-tuned VGG19 model. The generalization performance of the CONV features and the combined CONV and morphological features is demonstrated by performing the training using the 380 breast ultrasound images and the testing using another dataset that includes 163 images. The results suggest that the combined CONV and morphological features can achieve effective breast ultrasound image classifications that increase the capability of detecting malignant tumors and reduce the potential of misclassifying benign tumors. Full article
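As an editorial illustration of the pipeline described above, the following Python sketch extracts features from a pre-trained VGG19 at several convolution blocks, concatenates them with handcrafted descriptors, and applies univariate feature selection before a classifier. The tapped layers, global-average pooling, feature count, and SVM back-end are assumptions for illustration, not the authors' exact configuration.

```python
import numpy as np
from tensorflow.keras.applications import VGG19
from tensorflow.keras.applications.vgg19 import preprocess_input
from tensorflow.keras.models import Model
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Pre-trained VGG19; tap the last pooling layer of each convolution block (assumed extraction points).
base = VGG19(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
taps = ["block1_pool", "block2_pool", "block3_pool", "block4_pool", "block5_pool"]
extractor = Model(inputs=base.input,
                  outputs=[base.get_layer(name).output for name in taps])

def deep_features(images):
    """Global-average-pool each tapped feature map and concatenate (one row per image)."""
    maps = extractor.predict(preprocess_input(images.astype("float32")), verbose=0)
    return np.concatenate([m.mean(axis=(1, 2)) for m in maps], axis=1)

def classify(images, handcrafted, labels, k=100):
    """Combine deep and handcrafted features, keep the k most discriminative, fit an SVM."""
    X = np.concatenate([deep_features(images), handcrafted], axis=1)
    clf = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=k), SVC(kernel="rbf"))
    return clf.fit(X, labels)
```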

17 pages, 4467 KiB  
Article
Evaluation of Hemodialysis Arteriovenous Bruit by Deep Learning
by Keisuke Ota, Yousuke Nishiura, Saki Ishihara, Hihoko Adachi, Takehisa Yamamoto and Takayuki Hamano
Sensors 2020, 20(17), 4852; https://doi.org/10.3390/s20174852 - 27 Aug 2020
Cited by 8 | Viewed by 5075
Abstract
Physical findings of auscultation cannot be quantified at the arteriovenous fistula examination site during daily dialysis treatment. Consequently, minute changes over time cannot be recorded based only on subjective observations. In this study, we sought to supplement the daily arteriovenous fistula consultation for hemodialysis patients by recording the sounds made by the arteriovenous fistula and evaluating the sounds using deep learning methods to provide an objective index. We sampled arteriovenous fistula auscultation sounds (192 kHz, 24 bits) recorded over 1 min from 20 patients. We also extracted arteriovenous fistula sounds for each heartbeat without environmental sound by using a convolutional neural network (CNN) model, which was made by comparing these sound patterns with 5000 environmental sounds. The extracted single-heartbeat arteriovenous fistula sounds were sent to a spectrogram and scored using a CNN learning model with bidirectional long short-term memory, in which the degree of arteriovenous fistula stenosis was assigned to one of five sound types (i.e., normal, hard, high, intermittent, and whistling). After 100 training epochs, the method exhibited an accuracy rate of 70–93%. According to the receiver operating characteristic (ROC) curve, the area under the ROC curves (AUC) was 0.75–0.92. The analysis of arteriovenous fistula sound using deep learning has the potential to be used as an objective index in daily medical care. Full article
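A minimal Keras sketch of the kind of spectrogram CNN feeding a bidirectional LSTM that the abstract describes; the input size, layer widths, and training setup are assumptions rather than the authors' architecture.

```python
from tensorflow.keras import layers, models

def build_bruit_classifier(n_mels=64, n_frames=128, n_classes=5):
    """CNN front-end over a per-heartbeat spectrogram, BiLSTM over time, softmax over the
    five sound types (normal, hard, high, intermittent, whistling). Sizes are assumptions."""
    inp = layers.Input(shape=(n_frames, n_mels, 1))            # time x frequency x channel
    x = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
    x = layers.MaxPooling2D((1, 2))(x)                          # pool over frequency only
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D((1, 2))(x)
    x = layers.Reshape((n_frames, -1))(x)                       # keep the time axis for the LSTM
    x = layers.Bidirectional(layers.LSTM(64))(x)
    out = layers.Dense(n_classes, activation="softmax")(x)
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model
```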

21 pages, 2441 KiB  
Article
Comparative Analysis on Machine Learning and Deep Learning to Predict Post-Induction Hypotension
by Jihyun Lee, Jiyoung Woo, Ah Reum Kang, Young-Seob Jeong, Woohyun Jung, Misoon Lee and Sang Hyun Kim
Sensors 2020, 20(16), 4575; https://doi.org/10.3390/s20164575 - 14 Aug 2020
Cited by 22 | Viewed by 4745
Abstract
Hypotensive events in the initial stage of anesthesia can cause serious, potentially fatal complications after surgery. In this study, we aimed to predict hypotension after tracheal intubation one minute in advance using machine learning and deep learning techniques. Ensemble models, such as random forest and extreme gradient boosting (XGBoost), and deep learning models, in particular the convolutional neural network (CNN) and the deep neural network (DNN), were trained to predict hypotension occurring between tracheal intubation and incision, using data from four minutes to one minute before tracheal intubation. Vital records and electronic health records (EHR) were collected for 282 of 319 patients who underwent laparoscopic cholecystectomy from October 2018 to July 2019. Among the 282 patients, 151 developed post-induction hypotension. Our experiments covered two scenarios: using raw vital records and using features engineered from the vital records. On raw data, the CNN had the best accuracy of 72.63%, followed by random forest (70.32%) and XGBoost (64.6%). With feature engineering, random forest combined with feature selection had the best accuracy of 74.89%, while the CNN dropped to 68.95%, lower than in the raw-data experiment. Our study extends previous work on detecting hypotension with a one-minute advance warning. To improve accuracy, we built models using state-of-the-art algorithms. We found that the CNN performed well, but that random forest performed better when combined with feature selection, and that the examination period (data period) is also important. Full article
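A brief scikit-learn sketch of one of the compared baselines: engineering simple summary features from the 4-to-1-minute pre-intubation vital-sign window and cross-validating a random forest. The feature set and hyperparameters are assumptions, not the study's protocol.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def window_features(vitals):
    """Per-signal means, standard deviations, and linear trends over the pre-intubation window
    (an assumed feature set). `vitals` has shape (n_patients, n_samples, n_signals)."""
    t = np.arange(vitals.shape[1])
    slope = np.array([[np.polyfit(t, sig, 1)[0] for sig in patient.T] for patient in vitals])
    return np.concatenate([vitals.mean(axis=1), vitals.std(axis=1), slope], axis=1)

def evaluate(vitals, hypotension_labels):
    """5-fold cross-validated accuracy of a random forest on the engineered features."""
    X = window_features(vitals)
    clf = RandomForestClassifier(n_estimators=300, random_state=0)
    return cross_val_score(clf, X, hypotension_labels, cv=5, scoring="accuracy")
```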

14 pages, 419 KiB  
Article
A Hybrid Feature Selection and Extraction Methods for Sleep Apnea Detection Using Bio-Signals
by Xilin Li, Sai Ho Ling and Steven Su
Sensors 2020, 20(15), 4323; https://doi.org/10.3390/s20154323 - 03 Aug 2020
Cited by 14 | Viewed by 2990
Abstract
People with sleep apnea (SA) are at increased risk of having stroke and cardiovascular diseases. Polysomnography (PSG) is used to detect SA. This paper conducts feature selection from PSG signals and uses a support vector machine (SVM) to detect SA. To analyze SA, the Physionet Apnea Database was used to obtain various features. Electrocardiography (ECG), oxygen saturation (SaO2), airflow, abdominal, and thoracic signals were used to provide various frequency-, time-domain and non-linear features (n = 87). To analyse the significance of these features, firstly, two evaluation measures, the rank-sum method and the analysis of variance (ANOVA) were used to evaluate the significance of the features. These features were then classified according to their significance. Finally, different class feature sets were presented as inputs for an SVM classifier to detect the onset of SA. The hill-climbing feature selection algorithm and the k-fold cross-validation method were applied to evaluate each classification performance. Through the experiments, we discovered that the best feature set (including the top-five significant features) obtained the best classification performance. Furthermore, we plotted receiver operating characteristic (ROC) curves to examine the performance of the SVM, and the results showed the SVM with Linear kernel (regularization parameter = 1) outperformed other classifiers (area under curve = 95.23%, sensitivity = 94.29%, specificity = 96.17%). The results confirm that feature subsets based on multiple bio-signals have the potential to identify patients with SA. The use of a smaller subset avoids dimensionality problems and reduces the computational load. Full article
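The feature-ranking-plus-SVM step can be sketched compactly with scikit-learn; ANOVA F-scores stand in for the paper's combined rank-sum/ANOVA ranking, and the hill-climbing search is omitted.

```python
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def apnea_detector(X, y, k=5):
    """Rank the 87 candidate features by ANOVA F-score, keep the top k (the abstract's best set
    used the top five), and score a linear SVM (C = 1) with k-fold cross-validated ROC AUC."""
    pipe = make_pipeline(StandardScaler(),
                         SelectKBest(f_classif, k=k),
                         SVC(kernel="linear", C=1.0))
    return cross_val_score(pipe, X, y, cv=5, scoring="roc_auc")
```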

21 pages, 9639 KiB  
Article
ECG Biometrics Using Deep Learning and Relative Score Threshold Classification
by David Belo, Nuno Bento, Hugo Silva, Ana Fred and Hugo Gamboa
Sensors 2020, 20(15), 4078; https://doi.org/10.3390/s20154078 - 22 Jul 2020
Cited by 24 | Viewed by 4081
Abstract
Biometrics is a pattern recognition problem in which individual traits are coded, registered, and compared with other database records. Because electrocardiograms (ECG) are difficult to reproduce, their use has been emerging in the biometric field for more secure applications. Inspired by the high performance shown by deep neural networks (DNN), and to mitigate the intra-individual variability of each person's ECG, this work proposes two architectures to improve current results in both identification (finding the registered person from a sample) and authentication (proving that a person is who they claim to be): a temporal convolutional neural network (TCNN) and a recurrent neural network (RNN). Each architecture produces a similarity score, based on the prediction error of the former and the logits of the latter, which is fed to the same classifier, the Relative Score Threshold Classifier (RSTC). The robustness and applicability of these architectures were trained and tested on public databases used in the literature in this context: the Fantasia, MIT-BIH, and CYBHi databases. Results show that, overall, the TCNN outperforms the RNN, achieving almost 100%, 96%, and 90% accuracy, respectively, for identification, and 0.0%, 0.1%, and 2.2% equal error rates (EER) for authentication. Compared with previous work, both architectures reached results beyond the state of the art. Nevertheless, further improvements, such as enriching training with extra varied data and transfer learning, may provide more robust systems with a reduced time required for validation. Full article
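For readers unfamiliar with score-threshold verification, the snippet below illustrates the general idea of deciding identification and authentication from per-subject similarity scores; the relative-margin rule is only one plausible reading of the RSTC, not its published definition.

```python
import numpy as np

def identify_and_authenticate(scores, claimed_id, rel_threshold=0.2):
    """`scores[i]` is the model-derived similarity between the probe ECG and enrolled subject i
    (higher = more similar). Identification returns the best match; verification accepts the
    claim only if the claimed subject's score exceeds the best competitor by a relative margin.
    The margin rule and threshold are illustrative assumptions."""
    scores = np.asarray(scores, dtype=float)
    identified = int(np.argmax(scores))
    competitors = np.delete(scores, claimed_id)
    margin = (scores[claimed_id] - competitors.max()) / (abs(competitors.max()) + 1e-9)
    accepted = margin >= rel_threshold
    return identified, accepted
```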

15 pages, 2296 KiB  
Article
A Real Time QRS Detection Algorithm Based on ET and PD Controlled Threshold Strategy
by Aiyun Chen, Yidan Zhang, Mengxin Zhang, Wenhan Liu, Sheng Chang, Hao Wang, Jin He and Qijun Huang
Sensors 2020, 20(14), 4003; https://doi.org/10.3390/s20144003 - 18 Jul 2020
Cited by 22 | Viewed by 4282
Abstract
As one of the important components of electrocardiogram (ECG) signals, QRS signal represents the basic characteristics of ECG signals. The detection of QRS waves is also an essential step for ECG signal analysis. In order to further meet the clinical needs for the accuracy and real-time detection of QRS waves, a simple, fast, reliable, and hardware-friendly algorithm for real-time QRS detection is proposed. The exponential transform (ET) and proportional-derivative (PD) control-based adaptive threshold are designed to detect QRS-complex. The proposed ET can effectively narrow the magnitude difference of QRS peaks, and the PD control-based method can adaptively adjust the current threshold for QRS detection according to thresholds of previous two windows and predefined minimal threshold. The ECG signals from MIT-BIH databases are used to evaluate the performance of the proposed algorithm. The overall sensitivity, positive predictivity, and accuracy for QRS detection are 99.90%, 99.92%, and 99.82%, respectively. It is also implemented on Altera Cyclone V 5CSEMA5F31C6 Field Programmable Gate Array (FPGA). The time consumed for a 30-min ECG record is approximately 1.3 s. It indicates that the proposed algorithm can be used for wearable heart rate monitoring and automatic ECG analysis. Full article
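The abstract gives enough detail for a rough reconstruction of the thresholding idea: compress peak amplitudes with an exponential transform, then update a per-window threshold from the previous two thresholds and a floor value, PD-controller style. The gains, window length, and exact update law below are assumptions.

```python
import numpy as np

def exponential_transform(x):
    """One plausible reading of the ET step: a signed, saturating exponential that
    narrows the amplitude differences between QRS peaks."""
    return np.sign(x) * (1.0 - np.exp(-np.abs(x) / (np.abs(x).max() + 1e-12)))

def detect_qrs(sig, fs, win_s=1.0, kp=0.5, kd=0.25, thr_min=0.1):
    """Window-by-window peak picking with a PD-style threshold that depends on the thresholds
    of the previous two windows and a predefined minimum, as the abstract describes."""
    et = exponential_transform(np.asarray(sig, dtype=float))
    win = int(win_s * fs)
    thr_prev2 = thr_prev1 = thr_min
    peaks = []
    for start in range(0, len(et) - win, win):
        seg = et[start:start + win]
        target = 0.6 * seg.max()                                   # desired threshold for this window
        thr = max(thr_min, thr_prev1 + kp * (target - thr_prev1) + kd * (thr_prev1 - thr_prev2))
        mid = seg[1:-1]
        idx = np.flatnonzero((mid > thr) & (mid >= seg[:-2]) & (mid >= seg[2:]))  # local maxima above thr
        peaks.extend(start + 1 + idx)
        thr_prev2, thr_prev1 = thr_prev1, thr
    return np.array(peaks)
```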

20 pages, 5446 KiB  
Article
Deep Learning-Based Detection of Pigment Signs for Analysis and Diagnosis of Retinitis Pigmentosa
by Muhammad Arsalan, Na Rae Baek, Muhammad Owais, Tahir Mahmood and Kang Ryoung Park
Sensors 2020, 20(12), 3454; https://doi.org/10.3390/s20123454 - 18 Jun 2020
Cited by 18 | Viewed by 3373
Abstract
Ophthalmological analysis plays a vital role in the diagnosis of various eye diseases, such as glaucoma, retinitis pigmentosa (RP), and diabetic and hypertensive retinopathy. RP is a genetic retinal disorder that leads to progressive vision degeneration and initially causes night blindness. Currently, the most commonly applied method for diagnosing retinal diseases is optical coherence tomography (OCT)-based disease analysis. In contrast, fundus imaging-based disease diagnosis is considered a low-cost diagnostic solution for retinal diseases. This study focuses on the detection of RP from the fundus image, which is a crucial task because of the low quality of fundus images and non-cooperative image acquisition conditions. Automatic detection of pigment signs in fundus images can help ophthalmologists and medical practitioners in diagnosing and analyzing RP disorders. To accurately segment pigment signs for diagnostic purposes, we present an automatic RP segmentation network (RPS-Net), which is a specifically designed deep learning-based semantic segmentation network to accurately detect and segment the pigment signs with fewer trainable parameters. Compared with the conventional deep learning methods, the proposed method applies a feature enhancement policy through multiple dense connections between the convolutional layers, which enables the network to discriminate between normal and diseased eyes, and accurately segment the diseased area from the background. Because pigment spots can be very small and consist of very few pixels, the RPS-Net provides fine segmentation, even in the case of degraded images, by importing high-frequency information from the preceding layers through concatenation inside and outside the encoder-decoder. To evaluate the proposed RPS-Net, experiments were performed based on 4-fold cross-validation using the publicly available Retinal Images for Pigment Signs (RIPS) dataset for detection and segmentation of retinal pigments. Experimental results show that RPS-Net achieved superior segmentation performance for RP diagnosis, compared with the state-of-the-art methods. Full article
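The "multiple dense connections" idea can be illustrated with a toy Keras encoder-decoder in which every convolution in a block sees the concatenation of all earlier feature maps; this is a sketch of the concept, not RPS-Net itself, and all sizes are assumptions.

```python
from tensorflow.keras import layers, models

def dense_conv_block(x, filters, n_layers=3):
    """Densely connected convolutions: each layer receives the concatenation of all
    preceding feature maps in the block."""
    feats = [x]
    for _ in range(n_layers):
        inp = layers.Concatenate()(feats) if len(feats) > 1 else feats[0]
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(inp)
        feats.append(x)
    return layers.Concatenate()(feats)

def tiny_segmenter(input_shape=(256, 256, 3)):
    """Deliberately small encoder-decoder with one dense block per stage and a skip
    concatenation across the bottleneck; outputs a per-pixel pigment probability."""
    inp = layers.Input(shape=input_shape)
    e1 = dense_conv_block(inp, 16)
    p1 = layers.MaxPooling2D()(e1)
    b = dense_conv_block(p1, 32)
    u1 = layers.UpSampling2D()(b)
    d1 = dense_conv_block(layers.Concatenate()([u1, e1]), 16)
    out = layers.Conv2D(1, 1, activation="sigmoid")(d1)
    return models.Model(inp, out)
```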

22 pages, 2011 KiB  
Article
A Mobile Application for Smart Computer-Aided Self-Administered Testing of Cognition, Speech, and Motor Impairment
by Andrius Lauraitis, Rytis Maskeliūnas, Robertas Damaševičius and Tomas Krilavičius
Sensors 2020, 20(11), 3236; https://doi.org/10.3390/s20113236 - 06 Jun 2020
Cited by 28 | Viewed by 4018
Abstract
We present a model for digital neural impairment screening and self-assessment, which can evaluate cognitive and motor deficits for patients with symptoms of central nervous system (CNS) disorders, such as mild cognitive impairment (MCI), Parkinson’s disease (PD), Huntington’s disease (HD), or dementia. The data was collected with an Android mobile application that can track cognitive, hand tremor, energy expenditure, and speech features of subjects. We extracted 238 features as the model inputs using 16 tasks, 12 of them were based on a self-administered cognitive testing (SAGE) methodology and others used finger tapping and voice features acquired from the sensors of a smart mobile device (smartphone or tablet). Fifteen subjects were involved in the investigation: 7 patients with neurological disorders (1 with Parkinson’s disease, 3 with Huntington’s disease, 1 with early dementia, 1 with cerebral palsy, 1 post-stroke) and 8 healthy subjects. The finger tapping, SAGE, energy expenditure, and speech analysis features were used for neural impairment evaluations. The best results were achieved using a fusion of 13 classifiers for combined finger tapping and SAGE features (96.12% accuracy), and using bidirectional long short-term memory (BiLSTM) (94.29% accuracy) for speech analysis features. Full article
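The classifier-fusion step can be illustrated with a soft-voting ensemble in scikit-learn; the four base models below are stand-ins for the paper's thirteen and are chosen only to show the mechanism.

```python
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def impairment_screen(X_tapping_sage, y):
    """Soft-voting fusion over several base classifiers trained on the combined
    finger-tapping and SAGE features (members and settings are illustrative)."""
    members = [
        ("svm", make_pipeline(StandardScaler(), SVC(probability=True))),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier())),
        ("lr", make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
    ]
    fusion = VotingClassifier(members, voting="soft")
    return fusion.fit(X_tapping_sage, y)
```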

24 pages, 5644 KiB  
Article
Estimating Blood Pressure from the Photoplethysmogram Signal and Demographic Features Using Machine Learning Techniques
by Moajjem Hossain Chowdhury, Md Nazmul Islam Shuzan, Muhammad E.H. Chowdhury, Zaid B. Mahbub, M. Monir Uddin, Amith Khandakar and Mamun Bin Ibne Reaz
Sensors 2020, 20(11), 3127; https://doi.org/10.3390/s20113127 - 01 Jun 2020
Cited by 125 | Viewed by 13666
Abstract
Hypertension is a potentially dangerous health condition that can be indicated directly by the blood pressure (BP) and frequently leads to other health complications. Continuous monitoring of BP is very important; however, cuff-based BP measurements are discrete and uncomfortable for the user. To address this need, a cuff-less, continuous, and noninvasive BP measurement system is proposed that applies machine learning (ML) algorithms to the photoplethysmogram (PPG) signal and demographic features. PPG signals were acquired from 219 subjects and underwent preprocessing and feature extraction. Time-, frequency-, and time-frequency-domain features were extracted from the PPG and its derivative signals. Feature selection techniques were used to reduce the computational complexity and to decrease the chance of over-fitting the ML algorithms. The features were then used to train and evaluate the ML algorithms, and the best regression models were selected for systolic BP (SBP) and diastolic BP (DBP) estimation individually. Gaussian process regression (GPR) along with the ReliefF feature selection algorithm outperformed the other algorithms, estimating SBP and DBP with root mean square errors (RMSE) of 6.74 and 3.59, respectively. This ML model can be implemented in hardware systems to continuously monitor BP and help avoid critical health conditions caused by sudden changes. Full article
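A compact scikit-learn sketch of the regression pipeline: feature selection followed by Gaussian process regression, evaluated with cross-validated RMSE. Mutual-information selection is used here as a stand-in for ReliefF, and the kernel and feature count are assumptions.

```python
from sklearn.feature_selection import SelectKBest, mutual_info_regression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def bp_estimator(X_ppg_demo, y_bp, k=20):
    """Gaussian process regression on PPG-derived plus demographic features;
    train one model per target (SBP or DBP)."""
    model = make_pipeline(
        StandardScaler(),
        SelectKBest(mutual_info_regression, k=k),
        GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True),
    )
    # Cross-validated RMSE (scikit-learn reports losses as negative scores).
    return cross_val_score(model, X_ppg_demo, y_bp, cv=5, scoring="neg_root_mean_squared_error")
```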

15 pages, 363 KiB  
Article
Uncertainty in Blood Pressure Measurement Estimated Using Ensemble-Based Recursive Methodology
by Soojeong Lee, Hilmi R Dajani, Sreeraman Rajan, Gangseong Lee and Voicu Z Groza
Sensors 2020, 20(7), 2108; https://doi.org/10.3390/s20072108 - 08 Apr 2020
Cited by 10 | Viewed by 2583
Abstract
Automated oscillometric blood pressure monitors are commonly used to measure blood pressure for many patients at home, office, and medical centers, and they have been actively studied recently. These devices usually provide a single blood pressure point and they are not able to indicate the uncertainty of the measured quantity. We propose a new technique using an ensemble-based recursive methodology to measure uncertainty for oscillometric blood pressure measurements. There are three stages we consider: the first stage is pre-learning to initialize good parameters using the bagging technique. In the second stage, we fine-tune the parameters using the ensemble-based recursive methodology that is used to accurately estimate blood pressure and then measure the uncertainty for the systolic blood pressure and diastolic blood pressure in the third stage. Full article
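Bagging already gives a simple route to an uncertainty estimate, which the sketch below illustrates by reporting the spread of predictions across ensemble members; the paper's recursive fine-tuning stage is not reproduced here.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

def bp_with_uncertainty(X_oscillometric, y_bp, X_new):
    """Fit many bootstrap regressors, then report the mean prediction and the
    standard deviation across ensemble members as an uncertainty estimate."""
    ensemble = BaggingRegressor(DecisionTreeRegressor(), n_estimators=100, random_state=0)
    ensemble.fit(X_oscillometric, y_bp)
    preds = np.stack([est.predict(X_new) for est in ensemble.estimators_])
    return preds.mean(axis=0), preds.std(axis=0)   # estimate and its uncertainty
```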

12 pages, 7694 KiB  
Article
Save Muscle Information–Unfiltered EEG Signal Helps Distinguish Sleep Stages
by Gi-Ren Liu, Caroline Lustenberger, Yu-Lun Lo, Wen-Te Liu, Yuan-Chung Sheu and Hau-Tieng Wu
Sensors 2020, 20(7), 2024; https://doi.org/10.3390/s20072024 - 03 Apr 2020
Cited by 4 | Viewed by 3027
Abstract
Based on well-established biopotential theory, we hypothesize that the high-frequency spectral content (above roughly 100 Hz) of the EEG signal recorded by an off-the-shelf EEG sensor contains muscle-tone information. We show that an existing automatic sleep stage annotation algorithm can be improved by taking this information into account. This result suggests that, where possible, the EEG signal should be sampled at a high rate to preserve as much spectral information as possible. Full article
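Extracting the hypothesized muscle-tone information is straightforward once the raw EEG is kept unfiltered: estimate the power spectral density and integrate the band above 100 Hz, as in the sketch below (the band edges are assumptions and must stay below the Nyquist frequency).

```python
import numpy as np
from scipy.signal import welch

def high_freq_power(eeg, fs, band=(100.0, 250.0)):
    """Band power above 100 Hz from an unfiltered EEG channel, usable as an
    EMG-like muscle-tone feature for a sleep stager."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.trapz(psd[mask], freqs[mask])
```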

21 pages, 11078 KiB  
Article
Detection of Early Signs of Diabetic Retinopathy Based on Textural and Morphological Information in Fundus Images
by Adrián Colomer, Jorge Igual and Valery Naranjo
Sensors 2020, 20(4), 1005; https://doi.org/10.3390/s20041005 - 13 Feb 2020
Cited by 61 | Viewed by 5949
Abstract
The number of blind people in the world is estimated to exceed 40 million by 2025. It is therefore necessary to develop novel algorithms, based on fundus image descriptors, that allow the automatic classification of retinal tissue into healthy and pathological at early stages. In this paper, we focus on one of the most common pathologies today: diabetic retinopathy. The proposed method avoids the need for lesion segmentation or candidate map generation before the classification stage. Local binary patterns and granulometric profiles are computed locally to extract texture and morphological information from retinal images. Different combinations of this information feed classification algorithms to optimally discriminate bright and dark lesions from healthy tissue. Through several experiments, the ability of the proposed system to identify diabetic retinopathy signs is validated using different public databases with a large degree of variability and without image exclusion. Full article
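A short scikit-image sketch of the two descriptor families the abstract names, a uniform LBP histogram and a granulometric profile from morphological openings, computed on a single image patch; the radii, bin counts, and green-channel choice are assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from skimage.morphology import disk, opening

def texture_morphology_features(green_channel, lbp_radius=3, max_disk=7):
    """Uniform LBP histogram plus a granulometric profile (image mass removed by
    morphological openings with growing discs) for one fundus-image patch."""
    n_points = 8 * lbp_radius
    lbp = local_binary_pattern(green_channel, n_points, lbp_radius, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=n_points + 2, range=(0, n_points + 2), density=True)

    granulometry = []
    for r in range(1, max_disk + 1):
        opened = opening(green_channel, disk(r))
        granulometry.append(float(green_channel.sum() - opened.sum()))  # mass removed at scale r
    return np.concatenate([lbp_hist, np.asarray(granulometry)])
```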

Review

18 pages, 1752 KiB  
Review
Advanced Diabetes Management Using Artificial Intelligence and Continuous Glucose Monitoring Sensors
by Martina Vettoretti, Giacomo Cappon, Andrea Facchinetti and Giovanni Sparacino
Sensors 2020, 20(14), 3870; https://doi.org/10.3390/s20143870 - 10 Jul 2020
Cited by 48 | Viewed by 8716
Abstract
Wearable continuous glucose monitoring (CGM) sensors are revolutionizing the treatment of type 1 diabetes (T1D). These sensors provide in real-time, every 1–5 min, the current blood glucose concentration and its rate-of-change, two key pieces of information for improving the determination of exogenous insulin administration and the prediction of forthcoming adverse events, such as hypo-/hyper-glycemia. The current research in diabetes technology is putting considerable effort into developing decision support systems for patient use, which automatically analyze the patient’s data collected by CGM sensors and other portable devices, as well as providing personalized recommendations about therapy adjustments to patients. Due to the large amount of data collected by patients with T1D and their variety, artificial intelligence (AI) techniques are increasingly being adopted in these decision support systems. In this paper, we review the state-of-the-art methodologies using AI and CGM sensors for decision support in advanced T1D management, including techniques for personalized insulin bolus calculation, adaptive tuning of bolus calculator parameters and glucose prediction. Full article
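As background for the bolus-calculation methods the review covers, the snippet below shows the textbook bolus-calculator arithmetic that such decision-support systems adapt; the parameter values are per-patient clinical settings, and this is not any specific product's algorithm.

```python
def suggested_bolus(carbs_g, glucose, target, carb_ratio, correction_factor, iob=0.0):
    """Standard bolus-calculator formula: meal insulin plus a correction toward the glucose
    target, minus insulin still on board. Units: grams, mg/dL, g/U, mg/dL per U, U."""
    meal = carbs_g / carb_ratio
    correction = max(0.0, (glucose - target) / correction_factor)
    return max(0.0, meal + correction - iob)

# Example: 60 g of carbohydrate, glucose 180 mg/dL, target 110 mg/dL,
# CR = 10 g/U, CF = 50 mg/dL per U, 0.5 U on board -> 6.9 U suggested.
print(suggested_bolus(60, 180, 110, 10, 50, iob=0.5))
```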
