Biosignal Sensing and Analysis for Healthcare Monitoring

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Biomedical Sensors".

Deadline for manuscript submissions: closed (20 September 2023) | Viewed by 24565

Special Issue Editors

Guest Editor
Dr. Yunfeng Wu
School of Informatics, Xiamen University, Xiamen 361005, China
Interests: biomedical signal processing; artificial intelligence; data mining

Guest Editor
Prof. Dr. Behnaz Ghoraani
Department of Electrical Engineering and Computer Science, Florida Atlantic University, Boca Raton, FL 33431-0991, USA
Interests: biosignal processing; gait analysis; cardiovascular engineering; speech data analysis

Special Issue Information

Dear Colleagues,

Medical applications are increasingly powered by advanced IoT hardware, sensor technology, and big data analysis tools, with wearable and implantable sensors offering multi-modal biomedical signals that reflect changes in physiological processes. Novel biological signal processing and big data analysis technologies can effectively improve the performance of IoT circuit boards with multi-task chips for practical healthcare applications.

This Special Issue aims to collect original contributions or critical reviews focusing on biological signal acquisition, processing, and analysis for healthcare monitoring and computer-aided diagnosis. The topics include, but are not limited to, sensor signal filtering, amplification, and decomposition; computational feature extraction; physiological activity monitoring; clinical data analysis and interpretation; and machine learning algorithms for healthcare applications.

Topics of interest include, but are not restricted to:

  • Biomedical signal processing and analysis;
  • Data interpretation for biological processes;
  • Sensor data fusion for multi-modal devices;
  • Wearable or implantable sensor integration;
  • Low-cost IoT applications for healthcare monitoring;
  • Biological signal decomposition and reconstruction;
  • Feature extraction and computing for ECG, EMG, EEG, PPG, and other physiological signals;
  • Machine learning tools for healthcare data analysis;
  • Artificial intelligence for computer-aided diagnosis;
  • Deep learning algorithms for retinal OCT image analysis.

Dr. Yunfeng Wu
Prof. Dr. Behnaz Ghoraani
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (12 papers)


Editorial


3 pages, 163 KiB  
Editorial
Biological Signal Processing and Analysis for Healthcare Monitoring
by Yunfeng Wu and Behnaz Ghoraani
Sensors 2022, 22(14), 5341; https://0-doi-org.brum.beds.ac.uk/10.3390/s22145341 - 18 Jul 2022
Cited by 4 | Viewed by 2157
Abstract
Nowadays, portable and wireless wearable sensors have been commonly incorporated into the signal acquisition modules of healthcare monitoring systems [...] Full article
(This article belongs to the Special Issue Biosignal Sensing and Analysis for Healthcare Monitoring)

Research


21 pages, 1991 KiB  
Article
A Deep Learning Framework for Anesthesia Depth Prediction from Drug Infusion History
by Mingjin Chen, Yongkang He and Zhijing Yang
Sensors 2023, 23(21), 8994; https://0-doi-org.brum.beds.ac.uk/10.3390/s23218994 - 06 Nov 2023
Viewed by 1169
Abstract
In the target-controlled infusion (TCI) of propofol and remifentanil intravenous anesthesia, accurate prediction of the depth of anesthesia (DOA) is very challenging. Patients with different physiological characteristics have inconsistent pharmacodynamic responses during different stages of anesthesia. For example, in TCI, older adults transition smoothly from the induction period to the maintenance period, while younger adults are more prone to anesthetic awareness, resulting in different DOA data distributions among patients. To address these problems, a deep learning framework that incorporates domain adaptation and knowledge distillation and uses propofol and remifentanil doses at historical moments to continuously predict the bispectral index (BIS) is proposed in this paper. Specifically, a modified adaptive recurrent neural network (AdaRNN) is adopted to address data distribution differences among patients. Moreover, a knowledge distillation pipeline is developed to train the prediction network by enabling it to learn intermediate feature representations of the teacher network. The experimental results show that our method exhibits better performance than existing approaches during all anesthetic phases in the TCI of propofol and remifentanil intravenous anesthesia. In particular, our method outperforms some state-of-the-art methods in terms of root mean square error and mean absolute error by 1 and 0.8, respectively, in the internal dataset as well as in the publicly available dataset. Full article
(This article belongs to the Special Issue Biosignal Sensing and Analysis for Healthcare Monitoring)
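
As a rough orientation to the task described above, the sketch below (a minimal PyTorch example with placeholder data and hypothetical tensor shapes, not the authors' AdaRNN or knowledge-distillation pipeline) shows how a plain recurrent network can regress a BIS value from a window of propofol and remifentanil infusion history.

```python
# Minimal sketch of the underlying sequence-regression task: predict the
# bispectral index (BIS) from a short history of [propofol, remifentanil]
# infusion rates. Placeholder data; not the paper's AdaRNN/distillation model.
import torch
import torch.nn as nn

class DoseToBIS(nn.Module):
    def __init__(self, n_drugs=2, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(input_size=n_drugs, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, dose_history):          # dose_history: (batch, time, n_drugs)
        _, h = self.rnn(dose_history)         # h: (1, batch, hidden), last hidden state
        return self.head(h[-1]).squeeze(-1)   # predicted BIS, shape (batch,)

model = DoseToBIS()
doses = torch.randn(8, 180, 2)                # hypothetical: 180 past time steps per patient
bis_pred = model(doses)
loss = nn.functional.mse_loss(bis_pred, torch.full((8,), 50.0))
loss.backward()
```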

14 pages, 3758 KiB  
Article
Enhancing Electroretinogram Classification with Multi-Wavelet Analysis and Visual Transformer
by Mikhail Kulyabin, Aleksei Zhdanov, Anton Dolganov, Mikhail Ronkin, Vasilii Borisov and Andreas Maier
Sensors 2023, 23(21), 8727; https://0-doi-org.brum.beds.ac.uk/10.3390/s23218727 - 26 Oct 2023
Viewed by 1342
Abstract
The electroretinogram (ERG) is a clinical test that records the retina’s electrical response to light. Analysis of the ERG signal offers a promising way to study different retinal diseases and disorders. Machine learning-based methods are expected to play a pivotal role in achieving the goals of retinal diagnostics and treatment control. This study aims to improve the classification accuracy of the previous work using the combination of three optimal mother wavelet functions. We apply Continuous Wavelet Transform (CWT) on a dataset of mixed pediatric and adult ERG signals and show the possibility of simultaneous analysis of the signals. The modern Visual Transformer-based architectures are tested on a time-frequency representation of the signals. The method provides 88% classification accuracy for Maximum 2.0 ERG, 85% for Scotopic 2.0, and 91% for Photopic 2.0 protocols, which on average improves the result by 7.6% compared to previous work. Full article
(This article belongs to the Special Issue Biosignal Sensing and Analysis for Healthcare Monitoring)
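
The time-frequency front end described above can be approximated with PyWavelets; the sketch below (with illustrative mother wavelets and scale range, not necessarily the three optimal wavelets selected in the study) stacks three CWT scalograms into an image-like tensor suitable for a Visual-Transformer-style classifier.

```python
# Sketch of turning an ERG trace into a multi-channel time-frequency image.
# The three mother wavelets below are illustrative; the study selects its own
# optimal set for the task.
import numpy as np
import pywt

def erg_to_scalogram(signal, scales=np.arange(1, 65), wavelets=("mexh", "morl", "gaus8")):
    channels = []
    for w in wavelets:
        coef, _ = pywt.cwt(signal, scales, w)   # coef: (len(scales), len(signal))
        channels.append(np.abs(coef))
    return np.stack(channels, axis=0)           # (3, scales, time), image-like

erg = np.random.randn(512)                       # placeholder ERG waveform
image = erg_to_scalogram(erg)
print(image.shape)                               # (3, 64, 512)
```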

16 pages, 2804 KiB  
Article
Breast Ultrasound Images Augmentation and Segmentation Using GAN with Identity Block and Modified U-Net 3+
by Meshrif Alruily, Wael Said, Ayman Mohamed Mostafa, Mohamed Ezz and Mahmoud Elmezain
Sensors 2023, 23(20), 8599; https://0-doi-org.brum.beds.ac.uk/10.3390/s23208599 - 20 Oct 2023
Cited by 1 | Viewed by 1263
Abstract
Breast cancer is one of the most prevalent diseases affecting women in recent years. Early detection can support treatment, lower the risk, and improve outcomes. This paper presents a hybrid approach for augmenting and segmenting breast cancer ultrasound images. The framework contains two main stages: augmentation and segmentation. The ultrasound images are augmented using a generative adversarial network (GAN) with a nonlinear identity block, label smoothing, and a new loss function, and are segmented with a modified U-Net 3+. The hybrid approach achieves efficient results in both steps compared with other available methods for the same task. In the augmentation process, the GAN with the nonlinear identity block outperforms other modified GANs, such as speckle GAN, UltraGAN, and deep convolutional GAN, and the modified U-Net 3+ likewise outperforms other U-Net architectures in the segmentation process. The GAN with nonlinear identity blocks achieved an inception score of 14.32 and a Fréchet inception distance (FID) of 41.86 in the augmentation process; the lower FID and higher inception score demonstrate the model's efficiency compared with other GAN variants. The modified U-Net 3+ architecture achieved a Dice Score of 95.49% and an Accuracy of 95.67%. Full article
(This article belongs to the Special Issue Biosignal Sensing and Analysis for Healthcare Monitoring)
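
For reference, the two segmentation metrics quoted above can be computed on binary masks as in the following minimal NumPy sketch (the example masks are random placeholders).

```python
# Minimal sketch of the two reported segmentation metrics on binary masks
# (1 = lesion pixel, 0 = background).
import numpy as np

def dice_score(pred, target, eps=1e-7):
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def pixel_accuracy(pred, target):
    return np.mean(pred.astype(bool) == target.astype(bool))

# Hypothetical masks for illustration only.
pred = np.random.rand(256, 256) > 0.5
gt = np.random.rand(256, 256) > 0.5
print(dice_score(pred, gt), pixel_accuracy(pred, gt))
```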

21 pages, 4089 KiB  
Article
On the Early and Affordable Diagnosis of Joint Pathologies Using Acoustic Emissions, Deep Learning Decompositions and Prediction Machines
by Ejay Nsugbe, Khadijat Olorunlambe and Karl Dearn
Sensors 2023, 23(9), 4449; https://0-doi-org.brum.beds.ac.uk/10.3390/s23094449 - 02 May 2023
Cited by 1 | Viewed by 1337
Abstract
Joints in human beings are prone to wear and several pathologies, particularly in the elderly and in athletes. Current means of assessing the overall condition of a joint for a pathology involve tools such as X-ray and magnetic resonance imaging, to name a couple. These expensive methods are of limited availability in resource-constrained environments and pose the risk of radiation exposure to the patient. Acoustic emissions (AEs) present a modality that can monitor the joints' condition passively by recording the high-frequency stress waves emitted during their motion. One of the main challenges associated with this sensing method is decoding and linking the acquired AE signals to their source event. In this paper, we investigate the use of AEs to identify five kinds of joint-wear pathologies using a contrast of expert-based handcrafted features and unsupervised feature learning via deep wavelet decomposition (DWS), alongside 12 machine learning models. The results showed an average classification accuracy of 90 ± 7.16% and 97 ± 3.77% for the handcrafted and DWS-based features, respectively, implying good prediction accuracies across the various devised approaches. Subsequent work will involve the potential application of regression towards estimating the stage and extent of a wear condition where present, which can form part of an online system for the condition monitoring of joints in human beings. Full article
(This article belongs to the Special Issue Biosignal Sensing and Analysis for Healthcare Monitoring)
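
As a hedged illustration of the "expert-based handcrafted features" side of the comparison, the sketch below extracts a few generic time- and frequency-domain descriptors from an AE burst; the feature set, burst length, and sampling rate are assumptions, not the study's exact configuration.

```python
# Sketch of typical handcrafted descriptors for an acoustic emission (AE) burst.
# Illustrative feature set only; not the study's exact features.
import numpy as np
from scipy.stats import kurtosis, skew

def ae_features(burst, fs):
    rms = np.sqrt(np.mean(burst ** 2))
    peak = np.max(np.abs(burst))
    spectrum = np.abs(np.fft.rfft(burst))
    freqs = np.fft.rfftfreq(burst.size, d=1.0 / fs)
    centroid = np.sum(freqs * spectrum) / np.sum(spectrum)   # spectral centroid (Hz)
    return {
        "rms": rms,
        "peak": peak,
        "crest_factor": peak / rms,
        "kurtosis": kurtosis(burst),
        "skewness": skew(burst),
        "spectral_centroid_hz": centroid,
    }

burst = np.random.randn(4096)                    # placeholder AE burst
print(ae_features(burst, fs=1_000_000))          # e.g. 1 MHz sampling (assumed)
```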

16 pages, 2581 KiB  
Article
A Multimodal Feature Fusion Framework for Sleep-Deprived Fatigue Detection to Prevent Accidents
by Jitender Singh Virk, Mandeep Singh, Mandeep Singh, Usha Panjwani and Koushik Ray
Sensors 2023, 23(8), 4129; https://0-doi-org.brum.beds.ac.uk/10.3390/s23084129 - 20 Apr 2023
Cited by 1 | Viewed by 1513
Abstract
A sleep-deprived, fatigued person is likely to commit more errors that may even prove fatal, so it is necessary to recognize this fatigue. The novelty of the proposed work is that fatigue detection is nonintrusive and based on multimodal feature fusion. In the proposed methodology, fatigue is detected by obtaining features from four domains: visual images, thermal images, keystroke dynamics, and voice. Samples of a volunteer (subject) are obtained from all four domains for feature extraction, and empirical weights are assigned to the four domains. Young, healthy volunteers (n = 60) aged 20 to 30 years participated in the experimental study and abstained from alcohol, caffeine, or other drugs affecting their sleep pattern during the study. Through this multimodal technique, appropriate weights are given to the features obtained from the four domains. The results are compared with k-nearest neighbors (kNN), support vector machine (SVM), random tree, random forest, and multilayer perceptron classifiers. The proposed nonintrusive technique obtained an average detection accuracy of 93.33% in 3-fold cross-validation. Full article
(This article belongs to the Special Issue Biosignal Sensing and Analysis for Healthcare Monitoring)
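
The weighted fusion step can be pictured with the small sketch below; the domain weights, score values, and 0.5 decision threshold are illustrative placeholders, not the empirical weights used in the study.

```python
# Sketch of empirically weighted score-level fusion across the four domains
# named above. Weights and scores are placeholders for illustration.
import numpy as np

DOMAIN_WEIGHTS = {"visual": 0.35, "thermal": 0.25, "keystroke": 0.20, "voice": 0.20}

def fuse_fatigue_scores(scores):
    """scores: dict mapping domain -> fatigue probability in [0, 1]."""
    return sum(DOMAIN_WEIGHTS[d] * scores[d] for d in DOMAIN_WEIGHTS)

scores = {"visual": 0.8, "thermal": 0.6, "keystroke": 0.7, "voice": 0.5}
fused = fuse_fatigue_scores(scores)
print("fatigued" if fused >= 0.5 else "alert", round(fused, 3))
```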

13 pages, 4319 KiB  
Article
Optical Measurement of Molar Absorption Coefficient of HbA1c: Comparison of Theoretical and Experimental Results
by Shifat Hossain, Shama Satter, Tae-Ho Kwon and Ki-Doo Kim
Sensors 2022, 22(21), 8179; https://0-doi-org.brum.beds.ac.uk/10.3390/s22218179 - 25 Oct 2022
Cited by 2 | Viewed by 2162
Abstract
Diabetes can cause dangerous complications if not diagnosed in a timely manner. The World Health Organization accepts glycated hemoglobin (HbA1c) as a measure for diagnosing diabetes, as it provides significantly more information on glycemic behavior from a single blood sample than a fasting blood sugar reading. The molar absorption coefficient of HbA1c is needed to quantify the amount of HbA1c present in a blood sample. In this study, we experimentally measured the molar absorption coefficient of HbA1c in the range of 450 nm to 700 nm using optical methods. We observed that the characteristic peaks of the molar absorption coefficient of HbA1c (at 545 nm and 579 nm for level 1, and at 544 nm and 577 nm for level 2) are in close agreement with those reported in previous studies. The molar absorption coefficient values were also found to be close to those of earlier reports. The average molar absorption coefficient values of HbA1c were found to be 804,403.5 M−1 cm−1 at 545 nm and 703,704.5 M−1 cm−1 at 579 nm for level 1, as well as 503,352.4 M−1 cm−1 at 544 nm and 476,344.6 M−1 cm−1 at 577 nm for level 2. Our experiments focused on calculating the molar absorption coefficients of HbA1c in the visible wavelength region, and the proposed experimental method has the advantage of being able to easily obtain the molar absorption coefficient at any wavelength in the visible region. The results of this study are expected to help future investigations on noninvasive methods of estimating HbA1c levels. Full article
(This article belongs to the Special Issue Biosignal Sensing and Analysis for Healthcare Monitoring)
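
The quantity being measured follows directly from the Beer-Lambert law, A = εcl, so ε = A/(cl) with c in mol/L and l in cm; the sketch below uses placeholder numbers merely to illustrate the unit handling, not the study's measurements.

```python
# Sketch of the Beer-Lambert relation used to recover a molar absorption
# coefficient from a measured absorbance: epsilon = A / (c * l).
# The numbers below are placeholders, not the study's measurements.
def molar_absorption_coefficient(absorbance, concentration_molar, path_length_cm):
    return absorbance / (concentration_molar * path_length_cm)

# e.g. absorbance 0.80 at 545 nm, 1.0 micromolar HbA1c, 1 cm cuvette (assumed)
eps = molar_absorption_coefficient(0.80, 1.0e-6, 1.0)
print(f"{eps:,.1f} M^-1 cm^-1")                  # 800,000.0 M^-1 cm^-1
```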

26 pages, 9730 KiB  
Article
Adaptive Filtering for the Maternal Respiration Signal Attenuation in the Uterine Electromyogram
by Daniela Martins, Arnaldo Batista, Helena Mouriño, Sara Russo, Filipa Esgalhado, Catarina R. Palma dos Reis, Fátima Serrano and Manuel Ortigueira
Sensors 2022, 22(19), 7638; https://0-doi-org.brum.beds.ac.uk/10.3390/s22197638 - 09 Oct 2022
Cited by 2 | Viewed by 1834
Abstract
The electrohysterogram (EHG) is the uterine muscle electromyogram recorded at the abdominal surface of a pregnant or non-pregnant woman. The maternal respiration electromyographic signal (MR-EMG) is one of the most relevant interferences present in an EHG. Alvarez (Alv) waves are components of the EHG that have been indicated as having potential for preterm and term birth prediction. The MR-EMG component in the EHG is therefore an issue for Alv wave applications in pregnancy monitoring, for instance in preterm birth prediction, a subject of great research interest. The Alv wave denoising method should thus be designed to attenuate the MR-EMG interference without compromising the original waves. Adaptive filter properties make them suitable for this task; however, selecting the optimal adaptive filter and its parameters is essential for the success of the filtering operation. In this work, an algorithm is presented for automatic adaptive filter and parameter selection using synthetic data. The filter selection pool comprised sixteen candidates, from which the Wiener, recursive least squares (RLS), householder recursive least squares (HRLS), and QR-decomposition recursive least squares (QRD-RLS) filters were the best performers. The optimized parameters were L = 2 (filter length) for all of them and λ = 1 (forgetting factor) for the last three. The developed optimization algorithm may be of interest for other applications. The optimized filters were applied to real data, resulting in the attenuation of the MR-EMG power in the Alv waves. For the Wiener filter, power reductions for quartile 1, the median, and quartile 3 were found to be −16.74%, −20.32%, and −15.78%, respectively (p-value = 1.31 × 10−12). Full article
(This article belongs to the Special Issue Biosignal Sensing and Analysis for Healthcare Monitoring)
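
For readers who want to experiment, the sketch below implements a basic recursive least squares (RLS) noise canceller with the reported settings (filter length L = 2, forgetting factor λ = 1); the EHG and respiration reference signals are synthetic placeholders, and the implementation is a generic textbook RLS rather than the paper's optimized pipeline.

```python
# Minimal RLS noise canceller in the spirit of the setup above (L = 2, lambda = 1).
# Synthetic EHG and respiration reference; not the paper's optimized pipeline.
import numpy as np

def rls_cancel(primary, reference, L=2, lam=1.0, delta=0.01):
    w = np.zeros(L)
    P = np.eye(L) / delta
    cleaned = np.zeros_like(primary)
    for n in range(L - 1, len(primary)):
        x = reference[n - L + 1:n + 1][::-1]      # most recent L reference samples
        k = P @ x / (lam + x @ P @ x)             # gain vector
        e = primary[n] - w @ x                    # primary minus estimated interference
        w = w + k * e
        P = (P - np.outer(k, x @ P)) / lam
        cleaned[n] = e                            # cleaned (denoised) output sample
    return cleaned

fs = 200
t = np.arange(0, 60, 1 / fs)
resp = np.sin(2 * np.pi * 0.3 * t)                # respiration reference (~0.3 Hz)
ehg = 0.2 * np.random.randn(t.size) + 0.8 * resp  # EHG contaminated by respiration
print(np.std(ehg), np.std(rls_cancel(ehg, resp)))
```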

11 pages, 975 KiB  
Article
A Computational Model for Evaluating Transient Auditory Storage of Acoustic Features in Normal Listeners
by Nannan Zong and Meihong Wu
Sensors 2022, 22(13), 5033; https://0-doi-org.brum.beds.ac.uk/10.3390/s22135033 - 04 Jul 2022
Cited by 1 | Viewed by 1535
Abstract
Humans are able to detect an instantaneous change in correlation, demonstrating an ability to temporally process extremely rapid changes in interaural configurations. This temporal dynamic is correlated with human listeners’ ability to store acoustic features in a transient auditory manner. The present study investigated whether the ability of transient auditory storage of acoustic features was affected by the interaural delay, which was assessed by measuring the sensitivity for detecting the instantaneous change in correlation for both wideband and narrowband correlated noise with various interaural delays. Furthermore, whether an instantaneous change in correlation between correlated interaural narrowband or wideband noise was detectable when introducing the longest interaural delay was investigated. Then, an auditory computational description model was applied to explore the relationship between wideband and narrowband simulation noise with various center frequencies in the auditory processes of lower-level transient memory of acoustic features. The computing results indicate that low-frequency information dominated perception and was more distinguishable in length than the high-frequency components, and the longest interaural delay for narrowband noise signals was highly correlated with that for wideband noise signals in the dynamic process of auditory perception. Full article
(This article belongs to the Special Issue Biosignal Sensing and Analysis for Healthcare Monitoring)
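
The core quantity behind the listening task, the normalized interaural correlation of the two ear signals after compensating a known interaural delay, can be sketched as follows; the delay, duration, and wideband-noise stimulus are illustrative assumptions only.

```python
# Sketch of normalized interaural correlation after compensating a known
# interaural delay. Delay, duration, and stimulus are illustrative only.
import numpy as np

def interaural_correlation(left, right, delay_samples):
    r = right[delay_samples:]                 # undo the interaural delay
    l = left[:r.size]
    l, r = l - l.mean(), r - r.mean()
    return np.sum(l * r) / np.sqrt(np.sum(l ** 2) * np.sum(r ** 2))

fs = 44100
delay = int(0.002 * fs)                                  # 2 ms interaural delay
left = np.random.randn(fs)                               # 1 s of wideband noise
right = np.concatenate([np.zeros(delay), left])[:fs]     # delayed copy at the right ear
print(interaural_correlation(left, right, delay))        # ~1.0 (fully correlated)
```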

14 pages, 2119 KiB  
Article
A Single Subject, Feasibility Study of Using a Non-Contact Measurement to “Visualize” Temperature at Body-Seat Interface
by Zhuofu Liu, Vincenzo Cascioli and Peter W. McCarthy
Sensors 2022, 22(10), 3941; https://0-doi-org.brum.beds.ac.uk/10.3390/s22103941 - 23 May 2022
Cited by 3 | Viewed by 1642
Abstract
Measuring temperature changes at the body-seat interface has been drawing increased attention from both industrial and scientific fields, due to the increasingly sedentary nature of daily life, from leisure activity to routine work. Although contact measurement is considered the gold standard, it can affect the local micro-environment and the perception of sitting comfort. A non-contact temperature measurement system was developed to determine the interface temperature using data gathered unobtrusively and continuously from an infrared sensor (IRs). System performance was evaluated regarding linearity, hysteresis, reliability, and accuracy. A healthy participant then sat for an hour on low- and intermediate-density foams with thicknesses varying from 0.5 to 8 cm while the body-seat interface temperature was measured simultaneously using a temperature sensor (contact) and an IRs (non-contact). The IRs data were filtered with empirical mode decomposition and fractal scaling indices before a data-driven artificial neural network was utilized to estimate the contact surface temperature. A strong correlation existed between the non-contact and contact temperature measurements (ρ > 0.85), and the estimation results showed a low root mean square error (RMSE) (<0.07 for the low-density foam and <0.16 for the intermediate-density foam) and high Nash-Sutcliffe efficiency (NSE) values (≈1 for both types of foam material). Full article
(This article belongs to the Special Issue Biosignal Sensing and Analysis for Healthcare Monitoring)
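
The two agreement metrics reported above (RMSE and Nash-Sutcliffe efficiency) are defined as in the following sketch, which compares contact readings with IRs-based estimates on hypothetical data.

```python
# Sketch of the two agreement metrics reported above, on hypothetical data.
import numpy as np

def rmse(observed, estimated):
    observed, estimated = np.asarray(observed), np.asarray(estimated)
    return np.sqrt(np.mean((observed - estimated) ** 2))

def nash_sutcliffe(observed, estimated):
    observed, estimated = np.asarray(observed), np.asarray(estimated)
    return 1.0 - np.sum((observed - estimated) ** 2) / np.sum((observed - observed.mean()) ** 2)

# Hypothetical 1-hour sitting session sampled once per minute.
contact = 30 + 4 * (1 - np.exp(-np.arange(60) / 15.0))   # warming toward ~34 °C
estimate = contact + np.random.normal(0, 0.05, size=60)  # small estimation error
print(rmse(contact, estimate), nash_sutcliffe(contact, estimate))
```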

Review


21 pages, 3545 KiB  
Review
A Review of Machine Learning Algorithms for Retinal Cyst Segmentation on Optical Coherence Tomography
by Xing Wei and Ruifang Sui
Sensors 2023, 23(6), 3144; https://0-doi-org.brum.beds.ac.uk/10.3390/s23063144 - 15 Mar 2023
Cited by 4 | Viewed by 2363
Abstract
Optical coherence tomography (OCT) is an emerging imaging technique for diagnosing ophthalmic diseases and the visual analysis of retinal structure changes, such as exudates, cysts, and fluid. In recent years, researchers have increasingly focused on applying machine learning algorithms, including classical machine learning and deep learning methods, to automate retinal cysts/fluid segmentation. These automated techniques can provide ophthalmologists with valuable tools for improved interpretation and quantification of retinal features, leading to more accurate diagnosis and informed treatment decisions for retinal diseases. This review summarized the state-of-the-art algorithms for the three essential steps of cyst/fluid segmentation: image denoising, layer segmentation, and cyst/fluid segmentation, while emphasizing the significance of machine learning techniques. Additionally, we provided a summary of the publicly available OCT datasets for cyst/fluid segmentation. Furthermore, the challenges, opportunities, and future directions of artificial intelligence (AI) in OCT cyst segmentation are discussed. This review is intended to summarize the key parameters for the development of a cyst/fluid segmentation system and the design of novel segmentation algorithms and has the potential to serve as a valuable resource for imaging researchers in the development of assessment systems related to ocular diseases exhibiting cyst/fluid in OCT imaging. Full article
(This article belongs to the Special Issue Biosignal Sensing and Analysis for Healthcare Monitoring)

21 pages, 9135 KiB  
Review
Recent Advanced Deep Learning Architectures for Retinal Fluid Segmentation on Optical Coherence Tomography Images
by Mengchen Lin, Guidong Bao, Xiaoqian Sang and Yunfeng Wu
Sensors 2022, 22(8), 3055; https://0-doi-org.brum.beds.ac.uk/10.3390/s22083055 - 15 Apr 2022
Cited by 9 | Viewed by 4770
Abstract
With non-invasive and high-resolution properties, optical coherence tomography (OCT) has been widely used as a retinal imaging modality for the effective diagnosis of ophthalmic diseases. The retinal fluid is often segmented by medical experts as a pivotal biomarker to assist in the clinical diagnosis of age-related macular diseases, diabetic macular edema, and retinal vein occlusion. In recent years, the advanced machine learning methods, such as deep learning paradigms, have attracted more and more attention from academia in the retinal fluid segmentation applications. The automatic retinal fluid segmentation based on deep learning can improve the semantic segmentation accuracy and efficiency of macular change analysis, which has potential clinical implications for ophthalmic pathology detection. This article summarizes several different deep learning paradigms reported in the up-to-date literature for the retinal fluid segmentation in OCT images. The deep learning architectures include the backbone of convolutional neural network (CNN), fully convolutional network (FCN), U-shape network (U-Net), and the other hybrid computational methods. The article also provides a survey on the prevailing OCT image datasets used in recent retinal segmentation investigations. The future perspectives and some potential retinal segmentation directions are discussed in the concluding context. Full article
(This article belongs to the Special Issue Biosignal Sensing and Analysis for Healthcare Monitoring)
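
As a very small illustration of the U-shaped encoder-decoder pattern shared by the surveyed architectures, the PyTorch sketch below builds a one-stage network with a single skip connection; real retinal-fluid segmentation networks are far deeper, and this is not any specific model from the review.

```python
# Minimal U-shaped encoder-decoder sketch (one downsampling stage, one skip
# connection) to illustrate the pattern; far smaller than the surveyed models.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.enc = conv_block(in_ch, 16)
        self.down = nn.MaxPool2d(2)
        self.bottleneck = conv_block(16, 32)
        self.up = nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2)
        self.dec = conv_block(32, 16)             # 16 (skip) + 16 (upsampled) channels in
        self.head = nn.Conv2d(16, n_classes, kernel_size=1)

    def forward(self, x):
        e = self.enc(x)                           # full-resolution features
        b = self.bottleneck(self.down(e))         # half-resolution features
        d = self.dec(torch.cat([e, self.up(b)], dim=1))   # skip connection
        return self.head(d)                       # per-pixel class logits

logits = TinyUNet()(torch.randn(1, 1, 128, 128))  # (1, 2, 128, 128)
print(logits.shape)
```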
