

Sensors, Signal and Image Processing in Biomedicine and Assisted Living

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Biomedical Sensors".

Deadline for manuscript submissions: closed (31 March 2020) | Viewed by 88786

Printed Edition Available!
A printed edition of this Special Issue is available.

Special Issue Editor


Prof. Dr. Dimitris Iakovidis
Guest Editor
Department of Computer Science and Biomedical Informatics, University of Thessaly, Papasiopoulou 2-4, 35131 Lamia, Greece
Interests: signal/image processing and analysis; pattern recognition, data mining & machine learning; software engineering; bio-inspired algorithms & fuzzy systems; decision support & cognitive systems; challenging applications including but not limited to clinical informatics and biomedical engineering

Special Issue Information

Dear Colleagues,

Sensor technologies are crucial in biomedicine, as the biomedical devices used for screening and/or diagnosis rely on their efficiency and effectiveness. Artificial intelligence has enabled further enhancement of the acquired sensor signals, such as noise reduction in one-dimensional electroencephalographic (EEG) signals or color correction in endoscopic images, as well as their analysis by computer-based medical systems, promising enhanced diagnostic yield and productivity for sustainable health systems. Furthermore, smart sensor systems incorporating advanced signal processing and analysis techniques are today entering our lives through smartphones and other wearable devices, monitoring our health status and helping us maintain a healthy lifestyle. The impact of such technologies can be even more significant for the elderly or for people with disabilities, such as the visually impaired.

In this context, this Special Issue welcomes original contributions that focus on novel sensor technologies, signal, image, and video processing/analysis methodologies. It also welcomes review articles on challenging topics and emerging technologies.

This Special Issue is organized in the context of the project ENORASI (Intelligent Audiovisual System Enhancing Cultural Experience and Accessibility), co-financed by the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship and Innovation, under the call RESEARCH–CREATE–INNOVATE (project code: T1EDK-02070).

Prof. Dr. Dimitris Iakovidis
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords


  • Biomedical systems
  • Assistive systems
  • Multisensor systems
  • Biomedical sensors
  • Sensor networks
  • Internet of Things (IoT)
  • Machine learning
  • Decision making
  • Uncertainty-aware systems
  • Segmentation
  • Detection
  • Classification
  • Modeling and simulation
  • Video analysis
  • Multimodal signal fusion
  • Coding and compression
  • Summarization
  • Transmission
  • Quality enhancement
  • Quality assessment

Published Papers (17 papers)


Editorial


4 pages, 170 KiB  
Editorial
Sensors, Signal and Image Processing in Biomedicine and Assisted Living
by Dimitris K. Iakovidis
Sensors 2020, 20(18), 5071; https://0-doi-org.brum.beds.ac.uk/10.3390/s20185071 - 07 Sep 2020
Cited by 1 | Viewed by 1447
Abstract
Sensor technologies are crucial in biomedicine, as the biomedical systems and devices used for screening and diagnosis rely on their efficiency and effectiveness [...] Full article

Research


12 pages, 1607 KiB  
Article
Laryngeal Lesion Classification Based on Vascular Patterns in Contact Endoscopy and Narrow Band Imaging: Manual Versus Automatic Approach
by Nazila Esmaeili, Alfredo Illanes, Axel Boese, Nikolaos Davaris, Christoph Arens, Nassir Navab and Michael Friebe
Sensors 2020, 20(14), 4018; https://0-doi-org.brum.beds.ac.uk/10.3390/s20144018 - 19 Jul 2020
Cited by 13 | Viewed by 3304
Abstract
Longitudinal and perpendicular changes in the vocal fold’s blood vessels are associated with the development of benign and malignant laryngeal lesions. The combination of Contact Endoscopy (CE) and Narrow Band Imaging (NBI) can provide intraoperative real-time visualization of the vascular changes in the laryngeal mucosa. However, the visual evaluation of vascular patterns in CE-NBI images is challenging and highly depends on the clinicians’ experience. The current study aims to evaluate and compare the performance of a manual and an automatic approach for laryngeal lesion classification based on vascular patterns in CE-NBI images. In the manual approach, six observers visually evaluated a series of CE-NBI images belonging to a patient and then classified the patient’s lesion as benign or malignant. For the automatic classification, an algorithm characterizing the level of vessel disorder, in combination with four supervised classifiers, was used to classify CE-NBI images. The results showed that the subjectivity of the manual evaluation could be reduced by using a computer-based approach. Moreover, the automatic approach showed the potential to work as an assistant system in cases of disagreement among clinicians and to reduce the misclassifications of the manual approach. Full article

27 pages, 4627 KiB  
Article
An Intelligent and Low-Cost Eye-Tracking System for Motorized Wheelchair Control
by Mahmoud Dahmani, Muhammad E. H. Chowdhury, Amith Khandakar, Tawsifur Rahman, Khaled Al-Jayyousi, Abdalla Hefny and Serkan Kiranyaz
Sensors 2020, 20(14), 3936; https://0-doi-org.brum.beds.ac.uk/10.3390/s20143936 - 15 Jul 2020
Cited by 41 | Viewed by 10367
Abstract
In the 34 developed and 156 developing countries, there are ~132 million disabled people who need a wheelchair, constituting 1.86% of the world population. Moreover, there are millions of people suffering from diseases related to motor disabilities, which cause an inability to produce controlled movement in any of the limbs or even the head. This paper proposes a system to aid people with motor disabilities by restoring their ability to move effectively and effortlessly, without having to rely on others, using an eye-controlled electric wheelchair. The system input is images of the user’s eye, which are processed to estimate the gaze direction, and the wheelchair is moved accordingly. To accomplish such a feat, four user-specific methods were developed, implemented, and tested, all of which were based on a benchmark database created by the authors. The first three techniques were automatic, employed correlation, and were variants of template matching, whereas the last one used convolutional neural networks (CNNs). Different metrics to quantitatively evaluate the performance of each algorithm in terms of accuracy and latency were computed, and an overall comparison is presented. The CNN exhibited the best performance (i.e., 99.3% classification accuracy), and thus it was the model of choice for the gaze estimator, which commands the wheelchair motion. The system was evaluated carefully on eight subjects, achieving 99% accuracy under changing illumination conditions, both outdoors and indoors. This required modifying a motorized wheelchair to adapt it to the predictions output by the gaze estimation algorithm. The wheelchair controller can bypass any decision made by the gaze estimator and immediately halt motion with the help of an array of proximity sensors if the measured distance falls below a well-defined safety margin. This work not only empowers immobile wheelchair users, but also provides low-cost tools for organizations assisting them. Full article
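The correlation-based template matching mentioned in the abstract can be sketched in a few lines: compare a mean-removed, normalized eye image against one stored template per gaze direction and pick the best correlation. This is an illustrative sketch of the general technique, not the authors' implementation; the template dictionary and direction labels are hypothetical.

```python
import numpy as np

def classify_gaze(eye_img, templates):
    """Pick the gaze direction whose template correlates best with the eye
    image (normalized correlation at zero shift). Illustrative sketch only;
    `templates` maps hypothetical direction labels to reference images."""
    best_label, best_score = None, -np.inf
    v = (eye_img - eye_img.mean()).ravel()
    v = v / (np.linalg.norm(v) + 1e-9)
    for label, tmpl in templates.items():
        t = (tmpl - tmpl.mean()).ravel()
        t = t / (np.linalg.norm(t) + 1e-9)
        score = float(v @ t)  # correlation coefficient in [-1, 1]
        if score > best_score:
            best_label, best_score = label, score
    return best_label, best_score
```

In practice the templates would be user-specific crops from a calibration session, which is consistent with the "user-specific methods" the abstract describes.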

27 pages, 6771 KiB  
Article
Uncertainty-Aware Visual Perception System for Outdoor Navigation of the Visually Challenged
by George Dimas, Dimitris E. Diamantis, Panagiotis Kalozoumis and Dimitris K. Iakovidis
Sensors 2020, 20(8), 2385; https://0-doi-org.brum.beds.ac.uk/10.3390/s20082385 - 22 Apr 2020
Cited by 18 | Viewed by 3735
Abstract
Every day, visually challenged people (VCP) face mobility restrictions and accessibility limitations. A short walk to a nearby destination, which for other individuals is taken for granted, becomes a challenge. To tackle this problem, we propose a novel visual perception system for outdoor navigation that can be evolved into an everyday visual aid for VCP. The proposed methodology is integrated in a wearable visual perception system (VPS). The proposed approach efficiently incorporates deep learning object recognition models, along with an obstacle detection methodology based on human eye fixation prediction using Generative Adversarial Networks. An uncertainty-aware modeling of obstacle risk assessment and spatial localization, following a fuzzy logic approach, has been employed for robust obstacle detection. This combination can translate the position and the type of detected obstacles into descriptive linguistic expressions, allowing users to easily understand where obstacles lie in their environment and avoid them. The performance and capabilities of the proposed method are investigated in the context of safe navigation of VCP in outdoor environments of cultural interest through obstacle recognition and detection. Additionally, a comparison between the proposed system and relevant state-of-the-art systems for the safe navigation of VCP, focused on design and user-requirements satisfaction, is performed. Full article

16 pages, 3069 KiB  
Article
Contactless Real-Time Heartbeat Detection via 24 GHz Continuous-Wave Doppler Radar Using Artificial Neural Networks
by Nebojša Malešević, Vladimir Petrović, Minja Belić, Christian Antfolk, Veljko Mihajlović and Milica Janković
Sensors 2020, 20(8), 2351; https://0-doi-org.brum.beds.ac.uk/10.3390/s20082351 - 21 Apr 2020
Cited by 27 | Viewed by 6606
Abstract
The measurement of human vital signs is a highly important task in a variety of environments and applications. Most notably, the electrocardiogram (ECG) is a versatile signal that could indicate various physical and psychological conditions, from signs of life to complex mental states. The measurement of the ECG relies on electrodes attached to the skin to acquire the electrical activity of the heart, which imposes certain limitations. Recently, due to the advancement of wireless technology, it has become possible to pick up heart activity in a contactless manner. Among the possible ways to wirelessly obtain information related to heart activity, methods based on mm-wave radars proved to be the most accurate in detecting the small mechanical oscillations of the human chest resulting from heartbeats. In this paper, we presented a method based on a continuous-wave Doppler radar coupled with an artificial neural network (ANN) to detect heartbeats as individual events. To keep the method computationally simple, the ANN took the raw radar signal as input, while the output was minimally processed, ensuring low latency operation (<1 s). The performance of the proposed method was evaluated with respect to an ECG reference (“ground truth”) in an experiment involving 21 healthy volunteers, who were sitting on a cushioned seat and were refrained from making excessive body movements. The results indicated that the presented approach is viable for the fast detection of individual heartbeats without heavy signal preprocessing. Full article

16 pages, 6234 KiB  
Article
Contactless Vital Signs Measurement System Using RGB-Thermal Image Sensors and Its Clinical Screening Test on Patients with Seasonal Influenza
by Toshiaki Negishi, Shigeto Abe, Takemi Matsui, He Liu, Masaki Kurosawa, Tetsuo Kirimoto and Guanghao Sun
Sensors 2020, 20(8), 2171; https://0-doi-org.brum.beds.ac.uk/10.3390/s20082171 - 13 Apr 2020
Cited by 58 | Viewed by 9616
Abstract
Background: In the last two decades, infrared thermography (IRT) has been applied in quarantine stations for the screening of patients with suspected infectious disease. However, the fever-based screening procedure employing IRT suffers from low sensitivity, because monitoring body temperature alone is insufficient for detecting infected patients. To overcome the drawbacks of fever-based screening, this study aims to develop and evaluate a multiple vital sign (i.e., body temperature, heart rate and respiration rate) measurement system using RGB-thermal image sensors. Methods: The RGB camera measures blood volume pulse (BVP) through variations in the light absorption from human facial areas. IRT is used to estimate the respiration rate by measuring the change in temperature near the nostrils or mouth accompanying respiration. To enable a stable and reliable system, the following image and signal processing methods were proposed and implemented: (1) an RGB-thermal image fusion approach to achieve highly reliable facial region-of-interest tracking, (2) a heart rate estimation method including a tapered window for reducing noise caused by the face tracker, reconstruction of a BVP signal with three RGB channels to optimize a linear function, thereby improving the signal-to-noise ratio and multiple signal classification (MUSIC) algorithm for estimating the pseudo-spectrum from limited time-domain BVP signals within 15 s and (3) a respiration rate estimation method implementing nasal or oral breathing signal selection based on signal quality index for stable measurement and MUSIC algorithm for rapid measurement. We tested the system on 22 healthy subjects and 28 patients with seasonal influenza, using the support vector machine (SVM) classification method. 
Results: The body temperature, heart rate and respiration rate measured in a non-contact manner were highly similar to those measured via contact-type reference devices (i.e., thermometer, ECG and respiration belt), with Pearson correlation coefficients of 0.71, 0.87 and 0.87, respectively. Moreover, the optimized SVM model with three vital signs yielded sensitivity and specificity values of 85.7% and 90.1%, respectively. Conclusion: For contactless vital sign measurement, the system achieved a performance similar to that of the reference devices. The multiple vital sign-based screening achieved higher sensitivity than fever-based screening. Thus, this system represents a promising alternative for further quarantine procedures to prevent the spread of infectious diseases. Full article
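The MUSIC step named in the methods, estimating a pseudo-spectrum from a short (~15 s) pulse trace, can be sketched generically: build a lag-covariance matrix from the signal, split off the noise subspace, and scan a frequency grid for the steering vector most orthogonal to it. This is a textbook sketch of MUSIC, not the authors' implementation; the window length `m`, source count, and frequency grid are illustrative choices.

```python
import numpy as np

def music_freq(x, fs, m=32, n_src=2, fgrid=None):
    """Dominant-frequency estimate of a short signal via the MUSIC
    pseudo-spectrum (illustrative sketch; parameters are assumptions)."""
    x = np.asarray(x, float) - np.mean(x)
    N = len(x)
    # Covariance matrix from overlapping length-m snapshots
    X = np.stack([x[i:i + m] for i in range(N - m + 1)], axis=1)
    R = X @ X.T / X.shape[1]
    w, V = np.linalg.eigh(R)          # eigenvalues in ascending order
    En = V[:, : m - n_src]            # noise subspace (smallest eigenvalues)
    if fgrid is None:
        fgrid = np.linspace(0.5, 4.0, 701)   # 30-240 bpm for heart rate
    k = np.arange(m)
    spec = []
    for f in fgrid:
        a = np.exp(2j * np.pi * f / fs * k)  # steering vector at frequency f
        spec.append(1.0 / np.linalg.norm(En.T @ a) ** 2)
    return fgrid[int(np.argmax(spec))]
```

`n_src=2` accounts for the two complex exponentials of a real sinusoid; the pseudo-spectrum peaks sharply even on short windows, which is why the paper uses MUSIC for rapid measurement.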

16 pages, 7429 KiB  
Article
Hyperspectral Imaging for the Detection of Glioblastoma Tumor Cells in H&E Slides Using Convolutional Neural Networks
by Samuel Ortega, Martin Halicek, Himar Fabelo, Rafael Camacho, María de la Luz Plaza, Fred Godtliebsen, Gustavo M. Callicó and Baowei Fei
Sensors 2020, 20(7), 1911; https://0-doi-org.brum.beds.ac.uk/10.3390/s20071911 - 30 Mar 2020
Cited by 55 | Viewed by 6473
Abstract
Hyperspectral imaging (HSI) technology has demonstrated potential to provide useful information about the chemical composition of tissue and its morphological features in a single image modality. Deep learning (DL) techniques have demonstrated the ability of automatic feature extraction from data for a successful classification. In this study, we exploit HSI and DL for the automatic differentiation of glioblastoma (GB) and non-tumor tissue on hematoxylin and eosin (H&E) stained histological slides of human brain tissue. GB detection is a challenging application, showing high heterogeneity in the cellular morphology across different patients. We employed an HSI microscope, with a spectral range from 400 to 1000 nm, to collect 517 HS cubes from 13 GB patients using 20× magnification. Using a convolutional neural network (CNN), we were able to automatically detect GB within the pathological slides, achieving average sensitivity and specificity values of 88% and 77%, respectively, representing an improvement of 7% and 8% respectively, as compared to the results obtained using RGB (red, green, and blue) images. This study demonstrates that the combination of hyperspectral microscopic imaging and deep learning is a promising tool for future computational pathologies. Full article

17 pages, 1863 KiB  
Article
Sleep in the Natural Environment: A Pilot Study
by Fayzan F. Chaudhry, Matteo Danieletto, Eddye Golden, Jerome Scelza, Greg Botwin, Mark Shervey, Jessica K. De Freitas, Ishan Paranjpe, Girish N. Nadkarni, Riccardo Miotto, Patricia Glowe, Greg Stock, Bethany Percha, Noah Zimmerman, Joel T. Dudley and Benjamin S. Glicksberg
Sensors 2020, 20(5), 1378; https://0-doi-org.brum.beds.ac.uk/10.3390/s20051378 - 03 Mar 2020
Cited by 12 | Viewed by 5020
Abstract
Sleep quality has been directly linked to cognitive function, quality of life, and a variety of serious diseases across many clinical domains. Standard methods for assessing sleep involve overnight studies in hospital settings, which are uncomfortable, expensive, not representative of real sleep, and difficult to conduct on a large scale. Recently, numerous commercial digital devices have been developed that record physiological data, such as movement, heart rate, and respiratory rate, which can act as a proxy for sleep quality in lieu of standard electroencephalogram recording equipment. The sleep-related output metrics from these devices include sleep staging and total sleep duration and are derived via proprietary algorithms that utilize a variety of these physiological recordings. Each device company makes different claims of accuracy and measures different features of sleep quality, and it is still unknown how well these devices correlate with one another and perform in a research setting. In this pilot study of 21 participants, we investigated whether sleep metric outputs from self-reported sleep metrics (SRSMs) and four sensors, specifically Fitbit Surge (a smart watch), Withings Aura (a sensor pad that is placed under a mattress), Hexoskin (a smart shirt), and Oura Ring (a smart ring), were related to known cognitive and psychological metrics, including the n-back test and Pittsburgh Sleep Quality Index (PSQI). We analyzed correlation between multiple device-related sleep metrics. Furthermore, we investigated relationships between these sleep metrics and cognitive scores across different timepoints and SRSM through univariate linear regressions. We found that correlations for sleep metrics between the devices across the sleep cycle were almost uniformly low, but still significant (p < 0.05). For cognitive scores, we found the Withings latency was statistically significant for afternoon and evening timepoints at p = 0.016 and p = 0.013. 
We did not find any significant associations between SRSMs and PSQI or cognitive scores. Additionally, Oura Ring’s total sleep duration and efficiency in relation to the PSQI measure were statistically significant at p = 0.004 and p = 0.033, respectively. These findings can hopefully be used to guide future sensor-based sleep research. Full article

15 pages, 12064 KiB  
Article
The Rehapiano—Detecting, Measuring, and Analyzing Action Tremor Using Strain Gauges
by Norbert Ferenčík, Miroslav Jaščur, Marek Bundzel and Filippo Cavallo
Sensors 2020, 20(3), 663; https://0-doi-org.brum.beds.ac.uk/10.3390/s20030663 - 24 Jan 2020
Cited by 8 | Viewed by 6409
Abstract
We have developed a device, the Rehapiano, for the fast and quantitative assessment of action tremor. It uses strain gauges to measure the force exerted by individual fingers. This article verifies the device’s capability to measure and monitor the development of upper limb tremor. The Rehapiano uses a precision, 24-bit, analog-to-digital converter and an Arduino microcomputer to transfer raw data via a USB interface to a computer for processing, database storage, and evaluation. First, our experiments validated the device by measuring simulated tremors with known frequencies. Second, we created a measurement protocol, which we used to measure and compare healthy subjects and patients with Parkinson’s disease. Finally, we evaluated the repeatability of the quantitative assessment. We verified our hypothesis that the Rehapiano is able to detect force changes, and our experimental results confirmed that our system is capable of measuring action tremor. The Rehapiano is also sensitive enough to enable the quantification of Parkinsonian tremors. Full article

17 pages, 1770 KiB  
Article
Adaptive Sampling of the Electrocardiogram Based on Generalized Perceptual Features
by Piotr Augustyniak
Sensors 2020, 20(2), 373; https://0-doi-org.brum.beds.ac.uk/10.3390/s20020373 - 09 Jan 2020
Cited by 10 | Viewed by 3618
Abstract
A non-uniform distribution of diagnostic information in the electrocardiogram (ECG) has been commonly accepted and is the background to several compression, denoising and watermarking methods. Gaze tracking is a widely recognized method for identification of an observer’s preferences and interest areas. The statistics of experts’ scanpaths were found to be a convenient quantitative estimate of medical information density for each particular component (i.e., wave) of the ECG record. In this paper we propose the application of generalized perceptual features to control the adaptive sampling of a digital ECG. Firstly, based on temporal distribution of the information density, local ECG bandwidth is estimated and projected to the actual positions of components in heartbeat representation. Next, the local sampling frequency is calculated pointwise and the ECG is adaptively low-pass filtered in all simultaneous channels. Finally, sample values are interpolated at new time positions forming a non-uniform time series. In evaluation of perceptual sampling, an inverse transform was used for the reconstruction of regularly sampled ECG with a percent root-mean-square difference (PRD) error of 3–5% (for compression ratios 3.0–4.7, respectively). Nevertheless, tests performed with the use of the CSE Database show good reproducibility of ECG diagnostic features, within the IEC 60601-2-25:2015 requirements, thanks to the occurrence of distortions in less relevant parts of the cardiac cycle. Full article
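The pipeline described above, estimating a local bandwidth, deriving a pointwise sampling rate, and interpolating onto a non-uniform time grid, can be reduced to a short sketch. This is an illustration of the idea only (the per-point low-pass filtering stage is omitted, and the local bandwidth is assumed to be given as an array aligned with the signal), not the paper's algorithm.

```python
import numpy as np

def adaptive_resample(sig, fs, local_bw):
    """Resample `sig` non-uniformly: the step after each sample follows the
    local bandwidth estimate via Nyquist, dt = 1 / (2 * bw). Illustrative
    sketch; `local_bw` (Hz, one value per input sample) is an assumption."""
    t_uniform = np.arange(len(sig)) / fs
    times = [0.0]
    while True:
        bw = np.interp(times[-1], t_uniform, local_bw)
        t_next = times[-1] + 1.0 / (2.0 * bw)
        if t_next > t_uniform[-1]:
            break
        times.append(t_next)
    times = np.array(times)
    # Interpolate sample values at the new, non-uniform time positions
    return times, np.interp(times, t_uniform, sig)
```

With a bandwidth profile that is high around the QRS complex and low elsewhere, this produces dense sampling where diagnostic information concentrates and sparse sampling in the remainder of the cardiac cycle, which is the source of the compression ratios reported above.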

28 pages, 15065 KiB  
Article
Most Relevant Spectral Bands Identification for Brain Cancer Detection Using Hyperspectral Imaging
by Beatriz Martinez, Raquel Leon, Himar Fabelo, Samuel Ortega, Juan F. Piñeiro, Adam Szolna, Maria Hernandez, Carlos Espino, Aruma J. O’Shanahan, David Carrera, Sara Bisshopp, Coralia Sosa, Mariano Marquez, Rafael Camacho, Maria de la Luz Plaza, Jesus Morera and Gustavo M. Callico
Sensors 2019, 19(24), 5481; https://0-doi-org.brum.beds.ac.uk/10.3390/s19245481 - 12 Dec 2019
Cited by 29 | Viewed by 4504
Abstract
Hyperspectral imaging (HSI) is a non-ionizing and non-contact imaging technique capable of obtaining more information than conventional RGB (red green blue) imaging. In the medical field, HSI has commonly been investigated due to its great potential for diagnostic and surgical guidance purposes. However, the large amount of information provided by HSI normally contains redundant or non-relevant information, and it is extremely important to identify the most relevant wavelengths for a certain application in order to improve the accuracy of the predictions and reduce the execution time of the classification algorithm. Additionally, some wavelengths can contain noise, and removing such bands can improve the classification stage. The work presented in this paper aims to identify such relevant spectral ranges in the visual-and-near-infrared (VNIR) region for an accurate detection of brain cancer using in vivo hyperspectral images. A methodology based on optimization algorithms has been proposed for this task, identifying the relevant wavelengths to achieve the best accuracy in the classification results obtained by a supervised classifier (support vector machines), while employing the lowest possible number of spectral bands. The results demonstrate that the proposed methodology based on genetic algorithm optimization slightly improves the accuracy of tumor identification, by ~5%, using only 48 bands, with respect to the reference results obtained with 128 bands, offering the possibility of developing customized acquisition sensors that could provide real-time HS imaging. The most relevant spectral ranges found are 440.5–465.96 nm, 498.71–509.62 nm, 556.91–575.1 nm, 593.29–615.12 nm, 636.94–666.05 nm, 698.79–731.53 nm and 884.32–902.51 nm. Full article

14 pages, 2458 KiB  
Article
A New Approach for Motor Imagery Classification Based on Sorted Blind Source Separation, Continuous Wavelet Transform, and Convolutional Neural Network
by César J. Ortiz-Echeverri, Sebastián Salazar-Colores, Juvenal Rodríguez-Reséndiz and Roberto A. Gómez-Loenzo
Sensors 2019, 19(20), 4541; https://0-doi-org.brum.beds.ac.uk/10.3390/s19204541 - 18 Oct 2019
Cited by 61 | Viewed by 4634
Abstract
Brain-Computer Interfaces (BCI) are systems that allow the interaction of people and devices on the grounds of brain activity. The noninvasive and most viable way to obtain such information is by using electroencephalography (EEG). However, these signals have a low signal-to-noise ratio, as well as a low spatial resolution. This work proposes a new method built from the combination of Blind Source Separation (BSS) to obtain estimated independent components, a 2D representation of these component signals using the Continuous Wavelet Transform (CWT), and a classification stage using a Convolutional Neural Network (CNN) approach. A criterion based on the spectral correlation with a Movement-Related Independent Component (MRIC) is used to sort the sources estimated by BSS, thus reducing the spatial variance. The experimental classification accuracy of 94.66%, obtained using k-fold cross-validation, is competitive with techniques recently reported in the state of the art. Full article
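The 2D representation step, turning each 1-D component signal into a scalogram image that a CNN can consume, can be sketched with a frequency-domain Morlet CWT. This is a generic sketch of the technique, not the authors' code; the wavelet parameter `w` and the frequency grid are illustrative, and per-scale normalization is omitted for brevity.

```python
import numpy as np

def cwt_scalogram(x, fs, freqs, w=6.0):
    """Morlet CWT magnitude (scalogram) of a 1-D signal, computed in the
    frequency domain. Rows = analysis frequencies, columns = time samples.
    Illustrative sketch; per-scale normalization is omitted."""
    n = len(x)
    X = np.fft.fft(x)
    f = np.fft.fftfreq(n, 1.0 / fs)          # frequency axis in Hz
    out = np.empty((len(freqs), n))
    for i, f0 in enumerate(freqs):
        s = w * fs / (2 * np.pi * f0)        # scale whose center frequency is f0
        # Analytic Morlet wavelet in the frequency domain (positive freqs only)
        psi = np.pi ** -0.25 * np.exp(-0.5 * (s * 2 * np.pi * f / fs - w) ** 2)
        psi[f < 0] = 0.0
        out[i] = np.abs(np.fft.ifft(X * psi))
    return out
```

Stacking such scalograms, one per estimated source, yields the image-like tensors that the CNN stage then classifies.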

18 pages, 3298 KiB  
Article
Target-Specific Action Classification for Automated Assessment of Human Motor Behavior from Video
by Behnaz Rezaei, Yiorgos Christakis, Bryan Ho, Kevin Thomas, Kelley Erb, Sarah Ostadabbas and Shyamal Patel
Sensors 2019, 19(19), 4266; https://0-doi-org.brum.beds.ac.uk/10.3390/s19194266 - 01 Oct 2019
Cited by 5 | Viewed by 3358
Abstract
Objective monitoring and assessment of human motor behavior can improve the diagnosis and management of several medical conditions. Over the past decade, significant advances have been made in the use of wearable technology for continuously monitoring human motor behavior in free-living conditions. However, wearable technology remains ill-suited for applications which require monitoring and interpretation of complex motor behaviors (e.g., involving interactions with the environment). Recent advances in computer vision and deep learning have opened up new possibilities for extracting information from video recordings. In this paper, we present a hierarchical vision-based behavior phenotyping method for classification of basic human actions in video recordings performed using a single RGB camera. Our method addresses challenges associated with tracking multiple human actors and classification of actions in videos recorded in changing environments with different fields of view. We implement a cascaded pose tracker that uses temporal relationships between detections for short-term tracking and appearance-based tracklet fusion for long-term tracking. Furthermore, for action classification, we use pose evolution maps derived from the cascaded pose tracker as low-dimensional and interpretable representations of the movement sequences for training a convolutional neural network. The cascaded pose tracker achieves an average accuracy of 88% in tracking the target human actor in our video recordings, and the overall system achieves an average test accuracy of 84% for target-specific action classification in untrimmed video recordings. Full article

22 pages, 4368 KiB  
Article
Continuous Distant Measurement of the User’s Heart Rate in Human-Computer Interaction Applications
by Jaromir Przybyło
Sensors 2019, 19(19), 4205; https://0-doi-org.brum.beds.ac.uk/10.3390/s19194205 - 27 Sep 2019
Cited by 9 | Viewed by 2723
Abstract
In real-world scenarios, estimating heart rate (HR) with video plethysmography (VPG) methods is difficult because many factors can contaminate the pulse signal (e.g., the subject's movement or illumination changes). This article presents the evaluation of a VPG system designed for continuous monitoring of the user's heart rate during typical human-computer interaction scenarios. The impact of human activities while working at the computer (i.e., reading and writing text, playing a game) on the accuracy of VPG HR measurements was examined. Three commonly used signal extraction methods were evaluated: green channel (G), green-red difference (GRD), and blind source separation (ICA). A new method based on an excess-green (ExG) image representation was proposed. Three algorithms for estimating pulse rate were used: power spectral density (PSD), autoregressive modeling (AR), and time-domain analysis (TIME). In summary, depending on the scenario studied, different combinations of signal extraction method and pulse estimation algorithm ensure optimal heart rate detection. The best results were obtained with the ICA method: average RMSE = 6.1 bpm (beats per minute). The proposed ExG signal representation outperforms the other methods except ICA (RMSE = 11.2 bpm, compared to 14.4 bpm for G and 13.0 bpm for GRD). ExG is also the best method in terms of the proposed success rate metric (sRate).
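The extraction and estimation steps can be sketched as follows. The ExG channel is assumed here to be the standard excess-green combination 2G − R − B, and the PSD-based HR estimate is taken as the dominant spectral peak inside a plausible 0.7–4 Hz band; the paper's actual pipeline (face tracking, detrending, windowing) is more elaborate.

```python
import numpy as np

def excess_green(rgb):
    """ExG channel: 2G - R - B (assumed formulation; the paper's
    exact ExG representation may differ)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 2.0 * g - r - b

def hr_from_psd(signal, fs, lo=0.7, hi=4.0):
    """Estimate heart rate in bpm as the strongest spectral peak
    within a physiologically plausible band (42-240 bpm)."""
    signal = signal - signal.mean()
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= lo) & (freqs <= hi)
    return 60.0 * freqs[band][np.argmax(psd[band])]
```

Restricting the peak search to the HR band is what keeps slow illumination drift and high-frequency sensor noise from being mistaken for the pulse.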

19 pages, 11875 KiB  
Article
Improving Discrimination in Color Vision Deficiency by Image Re-Coloring
by Huei-Yung Lin, Li-Qi Chen and Min-Liang Wang
Sensors 2019, 19(10), 2250; https://0-doi-org.brum.beds.ac.uk/10.3390/s19102250 - 15 May 2019
Cited by 26 | Viewed by 9786
Abstract
People with color vision deficiency (CVD) cannot perceive the full range of colors due to damage to the color-receptive nerves. In this work, we present an image enhancement approach that assists colorblind people in identifying colors they are unable to distinguish naturally. An image re-coloring algorithm based on eigenvector processing is proposed for robust color separation under the color deficiency transformation. It is shown that the eigenvector of color vision deficiency is distorted by an angle in the λ, Y-B, R-G color space. The experimental results show that our approach is useful for the recognition and separation of CVD-confusable colors in natural scene images. Compared to existing techniques, our results on natural images with CVD simulation perform very well in terms of RMS, HDR-VDP-2, and an IRB-approved human test. Both the objective comparison with previous works and the subjective evaluation via human tests validate the effectiveness of the proposed method.
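The core geometric idea, rotating colors in an opponent space so that a distorted confusion axis is separated again, can be sketched with a toy transform. The RGB-to-opponent matrix and the rotation angle below are illustrative assumptions; the paper estimates the actual eigenvector angle in its λ, Y-B, R-G space from the CVD simulation.

```python
import numpy as np

# Illustrative opponent transform: rows give luminance Y = (R+G+B)/3,
# a red-green axis R-G, and a blue-yellow axis B - (R+G)/2.
RGB2OPP = np.array([[1 / 3, 1 / 3, 1 / 3],
                    [1.0, -1.0, 0.0],
                    [-0.5, -0.5, 1.0]])

def recolor(rgb, theta):
    """Rotate the chromatic plane by theta radians while leaving
    luminance untouched, then map back to RGB."""
    opp = np.asarray(rgb, dtype=float) @ RGB2OPP.T
    rg, by = opp[..., 1].copy(), opp[..., 2].copy()
    c, s = np.cos(theta), np.sin(theta)
    opp[..., 1] = c * rg - s * by
    opp[..., 2] = s * rg + c * by
    return opp @ np.linalg.inv(RGB2OPP).T
```

Because only the chromatic plane is rotated, the luminance of every pixel is preserved, which keeps the re-colored image structurally similar to the original.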

15 pages, 1653 KiB  
Article
Low-Complexity and Hardware-Friendly H.265/HEVC Encoder for Vehicular Ad-Hoc Networks
by Xiantao Jiang, Jie Feng, Tian Song and Takafumi Katayama
Sensors 2019, 19(8), 1927; https://0-doi-org.brum.beds.ac.uk/10.3390/s19081927 - 24 Apr 2019
Cited by 24 | Viewed by 3867
Abstract
Real-time video streaming over vehicular ad-hoc networks (VANETs) is a critical challenge for road safety applications. The purpose of this paper is to reduce the computational complexity of the high efficiency video coding (HEVC) encoder for VANETs. Based on a novel spatiotemporal neighborhood set, a coding tree unit (CTU) depth decision algorithm is first presented that controls the depth search range. Second, a Bayesian classifier is used for the prediction unit decision in inter-prediction, with the prior probability calculated by a Gibbs random field model. Simulation results show that the overall algorithm significantly reduces encoding time with a reasonably low loss in coding efficiency. Compared to the HEVC reference software HM16.0, the encoding time is reduced by up to 63.96%, while the Bjontegaard delta bit-rate increases by only 0.76–0.80% on average. Moreover, the proposed HEVC encoder is low-complexity and hardware-friendly, suiting video codecs that reside on mobile vehicles in VANETs.
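The depth-range control can be sketched as a simple early-termination rule: restrict the CTU quadtree search to the interval of depths chosen by the spatiotemporal neighbors. The rule below is a deliberately simplified stand-in; the paper pairs such range control with a Bayesian prediction-unit decision whose prior comes from a Gibbs random field.

```python
def depth_search_range(neighbor_depths, max_depth=3):
    """Return the (lo, hi) CTU depth interval to search, based on the
    depths chosen by spatially and temporally adjacent CTUs.
    With no context available, fall back to the full search range.
    HEVC CTU depths run 0..3 (64x64 down to 8x8 coding units)."""
    if not neighbor_depths:
        return 0, max_depth
    lo = max(0, min(neighbor_depths))
    hi = min(max_depth, max(neighbor_depths))
    return lo, hi
```

Skipping depths outside the neighbors' range is where the encoding-time savings come from: homogeneous regions (e.g., road surface) rarely need the deepest partitions their neighbors also avoided.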

Other

11 pages, 28177 KiB  
Letter
Improving Temporal Stability and Accuracy for Endoscopic Video Tissue Classification Using Recurrent Neural Networks
by Tim Boers, Joost van der Putten, Maarten Struyvenberg, Kiki Fockens, Jelmer Jukema, Erik Schoon, Fons van der Sommen, Jacques Bergman and Peter de With
Sensors 2020, 20(15), 4133; https://0-doi-org.brum.beds.ac.uk/10.3390/s20154133 - 24 Jul 2020
Cited by 5 | Viewed by 2249
Abstract
Early Barrett’s neoplasia is often missed due to its subtle visual features and the inexperience of non-expert endoscopists with such lesions. While promising results have been reported on the automated detection of this type of early cancer in still endoscopic images, video-based detection exploiting the temporal domain remains an open problem. The temporally stable nature of video data in endoscopic examinations makes it possible to develop a framework that diagnoses the imaged tissue class over time, yielding a more robust and improved model for spatial predictions. We show that the introduction of Recurrent Neural Network nodes offers a more stable and accurate model for tissue classification, compared to classification on individual images. We developed a customized ResNet18 feature extractor with four types of classifiers: Fully Connected (FC), Fully Connected with an averaging filter (FC Avg (n = 5)), Long Short-Term Memory (LSTM) and a Gated Recurrent Unit (GRU). Experimental results are based on 82 pullback videos of the esophagus, including 46 patients with high-grade dysplasia. Our results demonstrate that the LSTM classifier outperforms the FC, FC Avg (n = 5) and GRU classifiers with an average accuracy of 85.9%, compared to 82.2%, 83.0% and 85.6%, respectively. The benefit of our novel implementation for endoscopic tissue classification is the inclusion of spatio-temporal information for improved and robust decision making, and it is a first step towards fully temporal learning of esophageal cancer detection in endoscopic video.
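The simplest temporal baseline in the comparison, FC Avg (n = 5), can be sketched as a causal moving average over per-frame class logits: isolated single-frame prediction flips get voted down by the surrounding frames. This is an illustrative reconstruction of the averaging filter, not the authors' code.

```python
import numpy as np

def smooth_logits(logits, n=5):
    """Average each frame's class logits with those of the previous
    n-1 frames, stabilizing per-frame tissue predictions over time."""
    logits = np.asarray(logits, dtype=float)
    out = np.empty_like(logits)
    for t in range(len(logits)):
        out[t] = logits[max(0, t - n + 1): t + 1].mean(axis=0)
    return out
```

Recurrent classifiers such as the LSTM generalize this idea by learning what to remember from past frames instead of averaging them uniformly, which is consistent with the LSTM's small accuracy edge over the fixed filter.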
