Wearable Sensor for Activity Analysis and Context Recognition

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: closed (31 May 2021) | Viewed by 42306

Special Issue Editors


Prof. Dr. Samer Mohammed
Guest Editor
Laboratory of Image Signal and Intelligent Systems, Department of Network & Telecom, University of Paris-Est Créteil, 94000 Créteil, France
Interests: wearable robotics; physical human robot interaction; robotics

Prof. Dr. Jian Huang
Guest Editor
Key Laboratory of Image Processing and Intelligent Control, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430074, China
Interests: robot control; rehabilitation robots; wearable robots; underactuated robotics; mobile robots; robotic manipulation

Prof. Dr. Ravi Vaidyanathan
Guest Editor
Imperial College London, Department of Mechanical Engineering, City and Guilds Building, Rm 717, South Kensington Campus, London SW7 1AL, UK
Interests: biomechatronics; wearable systems; instrumentation; assistive technology; signal fusion; biorobotics

Special Issue Information

Dear Colleagues,

Active participation of the dependent population in society has become an important challenge from both societal and economic viewpoints, as this population is constantly increasing. Assisting with daily activities would enhance personal safety, well-being, and autonomy while reducing health care costs. The dependent population requires continuous monitoring to detect abnormal situations or prevent unpredictable events such as falls. Thus, the problem of human activity recognition is central to understanding and predicting human behavior, in particular to provide assistive services to humans, such as health monitoring, well-being, and security. The last decade has shown an increasing interest in the development of wearable technologies for physical and cognitive assistance and for rehabilitation purposes. For instance, the rapid development of microsystems technology has contributed to the development of small, lightweight, and inexpensive wearable sensors. This has provided users with a means to improve early-stage detection of pathologies while reducing the overall costs compared with more intrusive standard diagnostic methods. Recent advances in the fields of machine learning and deep learning have opened new and exciting research paradigms for constructing end-to-end learning models from complex data in the health care domain. These new learning techniques can also be used to translate wearable biomedical data into improved human health.

Despite this vast potential, the majority of today's wearables remain limited to simple metrics (e.g., step counts, heart rate, calories); detailed health and/or physiological instrumentation for machine interfacing has yet to be implemented. A staggering one-third of users are reported to abandon regular use of commercial devices, which points to problems of transience and sustained adoption. Advances in sensor development, embedded systems, and cloud connectivity enable an evolution from a device perspective to a systems perspective, which demands recognition that sensors, learning algorithms, and devices linking wearables to humans (e.g., robotic assist) are fundamentally coupled and hence should be treated as an integrated whole.

The Special Issue seeks to publish original investigations aimed at closing this gap. We invite papers presenting significant advances with respect to the state of the art in the following topics, which include, but are not limited to:

- Daily living activity recognition using wearable sensors;

- Learning techniques for health care application using wearable devices;

- Human assistance using smart spaces;

- Design and control of wearable robots for health care applications;

- Human-in-the-loop-optimization algorithms for assistive purposes using wearable devices;

- Neuro-robotics paradigm for wearable assistive technologies;

- Recent development and trends in wearable clinical rehabilitation techniques;

- Motion control and fall detection using wearable devices;

- Ethical, legal, and social issues of wearable devices.

Prof. Dr. Samer Mohammed
Prof. Dr. Jian Huang
Prof. Dr. Ravi Vaidyanathan
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (10 papers)


Research


15 pages, 3054 KiB  
Article
Multi-Channel Fusion Classification Method Based on Time-Series Data
by Xue-Bo Jin, Aiqiang Yang, Tingli Su, Jian-Lei Kong and Yuting Bai
Sensors 2021, 21(13), 4391; https://0-doi-org.brum.beds.ac.uk/10.3390/s21134391 - 26 Jun 2021
Cited by 7 | Viewed by 2755
Abstract
Time-series data exist in many application fields, and the classification of time-series data is one of the important research directions in time-series data mining. In this paper, univariate time-series data are taken as the research object, and deep learning and broad learning systems (BLSs) are used as the basic methods to explore the classification of multi-modal time-series data features. Long short-term memory (LSTM), gated recurrent unit, and bidirectional LSTM networks are used to learn and test the original time-series data, while a Gramian angular field and a recurrence plot are used to encode the time-series data as images, with a BLS employed for image learning and testing. Finally, to obtain the final classification results, Dempster–Shafer evidence theory (D–S evidence theory) is used to fuse the probability outputs of the two channels. In tests on public datasets, the method proposed in this paper obtains competitive results, compensating for the deficiencies of using only time-series data or only images for different types of datasets.
(This article belongs to the Special Issue Wearable Sensor for Activity Analysis and Context Recognition)
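The decision-level fusion step here is Dempster–Shafer combination of the two classifiers' probability outputs. As a rough illustration of how such a fusion behaves, the following Python sketch applies Dempster's rule to two probability vectors, treating each channel's softmax output as a mass function over singleton classes only (a simplifying assumption; the paper's exact mass assignment may differ):

import numpy as np

def dempster_fuse(m1, m2):
    # Dempster's rule for two mass vectors over the same singleton classes:
    # element-wise product renormalized by the total agreement
    m1, m2 = np.asarray(m1, float), np.asarray(m2, float)
    joint = np.outer(m1, m2)
    agreement = np.trace(joint)          # mass where both sources agree
    if agreement == 0.0:
        raise ValueError("total conflict: sources cannot be combined")
    return np.diag(joint) / agreement

p_lstm = [0.70, 0.20, 0.10]   # time-series channel output
p_bls  = [0.40, 0.50, 0.10]   # image (GAF / recurrence plot) channel output
print(dempster_fuse(p_lstm, p_bls))   # fused probabilities, sum to 1

With singleton-only masses the rule reduces to an element-wise product renormalized by the agreement, so classes on which both channels concur are reinforced.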

16 pages, 4285 KiB  
Article
A Multi-DoF Prosthetic Hand Finger Joint Controller for Wearable sEMG Sensors by Nonlinear Autoregressive Exogenous Model
by Zhaolong Gao, Rongyu Tang, Qiang Huang and Jiping He
Sensors 2021, 21(8), 2576; https://0-doi-org.brum.beds.ac.uk/10.3390/s21082576 - 07 Apr 2021
Cited by 8 | Viewed by 3410
Abstract
The loss of mobility function and sensory information from the arm, hand, and fingertips hampers the activities of daily living (ADL) of patients. A modern bionic prosthetic hand can compensate for the lost functions and realize movements with multiple degrees of freedom (DoFs). However, commercially available prosthetic hands usually have limited DoFs due to limited sensors and a lack of stable classification algorithms. This study aimed to propose a controller for finger joint angle estimation from surface electromyography (sEMG). The sEMG data used for training were gathered with the Myo armband, a commercial EMG sensor. Two time-domain features were extracted and fed into a nonlinear autoregressive model with exogenous inputs (NARX). The NARX model was trained with pre-selected parameters using the Levenberg–Marquardt algorithm. Compared with the targets, the regression correlation coefficient (R) of the model outputs was more than 0.982 over all test subjects, and the mean square error was less than 10.02 for a signal range in arbitrary units equal to [0, 255]. The study also demonstrated that the proposed model can be used in daily life movements with good accuracy and generalization abilities.
(This article belongs to the Special Issue Wearable Sensor for Activity Analysis and Context Recognition)
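The abstract specifies two time-domain sEMG features feeding a NARX model trained by Levenberg–Marquardt. As a hedged sketch of the idea, not the authors' implementation, the snippet below uses mean absolute value and waveform length as stand-in features and a small scikit-learn MLP on lagged inputs and outputs; scikit-learn has no Levenberg–Marquardt solver, so the default optimizer is used, and all data are synthetic:

import numpy as np
from sklearn.neural_network import MLPRegressor

def mav(x):
    # mean absolute value of one window of sEMG samples
    return np.mean(np.abs(x))

def wl(x):
    # waveform length: summed absolute first differences
    return np.sum(np.abs(np.diff(x)))

def build_narx_io(features, angles, nu=3, ny=3):
    # stack nu lags of the exogenous features and ny lags of the
    # joint angle into one regression input per time step
    X, y = [], []
    for t in range(max(nu, ny), len(angles)):
        X.append(np.concatenate([features[t - nu:t].ravel(),
                                 angles[t - ny:t]]))
        y.append(angles[t])
    return np.array(X), np.array(y)

rng = np.random.default_rng(0)
windows = rng.standard_normal((500, 8, 50))           # 500 steps of 8-channel sEMG
feats = np.array([[[mav(ch), wl(ch)] for ch in w] for w in windows])
feats = feats.reshape(500, -1)                        # 2 features x 8 channels
angle = 45 * np.sin(np.linspace(0, 20, 500)) + 45     # synthetic joint angle (deg)

X, y = build_narx_io(feats, angle)
model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
model.fit(X[:400], y[:400])
print("test R:", np.corrcoef(model.predict(X[400:]), y[400:])[0, 1])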

13 pages, 4632 KiB  
Article
A Wearable Navigation Device for Visually Impaired People Based on the Real-Time Semantic Visual SLAM System
by Zhuo Chen, Xiaoming Liu, Masaru Kojima, Qiang Huang and Tatsuo Arai
Sensors 2021, 21(4), 1536; https://0-doi-org.brum.beds.ac.uk/10.3390/s21041536 - 23 Feb 2021
Cited by 21 | Viewed by 5774
Abstract
Wearable auxiliary devices for visually impaired people are a highly attractive research topic. Although many proposed wearable navigation devices can assist visually impaired people in obstacle avoidance and navigation, these devices cannot feed back detailed information about the obstacles or help the visually impaired understand the environment. In this paper, we propose a wearable navigation device for the visually impaired that integrates semantic visual SLAM (Simultaneous Localization And Mapping) with a newly launched, powerful mobile computing platform. The system uses a structured-light image-depth (RGB-D) camera as the sensor and the mobile computing platform as the control center. We also focus on combining SLAM with the extraction of semantic information from the environment, so that the computing platform understands the surrounding environment in real time and can feed it back to the visually impaired in the form of a voice broadcast. Finally, we tested the performance of the proposed semantic visual SLAM system on this device. The results indicate that the system can run in real time on a wearable navigation device with sufficient accuracy.
(This article belongs to the Special Issue Wearable Sensor for Activity Analysis and Context Recognition)
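One concrete piece of such a system is turning detections plus aligned depth into voice feedback. The sketch below is a minimal, hypothetical version of that step; the object detector, SLAM back end, and TTS engine are assumed to exist upstream and downstream, and all function and variable names are illustrative:

import numpy as np

def describe_obstacles(detections, depth_m, min_range=0.3, max_range=5.0):
    # detections: list of (label, (x0, y0, x1, y1)) in pixel coordinates
    # depth_m: HxW depth image in metres, 0 where invalid
    # returns short spoken phrases, nearest obstacle first
    phrases = []
    for label, (x0, y0, x1, y1) in detections:
        patch = depth_m[y0:y1, x0:x1]
        valid = patch[(patch > min_range) & (patch < max_range)]
        if valid.size == 0:
            continue
        dist = float(np.median(valid))      # median is robust to depth speckle
        phrases.append((dist, f"{label} {dist:.1f} metres ahead"))
    return [p for _, p in sorted(phrases)]

# hypothetical frame: a chair and a door reported by the detector
depth = np.full((480, 640), 4.0)
depth[200:300, 250:350] = 1.2               # the chair region is closer
dets = [("chair", (250, 200, 350, 300)), ("door", (500, 100, 600, 400))]
for msg in describe_obstacles(dets, depth):
    print(msg)    # would be fed to a TTS engine on the wearable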

24 pages, 9806 KiB  
Article
Personalized Human Activity Recognition Based on Integrated Wearable Sensor and Transfer Learning
by Zhongzheng Fu, Xinrun He, Enkai Wang, Jun Huo, Jian Huang and Dongrui Wu
Sensors 2021, 21(3), 885; https://0-doi-org.brum.beds.ac.uk/10.3390/s21030885 - 28 Jan 2021
Cited by 46 | Viewed by 5550
Abstract
Human activity recognition (HAR) based on wearable devices has attracted increasing attention from researchers as sensor technology has developed in recent years. However, achieving highly accurate personalized HAR while maintaining the model's generalization capability is a major challenge in this field. This paper presents a compact wireless wearable sensor node, which combines an air pressure sensor and an inertial measurement unit (IMU) to provide multi-modal information for HAR model training. To address personalized recognition of user activities, we propose a new transfer learning algorithm: a joint probability domain adaptive method with improved pseudo-labels (IPL-JPDA). This method adds the improved pseudo-label strategy to the JPDA algorithm to avoid cumulative errors due to inaccurate initial pseudo-labels. In order to verify our equipment and method, we used the newly designed sensor node to collect seven daily activities from 7 subjects. Nine different HAR models were trained by traditional machine learning and transfer learning methods. The experimental results show that the multi-modal data improve the accuracy of the HAR system. The IPL-JPDA algorithm proposed in this paper has the best performance among the five HAR models, and the average recognition accuracy across subjects is 93.2%.
(This article belongs to the Special Issue Wearable Sensor for Activity Analysis and Context Recognition)
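The IPL-JPDA algorithm itself is not reproduced here, but its motivating idea — refusing to trust low-confidence initial pseudo-labels so that errors do not accumulate — can be sketched with a generic confidence-gated pseudo-labelling loop (a simplified stand-in, not the paper's method):

import numpy as np
from sklearn.linear_model import LogisticRegression

def pseudo_label_adapt(Xs, ys, Xt, rounds=5, conf=0.9):
    # start from a source-trained classifier, then iteratively add only
    # high-confidence target samples so early mistakes are not locked in
    clf = LogisticRegression(max_iter=1000).fit(Xs, ys)
    for _ in range(rounds):
        proba = clf.predict_proba(Xt)
        pseudo = proba.argmax(axis=1)
        keep = proba.max(axis=1) >= conf       # gate out uncertain labels
        if not keep.any():
            break
        clf = LogisticRegression(max_iter=1000).fit(
            np.vstack([Xs, Xt[keep]]), np.concatenate([ys, pseudo[keep]]))
    return clf

# toy domains: the target is a shifted copy of the source distribution
rng = np.random.default_rng(0)
Xs = rng.standard_normal((200, 5)); ys = (Xs[:, 0] > 0).astype(int)
Xt = rng.standard_normal((200, 5)) + 0.3
adapted = pseudo_label_adapt(Xs, ys, Xt)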

19 pages, 9376 KiB  
Article
Powered Two-Wheeler Riding Profile Clustering for an In-Depth Study of Bend-Taking Practices
by Mohamed Diop, Abderrahmane Boubezoul, Latifa Oukhellou and Stéphane Espié
Sensors 2020, 20(22), 6696; https://0-doi-org.brum.beds.ac.uk/10.3390/s20226696 - 23 Nov 2020
Cited by 5 | Viewed by 3121
Abstract
The understanding of rider/vehicle interaction modalities remains an issue, specifically in the case of bend-taking. This difficulty results both from the lack of adequate instrumentation for this type of study and from the variety of practices within this population of road users. Riders offer numerous explanations of their strategies for controlling their motorcycles when taking bends. The objective of this paper is to develop a data-driven methodology to identify typical riding behaviors in bends by using clustering methods. The real dataset used for the experiments was collected within the VIROLO++ collaborative project to improve the knowledge of actual powered two-wheeler (PTW) riding practices, especially during bend taking, by collecting real data on this riding situation, including data on PTW dynamics (velocity, normal acceleration, and jerk), position on the road (road curvature), and handlebar actions (handlebar steering angle). A detailed analysis of the results is provided for both the Anderson–Darling test and clustering steps. Moreover, the clustering results are compared with the subjective data of the subjects to highlight and contextualize typical riding tendencies. Finally, we perform an in-depth analysis of the bend-taking practices of one subject to highlight the differences between methods of controlling the motorcycle (steering the handlebar vs. leaning) using rider action measurements made by pressure sensors.
(This article belongs to the Special Issue Wearable Sensor for Activity Analysis and Context Recognition)
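As a schematic of the clustering step, the sketch below standardizes per-bend summary features (speed, normal acceleration, jerk, curvature, and steering angle are the kinds of variables named in the abstract), selects a cluster count by silhouette score, and runs k-means on synthetic data; the paper's actual pipeline, including its Anderson–Darling testing step, is more involved:

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# rows = bends; columns = per-bend summaries such as mean speed, peak
# normal acceleration, jerk RMS, curvature, and steering angle
rng = np.random.default_rng(1)
bends = np.vstack([rng.normal(m, 0.5, size=(60, 5)) for m in (0.0, 2.0, 4.0)])

X = StandardScaler().fit_transform(bends)     # put features on comparable scales
best_k = max(range(2, 7), key=lambda k: silhouette_score(
    X, KMeans(n_clusters=k, n_init=10, random_state=1).fit_predict(X)))
labels = KMeans(n_clusters=best_k, n_init=10, random_state=1).fit_predict(X)
print(best_k, "riding styles, cluster sizes:", np.bincount(labels))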

20 pages, 2989 KiB  
Article
A Multimodal Intention Detection Sensor Suite for Shared Autonomy of Upper-Limb Robotic Prostheses
by Marcus Gardner, C. Sebastian Mancero Castillo, Samuel Wilson, Dario Farina, Etienne Burdet, Boo Cheong Khoo, S. Farokh Atashzar and Ravi Vaidyanathan
Sensors 2020, 20(21), 6097; https://0-doi-org.brum.beds.ac.uk/10.3390/s20216097 - 27 Oct 2020
Cited by 15 | Viewed by 4099
Abstract
Neurorobotic augmentation (e.g., robotic assist) is now in regular use to support individuals suffering from impaired motor functions. A major unresolved challenge, however, is the excessive cognitive load imposed by the human–machine interface (HMI). Grasp control remains one of the most challenging HMI tasks, demanding simultaneous, agile, and precise control of multiple degrees of freedom (DoFs) while following a specific timing pattern in the joint and human–robot task spaces. Most commercially available systems use either an indirect mode-switching configuration or a limited sequential control strategy, limiting activation to one DoF at a time. To address this challenge, we introduce a shared autonomy framework centred around a low-cost multi-modal sensor suite fusing: (a) mechanomyography (MMG) to estimate the intended muscle activation, (b) camera-based visual information for integrated autonomous object recognition, and (c) inertial measurement to enhance intention prediction based on the grasping trajectory. The complete system predicts user intent for grasp based on measured dynamical features during natural motions. A total of 84 motion features were extracted from the sensor suite, and tests were conducted on 10 able-bodied participants and 1 amputee grasping common household objects with a robotic hand. Real-time grasp classification accuracy using visual and motion features reached 100%, 82.5%, and 88.9% across all participants for detecting and executing grasping actions for a bottle, lid, and box, respectively. The proposed multimodal sensor suite is a novel approach for predicting different grasp strategies and automating task performance with a commercial upper-limb prosthetic device. The system also shows potential to improve the usability of modern neurorobotic systems owing to its intuitive control design.
(This article belongs to the Special Issue Wearable Sensor for Activity Analysis and Context Recognition)
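A minimal way to picture the fusion of MMG, inertial, and visual information is feature-level concatenation followed by a single classifier. The sketch below does exactly that on synthetic trials; the feature counts, object set, and classifier choice are illustrative assumptions, not the paper's 84-feature pipeline:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 300                                        # synthetic reach-to-grasp trials
mmg = rng.standard_normal((n, 40))             # mechanomyography features
imu = rng.standard_normal((n, 40))             # inertial / trajectory features
obj = rng.integers(0, 3, size=n)               # detected object: bottle/lid/box
grasp = np.where(rng.random(n) < 0.9, obj,     # grasp usually follows the object
                 rng.integers(0, 3, size=n))

X = np.hstack([mmg, imu, np.eye(3)[obj]])      # simple feature-level fusion
clf = RandomForestClassifier(n_estimators=200, random_state=2)
print("CV accuracy:", cross_val_score(clf, X, grasp, cv=5).mean())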

19 pages, 2712 KiB  
Article
Reinforcement Learning Based Fast Self-Recalibrating Decoder for Intracortical Brain–Machine Interface
by Peng Zhang, Lianying Chao, Yuting Chen, Xuan Ma, Weihua Wang, Jiping He, Jian Huang and Qiang Li
Sensors 2020, 20(19), 5528; https://0-doi-org.brum.beds.ac.uk/10.3390/s20195528 - 27 Sep 2020
Cited by 2 | Viewed by 2681
Abstract
Background: Owing to the nonstationarity of neural recordings in intracortical brain–machine interfaces, daily retraining in a supervised manner is always required to maintain the performance of the decoder. This burden can be reduced by using a reinforcement learning (RL)-based self-recalibrating decoder. However, quickly exploring new knowledge while maintaining good performance remains a challenge in RL-based decoders. Methods: To solve this problem, we propose an attention-gated RL-based algorithm combining transfer learning, mini-batches, and a weight updating scheme to accelerate weight updating and avoid over-fitting. The proposed algorithm was tested on intracortical neural data recorded from two monkeys to decode their reaching positions and grasping gestures. Results: The decoding results showed that our proposed algorithm achieved an approximately 20% increase in classification accuracy compared to that obtained by the non-retrained classifier, and even achieved better classification accuracy than the daily retrained classifier. Moreover, compared with a conventional RL method, our algorithm improved the accuracy by approximately 10% and the online weight updating speed by approximately 70 times. Conclusions: This paper proposes a self-recalibrating decoder that achieves good, robust decoding performance with fast weight updating, which may facilitate its application in wearable devices and clinical practice.
(This article belongs to the Special Issue Wearable Sensor for Activity Analysis and Context Recognition)
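The core mechanism of an RL-based self-recalibrating decoder — a reward signal nudging classifier weights after every trial instead of daily supervised retraining — can be illustrated with a minimal REINFORCE-style softmax decoder (a bare-bones stand-in; the paper's attention-gated algorithm with transfer learning and mini-batch updates is considerably richer):

import numpy as np

class RewardAdaptiveDecoder:
    # Softmax decoder whose weights are nudged by a +/-1 reward after
    # every trial: a minimal stand-in for RL-based self-recalibration.
    def __init__(self, n_features, n_classes, lr=0.05, seed=0):
        self.W = np.zeros((n_features, n_classes))
        self.lr = lr
        self.rng = np.random.default_rng(seed)

    def act(self, x):
        logits = x @ self.W
        p = np.exp(logits - logits.max())
        p /= p.sum()
        a = self.rng.choice(len(p), p=p)   # sample to keep exploring
        return a, p

    def update(self, x, a, p, reward):
        # REINFORCE-style step: the gradient of log pi(a) w.r.t. the
        # logits is onehot(a) - p; the reward scales the step up or down
        grad = -p
        grad[a] += 1.0
        self.W += self.lr * reward * np.outer(x, grad)

dec = RewardAdaptiveDecoder(n_features=96, n_classes=4)
x = np.random.default_rng(1).standard_normal(96)
a, p = dec.act(x)
dec.update(x, a, p, reward=1.0 if a == 2 else -1.0)   # hypothetical task reward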

15 pages, 4471 KiB  
Article
An Ultra-Sensitive Modular Hybrid EMG–FMG Sensor with Floating Electrodes
by Ang Ke, Jian Huang, Luyao Chen, Zhaolong Gao and Jiping He
Sensors 2020, 20(17), 4775; https://0-doi-org.brum.beds.ac.uk/10.3390/s20174775 - 24 Aug 2020
Cited by 11 | Viewed by 4202
Abstract
To improve the reliability and safety of myoelectric prosthetic control, many researchers tend to use multi-modal signals. The combination of electromyography (EMG) and forcemyography (FMG) has proven to be a practical choice. However, an integrative and compact design for this hybrid sensor has been lacking. This paper presents a modular EMG–FMG sensor whose sensing module uses a novel design consisting of floating electrodes, which act as the sensing probes of both the EMG and the FMG and improve the integration of the sensor. The whole system contains one data acquisition unit and eight identical sensor modules. Experiments were conducted to evaluate the performance of the sensor system. The results show that the EMG and FMG signals have good consistency under standard conditions; the FMG signal shows better and more robust performance than the EMG. The average accuracy is 99.07% when using both the EMG and FMG signals for recognition of six hand gestures under standard conditions. Even with two layers of gauze isolating the sensor from the skin, the average accuracy reaches 90.9% when using only the EMG signal; if both the EMG and FMG signals are used for classification, the average accuracy is 99.42%.
(This article belongs to the Special Issue Wearable Sensor for Activity Analysis and Context Recognition)
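To make the multi-modal gesture classification concrete, the sketch below fuses per-channel EMG root-mean-square values with mean FMG levels into one feature vector per trial and trains a linear discriminant classifier on synthetic data; the eight-channel layout matches the paper's module count, but the features and classifier are illustrative assumptions:

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fused_features(emg, fmg):
    # emg, fmg: (trials, channels, samples); returns [EMG RMS | FMG mean]
    emg_rms = np.sqrt(np.mean(emg ** 2, axis=2))
    fmg_avg = np.mean(fmg, axis=2)
    return np.hstack([emg_rms, fmg_avg])

rng = np.random.default_rng(3)
gestures = rng.integers(0, 6, size=240)               # six hand gestures
emg = rng.standard_normal((240, 8, 200)) * (1 + gestures)[:, None, None]
fmg = rng.standard_normal((240, 8, 200)) + gestures[:, None, None]

X = fused_features(emg, fmg)
clf = LinearDiscriminantAnalysis().fit(X[:180], gestures[:180])
print("held-out accuracy:", clf.score(X[180:], gestures[180:]))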

29 pages, 1754 KiB  
Article
Hardware/Software Co-Design of Fractal Features Based Fall Detection System
by Ahsen Tahir, Gordon Morison, Dawn A. Skelton and Ryan M. Gibson
Sensors 2020, 20(8), 2322; https://0-doi-org.brum.beds.ac.uk/10.3390/s20082322 - 18 Apr 2020
Cited by 5 | Viewed by 3085
Abstract
Falls are a leading cause of death in older adults and result in high levels of mortality, morbidity, and immobility. Fall Detection Systems (FDS) are imperative for timely medical aid and have been known to reduce the death rate by 80%. We propose a novel wearable sensor FDS that exploits the fractal dynamics of fall accelerometer signals. Fractal dynamics can be used as an irregularity measure of signals, and our work shows that it is a key discriminant for the classification of falls from other activities of life. We design, implement, and evaluate a hardware feature accelerator for the computation of fractal features through a multi-level wavelet transform on a reconfigurable embedded system-on-chip (a Zynq device) for evaluating wearable accelerometer sensors. The proposed FDS utilises a hardware/software co-design approach, with a hardware accelerator for the fractal features and a software implementation of Linear Discriminant Analysis on an embedded ARM core, for high accuracy and energy efficiency. The proposed system achieves 99.38% fall detection accuracy, a 7.3× speed-up, and a 6.53× improvement in power consumption compared with software-only execution, with an overall performance-per-Watt advantage of 47.6×, while consuming a low 28.67% of reconfigurable resources.
(This article belongs to the Special Issue Wearable Sensor for Activity Analysis and Context Recognition)
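The software side of the pipeline — wavelet-derived irregularity features feeding Linear Discriminant Analysis — can be sketched as follows using PyWavelets; the log-energy slope across detail scales serves here as a simple fractal-style irregularity measure (an illustrative stand-in for the paper's exact fractal features, run on synthetic signals):

import numpy as np
import pywt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def wavelet_log_energies(sig, wavelet="db4", level=4):
    # log sub-band energies from a multi-level DWT; the slope of
    # log-energy across scales acts as a fractal-like irregularity measure
    coeffs = pywt.wavedec(sig, wavelet, level=level)
    energies = np.array([np.log(np.sum(c ** 2) + 1e-12) for c in coeffs[1:]])
    slope = np.polyfit(np.arange(level), energies, 1)[0]
    return np.append(energies, slope)

rng = np.random.default_rng(4)
falls = [np.cumsum(rng.standard_normal(256)) * 3 for _ in range(80)]   # impulsive
adls = [np.sin(np.linspace(0, 8, 256)) + 0.1 * rng.standard_normal(256)
        for _ in range(80)]                                            # smooth ADL
X = np.array([wavelet_log_energies(s) for s in falls + adls])
y = np.array([1] * 80 + [0] * 80)

clf = LinearDiscriminantAnalysis().fit(X[::2], y[::2])
print("accuracy on held-out half:", clf.score(X[1::2], y[1::2]))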

Review


21 pages, 3330 KiB  
Review
Recent Advances in Touch Sensors for Flexible Wearable Devices
by Abdul Hakeem Anwer, Nishat Khan, Mohd Zahid Ansari, Sang-Soo Baek, Hoon Yi, Soeun Kim, Seung Man Noh and Changyoon Jeong
Sensors 2022, 22(12), 4460; https://0-doi-org.brum.beds.ac.uk/10.3390/s22124460 - 13 Jun 2022
Cited by 43 | Viewed by 6391
Abstract
Many modern user interfaces are based on touch, and such sensors are widely used in displays, Internet of Things (IoT) projects, and robotics. From lamps to smartphone touchscreens, these user interfaces can be found in an array of applications. However, traditional touch sensors are bulky, complicated, inflexible, and difficult-to-wear devices made of stiff materials. Touch sensing is gaining further importance as current IoT technology trends toward devices used flexibly and comfortably on the skin or clothing, affecting many aspects of human life. This review presents an updated overview of the recent advances in this area. Exciting advances in various aspects of touch sensing are discussed, with particular focus on the materials, manufacturing, enhancements, and applications of flexible wearable sensors. The review further elaborates on the theoretical principles of various types of touch sensors, including resistive, piezoelectric, and capacitive sensors. The traditional and novel hybrid materials and manufacturing technologies of flexible sensors are considered. The review highlights the multidisciplinary applications of flexible touch sensors, such as e-textiles, e-skins, e-control, and e-healthcare. Finally, the obstacles and prospects for future research that are critical to the broader development and adoption of the technology are surveyed.
(This article belongs to the Special Issue Wearable Sensor for Activity Analysis and Context Recognition)
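Of the sensing principles the review covers, the capacitive one is the easiest to state numerically: in a parallel-plate model, pressing a flexible dielectric thinner raises the capacitance, and the readout electronics detect the relative change. A small worked example, with illustrative geometry and permittivity values:

# Parallel-plate model of a flexible capacitive touch/pressure sensor:
# C = eps0 * eps_r * A / d, so compressing the dielectric (smaller d)
# raises C; the readout detects that relative change.
EPS0 = 8.854e-12            # F/m, vacuum permittivity

def capacitance(eps_r, area_m2, gap_m):
    return EPS0 * eps_r * area_m2 / gap_m

c_rest = capacitance(eps_r=3.0, area_m2=1e-4, gap_m=100e-6)    # 1 cm^2 pad
c_press = capacitance(eps_r=3.0, area_m2=1e-4, gap_m=80e-6)    # 20% compression
print(f"C rest  = {c_rest * 1e12:.1f} pF")
print(f"C press = {c_press * 1e12:.1f} pF (+{(c_press / c_rest - 1) * 100:.0f}%)")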
