Sensor-Based Human Activity Monitoring

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: closed (28 February 2022) | Viewed by 18301

Special Issue Editors


Guest Editor
Department of Computer and Information Sciences, Northumbria University, Newcastle upon Tyne, UK
Interests: computer graphics; computer vision; robotics; motion analysis; machine learning
School of Computing, University of Leeds, Leeds, UK
Interests: computer graphics; computer animation; computer vision; machine learning and robotics

Special Issue Information

Dear Colleagues,

With the advancement of sensing technology, sensor-based monitoring systems are becoming more affordable and easier to deploy widely. However, given the huge volume of sensor data recorded every day, analysing these data automatically has become an important research topic. Existing approaches focus primarily on the physical/physiological aspects of body movements and gait. In contrast, analysing and interpreting the emotional states of subjects provides important additional context for understanding the intentions driving human activities.

In this context, this Special Issue aims to connect researchers in the fields of computer vision, machine learning, and affective computing working on human activity monitoring applications such as surveillance and healthcare. It will present state-of-the-art algorithms and methods that advance research and applications in sensor-based human activity monitoring.

We will accept full-length research articles and reviews focused on this research topic. Topics of interest include, but are not limited to, the following:

  • Vision-based (RGB and/or depth) human activity understanding
  • Vision-based (RGB and/or depth) emotion recognition
  • Wearable sensors for human activity understanding
  • Wearable sensors for emotion recognition
  • Health monitoring systems
  • Deep learning for activity understanding
  • Multimodal human activity recognition

Prof. Edmond S. L. Ho
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • human activity understanding
  • action recognition
  • emotion recognition
  • deep learning
  • wearable sensors
  • health monitoring
  • healthcare
  • surveillance

Published Papers (5 papers)


Research

16 pages, 1425 KiB  
Article
Computer-Aided Depth Video Stream Masking Framework for Human Body Segmentation in Depth Sensor Images
by Karolis Ryselis, Tomas Blažauskas, Robertas Damaševičius and Rytis Maskeliūnas
Sensors 2022, 22(9), 3531; https://0-doi-org.brum.beds.ac.uk/10.3390/s22093531 - 06 May 2022
Cited by 10 | Viewed by 1639
Abstract
The identification of human activities from videos is important for many applications. For such a task, three-dimensional (3D) depth images or image sequences (videos) can be used, which represent the positioning information of the objects in a 3D scene obtained from depth sensors. This paper presents a framework to create foreground–background masks from depth images for human body segmentation. The framework can be used to speed up the manual depth image annotation process with no semantics known beforehand and can apply segmentation using a performant algorithm while the user only adjusts the parameters, or corrects the automatic segmentation results, or gives it hints by drawing a boundary of the desired object. The approach has been tested using two different datasets with a human in a real-world closed environment. The solution has provided promising results in terms of reducing the manual segmentation time from the perspective of the processing time as well as the human input time.
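The depth-thresholding idea behind such foreground–background masking can be illustrated with a toy example (this is an illustrative sketch, not the authors' framework; the function name, thresholds, and synthetic frame are all invented):

```python
import numpy as np

def depth_foreground_mask(depth, near_mm=500, far_mm=3000):
    """Keep pixels whose depth falls inside a plausible subject range.

    depth: 2-D array of per-pixel distances in millimetres (0 = no reading).
    Returns a boolean mask that is True for foreground candidates.
    """
    valid = depth > 0  # depth sensors typically report 0 for dropouts
    return valid & (depth >= near_mm) & (depth <= far_mm)

# Synthetic frame: a "person" blob at ~1.5 m against a 4 m background.
frame = np.full((240, 320), 4000, dtype=np.uint16)
frame[80:200, 120:220] = 1500
mask = depth_foreground_mask(frame)
print(mask.sum())  # prints 12000, the number of foreground pixels
```

In practice such a raw mask would then be refined, e.g. with user-drawn boundary hints or parameter adjustments, as the paper describes.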
(This article belongs to the Special Issue Sensor-Based Human Activity Monitoring)

42 pages, 3992 KiB  
Article
Human Activity Recognition: A Dynamic Inductive Bias Selection Perspective
by Massinissa Hamidi and Aomar Osmani
Sensors 2021, 21(21), 7278; https://0-doi-org.brum.beds.ac.uk/10.3390/s21217278 - 01 Nov 2021
Cited by 7 | Viewed by 2212
Abstract
In this article, we study activity recognition in the context of sensor-rich environments. In these environments, many different constraints arise at various levels during the data generation process, such as the intrinsic characteristics of the sensing devices, their energy and computational constraints, and their collective (collaborative) dimension. These constraints have a fundamental impact on the final activity recognition models as the quality of the data, its availability, and its reliability, among other things, are not ensured during model deployment in real-world configurations. Current approaches for activity recognition rely on the activity recognition chain which defines several steps that the sensed data undergo: This is an inductive process that involves exploring a hypothesis space to find a theory able to explain the observations. For activity recognition to be effective and robust, this inductive process must consider the constraints at all levels and model them explicitly. Whether it is a bias related to sensor measurement, transmission protocol, sensor deployment topology, heterogeneity, dynamicity, or stochastic effects, it is essential to understand their substantial impact on the quality of the data and ultimately on activity recognition models. This study highlights the need to exhibit the different types of biases arising in real situations so that machine learning models, e.g., can adapt to the dynamicity of these environments, resist sensor failures, and follow the evolution of the sensors’ topology. We propose a metamodeling approach in which these biases are specified as hyperparameters that can control the structure of the activity recognition models. Via these hyperparameters, it becomes easier to optimize the inductive processes, reason about them, and incorporate additional knowledge. It also provides a principled strategy to adapt the models to the evolutions of the environment.
We illustrate our approach on the SHL dataset, which features motion sensor data for a set of human activities collected in real conditions. The obtained results make a case for the proposed metamodeling approach; noticeably, the robustness gains achieved when the deployed models are confronted with the evolution of the initial sensing configurations. The trade-offs exhibited and the broader implications of the proposed approach are discussed with alternative techniques to encode and incorporate knowledge into activity recognition models.
(This article belongs to the Special Issue Sensor-Based Human Activity Monitoring)

15 pages, 1692 KiB  
Article
Monitoring the Cortical Activity of Children and Adults during Cognitive Task Completion
by Marina V. Khramova, Alexander K. Kuc, Vladimir A. Maksimenko, Nikita S. Frolov, Vadim V. Grubov, Semen A. Kurkin, Alexander N. Pisarchik, Natalia N. Shusharina, Alexander A. Fedorov and Alexander E. Hramov
Sensors 2021, 21(18), 6021; https://0-doi-org.brum.beds.ac.uk/10.3390/s21186021 - 08 Sep 2021
Cited by 14 | Viewed by 2395
Abstract
In this paper, we used an EEG system to monitor and analyze the cortical activity of children and adults at a sensor level during cognitive tasks in the form of a Schulte table. This complex cognitive task simultaneously involves several cognitive processes and systems: visual search, working memory, and mental arithmetic. We revealed that adults found numbers on average two times faster than children in the beginning. However, this difference diminished at the end of table completion to 1.8 times. In children, the EEG analysis revealed high parietal alpha-band power at the end of the task. This indicates the shift from procedural strategy to less demanding fact-retrieval. In adults, the frontal beta-band power increased at the end of the task. It reflects enhanced reliance on the top–down mechanisms, cognitive control, or attentional modulation rather than a change in arithmetic strategy. Finally, the alpha-band power of adults exceeded that of the children in the left hemisphere, providing potential evidence for the fact-retrieval strategy. Since the completion of the Schulte table involves a whole set of elementary cognitive functions, the obtained results are essential for developing passive brain–computer interfaces for monitoring and adjusting a human state in the process of learning and solving cognitive tasks of various types.
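For readers unfamiliar with the band-power measures discussed in this abstract, alpha- and beta-band power are commonly estimated from a power spectral density; a generic sketch (not the authors' pipeline; the sampling rate and synthetic signal below are invented):

```python
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, lo, hi):
    """Average power spectral density of `signal` within [lo, hi] Hz."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].mean()

fs = 250  # sampling rate in Hz (illustrative)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
# Synthetic "EEG": a dominant 10 Hz (alpha-range) oscillation plus noise.
eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)

alpha = band_power(eeg, fs, 8, 13)   # alpha band, 8-13 Hz
beta = band_power(eeg, fs, 14, 30)   # beta band, 14-30 Hz
print(alpha > beta)  # the 10 Hz tone dominates, so alpha power is larger
```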
(This article belongs to the Special Issue Sensor-Based Human Activity Monitoring)

23 pages, 7897 KiB  
Article
Real-Time Action Recognition System for Elderly People Using Stereo Depth Camera
by Thi Thi Zin, Ye Htet, Yuya Akagi, Hiroki Tamura, Kazuhiro Kondo, Sanae Araki and Etsuo Chosa
Sensors 2021, 21(17), 5895; https://0-doi-org.brum.beds.ac.uk/10.3390/s21175895 - 01 Sep 2021
Cited by 27 | Viewed by 5989
Abstract
Smart technologies are necessary for ambient assisted living (AAL) to help family members, caregivers, and health-care professionals in providing care for elderly people independently. Among these technologies, the current work is proposed as a computer vision-based solution that can monitor the elderly by recognizing actions using a stereo depth camera. In this work, we introduce a system that fuses together feature extraction methods from previous works in a novel combination of action recognition. Using depth frame sequences provided by the depth camera, the system localizes people by extracting different regions of interest (ROI) from UV-disparity maps. As for feature vectors, the spatial-temporal features of two action representation maps (depth motion appearance (DMA) and depth motion history (DMH) with a histogram of oriented gradients (HOG) descriptor) are used in combination with the distance-based features, and fused together with the automatic rounding method for action recognition of continuous long frame sequences. The experimental results are tested using random frame sequences from a dataset that was collected at an elder care center, demonstrating that the proposed system can detect various actions in real-time with reasonable recognition rates, regardless of the length of the image sequences.
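A depth motion history map of the kind mentioned in this abstract can be illustrated with a toy update rule (the parameter names and decay scheme below are invented for illustration; the paper's actual DMA/DMH construction and HOG descriptors are more involved):

```python
import numpy as np

def depth_motion_history(frames, tau=10, delta=50):
    """Accumulate a simple depth motion history map.

    Pixels whose depth changed by more than `delta` between consecutive
    frames are set to `tau`; unchanged pixels decay by 1 per frame, so
    higher values mark more recent motion.
    """
    dmh = np.zeros(frames[0].shape, dtype=np.float32)
    for prev, cur in zip(frames, frames[1:]):
        moved = np.abs(cur.astype(np.int32) - prev.astype(np.int32)) > delta
        dmh = np.where(moved, tau, np.maximum(dmh - 1, 0))
    return dmh

# Toy sequence: a block of pixels "moves" in the last frame only.
frames = [np.zeros((8, 8), dtype=np.uint16) for _ in range(5)]
frames[-1][2:4, 2:4] = 1000
dmh = depth_motion_history(frames)
print(dmh[2, 2], dmh[0, 0])  # recent motion (10.0) vs. none (0.0)
```

A descriptor such as HOG would then be computed over maps like this one to form the feature vectors used for recognition.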
(This article belongs to the Special Issue Sensor-Based Human Activity Monitoring)

19 pages, 43772 KiB  
Article
Emotion Recognition Based on Skin Potential Signals with a Portable Wireless Device
by Shuhao Chen, Ke Jiang, Haoji Hu, Haoze Kuang, Jianyi Yang, Jikui Luo, Xinhua Chen and Yubo Li
Sensors 2021, 21(3), 1018; https://0-doi-org.brum.beds.ac.uk/10.3390/s21031018 - 02 Feb 2021
Cited by 12 | Viewed by 4063
Abstract
Emotion recognition is of great importance for artificial intelligence, robots, and medicine, etc. Although many techniques have been developed for emotion recognition, with certain successes, they rely heavily on complicated and expensive equipment. Skin potential (SP) has been recognized to be correlated with human emotions for a long time, but has been largely ignored due to the lack of systematic research. In this paper, we propose a single SP-signal-based method for emotion recognition. Firstly, we developed a portable wireless device to measure the SP signal between the middle finger and left wrist. Then, a video induction experiment was designed to stimulate four kinds of typical emotion (happiness, sadness, anger, fear) in 26 subjects. Based on the device and video induction, we obtained a dataset consisting of 397 emotion samples. We extracted 29 features from each of the emotion samples and used eight well-established algorithms to classify the four emotions based on these features. Experimental results show that the gradient-boosting decision tree (GBDT), logistic regression (LR) and random forest (RF) algorithms achieved the highest accuracy of 75%. The obtained accuracy is similar to, or even better than, that of other methods using multiple physiological signals. Our research demonstrates the feasibility of the SP signal’s integration into existing physiological signals for emotion recognition.
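The final classification step described here, comparing standard classifiers over a fixed feature table, can be sketched with scikit-learn on synthetic stand-in data (the real 29 SP features and 397 labelled samples are not reproduced here, so the data below are random placeholders and the accuracies are not the paper's):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the SP feature table: 397 samples x 29 features,
# four emotion classes (happiness, sadness, anger, fear).
rng = np.random.default_rng(0)
X = rng.standard_normal((397, 29))
y = rng.integers(0, 4, size=397)
X[:, 0] += y  # make the classes weakly separable so accuracy beats chance

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for model in (GradientBoostingClassifier(),
              LogisticRegression(max_iter=1000),
              RandomForestClassifier()):
    acc = model.fit(X_tr, y_tr).score(X_te, y_te)
    print(type(model).__name__, round(acc, 2))
```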
(This article belongs to the Special Issue Sensor-Based Human Activity Monitoring)
