Sensors Technology and Machine Learning for Human Activity Recognition

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensors and Robotics".

Deadline for manuscript submissions: closed (31 October 2022) | Viewed by 13372

Special Issue Editors


Guest Editor
Université Côte d’Azur, CNRS, LEAT, 06903 Sophia Antipolis, France
Interests: edge computing; AI for embedded systems; wireless sensor networks for the IoT; power efficiency; low-power management policies

Guest Editor
Université Côte d’Azur, CNRS, LEAT, 06903 Sophia Antipolis, France
Interests: embedded AI; artificial neural networks; bio-inspired AI; neuromorphic engineering; power efficiency; SoC design

Special Issue Information

Dear Colleagues,

This Special Issue of the Sensors journal, entitled “Sensors Technology and Machine Learning for Human Activity Recognition”, will focus on all aspects of research and development in this area. Human Activity Recognition (HAR) has gained significant attention over the last few decades due to its wide range of applications and its promise in domains such as health, sports, and entertainment. Moreover, the advent of the Internet of Things and wearable devices has led to a proliferation of use cases producing large datasets in various domains: large amounts of data can indeed be collected for the recognition of human activities using vision sensors, wearable devices, and smartphone sensors. Recent improvements in the sensor industry have also led to smaller, less expensive, and less power-hungry sensors, enabling new HAR applications and devices. In addition, HAR can be performed in or close to the sensor (near-sensor processing), bringing low power consumption, low latency, and privacy at the same time. This new trend of bringing HAR to the edge can rely on different classification and machine learning approaches, with the aim of predicting which activity a user is performing among a set of known activities. Therefore, this Special Issue also intends to promote emerging machine learning techniques, either supervised or unsupervised, for human activity recognition.

Despite continuous efforts, activity recognition remains a difficult task and faces many challenges, especially in an embedded context. This Special Issue aims to collect the most recent advances in the area of human activity recognition. We invite the submission of original and unpublished work addressing research topics of interest for HAR, including but not limited to the following:

  • Vision-based, wearable-device, or smartphone sensors for HAR;
  • New sensor technologies;
  • IoT-based sensors for HAR;
  • Embedded deep learning;
  • Near-sensor processing (HAR at the edge);
  • Supervised and unsupervised learning methods for HAR;
  • HAR on microcontrollers;
  • HAR on wearable devices;
  • Sensor fusion for HAR (multi-modality);
  • Online learning for HAR.

All submitted papers will be peer-reviewed and selected based on both their quality and relevance. The guest editors reserve the right to reject papers they deem out of the scope of this Special Issue. Only original, unpublished contributions will be considered for the issue. Papers should be formatted according to the journal guidelines.

Human Activity Recognition (HAR) has gained significant attention due to its wide range of applications and its promise in domains such as health, sports, and entertainment. In this Special Issue of the Sensors journal, entitled “Sensors Technology and Machine Learning for Human Activity Recognition”, we aim to collect the most recent advances in the area of human activity recognition, leveraging both sensor technology and machine learning approaches in the context of embedded systems.

Prof. Alain Pegatoquet
Prof. Benoît Miramond
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • health monitoring
  • IoT-based HAR
  • vision-based HAR
  • wearable device or smartphone sensors
  • embedded machine learning
  • supervised learning
  • unsupervised learning
  • online learning

Published Papers (5 papers)


Research

22 pages, 3223 KiB  
Article
Gait Trajectory Prediction on an Embedded Microcontroller Using Deep Learning
by Mohamed Karakish, Moustafa A. Fouz and Ahmed ELsawaf
Sensors 2022, 22(21), 8441; https://0-doi-org.brum.beds.ac.uk/10.3390/s22218441 - 03 Nov 2022
Cited by 5 | Viewed by 2593
Abstract
Achieving a normal gait trajectory for an amputee’s active prosthesis is challenging due to its kinematic complexity. Accordingly, lower limb gait trajectory kinematics and gait phase segmentation are essential parameters in controlling an active prosthesis. Recently, the most practiced algorithm in gait trajectory generation is the neural network. Deploying such a complex Artificial Neural Network (ANN) algorithm on an embedded system requires performing the calculations on an external computational device; however, this approach lacks mobility and reliability. In this paper, simpler and more reliable ANNs are investigated for deployment on a single low-cost Microcontroller (MC), hence providing system mobility. Two neural network configurations were studied: Multi-Layered Perceptron (MLP) and Convolutional Neural Network (CNN); the models were trained on shank and foot IMU data. The data were collected from four subjects and tested on a fifth to predict the trajectory 200 ms ahead. The prediction was made for two cases: with and without providing the current phase of the gait. Then, the models were deployed on a low-cost microcontroller (ESP32). It was found that with fewer data (excluding the current gait phase), CNN achieved a better correlation coefficient of 0.973 when compared to 0.945 for MLP; when including the current phase, both network configurations achieved better correlation coefficients of nearly 0.98. However, when comparing the execution time required for the prediction on the intended MC, MLP was much faster than CNN, with execution times of 2.4 ms and 142 ms, respectively. In summary, it was found that when training data are scarce, CNN is more efficient within the acceptable execution time, while MLP achieves relative accuracy with low execution time given enough data.
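The windowed prediction setup described in this abstract can be sketched in a few lines: slice the IMU stream into overlapping input windows paired with the sample a fixed horizon ahead. The sampling rate, window length, and horizon below are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Illustrative parameters (assumptions, not the paper's actual values).
FS_HZ = 100    # assumed IMU sampling rate
WINDOW = 50    # 0.5 s of history per input window
HORIZON = 20   # 200 ms ahead at 100 Hz

def make_windows(signal: np.ndarray, window: int = WINDOW, horizon: int = HORIZON):
    """Slice a 1-D signal into overlapping input windows and future targets,
    the shape an MLP or 1-D CNN would consume for trajectory prediction."""
    X, y = [], []
    for start in range(len(signal) - window - horizon + 1):
        X.append(signal[start:start + window])          # past samples
        y.append(signal[start + window + horizon - 1])  # value 200 ms ahead
    return np.array(X), np.array(y)

# Synthetic gait-like signal: a 1 Hz sine standing in for a joint angle.
t = np.arange(0, 10, 1 / FS_HZ)
angle = np.sin(2 * np.pi * 1.0 * t)
X, y = make_windows(angle)
```

Each row of `X` is one training example and `y` holds the corresponding future value; a real pipeline would use multi-axis shank and foot IMU channels rather than a single synthetic signal.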

15 pages, 1924 KiB  
Article
Investigating the Impact of Information Sharing in Human Activity Recognition
by Muhammad Awais Shafique and Sergi Saurí Marchán
Sensors 2022, 22(6), 2280; https://0-doi-org.brum.beds.ac.uk/10.3390/s22062280 - 16 Mar 2022
Cited by 2 | Viewed by 1829
Abstract
The accuracy of Human Activity Recognition is noticeably affected by the orientation of smartphones during data collection. This study utilized a public domain dataset that was specifically collected to include variations in smartphone positioning. Although the dataset contained records from various sensors, only accelerometer data were used in this study; thus, the developed methodology would preserve smartphone battery and incur low computation costs. A total of 175 different features were extracted from the pre-processed data. Data stratification was conducted in three ways to investigate the effect of information sharing between the training and testing datasets. After data balancing using only the training dataset, ten-fold and LOSO cross-validation were performed using several algorithms, including Support Vector Machine, XGBoost, Random Forest, Naïve Bayes, KNN, and Neural Network. A very simple post-processing algorithm was developed to improve the accuracy. The results reveal that XGBoost takes the least computation time while providing high prediction accuracy. Although Neural Network outperforms XGBoost, XGBoost demonstrates better accuracy with post-processing. The final detection accuracy ranges from 99.8% to 77.6% depending on the level of information sharing. This strongly suggests that when reporting accuracy values, the associated information sharing levels should be provided as well in order to allow the results to be interpreted in the correct context.
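The paper mentions a very simple post-processing step that improves accuracy but does not specify it here; one common choice for activity streams, shown below purely as a hedged illustration, is a majority-vote smoother that replaces each predicted label with the most frequent label in a small centered window.

```python
from collections import Counter

def smooth_labels(labels, k=5):
    """Majority-vote smoothing over a centered window of k predictions.

    Activities change slowly relative to the prediction rate, so isolated
    misclassifications can be overwritten by their neighbours. This is an
    assumed example, not the post-processing algorithm from the paper.
    """
    half = k // 2
    out = []
    for i in range(len(labels)):
        window = labels[max(0, i - half): i + half + 1]  # clip at the edges
        out.append(Counter(window).most_common(1)[0][0])
    return out

# A lone "sit" in a run of "walk" predictions gets voted away.
smoothed = smooth_labels(["walk", "walk", "sit", "walk", "walk"])
```

Because the smoother only reorders existing predictions, it adds negligible computation on top of the classifier.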

14 pages, 1867 KiB  
Article
Context-Aware Human Activity Recognition in Industrial Processes
by Friedrich Niemann, Stefan Lüdtke, Christian Bartelt and Michael ten Hompel
Sensors 2022, 22(1), 134; https://0-doi-org.brum.beds.ac.uk/10.3390/s22010134 - 25 Dec 2021
Cited by 12 | Viewed by 3169
Abstract
The automatic, sensor-based assessment of human activities is highly relevant for production and logistics, to optimise the economics and ergonomics of these processes. One challenge for accurate activity recognition in these domains is the context-dependence of activities: similar movements can correspond to different activities, depending on, e.g., the object handled or the location of the subject. In this paper, we propose to explicitly make use of such context information in an activity recognition model. Our first contribution is a publicly available, semantically annotated motion capturing dataset of subjects performing order picking and packaging activities, where context information is recorded explicitly. The second contribution is an activity recognition model that integrates movement data and context information. We empirically show that by using context information, activity recognition performance increases substantially. Additionally, we analyse which pieces of context information are most relevant for activity recognition. The insights provided by this paper can help others to design appropriate sensor set-ups in real warehouses for time management.
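The simplest way to integrate context information such as the handled object into a recognition model is early fusion: encode the context categorically and append it to the movement feature vector. The sketch below is a minimal illustration under assumed context categories, not the model architecture from the paper.

```python
import numpy as np

# Hypothetical context categories (e.g., which object is being handled).
CONTEXTS = ["none", "box", "cart"]

def fuse(movement_feats: np.ndarray, context: str, contexts=CONTEXTS) -> np.ndarray:
    """Early fusion: append a one-hot context vector to the movement features,
    so a downstream classifier can condition on both."""
    onehot = np.zeros(len(contexts))
    onehot[contexts.index(context)] = 1.0
    return np.concatenate([movement_feats, onehot])

# Two movement features plus the "box" context yields a 5-dimensional input.
fused = fuse(np.array([0.1, 0.2]), "box")
```

With this encoding, the same movement features can map to different activities depending on the active context, which is exactly the ambiguity the paper's context-aware model is designed to resolve.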

21 pages, 1051 KiB  
Article
Human Activity Recognition: A Comparative Study to Assess the Contribution Level of Accelerometer, ECG, and PPG Signals
by Mahsa Sadat Afzali Arani, Diego Elias Costa and Emad Shihab
Sensors 2021, 21(21), 6997; https://0-doi-org.brum.beds.ac.uk/10.3390/s21216997 - 21 Oct 2021
Cited by 16 | Viewed by 2474
Abstract
Inertial sensors are widely used in the field of human activity recognition (HAR), since this source of information is the most informative time series among non-visual datasets. HAR researchers are actively exploring other approaches and different sources of signals to improve the performance of HAR systems. In this study, we investigate the impact of combining bio-signals with a dataset acquired from inertial sensors on recognizing human daily activities. To achieve this aim, we used the PPG-DaLiA dataset, consisting of 3D-accelerometer (3D-ACC), electrocardiogram (ECG), and photoplethysmogram (PPG) signals acquired from 15 individuals while performing daily activities. We extracted hand-crafted time- and frequency-domain features; we then applied a correlation-based feature selection approach to reduce the feature-set dimensionality. After introducing early fusion scenarios, we trained and tested random forest models with subject-dependent and subject-independent setups. Our results indicate that combining features extracted from the 3D-ACC signal with the ECG signal improves the classifier’s F1-scores by 2.72% and 3.00% (from 94.07% to 96.80%, and 83.16% to 86.17%) for subject-dependent and subject-independent approaches, respectively.
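Correlation-based feature selection, as mentioned in the abstract, can be implemented in several ways; a common greedy variant, sketched below as an assumed example rather than the paper's exact procedure, keeps a feature only if its absolute Pearson correlation with every already-kept feature stays below a threshold.

```python
import numpy as np

def drop_correlated(features: np.ndarray, threshold: float = 0.9):
    """Greedy correlation filter: return indices of columns whose absolute
    pairwise correlation with all previously kept columns is below threshold."""
    corr = np.abs(np.corrcoef(features, rowvar=False))  # feature-by-feature matrix
    kept = []
    for j in range(features.shape[1]):
        if all(corr[j, k] < threshold for k in kept):
            kept.append(j)
    return kept

# Demo: column 1 is an affine copy of column 0 and should be dropped,
# while the independent column 2 survives.
rng = np.random.default_rng(0)
base = rng.normal(size=200)
X = np.column_stack([base,
                     2.0 * base + 0.01 * rng.normal(size=200),
                     rng.normal(size=200)])
kept = drop_correlated(X)
```

Reducing redundant features this way shrinks both training time and the memory footprint of the resulting random forest, which matters in the embedded settings this Special Issue targets.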

18 pages, 894 KiB  
Article
Using Social Signals to Predict Shoplifting: A Transparent Approach to a Sensitive Activity Analysis Problem
by Shane Reid, Sonya Coleman, Philip Vance, Dermot Kerr and Siobhan O’Neill
Sensors 2021, 21(20), 6812; https://0-doi-org.brum.beds.ac.uk/10.3390/s21206812 - 13 Oct 2021
Viewed by 2290
Abstract
Retail shoplifting is one of the most prevalent forms of theft and accounted for over one billion GBP in losses for UK retailers in 2018. An automated approach to detecting behaviours associated with shoplifting using surveillance footage could help reduce these losses. Until recently, most state-of-the-art vision-based approaches to this problem have relied heavily on the use of black-box deep learning models. While these models have been shown to achieve very high accuracy, the lack of understanding of how decisions are made raises concerns about potential bias in the models. This limits the ability of retailers to implement these solutions, as several high-profile legal cases have recently ruled that evidence taken from these black-box methods is inadmissible in court. There is an urgent need to develop models which can achieve high accuracy while providing the necessary transparency. One way to alleviate this problem is through the use of social signal processing to add a layer of understanding in the development of transparent models for this task. To this end, we present a social signal processing model for the problem of shoplifting prediction which has been trained and validated using a novel dataset of manually annotated shoplifting videos. The resulting model provides a high degree of understanding and achieves accuracy comparable with current state-of-the-art black-box methods.
