
Human Activity Recognition Using Deep Learning

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Biomedical Sensors".

Deadline for manuscript submissions: closed (30 November 2021) | Viewed by 19080

Special Issue Editors


Guest Editor
Guest Editor
INESC TEC - Institute for Systems and Computer Engineering, Technology and Science, Porto, Portugal
Interests: deep learning; biometrics; computer vision; digital forensics and document analysis

Guest Editor
INESC TEC - Institute for Systems and Computer Engineering, Technology and Science, Porto, Portugal
Interests: machine learning; computer vision; biometrics; explainable AI; cryptography; mathematics education

Special Issue Information

Dear Colleagues,

With the rise of technologies such as the Internet of Things (IoT), a growing amount of data from low-cost, low-power sensors and accelerometers is becoming available to an ever-growing number of people around the world. As a result, human activity recognition (HAR) has become one of the most active research topics, with numerous applications. Relevant HAR topics range from medical diagnosis and driving activities to criminal recognition and daily activity recognition.

Despite the numerous deep learning techniques proposed in the literature, a number of open challenges related to HAR remain. Two of these are recognizing more than one activity at the same time and handling data recorded by multiple sensors.

This Sensors Special Issue aims to attract novel solutions to human activity recognition through explainable techniques applied to deep learning models, in order not only to classify the action scene but also to provide context understanding. We believe that introducing transparency and explainability will increase the trust in, and accountability of, HAR systems.

Authors are invited to submit original contributions or survey papers for publication in the open access Sensors journal.

Topics of interest include (but are not limited to) the following:

  • Human activity recognition;
  • Human activity understanding;
  • Contextual modeling for human action recognition;
  • Commonsense justification for human action explanation;
  • Knowledge representation of human activity.

Dr. Ana Maria Rebelo
Dr. Ana Filipa Sequeira
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Behavior modeling
  • Activity recognition
  • Deep learning
  • Explainable AI
  • Context understanding
  • Causality knowledge
  • Activity representation

Published Papers (4 papers)


Research

16 pages, 4057 KiB  
Article
Human Activity Recognition via Hybrid Deep Learning Based Model
by Imran Ullah Khan, Sitara Afzal and Jong Weon Lee
Sensors 2022, 22(1), 323; https://0-doi-org.brum.beds.ac.uk/10.3390/s22010323 - 01 Jan 2022
Cited by 96 | Viewed by 8696
Abstract
In recent years, Human Activity Recognition (HAR) has become one of the most important research topics in the domains of health and human–machine interaction. Many artificial-intelligence-based models have been developed for activity recognition; however, these algorithms fail to extract spatial and temporal features and therefore perform poorly on real-world, long-term HAR. Furthermore, only a limited number of datasets for physical activity recognition are publicly available, and they contain few activities. Considering these limitations, we develop a hybrid model that combines a Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) for activity recognition, where the CNN extracts spatial features and the LSTM network learns temporal information. Additionally, we generate a new, challenging dataset, collected from 20 participants using the Kinect V2 sensor, that contains 12 different classes of human physical activities. An extensive ablation study is performed over traditional machine learning and deep learning models to obtain the optimum solution for HAR. An accuracy of 90.89% is achieved with the CNN-LSTM technique, which shows that the proposed model is suitable for HAR applications. Full article
(This article belongs to the Special Issue Human Activity Recognition Using Deep Learning)
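As background for pipelines like the one above, raw sensor streams are typically segmented into fixed-length overlapping windows before being fed to a CNN-LSTM: the CNN extracts spatial features from each window, and the LSTM learns temporal structure across frames. A minimal NumPy sketch of such windowing; the window length and overlap are hypothetical values, not the paper's settings:

```python
import numpy as np

def sliding_windows(signal, window, step):
    """Segment a (samples, channels) sensor stream into overlapping
    fixed-length windows of shape (n_windows, window, channels)."""
    n = (signal.shape[0] - window) // step + 1
    return np.stack([signal[i * step : i * step + window] for i in range(n)])

# 1000 samples of 3-axis accelerometer data, 128-sample windows, 50% overlap
stream = np.random.randn(1000, 3)
X = sliding_windows(stream, window=128, step=64)
print(X.shape)  # (14, 128, 3)
```

Each window then becomes one training example, labeled with the activity performed during that interval.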

19 pages, 4786 KiB  
Article
Robust Human Activity Recognition by Integrating Image and Accelerometer Sensor Data Using Deep Fusion Network
by Junhyuk Kang, Jieun Shin, Jaewon Shin, Daeho Lee and Ahyoung Choi
Sensors 2022, 22(1), 174; https://0-doi-org.brum.beds.ac.uk/10.3390/s22010174 - 28 Dec 2021
Cited by 9 | Viewed by 3070
Abstract
Studies on deep-learning-based behavioral pattern recognition have recently received considerable attention. However, when data are insufficient or the activity to be identified changes, a robust deep learning model cannot be created. This work contributes a generalized deep learning model that is robust to noise and not dependent on the input signals: features are extracted through a separate deep learning model for each heterogeneous input signal, so that performance is maintained while preprocessing of the input signals is minimized. We propose a hybrid deep learning model that takes heterogeneous sensor data, accelerometer readings and images, as inputs. For the accelerometer data, we use a convolutional neural network (CNN) with a convolutional block attention module (CBAM), and we apply bidirectional long short-term memory and a residual neural network. The overall accuracy was 94.8% with a skeleton image and accelerometer data, and 93.1% with a skeleton image, coordinates, and accelerometer data, after evaluating nine behaviors using the Berkeley Multimodal Human Action Database (MHAD). Furthermore, the accuracy was 93.4% with inverted images and 93.2% with white noise added to the accelerometer data. Testing with data that included inverted images and noise indicated that the proposed model is robust, with a performance deterioration of approximately 1%. Full article
(This article belongs to the Special Issue Human Activity Recognition Using Deep Learning)
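The paper's fusion network is learned end to end; as a much simpler illustration of the general idea of combining modality branches, a score-level (late) fusion baseline averages each branch's softmax output and picks the best class. This NumPy sketch is a simplified assumption, not the authors' architecture:

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax over the class axis."""
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def late_fusion(logits_img, logits_acc, w=0.5):
    """Combine per-class scores from an image branch and an accelerometer
    branch by weighted averaging of softmax outputs, then take argmax."""
    p = w * softmax(logits_img) + (1 - w) * softmax(logits_acc)
    return p.argmax(axis=-1)

# One sample, three classes: image branch favors class 0 more strongly
# than the accelerometer branch favors class 1, so fusion picks class 0.
logits_img = np.array([[2.0, 0.0, 0.0]])
logits_acc = np.array([[0.0, 0.5, 0.0]])
print(late_fusion(logits_img, logits_acc))  # → [0]
```

A learned fusion network replaces the fixed weight `w` with layers trained jointly on both modalities, which is what gives the paper's model its robustness to per-modality noise.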

14 pages, 3320 KiB  
Article
Prediction of Lower Extremity Multi-Joint Angles during Overground Walking by Using a Single IMU with a Low Frequency Based on an LSTM Recurrent Neural Network
by Joohwan Sung, Sungmin Han, Heesu Park, Hyun-Myung Cho, Soree Hwang, Jong Woong Park and Inchan Youn
Sensors 2022, 22(1), 53; https://0-doi-org.brum.beds.ac.uk/10.3390/s22010053 - 22 Dec 2021
Cited by 23 | Viewed by 3998
Abstract
The joint angle during gait is an important indicator for, e.g., injury risk assessment and rehabilitation status evaluation. Inertial measurement unit (IMU) sensors have been used and continuously developed for gait analysis; however, they are difficult to use in daily life because of the inconvenience of attaching multiple sensors and the battery consumption required by high data sampling rates, which hinders long-term use. To overcome these problems, this study proposes a multi-joint angle estimation method based on a long short-term memory (LSTM) recurrent neural network with a single low-frequency (23 Hz) IMU sensor. IMU data from a sensor attached to the lateral shank were measured during overground walking at a self-selected speed for 30 healthy young persons. The results show comparatively good accuracy, similar to previous studies using high-frequency IMU sensors. Compared with reference results obtained from a motion capture system, the coefficient of determination (R2) of the estimated angles is greater than 0.74, and the root mean square error (RMSE) and normalized root mean square error (NRMSE) are less than 7° and 9.87%, respectively. Among the hip, knee, and ankle joints, the knee joint showed the best estimation performance in terms of NRMSE and R2. Full article
(This article belongs to the Special Issue Human Activity Recognition Using Deep Learning)
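The error measures quoted above (RMSE, NRMSE, and R2) can be computed as below. Normalizing the RMSE by the range of the reference signal is one common convention and is an assumption here, since the abstract does not state which normalization the authors used:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error between reference and estimated angles."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def nrmse(y_true, y_pred):
    """RMSE normalized by the range of the reference signal, in percent."""
    return 100.0 * rmse(y_true, y_pred) / (y_true.max() - y_true.min())

def r2(y_true, y_pred):
    """Coefficient of determination against the reference signal."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Toy example: estimates offset by a constant 1 degree from the reference
angles_ref = np.array([0.0, 10.0, 20.0, 30.0])
angles_est = angles_ref + 1.0
print(rmse(angles_ref, angles_est))  # 1.0
```

Note that a constant offset inflates RMSE but barely affects R2, which is why studies like this one report both.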

17 pages, 5808 KiB  
Article
Fusion Learning for sEMG Recognition of Multiple Upper-Limb Rehabilitation Movements
by Tianyang Zhong, Donglin Li, Jianhui Wang, Jiacan Xu, Zida An and Yue Zhu
Sensors 2021, 21(16), 5385; https://0-doi-org.brum.beds.ac.uk/10.3390/s21165385 - 09 Aug 2021
Cited by 8 | Viewed by 2245
Abstract
Surface electromyogram (sEMG) signals have been used in human motion intention recognition, which has significant application prospects in rehabilitation medicine and cognitive science. However, some valuable dynamic information on upper-limb motions is lost during feature extraction from sEMG signals; in addition, only a small variety of rehabilitation movements can be distinguished, and the classification accuracy is easily affected. To address these problems, first, a multiscale time–frequency information fusion representation method (MTFIFR) is proposed to obtain the time–frequency features of multichannel sEMG signals. Then, this paper designs the multiple feature fusion network (MFFN), which aims to strengthen feature extraction. Finally, a deep belief network (DBN) is introduced as the classification model of the MFFN to boost generalization performance across more types of upper-limb movements. In the experiments, 12 kinds of upper-limb rehabilitation actions were recognized using four sEMG sensors. The maximum identification accuracy was 86.10%, and the average classification accuracy of the proposed MFFN was 73.49%, indicating that the time–frequency representation approach combined with the MFFN is superior to traditional machine learning and convolutional neural network models. Full article
(This article belongs to the Special Issue Human Activity Recognition Using Deep Learning)
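As a toy illustration of the kind of time–frequency representation the MTFIFR builds on, a short-time Fourier transform turns a 1-D sEMG channel into a frames × frequency magnitude map. The window length and hop below are hypothetical values, and the actual MTFIFR is a multiscale, fused representation rather than this single-scale sketch:

```python
import numpy as np

def stft_magnitude(x, n_fft=64, hop=32):
    """Magnitude spectrogram of a 1-D signal via a Hann-windowed
    short-time Fourier transform: shape (n_frames, n_fft // 2 + 1)."""
    win = np.hanning(n_fft)
    frames = [x[i : i + n_fft] * win
              for i in range(0, len(x) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=-1))

# 50 Hz tone sampled at 1 kHz, 1024 samples
sig = np.sin(2 * np.pi * 50 * np.arange(1024) / 1000.0)
S = stft_magnitude(sig)
print(S.shape)  # (31, 33)
```

Stacking such maps across the four sEMG channels yields a multichannel time–frequency image that a fusion network can consume like any other 2-D input.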
