
Multi-Sensor for Human Activity Recognition

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (30 September 2022) | Viewed by 33602

Special Issue Editors


Guest Editor
Centre for Research and Technology Hellas, Information Technologies Institute, 6th Km Charilaou-Thermi, 57001 Thessaloniki, Greece
Interests: activity recognition; wearable sensors; accelerometers; context-awareness; context modeling; ubiquitous computing

Guest Editor
School of Informatics, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
Interests: knowledge representation; semantic web; context-based multisensor reasoning and fusion; semantic dialogue management; knowledge-driven decision making

Guest Editor
Centre for Research and Technology Hellas, Information Technologies Institute, 6th Km Charilaou-Thermi, 57001 Thessaloniki, Greece
Interests: semantic multimedia analysis; indexing and retrieval; social media and big data analysis; knowledge structures; reasoning and personalization for multimedia applications; e-health and environmental applications

Special Issue Information

Dear Colleagues,

Human activity recognition (HAR) is a research topic that has recently made significant progress, attracting growing attention in a number of disciplines and application domains. However, whether it is video-based or sensor-based, data-driven or knowledge-driven, HAR still faces many issues and challenges that motivate the development of new activity recognition techniques to improve accuracy under more realistic conditions. Key challenges in the domain include, among others: difficult feature extraction, scarcity of annotated data, data heterogeneity, recognition of concurrent, overlapping, and multi-occupant activities, increased computational cost, temporal imperfections, noise, context-based and high-level interpretability, and non-invasive activity sensing and privacy.

This Special Issue focuses on the current state of the art of HAR approaches, with a special emphasis on multisensor environments, where information is typically collected from multiple sources and complementary modalities, such as multimedia streams (e.g., using video analysis and speech recognition), lifestyle sensors, and environmental sensors. The main objective is to stimulate original, unpublished research addressing the challenges above through the concurrent use of multiple sensors and innovative fusion schemes, frameworks, algorithms, and platforms. Surveys are also very welcome.

Authors are invited to submit original contributions or survey papers for publication in the open access Sensors journal. Topics of interest include (but are not limited to) the following:

  • Modeling and analysis of multisensors for human activity recognition;
  • Knowledge-driven multisensor fusion frameworks for human activity recognition;
  • Data-driven and machine-learning-driven multisensor fusion frameworks for human activity recognition;
  • Distributed sensor networks and IoT for human activity recognition;
  • Interoperability frameworks and semantic situational awareness for high-level human activity recognition and decision making;
  • Multisensor human activity recognition under uncertainty, noise, and incomplete data;
  • Multisensor human activity recognition in healthcare;
  • Multisensor human activity recognition in security and surveillance applications;
  • Multisensor human activity recognition in ambient assisted living; 
  • Multisensor human activity recognition to assist human–computer interaction;
  • Multisensor human activity recognition in augmented/virtual reality applications;
  • Security and privacy issues in multisensor human activity recognition.

Dr. Athina Tsanousa
Dr. Georgios Meditskos
Dr. Stefanos Vrochidis
Dr. Ioannis Yiannis Kompatsiaris
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, authors can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Multi-sensor human activity recognition
  • Multi-sensor fusion and interpretation
  • IoT networks and interoperability
  • Data- and knowledge-driven multi-sensor human activity recognition
  • Security and privacy in multi-sensor monitoring
  • Surveys on multi-sensor human activity recognition

Related Special Issue

Published Papers (10 papers)


Editorial


3 pages, 164 KiB  
Editorial
Multi-Sensors for Human Activity Recognition
by Athina Tsanousa, Georgios Meditskos, Stefanos Vrochidis and Ioannis Kompatsiaris
Sensors 2023, 23(10), 4617; https://0-doi-org.brum.beds.ac.uk/10.3390/s23104617 - 10 May 2023
Viewed by 1055
Abstract
Human activity recognition (HAR) has made significant progress in recent years, with growing applications in various domains, and the emergence of wearable and ambient sensors has provided new opportunities in the field [...] Full article
(This article belongs to the Special Issue Multi-Sensor for Human Activity Recognition)

Research


14 pages, 12794 KiB  
Article
Bus Violence: An Open Benchmark for Video Violence Detection on Public Transport
by Luca Ciampi, Paweł Foszner, Nicola Messina, Michał Staniszewski, Claudio Gennaro, Fabrizio Falchi, Gianluca Serao, Michał Cogiel, Dominik Golba, Agnieszka Szczęsna and Giuseppe Amato
Sensors 2022, 22(21), 8345; https://0-doi-org.brum.beds.ac.uk/10.3390/s22218345 - 31 Oct 2022
Cited by 9 | Viewed by 2902
Abstract
The automatic detection of violent actions in public places through video analysis is difficult because the employed Artificial Intelligence-based techniques often suffer from generalization problems. Indeed, these algorithms hinge on large quantities of annotated data and usually experience a drastic drop in performance when used in scenarios never seen during the supervised learning phase. In this paper, we introduce and publicly release the Bus Violence benchmark, the first large-scale collection of video clips for violence detection on public transport, where some actors simulated violent actions inside a moving bus in changing conditions, such as the background or light. Moreover, we conduct a performance analysis of several state-of-the-art video violence detectors pre-trained with general violence detection databases on this newly established use case. The achieved moderate performances reveal the difficulties in generalizing from these popular methods, indicating the need to have this new collection of labeled data, beneficial for specializing them in this new scenario. Full article
(This article belongs to the Special Issue Multi-Sensor for Human Activity Recognition)

18 pages, 291 KiB  
Article
Multilabel Classification Methods for Human Activity Recognition: A Comparison of Algorithms
by Athanasios Lentzas, Eleana Dalagdi and Dimitris Vrakas
Sensors 2022, 22(6), 2353; https://0-doi-org.brum.beds.ac.uk/10.3390/s22062353 - 18 Mar 2022
Cited by 4 | Viewed by 2181
Abstract
As the world’s population is aging, and since access to ambient sensors has become easier over the past years, activity recognition in smart home installations has gained increased scientific interest. The majority of published papers in the literature focus on single-resident activity recognition. While this is an important area, especially when focusing on elderly people living alone, multi-resident activity recognition has potentially more applications in smart homes. Activity recognition for multiple residents acting concurrently can be treated as a multilabel classification problem (MLC). In this study, an experimental comparison between different MLC algorithms is attempted. Three different techniques were implemented: RAkELd, classifier chains, and binary relevance. These methods are evaluated using the ARAS and CASAS public datasets. Results obtained from experiments have shown that using MLC can recognize activities performed by multiple people with high accuracy. While RAkELd had the best performance, the rest of the methods had on-par results. Full article
(This article belongs to the Special Issue Multi-Sensor for Human Activity Recognition)
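The three multilabel classification strategies compared in this paper can be illustrated with scikit-learn, which provides binary relevance (via MultiOutputClassifier) and classifier chains out of the box; RAkELd lives in the separate scikit-multilearn package and is omitted here. The data below are synthetic stand-ins for the ARAS/CASAS sensor datasets, so the numbers are purely illustrative:

```python
# Sketch: binary relevance vs. classifier chains for multi-resident
# activity recognition framed as multilabel classification (MLC).
# Features and labels are synthetic stand-ins for ARAS/CASAS data.
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import hamming_loss
from sklearn.model_selection import train_test_split
from sklearn.multioutput import ClassifierChain, MultiOutputClassifier

X, Y = make_multilabel_classification(n_samples=500, n_features=20,
                                      n_classes=5, random_state=0)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)

# Binary relevance: one independent classifier per activity label.
br = MultiOutputClassifier(LogisticRegression(max_iter=1000)).fit(X_tr, Y_tr)
# Classifier chain: each classifier also sees the previously predicted labels.
cc = ClassifierChain(LogisticRegression(max_iter=1000), random_state=0).fit(X_tr, Y_tr)

print("binary relevance Hamming loss:", hamming_loss(Y_te, br.predict(X_te)))
print("classifier chain Hamming loss:", hamming_loss(Y_te, cc.predict(X_te)))
```

Hamming loss (the fraction of wrongly predicted labels) is a common MLC metric; lower is better.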

27 pages, 5196 KiB  
Article
Toward Modeling Psychomotor Performance in Karate Combats Using Computer Vision Pose Estimation
by Jon Echeverria and Olga C. Santos
Sensors 2021, 21(24), 8378; https://0-doi-org.brum.beds.ac.uk/10.3390/s21248378 - 15 Dec 2021
Cited by 14 | Viewed by 4094
Abstract
Technological advances enable the design of systems that interact more closely with humans in a multitude of previously unsuspected fields. Martial arts are not outside the application of these techniques. From the point of view of the modeling of human movement in relation to the learning of complex motor skills, martial arts are of interest because they are articulated around a system of movements that are predefined, or at least, bounded, and governed by the laws of Physics. Their execution must be learned after continuous practice over time. Literature suggests that artificial intelligence algorithms, such as those used for computer vision, can model the movements performed. Thus, they can be compared with a good execution as well as analyze their temporal evolution during learning. We are exploring the application of this approach to model psychomotor performance in Karate combats (called kumites), which are characterized by the explosiveness of their movements. In addition, modeling psychomotor performance in a kumite requires the modeling of the joint interaction of two participants, while most current research efforts in human movement computing focus on the modeling of movements performed individually. Thus, in this work, we explore how to apply a pose estimation algorithm to extract the features of some predefined movements of Ippon Kihon kumite (a one-step conventional assault) and compare classification metrics with four data mining algorithms, obtaining high values with them. Full article
(This article belongs to the Special Issue Multi-Sensor for Human Activity Recognition)

23 pages, 4692 KiB  
Article
A Blockchain-Based Distributed Paradigm to Secure Localization Services
by Roberto Saia, Alessandro Sebastian Podda, Livio Pompianu, Diego Reforgiato Recupero and Gianni Fenu
Sensors 2021, 21(20), 6814; https://0-doi-org.brum.beds.ac.uk/10.3390/s21206814 - 13 Oct 2021
Cited by 3 | Viewed by 2420
Abstract
In recent decades, modern societies are experiencing an increasing adoption of interconnected smart devices. This revolution involves not only canonical devices such as smartphones and tablets, but also simple objects like light bulbs. Named the Internet of Things (IoT), this ever-growing scenario offers enormous opportunities in many areas of modern society, especially if joined by other emerging technologies such as, for example, the blockchain. Indeed, the latter allows users to certify transactions publicly, without relying on central authorities or intermediaries. This work aims to exploit the scenario above by proposing a novel blockchain-based distributed paradigm to secure localization services, here named the Internet of Entities (IoE). It represents a mechanism for the reliable localization of people and things, and it exploits the increasing number of existing wireless devices and blockchain-based distributed ledger technologies. Moreover, unlike most of the canonical localization approaches, it is strongly oriented towards the protection of the users’ privacy. Finally, its implementation requires minimal efforts since it employs the existing infrastructures and devices, thus giving life to a new and wide data environment, exploitable in many domains, such as e-health, smart cities, and smart mobility. Full article
(This article belongs to the Special Issue Multi-Sensor for Human Activity Recognition)

17 pages, 5323 KiB  
Article
Architecture Design and VLSI Implementation of 3D Hand Gesture Recognition System
by Tsung-Han Tsai and Yih-Ru Tsai
Sensors 2021, 21(20), 6724; https://0-doi-org.brum.beds.ac.uk/10.3390/s21206724 - 10 Oct 2021
Cited by 3 | Viewed by 2965
Abstract
With advancements in technology, more and more research is being focused on enhancing daily life quality and convenience. Along with the increase in the development of gesture control systems, many controllers, such as the keyboard, mouse, and other devices, have been replaced with remote control products, which are gradually becoming more intuitive for users. However, vision-based hand gesture recognition systems still have many problems to overcome. Most hand detection methods adopt a skin filter or motion filter for pre-processing. However, in a noisy environment, it is not easy to correctly extract interesting objects. In this paper, a VLSI design with dual-cameras has been proposed to construct a depth map with a stereo matching algorithm and recognize hand gestures. The proposed system adopts an adaptive depth filter to separate interesting foreground objects from the background. We also propose dynamic gesture recognition using depth and coordinate information. The system can perform static and dynamic gesture recognition. The ASIC design is implemented in TSMC 90 nm with about 47.3 K gate counts, and 27.8 mW of power consumption. The average accuracy of each gesture recognition is 83.98%. Full article
(This article belongs to the Special Issue Multi-Sensor for Human Activity Recognition)
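The two ideas at the core of this system — stereo block matching to obtain a disparity (depth) map, and a depth filter that keeps only near foreground objects such as a hand — can be sketched in plain NumPy. The window size, disparity range, and threshold below are illustrative choices, not the paper's hardware parameters:

```python
# Sketch: sum-of-absolute-differences stereo block matching, followed by
# a depth filter that keeps pixels closer than a threshold (larger
# disparity = closer). Parameters are illustrative, not the ASIC's.
import numpy as np

def disparity_map(left, right, max_disp=8, win=3):
    """Per-pixel disparity via SAD block matching on rectified images."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    half = win // 2
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y-half:y+half+1, x-half:x+half+1]
            costs = [np.abs(patch - right[y-half:y+half+1,
                                          x-d-half:x-d+half+1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp

def depth_filter(disp, thresh):
    """Foreground mask: keep only sufficiently close (high-disparity) pixels."""
    return disp >= thresh

# Toy stereo pair: a bright square shifted 4 px between the two views.
left = np.zeros((20, 30)); left[5:12, 15:22] = 1.0
right = np.zeros((20, 30)); right[5:12, 11:18] = 1.0
d = disparity_map(left, right)
mask = depth_filter(d, thresh=3)
print("foreground pixels kept:", mask.sum())
```

In the paper this separation feeds the gesture recognizer; here it only demonstrates the arithmetic that the VLSI design implements in hardware.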

28 pages, 4687 KiB  
Article
Detection of Health-Related Events and Behaviours from Wearable Sensor Lifestyle Data Using Symbolic Intelligence: A Proof-of-Concept Application in the Care of Multiple Sclerosis
by Thanos G. Stavropoulos, Georgios Meditskos, Ioulietta Lazarou, Lampros Mpaltadoros, Sotirios Papagiannopoulos, Magda Tsolaki and Ioannis Kompatsiaris
Sensors 2021, 21(18), 6230; https://0-doi-org.brum.beds.ac.uk/10.3390/s21186230 - 17 Sep 2021
Cited by 7 | Viewed by 3192
Abstract
In this paper, we demonstrate the potential of a knowledge-driven framework to improve the efficiency and effectiveness of care through remote and intelligent assessment. More specifically, we present a rule-based approach to detect health related problems from wearable lifestyle sensor data that add clinical value to take informed decisions on follow-up and intervention. We use OWL 2 ontologies as the underlying knowledge representation formalism for modelling contextual information and high-level concepts and relations among them. The conceptual model of our framework is defined on top of existing modelling standards, such as SOSA and WADM, promoting the creation of interoperable knowledge graphs. On top of the symbolic knowledge graphs, we define a rule-based framework for infusing expert knowledge in the form of SHACL constraints and rules to recognise patterns, anomalies and situations of interest based on the predefined and stored rules and conditions. A dashboard visualizes both sensor data and detected events to facilitate clinical supervision and decision making. Preliminary results on the performance and scalability are presented, while a focus group of clinicians involved in an exploratory research study revealed their preferences and perspectives to shape future clinical research using the framework. Full article
(This article belongs to the Special Issue Multi-Sensor for Human Activity Recognition)
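The rule layer described above can be illustrated, very loosely, with a plain-Python stand-in for one event-detection rule. In the actual framework such rules are SHACL constraints over an OWL 2 knowledge graph; the field names and threshold here are hypothetical simplifications, not the paper's model:

```python
# Sketch: a plain-Python stand-in for one rule of the kind the framework
# expresses in SHACL: flag days whose wearable-derived step count falls
# below a clinician-set threshold. All names and values are hypothetical.
from dataclasses import dataclass

@dataclass
class DailySummary:
    date: str
    steps: int
    sleep_hours: float

LOW_ACTIVITY_STEPS = 2000  # illustrative clinical threshold

def detect_low_activity(days):
    """Return the dates that violate the low-activity rule."""
    return [d.date for d in days if d.steps < LOW_ACTIVITY_STEPS]

week = [
    DailySummary("2021-09-13", 5400, 7.5),
    DailySummary("2021-09-14", 1200, 6.0),  # low activity
    DailySummary("2021-09-15", 800, 8.0),   # low activity
]
print(detect_low_activity(week))  # -> ['2021-09-14', '2021-09-15']
```

Expressing the rule declaratively over a knowledge graph, as the paper does, lets clinicians add or change rules without touching application code.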

15 pages, 1852 KiB  
Article
Use of Multiple Low Cost Carbon Dioxide Sensors to Measure Exhaled Breath Distribution with Face Mask Type and Wearing Behaviour
by Naveed Salman, Muhammad Waqas Khan, Michael Lim, Amir Khan, Andrew H. Kemp and Catherine J. Noakes
Sensors 2021, 21(18), 6204; https://0-doi-org.brum.beds.ac.uk/10.3390/s21186204 - 16 Sep 2021
Cited by 7 | Viewed by 3346
Abstract
The use of cloth face coverings and face masks has become widespread in light of the COVID-19 pandemic. This paper presents a method of using low cost wirelessly connected carbon dioxide (CO2) sensors to measure the effects of properly and improperly worn face masks on the concentration distribution of exhaled breath around the face. Four types of face masks are used in two indoor environment scenarios. CO2 as a proxy for exhaled breath is being measured with the Sensirion SCD30 CO2 sensor, and data are being transferred wirelessly to a base station. The exhaled CO2 is measured in four directions at various distances from the head of the subject, and interpolated to create spatial heat maps of CO2 concentration. Statistical analysis using the Friedman’s analysis of variance (ANOVA) test is carried out to determine the validity of the null hypotheses (i.e., distribution of the CO2 is same) between different experiment conditions. Results suggest CO2 concentrations vary little with the type of mask used; however, improper use of the face mask results in statistically different CO2 spatial distribution of concentration. The use of low cost sensors with a visual interpolation tool could provide an effective method of demonstrating the importance of proper mask wearing to the public. Full article
(This article belongs to the Special Issue Multi-Sensor for Human Activity Recognition)
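The statistical step described above — Friedman's test over repeated measurements to decide whether CO2 distributions differ between conditions — can be reproduced with SciPy. The readings below are made-up numbers, not the paper's measurements:

```python
# Sketch: Friedman test across related samples, as used above to compare
# CO2 concentration distributions between mask-wearing conditions.
# The ppm readings are illustrative, not the paper's data.
from scipy.stats import friedmanchisquare

# CO2 readings (ppm) at the same measurement points under three conditions.
properly_worn   = [620, 655, 610, 640, 700, 665, 630, 615]
improperly_worn = [780, 820, 760, 800, 870, 815, 790, 770]
no_mask         = [900, 950, 880, 930, 990, 940, 910, 895]

stat, p = friedmanchisquare(properly_worn, improperly_worn, no_mask)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")
# A small p-value rejects the null hypothesis that the CO2 distribution
# is the same across the three conditions.
```

The Friedman test is the non-parametric counterpart of repeated-measures ANOVA, which suits sensor readings that need not be normally distributed.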

16 pages, 12986 KiB  
Article
Command Recognition Using Binarized Convolutional Neural Network with Voice and Radar Sensors for Human-Vehicle Interaction
by Seunghyun Oh, Chanhee Bae, Jaechan Cho, Seongjoo Lee and Yunho Jung
Sensors 2021, 21(11), 3906; https://0-doi-org.brum.beds.ac.uk/10.3390/s21113906 - 05 Jun 2021
Cited by 1 | Viewed by 2524
Abstract
Recently, as technology has advanced, the use of in-vehicle infotainment systems has increased, providing many functions. However, if the driver’s attention is diverted to control these systems, it can cause a fatal accident, and thus human–vehicle interaction is becoming more important. Therefore, in this paper, we propose a human–vehicle interaction system to reduce driver distraction during driving. We used voice and continuous-wave radar sensors that require low complexity for application to vehicle environments as resource-constrained platforms. The proposed system applies sensor fusion techniques to improve the limit of single-sensor monitoring. In addition, we used a binarized convolutional neural network algorithm, which significantly reduces the computational workload of the convolutional neural network in command classification. As a result of performance evaluation in noisy and cluttered environments, the proposed system showed a recognition accuracy of 96.4%, an improvement of 7.6% compared to a single voice sensor-based system, and 9.0% compared to a single radar sensor-based system. Full article
(This article belongs to the Special Issue Multi-Sensor for Human Activity Recognition)
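The core trick behind the binarized network used above — constraining weights and activations to {-1, +1} so that multiply-accumulates collapse into XNOR and popcount — can be verified in a few lines of NumPy. The layer size is arbitrary, and this shows only the inference arithmetic, not the paper's full command-classification network:

```python
# Sketch: why a binarized neural network layer is cheap. With weights and
# activations in {-1, +1}, a dot product equals (matches - mismatches),
# computable with XNOR + popcount instead of floating-point MACs.
import numpy as np

rng = np.random.default_rng(0)
x = np.sign(rng.standard_normal(64))        # binarized activations, {-1, +1}
w = np.sign(rng.standard_normal((10, 64)))  # binarized weights, 10 output units

# Reference: ordinary floating-point dot product.
ref = w @ x

# XNOR/popcount equivalent on the {0, 1} encoding (-1 -> 0, +1 -> 1).
xb = x > 0
wb = w > 0
matches = np.count_nonzero(wb == xb, axis=1)  # popcount of XNOR per unit
n = x.size
xnor = 2 * matches - n  # matches - mismatches = 2*matches - n

assert np.array_equal(ref.astype(int), xnor)
print("XNOR/popcount matches the float dot product for all 10 outputs")
```

This equivalence is what lets the classification workload drop so sharply on resource-constrained in-vehicle platforms.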

Review


29 pages, 435 KiB  
Review
A Survey of Human Activity Recognition in Smart Homes Based on IoT Sensors Algorithms: Taxonomies, Challenges, and Opportunities with Deep Learning
by Damien Bouchabou, Sao Mai Nguyen, Christophe Lohr, Benoit LeDuc and Ioannis Kanellos
Sensors 2021, 21(18), 6037; https://0-doi-org.brum.beds.ac.uk/10.3390/s21186037 - 09 Sep 2021
Cited by 79 | Viewed by 6812
Abstract
Recent advances in Internet of Things (IoT) technologies and the reduction in the cost of sensors have encouraged the development of smart environments, such as smart homes. Smart homes can offer home assistance services to improve the quality of life, autonomy, and health of their residents, especially for the elderly and dependent. To provide such services, a smart home must be able to understand the daily activities of its residents. Techniques for recognizing human activity in smart homes are advancing daily. However, new challenges are emerging every day. In this paper, we present recent algorithms, works, challenges, and taxonomy of the field of human activity recognition in a smart home through ambient sensors. Moreover, since activity recognition in smart homes is a young field, we raise specific problems, as well as missing and needed contributions. However, we also propose directions, research opportunities, and solutions to accelerate advances in this field. Full article
(This article belongs to the Special Issue Multi-Sensor for Human Activity Recognition)
