
Sensors for Gait Biometrics

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: closed (20 May 2020) | Viewed by 16071

Special Issue Editors


Dr. Manuel J. Marín-Jiménez
Guest Editor
Dpt. de Informática y Análisis Numérico, Universidad de Córdoba, Campus de Rabanales, 14071 Córdoba, Spain
Interests: human-centric video understanding; object detection; machine learning

Prof. Dr. Nicolás Guil Mata
Guest Editor
Departamento de Arquitectura de Computadores, ETSI Informática, Universidad de Málaga, Boulevar Louis Pasteur, Campus de Teatinos, 29071 Málaga, Spain
Interests: computer vision techniques for video processing; accelerator-based high-performance computing; video applications on embedded heterogeneous systems

Special Issue Information

Dear Colleagues,

Gait is a pattern unique to each person and can therefore be used to identify a subject without requiring their cooperation. In addition to classical RGB cameras, other types of sensors can capture gait motion, such as depth sensors, infrared sensors, gyroscopes, and accelerometers. Note that not every sensor suits every situation; an infrared sensor, for example, is more useful at night.

The aim of this Special Issue is to highlight and promote the benefits of using various types of sensors for gait-based people identification from a biometric point of view, including gender recognition, apparent age estimation, and so on. Specifically, we are interested in original works presenting new approaches involving any kind of sensor (inertial, depth, RGB, etc.) or combination of sensors. Works introducing and benchmarking new large, realistic, annotated datasets for gait recognition, or datasets including multiple people in the same scene, are also welcome.

Dr. Manuel J. Marín-Jiménez
Prof. Dr. Nicolás Guil Mata
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Novel gait descriptors
  • Soft biometrics based on gait
  • Multimodal gait recognition
  • View-invariant gait recognition
  • Information fusion for gait recognition
  • Incremental learning for gait recognition
  • Semi- and weakly supervised learning for gait
  • Algorithms for effective transfer learning applied to gait recognition
  • Multi-task learning applied to gait
  • New challenging datasets with complex scenarios (e.g., multi-subject, multi-sensor)

Published Papers (4 papers)


Research

23 pages, 3718 KiB  
Article
Non-Linear Template-Based Approach for the Study of Locomotion
by Tristan Dot, Flavien Quijoux, Laurent Oudre, Aliénor Vienne-Jumeau, Albane Moreau, Pierre-Paul Vidal and Damien Ricard
Sensors 2020, 20(7), 1939; https://0-doi-org.brum.beds.ac.uk/10.3390/s20071939 - 30 Mar 2020
Cited by 10 | Viewed by 3205
Abstract
The automatic detection of gait events (i.e., Initial Contact (IC) and Final Contact (FC)) is crucial for the characterisation of gait from Inertial Measurement Units. In this article, we present a method for detecting steps (i.e., IC and FC) in the gait sequences of individuals recorded with a gyrometer. The proposed approach combines a dictionary of templates with a Dynamic Time Warping (DTW) measure of fit to locate these templates in the input signals. Several strategies for choosing and learning adequate templates from annotated data are also described. The method is tested on thirteen healthy subjects and compared to a gold standard. Depending on the template choice, the proposed algorithm achieves average errors from 0.01 to 0.03 s for the detection of IC, FC, and step duration. The results demonstrate that using DTW achieves this performance with only a single template. DTW is a convenient tool for pattern recognition on gait gyrometer signals. This study paves the way for new step detection methods: it shows that a single template associated with non-linear deformations may be sufficient to model the gait of healthy subjects.
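The template-plus-DTW idea can be illustrated in a few lines of code. The sketch below is a hypothetical illustration, not the authors' implementation: it slides a window over a 1-D gyroscope signal and scores each window against a step template with the classic DTW recurrence; local minima of the score would mark candidate steps.

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic O(len(a)*len(b)) DTW distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def match_template(signal: np.ndarray, template: np.ndarray, step: int = 10):
    """Score every window of the signal against the template; lower is better."""
    w = len(template)
    starts = range(0, len(signal) - w + 1, step)
    scores = [dtw_distance(signal[s:s + w], template) for s in starts]
    return np.array(list(starts)), np.array(scores)

# Toy example: a sine-like 'step' template hidden in a noisy gyroscope trace.
rng = np.random.default_rng(0)
template = np.sin(np.linspace(0, 2 * np.pi, 50))
signal = rng.normal(0, 0.1, 500)
signal[200:250] += template  # inject one gait-cycle-like event
starts, scores = match_template(signal, template)
print("best match starts near sample", starts[np.argmin(scores)])
```

Because DTW tolerates non-linear time deformations, one template can match cycles of varying duration, which is the property the paper exploits.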

18 pages, 9230 KiB  
Article
MuPeG—The Multiple Person Gait Framework
by Rubén Delgado-Escaño, Francisco M. Castro, Julián R. Cózar, Manuel J. Marín-Jiménez and Nicolás Guil
Sensors 2020, 20(5), 1358; https://0-doi-org.brum.beds.ac.uk/10.3390/s20051358 - 02 Mar 2020
Cited by 5 | Viewed by 3087
Abstract
Gait recognition is being employed as an effective approach to identify people without requiring subject collaboration. Current techniques for this task obtain high performance on existing datasets (usually more than 90% accuracy). However, those datasets are simple because they contain only one subject in the scene at a time. This limits the extrapolation of the results to real-world conditions, where multiple subjects are usually present in the scene simultaneously, generating different types of occlusions and requiring better tracking methods and models trained to deal with those situations. Thus, to evaluate the more realistic and challenging situations that appear in scenarios with multiple subjects, we release a new framework (MuPeG) that generates augmented datasets with multiple subjects using existing datasets as input. In this way, it is not necessary to record and label new videos, since the framework does this automatically. In addition, based on datasets generated by our framework, we propose an experimental methodology that describes how to use datasets with multiple subjects and the recommended experiments to perform. Moreover, we release the first experimental results on datasets with multiple subjects, using augmented versions of the TUM-GAID and CASIA-B datasets obtained with our framework. On these augmented datasets the obtained accuracies are 54.8% and 42.3%, whereas on the original (single-subject) datasets the same model achieves 99.7% and 98.0% for TUM-GAID and CASIA-B, respectively. The performance drop clearly shows that datasets with multiple subjects in the scene are much more challenging than the single-subject ones reported in the literature. Thus, our proposed framework generates useful multi-subject datasets that are closer to real-life situations.
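The core augmentation step of a framework like MuPeG can be sketched as silhouette-based compositing: given one frame from each of two single-subject videos plus a binary silhouette mask of the second subject, the masked pixels are pasted onto the first frame. The snippet below is a minimal NumPy illustration under that assumption; the real framework additionally handles alignment, labels, and video I/O.

```python
import numpy as np

def composite_frames(frame_a, frame_b, mask_b):
    """Paste subject B (where mask_b is True) onto frame A.

    frame_a, frame_b: HxWx3 uint8 frames from two single-subject videos.
    mask_b: HxW boolean silhouette of the subject in frame_b.
    """
    out = frame_a.copy()
    out[mask_b] = frame_b[mask_b]  # overwrite only the silhouette pixels
    return out

# Toy usage with synthetic frames.
h, w = 120, 160
frame_a = np.zeros((h, w, 3), dtype=np.uint8)      # background video frame
frame_b = np.full((h, w, 3), 200, dtype=np.uint8)  # second subject's frame
mask_b = np.zeros((h, w), dtype=bool)
mask_b[30:90, 100:130] = True                      # fake silhouette region
merged = composite_frames(frame_a, frame_b, mask_b)
print(merged[60, 110], merged[10, 10])  # subject pixels vs. untouched background
```

Applied frame by frame, this produces a two-subject sequence with known ground-truth identities, without recording any new video.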

16 pages, 1789 KiB  
Article
Recurrent Neural Network for Inertial Gait User Recognition in Smartphones
by Pablo Fernandez-Lopez, Judith Liu-Jimenez, Kiyoshi Kiyokawa, Yang Wu and Raul Sanchez-Reillo
Sensors 2019, 19(18), 4054; https://0-doi-org.brum.beds.ac.uk/10.3390/s19184054 - 19 Sep 2019
Cited by 25 | Viewed by 3950
Abstract
In this article, a gait recognition algorithm is presented based on the information obtained from the inertial sensors embedded in a smartphone, in particular the accelerometers and gyroscopes typically embedded in them. The algorithm processes the signal by extracting gait cycles, which are then fed into a Recurrent Neural Network (RNN) to generate feature vectors. To optimize the accuracy of this algorithm, we apply a random grid hyperparameter search followed by hand-tuning to reach the final hyperparameter configuration. The different configurations are tested on a public database with 744 users and compared with other algorithms previously tested on the same database. With the best-performing configuration, we obtain an equal error rate (EER) of 11.48% when training with only 20% of the users; when using 70% of the users for training, that value drops to 7.55%. The system improves on state-of-the-art methods, and we believe the algorithm could reach significantly better performance if it were trained on a database with more visits per user.
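As a rough illustration of the pipeline described here, the sketch below segments an inertial stream into fixed-length "gait cycles" and feeds them to a small LSTM whose final hidden state serves as the feature vector; verification would then compare these vectors between users. The layer sizes and the naive segmentation rule are placeholders, not the paper's tuned hyperparameters.

```python
import torch
import torch.nn as nn

class GaitEncoder(nn.Module):
    """LSTM that maps a gait cycle (T x 6 accel+gyro channels) to an embedding."""
    def __init__(self, in_ch=6, hidden=64, emb=32):
        super().__init__()
        self.lstm = nn.LSTM(in_ch, hidden, batch_first=True)
        self.head = nn.Linear(hidden, emb)

    def forward(self, x):             # x: (batch, T, in_ch)
        _, (h, _) = self.lstm(x)      # h: (num_layers, batch, hidden)
        return self.head(h[-1])       # (batch, emb) feature vectors

def segment_cycles(stream, cycle_len=128):
    """Naive fixed-length segmentation; real systems detect cycles from peaks."""
    n = stream.shape[0] // cycle_len
    return stream[: n * cycle_len].reshape(n, cycle_len, stream.shape[1])

# Toy usage: 10 s of fake 6-axis IMU data at ~100 Hz.
stream = torch.randn(1000, 6)
cycles = segment_cycles(stream)
encoder = GaitEncoder()
features = encoder(cycles)
print(features.shape)  # (num_cycles, 32) embeddings to compare between users
```

In a verification setting, the EER reported above would be computed from distances between such embeddings for genuine and impostor pairs.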

13 pages, 1084 KiB  
Article
User Identification from Gait Analysis Using Multi-Modal Sensors in Smart Insole
by Sang-Il Choi, Jucheol Moon, Hee-Chan Park and Sang Tae Choi
Sensors 2019, 19(17), 3785; https://0-doi-org.brum.beds.ac.uk/10.3390/s19173785 - 31 Aug 2019
Cited by 23 | Viewed by 5071
Abstract
Recent studies indicate that individuals can be identified by their gait pattern. A number of sensors, including vision, acceleration, and pressure sensors, have been used to capture human gait patterns, and a number of methods have been developed to recognize individuals from these data. This study proposes a novel method of identifying individuals using null-space linear discriminant analysis on gait pattern data. The data consist of time-series pressure and acceleration measurements from multi-modal sensors in a smart insole worn while walking. We compare the identification accuracies of three sensing modalities: acceleration, pressure, and both in combination. Experimental results show that the proposed multi-modal features identify 14 participants from their walking data with over 95% accuracy.
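Null-space linear discriminant analysis, the technique named here, looks for projection directions in which the within-class scatter is (near) zero while the between-class scatter is maximised, which suits the small-sample, high-dimensional regime of insole features. Below is a compact NumPy sketch of the general method, an illustration rather than the authors' code.

```python
import numpy as np

def null_space_lda(X, y, tol=1e-8):
    """Null-space LDA: find directions where within-class scatter vanishes
    but between-class scatter is large (useful when n_samples < n_features)."""
    classes = np.unique(y)
    mean = X.mean(axis=0)
    Sw = np.zeros((X.shape[1], X.shape[1]))  # within-class scatter
    Sb = np.zeros_like(Sw)                   # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        Sb += len(Xc) * np.outer(mc - mean, mc - mean)
    # Basis of the null space of Sw (eigenvectors with ~zero eigenvalue).
    evals, evecs = np.linalg.eigh(Sw)
    N = evecs[:, evals < tol]
    # Maximise between-class scatter inside that null space.
    evals_b, evecs_b = np.linalg.eigh(N.T @ Sb @ N)
    order = np.argsort(evals_b)[::-1][: len(classes) - 1]
    return N @ evecs_b[:, order]  # projection matrix, one column per axis

# Toy usage: 6 samples of 20-D 'insole' features from 3 users.
rng = np.random.default_rng(1)
X = rng.normal(size=(6, 20))
y = np.array([0, 0, 1, 1, 2, 2])
W = null_space_lda(X, y)
print((X @ W).shape)  # samples projected to a discriminative subspace
```

Classification then reduces to nearest-class-mean (or nearest-neighbour) matching in the projected subspace.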
