Biometric Systems for Personal Human Recognition

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: closed (30 April 2023) | Viewed by 40798

Special Issue Editors


Prof. Dr. Dan Istrate
Guest Editor
CNRS, Biomechanics and Bioengineering BMBI UMR 7338, Centre de Recherche Royallieu, Université de Technologie de Compiègne, Alliance Sorbonne Université, CS 60319, 60203 Compiègne Cedex, France
Interests: biomedical signal processing; connected objects; E-health

Prof. Dr. Imad Rida
Guest Editor
Laboratoire Biomécanique et Bioingénierie UMR 7338, Université de Technologie de Compiègne, Centre de Recherches de Royallieu, CS 20529, 60205 Compiègne Cedex, France
Interests: biometrics; machine learning; pattern recognition

Prof. Dr. Lunke Fei
Guest Editor
Guangdong University of Technology, Guangzhou 510006, China
Interests: biometrics; machine learning; palmprint

Special Issue Information

Dear Colleagues,

At present, biometric systems are increasingly being used by government agencies and private industry to verify a person’s identity, secure national borders, and restrict access to secure sites, including buildings and computer networks. Biometric systems recognize a person based either on physiological characteristics, such as fingerprints, hand geometry, facial features, and iris patterns, or on learned or acquired behavioral characteristics, such as how a person signs their name, their typing rhythm, or even their walking pattern.

In intelligent systems, embedded sensors such as digital cameras and microphones have shown a good ability to capture information in much the same manner as a human perceives it. However, machines do not inherently have the ability to analyze and interpret this information, or to extract from it what is useful for making relevant decisions. Fortunately, with the development of machine learning and artificial intelligence techniques, this has become possible.

This Special Issue aims to highlight advances in machine learning, pattern recognition, and signal/image processing as they relate to biometric recognition systems.

Potential topics include but are not limited to the following:

  • Biometric systems
  • Face recognition
  • Gait recognition
  • Palmprint recognition
  • Iris recognition
  • Speaker recognition
  • Multimodal recognition
  • Forensics

Prof. Dr. Dan Istrate
Prof. Dr. Imad Rida
Prof. Dr. Lunke Fei
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Biometrics
  • Face
  • Gait
  • Palmprint
  • Iris

Published Papers (17 papers)


Research


20 pages, 7824 KiB  
Article
Deep Metric Learning for Scalable Gait-Based Person Re-Identification Using Force Platform Data
by Kayne A. Duncanson, Simon Thwaites, David Booth, Gary Hanly, William S. P. Robertson, Ehsan Abbasnejad and Dominic Thewlis
Sensors 2023, 23(7), 3392; https://doi.org/10.3390/s23073392 - 23 Mar 2023
Cited by 2 | Viewed by 1820
Abstract
Walking gait data acquired with force platforms may be used for person re-identification (re-ID) in various authentication, surveillance, and forensics applications. Current force platform-based re-ID systems classify a fixed set of identities (IDs), which presents a problem when IDs are added or removed from the database. We formulated force platform-based re-ID as a deep metric learning (DML) task, whereby a deep neural network learns a feature representation that can be compared between inputs using a distance metric. The force platform dataset used in this study is one of the largest and the most comprehensive of its kind, containing 193 IDs with significant variations in clothing, footwear, walking speed, and time between trials. Several DML model architectures were evaluated in a challenging setting where none of the IDs were seen during training (i.e., zero-shot re-ID) and there was only one prior sample per ID to compare with each query sample. The best architecture was 85% accurate in this setting, though an analysis of changes in walking speed and footwear between measurement instances revealed that accuracy was 28% higher on same-speed, same-footwear comparisons, compared to cross-speed, cross-footwear comparisons. These results demonstrate the potential of DML algorithms for zero-shot re-ID using force platform data, and highlight challenging cases. Full article
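
As a rough illustration of the deep metric learning formulation described above — an embedding network trained with a distance-based objective and compared against a one-sample gallery of unseen identities — the following Python sketch uses PyTorch. The network shape, triplet loss, feature dimensionality, and random data are illustrative assumptions, not the authors' force-platform pipeline.

```python
# Minimal deep-metric-learning sketch for zero-shot re-ID (illustrative only).
# The feature size, network, and training data below are placeholders, not the
# authors' force-platform pipeline.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaitEmbedder(nn.Module):
    def __init__(self, in_dim=600, emb_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, emb_dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)  # unit-length embeddings

model = GaitEmbedder()
criterion = nn.TripletMarginLoss(margin=0.3)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy training step on random "force platform" vectors (anchor/positive share an ID).
anchor, positive, negative = (torch.randn(32, 600) for _ in range(3))
loss = criterion(model(anchor), model(positive), model(negative))
optimizer.zero_grad()
loss.backward()
optimizer.step()

# Zero-shot re-ID: one enrolled sample per unseen ID, nearest gallery embedding wins.
with torch.no_grad():
    gallery = model(torch.randn(10, 600))   # one prior sample for each of 10 new IDs
    query = model(torch.randn(1, 600))
    predicted_id = torch.argmax(query @ gallery.T).item()  # cosine similarity
print("matched gallery ID:", predicted_id)
```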

16 pages, 2561 KiB  
Article
Leveraging Multiple Distinct EEG Training Sessions for Improvement of Spectral-Based Biometric Verification Results
by Renata Plucińska, Konrad Jędrzejewski, Urszula Malinowska and Jacek Rogala
Sensors 2023, 23(4), 2057; https://doi.org/10.3390/s23042057 - 11 Feb 2023
Cited by 2 | Viewed by 1010
Abstract
Most studies on EEG-based biometry recognition report results based on signal databases, with a limited number of recorded EEG sessions using the same single EEG recording for both training and testing a proposed model. However, the EEG signal is highly vulnerable to interferences, electrode placement, and temporary conditions, which can lead to overestimated assessments of the considered methods. Our study examined how different numbers of distinct recording sessions used as training sessions would affect EEG-based verification. We analyzed the original data from 29 participants with 20 distinct recorded sessions each, as well as 23 additional impostors with only one session each. We applied raw coefficients of power spectral density estimate, and the coefficients of power spectral density estimate converted to the decibel scale, as the input to a shallow neural network. Our study showed that the variance introduced by multiple recording sessions affects sensitivity. We also showed that increasing the number of sessions above eight did not improve the results under our conditions. For 15 training sessions, the achieved accuracy was 96.7 ± 4.2%, and for eight training sessions and 12 test sessions, it was 94.9 ± 4.6%. For 15 training sessions, the rate of successful impostor attacks over all attack attempts was 3.1 ± 2.2%, but this number was not significantly different from using six recording sessions for training. Our findings indicate the need to include data from multiple recording sessions in EEG-based recognition for training, and that increasing the number of test sessions did not significantly affect the obtained results. Although the presented results are for the resting-state, they may serve as a baseline for other paradigms. Full article
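
A minimal sketch of the feature pipeline described above — Welch power spectral density coefficients, optionally converted to the decibel scale, fed to a shallow neural network — is given below using SciPy and scikit-learn. The sampling rate, channel count, epoch length, and classifier size are assumptions.

```python
# Sketch: PSD features (raw or in dB) from multi-session EEG fed to a shallow MLP.
# Sampling rate, epoch length, and network size are illustrative assumptions.
import numpy as np
from scipy.signal import welch
from sklearn.neural_network import MLPClassifier

fs = 256                      # Hz (assumed)
rng = np.random.default_rng(0)

def psd_features(epochs, to_db=False):
    """epochs: (n_epochs, n_channels, n_samples) -> (n_epochs, n_features)."""
    f, pxx = welch(epochs, fs=fs, nperseg=fs)          # PSD per channel
    pxx = 10 * np.log10(pxx) if to_db else pxx         # optional dB conversion
    return pxx.reshape(len(epochs), -1)

# Toy data: 20 epochs x 4 channels x 2 s for each of 3 enrolled subjects.
X_train = psd_features(rng.standard_normal((60, 4, 2 * fs)), to_db=True)
y_train = np.repeat([0, 1, 2], 20)
X_test = psd_features(rng.standard_normal((15, 4, 2 * fs)), to_db=True)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print(clf.predict(X_test))
```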

19 pages, 1238 KiB  
Article
Ensemble of Heterogeneous Base Classifiers for Human Gait Recognition
by Marcin Derlatka and Marta Borowska
Sensors 2023, 23(1), 508; https://doi.org/10.3390/s23010508 - 2 Jan 2023
Cited by 9 | Viewed by 2131
Abstract
Human gait recognition is one of the most interesting issues within the subject of behavioral biometrics. The most significant problems connected with the practical application of biometric systems include their accuracy as well as the speed at which they operate, understood both as the time needed to recognize a particular person and the time necessary to create and train a biometric system. The present study made use of an ensemble of heterogeneous base classifiers to address these issues. A heterogeneous ensemble is a group of classification models trained using various algorithms and combined to produce an effective recognition decision. A group of parameters identified on the basis of ground reaction forces was accepted as input signals. The proposed solution was tested on a sample of 322 people (5980 gait cycles). Results concerning the accuracy of recognition (meaning the Correct Classification Rate quality at 99.65%), as well as operation time (meaning the time of model construction at <12.5 min and the time needed to recognize a person at <0.1 s), should be considered very good and exceed in quality other methods described so far in the literature. Full article
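
A heterogeneous ensemble of this kind can be sketched with scikit-learn's soft-voting combiner, as below. The particular base learners and the synthetic ground-reaction-force parameters are assumptions rather than the configuration used in the paper.

```python
# Sketch of a heterogeneous ensemble: different algorithms, one soft-voting decision.
# The base learners and random "GRF parameter" vectors are illustrative assumptions.
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.standard_normal((300, 40))        # 40 GRF-derived parameters per gait cycle
y = rng.integers(0, 10, size=300)         # 10 hypothetical subjects

ensemble = VotingClassifier(
    estimators=[
        ("knn", KNeighborsClassifier(n_neighbors=5)),
        ("svm", SVC(probability=True)),
        ("tree", DecisionTreeClassifier(max_depth=8)),
        ("mlp", MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)),
    ],
    voting="soft",                         # average class probabilities
)
ensemble.fit(X, y)
print("CCR on training data:", ensemble.score(X, y))
```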

21 pages, 5777 KiB  
Article
Individual Identification by Late Information Fusion of EmgCNN and EmgLSTM from Electromyogram Signals
by Yeong-Hyeon Byeon and Keun-Chang Kwak
Sensors 2022, 22(18), 6770; https://doi.org/10.3390/s22186770 - 7 Sep 2022
Cited by 3 | Viewed by 1407
Abstract
This paper is concerned with individual identification by late fusion of two-stream deep networks from electromyogram (EMG) signals. When used for biometrics, the EMG signal offers a security advantage over visually exposed biosignals, such as the face, iris, and fingerprints, because it is measured through contact and involves no visual exposure. Thus, we propose an ensemble deep learning model based on the late information fusion of convolutional neural networks (CNN) and long short-term memory (LSTM) applied to EMG signals for robust and discriminative biometrics. For this purpose, in the ensemble model’s first stream, one-dimensional EMG signals were converted into a time–frequency representation to train a two-dimensional convolutional neural network (EmgCNN). In the second stream, statistical features were extracted from one-dimensional EMG signals to train a long short-term memory network (EmgLSTM) that uses sequence input. Here, the EMG signals were divided into fixed lengths, and feature values were calculated for each interval. Late information fusion is performed on the output scores of the two deep learning models to obtain a final classification result. To confirm the superiority of the proposed method, we use an EMG database constructed at Chosun University and a public EMG database. The experimental results revealed that the proposed method showed a performance improvement of 10.76% on average compared to a single stream and the previous methods. Full article
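
The late (score-level) fusion step — the key combination of the two streams — reduces to merging the class-probability outputs of the CNN and LSTM branches. A minimal sketch follows; the equal stream weights and random scores are placeholders.

```python
# Sketch of late (score-level) fusion of two streams, e.g. a CNN over time-frequency
# maps and an LSTM over statistical features. The scores here are random placeholders.
import numpy as np

rng = np.random.default_rng(2)
n_classes = 20

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

scores_cnn = softmax(rng.standard_normal((1, n_classes)))   # stand-in for EmgCNN output
scores_lstm = softmax(rng.standard_normal((1, n_classes)))  # stand-in for EmgLSTM output

w_cnn, w_lstm = 0.5, 0.5                 # assumed equal weights for the two streams
fused = w_cnn * scores_cnn + w_lstm * scores_lstm
print("identified subject:", int(fused.argmax()))
```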

15 pages, 645 KiB  
Article
Augmented PIN Authentication through Behavioral Biometrics
by Matteo Nerini, Elia Favarelli and Marco Chiani
Sensors 2022, 22(13), 4857; https://doi.org/10.3390/s22134857 - 27 Jun 2022
Cited by 3 | Viewed by 1851
Abstract
Personal Identification Numbers (PINs) are widely used today for user authentication on mobile devices. However, this authentication method can be subject to several attacks such as phishing, smudge, and side-channel. In this paper, we increase the security of PIN-based authentication by considering behavioral biometrics, specifically the smartphone movements typical of each user. To this end, we propose a method based on anomaly detection that is capable of recognizing whether the PIN is inserted by the smartphone owner or by an attacker. This decision is taken according to the smartphone movements, which are recorded during the PIN insertion through the built-in motion sensors. For each digit in the PIN, an anomaly score is computed using Machine Learning (ML) techniques. Subsequently, these scores are combined to obtain the final decision metric. Numerical results show that our authentication method can achieve an Equal Error Rate (EER) as low as 5% in the case of 4-digit PINs, and 4% in the case of 6-digit PINs. Considering a reduced training set, composed of solely 50 samples, the EER only slightly worsens, reaching 6%. The practicality of our approach is further confirmed by the low processing time required, on the order of fractions of milliseconds. Full article
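
One way to realize the per-digit anomaly scoring described above is to fit a one-class model per digit position on the motion-sensor window recorded while that digit is typed, and to average the resulting scores. The Isolation Forest, the mean/standard-deviation window features, and the threshold in the sketch below are assumptions, not the paper's exact pipeline.

```python
# Sketch: per-digit anomaly scores from motion-sensor windows, combined into one
# decision metric. Isolation Forest and simple statistics are illustrative choices.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(3)
PIN_LENGTH = 4

def window_features(window):
    """Summarize one accelerometer/gyroscope window (n_samples, 6 axes)."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

# Enrollment: the owner types the PIN many times; one model per digit position.
owner_windows = rng.standard_normal((50, PIN_LENGTH, 100, 6))
models = []
for d in range(PIN_LENGTH):
    X = np.stack([window_features(w[d]) for w in owner_windows])
    models.append(IsolationForest(random_state=0).fit(X))

# Verification: score each digit of a new attempt and average the anomaly scores.
attempt = rng.standard_normal((PIN_LENGTH, 100, 6))
scores = [models[d].score_samples(window_features(attempt[d]).reshape(1, -1))[0]
          for d in range(PIN_LENGTH)]
decision_metric = np.mean(scores)        # higher = more owner-like
print("accept" if decision_metric > -0.5 else "reject")  # threshold tuned on validation data
```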

13 pages, 3423 KiB  
Article
A Finger Vein Feature Extraction Method Incorporating Principal Component Analysis and Locality Preserving Projections
by Dingzhong Feng, Shanyu He, Zihao Zhou and Ye Zhang
Sensors 2022, 22(10), 3691; https://doi.org/10.3390/s22103691 - 12 May 2022
Cited by 6 | Viewed by 1680
Abstract
In the field of biometric recognition, finger vein recognition has received widespread attention by virtue of its advantages, such as being an in vivo trait that is not easy to steal. However, due to the limitations of acquisition conditions such as noise and illumination, as well as limited computational resources, the discriminative features extracted from finger vein images are not comprehensive enough. As a result, the recognition accuracy cannot meet the needs of large numbers of users and high security. Therefore, this paper proposes a novel feature extraction method called principal component local preservation projections (PCLPP). It organically combines principal component analysis (PCA) and locality preserving projections (LPP) and constructs a projection matrix that preserves both the global and local features of the image, meeting the urgent needs of large numbers of users and high security. In this paper, we apply the Shandong University homologous multi-modal traits (SDUMLA-HMT) finger vein database to evaluate PCLPP and add "salt and pepper" noise to the dataset to verify its robustness. The experimental results show that the image recognition rate after applying PCLPP is much better than that of the other two feature extraction methods, PCA and LPP. Full article
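
For background, the sketch below shows the classic two-stage PCA + LPP composition (PCA for global structure, locality preserving projections for local structure); the paper's PCLPP combines the two into a single projection matrix, so the code is only an approximation of the idea. The neighborhood size, binary graph weights, and dimensionalities are assumptions.

```python
# Sketch of a PCA + LPP projection chain (global + local structure). This is the
# standard two-stage composition, not the paper's PCLPP construction.
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist
from sklearn.decomposition import PCA

def lpp(X, n_components=20, k=5):
    """Locality preserving projections on row vectors X (n_samples, n_features)."""
    d2 = cdist(X, X, "sqeuclidean")
    W = np.zeros_like(d2)
    for i in range(len(X)):                       # symmetric kNN graph, binary weights
        nbrs = np.argsort(d2[i])[1:k + 1]
        W[i, nbrs] = 1.0
    W = np.maximum(W, W.T)
    D = np.diag(W.sum(axis=1))
    L = D - W
    # Generalized eigenproblem: X^T L X a = lam X^T D X a, smallest eigenvalues first.
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-6 * np.eye(X.shape[1])   # regularize for numerical stability
    vals, vecs = eigh(A, B)
    return vecs[:, :n_components]

rng = np.random.default_rng(4)
X = rng.standard_normal((200, 64 * 64))           # 200 flattened finger-vein images
pca = PCA(n_components=80).fit(X)
X_pca = pca.transform(X)
P_lpp = lpp(X_pca, n_components=20)
features = X_pca @ P_lpp                          # global (PCA) then local (LPP) projection
print(features.shape)                             # (200, 20)
```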

14 pages, 1107 KiB  
Article
A Simple and Efficient Method for Finger Vein Recognition
by Zhongxia Zhang and Mingwen Wang
Sensors 2022, 22(6), 2234; https://doi.org/10.3390/s22062234 - 14 Mar 2022
Cited by 8 | Viewed by 2215
Abstract
Finger vein recognition has drawn increasing attention as one of the most popular and promising biometrics due to its high distinguishing ability, security, and non-invasive procedure. The main idea of traditional schemes is to directly extract features from finger vein images and then compare features to find the best match. However, the features extracted from images contain much redundant data, while the features extracted from patterns are greatly influenced by image segmentation methods. To tackle these problems, this paper proposes a new finger vein recognition algorithm by generating code. The proposed method does not require an image segmentation algorithm, is simple to calculate, and has a small amount of data. Firstly, the finger vein images were divided into blocks to calculate the mean value. Then, the centrosymmetric coding was performed using the matrix generated by blocking and averaging. The obtained codewords were concatenated as the feature codewords of the image. The similarity between vein codes is measured by the ratio of minimum Hamming distance to codeword length. Extensive experiments on two public finger vein databases verify the effectiveness of the proposed method. The results indicate that our method outperforms the state-of-the-art methods and has competitive potential in performing the matching task. Full article
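
The coding scheme lends itself to a compact sketch: block averaging, a centrosymmetric comparison (interpreted here as comparing each block mean with the mean of its centrally symmetric counterpart), and the ratio of Hamming distance to codeword length. The block size and the exact comparison rule are assumptions.

```python
# Sketch: block means -> centrosymmetric binary code -> normalized Hamming distance.
# The block size and the exact definition of "centrosymmetric coding" are assumed.
import numpy as np

def block_means(img, block=8):
    h, w = (s - s % block for s in img.shape)          # crop to a multiple of the block
    m = img[:h, :w].reshape(h // block, block, w // block, block)
    return m.mean(axis=(1, 3))

def centrosymmetric_code(img, block=8):
    m = block_means(img, block).ravel()
    flipped = m[::-1]                                   # centrally symmetric counterpart
    half = len(m) // 2
    return (m[:half] > flipped[:half]).astype(np.uint8) # 1 bit per symmetric pair

def vein_distance(img1, img2, block=8):
    c1, c2 = centrosymmetric_code(img1, block), centrosymmetric_code(img2, block)
    return np.count_nonzero(c1 != c2) / len(c1)         # Hamming distance / code length

rng = np.random.default_rng(5)
a, b = rng.random((128, 256)), rng.random((128, 256))
print(vein_distance(a, a), vein_distance(a, b))         # 0.0 vs ~0.5 for unrelated images
```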

20 pages, 1496 KiB  
Article
Initial Study Using Electrocardiogram for Authentication and Identification
by Teresa M. C. Pereira, Raquel C. Conceição and Raquel Sebastião
Sensors 2022, 22(6), 2202; https://doi.org/10.3390/s22062202 - 11 Mar 2022
Cited by 6 | Viewed by 3244
Abstract
Recently, several studies have demonstrated the potential of electrocardiogram (ECG) to be used as a physiological signature for biometric systems (BS). We investigated the potential of ECG as a biometric trait for the identification and authentication of individuals. We used data from a public database, CYBHi, containing two off-the-person records from 63 subjects, separated by 3 months. For the BS, two templates were generated: (1) cardiac cycles (CC) and (2) scalograms. The identification with CC was performed with LDA, kNN, DT, and SVM, whereas a convolutional neural network (CNN) and a distance-based algorithm were used for scalograms. The authentication was performed with a distance-based algorithm, with a leave-one-out cross validation, for impostors evaluation. The identification system yielded accuracies of 79.37% and 69.84% for CC with LDA and scalograms with CNN, respectively. The authentication yielded an accuracy of 90.48% and an impostor score of 13.06% for CC, and it had an accuracy of 98.42% and an impostor score of 14.34% for scalograms. The obtained results support the claim that ECG can be successfully used for personal recognition. To the best of our knowledge, our study is the first to thoroughly compare templates and methodologies to optimize the performance of an ECG-based biometric system. Full article
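
The distance-based authentication step can be sketched as comparing a query cardiac cycle with a subject's enrolled template and accepting when the distance falls below a threshold. The template averaging, Euclidean distance, and threshold value below are assumptions.

```python
# Sketch of distance-based ECG authentication on cardiac-cycle templates.
# The averaging of enrollment cycles, Euclidean distance, and threshold are assumed.
import numpy as np

rng = np.random.default_rng(6)
CYCLE_LEN = 200                                   # samples per segmented cardiac cycle

def enroll(cycles):
    """Average the enrollment cycles into one template per subject."""
    return np.mean(cycles, axis=0)

def authenticate(query_cycle, template, threshold):
    dist = np.linalg.norm(query_cycle - template)
    return dist <= threshold, dist

# Toy enrollment and verification for one subject.
subject_cycles = rng.standard_normal((30, CYCLE_LEN)) * 0.1 + np.sin(np.linspace(0, 6, CYCLE_LEN))
template = enroll(subject_cycles)
genuine = subject_cycles[0]
impostor = rng.standard_normal(CYCLE_LEN)

threshold = 3.0                                   # in practice tuned on validation data
print(authenticate(genuine, template, threshold))   # expected: accept
print(authenticate(impostor, template, threshold))  # expected: reject
```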

20 pages, 6016 KiB  
Article
Attention-Based Temporal-Frequency Aggregation for Speaker Verification
by Meng Wang, Dazheng Feng, Tingting Su and Mohan Chen
Sensors 2022, 22(6), 2147; https://doi.org/10.3390/s22062147 - 10 Mar 2022
Cited by 4 | Viewed by 1787
Abstract
Convolutional neural networks (CNNs) have significantly promoted the development of speaker verification (SV) systems because of their powerful deep feature learning capability. In CNN-based SV systems, utterance-level aggregation is an important component, and it compresses the frame-level features generated by the CNN frontend into an utterance-level representation. However, most of the existing aggregation methods aggregate the extracted features across time and cannot capture the speaker-dependent information contained in the frequency domain. To handle this problem, this paper proposes a novel attention-based frequency aggregation method, which focuses on the key frequency bands that provide more information for utterance-level representation. Meanwhile, two more effective temporal-frequency aggregation methods are proposed in combination with the existing temporal aggregation methods. The two proposed methods can capture the speaker-dependent information contained in both the time domain and frequency domain of frame-level features, thus improving the discriminability of speaker embedding. Besides, a powerful CNN-based SV system is developed and evaluated on the TIMIT and Voxceleb datasets. The experimental results indicate that the CNN-based SV system using the temporal-frequency aggregation method achieves a superior equal error rate of 5.96% on Voxceleb compared with the state-of-the-art baseline models. Full article
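
The frequency-attention idea — weighting frequency bands of the CNN's frame-level features before pooling them into an utterance-level embedding — can be sketched in PyTorch as below. The layer sizes and the simple mean pooling over time are assumptions, not the paper's exact aggregation.

```python
# Sketch: attention over the frequency axis of CNN frame-level features (C, F, T),
# followed by mean pooling over time. Layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class FrequencyAttentionPool(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.scorer = nn.Sequential(               # one attention score per frequency bin
            nn.Linear(channels, 64), nn.Tanh(), nn.Linear(64, 1)
        )

    def forward(self, x):                          # x: (batch, C, F, T)
        t_pooled = x.mean(dim=3)                   # (batch, C, F): summary per frequency
        scores = self.scorer(t_pooled.transpose(1, 2))            # (batch, F, 1)
        weights = torch.softmax(scores, dim=1)                    # attention over frequency
        freq_agg = (t_pooled * weights.transpose(1, 2)).sum(dim=2)  # (batch, C)
        return freq_agg                            # utterance-level embedding

pool = FrequencyAttentionPool(channels=256)
frame_features = torch.randn(8, 256, 40, 300)     # batch of CNN feature maps
print(pool(frame_features).shape)                 # torch.Size([8, 256])
```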

18 pages, 6126 KiB  
Article
Ensemble-Based Bounding Box Regression for Enhanced Knuckle Localization
by Ritesh Vyas, Bryan M. Williams, Hossein Rahmani, Ricki Boswell-Challand, Zheheng Jiang, Plamen Angelov and Sue Black
Sensors 2022, 22(4), 1569; https://doi.org/10.3390/s22041569 - 17 Feb 2022
Cited by 2 | Viewed by 2193
Abstract
The knuckle creases present on the dorsal side of the human hand can play a significant role in identifying the offenders of serious crimes, especially when evidence images of more recognizable biometric traits, such as the face, are not available. These knuckle creases, if localized appropriately, can result in improved identification ability. This is attributed to the ambient inclusion of the creases and the minimal effect of the background, which lead to high-quality and discerning feature extraction. This paper presents an ensemble approach, utilizing multiple object detector frameworks, to localize the knuckle regions in a functionally appropriate way. The approach leverages the individual capabilities of popular object detectors and provides a more comprehensive knuckle region localization. The investigations are completed with two large-scale public hand databases, which consist of hand-dorsal images with varying backgrounds and finger positioning. In addition, the effectiveness of the proposed approach is also tested on a novel proprietary unconstrained multi-ethnic hand-dorsal dataset to evaluate its generalizability. Several novel performance metrics are tailored to evaluate the efficacy of the proposed knuckle localization approach. These metrics aim to measure the veracity of the detected knuckle regions in terms of their relation to the ground truth. The comparison of the proposed approach with individual object detectors and a state-of-the-art hand keypoint detector clearly establishes the outperforming nature of the proposed approach. The generalization of the proposed approach is also corroborated through the cross-dataset framework. Full article
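
The ensemble step — merging the boxes proposed by several detectors for the same knuckle into one localization — can be sketched as confidence-weighted box averaging. This is a generic fusion rule used for illustration, not necessarily the one adopted in the paper.

```python
# Sketch: fuse bounding boxes from several object detectors for one knuckle by
# confidence-weighted averaging. A generic fusion rule, shown here for illustration.
import numpy as np

def fuse_boxes(boxes, scores):
    """boxes: (n_detectors, 4) as [x1, y1, x2, y2]; scores: (n_detectors,)."""
    w = np.asarray(scores, dtype=float)
    w /= w.sum()
    return (np.asarray(boxes, dtype=float) * w[:, None]).sum(axis=0)

# Boxes for the same knuckle from three hypothetical detectors.
boxes = [[120, 80, 180, 140], [118, 78, 176, 138], [125, 83, 184, 145]]
scores = [0.92, 0.85, 0.60]
print(fuse_boxes(boxes, scores))
```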

16 pages, 5722 KiB  
Article
Identifying Voice Individuality Unaffected by Age-Related Voice Changes during Adolescence
by Natsumi Suzuki, Momoko Ishimaru, Itsuki Toyoshima and Yoshifumi Okada
Sensors 2022, 22(4), 1542; https://doi.org/10.3390/s22041542 - 17 Feb 2022
Viewed by 1763
Abstract
Identifying voice individuality is a key issue in the biometrics field. Previous studies have demonstrated that voice individuality is caused by differences in the shape and size of the vocal organs; however, these studies did not discuss voice individuality over a long term that includes periods of voice change. Therefore, we focus on adolescence (early teens to early twenties), which includes voice changes due to growth of vocal organs, and we reveal invariant voice individuality over a long period. In this study, the immature and mature periods during vocal organ development were defined as unstable and stable periods, respectively. We performed speaker verification tests across these two periods and evaluated voice features that are common to these periods using Fisher’s F-ratio. The results of the speaker verification test demonstrated a verification accuracy of 60% or more in most cases, and the results of the evaluation using Fisher’s F-ratio demonstrated that robust voice individuality existed in the frequency regions of 1–2 kHz and 4–6 kHz regardless of the period. These results suggest that voice individuality is unaffected by age-related changes over the long term, including adolescence. Full article
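
Fisher's F-ratio used here to locate speaker-discriminative frequency regions is the ratio of between-speaker to within-speaker variance per frequency bin; a minimal computation over per-utterance spectra is sketched below, with the spectra themselves as random placeholders.

```python
# Sketch: Fisher's F-ratio per frequency bin = variance of the speaker means divided
# by the average within-speaker variance. The spectra are random placeholders.
import numpy as np

rng = np.random.default_rng(7)
n_speakers, n_utts, n_bins = 12, 10, 257
spectra = rng.standard_normal((n_speakers, n_utts, n_bins))  # per-utterance log spectra

speaker_means = spectra.mean(axis=1)                 # (n_speakers, n_bins)
between = speaker_means.var(axis=0)                  # variance of speaker means per bin
within = spectra.var(axis=1).mean(axis=0)            # mean within-speaker variance per bin
f_ratio = between / within

top_bins = np.argsort(f_ratio)[::-1][:10]
print("most speaker-discriminative bins:", np.sort(top_bins))
```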

26 pages, 18641 KiB  
Article
Learning to Combine Local and Global Image Information for Contactless Palmprint Recognition
by Marjan Stoimchev, Marija Ivanovska and Vitomir Štruc
Sensors 2022, 22(1), 73; https://doi.org/10.3390/s22010073 - 23 Dec 2021
Cited by 4 | Viewed by 3196
Abstract
In the past few years, there has been a leap from traditional palmprint recognition methodologies, which use handcrafted features, to deep-learning approaches that are able to automatically learn feature representations from the input data. However, the information that is extracted from such deep-learning models typically corresponds to the global image appearance, where only the most discriminative cues from the input image are considered. This characteristic is especially problematic when data is acquired in unconstrained settings, as in the case of contactless palmprint recognition systems, where visual artifacts caused by elastic deformations of the palmar surface are typically present in spatially local parts of the captured images. In this study we address the problem of elastic deformations by introducing a new approach to contactless palmprint recognition based on a novel CNN model, designed as a two-path architecture, where one path processes the input in a holistic manner, while the second path extracts local information from smaller image patches sampled from the input image. As elastic deformations can be assumed to most significantly affect the global appearance, while having a lesser impact on spatially local image areas, the local processing path addresses the issues related to elastic deformations thereby supplementing the information from the global processing path. The model is trained with a learning objective that combines the Additive Angular Margin (ArcFace) Loss and the well-known center loss. By using the proposed model design, the discriminative power of the learned image representation is significantly enhanced compared to standard holistic models, which, as we show in the experimental section, leads to state-of-the-art performance for contactless palmprint recognition. Our approach is tested on two publicly available contactless palmprint datasets—namely, IITD and CASIA—and is demonstrated to perform favorably against state-of-the-art methods from the literature. The source code for the proposed model is made publicly available. Full article
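
The combined training objective — Additive Angular Margin (ArcFace) loss plus center loss — can be sketched in PyTorch as below. The scale, margin, feature dimensionality, and loss weighting are assumptions rather than the values used in the paper.

```python
# Sketch of the combined training objective: Additive Angular Margin (ArcFace) loss
# plus center loss. Margin, scale, feature size, and the weighting are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ArcFaceCenterLoss(nn.Module):
    def __init__(self, feat_dim=256, n_classes=100, s=30.0, m=0.5, center_weight=0.01):
        super().__init__()
        self.W = nn.Parameter(torch.randn(n_classes, feat_dim))
        self.centers = nn.Parameter(torch.randn(n_classes, feat_dim))
        self.s, self.m, self.center_weight = s, m, center_weight

    def forward(self, feats, labels):
        # ArcFace: add an angular margin m to the target-class angle, then scale by s.
        cosine = F.linear(F.normalize(feats), F.normalize(self.W)).clamp(-1 + 1e-7, 1 - 1e-7)
        theta = torch.acos(cosine)
        target = F.one_hot(labels, cosine.size(1)).bool()
        logits = self.s * torch.where(target, torch.cos(theta + self.m), cosine)
        arc_loss = F.cross_entropy(logits, labels)
        # Center loss: pull each feature towards its class center.
        center_loss = ((feats - self.centers[labels]) ** 2).sum(dim=1).mean()
        return arc_loss + self.center_weight * center_loss

criterion = ArcFaceCenterLoss()
feats = torch.randn(16, 256, requires_grad=True)   # embeddings from the palmprint CNN
labels = torch.randint(0, 100, (16,))
loss = criterion(feats, labels)
loss.backward()
print(float(loss))
```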

15 pages, 1320 KiB  
Article
Triple-Type Feature Extraction for Palmprint Recognition
by Lian Wu, Yong Xu, Zhongwei Cui, Yu Zuo, Shuping Zhao and Lunke Fei
Sensors 2021, 21(14), 4896; https://doi.org/10.3390/s21144896 - 19 Jul 2021
Cited by 15 | Viewed by 2670
Abstract
Palmprint recognition has received tremendous research interest due to its outstanding user-friendliness, such as its non-invasive and good hygiene properties. Most recent palmprint recognition studies, such as deep-learning methods, usually learn discriminative features from palmprint images, which usually requires a large number of labeled samples to achieve reasonably good recognition performance. However, palmprint samples are usually limited because it is relatively difficult to collect enough of them, making most existing deep-learning-based methods ineffective. In this paper, we propose a heuristic palmprint recognition method by extracting triple types of palmprint features without requiring any training samples. We first extract the most important inherent features of a palmprint, including the texture, gradient, and direction features, and encode them into triple-type feature codes. Then, we use the block-wise histograms of the triple-type feature codes to form the triple feature descriptors for palmprint representation. Finally, we employ weighted matching-score-level fusion of the triple-type feature descriptors to calculate the similarity between two compared palmprint images for palmprint recognition. Extensive experimental results on three widely used palmprint databases clearly show the promising effectiveness of the proposed method. Full article
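
The representation-and-matching pipeline — three feature code maps, block-wise histograms concatenated into descriptors, and weighted matching-score-level fusion — can be sketched as follows. The code extractors are trivial stand-ins for the paper's texture, gradient, and direction codes, and the fusion weights and histogram-intersection score are assumptions.

```python
# Sketch: block-wise histograms of three code maps and weighted score-level fusion.
# The three "codes" below are trivial stand-ins for texture/gradient/direction codes.
import numpy as np

def code_maps(img):
    gy, gx = np.gradient(img.astype(float))
    texture = (img > img.mean()).astype(int)                     # stand-in texture code
    gradient = np.digitize(np.hypot(gx, gy), [0.1, 0.3, 0.6])    # stand-in gradient code
    direction = np.digitize(np.arctan2(gy, gx), np.linspace(-np.pi, np.pi, 9)[1:-1])
    return [texture, gradient, direction]

def block_histograms(code, n_bins, block=16):
    h, w = (s - s % block for s in code.shape)
    blocks = code[:h, :w].reshape(h // block, block, w // block, block)
    blocks = blocks.transpose(0, 2, 1, 3).reshape(-1, block * block)
    hists = [np.bincount(b, minlength=n_bins) for b in blocks]
    return np.concatenate(hists).astype(float)

def match_score(img1, img2, weights=(0.4, 0.3, 0.3)):
    score = 0.0
    for w, c1, c2, bins in zip(weights, code_maps(img1), code_maps(img2), (2, 4, 8)):
        h1, h2 = block_histograms(c1, bins), block_histograms(c2, bins)
        score += w * np.minimum(h1, h2).sum() / h1.sum()         # histogram intersection
    return score                                                  # higher = more similar

rng = np.random.default_rng(8)
a, b = rng.random((128, 128)), rng.random((128, 128))
print(match_score(a, a), match_score(a, b))
```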

15 pages, 2013 KiB  
Article
A Multifeature Learning and Fusion Network for Facial Age Estimation
by Yulan Deng, Shaohua Teng, Lunke Fei, Wei Zhang and Imad Rida
Sensors 2021, 21(13), 4597; https://doi.org/10.3390/s21134597 - 5 Jul 2021
Cited by 16 | Viewed by 2835
Abstract
Age estimation from face images has attracted much attention due to its many favorable real-world applications, such as video surveillance and social networking. However, most existing studies usually learn a single kind of age feature and ignore other appearance features, such as gender and race, which have a great influence on the age pattern. In this paper, we proposed a compact multifeature learning and fusion method for age estimation. Specifically, we first used three subnetworks to learn gender, race, and age information. Then, we fused these complementary features to form more robust features for age estimation. Finally, we engineered a regression-ranking age-feature estimator to convert the fused features into exact age numbers. Experimental results on three benchmark databases demonstrated the effectiveness and efficiency of the proposed method for facial age estimation in comparison to previous state-of-the-art methods. Moreover, compared with previous state-of-the-art methods, our model is more compact, with only a 20 MB memory overhead, and is suitable for deployment on mobile or embedded devices for age estimation. Full article

16 pages, 1015 KiB  
Article
Breast Mass Detection in Mammography Based on Image Template Matching and CNN
by Lilei Sun, Huijie Sun, Junqian Wang, Shuai Wu, Yong Zhao and Yong Xu
Sensors 2021, 21(8), 2855; https://doi.org/10.3390/s21082855 - 18 Apr 2021
Cited by 20 | Viewed by 3421
Abstract
In recent years, computer vision technology has been widely used in the field of medical image processing. However, there is still a big gap between existing breast mass detection methods and real-world application due to their limited detection accuracy. It is known that humans quickly locate regions of interest and then identify whether these regions are the targets being sought. In breast cancer diagnosis, we locate all the potential breast mass regions by glancing over the mammographic image from top to bottom and from left to right, and then further identify whether these regions contain a breast mass. Inspired by this process, we propose a novel method that detects breast masses on a mammographic image by simulating the process of human detection. The proposed method preprocesses the mammographic image via mathematical morphology and locates the suspected breast mass regions with an image template matching method. Then, it obtains the breast mass regions by classifying these suspected regions into breast mass and background categories using a convolutional neural network (CNN). The bounding boxes obtained by the mathematical morphology and image template matching methods are rough, because the mathematical morphology method transforms all of the brighter regions into approximately circular areas. For regression of a breast mass bounding box, the optimal solution should be searched for in the feasible region, and Particle Swarm Optimization (PSO) is suitable for searching for the optimal solution within a certain range. Therefore, we refine the bounding box of the breast mass with the PSO algorithm. The proposed breast mass detection method and the compared detection methods were evaluated on the open Digital Database for Screening Mammography (DDSM). The experimental results demonstrate that the proposed method is superior to all of the compared detection methods in detection performance. Full article
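
The candidate-localization stage — mathematical morphology preprocessing followed by image template matching — can be sketched with OpenCV as below. The circular template, kernel size, and response threshold are assumptions; the subsequent CNN classification and PSO refinement are not shown.

```python
# Sketch of the candidate-localization stage: morphological preprocessing followed by
# normalized template matching. The circular template and the threshold are assumptions.
import cv2
import numpy as np

rng = np.random.default_rng(9)
mammogram = (rng.random((512, 512)) * 255).astype(np.uint8)   # placeholder image

# Morphological opening to suppress small bright structures before matching.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
smoothed = cv2.morphologyEx(mammogram, cv2.MORPH_OPEN, kernel)

# A bright disc as a crude "mass-like" template.
template = np.zeros((64, 64), np.uint8)
cv2.circle(template, (32, 32), 24, 255, -1)

response = cv2.matchTemplate(smoothed, template, cv2.TM_CCOEFF_NORMED)
ys, xs = np.where(response > 0.3)                             # suspected-region corners
candidates = [(x, y, 64, 64) for x, y in zip(xs, ys)]         # to be classified by a CNN
print(len(candidates), "candidate regions")
```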

Review


36 pages, 3479 KiB  
Review
Person Recognition Based on Deep Gait: A Survey
by Md. Khaliluzzaman, Ashraf Uddin, Kaushik Deb and Md Junayed Hasan
Sensors 2023, 23(10), 4875; https://doi.org/10.3390/s23104875 - 18 May 2023
Viewed by 2160
Abstract
Gait recognition, also known as walking pattern recognition, has drawn deep interest from the computer vision and biometrics community due to its potential to identify individuals from a distance. It has attracted increasing attention due to its potential applications and non-invasive nature. Since 2014, deep learning approaches have shown promising results in gait recognition by automatically extracting features. However, recognizing gait accurately is challenging due to covariate factors, the complexity and variability of environments, and human body representations. This paper provides a comprehensive overview of the advancements made in this field, along with the challenges and limitations associated with deep learning methods. To that end, it first examines the various gait datasets used in the literature and analyzes the performance of state-of-the-art techniques. After that, a taxonomy of deep learning methods is presented to characterize and organize the research landscape in this field. Furthermore, the taxonomy highlights the basic limitations of deep learning methods in the context of gait recognition. The paper concludes by focusing on the present challenges and suggesting several research directions to improve the performance of gait recognition in the future. Full article

29 pages, 4578 KiB  
Review
Biometrics: Going 3D
by Gerasimos G. Samatas and George A. Papakostas
Sensors 2022, 22(17), 6364; https://doi.org/10.3390/s22176364 - 24 Aug 2022
Cited by 3 | Viewed by 2412
Abstract
Biometrics have been used to identify humans since the 19th century. Over time, these biometrics became 3D. The main reason for this was the growing need for more features in the images to create more reliable identification models. This work is a comprehensive review of 3D biometrics since 2011 and presents the related work, the hardware used and the datasets available. The first taxonomy of 3D biometrics is also presented. The research was conducted using the Scopus database. Three main categories of 3D biometrics were identified. These were face, hand and gait. The corresponding percentages for these categories were 74.07%, 20.37% and 5.56%, respectively. The face is further categorized into facial, ear, iris and skull, while the hand is divided into fingerprint, finger vein and palm. In each category, facial and fingerprint were predominant, and their respective percentages were 80% and 54.55%. The use of the 3D reconstruction algorithms was also determined. These were stereo vision, structure-from-silhouette (SfS), structure-from-motion (SfM), structured light, time-of-flight (ToF), photometric stereo and tomography. Stereo vision and SfS were the most commonly used algorithms with a combined percentage of 51%. The state of the art for each category and the available datasets are also presented. Finally, multimodal biometrics, generalization of 3D reconstruction algorithms and anti-spoofing metrics are the three areas that should attract scientific interest for further research. In addition, the development of devices with 2D/3D capabilities and more publicly available datasets are suggested for further research. Full article
