Article

Multimodal Identification Based on Fingerprint and Face Images via a Hetero-Associative Memory Method

1 College of Intelligent Technology and Engineering, Chongqing University of Science and Technology, Chongqing 401331, China
2 College of Electrical Engineering, Chongqing University of Science and Technology, Chongqing 401331, China
* Author to whom correspondence should be addressed.
Submission received: 28 October 2021 / Revised: 15 November 2021 / Accepted: 16 November 2021 / Published: 22 November 2021
(This article belongs to the Special Issue Modeling and Analysis of Complex Networks)

Abstract

Multimodal identification, which exploits biometric information from more than one biometric modality, is more secure and reliable than unimodal identification. Face recognition and fingerprint recognition have received much attention in recent years for their unique advantages. However, how to integrate these two modalities into an effective multimodal identification system remains a challenging problem. Hetero-associative memory (HAM) models store patterns that can be reliably retrieved from other patterns in a robust way. Therefore, in this paper, face and fingerprint biometric features are integrated by means of a hetero-associative memory method for multimodal identification. The proposed multimodal identification system integrates face and fingerprint biometric features at the feature level when the system converges to the state of asymptotic stability. In Experiment 1, the fingerprint predicted from an authorized user's face is compared with the real fingerprint, and the matching rate of each group is higher than the given threshold. In Experiments 2 and 3, the fingerprint predicted from the face of an unauthorized user and from a stolen authorized user's face, respectively, is compared with the corresponding real fingerprint input, and the matching rate of each group is lower than the given threshold. The experimental results demonstrate the feasibility of the proposed multimodal identification system.

1. Introduction

With the rapid development of science and technology, people pay more attention to secure identification than ever before, and new theories and technologies for identity authentication continually emerge. Traditional identification methods include keys, passwords, codes, identification cards, and so on. One weakness of these methods is that unauthorized persons can fabricate or steal protected data and abuse the rights of authorized users to engage in illegal activities. Although these traditional identification technologies, which face various threats in the real world, still play an indispensable role in settings with low security requirements because of their convenience and low cost, more and more consumers and enterprises are choosing biometric identification in numerous fields. Biometric identification technologies such as face recognition [1,2,3,4], fingerprint recognition [5,6,7], and gait recognition [8,9,10] are more secure and convenient than traditional technologies.
Biometric identification refers to the automated recognition of individuals based on their biological or behavioral characteristics [11]. It is closely combined with high-tech means such as optics, acoustics, biosensors, and biostatistics. Biometrics finds its applications in the following areas: access control to facilities and computers, criminal identification, border security, access to nuclear power plants, identity authentication in network environments, airport security, issuance of passports or driver licenses, and forensic and medical databases [12]. Biometric identification can provide a well-rounded solution for system identification and maintain a reliable and secure system. Biometric technology has become a booming field and an important application area at the intersection of computer science and biology. Unimodal biometric systems, such as fingerprint identification and face identification systems, have been studied in many previous articles [6,13,14,15,16,17,18,19,20].
Through the studies of recent years, it is evident that multimodal biometric identification technologies, which use several kinds of biometric characteristics to identify individuals, are more secure and accurate than unimodal ones. They take advantage of multiple biometric traits to improve performance in many aspects, including accuracy, noise resistance, universality, and resistance to spoof attacks, and they reduce performance degradation in large database applications [21]. Multi-biometric feature fusion is a crucial step in multimodal biometric systems. The strength of the feature fusion technique lies in its ability to derive highly discriminative information from the original multiple feature sets and to eliminate redundant information resulting from the correlation between distinct feature sets, thus gaining the most effective feature set with low dimensionality for the final decision [22]. In the course of multimodal identification research, several new algorithms and applications have been studied in recent years. For example, the authors of [11] presented a multimodal biometric approach based on the fusion of finger vein and electrocardiogram (ECG) signals. The application of canonical correlation analysis (CCA) in the multimodal biometric field attracted many researchers [23,24], who employed CCA to fuse gait and face cues for human gender recognition. A multimodal biometric identification system based on finger geometry, knuckle print, and palm print was proposed in [21]. A face–iris multimodal biometric system using a multi-resolution Log–Gabor filter with spectral regression kernel discriminant analysis was studied in [25]. The authors of [26] proposed an efficient multimodal face and fingerprint biometric authentication system for space-limited tokens, e.g., smart cards, driver licenses, and RFID cards. The authors of [27] proposed a novel multimodal biometric identification system for face–iris recognition based on binary particle swarm optimization, which addresses the problem of mutually exclusive and redundant features in the combined feature set. Dialog Communication Systems (DCS AG) developed BioID in [28], a multimodal identification system that uses three different features (face, voice, and lip movement) to identify people. In [29], a frequency-based approach produces a homogeneous biometric vector that integrates iris and fingerprint data. The authors of [30] proposed a deep multimodal fusion network to fuse multiple modalities (face, iris, and fingerprint) for person identification. They demonstrated an increase in multimodal person identification performance by utilizing multi-level feature abstract representations in the fusion, rather than using only the features from the last layer of each modality-specific CNN. However, the CNN-based system in [30] is not suitable for small sample sizes.
Associative memory networks are single-layer nets that can store and recall patterns based on data content rather than data address [31]. Associative memory (AM) systems can be divided into hetero-associative memory (HAM) systems and auto-associative memory (AAM) systems. When the input pattern and the output pattern are the same pattern, the system is called an AAM system. The HAM model, which stores coupling information based on input–output patterns, can recall a stored output pattern by receiving a different input pattern. In [32], to fundamentally protect the face feature database, a face recognition method using AAM based on RNNs was proposed that requires no face feature database; instead, the face features are transformed into the parameters of the AAM model. We notice that HAM models can construct the association between input and output patterns in a robust way, and this association can be regarded as a feature fusion of two different kinds of patterns. Thus, HAM models should be able to fuse multiple biometric features in a robust way, and a multimodal identification system can be built on HAM models.
Considering the advantages of multimodal identification and the fusion capability of HAM models, in this paper, the HAM model, which can store fusion features of face–fingerprint patterns and recall a predictable fingerprint pattern by receiving a face pattern, is constructed. The model is based on a cellular neural network, which belongs to a class of recurrent neural networks (RNNs). The stability of the HAM model is a prerequisite for its successful application in a multimodal identification system. Thus, the asymptotic stability of the HAM model is also analyzed and discussed. In this paper, we also propose a multimodal identification system based on fingerprint and face images by the HAM method. Our three contributions in this paper are highlighted as follows.
  • A multimodal identification system based on face and fingerprint images is designed. This system effectively utilizes the advantages of two representative biometric features and makes the identification process more secure.
  • The variable gradient method is used to construct a Lyapunov function, which proves the asymptotic stability of the HAM model. The RNN-based HAM model must converge to an asymptotically stable equilibrium point; otherwise, multimodal identification cannot be carried out in practical scenarios. Analyses and discussions of the stability are given.
  • This is the first attempt to integrate face and fingerprint biometric features using the HAM method. In the HAM model, fingerprint and face biometric features are fused in a robust way. All the biometric features are fused to form a set of model coupling parameters.
The remainder of this paper is organized as follows. In Section 2 and Section 3, we give the details of our proposed multimodal identification system and research background, respectively. In Section 4, the stability of the HAM model is analyzed in detail and the main results for feature fusion are given. Some numerical simulations are presented to illustrate the effectiveness and security of the proposed system in Section 5. Finally, some conclusions are drawn in Section 6.

2. Framework of the Identification System

We design a multimodal identification system based on face and fingerprint images that makes full use of the advantages of two different biometric modalities. The system comprises two stages, named the fusion stage and the identification stage. The framework of the proposed system is shown in Figure 1.
At the fusion stage, the main work is to establish the HAM model, which stores the feature fusion information. The HAM model used for feature fusion is based on an improved HAM method, and the established model can store the coupling information of the face and fingerprint patterns of the authorized users. The first step is to acquire face images and fingerprint images of the authorized users with suitable acquisition devices. The raw images are preprocessed, including gray level transformation, image binarization, and segmentation. The regions of interest (ROIs) of the preprocessed face and fingerprint images are used to fuse both face and fingerprint biometric features by the HAM method. The parameters that result from the feature fusion constitute the crucial model coefficients of the HAM model. The established HAM model can then recall the fingerprint pattern of an authorized user by receiving the face pattern of that user when the model converges to the asymptotically stable equilibrium point. If the established model could not converge to the asymptotically stable equilibrium point, the fusion parameters, namely the model coefficients, could not be given. The HAM model stores the two kinds of biometric features of all authorized users as one group of model coefficients, and these biometric features cannot easily be recovered by reversing the fusion.
In the identification stage, the HAM model established in the fusion stage is used to test the legitimacy of visitors. Firstly, the face image and fingerprint image of a visitor are acquired using the appropriate acquisition devices. The visitor's preprocessed face pattern is sent to the HAM model established in the fusion stage. An output pattern is then produced when the established HAM model converges to the asymptotically stable equilibrium point. By comparing the model's output pattern with the visitor's real, preprocessed fingerprint pattern, the recognition pass rate of the visitor can be obtained. If the recognition pass rate of the visitor exceeds a given threshold, the identification is successful and the visitor is granted the rights of an authorized user; otherwise, the visitor is rejected as an illegal user.

3. Research Background

In this section, we briefly introduce the HAM model, which is based on a class of recurrent neural networks, as well as the background knowledge of the system stability and variable gradient method.

3.1. HAM Model

Consider a class of recurrent neural networks composed of N rows and M columns of neurons with time-varying delays:

$$\dot{s}_i(t) = -p_i s_i(t) + \sum_{j=1}^{n} q_{ij} f(s_j(t)) + \sum_{j=1}^{n} r_{ij} u_j(t-\tau_{ij}(t)) + v_i, \quad i = 1, 2, \ldots, n \tag{1}$$

in which $n = N \times M$ is the number of neurons in the network; $s_i(t) \in \mathbb{R}$ is the state of the $i$th neuron at time $t$; $p_i > 0$ represents the rate with which the $i$th unit resets its potential to the resting state in isolation when disconnected from the network and external inputs; $q_{ij}$ and $r_{ij}$ are connection weights; $f(s_j(t)) = (|s_j(t)+1| - |s_j(t)-1|)/2$ is the activation function; $u_j$ is the neuron input; $\tau_{ij}(t)$ is the transmission delay between the $i$th and $j$th neurons; and $v_i$ is an offset value of the $i$th neuron, $i = 1, 2, \ldots, n$.

Equation (1) describes the dynamics of one neuron. For the whole neural network, (1) can be expressed in vector form as

$$\dot{s} = -Ps + Qf(s) + R\beta + V \tag{2}$$

in which $s = (s_1, s_2, \ldots, s_n)^T \in \mathbb{R}^n$ is the network state vector; $P = \mathrm{diag}(p_1, p_2, \ldots, p_n)$ is a positive diagonal parameter matrix; $f(s)$ is an $n$-dimensional vector whose components take values between $-1$ and $+1$; and $\beta \in \mathbb{R}^{n \times 1}$ is the network input vector whose entries are $-1$ or $+1$. In particular, when the neural network reaches the state of global asymptotic stability, let $\alpha = f(s) \in \{\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_n)^T \mid \alpha_i = +1 \text{ or } -1, \ i = 1, \ldots, n\}$. $V = (v_1, v_2, \ldots, v_n)^T$ denotes the offset vector. $Q$, $R$, and $V$ are the model parameters, where $Q \in \mathbb{R}^{n \times n}$ and $R \in \mathbb{R}^{n \times n}$ are the connection weight matrices of the network:

$$Q = \begin{bmatrix} q_{11} & q_{12} & \cdots & q_{1n} \\ q_{21} & q_{22} & \cdots & q_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ q_{n1} & q_{n2} & \cdots & q_{nn} \end{bmatrix}_{n \times n}, \qquad R = \begin{bmatrix} r_{11} & r_{12} & \cdots & r_{1n} \\ r_{21} & r_{22} & \cdots & r_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ r_{n1} & r_{n2} & \cdots & r_{nn} \end{bmatrix}_{n \times n}$$
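To make the dynamics in (1) and (2) concrete, the following minimal sketch simulates the network by forward Euler integration. It is an illustration only, not the authors' implementation; in particular, the time-varying delays are omitted and the input $\beta$ is held constant, which is a simplifying assumption.

```python
import numpy as np

def activation(s):
    # Piecewise-linear activation f(s) = (|s + 1| - |s - 1|) / 2 from Eq. (1)
    return (np.abs(s + 1.0) - np.abs(s - 1.0)) / 2.0

def simulate_ham(P, Q, R, V, beta, dt=0.01, steps=5000):
    """Forward-Euler integration of Eq. (2): ds/dt = -P s + Q f(s) + R beta + V.
    Returns the final state and the saturated output pattern sign(f(s))."""
    n = Q.shape[0]
    s = np.zeros(n)                      # zero initial state, s_i(0) = 0
    for _ in range(steps):
        s = s + dt * (-P @ s + Q @ activation(s) + R @ beta + V)
    return s, np.sign(activation(s))
```

When the design conditions discussed in Section 4 hold, the state is expected to settle at an equilibrium with $|s_i| > 1$, so the saturated output reproduces a stored $\pm 1$ pattern.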

3.2. System Stability

Consider the general nonlinear system
$$\dot{y} = g(t, y) \tag{3}$$

in which $y = (y_1, y_2, \ldots, y_n)^T \in \Omega \subseteq \mathbb{R}^n$ is a state vector; $t \in I = [t_0, T]$ is the time variable, $t_0 < T < \infty$; $g(t, y) = [g_1(t, y_1, \ldots, y_n), g_2(t, y_1, \ldots, y_n), \ldots, g_n(t, y_1, \ldots, y_n)]^T$ and $g(t, y) \in C(I \times \Omega, \mathbb{R}^n)$. Suppose that $y = \varphi(t)$ is a particular solution of system (3). Let $y = x + \varphi(t)$; then $\dot{x} = \dot{y} - \dot{\varphi}(t) = g(t, y) - g[t, \varphi(t)] = g[t, x + \varphi(t)] - g[t, \varphi(t)]$. Denoting the right-hand side by $f(t, x)$, system (3) can be rewritten as

$$\dot{x} = f(t, x) \tag{4}$$
Definition 1.
If $x_0$ satisfies $f(t, x_0) \equiv 0$ for all $t \geq t_0$, then $x_0$ is an equilibrium point of system (4).
Definition 2.
Let $x_0$ be the equilibrium point of system (4). If, for any given $\epsilon > 0$ and $t_0 > 0$, there exists $\sigma(\epsilon, t_0) > 0$ such that $\| \varphi(t, t_0, x_1) - x_0 \| \leq \epsilon$ whenever $x_1 \in \mathbb{R}^n$ satisfies $\| x_1 - x_0 \| \leq \sigma$, then the equilibrium point $x_0$ of Equation (4) is said to be stable in the sense of Lyapunov. Furthermore, if $\lim_{t \to \infty} \| \varphi(t, t_0, x_1) - x_0 \| = 0$, then the equilibrium point $x_0$ is asymptotically stable.

3.3. Variable Gradient Method

The most challenging problem in using Lyapunov's second method (the direct method) is to find a positive definite function $V$ that yields $\dot{V} < 0$. The variable gradient method, proposed by Schultz, is one of the well-known techniques for constructing a Lyapunov function to prove the stability of nonlinear systems [33]. The idea of this method is to construct the gradient of the Lyapunov function first and then analyze the sign properties of the Lyapunov function and its derivative.
For the nonlinear system (4), if there exists a Lyapunov function $V(x): D \to \mathbb{R}$, $D \subseteq \mathbb{R}^n$, where $V(x)$ is an explicit function of $x$ and the equilibrium point of the system is the origin, i.e., $x = 0$, the gradient $\mathrm{grad}\,V$ of $V(x)$ can be defined as

$$\mathrm{grad}\,V(x) \triangleq \frac{dV(x)}{dx} = \begin{bmatrix} \partial V / \partial x_1 \\ \partial V / \partial x_2 \\ \vdots \\ \partial V / \partial x_n \end{bmatrix} = \begin{bmatrix} \nabla V_1 \\ \nabla V_2 \\ \vdots \\ \nabla V_n \end{bmatrix} = \begin{bmatrix} a_{11} x_1 + a_{12} x_2 + a_{13} x_3 + \cdots + a_{1n} x_n \\ a_{21} x_1 + a_{22} x_2 + a_{23} x_3 + \cdots + a_{2n} x_n \\ \vdots \\ a_{n1} x_1 + a_{n2} x_2 + a_{n3} x_3 + \cdots + a_{nn} x_n \end{bmatrix} \tag{5}$$
It follows from (5) that
$$\dot{V}(x) = \sum_{i=1}^{n} \frac{\partial V}{\partial x_i} \dot{x}_i = (\mathrm{grad}\,V(x))^T [\dot{x}_1, \ldots, \dot{x}_n]^T = (\mathrm{grad}\,V(x))^T \dot{x} \tag{6}$$
It can be seen from (6) that $V(x)$ can be obtained by the line integral of $\mathrm{grad}\,V$, namely,
$$V(x) = \int_{0}^{x} (\mathrm{grad}\,V)^T \, dx = \sum_{i=1}^{n} \int_{0}^{x_i} \nabla V_i \, dx_i \tag{7}$$
If the $n$-dimensional curl of $\mathrm{grad}\,V$ is equal to zero, namely $\mathrm{rot}(\mathrm{grad}\,V) = 0$, then $\mathrm{grad}\,V$ can be regarded as a conservative field, and the line integral in Formula (7) is independent of the path. The necessary and sufficient condition for $\mathrm{rot}(\mathrm{grad}\,V) = 0$ is $\partial \nabla V_i / \partial x_j = \partial \nabla V_j / \partial x_i$, $i, j = 1, 2, \ldots, n$. Therefore, for convenience, Formula (7) can be rewritten as
$$V(x) = \int_{0}^{x_1} \nabla V_1 \big|_{(x_1, 0, \ldots, 0)} \, dx_1 + \int_{0}^{x_2} \nabla V_2 \big|_{(x_1, x_2, 0, \ldots, 0)} \, dx_2 + \cdots + \int_{0}^{x_n} \nabla V_n \big|_{(x_1, x_2, x_3, \ldots, x_n)} \, dx_n \tag{8}$$
The coefficients are selected such that $\dot{V}(x)$ is negative definite and $\mathrm{rot}(\mathrm{grad}\,V) = 0$. If the resulting $V(x)$ is positive definite, then the conditions of Lyapunov's second method are satisfied, and the system is asymptotically stable at the equilibrium point.
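The following small sketch illustrates the variable gradient procedure on a hypothetical two-dimensional linear system (not taken from the paper), using SymPy to carry out the line integral in (8) and to check the sign of $\dot{V}$.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)

# Hypothetical example system: dx1/dt = -x1 + x2/2, dx2/dt = -x2
f = sp.Matrix([-x1 + sp.Rational(1, 2) * x2, -x2])

# Variable-gradient ansatz from Eq. (5) with a12 = a21 = 0, a11 = a22 = 1
gradV = sp.Matrix([x1, x2])

# Eq. (6): Vdot = (grad V)^T * xdot
Vdot = sp.expand((gradV.T * f)[0])        # -> -x1**2 + x1*x2/2 - x2**2 (negative definite)

# Eq. (8): line integral along the axes (rot(grad V) = 0 here)
V = sp.integrate(x1, (x1, 0, x1)) + sp.integrate(x2, (x2, 0, x2))   # -> x1**2/2 + x2**2/2

print(V, Vdot)
```

Here $V$ is positive definite and $\dot{V}$ is negative definite, so the origin of this toy system is asymptotically stable; the proof of Theorem 1 below applies the same steps to the HAM model.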

4. Main Results

In this section, building on the background above, the asymptotic stability of the HAM model with multiple time-varying delays is analyzed using the variable gradient method, and the feature fusion algorithm based on the HAM method is then presented.

4.1. Stability of the HAM Model

Theorem 1.
There is a stable equilibrium point in system (2), which makes the HAM model asymptotically stable.
Proof of Theorem 1.
As $f$ is bounded, it can be proved that system (2) has at least one equilibrium point using the Schauder fixed point theorem. Assume that $s^* = (s_1^*, s_2^*, \ldots, s_n^*)^T$ is an equilibrium point of the neural network.
Let $x_i(t) = s_i(t) - s_i^*$ and $\bar{f}(x_i(t)) = f(s_i(t)) - f(s_i^*) = f(x_i(t) + s_i^*) - f(s_i^*)$; then (1) can be rewritten as

$$\dot{x}_i(t) = -p_i x_i(t) + \sum_{j=1}^{n} q_{ij} \bar{f}(x_j(t)) + \sum_{j=1}^{n} r_{ij} u_j(t - \tau_{ij}(t)) + c_i, \qquad c_i = \sum_{j=1}^{n} q_{ij} f(s_j^*) - p_i s_i^* + v_i, \quad i = 1, 2, \ldots, n \tag{9}$$
For the HAM model (9), if there exists a Lyapunov function $V(x)$ and the model's equilibrium point is $x = (x_1, x_2, \ldots, x_n)^T = 0$, the gradient of $V$ can be defined as in Equation (5). From Equation (6),
$$\dot{V}(x) = (\mathrm{grad}\,V(x))^T \dot{x} = (a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n) \dot{x}_1 + \cdots + (a_{n1} x_1 + a_{n2} x_2 + \cdots + a_{nn} x_n) \dot{x}_n \tag{10}$$
It is convenient to select the coefficients $a_{ij} = 0$ $(i \neq j)$ and $a_{kk} > 0$, $i, j, k = 1, 2, \ldots, n$; then, from (9), one can easily obtain
$$\dot{V}(x) = a_{11} x_1 \dot{x}_1 + \cdots + a_{nn} x_n \dot{x}_n = \sum_{k=1}^{n} a_{kk} x_k \dot{x}_k = \sum_{k=1}^{n} a_{kk} x_k(t) \left[ -p_k x_k(t) + \sum_{j=1}^{n} q_{kj} \bar{f}(x_j(t)) + \sum_{j=1}^{n} r_{kj} u_j(t - \tau_{kj}(t)) + c_k \right] \tag{11}$$
When $\dot{s}_k = 0$, from Equation (1), $s_k^* = \left[ \sum_{j=1}^{n} q_{kj} f(s_j(t)) + \sum_{j=1}^{n} r_{kj} u_j(t - \tau_{kj}(t)) + v_k \right] / p_k$. If $x_k(t) > 0$, i.e., $s_k(t) - s_k^* > 0$, then $p_k s_k(t) > \sum_{j=1}^{n} q_{kj} f(s_j(t)) + \sum_{j=1}^{n} r_{kj} u_j(t - \tau_{kj}(t)) + v_k$. By replacing $s_k(t)$ with $x_k(t)$, the inequality $-p_k x_k(t) + \sum_{j=1}^{n} q_{kj} \bar{f}(x_j(t)) + \sum_{j=1}^{n} r_{kj} u_j(t - \tau_{kj}(t)) + c_k < 0$ can be obtained. Analogously, if $x_k(t) < 0$, it can be proved that $-p_k x_k(t) + \sum_{j=1}^{n} q_{kj} \bar{f}(x_j(t)) + \sum_{j=1}^{n} r_{kj} u_j(t - \tau_{kj}(t)) + c_k > 0$. Therefore, both cases lead to $\dot{V}(x) < 0$, namely $\dot{V}(x)$ is negative definite. Furthermore, it is clear that $\partial \nabla V_i / \partial x_j = \partial \nabla V_j / \partial x_i = 0$, $i \neq j$, $i, j = 1, 2, \ldots, n$. Therefore, from Equation (8), the Lyapunov function can be obtained as
$$V(x) = \int_{0}^{x_1} \nabla V_1 \big|_{(x_1, 0, \ldots, 0)} \, dx_1 + \int_{0}^{x_2} \nabla V_2 \big|_{(x_1, x_2, 0, \ldots, 0)} \, dx_2 + \cdots + \int_{0}^{x_n} \nabla V_n \big|_{(x_1, x_2, \ldots, x_n)} \, dx_n = \int_{0}^{x_1} a_{11} x_1 \, dx_1 + \int_{0}^{x_2} a_{22} x_2 \, dx_2 + \cdots + \int_{0}^{x_n} a_{nn} x_n \, dx_n = \frac{1}{2} \sum_{k=1}^{n} a_{kk} x_k^2 \tag{12}$$
which is always positive definite. Thus, the HAM model is proved to be asymptotically stable at the equilibrium point by the variable gradient method. □
Remark 1.
The HAM method is used to fuse each authorized user's face and fingerprint biometric features. The face and fingerprint patterns of each authorized user are the input vector $\beta = [\beta_1, \beta_2, \ldots, \beta_n]^T$ and output vector $\alpha = [\alpha_1, \alpha_2, \ldots, \alpha_n]^T$ of the neural network model, respectively. When the established HAM model converges to the asymptotically stable equilibrium point, the output vector can be obtained by receiving an input vector, i.e., the fingerprint pattern can be recalled from the face pattern of the authorized user.

4.2. HAM Model

The HAM method is used to fuse each authorized user's face and fingerprint biometric features. The authorized user's face and fingerprint patterns are the network model's input vector $\beta$ and output vector $\alpha$, respectively.

Letting $f(s_j(t)) = \hat{\alpha}_j$, $|\hat{\alpha}_j| \leq 1$, $\beta_j = u_j(t - \tau_{ij}(t))$, $\beta \in \{(\beta_1, \beta_2, \ldots, \beta_n)^T \mid \beta_j = +1 \text{ or } -1, \ j = 1, \ldots, n\}$, Equation (1) can be rewritten as

$$\dot{s}_i(t) = -p_i s_i(t) + \sum_{j=1}^{n} q_{ij} \hat{\alpha}_j + \sum_{j=1}^{n} r_{ij} \beta_j + v_i \tag{13}$$
Lemma 1
([34]). In Equation (13), with $s_i(0) = 0$, $i = 1, 2, \ldots, n$:

(i) If $\sum_{j=1}^{n} q_{ij} \hat{\alpha}_j + \sum_{j=1}^{n} r_{ij} \beta_j + v_i > p_i$, then (13) converges to an asymptotically stable equilibrium point whose value is greater than $+1$.

(ii) If $\sum_{j=1}^{n} q_{ij} \hat{\alpha}_j + \sum_{j=1}^{n} r_{ij} \beta_j + v_i < -p_i$, then (13) converges to an asymptotically stable equilibrium point whose value is less than $-1$.
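As a quick numerical illustration of Lemma 1 (not from the paper), consider a single neuron whose total drive $\sum_j q_{ij}\hat{\alpha}_j + \sum_j r_{ij}\beta_j + v_i$ is treated as a constant; the numbers below are hypothetical.

```python
import numpy as np

def settle(p, drive, dt=0.01, steps=3000):
    # Scalar version of Eq. (13) with a constant drive term
    s = 0.0
    for _ in range(steps):
        s += dt * (-p * s + drive)
    return s

print(settle(1.0,  2.0))   # drive >  p: settles near +2.0, i.e., above +1 (case i)
print(settle(1.0, -2.0))   # drive < -p: settles near -2.0, i.e., below -1 (case ii)
```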
Theorem 2.
The HAM model (2) converges to a stable equilibrium point $s^*$ with $|s^*| > 1$ if there exists a constant $\lambda$ such that $\lambda \geq \max_{1 \leq i \leq n} \{p_i\}$ and $Q\alpha + R\beta + V = \lambda \alpha$.
Proof of Theorem 2.
In (2), $s = [s_1, s_2, \ldots, s_n]^T$. Define the equilibrium of the HAM model as $s^* = [s_1^*, s_2^*, \ldots, s_n^*]^T$, and let $\alpha = f(s^*) \in \{\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_n)^T \mid \alpha_i = +1 \text{ or } -1\}$ be the corresponding output pattern.

For the first case, consider $\alpha_i = +1$; then $\lambda \alpha_i > p_i$. When $Q\alpha + R\beta + V = \lambda \alpha$, according to Lemma 1 (i), $\sum_{j=1}^{n} q_{ij} \alpha_j + \sum_{j=1}^{n} r_{ij} \beta_j + v_i > p_i$. For the second case, consider $\alpha_i = -1$; then $\lambda \alpha_i < -p_i$. When $Q\alpha + R\beta + V = \lambda \alpha$, according to Lemma 1 (ii), $\sum_{j=1}^{n} q_{ij} \alpha_j + \sum_{j=1}^{n} r_{ij} \beta_j + v_i < -p_i$. Therefore, the HAM model (2) converges to a stable equilibrium point $s^*$ with $|s^*| > 1$. □
Let $S = \alpha$ and $U = \beta$, in which $\alpha$ and $\beta$ are the feature vectors extracted from the fingerprint and face images of one authorized user after preprocessing, respectively.

It is obvious that, when $\alpha$ and $\beta$ meet the condition in Theorem 2, the coupling relationship between the face and fingerprint patterns of the authorized user is established, and the fusion features are transformed into HAM model parameters. The HAM model, which stores the fused features of the user's face and fingerprint patterns, can recall a predictable fingerprint pattern $\hat{S}$ by receiving a stored face pattern $U$. The HAM network is of size $N \times M$. Let the neighborhood radius be 1; then each neuron has eighteen unknown connection weights and one unknown bias value $v_i$. Denote the nineteen unknown parameters of the $i$th neuron as $\Phi_i = [q_{i_1}, q_{i_2}, \ldots, q_{i_8}, q_{i_9}, r_{i_1}, r_{i_2}, \ldots, r_{i_8}, r_{i_9}, v_i]^T$.
Remark 2.
In the fusion stage, the established HAM model can store the fusion features of all authorized users. Therefore, all model parameters $\Phi_i$ $(i = 1, 2, \ldots, n)$ to be obtained should be determined by the face and fingerprint patterns of all authorized users.
For $m$ authorized users, $Q\alpha + R\beta + V = \lambda \alpha$ can be transformed into

$$\Delta_i \Phi_i = \lambda \tilde{\alpha}_i, \quad i = 1, 2, \ldots, n \tag{14}$$

in which

$$\Delta_i = \begin{bmatrix} \alpha_{i_1}^{(1)} & \alpha_{i_2}^{(1)} & \cdots & \alpha_{i_9}^{(1)} & \beta_{i_1}^{(1)} & \beta_{i_2}^{(1)} & \cdots & \beta_{i_9}^{(1)} & 1 \\ \alpha_{i_1}^{(2)} & \alpha_{i_2}^{(2)} & \cdots & \alpha_{i_9}^{(2)} & \beta_{i_1}^{(2)} & \beta_{i_2}^{(2)} & \cdots & \beta_{i_9}^{(2)} & 1 \\ \vdots & \vdots & & \vdots & \vdots & \vdots & & \vdots & \vdots \\ \alpha_{i_1}^{(m-1)} & \alpha_{i_2}^{(m-1)} & \cdots & \alpha_{i_9}^{(m-1)} & \beta_{i_1}^{(m-1)} & \beta_{i_2}^{(m-1)} & \cdots & \beta_{i_9}^{(m-1)} & 1 \\ \alpha_{i_1}^{(m)} & \alpha_{i_2}^{(m)} & \cdots & \alpha_{i_9}^{(m)} & \beta_{i_1}^{(m)} & \beta_{i_2}^{(m)} & \cdots & \beta_{i_9}^{(m)} & 1 \end{bmatrix}$$

and $\tilde{\alpha}_i = [\alpha_i^{(1)}, \alpha_i^{(2)}, \ldots, \alpha_i^{(m)}]^T$ is the vector collecting the fingerprint features of the $m$ authorized users at the $i$th neuron of the network model.
All unknown model parameters can then be solved from Equation (14); that is, the two kinds of biometric features of all authorized users are turned into parameters of the established HAM model. After obtaining all the parameters from the face and fingerprint patterns of all authorized users, the HAM model, which can recall a fingerprint pattern by receiving the face pattern of an authorized user, is established.
Some notations are defined in Appendix A. The feature fusion algorithm of the HAM model based on face and fingerprint images using the HAM method in the fusion stage is given in Algorithm 1.
Remark 3.
When the established HAM model, which stores the fused biometric features of all authorized users, receives the face pattern vector of an unauthorized user, there will still be a forecast fingerprint pattern output for that visitor. In [32], the input pattern and the forecast output pattern are the same biometric pattern: the AAM network structure fuses a face input with the same face output, so it cannot fuse different biometric modalities. In this paper, two different biometric patterns are studied. This is the first attempt to integrate two different biometric features using the HAM method.
Furthermore, convolutional neural networks require large amounts of training data and are difficult to train on small samples, so we do not use a convolutional neural network for the small-sample data in this paper.
Algorithm 1 Feature fusion algorithm
Require: $\lambda \geq \max_{1 \leq i \leq n} \{p_i\}$; fingerprint feature vectors $\alpha^{(k)}$ and face feature vectors $\beta^{(k)}$, $k = 1, 2, \ldots, m$.
Ensure: Model parameters $\Phi_i$, $i = 1, 2, \ldots, n$.
for $k = 1$ to $m$ do
  for $\xi = 1$ to $N$ do
    $E_\xi^{(k)} \leftarrow \alpha^{(k)}$, $F_\xi^{(k)} \leftarrow \beta^{(k)}$
  end for
  $E^{(k)} \leftarrow E_\xi^{(k)}$, $F^{(k)} \leftarrow F_\xi^{(k)}$
end for
for $i = 1$ to $n$ do
  $\Delta_i \leftarrow [E^{(1)}, E^{(2)}, \ldots, E^{(m)}, F^{(1)}, F^{(2)}, \ldots, F^{(m)}]$
end for
for $i = 1$ to $n$ do
  for $k = 1$ to $m$ do
    $\tilde{\alpha}_i \leftarrow \alpha^{(k)}$
  end for
end for
for $i = 1$ to $n$ do
  $\Phi_i \leftarrow \Delta_i^{-1} \lambda \tilde{\alpha}_i$
end for
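The following Python sketch shows one way Algorithm 1 and Equation (14) could be realized. It is illustrative only: the neighborhood and zero-padding conventions follow Appendix A, and because Equation (14) is underdetermined for small $m$, the minimum-norm least-squares solution is used here, which is one possible choice and not necessarily the paper's exact matrix transform.

```python
import numpy as np

def neighborhood(pattern, i, N, M):
    """Return the 3x3 neighborhood (radius 1) of neuron i of an N*M pattern,
    flattened to 9 values; out-of-range positions are padded with 0."""
    r, c = divmod(i, M)
    out = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            rr, cc = r + dr, c + dc
            out.append(pattern[rr * M + cc] if 0 <= rr < N and 0 <= cc < M else 0.0)
    return out

def fuse_features(alphas, betas, N, M, lam=2.0):
    """Sketch of Algorithm 1: alphas/betas are lists of m fingerprint/face patterns,
    each a length-n vector with entries in {-1, +1}, n = N*M. Returns Phi[i]
    (19 fused parameters per neuron) by solving Eq. (14) per neuron."""
    m, n = len(alphas), N * M
    Phi = np.zeros((n, 19))
    for i in range(n):
        Delta = np.zeros((m, 19))
        rhs = np.zeros(m)
        for k in range(m):
            Delta[k, :9]   = neighborhood(alphas[k], i, N, M)   # alpha_{i1..i9}^{(k)}
            Delta[k, 9:18] = neighborhood(betas[k],  i, N, M)   # beta_{i1..i9}^{(k)}
            Delta[k, 18]   = 1.0                                # bias column for v_i
            rhs[k] = lam * alphas[k][i]                         # lambda * alpha_i^{(k)}
        Phi[i] = np.linalg.lstsq(Delta, rhs, rcond=None)[0]     # minimum-norm solution
    return Phi
```

In the experiments below, $N = 35$, $M = 25$, and $m = 7$, so each $\Delta_i$ is a $7 \times 19$ matrix.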

5. Experiments and Discussion

In this section, we show the experimental results of the multimodal identification system proposed in Section 2. Firstly, we demonstrate the effectiveness of the multimodal identification system using Experiment 1; the accuracy meets the identification requirement that we defined. Secondly, we test unauthorized users and attackers and demonstrate the security of the multimodal identification system using Experiment 2.
To protect private information, the experiments are based on two different public databases. The face images come from ORL Faces Database and the fingerprint images come from CASIA-FingerprintV5 Database. The fingerprint images of CASIA-FingerprintV5 were captured by a URU4000 fingerprint sensor in one session.
In order to compare the fingerprint pattern $\tilde{S}$ of the visitor with the predicted fingerprint pattern $\hat{S}$, a matcher is designed. The pass rate (PR) of the matcher is defined as

$$\mathrm{PR} = \frac{\mathrm{NF}}{M \times N} \times 100\% \tag{15}$$

in which NF stands for the number of pixel positions at which the visitor's fingerprint pattern and the predicted fingerprint output take the same value. When the value of PR is greater than the given threshold of 90%, the face pattern and the fingerprint pattern of the visitor are regarded as legal; namely, the real fingerprint pattern of the visitor matches the predicted fingerprint output in the multimodal identification system.
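A minimal sketch of this matcher, assuming both patterns are length-$M \times N$ vectors with entries $-1$ or $+1$ (function names are illustrative):

```python
import numpy as np

def pass_rate(predicted, actual):
    # PR from Eq. (15): percentage of pixel positions at which the two patterns agree
    return 100.0 * np.mean(np.asarray(predicted) == np.asarray(actual))

def is_authorized(predicted, actual, threshold=90.0):
    # The visitor is accepted only if the pass rate exceeds the given threshold
    return pass_rate(predicted, actual) > threshold
```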

5.1. Experiment 1

We assume that the face image and fingerprint image in each group come from the same person. Seven groups of images of authorized users from the two databases mentioned above are shown in Figure 2. The first step in the biometric identification system is to extract the regions of interest (ROIs). All face image ROIs and fingerprint image ROIs used in our experiments are 35 × 25 pixels in size after preprocessing.
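A minimal preprocessing sketch is given below. It is an assumption about the pipeline (gray level transformation, binarization, and reduction to the 35 × 25 ROI mapped to a bipolar pattern) rather than the authors' exact code; the Pillow library and the fixed binarization threshold are illustrative choices.

```python
import numpy as np
from PIL import Image

def preprocess_roi(path, size=(25, 35), threshold=128):
    """Load an image, convert to gray levels, reduce it to a 35x25 ROI
    (here simply by resizing), binarize it, and return a length-875
    bipolar {-1, +1} pattern vector in row-major order."""
    img = Image.open(path).convert("L").resize(size)        # size = (width, height)
    binary = np.asarray(img, dtype=float) >= threshold      # image binarization
    return np.where(binary, 1.0, -1.0).reshape(-1)
```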
The seven groups of face patterns and fingerprint patterns are used to solve the model parameters $\Phi_i$ $(i = 1, 2, \ldots, 875)$. Let $p_i = 1$ $(i = 1, 2, \ldots, 875)$ and $\lambda = 2$. The fingerprint feature vectors $(\alpha^{(1)}, \alpha^{(2)}, \ldots, \alpha^{(7)})$ and the face feature vectors $(\beta^{(1)}, \beta^{(2)}, \ldots, \beta^{(7)})$ can be obtained from the seven groups of face and fingerprint patterns of all authorized users. $E_1^{(1)}, E_2^{(1)}, \ldots, E_{35}^{(1)}, E_1^{(2)}, E_2^{(2)}, \ldots, E_{35}^{(2)}, \ldots, E_1^{(7)}, E_2^{(7)}, \ldots, E_{35}^{(7)}$ and $F_1^{(1)}, F_2^{(1)}, \ldots, F_{35}^{(1)}, F_1^{(2)}, F_2^{(2)}, \ldots, F_{35}^{(2)}, \ldots, F_1^{(7)}, F_2^{(7)}, \ldots, F_{35}^{(7)}$ were obtained from the fingerprint feature vectors and face feature vectors, respectively. According to the feature fusion algorithm, the matrices $\Delta_1, \ldots, \Delta_{875}$ were obtained. Furthermore, $\tilde{\alpha}_1, \tilde{\alpha}_2, \ldots, \tilde{\alpha}_{875}$ were obtained. Finally, $\Phi_i$ $(i = 1, 2, \ldots, 875)$ was calculated by the matrix transform method.
According to the proposed HAM method in Section 4, when the HAM model being established converges to the asymptotically stable equilibrium point, the internal coupling relationship between the face and fingerprint patterns is built by solving the model parameters.
The established multimodal identification system fuses the face and fingerprint biometrics in the fusion stage. The matcher pass rate can be obtained by comparing $\tilde{S}$ and $\hat{S}$ when the system input is one of the face patterns of the authorized users. The matcher pass rates are reported in Table 1, and the results demonstrate the effectiveness of the multimodal identification system.
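For completeness, the sketch below strings the earlier illustrative helpers (preprocess_roi, fuse_features, simulate_ham, pass_rate) together into the fusion and identification stages of Experiment 1. The file names are placeholders, and assemble_model is a hypothetical helper that scatters each neuron's nineteen fused parameters back into the sparse matrices $Q$, $R$ and the bias $V$; it is an assumed reconstruction, not the authors' code.

```python
import numpy as np

def assemble_model(Phi, N, M):
    """Place each neuron's 9 q-weights, 9 r-weights and bias into sparse Q, R, V
    (3x3 neighborhood, radius 1), mirroring the layout used in fuse_features."""
    n = N * M
    Q, R, V = np.zeros((n, n)), np.zeros((n, n)), np.zeros(n)
    offsets = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
    for i in range(n):
        r, c = divmod(i, M)
        for idx, (dr, dc) in enumerate(offsets):
            rr, cc = r + dr, c + dc
            if 0 <= rr < N and 0 <= cc < M:
                Q[i, rr * M + cc] = Phi[i, idx]
                R[i, rr * M + cc] = Phi[i, 9 + idx]
        V[i] = Phi[i, 18]
    return Q, R, V

N, M, lam = 35, 25, 2.0
faces  = [preprocess_roi(f"face_{k}.png") for k in range(1, 8)]   # placeholder file names
prints = [preprocess_roi(f"fp_{k}.png")   for k in range(1, 8)]

Phi = fuse_features(prints, faces, N, M, lam)         # fusion stage (Algorithm 1 / Eq. (14))
Q, R, V = assemble_model(Phi, N, M)
P = np.eye(N * M)                                     # p_i = 1, as in Experiment 1

for k in range(7):                                    # identification stage
    _, predicted = simulate_ham(P, Q, R, V, faces[k])
    print("Group", k + 1, "PR =", pass_rate(predicted, prints[k]))  # compare with Table 1
```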

5.2. Experiment 2

The results of the experiment above demonstrate the feasibility and efficiency of the algorithm. If an unauthorized user attempts to access the identification system, the matcher pass rate must be low enough for the system to reject the illegal user. In this experiment, we chose seven groups of unauthorized users whose fingerprints and faces differ from those of the groups in Experiment 1. The flow diagram of identification is shown in Figure 3.
In this experiment, we found that the pass rate of unauthorized users is much lower than the identification matcher threshold. Hence, those users who attempted to spoof this identification system were identified as illegal users. We obtained seven groups of unauthorized users’ identification results, shown in Table 2.
Consider the case in which an attacker, who has obtained a forged fingerprint or a forged face of one authorized user through illegal means beforehand, wants to cheat the system. Since such an attacker has fully compromised one kind of biometric information, a unimodal identification system without extra validation is easy to cheat. However, in the multimodal identification system, the attacker cannot spoof the identification easily. Group 15 to Group 21 are attackers who have the face information of the authorized users (Group 1 to Group 7), respectively. Further, Group 22 to Group 28 are attackers who have the fingerprint information of the authorized users (Group 1 to Group 7), respectively. The identification results are shown in Table 3. The results of the experiment demonstrate the security of our proposed system.
The experimental results demonstrate the feasibility of the proposed multimodal identification system based on the HAM method. It can guarantee that authorized users have access, while unauthorized users and attackers do not. The proposed identification method, which fuses two different biometric modalities based on the HAM method, applies not only to the fusion of face and fingerprint features but also to other biometric modalities.

6. Conclusions

To solve the multimodal identification problem based on face and fingerprint images, in this paper, we proposed a new feature fusion method for multimodal identification based on the HAM model, which effectively fuses the face features and fingerprint features of the authorized users. In the process of constructing the multimodal identification system, the stability of the established network model is discussed, and we prove that the HAM model reaches an asymptotically stable state when it fuses face and fingerprint biometrics. The proposed multimodal identification system integrates face and fingerprint biometric features at the feature level when the system converges to the state of asymptotic stability. In Section 5, we tested the effectiveness and security of the proposed multimodal identification system based on face and fingerprint images using two experiments.

Author Contributions

Conceptualization, Q.H. and H.Y.; methodology, H.Y.; software, T.W.; validation, G.C.; formal analysis, J.L.; investigation, Y.T.; writing—original draft preparation, H.Y.; writing—review and editing, Q.H.; visualization, H.Y.; supervision, Q.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by CAS “Light of West China” Program, in part by Research Foundation of The Natural Foundation of Chongqing City (cstc2021jcyj-msxmX0146), in part by Scientific and Technological Research Program of Chongqing Municipal Education Commission (KJZD-K201901504, KJQN 201901537), in part by humanities and social sciences research of Ministry of Education (19YJCZH047), and in part by Postgraduate Innovation Program of Chongqing University of Science and Technology (YKJCX2020820). The authors would like to thank the support of China Scholarship Council.

Informed Consent Statement

All the images and data used in this article were taken from public repositories.

Data Availability Statement

The data used to support the findings of this study are available from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

$$E_\xi^{(k)} = \begin{bmatrix} 0 & \alpha_{(\xi-1)M+1}^{(k)} & \alpha_{(\xi-1)M+2}^{(k)} \\ \alpha_{(\xi-1)M+1}^{(k)} & \alpha_{(\xi-1)M+2}^{(k)} & \alpha_{(\xi-1)M+3}^{(k)} \\ \alpha_{(\xi-1)M+2}^{(k)} & \alpha_{(\xi-1)M+3}^{(k)} & \alpha_{(\xi-1)M+4}^{(k)} \\ \vdots & \vdots & \vdots \\ \alpha_{\xi M-2}^{(k)} & \alpha_{\xi M-1}^{(k)} & \alpha_{\xi M}^{(k)} \\ \alpha_{\xi M-1}^{(k)} & \alpha_{\xi M}^{(k)} & 0 \end{bmatrix}_{M \times 3} \qquad E^{(k)} = \begin{bmatrix} 0 & E_1^{(k)} & E_2^{(k)} \\ E_1^{(k)} & E_2^{(k)} & E_3^{(k)} \\ E_2^{(k)} & E_3^{(k)} & E_4^{(k)} \\ \vdots & \vdots & \vdots \\ E_{N-1}^{(k)} & E_N^{(k)} & 0 \end{bmatrix}_{n \times 9}$$

$$F_\xi^{(k)} = \begin{bmatrix} 0 & \beta_{(\xi-1)M+1}^{(k)} & \beta_{(\xi-1)M+2}^{(k)} \\ \beta_{(\xi-1)M+1}^{(k)} & \beta_{(\xi-1)M+2}^{(k)} & \beta_{(\xi-1)M+3}^{(k)} \\ \beta_{(\xi-1)M+2}^{(k)} & \beta_{(\xi-1)M+3}^{(k)} & \beta_{(\xi-1)M+4}^{(k)} \\ \vdots & \vdots & \vdots \\ \beta_{\xi M-2}^{(k)} & \beta_{\xi M-1}^{(k)} & \beta_{\xi M}^{(k)} \\ \beta_{\xi M-1}^{(k)} & \beta_{\xi M}^{(k)} & 0 \end{bmatrix}_{M \times 3} \qquad F^{(k)} = \begin{bmatrix} 0 & F_1^{(k)} & F_2^{(k)} \\ F_1^{(k)} & F_2^{(k)} & F_3^{(k)} \\ F_2^{(k)} & F_3^{(k)} & F_4^{(k)} \\ \vdots & \vdots & \vdots \\ F_{N-1}^{(k)} & F_N^{(k)} & 0 \end{bmatrix}_{n \times 9}$$

References

  1. Wang, S.-H.; Phillips, P.; Dong, Z.-C.; Zhang, Y.-D. Intelligent facial emotion recognition based on stationary wavelet entropy and Jaya algorithm. Neurocomputing 2018, 272, 668–676.
  2. Zhang, Y.-D.; Yang, Z.-J.; Lu, H.; Zhou, X.-X.; Phillips, P.; Liu, Q.-M.; Wang, S. Facial Emotion Recognition Based on Biorthogonal Wavelet Entropy, Fuzzy Support Vector Machine, and Stratified Cross Validation. IEEE Access 2016, 4, 8375–8385.
  3. Lawrence, S.; Giles, C.L.; Tsoi, A.C.; Back, A.D. Face recognition: A convolutional neural-network approach. IEEE Trans. Neural Netw. 1997, 8, 98–113.
  4. Tan, X.; Triggs, W. Enhanced Local Texture Feature Sets for Face Recognition Under Difficult Lighting Conditions. IEEE Trans. Image Process. 2010, 19, 1635–1650.
  5. Barni, M.; Scotti, F.; Piva, A.; Bianchi, T.; Catalano, D.; Di Raimondo, M.; Labati, R.D.; Failla, P.; Fiore, D.; Lazzeretti, R.; et al. Privacy-preserving fingercode authentication. In Proceedings of the 12th ACM Workshop on Multimedia and Security, New York, NY, USA, 9–10 September 2010; pp. 231–240.
  6. Jain, A.; Hong, L.; Pankanti, S.; Bolle, R. An identity-authentication system using fingerprints. Proc. IEEE 1997, 85, 1365–1388.
  7. Wahab, A.; Chin, S.H.; Tan, E.C. Novel approach to automated fingerprint recognition. IEE Proc. Vis. Image Signal Process. 1998, 145, 160–166.
  8. Bashir, K.; Xiang, T.; Gong, S. Gait recognition without subject cooperation. Pattern Recognit. Lett. 2010, 31, 2052–2060.
  9. Han, J.; Bhanu, B. Individual recognition using gait energy image. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 28, 316–322.
  10. Wang, L.; Tan, T.; Hu, W.; Ning, H. Automatic gait recognition based on statistical shape analysis. IEEE Trans. Image Process. 2003, 12, 1120–1131.
  11. Su, K.; Yang, G.; Wu, B.; Yang, L.; Li, D.; Su, P.; Yin, Y. Human identification using finger vein and ECG signals. Neurocomputing 2019, 332, 111–118.
  12. Meenakshi, V.S.; Padmavathi, G. Security analysis of password hardened multimodal biometric fuzzy vault with combined feature points extracted from fingerprint, iris and retina for high security applications. Procedia Comput. Sci. 2010, 2, 195–206.
  13. Bronstein, A.M.; Bronstein, M.M.; Kimmel, R. Three-dimensional face recognition. Int. J. Comput. Vis. 2005, 64, 5–30.
  14. Gu, J.; Zhou, J.; Yang, C. Fingerprint recognition by combining global structure and local cues. IEEE Trans. Image Process. 2006, 15, 1952–1964.
  15. Haq, E.U.; Xu, H.; Khattak, M.I. Face recognition by SVM using local binary patterns. In Proceedings of the 14th Web Information Systems & Applications Conference, Liuzhou, China, 11–12 November 2017.
  16. Kasban, H. Fingerprints verification based on their spectrum. Neurocomputing 2016, 171, 910–920.
  17. Medina-Pérez, M.A.; Moreno, A.M.; Ballester, M.; Ángel, F.; García-Borroto, M.; Loyola-González, O.; Altamirano-Robles, L. Latent fingerprint identification using deformable minutiae clustering. Neurocomputing 2016, 175, 851–865.
  18. Nefian, A.V.; Hayes, M.H. An embedded HMM-based approach for face detection and recognition. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, Phoenix, AZ, USA, 15–19 March 1999; pp. 3553–3556.
  19. Zhao, C.; Miao, D. Two-dimensional color uncorrelated principal component analysis for feature extraction with application to face recognition. In Proceedings of the Chinese Conference on Biometric Recognition, Jinan, China, 16–17 November 2013; pp. 138–145.
  20. Zhong, F.; Zhang, J. Face recognition with enhanced local directional patterns. Neurocomputing 2013, 119, 375–384.
  21. Zhu, L.; Zhang, S. Multimodal biometric identification system based on finger geometry, knuckle print and palm print. Pattern Recognit. Lett. 2010, 31, 1641–1649.
  22. Ahmad, M.I.; Woo, W.L.; Dlay, S. Non-stationary feature fusion of face and palmprint multimodal biometrics. Neurocomputing 2016, 177, 49–61.
  23. Sun, Q.-S.; Zeng, S.-G.; Liu, Y.; Heng, P.-A.; Xia, D.-S. A new method of feature fusion and its application in image recognition. Pattern Recognit. 2005, 38, 2437–2448.
  24. Shan, C.; Gong, S.; McOwan, P.W. Fusing gait and face cues for human gender recognition. Neurocomputing 2008, 71, 1931–1938.
  25. Ammour, B.; Bouden, T.; Boubchir, L. Face–iris multi-modal biometric system using multi-resolution Log-Gabor filter with spectral regression kernel discriminant analysis. IET Biom. 2018, 7, 482–489.
  26. Khan, M.K.; Zhang, J. Multimodal face and fingerprint biometrics authentication on space-limited tokens. Neurocomputing 2008, 71, 3026–3031.
  27. Xiong, Q.; Zhang, X.; Xu, X.; He, S. A Modified Chaotic Binary Particle Swarm Optimization Scheme and Its Application in Face-Iris Multimodal Biometric Identification. Electronics 2021, 10, 217.
  28. Frischholz, R.W.; Ulrich, D. BioID: A multimodal biometric identification system. Computer 2000, 33, 64–68.
  29. Conti, V.; Militello, C.; Sorbello, F.; Vitabile, S. A Frequency-based Approach for Features Fusion in Fingerprint and Iris Multimodal Biometric Identification Systems. IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. 2010, 40, 384–395.
  30. Soleymani, S.; Dabouei, A.; Kazemi, H.; Dawson, J.; Nasrabadi, N.M. Multi-Level Feature Abstraction from Convolutional Neural Networks for Multimodal Biometric Identification. In Proceedings of the 2018 24th International Conference on Pattern Recognition (ICPR), Beijing, China, 20–24 August 2018; pp. 3469–3476.
  31. Aghajari, Z.H.; Teshnehlab, M.; Motlagh, M.R.J. A novel chaotic hetero-associative memory. Neurocomputing 2015, 167, 352–358.
  32. Han, Q.; Wu, Z.; Deng, S.; Qiao, Z.; Huang, J.; Zhou, J.; Liu, J. Research on Face Recognition Method by Autoassociative Memory Based on RNNs. Complexity 2018, 2018, 8524825.
  33. Hamada, Y.M. Liapunov's stability on autonomous nuclear reactor dynamical systems. Prog. Nucl. Energy 2014, 73, 11–20.
  34. Han, Q.; Liao, X.; Huang, T.; Peng, J.; Li, C.; Huang, H. Analysis and design of associative memories based on stability of cellular neural networks. Neurocomputing 2012, 97, 192–200.
Figure 1. The framework of the multimodal identification system.
Figure 2. Seven groups of biometric images of authorized users.
Figure 3. The flow diagram of identification.
Table 1. The recognition pass rate of the multimodal identification system for authorized users.

Group ID | PR (%) | Pass Threshold (%) | Matcher Result (Y/N)
Group 1  | 96.00  | 90.00              | Y
Group 2  | 93.37  | 90.00              | Y
Group 3  | 96.11  | 90.00              | Y
Group 4  | 93.03  | 90.00              | Y
Group 5  | 94.51  | 90.00              | Y
Group 6  | 92.46  | 90.00              | Y
Group 7  | 96.34  | 90.00              | Y
Table 2. The matcher pass rate of the multimodal identification system for unauthorized users.

Group ID | PR (%) | Pass Threshold (%) | Matcher Result (Y/N)
Group 8  | 66.86  | 90.00              | N
Group 9  | 69.03  | 90.00              | N
Group 10 | 68.00  | 90.00              | N
Group 11 | 67.43  | 90.00              | N
Group 12 | 70.86  | 90.00              | N
Group 13 | 72.11  | 90.00              | N
Group 14 | 68.11  | 90.00              | N
Table 3. The matcher pass rate of the multimodal identification system for attackers.

Group ID | PR (%) | Pass Threshold (%) | Matcher Result (Y/N)
Group 15 | 73.94  | 90.00              | N
Group 16 | 80.69  | 90.00              | N
Group 17 | 78.17  | 90.00              | N
Group 18 | 75.66  | 90.00              | N
Group 19 | 73.14  | 90.00              | N
Group 20 | 73.49  | 90.00              | N
Group 21 | 72.23  | 90.00              | N
Group 22 | 76.57  | 90.00              | N
Group 23 | 72.57  | 90.00              | N
Group 24 | 71.09  | 90.00              | N
Group 25 | 73.49  | 90.00              | N
Group 26 | 74.63  | 90.00              | N
Group 27 | 76.11  | 90.00              | N
Group 28 | 71.77  | 90.00              | N