Article

A Modified Chaotic Binary Particle Swarm Optimization Scheme and Its Application in Face-Iris Multimodal Biometric Identification

1 School of Automation Science and Engineering, Faculty of Electronic and Information Engineering, MOE Key Lab for Intelligent Networks and Network Security, Xi’an Jiaotong University, Xi’an 710049, China
2 International College, Hunan University of Arts and Sciences, Changde 415000, China
3 Guangdong Xi’an Jiaotong University Academy, No.3, Daliangdesheng East Road, Foshan 528000, China
4 School of Physics and Electronics, Central South University, Changsha 410083, China
* Author to whom correspondence should be addressed.
Submission received: 30 December 2020 / Revised: 15 January 2021 / Accepted: 15 January 2021 / Published: 19 January 2021
(This article belongs to the Special Issue Evolutionary Machine Learning for Nature-Inspired Problem Solving)

Abstract

In order to improve the recognition rate of a biometric identification system, the features of each unimodal biometric are often combined in a certain way. However, there are some mutually exclusive redundant features among the combined features, which degrade the identification performance. To solve this problem, this paper proposes a novel multimodal biometric identification system for face-iris recognition based on binary particle swarm optimization. The face features are extracted by 2D Log-Gabor and Curvelet transform, while iris features are extracted by Curvelet transform. In order to reduce the complexity of the feature-level fusion, we propose a modified chaotic binary particle swarm optimization (MCBPSO) algorithm to select features. It uses a kernel extreme learning machine (KELM) as the fitness function and chaotic binary sequences to initialize the particle swarms. After the global best position (Gbest) is generated in each iteration, the position of Gbest is varied by using chaotic binary sequences, which realizes a chaotic local search and avoids falling into local optima. The experiments are conducted on the CASIA multimodal iris and face dataset from the Chinese Academy of Sciences. The experimental results demonstrate that the proposed system can not only reduce the number of features to one tenth of its original size, but also improve the recognition rate up to 99.78%. Compared with the unimodal iris and face systems, the recognition rate of the proposed system is improved by 11.56% and 2%, respectively. The experimental results also demonstrate its competitive performance in the verification mode compared with existing state-of-the-art systems. The proposed system is satisfactory in addressing face-iris multimodal biometric identification.

1. Introduction

With the progress of society and the development of science and technology, people pay more and more attention to protecting their private information. Biometric recognition technology is a relatively new information security measure that is now widely used. Biometric recognition is a process that uses some inherent and unique physiological or behavioral characteristics of human beings to collect and judge information and finally determine an identity [1]. Common biological features include the face [2,3,4,5], fingerprint [6,7,8], palmprint [9,10], iris [11,12], ear [13] and EEG [14], as well as behavioral features such as the signature [15,16], gait [17] and lip [18,19]. Unimodal biometric recognition identifies human physiological or behavioral characteristics based on a single biological feature.
Although some breakthroughs have been made in practice, there are still some shortcomings:
(1)
Noise interference. Due to changes in illumination, sound, experimental environment and acquisition mode, noise is always introduced.
(2)
Theoretical upper limitation. The differences in the same biometric among different individuals may be very small, which will cause great difficulties for identification [20].
(3)
Being easy to steal. For example, the face is always exposed to the open environment for a long time. As a result, it is easy for criminals to steal and misuse [21].
To address the shortcomings mentioned above, researchers often fuse multiple unimodal biometric features into a multimodal biometric system [22]. Compared with unimodal feature recognition, multimodal biometric recognition has the following advantages:
(1)
Higher system reliability. Multimodal technology increases the complementary information in the identification process by integrating multiple biometrics. It can improve the fault tolerance of the system and reduce the impact of noise.
(2)
Greater robustness. If a certain biometric is missing, the multimodal system can still identify the identity according to the other biometric.
(3)
Better anti-counterfeiting performance. Compared with forged unimodal biometrics features, it is more difficult to forge multiple modal features at the same time.
To complete an identification task with biometrics, we need to conduct four steps: data collection, feature extraction, feature matching and identity decision. According to these four steps, the fusion methods of a multimodal biometric recognition system can be divided into four types: collection level, feature level, matching score level and decision level fusion [23]. On the basis of the biometric feature extraction of each unimodal, feature level fusion combines the features of each unimodal in a certain way to get a unified feature. The key problem of feature level fusion in a multimodal biometric system is how to construct the fused feature subspace. Otherwise, some mutually exclusive redundant features may affect the recognition results [24]. Many scientists have proposed solutions to the problem of feature level fusion. Haghighat et al. [25] presented a fusion method named Discriminant Correlation Analysis (DCA), which combined different feature vectors extracted from a single modality. Similarly, Yang et al. [26] proposed a novel supervised local-preserving canonical correlation analysis to create fingerprint-vein feature vectors (FPVFVs) in the feature level fusion. Shekhar et al. [27] proposed a multimodal sparse representation method, by which iris, fingerprint and face were fused; the sparse coefficients of the different modalities were fused to realize the feature level fusion of multimodal biometrics. Chin et al. [28] extracted features from fingerprint and palmprint with a Gabor filter, and then the fused feature subspace was selected according to the random tiling model (RTM). Raghavendra et al. [29] presented an efficient fusion scheme for complementary biometric modalities, where particle swarm optimization (PSO) was used to reduce the number of face and palmprint features at the feature level while keeping the same level of performance.
The iris and the human face are usually fused because both are located on the face. That fusion has the following advantages: (1) The iris and face can be acquired simultaneously by the same acquisition device; (2) Their features are complementary: the iris is rich in texture information, while the face is rich in structure and shape. The fusion of the two modalities can improve the accuracy of system identification. Based on the above reasons, this paper constructs a novel face-iris multimodal identification system, in which the extracted iris and face features are fused at the feature level.
At present, chaos has been widely investigated and its applications have attracted considerable interest from researchers. Chaotic motion has features such as ergodicity and randomness. Due to those features, it can traverse all the states within a certain range without repetition according to specific rules. This means that we can use chaotic variables to optimize the parameters of PSO [30,31]. As a result, many PSO algorithms based on chaos have been designed [32,33]. In this paper, we propose a multimodal biometric identification system based on a chaotic BPSO algorithm for face-iris recognition. The main contributions of this work are summarized as follows:
(1)
A new face feature extraction algorithm is proposed. Firstly, a 2D Log-Gabor filter is applied to the face region to generate some sub-feature images; secondly, the Curvelet transform is performed on each sub-feature image to extract features. Then, the features of all sub-feature images are concatenated. The recognition rate of the algorithm is similar to the traditional 2D Log-Gabor + LBP feature extraction method, but the number of feature dimensions is greatly reduced.
(2)
Kernel extreme learning machine is used as a fitness function of the PSO. The performance of feature selection depends on the selection of the fitness function. In this paper, the classification accuracy of KELM for the selected feature is considered as the fitness function value.
(3)
A modified chaotic binary particle swarm optimization (MCBPSO) algorithm is proposed. Firstly, the particle swarms are initialized by chaotic binary sequences. Secondly, the position of Gbest after each iteration is varied by chaotic binary sequences, so as to realize chaotic local search and avoid falling into the local best.
The structure of the paper is as follows: Section 2 describes some related works about the face-iris multimodal biometric system. Section 3 presents the feature extraction and fusion strategy of face and iris. Section 4 shows the experimental results. Section 5 summarizes the whole paper.

2. Related Work

In the last decade, much research has been done on the fusion of face and iris. The recognition rate of an iris and face multimodal system depends on the fusion type, the feature extraction method and the datasets involved. Table 1 briefly summarizes the latest face-iris multimodal biometric recognition systems.
Table 1 shows that there are many works on face–iris multimodal biometric systems involving feature level fusion. Among these published references, most of the datasets used for experiments were chimeric multimodal datasets. A chimeric multimodal dataset is constructed from two different face and iris datasets rather than from a genuine multimodal dataset [34]. The selection of datasets has a great influence on the recognition rate. In this paper, we use a genuine multimodal dataset for our experiments.
There are many traditional multimodal biometric fusion strategies, such as the simple fusion strategy, the support vector machine fusion strategy [42], the maximum minimum probability machine fusion strategy [43], the Bayesian belief network [44], and so forth. It is difficult for them to meet the requirements of further development of multimodal biometric fusion and recognition technology. Therefore, it is necessary to optimize the fusion by using evolutionary computing methods. In 2011, Raghavendra et al. [29] presented a fusion method based on binary particle swarm optimization (BPSO). One of the defects of the traditional BPSO is that it easily falls into the local best, leading to premature convergence [45]. Several attempts have been made to improve the traditional BPSO to solve this problem. For example, in [46], chaos search was introduced into the BPSO algorithm: the best particle was kept unchanged, while the other particles whose positions were close to the best particle were mapped into a chaotic variable space, chaotic motion was adopted to evolve the chaotic variables, and each newly created chaotic variable was remapped into the search space as a new particle that replaced the original one. This approach is adequate for particles whose dimension is not very high, but it is not suitable for particles with more than 8000 dimensions. Quantum-behaved particle swarm optimization with binary encoding (QBPSO) is another improved BPSO, where quantum behavior was introduced to evolve the traditional BPSO [47]. The authors claimed that the algorithm could improve the evolution speed of BPSO, but it is not ideal for high-dimensional feature selection. Motivated by the above discussions, in this paper we design a novel BPSO to address the existing problems.

3. Proposed Multimodal Biometric System

The framework of the face-iris multimodal biometric identification system is presented in Figure 1. In Figure 1, the original fusion features of the human face, left iris and right iris are concatenated. We propose a binary coding method to encode the subspace of those original fusion features. According to the feature subspace represented by the particles, the training set and the test set of the KELM are constructed. The classification accuracy of the test set in the trained KELM model is taken as the fitness value of the particle. Then the optimal feature subspace is selected according to the particle evolution process of PSO. In order to construct the optimal feature subspace more effectively, we develop a modified chaotic BPSO, which is described in detail in Section 3.3.2.

3.1. Iris Feature Extraction

3.1.1. Preprocessing

Before extracting iris features, the image is preprocessed in order to find the effective iris region. We first use the Harris corner detection method to locate and segment the human eye area [48]. Then an iris location algorithm based on an improved calculus operator is used to find the iris region. After the iris region is segmented and normalized, the preprocessing is complete. The steps are shown in Figure 2. The details of the iris preprocessing method are described in [49]. It can be seen from Figure 2d that the upper half circle of the whole iris region is greatly affected by the eyelids and eyelashes. In order to avoid their interference with the iris texture, this paper only extracts features from the lower half circle of the iris region.

3.1.2. Feature Extraction Algorithm

Curvelet transform has great advantages in the expression of curves in images. The second generation Curvelet transform is very effective in the extraction of edge, weak linear and curve structure. Therefore, the discrete form of the second generation Curvelet is used in the current paper and is expressed as follows [50]
$$C^{D}(j,l,k)=\sum_{0\le t_1,t_2<n} f[t_1,t_2]\,\overline{\varphi^{D}_{j,l,k}[t_1,t_2]},$$
where $C^{D}(j,l,k)$ denotes the Curvelet coefficients, the superscript $D$ denotes the discrete form, $f[t_1,t_2]$ is the image given in Cartesian coordinates, $\varphi_{j,l,k}$ stands for the Curvelet function, and $j$, $l$, $k$ denote the variables of scale, orientation and position, respectively. According to [51], we use the first layer Curvelet coefficients as the iris features in this paper.

3.2. Face Feature Extraction

From Table 1, we can see that 2D log-Gabor is one of the most popular tools for extracting features from the face and iris [34]. The 2D log-Gabor filter in the frequency domain of polar coordinates is expressed as follows:
$$H(f,\theta)=\exp\left(-\frac{\left(\ln(f/f_0)\right)^2}{2\left(\ln(\sigma_f/f_0)\right)^2}\right)\times\exp\left(-\frac{(\theta-\theta_0)^2}{2\sigma_0^2}\right),$$
where $f_0$ is the center frequency, $\sigma_f$ is the width parameter for the frequency, $\theta_0$ is the filter orientation, and $\sigma_0$ is the width parameter of the orientation.
Due to the multi-scale and multi-directional characteristics of 2D log-Gabor, a lot of information is generated after the image is filtered. Various methods have been proposed to extract features from this information, such as taking its amplitude, standard deviation, LBP or PCA, and so forth. In this paper, we use the Curvelet transform to extract features from this information. The steps are shown in Figure 3, where C{1} is the coefficient of the first layer of the Curvelet transform and is used to represent the features.
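As an illustration, the frequency-domain filter of Equation (2) can be sketched with NumPy. This is a minimal sketch, not the paper's implementation; the parameter values (`f0`, `sigma_f`, etc.) are illustrative assumptions.

```python
import numpy as np

def log_gabor_2d(rows, cols, f0=0.1, sigma_f=0.55, theta0=0.0, sigma_theta=0.4):
    """Build one 2D log-Gabor filter in the frequency domain.

    f0: center frequency; sigma_f: the ratio sigma_f/f0 controlling the
    radial bandwidth; theta0: filter orientation; sigma_theta: angular width.
    All parameter values here are illustrative assumptions.
    """
    # Normalized frequency coordinates centered at zero
    u = (np.arange(cols) - cols // 2) / cols
    v = (np.arange(rows) - rows // 2) / rows
    U, V = np.meshgrid(u, v)
    f = np.sqrt(U**2 + V**2)
    theta = np.arctan2(V, U)

    f[rows // 2, cols // 2] = 1.0          # avoid log(0) at the DC point
    radial = np.exp(-(np.log(f / f0))**2 / (2 * np.log(sigma_f)**2))
    radial[rows // 2, cols // 2] = 0.0     # zero response at DC

    dtheta = np.angle(np.exp(1j * (theta - theta0)))  # wrapped angular distance
    angular = np.exp(-dtheta**2 / (2 * sigma_theta**2))
    return radial * angular

def filter_image(img, filt):
    """Apply a frequency-domain filter and return the magnitude response."""
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    return np.abs(np.fft.ifft2(np.fft.ifftshift(spectrum * filt)))
```

A bank of such filters at several scales and orientations would produce the sub-feature images that the Curvelet step then compresses.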

3.3. Feature Fusion Strategy of Iris and Face

According to the methods above, the features of the dual irises and the face are extracted and normalized. We obtain the original fusion features by simply concatenating the left and right iris features and the face features. These fusion features usually contain redundant information, which not only increases the computational complexity, but may also include mutually exclusive information that affects the result of fusion recognition. Therefore, we propose a feature selection strategy based on a modified chaotic binary particle swarm optimization (MCBPSO) and a kernel extreme learning machine (KELM). Each dimension of the original fusion features is represented by a binary code (0 or 1), and the feature subspace is constructed randomly at initialization. The KELM classification accuracy is taken as the fitness value of the MCBPSO, which is used to optimize the particles, so as to select the optimal feature subspace and construct the final fusion features. The details are visualized in Figure 4.
In Figure 4, a binary particle of N dimensions is used to filter the original fusion features: “1” means the corresponding feature is selected, while “0” means it is not. The selected fusion features are obtained in this way. The dimension N of the particle is equal to the dimension of the original fusion features.
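The masking step in Figure 4 can be sketched as follows; `select_features` is a hypothetical helper name, not from the paper.

```python
import numpy as np

def select_features(fused, particle):
    """Keep the dimensions of the original fusion feature vector whose
    particle bit is 1; drop those whose bit is 0 (the masking of Figure 4)."""
    particle = np.asarray(particle, dtype=bool)
    assert fused.shape[-1] == particle.size  # N must equal the fused dimension
    return fused[..., particle]
```

For example, an 8874-dimensional fused vector masked by a particle with 866 ones yields an 866-dimensional selected feature vector.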

3.3.1. Kernel Extreme Learning Machine

The extreme learning machine (ELM) is a fast learning algorithm for single hidden layer feedforward neural networks (SLFNs), which was proposed by Huang et al. [52] in 2006. In the learning process of the ELM algorithm, there is no need to adjust the weights between the input layer and the hidden layer; it only needs to set the number of hidden layer nodes. Compared with the traditional BP neural network, the ELM has a fast training speed and strong generalization ability. However, due to the randomization of the initial input weights and hidden layer biases of the ELM, more hidden layer nodes are needed and it is easy to overfit. Therefore, on this basis, Huang et al. [53] introduced the idea of kernel functions into the ELM and proposed the kernel extreme learning machine (KELM) algorithm. The expression for the KELM is
$$f(x)=\left[K(x,x_1),\ldots,K(x,x_N)\right]\left(\frac{I}{C}+\Omega_{ELM}\right)^{-1}T,$$
where $\Omega_{ELM}$ is the kernel matrix with entries $\Omega_{ELM}(i,j)=K(x_i,x_j)$, $K(\cdot,\cdot)$ is a kernel function, $T$ is the training data target matrix, and $C$ is the KELM regularization coefficient.
There are many kernel functions. In this paper, RBF kernel function is selected and defined as
$$K(x_i,x_j)=\exp\left(-\frac{\left\|x_i-x_j\right\|^2}{\gamma}\right),$$
where γ is a parameter. More details about KELM can be found in [52,53].
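A minimal KELM classifier following Equations (3) and (4) might look like the sketch below; the class name and the toy parameter values are our own choices, not the paper's implementation.

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    """K(x_i, x_j) = exp(-||x_i - x_j||^2 / gamma), as in Equation (4)."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-np.maximum(d2, 0) / gamma)

class KELM:
    """Minimal kernel extreme learning machine classifier (Equation (3))."""
    def __init__(self, C=1.0, gamma=1.0):
        self.C, self.gamma = C, gamma

    def fit(self, X, y):
        self.X = X
        n = len(X)
        T = np.eye(int(y.max()) + 1)[y]        # one-hot target matrix T
        omega = rbf_kernel(X, X, self.gamma)   # kernel matrix Omega_ELM
        # beta = (I/C + Omega)^(-1) T, solved without explicit inversion
        self.beta = np.linalg.solve(np.eye(n) / self.C + omega, T)
        return self

    def predict(self, X):
        return (rbf_kernel(X, self.X, self.gamma) @ self.beta).argmax(1)
```

In the proposed system, the classification accuracy of such a model on the test split serves as the particle's fitness value.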

3.3.2. The MCBPSO Algorithm

BPSO is an intelligent search strategy inspired by biological behaviors [29]. Its main idea is as follows: first, a group of particles is randomly initialized; then the fitness value of each particle is calculated with the fitness function. After this, particles update their positions by tracking the “personal best” ($P_{best}$) and the “global best” ($G_{best}$) positions. The $G_{best}$ position is searched iteratively and returned as the final feasible solution.
While implementing the method, we improve the traditional BPSO in the following two points: (1) The initial positions of the BPSO are optimized by chaotic sequences. (2) Chaotic binary sequences are used to change the $G_{best}$ position to avoid falling into the local best. The specific steps of iris and face fusion are as follows:
Step 1: Determine the encoding format and the dimension N of binary particles.
In order to implement KELM training and classification, we need to set the KELM regularization coefficient $C$ and the kernel parameter $\gamma$. So the encoding format of each binary particle is shown in Figure 5.
In Figure 5, the value of $a_i$, $b_i$, $c_i$ is either 1 or 0. $a_i$ is a binary code of the parameter $\gamma$, $b_i$ is a binary code of the KELM regularization coefficient $C$, and $c_i$ is a binary code of the feature mask. $n_1$, $n_2$, $n_3$ are the dimensions of $\gamma$, $C$ and the feature mask, respectively. So the dimension of the particle is $N = n_1 + n_2 + n_3$.
Step 2: Initialize particle positions with chaotic binary sequences.
Generally, the logistic map shown in Equation (5) is selected to generate a pseudo-random sequence [45]:
$$Z:\ a_{n+1}=\mu a_n(1-a_n),$$
where $a_n$ is the $n$th chaotic number, $n$ denotes the iteration number, and $Z$ is the chaotic variable sequence. Obviously, if the initial $a_0\in(0,1)$ and $a_0\notin\{0.0, 0.25, 0.5, 0.75, 1.0\}$, then $a_n\in(0,1)$. When $\mu=4$, the sequence $Z$ is guaranteed to be in a fully chaotic state without periodicity. We use this chaotic map to optimize the traditional BPSO algorithm.
N chaotic real value sequences { x d , d = 1 , 2 , , N } in the range of [0, 1] are generated for each particle according to Equation (5). For the i t h particle, these chaotic real value sequences are transformed into chaotic binary sequences according to Equation (6), which is given by
$$x_d=\begin{cases}0, & x_d\le aver\\ 1, & x_d> aver,\end{cases}$$
where $aver$ is the average value of the chaotic real-valued sequence $\{x_d\}$.
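The two steps of Equations (5) and (6) can be sketched as follows; the seed `a0` is an arbitrary assumption in (0, 1) avoiding the excluded fixed points.

```python
import numpy as np

def chaotic_binary_particle(N, a0=0.37, mu=4.0):
    """Generate one N-dimensional binary particle from a logistic-map
    sequence (Equations (5) and (6)): real values above the sequence
    mean become 1, the rest become 0. a0 is an arbitrary seed in (0, 1)
    avoiding {0, 0.25, 0.5, 0.75, 1}; mu = 4 gives a fully chaotic map."""
    x = np.empty(N)
    a = a0
    for d in range(N):
        a = mu * a * (1.0 - a)   # logistic map iteration, Equation (5)
        x[d] = a
    return (x > x.mean()).astype(int)  # binarization, Equation (6)
```

Each particle of the initial swarm would be produced this way, with a different seed per particle.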
Step 3: Construct the initial feature subspace to obtain the initial $P_{best}$ and $G_{best}$.
According to the principle shown in Figure 4, each N-dimensional binary particle can be regarded as a feature selection tool used to construct a feature subspace. In Figure 5, if $c_i = 1$, the corresponding bit of the original fusion features is kept to construct the feature subspace; otherwise, it is deleted. According to these feature subspaces, the training data and testing data of the KELM are obtained. The KELM parameters, i.e., the regularization coefficient $C$ and the Gaussian kernel parameter $\gamma$, are set according to Equations (7) and (8), respectively. The training data are used to train the KELM. The classification accuracy of the testing data by the trained KELM is taken as the fitness value of the current particle, which gives the initial $P_{best}$. The maximum fitness value over all particles gives the initial $G_{best}$.
$$\gamma=(\max\_\gamma-\min\_\gamma)\,\frac{\sum_{i=1}^{n_1}a_i 2^{i-1}}{2^{n_1}-1}+\min\_\gamma,$$
$$C=\sum_{i=1}^{n_2}b_i 2^{i-1},$$
where $\max\_\gamma$ and $\min\_\gamma$ denote the maximum and minimum values of $\gamma$, respectively.
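The decoding of Equations (7) and (8) can be sketched as below; `decode_parameters` and the gamma bounds are illustrative assumptions, not the paper's settings.

```python
def decode_parameters(particle, n1, n2, max_gamma=100.0, min_gamma=0.01):
    """Decode the first n1 bits into gamma (Equation (7)) and the next n2
    bits into C (Equation (8)); the remaining bits form the feature mask.
    max_gamma/min_gamma are illustrative bounds."""
    a, b = particle[:n1], particle[n1:n1 + n2]
    # sum a_i * 2^(i-1) with 1-indexed i, i.e. bit i shifted by i-1 places
    val = sum(bit << i for i, bit in enumerate(a))
    gamma = (max_gamma - min_gamma) * val / (2**n1 - 1) + min_gamma
    C = sum(bit << i for i, bit in enumerate(b))
    mask = particle[n1 + n2:]
    return gamma, C, mask
```

The all-ones $a$ segment maps to $\max\_\gamma$ and the all-zeros segment to $\min\_\gamma$, so the $n_1$ bits quantize the $\gamma$ range uniformly.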
Step 4: Iteratively update the position and velocity of particles.
In the BPSO algorithm, the initial velocity is a random number in [0,1], and the particle velocity is limited to $[v_{\min}, v_{\max}]$. In each iteration, the velocity and position of each particle are calculated according to Equations (9)–(12). Then the fitness value of each particle is calculated using the fitness function. Particles update their positions by tracking $P_{best}$ and $G_{best}$. The process is described in Algorithm 1.
$$v_{ij}(t+1)=\omega v_{ij}(t)+c_1 r_1\left(pbest_{ij}-x_{ij}(t)\right)+c_2 r_2\left(gbest_{j}-x_{ij}(t)\right),$$
$$v_{ij}(t+1)=\begin{cases}v_{\min}, & v_{ij}(t+1)<v_{\min}\\ v_{\max}, & v_{ij}(t+1)>v_{\max},\end{cases}$$
$$S\left(v_{ij}(t+1)\right)=\frac{1}{1+e^{-v_{ij}(t+1)}},$$
$$x_{ij}(t+1)=\begin{cases}0, & S(v_{ij}(t+1))\le rand()\\ 1, & S(v_{ij}(t+1))> rand().\end{cases}$$
Here, $t$ is the iteration index; $\omega$ is the inertia weight (set as described in Section 3.3.3); $r_1$ and $r_2$ are random numbers in [0,1]; $c_1$ and $c_2$ are learning factors, which affect the speed at which the particle swarm follows the best solution. $v_{ij}(t)$ and $v_{ij}(t+1)$ are the velocities of the particle before and after the update; $x_{ij}(t)$ and $x_{ij}(t+1)$ are the positions of the particle before and after the update. Equation (10) limits the velocity of the particle. Equation (11) determines the probability of the particle position updating, and Equations (11) and (12) together determine the positions of the particles in the next iteration.
Algorithm 1 Update the position and velocity of particles in the particle swarm algorithm.
  while iter < Maxiter do
      update v_ij, x_ij by Equations (9)–(12)
      f_selected created by x_i % f_selected is selected as the fusion features
      if fitness(f_selected) ≥ Pbest_{i-1} then % fitness(·) is the fitness function
          Pbest_i = fitness(f_selected) % Pbest is updated
      end if
      if Max(Pbest_i) ≥ Gbest_{i-1} then
          Gbest_i = Max(Pbest_i) % Gbest is updated
      end if
  end while
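One update step of Equations (9)–(12) can be sketched as follows; the function name and the clamp values `vmin`/`vmax` are illustrative assumptions.

```python
import numpy as np

def bpso_step(x, v, pbest, gbest, w, c1=2.0, c2=2.0, vmin=-6.0, vmax=6.0, rng=None):
    """One BPSO update: velocity update with inertia and the two
    attraction terms (Equation (9)), velocity clamping (Equation (10)),
    sigmoid transfer (Equation (11)), stochastic bit update (Equation (12))."""
    rng = rng or np.random.default_rng()
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # Equation (9)
    v = np.clip(v, vmin, vmax)                                   # Equation (10)
    s = 1.0 / (1.0 + np.exp(-v))                                 # Equation (11)
    x = (s > rng.random(x.shape)).astype(int)                    # Equation (12)
    return x, v
```

Iterating this step while tracking the best-fitness positions reproduces the loop of Algorithm 1.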
Step 5: Perform chaotic local search.
We use chaotic binary sequences to change the $G_{best}$ position generated by each iteration, which prevents the PSO from falling into the local best. Through many trials, we developed the following Algorithm 2, which is the most effective and simplest way to select features from the fusion features.
Algorithm 2 Select fusion features.
  Generate 20 N-dimensional binary chaotic sequences {Z_id}, i = 1, 2, …, 20, d = 1, 2, …, N
  for i = 1 to 20 do
      Z_i = And(Z_i, Gbest) % Gbest is modified by the logical operation And
      Fit(i) = Fitness(Z_i) % calculate the fitness value of the new particle
      if Fit(i) ≥ Fitness(Gbest) then
          Gbest = Z_i
      end if
  end for
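Algorithm 2 can be sketched as below; the `fitness` callable and the logistic-map seed are placeholders for the KELM-based fitness the paper actually uses.

```python
import numpy as np

def chaotic_local_search(gbest, gbest_fit, fitness, n_trials=20, seed_a0=0.41, mu=4.0):
    """Chaotic local search around Gbest (Algorithm 2): AND each of
    n_trials chaotic binary sequences with the Gbest position and keep
    any variant whose fitness is at least as good. `fitness` is a
    user-supplied callable; seed_a0 is an arbitrary choice in (0, 1)."""
    N = gbest.size
    a = seed_a0
    for _ in range(n_trials):
        x = np.empty(N)
        for d in range(N):
            a = mu * a * (1.0 - a)       # logistic map, Equation (5)
            x[d] = a
        z = (x > x.mean()).astype(int) & gbest   # And(Z_i, Gbest)
        f = fitness(z)
        if f >= gbest_fit:
            gbest, gbest_fit = z, f
    return gbest, gbest_fit
```

Because the AND operation can only clear bits of Gbest, this search explores sparser variants of the current best mask, which is what drives the strong dimension reduction reported in Table 4.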
Step 6: Judge whether the iteration termination condition is satisfied. If satisfied, terminate the iteration and output the G b e s t . Otherwise, proceed to the next step.

3.3.3. Weight and Parameter Setting

In the early stage of the search, the particle swarm optimization algorithm needs a high velocity to complete a wide-range search. As the evolution proceeds, the search point gets closer to the optimal value; at this time, we need to slow down the search and conduct a small-scale search. Therefore, the inertia weight should be larger in the early stage and smaller in the later stage. The weight we take decreases with the iterations, from 0.95 to 0.4, as shown in Equation (13).
w e i g h t = W max ( W max W min ) ( i t e r / M a x i t e r ) ,
where $W_{\max}$ is the maximum weight and $W_{\min}$ is the minimum weight; $iter$ is the current number of iterations and $Maxiter$ is the maximum number of iterations. In our experiments, fine-tuning the parameters $c_1$ and $c_2$ has little effect on the results, so the default values $c_1 = c_2 = 2$ are taken [54].
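Equation (13) can be sketched as a one-line schedule; the function name is our own.

```python
def inertia_weight(it, max_iter, w_max=0.95, w_min=0.4):
    """Linearly decreasing inertia weight (Equation (13)):
    wide search early (w near w_max), fine search late (w near w_min)."""
    return w_max - (w_max - w_min) * (it / max_iter)
```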

4. Experimental Results and Analysis

4.1. Iris and Face Multimode Dataset

In our experiments, we choose the CASIA iris distance dataset as the iris and face multimodal dataset. This dataset was produced by the Chinese Academy of Sciences. It used an advanced biometric sensor that captures iris and facial patterns in a field of view 3 m away. The images of the dataset were captured by a high-resolution camera so that both dual-eye iris and face patterns are included in the image region of interest [55]. The dataset contains 142 subjects and a total of 2567 images. Each image has 2352 × 1728 pixels. Figure 6 shows example images from CASIA-Iris-Distance.

4.2. Experiments on Face System

We select 90 subjects from the CASIA iris distance dataset in our experiments. Each subject contains 10 images. The number of training samples is increased from 1 to 5. The kernel extreme learning machine is used as the classifier. Table 2 shows the face recognition rate. For comparison, the recognition rates of the traditional 2D Log-Gabor + LBP and 2D Log-Gabor + Curvelet algorithms are listed in Table 2.
From Table 2, the recognition rate of the traditional 2D Log-Gabor + LBP is slightly higher than that of 2D Log-Gabor + Curvelet. However, it produces more than 4 times the number of feature dimensions of 2D Log-Gabor + Curvelet.

4.3. Experiments on Iris

The iris features of left eyes are extracted from the 900 faces mentioned in Section 4.2. Using KELM as a classifier, the number of training sets is increased from 1 to 5. The recognition rate and feature dimension of different feature extraction algorithms are recorded in Table 3.
From Table 3 we can see that although the iris recognition rate of the Curvelet algorithm is between those of 2D Log-Gabor + Curvelet and 2D Log-Gabor + LBP, it produces the fewest feature dimensions. Considering the need for feature fusion, the Curvelet algorithm is the most suitable for iris feature extraction.

4.4. Experiments on Face-Iris Multimodal System

The features of the left and right eyes and the face are extracted from 90 subjects, each with 10 images. The total dimension of the fusion features can be calculated from Table 2 and Table 3 as 1785 × 2 + 5304 = 8874. For each subject, we select 2, 3, 4 and 5 images as the training set, and the rest as the testing set. Using the traditional BPSO, QBPSO and the modified CBPSO proposed in this paper, the numbers of selected features are shown in Table 4.
It can be seen from Table 4 that the feature dimension is reduced to about one tenth of its original size by the modified CBPSO. This indicates that the MCBPSO can achieve an excellent dimension reduction result, whereas the traditional BPSO and QBPSO can only reduce the dimensions by about one half.
In the experiments, we use the GAR (Genuine Acceptance Rate), FRR (False Rejection Rate) and FAR (False Acceptance Rate) to verify the effect of the particle optimization. They are computed as:
$$FRR=\frac{FR}{AA}\times 100\%,\qquad GAR=1-FRR,\qquad FAR=\frac{FA}{IA}\times 100\%,$$
where $FR$ is the number of false rejections, that is, falsely rejecting a legal user as an illegal user; $AA$ is the number of legal users; $FA$ is the number of false acceptances, that is, falsely accepting an illegal user as a legal user; and $IA$ is the number of illegal users.
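Equation (14) can be sketched directly; the function name is our own.

```python
def verification_rates(false_rejects, legal_attempts, false_accepts, impostor_attempts):
    """FRR, GAR and FAR as defined in Equation (14), returned as fractions."""
    frr = false_rejects / legal_attempts
    gar = 1.0 - frr
    far = false_accepts / impostor_attempts
    return frr, gar, far
```

For instance, 2 false rejections out of 100 legal trials gives FRR = 2% and GAR = 98%.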
The number of training samples for each subject in the database is increased from 1 to 5. The corresponding feature subspace of each subject is generated using the 866 features in Table 4. The remaining images of each user are taken as the test pictures of that user. Then, the images of all subjects that do not belong to this class are taken as negative test samples; that is, when the training samples belong to class 001, all pictures of subjects 002–090 are taken as negative testing examples to calculate the overall FAR. The results are shown in Table 5.
We also compare the proposed multimodal system with some traditional methods, such as BPSO and QBPSO. Table 6 shows the comparison results when the number of training samples is 5.
Although the number of selected features produced by the MCBPSO is far smaller than that of the traditional methods, Table 6 shows that its GAR is the highest and it is the fastest. This indicates that the redundant features not only contain a lot of mutually exclusive information, but also slow down the process. We also compare the proposed multimodal system with some recent similar face-iris multimodal biometric recognition systems. Table 7 shows the comparison results on the CASIA iris distance dataset in verification mode.
The results reveal that our face-iris multimodal biometric recognition system achieves the best performance (in FAR and GAR). Figure 7 shows the comparison of CMC curves before and after MCBPSO optimization. Due to the pseudo-randomness of the chaotic system, the feature subspace selected in each experiment is different, so the corresponding recognition rate is also different. We need to run the experiment several times to obtain an optimal result; from those runs, we choose the feature subspace with fewer features but the best recognition performance.

5. Conclusions and Future Work

Aiming at the low recognition rate of unimodal biometric, this paper propose a novel face-iris multimodal recognition system with excellent performance and easy implementation. Curvelet transform is used for iris feature extraction. 2D Log-Gabor combined with Curvelet algorithm is employed for face feature extraction. A modified chaotic binary particle swarm optimization (MCBPSO) algorithm is proposed to select features. It uses kernel extreme learning machine (KELM) as fitness function and chaotic binary sequence to initialize particle swarms. After feature combination, the recognition rate of multimodal can reach up to 98.5%, while recognition rate of iris and face is 88.22% and 97.78% respectively. Although the fusion of face features and iris features can improve the recognition rate, the orignal fusion features contain a lot of redundant information. In order to optimize the dimensions of fusion features, reduce the running time and further improve the recognition rate, we use a modified chaotic binary particle swarm optimization to select features. Meanwhile, the classification accuracy of KELM is used as the fitness value of particles. Experimental results show that the proposed algorithm can not only reduce the number of features to one tenth of its original size, but also improve the recognition rate. CMC curve also reveals that the recognition rate can reach 99.78% after MCBPSO optimization, higher than no optimization. The principle of the system is easy to understand and flexible to implement.It is significant in practice.
Although the proposed method works well, it still needs to be improved in the future. The original features of the face and iris are tensors. To facilitate feature fusion, these features are transformed into vectors in the proposed approach, and some feature information is lost in this process. Therefore, we will try to use tensors directly for feature fusion and identity recognition.
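The vectorization step mentioned above, flattening each feature tensor and concatenating the results into a single fused vector, can be illustrated with toy nested lists (the shapes and values below are hypothetical, not the actual feature dimensions):

```python
def flatten(t):
    """Recursively flatten a nested-list tensor into a 1-D feature vector."""
    if isinstance(t, list):
        out = []
        for x in t:
            out.extend(flatten(x))
        return out
    return [t]

face = [[[0.1, 0.2], [0.3, 0.4]], [[0.5, 0.6], [0.7, 0.8]]]  # toy 2x2x2 face tensor
iris = [[1.0, 1.1], [1.2, 1.3]]                              # toy 2x2 iris tensor

# Feature-level fusion by concatenating the flattened vectors; note that the
# multi-way (spatial/scale) structure of each tensor is discarded here, which
# is exactly the information loss motivating the tensor-based future work.
fused = flatten(face) + flatten(iris)
```

After this step, `fused` is a flat list of 12 numbers with no record of which axis of the original tensors each value came from.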

Author Contributions

Conceptualization, Q.X. and S.H.; methodology, Q.X. and X.X.; software, Q.X. and X.X.; validation, S.H. and Q.X.; formal analysis, Q.X., X.Z., X.X. and S.H.; investigation, Q.X., X.Z. and X.X.; resources, X.Z.; data curation, Q.X.; writing—original draft preparation, Q.X. and S.H.; writing—review and editing, S.H.; visualization, Q.X., X.Z., X.X. and S.H.; supervision, X.Z.; project administration, X.Z.; funding acquisition, X.Z., X.X. and S.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (Grant Nos. 61673316, 61901530, 61161006 and 61573383), the Major Science and Technology Project of Guangdong Province (No. 2015B010104002), the Science and Technology Innovation Development Project of Changde City (No. 2020ZD25), the Scientific Research Project of HUAS (No. 20ZD04), and the Natural Science Foundation of Hunan Province (No. 2020JJ5767).

Institutional Review Board Statement

Ethical review and approval were waived for this study because all subjects involved in the study are included in publicly available datasets.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Publicly available datasets were analyzed in this study. The data can be found here: http://biometrics.idealtest.org/findDownloadDbByMode.do?mode=Iris.

Acknowledgments

The authors would like to thank the three anonymous reviewers for their constructive comments and insightful suggestions.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Jain, A.K.; Ross, A.; Pankanti, S. Biometrics: A tool for information security. IEEE Trans. Inf. Forensics Secur. 2006, 1, 125–143.
  2. Best-Rowden, L.; Jain, A. Longitudinal Study of Automatic Face Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 148–162.
  3. He, R.; Tan, T.; Davis, L.; Sun, Z. Learning structured ordinal measures for video based face recognition. Pattern Recognit. 2018, 75, 4–14.
  4. Xu, W.; Shen, Y.; Bergmann, N.; Hu, W. Sensor-Assisted Multi-View Face Recognition System on Smart Glass. IEEE Trans. Mob. Comput. 2018, 17, 197–210.
  5. Oh, B.S.; Toh, K.; Teoh, A.B.J.; Lin, Z. An Analytic Gabor Feedforward Network for Single-Sample and Pose-Invariant Face Recognition. IEEE Trans. Image Process. 2018, 27, 2791–2805.
  6. Cao, K.; Jain, A.K. Automated Latent Fingerprint Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 41, 788–800.
  7. Gu, S.; Feng, J.; Lu, J.; Zhou, J. Efficient Rectification of Distorted Fingerprints. IEEE Trans. Inf. Forensics Secur. 2018, 13, 156–169.
  8. Jain, A.K.; Arora, S.S.; Cao, K.; Best-Rowden, L.; Bhatnagar, A. Fingerprint Recognition of Young Children. IEEE Trans. Inf. Forensics Secur. 2017, 12, 1501–1514.
  9. Zhang, L.; Li, L.; Yang, A.; Shen, Y.; Yang, M. Towards contactless palmprint recognition: A novel device, a new benchmark, and a collaborative representation based identification approach. Pattern Recognit. 2017, 69, 199–212.
  10. Jia, W.; Zhang, B.; Lu, J.; Zhu, Y.; Zhao, Y.; Zuo, W.; Ling, H. Palmprint Recognition Based on Complete Direction Representation. IEEE Trans. Image Process. 2017, 26, 4483–4498.
  11. Hsieh, S.; Li, Y.; Wang, W.; Tien, C. A Novel Anti-Spoofing Solution for Iris Recognition Toward Cosmetic Contact Lens Attack Using Spectral ICA Analysis. Sensors 2018, 18, 795.
  12. Llano, E.G.; García-Vázquez, M.S.; Colores-Vargas, J.M.; Zamudio-Fuentes, L.M.; Ramírez-Acosta, A.A. Optimized robust multi-sensor scheme for simultaneous video and image iris recognition. Pattern Recognit. Lett. 2018, 101, 44–51.
  13. Olanrewaju, L.; Oyebiyi, O.; Misra, S.; Maskeliunas, R.; Damasevicius, R. Secure ear biometrics using circular kernel principal component analysis, Chebyshev transform hashing and Bose–Chaudhuri–Hocquenghem error-correcting codes. Signal Image Video Process. 2020, 14, 847–855.
  14. Robertas, D.; Rytis, M.; Egidijus, K.; Marcin, W. Combining Cryptography with EEG Biometrics. Comput. Intell. Neurosci. 2018, 2018, 1–11.
  15. Tolosana, R.; Vera-Rodríguez, R.; Fierrez, J.; Ortega-Garcia, J. Exploring Recurrent Neural Networks for On-Line Handwritten Signature Biometrics. IEEE Access 2018, 6, 5128–5138.
  16. Alpar, O. Online signature verification by continuous wavelet transformation of speed signals. Expert Syst. Appl. 2018, 104, 33–42.
  17. Zou, Q.; Ni, L.; Wang, Q.; Li, Q.; Wang, S. Robust Gait Recognition by Integrating Inertial and RGBD Sensors. IEEE Trans. Cybern. 2018, 48, 1136–1150.
  18. Abdelaziz, A.H. Comparing Fusion Models for DNN-Based Audiovisual Continuous Speech Recognition. IEEE/ACM Trans. Audio Speech Lang. Process. 2018, 26, 475–484.
  19. Lu, Y.; Yan, J.; Gu, K. Review on Automatic Lip Reading Techniques. Int. J. Pattern Recognit. Artif. Intell. 2018, 32, 1856007.
  20. Ammour, B.; Bouden, T.; Boubchir, L. Face-Iris Multimodal Biometric System Based on Hybrid Level Fusion. In Proceedings of the 2018 41st International Conference on Telecommunications and Signal Processing (TSP), Athens, Greece, 4–6 July 2018; pp. 1–5.
  21. Matin, A.W.; Mahmud, F.; Ahmed, T.; Sabbir Ejaz, M.S. Weighted score level fusion of iris and face to identify an individual. In Proceedings of the 2017 International Conference on Electrical, Computer and Communication Engineering (ECCE), Cox’s Bazar, Bangladesh, 16–18 February 2017; pp. 1–4.
  22. Ross, A.; Jain, A.K. Multimodal biometrics: An overview. In Proceedings of the 2004 12th European Signal Processing Conference, Vienna, Austria, 6–10 September 2004; pp. 1221–1224.
  23. Omid, S.; Maryam, E. Optimal Face-Iris Multimodal Fusion Scheme. Symmetry 2016, 8, 48.
  24. Moi, S.H.; Asmuni, H.; Hassan, R.; Othman, R.M. Multimodal biometrics: Weighted score level fusion based on non-ideal iris and face images. Expert Syst. Appl. 2014, 41, 5390–5404.
  25. Haghighat, M.; Abdel-Mottaleb, M.; Alhalabi, W. Discriminant Correlation Analysis: Real-Time Feature Level Fusion for Multimodal Biometric Recognition. IEEE Trans. Inf. Forensics Secur. 2016, 11, 1984–1996.
  26. Yang, J.; Zhang, X. Feature-level fusion of fingerprint and finger-vein for personal identification. Pattern Recognit. Lett. 2012, 33, 623–628.
  27. Shekhar, S.; Patel, V.; Nasrabadi, N.; Chellappa, R. Joint Sparse Representation for Robust Multimodal Biometrics Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 36, 113–126.
  28. Chin, Y.J.; Ong, T.S.; Teoh, A.B.J.; Goh, M.O.M. Integrated biometrics template protection technique based on fingerprint and palmprint feature-level fusion. Inf. Fusion 2014, 18, 161–174.
  29. Raghavendra, R.; Dorizzi, B.; Rao, A.; Kumar, G.H. Designing efficient fusion schemes for multimodal biometric systems using face and palmprint. Pattern Recognit. 2011, 44, 1076–1088.
  30. He, S.; Banerjee, S.; Sun, K. Can derivative determine the dynamics of fractional-order chaotic system. Chaos Solitons Fractals 2018, 115, 14–22.
  31. Song, L.; Huang, J.; Liang, X.; Yang, S.X.; Hu, W.; Tang, D. An Intelligent Multi-Sensor Variable Spray System with Chaotic Optimization and Adaptive Fuzzy Control. Sensors 2020, 20, 2954.
  32. Cao, Y.; Wu, M. A Novel RPL Algorithm Based on Chaotic Genetic Algorithm. Sensors 2018, 18, 3647.
  33. Zhang, Y.; Yang, G.; Zhang, B. FW-PSO Algorithm to Enhance the Invulnerability of Industrial Wireless Sensor Networks Topology. Sensors 2020, 20, 1114.
  34. Ammour, B.; Boubchir, L.; Bouden, T.; Ramdani, M. Face-Iris Multimodal Biometric Identification System. Electronics 2020, 9, 85.
  35. Bouzouina, Y.; Hamami, L. Multimodal biometric: Iris and face recognition based on feature selection of iris with GA and scores level fusion with SVM. In Proceedings of the International Conference on Bio-engineering for Smart Technologies, Paris, France, 30 August–1 September 2017; pp. 1–7.
  36. Eskandari, M.; Toygar, Ö. Selection of optimized features and weights on face-iris fusion using distance images. Comput. Vis. Image Underst. 2015, 137, 63–75.
  37. Huo, G.; Liu, Y.; Zhu, X.; Dong, H. Face–iris multimodal biometric scheme based on feature level fusion. J. Electron. Imaging 2015, 24, 063020.
  38. Roy, K.; O’Connor, B.; Ahmad, F.; Kamel, M.S. Multibiometric System Using Level Set, Modified LBP and Random Forest. Int. J. Image Graph. 2014, 14, 1450013.
  39. Eskandari, M.; Toygar, Ö. Fusion of face and iris biometrics using local and global feature extraction methods. Signal Image Video Process. 2014, 8, 995–1006.
  40. Wang, Z.; Wang, E.; Wang, S.; Ding, Q. Multimodal Biometric System Using Face-Iris Fusion Feature. J. Comput. 2011, 6, 931–938.
  41. Rattani, A.; Tistarelli, M. Robust Multi-modal and Multi-unit Feature Level Fusion of Face and Iris Biometrics. In Proceedings of the International Conference on Biometrics ICB 2009: Advances in Biometrics, Alghero, Italy, 2–5 June 2009.
  42. Fu, L.; Xia, M.; Chen, L. Speaker independent emotion recognition based on SVM/HMMS fusion system. In Proceedings of the International Conference on Audio, Shanghai, China, 7–9 July 2008.
  43. Shi, L.; Lina, X.U.; Hao, Y. Application Research on the Multi-Model Fusion Forecast of Wind Speed. Plateau Meteorol. 2017, 14, 227–230.
  44. Baker, J.P.; Maurer, D.E. Fusing multimodal biometrics with quality estimates via a Bayesian belief network. Pattern Recognit. 2008, 41, 821–832.
  45. Alatas, B.; Akin, E.; Ozer, A.B. Chaos embedded particle swarm optimization algorithms. Chaos Solitons Fractals 2009, 40, 1715–1734.
  46. Li, P.; Xu, D.; Zhou, Z.; Lee, W.J.; Zhao, B. Stochastic Optimal Operation of Microgrid Based on Chaotic Binary Particle Swarm Optimization. IEEE Trans. Smart Grid 2017, 7, 66–73.
  47. Lin, H.; Maolong, X.; Jun, S. An improved Quantum-Behaved Particle Swarm Optimization with Binary Encoding. Kongzhi Yu Juece/Control Decis. 2011, 25, 243–249.
  48. Trujillo, L.; Olague, G.; Hammoud, R.; Hernandez, B. Automatic Feature Localization in Thermal Images for Facial Expression Recognition. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05)—Workshops, San Diego, CA, USA, 21–23 September 2005; p. 14.
  49. Zhang, X.; Xiong, Q.; Xu, X. Iris Identification App Based on Android System. In Proceedings of the 2018 Chinese Automation Congress (CAC), Xi’an, China, 30 November–2 December 2018; pp. 2229–2234.
  50. Candès, E.; Demanet, L.; Donoho, D.; Ying, L. Fast Discrete Curvelet Transforms. Multiscale Model. Simul. 2006, 5, 861–899.
  51. Sun, J.; Lu, Z.; Zhou, L. Iris Recognition Using Curvelet Transform Based on Principal Component Analysis and Linear Discriminant Analysis. J. Inf. Hiding Multim. Signal Process. 2014, 5, 567–573.
  52. Huang, G.; Zhu, Q.Y.; Siew, C. Extreme learning machine: Theory and applications. Neurocomputing 2006, 70, 489–501.
  53. Huang, G. An Insight into Extreme Learning Machines: Random Neurons, Random Features and Kernels. Cogn. Comput. 2014, 6, 376–390.
  54. Kennedy, J.; Eberhart, R. A discrete binary version of the particle swarm algorithm. In Proceedings of the 1997 IEEE International Conference on Systems, Man, and Cybernetics. Computational Cybernetics and Simulation, Orlando, FL, USA, 12–15 October 1997; Volume 5, pp. 4104–4108.
  55. CASIA Iris Image Database. Available online: http://biometrics.idealtest.org/findDownloadDbByMode.do?mode=Iris (accessed on 16 June 2017).
Figure 1. Framework of face-iris multimodal biometric identification system.
Figure 2. Process of iris preprocessing: (a) original image; (b) eye location; (c) eye segmentation; (d) iris location and segmentation; (e) normalized iris, size 64 × 512; (f) iris lower semicircle, size 64 × 256.
Figure 3. Flowchart of face feature extraction.
Figure 4. Principle of the chaotic BPSO.
Figure 5. Encoding format of each binary particle.
Figure 6. Example images from CASIA-Iris-Distance.
Figure 7. Comparison of CMC curves before and after optimization: (a) training images = 2; (b) training images = 3; (c) training images = 4; (d) training images = 5.
Table 1. Several works on face–iris multimodal biometric systems.

| Authors | Fusion Level/Classification Method | Multimodal Dataset | Feature Extraction Method |
| --- | --- | --- | --- |
| B. Ammour et al. (2020) [34] | Hybrid fusion level/fuzzy k-nearest neighbor (FK-NN) | Chimeric database [34] | Iris features are extracted by a 2D Log-Gabor filter; facial features are computed using singular spectrum analysis (SSA). |
| B. Ammour et al. (2018) [20] | Hybrid fusion level/Euclidean distance | CASIA Iris Distance database | Iris and face features are extracted by a 2D Log-Gabor filter combined with SRKDA. |
| Y. Bouzouina et al. (2017) [35] | Match score level/support vector machine (SVM) | Chimeric database | PCA and DCT are used for face features; 1D Log-Gabor filtering and Zernike moments are used for iris features; a genetic algorithm (GA) selects features. |
| O. Sharifi et al. (2016) [23] | Match score level, feature level and decision level fusion/Manhattan distance | CASIA Iris Distance database | Iris and face features are extracted by a 2D Log-Gabor filter. |
| M. Eskandari et al. (2015) [36] | Score level and feature level fusion/weighted sum rule | CASIA Iris Distance database | 1D Log-Gabor for iris features; five local and global kinds of face features extracted by subspace PCA, modular PCA and LBP; PSO selects features. |
| G. Huo et al. (2015) [37] | Feature level fusion/support vector machine (SVM) | Chimeric database | 2D Gabor filter for face and iris features; PCA reduces the dimension. |
| K. Roy et al. (2014) [38] | Feature level fusion/Manhattan distance | Chimeric database | Modified local binary pattern (MLBP); random forest (RF) selects the optimal subset of features. |
| H.M. Sim et al. (2014) [24] | Match score level/Euclidean distance for face, Hamming distance for iris | Chimeric database | Face and iris features extracted by eigenface and NeuWave Network, respectively. |
| M. Eskandari et al. (2014) [39] | Match score level/weighted sum rule | Chimeric database | Face and iris features extracted by LBP and subspace LDA, respectively. |
| Z. Wang et al. (2011) [40] | Feature level fusion/Euclidean distance | Chimeric database | Face features extracted by eigenface; iris features based on Daugman’s algorithm. |
| A. Rattani et al. (2009) [41] | Feature level fusion/Euclidean distance | Chimeric database | SIFT (scale-invariant feature transform) features computed from both biometrics; spatial sampling used for feature selection. |
Table 2. Recognition rate of left iris based on different algorithms.

| Algorithm | Dimension of the Feature | 1 Pic | 2 Pic | 3 Pic | 4 Pic | 5 Pic |
| --- | --- | --- | --- | --- | --- | --- |
| 2D Log-Gabor + LBP | 22,656 | 0.879 | 0.960 | 0.976 | 0.980 | 0.989 |
| 2D Log-Gabor + Curvelet | 5304 | 0.784 | 0.906 | 0.930 | 0.972 | 0.978 |
Table 3. Recognition rate of face based on different algorithms.

| Algorithm | Dimension of the Feature | 1 Pic | 2 Pic | 3 Pic | 4 Pic | 5 Pic |
| --- | --- | --- | --- | --- | --- | --- |
| 2D Log-Gabor + LBP | 5664 | 0.60 | 0.738 | 0.786 | 0.802 | 0.864 |
| 2D Log-Gabor + Curvelet | 42,840 | 0.684 | 0.826 | 0.860 | 0.883 | 0.898 |
| Curvelet | 1785 | 0.670 | 0.799 | 0.846 | 0.857 | 0.882 |
Table 4. The number of selected features by different BPSO algorithms.

| Algorithms | 2 Pic | 3 Pic | 4 Pic | 5 Pic |
| --- | --- | --- | --- | --- |
| Traditional BPSO | 4422 | 4433 | 4465 | 4420 |
| Traditional QBPSO | 4524 | 4487 | 4457 | 4385 |
| Modified CBPSO | 1630 | 811 | 699 | 866 |
Table 5. Identification results of the proposed face-iris multimodal biometric system.

| Training Numbers | GAR (%) | FRR (%) | FAR (%) |
| --- | --- | --- | --- |
| 1 | 90.49 | 9.51 | 0.07 |
| 2 | 96.40 | 3.60 | 0.02 |
| 3 | 97.78 | 2.22 | 0.01 |
| 4 | 98.89 | 1.11 | 0.0037 |
| 5 | 99.78 | 0.22 | 0.003 |
Table 6. Comparison of the proposed method with some traditional BPSO.

| Fusion Method | GAR (%) | FRR (%) | FAR (%) | Time Cost (s) |
| --- | --- | --- | --- | --- |
| Traditional BPSO | 99.11 | 0.89 | 0.62 | 0.064 |
| Traditional QBPSO | 99.33 | 0.67 | 0.75 | 0.063 |
| Modified CBPSO | 99.78 | 0.22 | 0.003 | 0.016 |
Table 7. Comparison of proposed method with some recent state-of-the-art methods.

| Authors | Fusion Method | GAR (%) | FAR (%) |
| --- | --- | --- | --- |
| M. Eskandari and O. Toygar [36] (2015) | score level and feature level fusion | 94.44 | 0.01 |
| O. Sharifi and M. Eskandari et al. [23] (2016) | score level, feature level and decision level fusion | 98.93 | 0.01 |
| B. Ammour et al. [20] (2018) | hybrid level of fusion | 99.5 | 0.06 |
| Proposed method | feature level fusion | 99.78 | 0.003 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
