Article

RMOBF-Net: Network for the Restoration of Motion and Optical Blurred Finger-Vein Images for Improving Recognition Accuracy

Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro, 1-gil, Jung-gu, Seoul 04620, Korea
* Author to whom correspondence should be addressed.
Submission received: 6 September 2022 / Revised: 19 October 2022 / Accepted: 20 October 2022 / Published: 24 October 2022
(This article belongs to the Special Issue Computational Intelligent and Image Processing)

Abstract:
Biometrics recognizes a person based on one or more unique physical or behavioral characteristics. Because these characteristics differ in structure and shape from person to person, biometrics is more secure and convenient than existing security systems. Among the various biometric authentication methods, finger-vein recognition is difficult to forge because the vein pattern lies inside the finger, and it offers high user convenience because it uses a non-invasive capturing device. However, motion and optical blur may occur during finger-vein recognition for reasons such as finger movement and camera defocusing, and such blurring can increase the finger-vein recognition error. Nevertheless, no previous research on finger-vein recognition has considered both motion and optical blur. Therefore, in this study, we propose a new method for increasing finger-vein recognition accuracy based on a network for the restoration of motion and optical blurring in finger-vein images (RMOBF-Net). Our proposed network continuously maintains features that can be utilized during motion and optical blur restoration by actively using residual blocks and feature concatenation. In addition, the architecture of RMOBF-Net is optimized for the finger-vein image domain. Experimental results on two open datasets, the Shandong University homologous multi-modal traits finger-vein database and the Hong Kong Polytechnic University finger-image database version 1, yielded equal error rates of 4.290–5.779% and 2.465–6.663%, respectively. The proposed method thus outperformed state-of-the-art methods.

1. Introduction

Recently, the increasing use of mobile devices has raised the importance of their security. Accordingly, there is an increasing need for biometric verification systems to solve the problems of existing identity authentication systems, in which forgotten passwords, ID card theft, and spoofing occur frequently. Biometric recognition refers to technology that recognizes individuals using biometric characteristics. It recognizes a person based on one or more unique physical and behavioral traits, and it is highly secure and convenient compared with existing systems because the structure and shape of these biological characteristics differ from person to person [1]. There are various types of biometric verification, such as fingerprint, iris, face, voice, palmprint, and finger-vein recognition. Among them, finger-vein recognition has the following advantages: (1) the finger-vein pattern is hidden within the finger and thus is difficult to forge or steal; (2) non-invasive image capture ensures both convenience and hygiene; (3) since humans usually have 10 fingers, if an unexpected accident occurs to one finger, another can be used for authentication [2]; and (4) the probability of false recognition is lower than in other methods because each finger has a unique pattern, and the same finger-vein pattern rarely occurs in two individuals [1]. These characteristics of universality, uniqueness, and permanence make finger-veins applicable to user authentication. However, blur may occur when capturing a finger-vein image for various reasons, such as light scattering in the skin layer caused by the near-infrared (NIR) light used to acquire finger-vein images, camera lens focus mismatch, differences in finger thickness, the depth difference between the finger skin surface and the finger-vein, and finger movement. A blurred finger-vein image differs from the original image registered in the recognition system; as a result, the finger-vein recognition error can increase, which may destabilize the recognition system. Therefore, restoring blurred finger-vein images through image deblurring is essential. The blur that can occur when capturing a finger-vein image can be divided into light scattering blur, optical blur, and motion blur. In general, light scattering from the skin tissue inevitably occurs during NIR light transmission. Therefore, numerous studies on blurred finger-vein image restoration through scattering removal have been conducted [3,4,5,6,7,8]. In addition, optical blur may occur due to camera lens focus mismatch, the distance between the camera lens and the finger-vein, and the finger thickness of each individual. Therefore, research on optically blurred finger-vein image restoration has been conducted [9,10,11]. Research on restoring motion-blurred finger-vein images caused by finger movement during finger-vein recognition has also been conducted [12]. Of course, during finger-vein recognition, a finger is fixed to some extent on the capturing device, but excitement, anxiety, brain disease, the influence of alcohol, Parkinson's disease, physiological tremor, dystonia, lack of sleep, and excessive stress may cause hand tremor, which may result in a motion-blurred finger-vein image being captured.
In addition, the use of non-contact devices has recently expanded due to COVID-19, and when a user attempts finger-vein recognition at high speed using these devices, severe motion blur can inevitably occur in the input image. Furthermore, if a finger moves during recognition, not only motion blur but also optical blur may occur because the camera lens loses focus on the moving object. However, a finger-vein recognition method that accounts for both motion and optical blur has not been studied. Therefore, in this paper, we focus on restoring the motion blur and optical blur that can occur during finger-vein recognition.
Deblurring approaches for image restoration are largely divided into non-blind and blind deblurring methods [13]. In non-blind deblurring methods, deblurring is performed after estimating the blur kernel. The blur kernel can be inferred from knowledge such as the amount of motion or defocus blur and the camera sensor optics during the image formation process, calibrated from the test image, or estimated from the point spread function (PSF) [14]. After estimating the blur kernel, the original image can be obtained through a deconvolution operation on the blurred image. With this non-blind restoration method, high restoration performance can be expected for the image domain that matches the predicted blur kernel. However, this method has the disadvantage that restoration performance can degrade when images are acquired from other devices or when the test dataset belongs to a different domain from the images used for prediction. Additionally, during image capture in a real environment, various changes such as image rotation, illumination change, and texture change can occur, so there is a limit to applying non-blind methods to case-specific applications. Also, in a real environment, most blur kernels are unknown, and measuring the blur kernel for each case is time-consuming. On the other hand, in the blind deblurring method, deblurring is performed while the blur kernel remains unknown. In most environments, it is difficult to know the blur kernel, and images from numerous domains are acquired from various devices. Therefore, recent blind deblurring algorithms have been proposed using training-based methods that reduce the difference between the blurred and original sharp images [13,15,16,17]. Accordingly, in this study, blurred finger-vein image restoration was performed with a blind deblurring method, which matches the real environment. We propose a method of restoring motion and optically blurred finger-vein images using a newly proposed restoration network, and a method of recognizing the restored finger-vein images using a deep convolutional neural network (CNN). To restore blurred finger-vein images, our newly proposed network is largely composed of an encoder, a decoder, and feature concatenation without pooling operations, which avoids loss of vein information. ResUNet [18] is also composed of an encoder–decoder and feature concatenation without pooling operations. However, the purpose of the task and the overall structure of ResUNet and our proposed method are different. The purpose of ResUNet is semantic segmentation of road label data [18], while our network performs image restoration of motion and optically blurred finger-vein data. In detail, ResUNet uses a complex residual block with a ReLU activation function and mean square error (MSE) loss, while our network uses a simple residual block with a PReLU activation function and Charbonnier and perceptual loss functions to preserve overall finger-vein information, including vein pattern and texture information. In addition, compared with ResUNet, a larger number of residual blocks is used in our network to restore finger-vein information because a finger-vein image has much less high-frequency information than road data. Considering these points, we designed our proposed restoration architecture focusing on blurred finger-vein image data. The main contributions of our study are as follows:
  • This is the first study on restoring the motion and optical blur of finger-vein images that can occur during finger-vein recognition.
  • We propose a new image restoration network, the network for the restoration of motion and optical blurred finger-vein images (RMOBF-Net). Our proposed network continuously maintains features that can be utilized during motion and optical blur restoration by actively using residual blocks and feature concatenation. Although components such as the encoder–decoder, convolutional layers, residual block, and feature concatenation of RMOBF-Net are well known, we found that the combination of these components is important when restoring blurred finger-vein images to improve finger-vein recognition performance.
  • The architecture of RMOBF-Net is optimized for the finger-vein image domain. We also experimented with a state-of-the-art deep learning-based restoration network, MPRNet, and found that a more complex architecture showed greater degradation of recognition accuracy. Therefore, we focused on an architecture that is less complex but can preserve features. As a result, in most cases, our proposed model shows better performance than previous methods.
  • The RMOBF-Net, the recognition network, and the blurred image databases according to non-uniform motion blur and blur intensity are publicly available in [19] to allow other researchers to perform fair performance evaluations.
This paper is organized as follows. Section 2 provides an overview of the previous studies, and the proposed method is explained in Section 3. In Section 4, comparative experiments and experimental results with analysis are discussed. Finally, in Section 5, the conclusions of this paper are summarized.

2. Related Works

Blurring may occur during finger-vein image capture, and the types of blur can be broadly divided into light scattering blur, optical blur, and motion blur. In the case of light scattering blur, light scattering and attenuation by the biological tissue occur during NIR light transmission when a finger-vein image is captured with a NIR sensor. Accordingly, the quality of the captured finger-vein image can be significantly degraded. This makes the feature representation of the finger-vein image unreliable and consequently decreases the recognition accuracy [20]. Research on image restoration for skin-scattering-blurred finger-vein images has been conducted to resolve this issue. In addition, optical blur may occur due to differences in finger thickness, the difference in depth between the finger skin surface and the finger-vein, and the shortest distance between the camera lens and the finger. Motion blur can also occur due to finger movement. Therefore, some studies have been conducted on image restoration for optically or motion-blurred finger-vein images. However, motion and optical blur can occur together owing to hand tremors caused by illness or stress and the camera lens focus mismatch caused by such tremors, and no research has been conducted on image restoration for images degraded by both motion and optical blur. Therefore, in this study, previous research is divided into finger-vein recognition without blur restoration, finger-vein recognition with skin scattering blur restoration, finger-vein recognition with optical blur restoration, and finger-vein recognition with motion blur restoration.

2.1. Finger-Vein Recognition without Blur Restoration

Before deep learning was actively studied, finger-vein recognition was performed using handcrafted features. Miura et al. [21] extracted and connected the center positions of veins by calculating local maximum curvature and matched them with the registered finger-vein pattern using template matching. Miura et al. [22] proposed a finger-vein recognition method using repeated line tracking, feature extraction, and template matching. Matsuda et al. [23] proposed a finger-vein recognition method using image normalization, feature extraction, and feature point matching. Lee et al. [24] extracted the minutia points within the finger-vein region, aligned the image, and then extracted the finger-vein feature with the local binary pattern (LBP) from the aligned image; they proposed a finger-vein recognition method based on the Hamming distance of the extracted features. Peng et al. [25] performed finger-vein segmentation by applying a Gabor filter with eight orientations to the original image and extracted the finger-vein pattern by fusion, selecting an image that emphasized the vein pattern. From the extracted finger-vein patterns, scale-invariant feature transform (SIFT) features were extracted, and a method of distinguishing imposter and genuine matching using the Euclidean distance was proposed. Rosdi et al. [26] used the local line binary pattern, a transformation of the local binary pattern, to obtain a line binary code for the horizontal and vertical directions and used the Hamming distance to obtain a matching score. If the matching score obtained in this way is close to 0, the input is recognized as the same finger, and if it is close to 1, it is recognized as a different finger. The above handcrafted feature-based finger-vein recognition methods have the advantage of improving recognition performance when the designed optimal filter accurately models the image spatial domain. However, applying the designed filter to images with different characteristics may cause performance degradation. In addition, these handcrafted feature-based methods were designed in constrained environments, and their performance may be degraded due to susceptibility to image variations such as illumination changes, misalignment, and distortion. To overcome these problems, trained feature-based finger-vein recognition methods have been studied. Hong et al. [27] and Kim et al. [28] generated a difference image, which is the difference between the enrolled and matching images, and used it as input for CNN models such as VGG-16 [29], ResNet-50 [30], or ResNet-101 [30]. The outputs of these models were divided into two classes, genuine (authentic) matching (matching between images of the same class) and imposter matching (matching between images of different classes), and finger-vein recognition was performed using these outputs. Qin et al. [31] divided the original finger-vein image into N × N patches for training. Based on the trained CNN, the probability of the center pixel of each patch belonging to the vein pattern was determined by the softmax classifier in the last layer, followed by finger-vein recognition through feature matching. Song et al. [32] and Noh et al. [33,34] performed DenseNet-based [35] finger-vein recognition by creating a composite image between an enrolled image and a matching image and using it as training and testing images.
Shift matching was also used to mitigate performance degradation due to misalignment or rotation during the evaluation process. Qin et al. [36] proposed a finger-vein recognition method that combined long short-term memory (LSTM) and a CNN. They extracted the finger-vein feature with a stacked CNN and LSTM (SCNN-LSTM). Based on the extracted features, genuine and imposter images were distinguished through supervised feature encoding and matching against the enrollment database features. Zhao et al. [37] performed finger-vein recognition using a shallow CNN with triplet loss, center loss, and dynamic regularization. Since these trained feature-based finger-vein recognition studies use trained deep features, there is no need for a person to design a filter directly. Additionally, as various forms of images are used for training, robust performance can be expected across various spatial domains. However, these methods have the limitations that many images and an intensive training process are required for learning. Additionally, the various types of blurring that may occur during finger-vein image acquisition are not considered.

2.2. Finger-Vein Recognition with Skin Scattering Blur Restoration

Lee et al. [3] measured the PSF of skin scattering blur and used a constrained least squares (CLS) filter to restore the blur. Yang et al. [4] defined a biological optical model (BOM) specialized for finger-veins using the principle of light propagation in biological tissue. Using this BOM, they proposed a scattering removal algorithm that restores the blurred finger-vein image after estimating the light scattering component and scattering radiation. The phase-only correlation (POC) measure was then applied to the restored images to perform finger-vein image matching. Yang et al. [5] used a weighted biological optical model (WBOM) to measure finger-vein image degradation, and the local background illumination map (LBIM) and non-scattered transmission map (NSTM) were estimated based on anisotropic diffusion and gamma correction (ADAGC). Image restoration was performed based on these estimates, and the scattering effect was eliminated through venous region enhancement based on Gabor wavelets and an inter-scale multiplication operation. Thereafter, the POC measure was used on the restored images for finger-vein image matching. Yang et al. [6] proposed PSF-based image restoration and BOM-based scattering noise removal considering the structure of human skin tissue. In addition, Yang et al. [7] designed a biological optical model-based scattering algorithm to measure the scattering component, scattering radiation, and transmission map, and proposed a method of removing the scattering effects based on the measured values. Finger-vein image matching was then performed using the POC measure. In these studies, finger-vein image restoration and recognition were performed focusing on light scattering within the skin layer. However, the light scattering component needs to be accurately estimated to improve restoration performance, which may be time-consuming. In addition, if the domains of the image used for estimation and the test image differ, the parameters for image restoration need to be re-estimated. Du et al. [8] newly defined the BOM and restored the degraded image through a CNN composed of dense blocks. Then, finger-vein recognition performance was measured through matrix matching using the correlation coefficient. They proposed scattering removal and recognition methods based on training the network. These studies proposed methods for restoring scattering-blurred finger-vein images, but none considered the motion and optical blur that occur during finger-vein image capture.

2.3. Finger-Vein Recognition with Optical Blur Restoration

Lee et al. [9] measured the extent of optical blurring in a finger-vein image from the orthogonal profile of the finger edge based on the average gradient and proposed a blurred finger-vein image restoration method considering both optical blur and scattering blur using the PSF and a CLS filter. Two PSFs, estimating the optical and scattering blur components respectively, were used to restore the blurred finger-vein image, and finger-vein recognition was performed on the restored images with the modified Hausdorff distance [38]. Because two PSFs must be measured for optical and scattering blur, the performance of this method improves only when the parameters are accurately predicted; accordingly, it has the limitation of requiring a lot of processing time. Choi et al. [10] proposed a finger-vein recognition method using several CNN models after restoring the optical blur included in the original finger-vein image based on a modified conditional GAN. He et al. [11] considered a pair of defocus blur types, local and global blur, and proposed GAN-based restoration for locally and globally blurred finger-vein images. These methods are robust for images obtained from various environments. However, the motion blur that may occur from hand tremor during image capture for finger-vein recognition is not considered. In addition, GAN-based restoration methods adopt an additional discriminator, which increases the training complexity compared with our proposed model, which adopts only an encoder–decoder structure without any discriminator.

2.4. Finger-Vein Recognition with Motion Blur Restoration

Choi et al. [12] performed motion-blurred finger-vein image restoration considering the motion blur that may occur due to hand tremor during finger-vein recognition. They constructed a dataset by applying motion blur to the original finger-vein images, restored the images through a modified DeblurGAN, and applied DenseNet to perform finger-vein recognition.
Most existing blurred finger-vein image restoration studies address scattering blur, and some address optical blur or motion blur. However, in a real environment, when a finger moves during finger-vein recognition, not only motion blur but also optical blur can occur simultaneously because the camera lens is unable to focus on the moving object. Therefore, in this paper, we propose a method for motion and optical blur restoration through a newly proposed network and finger-vein recognition using a deep CNN. Table 1 summarizes the advantages and disadvantages of the proposed method and previous studies.

3. Proposed Method

3.1. Overview of the Proposed Method

Figure 1 presents an overall flowchart of the proposed method. The finger-vein image is captured by an image capturing device using a NIR sensor camera (step (1)), and the finger-vein area is obtained through preprocessing (step (2)). Subsequently, the proposed RMOBF-Net is used to restore the motion and optically blurred finger-vein image (step (3)). Then, the difference image between the enrolled and matching images is generated from the restored finger-vein images, and the difference image is used as input to the deep CNN model for finger-vein recognition (step (4)). Based on the final output score obtained from the deep CNN, the system distinguishes between genuine (authentic) matching and imposter matching (step (5)) and performs finger-vein recognition (step (6)).

3.2. Preprocessing of Finger-Vein Images

Preprocessing is necessary to find the region of interest (ROI), which is the key part of finger-vein recognition [12]. The first step is to obtain the image presented in Figure 2b by binarizing the captured raw finger image. However, simple binarization thresholding does not perfectly eliminate the background. Therefore, a Sobel filter is applied to the original finger-vein image to extract the finger edge and obtain an edge map. Then, the edge map is subtracted from the binarized image to create the final edge map. Area thresholding is applied to the regions remaining outside the finger region to obtain the background-eliminated image, as shown in Figure 2c. In addition, Equations (1) and (2) are applied to the binarized mask B of Figure 2c to correct the image misalignment caused by in-plane rotation.
$$\sigma_{11} = \frac{\sum_{(x,y)\in B}(y - c_y)^2 \cdot M(x,y)}{\sum_{(x,y)\in B} M(x,y)}, \qquad \sigma_{12} = \frac{\sum_{(x,y)\in B}(x - c_x)(y - c_y) \cdot M(x,y)}{\sum_{(x,y)\in B} M(x,y)}, \qquad \sigma_{22} = \frac{\sum_{(x,y)\in B}(x - c_x)^2 \cdot M(x,y)}{\sum_{(x,y)\in B} M(x,y)} \tag{1}$$

$$\theta = \begin{cases} \tan^{-1}\left\{ \dfrac{\sigma_{11} - \sigma_{22} + \sqrt{(\sigma_{11} - \sigma_{22})^2 + 4\sigma_{12}^2}}{2\sigma_{12}} \right\} & \text{if } \sigma_{11} > \sigma_{22} \\[2ex] \tan^{-1}\left\{ \dfrac{2\sigma_{12}}{\sigma_{22} - \sigma_{11} + \sqrt{(\sigma_{22} - \sigma_{11})^2 + 4\sigma_{12}^2}} \right\} & \text{if } \sigma_{11} \le \sigma_{22} \end{cases} \tag{2}$$
In Equation (1), $M(x, y)$ and $(c_x, c_y)$ are the image pixel value and the central coordinate, respectively. Based on these, the rotation angle $\theta$ of Equation (2) is calculated to compensate for the in-plane rotation [39]. Through this process, a corrected image is obtained, as shown in Figure 2d. The left and right parts of Figure 2d are removed based on a predetermined value to obtain Figure 2e, focusing on the finger-vein area. Finally, as shown in Figure 2f, the final ROI finger-vein image is obtained with the created ROI mask. We resized the rectangular finger-vein images to square ones so that the resized images could serve as inputs to the proposed RMOBF-Net and DenseNet for blur restoration and finger-vein recognition, respectively.
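To make the rotation-compensation step concrete, the following Python sketch (our illustration, not the authors' released code) computes the moments of Equation (1) and the angle of Equation (2) from a binary finger mask and rotates the image accordingly; the function name `correct_inplane_rotation` and the OpenCV-based rotation are our assumptions.

```python
# Illustrative sketch of the in-plane rotation correction of Equations (1)
# and (2), assuming a binary finger mask `mask` (H x W, values 0/1) and the
# grayscale image `img` as inputs.
import numpy as np
import cv2

def correct_inplane_rotation(img, mask):
    ys, xs = np.nonzero(mask)                # pixels inside the finger region B
    m = mask[ys, xs].astype(np.float64)      # M(x, y); 1 inside the mask
    cx, cy = xs.mean(), ys.mean()            # central coordinate (c_x, c_y)

    # Second-order central moments of Equation (1)
    s11 = np.sum((ys - cy) ** 2 * m) / m.sum()
    s12 = np.sum((xs - cx) * (ys - cy) * m) / m.sum()
    s22 = np.sum((xs - cx) ** 2 * m) / m.sum()

    # Rotation angle of Equation (2); this sketch assumes s12 != 0
    if s11 > s22:
        theta = np.arctan((s11 - s22 + np.sqrt((s11 - s22) ** 2
                                               + 4 * s12 ** 2)) / (2 * s12))
    else:
        theta = np.arctan(2 * s12 / (s22 - s11 + np.sqrt((s22 - s11) ** 2
                                                         + 4 * s12 ** 2)))

    # Rotate about the mask centroid to compensate; the sign convention
    # depends on the image coordinate system and may need flipping
    rot = cv2.getRotationMatrix2D((cx, cy), np.degrees(theta), 1.0)
    return cv2.warpAffine(img, rot, (img.shape[1], img.shape[0]))
```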

3.3. RMOBF-Net-Based Restoration of Motion and Optical Blurred Finger-Vein Images

Unlike most image enhancement methods, which subjectively process images to improve visual quality, image restoration, which makes the image similar to the original, can be viewed as objective image processing [40]. Image restoration attempts to reconstruct a degraded image using prior knowledge of the image degradation. Therefore, restoration in image processing aims to model the degradation and apply its inverse process to recover the original image. Based on this, the blur model can be expressed as follows [41]:
$$g(x, y) = h(x, y) * f(x, y) + \eta(x, y) \tag{3}$$

$$\hat{f}(x, y) \approx f(x, y) \tag{4}$$
In Equation (3), $g(x, y)$ is the degraded image, $h(x, y)$ is the degradation function, $*$ is the convolution operation, $f(x, y)$ is the original input image, and $\eta(x, y)$ is the noise. In Equation (4), $\hat{f}(x, y)$ represents the estimate of the original image obtained through image restoration, and the objective of image restoration is to make $\hat{f}(x, y)$ similar to the original image $f(x, y)$. Therefore, it is important to estimate the degradation function $h(x, y)$ and noise $\eta(x, y)$ accurately. Early studies performed image restoration by defining the blur model as above and directly inferring its parameters. However, when applied in a real environment, accurate estimation of $h(x, y)$ and $\eta(x, y)$ is extremely difficult and time-consuming. In addition, due to the recent proliferation of digital devices, images of various types can be obtained. In this case, images from domains different from the source images referenced during parameter estimation can be input, and performance degradation may occur when the estimated $h(x, y)$ and $\eta(x, y)$ are applied. Considering this, in this study we propose a new blurred finger-vein image restoration method that learns the motion and optical restoration model through training rather than directly estimating the restoration parameters. In the proposed method, the optimal filter is designed through training given $F_{blur}$, the motion and optically blurred finger-vein image; directly estimating $h(x, y)$ and $\eta(x, y)$ is therefore no longer necessary. The objective of the proposed method is to make the restored finger-vein image $F_{res}$ similar to the original finger-vein image $F_{ori}$. A detailed explanation of RMOBF-Net, the network for the restoration of motion and optically blurred finger-vein images proposed in this paper, is presented in the following subsections.
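As a concrete illustration of Equation (3), the following minimal sketch synthesizes a degraded image from a sharp one by convolving with a blur kernel and adding noise; the function and parameter names are ours, and the additive Gaussian noise model is an assumption for illustration only.

```python
# Minimal sketch of the degradation model in Equation (3):
# g = h * f + eta, where h is the blur kernel and eta is additive noise.
# Assumes a grayscale image `f` as a float32 NumPy array in [0, 1].
import numpy as np
from scipy.signal import convolve2d

def degrade(f, kernel, noise_sigma=0.01):
    g = convolve2d(f, kernel, mode="same", boundary="symm")  # h(x,y) * f(x,y)
    g += np.random.normal(0.0, noise_sigma, f.shape)         # eta(x,y)
    return np.clip(g, 0.0, 1.0)

# Example usage with a simple 3 x 3 box blur kernel
# g = degrade(f, np.full((3, 3), 1.0 / 9.0))
```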

3.3.1. Architecture of RMOBF-Net

The architecture of RMOBF-Net for blurred finger-vein image restoration is shown in Figure 3 and Table 2. RMOBF-Net is composed of an encoder that extracts features, a decoder that restores the blurred finger-vein image using the extracted features, residual blocks that continuously forward the information of the previous step, and concatenation that sends the information extracted from each encoder layer to the decoder. To extract information about an image and render the extracted information as an output image according to the task, an encoder–decoder structure is effective, and this structure has also been used in recent restoration studies [13,15,16,17]. For the upsampling layer of the decoder, transpose convolution, the representative deconvolution operation, was considered, but it may produce checkerboard artifacts in the restored output image [42]. Instead, bilinear interpolation is used to increase the feature size.
Inspired by [13,16,17], residual blocks [30] are actively used in the encoder and decoder of RMOBF-Net. In the case of finger-vein images, the vein pattern information and texture detail are used for classification. Therefore, it is important to restore this vein pattern and texture detail. The RMOBF-Net encoder extracts abstract features of the finger-vein, and the vein pattern or texture information can be lost during this extraction. Residual blocks are added to each layer to minimize this information loss. Within a residual block, a skip connection continuously delivers feature information from previous layers to the following layers, and through this, the finger-vein information is continuously maintained. To exploit this advantage, residual blocks are also used in the decoder. The structure of the residual block is shown in Figure 4, and it is applied three times in each layer. In addition, a parametric rectified linear unit (PReLU) [43] is used as the activation function in the middle of the residual block. Unlike the rectified linear unit (ReLU) [44], which sets all negative input values of the feature to 0, PReLU is an activation function that can preserve negative values. PReLU is used to restore the blurred finger-vein image by preserving the overall information, including vein pattern and texture information, and transferring it to the next layer. Unlike the leaky rectified linear unit (leaky ReLU) [45], for which the optimal value must be found directly by applying a predetermined constant to the negative slope coefficient, PReLU efficiently finds the optimal negative slope coefficient through training by treating it as a learnable parameter. The PReLU used in this paper is defined as follows:
$$f(y_i) = \begin{cases} y_i, & \text{if } y_i > 0 \\ a_i y_i, & \text{if } y_i \le 0 \end{cases} \tag{5}$$
Here, $y_i$ is the input value of the non-linear activation function in the $i$th channel, and $a_i$ is the coefficient controlling the slope of the negative part, which is a trainable parameter.
Finally, inspired by [46], feature concatenation is performed so that the finger-vein feature information extracted in the encoder is concatenated to the decoder for reference when restoring the blurred finger-vein image. In RMOBF-Net, the encoder features are channel-wise concatenated with the decoder features, and the number of channels is then adjusted to match the encoder using a 1 × 1 convolution.
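The following PyTorch sketch illustrates the building blocks described above (a residual block with PReLU, bilinear upsampling, and channel-wise concatenation followed by a 1 × 1 convolution); it is our simplified rendering under stated assumptions, not the released RMOBF-Net implementation.

```python
# Illustrative sketch of the RMOBF-Net building blocks described in the text.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """Simple residual block with a PReLU between two 3 x 3 convolutions;
    the paper applies such blocks three times in each layer."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.act = nn.PReLU(channels)   # learnable negative slope a_i per channel
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        # Skip connection keeps vein-pattern features flowing to later layers
        return x + self.conv2(self.act(self.conv1(x)))

def decode_step(dec_feat, enc_feat, fuse_conv):
    # Bilinear upsampling avoids the checkerboard artifacts of
    # transpose convolution; encoder features are then concatenated
    up = F.interpolate(dec_feat, scale_factor=2, mode="bilinear",
                       align_corners=False)
    fused = torch.cat([up, enc_feat], dim=1)  # channel-wise concatenation
    return fuse_conv(fused)                   # 1 x 1 conv adjusts channels

# Example: fuse 64-channel decoder and encoder features back to 64 channels
fuse = nn.Conv2d(128, 64, kernel_size=1)
```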

3.3.2. Loss Function

To optimize RMOBF-Net, two loss functions are used: the Charbonnier and perceptual loss functions. Inspired by [17,47,48,49], the Charbonnier loss function is applied as the first loss function. Unlike the L2 loss function, the Charbonnier loss function can handle outliers and improve restoration accuracy [47]. It is expressed as:
$$\mathcal{L}_{char} = \sqrt{(F_{res} - F_{ori})^2 + \varepsilon^2} \tag{6}$$
In this equation, $F_{res}$ is the restored finger-vein image, and $F_{ori}$ is the original finger-vein image. The optimal $\varepsilon$ was obtained from the training data and set to $10^{-3}$ in all experiments. Using the Charbonnier loss of Equation (6), the overall structure of the finger-vein image can be recovered. However, using the Charbonnier loss function alone can produce blurry results because its role is not to compare local or feature information but simply to compute the average pixel difference between the output and target images [50]. To sharpen this blurry output and restore feature information similar to the original finger-vein pattern, a perceptual loss function is applied, as shown below:
$$\mathcal{L}_{perceptual} = \frac{1}{WH}\sum_{x=1}^{W}\sum_{y=1}^{H}\left(\phi(F_{ori})_{x,y} - \phi(F_{res})_{x,y}\right)^2 \tag{7}$$
Here, $\phi$ is the feature map extracted from a network trained with original finger-vein images, and $W$ and $H$ are the width and height of the feature map, respectively. The original perceptual loss [51] uses VGGNet [29] pretrained on the ImageNet database [52]. In this research, however, DenseNet-161 [35], a deeper network than VGGNet, trained with original finger-vein images, is used to extract the finger-vein feature maps for the calculation of the perceptual loss in Equation (7). The feature map extracted from the layer before the second dense block is used in the perceptual loss function. In general, networks for classification tasks preserve the overall spatial structure of the abstracted features extracted from deeper layers, whereas lower-level features such as color, corners, edges, and texture are not preserved [51,53]. In the blurred finger-vein image restoration task, restoring the abstracted features to their original form is important, but the recognition performance can also vary according to differences in finger-vein pattern (edge information) or texture between images. Therefore, it is important to restore the low-level features as well. To restore the finger-vein pattern and texture features more accurately, the network trained with original finger-vein images is used to measure the difference between the restored and target finger-vein image features at an earlier layer, and this difference is applied to the loss. Because the DenseNet architecture densely connects earlier and later layers, features extracted from previous layers are preserved to a higher extent. Considering this, during training and testing for blurred finger-vein image restoration, it helps restore features such as the vein pattern and texture of the finger-vein more similarly to the original. The final loss combining the two losses of Equations (6) and (7) is expressed as:
$$\mathcal{L}_{total} = \mathcal{L}_{char} + \mathcal{L}_{perceptual} \tag{8}$$
By combining these two loss functions, the overall finger-vein structure can be restored by the Charbonnier loss, and the vein pattern and texture detail can be restored by the perceptual loss. Because of these advantages, using both loss functions is helpful for blurred finger-vein image restoration.
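A minimal PyTorch sketch of the combined loss of Equations (6)–(8) is shown below; `feature_extractor` stands in for the DenseNet-161 truncated before the second dense block, and the exact reduction (mean over all elements) is our assumption.

```python
# Minimal sketch of the training loss of Equations (6)-(8).
import torch

def charbonnier_loss(restored, original, eps=1e-3):
    # Equation (6): robust L1-like penalty on pixel differences,
    # averaged over all pixels
    return torch.mean(torch.sqrt((restored - original) ** 2 + eps ** 2))

def perceptual_loss(restored, original, feature_extractor):
    # Equation (7): mean squared difference of feature maps phi(.)
    fr = feature_extractor(restored)
    fo = feature_extractor(original)
    return torch.mean((fo - fr) ** 2)

def total_loss(restored, original, feature_extractor):
    # Equation (8): Charbonnier restores overall structure,
    # perceptual loss restores vein pattern and texture detail
    return (charbonnier_loss(restored, original)
            + perceptual_loss(restored, original, feature_extractor))
```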

3.4. Finger-Vein Recognition by Deep CNN

In this study, deep CNN-based finger-vein recognition is performed with a difference image as input. We used difference images based on previous research [27], which showed better finger-vein recognition performance using difference images than using original images. The difference image is created by calculating the difference between the enrolled and matching images. Image differencing is a method for detecting change between images; it generates an image by calculating the pixel differences between the images to be compared [54], and for this reason it is sensitive to changes between images. For the finger-vein datasets used in this study, the pixel difference between images of the same class tends to be small, so the difference values are close to 0. Accordingly, when generating a difference image, an image mostly occupied by dark areas is produced. In contrast, pixel differences exist between images of different classes, and the difference values vary, so an image containing mostly bright areas is produced. Genuine and imposter matching can thus be expressed in a single image by exploiting image differencing and finger-vein image characteristics during finger-vein recognition [12]. Genuine (authentic) matching refers to matching when the input matching image belongs to the same class as the enrolled image, and imposter matching refers to matching when the matching image belongs to a different class. In general, difference images with many dark areas correspond to genuine matching cases, and images with many bright areas correspond to imposter matching cases. For the finger-vein datasets used in this study, since the similarity of image features within a class is high and the similarity between classes is low, it is possible to distinguish genuine and imposter matching through difference images and, therefore, to measure finger-vein recognition performance. Samples of the finger-vein difference images generated from the datasets used in this study are shown in Figure 5c,f.
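One plausible formulation of the difference-image generation, assuming grayscale ROIs of equal size, is sketched below; whether the signed or absolute difference is used is our assumption.

```python
# Sketch of difference-image generation for genuine/imposter matching.
# Genuine pairs yield mostly dark difference images; imposter pairs
# yield mostly bright ones.
import numpy as np

def difference_image(enrolled, matching):
    # Cast to a signed type first to avoid uint8 underflow
    diff = np.abs(enrolled.astype(np.int16) - matching.astype(np.int16))
    return diff.astype(np.uint8)  # input to the two-class DenseNet-161
```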
The generated difference image is used as input to the classification network for finger-vein recognition. DenseNet-161 is used as the classification network in this study [35]. DenseNet uses dense connectivity between layers that concatenates the feature maps, which may reduce the information loss that occurs during feature extraction and enhance classification performance. In this study, the DenseNet-161 model pretrained on the ImageNet database [52] is fine-tuned with the finger-vein training data. The difference images are used as training and testing data for DenseNet-161. Since finger-vein recognition is performed using the difference image, the output of the model is set to two classes, genuine matching and imposter matching. The false acceptance rate (FAR) is the error rate of incorrectly accepting imposter data as genuine data, and the false rejection rate (FRR) is the error rate of incorrectly rejecting genuine data as imposter data [55]. The point where FAR and FRR become equal is called the equal error rate (EER), which is used to measure the final finger-vein recognition performance.
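For reference, the EER can be computed from genuine and imposter matching scores as in the following sketch (our illustration of the standard procedure, not the authors' evaluation code).

```python
# Illustrative EER computation: FAR and FRR are swept over thresholds,
# and the EER is read off where the two error rates cross.
import numpy as np

def compute_eer(genuine_scores, imposter_scores):
    thresholds = np.sort(np.concatenate([genuine_scores, imposter_scores]))
    best = (float("inf"), 0.0)
    for t in thresholds:
        far = np.mean(imposter_scores >= t)   # imposters wrongly accepted
        frr = np.mean(genuine_scores < t)     # genuine wrongly rejected
        if abs(far - frr) < best[0]:
            best = (abs(far - frr), (far + frr) / 2)
    return best[1]  # EER as a fraction; multiply by 100 for percent
```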
Vertical and horizontal shifts, rotation, and illumination changes can occur when capturing finger-vein images. During finger-vein recognition, performance degradation due to these variations can be mitigated to some extent by adopting shift matching in eight directions: up, down, right, left, and the diagonals [32]. However, our research focuses on finger-vein restoration rather than finger-vein recognition. Therefore, we did not apply shift matching for finger-vein recognition; instead, we applied various modules for restoring optical and motion blur.

4. Experiments and Analysis

4.1. Datasets for the Experiments

In this study, two finger-vein datasets, the Shandong University homologous multimodal trait finger-vein database (SDU-DB) [56] and session 1 of the Hong Kong Polytechnic University finger-image database version 1 (PolyU-DB) [57], were used. In SDU-DB, six images of each finger, covering the index, middle, and ring fingers of both hands, were obtained from 106 participants. Therefore, the original finger-vein dataset of SDU-DB consists of a total of 3816 images (106 participants × 2 hands × 3 fingers × 6 images). In PolyU-DB, six images each of the index and middle fingers of the left hand were obtained from 156 participants. Therefore, PolyU-DB consists of a total of 1872 images (156 subjects × 2 fingers × 6 images). Figure 6 shows sample finger images from SDU-DB and PolyU-DB.
Since the total number of images in the datasets is not large, all experiments in this research were performed with twofold cross-validation. Because twofold cross-validation was used, the data of the classes used in training were not used for testing, and vice versa (open-world setting). In addition, 1/3 and 1/5 of the training data were used as validation data for restoration and recognition, respectively. The average of the EERs measured from the twofold cross-validation experiments was used as the final finger-vein recognition performance.

4.2. Motion and Optical Blurred Datasets for the Finger-Vein Image Restoration

Figure 7 shows original and blurred finger-vein images captured by an actual finger-vein acquisition device [9] in a real environment. As shown in Figure 7b, vein patterns disappear due to the optical and motion blur. Our blurred images in Figure 8b and Figure 9b were generated by referring to the blurred images of Figure 7b, which were captured by the actual device, and they show finger-vein shape and texture similar to those of Figure 7b. Therefore, we confirmed that the blur we added to the finger-vein samples corresponds to the blur levels occurring in a practical acquisition scenario.
To restore this finger-vein pattern, a blurred dataset is necessary for motion and optical blur restoration of finger-vein images. Since SDU-DB and PolyU-DB were constructed in a controlled environment, they contain no motion and optically blurred finger-vein images, and no existing motion and optically blurred finger-vein dataset is available. For the restoration of blurred finger-vein images by our RMOBF-Net or other state-of-the-art models, pairs of ground-truth (without blurring) and blurred images are required, and it is difficult to acquire the corresponding ground-truth finger-vein image when constructing a real database of optically and motion-blurred finger-vein images. Therefore, to construct datasets for motion and optically blurred finger-vein restoration, motion blurring kernels were first applied to SDU-DB and PolyU-DB, referring to Bascle et al. [58]. After that, the motion and optically blurred finger-vein database was completed by applying a Gaussian kernel for the optical blurring effect. In this case, the original images of SDU-DB and PolyU-DB were used as ground-truth data. In a real environment, motion may occur in various directions, so non-uniform (random) motion blurring kernels were applied instead of uniform ones. The random motion blurring kernels were generated by referring to the method proposed by Kupyn et al. [13]. For the Gaussian kernel for optical blurring, referring to Choi et al. [10], filter sizes of 11 × 11, 15 × 15, and 19 × 19 with standard deviations of 11, 15, and 19 were applied to generate the data. Figure 8 and Figure 9 show the motion and optically blurred images generated for the SDU-DB and PolyU-DB datasets, respectively. The blurred finger-vein database constructed in this research can be regarded as reflecting the real environment, as shown in Figure 7, Figure 8 and Figure 9, because random blur kernels of various motion types are applied. Blurred images were generated in the same numbers as the original images: 3816 for SDU-DB and 1872 for PolyU-DB.
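The following sketch illustrates this two-stage blur synthesis; the simple linear-trajectory motion kernel is our stand-in, whereas the paper generates non-uniform random trajectories following Kupyn et al. [13].

```python
# Sketch of the blurred-dataset synthesis: a random motion blur kernel
# followed by Gaussian (optical) blur.
import numpy as np
import cv2

def random_linear_motion_kernel(size=15):
    # Simple stand-in: a line at a random angle through the kernel center
    kernel = np.zeros((size, size), np.float32)
    angle = np.random.uniform(0, 180)
    c = size // 2
    dx, dy = np.cos(np.radians(angle)), np.sin(np.radians(angle))
    for t in np.linspace(-c, c, 4 * size):
        x, y = int(round(c + t * dx)), int(round(c + t * dy))
        if 0 <= x < size and 0 <= y < size:
            kernel[y, x] = 1.0
    return kernel / kernel.sum()

def motion_and_optical_blur(img, gauss_size=11, gauss_sigma=11):
    blurred = cv2.filter2D(img, -1, random_linear_motion_kernel())
    # Gaussian kernel sizes/sigmas of 11, 15, and 19 were used in the paper
    return cv2.GaussianBlur(blurred, (gauss_size, gauss_size), gauss_sigma)
```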

4.3. Data Augmentation and Experimental Setup

In this study, data augmentation was used during RMOBF-Net and DenseNet-161 training to reduce overfitting and improve the generality of the models. When training RMOBF-Net, an online augmentation method that selectively adjusts gamma correction, color saturation, and contrast was applied. During DenseNet-161 training, offline augmentation was used that increased the number of images nine-fold, including the original, by applying five-pixel shifts in eight directions (combinations of up, down, left, and right) to each image. Additionally, to generate difference images for training, the intra-class distance was calculated among the augmented images, and only one image was selected as the enrolled image while the rest were used as matching images. Difference images for genuine matching and imposter matching were generated using the selected enrolled image and the matching images. In this case, the number of imposter matchings is 635 times larger than the number of genuine matchings for SDU-DB and 311 times larger for PolyU-DB. If these imposter matching data are used as they are, performance on genuine matching is degraded due to data imbalance. To solve this problem, random selection was applied to the imposter matching data: the number of randomly selected samples was made equal to the number of genuine matching cases so that class imbalance did not occur. The above augmentation and random selection methods were applied equally to the SDU-DB and PolyU-DB datasets.
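A minimal sketch of the offline shift augmentation is given below, assuming grayscale NumPy arrays; interpreting the diagonal shifts as (±5, ±5) pixels is our reading of the eight-direction scheme.

```python
# Sketch of the offline shift augmentation: each image is shifted by five
# pixels in the eight compass directions, yielding nine images including
# the original.
import numpy as np
import cv2

def shift_augment(img, shift=5):
    out = [img]
    for dx in (-shift, 0, shift):
        for dy in (-shift, 0, shift):
            if dx == 0 and dy == 0:
                continue  # the unshifted original is already included
            m = np.float32([[1, 0, dx], [0, 1, dy]])
            out.append(cv2.warpAffine(img, m, (img.shape[1], img.shape[0])))
    return out  # 9 images per input
```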
In all our experiments, we used a desktop computer with an Intel® Core™ i7-9700F CPU, 32 GB of main memory, and an NVIDIA GeForce RTX 3060 graphics processing unit (GPU) with 12 GB of graphics memory [59], running a Linux operating system. The training and testing algorithms of our network were implemented with the PyTorch framework (version 1.8.1) [60].

4.4. Training of RMOBF-Net and DenseNet-161

The maximum number of epochs, mini-batch size, and learning rate used as training parameters of RMOBF-Net were set to 300, 16, and 0.00005, respectively. Adaptive moment estimation (Adam) optimization [61] was used to optimize RMOBF-Net.
Figure 10 shows the RMOBF-Net training and validation loss graphs per epoch. As shown in Figure 10, although the training loss still shows a slightly decreasing trend as the epochs increase, we stopped training at 300 epochs because further training could cause RMOBF-Net to overfit the training data and the validation loss had already converged. Therefore, RMOBF-Net is considered to have been sufficiently trained with the given training data, and the converged validation loss indicates that it is not overfitted to the training data.
For the CNN-based finger-vein recognition model, training and testing were performed with DenseNet-161 to measure performance. The maximum number of epochs, mini-batch size, and learning rate used as training hyperparameters of DenseNet-161 were set to 30, 8, and 0.0001, respectively. To optimize DenseNet-161, stochastic gradient descent (SGD) [62] was used.
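The reported training configurations can be summarized in PyTorch as follows; the restoration network here is a small placeholder standing in for RMOBF-Net, not its actual definition.

```python
# Sketch of the training configurations reported above.
import torch
import torch.nn as nn
from torchvision import models

# Placeholder restoration network standing in for RMOBF-Net
restorer = nn.Sequential(nn.Conv2d(1, 64, 3, padding=1), nn.PReLU(64),
                         nn.Conv2d(64, 1, 3, padding=1))
# RMOBF-Net: Adam, learning rate 0.00005, batch size 16, 300 epochs
opt_restore = torch.optim.Adam(restorer.parameters(), lr=5e-5)

# DenseNet-161 pretrained on ImageNet, fine-tuned with 2 output classes
classifier = models.densenet161(pretrained=True)
classifier.classifier = nn.Linear(classifier.classifier.in_features, 2)
# DenseNet-161: SGD, learning rate 0.0001, batch size 8, 30 epochs
opt_classify = torch.optim.SGD(classifier.parameters(), lr=1e-4)
```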
The difference image was used to classify genuine and imposter matching; therefore, the final number of output classes of the model was set to 2. During training, fine-tuning was performed from ImageNet-pretrained weights. The DenseNet-161 training and validation loss and accuracy graphs, using the difference images of the RMOBF-Net-restored images as input, are shown in Figure 11. The training loss converges to 0 and the accuracy to 100%, indicating that DenseNet-161 was sufficiently trained with the training data. As shown in Figure 11, although the validation loss still fluctuates slightly as the epochs increase, we stopped training at 30 epochs because further training could cause DenseNet-161 to overfit the training data and the validation accuracy had already converged, whereas the validation loss still fluctuated slightly.

4.5. Testing Results of the Proposed Method

4.5.1. Evaluation Metrics

In this experiment, the similarity between the restored motion and optically blurred finger-vein image and the original image was quantitatively evaluated. For the numerical comparison, the peak signal-to-noise ratio (PSNR) [63] and structural similarity (SSIM) [64] were measured. Equations (9) and (10) define the MSE and PSNR, respectively:
$$MSE = \frac{1}{hw}\sum_{i=0}^{h-1}\sum_{j=0}^{w-1}\left[F_{ori}(i, j) - F_{res}(i, j)\right]^2 \tag{9}$$

$$PSNR = 10\log_{10}\!\left(\frac{MAX_i^2}{MSE}\right) \tag{10}$$
where $F_{res}$ is the finger-vein image restored by the state-of-the-art methods or the proposed method, and $F_{ori}$ is the original finger-vein image. The symbols $h$ and $w$ are the height and width of the image, respectively, and $MAX_i$ is the maximum pixel value of the input image. Equation (11) defines SSIM:
$$SSIM = \frac{(2\mu_o\mu_r + C_1)(2\sigma_{or} + C_2)}{(\mu_o^2 + \mu_r^2 + C_1)(\sigma_o^2 + \sigma_r^2 + C_2)} \tag{11}$$
Here, $\mu_r$ and $\sigma_r$ are the mean and standard deviation of the pixel values of the restored image, respectively, and $\mu_o$ and $\sigma_o$ are those of the original image. $\sigma_{or}$ is the covariance of the two images. $C_1$ and $C_2$ are constants that prevent the denominator from becoming 0. The original and restored images are similar when the SSIM is close to 1, which indicates the superiority of the restoration algorithm in terms of image quality. Additionally, the EER of finger-vein recognition described in Section 3.4 was used as the metric for evaluating recognition performance.
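For illustration, both metrics can be computed with scikit-image's reference implementations, as in the sketch below (an assumption about tooling; any equivalent PSNR/SSIM implementation would do).

```python
# Sketch of the image-quality evaluation of Equations (9)-(11),
# assuming 8-bit grayscale NumPy arrays of equal size.
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_restoration(original, restored):
    psnr = peak_signal_noise_ratio(original, restored, data_range=255)
    ssim = structural_similarity(original, restored, data_range=255)
    return psnr, ssim  # higher is better; SSIM close to 1 means similar
```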

4.5.2. Testing Result with SDU-DB

4.5.2.1. Performance Evaluation of Image Quality

Using the evaluation metrics of Equations (9)–(11), the quality of motion and optical blur restoration by our proposed method and the state-of-the-art methods was evaluated numerically, as shown in Table 3. Table 3 shows that the PSNR values were superior for RMOBF-Net in all cases. The SSIM values hardly differed from those of the best-performing SRN-DeblurNet. These results show that RMOBF-Net performed well in motion and optical blur restoration. Figure 12, Figure 13 and Figure 14 show the finger-vein images restored by the state-of-the-art methods and RMOBF-Net. In these figures, the restoration performance of RMOBF-Net was superior to that of the state-of-the-art methods, as its restored images were closest to the original images.
However, the purpose of motion and optical blur restoration in this study is not simply to bring the image quality close to the original image but to improve the accuracy of finger-vein recognition. Therefore, in the following Section 4.5.2.2 and Section 4.5.2.3, performance is evaluated in terms of the EER of finger-vein recognition.

4.5.2.2. Ablation Studies

For SDU-DB, ablation studies were conducted depending on whether motion and optical blur were applied and restored. The experiments proceeded in the following five schemes:
Scheme 1: After training the DenseNet-161 classifier with original finger-vein training data without blurring, performance measurement on original finger-vein testing data
Scheme 2: After training the DenseNet-161 classifier with original finger-vein training data without blurring, performance measurement with motion and optical blurred testing data
Scheme 3: After training the DenseNet-161 classifier with motion and optical blurred training data, performance measurement with motion and optical blurred testing data
Scheme 4: After training the DenseNet-161 classifier with training data restored with RMOBF-Net, performance measurement on testing data restored with RMOBF-Net
Scheme 5: After training the DenseNet-161 classifier with original finger-vein training data without blurring, performance measurement on testing data restored with RMOBF-Net
Comparing scheme 1 with schemes 2–5 in Table 4 shows that recognition accuracy was significantly degraded for motion and optically blurred images compared with original images without blurring. In addition, based on schemes 2 and 3, the recognition accuracy was degraded because the vein pattern area and the remaining skin area are harder to differentiate when motion and optical blur occur. Schemes 4 and 5, which used images restored by the proposed RMOBF-Net, have lower error rates than schemes 2 and 3, which used blurred finger-vein images. This shows that the proposed motion and optical blur restoration method was effective in recovering the finger-vein recognition performance degraded by blurring.
Figure 15, Figure 16 and Figure 17 present the recognition performance of schemes 1–5 as receiver operating characteristic (ROC) curves of FAR versus the genuine acceptance rate (GAR), where GAR = 100 − FRR (%). In all cases, the recognition performance was higher for schemes 4 and 5, which used images restored with RMOBF-Net, than for schemes 2 and 3, which used blurred finger-vein images.

4.5.2.3. Comparisons with the State-of-the-Art Methods

Table 5 shows the SDU-DB results comparing finger-vein recognition performance on data restored after motion and optical blurring using the proposed RMOBF-Net and the existing state-of-the-art restoration methods. Figure 18, Figure 19, Figure 20, Figure 21, Figure 22 and Figure 23 show the ROC curves of the results presented in Table 5. In scheme 4, SRN-DeblurNet demonstrated exceptional performance for motion and optical blur with intensity 11 × 11, while RMOBF-Net showed the best performance in the other cases. In scheme 5, RMOBF-Net showed the best performance in all cases. In short, we confirmed that the RMOBF-Net-based blurred finger-vein image restoration proposed in this study achieved better finger-vein recognition performance than the existing restoration methods.
Table 6 shows the SDU-DB results comparing finger-vein recognition performance under motion and optical blur application and restoration using the DenseNet-161 adopted in this study and a state-of-the-art finger-vein recognition method [65]. Detailed explanations of schemes 1–5 are provided in Section 4.5.2.2 above. As shown in Table 6, DenseNet-161 shows better performance than the state-of-the-art finger-vein recognition method in all cases. In addition, comparing scheme 1 with schemes 2–5 in Table 6, the recognition accuracy was significantly more degraded for the motion and optically blurred images than for the original images without blurring for both DenseNet-161 and NASNet. Furthermore, based on schemes 2 and 3, the recognition accuracy was degraded for both DenseNet-161 and NASNet because the vein pattern area and the remaining skin area are harder to differentiate when motion and optical blur occur. Schemes 4 and 5, which used images restored by the proposed RMOBF-Net, have lower error rates than schemes 2 and 3 for both DenseNet-161 and NASNet. This shows that the proposed motion and optical blur restoration method was effective in recovering the finger-vein recognition performance degraded by blurring for both DenseNet-161 and NASNet.

4.5.3. Testing Result with PolyU-DB

4.5.3.1. Performance Evaluation of Image Quality

As with SDU-DB, the evaluation metrics in Equations (9)–(11) were used to numerically evaluate the motion and optical blur restoration quality of the proposed and state-of-the-art methods, as shown in Table 7. Table 7 shows that RMOBF-Net achieved the best PSNR in all cases. Additionally, the SSIM values did not differ much from those of SRN-DeblurNet, showing results similar to those for SDU-DB. Figure 24, Figure 25 and Figure 26 show the finger-vein images restored by the state-of-the-art methods and RMOBF-Net. In these figures, the restoration performance of RMOBF-Net was superior to that of the state-of-the-art methods, as its restored images were closest to the original images.
However, as described in Section 4.5.2.1, the main objective of motion and optical blur restoration is to increase finger-vein recognition accuracy rather than to reproduce the image quality of the original image. Therefore, in Section 4.5.3.2 and Section 4.5.3.3, performance was evaluated in terms of the EER of finger-vein recognition.

4.5.3.2. Ablation Studies

Ablation studies were conducted using the same method as for SDU-DB. Comparing scheme 1 with schemes 2–5 in Table 8, recognition accuracy was significantly degraded for motion and optically blurred images relative to images without blurring. In addition, scheme 4 had lower error rates in all cases than schemes 2 and 3, which used blurred finger-vein images, and scheme 5 had lower error rates in all cases than scheme 2. Therefore, the proposed motion and optical blur restoration method was effective in improving the finger-vein recognition performance degraded by blurring on PolyU-DB as well. Figure 27, Figure 28 and Figure 29 present the recognition performances of schemes 1–5 as ROC curves.

4.5.3.3. Comparisons with State-of-the-Art Methods

Table 9 shows the results on PolyU-DB comparing finger-vein recognition performance on data restored by the proposed RMOBF-Net and by the existing state-of-the-art restoration methods. Figure 30, Figure 31, Figure 32, Figure 33, Figure 34 and Figure 35 show the ROC curves of the results presented in Table 9. In scheme 4, MPRNet showed the best performance for motion and optical blur with intensity 15 × 15, but RMOBF-Net showed the best performance in all other cases. In scheme 5, DeblurGANv2 showed the best results for motion and optical blur with intensity 11 × 11, but the proposed RMOBF-Net showed the best performance in all other cases. Therefore, we confirmed that the proposed RMOBF-Net-based restoration of blurred finger-vein images achieves better performance than the existing restoration methods.

4.6. Processing Time of the Proposed Method

In the next experiment, the processing time, floating point operations (FLOPs), and number of parameters of RMOBF-Net and DenseNet-161 were measured. The measurements were conducted on the desktop computer introduced in Section 4.3 and on a Jetson TX2 embedded system [66]. Measurement on the embedded system is relevant because access-control finger-vein recognition systems are typically built as on-board (edge) computing devices attached to a door; it is therefore essential to check whether the proposed model is capable of on-board computing. The Jetson TX2 has an NVIDIA Pascal™-family GPU (256 CUDA cores), 8 GB of memory shared between the central processing unit (CPU) and GPU with 59.7 GB/s of memory bandwidth, and uses less than 7.5 W of power. As shown in Table 10, the proposed method took 51.9 ms to process one image on the desktop computer, which corresponds to approximately 19.3 frames/s (1000/51.9). On the Jetson TX2, it took about 198.4 ms, corresponding to approximately 5 frames/s (1000/198.4). The processing time on the Jetson TX2 was longer than that on the desktop computer because of its limited computing resources; nevertheless, these measurements confirm that the proposed method can be applied to embedded systems. As also shown in Table 10, RMOBF-Net requires 75.5 GFLOPs and DenseNet-161 requires 7.82 GFLOPs, with 49.83 million and 28.68 million parameters, respectively. Table 11 compares the processing time and speed in frames per second (fps) of RMOBF-Net and the state-of-the-art restoration models. As shown in this table, our model achieves the best processing time and speed except for DeblurGAN, but the accuracies of our method were much higher than those of DeblurGAN, as shown in Table 5 and Table 9 and Figure 18, Figure 19, Figure 20, Figure 21, Figure 22, Figure 23, Figure 30, Figure 31, Figure 32, Figure 33, Figure 34 and Figure 35. In the case of SRN-DeblurNet [16] and MPRNet [17], the overall structure consists of multiple stages with numerous connections between their features. This increases memory access cost, from which the increase in processing time can be inferred [67].
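A simple way to reproduce this kind of latency and parameter measurement in PyTorch is sketched below. It uses a torchvision DenseNet-161 as a stand-in for the recognition network (RMOBF-Net itself is not reproduced here), and the warm-up count, input size, and iteration count are illustrative assumptions.

```python
import time
import torch
import torchvision

# Minimal latency/parameter benchmark; DenseNet-161 stands in for the model.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torchvision.models.densenet161().to(device).eval()
dummy = torch.randn(1, 3, 224, 224, device=device)

with torch.no_grad():
    for _ in range(10):                      # warm-up iterations
        model(dummy)
    if device.type == "cuda":
        torch.cuda.synchronize()             # wait for queued GPU work
    start = time.perf_counter()
    runs = 100
    for _ in range(runs):
        model(dummy)
    if device.type == "cuda":
        torch.cuda.synchronize()
    ms = (time.perf_counter() - start) / runs * 1000

params_m = sum(p.numel() for p in model.parameters()) / 1e6
print(f"{ms:.1f} ms/image ({1000 / ms:.1f} fps), {params_m:.2f} M parameters")
```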

4.7. Discussion

4.7.1. Cases of Correct and Incorrect Matching after Restoration

Figure 36a,c show examples of genuine matching and imposter matching before motion and optical blur restoration, respectively; both are incorrect matching cases because the vein pattern and texture information were changed by motion and optical blur. Such blurring causes false acceptance cases, in which imposter matching is misclassified as genuine matching, and false rejection cases, in which genuine matching is misclassified as imposter matching. An increase in such incorrect matching degrades finger-vein recognition performance and causes instability in the recognition system. Figure 36b,d show the corresponding cases of correct matching obtained by restoring the images of (a) and (c) with the proposed RMOBF-Net. After restoration, the false rejection case was correctly classified as an acceptance, and the false acceptance case was correctly classified as a rejection.
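To make the matching pipeline concrete, the following sketch shows difference-image-based genuine/imposter matching with a two-class DenseNet-161, in the spirit of Figure 5 and the recognition method used here. The preprocessing, class-index convention, and input handling are assumptions for illustration, not the authors' exact code.

```python
import numpy as np
import torch
import torchvision

# Two-class (genuine/imposter) classifier over difference images.
model = torchvision.models.densenet161(num_classes=2).eval()

def match(enrolled: np.ndarray, probe: np.ndarray) -> float:
    """Return the genuine probability from the absolute difference of two
    uint8 grayscale ROI images of identical shape (index 0 assumed genuine)."""
    diff = np.abs(enrolled.astype(np.int16) - probe.astype(np.int16))
    x = torch.from_numpy(diff.astype(np.float32) / 255.0)  # H x W
    x = x.unsqueeze(0).repeat(3, 1, 1).unsqueeze(0)        # -> 1 x 3 x H x W
    with torch.no_grad():
        logits = model(x)
    return torch.softmax(logits, dim=1)[0, 0].item()

# Toy usage with random arrays standing in for enrolled/matching ROIs
enrolled = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
probe = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
print(match(enrolled, probe))
```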
Figure 37 shows examples of incorrect genuine and imposter matching despite the application of the proposed restoration method. In the incorrect genuine matching case, severe motion and optical blur occurred between images of the same class, and even after restoration, the vein pattern and texture information were not similar to the original; the image was therefore recognized as an imposter, and incorrect matching occurred. In the incorrect imposter matching case, the enrolled and matching images of different classes showed similar texture. Even after restoration, the texture remained similar between the different classes and the finger-vein pattern was not clearly restored, resulting in incorrect matching.

4.7.2. Class Activation Map of the Restored Image

Figure 38 shows the visualized class activation maps [68] of genuine and imposter matching images, for original images and images restored by RMOBF-Net, extracted from each layer of DenseNet-161. From top to bottom, the class activation maps were output from the first convolutional layer, the first, second, and third transition layers, and the last dense block. Figure 38a,b show examples of authentic (genuine) matching and imposter matching, respectively; in each, the left images are the originals and the right images are the restored versions. In a class activation map, important features appear as red regions and insignificant features as blue regions; if two images have similar red and blue areas, they share similar features. As shown in Figure 38, as the layers progress, class activation occurs at similar locations in the original and restored images. In Figure 38a, class activation occurred in similar areas of the original and restored images for authentic matching, verifying that restoration by RMOBF-Net was effective for motion and optically blurred finger-vein images and that correct acceptance was possible. Likewise, class activation occurred in similar areas of the original and restored images for imposter matching, as shown in Figure 38b; hence, correct rejection of imposter matching cases was possible.
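A minimal Grad-CAM [68] implementation over a DenseNet-161 feature layer is sketched below; the chosen layer, two-class head, and random input are illustrative assumptions rather than the exact visualization code of this study.

```python
import torch
import torchvision
import torch.nn.functional as F

# Hook the last dense block to capture its activations and their gradients.
model = torchvision.models.densenet161(num_classes=2).eval()
target_layer = model.features.denseblock4
feats, grads = {}, {}
target_layer.register_forward_hook(lambda m, i, o: feats.update(v=o))
target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

def grad_cam(x: torch.Tensor, class_idx: int) -> torch.Tensor:
    """Return a heatmap (H x W, values in [0, 1]) for one 1 x 3 x H x W input."""
    logits = model(x)
    model.zero_grad()
    logits[0, class_idx].backward()
    weights = grads["v"].mean(dim=(2, 3), keepdim=True)   # GAP over gradients
    cam = F.relu((weights * feats["v"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
    return (cam / cam.max().clamp(min=1e-8)).squeeze()

heat = grad_cam(torch.randn(1, 3, 224, 224), class_idx=0)
print(heat.shape)
```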

5. Conclusions

In this study, motion and optically blurred finger-vein images were restored to address the degradation of finger-vein recognition performance caused by motion and optical blur. In addition, finger-vein recognition using a deep CNN was performed to evaluate the restored images. The restoration model RMOBF-Net was proposed and optimized for the finger-vein image domain. We confirmed that finger-vein recognition using the proposed restoration method yielded a lower recognition error rate, and thus better performance, than recognition without restoration. In addition, as shown in Table 5 and Table 9, and Figure 18, Figure 19, Figure 20, Figure 21, Figure 22 and Figure 23 and Figure 30, Figure 31, Figure 32, Figure 33, Figure 34 and Figure 35, our RMOBF-Net showed higher blur restoration performance on finger-vein data than the state-of-the-art methods trained with the same motion and optically blurred images as our RMOBF-Net. These results confirm that our model is more appropriate for deblurring finger-vein data than the state-of-the-art methods. Through the analysis of the class activation maps, the proposed RMOBF-Net was also found to be effective in preserving the features important for recognition.
Referring to Song et al. [32] and Noh et al. [33,34], DenseNet showed the highest finger-vein recognition accuracy; therefore, we adopted DenseNet as the baseline finger-vein recognition model in our research. In contrast to previous research [69,70,71], our method focuses mainly on blurred finger-vein restoration rather than on finger-vein recognition, which is another reason we adopted only DenseNet as the baseline recognition model. In future work, we will compare various state-of-the-art finger-vein recognition methods with DenseNet under our restoration algorithm for optical and motion blur.
As shown in Figure 12, Figure 13, Figure 14, Figure 24, Figure 25 and Figure 26, we considered that motion and optical blur intensities can vary and can be severe in a real environment. Even though we restored the blurred finger-vein images to states similar to the originals, the recognition error remains higher than that obtained with the original images when the blur is severe. Therefore, in future work, we will investigate how to obtain recognition performance similar to that of the original images even under severe blur.
As shown in Figure 37, incorrect matching cases occurred despite the proposed restoration method. Therefore, in future work, restoration and recognition performance will be improved by addressing the false rejections caused by severe motion and optical blur within a class and by reducing inter-class similarity. Additionally, we plan to investigate applying the proposed motion and optical blur restoration method to other biometric modalities, such as face, iris, and palm-vein recognition. Because there is no previous state-of-the-art work on finger-vein image inpainting, we will compare state-of-the-art inpainting models for natural scene images in future work.

Author Contributions

Methodology, J.C.; Conceptualization, J.S.H. and S.G.K.; Validation, C.P. and S.H.N.; Supervision, K.R.P.; Writing—original draft, J.C.; Writing—review and editing, K.R.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported in part by the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (MSIT) through the Basic Science Research Program (NRF-2021R1F1A1045587), in part by the NRF funded by the MSIT through the Basic Science Research Program (NRF-2022R1F1A1064291), and in part by the NRF funded by the MSIT through the Basic Science Research Program (NRF-2020R1A2C1006179).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hashimoto, J. Finger Vein Authentication Technology and its Future. In VLSI Circuits, Digest of Technical Paper; IEEE: Honolulu, HI, USA, 2006; pp. 5–8. [Google Scholar]
  2. Liu, Z.; Yin, Y.; Wang, H.; Song, S.; Li, Q. Finger vein recognition with manifold learning. J. Netw. Comput. Appl. 2010, 33, 275–282. [Google Scholar] [CrossRef]
  3. Lee, E.C.; Park, K.R. Restoration method of skin scattering blurred vein image for finger vein recognition. Electron. Lett. 2009, 45, 1074–1076. [Google Scholar] [CrossRef]
  4. Yang, J.; Zhang, B.; Shi, Y. Scattering removal for finger-vein image restoration. Sensors 2012, 12, 3627–3640. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Yang, J.; Shi, Y. Towards finger-vein image restoration and enhancement for finger-vein recognition. Inf. Sci. 2014, 268, 33–52. [Google Scholar] [CrossRef]
  6. Yang, J.; Bai, G. Finger-vein image restoration based on skin optical property. In Proceedings of the 2012 IEEE 11th International Conference on Signal Processing, Beijing, China, 21–25 October 2012; pp. 749–752. [Google Scholar]
  7. Yang, J.; Shi, Y.; Yang, J. Finger-Vein Image Restoration Based on a Biological Optical Model. In New Trends and Developments in Biometrics; IntechOpen: London, UK, 2012; pp. 59–76. [Google Scholar]
  8. Du, S.; Yang, J.; Zhang, H.; Zhang, B.; Su, Z. FVSR-Net: An end-to-end Finger Vein Image Scattering Removal Network. Multimed. Tools Appl. 2021, 80, 10705–10722. [Google Scholar] [CrossRef]
  9. Lee, E.C.; Park, K.R. Image restoration of skin scattering and optical blurring for finger vein recognition. Opt. Lasers Eng. 2011, 49, 816–828. [Google Scholar] [CrossRef]
  10. Choi, J.; Noh, K.J.; Cho, S.W.; Nam, S.H.; Owais, M.; Park, K.R. Modified Conditional Generative Adversarial Network-Based Optical Blur Restoration for Finger-Vein Recognition. IEEE Access 2020, 8, 16281–16301. [Google Scholar] [CrossRef]
  11. He, J.; Shen, L.; Wang, H.; Zhao, G.; Gu, X.; Ding, W. Finger Vein Image Deblurring Using Neighbors-Based Binary-GAN. IEEE Trans. Emerging Topics Comput. Intell. 2021, 1–13. [Google Scholar] [CrossRef]
  12. Choi, J.; Hong, J.S.; Owais, M.; Kim, S.G.; Park, K.R. Restoration of Motion Blurred Image by Modified DeblurGAN for Enhancing the Accuracies of Finger-Vein Recognition. Sensors 2021, 21, 4635. [Google Scholar] [CrossRef]
  13. Kupyn, O.; Budzan, V.; Mykhailych, M.; Mishkin, D.; Matas, J. DeblurGAN: Blind Motion Deblurring Using Conditional Adversarial Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 17–19 June 2018; pp. 8183–8192. [Google Scholar]
  14. Szeliski, R. Computer Vision: Algorithms and Applications, 1st ed.; Springer: London, UK, 2010. [Google Scholar]
  15. Kupyn, O.; Martyniuk, T.; Wu, J.; Wang, Z. DeblurGAN-v2: Deblurring (Orders-of-Magnitude) Faster and Better. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Korea, 10–17 October 2019; pp. 8878–8887. [Google Scholar]
  16. Tao, X.; Gao, H.; Shen, X.; Wang, J.; Jia, J. Scale-recurrent Network for Deep Image Deblurring. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 17–19 June 2018; pp. 8174–8182. [Google Scholar]
  17. Zamir, S.W.; Arora, A.; Khan, S.; Hayat, M.; Khan, F.S.; Yang, M.-H.; Shao, L. Multi-Stage Progressive Image Restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Virtual, 19–25 June 2021; pp. 14821–14831. [Google Scholar]
  18. Zhang, Z.; Liu, Q.; Wang, Y. Road Extraction by Deep Residual U-Net. IEEE Geosci. Remote Sens. Lett. 2018, 15, 749–753. [Google Scholar] [CrossRef]
  19. Dongguk RMOBF-Net and CNN for Recognition of Blurred Finger-Vein Image with Motion and Optical Blurred Image Database. Available online: https://dm.dongguk.edu/link.html (accessed on 31 May 2022).
  20. You, W.; Zhou, W.; Huang, J.; Yang, F.; Liu, Y.; Chen, Z. A bilayer image restoration for finger vein recognition. Neurocomputing 2019, 348, 54–65. [Google Scholar] [CrossRef]
  21. Miura, N.; Nagasaka, A.; Miyatake, T. Extraction of Finger-Vein Patterns Using Maximum Curvature Points in Image Profiles. IEICE Trans. Inf. Syst. 2007, E90-D, 1185–1194. [Google Scholar] [CrossRef] [Green Version]
  22. Miura, N.; Nagasaka, A.; Miyatake, T. Feature extraction of finger-vein patterns based on repeated line tracking and its application to personal identification. Mach. Vis. Appl. 2004, 15, 194–203. [Google Scholar] [CrossRef]
  23. Matsuda, Y.; Miura, N.; Nagasaka, A.; Kiyomizu, H.; Miyatake, T. Finger-vein authentication based on deformation-tolerant feature-point matching. Mach. Vis. Appl. 2016, 27, 237–250. [Google Scholar] [CrossRef] [Green Version]
  24. Lee, E.C.; Lee, H.C.; Park, K.R. Finger vein recognition using minutia-based alignment and local binary pattern-based feature extraction. Int. J. Imaging Syst. Technol. 2009, 19, 179–186. [Google Scholar] [CrossRef]
  25. Peng, J.; Wang, N.; El-Latif, A.A.A.; Li, Q.; Niu, X. Finger-vein Verification Using Gabor Filter and SIFT Feature Matching. In Proceedings of the 2012 Eighth International Conference on Intelligent Information Hiding and Multimedia Signal Processing, Piraeus, Greece, 18–20 July 2012; pp. 45–48. [Google Scholar]
  26. Rosdi, B.A.; Shing, C.W.; Suandi, S.A. Finger Vein Recognition Using Local Line Binary Pattern. Sensors 2011, 11, 11357–11371. [Google Scholar] [CrossRef] [Green Version]
  27. Hong, H.G.; Lee, M.B.; Park, K.R. Convolutional Neural Network-Based Finger-Vein Recognition Using NIR Image Sensors. Sensors 2017, 17, 1297. [Google Scholar] [CrossRef] [Green Version]
  28. Kim, W.; Song, J.M.; Park, K.R. Multimodal Biometric Recognition Based on Convolutional Neural Network by the Fusion of Finger-Vein and Finger Shape Using Near-Infrared (NIR) Camera Sensor. Sensors 2018, 18, 2296. [Google Scholar] [CrossRef] [Green Version]
  29. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. In Proceedings of the International Conference Learning Representations, San Diego, CA, USA, 7–9 May 2015; pp. 1–14. [Google Scholar]
  30. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  31. Qin, H.; El-Yacoubi, M.A. Deep Representation-Based Feature Extraction and Recovering for Finger-Vein Verification. IEEE Trans. Inf. Forensic Secur. 2017, 12, 1816–1829. [Google Scholar] [CrossRef]
  32. Song, J.M.; Kim, W.; Park, K.R. Finger-Vein Recognition Based on Deep DenseNet Using Composite Image. IEEE Access 2019, 7, 66845–66863. [Google Scholar] [CrossRef]
  33. Noh, K.J.; Choi, J.; Hong, J.S.; Park, K.R. Finger-Vein Recognition Based on Densely Connected Convolutional Network Using Score-Level Fusion With Shape and Texture Images. IEEE Access 2020, 8, 96748–96766. [Google Scholar] [CrossRef]
  34. Noh, K.J.; Choi, J.; Hong, J.S.; Park, K.R. Finger-Vein Recognition Using Heterogeneous Databases by Domain Adaption Based on a Cycle-Consistent Adversarial Network. Sensors 2021, 21, 524. [Google Scholar] [CrossRef] [PubMed]
  35. Huang, G.; Liu, Z.; van der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2261–2269. [Google Scholar]
  36. Qin, H.; Wang, P. Finger-Vein Verification Based on LSTM Recurrent Neural Networks. Appl. Sci. 2019, 9, 1687. [Google Scholar] [CrossRef] [Green Version]
  37. Zhao, D.; Ma, H.; Yang, Z.; Li, J.; Tian, W. Finger vein recognition based on lightweight CNN combining center loss and dynamic regularization. Infrared Phys. Technol. 2020, 105, 103221. [Google Scholar] [CrossRef]
  38. Huttenlocher, D.P.; Klanderman, G.A.; Rucklidge, W.J. Comparing images using the Hausdorff distance. IEEE Trans. Pattern Anal. Mach. Intell. 1993, 15, 850–863. [Google Scholar] [CrossRef] [Green Version]
  39. Kumar, A.; Zhang, D. Personal Recognition Using Hand Shape and Texture. IEEE Trans. Image Process. 2006, 15, 2454–2461. [Google Scholar] [CrossRef] [Green Version]
  40. Gonzalez, R.C.; Woods, R.E. Digital Image Processing, 2nd ed.; Prentice-Hall: Upper Saddle River, NJ, USA, 2002. [Google Scholar]
  41. Gonzalez, R.C.; Woods, R.E. Digital Image Processing, 3rd ed.; Prentice-Hall: Upper Saddle River, NJ, USA, 2010. [Google Scholar]
  42. Sajjadi, M.S.M.; Schölkopf, B.; Hirsch, M. EnhanceNet: Single Image Super-Resolution Through Automated Texture Synthesis. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 4491–4500. [Google Scholar]
  43. He, K.; Zhang, X.; Ren, S.; Sun, J. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. In Proceedings of the IEEE International Conference on Computer vision, Santiago, Chile, 7–13 December 2015; pp. 1026–1034. [Google Scholar]
  44. Nair, V.; Hinton, G.E. Rectified Linear Units Improve Restricted Boltzmann Machines. In Proceedings of the 27th International Conference Machine Learning, Haifa, Israel, 21–24 June 2010; pp. 807–814. [Google Scholar]
  45. Maas, A.L.; Hannun, A.Y.; Ng, A.Y. Rectifier Nonlinearities Improve Neural Network Acoustic Models. In Proceedings of the 30th International Conference Machine Learning, Atlanta, GA, USA, 16–21 June 2013; pp. 16–21. [Google Scholar]
  46. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; pp. 234–241. [Google Scholar]
  47. Lai, W.-S.; Huang, J.-B.; Ahuja, N.; Yang, M.-H. Deep Laplacian Pyramid Networks for Fast and Accurate Super-Resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 624–632. [Google Scholar]
  48. Wu, B.; Duan, H.; Liu, Z.; Sun, G. SRPGAN: Perceptual Generative Adversarial Network for Single Image Super Resolution. arXiv 2017, arXiv:1712.05927. [Google Scholar]
  49. Sun, D.; Roth, S.; Black, M.J. Secrets of Optical Flow Estimation and Their Principles. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010; pp. 2432–2439. [Google Scholar]
  50. Isola, P.; Zhu, J.-Y.; Zhou, T.; Efros, A.A. Image-to-Image Translation with Conditional Adversarial Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1125–1134. [Google Scholar]
  51. Johnson, J.; Alahi, A.; Fei-Fei, L. Perceptual Losses for Real-Time Style Transfer and Super-Resolution. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; pp. 694–711. [Google Scholar]
  52. Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; Fei-Fei, L. ImageNet: ImageNet: A Large-Scale Hierarchical Image Database. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar]
  53. Zeiler, M.D.; Fergus, R. Visualizing and Understanding Convolutional Networks. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; pp. 818–833. [Google Scholar]
  54. Image Differencing. Available online: https://en.wikipedia.org/wiki/Image_differencing (accessed on 2 June 2022).
  55. Biometrics. Available online: https://en.wikipedia.org/wiki/Biometrics (accessed on 2 June 2022).
  56. Yin, Y.; Liu, L.; Sun, X. SDUMLA-HMT: A Multimodal Biometric Database. In Proceedings of the Chinese Conference on Biometric Recognition, Beijing, China, 3–4 December 2011; pp. 260–268. [Google Scholar]
  57. Kumar, A.; Zhou, Y. Human Identification Using Finger Images. IEEE Trans. Image Process. 2012, 21, 2228–2244. [Google Scholar] [CrossRef]
  58. Bascle, B.; Blake, A.; Zisserman, A. Motion Deblurring and Super-resolution from an Image Sequence. In Proceedings of the European Conference on Computer Vision, Cambridge, UK, 14–19 April 1996; pp. 571–582. [Google Scholar]
  59. NVIDIA GeForce RTX 3060. Available online: https://www.nvidia.com/en-us/geforce/graphics-cards/30-series/rtx-3060-3060ti/ (accessed on 10 June 2022).
  60. PyTorch. Available online: https://pytorch.org/ (accessed on 3 June 2022).
  61. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
  62. Bottou, L. Large-scale machine learning with stochastic gradient descent. In Proceedings of the International Conference Computational Statistics (COMPSTAT), Paris, France, 22–27 August 2010; pp. 177–186. [Google Scholar]
  63. Peak Signal-to-Noise Ratio. Available online: https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio (accessed on 13 June 2022).
  64. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [Green Version]
  65. Wang, I.S.; Chan, H.-T.; Hsia, C.-H. Finger-Vein Recognition Using a NASNet with a Cutout. In Proceedings of the 2021 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS), Hualien City, Taiwan, 16–19 November 2021; pp. 1–2. [Google Scholar]
  66. Jetson TX2 Module. Available online: https://developer.nvidia.com/embedded/jetson-tx2 (accessed on 3 June 2022).
  67. Ma, N.; Zhang, X.; Zheng, H.-T.; Sun, J. ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 122–138. [Google Scholar]
  68. Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 618–626. [Google Scholar]
  69. Meng, X.; Xi, X.; Yang, G.; Yin, Y. Finger vein recognition based on deformation information. Science China Inf. Sci. 2018, 61, 1–15. [Google Scholar] [CrossRef]
  70. Liu, W.; Li, W.; Sun, L.; Zhang, L.; Chen, P. Finger vein recognition based on deep learning. In Proceedings of the 2017 12th IEEE Conference on Industrial Electronics and Applications (ICIEA), Siem Reap, Cambodia, 18–20 June 2017; pp. 205–210. [Google Scholar]
  71. Fang, Y.; Wu, Q.; Kang, W. A novel finger vein verification system based on two-stream convolutional network learning. Neurocomputing 2018, 290, 100–107. [Google Scholar] [CrossRef]
Figure 1. Flowchart of the proposed method.
Figure 2. Original finger-vein image preprocessing: (a) original image; (b) binarized image; (c) background removed image; (d) in-plane rotation compensation; (e) removal of left and right areas; and (f) detected finger-vein ROI image.
Figure 3. Architecture of the proposed RMOBF-Net.
Figure 4. Architecture of the residual block.
Figure 5. Examples of the difference image between the enrolled and matching images: (a) enrolled image; (b) matching image of the same class as (a); (c) difference image of (a,b); (d) enrolled image; (e) matching image of a class different from (d); (f) difference image of (d,e).
Figure 6. Examples of images obtained from the same finger. (a,b) SDU-DB and (c,d) PolyU-DB.
Figure 7. Examples of finger-vein images captured in a real environment. (a) Original images, (b) blurred images.
Figure 8. Examples of original and blurred images of SDU-DB. (a) Original images, (b) motion and optical blurred images.
Figure 9. Examples of original and blurred images of PolyU-DB. (a) Original images, (b) motion and optical blurred images.
Figure 10. Training and validation loss graphs of RMOBF-Net. (a) trained until 300 epochs (attached in paper), (b) trained until 1000 epochs.
Figure 11. Training and validation loss and accuracy graphs of DenseNet-161 using images restored by the proposed RMOBF-Net: (a) trained until 30 epochs (attached in paper), (b) trained until 100 epochs.
Figure 12. Examples of restored SDU-DB finger-vein images using the state-of-the-art methods and the proposed RMOBF-Net (motion blur with Gaussian blur filter of 11 × 11): (a) original image, (b) blurred image and images restored by (c) DeblurGAN, (d) DeblurGANv2, (e) SRN-DeblurNet, (f) MPRNet, and (g) RMOBF-Net.
Figure 13. Examples of restored SDU-DB finger-vein images using the state-of-the-art methods and the proposed RMOBF-Net (motion blur with Gaussian blur filter of 15 × 15): (a) original image, (b) blurred image and images restored by (c) DeblurGAN, (d) DeblurGANv2, (e) SRN-DeblurNet, (f) MPRNet, and (g) RMOBF-Net.
Figure 14. Examples of restored SDU-DB finger-vein images using the state-of-the-art methods and the proposed RMOBF-Net (motion blur with Gaussian blur filter of 19 × 19): (a) original image, (b) blurred image and images restored by (c) DeblurGAN, (d) DeblurGANv2, (e) SRN-DeblurNet, (f) MPRNet, and (g) RMOBF-Net.
Figure 15. ROC curves of finger-vein recognition for schemes 1–5 with SDU-DB (random motion blur and optical blur with intensity 11 × 11 (Gaussian filter size) and 11 (Gaussian filter standard deviation)).
Figure 16. ROC curves of finger-vein recognition for schemes 1–5 with SDU-DB (random motion blur and optical blur with intensity 15 × 15 (Gaussian filter size) and 15 (Gaussian filter standard deviation)).
Figure 17. ROC curves of finger-vein recognition for schemes 1–5 with SDU-DB (random motion blur and optical blur with intensity 19 × 19 (Gaussian filter size) and 19 (Gaussian filter standard deviation)).
Figure 18. ROC curves of finger-vein recognition with SDU-DB for scheme 4 by various restoration methods (random motion blur and optical blur with intensity 11 × 11 (Gaussian filter size) and 11 (Gaussian filter standard deviation)).
Figure 19. ROC curves of finger-vein recognition with SDU-DB for scheme 4 by various restoration methods (random motion blur and optical blur with intensity 15 × 15 (Gaussian filter size) and 15 (Gaussian filter standard deviation)).
Figure 20. ROC curves of finger-vein recognition with SDU-DB for scheme 4 by various restoration methods (random motion blur and optical blur with intensity 19 × 19 (Gaussian filter size) and 19 (Gaussian filter standard deviation)).
Figure 21. ROC curves of finger-vein recognition with SDU-DB for scheme 5 by various restoration methods (random motion blur and optical blur with intensity 11 × 11 (Gaussian filter size) and 11 (Gaussian filter standard deviation)).
Figure 22. ROC curves of finger-vein recognition with SDU-DB for scheme 5 by various restoration methods (random motion blur and optical blur with intensity 15 × 15 (Gaussian filter size) and 15 (Gaussian filter standard deviation)).
Figure 23. ROC curves of finger-vein recognition with SDU-DB for scheme 5 by various restoration methods (random motion blur and optical blur with intensity 19 × 19 (Gaussian filter size) and 19 (Gaussian filter standard deviation)).
Figure 24. Examples of restored PolyU-DB finger-vein images using state-of-the-art methods and the proposed RMOBF-Net (motion blur with Gaussian blur filter of 11 × 11): (a) original image, (b) blurred image and images restored by (c) DeblurGAN, (d) DeblurGANv2, (e) SRN-DeblurNet, (f) MPRNet, and (g) RMOBF-Net.
Figure 25. Examples of restored PolyU-DB finger-vein images using state-of-the-art methods and the proposed RMOBF-Net (motion blur with Gaussian blur filter of 15 × 15): (a) original image, (b) blurred image, and restored images by (c) DeblurGAN, (d) DeblurGANv2, (e) SRN-DeblurNet, (f) MPRNet, and (g) RMOBF-Net.
Figure 26. Examples of restored PolyU-DB finger-vein images using state-of-the-art methods and the proposed RMOBF-Net (motion blur with Gaussian blur filter of 19 × 19): (a) original image, (b) blurred image and images restored by (c) DeblurGAN, (d) DeblurGANv2, (e) SRN-DeblurNet, (f) MPRNet, and (g) RMOBF-Net.
Figure 27. ROC curves of finger-vein recognition for schemes 1–5 with PolyU-DB (random motion blur and optical blur with intensity 11 × 11 (Gaussian filter size) and 11 (Gaussian filter standard deviation)).
Figure 28. ROC curves of finger-vein recognition for schemes 1–5 with PolyU-DB (random motion blur and optical blur with intensity 15 × 15 (Gaussian filter size) and 15 (Gaussian filter standard deviation)).
Figure 29. ROC curves of finger-vein recognition for schemes 1–5 with PolyU-DB (random motion blur and optical blur with intensity 19 × 19 (Gaussian filter size) and 19 (Gaussian filter standard deviation)).
Figure 30. ROC curves of finger-vein recognition with PolyU-DB for scheme 4 by various restoration methods (random motion blur and optical blur with intensity 11 × 11 (Gaussian filter size) and 11 (Gaussian filter standard deviation)).
Figure 31. ROC curves of finger-vein recognition with PolyU-DB for scheme 4 by various restoration methods (random motion blur and optical blur with intensity 15 × 15 (Gaussian filter size) and 15 (Gaussian filter standard deviation)).
Figure 32. ROC curves of finger-vein recognition with PolyU-DB for scheme 4 by various restoration methods (random motion blur and optical blur with intensity 19 × 19 (Gaussian filter size) and 19 (Gaussian filter standard deviation)).
Figure 33. ROC curves of finger-vein recognition with PolyU-DB for scheme 5 by various restoration methods (random motion blur and optical blur with intensity 11 × 11 (Gaussian filter size) and 11 (Gaussian filter standard deviation)).
Figure 34. ROC curves of finger-vein recognition with PolyU-DB for scheme 5 by various restoration methods (random motion blur and optical blur with intensity 15 × 15 (Gaussian filter size) and 15 (Gaussian filter standard deviation)).
Figure 35. ROC curves of finger-vein recognition with PolyU-DB for scheme 5 by various restoration methods (random motion blur and optical blur with intensity 19 × 19 (Gaussian filter size) and 19 (Gaussian filter standard deviation)).
Figure 36. Correct recognition examples after restoring motion and optical blur. (a) Incorrect genuine matching before restoration, (b) correct genuine matching after restoration, (c) incorrect imposter matching before restoration, and (d) correct imposter matching after restoration. From the left, the examples in (a–d) each present the enrolled, matching, and difference images, respectively.
Figure 37. Incorrect recognition examples after restoring motion and optical blur. (a) Incorrect genuine matching before restoration, (b) incorrect genuine matching after restoration, (c) incorrect imposter matching before restoration, and (d) incorrect imposter matching after restoration. From the left, the examples in (a–d) each present the enrolled, matching, and difference images, respectively.
Figure 38. Comparisons of the class activation maps between the original and restored images. (a,b) are examples of authentic and imposter images, respectively. The images on the left of (a,b) are the original images, and the images on the right are images restored by RMOBF-Net. From top to bottom, (a,b) show the class activation map outputs from the 1st convolutional layer, 1st transition layer, 2nd transition layer, 3rd transition layer, and the last dense block.
Table 1. Comparisons of the previous and proposed finger-vein image restoration methods.

| Category | Methods | Advantages | Disadvantages |
|---|---|---|---|
| Without considering blur restoration (handcrafted feature-based) | Local maximum curvature + template matching [21]; repeated line tracking + template matching [22]; feature point matching [23]; minutia points + LBP-based feature extraction + Hamming distance [24]; Gabor filter + SIFT feature matching + Euclidean distance [25]; local line binary pattern + Hamming distance [26] | Recognition performance can be improved when the designed optimal filter is accurately modeled in the image spatial domain. | When a filter designed from the source image is applied to an image with different characteristics, recognition performance may decrease. Vulnerable to image variations such as illumination change, misalignment, and distortion because the optimal filter is designed in a constrained environment. |
| Without considering blur restoration (trained feature-based) | Difference image + CNN + genuine/imposter matching [27,28]; vein pattern maps + CNN + feature matching [31]; composite image + CNN + shift matching + genuine/imposter matching [32,33,34]; SCNN-LSTM + Hamming distance [36] | Efficient as the optimal filter is not directly modeled. Various image features can be extracted through training, so the method is robust to image variation. | Requires large data for training. |
| Skin scattering blur restoration (handcrafted feature-based) | PSF + CLS filter [3]; BOM + POC [4]; WBOM + ADAGC + LBIM + NSTM + Gabor wavelets + POC [5]; PSF + BOM [6]; optical model + POC [7] | If the light scattering components are accurately estimated, performance can be improved significantly. | Blur parameters for the scattering component must be accurately measured. If the domains of the image used for scattering component estimation and the test image differ, the parameters must be re-estimated for image restoration. |
| Skin scattering blur restoration (trained feature-based) | BOM + CNN + matrix matching [8] | Shows robust performance through training with images obtained from various environments. | Did not consider optical and motion blur. |
| Optical blur restoration (handcrafted feature-based) | 2 PSFs for optical blur and scattering blur + CLS filter + modified Hausdorff distance [9] | Both optical blur and skin scattering blur are considered. Performance can be significantly improved if the blur components are accurately estimated. | Both optical blur and skin scattering blur components require accurate parameter estimation for performance improvement, which can increase processing time. |
| Optical blur restoration (trained feature-based) | Difference image + conditional GAN + CNN + genuine/imposter matching [10]; local blur model + global blur model + GAN-based restoration [11] | Robust performance through training, even with images obtained from various environments. [11] defines blur models considering user condition and temperature condition. | Requires large data for training. Did not consider motion blur. |
| Motion blur restoration (trained feature-based) | Difference image + modified DeblurGAN + CNN + genuine/imposter matching [12] | Shows performance improvement considering possible motion blur during finger-vein recognition. | Requires large data for restoration and recognition. |
| Motion blur + optical blur restoration (trained feature-based) | Difference image + RMOBF-Net + CNN + genuine/imposter matching (proposed method) | Continuously maintains features that can be utilized during motion and optical blur restoration. Motion and optical blur that may occur during finger-vein recognition are considered. | Requires a long (approximately 14 h) training process. |
Table 2. Description of the proposed RMOBF-Net (× 3 means the number of iterations).

| Stage | Layer | Input Feature Size (Height × Width × Channel) | Output Feature Size | Size of Kernel (Height × Width) | Stride | Padding |
|---|---|---|---|---|---|---|
| | Input (enc 0) | 256 × 256 × 3 | | | | |
| Encoder | Conv | 256 × 256 × 3 | 256 × 256 × 48 | 3 × 3 | 1 | 1 |
| | Residual block × 3 (enc 1) | 256 × 256 × 48 | 256 × 256 × 48 | 3 × 3 | 1 | 1 |
| | Downsample + Conv | 256 × 256 × 48 | 128 × 128 × 96 | 3 × 3 | 1 | 1 |
| | Residual block × 3 (enc 2) | 128 × 128 × 96 | 128 × 128 × 96 | 3 × 3 | 1 | 1 |
| | Downsample + Conv | 128 × 128 × 96 | 64 × 64 × 192 | 3 × 3 | 1 | 1 |
| | Residual block × 3 (enc 3) | 64 × 64 × 192 | 64 × 64 × 192 | 3 × 3 | 1 | 1 |
| | Downsample + Conv | 64 × 64 × 192 | 32 × 32 × 288 | 3 × 3 | 1 | 1 |
| | Residual block × 3 (enc 4) | 32 × 32 × 288 | 32 × 32 × 288 | 3 × 3 | 1 | 1 |
| | Downsample + Conv | 32 × 32 × 288 | 16 × 16 × 384 | 3 × 3 | 1 | 1 |
| | Residual block × 3 (enc 5) | 16 × 16 × 384 | 16 × 16 × 384 | 3 × 3 | 1 | 1 |
| | Downsample + Conv | 16 × 16 × 384 | 8 × 8 × 480 | 3 × 3 | 1 | 1 |
| | Residual block × 3 | 8 × 8 × 480 | 8 × 8 × 480 | 3 × 3 | 1 | 1 |
| Decoder | Upsample (dec 5) | 8 × 8 × 480 | 16 × 16 × 384 | 3 × 3 | 1 | 1 |
| | Concat | 16 × 16 × 384 (dec 5), 16 × 16 × 384 (enc 5) | 16 × 16 × 768 | | | |
| | Conv | 16 × 16 × 768 | 16 × 16 × 384 | 1 × 1 | 1 | 0 |
| | Residual block × 3 | 16 × 16 × 384 | 16 × 16 × 384 | 3 × 3 | 1 | 1 |
| | Upsample + Conv (dec 4) | 16 × 16 × 384 | 32 × 32 × 288 | 3 × 3 | 1 | 1 |
| | Concat | 32 × 32 × 288 (dec 4), 32 × 32 × 288 (enc 4) | 32 × 32 × 576 | | | |
| | Conv | 32 × 32 × 576 | 32 × 32 × 288 | 1 × 1 | 1 | 0 |
| | Residual block × 3 | 32 × 32 × 288 | 32 × 32 × 288 | 3 × 3 | 1 | 1 |
| | Upsample + Conv (dec 3) | 32 × 32 × 288 | 64 × 64 × 192 | 3 × 3 | 1 | 1 |
| | Concat | 64 × 64 × 192 (dec 3), 64 × 64 × 192 (enc 3) | 64 × 64 × 384 | | | |
| | Conv | 64 × 64 × 384 | 64 × 64 × 192 | 1 × 1 | 1 | 0 |
| | Residual block × 3 | 64 × 64 × 192 | 64 × 64 × 192 | 3 × 3 | 1 | 1 |
| | Upsample + Conv (dec 2) | 64 × 64 × 192 | 128 × 128 × 96 | 3 × 3 | 1 | 1 |
| | Concat | 128 × 128 × 96 (dec 2), 128 × 128 × 96 (enc 2) | 128 × 128 × 192 | | | |
| | Conv | 128 × 128 × 192 | 128 × 128 × 96 | 1 × 1 | 1 | 0 |
| | Residual block × 3 | 128 × 128 × 96 | 128 × 128 × 96 | 3 × 3 | 1 | 1 |
| | Upsample + Conv (dec 1) | 128 × 128 × 96 | 256 × 256 × 48 | 3 × 3 | 1 | 1 |
| | Concat | 256 × 256 × 48 (dec 1), 256 × 256 × 48 (enc 1) | 256 × 256 × 96 | | | |
| | Conv | 256 × 256 × 96 | 256 × 256 × 48 | 1 × 1 | 1 | 0 |
| | Residual block × 3 | 256 × 256 × 48 | 256 × 256 × 48 | 3 × 3 | 1 | 1 |
| | Conv (dec 0) | 256 × 256 × 48 | 256 × 256 × 3 | 3 × 3 | 1 | 1 |
| | Concat | 256 × 256 × 3 (enc 0), 256 × 256 × 3 (dec 0) | 256 × 256 × 6 | | | |
| | Output | 256 × 256 × 6 | 256 × 256 × 3 | 1 × 1 | 1 | 0 |
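Reading Table 2 as code, the encoder path can be sketched in PyTorch as follows. The down-sampling operator and the internal ordering of the residual block are assumptions for illustration (Figure 4 gives the exact block), so this reproduces the feature sizes of Table 2 rather than the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """3 x 3 conv residual block with stride 1 and padding 1, keeping the
    spatial size and channel count fixed as in Table 2; internal activation
    ordering is an assumption (see Figure 4 for the exact block)."""
    def __init__(self, ch: int):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, 3, stride=1, padding=1)
        self.conv2 = nn.Conv2d(ch, ch, 3, stride=1, padding=1)
    def forward(self, x):
        return x + self.conv2(F.relu(self.conv1(x)))

class EncoderStage(nn.Module):
    """One encoder stage of Table 2: 2x downsample, a 3 x 3 conv to widen
    the channels, then three residual blocks."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, stride=1, padding=1)
        self.blocks = nn.Sequential(*[ResidualBlock(out_ch) for _ in range(3)])
    def forward(self, x):
        x = F.interpolate(x, scale_factor=0.5, mode="bilinear", align_corners=False)
        return self.blocks(self.conv(x))

# 256 x 256 x 3 input -> enc 1 ... enc 5 feature sizes of Table 2
x = torch.randn(1, 3, 256, 256)
stem = nn.Sequential(nn.Conv2d(3, 48, 3, 1, 1), *[ResidualBlock(48) for _ in range(3)])
feat = stem(x)                                  # 256 x 256 x 48 (enc 1)
for in_ch, out_ch in [(48, 96), (96, 192), (192, 288), (288, 384)]:
    feat = EncoderStage(in_ch, out_ch)(feat)    # halves H and W each stage
print(feat.shape)                                # torch.Size([1, 384, 16, 16])
```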
Table 3. Comparisons of motion and optical blur restoration performance of SDU-DB by state-of-the-art methods and proposed RMOBF-Net (11 × 11, 15 × 15, and 19 × 19 mean the size of Gaussian blur filter).

| Methods | 11 × 11 PSNR | 11 × 11 SSIM | 15 × 15 PSNR | 15 × 15 SSIM | 19 × 19 PSNR | 19 × 19 SSIM |
|---|---|---|---|---|---|---|
| DeblurGAN [13] | 29.79 | 0.910 | 29.764 | 0.914 | 29.839 | 0.916 |
| DeblurGANv2 [15] | 31.092 | 0.913 | 31.058 | 0.916 | 30.545 | 0.890 |
| SRN-DeblurNet [16] | 32.705 | 0.958 | 32.58 | 0.957 | 32.553 | 0.957 |
| MPRNet [17] | 31.395 | 0.948 | 30.239 | 0.937 | 31.158 | 0.947 |
| RMOBF-Net | 32.775 | 0.955 | 32.668 | 0.954 | 32.574 | 0.954 |
Table 4. Comparison of finger-vein recognition error (EER) with and without motion and optical blur application and restoration in SDU-DB (unit: %).

| Optical Blur Intensity (Gaussian Filter Size, Std.) | Scheme 1 | Scheme 2 | Scheme 3 | Scheme 4 | Scheme 5 |
|---|---|---|---|---|---|
| 11 × 11, 11 | 2.647 | 13.473 | 6.367 | 4.638 | 5.754 |
| 15 × 15, 15 | | 13.31 | 6.604 | 4.336 | 5.488 |
| 19 × 19, 19 | | 13.494 | 6.624 | 4.290 | 5.779 |
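For reference, the optical blur intensities in Tables 4 and 8 correspond to Gaussian filtering with kernel size k × k and standard deviation k (k = 11, 15, 19). The snippet below is a minimal sketch of that setting with OpenCV, using a random array as a stand-in image; the paper's random motion blur kernel is not reproduced here.

```python
import cv2
import numpy as np

# Stand-in ROI image; in practice this would be a finger-vein ROI.
img = np.random.randint(0, 256, (256, 256), dtype=np.uint8)

# Optical blur at the three intensities used in Tables 4 and 8.
for k in (11, 15, 19):
    blurred = cv2.GaussianBlur(img, (k, k), sigmaX=k)
```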
Table 5. Comparisons of finger-vein recognition error (EER) on SDU-DB by schemes 4 and 5 with state-of-the-art restoration models and proposed RMOBF-Net (unit: %).

| Methods | Scheme 4 (11 × 11) | Scheme 4 (15 × 15) | Scheme 4 (19 × 19) | Scheme 5 (11 × 11) | Scheme 5 (15 × 15) | Scheme 5 (19 × 19) |
|---|---|---|---|---|---|---|
| DeblurGAN [13] | 6.706 | 6.471 | 5.651 | 12.701 | 13.443 | 12.176 |
| DeblurGANv2 [15] | 5.933 | 5.828 | 5.594 | 8.315 | 8.483 | 9.127 |
| SRN-DeblurNet [16] | 4.410 | 4.676 | 4.848 | 7.215 | 7.458 | 7.402 |
| MPRNet [17] | 5.145 | 5.348 | 5.430 | 9.211 | 8.819 | 9.421 |
| RMOBF-Net | 4.638 | 4.336 | 4.290 | 5.754 | 5.488 | 5.779 |
Table 6. Comparison of finger-vein recognition error (EER) with and without motion and optical blur application and restoration in SDU-DB according to different finger-vein recognition methods (unit: %) (random motion blur and optical blur with intensity 19 × 19 (Gaussian filter size) and 19 (Gaussian filter standard deviation)).

| Methods | Scheme 1 | Scheme 2 | Scheme 3 | Scheme 4 | Scheme 5 |
|---|---|---|---|---|---|
| DenseNet-161 | 2.647 | 13.494 | 6.624 | 4.290 | 5.779 |
| NASNet [65] | 3.269 | 21.793 | 9.306 | 4.849 | 8.284 |
Table 7. Comparisons of motion and optical blur restoration performance of PolyU-DB by state-of-the-art methods and proposed RMOBF-Net (11 × 11, 15 × 15, and 19 × 19 mean the size of Gaussian blur filter).

| Methods | 11 × 11 PSNR | 11 × 11 SSIM | 15 × 15 PSNR | 15 × 15 SSIM | 19 × 19 PSNR | 19 × 19 SSIM |
|---|---|---|---|---|---|---|
| DeblurGAN [13] | 31.185 | 0.936 | 30.564 | 0.93 | 31.071 | 0.934 |
| DeblurGANv2 [15] | 31.741 | 0.933 | 31.373 | 0.924 | 31.255 | 0.927 |
| SRN-DeblurNet [16] | 33.199 | 0.969 | 33.031 | 0.969 | 32.889 | 0.968 |
| MPRNet [17] | 32.494 | 0.966 | 32.609 | 0.967 | 32.382 | 0.966 |
| RMOBF-Net | 33.701 | 0.968 | 33.577 | 0.968 | 33.276 | 0.967 |
Table 8. Finger-vein recognition error (EER) comparison according to motion and optical blur application in PolyU-DB (unit: %).

| Optical Blur Intensity (Gaussian Filter Size, Std.) | Scheme 1 | Scheme 2 | Scheme 3 | Scheme 4 | Scheme 5 |
|---|---|---|---|---|---|
| 11 × 11, 11 | 1.205 | 18.424 | 5.307 | 2.669 | 5.428 |
| 15 × 15, 15 | | 20.707 | 5.416 | 2.465 | 5.887 |
| 19 × 19, 19 | | 23.612 | 5.750 | 3.096 | 6.663 |
Table 9. Comparisons of finger-vein recognition error (EER) on PolyU-DB by schemes 4 and 5 with state-of-the-art restoration models and proposed RMOBF-Net (unit: %).

| Methods | Scheme 4 (11 × 11) | Scheme 4 (15 × 15) | Scheme 4 (19 × 19) | Scheme 5 (11 × 11) | Scheme 5 (15 × 15) | Scheme 5 (19 × 19) |
|---|---|---|---|---|---|---|
| DeblurGAN [13] | 6.092 | 6.303 | 6.510 | 14.663 | 13.710 | 15.318 |
| DeblurGANv2 [15] | 3.739 | 4.060 | 3.888 | 5.224 | 6.079 | 7.161 |
| SRN-DeblurNet [16] | 3.045 | 3.245 | 3.503 | 7.727 | 9.870 | 7.815 |
| MPRNet [17] | 3.332 | 2.265 | 4.770 | 11.808 | 13.897 | 13.731 |
| RMOBF-Net | 2.669 | 2.465 | 3.096 | 5.428 | 5.887 | 6.663 |
Table 10. Comparisons of processing time, FLOPs, and the number of parameters of the proposed method.

| Category | RMOBF-Net | DenseNet-161 | Total |
|---|---|---|---|
| Processing time (ms), desktop computer | 17.7 | 34.2 | 51.9 |
| Processing time (ms), Jetson TX2 | 73.1 | 125.3 | 198.4 |
| FLOPs (G) | 75.5 | 7.82 | 83.32 |
| Number of parameters (M) | 49.83 | 28.68 | 78.51 |
Table 11. Comparisons of processing time and speed as frame per second (fps) of the proposed method and the state-of-the-art restoration models on the Jetson TX2 embedded system.

| Category | Processing Time (ms) | Frames per Second (fps) |
|---|---|---|
| DeblurGAN [13] | 53.1 | 18.83 |
| DeblurGANv2 [15] | 591.84 | 1.69 |
| SRN-DeblurNet [16] | 950.57 | 1.05 |
| MPRNet [17] | 758.74 | 1.32 |
| RMOBF-Net | 73.1 | 13.7 |