Article

Symmetric Face Normalization

1 School of Computer and Communication Engineering, University of Science and Technology Beijing, Xueyuan Road 30, Haidian District, Beijing 100083, China
2 Beijing Key Laboratory of Knowledge Engineering for Materials Science, Xueyuan Road 30, Haidian District, Beijing 100083, China
* Author to whom correspondence should be addressed.
Submission received: 10 December 2018 / Revised: 30 December 2018 / Accepted: 14 January 2019 / Published: 16 January 2019
(This article belongs to the Special Issue Symmetry in Cooperative Applications III)

Abstract

Image registration is an important process in image processing that is used to improve the performance of computer-vision-related tasks. In this paper, a novel self-registration method, namely the symmetric face normalization (SFN) algorithm, is proposed. This paper makes three contributions. First, a self-normalization algorithm for face images is proposed, which normalizes a face image to be horizontally reflection-symmetric. Its advantage is that no face model needs to be built, a step that is usually severely time-consuming. Moreover, it can be considered a pre-processing procedure that greatly decreases the number of parameters that need to be adjusted. Second, an iterative algorithm is designed to solve the self-normalization problem. Finally, SFN is applied to the between-image alignment problem, which results in the symmetric face alignment (SFA) algorithm. Experiments performed on face databases show that the accuracy of SFN is higher than 0.95 when the translation on the x-axis is lower than 15 pixels or the rotation angle is lower than 18°. Moreover, the proposed SFA outperforms a state-of-the-art between-image alignment algorithm in efficiency (about four times faster) without loss of accuracy.

1. Introduction

Image registration has long been considered an important research field in computer vision. It focuses on the problem of aligning several images of the same object. This technique plays an important role in many applications, such as medical imaging, biometrics, image mosaicking, and video stabilization. Many algorithms and techniques have been proposed for these different applications. In this paper, we focus on the face alignment problem.
There have been many face registration techniques, which can be roughly divided into two categories: model-based and model-free algorithms. In model-based registration methods, a face model is built from a large number of face images. For example, the active appearance model (AAM), proposed by Cootes et al. [1,2], is one of the most powerful model-based algorithms. It decouples and models the shape and the texture of a deformable object, and is able to generate a variety of photorealistic instances. Because of this outstanding modeling ability, AAM has been widely applied to human eye modeling, object tracking, facial recognition, gait analysis, and medical image segmentation and analysis [3,4,5,6,7]. Later, shape-regression-based methods were proposed [8,9,10]. These methods begin with an initial estimate of the landmark locations, which is then refined iteratively. Subsequently, deep learning frameworks were introduced and greatly improved face alignment performance [11,12,13,14]. Despite their satisfactory performance, model-based algorithms suffer from a serious dependence on large amounts of training data.
On the other hand, model-free algorithms treat image registration as a between-image matching problem that does not require a large number of training samples. For example, to address the transformation error between gallery and probe image pairs, a deformable sparse representation model (SRC) was proposed [15]. The key idea of that work is that an alignment procedure is introduced to avoid errors caused by misalignment. This method is effective in cases where the accuracy of face detection is not guaranteed; however, the additional alignment also introduces great computational complexity. An illumination subspace [16,17] has also been proposed, which is learned independently and extends SRC with the help of transfer learning. However, the most serious difficulty of model-free algorithms remains their heavy computational complexity.
The common problem of both model-based and model-free algorithms is that neither makes use of the symmetry of images: symmetric images, whether by reflection, rotation, or translation, contain redundant information. In fact, research on symmetry has a long history. The earliest work on symmetrical images may date back to 1993, when Masuda et al. [18] proposed a method to extract the properties of rotational and reflectional symmetry in 2D images. Liu et al. [19] then developed a method to detect dihedral and frieze symmetry, as well as asymmetric sub-patterns, in planar images. The notable work of Loy and Eklundh [20] introduced scale-invariant feature transform (SIFT) features along with a voting scheme to cluster neighboring symmetrical features. Lee and Liu [21] generalized reflection symmetry detection to a glide-reflection symmetry detection problem by estimating a set of contiguous local straight reflection axes. Kondra et al. [22] presented a multi-scale strategy to detect symmetry: SIFT correlation measures are first computed at different directions of the same scale and then used to select symmetrical regions, and symmetrical regions at other scales are found by the same procedure. Patraucean et al. [23] conducted a detection-and-validation procedure to find symmetrical regions: candidate patches are first detected using a Hough-like voting scheme, and these regions are then validated using a principled statistical procedure inspired by a contrario theory, which minimizes the number of false positives. Michaelsen et al. [24] performed symmetry detection by combining a combinatorial gestalt algebra technique with SIFT descriptors. Cicconet et al. [25] introduced wavelet filtering to conduct pairwise voting. Cai et al. [26] attempted to cluster symmetrical features adaptively. Wang et al. [27] matched symmetric areas by establishing the correspondence of locally affine-invariant edge-based features rather than SIFT features; this outperformed other methods because it is insensitive to illumination variations and suitable for textureless objects. Cicconet et al. [28] proposed to detect symmetry using a pairwise convolutional approach based on the products of complex-valued wavelet convolutions. Funk and Liu [29] applied symmetry detection to reCAPTCHA solutions. However, most of these works are feature-based methods, whose accuracy depends on the localization of features.
To boost the efficiency of model-free registration methods, this paper proposes a symmetric face normalization (SFN) algorithm. This paper makes three contributions. First, a self-normalization algorithm for face images is proposed, which normalizes a face image to be horizontally reflection-symmetric. Its advantage is that no face model needs to be built, a step that is usually severely time-consuming. Moreover, it can be considered a pre-processing procedure that greatly decreases the number of parameters that need to be adjusted. Second, an iterative algorithm is designed to solve the self-normalization problem. Finally, SFN is also applied to the between-image matching problem, which results in the symmetric face alignment (SFA) algorithm.

Related Work

Image registration has a long history, dating back to 1972 [30]. However, the most representative algorithm appeared in 1981, namely the Lucas–Kanade algorithm [31,32]. It models the image registration task as an optimization problem that can be solved efficiently using gradient descent. However, the same object embedded in two images is always disturbed by noise, which may affect the accuracy of registration. Therefore, researchers [33] proposed the DSRC algorithm, in which the registration problem is modeled as
$$\min_{p, E} \; \|E\| \quad \text{s.t.} \quad T(W(x,p)) = I(x) + E(x), \tag{1}$$
where ‖·‖ denotes the L2 norm (the L1 norm is also allowed [33]), I and T are the two images to be aligned, W is the warping function with parameter vector p, and E denotes the noise image. DSRC performs registration well; however, it does not take the symmetry information in face images into account.

2. Symmetric Face Normalization Algorithm

The objective of this paper is to find the appropriate parameters of a similarity transformation such that the object becomes symmetric with respect to the y-axis.

2.1. Symmetric Face Normalization

Suppose W(x, p) is the image warping function with p as its parameters. For simplicity, the discussion in this paper is limited to the similarity transformation; however, other transformations, such as scale and shearing transformations, are also applicable. Given an image I and its reflectionally symmetric image R_I(x), this paper models the symmetric face normalization problem as the following optimization problem:
$$\min_p E = \sum_{x \in I} \big( I(W(x,p)) - R_I(W(x,p)) \big)^2, \tag{2}$$
where
$$W(x,p) = \begin{pmatrix} a & -b & x_0 \\ b & a & 0 \end{pmatrix} \begin{pmatrix} x \\ y \\ 1 \end{pmatrix}, \tag{3}$$

$$a^2 + b^2 = 1, \tag{4}$$
where p = [a, b, x_0] is the parameter vector of the warping function, which is constrained to be a similarity transformation by Equation (4). It should be mentioned that, since we only consider symmetry with respect to the y-axis, translation along the y-axis is not involved. In addition, Equation (4) indicates that there is no scale variation in the transformation W.
Subsequently, it is found that the reflectional symmetry operator R_I is equivalent to the composition of two operators:
$$R_I(W(x,p)) = I(S \circ W(x,p)), \tag{5}$$
where S is the symmetric warping operator which can be formulated as
$$S(x) = \begin{pmatrix} -1 & 0 & w+1 \\ 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} x \\ y \\ 1 \end{pmatrix} = \begin{pmatrix} w - x + 1 \\ y \end{pmatrix},$$
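For a concrete picture of the mirror operator S above, the following minimal sketch (the function name is illustrative, not from the paper) builds its homogeneous-coordinate matrix for an image of width w and applies it to a pixel coordinate; the column x maps to w − x + 1 while the row y is unchanged.

```python
import numpy as np

def mirror_op(w):
    """2x3 homogeneous-coordinate matrix of the mirror operator S for an
    image of width w (1-based column indices): (x, y) -> (w - x + 1, y)."""
    return np.array([[-1.0, 0.0, w + 1.0],
                     [ 0.0, 1.0, 0.0]])

S = mirror_op(5)
pt = np.array([2.0, 3.0, 1.0])   # column 2, row 3 of a width-5 image
print(S @ pt)                     # column 5 - 2 + 1 = 4, row 3
```

Note that applying S twice returns the original coordinate, i.e., S is an involution, which is what makes it a reflection.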
where w is the width of the image I. As a result, objective (2) is transformed to
$$E = \sum_{x \in I} \big( I(W(x,p)) - I(S \circ W(x,p)) \big)^2, \tag{7}$$
where the symmetric transformation S ∘ W(x, p) can be obtained (writing c = x_0 and approximating a ≈ 1 for small rotations) as
$$S \circ W(x,p) = \begin{pmatrix} -1 & 0 & w+1 \\ 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} x - by + c \\ bx + y \\ 1 \end{pmatrix} = \begin{pmatrix} -x + by + w + 1 - c \\ bx + y \end{pmatrix}.$$
Subsequently, the optimal parameter p can be obtained with the help of the Lucas–Kanade algorithm
$$\begin{aligned} \min_{\Delta p} E &= \sum_x \big( I(W(x, p + \Delta p)) - I(S \circ W(x, p + \Delta p)) \big)^2 \\ &\approx \sum_x \big( I(W(x,p)) - I(S \circ W(x,p)) + \nabla I \cdot W_p \cdot \Delta p - \nabla I \cdot (S \circ W)_p \cdot \Delta p \big)^2 \\ &= \sum_x \big( \Delta I|_p + \nabla I \, ( W_p - (S \circ W)_p ) \, \Delta p \big)^2 \\ &= \sum_x \big( \Delta I|_p + J \Delta p \big)^2, \end{aligned}$$
where ΔI|_p = I(W(x,p)) − I(S ∘ W(x,p)), and the Jacobians of the two warps with respect to the remaining parameters (b, c) are

$$W_p = \begin{pmatrix} -y & 1 \\ x & 0 \end{pmatrix},$$

$$(S \circ W)_p = \begin{pmatrix} y & -1 \\ x & 0 \end{pmatrix}.$$
The Jacobian matrix J and the Hessian matrix H can be calculated, respectively, by

$$J = \nabla I \, \big( W_p - (S \circ W)_p \big) = 2 I_x \, (-y, \; 1),$$

$$H = J^\top J = 4 \, (-y, \; 1)^\top I_x^\top I_x \, (-y, \; 1).$$
Finally, by setting the derivative ∂E/∂Δp = 0, the parameter update Δp can be obtained from
$$J^\top J \, \Delta p = -J^\top \Delta I|_p,$$

$$\Delta p = -H^{-1} J^\top \Delta I|_p.$$
Thus, the parameter p can be updated by
$$p = p + \Delta p.$$
The pseudocode of the symmetric face normalization is shown in Algorithm 1.
Algorithm 1 Symmetric Face Normalization Algorithm
1: Initialize p = [1, 0, 0]
2: repeat
3:   Warp the image I with W and parameters p to obtain I(W(x, p))
4:   Compute ΔI = I(W(x, p)) − I(S ∘ W(x, p))
5:   Compute the image gradient ∇I
6:   Compute J = ∇I (W_p − (S ∘ W)_p)
7:   Compute H = J^T J
8:   Compute Δp = −H^{−1} J^T ΔI|_p
9:   Update p = p + Δp
10: until ‖Δp‖ < ε
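To make the iteration concrete, the following is a minimal sketch of Algorithm 1 restricted to a 1D signal and the translation parameter c alone (rotation fixed at b = 0); the function name and test signal are illustrative, not from the paper. Each pass warps the signal, forms the residual against its own mirror image, and takes a Gauss–Newton step as in steps 3–9.

```python
import numpy as np

def sfn_translation_1d(f, n_iter=50, eps=1e-6):
    """Gauss-Newton estimate of the shift c that makes f(x + c)
    reflection-symmetric about the midpoint of its grid.
    A 1D, translation-only sketch of the iteration in Algorithm 1."""
    w = len(f)
    x = np.arange(w, dtype=float)
    c = 0.0
    for _ in range(n_iter):
        fw = np.interp(x + c, x, f)    # step 3: warp the signal by c
        gw = np.gradient(fw)           # step 5: spatial gradient
        r = fw - fw[::-1]              # step 4: residual vs. mirror image
        J = gw - gw[::-1]              # step 6: d(residual)/dc
        H = float(np.dot(J, J))        # step 7: 1x1 Gauss-Newton Hessian
        if H < 1e-12:
            break
        dc = -float(np.dot(J, r)) / H  # step 8: parameter increment
        c += dc                        # step 9: update
        if abs(dc) < eps:              # step 10: stopping criterion
            break
    return c

# A symmetric bump displaced 3 px from the midpoint of a width-101 grid;
# the recovered shift should be close to 3.
x = np.arange(101, dtype=float)
f = np.exp(-0.5 * ((x - 53.0) / 8.0) ** 2)
print(sfn_translation_1d(f))
```

The 2D case adds the rotation parameter b and a second Jacobian column, but the structure of the loop is the same.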

2.2. Symmetric Face Alignment

In this subsection, we combine symmetric normalization with the DSRC formulation in problem (1) to solve the face alignment problem. Suppose I is a probe image subject to an unknown expression and T denotes a template image with the same identity as I. The goal of alignment is to find a transformation of T such that the difference between I and T(W(x, p)) is minimized, i.e.,
$$\min_p \sum_x \big\| T(W(x,p)) - I(x) \big\|_2, \tag{19}$$
where W denotes the transformation of T with parameters p, and ‖·‖_2 denotes the L2 norm. The summation is over the mesh on T.
To solve the optimization problem in (19), it is necessary to linearize it. Assume that a current estimate of p is known; we can then iteratively solve for increments Δp to the parameters [32]. As a result, a problem equivalent to (19) is deduced as
$$\min_{\Delta p} \sum_x \big\| T(W(x, p + \Delta p)) - I(x) \big\|_2. \tag{20}$$
The optimization iteratively updates p in each step by
$$p = p + \Delta p, \tag{21}$$
until the stopping criterion ‖Δp‖ ≤ ε is reached, where ε is a manually preset threshold.
It is remarkable that, if we constrain the transformation T to be a similarity transformation, it can be described as
$$T(x) = \begin{pmatrix} a & -b & c \\ b & a & d \end{pmatrix} \begin{pmatrix} x \\ y \\ 1 \end{pmatrix} = s \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} c \\ d \end{pmatrix}. \tag{23}$$
However, based on the symmetric normalization above, two of these parameters have already been estimated by the procedure in Section 2.1; in other words, θ and c are fixed in (23). Therefore, the transformation T reduces to
$$T(x) = s \begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} 0 \\ d \end{pmatrix}. \tag{24}$$
As a result, only two parameters are left, i.e., scale s and vertical translation d.
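After symmetric normalization fixes the rotation angle and the horizontal translation, the remaining between-image warp carries only a scale s and a vertical shift d. A minimal sketch of this reduced transform (the function name is an assumption, not from the paper):

```python
import numpy as np

def reduced_warp(points, s, d):
    """Reduced similarity transform T(x) = s * (x, y) + (0, d):
    scale by s, then shift vertically by d. points has shape (N, 2)."""
    pts = np.asarray(points, dtype=float)
    return s * pts + np.array([0.0, d])

out = reduced_warp([[1.0, 2.0], [3.0, 4.0]], s=2.0, d=1.0)
print(out)   # [[2. 5.] [6. 9.]]
```

Searching over (s, d) instead of the full four-parameter similarity is what makes SFA cheaper than DSRC in the experiments below.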

3. Experimental Evaluation

In this section, experiments are designed to evaluate two aspects of the proposed algorithms, i.e., efficiency and accuracy. For the efficiency evaluation, the experimental platform is a PC with an Intel Core i5 (Intel, Santa Clara, CA, USA) 1.7 GHz CPU and 8 GB of memory. The algorithms in this paper are implemented in Matlab 2016 (MathWorks, Natick, MA, USA).
Two databases are involved in the following experiments, i.e., FERET [34] and Cohn–Kanade [35].
  • The FERET database contains 15 sessions of face pictures collected between 1993 and 1996. The dataset includes 14,126 images of 1199 individuals, separated into 1564 sets. Moreover, there are duplicate sets that contain images of a person already in the database but collected on a different day; this time difference is as long as two years to allow enough variation. In the following experiments, only the "fb" images with a smile expression are chosen. Example images are shown in Figure 1.
  • The Cohn–Kanade database was collected for research in automatic facial image analysis from 97 subjects. It contains 486 video sequences, each of which begins with a neutral expression and ends with a peak expression. In this paper, both neutral and peak expressions are used in our tests. Example images are shown in Figure 2.
For comparison, we mainly evaluate the performance of four algorithms: SRC, symmetric face normalization (SFN) + SRC, deformable SRC (DSRC), and SFN + SFA + SRC. In particular, SRC is performed on images without alignment and can be considered the baseline. SFN + SRC applies symmetric normalization to faces before SRC; this is effective when only minor disturbances occur. DSRC employs alignment with an affine transformation before SRC; it is robust to complex transformations but incurs a higher computational cost. SFN + SFA + SRC performs SFN and SFA in turn before SRC.

3.1. Symmetric Face Normalization

In this subsection, we evaluate the accuracy of the SFN algorithm on single faces. For this purpose, the FERET dataset is used. The two outer eye corners of each image in the FERET dataset are manually labeled, cf. Figure 1. Then, each face is deformed by a 2D similarity transformation, i.e., translation, rotation, and scale. The translation ranges over [−12, 12] pixels with a step of two pixels, the rotation is up to 60°, and the scale ranges over [0.6, 1.4].
Normalization results can be evaluated by the mean error of the two outer eye corners along the x-axis, i.e.,

$$e = \mathrm{mean}\big( e_x^{\text{left}}, \; e_x^{\text{right}} \big), \tag{25}$$

where e_x^left and e_x^right are the errors of the left and right outer eye corners along the x-axis, respectively.
Since there is unavoidable noise in labeling, minor mis-normalization is acceptable. Therefore, normalization is considered successful if e ≤ 0.5. Given N test images, the normalization accuracy is evaluated as
$$\mathrm{accuracy} = \frac{1}{N} \sum_i^N \delta_i, \qquad \delta_i = \begin{cases} 1, & e \le 0.5, \\ 0, & e > 0.5. \end{cases} \tag{26}$$
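The accuracy criterion above can be computed directly. A small sketch (the function name and sample errors are illustrative) that averages the absolute x-axis errors of the two outer eye corners per image and counts an image as successfully normalized when the mean error is at most 0.5 pixels:

```python
import numpy as np

def sfn_accuracy(e_left, e_right, tol=0.5):
    """Fraction of images whose mean outer-eye-corner x-error is <= tol,
    i.e., the accuracy criterion with delta_i = 1 iff e <= tol."""
    e = (np.asarray(e_left, float) + np.asarray(e_right, float)) / 2.0
    return float(np.mean(e <= tol))

# Three hypothetical test images with per-corner absolute x-errors (pixels).
print(sfn_accuracy([0.2, 0.6, 0.4], [0.4, 0.8, 0.3]))
```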
The normalization accuracy results are shown in Figure 3. Three conclusions can be drawn. First, the symmetric normalization algorithm is sensitive to translation along the x-axis and to rotation, and robust to translation along the y-axis and to variation in scale. This result is consistent with the definition of "symmetric" in this paper, which only involves parameters connected with the x-axis, such as x-axis translation and rotation. Second, variation in scale and translation along the y-axis has hardly any effect on the accuracy of symmetric face normalization; this is easy to understand, since symmetry has nothing to do with either. Finally, the robustness of symmetric face normalization is limited to a range of parameter values. For example, the accuracy of SFN is higher than 0.95 when the translation on the x-axis is lower than 15 pixels, or the rotation angle is lower than 18°.

3.2. Symmetric Face Alignment (SFA)

In this subsection, we evaluate the robustness and efficiency of the symmetric face alignment algorithm proposed in Section 2.2. To this end, all images with a neutral expression in the FERET dataset are normalized using the SFN algorithm, which leads to symmetric faces. Then, each face is deformed by a 2D similarity transformation, i.e., translation, rotation, and scale. The translation ranges over [−12, 12] pixels with a step of two pixels, the rotation is up to 60°, and the scale ranges over [0.6, 1.4]. Then, the SFA algorithm is applied to each image and its transformed version.
The results are evaluated by the alignment error e_1. More specifically, let e_0 be the alignment error obtained by aligning a probe image from the manually labeled position on the gallery image. Alignment is considered successful if ‖e_1‖_1 − ‖e_0‖_1 ≤ 0.01‖e_0‖_1. Given N probe images, the aligning accuracy is evaluated as
$$\mathrm{SuccessfulRate} = \frac{1}{N} \sum_i^N \delta_i, \qquad \delta_i = \begin{cases} 1, & \text{alignment is successful}, \\ 0, & \text{otherwise}. \end{cases} \tag{27}$$
Comparison is performed among DSRC, SFN + DSRC, and SFN + SFA. In particular, DSRC aligns each image pair without symmetric normalization. SFN + DSRC normalizes the face image with SFN before aligning each image pair with DSRC, while SFN + SFA first normalizes the face image with SFN and then aligns each image pair with SFA. The alignment results are shown in Figure 4. From the results, we find that SFN + SFA achieves the best performance. The most likely reason is the more robust symmetric normalization, which greatly reduces the number of parameters to be optimized in the alignment problem. However, it can also be observed that the differences between the three algorithms are small.
At the same time, the efficiency of the algorithms is compared by means of the average running time (ART), i.e., the average time spent aligning one image. The ART results are shown in Table 1. They indicate that SFN + SFA (0.32 s) is about four times as fast as DSRC (1.21 s).

4. Conclusions

In this paper, we have presented a symmetric face normalization algorithm. The key idea is to normalize a face image by exploiting an intrinsic characteristic, i.e., symmetry. As a result, an image is normalized with respect to the symmetric axis of the face. The proposed symmetric normalization algorithm has two advantages. On the one hand, it is able to normalize a face image without the help of any other image. On the other hand, it decreases the number of parameters in the subsequent alignment procedure. This is important in cases where an image needs to be aligned during pre-processing. Subsequently, an extension of SFN, symmetric face alignment (SFA), was proposed to make use of symmetry in the alignment between two images. Experiments showed that the proposed algorithm significantly enhances the efficiency of between-image alignment without loss of accuracy.

Author Contributions

Supervision, X.B.; Validation, Z.L.; Writing—Original Draft, Y.S.; Writing—Review and Editing, Y.S.

Funding

This research was funded by the National Key Research and Development Program of China (No. 2016YFB0700500).

Acknowledgments

The authors acknowledge the financial support from the National Key Research and Development Program of China.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cootes, T.F.; Edwards, G.J.; Taylor, C.J. Active Appearance Models. In Proceedings of the European Conference on Computer Vision, Freiburg, Germany, 2–6 June 1998; Volume 2, pp. 484–498.
  2. Gao, X.; Su, Y.; Li, X.; Tao, D. A Review of Active Appearance Models. IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. 2010, 40, 145–158.
  3. Wan, J.; Ren, X.; Hu, G. Automatic red-eyes detection based on AAM. In Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, The Hague, The Netherlands, 10–13 October 2004; Volume 7, pp. 6337–6341.
  4. Xiao, B.; Gao, X.; Tao, D.; Li, X. A new approach for face recognition by sketches in photos. Signal Process. 2009, 89, 1576–1588.
  5. Stegmann, M.B. Object tracking using active appearance models. In Proceedings of the Danish Conference on Pattern Recognition and Image Analysis, Copenhagen, Denmark, July 2001; Volume 1, pp. 54–60. Available online: http://www2.imm.dtu.dk/pubdb/views/publication_details.php?id=115 (accessed on 16 January 2019).
  6. Li, X.; Maybank, S.J.; Shuicheng, Y.; Tao, D.; Dong, X. Gait Components and Their Application to Gender Recognition. IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. 2008, 38, 145–155.
  7. Mitchell, S.C.; Lelieveldt, B.P.F.; van der Geest, R.J.; Bosch, H.G.; Reiber, J.H.C.; Sonka, M. Multistage hybrid active appearance model matching: Segmentation of left and right ventricles in cardiac MR images. IEEE Trans. Med. Imaging 2001, 20, 415–423.
  8. Cao, X.; Wei, Y.; Wen, F.; Sun, J. Face alignment by explicit shape regression. Int. J. Comput. Vis. 2014, 107, 177–190.
  9. Kazemi, V.; Sullivan, J. One millisecond face alignment with an ensemble of regression trees. In Proceedings of the Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 1867–1874.
  10. Xiao, S.; Feng, J.; Xing, J.; Lai, H.; Yan, S.; Kassim, A. Robust facial landmark detection via recurrent attentive-refinement networks. In European Conference on Computer Vision; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2016; Volume 9905, pp. 57–72.
  11. Fan, H.; Zhou, E. Approaching human level facial landmark localization by deep learning. Image Vis. Comput. 2016, 47, 427–435.
  12. Bulat, A.; Tzimiropoulos, G. Two-stage convolutional part heatmap regression for the 1st 3D face alignment in the wild (3DFAW) challenge. In European Conference on Computer Vision; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2016; Volume 9914, pp. 616–624.
  13. Trigeorgis, G.; Snape, P.; Nicolaou, M.A.; Antonakos, E.; Zafeiriou, S. Mnemonic Descent Method: A Recurrent Process Applied for End-to-End Face Alignment. In Proceedings of the Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 4177–4187.
  14. Kowalski, M.; Naruniec, J.; Trzcinski, T. Deep Alignment Network: A Convolutional Neural Network for Robust Face Alignment. In Proceedings of the Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 21–26 July 2017; pp. 2034–2043.
  15. Wagner, A.; Wright, J.; Ganesh, A.; Zhou, Z.; Mobahi, H.; Ma, Y. Toward a Practical Face Recognition System: Robust Alignment and Illumination by Sparse Representation. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 372–386.
  16. Zhuang, L.; Yang, A.Y.; Zhou, Z.; Sastry, S.S.; Ma, Y. Single-Sample Face Recognition with Image Corruption and Misalignment via Sparse Illumination Transfer. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 23–28 June 2013.
  17. Zhuang, L.; Chan, T.H.; Yang, A.Y.; Sastry, S.S.; Ma, Y. Sparse Illumination Learning and Transfer for Single-Sample Face Recognition with Image Corruption and Misalignment. Int. J. Comput. Vis. 2015, 114, 272–287.
  18. Masuda, T.; Yamamoto, K.; Yamada, H. Extraction of symmetry properties using correlation with rotated and reflected images. Electron. Commun. Japan (Part III Fundam. Electron. Sci.) 1993, 76, 8–19.
  19. Liu, Y.; Hays, J.; Xu, Y.Q.; Shum, H.Y. Digital papercutting. In Proceedings of the ACM SIGGRAPH, Los Angeles, CA, USA, 31 July–4 August 2005.
  20. Loy, G.; Eklundh, J.O. Detecting symmetry and symmetric constellations of features. In European Conference on Computer Vision; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2006; Volume 3952, pp. 508–521.
  21. Lee, S.; Liu, Y. Curved glide-reflection symmetry detection. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 266–278.
  22. Kondra, S.; Petrosino, A.; Iodice, S. Multi-scale kernel operators for reflection and rotation symmetry: Further achievements. In Proceedings of the Computer Vision and Pattern Recognition Workshops, Portland, OR, USA, 23–28 June 2013; pp. 217–222.
  23. Patraucean, V.; Von Gioi, R.G.; Ovsjanikov, M. Detection of mirror-symmetric image patches. In Proceedings of the Computer Vision and Pattern Recognition Workshops, Portland, OR, USA, 23–28 June 2013; pp. 211–216.
  24. Michaelsen, E.; Muench, D.; Arens, M. Recognition of symmetry structure by use of gestalt algebra. In Proceedings of the Computer Vision and Pattern Recognition Workshops, Portland, OR, USA, 23–28 June 2013; pp. 206–210.
  25. Cicconet, M.; Geiger, D.; Gunsalus, K.C.; Werman, M. Mirror symmetry histograms for capturing geometric properties in images. In Proceedings of the Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 2981–2986.
  26. Cai, D.; Li, P.; Su, F.; Zhao, Z. An adaptive symmetry detection algorithm based on local features. In Proceedings of the Visual Communications and Image Processing Conference, Valletta, Malta, 7–10 December 2015; pp. 478–481.
  27. Wang, Z.; Tang, Z.; Zhang, X. Reflection symmetry detection using locally affine invariant edge correspondence. IEEE Trans. Image Process. 2015, 24, 1297–1301.
  28. Cicconet, M.; Birodkar, V.; Lund, M.; Werman, M.; Geiger, D. A convolutional approach to reflection symmetry. Pattern Recogn. Lett. 2017, 95, 44–50.
  29. Funk, C.; Liu, Y. Symmetry reCAPTCHA. In Proceedings of the Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 5165–5174.
  30. Barnea, D.; Silverman, H.F. A Class of Algorithms for Fast Digital Image Registration. IEEE Trans. Comput. 1972, C-21, 179–186.
  31. Lucas, B.D.; Kanade, T. An Iterative Image Registration Technique with an Application to Stereo Vision. In Proceedings of the 7th International Joint Conference on Artificial Intelligence, Vancouver, BC, Canada, 24–28 August 1981; pp. 674–679.
  32. Baker, S.; Matthews, I. Lucas–Kanade 20 years on: A unifying framework. Int. J. Comput. Vis. 2004, 56, 221–255.
  33. Wright, J.; Yang, A.Y.; Ganesh, A.; Sastry, S.S.; Ma, Y. Robust Face Recognition via Sparse Representation. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 31, 210–227.
  34. Phillips, P.J.; Wechsler, H.; Huang, J.; Rauss, P.J. The FERET database and evaluation procedure for face-recognition algorithms. Image Vis. Comput. 1998, 16, 295–306.
  35. Kanade, T.; Cohn, J.F.; Tian, Y. Comprehensive database for facial expression analysis. In Proceedings of the IEEE International Conference on Automatic Face and Gesture Recognition, Santa Barbara, CA, USA, 21–23 March 2000.
Figure 1. Example pictures in the FERET database. The outer eye corners are labeled by cross symbols.
Figure 2. Example pictures in the Cohn–Kanade database. The outer eye corners are labeled by cross symbols.
Figure 3. Accuracy evaluation of the symmetric face normalization algorithm using the criterion in (26). Four parameters of the similarity transformation are disturbed to evaluate the accuracy of the proposed algorithm, i.e., translation on the x-axis and y-axis, rotation angle, and scale. The top-left figure shows the accuracy with respect to translation on the x-axis; the top-right, translation on the y-axis; the bottom-left, the rotation angle; and the bottom-right, the variation in scale.
Figure 4. Accuracy of the symmetric face alignment algorithm. Four parameters of the similarity transformation are disturbed to evaluate the accuracy of the alignment algorithm, i.e., translation on the x-axis and y-axis, rotation angle, and scale. The top-left figure shows the accuracy with respect to translation on the x-axis; the top-right, translation on the y-axis; the bottom-left, the rotation angle; and the bottom-right, the variation in scale.
Table 1. Efficiency comparison between DSRC, SFN + DSRC and SFN + SFA.
            DSRC          SFN + DSRC    SFN + SFA
ART (s)     1.21 ± 0.11   1.19 ± 0.95   0.32 ± 0.05

Su, Y.; Liu, Z.; Ban, X. Symmetric Face Normalization. Symmetry 2019, 11, 96. https://0-doi-org.brum.beds.ac.uk/10.3390/sym11010096
