Article

Perceptual Image Hashing Using Latent Low-Rank Representation and Uniform LBP

Hengfu Yang, Jianping Yin and Mingfang Jiang
1 Department of Information Science and Engineering, Hunan First Normal University, Changsha 410205, China
2 School of Computer Science and Network Security, Dongguan University of Technology, Dongguan 523808, China
* Authors to whom correspondence should be addressed.
Submission received: 22 January 2018 / Revised: 16 February 2018 / Accepted: 17 February 2018 / Published: 24 February 2018

Abstract

Robustness and discriminability are the two most important features of perceptual image hashing (PIH) schemes. In order to achieve a good balance between perceptual robustness and discriminability, a novel PIH algorithm is proposed by combining latent low-rank representation (LLRR) and rotation invariant uniform local binary patterns (RiuLBP). LLRR is first applied to the resized original image to obtain the principal feature matrix and the salient feature matrix, since it can automatically extract salient features from corrupted images. Following this, RiuLBP features are extracted from each non-overlapping block of the principal feature matrix and of the salient feature matrix, respectively. All features are concatenated and scrambled to generate the final binary hash code. Experimental results show that the proposed hashing algorithm is robust against many types of distortions and attacks, such as noise addition, low-pass filtering, rotation, scaling, and JPEG compression, and that it outperforms other local binary pattern (LBP) based image hashing schemes in terms of perceptual robustness and discriminability.

1. Introduction

With the rapid development of multimedia information processing technology and the growing popularity of the Internet, the dissemination of digital contents such as digital images, audio and video via the Internet has become more and more common. At the same time, however, the contents of digital data can easily be modified or forged without leaving any visible traces [1,2,3]. To verify the authenticity of digital images and to protect their intellectual property, perceptual image hashing (PIH) has emerged as an effective technology for image security and authentication and has attracted extensive attention [4,5]. A PIH function maps an input image to a fixed-size binary string called an image hash, based on the image's appearance to human eyes [6,7]. The hash values can be used to represent digital image contents; they should tolerate content-preserving distortions but reject malicious attacks that change image contents. Consequently, images with the same visual appearance should have similar hash values, while visually distinct images should have totally different hash values [8,9,10].
General PIH schemes consist of three steps: pre-processing, feature extraction and hash generation; in the past decades they have found extensive applications in many fields, such as image authentication, image retrieval, image recognition and digital watermarking [11,12,13,14,15,16,17]. One of the key steps in a PIH scheme is robust feature extraction, since a high-performance PIH scheme depends on suitable features. The local binary pattern (LBP), originally proposed by Ojala et al. [18], is an effective texture feature extraction method due to its rotation and scale invariance [19,20,21]. To achieve good robustness of image hashes, LBP has been exploited to extract suitable features in PIH schemes in recent years, and many LBP-based PIH schemes have been reported in the literature. Bai and Hatzinakos [22] proposed a biometric hashing method based on LBP; the biometric hash code generated from the LBP-based histogram sequence is robust to lighting changes, but its robustness against other content-preserving operations has not been demonstrated. Davarzani et al. [23] employed a center-symmetric local binary pattern (CSLBP) to extract image features from non-overlapping image blocks and to obtain hash values. This PIH scheme can distinguish non-malicious manipulations from malicious distortions, but it achieves a weak balance between robustness and discriminability. To increase the robustness, they improved the scheme by applying the singular value decomposition (SVD) before the feature extraction [24]. However, the improved PIH algorithm is not robust against geometrical distortions. Chen et al. [25] used block truncation coding and CSLBP to produce an image hash, but it does not have high robustness against noise addition. Qin et al. [26] first applied SVD to create a secondary image, and then employed block truncation coding (BTC) and CSLBP to generate a compact binary hash.
The results showed satisfactory robustness to common content-preserving manipulations, as well as good uniqueness, but no good robustness against large geometrical distortions. Patil and Sarode [27,28,29,30] designed several new PIH schemes using improved CSLBP methods. In these algorithms, original images are first divided into sub-blocks, and modified CSLBP approaches are used to extract 8-bin histograms as image features; finally, double-bit quantization is employed to generate a hash code for the original images. Experimental results proved that the proposed schemes are robust against content-preserving manipulations and sensitive to content changing and structural tampering, but these PIH algorithms have low robustness against serious geometrical distortions. Considering the good performance of the Noise Resistant LBP (NRLBP) in a noisy environment, Abbas et al. [31] presented a PIH scheme based on SVD and NRLBP, which uses the SVD transformation and NRLBP to obtain suitable features for the generation of perceptual image hash values. It enhances the robustness to content-preserving operations but does not obtain a good tradeoff between robustness and discriminability.
The above-mentioned LBP-based PIH schemes extract features directly from the original image and do not produce a robust secondary image with primary features, so it is hard for them to achieve a good balance between robustness and discriminability. To obtain different perceptual image hash values for visually different images, a novel PIH scheme is proposed in this paper by using latent low-rank representation (LLRR) and rotation invariant uniform local binary patterns (RiuLBP); LLRR is exploited to extract principal and salient features, since it is able to effectively extract salient features from corrupted data; following this, RiuLBP features extracted from the principal and salient components are used to generate the final hash code.
This paper is organized as follows. In Section 2, the principle of low-rank representation and the local binary pattern is introduced. Section 3 describes the proposed PIH scheme. The experiments and analysis are given in Section 4. Section 5 concludes the paper.

2. Low-Rank Representation and Local Binary Pattern

2.1. Latent Low-Rank Representation

The low-rank representation (LRR) method aims at finding the lowest-rank representation among all the candidates. When the observed data matrix is used as the dictionary A, recovering the low-rank representation from the given observation $X_O$ can be written as the following convex optimization problem [32,33]:
$$\min_{Z,E} \|Z\|_* + \lambda \|E\|_1, \quad \text{s.t.} \quad X_O = AZ + E \;\; (X_O = X_O Z + E), \tag{1}$$
where $\|\cdot\|_*$ denotes the nuclear norm of a matrix, i.e., the sum of the singular values of the matrix, and $\|\cdot\|_1$ is the $L_1$-norm characterizing the sparse noise E. $\lambda > 0$ is a regularization parameter balancing the influence of the sparse error term. Wang et al. [33] applied LRR to multi-view spectral clustering by separately imposing a low-rank constraint on each view and achieved multi-view agreement in an iterative fashion.
The assumption $A = X_O$ may be invalid when the data sampling is insufficient, so LRR may not represent the subspaces effectively, and the robustness of the recovery may be weakened. LLRR can be regarded as an enhanced version of LRR which constructs the dictionary A using both the observed data $X_O$ and the unobserved hidden data $X_H$; it is more accurate and more robust to noise than LRR for subspace representation [34,35]. To resolve the problem of insufficient sampling and to improve the robustness to noise corruption, LLRR is exploited to extract suitable image features during PIH generation. An approximate recovery can be achieved by analyzing the properties of the hidden effects as follows:
$$\min_{Z,E} \|Z\|_* + \lambda \|E\|_1 \quad \text{s.t.} \quad X_O = [X_O, X_H] Z + E. \tag{2}$$
The hidden effects recovery problem for corrupted data in Equation (2) can be solved by the following convex optimization problem:
$$\min_{Z,L,E} \|Z_{O|H}\|_* + \|L_{H|O}\|_* + \lambda \|E\|_1 \quad \text{s.t.} \quad X_O = X_O Z_{O|H} + L_{H|O} X_O + E, \tag{3}$$
where $Z_{O|H}$ and $L_{H|O}$ correspond to the principal component and the salient component, respectively. For the sake of simplicity, we replace $X_O$, $Z_{O|H}$ and $L_{H|O}$ with X, Z and L, respectively. Thus, the convex optimization problem in Equation (3) can be rewritten as:
$$\min_{Z,L,E} \|Z\|_* + \|L\|_* + \lambda \|E\|_1 \quad \text{s.t.} \quad X = XZ + LX + E, \tag{4}$$
where $X \in \mathbb{R}^{d \times n}$, $Z \in \mathbb{R}^{n \times n}$ and $L \in \mathbb{R}^{d \times d}$. The parameters d and n are the feature vector size and the number of features, respectively. This problem can be solved via the Augmented Lagrange Multiplier (ALM) method [36].
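The ALM iterations for Equation (4) rely on two closed-form proximal steps: singular value thresholding for the nuclear-norm terms and elementwise soft shrinkage for the L1 term. The NumPy sketch below shows only these two building blocks (a hypothetical illustration, not the paper's full ALM solver):

```python
import numpy as np

def svt(A, tau):
    """Singular value thresholding: proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def shrink(A, tau):
    """Elementwise soft shrinkage: proximal operator of tau * L1 norm."""
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 5)) @ rng.standard_normal((5, 20))  # rank-5 data matrix
Z = svt(X, 2.0)        # inside ALM, steps of this form update Z and L each iteration
E = shrink(X - Z, 0.1)  # and steps of this form update the sparse error E
```

Thresholding at a value above the largest singular value annihilates the matrix entirely, which is why the nuclear-norm penalty drives Z and L toward low rank.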

2.2. Local Binary Pattern

Many LBP operators and LBP feature extraction methods have been reported in the image recognition and image security fields, among which the RiuLBP operator is one of the most popular texture operators due to its rotation invariance and low dimension [37]. The basic LBP is a gray-scale invariant operator that transforms the neighborhood pixels into a set of binary codes by taking the center pixel as a threshold, and it is defined as follows:
$$LBP_{P,R} = \sum_{p=0}^{P-1} s(g_p - g_c)\, 2^p, \tag{5}$$
where R denotes the radius of the neighborhood, P denotes the number of sampling points, $g_c$ is the gray value of the center pixel, $g_p$ is the gray value of the p-th circularly symmetric neighbor, and $s(\cdot)$ is the thresholding function, with $s(x) = 1$ if $x \ge 0$ and $s(x) = 0$ otherwise.
When there are at most two bitwise 0/1 transitions, the pattern is called a uniform pattern. The number of uniform patterns is $P(P-1)+2$, which is less than the $2^P$ patterns of the basic LBP feature. A function $U(\Delta)$ is defined to return the number of spatial transitions (bitwise 0/1 transitions) in the pattern $\Delta$, and it can be written as:
$$U(LBP_{P,R}) = \sum_{p=0}^{P-1} \left| s(g_{\mathrm{Mod}(p+1,P)} - g_c) - s(g_p - g_c) \right|, \tag{6}$$
where the function M o d ( x , y ) returns the remainder after a number x is divided by a divisor y.
In order to achieve good discriminability and robustness of the perceptual image hash, the RiuLBP feature descriptor is utilized to extract stable image features. Instead of the ordered binary coding of the basic LBP, the RiuLBP code, denoted $LBP_{P,R}^{riu}$, is obtained by simply counting the ones in the basic LBP code of uniform patterns [38], as shown below:
$$LBP_{P,R}^{riu} = \begin{cases} \sum_{p=0}^{P-1} s(g_p - g_c) & \text{if } U(LBP_{P,R}) \le 2, \\ P+1 & \text{otherwise.} \end{cases} \tag{7}$$
From Equation (7), we can see that the RiuLBP feature only has P + 2 distinct patterns.
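Equations (5)-(7) can be illustrated for P = 8, R = 1 on a 3 × 3 neighborhood. The neighbor ordering below is one arbitrary circular ordering chosen for illustration, not necessarily the one used in the paper:

```python
import numpy as np

# One possible circular ordering of the 8 neighbors at radius R = 1.
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

def lbp_riu2(patch):
    """Rotation invariant uniform LBP code of the center pixel of a 3x3 patch."""
    gc = patch[1, 1]
    # s(g_p - g_c): 1 when the neighbor is at least as bright as the center.
    bits = [1 if patch[1 + dy, 1 + dx] >= gc else 0 for dy, dx in OFFSETS]
    P = len(bits)
    # U: number of circular 0/1 transitions (Equation (6)).
    U = sum(abs(bits[(p + 1) % P] - bits[p]) for p in range(P))
    # Uniform patterns (U <= 2) are coded by their number of ones; others get P + 1.
    return sum(bits) if U <= 2 else P + 1

flat = np.full((3, 3), 7)                            # constant patch: all bits 1, U = 0
edge = np.array([[0, 0, 9], [0, 5, 9], [0, 0, 9]])   # vertical edge: uniform, 3 ones
print(lbp_riu2(flat), lbp_riu2(edge))  # -> 8 3
```

Only the values 0, ..., P + 1 can occur, i.e., P + 2 = 10 distinct patterns, which is what makes the per-block histograms of Section 3 so compact.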

3. Proposed Perceptual Image Hashing Algorithm

In order to achieve a good balance between discriminability and robustness, a new PIH scheme (called LLRR-RiuLBP) is proposed in this paper by combining LLRR and RiuLBP features. In the proposed scheme, LLRR is first employed to obtain the principal and salient components of the original images, given the robustness of its salient feature extraction to corrupted data; following this, RiuLBP feature extraction is applied to the principal and salient components to generate the perceptual hash. The proposed PIH scheme consists of three main stages: pre-processing, feature extraction, and hash generation. The block diagram of the proposed image hashing scheme is shown in Figure 1, and the whole hash generation process is described as follows.
Step 1:
For color images, only the luminance component is considered because it contains significant information on the input images. An original color image I is first converted to a grayscale image Ig.
Step 2:
In order to produce a fixed-length hash code, bilinear interpolation is applied to normalize the grayscale image, and a resized image Ir is generated with an M × M size.
Step 3:
Following this, a pixel-wise adaptive Wiener filter is applied to the resized image Ir in order to reduce disturbances caused by image operations such as noise addition and lossy compression; a filtered image If is then generated.
Step 4:
LLRR is applied to the filtered image If in order to obtain the principal feature matrix Z, the salient feature matrix L and the error matrix E using Equation (4).
Step 5:
The principal feature matrix Z and the salient feature matrix L are divided respectively into non-overlapping sub-blocks with a b × b size. For each image sub-block, the normalized histogram of the RiuLBP codes is computed as follows. Consequently, two histograms $H_r$, $r \in \{Z, L\}$, are built according to the principal feature matrix Z and the salient feature matrix L, respectively.
$$H_r(t) = \frac{1}{b^2} \sum_{i=1}^{b} \sum_{j=1}^{b} f\!\left(LBP_{P,R}^{riu}(i,j),\, t\right), \quad t = 0, 1, \ldots, P+1, \;\; r \in \{Z, L\}, \tag{8}$$
where
$$f(x, y) = \begin{cases} 1 & \text{if } x = y, \\ 0 & \text{otherwise.} \end{cases} \tag{9}$$
Step 6:
To reduce feature redundancy, zero-mean normalization is applied to the histogram feature to produce a normalized histogram feature H ¯ r by using Equation (10):
$$\bar{H}_r = \frac{H_r - \mu}{\delta + \varepsilon}, \quad r \in \{Z, L\}, \tag{10}$$
where μ and δ are the mean and standard deviation of the feature set $H_r$, and where ε is a small constant used to avoid division by zero.
Step 7:
The normalized histogram $\bar{H}_r$ is a (P + 2)-bin histogram, and the features of all the blocks are concatenated in order to generate the final LLRR-RiuLBP feature.
$$H = [\bar{H}_1^Z, \ldots, \bar{H}_q^Z, \bar{H}_1^L, \ldots, \bar{H}_q^L], \quad q = (P+2) \times (M/b)^2. \tag{11}$$
Step 8:
Principal component analysis (PCA) is applied to the feature vector H in order to obtain an effective perceptual feature; the process can be written as follows:
$$H_{PCA} = [H_1, \ldots, H_m], \quad m < 2q, \tag{12}$$
where m denotes the feature dimension after the dimension reduction, and where $H_i$, $i = 1, \ldots, m$, are the principal components after the PCA feature reduction [39].
Step 9:
A binary sequence of perceptual features V is generated by mapping H P C A onto the binary bits.
$$V(i) = \begin{cases} 1 & \text{if } H_{PCA}(i) \ge 0.5, \\ 0 & \text{otherwise,} \end{cases} \quad i = 1, \ldots, m. \tag{13}$$
Step 10:
A secret key k is used to produce a pseudorandom sequence W by means of a chaotic logistic map [40].
$$W = \{ w(i) \mid w(i) \in \{0, 1\} \}, \quad i = 1, \ldots, m. \tag{14}$$
In order to ensure the security of the PIH scheme, the sequence W is used to scramble the sequence V via a bitwise exclusive-or (XOR) operation between the V and W sequences; the scrambled feature vector is the final image hash $H_f$.
$$H_f = \mathrm{XOR}(V, W). \tag{15}$$
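As a compact illustration of Steps 5 to 10, the sketch below hashes a single feature matrix. To stay short it applies RiuLBP to one matrix instead of the two LLRR components Z and L, skips the PCA reduction of Step 8, ignores block borders, and uses an arbitrary block size, neighbor ordering and logistic-map parameter, so it shows the flow of the scheme rather than the authors' exact implementation:

```python
import numpy as np

OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

def lbp_riu2(patch):
    gc = patch[1, 1]
    bits = [1 if patch[1 + dy, 1 + dx] >= gc else 0 for dy, dx in OFFSETS]
    U = sum(abs(bits[(p + 1) % 8] - bits[p]) for p in range(8))
    return sum(bits) if U <= 2 else 9          # P + 1 = 9 marks non-uniform patterns

def block_histogram(block):
    """Normalized (P + 2)-bin RiuLBP histogram of one block (cf. Equation (8))."""
    b = block.shape[0]
    codes = [lbp_riu2(block[i - 1:i + 2, j - 1:j + 2])
             for i in range(1, b - 1) for j in range(1, b - 1)]   # borders skipped
    hist = np.bincount(codes, minlength=10).astype(float)
    return hist / hist.sum()

def hash_image(img, b=8, key=0.3141):
    """Toy hash: blockwise histograms -> z-score -> binarize -> XOR keystream."""
    M = img.shape[0]
    feats = np.concatenate([block_histogram(img[r:r + b, c:c + b])
                            for r in range(0, M, b) for c in range(0, M, b)])
    feats = (feats - feats.mean()) / (feats.std() + 1e-12)   # cf. Equation (10)
    bits = (feats >= 0.5).astype(np.uint8)                   # cf. Equation (13)
    w, x = np.empty_like(bits), key   # keystream from a chaotic logistic map
    for i in range(len(bits)):
        x = 3.99 * x * (1.0 - x)
        w[i] = 1 if x >= 0.5 else 0
    return bits ^ w                                          # cf. Equation (15)

rng = np.random.default_rng(1)
img = rng.integers(0, 256, (32, 32))
h = hash_image(img)   # 16 blocks x 10 bins = 160 hash bits
```

Changing the secret key changes the keystream and therefore the final hash, while the underlying RiuLBP features, and hence the unscrambled bits, stay the same.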

4. Experiments and Analysis

To test the performance of the proposed PIH scheme, extensive experiments are conducted on many standard images with a 256 × 256 size, obtained from the CVG-UGR (Computer Vision Group, University of Granada) image database [41]. The normalized Hamming distance is adopted in our experiments in order to measure the similarity between two hashes.
$$Dis(H_f, H_f') = \frac{1}{m} \sum_{i=1}^{m} \left| H_f(i) - H_f'(i) \right|, \tag{16}$$
where $H_f$ and $H_f'$ are two hash sequences, and where m is the hash length.
In the experiments, the parameters of the resized image size M, sub-block size b, LBP radius R, LBP pixel number P of the neighbor, and final perceptual hash length m are set to 256, 4, 1, 8 and 500, respectively.
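The normalized Hamming distance of Equation (16) has a direct one-line implementation; this small sketch just restates the formula:

```python
import numpy as np

def hash_distance(h1, h2):
    """Normalized Hamming distance between two equal-length binary hashes, Equation (16)."""
    h1, h2 = np.asarray(h1), np.asarray(h2)
    assert h1.shape == h2.shape
    return np.abs(h1 - h2).mean()

a = np.array([1, 0, 1, 1, 0])
b = np.array([1, 1, 1, 0, 0])
print(hash_distance(a, b))   # 2 differing bits out of 5 -> 0.4
```

Identical hashes give 0, complementary hashes give 1, and the thresholds τ of Section 4.2 are applied to exactly this quantity.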
All the experiments are run on a laptop with an Intel Core i-3630QM 2.66 GHz CPU (Intel Corp., Santa Clara, CA, USA) and 8 GB of memory, running MATLAB 2016a (MathWorks Inc., Natick, MA, USA). The average time cost is computed on test images with a 256 × 256 size, and the average running times of the different PIH methods [13,24,26] are listed in Table 1. Our method and Liu et al.'s method [13] incur a higher time cost than the other two methods [24,26] because of the use of the LRR operations.

4.1. Perceptual Robustness

In order to evaluate the perceptual robustness of the proposed PIH scheme (LLRR-RiuLBP), we conducted robustness experiments under common content-preserving attacks, such as JPEG compression, Gaussian filtering, median filtering, noise addition, scaling and rotation (as listed in Table 2), based on the CVG-UGR image database [41]. Four of the standard test images are shown in Figure 2. The robustness comparison with previous image hashing schemes, namely Liu et al.'s [13], Davarzani et al.'s [24] and Qin et al.'s [26] schemes, is illustrated in Figure 3 in terms of the normalized Hamming distance. Note that each average normalized Hamming distance in Figure 3 is calculated over all the hash pairs of the test images and the corresponding attacked images.
It can be seen that the average normalized Hamming distance of the proposed scheme (LLRR-RiuLBP) is less than that of Liu et al.'s and Qin et al.'s methods. That is to say, our PIH scheme is more robust to content-preserving attacks than the existing schemes [13,24,26]. This is partly because the LLRR adopted in the proposed scheme can effectively extract principal features from corrupted data.

4.2. Discriminability

To evaluate the anti-collision performance of the image hashing, 696 hash codes are generated via the proposed PIH scheme, based on 696 test images from the CVG-UGR image database [41]; following this, 378,400 normalized Hamming distances are calculated between the hash pairs of different images. The histogram of the normalized Hamming distances is shown in Figure 4. One finds that the distribution of the normalized Hamming distance approximately obeys a normal distribution with a mean of μ = 0.4825 and a standard deviation of δ = 0.0451. Consequently, given a threshold τ < μ, the collision probability $P_c$ can be computed as follows:
$$P_c(\tau) = \frac{1}{\sqrt{2\pi}\,\delta} \int_{-\infty}^{\tau} e^{-\frac{(x-\mu)^2}{2\delta^2}}\, dx = \frac{1}{2}\,\mathrm{erfc}\!\left(\frac{\mu - \tau}{\sqrt{2}\,\delta}\right), \tag{17}$$
where $\mathrm{erfc}(\cdot)$ is the complementary error function. The collision probabilities of the proposed PIH scheme for different thresholds τ are shown in Table 3. From this table, it can be concluded that the collision probability decreases with a decreasing threshold τ. Additionally, the hashes generated by the proposed PIH scheme have better discriminability than some of the existing image hashing schemes [13,24,26].
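Under the fitted normal model, Equation (17) is simply the lower tail of N(μ, δ²) and can be evaluated with the standard complementary error function. The sketch below uses the fitted μ = 0.4825 and δ = 0.0451 from Figure 4; values computed this way follow the trend of Table 3 but need not reproduce its entries to the digit:

```python
import math

MU, DELTA = 0.4825, 0.0451   # fitted mean and standard deviation (Figure 4)

def collision_probability(tau):
    """P(distance < tau) for a N(MU, DELTA^2) distance model, Equation (17)."""
    return 0.5 * math.erfc((MU - tau) / (math.sqrt(2.0) * DELTA))

# Collision probability shrinks rapidly as the threshold moves below the mean.
for tau in (0.26, 0.22, 0.18):
    print(tau, collision_probability(tau))
```

At τ = μ the expression reduces to 0.5 · erfc(0) = 0.5, i.e., half of all random distances fall below the mean, as expected.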

4.3. Security

In our scheme, the image hash depends on the secret key, and different secret keys produce distinct hashes. Figure 5 tests the security of the proposed PIH scheme using a sequence of 1001 average normalized Hamming distances between hash pairs generated with one correct secret key and with 1000 wrong secret keys. One can observe that only the 500th normalized Hamming distance (the one obtained with the correct secret key) lies in the vicinity of 0; it is therefore very difficult for an unauthorized user to obtain the same hash without the correct secret key, and the proposed PIH scheme is key-dependently secure.

5. Conclusions

In this paper, we propose an effective PIH scheme based on LLRR and rotation invariant uniform LBP. LLRR is first employed to obtain a principal feature matrix and a salient feature matrix. Following this, rotation invariant uniform LBP is used to extract robust features for perceptual hash generation. The ability of LLRR to extract salient features, along with the effective texture feature extraction ability of LBP, is helpful to both robustness and discriminability. Experiments show that our proposed perceptual hashing scheme is robust to content-preserving attacks such as JPEG compression, low-pass filtering, noise addition, slight rotation and scaling, and that it has better robustness and discriminability than existing hashing schemes. In addition, the hashing scheme has high key-dependent security.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grant No. 61672528 and No. 61073191, the Social Science Fund of Hunan Province under Grant No. 16YBA102, the National Social Science Fund of China under Grant No. 17BTQ084, and the Research Fund of Hunan Provincial Key Laboratory of informationization technology for basic education under Grant No. 2015TP1017.

Author Contributions

Hengfu Yang conceived and designed the experiments; Mingfang Jiang performed the experiments; Jianping Yin analyzed the data; Hengfu Yang wrote the paper. All authors have read and approved the final version of the paper.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

References

  1. Yin, Z.; Niu, X.; Zhou, Z.; Tang, J.; Luo, B. Improved Reversible Image Authentication Scheme. Cogn. Comput. 2016, 8, 890–899. [Google Scholar] [CrossRef]
  2. Ulutas, G.; Ustubioglu, A.; Ustubioglu, B.; Nabiyev, V.; Ulutas, M. Medical Image Tamper Detection Based on Passive Image Authentication. J. Digit. Imaging 2017, 6, 695–709. [Google Scholar] [CrossRef] [PubMed]
  3. Wang, Y.; Lin, X.; Wu, L.; Zhang, W.; Zhang, Q. LBMCH: Learning Bridging Mapping for Cross-modal Hashing. In Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval, Santiago, Chile, 9–13 August 2015; pp. 999–1002. [Google Scholar]
  4. Swaminathan, A.; Mao, Y.; Wu, M. Robust and secure image hashing. IEEE Trans. Inf. Forensics Secur. 2006, 1, 215–230. [Google Scholar] [CrossRef]
  5. Das, T.K.; Bhunre, P.K. A Secure Image Hashing Technique for Forgery Detection. In Distributed Computing and Internet Technology; Lecture Notes in Computer Science; Natarajan, R., Barua, G., Patra, M.R., Eds.; Springer: Cham, Switzerland, 2015; Volume 8956, pp. 335–338. [Google Scholar]
  6. Lu, C.S.; Hsu, C.Y. Geometric distortion-resilient image hashing scheme and its applications on copy detection and authentication. Multimedia Syst. 2005, 11, 159–173. [Google Scholar] [CrossRef]
  7. Venkatesan, R.; Koon, S.M.; Jakubowski, M.H.; Moulin, P. Robust image hashing. In Proceedings of the 2000 International Conference on Image Processing, Vancouver, BC, Canada, 10–13 September 2000; pp. 664–666. [Google Scholar]
  8. Kozat, S.S.; Venkatesan, R.; Mihcak, M.K. Robust perceptual image hashing via matrix invariants. In Proceedings of the 2004 International Conference on Image Processing, Singapore, 24–27 October 2004; pp. 3443–3446. [Google Scholar]
  9. Monga, V.; Evans, B.L. Perceptual Image Hashing Via Feature Points: Performance Evaluation and Tradeoffs. IEEE Trans. Image Process. 2006, 15, 3452–3465. [Google Scholar] [CrossRef] [PubMed]
  10. Qin, C.; Chen, X.; Dong, J.; Zhang, X. Perceptual Image Hashing with Selective Sampling for Salient Structure Features. Displays 2016, 45, 26–37. [Google Scholar] [CrossRef]
  11. Monga, V.; Banerjee, A.; Evans, B.L. A clustering based approach to perceptual image hashing. IEEE Trans. Inf. Forensics Secur. 2006, 1, 68–79. [Google Scholar] [CrossRef]
  12. Wu, H.; Wu, W.; Zhang, J.; Peng, J. Research on image retrieval algorithm based on LBP and LSH. In Proceedings of the International Conference on Green Energy and Sustainable Development, Phuket, Thailand, 21–22 April 2017; AIP Conf. Proc. 1864. pp. 020038-1–020038-4. [Google Scholar]
  13. Liu, H.; Xiao, D.; Xiao, Y.; Zhang, Y. Robust image hashing with tampering recovery capability via low-rank and sparse representation. Multimedia Tools Appl. 2015, 75, 7681–7696. [Google Scholar] [CrossRef]
  14. Karsh, R.K.; Laskar, R.H.; Richhariya, B.B. Robust image hashing using ring partition-PGNMF and local features. Springerplus 2016, 5, 1995. [Google Scholar] [CrossRef] [PubMed]
  15. Ouyang, J.; Coatrieux, G.; Shu, H. Robust hashing for image authentication using quaternion discrete Fourier transform and log-polar transform. Digit. Signal Process. 2015, 41, 98–109. [Google Scholar] [CrossRef] [Green Version]
  16. Wang, L.; Jiang, X.; Lian, S.; Hu, D.; Ye, D. Image authentication based on perceptual hash using Gabor filters. Soft Comput. 2011, 15, 493–504. [Google Scholar] [CrossRef]
  17. Zeng, Y.; Li, J. The Medical Image Watermarking Algorithm with Encryption by Perceptual Hashing and Double Chaos System. In Proceedings of the 2013 Fifth International Conference on Multimedia Information Networking and Security, Beijing, China, 1–3 November 2013; IEEE Computer Society: Washington, DC, USA, 2013; pp. 493–496. [Google Scholar]
  18. Ojala, T.; Pietikainen, M.; Maenpaa, T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 971–987. [Google Scholar] [CrossRef]
  19. Ahonen, T.; Matas, J.; He, C.; Pietikäinen, M. Image Analysis; Lecture Notes in Computer Science; Salberg, A.B., Hardeberg, J.Y., Jenssen, R., Eds.; Springer: Berlin/Heidelberg, Germany, 2009; Volume 5575, pp. 61–70. [Google Scholar]
  20. Zhao, Y.; Jia, W.; Hu, R.X.; Min, H. Completed robust local binary pattern for texture classification. Neurocomputing 2013, 106, 68–76. [Google Scholar] [CrossRef]
  21. Davarzani, R.; Mozaffari, S.; Yaghmaie, K. Scale and rotation-invariant texture description with improved local binary pattern features. Signal Process. 2015, 111, 274–293. [Google Scholar] [CrossRef]
  22. Bai, Z.; Hatzinakos, D. LBP-based biometric hashing scheme for human authentication. In Proceedings of the 2010 11th International Conference on Control Automation Robotics & Vision, Singapore, 7–10 December 2010; pp. 1842–1847. [Google Scholar]
  23. Davarzani, R.; Mozaffari, S.; Yaghmaie, K. Image authentication using LBP-based perceptual image hashing. J. AI Data Mining 2015, 3, 21–30. [Google Scholar]
  24. Davarzani, R.; Mozaffari, S.; Yaghmaie, K. Perceptual image hashing using center-symmetric local binary patterns. Multimedia Tools Appl. 2016, 75, 4639–4667. [Google Scholar] [CrossRef]
  25. Chen, X.; Qin, C.; Ji, P. Perceptual image hashing using block truncation coding and local binary pattern. In Proceedings of the 2015 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA), Hong Kong, China, 16–19 December 2015; pp. 856–859. [Google Scholar]
  26. Qin, C.; Chen, X.; Ye, D.; Wang, J.; Sun, X. A novel image hashing scheme with perceptual robustness using block truncation coding. Inf. Sci. 2016, 361, 84–99. [Google Scholar] [CrossRef]
  27. Patil, V.; Sarode, T. Image hashing by SDQ-CSLBP. In Proceedings of the 2016 International Conference on Advances in Computing, Communications and Informatics (ICACCI), Jaipur, India, 21–24 September 2016; pp. 2057–2063. [Google Scholar]
  28. Patil, V.; Sarode, T. Image hashing by LoG-QCSLBP. In Proceedings of the 2nd International Conference on Communication and Information Processing, Singapore, 26–29 November 2016; ACM: New York, NY, USA, 2016; pp. 124–128. [Google Scholar]
  29. Patil, V.; Sarode, T. Image hashing using AQ-CSLBP with double bit quantization. In Proceedings of the 2016 International Conference on Optoelectronics and Image Processing (ICOIP), Warsaw, Poland, 10–12 June 2016; pp. 30–34. [Google Scholar]
  30. Patil, V.; Sarode, T. Image hashing by CCQ-CSLBP. In Proceedings of the 2016 IEEE International WIE Conference on Electrical and Computer Engineering (WIECON-ECE), Pune, India, 19–21 December 2016; pp. 73–78. [Google Scholar]
  31. Abbas, S.Q.; Ahmed, F.; Živić, N.; Ur-Rehman, O. Perceptual image hashing using SVD based Noise Resistant Local Binary Pattern. In Proceedings of the 2016 8th International Congress on Ultra Modern Telecommunications and Control Systems and Workshops (ICUMT), Lisbon, Portugal, 18–20 October 2016; pp. 401–407. [Google Scholar]
  32. Chen, J.; Yang, J. Robust subspace segmentation via low-rank representation. IEEE Trans. Cybern. 2014, 44, 1432–1445. [Google Scholar] [CrossRef] [PubMed]
  33. Wang, Y.; Zhang, W.; Wu, L.; Lin, X.; Fang, M.; Pan, S. Iterative views agreement: An iterative low-rank based structured optimization method to multi-view spectral clustering. Presented at the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI-16), New York, NY, USA, 9–15 July 2016; pp. 2153–2159. [Google Scholar]
  34. Liu, G.; Yan, S. Latent Low-Rank Representation for subspace segmentation and feature extraction. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 1615–1622. [Google Scholar]
  35. Li, P.; Bu, J.; Yu, J.; Chen, C. Towards robust subspace recovery via sparsity-constrained latent low-rank representation. J. Vis. Commun. Image Represent. 2016, 37, 46–52. [Google Scholar] [CrossRef]
  36. Liu, G.; Lin, Z.; Yan, S.; Sun, J.; Yu, Y.; Ma, Y. Robust Recovery of Subspace Structures by Low-Rank Representation. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 171–184. [Google Scholar] [CrossRef] [PubMed]
  37. Fang, Y.; Luo, J.; Lou, C. Fusion of Multi-directional Rotation Invariant Uniform LBP Features for Face Recognition. In Proceedings of the 2009 Third International Symposium on Intelligent Information Technology Application, Shanghai, China, 21–22 November 2009; pp. 332–335. [Google Scholar]
  38. Xia, S.; Chen, P.; Zhang, J.; Li, X.; Wang, B. Utilization of rotation-invariant uniform LBP histogram distribution and statistics of connected regions in automatic image annotation based on multi-label learning. Neurocomputing 2017, 228, 11–18. [Google Scholar] [CrossRef]
  39. Ma, Z.; Li, Q.; Li, H.; Ma, Z.; Li, Z. Image representation based PCA feature for image classification. In Proceedings of the 2017 IEEE International Conference on Mechatronics and Automation (ICMA), Takamatsu, Japan, 6–9 August 2017; pp. 1121–1125. [Google Scholar]
  40. Jain, A.; Rajpal, N. A robust image encryption algorithm resistant to attacks using DNA and chaotic logistic maps. Multimedia Tools Appl. 2016, 75, 5455–5472. [Google Scholar] [CrossRef]
  41. CVG-UGR Image Database. Available online: http://decsai.ugr.es/cvg/dbimagenes/ (accessed on 10 January 2017).
Figure 1. The perceptual image hashing (PIH) scheme, based on latent low rank representation (LLRR) and rotation invariant uniform local binary patterns (RiuLBP).
Figure 2. Four of the test images from Computer Vision Group, University of Granada (CVG-UGR) image database [41].
Figure 3. The robustness comparison in terms of the average normalized Hamming distance. (a) JPEG compression; (b) Gaussian filtering; (c) median filtering; (d) Salt & pepper noise; (e) Scaling, and (f) Rotation.
Figure 4. The distribution of 378,400 normalized Hamming distances.
Figure 5. The normalized Hamming distances between hash pairs with the correct secret key and 1000 wrong secret keys.
Table 1. The average running times of different PIH methods.
Methods | Average Running Time
Liu et al.'s scheme [13] | 2.36
Davarzani et al.'s scheme [24] | 1.49
Qin et al.'s scheme [26] | 1.62
Proposed scheme | 2.58
Table 2. The content-preserving attacks for robustness testing.
Attacks | Parameters
JPEG compression | Quality factor [10, 90]
Gaussian filtering | Standard deviation 0.4, 0.6, …, 1.8
Median filtering | Filter size [3, 15]
Salt & pepper noise | Noise density [5%, 15%]
Scaling | Scaling ratio [0.2, 2.0]
Rotation | Rotation angle [0, 5.0]
Table 3. A comparison of collision probability of different schemes under various thresholds.
Threshold τ | Liu et al.'s Scheme [13] | Davarzani et al.'s Scheme [24] | Qin et al.'s Scheme [26] | Proposed Scheme
0.26 | 0.3694 | 2.1125 × 10^−6 | 1.5412 × 10^−5 | 4.0388 × 10^−6
0.24 | 0.1351 | 1.1043 × 10^−6 | 2.2805 × 10^−6 | 3.7881 × 10^−8
0.22 | 0.0306 | 5.6748 × 10^−7 | 2.2853 × 10^−7 | 2.9354 × 10^−9
0.20 | 0.0041 | 2.8665 × 10^−7 | 3.0222 × 10^−8 | 1.8778 × 10^−10
0.18 | 3.2451 × 10^−4 | 1.4233 × 10^−7 | 2.7002 × 10^−9 | 9.9118 × 10^−12
0.16 | 1.4608 × 10^−5 | 6.9462 × 10^−8 | 2.0399 × 10^−10 | 4.3144 × 10^−13
0.14 | 3.7352 × 10^−7 | 3.3320 × 10^−8 | 1.2995 × 10^−11 | 1.5481 × 10^−14
0.12 | 5.3909 × 10^−9 | 1.5710 × 10^−8 | 6.9822 × 10^−13 | 4.5771 × 10^−16
0.10 | 4.3729 × 10^−11 | 7.2801 × 10^−8 | 3.1634 × 10^−14 | 1.1149 × 10^−17

Share and Cite

MDPI and ACS Style

Yang, H.; Yin, J.; Jiang, M. Perceptual Image Hashing Using Latent Low-Rank Representation and Uniform LBP. Appl. Sci. 2018, 8, 317. https://0-doi-org.brum.beds.ac.uk/10.3390/app8020317


