Article

Visual Saliency Model-Based Image Watermarking with Laplacian Distribution

1 School of Digital Media & Creative Design, Sichuan University of Media and Communications, Chengdu 611745, China
2 School of Mathematics and Computer Science, ShangRao Normal University, Shangrao 334001, China
3 China Mobile Group Design Institute Co., Ltd. Sichuan Branch, Chengdu 610045, China
* Author to whom correspondence should be addressed.
Submission received: 23 August 2018 / Revised: 10 September 2018 / Accepted: 15 September 2018 / Published: 19 September 2018
(This article belongs to the Section Information Processes)

Abstract:
To improve the invisibility and robustness of multiplicative watermarking, an adaptive image watermarking algorithm is proposed based on a visual saliency model and the Laplacian distribution in the wavelet domain. The algorithm designs an adaptive multiplicative watermark strength factor by exploiting the energy aggregation of the high-frequency wavelet sub-bands, texture masking and visual saliency. High-energy image blocks are then selected as the watermark embedding space to preserve the imperceptibility of the watermark. For watermark detection, the wavelet coefficients are modeled with the Laplacian distribution, and a blind watermark detection approach is derived from the maximum likelihood scheme. Finally, the performance of the proposed algorithm is analyzed through simulation and comparison. Experimental results show that the proposed algorithm is robust against additive white Gaussian noise, JPEG compression, median filtering, scaling, rotation and other attacks.

1. Introduction

Video, images and audio can be easily accessed and distributed over the Internet thanks to the continuous development of networks and related information technologies. However, illegal copying or tampering causes economic losses and copyright disputes when a digital multimedia product that should be protected is distributed without authorization. Therefore, protecting the copyright of multimedia products is an important issue in the field of information hiding. As an effective copyright protection technology, digital watermarking has been widely investigated [1,2,3,4,5,6]. It identifies the ownership of original works by embedding data (such as a logo or other meaningful information) into the original video, image or audio product.
Generally speaking, digital image watermarking methods fall mainly into quantization-based and multiplicative methods. A quantization-based scheme quantizes the transform coefficients of the image with a suitably designed quantizer; during watermark detection, the watermark data are extracted from the quantization interval. Quantization-based watermarking offers high capacity and blind detection, so it has been widely studied in recent years [7,8,9,10,11,12]. A classical example is the quantization index modulation method proposed by Chen et al. [7], which uses dirty-paper coding theory and side information to embed the watermark data into the host signal. However, that algorithm is sensitive to amplitude scaling because it uses a fixed quantization step size. To solve this problem, Li and Cox proposed an adaptive quantization watermarking method [8] that designs an adaptive quantization step based on Watson's perceptual model; their experiments show strong robustness against scaling attacks.
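The quantization mechanism described above can be illustrated with a toy scalar quantization index modulation (QIM) sketch. This is a simplified illustration, not the exact scheme of [7]: the two dithered lattices, the step size `delta` and the function names are ours.

```python
import numpy as np

def qim_embed(x, bit, delta):
    """Embed one bit into scalar x by quantizing onto one of two
    lattices (step delta) offset from each other by delta/2."""
    offset = (delta / 2) * bit
    return delta * np.round((x - offset) / delta) + offset

def qim_extract(y, delta):
    """Decode by picking the lattice whose nearest point is closer to y."""
    d0 = np.abs(y - qim_embed(y, 0, delta))
    d1 = np.abs(y - qim_embed(y, 1, delta))
    return int(d1 < d0)
```

Note how a valumetric scaling of `y` moves it off both lattices, which is exactly the fixed-step sensitivity discussed in the text.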
Inspired by the idea of μ-law compression in speech communication, Kalantari et al. [9] proposed a logarithmic quantization watermarking method that effectively improves the robustness of the watermark but remains very sensitive to gain attacks. To address this problem, a quantization watermarking method based on the Lp norm was developed in [10]; it is robust to gain attacks owing to its division function. Wan et al. [11] proposed an adaptive spread-transform dither modulation watermarking method based on a perceptual visual model, exploiting a just-noticeable-distortion model in the discrete cosine transform (DCT) domain; its major merit is insensitivity to the changes caused by watermark encoding and attacks. In addition, to improve the robustness of quantization-based watermarking against geometric attacks, Liu et al. [12] proposed a quantization watermarking method based on the L1 norm in the dual-tree complex wavelet transform domain.
Unlike quantization-based watermarking, multiplicative watermarking can generally be regarded as a communication system based on the spread-spectrum mechanism. Cox et al. proposed a transform-based spread-spectrum watermarking method [13] as early as 1997, and various improved multiplicative methods have since been developed. In [14], an image watermarking method was presented based on a wavelet visual model; to improve the imperceptibility of the watermark, it combines the frequency sensitivity, luminance sensitivity and texture complexity of the image. However, the method remains sensitive to geometric attacks such as cropping and rotation. The work in [15,16] focused on detection methods for multiplicative watermarking based on the alpha-stable and inverse Gaussian distribution models.
In recent years, Akhaee et al. have proposed two novel multiplicative methods [17,18] based on a scaling scheme. Building on the entropy masking model [19], the method in [17] chooses image sub-blocks with high information entropy as the watermark embedding area and establishes a multi-objective optimization strategy to find the optimal value of the watermark strength factor. It is robust against attacks such as additive noise and JPEG compression. However, the entropy of an image sub-block is not the same before and after watermark embedding, which makes it difficult to extract the watermark data correctly and tends to reduce robustness against synchronization attacks. Furthermore, the multi-objective optimization model has high computational complexity. Subsequently, Akhaee et al. [18] proposed a multiplicative watermarking method based on the contourlet transform; it also uses high-entropy sub-blocks as the embedding space, the difference from [17] being the use of the contourlet transform. Like [17], the method in [18] has low computational efficiency. Yadav et al. [20] proposed a multiplicative watermarking method with a dynamic watermark strength factor whose adaptability derives mainly from the variance information of the image. The method in [20] is computationally efficient, but it considers neither the texture masking effect nor a saliency model of the image.
Many image watermarking methods have been built on visual saliency models. Khalilian et al. [21] proposed a video watermark decoding strategy based on principal component analysis, embedding the watermark adaptively into the LL sub-band of the image according to visual saliency. Tian et al. [22] proposed a video watermarking algorithm using visual saliency and a secret sharing strategy. Agarwal et al. [23] proposed a visible watermarking method that embeds a binary logo watermark within the just-noticeable distortion of image regions, locating the important portions of the image with a visual saliency strategy. Besides this, Castiglione et al. [24] proposed a fragile reversible watermarking method to protect the authenticity and integrity of functional magnetic resonance imaging (fMRI) data, and integrated their scheme into a commercial off-the-shelf fMRI system. The work in [25] used digital watermarking to improve the security of images with approaches in the Hadamard transform domain, and also discussed computational intelligence strategies for finding the optimal scaling and embedding parameters.
In this paper, we develop an adaptive watermark strength factor based on the energy aggregation of wavelet coefficients and the human visual perception model; specifically, we make full use of the texture masking effect and the visual saliency characteristics of human vision. The main motivation behind this design is to improve the imperceptibility and robustness of watermarking. Accordingly, the region of wavelet coefficients with high image energy is used as the watermark embedding space. Moreover, for watermark detection, we describe the non-Gaussian characteristics of the wavelet coefficients with the Laplacian distribution. The rest of this paper is organized as follows. Section 2 presents the proposed watermark embedding method. Section 3 introduces the watermark detection. Section 4 provides the experimental results and discussion. Finally, Section 5 concludes this study.

2. Proposed Watermark Embedding Algorithm

In accordance with the human visual system (HVS), strongly-textured areas of an image have a high just-noticeable difference (JND) threshold [26], so these regions can hide more distortion and allow a higher value for the watermark strength factor. Inspired by the visual attention model [26,27,28], we select high-energy image blocks as the watermark embedding space in this study. The proposed watermarking method consists of two stages: embedding and detection.

2.1. Watermark Embedding

As shown in Figure 1, which gives the block diagram of the proposed embedding method, the watermark is embedded through the following steps.
Step 1: Segment the original image into non-overlapping sub-blocks of equal size and calculate the energy of each sub-block. Then, sort the sub-blocks by energy in descending order and select the high-energy sub-blocks as the watermark embedding area. The energy of each image sub-block is calculated as:
E = \sum_{m=1}^{L} \sum_{n=1}^{L} I(m,n)^2, \quad (1)
where L × L is the size of each sub-block, I denotes an image sub-block, m and n index its rows and columns, I(m,n) is the pixel at position (m,n) and E is the energy of the sub-block.
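Step 1 can be sketched as follows; this is an illustrative Python sketch (the function name and the default block size are our own choices, not from the paper):

```python
import numpy as np

def block_energies(image, L=32):
    """Split an image into non-overlapping L x L sub-blocks and return
    ((row, col), E) pairs sorted by descending energy, where
    E = sum of squared pixel values (Equation (1))."""
    h, w = image.shape
    energies = {}
    for r in range(0, h - L + 1, L):
        for c in range(0, w - L + 1, L):
            block = image[r:r + L, c:c + L].astype(np.float64)
            energies[(r, c)] = float(np.sum(block ** 2))
    # The top-K blocks of this ranking become the embedding area.
    return sorted(energies.items(), key=lambda kv: -kv[1])
```

For a 512 × 512 image with L = 32, this yields 256 blocks, of which the 128 most energetic are kept (Section 4).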
Step 2: Apply the wavelet transform to decompose each selected image sub-block, extract the low-frequency sub-band coefficients and arrange them into a one-dimensional vector x = [x_1, x_2, \ldots, x_k]^T.
Step 3: Embed the watermark information into the host image as follows.
y = x \ast (1 + \mu (2b - 1) w), \quad (2)
where b ∈ {0, 1} is the binary watermark bit; w = [w_1, w_2, \ldots, w_k]^T is the pseudo-random sequence; y = [y_1, y_2, \ldots, y_k]^T is the watermarked vector of the low-frequency wavelet coefficients of the sub-band image; x = [x_1, x_2, \ldots, x_k]^T is the corresponding host vector; \ast denotes element-wise vector multiplication in Equation (2); and μ is the adaptive watermark strength factor, whose calculation is introduced in Section 2.2. The value of k is L^2 / (2^s \times 2^s), where s denotes the scale of the wavelet decomposition.
Step 4: The inverse wavelet transform is used for each sub-block to obtain the watermarked image sub-block.
Step 5: Repeat Steps 2–4 to get all the watermarked sub-blocks and then combine with the non-watermarked sub-blocks to obtain the whole watermarked image.
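Steps 2–4 can be sketched as follows. To keep the sketch self-contained, a one-level Haar transform stands in for the paper's "bior4.4" filter (which would require a wavelet library such as PyWavelets); the multiplicative rule of Equation (2) is applied unchanged to the low-frequency coefficients.

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2-D Haar transform (illustrative stand-in for bior4.4).
    Returns (LL, HL, LH, HH)."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    return ((a + b + c + d) / 2, (a - b + c - d) / 2,
            (a + b - c - d) / 2, (a - b - c + d) / 2)

def haar_idwt2(LL, HL, LH, HH):
    """Exact inverse of haar_dwt2."""
    n = LL.shape[0] * 2
    x = np.empty((n, n))
    x[0::2, 0::2] = (LL + HL + LH + HH) / 2
    x[0::2, 1::2] = (LL - HL + LH - HH) / 2
    x[1::2, 0::2] = (LL + HL - LH - HH) / 2
    x[1::2, 1::2] = (LL - HL - LH + HH) / 2
    return x

def embed_block(block, bit, mu, w):
    """Steps 2-4 for one sub-block: transform, multiply the low-frequency
    coefficients by (1 + mu*(2b-1)*w) as in Equation (2), inverse transform.
    `w` is the +/-1 pseudo-random sequence, `mu` the strength factor."""
    LL, HL, LH, HH = haar_dwt2(np.asarray(block, dtype=np.float64))
    x = LL.ravel()
    y = x * (1.0 + mu * (2 * bit - 1) * w)
    return haar_idwt2(y.reshape(LL.shape), HL, LH, HH)
```

With μ = 0 the sub-block is reconstructed exactly, which makes the distortion introduced by embedding directly attributable to the multiplicative term.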

2.2. Calculation of the Adaptive Embedding Strength Factor

According to the just-noticeable difference model [26], regions with rich texture can tolerate more distortion; that is, they can increase the watermark embedding capacity. Based on this characteristic, the average energy of the wavelet coefficients of the high-frequency sub-bands of the i-th image sub-block is calculated as:
E_{HF,i} = \frac{1}{3}\left( E_{HL} + E_{LH} + E_{HH} \right), \quad i = 1, 2, \ldots, N, \quad (3)
where HL, LH and HH denote the high-frequency horizontal, vertical and diagonal detail sub-band images, respectively. The energies of these three sub-band images are computed as:
E_{HL} = \sum_{m=1}^{L/2} \sum_{n=1}^{L/2} HL(m,n)^2, \quad E_{LH} = \sum_{m=1}^{L/2} \sum_{n=1}^{L/2} LH(m,n)^2, \quad E_{HH} = \sum_{m=1}^{L/2} \sum_{n=1}^{L/2} HH(m,n)^2, \quad (4)
where HL(m,n), LH(m,n) and HH(m,n) are the wavelet coefficients of the high-frequency horizontal, vertical and diagonal detail sub-band images at position (m,n), respectively. The average energy of the high-frequency sub-band images over the N image blocks is:
E_{HF} = \frac{1}{N} \sum_{i=1}^{N} E_{HF,i}. \quad (5)
According to (5), the texture-dependent part μ_1 of the strength factor, based on image texture masking [18], is computed as:
\mu_1 = a - c \times \exp(-\xi \cdot E_{HF}), \quad (6)
where a = 1.023, c = 0.02 and ξ = 3.5 × 10^{-5}.
In the next step, we calculate the saliency distance of each image sub-block with the method of [27,28]. Concretely, the saliency distance is denoted D, and the maximum saliency distance over the image sub-blocks is denoted D_max. The saliency-based factor μ_2 is μ_2 = 1 + δ · D, where δ = 0.02 / D_{max} is a scaling factor. In conclusion, the final watermark strength factor is:
\mu = \left( a - c \times \exp(-\xi \cdot E_{HF}) \right) \times \left( 1 + \frac{0.02}{D_{max}} \cdot D \right) \times \mu_0, \quad (7)
where μ_0 is a constant; without loss of generality, it is set to 1.0. Note that the final watermark strength factor μ depends on both the texture masking effect and the saliency model of the human visual system, so it varies adaptively with the texture and saliency of the image. Visual saliency can therefore be used to control the watermark embedding strength appropriately, improving the imperceptibility and robustness of the watermarking.
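Equation (7) can be computed as below. This is an illustrative sketch; in particular, the minus signs in the texture term are our reading of the formula as printed (so that larger high-frequency energy yields a larger μ_1), and the parameter defaults are the constants quoted in the text.

```python
import numpy as np

def strength_factor(E_HF, D, D_max, a=1.023, c=0.02, xi=3.5e-5, mu0=1.0):
    """Adaptive strength factor of Equation (7): texture-masking term mu1
    (Equation (6)) times saliency term mu2 times the constant mu0."""
    mu1 = a - c * np.exp(-xi * E_HF)      # texture-masking part
    mu2 = 1.0 + (0.02 / D_max) * D        # saliency part, delta = 0.02/D_max
    return mu1 * mu2 * mu0
```

For a flat block (E_HF = 0, D = 0) the factor reduces to a − c; it grows toward a × 1.02 as the block becomes more textured and more salient.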

3. Watermark Detection

Roughly speaking, watermark extraction can be regarded as a communication process, so the watermark signal can be decoded at the receiver. In this section, the watermark extraction proceeds as follows.
Step 1: Based on the statistical likelihood ratio, the hypothesis test can be expressed as:
\text{Hypothesis Test:} \quad H_0: b = 0 \quad \text{vs.} \quad H_1: b = 1, \quad (8)
where H 0 is the null hypothesis and H 1 denotes the alternative hypothesis. Then, we use the Laplacian distribution to model the wavelet coefficients of the image. The statistical distribution function of the watermarked image is represented as follows:
f_{y|b}(y_i) = \frac{\theta}{2\left( 1 + \mu(2b-1)w_i \right)} \exp\left( -\theta \left| \frac{y_i}{1 + \mu(2b-1)w_i} \right| \right), \quad (9)
where θ is the parameter of the Laplacian distribution, μ is the watermark strength factor, w_i (i = 1, 2, \ldots, N) is the pseudo-random sequence, b ∈ {0, 1} is the binary watermark bit and f_{y|b}(y_i) is the statistical distribution function of the watermarked coefficient y_i.
Step 2: Apply the maximum likelihood rule to extract the watermark information; that is, extract the watermark data according to \hat{b} = \arg\max_b f(y|b). The likelihood ratio can be expressed as:
L(y) = \frac{f(y|b=1)}{f(y|b=0)} = \frac{\prod_{i=1}^{N} f_{y|b=1}(y_i)}{\prod_{i=1}^{N} f_{y|b=0}(y_i)}, \quad (10)
where L(y) is the likelihood ratio. If L(y) > 1, the H_1 hypothesis holds; if L(y) < 1, the H_0 hypothesis holds. f(y|b=1) is the statistical distribution function of the watermarked image with watermark bit “1”, and f(y|b=0) is that with watermark bit “0”. Substituting Equation (9) into (10) and taking the logarithm of both sides yields:
\sum_{i=1}^{N} |y_i| w_i > \frac{1 - \mu^2}{2\mu\theta} \sum_{i=1}^{N} w_i \log\frac{1+\mu}{1-\mu}, \quad H_1: b = 1, \quad (11)
\sum_{i=1}^{N} |y_i| w_i < \frac{1 - \mu^2}{2\mu\theta} \sum_{i=1}^{N} w_i \log\frac{1+\mu}{1-\mu}, \quad H_0: b = 0,
where y_i denotes the watermarked coefficients and w_i the pseudo-random sequence. The detection threshold is \tau = \frac{1-\mu^2}{2\mu\theta} \sum_{i=1}^{N} w_i \log\frac{1+\mu}{1-\mu}. According to Equation (11), when \sum_{i=1}^{N} |y_i| w_i is greater than τ, the extracted watermark bit is “1”; otherwise, it is “0”.
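The decision rule of Equation (11) can be sketched as follows. One detail is our assumption, not spelled out in the text: θ is estimated blindly from the received coefficients via the Laplacian maximum-likelihood estimate θ̂ = N / Σ|y_i|.

```python
import numpy as np

def detect_bit(y, w, mu):
    """Blind ML detection of Equation (11): compare sum(|y_i| * w_i)
    against the threshold tau. theta is estimated from the received
    coefficients themselves (Laplacian MLE, an assumption)."""
    y = np.asarray(y, dtype=np.float64)
    theta = len(y) / np.sum(np.abs(y))
    tau = (1 - mu**2) / (2 * mu * theta) * np.sum(w) * np.log((1 + mu) / (1 - mu))
    stat = np.sum(np.abs(y) * w)
    return 1 if stat > tau else 0
```

Because θ (and hence τ) is computed from y alone, no side information about the host image is needed, which is what makes the detector blind.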
According to Step 1 and Step 2, the watermark data can be extracted without the original image information. Therefore, the proposed image watermarking algorithm is a blind method, which further improves the practicability of digital watermarking.
Figure 2 shows histograms of sub-band wavelet coefficients together with the fitted Laplacian distributions for the Barbara and Lena images. As shown in the figure, both fits are quite good. The wavelet coefficients of an image are sharply peaked and heavy-tailed, and the tail of the Laplacian distribution is heavier than that of the Gaussian distribution, so the Laplacian distribution models the marginal distribution of the wavelet coefficients well.

4. Experimental Results and Analysis

In order to objectively evaluate the performance of the proposed algorithm, the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) [29] were used to assess the quality of the watermarked images. The PSNR is calculated as follows:
PSNR = 10 \times \log_{10}\left( \frac{MAX_I^2}{MSE} \right) = 20 \times \log_{10}\left( \frac{MAX_I}{\sqrt{MSE}} \right), \quad (12)
where MAX_I is set to 255 in this work and MSE denotes the mean square error, defined as:
MSE = \frac{1}{mn} \sum_{i=0}^{m-1} \sum_{j=0}^{n-1} \left[ I(i,j) - \tilde{I}(i,j) \right]^2, \quad (13)
where I(i,j) and \tilde{I}(i,j) denote the original and watermarked images of size m × n, respectively.
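Equations (12) and (13) can be sketched directly:

```python
import numpy as np

def psnr(I, I_tilde, max_i=255.0):
    """PSNR of Equations (12)-(13): mean square error between the two
    images, converted to decibels against the peak value max_i."""
    I = np.asarray(I, dtype=np.float64)
    I_tilde = np.asarray(I_tilde, dtype=np.float64)
    mse = np.mean((I - I_tilde) ** 2)
    return 10.0 * np.log10(max_i ** 2 / mse)
```

For two 8-bit images differing by exactly one gray level everywhere, MSE = 1 and the PSNR is 20 log10(255) ≈ 48.13 dB.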
The construction of SSIM is mainly based on the human visual perception model, combining structural features, luminance characteristics and contrast change information; its detailed calculation can be found in [29].
In the experiments, four 512 × 512 images (Barbara, Lena, Bridge and Crowd) were used for testing. The sub-block size was 32 × 32 for all images, and the 128 sub-blocks with the highest energy were selected as the watermark embedding space. The watermark strength factor was determined adaptively for each image sub-block according to Equation (7); concretely, the watermark strength factor was set to 0.025, and the wavelet filter was “bior4.4”. The simulations were carried out in MATLAB R2016. Figure 3 shows the original images and their watermarked versions obtained by the proposed method; in each row, the left image is the original and the right one is the watermarked image. As can be seen from the figure, the proposed watermarking algorithm achieves good invisibility.
To assess the visual quality achieved by the proposed algorithm, Table 1 and Table 2 give the PSNR (dB) and SSIM results for the Barbara, Lena, Bridge and Crowd images under various attacks, covering Gaussian noise, JPEG compression, median filtering, Gaussian filtering, amplitude scaling, rotation, contrast enhancement and cropping. To evaluate robustness, we performed several experiments under these attacks; Table 3 shows the bit error ratio (BER) of the extracted watermark. As seen from Table 1, Table 2 and Table 3, the performance of the proposed watermarking algorithm is satisfactory.
Furthermore, to evaluate the robustness of the proposed method against other approaches, we compared our algorithm with the methods proposed in [17,20]. The results are reported as average values over an image set comprising the Barbara, Lena, Bridge and Crowd images; Figure 4, Figure 5, Figure 6 and Figure 7 present these averages. The distortion attacks include JPEG compression, additive noise, scaling and rotation, respectively. For a fair comparison, as in [17,20], we embedded the same watermark length of 1024 bits into these images at a post-watermarking PSNR of about 42 dB, and the image block size in all algorithms was 16 × 16. The method in [17] is based on the wavelet transform and a scaling-based strategy, while [20] proposed maximum-likelihood detection in the wavelet domain. The detailed configuration of the simulation parameters of our method is shown in Table 4.
In Figure 4, the JPEG compression quality factor ranges over [5, 50]. As seen in the figure, the proposed watermarking algorithm performs better than those of [17,20]. The method of [20] performs worse than [17] when the JPEG compression is relatively strong (quality factor less than or equal to 10%), and the two are comparable otherwise.
In Figure 5, Figure 6 and Figure 7, the proposed watermarking algorithm is also superior to those of [17,20], for two main reasons. First, we designed a watermark strength factor based on texture information and visual saliency, so the embedding strength is controlled adaptively and a tradeoff between invisibility and robustness is achieved. Second, we inserted the watermark into high-energy image sub-blocks, which improves robustness against conventional attacks. Moreover, the watermark data can be detected reliably thanks to the Laplacian distribution model, whereas [17,20] rely only on the mean and variance for noise estimation, which strongly affects watermark detection. As a result, the proposed watermarking algorithm performs well.

5. Conclusions

To further improve the imperceptibility and robustness of image watermarking, a content-adaptive multiplicative watermark strength factor has been developed in this paper by using texture masking and visual saliency models, together with the corresponding watermark embedding method. For watermark decoding, the wavelet coefficients of the image are modeled with the Laplacian distribution, and a blind watermark detection approach is designed. Finally, the performance of the proposed algorithm is analyzed through simulation and compared with related watermarking algorithms. The results show that the proposed algorithm is imperceptible and robust to additive noise, JPEG compression, median filtering, scaling, rotation and other attacks. Future work will focus on developing novel watermarking methods using other technologies, such as convolutional neural networks, generative adversarial networks and the binocular vision model.

Author Contributions

H.L. designed the experiments and wrote this paper. J.L. conceived of the idea and helped to analyze the performance of the algorithm. M.Z. helped to analyze the experimental data.

Funding

This research received no external funding.

Acknowledgments

The authors would like to thank the Editor and the anonymous reviewers for their constructive comments, which greatly helped in improving this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Tew, Y.Q.; Wong, K.S. An overview of information hiding in H.264/AVC compressed video. IEEE Trans. Circuits Syst. Video Technol. 2014, 24, 305–319.
2. Wang, C.X.; Zhang, T.; Wan, W.B.; Han, X.Y.; Xu, M.L. A novel STDM watermarking using visual saliency-based JND model. Information 2017, 8, 103.
3. Chang, C.S.; Shen, J.-J. Features classification forest: A novel development that is adaptable to robust blind watermarking techniques. IEEE Trans. Image Process. 2017, 26, 3921–3935.
4. Wang, Y.-G.; Zhu, G.P.; Shi, Y.-Q. Transportation spherical watermarking. IEEE Trans. Image Process. 2018, 27, 2063–2077.
5. Hwang, M.J.; Lee, J.S.; Lee, M.S.; Kang, H.G. SVD-based adaptive QIM watermarking on stereo audio signals. IEEE Trans. Multimedia 2018, 20, 45–54.
6. Guo, Y.F.; Au, O.C.; Wang, R.; Fang, L.; Cao, X.C. Halftone image watermarking by content aware double-sided embedding error diffusion. IEEE Trans. Image Process. 2018, 27, 3387–3402.
7. Chen, B.; Wornell, G.W. Quantization index modulation: A class of provably good methods for digital watermarking and information embedding. IEEE Trans. Inf. Theory 2001, 47, 1423–1443.
8. Li, Q.; Cox, I.J. Using perceptual models to improve fidelity and provide resistance to valumetric scaling for quantization index modulation watermarking. IEEE Trans. Inf. Forensics Secur. 2007, 2, 127–139.
9. Kalantari, N.K.; Ahadi, S.M. Logarithmic quantization index modulation for perceptually better data hiding. IEEE Trans. Image Process. 2010, 19, 1504–1518.
10. Zareian, M.; Tohidypour, H.R. A novel gain invariant quantization-based watermarking approach. IEEE Trans. Inf. Forensics Secur. 2014, 9, 1804–1813.
11. Wan, W.B.; Liu, J.; Sun, J.D.; Gao, D. Improved logarithmic spread transform dither modulation using a robust perceptual model. Multimedia Tools Appl. 2015, 74, 1–22.
12. Liu, J.H.; Xu, Y.Y.; Wang, S.; Zhu, C. Complex wavelet-domain image watermarking algorithm using L1-norm function-based quantization. Circuits Syst. Signal Process. 2018, 37, 1268–1286.
13. Cox, I.J.; Kilian, J.; Leighton, T. Secure spread spectrum watermarking for multimedia. IEEE Trans. Image Process. 1997, 6, 1673–1687.
14. Barni, M.; Bartolini, F.; Piva, A. Improved wavelet-based watermarking through pixel-wise masking. IEEE Trans. Image Process. 2001, 10, 783–791.
15. Sadreazami, H.; Ahmad, O.; Swamy, M.N.S. A study of multiplicative watermark detection in the contourlet domain using alpha-stable distributions. IEEE Trans. Image Process. 2014, 23, 4348–4360.
16. Sadreazami, H.; Ahmad, O.; Swamy, M.N.S. Multiplicative watermark decoder in contourlet domain using the normal inverse Gaussian distribution. IEEE Trans. Multimedia 2016, 18, 196–207.
17. Akhaee, M.A.; Sahraeian, S.M.E.; Sankur, B.; Marvasti, F. Robust scaling-based image watermarking using maximum-likelihood decoder with optimum strength factor. IEEE Trans. Multimedia 2009, 11, 822–833.
18. Akhaee, M.A.; Sahraeian, S.M.E.; Marvasti, F. Contourlet-based image watermarking using optimum detector in a noisy environment. IEEE Trans. Image Process. 2010, 19, 967–980.
19. Watson, A.B.; Yang, G.Y.; Solomon, A.; Villasenor, J. Visibility of wavelet quantization noise. IEEE Trans. Image Process. 1997, 6, 1164–1175.
20. Yadav, N.; Singh, K. Robust image-adaptive watermarking using an adjustable dynamic strength factor. Signal Image Video Process. 2015, 9, 1531–1542.
21. Khalilian, H.; Bajic, I.V. Video watermarking with empirical PCA-based decoding. IEEE Trans. Image Process. 2013, 22, 4825–4840.
22. Tian, L.H.; Zeng, N.N.; Xue, J.R.; Li, C. Authentication and copyright protection watermarking scheme for H.264 based on visual saliency and secret sharing. Multimedia Tools Appl. 2015, 74, 2991–3011.
23. Agarwal, H.; Sen, D.; Raman, B.; Kankanhalli, M. Visible watermarking based on importance and just noticeable distortion of image regions. Multimedia Tools Appl. 2016, 75, 7605–7629.
24. Castiglione, A.; Pizzolante, R.; Palmieri, F.; Masucci, B.; Carpentieri, B.; Santis, A.D.; Castiglione, A. On-board format-independent security of functional magnetic resonance images. ACM Trans. Embed. Comput. Syst. 2017, 16, 1–15.
25. Santhi, V.; Acharjya, D.P. Improving the security of digital images in Hadamard transform domain using digital watermarking. Biom. Concepts Methodol. Tools Appl. 2017, 27.
26. Yang, X.K.; Lin, W.S.; Lu, Z.K.; Ong, E.; Yao, S. Motion-compensated residue preprocessing in video coding based on just-noticeable-distortion profile. IEEE Trans. Circuits Syst. Video Technol. 2005, 15, 745–752.
27. Itti, L.; Koch, C.; Ernst, N. A model of saliency-based visual attention for rapid scene analysis. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 1254–1259.
28. Itti, L. Automatic foveation for video compression using a neurobiological model of visual attention. IEEE Trans. Image Process. 2004, 13, 1304–1318.
29. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
Figure 1. Flowchart of the proposed watermark embedding method.
Figure 2. Wavelet sub-band coefficient histogram fitted with a Laplacian distribution. (a) Barbara; (b) Lena.
Figure 3. Original and watermarked image.
Figure 4. Performance comparison of various algorithms under JPEG compression attack.
Figure 5. Performance comparison of various algorithms under noise attack.
Figure 6. Performance comparison of various algorithms under scaling attack.
Figure 7. Performance comparison of various algorithms under rotation attack.
Table 1. Performance results of the Barbara and Lena images against common attacks.
Attack                          | Barbara PSNR (dB)/SSIM | Lena PSNR (dB)/SSIM
Gaussian noise (var = 20)       | 43.53/0.9541           | 42.57/0.9132
Gaussian noise (var = 30)       | 41.08/0.9103           | 40.36/0.8769
JPEG (20%)                      | 40.17/0.9026           | 39.60/0.9008
JPEG (60%)                      | 44.15/0.9620           | 43.73/0.9582
Median filtering (3 × 3)        | 45.64/0.9690           | 45.92/0.9883
Gaussian filtering (5 × 5)      | 43.22/0.9638           | 42.89/0.9456
Scaling (0.75)                  | 45.85/0.9754           | 46.09/0.9772
Scaling (1.20)                  | 44.76/0.9420           | 45.90/0.9691
Rotation (10°)                  | 40.94/0.9280           | 41.25/0.9158
Rotation (30°)                  | 38.39/0.8218           | 38.77/0.8506
Contrast enhancement [0.2, 0.8] | 39.78/0.9116           | 39.22/0.9005
Cropping (1/8)                  | 39.34/0.9107           | 39.08/0.8982
Table 2. Performance results of the Bridge and Crowd images against common attacks.
Attack                          | Bridge PSNR (dB)/SSIM  | Crowd PSNR (dB)/SSIM
Gaussian noise (var = 20)       | 41.75/0.9061           | 42.63/0.9309
Gaussian noise (var = 30)       | 39.84/0.8722           | 40.23/0.9026
JPEG (20%)                      | 39.17/0.8694           | 40.18/0.9157
JPEG (60%)                      | 42.36/0.9423           | 43.45/0.9498
Median filtering (3 × 3)        | 46.03/0.9987           | 46.48/0.9993
Gaussian filtering (5 × 5)      | 43.77/0.9637           | 44.34/0.9765
Scaling (0.75)                  | 45.34/0.9653           | 45.76/0.9718
Scaling (1.20)                  | 45.39/0.9546           | 44.17/0.9452
Rotation (10°)                  | 42.12/0.9458           | 42.79/0.9489
Rotation (30°)                  | 39.56/0.9074           | 40.24/0.9117
Contrast enhancement [0.2, 0.8] | 40.38/0.9123           | 40.79/0.9223
Cropping (1/8)                  | 39.33/0.9173           | 39.76/0.9145
Table 3. BER (%) results of the extracted watermark under various attacks.
Attack                          | Barbara | Lena  | Bridge | Crowd
Gaussian noise (var = 20)       | 3.39    | 2.03  | 2.77   | 2.91
Gaussian noise (var = 30)       | 4.60    | 4.12  | 4.94   | 4.25
JPEG (20%)                      | 3.78    | 2.24  | 2.90   | 3.10
JPEG (60%)                      | 0.00    | 0.00  | 0.00   | 0.00
Median filtering (3 × 3)        | 8.44    | 9.75  | 6.37   | 7.58
Gaussian filtering (5 × 5)      | 4.16    | 5.20  | 5.33   | 4.84
Scaling (0.75)                  | 1.73    | 0.78  | 1.15   | 1.56
Scaling (1.20)                  | 2.34    | 1.62  | 1.48   | 1.90
Rotation (10°)                  | 9.20    | 8.98  | 9.17   | 9.35
Rotation (30°)                  | 18.69   | 17.43 | 19.06  | 19.83
Contrast enhancement [0.2, 0.8] | 12.05   | 13.57 | 12.73  | 14.62
Cropping (1/8)                  | 14.78   | 15.94 | 16.64  | 15.88
Table 4. Configuration of the simulation parameters.
Parameter                 | Configuration
Simulation platform       | Windows 7, MATLAB R2016
Test images               | Barbara, Lena, Bridge and Crowd
Image size                | 512 × 512
Wavelet filter            | "bior4.4"
Embedding bits            | 1024
Decomposition scale       | Three
Watermark strength factor | 0.025
Image assessment metrics  | PSNR and SSIM
Robustness metric         | BER

Share and Cite

MDPI and ACS Style

Liu, H.; Liu, J.; Zhao, M. Visual Saliency Model-Based Image Watermarking with Laplacian Distribution. Information 2018, 9, 239. https://0-doi-org.brum.beds.ac.uk/10.3390/info9090239