Article

Fractional Differential Texture Descriptors Based on the Machado Entropy for Image Splicing Detection

Rabha W. Ibrahim, Zahra Moghaddasi, Hamid A. Jalab and Rafidah Md Noor
1 Institute of Mathematical Sciences, University of Malaya, 50603 Kuala Lumpur, Malaysia
2 Faculty of Computer Science and Information Technology, University of Malaya, 50603 Kuala Lumpur, Malaysia
* Author to whom correspondence should be addressed.
Entropy 2015, 17(7), 4775-4785; https://0-doi-org.brum.beds.ac.uk/10.3390/e17074775
Submission received: 19 May 2015 / Revised: 2 July 2015 / Accepted: 3 July 2015 / Published: 8 July 2015
(This article belongs to the Special Issue Complex and Fractional Dynamics)

Abstract

Image splicing is a common operation in image forgery. Different image splicing detection techniques have been developed to restore trust in digital images. This study introduces a texture enhancement technique involving the use of fractional differential masks based on the Machado entropy. The masks slide over the tampered image, and each pixel of the tampered image is convolved with the fractional mask weight window in eight directions. The fractional differential texture descriptors are then extracted using the gray-level co-occurrence matrix for image splicing detection. A support vector machine is used as the classifier that distinguishes between authentic and spliced images. The results show that the performance of the proposed algorithm is comparable with that of other splicing detection methods.

1. Introduction

The detection of possible image manipulation is an important challenge in digital image forensics. Digital image forensics primarily aims to detect and analyze facts concealed behind a digital image. Image manipulation or tampering may be performed through image splicing, retouching, healing, copy-move, and blurring. Image splicing refers to the creation of a new image by combining two or more parts of a number of photographs. Spliced images can deceive human eyes and can be used for malicious purposes [1]. In general, image forensic approaches can be categorized into two main groups: active and passive (blind) [2]. In active approaches, additional information is inserted into the image before it is distributed; digital watermarking is a prevalent active detection method [3]. Passive approaches, by contrast, exploit the fact that tampering modifies the statistical features of images [2]. Many passive approaches have been proposed for image tampering detection [4]. One proposal establishes a natural image model for splicing detection by applying statistical feature extraction methods, including moments of characteristic functions of wavelet sub-bands and Markov transition probabilities of difference 2D arrays of multi-size block discrete cosine transform (MBDCT) coefficients. The results showed a promising improvement in image splicing detection accuracy: the approach achieved 91.8% detection accuracy on the dataset presented in [5]. In [6], a splicing detection method was developed by merging the Markov features applied in [7] with discrete cosine transform (DCT) features; this method achieved an accuracy rate of 91.5% with a 109-D feature vector. Moghaddasi et al. [8] proposed a blind image splicing detection approach based on statistical features obtained from the run-length method and on image edge statistics. This approach achieved 88.28% detection accuracy on the CASIA and DVMM image datasets. He et al. [9] developed a detection algorithm based on the approximate run length. Their results showed a moderate detection accuracy rate (75% vs. 69%) but with a shorter running time than the original algorithm, owing to a smaller feature dimension (6-D vs. 12-D). These studies suggest that the types of features extracted from images play an important role in detecting and classifying authentic and spliced images. In this study, we develop a new fractional differential approach for texture feature descriptors by focusing on the types of texture parameters used for detection.
Fractional calculus is widely applied in physical and engineering sciences. Fractional differentiation is also excellent in describing the general properties of various materials and processes. Studies over the past 50 years have developed various operators of fractional calculus, such as Grünwald–Letnikov, Erdélyi–Kober, Caputo, Weyl–Riesz, and Riemann–Liouville [10–12]. Fractional calculus has received significant attention in image processing, particularly texture enhancement and denoising [13–18]. Texture is an important feature of natural images, and texture parameters are simply mathematical representations of image features, which can be classified as smooth, rough, or grainy [19]. The fractional approach preserves low-frequency features in smooth areas and enhances texture details in areas where gray level does not clearly change [17]. Texture features represent high-level information that can be used to describe the objects and structure of images.
In this study, we develop new fractional differential texture descriptors based on the Machado entropy. The descriptors are extracted using the gray-level co-occurrence matrix (GLCM) for image splicing detection.
The remainder of this paper is organized as follows. Section 2 presents fractional entropy. Section 3 shows the theoretical analysis for the construction of fractional masks. Section 4 exhibits the dimension reduction method. Section 5 reports the obtained experimental results. Section 6 presents our conclusion.

2. Fractional Entropy

Information theory, which was established by Claude Shannon in 1948, has been employed in numerous scientific fields and has been utilized in signal and image processing.
At present, information theory is generalized in view of fractional calculus and has gained new applications in engineering and physics [19–23]. Machado recently introduced a novel formula for entropy by utilizing fractional calculus, as follows [23]:
S_\alpha(P) = \sum_{i} \left\{ \frac{-P_i^{-\alpha}}{\Gamma(\alpha+1)} \left[ \ln P_i + \psi(1) - \psi(1-\alpha) \right] \right\} P_i ,
where Pi is the probability of occurrence, and Γ(.) and ψ(.) refer to the gamma and digamma functions, respectively.
In this study, we use the Machado entropy for texture enhancement to increase the quality of images before feature extraction. Accordingly, Pi is the probability distribution of the image pixel’s intensity.
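To make the role of the fractional entropy concrete, the following sketch computes the per-bin terms S_α(P_i) and their sum for the normalized gray-level histogram of an image. This is a minimal illustration in Python/NumPy rather than the authors' implementation (the paper used MATLAB); it assumes the reconstructed form of the formula above, and the helper names are ours.

```python
import numpy as np
from scipy.special import gamma, psi  # psi is the digamma function

def machado_entropy_terms(p, alpha):
    """Per-bin terms S_alpha(P_i) of the Machado fractional entropy for 0 < alpha < 1.
    Follows the reconstructed formula above; zero-probability bins are skipped."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return (-p ** (-alpha) / gamma(alpha + 1.0)
            * (np.log(p) + psi(1.0) - psi(1.0 - alpha))) * p

def intensity_distribution(image, levels=256):
    """Normalized gray-level histogram of an 8-bit image, used as the P_i."""
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    return hist / hist.sum()

# Example on a random 8-bit test image with alpha = 0.2
img = np.random.randint(0, 256, size=(128, 128))
terms = machado_entropy_terms(intensity_distribution(img), alpha=0.2)
print(terms.shape, terms.sum())  # per-bin terms and the total fractional entropy
```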

3. Construction of Fractional Masks

We build the generalized fractional mask (φ) by using the following generalized fractional differential operator [24]:
D^{\alpha,\mu} h(x) := \frac{(\mu+1)^{\alpha}}{\Gamma(1-\alpha)} \, \frac{d}{dx} \int_{0}^{x} \frac{\zeta^{\mu}\, h(\zeta)}{\left( x^{\mu+1} - \zeta^{\mu+1} \right)^{\alpha}} \, d\zeta , \qquad 0 < \alpha \le 1 ,   (1)
where h(x) is an analytic function. We can compute the value of the fractional differential operator (1) through a numerical calculation based on its discrete form. We assume that the interval [0, x] is divided into N equal parts and obtain the following approximation:
D^{\alpha,\mu} s(x) \approx \frac{(\mu+1)^{\alpha} (2^{\mu+1}-1)}{2\, \Gamma(1-\alpha) \left( 1-(\mu+1)\alpha \right)} \sum_{k=0}^{N-1} \left[ s_{k+1} - s_{k} \right] \left[ (k+1)^{1-(\mu+1)\alpha} - k^{1-(\mu+1)\alpha} \right]
= \frac{(\mu+1)^{\alpha} (2^{\mu+1}-1)}{2\, \Gamma(1-\alpha) \left( 1-(\mu+1)\alpha \right)} \sum_{k=0}^{N-1} \left[ (k+1)^{1-(\mu+1)\alpha} - k^{1-(\mu+1)\alpha} \right] s(x_{k})
= \frac{(\mu+1)^{\alpha} (2^{\mu+1}-1)}{2\, \Gamma(1-\alpha) \left( 1-(\mu+1)\alpha \right)} s(x) + \frac{(\mu+1)^{\alpha} (2^{\mu+1}-1)}{2\, \Gamma(1-\alpha) \left( 1-(\mu+1)\alpha \right)} \sum_{k=1}^{N-1} \left[ (k+1)^{1-(\mu+1)\alpha} - k^{1-(\mu+1)\alpha} \right] s(x_{k}) .   (2)
When μ = 0, Equation (2) reduces to the Riemann–Liouville differential operator. In the context of image processing, however, Equation (2) is applied uniformly over the entire digital image; therefore, it should be applied along both the x and y directions. Along the x direction,
D^{\alpha,\mu} s(x,y) = \frac{(\mu+1)^{\alpha} (2^{\mu+1}-1)}{2\, \Gamma(1-\alpha) \left( 1-(\mu+1)\alpha \right)} s(x,y) + \frac{(\mu+1)^{\alpha} (2^{\mu+1}-1)}{2\, \Gamma(1-\alpha) \left( 1-(\mu+1)\alpha \right)} \sum_{k=1}^{N-1} \left[ (k+1)^{1-(\mu+1)\alpha} - k^{1-(\mu+1)\alpha} \right] s(x-k, y) .   (3)
The non-zero fractional differential mask coefficients (φ) are:
\varphi_{0} = \frac{(\mu+1)^{\alpha} (2^{\mu+1}-1)}{2\, \Gamma(1-\alpha) \left( 1-(\mu+1)\alpha \right)} ,
\varphi_{1} = \varphi_{0} \left( 2^{1-(\mu+1)\alpha} - 1 \right) ,
\varphi_{2} = \varphi_{0} \left( 3^{1-(\mu+1)\alpha} - 2^{1-(\mu+1)\alpha} \right) ,
\vdots
\varphi_{n-1} = \varphi_{0} \left[ n^{1-(\mu+1)\alpha} - (n-1)^{1-(\mu+1)\alpha} \right] .   (4)
By convolving each coefficient φ_i with the corresponding fractional entropy term S_α(P_i), we obtain
\Phi_{1} = \varphi_{1} S_{\alpha}(P_{1}), \; \ldots, \; \Phi_{n-1} = \varphi_{n-1} S_{\alpha}(P_{n-1}) .   (5)
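As a concrete illustration of Equation (4), the sketch below computes the mask coefficients φ_0, ..., φ_{n-1} for given α and μ. It is a minimal Python rendering under our reading of the formula; the function name and the guard against the singular case are ours, and this is not the authors' code.

```python
import numpy as np
from scipy.special import gamma

def fractional_mask_coefficients(alpha, mu, n):
    """Coefficients phi_0 .. phi_{n-1} of Equation (4) for given alpha and mu."""
    beta = 1.0 - (mu + 1.0) * alpha               # common exponent 1 - (mu+1)*alpha
    if abs(beta) < 1e-12:
        raise ValueError("(mu + 1) * alpha = 1 makes the prefactor singular")
    phi0 = ((mu + 1.0) ** alpha * (2.0 ** (mu + 1.0) - 1.0)) / (2.0 * gamma(1.0 - alpha) * beta)
    coeffs = np.empty(n)
    coeffs[0] = phi0                              # phi_0 is the prefactor itself
    k = np.arange(1, n)
    coeffs[1:] = phi0 * ((k + 1.0) ** beta - k ** beta)
    return coeffs

print(fractional_mask_coefficients(alpha=0.2, mu=1.0, n=3))  # phi_0, phi_1, phi_2
```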

3.1. Texture Feature Extraction

The 2D fractional mask coefficients of all images can be obtained in the following eight directions: 0°, 45°, 90°, 135°, 180°, 225°, 270°, and 315°. The algorithm primarily aims to split the image into non-overlapping blocks and to apply Equation (4), with suitable values of α and μ, to extract the texture features. The pseudo-code for the proposed algorithm is shown in Algorithm 1.
Algorithm 1. Pseudo-code for the proposed algorithm.
  // Input
  // I: an Input image
  // α, µ are fractional parameters of the proposed masks
  // Output:
  // T: Texture features
 1. Construct 2D fractional mask coefficients in the following eight directions: 0°, 45°, 90°, 135°, 180°, 225°, 270°, and 315°.
 2. Split the image into blocks equal to the fractional mask window size.
 3. For each block, compute the output block by convolving each pixel of the block with the fractional masks in the eight directions.
 4. For each output block, compute the gray-level co-occurrence matrix features: contrast, homogeneity, energy, and entropy [25].
 5. Save the texture feature vector T of all image blocks as the final texture features.
The rationale behind texture enhancement based on fractional differential operators is that their nonlinearity preserves high-frequency marginal features in areas where gray-level changes are considerable and enhances low-frequency texture details in areas where gray-level changes are insignificant. Image frequencies describe how gray values change with distance. Thus, we utilize fractional calculus to enhance image texture.
In this study, we applied the GLCM to extract the texture features from each image block after using fractional texture enhancement based on fractional differential masks. GLCM is a statistical method used to calculate the image textural characteristics by modeling the texture as a 2D gray-level variation [17].
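The following sketch ties Algorithm 1 together: it builds 3 × 3 masks for the eight directions from the coefficients of Equation (4), convolves the image with each mask, and computes the four GLCM descriptors per block. The mask layout (φ_0 at the centre, φ_1 on the neighbour along each direction), the block size, and the 8-level quantization are our assumptions, since the paper does not spell them out; the GLCM is computed directly in NumPy to keep the example self-contained.

```python
import numpy as np
from scipy.ndimage import convolve

# (row, col) offsets for 0, 45, 90, 135, 180, 225, 270 and 315 degrees
DIRECTIONS = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]

def directional_masks(phi, size=3):
    """One 3x3 mask per direction: phi_0 at the centre, phi_1 on the neighbouring pixel."""
    c = size // 2
    masks = []
    for dr, dc in DIRECTIONS:
        m = np.zeros((size, size))
        m[c, c] = phi[0]
        m[c + dr, c + dc] = phi[1]
        masks.append(m)
    return masks

def glcm_features(block, levels=8, offset=(0, 1)):
    """Contrast, homogeneity, energy and entropy of a normalized GLCM [25]."""
    lo, hi = block.min(), block.max()
    q = np.zeros_like(block, dtype=int) if hi == lo else ((block - lo) / (hi - lo) * (levels - 1)).astype(int)
    glcm = np.zeros((levels, levels))
    dr, dc = offset
    rows, cols = q.shape
    for r in range(max(0, -dr), min(rows, rows - dr)):
        for c in range(max(0, -dc), min(cols, cols - dc)):
            glcm[q[r, c], q[r + dr, c + dc]] += 1
    glcm /= max(glcm.sum(), 1.0)
    i, j = np.indices(glcm.shape)
    contrast = np.sum(glcm * (i - j) ** 2)
    homogeneity = np.sum(glcm / (1.0 + np.abs(i - j)))
    energy = np.sum(glcm ** 2)
    nz = glcm[glcm > 0]
    entropy = -np.sum(nz * np.log2(nz))
    return [contrast, homogeneity, energy, entropy]

def texture_features(image, phi, block=32):
    """Blockwise fractional enhancement followed by GLCM descriptors (Algorithm 1 sketch)."""
    feats = []
    for mask in directional_masks(phi):
        enhanced = convolve(image.astype(float), mask, mode='nearest')
        for r in range(0, image.shape[0] - block + 1, block):
            for c in range(0, image.shape[1] - block + 1, block):
                feats.extend(glcm_features(enhanced[r:r + block, c:c + block]))
    return np.array(feats)
```

For a 128 × 128 image with 32 × 32 blocks and eight directions, this yields 8 × 16 × 4 = 512 descriptors per image; the paper's 1764-D feature vector implies a different block configuration, which we have not tried to reproduce exactly.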

4. Dimension Reduction Method

Dimension reduction decreases feature dimensionality by eliminating redundant features while maintaining the important dimensions of the feature vector. Humans and machine learning methods cannot easily interpret high-dimensional data. A specific instance of an object is represented by a row in the feature matrix, and the number of features sharply increases the computational time. Thus, reducing the dimensionality simplifies the analysis and improves the training and testing phases during classification [26]. Given that high correlations are found among the extracted features, kernel principal component analysis (PCA) is applied to reduce these correlations by eliminating information redundancies from the features. Figure 1 shows the standard deviation distribution of the features extracted from gray-scale images before and after applying kernel PCA. The standard deviation quantifies the dispersion of the data from the mean; in this case, a high standard deviation implies a high correlation between the features. Figure 1 shows that the original features were highly correlated and that their standard deviations were spread over a wide range in 1764-D. By contrast, the standard deviations were greatly reduced after applying kernel PCA to the original features, indicating that the features were highly uncorrelated.
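A minimal sketch of this reduction step, using scikit-learn's KernelPCA; the RBF kernel and the random stand-in data are our assumptions, since the paper states only that kernel PCA was used.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

# Stand-in feature matrix: n_images x 1764, matching the original feature dimension.
X = np.random.rand(300, 1764)

# Project onto 40 components (the dimensionality that later gives the best accuracy).
kpca = KernelPCA(n_components=40, kernel='rbf')
X_reduced = kpca.fit_transform(X)
print(X_reduced.shape)  # (300, 40)
```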

5. Experimental Results and Discussion

This section demonstrates that the proposed algorithm using fractional masks has better capability than traditional approaches for texture feature descriptors. Performance tests for the proposed algorithm were implemented using MATLAB 2013b on Windows 7.
The performance of the proposed approach was studied using the image dataset provided by DVMM, Columbia University [27]. The dataset was designed to evaluate blind image splicing detection systems. It contains a total of 1845 grayscale images (933 authentic and 912 spliced images) of size 128 × 128 pixels. Some examples from the DVMM image dataset are shown in Figure 2.
The fractional differential masks operate using 3 × 3 processing windows. The two fractional parameters in our algorithm are α and μ. We applied the commonly used 10-fold cross validation to examine the splicing detection rate with respect to different values of α: the features for all images in the dataset were randomly partitioned into 10 equal-sized groups, a single group was used for testing, the remaining nine groups were used for training, and the average detection accuracy was reported for each value of α. Figure 3 displays the splicing detection rate for values of α ranging from 0.1 to 1, with μ = 1, on the DVMM image dataset. Small values of α yield a low detection rate on the tampered images, and the detection rate also decreases sharply for large values of α. Therefore, we selected the optimal value α = 0.20 (Figure 3).
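A sketch of this α selection loop is given below. It reuses the fractional_mask_coefficients and texture_features sketches from Section 3, evaluates each α with 10-fold cross-validation, and skips values for which the prefactor of Equation (4) is singular; the classifier settings here are illustrative, not the paper's exact configuration.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def select_alpha(images, labels, mu=1.0):
    """Mean 10-fold cross-validated accuracy for alpha = 0.1, 0.2, ..., 1.0."""
    results = {}
    for alpha in np.arange(0.1, 1.01, 0.1):
        if abs(1.0 - (mu + 1.0) * alpha) < 1e-9:
            continue  # Equation (4) prefactor is singular when (mu+1)*alpha = 1
        phi = fractional_mask_coefficients(alpha, mu, n=2)
        X = np.array([texture_features(img, phi) for img in images])
        acc = cross_val_score(SVC(kernel='rbf'), X, labels, cv=10).mean()
        results[round(float(alpha), 1)] = acc
    return results
```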

5.1. Classification

The support vector machine (SVM) was the classifier applied in this study. SVM is a well-known supervised machine learning method used in many applications, including pattern recognition. LIBSVM [28] is a widely used library that implements SVM, and MATLAB code for it is freely available. In this paper, LIBSVM was used under the following conditions:
  • The radial basis function is used as the kernel function.
  • A grid search is applied to obtain the best values for the C and γ parameters so that the SVM classifier can accurately predict unknown data (a sketch of this setup follows the list).
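A minimal sketch of this classifier setup follows. scikit-learn's SVC wraps LIBSVM, so an RBF-kernel SVC with a grid search over C and γ mirrors the configuration described above; the parameter grid and the stand-in data are illustrative, since the paper does not report the exact ranges searched.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Stand-in data: in practice, the kernel-PCA-reduced texture features and their labels.
X = np.random.rand(120, 40)
y = np.random.randint(0, 2, 120)

# Exponential grids over C and gamma, evaluated with 10-fold cross-validation.
param_grid = {'C': [2.0 ** k for k in range(-5, 16, 2)],
              'gamma': [2.0 ** k for k in range(-15, 4, 2)]}
search = GridSearchCV(SVC(kernel='rbf'), param_grid, cv=10, scoring='accuracy')
search.fit(X, y)
print(search.best_params_, search.best_score_)
```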
For satisfactory results, kernel PCA was applied to reduce feature dimensionality by eliminating redundant features and maintaining important dimensions in the feature vector. To evaluate the effect of kernel PCA on the detection performance of the trained SVM classifier, the dimensionality D of the reduced feature vector was set to different values (10, 20, 30 … 100, 150, 200-D). Detailed results are provided in Tables 1 and 2. True positive and true negative represent the detection rate of the authentic and spliced images, respectively. Accuracy represents the average detection rate.
Table 1 shows the results for the original 1764-D feature vector extracted from the DVMM image dataset. A detection accuracy of only 70.33% was achieved, consistent with the high correlation among the features (Figure 1).
Table 2 reports the detection accuracy of the trained SVM classifier after applying kernel PCA to reduce the feature vector dimensionality. Apart from the extreme case of 10-D (50.65%), the reduced features achieved considerably higher detection accuracy than the original features (up to 91.88% vs. 70.33%). These results were anticipated from Figure 1, which indicated a low correlation among the features after kernel PCA was applied. The highest detection rate of 91.88% was obtained with 40-D features. The number of dimensions in this study was selected experimentally, with the main objective of improving accuracy at reduced dimensionality. Accuracy was higher for dimensions below 100-D than for the higher-dimensional feature vectors; therefore, the focus was on dimensions between 10 and 100.
The best results were obtained when the extracted features were combined with kernel PCA in 40-D, reflecting the nonlinear nature of the extracted features. Figure 4 shows the receiver operating characteristic curves for the image dataset, comparing the features extracted by the original method in 1764-D with those obtained after kernel PCA dimension reduction. Figure 4 confirms the beneficial effect of kernel PCA on the extracted features.

5.2. Comparison with Other Methods

State-of-the-art image splicing detection methods were compared to comprehensively evaluate the entire algorithm. Table 3 shows the comparison between different methods with different dimensionalities and the proposed algorithm for the DVMM image dataset.
Table 3 shows that the accuracy rates exhibit different trends. The best result (93.55%) was obtained using the expanded DCT Markov + DWT Markov features [29] reduced to 100-D by Support Vector Machine Recursive Feature Elimination (SVM-RFE). The next highest accuracy (91.88%) was achieved by the proposed method with only 40-D. Thus, the proposed algorithm attains competitive accuracy with the smallest feature dimension (40-D) among the compared methods.

6. Conclusions

New fractional differential texture descriptors based on the Machado entropy are proposed to detect spliced and tampered images. The standard DVMM image dataset was used to demonstrate the performance of the proposed algorithm in comparison with other image splicing detection methods. With the proposed algorithm, the characteristics of the image descriptors can be altered simply by changing the fractional power α of the proposed mask. The proposed algorithm achieved its highest accuracy rate of 91.88% with 40-D features. Compared with the other methods presented, the proposed method uses the smallest feature dimension (40) while maintaining a high accuracy rate. The results demonstrate the efficacy of applying information theory, represented by the Machado entropy, in the framework of fractional calculus. Future work will compare other dimension reduction methods with the proposed algorithm.

Acknowledgments

The authors would like to thank the reviewers for their comments. This research is supported by project No. RG312-14AFR from the University of Malaya.
PACS Codes: 07.05.Pj

Author Contributions

All authors jointly worked on deriving the results and approving the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. He, Z.; Sun, W.; Lu, W.; Lu, H. Digital image splicing detection based on approximate run length. Pattern Recognit. Lett. 2011, 32, 1591–1597. [Google Scholar]
  2. Wang, W.; Dong, J.; Tan, T. A survey of passive image tampering detection. In Digital Watermarking, Proceedings of the 8th International Workshop on Digital Watermarking (IWDW 2009), University of Surrey, Guildford, Surrey, UK, 24–26 August 2009; Ho, A.T.S., Shi, Y.Q., Kim, H.J., Barni, M., Eds.; Springer: Berlin, Germany, 2009; pp. 308–322. [Google Scholar]
  3. Zhao, X.; Li, J.; Li, S.; Wang, S. Detecting digital image splicing in chroma spaces. In Digital Watermarking, Proceedings of the 9th International Workshop on Digital Watermarking (IWDW 2010), Seoul, Korea, 1–3 October 2010; Kim, H.J., Shi, Y.Q., Barni, M., Eds.; Springer: Berlin, Germany, 2011; pp. 12–22. [Google Scholar]
  4. Shi, Y.Q.; Chen, C.; Chen, W. A natural image model approach to splicing detection. Proceedings of the 9th Workshop on Multimedia & Security, Dallas, TX, USA, 20–21 September 2007; pp. 51–62.
  5. Ng, T.-T.; Chang, S.-F.; Lin, C.-Y.; Sun, Q. Passive-blind image forensics. Multimed. Secur. Technol. Digital Rights 2006, 15, 383–412. [Google Scholar]
  6. Zhang, J.; Zhao, Y.; Su, Y. A new approach merging Markov and DCT features for image splicing detection. In Proceedings of the IEEE International Conference on Intelligent Computing and Intelligent Systems, Shanghai, China, 20–22 November 2009; pp. 390–394.
  7. Shi, Y.Q.; Chen, C.; Chen, W. A Markov process based approach to effective attacking JPEG steganography. In Information Hiding; Camenisch, J.L., Collberg, C.S., Johnson, N.F., Sallee, P., Eds.; Springer: Berlin, Germany, 2007; pp. 249–264. [Google Scholar]
  8. Moghaddasi, Z.; Jalab, H.A.; Md Noor, R.; Aghabozorgi, S. Improving RLRN image splicing detection with the use of PCA and kernel PCA. Sci. World J. 2014, 2014. [Google Scholar] [CrossRef]
  9. He, Z.; Lu, W.; Sun, W. Improved run length based detection of digital image splicing. In Digital-Forensics and Watermarking, Proceedings of the 10th International Workshop, IWDW 2011, Atlantic City, NJ, USA, 23–26 October 2011; Shi, Y.Q., Kim, H.J., Perez-Gonzalez, F., Eds.; Springer: Berlin, Germany, 2012; pp. 349–360. [Google Scholar]
  10. Podlubny, I. Fractional Differential Equations: An Introduction to Fractional Derivatives, Fractional Differential Equations, to Methods of Their Solution and Some of Their Applications; Mathematics in Science and Engineering; Academic Press: Waltham, MA, USA, 1999. [Google Scholar]
  11. Hilfer, R.; Butzer, P.; Westphal, U.; Douglas, J.; Schneider, W.; Zaslavsky, G.; Nonnemacher, T.; Blumen, A.; West, B. Applications of Fractional Calculus in Physics; World Scientific: Singapore, Singapore, 2000. [Google Scholar]
  12. Kilbas, A.A.A.; Srivastava, H.M.; Trujillo, J.J. Theory and Applications of Fractional Differential Equations; Elsevier Science Limited: Oxfordshire, UK, 2006; Volume 204. [Google Scholar]
  13. Jalab, H.A.; Ibrahim, R.W. Fractional Conway polynomials for image denoising with regularized fractional power parameters. J. Math. Imaging Vis. 2015, 51, 442–450. [Google Scholar]
  14. Jalab, H.A. Regularized fractional power parameters for image denoising based on convex solution of fractional heat equation. Abst. Appl. Anal. 2014, 2014. [Google Scholar] [CrossRef]
  15. Jalab, H.A.; Ibrahim, R.W. Fractional Alexander polynomials for image denoising. Signal Process. 2015, 107, 340–354. [Google Scholar]
  16. Jalab, H.A.; Ibrahim, R.W. Denoising algorithm based on generalized fractional integral operator with two parameters. Discrete Dyn. Nat. Soc. 2012, 2012. [Google Scholar] [CrossRef]
  17. Jalab, H.A.; Ibrahim, R.W. Texture enhancement based on the savitzky-golay fractional differential operator. Math. Probl. Eng. 2013, 2013. [Google Scholar] [CrossRef]
  18. Jalab, H.A.; Ibrahim, R.W. Texture feature extraction based on fractional mask convolution with Cesàro means for content-based image retrieval. In PRICAI 2012: Trends in Artificial Intelligence; Springer: Berlin, Germany, 2012; pp. 170–179. [Google Scholar]
  19. Tsallis, C. Introduction to Nonextensive Statistical Mechanics; Springer: Berlin, Germany, 2009. [Google Scholar]
  20. Machado, J.T. Entropy analysis of integer and fractional dynamical systems. Nonlinear Dyn. 2010, 62, 371–378. [Google Scholar]
  21. Ibrahim, R.W. The fractional differential polynomial neural network for approximation of functions. Entropy 2013, 15, 4188–4198. [Google Scholar]
  22. Mathai, A.M.; Haubold, H.J. On a generalized entropy measure leading to the pathway model with a preliminary application to solar neutrino data. Entropy 2013, 15, 4011–4025. [Google Scholar]
  23. Machado, J.T. Fractional order generalized information. Entropy 2014, 16, 2350–2361. [Google Scholar]
  24. Ibrahim, R.W. On generalized Srivastava–Owa fractional operators in the unit disk. Adv. Differ. Equ. 2011, 2011, 1–10. [Google Scholar]
  25. Selvarajah, S.; Kodituwakku, S. Analysis and comparison of texture features for content based image retrieval. Int. J. Latest Trends Comput. 2011, 2, 108–113. [Google Scholar]
  26. Anusudha, K.; Koshie, S.A.; Ganesh, S.S.; Mohanaprasad, K. Image splicing detection involving moment-based feature extraction and classification using artificial neural networks. Int. J. Signal Image Process. 2010, 1, 9–13. [Google Scholar]
  27. Ng, T.-T.; Chang, S.-F. A Data Set of Authentic and Spliced Image Blocks; ADVENT Technical Report, # 203-2004-3; Columbia University: New York, NY, USA, 2004. [Google Scholar]
  28. Chang, C.C.; Lin, C.J. LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol. 2011, 2. [Google Scholar] [CrossRef]
  29. He, Z.; Lu, W.; Sun, W.; Huang, J. Digital image splicing detection based on Markov features in DCT and DWT domain. Pattern Recognit. 2012, 45, 4292–4299. [Google Scholar]
  30. Fu, D.; Shi, Y.Q.; Su, W. Detection of image splicing based on Hilbert–Huang transform and moments of characteristic functions with wavelet decomposition. In Digital Watermarking, Proceedings of the 5th International Workshop on Digital Watermarking (IWDW 2006), Jeju Island, Korea, 8–10 November 2006; Shi, Y.Q., Jeon, B., Eds.; Springer: Berlin, Germany, 2006; pp. 177–187. [Google Scholar]
  31. Dong, J.; Wang, W.; Tan, T.; Shi, Y.Q. Run-length and edge statistics based approach for image splicing detection. In Digital Watermarking, Proceedings of the 7th International Workshop on Digital Watermarking, Busan, Korea, 10–12 November 2008; Kim, H.J., Katzenbeisser, S., Ho, A.T.S., Eds.; Springer: Berlin, Germany, 2009; pp. 76–87. [Google Scholar]
Figure 1. Standard deviation distributions of extracted features. Rows indicate the standard deviation distributions of features extracted from gray-scale images. The first column indicates the original features. The second column shows the features after applying kernel PCA.
Figure 2. Samples of DVMM image dataset.
Figure 3. Selection of α value.
Figure 4. Comparison between the features with 1764-D and features with Kernel PCA in 40-D.
Table 1. Detection accuracy of the fractional feature extraction method with the original dimension of 1764.
Feature set | Dimensionality | True positive (%) | True negative (%) | Accuracy (%)
Original fractional features | 1764 | 74.74 | 55.92 | 70.33
Table 2. Detection accuracy of the fractional feature extraction method with kernel principal component analysis (PCA) in different dimensions.
Features + Kernel PCA:
Dimension | True positive (%) | True negative (%) | Accuracy (%)
200 | 88.46 | 76.97 | 82.72
150 | 88.46 | 84.87 | 86.67
100 | 89.74 | 86.84 | 88.29
90 | 91.03 | 88.82 | 89.92
80 | 91.03 | 89.47 | 90.26
70 | 88.46 | 90.13 | 89.30
60 | 90.38 | 89.47 | 89.93
50 | 89.74 | 89.47 | 89.61
40 | 92.31 | 91.45 | 91.88
30 | 91.67 | 90.13 | 90.91
20 | 91.03 | 78.95 | 84.99
10 | 100 | 0 | 50.65
Table 3. Comparison between the proposed approach and other methods.
Feature Extraction Method | Dimensionality | TP (%) | TN (%) | Acc (%)
Expanded DCT Markov [29] | 100 | 89.92 | 90.21 | 90.07
Expanded DCT Markov [29] | 50 | 89.60 | 90.45 | 90.02
DWT Markov [29] | 100 | 87.58 | 85.39 | 86.50
DWT Markov [29] | 50 | 86.71 | 85.70 | 86.21
Expanded DCT Markov + DWT Markov [29] | 100 | 93.28 | 93.83 | 93.55
Expanded DCT Markov + DWT Markov [29] | 50 | 92.28 | 93.13 | 93.55
HHT + moments of characteristic functions with wavelet decomposition [30] | 110 | 80.03 | 80.25 | 80.15
HHT + moments of characteristic functions with wavelet decomposition [30] | 78 | 73.91 | 76.49 | 75.23
Run-length and edge statistics based model [31] | 163 | 83.23 | 85.53 | 84.36
Run-length and edge statistics based model [31] | 139 | 83.87 | 76.97 | 80.46
Fractional features + Kernel PCA (proposed) | 40 | 92.31 | 91.45 | 91.88
