Article

The Comparison of Different Methods of Texture Analysis for Their Efficacy for Land Use Classification in Satellite Imagery

by
Przemysław Kupidura
Faculty of Geodesy and Cartography, Warsaw University of Technology, 00-661 Warsaw, Poland
Remote Sens. 2019, 11(10), 1233; https://0-doi-org.brum.beds.ac.uk/10.3390/rs11101233
Submission received: 23 April 2019 / Revised: 16 May 2019 / Accepted: 21 May 2019 / Published: 24 May 2019

Abstract
The paper presents a comparison of the efficacy of several texture analysis methods as tools for improving land use/cover classification in satellite imagery. The tested methods were: gray level co-occurrence matrix (GLCM) features, Laplace filters and granulometric analysis, based on mathematical morphology. The tests included an assessment of classification accuracy based on spectro-textural datasets: spectral images with the addition of images generated using different texture analysis methods. The class nomenclature was based on spectral and textural differences and included the following classes: water, low vegetation, bare soil, urban, and two forest classes (coniferous and deciduous). Classification accuracy was assessed using the overall accuracy and the kappa index of agreement, based on reference data generated through visual interpretation of the images. The analysis was performed using very high-resolution imagery (Pleiades, WorldView-2) and high-resolution imagery (Sentinel-2). The results show the efficacy of selected GLCM features and of granulometric analysis as tools for providing textural data that can be used in the process of land use/cover classification. It is also clear that texture analysis is generally a more important and effective component of classification for images of higher resolution. In addition, for the classification using GLCM results, a random forest variable importance analysis was performed.

Graphical Abstract

1. Introduction

Texture is one of the most important spatial features of an image. Compared to other important spatial features, such as shape and size, it is relatively simple to use because it does not require prior image segmentation. At the same time, it is a distinctive feature of selected land use/cover classes, compared to other classes exhibiting significant spectral similarities. For example, urban and bare soil areas share similar spectral characteristics, as do forests and areas of low vegetation. As the research shows, the use of textural information in classification, apart from spectral data, can significantly increase the accuracy of classification [1,2,3,4,5,6,7,8,9,10,11,12]. The best results can be obtained by using a combination of spectral and textural data [7,8,12].
Texture has no unambiguous definition, which is why in the practice of digital image processing there are many different methods of texture analysis defined ad hoc. Some of these methods include gray level co-occurrence matrix (GLCM) [1,2,13], fractal analysis [3], discrete wavelet transformation [14], Laplace filters [15,16,17], Markov random fields [18,19] or granulometric analysis [20,21]. There are also studies showing the high potential of artificial neural networks, including convolutional ones, for spectral-spatial approaches to classification [5,6].
The following paper presents a comparison of the effectiveness of GLCM, Laplacian and granulometric analyses in providing textural information. The first two methods are relatively well researched, including in terms of the effectiveness of textural analysis. Granulometric analysis, however, is a lesser-known method. Although previous studies [4,7,8] show its significant potential, there are no studies comparing it with other methods of textural analysis. The main motivation of this paper is therefore to present such a comparative analysis.
Previous studies [7] show that spatial resolution is important when identifying textural signatures which indicate a specific land use/cover class. It was shown that the significance of texture decreases with the spatial resolution of the image and that it is negligible for images with a pixel size of approximately 30 m. Therefore, this study used images with different resolutions: very high (GSD (ground sample distance) 2 m: Pleiades and WorldView-2) and high (GSD 10 m: Sentinel-2).

2. Brief Presentation of Tested Methods of Textural Analysis

Three methods of textural analysis were tested: GLCM, Laplace filters and granulometric analysis. They are presented below.

2.1. Gray Level Co-Occurrence Matrix (GLCM)

This method, first presented by Julesz [13], is based on creating a matrix describing the frequency of the appearance of individual pairs of values in a specific image fragment (the gray level co-occurrence matrix). Certain features describing particular textural aspects are then calculated from this matrix. A significant part of these features was developed by Haralick et al. [1,2], thus the indicators are often referred to as Haralick features. Various authors propose the use of various Haralick features [10,11,12]. The effectiveness of this popular method has been demonstrated in a significant number of publications [22,23]. In this paper, a set of eight different GLCM indicators is applied (formulas according to [24]):
Energy = \sum_{i,j} g(i,j)^2 ,
Entropy = -\sum_{i,j} g(i,j) \log_2 g(i,j), \text{ or } 0 \text{ if } g(i,j) = 0 ,
Correlation = \frac{\sum_{i,j} (i - \mu)(j - \mu) \, g(i,j)}{\sigma^2} ,
Inverse Difference Moment = \sum_{i,j} \frac{1}{1 + (i - j)^2} \, g(i,j) ,
Inertia = \sum_{i,j} (i - j)^2 \, g(i,j) ,
Cluster Shade = \sum_{i,j} \left( (i - \mu) + (j - \mu) \right)^3 g(i,j) ,
Cluster Prominence = \sum_{i,j} \left( (i - \mu) + (j - \mu) \right)^4 g(i,j) ,
Haralick's Correlation = \frac{\sum_{i,j} (i \cdot j) \, g(i,j) - \mu_t^2}{\sigma_t^2} ,
where (i, j) is the matrix cell index, g(i, j) is the frequency value of the pair with index (i, j), \mu = \sum_{i,j} i \, g(i,j) = \sum_{i,j} j \, g(i,j) (due to matrix symmetry) is the weighted pixel mean, \sigma^2 = \sum_{i,j} (i - \mu)^2 g(i,j) = \sum_{i,j} (j - \mu)^2 g(i,j) (due to matrix symmetry) is the weighted pixel variance, and \mu_t and \sigma_t are the mean and standard deviation of the row (or column, due to symmetry) sums.
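As a rough illustration of how these indicators are obtained, the sketch below (Python/NumPy; the function names and the single-offset, 8-level setup are this illustration's own simplifications, not taken from [24]) builds a normalized, symmetric co-occurrence matrix and evaluates the eight features. The quantities μ_t and σ_t are interpreted here as the mean and standard deviation of the marginal gray-level distribution formed by the row sums.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Normalized, symmetric gray level co-occurrence matrix for one offset."""
    g = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            i, j = img[y, x], img[y + dy, x + dx]
            g[i, j] += 1.0
            g[j, i] += 1.0  # count both directions -> symmetric matrix
    return g / g.sum()

def haralick_features(g):
    """The eight GLCM indicators listed above, for a normalized symmetric g."""
    n = g.shape[0]
    i, j = np.indices((n, n))
    mu = (i * g).sum()                        # weighted pixel mean
    var = ((i - mu) ** 2 * g).sum()           # weighted pixel variance
    p = g.sum(axis=1)                         # row sums (marginal distribution)
    k = np.arange(n)
    mu_t = (k * p).sum()                      # mean of the marginal distribution
    sigma_t2 = ((k - mu_t) ** 2 * p).sum()    # its variance
    nz = g > 0                                # entropy terms are 0 where g = 0
    return {
        "energy": (g ** 2).sum(),
        "entropy": -(g[nz] * np.log2(g[nz])).sum(),
        "correlation": ((i - mu) * (j - mu) * g).sum() / var,
        "inverse_difference_moment": (g / (1.0 + (i - j) ** 2)).sum(),
        "inertia": ((i - j) ** 2 * g).sum(),
        "cluster_shade": (((i - mu) + (j - mu)) ** 3 * g).sum(),
        "cluster_prominence": (((i - mu) + (j - mu)) ** 4 * g).sum(),
        "haralick_correlation": ((i * j * g).sum() - mu_t ** 2) / sigma_t2,
    }
```

In practice such features are computed per pixel over a moving window and often averaged over several offsets; the sketch handles a single image fragment and a single offset.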

2.2. Laplace Filters

Laplacian filters are derivative filters used to find areas of rapid change of values in an image. They were presented in [25,26]. A Laplace filter can be expressed as a convolution [26], e.g., using the mask presented in Figure 1.
They are often used to detect the edges of objects in an image. They can also be used to detect the parts of an image with high texture, characterized by high spatial frequency [27,28,29]. This can give good results compared to other similar methods, such as the Sobel filter, but also in comparison with Haralick's features [29].
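A minimal sketch of how Laplacian filtering can be turned into a per-pixel texture measure is given below (NumPy only). The 3 x 3 mask is one common choice assumed for illustration, and averaging the absolute response over a local window is one possible way to obtain a texture image, not a prescription from [27,28,29].

```python
import numpy as np

# 3 x 3 Laplacian convolution mask (a common choice, assumed here).
LAPLACE_MASK = np.array([[0.0,  1.0, 0.0],
                         [1.0, -4.0, 1.0],
                         [0.0,  1.0, 0.0]])

def convolve2d(img, kernel):
    """Minimal 'same'-size filtering with edge replication (no SciPy needed).
    The Laplacian and box masks used here are symmetric, so correlation
    and convolution coincide."""
    k = kernel.shape[0] // 2
    padded = np.pad(img.astype(float), k, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(kernel.shape[0]):
        for dx in range(kernel.shape[1]):
            out += kernel[dy, dx] * padded[dy:dy + img.shape[0],
                                           dx:dx + img.shape[1]]
    return out

def laplace_texture(img, size=5):
    """Mean absolute Laplacian response in a size x size neighborhood,
    used as a simple per-pixel texture (spatial frequency) measure."""
    response = np.abs(convolve2d(img, LAPLACE_MASK))
    box = np.ones((size, size)) / float(size ** 2)
    return convolve2d(response, box)
```

A flat image yields a zero response everywhere, while a high-frequency pattern (or an object edge, as discussed in Section 2.3) produces high values.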

2.3. Granulometric Analysis

The third method, granulometric analysis, is not well-known, although its effectiveness has also been demonstrated in previous publications [4,7,8,30]. It resembles a morphological profile [31,32], although at the same time it differs significantly from it in some respects [7,8].
Granulometric analysis is based on the sequence of morphological opening and closing operations and the measurement of the differences between successive images. This permits the quantification of particles of different sizes [7]. The method was first presented by Haas, Matheron and Serra [20]. However, methods of local analysis were introduced later [21], allowing the assignment of texture values to individual pixels. Its accuracy, regarding use in the classification of satellite imagery, has been demonstrated in previous studies [7,8]. Granulometric analysis can be based on classical (simple) morphological operations of opening and closing, as well as on operations with a multiple structuring element (MSE) [7]. As shown by the studies, both these versions of granulometric analysis show slightly different properties. Depending on the image and distinguished land use/cover classes, differing results may be obtained [7].
As this method is relatively unknown, the two basic advantages of this texture analysis method are briefly described below.
The first is its multiscale character; due to the possibility of the successive application of morphological opening and closing operations of increasing size, the obtained information indicates the presence of texture grains of various sizes.
The second advantage is resistance to the so-called edge effect [7,33]. The edge effect means that the edges of objects, even those with a low texture, get high values as a result of texture analysis. This applies to most textural analysis methods, because they treat the spatial frequency of the selected image area as a texture determinant. Edges in imagery have a high spatial frequency, and thus appear as areas of high texture. Granulometric analysis is not based on this principle, as it analyzes the number and value of removed image elements. Because of this, edges are not displayed as areas of high texture.
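The simple (non-MSE) variant of the per-pixel granulometric maps described above can be sketched as follows, assuming SciPy's grayscale morphology. The square structuring elements and the specific sizes are illustrative choices of this sketch.

```python
import numpy as np
from scipy import ndimage

def granulometric_maps(img, sizes=(3, 5, 7)):
    """Per-pixel granulometric maps: differences between successive grayscale
    openings (bright texture grains) and closings (dark texture grains)
    with square structuring elements of increasing size."""
    img = img.astype(float)
    maps = []
    prev_open, prev_close = img, img
    for s in sizes:
        opened = ndimage.grey_opening(img, size=(s, s))
        closed = ndimage.grey_closing(img, size=(s, s))
        maps.append(prev_open - opened)   # bright detail removed at this scale
        maps.append(closed - prev_close)  # dark detail filled at this scale
        prev_open, prev_close = opened, closed
    return np.stack(maps)
```

Because each opening removes only detail that its structuring element cannot contain, the maps are non-negative and quantify texture grains at each scale, without assigning high values to edges of large uniform objects.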

3. Material and Methods

The study consisted of processing selected satellite multispectral imagery of high and very high resolution using the tested methods of texture analysis, then combining the results of individual methods with original spectral images and finally classifying such datasets and assessing their accuracy.

3.1. Source Spectral Data

In this study, images showing areas south-east of Warsaw (Poland) were used. This area is characterized by diversified land cover, including agricultural land, coniferous and deciduous forests, water reservoirs and various forms of built-up areas. The study used images with different resolutions: GSD 2 m (Pleiades and WorldView-2) and 10 m (Sentinel-2). These were subsets of satellite scenes. Details of the test images are shown in Table 1. The images are shown in Figure 2.

3.2. Textural Data

The research involved three methods of texture analysis in four variants: GLCM, the Laplace filter, and granulometric analysis (based both on simple operations and on operations with multiple structuring elements, MSE). All texture images were obtained by processing the image of the first principal component. The choice of the first principal component was based on previous studies showing that using this image as the source data for texture analysis gives the best results in terms of the separability of selected land use classes, compared to other images such as the second principal component or individual spectral channels [34]. Each of the multispectral images was subjected to principal component analysis. Then, the images of the first components, which by definition represent the largest variance within the analyzed multispectral data, were subjected to textural processing using the tested methods. As a result, three basic data sets were prepared, presented in Table 2.
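The extraction of the first principal component from a multispectral image can be sketched as below (NumPy only; the `(bands, height, width)` array layout is an assumption of this illustration).

```python
import numpy as np

def first_principal_component(cube):
    """First principal component of a multispectral cube laid out as
    (bands, height, width), via eigen-decomposition of the band
    covariance matrix."""
    bands, h, w = cube.shape
    X = cube.reshape(bands, -1).astype(float)
    X -= X.mean(axis=1, keepdims=True)        # center each band
    cov = X @ X.T / (X.shape[1] - 1)          # bands x bands covariance
    eigvals, eigvecs = np.linalg.eigh(cov)    # ascending eigenvalues
    pc1 = eigvecs[:, -1] @ X                  # project onto top eigenvector
    return pc1.reshape(h, w)
```

By construction, the resulting image carries at least as much variance as any single input band, which is why it is a convenient single-band input for texture analysis.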
In the case of granulometric analysis, a different number of consecutive granulometric maps (the results of openings and closings with successively larger structuring elements) was used, depending on the spatial resolution of the image. In the case of the higher-resolution images (GSD: 2 m), i.e., test image 1 and test image 2, these were three consecutive granulometric maps for opening and closing, while in the case of test image 3 (GSD: 10 m), these were two consecutive granulometric maps for opening and closing.
Analyses with selected methods (GLCM and both versions of granulometric analysis) were carried out in several variants, depending on the size of the analyzed neighborhood of individual pixels. These were neighborhoods with radii of 5, 7, 10 and 13 pixels.

3.3. Methodology

As part of the research, a series of classifications was performed on each of the test images. These classifications were made on different sets of data: on spectral data only, and on sets of spectro-textural data enriched with the results of textural analysis obtained using the selected methods. The tested variants are listed and explained in Table 3.
The classification was performed using the random forest method [35], based on training fields developed on the basis of the multispectral image. The classifier contained 500 trees, the number of features considered at each split was equal to the square root of the total number of features, and the impurity function was based on the Gini coefficient.
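Using scikit-learn as an example implementation (the paper does not name its software), the described configuration corresponds roughly to the following sketch. Note that scikit-learn's `feature_importances_` are impurity-based, whereas the importance measure discussed later in this section is based on out-of-bag accuracy; `sklearn.inspection.permutation_importance` would be the closer analogue of the latter.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def classify_stack(features, labels, n_trees=500):
    """Random forest configured as described above: 500 trees, sqrt of the
    feature count considered per split, Gini impurity. Returns the fitted
    model and its (impurity-based) variable importances."""
    clf = RandomForestClassifier(
        n_estimators=n_trees,
        max_features="sqrt",
        criterion="gini",
        random_state=0,
    )
    clf.fit(features, labels)
    return clf, clf.feature_importances_
```

Here `features` would be a pixels x bands array stacking the spectral bands and, in the spectro-textural variants, the texture images.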
In all variants of the classification, exactly the same training fields were used. For each test image, a relatively large number of training fields (from 65 to 74, later aggregated to the final number of classes) was prepared to ensure the highest possible classification accuracy, so that the differences obtained for individual variants depended only on the type of input data. The following six classes were distinguished:
  • Water
  • Bare soil
  • Low vegetation
  • Coniferous forest
  • Deciduous forest
  • Built-up area
To perform the textural analysis using the selected methods, the image of the first principal component, calculated on the basis of the set of multispectral data, was used. Accuracy assessment was performed by comparing the results of the classification with the reference image created on the basis of the test sites. The test sites were developed through visual interpretation of the image. They were designed to meet the requirements ensuring proper control of the classification: even distribution over the entire classified area and proportional representation of all classes [36]. The total number of test pixels was large in order to ensure high reliability of the accuracy assessment (980,869 pixels for Test Image 1, 489,573 pixels for Test Image 2 and 250,952 pixels for Test Image 3; other statistics concerning individual classes may be found in the corresponding matrices).
The error matrix was compiled for the result of each classification, and errors of omission (OE) and commission (CE) [37] as well as overall accuracy (OA) and the kappa index of agreement (KIA) [38] were calculated. The scheme of the methodology is shown in Figure 3.
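The accuracy measures used here can be computed from the error matrix as in the following sketch (NumPy; the rows-as-reference, columns-as-classified convention is an assumption of this illustration).

```python
import numpy as np

def accuracy_metrics(cm):
    """Overall accuracy (OA), kappa index of agreement (KIA) and per-class
    omission/commission errors from an error matrix whose rows are the
    reference classes and columns the classified classes."""
    cm = cm.astype(float)
    n = cm.sum()
    po = np.trace(cm) / n                                  # overall accuracy
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2  # chance agreement
    kia = (po - pe) / (1.0 - pe)
    oe = 1.0 - np.diag(cm) / cm.sum(axis=1)                # omission errors
    ce = 1.0 - np.diag(cm) / cm.sum(axis=0)                # commission errors
    return po, kia, oe, ce
```

KIA corrects OA for the agreement expected by chance, which is why it is systematically lower than OA for imperfect classifications.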
In addition, for the classification of the GLCM results, a random forest variable importance analysis was performed. Variable importance is calculated based on out-of-bag accuracy and signifies the importance of the respective variable; a high value means a high importance of the variable for the entire random forest model and vice versa [35,39]. The GLCM data set is the only one of the utilized data sets whose texture images are of a qualitatively different nature; different images present different features, which in turn refer to different aspects of the image's texture. This analysis was therefore performed, permitting an assessment of the significance of particular Haralick features used for satellite image classification.

4. Results

Individual analyses were carried out on three test images. This section contains a summary and analysis of the results obtained.

4.1. Results of Classification

4.1.1. Test image 1—Pleiades (2 m)

The results of the analysis are summarized in Table 4.
The Pleiades multispectral image is characterized by a relatively high spatial resolution (GSD: 2 m). As can be seen in Table 4, the accuracy of the classification based only on spectral data is low (OA: 0.78, KIA: 0.71). In all scenarios in which the results of textural analysis were additionally used, the accuracy of the classification was significantly higher. The best results were obtained for the classification using granulometric maps: spectral + gran10, i.e., obtained as a result of an analysis using simple morphological operations within a radius of 10 pixels (OA: 0.98, KIA: 0.97). However, it should be noted that for all such operations, regardless of the radius of the analyzed neighborhood, the results were similarly high: the lowest for spectral + gran5 (OA: 0.96, KIA: 0.94), but still very high, and higher than the GLCM or Laplacian results. The results obtained for the spectral + GLCM classification are also relatively good (OA: 0.89–0.92, KIA: 0.86–0.90), but clearly worse than those using granulometric analysis.
Table 5, Table 6, Table 7 and Table 8 present the error matrices of the sample classification scenarios; the error matrix of the spectral classification is shown in Table 5. The selected classification images obtained for individual scenarios are presented in Figure 4.
The spectral classification has moderate accuracy (OA: 0.78, KIA: 0.71; Table 5; Figure 4a). As expected, large classification errors can be noticed in classes where a high texture is an important distinguishing feature. A large commission error (CE) is noticeable for Class 6: built-up area (0.66), which is largely due to the allocation of bare soil pixels (Class 2) to this class. The obvious reason for this is the spectral similarity between these two classes. A similar situation can be observed in the case of the pair of classes deciduous forest (Class 5) and low vegetation (Class 3). These two classes also show at least partial spectral similarity, especially in the case of illuminated parts of tree crowns. This results in a large CE for Class 5: deciduous forest (0.62) and, at the same time, a large OE for Class 3: low vegetation (0.42). Analysis of the error matrix for the spectral scenario shows that the two class pairs discussed above (built-up areas versus bare soil, and deciduous forest versus low vegetation) are responsible for its relatively low accuracy. Therefore, it should be expected that classifications based on data sets that also contain the results of the textural analysis should improve accuracy in this area.
Table 6 (also Figure 4b) presents the results obtained for the classification using the results of the granulometric analysis in addition to spectral data: spectral + gran10. This is the classification with the best result of those analyzed for test image 1, Pleiades (OA: 0.98, KIA: 0.97). As expected, this classification significantly improved the separation of the two class pairs (2–6 and 3–5) compared to the spectral classification, where these pairs caused a large decrease in accuracy. The biggest errors were noted for Class 6, built-up area (OE: 0.14, CE: 0.04). As in the previously analyzed case, they are mainly caused by the incorrect distinction between Classes 6 and 2 (bare soil). However, they are much smaller than in the case of the spectral classification (OE: 0.19, CE: 0.66). The distinction between classes 2 and 5 also improved considerably due to the granulometric processing; in all other classes, OE and CE values do not exceed 0.07.
The classification based on the spectral data and GLCM results (Table 7; Figure 4c) is also significantly better than the spectral classification; however, the obtained accuracy is lower than in the case of the granulometric analysis. For example, the errors for Class 6 (built-up area), caused mainly by similarity to Class 2 (bare soil), were OE: 0.13 and CE: 0.30, i.e., less than in the spectral classification, but more than in the spectral + gran10 classification. Although classes 2 and 5 continue to show misclassifications, there is an improvement in relation to the spectral classification. However, the results are still less accurate than those of the spectral + gran10 classification.
The classification using the results of the Laplace transformations gave the worst results among the spectro-textural classification variants (Table 8; Figure 4d). Interestingly, although the overall classification results improved in relation to the spectral classification, the results for individual classes are sometimes even worse than in the latter. An example is Class 6, built-up area (OE: 0.18, CE: 0.65). The reason for such a large CE is the assignment of bare soil areas (in fact Class 2) to Class 6. This type of misclassification also occurred within other classifications, but not on such a scale. The analysis of the classification images clearly shows that the main reason for such a large error is the erroneously classified pixels of exposed soil located on the borders of agricultural plots (Figure 4). Here, the edge effect mentioned above is visible.

4.1.2. Test Image 2: WorldView (2 m)

The results of the accuracy assessment for classifications of test image 2: WorldView-2 are presented in Table 9.
Unlike in the previous case, the spectral image classification is characterized by relatively high accuracy (OA: 0.94, KIA: 0.92). Also unlike before, not all types of textural data improved the accuracy of the classification; the use of data obtained from the GLCM analysis caused a deterioration of the results. The best result among the GLCM operations was obtained for the spectral + GLCM5 variant (OA: 0.89, KIA: 0.86). The remaining operations improved accuracy; this applies both to the spectral + Laplacian variant (OA: 0.95, KIA: 0.93) and to both versions of the granulometric operations (simple and MSE). In the latter case, the best results were obtained for the spectral + MSEgran10 variant (OA: 0.97, KIA: 0.96); however, for all variants based on granulometric data, similarly high results were obtained (e.g., spectral + gran10: OA: 0.96, KIA: 0.95). The differences between the individual variants are therefore insignificant.
More detailed information is provided by the analysis of the error matrix of selected classification variants. These matrices are presented in Table 10, Table 11, Table 12 and Table 13. The subsets of selected classification images obtained for individual scenarios are presented in Figure 5.
The spectral classification of test image 2 (Table 10; Figure 5a) is characterized by a high degree of accuracy (OA: 0.95, KIA: 0.92), much higher than in the case of test image 1. Because the pixel size of both analyzed images is the same, the reasons for these differences should be sought primarily in the acquisition date of the image. It was taken in August, a period that falls, at least partly, after harvesting, which means that the total area of completely bare soil is relatively small; in some cases, post-harvest residues change the soil's spectral characteristics sufficiently to significantly improve the distinction of bare soil from built-up areas. This is reflected in relatively small errors for Class 2 (bare soil, OE: 0.02, CE: 0.20) and Class 6 (built-up area, OE: 0.14, CE: 0.01). The accuracy of forest classification is also higher than in the previous case. This time, larger errors were obtained for Class 4 (coniferous forest, OE: 0.05, CE: 0.11), mainly due to the erroneous allocation of deciduous forest areas.
The accuracy of the classification obtained thanks to the additional use of granulometric data (Table 11; Figure 5b) is the highest of all the general types of variants, as in the case of test image 1. The improvement in comparison to the spectral classification is smaller than for test image 1, but this results from the already high accuracy of the spectral variant. A reduction in errors can be observed in all classes (the exception is a slight decrease in the accuracy of Class 1, water).
The classification based on the results of the GLCM analysis (Table 12; Figure 5c) gave surprisingly poor results, much worse than the spectral classification. Analysis of the error matrix shows that the main reasons for this are Class 4 (coniferous forest, OE: 0.03, CE: 0.28) and Class 5 (deciduous forest, OE: 0.27, CE: 0.04). These errors are mainly due to the allocation of a significant part of the pixels representing deciduous forest to Class 4, coniferous forest. There is also a slight decrease (in relation to the spectral classification) in the accuracy of Class 2 (bare soil) and Class 6 (built-up area). A decrease in the accuracy of Class 1 (water) is also noticeable (OE: 0.16, CE: 0.16), occurring, as shown by the analysis of the classification images, mainly at the edges of water areas. The case of this classification seems to suggest that if the distinction between individual classes based on spectral data is already fairly accurate (see the results of the spectral classification), additional data may hinder the classification and deteriorate its results.
The use of the Laplace filtering results slightly increased the accuracy of classification (Table 13; Figure 5d). This is due to a slight improvement in the designation of all classes.

4.1.3. Test Image 3—Sentinel-2 (10 m)

The results of the accuracy assessment for classifications of test image 3, Sentinel-2 are presented in Table 14.
The accuracy of the spectral classification is relatively high (OA: 0.93, KIA: 0.90). Nevertheless, all variants using textural analysis results, apart from the spectral + Laplacian variant, are more accurate. The best overall result was obtained for the spectral + gran10 variant (OA: 0.98, KIA: 0.97); among the GLCM variants, it was the spectral + GLCM7 variant (OA: 0.95, KIA: 0.93). These are therefore the same variants that gave the best results in the case of test image 1. More detailed information is provided by the analysis of the error matrices of the selected classification variants. These matrices are presented in Table 15, Table 16, Table 17 and Table 18. The subsets of selected classification images obtained for individual scenarios are presented in Figure 6.
As the analysis of the error matrix for the spectral classification (Table 15; Figure 6a) shows, the biggest errors are generated by a problem with the differentiation between Class 6 (built-up area, OE: 0.22, CE: 0.51) and Class 2 (bare soil, OE: 0.14, CE: 0.03). This indicates a high spectral similarity between these two land use classes. The other classes are determined with high accuracy; except for Class 5 (deciduous forest, OE: 0.06, CE: 0.12), whose errors are mainly due to problems with distinguishing pixels of this class from coniferous forest and low vegetation, errors do not exceed the value of 0.03.
The spectral + gran10 variant is the best of the analyzed ones (Table 16; Figure 6b). The improvement in accuracy is mainly due to an improvement in the classification of Class 6 (built-up area, OE: 0.09, CE: 0.04). Improvements have also been observed in the other classes, e.g., in Class 5, deciduous forest (OE: 0.06, CE: 0.12).
The spectral + GLCM7 variant (Table 17; Figure 6c) also improved the results relative to the spectral classification, although to a lesser extent than the spectral + gran10 classification. The increase in accuracy results mainly from the improvement in the accuracy of Class 6, built-up area (OE: 0.17, CE: 0.23), again to a lesser extent than in the spectral + gran10 classification. However, a decrease in the accuracy of Class 5 (deciduous forest, OE: 0.26, CE: 0.08) can be noted. A similar effect for this class was noted during the analysis of the classification of test image 2, also based on GLCM data.
The spectral + Laplacian classification variant is the only one characterized by lower accuracy than the spectral classification (Table 18; Figure 6d). The differences, however, are small. The impact of particular classes on the overall accuracy of the classification also appears similar; differences between analogous errors in both variants do not exceed 0.02. It is therefore difficult to say that the use of this data in the classification of the Sentinel-2 image has any significant impact on the accuracy of the classification.

4.2. Analysis of Random Forest Variable Importance for the Dataset Consisting of GLCM Features

This section presents an analysis of the significance of the individual images that make up the spectral + GLCM data sets for the random forest classification. This analysis was performed on these data sets only, because they are the only ones among the analyzed variants whose texture images are of a qualitatively different nature: different images present different features, which in turn refer to different aspects of the image texture. The analysis allowed an assessment of the significance of particular Haralick features used for satellite image classification.
The diagrams showing the importance of the different variables, spectral (marked in gray) and GLCM (marked in black), are shown in Figure 7, Figure 8 and Figure 9.
In the case of the classification of test image 1: Pleiades (2 m), Haralick’s correlation has the greatest significance among the GLCM features. The importance of this layer is similar to the importance of spectral images. The remaining GLCM features are of low importance, at a similar level.
Also in the case of the classification of test image 2: WorldView-2 (2 m), Haralick's correlation shows the greatest importance among all texture images and, except in one case, the greatest importance in general. The relatively small significance of the individual spectral bands may in this case result from their greater number; the importance is distributed among the individual images.
Once again, for the classification of test image 3: Sentinel-2, Haralick's correlation is the most important variable among the GLCM features. However, its significance is relatively lower than for the other two test images. The explanation for this may be the lower spatial resolution of the image.

5. Discussion

The obtained results showed a high efficiency of spectro-textural classification based on the results of granulometric analysis. In all cases, the spectral + gran classification variants showed the best accuracy among all analyzed variants. Importantly, the individual spectral + gran variants, differing in the radius of analysis, gave quite similar results. This indicates the high stability of this method. Although it can be stated that the best results were generally obtained with the spectral + gran10 variant, all variants based on granulometric analysis were significantly better than the other variants of classification—spectral and spectro-textural—based on other texture analysis methods.
This may be at least partly due to the edge effect, which affects not granulometric processing but the other two tested methods: GLCM and Laplace filters. This effect is illustrated in Figure 10a–c; the edge of the square on the right gets relatively large values in the image resulting from GLCM analysis, similar to the values for a fragment of a high-texture image (left side). In the image obtained as a result of granulometric analysis, this effect is not visible. A similar set was prepared for a subset of the actual Pleiades satellite image (Figure 10d–f). In the GLCM image, the edges of the objects (plots) get high values, falsely indicating a high texture, while on the granulometric map this effect does not occur. This effect can be very important in image classification [7], because it makes the low-texture parts of objects near edges look like areas of high texture, thus reducing their separability, especially in cases of high spectral similarity.
We can observe this effect in the example of Test Image 2, where the use of GLCM or Laplace imagery did not improve the distinction between bare soil and built-up areas, and could even (as in the case of GLCM) worsen it. It is worth recalling that, due to the date when Test Image 2 was acquired, the separability of these classes based on spectral data was relatively high.
The results obtained for Test Image 3 confirm previous findings that the importance of texture analysis for distinguishing land cover classes decreases as spatial resolution decreases. Nevertheless, in this case too, applying the results of the texture analysis increased classification accuracy (with the exception of the Laplace operations, which had little effect on the result).

6. Conclusions

The presented studies showed the advantage of granulometric analysis over the other two methods of texture analysis (GLCM and Laplace filters) in the examined respect. All granulometric variants increased classification accuracy relative to the approach based only on spectral data, and almost all of them (with one exception) showed greater efficacy than all the variants based on the other two methods of texture analysis.
The use of the other tested methods of texture analysis also increased classification accuracy in the majority of analyzed cases. However, this is not a rule: for both GLCM analysis and Laplace filtration, cases of deteriorated classification accuracy occurred. This suggests that, in some cases, the results of the texture analysis are irrelevant to distinguishing individual classes, which may be partly caused by the edge effect.
An additional random forest variable importance analysis of the components of the spectral + GLCM data set showed the importance of Haralick's correlation. This type of analysis may be useful for evaluating other GLCM statistics, as well as other methods of texture analysis, and could help establish and test an optimal set of complementary textural data for the greatest possible increase in land use/cover classification accuracy.
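The kind of variable importance analysis described above can be sketched with scikit-learn's random forest implementation. The data below are a toy stand-in (not the paper's spectral + GLCM stack): four uninformative "bands" plus one synthetic "texture" feature that actually separates the classes:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n = 600
y = rng.integers(0, 2, n)
X = np.column_stack([rng.normal(0, 1, (n, 4)),     # "spectral" noise bands
                     y + rng.normal(0, 0.3, n)])   # informative "texture" feature

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
for name, imp in zip(["b1", "b2", "b3", "b4", "texture"], rf.feature_importances_):
    print(f"{name}: {imp:.3f}")
# The texture feature dominates the importance ranking.
```

Ranking `feature_importances_` in this way is how one would shortlist which texture images contribute most to a spectro-textural classification.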

Funding

This research received no external funding.

Acknowledgments

The author is grateful to Astri Polska for providing the Pleiades satellite imagery.

Conflicts of Interest

The author declares no conflict of interest.

References

1. Darling, E.M.; Joseph, R.D. Pattern recognition from satellite altitudes. IEEE Trans. Syst. Man Cybern. 1968, 4, 30–47.
2. Haralick, R.M.; Shanmugam, K.; Dinstein, I. Textural Features for Image Classification. IEEE Trans. Syst. Man Cybern. 1973, 4, 610–621.
3. Lam, N.S.N. Description and Measurement of Landsat TM Using Fractals. Photogramm. Eng. Remote Sens. 1990, 56, 187–195.
4. Mering, C.; Chopin, F. Granulometric maps from high resolution satellite images. Image Anal. Stereol. 2002, 21, 19–24.
5. Cheng, G.; Li, Z.; Han, J.; Yao, X.; Guo, L. Exploring Hierarchical Convolutional Features for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2018, 56, 6712–6722.
6. Zhou, P.; Han, J.; Chen, G.; Zhang, B. Learning Compact and Discriminative Stacked Autoencoder for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2019, 1–11.
7. Kupidura, P. Wykorzystanie granulometrii obrazowej w klasyfikacji treści zdjęć satelitarnych [The Use of Image Granulometry in the Classification of Satellite Image Content]. In Prace Naukowe Politechniki Warszawskiej; Warsaw University of Technology Publishing House: Warsaw, Poland, 2015; Volume 55.
8. Kupidura, P.; Skulimowska, M. Morphological profile and granulometric maps in extraction of buildings in VHR satellite images. Arch. Photogramm. Cartogr. Remote Sens. 2015, 27, 83–96.
9. Wawrzaszek, A.; Krupiński, M.; Aleksandrowicz, S.; Drzewiecki, W. Fractal and multifractal characteristics of very high resolution satellite images. In Proceedings of the 2013 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Melbourne, VIC, Australia, 21–26 July 2013; pp. 1501–1504.
10. Weszka, J.S.; Dyer, C.R.; Rosenfeld, A. A Comparative Study of Texture Measures for Terrain Classification. IEEE Trans. Syst. Man Cybern. 1976, 6, 269–285.
11. Conners, R.W.; Harlow, C.A. A Theoretical Comparison of Texture Algorithms. IEEE Trans. Pattern Anal. Mach. Intell. 1980, 2, 204–222.
12. Bekkari, A.; Idbraim, S.; Elhassouny, A.; Mammass, D.; El Yassa, M.; Ducrot, D. SVM and Haralick Features for Classification of High Resolution Satellite Images from Urban Areas. In ICISP 2012; Elmoataz, A., Mammass, D., Lezoray, O., Nouboud, F., Aboutajdine, D., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; pp. 17–26.
13. Julesz, B. Visual pattern discrimination. IRE Trans. Inf. Theory 1962, 8, 84–92.
14. Mallat, S.G. A Theory for Multiresolution Signal Decomposition: The Wavelet Representation. IEEE Trans. Pattern Anal. Mach. Intell. 1989, 11, 674–693.
15. Marr, D. Vision; Chapter 2; Freeman: San Francisco, CA, USA, 1982; pp. 54–78.
16. Horn, B.K.P. Robot Vision; Chapter 8; The MIT Press: Cambridge, MA, USA; London, UK, 1986.
17. Haralick, R.; Shapiro, L. Computer and Robot Vision; Addison-Wesley: Reading, MA, USA, 1992; Volume 1, pp. 346–351.
18. Spitzer, F. Random Fields and Interacting Particle Systems; Mathematical Association of America: Washington, DC, USA, 1971; p. 126.
19. Preston, C.J. Gibbs States on Countable Sets; Cambridge University Press: Cambridge, UK, 1974.
20. Haas, A.; Matheron, G.; Serra, J. Morphologie mathématique et granulométries en place [Mathematical morphology and in-place granulometries]. Ann. des Mines 1967, 12, 768–782.
21. Dougherty, E.R.; Pelz, J.B.; Sand, F.; Lent, A. Morphological Image Segmentation by Local Granulometric Size Distributions. J. Electron. Imaging 1992, 1, 46–60.
22. Baraldi, A.; Parmiggiani, F. An Investigation of the Textural Characteristics Associated with Gray Level Co-Occurrence Matrix Statistical Parameters. IEEE Trans. Geosci. Remote Sens. 1995, 33, 293–304.
23. Pathak, V.; Dikshit, O. A new approach for finding appropriate combination of texture parameters for classification. Geocarto Int. 2010, 25, 295–313.
24. OTB CookBook. Available online: https://www.orfeo-toolbox.org/CookBook/recipes/featextract.html (accessed on 14 May 2019).
25. Marr, D.; Hildreth, E. Theory of edge detection. Proc. R. Soc. Lond. B 1980, 207, 187–217.
26. Gonzalez, R.C.; Woods, R.E. Digital Image Processing, 2nd ed.; Prentice Hall: Upper Saddle River, NJ, USA, 2002.
27. Ahearn, S.C. Combining Laplacian images of different spatial frequencies (scales): Implications for remote sensing image analysis. IEEE Trans. Geosci. Remote Sens. 1988, 26, 826–831.
28. Faber, A.; Förstner, W. Scale characteristics of local autocovariances for texture segmentation. Int. Arch. Photogramm. Remote Sens. 1999, 32, Part 7-4-3/W6.
29. Lewiński, S.; Aleksandrowicz, S. Ocena możliwości wykorzystania tekstury w rozpoznaniu podstawowych klas pokrycia terenu na zdjęciach satelitarnych różnej rozdzielczości [Assessment of the Possibility of Using Texture to Recognize Basic Land Cover Classes on Satellite Images of Different Resolutions]. Arch. Fotogram. i Teledetekcji 2012, 23, 229–237.
30. Kupidura, P.; Koza, P.; Marciniak, J. Morfologia Matematyczna w Teledetekcji [Mathematical Morphology in Remote Sensing]; PWN: Warsaw, Poland, 2010.
31. Mura, D.A.; Benediktsson, J.A.; Waske, B.; Bruzzone, L. Morphological Attribute Profiles for the Analysis of Very High Resolution Images. IEEE Trans. Geosci. Remote Sens. 2010, 48, 3747–3762.
32. Mura, D.A.; Benediktsson, J.A.; Bruzzone, L. Self-dual Attribute Profiles for the Analysis of Remote Sensing Images. In ISMM 2011; Soille, P., Pesaresi, M., Ouzounis, G.K., Eds.; Springer: Berlin/Heidelberg, Germany, 2011; pp. 320–330.
33. Ruiz, L.A.; Fdez-Sarria, A.; Recio, J.A. Texture feature extraction for classification of remote sensing data using wavelet decomposition: A comparative study. Int. Arch. Photogramm. Remote Sens. 2004, 35, 1109–1114.
34. Staniak, K. Badanie wpływu rodzaju obrazu źródłowego na efektywność analizy granulometrycznej [A Study of the Influence of Source Image Type on the Effectiveness of Granulometric Analysis]. Master's Thesis, Warsaw University of Technology, Warsaw, Poland, 2016.
35. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32.
36. McCloy, K.R. Resource Management Information Systems: Remote Sensing, GIS and Modelling, 2nd ed.; CRC Press: Boca Raton, FL, USA, 2005.
37. Banko, G. A Review of Assessing the Accuracy of Classifications of Remotely Sensed Data and of Methods Including Remote Sensing Data in Forest Inventory; Interim Report; International Institute for Applied Systems Analysis: Laxenburg, Austria, 1998.
38. Congalton, R. A Review of Assessing the Accuracy of Classifications of Remotely Sensed Data. Remote Sens. Environ. 1991, 37, 35–46.
39. Boonprong, S.; Cao, C.; Chen, W.; Bao, S. Random Forest Variable Importance Spectral Indices Scheme for Burnt Forest Recovery Monitoring—Multilevel RF-VIMP. Remote Sens. 2018, 10, 807.
Figure 1. An exemplary mask of the Laplace filter used in the presented research.
Figure 2. Test images: (a) 1—Pleiades, (b) 2—WorldView-2, (c) 3—Sentinel-2.
Figure 3. The methodology scheme.
Figure 4. Subsets of images of selected classification variants (test image 1: Pleiades): (a) spectral, (b) spectral + gran10, (c) spectral + GLCM7, (d) spectral + Laplacian, (e) original satellite image.
Figure 5. Subsets of images of selected classification variants (test image 2: WorldView-2): (a) spectral, (b) spectral + gran10, (c) spectral + GLCM7, (d) spectral + Laplacian, (e) original satellite image.
Figure 6. Subsets of images of selected classification variants (test image 3: Sentinel-2): (a) spectral, (b) spectral + gran10, (c) spectral + GLCM7, (d) spectral + Laplacian, (e) original satellite image.
Figure 7. Raw variable importance for spectral and GLCM variants, test image 1: Pleiades.
Figure 8. Raw variable importance for spectral and GLCM variants, test image 2: WorldView-2.
Figure 9. Raw variable importance for spectral and GLCM variants, test image 3: Sentinel-2.
Figure 10. Edge effect in simulated imagery: (a) original image, (b) GLCM entropy, (c) granulometric map; and in an actual Pleiades image: (d) original image, (e) GLCM entropy, (f) granulometric map.
Table 1. Test images used in the study.
Test Image | Satellite Platform | GSD | Spectral Bands | Date of Acquisition
1 | Pleiades | 2 m | blue, green, red, near infrared | 22.05.2012
2 | WorldView-2 | 2 m | coastal, blue, green, yellow, red, red edge, 2× near infrared | 04.08.2011
3 | Sentinel-2 | 10 m | blue, green, red, near infrared | 20.04.2018
Table 2. The set of textural images.
Texture Analysis Method | Images | Number of Images
GLCM | Results of GLCM (gray level co-occurrence matrix) features presented in Section 2.1: Energy, Entropy, Correlation, Inverse Difference Moment, Inertia, Cluster Shade, Cluster Prominence, Haralick's Correlation | 8
Laplace filters | Results of Laplacian of size 1, 2 and 3 (3 × 3, 5 × 5, 7 × 7) | 3
Granulometric analysis | Three granulometric maps based on simple morphological opening and three based on simple morphological closing (two for each in the case of test image 3) | 6 (4 for test image 3)
Granulometric analysis based on operations with multiple structuring element (MSE) | Three granulometric maps based on morphological MSE opening and three based on morphological MSE closing (two for each in the case of test image 3) | 6 (4 for test image 3)
Table 3. Classification variants.
Name of the Variant | Spectral Data | Textural Data
spectral | Yes | None
spectral + Laplacian | Yes | Laplace filters
spectral + GLCM5 | Yes | 8 GLCM features, neighborhood: size 5
spectral + GLCM7 | Yes | 8 GLCM features, neighborhood: size 7
spectral + GLCM10 | Yes | 8 GLCM features, neighborhood: size 10
spectral + GLCM13 | Yes | 8 GLCM features, neighborhood: size 13
spectral + gran5 | Yes | 6 (or 4) simple granulometric maps, neighborhood: size 5
spectral + gran7 | Yes | 6 (or 4) simple granulometric maps, neighborhood: size 7
spectral + gran10 | Yes | 6 (or 4) simple granulometric maps, neighborhood: size 10
spectral + gran13 | Yes | 6 (or 4) simple granulometric maps, neighborhood: size 13
spectral + MSEgran5 | Yes | 6 (or 4) MSE granulometric maps, neighborhood: size 5
spectral + MSEgran7 | Yes | 6 (or 4) MSE granulometric maps, neighborhood: size 7
spectral + MSEgran10 | Yes | 6 (or 4) MSE granulometric maps, neighborhood: size 10
spectral + MSEgran13 | Yes | 6 (or 4) MSE granulometric maps, neighborhood: size 13
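Each variant above amounts to stacking the spectral bands with the chosen texture images into one per-pixel feature set for the classifier. A minimal sketch with synthetic arrays (band counts and shapes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
h, w = 64, 64
spectral = rng.random((4, h, w))  # e.g. blue, green, red, near-infrared bands
textural = rng.random((6, h, w))  # e.g. six granulometric maps

# Stack all bands, then flatten to (pixels, features) for a per-pixel classifier.
stack = np.concatenate([spectral, textural], axis=0)  # shape (10, h, w)
features = stack.reshape(stack.shape[0], -1).T        # shape (h*w, 10)
print(features.shape)  # (4096, 10)
```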
Table 4. Summary of the results for test image 1—Pleiades (2 m).
Scenario | Overall Accuracy (OA) | Kappa Index of Agreement (KIA)
spectral | 0.78 | 0.71
spectral + Laplacian | 0.83 | 0.77
spectral + GLCM5 | 0.90 | 0.87
spectral + GLCM7 | 0.92 | 0.90
spectral + GLCM10 | 0.92 | 0.89
spectral + GLCM13 | 0.89 | 0.86
spectral + gran5 | 0.96 | 0.94
spectral + gran7 | 0.97 | 0.96
spectral + gran10 | 0.98 | 0.97
spectral + gran13 | 0.96 | 0.95
spectral + MSEgran5 | 0.89 | 0.86
spectral + MSEgran7 | 0.93 | 0.91
spectral + MSEgran10 | 0.96 | 0.94
spectral + MSEgran13 | 0.96 | 0.95
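The OA and KIA values reported in these summaries follow directly from the error matrices. A minimal sketch of the computation (the example matrix is illustrative, not one of the paper's):

```python
import numpy as np

def oa_and_kia(error_matrix):
    """Overall accuracy and kappa index of agreement from an error matrix
    (rows: classification result, columns: reference data)."""
    cm = np.asarray(error_matrix, float)
    total = cm.sum()
    po = np.trace(cm) / total                                  # observed agreement = OA
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total ** 2  # chance agreement
    return po, (po - pe) / (1 - pe)

# Tiny two-class example.
oa, kia = oa_and_kia([[90, 10],
                      [5, 95]])
print(round(oa, 3), round(kia, 3))  # 0.925 0.85
```

KIA discounts the agreement expected by chance, which is why it is always somewhat lower than OA in the tables.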
Table 5. Error matrix for spectral classification of test image 1, Pleiades.
Reference Image
1. water2. soil3. low veg4. con. forest5. dec. forest6. built-upΣCE
classification1. water91,34611432103192,4240.01
2. soil0248,368796002810251,9740.01
3. low veg061208,70624101307213,1770.02
4. con. forest13442103,98911,846345116,3200.11
5. dec. forest00148,536111992,983483243,1210.62
6. built-up042,0475556121,69463,8530.66
Σ91,480290,481358,096105,209108,93326,670980,869
OE0.000.140.420.010.150.19OA0.782
KIA0.709
Table 6. Error matrix for classification spectral + gran10 of test image 1—Pleiades.
Reference Image
1. water2. soil3. low veg4. con. forest5. dec. forest6. built-upΣCE
classification1. water91,353151247011191,6640.00
2. soil0285,996233001524287,7530.01
3. low veg33669351,592101020535356,8290.01
4. con. forest124120135104,7816026915112,1010.07
5. dec. forest005811352101,879768108,8100.06
6. built-up054532319822,81723,7120.04
Σ91,480290,481358,096105,209108,93326,670980,869
OE0.000.020.020.000.060.14OA0.977
KIA0.969
Table 7. Error matrix for classification spectral + GLCM7 of test image 1, Pleiades.
Reference Image
1. water2. soil3. low veg4. con. forest5. dec. forest6. built-upΣCE
classification1. water91,4171181820521039193,8080.03
2. soil0273,9624741012437281,1410.03
3. low veg06499307,07315098158318,8290.04
4. con. forest515839104,910476211105,7990.01
5. dec. forest54744231222103,303325148,1420.30
6. built-up49737192244523,14833,1500.30
Σ91,480290,481358,096105,209108,93326,670980,869
OE0.000.060.140.000.050.13OA0.921
KIA0.897
Table 8. Error matrix for classification spectral + Laplacian of test image 1, Pleiades.
Reference Image
1. water2. soil3. low veg4. con. forest5. dec. forest6. built-upΣCE
classification1. water91,0251165374191,8360.01
2. soil0249,230553002894252,6770.01
3. low veg0101249,37723768358253,6060.02
4. con. forest45230104,6898816332114,2920.08
5. dec. forest30108,09041696,345445205,2990.53
6. built-up041,1467537121,90063,1590.65
Σ91,480290,481358,096105,209108,93326,670980,869
OE0.000.140.300.000.120.18OA0.828
KIA0.770
Table 9. Summary of the results for test image 2—WorldView-2 (2 m).
Scenario | Overall Accuracy (OA) | Kappa Index of Agreement (KIA)
spectral | 0.94 | 0.92
spectral + Laplacian | 0.95 | 0.93
spectral + GLCM5 | 0.89 | 0.86
spectral + GLCM7 | 0.88 | 0.85
spectral + GLCM10 | 0.86 | 0.82
spectral + GLCM13 | 0.87 | 0.83
spectral + gran5 | 0.96 | 0.95
spectral + gran7 | 0.96 | 0.95
spectral + gran10 | 0.96 | 0.95
spectral + gran13 | 0.96 | 0.94
spectral + MSEgran5 | 0.95 | 0.94
spectral + MSEgran7 | 0.96 | 0.95
spectral + MSEgran10 | 0.97 | 0.96
spectral + MSEgran13 | 0.96 | 0.96
Table 10. Error matrix for spectral classification of test image 2: WorldView-2.
Reference Image
1. water2. soil3. low veg4. con. forest5. dec. forest6. built-upΣCE
classification1. water68491020336772400.05
2. soil20847,391153373861119459,4050.20
3. low veg027119,853108978178121,8280.02
4. con. forest0090287,82093916598,1780.11
5. dec. forest0017203406127,2902132,4180.04
6. built-up1137671227169,49470,5040.01
Σ717048,186122,75092,715137,55281,200489,573
OE0.040.020.020.050.070.14OA0.937
KIA0.920
Table 11. Error matrix for classification spectral + gran10 of test image 2: WorldView-2.
Reference Image
1. water2. soil3. low veg4. con. forest5. dec. forest6. built-upΣCE
classification1. water6770800067974570.09
2. soil36948,000295548611154,8280.12
3. low veg051122,0513104847123,2000.01
4. con. forest018791,9328790276101,0860.09
5. dec. forest00317748127,6624128,7310.01
6. built-up31126027474,08374,2710.00
Σ717048,186122,75092,715137,55281,200489,573
OE0.060.000.010.010.070.09OA0.961
KIA0.951
Table 12. Error matrix for classification spectral + GLCM7 of test image 2: WorldView-2.
Reference Image
1. water2. soil3. low veg4. con. forest5. dec. forest6. built-upΣCE
classification1. water6050010070012671830.16
2. soil99347,563229102812,93963,8140.25
3. low veg030118,1135145353119,6540.01
4. con. forest031090,36935,470189126,0410.28
5. dec. forest0613242336100,59834104,2980.04
6. built-up12758455367,85968,5830.01
Σ717048,186122,75092,715137,55281,200489,573
OE0.160.010.040.030.270.16OA0.879
KIA0.847
Table 13. Error matrix for classification spectral + Laplacian of test image 2: WorldView-2.
Reference Image
1. water2. soil3. low veg4. con. forest5. dec. forest6. built-upΣCE
classification1. water69200027436473150.05
2. soil19547,5882553318010,64359,0920.19
3. low veg073121,95249881291123,4260.01
4. con. forest1011088,34589555797,4680.09
5. dec. forest004323514127,7002131,6480.03
6. built-up5452510170,04370,6240.01
Σ717048,186122,75092,715137,55281,200489,573
OE0.030.010.010.050.070.14OA0.945
KIA0.930
Table 14. Summary of the results for test image 3: Sentinel-2 (10 m).
Classification Variant | Overall Accuracy (OA) | Kappa Index of Agreement (KIA)
spectral | 0.93 | 0.90
spectral + Laplacian | 0.92 | 0.90
spectral + GLCM5 | 0.95 | 0.93
spectral + GLCM7 | 0.95 | 0.93
spectral + GLCM10 | 0.94 | 0.92
spectral + GLCM13 | 0.94 | 0.92
spectral + gran5 | 0.97 | 0.96
spectral + gran7 | 0.97 | 0.96
spectral + gran10 | 0.98 | 0.97
spectral + gran13 | 0.97 | 0.96
spectral + MSEgran5 | 0.98 | 0.97
spectral + MSEgran7 | 0.97 | 0.96
spectral + MSEgran10 | 0.97 | 0.96
spectral + MSEgran13 | 0.97 | 0.96
Table 15. Error matrix for spectral classification of test image 3: Sentinel-2.
Reference Image
1. water2. soil3. low veg4. con. forest5. dec. forest6. built-upΣCE
classification1. water13,67100012113,6930.00
2. soil070,47726502215672,9000.03
3. low veg42861,14417171792262,9860.03
4. con. forest250059,2253372859,6150.01
5. dec. forest001067119716,5683118,8630.12
6. built-up111,4891822211,21922,8950.51
Σ13,70181,99462,65860,59517,62714,377250,952
OE0.000.140.020.020.060.22OA0.926
KIA0.902
Table 16. Error matrix for classification spectral + gran10 of test image 3: Sentinel-2.
Reference Image
1. water2. soil3. low veg4. con. forest5. dec. forest6. built-upΣCE
classification1. water13,6200104002413,7480.01
2. soil081,1782430372582,1490.01
3. low veg426361,44916671229262,8860.02
4. con. forest73022058,5981221359,0260.01
5. dec. forest40642183116,78928119,5470.14
6. built-up055300113,04213,5960.04
Σ13,70181,99462,65860,59517,62714,377250,952
OE0.010.010.020.030.050.09OA0.975
KIA0.967
Table 17. Error matrix for classification spectral + GLCM7 of test image 3: Sentinel-2.
Reference Image
1. water2. soil3. low veg4. con. forest5. dec. forest6. built-upΣCE
classification1. water13,596001204113713,8940.02
2. soil077,49629504144279,2370.02
3. low veg093561,720365230554765,8720.06
4. con. forest90311559,8162290262,3160.04
5. dec. forest15452029412,98627414,0930.08
6. built-up0355680111,97515,5400.23
Σ13,70181,99462,65860,59517,62714,377250,952
OE0.010.050.010.010.260.17OA0.947
KIA0.930
Table 18. Error matrix for classification spectral + Laplace of test image 3: Sentinel-2.
Reference Image
1. water2. soil3. low veg4. con. forest5. dec. forest6. built-upΣCE
classification1. water13,662007521013,7310.01
2. soil068,94631602185971,1230.03
3. low veg64461,273141652102563,1410.03
4. con. forest320059,6871502059,8890.00
5. dec. forest10103175816,7715818,6190.10
6. built-up013,004382011,40524,4490.53
Σ13,70181,99462,65860,59517,62714,377250,952
OE0.000.160.020.010.050.21OA0.923
KIA0.900
