Article

Object-Oriented Unsupervised Classification of PolSAR Images Based on Image Block

Binbin Han, Ping Han * and Zheng Cheng
1 Tianjin Key Lab for Advanced Signal Processing, Civil Aviation University of China, Tianjin 300300, China
2 Engineering Techniques Training Center, Civil Aviation University of China, Tianjin 300300, China
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(16), 3953; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14163953
Submission received: 30 June 2022 / Revised: 2 August 2022 / Accepted: 3 August 2022 / Published: 14 August 2022
(This article belongs to the Special Issue SAR in Big Data Era II)

Abstract

Land Use and Land Cover (LULC) classification is one of the tasks of Polarimetric Synthetic Aperture Radar (PolSAR) image interpretation, and the classification performance of existing algorithms is highly sensitive to the preset class number, which is inconsistent with the reality that LULC classification should support multiple levels of detail in the same image. Therefore, an object-oriented unsupervised classification algorithm for PolSAR images based on the image block is proposed. Firstly, the image is divided into multiple non-overlapping image blocks, and h/q/gray-Wishart classification is performed in each block. Secondly, each cluster obtained is regarded as an object, and the affinity matrix of the objects is calculated over the global image. Finally, the objects are merged into the specified class number by Density Peak Clustering (DPC), and adjacent objects at the block boundaries are checked and forcibly merged. Experiments were carried out with measured data from the airborne AIRSAR and E-SAR systems and the spaceborne GF-3. The experimental results show that the proposed algorithm achieves good classification results under a variety of class numbers.

1. Introduction

Polarimetric Synthetic Aperture Radar (PolSAR) is an active microwave imaging system with all-day, all-weather imaging capability, high resolution, and strong penetration, and it is an important part of modern remote sensing systems. With the launch of high-performance spaceborne radar systems such as TerraSAR-X in Germany, RADARSAT-2 in Canada, ALOS-2 in Japan, and GaoFen-3 in China, the volume of PolSAR data has exploded, and the ability to interpret radar images urgently needs to be improved.
Land Use and Land Cover (LULC) classification is one of the important aspects of PolSAR image interpretation, and its results not only serve disaster assessment and geological exploration to produce economic value, but also support the subsequent tasks of target detection and recognition [1]. According to whether training samples are available, classification algorithms can be broadly divided into supervised and unsupervised. Supervised algorithms can be further subdivided into traditional supervised classification and deep learning. Kong et al. [2] designed a maximum likelihood classifier based on the Bayes criterion and the assumption that the scattering vector obeys a complex Gaussian distribution; Lee et al. [3] proposed a Wishart classifier based on the assumption that the polarimetric coherency or covariance matrices obey the complex Wishart distribution. Fukuda et al. [4] and Zou et al. [5], respectively, introduced the Support Vector Machine (SVM) and Random Forest (RF) classifiers to PolSAR image classification, proving that machine learning algorithms can also be used for PolSAR classification. Traditional supervised classification is characterized by manual feature extraction and a small number of training samples (compared with deep learning). Deep learning eliminates complicated manual feature extraction and directly designs an end-to-end network from raw data input to classification output, thus achieving better classification results than traditional supervised algorithms. Gao et al. [6] proposed a dual-branch Convolutional Neural Network (CNN) to learn effective features from the polarimetric coherency matrix and the PauliRGB image and achieved good classification results. Xie et al. [7] combined the Wishart distance with an Auto-Encoder (AE) and a Convolutional Auto-Encoder (CAE), respectively, to obtain Wishart-AE and Wishart-CAE, which further improved classification accuracy. Zhang et al. [8] proposed a complex-valued convolutional neural network to achieve end-to-end classification of PolSAR images, in view of the fact that PolSAR data are complex. Furthermore, Oveis et al. [9] extensively discussed the applications of CNNs to SAR Automatic Target Recognition (ATR), LULC classification, noise removal, and change and anomaly detection. However, the excellent performance of supervised classification depends on massive, high-quality training samples, and sample labeling must be carried out by specialized personnel, which incurs considerable labor and time costs. In addition, terrain annotation faces the following problems: (1) in a multi-annotator setting, it is difficult to ensure labeling consistency; (2) for fine labeling, it is difficult to enumerate all terrain types; (3) terrain labeling is often tied to application scenarios, and different applications require labeling at different levels of fineness. These annotation problems increase the difficulty of applying supervised classification.
Unsupervised classification does not need labeled samples for training; it is mainly based on the aggregation characteristics of the samples. Existing unsupervised classification algorithms can be divided into two categories: those based on scattering or statistical characteristics and those based on machine learning. Cloude et al. [10] proposed the H/α decomposition method and divided PolSAR images into eight categories by setting fixed thresholds. Chen et al. [11] proposed the concept of scattering similarity and applied it to PolSAR classification. Lee et al. [12] combined the H/α decomposition features with the Wishart classifier and proposed the classical H/α-Wishart classification algorithm. In [13], Freeman decomposition was combined with the Wishart classifier to propose a hierarchically structured classification method that preserves the main scattering mechanism of the terrain. Drawing on achievements in machine learning, fuzzy C-means clustering and spectral clustering were introduced to PolSAR images in [14,15], respectively. Considering the help of neighborhood information for classification, the Markov random field was also introduced to the PolSAR image classification task and has been widely studied [16,17,18,19]. All the above classification algorithms need a preset class number, which has an important influence on the classification results. To address this, Cao et al. [20] and Mohammed et al. [21] completed classification by designing a hierarchical "over-segmentation then cluster-merging" structure, which can automatically estimate the class number or determine whether clusters should be merged. Regardless of whether the class number is preset or estimated automatically, the above classification algorithms are highly sensitive to this parameter. Only when the class number is properly set can these algorithms achieve good classification results, which is inconsistent with the reality that LULC classification should support multiple levels of detail in the same image.
Based on the above analysis, this paper designs an object-oriented unsupervised classification algorithm for PolSAR images based on the image block, which also has a hierarchical structure. The stage of over-segmentation, or object generation, rests on two hypotheses: (1) compared with the whole image, a local image patch contains simpler terrain; (2) compared with complex scenes, an algorithm applied to simple scenes can achieve a better classification effect. Firstly, the image is divided into multiple non-overlapping image blocks, and classification is then carried out separately in each block. Considering both terrain discrimination and feature computation efficiency, the classification within blocks is completed by h/q/gray-Wishart classification. In the cluster-merging, or object-merging, stage, the algorithm regards each over-classified cluster as an object, and classification is carried out at the object level. To achieve this, the algorithm first calculates the affinity matrix, which consists of the log-Euclidean Riemannian distances measuring object similarity, then uses Density Peak Clustering (DPC) to determine the merging relationships between objects and merge them into the specified class number. Because the image blocks are drawn artificially, a block boundary rarely corresponds to a real terrain boundary, and there is no information interaction between image blocks during object generation, so adjacent objects at a block boundary are not always merged into the same class. The algorithm therefore adds a post-processing step that forcibly merges adjacent objects at the image block boundaries.
The main contribution of this paper is to propose a robust and efficient unsupervised classification algorithm for PolSAR images. It has the following advantages:
  • The algorithm has a hierarchical structure, and the final classification is produced only in the object-merging stage. Therefore, the algorithm can obtain classifications of different fineness without significantly increasing the amount of computation.
  • DPC has few parameters and strong robustness and can organize the merging relationships of objects as a tree, which effectively ensures the classification performance of the algorithm under different class numbers.
  • The object generation of the algorithm is carried out in non-overlapping image blocks, which can be processed in parallel in practical applications to ensure the efficiency of the algorithm.
  • Each cluster obtained from the image blocks is regarded as an object, which greatly reduces the dimension of the affinity matrix, ensures the feasibility of DPC, and also reduces the number of objects to be processed.
  • Classification within the image blocks is implemented by h/q/gray-Wishart classification, which has a clear physical meaning and simple calculation. Moreover, simplifying the content to be classified within each block improves the convergence speed of the Wishart iteration.
  • If the Earth is taken as the global image, then even the usual large images are merely image blocks, and all unsupervised classification faces the problem of object merging. Therefore, the structure of the proposed algorithm is of universal significance and can be applied to larger scenes through multi-layer settings.
The structure of this paper is as follows. Section 2 gives the theoretical basis of the algorithm and its concrete implementation; Section 3 provides extensive experimental analysis; Section 4 concludes the paper.

2. Methodology

2.1. PolSAR Data

PolSAR data can be expressed as the following scattering matrix:
$$\mathbf{S} = \begin{bmatrix} S_{\mathrm{HH}} & S_{\mathrm{HV}} \\ S_{\mathrm{VH}} & S_{\mathrm{VV}} \end{bmatrix} \tag{1}$$
where $S_{\mathrm{HV}}$ represents the complex scattering coefficient obtained when the PolSAR system emits electromagnetic waves in horizontal polarization and receives them in vertical polarization. Under the assumption of scattering reciprocity ($S_{\mathrm{HV}} = S_{\mathrm{VH}}$), $\mathbf{S}$ can be expanded into the following vector under the Pauli matrix basis set:
$$\mathbf{k} = \frac{1}{\sqrt{2}} \begin{bmatrix} S_{\mathrm{HH}} + S_{\mathrm{VV}} & S_{\mathrm{HH}} - S_{\mathrm{VV}} & 2 S_{\mathrm{HV}} \end{bmatrix}^{\mathrm{T}} \tag{2}$$
Multi-look PolSAR data are generally expressed by averaging the outer products of several spatially adjacent scattering vectors with their conjugate transposes, which is called the polarimetric coherency matrix:
$$\mathbf{T} = \frac{1}{L} \sum_{i=1}^{L} \mathbf{k}_i \mathbf{k}_i^{\mathrm{H}} \tag{3}$$
where $L$ is the number of looks and $(\cdot)^{\mathrm{H}}$ denotes the conjugate transpose of a matrix or vector. $\mathbf{T}$ usually obeys a complex Wishart distribution:
$$p(\mathbf{T} \mid L, \boldsymbol{\Sigma}) = \frac{L^{pL} \, |\mathbf{T}|^{L-p} \exp\!\left(-L \cdot \mathrm{Tr}\!\left(\boldsymbol{\Sigma}^{-1} \mathbf{T}\right)\right)}{K(L,p) \, |\boldsymbol{\Sigma}|^{L}} \tag{4}$$
where $K(L,p) = \pi^{p(p-1)/2} \prod_{i=1}^{p} \Gamma(L-i+1)$ is a constant related to the Gamma function, $p$ is the dimension of $\mathbf{T}$ (here $p = 3$), $\boldsymbol{\Sigma}$ is the mathematical expectation of $\mathbf{T}$, $|\cdot|$ is the matrix determinant, and $\mathrm{Tr}(\cdot)$ is the matrix trace. On the basis of the above Wishart distribution, Pottier et al. [22] obtained the following Wishart distance by omitting irrelevant terms under the maximum likelihood criterion:
$$d_w(\mathbf{T}, \mathbf{V}_m) = \ln |\mathbf{V}_m| + \mathrm{Tr}\!\left(\mathbf{V}_m^{-1} \mathbf{T}\right) \tag{5}$$
where $\mathbf{V}_m$ is the $m$th cluster center, taken as the mean coherency matrix over cluster $C_m$, i.e., $\mathbf{V}_m = \langle \mathbf{T}_i \rangle_{i \in C_m}$, and $\ln$ is the natural logarithm. The Wishart classifier is obtained by calculating the Wishart distance from each pixel to all cluster centers and assigning the pixel to the nearest cluster.
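To make the iteration concrete, the following is a minimal Python/NumPy sketch of one Wishart assignment/update step, assuming each pixel is represented by a 3 × 3 complex coherency matrix; the function names are illustrative and not taken from the paper's implementation (the paper's experiments were run in MATLAB).

```python
import numpy as np

def wishart_distance(T, V):
    """Wishart distance of Equation (5): ln|V| + Tr(V^{-1} T)."""
    _, logdet = np.linalg.slogdet(V)                  # numerically stable ln|V|
    return logdet + np.trace(np.linalg.solve(V, T)).real

def wishart_iteration(pixels, centers):
    """One assignment/update step over 3x3 complex coherency matrices."""
    labels = np.array([np.argmin([wishart_distance(T, V) for V in centers])
                       for T in pixels])
    # V_m is the mean coherency matrix of cluster m; a real implementation
    # would also need to handle clusters that become empty.
    centers = [np.mean([T for T, l in zip(pixels, labels) if l == m], axis=0)
               for m in range(len(centers))]
    return labels, centers
```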

2.2. h / q Decomposition

Polarimetric scattering entropy H and polarimetric scattering angle α are two common features of PolSAR images, but their extraction requires a matrix eigen-decomposition, which is computationally expensive, and α is unstable in some special circumstances. Therefore, An et al. [23] put forward the alternative parameters h/q. h has a physical meaning similar to H and characterizes the scattering randomness of targets; q, called surface similarity, indicates the similarity between a target and the surface scattering mechanism.
$$h = \frac{T_{11} + T_{22} + T_{33}}{\sqrt{\sum_{i=1}^{3}\sum_{j=1}^{3} |T_{ij}|^{2}}}, \qquad q = \frac{T_{11}}{T_{11} + T_{22} + T_{33}} \tag{6}$$
where $T_{ij}$ is the element of the polarimetric coherency matrix $\mathbf{T}$ in the $i$th row and $j$th column (a scalar), and $|\cdot|$ denotes the absolute value here.
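As an illustration, the h/q features of Equation (6) reduce to a few array operations; this sketch assumes T is a 3 × 3 complex NumPy array, and the function name is hypothetical.

```python
import numpy as np

def h_q_features(T):
    """h/q parameters of Equation (6) for a 3x3 coherency matrix T."""
    trace = (T[0, 0] + T[1, 1] + T[2, 2]).real   # T11 + T22 + T33 (real for Hermitian T)
    fro = np.sqrt((np.abs(T) ** 2).sum())        # Frobenius norm of T
    h = trace / fro                              # scattering randomness, similar to H
    q = T[0, 0].real / trace                     # surface similarity
    return h, q
```

Unlike H/α, no eigen-decomposition is required, which is the efficiency advantage exploited in the block-wise classification.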

2.3. Log-Euclidean Riemannian Metric

Polarimetric coherency matrix $\mathbf{T}$ is a positive definite (or semi-positive definite) matrix. The Affine-Invariant Riemannian Metric (AIRM) was obtained in [24] by considering the geometric manifold structure of the data, but its computational complexity is very high. For this reason, Arsigny et al. [25] proposed the Log-Euclidean Riemannian Metric (LERM) as a replacement, whose matrix logarithms can be calculated offline, reducing the amount of computation:
$$d_{\mathrm{LERM}}(\mathbf{T}_m, \mathbf{T}_n) = \left\| \log(\mathbf{T}_m) - \log(\mathbf{T}_n) \right\|_{\mathrm{F}} \tag{7}$$
where $\log$ is the matrix logarithm and $\|\cdot\|_{\mathrm{F}}$ is the matrix Frobenius norm. The matrix logarithm is clearly more expensive to compute than the Frobenius norm. However, as can be seen from (7), the matrix logarithm can be computed once per object, so the number of logarithm evaluations grows linearly with the number of objects rather than quadratically, which greatly reduces the cost of computing the affinity matrix.
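The following sketch shows how the affinity matrix of Section 2.5 might be built under the LERM, assuming each object is represented by its mean 3 × 3 coherency matrix; scipy.linalg.logm computes the matrix logarithm, and the O(n) logarithms are separated from the O(n²) norm evaluations as described above.

```python
import numpy as np
from scipy.linalg import logm

def lerm_affinity(objects):
    """Pairwise LERM distances of Equation (7) between object representations."""
    logs = [logm(T) for T in objects]            # one matrix logarithm per object
    n = len(logs)
    D = np.zeros((n, n))
    for m in range(n):
        for k in range(m + 1, n):                # only the cheap Frobenius norm is O(n^2)
            D[m, k] = D[k, m] = np.linalg.norm(logs[m] - logs[k], 'fro')
    return D
```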

2.4. DPC

DPC [26] can intuitively find cluster centers in arbitrarily distributed data by defining two quantities: density ρ and distance δ. It has few parameters, is simple to use, and is robust, and it has achieved good clustering results on some challenging datasets. The algorithm assumes that cluster centers satisfy two conditions: their local density is a local peak, and different cluster centers are relatively far apart. Based on these two assumptions, the local density $\rho_i$ of sample $i$ is defined as
$$\rho_i = \sum_{j} \chi\!\left(d_{ij} - d_c\right) \tag{8}$$
where $\chi(x) = 1$ if $x < 0$ and $\chi(x) = 0$ otherwise, $d_{ij}$ is the distance between any two samples, and $d_c$ is the distance threshold, generally set to the 1–2% quantile of the pairwise sample distances. DPC also defines the sample distance $\delta_i$, the shortest distance from sample $i$ to any sample with a larger local density, namely
$$\delta_i = \min_{j:\, \rho_j > \rho_i} d_{ij} \tag{9}$$
A two-dimensional scatter diagram of the samples is obtained from density ρ and distance δ, in which cluster centers and non-center samples can be clearly distinguished. After the cluster centers are obtained, clustering is completed in one pass by assigning each non-center sample to the cluster of its nearest neighbor with higher density.
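A compact sketch of DPC on a precomputed distance matrix follows; it uses the cut-off density of Equation (8), selects the N largest ρ·δ products as centers (as in Section 2.5), and assigns the remaining samples in descending density order. Variable names are illustrative.

```python
import numpy as np

def density_peak_clustering(D, dc, n_clusters):
    """DPC on an n x n distance matrix D with cut-off distance dc."""
    n = D.shape[0]
    rho = (D < dc).sum(axis=1) - 1               # Eq. (8); subtract the self-count
    order = np.argsort(-rho)                     # samples in descending density
    delta = np.zeros(n)
    parent = np.zeros(n, dtype=int)              # nearest higher-density neighbor
    delta[order[0]] = D[order[0]].max()          # densest sample gets the max distance
    parent[order[0]] = order[0]                  # in practice it is always a center
    for rank in range(1, n):
        i, higher = order[rank], order[:rank]
        j = higher[np.argmin(D[i, higher])]      # Eq. (9)
        delta[i], parent[i] = D[i, j], j
    centers = np.argsort(-(rho * delta))[:n_clusters]
    labels = np.full(n, -1)
    labels[centers] = np.arange(n_clusters)
    for i in order:                              # parents are labeled before children
        if labels[i] < 0:
            labels[i] = labels[parent[i]]
    return labels
```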

2.5. Proposed Method

The flow chart of the proposed algorithm is shown in Figure 1. As the figure shows, the algorithm mainly consists of two processes: object generation and object merging. The realization of each step is described in detail below:
  • Pre-processing by filtering: Speckle noise is common in PolSAR images and has a serious impact on image interpretation, so speckle filtering is usually an essential pre-processing step. To facilitate comparison with other algorithms, the refined Lee filter [27] was selected here; in practice, filters with better performance can be used.
  • Segmenting the image into blocks: The image to be classified is divided into multiple non-overlapping image blocks. This step involves the image block size S.
  • Classification within image blocks: From the polarimetric coherency matrix T, the parameters h/q are calculated by Formula (6), and the gray image is obtained from the PauliRGB image. Similar to [10], PolSAR images are divided into 8 categories by thresholding h/q, as shown in Figure 2. In order to distinguish terrain with the same scattering mechanism but different scattering power, OTSU [28] is used to subdivide each cluster into 2 categories according to the gray feature. Therefore, a maximum of 16 clusters (objects) can be obtained in each image block. This step involves the optimization stopping parameter I of the Wishart classifier iteration.
  • Calculating the affinity matrix between objects: The arithmetic mean over all pixels of a cluster is used as the representation of the object (the principle is similar to multi-look processing, as in Formula (3)), and the similarity between objects is measured by the LERM of Formula (7). The dimension of the affinity matrix depends on the parameter S and the complexity of the image content; therefore, it can always be kept within acceptable computational limits by adjusting S.
  • Merging objects based on DPC: The local density ρ and sample distance δ of each object are calculated according to Formulas (8) and (9), respectively, and the merging relationship between objects is determined by the principle that each non-center sample is assigned to the cluster of its nearest higher-density neighbor. ρ × δ is taken as the index of an object's suitability to be a class center; the N objects with the largest index are selected as the initial cluster centers, and object merging then proceeds step by step. This step involves the distance threshold d_c and the class number N.
  • Forcibly merging adjacent objects at the image block boundaries: The image is segmented artificially, so a block boundary rarely corresponds to a real terrain boundary, and the two adjacent objects on either side of a block boundary usually belong to the same terrain. Therefore, the algorithm identifies the adjacent object pairs located at the image block boundaries and assigns pairs whose adjacency degree exceeds a preset threshold the same label (a minimal code sketch follows this list). This step involves the adjacency parameter R (expressed as the ratio of the number of adjacent pixels to the image block size).
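The forced-merge post-processing can be sketched as follows. This is an assumption-laden illustration rather than the authors' code: obj_ids is assumed to be a label map of globally unique object ids produced by the block stage, labels is the class map after DPC merging, and only vertical block borders are shown, with horizontal borders handled symmetrically.

```python
import numpy as np

def merge_boundary_objects(labels, obj_ids, S, R):
    """Force-merge object pairs adjacent across vertical block borders."""
    rows, cols = labels.shape
    for c in range(S, cols, S):                  # first column of each new block
        left, right = obj_ids[:, c - 1], obj_ids[:, c]
        pairs, counts = np.unique(np.stack([left, right]), axis=1,
                                  return_counts=True)
        for (a, b), cnt in zip(pairs.T, counts):
            # adjacency degree = shared border pixels / block size S
            if a != b and cnt / S >= R:
                labels[obj_ids == b] = labels[obj_ids == a][0]
    return labels
```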

3. Experiments

In order to verify the effectiveness of the proposed algorithm, measured data of San Francisco collected by the U.S. AIRSAR, Oberpfaffenhofen collected by the German E-SAR, and Tianjin Airport collected by the Chinese GF-3 were used for the experiments. The San Francisco and Oberpfaffenhofen data are public. The experiments mainly covered parameter analysis and classification performance. The main body of the paper presents the experimental results for San Francisco; the results for Oberpfaffenhofen and Tianjin Airport are given in Appendix A and Appendix B. The experiments were carried out on the MATLAB platform. Classification evaluation generally includes qualitative and quantitative analysis. Qualitative evaluation displays the classification results in pseudo-color and analyzes whether they meet the expectations of manual interpretation. Quantitative evaluation measures the difference between the classification result and the ground truth through indicators such as accuracy, recall, the confusion matrix, and the Kappa coefficient. Unsupervised classification only groups data according to their aggregation characteristics, without the directional guidance of a ground truth. Therefore, the classification results generally differ from the ground truth, but such differences do not negate the rationality of unsupervised classification, so qualitative methods are mainly used in the subsequent analysis.
In order to match the classification results with the scattering types of the terrain in the pseudo-color image (PCST) as much as possible, the pseudo-colors were set as follows. Based on the colors in Figure 2 (since each cluster is subdivided into two subclasses, the paper differentiates them by different color depths, as shown in Figure 3), the proportion of each initial class within each final class was counted. Five regions were defined: the red zone (initial classes 3, 6, 7, 11, 14, 15), the blue zone (initial classes 8, 16), the grey zone (initial classes 1, 5, 9, 13), green zone 1 (initial classes 2, 10), and green zone 2 (initial classes 4, 12). The primary color of the pseudo-color was determined by the region with the largest proportion, and the final color was determined by the initial class with the largest proportion within that region. Although this pseudo-color rule may merge some classes that share the same main scattering mechanism, it significantly reduces the difficulty of interpreting the classification results.
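For reference, the pseudo-color grouping described above can be written directly as a lookup; the zone names are shorthand for the colors in Figure 3, and the mapping follows the initial-class groupings stated in the text.

```python
# Grouping of the 16 initial h/q/gray classes into color zones,
# as defined in the pseudo-color rule above.
ZONES = {
    "red":    {3, 6, 7, 11, 14, 15},
    "blue":   {8, 16},
    "grey":   {1, 5, 9, 13},
    "green1": {2, 10},
    "green2": {4, 12},
}

def zone_of(initial_class: int) -> str:
    """Return the color zone of an initial class (1-16)."""
    for zone, members in ZONES.items():
        if initial_class in members:
            return zone
    raise ValueError(f"unknown initial class: {initial_class}")
```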

3.1. Data Introduction

The PauliRGB image of the PolSAR data for San Francisco is shown in Figure 4a. The data were collected by the AIRSAR system of the Jet Propulsion Laboratory (JPL) using L-band imaging, were processed with 4 looks, and have a size of 1024 × 900 pixels. Figure 4b is the Google Earth image of the corresponding area, and Figure 4c,d are the ground truths of these data from [29] and [30], respectively. Reference [29] divided the terrain into 5 categories: sea (blue), urban (red), angled urban (purple), vegetation (green), and mountain (dark green), with black denoting invalid pixels, while Reference [30] marked urban and angled urban together as building area. Comparing the Google Earth image, it can be seen that the ground truths in these two works are coarse-grained, and many details of the terrain are not marked; for example, the vegetation contains man-made structures such as houses and bare land such as sports fields, and the ocean contains targets such as ships.
The PauliRGB image of the PolSAR data for Oberpfaffenhofen is shown in Figure 5a. The data were collected by the E-SAR system of the German Aerospace Center using L-band imaging, and their size is 1300 × 1200 pixels. Figure 5b is the Google Earth image of the corresponding area, and Figure 5c,d are the ground truths of these data from [30] and [31], respectively. Reference [30] divided the terrain into 5 categories: suburban (red), woodland (green), road (cyan), farmland (purple), and other (blue), while [31] divided it into 3 categories: built-up area (red), woodland (yellow), and open areas (green), with white denoting invalid pixels. It is easy to see that these two works differ greatly in the granularity of the ground truth, and the annotation details of suburban (built-up area) also differ. As can be seen from the Google Earth image, the building area on the right is significantly different from the other areas, although these areas are all built up; in some applications, these areas may need to be distinguished.
The PauliRGB image of the PolSAR data for Tianjin Airport is shown in Figure 6a. These data were collected by the Chinese GF-3 using C-band imaging, and their size is 1000 × 800 pixels. Figure 6b is the Google Earth image of the corresponding area. It can be seen that the scene has rich content, including not only the airport, bare land, grassland, farmland, and ponds, but also residential areas with high building density and factory areas with low building density, and their distribution is chaotic, so it is difficult to label the ground truth manually.
The above two public datasets have different ground truths, and it is difficult to reject either of them, which to some extent supports the above analysis that producing a ground truth is subject to human subjectivity. Furthermore, as observed from the PolSAR and optical remote sensing images, some applications may require a finer classification than the existing ground truths provide. At the same time, the three images come from different imaging systems and each contains a variety of terrain types, so experiments on them are conducive to analyzing the classification performance of the proposed algorithm.

3.2. Parameter Analysis

The algorithm mainly involves 5 parameters: the class number N, the optimization stopping parameter I, the image block size S, the adjacency parameter R, and the distance threshold d_c. The classification results under different values of these parameters are discussed below.

3.2.1. Class Number N

Figure 7 gives the initial classification results, the results after optimization within the blocks, and the final results with class numbers of 3, 5, 7, and 9. The pseudo-color results of "label2rgb", a library function of the MATLAB platform, are also presented in the figure. The other parameters were set as follows: S = 90 (segmenting the image into 110 blocks), I = 5, d_c determined by the 1% quantile, and R = 0.5. As can be seen from Figure 7a, ocean, urban, and vegetation are mainly marked as blue, red, and green, respectively, in the initial classification, which is a good starting point for iterative optimization. However, some problems remain: angled urban and the interface between urban and ocean are not effectively separated from vegetation. After the optimization within the image blocks, the pseudo-color shows the texture structure of the terrain more clearly (Figure 7d); meanwhile, the pseudo-color of many pixels has changed (Figure 7c), which means that the terrain type of a cluster can differ from that of its pixels in the initial classification. This does not mean that the Wishart iteration has worsened the LULC classification, but rather that h/q/gray is an over-classification, that is, a single terrain type may contain multiple h/q/gray class types. These over-classification characteristics of h/q/gray and the image block exactly meet the requirement of object generation to produce many single-type clusters. Further, Figure 7e–l clearly shows the object-merging behavior of DPC; as objects are merged, the terrain types of urban and vegetation gradually change from multiple to single, and the classification fineness goes from fine to coarse. When the class number is 3 or 5, the visual effect of the classification results is good: not only are sea, urban, and mountain (vegetation) correctly distinguished and given appropriate pseudo-colors, but so are smaller terrain types, such as buildings and bare land in the vegetation and vegetation in the urban area. When the class number is 7 or 9, there are some errors in the coloring scheme designed in this paper, which significantly reduce the visual effect; for example, some urban areas are rendered green when the class number is 7, and vegetation is rendered grey when the class number is 9. However, observing the "label2rgb" pseudo-color images shows that the algorithm does not confuse vegetation and urban, and the boundary between these two terrain types remains clear. The newly added categories mainly subdivide urban at a finer granularity, which is mainly because urban is essentially mixed terrain. At the same time, the algorithm distinguishes mountains and vegetation well with 9 classes.

3.2.2. Optimization Stopping Parameter I

The optimization stopping parameter applies to object generation, where classification is carried out per pixel. Although misclassifications are difficult to correct in subsequent steps, the goal of this stage is over-classification, so a small proportion of misclassified pixels does not seriously affect the final result. Figure 8 plots the relative change rate between two adjacent iterations against the number of iterations. The red curve is for the global image, and the blue curve is the average over the image blocks, where the image block size is 90 (segmenting the image into 110 blocks). As can be seen from Figure 8, the classification results of the image blocks converge monotonically as the number of iterations increases, and the convergence speed is better than that of the global image; moreover, the convergence curve of the global image shows some volatility. For the San Francisco data, it is appropriate to set the optimization stopping parameter I to 5.

3.2.3. Image Block Size S

The image block size has an effect on the final result similar to that of the optimization stopping parameter. Figure 9 shows the classification results when the image block size is 75 (segmenting the image into 168 blocks), 90 (110 blocks), and 113 (72 blocks). The other parameters were set as follows: N = 5, I = 5, d_c determined by the 1% quantile, and R = 0.5. As can be seen from the classification results, the algorithm distinguishes sea, urban, and vegetation well, except that the pseudo-color rule renders the vegetation white when the image block size is 113, which significantly reduces the visual effect.

3.2.4. Adjacency Parameter R

The adjacency parameter is only applied in the final forced-merging post-processing and has no obvious effect on the final classification result. Figure 10 shows the classification results when the adjacency parameter R is 0.25, 0.5, 0.75, and 1, respectively. The other parameters were set as follows: S = 90 (segmenting the image into 110 blocks), N = 5, I = 5, and d_c determined by the 1% quantile. R = 1 means that no forcible merging of adjacent objects is performed. As can be seen from Figure 10, when the parameter is 0.25, 0.5, or 0.75, the same forced merging of adjacent objects is carried out, and the classification results are identical. However, when the parameter is 1, forced merging is not carried out, and there are obvious segmentation traces in the sea, which significantly reduces the visual effect.

3.2.5. Distance Threshold d c

The distance threshold is the only parameter of the DPC algorithm; it determines the merging relationships of the objects and has an important influence on the final classification result. Figure 11 shows the classification results when the distance threshold is 0.5%, 1.0%, 1.5%, and 2.0%, respectively. The other parameters were set as follows: S = 90 (segmenting the image into 110 blocks), N = 5, I = 5, and R = 0.5. As can be seen from Figure 11, when the distance threshold is 0.5%, the algorithm partitions sea, urban, and vegetation well, and the shallow-water area at the boundary between sea and urban and the bare land scattered in the vegetation are labeled as a fourth, detail-rich class. When the distance threshold is 1.5%, the algorithm can only recognize sea, urban, and vegetation and cannot distinguish finer-grained terrain. When the distance threshold is 2.0%, the classification performance degrades significantly, confusing sea and vegetation. This may be because an excessive distance threshold d_c prevents the local density ρ of DPC from accurately describing the distribution characteristics of the objects.

3.3. Classification Effect

In this paper, H/A/α-Wishart classification [22] was reproduced as a comparison. In addition to the original 16 classes, the class number was adjusted to 5 by successively merging the nearest clusters. At the same time, in order to demonstrate the rationality of the image block, h/q/gray-Wishart classification without the image block was also compared. The iteration number of the Wishart classifier was 5. The classification results are shown in Figure 12. As can be seen from Figure 12, when the class number is 16, all 3 algorithms classify the ground objects in detail, with rich texture, but the visual effect of the results of this paper is significantly better than that of the other two algorithms. When the class number is 5, both the H/A/α-Wishart and h/q/gray-Wishart algorithms without the image block mix sea and urban, and their visual effects decrease significantly, while the proposed algorithm still maintains good visual effects. The classification results under these 2 class numbers show, to a certain extent, that the proposed algorithm has good classification performance under different class numbers.

4. Conclusions

In view of the fact that unsupervised classification methods are sensitive to the class number, an object-oriented unsupervised classification algorithm based on the image block was proposed. The algorithm first completes the classification with h/q/gray-Wishart classification in the image blocks, then regards each cluster as an object and completes the object merging over the global image using density peak clustering. Finally, adjacent objects located at the image block boundaries are forcibly merged in a post-processing step. The algorithm was verified on measured data from various sensors, and the experimental results demonstrated its classification effectiveness. Due to the limited ability of the initial h/q/gray classification to discriminate terrain types with similar scattering, the classification performance in some scenes, such as Flevoland, still needs to be improved. In future work, the classification performance and parameter robustness of the algorithm will be improved.

Author Contributions

B.H. and P.H. conceived of and designed the method and experiments; B.H. and Z.C. performed the experiments and analyzed the results; B.H. wrote the article; P.H. and Z.C. revised the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Fundamental Research Funds for the Central Universities (No. 3122019110).

Data Availability Statement

Some or all data, models, or code generated or used during the study are available from the corresponding author on request (hanpingcauc@163.com).

Acknowledgments

The authors thank the PolSAR project for providing the open-source software and experimental data (as distributed by the European Space Agency and China National Space Administration). The authors would also like to thank the Editors and Reviewers for their helpful comments and constructive suggestions.

Conflicts of Interest

The authors declare no potential conflict of interest.

Appendix A

The classification results for Oberpfaffenhofen are shown below. The arrangement of the experimental results is similar to that for San Francisco. Figure A1 shows the initial classification results, the intermediate classification results after Wishart optimization, and the final classification results with different class numbers. It can be seen from the figure that the main terrain is well distinguished in the initial classification; Wishart optimization makes the texture of the terrain in the intermediate results clearer, and bare land is well separated from vegetation and urban in the final classification. However, the algorithm does not distinguish bare land more finely (for example, the runway and other targets are not separated), and the discrimination of urban and vegetation needs further optimization. The global and local iterations in Figure A2 show similar convergence. Under the different image block sizes in Figure A3, although the final classification results show great changes in visual effect, the main terrain is still distinguished similarly. Similar classification results are obtained with different adjacency parameters in Figure A4. The change of the distance threshold from 1% to 2% in Figure A5 still has a significant impact on the final results. As shown in Figure A6, the overall visual effect of the proposed algorithm is still better than that of the other two algorithms.
Figure A1. Classification results of Oberpfaffenhofen with different class numbers. (a) Initial results with PCST. (b) Initial results with “label2rgb”. (c) Result after optimization with PCST. (d) Result after optimization with “label2rgb”. (e) Result of 3 classes with PCST. (f) Result of 3 classes with “label2rgb”. (g) Result of 5 classes with PCST. (h) Result of 5 classes with “label2rgb”. (i) Result of 7 classes with PCST. (j) Result of 7 classes with “label2rgb”. (k) Result of 9 classes with PCST. (l) Result of 9 classes with “label2rgb”.
Figure A2. The relative change rate of the whole image and image blocks.
Figure A3. Classification results with different block sizes. (a) Result with block size as 100. (b) Result with block size as 120. (c) Result with block size as 150.
Figure A4. Classification results with different adjacent parameters. (a) Result with adjacent parameter 0.25. (b) Result with adjacent parameter 0.5. (c) Result with adjacent parameter 0.75. (d) Result with adjacent parameter 1.00.
Figure A5. Classification results with different distance thresholds. (a) Result with distance threshold 0.5%. (b) Result with distance threshold 1.0%. (c) Result with distance threshold 1.5%. (d) Result with distance threshold 2.0%.
Figure A6. Classification results of different algorithms and class numbers. (a) Result of H/A/α-Wishart with 16 classes. (b) Result of H/A/α-Wishart with 5 classes. (c) Result of h/q/gray-Wishart with 16 classes. (d) Result of h/q/gray-Wishart with 5 classes. (e) Result of the proposed algorithm with 16 classes. (f) Result of the proposed algorithm with 5 classes.

Appendix B

The classification results for Tianjin Airport are shown below. Compared with the other two images, the texture of the Tianjin Airport data is richer. As observed from Figure A7a, the differences in the scattering characteristics of the main terrain are not very prominent. After Wishart iterative optimization, the differences in h/q/gray class types and texture structure are clearer (as shown in Figure A7c). In the final classification results, the airport area is well distinguished from the other terrain. When the class number is 7 or 9, the runway and apron are further distinguished. At the same time, the vegetation is not distinguished from the urban under the existing class numbers. In Figure A8, the local and global iterations show similar convergence. The final classification results obtained under the different block sizes in Figure A9 still show great changes in visual effect. Similar classification results are obtained under the different adjacency parameters in Figure A10 and the different distance thresholds in Figure A11. In Figure A12, the overall visual effect of the proposed algorithm is still the best.
Figure A7. Classification results of Tianjin Airport with different class numbers. (a) Initial results with PCST. (b) Initial results with “label2rgb”. (c) Result after optimization with PCST. (d) Result after optimization with “label2rgb”. (e) Result of 3 classes with PCST. (f) Result of 3 classes with “label2rgb”. (g) Result of 5 classes with PCST. (h) Result of 5 classes with “label2rgb”. (i) Result of 7 classes with PCST. (j) Result of 7 classes with “label2rgb”. (k) Result of 9 classes with PCST. (l) Result of 9 classes with “label2rgb”.
Figure A8. The relative change rate of the whole image and image blocks.
Figure A9. Classification results with different block sizes. (a) Result with block size as 67. (b) Result with block size as 80. (c) Result with block size as 100.
Figure A10. Classification results with different adjacent parameters. (a) Result with adjacent parameter 0.25. (b) Result with adjacent parameter 0.5. (c) Result with adjacent parameter 0.75. (d) Result with adjacent parameter 1.00.
Figure A11. Classification results with different distance thresholds. (a) Result with distance threshold 0.5%. (b) Result with distance threshold 1.0%. (c) Result with distance threshold 1.5%. (d) Result with distance threshold 2.0%.
Figure A12. Classification results of different algorithms and class numbers. (a) Result of H/A/α-Wishart with 16 classes. (b) Result of H/A/α-Wishart with 5 classes. (c) Result of h/q/gray-Wishart with 16 classes. (d) Result of h/q/gray-Wishart with 5 classes. (e) Result of the proposed algorithm with 16 classes. (f) Result of the proposed algorithm with 5 classes.

References

  1. Nie, X.L.; Huang, X.Y.; Zhang, B.; Qiao, H. Review on PolSAR Image Speckle Reduction and Classification Methods. Acta Autom. Sin. 2019, 45, 1419–1438. [Google Scholar]
  2. Kong, J.; Swartz, A.A.; Yueh, H.A.; Novak, L.M.; Shin, R.T. Identification of Terrain Cover using the Optimum Polarimetric Classifier. J. Electromagnet. Wave 1988, 2, 171–194. [Google Scholar]
  3. Lee, J.S.; Grunes, M.R.; Kwok, R. Classification of Multi-look Polarimetric SAR Imagery based on Complex Wishart Distribution. Int. J. Remote Sens. 1994, 15, 2299–2311. [Google Scholar] [CrossRef]
  4. Fukuda, S.; Hirosawa, H. Support Vector Machine Classification of Land Cover: Application to Polarimetric SAR Data. In Proceedings of the 2001 International Geoscience and Remote Sensing Symposium, Sydney, NSW, Australia, 9–13 July 2001. [Google Scholar]
  5. Zou, T.Y.; Yang, W.; Dai, D.X.; Sun, H. Polarimetric SAR Image Classification using Multifeatures Combination and Extremely Randomized Clustering Forests. EURASIP J. Adv. Signal Process. 2009, 2010, 465612. [Google Scholar] [CrossRef]
  6. Gao, F.; Huang, T.; Wang, J.; Sun, J.P.; Hussain, A.; Yang, E. Dual-Branch Deep Convolution Neural Network for Polarimetric SAR Image Classification. Appl. Sci. 2017, 7, 447. [Google Scholar] [CrossRef]
  7. Xie, W.; Jiao, L.C.; Hou, B.; Ma, W.P.; Zhao, J.; Zhang, S.Y.; Liu, F. PolSAR Image Classification via Wishart-AE Model or Wishart-CAE Model. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2017, 10, 3604–3615. [Google Scholar] [CrossRef]
  8. Zhang, Z.M.; Wang, H.P.; Xu, F.; Jin, Y.Q. Complex-Valued Convolutional Neural Network and Its Application in Polarimetric SAR Image Classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 7177–7188. [Google Scholar] [CrossRef]
  9. Oveis, A.H.; Giusti, E.; Ghio, S.; Martorella, M. A Survey on the Applications of Convolutional Neural Networks for Synthetic Aperture Radar: Recent Advances. IEEE Aero. El. Sys. Mag. 2022, 37, 18–42. [Google Scholar] [CrossRef]
  10. Cloude, S.R.; Pottier, E. An Entropy based Classification Scheme for Land Applications of Polarimetric SAR. IEEE Trans. Geosci. Remote Sens. 1997, 35, 68–78. [Google Scholar] [CrossRef]
  11. Chen, Q.; Kuang, G.Y.; Li, J.; Sui, L.C.; Li, D.G. Unsupervised Land Cover/Land Use Classification Using PolSAR Imagery Based on Scattering Similarity. IEEE Trans. Geosci. Remote Sens. 2013, 51, 1817–1825. [Google Scholar] [CrossRef]
  12. Lee, J.S.; Grune, M.R.; Ainsworth, T.L.; Du, L.J.; Schuler, D.L.; Cloude, S.R. Unsupervised Classification using Polarimetric Decomposition and the Complex Wishart Classifier. IEEE Trans. Geosci. Remote Sens. 1999, 37, 2249–2258. [Google Scholar]
  13. Lee, J.S.; Grunes, M.R.; Pottier, E.; Ferro-Famil, L. Unsupervised Terrain Classification Preserving Polarimetric Scattering Characteristics. IEEE Trans. Geosci. Remote Sens. 2004, 42, 722–731. [Google Scholar]
  14. Kersten, P.R.; Lee, J.S.; Ainsworth, T.L. Unsupervised Classification of Polarimetric Synthetic Aperture Radar Images using Fuzzy Clustering and EM Clustering. IEEE Trans. Geosci. Remote Sens. 2005, 43, 519–527. [Google Scholar] [CrossRef]
  15. Ersahin, K.; Cumming, I.G.; Ward, R.K. Segmentation and Classification of Polarimetric SAR Data Using Spectral Graph Partitioning. IEEE Trans. Geosci. Remote Sens. 2010, 48, 164–174. [Google Scholar] [CrossRef]
  16. Yu, P.; Qin, A.K.; Clausi, D.A. Unsupervised Polarimetric SAR Image Segmentation and Classification Using Region Growing With Edge Penalty. IEEE Trans. Geosci. Remote Sens. 2012, 50, 1302–1317. [Google Scholar] [CrossRef]
  17. Doulgeris, A.P. An Automatic U-Distribution and Markov Random Field Segmentation Algorithm for PolSAR Images. IEEE Trans. Geosci. Remote Sens. 2015, 53, 1819–1827. [Google Scholar] [CrossRef]
  18. Bi, H.X.; Sun, J.; Xu, Z.B. Unsupervised PolSAR Image Classification Using Discriminative Clustering. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3531–3544. [Google Scholar] [CrossRef]
  19. Song, W.Y.; Li, M.; Zhang, P.; Wu, Y.; Jia, L.; An, L. Unsupervised PolSAR Image Classification and Segmentation Using Dirichlet Process Mixture Model and Markov Random Fields with Similarity Measure. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2017, 10, 3556–3568. [Google Scholar] [CrossRef]
  20. Cao, F.; Hong, W.; Wu, Y.R.; Pottier, E. An Unsupervised Segmentation With an Adaptive Number of Clusters Using the SPAN/H/α/A Space and the Complex Wishart Clustering for Fully Polarimetric SAR Data Analysis. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3454–3467. [Google Scholar] [CrossRef]
  21. Mohammed, D.; Michael, J.C.; Vassilia, K.; Alexander, B. An Unsupervised Classification Approach for Polarimetric SAR Data Based on the Chernoff Distance for Complex Wishart Distribution. IEEE Trans. Geosci. Remote Sens. 2013, 51, 4200–4213. [Google Scholar]
  22. Pottier, E.; Lee, J.S. Application of the «H/A/α» Polarimetric Decomposition Theorem for Unsupervised Classification of Fully Polarimetric SAR Data Based on the Wishart Distribution. In Proceedings of the 2000 SAR Workshop: CEOS Committee on Earth Observation Satellites; Working Group on Calibration and Validation, Toulouse, France, 26–29 October 1999. [Google Scholar]
  23. An, W.T.; Cui, Y.; Yang, J.; Zhang, H.J. Fast Alternatives to H/α for Polarimetric SAR. IEEE Geosci. Remote Sens. Lett. 2010, 7, 343–347. [Google Scholar]
  24. Barbaresco, F. Interactions Between Symmetric Cone and Information Geometries: Bruhat-Tits and Siegel Spaces Models for High Resolution Autoregressive Doppler Imagery. Emerg. Trends Vis. Comput. 2008, 5416, 124–163. [Google Scholar]
  25. Arsigny, V.; Fillard, P.; Pennec, X.; Ayache, N. Log-Euclidean Metrics for Fast and Simple Calculus on Diffusion Tensors. Magn. Reson. Med. 2006, 56, 411–421. [Google Scholar] [CrossRef] [PubMed]
  26. Rodriguez, A.; Laio, A. Clustering by Fast Search and Find of Density Peaks. Science 2014, 344, 1492–1496. [Google Scholar] [CrossRef] [PubMed]
  27. Lee, J.S.; Grunes, M.R.; Grandi, G.D. Polarimetric SAR speckle filtering and its implication for classification. IEEE Trans. Geosci. Remote Sens. 1999, 37, 2363–2373. [Google Scholar]
  28. Otsu, N. A Threshold Selection Method from Gray-Level Histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef]
  29. Xiao, D.L.; Liu, C.; Wang, Q.; Wang, C.; Zhang, X. PolSAR Image Classification Based on Dilated Convolution and Pixel-Refining Parallel Mapping network in the Complex Domain. arXiv 2020, arXiv:1909.10783v2. [Google Scholar]
  30. Nie, X.L.; Ding, S.G.; Huang, X.Y.; Qiao, H.; Zhang, B.; Jiang, Z.P. An Online Multiview Learning Algorithm for PolSAR Data Real-Time Classification. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2019, 12, 302–320. [Google Scholar] [CrossRef]
  31. Guo, Y.W.; Jiao, L.C.; Wang, S.; Wang, S.; Liu, F.; Hua, W.Q. Fuzzy Superpixels for Polarimetric SAR Images Classification. IEEE Trans. Fuzzy Sys. 2018, 26, 2846–2860. [Google Scholar] [CrossRef]
Figure 1. Flow chart of the proposed algorithm.
Figure 2. The scatter plot and classification thresholds of h/q from San Francisco.
Figure 3. The illustration of the pseudo-color rule.
Figure 4. San Francisco data. (a) PauliRGB image. (b) Google Earth image. (c) Ground truth based on [29]. (d) Ground truth based on [30].
Figure 5. Oberpfaffenhofen data. (a) PauliRGB image. (b) Google Earth image. (c) Ground truth based on [30]. (d) Ground truth based on [31].
Figure 6. Tianjin Airport data. (a) PauliRGB image. (b) Google Earth image.
Figure 7. Classification results of San Francisco with different class numbers. (a) Initial results with PCST. (b) Initial results with “label2rgb”. (c) Result after optimization with PCST. (d) Result after optimization with “label2rgb”. (e) Result of 3 classes with PCST. (f) Result of 3 classes with “label2rgb”. (g) Result of 5 classes with PCST. (h) Result of 5 classes with “label2rgb”. (i) Result of 7 classes with PCST. (j) Result of 7 classes with “label2rgb”. (k) Result of 9 classes with PCST. (l) Result of 9 classes with “label2rgb”.
Figure 8. The relative change rate of the whole image and image blocks.
Figure 9. Classification results with different block sizes. (a) Result with block size as 75. (b) Result with block size as 90. (c) Result with block size as 113.
Figure 10. Classification results with different adjacent parameters. (a) Result with adjacent parameter 0.25. (b) Result with adjacent parameter 0.5. (c) Result with adjacent parameter 0.75. (d) Result with adjacent parameter 1.00.
Figure 11. Classification results with different distance thresholds. (a) Result with distance threshold 0.5%. (b) Result with distance threshold 1.0%. (c) Result with distance threshold 1.5%. (d) Result with distance threshold 2.0%.
Figure 12. Classification results of different algorithms and class numbers. (a) Result of H/A/α-Wishart with 16 classes. (b) Result of H/A/α-Wishart with 5 classes. (c) Result of h/q/gray-Wishart with 16 classes. (d) Result of h/q/gray-Wishart with 5 classes. (e) Result of the proposed algorithm with 16 classes. (f) Result of the proposed algorithm with 5 classes.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

