Article

An Unsupervised Feature Extraction Using Endmember Extraction and Clustering Algorithms for Dimension Reduction of Hyperspectral Images

1 Department of Electrical and Computer Engineering, Queen’s University, Kingston, ON K7L 3N9, Canada
2 Department of Geography and Planning, Queen’s University, Kingston, ON K7L 3N6, Canada
3 School of Surveying and Land Information Engineering, Henan Polytechnic University, Jiaozuo 454000, China
4 WSP Environment and Infrastructure Canada Limited, Ottawa, ON K2E 7L5, Canada
5 Shanghai Astronomical Observatory, Chinese Academy of Sciences, Shanghai 200030, China
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(15), 3855; https://doi.org/10.3390/rs15153855
Submission received: 15 June 2023 / Revised: 31 July 2023 / Accepted: 1 August 2023 / Published: 3 August 2023
(This article belongs to the Special Issue Explainable Artificial Intelligence (XAI) in Remote Sensing Big Data)

Abstract

Hyperspectral images (HSIs) provide rich spectral information, facilitating many applications, including landcover classification. However, due to the high dimensionality of HSIs, landcover mapping applications usually suffer from the curse of dimensionality, which degrades the efficiency of supervised classifiers due to insufficient training samples. Feature extraction (FE) is a popular dimension reduction strategy for this issue. This paper proposes an unsupervised FE algorithm that involves extracting endmembers and clustering spectral bands. The proposed method first extracts existing endmembers from the HSI data via a vertex component analysis method. Using these endmembers, it subsequently constructs a prototype space (PS) in which each spectral band is represented by a point. Similar/correlated bands in the PS remain near one another, forming several clusters. Therefore, our method, in the next step, clusters spectral bands into multiple clusters via K-means and fuzzy C-means algorithms. Finally, it combines all the spectral bands in the same cluster using a weighted average operator to decrease the high dimensionality. The extracted features were evaluated by applying an SVM classifier. The experimental results confirmed the superior performance of the proposed method compared with five state-of-the-art dimension reduction algorithms. It outperformed these algorithms in terms of classification accuracy on three widely used hyperspectral images (Indian Pines, KSC, and Pavia Centre). The suggested technique also showed comparable or even stronger performance (up to 9% improvement) compared with its supervised competitor. Notably, the proposed method exhibited higher accuracy even when only a limited number of training samples were available for supervised classification. Using only five training samples per class for the KSC and Pavia Centre datasets, our method’s classification accuracy was higher than that of its best-performing unsupervised competitors by about 7% and 1%, respectively, in our experiments.

1. Introduction

Hyperspectral sensors, which capture images using hundreds of spectral bands, provide a rich source of spectral information and pave the way for a wide range of remote sensing applications, particularly precise landcover classification [1,2]. However, the inclusion of many spectral bands poses challenges in the transmission, storage, and processing of hyperspectral images (HSIs) [3]. Because supplying adequate training samples could be difficult in real-world applications, HSI classification may suffer from the curse of dimensionality, which has a detrimental influence on final results by increasing the likelihood of overfitting [4]. More significantly, the high dimensionality increases computational time and complexity. As a solution, dimension reduction (DR) approaches are used as a critical preprocessing step, particularly for supervised landcover classification, to decrease the high dimensionality of HSIs while preserving discriminative information [5,6].
DR approaches are broadly divided into feature extraction (FE) and feature selection (FS). FS approaches select a desired band subset from the original HSI while removing redundant bands. The sequential information of the original data is kept in FS models, which is critical for physical interpretation. These FS approaches are further classified as supervised [7,8] and unsupervised [9,10].
The FE approaches combine spectral bands using mathematical processes, such as principal component analysis (PCA), and map the original HSI into a new feature space with a lower dimension [11]. FE methods have some advantages over FS ones, which contribute to their stronger performance [12]. FE techniques can extract the most information from the original high-dimensional spectral data by generating new features that indicate the most significant variations. Furthermore, FE approaches can generate new features that are not clearly present in the original spectral bands. These newly generated features are capable of capturing complex relationships and latent patterns in the data. As a result, they could be able to increase the representation of spectral characteristics, resulting in improved discrimination and classification accuracy [13,14]. Similar to FS approaches, FE is categorized into supervised and unsupervised models. Unsupervised algorithms do not require training samples and are thus preferred, because collecting and preparing ground truth training samples is time-consuming, expensive, and sometimes impossible [15].
According to [14], there are five types of FE methods: knowledge-based, statistical, wavelet-based, clustering, and deep learning. Knowledge-based strategies improve certain spectral band characteristics by performing basic mathematical operations on the original bands to differentiate the objects of interest. This category includes spectral indices, such as the normalized difference vegetation index (NDVI). Using hyperspectral data, researchers in [16] examined different vegetation indices, such as NDVI and the soil-adjusted vegetation index (SAVI), and then modified these indices to forecast the green leaf area index. Although knowledge-based features have a direct relationship with physical characteristics, the goal of this type of FE is not to reduce dimensionality; instead, these approaches are utilized to gather extra information to facilitate the differentiation of land cover classes.
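For reference, spectral indices of this kind are computed band-wise from reflectance values; NDVI, for instance, is defined as

$$\mathrm{NDVI} = \frac{\rho_{\mathrm{NIR}} - \rho_{\mathrm{Red}}}{\rho_{\mathrm{NIR}} + \rho_{\mathrm{Red}}},$$

where $\rho_{\mathrm{NIR}}$ and $\rho_{\mathrm{Red}}$ denote the reflectance in a near-infrared and a red band, respectively.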
In statistical FE, the original HSI is transformed into a lower-dimensional feature space without considerable loss of information, in which the classes of interest are more separable [17]. Nonparametric weighted feature extraction (NWFE) is an example of statistical FE [18]. Another statistical FE approach is discriminant analysis feature extraction (DAFE) [19]. Although this supervised technique is frequently used for dimension reduction, it can only extract a limited number of features (i.e., $M-1$, where $M$ is the total number of classes), which may not be enough for an accurate image classification. Another disadvantage of DAFE is that it suffers greatly from the ill-conditioning problem when just a small number of training samples are available.
The wavelet-based FE methods are based on the fact that wavelet transform decomposes a signal into constituent wavelets of varying sizes and positions. The majority of methods in this category employ the discrete wavelet transform.
Deep learning FE performs admirably when dealing with nonlinear input data. Deep learning approaches, which are well reviewed in [13], use a hierarchical learning framework to extract high-level features [14]. The use of autoencoders is a basic method for extracting deep features in a hierarchical manner. Deep features were extracted from hyperspectral images in a work by Chen et al. (2014) using a stacked autoencoder with five layers [20]. The images were then classified using logistic regression.
Minibatch graph convolutional networks (miniGCN), a novel method for training large-scale graph convolutional networks in a minibatch fashion, was introduced by researchers in [21]. The feature extraction and classification of hyperspectral images were the primary purposes for the creation of this technique. An advantageous feature of miniGCN is its ability to handle out-of-sample data inference without the need for retraining the networks, leading to improved classification performance. In addition, SpectralFormer, a novel backbone network, was suggested in [22] for the deep feature extraction and classification of HSIs. Leveraging image transformer models, SpectralFormer is adept at learning spectrally local sequence information from adjacent bands within HSI images, thus generating groupwise spectral embeddings. Additionally, researchers in [23] addressed the need for finely classifying features in complex scenes by introducing a comprehensive multimodal deep learning framework, which serves as a baseline solution.
Deep learning algorithms for HSI dimension reduction have the benefit of learning hierarchical representations and capturing complex spectral-spatial features automatically. However, for hyperspectral datasets, supervised deep learning models require a substantial quantity of labeled training data, which might be difficult to obtain. When the training dataset is small, they are prone to overfitting, which can lead to poor generalization performance. Furthermore, developing deep learning models for dimension reduction can be computationally challenging, necessitating significant computer resources and longer training durations.
Clustering-based algorithms are the final type of FE that has shown promising results [24]. In this category, related spectral bands are first grouped into several clusters, and then the grouped bands are merged to extract a single feature from each cluster. This category includes unsupervised band correlation clustering (BCC) [25]. The BCC method involves determining correlation coefficients between spectral bands, clustering the bands using a k-means algorithm based on the correlation coefficient matrix, and extracting new features by computing the mean value of the bands in each cluster. The results of BCC experiments demonstrated that BCC outperformed well-known FE methods including PCA, independent component analysis, and minimum noise fraction. While band correlation is useful for understanding spectral band relationships, relying entirely on band correlation in BCC for dimension reduction may result in limited information capture and poor feature discrimination. By overlooking specific spectral characteristics and variations within classes, this method may result in less discriminative and less accurate feature representations, especially in complex hyperspectral datasets.
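As a rough illustration of this clustering-then-averaging pipeline, the Python sketch below clusters the rows of the band correlation matrix with k-means and averages the bands of each cluster; the function name and the clustering settings are ours, and the exact preprocessing of the original BCC [25] is not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans

def bcc_like_features(hsi, n_features, random_state=0):
    """BCC-style feature extraction: cluster bands by their correlation
    profiles, then average the bands of each cluster into one feature.

    hsi: array of shape (n_pixels, n_bands).
    """
    corr = np.corrcoef(hsi.T)                      # (n_bands, n_bands) correlations
    labels = KMeans(n_clusters=n_features, n_init=10,
                    random_state=random_state).fit_predict(corr)
    # One new feature per cluster: the mean of its member bands.
    return np.stack([hsi[:, labels == c].mean(axis=1)
                     for c in range(n_features)], axis=1)
```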
In [26], an iterative technique for clustering and merging neighboring bands based on the Pearson correlation coefficient was developed. Although the suggested technique is entirely unsupervised and has consistent behavior for different data sets, it requires time to fine-tune its parameters. Using an expectation-maximization clustering methodology and weighted average fusion, researchers in [27] developed a feature extraction method for hyperspectral image analysis. One of the most significant advantages of this technique is that it automatically identifies the optimal number of clusters based on the Bhattacharyya distance. The extracted features are also highly discriminative, which resulted in superior classification results compared with its competitors.
Lu et al. (2014) described a feature extraction approach for spectral–spatial classification of hyperspectral data that is based on clustering [28]. The suggested method grouped the high-dimensional data via clustering, efficiently extracting significant information while decreasing redundancy. The technique is able to improve the extraction of hidden relationships between neighboring pixels that are not easily visible in the raw data. In [3], adjacent bands were optimally clustered, using a particle swarm optimization (PSO) algorithm, and merged using a mean-weighted operator. Although the method is capable of identifying optimal clusters, it is developed based on a metaheuristic optimization algorithm that is computationally expensive and time-consuming.
Another FE method employing a clustering-based algorithm is clustering-based feature extraction (CBFE) [29]. CBFE constructs a prototype space (PS) in a supervised manner where each band is represented as a point/vector. CBFE searches for similar/correlated bands in the PS using a clustering algorithm and combines them to minimize the dimensionality of the HSIs. The fact that CBFE only uses first-order statistics from training samples (i.e., mean values) is one of its advantages. As a result, it could perform better than other methods which employ second-order statistics in ill-posed situations, especially when there are few training samples available for training. CBFE outperformed various conventional methods, including PCA, NWFE, and linear discriminant analysis, according to the experimental results [29].
CBFE can show desirable results, but it is a supervised method and requires training samples. However, as mentioned before, acquiring accurate and sufficient training samples is expensive, time-consuming, difficult, and sometimes impossible. To address this issue, in this paper, we provide an unsupervised version of CBFE by introducing the concept of endmember extraction into the dimension reduction discipline. In the first step of our method, pure pixels, called endmembers, of the input HSI are extracted. Using these endmembers, we construct a PS in an unsupervised manner in which each axis represents an endmember. Second, the proposed method groups similar and correlated spectral bands in the PS. Finally, all bands in each cluster are combined using a weighted average operator to decrease the HSI’s high dimensionality. More details about the proposed method are provided in the following section. The primary contribution of our research can be summarized as follows:
  • An unsupervised extension of CBFE: by removing the requirement for training data, our method turns CBFE into a genuinely unsupervised approach.
  • Endmember extraction meets dimension reduction: by combining endmember extraction with dimension reduction, we developed a method that uses pure pixels to build a new PS. Within this PS, similar/correlated spectral bands remain near one another and form several clusters, which lets us identify and reduce redundant spectral bands.
  • Automatic band grouping: we apply unsupervised clustering to group and combine similar and correlated spectral bands, enabling fast and accurate dimension reduction.

2. Methodology

The proposed method for dimension reduction in hyperspectral imagery has five steps. The flowchart of the proposed method is illustrated in Figure 1, and details of each step are explained below.
Step 1: Estimate the input HSI’s virtual dimensionality (VD) using HySime, a well-known VD estimation technique [30]. This step detects the number of endmembers present in the input image, denoted by $n$. Further details about HySime can be found in [31].
Step 2: Extract the existing endmembers ($E$) from the image using the state-of-the-art vertex component analysis (VCA) algorithm [32]. We utilized the open-source MATLAB Hyperspectral Toolbox [33] for the practical application of VCA and HySime. The VCA algorithm can be outlined as follows:
Step 2.1. Consider the desired number of endmembers to be generated as $n$ and the HSI data to be $H_{p \times d}$, where $p$ is the total number of pixels and $d$ is the total number of spectral bands.
Step 2.2. Set the initial vector $e_1 = [0, 0, \ldots, 1]$, and consider $E^{(1)} = [e_1^T, 0, \ldots, 0]$.
Step 2.3. Produce a Gaussian random vector, $w^{(k+1)}$, which is utilized to generate $f^{(k+1)}$:

$$f^{(k+1)} = \frac{\left(I - E^{(k)} E^{(k)\#}\right) w^{(k+1)}}{\left\| \left(I - E^{(k)} E^{(k)\#}\right) w^{(k+1)} \right\|} \quad (1)$$

where $E^{(k)\#}$ is the pseudoinverse of $E^{(k)}$, and $f^{(k+1)}$ is a unit vector orthogonal to the subspace spanned by $E^{(k)}$.
Step 2.4. Determine $e_{k+1}$, which is the pixel (i.e., row) in $H$ having the maximum length after projection onto $f^{(k+1)}$. In other words, $e_{k+1}$ is the pixel in $H$ that maximizes the following expression:

$$e_{k+1} = \underset{i}{\operatorname{argmax}} \left\| f^{(k+1)T} [H]_{i,:} \right\| \quad (2)$$

where $[H]_{i,:}$ denotes the $i$-th row of matrix $H$, $(\cdot)^T$ denotes the transpose operator, and $\|\cdot\|$ is the norm operator; $\operatorname{argmax}$ returns the argument that maximizes the value of a given function.
Step 2.5. Replace the $(k+1)$-th column of $E^{(k+1)}$ with $e_{k+1}^T$. Repeat the last three steps to extract $n$ endmembers in $E = [e_1^T, e_2^T, \ldots, e_n^T]$. These endmembers are pure pixels of the image and are considered representatives of the existing phenomena.
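To make Steps 2.1–2.5 concrete, the Python sketch below reimplements the core VCA loop. It is an illustrative approximation under stated assumptions (notably the simple SVD projection used in place of the full algorithm’s SNR-dependent preprocessing [32]); it is not the MATLAB toolbox code employed in this study.

```python
import numpy as np

def vca(H, n, rng=None):
    """Sketch of vertex component analysis (VCA) following Steps 2.1-2.5.

    H: (p, d) matrix of p pixels with d spectral bands.
    n: number of endmembers to extract.
    Returns E: (n, d) matrix whose rows are the extracted endmember spectra.
    """
    rng = np.random.default_rng(rng)
    # Project pixels to an n-dimensional working space (an SVD stand-in
    # for the full algorithm's preprocessing; an assumption).
    V = np.linalg.svd(H - H.mean(axis=0), full_matrices=False)[2][:n]
    Y = H @ V.T                               # (p, n) projected pixels
    E = np.zeros((n, n))
    E[-1, 0] = 1.0                            # Step 2.2: e1 = [0, ..., 0, 1]
    idx = np.zeros(n, dtype=int)
    for k in range(n):
        w = rng.standard_normal(n)            # Step 2.3: Gaussian random vector
        # f is orthogonal to the subspace spanned by the columns of E.
        f = (np.eye(n) - E @ np.linalg.pinv(E)) @ w
        f /= np.linalg.norm(f)
        # Step 2.4: pixel with the largest |projection| onto f.
        idx[k] = np.argmax(np.abs(Y @ f))
        E[:, k] = Y[idx[k]]                   # Step 2.5: replace the k-th column
    return H[idx]                             # endmember spectra (original space)
```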
Step 3: Generate a prototype space (PS) using the spectra of the endmembers extracted in the second step (i.e., $E$). In this space, each axis is associated with an endmember, so the dimension of the PS equals the total number of endmembers (i.e., the VD). Matrix $E$ generates the PS in our method: each row of $E$ is a point in the PS and represents one spectral band; for example, the $j$-th row of $E$ represents the $j$-th spectral band. As a result, similar bands remain near one another in the PS, forming several clusters.
Step 4: Group the spectral bands’ representatives in the PS into $k$ clusters by applying a clustering algorithm. Each of these clusters contains similar and correlated spectral bands. In our method, $k$ equals the number of extracted features (i.e., the dimension of the reduced HSI).
Step 5: Extract a new feature from each cluster to decrease the dimensionality of the input HSI. To this end, we present two FE versions in this study. The first, called weighted feature extraction (WFE), uses the k-means clustering algorithm [34] in the fourth step. The spectral bands whose representatives fall in the same cluster are then combined by computing their weighted average, as described below (Equation (3)):
$$B_l = \frac{\sum_{j=1}^{h} w_j B_j}{\sum_{j=1}^{h} w_j} \quad (3)$$
where $B_l$ is the extracted feature for the $l$-th cluster; $B_j$ is the $j$-th spectral band, whose representative exists in cluster $l$, and $w_j$ is its corresponding weight, which is the inverse of its distance to the $l$-th cluster’s centroid; and $h$ denotes the total number of spectral band representatives in the $l$-th cluster.
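Putting Steps 3–5 together for the WFE variant, a minimal Python sketch might look as follows; it assumes `E` holds the VCA endmember spectra as rows (so that its transpose gives the band representatives in the PS), and the k-means settings and the small constant guarding against zero distances are also our assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def wfe(H, E, k, random_state=0):
    """Sketch of weighted feature extraction (WFE).

    H: (p, d) hyperspectral image; E: (n, d) endmember spectra.
    Returns R: (p, k) reduced image with one feature per cluster.
    """
    points = E.T                                   # (d, n): band j -> a point in the PS
    km = KMeans(n_clusters=k, n_init=10, random_state=random_state).fit(points)
    R = np.zeros((H.shape[0], k))
    for l in range(k):
        members = np.where(km.labels_ == l)[0]     # bands grouped in cluster l
        dist = np.linalg.norm(points[members] - km.cluster_centers_[l], axis=1)
        w = 1.0 / (dist + 1e-12)                   # inverse-distance weights
        R[:, l] = H[:, members] @ w / w.sum()      # Equation (3)
    return R
```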
In the second method, called fuzzy feature extraction (FFE), we first apply the renowned fuzzy C-means clustering algorithm [35] in the fourth step and then use Equation (4) to reduce the high dimensionality of the input HSI:
$$R_{p \times k} = H_{p \times d} \times U^T_{d \times k} \quad (4)$$
in which $H$ is the input HSI matrix with a dimension of $p \times d$, where $p$ is the total number of pixels and $d$ is the total number of spectral bands; $R$ is the dimensionality-reduced HSI; $U = [u_{i,j}]$ is the fuzzy partition matrix, with a dimension of $k \times d$, where $k$ is the total number of clusters; and $u_{i,j}$ denotes the degree of membership of the $j$-th spectral band’s representative in the $i$-th cluster. It should be mentioned that each column of $U^T$ is normalized by dividing it by its sum.
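A corresponding sketch of the FFE variant is given below; the plain fuzzy C-means implementation, the fuzzifier m = 2, and the iteration budget are our assumptions.

```python
import numpy as np

def fuzzy_cmeans(X, k, m=2.0, iters=300, seed=0):
    """Plain fuzzy C-means [35]; X holds one sample per row.
    Returns U of shape (k, n_samples) with memberships summing to 1."""
    rng = np.random.default_rng(seed)
    U = rng.random((k, len(X)))
    U /= U.sum(axis=0)                                 # normalize per sample
    for _ in range(iters):
        Um = U ** m
        C = (Um @ X) / Um.sum(axis=1, keepdims=True)   # cluster centers
        D = np.linalg.norm(X[None] - C[:, None], axis=2) + 1e-12
        A = D ** (2.0 / (m - 1.0))
        U = (1.0 / A) / (1.0 / A).sum(axis=0)          # standard membership update
    return U

def ffe(H, E, k):
    """Sketch of fuzzy feature extraction (FFE) per Equation (4).

    H: (p, d) hyperspectral image; E: (n, d) endmember spectra.
    """
    U = fuzzy_cmeans(E.T, k)                      # memberships of the d bands
    Ut = U.T / U.T.sum(axis=0, keepdims=True)     # normalize columns of U^T
    return H @ Ut                                 # Equation (4): (p, k) result
```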

3. Experimental Results and Discussions

3.1. Datasets

Three hyperspectral datasets were used in this study to evaluate the effectiveness of the proposed method as an unsupervised FE. The datasets are explained in detail in the following subsections.

3.1.1. Pavia Centre

The first dataset is a hyperspectral image of Pavia Centre, captured by the ROSIS sensor (Figure 2a). Originally, this high-resolution image (1.3 m per pixel) had 115 spectral bands that covered a spectral range of 0.43 to 0.86 μm. After removing all noise-affected bands, the remaining 102 bands were used in the experiments. According to the existing ground truth data, the nine classes of interest are water, tree, meadow, brick, soil, asphalt, bitumen, tile, and shadow (Figure 2b and Table 1).

3.1.2. Indian Pines

The second dataset was acquired by the AVIRIS sensor over a vegetated area in Indiana, USA. The AVIRIS images contain 220 spectral bands (0.4–2.45 μm). Due to the 20 noisy water absorption bands (i.e., 104–108, 150–163, and 220) in the original image, the experiments were conducted using 200 spectral bands. This dataset has 16 classes, according to the published ground truth map. In this study, four classes were ignored due to their small sample sizes. Table 2 provides details about the remaining 12 classes. Figure 3 also depicts a false-color composite representation of the second dataset, along with its ground truth map.

3.1.3. Kennedy Space Center (KSC)

The KSC dataset was collected from the Kennedy Space Center, Florida, by NASA’s AVIRIS sensor. It contains 512 by 614 pixels, a spectral range of 0.4 to 2.5 μm, and a spatial resolution of 18 m. This HSI contains 13 classes and 224 spectral bands. However, only 176 bands were utilized for classification, with the remaining bands being discarded due to noise. Table 3 provides the number of training and test samples, and Figure 4 illustrates a false-color composite image and ground truth map.

3.2. Experiment Design

To test and compare the quality of the features extracted using FFE, WFE, and the competing methods, we employed support vector machines (SVMs) [36] with an RBF kernel. The SVM was implemented with the LibSVM package [37]. We also employed a grid search method and fivefold cross-validation to fine-tune the SVM’s hyperparameters (i.e., C and γ, which are the “Cost” and RBF kernel parameters, respectively). For the KSC and Pavia Centre datasets, we randomly selected 5 and 20 training samples per class from the available ground truth data, and 50 and 100 samples per class for the Indian Pines dataset. It should be noted that testing was conducted using the remaining ground truth samples.
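For reference, a minimal sketch of this classifier setup is given below; the experiments used the LibSVM package, whereas this sketch relies on scikit-learn (whose SVC wraps LibSVM), and the specific parameter grid is an assumption.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def tune_svm(X_train, y_train):
    """Tune an RBF-kernel SVM by grid search with fivefold
    cross-validation, mirroring the evaluation setup above."""
    grid = {"C": 10.0 ** np.arange(-2, 5),        # "Cost" parameter
            "gamma": 10.0 ** np.arange(-4, 2)}    # RBF kernel parameter
    search = GridSearchCV(SVC(kernel="rbf"), grid, cv=5)
    search.fit(X_train, y_train)
    return search.best_estimator_
```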
To gain deeper insight into how separable the extracted or selected features are, we employed the t-distributed stochastic neighbor embedding (T-SNE) [38] algorithm to reduce the features to just two dimensions. T-SNE is a nonlinear dimensionality reduction method tailor-made for visualizing high-dimensional data. Its primary strength lies in preserving local structure: points that are close to one another in the original high-dimensional space remain close after projection into the lower-dimensional space. In this experiment, we utilized 18 features obtained from each method as input for T-SNE.
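A minimal sketch of this visualization step is shown below; the perplexity and plotting choices are our assumptions, not necessarily the exact settings behind Figures 7, 10, and 13.

```python
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def tsne_plot(features, labels):
    """Project the 18 extracted features to 2-D with T-SNE and color
    the embedded points by their class label."""
    emb = TSNE(n_components=2, perplexity=30, init="pca",
               random_state=0).fit_transform(features)
    plt.scatter(emb[:, 0], emb[:, 1], c=labels, s=2, cmap="tab20")
    plt.axis("off")
    plt.show()
```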

3.3. Model Comparison

Our proposed method was compared to five competing methods: hyperbolic clustering-based band hierarchy (HCBH) [24], clustering-based feature extraction (CBFE) [29], band correlation clustering (BCC) [25], prototype space-based feature selection (PFS) [39], and maximum tangent discrimination (MTD) [39]. The reasoning behind selecting these methods for comparison is that they are all based on clustering algorithms for reducing the dimensionality of HSIs.
A dimension reduction technique based on deep learning models was also added. In this regard, an autoencoder (AE) [13] was trained using four encoder layers (with 128, 64, 32, and X neurons) and three decoder layers (with 32, 64, and 128 neurons) with leaky ReLU activation functions, where X represents the desired number of features for dimension reduction using the AE. The AE was trained to reconstruct the input data by compressing it into a lower-dimensional representation and then expanding it back to its original dimension: the encoder layers gradually reduce the dimensionality of the input data, while the decoder layers aim to reconstruct the original input from the compressed representation. All the competing methods are unsupervised except for CBFE, which uses training samples for dimension reduction. BCC and CBFE are FE algorithms, similar to our proposed methodology, whereas the remaining methods are FS algorithms.
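For reference, a PyTorch sketch of this autoencoder is given below; beyond the layer sizes stated above, the final projection back to the input dimension (implied by the reconstruction objective), the loss, and the training details are our assumptions.

```python
import torch
from torch import nn

class AE(nn.Module):
    """Comparison autoencoder: encoder layers with 128, 64, 32, and X
    neurons and decoder layers with 32, 64, and 128 neurons, all with
    leaky ReLU activations."""
    def __init__(self, d_bands: int, x_features: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(d_bands, 128), nn.LeakyReLU(),
            nn.Linear(128, 64), nn.LeakyReLU(),
            nn.Linear(64, 32), nn.LeakyReLU(),
            nn.Linear(32, x_features))          # X-dimensional code
        self.decoder = nn.Sequential(
            nn.Linear(x_features, 32), nn.LeakyReLU(),
            nn.Linear(32, 64), nn.LeakyReLU(),
            nn.Linear(64, 128), nn.LeakyReLU(),
            nn.Linear(128, d_bands))            # back to the original dimension

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Training minimizes the pixel-wise reconstruction error; afterwards,
# model.encoder(pixels) yields the X-dimensional reduced features, e.g.:
# model = AE(d_bands=200, x_features=10)
# loss = nn.MSELoss()(model(batch), batch)
```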
Although HCBH is an FS method, we compared our method with it because, similar to our method, it was developed based on clustering algorithms. HCBH begins by calculating the hyperbolic distance between spectral bands in a hyperbolic space. Then, bottom-up hierarchical clustering is performed to cluster comparable bands. Finally, HCBH chooses a representative band from each cluster using a cluster-center ranking algorithm.
PFS and MTD were chosen because they both employ a PS in their techniques. However, they generate the PS differently than we do. PFS and MTD first employ the k-means algorithm to cluster the input HSI, which imposes a computational burden. The mean value of each cluster is then determined to create the PS. Thus, each axis in their PS represents a cluster, in contrast to our method, where each axis represents an endmember. Finally, in their PS, PFS uses the orthogonal distance from the PS’s diagonal to select informative bands, while MTD applies the tangent of the angles between pairs of bands to choose the bands with the lowest correlations.
In this study, the quality of the extracted features from the different methods was evaluated using the kappa coefficient ($\kappa$), overall accuracy (OA), and average accuracy (AA), all of which measure classification accuracy from different viewpoints. It should be mentioned that each experiment was repeated ten times, and the average values of $\kappa$, OA, and AA were used for validation of the models in the following tables and figures.
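For clarity, these three measures can be computed from a classification confusion matrix as in the short helper below (a standard formulation, included here for illustration).

```python
import numpy as np

def kappa_oa_aa(conf):
    """Compute kappa, OA, and AA from a confusion matrix
    (rows = reference classes, columns = predicted classes)."""
    n = conf.sum()
    oa = np.trace(conf) / n                                   # overall accuracy
    aa = np.mean(np.diag(conf) / conf.sum(axis=1))            # mean per-class accuracy
    pe = (conf.sum(axis=0) * conf.sum(axis=1)).sum() / n**2   # chance agreement
    kappa = (oa - pe) / (1 - pe)                              # kappa coefficient
    return kappa, oa, aa
```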

3.4. Results and Discussion

3.4.1. Indian Pines

Table 4 shows the obtained results for the Indian Pines dataset using 50 training samples per class. The best result in each row of this table is bolded. According to Table 4, our proposed methods (i.e., WFE and FFE) outperformed the other models in all cases (i.e., with different numbers of features). This demonstrates the high quality of the features extracted using our method.
In another experiment, the number of training samples was increased to 100 per class for the classification task. The acquired findings are shown in Table 5. Based on these results, increasing the number of training samples did not change the relative ranking of the methods compared with the first experiment (i.e., Table 4). This is because most of the approaches in our trials are unsupervised and do not require training data for dimension reduction. In fact, the proposed WFE and FFE models outperformed the existing approaches in terms of accuracy, demonstrating the high quality of their derived features.
To compare the overall performance of our proposed method with the others, all the obtained kappa coefficients, OA, and AA values were averaged, and the results are reported in Table 4 and Table 5, as well as in Figure 5. As shown in Figure 5, the proposed FFE and WFE models performed better than the other models. In the experiments with 50 and 100 training samples per class, FFE produced the highest accuracy, with kappa coefficient/OA/AA values of 70.26/73.79/79.31 and 74.40/77.55/83.82, respectively, which were higher than the values for WFE (68.41/72.16/77.89 and 72.59/75.96/82.43). In this dataset, the best competing approach was HCBH, with kappa coefficient/OA/AA values of 68.04/71.85/77.95 and 72.26/75.67/82.30 in the 50 and 100 training sample experiments, respectively. Comparing the kappa coefficients, FFE’s results were 3.26% and 2.96% more accurate than those of HCBH, on average. More importantly, FFE’s results were, on average, about 9% more accurate than those of its supervised counterpart (i.e., the CBFE method), according to Figure 5.
Based on the mean STD values in the last rows of Table 4 and Table 5, HCBH had the highest stability, with the lowest mean STD values. For example, according to Table 5, which shows the classification accuracy using 100 training samples per class, HCBH’s STD values are 0.34, 0.69, and 0.58 for $\kappa$, OA, and AA, respectively. This suggests that HCBH is a reliable and stable algorithm for classification purposes. WFE, on the other hand, exhibited the least stability, with STD values of 2.32, 2.06, and 1.72 (Table 5). The other methods have close STD values for this dataset, indicating that they are nearly equivalent from a stability point of view. In terms of accuracy, however, these methods show significant variations, so similar stability does not imply equally precise results. Both stability and accuracy should therefore be considered when choosing a method for a given analysis.
Overall, there are three reasons for our method’s superiority: (1) Despite our technique being unsupervised and having no knowledge of the existing classes in the image, we were able to discover pure pixels of the image that reflect existing phenomena in the image using an endmember algorithm. These pure pixels provided significant information, equivalent to training samples for building a PS. (2) The prototype space helped identify clusters of correlated and similar spectral bands, which were then combined using a weighted-mean operator to decrease the high dimension of the HSIs without sacrificing much spectral information. (3) The extracted features have a greater signal-to-noise ratio since they are the weighted average of several correlated bands in the original HSI, which should reduce random noise. Figure 6 shows the final results of each method, using 100 training samples per class and 10 features.
The outcomes of the T-SNE experiment are visualized in Figure 7, where distinct colors represent the 12 classes of the Indian Pines dataset. As depicted in this figure, FFE and HCBH exhibit superior performance compared with the other methods: points belonging to the same class cluster together, forming distinct groupings. The superior performance of these two methods was further validated through image classification, as indicated by the highest classification accuracies for the 18 features presented in Table 4 and Table 5.

3.4.2. Pavia Centre

The results for the Pavia Centre dataset with five training samples per class are provided in Table 6. The best outcome in each row of this table is bolded. The classification accuracies of all approaches were mostly similar, although the proposed WFE model achieved the highest $\kappa$ value in eight cases, more than any other method. This demonstrates the superiority of WFE’s extracted features.
We also increased the number of training samples for the classification task to 20 per class, and the results are provided in Table 7. Similar to Table 6, all the methods had high accuracies. However, the proposed WFE and FFE models outperformed the others. WFE and FFE had the greatest kappa coefficient five and seven times, respectively, according to Table 7. To evaluate the overall performance of the models, the kappa coefficients, OA and AA values obtained from each method were averaged (see Figure 8).
Figure 8a shows that WFE outperformed the other methods, with an average kappa coefficient/OA/AA of 88.18/93.09/86.74. FFE, HCBH, and CBFE produced similar results. However, it should be noted that CBFE is a supervised method, in contrast to the other two models. According to Figure 8a, when the number of training samples is limited for a supervised classifier, the feature quality of WFE outperforms both CBFE and HCBH, which are the two competing methods with the best performance in this experiment, by 0.94% and 0.98%, respectively.
FFE, WFE, and CBFE all achieved about the same classification accuracy for the case of 20 training samples per class (Figure 8b). However, FFE and WFE outperformed HCBH, yielding kappa coefficients of 93.01 and 92.96, respectively. Figure 8a further demonstrates that our suggested methodology worked more effectively for HSI dimension reduction prior to a classification task in the case of small training samples.
Given the mean STD values in the last rows of Table 6 and Table 7, we can conclude that, while HCBH and WFE have the highest and lowest stability in the Indian Pines dataset, respectively, all the methods tested with the Pavia Centre dataset have nearly the same stability because their STD values are close to one another. The classified image of each method with 20 training samples per class and 10 features is illustrated in Figure 9.
In a manner similar to the Indian Pines dataset, T-SNE was employed to visually evaluate the quality of the resultant features (Figure 10). As depicted in Figure 10, there is no noticeable superiority in the T-SNE results across the methods, because the points were visually well clustered for all of them. These well-clustered points in Figure 10 justify the high accuracy values of the methods for this dataset. Although certain points may overlap, it is evident that points belonging to the same class maintain proximity, forming distinctive clusters in all methods. Consequently, T-SNE did not offer an effective visual basis for comparing the methods. It is also important to note that T-SNE is only a visual evaluation tool and does not provide quantitative measures of the quality of the resultant features; quantitative measures, such as classification accuracy, therefore remain necessary for a well-informed comparison of the methods.

3.4.3. Kennedy Space Center (KSC)

Table 8 and Table 9 provide the results of the experiments for the KSC dataset using 5 and 20 training samples per class, respectively. The best outcome is marked in bold in each table.
According to Table 8, our proposed methods (i.e., WFE and FFE) had the highest kappa coefficient values in eight cases, outperforming the other methods. However, when employing 20 training samples per class, CBFE surpassed our WFE in terms of the number of maximum kappa coefficient values (Table 9): CBFE achieved the highest kappa coefficient values in nine cases, compared with five cases for WFE. Considering Figure 11, which provides the mean kappa coefficient/OA/AA values of all cases, it is clear that although CBFE and WFE competed closely, CBFE outperformed the other approaches.
CBFE, WFE, and FFE show comparable classification accuracy, according to Figure 11a, which demonstrates the overall performance of the extracted/selected features in the case of five training samples; CBFE, however, had slightly higher accuracy. In the case of 20 training samples, CBFE (with a $\kappa$/OA/AA of 80.36/82.37/78.29) outperformed WFE (79.01/81.13/76.11) and FFE (78.40/80.59/75.41), as shown in Figure 11b. This superiority can be explained by the fact that CBFE is a supervised approach that employs training samples to reduce dimensionality, whereas the proposed WFE and FFE models are unsupervised approaches, which is preferable since providing training samples is expensive and time-consuming. As a result, setting aside the supervised CBFE, our proposed models considerably outperformed the other unsupervised methods. For example, in the case of limited training samples (i.e., five samples per class), WFE and FFE had kappa coefficient/OA/AA values of 71.17/74.08/66.07 and 70.90/73.85/65.82, respectively, which were greater than those of HCBH (65.74/69.22/62.11), PFS (65.74/69.14/62.24), MTD (61.32/65.18/56.22), and BCC (66.50/69.88/60.89). In other words, considering the kappa coefficient, WFE’s performance was 7.02% higher than that of the best unsupervised competitor (i.e., BCC) in this experiment. This demonstrates that adopting our dimension reduction method is a good choice when few training samples are available for classifying HSIs.
The mean STD values in the last rows of Table 8 and Table 9 reveal that, while the other methods have nearly identical stability, HCBH and CBFE exhibit higher stability with lower mean STD values. For example, according to Table 9, which shows the classification accuracy using 20 training samples per class, HCBH’s STD values are 0.80, 0.58, and 0.56 for $\kappa$, OA, and AA, respectively. The classified images for each method with 20 training samples per class and 10 features are depicted in Figure 12.
Furthermore, T-SNE served as a tool to visually assess the quality of the extracted or selected features (Figure 13). As illustrated in Figure 13, and similar to the Pavia Centre dataset, it proves rather challenging to visually compare the T-SNE results due to the subtle differences. One noteworthy observation, however, is that the sample points of the classes in CBFE, FFE, and WFE exhibit greater compactness than those in the other methods. This compactness helps the classifier achieve higher accuracy during image classification (refer to Table 8 and Table 9).

4. Conclusions

An HSI has numerous spectral bands, many of which are highly correlated and contain high redundancy. As a result, a deterioration phenomenon known as the curse of dimensionality impacts the accuracy of supervised classifiers. In this study, we showed that FE approaches can effectively be employed to mitigate this issue. We proposed a simple but effective FE method to decrease the high dimensionality of HSIs. Our unsupervised technique first estimates the number of endmembers existing in the input HSI and then extracts them using VCA. Using the extracted endmembers, the proposed method constructs a PS and clusters the representatives of comparable and correlated spectral bands in this space. Finally, it computes a weighted mean of the bands in each cluster to extract a new feature. Our experiments demonstrate that the proposed technique outperforms five similar band clustering approaches for dimension reduction. In particular, it outperforms the others when only a limited number of training samples are available to train the HSI classifier.
The proposed method only considers spectral information to reduce the high dimensionality of HSIs. Incorporating spatial information into the proposed methodology can be a topic for future research that will improve the quality of the extracted features. This could involve exploring spatial–spectral fusion techniques or integrating spatial context through the use of spatial filters. In addition, the robustness of the suggested FE approach against noise and outliers in hyperspectral data is another topic for future work. This may entail investigating strategies for improving its performance in the presence of sensor noise or atmospheric effects. Furthermore, integrating methodologies such as robust statistics or outlier identification might improve the method’s robustness, eventually leading to more reliable dimension reduction results in challenging circumstances.

Author Contributions

Conceptualization, S.H.A.M., F.K. and S.G.; methodology, S.H.A.M. and S.G.; software, S.H.A.M.; validation, S.H.A.M.; formal analysis, S.H.A.M. and F.K.; investigation, S.H.A.M.; writing—review and editing, S.H.A.M., F.K., S.G., M.A. and S.J.; visualization, S.H.A.M. and F.K.; supervision, S.G. All authors have read and agreed to the published version of the manuscript.

Funding

We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC). We express our sincere appreciation to the Government of Ontario for awarding us the Ontario Graduate Scholarship (OGS).

Data Availability Statement

For the sake of reproducibility, the source code for this study will be made available at https://github.com/halizz821/unsupervised_dimension_reduction_HSI.

Acknowledgments

We extend our sincere gratitude to He Sun for generously providing us with the source code of the HCBH method.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Aghaee, R.; Mokhtarzade, M. Classification of hyperspectral images using subspace projection feature space. IEEE Geosci. Remote Sens. Lett. 2015, 12, 1803–1807.
  2. Moghaddam, S.A.; Mokhtarzade, M.; Moghaddam, S.A. A new multiple classifier system based on a PSO algorithm for the classification of hyperspectral images. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, 42, 71–75.
  3. Moghaddam, S.H.A.; Mokhtarzade, M.; Beirami, B.A. A feature extraction method based on spectral segmentation and integration of hyperspectral images. Int. J. Appl. Earth Obs. Geoinf. 2020, 89, 102097.
  4. Camps-Valls, G.; Marsheva, T.V.B.; Zhou, D. Semi-supervised graph-based hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3044–3054.
  5. Fatemighomi, H.S.; Golalizadeh, M.; Amani, M. Object-based hyperspectral image classification using a new latent block model based on hidden Markov random fields. Pattern Anal. Appl. 2022, 25, 467–481.
  6. Seydi, S.T.; Shah-Hosseini, R.; Amani, M. A multi-dimensional deep Siamese network for land cover change detection in bi-temporal hyperspectral imagery. Sustainability 2022, 14, 12597.
  7. Shang, Y.; Zheng, X.; Li, J.; Liu, D.; Wang, P. A comparative analysis of swarm intelligence and evolutionary algorithms for feature selection in SVM-based hyperspectral image classification. Remote Sens. 2022, 14, 3019.
  8. Moeini Rad, A.; Abkar, A.A.; Mojaradi, B. Supervised distance-based feature selection for hyperspectral target detection. Remote Sens. 2019, 11, 2049.
  9. Bradley, P.E.; Keller, S.; Weinmann, M. Unsupervised feature selection based on ultrametricity and sparse training data: A case study for the classification of high-dimensional hyperspectral data. Remote Sens. 2018, 10, 1564.
  10. Xie, F.; Lei, C.; Li, F.; Huang, D.; Yang, J. Unsupervised hyperspectral feature selection based on fuzzy c-means and grey wolf optimizer. Int. J. Remote Sens. 2019, 40, 3344–3367.
  11. Landgrebe, D.A. Signal Theory Methods in Multispectral Remote Sensing; John Wiley & Sons: Hoboken, NJ, USA, 2003.
  12. Serpico, S.B.; Moser, G. Extraction of spectral channels from hyperspectral images for classification purposes. IEEE Trans. Geosci. Remote Sens. 2007, 45, 484–495.
  13. Rasti, B.; Hong, D.; Hang, R.; Ghamisi, P.; Kang, X.; Chanussot, J.; Benediktsson, J.A. Feature extraction for hyperspectral imagery: The evolution from shallow to deep: Overview and toolbox. IEEE Geosci. Remote Sens. Mag. 2020, 8, 60–88.
  14. Kumar, B.; Dikshit, O.; Gupta, A.; Singh, M.K. Feature extraction for hyperspectral image classification: A review. Int. J. Remote Sens. 2020, 41, 6248–6287.
  15. Moghaddam, S.H.A.; Mokhtarzade, M.; Naeini, A.A.; Amiri-Simkooei, A. A statistical variable selection solution for RFM ill-posedness and overparameterization problems. IEEE Trans. Geosci. Remote Sens. 2018, 56, 3990–4001.
  16. Haboudane, D.; Miller, J.R.; Pattey, E.; Zarco-Tejada, P.J.; Strachan, I.B. Hyperspectral vegetation indices and novel algorithms for predicting green LAI of crop canopies: Modeling and validation in the context of precision agriculture. Remote Sens. Environ. 2004, 90, 337–352.
  17. De Backer, S.; Kempeneers, P.; Debruyn, W.; Scheunders, P. A band selection technique for spectral classification. IEEE Geosci. Remote Sens. Lett. 2005, 2, 319–323.
  18. Kuo, B.-C.; Landgrebe, D.A. Nonparametric weighted feature extraction for classification. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1096–1105.
  19. Richards, J.A. Remote Sensing Digital Image Analysis; Springer: Berlin/Heidelberg, Germany, 2022; Volume 5.
  20. Chen, Y.; Lin, Z.; Zhao, X.; Wang, G.; Gu, Y. Deep learning-based classification of hyperspectral data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2094–2107.
  21. Hong, D.; Gao, L.; Yao, J.; Zhang, B.; Plaza, A.; Chanussot, J. Graph convolutional networks for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2020, 59, 5966–5978.
  22. Hong, D.; Han, Z.; Yao, J.; Gao, L.; Zhang, B.; Plaza, A.; Chanussot, J. SpectralFormer: Rethinking hyperspectral image classification with transformers. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5518615.
  23. Hong, D.; Gao, L.; Yokoya, N.; Yao, J.; Chanussot, J.; Du, Q.; Zhang, B. More diverse means better: Multimodal deep learning meets remote-sensing imagery classification. IEEE Trans. Geosci. Remote Sens. 2020, 59, 4340–4354.
  24. Sun, H.; Zhang, L.; Ren, J.; Huang, H. Novel hyperbolic clustering-based band hierarchy (HCBH) for effective unsupervised band selection of hyperspectral images. Pattern Recognit. 2022, 130, 108788.
  25. Ghorbanian, A.; Mohammadzadeh, A. An unsupervised feature extraction method based on band correlation clustering for hyperspectral image classification using limited training samples. Remote Sens. Lett. 2018, 9, 982–991.
  26. Rashwan, S.; Dobigeon, N. A split-and-merge approach for hyperspectral band selection. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1378–1382.
  27. Prabukumar, M.; Shrutika, S. Band clustering using expectation–maximization algorithm and weighted average fusion-based feature extraction for hyperspectral image classification. J. Appl. Remote Sens. 2018, 12, 046015.
  28. Lu, Q.; Huang, X.; Zhang, L. A novel clustering-based feature representation for the classification of hyperspectral imagery. Remote Sens. 2014, 6, 5732–5753.
  29. Imani, M.; Ghassemian, H. Band clustering-based feature extraction for classification of hyperspectral images using limited training samples. IEEE Geosci. Remote Sens. Lett. 2013, 11, 1325–1329.
  30. Bioucas-Dias, J.M.; Nascimento, J.M. Hyperspectral subspace identification. IEEE Trans. Geosci. Remote Sens. 2008, 46, 2435–2445.
  31. Remón, A.; Sánchez, S.; Bernabé, S.; Quintana-Ortí, E.S.; Plaza, A. Performance versus energy consumption of hyperspectral unmixing algorithms on multi-core platforms. EURASIP J. Adv. Signal Process. 2013, 2013, 68.
  32. Nascimento, J.M.; Dias, J.M. Vertex component analysis: A fast algorithm to unmix hyperspectral data. IEEE Trans. Geosci. Remote Sens. 2005, 43, 898–910.
  33. Gerg, I. Open Source MATLAB Hyperspectral Toolbox. Available online: https://github.com/isaacgerg/matlabHyperspectralToolbox (accessed on 22 April 2015).
  34. Hartigan, J.A.; Wong, M.A. Algorithm AS 136: A k-means clustering algorithm. J. R. Stat. Soc. Ser. C 1979, 28, 100–108.
  35. Bezdek, J.C.; Ehrlich, R.; Full, W. FCM: The fuzzy c-means clustering algorithm. Comput. Geosci. 1984, 10, 191–203.
  36. Melgani, F.; Bruzzone, L. Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790.
  37. Chang, C.-C.; Lin, C.-J. LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol. 2011, 2, 1–27.
  38. Liu, B.; Yu, X.; Zhang, P.; Yu, A.; Fu, Q.; Wei, X. Supervised deep feature extraction for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2017, 56, 1909–1921.
  39. Asl, M.G.; Mobasheri, M.R.; Mojaradi, B. Unsupervised feature selection using geometrical measures in prototype space for hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2013, 52, 3774–3787.
Figure 1. The flowchart of the proposed method for dimension reduction of HSIs.
Figure 2. Pavia Centre case study. (a) A false-color composite image. (b) Ground truth map.
Figure 3. Indian Pines case study. (a) A false-color composite image. (b) Ground truth map.
Figure 4. Kennedy Space Center case study. (a) A false-color composite image. (b) Ground truth map.
Figure 5. Indian Pines dataset: classification accuracy for different methods, including the proposed models in this study, which are fuzzy feature extraction (FFE) and weighted feature extraction (WFE), and competing methods: hyperbolic clustering-based band hierarchy (HCBH), clustering-based feature extraction (CBFE), autoencoder (AE), band correlation clustering (BCC), prototype space-based feature selection (PFS), and maximum tangent discrimination (MTD); (a,b) illustrate the mean kappa coefficient (multiplied by 100), OA, and AA for the cases of 50 and 100 training samples per class, respectively.
Figure 6. The classified image of the Indian Pines dataset using 100 training samples per class and 10 features: (a) BCC with κ = 64.95, (b) AE with κ = 63.18, (c) CBFE with κ = 69.82, (d) MTD with κ = 62.10, (e) PFS with κ = 54.58, (f) HCBH with κ = 73.20, (g) WFE with κ = 74.99, and (h) FFE with κ = 76.70.
Figure 7. T-SNE results for the Indian Pines dataset using 18 features; each color represents a class in this dataset: (a) BCC, (b) AE, (c) CBFE, (d) MTD, (e) PFS, (f) HCBH, (g) WFE, and (h) FFE.
Figure 8. Pavia Centre dataset: classification accuracy for different methods, including the proposed models in this study, which are fuzzy feature extraction (FFE) and weighted feature extraction (WFE), and competing methods: hyperbolic clustering-based band hierarchy (HCBH), clustering-based feature extraction (CBFE), autoencoder (AE), band correlation clustering (BCC), prototype space-based feature selection (PFS), and maximum tangent discrimination (MTD); (a,b) illustrate the mean kappa coefficient (multiplied by 100), OA, and AA for the cases of 5 and 20 training samples per class, respectively.
Figure 9. The classified image of the Pavia Centre dataset using 20 training samples per class and 10 features: (a) BCC with κ = 92.29, (b) AE with κ = 92.36, (c) CBFE with κ = 92.59, (d) MTD with κ = 92.70, (e) PFS with κ = 92.75, (f) HCBH with κ = 92.88, (g) WFE with κ = 93.48, and (h) FFE with κ = 92.97.
Figure 10. T-SNE results for the Pavia Centre dataset using 18 features; each color represents a class in this dataset: (a) BCC, (b) AE, (c) CBFE, (d) MTD, (e) PFS, (f) HCBH, (g) WFE, and (h) FFE.
Figure 11. KSC dataset: classification accuracy for different methods, including the proposed models in this study, which are fuzzy feature extraction (FFE) and weighted feature extraction (WFE), and competing methods: hyperbolic clustering-based band hierarchy (HCBH), clustering-based feature extraction (CBFE), band correlation clustering (BCC), autoencoder (AE), prototype space-based feature selection (PFS), and maximum tangent discrimination (MTD); (a,b) illustrate the mean kappa coefficient (multiplied by 100), OA, and AA for the cases of 5 and 20 training samples per class, respectively.
Figure 12. The classified images for the KSC dataset using 20 training samples per class and 10 features: (a) BCC with κ = 70.98 , (b) AE with κ = 74.75 , (c) CBFE with κ = 81.93 , (d) MTD with κ = 80.81 , (e) PFS with κ = 78.42 , (f) HCBH with κ = 77.80 , (g) WFE with κ = 81.73 , and (h) FFE with κ = 81.27 .
Figure 12. The classified images for the KSC dataset using 20 training samples per class and 10 features: (a) BCC with κ = 70.98 , (b) AE with κ = 74.75 , (c) CBFE with κ = 81.93 , (d) MTD with κ = 80.81 , (e) PFS with κ = 78.42 , (f) HCBH with κ = 77.80 , (g) WFE with κ = 81.73 , and (h) FFE with κ = 81.27 .
Figure 13. t-SNE results of the KSC dataset using 18 features; each color represents a class in this dataset; (a) BCC, (b) AE, (c) CBFE, (d) MTD, (e) PFS, (f) HCBH, (g) WFE, and (h) FFE.
Table 1. Pavia Centre ground truth samples.
Classes | Number of Samples
Water | 65,971
Trees | 7598
Asphalt | 3090
Self-Blocking Bricks | 2685
Bitumen | 6584
Tiles | 9248
Shadows | 7287
Meadows | 42,826
Bare soil | 2863
Table 2. Indian Pines ground truth samples.
Classes | Number of Samples
Corn (no-till) | 1428
Corn (min-till) | 830
Corn | 237
Grass (pasture) | 483
Grass (trees) | 730
Hay (windrowed) | 478
Soybean (no-till) | 972
Soybean (min-till) | 2455
Soybean (clean) | 593
Wheat | 205
Woods | 1265
Buildings–Grass–Trees–Drives | 386
Table 3. KSC ground truth samples.
Classes | Number of Samples
Scrub | 347
Willow swamp | 243
CP hammock | 256
Slash pine | 252
Oak | 161
Hardwood | 229
Swamp | 105
Graminoid marsh | 390
Spartina marsh | 520
Cattail marsh | 404
Salt marsh | 419
Mud flats | 503
Water | 927
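The classification experiments in Tables 4–9 train an SVM on a fixed number of randomly drawn ground-truth pixels per class (5, 20, 50, or 100, depending on the dataset) and test on the remaining labeled pixels. A minimal sketch of such a per-class split is given below; the function name and argument names are illustrative, not the authors' code.

```python
# Minimal sketch: draw n_train labeled pixels per class for training and keep
# the remaining labeled pixels for testing. Names are illustrative assumptions.
import numpy as np

def split_per_class(X, y, n_train, seed=0):
    rng = np.random.default_rng(seed)
    train_idx = []
    for c in np.unique(y):
        idx = np.flatnonzero(y == c)                       # all pixels of class c
        train_idx.extend(rng.choice(idx, size=n_train, replace=False))
    train_idx = np.array(train_idx)
    test_idx = np.setdiff1d(np.arange(len(y)), train_idx)  # everything not drawn
    return X[train_idx], y[train_idx], X[test_idx], y[test_idx]
```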
Table 4. Classification accuracy, i.e., mean κ/OA/AA, of different dimension reduction methods using 50 training samples per class for the Indian Pines dataset. The best results are highlighted in bold. Values of standard deviation (STD) are given in parentheses. Hyperbolic clustering-based band hierarchy (HCBH), clustering-based feature extraction (CBFE), band correlation clustering (BCC), autoencoder (AE), prototype space-based feature selection (PFS), maximum tangent discrimination (MTD), weighted feature extraction (WFE), and fuzzy feature extraction (FFE).
No. of Features | BCC | AE | CBFE | MTD | PFS | HCBH | WFE | FFE
3 | 49.47(±0.00)/55.77(±0.00)/60.77(±0.00) | 49.95(±0.75)/55.43(±0.74)/62.08(±1.01) | 55.87(±0.00)/60.81(±0.00)/67.47(±0.00) | 39.90(±0.52)/46.55(±0.52)/51.49(±0.67) | 44.04(±1.83)/50.59(±1.56)/53.22(±2.11) | 54.04(±0.33)/59.52(±0.53)/66.67(±0.03) | 57.31(±1.95)/62.28(±1.72)/67.83(±2.18) | 56.75(±0.66)/61.67(±0.53)/68.75(±0.78)
4 | 47.20(±0.18)/53.25(±0.23)/60.49(±0.12) | 50.86(±1.92)/56.31(±1.68)/62.24(±1.84) | 56.45(±0.00)/61.32(±0.08)/68.72(±0.00) | 48.16(±1.82)/54.16(±1.69)/60.12(±1.00) | 43.03(±0.21)/49.63(±0.05)/52.85(±0.45) | 61.19(±0.38)/65.91(±0.58)/72.43(±0.38) | 67.14(±0.81)/71.08(±0.74)/75.81(±0.46) | 62.62(±3.30)/66.95(±3.05)/73.27(±1.70)
5 | 49.89(±0.63)/55.85(±0.48)/62.45(±0.43) | 52.53(±2.07)/57.75(±1.87)/63.84(±1.57) | 59.19(±0.10)/63.91(±0.25)/70.60(±0.16) | 52.04(±1.77)/57.70(±1.61)/63.29(±1.13) | 38.24(±0.00)/45.27(±0.00)/48.67(±0.00) | 62.06(±0.88)/66.73(±1.06)/72.74(±0.68) | 67.70(±0.25)/71.52(±0.21)/77.75(±0.40) | 66.86(±0.77)/70.77(±0.68)/76.61(±0.57)
6 | 52.69(±0.96)/58.59(±0.66)/64.16(±1.14) | 54.15(±1.28)/59.23(±1.14)/64.94(±0.96) | 59.96(±0.26)/64.66(±0.06)/71.70(±0.28) | 56.28(±0.00)/61.52(±0.00)/67.67(±0.00) | 44.63(±0.00)/52.01(±0.00)/52.77(±0.00) | 63.72(±0.60)/68.02(±0.33)/74.81(±0.10) | 69.54(±1.16)/73.14(±1.02)/78.73(±0.69) | 67.84(±1.39)/71.63(±1.23)/77.63(±0.98)
7 | 60.13(±1.00)/64.82(±0.80)/70.82(±1.15) | 55.84(±1.32)/60.76(±1.18)/66.26(±1.56) | 63.50(±0.07)/67.69(±1.09)/74.23(±0.06) | 52.62(±2.55)/58.00(±2.30)/64.52(±2.30) | 47.16(±3.89)/53.29(±3.52)/59.03(±3.38) | 65.89(±1.09)/69.97(±1.03)/76.99(±0.05) | 70.60(±1.52)/74.10(±1.33)/79.66(±1.31) | 69.45(±1.58)/73.07(±1.38)/78.39(±1.02)
8 | 60.26(±1.47)/64.88(±1.32)/71.17(±1.21) | 56.10(±1.98)/60.94(±1.80)/67.38(±1.44) | 65.04(±1.15)/69.07(±1.18)/75.44(±0.61) | 56.44(±0.46)/61.47(±0.48)/67.94(±0.13) | 47.38(±0.00)/53.41(±0.00)/59.71(±0.00) | 68.51(±0.66)/71.95(±0.74)/78.55(±0.42) | 70.73(±1.57)/74.22(±1.40)/79.64(±1.43) | 70.83(±1.12)/74.29(±0.98)/79.63(±1.01)
9 | 60.26(±2.27)/64.94(±2.01)/71.09(±1.75) | 55.63(±2.31)/60.57(±2.18)/66.99(±1.36) | 64.76(±1.29)/68.86(±1.78)/75.25(±0.67) | 56.14(±0.66)/61.14(±0.62)/67.80(±0.41) | 52.40(±2.42)/57.95(±2.11)/64.10(±2.87) | 69.14(±0.37)/72.79(±0.67)/78.76(±0.02) | 70.34(±2.87)/73.88(±2.54)/79.24(±2.57) | 71.54(±1.47)/74.93(±1.30)/80.37(±0.84)
10 | 59.74(±1.34)/64.47(±1.16)/70.67(±1.25) | 56.70(±2.19)/61.55(±1.99)/67.19(±2.03) | 65.48(±1.99)/69.47(±1.56)/75.73(±1.43) | 55.94(±1.52)/60.95(±1.44)/67.78(±1.02) | 52.06(±1.40)/57.76(±1.32)/62.44(±0.99) | 69.82(±0.86)/73.36(±0.42)/79.61(±0.21) | 69.93(±3.50)/73.52(±3.11)/78.67(±2.88) | 72.11(±1.07)/75.45(±0.94)/80.74(±0.78)
11 | 58.33(±1.99)/63.22(±1.83)/69.61(±1.48) | 57.18(±2.55)/61.99(±2.32)/67.34(±2.10) | 66.60(±1.75)/70.52(±1.86)/76.46(±1.27) | 59.91(±0.81)/64.57(±0.73)/70.71(±0.47) | 53.98(±0.94)/59.34(±0.81)/65.26(±1.78) | 70.18(±1.15)/73.71(±0.60)/79.42(±0.37) | 70.00(±3.18)/73.57(±2.81)/79.14(±2.88) | 73.57(±0.44)/76.74(±0.41)/81.69(±0.15)
12 | 61.67(±1.68)/66.27(±1.47)/72.25(±1.80) | 56.49(±2.10)/61.32(±1.99)/67.66(±1.34) | 67.21(±2.09)/71.04(±1.74)/77.40(±1.52) | 59.70(±0.40)/64.38(±0.37)/70.64(±0.29) | 54.19(±0.68)/59.49(±0.61)/65.72(±1.60) | 71.34(±0.71)/74.76(±1.14)/80.85(±0.43) | 70.17(±2.89)/73.73(±2.56)/79.52(±2.25) | 73.41(±1.04)/76.61(±0.93)/81.36(±0.60)
13 | 61.54(±2.16)/66.11(±1.98)/72.43(±1.49) | 58.32(±0.95)/63.02(±0.93)/68.52(±0.22) | 66.64(±1.97)/70.59(±1.09)/76.36(±1.83) | 60.27(±0.80)/64.87(±0.72)/71.16(±0.73) | 55.01(±1.34)/60.29(±1.26)/66.21(±1.37) | 71.16(±0.36)/74.56(±0.66)/80.33(±0.46) | 70.75(±2.54)/74.25(±2.24)/79.97(±2.18) | 72.89(±0.91)/76.15(±0.79)/81.34(±0.70)
14 | 61.34(±1.52)/65.97(±1.39)/72.04(±1.31) | 56.99(±1.88)/61.78(±1.71)/67.35(±1.58) | 67.22(±1.23)/71.11(±2.34)/76.80(±1.06) | 60.51(±0.25)/65.13(±0.23)/71.15(±0.43) | 55.67(±1.48)/60.88(±1.35)/66.94(±1.93) | 72.23(±1.02)/75.50(±0.43)/81.31(±0.15) | 68.60(±2.95)/72.34(±2.60)/78.15(±2.68) | 72.72(±0.90)/75.99(±0.79)/81.25(±0.67)
15 | 62.41(±1.40)/66.92(±1.22)/73.05(±1.42) | 59.31(±1.37)/63.90(±1.26)/69.56(±0.76) | 66.89(±2.34)/70.84(±2.08)/76.43(±1.97) | 59.42(±0.81)/64.18(±0.78)/69.86(±0.25) | 55.99(±1.87)/61.20(±1.63)/67.08(±2.48) | 72.85(±0.80)/76.11(±0.35)/81.43(±0.13) | 68.44(±3.25)/72.16(±2.91)/78.36(±2.63) | 73.11(±0.97)/76.35(±0.85)/81.58(±0.73)
16 | 62.69(±1.46)/67.14(±1.30)/73.33(±0.89) | 57.99(±0.84)/62.75(±0.71)/68.13(±1.20) | 66.74(±1.19)/70.69(±1.05)/76.42(±1.18) | 60.46(±1.89)/65.11(±1.68)/70.54(±1.55) | 56.49(±0.74)/61.60(±0.68)/68.10(±0.85) | 70.33(±0.40)/73.91(±0.56)/80.13(±0.28) | 67.72(±1.96)/71.51(±1.77)/77.94(±1.53) | 73.26(±0.70)/76.46(±0.63)/81.97(±0.59)
17 | 63.38(±1.80)/67.79(±1.60)/73.59(±1.45) | 58.34(±2.13)/63.02(±1.94)/68.91(±1.39) | 66.64(±0.74)/70.58(±0.66)/76.59(±0.72) | 60.84(±2.56)/65.46(±2.26)/70.97(±2.14) | 57.20(±1.73)/62.24(±1.54)/68.44(±1.88) | 73.14(±1.00)/76.39(±0.46)/81.48(±0.86) | 67.89(±2.59)/71.66(±2.30)/77.94(±2.27) | 73.64(±1.06)/76.80(±0.96)/82.19(±0.72)
18 | 62.19(±1.44)/66.68(±1.32)/73.25(±1.13) | 58.95(±1.69)/63.56(±1.55)/69.23(±1.26) | 66.35(±1.15)/70.31(±1.04)/76.35(±0.96) | 60.03(±2.06)/64.75(±1.82)/70.19(±2.03) | 57.33(±2.13)/62.36(±1.90)/68.62(±2.36) | 73.11(±0.70)/76.34(±0.57)/81.75(±0.34) | 67.72(±1.61)/71.50(±1.47)/77.86(±0.97) | 73.63(±0.70)/76.79(±0.63)/82.23(±0.49)
Mean | 58.32(±1.33)/63.29(±1.17)/69.45(±1.13) | 55.96(±1.71)/60.87(±1.56)/66.73(±1.35) | 64.03(±1.08)/68.22(±0.97)/74.50(±0.86) | 56.17(±1.18)/61.25(±1.08)/67.24(±0.91) | 50.92(±1.29)/56.71(±1.15)/61.82(±1.50) | 68.04(±0.71)/71.85(±0.63)/77.95(±0.31) | 68.41(±2.16)/72.16(±1.92)/77.89(±1.83) | 70.26(±1.13)/73.79(±1.01)/79.31(±0.77)
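As a reading aid for Tables 4–9: each cell lists the mean Kappa coefficient (κ), overall accuracy (OA), and average accuracy (AA), all multiplied by 100 and averaged over repeated random draws of the training pixels, with standard deviations in parentheses. The sketch below shows how such numbers can be computed with scikit-learn, reusing the split_per_class helper sketched after Table 3; the SVM hyperparameters and number of runs are illustrative assumptions, not the exact settings used by the authors.

```python
# Minimal sketch: mean kappa/OA/AA of an SVM over repeated random training draws.
# AA is taken as the mean of per-class recalls (scikit-learn's balanced accuracy).
# Assumes split_per_class from the earlier sketch is in scope. Illustrative only.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, balanced_accuracy_score, cohen_kappa_score

def evaluate(X, y, n_train, n_runs=10):
    scores = []
    for seed in range(n_runs):
        X_tr, y_tr, X_te, y_te = split_per_class(X, y, n_train, seed=seed)
        y_hat = SVC(kernel="rbf", C=100, gamma="scale").fit(X_tr, y_tr).predict(X_te)
        scores.append((cohen_kappa_score(y_te, y_hat),
                       accuracy_score(y_te, y_hat),
                       balanced_accuracy_score(y_te, y_hat)))
    kappa, oa, aa = 100 * np.mean(scores, axis=0)  # report as percentages
    return kappa, oa, aa
```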
Table 5. Classification accuracy, i.e., mean κ/OA/AA, of different dimension reduction methods using 100 training samples per class for the Indian Pines dataset. The best results are highlighted in bold. Values of standard deviation are given in parentheses. Hyperbolic clustering-based band hierarchy (HCBH), clustering-based feature extraction (CBFE), band correlation clustering (BCC), autoencoder (AE), prototype space-based feature selection (PFS), maximum tangent discrimination (MTD), weighted feature extraction (WFE), and fuzzy feature extraction (FFE).
No. of Features | BCC | AE | CBFE | MTD | PFS | HCBH | WFE | FFE
3 | 47.01(±0.00)/52.61(±0.00)/63.75(±0.00) | 51.61(±0.80)/56.97(±0.76)/65.68(±0.84) | 57.28(±0.00)/62.28(±0.00)/68.39(±0.00) | 42.10(±0.42)/48.67(±0.36)/55.38(±0.38) | 46.07(±1.57)/52.20(±1.38)/58.58(±1.46) | 56.74(±0.31)/61.79(±0.31)/70.97(±0.70) | 56.78(±1.68)/61.93(±1.48)/69.50(±0.97) | 58.27(±0.33)/63.12(±0.28)/70.57(±0.21)
4 | 48.96(±0.42)/54.64(±0.41)/64.11(±0.24) | 52.52(±2.10)/57.89(±1.98)/66.06(±1.74) | 60.35(±0.00)/65.14(±0.00)/70.85(±0.00) | 51.38(±0.69)/57.05(±0.68)/63.84(±0.23) | 43.50(±1.61)/49.85(±1.59)/56.47(±1.35) | 62.61(±0.40)/67.04(±1.19)/75.60(±0.20) | 69.40(±1.01)/73.13(±0.91)/79.80(±0.58) | 65.42(±4.39)/69.55(±3.98)/76.64(±2.45)
5 | 55.09(±0.27)/60.42(±0.27)/67.09(±0.09) | 54.82(±2.53)/60.05(±2.35)/67.70(±1.70) | 63.75(±0.00)/68.13(±0.00)/74.25(±0.00) | 53.86(±0.18)/59.39(±0.05)/66.21(±0.58) | 41.34(±0.52)/48.24(±0.52)/53.45(±0.46) | 64.92(±0.48)/69.07(±0.42)/77.32(±0.55) | 71.89(±0.65)/75.32(±0.59)/82.30(±0.26) | 69.99(±0.21)/73.62(±0.19)/81.15(±0.19)
6 | 57.43(±0.41)/62.50(±0.40)/69.30(±0.19) | 57.08(±0.84)/62.16(±0.81)/69.44(±0.76) | 67.00(±0.37)/71.04(±0.34)/77.40(±0.23) | 56.81(±1.24)/62.09(±1.02)/68.18(±1.61) | 44.94(±0.77)/51.52(±0.72)/56.81(±0.74) | 66.85(±0.36)/70.84(±0.51)/78.88(±0.21) | 74.38(±2.80)/77.54(±2.46)/83.36(±2.25) | 71.15(±0.50)/74.67(±0.46)/81.32(±0.40)
7 | 62.76(±2.05)/67.27(±1.84)/73.94(±1.72) | 58.80(±1.10)/63.67(±1.00)/70.77(±0.97) | 68.15(±0.17)/71.94(±0.16)/79.19(±0.01) | 57.98(±0.64)/63.07(±0.42)/69.28(±1.04) | 50.93(±2.09)/56.82(±1.88)/62.88(±2.42) | 67.72(±0.29)/71.58(±0.98)/80.09(±0.54) | 75.46(±1.07)/78.49(±0.96)/84.31(±0.70) | 73.09(±1.38)/76.40(±1.25)/82.80(±0.74)
8 | 62.39(±1.06)/66.91(±0.95)/73.91(±0.84) | 58.78(±3.24)/63.60(±2.99)/71.07(±2.34) | 67.91(±0.56)/71.75(±0.51)/78.98(±0.49) | 60.02(±2.31)/64.85(±2.11)/71.14(±1.78) | 53.26(±1.06)/58.80(±1.01)/65.77(±0.55) | 72.29(±0.27)/75.73(±0.62)/81.75(±0.97) | 75.93(±2.09)/78.90(±1.85)/84.76(±1.79) | 73.97(±1.31)/77.19(±1.18)/83.43(±0.54)
9 | 64.44(±1.42)/68.75(±1.25)/75.51(±1.31) | 59.96(±1.76)/64.72(±1.59)/71.78(±1.16) | 68.55(±1.31)/72.35(±1.15)/79.34(±1.21) | 61.69(±2.84)/66.38(±2.59)/72.43(±2.18) | 54.59(±0.86)/59.99(±0.80)/67.13(±1.02) | 73.97(±0.75)/77.23(±0.56)/83.17(±0.41) | 75.98(±3.29)/78.96(±2.92)/84.70(±2.55) | 76.16(±1.32)/79.13(±1.18)/85.15(±0.83)
10 | 64.33(±1.45)/68.67(±1.30)/75.29(±1.06) | 59.57(±1.84)/64.36(±1.65)/71.60(±1.78) | 70.37(±1.17)/74.00(±1.06)/80.56(±0.83) | 62.75(±2.42)/67.36(±2.16)/73.15(±2.09) | 54.88(±0.81)/60.21(±0.76)/67.29(±1.04) | 74.58(±0.37)/77.78(±0.46)/83.59(±0.27) | 74.75(±3.12)/77.88(±2.76)/83.92(±2.65) | 76.82(±0.91)/79.71(±0.80)/85.79(±0.73)
11 | 65.68(±2.44)/69.86(±2.16)/76.36(±2.10) | 61.01(±3.28)/65.64(±3.00)/72.90(±2.08) | 70.83(±1.30)/74.42(±1.15)/80.96(±1.08) | 64.30(±1.92)/68.71(±1.77)/74.49(±1.28) | 56.03(±1.36)/61.36(±1.16)/68.31(±1.26) | 74.79(±0.22)/77.97(±0.97)/83.40(±0.53) | 75.46(±3.18)/78.51(±2.82)/84.62(±2.37) | 77.44(±0.60)/80.26(±0.52)/86.31(±0.64)
12 | 66.36(±2.27)/70.46(±2.03)/76.98(±1.91) | 60.83(±0.86)/65.53(±0.83)/72.82(±0.86) | 70.83(±1.02)/74.43(±0.91)/80.75(±0.58) | 65.70(±1.64)/69.91(±1.44)/75.92(±1.42) | 55.99(±1.03)/61.31(±0.95)/68.59(±0.95) | 75.80(±0.26)/78.83(±0.41)/84.99(±0.68) | 74.72(±2.69)/77.86(±2.39)/83.95(±1.81) | 77.47(±0.52)/80.29(±0.46)/86.32(±0.43)
13 | 67.88(±1.39)/71.84(±1.22)/78.37(±1.30) | 61.91(±1.51)/66.45(±1.37)/73.73(±1.03) | 72.21(±1.20)/75.66(±1.06)/81.79(±0.94) | 65.93(±1.42)/70.10(±1.25)/76.03(±1.32) | 56.49(±0.94)/61.82(±0.88)/69.10(±0.85) | 76.60(±0.34)/79.56(±0.84)/85.28(±0.75) | 72.32(±4.24)/75.73(±3.77)/82.26(±3.27) | 77.86(±0.68)/80.64(±0.59)/86.66(±0.72)
14 | 67.36(±2.05)/71.35(±1.83)/78.18(±1.79) | 59.93(±2.02)/64.68(±1.79)/71.59(±2.13) | 72.41(±1.10)/75.81(±0.98)/82.11(±0.76) | 63.81(±2.94)/68.23(±2.59)/74.58(±2.37) | 57.12(±1.56)/62.33(±1.38)/69.40(±1.31) | 78.40(±0.20)/81.16(±0.85)/86.59(±0.82) | 73.02(±3.68)/76.34(±3.28)/82.82(±2.58) | 78.50(±0.62)/81.19(±0.55)/87.15(±0.44)
15 | 67.25(±1.85)/71.28(±1.66)/78.00(±1.67) | 62.66(±1.63)/67.12(±1.49)/74.30(±1.03) | 71.27(±1.82)/74.80(±1.63)/81.41(±1.12) | 63.77(±3.19)/68.18(±2.82)/74.62(±2.59) | 57.73(±1.63)/62.92(±1.49)/70.09(±1.44) | 77.65(±0.77)/80.49(±1.06)/85.96(±0.67) | 72.99(±2.60)/76.31(±2.33)/83.14(±1.77) | 78.42(±0.55)/81.12(±0.49)/86.94(±0.43)
16 | 69.11(±1.31)/72.93(±1.14)/79.75(±1.06) | 61.87(±1.87)/66.45(±1.69)/73.16(±1.59) | 71.19(±1.36)/74.70(±1.23)/81.61(±0.99) | 63.75(±2.83)/68.22(±2.50)/74.31(±2.36) | 58.13(±1.60)/63.24(±1.40)/70.31(±1.41) | 77.33(±0.30)/80.19(±0.57)/86.15(±0.85) | 73.04(±2.57)/76.36(±2.28)/83.31(±1.88) | 78.47(±0.66)/81.17(±0.59)/86.86(±0.45)
17 | 68.41(±1.02)/72.30(±0.89)/78.96(±0.73) | 63.18(±1.53)/67.60(±1.39)/74.39(±0.83) | 71.35(±1.59)/74.85(±1.42)/81.79(±1.09) | 63.28(±2.48)/67.79(±2.25)/73.96(±1.90) | 58.86(±1.31)/63.87(±1.19)/70.89(±1.14) | 77.96(±0.16)/80.76(±0.68)/86.18(±0.68) | 72.32(±1.12)/75.72(±0.98)/82.78(±0.94) | 78.70(±0.94)/81.39(±0.83)/86.96(±0.65)
18 | 68.49(±1.55)/72.38(±1.36)/78.98(±1.26) | 62.91(±1.49)/67.36(±1.35)/74.32(±1.02) | 70.95(±1.30)/74.46(±1.17)/81.72(±0.95) | 64.33(±2.64)/68.71(±2.40)/75.18(±2.21) | 59.89(±0.89)/64.80(±0.84)/72.01(±0.46) | 78.02(±0.03)/80.77(±0.60)/86.91(±0.52) | 73.02(±1.38)/76.34(±1.23)/83.32(±1.14) | 78.69(±1.03)/81.37(±0.92)/86.99(±0.66)
Mean | 62.68(±1.31)/67.14(±1.17)/74.28(±1.08) | 59.22(±1.78)/64.02(±1.63)/71.33(±1.37) | 68.40(±0.89)/72.23(±0.80)/78.82(±0.64) | 60.09(±1.86)/64.92(±1.65)/71.17(±1.58) | 53.11(±1.23)/58.70(±1.12)/65.44(±1.12) | 72.26(±0.34)/75.67(±0.69)/82.30(±0.58) | 72.59(±2.32)/75.96(±2.06)/82.43(±1.72) | 74.40(±1.00)/77.55(±0.89)/83.82(±0.66)
Table 6. Classification accuracy, i.e., mean κ/OA/AA, of different dimension reduction methods using five training samples per class for the Pavia Centre dataset. The best results are highlighted in bold. Values of standard deviation are given in parentheses. Hyperbolic clustering-based band hierarchy (HCBH), clustering-based feature extraction (CBFE), band correlation clustering (BCC), autoencoder (AE), prototype space-based feature selection (PFS), maximum tangent discrimination (MTD), weighted feature extraction (WFE), and fuzzy feature extraction (FFE).
No. of Features | BCC | AE | CBFE | MTD | PFS | HCBH | WFE | FFE
3 | 87.51(±0.00)/92.70(±0.00)/86.10(±0.00) | 86.30(±0.84)/91.96(±0.49)/85.49(±0.88) | 88.41(±0.00)/93.23(±0.00)/86.18(±0.00) | 84.25(±3.02)/90.82(±1.75)/80.72(±5.10) | 88.93(±0.10)/93.53(±0.06)/87.00(±0.02) | 85.36(±0.41)/91.43(±0.32)/82.46(±0.59) | 85.90(±1.25)/91.74(±0.73)/84.04(±1.66) | 86.05(±0.70)/91.85(±0.41)/83.19(±0.87)
4 | 87.32(±0.00)/92.58(±0.00)/86.23(±0.00) | 84.76(±3.03)/91.07(±1.78)/83.32(±3.71) | 87.33(±0.00)/92.59(±0.00)/86.27(±0.00) | 84.20(±1.06)/90.78(±0.61)/81.06(±1.84) | 86.83(±0.06)/92.31(±0.03)/83.75(±0.19) | 83.72(±0.45)/90.45(±0.36)/80.78(±0.63) | 87.57(±0.82)/92.72(±0.49)/86.53(±0.91) | 86.32(±0.08)/92.01(±0.04)/84.27(±0.23)
5 | 87.38(±0.03)/92.62(±0.01)/85.90(±0.07) | 86.09(±1.59)/91.84(±0.93)/85.19(±2.15) | 88.07(±0.25)/93.02(±0.14)/86.76(±0.27) | 86.01(±1.68)/91.83(±0.98)/84.07(±1.98) | 86.09(±1.72)/91.87(±1.02)/82.97(±1.92) | 82.02(±0.22)/89.47(±0.13)/76.91(±0.40) | 88.56(±0.28)/93.31(±0.16)/87.80(±0.42) | 85.86(±0.22)/91.74(±0.13)/82.19(±0.42)
6 | 88.84(±0.46)/93.47(±0.27)/87.58(±0.53) | 87.05(±0.70)/92.41(±0.42)/86.06(±0.71) | 88.59(±0.96)/93.33(±0.56)/87.03(±1.51) | 85.71(±4.01)/91.61(±2.44)/84.21(±3.98) | 87.09(±0.52)/92.45(±0.31)/85.11(±0.64) | 87.89(±0.22)/92.92(±0.13)/85.90(±0.40) | 88.66(±0.15)/93.36(±0.09)/88.15(±0.20) | 86.77(±0.10)/92.27(±0.06)/84.40(±0.19)
7 | 86.69(±0.81)/92.22(±0.47)/84.80(±1.02) | 85.69(±2.65)/91.61(±1.57)/84.08(±4.04) | 88.97(±0.26)/93.55(±0.15)/87.71(±0.26) | 86.80(±1.62)/92.30(±0.94)/84.84(±1.92) | 87.14(±0.52)/92.49(±0.30)/84.74(±0.77) | 87.31(±0.21)/92.58(±0.12)/85.65(±0.39) | 88.72(±0.08)/93.40(±0.05)/88.03(±0.06) | 87.23(±0.07)/92.54(±0.04)/85.47(±0.29)
8 | 86.56(±0.08)/92.14(±0.05)/84.79(±0.17) | 86.19(±1.13)/91.91(±0.66)/84.43(±2.33) | 86.42(±0.87)/92.06(±0.51)/83.89(±1.67) | 86.74(±0.45)/92.26(±0.26)/84.78(±0.65) | 87.49(±0.20)/92.69(±0.12)/85.86(±0.73) | 87.69(±0.26)/92.80(±0.17)/86.17(±0.44) | 88.38(±0.14)/93.20(±0.08)/87.92(±0.19) | 87.46(±0.52)/92.67(±0.31)/86.23(±0.45)
9 | 86.51(±0.12)/92.11(±0.07)/84.82(±0.24) | 87.57(±1.15)/92.73(±0.67)/86.29(±1.06) | 86.06(±0.80)/91.84(±0.47)/83.91(±1.26) | 86.57(±0.71)/92.16(±0.41)/84.63(±1.11) | 87.47(±0.26)/92.67(±0.15)/86.25(±0.30) | 87.38(±0.32)/92.61(±0.23)/86.05(±0.50) | 88.04(±0.26)/93.00(±0.16)/87.51(±0.34) | 88.00(±0.28)/92.98(±0.16)/86.20(±1.05)
10 | 86.78(±0.17)/92.27(±0.10)/85.48(±0.30) | 87.40(±1.08)/92.63(±0.63)/85.72(±1.78) | 86.68(±0.82)/92.21(±0.48)/84.94(±1.38) | 86.90(±0.37)/92.35(±0.22)/84.97(±0.65) | 87.39(±0.15)/92.63(±0.09)/85.99(±0.10) | 87.55(±0.21)/92.72(±0.12)/86.21(±0.39) | 88.39(±0.21)/93.22(±0.12)/86.49(±0.21) | 87.38(±0.36)/92.63(±0.21)/84.00(±0.80)
11 | 86.75(±0.21)/92.25(±0.12)/85.49(±0.38) | 86.34(±3.30)/92.00(±1.95)/84.24(±4.49) | 87.34(±0.16)/92.59(±0.09)/86.06(±0.51) | 86.86(±0.54)/92.33(±0.32)/85.32(±0.50) | 87.29(±0.28)/92.58(±0.16)/85.70(±0.31) | 89.45(±0.20)/93.83(±0.11)/88.24(±0.38) | 88.47(±0.20)/93.26(±0.12)/86.67(±0.30) | 87.57(±0.26)/92.74(±0.15)/84.24(±0.58)
12 | 86.79(±0.06)/92.27(±0.03)/85.43(±0.18) | 87.29(±0.27)/92.56(±0.16)/86.11(±0.58) | 87.30(±0.23)/92.57(±0.13)/86.11(±0.47) | 86.76(±0.36)/92.26(±0.21)/85.23(±0.39) | 87.25(±0.24)/92.55(±0.14)/85.55(±0.15) | 89.82(±0.24)/94.05(±0.15)/88.77(±0.42) | 88.52(±0.33)/93.29(±0.19)/86.87(±0.43) | 87.43(±0.10)/92.66(±0.06)/83.86(±0.18)
13 | 86.78(±0.25)/92.27(±0.14)/85.32(±0.50) | 86.42(±1.36)/92.04(±0.80)/85.26(±1.73) | 87.07(±0.22)/92.43(±0.13)/85.85(±0.41) | 86.68(±0.34)/92.22(±0.20)/84.82(±0.31) | 86.57(±0.18)/92.16(±0.10)/84.87(±0.21) | 89.19(±0.24)/93.68(±0.15)/88.37(±0.42) | 88.67(±0.24)/93.38(±0.14)/87.05(±0.28) | 87.81(±0.41)/92.88(±0.24)/84.81(±0.91)
14 | 86.87(±0.22)/92.32(±0.13)/85.53(±0.54) | 86.20(±1.85)/91.91(±1.09)/85.07(±2.18) | 87.14(±0.15)/92.48(±0.09)/85.93(±0.26) | 86.48(±0.23)/92.10(±0.13)/84.74(±0.46) | 86.50(±0.29)/92.12(±0.17)/84.64(±0.27) | 89.27(±0.22)/93.72(±0.13)/88.45(±0.40) | 88.27(±0.63)/93.15(±0.36)/86.17(±1.14) | 87.65(±0.09)/92.79(±0.05)/84.43(±0.19)
15 | 86.39(±0.65)/92.04(±0.38)/84.29(±1.48) | 86.52(±0.94)/92.10(±0.57)/85.30(±0.97) | 87.10(±0.14)/92.45(±0.08)/86.07(±0.27) | 86.52(±0.31)/92.12(±0.18)/84.70(±0.36) | 86.32(±0.26)/92.01(±0.15)/84.28(±0.28) | 87.51(±0.24)/92.69(±0.14)/86.40(±0.41) | 88.32(±0.82)/93.17(±0.48)/86.43(±1.68) | 87.68(±0.09)/92.81(±0.05)/84.43(±0.17)
16 | 86.24(±0.69)/91.96(±0.40)/83.90(±1.63) | 87.16(±1.05)/92.48(±0.62)/85.53(±1.35) | 87.17(±0.18)/92.49(±0.11)/85.86(±0.39) | 86.85(±0.49)/92.32(±0.28)/85.04(±0.71) | 86.30(±0.20)/92.00(±0.11)/84.09(±0.31) | 87.64(±0.23)/92.77(±0.13)/86.59(±0.40) | 88.04(±0.56)/93.01(±0.32)/85.83(±1.25) | 87.81(±0.32)/92.88(±0.19)/85.07(±1.15)
17 | 86.79(±0.11)/92.27(±0.07)/85.28(±0.45) | 87.67(±0.55)/92.78(±0.32)/86.46(±1.01) | 87.05(±0.26)/92.43(±0.15)/85.68(±0.79) | 87.05(±0.38)/92.43(±0.22)/85.46(±0.76) | 86.05(±0.17)/91.85(±0.10)/83.80(±0.60) | 87.71(±0.25)/92.80(±0.16)/86.66(±0.43) | 88.08(±0.56)/93.04(±0.33)/86.11(±0.90) | 88.23(±0.35)/93.12(±0.21)/86.59(±0.17)
18 | 86.92(±0.22)/92.35(±0.13)/85.53(±0.27) | 87.02(±0.79)/92.40(±0.46)/85.31(±1.62) | 87.10(±0.49)/92.45(±0.29)/85.54(±1.04) | 87.04(±0.43)/92.43(±0.25)/85.16(±0.59) | 86.11(±0.20)/91.89(±0.11)/83.55(±0.34) | 87.63(±0.24)/92.73(±0.14)/86.09(±0.41) | 88.30(±0.38)/93.16(±0.22)/86.31(±0.69) | 87.96(±0.29)/92.96(±0.17)/86.90(±0.19)
Mean | 86.94(±0.25)/92.37(±0.15)/85.40(±0.49) | 86.60(±1.39)/92.15(±0.82)/85.24(±1.91) | 87.36(±0.36)/92.61(±0.21)/85.86(±0.66) | 86.34(±1.00)/92.02(±0.59)/84.36(±1.33) | 86.93(±0.33)/92.36(±0.20)/84.88(±0.45) | 87.32(±0.26)/92.58(±0.17)/85.61(±0.44) | 88.18(±0.43)/93.09(±0.25)/86.74(±0.67) | 87.33(±0.26)/92.60(±0.15)/84.77(±0.49)
Table 7. Classification accuracy, i.e., mean κ/OA/AA, of different dimension reduction methods using 20 training samples per class for the Pavia Centre dataset. The best results are highlighted in bold. Values of standard deviation are given in parentheses. Hyperbolic clustering-based band hierarchy (HCBH), autoencoder (AE), clustering-based feature extraction (CBFE), band correlation clustering (BCC), prototype space-based feature selection (PFS), maximum tangent discrimination (MTD), weighted feature extraction (WFE), and fuzzy feature extraction (FFE).
No. of Features | BCC | AE | CBFE | MTD | PFS | HCBH | WFE | FFE
3 | 91.23(±0.00)/94.88(±0.00)/91.16(±0.00) | 91.61(±0.78)/95.10(±0.46)/91.58(±0.68) | 92.43(±0.01)/95.58(±0.00)/92.32(±0.00) | 89.94(±3.30)/94.14(±1.93)/89.08(±4.08) | 90.92(±0.36)/94.70(±0.21)/90.48(±0.30) | 92.68(±0.05)/95.72(±0.00)/92.55(±0.04) | 91.37(±0.96)/94.96(±0.56)/90.71(±1.00) | 91.92(±0.19)/95.28(±0.12)/91.51(±0.02)
4 | 92.62(±0.00)/95.70(±0.00)/92.59(±0.00) | 92.29(±0.71)/95.49(±0.42)/92.33(±0.46) | 92.65(±0.07)/95.71(±0.04)/92.65(±0.10) | 91.02(±0.56)/94.76(±0.33)/90.64(±0.71) | 89.64(±1.34)/93.95(±0.78)/88.98(±1.85) | 92.77(±0.15)/95.78(±0.02)/92.82(±0.05) | 92.48(±0.54)/95.61(±0.32)/92.21(±0.75) | 93.06(±0.02)/95.95(±0.01)/92.79(±0.06)
5 | 92.61(±0.08)/95.66(±0.02)/92.60(±0.02) | 92.03(±0.32)/95.35(±0.19)/92.03(±0.29) | 92.59(±0.01)/95.68(±0.01)/92.78(±0.02) | 91.12(±1.35)/94.82(±0.79)/90.70(±1.74) | 92.14(±0.80)/95.41(±0.46)/91.72(±1.39) | 92.91(±0.10)/95.86(±0.02)/93.09(±0.06) | 92.96(±0.18)/95.89(±0.10)/92.96(±0.11) | 93.31(±0.14)/96.10(±0.08)/93.04(±0.04)
6 | 92.60(±0.09)/95.65(±0.02)/92.75(±0.04) | 92.42(±0.30)/95.57(±0.18)/92.42(±0.33) | 92.52(±0.02)/95.64(±0.01)/92.72(±0.02) | 92.30(±0.47)/95.51(±0.28)/91.92(±0.71) | 92.91(±0.18)/95.86(±0.10)/92.70(±0.18) | 92.95(±0.06)/95.89(±0.03)/93.16(±0.08) | 92.80(±0.15)/95.80(±0.09)/92.75(±0.19) | 93.49(±0.10)/96.20(±0.06)/93.24(±0.17)
7 | 92.93(±0.16)/95.83(±0.11)/92.61(±0.27) | 92.51(±0.33)/95.62(±0.20)/92.46(±0.25) | 92.67(±0.04)/95.75(±0.04)/92.62(±0.11) | 92.41(±0.26)/95.58(±0.16)/92.16(±0.26) | 92.84(±0.34)/95.82(±0.20)/92.67(±0.20) | 92.79(±0.07)/95.79(±0.13)/92.67(±0.16) | 93.27(±0.10)/96.07(±0.06)/93.29(±0.09) | 92.88(±0.31)/95.84(±0.18)/92.98(±0.22)
8 | 92.82(±0.25)/95.82(±0.14)/92.49(±0.12) | 92.27(±0.43)/95.49(±0.25)/92.37(±0.37) | 92.69(±0.13)/95.68(±0.08)/92.58(±0.12) | 92.76(±0.21)/95.78(±0.12)/92.49(±0.50) | 93.01(±0.21)/95.92(±0.13)/92.70(±0.15) | 92.73(±0.08)/95.76(±0.14)/92.62(±0.17) | 93.26(±0.10)/96.07(±0.06)/93.32(±0.06) | 92.78(±0.02)/95.79(±0.01)/92.90(±0.02)
9 | 92.30(±0.40)/95.59(±0.09)/92.29(±0.03) | 92.61(±0.15)/95.69(±0.09)/92.62(±0.15) | 92.77(±0.17)/95.72(±0.09)/92.57(±0.22) | 92.54(±0.32)/95.65(±0.19)/92.16(±0.24) | 92.90(±0.33)/95.86(±0.19)/92.67(±0.44) | 92.74(±0.13)/95.76(±0.04)/92.58(±0.08) | 93.11(±0.30)/95.98(±0.17)/93.03(±0.33) | 92.84(±0.11)/95.82(±0.07)/92.87(±0.03)
10 | 92.23(±0.08)/95.49(±0.27)/92.10(±0.35) | 92.51(±0.18)/95.62(±0.11)/92.73(±0.17) | 92.60(±0.04)/95.73(±0.05)/92.58(±0.04) | 92.52(±0.39)/95.64(±0.23)/92.30(±0.41) | 92.55(±0.37)/95.65(±0.21)/92.31(±0.34) | 92.90(±0.08)/95.86(±0.05)/92.46(±0.12) | 93.34(±0.30)/96.12(±0.17)/93.25(±0.34) | 92.93(±0.05)/95.87(±0.03)/92.99(±0.07)
11 | 92.24(±0.43)/95.56(±0.22)/92.19(±0.16) | 92.06(±0.51)/95.36(±0.30)/92.26(±0.41) | 92.72(±0.20)/95.70(±0.16)/92.47(±0.30) | 92.83(±0.22)/95.82(±0.13)/92.74(±0.10) | 92.88(±0.31)/95.85(±0.18)/92.66(±0.34) | 93.10(±0.12)/95.98(±0.01)/92.75(±0.09) | 92.93(±0.37)/95.88(±0.21)/92.85(±0.44) | 93.06(±0.04)/95.95(±0.02)/93.08(±0.07)
12 | 92.30(±0.42)/95.50(±0.25)/91.92(±0.44) | 92.59(±0.38)/95.67(±0.22)/92.56(±0.39) | 92.63(±0.10)/95.74(±0.10)/92.56(±0.20) | 92.69(±0.20)/95.74(±0.12)/92.63(±0.18) | 92.67(±0.33)/95.73(±0.19)/92.45(±0.36) | 93.21(±0.07)/96.04(±0.01)/93.06(±0.04) | 93.17(±0.29)/96.02(±0.17)/93.14(±0.37) | 93.19(±0.03)/96.03(±0.02)/93.19(±0.02)
13 | 92.27(±0.30)/95.54(±0.22)/92.08(±0.31) | 92.39(±0.28)/95.56(±0.16)/92.42(±0.48) | 92.72(±0.28)/95.66(±0.13)/92.35(±0.34) | 92.56(±0.43)/95.66(±0.25)/92.40(±0.63) | 92.98(±0.12)/95.91(±0.07)/92.75(±0.20) | 92.89(±0.19)/95.85(±0.12)/92.60(±0.17) | 93.15(±0.28)/96.01(±0.16)/93.11(±0.31) | 93.26(±0.03)/96.07(±0.02)/93.21(±0.05)
14 | 92.56(±0.36)/95.50(±0.03)/91.90(±0.09) | 92.42(±0.32)/95.57(±0.19)/92.32(±0.25) | 92.58(±0.28)/95.73(±0.16)/92.49(±0.35) | 92.63(±0.39)/95.70(±0.23)/92.39(±0.39) | 92.58(±0.35)/95.68(±0.21)/92.13(±0.27) | 93.15(±0.09)/96.00(±0.06)/93.37(±0.10) | 92.92(±0.37)/95.87(±0.21)/92.91(±0.36) | 93.20(±0.33)/96.03(±0.19)/93.04(±0.40)
15 | 92.75(±0.53)/95.49(±0.21)/91.90(±0.42) | 92.43(±0.37)/95.58(±0.22)/92.52(±0.40) | 92.53(±0.32)/95.65(±0.19)/92.28(±0.42) | 92.74(±0.31)/95.76(±0.18)/92.46(±0.43) | 92.81(±0.34)/95.81(±0.20)/92.37(±0.37) | 93.09(±0.16)/95.97(±0.06)/93.34(±0.14) | 93.03(±0.26)/95.94(±0.15)/92.96(±0.34) | 93.10(±0.42)/95.98(±0.24)/92.92(±0.52)
16 | 92.23(±0.60)/95.65(±0.29)/92.18(±0.50) | 92.27(±0.50)/95.49(±0.29)/92.13(±0.57) | 92.81(±0.20)/95.68(±0.25)/92.29(±0.47) | 92.63(±0.34)/95.70(±0.20)/92.33(±0.39) | 92.89(±0.32)/95.85(±0.18)/92.43(±0.39) | 93.04(±0.08)/95.93(±0.10)/93.26(±0.15) | 93.27(±0.25)/96.08(±0.14)/93.22(±0.36) | 93.31(±0.22)/96.10(±0.13)/93.11(±0.31)
17 | 92.72(±0.30)/95.67(±0.32)/92.20(±0.59) | 92.53(±0.39)/95.64(±0.23)/92.44(±0.49) | 92.48(±0.49)/95.77(±0.21)/92.48(±0.47) | 92.67(±0.39)/95.73(±0.23)/92.43(±0.27) | 92.91(±0.29)/95.87(±0.17)/92.44(±0.38) | 93.08(±0.21)/95.96(±0.16)/93.27(±0.22) | 93.27(±0.27)/96.08(±0.15)/93.19(±0.37) | 92.98(±0.10)/95.91(±0.05)/92.90(±0.24)
18 | 92.20(±0.37)/95.63(±0.28)/92.18(±0.59) | 92.37(±0.38)/95.54(±0.22)/92.37(±0.57) | 92.89(±0.27)/95.84(±0.13)/92.77(±0.33) | 92.84(±0.29)/95.82(±0.17)/92.57(±0.29) | 93.04(±0.18)/95.95(±0.11)/92.53(±0.20) | 92.99(±0.20)/95.92(±0.07)/92.65(±0.12) | 92.95(±0.35)/95.89(±0.20)/92.84(±0.45) | 92.87(±0.46)/95.84(±0.27)/92.70(±0.60)
Mean | 92.41(±0.27)/95.57(±0.15)/92.20(±0.25) | 92.33(±0.40)/95.52(±0.23)/92.35(±0.39) | 92.64(±0.17)/95.70(±0.10)/92.53(±0.22) | 92.26(±0.59)/95.49(±0.35)/91.96(±0.71) | 92.48(±0.38)/95.61(±0.22)/92.12(±0.46) | 92.94(±0.12)/95.88(±0.06)/92.89(±0.11) | 92.96(±0.32)/95.89(±0.18)/92.86(±0.37) | 93.01(±0.16)/95.92(±0.09)/92.90(±0.18)
Table 8. Classification accuracy, i.e., mean κ/OA/AA, of different dimension reduction methods using five training samples per class for the KSC dataset. The best results are highlighted in bold. Values of standard deviation are given in parentheses. Hyperbolic clustering-based band hierarchy (HCBH), clustering-based feature extraction (CBFE), band correlation clustering (BCC), autoencoder (AE), prototype space-based feature selection (PFS), maximum tangent discrimination (MTD), weighted feature extraction (WFE), and fuzzy feature extraction (FFE).
No. of Features | BCC | AE | CBFE | MTD | PFS | HCBH | WFE | FFE
3 | 64.49(±0.00)/68.07(±0.00)/56.83(±0.00) | 61.27(±2.29)/65.03(±2.11)/58.15(±3.79) | 59.50(±0.00)/63.56(±0.00)/51.26(±0.00) | 39.65(±3.41)/45.64(±3.17)/36.51(±4.70) | 42.88(±9.97)/48.31(±9.84)/39.05(±8.68) | 27.87(±0.32)/35.21(±0.01)/27.94(±0.49) | 61.17(±1.41)/65.05(±1.30)/53.95(±0.99) | 62.52(±0.32)/66.30(±0.29)/54.66(±0.25)
4 | 62.05(±0.00)/65.84(±0.00)/54.32(±0.00) | 61.59(±1.40)/65.32(±1.29)/58.22(±2.05) | 71.71(±0.00)/74.60(±0.00)/67.57(±0.00) | 46.64(±0.80)/52.01(±0.66)/42.92(±1.39) | 56.39(±2.89)/60.64(±2.68)/50.74(±3.48) | 27.80(±0.32)/35.15(±0.92)/27.88(±0.63) | 61.51(±1.17)/65.36(±1.07)/54.29(±0.78) | 61.13(±1.72)/65.03(±1.55)/53.40(±1.96)
5 | 62.06(±0.01)/65.85(±0.01)/54.33(±0.01) | 62.05(±2.14)/65.78(±1.93)/58.89(±2.80) | 69.83(±0.35)/72.90(±0.31)/65.22(±0.22) | 44.67(±1.12)/49.78(±1.10)/39.91(±0.20) | 56.45(±2.77)/60.69(±2.58)/50.82(±3.30) | 60.35(±0.49)/64.46(±0.93)/52.62(±1.00) | 71.22(±1.86)/74.14(±1.67)/64.70(±3.46) | 66.79(±3.66)/70.14(±3.30)/59.91(±4.53)
6 | 62.07(±0.00)/65.86(±0.00)/54.35(±0.00) | 62.37(±2.05)/66.04(±1.87)/60.00(±2.68) | 71.37(±0.03)/74.28(±0.03)/67.28(±0.12) | 62.80(±0.36)/66.54(±0.32)/55.12(±1.15) | 61.23(±4.67)/65.04(±4.22)/56.67(±5.85) | 60.43(±0.46)/64.54(±0.85)/52.68(±0.89) | 70.38(±1.18)/73.37(±1.07)/63.98(±2.11) | 70.74(±2.09)/73.70(±1.88)/65.39(±3.10)
7 | 65.23(±3.66)/68.73(±3.32)/59.00(±5.48) | 57.31(±9.34)/61.47(±9.37)/54.49(±9.01) | 71.40(±0.17)/74.30(±0.15)/67.58(±0.51) | 63.34(±1.11)/67.03(±0.99)/56.19(±2.46) | 58.61(±4.05)/62.65(±3.73)/53.28(±5.16) | 71.25(±0.46)/74.15(±0.59)/68.30(±0.50) | 71.51(±0.28)/74.39(±0.25)/65.98(±0.38) | 71.92(±0.80)/74.77(±0.71)/67.11(±0.99)
8 | 65.78(±3.10)/69.23(±2.82)/59.66(±4.63) | 62.04(±3.94)/65.76(±3.54)/59.95(±4.35) | 71.49(±0.14)/74.38(±0.12)/67.71(±0.34) | 63.46(±1.54)/67.14(±1.38)/55.92(±3.47) | 65.29(±1.29)/68.79(±1.20)/62.24(±0.58) | 71.18(±0.34)/74.10(±0.38)/68.21(±0.94) | 71.64(±0.28)/74.51(±0.25)/66.25(±0.49) | 71.37(±0.44)/74.28(±0.39)/66.60(±0.37)
9 | 65.67(±2.11)/69.13(±1.93)/59.92(±3.63) | 60.03(±7.86)/64.01(±6.98)/56.66(±9.61) | 72.15(±0.19)/74.96(±0.17)/68.59(±0.33) | 62.96(±0.50)/66.69(±0.44)/54.61(±0.73) | 67.17(±0.89)/70.49(±0.80)/65.05(±1.28) | 71.18(±0.55)/74.10(±0.27)/68.21(±0.73) | 72.25(±1.06)/75.07(±0.95)/66.95(±1.61) | 71.18(±0.14)/74.11(±0.12)/66.37(±0.23)
10 | 68.67(±2.42)/71.85(±2.20)/64.07(±3.75) | 63.73(±1.79)/67.28(±1.63)/60.51(±2.32) | 72.83(±0.43)/75.57(±0.38)/69.69(±0.55) | 62.57(±2.74)/66.35(±2.46)/57.42(±3.54) | 67.74(±1.93)/70.99(±1.73)/65.47(±1.95) | 71.18(±0.43)/74.10(±0.13)/68.21(±0.87) | 71.38(±0.33)/74.28(±0.30)/65.66(±0.49) | 71.62(±0.41)/74.50(±0.36)/66.99(±0.63)
11 | 66.79(±2.53)/70.15(±2.30)/61.47(±3.99) | 63.80(±1.62)/67.36(±1.45)/60.92(±2.88) | 72.74(±0.84)/75.49(±0.75)/69.42(±1.32) | 64.79(±2.25)/68.35(±2.04)/60.07(±2.53) | 70.61(±3.78)/73.59(±3.41)/67.81(±2.95) | 70.89(±0.48)/73.77(±0.18)/68.92(±0.23) | 73.96(±0.91)/76.61(±0.82)/69.59(±1.16) | 72.18(±0.31)/75.01(±0.28)/67.75(±0.44)
12 | 67.93(±2.28)/71.19(±2.07)/63.17(±3.47) | 59.42(±7.92)/63.47(±7.01)/55.12(±8.67) | 73.35(±1.05)/76.03(±0.94)/70.26(±1.47) | 66.52(±1.56)/69.90(±1.39)/61.93(±2.08) | 70.57(±3.85)/73.56(±3.47)/67.88(±2.85) | 75.39(±0.46)/77.89(±0.20)/71.33(±0.20) | 72.93(±1.45)/75.69(±1.31)/68.15(±1.83) | 72.30(±0.24)/75.11(±0.23)/67.92(±0.26)
13 | 68.09(±2.63)/71.32(±2.38)/63.41(±3.78) | 54.75(±9.54)/59.31(±8.53)/50.80(±9.77) | 71.58(±0.92)/74.46(±0.82)/67.72(±1.11) | 65.56(±0.14)/69.04(±0.13)/60.61(±0.20) | 72.87(±2.86)/75.65(±2.56)/69.70(±2.91) | 75.39(±0.54)/77.89(±0.05)/71.33(±0.80) | 73.56(±1.38)/76.25(±1.24)/69.28(±1.77) | 72.63(±0.07)/75.41(±0.07)/68.30(±0.09)
14 | 69.28(±2.59)/72.40(±2.35)/64.94(±3.44) | 61.60(±6.39)/65.45(±5.57)/58.39(±7.85) | 72.32(±1.58)/75.12(±1.42)/68.67(±1.95) | 66.16(±2.24)/69.58(±2.02)/61.83(±2.83) | 73.96(±3.22)/76.60(±2.90)/70.45(±3.04) | 75.39(±0.69)/77.89(±0.55)/71.33(±0.72) | 74.03(±1.83)/76.67(±1.65)/70.49(±1.90) | 72.70(±0.46)/75.48(±0.41)/68.28(±0.49)
15 | 68.59(±1.62)/71.78(±1.46)/64.33(±1.65) | 59.07(±9.16)/63.10(±9.13)/56.70(±9.67) | 72.21(±1.14)/75.03(±1.02)/68.53(±1.50) | 66.01(±1.81)/69.43(±1.61)/61.64(±2.53) | 73.15(±2.97)/75.89(±2.68)/69.72(±2.38) | 75.39(±0.46)/77.89(±0.74)/71.33(±0.90) | 74.12(±0.43)/76.75(±0.40)/70.19(±0.35) | 73.60(±1.06)/76.29(±0.95)/69.32(±1.29)
16 | 68.88(±2.56)/72.04(±2.32)/64.36(±3.30) | 60.01(±7.31)/63.97(±6.54)/56.24(±8.10) | 72.35(±1.23)/75.14(±1.12)/68.91(±1.32) | 67.38(±2.97)/70.66(±2.66)/63.20(±3.61) | 71.59(±3.26)/74.49(±2.94)/69.04(±2.81) | 72.69(±0.47)/75.46(±0.90)/68.50(±0.28) | 73.22(±0.92)/75.93(±0.84)/69.20(±1.27) | 73.86(±0.95)/76.52(±0.85)/69.62(±1.30)
17 | 68.94(±1.97)/72.10(±1.78)/64.79(±2.08) | 58.37(±8.83)/62.53(±7.89)/55.09(±9.41) | 72.34(±0.47)/75.13(±0.41)/69.05(±0.88) | 69.71(±2.65)/72.75(±2.41)/66.06(±2.36) | 71.48(±3.63)/74.18(±3.27)/69.02(±3.22) | 72.69(±0.41)/75.46(±0.17)/68.50(±0.53) | 72.41(±1.02)/75.19(±0.92)/68.89(±0.96) | 75.12(±0.40)/77.66(±0.36)/70.86(±0.59)
18 | 69.43(±2.20)/72.54(±1.99)/65.33(±2.35) | 58.67(±7.71)/62.62(±7.08)/57.59(±6.74) | 73.02(±1.14)/75.74(±1.03)/69.80(±1.22) | 68.90(±1.86)/72.01(±1.67)/65.56(±2.19) | 71.79(±3.05)/74.66(±2.76)/68.92(±2.35) | 72.69(±0.38)/75.46(±0.75)/68.50(±0.57) | 73.37(±1.99)/76.06(±1.80)/69.52(±2.17) | 74.70(±0.86)/77.27(±0.78)/70.68(±1.20)
Mean | 66.50(±1.85)/69.88(±1.68)/60.89(±2.60) | 60.38(±5.71)/64.28(±5.12)/57.36(±6.54) | 71.26(±0.61)/74.17(±0.54)/67.33(±0.80) | 61.32(±1.69)/65.18(±1.53)/56.22(±2.25) | 65.74(±3.57)/69.14(±3.24)/62.24(±3.30) | 65.74(±0.45)/69.22(±0.48)/62.11(±0.64) | 71.17(±1.09)/74.08(±0.99)/66.07(±1.36) | 70.90(±0.87)/73.85(±0.78)/65.82(±1.11)
Table 9. Classification accuracy, i.e., mean κ/OA/AA, of different dimension reduction methods using 20 training samples per class for the KSC dataset. The best results are highlighted in bold. Values of standard deviation are given in parentheses. Hyperbolic clustering-based band hierarchy (HCBH), clustering-based feature extraction (CBFE), band correlation clustering (BCC), autoencoder (AE), prototype space-based feature selection (PFS), maximum tangent discrimination (MTD), weighted feature extraction (WFE), and fuzzy feature extraction (FFE).
No. of Features | BCC | AE | CBFE | MTD | PFS | HCBH | WFE | FFE
3 | 65.72(±0.00)/69.16(±0.00)/59.86(±0.00) | 72.53(±2.64)/75.29(±2.35)/68.31(±4.54) | 66.92(±0.00)/70.27(±0.00)/60.81(±0.00) | 42.08(±0.02)/47.33(±0.02)/39.20(±0.04) | 65.13(±0.03)/68.71(±0.03)/59.24(±0.04) | 48.14(±0.98)/53.30(±0.26)/42.21(±0.10) | 63.66(±0.29)/67.29(±0.27)/58.52(±0.27) | 66.59(±0.61)/69.96(±0.55)/61.00(±0.58)
4 | 66.94(±0.01)/70.29(±0.01)/60.72(±0.01) | 73.61(±1.41)/76.26(±1.28)/70.50(±2.18) | 74.39(±0.35)/76.91(±0.00)/71.52(±0.00) | 52.28(±9.87)/56.74(±9.89)/48.50(±9.55) | 65.17(±0.02)/68.74(±0.02)/59.32(±0.12) | 48.27(±0.85)/53.38(±0.42)/42.55(±0.69) | 65.00(±0.98)/68.48(±0.87)/59.71(±0.85) | 65.19(±0.82)/68.69(±0.75)/59.68(±0.81)
5 | 66.89(±0.01)/70.24(±0.01)/60.64(±0.02) | 74.02(±0.62)/76.62(±0.55)/71.04(±1.38) | 77.67(±1.35)/80.21(±1.15)/75.80(±1.36) | 67.78(±5.39)/70.98(±4.90)/62.49(±6.25) | 65.17(±0.02)/68.74(±0.02)/59.32(±0.14) | 61.07(±0.80)/64.90(±0.49)/56.34(±0.64) | 76.60(±5.20)/78.96(±4.68)/73.20(±6.41) | 72.66(±4.42)/75.43(±3.99)/68.09(±5.51)
6 | 66.94(±0.00)/70.29(±0.00)/60.73(±0.00) | 74.11(±1.05)/76.70(±0.96)/71.40(±0.87) | 80.74(±0.82)/82.16(±0.32)/78.01(±0.46) | 78.84(±0.78)/80.99(±0.68)/75.76(±1.15) | 72.47(±0.08)/75.26(±0.07)/69.81(±0.06) | 61.30(±0.72)/65.10(±1.01)/56.58(±0.98) | 75.51(±6.04)/77.98(±5.44)/71.92(±7.18) | 75.99(±2.33)/78.43(±2.09)/72.27(±3.09)
7 | 68.91(±2.55)/72.27(±2.53)/63.91(±4.08) | 69.91(±9.41)/72.93(±9.31)/66.87(±9.79) | 80.53(±0.47)/82.51(±0.15)/78.67(±0.19) | 80.71(±0.57)/82.67(±0.50)/77.34(±0.61) | 76.52(±3.69)/78.89(±3.30)/74.09(±3.88) | 77.75(±0.79)/79.96(±0.80)/76.01(±0.45) | 79.81(±0.12)/81.86(±0.10)/77.06(±0.16) | 78.43(±1.93)/80.62(±1.73)/75.31(±2.15)
8 | 71.78(±2.58)/73.83(±1.97)/67.27(±3.55) | 74.17(±1.47)/76.74(±1.32)/72.26(±1.40) | 81.58(±1.10)/83.61(±0.95)/79.67(±0.93) | 81.09(±0.25)/83.01(±0.23)/77.75(±0.27) | 77.08(±4.25)/79.39(±3.81)/74.61(±4.34) | 77.68(±0.82)/79.90(±0.39)/75.92(±0.76) | 80.64(±2.08)/82.61(±1.87)/77.41(±2.21) | 79.57(±1.08)/81.65(±0.96)/76.53(±0.87)
9 | 73.13(±1.98)/75.08(±1.67)/68.98(±1.73) | 72.87(±5.29)/75.61(±4.68)/70.21(±6.80) | 82.48(±0.80)/84.26(±0.60)/80.41(±0.60) | 80.88(±0.08)/82.82(±0.08)/77.62(±0.13) | 78.07(±4.03)/80.29(±3.61)/75.41(±4.28) | 77.71(±0.75)/79.92(±1.07)/75.94(±0.99) | 81.57(±0.44)/83.44(±0.39)/78.78(±0.85) | 80.40(±1.41)/82.39(±1.26)/77.50(±1.54)
10 | 70.62(±1.96)/74.95(±1.63)/68.96(±1.69) | 74.69(±1.42)/77.22(±1.30)/72.40(±1.68) | 81.91(±1.14)/83.20(±0.93)/79.39(±0.88) | 81.06(±0.18)/82.99(±0.16)/78.23(±0.24) | 78.54(±4.23)/80.72(±3.79)/75.92(±4.59) | 77.73(±0.73)/79.94(±0.37)/75.97(±0.33) | 81.42(±0.70)/83.31(±0.63)/78.98(±0.77) | 81.01(±0.61)/82.93(±0.54)/78.17(±0.95)
11 | 72.20(±1.95)/75.69(±1.93)/69.56(±2.10) | 75.17(±0.36)/77.63(±0.31)/73.11(±0.89) | 81.07(±1.12)/83.48(±0.90)/79.81(±0.81) | 81.39(±0.33)/83.28(±0.30)/78.76(±0.32) | 77.74(±4.02)/79.98(±3.61)/75.53(±4.20) | 77.58(±0.84)/79.82(±1.08)/75.74(±0.98) | 81.68(±0.67)/83.54(±0.60)/79.24(±0.64) | 81.25(±0.67)/83.14(±0.60)/78.47(±1.02)
12 | 74.33(±2.70)/75.76(±2.44)/70.17(±2.57) | 71.89(±7.07)/74.71(±6.36)/69.45(±7.28) | 81.83(±0.94)/83.71(±0.56)/80.38(±0.75) | 81.97(±0.51)/83.79(±0.46)/79.58(±0.70) | 79.93(±2.32)/81.96(±2.09)/77.77(±2.09) | 81.89(±0.78)/83.72(±0.20)/78.96(±0.03) | 82.16(±0.64)/83.97(±0.58)/79.79(±0.60) | 81.42(±0.29)/83.30(±0.26)/78.84(±0.70)
13 | 73.86(±3.31)/76.26(±2.59)/70.61(±2.73) | 67.88(±5.81)/71.09(±5.28)/65.62(±5.80) | 81.80(±0.96)/84.04(±0.41)/80.68(±0.37) | 81.89(±0.31)/83.72(±0.28)/79.65(±0.54) | 79.96(±2.27)/81.98(±2.04)/77.82(±2.06) | 81.89(±0.77)/83.72(±0.30)/78.97(±0.95) | 82.11(±0.46)/83.92(±0.42)/79.69(±0.59) | 81.90(±0.53)/83.73(±0.48)/79.82(±0.75)
14 | 73.16(±3.18)/76.68(±3.18)/71.15(±3.53) | 72.93(±4.43)/75.66(±3.97)/70.57(±4.36) | 82.13(±1.17)/84.20(±0.73)/80.91(±0.66) | 81.83(±0.43)/83.66(±0.38)/79.68(±0.57) | 80.93(±2.30)/82.85(±2.08)/78.95(±2.05) | 81.87(±0.80)/83.70(±0.80)/78.95(±0.49) | 82.76(±0.47)/84.51(±0.43)/80.64(±0.47) | 81.68(±0.28)/83.53(±0.25)/79.66(±0.60)
15 | 75.48(±2.49)/75.14(±2.67)/69.53(±2.98) | 72.53(±6.44)/75.29(±5.71)/70.60(±8.00) | 82.97(±1.08)/84.58(±0.92)/81.39(±0.79) | 81.78(±0.26)/83.62(±0.23)/79.75(±0.34) | 81.18(±2.26)/83.08(±2.04)/79.24(±2.03) | 81.91(±0.89)/83.74(±0.64)/79.02(±0.86) | 82.86(±0.54)/84.60(±0.48)/80.90(±0.73) | 81.86(±0.28)/83.69(±0.25)/80.18(±0.34)
16 | 75.26(±3.22)/78.48(±2.91)/73.11(±3.49) | 72.88(±2.17)/75.59(±1.91)/71.01(±2.68) | 83.35(±1.11)/85.04(±0.95)/81.83(±0.80) | 81.39(±0.15)/83.27(±0.14)/79.62(±0.24) | 81.19(±2.27)/83.09(±2.04)/79.26(±2.04) | 83.06(±0.73)/84.77(±0.29)/80.39(±0.16) | 82.49(±0.59)/84.26(±0.54)/80.34(±0.68) | 81.99(±0.40)/83.81(±0.36)/80.25(±0.45)
17 | 75.41(±2.81)/77.59(±3.17)/72.24(±3.57) | 71.23(±6.08)/74.08(±5.48)/69.66(±6.90) | 83.41(±1.02)/84.66(±1.21)/81.49(±1.21) | 81.75(±0.48)/83.59(±0.43)/80.02(±0.48) | 81.03(±2.17)/82.94(±1.96)/79.12(±1.92) | 83.03(±0.75)/84.75(±0.71)/80.37(±0.31) | 82.83(±0.74)/84.57(±0.67)/80.72(±0.77) | 82.32(±0.37)/84.11(±0.33)/80.52(±0.39)
18 | 76.88(±2.39)/79.39(±1.01)/74.39(±1.22) | 72.33(±4.53)/75.07(±4.07)/71.08(±4.87) | 82.96(±1.17)/85.07(±0.99)/81.84(±0.99) | 82.85(±0.43)/84.58(±0.38)/81.17(±0.80) | 80.84(±2.10)/82.78(±1.89)/78.99(±1.88) | 83.01(±0.75)/84.73(±0.41)/80.35(±0.21) | 83.10(±0.71)/84.82(±0.64)/80.85(±0.70) | 82.22(±0.54)/84.02(±0.48)/80.30(±0.68)
Mean | 71.72(±1.95)/74.44(±1.73)/67.62(±2.08) | 72.67(±3.76)/75.41(±3.43)/70.26(±4.34) | 80.36(±0.91)/82.37(±0.67)/78.29(±0.68) | 76.22(±1.44)/78.56(±1.32)/73.44(±1.51) | 76.31(±2.25)/78.71(±2.02)/73.40(±2.23) | 73.99(±0.80)/76.59(±0.58)/70.89(±0.56) | 79.01(±1.29)/81.13(±1.16)/76.11(±1.49) | 78.40(±1.03)/80.59(±0.93)/75.41(±1.28)