Article

Dimensionality Reduction of Hyperspectral Image Using Spatial-Spectral Regularized Sparse Hypergraph Embedding

Key Laboratory of Optoelectronic Technology and Systems of the Education Ministry of China, Chongqing University, Chongqing 400044, China
* Author to whom correspondence should be addressed.
Submission received: 28 March 2019 / Revised: 25 April 2019 / Accepted: 29 April 2019 / Published: 1 May 2019
(This article belongs to the Special Issue Advanced Machine Learning Approaches for Hyperspectral Data Analysis)

Abstract

Many graph embedding methods have been developed for dimensionality reduction (DR) of hyperspectral images (HSI), but they use only spectral features to reflect point-to-point intrinsic relations and ignore the complex spatial-spectral structure of HSI. A new DR method termed spatial-spectral regularized sparse hypergraph embedding (SSRHE) is proposed for HSI classification. SSRHE explores sparse coefficients to adaptively select neighbors for constructing the dual sparse hypergraph. Based on the spatial coherence property of HSI, a local spatial neighborhood scatter is computed to preserve the local structure, and a total scatter is computed to represent the global structure of HSI. Then, an optimal discriminant projection is obtained that possesses better intraclass compactness and interclass separability, which is beneficial for classification. Experiments on the Indian Pines and PaviaU hyperspectral datasets illustrate that SSRHE achieves better classification performance than traditional spectral-based DR algorithms.

Graphical Abstract

1. Introduction

With spectral sampling from visible to short-wave infrared region, hyperspectral image (HSI) can provide a spatial scene in hundreds of narrow contiguous spectral channels [1,2]. HSI data with high spectral resolution can provide fine spectral details for different ground objects, and they have been widely applied in many fields such as geological survey, environmental monitoring, precision agriculture, and mineral exploration [3,4]. Classification of each pixel in HSI plays a crucial role in these real applications. However, the high dimensional characteristic of HSI poses a huge challenge to the traditional classification methods, and the Hughes effect may occur if only limited training samples are available [5,6].
In general, dimensionality reduction (DR) is an effective way to reduce the volume of high-dimensional data with minimum loss of useful information by feature extraction or band selection, and it brings benefits for classification by achieving discriminating embedding features [7,8,9]. Many DR methods based on feature extraction have been proposed to reduce the dimension of high-dimensional data. Principal component analysis (PCA) [10] and linear discriminant analysis (LDA) [11] are the most popular subspace methods, but these two linear methods cannot discover the underlying manifold structure embedded in the original high-dimensional space. Many manifold learning-based DR methods have been introduced to reveal nonlinear structure in high-dimensional data, such as locally linear embedding (LLE) [12], Laplacian eigenmap (LE) [13], isometric mapping (ISOMAP) [14], neighborhood preserving embedding (NPE) [15], and locality preserving projection (LPP) [16]. The above methods can be unified under the graph embedding (GE) framework, and the difference between them lies in how the similarity matrix of an intrinsic graph and the constraint matrix of a penalty graph are defined [17,18,19,20]. On the basis of GE, some supervised DR methods are designed to exploit the prior knowledge of training samples for improving classification performance, such as marginal Fisher analysis (MFA) [21], local Fisher discriminant analysis (LFDA) [22], and regularized local discriminant embedding (RLDE) [23]. However, these direct graph-based DR methods only consider the pairwise relationship between data points, while HSI data usually possess complex relationships, such as one sample versus multiple samples (different classes) or one class versus multiple samples. Therefore, the pairwise relation cannot discover complex relations in HSI, which limits the discriminability of embedding features for classification [24,25].
To explore the multiple adjacency relationships in high-dimensional data, hypergraph learning has been introduced to discover the complex geometric structure between HSI pixels [26,27]. In [28], discriminant hyper-Laplacian projection (DHLP) was proposed using the hypergraph Laplacian to explore the high-order geometric relationships of samples. Semi-supervised hypergraph embedding (SHGE) learns the discriminant structure from both labeled and unlabeled data, and it reveals the complex relationships of HSI pixels by building a semi-supervised hypergraph [29]. For analyzing the intrinsic properties of HSI pixels, a hypergraph Laplacian sparse coding method was constructed to capture the similarity among data points within the same hyperedge [30]. In addition, heterogeneous networks have been explored to measure the relatedness of heterogeneous objects. Pio et al. [31] introduced heterogeneous networks with an arbitrary structure and evaluated their performance for both clustering and classification tasks. Serafino et al. [32] proposed an ensemble learning approach to classify objects of different classes, which is based on heterogeneous networks and extracts both the correlation and the autocorrelation that involve the observed objects.
The aforementioned methods are designed as spectral-based DR methods, in which the spatial relationship between a pixel and its spatial neighborhood is not taken into consideration for DR. Recent investigations show that incorporating spatial information into traditional spectral-based DR methods can further improve the performance of HSI classification [33,34,35,36,37]. Wu et al. presented a spatially adaptive model to extract the spectral and spatial-contextual information, which significantly enhances land cover classification performance in both accuracy and computational efficiency [38]. Local pixel NPE (LPNPE) [23] and spatial consistency LLE [39] were proposed to reveal the local manifold structure in HSI data by using the distance between different spatial blocks instead of the Euclidean distance between pixels. As an extension of LPNPE, spatial and spectral information-based RLDE (SSRLDE) tries to maximize the ratio between the local spatial-spectral data scatter and the global spatial-spectral data scatter to enhance the representation ability of embedding features [23]. The spatial-spectral coordination embedding (SSCE) method defines a spatial-spectral coordination distance for neighbor selection, and it can reduce the probability that heterogeneous objects are selected as nearest neighbors [40]. Discriminative spectral-spatial margin (DSSM) exploits spatial-spectral neighbors to obtain the low-dimensional embedding by preserving the local spatial-spectral relationship of HSI data [41]. These spatial-spectral DR methods have difficulty discovering the complex relationships in HSI data due to their pairwise nature.
Recently, the spatial information in HSI has been explored to construct spatial-spectral hypergraph models [42,43,44]. Sun et al. proposed an adaptive hyperedge weight estimation scheme to preserve the prominent hyperedges, which is better for improving the classification accuracy [45]. Yuan et al. introduced a hypergraph embedding model for feature extraction, which can represent higher-order relationships [46]. However, these spatial-spectral hypergraph methods are unsupervised and do not use the prior information in HSI data, which is not conducive to extracting discriminant features for enhancing the classification performance.
Motivated by the above limitations, a new hypergraph embedding method termed spatial-spectral joint regularized sparse hypergraph embedding (SSRHE) is proposed for DR of HSI data. SSRHE explores sparse coefficients and the label information of pixels to adaptively select neighbors for constructing a regularized sparse intraclass hypergraph and a regularized sparse interclass hypergraph, which can effectively represent the complex relationships in HSI data. Then, a local spatial neighborhood preserving scatter matrix and a total scatter matrix are computed to preserve the neighborhood structure in the spatial domain and the global structure in the spectral domain, respectively. Finally, an optimal objective function is designed to extract spatial-spectral discriminant features, which not only preserves the local spatial structure of HSI data, but also compacts the samples belonging to the same class and separates the samples from different classes simultaneously. Therefore, the embedding features achieve good discriminative power for HSI classification.
The rest of this paper is organized as follows. In Section 2, some related works are briefly introduced. Section 3 gives a detailed description of the proposed SSRHE method. In Section 4, experimental results on two real HSI datasets are reported to demonstrate the effectiveness of SSRHE. Finally, Section 5 summarizes this paper and provides some suggestions for future work.

2. Related Works

In this section, we provide a brief review of the GE framework and the hypergraph model. For convenience, suppose an HSI dataset $X = [x_1, x_2, \ldots, x_N] \in \mathbb{R}^{D \times N}$, where $D$ is the number of spectral bands and $N$ is the number of pixels. The label of $x_i$ is denoted $l_i \in \{1, 2, \ldots, c\}$, where $c$ is the number of land cover classes. The goal of dimensionality reduction is to map $x_i \in \mathbb{R}^D$ to $y_i \in \mathbb{R}^d$, where $d \ll D$. For linear DR methods, $Y$ can be obtained by $Y = P^T X$ with a projection matrix $P \in \mathbb{R}^{D \times d}$.
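As a minimal illustration of the notation, the linear embedding step is a single matrix product; the sizes below are arbitrary stand-ins, and the random $P$ is only a placeholder for a learned projection:

```python
import numpy as np

# Hypothetical sizes: D spectral bands, N pixels, d embedding dimensions.
D, N, d = 200, 50, 10
rng = np.random.default_rng(0)

X = rng.normal(size=(D, N))   # HSI data matrix, one pixel per column
P = rng.normal(size=(D, d))   # projection matrix learned by a linear DR method

Y = P.T @ X                   # low-dimensional embedding, shape (d, N)
print(Y.shape)                # (10, 50)
```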

2.1. Graph Embedding

The graph embedding (GE) framework offers a unified view for understanding and explaining many popular DR algorithms such as PCA, LDA, ISOMAP, LLE, LE, NPE, and LPP. In GE, an intrinsic graph $G^I(X, W^I)$ represents certain desired statistical or geometrical properties of the data, and a penalty graph $G^P(X, W^P)$ describes characteristics or relationships that should be avoided. $W^I$ and $W^P$ are the weight matrices of the undirected graphs $G^I$ and $G^P$, and $w_{ij}^I$ and $w_{ij}^P$ describe the similarity and dissimilarity between vertices $x_i$ and $x_j$ in $G^I$ and $G^P$, respectively.
The purpose of GE is to map each vertex of the graph into a low-dimensional space that preserves the similarities between vertex pairs. The low-dimensional embedding can be obtained by solving the following objective function:
$$\min_{Y^T H Y = C} \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\left\|y_i - y_j\right\|^2 w_{ij}^I = \min_{Y^T H Y = C} tr\left(Y^T(D^I - W^I)Y\right) = \min_{Y^T H Y = C} tr\left(Y^T L^I Y\right)$$
where $D^I$ is a diagonal matrix with $D_{ii}^I = \sum_j w_{ij}^I$, $L^I = D^I - W^I$ is the Laplacian matrix of $G^I$, and $C$ is a constant. $H$ is typically a diagonal matrix for scale normalization, and it may be the Laplacian matrix of a penalty graph $G^P$, that is, $H = L^P = D^P - W^P$, where $D_{ii}^P = \sum_j w_{ij}^P$.
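The equivalence between the pairwise form and the trace form of this objective can be checked numerically for any embedding $Y$ and symmetric nonnegative weight matrix; the sizes below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 8, 2

# Symmetric nonnegative weight matrix of an intrinsic graph
W = rng.random((n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)

D = np.diag(W.sum(axis=1))   # degree matrix
L = D - W                    # graph Laplacian

Y = rng.normal(size=(n, d))  # embedding, one point per row

pairwise = 0.5 * sum(W[i, j] * np.sum((Y[i] - Y[j]) ** 2)
                     for i in range(n) for j in range(n))
trace_form = np.trace(Y.T @ L @ Y)
print(np.isclose(pairwise, trace_form))  # True
```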

2.2. Hypergraph Model

A hypergraph can represent the complex relations between high-dimensional data, and every hyperedge connects multiple vertices. A hypergraph $G = (V_H, E_H, W_H)$ can be constructed to formulate the relationships among data samples, where $V_H$ is a set of vertices, $E_H$ is a set of hyperedges, and $W_H$ is a diagonal matrix in which each diagonal element denotes the weight of a hyperedge. Each hyperedge $e_i \in E_H$ is assigned a weight $w(e_i) \in W_H$.
To represent the relationships in $G$, an incidence matrix $H = [H_{mn} = h(e_m, v_n)] \in \mathbb{R}^{|E_H| \times |V_H|}$ is defined as follows:
$$H_{mn} = h(e_m, v_n) = \begin{cases} 1, & \text{if } v_n \in e_m \\ 0, & \text{if } v_n \notin e_m \end{cases}$$
Furthermore, the degree of hyper-edge $e_m$ and the degree of vertex $v_n$ can be defined as
$$d(e_m) = |e_m| = \sum_{v_n \in V_H} H_{mn}$$
$$d(v_n) = \sum_{e_m \in E_H} w(e_m) H_{mn}$$
Figure 1 gives an example to illustrate the hypergraph. A vertex is denoted by a circle (such as $v_1$, $v_2$, $v_5$). As shown in Figure 1a, a simple graph holds only two vertices per edge, which just describes single one-to-one relationships (such as between $v_1$ and $v_2$, and between $v_1$ and $v_3$). In Figure 1b, each hyper-edge is represented by a curve (such as hyper-edge $e_1$, which connects $v_1$, $v_2$, and $v_3$) and represents complex multiple relations among pixels. That is, a hypergraph can describe the local neighborhood structure well and preserve the complex relationships within the neighborhood. This hypergraph consists of six vertices and four hyper-edges, and the corresponding incidence matrix is shown in Figure 1c. The incidence matrix intuitively represents the affinity relationships between vertices and hyper-edges: a non-zero element in a row indicates that the hyper-edge is associated with the vertex; otherwise, the vertex and the hyper-edge are unrelated.
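The incidence matrix and the two degree definitions can be sketched on a toy hypergraph in the spirit of Figure 1; only $e_1 = \{v_1, v_2, v_3\}$ is stated in the text, so the remaining memberships and the weights below are made up for illustration:

```python
import numpy as np

# Hypothetical incidence matrix H (hyperedges x vertices): 4 hyperedges over
# 6 vertices; e1 = {v1, v2, v3} as in the text, the rest are illustrative.
H = np.array([
    [1, 1, 1, 0, 0, 0],   # e1
    [0, 0, 1, 1, 0, 0],   # e2
    [0, 0, 0, 1, 1, 0],   # e3
    [0, 0, 0, 0, 1, 1],   # e4
])
w = np.array([1.0, 0.5, 2.0, 1.5])      # hyperedge weights (diagonal of W_H)

edge_degree = H.sum(axis=1)             # d(e_m) = number of incident vertices
vertex_degree = w @ H                   # d(v_n) = sum of w(e) over hyperedges containing v_n
print(edge_degree)    # [3 2 2 2]
print(vertex_degree)  # [1.  1.  1.5 2.5 3.5 1.5]
```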
However, the hypergraph model only represents complex higher-order relations between pixels in the spectral domain, and it fails to consider spatial information in the hyperspectral image, which limits the discriminant ability of embedding features for land cover classification.

3. SSRHE

To reveal the complex structure in HSI, we propose a new hypergraph learning method called spatial-spectral joint regularized sparse hypergraph embedding (SSRHE) for dimensionality reduction of HSI data. At first, SSRHE constructs a regularized sparse intraclass hypergraph and a regularized sparse interclass hypergraph by exploring sparse representation (SR) and prior knowledge. After that, it exploits the spatial consistency and global structure of HSI by computing a local spatial neighborhood preserving scatter and a total scatter, which brings benefits for combining spatial structure and spectral information for DR. Finally, an optimal objective function is designed to learn a spatial-spectral discriminant projection by minimizing the regularized sparse intraclass hypergraph scatter and the local spatial neighborhood preserving scatter, while maximizing the regularized sparse interclass hypergraph scatter and the total scatter of samples simultaneously. The flowchart of the proposed SSRHE method is shown in Figure 2.

3.1. The Regularized Sparse Hypergraph Model

To discover the complex structure of HSI data, a hypergraph model is exploited to reveal the intrinsic relations between pixels. However, it remains difficult to choose a proper neighborhood size for constructing hypergraphs. Since sparse representation has natural discriminating power to adaptively reveal the inherent relationship of data [47,48,49], a sparse hypergraph model is designed based on SR theory.
Inspired by the observation that the most compact expression of a certain sample is generally given by similar samples, sparse coefficients are explored to find neighbors adaptively. Suppose $S \in \mathbb{R}^{N \times N}$ is the sparse coefficient matrix of the data samples. The sparse coefficients can be calculated as follows:
$$\min_S \left\|s_i\right\|_1 \quad s.t. \quad \left\|x_i - X s_i\right\| \le \varepsilon,\ s_i \ge 0$$
where $\varepsilon$ denotes the sparse error tolerance and $s_i = [s_{i,1}, s_{i,2}, \ldots, s_{i,i-1}, 0, s_{i,i+1}, \ldots, s_{i,N}]^T$ represents the sparse coefficients of pixel $x_i$, which can be optimized using the Alternating Direction Method of Multipliers (ADMM) framework [50,51]. With the sparse coefficient matrix $S$, a hypergraph can be constructed with the criterion that nodes $x_i$ and $x_j$ are connected in a hyper-edge if the sparse coefficient $s_{ij}$ is not equal to 0. Since sparse coefficients reflect the similarity between data, a non-zero coefficient indicates correlation between pixels, and a larger value indicates a higher similarity. Compared with the Euclidean metric, sparse coefficients can more effectively select neighbors of HSI data.
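The neighbor-selection idea can be sketched as follows. The paper solves the $\ell_1$ problem with ADMM; as a stand-in, this sketch uses a simple nonnegative ISTA (proximal gradient) iteration, which is an assumption rather than the authors' solver:

```python
import numpy as np

def sparse_neighbors(X, i, lam=0.01, iters=500):
    """Select neighbors of pixel x_i from its sparse coefficients.

    Nonnegative ISTA stand-in for the paper's ADMM solver (an assumption).
    Returns the original column indices j with s_ij != 0, plus s itself.
    """
    D, N = X.shape
    A = np.delete(X, i, axis=1)                  # dictionary: all pixels but x_i
    x = X[:, i]
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1/L, L = Lipschitz constant
    s = np.zeros(N - 1)
    for _ in range(iters):
        grad = A.T @ (A @ s - x)
        s = np.maximum(s - step * (grad + lam), 0.0)  # nonneg soft-threshold
    idx = np.flatnonzero(s > 1e-8)
    idx = np.where(idx >= i, idx + 1, idx)       # map back to original columns
    return idx, s

rng = np.random.default_rng(2)
X = rng.random((20, 12))
X[:, 0] = 0.6 * X[:, 3] + 0.4 * X[:, 7]          # pixel 0 mixes pixels 3 and 7
nbrs, s = sparse_neighbors(X, 0)
print(nbrs)                                      # indices j with s_0j != 0
```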
According to the sparse coefficients and the label information of samples, we construct a within-class sparse hypergraph $G^w(X, E_H^w, W_H^w)$ and a between-class sparse hypergraph $G^b(X, E_H^b, W_H^b)$ to characterize the intrinsic structure of HSI data. $X$ is denoted as the vertex set, and $E_H^w$ and $E_H^b$ are the sets of intraclass and interclass hyper-edges. $W_H^w$ and $W_H^b$ are the weight matrices of the hyper-edges of $G^w$ and $G^b$, respectively.
In $G^w$, the intraclass hyper-edge $e^w$ is formed by connecting pixel $x_i$ with its corresponding neighbors whose sparse coefficients are non-zero. The subedge weights of $e^w$ and $e^b$ can be defined as
$$w_{ij}^w = \begin{cases} \varphi\, s_{ij}, & \text{if } s_{ij} \ne 0 \text{ and } l_i = l_j \\ 0, & \text{otherwise} \end{cases}$$
$$w_{ij}^b = \begin{cases} s_{ij}, & \text{if } s_{ij} \ne 0 \text{ and } l_i \ne l_j \\ 0, & \text{otherwise} \end{cases}$$
in which $s_{ij}$ is the sparse coefficient between pixels $x_i$ and $x_j$, and the parameter $\varphi$ ($\varphi > 1$) is used to enhance the contribution of samples from the same class for improving the discriminant power. Figure 3 illustrates the construction of a sparse hypergraph on a simple classification model. As shown in Figure 3, a simple graph considers only the pairwise relation between two observed samples, while the sparse hypergraph, through sparse representation, can select the neighbors of samples adaptively and represent complex multiple relations among HSI pixels.
The weight matrices corresponding to hyper-edge regions are defined as
$$\omega_i^w = w(e_i^w) = \sum_{x_j \in e_i^w} w_{ij}^w$$
$$\omega_i^b = w(e_i^b) = \sum_{x_j \in e_i^b} w_{ij}^b$$
The incidence matrix $H^w = [H_{ij}^w = h(e_j^w, x_i)] \in \mathbb{R}^{|E^w| \times |X|}$ of the intraclass hypergraph is calculated by
$$H_{ij}^w = \begin{cases} \exp\left(-\dfrac{\left\|x_i - x_j\right\|^2}{2 t^2}\right), & \text{if } x_i, x_j \in e_j^w \\ 0, & \text{otherwise} \end{cases}$$
where $t = \frac{1}{N(e_i^w)^2}\sum_j \left\|x_i - x_j\right\|^2$ is the heat-kernel parameter, that is, the mean pairwise distance between pixels in one hyper-edge, and $N(e_i)$ denotes the number of vertices in each hyper-edge region.
According to $H^w$ and $w(e^w)$, the degree of vertex $x_i \in X$ and the degree of intraclass hyper-edge $e_i^w \in E^w$ are computed by
$$\theta_i^w = d(x_i, e_j^w) = \sum_j w(e_j^w)\, h(e_j^w, x_i) = \sum_j w(e_j^w)\, H_{ij}^w$$
$$\vartheta_i^w = d(e_i^w) = \sum_j h(e_i^w, x_j) = \sum_j H_{ij}^w$$
For the between-class hypergraph $G^b$, the between-class incidence matrix $H^b = [H_{ij}^b = h(e_j^b, x_i)] \in \mathbb{R}^{|E^b| \times |X|}$ is defined as
$$H_{ij}^b = \begin{cases} \exp\left(-\dfrac{\left\|x_i - x_j\right\|^2}{2 t^2}\right), & \text{if } x_i, x_j \in e_j^b \\ 0, & \text{otherwise} \end{cases}$$
Based on $H^b$ and $w(e^b)$, the degree of vertex $x_i$ and the degree of interclass hyper-edge $e_i^b$ can be defined as
$$\theta_i^b = d(x_i, e_j^b) = \sum_j w(e_j^b)\, h(e_j^b, x_i) = \sum_j w(e_j^b)\, H_{ij}^b$$
$$\vartheta_i^b = d(e_i^b) = \sum_j h(e_i^b, x_j) = \sum_j H_{ij}^b$$
In low-dimensional embedding space, the pixels from the same class should be as compact as possible for the intraclass sparse hypergraph, whereas pixels from different classes should be as separated as possible with the interclass sparse hypergraph. Therefore, the objective functions with intraclass and interclass sparse hypergraph constraint are constructed as
$$\begin{aligned}
\arg\min_P\ & \frac{1}{2}\sum_{e_k^w\in E^w}\sum_{x_i,x_j\in e_k^w}\frac{w(e_k^w)\,h(e_k^w,x_i)\,h(e_k^w,x_j)}{d(e_k^w)}\left\|P^Tx_i-P^Tx_j\right\|^2\\
&=\sum_{k=1}^{n}\sum_{i,j=1}^{n}\frac{w(e_k^w)\,H_{ik}^w\,H_{jk}^w}{\vartheta_k^w}\,tr\left(P^Tx_ix_i^TP-P^Tx_jx_i^TP\right)\\
&=tr\left(P^TXD_v^wX^TP\right)-tr\left(P^TXH^wW^w(D_e^w)^{-1}(H^w)^TX^TP\right)\\
&=tr\left(P^TX\left[D_v^w-H^wW^w(D_e^w)^{-1}(H^w)^T\right]X^TP\right)\\
&=tr\left(P^TXL^wX^TP\right)
\end{aligned}$$
$$\begin{aligned}
\arg\max_P\ & \frac{1}{2}\sum_{e_k^b\in E^b}\sum_{x_i,x_j\in e_k^b}\frac{w(e_k^b)\,h(e_k^b,x_i)\,h(e_k^b,x_j)}{d(e_k^b)}\left\|P^Tx_i-P^Tx_j\right\|^2\\
&=\sum_{k=1}^{n}\sum_{i,j=1}^{n}\frac{w(e_k^b)\,H_{ik}^b\,H_{jk}^b}{\vartheta_k^b}\,tr\left(P^Tx_ix_i^TP-P^Tx_jx_i^TP\right)\\
&=tr\left(P^TXD_v^bX^TP\right)-tr\left(P^TXH^bW^b(D_e^b)^{-1}(H^b)^TX^TP\right)\\
&=tr\left(P^TX\left[D_v^b-H^bW^b(D_e^b)^{-1}(H^b)^T\right]X^TP\right)\\
&=tr\left(P^TXL^bX^TP\right)
\end{aligned}$$
where $L^w = D_v^w - H^w W^w (D_e^w)^{-1} (H^w)^T$ is the intraclass sparse hypergraph Laplacian matrix and $L^b = D_v^b - H^b W^b (D_e^b)^{-1} (H^b)^T$ denotes the interclass sparse hypergraph Laplacian matrix. The diagonal matrices $D_v^w$, $D_v^b$, $D_e^w$, $D_e^b$, $W^w$, $W^b$, built from $\theta^w$, $\theta^b$, $\vartheta^w$, $\vartheta^b$, $\omega^w$, $\omega^b$, are given by
$$D_v^w = diag([\theta_1^w, \theta_2^w, \ldots, \theta_{N-1}^w, \theta_N^w]), \quad D_v^b = diag([\theta_1^b, \theta_2^b, \ldots, \theta_{N-1}^b, \theta_N^b])$$
$$D_e^w = diag([\vartheta_1^w, \vartheta_2^w, \ldots, \vartheta_{N-1}^w, \vartheta_N^w]), \quad D_e^b = diag([\vartheta_1^b, \vartheta_2^b, \ldots, \vartheta_{N-1}^b, \vartheta_N^b])$$
$$W^w = diag([\omega_1^w, \omega_2^w, \ldots, \omega_{N-1}^w, \omega_N^w]), \quad W^b = diag([\omega_1^b, \omega_2^b, \ldots, \omega_{N-1}^b, \omega_N^b])$$
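As a sanity check on these definitions, a Laplacian of the form $L = D_v - H W D_e^{-1} H^T$ can be assembled for a toy hypergraph. Note that the incidence matrix is taken here as vertices × hyperedges so the matrix products conform, and the memberships and weights are illustrative:

```python
import numpy as np

# Sketch of a hypergraph Laplacian L = D_v - H W D_e^{-1} H^T.
H = np.array([
    [1, 0, 0, 0],
    [1, 0, 0, 0],
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 1],
    [0, 0, 0, 1],
], dtype=float)                       # 6 vertices, 4 hyperedges
w = np.array([1.0, 0.5, 2.0, 1.5])    # hyperedge weights (illustrative)

W = np.diag(w)
D_e = np.diag(H.sum(axis=0))          # hyperedge degrees d(e)
D_v = np.diag(H @ w)                  # vertex degrees d(v) = sum_e w(e) H(v,e)

L = D_v - H @ W @ np.linalg.inv(D_e) @ H.T

# The Laplacian is symmetric positive semidefinite, and constant vectors
# lie in its null space, as expected of a graph-style Laplacian.
print(np.allclose(L, L.T))                        # True
print(np.linalg.eigvalsh(L).min() >= -1e-10)      # True
print(np.allclose(L @ np.ones(6), 0))             # True
```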
Let $M_b = X L^b X^T$ represent the between-class scatter of $G^b$ and $M_w = X L^w X^T$ the within-class scatter of $G^w$. Then, the mapping matrix $P$ can be obtained by solving the following optimization problem:
$$\arg\max_P \frac{tr(P^T M_b P)}{tr(P^T M_w P)}$$
In real applications, to avoid singularity in the case of small sample sizes, the above objective can be further extended as
$$\arg\max_P \frac{tr\left\{P^T\left[(1-\beta) M_b + \beta X X^T\right] P\right\}}{tr\left\{P^T\left[(1-\beta) M_w + \beta\, diag(diag(M_w))\right] P\right\}}$$
where $\beta$ is a tradeoff parameter. The regularization term $X X^T$ is the maximal data variance, which is used to preserve the diversity of HSI pixels. The diagonal regularization of $M_w$ is introduced to overcome the singularity problem when the number of training samples is small. With regularization, Equation (25) becomes more stable and effectively preserves the useful discriminant information.

3.2. Spatial-Spectral Hypergraph Embedding

Due to the spatial consistency of HSI, the pixels in HSI are usually spatially related, which means the pixels within a small neighborhood usually possess the spatial distribution consistency of ground objects. Therefore, neighborhood pixels can be utilized to learn spatial-spectral combined features.
Suppose that pixel $x_i$ is the center pixel, $T$ (a positive odd number) is the spatial neighborhood window size, and $(u_i, v_i)$ is the spatial coordinate of pixel $x_i$ in the image. The spatial neighborhood set $\Omega(x_i)$ with window size $T$ can be written as
$$\Omega(x_i) = \left\{ x_{i_m} : u_i - z \le u_m \le u_i + z,\ v_i - z \le v_m \le v_i + z \right\}$$
where $z = (T-1)/2$ and $x_{i_m}$, with spatial coordinate $(u_m, v_m)$, corresponds to the $m$-th pixel in the spatial neighborhood. $\Omega(x_i)$ contains a total of $T \times T$ pixels. Thus, the distance measure in the spatial neighborhood block can be defined as
$$a_i = \sum_{m=1}^{T^2} \left\|x_i - x_{i_m}\right\|^2 v_m$$
where $v_m = \exp\left(-\left\|x_i - x_{i_m}\right\|^2 \Big/ 2\left(\frac{1}{T \times T}\sum_{m=1}^{T^2} x_{i_m}\right)^2\right)$ measures the similarity between the central pixel $x_i$ and its spectral-spatial neighbors. For all training samples in the HSI data, the local spatial neighborhood preserving scatter matrix is calculated as follows:
$$S_w = \sum_i a_i = \sum_{i=1}^{N} \sum_{m=1}^{T^2} (x_i - x_{i_m})(x_i - x_{i_m})^T v_m$$
To preserve the global structure of samples, a total scatter matrix is defined as
$$S_b = \sum_{i=1}^{N} (x_i - \bar{x})(x_i - \bar{x})^T$$
where $\bar{x}$ is the mean of all samples.
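Both scatter matrices can be sketched on a tiny synthetic cube. The sizes are illustrative, and the neighbor weights are fixed to $v_m = 1$ for simplicity, whereas the paper weights each neighbor by a heat-kernel similarity:

```python
import numpy as np

rng = np.random.default_rng(3)
rows, cols, D = 6, 6, 4
cube = rng.random((rows, cols, D))       # toy HSI cube, D bands per pixel
T = 3                                    # spatial window size
z = (T - 1) // 2

# Local spatial neighborhood preserving scatter (interior pixels only)
S_w = np.zeros((D, D))
for u in range(z, rows - z):
    for v in range(z, cols - z):
        x = cube[u, v]
        for du in range(-z, z + 1):
            for dv in range(-z, z + 1):
                diff = x - cube[u + du, v + dv]
                S_w += np.outer(diff, diff)   # v_m = 1 here (simplification)

# Total scatter over all pixels
X = cube.reshape(-1, D)
xbar = X.mean(axis=0)
S_b = (X - xbar).T @ (X - xbar)

# Both scatters are symmetric positive semidefinite D x D matrices.
print(np.allclose(S_w, S_w.T), np.allclose(S_b, S_b.T))
print(np.linalg.eigvalsh(S_w).min() >= -1e-10)
```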
To extract low-dimensional spatial-spectral joint features, an objective function should be designed that preserves the local neighborhood while simultaneously compacting the samples with the intraclass hypergraph and separating the samples with the interclass hypergraph. Therefore, Equations (25), (28), and (29) are transformed into the following optimization functions:
$$\arg\max_P tr\left\{\alpha P^T\left[(1-\beta) M_b + \beta X X^T\right] P + (1-\alpha) P^T S_b P\right\}$$
$$\arg\min_P tr\left\{\alpha P^T\left[(1-\beta) M_w + \beta\, diag(diag(M_w))\right] P + (1-\alpha) P^T S_w P\right\}$$
The above optimization can be further simplified as
$$\arg\max_P \frac{tr\left\{\alpha P^T\left[(1-\beta) M_b + \beta X X^T\right] P + (1-\alpha) P^T S_b P\right\}}{tr\left\{\alpha P^T\left[(1-\beta) M_w + \beta\, diag(diag(M_w))\right] P + (1-\alpha) P^T S_w P\right\}}$$
According to the Lagrange multiplier method, Equation (31) can be solved via
$$\left[\alpha\left((1-\beta) M_b + \beta X X^T\right) + (1-\alpha) S_b\right] P = \lambda \left[\alpha\left((1-\beta) M_w + \beta\, diag(diag(M_w))\right) + (1-\alpha) S_w\right] P$$
where $\lambda$ is a generalized eigenvalue. With the eigenvectors corresponding to the $d$ largest eigenvalues, the optimal projection matrix is denoted as $P = [p_1, p_2, \ldots, p_d]$. In the low-dimensional embedded space, the spatial-spectral embedding of a test sample $x_t$ is given as follows:
$$y_t = P^T x_t$$
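The solution step can be sketched with a generalized symmetric eigensolver; $A$ and $B$ below are random SPD stand-ins for the two bracketed matrices in Equation (32), not the actual SSRHE quantities:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(4)
D, d = 10, 3
M = rng.normal(size=(D, D))
A = M @ M.T + np.eye(D)                # "between" side, SPD stand-in
M = rng.normal(size=(D, D))
B = M @ M.T + np.eye(D)                # "within" side, SPD stand-in

vals, vecs = eigh(A, B)                # generalized problem A p = lambda B p
P = vecs[:, ::-1][:, :d]               # eigenvectors of the d largest eigenvalues

x_test = rng.normal(size=D)
y_test = P.T @ x_test                  # low-dimensional embedding, Eq. (33)
print(P.shape, y_test.shape)           # (10, 3) (3,)
```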
In summary, SSRHE compacts the samples from the same class and spatial neighborhood while separating interclass samples, and the embedding features possess stronger discriminative power, which ensures good classification performance. The steps of the proposed SSRHE method are shown in Algorithm 1.
Algorithm 1 SSRHE.
Input: HSI dataset $X = [x_1, x_2, \ldots, x_N] \in \mathbb{R}^{D \times N}$, corresponding class label set $L$, tradeoff parameters $\alpha$, $\beta$, weighted coefficient $\varphi$, spatial neighborhood size $T$, reduced dimensionality $d$.
1: Compute the sparse coefficient matrix $S$ via ADMM;
2: Compute the weights of the intraclass and interclass hyperedges by Equations (8) and (9);
3: Compute the intraclass hyper-Laplacian matrix $L^w = D_v^w - H^w W^w (D_e^w)^{-1} (H^w)^T$;
4: Compute the interclass hyper-Laplacian matrix $L^b = D_v^b - H^b W^b (D_e^b)^{-1} (H^b)^T$;
5: Compute the local spatial neighborhood preserving scatter matrix $S_w = \sum_{i=1}^{N} \sum_{m=1}^{T^2} (x_i - x_{i_m})(x_i - x_{i_m})^T v_m$;
6: Compute the extended total scatter matrix $S_b = \sum_{i=1}^{N} (x_i - \bar{x})(x_i - \bar{x})^T$;
7: Solve the generalized eigenvalue problem $\left[\alpha\left((1-\beta) M_b + \beta X X^T\right) + (1-\alpha) S_b\right] P = \lambda \left[\alpha\left((1-\beta) M_w + \beta\, diag(diag(M_w))\right) + (1-\alpha) S_w\right] P$;
8: Obtain the optimal projection matrix $P = [p_1, p_2, \ldots, p_d]$ from the eigenvectors corresponding to the largest eigenvalues.
Output: Low-dimensional spatial-spectral embedding features of test samples: $Y = P^T X$.
In this paper, we adopt big $O$ notation to analyze the computational complexity of SSRHE. The spatial neighborhood size and the number of sparse iterations are denoted as $T$ and $t$, respectively. The sparse coefficient matrix $S$ is computed at a cost of $O(Nt)$. The weights of the intraclass and interclass hyperedges each take $O(N^2)$. The intraclass incidence matrix $H^w$ and the interclass incidence matrix $H^b$ are each calculated in $O(N^2)$. The cost of the diagonal matrices $W^w$, $W^b$, $D_v^w$, $D_v^b$, $D_e^w$, and $D_e^b$ is $O(8N)$. The intraclass hyper-Laplacian matrix $L^w$ and the interclass hyper-Laplacian matrix $L^b$ each cost $O(N^3)$. The local spatial neighborhood preserving scatter matrix $S_w$ takes $O(DNT^2)$, and the extended total scatter matrix $S_b$ costs $O(DN)$. The costs of matrices $M_w$ and $M_b$ are both $O(DN^2)$, and $X X^T$ costs $O(DN)$. It takes $O(D^3)$ to solve the generalized eigenvalue problem of Equation (32). The total time complexity of SSRHE is $O(D^3 + DN^2 + N^3)$.

4. Experimental Results and Discussion

Experiments were performed on two real HSI datasets to verify the effectiveness of the proposed method, and several state-of-the-art DR methods were compared with SSRHE.

4.1. HSI Datasets

Indian Pines Dataset: This hyperspectral image is a scene of northwest Indiana collected by an airborne imaging spectrometer sensor in 1992. It consists of 145 × 145 pixels and 220 spectral bands covering wavelengths from 400 to 2450 nm. After removing the noise and water absorption bands, 200 spectral bands remained for use in the experiments. This image contains a total of 16 land cover types. The dataset in false color and its corresponding ground truth map are shown in Figure 4.
University of Pavia (PaviaU) Dataset: This hyperspectral image covering the area of Pavia University in northern Italy was collected by the ROSIS sensor in 2002. The spatial size of this image is 640 × 340 pixels, and the number of spectral bands is 115. Due to atmospheric effects, 12 bands were discarded and the remaining 103 spectral bands were used in the experiments. The false color image and ground truth map for this scene are shown in Figure 5.

4.2. Experimental Setup

In the experiments, each HSI dataset was randomly divided into training and test sets. The training samples were utilized to learn a feature extraction model and obtain low-dimensional embedding features. After that, a classifier was applied to obtain the class labels of the test samples. The overall classification accuracy (OA), average classification accuracy of each class (AA), and kappa coefficient (KC) were employed to evaluate the classification results.
To demonstrate the effectiveness of the proposed method, we compared SSRHE with four spectral-based approaches, LPP, LDA, MFA, and RLDE; two spatial-spectral combined approaches, LPNPE and SSCE; and a hypergraph method, DHLP. For all approaches, we tuned the parameters by cross-validation to achieve good results. In LPNPE, the size of the spatial-spectral window was set to 15 for the PaviaU data and 11 for the Indian Pines data. In SSCE, the window scale and the local neighbor size were both set to 5 on both datasets. For RLDE and MFA, the intraclass neighbor size was set to 3 and 8, while the interclass neighbor size was set to 5 and 60, respectively. The number of nearest neighbors in DHLP was selected as 9.
The nearest neighbor (1-NN) classifier was applied for classification. Each experiment was randomly repeated ten times. All experiments were performed on a personal computer with an i5-4590 CPU, 8 GB of memory, and 64-bit Windows 7, using MATLAB 2014a.
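The three scores (OA, AA, KC) can all be computed from the confusion matrix; a minimal sketch, with made-up labels for illustration:

```python
import numpy as np

def oa_aa_kappa(y_true, y_pred, n_classes):
    """Overall accuracy, average per-class accuracy, and kappa coefficient
    derived from the confusion matrix."""
    C = np.zeros((n_classes, n_classes), dtype=float)
    for t, p in zip(y_true, y_pred):
        C[t, p] += 1
    n = C.sum()
    oa = np.trace(C) / n                          # overall accuracy
    aa = np.mean(np.diag(C) / C.sum(axis=1))      # mean per-class accuracy
    pe = (C.sum(axis=0) @ C.sum(axis=1)) / n**2   # chance agreement
    kc = (oa - pe) / (1 - pe)                     # kappa coefficient
    return oa, aa, kc

y_true = np.array([0, 0, 0, 0, 1, 1, 1, 2, 2, 2])
y_pred = np.array([0, 0, 0, 1, 1, 1, 0, 2, 2, 1])
print(oa_aa_kappa(y_true, y_pred, 3))
```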

4.3. Parameters Selection

For the proposed SSRHE algorithm, there are four important parameters: the spatial neighborhood size $T$, the weighted coefficient $\varphi$, and the tradeoff parameters $\alpha$ and $\beta$. To select the optimal parameters, we performed parameter selection experiments on the Indian Pines and PaviaU datasets. In each experiment, we randomly selected five samples from each class as the training set, and the remaining samples were used for testing.
To explore the effect of the tradeoff parameters $\alpha$ and $\beta$, we tuned them over the set $\{0, 0.01, 0.05, 0.1, 0.2, \ldots, 0.9, 1\}$. Figure 6 and Figure 7 show the classification accuracies with respect to $\alpha$ and $\beta$ on the two datasets. As shown in Figure 6, increasing $\alpha$ produced subtle changes in OA under a fixed $\beta$. On the Indian Pines dataset, the OAs first varied slightly and then declined significantly with increasing $\alpha$, since an overly large $\alpha$ causes the algorithm to lose the spatial-domain properties of HSI, and the obtained features may then fail to represent the intrinsic structure of the hyperspectral image. For the PaviaU dataset, the OAs first increased with increasing $\alpha$ and $\beta$ and then tended to decline. To balance the influence of spectral and spatial information on classification, we set $\alpha = 0.3$ and $\beta = 0.7$ for the Indian Pines dataset, and $\alpha = 0.2$ and $\beta = 0.5$ for the PaviaU dataset, based on the results presented in Figure 6 and Figure 7.
The parameters $T$ and $\varphi$ have a significant influence on the discrimination of SSRHE. The former determines the size of the spatial neighborhood, and the latter adjusts the compactness of the intraclass neighbors. To analyze the influence of these two parameters on SSRHE, we performed two experiments with respect to $T$ and $\varphi$; the corresponding experimental results on the two HSI datasets are shown in Figure 8. In Figure 8a, it is clear that the OAs first quickly increased with the growth of $T$ on both datasets, because a larger spatial neighborhood is beneficial for preserving more useful spatial-domain information in HSI. However, if the value of $T$ is too large, it may bring pixels from other classes into the training model and greatly increase the computational complexity. According to the results in Figure 8a, the spatial neighborhood size $T$ was set to 7 for the Indian Pines dataset and 15 for the PaviaU dataset. As shown in Figure 8b, with increasing $\varphi$, the OAs improved until reaching a peak value, owing to the enhanced compactness of intraclass samples in the embedding space. As $\varphi$ kept increasing, the OAs declined, since a too-large $\varphi$ weakens the ability to extract between-class features. Therefore, we set $\varphi = 50$ for the Indian Pines dataset and $\varphi = 60$ for the PaviaU dataset.

4.4. Investigation of Embedding Dimension

To explore how the embedding dimension d affects the performance of the proposed SSRHE method, thirty samples were randomly selected from each class for training, and d was tuned from 5 to 50 with an interval of 5. Figure 9 shows the classification accuracy of SSRHE and the other DR methods with different embedding dimensions on the Indian Pines and PaviaU datasets. According to Figure 9, the OAs first rose rapidly and then tended to stabilize as d increased, because higher-dimensional embedding features contain more useful information for training, while too large a dimension leads to information saturation. Most DR methods achieved better classification accuracies than the RAW method, since they remove redundant information from the HSI. The proposed SSRHE achieved the best classification performance of all compared methods on both HSI datasets, because it discovers the complex relationships between samples via the hypergraph model and fuses useful spatial information to extract discriminant features for classification.

4.5. Investigation of Classification Performance

To evaluate the performance of the proposed method under different training conditions, we selected n_i (n_i = 5, 20, 50, 100, 200) samples per class for training, and the 1-NN classifier was used to classify the remaining samples. Each experiment was repeated ten times, and the averaged OA with standard deviation (STD) and the kappa coefficient (KC) of all DR methods on the Indian Pines and PaviaU datasets are shown in Table 1 and Table 2.
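The evaluation protocol above (1-NN classification, then OA and Cohen's kappa on the held-out samples) can be sketched as follows; the 2-D features here are made up purely for illustration.

```python
import numpy as np

def one_nn_predict(X_train, y_train, X_test):
    # 1-NN: assign each test sample the label of its nearest training sample
    # (Euclidean distance in the embedded feature space).
    d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    return y_train[d2.argmin(axis=1)]

def kappa(y_true, y_pred):
    # Cohen's kappa from the confusion matrix: (po - pe) / (1 - pe),
    # where po is observed agreement and pe is chance agreement.
    classes = np.unique(np.concatenate([y_true, y_pred]))
    idx = {c: i for i, c in enumerate(classes)}
    cm = np.zeros((len(classes), len(classes)))
    for t, p in zip(y_true, y_pred):
        cm[idx[t], idx[p]] += 1
    n = cm.sum()
    po = np.trace(cm) / n
    pe = (cm.sum(0) * cm.sum(1)).sum() / n ** 2
    return (po - pe) / (1 - pe)

# Tiny illustration with made-up 2-D embedded features.
X_tr = np.array([[0., 0.], [1., 1.]]); y_tr = np.array([0, 1])
X_te = np.array([[0.1, 0.], [0.9, 1.]]); y_te = np.array([0, 1])
y_hat = one_nn_predict(X_tr, y_tr, X_te)
print((y_hat == y_te).mean(), kappa(y_te, y_hat))  # 1.0 1.0
```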
As shown in Table 1 and Table 2, for all methods the OA and KC improved greatly and the STDs decreased as the number of training samples increased, because a larger training set usually provides more useful information. The spatial-spectral combined methods (SSCE, LPNPE and SSRHE) produced better classification results than the spectral-based methods, because they exploit both the spatial and the spectral information in HSI to improve the representation ability of the extracted features. Among the spectral-based methods, DHLP presented better classification performance under most training conditions, since it applies the hypergraph learning model to discover the intrinsic complex relationships between samples. Compared with all the other methods, SSRHE achieved better discriminant performance on both HSI datasets for different numbers of training samples, especially with small training sets. SSRHE utilizes the hypergraph framework to discover the complex multivariate relationships between interclass and intraclass samples, and computes two spatial neighborhood scatters to reveal the spatial correlation between pixels in HSI, which further enhances the discriminating power of the low-dimensional features.
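For reference, the hypergraph machinery underlying SSRHE and DHLP rests on a Laplacian built from a vertex-hyperedge incidence matrix. A common (Zhou-style) unnormalized form, L = Dv − H W De⁻¹ Hᵀ, is sketched below; the sparse hypergraph in this paper derives its hyperedges and weights from sparse-representation coefficients, so its exact weighting may differ.

```python
import numpy as np

def hypergraph_laplacian(H, w=None):
    # H: n x m incidence matrix (H[i, e] = 1 if vertex i lies on hyperedge e),
    # w: optional hyperedge weights. Returns L = Dv - H W De^{-1} H^T.
    n, m = H.shape
    w = np.ones(m) if w is None else np.asarray(w, float)
    de = H.sum(axis=0)          # hyperedge degrees
    dv = H @ w                  # weighted vertex degrees
    return np.diag(dv) - H @ np.diag(w / de) @ H.T

# Three vertices joined by a single hyperedge.
H = np.array([[1.], [1.], [1.]])
L = hypergraph_laplacian(H)
print(abs(L.sum()) < 1e-9)  # True: each row of a Laplacian sums to zero
```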
To explore the classification accuracy of the proposed method for each class, we randomly selected a percentage of samples from each class as the training set, and the remaining samples were used for testing. In the experiments, the training percentage was set to 3% for the Indian Pines dataset and 5% for the PaviaU dataset. Table 3 and Table 4 show the classification results of the different DR methods on the Indian Pines and PaviaU datasets, and the corresponding classification maps are displayed in Figure 10 and Figure 11.
According to Table 3 and Table 4, SSRHE obtained better classification results than the other methods in most classes, and it achieved the best OA, AA and KC on both HSI datasets. This indicates that SSRHE possesses a stronger ability to reveal the intrinsic complex geometric relations between samples and to extract more useful spatial-spectral combined features for classification. As shown in Figure 10 and Figure 11, the spatial-spectral methods generally produced smoother classification maps than the traditional spectral-based methods, which demonstrates that the spatial information in HSI is effective for improving classification performance. Moreover, the classification maps of SSRHE contain fewer misclassified pixels, especially in the Grass/Pasture, Wheat, and Stone-steel towers areas of the Indian Pines dataset and the Asphalt, Meadows, and Shadows areas of the PaviaU dataset. In terms of running time, SSRHE costs more time than the traditional spectral-based methods because it builds models in both the spatial and the spectral domain of the HSI data. Compared with the other spatial-spectral algorithms, the running time of SSRHE did not increase significantly and was far less than that of SSCE. Thus, SSRHE is more effective than the other DR methods at weakening the influence of noisy points and improving the discrimination power of the embedding features.

5. Conclusions

In this paper, a new dimensionality reduction method, SSRHE, is proposed based on sparse hypergraph learning and spatial-spectral information for HSI. SSRHE computes two spatial neighborhood scatters to reveal the spatial correlation between pixels in HSI, and it constructs two discriminant hypergraphs by sparse representation to discover the complex multivariate relationships between interclass samples and intraclass samples, respectively. Based on the spatial neighborhood scatters and the Laplacian scatters of the hypergraphs, a spatial-spectral combined objective function is designed to obtain an optimal projection matrix that maps the original HSI data into a low-dimensional embedding space in which the intrinsic spatial and spectral properties are well preserved. Experiments were conducted on the Indian Pines and PaviaU datasets to demonstrate that the proposed method outperforms several existing state-of-the-art DR methods. In the future, we will optimize the proposed algorithm to reduce its running time and extend it to other related fields such as multispectral images and very-high-resolution images; the sparse hypergraph model in this paper can also be applied in other domains with high-dimensional data, such as face images, gene expression and radiomics features. Furthermore, it would be interesting to study the integration of hypergraph learning and heterogeneous information networks when exploring high-dimensional data with multiple adjacency relationships.

Author Contributions

H.H. was primarily responsible for mathematical modeling and manuscript writing. M.C. contributed to the experimental design and experimental analysis. Y.D. provided important suggestions for improving the paper.

Funding

This work was supported in part by The Basic and Frontier Research Programmes of Chongqing under Grants cstc2018jcyjAX0093 and cstc2018jcyjAX0633, the Chongqing University Postgraduates Innovation Project under Grants CYB18048 and CYS18035, and the National Science Foundation of China under Grant 41371338.

Acknowledgments

The authors would like to thank the anonymous reviewers and associate editor for their valuable comments and suggestions to improve the quality of the paper.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. An example of a hypergraph: (a) simple graph; (b) hypergraph; and (c) incidence matrix H.
Figure 2. Flowchart of the proposed SSRHE method.
Figure 3. A simple two-class model to illustrate the construction of the sparse hypergraph.
Figure 4. Indian Pines hyperspectral image: (a) HSI in false color; (b) ground truth; and (c) spectral curve. (Note that the number of samples for each class is shown in brackets.)
Figure 5. PaviaU hyperspectral image: (a) HSI in false color; (b) ground truth; and (c) spectral curve. (Note that the number of samples for each class is shown in brackets.)
Figure 6. The OAs of SSRHE with different α and β on the Indian Pines dataset.
Figure 7. The OAs of SSRHE with different α and β on the PaviaU dataset.
Figure 8. OAs with respect to parameters T and φ on different datasets: (a) the OAs of T; and (b) the OAs of φ.
Figure 9. OAs with different embedding dimensions d: (a) Indian Pines dataset; and (b) PaviaU dataset.
Figure 10. Classification maps of different DR methods with 1-NN on the Indian Pines dataset. Note that the time cost of the corresponding DR algorithm is marked in brackets.
Figure 11. Classification maps of different DR methods with 1-NN on the PaviaU dataset. Note that the time cost of the corresponding DR algorithm is marked in brackets.
Table 1. Classification results of different DR methods with different numbers of training samples per class on the Indian Pines dataset (OA ± STD (%) (KC)).

Method | 5 | 20 | 50 | 100 | 200
RAW | 43.6 ± 2.8 (0.372) | 54.9 ± 1.7 (0.495) | 60.1 ± 1.4 (0.552) | 63.6 ± 0.9 (0.588) | 66.9 ± 0.6 (0.622)
PCA | 43.4 ± 2.7 (0.370) | 54.9 ± 1.6 (0.495) | 60.2 ± 1.2 (0.553) | 63.9 ± 0.8 (0.591) | 67.0 ± 0.6 (0.622)
LDA | 32.5 ± 4.8 (0.253) | 51.6 ± 1.9 (0.459) | 64.4 ± 1.2 (0.599) | 71.0 ± 0.5 (0.672) | 74.4 ± 0.7 (0.706)
LPP | 43.6 ± 3.7 (0.372) | 54.5 ± 1.8 (0.491) | 59.7 ± 1.2 (0.546) | 62.7 ± 1.0 (0.578) | 65.8 ± 0.5 (0.609)
MFA | 44.1 ± 4.0 (0.377) | 57.1 ± 1.6 (0.520) | 66.8 ± 1.9 (0.625) | 70.8 ± 1.1 (0.669) | 72.0 ± 1.0 (0.680)
RLDE | 41.7 ± 1.1 (0.622) | 60.9 ± 1.5 (0.561) | 69.8 ± 1.4 (0.659) | 74.6 ± 0.7 (0.711) | 78.4 ± 0.6 (0.751)
DHLP | 44.1 ± 3.8 (0.377) | 57.2 ± 2.1 (0.522) | 68.9 ± 1.2 (0.649) | 73.8 ± 0.8 (0.702) | 77.6 ± 0.7 (0.741)
SSCE | 30.2 ± 4.5 (0.230) | 69.7 ± 1.0 (0.658) | 76.3 ± 0.9 (0.730) | 79.1 ± 0.5 (0.760) | 82.9 ± 0.6 (0.801)
LPNPE | 60.2 ± 3.5 (0.594) | 74.0 ± 1.4 (0.706) | 79.3 ± 0.7 (0.759) | 81.6 ± 0.6 (0.791) | 84.2 ± 0.6 (0.817)
SSRHE | 65.6 ± 2.3 (0.615) | 74.8 ± 1.2 (0.711) | 80.0 ± 1.0 (0.765) | 82.9 ± 1.0 (0.803) | 86.7 ± 1.0 (0.829)
Note that the best results are marked in bold.
Table 2. Classification results of different DR methods with different numbers of training samples per class on the PaviaU dataset (OA ± STD (%) (KC)).

Method | 5 | 20 | 50 | 100 | 200
RAW | 60.5 ± 4.2 (0.512) | 66.4 ± 2.4 (0.583) | 73.5 ± 1.6 (0.663) | 76.4 ± 0.8 (0.698) | 78.8 ± 0.8 (0.724)
PCA | 60.5 ± 4.2 (0.512) | 66.5 ± 2.2 (0.583) | 73.4 ± 1.6 (0.662) | 76.4 ± 0.8 (0.697) | 78.7 ± 0.6 (0.724)
LDA | 46.7 ± 6.4 (0.351) | 59.6 ± 1.8 (0.495) | 73.5 ± 1.4 (0.662) | 78.9 ± 0.9 (0.727) | 83.4 ± 0.6 (0.782)
LPP | 47.0 ± 5.6 (0.354) | 59.3 ± 2.6 (0.500) | 72.8 ± 2.3 (0.654) | 78.3 ± 1.3 (0.722) | 82.2 ± 1.2 (0.768)
MFA | 64.5 ± 4.3 (0.555) | 69.2 ± 4.5 (0.613) | 76.4 ± 2.0 (0.699) | 78.1 ± 2.4 (0.715) | 79.1 ± 2.2 (0.730)
RLDE | 64.4 ± 3.2 (0.555) | 74.6 ± 2.7 (0.677) | 77.9 ± 2.2 (0.718) | 82.1 ± 1.0 (0.770) | 84.8 ± 1.0 (0.802)
DHLP | 56.8 ± 8.0 (0.471) | 62.2 ± 3.6 (0.530) | 70.8 ± 2.1 (0.629) | 77.5 ± 2.7 (0.711) | 80.2 ± 1.5 (0.742)
SSCE | 42.3 ± 5.3 (0.309) | 63.3 ± 2.9 (0.543) | 75.8 ± 1.7 (0.692) | 82.7 ± 1.2 (0.804) | 87.0 ± 0.8 (0.828)
LPNPE | 68.0 ± 4.2 (0.606) | 80.0 ± 2.2 (0.747) | 86.3 ± 1.3 (0.822) | 87.9 ± 0.9 (0.842) | 89.9 ± 0.6 (0.877)
SSRHE | 71.6 ± 2.7 (0.646) | 82.6 ± 2.3 (0.776) | 87.5 ± 1.1 (0.837) | 90.0 ± 1.5 (0.882) | 92.2 ± 0.2 (0.908)
Note that the best results are marked in bold.
Table 3. Classification results of different DR methods on the Indian Pines dataset.

Class | Train | Test | RAW | PCA | LDA | LPP | MFA | RLDE | DHLP | SSCE | LPNPE | SSRHE
1 | 10 | 36 | 41.67 | 41.67 | 77.78 | 36.11 | 50.00 | 61.11 | 63.89 | 55.56 | 77.78 | 94.44
2 | 143 | 1285 | 53.39 | 52.45 | 64.12 | 51.05 | 56.03 | 72.14 | 66.69 | 60.31 | 80.78 | 88.17
3 | 83 | 747 | 57.30 | 55.29 | 57.70 | 47.12 | 50.07 | 61.58 | 61.58 | 63.45 | 74.30 | 80.46
4 | 24 | 213 | 41.78 | 44.60 | 52.58 | 43.19 | 21.60 | 58.69 | 59.15 | 51.17 | 77.00 | 84.04
5 | 48 | 435 | 78.85 | 78.62 | 89.43 | 77.93 | 78.85 | 86.90 | 87.82 | 80.46 | 91.72 | 96.55
6 | 73 | 657 | 90.26 | 89.50 | 95.74 | 91.02 | 94.67 | 95.28 | 95.89 | 94.52 | 96.04 | 97.02
7 | 10 | 18 | 77.78 | 88.89 | 100 | 88.89 | 77.78 | 94.44 | 94.44 | 100 | 94.44 | 100
8 | 48 | 430 | 95.58 | 95.58 | 99.53 | 93.95 | 93.26 | 99.30 | 99.77 | 92.56 | 99.53 | 98.60
9 | 10 | 10 | 70.00 | 70.00 | 60.00 | 50.00 | 70.00 | 80.00 | 90.00 | 90.00 | 100 | 80.00
10 | 97 | 875 | 61.03 | 60.46 | 60.91 | 57.49 | 42.06 | 68.91 | 63.20 | 72.11 | 82.74 | 83.89
11 | 246 | 2209 | 69.76 | 69.85 | 71.89 | 69.62 | 58.85 | 79.36 | 79.22 | 74.02 | 85.92 | 89.50
12 | 59 | 534 | 39.33 | 37.45 | 65.36 | 32.02 | 47.38 | 67.42 | 62.73 | 50.56 | 87.83 | 83.71
13 | 21 | 184 | 88.04 | 88.04 | 97.83 | 88.04 | 94.57 | 97.28 | 98.37 | 94.57 | 98.91 | 100
14 | 127 | 1138 | 94.02 | 93.94 | 94.11 | 92.88 | 90.69 | 96.66 | 95.61 | 93.94 | 96.10 | 95.52
15 | 39 | 347 | 31.12 | 30.55 | 54.18 | 25.07 | 42.65 | 40.92 | 48.41 | 55.04 | 71.47 | 83.57
16 | 10 | 83 | 91.57 | 91.57 | 90.36 | 85.54 | 85.54 | 90.36 | 92.77 | 84.34 | 92.77 | 97.59
OA (%) | – | – | 68.33 | 67.88 | 74.44 | 65.91 | 64.03 | 78.27 | 77.00 | 74.06 | 87.65 | 89.78
AA (%) | – | – | 67.59 | 68.03 | 76.97 | 64.37 | 65.87 | 78.15 | 78.72 | 75.79 | 88.02 | 90.88
KC | – | – | 0.638 | 0.633 | 0.707 | 0.609 | 0.589 | 0.751 | 0.736 | 0.704 | 0.858 | 0.884
Table 4. Classification results of different DR methods on the PaviaU dataset.

Class | Train | Test | RAW | PCA | LDA | LPP | MFA | RLDE | DHLP | SSCE | LPNPE | SSRHE
1 | 332 | 6299 | 85.62 | 85.62 | 87.68 | 87.82 | 82.76 | 90.19 | 63.89 | 89.73 | 90.20 | 91.19
2 | 933 | 17,716 | 94.65 | 94.57 | 94.88 | 94.76 | 93.90 | 97.73 | 66.69 | 96.70 | 97.53 | 98.12
3 | 105 | 1994 | 65.15 | 64.64 | 63.34 | 67.00 | 61.84 | 74.77 | 61.58 | 72.37 | 77.28 | 78.6
4 | 154 | 2910 | 77.22 | 77.36 | 81.79 | 79.01 | 77.02 | 84.13 | 59.15 | 84.78 | 87.83 | 89.26
5 | 68 | 1277 | 98.83 | 98.83 | 98.84 | 99.30 | 99.77 | 99.53 | 87.82 | 99.37 | 99.77 | 99.77
6 | 252 | 4777 | 60.26 | 60.32 | 65.17 | 65.47 | 69.72 | 70.36 | 95.89 | 73.86 | 89.68 | 85.22
7 | 67 | 1263 | 75.30 | 75.30 | 66.67 | 75.69 | 71.26 | 80.36 | 94.4 | 88.60 | 86.06 | 90.18
8 | 185 | 3497 | 80.27 | 80.27 | 74.39 | 81.42 | 77.36 | 84.79 | 99.77 | 82.85 | 84.68 | 79.33
9 | 48 | 899 | 100 | 100 | 99.44 | 100 | 99.67 | 100 | 90.00 | 99.78 | 99.89 | 100
OA (%) | – | – | 84.92 | 84.88 | 85.40 | 86.27 | 84.73 | 89.70 | 78.00 | 89.60 | 91.30 | 92.59
AA (%) | – | – | 81.92 | 81.88 | 81.47 | 83.38 | 81.48 | 86.87 | 77.72 | 87.56 | 89.53 | 90.55
KC | – | – | 0.797 | 0.796 | 0.804 | 0.815 | 0.796 | 0.861 | 0.736 | 0.861 | 0.883 | 0.902

Share and Cite

Huang, H.; Chen, M.; Duan, Y. Dimensionality Reduction of Hyperspectral Image Using Spatial-Spectral Regularized Sparse Hypergraph Embedding. Remote Sens. 2019, 11, 1039. https://0-doi-org.brum.beds.ac.uk/10.3390/rs11091039