Article

Multi-Feature Manifold Discriminant Analysis for Hyperspectral Image Classification

The Key Laboratory on Opto-Electronic Technique and Systems, Ministry of Education, Chongqing University, Chongqing 400044, China
* Author to whom correspondence should be addressed.
Submission received: 18 January 2019 / Revised: 3 March 2019 / Accepted: 13 March 2019 / Published: 17 March 2019
(This article belongs to the Special Issue Dimensionality Reduction for Hyperspectral Imagery Analysis)

Abstract

Hyperspectral images (HSIs) provide both spatial structure and spectral information for classification, but many traditional methods simply concatenate spatial and spectral features, which usually leads to the curse-of-dimensionality and an unbalanced representation of the different features. To address this issue, a new dimensionality reduction (DR) method, termed multi-feature manifold discriminant analysis (MFMDA), is proposed in this paper. First, MFMDA employs the local binary patterns (LBP) operator to extract textural features that encode the spatial information in HSI. Then, under the graph embedding framework, intrinsic and penalty graphs of the LBP and spectral features are constructed to explore the discriminant manifold structure in the spatial and spectral domains, respectively. After that, a new spatial-spectral DR model for multi-feature fusion is built to extract discriminant spatial-spectral combined features; it not only preserves the similarity relationship between spectral features and LBP features but also possesses strong discriminating ability in the low-dimensional embedding space. Experiments on the Indian Pines, Heihe and Pavia University (PaviaU) hyperspectral data sets demonstrate that the proposed MFMDA method performs significantly better than some state-of-the-art methods that use only a single feature or simply stack spectral and spatial features together, and its classification accuracies reach 95.43%, 97.19% and 96.60%, respectively.

Graphical Abstract

1. Introduction

Hyperspectral imagery (HSI) provides hundreds of narrow and continuous adjacent bands through dense spectral sampling from visible to short-wave infrared regions [1,2,3,4,5,6,7,8]. A fine-spectral-resolution HSI provides useful information for classifying different types of ground objects, and it has a variety of applications in many fields such as mineral exploration, environmental monitoring, precision agriculture, and target recognition [9,10,11,12,13]. Classification of each pixel in HSI plays a crucial role in these real applications, but complex spectral characteristics within HSI data pose huge challenges to the traditional spectral feature-based HSI classification [14,15,16,17,18,19].
Recent investigations have demonstrated that combining spatial and spectral information is beneficial to the feature extraction and classification of HSI data [20,21,22,23,24,25,26,27,28]. In recent years, many effective spatial features have been proposed based on structure, shape, texture, geometry, etc. Li et al. [29] extracted textural features of HSI using the LBP operator and then classified them with an extreme learning machine (ELM). Mauro et al. [30] designed an extended multi-attribute profiles (EMAP) algorithm to explore morphological features of HSI, and the extracted features were classified by a random forest classifier. Li et al. [31] introduced generalized composite kernel machines to explore spatial information through EMAP, and then used multinomial logistic regression for classification. However, in real applications, it is impossible to find a single feature that suits different image scenes, due to the variety and irregular distribution of ground objects. The conventional way of addressing this issue is a feature stacking (FS) approach that combines different types of features. Li et al. [32] obtained combined features by fusing spectral and EMAP features, which improved the classification accuracy of HSI. Song et al. [33] used the LBP operator to extract textural features, and then stacked the spectral and textural features for classification. However, feature stacking commonly leads to the curse-of-dimensionality because of the increased dimension of the stacked features, and thus such methods do not necessarily ensure better performance for HSI classification. Therefore, an urgent challenge in multi-feature classification of HSI data is how to substantially reduce the dimension of the combined spatial-spectral features while preserving their valuable intrinsic information [34].
To solve this problem, many DR methods have been proposed to reduce the number of bands and obtain the desired information in HSI [35,36,37,38]. Principal component analysis (PCA) and linear discriminant analysis (LDA) are two classical DR methods [39,40]. However, these two subspace methods cannot analyze data that lie on or near a submanifold embedded in the original space. Therefore, graph-based manifold learning methods have attracted wide attention recently [41]. Such methods include isometric mapping (Isomap), Laplacian eigenmaps (LE), locality preserving projections (LPP), locally linear embedding (LLE), neighborhood preserving embedding (NPE), and local tangent space alignment (LTSA) [42,43,44,45,46,47]. These graph embedding (GE) methods are unsupervised and do not use the discriminant information in training samples. Some supervised learning methods were designed to exploit the label information of training data to enhance the discriminating ability for classification, such as marginal Fisher analysis (MFA), locality sensitive discriminant analysis (LSDA), coupled discriminant multi-manifold analysis (CDMMA), and local geometric structure Fisher analysis (LGSFA) [48,49,50,51]. However, the above DR methods make use of only the spectral features in HSI, while it is commonly accepted that exploiting multiple features, such as spectral, texture and shape features, brings significant benefits to classification performance.
To explore DR of multiple features for HSI classification, Fauvel et al. [52] used PCA to reduce the dimension of EMAP features and stacked them with spectral features to form fused feature vectors. Huo et al. [53] selected the first three PCs of HSI to extract Gabor textures, then concatenated the Gabor textures and spectral features of the same pixel to form a combined feature for classification. However, the above multi-feature-based methods simply stacked the reduced spectral and spatial features together after applying DR to the different types of features separately. The embedded features are obtained in different subspaces, which cannot ensure a globally optimal solution. Furthermore, the direct stacking strategy may lead to an unbalanced representation of the different features.
To overcome the above drawbacks, we propose a novel DR algorithm termed multi-feature manifold discriminant analysis for HSI data. The MFMDA method first exploits the spatial information in HSI by extracting LBP textural features. Then it constructs the intrinsic and penalty graphs of the spectral features and the LBP features within the GE framework, which can effectively discover the manifold structure of the spatial and spectral features. After that, MFMDA learns a low-dimensional embedding space from the original spectral features as well as the LBP features, compacting the intramanifold samples while separating the intermanifold samples, which increases the margins between different manifolds. As a result, the spatial-spectral embedded features possess stronger discriminating ability for HSI classification. Experimental results on three real hyperspectral data sets show that the proposed MFMDA algorithm can significantly improve the classification accuracy compared with some state-of-the-art DR methods, especially when only limited training samples are available.
The remainder of this paper is organized as follows. In Section 2, we briefly introduce the spectral features, textural features, and the GE framework. The details of our algorithm are introduced in Section 3. Section 4 gives experimental results to demonstrate the effectiveness of our algorithm. We give some concluding remarks and suggestions for further work in Section 5.

2. Related Works

2.1. Spectral and LBP Features of HSI

Spectral and textural information are the fundamental properties of hyperspectral imagery. Spectral information provides densely sampled reflectance values over a wide range of the electro-magnetic spectrum to distinguish similar materials, while texture is a typical spatial feature which gives a description of the homogeneity of an image using the texture element as the fundamental unit. Recent studies show that combining spatial context into pixel-based spectral classification can substantially improve the classification performance of HSI [54].
Local binary pattern is a discriminative and computationally efficient local texture descriptor that has shown promising performance in classification. The original LBP operator represents the pixels of an image with binary numbers called LBP codes, which encode the local structure around each pixel, and then the codes are used for further analysis [29]. The procedure is shown in Figure 1, where the 10th band of the PaviaU hyperspectral image is used to extract LBP features. As in Figure 1, for a given center pixel in a 3 × 3 window, the neighbor pixels are assigned binary labels ("0" or "1") depending on whether their gray values are smaller than that of the center pixel. An 8-digit binary number is obtained by concatenating these binary codes in a clockwise direction starting from the top-left one, and the resulting binary number is referred to as the LBP code.
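To make the thresholding and encoding steps concrete, the following minimal Python sketch computes the basic 8-neighbor LBP code of one 3 × 3 window as described above; the toy gray values are invented for illustration, and this is the basic LBP rather than the "uniform" variant used later in the experiments.

```python
import numpy as np

def lbp_code_3x3(window):
    """Basic 8-neighbor LBP code of a 3x3 gray-value window.

    Neighbors read clockwise from the top-left pixel receive "1" if their
    gray value is not smaller than the center value, and "0" otherwise.
    """
    center = window[1, 1]
    neighbors = [window[0, 0], window[0, 1], window[0, 2],
                 window[1, 2], window[2, 2], window[2, 1],
                 window[2, 0], window[1, 0]]
    bits = ["1" if v >= center else "0" for v in neighbors]
    return int("".join(bits), 2)      # 8-digit binary number -> decimal code

# Toy window with invented gray values.
w = np.array([[6, 5, 2],
              [7, 6, 1],
              [9, 8, 7]])
print(lbp_code_3x3(w))                # pattern "10001111" -> 143
```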
According to the aforementioned analysis, spectral features and LBP features represent the information in HSI from different perspectives. Spectral features provide continuous spectral measurements across the entire electromagnetic spectrum, while LBP features better express detailed local spatial structures, such as edges, corners, and knots. Thus, it is promising to apply LBP features as a supplement to spectral features, which do not consider the spatial relations between pixels in HSI. However, both spectral features and LBP features are characterized by high dimensionality. A common approach to this problem is to explore DR methods that greatly reduce the dimension of the high-dimensional features without significant loss of information.

2.2. Graph Embedding

The GE framework unifies many classical DR algorithms such as PCA, LDA, Isomap, LLE, LE, LPP and NPE. In GE, an intrinsic graph is constructed to characterize the statistical or geometrical properties that need to be preserved, and a penalty graph is constructed to describe the properties that should be avoided. The intrinsic graph $G_I(X, W_w)$ and the penalty graph $G_P(X, W_b)$ are both undirected weighted graphs, where $X$ is the vertex set, and $W_w \in \mathbb{R}^{n \times n}$ and $W_b \in \mathbb{R}^{n \times n}$ are the weight matrices of $G_I$ and $G_P$, respectively. $w_{ij}^{w}$ indicates the similarity between vertices $x_i$ and $x_j$ in $G_I$, while $w_{ij}^{b}$ measures the dissimilarity between $x_i$ and $x_j$ in $G_P$. Under this framework, MFA was proposed for dimensionality reduction of high-dimensional data. In MFA, $G_I$ connects each point with its neighbors from the same class to represent the intraclass compactness, and $G_P$ connects neighboring points from different classes to represent the interclass separability. In the low-dimensional embedding space, the intraclass compactness and interclass separability should both be enhanced. Therefore, the optimal projection matrix $V$ can be obtained by the following optimization problem:
$$J(V) = \arg\min_V \frac{\sum_{i,j} \left\| V^T x_i - V^T x_j \right\|^2 w_{ij}^{w}}{\sum_{i,j} \left\| V^T x_i - V^T x_j \right\|^2 w_{ij}^{b}} = \arg\min_V \frac{\operatorname{tr}\left( V^T X L X^T V \right)}{\operatorname{tr}\left( V^T X L_P X^T V \right)} \tag{1}$$
where $L = D_w - W_w$ is the Laplacian matrix of graph $G_I$, with $W_w = [w_{ij}^{w}]_{i,j=1}^{n}$ and $D_w$ a diagonal matrix whose entries are $D_{ii}^{w} = \sum_{j=1}^{n} w_{ij}^{w}$; and $L_P = D_b - W_b$ is the Laplacian matrix of graph $G_P$, with $W_b = [w_{ij}^{b}]_{i,j=1}^{n}$ and $D_{ii}^{b} = \sum_{j=1}^{n} w_{ij}^{b}$.
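As an illustration of how the trace ratio in Equation (1) is typically solved, the sketch below builds the two graph Laplacians from given weight matrices and takes the eigenvectors of the corresponding generalized eigenvalue problem; the function name and the small regularization term are our own choices, not part of the original formulation.

```python
import numpy as np
from scipy.linalg import eigh

def graph_embedding_projection(X, W_w, W_b, d, reg=1e-6):
    """Solve the GE objective of Equation (1) by a generalized eigenproblem.

    X   : (D, n) data matrix, one sample per column
    W_w : (n, n) intrinsic-graph weights (similarities to preserve)
    W_b : (n, n) penalty-graph weights (dissimilarities to enlarge)
    d   : embedding dimension
    """
    L = np.diag(W_w.sum(axis=1)) - W_w               # Laplacian of the intrinsic graph
    L_p = np.diag(W_b.sum(axis=1)) - W_b             # Laplacian of the penalty graph
    num = X @ L @ X.T                                # X L X^T
    den = X @ L_p @ X.T + reg * np.eye(X.shape[0])   # X L_p X^T (regularized)
    # The smallest generalized eigenvalues minimize the trace ratio.
    _, vecs = eigh(num, den)
    return vecs[:, :d]                               # projection matrix V, shape (D, d)
```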

3. Proposed Approach

Suppose a hyperspectral data set $X = [x_1, x_2, x_3, \ldots, x_N] \in \mathbb{R}^{D \times N}$, where $D$ is the number of bands and $N$ is the number of pixels in the HSI data. $X^S = \{x_i^s\}_{i=1}^{N}$ and $X^L = \{x_i^l\}_{i=1}^{N}$ denote the spectral features and the LBP features of $X$, respectively. The class label of $x_i$ is denoted by $l(x_i) \in \{1, 2, \ldots, c\}$, where $c$ is the number of classes. The purpose of DR is to find a low-dimensional embedding $Y = [y_1, y_2, y_3, \ldots, y_N] \in \mathbb{R}^{d \times N}$, where $d$ ($d \ll D$) is the dimensionality of the extracted features.

3.1. Motivation

Since different types of features represent HSI data from different perspectives, multiple feature fusion will bring benefits to enhance the discrimination capability for classification. The most common way to combine these features is to simply concatenate different types of features together, and then a classifier is employed to classify the stacked features. However, such stack-based methods have witnessed limited performance due to the simple strategy, and they may even perform worse than using a single feature in HSI data. The reasons for this phenomenon are summarized as follows:
  • Simply stacking spatial and spectral features may yield redundant information, and it remains difficult to achieve an optimal combination for different kinds of features;
  • The spatial information and spectral information are not equally represented by simple stacking;
  • The stacked features greatly increase the dimensionality of the spatial-spectral combined features, which makes HSI classification fairly challenging due to the curse-of-dimensionality problem, especially when only limited training samples are available.
Many DR methods have been explored to reduce the dimension of stacked features. However, different types of features usually lie on different manifolds. Performing dimensionality reduction directly on the simply stacked features cannot reveal the manifold structure of different features in HSI. As a result, the discriminant information contained by different features is not effectively represented, which will restrict their discriminant capability for classification.
To overcome the shortcomings discussed above, a new DR method called MFMDA is introduced in the next section. By exploring the manifold structures of the different features, it can effectively extract spatial-spectral combined features and subsequently improve the classification performance of HSI.

3.2. MFMDA

The goal of the proposed MFMDA method is to find an optimized projection matrix which can couple dimensionality reduction and data fusion of original features (from HSI data) and spatial features (LBP features generated from HSI) based on GE framework. MFMDA simultaneously learns a low-dimensional embedding space from original spectral features as well as LBP features for compacting the intramanifold samples while separating intermanifold samples, which will increase the margins between different manifolds. As a result, the obtained embedding features possess stronger discriminating ability that helps to subsequent classification. The flowchart of MFMDA is shown in Figure 2.
As illustrated in Figure 2, the similarity relationship between the spectral features and the LBP features of the same pixel should be preserved in the low-dimensional embedding space. Let $A_S \in \mathbb{R}^{D \times d}$ and $A_L \in \mathbb{R}^{D \times d}$ be the corresponding projection matrices of the spectral features and the LBP features, respectively. $A_S$ and $A_L$ should minimize the distance between the two embedded features of the same pixel, and the objective function can be defined as follows:
$$J_1(A_S, A_L) = \min \sum_{i=1}^{N} \left\| A_S^T x_i^s - A_L^T x_i^l \right\|^2 \tag{2}$$
With some mathematical operations, Equation (2) can be rewritten as:
$$\begin{aligned} J_1(A_S, A_L) &= \min \sum_{i=1}^{N} \left\| A_S^T x_i^s - A_L^T x_i^l \right\|^2 \\ &= \operatorname{tr}\left( \sum_{i=1}^{N} \left( A_S^T x_i^s - A_L^T x_i^l \right) \left( A_S^T x_i^s - A_L^T x_i^l \right)^T \right) \\ &= \operatorname{tr}\left( \sum_{i=1}^{N} \left( A_S^T x_i^s (x_i^s)^T A_S - A_L^T x_i^l (x_i^s)^T A_S - A_S^T x_i^s (x_i^l)^T A_L + A_L^T x_i^l (x_i^l)^T A_L \right) \right) \\ &= \operatorname{tr}\left( \begin{bmatrix} B & 0 \\ 0 & C \end{bmatrix}^T \begin{bmatrix} (X^S)^T & 0 \\ 0 & (X^L)^T \end{bmatrix} \begin{bmatrix} X^S & 0 \\ 0 & X^L \end{bmatrix} L_1 \begin{bmatrix} (X^S)^T & 0 \\ 0 & (X^L)^T \end{bmatrix} \begin{bmatrix} X^S & 0 \\ 0 & X^L \end{bmatrix} \begin{bmatrix} B & 0 \\ 0 & C \end{bmatrix} \right) \\ &= \operatorname{tr}\left( A^T E L_1 E^T A \right) \end{aligned} \tag{3}$$
where $A_S$ and $A_L$ are parameterized as $A_S = X^S B$ and $A_L = X^L C$, and $B$ and $C$ are the projection matrices that map the high-dimensional spectral and texture features to the low-dimensional embedding space, respectively. $A = \begin{bmatrix} B & 0 \\ 0 & C \end{bmatrix}$, $E = \begin{bmatrix} (X^S)^T X^S & 0 \\ 0 & (X^L)^T X^L \end{bmatrix}$, $L_1 = \begin{bmatrix} I & -I \\ -I & I \end{bmatrix}$, where $I$ is the identity matrix.
From the view point of classification, in the low-dimensional embedding space, we expect that the samples are as close as possible if they belong to the same manifold, while samples are as far as possible if they are from different manifolds. To achieve this goal, we define the objective function as follows:
$$J_2(A_S, A_L) = \min \left( \sum_{i=1}^{N} \sum_{j=1}^{N} \left\| A_S^T x_i^s - A_S^T x_j^s \right\|^2 w_{ij}^{ws} + \sum_{i=1}^{N} \sum_{j=1}^{N} \left\| A_L^T x_i^l - A_L^T x_j^l \right\|^2 w_{ij}^{wl} \right) \tag{4}$$
$$J_3(A_S, A_L) = \max \left( \sum_{i=1}^{N} \sum_{j=1}^{N} \left\| A_S^T x_i^s - A_S^T x_j^s \right\|^2 w_{ij}^{bs} + \sum_{i=1}^{N} \sum_{j=1}^{N} \left\| A_L^T x_i^l - A_L^T x_j^l \right\|^2 w_{ij}^{bl} \right) \tag{5}$$
where $w_{ij}^{ws}$ and $w_{ij}^{bs}$ are the affinity weights that characterize the similarity between spectral features $x_i^s$ and $x_j^s$ in the intrinsic graph $G_I^S$ and the dissimilarity between $x_i^s$ and $x_j^s$ in the penalty graph $G_P^S$, while $w_{ij}^{wl}$ and $w_{ij}^{bl}$ are the affinity weights that characterize the similarity between LBP features $x_i^l$ and $x_j^l$ in the intrinsic graph $G_I^L$ and the dissimilarity between $x_i^l$ and $x_j^l$ in the penalty graph $G_P^L$, respectively.
In the intrinsic graph $G_I^S$ of the spectral features, the vertices $x_i^s$ and $x_j^s$ are connected by an edge if $l(x_i) = l(x_j)$ and $x_j^s$ belongs to the $n_w$ nearest neighbors of $x_i^s$. In the penalty graph $G_P^S$, the vertices $x_i^s$ and $x_j^s$ are connected by an edge if $l(x_i) \neq l(x_j)$ and $x_j^s$ belongs to the $n_b$ nearest neighbors of $x_i^s$. The weights $w_{ij}^{ws}$ and $w_{ij}^{bs}$ of the two spectral-based graphs are defined as:
$$w_{ij}^{ws} = \begin{cases} \exp\left( -\dfrac{\left\| x_i^s - x_j^s \right\|^2}{2 (t_i^s)^2} \right), & x_i^s \in N_{s,w}(x_j^s) \ \text{or} \ x_j^s \in N_{s,w}(x_i^s) \\ 0, & \text{otherwise} \end{cases} \tag{6}$$
$$w_{ij}^{bs} = \begin{cases} \exp\left( -\dfrac{\left\| x_i^s - x_j^s \right\|^2}{2 (t_i^s)^2} \right), & x_i^s \in N_{s,b}(x_j^s) \ \text{or} \ x_j^s \in N_{s,b}(x_i^s) \\ 0, & \text{otherwise} \end{cases} \tag{7}$$
where $N_{s,w}(x_i^s)$ denotes the $n_w$ intramanifold neighbors of the spectral feature $x_i^s$, $N_{s,b}(x_i^s)$ denotes the $n_b$ intermanifold neighbors of $x_i^s$, and $t_i^s = \frac{1}{n} \sum_{j=1}^{n} \left\| x_i^s - x_j^s \right\|$.
In the intrinsic graph $G_I^L$ of the LBP features, an edge is added between the vertices $x_i^l$ and $x_j^l$ if $l(x_i) = l(x_j)$ and $x_j^l$ belongs to the $n_w$ nearest neighbors of $x_i^l$; in the penalty graph $G_P^L$, an edge is added between $x_i^l$ and $x_j^l$ if $l(x_i) \neq l(x_j)$ and $x_j^l$ belongs to the $n_b$ nearest neighbors of $x_i^l$. The weights $w_{ij}^{wl}$ and $w_{ij}^{bl}$ of the two LBP-based graphs are set as:
$$w_{ij}^{wl} = \begin{cases} \exp\left( -\dfrac{\left\| x_i^l - x_j^l \right\|^2}{2 (t_i^l)^2} \right), & x_i^l \in N_{l,w}(x_j^l) \ \text{or} \ x_j^l \in N_{l,w}(x_i^l) \\ 0, & \text{otherwise} \end{cases} \tag{8}$$
$$w_{ij}^{bl} = \begin{cases} \exp\left( -\dfrac{\left\| x_i^l - x_j^l \right\|^2}{2 (t_i^l)^2} \right), & x_i^l \in N_{l,b}(x_j^l) \ \text{or} \ x_j^l \in N_{l,b}(x_i^l) \\ 0, & \text{otherwise} \end{cases} \tag{9}$$
where $N_{l,w}(x_i^l)$ denotes the $n_w$ intramanifold neighbors of the LBP feature $x_i^l$, $N_{l,b}(x_i^l)$ denotes the $n_b$ intermanifold neighbors of $x_i^l$, and $t_i^l = \frac{1}{n} \sum_{j=1}^{n} \left\| x_i^l - x_j^l \right\|$.
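Before returning to the objective functions, the following sketch illustrates how the intrinsic and penalty weights of Equations (6)–(9) can be computed for one feature type (spectral or LBP); the neighbor search and heat-kernel width follow the definitions above, while the symmetrization step and the variable names are our own assumptions.

```python
import numpy as np

def intrinsic_penalty_weights(F, labels, n_w, n_b):
    """Heat-kernel weights of Equations (6)-(9) for one feature type.

    F      : (n, D) feature matrix (spectral or LBP features of n samples)
    labels : (n,) class labels
    n_w    : number of intramanifold (same-class) neighbors
    n_b    : number of intermanifold (different-class) neighbors
    """
    n = F.shape[0]
    dist = np.linalg.norm(F[:, None, :] - F[None, :, :], axis=2)  # pairwise distances
    t = dist.mean(axis=1)                                         # heat-kernel widths t_i
    W_w, W_b = np.zeros((n, n)), np.zeros((n, n))
    for i in range(n):
        same = np.flatnonzero((labels == labels[i]) & (np.arange(n) != i))
        diff = np.flatnonzero(labels != labels[i])
        nn_w = same[np.argsort(dist[i, same])][:n_w]   # nearest same-class neighbors
        nn_b = diff[np.argsort(dist[i, diff])][:n_b]   # nearest different-class neighbors
        W_w[i, nn_w] = np.exp(-dist[i, nn_w] ** 2 / (2.0 * t[i] ** 2))
        W_b[i, nn_b] = np.exp(-dist[i, nn_b] ** 2 / (2.0 * t[i] ** 2))
    # Keep an edge if either sample is a neighbor of the other (the "or" in Eqs. (6)-(9)).
    return np.maximum(W_w, W_w.T), np.maximum(W_b, W_b.T)
```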
The objective function $J_2(A_S, A_L)$ in Equation (4) minimizes the intramanifold distance to ensure that samples from the same manifold stay as close as possible, and the objective function $J_3(A_S, A_L)$ in Equation (5) maximizes the intermanifold distance to enlarge the manifold margins in the low-dimensional embedding space.
With some mathematical operations, Equations (4) and (5) can be rewritten as:
$$\begin{aligned} & \sum_{i=1}^{N} \sum_{j=1}^{N} \left\| A_S^T x_i^s - A_S^T x_j^s \right\|^2 w_{ij}^{ws} + \sum_{i=1}^{N} \sum_{j=1}^{N} \left\| A_L^T x_i^l - A_L^T x_j^l \right\|^2 w_{ij}^{wl} \\ &= \operatorname{tr}\Bigg( \sum_{i=1}^{N} \sum_{j=1}^{N} \left( A_S^T x_i^s w_{ij}^{ws} (x_i^s)^T A_S - 2 A_S^T x_i^s w_{ij}^{ws} (x_j^s)^T A_S + A_S^T x_j^s w_{ij}^{ws} (x_j^s)^T A_S \right) \\ &\qquad + \sum_{i=1}^{N} \sum_{j=1}^{N} \left( A_L^T x_i^l w_{ij}^{wl} (x_i^l)^T A_L - 2 A_L^T x_i^l w_{ij}^{wl} (x_j^l)^T A_L + A_L^T x_j^l w_{ij}^{wl} (x_j^l)^T A_L \right) \Bigg) \\ &= \operatorname{tr}\left( A^T E L_2 E^T A \right) \end{aligned} \tag{10}$$
$$\begin{aligned} & \sum_{i=1}^{N} \sum_{j=1}^{N} \left\| A_S^T x_i^s - A_S^T x_j^s \right\|^2 w_{ij}^{bs} + \sum_{i=1}^{N} \sum_{j=1}^{N} \left\| A_L^T x_i^l - A_L^T x_j^l \right\|^2 w_{ij}^{bl} \\ &= \operatorname{tr}\Bigg( \sum_{i=1}^{N} \sum_{j=1}^{N} \left( A_S^T x_i^s w_{ij}^{bs} (x_i^s)^T A_S - 2 A_S^T x_i^s w_{ij}^{bs} (x_j^s)^T A_S + A_S^T x_j^s w_{ij}^{bs} (x_j^s)^T A_S \right) \\ &\qquad + \sum_{i=1}^{N} \sum_{j=1}^{N} \left( A_L^T x_i^l w_{ij}^{bl} (x_i^l)^T A_L - 2 A_L^T x_i^l w_{ij}^{bl} (x_j^l)^T A_L + A_L^T x_j^l w_{ij}^{bl} (x_j^l)^T A_L \right) \Bigg) \\ &= \operatorname{tr}\left( A^T E L_3 E^T A \right) \end{aligned} \tag{11}$$
where $L_2 = \begin{bmatrix} 2(D_w^s - W_w^s) & 0 \\ 0 & 2(D_w^l - W_w^l) \end{bmatrix}$, with $W_w^s = [w_{ij}^{ws}]_{i,j=1}^{N}$, $W_w^l = [w_{ij}^{wl}]_{i,j=1}^{N}$, $D_w^s = \operatorname{diag}\left( \left[ \sum_{j=1}^{N} w_{ij}^{ws} \right]_{i=1}^{N} \right)$ and $D_w^l = \operatorname{diag}\left( \left[ \sum_{j=1}^{N} w_{ij}^{wl} \right]_{i=1}^{N} \right)$; and $L_3 = \begin{bmatrix} 2(D_b^s - W_b^s) & 0 \\ 0 & 2(D_b^l - W_b^l) \end{bmatrix}$, with $W_b^s = [w_{ij}^{bs}]_{i,j=1}^{N}$, $W_b^l = [w_{ij}^{bl}]_{i,j=1}^{N}$, $D_b^s = \operatorname{diag}\left( \left[ \sum_{j=1}^{N} w_{ij}^{bs} \right]_{i=1}^{N} \right)$ and $D_b^l = \operatorname{diag}\left( \left[ \sum_{j=1}^{N} w_{ij}^{bl} \right]_{i=1}^{N} \right)$.
As discussed, the MFMDA method not only preserves the similarity relationship between spectral features and LBP features but also possesses strong discriminating ability in the low-dimensional embedding space. Therefore, a reasonable criterion for choosing a good projection matrix is to optimize the following objective functions:
$$\begin{cases} \min \ \operatorname{tr}\left( A^T E L_1 E^T A \right) \\ \min \ \operatorname{tr}\left( A^T E L_2 E^T A \right) \\ \max \ \operatorname{tr}\left( A^T E L_3 E^T A \right) \end{cases} \tag{12}$$
The multi-objective optimization problem in Equation (12) is equivalent to:
$$J(A_S, A_L) = \min \left\{ \operatorname{tr}\left( A^T E L_1 E^T A \right) + \alpha \operatorname{tr}\left( A^T E L_2 E^T A \right) - \beta \operatorname{tr}\left( A^T E L_3 E^T A \right) \right\} = \min \left\{ \operatorname{tr}\left( A^T E L E^T A \right) \right\} \tag{13}$$
where $\alpha, \beta > 0$ are two tradeoff parameters that adjust the intramanifold compactness and the intermanifold separability, and $L = L_1 + \alpha L_2 - \beta L_3$.
A constraint $A^T E E^T A = I$ is imposed to remove an arbitrary scaling factor in the projection, and the objective function can be recast as follows:
$$\min \ \operatorname{tr}\left( A^T E L E^T A \right) \quad \text{s.t.} \quad A^T E E^T A = I \tag{14}$$
With the method of Lagrange multipliers, the optimization problem is formulated as
$$\frac{\partial}{\partial A} \operatorname{tr}\left( A^T E L E^T A - \lambda \left( A^T E E^T A - I \right) \right) = 0 \tag{15}$$
where $\lambda$ is the Lagrange multiplier. The optimization problem is thus transformed into a generalized eigenvalue problem, i.e.,
$$E L E^T A = \lambda E E^T A \tag{16}$$
where the optimal projection matrix $A = [a_1, a_2, \ldots, a_d]$ is composed of the eigenvectors corresponding to the $d$ smallest eigenvalues of Equation (16). Then the low-dimensional feature is given by:
$$y_i = \begin{bmatrix} y_i^s \\ y_i^l \end{bmatrix} = \begin{bmatrix} A_S^T x_i^s \\ A_L^T x_i^l \end{bmatrix} = \begin{bmatrix} (X^S B)^T x_i^s \\ (X^L C)^T x_i^l \end{bmatrix} = \begin{bmatrix} (X^S B)^T & 0 \\ 0 & (X^L C)^T \end{bmatrix} \begin{bmatrix} x_i^s \\ x_i^l \end{bmatrix} = \begin{bmatrix} B & 0 \\ 0 & C \end{bmatrix}^T \begin{bmatrix} (X^S)^T & 0 \\ 0 & (X^L)^T \end{bmatrix} \begin{bmatrix} x_i^s \\ x_i^l \end{bmatrix} \tag{17}$$
The proposed MFMDA algorithm is summarized in Algorithm 1.
Algorithm 1 MFMDA.
Input: 
Data set $X = [x_1, x_2, x_3, \ldots, x_N] \in \mathbb{R}^{D \times N}$, corresponding class labels $l(x_i) \in \{1, 2, \ldots, c\}$, the number of intraclass neighbors $n_w$, the number of interclass neighbors $n_b$, and the balance parameters $\alpha$ and $\beta$.
  1: Extract the LBP features from the data set; $X^S = \{x_i^s\}_{i=1}^{N} \in \mathbb{R}^{D \times N}$ and $X^L = \{x_i^l\}_{i=1}^{N} \in \mathbb{R}^{D \times N}$ denote the spectral and LBP features, respectively.
  2: Find the $n_w$ intraclass neighbors and the $n_b$ interclass neighbors of the spectral features and the LBP features, respectively.
  3: Calculate the edge weights of the intrinsic and penalty graphs by Equations (6)–(9).
  4: Compute $D_w^s$, $D_w^l$, $D_b^s$ and $D_b^l$ as $D_w^s = \operatorname{diag}([\sum_{j=1}^{N} w_{ij}^{ws}]_{i=1}^{N})$, $D_w^l = \operatorname{diag}([\sum_{j=1}^{N} w_{ij}^{wl}]_{i=1}^{N})$, $D_b^s = \operatorname{diag}([\sum_{j=1}^{N} w_{ij}^{bs}]_{i=1}^{N})$ and $D_b^l = \operatorname{diag}([\sum_{j=1}^{N} w_{ij}^{bl}]_{i=1}^{N})$, respectively.
  5: Obtain the Laplacian matrix that contains the manifold structure through $L = L_1 + \alpha L_2 - \beta L_3$.
  6: Calculate the matrix $E$ as $E = \begin{bmatrix} (X^S)^T X^S & 0 \\ 0 & (X^L)^T X^L \end{bmatrix}$.
  7: Construct the matrix $A = [a_1, a_2, \ldots, a_d]$ according to Equation (16).
  8: Obtain the projection matrices of the spectral and LBP features through $A = \begin{bmatrix} B & 0 \\ 0 & C \end{bmatrix}$, $A_S = X^S B$ and $A_L = X^L C$.
  9: Obtain the low-dimensional features $Y$ through Equation (17).
Output: 
$Y = [y_1, y_2, y_3, \ldots, y_N] \in \mathbb{R}^{d \times N}$, $d \ll D$; projection matrices of the spectral and LBP features: $A_S = X^S B$ and $A_L = X^L C$.
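For readers who prefer code, the sketch below mirrors steps 4–9 of Algorithm 1 in Python, assuming the graph weights of step 3 have already been computed (for example, with the helper sketched earlier in this section); the regularization of the right-hand side matrix and all variable names are our assumptions rather than part of the original algorithm.

```python
import numpy as np
from scipy.linalg import eigh

def mfmda(Xs, Xl, Ws_w, Wl_w, Ws_b, Wl_b, alpha, beta, d, reg=1e-6):
    """Compact sketch of Algorithm 1 (steps 4-9).

    Xs, Xl       : (D, N) spectral and LBP feature matrices of the training pixels
    Ws_w, Wl_w   : (N, N) intrinsic-graph weights of the spectral / LBP features
    Ws_b, Wl_b   : (N, N) penalty-graph weights of the spectral / LBP features
    Returns the embedded features and the block projection matrices B and C.
    """
    N = Xs.shape[1]
    Z = np.zeros((N, N))
    I = np.eye(N)

    def lap(W):                                   # graph Laplacian D - W
        return np.diag(W.sum(axis=1)) - W

    # Block matrices of Equations (3), (10) and (11).
    E = np.block([[Xs.T @ Xs, Z], [Z, Xl.T @ Xl]])
    L1 = np.block([[I, -I], [-I, I]])
    L2 = np.block([[2 * lap(Ws_w), Z], [Z, 2 * lap(Wl_w)]])
    L3 = np.block([[2 * lap(Ws_b), Z], [Z, 2 * lap(Wl_b)]])
    L = L1 + alpha * L2 - beta * L3               # step 5

    # Generalized eigenproblem of Equation (16): E L E^T a = lambda E E^T a.
    lhs = E @ L @ E.T
    rhs = E @ E.T + reg * np.eye(2 * N)           # regularized for numerical stability
    _, vecs = eigh(lhs, rhs)
    A = vecs[:, :d]                               # d smallest eigenvalues (step 7)
    B, C = A[:N, :], A[N:, :]                     # blocks of A = diag(B, C) (step 8)

    # Equation (17): stack the projected spectral and LBP features of each pixel.
    Ys = (Xs @ B).T @ Xs                          # A_S^T X^S with A_S = X^S B
    Yl = (Xl @ C).T @ Xl                          # A_L^T X^L with A_L = X^L C
    return np.vstack([Ys, Yl]), B, C
```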

4. Experimental Results and Discussion

In this section, experiments are conducted on three real HSI data sets to evaluate the effectiveness of the proposed MFMDA method.

4.1. Experiment Data Set

Indian Pines data set: This HSI data set was collected by NASA using the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor in Northwest Indiana. After removing water absorption bands, the remaining 200 bands were used in the experiments. The size of this image is 145 × 145 pixels with a spatial resolution of 20 m, and it contains sixteen land cover types such as Wheat, Woods and Oats. This scene in false color and its corresponding ground truth are shown in Figure 3, and the values in brackets indicate the sample size of each class.
Heihe data set [55,56]: This data set is provided by the Heihe Plan Science Data Center, which is sponsored by the integrated research on the eco-hydrological process of the Heihe River Basin of the National Natural Science Foundation of China. It was captured by the Compact Airborne Spectrographic Imager (CASI)/Shortwave Infrared Airborne Spectrographic Imager (SASI) over the Zhangye basin, which is located in the middle reaches of the Heihe watershed, Gansu Province, China. The data have a spatial size of 684 × 453 pixels and a geometric resolution of 2.4 m. Exactly 135 bands remained after the removal of 14 bands affected by noise and atmospheric effects. The data contain 9 different kinds of land cover. The scene in false color and its ground-truth map are shown in Figure 4.
PaviaU data set: This data set is a scene of the University of Pavia collected by the Reflective Optics System Imaging Spectrometer (ROSIS) sensor. It consists of 610 × 340 pixels and the spatial resolution is 1.3 m. 103 spectral bands remained after the removal of some channels affected by dense water vapor and atmospheric effects. Nine classes of ground objects are considered in the data set, such as trees, soil and meadows. This HSI in false color and its corresponding ground truth are shown in Figure 5.

4.2. Experimental Setup

In each experiment, the HSI data set was randomly divided into training and test sets. For the classes that are very small, i.e., Alfalfa, Grass/pasture-mowed, and Oats in the Indian Pines data set, the number of training samples was set to 10 per class. The training samples were used to learn a low-dimensional embedding space, and then all test samples were mapped into that space to extract low-dimensional features. After that, a support vector machine (SVM) with the radial basis function (RBF) kernel was used to classify the test samples, and the library for SVM (LibSVM) toolbox was employed to implement the SVM [57]. The parameters of the SVM were optimized by a grid search. The classification accuracy of each class, the overall classification accuracy (OA), the average classification accuracy (AA), and the kappa coefficient (k) are used to evaluate the classification performance of the different DR methods. To robustly evaluate the performance of the different methods under different conditions, we repeated the experiments 10 times for each condition and report the results as means with standard deviations (STD).
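For reference, the OA, AA and kappa values reported in the following tables can be computed from predicted and true labels as in the short helper below; the function name is ours, and scikit-learn is used here purely for illustration in place of the LibSVM/MATLAB tools mentioned above.

```python
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score, confusion_matrix

def oa_aa_kappa(y_true, y_pred):
    """Overall accuracy, average accuracy and kappa coefficient."""
    oa = accuracy_score(y_true, y_pred)
    cm = confusion_matrix(y_true, y_pred)
    per_class_acc = np.diag(cm) / cm.sum(axis=1)   # accuracy of each class
    aa = per_class_acc.mean()
    kappa = cohen_kappa_score(y_true, y_pred)
    return oa, aa, kappa
```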
The proposed MFMDA algorithm was compared with some state-of-the-art DR algorithms including Baseline, PCA [39], LDA [40], NPE [46], LPP [44], MFA [48] and LGSFA [51], where Baseline means that test samples are classified directly by a classifier without dimensionality reduction. To verify the effectiveness of MFMDA, the above DR algorithms were applied to the spectral features, the LBP features and the stacked features, respectively. Note that the LBP features are obtained with the "uniform LBP" pattern, and the neighborhood radius and the number of sampling points are set to 1 and 8, respectively [58]. The stacked features are obtained by stacking the original spectral features and the LBP features after normalization.
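As a concrete illustration of this LBP configuration, a band-wise "uniform" LBP map with radius 1 and 8 sampling points can be obtained with scikit-image as sketched below; the synthetic band and the direct use of the raw LBP map (rather than local histogram features) are simplifications for illustration.

```python
import numpy as np
from skimage.feature import local_binary_pattern

# One synthetic 145 x 145 band standing in for a band (or principal component)
# of the HSI cube; in practice LBP features are extracted band by band.
band = (np.random.rand(145, 145) * 255).astype(np.uint8)

# "Uniform" LBP with 8 sampling points on a radius-1 circle.
lbp_map = local_binary_pattern(band, P=8, R=1, method='uniform')
print(np.unique(lbp_map).size)   # uniform LBP yields at most P + 2 = 10 distinct codes
```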
For all methods, the parameters were optimized by cross validation to achieve good results. The number of neighbors for NPE and LPP is set to 9. For MFA and LGSFA, the numbers of intraclass and interclass neighbors are chosen as 9 and 180, respectively. All experiments were performed on a personal computer with an i7-7800X central processing unit, 32 GB of memory, and 64-bit Windows 10 using MATLAB 2014b.
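A minimal scikit-learn analogue of the grid-searched RBF-SVM classification step is sketched below; the parameter grid and the toy embedded features are assumptions, since the exact search ranges are not specified in the text.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

# Toy stand-ins for the embedded training features (40-D) and their labels.
rng = np.random.default_rng(0)
Y_train = rng.normal(size=(160, 40))
labels_train = np.repeat(np.arange(16), 10)        # 16 classes, 10 samples each

# RBF-kernel SVM with a grid search over C and gamma (grid values are assumed).
param_grid = {'C': 10.0 ** np.arange(-2, 5), 'gamma': 10.0 ** np.arange(-4, 3)}
clf = GridSearchCV(SVC(kernel='rbf'), param_grid, cv=5)
clf.fit(Y_train, labels_train)
print(clf.best_params_)
```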

4.3. Dimension Selection

To analyze the influence of different embedding dimensions on each DR algorithm, 40 samples were randomly selected from the stacked features of each class in three HSI data sets for training, and the remaining samples were tested. Figure 6 shows the overall classification accuracy under different embedding sizes.
As shown in Figure 6, with the increase of the embedding dimension, the OAs of all methods gradually improve. The reason is that more discriminant information can be retained as the number of embedded features increases, which is helpful for classification. However, once the dimension reaches a certain extent, the low-dimensional embedding space contains enough information for classification, and further increases of the embedding dimension have little effect on the classification performance. Meanwhile, MFMDA achieves better classification results than the other methods, because it can better characterize the intrinsic manifold structure of HSI and obtain more effective low-dimensional discriminant features. To achieve good classification performance for each algorithm, the embedding dimension of all methods is set to 40. For LDA, the embedding dimension is set to $c-1$, where $c$ is the number of classes of the data set.

4.4. Experiments on the Indian Pines Data Set

In this section, experiments were conducted on the Indian Pines data set to evaluate the effectiveness of the proposed algorithm. The proposed MFMDA method has several parameters, and we conducted experiments to analyze their sensitivity. In the experiments, 40 samples per class were randomly selected for training, and the remaining ones were used for testing. The SVM classifier was used for classification. To investigate the influence of the intraclass and interclass neighbors on classification, parameters $n_w$ and $n_b$ were tuned over the sets {1,2,3,⋯,9} and {2,4,6,⋯,20}, respectively. Figure 7a shows the OAs with different values of $n_w$ and $n_b$. As for the tradeoff parameters, $\alpha$ and $\beta$ were both tuned over the set {0,0.1,0.2,⋯,1}. The OAs with different values of $\alpha$ and $\beta$ are displayed in Figure 7b.
As can be seen in Figure 7a, with the increase of $n_w$, the classification accuracy improves and then tends to be stable, because a large number of intraclass neighbors helps to reveal the intrinsic structure of HSI data. When the value of $n_w$ is lower than 7, the OAs remain stable as $n_b$ increases, but the OAs decline significantly when the value of $n_b$ exceeds 10. The reason is that too large a value of $n_b$ causes over-learning of the margins between interclass samples. In Figure 7b, the classification performance improves with the increase of parameter $\alpha$ and then fluctuates slightly, and the proposed method achieves good results over a wide range. However, when $\beta$ takes a very large value, the contribution of the intramanifold compactness becomes limited. Thus, parameters $\alpha$ and $\beta$ balance the contributions of intraclass compactness and interclass separability. According to this figure, we set the parameters $n_w$ and $n_b$ to 6 and 4, and $\alpha$ and $\beta$ to 0.8 and 0.5, to achieve satisfactory performance.
To analyze the classification performance of each algorithm under different numbers of training samples, n i ( n i = 5, 10, 20, 30, 40) samples were randomly selected from each class for training, and the remaining data were used as test samples. Table 1 shows the average OAs with STD for different DR methods with different numbers of training samples.
According to Table 1, with the increase in the size of the training set, the OAs of all methods continuously rise, because a larger training set contains more information from which to learn discriminant features for classification. Furthermore, the classification results of each algorithm on LBP features are superior to those on spectral features, because LBP features are spatial-based features that benefit classification. However, the classification performance of the simply stacked features is even worse than that of the LBP features, which may be because the spatial and spectral features are not equally represented by simple stacking. In contrast, the proposed MFMDA algorithm produces better classification results than the other methods in all conditions, especially when only a small number of labeled samples are available. The reason is that the proposed algorithm not only guarantees the similarity of the spectral features and the LBP features of the same pixel in the low-dimensional embedding space, but also discovers the manifold structure of the hyperspectral data by constructing intrinsic and penalty graphs, and then extracts spatial-spectral combined discriminant features that achieve compactness for intraclass data and separability for interclass data.
To explore the classification performance of MFMDA on each class, 3% of the samples per class were randomly selected for training, and the remaining samples were used for testing. As can be seen from Table 1, the results of the DR methods on LBP features are better than those on spectral features and stacked features, and thus LBP features were chosen for comparison in the following experiments. Table 2 lists the classification accuracy of each class, the OAs, the AAs, and the Kappa coefficients of the different methods, and Figure 8 shows the corresponding classification maps.
As illustrated in Table 2, the MFMDA algorithm achieved good classification results in most classes, especially for the areas labeled Wheat, Grass/Trees, Soybeans-min and Woods. As can be observed in Figure 8, the classification map of the MFMDA algorithm contains more homogeneous regions than those of the other methods.
The above results show that the proposed method compacts the spectral and LBP features from the same class and separates the features belonging to different classes in the low-dimensional embedding space, so it makes better use of the manifold structure hidden in hyperspectral data.

4.5. Experiments on the Heihe Data Set

In this section, the Heihe hyperspectral image was used to further evaluate the classification performance of the proposed algorithm. In the parameter sensitivity experiments, we randomly selected 40 samples from each class for training and used the rest for testing. First, we analyzed the influence of different parameters on the MFMDA algorithm; the OAs with different parameter values are displayed in Figure 9.
As shown in Figure 9a, the OA increases and then declines with the increase of $n_w$; this is because a small value of $n_w$ cannot capture enough information to represent the intraclass structure, while a large value of $n_w$ leads to overfitting. At the same time, an appropriate number of interclass neighbor points can prevent overfitting and effectively obtain the discriminant information of HSI data. In Figure 9b, it can be observed that the OAs increase and then fluctuate slightly with the increase of $\alpha$, and a too small value of $\beta$ leads to unsatisfactory classification performance. This indicates that suitable $\alpha$ and $\beta$ can balance the intramanifold and intermanifold relations of the spectral and textural features. Based on the above analysis, we set the parameters $n_w$ and $n_b$ to 24 and 6, and $\alpha$ and $\beta$ to 0.8 and 0.4.
To compare the MFMDA algorithm with the other DR methods under different numbers of training samples, we randomly selected $n_i$ samples from each class for training, and the remaining samples were used for testing. Table 3 shows the classification results of the various algorithms.
According to Table 3, the classification accuracy increases as the number of training samples increases. Meanwhile, the experimental results of the supervised learning methods (LDA, MFA and LGSFA) are superior to those of the unsupervised ones in most conditions, because the class information of the data is used to enhance the discriminating capability of the embedded features. The proposed method is more effective than the other methods under various conditions, especially when the training set contains few samples. This shows that MFMDA can extract effective spatial-spectral joint features by exploring the inherent manifold structure of HSI data on the basis of GE, and thereby improve the classification accuracy.
To further show the classification results for each class, 0.1% of the samples were randomly selected for training, and the rest were used as test samples. The classification results of the different methods on the Heihe data set are shown in Table 4, and Figure 10 shows the corresponding classification maps.
As illustrated in Table 4, the proposed method achieves good classification performance on many classes, such as Endive Sprout and Artificial Surfaces. In addition, it produces a smoother classification map, which is more conducive to practical application scenarios.

4.6. Experiments on the PaviaU Data Set

In this section, we used the PaviaU data set to analyze the classification performance of the proposed algorithm under a different scene. We randomly selected 40 samples per class as the training set to explore the OAs with respect to the different parameters. The results are displayed in Figure 11.
In Figure 11a, as $n_w$ increases, the OA rises first and then decreases slightly; the reason is that a small number of intraclass neighbor points cannot effectively explore the intramanifold structure, while a large value of $n_w$ includes redundant information and leads to a decrease in classification accuracy. At the same time, when $n_b$ is lower than 8, the OAs maintain a stable value. As shown in Figure 11b, the classification accuracy fluctuates only within a small range as the values of $\alpha$ and $\beta$ continue to increase. This shows that $\alpha$ and $\beta$ can balance the information between the intramanifold and intermanifold structures in HSI data. To achieve good classification performance, we selected $n_w$, $n_b$, $\alpha$ and $\beta$ as 28, 4, 0.5 and 0.3, respectively.
To verify the effectiveness of the proposed algorithm, we randomly selected $n_i$ ($n_i$ = 5, 10, 20, 30, 40) samples from each class for training and the remaining samples for testing. The average OAs with STD are given in Table 5.
As can be seen from Table 5, the OAs of each method improve when more samples are used for training. MFMDA achieves better results than the other algorithms in most cases, because it can increase the margins between different classes, so that discriminant features are obtained for classification.
To compare the classification performance of the various DR methods, we randomly selected 1% of the data in each class for training, and the remaining data were used as test samples. As shown in Table 5, the LBP features and stacked features achieve better experimental results than the spectral features, so the stacked features were chosen for comparison with the MFMDA method. Table 6 gives the classification accuracies of the different methods, and Figure 12 shows the corresponding classification maps.
As shown in Table 6, the proposed method obtained the best classification results in most classes, especially for Asphalt, Gravel, Bare Soil, Bitumen and Bricks. The reason is that the MFMDA algorithm effectively fuses the multiple features by compacting the spectral features and LBP features from the same class in the low-dimensional space. As displayed in Figure 12, the MFMDA algorithm has fewer misclassified points, and its classification map is smoother than those of the other methods.

4.7. Discussion

The experiments on three HSI data sets reveal some interesting points.
  • As shown in Table 1, Table 3 and Table 5, the classification performance of the simply stacked features is even worse than that of the LBP features in most cases, because simply stacked spatial and spectral features may yield redundant information and even lead to the curse-of-dimensionality.
  • From the experimental results, it is obvious that DR methods applied to LBP features or spectral features usually perform better than DR methods applied to the simply stacked features. This may be because performing dimensionality reduction directly on the simply stacked features cannot reveal the manifold structure of the different features in HSI, which restricts their discriminant capability for classification.
  • The proposed MFMDA algorithm is superior to the other DR methods under different training conditions. The reason is that MFMDA constructs the intrinsic and penalty graphs of the spectral features and LBP features to discover the manifold structure of the spatial and spectral features, and then learns a low-dimensional embedding space from the original spectral features as well as the LBP features, compacting the intramanifold samples while separating the intermanifold samples. As a result, the spatial-spectral embedded features possess stronger discriminating ability for HSI classification.

5. Conclusions

Traditional methods explore only a single feature or simply stacked features of a hyperspectral image, which restricts their discriminant capability for classification. In this paper, we proposed a new dimensionality reduction method termed MFMDA to couple DR and the fusion of the spectral and textural features of HSI data. MFMDA first employs the LBP operator to extract textural features that encode the spatial information in HSI. Then, within the GE framework, the intrinsic and penalty graphs of the LBP and spectral features are constructed to explore the discriminant manifold structure in the spatial and spectral domains, respectively. After that, a new spatial-spectral DR model is built to extract discriminant spatial-spectral combined features which not only preserve the similarity relationship between spectral features and LBP features but also possess strong discriminating ability in the low-dimensional embedding space. Experiments on the Indian Pines, Heihe and PaviaU hyperspectral data sets demonstrate that the proposed MFMDA method can significantly improve classification performance and produce smoother classification maps than some state-of-the-art methods; with few training samples, the classification accuracy reaches 95.43%, 97.19% and 96.60%, respectively. In the future, we will conduct a more detailed investigation of other possible features to further improve the performance of MFMDA.

Author Contributions

H.H. contributed to mathematical modeling, experiment analysis and revised the paper. Z.L. was primarily responsible for experimental design and completed the comparison with other methods. Y.P. provided important suggestions for improving the paper.

Funding

This work was supported by the National Science Foundation of China under Grant 41371338, the Basic and Frontier Research Programmes of Chongqing under Grant cstc2018jcyjAX0093, and the graduate research and innovation foundation of Chongqing under Grant CYS18035.

Acknowledgments

The authors would like to thank the anonymous reviewers and associate editor for their valuable comments and suggestions to improve the quality of the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Song, W.W.; Li, S.T.; Fang, L.Y.; Lu, T. Hyperspectral Image Classification With Deep Feature Fusion Network. IEEE Trans. Geosci. Remote Sens. 2016, 54, 4544–4554. [Google Scholar] [CrossRef]
  2. Zhong, Y.F.; Wang, X.Y.; Xu, Y.; Wang, S.Y.; Jia, T.Y.; Hu, X.; Zhao, J.; Wei, L.F.; Zhang, L.P. Mini-UAV-Borne Hyperspectral Remote Sensing: From Observation and Processing to Applications. IEEE Geosci. Remote Sens. Mag. 2018, 6, 46–62. [Google Scholar] [CrossRef]
  3. Chen, Y.S.; Jiang, H.L.; Li, C.Y.; Jia, X.P.; Ghamisi, P. Deep Feature Extraction and Classification of Hyperspectral Images Based on Convolutional Neural Networks. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6232–6251. [Google Scholar] [CrossRef]
  4. Huang, H.; Duan, Y.L.; Shi, G.Y.; Lv, Z.Y. Fusion of Weighted Mean Reconstruction and SVMCK for Hyperspectral Image Classification. IEEE Access 2018, 6, 15224–15235. [Google Scholar] [CrossRef]
  5. Sun, W.W.; Yang, G.; Wu, K.; Li, W.Y.; Zhang, D.F. Pure endmember extraction using robust kernel archetypoid analysis for hyperspectral imagery. ISPRS J. Photogramm. Remote Sens. 2017, 131, 147–159. [Google Scholar] [CrossRef]
  6. Jiao, C.Z.; Chen, C.; McGarvey, R.G.; Bohlman, S.; Jiao, L.C.; Zare, A. Multiple Instance Hybrid Estimator for Hyperspectral Target Characterization and Sub-pixel Target Detection. J. Photogramm. Remote Sens. 2018, 146, 235–250. [Google Scholar] [CrossRef]
  7. Dian, R.W.; Li, S.T.; Guo, A.J.; Fang, L.Y. Deep Hyperspectral Image Sharpening. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 5345–5355. [Google Scholar] [CrossRef] [PubMed]
  8. Wang, X.Y.; Zhong, Y.F.; Xu, Y.; Zhang, L.P.; Xu, Y.Y. Saliency-Based Endmember Detection for Hyperspectral Imagery. IEEE Trans. Geosci. Remote Sens. 2018, 56, 3667–3680. [Google Scholar] [CrossRef]
  9. Wang, Q.; Lin, J.Z.; Yuan, Y. Salient Band Selection for Hyperspectral Image Classification via Manifold Ranking. IEEE Trans. Neural Netw. Learn. Syst. 2016, 27, 1279–1289. [Google Scholar] [CrossRef] [PubMed]
  10. Qian, Y.T.; Xiong, F.C.; Zeng, S.; Zhou, J.; Tang, Y.Y. Matrix-Vector Nonnegative Tensor Factorization for Blind Unmixing of Hyperspectral Imagery. IEEE Trans. Geosci. Remote Sens. 2017, 55, 1776–1792. [Google Scholar] [CrossRef]
  11. Luo, F.L.; Huang, H.; Liu, J.M.; Ma, Z.Z. Fusion of Graph Embedding and Sparse Representation for Feature Extraction and Classification of Hyperspectral Imagery. Photogramm. Eng. Remote Sens. 2017, 83, 37–46. [Google Scholar] [CrossRef]
  12. Kang, X.D.; Duan, P.H.; Li, S.T.; Benediktsson, J.A. Decolorization-Based Hyperspectral Image Visualization. IEEE Trans. Geosci. Remote Sens. 2018, 56, 4346–4360. [Google Scholar] [CrossRef]
  13. Ke, W.; Xu, G.; Zhang, Y.X.; Du, B. Hyperspectral image target detection via integrated background suppression with adaptive weight selection. Neurocomputing 2017, 315, 59–67. [Google Scholar]
  14. Xu, Y.H.; Zhang, L.P.; Du, B.; Zhang, F. Spectral-Spatial Unified Networks for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2018, 56, 5893–5909. [Google Scholar] [CrossRef]
  15. Li, S.T.; Hao, Q.B.; Kang, X.D.; Benediktsson, J.A. Gaussian Pyramid Based Multiscale Feature Fusion for Hyperspectral Image Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 3312–3324. [Google Scholar] [CrossRef]
  16. Peng, J.T.; Li, L.Q.; Tang, Y.Y. Maximum Likelihood Estimation-Based Joint Sparse Representation for the Classification of Hyperspectral Remote Sensing Images. IEEE Trans. Neural Netw. Learn. Syst. 2018, 1–13. [Google Scholar] [CrossRef]
  17. Wang, Z.M.; Du, B.; Zhang, L.F.; Zhang, L.P.; Jia, X.P. A Novel Semisupervised Active-Learning Algorithm for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3071–3083. [Google Scholar] [CrossRef]
  18. Su, H.J.; Zhao, B.; Du, Q.; Sheng, Y.H. Tangent Distance-Based Collaborative Representation for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1236–1240. [Google Scholar] [CrossRef]
  19. Zhang, L.F.; Zhang, L.P.; Tao, D.C.; Huang, X. On Combining Multiple Features for Hyperspectral Remote Sensing Image Classification. IEEE Trans. Geosci. Remote Sens. 2012, 50, 879–893. [Google Scholar] [CrossRef]
  20. Liao, W.Z.; Mura, M.D.; Chanussot, J.; Pizurica, A. Fusion of spectral and spatial information for classification of hyperspectral remote sensed imagery by local graph. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 9, 583–594. [Google Scholar] [CrossRef]
  21. Su, H.J.; Zhao, B.; Du, Q.; Du, P.J.; Xue, Z.H. Multifeature Dictionary Learning for Collaborative Representation Classification of Hyperspectral Imagery. IEEE Trans. Geosci. Remote Sens. 2018, 56, 2467–2484. [Google Scholar] [CrossRef]
  22. Fang, L.Y.; Li, S.T.; Duan, W.H.; Ren, J.C.; Benediktsson, J.A. Classification of Hyperspectral Images by Exploiting Spectral-Spatial Information of Superpixel via Multiple Kernels. IEEE Trans. Geosci. Remote Sens. 2015, 53, 6663–6674. [Google Scholar] [CrossRef]
  23. Liang, M.M.; Jiao, L.C.; Yang, S.Y.; Liu, F.; Hou, B.; Chen, H. Deep Multiscale Spectral-Spatial Feature Fusion for Hyperspectral Images Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 2911–2924. [Google Scholar] [CrossRef]
  24. Chen, C.; Li, W.; Su, H.J.; Liu, K. Spectral-Spatial Classification of Hyperspectral Image Based on Kernel Extreme Learning Machine. Remote Sens. 2014, 6, 5795–5814. [Google Scholar] [CrossRef]
  25. Gu, Y.F.; Liu, T.Z.; Jia, X.P.; Benediktsson, J.A.; Chanussot, J. Nonlinear Multiple Kernel Learning With Multiple-Structure-Element Extended Morphological Profiles for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2016, 54, 3235–3247. [Google Scholar] [CrossRef]
  26. Zhao, W.Z.; Du, S.H. Spectral-Spatial Feature Extraction for Hyperspectral Image Classification: A Dimension Reduction and Deep Learning Approach. IEEE Trans. Geosci. Remote Sens. 2016, 54, 4544–4554. [Google Scholar] [CrossRef]
  27. Luo, F.L.; Du, B.; Zhang, L.P.; Zhang, L.F.; Tao, D.C. Feature Learning Using Spatial-Spectral Hypergraph Discriminant Analysis for Hyperspectral Image. IEEE Trans. Cybern. 2018. [Google Scholar] [CrossRef]
  28. Zhang, X.R.; Gao, Z.Y.; Jiao, L.C.; Zhou, H.Y. Multifeature Hyperspectral Image Classification with Local and Nonlocal Spatial Information via Markov Random Field in Semantic Space. IEEE Trans. Geosci. Remote Sens. 2018, 56, 1409–1424. [Google Scholar] [CrossRef]
  29. Li, W.; Chen, C.; Su, H.J.; Du, Q. Local Binary Patterns and Extreme Learning Machine for Hyperspectral Imagery Classification. IEEE Trans. Geosci. Remote Sens. 2015, 55, 3681–3693. [Google Scholar] [CrossRef]
  30. Mauro, M.D.; Benediktsson, J.A.; Waske, B.; Bruzzone, L. Morphological Attribute Profiles for the Analysis of Very High Resolution Images. IEEE Trans. Geosci. Remote Sens. 2010, 48, 3747–3762. [Google Scholar]
  31. Li, J.; Marpu, P.R.; Plaza, A.; Bioucas-Dias, J.M.; Benediktsson, J.A. Generalized Composite Kernel Framework for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2013, 51, 4816–4839. [Google Scholar] [CrossRef]
  32. Li, J.; Huang, X.; Gamba, P.; Bioucas-Dias, J.M.; Zhang, L.P.; Benediktsson, J.A.; Plaza, A. Multiple Feature Learning for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2015, 53, 1592–1606. [Google Scholar] [CrossRef]
  33. Song, C.Y.; Yang, F.J.; Li, P.J. Rotation invariant texture measured by local binary pattern for remote sensing image classification. In Proceedings of the 2010 Second International Workshop on Education Technology and Computer Science, Wuhan, China, 6–7 March 2010; Volume 3, pp. 3–6. [Google Scholar]
  34. Zhong, Y.F.; Ma, A.; Ong, Y.S.; Zhu, Z.X.; Zhang, L.P. Computational intelligence in optical remote sensing image processing. Appl. Soft Comput. 2018, 64, 75–93. [Google Scholar] [CrossRef]
  35. Xu, J.; Yang, G.; Yin, Y.F.; Man, H.; He, H.B. Sparse Representation Based Classification with Structure Preserving Dimension Reduction. Cogn. Comput. 2014, 6, 608–621. [Google Scholar] [CrossRef]
  36. Wang, J.; He, H.B.; Prokhorov, D.V. A Folded Neural Network Autoencoder for Dimensionality Reduction. Proc. Comput. Sci. 2012, 13, 120–127. [Google Scholar] [CrossRef]
  37. Zhou, Y.C.; Peng, J.T.; Chen, C.L.P. Dimension Reduction Using Spatial and Spectral Regularized Local Discriminant Embedding for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2015, 53, 1082–1095. [Google Scholar] [CrossRef]
  38. Zheng, X.T.; Yuan, Y.; Lu, X.Q. Dimensionality Reduction by Spatial-Spectral Preservation in Selected Bands. IEEE Trans. Geosci. Remote Sens. 2017, 55, 5185–5197. [Google Scholar] [CrossRef]
  39. Bonifazi, G.; Capobianco, G.; Serranti, S. Asbestos containing materials detection and classification by the use of hyperspectral imaging. J. Hazard. Mater. 2018, 13, 981–993. [Google Scholar] [CrossRef] [PubMed]
  40. Huang, X.Y.; Zhang, B.; Qiao, H.; Nie, X.L. Local Discriminant Canonical Correlation Analysis for Supervised PolSAR Image Classification. IEEE Geosci. Remote Sens. Lett. 2017, 14, 2102–2106. [Google Scholar] [CrossRef]
  41. Xu, X.; Huang, Z.H.; Zuo, L.; He, H.B. Manifold-based Reinforcement Learning via Locally Linear Reconstruction. IEEE Trans. Neural Netw. Learn. Syst. 2012, 28, 934–947. [Google Scholar] [CrossRef] [PubMed]
  42. Li, W.; Zhang, L.P.; Zhang, L.F.; Du, B. GPU Parallel Implementation of Isometric Mapping for Hyperspectral Classification. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1532–1539. [Google Scholar] [CrossRef]
  43. Xu, X.; Yang, H.Y.; Lian, C.Q.; Liu, J.H. Self-learning control using dual heuristic programming with global Laplacian eigenmaps. IEEE Trans. Ind. Electron. 2017, 64, 9517–9526. [Google Scholar] [CrossRef]
  44. Deng, Y.J.; Li, H.C.; Pan, L.; Shao, L.Y.; Du, Q.; Emery, W.J. Modified Tensor Locality Preserving Projection for Dimensionality Reduction of Hyperspectral Images. IEEE Geosci. Remote Sens. Lett. 2018, 15, 277–281. [Google Scholar] [CrossRef]
  45. Zhang, L.L.; Zhao, C.H. Sparsity divergence index based on locally linear embedding for hyperspectral anomaly detection. J. Appl. Remote Sens. 2016, 10, 025026. [Google Scholar] [CrossRef]
  46. Lu, G.F.; Jin, Z.; Zou, J. Face recognition using discriminant sparsity neighborhood preserving embedding. Knowl. Based Syst. 2012, 31, 119–127. [Google Scholar] [CrossRef]
  47. Wang, J.; Sun, X.L.; Du, J.X. Local tangent space alignment via nuclear norm regularization for incomplete data. Neurocomputing 2018, 273, 141–151. [Google Scholar] [CrossRef]
  48. Lu, Y.W.; Lai, Z.H.; Fan, Z.Z.; Cui, J.R.; Zhu, Q. Manifold discriminant regression learning for image classification. Neurocomputing 2015, 166, 475–486. [Google Scholar] [CrossRef]
  49. Yu, H.Y.; Gao, L.R.; Li, W.; Du, Q.; Zhang, B. Locality sensitive discriminant analysis for group sparse representation-based hyperspectral imagery classification. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1358–1362. [Google Scholar] [CrossRef]
  50. Jiang, J.J.; Hu, R.M.; Wang, Z.Y.; Cai, Z.H. CDMMA: Coupled discriminant multi-manifold analysis for matching low-resolution face images. Signal Process. 2016, 124, 162–172. [Google Scholar] [CrossRef]
  51. Luo, F.L.; Huang, H.; Duan, Y.L.; Liu, J.M.; Liao, Y.H. Local geometric structure feature for dimensionality reduction of hyperspectral imagery. Remote Sens. 2017, 9, 790. [Google Scholar] [CrossRef]
  52. Fauvel, M.; Benediktsson, J.A.; Chanussot, J.; Sveinsson, J.R. Spectral and Spatial Classification of Hyperspectral Data Using SVMs and Morphological Profiles. IEEE Trans. Geosci. Remote Sens. 2008, 46, 3804–3814. [Google Scholar]
  53. Huo, L.Z.; Tang, P. Spectral and spatial classification of hyperspectral data using SVMs and Gabor textures. Proc. Int. Geosci. Remote Sens. Symp. 2011, 9, 1708–1711. [Google Scholar]
  54. Huang, K.S.; Li, S.T.; Kang, X.D.; Fang, L.Y. Spectral-spatial hyperspectral image classification based on KNN. Sens. Imaging 2016, 17, 1. [Google Scholar] [CrossRef]
  55. Xiao, Q.; Wen, J.G. HiWATER: Thermal-Infrared Hyperspectral Radiometer (4th, July, 2012). Heihe Plan Sci. Data Center 2013. [Google Scholar] [CrossRef]
  56. Xue, Z.H.; Su, H.J.; Du, P.J. Sparse graph regularization for robust crop mapping using hyperspectral remotely sensed imagery: A case study in Heihe Zhangye oasis. In Proceedings of the 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016; pp. 779–782. [Google Scholar]
  57. Chang, C.C.; Lin, C.J. LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol. 2011, 2, 27:1–27:27. [Google Scholar] [CrossRef]
  58. Chang, C.C.; Lin, C.J. Perceptual Image Hashing Using Latent Low-Rank Representation and Uniform LBP. Appl. Sci. 2018, 8, 317. [Google Scholar]
Figure 1. The procedure of the local binary patterns (LBP) operator on the PaviaU image.
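As a rough aid to reproduction of the textural features summarized in Figure 1, the Python sketch below implements the basic radius-1, 8-neighbor LBP coding step on a single gray-scale band. It is a generic illustration under our own naming (the function lbp_8_neighbors is hypothetical), not the authors' exact implementation; in HSI practice the resulting codes are typically pooled into local histograms per band to form the per-pixel texture vector.

```python
import numpy as np

def lbp_8_neighbors(band):
    """Radius-1, 8-neighbor LBP code for one gray-scale band (generic sketch)."""
    band = np.asarray(band, dtype=float)
    padded = np.pad(band, 1, mode='edge')            # replicate borders
    center = padded[1:-1, 1:-1]
    h, w = center.shape
    codes = np.zeros((h, w), dtype=int)
    # 8 neighbors enumerated clockwise from the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        codes += (neighbor >= center) * (1 << bit)   # set bit where neighbor >= center
    return codes

# Toy usage: LBP codes of a random 5 x 5 "band". For HSI, the operator is
# usually applied band-wise (e.g., to a few principal components) before
# histogram pooling over a spatial neighborhood.
if __name__ == "__main__":
    print(lbp_8_neighbors(np.random.rand(5, 5)))
```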
Figure 2. The flowchart of multi-feature manifold discriminant analysis.
Figure 3. Indian Pines hyperspectral image. (a) HSI in false-color (bands 50, 27 and 17 for RGB); (b) Ground-truth map (please note that the number of samples is given in parentheses).
Figure 4. Heihe hyperspectral image. (a) HSI in false-color (bands 57, 19 and 80 for RGB); (b) Ground-truth map (please note that the number of samples is given in parentheses).
Figure 5. PaviaU hyperspectral image. (a) HSI in false-color (bands 60, 100 and 20 for RGB); (b) Ground-truth map (please note that the number of samples is given in parentheses).
Figure 6. Classification results with different dimensions on the Indian Pines, Heihe and PaviaU data sets. (a) Indian Pines; (b) Heihe; (c) PaviaU.
Figure 7. Parameter analysis experiments of MFMDA on the Indian Pines data set. (a) Classification results of MFMDA with different values of n_w and n_b; (b) Classification results of MFMDA with different values of α and β.
Figure 8. Classification results of different algorithms on the Indian Pines data set. (a) Ground truth; (b) Baseline (93.57%, 0.93); (c) PCA (92.57%, 0.92); (d) LDA (93.43%, 0.92); (e) NPE (92.29%, 0.91); (f) LPP (92.43%, 0.91); (g) MFA (92.78%, 0.92); (h) LGSFA (92.72%, 0.92); (i) MFMDA (95.71%, 0.95). Please note that OA and kappa coefficients are given in parentheses.
Figure 9. Parameter analysis experiments of MFMDA on the Heihe data set. (a) OAs of MFMDA with different values of n_w and n_b; (b) OAs of MFMDA with different values of α and β.
Figure 10. Classification results of different algorithms on the Heihe data set. (a) Ground truth; (b) Baseline (94.72%, 0.93); (c) PCA (94.45%, 0.92); (d) LDA (93.71%, 0.92); (e) NPE (93.09%, 0.91); (f) LPP (93.40%, 0.90); (g) MFA (94.31%, 0.91); (h) LGSFA (93.14%, 0.92); (i) MFMDA (95.64%, 0.94). Please note that OA and kappa coefficients are given in parentheses.
Figure 11. Parameter analysis experiments of MFMDA on the PaviaU data set. (a) Classification results of MFMDA with different values of n_w and n_b; (b) Classification results of MFMDA with different values of α and β.
Figure 12. Classification results of different algorithms on the PaviaU data set. (a) Ground truth; (b) Baseline (91.59%, 0.89); (c) PCA (90.39%, 0.87); (d) LDA (91.96%, 0.89); (e) NPE (90.59%, 0.87); (f) LPP (91.78%, 0.89); (g) MFA (90.52%, 0.87); (h) LGSFA (91.49%, 0.89); (i) MFMDA (96.95%, 0.96). Please note that OA and kappa coefficients are given in parentheses.
Table 1. Classification results using different methods with different classifiers for the Indian Pines data set [Overall Accuracy ± Std (%)].
Features            Algorithm   n_i = 5          n_i = 10         n_i = 20        n_i = 30        n_i = 40
Spectral Features   Baseline    42.21 ± 4.63     53.72 ± 3.97     64.05 ± 2.03    69.72 ± 1.25    69.78 ± 1.58
Spectral Features   PCA         42.00 ± 5.56     53.35 ± 4.70     64.04 ± 1.73    68.07 ± 1.05    68.72 ± 1.59
Spectral Features   LDA         40.27 ± 4.46     40.29 ± 1.76     52.93 ± 1.44    61.00 ± 0.84    63.51 ± 1.23
Spectral Features   NPE         34.25 ± 6.33     48.49 ± 3.36     60.61 ± 1.45    65.82 ± 1.76    67.22 ± 1.63
Spectral Features   LPP         35.40 ± 5.40     47.41 ± 2.77     60.14 ± 2.37    66.64 ± 1.82    69.10 ± 1.51
Spectral Features   MFA         42.98 ± 4.78     49.66 ± 1.71     60.38 ± 2.59    64.09 ± 1.87    66.16 ± 0.89
Spectral Features   LGSFA       41.26 ± 5.35     49.57 ± 3.00     60.95 ± 1.91    66.40 ± 0.65    68.81 ± 1.36
LBP Features        Baseline    68.12 ± 5.00     79.07 ± 2.97     87.58 ± 1.66    90.36 ± 2.82    93.54 ± 1.53
LBP Features        PCA         65.72 ± 6.98     75.50 ± 4.09     83.89 ± 2.15    86.37 ± 3.40    89.80 ± 1.35
LBP Features        LDA         71.30 ± 3.75     79.69 ± 3.24     87.62 ± 1.78    91.02 ± 2.04    93.63 ± 1.25
LBP Features        NPE         66.71 ± 4.61     75.06 ± 4.32     84.60 ± 1.95    86.55 ± 1.89    89.87 ± 1.24
LBP Features        LPP         65.88 ± 2.80     75.73 ± 4.51     85.17 ± 1.97    87.71 ± 1.55    89.77 ± 0.85
LBP Features        MFA         65.60 ± 7.20     75.69 ± 4.70     84.68 ± 1.63    85.73 ± 2.55    90.34 ± 1.24
LBP Features        LGSFA       65.59 ± 2.11     74.90 ± 5.18     84.92 ± 1.70    87.86 ± 1.67    91.34 ± 1.34
Stacked Features    Baseline    56.36 ± 5.70     73.25 ± 2.15     82.74 ± 2.33    89.46 ± 1.49    90.83 ± 1.62
Stacked Features    PCA         54.22 ± 5.04     68.83 ± 2.83     76.39 ± 1.66    81.34 ± 1.30    83.71 ± 1.25
Stacked Features    LDA         70.06 ± 4.65     84.37 ± 2.34     89.58 ± 2.52    92.93 ± 0.59    94.24 ± 1.26
Stacked Features    NPE         56.02 ± 6.91     66.73 ± 3.68     74.37 ± 2.51    76.28 ± 1.71    78.62 ± 0.91
Stacked Features    LPP         58.08 ± 4.73     70.81 ± 2.49     74.43 ± 2.54    78.59 ± 1.36    79.30 ± 0.73
Stacked Features    MFA         53.94 ± 2.49     69.44 ± 2.47     77.15 ± 2.12    80.57 ± 2.70    82.94 ± 1.83
Stacked Features    LGSFA       63.83 ± 5.17     73.35 ± 1.03     79.10 ± 0.37    83.15 ± 1.77    84.57 ± 0.57
–                   MFMDA       74.01 ± 5.72     85.74 ± 2.42     91.79 ± 2.39    94.61 ± 1.15    96.19 ± 0.89
Notes: The bold numbers represent the maximum OA of the column.
Table 2. Classification results for each class via different DR methods with the SVM classifier on the Indian Pines data set (%).
Class   Train   Test    Baseline   PCA      LDA      NPE      LPP      MFA      LGSFA    MFMDA
1       10      36      99.72      99.17    98.89    99.44    99.72    99.44    99.44    96.67
2       43      1385    93.91      92.94    93.86    92.25    93.23    94.09    93.10    93.33
3       25      805     94.31      92.42    93.74    92.01    91.69    91.66    92.46    92.57
4       10      227     95.37      94.85    94.98    92.73    93.66    94.98    94.23    93.17
5       14      469     84.54      82.41    83.22    83.71    84.86    82.90    84.61    90.00
6       22      708     93.29      92.97    92.19    92.13    91.44    92.20    91.91    98.18
7       10      18      100        100      100      100      100      100      100      98.89
8       14      464     98.53      98.38    98.41    97.91    98.38    97.80    97.95    99.74
9       10      10      100        100      100      100      100      100      99.00    100
10      29      943     89.72      88.23    90.38    88.29    87.41    89.00    88.09    91.56
11      74      2381    95.46      94.93    95.72    94.51    94.49    95.08    95.09    97.96
12      18      575     87.84      83.34    88.52    85.63    85.76    86.73    87.06    92.28
13      10      195     96.36      95.74    95.85    94.67    95.49    94.72    95.85    98.15
14      38      1227    96.85      96.27    97.42    95.98    96.09    96.10    95.75    99.62
15      12      374     87.86      89.12    84.47    87.54    88.34    86.47    87.99    96.52
16      10      83      95.78      96.27    92.89    96.39    96.39    96.27    95.42    99.52
OA      –       –       93.57      92.57    93.43    92.29    92.43    92.78    92.72    95.71
AA      –       –       94.35      93.56    93.78    93.33    93.56    93.59    93.62    96.14
Kappa   –       –       0.93       0.92     0.92     0.91     0.91     0.92     0.92     0.95
Notes: The bold numbers represent the maximum OA of the row.
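The last three rows of Tables 2, 4 and 6 report overall accuracy (OA), average accuracy (AA) and the kappa coefficient. As a reading aid, the sketch below shows how these three figures are obtained from a confusion matrix; it is generic code, not taken from the paper, and the function name classification_scores is our own.

```python
import numpy as np

def classification_scores(y_true, y_pred, n_classes):
    """OA, AA and kappa from reference and predicted labels (0 .. n_classes-1)."""
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)    # rows: reference, cols: prediction
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    total = cm.sum()
    oa = np.trace(cm) / total                                # overall accuracy
    per_class = np.diag(cm) / cm.sum(axis=1)                 # class-wise accuracy
    aa = per_class.mean()                                    # average accuracy
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total**2  # expected chance agreement
    kappa = (oa - pe) / (1 - pe)
    return oa, aa, kappa

# Toy usage with three classes.
if __name__ == "__main__":
    y_true = np.array([0, 0, 1, 1, 2, 2, 2])
    y_pred = np.array([0, 1, 1, 1, 2, 2, 0])
    print(classification_scores(y_true, y_pred, 3))
```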
Table 3. Classification results using different methods with different classifiers for the Heihe data set [Overall Accuracy ± Std (%)].
Features            Algorithm   n_i = 5          n_i = 10         n_i = 20        n_i = 30        n_i = 40
Spectral Features   Baseline    80.88 ± 3.47     85.48 ± 2.93     90.17 ± 0.96    91.04 ± 1.00    91.95 ± 1.15
Spectral Features   PCA         80.88 ± 3.46     85.47 ± 2.93     90.17 ± 0.96    91.03 ± 1.00    91.95 ± 1.15
Spectral Features   LDA         74.72 ± 5.42     80.12 ± 3.43     89.85 ± 1.94    92.07 ± 0.77    92.99 ± 0.96
Spectral Features   NPE         78.33 ± 4.65     83.39 ± 3.29     88.70 ± 1.50    90.16 ± 1.36    91.36 ± 1.21
Spectral Features   LPP         69.60 ± 5.10     72.35 ± 11.48    91.62 ± 1.44    93.02 ± 0.84    93.34 ± 0.88
Spectral Features   MFA         83.27 ± 3.48     88.49 ± 3.93     92.26 ± 1.34    92.77 ± 1.01    93.14 ± 0.67
Spectral Features   LGSFA       80.00 ± 4.15     87.93 ± 1.87     90.47 ± 1.65    91.86 ± 0.77    93.42 ± 0.80
LBP Features        Baseline    75.82 ± 6.15     84.21 ± 2.59     89.95 ± 1.75    92.52 ± 1.08    93.58 ± 0.69
LBP Features        PCA         75.79 ± 6.17     83.19 ± 3.16     89.02 ± 2.20    92.50 ± 1.05    93.74 ± 1.27
LBP Features        LDA         77.91 ± 5.19     83.35 ± 4.33     89.95 ± 2.16    92.86 ± 0.91    93.86 ± 0.74
LBP Features        NPE         75.18 ± 6.35     84.65 ± 3.17     91.38 ± 1.35    93.22 ± 0.89    94.62 ± 0.78
LBP Features        LPP         55.31 ± 23.12    68.39 ± 12.66    87.84 ± 4.25    93.94 ± 1.00    94.57 ± 0.87
LBP Features        MFA         73.51 ± 6.93     83.35 ± 3.93     91.18 ± 1.52    94.62 ± 0.65    95.11 ± 0.74
LBP Features        LGSFA       76.39 ± 6.42     82.99 ± 4.48     90.04 ± 1.96    93.05 ± 0.65    94.32 ± 0.61
Stacked Features    Baseline    80.91 ± 3.44     85.68 ± 2.99     90.21 ± 0.97    91.35 ± 1.15    91.87 ± 1.17
Stacked Features    PCA         80.90 ± 3.44     85.67 ± 2.99     90.20 ± 0.97    91.38 ± 1.10    92.04 ± 1.15
Stacked Features    LDA         79.30 ± 3.29     90.75 ± 2.13     94.25 ± 0.80    94.93 ± 1.01    95.62 ± 0.54
Stacked Features    NPE         80.22 ± 3.87     88.19 ± 1.46     89.44 ± 1.45    90.78 ± 1.22    91.90 ± 1.02
Stacked Features    LPP         80.30 ± 2.64     87.52 ± 3.18     93.32 ± 1.53    94.16 ± 0.99    94.44 ± 0.65
Stacked Features    MFA         84.04 ± 3.22     90.05 ± 3.00     92.78 ± 1.47    93.96 ± 1.05    94.58 ± 0.57
Stacked Features    LGSFA       82.72 ± 3.36     91.30 ± 1.84     94.46 ± 1.43    95.68 ± 1.20    96.34 ± 0.80
–                   MFMDA       89.32 ± 3.52     91.87 ± 2.53     95.92 ± 0.82    96.82 ± 0.54    97.41 ± 0.48
Notes: The bold numbers represent the maximum OA of the column.
Table 4. Classification results for each class via different DR methods with the SVM classifier on the Heihe data set (%).
Class   Train   Test     Baseline   PCA      LDA      NPE      LPP      MFA      LGSFA    MFMDA
1       42      41029    96.23      95.87    98.49    97.33    98.07    97.64    98.56    97.86
2       29      28557    97.86      97.53    98.74    96.77    96.89    98.06    97.93    81.23
3       21      20334    95.46      95.26    97.24    95.27    95.77    96.54    95.45    95.66
4       10      7598     81.20      80.72    59.71    63.93    67.57    71.42    59.28    81.00
5       10      3752     84.81      84.21    72.62    84.03    73.25    78.11    72.79    83.31
6       10      1665     84.25      87.98    61.65    61.77    68.06    74.43    61.37    99.53
7       10      975      88.76      88.18    73.56    81.12    69.66    76.39    72.91    79.27
8       10      865      90.97      90.98    92.91    88.31    89.99    89.49    93.57    94.45
OA      –       –        94.72      94.45    93.71    93.09    93.40    94.31    93.14    95.64
AA      –       –        89.94      90.09    81.87    83.57    82.41    85.26    81.48    89.04
Kappa   –       –        0.93       0.92     0.92     0.91     0.90     0.91     0.92     0.94
Notes: The bold numbers represent the maximum OA of the row.
Table 5. Classification results using different methods with different classifiers for the PaviaU data set [Overall Accuracy ± Std (%)].
Features            Algorithm   n_i = 5          n_i = 10         n_i = 20        n_i = 30        n_i = 40
Spectral Features   Baseline    57.16 ± 9.94     69.72 ± 4.19     78.07 ± 2.88    81.11 ± 3.91    82.97 ± 2.33
Spectral Features   PCA         57.16 ± 9.94     69.72 ± 4.19     78.01 ± 2.98    81.11 ± 3.91    83.13 ± 2.35
Spectral Features   LDA         53.09 ± 5.85     57.87 ± 3.84     65.79 ± 3.21    70.08 ± 3.11    74.14 ± 2.15
Spectral Features   NPE         57.64 ± 9.81     67.02 ± 4.68     73.50 ± 4.81    79.98 ± 3.71    81.54 ± 2.47
Spectral Features   LPP         49.70 ± 5.35     50.66 ± 5.65     66.37 ± 2.23    73.57 ± 1.53    75.84 ± 2.25
Spectral Features   MFA         62.24 ± 5.56     75.16 ± 3.27     77.51 ± 2.17    80.97 ± 2.70    82.26 ± 3.68
Spectral Features   LGSFA       57.38 ± 4.50     62.91 ± 3.32     69.37 ± 3.36    71.11 ± 2.37    75.97 ± 1.76
LBP Features        Baseline    52.05 ± 8.27     72.01 ± 5.50     81.38 ± 2.22    86.32 ± 1.91    88.22 ± 1.21
LBP Features        PCA         50.44 ± 7.55     67.93 ± 6.81     78.23 ± 3.68    81.71 ± 7.89    85.68 ± 2.42
LBP Features        LDA         60.31 ± 6.87     76.02 ± 2.59     82.54 ± 1.32    86.70 ± 1.27    88.88 ± 0.56
LBP Features        NPE         55.73 ± 6.77     73.68 ± 5.81     75.50 ± 3.50    85.69 ± 2.68    86.45 ± 1.41
LBP Features        LPP         43.22 ± 11.11    55.85 ± 13.49    76.32 ± 4.19    84.27 ± 3.10    85.03 ± 2.62
LBP Features        MFA         57.35 ± 7.51     71.79 ± 4.28     81.39 ± 1.93    85.63 ± 2.69    86.86 ± 1.43
LBP Features        LGSFA       63.02 ± 6.49     73.59 ± 3.66     81.54 ± 2.42    85.81 ± 2.66    86.75 ± 2.16
Stacked Features    Baseline    59.40 ± 5.75     68.29 ± 5.54     80.06 ± 4.65    83.93 ± 2.53    85.73 ± 2.31
Stacked Features    PCA         57.52 ± 10.11    70.67 ± 3.63     77.52 ± 6.12    82.69 ± 3.33    85.63 ± 1.87
Stacked Features    LDA         59.89 ± 5.94     76.62 ± 2.74     81.85 ± 2.89    85.14 ± 3.80    87.28 ± 2.22
Stacked Features    NPE         57.50 ± 10.99    75.35 ± 2.34     77.89 ± 3.47    84.53 ± 3.87    85.62 ± 2.73
Stacked Features    LPP         64.96 ± 6.68     71.29 ± 3.56     78.12 ± 3.83    86.07 ± 1.68    87.86 ± 1.92
Stacked Features    MFA         61.57 ± 8.86     74.61 ± 4.66     79.07 ± 2.08    81.58 ± 2.30    82.34 ± 2.66
Stacked Features    LGSFA       64.45 ± 4.72     78.79 ± 2.68     88.03 ± 2.26    92.94 ± 2.02    94.15 ± 1.42
–                   MFMDA       78.70 ± 2.70     84.60 ± 2.34     92.66 ± 2.32    95.10 ± 1.84    96.09 ± 0.97
Notes: The bold numbers represent the maximum OA of the column.
Table 6. Classification results for each class via different DR methods with the SVM classifier on the PaviaU data set (%).
Class   Train   Test     Baseline   PCA      LDA      NPE      LPP      MFA      LGSFA    MFMDA
1       10      6565     89.98      89.77    92.95    89.24    93.72    90.37    92.43    97.90
2       186     18463    97.93      97.41    98.04    97.63    98.03    97.46    97.27    99.79
3       21      2078     72.28      69.59    75.81    68.50    73.39    69.82    77.02    94.02
4       31      3033     85.50      85.32    89.52    85.38    87.94    86.74    89.10    84.89
5       13      1332     98.97      98.84    99.65    98.74    99.29    99.41    99.64    99.99
6       50      4979     85.45      80.95    84.26    83.02    80.26    80.16    80.90    99.14
7       13      1317     81.97      76.74    67.06    78.02    67.56    74.68    69.35    96.10
8       37      3645     85.48      84.37    86.75    83.81    89.99    84.82    88.13    98.06
9       10      937      99.73      99.68    94.26    99.75    99.86    99.79    99.61    60.91
OA      –       –        91.59      90.39    91.96    90.59    91.78    90.52    91.49    96.95
AA      –       –        88.59      86.96    87.59    87.12    87.78    87.03    88.16    92.31
Kappa   –       –        0.89       0.87     0.89     0.87     0.89     0.87     0.89     0.96
Notes: The bold numbers represent the maximum OA of the row.
