Article

Hyperspectral Image Classification Based on Fusion of Curvature Filter and Domain Transform Recursive Filter

1 College of Information and Communication Engineering, Harbin Engineering University, Harbin 150001, China
2 College of Rail Transit, Guangdong Communication Polytechnic, Guangzhou 510650, China
* Author to whom correspondence should be addressed.
Submission received: 21 February 2019 / Revised: 1 April 2019 / Accepted: 2 April 2019 / Published: 7 April 2019

Abstract

In recent decades, the spatial information of hyperspectral images, obtained by various methods, has become a research hotspot for enhancing classification performance. This work proposes a new classification method based on the fusion of two kinds of spatial information, which are classified by a large margin distribution machine (LDM). First, spatial texture information is extracted from the top principal components of the hyperspectral image by a curvature filter (CF). Second, spatial correlation information of the hyperspectral image is obtained with a domain transform recursive filter (DTRF). Last, the spatial texture information and the correlation information are fused and classified with LDM. Experimental results on hyperspectral image classification demonstrate that the proposed curvature filter and domain transform recursive filter with LDM (CFDTRF-LDM) method is superior to other classification methods.

1. Introduction

Hyperspectral images (HSI), which provide valuable spectral information, have been widely used in remote-sensing applications [1,2,3,4,5,6]. In addition, classification of HSI has drawn lots of attention for its importance in crop monitoring [7], environment monitoring [8], forest monitoring [9], mineral identification [10] and forest mapping [11].
Many scholars around the world have successfully studied various classification methods for HSI, including sparse representation-based techniques [12], Bayesian estimation [13], K-means [14], maximum likelihood [15], multinomial logistic regression [16] and deep learning [17]. More specifically, the Support Vector Machine (SVM) has been fruitfully applied to HSI classification and achieved respectable results [18]. Zhang et al. adopted the idea of maximizing the margin mean and minimizing the margin variance to improve the maximum-margin model of SVM, and proposed the large margin distribution machine (LDM) [19]. In addition, Zhan et al. applied LDM to HSI classification [20].
To improve classification accuracy, many classification methods with spatial information extraction have been successfully investigated. Some scholars have attempted to obtain spatial information by segmentation to improve HSI classification. Tarabalka et al. proposed a classification method based on the construction of a minimum spanning forest from region markers, which were gained from the initial classification results [21]. Ghamisi et al. proposed a classification method based on two segmentation methods, fractional-order Darwinian particle swarm optimization and mean shift segmentation, and classified the integration of these two methods by SVM [22]. Other studies acquired spatial information with morphological profile features. Multiple morphological profiles were proposed to synthesize the spectral-spatial information extracted from multicomponent base images and were interpreted with decision fusion and a sparse classifier based on multinomial logistic regression [23]. The method proposed by Xue et al. for HSI classification was performed via a morphological component analysis-based image separation rationale in sparse representation [24]. Liao et al. applied the morphological profile filter and the domain transform normalized convolution filter (DTNCF) [25] to extract spatial information, which was combined and fed into a support vector machine (SVM), with a two-step optimization in the classification process [26]. Moreover, some scholars attempted to improve classification performance with Markov random fields [27]. For instance, Sun et al. proposed an HSI classification method including a spectral data fidelity term and a spatially adaptive Markov random field prior in the hidden field, based on a maximum a posteriori framework with sparse multinomial logistic regression [28]. Zhang et al. used an extended Markov random field model to combine multiple features with local and nonlocal spatial constraints in the semantic space, with probabilistic SVM for HSI classification [29].
In order to obtain the full spatial features of HSI, many classification methods for spatial information extraction have been investigated. For example, the integration of spectral and spatial context is an effective approach for HSI classification, and many researchers extract spatial information with different filters, such as the guided filter (GDF) [30], the bilateral filter (BF) [31] and the Gabor filter (GF) [32]. Wang et al. suggested a filtering framework named discriminatively guided image filtering, which integrates SVM and linear discriminative analysis via GDF to enhance classification performance [33]. A method of k-nearest neighbor with GDF was presented by Guo et al. to extract spatial information and optimize classification accuracy [34]. Wang et al. proposed a spectral-spatial HSI classification method based on joint BF and graph cut segmentation with the SVM classifier [35]. Sahadevan et al. integrated the spatial texture information obtained with BF into the spectral domain to improve SVM performance [36]. A hyperspectral classification method was proposed based on sparse representation classification of spatial features, extracted by joint BF with the first principal component as the guidance image [37]. Edge-preserving filter (EPF) and principal component analysis (PCA) [38]-based EPF (PCA-EPFs) methods with GDF or BF and a recursive filter were adopted to improve SVM classification performance in [39] and [40], respectively. Moreover, a feature extraction method based on image fusion with multiple subsets of adjacent bands and a recursive filter (IFRF) was developed by Kang et al. to increase the accuracy of HSI classification [41]. In addition, a spectral-spatial Gabor surface feature fusion method with an SVM classifier was developed for HSI classification, in which the magnitude pictures of Gabor features were extracted by a two-dimensional GF [42]. Li et al. projected Gabor features of the hyperspectral image obtained with GF into the kernel-induced space through a composite kernel technique [43]. Chen et al. combined GF with deep convolutional neural networks to mitigate overfitting and increase classification accuracy for HSI [44]. Tu et al. proposed an HSI classification method based on non-local means filtering with maximum probability and SVM, which uses the spatial context information and non-local means filtering of the first principal component to obtain an optimized probability image of the spatial structure [45].
A filter can be used to extract spatial texture features, but it is difficult to obtain complete spatial features with a single filter. In this paper, we first use the curvature filter (CF) to extract spatial texture features [46,47], and then apply DTRF [25] to obtain spatial correlation features, enriching the spatial characteristics for more effective hyperspectral image classification. Finally, LDM is adopted to classify the fusion of the two kinds of spatial information, forming a new classification method that combines the curvature filter and the domain transform recursive filter with LDM (CFDTRF-LDM). The work of this paper can be summarized as follows:
(1)
CF with the minimal projection operator has the desirable properties of low computational cost and fast convergence [46], and can efficiently extract the spatial features of a hyperspectral image. The spatial correlation information obtained by DTRF complements the spatial texture information to improve classification accuracy.
(2)
The effective fusion of the two kinds of spatial information is conducive to LDM classification and outperforms other methods.
The rest of this article is organized as follows. The methodology is presented in Section 2. Section 3 applies three hyperspectral image datasets to test the effectiveness of the proposed method and analyzes the experimental results of CFDTRF-LDM. Finally, conclusions are drawn in Section 4.

2. Methodology

2.1. Classification Method for HSI

LDM improves SVM classification performance by simultaneously maximizing the margin mean and minimizing the margin variance. A training set is defined as $S = \{(a_1, b_1), \ldots, (a_m, b_m)\}$, in which $a_i$ is a training sample labelled by $b_i \in \{+1, -1\}$, $i = 1, 2, \ldots, m$, and $m$ is the number of training samples. SVM predicts the unlabelled data with the hyperplane that maximizes the minimum margin [48], which can be written as follows:
$$g(a) = v^T \varphi(a),$$ (1)
where $v$ is the weight vector of the decision function, $g(a)$ is the linear model, and $\varphi(a)$ is the mapping of $a$ induced by a kernel $k$, such that:
$$k_{ij} = \varphi(a_i)^T \varphi(a_j).$$ (2)
The margin of instance $(a_i, b_i)$ can be formulated as
$$\eta_i = b_i v^T \varphi(a_i), \quad i = 1, 2, \ldots, m.$$ (3)
For the inseparable case, the soft-margin LDM can be expressed as Equation (4):
$$\min_v \; \frac{1}{2} v^T v + \alpha_1 \hat{\eta} - \alpha_2 \bar{\eta} + C \sum_{i=1}^{m} \xi_i \quad \mathrm{s.t.} \;\; b_i v^T \varphi(a_i) \geq 1 - \xi_i, \; \xi_i \geq 0, \; i = 1, 2, \ldots, m,$$ (4)
where $\alpha_1$ and $\alpha_2$ are the parameters trading off the margin variance and the margin mean. The margin mean $\bar{\eta}$ and the margin variance $\hat{\eta}$ can be characterized as Equations (5) and (6), respectively:
$$\bar{\eta} = \frac{1}{m} \sum_{i=1}^{m} b_i v^T \varphi(a_i) = \frac{1}{m} (A b)^T v,$$ (5)
$$\hat{\eta} = \frac{1}{m^2} \sum_{i=1}^{m} \sum_{j=1}^{m} \left( b_i v^T \varphi(a_i) - b_j v^T \varphi(a_j) \right)^2 = \frac{2}{m^2} \left( m v^T A A^T v - v^T A b\, b^T A^T v \right),$$ (6)
where $A = [\varphi(a_1), \ldots, \varphi(a_m)]$ and $b = (b_1, \ldots, b_m)^T$.
Since the hyperplane of LDM maximizes the margin mean and minimizes the margin variance, LDM achieves more effective performance for hyperspectral image classification with a small amount of training data [20,26].
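Since LDM is not part of common toolboxes, the following minimal NumPy sketch illustrates the linear soft-margin objective of Equation (4), optimized by plain subgradient descent; the function names, learning rate and epoch count are illustrative assumptions, and the kernelized dual solver of [19] is more elaborate.

```python
import numpy as np

def ldm_fit(A, b, alpha1=1e-2, alpha2=1e-2, C=1.0, lr=1e-3, epochs=1000):
    """Subgradient-descent sketch of the linear soft-margin LDM of Equation (4).
    A: (d, m) matrix whose columns are the samples phi(a_i); b: (m,) labels in {-1, +1}."""
    d, m = A.shape
    v = np.zeros(d)
    Ab = A @ b                                  # the d-vector A b used in Eqs. (5)-(6)
    for _ in range(epochs):
        margins = b * (A.T @ v)                 # eta_i = b_i v^T a_i
        # gradient of the margin variance (2/m^2)(m v^T A A^T v - v^T A b b^T A^T v)
        g_var = (4.0 / m**2) * (m * (A @ (A.T @ v)) - Ab * (Ab @ v))
        g_mean = -(1.0 / m) * Ab                # gradient of the margin mean (Eq. (5))
        active = margins < 1.0                  # samples violating the soft margin
        g_hinge = -(A[:, active] @ b[active])   # subgradient of the hinge sum
        v -= lr * (v + alpha1 * g_var + alpha2 * g_mean + C * g_hinge)
    return v

def ldm_predict(A_test, v):
    return np.sign(A_test.T @ v)                # decision by the sign of g(a) = v^T a
```

In practice the kernel trick of Equation (2) replaces the explicit feature matrix $A$, but the trade-off among margin mean, margin variance and hinge loss is the same.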

2.2. Spatial Information Extraction

In order to obtain full spatial information, CF and DTRF are used to extract the spatial texture features and the spatial correlation features, respectively; the principles of CF and DTRF are described below.

2.2.1. Curvature Filter

The curvature filter first studies the surfaces corresponding to the curvature and then selects, among all surfaces, the one that best approximates the data. As a distinctive optimization algorithm, the curvature filter minimizes the regularization energy and implicitly uses known differential-geometry surfaces in the filtering process.
A. Optimization of energy functional
The basic idea of the variational regularization method is to define an energy functional for the image processing problem; the smaller the energy, the closer the variable is to the expected result. The model to be optimized has the form
$$E(M) = D(M, I) + \lambda R(M),$$ (7)
where the data-fitting energy $D(M, I)$ measures how well $M$ fits the image data $I$, the regularization energy $R(M)$ formalizes prior knowledge about $M$, and $\lambda$ is a scalar regularization coefficient weighting the contributions of the two energies.
The evolution of the energy functional in the variational model is shown in Figure 1. The data-fitting energy $D(M, I)$ always increases, while the regularization energy $R(M)$ decreases. Since the overall energy $E(M)$ decreases, the regularization energy is the dominant part of the optimization process. Therefore, curvature filtering optimizes the regularization energy: as long as the reduction of the regularization energy is greater than the increase of the data-fitting energy, the decline of the overall energy is guaranteed. The curvature filter thus optimizes the variational model by reducing the curvature regularization energy to a minimum, minimizing the regularization energy through minimization of the curvature from the perspective of differential geometry [46].
B. Domain decomposition
There is a dependency between adjacent pixels, which hinders local minimization of the principal curvature. A domain decomposition algorithm is used to circumvent this problem.
As shown in Figure 2, the discrete domain Ω of image U is decomposed into four subsets: red triangle RT, red circle RC, purple triangle PT and purple circle PC. The advantages of this decomposition are as follows: (1) the dependence of adjacent pixels is eliminated and the filtering efficiency is improved; (2) thanks to their independence, the updated subsets ensure convergence; (3) all tangent planes can be enumerated in a 3 × 3 local window [46].
C. Projection to the tangent plane
Assuming a pixel $x$, constructing the surface amounts to projecting the current pixel value $M(x)$ of the hyperspectral image onto the ideal pixel value $\bar{M}(x)$, which lies on the optimal tangent plane of the adjacent pixels [46]. The relationship is
$$\bar{M} = M + d,$$ (8)
where $d$ is the projection distance.
To find the optimal tangent plane of the neighbourhood of $M(x)$, all possible triangles are enumerated in the 3 × 3 neighbourhood of $x$ (as shown in Figure 3, excluding $x$ as a vertex). Among them, four pass through the red field R, four through the purple field P, and four through the red/purple mixed field RP.
As shown in Figure 4, since the 12 triangular sections share common edges through $x$ and the projection is sufficient, there are only eight distinct projection distances $d_i$: two common edges in R, two common edges in P, and four mixed tangent planes.
D. Minimal Projection Operator (Pg)
According to Euler's theorem,
$$d_i = k_1 \cos^2 \theta_i + k_2 \sin^2 \theta_i,$$ (9)
where $k_1, k_2$ are the principal curvatures and $\theta_i$ is the angle to the principal plane. If the angular samples $\theta_i$ are sufficiently dense within $(-\pi, \pi)$, then for $k_1, k_2 \geq 0$ we have $d_m \approx \min\{k_i\}$.
For the pixel at (i, j), the distance dm can be obtained from the tangent plane with the neighborhood pixels in the 3 × 3 window [46].
$$\begin{cases} d_1 = (M_{i-1,j} + M_{i+1,j})/2 - M_{i,j} \\ d_2 = (M_{i,j-1} + M_{i,j+1})/2 - M_{i,j} \\ d_3 = (M_{i-1,j-1} + M_{i+1,j+1})/2 - M_{i,j} \\ d_4 = (M_{i-1,j+1} + M_{i+1,j-1})/2 - M_{i,j} \\ d_5 = M_{i-1,j} + M_{i,j-1} - M_{i-1,j-1} - M_{i,j} \\ d_6 = M_{i-1,j} + M_{i,j+1} - M_{i-1,j+1} - M_{i,j} \\ d_7 = M_{i,j-1} + M_{i+1,j} - M_{i+1,j-1} - M_{i,j} \\ d_8 = M_{i,j+1} + M_{i+1,j} - M_{i+1,j+1} - M_{i,j} \end{cases}$$ (10)
Therefore, the minimum-absolute-value distance $d_m$ is taken as the minimal projection of $M(x)$ onto $\hat{M}$:
$$d_m = \min \{ |d_i| \}, \quad i = 1, 2, \ldots, 8.$$ (11)
$\hat{M}$ lies on the tangent plane of the neighbourhood:
$$\hat{M}_{i,j} = M_{i,j} + d_m.$$ (12)
E. Gaussian curvature filter
The minimal projection operator is iterated over all pixels of $RT$, $RC$, $PT$ and $PC$, and the Gaussian curvature filter is generated as:
$$O = P_g(M(x)), \quad x \in \{RT, RC, PT, PC\}.$$ (13)
As a distinctive optimization algorithm, the Gaussian curvature filter is an edge-preserving image smoothing algorithm. It assumes that the surface formed by the ideal noise-free image is piecewise developable, with Gaussian curvature zero everywhere. The pixel values are adjusted directly so that the tangent planes of the neighbourhood pixels satisfy this assumption, avoiding explicit computation of the Gaussian curvature. Thus, the image surface is no longer required to be twice differentiable, which allows abrupt edges and corners in the image and ideally protects the edges of the image.
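To make the update concrete, the following NumPy sketch applies Equations (10)–(12) over the four disjoint parity subsets of Figure 2; which parity pair carries which label (RT, RC, PT, PC) and the iteration count are illustrative assumptions.

```python
import numpy as np

def _min_projection(M):
    """Eight projection distances of Equation (10) on the interior pixels,
    reduced to the minimum-absolute-value distance d_m of Equation (11)."""
    c = M[1:-1, 1:-1]
    up, dn = M[:-2, 1:-1], M[2:, 1:-1]
    lf, rt = M[1:-1, :-2], M[1:-1, 2:]
    ul, ur = M[:-2, :-2], M[:-2, 2:]
    dl, dr = M[2:, :-2], M[2:, 2:]
    d = np.stack([(up + dn) / 2 - c, (lf + rt) / 2 - c,
                  (ul + dr) / 2 - c, (ur + dl) / 2 - c,
                  up + lf - ul - c, up + rt - ur - c,
                  lf + dn - dl - c, rt + dn - dr - c])
    idx = np.abs(d).argmin(axis=0)
    return np.take_along_axis(d, idx[None], axis=0)[0]

def gaussian_curvature_filter(M, iters=10):
    """Sketch of Equation (13): sweep the four disjoint parity subsets in turn,
    adding the minimal projection d_m to each pixel (Equation (12))."""
    M = M.astype(float).copy()
    for _ in range(iters):
        for pi, pj in [(0, 0), (1, 1), (0, 1), (1, 0)]:  # the four subsets
            dm = _min_projection(M)
            sel = np.zeros_like(dm, dtype=bool)
            sel[pi::2, pj::2] = True
            M[1:-1, 1:-1][sel] += dm[sel]
    return M
```

Because each subset is updated independently of the other three, the sweep over the four subsets realizes the convergence advantage of the domain decomposition noted above.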
Hyperspectral images contain hundreds of bands, and the large data volume and high correlation between adjacent bands lead to redundant information. In order to obtain more comprehensive spatial information with CF, we first use PCA to reduce the dimensionality of the hyperspectral image. The CF validation test can be found in Section 3.3.

2.2.2. Domain Transform Recursive Filter (DTRF)

DTRF, proposed by Gastal et al. for edge-aware image filtering, converts two-dimensional image filtering into one-dimensional filtering [25]. The recursive filter of DTRF for a hyperspectral image $R$ at the i-th band can be represented as:
$$D_i[n] = (1 - a^d) I[n] + a^d D_i[n-1]$$ (14)
and
$$d = f(y_n) - f(y_{n-1}),$$ (15)
$$f(y_n) = \int_0^{y_n} \left( 1 + \frac{\sigma_s}{\sigma_r} \sum_{k=1}^{c} |R'_k(x)| \right) dx,$$ (16)
$$\sigma_r = 3 \sigma_H,$$ (17)
$$\sigma_{H_t} = \sigma_s \sqrt{3} \, \frac{2^{N-t}}{\sqrt{4^N - 1}},$$ (18)
$$a = e^{-\sqrt{2} / \sigma_H},$$ (19)
where $a$ is the feedback coefficient, $I[n]$ is the input hyperspectral image, $D_i[n-1]$ is the result of the (n−1)-th recursive step, and $d$ is the distance between neighbouring samples $y_n$ and $y_{n-1}$ in the transformed domain $\Omega_w$. The function $f(y_n)$, calculated by integrating the partial derivatives of the hyperspectral band images $R_k$ over the $c$ channels, is an increasing function. Besides, $\sigma_s$ is the spatial standard deviation, $\sigma_r$ is the range standard deviation, $\sigma_{H_t}$ is the value used at the t-th iteration, and $N$ is the total number of iterations.
DTRF has an infinite impulse response with exponential decay. Briefly, as $d$ increases, $a^d$ goes to zero, which stops the propagation chain and indicates that the neighbouring pixels belong to different regions. Equation (14) is an asymmetric causal filter that depends on both the input and the output. To obtain a symmetric filter, the equation is executed twice: first from left to right and then from right to left, or from top to bottom and then from bottom to top [25].
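The following NumPy sketch implements Equations (14)–(19) for a single band, performing the horizontal and vertical causal/anti-causal passes just described; collapsing the c-channel gradient sum to one band and the default parameter values (taken from the optimization in Section 3.5.1) are illustrative assumptions.

```python
import numpy as np

def dtrf(band, sigma_s=260.0, sigma_r=0.43, N=3):
    """Domain transform recursive filter sketch for one band (Equations (14)-(19))."""
    I = band.astype(float)
    # d between horizontal/vertical neighbours: 1 + (sigma_s/sigma_r)|R'| (Eq. (16))
    dtx = 1.0 + (sigma_s / sigma_r) * np.abs(np.diff(I, axis=1))
    dty = 1.0 + (sigma_s / sigma_r) * np.abs(np.diff(I, axis=0))
    J = I.copy()
    for t in range(1, N + 1):
        sigma_H = sigma_s * np.sqrt(3.0) * 2 ** (N - t) / np.sqrt(4 ** N - 1)  # Eq. (18)
        a = np.exp(-np.sqrt(2.0) / sigma_H)                                    # Eq. (19)
        J = _pass(J, a ** dtx)            # left-to-right then right-to-left
        J = _pass(J.T, (a ** dty).T).T    # top-to-bottom then bottom-to-top
    return J

def _pass(J, w):
    """Causal + anti-causal recursion J[n] = (1 - a^d) J[n] + a^d J[n-1] (Eq. (14))."""
    J = J.copy()
    for i in range(1, J.shape[1]):                 # causal sweep
        J[:, i] += w[:, i - 1] * (J[:, i - 1] - J[:, i])
    for i in range(J.shape[1] - 2, -1, -1):        # anti-causal sweep
        J[:, i] += w[:, i] * (J[:, i + 1] - J[:, i])
    return J
```

Note how a large gradient makes $a^d \approx 0$ in the weights, so the sweeps do not propagate values across edges.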
In general, the ground-cover distribution of hyperspectral images is fairly uniform, so there is always a strong spatial correlation between the pixels of a hyperspectral image. Spatial correlation here means the associated property of the reflection intensity between a pixel and its adjacent pixels. However, spatial correlation information is often ignored in texture information extraction.
To examine the spatial correlation features of CF and DTRF, Moran's I [49,50] is employed to test the spatial correlation of hyperspectral images before and after filtering, calculated by the following formula:
$$I = \frac{n \sum_{i=1}^{n} \sum_{j=1}^{n} \alpha_{ij} (Y_i - \bar{Y})(Y_j - \bar{Y})}{\left( \sum_{i=1}^{n} \sum_{j=1}^{n} \alpha_{ij} \right) \sum_{i=1}^{n} (Y_i - \bar{Y})^2},$$ (20)
where $Y_i$ and $Y_j$ are the reflection intensities of two hyperspectral pixels, $\bar{Y}$ is the mean of $Y$, $n$ is the number of pixels in one band, and $\alpha_{ij}$ is the spatial weight.
The larger I is, the stronger the spatial correlation, and vice versa. Section 3.4 describes the validation tests of the spatial correlation information extracted by DTRF.
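As a concrete check, Moran's I of Equation (20) with binary 4-neighbour weights $\alpha_{ij}$ reduces to the short computation below; the rook (4-neighbour) weighting is an assumption matching common practice.

```python
import numpy as np

def morans_i(band):
    """Moran's I (Equation (20)) of one band under binary 4-neighbour weights."""
    Z = band.astype(float) - band.mean()
    n = Z.size
    # each neighbouring pair contributes twice, since alpha_ij is symmetric
    cross = 2.0 * ((Z[:, :-1] * Z[:, 1:]).sum() + (Z[:-1, :] * Z[1:, :]).sum())
    weight_sum = 2.0 * (Z[:, :-1].size + Z[:-1, :].size)
    return (n / weight_sum) * cross / (Z ** 2).sum()
```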

2.3. CFDTRF-LDM

Based on CF and DTRF, a new classification approach (CFDTRF-LDM) is proposed. CF and DTRF are applied to extract the spatial texture information and the spatial correlation information, respectively. In order to obtain rich spatial correlation features, the spatial correlation information is obtained from the original spectral images. In addition, in order to avoid the Hughes phenomenon, the spatial correlation information and the spatial texture information are obtained from the original image and the PCA components, respectively, so the total number of images is suitable for LDM classification. The implementation process is as follows.
Step 1: normalization. Equation (21) normalizes the hyperspectral image $R$, where $\mu$ and $\sigma$ are the mean and standard deviation of $R$:
$$H = \frac{R - \mu}{\sigma}$$ (21)
Step 2: dimensionality reduction. Since most of the information is concentrated in the leading principal components after PCA, the dimensionality of the normalized image $H$ is further reduced by PCA, and the top 10% of the principal components are selected for CF:
$$E = \mathrm{PCA}(H)$$ (22)
Step 3: spatial texture information extraction. CF extracts the spatial texture information $D_t$ from each band of $E$ by Equation (13).
Step 4: spatial correlation information extraction. DTRF extracts the spatial correlation information $D_c$ from $E$.
Step 5: fusion. Equation (23) linearly fuses $D_t$ and $D_c$:
$$D = D_t + D_c$$ (23)
Step 6: classification. A training set is randomly selected in proportion from $D$ and the test set is formed from the remaining samples, which are classified by the LDM classifier.
The flow of the CFDTRF-LDM is shown in Figure 5.
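As a concrete illustration, the following minimal sketch implements Steps 1–5, reusing the gaussian_curvature_filter and dtrf sketches given above; the use of scikit-learn's PCA and the global normalization are assumptions made for illustration, and Step 6 would feed the fused cube into the LDM (or SVM) classifier.

```python
import numpy as np
from sklearn.decomposition import PCA

def cfdtrf_features(R, keep_ratio=0.10):
    """Steps 1-5 of CFDTRF-LDM: normalize (Eq. (21)), reduce with PCA (Eq. (22)),
    extract D_t with CF and D_c with DTRF, and fuse them linearly (Eq. (23)).
    R: (H, W, B) hyperspectral cube."""
    Hn = (R - R.mean()) / R.std()                        # Step 1: normalization
    h, w, B = Hn.shape
    pca = PCA(n_components=max(1, int(keep_ratio * B)))  # Step 2: top 10% components
    E = pca.fit_transform(Hn.reshape(-1, B)).reshape(h, w, -1)
    Dt = np.stack([gaussian_curvature_filter(E[..., k])  # Step 3: texture
                   for k in range(E.shape[-1])], axis=-1)
    Dc = np.stack([dtrf(E[..., k])                       # Step 4: correlation
                   for k in range(E.shape[-1])], axis=-1)
    return Dt + Dc                                       # Step 5: linear fusion
```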

3. Experiments

3.1. Hyperspectral Data Description

Three hyperspectral image datasets were used to verify the effectiveness of CFDTRF-LDM. The first dataset was Indian Pines [51], acquired in 1992 by the airborne visible infrared imaging spectrometer (AVIRIS) sensor in the Indian Pines region of Northwestern Indiana. It contains 220 spectral bands with a spatial size of 145 × 145 pixels. Due to noise and water absorption, 20 spectral bands were removed, leaving 200 bands. This image includes 16 classes, and the specific types and numbers of each class are shown in Table 1 and Table 2.
The second dataset was Salinas Valley [52], collected by AVIRIS in the Salinas Valley, Southern California, in 1998. It has a high spatial resolution of 3.7 m, a region of 512 × 217 pixels and 206 spectral bands; similarly, 200 bands were retained after removing noise and water-absorption bands. The image also includes 16 classes, and the specific types and numbers of each class are shown in Table 3 and Table 4.
The third dataset, Kennedy Space Center, was acquired by the NASA airborne visible/infrared imaging spectrometer (AVIRIS) at the Kennedy Space Center in Florida on 23 March 1996. AVIRIS collected 224 bands of 10 nm width with center wavelengths from 400–2500 nm. The data were acquired from an altitude of approximately 20 km with a spatial resolution of 18 m. After removal of water-absorption and low-SNR bands, 176 bands were used for the analysis. The image includes 13 classes, and the specific types and numbers of each class are shown in Table 5 and Table 6.

3.2. Parameter Setting

To demonstrate the superiority of the proposed method, several methods were used to compare with CFDTRF-LDM, including:
(1)
SVM [18]: according to the raw features of hyperspectral images, SVM was applied with the Gaussian radial basis function kernel.
(2)
PCA-SVM (PCA with SVM): the use of PCA reduced the hyperspectral dimension and selected the top 10% components for the SVM.
(3)
LDM: the Gaussian radial basis function kernel was applied according to the raw features of hyperspectral images.
(4)
PCA-LDM (PCA with LDM): PCA reduced the hyperspectral dimension and selected the top 10% components for the LDM.
(5)
EPF [39]: in this method, SVM classified hyperspectral images. Next, edge-preserving filter was conducted for each probabilistic map. Last, the class of every pixel was selected based on the maximum probability.
(6)
IFRF [41]: this method acquired the classified results with SVM according to the image fusion and recursive filter.
(7)
PCA-EPFs [40]: the spatial information constructed by applying edge-preserving filters was stacked to form the fused feature, and the dimension was reduced by PCA for the classifier of SVM.
(8)
LDM and feature learning-based (LDM-FL) [20]: this method obtains the classification results with LDM applied to recursive-filter features.
(9)
CF-SVM: the hyperspectral dimensionality was reduced with PCA, and the first 10% principal components were selected for SVM based on CF.
(10)
CF-LDM: the hyperspectral dimensionality was reduced with PCA, and the first 10% principal components were selected for LDM based on CF.
(11)
DTRF-SVM: the hyperspectral dimensionality was reduced with PCA, and the first 10% principal components were selected for SVM according to DTRF.
(12)
DTRF-LDM: the hyperspectral dimensionality was reduced with PCA, and the first 10% principal components were picked for LDM based on DTRF.
(13)
CFDTRF-LDM: the method proposed in this paper.
(14)
CFDTRF-SVM: for comparison of classification results, the proposed fused features were also classified by SVM.
In this paper, overall accuracy (OA), average accuracy (AA) and the kappa statistic (Kappa) were adopted to assess classification accuracy. To avoid biased estimation, twelve independent runs were carried out in Matlab R2012b on a computer with an i7-6700 CPU and 8 GB RAM.

3.3. The Validation Test of CF and DTRF

To validate CF, the 10th, 60th, 130th and 180th bands of Indian Pines were processed with CF. As shown in Figure 6, CF extracts good boundary features of hyperspectral images and offers clear advantages in obtaining smooth edges when smoothing hyperspectral images. DTRF also shows good spatial-correlation-preserving characteristics.

3.4. Test of Spatial Correlation Information

To compare the spatial correlation of CF and DTRF, we calculated the mean of Moran's I for each band of the Indian Pines, Salinas Valley and Kennedy Space Center datasets. The average Moran's I of the two filters is shown in Figure 7. The average Moran's I obtained from DTRF is higher than that of CF and of the raw spectral features. In addition, the average Moran's I of CF is lower than that of the spectral images, suggesting that its spatial correlation information is weak. Therefore, DTRF can extract good spatial correlation information and effectively compensate for this deficiency of CF.

3.5. Investigation of the Proposed Method

3.5.1. Optimization of DTRF

The total number of iterations N, the spatial standard deviation σs and the range standard deviation σr of DTRF influence the filtering effect of the image. Therefore, a classification test was conducted on the Indian Pines dataset to verify the effectiveness of parameter optimization. From the entire dataset, 4% and 96% of the samples were randomly selected for training and testing, respectively, and an exhaustive search was employed to establish the three optimal parameters yielding the most satisfactory LDM classification results. To reduce the complexity of the algorithm, we first set the total number of iterations N = 10. Then, σr ∈ {0.10, 0.11, …, 0.50} and σs ∈ {10, 15, …, 500} were set for the experiments, and the classification was performed sequentially over the 4059 parameter combinations. According to the results, the best classification was obtained at σr = 0.43 and σs = 260, with an optimal OA = 90.23%. Therefore, to achieve a better classification, the parameters σr = 0.43 and σs = 260 are adopted in the following experiments.
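The exhaustive search can be written as the following loop; evaluate_oa is a hypothetical helper that trains LDM on the 4% split with the given (σr, σs) and returns the OA — it stands in for the experiment code, which is not given in the paper.

```python
import numpy as np
from itertools import product

# 41 values of sigma_r times 99 values of sigma_s = 4059 combinations
grid_r = np.round(np.arange(0.10, 0.505, 0.01), 2)
grid_s = np.arange(10, 501, 5)
best_oa, best_params = 0.0, None
for sr, ss in product(grid_r, grid_s):
    oa = evaluate_oa(sigma_r=sr, sigma_s=int(ss))   # hypothetical helper
    if oa > best_oa:
        best_oa, best_params = oa, (sr, int(ss))
# the search above reports its optimum at sigma_r = 0.43, sigma_s = 260 (OA = 90.23%)
```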

3.5.2. Experiment of Indian Pines

To evaluate the performance of CFDTRF-LDM, fourteen methods were used to classify the Indian Pines data. The ground-truth distribution of the Indian Pines dataset is shown in Figure 8a. All 16 categories were selected, of which 5% (about 533) of the samples were employed as the training set and the rest as the test set; for three Indian Pines classes with insufficient samples, 20% were used for training. Table 1 and Table 2 report the classification accuracies of the fourteen methods, and the classification maps are shown in Figure 8.
The classification results for Indian Pines are shown in Figure 8, while Table 1 and Table 2 report the OA, AA and Kappa accuracies of each class for the different methods. CFDTRF-LDM achieved the best accuracy, with OA = 96.64%, AA = 96.04% and Kappa = 96.16%. Furthermore, the accuracies exceed 99% for six classes with CFDTRF-LDM. This experiment demonstrates that the classification performance was improved compared with the other classification methods.
Besides, as shown in Figure 8, the OA value of CFDTRF-LDM for Indian Pines is 19.17%, 18.83%, 16.78%, 18.15%, 6.29%, 5.99%, 1.85%, 0.60%, 8.55%, 6.99%, 4.34% and 2.04% higher than that of SVM, PCA-SVM, LDM, PCA-LDM, EPF, IFRF, LDM-FL, CF-SVM, CF-LDM, GDF-LDM, DTRF-SVM and DTRF-LDM, respectively. The effectiveness of CFDTRF-LDM for hyperspectral classification is thus fully verified.

3.5.3. Experiment of Salinas Valley

Similarly, the ground-truth distribution of the Salinas Valley dataset is shown in Figure 9a: all 16 classes were selected, with 0.8% (about 433) of the samples as the training set and the remaining 99.2% as the test set. Table 3 and Table 4 list the classification accuracies of the Salinas Valley dataset for the different methods, and the classification maps are shown in Figure 9.
The classification results for Salinas Valley are shown in Figure 9, while Table 3 and Table 4 report the OA, AA and Kappa accuracies of each class for the different methods. CFDTRF-LDM achieved the best accuracy, with OA = 99.16%, AA = 98.71% and Kappa = 99.06%. Furthermore, the accuracies reached 100% for four classes with CFDTRF-LDM. This experiment demonstrates that the classification performance was improved compared with the other classification methods.
In addition, the OA values of CFDTRF-LDM were higher than that of SVM, PCA-SVM, LDM, PCA-LDM, EPF, IFRF, LDM-FL, CF-SVM, CF-LDM, GDF-LDM, DTRF-SVM and DTRF-LDM by 11.17%, 11.71%, 10.20%, 9.97%, 7.79%, 1.64%, 0.48%, 0.40%, 9.96%, 6.56%, 2.45% and 0.64%, respectively. The hyperspectral classification fully validated the effectiveness of CFDTRF-LDM.

3.5.4. Experiment of Kennedy Space Center

Likewise, the ground-truth distribution of the Kennedy Space Center dataset is shown in Figure 10a: all 13 classes were selected, of which 4% (about 208) of the samples were employed as the training set and the remaining 96% as the test set. Table 5 and Table 6 list the classification accuracies of the Kennedy Space Center dataset for the different methods, and the classification maps are shown in Figure 10.
The classification results for Kennedy Space Center are shown in Figure 10, while Table 5 and Table 6 report the OA, AA and Kappa accuracies of each class for the various methods, with CFDTRF-LDM achieving the best accuracy: OA = 97.33%, AA = 96.13% and Kappa = 97.03%. Furthermore, six classes achieved accuracies above 99% with CFDTRF-LDM. This experiment shows that the classification performance was enhanced compared with the other classification methods.
Also, the OA values of CFDTRF-LDM were correspondingly larger than that of SVM, PCA-SVM, LDM, PCA-LDM, EPF, IFRF, LDM-FL, CF-SVM, CF-LDM, GDF-LDM, DTRF-SVM and DTRF-LDM by 14.89%, 17.85%, 12.23%, 16.69%, 8.31%, 10.17%, 3.21%, 7.16%, 7.27%, 6.32%, 4.72% and 2.10%. The hyperspectral classification completely verified the effectiveness of CFDTRF-LDM.

3.5.5. Analysis

First, the classification results are shown in Figure 11. The OA values of LDM and PCA-LDM for Indian Pines were 79.85% and 78.49%, which were 2.38% and 0.68% higher than those of SVM and PCA-SVM, respectively. Likewise, the OA values of LDM and PCA-LDM for Salinas Valley were 88.96% and 89.19%, respectively, which were 0.98% and 1.74% higher than those of SVM and PCA-SVM. Furthermore, the OA values of LDM and PCA-LDM for Kennedy Space Center were 85.11% and 80.65%, respectively, which were 2.66% and 1.16% higher than those of SVM and PCA-SVM. It can be concluded that LDM is superior to SVM thanks to its maximization of the margin mean and minimization of the margin variance.
Second, as shown in Figure 12, the CF-SVM and CF-LDM OA values on Indian Pines were 10.62% and 9.79% higher than those of SVM and LDM, respectively, and the OA values of CF-SVM and CF-LDM on Salinas Valley were 1.21% and 1.60% higher than those of SVM and LDM. In addition, the OA values of CF-SVM and CF-LDM on Kennedy Space Center were 7.62% and 5.91% higher than those of SVM and LDM. This finding indicates that the spatial texture information extracted by CF was effective for enhancing the classification performance of SVM and LDM.
Third, as Figure 13 shows, the OA values of DTRF-SVM and DTRF-LDM on Indian Pines were 14.82% and 14.75% higher than those of SVM and LDM, respectively. Correspondingly, the OA values of DTRF-SVM and DTRF-LDM on Salinas Valley were 8.72% and 9.56% higher than the SVM and LDM OA values. Similarly, the OA values of DTRF-SVM and DTRF-LDM on Kennedy Space Center were 10.17% and 10.13% higher than the SVM and LDM OA values. Thus, the spatial correlation information extracted by DTRF was effective for improving the hyperspectral classification in this work.
Fourth, as shown in Figure 14, the OA values of CFDTRF-LDM on Indian Pines, Salinas Valley and Kennedy Space Center were 96.64%, 99.16% and 97.33%, respectively, all higher than those of EPF, IFRF, PCA-EPFs and LDM-FL. Therefore, the spatial texture and spatial correlation information obtained by CF and DTRF in this work improve the performance of LDM more than the edge-preserving-filter and recursive-filter methods and the LDM-based methods do.
To examine the effect of the training ratio on classification, the three datasets were tested with different training ratios, as shown in Figure 15. As can be seen from the figure, with 2% of the Indian Pines dataset for training, the OA of the proposed method reaches 90.41%, and when the ratio increases to 7%, the OA exceeds 97%. If the training ratio of the Salinas Valley dataset is set to 0.2%, the OA reaches 90%, and at 0.8% it reaches 99%. For Kennedy Space Center, training ratios of 2% and 9% yield OA values of 93% and 99%, respectively. Thus, the proposed CFDTRF-LDM obtains satisfactory classification with a small training set and remains stable across different training ratios with near-optimal classification performance.

4. Conclusions

In this paper, a hyperspectral image classification method based on the fusion of two kinds of spatial information and LDM classification, namely CFDTRF-LDM, was proposed. The spatial texture features and the spatial correlation features were extracted by CF and DTRF, respectively, and linearly fused for LDM classification. To verify the performance of CFDTRF-LDM, three hyperspectral image datasets were tested, and the proposed method was found to be superior to the other methods. The advantage of CFDTRF-LDM is that the spatial texture information and spatial correlation information extracted by CF and DTRF are appropriately fused for effective classification with LDM, yielding strong classification performance for HSI. Furthermore, the proposed method obtains satisfactory classification with a small training set and remains stable across different training ratios with near-optimal classification performance. For future work, more efficient spatial information should be explored for SVM or LDM classification.

Author Contributions

J.L. conceived and designed the methodology and experiments, and wrote, reviewed and edited the manuscript; L.W. analyzed and interpreted the results, aided the experimental verification and revised the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (Grant No 61275010 and 61675051), Natural Science Foundation of Guangdong (Grant No 2018A030313195), Major research project of Guangdong (Grant No 2017GKTSCX021), Science and Technology Project of Guangzhou (Grant No 201804010262), Science and Technology Project of Guangdong (Grant No 2017ZC0358).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Guo, Y.; Cao, H.; Bai, J.; Bai, Y. High Efficient Deep Feature Extraction and Classification of Spectral-Spatial Hyperspectral Image Using Cross Domain Convolutional Neural Networks. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 345–356. [Google Scholar] [CrossRef]
  2. Yu, C.; Wang, Y.; Song, M.; Chang, C. Class Signature-Constrained Background-Suppressed Approach to Band Selection for Classification of Hyperspectral Images. IEEE Trans. Geosci. Remote Sens. 2019, 57, 14–31. [Google Scholar] [CrossRef]
  3. Wang, X.; Zhong, Y.; Zhang, L.; Xu, Y. Spatial Group Sparsity Regularized Nonnegative Matrix Factorization for Hyperspectral Unmixing. IEEE Trans. Geosci. Remote Sens. 2017, 51, 6287–6304. [Google Scholar]
  4. Xu, X.; Li, J.; Wu, C.; Plaza, A. Regional clustering-based spatial preprocessing for hyperspectral unmixing. Remote Sens. Environ. 2018, 204, 333–346. [Google Scholar] [CrossRef]
5. Chinsu, L.; Shih-Yu, C.; Chia-Chun, C.; Chia-Huei, T. Detecting newly grown tree leaves from unmanned-aerial-vehicle images using hyperspectral target detection techniques. ISPRS J. Photogramm. Remote Sens. 2018, 142, 174–189. [Google Scholar]
  6. Dong, Y.; Du, B.; Zhang, L.; Hu, X. Hyperspectral Target Detection via Adaptive Information—Theoretic Metric Learning with Local Constraints. Remote Sens. 2018, 10, 1415. [Google Scholar]
  7. Shivers, S.W.; Roberts, D.A.; McFadden, J.P. Using paired thermal and hyperspectral aerial imagery to quantify land surface temperature variability and assess crop stress within California orchards. Remote Sens. Environ. 2019, 222, 215–231. [Google Scholar] [CrossRef]
  8. Awad, M. Sea water chlorophyll-a estimation using hyperspectral images and supervised artificial neural network. Ecol. Inform. 2014, 24, 60–68. [Google Scholar] [CrossRef]
  9. Ramirez, F.J.R.; Navarro-Cerrillo, R.M.; Varo-Martínez, M.Á.; Quero, J.L.; Doerr, S.; Hernández-Clemente, R. Determination of forest fuels characteristics in mortality-affected Pinus forests using integrated hyperspectral and ALS data. Int. J. Appl. Earth Obs. Geoinf. 2018, 68, 157–167. [Google Scholar] [CrossRef]
  10. Laakso, K.; Turner, D.J.; Rivard, B.; Sánchez-Azofeifa, A. The long-wave infrared (8–12 μm) spectral features of selected rare earth element—Bearing carbonate, phosphate and silicate minerals. Int. J. Appl. Earth Obs. Geoinf. 2019, 76, 77–83. [Google Scholar] [CrossRef]
  11. Awad, M.M. Forest mapping: A comparison between hyperspectral and multispectral images and technologies. J. For. Res. 2018, 29, 1395–1405. [Google Scholar] [CrossRef]
  12. Li, C.; Ma, Y.; Mei, X.; Ma, J. Hyperspectral image classification with robust sparse representation. IEEE Geosci. Remote Sens. Lett. 2016, 13, 641–645. [Google Scholar] [CrossRef]
  13. Golipour, M.; Ghassemian, H.; Mirzapour, F. Integrating hierarchical segmentation maps with MRF prior for classification of hyperspectral images in a Bayesian framework. IEEE Trans. Geosci. Remote Sens. 2016, 54, 805–816. [Google Scholar] [CrossRef]
  14. Guo, Y.; Cao, H.; Han, S.; Sun, Y.; Bai, Y. Spectral-Spatial Hyperspectral Image Classification with K-Nearest Neighbor and Guided Filter. IEEE Access 2018, 6, 18582–18591. [Google Scholar] [CrossRef]
  15. Richards, J.A.; Jia, X. Using Suitable Neighbors to Augment the Training Set in Hyperspectral Maximum Likelihood Classification. IEEE Geosci. Remote Sens. Lett. 2008, 5, 774–777. [Google Scholar] [CrossRef]
  16. Cao, F.; Yang, Z.; Ren, J.; Ling, W.K.; Zhao, H.; Marshall, S. Extreme sparse multinomial logistic regression: A fast and robust framework for hyperspectral image classification. Remote Sens. 2017, 9, 1255. [Google Scholar] [CrossRef]
  17. Aptoula, E.; Ozdemir, M.C.; Yanikoglu, B. Deep learning with attribute profiles for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1970–1974. [Google Scholar] [CrossRef]
  18. Melgani, F.; Bruzzone, L. Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790. [Google Scholar] [CrossRef]
  19. Zhang, T.; Zhou, Z.H. Large margin distribution machine. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, New York, NY, USA, 24–27 August 2014; pp. 313–322. [Google Scholar]
  20. Zhan, K.; Wang, H.; Huang, H.; Xie, Y. Large margin distribution machine for hyperspectral image classification. J. Electron. Imaging 2016, 25, 063024. [Google Scholar] [CrossRef]
  21. Tarabalka, Y.; Chanussot, J.; Jón Atli, B. Segmentation and Classification of Hyperspectral Images Using Minimum Spanning Forest Grown from Automatically Selected Markers. IEEE Trans. Syst. Manand Cybern. Part B (Cybern.) 2010, 40, 1267–1279. [Google Scholar] [CrossRef]
  22. Ghamisi, P.; Couceiro, M.S.; Fauvel, M.; Benediktsson, J.A. Integration of Segmentation Techniques for Classification of Hyperspectral Images. IEEE Geosci. Remote Sens. Lett. 2014, 11, 342–346. [Google Scholar] [CrossRef]
  23. Huang, X.; Guan, X.; Benediktsson, J.A.; Zhang, L.; Plaza, A.; Mura, M.D. Multiple Morphological Profiles from Multicomponent-Base Images for Hyperspectral Image Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 4653–4669. [Google Scholar] [CrossRef]
  24. Xue, Z.; Li, J.; Cheng, L.; Du, P. Spectral–Spatial Classification of Hyperspectral Data via Morphological Component Analysis-Based Image Separation. IEEE Trans. Geosci. Remote Sens. 2015, 53, 70–84. [Google Scholar]
  25. Gastal, E.S.L.; Oliveira, M.M. Domain transform for edge-aware image and video processing. In Proceedings of the ACM Transactions on Graphics (ToG), Vancouver, BC, Canada, 7–11 August 2011; Volume 30, p. 69. [Google Scholar]
  26. Liao, J.; Wang, L.; Hao, S. Hyperspectral image classification based on adaptive optimisation of morphological profile and spatial correlation information. Int. J. Remote Sens. 2018, 39, 9159–9180. [Google Scholar] [CrossRef]
  27. Zhang, Y.; Brady, M.; Smith, S. Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm. IEEE Trans. Med Imaging 2001, 20, 45–57. [Google Scholar] [CrossRef]
  28. Sun, L.; Wu, Z.; Liu, J.; Xiao, L.; Wei, Z. Supervised Spectral–Spatial Hyperspectral Image Classification with Weighted Markov Random Fields. IEEE Trans. Geosci. Remote Sens. 2014, 53, 1490–1503. [Google Scholar] [CrossRef]
  29. Zhang, X.; Gao, Z.; Jiao, L.; Zhou, H. Multifeature Hyperspectral Image Classification with Local and Nonlocal Spatial Information via Markov Random Field in Semantic Space. IEEE Trans. Geosci. Remote Sens. 2018, 56, 1409–1424. [Google Scholar] [CrossRef]
  30. He, K.; Sun, J.; Tang, X. Guided image filtering. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1397–1409. [Google Scholar] [CrossRef] [PubMed]
  31. Tomasi, C.; Manduchi, R. Bilateral filtering for gray and color images. In Proceedings of the IEEE Sixth International Conference on Computer Vision, Bombay, India, 4–7 January 1998; pp. 839–846. [Google Scholar]
  32. Jones, J.P.; Palmer, L.A. An evaluation of the two-dimensional Gabor filter model of simple receptive fields in cat striate cortex. J. Neurophysiol. 1987, 58, 1233–1258. [Google Scholar] [CrossRef]
  33. Wang, Z.; Hu, H.; Zhang, L.; Xue, J.H. Discriminatively guided filtering (DGF) for hyperspectral image classification. Neurocomputing 2018, 275, 1981–1987. [Google Scholar] [CrossRef]
  34. Guo, Y.; Han, S.; Li, Y.; Zhang, C.; Bai, Y. K-Nearest Neighbor combined with guided filter for hyperspectral image classification. Procedia Comput. Sci. 2018, 129, 159–165. [Google Scholar] [CrossRef]
  35. Wang, Y.; Song, H.; Zhang, Y. Spectral-Spatial Classification of Hyperspectral Images Using Joint Bilateral Filter and Graph Cut Based Model. Remote Sens. 2016, 8, 748. [Google Scholar] [CrossRef]
  36. Sahadevan, A.S.; Routray, A.; Das, B.S.; Ahmad, S. Hyperspectral image preprocessing with bilateral filter for improving the classification accuracy of support vector machines. J. Appl. Remote Sens. 2016, 10, 025004. [Google Scholar] [CrossRef]
  37. Qiao, T.; Yang, Z.; Ren, J.; Yuen, P.; Zhao, H.; Sun, G.; Marshall, S.; Benediktsson, J.A. Joint bilateral filtering and spectral similarity-based sparse representation: A generic framework for effective feature extraction and data classification in hyperspectral imaging. Pattern Recognit. 2018, 77, 316–328. [Google Scholar] [CrossRef]
  38. Moore, B.C. Principal component analysis in linear systems: Controllability, observability, and model reduction. IEEE Trans. Autom. Control 2003, 26, 17–32. [Google Scholar] [CrossRef]
  39. Kang, X.; Li, S.; Benediktsson, J.A. Spectral-spatial hyperspectral image classification with edge-preserving filtering. IEEE Trans. Geosci. Remote Sens. 2014, 52, 2666–2677. [Google Scholar] [CrossRef]
  40. Kang, X.; Xiang, X.; Li, S.; Benediktsson, J.A. PCA-based edge-preserving features for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 7140–7151. [Google Scholar] [CrossRef]
  41. Kang, X.; Li, S.; Benediktsson, J.A. Feature extraction of hyperspectral images with image fusion and recursive filtering. IEEE Trans. Geosci. Remote Sens. 2014, 52, 3742–3752. [Google Scholar] [CrossRef]
  42. Jia, S.; Wu, K.; Zhu, J.; Jia, X. Spectral-Spatial Gabor Surface Feature Fusion Approach for Hyperspectral Imagery Classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 1142–1154. [Google Scholar] [CrossRef]
  43. Li, H.C.; Zhou, H.L.; Pan, L.; Du, Q. Gabor feature-based composite kernel method for hyperspectral image classification. Electron. Lett. 2018, 54, 628–630. [Google Scholar] [CrossRef]
  44. Chen, Y.; Zhu, L.; Ghamisi, P.; Jia, S.; Li, G.; Tang, L. Hyperspectral images classification with Gabor filtering and convolutional neural network. IEEE Geosci. Remote Sens. Lett. 2017, 14, 2355–2359. [Google Scholar] [CrossRef]
  45. Tu, B.; Zhang, X.; Wang, J.; Zhang, G.; Ou, X. Spectral-Spatial Hyperspectral Image Classification via Non-local Means Filtering Feature Extraction. Sens. Imaging 2018, 19, 11. [Google Scholar] [CrossRef]
  46. Gong, Y.; Sbalzarini, I.F. Curvature filters efficiently reduce certain variational energies. IEEE Trans. Image Process. 2017, 26, 1786–1798. [Google Scholar] [CrossRef]
  47. Zhang, H.; Jin, X.; Wu, Q.; Jonathan, W.Q.M.; He, Z.; Wang, Y. Automatic visual detection method of railway surface defects based on curvature filtering and improved GMM. Chin. J. Sci. Instrum. 2018, 39, 181–194. [Google Scholar]
  48. Gao, W.; Zhou, Z.H. On the doubt about margin explanation of boosting. Artif. Intell. 2013, 203, 1–18. [Google Scholar] [CrossRef]
  49. Moran, P.A.P. The interpretation of statistical maps. J. R. Stat. Soc. Ser. B (Methodol.) 1948, 10, 243–251. [Google Scholar] [CrossRef]
  50. Moran, P.A.P. Notes on continuous stochastic phenomena. Biometrika 1950, 37, 17–23. [Google Scholar] [CrossRef] [PubMed]
  51. Hao, S.; Wang, W.; Ye, Y.; Li, Y.; Bruzzone, L. A deep network architecture for super-resolution-aided hyperspectral image classification with classwise loss. IEEE Trans. Geosci. Remote Sens. 2018, 56, 4650–4663. [Google Scholar] [CrossRef]
  52. Cheng, G.; Li, Z.; Han, J.; Yao, X.; Guo, L. Exploring Hierarchical Convolutional Features for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2018, 56, 6712–6722. [Google Scholar] [CrossRef]
Figure 1. The evolution process of energy functional.
Figure 2. Disjoint domain decomposition.
Figure 3. All possible triangles in the x neighborhood (a) down; (b) down and left (P); (c) mix.
Figure 4. Eight types of the triangular tangent planes through x (a) two of the common edges from the four tangent plane (R); (b) two of the common edges from the four tangent plane (P); (c) four of the tangent planes through mixed neighbors.
Figure 5. Flow of the proposed curvature filter and domain transform recursive filter with LDM (CFDTRF-LDM).
Figure 6. Curvature filter (CF) and domain transform recursive filter (DTRF) comparison for Indian Pines; (a) the 10th band of spectrum; (b) the 60th band of spectrum; (c) the 130th band of spectrum; (d) the 180th band of spectrum; (e) the 10th band filtering of CF; (f) the 60th band filtering of CF; (g) the 130th band filtering of CF; (h) the 180th band filtering of CF; (i) the 10th band filtering of DTRF; (j) the 60th band filtering of DTRF; (k) the 130th band filtering of DTRF; (l) the 180th band filtering of DTRF.
Figure 7. Average of Moran’s I for hyperspectral images (HSI) (a) Indian Pines (b) Salinas Valley (c) Kennedy Space Center.
Figure 8. Classification maps of different methods on the Indian Pines dataset (a) ground; (b) training; (c) SVM, overall accuracy (OA) = 77.47%; (d) principal component analysis (PCA)-SVM, OA = 77.81%; (e) large margin distribution machine (LDM), OA = 79.85%; (f) PCA-LDM, OA = 78.49%; (g) edge-preserving filter (EPF), OA = 90.35%; (h) image fusion with multiple subsets of adjacent bands and recursive filter (IFRF), OA = 90.64%; (i) PCA-EPFs, OA = 91.62%; (j) LDM-FL, OA = 93.31%; (k) CF-SVM, OA = 88.09%; (l) CF-LDM, OA = 88.54%; (m) DTRF-SVM, OA = 92.29%; (n) DTRF-LDM, OA = 94.60%; (o) CFDTRF-SVM, OA = 94.13%; (p) CFDTRF-LDM, OA = 96.64%.
Figure 9. Classification maps of different methods on the Salinas Valley dataset (a) ground; (b) training; (c) SVM, OA = 87.99%; (d) PCA-SVM, OA = 87.45%; (e) LDM, OA = 88.96%; (f) PCA-LDM, OA = 89.19%; (g) EPF, OA = 91.37%; (h) IFRF, OA = 97.52%; (i) PCA-EPFs, OA = 98.68%; (j) LDM-FL, OA = 98.76%; (k) CF-SVM, OA = 89.20%; (l) CF-LDM, OA = 90.56%; (m) DTRF-SVM, OA = 96.71%; (n) DTRF-LDM, OA = 98.52%; (o) CFDTRF-SVM, OA = 97.93%; (p) CFDTRF-LDM, OA = 99.16%.
Figure 10. Classification maps of different methods on the Kennedy Space Center dataset (a) ground; (b) training; (c) SVM, OA = 82.45%; (d) PCA-SVM, OA = 79.49%; (e) LDM, OA = 85.11%; (f) PCA-LDM, OA = 80.65%; (g) EPF, OA = 89.03%; (h) IFRF, OA = 86.21%; (i) PCA-EPFs, OA = 94.12%; (j) LDM-FL, OA = 90.17%; (k) CF-SVM, OA = 90.07%; (l) CF-LDM, OA = 91.02%; (m) DTRF-SVM, OA = 92.62%; (n) DTRF-LDM, OA = 95.24%; (o) CFDTRF-SVM, OA = 95.89%; (p) CFDTRF-LDM, OA = 97.33%.
Figure 11. Comparison of SVM, LDM, PCA-SVM and PCA-LDM on three datasets.
Figure 12. Comparison of SVM, CF-SVM, LDM and CF-LDM on three datasets.
Figure 13. Comparison of SVM, DTRF-SVM, LDM and DTRF-LDM on three datasets.
Figure 14. Comparison of EPF, PCA-EPFs, IFRF, LDM-FL and CFDTRF-LDM on three datasets.
Figure 15. Effect of different training ratios on classification performance (a) Indian Pines (b) Salinas Valley (c) Kennedy Space Center.
Table 1. Comparison of classification accuracies (in percent) provided by seven methods for Indian Pines (part A).

| Ground | Sum | Train | SVM | PCA-SVM | LDM | PCA-LDM | EPF | IFRF | PCA-EPFs |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Alfalfa | 54 | 11 | 45.59 | 73.20 | 90.08 | 83.42 | 55.88 | 89.51 | 83.28 |
| Corn-no-till | 1434 | 72 | 64.06 | 67.16 | 72.90 | 74.16 | 84.55 | 89.87 | 86.18 |
| Corn-min-till | 834 | 42 | 72.28 | 71.24 | 67.41 | 57.38 | 88.49 | 78.09 | 91.12 |
| Corn | 234 | 12 | 15.73 | 34.51 | 56.38 | 60.08 | 19.11 | 69.25 | 78.80 |
| Grass-pasture | 497 | 25 | 87.22 | 84.36 | 88.90 | 91.85 | 91.56 | 92.72 | 91.49 |
| Grass-trees | 747 | 37 | 94.11 | 95.83 | 94.98 | 94.35 | 99.89 | 97.97 | 93.53 |
| Grass-pasture-mowed | 26 | 5 | 45.46 | 73.57 | 83.57 | 69.11 | 43.86 | 64.18 | 60.59 |
| Hay-windrowed | 489 | 24 | 98.26 | 96.96 | 96.46 | 94.62 | 100.00 | 99.51 | 99.83 |
| Oats | 20 | 4 | 29.46 | 26.68 | 73.53 | 87.85 | 18.30 | 41.29 | 42.32 |
| Soybeans-no-till | 968 | 48 | 65.89 | 61.67 | 67.49 | 72.83 | 86.69 | 84.51 | 87.86 |
| Soybeans-min-till | 2468 | 123 | 82.38 | 82.87 | 79.34 | 73.30 | 97.83 | 94.58 | 96.35 |
| Soybeans-clean-till | 614 | 31 | 76.40 | 76.03 | 80.35 | 73.29 | 95.47 | 89.31 | 88.21 |
| Wheat | 212 | 11 | 95.66 | 98.28 | 99.51 | 99.01 | 99.88 | 99.16 | 76.38 |
| Woods | 1294 | 65 | 95.64 | 97.25 | 92.65 | 91.81 | 99.57 | 98.55 | 98.36 |
| Bldg-grass-tree | 380 | 19 | 41.31 | 33.53 | 61.07 | 56.65 | 53.77 | 76.75 | 91.20 |
| Stone-steel-towers | 95 | 5 | 81.22 | 57.49 | 86.38 | 76.38 | 93.87 | 67.63 | 58.59 |
| OA/% | – | – | 77.47 | 77.81 | 79.85 | 78.49 | 90.35 | 90.64 | 91.62 |
| AA/% | – | – | 68.17 | 70.66 | 80.69 | 78.51 | 76.80 | 83.31 | 82.76 |
| Kappa/% | – | – | 74.12 | 74.52 | 77.03 | 77.36 | 88.92 | 89.30 | 90.43 |
Table 2. Comparison of classification accuracies (in percent) provided by seven methods for Indian Pines (part B).

| Ground | Sum | Train | LDM-FL | CF-SVM | CF-LDM | DTRF-SVM | DTRF-LDM | CFDTRF-SVM | CFDTRF-LDM |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Alfalfa | 54 | 11 | 93.13 | 86.69 | 89.41 | 92.38 | 98.73 | 76.32 | 93.62 |
| Corn-no-till | 1434 | 72 | 92.17 | 84.60 | 86.26 | 87.98 | 91.07 | 94.74 | 96.85 |
| Corn-min-till | 834 | 42 | 89.89 | 83.40 | 81.41 | 92.51 | 92.21 | 92.29 | 94.71 |
| Corn | 234 | 12 | 77.60 | 69.97 | 73.33 | 78.79 | 94.72 | 73.56 | 88.17 |
| Grass-pasture | 497 | 25 | 93.43 | 94.71 | 94.67 | 89.58 | 96.36 | 92.68 | 92.29 |
| Grass-trees | 747 | 37 | 96.56 | 97.39 | 98.51 | 94.76 | 97.01 | 97.32 | 99.01 |
| Grass-pasture-mowed | 26 | 5 | 100.00 | 74.11 | 97.50 | 27.94 | 100.00 | 98.61 | 93.45 |
| Hay-windrowed | 489 | 24 | 100.00 | 98.65 | 98.80 | 99.78 | 100.00 | 99.40 | 99.78 |
| Oats | 20 | 4 | 93.74 | 52.35 | 98.33 | 5.88 | 93.20 | 70.31 | 100.00 |
| Soybeans-no-till | 968 | 48 | 91.57 | 78.69 | 85.60 | 86.39 | 92.45 | 87.69 | 95.21 |
| Soybeans-min-till | 2468 | 123 | 92.56 | 91.60 | 86.97 | 95.57 | 95.02 | 96.80 | 96.97 |
| Soybeans-clean-till | 614 | 31 | 91.17 | 87.21 | 86.19 | 89.96 | 90.69 | 91.53 | 92.06 |
| Wheat | 212 | 11 | 99.14 | 99.12 | 99.26 | 97.24 | 94.09 | 99.25 | 99.75 |
| Woods | 1294 | 65 | 98.98 | 98.34 | 96.48 | 98.74 | 99.67 | 98.82 | 99.96 |
| Bldg-grass-tree | 380 | 19 | 90.20 | 48.87 | 68.97 | 92.66 | 93.68 | 91.12 | 99.16 |
| Stone-steel-towers | 95 | 5 | 92.09 | 84.00 | 88.38 | 67.51 | 81.45 | 70.04 | 95.67 |
| OA/% | – | – | 93.31 | 88.09 | 88.54 | 92.29 | 94.60 | 94.13 | 96.64 |
| AA/% | – | – | 93.26 | 83.11 | 89.38 | 81.10 | 94.40 | 89.41 | 96.04 |
| Kappa/% | – | – | 92.39 | 86.36 | 86.94 | 91.20 | 93.84 | 93.29 | 96.16 |
Table 3. Comparison of classification accuracies (in percent) provided by seven methods for Salinas Valley (part A).

| Ground | Sum | Training | SVM | PCA-SVM | LDM | PCA-LDM | EPF | IFRF | PCA-EPFs |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Broccoli-green-weeds-1 | 2009 | 16 | 96.68 | 98.91 | 99.06 | 99.32 | 99.84 | 99.93 | 99.86 |
| Broccoli green-weeds-2 | 3726 | 30 | 99.03 | 98.59 | 99.08 | 99.03 | 100.00 | 98.88 | 99.43 |
| Fallow | 1976 | 16 | 95.91 | 85.30 | 94.30 | 98.04 | 86.58 | 99.96 | 99.66 |
| Fallow-rough-plough | 1394 | 11 | 96.34 | 94.50 | 99.06 | 99.35 | 99.87 | 95.11 | 92.39 |
| Fallow-smooth | 2678 | 21 | 90.60 | 97.33 | 95.97 | 96.51 | 99.50 | 96.28 | 98.29 |
| Stubble | 3959 | 32 | 99.57 | 99.50 | 99.84 | 99.76 | 100.00 | 99.62 | 99.84 |
| Celery | 3579 | 29 | 99.35 | 99.34 | 99.61 | 99.46 | 100.00 | 99.16 | 99.18 |
| Grapes-untrained | 11271 | 90 | 89.94 | 86.19 | 78.05 | 76.14 | 95.01 | 96.04 | 98.45 |
| Soil-vineyard-develop | 6203 | 50 | 98.22 | 99.00 | 99.35 | 99.60 | 99.96 | 100.00 | 100.00 |
| Corn-senesced-green weeds | 3278 | 26 | 89.33 | 83.14 | 92.17 | 93.11 | 92.23 | 99.02 | 99.41 |
| Lettuce-romaine-4wk | 1068 | 9 | 70.95 | 66.80 | 92.34 | 91.60 | 97.76 | 90.80 | 88.48 |
| Lettuce-romaine-5wk | 1927 | 15 | 98.29 | 93.45 | 99.69 | 99.62 | 100.00 | 98.08 | 98.45 |
| Lettuce-romaine-6wk | 916 | 7 | 98.43 | 50.42 | 97.66 | 97.99 | 100.00 | 83.46 | 96.41 |
| Lettuce-romaine-7wk | 1070 | 9 | 88.09 | 94.49 | 94.12 | 95.50 | 99.60 | 95.31 | 96.89 |
| Vineyard-untrained | 7268 | 58 | 51.00 | 61.05 | 63.39 | 65.96 | 53.39 | 97.89 | 99.86 |
| Vineyard-vertical-trellis | 1807 | 14 | 81.60 | 86.77 | 96.47 | 97.13 | 91.75 | 93.45 | 95.86 |
| OA/% | – | – | 87.99 | 87.45 | 88.96 | 89.19 | 91.37 | 97.52 | 98.68 |
| AA/% | – | – | 90.21 | 87.17 | 93.76 | 94.26 | 94.72 | 96.44 | 97.65 |
| Kappa/% | – | – | 86.57 | 85.96 | 87.70 | 87.97 | 90.34 | 97.23 | 98.53 |
Table 4. Comparison of classification accuracies (in percent) provided by seven methods for Salinas Valley (part B).

| Ground | Sum | Training | LDM-FL | CF-SVM | CF-LDM | DTRF-SVM | DTRF-LDM | CFDTRF-SVM | CFDTRF-LDM |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Broccoli-green-weeds-1 | 2009 | 16 | 99.99 | 99.91 | 99.95 | 99.96 | 100.00 | 100.00 | 100.00 |
| Broccoli green-weeds-2 | 3726 | 30 | 99.75 | 97.44 | 99.57 | 99.36 | 99.81 | 99.80 | 99.98 |
| Fallow | 1976 | 16 | 99.96 | 92.66 | 99.95 | 97.74 | 98.31 | 97.05 | 100.00 |
| Fallow-rough-plough | 1394 | 11 | 96.41 | 98.59 | 98.88 | 89.95 | 91.38 | 99.15 | 98.46 |
| Fallow-smooth | 2678 | 21 | 99.03 | 96.73 | 99.04 | 94.93 | 94.79 | 98.07 | 98.73 |
| Stubble | 3959 | 32 | 99.55 | 99.36 | 99.76 | 97.70 | 98.84 | 99.58 | 99.90 |
| Celery | 3579 | 29 | 99.85 | 99.57 | 99.76 | 99.87 | 99.83 | 99.73 | 99.72 |
| Grapes-untrained | 11271 | 90 | 98.50 | 87.98 | 83.92 | 97.82 | 99.24 | 97.54 | 99.18 |
| Soil-vineyard-develop | 6203 | 50 | 100.00 | 99.87 | 99.67 | 100.00 | 100.00 | 99.63 | 100.00 |
| Corn-senesced-green weeds | 3278 | 26 | 99.32 | 86.16 | 93.23 | 96.27 | 96.53 | 95.97 | 98.19 |
| Lettuce-romaine-4wk | 1068 | 9 | 91.08 | 56.40 | 94.38 | 64.07 | 94.06 | 96.65 | 96.84 |
| Lettuce-romaine-5wk | 1927 | 15 | 98.37 | 86.48 | 100.00 | 98.52 | 99.01 | 99.95 | 100.00 |
| Lettuce-romaine-6wk | 916 | 7 | 93.09 | 97.61 | 97.04 | 93.30 | 94.46 | 93.36 | 98.26 |
| Lettuce-romaine-7wk | 1070 | 9 | 95.06 | 89.91 | 97.41 | 74.42 | 94.66 | 91.38 | 93.20 |
| Vineyard-untrained | 7268 | 58 | 99.16 | 64.93 | 76.36 | 98.21 | 99.40 | 96.37 | 99.03 |
| Vineyard-vertical-trellis | 1807 | 14 | 96.24 | 88.78 | 97.80 | 95.21 | 99.43 | 95.69 | 97.82 |
| OA/% | – | – | 98.76 | 89.20 | 92.60 | 96.71 | 98.52 | 97.93 | 99.16 |
| AA/% | – | – | 97.84 | 90.15 | 96.04 | 93.58 | 97.48 | 97.49 | 98.71 |
| Kappa/% | – | – | 98.62 | 87.92 | 91.76 | 96.33 | 98.35 | 97.70 | 99.06 |
Table 5. Comparison of classification accuracies (in percent) provided by seven methods for Kennedy Space Center (part A).

| Ground | Sum | Training | SVM | PCA-SVM | LDM | PCA-LDM | EPF | IFRF | PCA-EPFs |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Scrub | 761 | 30 | 97.66 | 87.95 | 89.54 | 98.43 | 100.00 | 95.91 | 99.00 |
| Swamp willow | 243 | 10 | 85.42 | 70.41 | 84.78 | 68.98 | 81.12 | 35.46 | 59.96 |
| Cabbage palm hammock | 256 | 10 | 84.99 | 70.54 | 88.39 | 78.69 | 94.85 | 92.98 | 97.27 |
| Cabbage palm/oak | 252 | 10 | 49.68 | 46.28 | 55.33 | 44.40 | 80.40 | 57.85 | 97.32 |
| Slash pine | 161 | 6 | 8.28 | 38.67 | 52.71 | 26.32 | 12.83 | 63.77 | 64.70 |
| Oak/broadleaf hammock | 229 | 9 | 25.68 | 35.82 | 56.79 | 0.23 | 21.37 | 60.51 | 77.86 |
| Hardwood swamp | 105 | 4 | 38.89 | 57.99 | 66.96 | 41.09 | 48.51 | 77.87 | 81.76 |
| Graminoid marsh | 431 | 17 | 75.44 | 63.40 | 72.97 | 71.72 | 93.46 | 91.28 | 99.71 |
| Spartina marsh | 520 | 21 | 94.09 | 92.51 | 94.58 | 96.59 | 100.00 | 87.57 | 100.00 |
| Cattail marsh | 404 | 16 | 90.81 | 92.34 | 94.56 | 92.54 | 95.80 | 97.04 | 92.78 |
| Salt marsh | 419 | 17 | 93.85 | 92.56 | 90.71 | 88.33 | 98.33 | 88.03 | 99.83 |
| Mud flats | 503 | 20 | 78.52 | 73.93 | 83.61 | 84.78 | 93.53 | 96.80 | 94.18 |
| Water | 927 | 37 | 99.94 | 98.54 | 98.18 | 98.08 | 100.00 | 100.00 | 100.00 |
| OA/% | – | – | 82.45 | 79.49 | 85.11 | 80.65 | 89.03 | 87.16 | 94.12 |
| AA/% | – | – | 71.02 | 70.84 | 79.16 | 68.47 | 78.48 | 80.39 | 89.57 |
| Kappa/% | – | – | 80.33 | 77.14 | 83.42 | 78.32 | 87.72 | 85.63 | 93.42 |
Table 6. Comparison of classification accuracies (in percent) provided by seven methods for Kennedy Space Center (part B).

| Ground | Sum | Training | LDM-FL | CF-SVM | CF-LDM | DTRF-SVM | DTRF-LDM | CFDTRF-SVM | CFDTRF-LDM |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Scrub | 761 | 30 | 93.73 | 98.42 | 94.80 | 99.41 | 99.01 | 99.76 | 98.08 |
| Swamp willow | 243 | 10 | 74.77 | 83.89 | 87.59 | 65.14 | 85.82 | 98.07 | 82.52 |
| Cabbage palm hammock | 256 | 10 | 84.72 | 86.24 | 88.07 | 82.80 | 100.00 | 96.34 | 98.80 |
| Cabbage palm/oak | 252 | 10 | 78.55 | 66.98 | 75.04 | 63.97 | 93.04 | 89.69 | 94.77 |
| Slash pine | 161 | 6 | 72.59 | 31.36 | 63.05 | 65.57 | 86.10 | 91.74 | 84.26 |
| Oak/broadleaf hammock | 229 | 9 | 96.68 | 70.39 | 64.02 | 94.79 | 100.00 | 75.75 | 99.09 |
| Hardwood swamp | 105 | 4 | 100.00 | 48.10 | 68.19 | 97.54 | 100.00 | 46.56 | 100.00 |
| Graminoid marsh | 431 | 17 | 90.59 | 90.15 | 90.89 | 98.25 | 93.98 | 97.06 | 97.41 |
| Spartina marsh | 520 | 21 | 100.00 | 99.50 | 96.84 | 100.00 | 100.00 | 100.00 | 100.00 |
| Cattail marsh | 404 | 16 | 70.08 | 96.72 | 96.79 | 89.76 | 75.69 | 98.08 | 100.00 |
| Salt marsh | 419 | 17 | 100.00 | 97.06 | 95.95 | 96.87 | 99.94 | 96.76 | 99.75 |
| Mud flats | 503 | 20 | 83.57 | 91.74 | 91.50 | 95.26 | 92.08 | 98.38 | 94.99 |
| Water | 927 | 37 | 98.57 | 100.00 | 100.00 | 99.89 | 99.89 | 99.78 | 100.00 |
| OA/% | – | – | 90.17 | 90.07 | 91.02 | 92.62 | 95.24 | 95.89 | 97.33 |
| AA/% | – | – | 87.99 | 81.58 | 85.60 | 88.40 | 94.27 | 91.38 | 96.13 |
| Kappa/% | – | – | 89.05 | 88.92 | 89.99 | 91.77 | 94.70 | 95.42 | 97.03 |
