JL-GFDN: A Novel Gabor Filter-Based Deep Network Using Joint Spectral-Spatial Local Binary Pattern for Hyperspectral Image Classification

1 Jiangxi Province Key Laboratory of Water Information Cooperative Sensing and Intelligent Processing, Nanchang Institute of Technology, Nanchang 330108, China
2 Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
3 Division of Geoinformatics, KTH Royal Institute of Technology, 10044 Stockholm, Sweden
4 Shanghai Key Lab. of Intelligent Sensing and Recognition, Shanghai Jiao Tong University, Shanghai 200240, China
5 Key Laboratory of Optic-Electronic and Communication, Jiangxi Science and Technology Normal University, Nanchang 330013, China
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(12), 2016; https://doi.org/10.3390/rs12122016
Submission received: 12 May 2020 / Revised: 11 June 2020 / Accepted: 20 June 2020 / Published: 23 June 2020
(This article belongs to the Special Issue Classification and Feature Extraction Based on Remote Sensing Imagery)

Abstract

The traditional local binary pattern (LBP; hereinafter also called the two-dimensional local binary pattern, 2D-LBP) is unable to depict the spectral characteristics of a hyperspectral image (HSI). To remedy this deficiency, this paper develops a joint spectral-spatial 2D-LBP feature (J2D-LBP) by averaging three different 2D-LBP features in a three-dimensional hyperspectral data cube. J2D-LBP is then added into the Gabor filter-based deep network (GFDN), yielding a novel classification method, JL-GFDN. Different from the original GFDN framework, JL-GFDN further fuses the spectral and spatial features together for HSI classification. Three real data sets are adopted to evaluate the effectiveness of JL-GFDN, and the experimental results verify that (i) JL-GFDN achieves a better classification accuracy than the original GFDN; and (ii) J2D-LBP is more effective for HSI classification than the traditional 2D-LBP.

1. Introduction

With the development of technology, more and more tools have been invented for earth observation. As an important one, the hyperspectral sensor, which can obtain spatially and spectrally continuous data simultaneously, has been widely used in various applications, such as medical diagnosis and target detection [1]. In particular, classification using hyperspectral images (HSIs) has attracted much attention in recent years. An HSI often has several hundred spectral bands per pixel, so it contains much more information than a traditional optical image and is thus more beneficial for classification. However, classifying HSIs is still a challenging task, since their high dimensionality easily produces the Hughes phenomenon [2]. Moreover, HSIs also contain highly complex spatial structures and interpixel relations [3].
To overcome the aforementioned shortcomings and enhance the classification accuracy, a large number of classification algorithms have been developed to date. To deal with the issue of high dimensionality, principal component analysis (PCA) is often adopted to reduce the redundant spectral features of HSI [4]. Li et al. [5] proposed a minimum estimated abundance covariance-based supervised band-selection algorithm using the extreme learning machine (ELM). In [6], the criterion of linear prediction error (LPE) was applied to band selection. Extensive experiments have demonstrated that band selection is helpful for HSI classification.
In earlier studies, only the spectral characteristics of HSI were employed for classification. Although this type of method is efficient, the spectral characteristics of HSI are easily affected by environmental factors (such as illumination and moisture conditions) and by the materials themselves [7]. As a result, how to efficiently combine the spatial and spectral features of HSI for classification has drawn researchers' attention during the last decade. In [8], composite kernels (CK) were used to combine spectral and spatial information, and a support vector machine (SVM) was then adopted for classification. Kang et al. constructed a spectral-spatial classification framework based on edge-preserving filtering in [9]. In [10], a morphological profile (MP) generated by certain morphological operators was utilized as a spatial feature for HSI classification. In [11], extended morphological attribute profiles (EMAP), derived from a series of attribute profiles, were proposed to construct spectral-spatial features for classification. In [12], spectral-spatial kernel sparse representation was proposed for hyperspectral data classification.
As one kind of spatial feature, the Gabor filter (GF) has recently been applied successfully to HSI classification due to its ability to provide more detailed information about HSI [13]. In [14], a GF-based deep network (GFDN) using a stacked sparse autoencoder (SSAE) and a softmax classifier was developed for HSI classification. Although various experimental results have verified its effectiveness, the features used in GFDN still need to be expanded. This is because GF can only capture the global texture features of HSI, e.g., orientation and scale [15]. Therefore, more features, especially local texture features, need to be introduced into GFDN to further improve its classification performance. As a feature extraction method, the local binary pattern (LBP; hereinafter "2D-LBP") [16] can be viewed as a local operator that captures the local texture features of an image, e.g., corners and knots [15]. In fact, combining GF and 2D-LBP has already been proven useful for HSI classification in [15]. However, 2D-LBP features cannot effectively reflect the continuity of the spectral signatures of a hyperspectral data cube [7]. To overcome this drawback, Jia et al. developed a 3D-LBP for HSI classification [7]. Here, we propose another novel feature, the joint 2D-LBP (i.e., J2D-LBP), which averages different 2D-LBP values to simultaneously describe the spectral and spatial characteristics of HSI. It is then added into GFDN to further improve GFDN's performance. Therefore, to some extent, this paper can also be regarded as an extension of the work in [14]. The contributions of this paper are summarized as follows:
  • To simultaneously depict the spectral and spatial characteristics of HSI, a joint spectral-spatial 2D-LBP feature, i.e., J2D-LBP, is proposed based on averaging three 2D-LBP values associated with different planes.
  • To further improve the performance of the original GFDN method, a novel HSI classification method, JL-GFDN, is built by integrating J2D-LBP into the GFDN framework.
  • Three different data sets are used to validate the effectiveness of JL-GFDN. Experimental results show that (i) the performance of J2D-LBP is superior to 2D-LBP and 3D-LBP; (ii) JL-GFDN holds a better classification performance than the traditional GFDN method.
The rest of this paper is organized as follows. Section 2 introduces the theoretical background. Section 3 presents the details of the proposed method. In Section 4, experimental results are given and analyzed. Section 5 concludes this paper.

2. Theoretical Background

2.1. Two-Dimensional Local Binary Pattern (2D-LBP)

The goal of the two-dimensional local binary pattern (2D-LBP) is to capture the texture characteristics of a grayscale image, and it has been widely applied in image processing. To extract the 2D-LBP features of a grayscale image, a 3 × 3 window is usually used, and the 2D-LBP features are then obtained by the following formula [16]:
$$\mathrm{2D\text{-}LBP}_{c,p} = \sum_{i=0}^{p-1} d(g_i - g_c)\, 2^i, \qquad d(x) = \begin{cases} 1, & x \ge 0, \\ 0, & x < 0, \end{cases}$$
where g_c is the gray value of the central pixel c, g_i are the gray values of the pixels surrounding c, p is the number of pixels in the local window, and d(x) is the sign of the difference. If the region is constant, the difference values are zero in all directions. Figure 1 gives an illustration of calculating the 2D-LBP features.
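A minimal sketch of this computation for a single 3 × 3 patch; the clockwise neighbor ordering starting at the top-left pixel is an assumed convention (any fixed ordering yields a valid 2D-LBP code):

```python
import numpy as np

def lbp_2d(patch):
    """2D-LBP code of the central pixel of a 3x3 patch.

    Each neighbor g_i with g_i >= g_c contributes 2^i to the code,
    following the formula above; neighbors are read clockwise
    starting from the top-left pixel (an assumed convention).
    """
    gc = patch[1, 1]  # gray value of the central pixel c
    neighbors = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                 patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum(int(g >= gc) << i for i, g in enumerate(neighbors))

patch = np.array([[6, 5, 2],
                  [7, 6, 1],
                  [9, 8, 7]])
code = lbp_2d(patch)  # an 8-bit code in [0, 255]
```

Note that for a constant region every difference is zero, so d(0) = 1 for all eight neighbors and the code is 255.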

2.2. Gabor Filter-Based Deep Network (GFDN)

Frequency and orientation features in the spatial domain can be effectively captured by the Gabor filter (GF), which is often described as the product of a sinusoidal plane wave and a Gaussian function [17], i.e.,
$$\Phi_{u,v}(a,b) = \frac{f^2}{\pi\gamma\eta}\, \exp\!\big(-(\alpha^2 a'^2 + \beta^2 b'^2)\big)\, \exp\!\big(j 2\pi f a'\big),$$
$$a' = \Big(a - \frac{m+1}{2}\Big)\cos\theta + \Big(b - \frac{n+1}{2}\Big)\sin\theta,$$
$$b' = -\Big(a - \frac{m+1}{2}\Big)\sin\theta + \Big(b - \frac{n+1}{2}\Big)\cos\theta,$$
where f and θ denote the frequency and orientation of the Gaussian kernel function, respectively. α and β represent the sharpness of the Gaussian function: the former is measured along the major axis parallel to the wave, whereas the latter is measured along the minor axis perpendicular to the wave. To keep the ratio between frequency and sharpness fixed, the constraints γ = f/α and η = f/β are usually imposed. In most previous research [18], the parameters α and β are set equal to each other, and γ and η are often set to √2. Moreover, m and n, representing the size of the Gabor filter, are both equal to an odd number d.
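Under these definitions, a Gabor kernel can be sketched as follows; γ = η = √2 matches the common setting above, while f = 0.5 and d = 5 are illustrative values, not necessarily the authors' exact settings:

```python
import numpy as np

def gabor_kernel(f, theta, d=5, gamma=np.sqrt(2), eta=np.sqrt(2)):
    """Sketch of the Gabor kernel defined above, with m = n = d (odd).

    alpha = f / gamma and beta = f / eta enforce the fixed ratio
    between frequency and sharpness described in the text.
    """
    alpha, beta = f / gamma, f / eta
    m = n = d
    a, b = np.meshgrid(np.arange(1, m + 1), np.arange(1, n + 1), indexing="ij")
    # rotate coordinates about the kernel center (m+1)/2, (n+1)/2 by theta
    a0, b0 = a - (m + 1) / 2, b - (n + 1) / 2
    ar = a0 * np.cos(theta) + b0 * np.sin(theta)
    br = -a0 * np.sin(theta) + b0 * np.cos(theta)
    envelope = np.exp(-(alpha**2 * ar**2 + beta**2 * br**2))  # Gaussian part
    carrier = np.exp(1j * 2 * np.pi * f * ar)                 # sinusoidal part
    return (f**2 / (np.pi * gamma * eta)) * envelope * carrier

k = gabor_kernel(f=0.5, theta=np.pi / 4)  # one of the eight orientations
```

At the kernel center the rotated coordinates vanish, so the magnitude there equals f²/(πγη).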
Subsequently, the GF features are used together with the original data to construct GFDN [14], where Kang et al. directly input them into the deep network SSAE and then adopt a softmax classifier for HSI classification. The red dashed rectangle in Figure 2 presents the schematic of GFDN. More details can be found in [14].

3. Methodology

3.1. Joint Two-Dimensional Local Binary Pattern (J2D-LBP)

As mentioned above, 2D-LBP is able to describe the texture characteristics of HSI, yet is unable to describe its spectral characteristics. To solve this issue, we propose a joint spectral-spatial 2D-LBP feature (J2D-LBP) based on the traditional 2D-LBP. Figure 3 presents the detailed calculation procedure, in which the left cube denotes a 3 × 3 × 3 hyperspectral data cube.
From Figure 3, it can easily be seen that P is not only the center of the blue plane z, but also the center of the cube. However, the 2D-LBP calculation is performed only on z. Naturally, the information associated with the spectral bands, i.e., the red plane x and the green plane y, is missed. Therefore, we compute the 2D-LBP values of all three planes through P (i.e., x, y, and z). The average of these values is then taken as the J2D-LBP value of P, i.e.,
$$\mathrm{J2D\text{-}LBP}_P = \frac{\mathrm{2D\text{-}LBP}_x + \mathrm{2D\text{-}LBP}_y + \mathrm{2D\text{-}LBP}_z}{3},$$
where 2D-LBP_i (i = x, y, z) denotes the 2D-LBP value of plane i. When 2D-LBP_x and 2D-LBP_y are both equal to 2D-LBP_z, J2D-LBP reduces to the traditional 2D-LBP.
In fact, this way of calculating J2D-LBP is reasonable and effective. Since spectral information plays an important role in HSI classification, the 2D-LBP_x and 2D-LBP_y values of P can serve as metrics describing the relationship between P and its neighboring spectral bands. On the other hand, the averaging operator is necessary to keep the value of J2D-LBP within [0, 255].
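The J2D-LBP computation above can be sketched as follows; which cube axes are treated as spectral is an assumption about the data layout (rows, columns, bands):

```python
import numpy as np

def lbp_patch(patch):
    """Minimal 2D-LBP of a 3x3 patch's center (clockwise neighbor order)."""
    gc = patch[1, 1]
    nb = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
          patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum(int(g >= gc) << i for i, g in enumerate(nb))

def j2d_lbp(cube):
    """J2D-LBP value of the central pixel P of a 3x3x3 data cube.

    The three orthogonal 3x3 planes through P (one spatial, two
    spectral) each yield a 2D-LBP code, and the codes are averaged so
    the result stays in [0, 255], as in the text.
    """
    planes = (cube[:, :, 1],   # spatial plane z (the usual 2D-LBP plane)
              cube[1, :, :],   # spectral plane x (assumed axis layout)
              cube[:, 1, :])   # spectral plane y (assumed axis layout)
    return sum(lbp_patch(p) for p in planes) / 3.0
```

For a constant cube, each plane's code is 255, so the averaged J2D-LBP value is also 255, matching the 2D-LBP behavior.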

3.2. The Proposed Classification Method

In [14], the usefulness of GFDN for HSI classification was demonstrated. In spite of this, more features that can depict the local and spectral characteristics of HSI still need to be applied to GFDN. Therefore, in this paper, we further extend the work of [14] by adding J2D-LBP into GFDN, and propose a novel classification method, the J2D-LBP-based GFDN (JL-GFDN). Figure 2 presents the schematic of JL-GFDN. Overall, JL-GFDN consists of the following steps:
  • Input the HSI image (Size: N × M × B).
  • Use PCA to reduce the dimensions of original HSI data and obtain the first three principal components (Size: N × M × 3).
  • Extract GF features based on the first three principal components (Size: N × M × 120).
  • Extract J2D-LBP features based on the first three principal components (Size: N × M × 3).
  • Fuse the original HSI data, GF and J2D-LBP features together, and obtain one combined vector (Size: 1 × S, S = N × M × (B + 120 + 3)).
  • Feed the combined vector into the deep network to learn deep features and classify the HSI.
  • Output the classification result (Size: N × M).
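A minimal sketch of the feature-fusion steps above, with random arrays standing in for the precomputed PCA, Gabor, and J2D-LBP feature maps; the SSAE and softmax stages are omitted, and the per-pixel reshape at the end is an assumption about how the combined vector is fed to the network:

```python
import numpy as np

# Toy HSI size (rows N, columns M, bands B); values are illustrative only.
N, M, B = 16, 16, 20
hsi = np.random.rand(N, M, B)             # original HSI data
gabor_feat = np.random.rand(N, M, 120)    # GF features of the first 3 PCs
j2dlbp_feat = np.random.rand(N, M, 3)     # one J2D-LBP map per PC

# Fuse the original data, GF, and J2D-LBP features along the feature axis,
# giving S = N x M x (B + 120 + 3) values in total.
fused = np.concatenate([hsi, gabor_feat, j2dlbp_feat], axis=2)

# One feature vector per pixel for the deep-network stage (assumed layout).
samples = fused.reshape(N * M, B + 120 + 3)
```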
In this paper, all the parameters and the training procedure adopted in JL-GFDN are the same as those in [14]. Specifically, for the Gabor filter, eight angles [0, π/8, π/4, 3π/8, π/2, 5π/8, 3π/4, 7π/8] are used to set θ in the experiments. The scale of the Gabor filter is set to 5 and the size of the Gabor filter is set to 55. As for the deep network part, two sparse AE layers, each composed of 100 hidden units, with 400 epochs of fine-tuning and iterations, are utilized to construct the SSAE model. The adopted parameters, i.e., the weight decay penalty λ and the sparsity parameter ρ, are set to 1 × 10⁻⁴ and 0.05, respectively. It should be stressed that, in [14], virtual samples are also constructed and exploited for training the deep network of GFDN. Briefly, the virtual sample v_i is built as the weighted average of two real training samples x_i and x_j from the same class:
$$v_i = A_{ij}\, x_i + (1 - A_{ij})\, x_j,$$
in which
$$A_{ij} = \exp\!\big(-\|x_i - x_j\|^2 / 2\sigma^2\big).$$
Here, A_ij is the affinity between x_i and x_j.
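The virtual-sample construction can be sketched as follows; σ = 1 is an illustrative value, not necessarily the setting used in [14]:

```python
import numpy as np

def virtual_sample(xi, xj, sigma=1.0):
    """Virtual sample from two same-class training samples.

    The affinity A_ij weights the convex combination of xi and xj,
    following the two equations above; identical samples give
    A_ij = 1, so the virtual sample coincides with xi.
    """
    a = np.exp(-np.linalg.norm(xi - xj) ** 2 / (2 * sigma ** 2))
    return a * xi + (1 - a) * xj

xi = np.array([1.0, 0.0])
xj = np.array([0.0, 1.0])
v = virtual_sample(xi, xj)  # lies on the segment between xi and xj
```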
In [14], Kang et al. named the GFDN variant that does not use virtual samples GFDN*. Similarly, in this paper, we name the method using virtual samples JL-GFDN and the method without virtual samples JL-GFDN*.

4. Experiments

In this section, the effectiveness of JL-GFDN* and JL-GFDN for HSI classification is investigated. Four methods, namely GFDN*, 2DLBP-GFDN*, 3DLBP-GFDN*, and GFDN, are adopted for comparison. Here, 2DLBP-GFDN* and 3DLBP-GFDN* mean that 2D-LBP and 3D-LBP, respectively, are directly added into GFDN*. It should be noted that, although recent research [19] has shown that disjoint sample selection yields a more realistic classification result than random sample selection, for a fair comparison with GFDN we still use random sample selection to train the network, as Kang et al. did in [14].

4.1. Data Sets

Three data sets are used to evaluate the performance of JL-GFDN* and JL-GFDN. The first is the Indian Pines image, which was acquired by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor over the agricultural Indian Pines test site in Northwestern Indiana. Its size is 145 × 145 pixels with 200 bands after removing 20 water absorption bands. The second and third data sets are the University of Pavia and Pavia Center images, which were both acquired by the Reflective Optics System Imaging Spectrometer (ROSIS-03) over the campus of the University of Pavia and the center of Pavia, respectively. The University of Pavia image is 610 × 345 pixels with 103 bands after removing 12 noisy bands, and the Pavia Center image is 1096 × 715 pixels with 102 bands. Figure 4, Figure 5 and Figure 6 show their pseudo-color images and ground truth, respectively. Moreover, to quantitatively compare and analyze the experimental results, the overall accuracy (OA), average accuracy (AA), and Kappa coefficient are adopted.

4.2. Experimental Analysis and Discussion

4.2.1. Experimental Results of Indian Pines

The first experiment is performed on Indian Pines, which includes 16 reference classes. Similar to [14], 8% of the samples of each class are randomly chosen for constructing virtual samples and training, and the rest are used as test samples.
Figure 7 shows the classification maps of Indian Pines. Overall, JL-GFDN achieves the best classification result, with the highest OA value among the six methods, i.e., 98.86%. Specifically, GFDN* only uses the GF features, so its classification result in Figure 7a is the worst. By combining GF and 2D-LBP, 2DLBP-GFDN* achieves a better result than GFDN* in Figure 7b. This agrees with the conclusion in [15] that 2D-LBP+GF is more useful for HSI classification than 2D-LBP or GF alone. By exploiting the spectral characteristics of HSI, 3DLBP-GFDN* and JL-GFDN* obtain higher OA values than GFDN* and 2DLBP-GFDN*; see Figure 7c,e. This not only verifies the effectiveness of spectral characteristics for HSI classification, but also demonstrates the usefulness of J2D-LBP. Since more training samples are used in GFDN, the result in Figure 7d is also satisfactory. The reason why JL-GFDN has the highest OA value is that the spectral characteristics and virtual samples are used simultaneously. Table 1 presents the quantitative results of these methods, averaged over ten experiments. Note that the quantitative results of GFDN* and GFDN are taken directly from [14], since the parameters and training procedure of these two methods are the same in [14] and in this paper. Obviously, all three quantitative values of JL-GFDN are higher than those of the other five methods. It should also be noted that JL-GFDN*, without using the virtual samples, outperforms 2DLBP-GFDN* by 0.48%, 1.42%, and 0.55% in terms of OA, AA, and Kappa, respectively. This directly verifies that J2D-LBP is more useful than 2D-LBP for HSI classification.
To better understand the advantage of J2D-LBP over 2D-LBP, we further present their visualizations computed from the first three principal components of Indian Pines in Figure 8. Since J2D-LBP takes the spectral information into account, it is observable that, in comparison with 2D-LBP (see Figure 8a), J2D-LBP (see Figure 8d) offers a stronger discrimination between the categories and the background, e.g., in the region marked by the red ellipses. The same conclusion can be drawn by comparing Figure 8b,e: as the green ellipses show, J2D-LBP detects more target pixels than 2D-LBP. Comparing Figure 8c,f, we can also find that more target pixels are highlighted against the background by J2D-LBP than by 2D-LBP, for example, in the region marked by the blue ellipses. This directly verifies the effectiveness of J2D-LBP for HSI.
Overall, the novel feature J2D-LBP has a better capacity for distinguishing categories from the background than 2D-LBP, and thereby holds potential for HSI classification. However, compared to 2D-LBP, calculating J2D-LBP is more time-consuming, since it requires three different 2D-LBP values, as shown in Figure 3.

4.2.2. Experimental Results of the University of Pavia

The second experiment is carried out on the University of Pavia, which is composed of nine reference classes. Two hundred samples of each class are selected as training samples. The classification maps and quantitative results are shown in Figure 9 and Table 2, respectively.
Comparing the six subgraphs in Figure 9, it can be seen that JL-GFDN performs best, with the highest OA value of 98.96%. Apart from the GF features, no additional features are adopted in GFDN*, so it has the lowest classification accuracy among these methods (see Figure 9a). Compared to GFDN*, 2DLBP-GFDN* has a higher OA value in Figure 9b. Once again, this demonstrates that fusing the 2D-LBP and GF features is an effective way to improve the classification accuracy of GFDN*. Because spectral information is considered in 3DLBP-GFDN*, its OA value of 98.31% (see Figure 9c) is also higher than those of the first two methods. Analogously, JL-GFDN* utilizes the spectral information as well, and its OA value is higher than that of 3DLBP-GFDN* in Figure 9e. This strongly suggests that J2D-LBP is more beneficial for HSI classification than 3D-LBP. More samples are exploited to train GFDN, so it has a better classification result in Figure 9d, whose OA value surpasses that of JL-GFDN* by 0.06%. After training JL-GFDN in the same manner (i.e., with virtual samples), the OA value obtained in Figure 9f is clearly higher than that in Figure 9d. This indicates that the improved classification accuracy results from the use of J2D-LBP in JL-GFDN, and once more proves the effectiveness of J2D-LBP for HSI classification. In fact, the same conclusion can be reached by comparing JL-GFDN* and GFDN*. In a nutshell, JL-GFDN holds the best classification performance among these methods, and its Kappa value is also the highest in Table 2 (i.e., 0.984).
Without loss of generality, we now analyze the visualizations of 2D-LBP and J2D-LBP presented in Figure 10, both computed from the first three principal components of the University of Pavia. Obviously, J2D-LBP highlights target pixels against their surrounding background more effectively than 2D-LBP, as shown by the red ellipses in Figure 10a,d. Once again, this demonstrates that the scheme for calculating J2D-LBP is reasonable and beneficial. Comparing Figure 10b,e, it is found that different categories can be separated to some extent in Figure 10e. Taking the Bitumen and Shadows classes within the green ellipses as an example, more gray pixels are detected by J2D-LBP, which means that shadows can be detected more easily by J2D-LBP; no such phenomenon is found for 2D-LBP, where almost all of the pixels are white. This directly verifies that, by using the additional spectral information, J2D-LBP can discriminate different categories more effectively than 2D-LBP. Comparing the blue ellipses of Figure 10c,f, it can be seen that more target pixels (i.e., Meadows) are detected by J2D-LBP than by 2D-LBP, again demonstrating the effectiveness of J2D-LBP for HSI classification.

4.2.3. Experimental Results of Pavia Center

The third experiment is performed on Pavia Center, which is also composed of nine reference classes. One hundred samples of each class are selected as training samples. Figure 11 and Table 3 present the classification results of the different methods, respectively.
Generally speaking, the classification map of JL-GFDN in Figure 11f is still the best among these methods, with an OA value of 98.23%. Comparing Figure 11a,b, one can find that 2D-LBP is indeed useful for HSI classification. Different from 2D-LBP, the spectral characteristics of the HSI are incorporated into 3D-LBP, so 3DLBP-GFDN* achieves a better classification result than 2DLBP-GFDN*; see Figure 11c. Similarly, J2D-LBP also takes the spectral characteristics into account, so the result of JL-GFDN* in Figure 11e is better than that of GFDN*. By comparing Figure 11c,e, we can find that J2D-LBP is more effective for HSI classification than 3D-LBP, because the OA value of JL-GFDN* is higher than that of 3DLBP-GFDN*. Interestingly, the OA value of GFDN in Figure 11d is lower than those of 2DLBP-GFDN*, 3DLBP-GFDN*, and JL-GFDN*, although more training samples are used. This demonstrates that fusing the local and spectral characteristics of the HSI is a good strategy for HSI classification. Meanwhile, comparing Figure 11d,f verifies again that JL-GFDN holds a better classification performance than GFDN: in Table 3, the OA value of JL-GFDN is 1.04% higher than that of GFDN. Figure 12 further presents the visualizations of 2D-LBP and J2D-LBP on this data set. One can still find that J2D-LBP reflects the characteristics of the HSI better than 2D-LBP.

5. Conclusions

In this paper, a novel feature, J2D-LBP, was designed based on the traditional 2D-LBP. Compared to 2D-LBP, J2D-LBP reflects not only the spatial characteristics of HSI but also its spectral characteristics. By adding J2D-LBP into the original GFDN framework, a novel method, JL-GFDN, was further proposed for HSI classification. Experiments performed on three HSI images verified that (i) the classification performance of JL-GFDN is better than that of GFDN; and (ii) J2D-LBP is more beneficial for HSI classification than 2D-LBP. In the future, the rotation invariance of J2D-LBP should be further considered for HSI classification, and disjoint sample selection also needs to be adopted to further verify the performance of JL-GFDN*.

Author Contributions

T.Z. and F.Y. conceived and performed the experiments. P.Z., W.Z., and Z.Y. supervised the research and contributed to the organization of article. T.Z. drafted the manuscript, and all authors revised and approved the final version of the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported by the Jiangxi Province Key Laboratory of Water Information Cooperative Sensing and Intelligent Processing under No.2016WICSIP031, the National Natural Science Foundation of China (Grant No. 61866016), and the ESA/NRCSS Dragon-4 program (Grant No. 32235).

Acknowledgments

The authors would like to thank the editors and the anonymous reviewers for their valuable comments that significantly improved the quality of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kaufman, J.R.; Eismann, M.T.; Celenk, M. Assessment of Spatial–Spectral Feature-Level Fusion for Hyperspectral Target Detection. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 2534–2544. [Google Scholar] [CrossRef]
  2. Hughes, G. On the Mean Accuracy of Statistical Pattern Recognizers. IEEE Trans. Inf. Theory 1968, IT-14, 55–63. [Google Scholar] [CrossRef] [Green Version]
  3. Aptoula, E.; Ozdemir, M.C.; Yanikoglu, B. Deep Learning With Attribute Profiles for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1970–1974. [Google Scholar]
  4. Licciardi, G.; Marpu, P.R.; Chanussot, J.; Benediktsson, J.A. Linear Versus Nonlinear PCA for the Classification of Hyperspectral Data Based on the Extended Morphological Profiles. IEEE Geosci. Remote Sens. Lett. 2012, 9, 447–451. [Google Scholar] [CrossRef] [Green Version]
  5. Li, J.J.; Kingsdorf, B.; Du, Q. Band Selection for Hyperspectral Image Classification Using Extreme Learning Machine. Proc. SPIE 2017, 10198. [Google Scholar] [CrossRef]
  6. Du, Q.; Yang, H. Similarity-based Unsupervised Band Selection for Hyperspectral Image Analysis. IEEE Geosci. Remote Sens. Lett. 2008, 5, 564–568. [Google Scholar] [CrossRef]
  7. Jia, S.; Jie, H.; Zhu, J.S.; Jia, X.P.; Li, Q.Q. Three-dimensional Local Binary Patterns for Hyperspectral Imagery Classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 2399–2413. [Google Scholar] [CrossRef]
  8. Camps-Valls, G.; Gomez-Chova, L.; Munoz-Mari, J.; Vila-Frances, J.; Calpe-Maravilla, J. Composite Kernels for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2006, 3, 93–97. [Google Scholar] [CrossRef]
  9. Kang, X.D.; Li, S.T.; Benediktsson, J.A. Spectral-Spatial Hyperspectral Image Classification with Edge-Preserving Filtering. IEEE Trans. Geosci. Remote Sens. 2014, 52, 2666–2677. [Google Scholar] [CrossRef]
  10. Fauvel, M.; Benediktsson, J.A.; Chanussot, J.; Sveinsson, J.R. Spectral and Spatial Classification of Hyperspectral Data Using SVMs and Morphological Profiles. IEEE Trans. Geosci. Remote Sens. 2008, 46, 3804–3814. [Google Scholar] [CrossRef] [Green Version]
  11. Mura, M.D.; Benediktsson, J.A.; Waske, B.; Bruzzone, L. Extended Profiles with Morphological Attribute Filters for the Analysis of Hyperspectral Data. Int. J. Remote Sens. 2010, 31, 5975–5991. [Google Scholar] [CrossRef]
  12. Liu, J.J.; Wu, Z.B.; Wei, Z.H.; Xiao, L.; Sun, L. Spatial-Spectral Kernel Sparse Representation for Hyperspectral Image Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 6, 2462–2471. [Google Scholar] [CrossRef]
  13. Chen, Y.S.; Zhu, L.; Ghamisi, P.; Jia, X.P.; Li, G.Y.; Tang, L. Hyperspectral Images Classification with Gabor Filtering and Convolutional Neural Network. IEEE Geosci. Remote Sens. Lett. 2017, 14, 2355–2359. [Google Scholar] [CrossRef]
  14. Kang, X.X.; Li, C.C.; Li, S.T.; Lin, H. Classification of Hyperspectral Images by Gabor Filtering Based Deep Network. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 1166–1178. [Google Scholar] [CrossRef]
  15. Li, W.; Chen, C.; Su, H.J.; Du, Q. Local Binary Patterns and Extreme Learning Machine for Hyperspectral Imagery Classification. IEEE Trans. Geosci. Remote Sens. 2015, 53, 3681–3693. [Google Scholar] [CrossRef]
  16. Ojala, T.; Pietikainen, M.; Maenpaa, T. Multiresolution Gray-scale and Rotation Invariant Texture Classification With Local Binary Patterns. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 971–987. [Google Scholar] [CrossRef]
  17. Grigorescu, S.E.; Petkov, N.; Kruizinga, P. Comparison of Texture Features Based on Gabor Filters. IEEE Trans. Image Process. 2002, 11, 1160–1167. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  18. Liu, C.; Wechsler, H. Gabor Feature Based Classification Using the Enhanced Fisher Linear Discriminant Model for Face Recognition. IEEE Trans. Image Process. 2002, 11, 467–476. [Google Scholar] [PubMed] [Green Version]
  19. Paoletti, M.E.; Haut, J.M.; Plaza, J.; Plaza, A. Deep Learning Classifiers for Hyperspectral Imaging: A Review. ISPRS J. Photogramm. Remote Sens. 2019, 158, 279–317. [Google Scholar] [CrossRef]
Figure 1. Illustration of calculating the 2D-LBP features. Note that the 2D-LBP code is here computed along the clockwise direction (i.e., the red arrow).
Figure 2. The schematic of JL-GFDN. The red dashed rectangle denotes the original GFDN. N is the number of rows of the HSI, M the number of columns, and P the number of bands.
Figure 3. Illustration of calculating the J2D-LBP features.
Figure 4. The data set Indian Pines. (a) the three-band color composite; (b) the corresponding ground truth.
Figure 5. The data set University of Pavia. (a) the three-band color composite; (b) the corresponding ground truth.
Figure 6. The data set Pavia Center. (a) the three-band color composite; (b) the corresponding ground truth.
Figure 7. Classification results (OA in %) of Indian Pines. (a) GFDN*, OA = 97.52; (b) 2DLBP-GFDN*, OA = 98.04; (c) 3DLBP-GFDN*, OA = 98.64; (d) GFDN, OA = 98.47; (e) JL-GFDN*, OA = 98.67; (f) JL-GFDN, OA = 98.86; (g) the corresponding ground truth.
Figure 8. The visualizations of 2D-LBP and J2D-LBP for the first three principal components of Indian Pines. (a) 2D-LBP of the first principal component; (b) 2D-LBP of the second principal component; (c) 2D-LBP of the third principal component; (d) J2D-LBP of the first principal component; (e) J2D-LBP of the second principal component; (f) J2D-LBP of the third principal component. The red, green, and blue ellipses denote the comparison regions.
Figure 9. Classification results (OA in %) of the University of Pavia. (a) GFDN*, OA = 97.69; (b) 2DLBP-GFDN*, OA = 98.24; (c) 3DLBP-GFDN*, OA = 98.31; (d) GFDN, OA = 98.83; (e) JL-GFDN*, OA = 98.77; (f) JL-GFDN, OA = 98.96; (g) the corresponding ground truth.
Figure 10. The visualizations of 2D-LBP and J2D-LBP for the first three principal components of University of Pavia. (a) 2D-LBP of the first principal component; (b) 2D-LBP of the second principal component; (c) 2D-LBP of the third principal component; (d) J2D-LBP of the first principal component; (e) J2D-LBP of the second principal component; (f) J2D-LBP of the third principal component. The red, green, and blue ellipses denote the comparison regions.
Figure 11. Classification results (OA in %) of Pavia Center. (a) GFDN*, OA = 95.13; (b) 2DLBP-GFDN*, OA = 97.86; (c) 3DLBP-GFDN*, OA = 97.98; (d) GFDN, OA = 97.79; (e) JL-GFDN*, OA = 98.04; (f) JL-GFDN, OA = 98.23; (g) the corresponding ground truth.
Figure 12. The visualizations of 2D-LBP and J2D-LBP for the first three principal components of Pavia Center. (a) 2D-LBP of the first principal component; (b) 2D-LBP of the second principal component; (c) 2D-LBP of the third principal component; (d) J2D-LBP of the first principal component; (e) J2D-LBP of the second principal component; (f) J2D-LBP of the third principal component. The red, green, and blue ellipses denote the comparison regions.
Table 1. Classification accuracies of different methods on Indian Pines (in %). Here, GFDN* is the Gabor filter (GF)-based deep network without the virtual samples, 2DLBP-GFDN* is the GF- and 2DLBP-based deep network without the virtual samples, 3DLBP-GFDN* is the GF- and 3DLBP-based deep network without the virtual samples, GFDN is the GF-based deep network using the virtual samples, JL-GFDN* is the proposed method without the virtual samples, and JL-GFDN is the proposed method using the virtual samples.
Class | GFDN* | 2DLBP-GFDN* | 3DLBP-GFDN* | GFDN | JL-GFDN* | JL-GFDN
Alfalfa | 98.37 | 97.86 | 98.57 | 94.90 | 95.00 | 95.95
Corn-N | 95.94 | 97.36 | 97.50 | 97.17 | 98.34 | 97.74
Corn-M | 98.14 | 97.22 | 97.35 | 98.23 | 98.20 | 98.32
Corn | 98.19 | 97.34 | 98.76 | 98.60 | 97.34 | 99.08
Grass-P | 94.53 | 95.34 | 95.43 | 96.63 | 97.05 | 97.27
Grass-T | 97.71 | 99.73 | 99.12 | 98.79 | 99.40 | 99.25
Grass-P-M | 90.43 | 94.00 | 95.60 | 94.78 | 96.40 | 95.20
Hay-W | 99.15 | 99.84 | 99.91 | 99.93 | 99.70 | 99.77
Oats | 77.78 | 77.22 | 82.22 | 95.56 | 92.22 | 88.89
Soybean-N | 95.27 | 97.16 | 96.98 | 96.70 | 97.14 | 96.98
Soybean-M | 98.37 | 98.25 | 99.19 | 99.01 | 98.76 | 99.07
Soybean-C | 95.99 | 95.76 | 97.69 | 97.32 | 96.67 | 97.80
Wheat | 97.90 | 99.36 | 97.93 | 98.05 | 98.40 | 99.15
Woods | 99.36 | 99.96 | 99.97 | 99.94 | 99.82 | 99.91
Building-G-T-D | 97.11 | 97.64 | 98.17 | 98.57 | 98.14 | 98.65
Stone-S-T | 88.16 | 93.65 | 98.71 | 95.63 | 97.88 | 98.82
OA | 97.30 | 97.93 | 98.33 | 98.29 | 98.41 | 98.56
AA | 95.15 | 96.11 | 97.07 | 97.49 | 97.53 | 97.62
Kappa | 96.93 | 97.64 | 98.10 | 98.06 | 98.19 | 98.35
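The OA, AA, and Kappa rows reported in Tables 1–3 follow the standard definitions over a confusion matrix: OA is the fraction of correctly classified samples, AA is the mean of the per-class accuracies, and Kappa corrects OA for chance agreement. A small sketch using those standard formulas (not the authors' evaluation code):

```python
import numpy as np

def accuracy_metrics(conf):
    """Return (OA, AA, Kappa) from a confusion matrix.

    `conf[i, j]` counts samples of reference class i predicted as class j.
    OA    = trace / total
    AA    = mean of per-class accuracies (diagonal / row sums)
    Kappa = (OA - p_e) / (1 - p_e), with p_e the chance agreement
            computed from the row and column marginals.
    """
    conf = np.asarray(conf, dtype=float)
    total = conf.sum()
    oa = np.trace(conf) / total
    aa = np.mean(np.diag(conf) / conf.sum(axis=1))
    p_e = np.sum(conf.sum(axis=0) * conf.sum(axis=1)) / total ** 2
    kappa = (oa - p_e) / (1.0 - p_e)
    return oa, aa, kappa
```

Multiplying the three returned values by 100 gives them in percent, as in the tables.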
Table 2. Classification accuracies of different methods on the University of Pavia (in %). Here, GFDN* is the Gabor filter (GF)-based deep network without the virtual samples, 2DLBP-GFDN* is the GF- and 2DLBP-based deep network without the virtual samples, 3DLBP-GFDN* is the GF- and 3DLBP-based deep network without the virtual samples, GFDN is the GF-based deep network using the virtual samples, JL-GFDN* is the proposed method without the virtual samples, and JL-GFDN is the proposed method using the virtual samples.
Class | GFDN* | 2DLBP-GFDN* | 3DLBP-GFDN* | GFDN | JL-GFDN* | JL-GFDN
Asphalt | 98.35 | 98.74 | 97.81 | 98.47 | 98.18 | 98.80
Meadows | 96.36 | 96.54 | 97.08 | 97.99 | 98.21 | 98.62
Gravel | 98.97 | 99.26 | 99.59 | 99.87 | 99.60 | 99.55
Trees | 97.07 | 97.77 | 97.47 | 98.38 | 97.88 | 98.52
Metal sheets | 99.50 | 99.92 | 99.87 | 99.90 | 99.97 | 99.97
Bare Soil | 99.26 | 99.18 | 98.84 | 99.13 | 99.09 | 98.85
Bitumen | 99.76 | 99.86 | 99.95 | 99.96 | 99.58 | 99.69
Bricks | 98.62 | 98.33 | 98.28 | 98.62 | 98.28 | 98.73
Shadows | 97.74 | 99.91 | 99.91 | 99.93 | 99.89 | 99.80
OA | 97.58 | 97.81 | 97.86 | 98.51 | 98.48 | 98.81
AA | 98.40 | 98.83 | 98.76 | 99.14 | 98.97 | 99.17
Kappa | 96.79 | 97.08 | 97.15 | 98.01 | 97.96 | 98.40
Table 3. Classification accuracies of different methods on Pavia Center (in %). Here, GFDN* is the Gabor filter (GF)-based deep network without the virtual samples, 2DLBP-GFDN* is the GF- and 2DLBP-based deep network without the virtual samples, 3DLBP-GFDN* is the GF- and 3DLBP-based deep network without the virtual samples, GFDN is the GF-based deep network using the virtual samples, JL-GFDN* is the proposed method without the virtual samples, and JL-GFDN is the proposed method using the virtual samples.
Class | GFDN* | 2DLBP-GFDN* | 3DLBP-GFDN* | GFDN | JL-GFDN* | JL-GFDN
Water | 98.20 | 99.67 | 99.50 | 98.83 | 99.68 | 99.71
Trees | 90.51 | 92.07 | 92.71 | 91.28 | 91.72 | 93.48
Meadows | 94.43 | 95.41 | 93.86 | 94.47 | 95.61 | 95.57
Bitumen | 97.15 | 98.17 | 97.13 | 97.89 | 98.36 | 98.70
Bare soil | 97.88 | 99.14 | 98.79 | 98.00 | 98.88 | 98.97
Asphalt | 91.32 | 94.58 | 92.75 | 92.08 | 95.24 | 94.85
Self-blocking bricks | 97.88 | 97.45 | 97.37 | 97.66 | 97.31 | 97.11
Tiles | 96.27 | 96.84 | 98.01 | 96.54 | 97.44 | 97.63
Shadows | 99.43 | 99.71 | 99.89 | 99.51 | 99.38 | 99.65
OA | 96.72 | 97.90 | 98.02 | 97.18 | 98.09 | 98.22
AA | 95.90 | 97.00 | 96.67 | 96.25 | 97.07 | 97.30
Kappa | 95.41 | 97.04 | 97.21 | 96.04 | 97.30 | 97.49

Share and Cite
Zhang, T.; Zhang, P.; Zhong, W.; Yang, Z.; Yang, F. JL-GFDN: A Novel Gabor Filter-Based Deep Network Using Joint Spectral-Spatial Local Binary Pattern for Hyperspectral Image Classification. Remote Sens. 2020, 12, 2016. https://0-doi-org.brum.beds.ac.uk/10.3390/rs12122016
