Article

Deep Spectral Spatial Inverted Residual Network for Hyperspectral Image Classification

1 College of Communication and Electronic Engineering, Qiqihar University, Qiqihar 161000, China
2 College of Information and Communication Engineering, Dalian Nationalities University, Dalian 116000, China
* Author to whom correspondence should be addressed.
Remote Sens. 2021, 13(21), 4472; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13214472
Submission received: 14 September 2021 / Revised: 29 October 2021 / Accepted: 1 November 2021 / Published: 7 November 2021

Abstract

Convolutional neural networks (CNNs) have been widely used in hyperspectral image classification in recent years. The training of CNNs relies on a large amount of labeled sample data. However, the number of labeled samples of hyperspectral data is relatively small. Moreover, for hyperspectral images, fully extracting spectral and spatial feature information is the key to achieving high classification performance. To solve the above issues, a deep spectral spatial inverted residual network (DSSIRNet) is proposed. In this network, a data block random erasing strategy is introduced to alleviate the problem of limited labeled samples by data augmentation of small spatial blocks. In addition, a deep inverted residual (DIR) module for spectral spatial feature extraction is proposed, which locks the effective features of each layer while avoiding network degradation. Furthermore, a global 3D attention module is proposed, which can realize the fine extraction of spectral and spatial global context information under the condition of the same number of input and output feature maps. Experiments are carried out on four commonly used hyperspectral datasets. Extensive experimental results show that, compared with some state-of-the-art classification methods, the proposed method provides higher classification accuracy for hyperspectral images.

Graphical Abstract

1. Introduction

With the rapid development of remote sensing imaging technology, hyperspectral images (HSIs) have drawn increasing attention in recent years. An HSI can be viewed as a three-dimensional cube constructed from a large number of bands. Each sample of an HSI contains reflection information from hundreds of different spectral bands, which makes this kind of image suitable for many practical applications, such as precision agriculture [1], food analysis [2], anomaly detection [3], geological exploration [4,5], etc. In the past decade, hyperspectral image processing technology has become increasingly popular due to the development of machine learning. However, there are still some challenges in the field of HSI classification: (1) the training of deep learning models depends on a great quantity of labeled sample data, while the number of labeled samples in hyperspectral data is insufficient; (2) because HSIs contain rich spectral and spatial information, the problem that the spectral spatial features of HSIs are not effectively extracted still exists [6]. In addition, the phenomena of the same substance exhibiting different spectral curves and of different substances sharing the same spectral curve often occur.
Many traditional machine-learning-based HSI classification approaches use hand-crafted features to train the classifier [7]. Obviously, feature extraction and classifier training are separated. Representative hand-crafted features include local binary patterns (LBPs) [8], the histogram of oriented gradients (HOG) [9], the GIST descriptor [10], random forest features [11], and so on. Representative classifiers include logistic regression [12], the extreme learning machine [13], and the support vector machine (SVM) [14]. Hand-crafted features (which generally rely on engineering skill and domain expertise to design human-engineered descriptors such as shape, texture, color, spectral, and spatial details [7]) can effectively represent various attributes of images. However, the best feature sets for different data vary greatly. Moreover, manual involvement in designing the features considerably affects the classification process, as it requires a high level of domain expertise to design hand-crafted features [7]. Due to these limitations of hand-crafted features, many automatic feature extraction methods have emerged. Deep learning, for instance, is an automatic feature extraction approach that has achieved great success in recent years, and more and more researchers apply deep learning technology to HSI classification tasks.
HSI classification methods based on deep learning frameworks can be divided into supervised methods and unsupervised/semi-supervised methods [15]. Unsupervised classification methods rely only on the spectral or texture information of different ground objects in the HSI for feature extraction and then use the differences between features to perform classification. Several classification methods built on autoencoders (AEs) [16,17] have been developed for hyperspectral image classification. For example, in [18], the feature representation is adaptively learned from unlabeled data by learning a feature mapping function based on a stacked sparse autoencoder. Zhang et al. proposed an unsupervised HSI feature learning method based on a recursive autoencoder (RAE) network [19]. Mou et al. proposed an end-to-end fully convolutional–deconvolutional network based on the so-called encoder–decoder paradigm for unsupervised spectral spatial feature learning [20]. Moreover, the generative adversarial network (GAN) has further promoted the development of unsupervised classification methods [21]. For instance, Zhu et al. proposed an HSI classification method based on a GAN, in which the generator provides fake inputs close to the real inputs and the discriminator distinguishes real from fake inputs, achieving high classification accuracy [22]. Supervised classification uses samples of known categories to judge samples of unknown categories. Mou et al. proposed a recurrent neural network (RNN) model which can effectively treat hyperspectral samples as sequence data and then determine the sample category through network inference [23].
The HSI classification method based on CNN is also a typical supervised classification method. As a powerful neural network, CNN has a strong ability to automatically extract features. At present, most CNN-based hyperspectral classification methods focus on joint spectral spatial feature extraction. According to the implementation type, they can be divided into two categories: (1) extract spectral and spatial features separately and fuse them for classification; (2) extract spectral spatial features simultaneously for classification. There are many methods that extract spectral and spatial features separately. For example, Zhang et al. proposed a dual-channel CNN (DCCNN) model, which uses a one-dimensional CNN and a two-dimensional CNN to extract spectral and spatial features, respectively [24]. In [25], a dual-stream architecture is introduced, in which one stream encodes spectral features through a stacked denoising autoencoder and the other stream extracts spatial features through a deep CNN. In [26], a three-layer CNN is constructed to extract spectral spatial features by cascading spectral features and two-scale spatial features from shallow to deep layers. Then, multilayer spatial spectral features are fused to achieve complementary information. Finally, the fused features and classifiers are integrated into a unified network and optimized end-to-end. Yang et al. proposed a deep convolutional neural network with a double-branch structure to extract the joint spectral spatial features of HSI [27]. The above methods were able to extract spatial and spectral features but ignored the integrity of HSI. In this case, methods that extract spectral and spatial features at the same time show their advantage in combining the spectral spatial context and preserving the integrity of HSI information. For example, Chen et al. proposed a three-dimensional CNN (3D-CNN) architecture based on kernel sampling to extract the spectral spatial features of HSI simultaneously [28]. In [29], a fast densely connected spectral spatial convolution model for HSI classification is introduced. Zhong et al. proposed a spectral spatial residual network, in which spectral and spatial residual blocks continuously learn the main features from HSI to improve classification performance [30]. Wang et al. proposed an end-to-end alternately updated spectral and spatial convolution network with a cyclic feedback structure to learn the spectral and spatial features of HSI [31]. Fang et al. proposed a 3D-CNN model combining dense connections and a spectral attention mechanism and obtained good classification results [32]. The idea of attention in deep learning is essentially similar to the human selective visual attention mechanism [33,34,35]; its core goal is to select the more critical information from a large amount of data. Nowadays, attention mechanisms are also widely used in HSI classification tasks. For example, inspired by the squeeze-and-excitation (SE) block [33], in [36], two bilinear squeeze-and-excitation networks (SENets) with different compression strategies are used to improve the performance of HSI classification. However, deep learning models usually need a large number of training samples to achieve optimal performance. In order to solve the problem of limited labeled samples of HSI and avoid overfitting, researchers have carried out a lot of research. Geometric transformation and pixel transformation methods were commonly used in the early stage, and other methods were proposed later.
For instance, a GAN can generate samples similar to real data [37]. Wu et al. proposed a convolutional recurrent neural network (CRNN) model in which all training data and their pseudo-labels are first used to pretrain the model, and the model is then fine-tuned with limited labeled data [38]. In addition to these two models, CNNs have also been used to alleviate the small-sample problem of HSI. Li et al. proposed a pixel-pair feature (PPF) method, in which a trained 1D-CNN classifies the pixel pairs composed of the test center pixel and its surrounding pixels and determines the final label by a voting strategy [39]. Later, in [40], a pixel block pair (PBP) method was proposed to extract PBP features with a deep CNN model. Haut et al. introduced a random erasing method to increase the number of labeled data [41]. In [6], Zhang et al. proposed a data balance augmentation method, which can solve the problems of limited labeled data and unbalanced categories.
The training of a deep learning model depends on a large number of labeled samples. Data augmentation can alleviate the problem of limited labeled samples in a hyperspectral image dataset. Random erasing data augmentation has been applied to many scene tasks. Nevertheless, in some works [41] that apply it to hyperspectral image classification, the spatial scale of the model is too large, the complexity is high, and the classification accuracy is limited. The problem that spectral spatial features are not effectively extracted also remains. A 3D-CNN is more suitable for hyperspectral image classification tasks because it can represent spectral and spatial features at the same time. To address the challenges mentioned above, a deep spectral spatial inverted residual network (DSSIRNet) based on data block augmentation is proposed. The network is divided into three stages. In the first stage, a random erasing strategy is designed to augment the original input 3D cubes. The two components of the second stage realize effective joint spectral spatial feature extraction. In the third stage, hyperspectral image classification is realized by using the high-level semantic features learned previously.
The main contributions of this paper are as follows:
In order to make full use of the rich information in HSI, a DIR module is proposed. In this module, the low dimensional representation of the input data is extended to the high dimension, and the depthwise separable convolution is utilized for filtering. Then, the features are projected back to the low dimensional representation by standard convolution. This design is more suitable for high-dimensional feature extraction of HSI.
A global 3D attention module is designed and embedded into the DIR module, which fully considers the global context information of spectral and spatial dimension to further improve the classification performance.
The proposed DSSIRNet is based on a 3D-CNN model. Considering the computational complexity, a random erasing strategy based on small spatial blocks is introduced to increase the number of available labeled samples.
The rest of this paper is arranged as follows. The details of the proposed DSSIRNet algorithm are described in Section 2. In Section 3, the experimental results and analysis are provided. In Section 4, some conclusions are drawn.

2. Materials and Methods

The overall framework of the proposed method is shown in Figure 1. The set $S = \{X, Y\}$ is the input of the model, where $X \in \mathbb{R}^{H \times W \times B}$ is the 3D HSI cube with height $H$, width $W$, and number of spectral channels $B$, and $Y$ is the label vector of the HSI data. Firstly, $X$ is randomly divided into 3D blocks, which are composed of a labeled central sample and its adjacent samples and are recorded as a new sample set $P \in \mathbb{R}^{h \times w \times B}$, where $h$, $w$, and $B$ represent the height, width, and spectral dimension of the new 3D cube, respectively; $h$ and $w$ are set to the same value. Then, the training set $X_{train}$ is randomly sampled from the new sample set according to a certain proportion, and the validation set $X_{valid}$ is randomly sampled from the remainder according to the same proportion. Finally, the remaining samples are used as the test set $X_{test}$. Next, the DSSIRNet is trained with the training set $X_{train}$ to obtain the initial parameters of the model, and the parameters are continuously updated through the validation set $X_{valid}$ until the optimal parameters are obtained. Finally, the test set $X_{test}$ is input into the optimal model to obtain the final prediction results $Y'$.
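A minimal sketch of the per-class proportional sampling described above is given below, assuming the labeled pixels have already been flattened into a label array; the function name and ratio defaults are illustrative and not taken from the paper.

```python
import numpy as np

def split_indices(labels, train_ratio=0.05, val_ratio=0.05, seed=0):
    """Proportionally sample train/validation/test indices per class.

    labels: 1-D array holding the class id of every labeled pixel.
    The ratios mirror the per-class proportional sampling used for X_train,
    X_valid, and X_test; the remaining samples form the test set.
    """
    rng = np.random.default_rng(seed)
    train_idx, val_idx, test_idx = [], [], []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        n_tr = max(1, round(train_ratio * len(idx)))  # keep at least one sample per class
        n_va = max(1, round(val_ratio * len(idx)))
        train_idx.extend(idx[:n_tr])
        val_idx.extend(idx[n_tr:n_tr + n_va])
        test_idx.extend(idx[n_tr + n_va:])
    return np.array(train_idx), np.array(val_idx), np.array(test_idx)
```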

2.1. The Overall Framework of the Proposed DSSIRNet

Figure 2 shows the structure of the proposed DSSIRNet. The framework is divided into three stages: data augmentation based on random erasing, deep feature extraction with dual-input fusion and DIR module, and classification. The first stage is designed to solve the problem of limited labeled samples of hyperspectral images. Through block random erasing of training samples, the spatial distribution of the dataset can be fully changed, and the number of available training samples can be effectively increased without parameter increase. In order to improve the robustness of the model, it is necessary to solve the problem of insufficient spectral spatial information extraction of hyperspectral images. The second stage of a deep feature extraction network can solve this problem. The second stage mainly includes two parts: (1) dual-input fusion part; (2) DIR dense connection part. In the second part of this stage, three DIR modules are densely connected to enhance the ability of feature representation. This paper also designs a global 3D attention module in the DIR module to achieve more refined and sufficient global context feature extraction. Finally, the spectral spatial features fully extracted in the second stage are input into the third stage to realize classification.

2.2. Block Random Erasing Strategy for Data Augmentation

In the imaging process, hyperspectral images may be occluded by clouds, shadows, or other objects due to the influence of the atmosphere, resulting in the loss of information in the occluded area. The most common interference is cloud cover. Therefore, in the first stage, this paper designs a block random erasing strategy. Before model training, cloud-like occlusion is first simulated in the image scene, and then the spatial blocks with the added interference are input into the model for training. This changes the spatial information, increases the number of available samples, and thus improves the classification accuracy. Different from [40], the feature extraction and classification model proposed in this paper is three-dimensional. In order to avoid high model complexity, random erasing on small spatial blocks (i.e., 9 × 9) is studied.
For the training sample set $X_{train}$, let $S$ be the spatial area of the original input 3D cube, $p$ the erasing probability, and $S_e$ the area of the randomly initialized erasing region, where the ratio $S_e / S$ is constrained between $p_l$ and $p_h$ and the height-to-width ratio $r_e$ of the erasing region is constrained between $r_1$ and $r_2$. First, a probability $p_1 = \mathrm{Rand}(0, 1)$ is randomly drawn. If $p_1 < p$ is satisfied, erasing is applied; otherwise, the cube is left unchanged. To obtain the position of the erasing region, the erasing area $S_e = \mathrm{rand}(p_l, p_h) \times S$ is first calculated according to the random erasing proportion, and then the height and width of the erasing region are calculated from a random height-to-width ratio:
$$r_e = \mathrm{rand}(r_1, r_2) \tag{1}$$
$$H_e = \sqrt{S_e \times r_e} \tag{2}$$
$$W_e = \sqrt{S_e / r_e} \tag{3}$$
Then, a coordinate $(x_e, y_e)$ for the upper-left corner of the erasing region is randomly selected. From the height and width obtained by Formulas (1)–(3), the boundary coordinates of the occluded region $(x_e + W_e, y_e + H_e)$ are obtained. If these coordinates exceed the boundary of the original spatial block, the above process is repeated. For each training iteration, random erasing is performed to generate a changing 3D cube, which enables the model to learn more abundant spatial information.
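The following is a minimal Python sketch of this block random erasing step. The erasing probability default follows the value analyzed in Section 4.1 (p = 0.15); the area and aspect-ratio bounds s_l, s_h, r1, r2 stand in for $p_l$, $p_h$, $r_1$, $r_2$ and are illustrative placeholders, since the paper does not state their numerical values, and the function name is hypothetical.

```python
import math
import random

def block_random_erase(cube, p=0.15, s_l=0.02, s_h=0.33, r1=0.3, r2=3.3):
    """Randomly erase a rectangular spatial block of an (h, w, bands) HSI cube.

    p is the erasing probability; the bounds s_l, s_h, r1, r2 are assumed
    defaults borrowed from the generic random erasing formulation.
    """
    if random.random() >= p:          # with probability 1 - p, leave the cube unchanged
        return cube
    h, w, _ = cube.shape
    area = h * w
    for _ in range(100):                       # retry until the block fits
        s_e = random.uniform(s_l, s_h) * area  # erased area S_e
        r_e = random.uniform(r1, r2)           # height-to-width ratio r_e
        h_e = int(round(math.sqrt(s_e * r_e))) # H_e = sqrt(S_e * r_e)
        w_e = int(round(math.sqrt(s_e / r_e))) # W_e = sqrt(S_e / r_e)
        if 0 < h_e < h and 0 < w_e < w:
            y_e = random.randint(0, h - h_e)
            x_e = random.randint(0, w - w_e)
            out = cube.copy()
            out[y_e:y_e + h_e, x_e:x_e + w_e, :] = 0.0  # simulate cloud-like occlusion
            return out
    return cube
```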

2.3. Dual-Input Fusion Part

Let $x \in X_{train}$, $x \in \mathbb{R}^{h \times h \times l}$, be the data processed by random erasing. After spectral processing $f(\cdot)$ and spatial processing $g(\cdot)$, two groups of feature maps are obtained as the dual input of the DIR module. In fact, $f(\cdot)$ and $g(\cdot)$ are combinations of three-dimensional convolution, batch normalization (BN), and the swish activation function; the difference between them lies in the kernel size of the three-dimensional convolution. The corresponding feature maps are
$$M_{spectral} = f(x) = \sigma(\mathrm{BN}_{\gamma, \beta}[W_1 \circledast x + b_1]) \tag{4}$$
$$M_{spatial} = g(x) = \sigma(\mathrm{BN}_{\gamma, \beta}[W_2 \circledast x + b_2]) \tag{5}$$
where $M_{spectral}$ and $M_{spatial}$ are the feature maps obtained by spectral processing and spatial processing, respectively, $\sigma$ denotes the swish activation function, $\circledast$ denotes the three-dimensional convolution operation, $W_i$ and $b_i$ are the weights and biases of the two convolutions, respectively, and $\gamma$ and $\beta$ are the trainable parameters of the BN operation. The three-dimensional convolution kernels of the spectral processing and the spatial processing are $1 \times 1 \times 9$ and $3 \times 3 \times 9$, respectively. The number of convolution kernels is 32, the convolution stride is $(1, 1, 2)$, and the paddings are $(0, 0, 0)$ and $(1, 1, 0)$, respectively. Therefore, the size and number of the two groups of feature maps are the same. Then, the two groups of feature maps are fused by element-wise addition, which can be represented as
$$M_{sum} = M_{spectral} \oplus M_{spatial} \tag{6}$$
where $\oplus$ denotes the element-wise addition operation and $M_{sum}$ is the fused feature map.
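A possible PyTorch sketch of this dual-input fusion stage is shown below, mapping the (height, width, band) axes onto Conv3d's three spatial dimensions; nn.SiLU stands in for the swish activation, and the class and layer names are illustrative rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class DualInputFusion(nn.Module):
    """Sketch of the dual-input fusion stage: a spectral branch (1x1x9 kernels)
    and a spatial branch (3x3x9 kernels), each Conv3d + BN + swish, fused by
    element-wise addition as in Eqs. (4)-(6)."""

    def __init__(self, in_channels=1, out_channels=32):
        super().__init__()
        # spectral processing f(.): 1x1 spatial extent, 9-band spectral extent
        self.spectral = nn.Sequential(
            nn.Conv3d(in_channels, out_channels, kernel_size=(1, 1, 9),
                      stride=(1, 1, 2), padding=(0, 0, 0)),
            nn.BatchNorm3d(out_channels), nn.SiLU())
        # spatial processing g(.): 3x3 spatial extent, same spectral extent
        self.spatial = nn.Sequential(
            nn.Conv3d(in_channels, out_channels, kernel_size=(3, 3, 9),
                      stride=(1, 1, 2), padding=(1, 1, 0)),
            nn.BatchNorm3d(out_channels), nn.SiLU())

    def forward(self, x):                            # x: (N, 1, h, h, bands)
        return self.spectral(x) + self.spatial(x)    # element-wise fusion M_sum
```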

2.4. DIR Module

The DIR module designed in this paper is inspired by MobileNetV3 [42] and EfficientNet [43]. We extend the 2D inverted residual block to a 3D inverted residual block. Through a large number of experiments and parameter adjustments, a general 3D module suitable for hyperspectral image classification is designed. Figure 3 shows the schematic diagram of the DIR module.
The main idea of deep extraction of spectral spatial features in a DIR module is to expand the low-dimensional representation of input data to high-dimensional representation and filter the feature maps with depthwise separable convolution (DSC). The filtered feature maps are also transmitted to the global 3D attention module for deeper filtering. Following this, the two groups of filtered features are multiplied to enhance feature representation, and then these features are projected back to the low-dimensional representation by standard convolution. Finally, the residual branch is utilized to avoid network degradation. The implementation details of DIR module are described in Algorithm 1.
Algorithm 1 DIR module.
Input: the feature map set $x_l \in \mathbb{R}^{h \times h \times l, C}$ obtained after dual-input fusion.
1: Expand the dimension of the input data $x_l$ by conventional 3D convolution. The number of feature maps after convolution is $6C$; the feature maps are then processed by BN and swish activation.
2: Perform DSC on the feature maps in two steps: a depthwise step and a pointwise step. The convolution kernels of the two steps are set to $3 \times 3 \times 3$ and $1 \times 1 \times 1$, respectively. The number of feature maps before and after DSC remains the same. BN and swish activation are then applied to these feature maps, giving the filtered feature map set $D_{l+1}$.
3: Input $D_{l+1}$ into the global 3D attention module for deeper filtering and obtain the feature map set $G_{3D}$.
4: Calculate the product of the feature maps, $\hat{G}_{3D} = D_{l+1} \otimes G_{3D}$.
5: Reduce the dimension of the multiplied feature maps $\hat{G}_{3D}$ by conventional three-dimensional convolution, and then apply BN and swish to obtain the reduced feature map set $x_{l+1} \in \mathbb{R}^{h \times h \times l, C}$.
6: Add the input feature maps and the reduced-dimension feature maps, and activate the sum with swish to obtain the output feature map set $y_l = \sigma(x_l \oplus x_{l+1})$.
Output: the feature map set $y_l \in \mathbb{R}^{h \times h \times l, C}$ with the same size and number as the input feature maps.
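A rough PyTorch sketch of Algorithm 1 follows. The 1 × 1 × 1 kernels used for the expansion and projection convolutions are an assumption (the algorithm only specifies "conventional 3D convolution"), the attention submodule refers to the global 3D attention sketched in the next subsection, and all names are illustrative.

```python
import torch
import torch.nn as nn

class DIRModule(nn.Module):
    """Sketch of the deep inverted residual (DIR) module (Algorithm 1):
    expand channels 6x, filter with a 3D depthwise separable convolution,
    re-weight with a global 3D attention module, project back to the input
    width, and add the residual connection."""

    def __init__(self, channels, attention):
        super().__init__()
        hidden = 6 * channels
        self.expand = nn.Sequential(                 # step 1: channel expansion
            nn.Conv3d(channels, hidden, kernel_size=1),
            nn.BatchNorm3d(hidden), nn.SiLU())
        self.dsc = nn.Sequential(                    # step 2: depthwise + pointwise conv
            nn.Conv3d(hidden, hidden, kernel_size=3, padding=1, groups=hidden),
            nn.Conv3d(hidden, hidden, kernel_size=1),
            nn.BatchNorm3d(hidden), nn.SiLU())
        self.attention = attention                   # step 3: global 3D attention module
        self.project = nn.Sequential(                # step 5: back to `channels` maps
            nn.Conv3d(hidden, channels, kernel_size=1),
            nn.BatchNorm3d(channels), nn.SiLU())
        self.act = nn.SiLU()

    def forward(self, x):                            # x: (N, C, h, h, l)
        d = self.dsc(self.expand(x))                 # steps 1-2
        g = self.attention(d)                        # step 3
        y = self.project(d * g)                      # steps 4-5
        return self.act(x + y)                       # step 6: residual addition + swish
```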

2.5. Global 3D Attention Module

Inspired by csSE [44], this paper designs the global 3D attention module, as shown in Figure 4. The global 3D attention module fully considers the global information and effectively extracts the spectral and spatial context features, so as to enhance the ability of feature representation.
The 3D spectral part: first, the input feature map $U = [u_1, u_2, \ldots, u_s]$, viewed as a combination of spectral channels $u_i \in \mathbb{R}^{d \times d \times s}$, is processed by adaptive average pooling (AAP), and element $k$ of the resulting tensor is obtained as
$$z_k = \frac{1}{d \times d \times s} \sum_{i=0}^{d} \sum_{j=0}^{d} \sum_{n=0}^{s} U_k(i, j, n) \tag{7}$$
Then, two linear layers of size $\frac{c}{2} \times 1 \times 1 \times 1$ and $c \times 1 \times 1 \times 1$ are trained to capture the correlation between features and classification, and an $s$-dimensional tensor is obtained. Next, the sigmoid function is used for normalization to obtain the spectral attention map. Finally, the obtained spectral attention map is multiplied by the input feature map. The process can be represented as
$$F_c(U) = [\delta(\hat{z}_1) U_1, \delta(\hat{z}_2) U_2, \ldots, \delta(\hat{z}_s) U_s] \tag{8}$$
where $\delta(\hat{z}_s)$ denotes the combination of the two linear layers, the ReLU operation, and the sigmoid activation, and $F_c(U)$ is the spectral attention map.
The 3D spatial part: first, the image features are extracted through $C$ convolution layers, then the spatial attention map is obtained through sigmoid activation, and finally the obtained spatial attention map is multiplied by the input feature map. The process can be represented as
$$F_s(U) = [\delta(\hat{q}_1) U_1, \delta(\hat{q}_2) U_2, \ldots, \delta(\hat{q}_s) U_s] \tag{9}$$
where $\delta(\hat{q}_s)$ is the combination of a one-layer three-dimensional convolution and sigmoid activation, and $F_s(U)$ is the spatial attention map.
Finally, the feature maps obtained from the two parts are compared element by element, and the maximum value is returned to generate the final global 3D attention map.
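The following PyTorch sketch illustrates one way to implement this module: a squeeze-and-excitation-style spectral branch with a channel reduction of 2 (following the c/2 linear layer above) and a single-convolution spatial branch, merged by an element-wise maximum. Class and layer names are illustrative.

```python
import torch
import torch.nn as nn

class Global3DAttention(nn.Module):
    """Sketch of the global 3D attention module: spectral (AAP + two linear
    layers + sigmoid) and spatial (one 3D conv + sigmoid) branches, each
    multiplied with the input and merged by element-wise maximum."""

    def __init__(self, channels):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)                  # global context per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // 2), nn.ReLU(),
            nn.Linear(channels // 2, channels), nn.Sigmoid())
        self.spatial = nn.Sequential(                        # one-layer 3D conv + sigmoid
            nn.Conv3d(channels, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, u):                                    # u: (N, C, d, d, s)
        n, c = u.shape[:2]
        z = self.pool(u).view(n, c)                          # Eq. (7): average over d*d*s
        f_c = u * self.fc(z).view(n, c, 1, 1, 1)             # Eq. (8): spectral re-weighting
        f_s = u * self.spatial(u)                            # Eq. (9): spatial re-weighting
        return torch.max(f_c, f_s)                           # element-wise maximum
```

In the DIR sketch above, an instance such as Global3DAttention(6 * channels) could serve as the attention argument, since it operates on the expanded feature maps.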

3. Experimental Setting and Results

3.1. Experimental Setting

(1) Datasets
In order to verify the performance of DSSIRNet, four classical datasets are used in the experiment.
The Indian Pines (IN) dataset was acquired by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor in northwestern Indiana. The data size is 145 × 145, with 220 contiguous spectral bands covering the ground objects. After excluding the 20 water absorption bands (104–108, 150–163, and 200), the remaining 200 effective bands are taken as the research object. The spatial resolution is 20 m per pixel and the spectral coverage is 0.4~2.5 μm. There are 16 land cover categories.
The Pavia University (UP) dataset was collected by the ROSIS sensor. It continuously images 115 bands in the wavelength range of 0.43~0.86 μm, with a spatial resolution of 1.3 m. After eliminating the noise-affected bands, the size of UP is 610 × 340 × 103, including nine types of land cover.
The Salinas Valley (SV) dataset was captured by AVIRIS sensors in Salinas Valley, California. The spatial resolution of the data is 3.7 m, and the size is 512 × 217. The original data is 224 bands. After removing 20 bands of 108–112, 154–167, and 224 with serious water vapor absorption, 204 effective bands are retained. The dataset contains 16 crop categories.
The Botswana (BS) dataset was gathered from the NASA EO-1 satellite over Okavango Delta, Botswana, with a spatial resolution of 30 m and spectral coverage of 0.4~2.5 μm. After excluding the uncalibrated bands covering the water absorption characteristics and noise bands, 145 bands of 10–55, 82–97, 102–119, 134–164, and 187–220 are left for research. The data size is 1476 × 256. The ground truth map can be divided into 14 categories.
In the follow-up experiment, the IN, UP, SV, and BS datasets are divided into training set, validation set, and test set, respectively. The proportion of training, validation, and test randomly selected from each category is the same. The training proportion is equal to the ratio of the number of training samples obtained by random sampling to the total number of samples. The principle of validation proportion and test proportion is similar. Here, in order to avoid missing training samples of some categories in the IN dataset, we randomly select 5% for training, 5% for validation, and the remaining 90% for testing. The proportion of training set, validation set, and test set randomly selected from UP dataset, SV dataset, and BS dataset is the same, which are 3%, 3%, and 94% respectively. The detailed division information of the four datasets is listed in Table 1 and Table 2 respectively.
(2) Experimental Setting and Evaluation Index
In the experiment, the batch size of each dataset was set to 16 and the input spatial size was 9 × 9. The Adam optimizer was adopted. The initial learning rate was set to 0.0003, and the patience value was set to 15 with cosine decay. In addition, the maximum number of training epochs was set to 200. The experimental hardware platform is a server with an Intel (R) Core (TM) i9-9900K CPU, an NVIDIA GeForce RTX 2080 Ti GPU, and 32 GB of memory. The software platform is the Windows 10 operating system with Visual Studio Code, CUDA 10.0, PyTorch 1.2.0, and Python 3.7.4. The classification results of all experiments are reported as the average classification accuracy ± standard deviation over more than 20 runs. In order to provide a quantitative evaluation, this paper uses the overall accuracy (OA), average accuracy (AA), and kappa coefficient (kappa) as measures of classification performance. OA is the ratio of the number of correctly classified samples to the total number of samples. AA is the average of the classification accuracies of the individual categories. The kappa coefficient measures the consistency between the results and the ground truth map and is computed from the confusion matrix. The lower the kappa value, the more imbalanced the confusion matrix and the worse the classification effect.
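For reference, the three metrics can be computed from a confusion matrix as in the short sketch below; the function name is illustrative.

```python
import numpy as np

def oa_aa_kappa(conf):
    """Compute OA, AA, and the kappa coefficient from a confusion matrix.

    conf[i, j] counts samples of true class i predicted as class j.
    """
    conf = np.asarray(conf, dtype=float)
    total = conf.sum()
    oa = np.trace(conf) / total                                   # overall accuracy
    per_class = np.diag(conf) / conf.sum(axis=1)                  # per-class accuracy
    aa = per_class.mean()                                         # average accuracy
    pe = (conf.sum(axis=0) * conf.sum(axis=1)).sum() / total**2   # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return oa, aa, kappa
```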

3.2. Classification Results Compared with Other Methods

This paper compares the proposed DSSIRNet with some state-of-the-art classification methods, including SVM_RBF [14], DRCNN [45], ROhsi [40], SSAN [46], SSRN [29], and A2S2K-ResNet [47]. SVM_RBF is a pixel-based classification method, DRCNN and ROhsi are 2D-CNN-based methods, and SSAN is a method based on an RNN and a 2D-CNN. The remaining two methods and the proposed DSSIRNet are 3D-CNN-based methods. Table 3, Table 4, Table 5 and Table 6 list the average classification accuracy of the seven methods on the four datasets. Rows 2–17 record the classification accuracies of the individual categories, and the last three rows record the OA, AA, and kappa over all categories. In addition, the best results are highlighted in bold.
As can be seen from Table 3, the classification results of the proposed DSSIRNet method on the IN dataset are obviously superior to those of other state-of-the-art methods, and the best OA, AA, and kappa values are achieved. Because the frameworks based on deep learning (including DRCNN, ROhsi, SSAN, SSRN, A2S2K-ResNet, and DSSIRNet) have excellent nonlinear representation and automatic hierarchical feature extraction capabilities, their classification performances are better than that of SVM_RBF. For the model based on 2D-CNN, the model structures of ROhsi and SSAN are too simple, and the extraction of spectral spatial feature is not sufficient. Therefore, the OAs of the above two methods are 9.63% and 9.08% lower than those of DRCNN, respectively. For the 3D-CNN model, the learning of spectral spatial features of SSRN is separated, and the learned advanced features are not fused. Thus, the OAs of SSRN are lower than that of A2S2K-ResNet and the proposed DSSIRNet method. The proposed DSSIRNet method firstly performed block data augmentation on the input 3D cube to increase the available samples, and then the designed DIR module fully extracted the spectral spatial features. The global 3D attention module also effectively realized the selection and extraction of global context information. The final dense connection operation effectively integrated the joint spectral spatial features learned by the DIR module. Therefore, the OA value of the proposed method on the IN dataset is 0.8% higher than that of A2S2K-ResNet. In particular, the proposed DSSIRNet method also provides a 100% prediction rate for the wheat category.
For the UP dataset and SV dataset, as shown in Table 4 and Table 5, the classification results of all methods exceed 90%. DRCNN uses multiple input spatial windows, so its OA values are higher than those of SVM_RBF, ROhsi, and SSAN. A2S2K-ResNet can adaptively select 3D convolution kernels to jointly extract spectral spatial features, so its OA values are higher than those of SSRN. Compared with these methods, the OA values obtained by DSSIRNet on the UP dataset are 7.82%, 3.52%, 4.74%, 4.96%, 1.69%, and 1.09% higher than those of SVM-RBF, DRCNN, ROhsi, SSAN, SSRN, and A2S2K-ResNet, respectively. The OA values obtained on the SV dataset are 8.61%, 2.72%, 5.07%, 4.71%, 2.55%, and 0.99% higher than those of SVM-RBF, DRCNN, ROhsi, SSAN, SSRN, and A2S2K-ResNet, respectively. In particular, the proposed DSSIRNet method achieved a 100% prediction rate in the three categories of meadows, painted metal sheets, and bare soil of the UP dataset; on the SV dataset, seven categories achieved a 100% prediction rate.
Table 6 shows the classification results of different methods on the BS dataset. Compared with the other three datasets, the BS dataset has the highest spatial resolution, so the effective extraction of spatial context information by the classification model is very important. Obviously, the classification performances of the 3D-CNN-based methods (SSRN, A2S2K-ResNet, and DSSIRNet) are higher than those of the other methods, because a 3D-CNN can extract spectral and spatial features at the same time. A2S2K-ResNet extracts joint spectral spatial features directly, which may lose spatial context information, so its OA values are lower than those of SSRN. The proposed DSSIRNet method utilizes the DIR module to realize the joint extraction of spectral spatial features and combines the global 3D attention module to focus on the spectral and spatial context features that contribute most to classification, so it achieves the best classification performance.
Figure 5, Figure 6, Figure 7 and Figure 8 show the visual classification results obtained by the seven methods on the four datasets. Taking Figure 5 as an example, the classification maps obtained by SVM_RBF, DRCNN, ROhsi, and SSAN contain some noise, especially in the corn-notill, grass-pasture, oats, and soybean-mintill classes. The 3D-CNN-based methods (including SSRN, A2S2K-ResNet, and DSSIRNet) extract the spectral spatial features more effectively. Compared with the other methods, DSSIRNet greatly improves regional consistency and makes some categories more separable, such as grass-trees, hay-windrowed, and wheat. As can be seen, the probability of misclassification among categories is large for SVM_RBF, DRCNN, ROhsi, and SSAN and small for the other methods. In particular, the proposed DSSIRNet method has the smallest misclassification rate and clear category boundaries, and its result is closest to the ground truth map, which demonstrates the effectiveness of the proposed DSSIRNet method.

4. Experimental Analysis

4.1. Parameter Analysis of Erasing Probability

In order to verify the efficiency of the random erasing strategy on small spatial blocks, we studied the influence of the erasing probability and of spatial blocks smaller than 20 × 20 on the classification performance; the comparison results are shown as surface plots in Figure 9a–d. It can be seen from Figure 9 that, for the same patch size, the OA value reaches its maximum at $p = 0.15$, and then the OA of each dataset begins to decrease as $p$ increases. From the perspective of patch size, for the same parameter $p$, the OA of each dataset reaches its highest value when the patch size is 9. To sum up, for these hyperspectral datasets, an appropriate image block size (i.e., 9 × 9) and erasing probability (i.e., 0.15) are adopted, which not only reduces the computational complexity but also improves the classification accuracy.

4.2. Run Time Comparison

In addition to classification accuracy, running time is also an important indicator in HSI classification tasks, especially in practical applications. Table 7 shows the running time of all methods on the four datasets. Compared with the deep-learning-based methods, SVM_RBF takes the least time. Because DRCNN uses multiple input spatial windows for learning, its running time is long. ROhsi feeds large spatial blocks into a 2D-CNN model for training, so its running time is longer than that of DRCNN. SSAN uses an RNN and a 2D-CNN to learn spectral and spatial features simultaneously, and its running time is only longer than that of SVM_RBF. For the 3D-CNN models, i.e., SSRN, A2S2K-ResNet, and DSSIRNet, although the proposed DSSIRNet costs slightly more time on the SV and BS datasets due to its larger number of layers, its running time is still less than that of the DRCNN and ROhsi methods. Therefore, DSSIRNet has moderate computational complexity and can be used in practical applications.

4.3. Efficiency Analysis of Dense Connection of DIR Module

In this section, the effect of the number of densely connected DIR modules is analyzed on the four datasets, as shown in Table 8. As a general module, the DIR module is densely connected so that the joint spectral spatial features it extracts can be used most effectively. When two DIR modules are densely connected, the classification accuracy already exceeds that of all compared methods except the 3D-CNN models. When three DIR modules are used, the proposed DSSIRNet method achieves the highest classification accuracy. Adding a fourth module causes the OA value of each dataset to decrease significantly, while the number of layers and the complexity of the model increase rapidly, which affects the effective utilization of features. In conclusion, when three DIR modules are adopted, feature extraction is the most effective and the ability to distinguish features is the strongest.

4.4. Ablation Experiment

In order to verify the performance of the random erasing (RE) strategy and the global 3D attention (G3D) module proposed in this paper, the ablation results of the two components on the four datasets are shown in Table 9. It can be seen that when neither the RE strategy nor the G3D module is adopted, the classification accuracy on the four datasets is the lowest. Adding either of them increases the OA value. Because the RE strategy increases the number of available samples, it has a greater impact on the final classification performance than adding only the G3D module. Obviously, the optimal OA value is obtained when both are added at the same time (i.e., the proposed DSSIRNet), which not only realizes data augmentation but also considers the spectral spatial global context information.

4.5. Small Sample Comparative Analysis

Figure 10a–d show the OAs of all methods on the different datasets with different numbers of training samples. Specifically, for each dataset, 1%, 3%, 5%, and 10% of the labeled samples in each class are randomly selected for training. As can be seen from Figure 10, the proposed DSSIRNet method achieves the highest classification accuracy on all four datasets. With the increase of the training proportion, the OA values of all methods improve and the performance differences between the models shrink, but the OA value of the proposed method remains the highest. With 1% training samples, ROhsi and SVM_RBF have no advantage compared with the other deep-learning-based models. With 5% training samples, the OA of ROhsi on UP and SV increases rapidly, exceeding that of SVM_RBF. Compared with the other methods, the proposed DSSIRNet shows the best classification performance in the case of small samples. The reason is that it adopts an effective data augmentation strategy and the DIR module to realize effective feature extraction, which also proves that DSSIRNet has more advantages in the case of small datasets.

5. Conclusions

In this paper, we propose a novel HSI classification network, DSSIRNet. DSSIRNet is divided into three stages. The first stage adopts a random erasing strategy for augmentation of the data of the original 3D cube. The two components of the second stage realize effective joint spectral spatial feature extraction. The third stage classifies high-level semantic features. The full combination of the three stages realizes the optimal classification effect of HSI. In addition, this paper studies the random erasing strategy in small spatial blocks, which can expand the data more effectively without adding parameters. In DSSIRNet, an effective feature extraction module, the DIR module, is designed to fully extract image features. This paper also designs a global 3D attention module to fully consider the global context information of spectral and spatial dimension and further improve the classification performance. The experimental results on four datasets prove the effectiveness of DSSIRNet. In the future, we will study a deep learning framework for HSI classification tasks with low parameters and small samples.

Author Contributions

Conceptualization, C.S.; data curation, C.S. and T.Z.; formal analysis, D.L.; methodology, C.S.; software, T.Z.; validation, C.S. and T.Z.; writing—original draft, T.Z.; writing—review and editing, C.S. and L.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by the National Natural Science Foundation of China (41701479, 62071084), in part by the Heilongjiang Science Foundation Project of China under Grant LH2021D022, and in part by the Fundamental Research Funds in Heilongjiang Provincial Universities of China under Grant 135509136.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The Indiana Pines, University of Pavia, Salinas Valley and Botswana datasets are available online at http://www.ehu.eus/ccwintco/index.php?title=Hyperspectral_Remote_Sensing_Scenes (accessed on 3 July 2021).

Acknowledgments

We would like to thank the handling editor and the anonymous reviewers for their careful reading and helpful remarks.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhang, X.; Sun, Y.; Shang, K.; Zhang, L.; Wang, S. Crop classification based on feature band set construction and object-oriented approach using hyperspectral images. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2016, 9, 4117–4128. [Google Scholar] [CrossRef]
  2. Caporaso, N.; Whitworth, M.B.; Grebby, S.; Fisk, I.D. Non-destructive analysis of sucrose, caffeine and trigonelline on single green coffee beans by hyperspectral imaging. Food Res. Int. 2018, 106, 193–203. [Google Scholar] [CrossRef] [PubMed]
  3. Tan, H.F.; Luo, T.W.; Yang, G.; Meng, Q.Q. Research on background depression in hyperspectral image anomaly detection. J. Optoelectron. Laser 2016, 27, 177–181. [Google Scholar]
  4. Yokoya, N.; Chan, J.C.-W.; Segl, K. Potential of resolution enhanced hyperspectral data for mineral mapping using simulated EnMAP and Sentinel-2 images. Remote Sens. 2016, 8, 172. [Google Scholar] [CrossRef] [Green Version]
  5. Honkavaara, E.; Eskelinen, M.A.; Polonen, I.; Saari, H.; Ojanen, H.; Mannila, R.; Holmlund, C.; Hakala, T.; Litkey, P.; Rosnell, T.; et al. Remote sensing of 3-d geometry and surface moisture of a peat production area using hyperspectral frame cameras in visible to short-wave infrared spectral ranges onboard a small unmanned airborne vehicle (UAV). IEEE Trans. Geosci. Remote Sens. 2016, 54, 5440–5454. [Google Scholar] [CrossRef] [Green Version]
  6. Zhang, X.; Wang, Y.; Zhang, N.; Xu, D.; Luo, H.; Chen, B.; Ben, G. Spectral-Spatial Fractal Residual Convolutional Neural Network with Data Balance Augmentation for Hyperspectral Classification. IEEE Trans. Geosci. Remote Sens. 2021, 1–15, early access. [Google Scholar] [CrossRef]
  7. Ahmad, M.; Shabbir, S.; Roy, S.K.; Hong, D.; Wu, X.; Yao, J.; Khan, A.M.; Mazzara, M.; Distefano, S.; Chanussot, J. Hyperspectral Image Classification—Traditional to Deep Models: A Survey for Future Prospects. arXiv 2021, arXiv:2101.06116. [Google Scholar]
  8. Huang, L.; Chen, C.; Li, W.; Du, Q. Remote sensing image scene classification using multi-scale completed local binary patterns and fisher vectors. Remote Sens. 2016, 8, 483. [Google Scholar] [CrossRef] [Green Version]
  9. Dalal, N.; Triggs, B. Histograms of oriented gradients for human detection. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), San Diego, CA, USA, 20–25 June 2005; pp. 886–893. [Google Scholar]
  10. Oliva, A.; Torralba, A. Modeling the shape of the scene: A holistic representation of the spatial envelope. Int. J. Comput. Vis. 2001, 42, 145–175. [Google Scholar] [CrossRef]
  11. Ham, J.; Chen, Y.; Crawford, M.M.; Ghosh, J. Investigation of the random forest framework for classification of hyperspectral data. IEEE Trans. Geosci. Remote Sens. 2005, 43, 492–501. [Google Scholar] [CrossRef] [Green Version]
  12. Li, J.; Bioucas-Dias, J.; Plaza, A. Semisupervised hyperspectral image segmentation using multinomial logistic regression with active learning. IEEE Trans. Geosci. Remote Sens. 2010, 48, 4085–4098. [Google Scholar] [CrossRef] [Green Version]
  13. Li, W.; Chen, C.; Su, H.; Du, Q. Local binary patterns and extreme learning machine for hyperspectral imagery classification. IEEE Trans. Geosci. Remote Sens. 2015, 53, 3681–3693. [Google Scholar] [CrossRef]
  14. Melgani, F.; Bruzzone, L. Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790. [Google Scholar] [CrossRef] [Green Version]
  15. Li, S.; Song, W.; Fang, L.; Chen, Y.; Ghamisi, P.; Benediktsson, J.A. Deep learning for hyperspectral image classification: An overview. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6690–6709. [Google Scholar] [CrossRef] [Green Version]
  16. Vincent, P.; Larochelle, H.; Lajoie, I.; Bengio, Y.; Manzagol, P.-A. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. J. Mach. Learn. Res. 2010, 11, 3371–3408. [Google Scholar]
  17. Ng, A. Sparse Autoencoder; Lecture Notes CS294A; Stanford University: Stanford, CA, USA, 2011; pp. 1–19. [Google Scholar]
  18. Tao, C.; Pan, H.; Li, Y.; Zou, Z. Unsupervised spectral-spatial feature learning with stacked sparse autoencoder for hyperspectral imagery classification. IEEE Geosci. Remote Sens. Lett. 2015, 12, 2438–2442. [Google Scholar]
  19. Zhang, X.; Liang, Y.; Li, C.; Huyan, N.; Jiao, L.; Zhou, H. Recursive autoencoders-based unsupervised feature learning for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1928–1932. [Google Scholar] [CrossRef] [Green Version]
  20. Mou, L.; Ghamisi, P.; Zhu, X. Unsupervised spectral-spatial feature learning via deep residual conv-deconv network for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2018, 56, 391–406. [Google Scholar] [CrossRef] [Green Version]
  21. Radford, A.; Metz, L.; Chintala, S. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv 2016, arXiv:1511.06434v2. [Google Scholar]
  22. Zhu, L.; Chen, Y.; Ghamisi, P.; Benediktsson, J.A. Generative adversarial networks for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2018, 56, 5046–5063. [Google Scholar] [CrossRef]
  23. Mou, L.; Ghamisi, P.; Zhu, X.X. Deep recurrent neural networks for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3639–3655. [Google Scholar] [CrossRef] [Green Version]
  24. Zhang, H.; Li, Y.; Zhang, Y.; Shen, Q. Spectral-spatial classification of hyperspectral imagery using a dual-channel convolutional neural network. Remote Sens. Lett. 2017, 8, 438–447. [Google Scholar] [CrossRef] [Green Version]
  25. Hao, S.; Wang, W.; Ye, Y.; Nie, T.; Bruzzone, L. Two-stream deep architecture for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2018, 56, 2349–2361. [Google Scholar] [CrossRef]
  26. Feng, J.; Chen, J.; Liu, L.; Cao, X.; Zhang, X.; Jiao, L.; Yu, T. CNN-based multilayer spatial-spectral feature fusion and sample augmentation with local and nonlocal constraints for hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 1299–1313. [Google Scholar] [CrossRef]
  27. Yang, J.; Zhao, Y.-Q.; Chan, J.C.-W. Learning and transferring deep joint spectral-spatial features for hyperspectral classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 4729–4742. [Google Scholar] [CrossRef]
  28. Chen, Y.; Jiang, H.; Li, C.; Jia, X.; Ghamisi, P. Deep feature extraction and classification of hyperspectral images based on convolutional neural networks. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6232–6251. [Google Scholar] [CrossRef] [Green Version]
  29. Wang, W.; Dou, S.; Jiang, Z.; Sun, L. A fast dense spectral-spatial convolution network framework for hyperspectral images classification. Remote Sens. 2018, 10, 1068. [Google Scholar] [CrossRef] [Green Version]
  30. Zhong, Z.; Li, J.; Luo, Z.; Chapman, M. Spectral–spatial residual network for hyperspectral image classification: A 3-D deep learning framework. IEEE Trans. Geosci. Remote Sens. 2017, 56, 847–858. [Google Scholar] [CrossRef]
  31. Wang, W.; Dou, S.; Wang, S. Alternately updated spectral-spatial convolution network for the classification of hyperspectral images. Remote Sens. 2019, 11, 1794. [Google Scholar] [CrossRef] [Green Version]
  32. Fang, B.; Li, Y.; Zhang, H.; Chan, J.C.-W. Hyperspectral images classification based on dense convolutional networks with spectral-wise attention mechanism. Remote Sens. 2019, 11, 159. [Google Scholar] [CrossRef] [Green Version]
  33. Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7132–7141. [Google Scholar]
  34. Woo, S.; Park, J.; Lee, J.-Y.; Kweon, I.S. CBAM: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19. [Google Scholar]
  35. Zheng, X.; Wang, B.; Du, X.; Lu, X. Mutual Attention Inception Network for Remote Sensing Visual Question Answering. IEEE Trans. Geosci. Remote Sens. 2021, 1–14, early access. [Google Scholar] [CrossRef]
  36. Roy, S.K.; Dubey, S.R.; Chatterjee, S.; Baran Chaudhuri, B. FuSENet: Fused squeeze-and-excitation network for spectral-spatial hyperspectral image classification. IET Image Process. 2020, 14, 1653–1661. [Google Scholar] [CrossRef]
  37. Zhan, Y.; Hu, D.; Wang, Y.; Yu, X. Semisupervised hyperspectral image classification based on generative adversarial networks. IEEE Geosci. Remote Sens. Lett. 2018, 15, 212–216. [Google Scholar] [CrossRef]
  38. Wu, H.; Prasad, S. Convolutional recurrent neural networks for hyperspectral data classification. Remote Sens. 2017, 9, 298. [Google Scholar] [CrossRef] [Green Version]
  39. Li, W.; Wu, G.; Zhang, F.; Du, Q. Hyperspectral image classification using deep pixel-pair features. IEEE Trans. Geosci. Remote Sens. 2017, 55, 844–853. [Google Scholar] [CrossRef]
  40. Li, W.; Chen, C.; Zhang, M.; Li, H.; Du, Q. Data Augmentation for Hyperspectral Image Classification with Deep CNN. IEEE Geosci. Remote Sens. Lett. 2019, 16, 593–597. [Google Scholar] [CrossRef]
  41. Haut, J.M.; Paoletti, M.E.; Plaza, J.; Plaza, A.; Plaza, L. Hyperspectral image classification using random occlusion data augmentation. IEEE Geosci. Remote Sens. Lett. 2019, 16, 1751–1755. [Google Scholar] [CrossRef]
  42. Howard, A.; Sandler, M.; Chen, B.; Wang, W.; Chen, L.-C.; Tan, M.; Chu, G.; Vasudevan, V.; Zhu, Y.; Pang, R.; et al. Searching for mobileNetV3. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea, 27 October–2 November 2019; pp. 1314–1324. [Google Scholar]
  43. Tan, M.; Le, Q.V. EfficientNet: Rethinking model scaling for convolutional neural networks. In Proceedings of the International Conference on Machine Learning (ICML), Long Beach, CA, USA, 9–15 June 2019; pp. 6105–6114. [Google Scholar]
  44. Roy, A.G.; Navab, N.; Wachinger, C. Concurrent spatial and channel ‘squeeze & excitation’ in fully convolutional networks. In Medical Image Computing and Computer Assisted Intervention; Springer: New York, NY, USA, 2018; pp. 421–429. [Google Scholar]
  45. Zhang, M.; Li, W.; Du, Q. Diverse region-based CNN for hyperspectral image classification. IEEE Trans. Image Process. 2018, 27, 2623–2634. [Google Scholar] [CrossRef] [PubMed]
  46. Mei, X.; Pan, E.; Ma, Y.; Dai, X.; Huang, J.; Fan, F.; Du, Q.; Zheng, H.; Ma, J. Spectral-spatial attention networks for hyperspectral image classification. Remote Sens. 2019, 11, 963. [Google Scholar] [CrossRef] [Green Version]
  47. Roy, S.K.; Manna, S.; Song, T.; Bruzzone, L. Attention-Based Adaptive Spectral-Spatial Kernel ResNet for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2021, 59, 7831–7843. [Google Scholar] [CrossRef]
Figure 1. The overall framework of the proposed method.
Figure 2. The structure of the proposed DSSIRNet.
Figure 3. The schematic diagram of DIR module.
Figure 4. The schematic diagram of the global 3D attention module.
Figure 5. Classification diagrams of IN datasets obtained by seven methods: (a) the ground truth, (b) SVM-RBF, (c) DRCNN, (d) ROhsi, (e) SSAN, (f) SSRN, (g) A2S2K-ResNet, and (h) DSSIRNet.
Figure 6. Classification diagrams of UP datasets obtained by seven methods: (a) the ground truth, (b) SVM-RBF, (c) DRCNN, (d) ROhsi, (e) SSAN, (f) SSRN, (g) A2S2K-ResNet, and (h) DSSIRNet.
Figure 7. Classification diagrams of SV datasets obtained by seven methods: (a) the ground truth, (b) SVM-RBF, (c) DRCNN, (d) ROhsi, (e) SSAN, (f) SSRN, (g) A2S2K-ResNet, and (h) DSSIRNet.
Figure 8. Classification diagrams of BS datasets obtained by seven methods: (a) the ground truth, (b) SVM-RBF, (c) DRCNN, (d) ROhsi, (e) SSAN, (f) SSRN, (g) A2S2K-ResNet, and (h) DSSIRNet.
Figure 9. Parametric analysis of erasing probability (parameter p) and spatial block size (parameter patch size) on four HSI datasets: (a) IN, (b) UP, (c) SV, and (d) BS.
Figure 10. Classification results on four datasets under different training sample proportions: (a) IN, (b) UP, (c) SV, and (d) BS.
Table 1. Land cover category and number of samples for training, validation, and testing of IN and UP.

No. | IN Class | Train | Val | Test | UP Class | Train | Val | Test
1 | Alfalfa | 2 | 2 | 42 | Asphalt | 199 | 199 | 6233
2 | Corn-notill | 71 | 71 | 1286 | Meadows | 559 | 559 | 17,531
3 | Corn-mintill | 42 | 42 | 746 | Gravel | 63 | 63 | 1973
4 | Corn | 12 | 12 | 213 | Trees | 92 | 92 | 2880
5 | Grass-pasture | 24 | 24 | 435 | Painted metal sheets | 40 | 40 | 1265
6 | Grass-trees | 36 | 36 | 658 | Bare Soil | 151 | 151 | 4727
7 | Grass-pasture-mowed | 1 | 1 | 26 | Bitumen | 40 | 40 | 1250
8 | Hay-windrowed | 24 | 24 | 430 | Self-Blocking Bricks | 110 | 110 | 3462
9 | Oats | 1 | 1 | 18 | Shadows | 28 | 28 | 891
10 | Soybean-notill | 49 | 49 | 874 | | | |
11 | Soybean-mintill | 123 | 123 | 2209 | | | |
12 | Soybean-clean | 30 | 30 | 533 | | | |
13 | Wheat | 10 | 10 | 185 | | | |
14 | Woods | 63 | 63 | 1139 | | | |
15 | Building-grass-trees-drives | 19 | 19 | 348 | | | |
16 | Stone-steal-towers | 5 | 5 | 83 | | | |
Total | | 512 | 512 | 9225 | | 1282 | 1282 | 40,212
Table 2. Land cover category and number of samples for training, validation, and testing of SV and BS.
Table 2. Land cover category and number of samples for training, validation, and testing of SV and BS.
No | SV: Class | Train | Val | Test | BS: Class | Train | Val | Test
1 | Brocoli-green-weeds-1 | 60 | 60 | 1889 | Water | 7 | 7 | 256
2 | Brocoli-green-weeds-2 | 108 | 108 | 3510 | Hippo grass | 5 | 5 | 91
3 | Fallow | 54 | 54 | 1868 | Floodplain grasses 1 | 7 | 7 | 237
4 | Fallow-rough-plow | 36 | 36 | 1322 | Floodplain grasses 2 | 7 | 7 | 201
5 | Fallow-smooth | 78 | 78 | 2522 | Reeds | 7 | 7 | 255
6 | Stubble | 114 | 114 | 3731 | Riparian | 7 | 7 | 255
7 | Celery | 102 | 102 | 3375 | Firescar | 7 | 7 | 245
8 | Grapes-untrained | 336 | 336 | 10,599 | Island interior | 7 | 7 | 189
9 | Soil-vineyard-develop | 186 | 186 | 5831 | Acacia woodlands | 10 | 10 | 294
10 | Corn-senesced-green-weeds | 96 | 96 | 3086 | Acacia shrublands | 7 | 7 | 234
11 | Lettuce-romaine-4wk | 30 | 30 | 1008 | Acacia grasslands | 10 | 10 | 285
12 | Lettuce-romaine-5wk | 54 | 54 | 1819 | Short mopane | 5 | 5 | 171
13 | Lettuce-romaine-6wk | 24 | 24 | 868 | Mixed mopane | 7 | 7 | 254
14 | Lettuce-romaine-7wk | 30 | 30 | 1010 | Exposed soils | 2 | 2 | 91
15 | Vineyard-untrained | 216 | 216 | 6836 | | | |
16 | Vineyard-vertical-trellis | 54 | 54 | 1699 | | | |
Total | | 1578 | 1578 | 50,973 | | 95 | 95 | 3058
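Tables 1 and 2 draw a small, class-balanced subset of the labelled pixels for training and validation and keep the rest for testing. Below is a minimal sketch of such a per-class split; the function name stratified_split, the default fractions, and the convention that label 0 marks unlabelled pixels are illustrative assumptions, not the authors' released code (the actual fractions differ per dataset in the tables).

```python
import numpy as np

def stratified_split(gt, train_frac=0.05, val_frac=0.05, seed=0):
    """Split labelled pixels of a 2-D ground-truth map into train/val/test
    pixel coordinates, class by class (0 is treated as unlabelled)."""
    rng = np.random.default_rng(seed)
    splits = {"train": [], "val": [], "test": []}
    for cls in np.unique(gt):
        if cls == 0:                      # background / unlabelled pixels
            continue
        idx = np.argwhere(gt == cls)      # (row, col) positions of this class
        rng.shuffle(idx)
        n_train = max(1, int(round(train_frac * len(idx))))
        n_val = max(1, int(round(val_frac * len(idx))))
        splits["train"].append(idx[:n_train])
        splits["val"].append(idx[n_train:n_train + n_val])
        splits["test"].append(idx[n_train + n_val:])
    return {k: np.concatenate(v) for k, v in splits.items()}
```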
Table 3. Classification results of different methods on the IN dataset.
Class | SVM-RBF [14] | DRCNN [45] | ROhsi [40] | SSAN [46] | SSRN [29] | A2S2K-ResNet [47] | DSSIRNet
1 | 35.17 ± 23.03 | 95.75 ± 3.41 | 68.25 ± 4.89 | 58.26 ± 3.47 | 97.30 ± 3.52 | 95.99 ± 6.88 | 98.88 ± 1.35
2 | 62.79 ± 3.24 | 87.63 ± 4.68 | 71.01 ± 0.60 | 65.01 ± 1.98 | 94.54 ± 3.20 | 94.88 ± 1.86 | 96.65 ± 2.50
3 | 68.52 ± 3.51 | 91.40 ± 5.62 | 79.54 ± 1.34 | 82.75 ± 13.96 | 95.13 ± 3.10 | 94.54 ± 2.04 | 96.64 ± 1.22
4 | 52.79 ± 7.87 | 88.37 ± 4.10 | 77.10 ± 5.62 | 87.10 ± 2.71 | 96.15 ± 2.02 | 96.59 ± 2.76 | 94.81 ± 2.66
5 | 86.19 ± 3.77 | 85.87 ± 2.12 | 85.16 ± 3.29 | 89.02 ± 2.48 | 97.67 ± 2.52 | 98.41 ± 1.10 | 98.62 ± 1.37
6 | 85.50 ± 2.78 | 97.79 ± 3.57 | 89.05 ± 5.23 | 96.99 ± 0.68 | 98.21 ± 0.91 | 98.38 ± 1.33 | 99.13 ± 0.58
7 | 75.54 ± 10.29 | 92.26 ± 1.99 | 71.79 ± 11.02 | 68.84 ± 16.49 | 82.07 ± 21.96 | 85.05 ± 12.34 | 94.18 ± 5.65
8 | 89.32 ± 2.00 | 99.08 ± 2.63 | 87.78 ± 2.02 | 98.48 ± 2.92 | 98.26 ± 3.24 | 99.75 ± 0.09 | 99.92 ± 0.38
9 | 59.87 ± 20.28 | 56.75 ± 9.04 | 40.74 ± 6.92 | 73.08 ± 36.90 | 85.91 ± 15.93 | 72.41 ± 18.63 | 83.71 ± 18.72
10 | 68.69 ± 3.36 | 97.18 ± 2.65 | 86.69 ± 1.30 | 93.38 ± 3.11 | 93.56 ± 2.44 | 94.72 ± 2.30 | 97.07 ± 4.77
11 | 69.97 ± 2.39 | 92.45 ± 2.78 | 90.12 ± 0.41 | 90.39 ± 3.09 | 97.34 ± 1.01 | 97.11 ± 1.55 | 97.65 ± 2.90
12 | 61.25 ± 5.31 | 98.03 ± 0.86 | 86.04 ± 4.54 | 96.65 ± 1.06 | 96.85 ± 2.19 | 95.77 ± 2.22 | 98.69 ± 0.92
13 | 87.18 ± 5.29 | 99.56 ± 0.18 | 86.48 ± 5.01 | 99.05 ± 1.14 | 99.57 ± 0.84 | 98.78 ± 1.50 | 100.0 ± 0.0
14 | 89.37 ± 1.73 | 98.23 ± 1.19 | 96.90 ± 0.28 | 97.48 ± 1.75 | 97.54 ± 1.76 | 97.54 ± 1.41 | 98.78 ± 1.21
15 | 69.36 ± 5.32 | 85.34 ± 0.97 | 76.50 ± 2.63 | 96.39 ± 2.75 | 95.94 ± 3.58 | 95.62 ± 2.66 | 95.76 ± 2.60
16 | 98.61 ± 2.25 | 77.04 ± 4.70 | 81.34 ± 14.16 | 94.19 ± 7.80 | 94.17 ± 4.21 | 92.55 ± 5.62 | 98.70 ± 1.10
OA (%) | 73.74 ± 1.30 | 94.87 ± 0.92 | 85.24 ± 0.52 | 85.79 ± 6.32 | 96.07 ± 0.73 | 96.38 ± 0.58 | 97.18 ± 0.61
AA (%) | 72.50 ± 2.30 | 90.17 ± 1.59 | 79.65 ± 1.04 | 86.71 ± 3.03 | 95.00 ± 2.38 | 94.20 ± 1.18 | 96.82 ± 3.05
Kappa (%) | 69.79 ± 1.49 | 94.36 ± 1.09 | 83.12 ± 0.60 | 83.78 ± 5.51 | 95.86 ± 0.84 | 95.88 ± 0.67 | 96.78 ± 0.69
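The OA, AA, and Kappa rows in Tables 3–6 are standard summary metrics: overall accuracy, average per-class accuracy, and Cohen's kappa coefficient. For completeness, a short sketch of how they can be computed from a confusion matrix (rows = true classes, columns = predictions) is shown below; the helper name is illustrative.

```python
import numpy as np

def oa_aa_kappa(conf):
    """Return OA, AA, and Cohen's kappa (all in %) from a confusion matrix."""
    conf = conf.astype(float)
    total = conf.sum()
    oa = np.trace(conf) / total                        # overall accuracy
    per_class = np.diag(conf) / conf.sum(axis=1)       # recall of each class
    aa = per_class.mean()                              # average accuracy
    pe = (conf.sum(axis=0) * conf.sum(axis=1)).sum() / total ** 2  # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return 100 * oa, 100 * aa, 100 * kappa
```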
Table 4. Classification results of different methods on the UP dataset.
Class | SVM-RBF [14] | DRCNN [45] | ROhsi [40] | SSAN [46] | SSRN [29] | A2S2K-ResNet [47] | DSSIRNet
1 | 88.56 ± 7.71 | 95.75 ± 0.08 | 93.63 ± 1.27 | 94.63 ± 0.21 | 99.01 ± 0.08 | 99.44 ± 0.37 | 99.04 ± 0.53
2 | 97.12 ± 0.09 | 94.73 ± 1.20 | 96.99 ± 0.84 | 100.0 ± 0.0 | 98.99 ± 0.20 | 97.71 ± 3.08 | 100.0 ± 0.0
3 | 81.19 ± 9.07 | 90.40 ± 3.28 | 78.44 ± 2.08 | 76.39 ± 10.09 | 96.79 ± 1.62 | 96.11 ± 5.11 | 98.70 ± 0.54
4 | 86.58 ± 7.77 | 97.37 ± 0.23 | 96.56 ± 1.16 | 96.64 ± 0.63 | 97.98 ± 1.03 | 99.59 ± 0.18 | 97.98 ± 0.52
5 | 98.39 ± 0.08 | 98.87 ± 0.05 | 97.49 ± 1.75 | 97.84 ± 0.74 | 100.0 ± 0.0 | 100.0 ± 0.0 | 100.0 ± 0.0
6 | 85.63 ± 1.89 | 95.70 ± 1.34 | 91.05 ± 0.69 | 87.90 ± 9.81 | 99.31 ± 0.55 | 100.0 ± 0.0 | 100.0 ± 0.0
7 | 85.81 ± 3.95 | 89.06 ± 11.37 | 84.30 ± 5.07 | 89.54 ± 8.05 | 99.69 ± 0.45 | 99.24 ± 1.01 | 99.39 ± 0.63
8 | 83.92 ± 0.98 | 96.19 ± 0.93 | 97.73 ± 0.47 | 92.99 ± 4.67 | 82.82 ± 1.81 | 95.95 ± 1.02 | 97.86 ± 0.51
9 | 99.12 ± 0.62 | 96.60 ± 1.32 | 99.36 ± 0.43 | 98.55 ± 0.06 | 99.86 ± 0.21 | 99.69 ± 0.28 | 99.89 ± 0.08
OA (%) | 91.49 ± 1.04 | 95.79 ± 0.97 | 94.57 ± 0.51 | 94.35 ± 3.31 | 97.62 ± 0.71 | 98.22 ± 1.71 | 99.31 ± 0.06
AA (%) | 89.59 ± 5.69 | 94.96 ± 0.51 | 92.84 ± 0.81 | 92.72 ± 1.28 | 97.16 ± 0.36 | 98.63 ± 0.99 | 99.20 ± 0.15
Kappa (%) | 88.68 ± 3.08 | 94.67 ± 0.29 | 92.80 ± 0.67 | 92.44 ± 3.09 | 96.83 ± 1.42 | 97.62 ± 2.29 | 99.05 ± 0.08
Table 5. Classification results of different methods on the SV dataset.
Class | SVM-RBF [14] | DRCNN [45] | ROhsi [40] | SSAN [46] | SSRN [29] | A2S2K-ResNet [47] | DSSIRNet
1 | 99.13 ± 0.31 | 98.84 ± 0.17 | 99.96 ± 0.04 | 99.66 ± 0.15 | 99.92 ± 0.09 | 100.0 ± 0.0 | 100.0 ± 0.0
2 | 99.22 ± 0.40 | 99.61 ± 0.09 | 99.96 ± 0.05 | 99.94 ± 0.02 | 100.0 ± 0.0 | 100.0 ± 0.0 | 100.0 ± 0.0
3 | 99.05 ± 1.70 | 99.75 ± 0.06 | 95.80 ± 1.79 | 96.62 ± 2.16 | 99.82 ± 0.25 | 99.74 ± 0.25 | 99.86 ± 0.13
4 | 98.67 ± 0.35 | 98.79 ± 0.35 | 99.11 ± 1.25 | 98.91 ± 0.62 | 98.90 ± 0.75 | 99.46 ± 0.40 | 99.66 ± 0.68
5 | 97.81 ± 0.19 | 98.84 ± 1.01 | 98.62 ± 0.73 | 99.79 ± 0.04 | 99.78 ± 0.29 | 99.58 ± 0.51 | 99.91 ± 0.82
6 | 98.62 ± 1.62 | 99.07 ± 0.80 | 98.27 ± 0.60 | 98.17 ± 0.43 | 99.98 ± 0.02 | 100.0 ± 0.0 | 100.0 ± 0.0
7 | 99.62 ± 2.02 | 89.05 ± 9.25 | 99.80 ± 0.22 | 99.79 ± 0.28 | 99.89 ± 0.15 | 100.0 ± 0.0 | 100.0 ± 0.0
8 | 81.91 ± 16.99 | 97.17 ± 2.23 | 88.66 ± 1.39 | 82.53 ± 9.37 | 99.55 ± 0.17 | 98.88 ± 0.87 | 99.07 ± 0.63
9 | 99.08 ± 0.85 | 96.88 ± 3.60 | 98.79 ± 0.20 | 100.0 ± 0.0 | 98.91 ± 0.14 | 99.93 ± 0.08 | 100.0 ± 0.0
10 | 88.26 ± 4.86 | 89.34 ± 0.41 | 97.29 ± 0.76 | 99.25 ± 0.53 | 98.68 ± 0.75 | 99.48 ± 0.32 | 99.79 ± 0.12
11 | 91.98 ± 4.17 | 100.0 ± 0.0 | 97.18 ± 0.73 | 99.53 ± 0.17 | 87.89 ± 14.41 | 98.59 ± 1.98 | 97.78 ± 3.00
12 | 99.14 ± 0.79 | 95.43 ± 1.37 | 97.94 ± 1.02 | 98.72 ± 1.15 | 99.74 ± 0.20 | 99.09 ± 1.16 | 100.0 ± 0.0
13 | 98.42 ± 2.43 | 98.97 ± 0.48 | 96.90 ± 0.39 | 96.20 ± 2.97 | 94.57 ± 7.67 | 98.50 ± 1.95 | 99.80 ± 0.11
14 | 91.49 ± 5.85 | 82.24 ± 11.34 | 97.48 ± 0.48 | 98.45 ± 0.93 | 98.99 ± 0.87 | 99.21 ± 0.51 | 98.93 ± 1.15
15 | 71.44 ± 19.45 | 97.57 ± 1.77 | 84.76 ± 1.49 | 90.36 ± 5.02 | 85.29 ± 5.44 | 94.06 ± 2.51 | 98.83 ± 0.92
16 | 97.21 ± 2.74 | 92.72 ± 0.05 | 88.33 ± 1.87 | 97.09 ± 1.28 | 99.93 ± 0.09 | 99.74 ± 0.18 | 100.0 ± 0.0
OA (%) | 90.74 ± 0.96 | 96.63 ± 2.41 | 94.28 ± 0.16 | 94.64 ± 4.13 | 96.80 ± 1.84 | 98.36 ± 0.56 | 99.35 ± 0.09
AA (%) | 94.44 ± 0.24 | 95.89 ± 0.50 | 96.18 ± 0.07 | 97.18 ± 2.69 | 97.62 ± 1.88 | 99.14 ± 0.55 | 99.60 ± 0.22
Kappa (%) | 89.68 ± 1.27 | 94.07 ± 1.83 | 93.63 ± 0.18 | 94.02 ± 5.26 | 96.45 ± 2.05 | 98.18 ± 0.62 | 99.27 ± 0.10
Table 6. Classification results of different methods on the BS dataset.
Class | SVM-RBF [14] | DRCNN [45] | ROhsi [40] | SSAN [46] | SSRN [29] | A2S2K-ResNet [47] | DSSIRNet
1 | 90.94 ± 0.76 | 84.75 ± 7.20 | 98.16 ± 0.80 | 91.13 ± 9.24 | 98.74 ± 1.35 | 92.06 ± 3.74 | 99.96 ± 0.12
2 | 45.45 ± 31.12 | 61.00 ± 24.79 | 68.77 ± 7.56 | 93.03 ± 4.82 | 97.47 ± 5.04 | 100.0 ± 0.0 | 98.60 ± 1.74
3 | 94.21 ± 4.87 | 94.49 ± 0.31 | 65.81 ± 2.35 | 87.05 ± 8.24 | 99.91 ± 0.16 | 87.45 ± 11.10 | 100.0 ± 0.0
4 | 75.12 ± 16.17 | 82.55 ± 9.35 | 24.09 ± 9.61 | 92.99 ± 5.89 | 96.37 ± 2.96 | 91.41 ± 1.83 | 95.74 ± 2.90
5 | 77.99 ± 9.78 | 65.47 ± 18.90 | 52.17 ± 10.31 | 88.89 ± 10.02 | 91.05 ± 4.40 | 82.84 ± 9.50 | 93.48 ± 3.88
6 | 65.25 ± 9.65 | 75.40 ± 11.26 | 49.14 ± 8.59 | 50.33 ± 20.13 | 95.43 ± 4.54 | 88.66 ± 1.86 | 96.20 ± 2.97
7 | 95.65 ± 1.74 | 99.06 ± 0.03 | 73.38 ± 3.12 | 100.0 ± 0.0 | 100.0 ± 0.0 | 99.18 ± 0.88 | 100.0 ± 0.0
8 | 75.38 ± 16.84 | 84.20 ± 10.22 | 54.27 ± 11.85 | 87.28 ± 5.22 | 98.37 ± 1.48 | 97.89 ± 0.70 | 100.0 ± 0.0
9 | 81.04 ± 7.52 | 86.44 ± 7.23 | 70.27 ± 6.01 | 71.65 ± 25.43 | 93.94 ± 4.49 | 89.98 ± 5.69 | 99.40 ± 1.18
10 | 97.01 ± 1.47 | 87.18 ± 3.05 | 96.70 ± 3.14 | 80.24 ± 9.14 | 99.40 ± 0.42 | 92.19 ± 3.97 | 95.41 ± 4.21
11 | 85.08 ± 7.46 | 89.70 ± 1.69 | 83.50 ± 1.99 | 86.12 ± 7.07 | 99.82 ± 0.14 | 96.76 ± 3.64 | 99.83 ± 1.10
12 | 58.10 ± 25.59 | 93.45 ± 3.47 | 53.60 ± 26.50 | 67.31 ± 18.24 | 98.99 ± 0.06 | 97.00 ± 2.13 | 99.08 ± 1.54
13 | 71.65 ± 7.51 | 77.06 ± 21.13 | 57.01 ± 4.27 | 87.07 ± 6.81 | 99.52 ± 0.94 | 89.25 ± 7.76 | 100.0 ± 0.0
14 | 58.24 ± 7.58 | 96.95 ± 0.05 | 98.87 ± 1.58 | 92.66 ± 5.06 | 100.0 ± 0.0 | 100.0 ± 0.0 | 100.0 ± 0.0
OA (%) | 78.80 ± 2.41 | 86.56 ± 5.77 | 67.21 ± 4.47 | 84.17 ± 5.60 | 97.75 ± 0.44 | 91.56 ± 0.73 | 98.14 ± 0.81
AA (%) | 76.50 ± 1.92 | 84.12 ± 4.01 | 67.55 ± 4.82 | 83.98 ± 9.54 | 97.78 ± 0.13 | 93.19 ± 0.62 | 98.40 ± 0.67
Kappa (%) | 76.98 ± 3.08 | 84.57 ± 5.18 | 64.46 ± 4.86 | 82.87 ± 8.06 | 97.32 ± 0.47 | 90.84 ± 0.79 | 97.96 ± 0.88
Table 7. Running time of seven methods on four datasets (s).
Dataset | SVM-RBF | DRCNN | ROhsi | SSRN | SSAN | A2S2K-ResNet | DSSIRNet
IN | 4.7 | 102.8 | 83.88 | 31.7 | 5.8 | 39.5 | 39.6
UP | 24.3 | 142.3 | 175.5 | 57.7 | 23.6 | 118.5 | 100.1
SV | 55.5 | 272.7 | 310.2 | 120.9 | 61.1 | 141.4 | 219.3
BS | 2.5 | 19.3 | 27.5 | 6.9 | 4.8 | 10.7 | 12.7
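Table 7 reports wall-clock running time in seconds. A rough timing wrapper of the kind that could produce comparable figures is sketched below; model_fn and test_batches are assumed placeholders for a trained model's prediction function and an iterable of test batches, not names from the paper.

```python
import time

def time_inference(model_fn, test_batches):
    """Measure total wall-clock seconds for a forward pass over the test set."""
    start = time.perf_counter()
    for batch in test_batches:
        model_fn(batch)          # prediction only; no gradient computation
    return time.perf_counter() - start
```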
Table 8. OA (%) for different numbers of DIR modules on the four datasets.
Dataset | 2 Blocks | 3 Blocks | 4 Blocks
IN | 96.29 | 97.18 | 93.91
UP | 98.86 | 99.31 | 98.99
SV | 96.98 | 99.35 | 98.85
BS | 97.93 | 98.14 | 97.85
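Table 8 varies how many DIR modules are stacked, with three giving the best OA on every dataset. The exact DIR design is described in the paper body; purely as a structural illustration of stacking a configurable number of inverted-residual blocks with identity skips, a generic MobileNetV2-style 3-D variant is sketched below. The layer choices (1×1×1 expansion, depthwise 3×3×3 convolution, expansion factor 4, 24 channels) are assumptions, not the authors' configuration.

```python
import torch.nn as nn

class InvertedResidual3D(nn.Module):
    """Generic 3-D inverted-residual block (expand -> depthwise conv -> project)
    with an identity skip connection; a stand-in for one DIR module."""
    def __init__(self, channels, expansion=4):
        super().__init__()
        hidden = channels * expansion
        self.body = nn.Sequential(
            nn.Conv3d(channels, hidden, kernel_size=1, bias=False),
            nn.BatchNorm3d(hidden), nn.ReLU(inplace=True),
            nn.Conv3d(hidden, hidden, kernel_size=3, padding=1,
                      groups=hidden, bias=False),   # depthwise 3x3x3
            nn.BatchNorm3d(hidden), nn.ReLU(inplace=True),
            nn.Conv3d(hidden, channels, kernel_size=1, bias=False),
            nn.BatchNorm3d(channels),
        )

    def forward(self, x):
        return x + self.body(x)   # skip keeps the features of each layer

def make_backbone(num_blocks=3, channels=24):
    """Stack num_blocks blocks, matching the 2/3/4-block comparison in Table 8."""
    return nn.Sequential(*[InvertedResidual3D(channels) for _ in range(num_blocks)])
```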
Table 9. Ablation experiments with different modules or strategies (OA%).
Use RE? | Use G3D? | IN | UP | SV | BS
no | no | 95.77 | 98.56 | 96.31 | 97.33
no | yes | 95.86 | 98.94 | 96.90 | 98.04
yes | no | 96.21 | 98.89 | 97.75 | 98.10
yes | yes | 97.18 | 99.31 | 99.35 | 98.14
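Table 9 toggles two components, the random erasing (RE) augmentation and the global 3D attention (G3D) module, and the full model uses both. As an illustration of the RE idea only, a generic block random-erasing routine is sketched below; the probability default, erased-region bounds, and zero fill are assumptions rather than the paper's exact strategy.

```python
import numpy as np

def random_erase_block(patch, p=0.5, max_frac=0.5, rng=None):
    """With probability p, zero out a random spatial rectangle of an
    H x W x B input block; a generic stand-in for the RE augmentation."""
    rng = rng or np.random.default_rng()
    if rng.random() > p:
        return patch                                   # leave the block unchanged
    h, w, _ = patch.shape
    eh = int(rng.integers(1, max(2, int(h * max_frac) + 1)))   # erased height
    ew = int(rng.integers(1, max(2, int(w * max_frac) + 1)))   # erased width
    top = int(rng.integers(0, h - eh + 1))
    left = int(rng.integers(0, w - ew + 1))
    erased = patch.copy()
    erased[top:top + eh, left:left + ew, :] = 0.0      # erase across all bands
    return erased
```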