
Deep-Learning-Based Active Hyperspectral Imaging Classification Method Illuminated by the Supercontinuum Laser

Yu Liu, Zilong Tao, Jun Zhang, Hao Hao, Yuanxi Peng, Jing Hou and Tian Jiang

1 College of Advanced Interdisciplinary Studies, National University of Defense Technology, Changsha 410073, China
2 State Key Laboratory of High-Performance Computing, College of Computer, National University of Defense Technology, Changsha 410073, China
3 Interdisciplinary Center of Quantum Information, National University of Defense Technology, Changsha 410073, China
* Authors to whom correspondence should be addressed.
Submission received: 15 April 2020 / Revised: 26 April 2020 / Accepted: 28 April 2020 / Published: 29 April 2020
(This article belongs to the Section Optics and Lasers)

Abstract

Hyperspectral imaging (HSI) technology provides fine spectral and spatial information about objects. Because it can discriminate materials, it has been used in a wide range of areas. However, traditional HSI depends strongly on sunlight and is hence restricted to daytime use. In this paper, a visible/near-infrared active HSI classification method illuminated by a visible/near-infrared supercontinuum laser is developed for spectral detection and object imaging in the dark. In addition, a deep-learning-based classifier, the hybrid DenseNet, is created to learn feature representations of spectral and spatial information in parallel from active HSI data and is used for active HSI classification. By successfully applying the method to a selection of objects in the dark, we demonstrate that the active HSI classification method makes it possible to detect objects of interest in practical applications. Correct active HSI classification of different objects further supports the viability of the method for camouflage detection, biomedical alteration detection, cave painting mapping and so on.


1. Introduction

Hyperspectral images (HSI) are defined as images containing fine spectra of reflected light with a spectral resolution of 1–10 nm per image pixel [1]. Providing both spectral and spatial information, HSI can discriminate objects with a high degree of confidence and has been used in many areas, such as target detection [2,3,4], environmental monitoring [5,6] and disease detection [7,8,9].
However, traditional HSI is a passive imaging technology that depends on sunlight to measure the light reflected from objects. Useful HSI data can be collected for no more than about 7 hours of sunlight per day, depending on the location and time of year. Cloud cover also reduces illumination and limits the use of passive HSI data. Moreover, in scenarios such as shadows, caves or underwater, a passive HSI system has difficulty detecting the optical information of objects effectively, so applications of passive HSI are strictly limited [10]. In order to operate in environments with limited or even no sunlight, a broadband light source must be aligned with the field of view of the hyperspectral imager to record the reflected spectra from objects; this extends the operating envelope of HSI to 24 hours and eliminates the shadows that alter the measured spectral signature. So far, there are two choices of light source to illuminate targets: halogen lamps and supercontinuum (SC) lasers. Compared to halogen lamps, SC lasers produce not only a broad continuous spectrum but also a spatially coherent, nearly diffraction-limited beam with high brightness that can extend the working distance up to kilometers [2,11,12].
So far, only a few explorations have been carried out. In 1999, the first active HSI system with an SC laser was put forward by MIT Lincoln Laboratory [13]. The authors concluded that SC lasers held great promise for HSI systems in concealed and obscured target detection. Limited by the output power of SC lasers, active HSI then developed slowly [14]. In 2013, Alexander et al. used a 5 W all-fiber SC laser to illuminate targets placed 1.6 km away from the laser; the spectra of different targets agreed well with in-lab measurements under a lamp source [11]. Most recently, in our previous work [2], a 4.5 W all-fiber SC laser was used to illuminate different objects and record their hyperspectral image data. That work demonstrated that the Gaussian-like distribution of the SC illuminator can still be used for accurate reflected-spectrum measurement once the illuminator is characterized in advance, and it also indicated that an active HSI system illuminated by an SC laser with a tailored spectrum has advantages over a passive HSI system.
Compared with the progress in active HSI hardware, active image-processing methods have not received enough attention. It is worth noting that, for the various applications of HSI, object classification is a crucial foundation, aiming to identify the objects depicted in HSI by analyzing spectral and spatial features [15]. So far, only a few methods have been used for active HSI classification tasks. In the first active HSI system [13], the Spectral Angle Mapper (SAM) was used to detect the man-made targets in the scene; it computes the angles in n-dimensional spectral space (n is the number of spectral bands) between the reference spectra and the spectrum of each pixel. In [1,16], the Support Vector Machine (SVM) method was applied to perform classification; it separates classes by selecting a set of hyperplanes that maximize the distance between the nearest training samples and the hyperplanes. At present, these methods still cannot achieve high classification performance for active HSI.
Recently, deep-learning-based approaches have attracted enormous attention and have been applied to passive HSI classification, since they can automatically learn representative spectral and spatial features in a hierarchical manner from data and can yield better performance than traditional, shallower classifiers. For example, Li et al. proposed a three-dimensional convolutional neural network (3D CNN) framework for passive HSI classification [17], which used 3D convolutional kernels to extract joint spectral-spatial features. In addition, Zhong et al. proposed a spectral-spatial residual network (SSRN) [18]. By learning spectral and spatial features in sequence using a hybrid multi-dimensional residual structure including 3D and two-dimensional (2D) operations, SSRN can extract more discriminative features and achieve satisfactory results for passive HSI classification. Along with the development of active HSI hardware, the success of deep-learning-based classification models on passive HSI has led many to ask whether similar success is achievable in classifying objects under the illumination of an SC laser rather than ambient light, whose intensity and spectral distribution across entire images are much more uniform than those of SC lasers [19]. Moreover, there are no publicly available active HSI datasets acquired with SC lasers, which limits the development of active HSI classification methods. Without accurate object classification, practical applications of active HSI may be greatly hindered.
At present, there are many products and studies of active HSI that use halogen lamps to illuminate objects, working in laboratories or underwater at operating distances of less than 100 m. In this work, as a proof of concept, an active HSI system illuminated by an SC laser is developed as a possible alternative showing that SC lasers can accomplish similar tasks; it is also an exploration of the practical applications of SC lasers. The objective of this paper is not only to explore the suitability of visible/near-infrared active HSI as a tool for object detection in the dark, but also to create a dataset and spectral library of active HSI under SC laser illumination and to propose a feasible deep-learning-based classification method that achieves high accuracy for active HSI: a hybrid DenseNet model containing parallel one-dimensional (1D) and 2D feature-extraction branches. The 1D branch learns spectral feature representations from the stack of spectral bands, while the 2D branch learns spatial feature representations from the data. The extracted spectral and spatial features are then concatenated for classification via a softmax classifier. To the best of our knowledge, this is the first such attempt in the field of active HSI classification, and we propose that this active HSI classification method will be valuable for detection and investigation in the various applications where there is not enough ambient light.
To summarize, our main contributions are as follows: (1) We establish an active HSI system and create a spectral library and dataset from a selection of materials likely to be present in realistic scenarios under SC laser illumination, which can be used for the future development of active HSI classification technology. (2) We propose a hybrid DenseNet classification model for active HSI classification. This model extracts spectral and spatial features in parallel with its 1D and 2D branches, which reduces interference between the two types of features; the extracted features are then concatenated for classification. Compared with other methods, the proposed network achieves better classification accuracy on the active HSI dataset.

The rest of this paper is organized as follows: Section 2 describes the experimental setup and the selection of materials for measurement. Section 3 details the spectral library and dataset of the active HSI. Section 4 introduces the proposed hybrid DenseNet model. Section 5 presents experimental results and discussion. Finally, Section 6 concludes the paper.

2. Experimental Setup and Materials

The experimental setup of the active HSI system is shown in Figure 1a. The active HSI system uses a home-made SC laser as the broadband coherent illuminator and a commercial visible/near-infrared hyperspectral imager (Sichuan Dualix Spectral Image Technology Co., Ltd., GaiaField-F-V10) as the imaging instrument. An off-axis parabolic silver-plated mirror is mounted on an XYZ translation stage to direct the SC laser beam onto the objects. A diaphragm installed behind the parabolic mirror removes the weak illumination at the edge of the laser spot.
The spectral detection range of the hyperspectral imager is 400–1000 nm with a spectral resolution of 3.5 nm. The image obtained by the imager has a spatial size of 568 × 696 pixels and 174 spectral bands after down-sampling. The SC laser, about 270 × 250 × 80 mm³ in size and driven by a direct-current power supply, has an average optical output power of 4 W and an approximate spectral range of 500–2400 nm. The output spectrum of the SC laser in the wavelength range of 400–1200 nm, measured by a commercial spectrograph, is shown in Figure 1b. Figure 1c shows a photo of the SC laser spot captured by the hyperspectral imager in the dark. The laser beam diameter on the plane of targets placed ~5 m from the SC laser is ~12 cm, measured with the 0.5 × 0.5 cm squares drawn on a piece of white paper. The spectral profiles of the 13 points (A–J) marked in Figure 1c are shown in Figure 1d. The measured illumination spectra at different locations of the spot clearly differ from each other, showing a non-uniform distribution of intensity and spectrum. By contrast, ambient sunlight is much more uniform, which makes passive HSI classification and spectral analysis much easier. This non-uniformity undoubtedly limits the development and application of active HSI systems.
In the experiments, different materials are selected as objects of interest, including fresh leaves, plastic leaves, blue plastic flowers, white plastic flowers, a green plastic bottle cap, red wood and a calibrated Spectralon reflectance standard plate (Labsphere Inc., USA); they are shown in Figure 2. To avoid the effect of sunlight, the active HSI experiments are conducted in darkness. The objects are placed ~5 m away from the active HSI system. At such close distances, the radiation power of the SC laser is sufficient for illumination and for receiving the reflected signal [20]. In actual scenarios, the hyperspectral imager and SC laser may be far from the targets, so a higher-power SC laser is needed to ensure that sufficient reflected signal reaches the hyperspectral imager. In that situation, the radiation power of the SC laser should be chosen carefully, matching the solar irradiance at the ground surface over the required spectral range [11]. Long-distance experiments and applications will be a future research direction once a high-power SC laser is developed.

3. Spectral Library and Dataset

In this section, we capture the active HSI raw data of the objects in the dark environment and convert it from digital counts into reflectance. Then, we randomly select some regions of interest (ROIs) from the objects to make the spectral library and dataset, which are used for subsequent spectral analysis and classification.

3.1. Radiometric Calibration

A raw hyperspectral image consists of the digital counts the HSI imager captures at different wavelengths. It can be seen as the sum of the upwelling radiance reflected from the objects, reflections from scatterers in the air and ambient light, plus noise inherent in the sensor [1]. Reflectance is the percentage of light at each wavelength that is reflected from the objects, with the effects of the illuminator and the optical properties of the air column removed [1]. Assuming all object surfaces behave like Lambertian reflectors, the reflectance at a specific wavelength λ can be expressed as:
$$R(\lambda) = \frac{L_u(\lambda)}{L_d(\lambda)} \tag{1}$$
where $L_u(\lambda)$ is the upwelling radiance and $L_d(\lambda)$ is the downwelling radiance. However, as shown above, the SC laser spot has a non-uniform distribution of intensity and spectral components; thus, it is inappropriate to use a space-independent radiance to calculate the reflectance of objects. A previous study [2] gives a clear argument on this point. Considering the non-uniformity of the laser spot, the reflectance at a given wavelength should be modified as:
$$R(\lambda) = \frac{L_u(x, y, \lambda)}{L_{u\_\mathrm{ref}}(x, y, \lambda)} \times R_{\mathrm{ref}}(\lambda) \tag{2}$$
where $(x, y)$ denotes the location of the pixel, $L_u(x, y, \lambda)$ and $L_{u\_\mathrm{ref}}(x, y, \lambda)$ are the space-dependent spectral upwelling radiances from the objects and the reference plate, respectively, and $R_{\mathrm{ref}}(\lambda)$ is the reflectance spectrum of the Spectralon reflectance standard plate.
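For concreteness, Equation (2) amounts to a pixel-wise division of the scene radiance cube by the reference-plate radiance cube, scaled by the plate's known reflectance. A minimal NumPy sketch is given below; the array names and the epsilon guard against dark pixels are our own assumptions, not part of the original processing chain.

```python
import numpy as np

def calibrate_reflectance(scene_cube, plate_cube, plate_reflectance, eps=1e-8):
    """Pixel-wise radiometric calibration following Equation (2).

    scene_cube:        (H, W, B) upwelling radiance from the objects.
    plate_cube:        (H, W, B) upwelling radiance from the Spectralon plate
                       recorded under the same laser spot.
    plate_reflectance: (B,) known reflectance spectrum of the plate.
    """
    # eps avoids division by zero where the laser spot carries no energy
    return scene_cube / (plate_cube + eps) * plate_reflectance
```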
The raw hyperspectral data cube of the objects captured by the active HSI system in the dark is shown in Figure 3a. After calibration by Equation (2), the resulting data cube is shown as a false-color map in Figure 3b. In Figure 3b, a circle of blue area indicates an obvious non-uniformity between the edge and center of the laser spot. This phenomenon can be attributed to the positional misalignment of the laser spot between the reference plate and the objects, and to the distance of the objects from the reference plate. The calibrated data cube is used to make the spectral library and dataset.

3.2. ROIs and Spectral Library

In this section, the Image Labeler app provided by MATLAB (https://www.mathworks.com) is used to select ROIs for each object [1]. Avoiding shadows and highlights, ROIs were chosen manually on perceived representative homogeneous surfaces; their locations on the false-color map are shown in Figure 4, with different colors indicating different materials. Among the ROIs, "background" refers to the lightless areas without objects. The labels and pixel numbers of the selected ROIs for the different objects are listed in Table 1. They make up the spectral library and provide the training data for classification.
The mean spectra of the different objects in the ROIs are shown in Figure 5. Note that the spectrum below ~500 nm is not reliable, since the laser light shorter than 500 nm is very weak. Although the spectra in the spectral library show some variation in absolute reflectance, there is no obvious variation in feature position. For example, the calculated reflectance of fresh leaves shows the typical spectral signatures of green vegetation: a small peak from 510 to 600 nm, a sharp red edge from 670 to 750 nm and a high-reflectance infrared region from 750 to 950 nm. By contrast, the spectrum of the green plastic leaves shows a gentler ascent at the red edge and a different trough position at around 650 nm. In conclusion, there are large spectral differences among the different objects under SC laser illumination, especially at the troughs where the lowest reflectance appears. These differences show the feasibility of active HSI classification.

3.3. Dataset

Generally, a hyperspectral scene dataset contains two parts: the HSI data cube and a label image indicating the class of each pixel. In this section, we create an active HSI dataset that can be used for designing and testing active HSI classification algorithms. In our experiments, the dataset provides the testing set for classification.
The active HSI data cube consists of 568 × 696 pixels and 174 spectral bands in the wavelength range of 400–1000 nm; its false-color map has been shown in Figure 3b. The data cube is saved as "active_HSI.mat". A hand-drawn label image of the active HSI data cube, delineated in detail using the Image Labeler app, is shown in Figure 6. As in the ROI selection, shadow areas and object edges (the black areas in Figure 6) are avoided because of the degenerate or mixed optical signatures from different materials there. The label image, comprising 568 × 696 pixels, is saved as "active_HSI_label.mat". The pixel numbers of each object are listed in Table 2. The labels are consistent with those of the spectral library for the convenience of classification model training and testing. Among them, Label 0 refers to the black areas, which are not grouped into meaningful classes. Information about the dataset is summarized in Table 3.
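As a usage illustration, the two .mat files can be read with SciPy; the variable keys inside the files ("data" and "label") are hypothetical and depend on how the cubes were actually saved.

```python
from scipy.io import loadmat

cube = loadmat("active_HSI.mat")["data"]           # (568, 696, 174) data cube
labels = loadmat("active_HSI_label.mat")["label"]  # (568, 696) class labels

mask = labels > 0     # Label 0 marks the unlabelled black areas
spectra = cube[mask]  # spectra of all labelled pixels, one row per pixel
```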

4. Hybrid DenseNet Classification Model

Generally, deep-learning-based models contain multiple layers of nonlinear neurons that can learn hierarchical representations from large amounts of labelled data and achieve high performance on various tasks, including object classification [21]. At present, DenseNet is one of the most popular supervised deep-learning models and has proved very effective for passive HSI classification tasks [22,23,24]. In this section, we explain the proposed hybrid DenseNet classification model in detail.

4.1. Extracting Spectral and Spatial Features from HSI Data

Convolutional layers are the key operations of DenseNet; they use a stack of kernels to learn feature representations from the input HSI data. The proposed hybrid DenseNet is composed of a 1D spectral branch and a 2D spatial branch. The 1D spectral branch mainly contains 1D convolutional layers, which extract spectral features, while the 2D spatial branch contains 2D convolutional layers, which extract spatial features of adjacent pixels.
In the 1D convolutional operation, the input data are convolved with 1D convolutional kernels before passing through the activation function to form the output. Formally, in the $l$th 1D convolutional layer, the output $v_{lj}^{x}$ of a neuron at position $x$ of the $j$th feature map is:
$$v_{lj}^{x} = f\!\left( \sum_{m} \sum_{d=0}^{D_l - 1} k_{ljm}^{d}\, v_{(l-1)m}^{x+d} + b_{lj} \right) \tag{3}$$
where $m$ indexes the feature maps in the $(l-1)$th layer connected to the current feature map, $k_{ljm}^{d}$ is the value at position $d$ of the 1D kernel connected to the $m$th feature map, $D_l$ is the width of the kernel along the spectral dimension, and $b_{lj}$ is the bias of the $j$th feature map in the $l$th layer. $f(\cdot)$ is the activation function, which introduces nonlinearity into the model. Similarly, in the 2D convolution operation, the value $v_{lj}^{xy}$ of the neuron at position $(x, y)$ of the $j$th feature map is:
$$v_{lj}^{xy} = f\!\left( \sum_{m} \sum_{h=0}^{H_l - 1} \sum_{w=0}^{W_l - 1} k_{ljm}^{hw}\, v_{(l-1)m}^{(x+h)(y+w)} + b_{lj} \right) \tag{4}$$
where $H_l$ and $W_l$ are the height and width of the 2D convolutional kernel, respectively. For the activation function $f(\cdot)$, we adopt the Mish function [25]:
$$f(x) = x \times \tanh\!\left( \ln(1 + e^{x}) \right) \tag{5}$$
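In TensorFlow, the Mish activation of Equation (5) can be written in one line using the softplus identity ln(1 + e^x); this is a straightforward sketch rather than the authors' exact implementation.

```python
import tensorflow as tf

def mish(x):
    # Mish: x * tanh(softplus(x)) = x * tanh(ln(1 + e^x)), Equation (5)
    return x * tf.math.tanh(tf.math.softplus(x))
```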

4.2. Densely-Connected Architecture

Dense connectivity and DenseNet were proposed by Huang et al. [26]. In DenseNet, every layer has direct paths to all preceding and subsequent layers, and dense connectivity combines features by concatenating them. As Figure 7 shows, let $X_l$ be the output of the $l$th convolutional layer, and let $H_l(\cdot)$ represent the composite nonlinear transformation of the $l$th convolutional layer, including batch normalization, activation layers and convolutional layers. The connected structure of DenseNet is formulated as:
$$X_l = H_l\!\left( [X_0, X_1, \ldots, X_{l-1}] \right) \tag{6}$$
where $[\cdot, \ldots, \cdot]$ denotes concatenation along the channel dimension. DenseNet stacks the channels while leaving the values of the feature maps unchanged. Each layer of DenseNet connects directly to the input and to all prior layers, resulting in implicit deep supervision. This connected structure alleviates gradient vanishing and thus allows a deeper network. In addition, the densely-connected architecture of DenseNet has a regularizing effect that restrains overfitting [27].
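A dense block following Equation (6) can be sketched with the Keras functional API as below. The layer ordering (batch normalization, Mish activation, convolution) and the growth rate of 12 follow the description in Section 4.3; the helper name dense_block is our own.

```python
from tensorflow.keras import layers

def dense_block(x, num_layers=3, growth_rate=12, kernel_size=(1, 1, 7)):
    """Each layer sees the concatenation of all preceding feature maps,
    as in Equation (6). kernel_size (1, 1, 7) matches the spectral branch;
    use (3, 3, 1) for the spatial branch."""
    for _ in range(num_layers):
        y = layers.BatchNormalization()(x)
        y = layers.Activation(mish)(y)  # Mish as defined above
        y = layers.Conv3D(growth_rate, kernel_size, padding="same")(y)
        x = layers.Concatenate(axis=-1)([x, y])  # dense connectivity
    return x
```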

4.3. The Proposed Hybrid DenseNet Architecture

Recently, several studies [28,29] have shown that multi-dimensional CNN architectures learn more discriminative features than single-dimensional CNNs and thus achieve better classification performance. Inspired by these findings, we create a hybrid DenseNet classification model containing a 1D spectral branch and a 2D spatial branch in parallel. In addition, to remove spectral redundancy and reduce the computational burden, Principal Component Analysis (PCA) is applied to the original HSI data along the spectral dimension before classification. The architecture of the proposed model is shown in Figure 8.
Let the active HSI data cube captured by the imager be denoted by $I \in \mathbb{R}^{H \times W \times B}$, where $H$ is the height, $W$ the width and $B$ the number of spectral bands. PCA reduces the number of spectral bands from $B$ to $N$ while maintaining the spatial dimensions, yielding the modified input $M \in \mathbb{R}^{H \times W \times N}$. In the experiments, the number of principal components is set to 40, i.e., $N = 40$, and $H \times W \times B$ is 568 × 696 × 174.
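The PCA step can be sketched with scikit-learn by treating every pixel spectrum as a sample; the variable cube stands for the calibrated reflectance cube from Section 3 and is our own naming.

```python
from sklearn.decomposition import PCA

H, W, B, N = 568, 696, 174, 40
pca = PCA(n_components=N)
# flatten to (H*W, B), project to N components, restore spatial shape
reduced = pca.fit_transform(cube.reshape(-1, B)).reshape(H, W, N)
print(pca.explained_variance_ratio_.sum())  # ~0.97 for N = 40, cf. Table 7
```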
To apply deep-learning-based image classification, the active HSI data cube is divided into small overlapping 3D patches, whose ground-truth labels are those of their central pixels. Each 3D patch $P \in \mathbb{R}^{S \times S \times N}$ is extracted from $M$ and covers an $S \times S$ spatial window and all $N$ principal components. In this way, we create the training set from the spectral library and the testing set from the dataset. In our experiments, $S = 7$, so the input patches have a size of 7 × 7 × 40.
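Patch extraction can then be sketched as follows; zero-padding at the image border and the helper name extract_patches are our own assumptions. A call such as extract_patches(reduced, labels) yields the patch array and its label vector.

```python
import numpy as np

def extract_patches(data, labels, window=7):
    """Cut window x window x N patches around every labelled pixel;
    each patch takes the label of its central pixel."""
    pad = window // 2
    padded = np.pad(data, ((pad, pad), (pad, pad), (0, 0)), mode="constant")
    patches, targets = [], []
    for r, c in zip(*np.nonzero(labels)):
        patches.append(padded[r:r + window, c:c + window, :])
        targets.append(labels[r, c])
    return np.asarray(patches), np.asarray(targets)
```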
In the 1D spectral branch, input patches are convolved with 1D convolutional kernels of size 1 × 1 × 7 along the spectral dimension. The first convolutional layer has 24 1D kernels, uses "valid" padding and a down-sampling stride of (1, 1, 2) to reduce the number of spectral channels, producing 24 feature maps of shape 7 × 7 × 17. Then, a dense spectral block containing 3 convolutional layers is used to learn spectral features. In the dense block, each convolutional layer has 12 kernels of size 1 × 1 × 7, and the stride is (1, 1, 1) to maintain the size of the feature maps. After the dense spectral block, 60 feature maps of shape 7 × 7 × 17 are generated. A further convolutional layer with a kernel size of 1 × 1 × 17 yields 60 feature maps of shape 7 × 7 × 1. Finally, a global average pooling layer produces the spectral feature vector of size 1 × 60. Details of the layers of the spectral branch are listed in Table 4.
Meanwhile, the input patches are delivered to the 2D spatial branch, whose first convolutional layer has a kernel size of 1 × 1 × 40, compressing the spectral bands into one dimension and generating 24 feature maps of shape 7 × 7 × 1. Then, a dense spatial block containing 3 convolutional layers is attached; each convolutional layer in it has 12 kernels of size 3 × 3 × 1. After the dense spatial block, 60 feature maps of shape 7 × 7 × 1 are obtained, and a global average pooling layer yields the spatial feature vector of size 1 × 60. The implementation of the spatial branch is given in Table 5.
After the spectral and spatial branches, we obtain a spectral feature vector and a spatial feature vector. The two vectors are concatenated into a feature vector of size 1 × 120; concatenation is used instead of an add operation, which would mix the spectral and spatial features together. After concatenation, a dropout layer with a rate of 0.5 is applied to avoid overfitting. Finally, the classification result is obtained via a fully connected layer and the softmax activation function.
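Putting the pieces together, the two branches and the classification head can be sketched with the Keras functional API, reusing the dense_block and mish helpers above. Kernel sizes, strides and output shapes follow Tables 4 and 5; treating the 1D and 2D convolutions as Conv3D layers with degenerate kernel dimensions is our own implementation choice.

```python
from tensorflow.keras import layers, Model

inp = layers.Input(shape=(7, 7, 40, 1))  # S x S x N patch, one channel

# 1D spectral branch (Table 4)
s = layers.Conv3D(24, (1, 1, 7), strides=(1, 1, 2), padding="valid")(inp)
s = dense_block(s, kernel_size=(1, 1, 7))       # -> (7, 7, 17, 60)
s = layers.Conv3D(60, (1, 1, 17), padding="valid")(s)
s = layers.GlobalAveragePooling3D()(s)          # spectral vector, 1 x 60

# 2D spatial branch (Table 5)
p = layers.Conv3D(24, (1, 1, 40), padding="valid")(inp)  # squeeze spectra
p = dense_block(p, kernel_size=(3, 3, 1))       # -> (7, 7, 1, 60)
p = layers.GlobalAveragePooling3D()(p)          # spatial vector, 1 x 60

x = layers.Concatenate()([s, p])                # 1 x 120 joint feature
x = layers.Dropout(0.5)(x)
out = layers.Dense(8, activation="softmax")(x)  # 8 labelled classes
model = Model(inp, out)
```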
All weights of the hybrid DenseNet are randomly initialized and updated iteratively with the back-propagation algorithm. Stochastic Gradient Descent (SGD) is chosen as the optimizer to minimize the softmax loss [30]. During training, we use mini-batches of size 32 and train the network for 50 epochs.
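Training can then be sketched as below; the sparse categorical cross-entropy loss and the shift of labels from 1–8 to 0–7 are our own assumptions, consistent with the softmax loss mentioned above.

```python
from tensorflow.keras.optimizers import SGD

model.compile(optimizer=SGD(learning_rate=0.001),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# train_patches/train_labels from extract_patches on the spectral library;
# add the trailing channel axis and map labels 1-8 to 0-7 for the loss
model.fit(train_patches[..., None], train_labels - 1,
          batch_size=32, epochs=50)
```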

5. Experimental Results

In this section, we test the performance of the hybrid DenseNet classification model. To evaluate accuracy and efficiency, three quantitative metrics are used: overall accuracy (OA), average accuracy (AA) and the Kappa coefficient. OA is the ratio of correctly classified testing samples to the total number of testing samples. AA is the average of the per-class classification accuracies. The Kappa coefficient reflects the consistency between the labels and the classification result. The higher the three metric values, the better the performance.
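The three metrics can be computed from the confusion matrix; a small sketch using scikit-learn is shown below (the helper name evaluate is ours).

```python
import numpy as np
from sklearn.metrics import confusion_matrix, cohen_kappa_score

def evaluate(y_true, y_pred):
    cm = confusion_matrix(y_true, y_pred)
    oa = np.trace(cm) / cm.sum()                # overall accuracy
    aa = np.mean(np.diag(cm) / cm.sum(axis=1))  # mean per-class accuracy
    kappa = cohen_kappa_score(y_true, y_pred)   # agreement beyond chance
    return oa, aa, kappa
```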

5.1. Experimental Setting

The hybrid DenseNet framework is shown in Figure 9. Deep-learning-based algorithms are data-driven and rely on labelled training samples. We follow the method described in Section 4.3 to make the input patches. The training set is provided by the spectral library to train the hybrid DenseNet, while the testing set is provided by the dataset, as Figure 9 shows.
The hybrid DenseNet model is implemented in TensorFlow, a scalable deep-learning framework. Training and testing are conducted on a GTX 1060 graphics processing unit (GPU) with 8 GB RAM. To obtain a convincing estimate of the capability of the proposed hybrid DenseNet, each experiment is run 5 times.

5.2. Comparison with Other Classification Methods

To demonstrate the effectiveness of the proposed hybrid DenseNet, our model is compared with several methods widely used in active HSI classification as well as state-of-the-art methods: (1) the SVM with a radial basis function kernel (SVM-RBF) [31], (2) the 3D CNN of [17] and (3) the spectral-spatial residual network (SSRN) of [18]. For all methods, the input data are processed by PCA, which reduces the spectral bands from 174 to 40. For SVM-RBF, a spectral-based classifier, we simply feed the PCA-processed data to it pixel by pixel. For 3D CNN, SSRN and the hybrid DenseNet, we use the 7 × 7 × 40 neighborhood of each pixel as the input.
The batch size is set to 32 and the SGD optimizer is adopted. The learning rate is set to 0.001, and each model is trained for 50 epochs with 100% of the training samples provided by the spectral library. Experiments are run 5 times, and the mean classification results in percent are shown in Table 6. The classification maps of the different methods are shown in Figure 10. As the table and figure show, all methods achieve relatively good performance on the objects under SC laser illumination, which demonstrates that active HSI can distinguish different materials accurately.
However, although SVM-RBF achieves comparable or even higher performance than the other deep-learning-based methods, its accuracy on Class 6 (green plastic bottle cap) is poor, only 62.26%, which restricts its application in active HSI classification. Compared with 3D CNN, the proposed hybrid DenseNet has a deeper architecture, giving it a stronger ability to extract and learn features and thus better performance. SSRN, with its deeper network, achieves better performance than 3D CNN; however, it extracts spectral features first and spatial features second, so the extracted spectral features may be degraded during spatial feature extraction, which may hurt its performance. The proposed hybrid DenseNet has a deeper network than 3D CNN and extracts spectral and spatial features in parallel, which preserves both feature types well. Although its results for some classes and metrics are not the highest, the hybrid DenseNet achieves acceptable and consistent performance on all indexes, which proves its effectiveness in classifying active HSI data. In addition, for all methods, misclassification appears frequently in areas where the light intensity is weak, such as the edge of the SC laser spot and shadow areas. This illumination issue still needs further exploration and improvement.

5.3. Effect of the Model Structure

To validate the effectiveness of the proposed parallel double-branch structure, we perform two further experiments: one with only the 1D spectral branch and one with only the 2D spatial branch. In these experiments, PCA reduces the spectral bands from 174 to 40, and 100% of the training samples are used to train the models. The results are shown in Figure 11. The model with only the 2D spatial branch, which ignores spectral features, performs much worse, implying that the spectral features of active HSI are the key elements for classification; adding the 1D spectral branch increases the classification performance considerably, as expected. On the other hand, the model with only the 1D spectral branch achieves relatively good results for most objects, but its accuracy on Class 6 is very low. Considering spatial features helps reduce the misclassification of Class 6, which means the spatial branch is indispensable for obtaining a more robust classification model for active HSI.
In conclusion, for active HSI data, spectral-based classifiers, whether SVM-RBF or a 1D deep-learning-based classifier, can distinguish most objects, since the spectral differences between objects are significant. However, reasonable use of spatial information helps improve classification performance. Taking advantage of both kinds of features, the hybrid DenseNet achieves better classification results with the parallel double-branch structure.

5.4. Effect of the Training Sample Size

A deep-learning-based model is a data-driven algorithm that depends on large amounts of labelled samples. In this section, experiments explore the performance of the proposed hybrid DenseNet with limited training samples. For the active HSI, we use 1%, 5%, 10%, 40%, 70% and 100% of the samples from the spectral library to form the training sets, respectively, and examine how the accuracy of each class, AA, OA and Kappa are affected.
The results are shown in Figure 12. As expected, the accuracy increases rapidly with the number of training samples at first. However, once the proportion of training samples reaches 10%, the accuracy of most classes plateaus. For Class 4 and Class 6, the accuracies peak only when the proportion is 100%. The results reveal that 10% of the training samples is enough for the proposed model to learn the classification features, but some classes need more training samples, which may be caused by the spectral variability within the classes, as shown in Section 5.6.

5.5. Effect of the Principal Component Number

PCA is applied to the original active HSI data to cut down spectral redundancy and reduce computational cost. The degree of reduction depends on the number of principal components preserved: the fewer the principal components kept, the less the redundancy and cost, but the more information is lost. In this section, we study the influence of PCA on the performance of the hybrid DenseNet. An overview of the percentage of variance explained by different numbers of principal components is shown in Table 7. As it shows, 40 principal components explain 97.18% of the total variance, and the explained fraction decreases with the number of components.
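The figures in Table 7 correspond to the cumulative explained variance ratio of the fitted PCA model; a one-loop sketch (reusing the pca object from Section 4.3) reproduces them.

```python
import numpy as np

cum = np.cumsum(pca.explained_variance_ratio_)
for n in (10, 20, 30, 40):
    print(f"{n} components: {cum[n - 1]:.2%}")  # compare with Table 7
```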
Figure 13 shows the effect of the number of PCA components on the classification results. For most classes, 10 components are enough to obtain good accuracy; however, Class 6 needs 40 components to achieve a satisfactory result. The experiment illustrates that there is considerable spectral redundancy in the active HSI dataset and that the hyperspectral imager could use fewer spectral bands.

5.6. Discussion

From the experimental results presented above, the following observations can be made.
Firstly, active HSI has the potential to distinguish objects of similar color. Both the traditional classification model (SVM-RBF) and the deep-learning-based model (the proposed hybrid DenseNet) achieve good classification performance for most classes.
Secondly, owing to the significant spectral differences between most objects, extracting spectral features alone is enough to obtain good classification results. For Class 6, however, spatial information is indispensable for better results. The proposed hybrid DenseNet extracts spectral and spatial features with its 1D and 2D branches, and the parallel double-branch structure avoids degrading either type of feature. With this architecture, the hybrid DenseNet achieves better classification results than the other methods and the single-branch structures.
Thirdly, for most classes, 10% of the training samples is enough to train a good hybrid DenseNet model, but Class 6 needs more training samples. There is also considerable spectral redundancy in active HSI data: 10 principal components achieve relatively good performance for most classes, except Class 6, which needs more principal components.
From the above discussion, we find that the classification performance in the various settings is seriously affected by Class 6. Therefore, 11 spectra of Class 6 are randomly selected and shown in Figure 14. There is large spatial variability of the spectral signatures within Class 6, which may explain the difficulty in classifying it. The spatial variability may be caused by highlights on some areas of the green plastic bottle cap.

6. Conclusions

To summarize, this work presents an active HSI classification system with coherent SC laser illumination and a hybrid DenseNet classification model. It is an exploration of the applications of SC lasers and provides a possible alternative to halogen lamps as illuminators for active HSI. The experimental results indicate that the reflected spectral signatures under SC laser illumination are suitable for active HSI applications in the dark. Moreover, the accurate object classification results indicate that most parts of the objects can be correctly classified by their distinct spectral fingerprints and spatial information, even when a small number of samples is used to train the hybrid DenseNet classification model. Furthermore, the experimental results indicate the superiority of the proposed hybrid DenseNet model, which uses a double-branch structure to extract discriminative spectral and spatial features in parallel. Although this system is still a proof of concept and needs more exploration before use in complex real-life conditions, such as imaging from low-flying unmanned aerial vehicles or high towers, the correct active HSI classification of different objects supports its viability for camouflage detection, biomedical alteration detection and so on. In the future, more attention should be paid to improving the deep-learning-based classification model and accelerating the classification algorithms. The working distance of the active HSI classification system should also be extended to kilometers to take full advantage of SC lasers, after considering the effects of atmospheric turbulence and absorption [7,8,10].

Author Contributions

Conceptualization, J.H. and T.J.; data curation, Y.L.; formal analysis, Y.L. and Z.T.; funding acquisition, T.J.; investigation, Y.L., Z.T., J.Z. and H.H.; methodology, T.J.; project administration, Y.P. and T.J.; supervision, J.H. and T.J.; visualization, Y.L.; writing—original draft, Y.L.; writing—review and editing, J.H. and T.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation (NSF) of China under Grant numbers 11802339, 11805276, 61805282, 61801498 and 11804387; Scientific Researches Foundation of the National University of Defense Technology under Grant numbers ZK16-03-59, ZK18-01-03, ZK18-03-36 and ZK18-03-22; NSF of Hunan Province under Grant number 2016JJ1021 and The Youth talent lifting project under Grant number 17-JCJQ-QT-004.

Acknowledgments

We acknowledge editors and reviewers for their valuable suggestions and corrections.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ødegård, Ø.; Mogstad, A.A.; Johnsen, G.; Sørensen, A.J.; Ludvigsen, M. Underwater hyperspectral imaging: A new tool for marine archaeology. Appl. Opt. 2018, 57, 3214–3223. [Google Scholar] [CrossRef] [Green Version]
  2. Guo, Z.; Liu, Y.; Zheng, X.; Yin, K. Active hyperspectral imaging with a supercontinuum laser source in the dark. Chin. Phys. B 2019, 28, 34206. [Google Scholar] [CrossRef]
  3. Nie, W.; Zhang, B.; Zhao, S. Discriminative Local Feature for Hyperspectral Hand Biometrics by Adjusting Image Acutance. Appl. Sci. 2019, 9, 4178. [Google Scholar] [CrossRef] [Green Version]
  4. Bao, Y.; Mi, C.; Wu, N.; Liu, F.; He, Y. Rapid Classification of Wheat Grain Varieties Using Hyperspectral Imaging and Chemometrics. Appl. Sci. 2019, 9, 4119. [Google Scholar] [CrossRef] [Green Version]
  5. Jay, S.; Guillaume, M. Underwater target detection with hyperspectral remote-sensing imagery. In Proceedings of Geoscience and Remote Sensing Symposium; IEEE: Honolulu, HI, USA, 2010; pp. 2820–2823. [Google Scholar]
  6. Zheng, Y.; Ma, Y.; He, S. Detection of Huanglongbing (citrus greening) based on hyperspectral image analysis and PCR. Front. Agric. Sci. Eng. 2019, 6, 172–180. [Google Scholar] [CrossRef]
  7. Tseng, Y.-P.; Bouzy, P.; Pedersen, C.; Stone, N.; Tidemand-Lichtenberg, P. Upconversion raster scanning microscope for long-wavelength infrared imaging of breast cancer microcalcifications. Biomed. Opt. Express 2018, 9, 4979–4987. [Google Scholar] [CrossRef] [Green Version]
  8. Ortega, S.; Fabelo, H.; Iakovidis, D.K.; Koulaouzidis, A.; Callico, G.M. Use of hyperspectral/multispectral imaging in gastroenterology. Shedding some–different–light into the dark. J. Clin. Med. 2019, 8, 36. [Google Scholar] [CrossRef] [Green Version]
  9. Abdulridha, J.; Ampatzidis, Y.; Kakarla, S.C.; Roberts, P. Detection of target spot and bacterial spot diseases in tomato using UAV-based and benchtop-based hyperspectral imaging techniques. Precis. Agric. 2019. [Google Scholar] [CrossRef]
  10. Gronwall, C.; Steinvall, O.; Gohler, B.; Hamoir, D. Active and passive imaging of clothes in the NIR and SWIR regions for reflectivity analysis. Appl. Opt. 2016, 55, 5292–5303. [Google Scholar] [CrossRef]
  11. Alexander, V.V.; Shi, Z.; Islam, M.N.; Ke, K.; Kalinchenko, G.; Freeman, M.J.; Ifarraguerri, A.; Meola, J.; Absi, A.; Leonard, J. Field trial of active remote sensing using a high-power short-wave infrared supercontinuum laser. Appl. Opt. 2013, 52, 6813–6823. [Google Scholar] [CrossRef]
  12. Islam, M.N.; Freeman, M.J.; Peterson, L.M.; Ke, K.; Ifarraguerri, A.; Bailey, C.; Baxley, F.; Wager, M.; Absi, A.; Leonard, J.; et al. Field tests for round-trip imaging at a 1.4 km distance with change detection and ranging using a short-wave infrared super-continuum laser. Appl. Opt. 2016, 55, 1584–1602. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  13. Johnson, B.; Joseph, R.; Nischan, M.L.; Newbury, A.B.; Kerekes, J.P.; Barclay, H.T.; Willard, B.C.; Zayhowski, J.J. A compact, active hyperspectral imaging system for the detection of concealed targets. In Proceedings of the SPIE Conference on Detection and Remediation Technologies for Mines and Minelike Targets IV, Orlando, FL, USA, 2 August 1999; pp. 277–786X. [Google Scholar]
  14. Orchard, D.A.; Turner, A.J.; Michaille, L.; Ridley, K.R. White light lasers for remote sensing. In Proceedings of the Technologies for Optical Countermeasures V. International Society for Optics and Photonics, Cardiff, Wales, UK, 6 October 2008; p. 711506. [Google Scholar]
  15. Bioucas-Dias, J.M.; Plaza, A.; Camps-Valls, G.; Scheunders, P.; Nasrabadi, N.M.; Chanussot, J. Hyperspectral Remote Sensing Data Analysis and Future Challenges. IEEE Geosci. Remote Sens. Mag. 2013, 1, 6–36. [Google Scholar] [CrossRef] [Green Version]
  16. Puttonen, E.; Suomalainen, J.; Hakala, T.; Räikkönen, E.; Kaartinen, H.; Kaasalainen, S.; Litkey, P. Tree species classification from fused active hyperspectral reflectance and LIDAR measurements. Fuel Energy Abstr. 2010, 260, 1843–1852. [Google Scholar] [CrossRef]
  17. Li, Y.; Zhang, H.; Shen, Q. Spectral-spatial classification of hyperspectral imagery with 3D convolutional neural network. Remote Sens. 2017, 9, 67. [Google Scholar]
  18. Zhong, Z.; Li, J.; Luo, Z.; Chapman, M. Spectral–spatial residual network for hyperspectral image classification: A 3-D deep learning framework. IEEE Trans. Geosci. Remote Sens. 2018, 56, 847–858. [Google Scholar] [CrossRef]
  19. Ou, Y.; Zhang, B.; Yin, K.; Xu, Z.; Chen, S.; Hou, J. Hyperspectral imaging for the spectral measurement of far-field beam divergence angle and beam uniformity of a supercontinuum laser. Opt. Express 2018, 26, 9822–9828. [Google Scholar] [CrossRef]
  20. Meola, J.; Absi, A.; Leonard, J.D.; Ifarraguerri, A.I.; Islam, M.N.; Alexander, V.V.; Zadnik, J.A. Modeling, development, and testing of a shortwave infrared supercontinuum laser source for use in active hyperspectral imaging. In Proceedings of the SPIE Defense, Security, and Sensing, Baltimore, MD, USA, 18 May 2013; p. 87431D. [Google Scholar]
  21. Zhong, Z.; Li, J.; Ma, L.; Jiang, H.; Zhao, H. Deep residual networks for hyperspectral image classification. In Proceedings of the 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Fort Worth, TX, USA, 23–28 July 2017; pp. 1824–1827. [Google Scholar]
  22. He, L.; Li, J.; Liu, C.; Li, S. Recent Advances on Spectral–Spatial Hyperspectral Image Classification: An Overview and New Guidelines. IEEE Trans. Geosci. Remote Sens. 2018, 56, 1579–1597. [Google Scholar] [CrossRef]
  23. Zhu, K.; Chen, Y.; Ghamisi, P.; Jia, X.; Benediktsson, J.A. Deep Convolutional Capsule Network for Hyperspectral Image Spectral and Spectral-Spatial Classification. Remote Sens. 2019, 11, 223. [Google Scholar] [CrossRef] [Green Version]
  24. Chen, Y.; Lin, Z.; Xing, Z.; Gang, W.; Gu, Y. Deep Learning-Based Classification of Hyperspectral Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 7, 2094–2107. [Google Scholar] [CrossRef]
  25. Li, R.; Zheng, S.; Duan, C.; Yang, Y.; Wang, X. Classification of Hyperspectral Image Based on Double-Branch Dual-Attention Mechanism Network. Remote Sens. 2020, 12, 582. [Google Scholar] [CrossRef] [Green Version]
  26. Huang, G.; Liu, Z.; Laurens, V.D.M.; Weinberger, K.Q. Densely Connected Convolutional Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 9 November 2017; pp. 4700–4708. [Google Scholar]
  27. Wang, W.; Dou, S.; Jiang, Z.; Sun, L. A Fast Dense Spectral–Spatial Convolution Network Framework for Hyperspectral Images Classification. Remote Sens. 2018, 10, 1068. [Google Scholar] [CrossRef] [Green Version]
  28. Roy, S.K.; Krishna, G.; Dubey, S.R.; Chaudhuri, B.B. HybridSN: Exploring 3D-2D CNN Feature Hierarchy for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2019, 17, 277–281. [Google Scholar] [CrossRef] [Green Version]
  29. Li, J.; Liang, B.; Wang, Y. A hybrid neural network for hyperspectral image classification. Remote Sens. Lett. 2020, 11, 96–105. [Google Scholar] [CrossRef]
  30. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. In Proceedings of the International Conference on Learning Representations, San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
  31. Melgani, F.; Bruzzone, L. Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Experimental setup and characterization: (a) Photo of the active HSI system in the laboratory. (b) Output spectrum of the SC laser in the range of 400–1200 nm. (c) False-color image of the SC laser spot captured by the hyperspectral imager. (d) Spectral profiles of the 13 points (A–J) marked in (c) recorded by the hyperspectral imager.
Figure 2. A selection of objects made of materials likely to be present in realistic scenarios. (a) Photo of the objects. (b) Positions of the different objects, represented by different colors.
Figure 3. (a) The false-color map of the raw active HSI data cube, constructed with RGB channels at 640, 550 and 470 nm, respectively. (b) The false-color map of the active HSI data cube after radiometric calibration, generated with RGB channels at 675, 606 and 520 nm, respectively, for clarity.
Figure 4. Regions of interest (ROIs) selected for the spectral library.
Figure 5. The mean spectra of different objects in the ROIs selected for analysis.
Figure 6. Label image of the active hyperspectral imaging (HSI) data cube.
Figure 7. Example of a densely-connected architecture with four layers (l = 4).
Figure 8. The architecture of the proposed hybrid DenseNet model.
Figure 9. Hybrid DenseNet framework for active HSI classification.
Figure 10. Classification maps of the different methods. (a) The false-color map. (b) The label image. (c) Support vector machine with a radial basis function kernel (SVM-RBF). (d) 3D convolutional neural network (3D CNN). (e) Spectral-spatial residual network (SSRN). (f) The proposed hybrid DenseNet.
Figure 11. Effect of different model architectures.
Figure 12. Classification results with varying proportions of training samples.
Figure 13. Classification results with different numbers of principal component analysis (PCA) components.
Figure 14. Spectra of Class 6 in the active HSI data. A total of 11 spectra are randomly selected.
Table 1. Pixel numbers of ROIs selected from each object.

Object | Label | Pixel Number | Percentage of Whole Image (%)
fresh leaves | 1 | 780 | 0.20
plastic leaves | 2 | 578 | 0.15
white plastic flowers | 3 | 578 | 0.15
blue plastic flowers | 4 | 595 | 0.15
red wood | 5 | 867 | 0.22
green plastic bottle cap | 6 | 289 | 0.07
reflectance plate | 7 | 480 | 0.12
background | 8 | 1445 | 0.37
Total | – | 5612 | 1.42
Table 2. Labels and pixel numbers of different objects in the label image.

Object | Label | Pixel Number | Percentage of Whole Image (%)
Black areas | 0 | 117,856 | 29.81
Fresh leaves | 1 | 28,716 | 7.26
Plastic leaves | 2 | 20,773 | 5.25
White plastic flowers | 3 | 15,506 | 3.92
Blue plastic flowers | 4 | 13,509 | 3.42
Red wood | 5 | 16,136 | 4.08
Green plastic bottle cap | 6 | 7038 | 1.78
Reflectance plate | 7 | 5235 | 1.32
Background | 8 | 170,559 | 43.14
Total | – | 395,328 | 100.00
Table 3. An overview of the information about the dataset.

Dataset Component | File Name | Size
Data cube | active_HSI.mat | 568 × 696 × 174
Label image | active_HSI_label.mat | 568 × 696
Table 4. Layer-wise summary of the 1D spectral branch.

Layer | Kernel Size | Output Shape
Input layer | – | 7 × 7 × 40
Conv | 1 × 1 × 7, 24 | 7 × 7 × 17, 24
Conv | 1 × 1 × 7, 12 | 7 × 7 × 17, 12
Concatenate | – | 7 × 7 × 17, 36
Conv | 1 × 1 × 7, 12 | 7 × 7 × 17, 12
Concatenate | – | 7 × 7 × 17, 48
Conv | 1 × 1 × 7, 12 | 7 × 7 × 17, 12
Concatenate | – | 7 × 7 × 17, 60
Conv | 1 × 1 × 17, 60 | 7 × 7 × 1, 60
Global average pooling | – | 1 × 60
Table 5. Layer-wise summary of the 2D spatial branch.

Layer | Kernel Size | Output Shape
Input layer | – | 7 × 7 × 40
Conv | 1 × 1 × 40, 24 | 7 × 7 × 1, 24
Conv | 3 × 3 × 1, 12 | 7 × 7 × 1, 12
Concatenate | – | 7 × 7 × 1, 36
Conv | 3 × 3 × 1, 12 | 7 × 7 × 1, 12
Concatenate | – | 7 × 7 × 1, 48
Conv | 3 × 3 × 1, 12 | 7 × 7 × 1, 12
Concatenate | – | 7 × 7 × 1, 60
Global average pooling | – | 1 × 60
Table 6. Classification results (%) of different methods with 100% training samples.

Class | SVM-RBF | 3D CNN | SSRN | Proposed
1 | 98.68 | 98.56 | 98.75 | 98.73
2 | 98.90 | 98.08 | 98.21 | 98.06
3 | 99.64 | 98.76 | 99.59 | 98.90
4 | 94.31 | 89.28 | 73.17 | 88.48
5 | 99.77 | 95.76 | 98.71 | 99.95
6 | 62.26 | 75.79 | 79.22 | 80.66
7 | 95.35 | 96.55 | 96.62 | 99.26
8 | 98.33 | 95.77 | 98.78 | 95.02
AA | 93.41 | 93.57 | 92.88 | 94.88
OA | 96.84 | 95.25 | 96.26 | 95.28
Kappa | 94.65 | 91.85 | 93.70 | 91.86
Table 7. The contribution (%) to variance explained by different numbers of principal components.

Component Number | 10 | 20 | 30 | 40
Contribution (%) | 92.24 | 94.30 | 95.88 | 97.18
