
Component Decomposition-Based Hyperspectral Resolution Enhancement for Mineral Mapping

1 College of Electrical and Information Engineering, Hunan University, Changsha 418002, China
2 Helmholtz-Zentrum Dresden-Rossendorf (HZDR), Helmholtz Institute Freiberg for Resource Technology, 09599 Freiberg, Germany
3 Earth Observation System and Data Center, China National Space Administration, Beijing 100048, China
4 Faculty of Electrical Engineering and Computer Science, Technical University of Berlin, 10587 Berlin, Germany
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(18), 2903; https://doi.org/10.3390/rs12182903
Received: 30 July 2020 / Revised: 26 August 2020 / Accepted: 4 September 2020 / Published: 7 September 2020

Abstract

Combining spectral and spatial information with enhanced resolution provides not only elaborated qualitative information on surface mineralogy but also information on mineral interactions of abundance, mixture, and structure. This enhancement in resolution helps geomineralogic features such as small intrusions and mineralization become detectable. In this paper, we investigate the potential of resolution enhancement of hyperspectral images (HSIs) with the guidance of RGB images for mineral mapping. In more detail, a novel resolution enhancement method is proposed based on component decomposition. Inspired by the principle of the intrinsic image decomposition (IID) model, the HSI is viewed as the combination of a reflectance component and an illumination component. Based on this idea, the proposed method comprises several steps. First, the RGB image is transformed into the luminance component and the blue-difference and red-difference chroma components (YCbCr), and the luminance channel is considered as the illumination component of an ideal HSI with high spatial resolution. Then, the reflectance component of the ideal HSI is estimated from the downsampled HSI and the downsampled luminance channel. Finally, the HSI with high resolution can be reconstructed by combining the obtained illumination and reflectance components. Experimental results verify that the fused results can successfully achieve mineral mapping, producing better results qualitatively and quantitatively than single-sensor data.
Keywords: hyperspectral image; mineral mapping; resolution enhancement; intrinsic image decomposition

1. Introduction

Hyperspectral scanners, a relatively recent technique in the mining field, have been extensively utilized to explore minerals, since hyperspectral images (HSIs) record rich spectral information from visible to infrared wavelengths in hundreds of spectral channels [1,2,3,4,5,6,7,8,9]. This characteristic allows HSIs to capture the reflectance spectra of different minerals, enabling a non-destructive and non-invasive way to explore mineral deposits [10,11,12]. The main goal of mineral mapping is to determine the spatial location of various minerals. A fusion of spectral and spatial information with increased resolution provides not only enhanced qualitative information on surface mineralogy, but also specific material interactions of composition and structure. Geologists are then able to map formerly undetectable geological features and to extract structural and mineralogical properties. Intrusions and mineralization found in dykes and veins, or structures tied to tectonic forces such as faults and folds, can be measured. However, current hyperspectral sensors cannot capture data with high resolution in both the spatial and spectral dimensions because of the finite sun irradiance. Therefore, the captured data often suffer from low spatial resolution, which limits the identification of different minerals [13]. In contrast to HSIs, RGB images usually provide much higher spatial resolution but much lower spectral resolution, i.e., only the R, G, and B channels. Thus, the fusion of hyperspectral and RGB images is an effective scheme to yield data with both high spatial and high spectral resolution, which is helpful for mapping all kinds of minerals.
In order to better achieve mineral mapping, several data fusion techniques have been developed in recent decades [10,14,15]. In [14], a decision-level multi-sensor fusion method was proposed based on RGB data and three types of infrared data, from short-wave to long-wave infrared HSIs. Low-rank component analysis was applied to extract discriminative features of the multi-sensor data, and a majority voting rule was then used to select the final probability map. In [10], a fusion framework for VNIR and SWIR data was proposed based on a majority voting rule, in which an automatic high-resolution mineralogical imaging system was used to generate training labels. These approaches to mineral mapping mainly focus on the application of existing machine learning algorithms.
In recent years, various super-resolution schemes for HSIs [16,17,18,19,20,21,22] have been designed to enhance the resolution, which can be loosely divided into two classes: (1) fusion of hyperspectral and panchromatic data and (2) fusion of multispectral (MS) and hyperspectral data. The goal of hyperspectral and panchromatic image (PAN) fusion is to merge the spatial details of the PAN into each band of the HSI; representative techniques include component substitution (CS)-based schemes [23,24], multiresolution analysis (MRA)-based methods [25,26], and deep learning approaches [27,28]. For example, in [29], a guided filtering method was applied for the fusion of HSI and PAN data. In [30], a hyperspectral pansharpening approach was proposed using homomorphic filtering and matrix decomposition. In [31], an HSI pansharpening approach based on deep priors was proposed to boost the spatial resolution with the help of a high-resolution panchromatic image.
Fusion of MS and HSI data aims at merging the spatial resolution of an MS image with the spectral information of an HSI so as to produce a high spatial resolution HSI. A few examples of such approaches are matrix factorization [32,33] and deep learning [34,35]. For instance, in [36], a coupled nonnegative matrix factorization method was proposed to merge multispectral and hyperspectral data via unsupervised unmixing. In [37], a coupled sparse tensor factorization method was developed for the fusion of MS and HSI data, in which the HSI was represented by dictionaries of the three modes and a sparse core tensor. In [38], a deep convolutional neural network (CNN) with a two-stream framework was developed to merge HSI and MS data, in which the CNN was utilized to extract deep features of the input data, followed by fully connected layers.
In this work, we propose an effective approach to enhance the spatial resolution of HSIs with the guidance of an RGB image for mapping minerals, since RGB images are easily obtained in practical applications and have a higher spatial resolution than other modalities. In the proposed method, the ideal high resolution HSI (HR-HSI) is modeled as the dot multiplication of two components, i.e., the illumination and reflectance components, based on the principle of IID (see Figure 1). The method contains the following steps. First, the RGB image is transformed into the luminance component and the blue-difference and red-difference chroma components (YCbCr) so as to extract the spatial details of the original RGB image, and the luminance channel is considered as an approximation of the illumination component of the HR-HSI. Then, the reflectance component of the HR-HSI is estimated by combining the downsampled HSI and the downsampled luminance channel. Finally, the estimated illumination and reflectance components are recombined to obtain the HR-HSI. The main contributions of this work are summarized as follows:
  • Inspired by the principle of IID, we propose a novel hyperspectral resolution enhancement method for mineral mapping via component decomposition. To our knowledge, this is the first work to formulate the resolution enhancement of HSIs as an intrinsic decomposition model.
  • The proposed approach makes the best use of the spatial details of the RGB image and the rich spectral information of the HSI to obtain high resolution hyperspectral images. Moreover, the proposed method is efficient and fast, which makes it well suited to real applications.
  • We investigate whether the spatially enhanced HSI obtained by fusing HSI and RGB data can preserve spectral fidelity and consequently be conducive to mineral mapping. Experimental results demonstrate that the fused results of HSI and RGB data produced by the proposed approach are beneficial for mapping minerals compared to other approaches.
The remainder of this work is organized as follows. Section 2 briefly reviews intrinsic image decomposition. Section 3 details the proposed method. Section 4 presents and discusses the fusion results. Section 5 discusses the computing time and the mineral mapping performance. Finally, conclusions are presented in Section 6.

2. Intrinsic Image Decomposition

IID is a challenging problem in the image processing field [39,40]. Its aim is to divide an image into two components: the illumination component, which is related to the brightness of the scene, and the reflectance component, which reflects the material properties of different objects. The general model can be expressed as:
I = R · S
where I is the input image, and R and S are the reflectance and illumination components, respectively. It can be observed from Equation (1) that estimating R and S from I alone is an ill-posed problem. A variety of approaches have been developed to solve this problem by adding prior information [41,42,43], and IID has been widely applied in image fusion, classification, and denoising. Different from those publications, in this paper we only exploit the principle of intrinsic image decomposition to perform the resolution enhancement of the HSI, which greatly increases the computational efficiency. Therefore, the main goal of the proposed method is to estimate the reflectance and illumination components by utilizing the RGB and low-resolution HSI data.
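As a quick numerical sketch (with synthetic toy data, not the paper's datasets), the multiplicative IID model and the property the proposed method exploits can be illustrated as follows: recovering R from I alone is ill-posed, but it becomes a trivial element-wise division once S is known.

```python
import numpy as np

# Toy illustration of the multiplicative IID model I = R * S (element-wise).
# R is the reflectance (material property), S the illumination (shading).
rng = np.random.default_rng(0)
R = rng.uniform(0.2, 0.9, size=(4, 4))   # reflectance component
S = rng.uniform(0.5, 1.0, size=(4, 4))   # illumination component
I = R * S                                # observed image

# Ill-posed from I alone, but trivial once S is known:
R_recovered = I / S
assert np.allclose(R_recovered, R)
```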

3. Proposed Method

To obtain the HR-HSI for mineral mapping, we propose a component decomposition-based resolution enhancement method. Figure 2 presents the schematic of the proposed approach, which mainly contains three steps. First, the RGB image is transformed into the YCbCr space so as to obtain an estimation of the illumination component. Second, the reflectance component of the HR-HSI is calculated using the downsampled HSI and the downsampled illumination component. Finally, the estimated illumination and reflectance components are combined to reconstruct the HR-HSI.
As described before, the HR-HSI I_F is modeled as the dot multiplication of the illumination component S_H and the reflectance component R_H, expressed as:
I_F = S_H · R_H
Equation (2) is an ill-posed inverse problem whose solution can be obtained by adding priors. In this paper, instead of solving a complicated optimization problem, our goal is to calculate the illumination component S_H and the reflectance component R_H directly from the LR-HSI and RGB data, which makes our method more efficient than optimization-based methods.

3.1. Estimation of the Illumination Component

The illumination component obtained by the intrinsic image decomposition mainly reflects the spatial details of the input. To fully merge the spatial details of the RGB image into the HSI, the RGB image I_V is converted into the YCbCr space [44]. As shown in Figure 3, the luminance channel in the YCbCr space mainly records the spatial details of the RGB image, while the chrominance channels reflect its spectral information. Based on this characteristic, the luminance channel in the YCbCr space is considered as an estimation of the illumination component. The conversion is given by:
Y = 0.257R + 0.504G + 0.098B + 16
Cb = -0.148R - 0.291G + 0.439B + 128
Cr = 0.439R - 0.368G - 0.071B + 128
where Y represents the estimated illumination component S_H, and R, G, and B denote the three bands of the RGB image.
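A minimal NumPy sketch of this conversion, assuming 8-bit RGB input and the ITU-R BT.601 studio-swing coefficients quoted above (`rgb_to_ycbcr` is a hypothetical helper name):

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an 8-bit RGB image (H x W x 3) to Y, Cb, Cr channels using
    the ITU-R BT.601 studio-swing coefficients from the text."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    y  =  0.257 * r + 0.504 * g + 0.098 * b + 16.0   # luminance (spatial detail)
    cb = -0.148 * r - 0.291 * g + 0.439 * b + 128.0  # blue-difference chroma
    cr =  0.439 * r - 0.368 * g - 0.071 * b + 128.0  # red-difference chroma
    return y, cb, cr
```

The Y channel is then used as the illumination estimate S_H.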

3.2. Estimation of the Reflectance Component

The reflectance component obtained by intrinsic image decomposition has relatively low spatial resolution and mainly records the spectral information of the input. In this study, in order to obtain an accurate estimation of the reflectance component, bicubic downsampling with a scale of 1/4 is first performed on the original HSI to yield the corresponding low resolution HSI I_L. Then, the same downsampling operation is conducted on S_H to obtain a low resolution illumination component S_L. It should be mentioned that the main aim of the downsampling operation is to produce low spatial resolution data, since the reflectance component has low spatial resolution according to the principle of intrinsic image decomposition. Finally, the low resolution reflectance component R_L can be obtained as follows:
R_L = I_L / S_L
where the division is performed element-wise (dot division).
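A possible sketch of this step in NumPy/SciPy, using `scipy.ndimage.zoom` with cubic splines as a stand-in for bicubic resampling; the function and variable names are illustrative, and a small epsilon guards the element-wise division:

```python
import numpy as np
from scipy.ndimage import zoom

def estimate_low_res_reflectance(hsi, s_h, scale=4):
    """Estimate the low resolution reflectance R_L = I_L / S_L.

    hsi : (H, W, B) hyperspectral cube; s_h : (H, W) illumination estimate.
    Cubic-spline zoom (mode='reflect') stands in for bicubic resampling."""
    i_l = zoom(hsi, (1 / scale, 1 / scale, 1), order=3, mode='reflect')
    s_l = zoom(s_h, 1 / scale, order=3, mode='reflect')
    eps = 1e-8                                 # guard against division by zero
    return i_l / (s_l[..., None] + eps)        # element-wise dot division
```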

3.3. Reconstruction

Once the low resolution reflectance component R_L is obtained, the high resolution reflectance component R_H can be easily estimated by applying bicubic upsampling (4×) to R_L. According to the principle of the intrinsic image decomposition (Equation (1)), the reconstructed HR-HSI is produced by combining the high resolution reflectance component R_H and the estimated illumination component S_H:
F = R_H · S_H
where F is the reconstructed high resolution HSI.

4. Experiments

In this section, in order to verify the fusion performance of our approach, several advanced hyperspectral resolution enhancement approaches that have achieved satisfactory performance are selected for comparison, including CS methods, MRA approaches, and matrix factorization approaches. For CS methods, Gram-Schmidt (GS) [24] and principal component analysis (PCA) [45] are classic and effective resolution enhancement methods which have been widely used in commercial software packages, e.g., the Environment for Visualizing Images (ENVI) and the Earth Resources Data Analysis System (ERDAS); therefore, they are considered as comparison approaches. For multi-resolution analysis methods, three representative approaches are selected, i.e., smoothing filter-based intensity modulation (SFIM) [25], modulation transfer function-generalized Laplacian pyramid (MTF_GLP) [46], and high pass modulation (MGH) [47]. For matrix factorization approaches, coupled nonnegative matrix factorization (CNMF) [36] and hyperspectral subspace regularization (HySure) [48] are adopted, since they achieve competitive fusion performance for resolution enhancement. For all these approaches, the default parameters given in the corresponding publications are used.

4.1. Datasets

(1) Disko dataset: The HSI-RGB images were acquired during a geological remote sensing field campaign within the MULSEDRO project. The target area is the north shore of Disko Island in West Greenland (69.885°N, 52.577°W). One target is an intriguing geologic feature, the Illukunnguaq dyke, with a NW-SE strike direction. This Paleocene-age lava intrusion is 5 m wide and cuts through Cretaceous sandstone-sediment formations. The Illukunnguaq feature can be followed for roughly 800 m along the coastline and is known for iron-sulphide mineralization containing nickel and copper.
Unmanned aerial vehicles (UAVs), here a Tholeg Octocopter Tho-R-PX-8/12, are a proven tool to acquire HSI and RGB imagery at high resolution. A fixed-wing eBee+ UAV obtained high-resolution orthoimages of the target area and its surroundings at 20 MP per image with a SODA camera, and the captured images were geo-tagged using the drone's built-in GPS/GNSS receiver. Stereo-photogrammetry and structure-from-motion processing in the Agisoft Photoscan software created a detailed, georeferenced orthomosaic with 5 cm ground sampling distance (GSD) in RGB colour space.
HSI image mosaics were scanned with the Senop Rikola frame-based camera, which has a resolution of 0.6 MP and 50 image channels in flight mode. The Senop Rikola camera operates in the spectral range of 504–900 nm, with an average spectral resolution of 15 nm per band. Figure 4 shows the three-band composites of the HSI and RGB image. This scene contains six types of land covers: vegetation, sandstone, basalt, sulphide, debris, and sandstone-basalt. Flying close to the surface target, the camera takes images on top of GPS points, which requires the UAV to complete a stop-scan-and-motion pattern along preprogrammed flight vectors. Pre-processing of the HSI comprised geometric and radiometric corrections, which were accomplished in the MEPHySTo software tool [49]. The resulting HSI mosaic measures 350 × 50 m with a resampled GSD of 14 cm, and the spatial size of the HSI is 1992 × 1531 pixels. External acquisition conditions for this dataset were favourable, with sunny illumination and weak winds allowing smooth UAV flights. Application of the described UAV workflow has proved to create valuable geologic information in related arctic scenarios [50].
(2) Litov dataset: This dataset was acquired with the same methodology as the first dataset, but at a mine tailing area (50.158°N, 12.530°E) in the Sokolov region of the Czech Republic [51] in summer 2018. Residuals of lignite mining, including soils and brown coal, were dumped, sealed, and renatured with vegetation. Nevertheless, the natural phenomenon known as acid mine drainage occurs along the SW border of the tailing. Acidic waters (pH 2–4) with increased loads of heavy metals and sulphur and reduced oxygen concentration drain from tailing channels towards an artificial lake. Precipitation of iron-bearing proxy minerals along the seams of these streams and their surroundings can be observed and detected by hyperspectral image analysis; an enhanced resolution leads to a more exact delineation of the affected areas. The GSD of the dataset is 2.5 cm for RGB and 3.7 cm for HSI. The captured HSI covers 20.5 m × 33.5 m of the Litov scene, and the spatial size of this image is 1066 × 909 pixels. Figure 5 shows the three-band composites of the HSI and RGB image. This scene shows a small canyon area with a stream bordering the tailings.

4.2. Quality Indexes

In this work, we assess the resolution enhancement performance of the different approaches in terms of both visual quality and objective metrics. For the visual effect, we mainly observe the spatial details produced by the different methods. For the objective evaluation, four widely used quality indexes, briefly described below, are adopted in our study: cross correlation (CC) [52], spectral angle mapper (SAM) [16], root mean squared error (RMSE) [16], and erreur relative globale adimensionnelle de synthèse (ERGAS) [53]. The CC measures the spatial information, the SAM estimates the spectral similarity, and the RMSE and ERGAS capture the global spatial and spectral quality. All quality indexes are obtained by comparing the reconstructed HSI with the original HSI.
(1) CC: The CC estimates the similarity between the original image and the resulting image:
$$CC(X, \hat{X}) = \frac{1}{N}\sum_{i=1}^{N} CCS(X_i, \hat{X}_i)$$
where
$$CCS(X, \hat{X}) = \frac{\sum_{i=1}^{M} (X_i - \mu_X)(\hat{X}_i - \mu_{\hat{X}})}{\sqrt{\sum_{i=1}^{M} (X_i - \mu_X)^2 \sum_{i=1}^{M} (\hat{X}_i - \mu_{\hat{X}})^2}}$$
Here, X is the reference image, and X ^ denotes the fused image. A higher CC indicates the better fusion performance.
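A straightforward NumPy sketch of this band-wise index, assuming (H, W, B) arrays (the function name is illustrative):

```python
import numpy as np

def cc(x, x_hat):
    """Mean per-band cross correlation between reference x and fused x_hat,
    both shaped (H, W, B)."""
    vals = []
    for b in range(x.shape[2]):
        a = x[..., b].ravel() - x[..., b].mean()      # demeaned reference band
        c = x_hat[..., b].ravel() - x_hat[..., b].mean()  # demeaned fused band
        vals.append((a @ c) / np.sqrt((a @ a) * (c @ c)))
    return float(np.mean(vals))
```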
(2) SAM: The SAM reflects the spectral quality of the reconstructed result, which is defined as:
$$SAM(X, \hat{X}) = \frac{1}{M}\sum_{i=1}^{M} \arccos\frac{\hat{X}_i^T X_i}{\|\hat{X}_i\|_2 \, \|X_i\|_2}$$
The SAM is an important index of the spectral distortion of the fused result. A smaller SAM indicates less spectral distortion of the resulting image.
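A vectorized sketch of SAM over all pixel spectra, with the cosine clipped to [-1, 1] for numerical safety:

```python
import numpy as np

def sam(x, x_hat):
    """Mean spectral angle (in radians) between the per-pixel spectra of the
    reference x and the fused x_hat, both shaped (H, W, B)."""
    a = x.reshape(-1, x.shape[2])          # one spectrum per row
    b = x_hat.reshape(-1, x_hat.shape[2])
    cos = np.sum(a * b, axis=1) / (np.linalg.norm(a, axis=1)
                                   * np.linalg.norm(b, axis=1))
    return float(np.mean(np.arccos(np.clip(cos, -1.0, 1.0))))
```

Note that scaling all spectra by a constant leaves the angle unchanged, so SAM isolates spectral-shape distortion.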
(3) RMSE: The RMSE evaluates the difference between the fused result and the reference data, which is given as:
$$RMSE(X, \hat{X}) = \sqrt{\frac{\operatorname{trace}\big[(X - \hat{X})^T (X - \hat{X})\big]}{NM}}$$
A smaller value indicates better performance; the best value is 0.
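The RMSE is essentially a one-liner in NumPy:

```python
import numpy as np

def rmse(x, x_hat):
    """Root mean squared error over all pixels and bands; 0 is ideal."""
    return float(np.sqrt(np.mean((x - x_hat) ** 2)))
```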
(4) ERGAS: The ERGAS assesses the overall quality of the fused result as follows:
$$ERGAS(X, \hat{X}) = \frac{100}{c}\sqrt{\frac{1}{M}\sum_{i=1}^{M} \frac{MSE(X_i, \hat{X}_i)}{\mu_{\hat{X}_i}^2}}$$
where c represents the ratio between the spatial resolutions of the high- and low-resolution images, μ_{X̂_i} denotes the mean value of X̂_i, and MSE(X_i, X̂_i) is the mean square error between X_i and X̂_i. The smaller the ERGAS, the better the resulting image.
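A sketch of ERGAS per the definition above, where the `ratio` argument plays the role of c (names are illustrative):

```python
import numpy as np

def ergas(x, x_hat, ratio=4):
    """ERGAS: per-band MSE normalised by the squared band mean of the fused
    image, averaged over bands and scaled by the resolution ratio."""
    terms = []
    for b in range(x.shape[2]):
        mse = np.mean((x[..., b] - x_hat[..., b]) ** 2)
        terms.append(mse / np.mean(x_hat[..., b]) ** 2)
    return float(100.0 / ratio * np.sqrt(np.mean(terms)))
```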

4.3. Resolution Enhancement Results

In this subsection, the fused results of all approaches will be presented and discussed.

4.3.1. Disko Dataset

Figure 6a,b presents the RGB image and the downsampled HSI (with scale 4), respectively. Figure 6c–j presents the resolution enhancement results obtained by all methods. The HySure method produces obvious spectral distortion in the plant region. The GS, PCA, and CNMF methods also exhibit spectral distortion in the shadow regions of the resulting images. The result obtained by the MTF_GLP approach looks blurred compared to the MGH method. The SFIM method only slightly improves the fusion performance in enhancing the spatial details. In contrast, our method can restore more detailed information than the other approaches (see the locally enlarged region in Figure 6j). Furthermore, to further illustrate the spectral preservation ability, the spectral reflectance values of the raw HSI and of the different methods at two locations are given in Figure 7. We can observe that the spectral reflectance of our method is the closest to that of the raw HSI among all compared methods, which illustrates that our method performs well in preserving the spectral information of the land covers.
Table 1 presents the objective results of all studied approaches on the Disko Island dataset, with the best indexes highlighted in bold. Our method obtains the highest CC value among all methods, which indicates that its fused result is closest to the reference image. For the SAM index, Table 1 shows that the value obtained by our method is the smallest, demonstrating that the spectral curves of different objects in its fused result are most similar to those of the reference image. For the RMSE index, our method yields the smallest value among all approaches, which illustrates that the difference between the reference image and the fused result of our method is the smallest. For the ERGAS index, our method also produces the smallest value, indicating that the overall quality of its fused image is better than that of the other resulting images. Generally, our method obtains the best fusion performance among all considered approaches in terms of both visual result and objective quality. This is because our method accurately estimates the reflectance and illumination components, which gives it a stronger ability to inject the spatial details of the RGB image into the HSI.

4.3.2. Litov Dataset

The second experiment is conducted on the Litov dataset. Figure 8 shows the resolution enhancement results of the different methods. It can be seen from Figure 8 that the GS and PCA techniques yield unsatisfactory visual results in preserving the spectral information, for instance in the tree region (see Figure 8d,e). The CNMF method suffers from spectral distortion (see the trees in Figure 8f). The HySure method changes the colors of the land covers (see the sand region in Figure 8g). For the MGH method, the edge information of the original HSI is not well preserved (see Figure 8i). By contrast, the proposed method better integrates the spatial details of the RGB image into the HSI (see the locally enlarged region in Figure 8j). Furthermore, the spectral reflectance of the different methods at two spatial positions is shown in Figure 9. As shown in this figure, the reflectance values of the GS, PCA, and CNMF methods are far from those of the real HSI, whereas the reflectance curve obtained by our method is very similar to that of the original HSI, confirming that our method retains the spectral information of the original HSI better than the other methods.
To intuitively display the advantage of our method, Figure 10 presents the error images between each fused result and the reference data at the 20th band. The less information the error image contains, the better the fusion effect of the method. It is easy to observe from Figure 10 that the GS, PCA, and CNMF approaches cannot merge the spatial details well into the HSI (see the locally enlarged regions in Figure 10b–d). The HySure method introduces artifacts at the edges. The MTF_GLP and MGH methods slightly improve the spatial details; however, the spatial details of the RGB image are still not merged well into the HSI. By contrast, the error image obtained by the proposed approach contains the least information, which demonstrates that our approach can effectively merge the RGB and HSI data to obtain a high-resolution HSI.
The objective performance of all test approaches on the Litov dataset is displayed in Table 2. We can observe that our approach yields the highest CC and the smallest SAM, RMSE, and ERGAS, which further demonstrates that our method outperforms the other studied approaches. These experimental results verify that our approach produces the best fused result in both subjective and objective aspects.

5. Discussion

5.1. Computing Time

The computing efficiency of the different approaches on both datasets is presented in Table 3. All experiments were conducted on a laptop with 8 GB RAM and a 2.6 GHz CPU using MATLAB 2014a. From Table 3, it can be seen that our approach is very fast with respect to the other compared approaches, since the proposed method only involves a few dot multiplications and dot divisions. Therefore, this method can be directly applied to practical engineering tasks.

5.2. Mineral Mapping

In this part, in order to examine the performance of the resolution enhancement approaches for mineral mapping, the support vector machine (SVM) [54,55,56,57,58] with a radial basis function kernel is adopted as the spectral classifier. The number of training and test samples is given in Table 4. To quantitatively assess the mineral mapping performance of all studied approaches, three popular quantitative metrics are employed [3,59,60]: overall accuracy (OA), average accuracy (AA), and the Kappa coefficient.
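The classification step can be sketched with scikit-learn's `SVC`; the synthetic, well-separated spectra below are purely illustrative stand-ins for the labelled pixels of Table 4:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Each pixel's spectrum is a feature vector; an RBF-kernel SVM predicts
# its mineral class. Synthetic spectra stand in for real labelled pixels.
rng = np.random.default_rng(0)
n_classes, n_bands = 6, 50
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(40, n_bands))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), 40)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
clf = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)
oa = clf.score(X_te, y_te)   # overall accuracy (OA) on held-out pixels
```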
The experiment is conducted on the Disko HSI. Figure 11 shows the mineral mapping results of all approaches before and after resolution enhancement. Figure 11a,b presents the classification maps on the raw data, i.e., the RGB image and the original HSI, while Figure 11c–j exhibits the mineral mapping results of the different resolution enhancement methods on the fused results. As shown in Figure 11, the classification map of the RGB image suffers from obvious noise-like mislabeled pixels due to the lack of rich spectral information. Visually comparing the maps of all studied approaches, the proposed approach yields the least misclassification. Furthermore, Table 5 provides the objective classification results. The proposed approach yields the highest objective indexes and, in addition, the best classification accuracies for the fourth and fifth classes, shown in bold typeface in Table 5. In general, the superior performance of the mineral classification step demonstrates the advantage of the proposed resolution enhancement technique, which is a significant factor for practical mineral mapping and related missions.

6. Conclusions

In this work, we developed a resolution enhancement method based on the principle of component decomposition to investigate the potential of high-resolution hyperspectral data for mineral mapping. Based on this principle, the ideal hyperspectral image is considered as the element-wise product of the reflectance and illumination components. The advantages of the resolution-enhanced HSI were verified by comparison with other approaches. Objective quality indexes and visual interpretation show that the proposed approach produces high spectral and spatial resolution, which is helpful for the subsequent mineral mapping step. More importantly, the proposed method is computationally inexpensive and can therefore be applied in practical applications. In the future, we will further investigate the potential of multisensor data fusion for mineral mapping.

Author Contributions

P.D. developed the methodology, performed the experiments, and wrote the draft. J.L. provided suggestions and revised the draft. P.G. provided suggestions and carefully modified the manuscript. X.K. carefully revised the manuscript and provided a review. J.K. revised the manuscript and analyzed the results. R.J. and R.G. acquired the experimental data and wrote the data description part. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the Major Program of the National Natural Science Foundation of China (No. 61890962), the National Natural Science Foundation of China (No. 61601179), the National Natural Science Fund of China for International Cooperation and Exchanges (No. 61520106001), the Natural Science Foundation of Hunan Province (No. 2019JJ50036), the Fund of Key Laboratory of Visual Perception and Artificial Intelligence of Hunan Province (No. 2018TP1013), and the China Scholarship Council.

Acknowledgments

The authors would like to thank the Editor-in-Chief, the anonymous Associate Editor, and the reviewers for their valuable comments and suggestions, which have greatly improved this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Li, H.; Ghamisi, P.; Rasti, B.; Wu, Z.; Shapiro, A.; Schultz, M.; Zipf, A. A Multi-Sensor Fusion Framework Based on Coupled Residual Convolutional Neural Networks. Remote Sens. 2020, 12, 2067.
  2. Tu, B.; Zhou, C.; Peng, J.; He, W.; Ou, X.; Xu, Z. Kernel Entropy Component Analysis-Based Robust Hyperspectral Image Supervised Classification. Remote Sens. 2019, 11, 2823.
  3. Duan, P.; Kang, X.; Li, S.; Ghamisi, P. Noise-Robust Hyperspectral Image Classification via Multi-Scale Total Variation. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2019, 12, 1948–1962.
  4. Kang, J.; Fernandez-Beltran, R.; Duan, P.; Liu, S.; Plaza, A. Deep Unsupervised Embedding for Remotely Sensed Images based on Spatially Augmented Momentum Contrast. IEEE Trans. Geosci. Remote Sens. 2020.
  5. Hong, D.; Yokoya, N.; Ge, N.; Chanussot, J.; Zhu, X.X. Learnable manifold alignment (LeMA): A semi-supervised cross-modality learning framework for land cover and land use classification. ISPRS J. Photogramm. Remote Sens. 2019, 147, 193–205.
  6. Lv, Z.; Liu, T.; Benediktsson, J.A. Object-Oriented Key Point Vector Distance for Binary Land Cover Change Detection Using VHR Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2020, 58, 6524–6533.
  7. Kang, X.; Duan, P.; Xiang, X.; Li, S.; Benediktsson, J.A. Detection and Correction of Mislabeled Training Samples for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2018, 56, 5673–5686.
  8. Kang, J.; Hong, D.; Liu, J.; Baier, G.; Yokoya, N.; Demir, B. Learning Convolutional Sparse Coding on Complex Domain for Interferometric Phase Restoration. IEEE Trans. Neural Netw. Learn. Syst. 2020, 1–15.
  9. Hong, D.; Wu, X.; Ghamisi, P.; Chanussot, J.; Yokoya, N.; Zhu, X.X. Invariant Attribute Profiles: A Spatial-Frequency Joint Feature Extractor for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2020, 58, 3791–3808.
  10. Contreras Acosta, I.C.; Khodadadzadeh, M.; Tusa, L.; Ghamisi, P.; Gloaguen, R. A Machine Learning Framework for Drill-Core Mineral Mapping Using Hyperspectral and High-Resolution Mineralogical Data Fusion. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2019, 1–14.
  11. Murphy, R.J.; Monteiro, S.T. Mapping the distribution of ferric iron minerals on a vertical mine face using derivative analysis of hyperspectral imagery (430–470 nm). ISPRS J. Photogramm. Remote Sens. 2013, 75, 29–39.
  12. Hoang, N.T.; Koike, K. Comparison of hyperspectral transformation accuracies of multispectral Landsat TM, ETM+, OLI and EO-1 ALI images for detecting minerals in a geothermal prospect area. ISPRS J. Photogramm. Remote Sens. 2018, 137, 15–28.
  13. Ghamisi, P.; Rasti, B.; Yokoya, N.; Wang, Q.; Hofle, B.; Bruzzone, L.; Bovolo, F.; Chi, M.; Anders, K.; Gloaguen, R.; et al. Multisource and Multitemporal Data Fusion in Remote Sensing: A Comprehensive Review of the State of the Art. IEEE Geosci. Remote Sens. Mag. 2019, 7, 6–39.
  14. Lorenz, S.; Seidel, P.; Ghamisi, P.; Zimmermann, R.; Tusa, L.; Khodadadzadeh, M.; Contreras, I.C.; Gloaguen, R. Multi-Sensor Spectral Imaging of Geological Samples: A Data Fusion Approach Using Spatio-Spectral Feature Extraction. Sensors 2019, 19, 2787.
  15. Gloaguen, R.; Fuchs, M.; Khodadadzadeh, M.; Ghamisi, P.; Kirsch, M.; Booysen, R.; Zimmermann, R.; Lorenz, S. Multi-Source and multi-Scale Imaging-Data Integration to boost Mineral Mapping. In Proceedings of the IGARSS 2019-2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July 2019; pp. 5587–5589.
  16. Loncan, L.; de Almeida, L.B.; Bioucas-Dias, J.M.; Briottet, X.; Chanussot, J.; Dobigeon, N.; Fabre, S.; Liao, W.; Licciardi, G.A.; Simões, M.; et al. Hyperspectral Pansharpening: A Review. IEEE Geosci. Remote Sens. Mag. 2015, 3, 27–46.
  17. De Almeida, C.T.; Galvao, L.S.; de Oliveira Cruz e Aragao, L.E.; Ometto, J.P.H.B.; Jacon, A.D.; de Souza Pereira, F.R.; Sato, L.Y.; Lopes, A.P.; de Alencastro Graca, P.M.L.; de Jesus Silva, C.V.; et al. Combining LiDAR and hyperspectral data for aboveground biomass modeling in the Brazilian Amazon using different regression algorithms. Remote Sens. Environ. 2019, 232, 111323.
  18. Li, J.; Cui, R.; Li, B.; Song, R.; Li, Y.; Du, Q. Hyperspectral Image Super-Resolution with 1D-2D Attentional Convolutional Neural Network. Remote Sens. 2019, 11, 2859.
  19. Liu, L.; Coops, N.C.; Aven, N.W.; Pang, Y. Mapping urban tree species using integrated airborne hyperspectral and LiDAR remote sensing data. Remote Sens. Environ. 2017, 200, 170–182.
  20. Bablet, A.; Viallefont-Robinet, F.; Jacquemoud, S.; Fabre, S.; Briottet, X. High-resolution mapping of in-depth soil moisture content through a laboratory experiment coupling a spectroradiometer and two hyperspectral cameras. Remote Sens. Environ. 2020, 236, 111533.
  21. He, G.; Zhong, J.; Lei, J.; Li, Y.; Xie, W. Hyperspectral Pansharpening Based on Spectral Constrained Adversarial Autoencoder. Remote Sens. 2019, 11, 2691.
  22. Hladik, C.; Schalles, J.; Alber, M. Salt marsh elevation and habitat mapping using hyperspectral and LIDAR data. Remote Sens. Environ. 2013, 139, 318–330.
  23. Tu, T.M.; Su, S.C.; Shyu, H.C.; Huang, P.S. A new look at IHS-like image fusion methods. Inf. Fusion 2001, 2, 177–186.
  24. Laben, C.A.; Brower, B.V. Process for Enhancing the Spatial Resolution of Multispectral Imagery Using Pan-Sharpening. U.S. Patent 6011875, 4 January 2000.
  25. Liu, J.G. Smoothing Filter-based Intensity Modulation: A spectral preserve image fusion technique for improving spatial details. Int. J. Remote Sens. 2000, 21, 3461–3472.
  26. Yin, M.; Duan, P.; Liu, W.; Liang, X. A novel infrared and visible image fusion algorithm based on shift-invariant dual-tree complex shearlet transform and sparse representation. Neurocomputing 2017, 226, 182–191.
  27. He, L.; Zhu, J.; Li, J.; Plaza, A.; Chanussot, J.; Li, B. HyperPNN: Hyperspectral Pansharpening via Spectrally Predictive Convolutional Neural Networks. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2019, 12, 3092–3100.
  28. Li, K.; Xie, W.; Du, Q.; Li, Y. DDLPS: Detail-Based Deep Laplacian Pansharpening for Hyperspectral Imagery. IEEE Trans. Geosci. Remote Sens. 2019, 57, 8011–8025.
  29. Qu, J.; Li, Y.; Dong, W. Hyperspectral Pansharpening With Guided Filter. IEEE Geosci. Remote Sens. Lett. 2017, 14, 2152–2156.
  30. Qu, J.; Li, Y.; Du, Q.; Dong, W.; Xi, B. Hyperspectral Pansharpening Based on Homomorphic Filtering and Weighted Tensor Matrix. Remote Sens. 2019, 11, 1005.
  31. Xie, W.; Lei, J.; Cui, Y.; Li, Y.; Du, Q. Hyperspectral Pansharpening With Deep Priors. IEEE Trans. Neural Netw. Learn. Syst. 2019, 31, 1529–1543.
  32. Dong, W.; Fu, F.; Shi, G.; Cao, X.; Wu, J.; Li, G.; Li, X. Hyperspectral Image Super-Resolution via Non-Negative Structured Sparse Representation. IEEE Trans. Image Process. 2016, 25, 2337–2352.
  33. Kawakami, R.; Matsushita, Y.; Wright, J.; Ben-Ezra, M.; Tai, Y.; Ikeuchi, K. High-resolution hyperspectral imaging via matrix factorization. In Proceedings of the CVPR 2011, Providence, RI, USA, 20–25 June 2011; pp. 2329–2336.
  34. Dian, R.; Li, S.; Guo, A.; Fang, L. Deep Hyperspectral Image Sharpening. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 5345–5355.
  35. Zhou, F.; Hang, R.; Liu, Q.; Yuan, X. Pyramid Fully Convolutional Network for Hyperspectral and Multispectral Image Fusion. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2019, 12, 1549–1558.
  36. Yokoya, N.; Yairi, T.; Iwasaki, A. Coupled Nonnegative Matrix Factorization Unmixing for Hyperspectral and Multispectral Data Fusion. IEEE Trans. Geosci. Remote Sens. 2012, 50, 528–537.
  37. Li, S.; Dian, R.; Fang, L.; Bioucas-Dias, J.M. Fusing Hyperspectral and Multispectral Images via Coupled Sparse Tensor Factorization. IEEE Trans. Image Process. 2018, 27, 4118–4130.
  38. Yang, J.; Zhao, Y.Q.; Chan, J.C.W. Hyperspectral and Multispectral Image Fusion via Deep Two-Branches Convolutional Neural Network. Remote Sens. 2018, 10, 800.
  39. Shen, L.; Yeo, C.; Hua, B. Intrinsic Image Decomposition Using a Sparse Representation of Reflectance. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 2904–2915.
  40. Kang, X.; Li, S.; Fang, L.; Benediktsson, J.A. Pansharpening Based on Intrinsic Image Decomposition. Sens. Imag. 2014, 15, 94.
  41. Yue, H.; Yang, J.; Sun, X.; Wu, F.; Hou, C. Contrast Enhancement Based on Intrinsic Image Decomposition. IEEE Trans. Image Process. 2017, 26, 3981–3994.
  42. Kang, X.; Li, S.; Fang, L.; Benediktsson, J.A. Intrinsic Image Decomposition for Feature Extraction of Hyperspectral Images. IEEE Trans. Geosci. Remote Sens. 2015, 53, 2241–2253.
  43. Sheng, B.; Li, P.; Jin, Y.; Tan, P.; Lee, T. Intrinsic Image Decomposition with Step and Drift Shading Separation. IEEE Trans. Vis. Comput. Graph. 2020, 26, 1332–1346.
  44. Kahu, S.Y.; Raut, R.B.; Bhurchandi, K.M. Review and evaluation of color spaces for image/video compression. Color Res. Appl. 2018, 44, 8–33.
  45. Vivone, G.; Alparone, L.; Chanussot, J.; Dalla Mura, M.; Garzelli, A.; Licciardi, G.A.; Restaino, R.; Wald, L. A Critical Comparison Among Pansharpening Algorithms. IEEE Trans. Geosci. Remote Sens. 2015, 53, 2565–2586.
  46. Aiazzi, B.; Alparone, L.; Baronti, S.; Garzelli, A. Context-driven fusion of high spatial and spectral resolution images based on oversampled multiresolution analysis. IEEE Trans. Geosci. Remote Sens. 2002, 40, 2300–2312.
  47. Vivone, G.; Restaino, R.; Dalla Mura, M.; Licciardi, G.; Chanussot, J. Contrast and Error-Based Fusion Schemes for Multispectral Image Pansharpening. IEEE Geosci. Remote Sens. Lett. 2014, 11, 930–934.
  48. Simões, M.; Bioucas-Dias, J.; Almeida, L.B.; Chanussot, J. A Convex Formulation for Hyperspectral Image Superresolution via Subspace-Based Regularization. IEEE Trans. Geosci. Remote Sens. 2015, 53, 3373–3388.
  49. Jakob, S.; Zimmermann, R.; Gloaguen, R. The Need for Accurate Geometric and Radiometric Corrections of Drone-Borne Hyperspectral Data for Mineral Exploration: MEPHySTo-A Toolbox for Pre-Processing Drone-Borne Hyperspectral Data. Remote Sens. 2017, 9, 88.
  50. Jackisch, R.; Madriz, Y.; Zimmermann, R.; Pirttijärvi, M.; Saartenoja, A.; Heincke, B.; Salmirinne, H.; Kujasalo, J.P.; Andreani, L.; Gloaguen, R. Drone-borne hyperspectral and magnetic data integration: Otanmäki Fe-Ti-V deposit in Finland. Remote Sens. 2019, 11, 2084.
  51. Jackisch, R.; Lorenz, S.; Zimmermann, R.; Möckel, R.; Gloaguen, R. Drone-Borne Hyperspectral Monitoring of Acid Mine Drainage: An Example from the Sokolov Lignite District. Remote Sens. 2018, 10, 385.
  52. Zhou, J.; Civco, D.L.; Silander, J.A. A wavelet transform method to merge Landsat TM and SPOT panchromatic data. Int. J. Remote Sens. 1998, 19, 743–757.
  53. Zeng, Y.; Huang, W.; Liu, M.; Zhang, H.; Zou, B. Fusion of satellite images in urban area: Assessing the quality of resulting images. Int. Conf. Geoinform. 2010, 1–4.
  54. Melgani, F.; Bruzzone, L. Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790.
  55. Duan, P.; Kang, X.; Li, S.; Ghamisi, P. Multichannel Pulse-Coupled Neural Network-Based Hyperspectral Image Visualization. IEEE Trans. Geosci. Remote Sens. 2019, 1–13.
  56. Duan, P.; Kang, X.; Li, S.; Ghamisi, P.; Benediktsson, J.A. Fusion of Multiple Edge-Preserving Operations for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 10336–10349.
  57. Zhao, J.; Zhong, Y.; Hu, X.; Wei, L.; Zhang, L. A robust spectral-spatial approach to identifying heterogeneous crops using remote sensing imagery with high spectral and spatial resolutions. Remote Sens. Environ. 2020, 239, 111605.
  58. Dalponte, M.; Bruzzone, L.; Vescovo, L.; Gianelle, D. The role of spectral resolution and classifier complexity in the analysis of hyperspectral images of forest areas. Remote Sens. Environ. 2009, 113, 2345–2355.
  59. Kang, X.; Duan, P.; Li, S. Hyperspectral image visualization with edge-preserving filtering and principal component analysis. Inf. Fusion 2020, 57, 130–143.
  60. Duan, P.; Lai, J.; Kang, J.; Kang, X.; Ghamisi, P.; Li, S. Texture-aware total variation-based removal of sun glint in hyperspectral images. ISPRS J. Photogramm. Remote Sens. 2020, 166, 359–372.
Figure 1. The principle of intrinsic image decomposition.
Figure 2. The flow chart of the proposed approach.
Figure 3. An example of the YCbCr (luminance, blue-difference chroma, and red-difference chroma) transform applied to the RGB image. (a) RGB image. (b) Luminance channel. (c) Blue-difference chroma component. (d) Red-difference chroma component.
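The luminance channel extracted in Figure 3 is the first step of the proposed method. As a minimal sketch, the forward transform can be written as below; the paper does not state which YCbCr variant is used, so the common full-range ITU-R BT.601 coefficients are assumed here:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Full-range BT.601 RGB -> YCbCr for an array of values in [0, 1].

    The Y channel serves as the high-resolution illumination component
    in the proposed fusion scheme; the exact coefficients are an
    assumption (BT.601), not taken from the paper.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance
    cb = 0.564 * (b - y) + 0.5              # blue-difference chroma
    cr = 0.713 * (r - y) + 0.5              # red-difference chroma
    return np.stack([y, cb, cr], axis=-1)
```

For a neutral gray input (R = G = B), the chroma channels sit at the mid-level 0.5 and Y equals the gray value, which is why shading ends up in Y while color differences end up in Cb/Cr.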
Figure 4. Disko dataset. (a) False color composite of hyperspectral image (HSI) (No. 17, 7, 1 at 61 nm, 551 nm, 504 nm). (b) RGB image.
Figure 5. Litov dataset. (a) False color composite of HSI (No. 17, 7, 1 at 61 nm, 551 nm, 504 nm). (b) RGB image.
Figure 6. Results obtained by different resolution enhancement methods on the Disko dataset. (a) RGB. (b) Three bands of HSI (R:24, G:12, B:6). (c) SFIM [25]. (d) GS [24]. (e) PCA [45]. (f) CNMF [36]. (g) HySure [48]. (h) MTF_GLP [46]. (i) MGH [47]. (j) Our method.
Figure 7. Spectral reflectance values of different methods on the Disko dataset at two positions. (a) Pixel (511, 294). (b) Pixel (955, 702).
Figure 8. Resulting images obtained by different resolution enhancement methods on the Litov dataset. (a) RGB. (b) Three bands of HSI (R:20, G:30, B:10). (c) SFIM [25]. (d) GS [24]. (e) PCA [45]. (f) CNMF [36]. (g) HySure [48]. (h) MTF_GLP [46]. (i) MGH [47]. (j) Our method.
Figure 9. Spectral reflectance values of different methods at two positions on the Litov dataset. (a) Pixel (498, 487). (b) Pixel (660, 447).
Figure 10. The error images between the fused result and the reference image at the 20th band using different methods. (a) SFIM [25]. (b) GS [24]. (c) PCA [45]. (d) CNMF [36]. (e) HySure [48]. (f) MTF_GLP [46]. (g) MGH [47]. (h) Our method.
Figure 11. Mineral mapping of different approaches with the SVM classifier on the Disko dataset. (a) RGB. (b) HSI. (c) SFIM [25]. (d) GS [24]. (e) PCA [45]. (f) CNMF [36]. (g) HySure [48]. (h) MTF_GLP [46]. (i) MGH [47]. (j) Our method.
Table 1. Objective quality of the smoothing filter-based intensity modulation (SFIM) [25], Gram-Schmidt (GS) [24], principal component analysis (PCA) [45], coupled nonnegative matrix factorization (CNMF) [36], HySure [48], modulation transfer function-generalized Laplacian pyramid (MTF_GLP) [46], high pass modulation (MGH) [47], and our method on the Disko dataset. The best performance is highlighted in bold; the second best is underlined.
| Indexes | Best | SFIM | GS | PCA | CNMF | HySure | MTF_GLP | MGH | Our Method |
|---|---|---|---|---|---|---|---|---|---|
| CC | 1 | 0.957 | 0.937 | 0.932 | 0.962 | 0.948 | 0.963 | 0.932 | 0.967 |
| SAM | 0 | 1.032 | 1.487 | 1.707 | 1.469 | 3.261 | 1.121 | 1.071 | 0.744 |
| RMSE | 0 | 0.029 | 0.034 | 0.035 | 0.023 | 0.031 | 0.024 | 0.074 | 0.023 |
| ERGAS | 0 | 12.003 | 12.707 | 13.179 | 8.358 | 11.334 | 8.621 | 37.466 | 8.061 |
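The four indices reported in the tables (CC, SAM, RMSE, ERGAS) follow standard definitions in the pansharpening literature; a minimal numpy sketch is given below. The array shapes and the resolution ratio passed to ERGAS are assumptions, since the paper's exact evaluation code is not reproduced here:

```python
import numpy as np

def fusion_quality(reference, fused, ratio=4):
    """CC, SAM (degrees), RMSE, and ERGAS between two (rows, cols, bands) cubes.

    `ratio` is the spatial resolution ratio between the low- and
    high-resolution images, which ERGAS requires (assumed value here).
    """
    ref = reference.reshape(-1, reference.shape[-1]).astype(float)
    fus = fused.reshape(-1, fused.shape[-1]).astype(float)

    # CC: band-wise correlation coefficient, averaged over bands.
    cc = np.mean([np.corrcoef(ref[:, b], fus[:, b])[0, 1]
                  for b in range(ref.shape[1])])

    # SAM: mean angle between corresponding pixel spectra.
    dots = np.sum(ref * fus, axis=1)
    norms = np.linalg.norm(ref, axis=1) * np.linalg.norm(fus, axis=1)
    sam = np.degrees(np.mean(np.arccos(np.clip(dots / (norms + 1e-12), -1, 1))))

    # RMSE over all pixels and bands.
    rmse = np.sqrt(np.mean((ref - fus) ** 2))

    # ERGAS: relative dimensionless global error in synthesis.
    band_rmse = np.sqrt(np.mean((ref - fus) ** 2, axis=0))
    band_mean = np.mean(ref, axis=0)
    ergas = 100.0 / ratio * np.sqrt(np.mean((band_rmse / band_mean) ** 2))

    return cc, sam, rmse, ergas
```

A perfect fusion gives CC = 1 and SAM = RMSE = ERGAS = 0, which is why the tables list those values in the "Best" column.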
Table 2. Quantitative comparison of the SFIM [25], GS [24], PCA [45], CNMF [36], HySure [48], MTF_GLP [46], MGH [47], and our method on the Litov dataset. The best performance is highlighted in bold; the second best is underlined.
| Indexes | Best | SFIM | GS | PCA | CNMF | HySure | MTF_GLP | MGH | Our Method |
|---|---|---|---|---|---|---|---|---|---|
| CC | 1 | 0.831 | 0.985 | 0.983 | 0.983 | 0.973 | 0.986 | 0.705 | 0.994 |
| SAM | 0 | 1.982 | 2.332 | 2.422 | 2.031 | 3.376 | 1.831 | 1.980 | 1.056 |
| RMSE | 0 | 0.175 | 0.017 | 0.018 | 0.019 | 0.023 | 0.016 | 0.330 | 0.010 |
| ERGAS | 0 | 68.56 | 4.394 | 4.613 | 4.474 | 6.145 | 4.201 | 115.312 | 2.664 |
Table 3. The computing time of different methods. Each number denotes the execution time in seconds (s). The best performance is highlighted in bold.
| Datasets | SFIM | GS | PCA | CNMF | HySure | MTF_GLP | MGH | Our Method |
|---|---|---|---|---|---|---|---|---|
| Best value | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Disko | 36.81 | 28.06 | 27.55 | 168.05 | 2068.51 | 54.19 | 44.74 | 6.64 |
| Litov | 14.23 | 13.47 | 13.12 | 75.01 | 883.64 | 15.37 | 14.78 | 4.57 |
Table 4. Numbers of train and test samples.
| Classes | Name | Train | Test |
|---|---|---|---|
| 1 | Vegetation | 55 | 104 |
| 2 | Sandstone | 104 | 62 |
| 3 | Basalt | 72 | 46 |
| 4 | Sulphide | 105 | 157 |
| 5 | Debris | 56 | 68 |
| 6 | Sandstone-basalt | 91 | 40 |
| Total | | 483 | 477 |
Table 5. Classification accuracies on the fused results of different methods, i.e., SFIM [25], GS [24], PCA [45], CNMF [36], HySure [48], MTF_GLP [46], MGH [47], and our method, and on the raw data, i.e., RGB and HSI. The best performance is highlighted in bold; the second best is underlined.
| Class | RGB | HSI | SFIM | GS | PCA | CNMF | HySure | MTF_GLP | MGH | Our Method |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 78.79 | 90.00 | 82.35 | 97.12 | 79.49 | 88.18 | 53.76 | 87.16 | 79.67 | 79.69 |
| 2 | 50.98 | 91.18 | 98.41 | 93.65 | 100.0 | 67.39 | 96.49 | 100.0 | 96.88 | 96.88 |
| 3 | 19.54 | 26.04 | 32.74 | 24.22 | 20.81 | 18.82 | 35.71 | 19.43 | 27.78 | 32.54 |
| 4 | 69.52 | 78.95 | 69.70 | 79.31 | 30.30 | 57.14 | 85.96 | 18.52 | 63.16 | 87.04 |
| 5 | 38.46 | 84.62 | 42.50 | 75.86 | 50.00 | 55.56 | 44.83 | 53.33 | 57.78 | 72.50 |
| 6 | 31.03 | 30.56 | 50.00 | 38.46 | 54.05 | 39.47 | 44.44 | 54.05 | 58.06 | 55.38 |
| OA | 51.99 | 53.46 | 62.47 | 58.49 | 52.2 | 49.90 | 57.23 | 52.83 | 61.43 | 66.46 |
| AA | 48.05 | 66.89 | 62.62 | 68.10 | 55.78 | 54.43 | 60.20 | 55.42 | 63.89 | 70.67 |
| Kappa | 41.38 | 46.23 | 55.04 | 51.62 | 43.99 | 40.96 | 48.26 | 44.98 | 53.92 | 59.97 |
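The summary rows in Table 5 use the usual definitions: overall accuracy (OA), average accuracy (AA, the mean of the per-class accuracies), and Cohen's kappa. A minimal sketch computing them from predicted and true label vectors is given below; the function name and interface are illustrative, not taken from the paper:

```python
import numpy as np

def classification_scores(y_true, y_pred, n_classes):
    """OA, AA, and Cohen's kappa (all as percentages) from label vectors."""
    # Build the confusion matrix: rows = true class, cols = predicted class.
    cm = np.zeros((n_classes, n_classes), dtype=float)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    total = cm.sum()

    oa = np.trace(cm) / total                    # overall accuracy
    aa = np.mean(np.diag(cm) / cm.sum(axis=1))   # mean per-class accuracy

    # Kappa corrects OA for chance agreement p_e.
    pe = np.sum(cm.sum(axis=0) * cm.sum(axis=1)) / total ** 2
    kappa = (oa - pe) / (1 - pe)
    return 100 * oa, 100 * aa, 100 * kappa
```

Because AA averages per-class accuracies, it can differ substantially from OA on imbalanced test sets such as the one in Table 4, which is why both are reported.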