Article

Hyperspectral Pansharpening Based on Homomorphic Filtering and Weighted Tensor Matrix

1. Joint Laboratory of High Speed Multi-Source Image Coding and Processing, School of Telecommunications Engineering, Xidian University, Xi’an 710071, China
2. State Key Lab. of Integrated Service Networks, School of Telecommunications Engineering, Xidian University, Xi’an 710071, China
3. Department of Electrical and Computer Engineering, Mississippi State University, Mississippi State, MS 39762, USA
* Author to whom correspondence should be addressed.
Submission received: 5 February 2019 / Revised: 22 April 2019 / Accepted: 23 April 2019 / Published: 27 April 2019

Abstract

Hyperspectral pansharpening is an effective technique for obtaining a high spatial resolution hyperspectral (HS) image. In this paper, a new hyperspectral pansharpening algorithm based on homomorphic filtering and a weighted tensor matrix (HFWT) is proposed. In the proposed HFWT method, an open-closing morphological operation is utilized to remove noise from the HS image, and homomorphic filtering is introduced to extract the spatial details of each band of the denoised HS image. More importantly, a weighted root mean squared error-based method is proposed to obtain the total spatial information of the HS image, and an optimized weighted tensor matrix based strategy is presented to integrate the spatial information of the HS image with that of the panchromatic (PAN) image. With the integrated spatial details injected through a suitable gain matrix, the fused HS image is generated. Experimental results on both simulated and real datasets demonstrate that the proposed HFWT method effectively generates a fused HS image with high spatial resolution while maintaining the spectral information of the original low spatial resolution HS image.

1. Introduction

Depending on the number of acquired bands, remote sensing imaging technology has developed from collecting panchromatic (PAN) and color images to multispectral (MS) images, and it can now capture hyperspectral (HS) images with dozens to hundreds of bands. A PAN image with very high spatial resolution is a single-band grayscale image acquired in the visible range; it captures the shape of objects but cannot distinguish colors. A color image consists of three bands (red, green, and blue) and displays the colors of objects; however, it has difficulty distinguishing features of similar colors. An MS image provides not only spatial features but also spectral information in several bands, making it more capable of distinguishing different feature categories. However, the coarse spectral resolution of MS images may not meet the requirements of some applications, and fine feature detection is hard to realize [1]. An HS image, with a spectral resolution on the order of nanometers, can provide finer classification [2] and has been applied to many fields [3,4,5,6,7] and practical applications, such as vegetation study [8], precision agriculture [8], regional geological mapping [9], mineral exploration [10], and environment monitoring [11]. Due to technical limitations, however, the spatial resolution of an HS image is low.
As both high spatial and spectral resolutions are important in practical applications, obtaining a high spatial resolution HS (HRHS) image is crucial. One effective way is to perform hyperspectral pansharpening, which fuses a high spatial resolution PAN (HRPAN) image with a low spatial resolution HS (LRHS) image. Figure 1 shows the concept of hyperspectral pansharpening.
Many hyperspectral pansharpening algorithms have been developed; among them, methods using Bayesian inference and matrix factorization have been proposed in recent years. The Bayesian-based approaches include the Bayesian naive Gaussian prior [12], the Bayesian sparsity promoted Gaussian prior [13], and HySure [14]. These algorithms utilize the posterior distribution and fuse the LRHS and HRPAN images based on maximum a posteriori estimation [15]. The matrix factorization approach generates a fused HRHS image by using nonnegative matrix factorization (NMF) under certain constraints to estimate endmember and abundance matrices [16]; it is well represented by the nonnegative sparse coding (NNSC) [17] and coupled nonnegative matrix factorization (CNMF) [18] methods. The main challenge in hyperspectral pansharpening is to effectively improve the spatial resolution while preserving the original spectral information. The Bayesian and matrix factorization approaches achieve good results on this challenge but have a high computational cost.
Component substitution (CS) and multi-resolution analysis (MRA) approaches are two classical hyperspectral pansharpening families with simple and fast implementations. For the CS class, the intensity-hue-saturation (IHS) transform [19,20], principal component analysis (PCA) transform [21,22], Gram–Schmidt (GS) [23], and adaptive GS (GSA) [24] are the most representative methods. The CS class extracts the spatial details of the HS image and replaces them with those of the HRPAN image. Despite its superior spatial performance, the CS class suffers from serious spectral distortion [25]. Typical algorithms of the MRA technique are smoothing filter based intensity modulation (SFIM) [26], the Laplacian pyramid [27], the modulation transfer function generalized Laplacian pyramid (MTF-GLP) [28], and MTF-GLP with high pass modulation (MTF-GLP-HPM) [29]. The MRA methods generally use a multi-resolution decomposition to extract spatial details, which are then injected into the HS image. Compared with the CS methods, the MRA methods generate less spectral distortion but usually carry a larger computational burden [30]. Recently, several algorithms based on the CS and MRA approaches have been proposed, such as the Sentinel-2A CS and MRA based sharpening algorithm [31], the multiband filter estimation (MBFE) algorithm [32], and the guided filter PCA (GFPCA) algorithm [33]. Moreover, several intelligent processing-based methods have been proposed, including the deep two-branches convolutional neural network (Two-CNN-Fu) [34], the bidirectional pyramid network [35], and the 3D convolutional neural network (3D-CNN) [36].
The CS and MRA approaches mostly extract the spatial information of the HRPAN image and inject it into the LRHS image, but without considering the spatial information of the LRHS image. Due to the incomplete spatial information injection, the CS and MRA approaches may result in distortion. To address this problem, we propose a novel hyperspectral pansharpening method by combining homomorphic filtering with a weighted tensor matrix. An optimized weighted tensor matrix-based method which considers the structure information of the LRHS and HRPAN images is proposed to generate more comprehensive spatial information. In addition, to extract the spatial structure information of the LRHS images, open-closing morphological operation is first used for noise removal, and homomorphic filtering is then introduced to extract the spatial details of each band. Finally, a weighted root mean squared error based method is proposed to obtain the total spatial component of the LRHS image from extracted spatial details of each band, and the Laplacian pyramid networks super-resolution algorithm is adopted to enhance the spatial resolution of the obtained spatial component. Comparative analysis was used to demonstrate the applicability and superiority of the proposed method in both spectral and spatial qualities.
As stated above, a new hyperspectral pansharpening method based on homomorphic filtering and a weighted tensor matrix is proposed in this paper. The main novelties of the proposed method are summarized as follows.
  • A novel HS image spatial component extraction strategy is proposed. The open-closing morphological operation and homomorphic filtering are first introduced to remove noise and to extract the spatial details of each band of the HS image, respectively. Then, a weighted root mean squared error-based method is proposed to obtain the total spatial component of the HS image.
  • An optimized weighted tensor matrix-based method is proposed to integrate the spatial component of the HS image with that of the PAN image. The weighted structure tensor matrix, which represents the structural information of multiple images, is applied to hyperspectral pansharpening for the first time. Classical methods, which mostly extract the spatial information of the PAN image alone, inject incomplete spatial information and may cause distortion. Unlike these classical methods, the proposed optimized weighted tensor matrix-based method generates spatial information from both the PAN image and the HS image, and can reduce the distortion caused by insufficient spatial information.
The remainder of this paper is organized as follows. Section 2 describes the weighted structure tensor matrix and homomorphic filtering. In Section 3, the proposed homomorphic filtering and weighted tensor matrix-based hyperspectral pansharpening algorithm is presented. Experimental results and discussion are provided in Section 4, and conclusions are drawn in Section 5.

2. Related Work

2.1. Weighted Structure Tensor Matrix

For an image $I$, the structure tensor matrix $M$ can be decomposed as:

$$M = \nabla I \cdot \nabla I^{T} = \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix} = \begin{bmatrix} e_1 & e_2 \end{bmatrix} \begin{bmatrix} v_1 & 0 \\ 0 & v_2 \end{bmatrix} \begin{bmatrix} e_1 & e_2 \end{bmatrix}^{T} \qquad (1)$$

where $I_x = \partial I / \partial x$ and $I_y = \partial I / \partial y$ are the horizontal and vertical partial derivatives of the image, $\nabla I = [I_x \; I_y]^T$, $(\cdot)^T$ is the transpose operation, and $v_1$, $v_2$ and $e_1$, $e_2$ are the two eigenvalues and the corresponding eigenvectors, respectively. As shown in Equation (1), the tensor matrix $M$, which is symmetric and positive semi-definite, admits an eigen-decomposition, and it has been exploited in fields such as texture synthesis [37], image regularization [38], denoising [39], and recognition systems [40]. The eigenvalues obtained by decomposing the tensor matrix describe the structure information of the image. For multiple images $[I_1, I_2, \ldots, I_n]$, the structural information is coupled from all images by the linear combination:
$$M_w = \frac{1}{n}\sum_{l=1}^{n} M_l = \frac{1}{n}\begin{bmatrix} \sum_{l=1}^{n} \left(\frac{\partial I_l}{\partial x}\right)^2 & \sum_{l=1}^{n} \frac{\partial I_l}{\partial x}\frac{\partial I_l}{\partial y} \\ \sum_{l=1}^{n} \frac{\partial I_l}{\partial x}\frac{\partial I_l}{\partial y} & \sum_{l=1}^{n} \left(\frac{\partial I_l}{\partial y}\right)^2 \end{bmatrix} \qquad (2)$$

where $M_w$ is the weighted structure tensor matrix, $M_l$ is the structure tensor matrix of the $l$th image, and $\partial I_l/\partial x$ and $\partial I_l/\partial y$ are the horizontal and vertical partial derivatives of the $l$th image, respectively. The weighted tensor matrix $M_w$ is also symmetric and positive semi-definite, and it likewise admits an eigen-decomposition.
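As a concrete illustration, the short sketch below (our illustration, not code from the paper) accumulates the per-image structure tensors of Equation (2) at every pixel, with np.gradient standing in for the partial derivatives:

```python
import numpy as np


def weighted_structure_tensor(images):
    """Per-pixel weighted structure tensor M_w of a list of 2-D arrays (Eq. (2)).

    Returns an (H, W, 2, 2) array holding the 2 x 2 tensor at every pixel.
    """
    h, w = images[0].shape
    m_w = np.zeros((h, w, 2, 2))
    for img in images:
        # np.gradient returns derivatives along rows (y) first, then columns (x).
        iy, ix = np.gradient(img.astype(float))
        m_w[..., 0, 0] += ix * ix
        m_w[..., 0, 1] += ix * iy
        m_w[..., 1, 0] += ix * iy
        m_w[..., 1, 1] += iy * iy
    return m_w / len(images)


# The symmetric tensors can then be eigen-decomposed pixel-wise, e.g.:
#   vals, vecs = np.linalg.eigh(weighted_structure_tensor([img1, img2]))
```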

2.2. Homomorphic Filtering

Homomorphic filtering, a type of frequency domain filtering, can compress the brightness range of an image and enhance its contrast. It has been applied to several image processing problems [41,42,43] and is based on the following imaging model:

$$f = f_H \cdot f_L \qquad (3)$$

where $f$ represents an image, $f_H$ the high frequency reflectance component, and $f_L$ the low frequency illumination component. Homomorphic filtering aims to reduce the low frequency component of an image. A logarithmic transformation is utilized to separate the two components:

$$\ln(f) = \ln(f_H) + \ln(f_L) \qquad (4)$$

After applying the Fourier transform:

$$F = F_H + F_L \qquad (5)$$

where $F$, $F_H$ and $F_L$ denote the Fourier transforms of $\ln(f)$, $\ln(f_H)$ and $\ln(f_L)$, respectively. Then, the high-pass filter $H$ is applied to Equation (5):

$$S = F \cdot H = F_H \cdot H + F_L \cdot H \qquad (6)$$

where $S$ is the filtered result. The final image is obtained by the inverse Fourier transform and the exponential operation:

$$f_{hf} = \exp(s) = \exp\big(\mathcal{F}^{-1}(S)\big) \qquad (7)$$

where $f_{hf}$ denotes the homomorphic filtered image, $s$ denotes the inverse Fourier transform of $S$, and $\mathcal{F}^{-1}$ denotes the inverse Fourier transform.
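As an illustration, the pipeline of Equations (3)–(7) can be sketched in Python as follows. This is a minimal reconstruction, not the authors' implementation: the Gaussian high-frequency-emphasis filter anticipates Equation (13), the default gains and cut-off are the values used later in Section 3.2, and log1p/expm1 replace ln/exp so that zero-valued pixels are tolerated.

```python
import numpy as np


def homomorphic_filter(f, beta_h=2.0, beta_l=0.25, d0=40.0):
    """Homomorphic filtering of a single-band image, following Eqs. (3)-(7)."""
    rows, cols = f.shape
    # Log transform turns the multiplicative illumination/reflectance model additive.
    log_f = np.log1p(f.astype(float))
    F = np.fft.fftshift(np.fft.fft2(log_f))
    # Distance of each frequency from the centre of the (shifted) spectrum.
    u = np.arange(rows) - rows / 2.0
    v = np.arange(cols) - cols / 2.0
    d2 = u[:, None] ** 2 + v[None, :] ** 2
    # Gaussian high-frequency-emphasis filter, cf. Eq. (13).
    H = (beta_h - beta_l) * (1.0 - np.exp(-d2 / d0 ** 2)) + beta_l
    s = np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))
    return np.expm1(s)  # undo the log transform, Eq. (7)
```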

3. Proposed Method

3.1. Hyperspectral Image Preprocessing

Figure 2 shows the schematic of the proposed homomorphic filtering and weighted tensor (HFWT) matrix-based hyperspectral pansharpening algorithm. Let the LRHS image be represented by $X_{\mathrm{LR}}^{\mathrm{HS}} \in \mathbb{R}^{m \times n \times B}$ and the HRPAN image by $X^{\mathrm{PAN}} \in \mathbb{R}^{M \times N \times 1}$, where $m \times n$ and $M \times N$ are the sizes of the LRHS and HRPAN images, respectively, and $B$ is the number of LRHS image bands. The fused HRHS image is represented by $X_{\mathrm{HR}}^{\mathrm{HS}} \in \mathbb{R}^{M \times N \times B}$.
The open-closing operation, a mathematical morphology operation, is an effective denoising technique [44,45]. Denoising with the opening or the closing operation alone is usually less effective, since either one alone may cause amplitude deflection; the combined open-closing operation denoises better. The opening operation is first applied to the image with a structuring element larger than the noise size to remove the background noise; the closing operation is then applied to remove the remaining noise. Open-closing denoising is suitable for images with few fine details. Since the LRHS image has low spatial resolution and therefore contains few fine spatial details, the open-closing operation is well suited to removing high-interference noise from it. The open-closing morphological operation is applied as:
$$(X_{\mathrm{LR}}^{\mathrm{RNH}})_k = \big((X_{\mathrm{LR}}^{\mathrm{HS}})_k \circ S_1\big) \bullet S_2 \qquad (8)$$
for $k = 1, 2, \ldots, B$, where $X_{\mathrm{LR}}^{\mathrm{RNH}}$ denotes the denoised LRHS image, $(X_{\mathrm{LR}}^{\mathrm{HS}})_k$ and $(X_{\mathrm{LR}}^{\mathrm{RNH}})_k$ denote the $k$th band of the LRHS image and the denoised LRHS image, respectively, and $S_1$ and $S_2$ are the structuring elements. Here, $\circ$ represents the opening operation, which applies erosion followed by dilation, and $\bullet$ denotes the closing operation, which does the reverse. The erosion and dilation operations take the local minimum and maximum of the image, respectively. Equation (8) can be expanded as:
$$(X_{\mathrm{LR}}^{\mathrm{OHS}})_k = (X_{\mathrm{LR}}^{\mathrm{HS}})_k \circ S_1 = \big((X_{\mathrm{LR}}^{\mathrm{HS}})_k \ominus S_1\big) \oplus S_1 = \max_{j \in S_1}\Big\{\Big[\min_{j \in S_1}\big((X_{\mathrm{LR}}^{\mathrm{HS}})_k(i+j) - S_1(j)\big)\Big] + S_1(j)\Big\} \qquad (9)$$

$$\big((X_{\mathrm{LR}}^{\mathrm{HS}})_k \circ S_1\big) \bullet S_2 = (X_{\mathrm{LR}}^{\mathrm{OHS}})_k \bullet S_2 = \min_{j \in S_2}\Big\{\Big[\max_{j \in S_2}\big((X_{\mathrm{LR}}^{\mathrm{OHS}})_k(i+j) + S_2(j)\big)\Big] - S_2(j)\Big\} \qquad (10)$$
for $k = 1, 2, \ldots, B$, where $\ominus$ and $\oplus$ denote the erosion and dilation operations, respectively.
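Grayscale opening and closing are available in standard libraries, so Equations (8)–(10) can be sketched as below; the flat structuring-element sizes stand in for $S_1$ and $S_2$, whose exact shapes the paper does not specify.

```python
import numpy as np
from scipy.ndimage import grey_closing, grey_opening


def open_closing_denoise(hs_cube, size1=(3, 3), size2=(3, 3)):
    """Band-wise open-closing morphological denoising, following Eqs. (8)-(10).

    hs_cube: (m, n, B) low-resolution HS image.
    """
    out = np.empty_like(hs_cube, dtype=float)
    for k in range(hs_cube.shape[2]):
        opened = grey_opening(hs_cube[:, :, k], size=size1)  # erosion, then dilation
        out[:, :, k] = grey_closing(opened, size=size2)      # dilation, then erosion
    return out
```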

3.2. Hyperspectral Image Spatial Information Extraction

Homomorphic filtering transforms a nonlinear problem into a linear one: the multiplicative mixing model is converted into an additive model by the logarithmic transformation, and linear filtering is then applied. It suppresses the low frequency illumination component and enhances the high frequency reflectance component. For an HS image, the high frequency component of each band is regarded as that band's spatial component. Therefore, homomorphic filtering is applied to each band of the denoised LRHS image to suppress its low frequency component and extract the spatial component of each band. The processing is based on the following imaging model:
$$(X_{\mathrm{LR}}^{\mathrm{RNH}})_k = (X_{\mathrm{LR\_H}}^{\mathrm{RNH}})_k \cdot (X_{\mathrm{LR\_L}}^{\mathrm{RNH}})_k \qquad (11)$$
for $k = 1, 2, \ldots, B$, where $X_{\mathrm{LR\_H}}^{\mathrm{RNH}}$ represents the high frequency component of the denoised LRHS image, $X_{\mathrm{LR\_L}}^{\mathrm{RNH}}$ represents the low frequency component, and $(X_{\mathrm{LR\_H}}^{\mathrm{RNH}})_k$ and $(X_{\mathrm{LR\_L}}^{\mathrm{RNH}})_k$ represent the $k$th band of $X_{\mathrm{LR\_H}}^{\mathrm{RNH}}$ and $X_{\mathrm{LR\_L}}^{\mathrm{RNH}}$, respectively. Based on Equations (4)–(6), the logarithmic transformation, Fourier transform, and high-pass filtering operations are applied to Equation (11):
$$(S_{\mathrm{LR}})_k = \mathcal{F}\big[\ln\big((X_{\mathrm{LR\_H}}^{\mathrm{RNH}})_k\big)\big] \cdot H + \mathcal{F}\big[\ln\big((X_{\mathrm{LR\_L}}^{\mathrm{RNH}})_k\big)\big] \cdot H \qquad (12)$$
for $k = 1, 2, \ldots, B$, where $S_{\mathrm{LR}}$ is the high-pass filtered image, $(S_{\mathrm{LR}})_k$ is the $k$th band of $S_{\mathrm{LR}}$, $\mathcal{F}$ represents the Fourier transform, and $H$ is the high-pass filter, defined as:
$$H(x, y) = (\beta_H - \beta_L)\big[1 - \exp\big(-D^2(x, y)/D_0^2\big)\big] + \beta_L \qquad (13)$$
where $D_0$ is the cut-off frequency, $D(x, y)$ is the distance between $(x, y)$ and the center of the frequency plane, and $\beta_H$ and $\beta_L$ are the high and low frequency gains. Figure 3 shows the 3-D mesh of the high-pass filter. Since homomorphic filtering aims to reduce the low frequency component and extract the high frequency component, $\beta_H$ is greater than 1 and $\beta_L$ is smaller than 1. Adjusting the cut-off frequency $D_0$ controls the sharpness of the transition between $\beta_L$ and $\beta_H$. In practice, these parameters are determined empirically; in this paper, $\beta_H$, $\beta_L$, and $D_0$ are set to 2, 0.25, and 40, respectively. $S_{\mathrm{LR}}$ is the high-pass filtered image in which the low frequency component has been weakened. The spatial component of each band is then obtained by applying the inverse Fourier transform and the exponential operation to $S_{\mathrm{LR}}$:
$$(X_{\mathrm{LR}}^{I})_k = \exp\big[\mathcal{F}^{-1}\big((S_{\mathrm{LR}})_k\big)\big] \qquad (14)$$
for $k = 1, 2, \ldots, B$, where $X_{\mathrm{LR}}^{I}$ denotes the spatial component of the denoised LRHS image, $(X_{\mathrm{LR}}^{I})_k$ denotes its $k$th band, and $\mathcal{F}^{-1}$ denotes the inverse Fourier transform.
After homomorphic filtering has produced the spatial component of each band of the denoised LRHS image, a weighted root mean squared error (RMSE)-based method is presented to extract the spatial intensity information of the HS image. Let $I_{\mathrm{LR}} = \sum_{k=1}^{B} \lambda_k (X_{\mathrm{LR}}^{I})_k$ denote the total spatial information of the LRHS image, where $[\lambda_1, \lambda_2, \ldots, \lambda_B]$ is the weight vector. To determine the weights, we use the RMSE index, which measures the deviation between two images; a smaller RMSE indicates a better match, with 0 optimal. In the RMSE-based method, the spatial information of the PAN image serves as the reference, and the RMSE between the total spatial information $I_{\mathrm{LR}}$ and the downsampled PAN image is minimized to obtain the optimal weights $[\lambda_1, \lambda_2, \ldots, \lambda_B]$:
$$\min_{\lambda} \; \frac{1}{T}\sum_{i=1}^{T}\Big[\Big(\sum_{k=1}^{B} \lambda_k (X_{\mathrm{LR}}^{I})_k\Big)_i - \big(X_{\downarrow}^{\mathrm{PAN}}\big)_i\Big]^2 \qquad (15)$$
where $T = m \times n$ is the total number of pixels in one band of the LRHS image, $\downarrow$ represents the down-sampling operation, $X_{\downarrow}^{\mathrm{PAN}}$ is the PAN image down-sampled to the size of the LRHS image, and $(X_{\downarrow}^{\mathrm{PAN}})_i$ and $(\sum_{k=1}^{B} \lambda_k (X_{\mathrm{LR}}^{I})_k)_i$ are the values of the $i$th pixel in $X_{\downarrow}^{\mathrm{PAN}}$ and $\sum_{k=1}^{B} \lambda_k (X_{\mathrm{LR}}^{I})_k$, respectively. The Laplacian pyramid networks (LapSRN) [46] super-resolution method can effectively improve the spatial resolution of an image and has the advantages of parameter sharing, local skip connections, and multi-scale training. It is therefore adopted to super-resolve the spatial information $I_{\mathrm{LR}}$ of the LRHS image, yielding the high-resolution spatial component $I_{\mathrm{HR}}$.
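Since Equation (15) is linear in the weights, it reduces to an ordinary least-squares problem over the vectorized bands. A minimal sketch (not the authors' code; the LapSRN super-resolution stage is not reproduced here):

```python
import numpy as np


def band_weights(spatial_bands, pan_lowres):
    """Solve Eq. (15): least-squares weights lambda_k combining the per-band
    spatial components into an intensity matched to the downsampled PAN image.

    spatial_bands: (m, n, B) components X_LR^I; pan_lowres: (m, n) PAN image
    already downsampled to the HS grid.
    """
    m, n, b = spatial_bands.shape
    A = spatial_bands.reshape(m * n, b)  # one column per band
    lam, *_ = np.linalg.lstsq(A, pan_lowres.reshape(m * n), rcond=None)
    return lam


# I_LR = spatial_bands @ lam gives the total spatial information; the paper
# then super-resolves I_LR with a pretrained LapSRN. Bicubic interpolation is
# a crude stand-in when no pretrained network is at hand.
```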

3.3. Panchromatic Image Preprocessing and Total Spatial Information Acquisition

To make the spatial information of the PAN image clearer, the Laplacian of Gaussian (LOG) [47] image enhancement algorithm is applied to the PAN image: a Gaussian filter reduces noise, and the Laplace operator then enhances the result. Let $I_s^{\mathrm{PAN}}$ represent the enhanced PAN image.
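A minimal sketch of such a LOG-based enhancement is given below; the Gaussian width sigma and the strength alpha are assumed values, since the paper cites [47] without listing parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace


def log_enhance(pan, sigma=1.0, alpha=0.5):
    """Sharpen a PAN image by subtracting a scaled Laplacian-of-Gaussian
    response: Gaussian smoothing suppresses noise, the Laplacian picks out
    edges, and the subtraction emphasises them."""
    pan = pan.astype(float)
    return pan - alpha * gaussian_laplace(pan, sigma=sigma)
```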
The HS and PAN images contain different and complementary information about a scene. To acquire the total spatial information, the spatial structure details of both images should be considered simultaneously; we therefore propose an optimized weighted tensor matrix-based method. $I_{\mathrm{HR}}$ and $I_s^{\mathrm{PAN}}$ carry the spatial structure information of the HS and PAN images, respectively. Based on Equation (2), for the image pair $[I_{\mathrm{HR}}, I_s^{\mathrm{PAN}}]$, the weighted structure tensor matrix at pixel $p$ is given by:
$$M_{w,p}^{\mathrm{HP}} = \frac{1}{2}\begin{bmatrix} \left(\frac{\partial I_{\mathrm{HR},p}}{\partial x}\right)^2 + \left(\frac{\partial I_{s,p}^{\mathrm{PAN}}}{\partial x}\right)^2 & \frac{\partial I_{\mathrm{HR},p}}{\partial x}\frac{\partial I_{\mathrm{HR},p}}{\partial y} + \frac{\partial I_{s,p}^{\mathrm{PAN}}}{\partial x}\frac{\partial I_{s,p}^{\mathrm{PAN}}}{\partial y} \\ \frac{\partial I_{\mathrm{HR},p}}{\partial x}\frac{\partial I_{\mathrm{HR},p}}{\partial y} + \frac{\partial I_{s,p}^{\mathrm{PAN}}}{\partial x}\frac{\partial I_{s,p}^{\mathrm{PAN}}}{\partial y} & \left(\frac{\partial I_{\mathrm{HR},p}}{\partial y}\right)^2 + \left(\frac{\partial I_{s,p}^{\mathrm{PAN}}}{\partial y}\right)^2 \end{bmatrix} \qquad (16)$$
where $M_w^{\mathrm{HP}}$ denotes the obtained weighted tensor matrix, $M_{w,p}^{\mathrm{HP}}$ denotes $M_w^{\mathrm{HP}}$ at pixel $p$, and $\partial I_{\mathrm{HR},p}/\partial x$, $\partial I_{\mathrm{HR},p}/\partial y$, $\partial I_{s,p}^{\mathrm{PAN}}/\partial x$, and $\partial I_{s,p}^{\mathrm{PAN}}/\partial y$ are the $x$ and $y$ partial derivatives of $I_{\mathrm{HR}}$ and $I_s^{\mathrm{PAN}}$ at pixel $p$, respectively. The weighted tensor matrix $M_{w,p}^{\mathrm{HP}}$ is positive semi-definite, and it can be decomposed as:
$$M_{w,p}^{\mathrm{HP}} = \begin{bmatrix} e_{w11,p} & e_{w21,p} \\ e_{w12,p} & e_{w22,p} \end{bmatrix} \begin{bmatrix} v_{w1,p} & 0 \\ 0 & v_{w2,p} \end{bmatrix} \begin{bmatrix} e_{w11,p} & e_{w21,p} \\ e_{w12,p} & e_{w22,p} \end{bmatrix}^{T} = v_{w1,p}\, e_{w1,p} (e_{w1,p})^{T} + v_{w2,p}\, e_{w2,p} (e_{w2,p})^{T} \qquad (17)$$
where $(\cdot)^T$ is the transpose operation, $v_{w1}$ and $v_{w2}$ are the two eigenvalues, $v_{w1,p}$ and $v_{w2,p}$ are those eigenvalues at pixel $p$, and $e_{w1,p} = [e_{w11,p} \; e_{w12,p}]^T$ and $e_{w2,p} = [e_{w21,p} \; e_{w22,p}]^T$ are the eigenvectors corresponding to the two eigenvalues at pixel $p$, respectively.
Of the two eigenvalues, one is generally larger than the other; we assume that $v_{w1}$ is the larger eigenvalue. When $v_{w1} \approx v_{w2} \approx 0$, $v_{w1} > v_{w2} \approx 0$, and $v_{w1} > v_{w2} > 0$, the pixel lies in a flat area, on an edge, or at a corner, respectively. We examined several images to study the eigenvalues of the weighted tensor matrix. Figure 4 shows the two eigenvalues at each pixel of the weighted tensor matrix for the Salinas scene data. For many pixels, the smaller eigenvalues shown in Figure 4d are on the order of $10^{-5}$, i.e., very small. Experiments on numerous other images likewise show that the smaller eigenvalues are mostly negligible. Thus, $M_{w,p}^{\mathrm{HP}}$ is approximated as:
$$\tilde{M}_{w,p}^{\mathrm{HP}} = \begin{bmatrix} e_{w11,p} & e_{w21,p} \\ e_{w12,p} & e_{w22,p} \end{bmatrix} \begin{bmatrix} v_{w1,p} & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} e_{w11,p} & e_{w21,p} \\ e_{w12,p} & e_{w22,p} \end{bmatrix}^{T} = v_{w1,p}\, e_{w1,p} (e_{w1,p})^{T} \qquad (18)$$
where $\tilde{M}_{w,p}^{\mathrm{HP}}$ is the approximation of $M_{w,p}^{\mathrm{HP}}$. Based on Equation (1), the structure tensor matrix satisfies $M = \nabla I \cdot \nabla I^T$. The weighted gradient $G_w$ at pixel $p$ therefore satisfies $\tilde{M}_{w,p}^{\mathrm{HP}} = G_{w,p} \cdot (G_{w,p})^T = v_{w1,p}\, e_{w1,p} (e_{w1,p})^T$, from which $G_{w,p} = \sqrt{v_{w1,p}} \cdot e_{w1,p}$. Since the sign of the eigenvector corresponding to $v_{w1}$ is not unique, the direction of the weighted gradient $G_{w,p}$ is also not unique. We align the direction of the weighted gradient $G_{w,p}$ with the average of the gradients of the individual source images $[I_{\mathrm{HR}}, I_s^{\mathrm{PAN}}]$:
$$G_{w,p} = \sqrt{v_{w1,p}} \cdot e_{w1,p} \cdot \mathrm{sign}\Big\langle e_{w1,p},\; \tfrac{1}{2}\big(\nabla I_{\mathrm{HR},p} + \nabla I_{s,p}^{\mathrm{PAN}}\big)\Big\rangle \qquad (19)$$
where $\nabla I_{\mathrm{HR},p} = [\partial I_{\mathrm{HR},p}/\partial x \;\; \partial I_{\mathrm{HR},p}/\partial y]^T$ and $\nabla I_{s,p}^{\mathrm{PAN}} = [\partial I_{s,p}^{\mathrm{PAN}}/\partial x \;\; \partial I_{s,p}^{\mathrm{PAN}}/\partial y]^T$, $\langle \cdot, \cdot \rangle$ represents the inner product of two vectors, and $\mathrm{sign}(\cdot)$ represents the sign function. Once the weighted gradient $G_{w,p}$ is acquired from the image pair $[I_{\mathrm{HR}}, I_s^{\mathrm{PAN}}]$, an optimization model is proposed to obtain the total spatial information $I_T^{\mathrm{HP}}$:
$$\min_{I_T^{\mathrm{HP}}} \; \big\| \nabla\big(I_T^{\mathrm{HP}}\big) - G_w \big\|^2 \qquad (20)$$
where $\nabla I_T^{\mathrm{HP}} = [\partial I_T^{\mathrm{HP}}/\partial x \;\; \partial I_T^{\mathrm{HP}}/\partial y]^T$, and $\partial I_T^{\mathrm{HP}}/\partial x$ and $\partial I_T^{\mathrm{HP}}/\partial y$ denote the $x$ and $y$ partial derivatives of $I_T^{\mathrm{HP}}$. Equation (20) is an unconstrained optimization problem, which we solve by the conjugate gradient method. It effectively ensures that the total spatial information $I_T^{\mathrm{HP}}$ contains the spatial structure details of both the HS and PAN images.
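The sketch below strings Equations (16)–(20) together: it builds the pixel-wise weighted tensor, keeps the dominant eigenpair, fixes the gradient sign per Equation (19), and recovers $I_T^{\mathrm{HP}}$ by solving the normal equation of Equation (20), a Poisson problem, with conjugate gradients. It is an illustrative reconstruction with simplified boundary handling, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import laplace
from scipy.sparse.linalg import LinearOperator, cg


def weighted_gradient(i_hr, i_pan):
    """Weighted gradient G_w from the rank-1 tensor approximation, Eqs. (16)-(19)."""
    gy1, gx1 = np.gradient(i_hr.astype(float))
    gy2, gx2 = np.gradient(i_pan.astype(float))
    m = np.empty(i_hr.shape + (2, 2))                     # Eq. (16), per pixel
    m[..., 0, 0] = 0.5 * (gx1 ** 2 + gx2 ** 2)
    m[..., 0, 1] = m[..., 1, 0] = 0.5 * (gx1 * gy1 + gx2 * gy2)
    m[..., 1, 1] = 0.5 * (gy1 ** 2 + gy2 ** 2)
    vals, vecs = np.linalg.eigh(m)                        # eigenvalues in ascending order
    e1 = vecs[..., :, 1]                                  # dominant eigenvector
    g = np.sqrt(np.maximum(vals[..., 1], 0.0))[..., None] * e1
    # Resolve the sign ambiguity against the average source gradient, Eq. (19).
    avg = 0.5 * np.stack([gx1 + gx2, gy1 + gy2], axis=-1)
    sign = np.sign(np.sum(e1 * avg, axis=-1))
    sign[sign == 0] = 1.0
    return g * sign[..., None]


def fuse_spatial(g, maxiter=500):
    """Solve Eq. (20), min ||grad(I) - G_w||^2, via its Poisson normal equation.

    g: (H, W, 2) weighted gradient field, (x, y) components.
    """
    h, w = g.shape[:2]
    div = np.gradient(g[..., 0], axis=1) + np.gradient(g[..., 1], axis=0)

    def neg_laplacian(x):
        # -Laplacian is positive semi-definite, as conjugate gradients require.
        return -laplace(x.reshape(h, w)).ravel()

    A = LinearOperator((h * w, h * w), matvec=neg_laplacian, dtype=np.float64)
    sol, _ = cg(A, -div.ravel(), maxiter=maxiter)
    return sol.reshape(h, w)
```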

3.4. Fused High Spatial Resolution Hyperspectral Image Generation

The LRHS image $X_{\mathrm{LR}}^{\mathrm{HS}}$ is interpolated to the scale of the HRPAN image. By constructing a suitable gain matrix $R$, the total spatial information $I_T^{\mathrm{HP}}$ is injected into the interpolated HS image to generate the fused HRHS image $X_{\mathrm{HR}}^{\mathrm{HS}}$. To reduce spectral distortion, the gain matrix $R$ should keep the ratio between each pair of HS bands unchanged. Thus, $R$ should satisfy $R_k \propto (X_{\mathrm{IN}}^{\mathrm{HS}})_k \big/ \big(\frac{1}{B}\sum_{k=1}^{B} (X_{\mathrm{IN}}^{\mathrm{HS}})_k\big)$, where $X_{\mathrm{IN}}^{\mathrm{HS}}$ is the interpolated HS image, and $(X_{\mathrm{IN}}^{\mathrm{HS}})_k$ and $R_k$ are the $k$th band of $X_{\mathrm{IN}}^{\mathrm{HS}}$ and $R$, respectively. A tradeoff parameter $\varepsilon$ is then defined to regulate the amount of injected detail and reduce spatial distortion. This process is expressed as:
$$(X_{\mathrm{HR}}^{\mathrm{HS}})_k = (X_{\mathrm{IN}}^{\mathrm{HS}})_k + R_k \cdot I_T^{\mathrm{HP}} = (X_{\mathrm{IN}}^{\mathrm{HS}})_k + \varepsilon \cdot \frac{(X_{\mathrm{IN}}^{\mathrm{HS}})_k}{\frac{1}{B}\sum_{k=1}^{B} (X_{\mathrm{IN}}^{\mathrm{HS}})_k} \cdot I_T^{\mathrm{HP}} \qquad (21)$$
for $k = 1, 2, \ldots, B$, where $X_{\mathrm{HR}}^{\mathrm{HS}}$ is the fused HRHS image and $(X_{\mathrm{HR}}^{\mathrm{HS}})_k$ is its $k$th band.
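Equation (21) amounts to a band-ratio gain applied to the integrated spatial information; a minimal sketch follows (the small constant guarding against division by zero is an added safeguard):

```python
import numpy as np


def inject_details(hs_interp, i_total, eps):
    """Detail injection of Eq. (21).

    hs_interp: (M, N, B) interpolated HS image; i_total: (M, N) integrated
    spatial information I_T^HP; eps: tradeoff parameter.
    """
    mean_band = hs_interp.mean(axis=2, keepdims=True)      # (1/B) * sum over bands
    gain = eps * hs_interp / np.maximum(mean_band, 1e-12)  # band-ratio gain R_k
    return hs_interp + gain * i_total[..., None]
```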

4. Experimental Results and Discussion

4.1. Datasets and Experimental Setup

To evaluate the effectiveness of the proposed HFWT hyperspectral pansharpening method, experiments were performed on two simulated hyperspectral datasets, a Washington DC scene and a Salinas scene, and one real hyperspectral dataset, the Hyperion dataset. The Salinas scene was collected by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) [48], and the Washington DC dataset was acquired by the Spectral Information Technology Application Center of Virginia. The real dataset is provided by the EO-1 spacecraft, whose Hyperion instrument provides the real LRHS images and whose Advanced Land Imager (ALI) instrument acquires the HRPAN images [48]. Table 1 lists the characteristics of each dataset.
The proposed HFWT method is compared with several state-of-the-art hyperspectral pansharpening methods: Gram-Schmidt (GS) [23], guided filter principal component analysis (GFPCA) [33], coupled nonnegative matrix factorization (CNMF) [18], Bayesian sparsity promoted Gaussian prior (Bayesian) [13], and HySure [14]. Four typical quantitative evaluation indexes are adopted: cross correlation (CC) [49], spectral angle mapper (SAM) [50], root mean squared error (RMSE), and erreur relative globale adimensionnelle de synthèse (ERGAS) [51]. The CC and SAM indices measure the spatial and spectral distortion, respectively; a larger CC and a smaller SAM indicate a better fusion result. The RMSE and ERGAS are global indexes that measure both spatial and spectral performance; for both, smaller values indicate better results, with 0 being the optimal value.
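For reference, the sketch below gives standard formulations of these indices; the exact normalizations may differ from the implementations cited in [49,50,51].

```python
import numpy as np


def rmse(ref, fus):
    return np.sqrt(np.mean((ref - fus) ** 2))


def sam_degrees(ref, fus):
    """Mean spectral angle, in degrees, between the pixel spectra of two (M, N, B) cubes."""
    num = np.sum(ref * fus, axis=2)
    den = np.linalg.norm(ref, axis=2) * np.linalg.norm(fus, axis=2) + 1e-12
    return np.degrees(np.mean(np.arccos(np.clip(num / den, -1.0, 1.0))))


def ergas(ref, fus, ratio=4):
    """ERGAS for a low-to-high resolution ratio of `ratio` (4 in this paper)."""
    b = ref.shape[2]
    terms = [(rmse(ref[..., k], fus[..., k]) / (np.mean(ref[..., k]) + 1e-12)) ** 2
             for k in range(b)]
    return 100.0 / ratio * np.sqrt(np.mean(terms))
```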
To perform an objective fusion evaluation on the simulated hyperspectral datasets, the available HS image is used as the reference HS image. The simulated LRHS and HRPAN images are generated according to Wald's protocol [52,53]: the reference HS image is blurred and downsampled by a factor of 4 to obtain the simulated LRHS image, and the simulated PAN image is obtained by averaging the visible bands of the reference HS image.
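A minimal sketch of this simulation protocol, assuming a Gaussian blur (the paper does not state the kernel) and a given list of visible-band indices:

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def simulate_inputs(ref_hs, visible_bands, ratio=4, sigma=1.0):
    """Wald-protocol simulation: blur and decimate the reference HS cube to get
    the LRHS image, and average the visible bands to get the PAN image."""
    blurred = gaussian_filter(ref_hs.astype(float), sigma=(sigma, sigma, 0))
    lr_hs = blurred[::ratio, ::ratio, :]
    pan = ref_hs[:, :, visible_bands].mean(axis=2)
    return lr_hs, pan
```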
For the real dataset, the real LRHS and HRPAN images are available, but a high resolution reference HS image is not. To assess objective quality on real hyperspectral images, the real LRHS image serves as the reference: the available LRHS and HRPAN images are degraded, the two degraded images are fused, and the resulting fused image is compared with the real LRHS image.
The proposed HFWT method includes a tradeoff parameter $\varepsilon$ that regulates the amount of injected detail and thus the spatial performance. Its value is determined empirically by inspecting the fusion results under different settings; in this paper, $\varepsilon$ is set to 0.25, 0.05, and 0.2 for the Washington DC, Salinas scene, and Hyperion datasets, respectively.

4.2. Validity Discussion of the Open-Closing Denoising Operation

To verify the effectiveness of the open-closing HS image denoising operation, the proposed HFWT method was run on the Washington DC dataset with different denoising preprocessing: average filtering, Gaussian filtering, the opening operation, the closing operation, and the open-closing operation. Table 2 shows the fusion performance under each preprocessing. As outlined in Table 2, the HFWT method without HS image denoising has the worst fusion results, while every denoising variant improves on it, which demonstrates that HS image denoising preprocessing is meaningful and effective. Among them, the HFWT method with the open-closing operation achieves the best fusion performance, demonstrating that the open-closing operation is an effective HS image denoising preprocessing step.

4.3. Experiments on Simulated Hyperspectral Datasets

Figure 5 shows the fusion results for the Washington DC dataset: Figure 5(a1) shows the reference HS image, and the fused HS images of each method are displayed in Figure 5(b1–g1). Figure 5(a2) shows the enlarged subareas of the reference HS image, and the two enlarged subareas of each fused image are shown in Figure 5(b2–g2). The reference error image is shown in Figure 5(a3), and the error images between each fused HS image and the reference HS image are reported in Figure 5(b3–g3). Except for the first column, each column of Figure 5 corresponds to one method. Visually comparing the fused HS images with the reference, the fused result of the GS method suffers from serious spectral distortion; for example, its two enlarged subareas are severely distorted. The GFPCA approach generates fuzzy spatial details in some regions, such as the enlarged subareas shown in Figure 5(c2), because its spatial information is injected insufficiently. As depicted in Figure 5(d1,d2), the CNMF method enhances the spatial information of the fused image well, but slight spectral distortion appears on the roofs of the buildings. Closer inspection reveals that the HySure method generates some distortion in the circular building in the upper left corner. By contrast, the fused HS images obtained by the Bayesian and HFWT methods achieve superior performance in both spectral and spatial aspects. To further compare the fusion methods, the third row of Figure 5 shows the error images, i.e., the absolute differences of pixel values between each fused HS image and the reference HS image. The GS, GFPCA, and CNMF methods show larger differences, the HySure and Bayesian approaches generate relatively smaller differences, and the proposed HFWT approach shows the smallest differences in most areas, demonstrating its excellent fusion capacity.
Similarly, the fused results for the Salinas scene dataset are shown in Figure 6. Figure 6(a1–a3) show the reference HS image, its enlarged subarea, and the reference SAM image, respectively. Figure 6(b1–g1) in the first row show the pansharpened results of each algorithm, and Figure 6(b2–g2) in the second row display the corresponding enlarged subareas. The SAM images of each approach are shown in Figure 6(b3–g3); the reference SAM image of the enlarged subarea is shown in Figure 6(a4), and the SAM images of the enlarged subarea obtained by each method are shown in Figure 6(b4–g4). The spectral distortion caused by the GS method is very obvious, and its degree of spatial enhancement is also unacceptable, as depicted in Figure 6(b1,b2). Compared with the GS approach, the GFPCA method performs better in spectral quality; however, its fused HS image shows an indistinct area in the left region of Figure 6(c1). Despite its excellent spatial quality, the CNMF method generates significant spectral distortion in the triangular region in the lower half of Figure 6(d1). From the visual analysis, the HySure, Bayesian, and HFWT methods effectively improve spatial performance while maintaining spectral information, and the HFWT method shows better spectral quality than the HySure and Bayesian methods in some regions, such as the upper area of the enlarged subarea. The SAM images and the SAM images of the enlarged subareas in the third and fourth rows of Figure 6 further verify the fusion performance: the proposed HFWT method yields the lowest SAM values in most regions. These results demonstrate that the proposed HFWT algorithm performs well in both spatial and spectral aspects.
In addition to visual inspection, the performance of each algorithm on the Washington DC and Salinas scene datasets is analyzed quantitatively in Table 3, where the best result for each index is marked in bold. As can be seen from Table 3, the objective quantitative results are roughly consistent with the subjective qualitative effects. Consistent with the subjective results, the GS and GFPCA algorithms produce worse objective performance than the other algorithms. The HySure approach obtains the best RMSE value for the Washington DC dataset and the best ERGAS value for the Salinas scene dataset. Most of the remaining indexes are best for the proposed HFWT method: the SAM, CC, and ERGAS values for Washington DC, and the RMSE, SAM, and CC values for the Salinas scene.

4.4. Experiments on Real Hyperspectral Datasets

Figure 7 shows the pansharpened images of each method for the Hyperion dataset, confirming the fusion performance of the proposed HFWT method on a real dataset. Figure 7a–c show the real HS, real PAN, and interpolated HS images, respectively. The GS method, shown in Figure 7d, generates obvious spectral distortion, especially in the wharf area. In spite of good spatial improvement, the spatial details in the fused images obtained by the GS and HySure approaches are over-sharpened. The spectral quality of the GFPCA and Bayesian methods seems acceptable, but both perform poorly in the spatial aspect. By contrast, the subjective results of the CNMF and HFWT approaches are the best, with the HFWT method yielding better spatial quality than the CNMF method. The objective quality evaluation for the Hyperion dataset is presented in Table 4. As reported in Table 4, the HFWT method provides the best quantitative results in terms of the RMSE, SAM, CC, and ERGAS indices, indicating that it successfully maintains the spectral information of the original LRHS image while improving the spatial resolution.

4.5. Computational Complexity Analysis and Time Comparisons

The proposed HFWT algorithm consists of simple sequential statements, several non-nested loops, and one two-level nested loop (the statement implementing Equation (19), which is applied to each pixel). A simple sequential statement takes $O(1)$ time, a non-nested loop $O(n)$, and the two-level nested loop $O(n^2)$. By the summation rule of algorithm complexity, the total complexity is $O(n^2 + n + 1) = O(n^2)$. Running in $O(n^2)$ time, the proposed HFWT algorithm is polynomial-time and can be considered fast. The computing time (in seconds) of each method on the three datasets is shown in Table 5. All experiments were performed in MATLAB R2015b on a PC with an Intel Core i5-7300HQ CPU @ 2.50 GHz and 8 GB of memory. The GS and GFPCA methods are very efficient, but their fusion performance is unsatisfactory. The proposed HFWT method is faster than the CNMF algorithm and takes much less computing time than the HySure and Bayesian algorithms; its time cost is acceptable.

5. Conclusions

This paper presents a novel hyperspectral pansharpening method that combines homomorphic filtering with a weighted tensor matrix. The proposed HFWT algorithm introduces the open-closing morphological operation and homomorphic filtering to remove noise and to extract the spatial information of each band of an HS image, respectively. Moreover, we propose a weighted RMSE-based method to obtain the total spatial information of the HS image. To generate adequate spatial information from both the HS image and the corresponding PAN image, an optimized weighted tensor matrix based method is proposed: the weighted tensor matrix and its eigenvalues and eigenvectors are derived and analyzed to obtain the weighted gradient, and an optimization model is presented to acquire the integrated spatial information. Experiments on the Washington DC, Salinas scene, and Hyperion datasets demonstrate that the proposed method outperforms state-of-the-art methods in terms of both subjective and objective assessment.

Author Contributions

Methodology, J.Q., Y.L. and Q.D.; Software, J.Q. and W.D.; Supervision, Y.L. and Q.D.; Writing—original draft, J.Q.; Writing—review and editing, Y.L., Q.D., W.D. and B.X.

Funding

This work was supported in part by the National Natural Science Foundation of China (nos. 61571345, 91538101, 61501346, 61502367, and 61701360) and the 111 Project (B08038). It was also partially supported by the Yangtze River Scholar Bonus Scheme of China (no. CJT160102), the Ten Thousand Talent Program, the Natural Science Basic Research Plan in Shaanxi Province of China (no. 2016JQ6023), the China Scholarship Council program (201806960020), the Excellent Doctoral Thesis Fund of Xidian University, the Innovation Fund of Xidian University, and the Fundamental Research Funds for the Central Universities (JB182001).

Acknowledgments

The authors would like to thank the editors and the anonymous reviewers for their insightful comments and suggestions which have greatly improved this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Adam, E.; Mutanga, O.; Rugege, D. Multispectral and hyperspectral remote sensing for identification and mapping of wetland vegetation: A review. Wetl. Ecol. Manag. 2010, 18, 281–296. [Google Scholar] [CrossRef]
  2. Zhou, Y.; Feng, L.Y.; Hou, C.P.; Kung, S.Y. Hyperspectral and multispectral image fusion based on local low rank and coupled spectral unmixing. IEEE Trans. Geosci. Remote Sens. 2017, 55, 5997–6009. [Google Scholar] [CrossRef]
  3. Xu, Y.; Wu, Z.; Li, J.; Plaza, A.; Wei, Z. Anomaly detection in hyperspectral images based on low-rank and sparse representation. IEEE Trans. Geosci. Remote Sens. 2016, 54, 1990–2000. [Google Scholar] [CrossRef]
  4. Zhang, M.; Li, W.; Du, Q. Diverse region-based CNN for hyperspectral image classification. IEEE Trans. Image Process. 2018, 27, 2623–2634. [Google Scholar] [CrossRef]
  5. Xue, B.; Yu, C.; Wang, Y.; Song, M.; Li, S.; Wang, L.; Chen, H.; Chang, C.I. A subpixel target detection approach to hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 5093–5114. [Google Scholar] [CrossRef]
  6. Kang, X.; Duan, P.; Xiang, X.; Li, S.; Benediktsson, J.A. Detection and correction of mislabeled training samples for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2018, 56, 5673–5686. [Google Scholar] [CrossRef]
  7. Lv, Z.Y.; Shi, W.Z.; Zhou, X.C.; Benediktsson, J.A. Semi-automatic system for land cover change detection using bi-temporal remote sensing images. Remote Sens. 2017, 9, 1112. [Google Scholar] [CrossRef]
  8. Haboudane, D.; Miller, J.R.; Pattey, E.; Zarco-Tejada, P.; Ian, B.; Strachan, I. Hyperspectral vegetation indices and novel algorithms for predicting green LAI of crop canopies: Modeling and validation in the context of precision agriculture. Remote Sens. Environ. 2004, 90, 337–352. [Google Scholar] [CrossRef]
  9. Chabrillat, S.; Pinet, P.C.; Ceuleneer, G.; Johnson, P.E.; Mustard, J.F. Ronda peridotite massif: Methodology for its geological mapping and lithological discrimination from airborne hyperspectral data. Int. J. Remote Sens. 2000, 21, 2363–2388. [Google Scholar] [CrossRef]
  10. Bishop, C.A.; Liu, J.G.; Mason, P.J. Hyperspectral remote sensing for mineral exploration in Pulang, Yunnan Province, China. Int. J. Remote Sens. 2011, 32, 2409–2426. [Google Scholar] [CrossRef]
  11. Ellis, R.J.; Scott, P.W. Evaluation of hyperspectral remote sensing as a means of environmental monitoring in the St. Austell China clay (kaolin) region, Cornwall, UK. Remote Sens. Environ. 2004, 93, 118–130. [Google Scholar] [CrossRef]
  12. Wei, Q.; Dobigeon, N.; Tourneret, J.Y. Fast fusion of multiband images based on solving a Sylvester equation. IEEE Trans. Image Process. 2015, 24, 4109–4121. [Google Scholar] [CrossRef]
  13. Wei, Q.; Dobigeon, N.; Tourneret, J.Y. Bayesian fusion of multiband images. IEEE J. Sel. Top. Signal Process. 2015, 9, 1117–1127. [Google Scholar] [CrossRef]
  14. Simoes, M.; Dias, J.B.; Almeida, L.; Chanussot, J. A convex formulation for hyperspectral image superresolution via subspace-based regularization. IEEE Trans. Geosci. Remote Sens. 2015, 53, 3373–3388. [Google Scholar] [CrossRef]
  15. Loncan, L.; Almeida, L.; Dias, J.B.; Briottet, X.; Chanussot, J.; Dobigeon, N.; Fabre, S.; Liao, W.Z.; Licciardi, G.A.; Simoes, M.; Tourneret, J.; Veganzones, M.A.; Vivone, G.; Wei, Q.; Yokoya, N. Hyperspectral pansharpening: A review. IEEE Geosci. Remote Sens. Mag. 2015, 3, 27–46. [Google Scholar] [CrossRef]
  16. Yokoya, N.; Grohnfeldt, C.; Chanussot, J. Hyperspectral and multispectral data fusion: A comparative review of the recent literature. IEEE Geosci. Remote Sens. Mag. 2017, 5, 29–56. [Google Scholar] [CrossRef]
  17. Hoyer, P.O. Non negative sparse coding. In Proceedings of the IEEE Workshop Neural Network Signal Processing, Martigny, Switzerland, 6 September 2002; pp. 557–565. [Google Scholar]
  18. Yokoya, N.; Yairi, T.; Iwasaki, A. Coupled nonnegative matrix factorization unmixing for hyper-spectral and multispectral data fusion. IEEE Trans. Geosci. Remote Sens. 2012, 50, 528–537. [Google Scholar] [CrossRef]
  19. Carper, W.; Lillesand, T.M.; Kiefer, P.W. The use of Intensity-Hue-Saturation transformations for merging SPOT panchromatic and multispectral image data. Photogramm. Eng. Remote Sens. 1990, 56, 459–467. [Google Scholar]
  20. Tu, T.M.; Su, S.C.; Shyu, H.C.; Huang, P.S. A new look at IHS-like image fusion methods. Inf. Fusion. 2001, 2, 117–186. [Google Scholar] [CrossRef]
  21. Chavez, P.S.; Kwarteng, A.Y.A. Extracting spectral contrast in Landsat thematic mapper image data using selective principal component analysis. Photogramm. Eng. Remote Sens. 1989, 55, 339–348. [Google Scholar]
  22. Shettigara, V. A generalized component substitution technique for spatial enhancement of multispectral images using a higher resolution data set. Photogramm. Eng. Remote Sens. 1992, 58, 561–567. [Google Scholar]
  23. Laben, C.; Brower, B. Process for Enhancing the Spatial Resolution of Multispectral Imagery Using Pan-Sharpening. U.S. Patent 6,011,875, 4 January 2000. [Google Scholar]
  24. Aiazzi, B.; Baronti, S.; Selva, M. Improving component substitution pansharpening through multivariate regression of MS + Pan data. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3230–3239. [Google Scholar] [CrossRef]
  25. Thomas, C.; Ranchin, T.; Wald, L.; Chanussot, J. Synthesis of multispectral images to high spatial resolution: A critical review of fusion methods based on remote sensing physics. IEEE Trans. Geosci. Remote Sens. 2008, 46, 1301–1312. [Google Scholar] [CrossRef]
  26. Liu, J.G. Smoothing filter based intensity modulation: A spectral preserve image fusion technique for improving spatial details. Int. J. Remote Sens. 2000, 21, 3461–3472. [Google Scholar] [CrossRef]
  27. Burt, P.J.; Adelson, E.H. The Laplacian pyramid as a compact image code. IEEE Trans. Commun. 1983, 31, 532–540. [Google Scholar] [CrossRef]
  28. Aiazzi, B.; Alparone, L.; Baronti, S.; Garzelli, A.; Selva, M. MTF-tailored multiscale fusion of high-resolution MS and pan imagery. Photogramm. Eng. Remote Sens. 2006, 72, 591–596. [Google Scholar] [CrossRef]
  29. Vivone, G.; Restaino, R.; Mura, M.D.; Licciardi, G.; Chanussot, J. Contrast and error-based fusion schemes for multispectral image pansharpening. IEEE Geosci. Remote Sens. Lett. 2014, 11, 930–934. [Google Scholar] [CrossRef]
  30. Aiazzi, B.; Alparone, L.; Baronti, S.; Garzelli, A.; Selva, M. 25 years of pansharpening: A critical review and new developments. In Signal Image Processing for Remote Sensing, 2nd ed.; Chen, C.H., Ed.; CRC Press: Boca Raton, FL, USA, 2011; Chapter 28; pp. 533–548. [Google Scholar]
  31. Park, H.; Choi, J.; Park, N.; Choi, S. Sharpening the VNIR and SWIR bands of Sentinel-2A imagery through modified selected and synthesized band schemes. Remote Sens. 2017, 9, 80. [Google Scholar] [CrossRef]
  32. Vivone, G.; Addesso, P.; Restaino, R.; Dalla, M.; Chanussot, J. Pansharpening Based on Deconvolution for Multiband Filter Estimation. IEEE Trans. Geosci. Remote Sens. 2019, 57, 540–553. [Google Scholar] [CrossRef]
  33. Liao, W.; Huang, X.; Coillie, F.; Gautama, S.; Pizurica, A.; Philips, W.; Liu, H.; Zhu, T.; Shimoni, M.; Moser, G.; et al. Processing of multiresolution thermal hyperspectral and digital color data: Outcome of the 2014 IEEE GRSS data fusion contest. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 2984–2996. [Google Scholar] [CrossRef]
  34. Yang, J.; Zhao, Y.Q.; Chan, J.C.W. Hyperspectral and Multispectral Image Fusion via Deep Two-Branches Convolutional Neural Network. Remote Sens. 2018, 10, 800. [Google Scholar] [CrossRef]
  35. Zhang, Y.J.; Liu, C.; Sun, M.W.; Ou, Y.J. Pan-Sharpening Using an Efficient Bidirectional Pyramid Network. IEEE Trans. Geosci. Remote Sens. 2019, 1–15. [Google Scholar] [CrossRef]
  36. Palsson, F.; Sveinsson, J.R.; Ulfarsson, M.O. Multispectral and Hyperspectral Image Fusion Using a 3-D-Convolutional Neural Network. IEEE Geosci. Remote Sens. Lett. 2017, 14, 639–642. [Google Scholar] [CrossRef]
  37. Akl, A.; Yaacoub, C.; Donias, M.; Costa, J.D.; Germain, C. Texture synthesis using the structure tensor. IEEE Trans. Image Process. 2015, 24, 4082–4095. [Google Scholar] [CrossRef]
  38. Lefkimmiatis, S.; Osher, S. Nonlocal structure tensor functionals for image regularization. IEEE Trans. Computat. Imag. 2015, 1, 16–29. [Google Scholar] [CrossRef]
  39. Wu, Z.; Wang, Q.; Jin, J.; Shen, Y. Structure tensor total variation-regularized weighted nuclear norm minimization for hyperspectral image mixed denoising. Signal Process. 2017, 131, 202–219. [Google Scholar] [CrossRef]
  40. Tiwari, K.; Arya, D.K.; Badrinath, G.S.; Gupta, P. Designing palmprint based recognition system using local structure tensor and force field transformation for human identification. Neurocomputing. 2013, 116, 222–230. [Google Scholar] [CrossRef]
  41. Fan, C.N.; Zhang, F.Y. Homomorphic filtering based illumination normalization method for face recognition. Pattern Recogn. Lett. 2011, 32, 1468–1479. [Google Scholar] [CrossRef]
  42. Sreenivasan, K.R.; Havlicek, M.; Deshpande, G. Nonparametric hemodynamic deconvolution of fMRI using homomorphic filtering. IEEE Trans. Med. Imaging. 2014, 34, 1155–1163. [Google Scholar] [CrossRef] [PubMed]
  43. Xiao, L.; Li, C.; Wu, Z.; Wang, T. An enhancement method for X-ray image via fuzzy noise removal and homomorphic filtering. Neurocomputing. 2016, 195, 56–64. [Google Scholar] [CrossRef]
  44. Liu, Z.G.; Wang, J.L.; Liu, B. ECG Signal Denoising Based on Morphological Filtering. In Proceedings of the 2011 5th International Conference on Bioinformatics and Biomedical Engineering, Wuhan, China, 10–12 May 2011. [Google Scholar]
  45. Chen, S.H.; Wang, E.Y. Electromagnetic Radiation Signals of Coal or Rock Denoising Based on Morphological Filter. Procedia Engineer. 2011, 26, 588–594. [Google Scholar]
  46. Lai, W.; Huang, J.; Ahuja, N.; Yang, M. Deep laplacian pyramid networks for fast and accurate super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 624–632. [Google Scholar]
  47. Dong, W.Q.; Xiao, S.; Li, Y.S.; Qu, J.H. Hyperspectral pansharpening based on intrinsic image decomposition and weighted least squares filter. Remote Sens. 2018, 10, 445. [Google Scholar] [CrossRef]
  48. Mookambiga, A.; Gomathi, V. Comprehensive review on fusion techniques for spatial information enhancement in hyperspectral imagery. Multidimens. Syst. Signal Process. 2016, 27, 863–889. [Google Scholar] [CrossRef]
  49. Alparone, L.; Wald, L.; Chanussot, J.; Thomas, C.; Gamba, P.; Bruce, L. Comparison of pansharpening algorithms: Outcome of the 2006 GRS-S data-fusion contest. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3012–3021. [Google Scholar] [CrossRef]
  50. Vivone, G.; Alparone, L.; Chanussot, J.; Mura, M.D.; Garzelli, A.; Licciardi, G.A.; Restaino, R.; Wald, L. A critical comparison among pansharpening algorithms. IEEE Trans. Geosci. Remote Sens. 2015, 53, 2565–2586. [Google Scholar] [CrossRef]
  51. Du, Q.; Younan, N.H.; King, R.; Shah, V.P. On the performance evaluation of pan-sharpening techniques. IEEE Geosci. Remote Sens. Lett. 2007, 4, 518–522. [Google Scholar] [CrossRef]
  52. Wald, L.; Ranchin, T.; Mangolini, M. Fusion of satellite images of different spatial resolutions: Assessing the quality of resulting images. Photogramm. Eng. Remote Sens. 1997, 63, 691–699. [Google Scholar]
  53. Selva, M.; Santurri, L.; Baronti, S. On the Use of the Expanded Image in Quality Assessment of Pansharpened Images. IEEE Geosci. Remote Sens. Lett. 2018, 15, 1–5. [Google Scholar] [CrossRef]
Figure 1. Concept of hyperspectral pansharpening.
Figure 2. Schematic of the proposed homomorphic filtering and weighted tensor (HFWT) matrix-based hyperspectral pansharpening algorithm.
Figure 3. 3-D mesh of the high-pass filter.
Figure 4. Two eigenvalues at each pixel of the weighted tensor matrix. (a) Salinas scene hyperspectral image; (b) spatial information of the hyperspectral image $I_{\mathrm{HR}}$; (c) enhanced PAN image $I_s^{\mathrm{PAN}}$; (d) smaller eigenvalue; (e) larger eigenvalue.
Figure 5. Experimental results obtained by each method for the Washington DC dataset. (a1–a3) Reference image; (b1–b3) GS; (c1–c3) GFPCA; (d1–d3) CNMF; (e1–e3) HySure; (f1–f3) Bayesian; (g1–g3) HFWT. (First row (a1): reference HS image; first row (b1–g1): fused HS images; second row: enlarged subareas; third row (a3): reference error image; third row (b3–g3): error images between each fused HS image and the reference HS image.)
Figure 6. Experimental results obtained by each method for the Salinas scene dataset. (a1–a4) Reference image; (b1–b4) GS; (c1–c4) GFPCA; (d1–d4) CNMF; (e1–e4) HySure; (f1–f4) Bayesian; (g1–g4) HFWT. (First row (a1): reference HS image; first row (b1–g1): fused HS images; second row: enlarged subarea; third row (a3): reference SAM image; third row (b3–g3): SAM images between each fused HS image and the reference HS image; fourth row (a4): reference SAM image of the enlarged subarea; fourth row (b4–g4): SAM images of the enlarged subarea.)
Figure 7. Experimental results obtained by each method for the Hyperion dataset. (a) Real HS image; (b) real PAN image; (c) interpolated HS image; (d) GS; (e) GFPCA; (f) CNMF; (g) HySure; (h) Bayesian; (i) HFWT.
Table 1. Characteristics of the datasets.

| Dataset | Reference HS Size | Simulated Size | Reference HS Spatial Resolution (SR) | Simulated SR | Band Number | Spectral Range |
| --- | --- | --- | --- | --- | --- | --- |
| Washington DC | 200 × 200 | PAN 200 × 200; HS 50 × 50 | 3 m | PAN 3 m; HS 12 m | 191 | 0.4–2.5 μm |
| Salinas scene | 200 × 200 | PAN 200 × 200; HS 50 × 50 | 3.7 m | PAN 3.7 m; HS 18.5 m | 204 | 0.4–2.4 μm |

| Dataset | Real HS Size | Real PAN Size | Real HS SR | Real PAN SR | Band Number | Spectral Range |
| --- | --- | --- | --- | --- | --- | --- |
| Hyperion | 100 × 100 | 300 × 300 | 30 m | 10 m | 174 | 0.4–2.5 μm |
Table 2. Performance of the HFWT method with different HS image denoising processing. (The best values of each index are marked in bold.)

| Index | No Denoising | Average | Gaussian | Open | Closed | Open-Closing |
| --- | --- | --- | --- | --- | --- | --- |
| RMSE | 0.0125 | 0.0119 | 0.0123 | 0.0121 | 0.0123 | **0.0112** |
| SAM | 6.8506 | 6.8507 | 6.8506 | 6.8507 | 6.8506 | **6.8506** |
| CC | 0.9156 | 0.9176 | 0.9176 | 0.9175 | 0.9176 | **0.9176** |
| ERGAS | 25.5071 | 25.4812 | 25.4822 | 25.5052 | 25.4880 | **25.4804** |
Table 3. Objective quality evaluation of each method for the simulated datasets. (The best values of each index are marked in bold.)

| Dataset | Index | GS | GFPCA | CNMF | HySure | Bayesian | HFWT |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Washington DC | RMSE | 0.0114 | 0.0139 | 0.0116 | **0.0097** | 0.0117 | 0.0112 |
| | SAM | 7.1069 | 9.8467 | 7.6438 | 6.8601 | 7.2586 | **6.8506** |
| | CC | 0.8856 | 0.8179 | 0.8879 | 0.9018 | 0.8954 | **0.9176** |
| | ERGAS | 36.0732 | 37.7303 | 34.3126 | 26.5678 | 29.3497 | **25.4804** |
| Salinas scene | RMSE | 0.0426 | 0.2224 | 0.0162 | 0.0163 | 0.0167 | **0.0138** |
| | SAM | 3.7807 | 2.9814 | 1.7586 | 1.7039 | 1.8015 | **1.5460** |
| | CC | 0.8542 | 0.9429 | 0.9544 | 0.9583 | 0.9515 | **0.9625** |
| | ERGAS | 4.3301 | 3.0843 | 2.6346 | **2.4451** | 2.9079 | 2.5312 |
Table 4. Objective quality evaluation of each method for the real dataset. (The best values of each index are marked in bold.)

| Index | GS | GFPCA | CNMF | HySure | Bayesian | HFWT |
| --- | --- | --- | --- | --- | --- | --- |
| RMSE | 0.0426 | 0.0451 | 0.0368 | 0.0398 | 0.0387 | **0.0358** |
| SAM | 11.1037 | 15.6849 | 12.2755 | 12.8308 | 12.9605 | **9.1809** |
| CC | 0.9296 | 0.9241 | 0.9631 | 0.9443 | 0.9504 | **0.9821** |
| ERGAS | 15.3536 | 16.3011 | 11.1116 | 12.3441 | 12.2353 | **11.1109** |
Table 5. Computing time (seconds) of each method.

| Dataset | GS | GFPCA | CNMF | HySure | Bayesian | HFWT |
| --- | --- | --- | --- | --- | --- | --- |
| Washington DC | 1.1764 | 2.3455 | 8.8369 | 43.3926 | 70.5347 | 6.9068 |
| Salinas scene | 2.3953 | 4.8471 | 8.9498 | 55.4224 | 71.4972 | 7.1413 |
| Hyperion | 2.6618 | 7.5118 | 23.3787 | 117.1729 | 158.7151 | 10.6448 |
