Article

TNNG: Total Nuclear Norms of Gradients for Hyperspectral Image Prior

1 Information and Computer Science, Graduate School of Science and Engineering, Doshisha University, Kyoto 610-0394, Japan
2 Xacti Corporation, Osaka 531-6028, Japan
3 Faculty of Environmental Engineering, The University of Kitakyushu, Kitakyushu 808-0135, Japan
4 Faculty of Science and Engineering, Doshisha University, Kyoto 610-0394, Japan
* Author to whom correspondence should be addressed.
This paper is partially based on a conference paper submitted to the IEEE International Geoscience and Remote Sensing Symposium (IGARSS) 2020 Virtual Symposium.
Submission received: 16 January 2021 / Revised: 11 February 2021 / Accepted: 19 February 2021 / Published: 23 February 2021
(This article belongs to the Section Remote Sensing Image Processing)

Abstract

We introduce a novel regularization function for hyperspectral images (HSIs), which is based on the nuclear norms of gradient images. Unlike conventional low-rank priors, we achieve a gradient-based low-rank approximation by minimizing the sum of nuclear norms associated with rotated planes in the gradient of an HSI. Our method explicitly and simultaneously exploits the correlation in the spectral domain as well as the spatial domain. It exploits the low-rankness of a global region to enhance the dimensionality reduction by the prior. Since our method considers the low-rankness in the gradient domain, it detects anomalous variations more sensitively. Our method achieves high-fidelity image recovery using a single regularization function without the explicit use of any sparsity-inducing priors such as the $\ell_0$ norm, the $\ell_1$ norm, or total variation (TV). We also apply this regularization to a gradient-based robust principal component analysis and show its superiority in HSI decomposition. The proposed regularization is validated on a variety of HSI reconstruction/decomposition problems, where performance comparisons to state-of-the-art methods demonstrate its superior performance.

1. Introduction

Convex optimization techniques have gained broad interest in many hyperspectral image processing applications including image restoration [1,2,3,4,5,6,7,8], decomposition [9,10], unmixing [11,12,13], classification [14,15], and target detection [16]. The success of such tasks strongly depends on regularization, which is based on a priori information of a latent clear image. Total Variation (TV) is one of the exemplary models often used for regularization in convex optimization approaches [17,18,19,20,21]. Some convex optimization methods specializing in low-rank regularization have been proposed in recent years [22,23]; these use the spectral information directly and achieve high-quality image restoration.
In this paper, we propose a regularization scheme that directly and simultaneously considers correlation in the spatio-spectral domains. We name the regularization function Total Nuclear Norms of Gradients (TNNG); it calculates nuclear norms for all planes in the gradient domain of an HSI to exploit correlations in all spatio-spectral directions and obtains a low-rank approximation of its structure. The function utilizes the a priori property that each gradient possesses strong inter-band correlation, and it directly controls each gradient, which expresses smoothness in the spatial direction and correlation in the spectral direction. We also apply the TNNG regularization scheme to a Gradient-based Robust Principal Component Analysis (GRPCA) and show its superior performance in image decomposition.
There are many methods with low-rank priors [18,22,23], but there are few approaches that exploit low-rankness in global regions of the $xz$ and $yz$ planes ($x$, $y$, and $z$ indicate the horizontal, vertical, and spectral directions, respectively). Recently, a joint approach of low-rank restoration and non-local filtering has been proposed [24]. The method achieves high-fidelity restoration by incorporating subspace learning with the non-local filtering. However, this method works well only for image denoising tasks.
The efficiency of our method is due to the following three features:
  • Exploiting the low-rankness of a global region (i.e., a high-dimensional matrix) allows us to enhance the dimensionality reduction by the prior.
  • Inherently, HSIs have little correlation in oblique directions within the $xz$ and $yz$ planes. We efficiently exploit this property through our regularization.
  • Since our method considers the low-rankness in the gradient domain, not the low-rankness in the image domain, it more sensitively detects anomalous variations.
Another notable property of our regularization is that our method does not explicitly use any sparsity-inducing norms such as the $\ell_0$ or $\ell_1$ norms. Taking the nuclear norms on all the $xz$ and $yz$ planes, we implicitly reduce variations in the spatial domains as well. In the end, our simple regularization simultaneously exploits the low-rank property and sparsity in spatial variations as a single function. Due to this property, we can apply our regularization function to RPCA as well as to HSI restoration, and unlike conventional methods, our method is efficient for various applications in image restoration and decomposition.
We solve convex optimization problems using primal-dual splitting (PDS) [25] and apply it to various hyperspectral image processing tasks. Experimentally, we conduct HSI restoration, RPCA-based image decomposition, and HSI pansharpening problems as examples to confirm their performance.
This paper is organized as follows. After reviewing conventional methods for HSI processing in Section 2, we describe the details of the algorithm and the solutions for the proposed optimization problem in Section 3. In Section 4, we present several experiments on image reconstruction/decomposition to illustrate the superiority of the proposed method, and Section 5 discusses the computational cost. Section 6 presents a summary of this study and future prospects.

2. Related Work

Many restoration methods based on TV for multi-channel images have previously been proposed [17,18,19,20,21]. Yuan et al. [21] proposed a hyperspectral TV (HTV), which is an expansion of the TV model to HSI. They also proposed a spectral-spatial adaptive HTV (SSAHTV), which introduces adaptive weights to HTV. SSAHTV uses spectral information for its weighting and shows superior performance over HTV. The structure tensor TV (STV) [17] was proposed for multi-channel images and considers the low-rankness of gradient vectors in local regions. Furthermore, its arranged version (ASTV) was proposed in [18] and outperforms STV in some image restoration problems. These methods show high performance on HSI restoration, but they only consider correlation in the spatial domain explicitly. Meanwhile, spectral information is used only indirectly in these regularization schemes; therefore, they are relatively prone to excessive smoothing because of the substantial impact of the spatial properties. Although some of these methods consider spectral gradients, few exploit the low-rankness in the gradient domain with a single regularization function.
It is well known that there exist high correlations among HSI bands along the spectral direction, especially in aerial images, as each spectral signature can be well represented by a linear combination of a small number of pure spectral endmembers, which induces the low-rank property. Low-rank matrix recovery (LRMR) [23], which utilizes the low-rankness of local blocks in HSI, has been shown to have higher performance than SSAHTV and even VBM3D [26], which is one of the best methods for non-local noise reduction. Furthermore, the spatio-spectral TV (SSTV) was proposed in [27]; it applies TV to the HSI gradient in the spectral direction. It is a regularization scheme that indirectly considers spatial correlation through the application of TV to the spectral gradient. Although it is a simple method, SSTV is confirmed to be superior to LRMR. Contrary to STV and ASTV, noise tends to remain in the restoration results of LRMR and SSTV because of the weak influence of the spatial properties. Methods such as LRTV [22], HSSTV [28], and LRTDTV [29] attempt to improve the performance of STV, SSTV, and their variants by introducing new regularization terms. However, issues such as over-smoothing remain to be resolved.
Recent DNN-based approaches have been successful for 2D image restoration tasks such as super-resolution [30,31]. Although there are a few successful methods for HSI such as [32,33,34,35,36], the superiority of DNNs for HSI restoration is limited. This is due to the following reasons: (1) Acquiring high-quality, large HSI data sets is not straightforward. (2) DNN-based methods are sensitive to attributes such as the number of spectral bands, the spectral ranges of wavelengths, and the types of sensors, and such a domain mismatch in attributes between training and test data often degrades performance. On the other hand, regularization-based approaches like ours are more flexible and can be applied to many tasks, such as restoration, unmixing, and classification.

3. Proposed Method

3.1. Low-Rankness of Spectral Gradients

Letting the spatial coordinates be $x$, $y$, and the spectral coordinate be $z$, we define “front” as the direction in which we view the $xy$ planes along the $z$ axis. The “top” and “side” are defined as the directions in which we view the $xz$ and $yz$ planes, respectively. Here, we define $x \in \mathbb{R}^{n_v \times n_h \times B}$ to be an HSI with an image size of $n_v \times n_h$ and number of bands $B$. Furthermore, we define $G_v \in \mathbb{R}^{n_h \times B \times 3n_v}$ to be a combined 3D gradient image where the three gradient images for $x$ (w.r.t. the vertical, horizontal, and spectral directions), observed from the top view, are concatenated to form a 3D cuboid. Similarly, we define $G_h \in \mathbb{R}^{n_v \times B \times 3n_h}$ to be the 3D cuboid of the gradient images observed from the side view. We will explain the detailed formulation in Section 3.2.
We represent the sum of ranks for all single-structure planes in $G_v$ and $G_h$ as follows:
$$\sum_{i=1}^{3n_v} \mathrm{rank}\big(G_v^{(i)}\big) + \sum_{i=1}^{3n_h} \mathrm{rank}\big(G_h^{(i)}\big), \tag{1}$$
where $G_v^{(i)} \in \mathbb{R}^{n_h \times B}$ represents the $i$-th plane extracted from $G_v$ and, similarly, $G_h^{(i)} \in \mathbb{R}^{n_v \times B}$ is the $i$-th plane extracted from $G_h$. Given that $G_v$, which is observed from the top view, possesses size $B$ in the vertical direction and size $n_v$ in depth, the total number of planes $G_v^{(i)}$ is $3n_v$. Similarly, the total number of planes $G_h^{(i)}$ is $3n_h$.
Since (1) uses the rank, which takes discrete values and is a non-convex function, the minimization problem with (1) is NP-hard. Therefore, we incorporate the nuclear norm ($\|\cdot\|_* = \sum_k \sigma_k$), which is the optimal convex relaxation of the rank, and substitute
$$L(\mathbf{x}) = \sum_{i=1}^{3n_v} \big\| G_v^{(i)} \big\|_* + \sum_{i=1}^{3n_h} \big\| G_h^{(i)} \big\|_*, \tag{2}$$
for (1). We refer to this regularization as the Total Nuclear Norms of Gradients (TNNG). TNNG is used as a regularization to approximate $G_v$ and $G_h$ through a low-rank model.
Figure 1 shows the diagonal matrices $S^{(i)}$ of the image PaviaU, where the singular value decomposition ($G^{(i)} = U^{(i)} S^{(i)} V^{(i)\top}$) is performed on some $G_\bullet^{(i)}$ ($\bullet : v$ or $h$) of the HSI. The red lines show enlarged views of the top 15 singular values. $S^{(i)}$ possesses the singular values of $G^{(i)}$ as diagonal components. In any diagonal matrix $S^{(i)}$, we can observe that the energy is concentrated in a handful of singular values, and most of the values are close to 0. This observation shows that $G^{(i)}$ approximately forms a low-rank structure. In $S_h^{(i)}$ of the noisy HSI, there is energy dispersion, which violates the low-rank structure, while the energy is concentrated in a portion of $S_h^{(i)}$ in the restored HSI. Therefore, one can see that the restored image has a low-rank structure and more closely approximates the original HSI. As this property can be seen in most images, it is expected that the low-rank approximation in the gradient domain using TNNG will work for restoration.
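The following minimal NumPy sketch (our own illustration, not the authors' code) reproduces this kind of check on a synthetic HSI: it builds a low-rank spectral mixture, takes forward differences in the three directions, extracts one top-view plane of the spectral-gradient component, and reports how much energy its leading singular values capture. The mixture model, the periodic differences, and the array shapes are assumptions made only for this example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_v, n_h, B = 64, 64, 100

# Synthetic HSI: a few abundance maps mixed with random endmember spectra,
# giving the strong inter-band correlation discussed in the text.
abundances = rng.random((n_v, n_h, 4))
endmembers = rng.random((4, B))
hsi = abundances @ endmembers                    # shape (n_v, n_h, B)

# Forward differences (periodic) along the vertical, horizontal, spectral directions.
d_v = np.roll(hsi, -1, axis=0) - hsi
d_h = np.roll(hsi, -1, axis=1) - hsi
d_s = np.roll(hsi, -1, axis=2) - hsi

# One top-view plane: fix a vertical index and take the (n_h x B) slice of the
# spectral-gradient component.
plane = d_s[n_v // 2, :, :]

sv = np.linalg.svd(plane, compute_uv=False)
energy = np.cumsum(sv**2) / np.sum(sv**2)
print("top-5 singular values:", np.round(sv[:5], 3))
print("energy captured by the top 5:", round(float(energy[4]), 4))
```

For a low-rank mixture such as this one, nearly all of the energy is concentrated in the first few singular values, which is exactly the behavior shown in Figure 1.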

3.2. TNNG Regularization

In this section, we design $G_v$ and $G_h$, and specifically formulate TNNG. We redefine an HSI with an image size of $N = n_v \cdot n_h$ and number of bands $B$ as a vector $\mathbf{x} = [\mathbf{x}_1^\top, \ldots, \mathbf{x}_B^\top]^\top \in \mathbb{R}^{NB}$. A single plane $\mathbf{x}_j \in \mathbb{R}^{N}$ ($j = 1, \ldots, B$) represents the image of the $j$-th band of $\mathbf{x}$, and $\mathbf{x}_j^\top$ represents the transposed $\mathbf{x}_j$.
We define the linear operator $D$ to compute the spatio-spectral gradients. Let the vertical and horizontal difference operators be $D_v, D_h \in \mathbb{R}^{NB \times NB}$, and let the spectral difference operator $D_s \in \mathbb{R}^{NB \times NB}$ be
$$D_s = \begin{bmatrix} -I & I & O & \cdots & O \\ O & -I & I & \cdots & O \\ \vdots & & \ddots & \ddots & \vdots \\ O & \cdots & O & -I & I \\ I & O & \cdots & O & -I \end{bmatrix}, \tag{3}$$
where $I \in \mathbb{R}^{N \times N}$ is an identity matrix, and $O \in \mathbb{R}^{N \times N}$ is a zero matrix. Using these single-direction operators, we define the spatio-spectral difference operator $D \in \mathbb{R}^{3NB \times NB}$ as follows:
$$D = [\, D_v^\top \;\; D_h^\top \;\; D_s^\top \,]^\top. \tag{4}$$
Then, we calculate the spatio-spectral gradients of the HSI, $D\mathbf{x} \in \mathbb{R}^{3NB}$. This procedure is illustrated in the left part of Figure 2. $D_v\mathbf{x}$, $D_h\mathbf{x}$, and $D_s\mathbf{x} \in \mathbb{R}^{NB}$ represent the vectorized versions of the three 3D components (vertical, horizontal, and spectral, respectively) of the gradient image.
Next, we introduce two operators $\mathbf{P}_v, \mathbf{P}_h \in \mathbb{R}^{3NB \times 3NB}$, which rearrange pixels so that the upper and side planes of $D\mathbf{x}$ are located in the front:
$$\mathbf{P}_v = \begin{bmatrix} P_v & O & O \\ O & P_v & O \\ O & O & P_v \end{bmatrix}, \quad \mathbf{P}_h = \begin{bmatrix} P_h & O & O \\ O & P_h & O \\ O & O & P_h \end{bmatrix}, \tag{5}$$
where $P_v$ is a matrix that rotates each of $D_v\mathbf{x}$, $D_h\mathbf{x}$, and $D_s\mathbf{x}$ such that the $yz$, $xy$, and $zx$ planes face in the directions of “front”, “side”, and “top”, respectively, and similarly $P_h$ rotates the cuboids such that the $zx$, $xy$, and $yz$ planes face in the directions of “front”, “side”, and “top”, respectively (see the right part of Figure 2). In addition, we use the operator $\Gamma$, which converts a vector into a 3D cuboid, to rewrite the vectors $\mathbf{P}_v D\mathbf{x}$ and $\mathbf{P}_h D\mathbf{x} \in \mathbb{R}^{3NB}$ as follows:
$$G_v = \Gamma(\mathbf{P}_v D\mathbf{x}) \in \mathbb{R}^{n_h \times B \times 3n_v}, \quad G_h = \Gamma(\mathbf{P}_h D\mathbf{x}) \in \mathbb{R}^{n_v \times B \times 3n_h}. \tag{6}$$
The right side of Figure 2 shows a diagram of how $G_v$ and $G_h$ are designed using $D$, $\mathbf{P}_v$, and $\mathbf{P}_h$. Rewriting (2) using $D$, $\mathbf{P}_v$, and $\mathbf{P}_h$ defined by (4) and (5), TNNG in (2) is expressed as follows:
$$L(\mathbf{x}) = \sum_{i=1}^{3n_v} \big\| \Gamma(\mathbf{P}_v D\mathbf{x})^{(i)} \big\|_* + \sum_{i=1}^{3n_h} \big\| \Gamma(\mathbf{P}_h D\mathbf{x})^{(i)} \big\|_*, \tag{7}$$
where $\|\Gamma(\mathbf{P}_v D\mathbf{x})^{(i)}\|_*$ represents the nuclear norm of the $i$-th plane in the 3D cuboid $\Gamma(\mathbf{P}_v D\mathbf{x})$.

3.3. HSI Restoration Model

In this section, we formulate an image restoration problem. The HSI observation model is represented by the following equation using an original image $\mathbf{x} \in \mathbb{R}^{NB}$ and an observed image $\mathbf{y} \in \mathbb{R}^{K}$:
$$\mathbf{y} = A\mathbf{x} + \mathbf{n}, \tag{8}$$
where $A \in \mathbb{R}^{K \times NB}$ ($K \le NB$) is the linear operator that represents the deterioration (for example, it represents random sampling in the case of compressed sensing [37,38]), and $\mathbf{n} \in \mathbb{R}^{K}$ represents additive noise. Based on the observation model, the convex optimization problem for HSI restoration using TNNG can be formulated as follows:
$$\min_{\mathbf{x}} \; \lambda L(\mathbf{x}) + F_{\mathbf{y}}(A\mathbf{x}) \quad \mathrm{s.t.} \; \mathbf{x} \in C. \tag{9}$$
Here, $\lambda$ is the weight of TNNG, $C = [\rho_{\min}, \rho_{\max}]^{NB}$ represents the box constraint of the pixel values (a closed convex set), and $\rho_{\min}$, $\rho_{\max}$ represent the minimum and maximum pixel values taken by $\mathbf{x}$, respectively. In addition, $F_{\mathbf{y}}$ is a data-fidelity function, and it is necessary to select a suitable $F_{\mathbf{y}}$ depending on the distribution of the observed noise. We assume that the additive noise follows a Gaussian distribution, and thus we use the $\ell_2$ norm to express the convex optimization problem of Problem (9) as follows:
$$\min_{\mathbf{x}} \; \lambda L(\mathbf{x}) + \frac{1}{2}\|A\mathbf{x} - \mathbf{y}\|_2^2 \quad \mathrm{s.t.} \; \mathbf{x} \in C. \tag{10}$$

3.4. Gradient-Based Robust Principal Component Analysis

We apply TNNG to 3D Robust Principal Component Analysis (RPCA). RPCA [39] has been used in many computer vision and image processing applications. The goal of RPCA in the case of 2D images is to decompose an image matrix $Y \in \mathbb{R}^{M \times N}$ into a low-rank component $L \in \mathbb{R}^{M \times N}$ and a sparse residual $S \in \mathbb{R}^{M \times N}$ by solving the following problem:
$$\min_{L, S} \; \lambda \|L\|_* + \|S\|_1 \quad \mathrm{s.t.} \; Y = L + S. \tag{11}$$
We extend this to HSI decomposition by applying TNNG as follows:
$$\min_{\mathbf{x}, \mathbf{s}} \; \lambda L(\mathbf{x}) + \|\mathbf{s}\|_1 \quad \mathrm{s.t.} \; \mathbf{y} = \mathbf{x} + \mathbf{s}, \; \mathbf{x}, \mathbf{s} \in C. \tag{12}$$
By solving (12), we achieve the decomposition of the observed measurement $\mathbf{y} \in \mathbb{R}^{NB}$ as $\mathbf{y} = \mathbf{x} + \mathbf{s}$, where $\mathbf{x} \in \mathbb{R}^{NB}$ is low-rank in the gradient domain, and $\mathbf{s} \in \mathbb{R}^{NB}$ is the sparse component. The two constraints in (12) guarantee the exact reconstruction and a reasonable range of pixel values. Unlike the existing 3D RPCA methods [40,41], we analyze the low-rankness of an HSI in the gradient domain. As it is gradient-based, it also has the capability of excluding sparse anomalies more sensitively, so the performance of the decomposition is expected to improve. We call this Gradient-based RPCA (GRPCA). This problem is also a convex optimization, and thus a global optimum can be obtained efficiently by convex optimization algorithms.

3.5. HSI Pansharpening Model

Pansharpening is a technique for generating an HSI with both high spatial and high spectral resolution by combining a panchromatic image (pan image) with high spatial resolution and an HSI with high spectral resolution. It is important in the fields of remote sensing and earth observation.
Let $\mathbf{x} \in \mathbb{R}^{NB}$ be a true HSI with high spatial resolution, $\mathbf{y}_{hs} \in \mathbb{R}^{N'B}$ ($N' < N$) be an observed HSI with low spatial resolution, and $\mathbf{y}_{pan} \in \mathbb{R}^{N}$ be an observed gray-scale pan image. The observation model of the HSI for pansharpening can be formulated as follows:
$$\mathbf{y}_{hs} = SB\mathbf{x} + \mathbf{n}_{hs}, \quad \mathbf{y}_{pan} = R\mathbf{x} + \mathbf{n}_{pan}, \tag{13}$$
where $B \in \mathbb{R}^{NB \times NB}$ and $S \in \mathbb{R}^{N'B \times NB}$ represent the lowpass operator and the sub-sampling operator, respectively. The matrix $R \in \mathbb{R}^{N \times NB}$ represents an averaging operator. In our setting, we simply average all bands to make a gray-scale image. $\mathbf{n}_{hs} \in \mathbb{R}^{N'B}$ and $\mathbf{n}_{pan} \in \mathbb{R}^{N}$ represent additive noise.
The goal of the pansharpening problem is to recover the image $\mathbf{x}$ with full spatial and spectral resolution from the two observed images $\mathbf{y}_{hs}$ and $\mathbf{y}_{pan}$. Using the proposed regularization defined by (7), we adopt the following convex optimization problem to estimate $\mathbf{x}$:
$$\min_{\mathbf{x}} \; L(\mathbf{x}) \quad \text{subject to} \;\; \|SB\mathbf{x} - \mathbf{y}_{hs}\|_2 \le \epsilon_{hs}, \;\; \|R\mathbf{x} - \mathbf{y}_{pan}\|_2 \le \epsilon_{pan}, \;\; \mathbf{x} \in [0, 1]^{NB}, \tag{14}$$
where $\epsilon_{hs}$ and $\epsilon_{pan}$ are user-defined parameters that control the fidelity of $\mathbf{x}$.
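For illustration, the following sketch implements one possible instance of the observation operators in (13); the Gaussian lowpass filter, the subsampling factor, and the noise levels are our assumptions for the example and are not taken from the paper (which only specifies that $R$ averages all bands).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade_hs(x: np.ndarray, factor: int = 4, sigma: float = 1.5) -> np.ndarray:
    """S B x: band-wise spatial lowpass followed by subsampling by `factor`."""
    blurred = gaussian_filter(x, sigma=(sigma, sigma, 0))   # no filtering across bands
    return blurred[::factor, ::factor, :]

def degrade_pan(x: np.ndarray) -> np.ndarray:
    """R x: average all spectral bands to obtain a gray-scale pan image."""
    return x.mean(axis=2)

rng = np.random.default_rng(0)
x_true = rng.random((64, 64, 100))
y_hs = degrade_hs(x_true) + 0.05 * rng.standard_normal((16, 16, 100))
y_pan = degrade_pan(x_true) + 0.01 * rng.standard_normal((64, 64))
```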

3.6. Optimization

In this study, we adopt primal-dual splitting (PDS) [25] to solve the convex optimization problems in (10), (12), and (14). PDS is an iterative algorithm for solving a convex optimization problem of the form
$$\min_{\mathbf{x}} \; F(\mathbf{x}) + G(\mathbf{x}) + H(L\mathbf{x}), \tag{15}$$
where $F$ represents a differentiable convex function whose gradient is $\beta$-Lipschitz continuous, $G$ and $H$ represent non-smooth convex functions whose proximity operators can be efficiently calculated, and $L$ is a linear operator. The proximity operator [42,43] is defined, using a lower semi-continuous convex function $\psi$ and an index $\gamma > 0$, as
$$\mathrm{prox}_{\gamma \psi}(\mathbf{v}) = \arg\min_{\mathbf{z}} \; \psi(\mathbf{z}) + \frac{1}{2\gamma}\|\mathbf{z} - \mathbf{v}\|_2^2. \tag{16}$$
The PDS algorithm for solving Problem (15) is given as follows [25,44,45]:
$$\begin{aligned} \mathbf{x}^{(k+1)} &= \mathrm{prox}_{\gamma_1 G}\big(\mathbf{x}^{(k)} - \gamma_1(\nabla F(\mathbf{x}^{(k)}) + L^{\top}\mathbf{z}^{(k)})\big), \\ \mathbf{v} &\leftarrow \mathbf{z}^{(k)} + \gamma_2 L(2\mathbf{x}^{(k+1)} - \mathbf{x}^{(k)}), \\ \mathbf{z}^{(k+1)} &= \mathbf{v} - \gamma_2\, \mathrm{prox}_{\frac{1}{\gamma_2}H}\Big(\frac{1}{\gamma_2}\mathbf{v}\Big), \end{aligned} \tag{17}$$
where $\nabla F$ is the gradient of $F$, $\mathbf{x}$ is the primal variable, $\mathbf{z}$ is the dual variable of $L\mathbf{x}$, $\mathbf{v}$ is an intermediate variable used to compute $\mathbf{z}$, and the superscript $(k)$ indicates the $k$-th iteration. The algorithm converges to an optimal solution of Problem (15) by iterating (17) under appropriate conditions on $\gamma_1$ and $\gamma_2$.
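A generic form of the iteration (17) can be written as the short Python loop below; `grad_F`, `prox_G`, `prox_H`, `L_op`, and `L_adj` are placeholders supplied by the specific problem, and the step sizes are assumed to satisfy the convergence conditions of [25], which this sketch does not verify.

```python
import numpy as np

def pds(x0, z0, grad_F, prox_G, prox_H, L_op, L_adj,
        gamma1, gamma2, max_iter=1000, tol=1e-5):
    """Generic primal-dual splitting iteration corresponding to Eq. (17)."""
    x, z = x0.copy(), z0.copy()
    for _ in range(max_iter):
        x_new = prox_G(x - gamma1 * (grad_F(x) + L_adj(z)), gamma1)
        v = z + gamma2 * L_op(2.0 * x_new - x)
        z = v - gamma2 * prox_H(v / gamma2, 1.0 / gamma2)
        # relative change of the primal variable as the stopping criterion
        if np.linalg.norm(x_new - x) <= tol * max(np.linalg.norm(x_new), 1e-12):
            x = x_new
            break
        x = x_new
    return x
```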
In this paper, we explain only the procedure for solving Problem (10), but Problems (12) and (14) can be solved in a similar way; the respective algorithms are shown in Algorithm 1 and Algorithm 2. To apply PDS to Problem (10), we define the indicator function $\iota_C$ of the closed convex set $C$ as follows:
$$\iota_C(\mathbf{x}) = \begin{cases} 0, & \mathbf{x} \in C, \\ +\infty, & \text{otherwise}. \end{cases} \tag{18}$$
We use this indicator function to turn Problem (10) into a convex optimization problem with no explicit constraints as follows:
$$\min_{\mathbf{x}} \; \lambda L(\mathbf{x}) + \frac{1}{2}\|A\mathbf{x} - \mathbf{y}\|_2^2 + \iota_C(\mathbf{x}). \tag{19}$$
Problem (19) is formed by convex functions allowing efficient calculation of their proximity operators. To represent (19) in the same form as Problem (15), to which PDS can be applied, we assign each term of Problem (19) to $F$, $G$, and $H$ as follows:
$$F(\mathbf{x}) = \frac{1}{2}\|A\mathbf{x} - \mathbf{y}\|_2^2, \quad G(\mathbf{x}) = \iota_C(\mathbf{x}), \quad H(L\mathbf{x}) = \lambda L(\mathbf{x}) = \lambda \Big( \sum_{i=1}^{3n_v} \big\|\Gamma(\mathbf{P}_v D\mathbf{x})^{(i)}\big\|_* + \sum_{i=1}^{3n_h} \big\|\Gamma(\mathbf{P}_h D\mathbf{x})^{(i)}\big\|_* \Big). \tag{20}$$
The function $F : \mathbb{R}^{NB} \to \mathbb{R}$ is formed only by the differentiable $\ell_2$ norm, and thus its gradient $\nabla F$ is given by
$$\nabla F(\mathbf{x}) = A^{\top}(A\mathbf{x} - \mathbf{y}). \tag{21}$$
The function $G : \mathbb{R}^{NB} \to \mathbb{R} \cup \{\infty\}$ is the indicator function $\iota_C$. The operator $\mathrm{prox}_{\gamma_1 G}$ in (17) becomes $\mathrm{prox}_{\gamma_1 \iota_C} : \mathbb{R}^{NB} \to \mathbb{R}^{NB}$, the projection onto the box constraint. Writing this projection as $\mathrm{prox}_{\gamma_1 \iota_C}(\mathbf{x}) = P_C(\mathbf{x})$, the projection $P_C(\mathbf{x})$ is given, for $i = 1, \ldots, NB$, as follows:
$$[P_C(\mathbf{x})]_i = \begin{cases} \rho_{\min}, & \text{if } x_i < \rho_{\min}, \\ x_i, & \text{if } \rho_{\min} \le x_i \le \rho_{\max}, \\ \rho_{\max}, & \text{if } x_i > \rho_{\max}. \end{cases} \tag{22}$$
We perform this operation separately for each element $x_i$ of $\mathbf{x}$. The function $H : \mathbb{R}^{3NB + 3NB} \to \mathbb{R}$ corresponds to the regularization function, and the dual variable $\mathbf{z}$ and linear operator $L$ are given as follows:
$$\mathbf{z} = [\mathbf{z}_1^\top, \mathbf{z}_2^\top]^\top, \quad H(\mathbf{z}) = \lambda\Big(\sum_{i=1}^{3n_v} \big\|\Gamma(\mathbf{z}_1)^{(i)}\big\|_* + \sum_{i=1}^{3n_h} \big\|\Gamma(\mathbf{z}_2)^{(i)}\big\|_*\Big), \quad L : \mathbb{R}^{NB} \to \mathbb{R}^{3NB + 3NB}, \; \mathbf{x} \mapsto (\mathbf{P}_v D\mathbf{x}, \mathbf{P}_h D\mathbf{x}). \tag{23}$$
Here, $\mathbf{z}_1$ and $\mathbf{z}_2$ are the dual variables corresponding to $\mathbf{P}_v D\mathbf{x}$ and $\mathbf{P}_h D\mathbf{x}$, respectively. Given that TNNG uses nuclear norms, $\mathrm{prox}_{\frac{1}{\gamma_2}H}$ performs soft-thresholding on the singular values of each matrix $G^{(i)} = \Gamma(\mathbf{P}_\bullet D\mathbf{x})^{(i)}$ ($\bullet : v$ or $h$) in (7). Let the singular value decomposition of $G^{(i)}$ be $G^{(i)} = U^{(i)} S^{(i)} V^{(i)\top}$. Then the proximity operator $\mathrm{prox}_{\frac{1}{\gamma_2}\|\cdot\|_*}$ is given as follows:
$$\mathrm{prox}_{\frac{1}{\gamma_2}\|\cdot\|_*}\big(G^{(i)}\big) = U^{(i)} \tilde{S}^{(i)} V^{(i)\top}, \quad \tilde{S}^{(i)} = \mathrm{diag}\Big(\max\Big\{\sigma_1^{(i)} - \tfrac{1}{\gamma_2}, 0\Big\}, \ldots, \max\Big\{\sigma_M^{(i)} - \tfrac{1}{\gamma_2}, 0\Big\}\Big), \tag{24}$$
where $\mathrm{diag}(\cdot)$ represents a diagonal matrix with the given diagonal elements, and $M$ is the number of singular values.
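In code, (24) is a singular-value soft-thresholding. A small NumPy sketch (ours, with the threshold passed as an explicit argument) is:

```python
import numpy as np

def prox_nuclear(G: np.ndarray, tau: float) -> np.ndarray:
    """Soft-threshold the singular values of a matrix G by tau, cf. Eq. (24)."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt
```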
Based on the above discussion, we can solve the convex optimization problem of Problem (10) using PDS, whose steps are shown in Algorithm 3. The calculation of the projection $P_C$ (step 4) is given in (22), and the proximity operator of the nuclear norms (steps 7 and 8) is given by (24). We set the stopping criterion to $\|\mathbf{x}^{(k+1)} - \mathbf{x}^{(k)}\| / \|\mathbf{x}^{(k+1)}\| < 10^{-5}$.
Algorithm 1 PDS Algorithm for Solving Problem (12)
1: input: $\mathbf{x}^{(0)}$, $\mathbf{s}^{(0)}$
2: set: $\gamma_1$, $\gamma_2$ and initial values for $\mathbf{z}_1^{(0)}$, $\mathbf{z}_2^{(0)}$, $\mathbf{z}_3^{(0)}$, $\mathbf{z}_4^{(0)}$ are given.
3: while a stopping criterion is not satisfied do
4: $\mathbf{x}^{(k+1)} = P_C\{\mathbf{x}^{(k)} - \gamma_1 (D^{\top}(\mathbf{P}_v^{\top}\mathbf{z}_1^{(k)} + \mathbf{P}_h^{\top}\mathbf{z}_2^{(k)}) + \mathbf{z}_4^{(k)})\}$;
5: $\mathbf{s}^{(k+1)} = P_C\{\mathbf{s}^{(k)} - \gamma_1 (\mathbf{z}_3^{(k)} + \mathbf{z}_4^{(k)})\}$;
6: $\mathbf{v}_1 \leftarrow \mathbf{z}_1^{(k)} + \gamma_2 \mathbf{P}_v D (2\mathbf{x}^{(k+1)} - \mathbf{x}^{(k)})$;
7: $\mathbf{v}_2 \leftarrow \mathbf{z}_2^{(k)} + \gamma_2 \mathbf{P}_h D (2\mathbf{x}^{(k+1)} - \mathbf{x}^{(k)})$;
8: $\mathbf{v}_3 \leftarrow \mathbf{z}_3^{(k)} + \gamma_2 (2\mathbf{s}^{(k+1)} - \mathbf{s}^{(k)})$;
9: $\mathbf{v}_4 \leftarrow \mathbf{z}_4^{(k)} + \gamma_2 \big(2(\mathbf{x}^{(k+1)} + \mathbf{s}^{(k+1)}) - (\mathbf{x}^{(k)} + \mathbf{s}^{(k)})\big)$;
10: $\mathbf{z}_1^{(k+1)} = \mathbf{v}_1 - \gamma_2\, \mathrm{prox}_{\frac{1}{\gamma_2}\|\cdot\|_*}\big(\tfrac{1}{\gamma_2}\mathbf{v}_1\big)$;
11: $\mathbf{z}_2^{(k+1)} = \mathbf{v}_2 - \gamma_2\, \mathrm{prox}_{\frac{1}{\gamma_2}\|\cdot\|_*}\big(\tfrac{1}{\gamma_2}\mathbf{v}_2\big)$;
12: $\mathbf{z}_3^{(k+1)} = \mathbf{v}_3 - \gamma_2\, \mathrm{prox}_{\frac{1}{\gamma_2}\|\cdot\|_1}\big(\tfrac{1}{\gamma_2}\mathbf{v}_3\big)$;
13: $\mathbf{z}_4^{(k+1)} = \mathbf{v}_4 - \gamma_2\, \mathrm{prox}_{\frac{1}{\gamma_2}\iota_{\{\mathbf{y}\}}}\big(\tfrac{1}{\gamma_2}\mathbf{v}_4\big)$;
14: Update iteration: $k \leftarrow k + 1$;
15: end while
16: output: $\mathbf{x}^{(k+1)}$
We turn Problem (12) into a convex optimization problem with no explicit constraints as follows:
$$\min_{\mathbf{x}, \mathbf{s}} \; \lambda L(\mathbf{x}) + \|\mathbf{s}\|_1 + \iota_{\{\mathbf{y}\}}(\mathbf{x} + \mathbf{s}) + \iota_C(\mathbf{x}) + \iota_C(\mathbf{s}). \tag{25}$$
Problem (25) is formed by convex functions allowing efficient calculation of their proximity operators. To represent (25) in the same form as Problem (15), to which PDS can be applied, we assign each term of Problem (25) to $F$, $G$, and $H$ as follows:
$$F(\mathbf{x}, \mathbf{s}) = 0, \quad G(\mathbf{x}, \mathbf{s}) = \iota_C(\mathbf{x}) + \iota_C(\mathbf{s}), \quad H(L(\mathbf{x}, \mathbf{s})) = \lambda\Big(\sum_{i=1}^{3n_v} \big\|\Gamma(\mathbf{P}_v D\mathbf{x})^{(i)}\big\|_* + \sum_{i=1}^{3n_h} \big\|\Gamma(\mathbf{P}_h D\mathbf{x})^{(i)}\big\|_*\Big) + \|\mathbf{s}\|_1 + \iota_{\{\mathbf{y}\}}(\mathbf{x} + \mathbf{s}). \tag{26}$$
Step 12 of Algorithm 1 uses the proximity operator $\mathrm{prox}_{\frac{1}{\gamma_2}\|\cdot\|_1}$, which is given element-wise as
$$[\mathrm{prox}_{\gamma\|\cdot\|_1}(\mathbf{x})]_i = \mathrm{sgn}(x_i)\max\{|x_i| - \gamma, 0\}, \tag{27}$$
for $i = 1, \ldots, NB$, where $\mathrm{sgn}(\cdot)$ denotes the sign of its argument. The proximity operator used in step 13, $\mathrm{prox}_{\gamma \iota_{\{\mathbf{y}\}}}(\cdot)$, simply returns $\mathbf{y}$.
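In NumPy, the element-wise soft-thresholding (27) and the trivial proximity operator of $\iota_{\{\mathbf{y}\}}$ look as follows (a hedged sketch, not the authors' code):

```python
import numpy as np

def prox_l1(x: np.ndarray, gamma: float) -> np.ndarray:
    """Element-wise soft-thresholding, cf. Eq. (27)."""
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

def prox_point_indicator(_: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Proximity operator of the indicator of the single point {y}: always returns y."""
    return y
```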
We turn Problem (14) into a convex optimization problem with no explicit constraints as follows:
$$\min_{\mathbf{x}} \; L(\mathbf{x}) + \iota_{B_{2,\varepsilon_{hs}}^{\mathbf{y}_{hs}}}(SB\mathbf{x}) + \iota_{B_{2,\varepsilon_{pan}}^{\mathbf{y}_{pan}}}(R\mathbf{x}) + \iota_C(\mathbf{x}). \tag{28}$$
Problem (28) is formed by convex functions allowing efficient calculation of their proximity operators. To represent (28) in the same form as Problem (15), to which PDS can be applied, we assign each term of Problem (28) to $F$, $G$, and $H$ as follows:
$$F(\mathbf{x}) = 0, \quad G(\mathbf{x}) = \iota_C(\mathbf{x}), \quad H(L\mathbf{x}) = \sum_{i=1}^{3n_v} \big\|\Gamma(\mathbf{P}_v D\mathbf{x})^{(i)}\big\|_* + \sum_{i=1}^{3n_h} \big\|\Gamma(\mathbf{P}_h D\mathbf{x})^{(i)}\big\|_* + \iota_{B_{2,\varepsilon_{hs}}^{\mathbf{y}_{hs}}}(SB\mathbf{x}) + \iota_{B_{2,\varepsilon_{pan}}^{\mathbf{y}_{pan}}}(R\mathbf{x}). \tag{29}$$
Algorithm 2 PDS Algorithm for Solving Problem (14)
1: input: $\mathbf{x}^{(0)}$
2: set: $\gamma_1$, $\gamma_2$ and initial values for $\mathbf{z}_1^{(0)}$, $\mathbf{z}_2^{(0)}$, $\mathbf{z}_3^{(0)}$, $\mathbf{z}_4^{(0)}$ are given.
3: while a stopping criterion is not satisfied do
4: $\mathbf{x}^{(k+1)} = P_C\{\mathbf{x}^{(k)} - \gamma_1 (D^{\top}(\mathbf{P}_v^{\top}\mathbf{z}_1^{(k)} + \mathbf{P}_h^{\top}\mathbf{z}_2^{(k)}) + B^{\top}S^{\top}\mathbf{z}_3^{(k)} + R^{\top}\mathbf{z}_4^{(k)})\}$;
5: $\mathbf{v}_1 \leftarrow \mathbf{z}_1^{(k)} + \gamma_2 \mathbf{P}_v D (2\mathbf{x}^{(k+1)} - \mathbf{x}^{(k)})$;
6: $\mathbf{v}_2 \leftarrow \mathbf{z}_2^{(k)} + \gamma_2 \mathbf{P}_h D (2\mathbf{x}^{(k+1)} - \mathbf{x}^{(k)})$;
7: $\mathbf{v}_3 \leftarrow \mathbf{z}_3^{(k)} + \gamma_2 SB (2\mathbf{x}^{(k+1)} - \mathbf{x}^{(k)})$;
8: $\mathbf{v}_4 \leftarrow \mathbf{z}_4^{(k)} + \gamma_2 R (2\mathbf{x}^{(k+1)} - \mathbf{x}^{(k)})$;
9: $\mathbf{z}_1^{(k+1)} = \mathbf{v}_1 - \gamma_2\, \mathrm{prox}_{\frac{1}{\gamma_2}\|\cdot\|_*}\big(\tfrac{1}{\gamma_2}\mathbf{v}_1\big)$;
10: $\mathbf{z}_2^{(k+1)} = \mathbf{v}_2 - \gamma_2\, \mathrm{prox}_{\frac{1}{\gamma_2}\|\cdot\|_*}\big(\tfrac{1}{\gamma_2}\mathbf{v}_2\big)$;
11: $\mathbf{z}_3^{(k+1)} = \mathbf{v}_3 - \gamma_2\, \mathrm{prox}_{\frac{1}{\gamma_2}\iota_{B_{2,\varepsilon_{hs}}^{\mathbf{y}_{hs}}}}\big(\tfrac{1}{\gamma_2}\mathbf{v}_3\big)$;
12: $\mathbf{z}_4^{(k+1)} = \mathbf{v}_4 - \gamma_2\, \mathrm{prox}_{\frac{1}{\gamma_2}\iota_{B_{2,\varepsilon_{pan}}^{\mathbf{y}_{pan}}}}\big(\tfrac{1}{\gamma_2}\mathbf{v}_4\big)$;
13: Update iteration: $k \leftarrow k + 1$;
14: end while
15: output: $\mathbf{x}^{(k+1)}$
Steps 11 and 12 of Algorithm 2 are updated using the following $\ell_2$-ball projection:
$$\mathrm{prox}_{\gamma \iota_{B_{2,\varepsilon}^{\mathbf{v}}}}(\mathbf{x}) = \begin{cases} \mathbf{x}, & \text{if } \mathbf{x} \in B_{2,\varepsilon}^{\mathbf{v}}, \\ \mathbf{v} + \varepsilon \dfrac{\mathbf{x} - \mathbf{v}}{\|\mathbf{x} - \mathbf{v}\|_2}, & \text{otherwise}, \end{cases} \tag{30}$$
where $\varepsilon$ is the radius of the $\ell_2$-norm ball, and $B_{2,\varepsilon}^{\mathbf{v}}$ is defined as $B_{2,\varepsilon}^{\mathbf{v}} := \{\mathbf{x} \in \mathbb{R}^{NB} \mid \|\mathbf{x} - \mathbf{v}\|_2 \le \varepsilon\}$.
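A direct NumPy transcription of the projection (30) is given below (our own sketch; the vectors are flattened arrays):

```python
import numpy as np

def project_l2_ball(x: np.ndarray, v: np.ndarray, eps: float) -> np.ndarray:
    """Projection onto the l2 ball of radius eps centered at v, cf. Eq. (30)."""
    r = x - v
    norm = np.linalg.norm(r)
    return x if norm <= eps else v + eps * r / norm
```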
Algorithm 3 PDS Algorithm for Solving Problem (10)
1: input: $\mathbf{x}^{(0)}$
2: set: $\gamma_1$, $\gamma_2$ and initial values for $\mathbf{z}_1^{(0)}$, $\mathbf{z}_2^{(0)}$ are given.
3: while a stopping criterion is not satisfied do
4: $\mathbf{x}^{(k+1)} = P_C\{\mathbf{x}^{(k)} - \gamma_1 (A^{\top}(A\mathbf{x}^{(k)} - \mathbf{y}) + D^{\top}(\mathbf{P}_v^{\top}\mathbf{z}_1^{(k)} + \mathbf{P}_h^{\top}\mathbf{z}_2^{(k)}))\}$;
5: $\mathbf{v}_1 \leftarrow \mathbf{z}_1^{(k)} + \gamma_2 \mathbf{P}_v D (2\mathbf{x}^{(k+1)} - \mathbf{x}^{(k)})$;
6: $\mathbf{v}_2 \leftarrow \mathbf{z}_2^{(k)} + \gamma_2 \mathbf{P}_h D (2\mathbf{x}^{(k+1)} - \mathbf{x}^{(k)})$;
7: $\mathbf{z}_1^{(k+1)} = \mathbf{v}_1 - \gamma_2\, \mathrm{prox}_{\frac{1}{\gamma_2}H}\big(\tfrac{1}{\gamma_2}\mathbf{v}_1\big)$;
8: $\mathbf{z}_2^{(k+1)} = \mathbf{v}_2 - \gamma_2\, \mathrm{prox}_{\frac{1}{\gamma_2}H}\big(\tfrac{1}{\gamma_2}\mathbf{v}_2\big)$;
9: Update iteration: $k \leftarrow k + 1$;
10: end while
11: output: $\mathbf{x}^{(k+1)}$
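To make the whole procedure concrete, the following is a hedged NumPy sketch of Algorithm 3 operating directly on an (n_v, n_h, B) array; periodic forward differences stand in for $D$, the rotations $\mathbf{P}_v$ and $\mathbf{P}_h$ reduce to slicing the gradient cuboid along different axes, the weight $\lambda$ is absorbed into the soft-threshold, and the step sizes are illustrative values that do not check the convergence conditions of [25]. `A` and `A_adj` are callables implementing the degradation operator and its adjoint (for a random-sampling mask, both can be the same element-wise masking operator).

```python
import numpy as np

def grad(x):
    """Forward differences (periodic) along the vertical, horizontal, spectral axes."""
    return np.stack([np.roll(x, -1, axis=a) - x for a in range(3)])      # (3, nv, nh, B)

def grad_adj(g):
    """Adjoint of grad."""
    return sum(np.roll(g[a], 1, axis=a) - g[a] for a in range(3))

def prox_nuclear_planes(g, tau, axis):
    """Soft-threshold the singular values of every plane of the gradient cuboid
    sliced along `axis` (1: top-view planes, 2: side-view planes)."""
    moved = np.moveaxis(g, axis, 1)
    out = np.empty_like(moved)
    for c in range(moved.shape[0]):
        for i in range(moved.shape[1]):
            U, s, Vt = np.linalg.svd(moved[c, i], full_matrices=False)
            out[c, i] = (U * np.maximum(s - tau, 0.0)) @ Vt
    return np.moveaxis(out, 1, axis)

def tnng_restore(y, A, A_adj, lam, gamma1=0.01, gamma2=0.01,
                 rho_min=0.0, rho_max=1.0, max_iter=500, tol=1e-5):
    """Hedged sketch of Algorithm 3 (PDS for Problem (10)) on an (nv, nh, B) array."""
    x = np.clip(A_adj(y), rho_min, rho_max)
    z1 = np.zeros((3,) + x.shape)           # dual variable for the top-view planes
    z2 = np.zeros((3,) + x.shape)           # dual variable for the side-view planes
    for _ in range(max_iter):
        # step 4: gradient step on the fidelity term, dual feedback, box projection
        x_new = np.clip(x - gamma1 * (A_adj(A(x) - y) + grad_adj(z1) + grad_adj(z2)),
                        rho_min, rho_max)
        # steps 5-6: intermediate dual variables
        diff = grad(2.0 * x_new - x)
        v1 = z1 + gamma2 * diff
        v2 = z2 + gamma2 * diff
        # steps 7-8: dual updates via singular-value soft-thresholding
        z1 = v1 - gamma2 * prox_nuclear_planes(v1 / gamma2, lam / gamma2, axis=1)
        z2 = v2 - gamma2 * prox_nuclear_planes(v2 / gamma2, lam / gamma2, axis=2)
        if np.linalg.norm(x_new - x) <= tol * max(np.linalg.norm(x_new), 1e-12):
            x = x_new
            break
        x = x_new
    return x
```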

4. Results

To show the effectiveness of the proposed method, we conducted several experiments on compressed sensing (CS), signal decomposition, and pansharpening. The results were compared with those of several conventional methods.
We used PSNR and the Structural Similarity Index (SSIM) [46] as objective measures for CS and signal decomposition, and SAM [47], ERGAS [48], and Q2n [49] for pansharpening. These metrics are defined below, where $\mathbf{y} \in \mathbb{R}^{NB}$ is a ground-truth image and $\mathbf{x} \in \mathbb{R}^{NB}$ is a restored image.
  • SAM is defined as follows:
    $$\mathrm{SAM}(\mathbf{x}, \mathbf{y}) = \frac{1}{n}\sum_{j=1}^{n} \arccos\!\left(\frac{\mathbf{x}_j^{\top}\mathbf{y}_j}{\|\mathbf{x}_j\|_2\,\|\mathbf{y}_j\|_2}\right),$$
    where $\|\cdot\|_2$ denotes the $\ell_2$ norm.
  • ERGAS is defined as follows:
    $$\mathrm{ERGAS}(\mathbf{x}, \mathbf{y}) = \frac{100}{d}\sqrt{\frac{1}{N}\sum_{k=1}^{N}\frac{\|\mathbf{x}_k - \mathbf{y}_k\|_2^2}{\mu_k^2}},$$
    where $d$ is the resolution ratio between the HSI with low spatial resolution and the gray-scale pan image, and $\mu_k$ is the sample mean of the $k$-th band of $\mathbf{y}$.
  • Q2n is defined as follows:
    $$Q(\mathbf{x}, \mathbf{y}) = \frac{\sigma_{xy}}{\sigma_x \sigma_y} \cdot \frac{2\,\bar{x}\,\bar{y}}{\bar{x}^2 + \bar{y}^2} \cdot \frac{2\,\sigma_x \sigma_y}{\sigma_x^2 + \sigma_y^2},$$
    where $\bar{x} = \frac{1}{P}\sum_{j=1}^{P} x_j$, $\bar{y} = \frac{1}{P}\sum_{j=1}^{P} y_j$, $\sigma_x^2 = \frac{1}{P}\sum_{j=1}^{P}(x_j - \bar{x})^2$, $\sigma_y^2 = \frac{1}{P}\sum_{j=1}^{P}(y_j - \bar{y})^2$, and $\sigma_{xy} = \frac{1}{P}\sum_{j=1}^{P}(x_j - \bar{x})(y_j - \bar{y})$.
For PSNR, SSIM, and Q2n, higher values indicate better performance, whereas for SAM and ERGAS, lower values indicate better performance. We used 17 HSIs available online, e.g., [50,51], which have 102 to 198 spectral bands. The dynamic ranges of the HSIs vary widely; thus, for a consistent evaluation across images with various ranges, we linearly scaled the images to the range $[0, 1]$ before applying the standard metrics. For CS and signal decomposition, we experimentally adjusted the weight $\lambda$ and adopted the value that gave the highest PSNR. For pansharpening, $\epsilon_{hs}$ was calculated from the noise level of $\mathbf{n}_{hs}$ and $\mathbf{y}_{hs}$, and $\epsilon_{pan}$ was calculated from the noise level of $\mathbf{n}_{pan}$ and $\mathbf{y}_{pan}$. For the conventional methods, we also experimentally adopted the parameters that gave the highest PSNR.
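As a reference, SAM and ERGAS can be computed as in the following hedged sketch (our own code, not the authors'; it uses the common RMSE-based form of ERGAS with the resolution ratio d, returns the spectral angle in radians, and assumes the images are stored as (n_v, n_h, B) arrays):

```python
import numpy as np

def sam(x: np.ndarray, y: np.ndarray) -> float:
    """Mean spectral angle (in radians) between x and the reference y."""
    xf = x.reshape(-1, x.shape[-1])
    yf = y.reshape(-1, y.shape[-1])
    num = np.sum(xf * yf, axis=1)
    den = np.linalg.norm(xf, axis=1) * np.linalg.norm(yf, axis=1) + 1e-12
    return float(np.mean(np.arccos(np.clip(num / den, -1.0, 1.0))))

def ergas(x: np.ndarray, y: np.ndarray, d: float) -> float:
    """ERGAS with resolution ratio d; band-wise RMSE normalized by the band means of y."""
    mse_per_band = np.mean((x - y) ** 2, axis=(0, 1))
    mu2 = np.mean(y, axis=(0, 1)) ** 2 + 1e-12
    return float(100.0 / d * np.sqrt(np.mean(mse_per_band / mu2)))
```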

4.1. Compressed Sensing

We conducted experiments on compressed sensing, in which the observation model is represented by $\mathbf{y} = A\mathbf{u} + \mathbf{n}$, and the linear operator $A \in \mathbb{R}^{rNB \times NB}$ denotes a known random sampling matrix ($r$ is the ratio of sampling points). The convex optimization problem for the compressed sensing reconstruction using a regularization function $R$ is as follows:
$$\min_{\mathbf{x}} \; \lambda R(\mathbf{x}) + \frac{1}{2}\|A\mathbf{x} - \mathbf{y}\|_2^2 \quad \mathrm{s.t.} \; \mathbf{x} \in C.$$
We compared the performances by replacing the regularization function. The additive noise $\mathbf{n}$ is fixed to Gaussian noise with standard deviation $\sigma = 0.1$, and the sampling ratio is set to $r = 10$, $20$, and $30\%$. Table 1 shows the numerical results of the compressed sensing reconstruction. As LRTDTV is not designed for compressed sensing and yielded inferior results in our implementation, we did not include it in this experiment. The proposed method achieved 0.9–2.2 dB higher PSNR on average and the best SSIM scores for most of the images. Moreover, we confirmed the strong superiority of the proposed method for every sampling ratio $r$.
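For reference, a random-sampling observation of this kind can be simulated with a binary mask as in the sketch below (our own illustration; it represents A by zero-filling the unobserved entries rather than by an explicit $rNB \times NB$ matrix, and adding noise only on the sampled entries is an assumption of the example):

```python
import numpy as np

rng = np.random.default_rng(0)
n_v, n_h, B, r, sigma = 64, 64, 100, 0.2, 0.1

mask = rng.random((n_v, n_h, B)) < r       # keep a ratio r of all entries
A = lambda x: x * mask                     # sampling as zero-filling (self-adjoint)
A_adj = A

x_true = rng.random((n_v, n_h, B))
y = A(x_true) + sigma * rng.standard_normal((n_v, n_h, B)) * mask
```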
Figure 3 shows the experimental results for the sampling ratio $r = 20\%$. The PSNR [dB] is 31.02 for (c), 32.75 for (d), 35.02 for (e), and 35.86 for (f). From this figure, we can observe increased blurring in ASTV and HSSTV, introduced to recover the resolution loss and remove the noise caused by the reconstruction. In the experiments on ASTV and HSSTV, the blurring was reduced by decreasing the weight $\lambda$, but we confirmed that the resulting insufficient HSI reconstruction led to lower PSNR and SSIM. For SSTV, noise remained in the restored results. SSTV has weak smoothing effects in the spatial direction; therefore, the noise remains regardless of the weight $\lambda$. The proposed method yields better restoration results than the conventional methods because detailed areas are maintained while the noise and loss caused by the reconstruction are reduced.

4.2. Robust Principal Component Analysis

4.2.1. Image Decomposition

To evaluate our GRPCA, we first applied it to a standard sparse component decomposition. Following conventional methods such as [41], we randomly selected 30% of the pixels in an HSI and replaced them with random values drawn from a uniform distribution, which is a typical setting for measuring the performance of RPCA. The goal is to decompose the corrupted image into a latent low-rank HSI and the random values. We evaluated the performance with PSNR and SSIM between the ground-truth HSI and the obtained component $\mathbf{x}$ in (12), and compared it with the low-rank based decomposition methods TRPCA [41] and LRTDTV [29].
The results are shown in Table 2, where one can confirm that our method outperforms the conventional methods for most of the HSIs. This result indicates that the proposed regularization function correctly evaluates the low-rank nature of the image, and we believe that it captures the low-rank property better than the conventional methods.

4.2.2. Mixed Anomaly Removal

We tested the decomposition capability on images with mixed anomalies composed of dead lines/circles and impulsive noise, and compared our performance with TRPCA and LRTDTV. For the anomaly pattern of sparsely scattered impulsive noise and dead lines/circles, we prepared a mask for the dead lines and circles and generated images with dead pixels by multiplying the mask with an image (see Figure 4). We assume that the anomalies appear sparsely, so that they are excluded from the low-rank component $\mathbf{x}$ in (12).
We show our decomposition results in Figure 5 and the objective comparison in Table 3, where one can confirm that our method outperforms the conventional methods for most of the HSIs. Under mixed anomalies, we experimentally confirmed that TNNG maintains stable performance, whereas the performance of LRTDTV and TRPCA depends on the experimental conditions and the types of images. This indicates that, thanks to the evaluation of low-rankness in the gradient domain, the image can be restored by using the spectral information of other pixels even if there are defects in the gradient.

4.3. Pansharpening

We compared the proposed method defined by (14) with four conventional pansharpening methods: GFPCA [52], BayesNaive and BayesSparse [53], and HySure [54]. We conducted experiments on three noise levels, Type I ($\sigma_{hs} = 0.05$, $\sigma_{pan} = 0.01$), Type II ($\sigma_{hs} = 0.1$, $\sigma_{pan} = 0.01$), and Type III ($\sigma_{hs} = 0.15$, $\sigma_{pan} = 0.01$), where $\sigma_{hs}$ and $\sigma_{pan}$ are the standard deviations of the noise added to the hyperspectral and pan images, respectively.
Table 4 shows the comparison with the conventional methods. One can see from the results that our method outperforms the conventional methods in many cases. For subjective comparison, Figure 6 shows an example of the observed data (Type I) on PaviaC.
The estimated HSIs with high spatial resolution are shown in Figure 7. The PSNR [dB] is 25.45 for (b), 26.90 for (c), 28.86 for (d), 30.0 for (e), and 31.16 for (f). We divided the bands of these HSIs into three parts and visualized the average of each part as an RGB image. The details of the results obtained by GFPCA and BayesNaive are over-smoothed, and their shading is unnatural. BayesSparse and HySure estimate sharper images; however, the texture of trees and buildings is lost. In contrast, the proposed method is particularly accurate in restoring the shading on the upper right and the texture of the central building on the upper left. As shown in (14), the optimization problem for pansharpening minimizes only the proposed regularization function, and the number of hyperparameters is reduced by using constraints on the HS and pan images.

5. Discussion

Computing Time

All methods were executed using Matlab 2019a on a MacBook Air (Retina, 13-inch, 2019) with a 1.6 GHz dual-core Intel Core i5 and 16 GB of 2133 MHz LPDDR3 memory. The proposed method takes much more time than the conventional methods. This is because the singular value decompositions in (7) need to be performed $3n_v + 3n_h$ times in each iteration. Speeding up the processing is a major issue that we plan to pursue in future work.
  • Compressed Sensing
    The maximum execution time for each method was about 2000 s for ASTV, 530 s for SSTV, 670 s for HSSTV, and 2500 s for TNNG (ours).
  • Image Decomposition
    The maximum execution time for each method was about 1600 s for LRTDTV, 610 s for TRPCA, 2400 s for TNNG (ours).
  • Mixed Anomaly Removal
    The maximum execution time for each method was about 1680 s for LRTDTV, 620 s for TRPCA, 2500 s for TNNG (ours).
  • Pansharpening
    The maximum execution time for each method was about 4.2 s for GFPCA, 1.7 s for BayesNaive, 130 s for BayesSparse, 110 s for HySure, and 7200 s for TNNG (ours).

6. Conclusions

In this paper, we proposed a new regularization scheme for HSI reconstruction called TNNG. We focused on the HSI property that the gradient image possesses strong inter-band correlation and utilized this property in convex optimization. TNNG is a regularization scheme aimed at enforcing the low-rank gradient structure observed from the spectral direction. We achieved high-performance HSI restoration by solving the regularized convex optimization problem using PDS. In the experiments, we conducted compressed sensing reconstruction, image decomposition, and pansharpening. While the efficiency of the conventional methods depends on the application, the proposed method is superior to them in both subjective and objective evaluations across various applications. In the proposed method, the processing is applied to the whole image without exploiting local features; introducing local processing to further improve performance is left for future work, as is improving the computation speed.

Author Contributions

Methodology, R.Y., R.K., R.M. and M.O.; software, R.Y. and R.K.; validation, R.Y., R.K., R.M. and M.O.; formal analysis, R.Y., R.K., R.M. and M.O.; investigation, R.Y., R.K., R.M. and M.O.; data curation, R.Y. and R.K.; writing—original draft preparation, R.Y., R.K., R.M. and M.O.; writing—review and editing, R.Y., R.K., R.M. and M.O.; project administration, M.O.; funding acquisition, M.O. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Ministry of Internal Affairs and Communications of Japan under the Strategic Information and Communications R&D Promotion Programme, from 2017 to 2019, No. 3620.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ghamisi, P.; Yokoya, N.; Li, J.; Liao, W.; Liu, S.; Plaza, J.; Rasti, B.; Plaza, A. Advances in hyperspectral image and signal processing: A comprehensive overview of the state of the art. IEEE Geosci. Remote Sens. Mag. 2017, 5, 37–78. [Google Scholar] [CrossRef] [Green Version]
  2. Dian, R.; Li, S.; Fang, L.; Bioucas-Dias, J. Hyperspectral Image Super-Resolution via Local Low-Rank and Sparse Representations. In Proceedings of the IGARSS 2018-2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 4003–4006. [Google Scholar]
  3. Plaza, A.; Benediktsson, J.A.; Boardman, J.W.; Brazile, J.; Bruzzone, L.; Camps-Valls, G.; Chanussot, J.; Fauvel, M.; Gamba, P.; Gualtieri, A.; et al. Recent advances in techniques for hyperspectral image processing. Remote Sens. Environ. 2009, 113, S110–S122. [Google Scholar] [CrossRef]
  4. Bioucas-Dias, J.M.; Plaza, A.; Camps-Valls, G.; Scheunders, P.; Nasrabadi, N.; Chanussot, J. Hyperspectral Remote Sensing Data Analysis and Future Challenges. IEEE Geosci. Remote Sens. Mag. 2013, 1, 6–36. [Google Scholar] [CrossRef] [Green Version]
  5. Fu, Y.; Zheng, Y.; Sato, I.; Sato, Y. Exploiting spectral-spatial correlation for coded hyperspectral image restoration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 3727–3736. [Google Scholar]
  6. Khan, Z.; Shafait, F.; Mian, A. Joint group sparse PCA for compressed hyperspectral imaging. IEEE Trans. Image Process. 2015, 24, 4934–4942. [Google Scholar] [CrossRef]
  7. Shaw, G.; Manolakis, D. Signal processing for hyperspectral image exploitation. IEEE Signal Process. Mag. 2002, 19, 12–16. [Google Scholar] [CrossRef]
  8. Rizkinia, M.; Baba, T.; Shirai, K.; Okuda, M. Local spectral component decomposition for multi-channel image denoising. IEEE Trans. Image Process. 2016, 25, 3208–3218. [Google Scholar] [CrossRef]
  9. Jin, X.; Gu, Y. Superpixel-based intrinsic image decomposition of hyperspectral images. IEEE Trans. Geosci. Remote Sens 2017, 55, 4285–4295. [Google Scholar] [CrossRef]
  10. Zheng, Y.; Sato, I.; Sato, Y. Illumination and reflectance spectra separation of a hyperspectral image meets low-rank matrix factorization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1779–1787. [Google Scholar]
  11. Akhtar, N.; Shafait, F.; Mian, A. Futuristic greedy approach to sparse unmixing of hyperspectral data. IEEE Trans. Geosci. Remote Sens. 2015, 53, 2157–2174. [Google Scholar] [CrossRef]
  12. Bioucas-Dias, J.M.; Plaza, A.; Dobigeon, N.; Parente, M.; Du, Q.; Gader, P.; Chanussot, J. Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2012, 5, 354–379. [Google Scholar] [CrossRef] [Green Version]
  13. Rizkinia, M.; Okuda, M. Joint Local Abundance Sparse Unmixing for Hyperspectral Images. Remote Sens. 2017, 9, 1224. [Google Scholar] [CrossRef] [Green Version]
  14. Liu, Y.; Condessa, F.; Bioucas-Dias, J.M.; Li, J.; Du, P.; Plaza, A. Convex Formulation for Multiband Image Classification with Superpixel-Based Spatial Regularization. IEEE Trans. Geosci. Remote Sens. 2018, 56, 2704–2721. [Google Scholar] [CrossRef]
  15. Zhang, H.; Li, J.; Huang, Y.; Zhang, L. A nonlocal weighted joint sparse representation classification method for hyperspectral imagery. IEEE J. Sel. Topics Appl. Earth Obs. Remote Sens. 2014, 7, 2056–2065. [Google Scholar] [CrossRef]
  16. Stein, D.W.J.; Beaven, S.G.; Hoff, L.E.; Winter, E.M.; Schaum, A.P.; Stocker, A.D. Anomaly detection from hyperspectral imagery. IEEE Signal Process. Mag. 2002, 19, 58–69. [Google Scholar] [CrossRef] [Green Version]
  17. Lefkimmiatis, S.; Roussos, A.; Maragos, P.; Unser, M. Structure tensor total variation. SIAM J. Imag. Sci. 2015, 8, 1090–1122. [Google Scholar] [CrossRef]
  18. Ono, S.; Shirai, K.; Okuda, M. Vectorial total variation based on arranged structure tensor for multichannel image restoration. In Proceedings of the 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Shanghai, China, 20–25 March 2016; pp. 4528–4532. [Google Scholar]
  19. Prasath, V.B.S.; Vorotnikov, D.; Pelapur, R.; Jose, S.; Seetharaman, G.; Palaniappan, K. Multiscale Tikhonov-total variation image restoration using spatially varying edge coherence exponent. IEEE Trans. Image Process. 2015, 24, 5220–5235. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  20. Zhang, H. Hyperspectral image denoising with cubic total variation model. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, I-7, 95–98. [Google Scholar] [CrossRef] [Green Version]
  21. Yuan, Q.; Zhang, L.; Shen, H. Hyperspectral image denoising employing a spectral-spatial adaptive total variation model. IEEE Trans. Geosci. Remote Sens. 2012, 50, 3660–3677. [Google Scholar] [CrossRef]
  22. He, W.; Zhang, H.; Zhang, L.; Shen, H. Total-variation-regularized low-rank matrix factorization for hyperspectral image restoration. IEEE Trans. Geosci. Remote Sens. 2016, 54, 178–188. [Google Scholar] [CrossRef]
  23. Zhang, H.; He, W.; Zhang, L.; Shen, H.; Yuan, Q. Hyperspectral image restoration using low-rank matrix recovery. IEEE Trans. Geosci. Remote Sens. 2014, 52, 4729–4743. [Google Scholar] [CrossRef]
  24. Sun, L.; He, C.; Zheng, Y.; Tang, S. SLRL4D: Joint Restoration of Subspace Low-Rank Learning and Non-Local 4-D Transform Filtering for Hyperspectral Image. Remote Sens. 2020, 12, 2979. [Google Scholar] [CrossRef]
  25. Condat, L. A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms. J. Optim. Theory Appl. 2013, 158, 460–479. [Google Scholar] [CrossRef] [Green Version]
  26. Dabov, K.; Foi, A.; Egiazarian, K. Video denoising by sparse 3D transform-domain collaborative filtering. In Proceedings of the European Signal Processing Conference, Poznan, Poland, 3–7 September 2007. [Google Scholar]
  27. Aggarwal, H.K.; Majumdar, A. Hyperspectral image denoising using spatio-spectral total variation. IEEE Geosci. Remote Sens. Lett. 2016, 13, 442–446. [Google Scholar] [CrossRef]
  28. Takeyama, S.; Ono, S.; Kumazawa, I. Hyperspectral image restoration by hybrid spatio-spectral total variation. In Proceedings of the 2017 IEEE International Conference on Acoustics, Speech and Signal Processing, New Orleans, LA, USA, 5–9 March 2017; pp. 4586–4590. [Google Scholar]
  29. Wang, Y.; Peng, J.; Zhao, Q.; Leung, Y.; Zhao, X.L.; Meng, D. Hyperspectral image restoration via total variation regularized low-rank tensor decomposition. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 1227–1243. [Google Scholar] [CrossRef] [Green Version]
  30. Lim, B.; Son, S.; Kim, H.; Nah, S.; Lee, K.M. Enhanced deep residual networks for single image super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Honolulu, HI, USA, 21–26 July 2017; Volume 1, p. 4. [Google Scholar]
  31. Park, D.; Kim, K.; Chun, S.Y. Efficient module based single image super resolution for multiple problems. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Salt Lake City, UT, USA, 18–22 June 2018; Volume 5. [Google Scholar]
  32. Nie, S.; Gu, L.; Zheng, Y.; Lam, A.; Ono, N.; Sato, I. Deeply Learned Filter Response Functions for Hyperspectral Reconstruction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 4767–4776. [Google Scholar]
  33. Qu, Y.; Qi, H.; Kwan, C. Unsupervised Sparse Dirichlet-Net for Hyperspectral Image Super-Resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 2511–2520. [Google Scholar]
  34. Shi, Z.; Chen, C.; Xiong, Z.; Liu, D.; Wu, F. Hscnn+: Advanced cnn-based hyperspectral recovery from rgb images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 18–22 June 2018; Volume 3, p. 5. [Google Scholar]
  35. Matsuoka, R.; Ono, S.; Okuda, M. Transformed-domain robust multiple-exposure blending with Huber loss. IEEE Access 2019, 7, 162282–162296. [Google Scholar] [CrossRef]
  36. Imamura, R.; Itasaka, T.; Okuda, M. Zero-Shot Hyperspectral Image Denoising with Separable Image Prior. In Proceedings of the IEEE International Conference on Computer Vision Workshops, Seoul, Korea, 27–28 October 2019. [Google Scholar]
  37. Baraniuk, R.G. Compressive sensing. IEEE Signal Process. Mag. 2007, 24, 118–121. [Google Scholar] [CrossRef]
  38. Candès, E.; Wakin, M. An introduction to compressive sampling. IEEE Signal Process. Mag. 2008, 25, 21–30. [Google Scholar] [CrossRef]
  39. Candès, E.J.; Li, X.; Ma, Y.; Wright, J. Robust principal component analysis? J. ACM (JACM) 2011, 58, 11. [Google Scholar] [CrossRef]
  40. Lu, C.; Feng, J.; Chen, Y.; Liu, W.; Lin, Z.; Yan, S. Tensor robust principal component analysis: Exact recovery of corrupted low-rank tensors via convex optimization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 5249–5257. [Google Scholar]
  41. Lu, C.; Feng, J.; Chen, Y.; Liu, W.; Lin, Z.; Yan, S. Tensor Robust Principal Component Analysis with A New Tensor Nuclear Norm. arXiv 2018, arXiv:1804.03728. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  42. Combettes, P.L.; Pesquet, J.C. Proximal Splitting Methods in Signal Processing; Springer: Berlin/Heidelberg, Germany, 2011; pp. 185–212. [Google Scholar]
  43. Moreau, J.J. Fonctions convexes duales et points proximaux dans un espace hilbertien. C. R. Acad. Sci. Paris Ser. A Math. 1962, 255, 2897–2899. [Google Scholar]
  44. Chambolle, A.; Pock, T. A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imag. Vis. 2010, 40, 120–145. [Google Scholar] [CrossRef] [Green Version]
  45. Combettes, P.L.; Pesquet, J.C. Primal-dual splitting algorithm for solving inclusions with mixtures of composite, Lipschitzian, and parallel-sum type monotone operators. Set Valued Variation. Anal. 2012, 20, 307–330. [Google Scholar] [CrossRef] [Green Version]
  46. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  47. Kruse, F.; Lefkoff, A.; Boardman, J.; Heidebrecht, K.; Shapiro, A.; Barloon, P.; Goetz, A. The spectral image processing system (SIPS)—interactive visualization and analysis of imaging spectrometer data. Remote Sens. Environ. 1993, 44, 145–163. [Google Scholar] [CrossRef]
  48. Wald, L. Quality of high resolution synthesised images: Is there a simple criterion? In Third Conference “Fusion of Earth Data: Merging Point Measurements, Raster Maps and Remotely Sensed Images”; Ranchin, T., Wald, L., Eds.; SEE/URISCA: Sophia Antipolis, France, 2000; pp. 99–103. [Google Scholar]
  49. Garzelli, A.; Nencini, F. Hypercomplex Quality Assessment of Multi/Hyperspectral Images. IEEE Geosci. Remote Sens. Lett. 2009, 6, 662–665. [Google Scholar] [CrossRef]
  50. Hyperspectral Remote Sensing Scenes. Available online: http://www.ehu.eus/ccwintco/index.php (accessed on 20 October 2017).
  51. Skauli, T.; Farrell, J. A collection of hyperspectral images for imaging systems research. In Proceedings of the SPIE Electronic Imaging, San Francisco, CA, USA, 3–7 February 2013. [Google Scholar]
  52. Liao, W.; Huang, X.; Van Coillie, F.; Gautama, S.; Pizurica, A.; Philips, W.; Liu, H.; Zhu, T.; Shimoni, M.; Moser, G.; et al. Processing of Multiresolution Thermal Hyperspectral and Digital Color Data: Outcome of the 2014 IEEE GRSS Data Fusion Contest. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 2984–2996. [Google Scholar] [CrossRef]
  53. Wei, Q.; Dobigeon, N.; Tourneret, J. Bayesian Fusion of Multi-Band Images. IEEE J. Sel. Top. Signal Process. 2015, 9, 1117–1127. [Google Scholar] [CrossRef] [Green Version]
  54. Simoes, M.; Bioucas-Dias, J.; Almeida, L.B.; Chanussot, J. A Convex Formulation for Hyperspectral Image Superresolution via Subspace-Based Regularization. IEEE Trans. Geosci. Remote Sens. 2015, 53, 3373–3388. [Google Scholar] [CrossRef] [Green Version]
Figure 1. An example of the diagonal matrix $S_h^{(i)}$ of PaviaU, in which the diagonal elements are the singular values of $G_h^{(i)}$. The three images show $S_h^{(i)}$ of the original input, the noisy image, and the restoration result, respectively. (a) original, (b) noisy, (c) restoration.
Figure 2. Gradient images and rotation by $\mathbf{P}_v$ and $\mathbf{P}_h$.
Figure 3. Results of the CS reconstruction experiments on Stanford (130th band). (a) original image, (b) observation, (c) ASTV, (d) SSTV, (e) HSSTV, (f) TNNG (ours).
Figure 4. Dead lines/circles are generated by pixel-wise multiplication with a mask. (a) original image, (b) mask, (c) generated image.
Figure 5. Results of GRPCA on Washington (76th band). (a) Input image, (b) Image with anomalies, (c) Low-rank component, (d) Sparse component.
Figure 6. Data for pansharpening on PaviaC (Type I). (a) original image, (b) pan image, (c) HSI.
Figure 7. Results of the pansharpening experiments on PaviaC. (a) original image, (b) GFPCA, (c) BayesNaive, (d) BayesSparse, (e) HySure, (f) TNNG (ours).
Table 1. Numerical values for CS reconstruction (PSNR [dB]/SSIM).

| r | HSI | ASTV | SSTV | HSSTV | TNNG (ours) |
| 10% | PaviaC | 26.18/0.591 | 27.07/0.591 | 28.84/0.799 | 30.27/0.797 |
| | PaviaU | 25.71/0.594 | 26.64/0.580 | 28.30/0.798 | 29.80/0.820 |
| | Frisco | 29.25/0.691 | 30.14/0.663 | 32.99/0.891 | 33.16/0.868 |
| | Stanford | 29.52/0.683 | 30.39/0.665 | 33.31/0.878 | 33.86/0.870 |
| | Indian | 28.92/0.692 | 29.03/0.646 | 30.47/0.829 | 32.23/0.883 |
| | Salinas | 30.65/0.683 | 31.45/0.687 | 34.34/0.903 | 35.38/0.933 |
| | AVE. of 17 imgs | 28.58/0.706 | 29.10/0.641 | 31.04/0.859 | 33.23/0.881 |
| 20% | PaviaC | 27.93/0.648 | 29.75/0.714 | 31.31/0.835 | 32.10/0.846 |
| | PaviaU | 27.36/0.672 | 29.28/0.704 | 30.77/0.832 | 31.52/0.854 |
| | Frisco | 30.89/0.751 | 32.49/0.768 | 35.24/0.911 | 35.31/0.908 |
| | Stanford | 31.02/0.741 | 32.75/0.768 | 35.02/0.898 | 35.86/0.910 |
| | Indian | 29.90/0.722 | 30.95/0.735 | 32.61/0.885 | 33.01/0.849 |
| | Salinas | 32.26/0.742 | 33.34/0.759 | 36.96/0.935 | 37.26/0.948 |
| | AVE. of 17 imgs | 29.84/0.741 | 31.50/0.746 | 33.81/0.888 | 34.96/0.908 |
| 30% | PaviaC | 28.85/0.711 | 31.30/0.775 | 32.35/0.861 | 33.13/0.875 |
| | PaviaU | 28.41/0.717 | 30.85/0.768 | 31.89/0.862 | 32.40/0.875 |
| | Frisco | 31.87/0.782 | 33.64/0.811 | 36.07/0.912 | 36.49/0.930 |
| | Stanford | 31.98/0.773 | 33.90/0.808 | 35.87/0.909 | 36.98/0.931 |
| | Indian | 30.67/0.759 | 31.84/0.770 | 33.51/0.888 | 33.97/0.911 |
| | Salinas | 33.12/0.770 | 34.11/0.783 | 37.98/0.945 | 38.12/0.956 |
| | AVE. of 17 imgs | 30.61/0.766 | 32.72/0.791 | 34.89/0.898 | 35.79/0.925 |
Table 2. Numerical results for image decomposition (PSNR [dB]/SSIM).

| HSI | TRPCA | LRTDTV | GRPCA (ours) |
| PaviaC | 35.16/0.961 | 38.17/0.966 | 40.45/0.977 |
| PaviaU | 35.01/0.965 | 36.93/0.964 | 39.55/0.976 |
| Frisco | 44.16/0.990 | 42.00/0.986 | 48.18/0.995 |
| Stanford | 41.80/0.980 | 44.05/0.986 | 48.19/0.993 |
| Indian | 35.43/0.940 | 37.06/0.961 | 37.87/0.988 |
| Salinas | 37.29/0.984 | 46.41/0.993 | 48.34/0.991 |
| AVE. of 17 imgs | 39.03/0.974 | 40.61/0.975 | 43.93/0.986 |
Table 3. Numerical results for mixed anomaly removal experiments: impulse (d = 10%) + dead lines/circles (PSNR [dB]/SSIM).

| HSI | TRPCA | LRTDTV | GRPCA (ours) |
| PaviaC | 37.95/0.981 | 41.22/0.977 | 42.25/0.981 |
| PaviaU | 37.73/0.982 | 40.4/0.976 | 41.45/0.983 |
| Frisco | 48.69/0.997 | 44.83/0.995 | 50.93/0.997 |
| Stanford | 45.79/0.994 | 50.33/0.997 | 50.95/0.995 |
| Indian | 37.25/0.969 | 37.58/0.970 | 38.88/0.970 |
| Salinas | 40.29/0.993 | 48.19/0.997 | 50.29/0.996 |
| AVE. of 17 imgs | 42.50/0.990 | 44.32/0.988 | 46.68/0.991 |
Table 4. Numerical results for pansharpening: Type I ($\sigma_{hs} = 0.05$, $\sigma_{pan} = 0.01$), Type II ($\sigma_{hs} = 0.1$, $\sigma_{pan} = 0.01$), Type III ($\sigma_{hs} = 0.15$, $\sigma_{pan} = 0.01$).

| Type | HSI | Metric | GFPCA | BayesNaive | BayesSparse | HySure | TNNG (ours) |
| I | PaviaC | PSNR | 25.45 | 26.90 | 28.86 | 30.00 | 31.16 |
| | | SAM | 8.698 | 7.636 | 7.754 | 6.415 | 6.696 |
| | | ERGAS | 6.396 | 5.668 | 4.625 | 3.808 | 3.368 |
| | | Q2n | 0.248 | 0.337 | 0.339 | 0.473 | 0.484 |
| | PaviaU | PSNR | 24.85 | 26.51 | 27.67 | 29.26 | 30.25 |
| | | SAM | 9.346 | 7.718 | 8.568 | 7.176 | 7.001 |
| | | ERGAS | 7.072 | 5.997 | 5.247 | 4.179 | 3.747 |
| | | Q2n | 0.165 | 0.232 | 0.261 | 0.387 | 0.398 |
| | Frisco | PSNR | 31.09 | 31.07 | 34.59 | 34.20 | 34.74 |
| | | SAM | 3.973 | 5.451 | 3.733 | 4.338 | 4.594 |
| | | ERGAS | 3.556 | 3.690 | 2.507 | 2.423 | 2.354 |
| | | Q2n | 0.844 | 0.844 | 0.869 | 0.919 | 0.894 |
| | Stanford | PSNR | 31.44 | 31.41 | 35.15 | 33.67 | 35.19 |
| | | SAM | 4.398 | 7.614 | 4.204 | 3.822 | 5.928 |
| | | ERGAS | 5.675 | 6.722 | 4.606 | 4.380 | 4.653 |
| | | Q2n | 0.480 | 0.493 | 0.518 | 0.521 | 0.548 |
| | Suwannee | PSNR | 29.33 | 29.36 | 32.93 | 33.52 | 34.93 |
| | | SAM | 5.816 | 7.685 | 5.762 | 4.946 | 5.343 |
| | | ERGAS | 3.717 | 3.917 | 2.556 | 2.341 | 2.214 |
| | | Q2n | 0.516 | 0.560 | 0.597 | 0.612 | 0.662 |
| II | PaviaC | PSNR | 25.37 | 25.55 | 28.53 | 29.45 | 29.76 |
| | | SAM | 9.159 | 11.888 | 8.481 | 7.238 | 8.964 |
| | | ERGAS | 6.478 | 6.861 | 4.827 | 4.093 | 4.053 |
| | | Q2n | 0.249 | 0.351 | 0.348 | 0.468 | 0.476 |
| | PaviaU | PSNR | 24.57 | 25.32 | 27.35 | 28.09 | 29.16 |
| | | SAM | 10.025 | 10.935 | 9.477 | 8.000 | 7.927 |
| | | ERGAS | 6.974 | 6.269 | 5.188 | 4.542 | 4.069 |
| | | Q2n | 0.167 | 0.198 | 0.275 | 0.347 | 0.348 |
| | Frisco | PSNR | 30.82 | 28.83 | 33.82 | 32.86 | 33.48 |
| | | SAM | 4.822 | 9.755 | 4.747 | 4.670 | 5.369 |
| | | ERGAS | 3.719 | 5.106 | 2.804 | 2.940 | 2.876 |
| | | Q2n | 0.843 | 0.844 | 0.874 | 0.892 | 0.893 |
| | Stanford | PSNR | 31.10 | 29.06 | 34.33 | 32.98 | 33.89 |
| | | SAM | 5.979 | 14.048 | 5.832 | 5.224 | 6.512 |
| | | ERGAS | 6.437 | 11.421 | 5.567 | 5.343 | 4.958 |
| | | Q2n | 0.478 | 0.497 | 0.519 | 0.516 | 0.516 |
| | Suwannee | PSNR | 29.13 | 27.38 | 32.30 | 32.29 | 33.48 |
| | | SAM | 6.696 | 12.619 | 6.895 | 5.389 | 6.098 |
| | | ERGAS | 3.865 | 5.523 | 2.889 | 2.708 | 2.524 |
| | | Q2n | 0.518 | 0.534 | 0.598 | 0.598 | 0.616 |
| III | PaviaC | PSNR | 25.25 | 23.94 | 28.17 | 28.34 | 28.67 |
| | | SAM | 9.863 | 16.210 | 9.522 | 8.263 | 10.273 |
| | | ERGAS | 6.611 | 8.596 | 5.126 | 4.680 | 4.542 |
| | | Q2n | 0.256 | 0.313 | 0.376 | 0.419 | 0.433 |
| | PaviaU | PSNR | 24.65 | 23.99 | 27.23 | 27.79 | 28.81 |
| | | SAM | 10.225 | 14.694 | 9.848 | 8.940 | 8.503 |
| | | ERGAS | 7.254 | 8.285 | 5.569 | 5.047 | 4.277 |
| | | Q2n | 0.169 | 0.197 | 0.282 | 0.362 | 0.350 |
| | Frisco | PSNR | 30.39 | 26.59 | 32.52 | 31.83 | 32.42 |
| | | SAM | 6.000 | 14.050 | 6.105 | 6.951 | 6.440 |
| | | ERGAS | 4.000 | 6.743 | 3.290 | 3.428 | 3.308 |
| | | Q2n | 0.846 | 0.659 | 0.876 | 0.889 | 0.893 |
| | Stanford | PSNR | 30.60 | 26.68 | 33.29 | 31.56 | 32.83 |
| | | SAM | 7.927 | 20.437 | 7.926 | 7.312 | 7.823 |
| | | ERGAS | 7.659 | 17.236 | 7.100 | 7.000 | 5.686 |
| | | Q2n | 0.483 | 0.418 | 0.534 | 0.518 | 0.509 |
| | Suwannee | PSNR | 28.75 | 25.35 | 31.49 | 31.07 | 32.34 |
| | | SAM | 7.973 | 17.544 | 8.250 | 7.695 | 7.416 |
| | | ERGAS | 4.160 | 7.376 | 3.327 | 3.378 | 2.924 |
| | | Q2n | 0.514 | 0.407 | 0.603 | 0.593 | 0.592 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Yuzuriha, R.; Kurihara, R.; Matsuoka, R.; Okuda, M. TNNG: Total Nuclear Norms of Gradients for Hyperspectral Image Prior. Remote Sens. 2021, 13, 819. https://0-doi-org.brum.beds.ac.uk/10.3390/rs13040819