Article

A Novel 3D Anisotropic Total Variation Regularized Low Rank Method for Hyperspectral Image Mixed Denoising

1 School of Computer and Software, Nanjing University of Information Science and Technology, Nanjing 210044, China
2 Key Laboratory of Intelligent Perception and Systems for High-Dimensional Information of Ministry of Education, Nanjing University of Science and Technology, Nanjing 210094, China
3 School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
4 School of Information and Technology, Nanjing Audit University, Nanjing 211815, China
5 Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon 440746, Korea
* Author to whom correspondence should be addressed.
ISPRS Int. J. Geo-Inf. 2018, 7(10), 412; https://doi.org/10.3390/ijgi7100412
Submission received: 10 September 2018 / Revised: 8 October 2018 / Accepted: 12 October 2018 / Published: 17 October 2018

Abstract: Since an image of interest is known to be structured in several patterns at the same time, its prior is usually modeled by enforcing multiple constraints on the unknown signal. For instance, when dealing with a hyperspectral restoration problem, combining piecewise-smoothness and low-rank constraints has yielded promising reconstruction results. In this paper, we propose a novel mixed-noise removal method that employs 3D anisotropic total variation and low-rank constraints simultaneously for the problem of hyperspectral image (HSI) restoration. The main idea of the proposed method is based on the assumption that the spectra in an HSI lie in the same low-rank subspace and that both the spatial and spectral domains exhibit piecewise smoothness. The low rankness of an HSI is approximately exploited by the nuclear norm, while the spectral-spatial smoothness is explored using 3D anisotropic total variation (3DATV), defined as a combination of 2D spatial TV and 1D spectral TV of the HSI cube. Finally, the proposed restoration model is effectively solved by the alternating direction method of multipliers (ADMM). Experimental results on both simulated and real HSI datasets validate the superior performance of the proposed method in terms of quantitative assessment and visual quality.

1. Introduction

Hyperspectral images (HSIs), three-dimensional data cubes with hundreds of narrow spectral bands ranging from 0.4 to 2.5 μm, have been widely used in various fields [1], e.g., precision agriculture, mineral detection, environmental monitoring, and urban planning. However, during image acquisition, real HSIs are always corrupted by several kinds of noise, e.g., Gaussian noise, impulse noise, stripes or dead lines, which severely limit the precision of subsequent processing such as unmixing [2,3], classification [4,5], and anomaly detection [6]. Therefore, it is essential to develop effective restoration techniques for HSIs.
Numerous existing single-image denoising methods, e.g., the k-means clustering dictionary learning algorithm via singular value decomposition (K-SVD) [7], non-local means [8,9], and block-matching and 3D filtering (BM3D) [10], as well as their variants, can be directly applied to an HSI by treating it as a stack of band-by-band images. However, because they do not exploit the correlation between bands, these methods tend to produce mediocre results.
Several powerful HSI restoration methods are based on the wavelet transform [11,12,13], which has shown great potential in multi-scale representation [14] and spatial-spectral correlation exploration. For instance, Othman and Qian [15] proposed a two-dimensional (2D) algorithm that exploits the spatial and spectral features of HSIs and performs wavelet shrinkage in the derivative domain. The wavelet transform is often combined with principal component analysis (PCA) for HSI restoration [16,17]. For example, block-matching four-dimensional (BM4D) filtering was applied to the remaining low-energy PCA output channel images for HSI denoising [18] and achieved great progress in removing Gaussian noise. Generally, owing to their joint exploration of the spectral and spatial domains, 3D wavelet methods [19] produce much better results than 2D wavelet methods [20]. Moreover, combining sparse reduced-rank regression with the wavelet transform is another way to produce appealing results [21]. Beyond wavelet-based methods, a number of other techniques have been developed for hyperspectral restoration, e.g., tensor decomposition [22,23], sparse representation [24,25], sparse dictionary learning [26,27], kernel-based methods [28], deep learning [29] and neural network [30,31] methods, and Bayesian methods [32,33]. These methods have been shown to produce outstanding results. However, all of the above-mentioned methods consider only one or two kinds of noise and fail to remove mixed noise completely.
Recently, low-rank representation (LRR), another powerful technique for image analysis [34,35], has drawn a lot of attention and achieved great success in the field of hyperspectral restoration. Lu et al. [36] proposed an LRR method for removing stripe noise in HSIs by exploiting the high spectral correlation among different bands, with a graph regularization constructed to preserve the intrinsic local structure. Motivated by the idea of robust principal component analysis (RPCA), Zhang et al. [37] developed an HSI restoration method based on low-rank matrix recovery (LRMR) via the so-called "Go Decomposition" algorithm. Even though LRMR operates on overlapped patches and achieves exciting performance for removing mixed Gaussian and sparse noise simultaneously, it only considers the local similarity within patches. To overcome this drawback, a spectral nonlocal weighted low-rank representation method that weights the different contributions of the overlapped patches was presented in [38]. Meanwhile, He et al. [39] improved patch-wise LRMR with a noise-adjusted iteration strategy, which makes LRMR adaptive to the noise intensity in different bands. To exploit both the local similarity within a patch and the nonlocal similarity across the patches in a group simultaneously, Wang et al. [40] proposed a group low-rank representation (GLRR) approach for HSI denoising and achieved better performance. Different from the above-mentioned low-rank-based methods, Sun et al. [41] paved a new way by exploring the low-rank properties of the spectral difference images of an HSI for mixed noise removal. Due to the lack of spatial information, however, these low-rank methods only exploit the low-rank properties of HSIs from a spectral perspective and fail to remove heavy Gaussian noise effectively.
In addition, when the sparse noise has its own structure, for example, when the stripes or dead lines in an HSI are located at the same place in most of the bands, low-rank-based methods will treat the sparse noise as a low-rank component and fail to remove it [42,43]. More research on low-rank representation for HSI denoising can be found in the literature [44,45] and references therein.
Total variation (TV) is an effective tool for imposing spatial constraints and removing Gaussian noise in HSIs [46,47,48]. For instance, Yuan et al. [46] extended the vectorial total variation (VTV) [49] for HSI denoising, adapting it to the noise intensity in each band and the structures in the spatial domain. Chang et al. [48] proposed an anisotropic total variation, defined by maintaining the difference along the direction of the stripes, for HSI destriping. However, both methods can only remove a certain kind of noise and are not flexible enough to handle mixed noise, i.e., Gaussian noise, stripes, and dead lines. To deal with mixed noise in HSIs, Aggarwal et al. [50] proposed a novel spatial-spectral TV that explores spectral smoothness across the spatial total variation and achieved great success. In addition, by taking advantage of combining multiple constraints, He et al. [42] proposed a band-by-band total variation regularized low-rank matrix factorization (LRTV) method for HSI restoration, which greatly improves mixed noise removal by combining total variation regularization and low-rank representation to enforce local spatial smoothness and the low-rank property of the spectra in the whole scene simultaneously. Similarly, Wu et al. [51] proposed to combine band-by-band TV regularization with weighted nuclear norm minimization (WNNM) for HSI mixed denoising. However, LRTV and WNNM only consider the band-by-band TV model and ignore consistency and smoothness in the spectral domain, which significantly limits denoising performance, especially in the case of heavy Gaussian noise and structured sparse noise. More recently, Wang et al. [52] combined spatial-spectral TV with a low-rank constraint for HSI mixed denoising and further improved the denoising performance.
Another way to improve HSI mixed denoising performance is to incorporate three-dimensional total variation (3DTV) into the multiple-constraint framework. 3DTV, which promotes both spatial and spectral smoothness, is a natural extension of conventional 2D TV to cubic data. It has been utilized in several HSI applications, such as compressive sensing [53] and super-resolution [54]. In [53], 3DTV is defined as a 2D spatial TV plus a 1D spectral TV on the abundance fractions of endmembers for HSI unmixing compressive sensing. Later, 3DTV with the same definition as in [53] was combined with a low-rank tensor prior for HSI super-resolution [54].
Based on our previous research [55], in this paper, we propose a novel hyperspectral restoration method based on 3D anisotropic total variation (ATV) regularization and low-rank representation, termed 3DATVLR, where the ATV is defined by the $\ell_1$-norm TV along two directions in the spatial domain and the $\ell_1$-norm TV along one direction in the spectral domain. By treating an HSI as a 3D cube, this method exploits both the spatial and spectral information of HSIs and is able to preserve the spectra and edges while removing mixed noise. Compared to [54], the proposed 3DATV promotes different smoothness strengths for the spatial and spectral TV, whereas the 3DTV defined in [53] enforces the spatial TV and spectral TV equally; this makes 3DATV more flexible for dealing with different mixed noises. In addition, our 3DATVLR method addresses the mixed noise removal problem, while the method in [54] only considers Gaussian noise for the HSI super-resolution problem. More importantly, as analyzed in [56], only the unfolding matrices of the HSI tensor along the spectral mode and the self-similar mode are of low rank; enforcing low rankness on all three modes of the HSI tensor will reduce HSI denoising performance.
The contributions of this paper are summarized as follows:
  • By treating an HSI as a 3D cube, 3D anisotropic total variation (3DATV) is adopted to exploit the spatial smoothness and spectral consistency simultaneously along with the low-rank regularization.
  • The TV denoising process is updated iteratively in a spatial-spectral manner in the alternating direction method of multipliers (ADMM) algorithm instead of in a band-by-band manner, and it can be effectively solved by an n-D fast Fourier transform (nFFT).
  • When low-rank regularization fails to remove structured sparse noise, i.e., structured stripes or dead lines, 3DATV can effectively remove them by simultaneously exploring spatial and spectral consistency, whereas LRTV struggles in this case since its band-by-band TV preserves the stripes or dead lines along one of the directions.
The remainder of this paper is organized as follows: Section 2 formulates the restoration problem for HSIs. Section 3 describes the proposed 3DATVLR model and presents the details of the optimization algorithm. Section 4 illustrates the experimental results. Section 5 concludes the paper with some remarks.

2. Basic Formulation

2.1. Observation Model

First, we define some notation for the HSI restoration problem. Let $X \in \mathbb{R}^{m \times n \times l}$ represent a clean HSI cube with $m \times n$ spatial pixels and $l$ spectral bands. Then, the observed HSI cube $Y \in \mathbb{R}^{m \times n \times l}$, corrupted by Gaussian noise $N \in \mathbb{R}^{m \times n \times l}$ and sparse noise $S \in \mathbb{R}^{m \times n \times l}$, can be formulated as
$$Y = X + S + N. \tag{1}$$
This mixed-noise formulation was first utilized for hyperspectral restoration in [37] using LRMR. It accounts for the Gaussian noise caused by the imaging sensors and the modeling system, as well as the sparse noise, e.g., dead lines or stripes, created by push-broom sensors.
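As a concrete illustration of the degradation model in Equation (1), the following NumPy sketch corrupts a clean cube with dense Gaussian noise and sparse impulse noise. The function name, default noise levels, and the ±1 impulse model are illustrative assumptions, not settings taken from the paper.

```python
import numpy as np

def simulate_mixed_noise(X, sigma=0.1, density=0.15, seed=0):
    """Corrupt a clean HSI cube X (m x n x l) following Y = X + S + N:
    N is dense zero-mean Gaussian noise, S is sparse impulse noise
    supported on a random subset of entries. All parameter values
    here are illustrative defaults, not the paper's settings."""
    rng = np.random.default_rng(seed)
    N = sigma * rng.standard_normal(X.shape)             # Gaussian component
    S = np.zeros_like(X)
    support = rng.random(X.shape) < density              # sparse support
    S[support] = rng.choice([-1.0, 1.0], size=support.sum())  # impulses
    return X + S + N, S, N
```

Variants of this scheme with per-band noise levels, dead lines, and stripes are what the simulated experiments in Section 4.2 use.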

2.2. Restoration Model

With Equation (1) in mind, the restoration model for removing mixed Gaussian and sparse noise from HSIs can be formulated as
$$\min_{X,S} \; J_1(X) + \lambda J_2(S) \quad \text{s.t.} \quad \|Y - X - S\|_F^2 \le \varepsilon, \tag{2}$$
where $\lambda$ is a parameter that maintains the trade-off between the two prior terms, and $\varepsilon$ is a parameter related to the density of the Gaussian noise. $J_2(S)$ is usually modeled as a sparse regularization [37,38,39], i.e., $\|S\|_1 = \sum_i |s_i|$, where $s_i$ is an element of $S$. $J_1(X)$ is the regularization associated with the clean HSI $X$, which models its a priori properties.

2.3. Prior Regularizations

2.3.1. Low Rank Regularization

One choice for $J_1(X)$ is the nuclear norm $\|X\|_*$, which exploits the low-rank property by penalizing the sum of the singular values of $X$, and has shown good performance in HSI restoration [37,42].
The objective function involving the low rank regularization and sparse regularization is formulated as:
$$\min_{X,S} \; \|X\|_* + \lambda \|S\|_1 \quad \text{s.t.} \quad \|Y - X - S\|_F^2 \le \varepsilon. \tag{3}$$
Equation (3) is the well-known robust principal component analysis (RPCA) model, which has been widely used in natural image processing [48]. Even though it works well for HSI mixed denoising, it still has two main drawbacks:
  • Because the Gaussian noise is independently distributed in each pixel, Equation (3) cannot completely remove heavy Gaussian noise.
  • Equation (3) cannot remove structured sparse noise, i.e., stripes or dead lines located at the same place in each band, because the low-rank method will treat them as one of the low-rank components.

2.3.2. Total Variation Regularization

Another, complementary choice for $J_1(X)$ is the total variation, which can effectively promote spatial-spectral smoothness and thereby suppress heavy Gaussian noise.
Therefore, the band-by-band TV regularized low rank (LRTV) model is expressed as [42]:
$$\min_{X,S} \; \|X\|_* + \tau \|X\|_{HTV} + \lambda \|S\|_1 \quad \text{s.t.} \quad \|Y - X - S\|_F^2 \le \varepsilon, \tag{4}$$
where $\|X\|_{HTV} = \sum_{i=1}^{l} \|X_i\|_{tv}$ is the conventional band-by-band TV.
Benefiting from the band-by-band TV regularization, LRTV can produce more accurate results than LRMR, especially in the heavy Gaussian noise case. However, it still fails to remove structured sparse noise, because band-by-band TV instead preserves the stripes or dead lines along one of the directions, i.e., the horizontal or vertical direction.

3. 3D Anisotropic TV Regularized Low-Rank Model for HSI Restoration

In this section, we systematically present the mathematical formulation of the proposed method and illustrate its framework in Figure 1.

3.1. 3D Anisotropic Total Variation Regularization

A straightforward way to extend conventional 2D TV, which has been widely used in HSI analysis [57,58], to a 3D version for HSIs is to impose spatial and spectral total variation within a unified framework. Before doing so, we first analyze the statistical distribution of each gradient image, i.e., $D_h X$, $D_v X$, and $D_z X$. Figure 2 exhibits a visual illustration and statistics of each gradient image of the Washington DC dataset. It is clear that each spatial difference (i.e., $D_h X$, $D_v X$) obeys a Laplacian distribution, while the spectral difference obeys a Gaussian distribution. This phenomenon follows the rules of optical imaging across spectral bands. However, considering that sparse noise will still exist in the spectral difference image, we finally employ the $\ell_1$ norm to model the distributions in all three directions.
Hence, the formulation of 3D anisotropic TV can be expressed as follows:
$$\|X\|_{\mathrm{3DATV}} = \|D_h X\|_1 + \|D_v X\|_1 + \rho \|D_z X\|_1, \tag{5}$$
where $D_h$ and $D_v$ respectively denote the horizontal and vertical difference operators along the spatial dimensions, $D_z$ is the linear operator corresponding to the spectral difference, and $\rho$ controls the proportion of the spectral correlation. The 3DATV in Equation (5) exploits both spatial and spectral smoothness of HSIs, and its three operators are defined as
$$D_h X(i,j,k) = X(i+1,j,k) - X(i,j,k), \quad D_v X(i,j,k) = X(i,j+1,k) - X(i,j,k), \quad D_z X(i,j,k) = X(i,j,k+1) - X(i,j,k), \tag{6}$$
where $X(i,j,k)$ denotes the pixel at spatial location $(i,j)$ in the $k$th band. By enforcing the constraints on the two spatial dimensions and the one spectral dimension, piecewise smoothness in both domains can be pursued.
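The three difference operators and the resulting 3DATV norm are easy to sketch in NumPy. Periodic (circular) boundaries are assumed here so that the operators are diagonalized by the FFT used later in the optimization; the paper does not state its boundary handling, so this is one common choice, and the function names are ours.

```python
import numpy as np

# Forward first-order differences along the two spatial axes and the
# spectral axis, with periodic boundaries (an assumption; the paper
# does not specify how borders are handled).
def D_h(X):  # horizontal spatial difference: X(i+1,j,k) - X(i,j,k)
    return np.roll(X, -1, axis=0) - X

def D_v(X):  # vertical spatial difference: X(i,j+1,k) - X(i,j,k)
    return np.roll(X, -1, axis=1) - X

def D_z(X):  # spectral difference: X(i,j,k+1) - X(i,j,k)
    return np.roll(X, -1, axis=2) - X

def atv_3d(X, rho=0.5):
    """||X||_3DATV = ||D_h X||_1 + ||D_v X||_1 + rho * ||D_z X||_1."""
    return (np.abs(D_h(X)).sum() + np.abs(D_v(X)).sum()
            + rho * np.abs(D_z(X)).sum())
```

A constant cube has zero 3DATV, and an isolated bright pixel contributes along all three axes, scaled by $\rho$ in the spectral direction.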

3.2. Proposed 3DATVLR Model

By integrating the low-rank constraint and 3DATV regularization into a unified framework, we propose a new model for HSI restoration:
$$\min_{X,S \in \mathbb{R}^{m\times n\times l}} \; \|X\|_* + \lambda_{tv} \|X\|_{\mathrm{3DATV}} + \lambda_s \|S\|_1 \quad \text{s.t.} \quad \|Y - X - S\|_F^2 \le \varepsilon, \; \operatorname{rank}(X) \le r, \tag{7}$$
where $\lambda_{tv}$ and $\lambda_s$ are two parameters that balance the trade-off among the three terms, $\varepsilon$ represents the Gaussian noise level, and $r$ is the desired rank of the matrix $X$.
This 3D anisotropic total variation regularized low-rank model, termed 3DATVLR, can explore spectral similarity and spatial-spectral smoothness simultaneously. Not only can it remove heavy Gaussian noise and structured stripes or dead lines by pursuing spectral-spatial smoothness with 3DATV, but it can also remove random sparse noise, thus preserving the spectra and the fine spatial details of the original HSI.

3.3. Optimization Algorithm

To effectively solve the optimization problem in Equation (7), we first convert it into the following equivalent constrained formulation:
$$\min_{L,X,S} \; \|L\|_* + \lambda_{tv}\big(\|V_1\|_1 + \|V_2\|_1 + \rho\|V_3\|_1\big) + \lambda_s \|S\|_1 \quad \text{s.t.} \quad \|Y - L - S\|_F^2 \le \varepsilon, \; L = X, \; V_1 = D_h X, \; V_2 = D_v X, \; V_3 = D_z X, \; \operatorname{rank}(L) \le r. \tag{8}$$
At this point, Equation (7) has been separated into five simpler subproblems by the ADMM method [59], which solves it by minimizing the following augmented Lagrangian function of Equation (8):
$$\begin{aligned} \min \mathcal{L}(L, X, S) = \min_{L,X,S \in \mathbb{R}^{m\times n\times l}} \; & \|L\|_* + \lambda_{tv}\big(\|V_1\|_1 + \|V_2\|_1 + \rho\|V_3\|_1\big) + \lambda_s \|S\|_1 \\ & + \langle \Lambda_1, Y - L - S \rangle + \langle \Lambda_2, X - L \rangle + \langle \Lambda_3, V_1 - D_h X \rangle + \langle \Lambda_4, V_2 - D_v X \rangle + \langle \Lambda_5, V_3 - D_z X \rangle \\ & + \frac{\mu}{2}\big( \|Y - L - S\|_F^2 + \|X - L\|_F^2 + \|V_1 - D_h X\|_F^2 + \|V_2 - D_v X\|_F^2 + \|V_3 - D_z X\|_F^2 \big) \quad \text{s.t.} \; \operatorname{rank}(L) \le r, \end{aligned} \tag{9}$$
where $\mu$ is the penalty parameter and $\Lambda_i$, $i = 1, 2, \ldots, 5$, are the Lagrangian multipliers.
Generally, we optimize the Lagrangian Equation (9) iteratively over one variable while fixing the other variables. The detailed pseudo-code for solving the proposed model via ADMM is summarized in Algorithm 1.
Line 4 in Algorithm 1 updates the variable $L$; it is a nuclear-norm subproblem that can be effectively solved by the singular value shrinkage method [60], giving
$$L^{(k+1)} = \operatorname*{argmin}_{\operatorname{rank}(L) \le r} \mathcal{L}(L, X^{(k)}, S^{(k)}) = \operatorname*{argmin}_{\operatorname{rank}(L) \le r} \|L\|_* + \mu \Big\| L - \tfrac{1}{2}\big(Y + X^{(k)} - S^{(k)} + (\Lambda_1^{(k)} + \Lambda_2^{(k)})/\mu\big) \Big\|_F^2 = \mathcal{D}_{\frac{1}{2\mu}}\Big( \tfrac{1}{2}\big(Y + X^{(k)} - S^{(k)} + (\Lambda_1^{(k)} + \Lambda_2^{(k)})/\mu\big) \Big), \tag{10}$$
where
$$\mathcal{D}_{\lambda}(W) = U \mathcal{D}_{\lambda}(\Sigma_r) V^T = U \operatorname{diag}\{\max(\sigma_i - \lambda, 0)\}_{1 \le i \le r} V^T. \tag{11}$$
Here, $U \Sigma V^T$ is the singular value decomposition of the given matrix $W$, and $\{\sigma_i\}_{1 \le i \le r}$ are the first $r$ singular values of $W$ in descending order.
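The singular value shrinkage operator $\mathcal{D}_{\lambda}(W)$ defined above admits a direct NumPy sketch, with the rank constraint folded in by truncating the shrunk singular values; the function name and the optional `r` argument are our additions.

```python
import numpy as np

def svt(W, lam, r=None):
    """Singular value thresholding D_lam(W): soft-shrink the singular
    values of W by lam; optionally keep at most the first r of them,
    matching the rank constraint rank(L) <= r."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    s = np.maximum(s - lam, 0.0)       # shrink each singular value
    if r is not None:
        s[r:] = 0.0                    # enforce the desired rank
    return (U * s) @ Vt
```

In Algorithm 1, line 4 would apply this operator with threshold $1/(2\mu)$ to the unfolded matrix $\tfrac{1}{2}\big(Y + X^{(k)} - S^{(k)} + (\Lambda_1^{(k)} + \Lambda_2^{(k)})/\mu\big)$.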
Algorithm 1 Extended ADMM for the 3DATVLR method.
1: Input: HSI $Y \in \mathbb{R}^{m \times n \times l}$, desired rank $r$, regularization parameters $\lambda_{tv}$ and $\lambda_s$, maximum iteration $k_{max}$
2: Initialization: $L^{(0)} = X^{(0)} = S^{(0)} = 0$, $\Lambda_i^{(0)} = 0$, $i = 1, \ldots, 5$, $V_i^{(0)} = 0$, $i = 1, 2, 3$, $\mu = 0.05$, $\mu_{max} = 10^6$, $\gamma = 1.2$, $\rho > 0$, and $k = 0$
3: While $k < k_{max}$
4:   $L^{(k+1)} = \arg\min_{\operatorname{rank}(L) \le r} \mathcal{L}(L, X^{(k)}, S^{(k)})$
5:   $X^{(k+1)} = \arg\min_{X} \mathcal{L}(L^{(k+1)}, X, S^{(k)})$
6:   $S^{(k+1)} = \operatorname{SoftTH}(Y - L^{(k+1)} + \Lambda_1^{(k)}/\mu, \; \lambda_s/\mu)$
7:   $V_1^{(k+1)} = \operatorname{SoftTH}(D_h X^{(k+1)} - \Lambda_3^{(k)}/\mu, \; \lambda_{tv}/\mu)$
8:   $V_2^{(k+1)} = \operatorname{SoftTH}(D_v X^{(k+1)} - \Lambda_4^{(k)}/\mu, \; \lambda_{tv}/\mu)$
9:   $V_3^{(k+1)} = \operatorname{SoftTH}(D_z X^{(k+1)} - \Lambda_5^{(k)}/\mu, \; \rho\lambda_{tv}/\mu)$
10:  Update Lagrangian multipliers:
11:    $\Lambda_1^{(k+1)} = \Lambda_1^{(k)} + \mu(Y - L^{(k+1)} - S^{(k+1)})$
12:    $\Lambda_2^{(k+1)} = \Lambda_2^{(k)} + \mu(X^{(k+1)} - L^{(k+1)})$
13:    $\Lambda_3^{(k+1)} = \Lambda_3^{(k)} + \mu(V_1^{(k+1)} - D_h X^{(k+1)})$
14:    $\Lambda_4^{(k+1)} = \Lambda_4^{(k)} + \mu(V_2^{(k+1)} - D_v X^{(k+1)})$
15:    $\Lambda_5^{(k+1)} = \Lambda_5^{(k)} + \mu(V_3^{(k+1)} - D_z X^{(k+1)})$
16:  Update the penalty parameter $\mu = \min(\gamma\mu, \mu_{max})$
17:  Update the iteration $k = k + 1$
18: End While
Line 5 in Algorithm 1 solves a convex subproblem related to the variable $X$:
$$X^{(k+1)} = \operatorname*{argmin}_{X} \mathcal{L}(L^{(k+1)}, X, S^{(k)}) = \operatorname*{argmin}_{X} \Big\| X - L^{(k+1)} + \frac{\Lambda_2^{(k)}}{\mu} \Big\|_F^2 + \Big\| V_1^{(k)} - D_h X + \frac{\Lambda_3^{(k)}}{\mu} \Big\|_F^2 + \Big\| V_2^{(k)} - D_v X + \frac{\Lambda_4^{(k)}}{\mu} \Big\|_F^2 + \Big\| V_3^{(k)} - D_z X + \frac{\Lambda_5^{(k)}}{\mu} \Big\|_F^2. \tag{12}$$
The solution to Equation (12) is equivalent to solving the following linear system of equations:
$$(I + \Delta) X^{(k+1)} = L^{(k+1)} - \frac{\Lambda_2^{(k)}}{\mu} + \xi_1 + \xi_2 + \xi_3, \tag{13}$$
where
$$\Delta = D_h^T D_h + D_v^T D_v + D_z^T D_z, \quad \xi_1 = D_h^T\Big(V_1^{(k)} + \frac{\Lambda_3^{(k)}}{\mu}\Big), \quad \xi_2 = D_v^T\Big(V_2^{(k)} + \frac{\Lambda_4^{(k)}}{\mu}\Big), \quad \xi_3 = D_z^T\Big(V_3^{(k)} + \frac{\Lambda_5^{(k)}}{\mu}\Big), \tag{14}$$
and $(\cdot)^T$ denotes the matrix transpose. By treating $D_h X$, $D_v X$, and $D_z X$ as convolutions along the two spatial directions and the one spectral direction, this problem has a closed-form solution via the nFFT:
$$X^{(k+1)} = \mathcal{F}^{-1}\left( \frac{\mathcal{F}\big(L^{(k+1)} - \Lambda_2^{(k)}/\mu + \xi_1 + \xi_2 + \xi_3\big)}{1 + \sum_{i \in \{h,v,z\}} \mathcal{F}(D_i)^H \mathcal{F}(D_i)} \right), \tag{15}$$
where $\mathcal{F}$ is the nFFT operator, $\mathcal{F}^{-1}$ is the inverse nFFT operator, and the superscript $H$ denotes the complex conjugate.
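Under periodic boundary conditions (one common choice; the paper does not specify its boundary handling), each $D_i$ is a circular convolution, so the linear system above diagonalizes in the Fourier domain: along an axis of length $s$, $|\mathcal{F}(D_i)|^2$ has the closed form $4\sin^2(\pi k/s)$. A sketch of the resulting X-update, with our own function name:

```python
import numpy as np

def solve_x_update(rhs):
    """Solve (I + D_h^T D_h + D_v^T D_v + D_z^T D_z) X = rhs via the
    3-D FFT, assuming periodic-boundary forward differences so that
    each difference operator is diagonalized by the Fourier transform."""
    denom = np.ones(rhs.shape)
    for axis, size in enumerate(rhs.shape):
        k = np.arange(size)
        eig = 4.0 * np.sin(np.pi * k / size) ** 2  # |F(D_i)|^2 along this axis
        shape = [1, 1, 1]
        shape[axis] = size
        denom = denom + eig.reshape(shape)         # broadcast over the cube
    return np.real(np.fft.ifftn(np.fft.fftn(rhs) / denom))
```

In Algorithm 1, `rhs` would be the right-hand side $L^{(k+1)} - \Lambda_2^{(k)}/\mu + \xi_1 + \xi_2 + \xi_3$.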
For the remaining variables, i.e., $S$ and $V_i$, $i = 1, 2, 3$, the related subproblems can be solved easily by the element-wise soft threshold operator $\operatorname{SoftTH}(x, \lambda) = \operatorname{sign}(x) \cdot \max(|x| - \lambda/2, 0)$. More details can be found in [42,48].
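The soft-threshold operator above takes a couple of NumPy lines; the sketch follows the paper's convention of shrinking by $\lambda/2$ (the function name is ours).

```python
import numpy as np

def soft_th(x, lam):
    """Element-wise soft threshold as defined in the text:
    SoftTH(x, lam) = sign(x) * max(|x| - lam/2, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam / 2.0, 0.0)
```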
The convergence of Algorithm 1 using ADMM is guaranteed theoretically in [59]. Moreover, the convergence rate of the ADMM algorithm depends on the parameter $\mu$, which is initialized as $\mu = 0.05$ and then updated automatically according to the formula $\mu = \min(\gamma\mu, \mu_{max})$ in each iteration, with $\gamma = 1.05$ and $\mu_{max} = 10^6$. This strategy has proved to be an effective way of setting $\mu$ in practical applications [42,61].

3.4. Parameters Determination

As presented in Algorithm 1, the proposed method has four parameters in total: $\lambda_{tv}$, $\lambda_s$, the desired rank $r$, and the spectral proportion $\rho$. $\lambda_s$ is related to the density of the sparse noise and is set as $\lambda_s = 10/\sqrt{m \times n}$ in all the experiments. What remains to be discussed are the desired rank $r$, the 3DATV regularization parameter $\lambda_{tv}$, and the spectral proportion $\rho$.
According to the linear mixture model of hyperspectral images, the desired rank $r$ represents the number of endmembers in the scene, and it can be estimated by the HSI subspace identification method HySime [62]. However, due to the complexity of the mixed noise in the scene, it still needs to be tuned slightly.
The regularization parameter $\lambda_{tv}$ controls the trade-off between the nuclear norm and the 3DATV norm. As analyzed in [42], the role of the 3DATV regularization in (7) diminishes as the iterations progress. Thus, it is vital to determine the initial value of $\lambda_{tv}$.
The parameter $\rho$ controls the proportion of the spectral TV in the 3DATV term. When it is set to zero, the contribution of spectral smoothness is omitted and the proposed model degrades to the LRTV model. Generally, $\rho$ is set in the range $[0, 1]$ for most noise cases; a larger value of $\rho$ is recommended for heavy noise cases, and a smaller value for light ones.

4. Experimental Results and Discussion

To validate the effectiveness of the proposed method, the simulated Indian Pines and real Urban datasets are employed in the experiments. It is worth mentioning that the gray values of each band of the HSI are normalized to the range $[0, 1]$ and, after restoration, are stretched back to the original range.

4.1. Comparison Methods and Assessment Metrics

To thoroughly evaluate the performance of the proposed 3DATVLR method for HSI restoration, block-matching 4D filtering (BM4D) [17], PCA-based BM4D (PCABM4D) [18], parallel factor analysis (PARAFAC) [63], decomposable nonlocal tensor dictionary learning (TDL) [64], GoDec-based low-rank matrix recovery (LRMR) [37], the 3DTV model in Equation (3), and the band-by-band total variation regularized low-rank matrix factorization (LRTV) [42] are employed as benchmark methods. BM4D and PCABM4D are two representative tensor-based methods that exploit nonlocal similarities in the wavelet domain. PARAFAC is a powerful rank-1 multilinear algebra model, and TDL is a decomposable nonlocal tensor dictionary learning method; both are designed for Gaussian noise removal. LRMR is an outstanding low-rank method for HSI restoration, 3DTV is a 3D spatial-spectral TV based mixed noise removal method without a low-rank constraint, and LRTV is the method most relevant to the proposed one, employing a low-rank constraint with band-by-band TV regularization.
As in [37,46], the mean peak signal-to-noise ratio (MPSNR), the mean structural similarity (MSSIM) index [65], the mean feature similarity (MFSIM) index, the erreur relative globale adimensionnelle de synthèse (ERGAS) [66], and the mean spectral angle (MSA) are employed to quantitatively assess the HSI denoising results. Note that larger values of MPSNR, MSSIM, and MFSIM indicate better performance, while smaller values of ERGAS and MSA indicate better performance.
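For reference, two of these metrics are straightforward to compute. The sketch below gives MPSNR (per-band PSNR averaged over bands) and MSA (per-pixel spectral angle averaged over pixels), assuming both cubes are normalized to $[0, 1]$; the function names and the exact normalization are our assumptions, not the paper's implementation.

```python
import numpy as np

def mpsnr(ref, est, peak=1.0):
    """Mean PSNR over bands: average per-band PSNR (higher is better).
    Assumes both cubes are normalized to [0, 1]."""
    psnrs = []
    for k in range(ref.shape[2]):
        mse = np.mean((ref[:, :, k] - est[:, :, k]) ** 2)
        psnrs.append(10 * np.log10(peak ** 2 / mse))
    return float(np.mean(psnrs))

def msa(ref, est, eps=1e-12):
    """Mean spectral angle in degrees between per-pixel spectra
    (lower is better)."""
    r = ref.reshape(-1, ref.shape[2])
    e = est.reshape(-1, est.shape[2])
    cos = np.sum(r * e, axis=1) / (np.linalg.norm(r, axis=1)
                                   * np.linalg.norm(e, axis=1) + eps)
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))).mean())
```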

4.2. Experiments on Simulated Datasets

4.2.1. Datasets and Experimental Settings

Following the experiments in [42,67], the simulated data were generated using the ground truth of the Indian Pines dataset (https://engineering.purdue.edu/biehl/MultiSpec/hyperspectral.html), with spectral signatures extracted from the USGS digital library (http://speclab.cr.usgs.gov/spectral.lib). The pixels of each class were replaced by a specific spectrum from the library, and all unlabeled pixels were replaced by one common signature. The final scene has 17 pure classes with 17 endmembers from the USGS library, and the size of the scene is $145 \times 145 \times 224$. The simulated dataset, illustrated in Figure 3, is available online (https://sites.google.com/site/rshewei/home).
After generating the clean dataset, six noise cases were added to simulate real HSI data according to the following rules:
(1) Case 1: Zero-mean Gaussian noise with the same variance of 0.01 was added to each band.
(2) Case 2: Zero-mean Gaussian noise with the same variance of 0.01, as well as impulse noise with the same density of 15%, was added to each band.
(3) Case 3: The Gaussian noise and impulse noise were added as in Case 2. In addition, dead lines were added from band 111 to band 150, with the number of dead lines per band randomly selected from 3 to 10 and the width of each dead line randomly generated from 1 to 3.
(4) Case 4: The noise intensity differed for each band. First, zero-mean Gaussian noise with variances varying from 0 to 0.02 was added to each band randomly. Second, impulse noise with a density varying from 0 to 20% was randomly added to each band. Finally, dead lines were simulated as in Case 3.
(5) Case 5: The Gaussian noise, impulse noise, and dead lines were simulated as in Case 4. In addition, periodic stripes were added from band 146 to band 165, with 30 stripes in each band.
(6) Case 6: The Gaussian noise, impulse noise, and stripes were simulated as in Case 5. However, the dead lines were located at the same place in 40 bands randomly selected from all bands, with 15 dead lines in each band. This case simulates structured sparse noise, the most severe noise situation.
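As an illustration, the per-band noise of Case 4 above can be sketched as follows. The vertical orientation of the dead lines, the 0/1 salt-and-pepper impulse values, and the function name are our assumptions where the text leaves details open.

```python
import numpy as np

def add_case4_noise(X, seed=0):
    """Sketch of Case 4: per-band zero-mean Gaussian noise with variance
    drawn from [0, 0.02], per-band impulse (salt-and-pepper) noise with
    density drawn from [0, 0.2], and 3-10 dead lines of width 1-3 in
    bands 111-150 (as in Case 3). Orientation and impulse values are
    illustrative assumptions."""
    rng = np.random.default_rng(seed)
    m, n, l = X.shape
    Y = X.copy()
    for k in range(l):
        var = rng.uniform(0.0, 0.02)                     # per-band variance
        Y[:, :, k] += np.sqrt(var) * rng.standard_normal((m, n))
        mask = rng.random((m, n)) < rng.uniform(0.0, 0.2)  # per-band density
        Y[:, :, k][mask] = rng.choice([0.0, 1.0], size=mask.sum())
    for k in range(110, min(150, l)):                    # bands 111-150 (0-based)
        for _ in range(rng.integers(3, 11)):             # 3-10 dead lines
            j = rng.integers(0, n)
            Y[:, j:j + rng.integers(1, 4), k] = 0.0      # width 1-3, zeroed
    return Y
```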
For the simulated Indian Pines datasets with different noise levels, the parameters of the competing algorithms were set as follows. BM4D and PARAFAC are parameter-free. For PCABM4D, the only parameter is the number of leading principal components $k$, generally chosen from $\{0, 1, 2, 3\}$. For TDL, the only parameter is the standard deviation of the noise, which was set to 0.1. For 3DTV, we set $\lambda_{tv} = 0.01$ and $\rho = 10$. For LRMR, we set the window size $q = 20$, rank $r = 5$, and sparsity cardinality $k = 4000$. For LRTV, we set the band-by-band TV parameter $\tau = 0.005$ and rank $r = 5$ according to the corresponding references. For the proposed 3DATVLR method, $\lambda_{tv}$ was set to 0.005 and the desired rank $r$ to 6 to obtain optimal results. Note that the parameters of the competing methods were tuned slightly for different noise intensities to obtain optimal results; a detailed description of the parameters is provided in Table 1.

4.2.2. Visual Performance Evaluation

To thoroughly demonstrate the performance of the competing methods, the results for Case 4, Case 5, and Case 6 are chosen for illustration. Figure 4 shows the false-color composites and corresponding zoomed-in portions of the denoising results under the Case 4 noise. Figure 5 shows the mean spectra of the difference between the original spectra and the reconstructed ones in class 4 of the simulated Indian Pines dataset. It is obvious that 3DATVLR produces the best visual quality among the competing methods: it reconstructs the result closest to the original in Figure 4j and plots the smoothest mean difference spectrum in Figure 5i. The BM4D, PCABM4D, PARAFAC, and TDL methods all remove the Gaussian noise well; however, none of them can remove the sparse noise completely, and they produce some distortions around it. LRMR performs the worst at preserving spatial details among the four mixed-denoising methods due to its poor ability to remove heavy Gaussian noise, as observed in the zoomed-in portion of Figure 4h. 3DTV can remove the mixed noise effectively, but the fine details of the images are somewhat over-smoothed, as observed in Figure 4f. The false-color image in the zoomed-in portion of Figure 4i and the mean difference spectrum in Figure 5h both show that, even though LRTV can remove the Gaussian noise and impulse noise effectively, it fails to preserve the spectra and fine spatial details; see the ellipse and rectangle zones in Figure 4i.
Figure 6 and Figure 7 illustrate the denoising results of band 123 in Case 5 and band 166 in Case 6 of the simulated dataset, respectively. Both bands are heavily corrupted by Gaussian noise, impulse noise, and dead lines; the difference is that the dead lines in Case 6 are located at the same positions across the 40 randomly selected bands. These figures readily lead us to conclude that the proposed 3DATVLR method performs best in both cases. The BM4D, PCABM4D, PARAFAC and TDL methods all fail to remove the impulse noise and dead lines. The LRMR method can easily get rid of the dead lines when they are unstructured; however, it is still not good at removing the heavy Gaussian noise and the structured dead lines. The LRTV method behaves similarly: it improves the removal of heavy Gaussian noise but still fails to get rid of the structured dead lines.
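These mixed-noise cases can be reproduced with a short sketch. The function below is a hypothetical illustration only (the exact noise intensities and numbers of corrupted bands follow the paper's simulation setup, which is not reproduced here); it adds zero-mean Gaussian noise, salt-and-pepper impulse noise, and either structured or unstructured dead lines to an HSI cube:

```python
import numpy as np

def add_mixed_noise(cube, sigma=0.1, impulse_ratio=0.2,
                    n_corrupted_bands=40, n_dead_cols=3,
                    structured=False, seed=0):
    """Corrupt an HSI cube (rows x cols x bands) with zero-mean Gaussian
    noise, salt-and-pepper impulse noise, and dead lines.  With
    structured=True, the dead lines occupy the same columns in every
    corrupted band (the Case 6 setting); otherwise their positions are
    drawn independently per band (the Case 5 setting)."""
    rng = np.random.default_rng(seed)
    noisy = cube + sigma * rng.standard_normal(cube.shape)
    # Salt-and-pepper impulse noise on every band.
    mask = rng.random(cube.shape) < impulse_ratio
    noisy[mask] = rng.choice([0.0, 1.0], size=int(mask.sum()))
    # Dead lines: zero a few columns in randomly selected bands.
    rows, cols, bands = cube.shape
    band_idx = rng.choice(bands, n_corrupted_bands, replace=False)
    if structured:
        dead_cols = rng.choice(cols, n_dead_cols, replace=False)
    for b in band_idx:
        if not structured:
            dead_cols = rng.choice(cols, n_dead_cols, replace=False)
        noisy[:, dead_cols, b] = 0.0
    return np.clip(noisy, 0.0, 1.0)
```

Running the same cube through both settings makes the difference visible: with `structured=True`, every corrupted band shares the same zeroed columns, which is exactly the pattern that low-rank models struggle to separate from the signal.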

4.2.3. Quantitative Performance Evaluation

Table 2 reports the quantitative results in terms of MPSNR, MSSIM, MFSIM, ERGAS and MSA for the eight denoising methods above under different noise intensities. From Table 2, it can be observed that the proposed 3DATVLR method is clearly superior in all assessment indexes among all the compared methods at each noise intensity. In addition, the MPSNR, MSSIM, MFSIM, ERGAS and MSA values are highly consistent with the visual evaluations. Table 2 also reveals a notable phenomenon: when stripe noise or dead lines are present, the LRTV method preserves the spectra even worse than LRMR does; see the MSA values in Cases 4, 5 and 6.
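For reference, the two indexes most discussed here can be computed as follows. This is a generic sketch of MPSNR (the per-band PSNR averaged over bands) and MSA (the spectral angle averaged over pixels), not the authors' evaluation code:

```python
import numpy as np

def mpsnr(ref, rec, peak=1.0):
    """Mean PSNR over bands; ref/rec are rows x cols x bands arrays
    scaled to [0, peak]."""
    mse = ((ref - rec) ** 2).mean(axis=(0, 1))   # one MSE per band
    return float((10.0 * np.log10(peak ** 2 / mse)).mean())

def msa(ref, rec, eps=1e-12):
    """Mean spectral angle (radians) between corresponding pixel
    spectra; invariant to per-pixel scaling of the spectra."""
    a = ref.reshape(-1, ref.shape[2])
    b = rec.reshape(-1, rec.shape[2])
    cos = (a * b).sum(axis=1) / (
        np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + eps)
    return float(np.arccos(np.clip(cos, -1.0, 1.0)).mean())
```

A uniform perturbation of 0.01 on a [0, 1] cube gives an MPSNR of exactly 40 dB, while scaling every spectrum leaves the MSA at (numerically) zero, which is why MSA isolates spectral-shape distortion from intensity error.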
Figure 8 presents the PSNR and SSIM values in each band of the reconstructed results for the different restoration algorithms in all cases. The PSNR and SSIM values of almost all bands obtained by the proposed 3DATVLR are higher than those of the other seven competing algorithms, which once again validates the effectiveness of the proposed method at removing the mixed noise of HSIs. In addition, it is easy to see that the LRTV method always fails to denoise one or two bands of the HSI when there are stripes or dead lines. This may explain why LRTV yields worse results than LRMR in preserving the spectra of the data in the presence of stripe noise or dead lines.

4.2.4. Parameter Sensitivity Analysis

Figure 9 presents the MPSNR, MSSIM and MSA values as functions of the regularization parameter λtv and the desired rank r. Both parameters affect the performance: the rank r is inherently related to the number of endmembers in the scene, while λtv determines the strength of the 3DATV regularization. Even so, the proposed method is relatively robust and stable with respect to these two parameters, and it produces competitive results when λtv and r lie in the ranges [0.003, 0.01] and [4, 8], respectively.
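The reported robustness over these ranges can be checked with a simple grid search. The sketch below assumes a hypothetical `denoise_fn(noisy, lam_tv, rank)` standing in for the 3DATVLR solver (its signature is an assumption, not the paper's API) and selects the (λtv, r) pair that maximizes MPSNR against a known ground truth:

```python
import numpy as np
from itertools import product

def select_parameters(noisy, clean, denoise_fn,
                      lambdas=(0.003, 0.005, 0.01),
                      ranks=(4, 6, 8)):
    """Return the (lambda_tv, rank) pair maximizing MPSNR on a cube
    with known ground truth.  denoise_fn is a placeholder for the
    actual solver, not the authors' implementation."""
    def mpsnr(ref, rec):
        # Small epsilon avoids a divide-by-zero for a perfect result.
        mse = ((ref - rec) ** 2).mean(axis=(0, 1)) + 1e-12
        return (10.0 * np.log10(1.0 / mse)).mean()
    return max(product(lambdas, ranks),
               key=lambda p: mpsnr(clean, denoise_fn(noisy,
                                                     lam_tv=p[0],
                                                     rank=p[1])))
```

In practice the same loop can be run on a small simulated crop; a flat MPSNR surface over the grid is exactly the robustness Figure 9 reports.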
Figure 10 plots the MPSNR values as a function of the parameter ρ. As can be seen, ρ clearly affects the reconstruction results, which means that the spectral difference term in the 3DATV regularization contributes to improving the performance of the proposed method. In addition, if the simulated dataset contains only Gaussian and impulse noise, optimal results are obtained when ρ lies in the range [0, 1]; if stripe noise or dead lines are present, a larger value, e.g., ρ = 5, is needed to significantly improve the performance.
Figure 11 plots the convergence curves (MPSNR and MSSIM as functions of the iteration number) of the proposed method. The algorithm clearly converges rapidly as the number of iterations increases, which experimentally validates that the proposed 3DATVLR method has a fast convergence rate.
In summary, the experimental results on the simulated Indian Pines dataset all indicate that the proposed 3DATVLR outperforms the other seven state-of-the-art restoration methods in terms of quantitative assessment and visual quality. In addition, only the proposed 3DATVLR method can remove the structured sparse noise completely.

4.3. Experiments on a Real Dataset

The Hyperspectral Digital Imagery Collection Experiment (HYDICE) Urban dataset (http://www.tec.army.mil/Hypercube/) is adopted for the real-data experiments. The size of the original scene is 307 × 307 × 210. Even though bands 104–108, 139–151 and 207–210 are heavily polluted by the atmosphere and water absorption, we retain them to thoroughly validate the performance of the proposed method in removing severe mixed noise; therefore, the full 307 × 307 × 210 cube is employed in the experiment. The image of band 139 is illustrated in Figure 12a, in which we can observe heavy Gaussian noise, stripe noise and dead lines, as well as uneven illumination. In this experiment, an additional state-of-the-art denoising method, noise-adjusted iterative low-rank matrix approximation (NAILRMA) [39], is added for comparison.
Figure 12 illustrates the image of band 139 and the corresponding zoomed-in portions of the denoising results on the real Urban dataset. It clearly shows that the proposed method produces the best visual quality. From the zoomed-in portion of Figure 12e, it is obvious that 3DTV removes the stripe and Gaussian noise effectively, but the restored image is somewhat oversmoothed. From Figure 12g–i, we can see that LRMR, NAILRMA, and LRTV all fail to remove the stripe noise; this is because the stripes in the Urban dataset are all located at the same positions from band 108 to band 140. Moreover, a small amount of Gaussian noise remains in the images restored by LRMR and NAILRMA, while the proposed 3DATVLR method promotes proper smoothness and better preserves the fine details.
Figure 13 presents the image of band 151 and the corresponding zoomed-in portions of the denoising results on the real Urban dataset, from which we can draw a conclusion similar to that of Figure 12. That is, the BM4D, PCABM4D, PARAFAC and TDL methods remove the heavy Gaussian noise to some extent, but all of them fail to remove the stripes and dead lines. All of the LR-based methods, i.e., LRMR, NAILRMA and LRTV, fail to remove the structured stripes in the scene. Benefiting from the 3DATV regularization, the proposed 3DATVLR succeeds in getting rid of the mixed noise completely while preserving the fine spatial details well.
Figure 14 plots the horizontal mean profile of band 151 before and after restoration. Owing to the mixed Gaussian and stripe noise, there are rapid fluctuations in the original curve. All eight competing methods suppress the mixed noise to some extent, and our method appears to perform best, since it produces the smoothest curve that remains closest to the trend of the original one.
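The profile in Figure 14 can be computed directly from the cube. The sketch below takes the mean of each image column of a band (one common reading of "horizontal mean profile"; the exact convention used in the paper is an assumption here) and uses the sum of absolute first differences as a crude proxy for the rapid fluctuations that stripe noise introduces:

```python
import numpy as np

def horizontal_mean_profile(cube, band):
    """Mean of each column of the chosen band, plotted against column
    index; vertical stripes or dead lines appear as spikes or dips."""
    return cube[:, :, band].mean(axis=0)

def profile_roughness(profile):
    """Sum of absolute first differences of the profile: a simple
    proxy for the 'rapid fluctuations' a striped band exhibits."""
    return float(np.abs(np.diff(profile)).sum())
```

Comparing the roughness of the profiles before and after restoration gives a one-number summary of how well each method flattens the stripe artifacts without erasing the underlying scene trend.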
To assess the restoration of spatial information, the Q-metric [68], a blind measure of image content, is computed by averaging the Q values over all bands (aQ). Given the poor performance of the BM4D, PCABM4D, PARAFAC and TDL methods in the above experiments, we do not calculate their aQ values for comparison. The results in Table 3 demonstrate that our method outperforms the other state-of-the-art methods in restoring spatial details. The computation times (on a laptop with a 2.5 GHz CPU and 8 GB of RAM, running MATLAB 2016a) of the competing methods on the Urban dataset are also presented in Table 3, from which we can conclude that our method is only slightly slower than LRTV.

5. Conclusions

Hyperspectral image mixed denoising is an important and critical pre-processing step for subsequent applications. In this paper, we present a novel restoration method for HSI mixed denoising that combines a low-rank constraint with 3DATV regularization. The proposed 3DATVLR method effectively exploits the spectral similarity through low-rank representation and explores the spectral-spatial smoothness of the HSI cube through 3D anisotropic TV regularization. The problem is finally formulated as a combination of convex regularizations (i.e., low-rank regularization and 3DATV regularization) that can easily be solved via ADMM by alternately splitting it into several easier subproblems.
This study mainly concerns HSI denoising with mixed noise, e.g., Gaussian noise, impulse noise and stripes or dead lines. Two hyperspectral datasets, the simulated Indian Pines and the real Urban scene, were used for the simulated and real-data experiments, respectively. Both experiments validate that the 3DATV regularization can effectively alleviate the impact of structured sparse noise and thereby improve the accuracy of the denoising results. In addition, both experiments provide evidence that the proposed 3DATVLR method clearly outperforms the state-of-the-art methods at removing the mixed noise and preserving the spectra of the original HSI. The LRTV method often fails to remove the sparse noise when severe stripes or dead lines exist in the scene; the reason may be that the band-by-band TV shares a direction with the stripes or dead lines, whereas 3DATVLR can effectively deal with such noise by taking advantage of the spectral TV.
In the near future, we will implement the proposed 3DATVLR method on high-performance computing platforms, such as NVIDIA CUDA and cloud computing, to accelerate it for more applications.

Author Contributions

L.S. and T.Z. conceived and designed the experiments; L.S. performed the experiments and analyzed the results; L.S. and Z.W. analyzed the data; L.S. and B.J. wrote and revised the paper.

Funding

This study was funded by the National Natural Science Foundation of China (Nos. 61601236, 61502206, 61571230), the Natural Science Foundation of Jiangsu Province (No. BK20150923), the PAPD fund (a project funded by the priority academic program development of Jiangsu Higher Education Institutions), and the National Research Foundation (NRF) of Korea grant funded by MSIP (NRF-2016R1D1A1B03934305, NRF-2016R1D1A1B03933294).

Acknowledgments

The authors are grateful to the three anonymous reviewers for their constructive comments, which helped improve the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bioucas-Dias, J.M.; Plaza, A.; Camps-Valls, G.; Scheunders, P.; Nasrabadi, N.; Chanussot, J. Hyperspectral remote sensing data analysis and future challenges. IEEE Geosci. Remote Sens. Mag. 2013, 1, 6–36. [Google Scholar] [CrossRef]
  2. Sun, L.; Wu, Z.; Xiao, L.; Liu, J.; Wei, Z.; Dang, F. A novel l1/2 sparse regression method for hyperspectral unmixing. Int. J. Remote Sens. 2013, 34, 6983–7001. [Google Scholar] [CrossRef]
  3. Sun, L.; Ge, W.; Chen, Y.; Zhang, J.; Jeon, B. Hyperspectral unmixing employing l1-l2 sparsity and total variation regularization. Int. J. Remote Sens. 2018, 34, 1–24. [Google Scholar] [CrossRef]
  4. Wu, Z.; Shi, L.; Li, J.; Wang, Q.; Sun, L.; Wei, Z.; Plaza, J.; Plaza, A. GPU parallel implementation of spatially adaptive hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 1131–1143. [Google Scholar] [CrossRef]
  5. Gu, B.; Sheng, V.S. A robust regularization path algorithm for ν-support vector classification. IEEE Trans. Neural Netw. Learn. Syst. 2017, 28, 1241–1248. [Google Scholar] [CrossRef] [PubMed]
  6. Zhang, L.; Zhao, C. A spectral-spatial method based on low-rank and sparse matrix decomposition for hyperspectral anomaly detection. Int. J. Remote Sens. 2017, 38, 4047–4068. [Google Scholar] [CrossRef]
  7. Elad, M.; Aharon, M. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans. Image Process. 2006, 15, 3736–3745. [Google Scholar] [CrossRef] [PubMed]
  8. Buades, A.; Coll, B.; Morel, J. A non-local algorithm for image denoising. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–25 June 2005; Volume 2, pp. 60–65. [Google Scholar] [CrossRef]
  9. Kumar, B.S. Image denoising based on non-local means filter and its method noise thresholding. Signal Image Video Process. 2013, 7, 1211–1227. [Google Scholar] [CrossRef]
  10. Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 2007, 16, 2080–2095. [Google Scholar] [CrossRef] [PubMed]
  11. Wang, J.; Li, T.; Shi, Y.Q.; Lian, S.; Ye, J. Forensics feature analysis in quaternion wavelet domain for distinguishing photographic images and computer graphics. Multimed. Tools Appl. 2017, 76, 23721–23737. [Google Scholar] [CrossRef]
  12. Xiong, L.; Xu, Z.; Shi, Y.Q. An integer wavelet transform based scheme for reversible data hiding in encrypted images. Multidimens. Syst. Signal Process. 2017, 29, 1–12. [Google Scholar] [CrossRef]
  13. Wang, J.; Lian, S.; Shi, Y.Q. Hybrid multiplicative multi-watermarking in DWT domain. Multidimens. Syst. Signal Process. 2017, 28, 617–636. [Google Scholar] [CrossRef]
  14. Yuan, C.; Sun, X.; Lv, R. Fingerprint liveness detection based on multi-scale LPQ and PCA. China Commun. 2016, 13, 60–65. [Google Scholar] [CrossRef]
  15. Othman, H.; Qian, S. Noise reduction of hyperspectral imagery using hybrid spatial-spectral derivative-domain wavelet shrinkage. IEEE Trans. Geosci. Remote Sens. 2006, 44, 397–408. [Google Scholar] [CrossRef]
  16. Chen, G.; Qian, S. Simultaneous dimensionality reduction and denoising of hyperspectral imagery using bivariate wavelet shrinking and principal component analysis. Can. J. Remote Sens. 2008, 34, 447–454. [Google Scholar] [CrossRef]
  17. Chen, G.; Qian, S.E. Denoising of hyperspectral imagery using principal component analysis and wavelet shrinkage. IEEE Trans. Geosci. Remote Sens. 2011, 49, 973–980. [Google Scholar] [CrossRef]
  18. Chen, G.; Bui, T.D.; Quach, K.G.; Qian, S.E. Denoising hyperspectral imagery using principal component analysis and block-matching 4D filtering. Can. J. Remote Sens. 2014, 40, 60–66. [Google Scholar] [CrossRef]
  19. Brook, A. Three-dimensional wavelets-based denoising of hyperspectral imagery. J. Electron. Imaging 2015, 24, 013034. [Google Scholar] [CrossRef]
  20. Rasti, B.; Sveinsson, J.; Ulfarsson, M.; Benediktsson, J. Hyperspectral image denoising using first order spectral roughness penalty in wavelet domain. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2458–2467. [Google Scholar] [CrossRef]
  21. Rasti, B.; Sveinsson, J.R.; Ulfarsson, M.O. Wavelet-based sparse reduced-rank regression for hyperspectral image restoration. IEEE Trans. Geosci. Remote Sens. 2014, 52, 6688–6698. [Google Scholar] [CrossRef]
  22. Renard, N.; Bourennane, S.; Blanc-Talon, J. Denoising and dimensionality reduction using multilinear tools for hyperspectral images. IEEE Geosci. Remote Sens. Lett. 2008, 5, 138–142. [Google Scholar] [CrossRef]
  23. Guo, X.; Huang, X.; Zhang, L.; Zhang, L. Hyperspectral image noise reduction based on rank-1 tensor decomposition. ISPRS J. Photogramm. Remote Sens. 2013, 83, 50–63. [Google Scholar] [CrossRef]
  24. Lu, T.; Li, S.; Fang, L.; Ma, Y.; Benediktsson, J. Spectral–spatial adaptive sparse representation for hyperspectral image denoising. IEEE Trans. Geosci. Remote Sens. 2016, 54, 373–385. [Google Scholar] [CrossRef]
  25. Li, J.; Yuan, Q.; Shen, H.; Zhang, L. Noise removal from hyperspectral image with joint spectral–spatial distributed sparse representation. IEEE Trans. Geosci. Remote Sens. 2016, 54, 5425–5439. [Google Scholar] [CrossRef]
  26. Fu, Y.; Lam, A.; Sato, I.; Sato, Y. Adaptive spatial-spectral dictionary learning for hyperspectral image restoration. Int. J. Comput. Vis. 2017, 122, 228–245. [Google Scholar] [CrossRef]
  27. Tian, Q.; Chen, S. Cross-heterogeneous-database age estimation through correlation representation learning. Neurocomputing 2017, 238, 286–295. [Google Scholar] [CrossRef]
  28. Yuan, Y.; Zheng, X.; Lu, X. Spectral–spatial kernel regularized for hyperspectral image denoising. IEEE Trans. Geosci. Remote Sens. 2015, 53, 3815–3832. [Google Scholar] [CrossRef]
  29. Xie, W.; Li, Y. Hyperspectral imagery denoising by deep learning with trainable nonlinearity function. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1963–1967. [Google Scholar] [CrossRef]
  30. Wang, B.; Gu, X.; Ma, L.; Yan, S. Temperature error correction based on BP neural network in meteorological wireless sensor network. Int. J. Sens. Netw. 2017, 23, 265–278. [Google Scholar] [CrossRef]
  31. Qu, Z.; Keeney, J.; Robitzsch, S.; Zaman, F.; Wang, X. Multilevel pattern mining architecture for automatic network monitoring in heterogeneous wireless communication networks. China Commun. 2016, 13, 108–116. [Google Scholar] [CrossRef]
  32. Xu, L.; Li, F.; Sato, I.; Wong, A.; Clausi, D. Hyperspectral image denoising using a spatial–spectral Monte Carlo sampling approach. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 3025–3038. [Google Scholar] [CrossRef]
  33. Zheng, Y.; Jeon, B.; Sun, L.; Zhang, J.; Zhang, H. Student’s t-hidden Markov model for unsupervised learning using localized feature selection. IEEE Trans. Circuits Syst. Video Technol. 2017, 1–14. [Google Scholar] [CrossRef]
  34. Xu, Y.; Wu, Z.; Li, J.; Plaza, A.; Wei, Z. Anomaly detection in hyperspectral images based on low-rank and sparse representation. IEEE Trans. Geosci. Remote Sens. 2016, 54, 1990–2000. [Google Scholar] [CrossRef]
  35. Sun, L.; Wang, S.; Wang, J.; Zheng, Y.; Jeon, B. Hyperspectral classification employing spatial–spectral low rank representation in hidden fields. J. Ambient Intell. Hum. Comput. 2017, 1–12. [Google Scholar] [CrossRef]
  36. Lu, X.; Wang, Y.; Yuan, Y. Graph-regularized low-rank representation for destriping of hyperspectral images. IEEE Trans. Geosci. Remote Sens. 2013, 51, 4009–4018. [Google Scholar] [CrossRef]
  37. Zhang, H.; He, W.; Zhang, L.; Shen, H.; Yuan, Q. Hyperspectral image restoration using low-rank matrix recovery. IEEE Trans. Geosci. Remote Sens. 2014, 52, 4729–4743. [Google Scholar] [CrossRef]
  38. Zhu, R.; Dong, M.; Xue, J. Spectral nonlocal restoration of hyperspectral images with low-rank property. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 3062–3067. [Google Scholar] [CrossRef]
  39. He, W.; Zhang, H.; Zhang, L.; Shen, H. Hyperspectral image denoising via noise-adjusted iterative low-rank matrix approximation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 3050–3061. [Google Scholar] [CrossRef]
  40. Wang, M.; Yu, J.; Xue, J.; Sun, W. Denoising of hyperspectral images using group low-rank representation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 4420–4427. [Google Scholar] [CrossRef]
  41. Sun, L.; Jeon, B.; Zheng, Y.; Wu, Z. Hyperspectral image restoration using low-rank representation on spectral difference image. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1150–1155. [Google Scholar] [CrossRef]
  42. He, W.; Zhang, H.; Zhang, L.; Shen, H. Total-variation-regularized low-rank matrix factorization for hyperspectral image restoration. IEEE Trans. Geosci. Remote Sens. 2016, 54, 178–188. [Google Scholar] [CrossRef]
  43. Sun, L.; Jeon, B.; Soomro, B.N.; Zheng, Y.; Wu, Z.; Xiao, L. Fast superpixel based subspace low rank learning method for hyperspectral denoising. IEEE Access 2018, 6, 12031–12043. [Google Scholar] [CrossRef]
  44. Cao, W.; Wang, K.; Han, G.; Yao, J.; Cichocki, A. A Robust PCA Approach With Noise Structure Learning and Spatial–Spectral Low-Rank Modeling for Hyperspectral Image Restoration. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 1, 1–17. [Google Scholar] [CrossRef]
  45. Chang, X.; Zhong, Y.; Wang, Y.; Lin, S. Unified Low-Rank Matrix Estimate via Penalized Matrix Least Squares Approximation. IEEE Trans. Neural Netw. Learn. Syst. 2018, 1, 1–12. [Google Scholar] [CrossRef] [PubMed]
  46. Yuan, Q.; Zhang, L.; Shen, H. Hyperspectral image denoising employing a spectral–spatial adaptive total variation model. IEEE Trans. Geosci. Remote Sens. 2012, 50, 3660–3677. [Google Scholar] [CrossRef]
  47. Zhang, H. Hyperspectral image denoising with cubic total variation model. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, 7, 95–98. [Google Scholar] [CrossRef]
  48. Chang, Y.; Yan, L.; Fang, H.; Luo, C. Anisotropic spectral-spatial total variation model for multispectral remote sensing image destriping. IEEE Trans. Image Process. 2015, 24, 1852–1866. [Google Scholar] [CrossRef] [PubMed]
  49. Goldluecke, B.; Cremers, D. An approach to vectorial total variation based on geometric measure theory. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), San Francisco, CA, USA, 13–18 June 2010; pp. 327–333. [Google Scholar]
  50. Aggarwal, H.K.; Majumdar, A. Hyperspectral image denoising using spatio-spectral total variation. IEEE Geosci. Remote Sens. Lett. 2016, 13, 442–446. [Google Scholar] [CrossRef]
  51. Wu, Z.; Wang, Q.; Wu, Z.; Shen, Y. Total variation-regularized weighted nuclear norm minimization for hyperspectral image mixed denoising. J. Electron. Imaging 2016, 25, 013037. [Google Scholar] [CrossRef]
  52. Wang, Q.; Wu, Z.; Jin, J.; Wang, T.; Shen, Y. Low rank constraint and spatial spectral total variation for hyperspectral image mixed denoising. Signal Process. 2018, 142, 11–26. [Google Scholar] [CrossRef]
  53. Zhang, L.; Zhang, Y.; Wei, W.; Li, F. 3D total variation hyperspectral compressive sensing using unmixing. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Quebec City, QC, Canada, 13–18 July 2014; pp. 2961–2964. [Google Scholar]
  54. He, S.; Zhou, H.; Wang, Y.; Cao, W.; Han, Z. Super-resolution reconstruction of hyperspectral images via low rank tensor modeling and total variation regularization. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016; pp. 6962–6965. [Google Scholar]
  55. Sun, L.; Zheng, Y.; Jeon, B. Hyperspectral restoration employing low rank and 3d total variation regularization. In Proceedings of the International Conference on Progress in Informatics and Computing (PIC), Shanghai, China, 23–25 December 2016; pp. 326–329. [Google Scholar] [CrossRef]
  56. Chang, Y.; Yan, L.; Zhong, S. Hyper-laplacian regularized unidirectional low-rank tensor recovery for multispectral image denoising. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4260–4268. [Google Scholar]
  57. Sun, L.; Wu, Z.; Liu, J.; Wei, Z. Supervised hyperspectral image classification using sparse logistic regression and spatial-tv regularization. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium-IGARSS, Melbourne, Australia, 21–26 July 2013; pp. 1019–1022. [Google Scholar] [CrossRef]
  58. Sun, L.; Wu, Z.; Liu, J.; Xiao, L.; Wei, Z. Supervised spectral–spatial hyperspectral image classification with weighted Markov random fields. IEEE Trans. Geosci. Remote Sens. 2015, 53, 1490–1503. [Google Scholar] [CrossRef]
  59. Eckstein, J.; Yao, W. Understanding the convergence of the alternating direction method of multipliers: Theoretical and computational perspectives. Pac. J. Optim. 2015, 11, 619–644. [Google Scholar]
  60. Cai, J.; Candès, E.J.; Shen, Z. A singular value thresholding algorithm for matrix completion. SIAM J. Optim. 2010, 20, 1956–1982. [Google Scholar] [CrossRef]
  61. Liu, G.; Lin, Z.; Yan, S.; Sun, J.; Yu, Y.; Ma, Y. Robust recovery of subspace structures by low-rank representation. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 171–184. [Google Scholar] [CrossRef] [PubMed]
  62. Bioucas-Dias, J.M.; Nascimento, J.M. Hyperspectral subspace identification. IEEE Trans. Geosci. Remote Sens. 2008, 46, 2435–2445. [Google Scholar] [CrossRef]
  63. Liu, X.; Bourennane, S.; Fossati, C. Denoising of hyperspectral images using the PARAFAC model and statistical performance analysis. IEEE Trans. Geosci. Remote Sens. 2012, 50, 3717–3724. [Google Scholar] [CrossRef]
  64. Peng, Y.; Meng, D.; Xu, Z.; Gao, C.; Yang, Y.; Zhang, B. Decomposable nonlocal tensor dictionary learning for multispectral image denoising. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 2949–2956. [Google Scholar]
  65. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed]
  66. Sun, L.; Jeon, B.; Zheng, Y.; Wu, Z. A novel weighted cross total variation method for hyperspectral image mixed denoising. IEEE Access 2017, 5, 27172–27188. [Google Scholar] [CrossRef]
  67. Qian, Y.; Ye, M. Hyperspectral imagery restoration using nonlocal spectral-spatial structured sparse representation with noise estimation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 6, 499–515. [Google Scholar] [CrossRef]
  68. Zhu, X.; Milanfar, P. Automatic parameter selection for denoising algorithms using a no-reference measure of image content. IEEE Trans. Image Process. 2010, 19, 3116–3132. [Google Scholar] [CrossRef] [PubMed]
Figure 1. The flow chart of the proposed 3DATVLR method.
Figure 2. Visual illustration and statistical table of each gradient image of HSI (Washington DC).
Figure 3. Simulated Indian Pines dataset used in the experiment (false color image composed of bands 6, 88, 220 for the red, green and blue wavelength, respectively).
Figure 4. False color image composed of bands 6, 88, 220 (top row) and its corresponding zoomed-in portion (bottom row) marked with white rectangle in the clean image of the simulated Indian Pines dataset with noise Case 4. (a) clean image; (b) noisy image (PSNR = 15.67 dB); (c) BM4D (PSNR = 30.01 dB); (d) PCABM4D (PSNR = 30.40 dB); (e) PARAFAC (PSNR = dB); (f) 3DTV (PSNR = 32.44 dB); (g) TDL (PSNR = 30.40 dB); (h) LRMR (PSNR = 35.63 dB); (i) LRTV (PSNR = 38.06 dB); (j) 3DATVLR (PSNR = 39.65 dB).
Figure 5. Mean spectra of the original spectra and the reconstructed spectra in class 2 of the simulated Indian Pines dataset with Case 4. (a) Noisy. (b) BM4D; (c) PCABM4D; (d) PARAFAC; (e) 3DTV; (f) TDL; (g) LRMR; (h) LRTV; (i) 3DATVLR.
Figure 6. Restored results of band 123 in Simulated Indian Pines dataset with noise of Case 5. (a) original; (b) noisy; (c) BM4D; (d) PCABM4D; (e) PARAFAC; (f) 3DTV; (g) TDL; (h) LRMR; (i) LRTV; (j) 3DATVLR.
Figure 7. Restored results of band 166 in Simulated Indian Pines dataset with noise of Case 6. (a) original; (b) noisy; (c) BM4D; (d) PCABM4D; (e) PARAFAC; (f) 3DTV; (g) TDL; (h) LRMR; (i) LRTV; (j) 3DATVLR.
Figure 8. PSNR and SSIM values in each band of the reconstructed simulated Indian Pines dataset with different algorithms. (a,b) Case 1; (c,d) Case 2; (e,f) Case 3; (g,h) Case 4; (i,j) Case 5; (k,l) Case 6.
Figure 9. MPSNR, MSSIM and MSA values as a function of parameter λ t v and desired rank r. (a) MPSNR versus λ t v and r; (b) MSSIM versus λ t v and r; (c) MSA versus λ t v and r.
Figure 10. MPSNR values as a function of parameter ρ while fixing the other variables in different noise levels for a simulated Indian Pines dataset.
Figure 11. MPSNR, MSSIM values as a function of the iterations for the proposed method in Simulated Indian Pines. (a) MPSNR versus the iterations; (b) MSSIM versus the iterations.
Figure 12. Restored results of band 139 in Urban dataset. (a) Observed; (b) BM4D; (c) PCABM4D; (d) PARAFAC; (e) 3DTV; (f) TDL; (g) LRMR; (h) NAIRLMA; (i) LRTV; (j) 3DATVLR.
Figure 13. Restored results of band 151 in Urban dataset. (a) observed; (b) BM4D; (c) PCABM4D; (d) PARAFAC; (e) 3DTV; (f) TDL; (g) LRMR; (h) NAIRLMA; (i) LRTV; (j) 3DATVLR.
Figure 14. Horizontal mean profile of band 151 in Urban dataset. (a) Observed; (b) BM4D; (c) PCABM4D; (d) PARAFAC; (e) 3DTV; (f) TDL; (g) LRMR; (h) NAIRLMA; (i) LRTV; (j) 3DATVLR.
Table 1. The parameters setting of the competing methods in the simulated experiments.
Noise | BM4D | PCABM4D | PARAFAC | 3DTV | TDL | LRMR | LRTV | 3DATVLR
Case 1 | – | k = 0 | – | λtv = 0.01, λs = 0.05, ρ = 11 | σ = 0.016 | rank(L) = 5, card(s) = 0.0 | τ = 0.015, r = 10 | λtv = 0.009, ρ = 0.3, r = 10
Case 2 | – | k = 0 | – | λtv = 0.01, λs = 0.05, ρ = 11 | σ = 0.01 | rank(L) = 5, card(s) = 0.04 | τ = 0.01, r = 9 | λtv = 0.009, ρ = 0.3, r = 10
Case 3 | – | k = 0 | – | λtv = 0.01, λs = 0.05, ρ = 11 | σ = 0.01 | rank(L) = 5, card(s) = 0.07 | τ = 0.005, r = 10 | λtv = 0.009, ρ = 0.5, r = 10
Case 4 | – | k = 0 | – | λtv = 0.01, λs = 0.05, ρ = 11 | σ = 0.012 | rank(L) = 5, card(s) = 0.07 | τ = 0.006, r = 10 | λtv = 0.01, ρ = 0.5, r = 10
Case 5 | – | k = 0 | – | λtv = 0.01, λs = 0.05, ρ = 10 | σ = 0.009 | rank(L) = 5, card(s) = 0.06 | τ = 0.007, r = 8 | λtv = 0.01, ρ = 0.5, r = 10
Case 6 | – | k = 0 | – | λtv = 0.01, λs = 0.05, ρ = 10 | σ = 0.014 | rank(L) = 5, card(s) = 0.05 | τ = 0.005, r = 10 | λtv = 0.014, ρ = 5, r = 10
Table 2. Metric values of the denoising results on the simulated Indian Pines dataset (the best results are in bold type).

| Noise | Metrics | Noisy | BM4D | PCABM4D | PARAFAC | 3DTV | TDL | LRMR | LRTV | 3DATVLR |
|---|---|---|---|---|---|---|---|---|---|---|
| Case 1 | MPSNR (dB) | 20.12 | 37.91 | **41.22** | 31.92 | 33.32 | 37.24 | 37.91 | 38.24 | 40.32 |
| | MSSIM | 0.3710 | 0.9695 | 0.9876 | 0.8946 | 0.9395 | 0.9758 | 0.9695 | 0.9884 | **0.9909** |
| | MFSIM | 0.4724 | 0.9617 | 0.9860 | 0.8655 | 0.9344 | 0.9635 | 0.9617 | 0.9863 | **0.9885** |
| | ERGAS | 230.15 | 30.72 | **21.00** | 61.97 | 69.23 | 33.17 | 30.72 | 29.93 | 23.91 |
| | MSA | 0.1949 | 0.0234 | **0.0158** | 0.0391 | 0.0497 | 0.0201 | 0.0234 | 0.0174 | 0.0161 |
| Case 2 | MPSNR (dB) | 12.78 | 28.50 | 28.36 | 25.70 | 32.14 | 26.88 | 35.12 | 38.13 | **39.24** |
| | MSSIM | 0.1682 | 0.8534 | 0.9287 | 0.6819 | 0.9183 | 0.9030 | 0.9168 | 0.9844 | **0.9894** |
| | MFSIM | 0.3397 | 0.8535 | 0.9372 | 0.7047 | 0.9140 | 0.8876 | 0.9156 | 0.9774 | **0.9868** |
| | ERGAS | 538.15 | 91.90 | 93.51 | 125.09 | 78.58 | 110.59 | 42.00 | 30.57 | **26.79** |
| | MSA | 0.4268 | 0.0673 | 0.0682 | 0.0937 | 0.0568 | 0.0805 | 0.0282 | 0.0198 | **0.0179** |
| Case 3 | MPSNR (dB) | 12.63 | 26.93 | 26.83 | 24.48 | 32.03 | 26.18 | 34.62 | 37.30 | **38.50** |
| | MSSIM | 0.1656 | 0.8100 | 0.8437 | 0.6698 | 0.9259 | 0.8390 | 0.9112 | 0.9781 | **0.9892** |
| | MFSIM | 0.3372 | 0.8269 | 0.8653 | 0.7015 | 0.9236 | 0.8483 | 0.9099 | 0.9744 | **0.9866** |
| | ERGAS | 547.23 | 128.70 | 116.59 | 161.51 | 80.27 | 122.68 | 44.59 | 54.83 | **29.29** |
| | MSA | 0.4364 | 0.0997 | 0.0884 | 0.1289 | 0.0574 | 0.0893 | 0.0318 | 0.0430 | **0.0197** |
| Case 4 | MPSNR (dB) | 14.26 | 27.62 | 27.27 | 25.62 | 33.16 | 24.46 | 34.06 | 36.09 | **39.02** |
| | MSSIM | 0.2236 | 0.7976 | 0.8169 | 0.7171 | 0.9373 | 0.6328 | 0.9249 | 0.9724 | **0.9924** |
| | MFSIM | 0.3920 | 0.8207 | 0.8253 | 0.7304 | 0.9354 | 0.6870 | 0.9211 | 0.9737 | **0.9915** |
| | ERGAS | 486.26 | 174.99 | 154.37 | 189.63 | 72.96 | 189.56 | 67.05 | 74.54 | **29.46** |
| | MSA | 0.3969 | 0.1422 | 0.1237 | 0.1597 | 0.0515 | 0.1518 | 0.0478 | 0.0578 | **0.0202** |
| Case 5 | MPSNR (dB) | 13.94 | 26.92 | 26.86 | 24.87 | 33.19 | 25.24 | 34.76 | 36.40 | **39.19** |
| | MSSIM | 0.2123 | 0.7635 | 0.7946 | 0.6801 | 0.9339 | 0.6845 | 0.9323 | 0.9829 | **0.9921** |
| | MFSIM | 0.3802 | 0.8007 | 0.8098 | 0.7113 | 0.9334 | 0.7392 | 0.9285 | 0.9843 | **0.9911** |
| | ERGAS | 493.39 | 172.68 | 146.17 | 189.80 | 71.39 | 169.86 | 54.56 | 55.00 | **28.21** |
| | MSA | 0.4028 | 0.1410 | 0.1160 | 0.1594 | 0.0509 | 0.1316 | 0.0392 | 0.0398 | **0.0192** |
| Case 6 | MPSNR (dB) | 14.28 | 26.78 | 27.59 | 25.14 | 33.32 | 25.18 | 32.10 | 33.86 | **36.73** |
| | MSSIM | 0.2197 | 0.7328 | 0.8338 | 0.6559 | 0.9333 | 0.6592 | 0.8526 | 0.8999 | **0.9817** |
| | MFSIM | 0.3912 | 0.7908 | 0.8793 | 0.7109 | 0.9327 | 0.7320 | 0.8902 | 0.9427 | **0.9841** |
| | ERGAS | 481.82 | 161.46 | 169.65 | 191.34 | 72.67 | 189.90 | 154.02 | 156.26 | **53.69** |
| | MSA | 0.3903 | 0.1069 | 0.0988 | 0.1281 | 0.0511 | 0.1245 | 0.0682 | 0.0751 | **0.0370** |
| | Runtime (s) | - | 194.1 | 195.0 | 353.0 | 76.0 | 11.3 | 28.0 | 79.7 | 40.8 |
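For reference, the quality indices reported above can be computed with their standard definitions. The sketch below is illustrative only, assuming reflectance cubes of shape H × W × B scaled to [0, 1], the per-band mean form of MPSNR, ERGAS with a unit resolution ratio (as is common for denoising rather than pan-sharpening), and MSA as the mean spectral angle over all pixels:

```python
import numpy as np

def mpsnr(ref, est, peak=1.0):
    """Mean PSNR over spectral bands; ref/est are H x W x B cubes in [0, peak]."""
    psnrs = []
    for b in range(ref.shape[2]):
        mse = np.mean((ref[:, :, b] - est[:, :, b]) ** 2)
        psnrs.append(10.0 * np.log10(peak ** 2 / mse))
    return float(np.mean(psnrs))

def ergas(ref, est, ratio=1.0):
    """Relative dimensionless global error; ratio = 1 for a denoising task."""
    num_bands = ref.shape[2]
    acc = 0.0
    for b in range(num_bands):
        rmse = np.sqrt(np.mean((ref[:, :, b] - est[:, :, b]) ** 2))
        acc += (rmse / np.mean(ref[:, :, b])) ** 2
    return 100.0 * ratio * np.sqrt(acc / num_bands)

def msa(ref, est, eps=1e-12):
    """Mean spectral angle (radians) averaged over all spatial pixels."""
    r = ref.reshape(-1, ref.shape[2])
    e = est.reshape(-1, est.shape[2])
    cos = np.sum(r * e, axis=1) / (
        np.linalg.norm(r, axis=1) * np.linalg.norm(e, axis=1) + eps)
    # Clip to guard against round-off pushing the cosine slightly past 1.
    return float(np.mean(np.arccos(np.clip(cos, -1.0, 1.0))))
```

Larger MPSNR (and MSSIM/MFSIM) and smaller ERGAS/MSA indicate better restoration, which is how the bold entries in the table are selected.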
Table 3. Blind quantitative evaluation for the real Urban dataset using the Q-metric (the best result is in bold type).

| Methods | Original | 3DTV | LRMR | LRTV | 3DATVLR |
|---|---|---|---|---|---|
| Q | 60.78 | 61.44 | 62.38 | 62.84 | **65.13** |
| Time (s) | - | 3590 | 281 | 1549 | 648 |

Share and Cite

MDPI and ACS Style

Sun, L.; Zhan, T.; Wu, Z.; Jeon, B. A Novel 3D Anisotropic Total Variation Regularized Low Rank Method for Hyperspectral Image Mixed Denoising. ISPRS Int. J. Geo-Inf. 2018, 7, 412. https://0-doi-org.brum.beds.ac.uk/10.3390/ijgi7100412