Article

A Reweighted Symmetric Smoothed Function Approximating L0-Norm Regularized Sparse Reconstruction Method

College of Information and Communication Engineering, Harbin Engineering University, Harbin 150001, China
* Author to whom correspondence should be addressed.
Submission received: 6 September 2018 / Revised: 17 October 2018 / Accepted: 22 October 2018 / Published: 2 November 2018

Abstract
Sparse-signal recovery under noisy conditions is a problem that can be solved with current compressive-sensing (CS) technology. Although current algorithms based on $L_1$ regularization can solve this problem, the $L_1$ regularization mechanism cannot promote signal sparsity under noisy conditions, resulting in low recovery accuracy. Motivated by this, we propose a regularized reweighted composite trigonometric smoothed $L_0$-norm minimization (RRCTSL0) algorithm in this paper. The main contributions of this paper are as follows: (1) a new smoothed symmetric composite trigonometric (CT) function is proposed to fit the $L_0$-norm; (2) a new reweighted function is proposed; and (3) a new $L_0$ regularization objective function framework is constructed based on the idea of Tikhonov regularization. In the new objective function framework, Contributions (1) and (2) are combined as the sparsity regularization term, and the error as the deviation term. Furthermore, the conjugate-gradient (CG) method is used to optimize the objective function, so as to achieve accurate recovery of sparse signals and images under noisy conditions. Numerical experiments on both simulated and real data verify that the proposed algorithm is superior to other state-of-the-art algorithms and achieves advanced performance under noisy conditions.

1. Introduction

Compressive sensing (CS) [1,2,3] has attracted worldwide attention since it was first proposed. Essentially, as a method for solving underdetermined linear inverse problems, CS can obtain an approximate solution (which only has a few nonzero values) to a linear system. That is, CS can successfully recover the original sparse signals by solving underdetermined linear system equations (ULSE). Considering the effect of noise, the framework of the CS model is shown in Figure 1. According to this figure, the CS model can be expressed as:
$$\mathbf{y} = \Phi \mathbf{x} + \mathbf{b} \tag{1}$$
where $\mathbf{y} \in \mathbb{R}^m$ is the compressed signal and $\mathbf{x} \in \mathbb{R}^n$ is the $k$-sparse original signal (at most $k$ nonzero elements; the remaining $n-k$ entries are all zero), $k$ is the sparsity of signal $\mathbf{x}$, and $m \ll n$. $\Phi = [\phi_1, \phi_2, \ldots, \phi_n] \in \mathbb{R}^{m \times n}$ is a sensing matrix, where $\phi_i \in \mathbb{R}^m$, $i = 1, 2, \ldots, n$, which can be further represented as $\Phi = \psi \Omega$, where $\psi \in \mathbb{R}^{m \times n}$ is a random matrix and $\Omega \in \mathbb{R}^{n \times n}$ is the sparse basis matrix [4,5] on which a signal can be sparsely represented. $\mathbf{b} \in \mathbb{R}^m$ denotes measurement noise, which is considered only as independent additive white Gaussian noise in this paper. Therefore, the key problem in CS is how to accurately reconstruct the original signal $\mathbf{x}$ from the compressed signal $\mathbf{y}$. According to [6], the signal reconstruction model is expressed as
$$\arg\min_{\mathbf{x} \in \mathbb{R}^n} \|\mathbf{x}\|_0 \quad \text{s.t.} \quad \|\Phi \mathbf{x} - \mathbf{y}\|_2 \le \varepsilon \tag{2}$$
where $\|\cdot\|_0$ is the $L_0$-norm (strictly, $\|\cdot\|_0$ is a quasinorm, but since it satisfies the fundamental properties of a norm, we call it the $L_0$-norm), which represents the number of nonzero elements in a vector. The bound $\varepsilon$ represents a certain error tolerance. The problem of Equation (2) is NP-hard, and when $n$ is large it cannot be solved directly. In practice, two alternative approaches are usually employed to solve the problem:
  • Greedy algorithms with sparsity as a prior condition;
  • Relaxation method.
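To make the setup concrete, the measurement model $\mathbf{y} = \Phi\mathbf{x} + \mathbf{b}$ can be simulated in a few lines of Python (an illustrative sketch; the seed and noise level are our own choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 100, 20          # signal length, measurements, sparsity

# k-sparse signal x: k nonzero Gaussian entries at random positions
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)

# random Gaussian sensing matrix with normalized rows
Phi = rng.standard_normal((m, n))
Phi /= np.linalg.norm(Phi, axis=1, keepdims=True)

# noisy compressed measurements y = Phi @ x + b
b = 0.01 * rng.standard_normal(m)
y = Phi @ x + b

print(y.shape)   # (100,) -- m-dimensional, far shorter than x itself
```

Recovering the $n$-dimensional $\mathbf{x}$ from the $m$-dimensional $\mathbf{y}$ is exactly the underdetermined problem the algorithms below address.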
Greedy algorithms are effective sparse reconstruction methods; their corresponding optimization model is given by
$$\arg\min_{\mathbf{x} \in \mathbb{R}^n} \|\Phi \mathbf{x} - \mathbf{y}\|_2 \quad \text{s.t.} \quad \|\mathbf{x}\|_0 \le k \tag{3}$$
where $k$ is the sparsity of $\mathbf{x}$. The essence of these algorithms is to use sparsity as prior information and use the least-squares method to solve the ULSE. Representative algorithms include orthogonal matching pursuit (OMP) [7,8], generalized OMP (GOMP) [9,10], compressive sampling matching pursuit (CoSaMP) [11], and subspace pursuit (SP) [12,13]. The drawback of greedy algorithms is that they are sensitive to noise.
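As an illustration of the greedy family, a textbook OMP sketch (our own minimal version, not the exact variants of [7,8]) alternates atom selection with a least-squares refit on the current support:

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedily add the column most correlated
    with the residual, then refit by least squares on the chosen support."""
    n = Phi.shape[1]
    support = []
    residual = y.copy()
    coef = np.zeros(0)
    for _ in range(k):
        idx = int(np.argmax(np.abs(Phi.T @ residual)))  # best-matching atom
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(n)
    x_hat[support] = coef
    return x_hat

# noiseless sanity check on a small synthetic problem
rng = np.random.default_rng(1)
Phi = rng.standard_normal((32, 64))
x_true = np.zeros(64)
x_true[[3, 17, 40, 58]] = [1.5, -2.0, 0.7, 3.1]
x_hat = omp(Phi, Phi @ x_true, k=4)
print(np.linalg.norm(x_hat - x_true))   # recovery error, ~0 in this noiseless case
```

With noise added to the measurements, the selected support can be wrong, which is the sensitivity the text refers to.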
The relaxation method is currently the most widely used approach to CS reconstruction. The first kind of relaxation method, such as basis pursuit (BP) [14,15] and iterative reweighted least squares (IRLS) [16,17], is typically characterized by transforming the $L_0$-norm problem into another, easily solved norm problem, but the antinoise ability of this method is still not strong. Another kind of relaxation method reformulates Equation (2) as the regularized least-squares problem (RLSP), as shown in Equation (4):
$$\arg\min_{\mathbf{x} \in \mathbb{R}^n} \frac{1}{2}\|\Phi \mathbf{x} - \mathbf{y}\|_2^2 + \lambda h(\mathbf{x}) \tag{4}$$
where $\lambda > 0$ is the regularization parameter that balances the trade-off between the deviation term $\|\Phi \mathbf{x} - \mathbf{y}\|_2^2$ and the sparsity regularizer $h(\mathbf{x})$. For the problem in Equation (4), the least absolute shrinkage and selection operator (LASSO) [18], sparse reconstruction by separable approximation (SpaRSA) [19,20], improved sparse reconstruction by separable approximation (ISpaRSA) [21], basis pursuit denoising (BPDN) [22], and the fast iterative soft-thresholding algorithm (FISTA) [23,24] replace $h(\mathbf{x})$ with an $L_1$-norm or reweighted $L_1$-norm. An $L_1$-norm or reweighted $L_1$-norm is the best convex approximation of the $L_0$-norm; however, this only holds in noiseless cases. In a noisy case, they are not exactly equivalent to the $L_0$-norm [6]. Motivated by this, Reference [25] proposed a new algorithm called $L_p$-regularized least squares ($L_p$-RLS), which converts $h(\mathbf{x})$ into $\|\mathbf{x}\|_{p,\epsilon}^p = \sum_{i=1}^{n}(x_i^2 + \epsilon^2)^{p/2}$. When $p \to 0$ and $\epsilon \to 0$, the regularization term $\|\mathbf{x}\|_{p,\epsilon}^p$ approximates $\|\mathbf{x}\|_0$. Reference [26] proposed the smoothed $L_0$-norm-regularized least-squares ($L_2$-SL0) algorithm, which replaces $h(\mathbf{x})$ with a smoothed function $F_\sigma(\mathbf{x})$; when $\sigma \to 0$, $F_\sigma(\mathbf{x})$ approximates the $L_0$-norm. Both the $L_p$-RLS algorithm and the $L_2$-SL0 algorithm improved the performance of sparse-signal recovery under noisy conditions and became more effective sparse reconstruction methods. However, these two algorithms also have defects that cannot be ignored, summarized as follows:
(1)
For $L_p$-RLS, the value of $p$ cannot be too small because the smaller $p$ is, the less smooth $\|\mathbf{x}\|_{p,\epsilon}^p$ is, which makes the optimization effect worse [25]; thus $\|\mathbf{x}\|_{p,\epsilon}^p$ cannot closely approach the $L_0$-norm, and reconstruction accuracy cannot be further improved;
(2)
For $L_2$-SL0, although the algorithm can more closely approach the $L_0$-norm, the convergence of the adopted optimization method is not good, resulting in limited reconstruction accuracy.
Based on Equation (4) and the popular algorithms mentioned above, this paper proposes the RRCTSL0 algorithm. In this algorithm, we first propose a symmetric CT function that approximates $\|\mathbf{x}\|_0$ and a new reweighted function, whose combination promotes sparsity. Then, a new objective function framework is constructed following the idea of the Tikhonov regularization mechanism. Finally, the conjugate-gradient (CG) method is used to optimize the objective function and achieve accurate recovery of sparse vectors under noisy conditions. On this basis, the proposed RRCTSL0 algorithm is applied to sparse-signal and image processing.
This paper is organized as follows: Section 2 introduces the theories of the proposed RRCTSL0 algorithm. Then, we verify the performance of the RRCTSL0 algorithm through simulation experiments, and apply the proposed algorithm to sparse-signal and image recovery in Section 3. Section 4 concludes this paper.

2. RRCTSL0 Algorithm

2.1. New Smoothed L 0 -Norm Function Model

The $L_0$-norm of a vector is a discontinuous function of that vector [27], and solving for the sparsest solution of this vector is an NP-hard problem. Inspired by Equation (4) (a relaxation model), our idea is to approximate this discontinuous function by a suitable continuous one; then, the sparsest solution of this vector is the optimal solution of this continuous function. As Mohammadi et al. proposed [28], the problem of finding the sparsest vector in the set $\{\mathbf{x} \mid \mathbf{y} = \Phi \mathbf{x} + \mathbf{b}\}$ can be interpreted as the task of approximating the Kronecker delta function. Let
$$\delta(x_i) = \begin{cases} 1 & \text{if } x_i = 0 \\ 0 & \text{otherwise} \end{cases}, \quad i = 1, 2, \ldots, n \tag{5}$$
denote the Kronecker delta function; then the $L_0$-norm of a vector $\mathbf{x}$ equals $\|\mathbf{x}\|_0 = \sum_{i=1}^{n}[1 - \delta(x_i)]$ and can be approximated by $\sum_{i=1}^{n} f_\sigma(x_i)$, in which $f_\sigma(x_i)$ denotes a smoothed continuous function that acts as a delta-approximating (DA) function. Based on this, we propose a symmetric CT smoothed function
$$f_\sigma(x_i) = \sin\left(\arctan\left(\frac{a x_i^2}{\sigma^2}\right)\right) \tag{6}$$
where $a$ is a regulatory factor greater than 1 (set to 2 in this paper) and $x_i$ is an independent variable. The smoothing parameter $\sigma$ determines the quality of the approximation: when $\sigma \to 0$, $f_\sigma(x_i) \to 1 - \delta(x_i)$, and then $F_\sigma(\mathbf{x}) = \sum_{i=1}^{n} f_\sigma(x_i) \to \|\mathbf{x}\|_0$.
Property 1.
The CT smoothed function $f_\sigma(x_i)$ is a symmetric function, where $x_i \in \mathbb{R}$.
Proof. 
When $x_i \in \mathbb{R}$,
$$f_\sigma(x_i) - f_\sigma(-x_i) = \sin\left(\arctan\frac{a x_i^2}{\sigma^2}\right) - \sin\left(\arctan\frac{a(-x_i)^2}{\sigma^2}\right) = 0$$
So, $f_\sigma(x_i) = f_\sigma(-x_i)$, i.e., $f_\sigma(x_i)$ is symmetric. Therefore, Property 1 is proved. □
Obviously, $\lim_{\sigma \to 0} f_\sigma(x_i) = \begin{cases} 0 & \text{if } x_i = 0 \\ 1 & \text{if } x_i \ne 0 \end{cases}$ can be regarded as a DA function, so $\|\mathbf{x}\|_0$ can be approximated by the symmetric function as $\|\mathbf{x}\|_0 \approx F_\sigma(\mathbf{x}) = \lim_{\sigma \to 0} \sum_{i=1}^{n} f_\sigma(x_i)$. Similarly, there are other smoothed functions, like the well-known Gaussian function of the SL0 algorithm [29]:
$$\varphi_\sigma(x_i) = e^{-\frac{x_i^2}{2\sigma^2}}.$$
It follows from the above equation that, as $\sigma \to 0$, $\lim_{\sigma \to 0} \varphi_\sigma(x_i) = \begin{cases} 1 & \text{if } x_i = 0 \\ 0 & \text{if } x_i \ne 0 \end{cases}$. Let
$$f_\sigma(x_i) = 1 - \varphi_\sigma(x_i) = 1 - e^{-\frac{x_i^2}{2\sigma^2}},$$
so the $L_0$-norm of $\mathbf{x}$ is approximated by $\|\mathbf{x}\|_0 \approx \lim_{\sigma \to 0} \sum_{i=1}^{n} f_\sigma(x_i) = \lim_{\sigma \to 0} \sum_{i=1}^{n} (1 - \varphi_\sigma(x_i))$.
As shown in Figure 2, both smoothed functions (the Gaussian function and the proposed CT function) come closer to the DA function as $\sigma$ decreases, but the CT function approximates the DA function better than the Gaussian function at the same $\sigma$.
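The two DA candidates can also be compared numerically (a small sketch; the printed values are approximate):

```python
import numpy as np

def f_ct(x, sigma, a=2.0):
    """Proposed CT smoothed function sin(arctan(a x^2 / sigma^2))."""
    return np.sin(np.arctan(a * x**2 / sigma**2))

def f_gauss(x, sigma):
    """Gaussian-based smoothed function 1 - exp(-x^2 / (2 sigma^2))."""
    return 1.0 - np.exp(-x**2 / (2.0 * sigma**2))

sigma = 0.1
# Both vanish at x = 0, as the DA function requires.
print(f_ct(0.0, sigma), f_gauss(0.0, sigma))   # 0.0 0.0
# At |x| = sigma the ideal DA value is 1; the CT function is much closer.
print(f_ct(0.1, sigma))     # ~0.894
print(f_gauss(0.1, sigma))  # ~0.393
```

This mirrors the behavior in Figure 2: at equal $\sigma$, the CT function rises toward 1 faster away from the origin.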

2.2. New Reweighted Function Design

The sparse reconstruction model based on Equation (4) works successfully, but converges slowly. Fortunately, we can obtain the sparsest solution of $\mathbf{x}$ faster through a new reweighted function, which is given by
$$w_i = \frac{1}{e^{|x_i|}} = e^{-|x_i|}, \quad i = 1, 2, \ldots, n \tag{7}$$
where $w_i$ is a positive scalar that is inversely related to the true signal magnitude. When $x_i \to 0$, $w_i \to 1$, and when $x_i \to +\infty$ or $x_i \to -\infty$, $w_i \to 0$. $w_i$ is an even function that monotonically decreases on $x_i \in [0, \infty)$, which shows that the reweighted function has a maximum at $x_i = 0$ and a minimum as $x_i \to +\infty$ or $x_i \to -\infty$.
Suppose signal $\mathbf{x} = [x_1, x_2, \ldots, x_n]^T$ with the initial value $\mathbf{x} = \Phi^T(\Phi\Phi^T)^{-1}\mathbf{y}$, and a reweighting matrix $\mathbf{W} = \mathrm{diag}\{w_1, w_2, w_3, \ldots, w_n\}$, in which $w_i$ is given in Equation (7). In the process of CS optimization, we apply $\mathbf{W}$ to $\mathbf{x}$, so $\mathbf{x}$ is converted to $\mathbf{x}'$, which equals $\mathbf{W}\mathbf{x} = [w_1 x_1, w_2 x_2, \ldots, w_n x_n]^T$. The large entries in $w_i$ force solution $\mathbf{x}$ to concentrate on the indices where $w_i$ is small, and, by construction, these precisely correspond to the indices where $\mathbf{x}$ is nonzero. This more generally suggests that large reweights can be used to discourage nonzero entries in the recovered signal, while small weights can be used to encourage nonzero entries [30]. So, in the process of optimization, more values of $\mathbf{x}$ are better optimized and driven closer to zero under the effect of $\mathbf{W}$. With iterative optimization, this method can lead to a sparse solution faster than popular reconstruction algorithms that do not consider the reweighted strategy (namely, algorithms that reweight every $x_i$ with the same value $w_i = 1$) [31].
In Reference [32], one reweighted function is introduced that can be described as
$$w_i = \frac{1}{|x_i| + \zeta} \tag{8}$$
where $\zeta$ is a regularization factor, set to $10^{-8}$ in Reference [32], which is used to avoid a zero denominator. Evidently, for a small $|x_i| \to 0$, the reweighted strategy yields a large reweighted value $w_i$ in Equation (8) when $\zeta$ is small enough, which tends to further optimize $|x_i|$; thus, a sparse solution is obtained. However, the reweighted function in Equation (8) also has obvious disadvantages: (1) the reweighted values become quite large when the signal components are close to zero, adversely affecting the larger components; (2) it is hard to select a suitable $\zeta$ for the optimization process.
Compared with Equation (8), the reweighted function in Equation (7) has better characteristics. The range of $w_i$ in Equation (7) is $[0, 1]$, while in Equation (8) it is $[0, \frac{1}{\zeta}]$. If $\zeta$ equals 1, the two reweighted functions have the same range, but the one in Equation (7) performs better and can significantly improve the effect of each signal component. If $\zeta$ is small, the range of $w_i$ in Equation (8) becomes very large, which makes the effect of a larger $x_i$ unremarkable. In conclusion, the proposed reweighted function has two merits:
  • It has a proper range that gives each signal component a proper reweighted value; when a signal component is close to zero, the reweighted value is not excessively large.
  • It does not need the adjustment of a parameter like $\zeta$, and its denominator never equals zero.
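A quick numerical sketch contrasts the proposed weights of Equation (7) with the Reference [32] weights of Equation (8) (the test values are our own, chosen for illustration):

```python
import numpy as np

def w_proposed(x):
    """Proposed reweighted function, Eq. (7): w_i = 1 / e^{|x_i|}."""
    return np.exp(-np.abs(x))

def w_ref32(x, zeta=1e-8):
    """Reference [32] reweighted function, Eq. (8): w_i = 1 / (|x_i| + zeta)."""
    return 1.0 / (np.abs(x) + zeta)

x = np.array([0.0, 0.05, 1.0, -4.0])
print(w_proposed(x))   # bounded in (0, 1]: [1.0, ~0.951, ~0.368, ~0.018]
print(w_ref32(x)[0])   # 1e8: a near-zero component gets an enormous weight
```

The bounded range of the proposed weights is exactly the first merit listed above.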

2.3. New Proposed RRCTSL0 Algorithm and Its Steps

As explained above, the objective function can be described as
$$\arg\min_{\mathbf{x} \in \mathbb{R}^n} \frac{1}{2}\|\mathbf{y} - \Phi\mathbf{x}\|_2^2 + \lambda \mathbf{W} F_\sigma(\mathbf{x}) \tag{9}$$
where $\lambda$ is a regularization parameter that revises the original objective function. The reweighting matrix is $\mathbf{W} = \mathrm{diag}\{w_1, w_2, w_3, \ldots, w_n\}$, where $w_i$, $i = 1, \ldots, n$, is given in Equation (7). The differentiable smoothed accumulated function $F_\sigma(\mathbf{x}) = \lim_{\sigma \to 0} \sum_{i=1}^{n} f_\sigma(x_i) = \lim_{\sigma \to 0} \sum_{i=1}^{n} \sin\left(\arctan\left(\frac{a x_i^2}{\sigma^2}\right)\right)$ is employed to approximate $\|\mathbf{x}\|_0$.
Let
$$\mathbf{g} = \nabla F_\sigma(\mathbf{x}) = \left[\frac{\partial f_\sigma(x_i)}{\partial x_i}\right]_{i=1}^{n} = \left[\cos\left(\arctan\left(\frac{x_i^2}{\eta}\right)\right)\frac{2 x_i \eta}{x_i^4 + \eta^2}\right]_{i=1}^{n}, \tag{10}$$
where η = σ 2 / a . Therefore, the gradient of Equation (9) can be written as
$$\mathbf{G} = \Phi^T(\Phi\mathbf{x} - \mathbf{y}) + \lambda \mathbf{W}\mathbf{g}. \tag{11}$$
According to the objective function, the Hessian of Equation (9) can be readily expressed in closed form as
$$\mathbf{H} = \Phi^T\Phi + \lambda \mathbf{W}\mathbf{U} \tag{12}$$
where
$$\mathbf{U} = \mathrm{diag}\{u_1, u_2, \ldots, u_n\} \tag{13}$$
$$u_i = \frac{\mathrm{d} g_i}{\mathrm{d} x_i} = \frac{2\eta(\eta^2 - 3x_i^4)\cos\left(\arctan\left(\frac{x_i^2}{\eta}\right)\right) - (2 x_i \eta)^2 \sin\left(\arctan\left(\frac{x_i^2}{\eta}\right)\right)}{(x_i^4 + \eta^2)^2}, \quad i = 1, 2, \ldots, n. \tag{14}$$
In Equation (12), $\mathbf{U}$ is the differential of the gradient $\mathbf{g}$ in Equation (10), so $\mathbf{H}$ is the differential of $\mathbf{G}$. In Equation (14), $g_i$ represents the $i$-th component of $\mathbf{g}$.
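The gradient pieces above can be checked numerically (a sketch under our reading of Equations (10) and (11); the finite-difference comparison verifies the componentwise derivative):

```python
import numpy as np

def F_sigma(x, sigma, a=2.0):
    """Smoothed L0 surrogate: sum of sin(arctan(a x_i^2 / sigma^2))."""
    return np.sum(np.sin(np.arctan(a * x**2 / sigma**2)))

def grad_F(x, sigma, a=2.0):
    """Componentwise gradient g of Eq. (10), with eta = sigma^2 / a."""
    eta = sigma**2 / a
    return np.cos(np.arctan(x**2 / eta)) * 2.0 * x * eta / (x**4 + eta**2)

def grad_objective(x, Phi, y, lam, w, sigma, a=2.0):
    """Gradient of the objective, Eq. (11): G = Phi^T (Phi x - y) + lam * W g."""
    return Phi.T @ (Phi @ x - y) + lam * w * grad_F(x, sigma, a)

# central-difference check of grad_F at a test point
x0, sigma, h = np.array([0.3]), 0.5, 1e-6
num = (F_sigma(x0 + h, sigma) - F_sigma(x0 - h, sigma)) / (2 * h)
print(abs(num - grad_F(x0, sigma)[0]))   # ~0: analytic matches numeric
```

The same finite-difference trick applied to `grad_F` itself would verify the curvature terms $u_i$ of Equation (14).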
In fact, the problem of solving the objective function in Equation (9) is translated into an optimization problem. This paper applies the CG method to the RRCTSL0 algorithm to optimize the objective function. The problem can first be solved by using a sequential σ -continuation strategy as detailed in the next paragraph.
Given a small target value $\sigma_T$ and a sufficiently large initial value of parameter $\sigma$, i.e., $\sigma_1$, the monotonically decreasing sequence $\{\sigma_t : t = 1, 2, 3, \ldots, T\}$ is generated as
$$\sigma_t = \sigma_1\left(\frac{\sigma_T}{\sigma_1}\right)^{\frac{t-1}{T-1}}, \quad t = 1, 2, \ldots, T \tag{15}$$
where T is the maximum number of iterations.
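Equation (15) is simply a geometric interpolation between $\sigma_1$ and $\sigma_T$; as a sketch:

```python
import numpy as np

def sigma_sequence(sigma_1, sigma_T, T):
    """Monotonically decreasing continuation sequence of Eq. (15)."""
    t = np.arange(1, T + 1)
    return sigma_1 * (sigma_T / sigma_1) ** ((t - 1) / (T - 1))

seq = sigma_sequence(1.0, 0.01, 5)
print(seq)   # [1.0, ~0.316, ~0.1, ~0.0316, ~0.01]
```

Each $\sigma_t$ defines one stage of the continuation: the objective is optimized at the current $\sigma_t$, and the result warm-starts the next, smaller $\sigma_{t+1}$.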
In the CG algorithm [25], the iterate $\mathbf{x}^{(\Gamma)}$ is updated as
$$\mathbf{x}^{(\Gamma+1)} = \mathbf{x}^{(\Gamma)} + \varrho^{(\Gamma)}\mathbf{d}^{(\Gamma)}, \tag{16}$$
where $\Gamma$ is the number of iterations of the inner loop, and the search direction $\mathbf{d}^{(\Gamma)}$ is given by
$$\mathbf{d}^{(\Gamma)} = \begin{cases} -\mathbf{G}^{(\Gamma)} & \Gamma = 0; \\ -\mathbf{G}^{(\Gamma)} + \varsigma^{(\Gamma-1)}\mathbf{d}^{(\Gamma-1)} & \Gamma \ge 1. \end{cases} \tag{17}$$
Parameter $\varsigma^{(\Gamma-1)}$ is given as
$$\varsigma^{(\Gamma-1)} = \frac{\|\mathbf{G}^{(\Gamma)}\|_2^2}{\mathbf{d}^{(\Gamma-1)T}\left(\mathbf{G}^{(\Gamma)} - \mathbf{G}^{(\Gamma-1)}\right)} \tag{18}$$
and parameter ϱ ( Γ ) is updated as
$$\varrho^{(\Gamma)} = \frac{\|\mathbf{G}^{(\Gamma)}\|_2^2}{\mathbf{d}^{(\Gamma)T}\mathbf{H}^{(\Gamma)}\mathbf{d}^{(\Gamma)}} \tag{19}$$
where $\mathbf{G}^{(\Gamma)}$ and $\mathbf{H}^{(\Gamma)}$ are the gradient and Hessian of the objective function in Equation (9) evaluated at $\mathbf{x} = \mathbf{x}^{(\Gamma)}$ using Equations (11) and (12), respectively. As shown in Equation (19), $\varrho^{(\Gamma)}$ is positive if $\mathbf{H}^{(\Gamma)}$ is positive definite (PD). We can see from Equation (12) that $\Phi^T\Phi$ is PD and $\mathbf{W}$ is PD, so $\mathbf{H}^{(\Gamma)}$ is PD if $\mathbf{U}^{(\Gamma)}$ is PD. To ensure the positive definiteness of $\mathbf{U}^{(\Gamma)}$, we perform the following processing:
$$u_i = \begin{cases} u_i & u_i > \nu, \\ \nu & u_i \le \nu, \end{cases} \tag{20}$$
where $\nu$ is a small positive constant, typically of the order of $10^{-5}$. The denominator in Equation (19) can be efficiently evaluated as
$$\mathbf{d}^{(\Gamma)T}\mathbf{H}^{(\Gamma)}\mathbf{d}^{(\Gamma)} = \|\Phi\mathbf{d}^{(\Gamma)}\|_2^2 + \lambda\|\mathbf{Q}^{(\Gamma)}\|_2^2, \tag{21}$$
$$\mathbf{Q}^{(\Gamma)} = \mathbf{P}^{(\Gamma)}\mathbf{d}^{(\Gamma)} \tag{22}$$
where $\mathbf{P}^{(\Gamma)} = \mathrm{diag}\{p_1^{(\Gamma)}, p_2^{(\Gamma)}, \ldots, p_n^{(\Gamma)}\}$, with $p_i^{(\Gamma)} = \sqrt{w_i^{(\Gamma)} u_i}$, where $w_i^{(\Gamma)}$ is $w_i$ evaluated at $\mathbf{x} = \mathbf{x}^{(\Gamma)}$ using Equation (7); thereby, $\varrho^{(\Gamma)}$ can be expressed as
$$\varrho^{(\Gamma)} = \frac{\|\mathbf{G}^{(\Gamma)}\|_2^2}{\|\Phi\mathbf{d}^{(\Gamma)}\|_2^2 + \lambda\|\mathbf{Q}^{(\Gamma)}\|_2^2}. \tag{23}$$
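Putting the CG update rules above together, one inner stage can be sketched as follows (our own simplified reading: the weights and curvature are refreshed every iteration, a restart guard is added for the $\varsigma$ denominator, and the stopping logic is ours, since the paper leaves these details to Table 1):

```python
import numpy as np

def cg_stage(x, Phi, y, lam, sigma, n_inner=10, a=2.0, nu=1e-5):
    """One sigma-stage of the inner CG loop of the RRCTSL0 sketch."""
    eta = sigma**2 / a
    G_prev, d = None, None
    for _ in range(n_inner):
        theta = np.arctan(x**2 / eta)
        w = np.exp(-np.abs(x))                             # weights w_i, Eq. (7)
        g = np.cos(theta) * 2 * x * eta / (x**4 + eta**2)  # gradient of F_sigma
        G = Phi.T @ (Phi @ x - y) + lam * w * g            # objective gradient
        if G @ G < 1e-20:
            break                                          # gradient vanished
        u = (2 * eta * (eta**2 - 3 * x**4) * np.cos(theta)
             - (2 * x * eta)**2 * np.sin(theta)) / (x**4 + eta**2)**2
        u = np.maximum(u, nu)                              # clamp for PD Hessian
        if d is None:
            d = -G                                         # first direction
        else:
            denom = d @ (G - G_prev)
            varsigma = (G @ G) / denom if abs(denom) > 1e-12 else 0.0
            d = -G + varsigma * d                          # CG direction update
        Pd = Phi @ d
        Q = np.sqrt(w * u) * d                             # weighted curvature term
        rho = (G @ G) / (Pd @ Pd + lam * (Q @ Q))          # step size
        x = x + rho * d                                    # iterate update
        G_prev = G
    return x
```

In the full algorithm this stage is wrapped in the $\sigma$-continuation loop of Equation (15), with $\lambda$ chosen as in Section 2.4.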
Based on the above explanation, we can conclude the steps of the proposed RRCTSL0 algorithm, which are given in Table 1.
For the RRCTSL0 algorithm shown in Table 1, the values of parameters λ and σ affect its performance. We discuss the selection of parameters in the next section.

2.4. Selection of Parameters λ and σ

The unconstrained optimization problem (shown in Equation (9)) can be regarded as the result of the linear weighted-sum method of multiobjective programming. Therefore, regularization parameter $\lambda$ is determined by the $\alpha$-method [33] in this paper. Let $F_1(\mathbf{x}) = \mathbf{W}F_\sigma(\mathbf{x})$ and $F_2(\mathbf{x}) = \frac{1}{2}\|\mathbf{y} - \Phi\mathbf{x}\|_2^2$; minimizing $F_1(\mathbf{x})$ and $F_2(\mathbf{x})$, respectively, we get
$$F_i(\mathbf{x}^{(i)}) = \min_{\mathbf{x}} F_i(\mathbf{x}), \quad i = 1, 2. \tag{24}$$
It is easy to see that $\mathbf{x}^{(i)}$ ($i = 1, 2$) equals $\mathbf{0}$ and $\mathbf{x}^0$ (the least-squares solution of $\mathbf{y} = \Phi\mathbf{x}$), respectively. $\mathbf{x}^{(i)}$ ($i = 1, 2$) can be applied to calculate the function values as follows:
$$F_{ij} = F_i(\mathbf{x}^{(j)}), \quad i, j = 1, 2. \tag{25}$$
Then, parameter $\alpha$ is introduced and $\lambda$ is defined as $\lambda = \lambda_2/\lambda_1$, so we obtain
$$\sum_{j=1}^{2} F_{ij}\lambda_j = \alpha \ (i = 1, 2), \qquad \sum_{i=1}^{2}\lambda_i = 1. \tag{26}$$
By solving the above equations, we get
$$\alpha = \frac{1}{\mathbf{e}^T\mathbf{F}^{-1}\mathbf{e}} \tag{27}$$
$$[\lambda_1, \lambda_2]^T = \frac{\mathbf{F}^{-1}\mathbf{e}}{\mathbf{e}^T\mathbf{F}^{-1}\mathbf{e}} \tag{28}$$
where the coefficient matrix $\mathbf{F}$ has entries $\mathbf{F}(i, j) = F_{ij}$ and $\mathbf{e} = [1\ 1]^T$. Therefore, $\lambda = \lambda_2/\lambda_1$ is obtained.
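As a numerical sketch of the $\alpha$-method above, using a hypothetical $2 \times 2$ coefficient matrix $\mathbf{F}$ (the values are invented purely for illustration):

```python
import numpy as np

# hypothetical coefficient matrix F(i, j) = F_ij (illustrative values)
F = np.array([[0.2, 3.0],
              [5.0, 0.4]])
e = np.ones(2)

Finv_e = np.linalg.solve(F, e)   # F^{-1} e without forming the inverse
alpha = 1.0 / (e @ Finv_e)       # alpha = 1 / (e^T F^{-1} e)
lam12 = alpha * Finv_e           # [lambda_1, lambda_2]
lam = lam12[1] / lam12[0]        # regularization parameter lambda

# by construction the weights satisfy F @ lam12 = alpha * e and sum to 1
print(lam12.sum())   # ≈ 1.0
```

The constraint $\lambda_1 + \lambda_2 = 1$ holds automatically, since $\mathbf{e}^T(\alpha\mathbf{F}^{-1}\mathbf{e}) = \alpha\,\mathbf{e}^T\mathbf{F}^{-1}\mathbf{e} = 1$.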
For $\sigma$, we know that it is a descending sequence, as shown in Equation (15), so the key is how to select $\sigma_1$ and $\sigma_T$. Let $\tilde{x} = \max_i(|x_i^0|)$, and let $\sigma_1$ satisfy
$$f_{\sigma_1}(\tilde{x}) = \sin\left(\arctan\left(\frac{a\tilde{x}^2}{\sigma_1^2}\right)\right) \ge b \;\Leftrightarrow\; \frac{a\tilde{x}^2}{\sigma_1^2} \ge \tan(\arcsin b) \;\Leftrightarrow\; \sigma_1 \le \tilde{x}\sqrt{\frac{a}{\tan(\arcsin b)}}, \quad b > 0, \ 0 < \arcsin b < \frac{\pi}{2}, \tag{29}$$
where $b$ is a positive constant. Taking $\arcsin b = \frac{\pi}{4}$, i.e., $b = \frac{\sqrt{2}}{2}$, we get $\sigma_1 \le \sqrt{a}\,\tilde{x}$. To simplify the calculation, $\sigma_1 = \sqrt{a}\,\tilde{x}$ in this paper.
$F_\sigma(\mathbf{x}) \to \|\mathbf{x}\|_0$ when $\sigma \to 0$, so $\sigma_T$ should be taken as a small positive constant. Here, we first perform a singular value decomposition of $\Phi$:
$$\Phi = \mathbf{E}\boldsymbol{\Sigma}\mathbf{Q}^H, \quad \mathbf{E} = [\mathbf{E}_1\ \mathbf{E}_2], \quad \boldsymbol{\Sigma} = \begin{bmatrix}\boldsymbol{\Sigma}_1 & \\ & \boldsymbol{\Sigma}_2\end{bmatrix}, \quad \mathbf{Q} = [\mathbf{Q}_1\ \mathbf{Q}_2] \tag{30}$$
where $\boldsymbol{\Sigma}_1$ and $\mathbf{E}_1$ are composed of the larger singular values and their corresponding singular vectors, respectively, while $\boldsymbol{\Sigma}_2$ and $\mathbf{E}_2$ are composed of the smaller singular values and their corresponding singular vectors; then $\sigma_T = \mathrm{mean}(\max(|\Phi^H\mathbf{E}_1\boldsymbol{\Sigma}_1^{-2}\mathbf{E}_1^H|))$.

3. Numerical Simulation and Analysis

In this section, we compare the proposed RRCTSL0 algorithm with the state-of-the-art SL0 [29,34], $L_2$-SL0 [26,35], and $L_p$-RLS [25] algorithms on simulated and real datasets. First, we verify the convergence of the RRCTSL0 algorithm in the presence of noise. Then, we examine three aspects of the reconstruction performance of the RRCTSL0 algorithm: signal-to-noise ratio (SNR), normalized mean square error (NMSE), and CPU running time (CRT). Finally, we demonstrate the practicality of the RRCTSL0 algorithm by applying it to real signal and image recovery.
The numerical simulation platform was MATLAB 2017b installed on 64-bit Windows 10; the CPU was an Intel(R) Core(TM) i5-3230M with a frequency of 2.6 GHz. All experiments were based on 100 trials.

3.1. Convergence-Performance Comparison of the Algorithms

For the convergence verification experiments, we fixed $n = 256$, $m = 100$, $k = 20$. For each experiment, we randomly generated a triple $\{\mathbf{x}, \Phi, \mathbf{b}\}$: $\Phi$ is an $m \times n$ random Gaussian matrix with normalized and centralized rows; the nonzero entries of sparse signal $\mathbf{x} \in \mathbb{R}^n$ were generated i.i.d. according to the Gaussian distribution $N(0, 1)$; and the entries of $\mathbf{b}$ were i.i.d. Gaussian $N(0, \xi)$.
Given measurement vector $\mathbf{y} = \Phi\mathbf{x} + \mathbf{b}$, sensing matrix $\Phi$, and noise $\mathbf{b}$, we tried to recover signal $\mathbf{x}$. If $\mathrm{NMSE} = (\|\mathbf{x} - \hat{\mathbf{x}}\|_2/\|\mathbf{x}\|_2) \times 100\%$ ($\hat{\mathbf{x}}$ is the recovered signal) equals or approximates 0, convergence was considered a success. The parameters were selected to obtain the best performance for each method: for SL0, $\sigma_{min} = 10^{-2}$ and the scale factors were set as $S = 5$, $\rho = 0.8$; for $L_2$-SL0, $\sigma_{min} = 0.01$, $S = 10$, $\rho = 0.8$; for $L_p$-RLS, $p_1 = 1$, $p_T = 0.1$, $\epsilon_1 = 1$, $\epsilon_T = 10^{-2}$; and for the proposed RRCTSL0, $a = 2$, $err = 10^{-8}$. Based on the 100 trials, we simulated the NMSE of the above algorithms and plot it in Figure 3a.
Figure 3a shows that the NMSE of the SL0, $L_2$-SL0, $L_p$-RLS, and RRCTSL0 algorithms eventually converges to a very small value with iterations. We can see that the RRCTSL0 algorithm has the fastest convergence rate of all the algorithms. This fully proves that the reweighted function proposed in this paper can promote signal sparsity and thus improve convergence speed. On this basis, we simulated the effect of sparsity $k$ on the NMSE of the RRCTSL0 algorithm, as shown in Figure 3b. The convergence rate of the RRCTSL0 algorithm decreases as sparsity $k$ increases; hence, the RRCTSL0 algorithm cannot achieve accurate signal recovery at large sparsity. Of course, when $k < 25$, the RRCTSL0 algorithm is reliable.

3.2. Accuracy Performance Comparison of the Algorithms

For the reconstruction performance experiments, we also tried to recover sparse signal $\mathbf{x}$ from noisy measurement vector $\mathbf{y}$. Specifically, we fixed $n = 256$, $m = 100$, $k = 20$ and increased noise intensity $\xi$ to verify SNR; used $n = 256$, $m = 100$, $k = 4s + 1$, $s = 1, 2, \ldots, 15$ to verify NMSE; and used $n = [170, 220, 270, 320, 370, 420, 470, 520]$, $m = n/2$, $k = n/5$ to verify CRT. SNR is defined as $-20\log(\|\mathbf{x} - \hat{\mathbf{x}}\|_2/\|\mathbf{x}\|_2)$, and CRT is measured with tic and toc.
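The two signal metrics can be written down directly (a sketch matching the definitions above; base-10 logarithm assumed for the SNR):

```python
import numpy as np

def nmse_percent(x, x_hat):
    """Normalized mean square error, in percent."""
    return np.linalg.norm(x - x_hat) / np.linalg.norm(x) * 100.0

def snr_db(x, x_hat):
    """Reconstruction SNR in dB; larger means better recovery."""
    return -20.0 * np.log10(np.linalg.norm(x - x_hat) / np.linalg.norm(x))

x = np.array([1.0, 0.0, -2.0, 0.0])
x_hat = np.array([1.01, 0.0, -1.99, 0.0])
print(nmse_percent(x, x_hat))   # ~0.63 (percent)
print(snr_db(x, x_hat))         # ~44.0 (dB)
```

A perfect recovery gives NMSE of 0% and an SNR of $+\infty$, which matches the correlations described below: higher SNR and lower NMSE both indicate better reconstruction.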
The average SNR of the recovered signals is shown in Figure 4. The denoising performance of these algorithms is positively correlated with the value of SNR. From Figure 4a, we can see that RRCTSL0 always attains the largest SNR even though the SNR of all algorithms decreases sharply as $\xi$ increases, i.e., RRCTSL0 performs better than the other three algorithms in denoising. In addition, Figure 4b shows the SNR of the RRCTSL0 algorithm under different $a$. It can be seen that, when $a \in [0.5, 1.5]$, the SNR increases with $a$, and when $a \ge 1.5$, the SNR reaches its maximum and tends to be stable. Therefore, it is feasible to set $a = 2$ in this paper.
Figure 5 shows the NMSE with the increase of sparsity k. The reconstruction accuracy of these algorithms is inversely related to the value of NMSE. We can see that the RRCTSL0 always obtains the smallest NMSE with the increase of k. This indicates that the RRCTSL0 algorithm has the most accurate reconstruction performance.
From Figure 6, we can see that the larger the signal length is, the longer the CRT. We can see that the SL0 and L 2 -SL0 algorithms perform better than the proposed RRCTSL0 algorithm, while the proposed RRCTSL0 algorithm outperforms the L p -RLS algorithm. Therefore, improving reconstruction speed is one of the main directions of the RRCTSL0 algorithm in the future.

3.3. Applications of the Proposed RRCTSL0 Algorithm

3.3.1. Real Sparse Signal Recovery

Real signal recovery is a popular application of CS technology. Here, we applied the proposed RRCTSL0 algorithm to the reconstruction of a real combined signal. We fixed the length of the signal at $n = 256$, the number of measurements at $m = 64$, and the sparsity at $k = 6$. Given $\{\mathbf{y}, \Phi, \mathbf{b}\}$, we tried to recover the combined signal $\tilde{X} = 0.3\cos(2\pi f_1 T_s t_s) + 0.6\sin(2\pi f_2 T_s t_s) + 0.1\cos(2\pi f_3 T_s t_s)$, where $f_1 = 50$, $f_2 = 100$, and $f_3 = 200$ are the signal frequencies, $T_s = 1/800$ is the sampling interval, and $t_s = [1, 2, 3, \ldots, n]$ is the sampling sequence. The recovery process of combined signal $\tilde{X}$ is shown in Figure 7.
Figure 8 shows the recovery of the real combined signal $\tilde{X}$ by the RRCTSL0 algorithm under different $\xi$. Meanwhile, the time-frequency characteristics (obtained by the short-time Fourier transform) are shown in Figure 9. Obviously, when $\xi$ varies within the range $[0, 0.5]$, the recovered signal closely matches $\tilde{X}$. This proves that the RRCTSL0 algorithm can accurately recover a real sparse combined signal under a certain amount of noise.
Figure 10 shows the recovery of the real combined signal $\tilde{X}$ by different algorithms under noise intensity $\xi = 0.1$. Meanwhile, the time-frequency characteristics (obtained by the short-time Fourier transform) are shown in Figure 11. It can be seen that the RRCTSL0 algorithm performs better than the other three algorithms (the $L_p$-RLS algorithm performs very similarly to the proposed RRCTSL0 algorithm, but the RRCTSL0 algorithm is slightly better upon careful observation). The main reasons are: (1) the unconstrained regularization mechanism can globally optimize the $L_0$-norm, thus improving the antinoise performance of RRCTSL0 in the optimization process; and (2) the combination of the CT smoothed function and the new reweighted function can promote signal sparsity. This verifies that the proposed RRCTSL0 algorithm has good practical value in sparse-signal recovery.

3.3.2. Real-Image Recovery

Real images are considered to be approximately sparse under a proper basis, such as the discrete cosine transform (DCT) basis or the discrete wavelet transform (DWT) basis. Here, we simulated the recovery performance of the proposed RRCTSL0 algorithm on the two real images in Figure 12: Lena and Peppers. Specifically, in order to obtain the sparse coefficients $\mathbf{x}$ of a real image $\mathbf{s}$, we used the DWT basis $\mathbf{V}$: $\mathbf{s} = \mathbf{V}\mathbf{x}$, and $\mathbf{x} = \mathbf{V}^T\mathbf{s}$. Noisy measurements $\mathbf{y}$ were obtained as follows:
$$\mathbf{y} = \psi\mathbf{s} + \mathbf{b} = \psi\mathbf{V}\mathbf{x} + \mathbf{b} = \Phi\mathbf{x} + \mathbf{b}. \tag{31}$$
The entries of measurement noise $\mathbf{b}$ were generated using the i.i.d. Gaussian distribution $N(0, \xi)$. Using the RRCTSL0 algorithm, we could solve this image-recovery problem, as shown in Figure 13. Here, we used the peak SNR (PSNR) and the structural similarity index (SSIM) to compare the recovery performance of the different algorithms. PSNR is defined as
$$\mathrm{PSNR} = 10\log\left(\frac{255^2}{\mathrm{MSE}}\right), \tag{32}$$
where $\mathrm{MSE} = \|\mathbf{x} - \hat{\mathbf{x}}\|_2^2$; and SSIM is defined as
$$\mathrm{SSIM}(p, q) = \frac{(2\mu_p\mu_q + c_1)(2\sigma_{pq} + c_2)}{(\mu_p^2 + \mu_q^2 + c_1)(\sigma_p^2 + \sigma_q^2 + c_2)}. \tag{33}$$
Here, $\mu_p$ is the mean of image $p$, $\mu_q$ is the mean of image $q$, $\sigma_p^2$ is the variance of image $p$, $\sigma_q^2$ is the variance of image $q$, and $\sigma_{pq}$ is the covariance between images $p$ and $q$. The parameters are $c_1 = (z_1 L)^2$ and $c_2 = (z_2 L)^2$, where $z_1 = 0.01$, $z_2 = 0.03$, and $L$ is the dynamic range of the pixel values. The range of SSIM is $[-1, 1]$, and when the two images are identical, the SSIM equals 1.
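Both image metrics can be sketched directly from the definitions above (with the conventional per-pixel mean used in the MSE, and a single global window for SSIM, whereas practical SSIM implementations average over local sliding windows):

```python
import numpy as np

def psnr(img, img_hat):
    """Peak SNR for 8-bit images, with per-pixel-mean MSE."""
    mse = np.mean((img.astype(float) - img_hat.astype(float)) ** 2)
    return 10.0 * np.log10(255.0**2 / mse)

def ssim_global(p, q, L=255.0, z1=0.01, z2=0.03):
    """Single-window SSIM with c1 = (z1 L)^2 and c2 = (z2 L)^2."""
    c1, c2 = (z1 * L) ** 2, (z2 * L) ** 2
    mu_p, mu_q = p.mean(), q.mean()
    var_p, var_q = p.var(), q.var()
    cov_pq = ((p - mu_p) * (q - mu_q)).mean()
    return ((2 * mu_p * mu_q + c1) * (2 * cov_pq + c2)) / \
           ((mu_p**2 + mu_q**2 + c1) * (var_p + var_q + c2))

img = np.arange(64, dtype=float).reshape(8, 8)
noisy = img + 5.0
print(psnr(img, noisy))        # 10*log10(255^2 / 25) ~ 34.15 dB
print(ssim_global(img, img))   # 1.0 for identical images
```

As expected, identical images give an SSIM of 1, and a uniform error of 5 gray levels gives a PSNR of about 34 dB.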
Table 2 and Table 3 show PSNR and SSIM recovered by different algorithms. We can see that the proposed RRCTSL0 algorithm always showed the best performance in terms of PSNR and SSIM at compression ratio (CR) [ 0.4 , 0.5 , 0.6 ] , where CR is defined as m / n .
Then, we fixed the CR at 0.5 and analyzed the image-recovery performance of the RRCTSL0 algorithm at $\xi = [0, 0.05, 0.1, 0.2, 0.5]$, as shown in Figure 14 and Figure 15. Moreover, we report the PSNR and SSIM values in detail in Table 4. From Figure 14, Figure 15, and Table 4, we can see that the RRCTSL0 algorithm performs well when $\xi$ is less than 0.2. However, when $\xi$ exceeds 0.2, the image-recovery performance of the RRCTSL0 algorithm degrades significantly. This shows that the proposed RRCTSL0 algorithm has a certain denoising ability, but its performance under high-noise conditions needs improvement.

4. Conclusions

In this paper, we proposed the RRCTSL0 algorithm, which can be used for finding sparse solutions of a ULSE. We showed that the RRCTSL0 algorithm is an effective method for recovering sparse signals and images. In the RRCTSL0 algorithm, the symmetric CT function is used as a DA function to approximate the $L_0$-norm, and the reweighted function is combined with the CT function to promote sparsity. Simultaneously, the objective optimization model is established with an unconstrained regularization mechanism. On this basis, the RRCTSL0 algorithm performs the optimization process with the CG method to obtain an optimal solution. Experiments on both simulated and real data showed that RRCTSL0: (1) has faster convergence speed; (2) improves reconstruction accuracy and has better denoising performance; and (3) satisfies the needs of sparse-signal and image recovery. In addition, improving the ability of the RRCTSL0 algorithm to resist non-Gaussian noise will be our future work. Moreover, we would also like to apply the RRCTSL0 algorithm to other compressive-sensing applications, such as SAR imaging [36], hyperspectral image classification [37], magnetic resonance imaging (MRI) [38], electrocardiograms (ECG) [39], and transfer hashing [40].

Author Contributions

J.X. and H.Y. conceived and designed the algorithm and paper structure; X.Y. performed the experiments; J.X. and H.Y. analyzed the data; X.Y. and G.R. gave insightful suggestions for the work; and H.Y. and X.Y. wrote the paper.

Funding

This research received no external funding.

Acknowledgments

This paper is supported by the National Key Laboratory of Communication Antijamming Technology.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306.
  2. Candès, E.J.; Wakin, M.B. An introduction to compressive sampling. IEEE Signal Process. Mag. 2008, 2, 21–30.
  3. Badeńska, A.; Błaszczyk, L. Compressed sensing for real measurements of quaternion signals. J. Frankl. Inst. 2017, 354, 5753–5769.
  4. Routray, S.; Ray, A.K.; Mishra, C. MRI Denoising Using Sparse Based Curvelet Transform with Variance Stabilizing Transformation Framework. Indones. J. Electr. Eng. Comput. Sci. 2017, 7, 116–122.
  5. Luan, S.; Zhang, B.; Zhou, S.; Chen, C.; Han, J.; Yang, W. Gabor convolutional networks. IEEE Trans. Image Process. 2018, 27, 4357–4366.
  6. Huang, S.; Tran, T.D. Sparse Signal Recovery via Generalized Entropy Functions Minimization. arXiv 2017, arXiv:1703.10556.
  7. Tropp, J.A.; Gilbert, A.C. Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans. Inf. Theory 2007, 12, 4655–4666.
  8. Wen, J.; Zhou, Z.; Wang, J.; Tang, X.; Mo, Q. A sharp condition for exact support recovery with orthogonal matching pursuit. IEEE Trans. Signal Process. 2017, 6, 1370–1382.
  9. Wang, J.; Kwon, S.; Shim, B. Generalized orthogonal matching pursuit. IEEE Trans. Signal Process. 2012, 12, 6202–6216.
  10. Wang, J.; Kwon, S.; Li, P.; Shim, B. Recovery of sparse signals via generalized orthogonal matching pursuit: A new analysis. IEEE Trans. Signal Process. 2016, 4, 1076–1089.
  11. Needell, D.; Tropp, J.A. CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. Commun. ACM 2010, 12, 93–100.
  12. Dai, W.; Milenkovic, O. Subspace pursuit for compressive sensing signal reconstruction. IEEE Trans. Inf. Theory 2009, 5, 2230–2249.
  13. Liu, J.; Zhang, C.; Pan, C. Priori-information hold subspace pursuit: A compressive sensing-based channel estimation for layer modulated TDS-OFDM. IEEE Trans. Broadcast. 2018, 99, 1–9.
  14. Ekanadham, C.; Tranchina, D.; Simoncelli, E.P. Recovery of Sparse Translation-Invariant Signals with Continuous Basis Pursuit. IEEE Trans. Signal Process. 2011, 10, 4735–4744.
  15. Goldstein, T.; Studer, C. PhaseMax: Convex phase retrieval via basis pursuit. IEEE Trans. Inf. Theory 2018, 4, 2675–2689.
  16. Khan, S.U.; Qureshi, I.M.; Haider, H.; Zaman, F.; Shoaib, B. Diagnosis of faulty sensors in phased array radar using compressed sensing and hybrid IRLS–SSF algorithm. Wirel. Pers. Commun. 2016, 91, 1–20.
  17. Zhao, R.; Lai, X.; Hong, X.; Lin, Z. A matrix-based IRLS algorithm for the least Lp-norm design of 2-D FIR filters. Multidimens. Syst. Signal Process. 2017, 2, 1–15.
  18. Ewald, K.; Schneider, U. Uniformly valid confidence sets based on the Lasso. Statistics 2018, 12, 1358–1387.
  19. Wright, S.J.; Nowak, R.D.; Figueiredo, M.A.T. Sparse reconstruction by separable approximation. IEEE Trans. Signal Process. 2009, 7, 2479–2493.
  20. Qiao, B.; Zhang, X.; Wang, C.; Zhang, H.; Chen, X. Sparse regularization for force identification using dictionaries. J. Sound Vib. 2016, 368, 71–86.
  21. Ye, X.; Zhu, W.; Zhang, A.; Meng, Q. Sparse channel estimation in MIMO-OFDM systems based on an improved sparse reconstruction by separable approximation algorithm. J. Inf. Comput. Sci. 2013, 10, 609–619.
  22. Quan, X.; Zhang, B.; Wang, Z.; Gao, C.; Wu, Y. An efficient data compression technique based on BPDN for scattered fields from complex targets. Sci. China Inf. Sci. 2017, 60, 109302.
  23. Li, Z.X.; Li, Z.C. Accelerated 3D blind separation of convolved mixtures based on the fast iterative shrinkage thresholding algorithm for adaptive multiple subtraction. Geophysics 2018, 83, V99–V113.
  24. Kim, D.; Fessler, J.A. Another look at the fast iterative shrinkage/thresholding algorithm (FISTA). SIAM J. Optim. 2018, 28, 223–250.
  25. Pant, J.K.; Lu, W.S.; Antoniou, A. New Improved Algorithms for Compressive Sensing Based on Lp Norm. IEEE Trans. Circuits Syst. II Express Briefs 2014, 61, 198–202.
  26. Ye, X.; Zhu, W.P.; Zhang, A.; Yan, J. Sparse channel estimation of MIMO-OFDM systems with unconstrained smoothed L0-norm-regularized least squares compressed sensing. EURASIP J. Wirel. Commun. Netw. 2013, 2013, 282.
  27. Mohimani, H.; Babaie-Zadeh, M.; Jutten, C. A Fast Approach for Overcomplete Sparse Decomposition Based on Smoothed L0 Norm. IEEE Trans. Signal Process. 2009, 57, 289–301.
  28. Malek-Mohammadi, M.; Koochakzadeh, A.; Babaie-Zadeh, M.; Jansson, M.; Rojas, C.R. Successive concave sparsity approximation for compressed sensing. IEEE Trans. Signal Process. 2016, 64, 5657–5671.
  29. Guo, Q.; Ruan, G.; Liao, Y. A time-frequency domain underdetermined blind source separation algorithm for MIMO radar signals. Symmetry 2017, 9, 104.
  30. Candès, E.J.; Wakin, M.B.; Boyd, S.P. Enhancing sparsity by reweighted L1 minimization. J. Fourier Anal. Appl. 2008, 14, 877–905.
  31. Shi, Z. A Weighted Block Dictionary Learning Algorithm for Classification. Math. Probl. Eng. 2016, 2016, 1–15.
  32. Fang, X.F.; Zhang, J.S.; Li, Y.Q. Sparse Signal Reconstruction Based on Multiparameter Approximation Function with Smoothed L0 Norm. Math. Probl. Eng. 2014, 6, 1–9.
  33. Wang, J.H.; Huang, Z.T.; Zhou, Y.Y.; Wang, F.H. Robust sparse recovery based on approximate L0 norm. Acta Electron. Sin. 2012, 40, 1185–1189.
  34. Xiao, J.; Del-Blanco, C.R.; Cuevas, C.; García, N. Fast image decoding for block compressed sensing based encoding by using a modified smooth L0-norm. In Proceedings of the International Conference on Consumer Electronics, Berlin, Germany, 5–7 September 2016; pp. 242–244.
  35. Ye, X.; Zhu, W.P. Sparse channel estimation of pulse-shaping multiple-input–multiple-output orthogonal frequency division multiplexing systems with an approximate gradient L2-SL0 reconstruction algorithm. IET Commun. 2014, 8, 1124–1131.
  36. Tian, H.; Li, D. Sparse flight array SAR downward-looking 3-D imaging based on compressed sensing. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1395–1399.
  37. Li, H.; Li, C.; Zhang, C.; Liu, Z.; Liu, C. Hyperspectral image classification with spatial filtering and L2,1 norm. Sensors 2017, 17, 314.
  38. Lazarus, C.; Weiss, P.; Vignaud, A.; Ciuciu, P. An empirical study of the maximum degree of undersampling in compressed sensing for T2*-weighted MRI. Magn. Reson. Imaging 2018, 53, 112–122.
  39. Tseng, Y.H.; Chen, Y.H.; Lu, C.W. Adaptive integration of the compressed algorithm of CS and NPC for the ECG signal compressed algorithm in VLSI implementation. Sensors 2017, 17, 2288.
  40. Zhou, J.T.; Zhao, H.; Peng, X.; Fang, M.; Qin, Z.; Goh, R.S.M. Transfer hashing: From shallow to deep. IEEE Trans. Neural Netw. Learn. Syst. 2018, 99, 1–11.
Figure 1. Framework of a compressive sensing (CS) model with noise.
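The noisy CS measurement model of Figure 1 can be instantiated in a few lines. A minimal sketch, assuming a Gaussian sensing matrix and illustrative dimensions and noise level (not the paper's exact experimental setup):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k, xi = 256, 64, 8, 0.1  # signal length, measurements, sparsity, noise level (illustrative)

# k-sparse signal x and a Gaussian measurement matrix Phi (one common choice).
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)

# Noisy CS measurement model: y = Phi x + xi * epsilon.
y = Phi @ x + xi * rng.standard_normal(m)
```

The recovery task in the figures below is to estimate the k-sparse x from the short, noisy measurement vector y.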
Figure 2. Different delta-approximating (DA) functions plotted for σ = 0.1 and a = 2, together with the L0-norm and L0.5-norm in 2D.
Figure 3. Normalized mean square error (NMSE) of the recovered signal versus the number of iterations. (a) Comparison of the SL0, L2-SL0, and Lp-RLS algorithms with the proposed RRCTSL0 algorithm; (b) performance of the RRCTSL0 algorithm for sparsity k = [5, 15, 25, 35].
Figure 4. Signal-to-noise ratio (SNR) analysis. (a) Signal SNR for the SL0, L2-SL0, and Lp-RLS algorithms and the proposed RRCTSL0 algorithm with noise intensity ξ from 0 to 0.5 in steps of 0.05, averaged over 100 runs; (b) SNR of the proposed RRCTSL0 algorithm with a = [0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5, 5] at ξ = 0.05.
Figure 5. Signal NMSE for the SL0, L2-SL0, and Lp-RLS algorithms and the proposed RRCTSL0 algorithm with sparsity k from 1 to 71 in steps of 5, averaged over 100 runs in the noisy case ξ = 0.1.
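The NMSE reported in Figures 3 and 5 and the SNR of Figure 4 can be computed with the standard conventions. A minimal sketch, assuming the usual definitions (the paper's exact normalization is given in its experiments section):

```python
import numpy as np

def nmse(x_true, x_rec):
    """Normalized mean square error: ||x_rec - x_true||^2 / ||x_true||^2."""
    x_true = np.asarray(x_true, dtype=float)
    x_rec = np.asarray(x_rec, dtype=float)
    return np.sum((x_rec - x_true) ** 2) / np.sum(x_true ** 2)

def snr_db(x_true, x_rec):
    """Recovery SNR in dB; the reciprocal of NMSE on a logarithmic scale."""
    return -10.0 * np.log10(nmse(x_true, x_rec))
```

For example, a reconstruction that is uniformly 10% too large has NMSE 0.01, i.e., an SNR of 20 dB.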
Figure 6. Signal CPU running time (CRT) for the SL0, L2-SL0, and Lp-RLS algorithms and the proposed RRCTSL0 algorithm with signal length n = [170, 220, 270, 320, 370, 420, 470, 520], averaged over 100 runs in the noisy case ξ = 0.1.
Figure 7. Recovery process of the combined signal X ˜ by the RRCTSL0 algorithm.
Figure 8. Signal-recovery performance of the proposed RRCTSL0 algorithm as the noise level increases through ξ = [0, 0.05, 0.1, 0.2, 0.5]. (a) ξ = 0; (b) ξ = 0.05; (c) ξ = 0.1; (d) ξ = 0.2; (e) ξ = 0.5.
Figure 9. Time-frequency characteristics of the reconstruction by the proposed RRCTSL0 algorithm as the noise level increases through ξ = [0, 0.05, 0.1, 0.2, 0.5]. (a) Time-frequency characteristic of the original signal; (b) ξ = 0; (c) ξ = 0.05; (d) ξ = 0.1; (e) ξ = 0.2; (f) ξ = 0.5.
Figure 10. Signal recovery effect by different algorithms when ξ = 0.1 . (a) Signal recovery by the SL0 algorithm; (b) signal recovery by the L 2 -SL0 algorithm; (c) signal recovery by the L p -RLS algorithm; (d) signal recovery by the RRCTSL0 algorithm.
Figure 11. Reconstruction time-frequency characteristics of different algorithms when ξ = 0.1 . (a) SL0; (b) L 2 -SL0; (c) L p -RLS; (d) RRCTSL0.
Figure 12. Real images used in the recovery experiments: (a) Lena ( 256 × 256 ); (b) Peppers ( 256 × 256 ).
Figure 13. Recovery process of the real images by the RRCTSL0 algorithm.
Figure 14. Lena image-recovery performance of the proposed RRCTSL0 algorithm as the noise level increases through ξ = [0, 0.05, 0.1, 0.2, 0.5]. (a) ξ = 0; (b) ξ = 0.05; (c) ξ = 0.1; (d) ξ = 0.2; (e) ξ = 0.5.
Figure 15. Peppers image-recovery performance of the proposed RRCTSL0 algorithm as the noise level increases through ξ = [0, 0.05, 0.1, 0.2, 0.5]. (a) ξ = 0; (b) ξ = 0.05; (c) ξ = 0.1; (d) ξ = 0.2; (e) ξ = 0.5.
Table 1. Regularized Reweighted Composite Trigonometric Smoothed L 0 -Norm Minimization (RRCTSL0) Algorithm Using the CG Method.
Initialization: Φ, x, y, ν, ς, τ, σ_T, T, λ, a, and the initial estimate x
Step 1: Set t = 0, σ = σ_1;
Step 2: Compute W using Equation (7), and σ_t for t = 2, 3, …, T − 1 using Equation (15);
Step 3: For t = 1, 2, …, T:
 (1) Set σ = σ_t, Γ = 0, x(Γ) = x
 (2) Compute the residual Res = ||ϱ(Γ) − d(Γ)||₂² and the iteration-termination threshold err
 (3) While Res > err:
  (a) Compute x(Γ + 1) using Equations (16)–(23), and W using Equation (7)
  (b) Set Γ = Γ + 1
  (c) Compute Res = ||ϱ(Γ) − d(Γ)||₂²
 (4) Set x = x(Γ)
Step 4: Output x
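The loop structure of Table 1 can be sketched as follows. Since Equations (7) and (15)–(23) (the reweighting matrix W, the σ schedule, and the CG update) are defined in the body of the paper, this sketch substitutes a geometric σ schedule and a plain projected-gradient step on a Gaussian smoothed-L0 surrogate; it illustrates the outer/inner iteration pattern only, not the paper's exact method.

```python
import numpy as np

def smoothed_l0_sketch(Phi, y, T=10, sigma_min=1e-3, err=1e-6, max_inner=50, mu=1.0):
    """Structural sketch of the Table 1 loops: an outer decreasing-sigma
    schedule and an inner iterate-until-small-residual loop, with
    stand-in update rules in place of the paper's Eqs. (7), (15)-(23)."""
    pinv = np.linalg.pinv(Phi)
    x = pinv @ y                                  # least-squares initialization
    # Decreasing sigma schedule sigma_1 -> sigma_T (stand-in for Eq. (15)).
    sigmas = np.geomspace(2.0 * np.max(np.abs(x)) + 1e-12, sigma_min, T)
    for sigma in sigmas:                          # Step 3: for t = 1, ..., T
        prev = x.copy()
        for _ in range(max_inner):                # Step 3(3): while Res > err
            # Shrink toward sparsity along the gradient of the Gaussian
            # smoothed-L0 surrogate (stand-in for the composite
            # trigonometric function of the paper).
            x = x - mu * x * np.exp(-x**2 / (2.0 * sigma**2))
            x = x - pinv @ (Phi @ x - y)          # project back onto y = Phi x
            res = np.sum((x - prev) ** 2)         # cf. Res in Table 1
            if res < err:
                break
            prev = x.copy()
    return x
```

Under noiseless measurements and modest sparsity, this SL0-style structure already recovers the signal closely; the paper's reweighting and CG refinements target accuracy under noise.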
Table 2. Peak SNR (PSNR) and Structural Similarity Index (SSIM) of the recovered Lena image by the SL0, L 2 -SL0, and L p -RLS algorithms and the proposed RRCTSL0 algorithm with ξ = 0.01 .
PSNR (dB):

| CR  | SL0    | L2-SL0 | Lp-RLS | RRCTSL0 |
|-----|--------|--------|--------|---------|
| 0.4 | 29.075 | 29.255 | 32.369 | 34.825  |
| 0.5 | 30.379 | 30.688 | 34.664 | 36.669  |
| 0.6 | 33.140 | 30.232 | 36.699 | 36.789  |

SSIM (%):

| CR  | SL0   | L2-SL0 | Lp-RLS | RRCTSL0 |
|-----|-------|--------|--------|---------|
| 0.4 | 98.24 | 98.28  | 99.16  | 99.53   |
| 0.5 | 98.70 | 98.77  | 99.51  | 99.69   |
| 0.6 | 99.31 | 98.64  | 99.69  | 99.70   |
Table 3. PSNR and SSIM of the recovered Peppers image by the SL0, L 2 -SL0, and L p -RLS algorithms and the proposed RRCTSL0 algorithm with ξ = 0.01 .
PSNR (dB):

| CR  | SL0    | L2-SL0 | Lp-RLS | RRCTSL0 |
|-----|--------|--------|--------|---------|
| 0.4 | 21.811 | 26.343 | 33.887 | 34.673  |
| 0.5 | 28.405 | 29.769 | 34.588 | 35.046  |
| 0.6 | 32.276 | 33.188 | 34.872 | 35.160  |

SSIM (%):

| CR  | SL0   | L2-SL0 | Lp-RLS | RRCTSL0 |
|-----|-------|--------|--------|---------|
| 0.4 | 93.13 | 97.33  | 99.53  | 99.61   |
| 0.5 | 98.35 | 98.80  | 99.60  | 99.64   |
| 0.6 | 99.33 | 99.45  | 99.63  | 99.65   |
Table 4. PSNR and SSIM of the images recovered by the proposed RRCTSL0 algorithm as the noise level increases through ξ = [0, 0.05, 0.1, 0.2, 0.5].
| ξ    | Image   | PSNR (dB) | SSIM (%) |
|------|---------|-----------|----------|
| 0    | Lena    | 38.492    | 99.80    |
| 0    | Peppers | 39.367    | 99.87    |
| 0.05 | Lena    | 29.203    | 98.28    |
| 0.05 | Peppers | 28.826    | 97.53    |
| 0.1  | Lena    | 24.272    | 94.78    |
| 0.1  | Peppers | 24.305    | 95.64    |
| 0.2  | Lena    | 18.727    | 83.43    |
| 0.2  | Peppers | 19.782    | 86.07    |
| 0.5  | Lena    | 12.489    | 49.50    |
| 0.5  | Peppers | 13.416    | 55.34    |
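The PSNR values in Tables 2–4 follow the standard definition for 8-bit images; SSIM is omitted from this sketch because it involves windowed local statistics (a library implementation such as scikit-image's is the usual choice). A minimal PSNR computation:

```python
import numpy as np

def psnr(ref, rec, peak=255.0):
    """Peak signal-to-noise ratio in dB for images with maximum value `peak`."""
    err = np.mean((np.asarray(ref, dtype=float) - np.asarray(rec, dtype=float)) ** 2)
    return float("inf") if err == 0 else 10.0 * np.log10(peak**2 / err)
```

A perfect reconstruction gives infinite PSNR; the worst case for an 8-bit image (every pixel off by the full 255 range) gives 0 dB, which frames the 12–39 dB spread in Table 4.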

Share and Cite

Xiang, J.; Yue, H.; Yin, X.; Ruan, G. A Reweighted Symmetric Smoothed Function Approximating L0-Norm Regularized Sparse Reconstruction Method. Symmetry 2018, 10, 583. https://0-doi-org.brum.beds.ac.uk/10.3390/sym10110583