# Reweighted Factor Selection for SLMS-RL1 Algorithm under Gaussian Mixture Noise Environments

School of Information Science and Engineering, Chongqing Jiaotong University, Chongqing 400074, China

College of Computer Science, Chongqing University, Chongqing 400044, China

Department of Electronics and Information Systems, Akita Prefectural University, Akita 015-0055, Japan

Author to whom correspondence should be addressed.

Academic Editor: Paul M. Goggans

Received: 6 July 2015 / Revised: 2 September 2015 / Accepted: 11 September 2015 / Published: 25 September 2015

The sign least mean square with reweighted L1-norm constraint (SLMS-RL1) algorithm is an attractive sparse channel estimation method among Gaussian mixture model (GMM) based algorithms for impulsive noise environments. The SLMS-RL1 algorithm can exploit channel sparsity given an appropriate reweighted factor, which is one of the key parameters for adjusting the sparse constraint of the SLMS-RL1 algorithm. However, to the best of the authors' knowledge, a reweighted factor selection scheme has not been developed. This paper proposes a Monte-Carlo (MC) based reweighted factor selection method to further strengthen the performance of the SLMS-RL1 algorithm. To validate the performance of SLMS-RL1 using the proposed reweighted factor, simulation results are provided to demonstrate that the convergence speed is reduced by increasing the channel sparsity, while the steady-state MSE performance changes only slightly under different GMM impulsive-noise strengths.

Adaptive filtering algorithms have been widely applied to multipath channel estimation, especially in broadband wireless systems [1,2,3,4,5,6,7], where broadband signals are vulnerable to multipath fading as well as additive noise [8,9,10]. Hence, channel state information (CSI) is necessary for coherent demodulation [11]. Based on the classical Gaussian noise model, the second-order-statistics-based least mean square (LMS) algorithm has been widely used to estimate channels due to its simplicity and robustness [1,2]. However, the performance of LMS is usually degraded by impulsive noise [12], which is common in broadband wireless systems and can be described by the Gaussian mixture model (GMM) [13]. Thus, it is necessary to develop robust channel estimation algorithms for the presence of GMM impulsive noise. In [1], a standard sign least mean square (SLMS) algorithm was proposed to suppress impulsive noise. In [14], Jiang et al. proposed a sophisticated robust matched filtering algorithm in ${\ell}_{p}$-space to realize time delay estimation (TDE) and joint delay-Doppler estimation (JDDE) for target localization. On the other hand, wireless channels can often be modeled as sparse or cluster-sparse, i.e., many of the channel coefficients are zero [15,16,17,18,19]. However, the standard SLMS algorithm does not exploit this sparse channel structure, so a potential performance gain could be obtained by adopting advanced adaptive channel estimation algorithms.

To exploit channel sparsity as well as to mitigate GMM impulsive noise, some state-of-the-art channel estimation algorithms using linear programming [20,21] and Bayesian learning [22] have been investigated. However, these algorithms often have high computational complexity, and fast channel estimation is one of the important factors in designing wireless communication systems. Hence, a fast adaptive sparse channel estimation algorithm, i.e., the SLMS with reweighted L1-norm constraint (SLMS-RL1) algorithm, was proposed in [23]. In our previous work, we focused on the convergence analysis without considering the reweighted factor selection, where the parameter was set empirically as $\epsilon =0.005$ [6]. However, the reweighted factor is one of the critical parameters for balancing estimation performance and sparsity exploitation. To this end, this paper proposes a Monte-Carlo (MC) based method to select a suitable reweighted factor for the SLMS-RL1 algorithm. Numerical simulations are provided to evaluate the performance of the SLMS-RL1 algorithm using the proposed reweighted factor.

The rest of the paper is organized as follows. In Section 2, we introduce the GMM noise model and review the SLMS-RL1 algorithm. By analyzing the convergence performance of the SLMS-RL1 algorithm, the important problem of reweighted factor selection is pointed out. In Section 3, the MC-based selection method is proposed to select an appropriate reweighted factor for the SLMS-RL1 algorithm. In Section 4, numerical simulations are provided to demonstrate the effectiveness of SLMS-RL1 with the proposed reweighted factor. Finally, Section 5 concludes this paper.

Consider an additive noise interference channel, which is modeled by the unknown N-length finite impulse response (FIR) vector $w={\left[{w}_{0},{w}_{1},\dots ,{w}_{N-1}\right]}^{T}$ at discrete time index $n$. The ideal received signal is expressed as
where $x\left(n\right)={\left[x\left(n\right),x\left(n-1\right),\dots ,x\left(n-N+1\right)\right]}^{T}$ is the input signal vector of the $N$ most recent input samples; $w$ is an $N$-dimensional column vector of the unknown system that we wish to estimate, and $z\left(n\right)$ is impulsive noise which can be described by Gaussian mixture model (GMM) [13] distribution as
where $T\gg 1$ denotes the impulsive-noise strength, $\mathcal{C}\mathcal{N}\left(0,{\sigma}_{n}^{2}\right)$ denotes the Gaussian distribution with zero mean and variance ${\sigma}_{n}^{2}$, and $\varphi $ is the mixture parameter controlling the impulsive noise level. According to Equation (2), stronger impulsive noise corresponds to a larger noise variance $T{\sigma}_{n}^{2}$ as well as a larger mixture parameter $\varphi $. Also from Equation (2), the variance of the GMM noise is obtained as

$$d\left(n\right)={x}^{T}\left(n\right)w+z\left(n\right)$$

$$p\left(z\left(n\right)\right)=\left(1-\varphi \right)\cdot \mathcal{C}\mathcal{N}\left(0,{\sigma}_{n}^{2}\right)+\varphi \cdot \mathcal{C}\mathcal{N}\left(0,T{\sigma}_{n}^{2}\right)$$

$${\sigma}_{z}^{2}=E\left\{{z}^{2}\left(n\right)\right\}=\left(1-\varphi \right){\sigma}_{n}^{2}+\varphi T{\sigma}_{n}^{2}$$
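As a concrete illustration of Equations (2) and (3), the following sketch draws GMM impulsive-noise samples and checks the empirical variance against Equation (3). The helper name `gmm_noise` and the real-valued simplification (the paper's $\mathcal{C}\mathcal{N}$ is complex-valued) are our own choices.

```python
import numpy as np

def gmm_noise(n_samples, sigma_n=1.0, T=400, phi=0.1, rng=None):
    """Draw samples from the two-component Gaussian mixture of Equation (2):
    with probability (1 - phi) the variance is sigma_n^2, and with
    probability phi it is the inflated variance T * sigma_n^2."""
    rng = np.random.default_rng(rng)
    impulsive = rng.random(n_samples) < phi              # Bernoulli(phi) mask
    scale = np.where(impulsive, np.sqrt(T) * sigma_n, sigma_n)
    return rng.standard_normal(n_samples) * scale

# Empirical variance should match Equation (3):
# sigma_z^2 = (1 - phi) * sigma_n^2 + phi * T * sigma_n^2
z = gmm_noise(1_000_000, sigma_n=1.0, T=400, phi=0.1)
print(np.var(z))  # close to 0.9 + 0.1 * 400 = 40.9
```

Note how the occasional high-variance component dominates the total variance, which is exactly why an LMS-type algorithm driven by raw errors is destabilized by such noise.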

Note that $z\left(n\right)$ reduces to the Gaussian noise model if $\varphi =0$. The objective of adaptive channel estimation is to adaptively estimate $w\left(n\right)$ with limited complexity and memory, given the sequential observations $\left\{d\left(n\right),x\left(n\right)\right\}$ in the presence of additive GMM noise $z\left(n\right)$. According to Equation (1), the instantaneous estimation error $e\left(n\right)$ can be written as
where $w\left(n\right)$ is the estimator of $w$ at iteration $n$.

$$e\left(n\right)=d\left(n\right)-{w}^{T}\left(n\right)x\left(n\right)$$

To obtain the optimal channel estimation, one can construct the ${\ell}_{0}$-norm minimization problem as
where ${\left\|w\left(n\right)\right\|}_{0}$ denotes the ${\ell}_{0}$-norm operator, defined as ${\left\|w\left(n\right)\right\|}_{0}=\#\left\{i\mid {w}_{i}\left(n\right)\ne 0\right\}$; that is, ${\left\|w\left(n\right)\right\|}_{0}$ counts the number of nonzero coefficients. However, solving the ${\ell}_{0}$-norm minimization is NP-hard. Hence, it is necessary to introduce an approximation of the ${\ell}_{0}$-norm so that Equation (5) becomes solvable. For adaptive sparse channel estimation, reweighted ${\ell}_{1}$-norm (RL1) minimization performs better than the plain ${\ell}_{1}$-minimization usually employed in compressive sensing [24], because a properly reweighted ${\ell}_{1}$-norm approximates the ${\ell}_{0}$-norm more accurately than the ${\ell}_{1}$-norm. Hence, one approach to enforce sparsity of the solution for the sparse SLMS algorithm is to introduce an RL1 penalty term in the cost function, yielding the RL1-LAE cost, which adds a penalty proportional to the RL1 norm of the coefficient vector. The cost function Equation (5) can then be revised as
where $\lambda $ is the weight associated with the penalty term and elements of the $N\times N$ diagonal reweighted matrix $F\left(n\right)$ are devised as
where $\epsilon $ is some positive number, so that ${\left[F\left(n\right)\right]}_{ii}>0$ for $i=0,1,\dots ,N-1$. The update equation can be derived by differentiating Equation (6) with respect to the FIR channel vector $w\left(n\right)$. The resulting update equation is
where $\rho =\mu \lambda $. Notice that in Equation (8), since $\text{sgn}\left(F\left(n\right)\right)={I}_{N\times N}$, one can get $\text{sgn}\left(F\left(n\right)w\left(n\right)\right)=\text{sgn}\left(F\left(n\right)\right)\text{sgn}\left(w\left(n\right)\right)=\text{sgn}\left(w\left(n\right)\right)$, where $\text{sgn}(\cdot )$ denotes sign function, i.e., $\text{sgn}\left(a\right)=\text{a}/\left|\text{a}\right|$ for $a\ne 0$, $\text{sgn}\left(a\right)=0$ for $a=0$.

$$G\left(n\right)={\left\|e\left(n\right)\right\|}_{1}+\lambda {\left\|w\left(n\right)\right\|}_{0}$$

$$G\left(n\right)={\left\|e\left(n\right)\right\|}_{1}+\lambda {\left\|F\left(n\right)w\left(n\right)\right\|}_{1}$$

$${\left[F\left(n\right)\right]}_{ii}=\frac{1}{\epsilon +\left|{\left[w\left(n-1\right)\right]}_{i}\right|},i=0,1,\dots ,N-1$$

$$\begin{array}{c}w\left(n+1\right)=w\left(n\right)-\frac{\mu \partial G\left(n\right)}{\partial w\left(n\right)}\hfill \\ =w\left(n\right)+\mu x\left(n\right)\text{sgn}\left(e\left(n\right)\right)-\frac{\rho \text{sgn}\left(w\left(n\right)\right)}{\epsilon +\left|w\left(n-1\right)\right|}\hfill \end{array}$$
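The update of Equation (8) can be sketched in a few lines of Python, using the Table 1 values $\mu =0.01$, $\lambda =0.008$, $\epsilon =0.005$ as defaults. The function name `slms_rl1_step` and the Gaussian-only, real-valued toy run are illustrative assumptions, not the paper's exact simulation.

```python
import numpy as np

def slms_rl1_step(w, w_prev, x, d, mu=0.01, rho=0.01 * 0.008, eps=0.005):
    """One SLMS-RL1 iteration following Equation (8); rho = mu * lambda,
    and w_prev is the estimate from the previous iteration used by the
    reweighting term."""
    e = d - w @ x                                          # Equation (4)
    w_next = w + mu * np.sign(e) * x - rho * np.sign(w) / (eps + np.abs(w_prev))
    return w_next, e

# Toy run on a K-sparse channel with Gaussian noise only (a sketch; the
# paper uses GMM noise and complex channel taps).
rng = np.random.default_rng(1)
N, K = 80, 4
w_true = np.zeros(N)
w_true[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
w = np.zeros(N)
w_prev = np.zeros(N)
for n in range(5000):
    x = np.sign(rng.standard_normal(N))                    # PN-like training input
    d = w_true @ x + 0.1 * rng.standard_normal()
    w_new, _ = slms_rl1_step(w, w_prev, x, d)
    w_prev, w = w, w_new
```

Note that only the sign of the error enters the gradient step, which is what caps the influence of an impulsive-noise outlier at $\pm \mu$ per tap, while the last term is the zero attractor induced by the RL1 penalty.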

Define the misalignment channel vector as $v\left(n\right)\triangleq w\left(n\right)-w$ and its second moment matrix as $C\left(n\right)\triangleq E\left\{v\left(n\right){v}^{T}\left(n\right)\right\}$; then Equation (4) can be rewritten as $e\left(n\right)=z\left(n\right)-{v}^{T}\left(n\right)x\left(n\right)$. To verify the performance, the convergence of the SLMS-RL1 algorithm is analyzed via the mean convergence and the excess MSE. Based on independence assumptions, the authors of [23] derive that SLMS-RL1 is stable if
where ${\lambda}_{\text{max}}$ denotes the maximal eigenvalue of $R=E\left\{x\left(n\right){x}^{T}\left(n\right)\right\}$. Then the steady-state mean estimate is derived as
where ${\sigma}_{e}\left(n\right)\approx {\sigma}_{n}\sqrt{1+\left(T-1\right)\varphi}$ and $w\left(\infty \right)=\underset{n\to \infty}{\mathrm{lim}}w\left(n\right)$. Similarly, excess mean square error (MSE) is approximated as
where $\alpha $ and $\beta $ are defined as
and
respectively. Here, ${\left\|\cdot \right\|}_{1}$ denotes the ${\ell}_{1}$-norm. Both the mean estimation error and the excess MSE imply that the reweighted factor $\epsilon $ adjusts the performance of the SLMS-RL1 algorithm. Hence, it is necessary to develop an effective method to choose a suitable reweighted factor in order to further strengthen the SLMS-RL1 algorithm.

$$0<\mu <{\sigma}_{n}\sqrt{\pi \left(1+\left(T-1\right)\varphi \right)/2}/{\lambda}_{\text{max}}$$

$$w\left(\infty \right)=w-\sqrt{\pi /2}{\mu}^{-1}\rho {\sigma}_{e}{R}^{-1}\text{sgn}\left(w\left(\infty \right)\right)/\left(\epsilon +\left|w\left(\infty \right)\right|\right)$$

$$\sqrt{\pi /8}\left(\mu {\lambda}_{\text{max}}+{\mu}^{-1}\left(\alpha -\beta \right)\right){\sigma}_{e}$$

$$\alpha \triangleq {\rho}^{2}N/{\epsilon}^{2}$$

$$\beta \triangleq 2\rho \left(1-\sqrt{2/\pi}\mu {\sigma}_{e}^{-1}{\lambda}_{\text{max}}\right)E\left\{{\left\|w\left(\infty \right)/\left(\epsilon +w\left(\infty \right)\right)\right\|}_{1}-{\left\|w/\left(\epsilon +w\left(\infty \right)\right)\right\|}_{1}\right\}$$
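As a quick numerical check of the step-size stability bound above, the following sketch evaluates its right-hand side for the Section 3 settings ($SNR=10$ dB so ${\sigma}_{n}^{2}={10}^{-1}$, $T=400$, $\varphi =0.1$), assuming unit-power training input so that ${\lambda}_{\text{max}}=1$; the helper name is ours.

```python
import numpy as np

def slms_rl1_mu_bound(sigma_n, T, phi, lambda_max):
    """Right-hand side of the stability condition:
    mu < sigma_n * sqrt(pi * (1 + (T - 1) * phi) / 2) / lambda_max."""
    return sigma_n * np.sqrt(np.pi * (1.0 + (T - 1.0) * phi) / 2.0) / lambda_max

sigma_n = np.sqrt(10.0 ** (-10 / 10))   # sigma_n^2 = 10^(-SNR/10) at SNR = 10 dB
mu_max = slms_rl1_mu_bound(sigma_n, T=400, phi=0.1, lambda_max=1.0)
print(mu_max)  # about 2.53, so the paper's step-size mu = 0.01 is well inside
```

Interestingly, the bound grows with the impulsive-noise strength $T$: the stronger the impulsive noise, the larger the admissible step-size for the sign-based update.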

The MC-based reweighted factor selection method is developed for the SLMS-RL1 algorithm in different SNR regimes, with impulsive-noise strength $T=400$, mixture parameter $\varphi =0.1$ and channel sparsity $K=4$. To obtain average performance, $M=1000$ independent Monte-Carlo runs are adopted. The simulation setup is configured according to a typical broadband wireless communication system [10]. The signal bandwidth is 50 MHz, located at the central radio frequency of 2.1 GHz, and the maximum delay spread is $0.8\text{}\mu s$. Hence, the maximum length of the channel vector $w$ is $N=80$. In addition, each dominant channel tap follows a Gaussian distribution $\mathcal{C}\mathcal{N}\left(0,{\sigma}_{w}^{2}\right)$, subject to $E\left\{{\left\|w\right\|}_{2}^{2}\right\}=1$, and the tap positions are randomly decided within $w$. To evaluate the SLMS-RL1 algorithm using different factors, we adopt the average mean square error (MSE) metric, which is defined as
where $w$ and $w\left(n\right)$ are the actual channel vector and its estimate, respectively, and $E\{\cdot \}$ denotes the mathematical expectation operator. The received SNR is defined as ${P}_{0}/{\sigma}_{z}^{2}$, where ${P}_{0}$ is the received power of the pseudo-random noise (PN) binary training sequence. Detailed parameters for the computer simulations are listed in Table 1.

$$\text{Average MSE}\left\{w\left(n\right)\right\}\triangleq 10{\mathrm{log}}_{10}E\left\{\frac{{\left\|w\left(n\right)-w\right\|}_{2}^{2}}{{\left\|w\right\|}_{2}^{2}}\right\}$$
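The average MSE metric above can be computed as follows; the function name `average_mse_db` and the array layout (Monte-Carlo runs along the first axis, channel taps along the last) are our conventions.

```python
import numpy as np

def average_mse_db(w_hat_runs, w_true_runs):
    """Normalized squared estimation error, averaged over Monte-Carlo runs
    and expressed in dB, as in the Average MSE definition."""
    num = np.sum((w_hat_runs - w_true_runs) ** 2, axis=-1)
    den = np.sum(w_true_runs ** 2, axis=-1)
    return 10.0 * np.log10(np.mean(num / den))

# A 10% normalized error corresponds to -10 dB.
w_true = np.array([[1.0, 0.0, 0.0, 0.0]])
w_hat = w_true + np.array([[0.0, 0.0, 0.0, np.sqrt(0.1)]])
print(average_mse_db(w_hat, w_true))  # approximately -10.0
```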

| Parameters | Values |
|---|---|
| Training signal structure | Pseudo-random binary sequences |
| Channel length | $N=80$ |
| No. of nonzero coefficients | $K=4$ |
| Distribution of nonzero coefficients | Random Gaussian distribution $\mathcal{C}\mathcal{N}\left(0,1\right)$ |
| Received SNR | $SNR\in \left\{5\text{dB},10\text{dB}\right\}$ |
| GMM noise distribution | ${\alpha}_{1}={\alpha}_{2}=0$, ${\sigma}_{1}^{2}={10}^{\left(-SNR/10\right)}$, ${\sigma}_{2}^{2}=T{\sigma}_{1}^{2}$, $T=400$ |
| Step-size | $\mu =0.01$ |
| Regularization parameter for the sparse penalty | $\lambda =0.008$ |
| Reweighted factors of the SLMS-RL1 algorithm | $\epsilon \in \left\{0.5,0.1,0.05,0.01,0.005,0.001\right\}$ |

First of all, the MC-based reweighted factor selection is performed in Figure 1 and Figure 2, where average MSE curves of the SLMS-RL1 algorithm are depicted for two SNR regimes, i.e., $SNR\in \left\{5\text{dB},10\text{dB}\right\}$. To confirm the effectiveness of the proposed method, the standard SLMS [1] is considered as a performance benchmark. As discussed in Section 2, the MSE performance of the SLMS-RL1 algorithm depends highly on the reweighted factor $\epsilon $. In both figures and in both SNR regimes, the lowest MSE of SLMS-RL1 is achieved when the reweighted factor is set as $\epsilon =0.005$. On the one hand, a reweighted factor that is too large may suppress noise excessively and hence result in lossy exploitation of the channel sparsity. On the other hand, a reweighted factor that is too small may mitigate noise insufficiently, causing inefficient exploitation of the channel sparsity. Therefore, a suitable reweighted factor balances noise suppression and channel sparsity exploitation.
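The MC-based selection loop described above can be sketched as follows. The names `select_epsilon` and `run_trial` are hypothetical, and the toy trial function merely stands in for a full SLMS-RL1 simulation, with its minimum placed at $\epsilon =0.005$ purely for illustration.

```python
import numpy as np

def select_epsilon(run_trial, eps_grid, n_runs=100):
    """MC selection sketch: for each candidate reweighted factor, average the
    steady-state MSE over independent runs and keep the minimizer.
    `run_trial(eps, seed)` is assumed to return one run's steady-state MSE."""
    avg_mse = {eps: np.mean([run_trial(eps, seed) for seed in range(n_runs)])
               for eps in eps_grid}
    return min(avg_mse, key=avg_mse.get), avg_mse

# Toy stand-in objective (NOT the paper's simulation): a bowl in log10(eps)
# with its minimum at eps = 0.005, plus small per-run noise.
eps_grid = [0.5, 0.1, 0.05, 0.01, 0.005, 0.001]
def toy_trial(eps, seed):
    rng = np.random.default_rng(seed)
    return (np.log10(eps) - np.log10(0.005)) ** 2 + 0.01 * rng.standard_normal()

best, table = select_epsilon(toy_trial, eps_grid)
print(best)  # 0.005 for this toy objective
```

Averaging over many runs is what makes the selection robust: a single run's MSE under GMM noise is dominated by a handful of impulses, so comparing candidates on one realization would be unreliable.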

In this section, three examples are given to verify the performance of the SLMS-RL1 algorithm using the proposed reweighted factor $\epsilon =0.005$ in the scenarios of $SNR=10\text{dB}$, impulsive-noise strength $T\in \left\{200,400,600,800\right\}$, mixture parameter $\varphi =0.1$ and channel sparsity $K\in \left\{2,4,8,12,16\right\}$. To obtain average performance, $M=1000$ independent Monte-Carlo runs are adopted as well. Detailed parameters for the computer simulations are listed in Table 2.

| Parameters | Values |
|---|---|
| Training signal structure | Pseudo-random binary sequences |
| Channel length | $N=80$ |
| No. of nonzero coefficients | $K\in \left\{2,4,8,16\right\}$ |
| Distribution of nonzero coefficients | Random Gaussian distribution $\mathcal{C}\mathcal{N}\left(0,1\right)$ |
| Received SNR | $SNR\in \left\{5\text{dB},10\text{dB}\right\}$ |
| GMM noise distribution ($T$ controls the impulsive noise strength) | ${\alpha}_{1}={\alpha}_{2}=0$, ${\sigma}_{1}^{2}={10}^{\left(-SNR/10\right)}$, ${\sigma}_{2}^{2}=T{\sigma}_{1}^{2}$, $T\in \left\{200,400,600,800\right\}$ |
| Step-size | $\mu =0.01$ |
| Regularization parameter for the sparse penalty | $\lambda =0.008$ |
| Reweighted factor of the SLMS-RL1 algorithm | $\epsilon =0.005$ |

In the first example, average MSE curves of the different algorithms are depicted in Figure 3, under the settings of channel sparsity $K=4$, $SNR=10\text{dB}$, and GMM noise with impulsive-noise parameter $T=400$ and mixture parameter $\varphi =0.1$. One can find that the proposed SLMS-RL1 algorithm achieves at least 5 dB and 10 dB performance gains over the SLMS algorithm and LMS-type algorithms, respectively. This is because the SLMS algorithm does not exploit the channel sparsity, while LMS-type algorithms are not stable under GMM noise environments. Hence, the proposed SLMS-RL1 can exploit channel sparsity and maintain stability in the presence of GMM noise.

In the second example, average MSE curves of the SLMS-RL1 algorithm with respect to the channel sparsity $K$ are depicted in Figure 4, under the settings of $SNR=10\text{dB}$ and GMM noise with impulsive-noise parameter $T=400$ and mixture parameter $\varphi =0.1$. One can find that the convergence speed of SLMS-RL1 depends on the channel sparsity $K$, while the corresponding steady-state MSE curves are very close. In other words, across different channel sparsity levels, the adaptive sparse algorithm behaves unlike conventional compressive sensing based sparse channel estimation algorithms [15,16,25,26], which depend highly on channel sparsity. Hence, SLMS-RL1 using the MC-based reweighted factor is expected to handle different sparse channels stably, even in non-sparse cases.

In the third example, average MSE curves of the SLMS-RL1 algorithm using the selected reweighted factor $\epsilon =0.005$ with respect to the impulsive-noise strength $T$ are depicted in Figure 5. In addition, average MSE curves of the algorithm with respect to the mixture parameter $\varphi $ are depicted in Figure 6. In these two figures, one can see that the SLMS-RL1 algorithm using $\epsilon =0.005$ is stable for different GMM noises with impulsive-noise strength parameters $T\in \left\{200,400,600,800\right\}$ as well as mixture parameters $\varphi \in \left\{0.05,0.1,0.2,0.4,0.6,0.8,1.0\right\}$. The main reason is that the sign function is utilized to mitigate the GMM impulsive noise. It is worth noting that the SLMS-RL1 algorithm may deteriorate as the mixture parameter $\varphi $ of the impulsive noise increases. In practical application scenarios, however, the mixture parameter $\varphi $ is very small (less than 0.1). Hence, the SLMS-RL1 algorithm with the proposed reweighted factor is stable under GMM impulsive noise.

In this paper, we proposed a Monte-Carlo based reweighted factor selection method so that the SLMS-RL1 algorithm can exploit channel sparsity efficiently. Simulation results are provided to illustrate our findings. First of all, SLMS-RL1 achieves its lowest MSE by selecting the reweighted factor as $\epsilon =0.005$ in different SNR regimes. Secondly, the convergence speed of SLMS-RL1 is reduced by increasing the channel sparsity $K$. Finally, the steady-state MSE performance of SLMS-RL1 does not change considerably under different GMM impulsive-noise strengths $T$. In other words, the SLMS-RL1 algorithm using the reweighted factor $\epsilon =0.005$ is stable under different GMM impulsive noises.

The authors would like to extend their appreciation to the anonymous reviewers for their constructive comments. This work was supported in part by the Japan Society for the Promotion of Science (JSPS) research grants (No. 26889050, No. 15K06072), and the National Natural Science Foundation of China grants (No. 61401069, No. 61271240).

We proposed the reweighted factor selection method for the sign least mean square with reweighted L1-norm constraint (SLMS-RL1) algorithm under the Gaussian mixture noise model. Tingping Zhang drafted this paper, including the theoretical work and computer simulations. Guan Gui checked the full paper's writing and presentation.

The authors declare no conflict of interest.

- Sayed, A.H. Adaptive Filters; John Wiley & Sons: Hoboken, NJ, USA, 2008.
- Haykin, S.S. Adaptive Filter Theory; Prentice Hall: Upper Saddle River, NJ, USA, 1996.
- Yoo, J.; Shin, J.; Park, P. Variable step-size affine projection sign algorithm. IEEE Trans. Circuits Syst. Express Br. 2014, 61, 274–278.
- Chen, B.; Zhao, S.; Zhu, P.; Principe, J.C. Quantized kernel recursive least squares algorithm. IEEE Trans. Neural Netw. Learn. Syst. 2013, 24, 1484–1491.
- Chen, B.; Zhao, S.; Zhu, P.; Principe, J.C. Quantized kernel least mean square algorithm. IEEE Trans. Neural Netw. Learn. Syst. 2012, 23, 22–32.
- Taheri, O.; Vorobyov, S.A. Sparse channel estimation with LP-norm and reweighted L1-norm penalized least mean squares. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Prague, Czech Republic, 22–27 May 2011; pp. 2864–2867.
- Gendron, P.J. An empirical Bayes estimator for in-scale adaptive filtering. IEEE Trans. Signal Process. 2005, 53, 1670–1683.
- Adachi, F.; Kudoh, E. New direction of broadband wireless technology. Wirel. Commun. Mob. Comput. 2007, 7, 969–983.
- Raychaudhuri, B.D.; Mandayam, N.B. Frontiers of wireless and mobile communications. Proc. IEEE 2012, 100, 824–840.
- Dai, L.; Wang, Z.; Yang, Z. Next-generation digital television terrestrial broadcasting systems: Key technologies and research trends. IEEE Commun. Mag. 2012, 50, 150–158.
- Adachi, F.; Garg, D.; Takaoka, S.; Takeda, K. Broadband CDMA techniques. IEEE Wirel. Commun. 2005, 12, 8–18.
- Shao, M.; Nikias, C.L. Signal processing with fractional lower order moments: Stable processes and their applications. Proc. IEEE 1993, 81, 986–1010.
- Middleton, D. Non-Gaussian noise models in signal processing for telecommunications: New methods and results for class A and class B noise models. IEEE Trans. Inf. Theory 1999, 45, 1129–1149.
- Jiang, X.; Zeng, W.-J.; So, H.C.; Rajan, S.; Kirubarajan, T. Robust matched filtering in lp-space. IEEE Trans. Signal Process. 2015, unpublished work.
- Gao, Z.; Dai, L.; Lu, Z.; Yuen, C.; Wang, Z. Super-resolution sparse MIMO-OFDM channel estimation based on spatial and temporal correlations. IEEE Commun. Lett. 2014, 18, 1266–1269.
- Dai, L.; Wang, Z.; Yang, Z. Compressive sensing based time domain synchronous OFDM transmission for vehicular communications. IEEE J. Sel. Areas Commun. 2013, 31, 460–469.
- Qi, C.; Wu, L. Optimized pilot placement for sparse channel estimation in OFDM systems. IEEE Signal Process. Lett. 2011, 18, 749–752.
- Gui, G.; Peng, W.; Adachi, F. Sub-Nyquist rate ADC sampling-based compressive channel estimation. Wirel. Commun. Mob. Comput. 2015, 15, 639–648.
- Gui, G.; Zheng, N.; Wang, N.; Mehbodniya, A.; Adachi, F. Compressive estimation of cluster-sparse channels. Prog. Electromagn. Res. C 2011, 24, 251–263.
- Jiang, X.; Kirubarajan, T.; Zeng, W.-J. Robust sparse channel estimation and equalization in impulsive noise using linear programming. Signal Process. 2013, 93, 1095–1105.
- Jiang, X.; Kirubarajan, T.; Zeng, W.-J. Robust time-delay estimation in impulsive noise using lp-correlation. In Proceedings of the IEEE Radar Conference (RADAR), Ottawa, ON, Canada, 29 April–3 May 2013; pp. 1–4.
- Lin, J.; Nassar, M.; Evans, B.L. Impulsive noise mitigation in powerline communications using sparse Bayesian learning. IEEE J. Sel. Areas Commun. 2013, 31, 1172–1183.
- Zhang, T.; Gui, G. IMAC: Impulsive-mitigation adaptive sparse channel estimation based on Gaussian-mixture model. Available online: http://arxiv.org/abs/1503.00800 (accessed on 14 September 2015).
- Candes, E.J.; Wakin, M.B.; Boyd, S.P. Enhancing sparsity by reweighted L1 minimization. J. Fourier Anal. Appl. 2008, 14, 877–905.
- Bajwa, W.U.; Haupt, J.; Sayeed, A.M.; Nowak, R. Compressed channel sensing: A new approach to estimating sparse multipath channels. Proc. IEEE 2010, 98, 1058–1076.
- Tauböck, G.; Hlawatsch, F.; Eiwen, D.; Rauhut, H. Compressive estimation of doubly selective channels in multicarrier systems: Leakage effects and sparsity-enhancing processing. IEEE J. Sel. Top. Signal Process. 2010, 4, 255–271.

© 2015 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/4.0/).