
Modified Combined-Step-Size Affine Projection Sign Algorithms for Robust Adaptive Filtering in Impulsive Interference Environments

School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
* Authors to whom correspondence should be addressed.
Submission received: 1 January 2020 / Revised: 11 February 2020 / Accepted: 11 February 2020 / Published: 3 March 2020

Abstract

In this paper, to further improve the filtering performance and to remedy the poor tracking capability of the conventional combined-step-size affine projection sign algorithm (CSS-APSA) in system identification, we propose a simplified CSS-APSA (SCSS-APSA) by applying a first-order Taylor series expansion to the sigmoidal active function (of which the independent variable is symmetric) of CSS-APSA. SCSS-APSA has lower computational complexity and can achieve filtering performance comparable to, or even better than, that of CSS-APSA. In addition, we propose a modification of the sigmoidal active function; the modified function is a scaling transformation of the conventional one. Applying the modified function to CSS-APSA, we obtain the modified CSS-APSA (MCSS-APSA). Moreover, the extra parameter of MCSS-APSA makes it possible to accelerate the convergence rate of CSS-APSA. Following the simplification used for SCSS-APSA, the computational complexity of MCSS-APSA can also be reduced, yielding the simplified MCSS-APSA (SMCSS-APSA). Simulation results demonstrate that the proposed algorithms achieve faster convergence in system identification.

1. Introduction

System identification tasks, including channel estimation, seismic system identification, noise or echo cancellation, and image recovery, have been successfully addressed by adaptive filters (AFs) [1]. Among conventional AF algorithms, the least-mean-square (LMS) and normalized LMS (NLMS) algorithms play important roles due to their low computational complexity and mathematical tractability [2,3]. However, such LMS-type algorithms suffer from performance degradation under heavy-tailed impulsive interference and highly colored input signals [4].
To mitigate the negative influence of highly colored input data on convergence performance, [5] derived the affine projection algorithm (APA), which is based on the idea of affine subspace projection. Since then, the APA family has been developed by various methods [5,6,7,8,9,10,11,12,13,14,15,16]. Rather than the single current data vector used in LMS and NLMS, APA updates the weight coefficients using the multiple most recent input vectors. Increasing the projection order of APA accelerates the convergence speed at the expense of computational complexity. However, APA is derived by minimizing the ℓ2-norm, which results in dramatic performance degradation under non-Gaussian noise, especially impulsive interference.
It is known that lower-order norms are robust against impulsive disturbances. For computational simplicity, a great number of algorithms achieve robust filtering by minimizing the ℓ1-norm [10,17,18]. Moreover, [19] combined the AP method with ℓ1-norm optimization to derive a robust AP-like algorithm, termed the affine projection sign algorithm (APSA). To satisfy the contradictory requirements of fast convergence speed and small steady-state misalignment, variable step-size APSA algorithms (VSS-APSAs) were proposed [20,21,22]. However, such VSS-APSAs lose tracking performance when the system changes abruptly. Therefore, to enhance the tracking performance of VSS-APSA, two convex combination methods have been applied to APSA from different viewpoints: one is called the combination of APSA (CAPSA), and the other is the combined-step-size APSA (CSS-APSA). CSS-APSA uses two different step sizes; the large one is related to faster convergence, and the small one to lower steady-state misadjustment. In addition, a convex combination of affine projection-type algorithms using an impulsive noise indicator was developed [23]; however, [23] needs more calculations to realize good convergence performance. Later, a cost function based on the sigmoidal active function was applied to a family of robust adaptive filtering algorithms, and it can be further simplified by a first-order Taylor series expansion [24].
In this paper, inspired by [24], we apply the first-order Taylor series expansion to the sigmoidal active function (SAF) of CSS-APSA and propose a simplified CSS-APSA (SCSS-APSA) with lower computational complexity. Although CSS-APSA realizes fast convergence and small steady-state misadjustment, enhancing its convergence performance remains an open problem [25,26,27,28]. Therefore, we modify the form of the SAF in CSS-APSA; using the modified SAF to improve the performance of the traditional CSS-APSA, we propose a modified CSS-APSA (MCSS-APSA). In MCSS-APSA, the two step sizes work in the same way as in CSS-APSA, while the extra parameter of the modified SAF enables flexible adjustment of the filtering performance. In addition, the simplification used for SCSS-APSA can also be applied to MCSS-APSA, yielding a simplified MCSS-APSA (SMCSS-APSA). Simulation results demonstrate that the proposed algorithms achieve good filtering performance in terms of convergence speed in system identification.
The rest of this paper is organized as follows. Section 2 introduces APSA and its step-size-adjusted variants. In Section 3, after reviewing the CSS-APSA algorithm, the proposed algorithms are derived. Section 4 presents simulation results that demonstrate the effectiveness of the proposed algorithms. Section 5 concludes the paper.

2. Affine Projection Sign Algorithm Based on Step-size Adjustment

We review the traditional APSA and its variable step-size forms, which include the VSS-APSA [21] and CAPSA.

2.1. APSA

In the conventional APSA, the desired output signal is given by

d(n) = u^T(n) w_o + v(n),   (1)

where n is the time index; the superscript T denotes transposition of a matrix or vector; w_o = [w_0^o, w_1^o, …, w_{M−1}^o]^T denotes the unknown system weight vector, which needs to be estimated; u(n) = [u(n), u(n−1), …, u(n−M+1)]^T is the input signal vector of length M; and v(n) is the background noise plus an interference signal.
Let U(n), d(n), e(n), and e_p(n) be the input matrix, desired signal vector, error vector, and a posteriori error vector, respectively, as follows:

U(n) = [u(n), u(n−1), …, u(n−K+1)],   (2)

d(n) = [d(n), d(n−1), …, d(n−K+1)]^T,   (3)

e(n) = d(n) − U^T(n) w(n−1),   (4)

e_p(n) = d(n) − U^T(n) w(n),   (5)
where K is the affine projection order. The APSA algorithm can be derived by minimizing the ℓ1-norm of the a posteriori error vector e_p(n) with a constraint on the filter coefficients, as

min_{w(n)} ‖d(n) − U^T(n) w(n)‖_1,   (6)

subject to ‖w(n) − w(n−1)‖_2^2 ≤ μ^2,   (7)
where μ^2 is a parameter used to ensure the stability of the weight coefficient vector w(n) during updating. Using the method of Lagrange multipliers [19], the cost function obtained by combining (6) and (7) can be solved. The update equation for the weight vector is then

w(n) = w(n−1) + μ U_s(n−1) / (ξ_1 + ‖U_s(n−1)‖_2^2),   (8)
where μ > 0 is the step size; 0 < ξ_1 ≪ 1 avoids division by zero; ‖·‖_2 is the ℓ2-norm of a vector; and U_s(n−1) = U(n−1) sgn(e(n−1)), with sgn(·) denoting the sign operation applied to each element of the error vector.
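As a concrete illustration, the APSA update above can be sketched in a few lines of NumPy (a minimal sketch: the function name, toy dimensions, and the larger toy step size are ours; the regularized denominator follows the form written above):

```python
import numpy as np

def apsa_update(w, U, d, mu=0.05, xi1=1e-3):
    """One APSA iteration, Eq. (8): w is the current weight estimate,
    U the M x K input matrix, d the K-dim desired vector."""
    e = d - U.T @ w                        # a priori error vector, Eq. (4)
    Us = U @ np.sign(e)                    # U(n-1) sgn(e(n-1))
    return w + mu * Us / (xi1 + Us @ Us)   # normalized sign update

# toy identification run: the estimate moves toward the true system w_o
rng = np.random.default_rng(0)
M, K = 8, 2
w_o = rng.standard_normal(M)
w = np.zeros(M)
for _ in range(2000):
    U = rng.standard_normal((M, K))
    w = apsa_update(w, U, U.T @ w_o)       # noise-free desired vector
```

Because only the sign of the error enters the update, a single impulsive sample can shift the weights by at most μ per iteration, which is the source of the algorithm's robustness.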

2.2. VSS-APSA and CAPSA

A variable step size, based on the minimization of the mean-square deviation [21], is given as follows:

μ_vs(n) = α_vs μ_vs(n−1) + (1 − α_vs) × min( ‖e(n)‖_1 / ‖U_s(n−1)‖_2^2, μ_vs(n−1) ),   (9)
where α v s ( 0 < α v s < 1 ) is the smoothing factor. Although the VSS-APSA has good performance in terms of fast convergence speed and small steady-state misalignment, its tracking performance is very poor when an abrupt change occurs.
Therefore, to enhance the tracking performance of VSS-APSA, the convex combination method was applied to APSA, yielding CAPSA. For APSA algorithms, two different step sizes, μ_1 > μ_2 > 0, are associated with a faster convergence rate and a lower steady-state error, respectively. The corresponding weight updates are defined as
w_1(n) = w_1(n−1) + μ_1 U_s(n−1) / (ξ_1 + ‖U_s(n−1)‖_2^2),   (10)
and
w_2(n) = w_2(n−1) + μ_2 U_s(n−1) / (ξ_1 + ‖U_s(n−1)‖_2^2).   (11)
Combining w_1(n) with w_2(n) via a variable mixing factor λ(n) ∈ [0, 1] results in

w(n) = λ(n) w_2(n) + (1 − λ(n)) w_1(n).   (12)
Generally, λ ( n ) is updated indirectly based on reduction of an overall error e ( n ) through a sigmoidal active function (SAF) [27,29], that is,
λ(n) = 1 / (1 + exp(−s(n))),   (13)
where s(n) is updated by a gradient descent scheme based on ℓ1-norm minimization, which ensures robustness in the presence of impulsive noise:

s(n) = s(n−1) − μ_s ∂|e(n)| / ∂s(n−1)
     = s(n−1) + μ_s μ_12^λ u^T(n) U_s(n−1) sgn(e(n)) / (ξ_1 + ‖U_s(n−1)‖_2^2),   (14)

where μ_s > 0 is a step size, and μ_12^λ = (μ_2 − μ_1)[λ(n−1)(1 − λ(n−1)) + ξ_2], in which the positive factor ξ_2 prevents the update of s(n) from halting when λ(n−1) is equal to 1 or 0. Generally, s(n) is restricted to the symmetric interval [−4, 4]. Equation (12) is the weight update of CAPSA when |s(n)| < 4; it becomes

w(n) = α_c w_2(n) + (1 − α_c) w_1(n) + μ_2 U_s(n−1) / (ξ_1 + ‖U_s(n−1)‖_2^2)

only when s(n) = 4, where α_c (0 ≤ α_c ≤ 1) plays the same role as α_vs. However, from [25], we know that enhancing the convergence performance of CAPSA remains an open problem.
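The mixing mechanism of (13) with the clipped auxiliary variable can be sketched as follows (a minimal illustration; the function names are ours):

```python
import numpy as np

def saf(s):
    """Standard sigmoidal active function of Eq. (13)."""
    return 1.0 / (1.0 + np.exp(-s))

def clip_s(s, lim=4.0):
    """Restrict s(n) to the symmetric interval [-4, 4]."""
    return float(np.clip(s, -lim, lim))

# at the interval ends the mixing factor has essentially saturated,
# so the combined filter behaves like one of its two components
lam_lo = saf(clip_s(-10.0))   # 1/(1 + e^4),  ~0.018: fast filter dominates
lam_hi = saf(clip_s(10.0))    # 1/(1 + e^-4), ~0.982: accurate filter dominates
```

Clipping s(n) keeps the sigmoid out of its flat tails, so the gradient update (14) can still move λ(n) back toward 1/2 when the environment changes.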

3. Proposed Algorithms

We start by introducing the idea of the combined step-size APSA (CSS-APSA) algorithm. Then, we analyze the behavior of the mixing factor for CSS-APSA, and explain the reason why the mixing factor can be simplified. Finally, we derive the proposed algorithms.
In order to identify the optimization problem, the cost functions need to be specified. The CSS-APSA algorithm is derived by minimizing the ℓ1-norm of the a posteriori error vector e_p(n) with a constraint on the filter coefficients, as

min_{w(n)} ‖d(n) − U^T(n) w(n)‖_1,   (15)

subject to ‖w(n) − w(n−1)‖_2^2 ≤ μ_12^2,   (16)
where

μ_12 = λ(n) μ_2 + (1 − λ(n)) μ_1.   (17)
Similarly to APSA [19], we can obtain the following Lagrangian cost functions.
For the SCSS-APSA:

J_s(w(n)) = ‖d(n) − U^T(n) w(n)‖_1 + φ [ ‖w(n) − w(n−1)‖_2^2 − μ_s12^2 ].   (18)

For the MCSS-APSA:

J_β(w(n)) = ‖d(n) − U^T(n) w(n)‖_1 + φ [ ‖w(n) − w(n−1)‖_2^2 − μ_β12^2 ].   (19)

For the SMCSS-APSA:

J_sβ(w(n)) = ‖d(n) − U^T(n) w(n)‖_1 + φ [ ‖w(n) − w(n−1)‖_2^2 − μ_sβ12^2 ].   (20)

Here φ is a Lagrange multiplier; μ_s12 = λ_s(n) μ_2 + (1 − λ_s(n)) μ_1, μ_β12 = λ_β(n) μ_2 + (1 − λ_β(n)) μ_1, and μ_sβ12 = λ_sβ(n) μ_2 + (1 − λ_sβ(n)) μ_1; and J_s(·), J_β(·), and J_sβ(·) denote the corresponding cost functions. The computational complexity is discussed in Section 3.4.

3.1. CSS-APSA

Substituting (10) and (11) into (12) yields the weight update of CSS-APSA (by symmetry, (12) can also be written as w(n) = λ(n) w_1(n) + (1 − λ(n)) w_2(n) [25]) as

w(n) = λ(n) w_2(n) + (1 − λ(n)) w_1(n)
     = w(n−1) + [ λ(n) μ_2 + (1 − λ(n)) μ_1 ] × U_s(n−1) / (ξ_1 + ‖U_s(n−1)‖_2^2).   (21)
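The collapsed update above can be illustrated directly (a sketch with our own function name; step sizes follow the simulation settings of Section 4):

```python
import numpy as np

def css_apsa_step(w, U, d, lam, mu1=0.01, mu2=1e-4, xi1=1e-3):
    """CSS-APSA weight update of Eq. (21): the two component filters of
    (10) and (11) collapse into one update with the combined step size
    mu12 = lam*mu2 + (1 - lam)*mu1."""
    e = d - U.T @ w                        # error vector e(n)
    Us = U @ np.sign(e)                    # U(n-1) sgn(e(n-1))
    mu12 = lam * mu2 + (1.0 - lam) * mu1
    return w + mu12 * Us / (xi1 + Us @ Us)

# lam = 0 keeps the fast step mu1; lam = 1 keeps the accurate step mu2
rng = np.random.default_rng(1)
w0 = np.zeros(4)
U = rng.standard_normal((4, 2))
d = rng.standard_normal(2)
w_fast = css_apsa_step(w0, U, d, lam=0.0)
w_slow = css_apsa_step(w0, U, d, lam=1.0)
```

Since both component filters share the same update direction U_s(n−1), only the effective step size depends on λ(n), which is what makes the single-update form (21) possible.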

3.2. Analysis of Variable Mixing Factors for CSS-APSA

Actually, the CSS-APSA algorithm uses the following SAF to adjust the mixing parameter:

λ(n) = C / (1 + exp(−s(n))) − (C − 1)/2,   (22)
where C > 1, and s(n) is restricted as

s(n) = { −ln((C+1)/(C−1)),  if s(n) < −ln((C+1)/(C−1));
          ln((C+1)/(C−1)),  if s(n) > ln((C+1)/(C−1));
          s(n),             elsewhere. }   (23)
According to (22), we can obtain

∂λ(n)/∂s(n) = C exp(−s(n)) / (1 + exp(−s(n)))^2.   (24)
From (24), we know that λ(n) is a monotonically increasing function of s(n), and the rate at which λ(n) increases from 0 to 1 can be accelerated by increasing the value of C. Figure 1 shows the function curves of (22) and (23). From this figure, we see that λ(n) is approximately equal to zero or one when |s(n)| = 0.3365. Actually, the range of s(n) is limited by two constraints, namely s(n) ∈ [−4, 4] and λ(n) ∈ [0, 1]. Therefore, from (23) and these two constraints, we obtain
s(n) ∈ [s⁻, s⁺],  s⁻ = max( −4, −ln((C+1)/(C−1)) ),  s⁺ = min( 4, ln((C+1)/(C−1)) ).   (25)
Furthermore, from (23) and (25), we know that:
  • When the C value is small, such as C = 2, we get s⁻ = −ln((2+1)/(2−1)) = −1.0986 and s⁺ = 1.0986. When |s(n)| = 0.3365, λ(n) has not yet saturated at 1 or 0; these two λ(n) values are only slowly increased to 1 or decreased to 0 as |s(n)| becomes larger. Therefore, the effective range for s(n) can be treated as [−0.3365, 0.3365].
  • From (23), as the C value becomes larger, the bound on |s(n)| moves closer to 0.3365, because ln((C+1)/(C−1)) is a monotonically decreasing function of C. Moreover, when the C value is relatively large, such as C = 6, λ(n) is very close to 1 or 0 at the bound. Therefore, for large C values, we can constrain s(n) ∈ [−0.3365, 0.3365].

3.3. Practical Considerations about Proposed Algorithms

In this paper, to reduce the approximation error incurred when λ(n) is treated as 1 or 0 (as shown in Figure 1 for C ≥ 5), we can appropriately extend the effective range of s(n) to [−0.4, 0.4]. In addition, when s(n) = 0.4, the difference between exp(−s(n)) and 1 − s(n) is 0.0703. Ignoring such differences and applying the first-order Taylor series expansion to exp(−s(n)), we can use 1 − s(n) to approximate exp(−s(n)). Hence, (22) can be simplified as
λ_s(n) = C / (2 − s(n)) − (C − 1)/2.   (26)
Based on (26), we get

s(n) = −2/(C−1)  ⟹  λ(n) = 0,
s(n) =  2/(C+1)  ⟹  λ(n) = 1.   (27)
Therefore, injecting (27) into (26) yields

λ_s(n) = { 0,                        if s(n) < −2/(C−1);
           1,                        if s(n) > 2/(C+1);
           C/(2 − s(n)) − (C−1)/2,   elsewhere, }   (28)
with C > 1. Applying (28) to CSS-APSA, we get the simplified CSS-APSA (denoted SCSS-APSA).
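The quality of this linearization can be checked numerically (a sketch with our own function names; the clipping in `lam_exact` only enforces λ ∈ [0, 1]):

```python
import math

def lam_exact(s, C=6.0):
    """SAF of Eq. (22), clipped so that lambda stays in [0, 1]."""
    return min(1.0, max(0.0, C / (1.0 + math.exp(-s)) - (C - 1.0) / 2.0))

def lam_simplified(s, C=6.0):
    """Piecewise-linearized SAF of Eq. (28), using exp(-s) ~ 1 - s."""
    if s < -2.0 / (C - 1.0):
        return 0.0
    if s > 2.0 / (C + 1.0):
        return 1.0
    return C / (2.0 - s) - (C - 1.0) / 2.0

# maximum gap over the extended effective range [-0.4, 0.4] for C = 6
gap = max(abs(lam_exact(k / 100.0) - lam_simplified(k / 100.0))
          for k in range(-40, 41))
```

Both functions agree exactly at s = 0 (λ = 1/2), and over [−0.4, 0.4] the gap stays below 0.1 for C = 6, which is consistent with treating the linearized form as a drop-in replacement.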
Comparing (22) with (13), one can find that (22) is a kind of scaling transformation of (13). Such a transformation can speed up the change rate of λ ( n ) . Based on this idea, (22) can be modified as follows:
λ_β(n) = C / (1 + exp(−β s(n))) − (C − 1)/2,   (29)
where β is a positive constant, and s(n) is restricted as

s(n) = { −ln((C+1)/(C−1))/β,  if s(n) < −ln((C+1)/(C−1))/β;
          ln((C+1)/(C−1))/β,  if s(n) > ln((C+1)/(C−1))/β;
          s(n),               elsewhere. }   (30)
Following the modification of (29), different β values can accelerate or slow down the change rate of λ β ( n ) . The change rate may result in a faster convergence rate of the proposed algorithm. Therefore, CSS-APSA armed with (29) and (30) is called a modified CSS-APSA (denoted as MCSS-APSA).
Obviously, CSS-APSA is a special case of MCSS-APSA with β = 1. In addition, when C = 1, (29) reduces to the standard SAF (13) with argument βs(n), and s(n) is restricted as

s(n) = { −4/β,  if s(n) < −4/β;
          4/β,  if s(n) > 4/β;
          s(n), elsewhere. }   (31)
From (29) and (30), the range of βs(n) for MCSS-APSA is the same as the range of s(n) for CSS-APSA given by (23). Therefore, following the operations used for (26), when C > 1 and β > 0, (29) can also be simplified as

λ_sβ(n) = C / (2 − β s(n)) − (C − 1)/2,   (32)
and s(n) is restricted as

s(n) = { −2/((C−1)β),  if s(n) < −2/((C−1)β);
          2/((C+1)β),  if s(n) > 2/((C+1)β);
          s(n),        elsewhere. }   (33)
Therefore, injecting (32) and (33) into CSS-APSA, we obtain the simplified MCSS-APSA (denoted SMCSS-APSA). Algorithm 1 summarizes the procedure of the proposed algorithms.
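The β-scaled simplified mixing factor can be sketched as follows (the function name is ours; parameter defaults follow the simulation settings C = 6, β = 5):

```python
def lam_smcss(s, C=6.0, beta=5.0):
    """Simplified beta-scaled mixing factor of Eqs. (32)-(33):
    lambda_sb(n) = C/(2 - beta*s(n)) - (C - 1)/2, with s(n) clipped
    to [-2/((C-1)*beta), 2/((C+1)*beta)]."""
    lo = -2.0 / ((C - 1.0) * beta)
    hi = 2.0 / ((C + 1.0) * beta)
    s = min(hi, max(lo, s))              # clipping of Eq. (33)
    return C / (2.0 - beta * s) - (C - 1.0) / 2.0

# beta > 1 steepens the transition of the mixing factor around s = 0
steep = lam_smcss(0.05, beta=5.0)
mild = lam_smcss(0.05, beta=1.0)
```

At the clipping bounds the function reaches exactly 0 and 1, so λ_sβ(n) ∈ [0, 1] holds by construction, and a larger β simply narrows the interval of s(n) over which the combination is "in between" its two component filters.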

3.4. Computational Complexity

In Table 1, the computational complexity of the proposed SCSS-APSA, MCSS-APSA, and SMCSS-APSA are compared with that of APSA, CAPSA, and CSS-APSA in terms of the total number of additions, multiplications, comparisons, and exponents. It is clear that the simplified algorithms are more efficient in computation than CAPSA and CSS-APSA.
Algorithm 1: The proposed algorithms
  Initialization:
   0 < μ_2 < μ_1, 0 < ξ_1 ≪ 1, s(0) = ξ_1, C > 1, β ≥ 1, and w_1(0) = w_2(0) = 0.
  Computation:
  while {u(n), d(n)}_{n ≥ 1} available do
   1: w_1(n) = w_1(n−1) + μ_1 U_s(n−1) / (ξ_1 + ‖U_s(n−1)‖_2^2),
   2: w_2(n) = w_2(n−1) + μ_2 U_s(n−1) / (ξ_1 + ‖U_s(n−1)‖_2^2),
   3: w(n) = λ(n) w_2(n) + (1 − λ(n)) w_1(n),
   4: e(n) = d(n) − u^T(n) w(n),
   5: s(n) = s(n−1) − μ_s ∂|e(n)|/∂s(n−1), as in (14),
   for SCSS-APSA:
   6: λ_s(n) = C/(2 − s(n)) − (C − 1)/2,
   7: if s(n) < −2/(C−1): then s(n) = −2/(C−1),
   8: else if s(n) > 2/(C+1): then s(n) = 2/(C+1),
   9: else s(n) = s(n), end.
   for MCSS-APSA:
   6: λ_β(n) = C/(1 + exp(−β s(n))) − (C − 1)/2,
   7: if s(n) < −ln((C+1)/(C−1))/β: then s(n) = −ln((C+1)/(C−1))/β,
   8: else if s(n) > ln((C+1)/(C−1))/β: then s(n) = ln((C+1)/(C−1))/β,
   9: else s(n) = s(n), end.
   for SMCSS-APSA:
   6: λ_sβ(n) = C/(2 − β s(n)) − (C − 1)/2,
   7: if s(n) < −2/((C−1)β): then s(n) = −2/((C−1)β),
   8: else if s(n) > 2/((C+1)β): then s(n) = 2/((C+1)β),
   9: else s(n) = s(n), end.
   end while
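One SCSS-APSA iteration of Algorithm 1 can be sketched end to end (a sketch, not the authors' code: variable names mirror the paper, the s(n) gradient step follows (14) with the one-step approximation w_2 − w_1 ≈ (μ_2 − μ_1) U_s/denominator, and a larger toy step μ_1 is used so the short run converges):

```python
import numpy as np

def scss_apsa_iter(state, U, d_vec, mu1=0.05, mu2=1e-4, mu_s=1.0,
                   C=6.0, xi1=1e-3, xi2=1e-3):
    """One SCSS-APSA iteration (steps 1-9 of Algorithm 1, SCSS branch)."""
    w1, w2, s = state["w1"], state["w2"], state["s"]
    e_vec = d_vec - U.T @ state["w"]                 # error vector e(n)
    Us = U @ np.sign(e_vec)
    denom = xi1 + Us @ Us
    w1 = w1 + mu1 * Us / denom                       # fast component filter
    w2 = w2 + mu2 * Us / denom                       # accurate component filter
    # simplified mixing factor (28), with s(n) clipped per (27)
    s = float(np.clip(s, -2.0 / (C - 1.0), 2.0 / (C + 1.0)))
    lam = C / (2.0 - s) - (C - 1.0) / 2.0
    w = lam * w2 + (1.0 - lam) * w1
    # gradient update of s(n) as in (14); u(n) is the newest input column
    e_new = float(d_vec[0] - U[:, 0] @ w)
    mu12 = (mu2 - mu1) * (lam * (1.0 - lam) + xi2)
    s = s + mu_s * mu12 * (U[:, 0] @ Us) * np.sign(e_new) / denom
    return {"w1": w1, "w2": w2, "w": w, "s": s, "lam": lam}

# toy identification run: the combined filter approaches the true w_o
rng = np.random.default_rng(2)
M, K = 8, 2
w_o = rng.standard_normal(M)
st = {"w1": np.zeros(M), "w2": np.zeros(M), "w": np.zeros(M), "s": 1e-3}
for _ in range(1000):
    U = rng.standard_normal((M, K))
    st = scss_apsa_iter(st, U, U.T @ w_o)
```

In the transient phase the gradient term drives s(n) down, so λ(n) → 0 and the fast filter w_1 dominates, matching the behavior described in Section 3.5.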

3.5. Steady-State Analysis

According to [24], the weight-error vector is defined as

w̃(n) = w_o − w(n).   (34)
Using (12), we can obtain

w̃(n) = w̃(n−1) − μ_12 U_s(n−1) / ‖U_s(n−1)‖_2^2,   (35)

and, taking expectations of the squared norm,

E[ ‖w̃(n)‖^2 ] = E[ ‖w̃(n−1)‖^2 ] − 2 μ_12 E[ w̃^T(n−1) U_s(n−1) / ‖U_s(n−1)‖_2^2 ] + μ_12^2 E[ 1 / ‖U_s(n−1)‖_2^2 ],

where E[·] is the expectation operator. The mean-square deviation E[‖w̃(n)‖^2] must be less than E[‖w̃(n−1)‖^2] for the algorithm to be stable, so the step size μ_12 must satisfy the following condition:

0 < μ_12 < 2 E[ w̃^T(n−1) U_s(n−1) / ‖U_s(n−1)‖_2^2 ] / E[ 1 / ‖U_s(n−1)‖_2^2 ].   (36)
For the SCSS-APSA, the step size should satisfy

0 < μ_s12 = λ_s(n) μ_2 + (1 − λ_s(n)) μ_1 < 2 E[ w̃^T(n−1) U_s(n−1) / ‖U_s(n−1)‖_2^2 ] / E[ 1 / ‖U_s(n−1)‖_2^2 ].

For the MCSS-APSA, the step size should satisfy

0 < μ_β12 = λ_β(n) μ_2 + (1 − λ_β(n)) μ_1 < 2 E[ w̃^T(n−1) U_s(n−1) / ‖U_s(n−1)‖_2^2 ] / E[ 1 / ‖U_s(n−1)‖_2^2 ].

For the SMCSS-APSA, the step size should satisfy

0 < μ_sβ12 = λ_sβ(n) μ_2 + (1 − λ_sβ(n)) μ_1 < 2 E[ w̃^T(n−1) U_s(n−1) / ‖U_s(n−1)‖_2^2 ] / E[ 1 / ‖U_s(n−1)‖_2^2 ].
In addition, λ(n) = 0 in the transient state and λ(n) = 1 in the steady state, so μ_1 and μ_2 should also satisfy condition (36). Also, from [19], we know that 0 < μ_2 < μ_1 ≪ 1, so the ranges of μ_s12, μ_β12, and μ_sβ12 are restricted as

0 < μ_s12, μ_β12, μ_sβ12 ≪ 1.

4. Numerical Simulation Results

In this section, we test the filtering performance of the proposed algorithms in system identification. All simulation results were averaged over 50 independent trials. The signal-to-noise ratio (SNR) was set to 30 dB, and background noise was added to d(n).
The impulsive noise v(n) is modeled as b(n)G(n), where b(n) is a Bernoulli process with success probability P[b(n) = 1] = p_r, and G(n) denotes zero-mean Gaussian noise with variance σ_G^2 = 1000 σ_d^2, where σ_d^2 is the average power of the desired output d(n) = u^T(n) w_o. We set ξ_1 = ξ_2 = 0.001, K = 2, p_r = 0.1, μ_1 = 0.01, and μ_2 = 0.0001 in all simulations. The normalized mean-square deviation (NMSD),
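The Bernoulli-Gaussian noise model can be generated as follows (a minimal sketch; the function name is ours):

```python
import numpy as np

def impulsive_noise(n, sigma_d2=1.0, pr=0.1, seed=0):
    """Bernoulli-Gaussian model used in the paper: v(n) = b(n) G(n),
    with P[b(n) = 1] = pr and Var[G(n)] = 1000 * sigma_d2."""
    rng = np.random.default_rng(seed)
    b = rng.random(n) < pr                            # Bernoulli indicator
    G = rng.normal(0.0, np.sqrt(1000.0 * sigma_d2), n)
    return b * G

v = impulsive_noise(100_000)
hit_rate = np.mean(v != 0.0)   # close to pr = 0.1
power = np.var(v)              # close to pr * 1000 * sigma_d2 = 100
```

About 10% of the samples are impulses roughly 30 dB stronger than the desired signal, which is exactly the regime where sign-based updates outperform ℓ2-based ones.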
NMSD = 10 log_10( ‖w_o − w(n)‖^2 / ‖w_o‖^2 ),

was used to measure the performance of the algorithms.
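The NMSD metric is a one-liner (the function name is ours):

```python
import numpy as np

def nmsd_db(w_o, w):
    """Normalized mean-square deviation in dB:
    NMSD = 10 log10(||w_o - w||^2 / ||w_o||^2)."""
    return 10.0 * np.log10(np.sum((w_o - w) ** 2) / np.sum(w_o ** 2))

w_o = np.ones(4)
close_db = nmsd_db(w_o, 0.9 * w_o)   # 1% relative squared error -> -20 dB
cold_db = nmsd_db(w_o, np.zeros(4))  # all-zero estimate -> 0 dB
```

An all-zero estimate sits at 0 dB, so every dB below zero in Figures 2-7 represents genuine identification progress.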
The colored input data were generated by filtering white, zero-mean, unit-power Gaussian signals through a first-order system

H_1(z) = 1 / (1 − 0.7 z^{−1}).
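This coloring filter corresponds to the first-order recursion u(n) = white(n) + 0.7 u(n−1), which can be generated directly (a sketch; the function name is ours):

```python
import numpy as np

def colored_input(n, a=0.7, seed=1):
    """Unit-power white Gaussian noise filtered by H1(z) = 1/(1 - a z^-1),
    i.e. the first-order AR recursion u(n) = white(n) + a * u(n-1)."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(n)
    u = np.empty(n)
    prev = 0.0
    for i in range(n):
        prev = white[i] + a * prev   # pole at z = a colors the spectrum
        u[i] = prev
    return u

u = colored_input(200_000)
lag1 = np.corrcoef(u[:-1], u[1:])[0, 1]   # close to a = 0.7
```

The resulting lag-1 correlation of about 0.7 (and variance 1/(1 − 0.49) ≈ 1.96) confirms the strong coloring that slows down LMS-type algorithms and motivates the affine projection approach.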
The unknown weight vector w_o was randomly generated with M = 32 taps and changed to −w_o at the middle of the iterations to simulate an abrupt system change.
First of all, we investigated the influence of the parameter C on the performance of CSS-APSA, choosing C ∈ {1.1, 2, 3, 4, 5, 6, 7} and μ_s = 1. Figure 2 shows the corresponding NMSD curves. From this figure, we see that CSS-APSA with C = 5 and C = 6 realizes better filtering performance than the other C values. In other words, a proper C value can enhance the filtering performance of CSS-APSA. This result demonstrates the effectiveness of the constraint s(n) ∈ [−0.3365, 0.3365].
Remark 1.
For CSS-type algorithms, when C is in a certain interval (such as C ∈ (1, 6]), the performance of CSS-APSA can be improved by increasing the value of C. In addition, we can make the following observations for C ∈ (1, 6]:
  • If the value of C is relatively small (such as C ≤ 2), the change rate of λ(n) from 0 to 1 is reduced. In this case, the combined algorithm has a slower convergence speed. Figure 1 and Figure 2 illustrate such behavior.
  • If the value of C is too large (such as C = 7 ), the change rate of λ ( n ) from 0 to 1 is increased in the process of iteration. In this case, λ ( n ) tends to 1 too quickly, resulting in w ( n ) dominated by w 2 ( n ) , which is related to the lower steady-state error. Hence, the combined algorithm will still realize a slower convergence speed.
Secondly, we investigate the influence of μ_s on CSS-APSA with C ∈ {5, 6}, since μ_s, as a step-size factor, affects the update rate of s(n) and thus indirectly influences the change rate of λ(n). We set μ_s ∈ {0.1, 1, 10}. Figure 3 plots the corresponding NMSD curves. From this figure, we see that μ_s = 1 is a better choice than the other values. Hence, except where mentioned otherwise, we set μ_s = 1 in the following experiments. Furthermore, we compare the filtering performance of CSS-APSA and SCSS-APSA. Figure 4 plots the NMSD curves of the corresponding algorithms. These results reveal that:
  • CSS-APSA with C = 6 realizes faster convergence than C = 5 after an abrupt change;
  • SCSS-APSA with C = 6 outperforms CSS-APSA in terms of convergence rate in the whole filtering process.
Moreover, Table 2 compares the execution times of CSS-APSA, SCSS-APSA, MCSS-APSA, and SMCSS-APSA at each iteration (computation platform: Intel(R) Core(TM) i7-4770K, CPU @ 3.50 GHz). One can clearly see that SCSS-APSA is more computationally efficient than CSS-APSA.
Thirdly, we investigate the effect of parameter β on the performance of MCSS-APSA. We set C = 6 and β { 1 , 5 , 10 , 20 } . The NMSD curves are plotted in Figure 5, which shows that:
  • With β = 5, MCSS-APSA realizes better filtering accuracy in the transient process than β ∈ {1, 10, 20};
  • With β = 1, MCSS-APSA reduces to CSS-APSA, while MCSS-APSA with other β values (such as β = 5 or 10) outperforms CSS-APSA. This observation demonstrates the usefulness of the modifications to CSS-APSA.
  • Increasing the β value from 1 to 20 cannot always improve the filtering performance of MCSS-APSA. Similarly to the effect of the C value, a β value that is too large leads to w(n) being dominated by w_2(n) during the iterations, which slows down the convergence speed of MCSS-APSA.
Next, we compare the filtering performance of SMCSS-APSA and MCSS-APSA. In this part, we set β = 5 and C = 6, since these two parameter values improve the tracking performance of MCSS-APSA, as demonstrated in Figure 5. The compared NMSD results are plotted in Figure 6. From this figure, we see that SMCSS-APSA achieves slightly better filtering behavior than MCSS-APSA, while being more computationally efficient, as demonstrated in Table 2.
Finally, the filtering performance of SMCSS-APSA is compared with APSA, CAPSA, and SCSS-APSA. (From [25], we know that VSS-APSA [21] needs to be reset when an abrupt change occurs, so we omit VSS-APSA from the system identification comparison.) Figure 7 shows the NMSD curves of these algorithms. The results demonstrate that SMCSS-APSA outperforms APSA, CAPSA, and SCSS-APSA in terms of convergence rate.

5. Conclusions

In this paper, SCSS-APSA, MCSS-APSA, and SMCSS-APSA were derived to improve the convergence speed of CSS-APSA. To enhance the tracking performance of CAPSA and CSS-APSA, MCSS-APSA was proposed by using a modified sigmoidal active function. Furthermore, the simplification method was used on MCSS-APSA, resulting in SMCSS-APSA, which was found to achieve better filtering performance than that of MCSS-APSA. In addition, SCSS-APSA not only reduced the computational complexity of CSS-APSA, but also sped up the convergence under certain conditions. Simulations have shown that the proposed SCSS-APSA, MCSS-APSA, and SMCSS-APSA are superior to CSS-APSA.

Author Contributions

G.L. presented the conception of this article and carried out the experimental work; project administration, H.Z.; J.Z. reviewed this article and put forward some constructive suggestions for the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (Grant No. 61971100).

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Guo, Y.; Li, J.; Li, Y. Diffusion correntropy subband adaptive filtering (SAF) algorithm over distributed smart dust networks. Symmetry 2019, 11, 1335.
  2. Duttweiler, D.L. Proportionate normalized least-mean-squares adaptation in echo cancelers. IEEE Trans. Speech Audio Process. 2000, 8, 508–517.
  3. Wang, Y.; Li, Y. Norm penalized joint-optimization NLMS algorithms for broadband sparse adaptive channel estimation. Symmetry 2017, 9, 133.
  4. Min, S.; Nikias, C.L. Signal processing with fractional lower order moments: Stable processes and their applications. Proc. IEEE 1993, 81, 986–1010.
  5. Ozeki, K.; Umeda, T. An adaptive filtering algorithm using an orthogonal projection to an affine subspace and its properties. Electron. Commun. Jpn. 1984, 67, 126–132.
  6. Shin, H.C.; Sayed, A.H.; Song, W.J. Variable step-size NLMS and affine projection algorithms. IEEE Signal Process. Lett. 2004, 11, 132–135.
  7. Lee, C.H.; Park, P. Optimal step-size affine projection algorithm. IEEE Signal Process. Lett. 2012, 19, 431–434.
  8. Paul, T.K.; Ogunfunmi, T. On the convergence behavior of the affine projection algorithm for adaptive filters. IEEE Trans. Circuits Syst. I Regul. Pap. 2011, 58, 1813–1826.
  9. Yin, W.; Mehr, A.S. A variable regularization method for affine projection algorithm. IEEE Trans. Circuits Syst. II Express Briefs 2010, 57, 476–480.
  10. Zhao, J.; Zhang, H.; Wang, G.; Liao, X. Modified memory-improved proportionate affine projection sign algorithm based on correntropy induced metric for sparse system identification. Electron. Lett. 2018, 54, 630–632.
  11. Li, Y.; Jiang, Z.; Yin, J. Mixed norm constrained sparse APA algorithm for satellite and network echo channel estimation. IEEE Access 2018, 6, 65901–65908.
  12. Sivabalan, S.R.A. ZA-APA with zero attractor controller selection criterion for sparse system identification. Signal Image Video Process. 2018, 12, 371–377.
  13. Meng, R.; Lamare, R.C.D.; Nascimento, H. Sparsity-aware affine projection adaptive algorithms for system identification. In Proceedings of the Sensor Signal Processing for Defence (SSPD 2011), London, UK, 27–29 September 2011; pp. 27–29.
  14. Li, Y.; Zhang, C.; Wang, S. Low-complexity non-uniform penalized affine projection algorithm for sparse system identification. Circuits Syst. Signal Process. 2016, 35, 1611–1624.
  15. Li, Y.; Li, W.; Yu, W.; Wan, J.; Li, Z. Sparse adaptive channel estimation based on lp-norm-penalized affine projection algorithm. Int. J. Antennas Propag. 2014, 2014, 17–23.
  16. Li, Y.; Hamamura, M. Smooth approximation l0-norm constrained affine projection algorithm and its applications in sparse channel estimation. Sci. World J. 2014, 2014, 937252.
  17. Arikan, O.; Cetin, A.E.; Erzin, E. Adaptive filtering for non-Gaussian stable processes. IEEE Signal Process. Lett. 1994, 1, 163–165.
  18. Kwong, C. Dual sign algorithm for adaptive filtering. IEEE Trans. Commun. 1986, 34, 1272–1275.
  19. Shao, T.; Zheng, Y.R.; Benesty, J. An affine projection sign algorithm robust against impulsive interferences. IEEE Signal Process. Lett. 2010, 17, 327–330.
  20. Yoo, J.; Shin, J.; Park, P. Variable step-size affine projection sign algorithm. IEEE Trans. Circuits Syst. II Express Briefs 2014, 61, 274–278.
  21. Shin, J.; Yoo, J.; Park, P. Variable step-size affine projection sign algorithm. Electron. Lett. 2012, 48, 9–10.
  22. Zhang, S.; Zhang, J. Modified variable step-size affine projection sign algorithm. Electron. Lett. 2013, 49, 1264–1265.
  23. Kim, S.H.; Jeong, J.J. Robust convex combination of affine projection-type algorithms using an impulsive noise indicator. Signal Process. 2016, 129, 33–37.
  24. Huang, F.; Zhang, J.; Zhang, S. A family of robust adaptive filtering algorithms based on sigmoid cost. Signal Process. 2018, 149, 179–192.
  25. Huang, F.; Zhang, J.; Zhang, S. Combined-step-size affine projection sign algorithm for robust adaptive filtering in impulsive interference environments. IEEE Trans. Circuits Syst. II Express Briefs 2016, 63, 493–497.
  26. Choi, J.H.; Cho, H.; Jeong, J.J.; Kim, W. Combination of step sizes for affine projection algorithm with variable mixing parameter. Electron. Lett. 2013, 49, 29–30.
  27. Arenas-García, J.; Gómez-Verdejo, V. New algorithms for improved adaptive convex combination of LMS transversal filters. IEEE Trans. Instrum. Meas. 2005, 54, 2239–2249.
  28. Kozat, S.S.; Erdogan, A.T.; Singer, A.C.; Sayed, A.H. Steady-state MSE performance analysis of mixture approaches to adaptive filtering. IEEE Trans. Signal Process. 2010, 58, 4050–4063.
  29. Ferrer, M.; Gonzalez, A.; Diego, M.D. Convex combination filtered-x algorithms for active noise control systems. IEEE Trans. Audio Speech Lang. Process. 2013, 21, 156–167.
Figure 1. The curves of λ ( n ) with different C values.
Figure 2. The NMSD curves of APSA and CSS-APSA with different C values.
Figure 3. The NMSD curves of CSS-APSA with different μ s values.
Figure 4. The NMSD learning curves of CSS-APSA and SCSS-APSA with different C. (a,c): C = 5 ; (b,d): C = 6 .
Figure 5. The NMSD curves of MCSS-APSA with different β , C = 6 .
Figure 6. The NMSD curves of MCSS-APSA and SMCSS-APSA with β = 5 , C = 6 .
Figure 7. The NMSD learning curves of APSA, CAPSA, SCSS-APSA, and SMCSS-APSA with colored input data H 1 ( z ) , (a): μ 1 = 0.01 ; (b): μ 2 = 0.0001 ; (c): μ 1 = 0.01 , μ 2 = 0.0001 ; (d): C = 6 ; (e): C = 6 , β = 5 .
Table 1. Computational complexity of APSA, CAPSA, CSS-APSA, SCSS-APSA, MCSS-APSA, and SMCSS-APSA at each iteration.
Algorithms  | Multiplications | Additions  | Comparisons | Exponents
APSA        | 2M(K+1)+1       | M(2K+1)−1  | 0           | 0
CAPSA       | 2M(2K+3)+12     | M(4K+4)+6  | 2           | 0
CSS-APSA    | 2M(K+1)+M+8     | M(2K+2)+4  | 2           | 2
SCSS-APSA   | 2M(K+1)+M+8     | M(2K+2)+4  | 2           | 0
MCSS-APSA   | 2M(K+1)+M+8     | M(2K+2)+4  | 2           | 2
SMCSS-APSA  | 2M(K+1)+M+8     | M(2K+2)+4  | 2           | 0
Table 2. Execution time of CSS-APSA, SCSS-APSA, MCSS-APSA, and SMCSS-APSA at each iteration.
Algorithms  | Parameters    | Execution Time (s)
CSS-APSA    | C = 6         | 1.7132
SCSS-APSA   | C = 6         | 1.6952
MCSS-APSA   | C = 6, β = 5  | 1.7467
SMCSS-APSA  | C = 6, β = 5  | 1.7108

Li, G.; Zhang, H.; Zhao, J. Modified Combined-Step-Size Affine Projection Sign Algorithms for Robust Adaptive Filtering in Impulsive Interference Environments. Symmetry 2020, 12, 385. https://0-doi-org.brum.beds.ac.uk/10.3390/sym12030385