Article

An Enhanced Affine Projection Algorithm Based on the Adjustment of Input-Vector Number

1
Department of Electronic Engineering, Kumoh National Institute of Technology, Gumi 39177, Korea
2
Department of Cogno-Mechatronics Engineering, Pusan National University, Busan 46241, Korea
3
Department of Optics and Mechatronics Engineering, Pusan National University, Busan 46241, Korea
4
Department of Electronic Engineering, Gachon University, Seongnam 13120, Korea
5
Department of Automobile and IT Convergence, Kookmin University, Seoul 02707, Korea
*
Author to whom correspondence should be addressed.
Submission received: 9 February 2022 / Revised: 12 March 2022 / Accepted: 18 March 2022 / Published: 20 March 2022
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract

An enhanced affine projection algorithm (APA) is proposed to improve the filter performance in terms of convergence rate and steady-state estimation error, because adjusting the input-vector number is an effective way to increase the convergence rate and decrease the steady-state estimation error at the same time. In the proposed algorithm, the input-vector number of the APA is adjusted at every iteration by comparing averages of the accumulated squared errors. Whereas the conventional APA constrains the input-vector number to be an integer, the proposed APA relaxes this integer constraint through a pseudo-fractional method. Because the pseudo-fractional method allows the input-vector number to be updated more precisely at every iteration, the filter performance of the proposed APA is improved. Simulation results demonstrate that the proposed APA achieves a smaller steady-state estimation error than existing APA-type filters in various scenarios.

1. Introduction

Many representative algorithms based on adaptive filtering theory have been successfully applied to channel estimation, system identification, and noise/echo cancellation [1,2,3,4,5,6,7,8,9,10,11,12,13]. As shown in Figure 1, the main purpose of an adaptive filter is to estimate the filter coefficients precisely so that the error signal is minimized for a given input signal. The least-mean squares (LMS) and normalized least-mean squares (NLMS) algorithms are representative adaptive filtering algorithms because they are easy to implement and have low complexity. Compared to the LMS and NLMS algorithms [9], the affine projection algorithm (APA) [7] exhibits a high convergence rate for highly correlated input data because it employs multiple input vectors instead of only one. However, the APA is computationally complex and incurs a large steady-state estimation error. Notably, a high input-vector number leads to rapid convergence but a large estimation error, whereas a low input-vector number leads to slow convergence but a small estimation error. Therefore, by adjusting the input-vector number, a high convergence rate and a small steady-state estimation error can likely be realized simultaneously.
Recently, several researchers have focused on the input-vector number to enhance the performance of APAs; representative algorithms include the APA with dynamic selection of input vectors (DS-APA) [14], the APA with selective regressors (SR-APA) [15], and the APA with evolving order (E-APA) [16]. Although these algorithms outperform the conventional APA by achieving smaller estimation errors, their convergence rate and steady-state estimation error can still be improved.
Considering these aspects, in this study, a novel variable input-vector number APA framework is developed, in which the input-vector number is modified using a pseudo-fractional method based on the concept of the pseudo-fractional tap length [17] to ensure small steady-state estimation errors. The pseudo-fractional method employs both an integer and a fractional input-vector number and relaxes the constraint of the conventional APA that requires the input-vector number to be an integer. Thus, the input-vector number of the proposed APA can be increased or decreased by comparing averages of the accumulated squared errors. Compared to existing algorithms such as the conventional APA, DS-APA, SR-APA, and E-APA, the proposed algorithm based on the pseudo-fractional method achieves a higher convergence rate and a smaller steady-state estimation error.
This paper is organized as follows. In Section 2, the conventional APA is reviewed. In Section 3, the proposed APA based on the adjustment of the input-vector number using the pseudo-fractional strategy is explained in detail. In Section 4, simulation results are presented to verify the performance of the proposed APA. Finally, Section 5 concludes the paper.

2. Conventional Affine Projection Algorithm

Consider reference data $d_i$ obtained from an unknown system
$$d_i = \mathbf{u}_i^T \mathbf{w}^o + v_i,$$
where $\mathbf{w}^o$ is the $n$-dimensional column vector of the unknown system, which must be estimated; $v_i$ denotes the measurement noise, with variance $\sigma_v^2$; and $\mathbf{u}_i$ denotes an $n$-dimensional column input vector, $\mathbf{u}_i = [u_i\ u_{i-1}\ \cdots\ u_{i-n+1}]^T$. The conventional APA is derived by minimizing the $L_2$-norm of the difference between the filter coefficient vectors at iterations $i$ and $i+1$, subject to the a posteriori error vector being zero:
$$\min_{\hat{\mathbf{w}}_{i+1}} \|\hat{\mathbf{w}}_{i+1} - \hat{\mathbf{w}}_i\|_2^2 \quad \text{subject to} \quad \mathbf{d}_i = \mathbf{U}_i^T \hat{\mathbf{w}}_{i+1},$$
where $\mathbf{e}_i = \mathbf{d}_i - \mathbf{U}_i^T \hat{\mathbf{w}}_i$, $\hat{\mathbf{w}}_i$ is the estimate of $\mathbf{w}^o$ at iteration $i$, $\mu$ is the step-size parameter, $M$ is the input-vector number, i.e., the number of recent input vectors used in the APA update, and
$$\mathbf{U}_i = [\mathbf{u}_i\ \mathbf{u}_{i-1}\ \cdots\ \mathbf{u}_{i-M+1}], \qquad \mathbf{d}_i = [d_i\ d_{i-1}\ \cdots\ d_{i-M+1}]^T.$$
Through the Lagrange multiplier and gradient descent method, the filter update equation of the conventional APA can be presented as [7]
$$\hat{\mathbf{w}}_{i+1} = \hat{\mathbf{w}}_i + \mu \mathbf{U}_i \left(\mathbf{U}_i^T \mathbf{U}_i + \beta \mathbf{I}\right)^{-1} \mathbf{e}_i,$$
where $\beta$ is the regularization parameter, usually a very small positive value, and $\mathbf{I}$ denotes the identity matrix.
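For concreteness, the update in Equation (4) can be written in a few lines of Python. This is only a minimal sketch under assumed array shapes, not the authors' implementation; the names w_hat, U, and d are chosen here for readability.

```python
import numpy as np

def apa_update(w_hat, U, d, mu=0.1, beta=1e-6):
    """One iteration of the conventional APA update in Equation (4).

    w_hat : current estimate of w^o, shape (n,)
    U     : input matrix [u_i u_{i-1} ... u_{i-M+1}], shape (n, M)
    d     : reference vector [d_i ... d_{i-M+1}], shape (M,)
    """
    M = U.shape[1]
    e = d - U.T @ w_hat  # a priori error vector e_i
    # Solve the regularized M x M system instead of forming an explicit inverse.
    correction = U @ np.linalg.solve(U.T @ U + beta * np.eye(M), e)
    return w_hat + mu * correction
```

Note that only an $M \times M$ system is solved per iteration, which is where the cubic terms in the complexity counts of Section 4.3 originate.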

3. Enhanced Affine Projection Algorithm Based on the Adjustment of Input-Vector Number

The conventional APA has the constraint that the input-vector number should be an integer. In this context, the filter performance can be enhanced if the input-vector number is allowed to contain both integer and non-integer parts. Therefore, we develop a novel APA that adjusts the input-vector number using a pseudo-fractional method based on the concept of the pseudo-fractional tap length [17]. The pseudo-fractional method adopts both an integer and a fractional input-vector number by relaxing the integer constraint on the input-vector number. The integer input-vector number is updated from the fractional input-vector number only when the difference between the two is larger than a predetermined value. In the proposed framework, the input-vector number is dynamically adjusted to enhance the performance of the algorithm in terms of convergence rate and steady-state estimation error. Moreover, a leaky factor is incorporated into the adaptation rule for the fractional input-vector number.
According to the adaptation rule defined in Equation (5), the integer input-vector number remains constant until the change in the fractional input-vector number accumulates to a certain extent. Here, $P_i$ is the pseudo-fractional input-vector number, which, unlike the integer input-vector number, is not restricted to integer values:
$$P_{i+1} = \begin{cases} (P_i - \alpha) - \gamma\left(AASE_{M_i}(i) - AASE_{M_i-1}(i)\right), & \text{if } M_i \geq 2, \\ (P_i - \alpha) - \gamma\left(AASE_{M_i+1}(i) - AASE_{M_i}(i)\right), & \text{otherwise.} \end{cases}$$
In this expression, $\alpha$ and $\gamma$ are small positive numbers, where $\alpha$ is the leaky factor and satisfies $\alpha \ll \gamma$, and $M_i$ is the integer input-vector number at instant $i$. The average of the accumulated squared errors (AASE) is defined as
$$AASE_M(i) \triangleq \frac{\sum_{N=0}^{M-1} e_N^2(i)}{M}.$$
Subsequently, the integer input-vector number M i can be determined as
$$M_i = \begin{cases} \max\{\min\{\lfloor P_{i-1} \rceil, M_{\max}\}, 1\}, & \text{if } |M_{i-1} - P_{i-1}| \geq \delta, \\ M_{i-1}, & \text{otherwise,} \end{cases}$$
where the $\lfloor \cdot \rceil$ operator rounds its argument to the nearest integer, and $\delta$ is the threshold parameter. $M_i$ is updated so that $1 \leq M_i \leq M_{\max}$, where $M_{\max}$ is the maximum input-vector number. The threshold parameter $\delta$ is set to 0.5 to maximize the filter performance of the proposed APA. The proposed APA is updated using the following expression:
$$\hat{\mathbf{w}}_{i+1} = \hat{\mathbf{w}}_i + \mu \mathbf{U}_{i,M_i} \left(\mathbf{U}_{i,M_i}^T \mathbf{U}_{i,M_i} + \beta \mathbf{I}\right)^{-1} \mathbf{e}_{i,M_i},$$
where $\mathbf{U}_{i,M_i} = [\mathbf{u}_i\ \mathbf{u}_{i-1}\ \cdots\ \mathbf{u}_{i-M_i+1}]$, $\mathbf{e}_{i,M_i} = [e_0(i)\ e_1(i)\ \cdots\ e_{M_i-1}(i)]^T$, and $M_i$ is determined according to the adaptation rule for the fractional input-vector number. Through this method, the proposed APA realizes a variable input-vector number strategy that enhances the filter performance in terms of both convergence rate and steady-state estimation error.
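To illustrate how Equations (5)–(7) interact, the following Python sketch performs one adjustment of the input-vector number. It is an illustration under assumed data layouts (the a priori errors for all candidate orders are computed from the same regressor matrix of width M_max); the helper names aase and adjust_order are ours, and the exact indexing of P and M within an iteration should follow Equations (5) and (7) rather than this simplified version.

```python
import numpy as np

def aase(U_full, d_full, w_hat, M):
    """AASE_M(i) from Equation (6): average of the M accumulated squared a priori errors."""
    e = d_full[:M] - U_full[:, :M].T @ w_hat
    return np.mean(e ** 2)

def adjust_order(P_prev, M_prev, U_full, d_full, w_hat,
                 alpha=0.03, gamma=0.97, delta=0.5, M_max=8):
    """One step of the pseudo-fractional adjustment (Equations (5) and (7))."""
    # Equation (5): update the fractional input-vector number.
    if M_prev >= 2:
        diff = aase(U_full, d_full, w_hat, M_prev) - aase(U_full, d_full, w_hat, M_prev - 1)
    else:
        diff = aase(U_full, d_full, w_hat, M_prev + 1) - aase(U_full, d_full, w_hat, M_prev)
    P_new = (P_prev - alpha) - gamma * diff

    # Equation (7): change the integer input-vector number only when it has
    # drifted from the fractional one by more than the threshold delta.
    if abs(M_prev - P_new) >= delta:
        M_new = int(max(min(int(np.floor(P_new + 0.5)), M_max), 1))
    else:
        M_new = M_prev
    return P_new, M_new
```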
Because the proposed algorithm is designed for stationary environments, the input-vector number must be re-initialized to achieve a high tracking performance each time the target system is changed. To this end, the re-initialization method reported in [18] is used as a reference and modified, as illustrated in Algorithm 1.
Algorithm 1: Re-initialization of the input-vector number.
$e_{th} \leftarrow \mu \sigma_v^2 M_{\max} / (2 - \mu)$, flag $= 0$, $e_{avg} = e_0^2$,
$\lambda$, $\alpha_1$, $\alpha_2$: user defined.
for each $i$ do
        if ($e_i^2 < \alpha_1 e_{th}$)
                flag $= 1$
        else if (flag $= 1$ and $\alpha_2 e_{avg} < e_i^2$)
                flag $= 0$, $e_{avg} = e_i^2$, $M_i = M_{\max}$, $P_i = M_{\max}$
        end if
        $e_{avg} = \lambda e_{avg} + (1 - \lambda) e_i^2$
end for
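A compact Python rendering of Algorithm 1 might look as follows. It is a sketch under our own state-keeping convention (a small dictionary); the caller is assumed to reset $M_i$ and $P_i$ to $M_{\max}$ whenever the function returns True.

```python
def init_reinit_state(e0_sq, sigma_v2, mu=0.1, M_max=8):
    """Initial state for Algorithm 1: error threshold, flag, and averaged squared error."""
    return {"e_th": mu * sigma_v2 * M_max / (2.0 - mu), "flag": 0, "e_avg": e0_sq}

def reinit_check(e_i_sq, state, lam=0.95, alpha1=10.0, alpha2=40.0):
    """One pass of the loop body in Algorithm 1 for the current squared error e_i^2.

    Returns True when the input-vector number should be re-initialized,
    i.e., when the caller should set M_i = P_i = M_max.
    """
    reset = False
    if e_i_sq < alpha1 * state["e_th"]:
        state["flag"] = 1                        # the filter has reached the steady state
    elif state["flag"] == 1 and alpha2 * state["e_avg"] < e_i_sq:
        state["flag"] = 0                        # sudden error growth: the system has changed
        state["e_avg"] = e_i_sq
        reset = True
    state["e_avg"] = lam * state["e_avg"] + (1.0 - lam) * e_i_sq
    return reset
```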

4. Experimental Results

The performance of the proposed algorithm is evaluated in a channel estimation framework. The channel of the unknown system is generated using a moving average model with 16 taps ($n = 16$). The adaptive filter and the unknown channel are assumed to have the same number of taps. Moreover, the noise variance $\sigma_v^2$ is assumed to be known a priori, as its value can be estimated during silences in many practical applications [19,20,21]. The input signal $u_i$ is generated by filtering a white, zero-mean Gaussian random sequence through one of the following systems:
$$G_1(z) = \frac{1}{1 - 0.9 z^{-1}}, \qquad G_2(z) = \frac{1 + 0.6 z^{-1}}{1 + z^{-1} + 0.21 z^{-2}}.$$
The measurement noise $v_i$ is added to $y_i$ with a signal-to-noise ratio (SNR) of 30 dB, where the SNR is defined as $10 \log_{10}\left(E[y_i^2]/E[v_i^2]\right)$ and $y_i = \mathbf{u}_i^T \mathbf{w}^o$. $P_0$ and $M_0$ are set to $M_{\max}$, the initial input-vector number of the proposed APA. The mean squared deviation (MSD), i.e., $E\|\mathbf{w}^o - \hat{\mathbf{w}}_i\|^2$, is adopted as the performance indicator. The simulation results are obtained by ensemble averaging over 100 independent trials, and the input signals are generated using $G_1(z)$ and $G_2(z)$. Furthermore, to examine the tracking performance of the proposed algorithm, the coefficients of the unknown filter taps are abruptly changed at iteration $5 \times 10^3$. The parameters of the proposed algorithm are set as follows: $M_{\max} = 8$, $\mu = 0.1$, $\alpha = 0.03$, $\gamma = 1 - \alpha$, and $\beta = 10^{-6}$. The re-initialization parameters are set as $\lambda = 0.95$, $\alpha_1 = 10$, and $\alpha_2 = 40$ [18].
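For reproducibility, the correlated input and the 30 dB SNR scaling can be generated with a short script such as the one below. This is a single-trial sketch (the paper averages 100 independent trials), and the random w_o used here is only a stand-in for the paper's 16-tap moving-average channel.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
n, n_iter, snr_db = 16, 10_000, 30

w_o = rng.standard_normal(n)                     # stand-in for the unknown 16-tap channel
white = rng.standard_normal(n_iter + n)
u = lfilter([1.0], [1.0, -0.9], white)           # colored input through G_1(z) = 1 / (1 - 0.9 z^{-1})

# Noise-free output y_i = u_i^T w_o and noise scaled to give SNR = 30 dB.
y = np.array([u[k:k + n][::-1] @ w_o for k in range(n_iter)])
sigma_v2 = np.mean(y ** 2) / 10 ** (snr_db / 10)
d = y + np.sqrt(sigma_v2) * rng.standard_normal(n_iter)

def msd(w_hat):
    """Mean squared deviation ||w_o - w_hat||^2 for this single trial."""
    return float(np.sum((w_o - w_hat) ** 2))
```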

4.1. System Identification Verification for Correlated Input

Figure 2 shows the MSD learning curves of the conventional APA [7], DS-APA [14], SR-APA [15], E-APA [16], and proposed APA when the input vector is generated using $G_1(z)$, i.e., the AR input model. As can be seen in Figure 2, the proposed APA achieves a smaller steady-state estimation error than the existing APA-type algorithms. In addition, even when the system changes suddenly, as shown in Figure 3, the proposed algorithm maintains its performance in terms of convergence rate and steady-state estimation error. Moreover, when the tap length is long ($n = 256$), the proposed APA also achieves smaller steady-state estimation errors, as shown in Figure 4. Figure 5 shows the MSD learning curves of the conventional APA, DS-APA, SR-APA, E-APA, and proposed APA when the input vector is generated using $G_2(z)$, i.e., the ARMA input model. As can be seen in Figure 5, the proposed APA outperforms the existing algorithms in terms of convergence rate and steady-state estimation error. Figure 6 compares the input-vector numbers over one trial for the E-APA and the proposed APA. According to Figure 6, the proposed APA uses the input-vector number efficiently. Figure 7 shows the tracking performance of the existing and proposed algorithms when the unknown system is abruptly changed from $\mathbf{w}^o$ to $-\mathbf{w}^o$. As can be seen in Figure 3 and Figure 7, the tracking performance of the proposed algorithm is maintained without any degradation in the convergence rate or steady-state estimation error, owing to the re-initialization method [18]. These simulation results demonstrate that the proposed variable input-vector number APA achieves a smaller steady-state estimation error than the existing algorithms.
In the proposed APA, choosing the values of $\alpha$ and $\gamma = 1 - \alpha$ in Equation (5) is important, because these parameters are the dominant factors determining the input-vector number. Therefore, we compare the filter performance for several values of $\alpha$ and $\gamma$. As shown in Figure 8, the proposed APA performs best for $\alpha = 0.03$ with $\gamma = 1 - \alpha$. Even though this tuning method cannot guarantee a precisely optimal value, $\alpha = 0.03$ gives the best performance in our simulation scenarios. Since $\alpha$ and $\gamma$ are restricted to values between 0 and 1, proper values of $\alpha$ and $\gamma$ that improve the filter performance can easily be found in each scenario by varying them.
Moreover, the parameter $\delta$ in Equation (7) is also a dominant factor in the filter performance of the proposed APA. To verify the influence of $\delta$, a comparison over several values of $\delta$ is shown in Figure 9. Since the filter performance of the proposed APA is experimentally maximized when $\delta$ is set to 0.5, the proposed APA uses $\delta = 0.5$ consistently in all simulations.

4.2. Speech Input Verification Including a Double-Talk Situation

The proposed APA was also tested using speech input signals, shown in Figure 10, to verify the filter performance in practical scenarios. Since the speech input signals are real human speech data, this simulation increases the reliability of the proposed APA for practical use. As can be seen in Figure 11, the proposed APA achieves smaller steady-state estimation errors than the other algorithms. The proposed algorithm was also tested in a double-talk situation, as shown in Figure 12. The far-end and near-end input signals were speech signals, where the power of the near-end input signal was twice that of the far-end input signal. The near-end input signal was added between iterations $6.2 \times 10^3$ and $7.2 \times 10^3$. Figure 12 shows that the proposed APA delivered better performance than the other algorithms in terms of convergence rate and steady-state estimation error. Even after the double-talk occurrence, the proposed APA consistently exhibits smaller steady-state estimation errors. As can be seen in Figure 11 and Figure 12, the proposed APA achieves better filter performance in terms of convergence rate and steady-state estimation error in harsh speech-input scenarios.

4.3. Comparison for Computational Complexity

Table 1 presents the per-iteration computational complexity of the conventional APA, DS-APA, E-APA, and proposed algorithm, whose input-vector numbers are $M$, $M_j$, $M_k$, and $M_i$, respectively. Figure 13 shows the accumulated number of multiplications of the proposed APA compared with the existing APA-type algorithms. As can be seen in Figure 13, the proposed algorithm can be executed with a smaller computational cost than the other algorithms. The values for the proposed algorithm are considerably smaller than those of the conventional APA, DS-APA, and E-APA owing to its considerably smaller input-vector numbers in the steady state.
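To make the comparison in Figure 13 concrete, the per-iteration multiplication counts from Table 1 can be accumulated as in the short sketch below. The assumed steady-state orders (a fixed M = 8 for the conventional APA versus M_i = 1 for the proposed APA) are illustrative choices for this example, not measured values from the simulations.

```python
def apa_mults(M, n=16):
    """Conventional-APA column of Table 1: (M^2 + 2M)n + M^3 + M^2 multiplications."""
    return (M ** 2 + 2 * M) * n + M ** 3 + M ** 2

def proposed_mults(M_i, n=16):
    """Proposed-APA column of Table 1: (M_i^2 + 2M_i)n + M_i^3 + M_i^2 + M_i + 2."""
    return (M_i ** 2 + 2 * M_i) * n + M_i ** 3 + M_i ** 2 + M_i + 2

# Accumulated multiplications over 5000 iterations under the assumed orders above.
iters = 5000
acc_conventional = iters * apa_mults(8)      # conventional APA keeps M = 8
acc_proposed = iters * proposed_mults(1)     # proposed APA settled to M_i = 1
print(acc_conventional, acc_proposed)
```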

5. Conclusions

This paper proposed a novel APA based on the adjustment of the input-vector number to enhance the filter performance in terms of both convergence rate and steady-state estimation error. In this framework, the input-vector number of the proposed APA was determined using the pseudo-fractional method. Because the pseudo-fractional method relaxed the constraint of the conventional APA that requires the input-vector number to be an integer, the input-vector number could be determined more precisely to improve the filter performance. Specifically, the proposed method dynamically adjusted the input-vector number using the proposed adaptation rule for the fractional input-vector number; according to this rule, the current projection order was set by comparing AASE values. Simulation results for system identification and speech input scenarios demonstrated that the proposed APA has a smaller steady-state estimation error than the existing APA-type algorithms. In future work, we plan to develop an improved APA that adjusts both the projection order and the step size based on the proposed method. Because the E-APA [16] was extended by simultaneously employing a variable projection order and a variable step size to enhance the filter performance, a similar extension is a promising way to further improve the proposed APA. More broadly, we will investigate extending various kinds of adaptive filters with variable projection-order and step-size strategies.

Author Contributions

Conceptualization and formal analysis, J.S.; investigation and validation, J.K.; methodology and software, T.-K.K.; software and writing, J.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (No. NRF-2021R1F1A1062153). This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (No. NRF-2021R1A5A1032937).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhang, R.; Zhao, H. A Novel Method for Online Extraction of Small-Angle Scattering Pulse Signals from Particles Based on Variable Forgetting Factor RLS Algorithm. Sensors 2021, 21, 5759.
  2. Qian, G.; Wang, S.; Lu, H.H.C. Maximum Total Complex Correntropy for Adaptive Filter. IEEE Trans. Signal Process. 2020, 68, 978–989.
  3. Yue, P.; Qu, H.; Zhao, J.; Wang, M. Newtonian-Type Adaptive Filtering Based on the Maximum Correntropy Criterion. Entropy 2020, 22, 922.
  4. Li, Y.; Wang, Y.; Yang, R.; Albu, F. A Soft Parameter Function Penalized Normalized Maximum Correntropy Criterion Algorithm for Sparse System Identification. Entropy 2017, 19, 45.
  5. Shen, L.; Zakharov, Y.; Henson, B.; Morozs, N.; Mitchell, P.D. Adaptive filtering for full-duplex UWA systems with time-varying self-interference channel. IEEE Access 2020, 8, 187590–187604.
  6. Wu, Z.; Peng, S.; Chen, B.; Zhao, H. Robust Hammerstein Adaptive Filtering under Maximum Correntropy Criterion. Entropy 2015, 17, 7149–7166.
  7. Ozeki, K.; Umeda, T. An adaptive filtering algorithm using an orthogonal projection to an affine subspace and its properties. Electron. Commun. Jpn. 1984, 67, 19–27.
  8. Pichardo, E.; Vazquez, A.; Anides, E.R.; Sanchez, J.C.; Perez, H.; Avalos, J.G.; Sanchez, G. A dual adaptive filter spike-based hardware architecture for implementation of a new active noise control structure. Electronics 2021, 10, 1945.
  9. Yoo, J.; Shin, J.; Park, P. A bias-compensated proportionate NLMS algorithm with noisy input signals. Int. J. Commun. Syst. 2019, 32, e4167.
  10. Li, G.; Zhang, H.; Zhao, J. Modified combined-step-size affine projection sign algorithms for robust adaptive filtering in impulsive interference environments. Symmetry 2020, 12, 385.
  11. Shin, J.; Yoo, J.; Park, P. Adaptive regularisation for normalised subband adaptive filter: Mean-square performance analysis approach. IET Signal Process. 2018, 12, 1146–1153.
  12. Jiang, Z.; Li, Y.; Huang, X. A Correntropy-Based Proportionate Affine Projection Algorithm for Estimating Sparse Channels with Impulsive Noise. Entropy 2019, 21, 555.
  13. Li, Y.; Jin, Z.; Wang, Y.; Yang, R. A Robust Sparse Adaptive Filtering Algorithm with a Correntropy Induced Metric Constraint for Broadband Multi-Path Channel Estimation. Entropy 2016, 18, 380.
  14. Kong, S.J.; Hwang, K.Y.; Song, W.J. An affine projection algorithm with dynamic selection of input vectors. IEEE Signal Process. Lett. 2007, 14, 529–532.
  15. Hwang, K.Y.; Song, W.J. An affine projection adaptive filtering algorithm with selective regressors. IEEE Trans. Circuits Syst. II Express Briefs 2007, 54, 43–46.
  16. Kim, S.E.; Kong, S.J.; Song, W.J. An affine projection algorithm with evolving order. IEEE Signal Process. Lett. 2009, 16, 937–940.
  17. Gong, Y.; Cowan, C.F.N. An LMS style variable tap-length algorithm for structure adaptation. IEEE Trans. Signal Process. 2005, 53, 2400–2407.
  18. Chang, M.S.; Kong, N.W.; Park, P.G. Variable regularized least-squares algorithm: One-step-ahead cost function with equivalent optimality. Signal Process. 2011, 91, 1224–1228.
  19. Yousef, N.R.; Sayed, A.H. A unified approach to the steady-state and tracking analyses of adaptive filters. IEEE Trans. Signal Process. 2001, 49, 314–324.
  20. Benesty, J.; Rey, H.; Vega, L.R.; Tressens, S. A nonparametric VSS NLMS algorithm. IEEE Signal Process. Lett. 2006, 13, 581–584.
  21. Iqbal, M.A.; Grant, S.L. Novel variable step size NLMS algorithms for echo cancellation. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Las Vegas, NV, USA, 31 March–4 April 2008; pp. 241–244.
Figure 1. Structure of the adaptive filter.
Figure 2. MSD learning curves of the conventional APA, DS-APA, SR-APA, E-APA, and proposed APA when the input signals are generated using $G_1(z)$, with $n = 16$ and SNR = 30 dB.
Figure 3. MSD learning curves of the conventional APA, DS-APA, SR-APA, E-APA, and proposed APA when the unknown system is abruptly changed from $\mathbf{w}^o$ to $-\mathbf{w}^o$ at iteration $5 \times 10^3$; the input signals are generated using $G_1(z)$, with $n = 16$ and SNR = 30 dB.
Figure 4. MSD learning curves of the conventional APA, DS-APA, SR-APA, E-APA, and proposed APA when the input signals are generated using $G_1(z)$, with $n = 256$ and SNR = 30 dB.
Figure 5. MSD learning curves of the conventional APA, DS-APA, SR-APA, E-APA, and proposed APA when the input signals are generated using $G_2(z)$, with $n = 16$ and SNR = 30 dB.
Figure 6. Comparison of the input-vector numbers over one trial for the E-APA and proposed APA when the input signals are generated using $G_1(z)$, with $n = 16$ and SNR = 30 dB.
Figure 7. MSD learning curves of the conventional APA, DS-APA, SR-APA, E-APA, and proposed APA when the unknown system is abruptly changed from $\mathbf{w}^o$ to $-\mathbf{w}^o$ at iteration $5 \times 10^3$; the input signals are generated using $G_2(z)$, with $n = 16$ and SNR = 30 dB.
Figure 8. MSD learning curves of the proposed APAs with several values of $\alpha$ and $\gamma$ when the input signals are generated using $G_1(z)$, with $n = 16$ and SNR = 30 dB.
Figure 9. MSD learning curves of the proposed APAs with several values of $\delta$ when the input signals are generated using $G_1(z)$, with $n = 16$ and SNR = 30 dB.
Figure 10. Speech input signals.
Figure 11. MSD learning curves for the speech input.
Figure 12. MSD learning curves for the double-talk situation.
Figure 13. Accumulated sum of multiplications for the conventional APA, DS-APA, E-APA, and proposed algorithm.
Table 1. Computational complexity of the conventional APA, DS-APA, E-APA, and proposed APA.

Algorithm    | Input-vector number | #(×/÷)                                      | #(comparisons)
APA          | $M$                 | $(M^2 + 2M)n + M^3 + M^2$                   | 0
DS-APA       | $M_j$               | $(M_j^2 + M_j + M)n + M_j^3 + M_j^2$        | $M$
E-APA        | $M_k$               | $(M_k^2 + 2M_k)n + M_k^3 + M_k^2 + M_k + 1$ | 2
Proposed APA | $M_i$               | $(M_i^2 + 2M_i)n + M_i^3 + M_i^2 + M_i + 2$ | 2
