Article

Improved Approach for the Maximum Entropy Deconvolution Problem

Department of Electrical and Electronic Engineering, Ariel University, Ariel 40700, Israel
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Submission received: 29 March 2021 / Revised: 23 April 2021 / Accepted: 24 April 2021 / Published: 28 April 2021

Abstract
The probability density function (pdf) valid for the Gaussian case is often applied to describe the convolutional noise pdf in the blind adaptive deconvolution problem, although it is known to be applicable only at the latter stages of the deconvolution process, where the convolutional noise pdf tends to be approximately Gaussian. Recently, the convolutional noise pdf was approximated with the Edgeworth Expansion and with the Maximum Entropy density function for the 16 Quadrature Amplitude Modulation (QAM) input. However, for the hard channel case, the equalization algorithm based on the Maximum Entropy density approximation for the convolutional noise pdf showed no performance improvement over the original Maximum Entropy algorithm, while the Edgeworth Expansion approximation technique required additional predefined parameters in the algorithm. In this paper, the Generalized Gaussian density (GGD) function and the Edgeworth Expansion are applied to approximate the convolutional noise pdf for the 16 QAM input case, with no need for additional predefined parameters in the obtained equalization method. Simulation results indicate that, for the hard channel case, the new proposed equalization method based on the new model for the convolutional noise pdf improves the convergence time by approximately 15,000 symbols compared to the original Maximum Entropy algorithm. By convergence time, we mean the number of symbols required to reach a residual inter-symbol interference (ISI) for which reliable decisions can be made on the equalized output sequence.

1. Introduction

In this paper, the blind adaptive deconvolution problem (blind adaptive equalizer) is considered: we observe the output of an unknown linear system (channel), from which we want to recover its input using an adaptive blind equalizer (adaptive linear filter) [1,2,3,4,5,6]. The linear system (channel) is often modeled as a finite impulse response (FIR) filter. Since the channel coefficients are unknown, the equalizer's coefficients used in the deconvolution process are only approximated values, leading to an error signal that is added to the source signal at the output of the deconvolution process. Throughout this paper, we refer to this error signal as the convolutional noise. The Gaussian pdf is often applied in the literature [1,7,8,9,10,11] for approximating the convolutional noise pdf when calculating, via Bayes rules, the conditional expectation of the source input given the equalized output sequence. However, according to [8], the convolutional noise pdf tends to an approximately Gaussian pdf only at the latter stages of the iterative deconvolution process, where the equalizer has converged to a relatively low residual ISI (where the convolutional noise is relatively low). In the early stages of the iterative deconvolution process, the ISI is typically large, with the result that the input sequence and the convolutional noise sequence are strongly correlated and the convolutional noise pdf is more uniform than Gaussian [8,12]. Recently [3,4], the convolutional noise pdf was approximated with the Maximum Entropy density approximation technique [1,2,13,14] with Lagrange multipliers up to order four, and with the Edgeworth Expansion series [15,16] up to order six, to obtain the conditional expectation of the source signal (16QAM input case) given the equalized output via Bayes rules.
However, as demonstrated in [3], the blind adaptive equalization algorithm with the closed-form approximated expression for the conditional expectation based on approximating the convolutional noise pdf with the Maximum Entropy density approximation technique achieved, for the hard channel case (named the channel4 case in [3]), the same equalization performance from the residual ISI and convergence time point of view as the original blind adaptive equalization algorithm [1], where the convolutional noise pdf was approximated with the Gaussian pdf. The equalization performance obtained with the Edgeworth Expansion approach [4] was indeed improved compared with the original blind adaptive equalization algorithm [1]. However, the equalization method of [4] needed two additional predefined parameters (in addition to the predefined step-size parameter involved in the equalizer's coefficients update mechanism). These two additional predefined parameters were used in the approximation of the fourth and sixth moments of the convolutional noise. Since the convolutional noise is channel dependent, its moments are also channel dependent, which in turn makes the two additional predefined parameters in [4] channel dependent as well. As already implied earlier, the shape of the convolutional noise pdf changes during the iterative deconvolution process.
Thus, an approximation for the convolutional noise pdf that is close to optimal could yield a closed-form approximated expression for the conditional expectation of the source signal given the equalized output via Bayes rules, which may lead to improved equalization performance from the residual ISI and convergence time point of view compared to existing methods based on such closed-form expressions [1,3,4]. According to [17,18,19], the GGD provides a flexible and suitable tool for data modeling and simulation. The GGD [17,18] is characterized by a shape parameter: it reduces to a Laplacian (double exponential) distribution, a Gaussian distribution or a uniform distribution for a shape parameter equal to one, two and infinity, respectively. The shape of the convolutional noise pdf changes as a function of the residual ISI. Thus, in order to apply the GGD to the convolutional noise pdf approximation task, the shape parameter of the GGD presentation must be a function of the residual ISI. Recently [20], the shape parameter of the GGD presentation [17,18] was indeed given as a function of the residual ISI.
In this paper, we deal with the 16QAM input case, where we use the GGD presentation [17,18] together with the results obtained in [20] to approximate the convolutional noise pdf involved in the calculation of the closed-form approximated expression for the conditional expectation of the source signal given the equalized output via Bayes rules. Since the shape parameter of the GGD presentation [17,18] may also take fractional values during the iterative deconvolution process, the integral involved in the conditional expectation calculation may not lead to a closed-form approximated expression. Thus, in this work we use the Edgeworth Expansion series [15,16] up to order six for approximating the GGD presentation applicable to the convolutional noise pdf, where the fourth and sixth moments of the convolutional noise sequence are approximated with the GGD technique [17,18]. By applying the GGD [17,18], the Edgeworth Expansion [15,16] and the results from [20] (the relationship between the shape parameter and the residual ISI), a new closed-form approximated expression is proposed for the conditional expectation of the source signal given the equalized output via Bayes rules, with no need for additional predefined parameters in the obtained equalization method, as is the case in [4]. Simulation results indicate that with our new proposed equalization method based on the new model for the convolutional noise pdf we have:
  • Improved equalization performance from the convergence time point of view for the easy [6] as well as for the hard channel case, compared to the original Maximum Entropy algorithm [1]. The improvement in the convergence time is approximately 15,000 symbols for the hard channel case and approximately 250 symbols for the easy channel case. In both cases, the improvement amounts to approximately a third of the convergence time of the original Maximum Entropy algorithm [1].
  • Based on [3], the blind adaptive equalization algorithm with the closed-form approximated expression for the conditional expectation based on approximating the convolutional noise pdf with the Maximum Entropy density approximation technique achieved, for the hard channel case, the same equalization performance from the residual ISI and convergence time point of view as the original Maximum Entropy algorithm [1]. Thus, compared with the algorithm in [3], our new proposed method also improves the convergence time by approximately 15,000 symbols for the hard channel case.
  • The new proposed equalization method does not need additional predefined parameters (beyond the predefined step-size parameter involved in the equalizer's coefficients update mechanism) in order to achieve improved convergence time compared to the original Maximum Entropy algorithm [1], as does the algorithm in [4], where the convolutional noise pdf was approximated with the Edgeworth Expansion series.
  • For the easy channel case and an SNR of 26 dB, the new proposed equalization method has improved equalization performance from the residual ISI and convergence time point of view compared to the recently proposed methods [2,5], which are versions of the original Maximum Entropy algorithm [1]. The improvement is approximately 4 dB in residual ISI, while the improvement in the convergence time amounts to approximately a third of the convergence time achieved by the equalization methods presented in [2,5].
The paper is organized as follows: after describing the system under consideration in Section 2, a systematic way of obtaining the closed-form approximated expression for the conditional expectation of the source signal given the equalized output via Bayes rules, based on the GGD and the Edgeworth Expansion series, is given in Section 3. In Section 4 we introduce our simulation results. Finally, the conclusion is presented in Section 5.

2. System Description

In the following (Figure 1), we recall the system under consideration used in [1,3,4], where we apply the same assumptions made in [1,3,4]:
  • The input sequence x[n] is a 16QAM source (a modulation using ±{1,3} levels for the in-phase and quadrature components), which can be written as x[n] = x1[n] + j·x2[n], where x1[n] and x2[n] are the real and imaginary parts of x[n], respectively. x1[n] and x2[n] are independent and E[x[n]] = 0, where E[·] denotes the expectation operator.
  • The unknown channel h [ n ] is a possibly non-minimum phase linear time-invariant filter in which the transfer function has no “deep zeros”; namely, the zeros lie sufficiently far from the unit circle.
  • The filter c [ n ] is a tap-delay line.
  • The channel noise w [ n ] is an additive Gaussian white noise.
  • The function T [ · ] is a memoryless nonlinear function that satisfies the additivity condition:
    T [ z 1 [ n ] + j z 2 [ n ] ] = T [ z 1 [ n ] ] + j T [ z 2 [ n ] ] ,
    where z 1 [ n ] , z 2 [ n ] are the real and imaginary parts of the equalized output, respectively.
The input to the equalizer is given by:
y[n] = x[n] ∗ h[n] + w[n],
where “∗” stands for the convolutional operation. Based on (2), the equalized output is obtained via:
z[n] = y[n] ∗ c[n] = x[n] ∗ s̃[n] + w̃[n] = x[n] + p[n] + w̃[n],
where
s̃[n] = c[n] ∗ h[n] = δ[n] + ξ[n] ;  p[n] = x[n] ∗ ξ[n],
where ξ[n] stands for the difference (error) between the ideal and the actually used value of c[n] following (6), δ is the Kronecker delta function, w̃[n] = w[n] ∗ c[n] and p[n] is the convolutional noise. The ISI is expressed by:
ISI = ( ∑_m̃ |s̃[m̃]|² − |s̃|²max ) / |s̃|²max ,
where |s̃|max is the component of s̃, given in (4), having the maximal absolute value. The function T[z[n]] is an estimate of x[n], where d[n] = T[z[n]]. The equalizer is updated according to:
c̲[n+1] = c̲[n] + μ ( T[z[n]] − z[n] ) y̲*[n],
where (·)* is the conjugate operation, μ is the step-size parameter and c̲[n] is the equalizer vector, where the input vector is y̲[n] = [y[n] ... y[n−N+1]]^T. The operator (·)^T denotes the transpose, and N is the equalizer's tap length.
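To make the system model concrete, the following is a small illustrative sketch (ours, not the authors' code; the helper names qam16 and isi are our own) that generates a 16QAM source, forms the equalizer input of (2), and evaluates the ISI measure of (5) for a center-tap-initialized equalizer. The channel taps are those of the hard channel used in the simulation section.

```python
import numpy as np

rng = np.random.default_rng(0)

def qam16(n):
    """n symbols of a 16QAM source: levels +/-1, +/-3 on each axis."""
    levels = np.array([-3.0, -1.0, 1.0, 3.0])
    return rng.choice(levels, n) + 1j * rng.choice(levels, n)

def isi(s_tilde):
    """ISI = (sum_m |s~[m]|^2 - |s~|max^2) / |s~|max^2, Eq. (5)."""
    mag2 = np.abs(s_tilde) ** 2
    return (mag2.sum() - mag2.max()) / mag2.max()

# Hard channel ("Channel2" of the simulation section) plus a white-noise floor.
h = np.array([0.2258, 0.5161, 0.6452, 0.5161])
x = qam16(1000)
w = 0.01 * (rng.standard_normal(1000) + 1j * rng.standard_normal(1000))
y = np.convolve(x, h)[: len(x)] + w       # equalizer input, Eq. (2)

c = np.zeros(21)
c[10] = 1.0                                # center-tap initialization
s_tilde = np.convolve(c, h)                # combined response, Eq. (4)
print(f"initial ISI = {isi(s_tilde):.3f}")  # -> initial ISI = 1.402
```

With the center-tap initialization the combined response equals the channel itself, so the printed value reproduces the initial ISI of 1.402 quoted for the hard channel in the simulation section.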

3. GGD Based Closed-Form Approximated Expression for the Conditional Expectation

In this section, we present a systematic approach for obtaining the conditional expectation E[x[n]|z[n]] based on approximating the convolutional noise pdf with the GGD [17,18] and Edgeworth Expansion [15,16] presentations. For simplicity, in the following we write x, y and p for x[n], y[n] and p[n], respectively.
Theorem 1.
For the noiseless and 16QAM input case, the closed-form approximated expression for the conditional expectation ( E [ x | z ] ) is given by:
E [ x | z ] u 1 f 1 + j u 2 f 2 , w h e r e f o r i = 1 , 2 K = 4 a n d k = 2 , 4 u i = z i + 1 2 2 3 T 15 V + 1 k = 2 K k z i k 1 λ k 12 T σ p i 2 90 V σ p i 2 z i + 3 T 15 V + 1 z i k = 2 K k z i k 1 λ k 2 + k = 2 K k z i k 2 λ k k 1 ( σ z i 2 σ x i 2 ) + 1 8 4 3 T 15 V + 1 k = 2 K k z i z i k λ k 3 + 4 3 T 15 V + 1 k = 2 K 1 z i 3 k 3 z i k λ k 3 k 2 z i k λ k + 2 k z i k λ k 12 12 T σ p i 2 90 V σ p i 2 k = 2 K k z i z i k λ k + 24 T σ p i 4 360 V σ p i 4 z + 3 3 T 15 V + 1 z i k = 2 K 1 z i 2 k 2 z i k λ k k z i k λ k 2 6 12 T σ p i 2 90 V σ p i 2 z i k = 2 K k z i z i k λ k 2 + 3 T 15 V + 1 z i k = 2 K k z i z i k λ k 4 + 12 3 T 15 V + 1 k = 2 K 1 z i 2 k 2 z i k λ k k z i k λ k k = 2 K k z i z i k λ k + 3 T 15 V + 1 z i k = 2 K 1 z i 4 11 k 2 z i k λ k 6 k 3 z i k λ k + k 4 z i k λ k 6 k z i k λ k 6 12 T σ p i 2 90 V σ p i 2 z i k = 2 K 1 z i 2 k 2 z i k λ k k z i k λ k + 4 3 T 15 V + 1 z i k = 2 K 1 z i 3 k 3 z i k λ k 3 k 2 z i k λ k + 2 k z i k λ k k = 2 K k z i z i k λ k + 6 3 T 15 V + 1 z i k = 2 K 1 z i 2 k 2 z i k λ k k z i k λ k k = 2 K k z i z i k λ k 2 σ z i 2 σ x i 2 2 a n d f i = 1 + 1 2 k = 2 K k z i k 1 λ k 2 + k = 2 K k z i k 2 λ k k 1 ( σ z i 2 σ x i 2 ) + 1 8 k = 2 K k z i k 1 λ k 4 + 6 k = 2 K k z i k 1 λ k 2 k = 2 K k z i k 2 λ k k 1 + 4 k = 2 K k z i k 3 λ k k 1 k 2 k = 2 K k z i k 1 λ k + 3 k = 2 K k z i k 2 λ k k 1 2 + k = 2 K k z i k 4 λ k k 1 k 2 k 3 ( σ z i 2 σ x i 2 ) 2 , w i t h σ p i 2 = σ z i 2 σ x i 2 ,
where
T = ( Γ(1/β)Γ(5/β)/Γ²(3/β) − 3 ) / 4! ;  V = ( Γ²(1/β)Γ(7/β)/Γ³(3/β) − 15·Γ(1/β)Γ(5/β)/Γ²(3/β) + 30 ) / 6! ,
and where Γ is the Gamma function and β is given by [20]:
β ≈ −1.1938×10⁻⁵·ISI_dB⁴ − 7.3370×10⁻⁴·ISI_dB³ − 0.0146·ISI_dB² − 0.0693·ISI_dB + 2.6266 , with ISI_dB = 10·log₁₀(ISI).
In this work the I S I is expressed as:
ISI = σ_p1² / σ_x1²
and the Lagrange multipliers for k = 2 , 4 ( λ 2 , λ 4 ) are calculated according to [1]:
1 + 4λ₂m₂ + 8λ₄m₄ = 0 ;  3m₂ + 8λ₄m₆ + 4λ₂m₄ = 0 ,
where
m_k = E[ x1^k ].
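As a side check (ours, not from the paper), the system (10) is linear in λ₂ and λ₄ once the moments are fixed. For the real part of a 16QAM source, whose levels ±1, ±3 are equiprobable, m₂ = 5, m₄ = 41 and m₆ = 365, so the multipliers follow from a 2×2 linear solve under the coefficient convention printed in (10):

```python
import numpy as np

m2, m4, m6 = 5.0, 41.0, 365.0   # m_k = E[x1^k] for x1 in {+-1, +-3}

# Eq. (10): 1 + 4*l2*m2 + 8*l4*m4 = 0  and  3*m2 + 4*l2*m4 + 8*l4*m6 = 0
A = np.array([[4 * m2, 8 * m4],
              [4 * m4, 8 * m6]])
b = np.array([-1.0, -3.0 * m2])
lam2, lam4 = np.linalg.solve(A, b)
print(f"lambda_2 = {lam2:.4f}, lambda_4 = {lam4:.5f}")
# -> lambda_2 = 0.4340, lambda_4 = -0.02951
```

Note that λ₄ < 0, so the resulting Maximum Entropy pdf A·exp(λ₂x1² + λ₄x1⁴) is normalizable.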
Proof of Theorem 1. 
For the two independent quadrature carrier case where the 16QAM modulation is a special case of it, the conditional expectation ( E [ x | z ] ) can be given according to [9] as:
E [ x | z ] = E [ x 1 | z 1 ] + j E [ x 2 | z 2 ] .
Thus, real and imaginary parts of the data are to be estimated separately on the basis of the real and imaginary parts of the equalizer’s output sequence. For the noiseless case, (3) may be written as:
p = z − x.
In the following, we denote p1 and p2 as the real and imaginary parts of p. Based on (14), and under the assumption that the blind adaptive equalizer leaves the system with a relatively low residual ISI for which the input signal x and the convolutional noise p can be considered independent [8], we may write for the 16QAM modulation case:
σ_p² = σ_z² − σ_x² ;  σ_p² = 2σ_p1² = 2σ_p2² = 2σ_z1² − 2σ_x1² = 2σ_z2² − 2σ_x2²  ⇒  σ_p1² = σ_z1² − σ_x1².
Based on (3), the variance of the real part of the equalized output signal σ z 1 2 can be written for the noiseless case as:
σ_z1² = σ_x1² ∑_m̃ |s̃[m̃]|².
Next, based on (16), (15) and (5) we may write:
2σ_p1² = 2σ_x1² ∑_m̃ |s̃[m̃]|² − 2σ_x1² = 2σ_x1² ( ∑_m̃ |s̃[m̃]|² − 1 )  ⇒  σ_p1² = σ_x1²·ISI for |s̃|max = 1 , i.e., σ_p1²/σ_x1² = ISI for |s̃|max = 1.
Next, we show the systematic approach for calculating the conditional expectation E [ x 1 | z 1 ] . The conditional expectation E [ x 1 | z 1 ] is defined by:
E[x1|z1] = ∫_{−∞}^{+∞} x1 f_{x1|z1}(x1|z1) dx1 ,
where f x 1 | z 1 x 1 | z 1 is the conditional pdf. Based on Bayes rules we may write:
f_{x1|z1}(x1|z1) = f_{z1|x1}(z1|x1) f_{x1}(x1) / f_{z1}(z1) = f_{z1|x1}(z1|x1) f_{x1}(x1) / ∫_{−∞}^{+∞} f_{z1|x1}(z1|x1) f_{x1}(x1) dx1 .
Now, by substituting (19) into (18) we obtain:
E[x1|z1] = ∫_{−∞}^{+∞} x1 f_{z1|x1}(z1|x1) f_{x1}(x1) dx1 / ∫_{−∞}^{+∞} f_{z1|x1}(z1|x1) f_{x1}(x1) dx1 .
As was already mentioned earlier in this paper, we would like to use the GGD [17,18] presentation for approximating the real part of the convolutional noise pdf. Thus, based on the GGD [17,18] the real part of the convolutional noise pdf is approximately given by:
f_{p1}(p1) ≈ ( 1 / ( 2Γ(1+1/β) B(β,σ) ) ) exp( −| p1 / B(β,σ) |^β ) ,
with
B(β,σ) = ( σ_p1² Γ(1/β) / Γ(3/β) )^{1/2} ,
where β is defined as the shape parameter of the pdf presentation. Thus, based on [17,18], (21) and (14), the conditional pdf f z 1 | x 1 z 1 | x 1 can be expressed by:
f_{z1|x1}(z1|x1) ≈ ( 1 / ( 2Γ(1+1/β) B(β,σ) ) ) exp( −| (z1 − x1) / B(β,σ) |^β ) .
Following [1,3,4], we use the Maximum Entropy density approximation technique [13,14] with Lagrange multipliers up to order four, for approximating the pdf of the real part input sequence:
f_{x1}(x1) ≈ A exp( λ₂x1² + λ₄x1⁴ ) ,
where λ₂ and λ₄ are the Lagrange multipliers and A is a constant. Next, when substituting (23) and (24) into (20), problems arise in carrying out the integrals involved in (20) to achieve a closed-form approximated expression for the conditional expectation E[x1|z1], due to the fact that the shape parameter β changes during the iterative blind deconvolution process and may also take non-integer values. Thus, to overcome this problem, we apply the Edgeworth Expansion series [15,16] up to order six for approximating the real part of the convolutional noise pdf, where the higher moments of the convolutional noise sequence are calculated via the GGD [17,18] technique:
f_{p1}(p1) ≈ ( exp(−p1²/(2σ_p1²)) / ( √(2π) σ_p1 ) ) [ 1 + ( ( E[p1⁴] − 3σ_p1⁴ ) / ( 4! σ_p1⁴ ) ) ( p1⁴/σ_p1⁴ − 6p1²/σ_p1² + 3 ) + ( ( E[p1⁶] − 15σ_p1² E[p1⁴] + 30σ_p1⁶ ) / ( 6! σ_p1⁶ ) ) ( p1⁶/σ_p1⁶ − 15p1⁴/σ_p1⁴ + 45p1²/σ_p1² − 15 ) ] , with E[p1⁶] = σ_p1⁶ Γ²(1/β)Γ(7/β)/Γ³(3/β) and E[p1⁴] = σ_p1⁴ Γ(1/β)Γ(5/β)/Γ²(3/β) .
Thus, based on the Edgeworth Expansion series technique [15,16] and (25) we have:
f_{z1|x1}(z1|x1) ≈ ( exp(−(z1−x1)²/(2σ_p1²)) / ( √(2π) σ_p1 ) ) [ 1 + ( ( E[p1⁴] − 3σ_p1⁴ ) / ( 4! σ_p1⁴ ) ) ( (z1−x1)⁴/σ_p1⁴ − 6(z1−x1)²/σ_p1² + 3 ) + ( ( E[p1⁶] − 15σ_p1² E[p1⁴] + 30σ_p1⁶ ) / ( 6! σ_p1⁶ ) ) ( (z1−x1)⁶/σ_p1⁶ − 15(z1−x1)⁴/σ_p1⁴ + 45(z1−x1)²/σ_p1² − 15 ) ] ,
with E p 1 6 and E p 1 4 given in (25). Now, substituting (26) and (24) into (20) yields:
E[x1|z1] ≈ ∫ g1(x1) exp( −Ψ(x1)/ρ ) dx1 / ∫ g(x1) exp( −Ψ(x1)/ρ ) dx1 ,
where
ρ = 2σ_p1² ;  Ψ(x1) = (z1 − x1)² ;  g1(x1) = x1 g(x1) ;  g(x1) = g̃(x1) [ 1 + ( ( E[p1⁴] − 3σ_p1⁴ ) / ( 4! σ_p1⁴ ) ) ( (z1−x1)⁴/σ_p1⁴ − 6(z1−x1)²/σ_p1² + 3 ) + ( ( E[p1⁶] − 15σ_p1² E[p1⁴] + 30σ_p1⁶ ) / ( 6! σ_p1⁶ ) ) ( (z1−x1)⁶/σ_p1⁶ − 15(z1−x1)⁴/σ_p1⁴ + 45(z1−x1)²/σ_p1² − 15 ) ] ;  g̃(x1) = exp( λ₂x1² + λ₄x1⁴ ) .
In order to obtain closed-form expressions for the integrals involved in (27), Laplace's method [21] is applied, as was also done in [1,3,4]. According to [21], Laplace's method is a general technique for obtaining the asymptotic behavior, as ρ → 0, of integrals in which the large parameter 1/ρ appears in the exponent. The main idea of Laplace's method is that if the continuous function Ψ(x1) attains its minimum at an interior point x0, then only the immediate neighborhood of x1 = x0 contributes to the full asymptotic expansion of the integral for large 1/ρ. Thus, according to [1,3,4,21]:
∫ g(x1) exp( −Ψ(x1)/ρ ) dx1 ≈ exp( −Ψ(x0)/ρ ) √( 2πρ/Ψ″(x0) ) [ g(x0) + g″(x0) ρ/(2Ψ″(x0)) + g⁗(x0) ρ²/(8(Ψ″(x0))²) + O( (ρ/Ψ″(x0))³ ) ] ,
∫ g1(x1) exp( −Ψ(x1)/ρ ) dx1 ≈ exp( −Ψ(x0)/ρ ) √( 2πρ/Ψ″(x0) ) [ g1(x0) + g1″(x0) ρ/(2Ψ″(x0)) + g1⁗(x0) ρ²/(8(Ψ″(x0))²) + O( (ρ/Ψ″(x0))³ ) ] ,
where (·)″ and (·)⁗ stand for the second and fourth derivatives of (·), respectively, and O(x) is defined by lim_{x→0} O(x)/x = r_const, where r_const is a constant. The expressions for Ψ″(x0) and x0 are given by:
Ψ′(x1) = −2(z1 − x1) ;  Ψ″(x1) = 2  ⇒  Ψ″(x0) = 2 ;  Ψ′(x0) = −2(z1 − x0) = 0  ⇒  x0 = z1 .
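The expansion (29)-(31) can be checked numerically. The sketch below is our illustration, not the paper's: it uses an arbitrary smooth test function g(x) = cos(x), whose derivatives are known in closed form, and compares the three-term Laplace approximation against direct quadrature for a small ρ:

```python
import numpy as np
from scipy.integrate import quad

z, rho = 0.7, 0.05               # small rho: the asymptotic regime

# Direct numerical evaluation of  integral g(x) exp(-(z - x)^2 / rho) dx.
exact, _ = quad(lambda x: np.cos(x) * np.exp(-(z - x) ** 2 / rho),
                -np.inf, np.inf)

# Three-term Laplace approximation around x0 = z with Psi''(x0) = 2:
# sqrt(2*pi*rho/Psi'') * [ g + g''*rho/(2*Psi'') + g''''*rho^2/(8*Psi''^2) ]
g, g2, g4 = np.cos(z), -np.cos(z), np.cos(z)
approx = np.sqrt(2 * np.pi * rho / 2) * (g + g2 * rho / 4 + g4 * rho ** 2 / 32)

print(f"relative error = {abs(approx - exact) / abs(exact):.2e}")
```

Since Ψ here is exactly quadratic, only the even derivatives of g contribute, and the relative error is of order ρ³, which is negligible for small ρ.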
Now, substituting (29) and (30) into (27), dividing both the numerator and the denominator by the function g̃(z1) given in (28) (with z1 in place of x1), and using x0 = z1, Ψ″(x0) = 2 from (31), ρ = 2σ_p1² from (28) and σ_p1² = σ_z1² − σ_x1² from (15), we obtain:
E[x1|z1] ≈ E[x1|z1]_up / E[x1|z1]_down ,
E[x1|z1]_up = z1 + ( g1″(z1) / (2g̃(z1)) ) ( σ_z1² − σ_x1² ) + ( g1⁗(z1) / (8g̃(z1)) ) ( σ_z1² − σ_x1² )² ;  E[x1|z1]_down = 1 + 3T − 15V + ( g″(z1) / (2g̃(z1)) ) ( σ_z1² − σ_x1² ) + ( g⁗(z1) / (8g̃(z1)) ) ( σ_z1² − σ_x1² )² .
Next, in order to reduce the computational complexity, we notice that the denominator of (32) ( E x 1 | z 1 d o w n from (33)) can be approximated by:
E[x1|z1]_down ≈ 1 + ( g̃″(z1) / (2g̃(z1)) ) ( σ_z1² − σ_x1² ) + ( g̃⁗(z1) / (8g̃(z1)) ) ( σ_z1² − σ_x1² )² ,
where g̃″(z1) and g̃⁗(z1) are the second and fourth derivatives of g̃(z1), respectively. Please note that (34) is valid for the Gaussian convolutional noise pdf case. By using (32) with E[x1|z1]_down and E[x1|z1]_up from (34) and (33), respectively, and the following derivatives:
k = 2 , 4 ; K = 4 g ˜ z 1 = g ˜ z 1 k = 2 K k z 1 k 1 λ k g ˜ z 1 = g ˜ z 1 ( k = 2 K k z 1 k 1 λ k ) 2 + g ˜ z 1 k = 2 K k z 1 k 2 λ k k 1 g ˜ z 1 = g ˜ z 1 k = 2 K k z 1 k 1 λ k 3 + g ˜ z 1 3 k = 2 K k z 1 k 2 λ k k 1 k = 2 K k z 1 k 1 λ k g ˜ z 1 k = 2 K k z 1 k 3 λ k k 1 k 2 g ˜ z 1 = 3 g ˜ z 1 k = 2 K k z 1 k 2 λ k k 1 2 + 6 g ˜ z 1 k = 2 K k z 1 k 2 λ k k 1 k = 2 K k z 1 k 1 λ k 2 + g ˜ z 1 k = 2 K k z 1 k 1 λ k 4 + 4 g ˜ z 1 k = 2 K k z 1 k 3 λ k k 1 k 2 k = 2 K k z 1 k 1 λ k + g ˜ z 1 k = 2 K k z 1 k 4 λ k k 1 k 2 k 3
g 1 z 1 = 2 g ˜ z 1 3 T 15 V + 1 + z 1 g ˜ z 1 3 T 15 V + 1 z 1 g ˜ z 1 12 T σ p 1 2 90 V σ p 1 2 g 1 z 1 = 4 g ˜ z 1 3 T 15 V + 1 12 g ˜ z 1 12 T σ p 1 2 90 V σ p 1 2 + z 1 g ˜ z 1 24 T σ p 1 4 360 V σ p 1 4 + z 1 g ˜ z 1 3 T 15 V + 1 6 z 1 g ˜ z 1 12 T σ p 1 2 90 V σ p 1 2 ,
the expression for u1/f1 from (7) is obtained. Finally, using (13), the complete expression (7) for E[x|z] follows. □
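A short sketch (ours, not the authors' code) of the scalar ingredients of Theorem 1: the Edgeworth coefficients T and V, and the shape parameter β as a polynomial in the residual ISI in dB. The polynomial's coefficient signs follow our reading of the printed fit from [20] and should be treated as an assumption. A useful sanity check is the Gaussian limit: at β = 2 both T and V must vanish.

```python
from math import gamma, log10

def T_of(beta):
    """T = (Gamma(1/b)Gamma(5/b)/Gamma(3/b)^2 - 3) / 4!  (Eq. for T)."""
    return (gamma(1 / beta) * gamma(5 / beta) / gamma(3 / beta) ** 2 - 3) / 24.0

def V_of(beta):
    """V = (Gamma(1/b)^2 Gamma(7/b)/Gamma(3/b)^3 - 15*r4 + 30) / 6!."""
    r4 = gamma(1 / beta) * gamma(5 / beta) / gamma(3 / beta) ** 2
    r6 = gamma(1 / beta) ** 2 * gamma(7 / beta) / gamma(3 / beta) ** 3
    return (r6 - 15 * r4 + 30) / 720.0

def beta_of_isi(isi):
    """Polynomial fit of the GGD shape parameter vs. ISI in dB (from [20];
    coefficient signs reconstructed here, an assumption)."""
    d = 10 * log10(isi)
    return (-1.1938e-5 * d ** 4 - 7.3370e-4 * d ** 3
            - 0.0146 * d ** 2 - 0.0693 * d + 2.6266)

print(T_of(2), V_of(2))                      # both ~0: Gaussian limit
print(beta_of_isi(10 ** (-20 / 10)))         # close to 2 (Gaussian) at low ISI
```

The vanishing of T and V at β = 2 confirms that the Edgeworth corrections disappear exactly when the GGD degenerates to the Gaussian pdf, which is the late-convergence regime described in Section 1.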

4. Simulation

In this section, we use the 16QAM input case with two different channels to show, via simulation results, the usefulness of our new proposed model for the convolutional noise pdf based on the GGD [17,18] and Edgeworth Expansion [15,16], compared to the Gaussian case. For the equalization performance comparison, we use the MaxEnt algorithm [1], where the conditional expectation is derived by assuming the Gaussian model for the convolutional noise pdf and the source pdf is approximated with the Maximum Entropy density approximation technique [13,14], as is done with our new proposed equalization method. Thus, the difference between the two approximated expressions for the conditional expectation ([1] and (7)) is only due to the different model used for the convolutional noise pdf. In addition, we use for the equalization performance comparison two additional equalization methods [2,5], which we name MaxEnt_BNEW and MaxEnt_ANEW, respectively. These methods ([2,5]) are versions of the original MaxEnt algorithm [1] where the convolutional noise pdf was also approximated with the Gaussian model.
The equalizer’s taps for the Maximum Entropy algorithm (MaxEnt) [1] were updated according to:
c_l[n+1] = c_l[n] − μ_me W y*[n−l] ,
with:
W = E x 1 | z 1 z 1 n E x 1 | z 1 z 1 2 n + j E x 2 | z 2 z 2 n E x 2 | z 2 z 2 2 n z n ,
where μ m e is a positive step-size parameter and
E x 1 | z 1 = z 1 + g ^ 1 ( z 1 ) 2 g ^ ( z 1 ) σ x 1 2 σ z 1 2 + g ^ 1 ( 4 ) ( z 1 ) 8 g ^ ( z 1 ) σ x 1 2 σ z 1 2 2 1 + g ^ ( z 1 ) 2 g ^ ( z 1 ) σ x 1 2 σ z 1 2 + g ^ ( 4 ) ( z 1 ) 8 g ^ ( z 1 ) σ x 1 2 σ z 1 2 2 E x 2 | z 2 = z 2 + g ^ 1 ( z 2 ) 2 g ^ ( z 2 ) σ x 2 2 σ z 2 2 + g ^ 1 ( 4 ) ( z 2 ) 8 g ^ ( z 2 ) σ x 2 2 + σ z 2 2 2 1 + g ^ ( z 2 ) 2 g ^ ( z 2 ) σ x 2 2 σ z 2 2 + g ^ ( 4 ) ( z 2 ) 8 g ^ ( z 2 ) σ x 2 2 σ z 2 2 2 ,
where:
k = 2 , 4 ; K = 4 s = 1 , 2 ; g ^ z s = exp k = 2 k = K λ k x s k x s = z s g ^ ( z s ) = d 2 d x s 2 exp k = 2 k = K λ k x s k x s = z s ; g ^ ( 4 ) ( z s ) = d 4 d x s 4 exp k = 2 k = K λ k x s k x s = z s g ^ 1 ( z s ) = d 2 d x s 2 x s exp k = 2 k = K λ k x s k x s = z s ; g ^ 1 ( 4 ) ( z s ) = d 4 d x s 4 x s exp k = 2 k = K λ k x s k x s = z s
and σ x 1 2 , σ x 2 2 are the variances of the real and imaginary parts of the source signal respectively. The variances of the real and imaginary parts of the equalized output are defined as σ z 1 2 and σ z 2 2 respectively and estimated by [1]:
⟨z_s²⟩[n] = (1 − β_me) ⟨z_s²⟩[n−1] + β_me z_s[n]² ,
where ⟨·⟩ stands for the estimated expectation, ⟨z_s²⟩[0] > 0, l stands for the l-th tap of the equalizer and β_me is a positive step-size parameter. The Lagrange multipliers λ_k from (40) are given in (11). According to [1], the equalizer's taps are updated only if N̂_s > ε, where ε is a small positive parameter and N̂_s is the denominator of the corresponding expression in (39). In the following, we denote our new proposed equalization method based on the GGD [17,18] as GGD, where the equalizer's taps are updated according to:
c_l[n+1] = c_l[n] − μ W y*[n−l] ,
where μ is a positive step size parameter and W is given in (38) with:
E x 1 | z 1 = u 1 f 1 ; E x 2 | z 2 = u 2 f 2 ,
where u 1 f 1 and u 2 f 2 are given in (7). The variances of the real and imaginary parts of the convolutional noise ( σ p 1 2 and σ p 2 2 ) are given by:
σ_ps² = σ_zs² − σ_xs² , with ⟨z_s²⟩[n] = (1 − β) ⟨z_s²⟩[n−1] + β z_s[n]² , for s = 1, 2 ,
where β is a positive step size parameter. It should be pointed out that the equalizer’s taps related to the GGD algorithm are updated only when f 1 > ε and f 2 > ε similar to the MaxEnt algorithm. The equalizer’s taps related to the M a x E n t A N E W algorithm are updated according to [5]:
c̃_l[n+1] = c_l[n] − μ_ANEW W y*[n−l] ,
where μ A N E W is a positive step size parameter and W is given in (38) with:
E x 1 | z 1 1 + ( ε 0 1 + ε 2 1 z 1 2 + ε 4 1 z 1 4 ) + 1 2 ( ε 0 1 + ε 2 1 z 1 2 + ε 4 1 z 1 4 ) 2 z 1 + σ p 1 2 2 g 1 z 1 g ( z 1 ) + σ p 1 2 2 8 g 1 z 1 g ( z 1 ) E x 2 | z 2 1 + ( ε 0 2 + ε 2 2 z 2 2 + ε 4 2 z 2 4 ) + 1 2 ( ε 0 2 + ε 2 2 z 2 2 + ε 4 2 z 2 4 ) 2 z 2 + σ p 2 2 2 g 1 z 2 g ( z 2 ) + σ p 2 2 2 8 g 1 z 2 g ( z 2 ) , w h e r e : s = 1 , 2 g 1 z s g ( z s ) = 2 z s 8 z s 6 λ 4 2 + 8 z s 4 λ 2 λ 4 + 2 z s 2 λ 2 2 + 10 z s 2 λ 4 + 3 λ 2 g 1 z s g ( z s ) = 4 z s 64 z s 12 λ 4 4 + 128 z s 10 λ 2 λ 4 3 + 96 z s 8 λ 2 2 λ 4 2 + 352 z s 8 λ 4 3 + 32 z s 6 λ 2 3 λ 4 + 432 z s 6 λ 2 λ 4 2 + 4 z s 4 λ 2 4 + 168 z s 4 λ 2 2 λ 4 + 348 z s 4 λ 4 2 + 20 z s 2 λ 2 3 + 180 z s 2 λ 2 λ 4 + 15 λ 2 2 + 30 λ 4 σ p s 2 = σ z s 2 σ x s 2
and
σ x s 2 = E [ x s 2 ] .
According to [5]:
σ z s 2 = E [ z s 2 ]
and given by:
⟨z_s²⟩[n] = (1 − β_ANEW) ⟨z_s²⟩[n−1] + β_ANEW z_s[n]² ,
where z s 2 0 > 0 , β A N E W and μ A N E W are positive step size parameters. ε 0 s , ε 2 s , ε 4 s , λ 2 and λ 4 were set according to [5] as
ε 0 s = 2 λ 2 σ p s 2 ; ε 2 s = σ p s 2 4 λ 2 2 + 12 λ 4 ; ε 4 s = 16 λ 2 λ 4 σ p s 2
λ 2 1 40 m ¯ 2 20736 m ¯ 4 2 + 1280 m ¯ 2 m ¯ 6 41472 m ¯ 4 2 + 2560 m ¯ 2 m ¯ 6 144 m ¯ 4 480 m ¯ 2 2 + 288 m ¯ 4 λ 4 1 20736 m ¯ 4 2 + 1280 m ¯ 2 m ¯ 6 480 m ¯ 2 2 + 288 m ¯ 4 ,
where
m̄_G = E[ x1^G ].
In order to get equalization gain of one, the following gain control was used according to [5]:
c_l[n] = c̃_l / ∑_l c̃_l² ,
where c l [ n ] is the vector of taps after iteration and c l [ 0 ] is some reasonable initial guess. The equalizer’s taps related to the M a x E n t B N E W algorithm are updated according to [2]:
c̃_l[n+1] = c_l[n] − μ_BNEW W y*[n−l] ,
where μ B N E W is a positive step size parameter and W is given in (38) with:
E x 1 | z 1 = z 1 + g ^ 1 ( z 1 ) 2 g ^ ( z 1 ) σ p 1 2 1 + g ^ ( z 1 ) 2 g ^ ( z 1 ) σ p 1 2 E x 2 | z 2 = z 2 + g ^ 1 ( z 2 ) 2 g ^ ( z 2 ) σ p 2 2 1 + g ^ ( z 2 ) 2 g ^ ( z 2 ) σ p 2 2 , w h e r e : s = 1 , 2 g ^ 1 ( z s ) 2 g ^ ( z s ) = z s 8 z s 6 λ 4 2 + 8 z s 4 λ 2 λ 4 + 2 z s 2 λ 2 2 + 10 z s 2 λ 4 + 3 λ 2 g ^ ( z s ) 2 g ^ ( z s ) = 8 z s 6 λ 4 2 + 8 z s 4 λ 2 λ 4 + 2 z s 2 λ 2 2 + 6 z s 2 λ 4 + λ 2 σ p s 2 = σ z s 2 σ x s 2
λ 2 = 1 4 m ^ 2 64 m ^ 4 2 64 m ^ 2 m ^ 6 64 m ^ 2 m ^ 6 64 m ^ 4 2 + 8 m ^ 4 8 m ^ 4 24 m ^ 2 2 λ 4 = 1 64 m ^ 4 2 64 m ^ 2 m ^ 6 8 m ^ 4 24 m ^ 2 2 with m ^ 2 = m ¯ 2 1 + 1 S N R k = 0 R 1 h k 2 m ^ 4 = m ¯ 2 2 3 S N R k = 0 R 1 h k 2 2 + 6 S N R k = 0 R 1 h k 2 + m ¯ 4 m ¯ 2 2 m ^ 6 = m ¯ 2 3 15 S N R k = 0 R 1 h k 2 3 + 45 S N R k = 0 R 1 h k 2 2 + 15 S N R k = 0 R 1 h k 2 m ¯ 4 m ¯ 2 2 + m ¯ 6 m ¯ 2 3 ,
where
SNR = m̄₂ / σ_wr² .
σ z s 2 was estimated by
⟨z_s²⟩[n] = (1 − β_BNEW) ⟨z_s²⟩[n−1] + β_BNEW z_s[n]² ,
where z s 2 0 > 0 , β B N E W and μ B N E W are positive step size parameters. The equalizer’s taps in (54) were updated only if N ^ s > ε 1 , where ε 1 is a small positive parameter and
N̂_s = 1 + ( ĝ″(z_s) / (2ĝ(z_s)) ) σ_ps² .
In addition, the gain control was applied according to (53).
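All of the algorithms above share the same computational skeleton: an exponential-moving-average estimate of the equalized-output power and a stochastic-gradient tap update of the form c_l[n+1] = c_l[n] − μ·W·y*[n−l]. The sketch below (ours; the names ema_power and update_taps are hypothetical, and the algorithm-specific error term W is deliberately left as a caller-supplied placeholder rather than implementing (38)) illustrates that skeleton only:

```python
import numpy as np

def ema_power(z, beta_step, p0=1.0):
    """<z^2>[n] = (1 - beta)<z^2>[n-1] + beta*z[n]^2, with <z^2>[0] = p0 > 0."""
    est = np.empty(len(z))
    prev = p0
    for n, zn in enumerate(z):
        prev = (1.0 - beta_step) * prev + beta_step * abs(zn) ** 2
        est[n] = prev
    return est

def update_taps(c, y_vec, W, mu):
    """One stochastic-gradient step c_l[n+1] = c_l[n] - mu*W*conj(y[n-l])."""
    return c - mu * W * np.conj(y_vec)

rng = np.random.default_rng(1)
z = 3.0 * rng.standard_normal(20000)          # toy sequence with power 9
power = ema_power(z, beta_step=1e-3)
print(f"estimated power after 20000 samples: {power[-1]:.2f}")

c = np.zeros(13, dtype=complex)
c[6] = 1.0                                     # center-tap initialization
c = update_taps(c, np.ones(13, dtype=complex), W=0.05 + 0.0j, mu=0.001)
```

The step sizes β and μ trade tracking speed against estimation noise, which is why the simulation section chooses them for fast convergence with a low steady-state ISI.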
Two different channels were considered:
  • Easy channel case, Channel1 (initial ISI = 0.44): the channel parameters were determined according to [6]: h_n = 0 for n < 0; h_n = 0.4 for n = 0; h_n = 0.84·0.4^(n−1) for n > 0.
  • Hard channel case, Channel2 (initial ISI = 1.402): the channel parameters were taken according to [22]: h_n = [0.2258, 0.5161, 0.6452, 0.5161].
For Channel1 and Channel2, we used equalizers with 13 and 21 taps, respectively. In the simulation, the equalizers were initialized by setting the center tap equal to one and all others to zero [1]. The step-size parameters μ, β, μ_me and β_me were chosen for fast convergence with a low steady-state ISI, where the values for μ_me and β_me were taken from [1]. Figure 2 shows the simulated ISI as a function of the iteration number for our new proposed algorithm (GGD), compared to the MaxEnt method [1], for the 16QAM input and Channel1 case, for a signal-to-noise ratio (SNR) of 26 dB according to [1].
Please note that the main purpose of a blind adaptive equalizer is to reach, as fast as possible, a residual ISI that is low enough for sending the equalized output sequence to the decision device to obtain reliable decisions on the input data. Reliable decisions can be made on the equalized output sequence when the equalizer leaves the system with a residual ISI lower than −16 dB. According to Figure 2, the new algorithm (GGD) achieves the residual ISI of −16 dB faster than the MaxEnt algorithm [1]. Thus, the GGD algorithm has a faster convergence rate than the MaxEnt method [1], which means that the equalized output sequence can be sent earlier to the decision device with the GGD algorithm than with the MaxEnt method [1]. Figure 3 shows the simulated ISI as a function of the iteration number of our new proposed algorithm (GGD), compared to the MaxEnt method [1], for the 16QAM input and Channel2 case at an SNR of 30 dB according to [1].
According to Figure 3, the GGD algorithm reaches the residual ISI of −16 dB approximately 15,000 symbols faster than the MaxEnt [1] algorithm does, while leaving the system with approximately the same residual ISI at the convergence state as the MaxEnt [1] method.
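The convergence time used in this comparison can be measured directly from a simulated ISI curve. The helper below is a hypothetical sketch (the name and the exact stopping rule are ours): it returns the number of symbols after which the ISI curve stays below the threshold for good.

```python
import numpy as np

def convergence_time(isi_db, threshold_db=-16.0):
    # Number of symbols needed before the ISI curve remains below the
    # threshold: the index just past the last above-threshold sample.
    isi_db = np.asarray(isi_db, dtype=float)
    above = np.nonzero(isi_db >= threshold_db)[0]
    return 0 if above.size == 0 else int(above[-1]) + 1
```

Comparing this quantity for two averaged ISI curves gives the "approximately 15,000 symbols" figure quoted above for the hard channel case.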
It should be pointed out that the equalization performance obtained with the GGD algorithm is very similar to that obtained in [4], where the Edgeworth Expansion up to order six was used for approximating the convolutional noise pdf. However, in [4], two additional step-size parameters were needed in the deconvolution process. Those step-size parameters are channel dependent and are not needed in the GGD algorithm. Thus, the GGD algorithm is preferable to the algorithm proposed in [4]. The GGD algorithm also has improved equalization performance for the hard channel case (Channel2) compared to the equalization method proposed in [3], where the Maximum Entropy density approximation technique [13,14] was used for approximating the convolutional noise pdf with Lagrange multipliers up to order four. Please note that according to [3], the MaxEnt method [1] and the equalization algorithm proposed in [3] have the same equalization performance for the hard channel case (Channel2). Figure 4 shows the simulated ISI as a function of the iteration number of our new proposed algorithm (GGD), compared to the MaxEnt method [1], the $MaxEnt_{A_{NEW}}$ method [5] and the $MaxEnt_{B_{NEW}}$ method [2], for the 16QAM input and Channel1 case at an SNR of 26 dB. According to Figure 4, the GGD algorithm has improved equalization performance from the residual ISI and convergence time points of view compared to the $MaxEnt_{A_{NEW}}$ [5] and $MaxEnt_{B_{NEW}}$ [2] methods. From the residual ISI point of view, the improvement is approximately 4 dB, while the convergence time is approximately one third of that achieved by the equalization methods presented in [2,5].
Although the GGD algorithm was obtained for the 16QAM constellation input, it can be extended to other two-independent-quadrature-carrier inputs with Lagrange multipliers up to order four by deriving another function for $\beta$ (9) that fits the new input constellation. In addition, if more than four Lagrange multipliers are needed to properly approximate the input sequence pdf, (7) should be used with $k = 2, 4, 6, \ldots, K$, and the Lagrange multipliers should be calculated as given in [1] for the general order case.
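For such a general-order extension, the derivative ratios of the model $\hat{g}(z) = \exp\left(\sum_k \lambda_k z^k\right)$ follow mechanically from the exponent polynomial. The helper below is a hypothetical sketch of this idea; for the order-four case it reproduces exactly the ratio $\hat{g}''(z_s)/(2\hat{g}(z_s))$ used by the GGD algorithm.

```python
def derivative_ratios(lambdas, z):
    # For g(z) = exp(q(z)) with q(z) = sum_k lambda_k * z^k (even k),
    # return g'(z)/g(z) = q'(z) and g''(z)/g(z) = q''(z) + q'(z)**2.
    # 'lambdas' maps each even order k to lambda_k, e.g. {2: lam2, 4: lam4}.
    qp = sum(k * lam * z**(k - 1) for k, lam in lambdas.items())
    qpp = sum(k * (k - 1) * lam * z**(k - 2) for k, lam in lambdas.items())
    return qp, qpp + qp**2
```

With only $\lambda_2$ and $\lambda_4$, $(q'' + q'^2)/2$ expands to $8 z^6 \lambda_4^2 + 8 z^4 \lambda_2\lambda_4 + 2 z^2 \lambda_2^2 + 6 z^2 \lambda_4 + \lambda_2$, matching the order-four expression; adding $\lambda_6, \lambda_8, \ldots$ only adds terms to the two sums.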

5. Conclusions

In this paper, the blind adaptive deconvolution problem was considered, where the GGD function and the Edgeworth Expansion up to order six were applied for approximating the convolutional noise pdf for the 16 QAM input case. A new closed-form approximated expression was derived for the conditional expectation, which led to a new blind adaptive equalization method. Unlike the blind adaptive equalization method known from the literature, in which the conditional expectation is based on a convolutional noise pdf approximated with the Edgeworth Expansion up to order six, this new proposed algorithm does not need additional predefined parameters that are channel dependent. Simulation results demonstrated that improved equalization performance is obtained with our new proposed equalization method, based on the new model for the convolutional noise pdf, compared to the original Maximum Entropy algorithm and to the two recently obtained versions of the original Maximum Entropy algorithm for the easy channel and high SNR case. Since the original Maximum Entropy algorithm has the same equalization performance for the hard channel case as the equalization method based on the conditional expectation expression in which the convolutional noise pdf was approximated with the Maximum Entropy density technique, the new proposed method also has improved equalization performance for the hard channel case compared with that equalization method. This paper demonstrated that improved equalization performance can be obtained if a proper approximation is applied for the convolutional noise pdf in the calculation of the expression for the conditional expectation via Bayes rules. The new proposed algorithm is valid only for the high SNR case, because the noise was not taken into account in our derivations.
Please note that the original Maximum Entropy algorithm and the two equalization methods based on the conditional expectation via Bayes rules, where the convolutional noise pdf was approximated with the Maximum Entropy density technique and with the Edgeworth Expansion approach, are also valid only for the high SNR case.

Author Contributions

Conceptualization, M.P. and S.S.; methodology, M.P.; software, S.S.; validation, M.P. and S.S.; formal analysis, M.P. and S.S.; writing—original draft preparation, M.P. and S.S.; writing—review and editing, M.P.; visualization, M.P.; supervision, M.P.; project administration, M.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

All the required data is given in the article.

Acknowledgments

We would like to thank the anonymous reviewers for their helpful comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Pinchas, M.; Bobrovsky, B.Z. A Maximum Entropy approach for blind deconvolution. Signal Process. 2006, 86, 2913–2931.
  2. Pinchas, M. A New Efficient Expression for the Conditional Expectation of the Blind Adaptive Deconvolution Problem Valid for the Entire Range of Signal-to-Noise Ratio. Entropy 2019, 21, 72.
  3. Freiman, A.; Pinchas, M. A Maximum Entropy inspired model for the convolutional noise PDF. Digit. Signal Process. 2015, 39, 35–49.
  4. Rivlin, Y.; Pinchas, M. Edgeworth Expansion Based Model for the Convolutional Noise pdf. Math. Probl. Eng. 2014.
  5. Pinchas, M. New Lagrange Multipliers for the Blind Adaptive Deconvolution Problem Applicable for the Noisy Case. Entropy 2016, 18, 65.
  6. Shalvi, O.; Weinstein, E. New criteria for blind deconvolution of nonminimum phase systems (channels). IEEE Trans. Inf. Theory 1990, 36, 312–321.
  7. Pinchas, M.; Bobrovsky, B.Z. A Novel HOS Approach for Blind Channel Equalization. IEEE Trans. Wireless Commun. 2007, 6, 875–886.
  8. Haykin, S. Blind Deconvolution. In Adaptive Filter Theory; Prentice-Hall: Englewood Cliffs, NJ, USA, 1991; Chapter 20.
  9. Bellini, S. Bussgang techniques for blind equalization. IEEE Global Telecommun. Conf. Rec. 1986, 3, 1634–1640.
  10. Bellini, S. Blind Equalization. Alta Frequenza 1988, 57, 445–450.
  11. Fiori, S. A contribution to (neuromorphic) blind deconvolution by flexible approximated Bayesian estimation. Signal Process. 2001, 81, 2131–2153.
  12. Godfrey, R.; Rocca, F. Zero memory non-linear deconvolution. Geophys. Prospect. 1981, 29, 189–228.
  13. Jumarie, G. Nonlinear filtering: A weighted mean squares approach and a Bayesian one via the Maximum Entropy principle. Signal Process. 1990, 21, 323–338.
  14. Papoulis, A. Probability, Random Variables, and Stochastic Processes, 2nd ed.; International Edition; McGraw-Hill: New York, NY, USA, 1984; Chapter 15; p. 536.
  15. Assaf, S.A.; Zirkle, L.D. Approximate analysis of nonlinear stochastic systems. Int. J. Control 1976, 23, 477–492.
  16. Bover, D.C.C. Moment equation methods for nonlinear stochastic systems. J. Math. Anal. Appl. 1978, 65, 306–320.
  17. Domínguez-Molina, J.A.; González-Farías, G.; Rodríguez-Dagnino, R.M. A Practical Procedure to Estimate the Shape Parameter in the Generalized Gaussian Distribution. Available online: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.329.2835; https://www.cimat.mx/reportes/enlinea/I-01-18_eng.pdf (accessed on 28 March 2021).
  18. González-Farías, G.; Domínguez-Molina, J.A.; Rodríguez-Dagnino, R.M. Efficiency of the Approximated Shape Parameter Estimator in the Generalized Gaussian Distribution. IEEE Trans. Veh. Technol. 2009, 58, 4214–4223.
  19. Novey, M.; Adali, T.; Roy, A. A complex generalized Gaussian distribution-characterization, generation and estimation. IEEE Trans. Signal Process. 2010, 58, 1427–1433.
  20. Golberg, H.; Pinchas, M. A Novel Technique for Achieving the Approximated ISI at the Receiver for a 16QAM Signal Sent via a FIR Channel Based Only on the Received Information and Statistical Techniques. Entropy 2020, 22, 708.
  21. Bender, C.M.; Orszag, S.A. Advanced Mathematical Methods for Scientists and Engineers; International Series in Pure and Applied Mathematics; McGraw-Hill: New York, NY, USA, 1978; Chapter 6.
  22. Lazaro, M.; Santamaria, I.; Erdogmus, D.; Hild, K.E.; Pantaleon, C.; Principe, J.C. Stochastic blind equalization based on pdf fitting using Parzen estimator. IEEE Trans. Signal Process. 2005, 53, 696–704.
Figure 1. A block diagram for baseband communication transmission.
Figure 2. Equalization performance comparison between the GGD and MaxEnt methods for a 16QAM input going through Channel1. The averaged results were obtained from 100 Monte Carlo trials for an SNR of 26 dB. The step-size parameters were set to: $\mu = 6\times10^{-4}$, $\beta = 1\times10^{-4}$, $\mu_{me} = 3\times10^{-4}$, $\beta_{me} = 2\times10^{-4}$. In addition, we set $\varepsilon = 0$.
Figure 3. Equalization performance comparison between the GGD and MaxEnt methods for a 16QAM input going through Channel2. The averaged results were obtained from 50 Monte Carlo trials for an SNR of 30 dB. The step-size parameters were set to: $\mu = 3\times10^{-4}$, $\beta = 2\times10^{-6}$, $\mu_{me} = 2\times10^{-4}$, $\beta_{me} = 2\times10^{-6}$. In addition, we set $\varepsilon = 0.5$.
Figure 4. Equalization performance comparison between the GGD, MaxEnt, $MaxEnt_{A_{NEW}}$ and $MaxEnt_{B_{NEW}}$ methods for a 16QAM input going through Channel1. The averaged results were obtained from 100 Monte Carlo trials for an SNR of 26 dB. The step-size parameters were set to: $\mu = 6\times10^{-4}$, $\beta = 1\times10^{-4}$, $\mu_{me} = 3\times10^{-4}$, $\beta_{me} = 2\times10^{-4}$, $\mu_{A_{NEW}} = 3\times10^{-4}$, $\beta_{A_{NEW}} = 2\times10^{-5}$, $\mu_{B_{NEW}} = 3\times10^{-4}$, $\beta_{B_{NEW}} = 2\times10^{-4}$. In addition, we set $\varepsilon = 0$, $\varepsilon_1 = 0.5$.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Shlisel, S.; Pinchas, M. Improved Approach for the Maximum Entropy Deconvolution Problem. Entropy 2021, 23, 547. https://0-doi-org.brum.beds.ac.uk/10.3390/e23050547

