Article

Zero-Delay Joint Source Channel Coding for a Bivariate Gaussian Source over the Broadcast Channel with One-Bit ADC Front Ends

1 School of Computer Science and Engineering, Central South University, Changsha 410083, China
2 School of Electronics and Information Technology, Sun Yat-sen University, Guangzhou 510275, China
* Author to whom correspondence should be addressed.
Submission received: 29 July 2021 / Revised: 7 December 2021 / Accepted: 8 December 2021 / Published: 14 December 2021
(This article belongs to the Special Issue Information Theory for Communication Systems)

Abstract: In this work, we consider the zero-delay transmission of a bivariate Gaussian source over a Gaussian broadcast channel with one-bit analog-to-digital converter (ADC) front ends. An outer bound on the conditional distortion region is derived. Focusing on the minimization of the average distortion, two methods are proposed to design nonparametric mappings. The first is based on the joint optimization of the encoder and decoder via an iterative algorithm. In the second method, we derive the necessary conditions for the optimal encoder and use them to compute the encoder numerically via a gradient descent search. Subsequently, the structural characteristics of the optimized encoder mapping are discussed, and, inspired by this structure, several parametric mappings are proposed. Numerical results show that the proposed parametric mappings outperform the uncoded scheme and previous parametric mappings designed for broadcast channels with infinite-resolution ADC front ends, and that the nonparametric mappings in turn outperform the parametric ones. The causes of the performance differences between the two nonparametric mappings are analyzed. For the one-bit ADC setting, the average distortions of the proposed parametric and nonparametric mappings are close to the derived bound in the low channel signal-to-noise ratio region.

1. Introduction

Traditional digital communication systems, based on Shannon’s separation principle between source and channel coding [1], concentrate on mappings with long block lengths. Although these separated systems are not very robust to channel variation, optimality can be achieved given that no constraints are imposed on complexity and delay. However, such systems have become unsuitable for certain emerging applications that require transmission under extreme latency constraints, such as those involving internet of things (IoT) technologies [2] or wireless sensor networks (WSNs) [3]. In these application scenarios, strict delay constraints arise from the near real-time monitoring and feedback between users and the underlying physical systems. For example, with the full realization of the Industry 4.0 revolution in the forthcoming sixth-generation (6G) connection standards, machine controls are expected to achieve real-time operation with guaranteed microsecond delay jitter [4].
As a result, we consider the extreme case of joint source-channel coding (JSCC), zero-delay transmission, where a single source sample is transmitted over a single use of the channel.
A well-known approach for zero-delay transmission is the linear scheme, in other words, the uncoded scheme, which achieves the minimum squared distortion for a Gaussian source transmitted over an additive white Gaussian noise (AWGN) channel with an input power constraint [5]. In the point-to-point setting, the linear scheme is an alternative to optimal separate source and channel coding (SSCC). The linear scheme outperforms SSCC in terms of simplicity and delay, specifically in applications such as uncoded video transmission [6] and real-time control systems for IoT [7]. However, the linear scheme is not always sufficient for exploiting the additional degrees of freedom available in multi-terminal systems, and in many multi-terminal scenarios both the SSCC and linear schemes fall short of optimality. In [8], Bross et al. proved that, for the transmission of a memoryless bivariate Gaussian source over the Gaussian broadcast channel (GBC), the uncoded scheme achieves optimality whenever the channel signal-to-noise ratio (CSNR) is below a certain threshold. To date, various zero-delay analog mappings, including parametric and nonparametric mappings, have been proposed for different scenarios [9,10,11]. In [12,13,14], hybrid digital and analog (HDA) schemes for zero-delay transmission that outperform the uncoded schemes in various multi-terminal cases have been reported.
The analog-to-digital converter (ADC) plays an important role at the receiving antenna as the key component of the front end of the digital receiver. The power consumption of an ADC increases exponentially with its resolution [15]. This drawback has led to growing concern about the energy consumption of the receiving ends. In [16], Jeon et al. proposed computationally efficient yet near-optimal soft-output detection methods for coded millimeter-wave (mmWave) multiple input multiple output (MIMO) systems with low-precision ADCs; the proposed method provides significant gains over existing techniques in the same setting. In [17], Dong et al. analyzed the uplink performance of a multiuser massive MIMO system with spatially correlated channels and low-precision ADCs. Herein, we consider an extreme case, namely, one-bit ADCs, which can be realized by a simple threshold comparator without the need for automatic gain control [18,19].
The advantages of the one-bit ADC front end on the performance of specific communication systems have been analyzed in the literature for numerous models. In [20], a low-complexity, near-maximum-likelihood-detection (near-MLD) algorithm was presented for an uplink massive MIMO system with one-bit ADCs, where the authors prove that the proposed algorithm achieves near-MLD performance while reducing the computational complexity compared with existing methods. In [21], supervised-learning techniques are exploited to provide efficient and robust channel estimation and data detection in massive MIMO systems with one-bit ADCs. In [22], conditional adversarial networks were studied for channel estimation in a massive MIMO system with one-bit ADCs. Channel estimation algorithms were developed to exploit the low-rank property of mmWave channels with one-bit ADCs at the receivers [23]; the proposed methods achieve better channel reconstruction than compressed sensing-based techniques aimed at exploiting the sparsity of mmWave channels. In [24], Varasteh et al. considered the zero-delay transmission of a Gaussian source over an AWGN channel with a one-bit ADC front end and correlated side information at the receiver. Numerical results demonstrate the periodicity of the optimized encoder mapping.
Information transmission over broadcast channels is an appealing problem in multi-terminal communications, and numerous recent studies have focused on low/zero-delay transmission in this setting. The asymptotic energy-distortion performance of zero-delay communication was investigated in [25] under the setting of Gaussian broadcasting, where a constant lower bound on the energy-distortion dispersion pair was derived as well. In [26], the authors focused on the optimization of parametric continuous mappings that satisfy individual quality of service requirements. By contrast, in [27], Tian et al. provided a complete characterization of the achievable distortion region for the above problem. In [28], Hassanin et al. proposed a low-complexity, low-delay analog JSCC system based on extensions of nested quantization techniques; in [29], they further presented a procedure for optimizing the decoding functions and analyzed the resulting performance improvements. For the lossy transmission of a Gaussian source over a GBC with correlated side information at the receiver, a practical, low-delay digital scheme was studied in [30]; using layered superposition transmission and successive cancellation, the proposed scheme achieves higher accuracy of source reconstruction than SSCC. In [31], Saleh et al. considered the joint recovery of a bivariate Gaussian source and a common interference over the two-user Gaussian degraded broadcast channel, and studied the tradeoff between the distortion of the sources and the error of the interference estimation.
In this work, considering extremely low delay and low energy consumption requirements, we focus on the zero-delay JSCC communications system for a bivariate Gaussian source over a bandwidth-matched Gaussian broadcast channel with two receivers. Both of the receivers are equipped with a one-bit ADC front end. To the best of our knowledge, there are few works that have investigated this scenario. The main contributions of this work are summarized as follows:
  • Under mean squared error (MSE) distortion criterion, an outer bound on the conditional distortion region is derived.
  • Two types of nonparametric mappings are proposed. The first one is based on the joint optimization of the encoder and decoder under an iterative algorithm. In the second method, the implicit functions for the optimal encoder and decoder are derived. Employing the necessary condition mentioned above, the optimized encoder is obtained using the gradient descent method. To the best of our knowledge, there is no previous work that derives the necessary condition of the optimal encoder for the transmission of correlated Gaussian sources over the broadcast channel with one-bit ADC front ends. Hence, our contribution lies in numerically obtaining an encoder mapping that satisfies the derived necessary condition and revealing its structure in three-dimensional space.
  • Examining the optimized encoder obtained and imitating the property of its structure, we propose a series of parametric function curves applied to the system model. These mappings are easy to implement.
The remainder of the paper is organized as follows. In Section 2, we introduce the system model and explain the problem of interest. Section 3 focuses on the theoretical bounds under the setting of infinite resolution ADC and one-bit ADC front end. In Section 4, the analysis of the necessary conditions of the optimal encoder and decoder for the proposed system model is presented, and the optimized encoder mappings obtained via the aforementioned necessary condition with the use of two different algorithms are discussed. In Section 5, several new parametric mapping structures are presented. In Section 6, numerical results and analyses are provided and Section 7 concludes the paper.
Notation: Throughout the paper, uppercase and lowercase letters denote random variables and their realizations, respectively. $p(\cdot)$ and $\Pr(\cdot)$ represent the probability density function (pdf) and probability, respectively. The standard normal distribution and its pdf are denoted by $\mathcal{N}(0,1)$ and $\phi(\cdot)$, respectively. $Q(\cdot)$ denotes the complementary cumulative distribution function of the standard normal distribution.

2. Problem Formulation

We consider the transmission of correlated Gaussian sources over Gaussian broadcast channels with one-bit ADC at the receivers. The setup is illustrated in Figure 1. Herein X = ( X 1 , X 2 ) denotes a couple of memoryless and stationary bivariate Gaussian sources with zero mean and variance σ X 2 . The covariance matrix of the two sources is presented below,
$$\mathbf{C}_X = \begin{pmatrix} \sigma_X^2 & \rho\,\sigma_X^2 \\ \rho\,\sigma_X^2 & \sigma_X^2 \end{pmatrix}, \tag{1}$$
where ρ [ 0 , 1 ] . The source vector X is transformed into a one-dimensional channel input V with the use of a nonlinear mapping function V = α ( X 1 , X 2 ) . The Gaussian memoryless broadcast channel is given by    
$$Y_i = \alpha(X_1, X_2) + N_i, \qquad i = 1, 2, \tag{2}$$
as shown in Figure 1, where Y i is the channel output for channel i, and N i is the AWGN, independent of X 1 and X 2 for channel i, with zero mean and variance σ n i 2 . Without loss of generality, we assume σ n 1 2 < σ n 2 2 . At the i-th receiver, the noisy signal Y i is quantized with a one-bit ADC, Γ ( . ) . The output of the ADC is
$$Z_i = \Gamma(Y_i) = \begin{cases} 0, & Y_i \ge 0 \\ 1, & Y_i < 0. \end{cases} \tag{3}$$
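As a concrete illustration, the channel (2) and front end (3) can be simulated directly. The sketch below is ours, not part of the paper's experiments; the function names and noise level are illustrative.

```python
import numpy as np

def one_bit_adc(y):
    """Gamma(.) in (3): output 0 when y >= 0 and 1 when y < 0."""
    return (y < 0).astype(int)

def receive(v, sigma_n, rng):
    """One use of the Gaussian channel (2) followed by the one-bit ADC."""
    y = v + sigma_n * rng.standard_normal(np.shape(v))
    return one_bit_adc(y)

rng = np.random.default_rng(0)
v = rng.standard_normal(10_000)          # placeholder channel input
z1 = receive(v, sigma_n=0.75, rng=rng)   # receiver 1
z2 = receive(v, sigma_n=1.0, rng=rng)    # receiver 2 (noisier)
```

Each receiver thus observes only a single bit per channel use, which is what makes the decoders in Section 4 two-point estimators.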
The decoder observing the ADC output reconstructs the source $X_i$ as $\hat{X}_i = \beta_i(Z_i)$, where $\beta_i(\cdot)$ denotes the $i$-th decoder.
In this paper, we assume that the encoding mapping α follows an average power constraint,
$$\mathbb{E}\!\left[\alpha(X_1, X_2)^2\right] \le P. \tag{4}$$
The average MSE distortion measure is used, which is given by
$$\bar{D} = \overline{\mathrm{MSE}} = \frac{1}{2}\sum_{i=1}^{2} D_i = \frac{1}{2}\sum_{i=1}^{2} \mathbb{E}\!\left[(X_i - \hat{X}_i)^2\right]. \tag{5}$$
Our target is to find the optimal source mapping function α and the decoding function β i to minimize the average MSE in (5) subject to the average power constraint in (4).

3. Preliminaries

3.1. The Average Distortion Bound When Infinite Resolution ADC Front Ends Are Adopted

In [27], the authors derived the characterization of the achievable distortion region D ( σ X 2 , ρ , P , σ n 1 2 , σ n 2 2 ) . The minimum and maximum values of D 1 are deduced as follows,
$$D_1^{min} = \frac{\sigma_{n1}^2\,\sigma_X^2}{P + \sigma_{n1}^2}, \qquad D_1^{max} = \sigma_X^2\,\frac{(1-\rho^2)P + \sigma_{n1}^2}{P + \sigma_{n1}^2}. \tag{6}$$
Then, for each D 1 [ D 1 m i n , D 1 m a x ] ,
$$D_2(P, D_1, \sigma_X^2, \rho, \sigma_{n1}^2, \sigma_{n2}^2) = \min_{(D_1, d_2) \in \mathcal{D}(\sigma_X^2, \rho, P, \sigma_{n1}^2, \sigma_{n2}^2)} d_2. \tag{7}$$
Subsequently, the average distortion can be obtained by D ¯ = 1 / 2 ( D 1 + D 2 ) for this distortion pair ( D 1 , D 2 ) . We select the smallest average distortion D ¯ min as the bound of average distortion for the setting of bivariate Gaussian sources over the broadcast channel with infinite resolution ADC front ends.

3.2. The Average Distortion Bound When One-Bit ADC Front Ends Are Adopted

The genie-aided distortion region for the transmission of correlated Gaussian sources over a GBC with one-bit ADC front ends, D c A D C ( σ X 2 , ρ , P , σ n 1 2 , σ n 2 2 ) , consists of all pairs of ( D 1 | 2 A D C , D 2 A D C ) such that
$$D_{1|2}^{ADC} \ge \sigma_X^2\,(1-\rho^2)\, 2^{-2\left[h\left(Q\left(\sqrt{\gamma P/\sigma_{n1}^2}\right)\right) - h\left(Q\left(\sqrt{P/\sigma_{n1}^2}\right)\right)\right]}, \qquad D_2^{ADC} \ge \sigma_X^2\, 2^{-2\left[1 - h\left(Q\left(\sqrt{\gamma P/\sigma_{n2}^2}\right)\right)\right]}, \tag{8}$$
for some $\gamma \in [0, 1]$, where $h(\cdot)$ denotes the binary entropy function. A proof of (8) is given in Appendix A.
In the same way as in Section 3.1, we can obtain the average distortion bound.
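Reading the two expressions in (8) at face value (our reconstruction of the exponents should be checked against Appendix A), the bound can be swept over the genie parameter γ numerically. The grid resolution below is an arbitrary illustrative choice.

```python
import math

def Q(x):
    """Tail probability of the standard normal distribution."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def h(p):
    """Binary entropy function, in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def bound_B(sigma_x2, rho, P, s1_2, s2_2, grid=1000):
    """Smallest average distortion 0.5*(D1 + D2) over the genie-aided
    region (8), swept over gamma in [0, 1]."""
    best = float("inf")
    for k in range(grid + 1):
        g = k / grid
        d1 = sigma_x2 * (1.0 - rho**2) * 2.0 ** (
            -2.0 * (h(Q(math.sqrt(g * P / s1_2))) - h(Q(math.sqrt(P / s1_2)))))
        d2 = sigma_x2 * 2.0 ** (-2.0 * (1.0 - h(Q(math.sqrt(g * P / s2_2)))))
        best = min(best, 0.5 * (d1 + d2))
    return best

b = bound_B(sigma_x2=1.0, rho=0.7, P=1.0, s1_2=0.56, s2_2=1.0)
```

The parameter values in the final call mirror the setting of Figure 8b but are otherwise illustrative.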

4. Nonparametric Mappings

In this section, we develop two types of nonparametric mappings using the Lagrange multiplier method. We study the optimal mapping that minimizes the average distortion subject to the average power constraint.
Using the Lagrange multiplier method, we turn the constrained optimization problem of minimizing (5) subject to (4) into an unconstrained problem by forming the Lagrange cost function,
$$J(\alpha, \beta_1, \beta_2) = \sum_{i=1}^{2} \frac{1}{2}\,\mathbb{E}\!\left[(X_i - \hat{X}_i)^2\right] + \lambda\,\mathbb{E}\!\left[\alpha(X_1, X_2)^2\right]. \tag{9}$$
Therefore, our target turns into minimizing the unconstrained problem as
$$\min_{\alpha, \beta_1, \beta_2} J(\alpha, \beta_1, \beta_2). \tag{10}$$
For a given λ , if the solution of the unconstrained problem (10) satisfies the average power constraint in (4), it is proven that the above solution also solves the constrained problem [32].
Herein, MSE i is expressed as
$$\mathrm{MSE}_i = \iint \Pr(Z_i = 0 \mid X_1 = x_1, X_2 = x_2)\, p(x_1, x_2)\,(x_i - \beta_i(0))^2\, dx_2\, dx_1 + \iint \Pr(Z_i = 1 \mid X_1 = x_1, X_2 = x_2)\, p(x_1, x_2)\,(x_i - \beta_i(1))^2\, dx_2\, dx_1. \tag{11}$$
The actual transmission power is expressed as
$$P_{act} = \iint p(x_1, x_2)\,\alpha(x_1, x_2)^2\, dx_2\, dx_1. \tag{12}$$

4.1. Nonparametric Mapping I

Herein, we proceed in a way similar to the vector quantizer design [33] by formulating the necessary conditions for optimality with the use of the discretization operation. This scheme is based on joint optimization with iteration between the mappings at the transmitter and receiver.
Note that the minimization of (10) is still difficult to achieve owing to the interdependencies between the components to be optimized. Therefore, we bypass this problem by optimizing the problem iteratively, one component at a time, while we keep the other components fixed.
Assuming that the decoders ( β 1 , β 2 ) are fixed, the optimal encoding mapping α can be expressed as   
$$\alpha^{*} = \arg\min_{\alpha} \sum_{i=1}^{2} \frac{1}{2}\,\mathbb{E}\!\left[(X_i - \hat{X}_i)^2\right] + \lambda\,\mathbb{E}\!\left[\alpha(X_1, X_2)^2\right]. \tag{13}$$
Note that, since the joint pdf $p(x_1, x_2)$ in (12) is nonnegative, the minimization in (13) can be performed pointwise for each source pair,
$$\alpha(x_1, x_2) = \arg\min_{v \in \mathbb{R}} \; \frac{1}{2}\sum_{i=1}^{2} \mathrm{MSE}_i + \lambda v^2. \tag{14}$$
Assuming that the encoder α is fixed, the optimal decoder is the minimum MSE (MMSE) estimator of X i given Z i . The MMSE estimation for user i is given by
$$\hat{x}_i = \beta_i(z_i) = \mathbb{E}[X_i \mid z_i] = \frac{\iint x_i\, p(x_1)\,p(x_2 \mid x_1)\,\Pr(Z_i = z_i \mid V = \alpha(x_1, x_2))\, dx_2\, dx_1}{\iint p(x_1)\,p(x_2 \mid x_1)\,\Pr(Z_i = z_i \mid V = \alpha(x_1, x_2))\, dx_2\, dx_1}. \tag{15}$$
The design procedure is given by Algorithm 1. This type of iterative procedure has previously been used in other scenarios [34,35]. It is worth noting that the iterative optimization does not generally guarantee convergence to the globally optimal solution; a good choice of initialization helps avoid poor local minima.
Algorithm 1: Nonparametric Mapping I
  • Data: Initial mapping of α ( x 1 , x 2 ) , the noise for different channels, and δ , which determines the instant at which the iterations will stop.
  • Result: Locally optimized ( α , β 1 , β 2 ) .
For any given λ , using Algorithm 1 above, we obtain a certain encoder mapping α . The value of λ should be increased if the power E [ α ( x 1 , x 2 ) 2 ] exceeds the power constraint P, and vice versa.
For the actual implementation of (14) and (15), we introduce the following modifications and approximations, since it is impossible to evaluate the formulas over the real domain. We generate Monte-Carlo samples from the distribution of X, denoted as the set $\mathcal{X}$. We discretize the channel input by a set $\mathcal{Y}$ with a finite number of modulation points. The maximum/minimum values of the set $\mathcal{Y}$ are $\pm d\frac{L-1}{2}$, where $L$ determines the number of points in the set and $d$ denotes the resolution. As the resolution $d$ becomes smaller and $L$ becomes larger, the set $\mathcal{Y}$ approaches a continuous (analog) channel input.
The discretized version of (14) is given by
$$\alpha(x_1, x_2) = \arg\min_{v \in \mathcal{Y}} \; \sum_{z_1 \in \{0,1\}} \Pr(z_1 \mid v)\,\frac{(x_1 - \beta_1(z_1))^2}{2} + \sum_{z_2 \in \{0,1\}} \Pr(z_2 \mid v)\,\frac{(x_2 - \beta_2(z_2))^2}{2} + \lambda v^2. \tag{16}$$
The discretized version of (15) can be expressed as
$$\hat{x}_1 = \beta_1(z_1) = \frac{\sum_{x_1 \in \mathcal{X}} x_1 \sum_{x_2 \in \mathcal{X}_{2|x_1}} \Pr(x_2 \mid x_1)\,\Pr(z_1 \mid \alpha(x_1, x_2))}{\sum_{x_1 \in \mathcal{X}} \sum_{x_2 \in \mathcal{X}_{2|x_1}} \Pr(x_2 \mid x_1)\,\Pr(z_1 \mid \alpha(x_1, x_2))}, \tag{17}$$
and
$$\hat{x}_2 = \beta_2(z_2) = \frac{\sum_{x_1 \in \mathcal{X}} \sum_{x_2 \in \mathcal{X}_{2|x_1}} x_2\, \Pr(x_2 \mid x_1)\,\Pr(z_2 \mid \alpha(x_1, x_2))}{\sum_{x_1 \in \mathcal{X}} \sum_{x_2 \in \mathcal{X}_{2|x_1}} \Pr(x_2 \mid x_1)\,\Pr(z_2 \mid \alpha(x_1, x_2))}. \tag{18}$$
In our experiment, we use $10^4$ samples to define the set $\mathcal{X}$. We also use $\delta = 10^{-3}$ and keep $d(L-1)/2 = 4$ based on the power constraint at the transmitter. The value of $L$ is chosen from $[1281, 2561]$ depending on the noise variance, taking into account the tradeoff between accuracy and computational cost. Figure 2 shows this tradeoff: the y-axis gives the runtime, in hours, needed to obtain one point of Figure 8b with $\rho$, $\sigma_{n1}^2$, $\sigma_{n2}^2$ fixed.
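A compact Monte-Carlo sketch of this iterative procedure follows. It is a simplified stand-in for Algorithm 1 (far smaller sample and grid sizes than the paper's, a clipped uncoded initialization, and sample-based decoders in place of the discretized sums (17)-(18)); all parameter values are illustrative.

```python
import math
import numpy as np

# Q(.) evaluated elementwise on arrays.
Q = np.vectorize(lambda t: 0.5 * math.erfc(t / math.sqrt(2.0)))

def design_mapping(rho, s1, s2, lam, n=500, L=33, span=4.0, n_iter=5, seed=1):
    """Alternate the decoder updates and the encoder rule (16) on
    Monte-Carlo source samples and a discretized channel-input grid."""
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, rho], [rho, 1.0]])
    x = rng.multivariate_normal([0.0, 0.0], cov, size=n)   # source pairs
    grid = np.linspace(-span, span, L)                     # the set Y
    v = np.clip(x[:, 0], -span, span)                      # init: uncoded X1
    for _ in range(n_iter):
        # Pr(Z_i = 0 | v) = 1 - Q(v / sigma_ni).
        p0 = [1.0 - Q(v / s) for s in (s1, s2)]
        # Decoders: sample-based conditional means of X_i given Z_i.
        beta = [(np.sum(x[:, i] * p0[i]) / np.sum(p0[i]),
                 np.sum(x[:, i] * (1.0 - p0[i])) / np.sum(1.0 - p0[i]))
                for i in range(2)]
        # Encoder: per sample, pick the grid point minimizing the cost (16).
        g0 = [1.0 - Q(grid / s) for s in (s1, s2)]
        cost = lam * grid[None, :] ** 2
        for i in range(2):
            cost = cost + 0.5 * (g0[i][None, :] * (x[:, i:i+1] - beta[i][0]) ** 2
                                 + (1.0 - g0[i])[None, :] * (x[:, i:i+1] - beta[i][1]) ** 2)
        v = grid[np.argmin(cost, axis=1)]
    # Final average-distortion estimate on the samples.
    p0 = [1.0 - Q(v / s) for s in (s1, s2)]
    d = sum(0.5 * np.mean(p0[i] * (x[:, i] - beta[i][0]) ** 2
                          + (1.0 - p0[i]) * (x[:, i] - beta[i][1]) ** 2)
            for i in range(2))
    return v, d

v_opt, d_bar = design_mapping(rho=0.7, s1=0.75, s2=1.0, lam=0.01)
```

In the full algorithm, the stopping rule compares successive cost values against δ rather than running a fixed number of iterations.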

4.2. Nonparametric Mapping II

In the following subsection, we study the functional properties of the unconstrained problem. We obtain an implicit equation for the optimal encoder mapping. Subsequently, we derive the optimal mappings with the necessary conditions above via gradient descent search.
Our system model is symmetric with respect to the one-bit ADC output, owing to the symmetry of the source and noise probability densities. We derive below the optimal decoder for channel 1 with ADC outputs 0 and 1, respectively.
$$\hat{X}_1^0 = \mathbb{E}[X_1 \mid Z_1 = 0] = \frac{\iint x_1\, p(x_1, x_2)\,\Pr(Z_1 = 0 \mid V = \alpha(x_1, x_2))\, dx_2\, dx_1}{\Pr(Z_1 = 0)} \tag{19a}$$
$$= \frac{\iint x_1\, p(x_1, x_2)\left[1 - Q\!\left(\frac{\alpha(x_1, x_2)}{\sigma_{n1}}\right)\right] dx_2\, dx_1}{\Pr(Z_1 = 0)} = \frac{\iint x_1\, p(x_1, x_2)\, Q\!\left(\frac{-\alpha(x_1, x_2)}{\sigma_{n1}}\right) dx_2\, dx_1}{\Pr(Z_1 = 0)}. \tag{19b}$$
We elaborate on (19a) as follows. $Z_1 = 0$ means that $Y_1 \ge 0$, and hence $N_1 \ge -\alpha(x_1, x_2)$. Then we have
$$\Pr(Z_1 = 0 \mid \alpha(x_1, x_2)) = \Pr(N_1 \ge -\alpha(x_1, x_2)) = \frac{1}{\sqrt{2\pi}\,\sigma_{n1}}\int_{-\alpha(x_1, x_2)}^{\infty} e^{-\frac{x^2}{2\sigma_{n1}^2}}\, dx = Q\!\left(\frac{-\alpha(x_1, x_2)}{\sigma_{n1}}\right) = 1 - Q\!\left(\frac{\alpha(x_1, x_2)}{\sigma_{n1}}\right). \tag{20}$$
In the same way, we could obtain X ^ 1 1 as
$$\hat{X}_1^1 = \mathbb{E}[X_1 \mid Z_1 = 1] = \frac{\iint x_1\, p(x_1, x_2)\,\Pr(Z_1 = 1 \mid V = \alpha(x_1, x_2))\, dx_2\, dx_1}{\Pr(Z_1 = 1)} = \frac{\iint x_1\, p(x_1, x_2)\, Q\!\left(\frac{\alpha(x_1, x_2)}{\sigma_{n1}}\right) dx_2\, dx_1}{\Pr(Z_1 = 1)}. \tag{21}$$
Based on the results above, we can derive the following relationship,
$$\hat{X}_1^0 = -\hat{X}_1^1, \tag{22}$$
$$\hat{X}_2^0 = -\hat{X}_2^1. \tag{23}$$
Herein, X ^ j i denotes the estimation when the ADC output is i for channel j, where i = { 0 , 1 } and j = { 1 , 2 } .
The overall average distortion is given by
$$\begin{align}
\bar{D} &= \frac{1}{2}\left(\mathbb{E}\!\left[(X_1 - \hat{X}_1)^2\right] + \mathbb{E}\!\left[(X_2 - \hat{X}_2)^2\right]\right) = \frac{1}{2}\left(\mathbb{E}\!\left[(X_1 - \hat{X}_1)\tilde{X}_1\right] + \mathbb{E}\!\left[(X_2 - \hat{X}_2)\tilde{X}_2\right]\right) \tag{24a}\\
&= \frac{1}{2}\left(\mathbb{E}[X_1 \tilde{X}_1] + \mathbb{E}[X_2 \tilde{X}_2]\right) = \frac{1}{2}\left(\mathbb{E}[X_1^2] - \mathbb{E}[X_1 \hat{X}_1] + \mathbb{E}[X_2^2] - \mathbb{E}[X_2 \hat{X}_2]\right) \tag{24b}\\
&= \frac{1}{2}\left(\sigma_{X_1}^2 + \sigma_{X_2}^2 - \mathbb{E}[X_1 \hat{X}_1] - \mathbb{E}[X_2 \hat{X}_2]\right) < \frac{1}{2}\left(\sigma_{X_1}^2 + \sigma_{X_2}^2\right), \tag{24c}
\end{align}$$
where (24a) follows from the orthogonality principle of the MMSE estimation. $X_i$ denotes the source, while $\hat{X}_i$ and $\tilde{X}_i = X_i - \hat{X}_i$ denote the estimate and the estimation error, respectively.
Note that under the MSE distortion criterion, the optimal decoder is the MMSE estimator. The estimation of source X i , for example, X ^ 1 is obtained as follows,
$$\hat{X}_1 = \beta_1(z_1) = \mathbb{E}[X_1 \mid Z_1 = z_1] = \frac{\iint x_1\, p(x_1, x_2)\,\Pr(Z_1 = z_1 \mid X_1 = x_1, X_2 = x_2)\, dx_2\, dx_1}{\iint p(x_1, x_2)\,\Pr(Z_1 = z_1 \mid X_1 = x_1, X_2 = x_2)\, dx_2\, dx_1} = \frac{\iint x_1\, p(x_1, x_2)\, Q\!\left((-1)^{z_1+1}\frac{\alpha(x_1, x_2)}{\sigma_{n1}}\right) dx_2\, dx_1}{\iint p(x_1, x_2)\, Q\!\left((-1)^{z_1+1}\frac{\alpha(x_1, x_2)}{\sigma_{n1}}\right) dx_2\, dx_1}, \tag{25}$$
where (25) follows from the fact that $\Pr(Z_1 = 0 \mid V = \alpha(x_1, x_2)) = Q\!\left(\frac{-\alpha(x_1, x_2)}{\sigma_{n1}}\right)$ while $\Pr(Z_1 = 1 \mid V = \alpha(x_1, x_2)) = Q\!\left(\frac{\alpha(x_1, x_2)}{\sigma_{n1}}\right)$; see (20).
In a similar way, X ^ 2 is obtained as,
$$\hat{X}_2 = \beta_2(z_2) = \frac{\iint x_2\, p(x_1, x_2)\, Q\!\left((-1)^{z_2+1}\frac{\alpha(x_1, x_2)}{\sigma_{n2}}\right) dx_1\, dx_2}{\iint p(x_1, x_2)\, Q\!\left((-1)^{z_2+1}\frac{\alpha(x_1, x_2)}{\sigma_{n2}}\right) dx_1\, dx_2}. \tag{26}$$
Herein, z 1 , z 2 { 0 , 1 } . Furthermore, we also notice that the estimation X ^ i is constant once z i is determined.
Owing to the orthogonality principle of the MMSE estimation, we can verify that $D_i = \sigma_{x_i}^2 - \mathbb{E}[X_i \hat{X}_i]$. Accordingly, we can rewrite the Lagrangian cost function and drop the constants that are independent of $\alpha$,
$$L(\alpha) = -\frac{1}{2}\,\mathbb{E}[X_1 \hat{X}_1] - \frac{1}{2}\,\mathbb{E}[X_2 \hat{X}_2] + \lambda\,\mathbb{E}\!\left[\alpha(X_1, X_2)^2\right]. \tag{27}$$
Herein, we reemphasize that $\phi(\cdot)$ denotes the pdf of the standard normal distribution and $\phi(\cdot,\cdot)$ the standard bivariate normal pdf, while $Q(\cdot)$ denotes the complementary cumulative distribution function of the standard normal distribution.
By expanding (27), we proceed with the following process,
$$\begin{align}
& -\frac{1}{2}\,\mathbb{E}[X_1 \hat{X}_1] - \frac{1}{2}\,\mathbb{E}[X_2 \hat{X}_2] + \lambda\,\mathbb{E}\big[\alpha(X_1, X_2)^2\big] \notag\\
&= -\sum_{i=1}^{2}\frac{1}{2\sigma_{x_1}\sigma_{x_2}\sigma_{ni}}\bigg(\iint\!\int_{-\infty}^{-\alpha(x_1,x_2)/\sigma_{ni}} x_i\,\beta_i(1)\,\phi\Big(\frac{x_1}{\sigma_{x_1}},\frac{x_2}{\sigma_{x_2}}\Big)\phi\Big(\frac{n_i}{\sigma_{ni}}\Big)\,dn_i\,dx_1\,dx_2 \notag\\
&\qquad + \iint\!\int_{-\alpha(x_1,x_2)/\sigma_{ni}}^{\infty} x_i\,\beta_i(0)\,\phi\Big(\frac{x_1}{\sigma_{x_1}},\frac{x_2}{\sigma_{x_2}}\Big)\phi\Big(\frac{n_i}{\sigma_{ni}}\Big)\,dn_i\,dx_1\,dx_2\bigg) + \frac{\lambda}{\sigma_{x_1}\sigma_{x_2}}\iint \phi\Big(\frac{x_1}{\sigma_{x_1}},\frac{x_2}{\sigma_{x_2}}\Big)\,\alpha^2(x_1,x_2)\,dx_2\,dx_1 \tag{28a}\\
&= -\frac{1}{2\sigma_{x_1}\sigma_{x_2}}\iint x_1\Big[\beta_1(1)\,Q\Big(\frac{\alpha(x_1,x_2)}{\sigma_{n1}}\Big) + \beta_1(0)\,Q\Big(\frac{-\alpha(x_1,x_2)}{\sigma_{n1}}\Big)\Big]\,\phi\Big(\frac{x_1}{\sigma_{x_1}},\frac{x_2}{\sigma_{x_2}}\Big)\,dx_2\,dx_1 \notag\\
&\qquad - \frac{1}{2\sigma_{x_1}\sigma_{x_2}}\iint x_2\Big[\beta_2(1)\,Q\Big(\frac{\alpha(x_1,x_2)}{\sigma_{n2}}\Big) + \beta_2(0)\,Q\Big(\frac{-\alpha(x_1,x_2)}{\sigma_{n2}}\Big)\Big]\,\phi\Big(\frac{x_1}{\sigma_{x_1}},\frac{x_2}{\sigma_{x_2}}\Big)\,dx_2\,dx_1 + \frac{\lambda}{\sigma_{x_1}\sigma_{x_2}}\iint \phi\Big(\frac{x_1}{\sigma_{x_1}},\frac{x_2}{\sigma_{x_2}}\Big)\,\alpha^2(x_1,x_2)\,dx_2\,dx_1 \tag{28b}\\
&= \frac{1}{\sigma_{x_1}\sigma_{x_2}}\iint \Big[-\frac{x_1}{2}\Big(\beta_1(1)\,Q\Big(\frac{\alpha(x_1,x_2)}{\sigma_{n1}}\Big) + \beta_1(0)\,Q\Big(\frac{-\alpha(x_1,x_2)}{\sigma_{n1}}\Big)\Big) - \frac{x_2}{2}\Big(\beta_2(1)\,Q\Big(\frac{\alpha(x_1,x_2)}{\sigma_{n2}}\Big) + \beta_2(0)\,Q\Big(\frac{-\alpha(x_1,x_2)}{\sigma_{n2}}\Big)\Big) \tag{28c}\\
&\qquad + \lambda\,\alpha^2(x_1,x_2)\Big]\,\phi\Big(\frac{x_1}{\sigma_{x_1}},\frac{x_2}{\sigma_{x_2}}\Big)\,dx_2\,dx_1. \tag{28d}
\end{align}$$
Given that $\hat{X}_i$ is a discrete random variable taking the two values $\beta_i(0)$ and $\beta_i(1)$, the first part of (28a) holds since
$$\begin{align}
\mathbb{E}[X_i \hat{X}_i] &= \int x_i\,\beta_i(0)\,\Pr\big(\hat{X}_i = \beta_i(0) \mid X_i = x_i\big)\, p(x_i)\, dx_i + \int x_i\,\beta_i(1)\,\Pr\big(\hat{X}_i = \beta_i(1) \mid X_i = x_i\big)\, p(x_i)\, dx_i \notag\\
&= \int x_i\,\beta_i(0)\,\Pr(Z_i = 0 \mid X_i = x_i)\, p(x_i)\, dx_i + \int x_i\,\beta_i(1)\,\Pr(Z_i = 1 \mid X_i = x_i)\, p(x_i)\, dx_i \notag\\
&= \frac{1}{\sigma_{x_1}\sigma_{x_2}\sigma_{ni}}\bigg(\iint\!\int_{-\alpha(x_1,x_2)/\sigma_{ni}}^{\infty} x_i\,\beta_i(0)\,\phi\Big(\frac{x_1}{\sigma_{x_1}},\frac{x_2}{\sigma_{x_2}}\Big)\phi\Big(\frac{n_i}{\sigma_{ni}}\Big)\,dn_i\,dx_1\,dx_2 + \iint\!\int_{-\infty}^{-\alpha(x_1,x_2)/\sigma_{ni}} x_i\,\beta_i(1)\,\phi\Big(\frac{x_1}{\sigma_{x_1}},\frac{x_2}{\sigma_{x_2}}\Big)\phi\Big(\frac{n_i}{\sigma_{ni}}\Big)\,dn_i\,dx_1\,dx_2\bigg), \notag
\end{align}$$
where $i^c = \{1, 2\} \setminus \{i\}$.
Equation (28b) holds due to the following fact that
$$\frac{1}{\sigma_{ni}}\int_{-\infty}^{-\alpha(x_1,x_2)/\sigma_{ni}} \beta_i(1)\,\phi\Big(\frac{n_i}{\sigma_{ni}}\Big)\, dn_i + \frac{1}{\sigma_{ni}}\int_{-\alpha(x_1,x_2)/\sigma_{ni}}^{\infty} \beta_i(0)\,\phi\Big(\frac{n_i}{\sigma_{ni}}\Big)\, dn_i = \beta_i(1)\,Q\!\left(\frac{\alpha(x_1,x_2)}{\sigma_{ni}}\right) + \beta_i(0)\,Q\!\left(\frac{-\alpha(x_1,x_2)}{\sigma_{ni}}\right). \tag{29}$$
Define F ( α ( x 1 , x 2 ) , x 1 , x 2 ) as:
$$F(\alpha(x_1,x_2), x_1, x_2) = \frac{1}{\sigma_{x_1}\sigma_{x_2}}\left(-\frac{x_1}{2}\left[\beta_1(1)\,Q\!\left(\frac{\alpha(x_1,x_2)}{\sigma_{n1}}\right) + \beta_1(0)\,Q\!\left(\frac{-\alpha(x_1,x_2)}{\sigma_{n1}}\right)\right] - \frac{x_2}{2}\left[\beta_2(1)\,Q\!\left(\frac{\alpha(x_1,x_2)}{\sigma_{n2}}\right) + \beta_2(0)\,Q\!\left(\frac{-\alpha(x_1,x_2)}{\sigma_{n2}}\right)\right] + \lambda\,\alpha^2(x_1,x_2)\right). \tag{30}$$
Then, substituting $F(\alpha(x_1,x_2), x_1, x_2)$ into (28c), the minimization can be written as
$$\min_{\alpha} L(\alpha) = \min_{\alpha} \iint F(\alpha(x_1,x_2), x_1, x_2)\,\phi\!\left(\frac{x_1}{\sigma_{x_1}},\frac{x_2}{\sigma_{x_2}}\right) dx_2\, dx_1.$$
According to the conclusion in Section 7.5 of [36], when the partial derivative of the function F with respect to α , denoted by F α ( α ( x 1 , x 2 ) , x 1 , x 2 ) is equal to 0, the function L ( α ) reaches the minimum. The partial derivative of the function F with respect to α is obtained as follows,
$$F_{\alpha}(\alpha(x_1,x_2), x_1, x_2) = \frac{1}{\sigma_{x_1}\sigma_{x_2}}\left(-\frac{x_1}{2}\big[\beta_1(0) - \beta_1(1)\big]\frac{1}{\sqrt{2\pi}\,\sigma_{n1}}\, e^{-\frac{\alpha(x_1,x_2)^2}{2\sigma_{n1}^2}} - \frac{x_2}{2}\big[\beta_2(0) - \beta_2(1)\big]\frac{1}{\sqrt{2\pi}\,\sigma_{n2}}\, e^{-\frac{\alpha(x_1,x_2)^2}{2\sigma_{n2}^2}} + 2\lambda\,\alpha(x_1,x_2)\right). \tag{31}$$
After the deformation of (31), the optimal encoder mapping α subject to the MSE distortion criterion must satisfy the implicit equation as below,
$$4\sqrt{2\pi}\,\lambda\,\alpha(x_1,x_2) = \frac{x_1}{\sigma_{n1}}\, e^{-\frac{\alpha(x_1,x_2)^2}{2\sigma_{n1}^2}}\big[\beta_1(0) - \beta_1(1)\big] + \frac{x_2}{\sigma_{n2}}\, e^{-\frac{\alpha(x_1,x_2)^2}{2\sigma_{n2}^2}}\big[\beta_2(0) - \beta_2(1)\big]. \tag{32}$$
To find the optimal encoder mapping, we perform the steepest descent search in the opposite direction of the functional derivative of the Lagrangian with respect to the encoder mapping α ( x 1 , x 2 ) as,
$$\alpha_{i+1}(x_1,x_2) = \alpha_i(x_1,x_2) - \mu\,\nabla_{\alpha} L\big(\alpha_i(x_1,x_2)\big), \tag{33}$$
where i is the iteration index, and μ > 0 is the step size.
Hereafter, the gradient of the Lagrangian function L ( α ) over α is denoted as
$$\nabla_{\alpha} L = 4\sqrt{2\pi}\,\lambda\,\alpha(x_1,x_2) - \frac{x_1}{\sigma_{n1}}\, e^{-\frac{\alpha^2(x_1,x_2)}{2\sigma_{n1}^2}}\big[\beta_1(0) - \beta_1(1)\big] - \frac{x_2}{\sigma_{n2}}\, e^{-\frac{\alpha^2(x_1,x_2)}{2\sigma_{n2}^2}}\big[\beta_2(0) - \beta_2(1)\big]. \tag{34}$$
The overall design procedure for gradient descent search is given by Algorithm 2.
Algorithm 2: Nonparametric Mapping II
Data: Initial mapping of α ( x 1 , x 2 ) , and the noise for different channels.
Result: Locally optimized ( α , β 1 , β 2 ) .
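A minimal sketch of the descent loop (33)-(34) follows; step size, sample count, and iteration budget are illustrative choices, and the decoders are evaluated on Monte-Carlo samples rather than by exact integration.

```python
import math
import numpy as np

# Q(.) evaluated elementwise on arrays.
Q = np.vectorize(lambda t: 0.5 * math.erfc(t / math.sqrt(2.0)))

def gradient_design(rho, s1, s2, lam, mu=0.05, n=1000, n_iter=50, seed=2):
    """Alternate sample-based MMSE decoders with the steepest-descent
    encoder update (33), using the gradient expression (34)."""
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, rho], [rho, 1.0]])
    x = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    a = x[:, 0].copy()                                   # init: uncoded X1
    for _ in range(n_iter):
        p0 = [1.0 - Q(a / s) for s in (s1, s2)]          # Pr(Z_i = 0 | a)
        beta = [(np.sum(x[:, i] * p0[i]) / np.sum(p0[i]),
                 np.sum(x[:, i] * (1.0 - p0[i])) / np.sum(1.0 - p0[i]))
                for i in range(2)]
        grad = 4.0 * math.sqrt(2.0 * math.pi) * lam * a  # first term of (34)
        for i, s in enumerate((s1, s2)):
            grad -= (x[:, i] / s) * np.exp(-a**2 / (2.0 * s**2)) * (beta[i][0] - beta[i][1])
        a = a - mu * grad                                # update (33)
    return a

a_opt = gradient_design(rho=0.7, s1=0.75, s2=1.0, lam=0.05)
```

As with Algorithm 1, convergence is only to a local optimum, so the result depends on the initialization of α.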

5. Parametric Mappings

Compared with nonparametric mappings, parametric mappings have obvious advantages in terms of lower computational cost and fixed functional structures. Moreover, they can be adapted to variations in the signal properties and channel conditions by adjusting their parameters.
Figure 3 and Figure 4 show plots of the optimized encoder mapping with Algorithms 1 and 2, respectively. Herein and in the sequel, we define the CSNR as
$$\mathrm{CSNR} = 10\log_{10}\!\left(\frac{N P}{\sum_{i=1}^{N} \sigma_{ni}^2}\right). \tag{35}$$
Although the structures of the two nonparametric mappings are not exactly the same, we can still summarize some common characteristics. Two flat layers exist in both nonparametric mappings, and different degrees of deformation can be observed in the middle parts of the two mapping surfaces. Fixing $X_1 = X_2$, the curve of $\alpha(X_1, X_2)$ with respect to $X_1$ is shown in Figure 3c and Figure 4c. The shapes of the two nonparametric mappings obtained above inspire us to propose several different parametric encoding schemes.
After examining the nonparametric mappings mentioned above for different CSNRs and the correlation coefficient ρ , we also notice that due to the symmetry of the system, if ρ = 1 and σ n 1 = σ n 2 , the problem studied becomes the point-to-point problem presented in [37]. When it comes to the case of ρ = 1 , σ n 1 = σ n 2 together with the infinite resolution front end, the problem reduces to the one in [38].

5.1. Linear Transmission

In [8], the linear scheme for the transmission of bivariate Gaussian sources over a GBC is proposed. The encoder mapping for the linear transmission is given by
$$V = \sqrt{\frac{P}{\sigma_X^2\,(\omega^2 + \zeta^2 + 2\omega\zeta\rho)}}\;(\omega X_1 + \zeta X_2), \tag{36}$$
where ω [ 0 , 1 ] , and ζ = 1 ω .
A closed-form expression for the average distortion $\bar{D}$ of the linear scheme is difficult to obtain; instead, we substitute (36) into (15) and evaluate $\bar{D}$ numerically.
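This numerical evaluation can be sketched by Monte Carlo: generate source pairs, apply (36), and use sample-based versions of the MMSE decoders (15). The parameter values in the final call echo the setting of Figure 8b but are otherwise illustrative.

```python
import math
import numpy as np

# Q(.) evaluated elementwise on arrays.
Q = np.vectorize(lambda t: 0.5 * math.erfc(t / math.sqrt(2.0)))

def linear_encoder(x1, x2, P, rho, omega, sigma_x2=1.0):
    """Linear mapping (36), with zeta = 1 - omega."""
    zeta = 1.0 - omega
    scale = math.sqrt(P / (sigma_x2 * (omega**2 + zeta**2 + 2.0 * omega * zeta * rho)))
    return scale * (omega * x1 + zeta * x2)

def linear_avg_distortion(P, rho, s1, s2, omega, n=20000, seed=3):
    """Monte-Carlo estimate of D-bar for the linear scheme with one-bit
    ADC front ends and sample-based MMSE decoders."""
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, rho], [rho, 1.0]])
    x = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    v = linear_encoder(x[:, 0], x[:, 1], P, rho, omega)
    d = 0.0
    for i, s in enumerate((s1, s2)):
        p0 = 1.0 - Q(v / s)                          # Pr(Z_i = 0 | v)
        for w in (p0, 1.0 - p0):
            beta = np.sum(x[:, i] * w) / np.sum(w)   # decoder output beta_i(z)
            d += 0.5 * np.mean(w * (x[:, i] - beta) ** 2)
    return d

d_lin = linear_avg_distortion(P=1.0, rho=0.7, s1=math.sqrt(0.56), s2=1.0, omega=0.5)
```

Sweeping ω over [0, 1] and keeping the smallest estimate gives the linear-scheme curve used for comparison.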

5.2. Sigmoid-like Function

From Figure 3 and Figure 4, we can observe that there exists a flat platform in the optimized encoding mapping. This feature is similar to the sigmoid function to some extent. Therefore, we propose to adopt the sigmoid-like function, which is defined as
$$\alpha(x_1, x_2) = \frac{1}{1 + e^{-(a_1 x_1 + a_2 x_2)}} - \frac{1}{2}, \tag{37}$$
where a 1 and a 2 jointly control the offset angle of the mapping on the X-Y plane and the extension of the mapping surface.
The parameters are optimized by an exhaustive (Monte-Carlo) search over the space of $(a_1, a_2)$ in (37), jointly determining the values that maximize the SDR.
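A sketch of the mapping (37) with a brute-force parameter search follows. The power-normalization step and the candidate grid are our assumptions; the paper's exact search procedure may differ.

```python
import math
import numpy as np

# Q(.) evaluated elementwise on arrays.
Q = np.vectorize(lambda t: 0.5 * math.erfc(t / math.sqrt(2.0)))

def sigmoid_map(x1, x2, a1, a2):
    """Sigmoid-like encoder (37)."""
    return 1.0 / (1.0 + np.exp(-(a1 * x1 + a2 * x2))) - 0.5

def search_sigmoid(P, rho, s1, s2, grid, n=2000, seed=4):
    """Exhaustive search over (a1, a2); each candidate mapping is scaled
    to meet the power budget P before its distortion is evaluated."""
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, rho], [rho, 1.0]])
    x = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    best_d, best_a = float("inf"), None
    for a1 in grid:
        for a2 in grid:
            v = sigmoid_map(x[:, 0], x[:, 1], a1, a2)
            v = v * math.sqrt(P / np.mean(v ** 2))   # enforce constraint (4)
            d = 0.0
            for i, s in enumerate((s1, s2)):
                p0 = 1.0 - Q(v / s)
                for w in (p0, 1.0 - p0):
                    beta = np.sum(x[:, i] * w) / np.sum(w)
                    d += 0.5 * np.mean(w * (x[:, i] - beta) ** 2)
            if d < best_d:
                best_d, best_a = d, (a1, a2)
    return best_d, best_a

best_d, best_a = search_sigmoid(1.0, 0.7, math.sqrt(0.56), 1.0, grid=[0.5, 1.0, 2.0])
```

Minimizing the average distortion here is equivalent to maximizing the SDR, since the source variance is fixed.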

5.3. Sinh-like Function

The parametric sine-like mapping in [26] is proposed to satisfy individual quality of service requirements in Gaussian broadcast channels. We adapt the parametric curve structure in our setting and propose the new mapping as indicated below,
$$c_{b_1,b_2}(t) = U\,\Sigma^{1/2}\, s(t), \tag{38}$$
$$s(t) = \big[s_x(t)\;\; s_y(t)\big]^{T} = \begin{bmatrix} \frac{t}{2}\,\sinh(b_1 t) \\ b_2\,\sinh(b_1 t) \end{bmatrix}, \tag{39}$$
where $\mathbf{C}_X = U \Sigma U^H$ is the eigendecomposition of the source covariance matrix, with $U$ the matrix whose columns are the eigenvectors, and $\Sigma = \mathrm{diag}\{\eta_1, \eta_2\}$, $\eta_1 > \eta_2$.
The optimization of b 1 and b 2 is achieved by exhaustively searching the parameter space.
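The eigendecomposition step of (38) can be sketched as follows. The sinh profile passed in at the end is an illustrative stand-in: the exact component functions of (39) are garbled in our copy of the text and should be checked against [26].

```python
import numpy as np

def source_eigen(rho, sigma_x2=1.0):
    """Eigendecomposition C_X = U diag(eta1, eta2) U^T of the source
    covariance (1), sorted so that eta1 > eta2."""
    C = sigma_x2 * np.array([[1.0, rho], [rho, 1.0]])
    eta, U = np.linalg.eigh(C)               # eigenvalues in ascending order
    order = np.argsort(eta)[::-1]            # re-sort descending
    return U[:, order], np.diag(eta[order])

def curve(t, s_fn, rho):
    """c(t) = U Sigma^{1/2} s(t) as in (38), for any 2-D profile s_fn."""
    U, Sigma = source_eigen(rho)
    return U @ np.sqrt(Sigma) @ s_fn(t)

# Illustrative sinh-type profile standing in for (39).
t = np.linspace(-2.0, 2.0, 101)
pts = curve(t, lambda t: np.stack([t, 0.3 * np.sinh(1.5 * t)]), rho=0.7)
```

Rotating the curve into the eigenbasis aligns its long axis with the principal direction of the correlated source, which is the design rationale behind (38).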

5.4. Shannon-Kotel’nikov-like Function

Shannon-Kotel’nikov (S-K) mappings are studied in previous works such as [39,40]. For the 2:1 bandwidth reduction scenario, the spiral curve is given by
$$s(t) = \pm\frac{\Delta}{\pi}\big(\cos(\varphi(ct))\,\mathbf{i} + \sin(\varphi(ct))\,\mathbf{j}\big), \tag{40}$$
where
$$\varphi(ct) = \frac{c\,|t|}{\Delta^{\eta}}.$$
We modify the S-K mapping function so that the pitch of the spiral varies; in other words, the (radial) distance between the two spiral arms varies along the curve instead of remaining constant as in previous works.
$$s(t) = \pm\frac{t}{2\pi}\big(\cos(c\,|t|)\,\mathbf{i} + \sin(c\,|t|)\,\mathbf{j}\big). \tag{41}$$
The optimization of c in (41) is achieved by exhaustively searching the parameter space as well.
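A sketch of the modified spiral follows. The 1/(2π) radial scaling is our reading of (41), and the nearest-point projection used for encoding is a common implementation step for S-K mappings [39,40] rather than a procedure stated in this paper.

```python
import numpy as np

def sk_spiral(t, c):
    """Modified spiral (41): the radius grows linearly with |t|, so the
    pitch between the two arms varies along the curve."""
    r = np.abs(t) / (2.0 * np.pi)
    return np.sign(t) * np.stack([r * np.cos(c * np.abs(t)),
                                  r * np.sin(c * np.abs(t))])

def project(x, c, t_grid):
    """2:1 encoding: map each source pair to the parameter of the nearest
    point on the spiral (nearest-neighbour projection)."""
    pts = sk_spiral(t_grid, c)                                   # (2, K)
    d2 = ((x[:, :, None] - pts[None, :, :]) ** 2).sum(axis=1)    # (n, K)
    return t_grid[np.argmin(d2, axis=1)]

t_grid = np.linspace(-20.0, 20.0, 2001)
x = np.array([[0.3, -0.4], [1.2, 0.8]])       # example source pairs
t_hat = project(x, c=2.0, t_grid=t_grid)      # transmitted parameters
```

The transmitted value t_hat would then be power-normalized before entering the channel, as with the other parametric mappings.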
The curved surfaces of the sigmoid-like function, sinh-like function, S-K-like function, and uncoded scheme are depicted in Figure 5a–d, respectively. Their corresponding two-dimensional representations are depicted in Figure 6a–d, respectively. Fixing $X_1 = X_2$, the curves of $\alpha(X_1, X_2)$ with respect to $X_1$ are shown in Figure 7a–d.

6. Numerical Results

In this section, we present the performance of the nonparametric and parametric mappings introduced in the previous sections and validate their effectiveness. In the following experiments, the overall MSE is defined as D ¯ = 1 2 ( D 1 + D 2 ) , and the signal-to-distortion ratio (SDR) is defined as 10 log 10 ( σ X 2 / D ¯ ) . The average distortion bound is denoted as bound A when infinite resolution ADC front ends are adopted, and as bound B when one-bit ADC front ends are adopted.
According to (35), we vary CSNR either by fixing the channel noise and changing the transmitting power, or vice versa. In the following experiments, σ X is set to 1 without loss of generality, since other values of σ X can be handled by normalization.
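The two metrics used throughout this section can be computed directly from the definitions above; a minimal sketch (with σ X 2 = 1 by default):

```python
import numpy as np

def sdr_db(d1, d2, sigma_x2=1.0):
    """Signal-to-distortion ratio 10*log10(sigma_X^2 / D_bar),
    with the overall MSE D_bar = (D1 + D2) / 2 as defined in the text."""
    d_bar = 0.5 * (d1 + d2)
    return 10 * np.log10(sigma_x2 / d_bar)

def csnr_db(power, sigma_n2):
    """Channel SNR in dB; varied by fixing the noise variance and
    sweeping the transmit power, or vice versa."""
    return 10 * np.log10(power / sigma_n2)
```

For example, distortions D 1 = D 2 = 0.1 with a unit-variance source give an SDR of 10 dB.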
Under the average distortion criterion, we compare the parametric mappings of Section 5 with two state-of-the-art parametric mappings proposed for the broadcast channel with infinite resolution ADC front ends, the sine-like curve [26] and the alternating sign-scalar quantizer linear coder (AS-SQLC) [28], for different values of CSNR, as shown in Figure 8a. The sigmoid-like mapping is superior to all the other parametric schemes. Moreover, with the exception of the uncoded transmission scheme, the proposed parametric schemes, inspired by the functional properties of the optimal encoder, yield better performance than both the AS-SQLC and the sine-like schemes.
In Figure 8b, we compare the sigmoid-like function (37) and the two nonparametric mappings with the conditional outer bound under one-bit ADC and the outer bound in the case of an infinite resolution ADC front end. Figure 8b shows the performance curves of the proposed parametric sigmoid-like function and two nonparametric mappings in terms of SDR versus CSNR with correlation coefficient ρ = 0.7 . Herein, to vary CSNR, the values of σ n 1 2 and σ n 2 2 in (35) are fixed to 0.56 and 1, respectively, while the average transmitting power is varied. The bound for the scenario with one-bit ADC front end and the bound for the scenario with an infinite resolution front end are indicated in this figure with purple squares and blue circles, respectively.
We observe that as CSNR increases, the bound for an infinite resolution front end pulls increasingly ahead of the bound under the one-bit ADC front end. The two nonparametric mappings outperform the parametric sigmoid-like mapping, with nonparametric mapping I ahead of nonparametric mapping II. Meanwhile, the performances of the two nonparametric mappings approach the bound under the one-bit ADC front end.
We also compare the average distortions of the relevant schemes as CSNR increases for ρ = 0.6 and ρ = 0.2 in Figure 9 and Figure 10, respectively. Similarly, the sigmoid-like mapping performs best among all parametric mappings, while nonparametric mapping I performs better than nonparametric mapping II.
In Figure 11 and Figure 12, we plot the SDR versus the correlation coefficient ρ when CSNR equals 1.8 and 11.8 dB, respectively. Herein, the transmitting power is kept at P = 1 while the channel noise is changed, with σ n 1 2 = 0.32 and σ n 2 2 = 1 in Figure 11, and σ n 1 2 = 0.032 and σ n 2 2 = 0.1 in Figure 12.
When CSNR is relatively low (e.g., 1.8 dB), as shown in Figure 11, the sigmoid-like mapping outperforms all the other parametric mappings at every correlation coefficient value, while the sinh mapping and the S-K-like mapping are both superior to the remaining parametric ones. As the correlation coefficient ρ increases, the uncoded scheme falls behind the AS-SQLC and sine-like schemes.
When CSNR increases to 11.8 dB, as shown in Figure 12, the sigmoid-like mapping still yields the best performance among all the parametric mappings, and remains inferior to the nonparametric ones, though the gap shrinks as the correlation coefficient ρ increases. For large values of ρ, the performances of the AS-SQLC and sine-like schemes become closer to those of the proposed parametric mappings, while the uncoded scheme gradually falls behind the AS-SQLC and sine-like schemes.
As observed in the above figures, the proposed sigmoid-like mapping always outperforms the AS-SQLC and sine-like mappings, which were designed specifically for a broadcast channel with an infinite resolution ADC front end. As the correlation coefficient ρ decreases, two gaps widen: the gap between the nonparametric mappings and the parametric ones, and the gap between the parametric mappings proposed in this work and the AS-SQLC and sine-like schemes.
Note that nonparametric mapping I holds a slight performance lead over nonparametric mapping II. This is because Algorithm 1 has a higher degree of freedom in placing points in the channel space than Algorithm 2; the gain comes at the expense of higher computational cost.
As CSNR increases, the parametric sigmoid-like mapping approaches the two nonparametric mappings more closely, indicating less gain from the nonparametric algorithms. We attribute this to the fact that as the channel condition improves, the influence of the one-bit ADC front end grows and becomes harder for the nonparametric mapping algorithms to compensate. In low-CSNR cases, where the channel noise outweighs the impact of the one-bit ADC front end, the gains of the two nonparametric mapping algorithms are more pronounced.
Figure 13, Figure 14 and Figure 15 plot the achievable distortion bounds for three parametric mappings, the bound with infinite resolution ADC front ends and the conditional outer bound with one-bit ADC front ends for different values of ρ. We emphasize that the bounds discussed here are not the average distortion bounds of the previous figures. These bounds are obtained by searching for the minimal attainable D 2 for a given D 1 , as shown in (7), and they characterize the attainable distortion regions. To plot the D 1 - D 2 curves for the proposed parametric encoders in Figure 13, Figure 14 and Figure 15, we vary the parameters to build a database of D 1 - D 2 pairs. Then, for a given value of D 1 , we record the corresponding minimal value of D 2 . (Since it is hard to keep D 1 exactly constant in practical experiments, we collect the values of D 1 near the given one, record the corresponding values of D 2 , and take their minimum.) Finally, we can plot the complete D 1 - D 2 curves for the proposed parametric encoders.
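The procedure for extracting a D 1 - D 2 curve from the database of measured pairs (bin the measured D 1 around each target value, keep the minimal D 2 in the bin) can be sketched as follows; the tolerance `tol` is an assumed binning width:

```python
import numpy as np

def d1_d2_envelope(pairs, targets, tol=0.01):
    """For each target D1, collect measured (D1, D2) pairs whose D1 falls
    within +/- tol of the target and keep the smallest D2 among them.
    Targets with no nearby measurement are returned as NaN."""
    pairs = np.asarray(pairs)          # shape (n, 2): columns D1, D2
    out = []
    for d1 in targets:
        near = pairs[np.abs(pairs[:, 0] - d1) <= tol]
        out.append(near[:, 1].min() if len(near) else np.nan)
    return np.array(out)
```

Sweeping the targets over the observed range of D 1 then traces the lower envelope, i.e., the attainable distortion curve of the parametric encoder.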
We observe that the bound for an infinite resolution ADC front end is closer to the bound for a one-bit ADC front end at larger values of ρ. The sigmoid-like mapping outperforms the other two parametric mappings and is close to the bound for a one-bit ADC front end when ρ is relatively low.
Figure 16 and Figure 17 show the encoder structures of nonparametric mapping I at two CSNR levels. Since our system model can be approximated as symmetric, it is notable that the optimized encoder mappings are odd as well. As CSNR increases, the structure of the encoder mapping becomes progressively distorted. This deformation illustrates the advantage of the nonparametric mappings over the parametric ones: the former have a higher degree of freedom to place points in the channel space rather than being restrained within a fixed structure.

7. Conclusions

In this work, we consider the transmission of bivariate Gaussian sources over Gaussian broadcast channels with one-bit ADC front ends. The conditional distortion outer bound for this scenario is derived. Two algorithms are proposed to design the nonparametric mappings: nonparametric mapping I is based on iterative optimization between the encoder and the decoder, while nonparametric mapping II is obtained by gradient descent search using the necessary conditions for the optimal encoder. Based on the characteristics of the optimal encoder mappings, we propose several parametric mappings. Despite some performance degradation, the parametric mappings proposed herein can be used in place of the nonparametric mappings, as they require lower computational cost and adapt more readily to channel condition variations. Future extensions of this work include the derivation of closed-form approximations for the mapping distortion, the design of parametric mappings for systems with fading channels, and the investigation of systems with higher resolution ADC front ends.

Author Contributions

Data curation, W.Z.; Formal analysis, W.Z.; Funding acquisition, X.C.; Methodology, X.C.; Software, W.Z.; Supervision, X.C.; Validation, W.Z.; Visualization, W.Z.; Writing—original draft, W.Z.; Writing—review & editing, X.C. All authors have read and agreed to the published version of the manuscript.

Funding

The authors would like to thank the support of the National Natural Science Foundation of China (61301181, U1530120) and the Scientific Research Foundation of Central South University.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to ongoing research along this line.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proof for Bound under One-Bit ADC Front End

Herein, we derive the genie-aided outer bound in which X 2 n is known at both the transmitter and receiver 1. By the data processing inequality (DPI), we have
I ( X 1 n ; X ^ 1 n | X 2 n ) ≤ I ( X 1 n ; Z 1 n | X 2 n ) .
The conditional rate-distortion function under the assumption that X 2 n is known to both encoder and receiver 1, implies the following inequality
I ( X 1 n ; X ^ 1 n | X 2 n ) ≥ ( n / 2 ) log ( σ X 2 ( 1 − ρ 2 ) / D 1 | 2 A D C ) .
Due to the existence of the Markov chain relationship ( X 1 n , X 2 n ) V n Z 1 n , mutual information I ( X 1 n ; Z 1 n | X 2 n ) can be expressed as
I ( X 1 n ; Z 1 n | X 2 n ) = H ( Z 1 n | X 2 n ) − H ( Z 1 n | X 1 n , X 2 n ) = H ( Z 1 n | X 2 n ) − H ( Z 1 n | V n ) .
Furthermore, we have the following inequality
H ( Z 1 n ) ≥ H ( Z 1 n | X 2 n ) ≥ H ( Z 1 n | V n ) .
As shown in Lemma 2 of [41], since the quantizer is symmetric, no optimality is lost by restricting attention to symmetric input distributions. In that case, as the quantizer and the noise are both symmetric, the probability mass function (PMF) of Z 1 is also symmetric. Hence, H ( Z 1 ) = 1 .
With the one-bit symmetric quantization, the quantized channel output is
Z 1 n = Γ ( V n + N 1 n ) .
Since Z 1 is binary, it holds that
H ( Z 1 | V ) = E [ − P r ( Z 1 = 0 | V ) log P r ( Z 1 = 0 | V ) − P r ( Z 1 = 1 | V ) log P r ( Z 1 = 1 | V ) ] ,
then it is easy to derive that
H ( Z 1 | V ) = E h Q V σ n 1 ,
where Q ( x ) = ( 1 / √ ( 2 π ) ) ∫ x ∞ exp ( − t 2 / 2 ) d t , and h ( · ) denotes the binary entropy function
h ( p ) = − p log ( p ) − ( 1 − p ) log ( 1 − p ) , 0 ≤ p ≤ 1 .
The conclusion is also presented in [41].
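The quantities Q and h defined above are straightforward to evaluate numerically via the complementary error function, since Q(x) = erfc(x/√2)/2. The sketch below also checks two facts used in this appendix: h(Q(0)) = 1 and the evenness of h(Q(·)).

```python
from math import erfc, log2, sqrt

def Q(x):
    """Gaussian tail probability Q(x) = (1/sqrt(2*pi)) * int_x^inf exp(-t^2/2) dt."""
    return 0.5 * erfc(x / sqrt(2.0))

def h(p):
    """Binary entropy in bits, with h(0) = h(1) = 0 by convention."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

# sanity checks matching the text: Q(0) = 1/2 gives h(Q(0)) = 1,
# h(Q(x)) is even in x, and it decreases as |x| grows
assert abs(h(Q(0.0)) - 1.0) < 1e-9
assert abs(h(Q(-1.3)) - h(Q(1.3))) < 1e-9
assert h(Q(1.0)) > h(Q(2.0))
```

Evenness holds because Q(−x) = 1 − Q(x) and h(p) = h(1 − p), consistent with the step from (A7) to (A8) above.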
Since h ( Q ( · ) ) is an even function, we have
H ( Z 1 | V ) = E h Q V σ n 1 = E h Q | V | σ n 1 .
According to [41], the function h ( Q ( · ) ) is convex. Jensen’s inequality thus implies
H ( Z 1 | V ) ≥ h ( Q ( √ ( P / σ n 1 2 ) ) ) = h ( Q ( √ CSNR 1 ) ) ,
with equality iff V 2 = P .
On the other hand, h ( Q ( 0 × √ ( P / σ n 1 2 ) ) ) = 1 ; hence there exists γ ∈ [ 0 , 1 ] such that
H ( Z 1 | X 2 ) = h ( Q ( γ √ ( P / σ n 1 2 ) ) ) .
Therefore,
I ( X 1 n ; X ^ 1 n | X 2 n ) ≤ I ( X 1 n ; Z 1 n | X 2 n ) = H ( Z 1 n | X 2 n ) − H ( Z 1 n | V n ) = ∑ k = 1 n [ H ( Z 1 , k | X 2 n , Z 1 k − 1 ) − H ( Z 1 , k | V n , Z 1 k − 1 ) ] = ( a ) ∑ k = 1 n [ H ( Z 1 , k | X 2 , k ) − H ( Z 1 , k | V k ) ] ≤ n [ h ( Q ( γ √ ( P / σ n 1 2 ) ) ) − h ( Q ( √ ( P / σ n 1 2 ) ) ) ] ,
where γ [ 0 , 1 ] and (a) follows by the sample-by-sample (zero-delay) encoding assumption. As a result, the following inequality holds
( 1 / 2 ) log ( σ X 2 ( 1 − ρ 2 ) / D 1 A D C ) ≤ ( 1 / 2 ) log ( σ X 2 ( 1 − ρ 2 ) / D 1 | 2 A D C ) ≤ h ( Q ( γ √ ( P / σ n 1 2 ) ) ) − h ( Q ( √ ( P / σ n 1 2 ) ) ) .
On the other hand, applying the data processing inequality at receiver 2, we obtain the following inequality
( n / 2 ) log ( σ X 2 / D 2 A D C ) ≤ I ( X 2 n ; X ^ 2 n ) ≤ I ( X 2 n ; Z 2 n ) .
Additionally, the mutual information can be expressed as
I ( X 2 n ; Z 2 n ) = H ( Z 2 n ) − H ( Z 2 n | X 2 n ) = n − n H ( Z 2 | X 2 ) .
Note that
H ( Z 2 n | X 2 n ) = n h ( Q ( γ √ ( P / σ n 2 2 ) ) ) ,
we can thus have
( 1 / 2 ) log ( σ X 2 / D 2 A D C ) ≤ 1 − h ( Q ( γ √ ( P / σ n 2 2 ) ) ) .
Based on (A9) and (A13), we have
D 1 A D C ≥ σ X 2 ( 1 − ρ 2 ) · 2 ^ ( − 2 [ h ( Q ( γ √ ( P / σ n 1 2 ) ) ) − h ( Q ( √ ( P / σ n 1 2 ) ) ) ] ) ,
D 2 A D C ≥ σ X 2 · 2 ^ ( − 2 [ 1 − h ( Q ( γ √ ( P / σ n 2 2 ) ) ) ] ) .
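Assuming entropies in bits and σ X 2 = 1, the outer bound (A14)-(A15) can be traced numerically by sweeping γ over [0, 1]; each γ yields one corner point of the bound on the ( D 1 , D 2 ) region:

```python
import numpy as np
from math import erfc, log2, sqrt

def Q(x):
    """Gaussian tail probability via the complementary error function."""
    return 0.5 * erfc(x / sqrt(2.0))

def h(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def outer_bound(P, s1, s2, rho, n_gamma=101):
    """Trace the (D1, D2) outer bound of (A14)-(A15) over gamma in [0, 1],
    with s1, s2 the noise variances of the two receivers."""
    pts = []
    for g in np.linspace(0.0, 1.0, n_gamma):
        d1 = (1 - rho ** 2) * 2.0 ** (-2 * (h(Q(g * sqrt(P / s1)))
                                            - h(Q(sqrt(P / s1)))))
        d2 = 2.0 ** (-2 * (1 - h(Q(g * sqrt(P / s2)))))
        pts.append((d1, d2))
    return pts
```

At γ = 0 the D 2 bound is trivial (D 2 ≥ σ X 2) while the D 1 bound is smallest; at γ = 1 the roles reverse, so the sweep exposes the D 1 - D 2 tradeoff.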

References

  1. Shannon, C.E. A Mathematical Theory of Communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef] [Green Version]
  2. Varasteh, M.; Rassouli, B.; Simeone, O.; Gündüz, D. Zero-Delay Source-Channel Coding with a Low-Resolution ADC Front End. IEEE Trans. Inf. Theory 2018, 64, 1241–1261. [Google Scholar] [CrossRef]
  3. Davoli, F.; Marchese, M.; Mongelli, M. Non-linear coding and decoding strategies exploiting spatial correlation in wireless sensor networks. IET Commun. 2012, 6, 2198–2207. [Google Scholar] [CrossRef] [Green Version]
  4. Giordani, M.; Polese, M.; Mezzavilla, M.; Rangan, S.; Zorzi, M. Toward 6G Networks: Use Cases and Technologies. IEEE Commun. Mag. 2020, 58, 55–61. [Google Scholar] [CrossRef]
  5. Goblick, T.J. Theoretical limitations on the transmission of data from analog sources. IEEE Trans. Inf. Theory 1965, 11, 558–567. [Google Scholar] [CrossRef]
  6. He, C.; Hu, Y.; Chen, Y.; Fan, X.; Li, H.; Zeng, B. MUcast: Linear Uncoded Multiuser Video Streaming with Channel Assignment and Power Allocation Optimization. IEEE Trans. Circuits Syst. Video Technol. 2020, 30, 1136–1146. [Google Scholar] [CrossRef]
  7. Wang, K.; Wade, N. An integration platform for optimised design and real-time control of smart local energy systems. In Proceedings of the 2021 12th International Renewable Engineering Conference (IREC), Amman, Jordan, 14–15 April 2021. [Google Scholar]
  8. Bross, S.; Lapidoth, A.; Tinguely, S. Broadcasting correlated Gaussians. IEEE Trans. Inf. Theory 2010, 56, 3057–3068. [Google Scholar] [CrossRef] [Green Version]
  9. Floor, P.A.; Kim, A.N.; Ramstad, T.A.; Balasingham, I. Zero Delay Joint Source Channel Coding for Multivariate Gaussian Sources over Orthogonal Gaussian Channels. Entropy 2013, 15, 2129–2161. [Google Scholar] [CrossRef] [Green Version]
  10. Casal, P.S.; Fresnedo, O.; Castedo, L.; Frías, J.G. Analog Transmission of Correlated Sources over Fading SIMO Multiple Access Channels. IEEE Trans. Commun. 2017, 65, 2999–3011. [Google Scholar] [CrossRef]
  11. Hassanin, M.; Frías, J.G. Analog Mappings for Non-Linear Channels with Applications to Underwater Channels. IEEE Trans. Commun. 2020, 68, 445–455. [Google Scholar] [CrossRef]
  12. Floor, P.A.; Kim, A.N.; Wernersson, N.; Ramstad, T.A.; Skoglund, M.; Balasingham, I. Zero-Delay Joint Source-Channel Coding for a Bivariate Gaussian on a Gaussian MAC. IEEE Trans. Commun. 2012, 60, 3091–3102. [Google Scholar] [CrossRef]
  13. Chen, X.; Tuncel, E. Zero-Delay Joint Source-Channel Coding Using Hybrid Digital-Analog Schemes in the Wyner-Ziv Setting. IEEE Trans. Commun. 2014, 62, 726–735. [Google Scholar] [CrossRef]
  14. Chen, X. Zero-Delay Gaussian Joint Source–Channel Coding for the Interference Channel. IEEE Commun. Lett. 2018, 22, 712–715. [Google Scholar] [CrossRef] [Green Version]
  15. Murmann, B. ADC Performance Survey 1997–2020. Available online: http://web.stanford.edu/~murmann/adcsurvey.html (accessed on 29 July 2021).
  16. Jeon, Y.-S.; Do, H.; Hong, S.-N.; Lee, N. Soft-Output Detection Methods for Sparse Millimeter-Wave MIMO Systems with Low-Precision ADCs. IEEE Trans. Commun. 2019, 67, 2822–2836. [Google Scholar] [CrossRef] [Green Version]
  17. Dong, P.; Zhang, H.; Xu, W.; Li, G.Y.; You, X. Performance Analysis of Multiuser Massive MIMO with Spatially Correlated Channels Using Low-Precision ADC. IEEE Commun. Lett. 2018, 22, 205–208. [Google Scholar] [CrossRef]
  18. Park, S.; Simeone, O.; Sahin, O.; Shitz, S.S. Fronthaul Compression for Cloud Radio Access Networks: Signal processing advances inspired by network information theory. IEEE Signal Process. Mag. 2014, 31, 69–79. [Google Scholar] [CrossRef]
  19. O’Donnell, I.D.; Brodersen, R.W. An ultra-wideband transceiver architecture for low power, low rate, wireless systems. IEEE Trans. Veh. Technol. 2005, 54, 1623–1631. [Google Scholar] [CrossRef]
  20. Jeon, Y.; Lee, N.; Hong, S.; Heath, R.W. One-Bit Sphere Decoding for Uplink Massive MIMO Systems with One-Bit ADCs. IEEE Trans. Wirel. Commun. 2018, 17, 4509–4521. [Google Scholar] [CrossRef] [Green Version]
  21. Nguyen, L.V.; Swindlehurst, A.L.; Nguyen, D.H.N. SVM-Based Channel Estimation and Data Detection for One-Bit Massive MIMO Systems. IEEE Trans. Signal Process. 2021, 69, 2086–2099. [Google Scholar] [CrossRef]
  22. Dong, Y.; Wang, H.; Yao, Y.-D. Channel Estimation for One-Bit Multiuser Massive MIMO Using Conditional GAN. IEEE Commun. Lett. 2021, 25, 854–858. [Google Scholar] [CrossRef]
  23. Myers, N.J.; Tran, K.N.; Heath, R.W. Low-Rank MMWAVE MIMO Channel Estimation in One-Bit Receivers. In Proceedings of the 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, 4–8 May 2020; pp. 5005–5009. [Google Scholar]
  24. Varasteh, M.; Rassouli, B.; Simeone, O.; Gündüz, D. Zero-Delay Source-Channel Coding with a 1-Bit ADC Front End and Correlated Receiver Side Information. IEEE Trans. Commun. 2017, 65, 5429–5444. [Google Scholar] [CrossRef]
  25. Sevinç, C.; Tuncel, E. On the Analysis of Energy-Distortion Tradeoff for Zero-Delay Transmission over Gaussian Broadcast Channels. In Proceedings of the 2019 IEEE International Symposium on Information Theory (ISIT), Paris, France, 7–12 July 2019. [Google Scholar]
  26. Suárez-Casal, P.; Fresnedo, Ó.; Castedo, L.; García-Frías, J. Analog Transmission of Correlated Sources over BC with Distortion Balancing. IEEE Trans. Commun. 2017, 65, 4871–4885. [Google Scholar] [CrossRef]
  27. Tian, C.; Diggavi, S.; Shamai, S. The Achievable Distortion Region of Sending a Bivariate Gaussian Source on the Gaussian Broadcast Channel. IEEE Trans. Inf. Theory 2011, 57, 6419–6427. [Google Scholar] [CrossRef]
  28. Hassanin, M.; Garcia-Frias, J.; Castedo, L. Analog joint source channel coding for Gaussian Broadcast Channels. In Proceedings of the 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Florence, Italy, 4–9 May 2014; pp. 4264–4268. [Google Scholar]
  29. Hassanin, M.; Lu, B.; Garcia-Frias, J. Application of analog joint source-channel coding to broadcast channels. In Proceedings of the 2015 IEEE 16th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), Stockholm, Sweden, 28 June–1 July 2015; pp. 610–614. [Google Scholar]
  30. Lin, Z.; Chen, X.; Lu, X.; Tan, H. A Practical Low Delay Digital Scheme for Wyner-Ziv Coding over Gaussian Broadcast Channel. In Proceedings of the 2021 55th Annual Conference on Information Sciences and Systems (CISS), Baltimore, MD, USA, 24–26 March 2021; pp. 1–6. [Google Scholar]
  31. Abou Saleh, A.; Alajaji, F.; Chan, W.Y. Source-Interference Recovery Over Broadcast Channels: Asymptotic Bounds and Analog Codes. IEEE Trans. Commun. 2016, 8, 3406–3418. [Google Scholar] [CrossRef]
  32. Everett, H., III. Generalized Lagrange multiplier method for solving problems of optimum allocation of resources. Oper. Res. 1963, 11, 399–417. [Google Scholar] [CrossRef]
  33. Linde, Y.; Buzo, A.; Gray, R.M. An algorithm for vector quantizer design. IEEE Trans. Commun. 1980, 28, 84–95. [Google Scholar] [CrossRef] [Green Version]
  34. Kron, J.; Alajaji, F.; Skoglund, M. Low-Delay Joint Source-Channel Mappings for the Gaussian MAC. IEEE Commun. Lett. 2014, 18, 249–252. [Google Scholar] [CrossRef] [Green Version]
  35. Saleh, A.A.; Alajaji, F.; Chan, W. Power-constrained bandwidth-reduction source-channel mappings for fading channels. In Proceedings of the 2012 26th Biennial Symposium on Communications (QBSC), Kingston, ON, Canada, 28–29 May 2012; pp. 85–90. [Google Scholar]
  36. Luenberger, D.G. Optimization by Vector Space Methods; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 1977. [Google Scholar]
  37. Varasteh, M.; Rassouoli, B.; Simeone, O.; Gündüz, D. Joint Source channel coding with one-bit ADC front end. In Proceedings of the IEEE International Symposium on Information Theory, Barcelona, Spain, 10–15 July 2016; pp. 3062–3066. [Google Scholar]
  38. Akyol, E.; Viswanatha, K.B.; Rose, K.; Ramstad, T.A. On Zero-Delay Source-Channel Coding. IEEE Trans. Inf. Theory 2014, 60, 7473–7489. [Google Scholar] [CrossRef] [Green Version]
  39. Hekland, F.; Floor, P.A.; Ramstad, T.A. Shannon-Kotel’nikov mappings in joint source-channel coding. IEEE Trans. Commun. 2009, 57, 94–105. [Google Scholar] [CrossRef]
  40. Liang, F.; Luo, C.; Xiong, R.; Zeng, W.; Wu, F. Hybrid Digital–Analog Video Delivery with Shannon–Kotel’nikov Mapping. IEEE Trans. Multimed. 2018, 20, 2138–2152. [Google Scholar] [CrossRef]
  41. Singh, J.; Dabeer, O.; Madhow, U. On the limits of communication with low-precision analog-to-digital conversion at the receiver. IEEE Trans. Commun. 2009, 57, 3629–3639. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Broadcasting bivariate Gaussian sources with one-bit ADC front end.
Figure 2. The tradeoff between the value of L and the computational cost.
Figure 3. Optimized encoder mapping with Algorithm 1 for σ n 1 2 = 0.32 , σ n 2 2 = 1 , P = 1 and ρ = 0.7 . The (a) shows the curved surface while the (b) shows the corresponding two-dimensional representation. The (c) shows the curve while X 1 = X 2 .
Figure 4. Optimized encoder mapping with algorithm 2 for σ n 1 2 = 0.32 , σ n 2 2 = 1 , P = 1 and ρ = 0.7 . The (a) shows the curved surface while the (b) shows the corresponding two-dimensional representation. The (c) shows the curve while X 1 = X 2 .
Figure 5. Curved surfaces of the sigmoid-like function, sinh function, S-K-like function and uncoded scheme with optimized parameters for σ n 1 2 = 0.32 , σ n 2 2 = 1 , P = 1 and ρ = 0.7 . (a): sigmoid-like function, (b): sinh-function, (c): S-K-like function, (d): uncoded scheme.
Figure 6. The two-dimensional representations of the curved surfaces of the sigmoid-like function, sinh function, S-K-like function and uncoded scheme with optimized parameters for σ n 1 2 = 0.32 , σ n 2 2 = 1 , P = 1 and ρ = 0.7 . (a): sigmoid-like function, (b): sinh-function, (c): S-K-like function, (d): uncoded scheme.
Figure 7. When X 1 = X 2 , the curves of the sigmoid-like function, sinh function, S-K-like function and uncoded scheme with optimized parameters for σ n 1 2 = 0.32 , σ n 2 2 = 1 , P = 1 and ρ = 0.7 . (a): sigmoid-like function, (b): sinh-function, (c): S-K-like function, (d): uncoded scheme.
Figure 8. Average distortion performance for σ n 1 2 = 0.32 , σ n 2 2 = 1 , ρ = 0.7 by the relevant schemes at different values of P. (a): Performance comparisons of all the relevant parametric mappings, (b): Performance comparisons of sigmoid-like function, non-parametric mappings and the bounds.
Figure 9. Average distortion performance for σ n 1 2 = 0.32 , σ n 2 2 = 1 , ρ = 0.6 by the relevant schemes at different values of P. (a): Performance comparisons of all the relevant parametric mappings, (b): Performance comparisons of sigmoid-like function, non-parametric mappings and the bounds.
Figure 10. Average distortion performance for σ n 1 2 = 0.32 , σ n 2 2 = 1 , ρ = 0.2 by the relevant schemes at different values of P. (a): Performance comparisons of all the relevant parametric mappings, (b): Performance comparisons of sigmoid-like function, non-parametric mappings and the bounds.
Figure 11. Average distortion performance for σ n 1 2 = 0.32 , σ n 2 2 = 1 , P = 1 with optimized values of parameters at different values of ρ . (a): Performance comparisons of all the relevant parametric mappings, (b): Performance comparisons of sigmoid-like function, non-parametric mappings and the bounds.
Figure 12. Average distortion performance for σ n 1 2 = 0.032 , σ n 2 2 = 0.1 , P = 1 with optimized values of parameters at different values of ρ . (a): Performance comparisons of all the relevant parametric mappings, (b): Performance comparisons of sigmoid-like function, non-parametric mappings and the bounds.
Figure 13. Distortion regions ( D 1 , D 2 ) for σ n 1 2 = 0.32 , σ n 2 2 = 1 , P = 1 , ρ = 0.2 for three parametric mappings and two bounds.
Figure 14. Distortion regions ( D 1 , D 2 ) for σ n 1 2 = 0.32 , σ n 2 2 = 1 , P = 1 , ρ = 0.6 for three parametric mappings and two bounds.
Figure 15. Distortion regions ( D 1 , D 2 ) for σ n 1 2 = 0.32 , σ n 2 2 = 1 , P = 1 , ρ = 0.7 for three parametric mappings and two bounds.
Figure 16. Optimized encoder nonparametric mapping I with CSNR = 0 dB and ρ = 0.7 . (a) shows the curved surface, while the (b) shows the corresponding two-dimensional representation. The (c) shows the curve while X 1 = X 2 .
Figure 17. Optimized encoder nonparametric mapping I with CSNR = 4 dB and ρ = 0.7 . (a) shows the curved surface, while the (b) shows the corresponding two-dimensional representation. The (c) shows the curve while X 1 = X 2 .
Share and Cite

Zhao, W.; Chen, X. Zero-Delay Joint Source Channel Coding for a Bivariate Gaussian Source over the Broadcast Channel with One-Bit ADC Front Ends. Entropy 2021, 23, 1679. https://0-doi-org.brum.beds.ac.uk/10.3390/e23121679