Article

Distributed Joint Source-Channel Coding Using Quasi-Uniform Systematic Polar Codes

School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, China
* Author to whom correspondence should be addressed.
Submission received: 25 September 2018 / Revised: 11 October 2018 / Accepted: 20 October 2018 / Published: 22 October 2018
(This article belongs to the Special Issue Multiuser Information Theory II)

Abstract
This paper proposes a distributed joint source-channel coding (DJSCC) scheme using polar-like codes. In the proposed scheme, each distributed source encodes its message with a quasi-uniform systematic polar code (QSPC) or a punctured QSPC, and transmits only the parity bits over its independent channel. These systematic codes play the role of both source compression and error protection. For infinite code-length, we show that the proposed scheme approaches the information-theoretical limit via joint source-channel polarization with side information. For finite code-length, the simulation results verify that the proposed scheme outperforms the distributed separate source-channel coding (DSSCC) scheme using polar codes and the DJSCC scheme using classic systematic polar codes.

1. Introduction

Polar codes, invented by Arikan [1] using a technique called channel polarization, are capable of achieving the symmetric capacity of any binary-input discrete memoryless channel (B-DMC) with low encoding and decoding complexity. Afterwards, the concept of source polarization was introduced in [2] as the complement of channel polarization. One immediate application of source polarization is the design of polar codes for source coding. Since the polarization phenomenon exists on both source and channel sides, it is of natural interest to integrate channel polarization and source polarization for joint source-channel coding (JSCC). From the perspective of information theory, this interest is motivated by the fact that the error exponent of JSCC is better than that of separate source-channel coding (SSCC) [3], which may indicate a performance loss of SSCC under the finite code-length. From a practical point of view, two main shortcomings of SSCC resulting in an unsatisfactory performance are summarized as follows:
  • With finite code-length, channel decoding errors are unavoidable; such errors may be disastrous for the source decoder.
  • With finite code-length, residual redundancy may exist in the output of source coding; such redundancy is not exploited by the channel decoder to further improve performance.
Therefore, the JSCC scheme, which takes into consideration both source redundancy and channel error jointly, may have certain advantages over SSCC.
In recent years, JSCC has attracted increasing attention among researchers. JSCC schemes based on turbo and LDPC codes can be found in [4,5,6,7], and schemes based on polar codes in [8,9,10]. In particular, the authors of [11] proposed a class of systematic polar codes (SPCs), called quasi-uniform SPCs (QSPCs), and showed that QSPCs are optimal for the problem of JSCC with side information and outperform classic SPCs [12].
For the distributed source coding (DSC) problem, the Slepian-Wolf theorem [13] states that for two or more correlated sources, the lossless compression rates of joint encoding can be achieved with separate encoding if a joint decoder is used at the receiver. Although this theorem has been known for a long time, a practical DSC scheme was first proposed relatively recently by Pradhan and Ramchandran using syndromes [14]. Inspired by the syndrome approach, a large number of DSC schemes have sprung up based on powerful channel codes (CCs) such as turbo codes, LDPC codes, and polar codes [15,16,17]. However, to obtain good performance, these syndrome-based schemes usually need a considerably long code-length ($10^4$–$10^5$) and are sensitive to channel errors. Alternatively, the design of a DSC scheme can be based on the parity approach, where a systematic code is used to encode the source and the source is recovered from the parity bits. Such schemes can be extended to distributed JSCC (DJSCC) [18,19]. Our work falls into this category. Specifically, we focus on the design of parity-based DJSCC using polar/polar-like codes, where only parity bits are transmitted over the noisy channels. Each distributed source has a single encoder that encodes the source message for both compression and error protection.
The main contributions of this paper are summarized as follows. We propose a new class of polar-like codes, called punctured QSPCs, and show its asymptotic optimality. Then we propose a DJSCC scheme using two polar-like codes and show that the proposed scheme approaches achievable rates.
The rest of this paper is organized as follows. The system model is given in Section 2. Section 3 introduces QSPCs and punctured QSPCs. In Section 4, we propose a DJSCC scheme based on two polar-like codes and analyze its performance limit. Section 5 presents the simulation results to verify the advantage of the proposed scheme and in Section 6 we conclude this paper.

2. System Model

Consider the problem of transmitting distributed sources $V_1, V_2, \ldots, V_s$ over independent symmetric B-DMCs $W_1, W_2, \ldots, W_s$ to a common destination. The message from source $V_i$, $i = 1, 2, \ldots, s$, is denoted by a binary vector $v_i = [v_i^1, v_i^2, \ldots, v_i^K]$ of length $K$. We assume that the vectors $v^j = [v_1^j, v_2^j, \ldots, v_s^j]$ for different $j \in \{1, 2, \ldots, K\}$ are i.i.d. (independently and identically distributed), drawn from $\{0, 1\}^s$ with a common probability distribution $P_{V_1, V_2, \ldots, V_s}$.
From the Slepian-Wolf theorem, the compression rate $R = (R_1, R_2, \ldots, R_s)$ is achievable if
$$\sum_{i \in \mathcal{T}} R_i \geq H(v_{\mathcal{T}}^j \mid v_{\mathcal{T}^c}^j)$$
for any $\mathcal{T} \subseteq \{1, 2, \ldots, s\}$, where $H(\cdot)$ is the entropy function, $\mathcal{T}^c = \{1, 2, \ldots, s\} \setminus \mathcal{T}$ is the complementary set of $\mathcal{T}$, and $v_{\mathcal{T}}^j$ and $v_{\mathcal{T}^c}^j$ denote, respectively, the sub-vectors of $v^j$ indexed by $\mathcal{T}$ and $\mathcal{T}^c$. This achievable compression region is also known as the Slepian-Wolf region $\mathcal{R}_{\mathrm{SW}}$.
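As a concrete check of the Slepian-Wolf conditions above, the subset inequalities can be evaluated numerically. The following sketch (the helper names `h2` and `in_slepian_wolf` are ours, not from the paper) uses the two-source model of Section 5, where $v_2^j$ is $v_1^j$ passed through a BSC with crossover probability $q$:

```python
from itertools import combinations
import math

def h2(p):
    """Binary entropy function in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def in_slepian_wolf(rates, cond_entropy, tol=1e-12):
    """Check sum_{i in T} R_i >= H(v_T | v_{T^c}) for every non-empty
    subset T of sources. `cond_entropy` maps frozenset(T) to H(v_T | v_{T^c})."""
    s = len(rates)
    return all(
        sum(rates[i] for i in T) >= cond_entropy[frozenset(T)] - tol
        for r in range(1, s + 1)
        for T in combinations(range(s), r)
    )

# Two sources: v1 ~ Bernoulli(1/2), v2 = v1 corrupted by a BSC(q).
q = 0.11
H = {frozenset({0}): h2(q),          # H(v1 | v2) = h2(q) by symmetry
     frozenset({1}): h2(q),          # H(v2 | v1) = h2(q)
     frozenset({0, 1}): 1 + h2(q)}   # H(v1, v2) = H(v1) + H(v2 | v1)
print(in_slepian_wolf((1.0, h2(q)), H))  # corner point: True
print(in_slepian_wolf((0.6, 0.3), H))    # R2 < H(v2 | v1): False
```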
Let $\tilde{R}_i = \tilde{N}_i / K$ be the rate, measured in channel-uses per source-symbol, where $\tilde{N}_i$ is the number of channel uses source $V_i$ spends transmitting $v_i$. Let $C_i$ denote the channel capacity of $W_i$. Since the source-channel separation theorem still holds for this problem [20], reliable transmission is possible if $\tilde{R} = (\tilde{R}_1, \tilde{R}_2, \ldots, \tilde{R}_s)$ falls into the achievable rate region $\mathcal{R}_{\mathrm{rate}}$, which is defined as
$$\mathcal{R}_{\mathrm{rate}} = \{ \tilde{R} \mid \tilde{R}_i \geq 0,\ \mathcal{R}_{\mathrm{SW}} \cap \mathcal{R}_c(\tilde{R}) \neq \emptyset \},$$
where $\mathcal{R}_c(\tilde{R})$ is the set of all points $(R_1, R_2, \ldots, R_s)$ satisfying
$$0 \leq R_i \leq \tilde{R}_i C_i$$
for $i = 1, 2, \ldots, s$.
Given $\tilde{R}^1, \tilde{R}^2 \in \mathcal{R}_{\mathrm{rate}}$ and $0 \leq \lambda \leq 1$, by the definition of convex sets, we can readily prove that $\lambda \tilde{R}^1 + (1 - \lambda) \tilde{R}^2 \in \mathcal{R}_{\mathrm{rate}}$ as follows. There obviously exist $R^{1*} \in \mathcal{R}_{\mathrm{SW}} \cap \mathcal{R}_c(\tilde{R}^1)$ and $R^{2*} \in \mathcal{R}_{\mathrm{SW}} \cap \mathcal{R}_c(\tilde{R}^2)$. Due to the convexity of $\mathcal{R}_{\mathrm{SW}}$, $\lambda R^{1*} + (1 - \lambda) R^{2*} \in \mathcal{R}_{\mathrm{SW}}$. Besides, since each constraint $0 \leq R_i \leq \tilde{R}_i C_i$ is linear in $\tilde{R}_i$, it can be observed that
$$\lambda R^{1*} + (1 - \lambda) R^{2*} \in \mathcal{R}_c\big( \lambda \tilde{R}^1 + (1 - \lambda) \tilde{R}^2 \big).$$
This indicates that at least one point exists in $\mathcal{R}_{\mathrm{SW}} \cap \mathcal{R}_c(\lambda \tilde{R}^1 + (1 - \lambda) \tilde{R}^2)$, so this intersection is non-empty. Therefore, $\lambda \tilde{R}^1 + (1 - \lambda) \tilde{R}^2 \in \mathcal{R}_{\mathrm{rate}}$, i.e., the achievable rate region $\mathcal{R}_{\mathrm{rate}}$ is convex.
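For two sources, membership in $\mathcal{R}_{\mathrm{rate}}$ admits a closed-form test: the box $\mathcal{R}_c(\tilde{R})$ intersects $\mathcal{R}_{\mathrm{SW}}$ exactly when each bit budget $\tilde{R}_i C_i$ covers its single-source constraint and the budgets jointly cover the sum-rate constraint. A sketch under assumed capacities $C_1 = C_2 = 0.5$ (the function name and the illustrative capacities are ours):

```python
import math

def h2(p):
    """Binary entropy function in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def rate_achievable_2src(Rt, C, H1_given_2, H2_given_1, H_joint, tol=1e-12):
    """Two-source test of R̃ ∈ R_rate: the box [0, R̃1·C1] x [0, R̃2·C2]
    intersects R_SW iff its top-right corner satisfies all three
    Slepian-Wolf inequalities."""
    b1, b2 = Rt[0] * C[0], Rt[1] * C[1]   # per-source bit budgets R_i <= R̃_i·C_i
    return (b1 >= H1_given_2 - tol and
            b2 >= H2_given_1 - tol and
            b1 + b2 >= H_joint - tol)

# Operating point of Section 5, with assumed capacities C1 = C2 = 0.5 bit/use:
q = 0.11
print(rate_achievable_2src((2.2, 1.1), (0.5, 0.5), h2(q), h2(q), 1 + h2(q)))  # True
print(rate_achievable_2src((2.2, 1.1), (0.2, 0.2), h2(q), h2(q), 1 + h2(q)))  # False
```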
To solve this transmission problem, one option is the distributed SSCC (DSSCC) scheme shown in Figure 1. The message from each source is first compressed with DSC. Then each source encodes its compressed bits into channel code bits with a CC and transmits these bits over its channel. At the destination, the received signals are first decoded by the channel decoders, and the decoded bits are then jointly decompressed by a source joint decoder. There is no interaction between the source layer and the channel layer in the sense of encoding and decoding. Such a DSSCC scheme is asymptotically optimal as the code-length goes to infinity. However, with finite code-length, such source-channel separate schemes are generally sensitive to the residual errors of the channel decoder, and are thus ineffective in some scenarios such as wireless sensor networks (WSNs). To address this problem, in this paper we propose a DJSCC scheme using polar-like codes to achieve the corner point of $\mathcal{R}_{\mathrm{rate}}$, i.e.,
$$\tilde{R}_i = \frac{H(v_i^j \mid v_1^j v_2^j \cdots v_{i-1}^j)}{C_i}$$
for $i = 1, 2, \ldots, s$. As shown in Figure 2, each source is equipped with only one encoder to encode the source message for both compression and error protection.
Note that the order of the chain rule $v_1^j v_2^j \cdots v_s^j$ in (5) has no effect on the code construction and overall performance; hence, the ascending order is considered in this paper. It is also obvious that multiple points in $\mathcal{R}_{\mathrm{rate}}$ can be achieved with time-sharing operations, provided that DJSCC schemes for the corner points have been well constructed.

3. QSPC and Punctured QSPC

In this section, we briefly review QSPCs and propose punctured QSPCs. First, two index sets and their properties are presented. Consider a binary sequence $L$ of length $N = 2^n$. The coordinates of the ones in $L$ are quasi-uniformly distributed and denoted by the index set $Q(N, K)$ or $\bar{Q}(N, K)$, which is defined as
$$Q(N, K) = \Big\{ b = 1 + \sum_{i=0}^{n-1} b_i 2^i \ \Big|\ 1 + \sum_{i=0}^{n-1} b_i 2^{n-1-i} \in \{ i : N - K + 1 \leq i \leq N \},\ b_i \in \{0, 1\} \Big\},$$
$$\bar{Q}(N, K) = \Big\{ b = 1 + \sum_{i=0}^{n-1} b_i 2^i \ \Big|\ 1 + \sum_{i=0}^{n-1} b_i 2^{n-1-i} \in \{ i : 1 \leq i \leq K \},\ b_i \in \{0, 1\} \Big\}.$$
The property of this sequence is given as Proposition 1.
Proposition 1.
The coordinates of the ones in $L^M = [\underbrace{L\ L\ \cdots\ L}_{M}]$ ($M = 2^m$) are $Q(MN, MK)$ and $\bar{Q}(MN, MK)$ if the coordinates of the ones in $L$ are $Q(N, K)$ and $\bar{Q}(N, K)$, respectively.
An example of the index sets is illustrated in Figures 3 and 4. The coordinates of the ones in $L = 0001$ and $\bar{L} = 1000$ are $Q(4, 1) = \{4\}$ and $\bar{Q}(4, 1) = \{1\}$, respectively. For $L^2 = 00010001$ and $\bar{L}^2 = 10001000$, the coordinates of the ones are $Q(8, 2) = \{4, 8\}$ and $\bar{Q}(8, 2) = \{1, 5\}$, respectively.
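The index sets above can be computed directly from their definitions: $Q(N, K)$ collects the 1-indexed positions whose bit-reversed index falls in the top $K$ slots, and $\bar{Q}(N, K)$ the bottom $K$ slots. A small sketch (function names are ours) that reproduces this example:

```python
def bit_reverse(i, n):
    """Reverse the n-bit binary representation of the 0-indexed position i."""
    r = 0
    for _ in range(n):
        r = (r << 1) | (i & 1)
        i >>= 1
    return r

def Q(N, K):
    """Positions b in 1..N whose bit-reversed index lies in {N-K+1, ..., N}."""
    n = N.bit_length() - 1
    return sorted(b for b in range(1, N + 1)
                  if N - K + 1 <= bit_reverse(b - 1, n) + 1 <= N)

def Q_bar(N, K):
    """Positions b in 1..N whose bit-reversed index lies in {1, ..., K}."""
    n = N.bit_length() - 1
    return sorted(b for b in range(1, N + 1)
                  if 1 <= bit_reverse(b - 1, n) + 1 <= K)

print(Q(4, 1), Q_bar(4, 1))   # [4] [1]
print(Q(8, 2), Q_bar(8, 2))   # [4, 8] [1, 5]
```

The second line also illustrates Proposition 1: $Q(8, 2)$ is exactly the set obtained by tiling two copies of $L = 0001$.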

3.1. QSPC

A class of SPCs, called QSPCs, was introduced for the problem of JSCC with side information. For an $(MN, MK)$ QSPC ($M = 2^m$, $N = 2^n$, $K \leq N$), the systematic bits $x_{\mathcal{B}}$ are first encoded by
$$\tilde{u}_{\tilde{\mathcal{A}}} = \big( x_{\mathcal{B}} \oplus u_{\tilde{\mathcal{A}}^c} \tilde{G}_{MN}(\tilde{\mathcal{A}}^c, \mathcal{B}) \big) \tilde{G}_{MN}(\tilde{\mathcal{A}}, \mathcal{B})^{-1} = x_{\mathcal{B}} \tilde{G}_{MN}(\tilde{\mathcal{A}}, \mathcal{B})^{-1},$$
where $\tilde{\mathcal{A}} = \{ i \mid MN - MK + 1 \leq i \leq MN \}$, $\mathcal{B} = Q(MN, MK)$, and $\tilde{G}_{MN}(\tilde{\mathcal{A}}, \mathcal{B})$ denotes the sub-matrix of $\tilde{G}_{MN}$ with the rows indexed by $\tilde{\mathcal{A}}$ and the columns indexed by $\mathcal{B}$. Then the codeword is obtained by
$$x = \tilde{u}_{\tilde{\mathcal{A}}} \tilde{G}_{MN}(\tilde{\mathcal{A}}) \oplus \tilde{u}_{\tilde{\mathcal{A}}^c} \tilde{G}_{MN}(\tilde{\mathcal{A}}^c),$$
where $\tilde{G}_{MN}(\tilde{\mathcal{A}})$ and $\tilde{G}_{MN}(\tilde{\mathcal{A}}^c)$ denote the rows of $\tilde{G}_{MN}$ indexed by $\tilde{\mathcal{A}}$ and $\tilde{\mathcal{A}}^c$, respectively. The generator matrix is given by
$$\tilde{G}_{MN} = D G_{MN} = D B_{MN} \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}^{\otimes (m+n)},$$
where $B_{MN}$ is the bit-reversal matrix and $D$ is an invertible bit-swap coding matrix (refer to Section IV.A of [11] for details). It can be seen that $x = \tilde{u} \tilde{G}_{MN} = u G_{MN}$ and $u = \tilde{u} D$.
For this code, the typical polar decoder attempts to decode $u$; the codeword $x$ is then obtained by $x = u G_{MN}$ or $x = u D^{-1} \tilde{G}_{MN}$. The entropies/BERs (bit error rates) of $u$ are determined by the underlying channels and the sources. With the aid of density evolution, we can locate the bits $u_{\mathcal{A}}$ which have low entropies/BERs. However, the invertibility of $G(\mathcal{A}, \mathcal{B})$ cannot be guaranteed, i.e., it is impossible to construct QSPCs via original polar coding. The bit-swap coding $D$ is therefore introduced to change the information bits from $u_{\mathcal{A}}$ to $\tilde{u}_{\tilde{\mathcal{A}}}$ and modify the generator matrix to $\tilde{G}_{MN} = D G_{MN}$. This operation ensures that $\tilde{G}(\tilde{\mathcal{A}}, \mathcal{B})$ is always invertible. The block error rate (BLER) of $x$ under the SC decoder can be calculated by
$$P_{\mathrm{BLER}} = 1 - \prod_{i \in \mathcal{A}} \big( 1 - P_e(u_i) \big),$$
where $P_e(u_i)$ is the BER of $u_i$.
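The BLER expression above is easy to evaluate once density evolution has produced the per-bit error probabilities. A minimal sketch (the function name and the sample BER values are ours):

```python
def bler_sc(p_e):
    """Block error rate of SC decoding when the information bits have
    (approximately independent) error probabilities p_e:
    P_BLER = 1 - prod_i (1 - Pe(u_i))."""
    prod = 1.0
    for p in p_e:
        prod *= 1.0 - p
    return 1.0 - prod

# e.g. 100 information bits, each with BER 1e-3 from density evolution:
print(round(bler_sc([1e-3] * 100), 4))  # 0.0952
```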
Consider a source message $v = [v_1, v_2, \ldots, v_{MK}]$ with side information $w = [w_1, w_2, \ldots, w_{MK}]$. When $v$ is encoded by an $(MN, MK)$ QSPC for JSCC, only the parity bits $x_{\mathcal{B}^c}$ are transmitted over the noisy channel $W(y|x)$. Since $\mathcal{B} = Q(MN, MK)$, the codeword structure can be depicted as in Figure 3, and Theorem 1 can be obtained as follows.
Theorem 1 ([11]).
For any $0 \leq \epsilon < \frac{1}{2}$,
$$\lim_{M \to \infty} \frac{1}{MN} \big| \{ i : H(u_i \mid y\, u^{1:i-1}) \in [0, \epsilon) \} \big| = \lim_{M \to \infty} \frac{1}{MN} \big| \{ i : Z(u_i \mid y\, u^{1:i-1}) \in [0, \epsilon) \} \big| = 1 - \frac{K H(v|w) + (N - K) H(x|y)}{N},$$
$$\lim_{M \to \infty} \frac{1}{MN} \big| \{ i : H(u_i \mid y\, u^{1:i-1}) \in (1 - \epsilon, 1] \} \big| = \lim_{M \to \infty} \frac{1}{MN} \big| \{ i : Z(u_i \mid y\, u^{1:i-1}) \in (1 - \epsilon, 1] \} \big| = \frac{K H(v|w) + (N - K) H(x|y)}{N},$$
where $|\cdot|$ denotes the cardinality of a set, $Z(\cdot \mid \cdot)$ is the source Bhattacharyya parameter [2], $y_{\mathcal{B}} = w$ and $y_{\mathcal{B}^c}$ are the channel outputs of $W(y|x)$, and $1{:}i{-}1$ denotes the set $\{1, 2, \ldots, i-1\}$.
Note that the theorem shows that $P_{\mathrm{BLER}}$ tends to 0 as $M$ goes to infinity if $H(v|w)/(1 - H(x|y)) = H(v|w)/C \leq (N - K)/K$, because $Z(u_i \mid y\, u^{1:i-1})$ is an upper bound on $P_e(u_i)$. Further, if the supermartingale property is established for $Z(u_i \mid y\, u^{1:i-1})$, we have $P_{\mathrm{BLER}} \leq 2^{-M^\beta} \to 0$ for any $\beta \in (0, \frac{1}{2})$ [21].

3.2. Punctured QSPC

By puncturing $MP$ parity bits from an $(MN, MK)$ QSPC, we can construct a punctured QSPC with a lower rate $\tilde{R}_i$ (in channel-uses per source-symbol). This punctured QSPC is characterized by the parameters $(MN, MK, MP)$. As depicted in Figure 4, there are three kinds of bits (systematic bits, parity bits, and punctured bits) in the codeword, and the coordinates of the punctured bits are $\bar{Q}(MN, MP)$. At the code-construction stage, the punctured bits are regarded as inputs of a zero-capacity channel when calculating the BERs of $u$ and performing bit-swap coding. When this punctured QSPC is used to encode a source message $v = [v_1, v_2, \ldots, v_{MK}]$ with side information $w = [w_1, w_2, \ldots, w_{MK}]$, Theorem 2 is obtained similarly as follows.
Theorem 2.
For any $0 \leq \epsilon < \frac{1}{2}$,
$$\lim_{M \to \infty} \frac{1}{MN} \big| \{ i : H(u_i \mid y\, u^{1:i-1}) \in [0, \epsilon) \} \big| = \lim_{M \to \infty} \frac{1}{MN} \big| \{ i : Z(u_i \mid y\, u^{1:i-1}) \in [0, \epsilon) \} \big| = 1 - \frac{K H(v|w) + (N - K - P) H(x|y) + P}{N},$$
$$\lim_{M \to \infty} \frac{1}{MN} \big| \{ i : H(u_i \mid y\, u^{1:i-1}) \in (1 - \epsilon, 1] \} \big| = \lim_{M \to \infty} \frac{1}{MN} \big| \{ i : Z(u_i \mid y\, u^{1:i-1}) \in (1 - \epsilon, 1] \} \big| = \frac{K H(v|w) + (N - K - P) H(x|y) + P}{N},$$
where $y_{\bar{Q}(MN, MP)} = \emptyset$ because of puncturing.
Proof of Theorem 2.
The codeword structure of Figure 4 ensures that $Z(u_i \mid y\, u^{1:i-1})$ and $H(u_i \mid y\, u^{1:i-1})$ converge to either 0 or 1 as $M$ goes to infinity, similarly to the QSPC case. The total entropy can be calculated by
$$\sum_{i=1}^{MN} H(u_i \mid y\, u^{1:i-1}) = \sum_{i=1}^{MN} H(x_i \mid y_i) = M(N - K - P) H(x|y) + MK\, H(v|w) + MP.$$
Therefore, the fraction of the high-entropy part ($H \to 1$) is
$$\frac{1}{MN} \big[ M(N - K - P) H(x|y) + MK\, H(v|w) + MP \big] = \frac{(N - K - P) H(x|y) + K H(v|w) + P}{N},$$
and the fraction of the low-entropy part ($H \to 0$) is $1 - \frac{(N - K - P) H(x|y) + K H(v|w) + P}{N}$. □
This theorem also shows that the $P_{\mathrm{BLER}}$ of the punctured QSPC tends to 0 as $M$ goes to infinity if $H(v|w)/(1 - H(x|y)) = H(v|w)/C \leq (N - K - P)/K$. Since the punctured bits contribute zero channel-uses, the punctured QSPC incurs no performance loss. After the supermartingale property is established for $Z(u_i \mid y\, u^{1:i-1})$, we also have $P_{\mathrm{BLER}} \leq 2^{-M^\beta} \to 0$ for any $\beta \in (0, \frac{1}{2})$.
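The limiting fractions of Theorem 2 can be tabulated for the code used later in Section 5. A numeric sketch (helper names ours; the channel quality $H(x|y) = 0.5$, i.e., capacity $C = 0.5$, is an illustrative assumption):

```python
import math

def h2(p):
    """Binary entropy function in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def entropy_fractions(N, K, P, H_vw, H_xy):
    """Limiting fractions of high-/low-entropy synthesized bits for a
    punctured (MN, MK, MP) QSPC per Theorem 2; P = 0 recovers Theorem 1."""
    high = (K * H_vw + (N - K - P) * H_xy + P) / N
    return high, 1.0 - high

# Section 5 parameters: N = 32, K = 10, P = 11, H(v|w) = h2(0.11):
high, low = entropy_fractions(32, 10, 11, h2(0.11), 0.5)
print(round(high, 3), round(low, 3))  # 0.672 0.328
```

Under these assumptions the BLER-to-zero condition $H(v|w)/C \leq (N - K - P)/K$ reads $0.9998 \leq 1.1$, so the operating point is inside the achievable region.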

4. Proposed DJSCC Scheme and Performance Limit

In this section, we propose a DJSCC scheme based on the previous construction of two polar-like codes, and prove its asymptotic optimality.

4.1. Proposed DJSCC Scheme

To achieve the corner point (5), we turn this problem into the problem of JSCC with side information at the receiver, and solve it with QSPC and punctured QSPC. The proposed DJSCC scheme consists of s encoders and one joint decoder, as shown in Figure 2.
Define
$$\tilde{R}_{\max} = \max_{i \in \{1, 2, \ldots, s\}} \tilde{R}_i$$
and
$$i_{\max} = \operatorname*{argmax}_{i \in \{1, 2, \ldots, s\}} \tilde{R}_i.$$
For source $V_i$, the previous sources $V_1, V_2, \ldots, V_{i-1}$ are regarded entirely as side information when constructing the QSPC and punctured QSPCs. The message $v_{i_{\max}}$ of source $V_{i_{\max}}$ is encoded by an $(MN, MK)$ QSPC. For each source $V_i$, $i \neq i_{\max}$, the message $v_i$ is encoded by an $(MN, MK, MP_i)$ punctured QSPC to adapt to the lower rate $\tilde{R}_i$. At the destination, a joint decoder is used to recover the sources, as shown in Figure 5. The joint decoder consists of $s$ conventional polar decoders working successively. The hard decisions $\hat{v}_1, \hat{v}_2, \ldots, \hat{v}_{i-1}$ from decoders $1, 2, \ldots, i-1$ are fed to the $i$-th decoder for calculating the log-likelihood ratios (LLRs) of the systematic bits. For the parity bits, the LLRs are calculated from the channel observations.
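The puncturing amounts follow from the target rates: source $i$ transmits $M(N - K - P_i)$ parity bits per $MK$ source bits, so $P_i = N - K - K \tilde{R}_i$. A sketch of this allocation (function name ours), using the rates of Section 5:

```python
def puncture_allocation(N, K, rates):
    """Choose P_i so that (N - K - P_i)/K matches each target rate R̃_i.
    The highest-rate source gets the unpunctured (MN, MK) QSPC (P = 0)."""
    i_max = max(range(len(rates)), key=lambda i: rates[i])
    P = [N - K - round(K * r) for r in rates]
    assert P[i_max] == 0  # the max-rate source is never punctured
    return i_max, P

# N = 32, K = 10, target rates (2.2, 1.1) channel-uses/source-symbol:
print(puncture_allocation(32, 10, [2.2, 1.1]))  # (0, [0, 11])
```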

4.2. Performance Limit

Based on the analysis in Section 3, both the QSPC and the punctured QSPC can achieve the information-theoretical limit $H/C$ in channel-uses per source-symbol. Therefore, the proposed scheme is asymptotically optimal, as long as the target rates can be approached arbitrarily closely. The following lemma shows that the QSPC and the punctured QSPC can indeed approach the rate limits.
Lemma 1.
The $(MN, MK)$ QSPC asymptotically approaches the rate $\tilde{R}_{i_{\max}}$, and the $(MN, MK, MP_i)$ punctured QSPC asymptotically approaches an arbitrary rate $\tilde{R}_i \leq \tilde{R}_{i_{\max}}$.
Proof of Lemma 1.
Let $K$ be the integer part of $N / (\tilde{R}_i + 1)$; then we have
$$N = K(\tilde{R}_i + 1) + \epsilon, \quad 0 \leq \epsilon < \tilde{R}_i + 1,$$
and
$$\tilde{R}_{\mathrm{peak}} = \frac{N - K}{K} = \frac{K(\tilde{R}_i + 1) + \epsilon - K}{K} = \tilde{R}_i + \frac{\epsilon}{K}.$$
Since $\lim_{K(N) \to \infty} \tilde{R}_{\mathrm{peak}} = \tilde{R}_i + \lim_{K(N) \to \infty} \epsilon / K = \tilde{R}_i$, the gap between $\tilde{R}_{\mathrm{peak}}$ and $\tilde{R}_i$ can be made arbitrarily small when $N$ is sufficiently large. Besides, the minimum step of the rate adjustable by puncturing, $\Delta = 1/K$, decreases with increasing $N$. This indicates that the punctured QSPC can also approach any rate below $\tilde{R}_{\mathrm{peak}}$ arbitrarily closely when $N$ is large enough. □
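The vanishing gap $\epsilon / K$ in the proof can be observed numerically. A sketch with an illustrative target rate $\tilde{R}_i = 2.3$ (the function name and the chosen rate are ours):

```python
def rate_gap(N, Rt):
    """Gap R̃_peak - R̃_i = ε/K from the proof of Lemma 1,
    with K = floor(N / (R̃_i + 1))."""
    K = int(N // (Rt + 1))
    return (N - K) / K - Rt

# The gap shrinks toward 0 as the base code-length N grows:
gaps = [rate_gap(2**n, 2.3) for n in (7, 11, 15)]
print(gaps)
```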

5. Simulation Results

In the previous section, we have proposed a DJSCC scheme and shown its asymptotic optimality. In this section, we present the simulation results to verify the performance of the proposed scheme under finite code-length.
In the simulation, two sources $V_1, V_2$ are considered; $\Pr\{v_2^j \mid v_1^j\}$ is modeled as a binary symmetric channel (BSC) with crossover probability $q = 0.11$, and $\Pr\{v_1^j = 0\} = 0.5$. The transmission channels $W_1, W_2$ are BI-AWGN channels with the same signal-to-noise ratio (SNR). The $(64 \cdot 32, 64 \cdot 10)$ QSPC and the $(64 \cdot 32, 64 \cdot 10, 64 \cdot 11)$ punctured QSPC are constructed with density evolution and are used to encode $v_1$ and $v_2$, respectively, for DJSCC. For this DJSCC scheme, the rates are $\tilde{R} = (\tilde{R}_1, \tilde{R}_2) = (2.2, 1.1)$ in channel-uses/source-symbol. The Shannon limits for both sources are approximately $-0.4$ dB, obtained respectively from $MK \cdot H(v_1^j) = M(N - K) \cdot C(\mathrm{SNR})$ and $MK \cdot H(v_2^j \mid v_1^j) = M(N - K - P) \cdot C(\mathrm{SNR})$, where $C(\mathrm{SNR})$ is the capacity of the BI-AWGN channel.
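The Shannon-limit computation above reduces to the capacity each channel must supply, $C = KH / (\text{parity channel-uses per } K \text{ source bits})$. The following sketch (helper names ours) confirms that both sources require the same capacity, about 0.4545 bit/channel-use, which the text identifies with an SNR of roughly $-0.4$ dB on the BI-AWGN channel:

```python
import math

def h2(p):
    """Binary entropy function in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def required_capacity(K, parity_uses, H_source):
    """Solve K·H = parity_uses·C for the minimum capacity C (bit/channel-use)."""
    return K * H_source / parity_uses

N, K, P, q = 32, 10, 11, 0.11
c1 = required_capacity(K, N - K, 1.0)        # source 1: H(v1) = 1 bit
c2 = required_capacity(K, N - K - P, h2(q))  # source 2: H(v2|v1) = h2(q)
print(round(c1, 4), round(c2, 4))  # 0.4545 0.4545
```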
As a reference, we consider a DSSCC scheme where both the DSC and the CC are designed using polar codes of length $N_s = N_c = 1024$. The source compression rate (in bit/source-symbol) $R_{s_i}$ of the DSC and the channel code rate (in bit/channel-use) $R_{c_i}$ of the CC for source $V_i$, $i = 1, 2$, are listed in Table 1. The rate of this DSSCC scheme, in channel-uses/source-symbol, can be calculated by
$$\tilde{R}_i = \frac{R_{s_i}}{R_{c_i}}, \quad i = 1, 2.$$
In the source layer, every $N_s$ bits of the source message are grouped into a block for DSC, and the polar-code-based DSC is designed as follows. After the polar transform $u = [v_1^1 \oplus v_2^1, v_1^2 \oplus v_2^2, \ldots, v_1^{N_s} \oplus v_2^{N_s}] G_N$, the BERs $P_e(u_i)$ of $u$ are calculated with density evolution. As shown in Figure 6, both $u_1 = v_1 G_N$ and $u_2 = v_2 G_N$ are sorted in descending order of $P_e(u_i)$, and the bits represented by black boxes are the compressed bits that are fed into the buffer for the CC. At the receiver, the decoder first tries to recover $u_1 \oplus u_2$ with LLRs $\log \frac{1-q}{q}$; then the bits represented by gray boxes are reconstructed with some modulo-2 additions. Finally, the sources are obtained by $\hat{v}_1 = \hat{u}_1 G_N$ and $\hat{v}_2 = \hat{u}_2 G_N$. In the channel layer, every $N_c R_{c_i}$ compressed bits in the buffer are grouped into a block for the $(N_c, N_c R_{c_i})$ CC based on polar codes.
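The rate formula above recovers the common operating point $\tilde{R} = (2.2, 1.1)$ from every row of Table 1 (the factors of 1024 cancel). A quick check (the dictionary layout is ours):

```python
# (R_s1, R_s2), (R_c1, R_c2) from Table 1, all in units of x/1024:
schemes = {
    "DSSCC 1": ((1024, 650), (465, 591)),
    "DSSCC 2": ((900, 900), (409, 818)),
    "DSSCC 3": ((1024, 900), (465, 818)),
}
for name, (Rs, Rc) in schemes.items():
    rates = tuple(round(s / c, 1) for s, c in zip(Rs, Rc))
    print(name, rates)  # every scheme yields (2.2, 1.1)
```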
In addition, we consider the DJSCC scheme based on classic SPCs as a reference. For source $V_1$, the $(64 \cdot 32, 64 \cdot 10)$ SPC is constructed via Gaussian approximation, where the underlying channel is a BI-AWGN channel with the equivalent capacity $C_{\mathrm{equ}}^1 = \frac{K}{N}(1 - H(v_1^j)) + \frac{N - K}{N} C_1 = \frac{N - K}{N} C_1$. For source $V_2$, the $(64 \cdot 32, 64 \cdot 10)$ SPC is constructed via Gaussian approximation with the equivalent capacity $C_{\mathrm{equ}}^2 = \frac{K}{N}(1 - H(v_2^j \mid v_1^j)) + \frac{N - K - P}{N} C_2 = \frac{K}{2N} + \frac{N - K - P}{N} C_2$. Since the conventional puncturing scheme is not applicable in this situation, $64 \cdot 11$ parity bits of source $V_2$ are randomly punctured.

5.1. Performance under SC Decoders

First, we investigate the performance of the proposed scheme, the DSSCC schemes, and the DJSCC scheme with classic SPCs under the SC decoder. Figure 7 shows the BER performance versus the SNR of the transmission channels. In this figure, lines labeled "Proposed..." represent the BERs of the proposed scheme, lines labeled "DSSCC..." represent the BERs of the DSSCC schemes with the different values of $R_{s_i}, R_{c_i}$ shown in Table 1, and lines labeled "Classic SPC..." represent the DJSCC scheme with classic SPCs. From these simulation results, we can see that the proposed scheme outperforms the DSSCC schemes and the DJSCC scheme with classic SPCs by approximately 0.2∼2 dB. An "error-floor" of "DSSCC 1" can be observed starting from 3.5 dB; it is caused entirely by the source joint decoder (the channel decoders deliver almost error-free codewords from 3.5 dB to 5 dB). To avoid this error, a higher compression rate $R_{s_i}$ is required, which is why no "error-floor" is observed for "DSSCC 2" and "DSSCC 3".

5.2. Performance under CA-SCL Decoders

For short or moderate code-lengths, the performance of polar codes under the SC decoder is unsatisfactory. To improve the performance, Tal and Vardy proposed the CRC-aided SC list (CA-SCL) decoder for polar codes [22]. Next, we investigate the BER performance of the proposed scheme together with the two reference schemes under the CA-SCL decoder. For the QSPCs and punctured QSPCs, the systematic bits contain a 16-bit CRC, i.e., the length of the source message is $MK - 16$. At the decoding stage, the LLRs of these 16 bits are initialized to 0, and the list size is set to 32. Taking the CRC into consideration, the rates of the DJSCC scheme become $\tilde{R} = (2.256, 1.128)$ in channel-uses/source-symbol. In the DSSCC schemes, the information bits of both the DSC and the CC contain a 16-bit CRC, and the list size is also set to 32.
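The CRC-adjusted rates quoted above follow from dividing the transmitted channel-uses by the shortened message length $MK - 16$. A quick arithmetic check:

```python
M, N, K, P, crc = 64, 32, 10, 11, 16
msg_bits = M * K - crc                 # 624 source-message bits per block
r1 = M * (N - K) / msg_bits            # source 1: 1408 parity channel-uses
r2 = M * (N - K - P) / msg_bits        # source 2: 704 after puncturing
print(round(r1, 3), round(r2, 3))      # 2.256 1.128
```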
Figure 8 shows the BER performance under the CA-SCL decoder. Compared with the SC decoder, the performance of every scheme improves significantly. Owing to its sensible rate allocation, "DSSCC 1" performs best among the DSSCC schemes and comes very close to the proposed scheme, whereas the other two DSSCC schemes remain unsatisfactory. It can be observed that the proposed scheme outperforms "DSSCC 1" and the DJSCC scheme with classic SPCs by approximately 0.2∼1 dB.

5.3. Complexity

For each distributed source, the encoding complexity of both reference schemes is $O(N \log N)$. The encoding complexity of the proposed scheme is slightly larger, namely $O(N + N \log N)$. Owing to the low encoding complexity, these schemes are well-suited for WSNs.
To investigate the decoding complexity, we consider adaptive CA-SCL decoders [23], in which the list size is expanded whenever a decoding error is detected by the CRC. These decoders have $O(\bar{L} N \log N)$ complexity, where $\bar{L}$ is the average list size. In the simulations, the maximum allowable list size is set to $L_{\max} = 32$, and the average list size $\bar{L}$ for decoding both sources is recorded. The complexity (per source-symbol) can be approximated as $O(\frac{1}{MK} \bar{L} N \log N)$ for the proposed scheme and the DJSCC scheme with classic SPCs, and $O(\frac{1}{N_s} \bar{L} N \log N)$ for the DSSCC schemes. As shown in Figure 9, "DSSCC 1" has lower decoding complexity in the high- and low-SNR regions, while the proposed scheme has lower decoding complexity in the middle-SNR region. In general, the three schemes have similar decoding complexity under adaptive CA-SCL decoders.

6. Conclusions

In this paper, we have constructed two kinds of polar-like codes (QSPCs and punctured QSPCs), which can be used for JSCC with side information, and have proposed a DJSCC scheme based on them. In this scheme, the distributed transmission problem is transformed into a problem of JSCC with side information at the receiver, which is solved by QSPCs and punctured QSPCs. For infinite code-length, we have proved that the proposed scheme is asymptotically optimal. For finite code-length, the simulation results have verified that the proposed scheme outperforms the DSSCC scheme with polar codes and the DJSCC scheme with classic SPCs.

Author Contributions

L.J. and H.Y. initiated the idea and designed the experiments. L.J. analyzed the data and wrote the manuscript. H.Y. made the revisions for the paper. All authors have read and approved the final manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Arikan, E. Channel Polarization: A Method For Constructing Capacity-achieving Codes For Symmetric Binary-input Memoryless Channels. IEEE Trans. Inf. Theory 2009, 55, 3051–3073. [Google Scholar] [CrossRef] [Green Version]
  2. Arikan, E. Source polarization. In Proceedings of the IEEE International Symposium on Information Theory, Austin, TX, USA, 13–18 June 2010; pp. 899–903. [Google Scholar]
  3. Zhong, Y.; Alajaji, F.; Campbell, L.L. On the joint source-channel coding error exponent for discrete memoryless systems. IEEE Trans. Inf. Theory 2006, 52, 1450–1468. [Google Scholar] [CrossRef] [Green Version]
  4. Jeanne, M.; Carlach, J.; Siohan, P. Joint source-channel decoding of variable-length codes for convolutional codes and turbo codes. IEEE Trans. Comm. 2005, 53, 10–15. [Google Scholar] [CrossRef]
  5. Garcia-Frias, J. Joint source-channel decoding of correlated sources over noisy channels. In Proceedings of the Data Compression Conference (DCC), Snowbird, UT, USA, 27–29 March 2001; pp. 283–292. [Google Scholar]
  6. Daneshgaran, F.; Laddomada, M.; Mondin, M. LDPC-based channel coding of correlated sources with iterative joint decoding. IEEE Trans. Commun. 2006, 54, 577–582. [Google Scholar] [CrossRef]
  7. Fresia, M.; Perez-Cruz, F.; Poor, H.V.; Verdu, S. Joint Source and Channel Coding. IEEE Signal Process. Mag. 2010, 27, 104–113. [Google Scholar] [CrossRef]
  8. Wang, Y.; Qin, M.; Narayanan, K.R.; Jiang, A.; Bandic, Z. Joint source-channel decoding of polar codes for language-based sources. In Proceedings of the 2016 IEEE Global Communications Conference (GLOBECOM), Washington, DC, USA, 4–8 December 2016. [Google Scholar]
  9. Yaacoub, C.; Sarkis, M. Systematic polar codes for joint source-channel coding in wireless sensor networks and the Internet of Things. Proc. Comput. Sci. 2017, 110, 266–273. [Google Scholar] [CrossRef]
  10. Jin, L.; Yang, P.; Yang, H. Distributed joint source-channel decoding using systematic polar codes. IEEE Commun. Lett. 2018, 22, 49–52. [Google Scholar] [CrossRef]
  11. Jin, L.; Yang, H. Joint Source-Channel Polarization With Side Information. IEEE Access 2018, 6, 7340–7349. [Google Scholar] [CrossRef]
  12. Arikan, E. Systematic Polar Coding. IEEE Comm. Lett. 2011, 15, 860–862. [Google Scholar] [CrossRef] [Green Version]
  13. Slepian, D.; Wolf, J. Noiseless coding of correlated information sources. IEEE Trans. Inf. Theory 1973, 19, 471–480. [Google Scholar] [CrossRef]
  14. Pradhan, S.S.; Ramchandran, K. Distributed Source Coding Using Syndromes (DISCUS): Design and Construction. IEEE Trans. Inf. Theory 2003, 3, 626–643. [Google Scholar] [CrossRef]
  15. Aaron, A.; Girod, B. Compression with side information using turbo codes. In Proceedings of the DCC 2002 Data Compression Conference, Snowbird, UT, USA, 2–4 April 2002; pp. 252–261. [Google Scholar]
  16. Liveris, A.D.; Xiong, Z.; Georghiades, C.N. Compression of binary sources with side information at the decoder using LDPC codes. IEEE Comm. Lett. 2002, 6, 440–442. [Google Scholar] [CrossRef]
  17. Onay, S. Polar codes for distributed source coding. Electron. Lett. 2013, 49, 346–348. [Google Scholar] [CrossRef] [Green Version]
  18. Shahid, I.; Yahampath, P. Distributed Joint Source-Channel Coding Using Unequal Error Protection LDPC Codes. IEEE Trans. Commun. 2013, 61, 3472–3482. [Google Scholar] [CrossRef]
  19. Cen, F. Distributed Joint Source and Channel Coding with Low-Density Parity-Check Codes. IEEE Comm. Lett. 2013, 17, 2336–2339. [Google Scholar] [CrossRef] [Green Version]
  20. Gunduz, D.; Erkip, E.; Goldsmith, A.; Poor, H.V. Source and Channel Coding for Correlated Sources Over Multiuser Channels. IEEE Trans. Inf. Theory 2009, 55, 3927–3944. [Google Scholar] [CrossRef] [Green Version]
  21. Arikan, E.; Telatar, E. On the rate of channel polarization. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Seoul, Korea, 28 June–3 July 2009; pp. 1493–1495. [Google Scholar]
  22. Tal, I.; Vardy, A. List Decoding of Polar Codes. IEEE Trans. Inf. Theory 2015, 61, 2213–2226. [Google Scholar] [CrossRef]
  23. Li, B.; Shen, H.; Tse, D. An Adaptive Successive Cancellation List Decoder for Polar Codes with Cyclic Redundancy Check. IEEE Comm. Lett. 2012, 16, 2044–2047. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Distributed separate source-channel coding (DSSCC) scheme.
Figure 2. Distributed joint source-channel coding (DJSCC) scheme.
Figure 3. The codeword structure of quasi-uniform systematic polar codes (QSPCs) with $N = 4$, $K = 1$.
Figure 4. The codeword structure of punctured QSPCs with N = 4 , K = 1 , P = 1 .
Figure 5. Joint decoder.
Figure 6. Bit selection of distributed source coding (DSC) based on polar codes.
Figure 7. The bit error rate (BER) performance under the SC decoder.
Figure 8. The BER performance under the CRC-aided SC list (CA-SCL) decoder.
Figure 9. The decoding complexity (per source-symbol) under adaptive CA-SCL decoders.
Table 1. Distributed separate source-channel coding (DSSCC) schemes with different $R_{s_i}$, $R_{c_i}$.

            R_s1        R_s2        R_c1        R_c2        R̃
DSSCC 1     1024/1024   650/1024    465/1024    591/1024    (2.2, 1.1)
DSSCC 2     900/1024    900/1024    409/1024    818/1024    (2.2, 1.1)
DSSCC 3     1024/1024   900/1024    465/1024    818/1024    (2.2, 1.1)

Jin, L.; Yang, H. Distributed Joint Source-Channel Coding Using Quasi-Uniform Systematic Polar Codes. Entropy 2018, 20, 806. https://0-doi-org.brum.beds.ac.uk/10.3390/e20100806
