Article

Sharp Second-Order Pointwise Asymptotics for Lossless Compression with Side Information

Department of Engineering, University of Cambridge, Trumpington Street, Cambridge CB2 1PZ, UK
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Submission received: 22 May 2020 / Revised: 19 June 2020 / Accepted: 22 June 2020 / Published: 25 June 2020
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract

The problem of determining the best achievable performance of arbitrary lossless compression algorithms is examined, when correlated side information is available at both the encoder and decoder. For arbitrary source-side information pairs, the conditional information density is shown to provide a sharp asymptotic lower bound for the description lengths achieved by an arbitrary sequence of compressors. This implies that for ergodic source-side information pairs, the conditional entropy rate is the best achievable asymptotic lower bound to the rate, not just in expectation but with probability one. Under appropriate mixing conditions, a central limit theorem and a law of the iterated logarithm are proved, describing the inevitable fluctuations of the second-order asymptotically best possible rate. An idealised version of Lempel-Ziv coding with side information is shown to be universally first- and second-order asymptotically optimal, under the same conditions. These results are in part based on a new almost-sure invariance principle for the conditional information density, which may be of independent interest.

1. Introduction

It is well-known that the presence of correlated side information can potentially offer dramatic benefits for data compression [1,2]. Important applications where such side information is naturally present include the compression of genomic data [3,4], file and software management [5,6], and image and video compression [7,8].
In practice, the most common approach to the design of effective compression methods with side information is based on generalisations of the Lempel-Ziv family of algorithms [9,10,11,12,13]. A different approach based on grammar-based codes was developed in [14], turbo codes were applied in [15], and a generalised version of context-tree weighting was used in [16].
In this work, we examine the theoretical fundamental limits of the best possible performance that can be achieved in such problems. Let $(X,Y)=\{(X_n,Y_n)\,;\,n\geq 1\}$ be a source-side information pair; $X$ is the source to be compressed, and $Y$ is the associated side information process, which is assumed to be available to both the encoder and the decoder. Under appropriate conditions, the best average rate that can be achieved asymptotically [2] is the conditional entropy rate,
$$H(X|Y) = \lim_{n\to\infty}\frac{1}{n}H(X_1^n|Y_1^n),\quad\text{bits/symbol},$$
where $X_1^n=(X_1,X_2,\ldots,X_n)$, $Y_1^n=(Y_1,Y_2,\ldots,Y_n)$, and $H(X_1^n|Y_1^n)$ denotes the conditional entropy of $X_1^n$ given $Y_1^n$; precise definitions will be given in Section 2.
Our main goal is to derive sharp asymptotic expressions for the optimum compression rate (with side information available to both the encoder and decoder), not only in expectation but with probability 1. In addition to the best first-order performance, we also determine the best rate at which this performance can be achieved, as a function of the length of the data being compressed. Furthermore, we consider an idealised version of a Lempel-Ziv compression algorithm, and we show that it can achieve asymptotically optimal first- and second-order performance, universally over a broad class of stationary and ergodic source-side information pairs ( X , Y ) .
Specifically, we establish the following. In Section 2.1 we describe the theoretically optimal one-to-one compressor $f_n^*(X_1^n|Y_1^n)$, for arbitrary source-side information pairs $(X,Y)$. In Section 2.2 we prove our first result, stating that the description lengths $\ell(f_n(X_1^n|Y_1^n))$ can be well-approximated, with probability one, by the conditional information density, $-\log P(X_1^n|Y_1^n)$. Theorem 2 states that for any jointly stationary and ergodic source-side information pair $(X,Y)$, the best asymptotically achievable compression rate is $H(X|Y)$ bits/symbol, with probability 1. This generalises Kieffer's corresponding result [17] to the case of compression with side information.
Furthermore, in Section 2.4 we show that there is a sequence of random variables $\{Z_n\}$ such that the description lengths $\ell(f_n(X_1^n|Y_1^n))$ of any sequence of compressors $\{f_n\}$ satisfy a "one-sided" central limit theorem (CLT): Eventually, with probability 1,
$$\ell(f_n(X_1^n|Y_1^n)) \geq nH(X|Y) + \sqrt{n}\,Z_n + o(\sqrt{n}),\quad\text{bits},$$
where the $Z_n$ converge to a $N(0,\sigma^2(X|Y))$ distribution, and the term $o(\sqrt{n})$ is negligible compared to $\sqrt{n}$. The lower bound (1) is established in Theorem 3, where it is also shown to be asymptotically achievable. This means that the rate obtained by any sequence of compressors has inevitable $O(\sqrt{n})$ fluctuations around the conditional entropy rate, and that the size of these fluctuations is quantified by the conditional varentropy rate,
$$\sigma^2(X|Y) = \lim_{n\to\infty}\frac{1}{n}\mathrm{Var}\big(\log P(X_1^n|Y_1^n)\big).$$
This generalises the minimal coding variance of [18]. The bound (1) holds for a broad class of source-side information pairs, including all Markov chains with positive transition probabilities. Under the same conditions, a corresponding "one-sided" law of the iterated logarithm (LIL) is established in Theorem 4, which gives a precise description of the inevitable almost-sure fluctuations above $H(X|Y)$ for any sequence of compressors.
The proofs of all the results in Section 2.3 and Section 2.4 are based, in part, on analogous asymptotics for the conditional information density, $-\log P(X_1^n|Y_1^n)$. These are established in Section 2.5, where we state and prove a corresponding CLT and LIL for $-\log P(X_1^n|Y_1^n)$. These results, in turn, follow from the almost sure invariance principle for $-\log P(X_1^n|Y_1^n)$ proved in Appendix A. Theorem A1, which is of independent interest, generalises the invariance principle established for the (unconditional) information density $-\log P(X_1^n)$ by Philipp and Stout [19]. In fact, Theorem A1, along with the identification of the conditions under which it holds (Assumption 1 in Section 2.4), are the most novel contributions of this work.
In a different direction, Nomura and Han [20] establish finer coding theorems for the Slepian-Wolf problem, when the side information is only available to the decoder. There, they obtain general second-order asymptotics for the best achievable rate region, under an excess-rate probability constraint.
Section 3 is devoted to universal compression. We consider a simple, idealised version of Lempel-Ziv coding with side information. As in the case of Lempel-Ziv compression without side information [21,22], the performance of this scheme is determined by the asymptotics of a family of conditional recurrence times, R n = R n ( X | Y ) . Under appropriate, general conditions on the source-side information pair ( X , Y ) , in Theorem 8 we show that the ideal description lengths, log R n , can be well-approximated by the conditional information density log P ( X 1 n | Y 1 n ) . Combining this with our earlier results on the conditional information density, in Corollary 1 and Theorem 9 we show that the compression rate of this scheme converges to H ( X | Y ) , with probability 1, and that it is universally second-order optimal. The results of this section generalise the corresponding asymptotics without side information established in [23,24].
The proofs of the more technical results needed in Section 2 and Section 3 are given in the appendix.

2. Pointwise Asymptotics

In this section, we derive general, fine asymptotic bounds for the description lengths of arbitrary compressors with side information, as well as corresponding achievability results.

2.1. Preliminaries

Let $X=\{X_n\,;\,n\geq 1\}$ be an arbitrary source to be compressed, and let $Y=\{Y_n\,;\,n\geq 1\}$ be an associated side information process. We let $\mathcal{X}$, $\mathcal{Y}$ denote their finite alphabets, respectively, and we refer to the joint process $(X,Y)=\{(X_n,Y_n)\,;\,n\geq 1\}$ as a source-side information pair.
Let $x_1^n=(x_1,x_2,\ldots,x_n)$ be a source string, and let $y_1^n=(y_1,y_2,\ldots,y_n)$ be an associated side information string, available to both the encoder and the decoder. A fixed-to-variable, one-to-one compressor with side information, of blocklength $n$, is a collection of functions $f_n$, where each $f_n(x_1^n|y_1^n)$ takes a value in the set of all finite-length binary strings,
$$\{0,1\}^* = \bigcup_{k=0}^{\infty}\{0,1\}^k = \{\emptyset,0,1,00,01,000,\ldots\},$$
with the convention that $\{0,1\}^0=\{\emptyset\}$ consists of just the empty string $\emptyset$ of length zero. For each $y_1^n\in\mathcal{Y}^n$, we assume that $f_n(\cdot\,|y_1^n)$ is a one-to-one function from $\mathcal{X}^n$ to $\{0,1\}^*$, so that the compressed binary string $f_n(x_1^n|y_1^n)$ is always correctly decodable.
The main figure of merit in lossless compression is, of course, the description length,
$$\ell(f_n(x_1^n|y_1^n)) = \text{length of } f_n(x_1^n|y_1^n),\quad\text{bits},$$
where, throughout, $\ell(s)$ denotes the length, in bits, of a binary string $s$. It is easy to see that, under quite general criteria, the optimal compressor $f_n^*$ is easy to describe; see [25] for an extensive discussion. For $1\leq i\leq j$, we use the shorthand notation $z_i^j$ for the string $(z_i,z_{i+1},\ldots,z_j)$, and similarly $Z_i^j$ for the corresponding collection of random variables $Z_i^j=(Z_i,Z_{i+1},\ldots,Z_j)$.
Definition 1
(The optimal compressor $f_n^*$). For each side information string $y_1^n$, $f_n^*(\cdot\,|y_1^n)$ is the optimal compressor for the distribution $P(X_1^n=\cdot\,|\,Y_1^n=y_1^n)$, namely the compressor that orders the strings $x_1^n$ in order of decreasing probability $P(X_1^n=x_1^n\,|\,Y_1^n=y_1^n)$, and assigns them codewords from $\{0,1\}^*$ in lexicographic order.
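To make Definition 1 concrete, the following minimal Python sketch builds $f_n^*(\cdot\,|y_1^n)$ for a single, hypothetical conditional PMF (the distribution and blocklength are illustrative assumptions, not an example from the paper): strings are ranked by decreasing conditional probability and matched with the elements of $\{0,1\}^*$ in lexicographic order, so the $k$-th most likely string receives a codeword of length $\lfloor\log_2 k\rfloor$.

```python
# A minimal sketch (illustrative only) of the optimal compressor of Definition 1,
# assuming the full conditional PMF over source strings is available as a dict.

def binary_codewords():
    """Yield the strings of {0,1}* in lexicographic (length-then-value) order."""
    yield ""                       # the empty string, of length 0
    k = 1
    while True:
        for i in range(2 ** k):
            yield format(i, "0{}b".format(k))
        k += 1

def optimal_compressor(p_x_given_y):
    """Map each source string to a codeword, most probable strings first."""
    ranked = sorted(p_x_given_y, key=p_x_given_y.get, reverse=True)
    codes = binary_codewords()
    return {x: next(codes) for x in ranked}

# Hypothetical example: conditional PMF of X_1^2 given some fixed y_1^2.
p = {"00": 0.5, "01": 0.25, "10": 0.15, "11": 0.10}
f_star = optimal_compressor(p)
# Description lengths are floor(log2(rank)): here 0, 1, 1 and 2 bits.
print({x: len(c) for x, c in f_star.items()})
```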

2.2. The Conditional Information Density

Definition 2
(Conditional information density). For an arbitrary source-side information pair $(X,Y)$, the conditional information density of blocklength $n$ is the random variable $-\log P(X_1^n|Y_1^n) = -\log P_{X_1^n|Y_1^n}(X_1^n|Y_1^n)$.
[Throughout the paper, ‘log’ denotes ‘ log 2 ’, the logarithm taken to base 2, and all familiar information theoretic quantities are expressed in bits.]
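As a small illustration of Definition 2, the sketch below evaluates the conditional information density on a simulated sample path of a hypothetical memoryless pair (a fair bit $Y_i$ observed through a binary symmetric channel with crossover probability 0.1); the joint PMF and the simulation are assumptions made purely for illustration.

```python
# A minimal sketch: conditional information density -log2 P(x_1^n | y_1^n)
# for a hypothetical memoryless source-side information pair.
import math, random

random.seed(0)
joint = {(0, 0): 0.45, (0, 1): 0.05, (1, 0): 0.05, (1, 1): 0.45}   # P(X_1=x, Y_1=y)
p_y = {0: 0.5, 1: 0.5}                                              # marginal of Y_1

def sample_pair():
    r, acc = random.random(), 0.0
    for (x, y), p in joint.items():
        acc += p
        if r <= acc:
            return x, y
    return x, y   # guard against floating-point round-off

n = 1000
pairs = [sample_pair() for _ in range(n)]
# For a memoryless pair, the conditional information density is a sum of
# per-symbol terms -log2 P(x_i | y_i).
info_density = sum(-math.log2(joint[(x, y)] / p_y[y]) for x, y in pairs)
print(info_density / n)   # per-symbol value, roughly 0.47 bits here
```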
The starting point is the following almost sure (a.s.) approximation result relating the description lengths $\ell(f_n(X_1^n|Y_1^n))$ of an arbitrary sequence of compressors to the conditional information density $-\log P(X_1^n|Y_1^n)$ of an arbitrary source-side information pair $(X,Y)$. When it causes no confusion, we drop the subscripts from PMFs and conditional PMFs, e.g., simply writing $P(x_1^n|y_1^n)$ for $P_{X_1^n|Y_1^n}(x_1^n|y_1^n)$ as in the definition above. Recall the definition of the optimal compressors $\{f_n^*\}$ from Section 2.1.
Theorem 1.
For any source-side information pair $(X,Y)$, and any sequence $\{B_n\}$ that grows faster than logarithmically, i.e., such that $B_n/\log n\to\infty$ as $n\to\infty$, we have:
(a) For any sequence of compressors with side information $\{f_n\}$:
$$\liminf_{n\to\infty}\frac{\ell(f_n(X_1^n|Y_1^n)) - [-\log P(X_1^n|Y_1^n)]}{B_n} \geq 0,\quad a.s.$$
(b) The optimal compressors $\{f_n^*\}$ achieve the above bound with equality.
Proof. 
Fix an arbitrary $\epsilon>0$ and let $\tau=\tau_n=\epsilon B_n$. Applying the general converse in ([25], Theorem 3.3) with $X_1^n,Y_1^n$ in place of $X,Y$ and $\mathcal{X}^n,\mathcal{Y}^n$ in place of $\mathcal{X},\mathcal{Y}$ gives,
$$P\big(\ell(f_n(X_1^n|Y_1^n)) \leq -\log P(X_1^n|Y_1^n) - \epsilon B_n\big) \leq 2^{\log n - \epsilon B_n}\,(\log|\mathcal{X}|+1),$$
which is summable in $n$. Therefore, by the Borel-Cantelli lemma we have that, eventually, a.s.,
$$\ell(f_n(X_1^n|Y_1^n)) + \log P(X_1^n|Y_1^n) > -\epsilon B_n.$$
Since $\epsilon>0$ was arbitrary, this implies $(a)$. Part $(b)$ follows from $(a)$ together with the fact that $\ell(f_n^*(X_1^n|Y_1^n)) + \log P(X_1^n|Y_1^n) \leq 0$, a.s., by the general achievability result in ([25], Theorem 3.1). □
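The achievability fact used in part (b), namely that $\ell(f_n^*(x_1^n|y_1^n)) + \log P(x_1^n|y_1^n)\le 0$ for every string, can be checked numerically: the optimal code assigns the $k$-th most likely string a codeword of length $\lfloor\log_2 k\rfloor$, and since at most $1/P(x_1^n|y_1^n)$ strings can be at least as likely as $x_1^n$, this never exceeds $-\log P(x_1^n|y_1^n)$. A minimal sketch, using a hypothetical conditional PMF:

```python
# Numerical check (illustrative, hypothetical conditional PMF): the optimal
# description lengths floor(log2(rank)) never exceed -log2 P(x | y).
import math

p = {"00": 0.5, "01": 0.25, "10": 0.15, "11": 0.10}   # P(X_1^2 = . | Y_1^2 = y)
ranked = sorted(p, key=p.get, reverse=True)
for rank, x in enumerate(ranked, start=1):
    length = int(math.floor(math.log2(rank)))         # length of f*_n(x | y)
    assert length <= -math.log2(p[x])                 # l(f*) + log P <= 0
    print(x, length, round(-math.log2(p[x]), 3))
```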

2.3. First-Order Asymptotics

For any source-side information pair $(X,Y)$, the conditional entropy rate $H(X|Y)$ is defined as:
$$H(X|Y) = \limsup_{n\to\infty}\frac{1}{n}H(X_1^n|Y_1^n).$$
Throughout, $H(Z)$ and $H(Z|W)$ denote the discrete entropy of $Z$ and the conditional entropy of $Z$ given $W$, in bits. If $(X,Y)$ are jointly stationary, then the above $\limsup$ is in fact a limit, and it is equal to $H(X,Y)-H(Y)$, where $H(X,Y)$ and $H(Y)$ are the entropy rates of $(X,Y)$ and of $Y$, respectively [2]. Moreover, if $(X,Y)$ are also jointly ergodic, then by applying the Shannon-McMillan-Breiman theorem [2] to $Y$ and to the pair $(X,Y)$, we obtain its conditional version:
$$-\frac{1}{n}\log P(X_1^n|Y_1^n) \to H(X|Y),\quad a.s.$$
The next result states that the conditional entropy rate is the best asymptotically achievable compression rate, not only in expectation but also with probability 1. It is a consequence of Theorem 1 with $B_n=n$, combined with (2).
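For a quick numerical illustration of the identity $H(X|Y)=H(X,Y)-H(Y)$ in the memoryless case, the sketch below computes the conditional entropy of a hypothetical joint PMF (a fair bit observed through a binary symmetric channel with crossover 0.1); the example is an assumption for illustration only.

```python
# A minimal sketch: conditional entropy H(X|Y) = H(X,Y) - H(Y) in bits,
# computed from a hypothetical joint PMF joint[x][y] = P(X=x, Y=y).
import math

def conditional_entropy_bits(joint):
    p_y = {}
    for x, row in joint.items():
        for y, p in row.items():
            p_y[y] = p_y.get(y, 0.0) + p
    h_xy = -sum(p * math.log2(p) for row in joint.values() for p in row.values() if p > 0)
    h_y = -sum(p * math.log2(p) for p in p_y.values() if p > 0)
    return h_xy - h_y

# Example: X is a noisy copy of a fair bit Y (crossover probability 0.1).
joint = {0: {0: 0.45, 1: 0.05}, 1: {0: 0.05, 1: 0.45}}
print(conditional_entropy_bits(joint))   # ~0.469 bits/symbol
```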
Theorem 2.
Suppose $(X,Y)$ is a jointly stationary and ergodic source-side information pair with conditional entropy rate $H(X|Y)$.
(a) For any sequence of compressors with side information $\{f_n\}$:
$$\liminf_{n\to\infty}\frac{1}{n}\ell(f_n(X_1^n|Y_1^n)) \geq H(X|Y),\quad a.s.$$
(b) The optimal compressors $\{f_n^*\}$ achieve the above bound with equality.

2.4. Finer Asymptotics

The refinements of Theorem 2 presented in this section will be derived as consequences of the general approximation results in Theorem 1, combined with corresponding refined asymptotics for the conditional information density $-\log P(X_1^n|Y_1^n)$. For clarity of exposition, these are stated separately, in Section 2.5 below.
The results of this section will be established for a class of jointly stationary and ergodic source-side information pairs ( X , Y ) , that includes all Markov chains with positive transition probabilities. The relevant conditions, in their most general form, will be given in terms of the following mixing coefficients.
Definition 3.
Suppose $Z=\{Z_n\,;\,n\in\mathbb{Z}\}$ is a stationary process on a finite alphabet $\mathcal{Z}$. For any pair of indices $i\leq j$, let $\mathcal{F}_i^j$ denote the $\sigma$-algebra generated by $Z_i^j$. For $d\geq 1$, define:
$$\alpha^{(Z)}(d) = \sup\big\{|P(A\cap B) - P(A)P(B)|\;:\;A\in\mathcal{F}_{-\infty}^0,\ B\in\mathcal{F}_d^{\infty}\big\},$$
$$\gamma^{(Z)}(d) = \max_{z\in\mathcal{Z}}\ \mathbb{E}\big|\log P(Z_0=z\,|\,Z_{-\infty}^{-1}) - \log P(Z_0=z\,|\,Z_{-d}^{-1})\big|.$$
Note that if $Z$ is an ergodic Markov chain of order $k$, then $\alpha^{(Z)}(d)$ decays exponentially fast [26], and $\gamma^{(Z)}(d)=0$ for all $d\geq k$. Moreover, if $(X,Y)$ is a Markov chain with all positive transition probabilities, then $\gamma^{(Y)}(d)$ also decays exponentially fast; cf. ([27], Lemma 2.1).
Throughout this section we will assume that the following conditions hold:
Assumption 1.
The source-side information pair $(X,Y)$ is stationary and satisfies one of the following three conditions:
(a) $(X,Y)$ is a Markov chain with all positive transition probabilities; or
(b) $(X,Y)$ as well as $Y$ are $k$th order, irreducible and aperiodic Markov chains; or
(c) $(X,Y)$ is jointly ergodic and satisfies the following mixing conditions: [Our source-side information pairs $(X,Y)$ are only defined for $(X_n,Y_n)$ with $n\geq 1$, whereas the coefficients $\alpha^{(Z)}(d)$ and $\gamma^{(Z)}(d)$ are defined for two-sided sequences $\{Z_n\,;\,n\in\mathbb{Z}\}$. However, this does not impose an additional restriction, since any one-sided stationary process can be extended to a two-sided one by the Kolmogorov extension theorem [28].]
$$\alpha^{(X,Y)}(d) = O(d^{-336}),\qquad \gamma^{(X,Y)}(d) = O(d^{-48}),\qquad\text{and}\qquad \gamma^{(Y)}(d) = O(d^{-48}).$$
In view of the discussion following Definition 3, $(a)\Rightarrow(c)$ and $(b)\Rightarrow(c)$. Therefore, all results stated under Assumption 1 will be proved under the weakest set of conditions, namely that the mixing conditions (3) hold.
Definition 4.
For a source-side information pair $(X,Y)$, the conditional varentropy rate is:
$$\sigma^2(X|Y) = \limsup_{n\to\infty}\frac{1}{n}\mathrm{Var}\big(\log P(X_1^n|Y_1^n)\big).$$
Under the above assumptions, the $\limsup$ in (4) is in fact a limit. Lemma 1 is proved in Appendix A.
Lemma 1.
Under Assumption 1, the conditional varentropy rate $\sigma^2(X|Y)$ satisfies:
$$\sigma^2(X|Y) = \lim_{n\to\infty}\frac{1}{n}\mathrm{Var}\big(\log P(X_1^n|Y_1^n)\big) = \lim_{n\to\infty}\frac{1}{n}\mathrm{Var}\Big(\log\frac{P(X_1^n,Y_1^n\,|\,X_{-\infty}^0,Y_{-\infty}^0)}{P(Y_1^n\,|\,Y_{-\infty}^0)}\Big).$$
Our first main result in this section is a "one-sided" central limit theorem (CLT), which states that the description lengths $\ell(f_n(X_1^n|Y_1^n))$ of an arbitrary sequence of compressors with side information, $\{f_n\}$, are asymptotically at best Gaussian, with variance $\sigma^2(X|Y)$. Recall the optimal compressors $\{f_n^*\}$ described in Section 2.1.
Theorem 3
(CLT for codelengths). Suppose $(X,Y)$ satisfy Assumption 1, and let $\sigma^2=\sigma^2(X|Y)>0$ denote the conditional varentropy rate (4). Then there exists a sequence of random variables $\{Z_n\,;\,n\geq 1\}$ such that:
(a) For any sequence of compressors with side information, $\{f_n\}$, we have,
$$\liminf_{n\to\infty}\Big\{\frac{\ell(f_n(X_1^n|Y_1^n)) - H(X_1^n|Y_1^n)}{\sqrt{n}} - Z_n\Big\} \geq 0,\quad a.s.,$$
where $Z_n \to N(0,\sigma^2)$, in distribution, as $n\to\infty$.
(b) The optimal compressors $\{f_n^*\}$ achieve the lower bound in (5) with equality.
Proof. 
Letting $Z_n = [-\log P(X_1^n|Y_1^n) - H(X_1^n|Y_1^n)]/\sqrt{n}$, $n\geq 1$, and taking $B_n=\sqrt{n}$, both results follow by combining the approximation results of Theorem 1 with the corresponding CLT for the conditional information density in Theorem 5. □
Our next result is in the form of a "one-sided" law of the iterated logarithm (LIL), which states that, with probability 1, the description lengths of any compressor with side information will have inevitable fluctuations of order $\sqrt{2\sigma^2 n\log_e\log_e n}$ bits around $nH(X|Y)$; throughout, $\log_e$ denotes the natural logarithm to base $e$.
Theorem 4
(LIL for codelengths). Suppose $(X,Y)$ satisfy Assumption 1, and let $\sigma^2=\sigma^2(X|Y)>0$ denote the conditional varentropy rate (4). Then:
(a) For any sequence of compressors with side information, $\{f_n\}$, we have:
$$\limsup_{n\to\infty}\frac{\ell(f_n(X_1^n|Y_1^n)) - H(X_1^n|Y_1^n)}{\sqrt{2n\log_e\log_e n}} \geq \sigma,\quad a.s.,$$
$$\text{and}\quad \liminf_{n\to\infty}\frac{\ell(f_n(X_1^n|Y_1^n)) - H(X_1^n|Y_1^n)}{\sqrt{2n\log_e\log_e n}} \geq -\sigma,\quad a.s.$$
(b) The optimal compressors $\{f_n^*\}$ achieve the lower bounds in (6) and (7) with equality.
Proof. 
Taking $B_n = \sqrt{2n\log_e\log_e n}$, the results of the theorem again follow by combining the approximation results of Theorem 1 with the corresponding LIL for the conditional information density in Theorem 6. □
Remark 1.
• Although the results in Theorems 3 and 4 are stated for one-to-one compressors $\{f_n\}$, they remain valid for the class of prefix-free compressors. Since prefix-free codes are certainly one-to-one, the converse bounds in Theorems 3(a) and 4(a) are valid as stated, while for the achievability results it suffices to consider compressors $f_n^p$ with description lengths $\ell(f_n^p(x_1^n|y_1^n)) = \lceil -\log P(x_1^n|y_1^n)\rceil$, and then apply Theorem 5.
• Theorem 3 says that the compression rate of any sequence of compressors $\{f_n\}$ will have at best Gaussian fluctuations around $H(X|Y)$,
$$\frac{1}{n}\ell(f_n(X_1^n|Y_1^n)) \approx N\Big(H(X|Y),\,\frac{\sigma^2(X|Y)}{n}\Big),\quad\text{bits/symbol},$$
and similarly Theorem 4 says that, with probability 1, the description lengths will have inevitable fluctuations of approximately $\pm\sqrt{2n\sigma^2\log_e\log_e n}$ bits around $nH(X|Y)$.
As both of these quantities vanish when $\sigma^2(X|Y)$ is zero, we note that if the source-side information pair $(X,Y)$ is memoryless, so that the pairs $\{(X_n,Y_n)\}$ are independent and identically distributed, then the conditional varentropy rate reduces to,
$$\sigma^2(X|Y) = \mathrm{Var}\big(\log P(X_1|Y_1)\big),$$
which is equal to zero if and only if, for each $y\in\mathcal{Y}$, the conditional distribution of $X_1$ given $Y_1=y$ is uniform on a subset $\mathcal{X}_y\subset\mathcal{X}$, where all the $\mathcal{X}_y$ have the same cardinality; a small numerical illustration is given in the sketch following this remark.
    In the more general case when both the pair process ( X , Y ) and the side information Y are Markov chains, necessary and sufficient conditions for σ 2 ( X | Y ) to be zero were recently established in [25].
• In analogy with the source dispersion for the problem of lossless compression without side information [29,30], for an arbitrary source-side information pair $(X,Y)$ the conditional dispersion $D(X|Y)$ was recently defined in [25] as,
$$D(X|Y) = \limsup_{n\to\infty}\frac{1}{n}\mathrm{Var}\big[\ell(f_n^*(X_1^n|Y_1^n))\big].$$
There, it was shown that when both the pair $(X,Y)$ and $Y$ itself are irreducible and aperiodic Markov chains, the conditional dispersion coincides with the conditional varentropy rate:
$$D(X|Y) = \lim_{n\to\infty}\frac{1}{n}\mathrm{Var}\big[\ell(f_n^*(X_1^n|Y_1^n))\big] = \sigma^2(X|Y) < \infty.$$
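The following sketch illustrates the memoryless case discussed in Remark 1 above: it computes $\sigma^2(X|Y)=\mathrm{Var}(\log P(X_1|Y_1))$ for two hypothetical joint PMFs, one with non-uniform conditional laws (positive varentropy) and one whose conditional laws are uniform on sets of equal size (zero varentropy). The distributions are assumptions made for illustration.

```python
# A minimal sketch (i.i.d. case): the conditional varentropy rate reduces to
# Var(-log2 P(X_1 | Y_1)), and it vanishes exactly when every conditional law
# P(. | Y_1 = y) is uniform on a set of the same size.
import math

def conditional_varentropy_iid(joint):
    """joint[(x, y)] = P(X_1=x, Y_1=y); returns Var(-log2 P(X_1 | Y_1))."""
    p_y = {}
    for (x, y), p in joint.items():
        p_y[y] = p_y.get(y, 0.0) + p
    mean = sum(p * (-math.log2(p / p_y[y])) for (x, y), p in joint.items() if p > 0)
    second = sum(p * (-math.log2(p / p_y[y])) ** 2 for (x, y), p in joint.items() if p > 0)
    return second - mean ** 2

# Non-uniform conditional laws: strictly positive varentropy (~0.90 here).
print(conditional_varentropy_iid({(0, 0): 0.45, (1, 0): 0.05, (0, 1): 0.05, (1, 1): 0.45}))
# Each conditional law uniform on two symbols: varentropy is zero.
print(conditional_varentropy_iid({(0, 0): 0.25, (1, 0): 0.25, (0, 1): 0.25, (1, 1): 0.25}))
```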

2.5. Asymptotics of the Conditional Information Density

Here we show that the conditional information density itself, $-\log P(X_1^n|Y_1^n)$, satisfies a CLT and an LIL. The next two theorems are consequences of the almost sure invariance principle established in Theorem A1 in Appendix A.
Theorem 5
(CLT for the conditional information density). Suppose $(X,Y)$ satisfy Assumption 1, and let $\sigma^2=\sigma^2(X|Y)>0$ denote the conditional varentropy rate (4). Then, as $n\to\infty$:
$$\frac{-\log P(X_1^n|Y_1^n) - H(X_1^n|Y_1^n)}{\sqrt{n}} \to N(0,\sigma^2),\quad\text{in distribution}.$$
Proof. 
The conditions (3) imply that, as $n\to\infty$, $[nH(X,Y)-H(X_1^n,Y_1^n)]/\sqrt{n}\to 0$ and $[nH(Y)-H(Y_1^n)]/\sqrt{n}\to 0$, cf. [19]; therefore also $[nH(X|Y)-H(X_1^n|Y_1^n)]/\sqrt{n}\to 0$, so it suffices to show that, as $n\to\infty$,
$$\frac{-\log P(X_1^n|Y_1^n) - nH(X|Y)}{\sqrt{n}} \to N(0,\sigma^2),\quad\text{in distribution}.$$
Let $D=D([0,1],\mathbb{R})$ denote the space of cadlag (right-continuous with left-hand limits) functions from $[0,1]$ to $\mathbb{R}$, and define, for each $t\geq 0$, $S(t)=\log P(X_1^{\lfloor t\rfloor}|Y_1^{\lfloor t\rfloor})+\lfloor t\rfloor H(X|Y)$, as in Theorem A1 in Appendix A. For all $n\geq 1$ and $t\in[0,1]$, define $S_n(t)=S(nt)$. Then Theorem A1 implies that, as $n\to\infty$,
$$\Big\{\frac{1}{\sigma\sqrt{n}}S_n(t)\;;\;t\in[0,1]\Big\} \to \{B(t)\;;\;t\in[0,1]\},\quad\text{weakly in }D,$$
where $\{B(t)\}$ is a standard Brownian motion; see, e.g., ([19], Theorem E, p. 4). In particular, this implies that
$$\frac{1}{\sigma\sqrt{n}}S_n(1) \to B(1)\sim N(0,1),\quad\text{in distribution},$$
which, since $S_n(1)$ is the negative of the numerator in (9) and the Gaussian law is symmetric, is exactly (9). □
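As a sanity check on Theorem 5, the following small simulation (for the hypothetical memoryless noisy-copy pair used in the earlier sketches, an assumption rather than an example from the paper) standardises the conditional information density and compares its empirical mean and standard deviation with $0$ and $\sigma(X|Y)\approx 0.95$.

```python
# A small simulation: Z_n = [-log2 P(X_1^n|Y_1^n) - n H(X|Y)] / sqrt(n) is
# approximately N(0, sigma^2(X|Y)) for a hypothetical i.i.d. noisy-copy pair.
import math, random, statistics
random.seed(3)

H = 0.9 * -math.log2(0.9) + 0.1 * -math.log2(0.1)     # H(X|Y) ~ 0.469 bits
n, trials = 400, 2000
zs = []
for _ in range(trials):
    dens = 0.0
    for _ in range(n):
        flip = random.random() < 0.1
        dens += -math.log2(0.1 if flip else 0.9)       # -log2 P(X_i | Y_i)
    zs.append((dens - n * H) / math.sqrt(n))
print(statistics.mean(zs), statistics.pstdev(zs))      # ~0 and ~0.95 = sigma
```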
Theorem 6
(LIL for the conditional information density). Suppose $(X,Y)$ satisfy Assumption 1, and let $\sigma^2=\sigma^2(X|Y)>0$ denote the conditional varentropy rate (4). Then:
$$\limsup_{n\to\infty}\frac{-\log P(X_1^n|Y_1^n) - H(X_1^n|Y_1^n)}{\sqrt{2n\log_e\log_e n}} = \sigma,\quad a.s.,$$
$$\text{and}\quad \liminf_{n\to\infty}\frac{-\log P(X_1^n|Y_1^n) - H(X_1^n|Y_1^n)}{\sqrt{2n\log_e\log_e n}} = -\sigma,\quad a.s.$$
Proof. 
As in the proof of (8), it suffices to prove (10) with $nH(X|Y)$ in place of $H(X_1^n|Y_1^n)$. However, this is immediate from Theorem A1, since, for a standard Brownian motion $\{B(t)\}$,
$$\limsup_{t\to\infty}\frac{B(t)}{\sqrt{2t\log_e\log_e t}} = 1,\quad a.s.;$$
see, e.g., ([31], Theorem 11.18). The proof of (11) is entirely similar. □

3. Idealised LZ Compression with Side Information

Consider the following idealised version of Lempel-Ziv-like compression with side information. For a given source-side information pair $(X,Y)=\{(X_n,Y_n)\,;\,n\in\mathbb{Z}\}$, the encoder and decoder both have access to the infinite past $(X_{-\infty}^0,Y_{-\infty}^0)$ and to the current side information $Y_1^n$. The encoder describes $X_1^n$ to the decoder as follows. First she searches for the first appearance of $(X_1^n,Y_1^n)$ in the past $(X_{-\infty}^0,Y_{-\infty}^0)$, that is, for the first $r\geq 1$ such that $(X_{-r+1}^{-r+n},Y_{-r+1}^{-r+n})=(X_1^n,Y_1^n)$. Then she counts how many times $Y_1^n$ appears in $Y_{-\infty}^0$ between locations $-r+1$ and $0$, namely, how many indices $1\leq j<r$ there are such that $Y_{-j+1}^{-j+n}=Y_1^n$. Say there are $R_n-1$ such $j$'s. She describes $X_1^n$ to the decoder by telling him to look at the $R_n$th position where $Y_1^n$ appears in the past $Y_{-\infty}^0$, and read off the corresponding $X$ string.
This description takes $\log R_n$ bits and, as it turns out, the resulting compression rate is asymptotically optimal: As $n\to\infty$, with probability 1,
$$\frac{1}{n}\log R_n \to H(X|Y),\quad\text{bits/symbol}.$$
Moreover, it is second-order optimal, in that it achieves equality in the CLT and LIL bounds given in Theorems 3 and 4 of Section 2.
Our purpose in this section is to make these statements precise. We will prove (12) as well as its CLT and LIL refinements, generalising the corresponding results for recurrence times without side information in [24].
The use of recurrence times in understanding the Lempel-Ziv (LZ) family of algorithms was introduced by Willems [21] and Wyner and Ziv [22,32]. In terms of practical methods for compression with side information, Subrahmanya and Berger [9] proposed a side information analog of the sliding window LZ algorithm [33], and Uyematsu and Kuzuoka [10] proposed a side information version of the incremental parsing LZ algorithm [34]. The Subrahmanya-Berger algorithm was shown to be asymptotically optimal in [12,13]. Different types of LZ-like algorithms for compression with side information were also considered in [11].
Throughout this section, we assume $(\boldsymbol{X},\boldsymbol{Y})$ is a jointly stationary and ergodic source-side information pair, with values in the finite alphabets $\mathcal{X}$, $\mathcal{Y}$, respectively. We use bold lower-case letters $\boldsymbol{x},\boldsymbol{y}$, without subscripts, to denote infinite realizations $x_{-\infty}^{\infty},y_{-\infty}^{\infty}$ of $\boldsymbol{X},\boldsymbol{Y}$, and the corresponding bold capital letters $\boldsymbol{X},\boldsymbol{Y}$, without subscripts, to denote the entire processes, $\boldsymbol{X}=X_{-\infty}^{\infty}$, $\boldsymbol{Y}=Y_{-\infty}^{\infty}$.
The main quantities of interest are the recurrence times defined next.
Definition 5
(Recurrence times). For a realization $\boldsymbol{x}$ of the process $\boldsymbol{X}$, and $n\geq 1$, define the repeated recurrence times $R_n^{(j)}(\boldsymbol{x})$ of $x_1^n$, recursively, as:
$$R_n^{(1)}(\boldsymbol{x}) = \inf\{i\geq 1 : x_{-i+1}^{-i+n} = x_1^n\},\qquad R_n^{(j)}(\boldsymbol{x}) = \inf\{i > R_n^{(j-1)}(\boldsymbol{x}) : x_{-i+1}^{-i+n} = x_1^n\},\quad j>1.$$
For a realization $(\boldsymbol{x},\boldsymbol{y})$ of the pair $(\boldsymbol{X},\boldsymbol{Y})$ and $n\geq 1$, the joint recurrence time $R_n(\boldsymbol{x},\boldsymbol{y})$ of $(x_1^n,y_1^n)$ is defined as,
$$R_n(\boldsymbol{x},\boldsymbol{y}) = \inf\{i\geq 1 : (x,y)_{-i+1}^{-i+n} = (x,y)_1^n\},$$
and the conditional recurrence time $R_n(\boldsymbol{x}|\boldsymbol{y})$ of $x_1^n$ among the appearances of $y_1^n$ is:
$$R_n(\boldsymbol{x}|\boldsymbol{y}) = \inf\Big\{i\geq 1 : x_{-R_n^{(i)}(\boldsymbol{y})+1}^{-R_n^{(i)}(\boldsymbol{y})+n} = x_1^n\Big\}.$$
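A direct transcription of the conditional recurrence time of Definition 5 into code, with the infinite past replaced by whatever finite past is available (an unavoidable simplification in any simulation):

```python
def conditional_recurrence_time(x, y, t, n):
    """Conditional recurrence time R_n(x|y) of the length-n block starting at
    (0-based) position t of the finite realizations x, y: scan the blocks
    shifted back by i = 1, 2, ..., count the shifts at which the y-block
    recurs, and return that count at the first shift where the x-block
    recurs as well.  Returns None if it never recurs in the available past."""
    xb, yb = x[t:t + n], y[t:t + n]
    count = 0
    for i in range(1, t + 1):
        s = t - i
        if y[s:s + n] == yb:
            count += 1
            if x[s:s + n] == xb:
                return count
    return None
```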
An important tool in the asymptotic analysis of recurrence times is Kac’s Theorem [35]. Its conditional version in Theorem 7 was first established in [12] using Kakutani’s induced transformation [36,37].
Theorem 7
(Conditional Kac's theorem [12]). Suppose $(\boldsymbol{X},\boldsymbol{Y})$ is a jointly stationary and ergodic source-side information pair. For any pair of strings $x_1^n\in\mathcal{X}^n$, $y_1^n\in\mathcal{Y}^n$:
$$\mathbb{E}\big[R_n(\boldsymbol{X}|\boldsymbol{Y})\,\big|\,X_1^n=x_1^n,\ Y_1^n=y_1^n\big] = \frac{1}{P(x_1^n|y_1^n)}.$$
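Theorem 7 can be checked by Monte Carlo in the memoryless case. The sketch below fixes a block $(x_1^3,y_1^3)$ for the hypothetical noisy-copy pair used earlier (fair bits $Y_i$, crossover 0.1; assumptions for illustration), generates independent pasts, measures the conditional recurrence time of the fixed block in each, and compares the empirical mean with $1/P(x_1^3|y_1^3)$. Appending the block after the past is legitimate here because, for a memoryless pair, the block is independent of the past.

```python
# Monte Carlo sanity check of the conditional Kac formula in a hypothetical
# memoryless setting: E[R_n(X|Y) | X_1^n = x_1^n, Y_1^n = y_1^n] = 1/P(x|y).
import random
random.seed(2)

def one_trial(x_block, y_block, past_len=20000, p_flip=0.1):
    y = [random.randint(0, 1) for _ in range(past_len)]
    x = [b if random.random() > p_flip else 1 - b for b in y]
    x, y = x + x_block, y + y_block          # fixed block at times 1..n
    t, n = past_len, len(x_block)
    count = 0
    for i in range(1, t + 1):                # shift the block back by i positions
        s = t - i
        if y[s:s + n] == y_block:
            count += 1
            if x[s:s + n] == x_block:
                return count
    return None                              # did not recur within this past

x_block, y_block = [0, 1, 1], [0, 1, 0]
samples = [one_trial(x_block, y_block) for _ in range(500)]
samples = [r for r in samples if r is not None]
p_cond = 0.9 * 0.9 * 0.1                     # P(x_1^3 | y_1^3) for this block
print(sum(samples) / len(samples), 1 / p_cond)   # both approximately 12.3
```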
The following result states that we can asymptotically approximate log R n ( X | Y ) by the conditional information density not just in expectation as in Kac’s theorem, but also with probability 1. Its proof is in Appendix B.
Theorem 8.
Suppose $(\boldsymbol{X},\boldsymbol{Y})$ is a jointly stationary and ergodic source-side information pair. For any sequence $\{c_n\}$ of non-negative real numbers such that $\sum_n n\,2^{-c_n}<\infty$, we have:
$$(i)\quad \log R_n(\boldsymbol{X}|\boldsymbol{Y}) \leq \log\frac{1}{P(X_1^n|Y_1^n)} + c_n,\quad\text{eventually a.s.}$$
$$(ii)\quad \log R_n(\boldsymbol{X}|\boldsymbol{Y}) \geq \log\frac{1}{P(X_1^n\,|\,Y_1^n,Y_{-\infty}^0,X_{-\infty}^0)} - c_n,\quad\text{eventually a.s.}$$
$$(iii)\quad \log R_n(\boldsymbol{X}|\boldsymbol{Y}) \geq \log\frac{P(Y_1^n\,|\,Y_{-\infty}^0)}{P(X_1^n,Y_1^n\,|\,Y_{-\infty}^0,X_{-\infty}^0)} - 2c_n,\quad\text{eventually a.s.}$$
Next we state the main consequences of Theorem 8 that we will need. Recall the definition of the coefficients γ ( Z ) ( d ) from Section 2.4. Corollary 1 is proved in Appendix B.
Corollary 1.
Suppose $(\boldsymbol{X},\boldsymbol{Y})$ are jointly stationary and ergodic.
(a) If, in addition, $\sum_d\gamma^{(X,Y)}(d)<\infty$ and $\sum_d\gamma^{(Y)}(d)<\infty$, then for any $\beta>0$:
$$\log\big[R_n(\boldsymbol{X}|\boldsymbol{Y})\,P(X_1^n|Y_1^n)\big] = o(n^{\beta}),\quad a.s.$$
(b) In the general jointly ergodic case, we have:
$$\log\big[R_n(\boldsymbol{X}|\boldsymbol{Y})\,P(X_1^n|Y_1^n)\big] = o(n),\quad a.s.$$
From part (b), combined with the Shannon-McMillan-Breiman theorem as in (2), we obtain the result (12) promised at the beginning of this section:
$$\lim_{n\to\infty}\frac{1}{n}\log R_n(\boldsymbol{X}|\boldsymbol{Y}) = H(X|Y),\quad a.s.$$
This was first established in [12]. However, at this point we have already done the work required to obtain much finer asymptotic results for the conditional recurrence time.
For any pair of infinite realizations $(\boldsymbol{x},\boldsymbol{y})$ of $(\boldsymbol{X},\boldsymbol{Y})$, let $\{R^{(\boldsymbol{x}|\boldsymbol{y})}(t)\,;\,t\geq 0\}$ be the continuous-time path defined as:
$$R^{(\boldsymbol{x}|\boldsymbol{y})}(t) = 0,\ \text{for } 0\leq t<1;\qquad R^{(\boldsymbol{x}|\boldsymbol{y})}(t) = \log R_{\lfloor t\rfloor}(\boldsymbol{x}|\boldsymbol{y}) - \lfloor t\rfloor H(X|Y),\ \text{for } t\geq 1.$$
The following theorem is a direct consequence of Corollary 1 ( a ) combined with Theorem A1 in the Appendix A. Recall Assumption 1 from Section 2.4.
Theorem 9.
Suppose $(\boldsymbol{X},\boldsymbol{Y})$ satisfy Assumption 1, and let $\sigma^2=\sigma^2(X|Y)>0$ denote the conditional varentropy rate. Then $\{R^{(\boldsymbol{X}|\boldsymbol{Y})}(t)\}$ can be redefined on a richer probability space that contains a standard Brownian motion $\{B(t)\,;\,t\geq 0\}$, such that, for any $\lambda<1/294$:
$$\big|R^{(\boldsymbol{X}|\boldsymbol{Y})}(t) - \sigma B(t)\big| = O(t^{1/2-\lambda}),\quad a.s.$$
Two immediate consequences of Theorem 9 are the following:
Theorem 10
(CLT and LIL for the conditional recurrence times). Suppose $(\boldsymbol{X},\boldsymbol{Y})$ satisfy Assumption 1, and let $\sigma^2=\sigma^2(X|Y)>0$ denote the conditional varentropy rate. Then:
$$(a)\quad \frac{\log R_n(\boldsymbol{X}|\boldsymbol{Y}) - H(X_1^n|Y_1^n)}{\sqrt{n}} \to N(0,\sigma^2),\quad\text{in distribution, as } n\to\infty.$$
$$(b)\quad \limsup_{n\to\infty}\frac{\log R_n(\boldsymbol{X}|\boldsymbol{Y}) - H(X_1^n|Y_1^n)}{\sqrt{2n\log_e\log_e n}} = \sigma,\quad a.s.$$

Author Contributions

Conceptualization, L.G. and I.K.; methodology, L.G. and I.K.; formal analysis, L.G. and I.K.; investigation, L.G. and I.K.; writing—original draft preparation, L.G. and I.K.; writing—review and editing, L.G. and I.K.; funding acquisition, I.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by the Hellenic Foundation for Research and Innovation (H.F.R.I.) under the “First Call for H.F.R.I. Research Projects to support Faculty members and Researchers and the procurement of high-cost research equipment grant,” project number 1034, and also in part by EPSRC grant number RG94782.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Appendix A. Invariance Principle for the Conditional Information Density

This appendix is devoted to the proof of Theorem A1, which generalises the corresponding almost sure invariance principle of Philipp and Stout ([19], Theorem 9.1) for the (unconditional) information density $-\log P(X_1^n)$.
Theorem A1.
Suppose $(X,Y)$ is a jointly stationary and ergodic process satisfying the mixing conditions (3). For $t\geq 0$, let,
$$S(t) = \log P(X_1^{\lfloor t\rfloor}|Y_1^{\lfloor t\rfloor}) + \lfloor t\rfloor H(X|Y).$$
Then the following series converges:
$$\sigma^2 = \mathbb{E}\Big[\Big(\log\frac{P(X_0,Y_0\,|\,X_{-\infty}^{-1},Y_{-\infty}^{-1})}{P(Y_0\,|\,Y_{-\infty}^{-1})} + H(X|Y)\Big)^2\Big] + 2\sum_{k=1}^{\infty}\mathbb{E}\Big[\Big(\log\frac{P(X_0,Y_0\,|\,X_{-\infty}^{-1},Y_{-\infty}^{-1})}{P(Y_0\,|\,Y_{-\infty}^{-1})} + H(X|Y)\Big)\Big(\log\frac{P(X_k,Y_k\,|\,X_{-\infty}^{k-1},Y_{-\infty}^{k-1})}{P(Y_k\,|\,Y_{-\infty}^{k-1})} + H(X|Y)\Big)\Big].$$
If $\sigma^2>0$, then, without changing its distribution, we can redefine the process $\{S(t)\,;\,t\geq 0\}$ on a richer probability space that contains a standard Brownian motion $\{B(t)\,;\,t\geq 0\}$, such that
$$|S(t) - \sigma B(t)| = O(t^{1/2-\lambda}),\quad a.s.,$$
as $t\to\infty$, for each $\lambda<1/294$.
To simplify the notation, we write $h=H(X|Y)$ and define,
$$f_j = \log\frac{P(X_j,Y_j\,|\,X_{-\infty}^{j-1},Y_{-\infty}^{j-1})}{P(Y_j\,|\,Y_{-\infty}^{j-1})},\quad j\geq 0,$$
so that, for example, the variance $\sigma^2$ in the theorem becomes,
$$\sigma^2 = \mathbb{E}\big[(f_0+h)^2\big] + 2\sum_{k=1}^{\infty}\mathbb{E}\big[(f_0+h)(f_k+h)\big].$$
Lemma A1.
If $\sum_d\gamma^{(X,Y)}(d)<\infty$ and $\sum_d\gamma^{(Y)}(d)<\infty$ then, as $n\to\infty$:
$$\sum_{k=1}^{n}f_k - \log P(X_1^n|Y_1^n) = O(1),\quad a.s.$$
Proof. 
Let,
$$g_j = \log\frac{P(X_j,Y_j\,|\,X_1^{j-1},Y_1^{j-1})}{P(Y_j\,|\,Y_1^{j-1})},\quad j\geq 2,$$
and,
$$g_1 = \log\frac{P(X_1,Y_1)}{P(Y_1)} = \log P(X_1|Y_1).$$
We have, for $k\geq 2$,
$$\mathbb{E}|f_k-g_k| \leq \mathbb{E}\big|\log P(X_k,Y_k\,|\,X_{-\infty}^{k-1},Y_{-\infty}^{k-1}) - \log P(X_k,Y_k\,|\,X_1^{k-1},Y_1^{k-1})\big| + \mathbb{E}\big|\log P(Y_k\,|\,Y_{-\infty}^{k-1}) - \log P(Y_k\,|\,Y_1^{k-1})\big|$$
$$\leq \sum_{x,y}\mathbb{E}\big|\log P(X_k=x,Y_k=y\,|\,X_{-\infty}^{k-1},Y_{-\infty}^{k-1}) - \log P(X_k=x,Y_k=y\,|\,X_1^{k-1},Y_1^{k-1})\big| + \sum_{y}\mathbb{E}\big|\log P(Y_k=y\,|\,Y_{-\infty}^{k-1}) - \log P(Y_k=y\,|\,Y_1^{k-1})\big|$$
$$\leq |\mathcal{X}||\mathcal{Y}|\,\gamma^{(X,Y)}(k-1) + |\mathcal{Y}|\,\gamma^{(Y)}(k-1).$$
Therefore, $\sum_{k=1}^{\infty}\mathbb{E}|f_k-g_k|<\infty$, and by the monotone convergence theorem we have,
$$\sum_{k=1}^{\infty}|f_k-g_k|<\infty,\quad a.s.$$
Hence, as $n\to\infty$,
$$\Big|\sum_{k=1}^{n}f_k - \log P(X_1^n|Y_1^n)\Big| = \Big|\sum_{k=1}^{n}(f_k-g_k)\Big| \leq \sum_{k=1}^{n}|f_k-g_k| = O(1),\quad a.s.,$$
as claimed. □
The following bounds are established in the proof of ([19], Theorem 9.1):
Lemma A2.
Suppose $Z=\{Z_n\,;\,n\in\mathbb{Z}\}$ is a stationary and ergodic process on a finite alphabet, with entropy rate $H(Z)$, and such that $\alpha^{(Z)}(d)=O(d^{-336})$ and $\gamma^{(Z)}(d)=O(d^{-48})$, as $d\to\infty$.
Let $f_k^{(Z)} = \log P(Z_k\,|\,Z_{-\infty}^{k-1})$, $k\geq 0$, and put $\eta_n^{(Z)} = f_n^{(Z)} + H(Z)$, $n\geq 0$. Then:
1. For each $r>0$, $\mathbb{E}\big[|f_0^{(Z)}|^r\big]<\infty$.
2. For each $r\geq 2$ and $\epsilon>0$,
$$\mathbb{E}\Big[\big|f_0^{(Z)} - \log P(Z_0\,|\,Z_{-k}^{-1})\big|^r\Big] \leq C(r,\epsilon)\,\big(\gamma^{(Z)}(k)\big)^{\frac{1}{2}-\epsilon},$$
where $C(r,\epsilon)$ is a constant depending only on $r$ and $\epsilon$.
3. For a constant $C>0$ independent of $n$, $\|\eta_n^{(Z)}\|_4 \leq C$.
4. Let $\eta_{n,\ell}^{(Z)} = \mathbb{E}\big[\eta_n^{(Z)}\,\big|\,\mathcal{F}_{n-\ell}^{n+\ell}\big]$. Then, as $\ell\to\infty$:
$$\big\|\eta_n^{(Z)} - \eta_{n,\ell}^{(Z)}\big\|_4 = O(\ell^{-11/2}).$$
Please note that under the assumptions of Theorem A1, the conclusions of Lemma A2 apply to Y as well as to the pair process ( X , Y ) .
Lemma A3.
For each $r>0$, we have $\mathbb{E}[|f_0|^r]<\infty$.
Proof. 
Simple algebra shows that
$$f_0 = f_0^{(X,Y)} - f_0^{(Y)}.$$
Therefore, by two applications of Lemma A2, part 1,
$$\|f_0\|_r \leq \|f_0^{(X,Y)}\|_r + \|f_0^{(Y)}\|_r < \infty.\qquad\square$$
The next bound follows from Lemma A2, part 2, upon applying the Minkowski inequality.
Lemma A4.
For each $r\geq 2$ and each $\epsilon>0$,
$$\Big\|f_0 - \log\frac{P(X_0,Y_0\,|\,X_{-k}^{-1},Y_{-k}^{-1})}{P(Y_0\,|\,Y_{-k}^{-1})}\Big\|_r \leq C_1(r,\epsilon)\,\big[\gamma^{(X,Y)}(k)\big]^{\frac{1-2\epsilon}{2r}} + C_2(r,\epsilon)\,\big[\gamma^{(Y)}(k)\big]^{\frac{1-2\epsilon}{2r}}.$$
Lemma A5.
As $N\to\infty$:
$$\mathbb{E}\Big[\Big(\sum_{k\leq N}(f_k+h)\Big)^2\Big] = \sigma^2 N + O(1).$$
Proof. 
First we examine the definition of the variance $\sigma^2$. The first term in (A4),
$$\|f_0+h\|_2^2 \leq (\|f_0\|_2 + h)^2 < \infty,$$
is finite by Lemma A3. For the series in (A4), let, for $k\geq 0$,
$$\phi_k = \log\frac{P(X_k,Y_k\,|\,X_{k/2}^{k-1},Y_{k/2}^{k-1})}{P(Y_k\,|\,Y_{k/2}^{k-1})},$$
and write,
$$\mathbb{E}\big[(f_0+h)(f_k+h)\big] = \mathbb{E}\big[(f_0+h)(f_k-\phi_k)\big] + \mathbb{E}\big[(f_0+h)(\phi_k+h)\big].$$
For the first term on the right-hand side above we can bound, for any $\epsilon>0$,
$$\big|\mathbb{E}\big[(f_0+h)(f_k-\phi_k)\big]\big| \overset{(a)}{\leq} \|f_0+h\|_2\,\|f_k-\phi_k\|_2 \leq \big[\|f_0\|_2+h\big]\,\|f_k-\phi_k\|_2 \overset{(b)}{\leq} A\,C_1(2,\epsilon)\,\big[\gamma^{(X,Y)}(k/2)\big]^{\frac{1}{4}-\frac{\epsilon}{2}} + A\,C_2(2,\epsilon)\,\big[\gamma^{(Y)}(k/2)\big]^{\frac{1}{4}-\frac{\epsilon}{2}},$$
where $(a)$ follows from the Cauchy-Schwarz inequality and $(b)$ follows from Lemmas A3 and A4, with $A = \|f_0\|_2 + h < \infty$. Therefore, taking $\epsilon>0$ small enough and using the assumptions of Theorem A1,
$$\big|\mathbb{E}\big[(f_0+h)(f_k-\phi_k)\big]\big| = O(k^{-12+24\epsilon}) = O(k^{-3}),\quad\text{as } k\to\infty.$$
For the second term in (A5), we have that $\|\phi_k\|_r<\infty$ for any $r>0$, uniformly over $k\geq 1$, by stationarity. Also, since $f_0$ and $\phi_k$ are measurable with respect to the $\sigma$-algebras generated by $(X_{-\infty}^0,Y_{-\infty}^0)$ and $(X_{k/2}^{k},Y_{k/2}^{k})$, respectively, we can apply ([19], Lemma 7.2.1) with $p=r=s=3$, to obtain that
$$\big|\mathbb{E}\big[(f_0+h)(\phi_k+h)\big]\big| \leq 10\,\|f_0+h\|_3\,\|\phi_k+h\|_3\,\alpha(k/2)^{1/3},$$
where $\alpha(k) = \alpha^{(X,Y)}(k) = O(k^{-48})$, as $k\to\infty$, by assumption. Therefore, a fortiori,
$$\mathbb{E}\big[(f_0+h)(f_k+h)\big] = O(k^{-3}),$$
and combining this with (A6) and substituting into (A5) implies that $\sigma^2$ in (A4) is well defined and finite.
Finally, we have that, as $N\to\infty$,
$$\mathbb{E}\Big[\Big(\sum_{k\leq N}(f_k+h)\Big)^2\Big] = N\,\mathbb{E}\big[(f_0+h)^2\big] + 2\sum_{k=1}^{N-1}(N-k)\,\mathbb{E}\big[(f_0+h)(f_k+h)\big] = N\sigma^2 - 2\sum_{k=1}^{N-1}k\,\mathbb{E}\big[(f_0+h)(f_k+h)\big] - 2N\sum_{k=N}^{\infty}\mathbb{E}\big[(f_0+h)(f_k+h)\big] = \sigma^2 N + O(1),$$
as required. □
Proof of Lemma 1. 
Lemma A5 states that the limit,
$$\lim_{n\to\infty}\frac{1}{n}\mathrm{Var}\Big(\log\frac{P(X_1^n,Y_1^n\,|\,X_{-\infty}^0,Y_{-\infty}^0)}{P(Y_1^n\,|\,Y_{-\infty}^0)}\Big),$$
exists and is finite. Moreover, by Lemma A4, after an application of the Cauchy-Schwarz inequality, we have that, as $n\to\infty$,
$$\mathbb{E}\Big[\Big(\sum_{k\leq n}\Big\{\log\frac{P(X_k,Y_k\,|\,X_1^{k-1},Y_1^{k-1})}{P(Y_k\,|\,Y_1^{k-1})} - \log\frac{P(X_k,Y_k\,|\,X_{-\infty}^{k-1},Y_{-\infty}^{k-1})}{P(Y_k\,|\,Y_{-\infty}^{k-1})}\Big\}\Big)^2\Big] = O(1),$$
therefore,
$$\frac{1}{n}\Big[\mathrm{Var}\big(\log P(X_1^n|Y_1^n)\big) - \mathrm{Var}\Big(\log\frac{P(X_1^n,Y_1^n\,|\,X_{-\infty}^0,Y_{-\infty}^0)}{P(Y_1^n\,|\,Y_{-\infty}^0)}\Big)\Big] = o(1).$$
Combining this with (A7) and the definition of $\sigma^2$ completes the proof. □
Proof of Theorem A1. 
Note that we have already established that the series defining the variance converges to some $\sigma^2<\infty$. Also, in view of Lemma A1, it is sufficient to prove the theorem for $\{\tilde{S}(t)\}$ instead of $\{S(t)\}$, where:
$$\tilde{S}(t) = \sum_{k\leq t}(f_k+h),\quad t\geq 0.$$
This will be established by an application of ([19], Theorem 7.1), once we verify that conditions (7.1.4), (7.1.5), (7.1.6), (7.1.7) and (7.1.9) there are all satisfied.
For each $n\geq 0$, let $\eta_n = f_n+h$, where $f_n$ is defined in (A3) and $h$ is the conditional entropy rate. First we observe that, by stationarity,
$$\mathbb{E}[\eta_n] = \mathbb{E}\Big[\log\frac{P(X_n,Y_n\,|\,X_{-\infty}^{n-1},Y_{-\infty}^{n-1})}{P(Y_n\,|\,Y_{-\infty}^{n-1})}\Big] + H(X|Y) = \Big(\mathbb{E}\big[\log P(X_0,Y_0\,|\,X_{-\infty}^{-1},Y_{-\infty}^{-1})\big] + H(X,Y)\Big) - \Big(\mathbb{E}\big[\log P(Y_0\,|\,Y_{-\infty}^{-1})\big] + H(Y)\Big) = 0,$$
where $H(X,Y)$ and $H(Y)$ denote the entropy rates of $(X,Y)$ and $Y$, respectively [2]. Observe that, in the notation of Lemma A2, $\eta_n = \eta_n^{(X,Y)} - \eta_n^{(Y)}$ and $\eta_{n,\ell} = \eta_{n,\ell}^{(X,Y)} - \eta_{n,\ell}^{(Y)}$. By Lemma A2, parts 3 and 4, there exists a constant $C$, independent of $n$, such that
$$\|\eta_n\|_4 \leq C < \infty,$$
and,
$$\|\eta_n - \eta_{n,\ell}\|_4 = O(\ell^{-11/2}).$$
In addition, from Lemma A5 we have,
$$\mathbb{E}\Big[\Big(\sum_{n\leq N}\frac{1}{\sigma}\eta_n\Big)^2\Big] = N + O(1).$$
From (A8)–(A11) and the assumption that $\alpha^{(X,Y)}(d)=O(d^{-336})$, we have that all of the conditions (7.1.4), (7.1.5), (7.1.6), (7.1.7) and (7.1.9) of ([19], Theorem 7.1) are satisfied for the random variables $\{\eta_n/\sigma\}$, with $\delta=2$. Therefore, $\{\tilde{S}(t)\,;\,t\geq 0\}$ can be redefined on a possibly richer probability space, where there exists a standard Brownian motion $\{B(t)\,;\,t\geq 0\}$, such that, as $t\to\infty$:
$$\Big|\frac{1}{\sigma}\tilde{S}(t) - B(t)\Big| = O(t^{1/2-\lambda}),\quad a.s.$$
By Lemma A1, this completes the proof. □

Appendix B. Recurrence Times Proofs

In this appendix, we provide the proofs of some of the more technical results in Section 3. First we establish the following generalisation of ([2], Lemma 16.8.3).
Lemma A6.
Suppose $(\boldsymbol{X},\boldsymbol{Y})$ is an arbitrary source-side information pair. Then, for any sequence $\{t_n\}$ of non-negative real numbers such that $\sum_n 2^{-t_n}<\infty$, we have:
$$\log\frac{P(Y_1^n\,|\,Y_{-\infty}^0,X_{-\infty}^0)}{P(Y_1^n\,|\,Y_{-\infty}^0)} \geq -t_n,\quad\text{eventually a.s.}$$
Proof. 
Let $B(X_{-\infty}^0,Y_{-\infty}^0)\subset\mathcal{Y}^n$ denote the support of $P(\,\cdot\,|X_{-\infty}^0,Y_{-\infty}^0)$. We can compute,
$$\mathbb{E}\Big[\frac{P(Y_1^n\,|\,Y_{-\infty}^0)}{P(Y_1^n\,|\,Y_{-\infty}^0,X_{-\infty}^0)}\Big] = \mathbb{E}\Big[\mathbb{E}\Big[\frac{P(Y_1^n\,|\,Y_{-\infty}^0)}{P(Y_1^n\,|\,Y_{-\infty}^0,X_{-\infty}^0)}\,\Big|\,Y_{-\infty}^0,X_{-\infty}^0\Big]\Big] = \mathbb{E}\Big[\sum_{y_1^n\in B(X_{-\infty}^0,Y_{-\infty}^0)}\frac{P(y_1^n\,|\,Y_{-\infty}^0)}{P(y_1^n\,|\,Y_{-\infty}^0,X_{-\infty}^0)}\,P(y_1^n\,|\,Y_{-\infty}^0,X_{-\infty}^0)\Big] \leq 1.$$
By Markov's inequality,
$$P\Big(\log\frac{P(Y_1^n\,|\,Y_{-\infty}^0)}{P(Y_1^n\,|\,Y_{-\infty}^0,X_{-\infty}^0)} > t_n\Big) = P\Big(\frac{P(Y_1^n\,|\,Y_{-\infty}^0)}{P(Y_1^n\,|\,Y_{-\infty}^0,X_{-\infty}^0)} > 2^{t_n}\Big) \leq 2^{-t_n},$$
and so, by the Borel-Cantelli lemma,
$$\log\frac{P(Y_1^n\,|\,Y_{-\infty}^0)}{P(Y_1^n\,|\,Y_{-\infty}^0,X_{-\infty}^0)} \leq t_n,\quad\text{eventually a.s.},$$
as claimed. □
Proof of Theorem 8. 
Let $K>0$ be arbitrary. By Markov's inequality and Kac's theorem,
$$P\big(R_n(\boldsymbol{X}|\boldsymbol{Y})>K\,\big|\,X_1^n=x_1^n,Y_1^n=y_1^n\big) \leq \frac{\mathbb{E}\big[R_n(\boldsymbol{X}|\boldsymbol{Y})\,\big|\,X_1^n=x_1^n,Y_1^n=y_1^n\big]}{K} = \frac{1}{K\,P(x_1^n|y_1^n)}.$$
Taking $K = 2^{c_n}/P(X_1^n|Y_1^n)$, we obtain,
$$P\big(\log[R_n(\boldsymbol{X}|\boldsymbol{Y})\,P(X_1^n|Y_1^n)]>c_n\,\big|\,X_1^n=x_1^n,Y_1^n=y_1^n\big) = P\Big(R_n(\boldsymbol{X}|\boldsymbol{Y})>\frac{2^{c_n}}{P(X_1^n|Y_1^n)}\,\Big|\,X_1^n=x_1^n,Y_1^n=y_1^n\Big) \leq 2^{-c_n}.$$
Averaging over all $x_1^n\in\mathcal{X}^n$, $y_1^n\in\mathcal{Y}^n$,
$$P\big(\log[R_n(\boldsymbol{X}|\boldsymbol{Y})\,P(X_1^n|Y_1^n)]>c_n\big) \leq 2^{-c_n},$$
and the Borel-Cantelli lemma gives (i).
For (ii), we first note that the probability,
$$P\big(\log[R_n(\boldsymbol{X}|\boldsymbol{Y})\,P(X_1^n\,|\,Y_1^n,X_{-\infty}^0,Y_{-\infty}^0)]<-c_n\,\big|\,Y_1^n=y_1^n,\ X_{-\infty}^0=x_{-\infty}^0,\ Y_{-\infty}^0=y_{-\infty}^0\big)$$
is the probability, under $P(X_1^n=\,\cdot\,\,|\,Y_1^n=y_1^n,X_{-\infty}^0=x_{-\infty}^0,Y_{-\infty}^0=y_{-\infty}^0)$, of those $z_1^n$ such that
$$P\big(X_1^n=z_1^n\,\big|\,X_{-\infty}^0,Y_{-\infty}^n\big) < \frac{2^{-c_n}}{R_n(x_{-\infty}^0 * z_1^n\,|\,y_{-\infty}^n)},$$
where '$*$' denotes the concatenation of strings. Let $G_n = G_n(x_{-\infty}^0,y_{-\infty}^n)\subset\mathcal{X}^n$ denote the set of all such $z_1^n$. Then the probability in (A12) is,
$$\sum_{z_1^n\in G_n} P(z_1^n\,|\,x_{-\infty}^0,y_{-\infty}^n) \leq \sum_{z_1^n\in G_n}\frac{2^{-c_n}}{R_n(x_{-\infty}^0 * z_1^n\,|\,y_{-\infty}^n)} \leq 2^{-c_n}\sum_{z_1^n\in\mathcal{X}^n}\frac{1}{R_n(x_{-\infty}^0 * z_1^n\,|\,y_{-\infty}^n)}.$$
Since both $x_{-\infty}^0$ and $y_{-\infty}^n$ are fixed, for each $j\geq 1$ there is exactly one $z_1^n\in\mathcal{X}^n$ such that $R_n(x_{-\infty}^0 * z_1^n\,|\,y_{-\infty}^n)=j$. Thus, the last sum is bounded above by,
$$\sum_{j=1}^{|\mathcal{X}|^n}\frac{1}{j} \leq D\,n,$$
for some positive constant $D$. Therefore, the probability in (A12) is bounded above by $Dn2^{-c_n}$, which is independent of $x_{-\infty}^0,y_{-\infty}^n$ and, by assumption, summable over $n$. Hence, after averaging over all infinite sequences $x_{-\infty}^0,y_{-\infty}^n$, the Borel-Cantelli lemma gives (ii).
For part (iii) we have, eventually, almost surely,
$$\log\Big[R_n(\boldsymbol{X}|\boldsymbol{Y})\,\frac{P(X_1^n,Y_1^n\,|\,Y_{-\infty}^0,X_{-\infty}^0)}{P(Y_1^n\,|\,Y_{-\infty}^0)}\Big] = \log\Big[R_n(\boldsymbol{X}|\boldsymbol{Y})\,P(X_1^n\,|\,Y_1^n,X_{-\infty}^0,Y_{-\infty}^0)\,\frac{P(Y_1^n\,|\,X_{-\infty}^0,Y_{-\infty}^0)}{P(Y_1^n\,|\,Y_{-\infty}^0)}\Big] = \log\big[R_n(\boldsymbol{X}|\boldsymbol{Y})\,P(X_1^n\,|\,Y_1^n,X_{-\infty}^0,Y_{-\infty}^0)\big] + \log\frac{P(Y_1^n\,|\,X_{-\infty}^0,Y_{-\infty}^0)}{P(Y_1^n\,|\,Y_{-\infty}^0)} \geq -2c_n,$$
where the last inequality follows from (ii) and Lemma A6, and we have shown (iii). □
Proof of Corollary 1. 
If we take $c_n=\epsilon n^{\beta}$ in Theorem 8, with $\epsilon>0$ arbitrary, we obtain from (i) and (iii),
$$\limsup_{n\to\infty}\frac{1}{n^{\beta}}\log\big[R_n(\boldsymbol{X}|\boldsymbol{Y})\,P(X_1^n|Y_1^n)\big] \leq 0,\quad a.s.,$$
$$\text{and}\quad \liminf_{n\to\infty}\frac{1}{n^{\beta}}\log\Big[R_n(\boldsymbol{X}|\boldsymbol{Y})\,\frac{P(X_1^n,Y_1^n\,|\,X_{-\infty}^0,Y_{-\infty}^0)}{P(Y_1^n\,|\,Y_{-\infty}^0)}\Big] \geq 0,\quad a.s.$$
Hence, to prove (a) it suffices to show that, as $n\to\infty$,
$$\log P(X_1^n|Y_1^n) - \log\frac{P(X_1^n,Y_1^n\,|\,X_{-\infty}^0,Y_{-\infty}^0)}{P(Y_1^n\,|\,Y_{-\infty}^0)} = O(1),\quad a.s.,$$
which is exactly Lemma A1 in Appendix A.
To prove (b), taking $\beta=1$ in (A13) and (A14), it suffices to show that
$$\lim_{n\to\infty}\Big[-\frac{1}{n}\log P(X_1^n|Y_1^n) - \Big(-\frac{1}{n}\log\frac{P(X_1^n,Y_1^n\,|\,X_{-\infty}^0,Y_{-\infty}^0)}{P(Y_1^n\,|\,Y_{-\infty}^0)}\Big)\Big] = 0,\quad a.s.$$
However, the first term converges almost surely to $H(X|Y)$ by the Shannon-McMillan-Breiman theorem, as in (2), and the second term is,
$$-\frac{1}{n}\sum_{i=1}^{n}\log P(X_i,Y_i\,|\,X_{-\infty}^{i-1},Y_{-\infty}^{i-1}) + \frac{1}{n}\sum_{i=1}^{n}\log P(Y_i\,|\,Y_{-\infty}^{i-1}),$$
which, by the ergodic theorem, converges almost surely to,
$$-\mathbb{E}\big[\log P(X_0,Y_0\,|\,X_{-\infty}^{-1},Y_{-\infty}^{-1})\big] + \mathbb{E}\big[\log P(Y_0\,|\,Y_{-\infty}^{-1})\big] = H(X,Y) - H(Y) = H(X|Y).$$
This completes the proof. □

References

  1. Slepian, D.; Wolf, J. Noiseless coding of correlated information sources. IEEE Trans. Inform. Theory 1973, 19, 471–480. [Google Scholar] [CrossRef]
  2. Cover, T.; Thomas, J. Elements of Information Theory, 2nd ed.; J. Wiley & Sons: New York, NY, USA, 2012. [Google Scholar]
  3. Yang, E.H.; Kaltchenko, A.; Kieffer, J. Universal lossless data compression with side information by using a conditional MPM grammar transform. IEEE Trans. Inform. Theory 2001, 47, 2130–2150. [Google Scholar] [CrossRef]
  4. Fritz, M.; Leinonen, R.; Cochrane, G.; Birney, E. Efficient storage of high throughput DNA sequencing data using reference-based compression. Genome Res. 2011, 21, 734–740. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Tridgell, A.; Mackerras, P. The Rsync Algorithm; Technical Report TR-CS-96-05; The Australian National University: Canberra, Australia, 1996. [Google Scholar]
  6. Suel, T.; Memon, N. Algorithms for delta compression and remote file synchronization. In Lossless Compression Handbook; Sayood, K., Ed.; Academic Press: New York, NY, USA, 2002. [Google Scholar]
  7. Pradhan, S.; Ramchandran, K. Enhancing analog image transmission systems using digital side information: A new wavelet-based image coding paradigm. In Proceedings of the 2001 Data Compression Conference, Snowbird, UT, USA, 27–29 March 2001; pp. 63–72. [Google Scholar]
  8. Aaron, A.; Zhang, R.; Girod, B. Wyner-Ziv coding of motion video. In Proceedings of the 36th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 3–6 November 2002; Volume 1, pp. 240–244. [Google Scholar]
  9. Subrahmanya, P.; Berger, T. A sliding window Lempel-Ziv algorithm for differential layer encoding in progressive transmission. In Proceedings of the 1995 IEEE International Symposium on Information Theory (ISIT), Whistler, BC, Canada, 17–22 September 1995; p. 266. [Google Scholar]
  10. Uyematsu, T.; Kuzuoka, S. Conditional Lempel-Ziv complexity and its application to source coding theorem with side information. IEICE Trans. Fundam. Electron. Commun. Comput. Sci. 2003, E86-A, 2615–2617. [Google Scholar]
  11. Tock, T.; Steinberg, Y. On Conditional Entropy and Conditional Recurrence Time. Unpublished manuscript.
  12. Jacob, T.; Bansal, R. On the optimality of Sliding Window Lempel-Ziv algorithm with side information. In Proceedings of the 2008 International Symposium on Information Theory and its Applications (ISITA), Auckland, New Zealand, 7–10 December 2008; pp. 1–6. [Google Scholar]
  13. Jain, A.; Bansal, R. On optimality and redundancy of side information version of SWLZ. In Proceedings of the 2017 IEEE International Symposium on Information Theory (ISIT), Aachen, Germany, 25–30 June 2017; pp. 306–310. [Google Scholar]
  14. Stites, R.; Kieffer, J. Resolution scalable lossless progressive image coding via conditional quadrisection. In Proceedings of the 2000 International Conference on Image Processing, Vancouver, BC, Canada, 10–13 September 2000; Volume 1, pp. 976–979. [Google Scholar]
  15. Aaron, A.; Girod, B. Compression with side information using turbo codes. In Proceedings of the 2002 Data Compression Conference, Snowbird, UT, USA, 2–4 April 2002; pp. 252–261. [Google Scholar]
  16. Cai, H.; Kulkarni, S.; Verdú, S. An algorithm for universal lossless compression with side information. IEEE Trans. Inform. Theory 2006, 52, 4008–4016. [Google Scholar] [CrossRef] [Green Version]
  17. Kieffer, J. Sample converses in source coding theory. IEEE Trans. Inform. Theory 1991, 37, 263–268. [Google Scholar] [CrossRef]
  18. Kontoyiannis, I. Second-order noiseless source coding theorems. IEEE Trans. Inform. Theory 1997, 43, 1339–1341. [Google Scholar] [CrossRef]
  19. Philipp, W.; Stout, W. Almost Sure Invariance Principles for Partial Sums of Weakly Dependent Random Variables; Memoirs of the AMS: Providence, RI, USA, 1975; Volume 2, p. 161. [Google Scholar]
  20. Nomura, R.; Han, T.S. Second-order Slepian-Wolf coding theorems for non-mixed and mixed sources. IEEE Trans. Inform. Theory 2014, 60, 5553–5572. [Google Scholar] [CrossRef]
  21. Willems, F. Universal data compression and repetition times. IEEE Trans. Inform. Theory 1989, 35, 54–58. [Google Scholar] [CrossRef]
  22. Wyner, A.; Ziv, J. Some asymptotic properties of the entropy of a stationary ergodic data source with applications to data compression. IEEE Trans. Inform. Theory 1989, 35, 1250–1258. [Google Scholar] [CrossRef]
  23. Ornstein, D.; Weiss, B. Entropy and data compression schemes. IEEE Trans. Inform. Theory 1993, 39, 78–83. [Google Scholar] [CrossRef]
  24. Kontoyiannis, I. Asymptotic recurrence and waiting times for stationary processes. J. Theoret. Probab. 1998, 11, 795–811. [Google Scholar] [CrossRef]
  25. Gavalakis, L.; Kontoyiannis, I. Fundamental Limits of Lossless Data Compression with Side Information. IEEE Trans. Inform. Theory 2019. Under revision. [Google Scholar]
  26. Bradley, R. Basic properties of strong mixing conditions. In Dependence in Probability and Statistics; Eberlein, E., Taqqu, M.S., Eds.; Birkhäuser: Boston, MA, USA, 1986; pp. 165–192. [Google Scholar]
  27. Han, G. Limit theorems for the sample entropy of hidden Markov chains. In Proceedings of the 2011 IEEE International Symposium on Information Theory (ISIT), St. Petersburg, Russia, 31 July–5 August 2011; pp. 3009–3013. [Google Scholar]
  28. Billingsley, P. Probability and Measure, 3rd ed.; John Wiley & Sons Inc.: New York, NY, USA, 1995. [Google Scholar]
  29. Kontoyiannis, I.; Verdú, S. Optimal lossless data compression: Non-asymptotics and asymptotics. IEEE Trans. Inform. Theory 2014, 60, 777–795. [Google Scholar] [CrossRef]
  30. Tan, V.; Kosut, O. On the dispersions of three network information theory problems. IEEE Trans. Inform. Theory 2014, 60, 881–903. [Google Scholar] [CrossRef] [Green Version]
  31. Kallenberg, O. Foundations of Modern Probability, 2nd ed.; Springer: New York, NY, USA, 2002. [Google Scholar]
  32. Wyner, A.; Ziv, J. The sliding-window Lempel-Ziv algorithm is asymptotically optimal. Proc. IEEE 1994, 82, 872–877. [Google Scholar] [CrossRef]
  33. Ziv, J.; Lempel, A. A universal algorithm for sequential data compression. IEEE Trans. Inform. Theory 1977, 23, 337–343. [Google Scholar] [CrossRef] [Green Version]
  34. Ziv, J.; Lempel, A. Compression of individual sequences by variable rate coding. IEEE Trans. Inform. Theory 1978, 24, 530–536. [Google Scholar] [CrossRef] [Green Version]
  35. Kac, M. On the notion of recurrence in discrete stochastic processes. Bull. Amer. Math. Soc. 1947, 53, 1002–1010. [Google Scholar] [CrossRef] [Green Version]
  36. Kakutani, S. Induced measure preserving transformations. Proc. Imp. Acad. 1943, 19, 635–641. [Google Scholar] [CrossRef]
  37. Shields, P. The ergodic theory of discrete sample paths. In Graduate Studies in Mathematics; American Mathematical Society: Providence, RI, USA, 1996; Volume 13. [Google Scholar]
