Article

Rényi Entropy Power Inequalities via Normal Transport and Rotation

by Olivier Rioul 1,2
1 LTCI, Télécom ParisTech, Université Paris-Saclay, 75013 Paris, France
2 École Polytechnique, Université Paris-Saclay, 91128 Palaiseau, France
Submission received: 7 July 2018 / Revised: 22 August 2018 / Accepted: 23 August 2018 / Published: 26 August 2018
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)

Abstract

Following a recent proof of Shannon’s entropy power inequality (EPI), a comprehensive framework for deriving various EPIs for the Rényi entropy is presented that uses transport arguments from normal densities and a change of variable by rotation. Simple arguments are given to recover the previously known Rényi EPIs and derive new ones, by unifying a multiplicative form with constant c and a modification with exponent α of previous works. In particular, for log-concave densities, we obtain a simple transportation proof of a sharp varentropy bound.

1. Introduction

The entropy power inequality (EPI) dates back to Shannon’s seminal paper [1] and has a long history [2]. The link with the Rényi entropy was first made by Dembo, Cover and Thomas [3] in connection with Young’s convolutional inequality with sharp constants, where Shannon’s EPI is obtained by letting the Rényi entropy orders tend to one ([4] Theorem 17.8.3).
The Rényi entropy [5] was first defined as a generalization of Shannon's entropy for discrete variables, when looking for the most general definition of information measures that would preserve additivity for independent events. It has found many applications such as source coding [6], hypothesis testing [7], channel coding [8] and guessing [9]. The (differential) Rényi entropy considered in this paper (Definition 2) generalizes the (differential) Shannon entropy for continuous variables. It was first considered in [3] to make the transition between the entropy-power and the Brunn–Minkowski inequalities. It has also been applied to deconvolution problems [10]. A definition of the Rényi entropy power itself appears in [11], which is essentially Definition 5 below.
Recently, there has been significant interest in Rényi entropy power inequalities for several independent variables (the survey [12] is recommended to the reader for recent developments on forward and reverse EPIs). Bobkov and Chistyakov [13] extended the classical Shannon’s EPI to the Rényi entropy by incorporating a multiplicative constant that depends on the order of the Rényi entropy. Ram and Sason [14] improved the value of the constant by making it depend also on the number of variables. Bobkov and Marsiglietti [15] proved another modification of the EPI for the Rényi entropy for two independent variables, with a power exponent parameter α whose value was further improved by Li [16]. All these EPIs were found for Rényi entropies of orders >1. The α -modification of the Rényi EPI was extended to orders <1 for two independent variables having log-concave densities by Marsiglietti and Melbourne [17]. The starting point of all the above works was Young’s strengthened convolutional inequality.
Recently, Shannon’s original EPI was given a simple proof [18] using a transport argument from normal variables and a change of variable by rotation. In this paper, we exploit these ingredients, described in the following lemmas, to establish all the above-mentioned Rényi EPIs and derive new ones.
Notation 1.
Throughout this article, the considered $n$-dimensional zero-mean random variables $X \in \mathbb{R}^n$ admit a density which is implicitly assumed continuous inside its support. We write $X \sim f$ if $X$ has density $f$ and write $X^* \sim \mathcal{N}(0, K)$ if $X^*$ is normally distributed with $n \times n$ covariance matrix $K$.
Lemma 1 (Normal Transport).
Let $X^* \sim \mathcal{N}(0, \sigma^2 I)$. There exists a diffeomorphism $T : \mathbb{R}^n \to \mathbb{R}^n$ such that $X = T(X^*) \sim f$. Moreover, $T$ can be chosen such that its Jacobian matrix $T'$ is (lower) triangular with positive diagonal elements.
Lemma 1 is known in optimal transport theory as an application of the Knothe–Rosenblatt map [19,20]. Two different proofs are given in [18]. The proof is very simple for one-dimensional variables [21], where $T$ is just an increasing function with continuous derivative $T' > 0$.
Lemma 2
(Normal Rotation [18]). If $X^*, Y^*$ are i.i.d. (independent and identically distributed) and normal, then for any $0 < \lambda < 1$, the rotation
$$\tilde{X} = \sqrt{\lambda}\, X^* + \sqrt{1-\lambda}\, Y^*, \qquad \tilde{Y} = -\sqrt{1-\lambda}\, X^* + \sqrt{\lambda}\, Y^* \tag{1}$$
yields i.i.d. normal variables $\tilde{X}, \tilde{Y}$.
Notice that the starred variables can be expressed in terms of the tilde variables by the inverse rotation
$$X^* = \sqrt{\lambda}\,\tilde{X} - \sqrt{1-\lambda}\,\tilde{Y}, \qquad Y^* = \sqrt{1-\lambda}\,\tilde{X} + \sqrt{\lambda}\,\tilde{Y}. \tag{2}$$
The proof of Lemma 2 is trivial considering covariance matrices. A deeper result states that this property of remaining i.i.d. by rotation characterizes the normal distribution—this is known as Bernstein’s lemma (see, e.g., ([21] [Lemma 4]) and ([22] [Chapter 5])). This explains why one obtains equality in the EPI only for normal variables (see [21] for more details).
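A minimal numerical illustration of Lemma 2 (a sketch, not from the paper, assuming NumPy is available): rotating a large sample of two i.i.d. standard normal variables as in Equation (1) yields variables whose empirical covariance matrix is again (approximately) the identity, i.e., unit variances and zero correlation.

```python
# Sketch: check empirically that the rotation of Lemma 2 preserves i.i.d. normality
# (here only the first two moments are inspected).
import numpy as np

rng = np.random.default_rng(0)
lam = 0.3
x_star = rng.standard_normal(1_000_000)
y_star = rng.standard_normal(1_000_000)

# Rotation of Equation (1)
x_tilde = np.sqrt(lam) * x_star + np.sqrt(1 - lam) * y_star
y_tilde = -np.sqrt(1 - lam) * x_star + np.sqrt(lam) * y_star

print(np.cov(x_tilde, y_tilde))  # approximately the 2x2 identity matrix
```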
This article is a revised, full version of what was presented in part in a previous conference communication [23]. It is organized as follows. Preliminary definitions and known properties are presented in Section 2. Section 3 derives a crucial “information inequality” for Rényi entropies that enjoys a transformational invariance. The central result is in Section 4, where the first version of the Rényi EPI by Dembo, Cover and Thomas is proven using the ingredients of Lemmas 1 and 2. All previously known Rényi EPIs for finite orders—and new ones—are then derived using a simple method in Section 5. Section 6 concludes.

2. Preliminary Definitions and Properties

Throughout this article, we consider exponents $p > 0$ with $p \neq 1$. The following definition is well known and used, e.g., in Hölder's inequality.
Definition 1 (Conjugate Exponent).
The conjugate exponent of p is
$$p' = \frac{p}{p-1},$$
that is, the number $p'$ such that
$$\frac{1}{p} + \frac{1}{p'} = 1.$$
Remark 1.
There are two situations depending on whether $p'$ is positive or negative: if $p > 1$ then $p' > 1$, whereas if $0 < p < 1$ then $p' < 0$.
Definition 2 (Rényi Entropy).
If $X$ has density $f \in L^p(\mathbb{R}^n)$, its Rényi entropy of order $p$ is defined by
$$h_p(X) = \frac{1}{1-p}\log \int_{\mathbb{R}^n} f^p(x)\,dx = -p'\log\|f\|_p \tag{5}$$
where $\|f\|_p$ denotes the $L^p$ norm of $f$.
It is known that the limit as $p \to 1$ is the Shannon entropy
$$h_1(X) = -\int_{\mathbb{R}^n} f(x)\log f(x)\,dx.$$
The Rényi entropy enjoys well-known properties similar to those of the Shannon entropy, which are recalled here for completeness.
Lemma 3 (Scaling Property).
For any $a \neq 0$,
$$h_p(aX) = h_p(X) + n\log|a|.$$
Proof. 
Making a change of variables, $h_p(aX) = \frac{1}{1-p}\log\int\bigl(\tfrac{1}{|a|^n} f(\tfrac{x}{a})\bigr)^p dx = \frac{1}{1-p}\log\int f^p(\tfrac{x}{a})\,\tfrac{dx}{|a|^n} + \frac{1}{1-p}\log|a|^{n(1-p)} = h_p(X) + n\log|a|$. ☐
One recovers the usual scaling property for the Shannon entropy by letting $p \to 1$.
Lemma 4 (Rényi Entropy of the Normal).
If $X^* \sim \mathcal{N}(0, K)$ for some nonsingular covariance matrix $K$, then
$$h_p(X^*) = \frac{1}{2}\log\bigl((2\pi)^n |K|\bigr) + \frac{n}{2}\,\frac{p'}{p}\log p$$
where $|\cdot|$ denotes the determinant. In particular, for $X^* \sim \mathcal{N}(0, \sigma^2 I)$,
$$h_p(X^*) = \frac{n}{2}\log(2\pi\sigma^2) + \frac{n}{2}\,\frac{p'}{p}\log p.$$
Proof. 
By direct calculation, $h_p(X^*) = \frac{1}{1-p}\log\int\Bigl(\frac{\exp(-\frac{1}{2}x^t K^{-1}x)}{\sqrt{(2\pi)^n|K|}}\Bigr)^p dx = \frac{1}{1-p}\log\frac{\sqrt{(2\pi)^n|K|/p^n}}{\bigl((2\pi)^n|K|\bigr)^{p/2}} = \frac{1}{2}\log\bigl((2\pi)^n|K|\bigr) - \frac{n}{2}\,\frac{\log p}{1-p}$. ☐
Again, one recovers the Shannon entropy of a normal variable by letting $p \to 1$ (then, $\frac{p'}{p}\log p \to \log e$).
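A small numerical sanity check of Lemma 4 in dimension $n = 1$ (a sketch, not from the paper, assuming NumPy/SciPy are available): the closed form is compared with direct quadrature of the Rényi integral; the chosen values of $\sigma^2$ and $p$ are arbitrary.

```python
# Sketch: compare Lemma 4's closed form for h_p of N(0, sigma^2) with numerical integration.
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def renyi_entropy_numerical(pdf, p, lo=-50, hi=50):
    """h_p(X) = (1/(1-p)) * log of the integral of f^p, computed by quadrature."""
    integral, _ = quad(lambda x: pdf(x) ** p, lo, hi)
    return np.log(integral) / (1.0 - p)

def renyi_entropy_normal(sigma2, p):
    """Closed form of Lemma 4 for n = 1: (1/2)log(2*pi*sigma^2) + (1/2)*log(p)/(p-1)."""
    return 0.5 * np.log(2 * np.pi * sigma2) + 0.5 * np.log(p) / (p - 1.0)

sigma2, p = 2.0, 3.0
pdf = norm(scale=np.sqrt(sigma2)).pdf
print(renyi_entropy_numerical(pdf, p))  # approx 1.540
print(renyi_entropy_normal(sigma2, p))  # same value up to quadrature error
```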
The following notion of escort distribution [24,25] is useful in the sequel.
Definition 3
(Escort Density ([25] § 2.2)). If $f \in L^p(\mathbb{R}^n)$, its escort density of exponent $p$ is the density defined by
$$f_p(x) = \frac{f^p(x)}{\int_{\mathbb{R}^n} f^p(x)\,dx}.$$
In other words, $f_p = f^p/\|f\|_p^p$, where $\|f\|_p$ denotes the $L^p$ norm of $f$. We also use the notation $X_p$ to denote the corresponding escort random variable with density $f_p$.
Lemma 5 (Monotonicity Property).
If $p < q$, then $h_p(X) \geq h_q(X)$, with equality if and only if $X$ is uniformly distributed.
Proof. 
Let $p \neq 1$ and assume that $f \in L^q(\mathbb{R}^n)$ for all $q$ in a neighborhood of $p$, so that one can freely differentiate under the integral sign:
$$\frac{\partial}{\partial p} h_p(X) = \frac{1}{(1-p)^2}\log\int f^p + \frac{1}{1-p}\,\frac{\int f^p\log f}{\int f^p} = \frac{1}{(1-p)^2}\Bigl(\log\int f^p + \int f_p\log f^{1-p}\Bigr) = \frac{1}{(1-p)^2}\int f_p\log\frac{f}{f_p} = -\frac{D(f_p\,\|\,f)}{(1-p)^2} \leq 0$$
where $D(\cdot\|\cdot)$ denotes the Kullback–Leibler divergence. Equality $D(f_p\|f) = 0$ can hold only if $f = f_p$ a.e., which, since $p \neq 1$, implies that $f$ is constant over some measurable subset $A \subset \mathbb{R}^n$ and is zero elsewhere. It follows that $h_p(X) > h_q(X)$ for any $p < q$ if $X$ is not uniformly distributed. Conversely, if $X$ is uniformly distributed over some measurable subset $A \subset \mathbb{R}^n$, its density can be written as $f(x) = 1/\mathrm{vol}(A)$ for all $x \in A$ and $f(x) = 0$ elsewhere. Then, $h_p(X) = \frac{1}{1-p}\log\frac{\mathrm{vol}(A)}{\mathrm{vol}(A)^p} = \log\mathrm{vol}(A)$ is independent of $p$. ☐
Remark 2.
Notice the identity established in the proof:
$$\frac{\partial}{\partial p} h_p(X) = -\frac{D(f_p\,\|\,f)}{(1-p)^2}. \tag{12}$$
A similar formula for discrete variables can be found in ([26] § 5.3).
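A quick numerical look at Lemma 5 (a sketch, not from the paper, assuming NumPy/SciPy): for a non-uniform density, here an arbitrary two-component Gaussian mixture chosen for illustration, $h_p$ decreases strictly as the order $p$ increases.

```python
# Sketch: monotonicity of the Rényi entropy h_p in p for a non-uniform density.
import numpy as np
from scipy.integrate import quad

def mixture_pdf(x):
    g = lambda t, m, s: np.exp(-(t - m) ** 2 / (2 * s ** 2)) / np.sqrt(2 * np.pi * s ** 2)
    return 0.5 * g(x, -1.0, 0.5) + 0.5 * g(x, 2.0, 1.0)

def renyi_entropy(pdf, p, lo=-30, hi=30):
    integral, _ = quad(lambda x: pdf(x) ** p, lo, hi)
    return np.log(integral) / (1.0 - p)

for p in [0.5, 0.9, 1.5, 2.0, 4.0]:
    print(p, renyi_entropy(mixture_pdf, p))  # values decrease as p increases
```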

3. An Information Inequality

The Shannon entropy satisfies a fundamental “information inequality” ([4] Theorem 2.6.3) from which many classical information-theoretic inequalities can be derived. This can be written as
$$h_1(X) \leq \mathbb{E}\log\frac{1}{\varphi(X)} \tag{13}$$
for any density φ , with equality if and only if φ = f a.e. The following Theorem 1 can be seen as the natural extension of the information inequality to Rényi entropies and is central in the following derivations of this paper. Bercher [27] pointed out to the author that it is similar to an inequality for discrete distributions established by Campbell [6] in the context of source coding (see also [24]).
Theorem 1 (Information Inequality).
For any density φ,
$$h_p(X) \leq -p'\log\mathbb{E}\,\varphi^{1/p'}(X) \tag{14}$$
with equality if and only if $\varphi = f_p$ a.e.
By letting $p \to 1$, one recovers the classical information inequality in Equation (13) for the Shannon entropy.
Proof. 
By definition in Equation (5),
$$h_p(X) = -p'\log\|f\|_p = -p'\log\Bigl(\int (f^p\varphi^{-1})\,\varphi\Bigr)^{1/p} \leq -p'\log\int (f^p\varphi^{-1})^{1/p}\,\varphi = -p'\log\int f\,\varphi^{1/p'} = -p'\log\mathbb{E}\,\varphi^{1/p'}(X)$$
where the inequality follows from Jensen's inequality applied to the function $x \mapsto x^{1/p}$, which is strictly concave if $p > 1$ (that is, $p' > 0$) and strictly convex if $p < 1$ (that is, $p' < 0$). Equality holds if and only if $f^p\varphi^{-1}$ is constant a.e., which means that $\varphi$ and $f^p$ are proportional a.e. Normalizing gives the announced condition $\varphi = f_p$ a.e. ☐
Remark 3.
An alternate proof is obtained using Hölder's inequality or its reverse applied to $f$ and $\varphi^{1/p'}$. Notice that the equality case $\varphi = f_p$ gives
$$h_p(X) = -p'\log\mathbb{E}\,f_p^{1/p'}(X) \tag{16}$$
as can be easily checked directly.
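A numerical sketch of Theorem 1 (not from the paper, assuming NumPy/SciPy): for $X \sim \mathcal{N}(0,1)$ and $p > 1$, the right-hand side of Equation (14) is computed for an arbitrary trial density and for the escort density $f_p$, which for a standard normal is the $\mathcal{N}(0, 1/p)$ density; the trial density below is an arbitrary choice.

```python
# Sketch: h_p(X) <= -p' log E[phi^{1/p'}(X)], with equality when phi is the escort f_p.
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

p = 2.0
pc = p / (p - 1.0)                       # conjugate exponent p'
f = norm(0, 1).pdf                       # density of X
escort = norm(0, np.sqrt(1.0 / p)).pdf   # escort density f_p of a standard normal
other = norm(0.5, 2.0).pdf               # some other trial density phi

h_p = np.log(quad(lambda x: f(x) ** p, -30, 30)[0]) / (1.0 - p)

def rhs(phi):
    expectation = quad(lambda x: f(x) * phi(x) ** (1.0 / pc), -30, 30)[0]
    return -pc * np.log(expectation)

print(h_p, rhs(escort))  # equal up to quadrature error
print(rhs(other))        # strictly larger than h_p
```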
The following conditional version of Theorem 1 involves a more complicated relation for dependent variables.
Corollary 1 (Conditional Information Inequality)
For any two random variables $X, Y \in \mathbb{R}^n$,
$$-p'\log\mathbb{E}_Y\exp\bigl(-h_p(X|Y)/p'\bigr) \leq -p'\log\mathbb{E}\,\varphi^{1/p'}(X|Y)$$
where $h_p(X|y)$ denotes the Rényi entropy of $X$ knowing $Y = y$ and the expectation on the l.h.s. is taken over $Y$ (the expectation in the r.h.s. is taken over $(X, Y)$).
In particular, when $X$ and $Y$ are independent,
$$h_p(X) \leq -p'\log\mathbb{E}\,\varphi^{1/p'}(X|Y) \tag{18}$$
with equality if and only if $\varphi(x|y)$ does not depend on $y$ and equals $f_p(x)$ a.e.
Proof. 
From Equation (14) for fixed $y$, one has $\mathbb{E}\,\varphi^{1/p'}(X|y) \leq \exp\bigl(-h_p(X|y)/p'\bigr)$ for $p' > 0$ ($p > 1$) and the opposite inequality for $p' < 0$ ($p < 1$), with equality if and only if $\varphi(x|y) = f_p(x|y)$ a.e. Taking the expectation over $Y$ yields $\mathbb{E}\,\varphi^{1/p'}(X|Y) \leq \mathbb{E}_Y\exp\bigl(-h_p(X|Y)/p'\bigr)$ for $p' > 0$ ($p > 1$) and the opposite inequality for $p' < 0$ ($p < 1$). The result follows by taking the logarithm and multiplying by $-p'$. When $X$ and $Y$ are independent, equality holds if and only if $\varphi(x|y) = f_p(x)$ a.e. for all $y$. ☐
For the Shannon entropy, the difference between the two sides of the information inequality in Equation (13) is the Kullback–Leibler divergence:
$$D(f\,\|\,\varphi) = \mathbb{E}\log\frac{f(X)}{\varphi(X)} \geq 0$$
which can also be noted $D(X\|Z)$ where $X \sim f$ and $Z \sim \varphi$. It is known (and easy to check) that the divergence is invariant by reversible transformations $T$. This means that, when $X = T(X^*)$ and $Z = T(Z^*)$, one has $D(X\|Z) = D(X^*\|Z^*)$. A natural extension to Rényi entropies can be obtained on the difference
$$-p'\log\mathbb{E}\,\varphi^{1/p'}(X) - h_p(X) \geq 0$$
between the two sides of the information inequality in Equation (14).
Theorem 2 (Transformational Invariance).
Let $T : \mathbb{R}^n \to \mathbb{R}^n$ be a diffeomorphism and suppose that
$$X_p = T(X_p^*)$$
$$Z = T(Z^*)$$
where $Z \sim \varphi$ and $Z^* \sim \varphi^*$. Then,
$$-p'\log\mathbb{E}\,\varphi^{1/p'}(X) - h_p(X) = -p'\log\mathbb{E}\,(\varphi^*)^{1/p'}(X^*) - h_p(X^*). \tag{22}$$
Note that, from Equation (5), this identity can be rewritten as
$$\frac{\mathbb{E}\,(\varphi^*)^{1/p'}(X^*)}{\|f^*\|_p} = \frac{\mathbb{E}\,\varphi^{1/p'}(X)}{\|f\|_p}. \tag{23}$$
Proof. 
Proceed to prove Equation (23). Let $f, f^*$ be the respective densities of $X, X^*$ and recall that $X_p \sim f_p$ and $X_p^* \sim f_p^*$. By the transformation $T$, the densities are related by
$$f_p^*(x^*) = f_p(T(x^*))\,|T'(x^*)| \tag{24}$$
$$\varphi^*(x^*) = \varphi(T(x^*))\,|T'(x^*)| \tag{25}$$
where $|T'|$ denotes the Jacobian determinant of $T$. Using these relations and Definition 3,
$$\frac{\mathbb{E}\,(\varphi^*)^{1/p'}(X^*)}{\|f^*\|_p} = \frac{\mathbb{E}\,\varphi^{1/p'}(T(X^*))\,|T'(X^*)|^{1/p'}}{\|f^*\|_p} = \int \varphi^{1/p'}(T(x^*))\,|T'(x^*)|^{1/p'}\,\frac{f^*(x^*)}{\|f^*\|_p}\,dx^* = \int \varphi^{1/p'}(T(x^*))\,|T'(x^*)|^{1/p'}\,f_p^*(x^*)^{1/p}\,dx^* = \int \varphi^{1/p'}(T(x^*))\,f_p(T(x^*))^{1/p}\,|T'(x^*)|\,dx^* = \int \varphi^{1/p'}(x)\,f_p(x)^{1/p}\,dx = \frac{\mathbb{E}\,\varphi^{1/p'}(X)}{\|f\|_p}. \;☐$$
Remark 4.
The fact that $\varphi$ is a density is not used in the proof of Theorem 2. Therefore, Equation (22) holds more generally for any function $\varphi$ satisfying Equation (25).

4. First Version of the Rényi EPI

For two independent random variables $X$ and $Y$, the Shannon entropy power inequality can be expressed as follows [2,3]: for any $0 < \lambda < 1$,
$$h\bigl(\sqrt{\lambda}\,X + \sqrt{1-\lambda}\,Y\bigr) \geq \lambda h(X) + (1-\lambda)h(Y)$$
with equality if and only if $X, Y$ are i.i.d. normal; more precisely (given the translation invariance of the entropy), when they are normal with identical covariance matrices. Since it was assumed in Notation 1 that all considered variables have zero mean, both statements are equivalent. This means that the difference $h\bigl(\sqrt{\lambda}\,X + \sqrt{1-\lambda}\,Y\bigr) - \lambda h(X) - (1-\lambda)h(Y)$ is minimum (zero) for i.i.d. normal $X, Y$. In this section, we study the natural generalization for Rényi entropies ([3] Theorem 12), namely that the quantity
$$h_r\bigl(\sqrt{\lambda}\,X + \sqrt{1-\lambda}\,Y\bigr) - \lambda h_p(X) - (1-\lambda)h_q(Y)$$
is minimum for $X, Y$ i.i.d. normal. Here, the triple $(p, q, r)$ and its associated $\lambda$ satisfy the following condition, which is used, e.g., in Young's convolutional inequality.
Definition 4 (Exponent Triple).
An exponent triple $(p, q, r)_\lambda$ has conjugates $p', q', r'$ of the same sign and such that
$$\frac{1}{p'} + \frac{1}{q'} = \frac{1}{r'}.$$
The corresponding coefficient $\lambda \in (0, 1)$ is defined by
$$\lambda = \frac{r'}{p'} = 1 - \frac{r'}{q'}.$$
In other words, the exponents $p, q, r$ are such that
$$\frac{1}{p} + \frac{1}{q} = 1 + \frac{1}{r}$$
and fulfill one of the following two sets of conditions: either
$$p, q, r > 1, \qquad p', q', r' > 1, \qquad r' < p',\; r' < q', \qquad r > p,\; r > q,$$
or
$$0 < p, q, r < 1, \qquad p', q', r' < 0, \qquad |r'| < |p'|,\; |r'| < |q'|, \qquad r < p,\; r < q.$$
The key argument used in this section is the following. If $X \sim f$ and $Y \sim g$, then for the escort variables, $X_p \sim f_p$ and $Y_q \sim g_q$. By normal transport (Lemma 1), one can write
$$X_p = T(X_p^*)$$
$$Y_q = U(Y_q^*)$$
for two diffeomorphisms $T$ and $U$, where $X^*, Y^*$ are, say, i.i.d. standard normal $\mathcal{N}(0, I)$. (It follows that $X_p^* \sim \mathcal{N}(0, I/p)$ and $Y_q^* \sim \mathcal{N}(0, I/q)$.) We then have the following straightforward extension of Theorem 2.
Lemma 6 (Transformational Invariance for Two Independent Variables).
For a two-dimensional function $\varphi(x, y)$,
$$-r'\log\mathbb{E}\,\varphi^{1/r'}(X, Y) - \lambda h_p(X) - (1-\lambda)h_q(Y) = -r'\log\mathbb{E}\,(\varphi^*)^{1/r'}(X^*, Y^*) - \lambda h_p(X^*) - (1-\lambda)h_q(Y^*) \tag{34}$$
where
$$\varphi^*(x^*, y^*) = \varphi\bigl(T(x^*), U(y^*)\bigr)\,|T'(x^*)|^{\lambda}\,|U'(y^*)|^{1-\lambda}.$$
Proof. 
From Equation (5) and the definition of $\lambda$, Equation (34) can be rewritten as
$$\frac{\mathbb{E}\,(\varphi^*)^{1/r'}(X^*, Y^*)}{\|f^*\|_p\,\|g^*\|_q} = \frac{\mathbb{E}\,\varphi^{1/r'}(X, Y)}{\|f\|_p\,\|g\|_q}.$$
By the transformations $T$ and $U$, the densities of the escort variables are related by $f_p^*(x^*) = f_p(T(x^*))\,|T'(x^*)|$ and $g_q^*(y^*) = g_q(U(y^*))\,|U'(y^*)|$. Now, by the same calculation as in the proof of Theorem 2,
$$\frac{\mathbb{E}\,(\varphi^*)^{1/r'}(X^*, Y^*)}{\|f^*\|_p\,\|g^*\|_q} = \frac{\mathbb{E}\,\varphi^{1/r'}(T(X^*), U(Y^*))\,|T'(X^*)|^{1/p'}\,|U'(Y^*)|^{1/q'}}{\|f^*\|_p\,\|g^*\|_q} = \iint \varphi^{1/r'}\bigl(T(x^*), U(y^*)\bigr)\,|T'(x^*)|^{1/p'}\,|U'(y^*)|^{1/q'}\,\frac{f^*(x^*)\,g^*(y^*)}{\|f^*\|_p\,\|g^*\|_q}\,dx^*\,dy^* = \iint \varphi^{1/r'}\bigl(T(x^*), U(y^*)\bigr)\,|T'(x^*)|^{1/p'}\,|U'(y^*)|^{1/q'}\,f_p^*(x^*)^{1/p}\,g_q^*(y^*)^{1/q}\,dx^*\,dy^* = \iint \varphi^{1/r'}\bigl(T(x^*), U(y^*)\bigr)\,|T'(x^*)|\,|U'(y^*)|\,f_p(T(x^*))^{1/p}\,g_q(U(y^*))^{1/q}\,dx^*\,dy^* = \iint \varphi^{1/r'}(x, y)\,f_p(x)^{1/p}\,g_q(y)^{1/q}\,dx\,dy = \frac{\mathbb{E}\,\varphi^{1/r'}(X, Y)}{\|f\|_p\,\|g\|_q}. \;☐$$
Lemma 7.
Let $\varphi$ be the density of $\sqrt{\lambda}\,X + \sqrt{1-\lambda}\,Y$. Then,
$$h_r\bigl(\sqrt{\lambda}\,X + \sqrt{1-\lambda}\,Y\bigr) - \lambda h_p(X) - (1-\lambda)h_q(Y) \geq -r'\log\mathbb{E}\Bigl[\Bigl(\varphi_r\bigl(\sqrt{\lambda}\,T(X^*) + \sqrt{1-\lambda}\,U(Y^*)\bigr)\,\bigl|\lambda\,T'(X^*) + (1-\lambda)\,U'(Y^*)\bigr|\Bigr)^{1/r'}\Bigr] - \lambda h_p(X^*) - (1-\lambda)h_q(Y^*). \tag{38}$$
Proof. 
By the equality case of Theorem 1 (see (16)), one has
$$h_r\bigl(\sqrt{\lambda}\,X + \sqrt{1-\lambda}\,Y\bigr) = -r'\log\mathbb{E}\,\varphi_r^{1/r'}\bigl(\sqrt{\lambda}\,X + \sqrt{1-\lambda}\,Y\bigr).$$
Now, by Lemma 6 applied to $\varphi(x, y) = \varphi_r(\sqrt{\lambda}\,x + \sqrt{1-\lambda}\,y)$, we have $\varphi^*(x^*, y^*) = \varphi_r\bigl(\sqrt{\lambda}\,T(x^*) + \sqrt{1-\lambda}\,U(y^*)\bigr)\,|T'(x^*)|^{\lambda}\,|U'(y^*)|^{1-\lambda}$, and, therefore,
$$h_r\bigl(\sqrt{\lambda}\,X + \sqrt{1-\lambda}\,Y\bigr) - \lambda h_p(X) - (1-\lambda)h_q(Y) = -r'\log\mathbb{E}\Bigl[\Bigl(\varphi_r\bigl(\sqrt{\lambda}\,T(X^*) + \sqrt{1-\lambda}\,U(Y^*)\bigr)\,|T'(X^*)|^{\lambda}\,|U'(Y^*)|^{1-\lambda}\Bigr)^{1/r'}\Bigr] - \lambda h_p(X^*) - (1-\lambda)h_q(Y^*).$$
Since, from Lemma 1, $T$ and $U$ can be chosen such that $T'$ and $U'$ are (lower) triangular with positive diagonal elements, it follows easily from the arithmetic-geometric mean inequality that
$$|T'(X^*)|^{\lambda}\,|U'(Y^*)|^{1-\lambda} \leq \bigl|\lambda\,T'(X^*) + (1-\lambda)\,U'(Y^*)\bigr|. \tag{41}$$
The result follows at once (for either positive or negative $r'$). ☐
We can now use the normal rotation Lemma 2 to conclude by proving the following.
Theorem 3
(Rényi EPI [3]). For independent $X, Y$ and exponent triple $(p, q, r)_\lambda$,
$$h_r\bigl(\sqrt{\lambda}\,X + \sqrt{1-\lambda}\,Y\bigr) - \lambda h_p(X) - (1-\lambda)h_q(Y) \geq \frac{n}{2}\,r'\Bigl(\frac{\log r}{r} - \frac{\log p}{p} - \frac{\log q}{q}\Bigr) \tag{42}$$
with equality if and only if X , Y are i.i.d. normal.
Proof. 
If $X, Y$ are i.i.d. normal, then $\sqrt{\lambda}\,X + \sqrt{1-\lambda}\,Y$ is also identically distributed as $X$ and $Y$, and from Lemma 4, it is immediate to check that equality holds (irrespective of their covariances). Therefore, the inequality in Equation (42) is equivalent to
$$h_r\bigl(\sqrt{\lambda}\,X + \sqrt{1-\lambda}\,Y\bigr) - \lambda h_p(X) - (1-\lambda)h_q(Y) \geq h_r\bigl(\sqrt{\lambda}\,X^* + \sqrt{1-\lambda}\,Y^*\bigr) - \lambda h_p(X^*) - (1-\lambda)h_q(Y^*) \tag{43}$$
where $X^*, Y^*$ are, say, i.i.d. standard normal $\mathcal{N}(0, I)$.
To prove Equation (43), consider the normal rotation of Lemma 2 and write $X^*, Y^*$ in terms of $\tilde{X}, \tilde{Y}$ using Equation (2) in the first term of the r.h.s. of Equation (38) (Lemma 7). One obtains:
$$h_r\bigl(\sqrt{\lambda}\,X + \sqrt{1-\lambda}\,Y\bigr) - \lambda h_p(X) - (1-\lambda)h_q(Y) \geq -r'\log\mathbb{E}\bigl[\psi(\tilde{X}|\tilde{Y})^{1/r'}\bigr] - \lambda h_p(X^*) - (1-\lambda)h_q(Y^*), \tag{44}$$
where
$$\psi(\tilde{x}|\tilde{y}) = \varphi_r\Bigl(\sqrt{\lambda}\,T\bigl(\sqrt{\lambda}\,\tilde{x} - \sqrt{1-\lambda}\,\tilde{y}\bigr) + \sqrt{1-\lambda}\,U\bigl(\sqrt{1-\lambda}\,\tilde{x} + \sqrt{\lambda}\,\tilde{y}\bigr)\Bigr)\times\Bigl|\lambda\,T'\bigl(\sqrt{\lambda}\,\tilde{x} - \sqrt{1-\lambda}\,\tilde{y}\bigr) + (1-\lambda)\,U'\bigl(\sqrt{1-\lambda}\,\tilde{x} + \sqrt{\lambda}\,\tilde{y}\bigr)\Bigr|. \tag{45}$$
Making the change of variable $z = \sqrt{\lambda}\,T\bigl(\sqrt{\lambda}\,\tilde{x} - \sqrt{1-\lambda}\,\tilde{y}\bigr) + \sqrt{1-\lambda}\,U\bigl(\sqrt{1-\lambda}\,\tilde{x} + \sqrt{\lambda}\,\tilde{y}\bigr)$, one obtains
$$\int \psi(\tilde{x}|\tilde{y})\,d\tilde{x} = \int \varphi_r(z)\,dz = 1,$$
since $\varphi_r$ is a density. Hence, $\psi(\tilde{x}|\tilde{y})$ is also a density in $\tilde{x}$ for fixed $\tilde{y}$. Now, since by Lemma 2, $\tilde{X}$ and $\tilde{Y}$ are independent, by the conditional information inequality in Equation (18) of Corollary 1, one has
$$-r'\log\mathbb{E}\bigl[\psi(\tilde{X}|\tilde{Y})^{1/r'}\bigr] \geq h_r(\tilde{X}) = h_r\bigl(\sqrt{\lambda}\,X^* + \sqrt{1-\lambda}\,Y^*\bigr). \tag{47}$$
Combining with Equation (44) yields the announced inequality in Equation (43).
It remains to settle the equality case in Equation (43). From the above proof, equality holds in Equation (43) if and only if both Equation (41) and Equation (47) are equalities. Equality in Equation (41) holds if and only if, for all $i = 1, 2, \ldots, n$,
$$\frac{\partial T_i}{\partial x_i}(X^*) = \frac{\partial U_i}{\partial y_i}(Y^*) \quad \text{a.s.}$$
Since $X^*$ and $Y^*$ are independent normal variables, this implies that $\frac{\partial T_i}{\partial x_i}$ and $\frac{\partial U_i}{\partial y_i}$ are constant and equal. In particular, the Jacobian $\bigl|\lambda\,T'(\sqrt{\lambda}\,\tilde{x} - \sqrt{1-\lambda}\,\tilde{y}) + (1-\lambda)\,U'(\sqrt{1-\lambda}\,\tilde{x} + \sqrt{\lambda}\,\tilde{y})\bigr|$ in (45) is constant.
From Corollary 1, equality in Equation (47) holds if and only if $\psi(\tilde{x}|\tilde{y})$ does not depend on $\tilde{y}$, which implies that $\sqrt{\lambda}\,T\bigl(\sqrt{\lambda}\,\tilde{x} - \sqrt{1-\lambda}\,\tilde{y}\bigr) + \sqrt{1-\lambda}\,U\bigl(\sqrt{1-\lambda}\,\tilde{x} + \sqrt{\lambda}\,\tilde{y}\bigr)$ does not depend on the value of $\tilde{y}$. Taking derivatives with respect to $\tilde{y}_j$ for all $j = 1, 2, \ldots, n$,
$$-\sqrt{\lambda}\sqrt{1-\lambda}\,\frac{\partial T_i}{\partial x_j}\bigl(\sqrt{\lambda}\,\tilde{X} - \sqrt{1-\lambda}\,\tilde{Y}\bigr) + \sqrt{1-\lambda}\sqrt{\lambda}\,\frac{\partial U_i}{\partial y_j}\bigl(\sqrt{1-\lambda}\,\tilde{X} + \sqrt{\lambda}\,\tilde{Y}\bigr) = 0$$
which implies
$$\frac{\partial T_i}{\partial x_j}(X^*) = \frac{\partial U_i}{\partial y_j}(Y^*) \quad \text{a.s.}$$
for all $i, j = 1, 2, \ldots, n$. Therefore, $T$ and $U$ are linear transformations, equal up to an additive constant (equal to 0 since all variables are assumed of zero mean). It follows that $X_p = T(X_p^*)$ and $Y_q = U(Y_q^*)$ are normal with respective distributions $X_p \sim \mathcal{N}(0, K/p)$ and $Y_q \sim \mathcal{N}(0, K/q)$. Hence, $X$ and $Y$ are i.i.d. normal $\mathcal{N}(0, K)$. ☐
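A numerical sketch of Theorem 3 (not from the paper, assuming NumPy): the exponent triple $p = q = 4/3$, $r = 2$ (so $\lambda = 1/2$) is an arbitrary valid choice. The left-hand side of Equation (42) is computed for two independent Uniform(0,1) variables (whose Rényi entropies vanish) by discretized convolution, and the equality case is checked with i.i.d. standard normals via the closed form of Lemma 4.

```python
# Sketch: strict inequality in (42) for uniforms, equality for i.i.d. normals.
import numpy as np

p = q = 4.0 / 3.0
conj = lambda s: s / (s - 1.0)
rp = 1.0 / (1.0 / conj(p) + 1.0 / conj(q))   # 1/p' + 1/q' = 1/r'
r = conj(rp)                                  # r = 2
lam = rp / conj(p)                            # lambda = r'/p' = 1/2

def renyi_grid(density, dx, order):
    return np.log(np.sum(density ** order) * dx) / (1.0 - order)

# Density of sqrt(lam)*X + sqrt(1-lam)*Y for X, Y ~ Uniform(0,1), by convolution.
dx = 1e-4
fX = np.full(int(round(np.sqrt(lam) / dx)), 1.0 / np.sqrt(lam))
fY = np.full(int(round(np.sqrt(1 - lam) / dx)), 1.0 / np.sqrt(1 - lam))
fS = np.convolve(fX, fY) * dx

lhs_uniform = renyi_grid(fS, dx, r)          # h_p(X) = h_q(Y) = 0 for uniforms
rhs = 0.5 * rp * (np.log(r) / r - np.log(p) / p - np.log(q) / q)
print(lhs_uniform, rhs)                       # approx 0.059 >= -0.085: strict inequality

# Equality case: i.i.d. standard normals, using Lemma 4 (n = 1).
h = lambda s: 0.5 * np.log(2 * np.pi) + 0.5 * np.log(s) / (s - 1.0)
print(h(r) - lam * h(p) - (1 - lam) * h(q))   # approx -0.085: equals the bound
```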
A straightforward generalization to several independent variables is the following.
Corollary 2 (Rényi EPI for Several Variables).
Let $r_1, r_2, \ldots, r_m, r$ be exponents whose conjugates $r_1', r_2', \ldots, r_m', r'$ are of the same sign and satisfy
$$\sum_{i=1}^m \frac{1}{r_i'} = \frac{1}{r'}$$
and let $\lambda_1, \lambda_2, \ldots, \lambda_m$ be defined by
$$\lambda_i = \frac{r'}{r_i'} \qquad (i = 1, 2, \ldots, m).$$
Then, for independent $X_1, X_2, \ldots, X_m$,
$$h_r\Bigl(\sum_{i=1}^m \sqrt{\lambda_i}\,X_i\Bigr) - \sum_{i=1}^m \lambda_i h_{r_i}(X_i) \geq \frac{n}{2}\,r'\Bigl(\frac{\log r}{r} - \sum_{i=1}^m \frac{\log r_i}{r_i}\Bigr) \tag{53}$$
with equality if and only if the X i are i.i.d. normal.
Proof. 
By induction on $m$: the result for $m = 2$ is Theorem 3. Suppose the result holds at order $m - 1$ and let $Y_m = \sum_{i=1}^{m-1}\sqrt{\lambda_i}\,X_i/\sqrt{1-\lambda_m}$ and $s_m$ be such that $\frac{1}{s_m'} = \sum_{i=1}^{m-1}\frac{1}{r_i'}$. Notice that $\frac{1}{r'} = \frac{1}{r_m'} + \frac{1}{s_m'} = \frac{\lambda_m}{r'} + \frac{1}{s_m'}$, hence $r' = (1-\lambda_m)s_m'$. By Theorem 3, $h_r\bigl(\sum_{i=1}^m\sqrt{\lambda_i}\,X_i\bigr) = h_r\bigl(\sqrt{\lambda_m}\,X_m + \sqrt{1-\lambda_m}\,Y_m\bigr) \geq \lambda_m h_{r_m}(X_m) + (1-\lambda_m)h_{s_m}(Y_m) + \frac{n}{2}\,r'\bigl(\frac{\log r}{r} - \frac{\log r_m}{r_m} - \frac{\log s_m}{s_m}\bigr)$ with equality if and only if $X_m, Y_m$ are i.i.d. normal. Now, by the induction hypothesis, $h_{s_m}(Y_m) \geq \sum_{i=1}^{m-1}\frac{\lambda_i}{1-\lambda_m}\,h_{r_i}(X_i) + \frac{n}{2}\,s_m'\bigl(\frac{\log s_m}{s_m} - \sum_{i=1}^{m-1}\frac{\log r_i}{r_i}\bigr)$ with equality if and only if the $X_i$ ($i = 1, 2, \ldots, m-1$), and hence $Y_m$, are i.i.d. normal. The result at order $m$ follows by combining the two inequalities since $(1-\lambda_m)s_m' = r'$. ☐

5. Recent Versions of the Rényi EPI

Definition 5
(Rényi Entropy Power [13]). The Rényi entropy power of order r is defined by
$$N_r(X) = e^{2 h_r(X)/n}.$$
Up to a multiplicative constant, $N_r(X)$ is the (average) power of a white normal variable having the same Rényi entropy as $X$, hence the name "entropy power". In fact, if $X^* \sim \mathcal{N}(0, \sigma^2 I)$ has the same Rényi entropy $h_r(X^*) = h_r(X)$, then by Lemma 4,
$$\sigma^2 = \frac{e^{2 h_r(X)/n}}{2\pi\, r^{r'/r}}.$$
The Rényi entropy power enjoys the same scaling property as the usual power: by Lemma 3, for any $a \in \mathbb{R}$,
$$N_r(aX) = a^2 N_r(X). \tag{56}$$
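A tiny numerical illustration of Definition 5 and Equation (56) (a sketch, not from the paper, assuming NumPy): the Rényi entropy power of a scaled normal is compared with the quadratic scaling of the unscaled one; the values of $r$, $\sigma^2$ and $a$ are arbitrary.

```python
# Sketch: N_r(aX) = a^2 N_r(X), checked on a one-dimensional normal via Lemma 4.
import numpy as np

h_r_normal = lambda sigma2, r: 0.5 * np.log(2 * np.pi * sigma2) + 0.5 * np.log(r) / (r - 1.0)
N_r = lambda h, n=1: np.exp(2.0 * h / n)

r, sigma2, a = 2.0, 1.5, 3.0
print(N_r(h_r_normal(a ** 2 * sigma2, r)))   # N_r(aX) for X ~ N(0, sigma2)
print(a ** 2 * N_r(h_r_normal(sigma2, r)))   # a^2 N_r(X): same value
```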
For independent X 1 , X 2 , , X m , Rényi entropy power inequalities take either the form [13,14]
$$N_r\Bigl(\sum_{i=1}^m X_i\Bigr) \geq c\,\sum_{i=1}^m N_r(X_i) \tag{57}$$
for some positive constant c, or the form [15,16,17]
$$N_r^{\alpha}\Bigl(\sum_{i=1}^m X_i\Bigr) \geq \sum_{i=1}^m N_r^{\alpha}(X_i) \tag{58}$$
for some positive exponent α . The constants c and α may depend on the order r, the number m of variables and the dimension n. What is desired is:
  • a maximum possible value of $c$ in Equation (57), since the inequality is then automatically satisfied for all positive constants $c' < c$;
  • a minimum possible value of $\alpha$ in Equation (58), since the inequality is then automatically satisfied for all positive exponents $\alpha' > \alpha$; in fact, since Equation (58) is homogeneous under the scalings $X_i \to a X_i$ of Equation (56), one may suppose without loss of generality that $\sum_{i=1}^m N_r^{\alpha'}(X_i) = 1$; then $N_r(X_i) \leq 1$, hence $N_r^{\alpha'}(X_i) \leq N_r^{\alpha}(X_i)$ for all $i$, so that $N_r^{\alpha}\bigl(\sum_{i=1}^m X_i\bigr) \geq \sum_{i=1}^m N_r^{\alpha}(X_i) \geq \sum_{i=1}^m N_r^{\alpha'}(X_i) = 1$ and, therefore, $N_r^{\alpha'}\bigl(\sum_{i=1}^m X_i\bigr) \geq 1 = \sum_{i=1}^m N_r^{\alpha'}(X_i)$.
The following useful characterization, which generalizes ([16] Lemma 2.1), makes the link between the various versions (Equations (53), (57), and (58)) of the Rényi entropy power inequality.
Lemma 8.
For independent X 1 , X 2 , , X m , the Rényi EPI in the general form
$$N_r^{\alpha}\Bigl(\sum_{i=1}^m X_i\Bigr) \geq c\,\sum_{i=1}^m N_r^{\alpha}(X_i) \tag{59}$$
for some constant c > 0 and exponent α > 0 is equivalent to the following inequality
$$h_r\Bigl(\sum_{i=1}^m \sqrt{\lambda_i}\,X_i\Bigr) - \sum_{i=1}^m \lambda_i h_r(X_i) \geq \frac{n}{2}\Bigl(\frac{\log c}{\alpha} + \Bigl(\frac{1}{\alpha} - 1\Bigr)H(\lambda)\Bigr) \tag{60}$$
for any positive $\lambda_1, \lambda_2, \ldots, \lambda_m$ such that $\sum_{i=1}^m \lambda_i = 1$, where $H(\lambda) > 0$ denotes the discrete entropy
$$H(\lambda) = \sum_{i=1}^m \lambda_i\log\frac{1}{\lambda_i} > 0.$$
Proof. 
Suppose Equation (59) holds. Then,
$$h_r\Bigl(\sum_{i=1}^m \sqrt{\lambda_i}\,X_i\Bigr) = \frac{n}{2\alpha}\log N_r^{\alpha}\Bigl(\sum_{i=1}^m \sqrt{\lambda_i}\,X_i\Bigr) \tag{62}$$
$$\geq \frac{n}{2\alpha}\log\sum_{i=1}^m N_r^{\alpha}\bigl(\sqrt{\lambda_i}\,X_i\bigr) + \frac{n}{2\alpha}\log c \tag{63}$$
$$= \frac{n}{2\alpha}\log\sum_{i=1}^m \lambda_i^{\alpha}\,N_r^{\alpha}(X_i) + \frac{n}{2\alpha}\log c \tag{64}$$
$$\geq \frac{n}{2\alpha}\sum_{i=1}^m \lambda_i\log\bigl(\lambda_i^{\alpha-1}\,N_r^{\alpha}(X_i)\bigr) + \frac{n}{2\alpha}\log c \tag{65}$$
$$= \sum_{i=1}^m \lambda_i h_r(X_i) + \frac{n(\alpha-1)}{2\alpha}\sum_{i=1}^m \lambda_i\log\lambda_i + \frac{n}{2\alpha}\log c \tag{66}$$
where the scaling property in Equation (56) is used in Equation (64) and the concavity of the logarithm is used in Equation (65). Conversely, suppose that Equation (60) is satisfied for all $\lambda_i > 0$ such that $\sum_{i=1}^m \lambda_i = 1$. Set $\lambda_i = N_r^{\alpha}(X_i)\big/\sum_{i=1}^m N_r^{\alpha}(X_i)$. Then,
$$N_r^{\alpha}\Bigl(\sum_{i=1}^m X_i\Bigr) = \exp\frac{2\alpha}{n}\,h_r\Bigl(\sum_{i=1}^m \sqrt{\lambda_i}\,\frac{X_i}{\sqrt{\lambda_i}}\Bigr)$$
$$\geq \exp\Bigl(\frac{2\alpha}{n}\sum_{i=1}^m \lambda_i\,h_r\Bigl(\frac{X_i}{\sqrt{\lambda_i}}\Bigr)\Bigr)\cdot c\cdot\exp\Bigl((1-\alpha)\sum_{i=1}^m \lambda_i\log\frac{1}{\lambda_i}\Bigr)$$
$$= c\prod_{i=1}^m \Bigl(N_r^{\alpha}\Bigl(\frac{X_i}{\sqrt{\lambda_i}}\Bigr)\,\lambda_i^{\,\alpha-1}\Bigr)^{\lambda_i}$$
$$= c\prod_{i=1}^m \Bigl(\frac{N_r^{\alpha}(X_i)}{\lambda_i}\Bigr)^{\lambda_i}$$
$$= c\,\Bigl(\sum_{i=1}^m N_r^{\alpha}(X_i)\Bigr)^{\sum_{i=1}^m \lambda_i}$$
$$= c\sum_{i=1}^m N_r^{\alpha}(X_i). \;☐$$

5.1. Rényi Entropy Power Inequalities for Orders >1

From Lemma 8 and Corollary 2, it is easy to recover known Rényi EPIs and obtain new ones for orders $r > 1$. In fact, if $r > 1$, then $r' > 0$ and all $r_i'$ are positive and $> r'$. Therefore, all $r_i$ are $< r$ and by monotonicity (Lemma 5),
$$h_{r_i}(X_i) \geq h_r(X_i) \qquad (i = 1, 2, \ldots, m). \tag{73}$$
Plugging this into Equation (53), one obtains
$$h_r\Bigl(\sum_{i=1}^m \sqrt{\lambda_i}\,X_i\Bigr) - \sum_{i=1}^m \lambda_i h_r(X_i) \geq \frac{n}{2}\,r'\Bigl(\frac{\log r}{r} - \sum_{i=1}^m \frac{\log r_i}{r_i}\Bigr) \tag{74}$$
where $\lambda_i = r'/r_i'$ for $i = 1, 2, \ldots, m$. For future reference, define
$$A(\lambda) = |r'|\Bigl(\frac{\log r}{r} - \sum_{i=1}^m \frac{\log r_i}{r_i}\Bigr)$$
$$= |r'|\,\Bigl(\sum_{i=1}^m \Bigl(1 - \frac{\lambda_i}{r'}\Bigr)\log\Bigl(1 - \frac{\lambda_i}{r'}\Bigr) - \Bigl(1 - \frac{1}{r'}\Bigr)\log\Bigl(1 - \frac{1}{r'}\Bigr)\Bigr).$$
(The absolute value $|r'|$ is needed in the next subsection, where $r'$ is negative.)
This function is strictly convex in $\lambda = (\lambda_1, \lambda_2, \ldots, \lambda_m)$ because $x \mapsto (1 - x/r')\log(1 - x/r')$ is strictly convex. Note that $A(\lambda)$ vanishes in the limiting cases where $\lambda$ tends to one of the standard unit vectors $(1, 0, \ldots, 0)$, $(0, 1, 0, \ldots, 0)$, …, $(0, 0, \ldots, 0, 1)$; since every $\lambda$ is a convex combination of these vectors and $A(\lambda)$ is strictly convex, one has $A(\lambda) < 0$.
Theorem 4
(Ram and Sason [14]). The Rényi EPI in Equation (57) holds for $r > 1$ and $c = r^{r'/r}\bigl(1 - \frac{1}{m r'}\bigr)^{m r' - 1}$.
Proof. 
By Lemma 8 for $\alpha = 1$, we only need to check that the r.h.s. of Equation (74) is greater than $\frac{n}{2}\log c$ for any choice of the $\lambda_i$'s, that is, for any choice of exponents $r_i$ such that $\sum_{i=1}^m\frac{1}{r_i'} = \frac{1}{r'}$. Thus, Equation (57) will hold for $\log c = \min_\lambda A(\lambda)$. Since $-\frac{\log r_i}{r_i} = \frac{1}{r_i}\log\frac{1}{r_i}$ and $\sum_{i=1}^m\frac{1}{r_i} = m - \frac{1}{r'}$, minimizing $A(\lambda)$ amounts to minimizing $\sum_{i=1}^m\frac{1}{r_i}\log\frac{1}{r_i}$. Now, by the log-sum inequality ([4] Theorem 2.7.1),
$$\sum_{i=1}^m \frac{1}{r_i}\log\frac{1}{r_i} \geq \Bigl(\sum_{i=1}^m \frac{1}{r_i}\Bigr)\log\frac{\sum_{i=1}^m \frac{1}{r_i}}{m} = \Bigl(m - \frac{1}{r'}\Bigr)\log\frac{m - \frac{1}{r'}}{m}$$
with equality if and only if all $r_i$ are equal, that is, the $\lambda_i$ are all equal to $1/m$. Thus, $\min_\lambda A(\lambda) = r'\Bigl(\frac{\log r}{r} + \bigl(m - \frac{1}{r'}\bigr)\log\bigl(1 - \frac{1}{m r'}\bigr)\Bigr)$, which yields $c = r^{r'/r}\bigl(1 - \frac{1}{m r'}\bigr)^{m r' - 1}$. ☐
An alternate proof is to argue that A ( λ ) is convex and symmetrical in λ = ( λ 1 , λ 2 , , λ m ) and is, therefore, minimized when all λ i are equal.
Remark 5.
The above constant $c$ is certainly not optimal, since equality in Equation (73) holds if and only if the $X_i$ are uniformly distributed (Lemma 5), while equality in Equation (53) holds if and only if the $X_i$ are identically normally distributed (Corollary 2). Ram and Sason [14] tightened Equation (57) further using optimization techniques, resulting in a constant that depends on the relative values of the entropy powers themselves.
Remark 6.
It can be noted that $\log c = \frac{r'}{r}\log r + (m r' - 1)\log\bigl(1 - \frac{1}{m r'}\bigr) < 0$ decreases (and tends to $\frac{r'}{r}\log r - 1$) as $m$ increases; in fact, $\frac{\partial \log c}{\partial m} = r'\log\bigl(1 - \frac{1}{m r'}\bigr) + \frac{1}{m} < r'\cdot\bigl(-\frac{1}{m r'}\bigr) + \frac{1}{m} = 0$. Thus, a universal constant independent of $m$ is obtained by taking
$$c = \inf_m\; r^{r'/r}\Bigl(1 - \frac{1}{m r'}\Bigr)^{m r' - 1} = r^{r'/r}\lim_{m\to\infty}\Bigl(1 - \frac{1}{m r'}\Bigr)^{m r' - 1} = \frac{r^{r'/r}}{e}.$$
This was the constant established by Bobkov and Chistyakov [13].
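A numerical evaluation of these constants (a sketch, not from the paper, assuming NumPy): the Ram–Sason constant of Theorem 4 is computed for several $m$ and compared with the Bobkov–Chistyakov limit $r^{r'/r}/e$ of Remark 6; the values of $r$ and $m$ are arbitrary.

```python
# Sketch: c(r, m) = r^{r'/r} (1 - 1/(m r'))^{m r' - 1} decreases in m towards r^{r'/r}/e.
import numpy as np

def ram_sason_c(r, m):
    rp = r / (r - 1.0)                       # conjugate exponent r'
    return r ** (rp / r) * (1.0 - 1.0 / (m * rp)) ** (m * rp - 1.0)

r = 2.0
for m in [2, 5, 10, 100, 1000]:
    print(m, ram_sason_c(r, m))
print("limit:", r ** ((r / (r - 1.0)) / r) / np.e)   # Bobkov-Chistyakov constant
```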
Theorem 5.
The Rényi EPI in Equation (58) holds for $r > 1$ and $\alpha = \Bigl(1 + \frac{r'}{r}\log_2 r + (2r'-1)\log_2\bigl(1 - \frac{1}{2r'}\bigr)\Bigr)^{-1}$, and this value of $\alpha$ cannot be improved using the method of this paper by making it depend on $m$.
Proof. 
By Lemma 8, for $c = 1$, we only need to check that the r.h.s. of Equation (74) is greater than $\frac{n}{2}\bigl(\frac{1}{\alpha}-1\bigr)H(\lambda)$ for any choice of $\lambda_i$'s, that is, for any choice of exponents $r_i$ such that $\sum_{i=1}^m\frac{1}{r_i'} = \frac{1}{r'}$. Thus, Equation (58) will hold for $\frac{1}{\alpha} - 1 = \min_\lambda \frac{A(\lambda)}{H(\lambda)}$. By the proof of the preceding theorem, the numerator is minimized when all $\lambda_i$ are equal, and this also maximizes the entropy ($= \log m$) in the denominator. However, one cannot conclude yet since the minimum in the numerator is negative.
A stationary point is easily obtained by the Lagrangian method, which implies that $\frac{\partial}{\partial\lambda_i}\frac{A(\lambda)}{H(\lambda)}$ is constant independent of $i$. This gives that $\frac{A(\lambda)}{H(\lambda)}\log\lambda_i - \log\bigl(1 - \frac{\lambda_i}{r'}\bigr)$ is constant, hence a stationary point is obtained when all $\lambda_i$ are equal (to $1/m$), and the corresponding value of $A(\lambda)/H(\lambda)$ is $\bigl(r'\frac{\log r}{r} + (mr'-1)\log\bigl(1 - \frac{1}{mr'}\bigr)\bigr)\big/\log m$.
However, the boundary of the domain of $A(\lambda)/H(\lambda)$ is that of the simplex $\{\sum_i\lambda_i = 1,\ \lambda_i \geq 0\}$, where on each edge joining two standard unit vectors, $A(\lambda)/H(\lambda)$ has the same expression as for $m = 2$. Li [16] showed (this is also easily proved using ([17] Lemma 8)) that, for $m = 2$, the minimum is obtained when $\lambda = (1/2, 1/2)$. The corresponding value of $A(\lambda)/H(\lambda)$ is $\bigl(r'\frac{\log r}{r} + (2r'-1)\log\bigl(1 - \frac{1}{2r'}\bigr)\bigr)\big/\log 2$, which is easily seen to be less than $\bigl(r'\frac{\log r}{r} + (mr'-1)\log\bigl(1 - \frac{1}{mr'}\bigr)\bigr)\big/\log m$ for any $m \geq 2$.
Therefore, the minimum of $A(\lambda)/H(\lambda)$ is attained at the boundary when all $\lambda_i$ are zero except two of them equal to $1/2$. This gives $\frac{1}{\alpha} - 1 = \bigl(r'\frac{\log r}{r} + (2r'-1)\log\bigl(1 - \frac{1}{2r'}\bigr)\bigr)\big/\log 2$. ☐
Remark 7.
The case $m = 2$ yields $\alpha = \dfrac{r-1}{(r+1)\log_2(r+1) - r\log_2 r - 2}$, which was found by Li [16], who remarked that this value of $\alpha$ is strictly smaller (better) than the value $\alpha = \frac{r+1}{2}$ obtained by Bobkov and Marsiglietti [15].
Interestingly, for $m > 2$, the exponent of Theorem 5 cannot be improved by this method. In fact, in the above proof, it is easily seen that $\bigl(r'\frac{\log r}{r} + (mr'-1)\log\bigl(1 - \frac{1}{mr'}\bigr)\bigr)\big/\log m$ is negative and increases toward 0 as $m$ increases. Therefore, the exponent $\alpha$ cannot be decreased (improved) as $m$ increases.
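A numerical comparison of these exponents (a sketch, not from the paper, assuming NumPy, and using the expressions as reconstructed above): the value of $\alpha$ from Theorem 5, Li's equivalent closed form for $m = 2$, and the larger (weaker) Bobkov–Marsiglietti exponent $(r+1)/2$.

```python
# Sketch: Theorem 5's alpha equals Li's m = 2 expression and improves on (r+1)/2.
import numpy as np

def alpha_theorem5(r):
    rp = r / (r - 1.0)                                    # conjugate exponent r'
    inv = 1.0 + (rp / r) * np.log2(r) + (2 * rp - 1.0) * np.log2(1.0 - 1.0 / (2 * rp))
    return 1.0 / inv

def alpha_li(r):
    return (r - 1.0) / ((r + 1.0) * np.log2(r + 1.0) - r * np.log2(r) - 2.0)

for r in [1.5, 2.0, 3.0, 5.0]:
    print(r, alpha_theorem5(r), alpha_li(r), (r + 1.0) / 2.0)
    # the first two agree; both are smaller (better) than (r+1)/2
```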
The above value of α is > 1 . However, using the same method, it is easy to obtain Rényi EPIs with exponent values α < 1 . This is given by the following Theorem 6.
Theorem 6.
The Rényi EPI in Equation (59) holds for $r > 1$ and $0 < \alpha < 1$ with $c = \Bigl(m\,r^{r'/r}\bigl(1 - \frac{1}{m r'}\bigr)^{m r' - 1}\Bigr)^{\alpha}\big/m$.
Proof. 
By Lemma 8, we only need to check that the r.h.s. of Equation (74) is greater than $\frac{n}{2}\bigl(\frac{\log c}{\alpha} + (\frac{1}{\alpha}-1)H(\lambda)\bigr)$, that is, $A(\lambda) \geq \frac{\log c}{\alpha} + \bigl(\frac{1}{\alpha}-1\bigr)H(\lambda)$, for any choice of $\lambda_i$'s, that is, for any choice of exponents $r_i$ such that $\sum_{i=1}^m\frac{1}{r_i'} = \frac{1}{r'}$. Thus, for a given $0 < \alpha < 1$, Equation (59) will hold for $\log c = \min_\lambda\bigl(\alpha A(\lambda) - (1-\alpha)H(\lambda)\bigr)$. Since $A(\lambda)$ is convex and $H(\lambda)$ is concave, this objective is convex and symmetric in $\lambda$, so from the preceding proofs the minimum is attained when all the $\lambda_i$'s are equal. This gives $\log c = \alpha\Bigl(\frac{r'}{r}\log r + (mr'-1)\log\bigl(1 - \frac{1}{mr'}\bigr)\Bigr) - (1-\alpha)\log m$. ☐

5.2. Rényi Entropy Power Inequalities for Orders <1 and Log-Concave Densities

If $r < 1$, then $r' < 0$ and all $r_i'$ are negative and $< r'$. Therefore, all $r_i$ are $> r$ and by monotonicity (Lemma 5), the opposite inequality of Equation (73) holds and the method of the preceding subsection fails. For log-concave densities, however, Equation (73) can be replaced by a similar inequality in the right direction.
Definition 6 (Log-Concave Density).
A density $f$ is log-concave if $\log f$ is concave in its support, i.e., for all $0 < \mu < 1$,
$$f(x)^{\mu}\,f(y)^{1-\mu} \leq f\bigl(\mu x + (1-\mu)y\bigr). \tag{79}$$
Lemma 9.
If $X$ has a log-concave density, then $h_p(pX) - p\,h_p(X) = n\log p + (1-p)h_p(X)$ is concave in $p$.
As noted below in Corollary 3, this is essentially a result obtained by Fradelizi, Madiman and Wang [28]. The following alternate proof uses the transport properties seen in Section 4.
Proof. 
Define $r = \lambda p + (1-\lambda)q$ where $0 < \lambda < 1$. By Lemma 1, there exist two diffeomorphisms $T, U$ such that one can write $pX_p = T(X^*)$ and $qX_q = U(X^*)$. Then, $X^*$ has density
$$\frac{1}{p^n}\,f_p\Bigl(\frac{T(x^*)}{p}\Bigr)\,|T'(x^*)| = \frac{1}{q^n}\,f_q\Bigl(\frac{U(x^*)}{q}\Bigr)\,|U'(x^*)| = \frac{1}{p^{\lambda n}\,q^{(1-\lambda)n}}\,f_p\Bigl(\frac{T(x^*)}{p}\Bigr)^{\lambda}\,f_q\Bigl(\frac{U(x^*)}{q}\Bigr)^{1-\lambda}\,|T'(x^*)|^{\lambda}\,|U'(x^*)|^{1-\lambda}. \tag{80}$$
Now, by log-concavity, Equation (79) with $\mu = \lambda p/r$,
$$f_p\Bigl(\frac{T(x^*)}{p}\Bigr)^{\lambda}\,f_q\Bigl(\frac{U(x^*)}{q}\Bigr)^{1-\lambda} = \frac{1}{\|f\|_p^{\lambda p}\,\|f\|_q^{(1-\lambda)q}}\,f\Bigl(\frac{T(x^*)}{p}\Bigr)^{\lambda p}\,f\Bigl(\frac{U(x^*)}{q}\Bigr)^{(1-\lambda)q}$$
$$\leq \frac{1}{\|f\|_p^{\lambda p}\,\|f\|_q^{(1-\lambda)q}}\,f\Bigl(\frac{\lambda T(x^*) + (1-\lambda)U(x^*)}{r}\Bigr)^{r}$$
$$= \frac{\|f\|_r^r}{\|f\|_p^{\lambda p}\,\|f\|_q^{(1-\lambda)q}}\,f_r\Bigl(\frac{\lambda T(x^*) + (1-\lambda)U(x^*)}{r}\Bigr).$$
Using the arithmetic-geometric mean inequality in Equation (41) and integrating the density in Equation (80) over $x^* \in \mathbb{R}^n$, one obtains
$$\bigl(p^{\lambda}\,q^{1-\lambda}\bigr)^n\,\|f\|_p^{\lambda p}\,\|f\|_q^{(1-\lambda)q} \leq r^n\,\|f\|_r^r.$$
Taking the logarithm yields the announced concavity. ☐
As a side result, it is interesting to note that we have obtained a simple transportation proof of the following varentropy bound:
Corollary 3
(Varentropy Bound [28]). One has $\mathrm{Var}\bigl(\log f(X_p)\bigr) \leq n/p^2$, that is, $\mathrm{Var}\bigl(\log f_p(X_p)\bigr) \leq n$.
Proof. 
Since $n\log p + (1-p)h_p(X)$ is concave, one has $\frac{\partial^2}{\partial p^2}\bigl(n\log p + (1-p)h_p(X)\bigr) \leq 0$, that is,
$$\frac{\partial^2}{\partial p^2}\bigl((1-p)h_p(X)\bigr) \leq \frac{n}{p^2}.$$
Differentiating twice using Leibniz's product rule and plugging the identity in Equation (12), the l.h.s. of this inequality becomes
$$\frac{\partial}{\partial p}\Bigl(-h_p(X) - \frac{D(f_p\|f)}{1-p}\Bigr) = \frac{\partial}{\partial p}\int f_p\log f = \int f_p\log^2 f - \Bigl(\int f_p\log f\Bigr)^2 = \mathrm{Var}\bigl(\log f(X_p)\bigr). \;☐$$
As another easy consequence of Lemma 9, since $n\log p + (1-p)h_p(X)$ is concave and vanishes for $p = 1$, the slopes $\frac{n\log p + (1-p)h_p(X)}{p-1}$ are non-increasing in $p$. In other words, $h_p(X) + n\,\frac{\log p}{1-p}$ is nondecreasing in $p$. Therefore:
Corollary 4
(Marsiglietti and Melbourne [17]). If $p < q$, then for any $X$ with log-concave density, $h_p(X) + n\,\frac{\log p}{1-p} \leq h_q(X) + n\,\frac{\log q}{1-q}$.
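A numerical look at Corollary 4 (a sketch, not from the paper, assuming NumPy/SciPy, with $n = 1$): the corrected quantity $h_p(X) + \log p/(1-p)$ is computed by quadrature for two log-concave densities; a standard normal gives a strictly increasing curve, while a one-sided exponential keeps it constant at 0 (an equality case).

```python
# Sketch: h_p(X) + log(p)/(1-p) is nondecreasing in p for log-concave densities.
import numpy as np
from scipy.integrate import quad

def renyi(pdf, p, lo, hi):
    return np.log(quad(lambda x: pdf(x) ** p, lo, hi)[0]) / (1.0 - p)

gauss = lambda x: np.exp(-x * x / 2.0) / np.sqrt(2.0 * np.pi)
expo = lambda x: np.exp(-x)                       # one-sided exponential on [0, inf)

for p in [0.3, 0.7, 1.5, 3.0]:
    corrected = lambda pdf, lo, hi: renyi(pdf, p, lo, hi) + np.log(p) / (1.0 - p)
    print(p, corrected(gauss, -30, 30), corrected(expo, 0, 60))
    # first column increases with p, second stays at 0
```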
We can now use Lemma 8 and Corollary 2 to obtain Rényi EPIs for orders $r < 1$. Since all $r_i$ are $> r$, by Corollary 4,
$$h_{r_i}(X_i) + n\,\frac{\log r_i}{1-r_i} \geq h_r(X_i) + n\,\frac{\log r}{1-r} \qquad (i = 1, 2, \ldots, m).$$
Plugging this into Equation (53), one obtains
$$h_r\Bigl(\sum_{i=1}^m \sqrt{\lambda_i}\,X_i\Bigr) - \sum_{i=1}^m \lambda_i h_r(X_i) \geq n\,\frac{\log r}{1-r} - n\sum_{i=1}^m \lambda_i\,\frac{\log r_i}{1-r_i} + \frac{n}{2}\,r'\Bigl(\frac{\log r}{r} - \sum_{i=1}^m \frac{\log r_i}{r_i}\Bigr)$$
$$= \frac{n}{2}\,r'\Bigl(\sum_{i=1}^m \frac{\log r_i}{r_i} - \frac{\log r}{r}\Bigr) \tag{89}$$
where we have used that $\lambda_i = r'/r_i'$ for $i = 1, 2, \ldots, m$. Notice that the r.h.s. of Equation (89) for $r < 1$ ($r' < 0$) is the opposite of that of Equation (74) for $r > 1$ ($r' > 0$). However, since $r'$ is now negative, the r.h.s. is exactly equal to $\frac{n}{2}A(\lambda)$, which is still convex and negative. For this reason, the proofs of the following theorems for $r < 1$ are almost verbatim repeats of those obtained previously for $r > 1$.
Theorem 7.
The Rényi EPI in Equation (57) for log-concave densities holds for $r < 1$ and $c = r^{|r'|/r}\bigl(1 + \frac{1}{m|r'|}\bigr)^{m|r'| + 1}$.
Proof. 
It is identical to that of Theorem 4 except for the change $|r'| = -r'$ in the expression of $A(\lambda)$. ☐
Theorem 8.
The Rényi EPI in Equation (58) for log-concave densities holds for $r < 1$ and $\alpha = \Bigl(1 + \frac{|r'|}{r}\log_2 r + (2|r'|+1)\log_2\bigl(1 + \frac{1}{2|r'|}\bigr)\Bigr)^{-1}$, and this value of $\alpha$ cannot be improved using the method of this paper by making it depend on $m$.
Proof. 
It is identical to that of Theorem 5 except for the change $|r'| = -r'$ in the expression of $A(\lambda)$. ☐
Remark 8.
The case $m = 2$ yields $\alpha = \dfrac{1-r}{(r+1)\log_2(r+1) - r\log_2 r - 2r}$, which was found by Marsiglietti and Melbourne [17]. Again, the exponent of the theorem is not improved for $m > 2$.
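A numerical check for orders $r < 1$ (a sketch, not from the paper, assuming NumPy, and using the expressions as reconstructed above): the exponent of Theorem 8 agrees with the closed form of Remark 8 and exceeds 1.

```python
# Sketch: Theorem 8's alpha (log-concave densities, r < 1) vs. Remark 8's closed form.
import numpy as np

def alpha_theorem8(r):
    a = r / (1.0 - r)                                     # |r'| for r < 1
    inv = 1.0 + (a / r) * np.log2(r) + (2 * a + 1.0) * np.log2(1.0 + 1.0 / (2 * a))
    return 1.0 / inv

def alpha_remark8(r):
    return (1.0 - r) / ((r + 1.0) * np.log2(r + 1.0) - r * np.log2(r) - 2.0 * r)

for r in [0.2, 0.5, 0.8]:
    print(r, alpha_theorem8(r), alpha_remark8(r))   # same value, greater than 1
```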
Theorem 9.
The Rényi EPI in Equation (59) for log-concave densities holds for $c = \Bigl(m\,r^{|r'|/r}\bigl(1 + \frac{1}{m|r'|}\bigr)^{m|r'|+1}\Bigr)^{\alpha}\big/m$, where $r < 1$ and $0 < \alpha < 1$.
Proof. 
It is identical to that of Theorem 6 except for the change $|r'| = -r'$ in the expression of $A(\lambda)$. ☐

6. Conclusions

This article provides a comprehensive framework to derive known Rényi entropy power inequalities (with shorter proofs), and prove new ones. The framework is based on a transport argument from normal densities and a change of variable by rotation. Only basic properties of Rényi entropies are used in the proofs.
In particular, the α-modification of the EPI is recovered for two or more independent variables for Rényi entropy orders > 1 as well as for orders < 1, and the Rényi EPI with multiplicative constant c is extended to Rényi entropy orders < 1. In addition, a more general formulation with both exponent α and constant c is obtained for all orders. In passing, a simple proof using normal transport of a recent sharp varentropy bound was obtained for log-concave densities.
As a perspective, the methods developed in this paper can perhaps be generalized to obtain reverse Rényi entropy power inequalities (see, e.g., the discussion in [16]).

Funding

This research received no external funding.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Shannon, C.E. A Mathematical Theory of Communication. Bell Syst. Tech. J. 1948, 27, 623–656. [Google Scholar] [CrossRef]
  2. Rioul, O. Information Theoretic Proofs of Entropy Power Inequalities. IEEE Trans. Inf. Theory 2011, 57, 33–55. [Google Scholar] [CrossRef] [Green Version]
  3. Dembo, A.; Cover, T.M.; Thomas, J.A. Information Theoretic Inequalities. IEEE Trans. Inf. Theory 1991, 37, 1501–1518. [Google Scholar] [CrossRef]
  4. Cover, T.M.; Thomas, J.A. Elements of Information Theory, 2nd ed.; Wiley: Hoboken, NJ, USA, 2006. [Google Scholar]
  5. Rényi, A. On Measures of Information and Entropy. In Proceedings of the Fourth Berkeley Symposium on Mathematics, Statistics and Probability, Berkeley, CA, USA, 20 June–30 July 1960; University of California Press: Berkeley, CA, USA, 1960; Volume 1, pp. 547–561. [Google Scholar]
  6. Campbell, L.L. A Coding Theorem and Rényi’s Entropy. Inf. Control 1965, 8, 423–429. [Google Scholar] [CrossRef]
  7. Ben-Bassat, M.; Raviv, J. Rényi’s Entropy and the Probability of Error. IEEE Trans. Inf. Theory 1978, 24, 324–331. [Google Scholar] [CrossRef]
  8. Arimoto, S. Information Measures and Capacity of Order α for Discrete Memoryless Channels. In Topics in Information Theory, Colloquia mathematica Societatis János Bolyai 16; Csiszár, I., Elias, P., Eds.; North Holland: Amsterdam, The Netherlands, 1977; pp. 41–52. [Google Scholar]
  9. Arikan, E. An Inequality on Guessing and its Application to Sequential Decoding. IEEE Trans. Inf. Theory 1996, 42, 99–105. [Google Scholar] [CrossRef] [Green Version]
  10. Erdogmus, D.; Hild, K.E.; Principe, J.C.; Lazaro, M.; Santamaria, I. Adaptive Blind Deconvolution of Linear Channels Using Rényi’s Entropy with Parzen Window Estimation. IEEE Trans. Signal Process. 2004, 52, 1489–1498. [Google Scholar] [CrossRef]
  11. Savaré, G.; Toscani, G. The Concavity of Rényi Entropy Power. IEEE Trans. Inf. Theory 2014, 60, 2687–2693. [Google Scholar] [CrossRef]
  12. Madiman, M.; Melbourne, J.; Xu, P. Forward and Reverse Entropy Power Inequalities in Convex Geometry. In Convexity and Concentration; Carlen, E., Madiman, M., Werner, E., Eds.; IMA Volumes in Mathematics and its Applications; Springer: Berlin, Germany, 2017; Volume 161, pp. 427–485. [Google Scholar]
  13. Bobkov, S.G.; Chistyakov, G.P. Entropy Power Inequality for the Rényi Entropy. IEEE Trans. Inf. Theory 2015, 61, 708–714. [Google Scholar] [CrossRef]
  14. Ram, E.; Sason, I. On Rényi Entropy Power Inequalities. IEEE Trans. Inf. Theory 2016, 62, 6800–6815. [Google Scholar] [CrossRef] [Green Version]
  15. Bobkov, S.G.; Marsiglietti, A. Variants of the Entropy Power Inequality. IEEE Trans. Inf. Theory 2017, 63, 7747–7752. [Google Scholar] [CrossRef] [Green Version]
  16. Li, J. Rényi Entropy Power Inequality and a Reverse. Stud. Math. 2018, 242, 303–319. [Google Scholar] [CrossRef]
  17. Marsiglietti, A.; Melbourne, J. On the Entropy Power Inequality for the Rényi Entropy of Order [0, 1]. arxiv, 2018; arXiv:1710.00800. [Google Scholar]
  18. Rioul, O. Yet Another Proof of the Entropy Power Inequality. IEEE Trans. Inf. Theory 2017, 63, 3595–3599. [Google Scholar] [CrossRef] [Green Version]
  19. Rosenblatt, M. Remarks on a Multivariate Transformation. Ann. Math. Stat. 1952, 23, 470–472. [Google Scholar] [CrossRef]
  20. Knothe, H. Contributions to the theory of convex bodies. Mich. Math. J. 1957, 4, 39–52. [Google Scholar] [CrossRef]
  21. Rioul, O. Optimal Transportation to the Entropy-Power Inequality. In Proceedings of the IEEE Information Theory and Applications Workshop (ITA 2017), San Diego, CA, USA, 12–17 February 2017. [Google Scholar]
  22. Bryc, W. The Normal Distribution—Characterizations with Applications; Lecture Notes in Statistics; Springer: Berlin, Germany, 1995. [Google Scholar]
  23. Rioul, O. Optimal Transport to Rényi Entropies. In Proceedings of the 3rd Conference on Geometric Science of Information (GSI 2017), Paris, France, 7–9 November 2017; Lecture Notes in Computer Science. Springer: Berlin, Germany, 2017. [Google Scholar]
  24. Bercher, J.F. Source Coding with Escort Distributions and Rényi Entropy Bounds. Phys. Lett. A 2009, 373, 3235–3238. [Google Scholar] [CrossRef] [Green Version]
  25. Bercher, J.F. On Generalized Cramér-Rao Inequalities, Generalized Fisher Information and Characterizations of Generalized q-Gaussian Distributions. J. Phys. A Math. Theor. 2012, 45, 255303. [Google Scholar] [CrossRef] [Green Version]
  26. Beck, C.; Schlögl, F. Thermodynamics of Chaotic Systems: An Introduction; Cambridge University Press: Cambridge, UK, 1993. [Google Scholar]
  27. Bercher, J.F.; (ESIEE Paris, Marne-la-Vallée, France). Private communication, 2018.
  28. Fradelizi, M.; Madiman, M.; Wang, L. Optimal Concentration of Information Content for Log-Concave Densities. In High Dimensional Probability VII: The Cargèse Volume; Houdré, C., Mason, D.M., Reynaud-Bouret, P., Rosiński, J., Eds.; Birkhäuser: Basel, Switzerland, 2016. [Google Scholar]
