Article

Refined Young Inequality and Its Application to Divergences

by Shigeru Furuichi 1 and Nicuşor Minculete 2,*
1 Department of Information Science, College of Humanities and Sciences, Nihon University, 3-25-40, Sakurajyousui, Setagaya-ku, Tokyo 156-8550, Japan
2 Faculty of Mathematics and Computer Science, Transilvania University of Braşov, 500091 Braşov, Romania
* Author to whom correspondence should be addressed.
Submission received: 29 March 2021 / Revised: 17 April 2021 / Accepted: 21 April 2021 / Published: 23 April 2021
(This article belongs to the Special Issue Types of Entropies and Divergences with Their Applications)

Abstract

We give bounds on the difference between the weighted arithmetic mean and the weighted geometric mean. These imply refined Young inequalities and reverses of the Young inequality. We also study some properties of the difference between the weighted arithmetic mean and the weighted geometric mean. Applying the newly obtained inequalities, we show some results on the Tsallis divergence, the Rényi divergence, the Jeffreys–Tsallis divergence and the Jensen–Shannon–Tsallis divergence.

1. Introduction

The Young integral inequality is the source of many basic inequalities. Young [1] proved the following: suppose that $f:[0,\infty)\to[0,\infty)$ is an increasing continuous function such that $f(0)=0$ and $\lim_{x\to\infty}f(x)=\infty$. Then:
$$ab \le \int_0^a f(x)\,dx + \int_0^b f^{-1}(x)\,dx, \tag{1}$$
with equality if $b=f(a)$. Such a gap is often used to define the Fenchel–Legendre divergence in information geometry [2,3]. For $f(x)=x^{p-1}$, $(p>1)$, in inequality (1), we deduce the classical Young inequality:
$$ab \le \frac{a^p}{p}+\frac{b^q}{q}, \tag{2}$$
for all $a,b>0$ and $p,q>1$ with $\frac1p+\frac1q=1$. The equality occurs if and only if $a^p=b^q$.
Minguzzi [4] proved a reverse Young inequality in the following way:
$$0 \le \frac{a^p}{p}+\frac{b^q}{q}-ab \le \left(b-a^{p-1}\right)\left(b^{q-1}-a\right), \tag{3}$$
for all $a,b>0$ and $p,q>1$ with $\frac1p+\frac1q=1$.
The classical Young inequality (2) is rewritten as
$$a^{1/p}b^{1/q} \le \frac ap+\frac bq \tag{4}$$
by putting $a\to a^{1/p}$ and $b\to b^{1/q}$. Putting again:
$$a\to\frac{a_j^p}{\sum_{j=1}^n a_j^p}, \qquad b\to\frac{b_j^q}{\sum_{j=1}^n b_j^q}$$
in the inequality (4), we obtain the famous Hölder inequality:
$$\sum_{j=1}^n a_jb_j \le \left(\sum_{j=1}^n a_j^p\right)^{1/p}\left(\sum_{j=1}^n b_j^q\right)^{1/q}, \qquad p,q>1,\ \frac1p+\frac1q=1, \tag{5}$$
for $a_1,\dots,a_n>0$ and $b_1,\dots,b_n>0$. Thus, the inequality (2) is often reformulated as
$$a^pb^{1-p} \le pa+(1-p)b, \qquad a,b>0,\ 0\le p\le1,$$
by putting $1/p\to p$ (so that $1/q=1-p$) in the inequality (4). It is notable that the $\alpha$-divergence is related to the difference between the weighted arithmetic mean and the weighted geometric mean [5]. For $p=1/2$, we deduce the inequality between the geometric mean and the arithmetic mean, $G(a,b)\equiv\sqrt{ab}\le\frac{a+b}{2}\equiv A(a,b)$. The Heinz mean ([6], Equation (3)) (see also [7]) is defined as $H_p(a,b)=\frac{a^pb^{1-p}+a^{1-p}b^p}{2}$, and $G(a,b)\le H_p(a,b)\le A(a,b)$.
In particular, when we discuss the Young inequality, we refer to this last form. We consider the following expression:
$$d_p(a,b) \equiv pa+(1-p)b-a^pb^{1-p}, \tag{6}$$
which implies that $d_p(a,b)\ge0$ and $d_p(a,a)=d_0(a,b)=d_1(a,b)=0$. We remark the following properties:
$$d_p(a,b)=b\cdot d_p\!\left(\frac ab,1\right), \qquad d_p(a,b)=d_{1-p}(b,a), \qquad d_p\!\left(\frac1a,\frac1b\right)=\frac1{ab}\cdot d_p(b,a).$$
The Cartwright–Field inequality (see, e.g., [8]) is often written as follows:
$$\frac12\,p(1-p)\frac{(a-b)^2}{\max\{a,b\}} \le d_p(a,b) \le \frac12\,p(1-p)\frac{(a-b)^2}{\min\{a,b\}}, \tag{7}$$
for $a,b>0$ and $0\le p\le1$. This double inequality gives an improvement of the Young inequality and, at the same time, a reverse inequality for the Young inequality.
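The quantity $d_p(a,b)$ and the bounds above are easy to examine numerically. The following minimal sketch (ours, not part of the original paper; the helper name d is only illustrative) spot-checks the properties listed after (6) and the Cartwright–Field bounds (7) on random inputs.

```python
# Minimal numerical sketch (ours): spot-check the properties of d_p(a, b)
# listed after (6) and the Cartwright-Field bounds (7).
import random

def d(p, a, b):
    # d_p(a, b) = p*a + (1 - p)*b - a^p * b^(1 - p), cf. (6)
    return p * a + (1 - p) * b - a ** p * b ** (1 - p)

random.seed(0)
for _ in range(1000):
    a, b, p = random.uniform(0.01, 5), random.uniform(0.01, 5), random.random()
    assert abs(d(p, a, b) - b * d(p, a / b, 1)) < 1e-9            # homogeneity
    assert abs(d(p, a, b) - d(1 - p, b, a)) < 1e-9                # p <-> 1-p symmetry
    assert abs(d(p, 1 / a, 1 / b) - d(p, b, a) / (a * b)) < 1e-9  # inversion property
    lower = 0.5 * p * (1 - p) * (a - b) ** 2 / max(a, b)          # left of (7)
    upper = 0.5 * p * (1 - p) * (a - b) ** 2 / min(a, b)          # right of (7)
    assert lower - 1e-12 <= d(p, a, b) <= upper + 1e-12
print("all checks passed")
```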
Kober proved in [9] a general result related to an improvement of the inequality between arithmetic and geometric means, which for n = 2 implies the inequality:
$$r\left(\sqrt a-\sqrt b\right)^2 \le d_p(a,b) \le (1-r)\left(\sqrt a-\sqrt b\right)^2, \tag{8}$$
where a , b > 0 , 0 p 1 and r = min p , 1 p . This inequality was rediscovered by Kittaneh and Manasrah in [10] (See also [11]).
Finally, we found, in [12], another improvement of the Young inequality and a reverse inequality, given as
$$r\left(\sqrt a-\sqrt b\right)^2 + A(p)\log^2\frac ab \le d_p(a,b) \le (1-r)\left(\sqrt a-\sqrt b\right)^2 + B(p)\log^2\frac ab, \tag{9}$$
where $a,b\ge1$, $0<p<1$ and $r=\min\{p,1-p\}$, with $A(p)=\frac{p(1-p)}2-\frac r4$ and $B(p)=\frac{p(1-p)}2-\frac{1-r}4$. It is remarkable that the inequalities (9) give a further refinement of (8), since $A(p)\ge0$ and $B(p)\le0$.
In [13], we also presented two inequalities which give two different reverse inequalities for the Young inequality:
$$0 \le d_p(a,b) \le a^pb^{1-p}\exp\!\left(p(1-p)\frac{(a-b)^2}{\min^2\{a,b\}}\right)-a^pb^{1-p} \tag{10}$$
and:
$$0 \le d_p(a,b) \le p(1-p)\log^2\!\left(\frac ab\right)\max\{a,b\}, \tag{11}$$
where a , b > 0 , 0 p 1 . See ([14], Chapter 2) for recent advances on refinements and reverses of the Young inequality.
The $\alpha$-divergence is related to the difference between a weighted arithmetic mean and a geometric mean [5]. We mention that this gap is used in information geometry to define the Fenchel–Legendre divergence [2,3]. We give bounds on the difference between the weighted arithmetic mean and the weighted geometric mean. These imply refined Young inequalities and reverses of the Young inequality. We also study some properties of the difference between the weighted arithmetic mean and the weighted geometric mean. Applying the newly obtained inequalities, we show some results on the Tsallis divergence, the Rényi divergence, the Jeffreys–Tsallis divergence and the Jensen–Shannon–Tsallis divergence [15,16]. The parametric Jensen–Shannon divergence can be used to detect unusual data, and it can also be used to perform the relevant analysis of fire experiments [17].

2. Main Results

We give estimates on $d_p(a,b)$ and also study the properties of $d_p(a,b)$. We give the following estimates of $d_p(a,b)$ first.
Theorem 1.
For $0<a,b\le1$ and $0\le p\le1$, we have:
$$r\left(\sqrt a-\sqrt b\right)^2 + A(p)\,ab\cdot\log^2\frac ab \le d_p(a,b) \le (1-r)\left(\sqrt a-\sqrt b\right)^2 + B(p)\,ab\cdot\log^2\frac ab, \tag{12}$$
where $r=\min\{p,1-p\}$ and $A(p)=\frac{p(1-p)}2-\frac r4$, $B(p)=\frac{p(1-p)}2-\frac{1-r}4$.
Proof. 
For $p=0$ or $p=1$ or $a=b$, we have equality. We assume $a\ne b$ and $0<p<1$. Because $0<a,b\le1$, we have $\frac1a,\frac1b\ge1$, so, applying inequality (9), we deduce the following relation:
$$r\left(\frac1{\sqrt a}-\frac1{\sqrt b}\right)^2 + A(p)\log^2\frac ba \le d_p\!\left(\frac1a,\frac1b\right) \le (1-r)\left(\frac1{\sqrt a}-\frac1{\sqrt b}\right)^2 + B(p)\log^2\frac ba. \tag{13}$$
We know that $d_p\!\left(\frac1a,\frac1b\right)=\frac1{ab}\cdot d_{1-p}(a,b)$; if we replace $p$ by $1-p$ in relation (13) and use $A(p)=A(1-p)$ and $B(1-p)=B(p)$, then we obtain the inequality from the statement. □
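As a quick numerical illustration (ours, not from the paper), the bounds of Theorem 1 can be spot-checked on random points of $(0,1]^2$:

```python
# A quick numerical check (ours) of Theorem 1, inequality (12), on (0, 1].
import math, random

def d(p, a, b):
    return p * a + (1 - p) * b - a ** p * b ** (1 - p)

random.seed(1)
for _ in range(1000):
    a, b, p = random.uniform(0.01, 1.0), random.uniform(0.01, 1.0), random.random()
    r = min(p, 1 - p)
    A = p * (1 - p) / 2 - r / 4
    B = p * (1 - p) / 2 - (1 - r) / 4
    L = math.log(a / b) ** 2
    lower = r * (math.sqrt(a) - math.sqrt(b)) ** 2 + A * a * b * L
    upper = (1 - r) * (math.sqrt(a) - math.sqrt(b)) ** 2 + B * a * b * L
    assert lower - 1e-12 <= d(p, a, b) <= upper + 1e-12
print("Theorem 1 holds on this sample")
```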
Theorem 2.
For $a\ge b>0$ and $0<p\le1$, we have:
$$\frac{p(a-b)\left(a^{1-p}-b^{1-p}\right)}{2a^{1-p}} \le d_p(a,b) \le \frac{p(a-b)\left(a^{1-p}-b^{1-p}\right)}{a^{1-p}}.$$
Proof. 
For $p=1$ or $a=b$, we have equality. We assume $a>b$ and $0<p<1$. It is easy to see that:
$$\int_1^x\left(1-t^{p-1}\right)dt = x-1-\frac{x^p-1}{p}. \tag{14}$$
We take $x=a/b$ in (14) and then obtain:
$$pb\int_1^{a/b}\left(1-t^{p-1}\right)dt = d_p(a,b), \qquad 0<p<1.$$
Then, we consider the function $f:[1,a/b]\to\mathbb{R}$ defined by $f(t)\equiv1-t^{p-1}$. By simple calculations, we have:
$$\frac{df(t)}{dt}=(1-p)t^{p-2}\ge0, \qquad \frac{d^2f(t)}{dt^2}=(1-p)(p-2)t^{p-3}\le0.$$
So the function $f$ is concave, and we can apply the Hermite–Hadamard inequality [18]:
$$\frac12\left(f(1)+f(a/b)\right) \le \frac1{a/b-1}\int_1^{a/b}\left(1-t^{p-1}\right)dt \le f\!\left(\frac{1+a/b}2\right).$$
The left-hand side of the inequalities above shows:
$$\frac{p(a-b)\left(a^{1-p}-b^{1-p}\right)}{2a^{1-p}} \le d_p(a,b).$$
Since the function $f(t)\equiv1-t^{p-1}$ is increasing, we have:
$$1-t^{p-1} \le 1-x^{p-1}, \qquad (t\le x,\ 0<p<1).$$
Integrating the above inequality with respect to $t$ from $1$ to $x$, we obtain:
$$\int_1^x\left(1-t^{p-1}\right)dt \le (x-1)\left(1-x^{p-1}\right),$$
which implies:
$$d_p(a,b)=pb\int_1^{a/b}\left(1-t^{p-1}\right)dt \le pb\,(a/b-1)\left(1-(a/b)^{p-1}\right)=\frac{p(a-b)\left(a^{1-p}-b^{1-p}\right)}{a^{1-p}}.$$
□
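For illustration only, this sketch (ours) checks Theorem 2 numerically for random $a\ge b>0$:

```python
# A short numerical check (ours) of Theorem 2 for random a >= b > 0 and 0 < p <= 1.
import random

def d(p, a, b):
    return p * a + (1 - p) * b - a ** p * b ** (1 - p)

random.seed(2)
for _ in range(1000):
    b = random.uniform(0.01, 5)
    a = b + random.uniform(0.0, 5.0)                      # ensures a >= b
    p = random.uniform(1e-6, 1.0)
    core = p * (a - b) * (a ** (1 - p) - b ** (1 - p)) / a ** (1 - p)
    assert core / 2 - 1e-12 <= d(p, a, b) <= core + 1e-12
print("Theorem 2 holds on this sample")
```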
Theorem 3.
For $a,b>0$ and $0\le p\le1$, we have:
$$p(1-p)\frac{(a-b)^2}{\max\{a,b\}} \le d_p(a,b)+d_{1-p}(a,b) \le p(1-p)\frac{(a-b)^2}{\min\{a,b\}}. \tag{15}$$
Proof. 
We give two different proofs (I) and (II).
(I)
For $a=b$ or $p\in\{0,1\}$, we obtain equality in the relation from the statement. Thus, we assume $a\ne b$ and $p\in(0,1)$. It is easy to see that $d_p(a,b)+d_{1-p}(a,b)=a+b-a^pb^{1-p}-a^{1-p}b^p=\left(a^p-b^p\right)\left(a^{1-p}-b^{1-p}\right)$. Using the Lagrange theorem, there exist $c_1$ and $c_2$ between $a$ and $b$ such that $\left(a^p-b^p\right)\left(a^{1-p}-b^{1-p}\right)=p(1-p)(a-b)^2c_1^{p-1}c_2^{-p}$. However, we have the inequality $\frac1{\max\{a,b\}}\le\frac1{c_1^{1-p}c_2^{p}}\le\frac1{\min\{a,b\}}$. Therefore, we deduce the inequality of the statement.
(II)
Using the Cartwright–Field inequality, we have:
$$\frac12\,p(1-p)\frac{(a-b)^2}{\max\{a,b\}} \le d_p(a,b) \le \frac12\,p(1-p)\frac{(a-b)^2}{\min\{a,b\}},$$
and if we replace $p$ by $1-p$, we deduce:
$$\frac12\,p(1-p)\frac{(a-b)^2}{\max\{a,b\}} \le d_{1-p}(a,b) \le \frac12\,p(1-p)\frac{(a-b)^2}{\min\{a,b\}},$$
for $a,b>0$ and $0\le p\le1$. By summing these inequalities, we prove the inequality of the statement. □
Remark 1.
(i) From the proof of Theorem 3, we obtain $A(a,b)-H_p(a,b)=\frac{d_p(a,b)+d_{1-p}(a,b)}2$, so we deduce an estimation for the Heinz mean:
$$A(a,b)-\frac12\,p(1-p)\frac{(a-b)^2}{\min\{a,b\}} \le H_p(a,b) \le A(a,b)-\frac12\,p(1-p)\frac{(a-b)^2}{\max\{a,b\}}.$$
(ii) Since $d_p(a,b)+d_{1-p}(a,b)=\left(a^p-b^p\right)\left(a^{1-p}-b^{1-p}\right)$ and $d_{1-p}(a,b)\ge0$, we have $0\le d_p(a,b)\le\left(a^p-b^p\right)\left(a^{1-p}-b^{1-p}\right)$, which is in fact the inequality given by Minguzzi (3).
Theorem 4.
Let $a,b>0$ and $0\le p\le1$.
(i) For $1/2\le p\le1$, $a\ge b$ or $0\le p\le1/2$, $a\le b$, we have $d_p(a,b)\ge d_{1-p}(a,b)$.
(ii) For $0\le p\le1/2$, $a\ge b$ or $1/2\le p\le1$, $a\le b$, we have $d_p(a,b)\le d_{1-p}(a,b)$.
Proof. 
For $a=b$ or $p\in\{0,1\}$, we obtain equality in the relations from the statement. Thus, we assume $a\ne b$ and $p\in(0,1)$. We have:
$$d_p(a,b)-d_{1-p}(a,b)=(2p-1)(a-b)-a^pb^{1-p}+a^{1-p}b^p=b\left[(2p-1)\left(\frac ab-1\right)-\left(\frac ab\right)^p+\left(\frac ab\right)^{1-p}\right].$$
We consider the function $f:(0,\infty)\to\mathbb{R}$ defined by $f(t)=(2p-1)(t-1)-t^p+t^{1-p}$. We calculate the derivatives of $f$; thus, we have:
$$\frac{df(t)}{dt}=(2p-1)-pt^{p-1}+(1-p)t^{-p}, \qquad \frac{d^2f(t)}{dt^2}=(1-p)pt^{p-2}-p(1-p)t^{-p-1}=p(1-p)t^{-p-1}\left(t^{2p-1}-1\right).$$
For $t>1$ and $1/2\le p<1$, we have $\frac{d^2f(t)}{dt^2}>0$, so the function $\frac{df}{dt}$ is increasing and we obtain $\frac{df(t)}{dt}>\frac{df(1)}{dt}=0$; this implies that the function $f$ is increasing, so $f(t)>f(1)=0$, which means that $(2p-1)(t-1)-t^p+t^{1-p}>0$. For $t=a/b>1$, we find that $d_p(a,b)>d_{1-p}(a,b)$. For $t<1$ and $0<p\le1/2$, we have $\frac{d^2f(t)}{dt^2}>0$, so $\frac{df}{dt}$ is increasing and we obtain $\frac{df(t)}{dt}<\frac{df(1)}{dt}=0$; this implies that $f$ is decreasing, so $f(t)>f(1)=0$, which again means that $(2p-1)(t-1)-t^p+t^{1-p}>0$. For $t=a/b<1$, we find that $d_p(a,b)>d_{1-p}(a,b)$. In an analogous way, we show the inequality in (ii). □
Remark 2.
From (i) in Theorem 4, for $1/2\le p\le1$ and $a\ge b$, we have $d_p(a,b)\ge d_{1-p}(a,b)$, so we obtain:
$$\frac12\,p(1-p)\frac{(a-b)^2}{\max\{a,b\}} \le d_p(a,b),$$
which is just the left-hand side of the Cartwright–Field inequality:
$$\frac12\,p(1-p)\frac{(a-b)^2}{\max\{a,b\}} \le d_p(a,b) \le \frac12\,p(1-p)\frac{(a-b)^2}{\min\{a,b\}}, \qquad (a,b>0,\ 0\le p\le1).$$
Therefore, it is quite natural to consider the following inequality:
$$d_p(a,b) \le \frac12\left(\frac12\,p(1-p)\frac{(a-b)^2}{\max\{a,b\}}+\frac12\,p(1-p)\frac{(a-b)^2}{\min\{a,b\}}\right)=\frac14\,p(1-p)(a-b)^2\,\frac{a+b}{ab},$$
and to ask whether it holds or not for the general case $a,b>0$ and $0\le p\le1$. However, this inequality does not hold in general. We set the function:
$$h_p(t)=pt+1-p-t^p-\frac{p(1-p)}4(t-1)^2\,\frac{t+1}t, \qquad t>0,\ 0\le p\le1.$$
Then, we have $h_{0.1}(0.3)\approx-0.00434315$, $h_{0.1}(0.6)\approx0.000199783$, and also $h_{0.9}(1.8)\approx0.000352199$, $h_{0.9}(2.6)\approx-0.00282073$, so $h_p(t)$ takes both signs.
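These values are easy to reproduce; the short sketch below (ours, not part of the paper) simply evaluates $h_p(t)$ at the four quoted points.

```python
# Reproducing the sign pattern quoted above (sketch, ours): h_p(t) changes sign,
# so the proposed averaged upper bound cannot hold for all a, b > 0.
def h(p, t):
    return p * t + 1 - p - t ** p - p * (1 - p) / 4 * (t - 1) ** 2 * (t + 1) / t

for p, t in [(0.1, 0.3), (0.1, 0.6), (0.9, 1.8), (0.9, 2.6)]:
    print(f"h_{p}({t}) = {h(p, t):+.6g}")
# prints approximately -0.00434, +0.0002, +0.000352, -0.00282
```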
Theorem 5.
For $a,b\ge1$ and $0\le p\le1$, we have:
$$\frac12\,p(1-p)\frac{(a-b)^2}{\max\{a,b\}} \le \frac12\,E_p(a,b) \le d_p(a,b), \tag{18}$$
where $E_p(a,b)\equiv\min\left\{\dfrac{p(a-b)\left(a^{1-p}-b^{1-p}\right)}{\max\{a,b\}^{1-p}},\ \dfrac{(1-p)(a-b)\left(a^p-b^p\right)}{\max\{a,b\}^{p}}\right\}=E_{1-p}(a,b)$.
Proof. 
For $p=0$ or $p=1$ or $a=b$, we have equality. We assume $a\ne b$ and $0<p<1$. If $b<a$, then using Theorem 2, we have:
$$\frac{p(a-b)\left(a^{1-p}-b^{1-p}\right)}{2a^{1-p}} \le d_p(a,b).$$
Using the Lagrange theorem, we obtain $a^{1-p}-b^{1-p}=\frac{(1-p)(a-b)}{\phi^{p}}$, where $b<\phi<a$. For $b\ge1$, we deduce $a^{1-p}-b^{1-p}\ge\frac{(1-p)(a-b)}{a^{p}}$, which means that $\frac12\,p(1-p)\frac{(b-a)^2}{a}\le\frac{p(a-b)\left(a^{1-p}-b^{1-p}\right)}{2a^{1-p}}$. If $b>a$ and we replace $p$ by $1-p$, then Theorem 2 implies:
$$\frac{(1-p)(a-b)\left(a^{p}-b^{p}\right)}{2b^{p}} \le d_p(a,b).$$
Using the Lagrange theorem, we obtain $b^p-a^p=p(b-a)\theta^{p-1}$, where $a<\theta<b$. For $a\ge1$, we deduce $b^p-a^p\ge p(b-a)b^{p-1}$, which means that $\frac12\,p(1-p)\frac{(b-a)^2}{b}\le\frac{(1-p)(a-b)\left(a^p-b^p\right)}{2b^{p}}$. Taking into account the above considerations, we prove the statement. □
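A numerical spot-check of Theorem 5 is given below (ours, for illustration only; the helper E mirrors the definition in the statement).

```python
# A numerical spot-check (ours) of Theorem 5, inequality (18), for a, b >= 1.
import random

def d(p, a, b):
    return p * a + (1 - p) * b - a ** p * b ** (1 - p)

def E(p, a, b):
    m = max(a, b)
    return min(p * (a - b) * (a ** (1 - p) - b ** (1 - p)) / m ** (1 - p),
               (1 - p) * (a - b) * (a ** p - b ** p) / m ** p)

random.seed(3)
for _ in range(1000):
    a, b, p = random.uniform(1, 10), random.uniform(1, 10), random.random()
    lower = 0.5 * p * (1 - p) * (a - b) ** 2 / max(a, b)
    assert lower - 1e-12 <= 0.5 * E(p, a, b) <= d(p, a, b) + 1e-12
print("Theorem 5 holds on this sample")
```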
Corollary 1.
For $0<a,b\le1$ and $0\le p\le1$, we have:
$$\frac12\,p(1-p)\frac{(a-b)^2}{\max\{a,b\}} \le \frac{ab}2\,E_p\!\left(\frac1a,\frac1b\right) \le d_p(a,b), \tag{20}$$
where $E_\cdot(\cdot,\cdot)$ is given in Theorem 5.
Proof. 
For $p=0$ or $p=1$ or $a=b$, we have the equality. We assume $a\ne b$ and $0<p<1$. If in inequality (18) we replace $a,b\ge1$ by $\frac1a,\frac1b\ge1$, we deduce:
$$\frac12\,p(1-p)\frac{(a-b)^2}{ab\max\{a,b\}} \le \frac12\,E_p\!\left(\frac1a,\frac1b\right) \le d_p\!\left(\frac1a,\frac1b\right)=\frac1{ab}\,d_{1-p}(a,b).$$
Replacing $p$ by $1-p$, using $E_p=E_{1-p}$, and multiplying by $ab$, we prove the inequalities of the statement. □
Theorem 6.
For $a,b>0$ and $0\le p\le1$, we have:
$$d_p(a,b) \le (1-p)\frac{(a-b)^2}{b}. \tag{21}$$
Proof. 
For $p=0$ or $p=1$ or $a=b$, we have equality in the relation from the statement. We assume $a\ne b$ and $0<p<1$. We consider the function $f:(0,\infty)\to\mathbb{R}$ defined by $f(t)=1-t^{p-1}-(1-p)(t-1)$, $p\in[0,1]$. For $t\in(0,1]$, we have $\frac{df(t)}{dt}=(1-p)\left(t^{p-2}-1\right)\ge0$, which implies that $f$ is increasing, so we deduce $f(t)\le f(1)=0$. For $t\in[1,\infty)$, we have $\frac{df(t)}{dt}\le0$, which implies that $f$ is decreasing, so we obtain $f(t)\le f(1)=0$. Therefore, we find the following inequality:
$$1-t^{p-1} \le (1-p)(t-1).$$
Multiplying the above inequality by $t>0$, we have:
$$t-t^{p} \le (1-p)\left(t^2-t\right),$$
which is equivalent to the inequality:
$$pt+(1-p)-t^p \le (1-p)(t-1)^2,$$
for all $t>0$ and $p\in[0,1]$. Therefore, if we take $t=\frac ab$ in the above inequality, after some calculations we deduce the inequality of the statement. □
Corollary 2.
For $a,b>0$ and $0\le p\le1$, we have:
$$d_p(a,b)+d_{1-p}(a,b) \le (1-p)\frac{(a-b)^2(a+b)}{ab}. \tag{22}$$
Proof. 
For $p=0$ or $p=1$ or $a=b$, we have the equality. We assume $a\ne b$ and $0<p<1$. If in inequality (21) we exchange $a$ with $b$, we deduce:
$$d_p(b,a) \le (1-p)\frac{(a-b)^2}{a}.$$
However, $d_p(b,a)=d_{1-p}(a,b)$, so we have:
$$d_p(a,b)+d_{1-p}(a,b) \le (1-p)\frac{(a-b)^2}{b}+(1-p)\frac{(a-b)^2}{a}=(1-p)\frac{(a-b)^2(a+b)}{ab}.$$
Consequently, we prove the inequality of the statement. □
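The upper bounds (21) and (22) can also be spot-checked numerically; the brief sketch below is ours and only illustrative.

```python
# A brief numerical check (ours) of the upper bounds (21) and (22).
import random

def d(p, a, b):
    return p * a + (1 - p) * b - a ** p * b ** (1 - p)

random.seed(4)
for _ in range(1000):
    a, b, p = random.uniform(0.01, 5), random.uniform(0.01, 5), random.random()
    assert d(p, a, b) <= (1 - p) * (a - b) ** 2 / b + 1e-12              # (21)
    assert (d(p, a, b) + d(1 - p, a, b)
            <= (1 - p) * (a - b) ** 2 * (a + b) / (a * b) + 1e-12)       # (22)
print("Theorem 6 and Corollary 2 hold on this sample")
```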

3. Applications to Some Divergences

The Tsallis divergence (e.g., [19,20]) is defined for two probability distributions $p\equiv\{p_1,\dots,p_n\}$ and $r\equiv\{r_1,\dots,r_n\}$ with $p_j>0$ and $r_j>0$ for all $j=1,\dots,n$ as
$$D_q^T(p|r) \equiv \sum_{j=1}^n\frac{p_j-p_j^qr_j^{1-q}}{1-q}, \qquad (q>0,\ q\ne1).$$
The Rényi divergence (e.g., [21]) is defined by
$$D_q^R(p|r) \equiv \frac1{q-1}\log\sum_{j=1}^n p_j^qr_j^{1-q}.$$
We see (e.g., in [22]) that:
$$D_q^R(p|r)=\frac1{q-1}\log\left(1+(q-1)D_q^T(p|r)\right). \tag{23}$$
It is also known that:
$$\lim_{q\to1}D_q^T(p|r)=\lim_{q\to1}D_q^R(p|r)=\sum_{j=1}^n p_j\log\frac{p_j}{r_j}\equiv D(p|r),$$
where $D(p|r)$ is the standard divergence (KL information, relative entropy). The Jeffreys divergence (see [22,23]) is defined by $J_1(p|r)\equiv D(p|r)+D(r|p)$ and the Jensen–Shannon divergence [15,16] is defined by
$$JS_1(p|r) \equiv \frac12\,D\!\left(p\,\Big|\,\frac{p+r}2\right)+\frac12\,D\!\left(r\,\Big|\,\frac{p+r}2\right).$$
In [24], the Jeffreys and the Jensen–Shannon divergences are extended to biparametric forms. In [23], Furuichi and Mitroi generalized these divergences to the Jeffreys–Tsallis divergence, given by $J_q(p|r)\equiv D_q^T(p|r)+D_q^T(r|p)$, and to the Jensen–Shannon–Tsallis divergence, defined as
$$JS_q(p|r) \equiv \frac12\,D_q^T\!\left(p\,\Big|\,\frac{p+r}2\right)+\frac12\,D_q^T\!\left(r\,\Big|\,\frac{p+r}2\right).$$
Several properties of divergences can be extended to operator theory [25].
For the Tsallis divergence, we have the following relations.
Theorem 7.
For two probability distributions $p\equiv\{p_1,\dots,p_n\}$ and $r\equiv\{r_1,\dots,r_n\}$ with $p_j>0$ and $r_j>0$ for all $j=1,\dots,n$, we have:
$$q\sum_{j=1}^n\frac{(p_j-r_j)^2}{\max\{p_j,r_j\}} \le J_q(p|r) \le q\sum_{j=1}^n\frac{(p_j-r_j)^2}{\min\{p_j,r_j\}}, \qquad (0<q<1). \tag{24}$$
Proof. 
From the definition of the Tsallis divergence, we deduce the equality:
$$J_q(p|r)=\sum_{j=1}^n\frac{p_j+r_j-p_j^qr_j^{1-q}-p_j^{1-q}r_j^q}{1-q}=\frac1{1-q}\sum_{j=1}^n\left(d_q(p_j,r_j)+d_{1-q}(p_j,r_j)\right),$$
where $d_\cdot(\cdot,\cdot)$ is defined in (6). Applying Theorem 3, we obtain:
$$q(1-q)\sum_{j=1}^n\frac{(p_j-r_j)^2}{\max\{p_j,r_j\}} \le \sum_{j=1}^n\left(d_q(p_j,r_j)+d_{1-q}(p_j,r_j)\right) \le q(1-q)\sum_{j=1}^n\frac{(p_j-r_j)^2}{\min\{p_j,r_j\}},$$
and combining with the above equality, we deduce the inequalities (24). □
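A small numerical sketch (ours, not from the paper) of the bounds (24) for random probability distributions:

```python
# Checking Theorem 7: the bounds (24) on the Jeffreys-Tsallis divergence.
import random

def tsallis(q, p, r):
    return sum((pj - pj ** q * rj ** (1 - q)) / (1 - q) for pj, rj in zip(p, r))

def rand_dist(n):
    w = [random.random() + 1e-3 for _ in range(n)]
    return [x / sum(w) for x in w]

random.seed(5)
for _ in range(200):
    n = random.randint(2, 8)
    p, r, q = rand_dist(n), rand_dist(n), random.uniform(0.01, 0.99)
    J = tsallis(q, p, r) + tsallis(q, r, p)
    lo = q * sum((pj - rj) ** 2 / max(pj, rj) for pj, rj in zip(p, r))
    hi = q * sum((pj - rj) ** 2 / min(pj, rj) for pj, rj in zip(p, r))
    assert lo - 1e-12 <= J <= hi + 1e-12
print("Theorem 7 holds on this sample")
```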
Remark 3.
(i) In the limit of $q\to1$ in (24), we then obtain:
$$\sum_{j=1}^n\frac{(p_j-r_j)^2}{\max\{p_j,r_j\}} \le J_1(p|r) \le \sum_{j=1}^n\frac{(p_j-r_j)^2}{\min\{p_j,r_j\}}$$
for the standard divergence.
(ii) From (23), we have:
$$2+(q-1)\left(D_q^T(p|r)+D_q^T(r|p)\right)=\exp\left((q-1)D_q^R(p|r)\right)+\exp\left((q-1)D_q^R(r|p)\right) \ge 2+(q-1)\left(D_q^R(p|r)+D_q^R(r|p)\right), \tag{25}$$
where we used the inequality $e^x\ge x+1$ for all $x\in\mathbb{R}$. Thus, we deduce the inequalities:
$$D_q^T(p|r)+D_q^T(r|p) \le D_q^R(p|r)+D_q^R(r|p), \qquad (0<q<1)$$
and:
$$D_q^T(p|r)+D_q^T(r|p) \ge D_q^R(p|r)+D_q^R(r|p), \qquad (q>1).$$
Combining (25) with Theorem 7, we therefore have the following result for the Rényi divergence:
$$q\sum_{j=1}^n\frac{(p_j-r_j)^2}{\max\{p_j,r_j\}} \le D_q^R(p|r)+D_q^R(r|p), \qquad (0<q<1).$$
We give the relation between the Jeffreys–Tsallis divergence and the Jensen–Shannon–Tsallis divergence:
Theorem 8.
For two probability distributions $p\equiv\{p_1,\dots,p_n\}$ and $r\equiv\{r_1,\dots,r_n\}$ with $p_j>0$ and $r_j>0$ for all $j=1,\dots,n$, we have:
$$JS_q(p|r) \le \frac14\,J_q(p|r), \tag{26}$$
where $q\ge0$ with $q\ne1$.
Proof. 
We consider the function $g:(0,\infty)\to\mathbb{R}$ defined by $g(t)=t^{1-q}$, which is concave for $q\in[0,1)$. Therefore, we have $\left(\frac{p_j+r_j}2\right)^{1-q}\ge\frac{p_j^{1-q}+r_j^{1-q}}2$, which implies the following inequalities:
$$p_j-p_j^q\left(\frac{p_j+r_j}2\right)^{1-q} \le \frac{p_j-p_j^qr_j^{1-q}}2, \qquad r_j-r_j^q\left(\frac{p_j+r_j}2\right)^{1-q} \le \frac{r_j-r_j^qp_j^{1-q}}2.$$
From the definition of the Tsallis divergence, we deduce the inequality:
$$D_q^T\!\left(p\,\Big|\,\frac{p+r}2\right)+D_q^T\!\left(r\,\Big|\,\frac{p+r}2\right) \le \frac12\left(D_q^T(p|r)+D_q^T(r|p)\right),$$
which is equivalent to the relation of the statement. For the case of $q>1$, the function $g(t)=t^{1-q}$ is convex in $t>0$. Similarly, we have the statement, taking into account that $1-q<0$. □
Remark 4.
In the limit of $q\to1$ in (26), we then obtain:
$$JS_1(p|r) \le \frac14\,J_1(p|r).$$
We give bounds on the Jeffreys–Tsallis divergence by using the refined Young inequality given in Theorem 1. In [26], we find the Bhattacharyya coefficient defined as
$$B(p|r) \equiv \sum_{j=1}^n\sqrt{p_jr_j},$$
which is a measure of the amount of overlap between two distributions. It can be expressed in terms of the Hellinger distance between the probability distributions $p\equiv\{p_1,\dots,p_n\}$ and $r\equiv\{r_1,\dots,r_n\}$ as
$$B(p|r)=1-h^2(p|r),$$
where the Hellinger distance ([26,27]) is a metric distance defined by
$$h(p|r) \equiv \sqrt{\frac12\sum_{j=1}^n\left(\sqrt{p_j}-\sqrt{r_j}\right)^2}.$$
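A tiny numerical sketch (ours, purely illustrative) of the relation $B(p|r)=1-h^2(p|r)$:

```python
# Illustrating B(p|r) = 1 - h^2(p|r) on a random pair of distributions.
import math, random

def bhattacharyya(p, r):
    return sum(math.sqrt(pj * rj) for pj, rj in zip(p, r))

def hellinger(p, r):
    return math.sqrt(0.5 * sum((math.sqrt(pj) - math.sqrt(rj)) ** 2
                               for pj, rj in zip(p, r)))

random.seed(6)
w1 = [random.random() + 1e-3 for _ in range(5)]
w2 = [random.random() + 1e-3 for _ in range(5)]
p = [x / sum(w1) for x in w1]
r = [x / sum(w2) for x in w2]
print(bhattacharyya(p, r), 1 - hellinger(p, r) ** 2)   # the two printed values agree
```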
Theorem 9.
For two probability distributions $p\equiv\{p_1,\dots,p_n\}$ and $r\equiv\{r_1,\dots,r_n\}$ with $p_j>0$ and $r_j>0$ for all $j=1,\dots,n$, and $0\le q<1$, we have:
$$\frac{4r}{1-q}h^2(p|r)+\frac{2A(q)}{1-q}\sum_{j=1}^n p_jr_j\cdot\log^2\frac{p_j}{r_j} \le J_q(p|r) \le \frac{4(1-r)}{1-q}h^2(p|r)+\frac{2B(q)}{1-q}\sum_{j=1}^n p_jr_j\cdot\log^2\frac{p_j}{r_j}, \tag{27}$$
where $r=\min\{q,1-q\}$ and $A(q)=\frac{q(1-q)}2-\frac r4$, $B(q)=\frac{q(1-q)}2-\frac{1-r}4$.
Proof. 
For $q=0$, we obtain the equality. Now, we consider $0<q<1$. Using Theorem 1 for $a=p_j<1$ and $b=r_j<1$, $j\in\{1,2,\dots,n\}$, we deduce:
$$r\left(\sqrt{p_j}-\sqrt{r_j}\right)^2+A(q)\,p_jr_j\cdot\log^2\frac{p_j}{r_j} \le d_q(p_j,r_j) \le (1-r)\left(\sqrt{p_j}-\sqrt{r_j}\right)^2+B(q)\,p_jr_j\cdot\log^2\frac{p_j}{r_j},$$
where $r=\min\{q,1-q\}$. If we replace $q$ by $1-q$, taking into account that $A(q)=A(1-q)$ and $B(q)=B(1-q)$, then we have:
$$2r\left(\sqrt{p_j}-\sqrt{r_j}\right)^2+2A(q)\,p_jr_j\cdot\log^2\frac{p_j}{r_j} \le d_q(p_j,r_j)+d_{1-q}(p_j,r_j) \le 2(1-r)\left(\sqrt{p_j}-\sqrt{r_j}\right)^2+2B(q)\,p_jr_j\cdot\log^2\frac{p_j}{r_j}.$$
Taking the sum over $j=1,2,\dots,n$, we find the inequalities:
$$2r\sum_{j=1}^n\left(\sqrt{p_j}-\sqrt{r_j}\right)^2+2A(q)\sum_{j=1}^n p_jr_j\cdot\log^2\frac{p_j}{r_j} \le \sum_{j=1}^n\left(d_q(p_j,r_j)+d_{1-q}(p_j,r_j)\right)=(1-q)\left(D_q^T(p|r)+D_q^T(r|p)\right) \le 2(1-r)\sum_{j=1}^n\left(\sqrt{p_j}-\sqrt{r_j}\right)^2+2B(q)\sum_{j=1}^n p_jr_j\cdot\log^2\frac{p_j}{r_j},$$
which is equivalent to the inequalities in the statement. □
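The bounds (27) can be checked numerically; the sketch below is ours and only illustrative.

```python
# A numerical spot-check (ours) of the bounds (27) of Theorem 9 for 0 < q < 1.
import math, random

def tsallis(q, p, r):
    return sum((pj - pj ** q * rj ** (1 - q)) / (1 - q) for pj, rj in zip(p, r))

def rand_dist(n):
    w = [random.random() + 1e-3 for _ in range(n)]
    return [x / sum(w) for x in w]

random.seed(7)
for _ in range(200):
    n = random.randint(2, 8)
    p, r, q = rand_dist(n), rand_dist(n), random.uniform(0.01, 0.99)
    rmin = min(q, 1 - q)
    A = q * (1 - q) / 2 - rmin / 4
    B = q * (1 - q) / 2 - (1 - rmin) / 4
    h2 = 0.5 * sum((math.sqrt(pj) - math.sqrt(rj)) ** 2 for pj, rj in zip(p, r))
    S = sum(pj * rj * math.log(pj / rj) ** 2 for pj, rj in zip(p, r))
    J = tsallis(q, p, r) + tsallis(q, r, p)
    lo = (4 * rmin * h2 + 2 * A * S) / (1 - q)
    hi = (4 * (1 - rmin) * h2 + 2 * B * S) / (1 - q)
    assert lo - 1e-12 <= J <= hi + 1e-12
print("Theorem 9 holds on this sample")
```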
Remark 5.
In the limit of $q\to1$ in (27), we then obtain:
$$4h^2(p|r)+\frac12\sum_{j=1}^n p_jr_j\cdot\log^2\frac{p_j}{r_j} \le J_1(p|r),$$
since $\lim_{q\to1}\frac r{1-q}=1$, $\lim_{q\to1}\frac{A(q)}{1-q}=\frac14$ and $\lim_{q\to1}\frac{1-r}{1-q}=\infty$.
We give further bounds on the Jeffreys–Tsallis divergence by the use of Theorem 5 and Corollary 2:
Theorem 10.
For two probability distributions $p\equiv\{p_1,\dots,p_n\}$ and $r\equiv\{r_1,\dots,r_n\}$ with $p_j>0$ and $r_j>0$ for all $j=1,\dots,n$, and $0\le q<1$, we have:
$$q\sum_{j=1}^n\frac{(p_j-r_j)^2}{\max\{p_j,r_j\}} \le \frac1{1-q}\sum_{j=1}^n p_jr_j\,E_q\!\left(\frac1{p_j},\frac1{r_j}\right) \le J_q(p|r) \le \sum_{j=1}^n\frac{(p_j-r_j)^2(p_j+r_j)}{p_jr_j},$$
where $E_\cdot(\cdot,\cdot)$ is given in Theorem 5.
Proof. 
Putting $a\to p_j$, $b\to r_j$ and $p\to q$ in (20), we deduce:
$$\frac12\,q(1-q)\frac{(p_j-r_j)^2}{\max\{p_j,r_j\}} \le \frac{p_jr_j}2\,E_q\!\left(\frac1{p_j},\frac1{r_j}\right) \le d_q(p_j,r_j),$$
and:
$$\frac12\,q(1-q)\frac{(p_j-r_j)^2}{\max\{p_j,r_j\}} \le \frac{p_jr_j}2\,E_{1-q}\!\left(\frac1{p_j},\frac1{r_j}\right) \le d_{1-q}(p_j,r_j).$$
Taking into account that:
$$E_q\!\left(\frac1{p_j},\frac1{r_j}\right)=E_{1-q}\!\left(\frac1{p_j},\frac1{r_j}\right),$$
and by taking the sum over $j=1,2,\dots,n$, we have:
$$q(1-q)\sum_{j=1}^n\frac{(p_j-r_j)^2}{\max\{p_j,r_j\}} \le \sum_{j=1}^n p_jr_j\,E_q\!\left(\frac1{p_j},\frac1{r_j}\right) \le \sum_{j=1}^n\left(d_q(p_j,r_j)+d_{1-q}(p_j,r_j)\right)=(1-q)J_q(p|r),$$
which proves the lower bounds of $J_q(p|r)$. To prove the upper bound of $J_q(p|r)$, we put $a\to p_j$, $b\to r_j$ and $p\to q$ in inequality (22). Then, we deduce:
$$d_q(p_j,r_j)+d_{1-q}(p_j,r_j) \le (1-q)\frac{(p_j-r_j)^2(p_j+r_j)}{p_jr_j}.$$
By taking the sum over $j=1,2,\dots,n$, we find:
$$\sum_{j=1}^n\left(d_q(p_j,r_j)+d_{1-q}(p_j,r_j)\right) \le (1-q)\sum_{j=1}^n\frac{(p_j-r_j)^2(p_j+r_j)}{p_jr_j}.$$
Consequently, we prove the inequalities of the statement. □
We also give further bounds on the Jensen–Shannon–Tsallis divergence by the use of the Cartwright–Field inequality given in (7).
Theorem 11.
For two probability distributions $p\equiv\{p_1,\dots,p_n\}$ and $r\equiv\{r_1,\dots,r_n\}$ with $p_j>0$ and $r_j>0$ for all $j=1,\dots,n$, and $0\le q<1$, we have:
$$\frac q8\sum_{j=1}^n(p_j-r_j)^2\left(\frac1{p_j+\max\{p_j,r_j\}}+\frac1{r_j+\max\{p_j,r_j\}}\right) \le JS_q(p|r) \le \frac q8\sum_{j=1}^n(p_j-r_j)^2\left(\frac1{p_j+\min\{p_j,r_j\}}+\frac1{r_j+\min\{p_j,r_j\}}\right).$$
Proof. 
For $q=0$, we have the equality. We assume $0<q<1$. By direct calculations, we have:
$$JS_q(p|r)=\frac12\,D_q^T\!\left(p\,\Big|\,\frac{p+r}2\right)+\frac12\,D_q^T\!\left(r\,\Big|\,\frac{p+r}2\right)=\frac1{2(1-q)}\sum_{j=1}^n\left[p_j-p_j^q\left(\frac{p_j+r_j}2\right)^{1-q}+r_j-r_j^q\left(\frac{p_j+r_j}2\right)^{1-q}\right]=\frac1{2(1-q)}\sum_{j=1}^n\left[qp_j+(1-q)\frac{p_j+r_j}2-p_j^q\left(\frac{p_j+r_j}2\right)^{1-q}+qr_j+(1-q)\frac{p_j+r_j}2-r_j^q\left(\frac{p_j+r_j}2\right)^{1-q}\right]=\frac1{2(1-q)}\sum_{j=1}^n\left[d_q\!\left(p_j,\frac{p_j+r_j}2\right)+d_q\!\left(r_j,\frac{p_j+r_j}2\right)\right].$$
Using inequality (7), we deduce:
$$\frac{q(1-q)}4\,\frac{(p_j-r_j)^2}{p_j+\max\{p_j,r_j\}} \le d_q\!\left(p_j,\frac{p_j+r_j}2\right) \le \frac{q(1-q)}4\,\frac{(p_j-r_j)^2}{p_j+\min\{p_j,r_j\}}$$
and:
$$\frac{q(1-q)}4\,\frac{(p_j-r_j)^2}{r_j+\max\{p_j,r_j\}} \le d_q\!\left(r_j,\frac{p_j+r_j}2\right) \le \frac{q(1-q)}4\,\frac{(p_j-r_j)^2}{r_j+\min\{p_j,r_j\}}.$$
From the above inequalities, by summing over $j=1,2,\dots,n$, we have the statement. □
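A numerical sketch (ours, illustrative only) of the bounds on $JS_q$ stated in Theorem 11:

```python
# Checking the Theorem 11 bounds on the Jensen-Shannon-Tsallis divergence.
import random

def tsallis(q, p, r):
    return sum((pj - pj ** q * rj ** (1 - q)) / (1 - q) for pj, rj in zip(p, r))

def rand_dist(n):
    w = [random.random() + 1e-3 for _ in range(n)]
    return [x / sum(w) for x in w]

random.seed(8)
for _ in range(200):
    n = random.randint(2, 8)
    p, r, q = rand_dist(n), rand_dist(n), random.uniform(0.01, 0.99)
    m = [(pj + rj) / 2 for pj, rj in zip(p, r)]
    js = 0.5 * tsallis(q, p, m) + 0.5 * tsallis(q, r, m)
    lo = q / 8 * sum((pj - rj) ** 2 * (1 / (pj + max(pj, rj)) + 1 / (rj + max(pj, rj)))
                     for pj, rj in zip(p, r))
    hi = q / 8 * sum((pj - rj) ** 2 * (1 / (pj + min(pj, rj)) + 1 / (rj + min(pj, rj)))
                     for pj, rj in zip(p, r))
    assert lo - 1e-12 <= js <= hi + 1e-12
print("Theorem 11 holds on this sample")
```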
It is quite natural to extend the Jensen–Shannon–Tsallis divergence to the following form:
$$JS_q^v(p|r) \equiv vD_q^T\left(p\,|\,vp+(1-v)r\right)+(1-v)D_q^T\left(r\,|\,vp+(1-v)r\right),$$
where $0\le v\le1$, $q>0$, $q\ne1$. We call this the $v$-weighted Jensen–Shannon–Tsallis divergence. For $v=1/2$, we find that $JS_q^{1/2}(p|r)=JS_q(p|r)$, which is the Jensen–Shannon–Tsallis divergence. For this quantity $JS_q^v(p|r)$, we can obtain the following result in a way similar to the proof of Theorem 11.
Proposition 1.
For two probability distributions $p\equiv\{p_1,\dots,p_n\}$ and $r\equiv\{r_1,\dots,r_n\}$ with $p_j>0$ and $r_j>0$ for all $j=1,\dots,n$, $0\le q<1$ and $0\le v\le1$, we have:
$$\frac{qv(1-v)}2\sum_{j=1}^n(p_j-r_j)^2\left(\frac{1-v}{vp_j+(1-v)\max\{p_j,r_j\}}+\frac{v}{(1-v)r_j+v\max\{p_j,r_j\}}\right) \le JS_q^v(p|r) \le \frac{qv(1-v)}2\sum_{j=1}^n(p_j-r_j)^2\left(\frac{1-v}{vp_j+(1-v)\min\{p_j,r_j\}}+\frac{v}{(1-v)r_j+v\min\{p_j,r_j\}}\right).$$
Proof. 
We calculate as
$$JS_q^v(p|r)=\frac v{1-q}\sum_{j=1}^n\left[p_j-p_j^q\left(vp_j+(1-v)r_j\right)^{1-q}\right]+\frac{1-v}{1-q}\sum_{j=1}^n\left[r_j-r_j^q\left(vp_j+(1-v)r_j\right)^{1-q}\right]=\frac1{1-q}\sum_{j=1}^n\left[vp_j+(1-v)r_j-vp_j^q\left(vp_j+(1-v)r_j\right)^{1-q}-(1-v)r_j^q\left(vp_j+(1-v)r_j\right)^{1-q}\right]=\frac v{1-q}\sum_{j=1}^n\left[qp_j+(1-q)\left(vp_j+(1-v)r_j\right)-p_j^q\left(vp_j+(1-v)r_j\right)^{1-q}\right]+\frac{1-v}{1-q}\sum_{j=1}^n\left[qr_j+(1-q)\left(vp_j+(1-v)r_j\right)-r_j^q\left(vp_j+(1-v)r_j\right)^{1-q}\right]=\frac1{1-q}\sum_{j=1}^n\left[v\,d_q\!\left(p_j,vp_j+(1-v)r_j\right)+(1-v)\,d_q\!\left(r_j,vp_j+(1-v)r_j\right)\right].$$
Using inequality (7), we deduce:
$$\frac{q(1-q)}2\,\frac{(1-v)^2(p_j-r_j)^2}{vp_j+(1-v)\max\{p_j,r_j\}} \le d_q\!\left(p_j,vp_j+(1-v)r_j\right) \le \frac{q(1-q)}2\,\frac{(1-v)^2(p_j-r_j)^2}{vp_j+(1-v)\min\{p_j,r_j\}}$$
and:
$$\frac{q(1-q)}2\,\frac{v^2(p_j-r_j)^2}{(1-v)r_j+v\max\{p_j,r_j\}} \le d_q\!\left(r_j,vp_j+(1-v)r_j\right) \le \frac{q(1-q)}2\,\frac{v^2(p_j-r_j)^2}{(1-v)r_j+v\min\{p_j,r_j\}}.$$
Multiplying the above inequalities by $v$ and $1-v$, respectively, and then taking the sum over $j=1,2,\dots,n$, we obtain the statement. □
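A numerical sketch (ours) of Proposition 1 for the $v$-weighted divergence $JS_q^v$:

```python
# Checking the Proposition 1 bounds on the v-weighted Jensen-Shannon-Tsallis divergence.
import random

def tsallis(q, p, r):
    return sum((pj - pj ** q * rj ** (1 - q)) / (1 - q) for pj, rj in zip(p, r))

def rand_dist(n):
    w = [random.random() + 1e-3 for _ in range(n)]
    return [x / sum(w) for x in w]

random.seed(9)
for _ in range(200):
    n = random.randint(2, 8)
    p, r = rand_dist(n), rand_dist(n)
    q, v = random.uniform(0.01, 0.99), random.uniform(0.01, 0.99)
    m = [v * pj + (1 - v) * rj for pj, rj in zip(p, r)]
    jsv = v * tsallis(q, p, m) + (1 - v) * tsallis(q, r, m)

    def bound(extreme):
        return q * v * (1 - v) / 2 * sum(
            (pj - rj) ** 2 * ((1 - v) / (v * pj + (1 - v) * extreme(pj, rj))
                             + v / ((1 - v) * rj + v * extreme(pj, rj)))
            for pj, rj in zip(p, r))

    assert bound(max) - 1e-12 <= jsv <= bound(min) + 1e-12
print("Proposition 1 holds on this sample")
```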

4. Conclusions

We obtained new inequalities which improve the classical Young inequality by analytical calculations with known inequalities. We also obtained some bounds on the Jeffreys–Tsallis divergence and the Jensen–Shannon–Tsallis divergence. At this point, we do not clearly know whether the obtained bounds will play any role in information theory. However, toward finding the meaning of the parameter $q$ in divergences based on the Tsallis divergence, we may state that almost all theorems (except for Theorem 8) hold for $0\le q<1$. In the first author's previous studies [19,28], some results related to the Tsallis divergence (relative entropy) are still true for $0\le q<1$, while some results related to the Tsallis entropy are still true for $q>1$. In this paper, we treated Tsallis-type divergences, and it is shown that almost all results are true for $0\le q<1$. This insight may give a rough meaning of the parameter $q$.
Since our results in Section 3 are based on the inequalities in Section 2, we summarize here the tightness of the inequalities obtained in Section 2. The double inequality (12) is a counterpart of the double inequality (9) for $a,b\in(0,1]$. Therefore, they cannot be compared with each other from the point of view of tightness, since the conditions are different. The double inequality (12) was used to obtain Theorem 9. The double inequality (15) is essentially a Cartwright–Field inequality in itself, and it was used to obtain Theorem 7 as the first result in Section 3. The results in Theorem 4 are mathematical properties of $d_p(a,b)$. The inequalities given in (18) gave an improvement of the left-hand side of the inequality (7) for the case $a,b\ge1$, and we obtained Theorem 10 by (18). We obtained the upper bound of $d_p(a,b)$ in (21) as a counterpart of (18) for general $a,b>0$. This was used to prove Corollary 2, which in turn was used to prove Theorem 10. However, we found that the upper bound of $d_p(a,b)+d_{1-p}(a,b)$ given in (22) is not tighter than the one in (15).
Finally, Theorem 8 can be obtained from the convexity/concavity of the function $t^{1-q}$. This study will be continued in order to obtain much sharper bounds. We extended the Jensen–Shannon–Tsallis divergence to the following:
$$JS_q^v(p|r) \equiv vD_q^T\left(p\,|\,vp+(1-v)r\right)+(1-v)D_q^T\left(r\,|\,vp+(1-v)r\right), \qquad (0\le v\le1,\ q>0,\ q\ne1),$$
and we call this the $v$-weighted Jensen–Shannon–Tsallis divergence. For $v=1/2$, we find that $JS_q^{1/2}(p|r)=JS_q(p|r)$, which is the Jensen–Shannon–Tsallis divergence. For this quantity $JS_q^v(p|r)$, as an information-theoretic divergence measure, we obtained several characterizations.

Author Contributions

Conceptualization, S.F. and N.M.; investigation, S.F. and N.M.; writing—original draft preparation, S.F. and N.M.; writing—review and editing, S.F.; funding acquisition, S.F. and N.M. All authors have read and agreed to the published version of the manuscript.

Funding

The author (S.F.) was partially supported by JSPS KAKENHI Grant Number 21K03341.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the referees for their careful and insightful comments to improve our manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Young, W.H. On classes of summable functions and their Fourier series. Proc. R. Soc. Lond. Ser. A 1912, 87, 225–229.
2. Blondel, M.; Martins, A.F.T.; Niculae, V. Learning with Fenchel-Young Losses. J. Mach. Learn. Res. 2020, 21, 1–69.
3. Nielsen, F. On Geodesic Triangles with Right Angles in a Dually Flat Space. In Progress in Information Geometry: Theory and Applications; Springer: Berlin/Heidelberg, Germany, 2021; pp. 153–190.
4. Minguzzi, E. An equivalent form of Young’s inequality with upper bound. Appl. Anal. Discrete Math. 2008, 2, 213–216.
5. Nielsen, F. The α-divergences associated with a pair of strictly comparable quasi-arithmetic means. arXiv 2020, arXiv:2001.09660.
6. Bhatia, R. Interpolating the arithmetic–geometric mean inequality and its operator version. Linear Alg. Appl. 2006, 413, 355–363.
7. Furuichi, S.; Ghaemi, M.B.; Gharakhanlu, N. Generalized reverse Young and Heinz inequalities. Bull. Malays. Math. Sci. Soc. 2019, 42, 267–284.
8. Cartwright, D.I.; Field, M.J. A refinement of the arithmetic mean-geometric mean inequality. Proc. Am. Math. Soc. 1978, 71, 36–38.
9. Kober, H. On the arithmetic and geometric means and Hölder inequality. Proc. Am. Math. Soc. 1958, 9, 452–459.
10. Kittaneh, F.; Manasrah, Y. Improved Young and Heinz inequalities for matrices. J. Math. Anal. Appl. 2010, 361, 262–269.
11. Bobylev, N.A.; Krasnoselsky, M.A. Extremum Analysis (Degenerate Cases); Institute of Control Sciences: Moscow, Russia, 1981; 52p. (In Russian)
12. Minculete, N. A refinement of the Kittaneh–Manasrah inequality. Creat. Math. Inform. 2011, 20, 157–162.
13. Furuichi, S.; Minculete, N. Alternative reverse inequalities for Young’s inequality. J. Math. Inequal. 2011, 5, 595–600.
14. Furuichi, S.; Moradi, H.R. Advances in Mathematical Inequalities; De Gruyter: Berlin, Germany, 2020.
15. Lin, J. Divergence measures based on the Shannon entropy. IEEE Trans. Inform. Theory 1991, 37, 145–151.
16. Sibson, R. Information radius. Z. Wahrscheinlichkeitstheorie Verw. Gebiete 1969, 14, 149–160.
17. Mitroi-Symeonidis, F.C.; Anghel, I.; Minculete, N. Parametric Jensen-Shannon Statistical Complexity and Its Applications on Full-Scale Compartment Fire Data. Symmetry 2020, 12, 22.
18. Niculescu, C.P.; Persson, L.-E. Convex Functions and Their Applications, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2018.
19. Furuichi, S.; Yanagi, K.; Kuriyama, K. Fundamental properties of Tsallis relative entropy. J. Math. Phys. 2004, 45, 4868–4877.
20. Tsallis, C. Generalized entropy-based criterion for consistent testing. Phys. Rev. E 1998, 58, 1442–1445.
21. Aczél, J.; Daróczy, Z. On Measures of Information and Their Characterizations; Academic Press: Cambridge, MA, USA, 1975.
22. Furuichi, S.; Minculete, N. Inequalities related to some types of entropies and divergences. Physica A 2019, 532, 121907.
23. Furuichi, S.; Mitroi, F.-C. Mathematical inequalities for some divergences. Physica A 2012, 391, 388–400.
24. Mitroi, F.C.; Minculete, N. Mathematical inequalities for biparametric extended information measures. J. Math. Ineq. 2013, 7, 63–71.
25. Moradi, H.R.; Furuichi, S.; Minculete, N. Estimates for Tsallis relative operator entropy. Math. Ineq. Appl. 2017, 20, 1079–1088.
26. Lovričević, N.; Pečarić, D.; Pečarić, J. Zipf-Mandelbrot law, f-divergences and the Jensen-type interpolating inequalities. J. Inequal. Appl. 2018, 2018, 36.
27. Van Erven, T.; Harremöes, P. Rényi Divergence and Kullback–Leibler Divergence. IEEE Trans. Inf. Theory 2014, 60, 3797–3820.
28. Furuichi, S. Information theoretical properties of Tsallis entropies. J. Math. Phys. 2006, 47, 023302.