Entropy and Information Inequalities

A special issue of Entropy (ISSN 1099-4300).

Deadline for manuscript submissions: closed (30 November 2018) | Viewed by 41740

Special Issue Editors


Dr. James Melbourne
Guest Editor
Department of Mathematical Sciences, University of Delaware, Newark, DE 19716, USA
Interests: statistics; probability theory; geometry and topology

Dr. Varun Jog
Guest Editor
Department of Electrical & Computer Engineering, University of Wisconsin, Madison, WI 53706, USA
Interests: information theory; machine learning; convex geometry

Special Issue Information

Dear Colleagues,

In recent decades, information-theoretic inequalities have provided an interface with both neighboring and seemingly disparate disciplines. What is more, the bridges built from these interactions have produced new and richer understandings of information theory itself. Important connections have been established between information-theoretic inequalities and subjects including convex geometry, optimal transport, concentration of measure, probability, statistics, estimation theory, additive combinatorics, and thermodynamics, by way of inequalities such as entropy power, Brunn–Minkowski, HWI, log-Sobolev, monotonicity of entropy in the CLT, Sanov, sum-set, Landauer, and many more. Even within information theory, there has been renewed interest in developing inequalities in non-conventional settings, such as convolution inequalities for Rényi or Tsallis entropy, inequalities for f-divergences, and entropy inequalities over discrete spaces.
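
As a point of reference for readers outside the area, the first inequality named above, the entropy power inequality, reads in one standard form (a classical statement, recalled here only for orientation): for independent random vectors X and Y in R^n with densities,

\[
  N(X + Y) \;\ge\; N(X) + N(Y),
  \qquad
  N(X) := \frac{1}{2\pi e}\, e^{2 h(X)/n},
\]

where h denotes differential entropy; the other inequalities in the list are referenced by name only.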

In this Special Issue, we would like to invite contributions that establish novel information-theoretic inequalities (broadly defined), extend their applications, and deepen our understanding of information theory and related fields. Expository submissions are also welcome. We envisage that these contributions will sharpen insight within information theory while strengthening the growing bonds between the subject and the other areas outlined above, with the hope of generating further inter-field and interdisciplinary dialogue.

Dr. James Melbourne
Dr. Varun Jog
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • entropy
  • Rényi entropy
  • Tsallis entropy
  • Fisher information
  • entropic distances
  • information-theoretic inequalities
  • entropy power inequalities
  • logarithmic Sobolev inequalities

Published Papers (11 papers)


Editorial


2 pages, 144 KiB  
Editorial
Entropy and Information Inequalities
by Varun Jog and James Melbourne
Entropy 2020, 22(3), 320; https://doi.org/10.3390/e22030320 - 12 Mar 2020
Viewed by 1995
Abstract
Entropy and information inequalities are vitally important in many areas of mathematics and engineering [...]

Research


23 pages, 446 KiB  
Article
Dual Loomis-Whitney Inequalities via Information Theory
by Jing Hao and Varun Jog
Entropy 2019, 21(8), 809; https://doi.org/10.3390/e21080809 - 18 Aug 2019
Cited by 1 | Viewed by 3052
Abstract
We establish lower bounds on the volume and the surface area of a geometric body using the sizes of its slices along different directions. In the first part of the paper, we derive volume bounds for convex bodies using generalized subadditivity properties of entropy combined with entropy bounds for log-concave random variables. In the second part, we investigate a new notion of Fisher information, which we call the L1-Fisher information, and show that certain superadditivity properties of the L1-Fisher information lead to lower bounds on the surface areas of polyconvex sets in terms of their slices.
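
For context only (this is the classical statement, not the dual form established in the paper): the Loomis–Whitney inequality bounds volume from above by coordinate projections,

\[
  \mathrm{vol}_n(K)^{\,n-1} \;\le\; \prod_{i=1}^{n} \mathrm{vol}_{n-1}\!\left(P_{e_i^{\perp}} K\right),
\]

where P_{e_i^⊥} K denotes the orthogonal projection of K ⊂ R^n onto the hyperplane orthogonal to the i-th coordinate direction; the dual results above replace projections by slices and give lower rather than upper bounds.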

15 pages, 325 KiB  
Article
Poincaré and Log–Sobolev Inequalities for Mixtures
by André Schlichting
Entropy 2019, 21(1), 89; https://doi.org/10.3390/e21010089 - 18 Jan 2019
Cited by 5 | Viewed by 3787
Abstract
This work studies mixtures of probability measures on R^n and gives bounds on the Poincaré and the log–Sobolev constants of two-component mixtures, provided that each component satisfies the functional inequality and the two components are close in the χ²-distance. The estimation of those constants for a mixture can be far more subtle than it is for its parts. Even mixing Gaussian measures may produce a measure with a Hamiltonian potential possessing multiple wells, leading to metastability and large constants in Sobolev-type inequalities. In particular, the Poincaré constant stays bounded in the mixture parameter, whereas the log–Sobolev constant may blow up as the mixture ratio goes to 0 or 1. This observation generalizes the one by Chafaï and Malrieu to the multidimensional case. For a class of examples, this behavior is shown not to be a mere artifact of the method.
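
For reference, the functional inequalities and distance mentioned above, in one common normalization (conventions for the constants vary across the literature):

\[
  \operatorname{Var}_\mu(f) \;\le\; C_{\mathrm{P}} \int |\nabla f|^2 \, d\mu,
  \qquad
  \operatorname{Ent}_\mu\!\left(f^2\right) \;\le\; C_{\mathrm{LS}} \int |\nabla f|^2 \, d\mu,
  \qquad
  \chi^2(\nu \,\|\, \mu) \;=\; \int \left(\frac{d\nu}{d\mu}\right)^{2} d\mu \;-\; 1 .
\]
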
21 pages, 861 KiB  
Article
On the Impossibility of Learning the Missing Mass
by Elchanan Mossel and Mesrob I. Ohannessian
Entropy 2019, 21(1), 28; https://doi.org/10.3390/e21010028 - 02 Jan 2019
Cited by 11 | Viewed by 2764
Abstract
This paper shows that one cannot learn the probability of rare events without imposing further structural assumptions. The event of interest is that of obtaining an outcome outside the coverage of an i.i.d. sample from a discrete distribution. The probability of this event is referred to as the “missing mass”. The impossibility result can then be stated as: the missing mass is not distribution-free learnable in relative error. The proof is semi-constructive and relies on a coupling argument using a dithered geometric distribution. Via a reduction, this impossibility also extends to both discrete and continuous tail estimation. These results formalize the folklore that in order to predict rare events without restrictive modeling, one necessarily needs distributions with “heavy tails”.
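
As an aside for readers new to the term, here is a minimal Monte Carlo sketch (ours, not from the paper) of what the missing mass is: for an i.i.d. sample from a geometric law, it is the total probability of the outcomes the sample never produced. The function names and parameter choices below are illustrative only.

    import math, random

    def sample_geometric(p):
        # Inverse-transform sample from a geometric law on {0, 1, 2, ...}.
        u = random.random()
        return int(math.log(1.0 - u) / math.log(1.0 - p))

    def missing_mass(sample, p):
        # Total probability of symbols that never appeared in the sample.
        seen = set(sample)
        return sum((1.0 - p) ** k * p for k in range(10_000) if k not in seen)

    p, n, trials = 0.3, 100, 200
    estimates = [missing_mass([sample_geometric(p) for _ in range(n)], p)
                 for _ in range(trials)]
    print(sum(estimates) / trials)  # average missing mass over repeated samples
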
28 pages, 353 KiB  
Article
Entropy Inequalities for Lattices
by Peter Harremoës
Entropy 2018, 20(10), 784; https://doi.org/10.3390/e20100784 - 12 Oct 2018
Cited by 2 | Viewed by 3857
Abstract
We study entropy inequalities for variables that are related by functional dependencies. Although the powerset lattice on four variables is the smallest Boolean lattice with non-Shannon inequalities, there exist lattices with many more variables where the Shannon inequalities are sufficient. We search for conditions that exclude the existence of non-Shannon inequalities. The existence of non-Shannon inequalities is related to the question of whether a lattice is isomorphic to a lattice of subgroups of a group. In order to formulate and prove the results, one has to bridge lattice theory, group theory, the theory of functional dependencies, and the theory of conditional independence. It is demonstrated that the Shannon inequalities are sufficient for planar modular lattices. The proof applies a gluing technique that uses the fact that if the Shannon inequalities are sufficient for the pieces, then they are also sufficient for the whole lattice. It is conjectured that the Shannon inequalities are sufficient if and only if the lattice does not contain a special lattice as a sub-semilattice.
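
For orientation (a standard fact, not a result of the paper): the Shannon inequalities are exactly those implied by the nonnegativity of conditional mutual information, for instance

\[
  I(X;Z \mid Y) \;=\; H(X,Y) + H(Y,Z) - H(X,Y,Z) - H(Y) \;\ge\; 0,
\]

whereas non-Shannon inequalities, first exhibited by Zhang and Yeung for four random variables, do not follow from these.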

32 pages, 472 KiB  
Article
A Forward-Reverse Brascamp-Lieb Inequality: Entropic Duality and Gaussian Optimality
by Jingbo Liu, Thomas A. Courtade, Paul W. Cuff and Sergio Verdú
Entropy 2018, 20(6), 418; https://doi.org/10.3390/e20060418 - 30 May 2018
Cited by 14 | Viewed by 4488
Abstract
Inspired by the forward and the reverse channels from the image-size characterization problem in network information theory, we introduce a functional inequality that unifies both the Brascamp-Lieb inequality and Barthe’s inequality, which is a reverse form of the Brascamp-Lieb inequality. For Polish spaces, we prove its equivalent entropic formulation using the Legendre-Fenchel duality theory. Capitalizing on the entropic formulation, we elaborate on a “doubling trick” used by Lieb and Geng-Nair to prove the Gaussian optimality in this inequality for the case of Gaussian reference measures.
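
For context, the (forward) Brascamp–Lieb inequality in its usual functional form, which the forward-reverse inequality above unifies with Barthe’s reverse form:

\[
  \int_{\mathbb{R}^n} \prod_{j=1}^{m} f_j\!\left(B_j x\right)^{c_j} dx
  \;\le\; C \prod_{j=1}^{m} \left( \int_{\mathbb{R}^{n_j}} f_j \right)^{c_j},
\]

for nonnegative integrable f_j, surjective linear maps B_j : R^n → R^{n_j}, exponents c_j ≥ 0, and a best constant C determined by the data (B_j, c_j).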

32 pages, 1210 KiB  
Article
On f-Divergences: Integral Representations, Local Behavior, and Inequalities
by Igal Sason
Entropy 2018, 20(5), 383; https://doi.org/10.3390/e20050383 - 19 May 2018
Cited by 33 | Viewed by 4492
Abstract
This paper is focused on f-divergences and consists of three main contributions. The first introduces integral representations of a general f-divergence by means of the relative information spectrum. The second provides a new approach to the derivation of f-divergence inequalities and exemplifies their utility in the setup of Bayesian binary hypothesis testing. The last part further studies the local behavior of f-divergences.
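
For reference, the standard definition underlying these results: for a convex function f with f(1) = 0 and P ≪ Q, the f-divergence is

\[
  D_f(P \,\|\, Q) \;=\; \int f\!\left(\frac{dP}{dQ}\right) dQ,
\]

which recovers relative entropy for f(t) = t log t, total variation for f(t) = ½|t − 1|, and the χ²-divergence for f(t) = (t − 1)².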

23 pages, 345 KiB  
Article
Logarithmic Sobolev Inequality and Exponential Convergence of a Markovian Semigroup in the Zygmund Space
by Ichiro Shigekawa
Entropy 2018, 20(4), 220; https://doi.org/10.3390/e20040220 - 23 Mar 2018
Cited by 1 | Viewed by 2951
Abstract
We investigate the exponential convergence of a Markovian semigroup in the Zygmund space under the assumption of a logarithmic Sobolev inequality. We show that the convergence rate is greater than the logarithmic Sobolev constant. To do this, we use the notion of entropy. We also give an example of a Laguerre operator. We determine the spectrum in the Orlicz space and discuss the relation between the logarithmic Sobolev constant and the spectral gap.
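
For reference, a logarithmic Sobolev inequality for a Dirichlet form E with invariant measure μ can be written, in one common normalization (constant conventions differ by factors of two across the literature), as

\[
  \int f^2 \log \frac{f^2}{\| f \|_{L^2(\mu)}^2} \, d\mu \;\le\; \frac{2}{\alpha}\, \mathcal{E}(f, f),
\]

with α the logarithmic Sobolev constant that enters the convergence rate discussed above.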

13 pages, 744 KiB  
Article
Some Inequalities Combining Rough and Random Information
by Yujie Gu, Qianyu Zhang and Liying Yu
Entropy 2018, 20(3), 211; https://doi.org/10.3390/e20030211 - 20 Mar 2018
Cited by 2 | Viewed by 3303
Abstract
Rough random theory, generally applied to statistics, decision-making, and so on, is an extension of rough set theory and probability theory, in which a rough random variable is described as a random variable taking “rough variable” values. In order to extend and enrich the research area of rough random theory, in this paper the well-known probabilistic inequalities (the Markov inequality, the Chebyshev inequality, Hölder’s inequality, the Minkowski inequality, and Jensen’s inequality) are proven for rough random variables, which gives firm theoretical support to the further development of rough random theory. In addition, considering that critical values always act as a vital tool in engineering, science, and other application fields, some significant properties of the critical values of rough random variables, involving continuity and monotonicity, are investigated in depth to provide a novel analytical approach for dealing with rough random optimization problems.
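
For reference, the classical forms being extended, stated here for ordinary real-valued random variables (a > 0, p, q > 1 with 1/p + 1/q = 1, and φ convex):

\[
  \mathbb{P}(|X| \ge a) \le \frac{\mathbb{E}|X|}{a}, \qquad
  \mathbb{P}\big(|X - \mathbb{E}X| \ge a\big) \le \frac{\operatorname{Var}(X)}{a^{2}}, \qquad
  \mathbb{E}|XY| \le \big(\mathbb{E}|X|^{p}\big)^{1/p}\big(\mathbb{E}|Y|^{q}\big)^{1/q},
\]
\[
  \big(\mathbb{E}|X+Y|^{p}\big)^{1/p} \le \big(\mathbb{E}|X|^{p}\big)^{1/p} + \big(\mathbb{E}|Y|^{p}\big)^{1/p}, \qquad
  \varphi(\mathbb{E}X) \le \mathbb{E}\,\varphi(X).
\]
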
24 pages, 496 KiB  
Article
A Lower Bound on the Differential Entropy of Log-Concave Random Vectors with Applications
by Arnaud Marsiglietti and Victoria Kostina
Entropy 2018, 20(3), 185; https://doi.org/10.3390/e20030185 - 09 Mar 2018
Cited by 32 | Viewed by 5611
Abstract
We derive a lower bound on the differential entropy of a log-concave random variable X in terms of the p-th absolute moment of X. The new bound leads to a reverse entropy power inequality with an explicit constant, and to new bounds on the rate-distortion function and the channel capacity. Specifically, we study the rate-distortion function for log-concave sources and distortion measure d(x, x̂) = |x − x̂|^r, with r ≥ 1, and we establish that the difference between the rate-distortion function and the Shannon lower bound is at most log √(πe) ≈ 1.5 bits, independently of r and the target distortion d. For mean-square error distortion, the difference is at most log √(πe/2) ≈ 1 bit, regardless of d. We also provide bounds on the capacity of memoryless additive noise channels when the noise is log-concave. We show that the difference between the capacity of such channels and the capacity of the Gaussian channel with the same noise power is at most log √(πe/2) ≈ 1 bit. Our results generalize to the case of a random vector X with possibly dependent coordinates. Our proof technique leverages tools from convex geometry.
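
A quick numerical check of the two constants quoted above (our sketch, assuming base-2 logarithms since the bounds are stated in bits):

    import math

    print(0.5 * math.log2(math.pi * math.e))      # log sqrt(pi e)   ≈ 1.547 bits
    print(0.5 * math.log2(math.pi * math.e / 2))  # log sqrt(pi e/2) ≈ 1.047 bits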

309 KiB  
Article
Entropies of Weighted Sums in Cyclic Groups and an Application to Polar Codes
by Emmanuel Abbe, Jiange Li and Mokshay Madiman
Entropy 2017, 19(9), 235; https://doi.org/10.3390/e19090235 - 07 Sep 2017
Cited by 7 | Viewed by 4598
Abstract
In this note, the following basic question is explored: in a cyclic group, how are the Shannon entropies of the sum and difference of i.i.d. random variables related to each other? For the integer group, we show that they can differ by any real number additively, but not too much multiplicatively; on the other hand, for Z/3Z, the entropy of the difference is always at least as large as that of the sum. These results are closely related to the study of more-sums-than-differences (i.e., MSTD) sets in additive combinatorics. We also investigate polar codes for q-ary input channels using non-canonical kernels to construct the generator matrix and present applications of our results to constructing polar codes with significantly improved error probability compared to the canonical construction.
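
A quick numerical illustration (ours, for one arbitrarily chosen law; not a proof and not taken from the paper) of the Z/3Z claim that the entropy of the difference dominates that of the sum:

    from math import log2

    def entropy(dist):
        # Shannon entropy in bits of a finitely supported distribution.
        return -sum(p * log2(p) for p in dist.values() if p > 0)

    law = {0: 0.6, 1: 0.3, 2: 0.1}  # common law of the i.i.d. pair X, Y on Z/3Z
    sum_dist = {k: 0.0 for k in range(3)}
    diff_dist = {k: 0.0 for k in range(3)}
    for x, px in law.items():
        for y, py in law.items():
            sum_dist[(x + y) % 3] += px * py
            diff_dist[(x - y) % 3] += px * py
    print(entropy(sum_dist), entropy(diff_dist))  # ≈ 1.529 vs ≈ 1.535: H(X+Y) ≤ H(X−Y)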