
Maximum Entropy Fundamentals

by Peter Harremoës 1 and Flemming Topsøe 2,*
1 Rønne Alle, Søborg, Denmark
2 Department of Mathematics, University of Copenhagen, Denmark
* Author to whom correspondence should be addressed.
Submission received: 12 September 2001 / Accepted: 18 September 2001 / Published: 30 September 2001

Abstract:
In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the “observer” and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy.
In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading.
The most frequently studied instance of entropy maximization pertains to the Mean Energy Model which involves a moment constraint related to a given function, here taken to represent “energy”. This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also here serve as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate about the development of natural languages. In fact, we are able to relate our theoretical findings to the empirically found Zipf’s law which involves statistical aspects of words in a language. The apparent irregularity inherent in models with entropy loss turns out to imply desirable stability properties of languages.

1 The Maximum Entropy Principle – overview and a generic example

The Maximum Entropy Principle as conceived in its modern form by Jaynes, cf. [11], [12] and [13], is easy to formulate: “Given a model of probability distributions, choose the distribution with highest entropy.” With this choice you single out the most significant distribution, the least biased one, the one which best represents the “true” distribution. The sensibility of this principle in a number of situations is well understood and discussed at length by Jaynes, in particular.
The principle is by now well established and has numerous applications in physics, biology, demography, economy etc. For practically all applications, the key example which is taken as point of departure – and often the only example discussed – is that of models prescribed by moment conditions. We refer to Kapur, [14] for a large collection of examples as well as a long list of references.
In this section we present models defined by just one moment condition. These special models will later be used to illustrate theoretical points of more technical sections to follow.
Our approach will be based on the introduction of a two-person zero-sum game. The principle which this leads to, called the principle of Game Theoretical Equilibrium, is taken to be even more basic than the Maximum Entropy Principle. In fact, from this principle you are led directly to the Maximum Entropy Principle and, besides, new interesting features emerge naturally by focusing on the interplay between a system and the observer of the system. As such the new principle is in conformity with views of quantum physics; e.g. we can view the principle of Game Theoretical Equilibrium as one way of expressing, in a precise mathematical way, certain sides of the notion of complementarity as advocated by Niels Bohr.
To be more specific, let us choose the language of physics and assume that on the set of natural numbers ℕ we have given a function E, the energy function. This function is assumed to be bounded below. Typically, E will be non-negative. Further, we specify a certain finite energy level, λ, and take as our model all probability distributions with mean energy λ. We assume that the energy Ei in “state” i ∈ ℕ goes fast enough to infinity that the entropies of distributions in the model remain bounded. In particular, this condition is fulfilled if Ei = ∞ for all i sufficiently large – the corresponding states are then “forbidden states” – and in this case the study reduces to a study of models with finite support.
Once you have accepted the Maximum Entropy Principle, this leads to a search for a maximum entropy distribution in the model. It is then tempting to introduce Lagrange multipliers and to solve the constrained optimization problem you are faced with in the standard manner. In fact, this is what practically all authors do and we shall briefly indicate this approach.
We want to maximize the entropy $H = -\sum_{i=1}^{\infty} p_i \log p_i$ subject to the moment condition $\sum_{i=1}^{\infty} p_i E_i = \lambda$ and subject to the usual constraints $p_i \ge 0$; $i \in \mathbb{N}$ and $\sum_{i=1}^{\infty} p_i = 1$. Introducing Lagrange multipliers $-\beta$ and $\mu$, we are led to search for a solution for which all partial derivatives of the function $H - \beta \sum_{i=1}^{\infty} p_i E_i + \mu \sum_{i=1}^{\infty} p_i$ vanish. This leads to the suggestion that the solution is of the form
$$ p_i = \frac{\exp(-\beta E_i)}{Z(\beta)}\,; \qquad i \ge 1 \qquad\qquad (1.1) $$
for some value of β for which the partition function Z defined by
$$ Z(\beta) = \sum_{i=1}^{\infty} \exp(-\beta E_i) \qquad\qquad (1.2) $$
is finite.
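To make the mean energy example concrete, here is a minimal numerical sketch of our own (not part of the original treatment), assuming a finite alphabet: it finds β by bisection, using the fact that the mean energy under (1.1) is strictly decreasing in β. The particular energies E_i = i and the level λ = 3.0 at the end are arbitrary illustrative choices.

import numpy as np

def maxent_distribution(E, lam, lo=-50.0, hi=50.0, tol=1e-10):
    """Maximum entropy distribution on a finite alphabet with prescribed mean energy lam.

    Returns (p, beta) with p_i proportional to exp(-beta * E_i), cf. (1.1)-(1.2).
    lam must lie strictly between min(E) and max(E)."""
    E = np.asarray(E, dtype=float)

    def boltzmann(beta):
        logw = -beta * E
        logw -= logw.max()          # stabilizes the exponentials; the shift cancels on normalization
        w = np.exp(logw)
        return w / w.sum()

    def mean_energy(beta):
        return float(boltzmann(beta) @ E)

    # mean_energy is strictly decreasing in beta, so bisection applies
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_energy(mid) > lam:
            lo = mid
        else:
            hi = mid
    beta = 0.5 * (lo + hi)
    return boltzmann(beta), beta

E = np.arange(1, 51)                         # energies E_i = i on a 50-letter alphabet
p, beta = maxent_distribution(E, 3.0)        # prescribe mean energy 3.0
print(beta, p @ E, -(p * np.log(p)).sum())   # beta, achieved mean, entropy (nats)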
The approach is indeed very expedient. But there are difficulties connected with it. Theoretically, we have to elaborate on the method to be absolutely certain that it leads to the solution, even in the finite case when Ei = ∞ for i sufficiently large. Worse than this, in the infinite case there may not be any solution at all. This is connected with the fact that there may be no distribution of the form (1.1) which satisfies the required moment condition. In such cases it is not clear what to do.
Another concern is connected with the observation that the method of Lagrange multipliers is a completely general tool, and this very fact indicates that in any particular situation there may possibly be other ways forward which better reflect the special structure of the problem at hand. Thus one could hope to discover new basic features by appealing to more intrinsic methods.
Finally we note that the method of Lagrange multipliers cannot handle all models of interest. If the model is refined by just adding more moment constraints, this is no great obstacle. Then the distributions and partition functions that will occur instead of (1.1) and (1.2) will work with inner products of the form $\beta_1 E_{1,i} + \beta_2 E_{2,i} + \cdots$ in place of the simple product βEi. In fact, we shall look into this later. Also other cases can be handled based on the above analysis, e.g. if we specify the geometric mean, this really corresponds to a linear constraint by taking logarithms, and the maximum entropy problem can be solved as above (and leads to interesting distributions in this case, so-called power laws). But for some problems it may be difficult or even impossible to use natural extensions of the standard method or to use suitable transformations which will reduce the study to the standard set-up. In such cases, new techniques are required in the search for a maximum entropy distribution. As examples of this difficulty we point to models involving binomial or empirical distributions, cf. [8] and [22].
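As a brief worked instance of the geometric-mean remark above (our own addition): taking a single energy function $E_i = \log i$ turns the moment condition into a constraint on the geometric mean, and (1.1)–(1.2) then give a power law,

$$ p_i = \frac{\exp(-\beta \log i)}{Z(\beta)} = \frac{i^{-\beta}}{\zeta(\beta)}, \qquad Z(\beta) = \sum_{i=1}^{\infty} i^{-\beta} = \zeta(\beta), \quad \beta > 1. $$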
After presentation of preliminary material, we introduce in Section 3 the basic concepts related to the game we shall study. Then follows a section which quickly leads to familiar key results. The method depends on the information- and game-theoretical point of view. This does not lead to complete clarification. For the proper understanding of certain phenomena, a more thorough theoretical discussion is required and this is taken up in the remaining sections.
New results are related to so-called entropy loss – situations where a maximum entropy distribution does not exist. In the last section, these types of models are related to Zipf’s law regarding statistical aspects of the semantics of natural languages.
Mathematical justification of all results is provided. Some technical results which we shall need involve special analytical tools regarding Dirichlet series and are delegated to an appendix.

2 Information theoretical preliminaries

Let $\mathbb{A}$, the alphabet, be a discrete set, either finite or countably infinite, and denote by $\widetilde{M}_+^1(\mathbb{A})$, respectively $M_+^1(\mathbb{A})$, the set of non-negative measures P on $\mathbb{A}$ (with the discrete Borel structure) such that $P(\mathbb{A}) \le 1$, respectively $P(\mathbb{A}) = 1$. The elements in $\mathbb{A}$ can be thought of in many ways, e.g. as letters (for purely information theoretical or computer science oriented studies), as pure states (for applications to quantum physics) or as outcomes (for models of probability theory and statistics).
For convenience, A will always be taken to be the set ℕ of natural numbers or a finite section thereof, and elements in A are typically referred to by indices like i, j, ⋯.
Measures in M + 1 ( A ) are probability distributions, or just distributions, measures in ~ M + 1 ( A ) are general distributions and measures in ~ M + 1 ( A ) \ M + 1 ( A ) are incomplete distributions. For P, Q, ⋯ ∈ M + 1 ( A ) , the point masses are, typically, denoted by pi, qi, ⋯.
By $\widetilde{K}(\mathbb{A})$, we denote the set of all mappings $\kappa : \mathbb{A} \to [0; \infty]$ which satisfy Kraft’s inequality
$$ \sum_{i \in \mathbb{A}} \exp(-\kappa_i) \le 1. \qquad\qquad (2.3) $$
Elements in $\widetilde{K}(\mathbb{A})$ are general codes. The values of a general code κ are denoted κi. The terminology is motivated by the fact that if $\kappa \in \widetilde{K}(\mathbb{A})$ and if the base for the exponential in (2.3) is 2, then there exists a binary prefix-free code such that the i’th code word consists of approximately κi binary digits.
By K( A ) we denote the set of mappings κ : A → [0; ∞] which satisfy Kraft’s equality
$$ \sum_{i \in \mathbb{A}} \exp(-\kappa_i) = 1. \qquad\qquad (2.4) $$
This case corresponds to codes without superfluous digits. For further motivation, the reader may wish to consult [23] or standard textbooks such as [3] and [6].
Elements in K( A ) are compact codes, for short just codes.
For mathematical convenience, we shall work with exponentials and logarithms to the base e.
For $\kappa \in \widetilde{K}(\mathbb{A})$ and $i \in \mathbb{A}$, κi is the code length associated with i or, closer to the intended interpretation, we may think of κi as the code length of the code word which we imagine κ associates with i. There is a natural bijective correspondence between $\widetilde{M}_+^1(\mathbb{A})$ and $\widetilde{K}(\mathbb{A})$, expressed notationally by writing P ↔ κ or κ ↔ P, and defined by the formulas
κi = −log pi, pi = exp(−κi).
Here the values κi = ∞ and pi = 0 correspond to each other. When the above formulas hold, we call (κ, P) a matching pair and we say that κ is adapted to P or that P is the general distribution which matches κ. If $\mathcal{P} \subseteq M_+^1(\mathbb{A})$ and $\kappa \in \widetilde{K}(\mathbb{A})$, we say that κ is $\mathcal{P}$-adapted if κ is adapted to one of the distributions in $\mathcal{P}$. Note that the correspondence κ ↔ P also defines a bijection between $M_+^1(\mathbb{A})$ and $K(\mathbb{A})$.
The support of κ is the set of i A with κi < ∞. Thus, with obvious notation, supp (κ) = supp (P) where P is the distribution matching κ and supp (P) is the usual support of P .
For expectations – always w.r.t. genuine probability distributions – we use the bracket notation. Thus, for P M + 1 ( A ) and f : A → [−∞; ∞], we put
$$ \langle f, P \rangle = \sum_{i \in \mathbb{A}} f(i)\, p_i $$
whenever this is a well-defined extended real number. Mostly, our functions will be non-negative and then 〈f, P〉 will of course be a well defined number in [0; ∞]. In particular this is the case for average code length defined for κK( A ) and P M + 1 ( A ) by
$$ \langle \kappa, P \rangle = \sum_{i \in \mathbb{A}} \kappa_i\, p_i. $$
Entropy and divergence are defined as usual, i.e., for P M + 1 ( A ) , the entropy of P is given by
$$ H(P) = -\sum_{i \in \mathbb{A}} p_i \log p_i $$
or, equivalently, by H(P) = 〈κ, P〉 where κ is the code adapted to P. And for P M + 1 ( A ) and Q M + 1 ( A ) we define the divergence (or relative entropy) between P and Q by
$$ D(P\|Q) = \sum_{i \in \mathbb{A}} p_i \log \frac{p_i}{q_i}. $$
Divergence is well defined with 0 ≤ D(PQ) ≤ ∞ and D(PQ) = 0 if and only if P = Q.
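The following small Python sketch (our own illustration; the function names are ours) implements these definitions for finite point-probability vectors, using natural logarithms as in the text, and checks Kraft’s inequality for the adapted code.

import numpy as np

def entropy(p):
    """H(P) = -sum p_i log p_i, natural logarithm (nats)."""
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return float(-(p[nz] * np.log(p[nz])).sum())

def divergence(p, q):
    """D(P||Q) = sum p_i log(p_i/q_i); +inf if supp(P) is not contained in supp(Q)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    nz = p > 0
    if np.any(q[nz] == 0):
        return float('inf')
    return float((p[nz] * np.log(p[nz] / q[nz])).sum())

def adapted_code(p):
    """Code lengths kappa_i = -log p_i of the code adapted to P (kappa_i = inf where p_i = 0)."""
    p = np.asarray(p, dtype=float)
    with np.errstate(divide='ignore'):
        return -np.log(p)

def satisfies_kraft(kappa, tol=1e-9):
    """Check Kraft's inequality: sum exp(-kappa_i) <= 1."""
    return bool(np.exp(-np.asarray(kappa, float)).sum() <= 1 + tol)

p = np.array([0.5, 0.25, 0.125, 0.125])
kappa = adapted_code(p)
print(satisfies_kraft(kappa), entropy(p), divergence(p, np.ones(4) / 4))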
The topological properties which we shall find useful for codes and for distributions do not quite go in parallel. On the coding side we consider the space K( A ) of all general codes and remark that this space is a metrizable, compact and convex Hausdorff space. This may be seen by embedding K( A ) in the space [0; ∞] A of all functions on A taking values in the compact space [0; ∞]. The topology on K( A ) then is the topology of pointwise convergence. This is the only topology we shall need on K( A ).
On the distribution side we shall primarily consider probability distributions but on the corresponding space, M + 1 ( A ) , we find it useful to consider two topologies, the usual, pointwise topology and then a certain stronger non-metrizable topology, the information topology.
As to the usual topology on M + 1 ( A ) we remind the reader that this is a metrizable topology, indeed it is metrized by total variation defined by
$$ V(P, Q) = \sum_{i \in \mathbb{A}} |p_i - q_i|. $$
We write $P_n \xrightarrow{V} P$ for convergence and $\overline{\mathcal{P}}^{V}$, $\overline{\mathrm{co}}^{V}\mathcal{P}$, etc. for closure in this topology (the examples show the closure of 𝒫 and of the convex hull of 𝒫, respectively).
As to the information topology – the second topology which we need on the space $M_+^1(\mathbb{A})$ – this can be described as the strongest topology such that, for $(P_n)_{n\ge1} \subseteq M_+^1(\mathbb{A})$ and $P \in M_+^1(\mathbb{A})$, $\lim_{n\to\infty} D(P_n\|P) = 0$ implies that the sequence $(P_n)_{n\ge1}$ converges to P. Convergence in this topology is denoted $P_n \xrightarrow{D} P$. We only need convergence in this topology for sequences, not for generalized sequences or nets. Likewise, we only need sequential closure, and $\overline{\mathcal{P}}^{D\sigma}$, $\overline{\mathrm{co}}^{D\sigma}\mathcal{P}$ or what the case may be denotes sequential closure. Thus $\overline{\mathcal{P}}^{D\sigma}$ denotes the set of distributions P for which there exists a sequence $(P_n)_{n\ge1}$ of distributions in 𝒫 with $P_n \xrightarrow{D} P$. The necessary and sufficient condition that $P_n \xrightarrow{D} P$ holds is that $D(P_n\|P) \to 0$ as n → ∞. We warn the reader that the corresponding statement for nets (generalized sequences) is wrong – only the sufficiency part holds generally. For the purposes of this paper, the reader needs only worry about sequences but it is comforting to know that the sequential notion $P_n \xrightarrow{D} P$ is indeed a topological notion of convergence. Further details will be in [9].
An important connection between total variation and divergence is expressed by Pinsker’s inequality:
$$ D(P\|Q) \ge \frac{1}{2} V(P, Q)^2, \qquad\qquad (2.7) $$
which shows that convergence in the information topology is stronger than convergence in total variation.
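A quick numerical sanity check of Pinsker’s inequality (a sketch of our own; the alphabet size and sample count are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    p = rng.dirichlet(np.ones(6))                # random pair of distributions on a 6-letter alphabet
    q = rng.dirichlet(np.ones(6))
    D = float((p * np.log(p / q)).sum())         # divergence in nats
    V = float(np.abs(p - q).sum())               # total variation as defined above
    assert D >= 0.5 * V**2 - 1e-12               # Pinsker's inequality (2.7)
print("Pinsker's inequality held on all sampled pairs")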
The functions of relevance to us, entropy and divergence, have important continuity properties: P ↷ H(P) is lower semi-continuous on $M_+^1(\mathbb{A})$ and (P, Q) ↷ D(P‖Q) is jointly lower semi-continuous on $M_+^1(\mathbb{A}) \times M_+^1(\mathbb{A})$. These continuity properties even hold w.r.t. the usual, pointwise topology. Details may be found in [23].

3 The Code Length Game, introduction

In this section P is a non-empty subset of M + 1 ( A ) , neutrally referred to as the model. In specific applications it may be more appropriate with other terminology, e.g. the preparation space or the statistical model. Distributions in P are called consistent distributions.
With 𝒫 we associate a two-person zero-sum game, called the Code Length Game over 𝒫. In this game, Player I chooses a consistent distribution, and Player II chooses a general code. The cost-function, seen from the point of view of Player II, is the map $\mathcal{P} \times \widetilde{K}(\mathbb{A}) \to [0, \infty]$ given by the average code length:
(P, κ) ↷ ⟨κ, P⟩.
This game was introduced in [20], see also [15], [21], [10], [8] and [22]. Player I may be taken to represent “the system”, “Nature”, “God” or ⋯ , whereas Player II represents “the observer”, “the statistician” or ⋯ .
We can motivate the game introduced in various ways. The personification of the two participants in the game is natural as far as Player II is concerned since, in many situations, we can identify ourselves with that person. Also, the objective of Player II appears well motivated. To comment on this in more detail, we first remind the reader that we imagine that there is associated a real code consisting of binary sequences to κK( A ) and that κ merely tells us what the code lengths of the various code words are.
We can think of a specific code in at least three different ways: as a representation of the letters in A , as a means for identification of these letters and – the view we find most fruitful – as a strategy for making observations from a source generating letters from A . The two last views are interrelated. In fact, for the strategy of observation which we have in mind, we use the code to identify the actual outcome by posing a succession of questions, starting with the question “is the first binary digit in the code word corresponding to the outcome a 1?” , then we ask for the second binary digit and so on until it is clear to us which letter is the actual outcome from the source. The number of questions asked is the number of binary digits in the corresponding code word.
The cost function can be interpreted as mean representation time, mean identification time or mean observation time, and it is natural for Player II to attempt to minimize this quantity. The sense in assuming that Player I has the opposite aim, namely to maximize the cost function, is more dubious. The arguments one can suggest to justify this, thereby motivating the zero-sum character of the Code Length Game, are partly natural to game theory in general, partly can be borrowed from Jaynes’ reasoning behind his Maximum Entropy Principle. Without going into lengthy discussions we give some indications: Though we do not seriously imagine that Player I is a “real” person with rational behaviour, such thoughts regarding the fictive Player I reflect back on our own conceptions. With our fictitious assumptions we express our own modelling. If all we know is the model 𝒫 and if, as is natural, all we strive for is minimization of the cost function, we cannot do better than imagining that Player I is a real person behaving rationally in a way which is least favourable to us. Any other assumption would, typically, lead to nonsensical results which would reveal that we actually knew more than first expressed and therefore, as a consequence, we should change the model in order better to reflect our level of knowledge.
To sum up, we have argued that the observer should be allowed freely to choose the means of observation, that codes offer an appropriate technical tool for this purpose and that the choice of a specific code should be dictated by the wish to minimize mean observation time, modelled adequately by the chosen cost function. Further, the more fictitious views regarding Player I and the behaviour of that player, really reflect on the adequacy and completeness of our modelling. If our modelling is precise, the assumptions regarding Player I are sensible and general theory of two-person zero-sum games can be expected to lead to relevant and useful results.
The overall principle we shall apply, we call the principle of Game Theoretical Equilibrium. It is obtained from general game theoretical considerations applied to the Code Length Game. No very rigid formulation of this principle is necessary. It simply dictates that in the study of a model, we shall investigate standard game theoretical notions such as equilibrium and optimal strategies.
According to our basic principle, Player I should consider, for each possible strategy P𝒫, the infimum of 〈κ, P〉 over κK( A ). This corresponds to the optimal response of Player II to the chosen strategy. The infimum in question can easily be identified by appealing to an important identity which we shall use frequently in the following. The identity connects average code length, entropy and divergence and states that
⟨κ, P⟩ = H(P) + D(P‖Q),
valid for any κK( A ) and P M + 1 ( A ) with Q the (possibly incomplete) distribution matching κ. The identity is called the linking identity. As D(PQ) ≥ 0 with equality if and only if P = Q, an immediate consequence of the linking identity is that entropy can be conceived as minimal average code length:
$$ H(P) = \min_{\kappa \in K(\mathbb{A})} \langle \kappa, P \rangle. \qquad\qquad (3.9) $$
The minimum is attained for the code adapted to P and, provided H(P) < ∞, for no other code.
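For completeness we record the one-line verification of the linking identity behind (3.9) (added here for the reader): with Q the general distribution matching κ, i.e. $q_i = \exp(-\kappa_i)$,

$$ \langle \kappa, P \rangle = -\sum_{i\in\mathbb{A}} p_i \log q_i = -\sum_{i\in\mathbb{A}} p_i \log p_i + \sum_{i\in\mathbb{A}} p_i \log\frac{p_i}{q_i} = H(P) + D(P\|Q). $$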
Seen from the point of view of Player I, the optimal performance is therefore achieved by maximizing entropy. The maximum value to strive for is called the maximum entropy value (Hmax-value) and is given by
$$ H_{\max}(\mathcal{P}) = \sup_{P \in \mathcal{P}} H(P). $$
On the side of Player II – the “coding side” – we consider, analogously, for each κK( A ) the associated risk given by
$$ R(\kappa|\mathcal{P}) = \sup_{P \in \mathcal{P}} \langle \kappa, P \rangle $$
and then the minimum risk value (Rmin-value)
$$ R_{\min}(\mathcal{P}) = \inf_{\kappa \in K(\mathbb{A})} R(\kappa|\mathcal{P}). $$
This is the value for Player II to strive for.
We have now looked at each side of the game separately. Combining the two sides, we are led to the usual concepts, well known from the theory of two-person zero-sum games. Thus, the model P is in equilibrium if Hmax( P ) = Rmin( P ) < ∞, and in this case, Hmax( P ) = Rmin( P ) is the value of the game. Note that as a “supinf” is bounded by the corresponding “infsup”, the inequality
$$ H_{\max}(\mathcal{P}) \le R_{\min}(\mathcal{P}) \qquad\qquad (3.13) $$
always holds.
The concept of optimal strategies also follows from general considerations. For Player I, this is a consistent distribution with maximal entropy, i.e. a distribution P P with H(P) = Hmax( P ). And for Player II, an optimal strategy is a code κK( A ) such that R(κ| P ) = Rmin( P ). Such a code is also called a minimum risk code (Rmin-code).
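A toy numerical illustration of these notions (our own, assuming the unconstrained model consisting of all distributions on an n-letter alphabet): the flat code κi = log n satisfies ⟨κ, P⟩ = log n for every P, so Rmin ≤ log n, while Hmax = log n is attained by the uniform distribution; by (3.13) the model is therefore in equilibrium with value log n.

import numpy as np

n = 8
kappa = np.full(n, np.log(n))                    # the flat code, adapted to the uniform distribution
rng = np.random.default_rng(1)
for _ in range(5):
    P = rng.dirichlet(np.ones(n))                # an arbitrary distribution in the model
    H = float(-(P * np.log(P)).sum())
    cost = float(kappa @ P)
    print(f"H(P) = {H:.4f} <= log n = {np.log(n):.4f};  <kappa, P> = {cost:.4f}")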

4 Cost-stable codes, partition functions and exponential families

The purpose of this section is to establish a certain sufficient condition for equilibrium and to identify the optimal strategies for each of the players in the Code Length Game. This cannot always be done but the simple result presented here already covers most applications. Furthermore, the approach leads to familiar concepts and results. This will enable the reader to judge the merits of the game theoretical method as compared to a more standard approach via the introduction of Lagrange multipliers.
As in the previous section, we consider a model $\mathcal{P} \subseteq M_+^1(\mathbb{A})$. Let $\kappa^* \in \widetilde{K}(\mathbb{A})$ together with its matching distribution P* be given and assume that P* is consistent. Then we call κ* a Nash equilibrium code for the model 𝒫 if
⟨κ*, P⟩ ≤ ⟨κ*, P*⟩; P ∈ 𝒫
and if H(P*) < ∞. The terminology is adapted from mathematical economy, cf. e.g. Aubin [2]. The requirement can be written R(κ*|𝒫) ≤ H(P*) < ∞. Note that here we insist that a Nash equilibrium code be 𝒫-adapted. This condition will later be relaxed.
Theorem 4.1. 
Let 𝒫 be a model and assume that there exists a 𝒫-adapted Nash equilibrium code κ*, say, with matching distribution P*. Then 𝒫 is in equilibrium and both players have optimal strategies. Indeed, P* is the unique optimal strategy for Player I and κ* the unique optimal strategy for Player II.
Proof. 
Since R(κ*|𝒫) ≤ H(P*), Rmin(𝒫) ≤ Hmax(𝒫). As the opposite inequality always holds by (3.13), 𝒫 is in equilibrium, the value of the Code Length Game associated with 𝒫 is H(P*) and κ* and P* are optimal strategies.
To establish the uniqueness of κ*, let κ be any code distinct from κ*. Let P be the distribution matching κ. Then, by the linking identity,
R(κ|𝒫) ≥ ⟨κ, P*⟩ = H(P*) + D(P*‖P) > H(P*),
hence κ is not optimal.
For the uniqueness proof of P*, let P be a consistent distribution distinct from P*. Then, again by the linking identity,
H(P) < H(P) + D(P‖P*) = ⟨κ*, P⟩ ≤ H(P*),
and P cannot be optimal.      ☐
As we shall see later, the existence of a Nash equilibrium code is, essentially, also necessary for the conclusion of the theorem. 1 This does not remove the difficulty of actually finding the Nash equilibrium code in concrete cases of interest. In many cases it turns out to be helpful to search for codes with stronger properties. A code κ is a cost-stable code for 𝒫 if there exists h < ∞ such that ⟨κ, P⟩ = h for all P ∈ 𝒫. Clearly, a cost-stable code with a consistent matching distribution is a Nash equilibrium code. Therefore, we obtain the following corollary from Theorem 4.1:
Corollary 4.2. 
If κ is a cost-stable code for P and if the matching distribution P is consistent, then P is in equilibrium and κ and P are the unique optimal strategies pertaining to the Code Length Game.
In order to illustrate the usefulness of this result, consider the case of a model 𝒫 given by finitely many linear constraints, say
$$ \mathcal{P} = \{ P \in M_+^1(\mathbb{A}) \mid \langle E_1, P\rangle = \lambda_1, \ldots, \langle E_n, P\rangle = \lambda_n \} \qquad\qquad (4.15) $$
with E1, . . . , En real-valued functions bounded from below and λ1, . . . , λn real-valued constants.
Let us search for cost-stable codes κ for P . Clearly, any code of the form
$$ \kappa = \alpha + \beta_1 E_1 + \cdots + \beta_n E_n = \alpha + \bar\beta \cdot \bar E \qquad\qquad (4.16) $$
is cost-stable. Here, α and the β’s denote constants, β ¯ and E ¯ vectors and a dot signifies scalar products of vectors. For κ defined by (4.16) to define a code we must require that κ ≥ 0 and, more importantly, that Kraft’s equality (2.4) holds. We are thus forced to assume that the partition function evaluated at β ¯ = ( β 1 , , β n ) is finite, i.e. that
$$ Z(\bar\beta) = \sum_{i \in \mathbb{A}} \exp(-\bar\beta \cdot \bar E_i) $$
is finite, and that α = log Z ( β ¯ ) . When these conditions are fulfilled, κ = κ β ¯ defined by
$$ \kappa^{\bar\beta} = \log Z(\bar\beta) + \bar\beta \cdot \bar E \qquad\qquad (4.18) $$
defines a cost-stable code with individual code lengths given by
$$ \kappa_i^{\bar\beta} = \log Z(\bar\beta) + \bar\beta \cdot \bar E_i. $$
The matching distribution P β ¯ is given by the point probabilities
$$ P_i^{\bar\beta} = \frac{\exp(-\bar\beta \cdot \bar E_i)}{Z(\bar\beta)}. \qquad\qquad (4.20) $$
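To see the cost-stability of $\kappa^{\bar\beta}$ explicitly (a short verification added here): for any consistent P, i.e. any P satisfying (4.15),

$$ \langle \kappa^{\bar\beta}, P\rangle = \log Z(\bar\beta) + \bar\beta\cdot\langle \bar E, P\rangle = \log Z(\bar\beta) + \bar\beta\cdot\bar\lambda, $$

a constant not depending on P, so that $\kappa^{\bar\beta}$ is indeed cost-stable on 𝒫 with value $h = \log Z(\bar\beta) + \bar\beta\cdot\bar\lambda$.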
In most cases where linear models occur in the applications, one will be able to adjust the parameters in β ¯ such that P β ¯ is consistent. By Corollary 4.2, the entropy maximization problem will then be solved. However, not all cases can be settled in this way as there may not exist a consistent maximum entropy distribution.
We have seen that the search for cost-stable codes led us to consider the well-known partition function and also the well-known exponential family consisting of the distributions $P^{\bar\beta}$ with $\bar\beta$ ranging over all vectors $\bar\beta \in \mathbb{R}^n$ for which $Z(\bar\beta) < \infty$.
From our game theoretical point of view, the family of codes $\kappa^{\bar\beta}$ with $Z(\bar\beta) < \infty$ has at least as striking features as the corresponding family of distributions. We shall therefore focus on both types of objects and shall call the family of matching pairs $(\kappa^{\bar\beta}, P^{\bar\beta})$, with $\bar\beta$ ranging over vectors with $Z(\bar\beta) < \infty$, the exponential family associated with the set $\bar E = (E_1, \ldots, E_n)$ of functions on $\mathbb{A}$, or associated with the family of models one can define from $\bar E$ by choosing $\bar\lambda = (\lambda_1, \ldots, \lambda_n)$ and considering 𝒫 given by (4.15).
We stress that the huge literature on exponential families displays other families of discrete distributions than those that can be derived from the above definition. In spite of this we maintain the view that an information theoretical definition in terms of codes (or related objects) is more natural than the usual structural definitions. We shall not pursue this point vigorously here as it will require the consideration of further games than the simple Code Length Game.
In order to further stress the significance of the class of cost stable codes we mention a simple continuity result:
Theorem 4.3. 
If a model P has a cost-stable code, the entropy function H is continuous when restricted to P .
Proof. 
Assume that ⟨κ, P⟩ = h < ∞ for all P ∈ 𝒫. Then H(P) + D(P‖P*) = h for P ∈ 𝒫 with P* the distribution matching κ. As the sum of the two lower semi-continuous functions in this identity is a constant function, each of the functions, in particular the entropy function, must be continuous.
     ☐
As we have already seen, the notion of cost-stable codes is especially well suited to handle models defined by linear constraints. In section 6 we shall take this up in more detail.
The following sections will be more technical and mathematically abstract. This appears necessary in order to give a comprehensive treatment of all basic aspects related to the Code Length Game and to the Maximum Entropy Principle.

5 The Code Length Game, further preparations

In section 3 we introduced a minimum of concepts that enabled us to derive the useful results of section 4. With that behind us as motivation and background material, we are ready to embark on a more thorough investigation which will lead to a clarification of certain obscure points, especially related to the possibility that a consistent distribution with maximal entropy may not exist. In this section we point out certain results and concepts which will later be useful.
In view of our focus on codes it is natural to look upon divergence in a different way, as redundancy. Given is a code κK( A ) and a distribution P M + 1 ( A ) . We imagine that we use κ to code letters from A generated by a “source” and that P is the “true” distribution of the letters. The optimal performance is, according to (3.9), represented by the entropy H(P) whereas the actual performance is represented by the number 〈κ, P〉. The difference 〈κ, P 〉 − H(P) is then taken as the redundancy. This is well defined if H(P) < ∞ and then coincides with D(PQ) where Q denotes the distribution matching κ. As D(PQ) is always well defined, we use this quantity for our technical definition: The redundancy of κK( A ) against P M + 1 ( A ) is denoted D(Pκ) and defined by
D(P‖κ) = D(P‖Q) with κQ.
Thus D(Pκ) and D(PQ) can be used synonymously and reflect different ways of thinking. Using redundancy rather than divergence, the linking identity takes the following form:
⟨κ, P⟩ = H(P) + D(P‖κ).
We shall often appeal to basic concavity and convexity properties. Clearly, the entropy function is concave as a minimum of affine functions, cf. (3.9). However, we need a more detailed result which also implies strict concavity. The desired result is the following identity
$$ H\Big(\sum_{\nu} \alpha_\nu P_\nu\Big) = \sum_{\nu} \alpha_\nu H(P_\nu) + \sum_{\nu} \alpha_\nu D(P_\nu \| \bar P), \qquad\qquad (5.22) $$
where $\bar P = \sum_\nu \alpha_\nu P_\nu$ is any finite or countably infinite convex combination of probability distributions. This follows by the linking identity.
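Indeed, writing $\bar\kappa$ for the code adapted to $\bar P$ and using the linking identity termwise (a short verification added for the reader):

$$ H(\bar P) = \langle \bar\kappa, \bar P\rangle = \sum_{\nu} \alpha_\nu \langle \bar\kappa, P_\nu\rangle = \sum_{\nu} \alpha_\nu \big( H(P_\nu) + D(P_\nu\|\bar P)\big). $$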
A closely related identity involves divergence and states that, with notation as above and with Q denoting an arbitrary general distribution,
$$ \sum_{\nu} \alpha_\nu D(P_\nu\|Q) = D\Big(\sum_{\nu} \alpha_\nu P_\nu \,\Big\|\, Q\Big) + \sum_{\nu} \alpha_\nu D(P_\nu\|\bar P). \qquad\qquad (5.23) $$
The identity shows that divergence D(·Q) is strictly convex. A proof can be found in [23].
For the remainder of the section we consider a model P M + 1 ( A ) and the associated Code Length Game.
By supp(𝒫) we denote the support of 𝒫, i.e. the set of $i \in \mathbb{A}$ for which there exists P ∈ 𝒫 with pi > 0. Thus, $\mathrm{supp}(\mathcal{P}) = \bigcup_{P\in\mathcal{P}} \mathrm{supp}(P)$, the union of the usual supports of all consistent distributions. Often one may restrict attention to models with full support, i.e. to models with supp(𝒫) = $\mathbb{A}$. However, we shall not make this assumption unless pointed out specifically.
Recall that distributions in P are said to be consistent. Often, it is more appropriate to consider distributions in P ¯ σ . These distributions are called essentially consistent distributions. Using these distributions we relax the requirements to a distribution with maximum entropy, previously only considered for consistent distributions. Accordingly, a distribution P is called a maximum entropy distribution (Hmax-distribution) if P is essentially consistent and H(P) = Hmax( P ). We warn the reader that the usual definition in the literature insists on the requirement of consistency. Nevertheless, we find the relaxed requirement of essential consistency more adequate. For one thing, lower semi-continuity of the entropy function implies that
$$ H_{\max}(\mathcal{P}) = H_{\max}(\overline{\mathcal{P}}^{\sigma}) = H_{\max}(\overline{\mathcal{P}}^{V}) $$
and this points to the fact that the models P ¯ σ and P ¯ V behave in the same way as P . This view is further supported by the observation that for any κK( A ),
$$ R(\kappa|\mathcal{P}) = R(\kappa|\overline{\mathrm{co}}^{V}\mathcal{P}). $$
This follows as the map P ↷ 〈κ, P〉 is lower semi-continuous and affine. As a consequence,
$$ R_{\min}(\mathcal{P}) = R_{\min}(\overline{\mathrm{co}}^{V}\mathcal{P}). $$
It follows that all models 𝒫′ with $\mathcal{P} \subseteq \mathcal{P}' \subseteq \overline{\mathcal{P}}^{V}$ behave similarly as far as the Code Length Game is concerned. The reason why we do not relax the requirement on a Hmax-distribution further, from $P^* \in \overline{\mathcal{P}}^{\sigma}$ to $P^* \in \overline{\mathcal{P}}^{V}$, is firstly that we consider the information topology more relevant for our investigations than the usual topology. Secondly, we shall see that the property $P^* \in \overline{\mathcal{P}}^{\sigma}$, which is stronger than $P^* \in \overline{\mathcal{P}}^{V}$, can in fact be verified in the situations we have in mind (see Theorem 6.2).
The fact that a consistent Hmax-distribution may not exist leads to further important notions. Firstly, a sequence (Pn)n≥1 of distributions is said to be asymptotically optimal if all the Pn are consistent and if H(Pn) → Hmax( P ) for n → ∞. And, secondly, a distribution P is the maximum entropy attractor (Hmax-attractor) if P is essentially consistent and if P n D P * for every asymptotically optimal sequence (Pn)n≥1.
As an example, consider the (uninteresting!) model of all deterministic distributions. For this model, the Hmax-attractor does not exist and there is no unique Hmax-distribution. For more sensible models, the Hmax-attractor P* will exist, but it may not be a Hmax-distribution as lower semi-continuity only guarantees the inequality H(P*) ≤ Hmax(𝒫), not the corresponding equality.
Having by now refined the concepts related to the distribution side of the Code Length Game, we turn to the coding side.
It turns out that we need a localized variant of the risk associated with certain codes. The codes we shall consider are, intuitively, all codes which the observer (Player II) off-hand finds it worth while to consider. If P P is the “true” distribution, and the observer knows this, he will choose the code adapted to P in order to minimize the average code length. As nature (Player I) could from time to time change the choice of P P , essentially any strategy in the closure of P could be approached. With these remarks in mind we find it natural for the observer only to consider P ¯ D σ -adapted codes in the search for reasonable strategies.
Assume now that the observer decides to choose a P ¯ D σ -adapted code κ. Let P P ¯ D σ be the distribution which matches κ. Imagine that the choice of κ is dictated by a strong belief that the true distribution is P or some distribution very close to P (in the information topology!). Then the observer can evaluate the associated risk by calculating the localized risk associated with κ which is defined by the equation:
$$ R_{\mathrm{loc}}(\kappa|\mathcal{P}) = \sup_{(P_n)\subseteq\mathcal{P},\; P_n \xrightarrow{D} P} \ \limsup_{n\to\infty} \langle \kappa, P_n\rangle, \qquad\qquad (5.27) $$
where the supremum is over the class of all sequences of consistent distributions which converge in the information topology to P. Note that we insist on a definition which operates with sequences.
Clearly, the normal “global” risk must be at least as large as localized risk, therefore, for any P ¯ D σ -adapted code,
$$ R_{\mathrm{loc}}(\kappa|\mathcal{P}) \le R(\kappa|\mathcal{P}). \qquad\qquad (5.28) $$
A further and quite important inequality is the following:
$$ R_{\mathrm{loc}}(\kappa|\mathcal{P}) \le H_{\max}(\mathcal{P}). \qquad\qquad (5.29) $$
This inequality is easily derived from the defining relation (5.27) by writing 〈κ, Pn〉 in the form H(Pn) + D(PnP) with κP, noting also that P n D P implies that D(PnP) → 0. We note that had we allowed nets in the defining relation (5.27), a different and sometimes strictly larger quantity would result and (5.29) would not necessarily hold.
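In display form (added for convenience), with κ ↔ P and $(P_n) \subseteq \mathcal{P}$, $P_n \xrightarrow{D} P$:

$$ \limsup_{n\to\infty} \langle\kappa, P_n\rangle = \limsup_{n\to\infty}\big( H(P_n) + D(P_n\|P)\big) \le H_{\max}(\mathcal{P}) + \lim_{n\to\infty} D(P_n\|P) = H_{\max}(\mathcal{P}), $$

and taking the supremum over all such sequences gives (5.29).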
As the last preparatory result, we establish pretty obvious properties of an eventual optimal strategy for the observer, i.e. of an eventual Rmin-code.
Lemma 5.1.  
Let $\mathcal{P} \subseteq M_+^1(\mathbb{A})$ with Rmin(𝒫) < ∞ be given. Then the Rmin-code is unique, and if it exists, say R(κ*|𝒫) = Rmin(𝒫), then κ* is compact with supp(κ*) = supp(𝒫).
Proof. 
Assume that $\kappa^* \in \widetilde{K}(\mathbb{A})$ is a Rmin-code. As R(κ*|𝒫) < ∞, supp(𝒫) ⊆ supp(κ*). Then consider an a0 ∈ supp(κ*) and assume, for the purpose of an indirect proof, that a0 ∈ $\mathbb{A}$\supp(𝒫). Then the code κ′ obtained from κ* by putting κ′(a0) = ∞ and keeping all other values fixed, is a general non-compact code which is not identically +∞. Therefore, there exists ε > 0 such that κ′ − ε is a compact code. For any P ∈ 𝒫, we use the fact that a0 ∉ supp(P) to conclude that ⟨κ′ − ε, P⟩ = ⟨κ*, P⟩ − ε, hence R(κ′ − ε|𝒫) = R(κ*|𝒫) − ε, contradicting the minimality property of κ*. Thus we conclude that supp(κ*) = supp(𝒫). Similarly, it is clear that κ* must be compact – since otherwise, κ* − ε would be more efficient than κ* for some ε > 0.
In order to prove uniqueness, assume that both κ1 and κ2 are Rmin-codes for 𝒫. If we assume that κ1 ≠ κ2, then κ1(a) ≠ κ2(a) holds for some a in the common support of the codes κ1 and κ2 and then, by the geometric/arithmetic inequality, we see that ½(κ1 + κ2) is a general non-compact code. For some ε > 0, ½(κ1 + κ2) − ε will then also be a code and as this code is seen to be more efficient than κ1 and κ2, we have arrived at a contradiction. Thus κ1 = κ2, proving the uniqueness assertion. ☐

6 Models in equilibrium

Let P M + 1 ( A ) . By definition the requirement of equilibrium is one which involves the relationship between both sides of the Code Length Game. The main result of this section shows that the requirement can be expressed in terms involving only one of the sides of the game, either distributions or codes.
Theorem 6.1 (conditions for equilibrium). 
Let P M + 1 ( A ) be a model and assume that Hmax( P ) < ∞. Then the following conditions are equivalent:
(i) 
P is in equilibrium,
(ii) 
Hmax(co P ) = Hmax( P ),
(iii) 
there exists a $\overline{\mathcal{P}}^{D\sigma}$-adapted code κ such that
R(κ|𝒫) = Rloc(κ|𝒫).
Proof. 
(i) ⇒ (iii): Here we assume that Hmax(𝒫) = Rmin(𝒫). In particular, Rmin(𝒫) < ∞. As the map κ ↷ R(κ|𝒫) is lower semi-continuous on $\widetilde{K}(\mathbb{A})$ (as the supremum of the maps κ ↷ ⟨κ, P⟩; P ∈ 𝒫), and as $\widetilde{K}(\mathbb{A})$ is compact, the minimum of κ ↷ R(κ|𝒫) is attained. Thus, there exists $\kappa^* \in \widetilde{K}(\mathbb{A})$ such that R(κ*|𝒫) = Rmin(𝒫). As observed in Lemma 5.1, κ* is a compact code and κ* is the unique Rmin-code.
For P P ,
H(P) + D(P‖κ*) = ⟨κ*, P⟩ ≤ Rmin(𝒫) = Hmax(𝒫).
It follows that D(Pn‖κ*) → 0 for any asymptotically optimal sequence (Pn)n≥1. In other words, the distribution $P^* \in M_+^1(\mathbb{A})$ which matches κ* is the Hmax-attractor of the model.
We can now consider any asymptotically optimal sequence (Pn)n≥1 in order to conclude that
$$ R_{\mathrm{loc}}(\kappa^*|\mathcal{P}) \ge \limsup_{n\to\infty} \langle \kappa^*, P_n\rangle = \limsup_{n\to\infty} \big( H(P_n) + D(P_n\|\kappa^*) \big) = H_{\max}(\mathcal{P}) = R(\kappa^*|\mathcal{P}). $$
By (5.28), the assertion of (iii) follows.
(iii) ⇒ (ii): Assuming that (iii) holds, we find from (5.25), (3.13) and (5.29) that
$$ H_{\max}(\mathrm{co}\,\mathcal{P}) \le R_{\min}(\mathrm{co}\,\mathcal{P}) \le R(\kappa|\mathrm{co}\,\mathcal{P}) = R(\kappa|\mathcal{P}) = R_{\mathrm{loc}}(\kappa|\mathcal{P}) \le H_{\max}(\mathcal{P}) \le H_{\max}(\mathrm{co}\,\mathcal{P}) $$
and the equality of (ii) must hold.
(ii) ⇒ (i): For this part of the proof we fix a specific asymptotically optimal sequence (Pn)n≥1 ⊆ 𝒫. We assume that (ii) holds. For each n and m we observe that by (5.22) and (2.7), with $M = \frac{1}{2}(P_n + P_m)$,
$$ H_{\max}(\mathcal{P}) = H_{\max}(\mathrm{co}\,\mathcal{P}) \ge H(M) = \tfrac12 H(P_n) + \tfrac12 H(P_m) + \tfrac12 D(P_n\|M) + \tfrac12 D(P_m\|M) \ge \tfrac12 H(P_n) + \tfrac12 H(P_m) + \tfrac14 V(P_n, M)^2 + \tfrac14 V(P_m, M)^2. $$
It follows that (Pn)n≥1 is a Cauchy sequence with respect to total variation, hence there exists P * M + 1 ( A ) such that P n V P * .
Let κ* be the code adapted to P*. In order to evaluate R(κ*|𝒫) we consider any P ∈ 𝒫. For a suitable sequence (εn)n≥1 of positive numbers converging to zero, we consider the sequence (Qn)n≥1 ⊆ co 𝒫 given by
$$ Q_n = (1 - \varepsilon_n) P_n + \varepsilon_n P\,; \qquad n \ge 1. $$
By (5.22) we find that
Hmax(𝒫) = Hmax(co 𝒫) ≥ H(Qn) ≥ (1 − εn)H(Pn) + εnH(P) + εnD(PQn),
hence
$$ H(P) + D(P\|Q_n) \le H(P_n) + \frac{1}{\varepsilon_n}\big( H_{\max}(\mathcal{P}) - H(P_n) \big). $$
As QD(PQ) is lower semi-continuous, we conclude from this that
$$ H(P) + D(P\|P^*) \le H_{\max}(\mathcal{P}) + \liminf_{n\to\infty} \frac{1}{\varepsilon_n}\big( H_{\max}(\mathcal{P}) - H(P_n) \big). $$
Choosing the εn’s appropriately, e.g. $\varepsilon_n = (H_{\max}(\mathcal{P}) - H(P_n))^{1/2}$, it follows that H(P) + D(P‖P*) ≤ Hmax(𝒫), i.e. that ⟨κ*, P⟩ ≤ Hmax(𝒫). As this holds for all P ∈ 𝒫, R(κ*|𝒫) ≤ Hmax(𝒫) follows. Thus Rmin(𝒫) ≤ Hmax(𝒫), hence equality must hold here, and we have proved that 𝒫 is in equilibrium, as desired.
It is now easy to derive the basic properties which hold for a system in equilibrium.
Theorem 6.2 (models in equilibrium). 
Assume that P M + 1 ( A ) is a model in equilibrium. Then the following properties hold:
(i) 
There exists a unique Hmax-attractor and for this distribution, say P*, the inequality
$$ R_{\min}(\mathcal{P}) + D(P^*\|\kappa) \le R(\kappa|\mathcal{P}) \qquad\qquad (6.30) $$
holds for all κK( A ).
(ii) 
There exists a unique Rmin-code and for this code, say κ*, the inequality
$$ H(P) + D(P\|\kappa^*) \le H_{\max}(\mathcal{P}) \qquad\qquad (6.31) $$
holds for every P ∈ 𝒫, even for every $P \in \overline{\mathrm{co}}^{V}\mathcal{P}$. The Rmin-code is compact.
Proof. 
The existence of the Rmin-code was established by the compactness argument in the beginning of the proof of Theorem 6.1. The inequality (6.31) for P ∈ 𝒫 is nothing but an equivalent form of the inequality R(κ*|𝒫) ≤ Hmax(𝒫) and this inequality immediately implies that the distribution P* matching κ* is the Hmax-attractor. The extension of the validity of (6.31) to $P \in \overline{\mathrm{co}}^{V}\mathcal{P}$ follows from (5.26).
To prove (6.30), let (Pn)n≥1 P be asymptotically optimal. Then
$$ R(\kappa|\mathcal{P}) \ge \limsup_{n\to\infty} \langle\kappa, P_n\rangle = \limsup_{n\to\infty}\big( H(P_n) + D(P_n\|\kappa) \big) \ge H_{\max}(\mathcal{P}) + D(P^*\|\kappa), $$
which is the desired conclusion as Hmax( P ) = Rmin( P ). ☐
If 𝒫 is a model in equilibrium, we refer to the pair (κ*, P*) from Theorem 6.2 as the optimal matching pair. Thus κ* denotes the Rmin-code and P* the Hmax-attractor.
Combining Theorem 6.2 and Theorem 6.1 we realize that, unless R(κ| P ) = ∞ for every κK( A ), there exists a unique Rmin-code. The matching distribution is the Hmax-attractor for the model co ( P ). We also note that simple examples (even with A a two-element set) show that P may have a Hmax-attractor without P being in equilibrium and this attractor may be far away from the Hmax-attractor for co ( P ).
Corollary 6.3. 
For a model $\mathcal{P} \subseteq M_+^1(\mathbb{A})$ in equilibrium, the Rmin-code and the Hmax-attractor form a matching pair: κ* ↔ P*, and for any matching pair (κ, P) with $P \in \overline{\mathrm{co}}^{V}\mathcal{P}$,
$$ V(P, P^*) \le \big( R(\kappa|\mathcal{P}) - H(P) \big)^{1/2}. $$
Proof. 
Combining (6.30) with (6.31) it follows that for κ ↔ P with $P \in \overline{\mathrm{co}}^{V}\mathcal{P}$,
D(P‖P*) + D(P*‖P) ≤ R(κ|𝒫) − H(P)
and the result follows from Pinsker’s inequality, (2.7). ☐
Corollary 6.3 may help us to judge the approximate position of the Hmax-attractor P* even without knowing the value of Hmax(𝒫). Note also that the proof gave the more precise bound J(P, P*) ≤ R(κ|𝒫) − H(P) with assumptions as in the theorem and with J(·, ·) denoting Jeffreys’ measure of discrimination, cf. [3] or [16].
Corollary 6.4. 
Assume that the model 𝒫 has a cost-stable code κ and let P* be the matching distribution. Then 𝒫 is in equilibrium and has (κ, P*) as optimal matching pair if and only if P* is essentially consistent.
Proof. 
By definition an Hmax-attractor is essentially consistent. Therefore, the necessity of the condition $P^* \in \overline{\mathcal{P}}^{D\sigma}$ is trivial. For the proof of sufficiency, assume that ⟨κ, P⟩ = h for all P ∈ 𝒫 with h a finite constant. Clearly then, Hmax(𝒫) ≤ h. Now, let (Pn)n≥1 be a sequence of consistent distributions with $P_n \xrightarrow{D} P^*$. By the linking identity, H(Pn) + D(Pn‖P*) = h for all n, and we see that Hmax(𝒫) ≥ h. Thus Hmax(𝒫) = h and (Pn) is asymptotically optimal. By Theorem 6.2, the sequence converges in the information topology to the Hmax-attractor which must then be P*. The result follows.     ☐
Note that this result is a natural further development of Corollary 4.2.
Corollary 6.5. 
Assume that $\mathcal{P} \subseteq M_+^1(\mathbb{A})$ is a model in equilibrium. Then all models 𝒫′ with $\mathcal{P} \subseteq \mathcal{P}' \subseteq \overline{\mathrm{co}}^{V}\mathcal{P}$ are in equilibrium too and they all have the same optimal matching pair.
Proof. 
If $\mathcal{P} \subseteq \mathcal{P}' \subseteq \overline{\mathrm{co}}^{V}\mathcal{P}$ then
$$ H_{\max}(\mathrm{co}\,\mathcal{P}') \le H_{\max}(\overline{\mathrm{co}}^{V}\mathcal{P}) = H_{\max}(\mathrm{co}\,\mathcal{P}) = H_{\max}(\mathcal{P}) \le H_{\max}(\mathcal{P}') $$
and we see that 𝒫′ is in equilibrium. As an asymptotically optimal sequence for 𝒫 is also asymptotically optimal for 𝒫′, it follows that 𝒫′ has the same Hmax-attractor, hence also the same optimal matching pair, as 𝒫.
Another corollary is the following result which can be used as a basis for proving certain limit theorems, cf. [22].
Corollary 6.6. 
Let $(\mathbb{A}, \mathcal{P}_n)_{n\ge1}$ be a sequence of models and assume that they are all in equilibrium with $\sup_{n\ge1} H_{\max}(\mathcal{P}_n) < \infty$ and that they are nested in the sense that $\mathrm{co}(\mathcal{P}_1) \subseteq \mathrm{co}(\mathcal{P}_2) \subseteq \cdots$. Let there further be given a model 𝒫 such that
$$ \bigcup_{n\ge1}\mathcal{P}_n \subseteq \mathcal{P} \subseteq \overline{\mathrm{co}}^{V}\Big(\bigcup_{n\ge1}\mathcal{P}_n\Big). $$
Then 𝒫 is in equilibrium too, and the sequence of Hmax-attractors of the 𝒫n’s converges in divergence to the Hmax-attractor of 𝒫.
Clearly, the corollaries are related and we leave it to the reader to extend the argument in the proof of Corollary 6.5 so that it also covers the case of Corollary 6.6.
We end this section by developing some results on models given by linear conditions, thereby continuing the preliminary results from section 1 and section 4. We start with a general result which uses the following notion: A distribution P is algebraically inner in the model P if, for every P P there exists Q P such that P is a convex combination of P and Q.
Lemma 6.7. 
If the model 𝒫 is in equilibrium and has a Hmax-distribution P* which is algebraically inner in 𝒫, then the code adapted to P* is cost-stable.
Proof. 
Let κ* be the code adapted to P*. To any P ∈ 𝒫 we determine Q ∈ 𝒫 such that P* is a convex combination of these two distributions. Then, as ⟨κ*, P⟩ ≤ Hmax(𝒫) and ⟨κ*, Q⟩ ≤ Hmax(𝒫) and as a convex combination of these gives ⟨κ*, P*⟩ ≤ Hmax(𝒫), we must conclude that ⟨κ*, P⟩ = ⟨κ*, Q⟩ = Hmax(𝒫), since ⟨κ*, P*⟩ is in fact equal to Hmax(𝒫). Therefore, κ* is cost-stable.
Theorem 6.8. 
If the alphabet A is finite and the model P affine, then the model is in equilibrium and the Rmin-code is cost-stable.
Proof. 
We may assume that 𝒫 is closed. By Theorem 6.1, the model is in equilibrium and by continuity of the entropy function, the Hmax-attractor P* is a Hmax-distribution. For the Rmin-code κ*, supp(κ*) = supp(𝒫) by Lemma 5.1. As $\mathbb{A}$ is finite we can then conclude that P* is algebraically inner and Lemma 6.7 applies. ☐
We can now prove the following result:
Theorem 6.9. 
Let P be a non-empty model given by finitely many linear constraints as in (4.15):
$$ \mathcal{P} = \{ P \in M_+^1(\mathbb{A}) \mid \langle E_1, P\rangle = \lambda_1, \ldots, \langle E_n, P\rangle = \lambda_n \}. $$
Assume that the functions E1, ⋯, En, 1 are linearly independent and that Hmax(𝒫) < ∞. Then the model is in equilibrium and the optimal matching pair (κ*, P*) belongs to the exponential family defined by (4.18) and (4.20). In particular, κ* is cost-stable.
Proof. 
The model is in equilibrium by Theorem 6.1. Let (κ*, P*) be the corresponding optimal matching pair. If $\mathbb{A}$ is finite the result follows by Theorem 6.8 and some standard linear algebra.
Assume now that $\mathbb{A}$ is infinite. Choose an asymptotically optimal sequence (Pn)n≥1. Let $\mathbb{A}_0$ be a finite subset of $\mathbb{A}$, chosen sufficiently large (see below), and denote by 𝒫n the convex model of all P ∈ 𝒫 for which pi = Pn,i for all $i \in \mathbb{A} \setminus \mathbb{A}_0$. Let $P_n^*$ be the Hmax-attractor for 𝒫n and $\kappa_n^*$ the adapted code. Then this code is cost-stable for 𝒫n and of the form
$$ \kappa_{n,i}^* = \alpha_n + \sum_{\nu=1}^{n} \beta_{n,\nu} E_\nu(i); \qquad i \in \mathbb{A}_0. $$
If the set A 0 is sufficiently large, the constants appearing here are uniquely determined. We find that ( P n * ) n 1 is asymptotically optimal for P , and therefore, P n * D P * . It follows that the constants βn,ν and αn converge to some constants βν and α and that
$$ \kappa_i^* = \alpha + \sum_{\nu=1}^{n} \beta_\nu E_\nu(i); \qquad i \in \mathbb{A}_0. $$
As $\mathbb{A}_0$ can be chosen arbitrarily large, the constants α and βν must be independent of $\mathbb{A}_0$ and the above equation must hold for all $i \in \mathbb{A}$.   ☐
Remark
Extensions of the result just proved may well be possible, but care has to be taken. For instance, if we consider models obtained by infinitely many linear constraints, the result does not hold. As a simple instance of this, the reader may consider the case where the model is a “line”, viz. the affine hull generated by the two distributions P, Q on $\mathbb{A} = \mathbb{N}$ given by $p_i = 2^{-i}$; i ≥ 1 and $q_i = (\zeta(3)\, i^3)^{-1}$; i ≥ 1. This model is in equilibrium with P as Hmax-distribution, but the adapted code is not cost-stable. These facts can be established quite easily via the results quoted in the footnote following the proof of Theorem 4.1.

7 Entropy-continuous models

In the sequel we shall only discuss models in equilibrium. Such models can be quite different regarding the behaviour of the entropy function near the maximum. We start with a simple observation.
Lemma 7.1. 
If P is in equilibrium and the Hmax-value Hmax( P ) is attained on P ¯ V , it is only attained for the Hmax-attractor.
Proof. 
Assume that $P \in \overline{\mathcal{P}}^{V}$ and that H(P) = Hmax(𝒫). Choose a sequence (Pn)n≥1 ⊆ 𝒫 which converges to P in total variation. By lower semi-continuity and as H(P) = Hmax(𝒫) we see that (Pn) is asymptotically optimal. Therefore, for the Hmax-attractor P*, $P_n \xrightarrow{D} P^*$, hence also $P_n \xrightarrow{V} P^*$. It follows that P = P*.
Lemma 7.2. 
For a model 𝒫 in equilibrium and with Hmax-attractor P* the following conditions are equivalent:
(i) 
$H : \overline{\mathcal{P}}^{V} \to \mathbb{R}_+$ is continuous at P* in the topology of total variation,
(ii) 
$H : \overline{\mathcal{P}}^{\sigma} \to \mathbb{R}_+$ is sequentially continuous at P* in the information topology,
(iii) 
H(P*) = Hmax(𝒫).
Proof. 
Clearly, (i) implies (ii).
Assume that (ii) holds and let (Pn)n≥1 ⊆ 𝒫 be asymptotically optimal. Then $P_n \xrightarrow{D} P^*$. By assumption, H(Pn) → H(P*) and (iii) follows since H(Pn) → Hmax(𝒫) also holds.
Finally, assume that (iii) holds and let ( P n ) n 1 P ¯ V satisfy P n V P * . By lower semi-continuity,
$$ H_{\max}(\mathcal{P}) = H(P^*) \le \liminf_{n\to\infty} H(P_n) \le \limsup_{n\to\infty} H(P_n) \le H_{\max}(\overline{\mathcal{P}}^{V}) = H_{\max}(\mathcal{P}) $$
and H(Pn) → H(P*) follows. Thus (i) holds.
A model 𝒫 in equilibrium is entropy-continuous if H(P*) = Hmax(𝒫) with P* the Hmax-attractor. In the opposite case we say that there is an entropy loss.
We now discuss entropy-continuous models. As we shall see, the previously introduced notion of Nash equilibrium code, cf. Section 4, is of central importance in this connection. We need this concept for any P ¯ D σ -adapted code. Thus, by definition a code κ is a Nash equilibrium code if κ is P ¯ D σ -adapted and if
R(κ|𝒫) ≤ H(P) < ∞, where P denotes the distribution matching κ.
We stress that the definition is used for any model P (whether or not it is known beforehand that the model is in equilibrium). We shall see below that a Nash equilibrium code is unique.
Theorem 7.3 (entropy-continuous models).  
Let P M + 1 ( A ) be a model. The following conditions are equivalent:
(i) 
P is in equilibrium and entropy-continuous,
(ii) 
P is in equilibrium and has a maximum entropy distribution,
(iii) 
P has a Nash equilibrium code.
If these conditions are fulfilled, the Hmax-distribution is unique and coincides with the Hmax-attractor. Likewise, the Nash equilibrium code is unique and it coincides with the Rmin-code.
Proof. 
(i) ⇒ (ii): This is clear since, assuming that (i) holds, the Hmax-attractor must be a Hmax-distribution.
(ii) ⇒ (iii): Assume that 𝒫 is in equilibrium and that $P_0 \in \overline{\mathcal{P}}^{D\sigma}$ is a Hmax-distribution. Let (κ*, P*) be the optimal matching pair. Applying Theorem 6.2, (6.31) with P = P0, we conclude that D(P0‖P*) = 0, hence P0 = P*. Then we find that
R(κ*|𝒫) = Rmin(𝒫) = Hmax(𝒫) = H(P0) = H(P*)
and we see that κ* is a Nash equilibrium code.
(iii) ⇒ (i): If κ is a Nash equilibrium code for P , then
Rmin(𝒫) ≤ R(κ|𝒫) ≤ H(P) ≤ Hmax(𝒫) ≤ Rmin(𝒫)
and we conclude that P is in equilibrium and that κ is the minimum risk code.
In establishing the equivalence of (i)–(iii) we also established the uniqueness assertions claimed. ☐
The theorem generalizes the previous result, Theorem 4.1. We refer to section 4 for results which point to the great applicability of results like Theorem 7.3.

8 Loss of entropy

We shall study a model P in equilibrium. By previous results we realize that for many purposes we may assume that P is a closed, convex subset of M + 1 ( A ) with Hmax( P ) < ∞. Henceforth, these assumptions are in force.
Denote by (κ*, P*) the optimal matching pair associated with 𝒫. By the dissection of 𝒫 we understand the decomposition of 𝒫 consisting of all non-empty sets of the form
$$ \mathcal{P}_x = \{ P \in \mathcal{P} \mid \langle \kappa^*, P\rangle = x \}. \qquad\qquad (8.33) $$
Let ∆ denote the set of $x \in \mathbb{R}$ with $\mathcal{P}_x \ne \emptyset$. As R(κ*|𝒫) = Rmin(𝒫) = Hmax(𝒫), and as 𝒫 is convex with P* ∈ 𝒫, ∆ is a subinterval of [0; Hmax(𝒫)] which contains the interval [H(P*), Hmax(𝒫)[.
Clearly, κ* is a cost-stable code for all models 𝒫x; x ∈ ∆. Hence, by Theorem 4.3 the entropy function is continuous on each of the sets 𝒫x; x ∈ ∆.
Each set P x ; x ∈ ∆ is a sub-model of P and as each P x is convex with Hmax( P x ) < ∞, these sub-models are all in equilibrium. The linking identity shows that for all P P x ,
$$ H(P) + D(P\|P^*) = x. \qquad\qquad (8.34) $$
This implies that Hmax(𝒫x) ≤ x, a sharpening of the trivial inequality Hmax(𝒫x) ≤ Hmax(𝒫). From (8.34) it also follows that maximizing entropy H(·) over 𝒫x amounts to the same thing as minimizing divergence D(·‖P*) over 𝒫x. In other words, the Hmax-attractor of 𝒫x may, alternatively, be characterized as the I-projection of P* on 𝒫x, i.e. as the unique distribution $P_x^*$ for which $Q_n \xrightarrow{D} P_x^*$ for every sequence (Qn) ⊆ 𝒫x for which D(Qn‖P*) converges to the infimum of D(Q‖P*) with Q ∈ 𝒫x.*
Further basic results are collected below:
Theorem 8.1 (disection of models). 
Let 𝒫 be a convex model in equilibrium with optimal matching pair (κ*, P*) and assume that P* ∈ 𝒫. Then the following properties hold for the dissection $(\mathcal{P}_x)_{x\in\Delta}$ defined by (8.33):
(i) 
The set ∆ is an interval with sup ∆ = Hmax(𝒫). A necessary and sufficient condition that Hmax(𝒫) ∈ ∆ is that 𝒫 is entropy-continuous. If 𝒫 has entropy loss, ∆ contains the non-degenerate interval [H(P*), Hmax(𝒫)[.
(ii) 
The entropy function is continuous on each sub-model P x ; x ∈ ∆,
(iii) 
Each sub-model 𝒫x; x ∈ ∆ is in equilibrium and the Hmax-attractor for 𝒫x is the I-projection of P* on 𝒫x,
(iv) 
For x ∈ ∆, Hmax( P x ) ≤ x and the following bi-implications hold, where P x * denotes the Hmax-attractor of P x :
$$ H_{\max}(\mathcal{P}_x) = x \iff P_x^* = P^* \iff x \ge H(P^*). \qquad\qquad (8.35) $$
Proof. 
(i)–(iii) as well as the inequality Hmax(𝒫x) ≤ x of (iv) were proved above.
For the proof of (iv) we consider an x ∈ ∆ and let (Pn) ⊆ 𝒫x be an asymptotically optimal sequence for 𝒫x. Then the condition Hmax(𝒫x) = x is equivalent with the condition H(Pn) → x, and the condition $P_x^* = P^*$ is equivalent with the condition $P_n \xrightarrow{D} P^*$. In view of the equality x = H(Pn) + D(Pn‖P*) we now realize that the first bi-implication of (8.35) holds. For the second bi-implication we first remark that as x ≥ H($P_x^*$) holds generally, if $P_x^* = P^*$ then x ≥ H(P*) must hold.
For the final part of the proof of (iv), we assume that x ≥ H(P*). The equality Hmax(𝒫x) = x is evident if x = H(P*). We may therefore assume that H(P*) < x < Hmax(𝒫). We now let (Pn) denote an asymptotically optimal sequence for the full model 𝒫 such that H(Pn) ≥ x; n ≥ 1. As ⟨κ*, P*⟩ ≤ x ≤ ⟨κ*, Pn⟩ for all n, we can find a sequence (Qn)n≥1 of distributions in 𝒫x such that each Qn is a convex combination of the form Qn = αnP* + βnPn. By (5.23), D(Qn‖P*) ≤ βnD(Pn‖P*) → 0. Thus P* is essentially consistent for 𝒫x and as the code adapted to P* is cost-stable for this model, Corollary 6.4 implies that the model has P* as its Hmax-attractor.     ☐
A distribution P* is said to have potential entropy loss if the distribution is the Hmax-attractor of a model in equilibrium with entropy loss. As we shall see, this amounts to a very special behaviour of the point probabilities. The definition we need at this point we first formulate quite generally, for an arbitrary distribution P*. With P* we consider the density function Ω associated with the adapted code, cf. the appendix. In terms of P* this function is given by:
$$ \Omega(t) = \#\{ i \in \mathbb{A} \mid p_i^* \ge \exp(-t) \} $$
(# = “number of elements in”). We can now define a hyperbolic distribution as a distribution P such that
$$ \limsup_{t\to\infty} \frac{\log \Omega(t)}{t} = 1. \qquad\qquad (8.37) $$
Clearly, Ω(t) ≤ exp(t) for each t so that the equality in the defining relation may just as well be replaced by the inequality “≥” .
We note that zero point probabilities do not really enter into the definition, therefore we may assume without any essential loss of generality that all point probabilities are positive. And then, we may as well assume that the point probabilities are ordered: $p_1^* \ge p_2^* \ge \cdots$. In this case, it is easy to see that (8.37) is equivalent with the requirement
$$ \liminf_{i\to\infty} \frac{\log p_i^*}{\log \frac{1}{i}} = 1. \qquad\qquad (8.38) $$
In the sequel we shall typically work with distributions which are ordered in the above sense. The terminology regarding hyperbolic distributions is inspired by [19] but goes back further, cf. [24]. In these references the reader will find remarks and results pertaining to this and related types of distributions and their discovery from empirical studies which we will also comment on in the next section.
We note that in (8.38) the inequality "≥" is trivial, as pi ≤ 1/i for every i ∈ A. Therefore, in more detail, a distribution with ordered point probabilities is hyperbolic if and only if, for every a > 1,
p_i^* \geq \frac{1}{i^a}
for infinitely many indices.
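As a purely numerical aside, and not part of the formal development, the following Python sketch tabulates the ratio appearing in (8.38) for two ordered distributions: a normalized sequence proportional to 1/(i log²(i+1)), which is hyperbolic and has finite entropy, and a geometric sequence, which is not hyperbolic. The particular weight sequences and the truncation level N are illustrative assumptions only.

```python
import math

N = 200_000  # truncation level, chosen only for illustration

def log_normalize(log_w):
    # return log-probabilities from log-weights (log-sum-exp normalization)
    m = max(log_w)
    log_z = m + math.log(sum(math.exp(x - m) for x in log_w))
    return [x - log_z for x in log_w]

# ordered log-weights of the two candidate distributions
log_hyp = log_normalize([-math.log(i) - 2.0 * math.log(math.log(i + 1))
                         for i in range(1, N + 1)])
log_geo = log_normalize([-i * math.log(2.0) for i in range(1, N + 1)])

def ratio(log_p, i):
    # log p_i / log(1/i) = -log p_i / log i; always >= 1 since p_i <= 1/i
    return -log_p[i - 1] / math.log(i)

for name, log_p in [("1/(i log^2(i+1))", log_hyp), ("geometric", log_geo)]:
    print(name, [round(ratio(log_p, i), 2) for i in (10, 100, 1000, 10_000, 100_000)])
# The first row decreases slowly towards 1 (liminf equal to 1: hyperbolic),
# while the second grows without bound (liminf > 1: not hyperbolic).
```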
Theorem 8.2. 
Every distribution with infinite entropy is hyperbolic.
Proof. 
Assume that P is not hyperbolic and that the point probabilities are ordered. Then there exists a > 1 such that pi ≤ 1/i^a for all sufficiently large i. As the distribution with point probabilities equal to 1/i^a, properly normalized, has finite entropy, the result follows. ☐
With every model 𝒫 in equilibrium we associate a partition function and an exponential family, simply by considering the corresponding objects associated with the Rmin-code of the model in question. This follows the definitions given in Section 4, but for the simple case where there is only one "energy function", the Rmin-code now playing the role of the energy function.
Theorem 8.3 (maximal models). 
Let 𝒫 ⊆ M₊¹(A) be given and assume that there exists a model 𝒫′ such that 𝒫 ⊆ 𝒫′, 𝒫′ is in equilibrium and Hmax(𝒫′) = Hmax(𝒫). Then 𝒫 itself must be in equilibrium. Furthermore, there exists a largest model 𝒫max with the stated properties, namely the model
\mathcal{P}_{\max} = \{\, P \in M_+^1(A) \mid \langle \kappa^*, P \rangle \leq H_{\max}(\mathcal{P}) \,\}, \qquad (8.40)
where κ* denotes the minimum risk code of 𝒫. Finally, any model 𝒫′ with 𝒫 ⊆ 𝒫′ ⊆ 𝒫max is in equilibrium and has the same optimal matching pair as 𝒫.
Proof. 
Choose 𝒫′ with the stated properties. By Theorem 6.1,
H_{\max}(\mathrm{co}\,\mathcal{P}) \leq H_{\max}(\mathrm{co}\,\mathcal{P}') = H_{\max}(\mathcal{P}') = H_{\max}(\mathcal{P}),
hence 𝒫 is in equilibrium. Let κ* be the Rmin-code of 𝒫 in accordance with Theorem 6.2 and consider 𝒫max defined by (8.40).
Now let 𝒫′ ⊇ 𝒫 be an equilibrium model with Hmax(𝒫′) = Hmax(𝒫). As an asymptotically optimal sequence for 𝒫 is also asymptotically optimal for 𝒫′, we realize that 𝒫′ has the same Hmax-attractor, hence also the same Rmin-code as 𝒫. Thus R(κ*|𝒫′) = Rmin(𝒫′) = Hmax(𝒫′) = Hmax(𝒫), and it follows that 𝒫′ ⊆ 𝒫max.
Clearly, 𝒫max is convex and Hmax(𝒫max) = Hmax(𝒫) < ∞, hence 𝒫max is in equilibrium by Theorem 6.1.
The final assertion of the theorem follows by one more application of Theorem 6.1. ☐
The models which can arise as in Theorem 8.3 via (8.40) are called maximal models.
Let κ* ∈ K(A) and 0 ≤ h < ∞. Put
\mathcal{P}_{\kappa^*\!,\,h} = \{\, P \in M_+^1(A) \mid \langle \kappa^*, P \rangle \leq h \,\}. \qquad (8.41)
We know that any maximal model must be of this form. Naturally, the converse does not hold. An obvious necessary condition is that the entropy of the matching distribution be finite, but we must require more. Clearly, the models in (8.41) are in equilibrium, but it is not clear that they have κ* as Rmin-code and h as Hmax-value.
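To make the role of the exponential family concrete, here is a small numerical sketch, restricted to a finite alphabet and natural logarithms and not taken from the paper: for a single-constraint model of the form (8.41) whose constraint is active, the maximum entropy distribution is the member Qβ of the exponential family with 〈κ*, Qβ〉 = h, and β can be located by bisection since Φ is decreasing. The specific code κ* and level h below are arbitrary choices made for the demonstration.

```python
import math

def exp_family(kappa, beta):
    # Q_beta(i) proportional to exp(-beta * kappa_i)
    w = [math.exp(-beta * k) for k in kappa]
    z = sum(w)
    return [x / z for x in w]

def phi(kappa, beta):
    # Phi(beta) = <kappa, Q_beta>
    q = exp_family(kappa, beta)
    return sum(k * qi for k, qi in zip(kappa, q))

def max_entropy(kappa, h, lo=-50.0, hi=50.0, tol=1e-10):
    # maximize entropy subject to <kappa, P> <= h
    if phi(kappa, 0.0) <= h:
        return exp_family(kappa, 0.0)   # uniform distribution already feasible
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if phi(kappa, mid) > h:
            lo = mid                    # mean still too large: increase beta
        else:
            hi = mid
    return exp_family(kappa, 0.5 * (lo + hi))

def entropy(p):
    return -sum(x * math.log(x) for x in p if x > 0)

kappa = [math.log(2), math.log(4), math.log(8), math.log(8)]  # compact: sum exp(-kappa_i) = 1
h = 1.4                                                       # in nats
q = max_entropy(kappa, h)
print("mean code length <kappa, Q>:", round(sum(k * x for k, x in zip(kappa, q)), 6))
print("entropy H(Q):", round(entropy(q), 6))
```

For the code chosen above, the matching distribution is (1/2, 1/4, 1/8, 1/8) with entropy about 1.21 nats, while the mean code length under the uniform distribution is about 1.56 nats; the level h = 1.4 therefore makes the constraint active and the printed mean equals h.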
Theorem 8.4. 
A distribution P* ∈ M₊¹(ℕ) with finite entropy has potential entropy loss if and only if it is hyperbolic.
Proof. 
We may assume that the point probabilities of P* are ordered.
Assume first that P* is not hyperbolic and that P* is the attractor for some model 𝒫 in equilibrium. Consider the corresponding maximal models 𝒫κ*,h and consider a value of h with H(P*) ≤ h ≤ Hmax(𝒫). Let γ be the abscissa of convergence associated with κ* and let Φ be defined as in the appendix. As γ < 1, we can choose β > γ such that Φ(β) = h. Now both P* and Qβ, given by
Q_\beta(i) = \frac{\exp(-\beta \kappa_i^*)}{Z(\beta)},
are attractors for 𝒫κ*,h and hence equal. It follows that h = 〈κ*, P*〉 = H(P*); as h could be any value in [H(P*), Hmax(𝒫)], we must have Hmax(𝒫) = H(P*), so P* does not have potential entropy loss. Next we show that a hyperbolic distribution has potential entropy loss.
Consider the maximal models 𝒫κ*,h. Each of these models is given by a single linear constraint; therefore, the attractor is an element of the corresponding exponential family. The abscissa of convergence is 1 and, therefore, the range of the map Φ : [1; ∞[ → ℝ is ]κ1*; H(P*)]. For h ∈ ]κ1*; H(P*)], there exists a consistent maximum entropy distribution. Assume that h0 > H(P*) and that the attractor of 𝒫κ*,h0 equals
Q_\beta(i) = \frac{\exp(-\beta \kappa_i^*)}{Z(\beta)}.
By Theorem 8.1, Qβ must then be the attractor for all 𝒫κ*,h with h ∈ [Φ(β); h0]. In particular, this holds for h = H(P*). This shows that P* = Qβ. By Theorem 8.1 the conclusion is now clear. ☐

9 Zipf’s law

Zipf’s law is an empirically discovered relationship for the relative frequencies of the words of a natural language. The law states that
\log(f_i) \approx a \log\Big(\frac{1}{i}\Big) + b
where fi is the relative frequency of the i’th most common word in the language, and where a and b denote constants. For large values of i we then have
a \approx \frac{\log f_i}{\log \frac{1}{i}}.
The constants a and b depend on the language, but for many languages a ≈ 1; see [24].
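As a simple illustration of how a and b can be estimated in practice (this is our own aside, not part of the original analysis), the following Python sketch fits the relation above by ordinary least squares on the pairs (log(1/i), log fi); the short list of word counts is fabricated purely to make the snippet self-contained.

```python
import math

# hypothetical word counts, already ordered by rank
counts = [3421, 1790, 1203, 880, 701, 599, 501, 440, 399, 350]
total = sum(counts)
freqs = [c / total for c in counts]

xs = [math.log(1.0 / i) for i in range(1, len(freqs) + 1)]
ys = [math.log(f) for f in freqs]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x
print(f"fitted exponent a = {a:.3f}, intercept b = {b:.3f}")
# For text obeying Zipf's law one expects the fitted a to lie close to 1.
```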
Now consider an ideal language in which the frequencies of words are described by a hyperbolic probability distribution P, and assume that the entropy of this distribution is finite. We shall describe, in qualitative terms, the consequences of these assumptions as they can be derived from the theory developed above, especially Theorem 8.4. We shall see that our assumption introduces a kind of stability of the language which is desirable in most situations.
Small children with a limited vocabulary will use the few words they know with relative frequencies very different from the probabilities described by P. They will only form simple sentences, and at this stage the number of bits per word will be small in the sense that the entropy of the child's probability distribution is small. Therefore the parents will often be able to understand the child even though the pronunciation is poor. The parents will typically talk to their children at a lower bit rate than they normally use, but at a higher bit rate than their children. Thereby new words and grammatical structures are presented to the child, and, adopting elements of this structure, the child will be able to increase its bit rate. At a certain stage the child will be able to communicate at a reasonably high rate (about H(P)). The child then knows all the basic words and structures of the language.
The child is still able to increase its bit rate, but from now on this makes no significant change in the relative frequencies of the words. Bit rates higher than H(P) are from now on obtained by the introduction of specialized words, which occur seldom in the language as a whole. The introduction of new specialized words can continue for the rest of one's life. One is therefore able to express even complicated ideas without changing the basic structure of the language; indeed, there is theoretically no limit to the bit rate at which one can communicate without changing the basic structure.
We realize that, in view of our theoretical results, specifically Theorem 8.4, the features of a natural language just discussed are only possible if the language obeys Zipf's law. Thus we have the striking phenomenon that the apparently "irregular" behaviour of models with entropy loss (or just potential entropy loss) is actually the key to desirable stability: for such models you can increase the bit rate, the level of communication, and still maintain the basic features of the language. One could even speculate that modelling based on entropy loss lies behind the phenomenon, which many will recognize as a fact, that "we can talk without thinking". We simply start talking using the basic structure of the language (and rather common words) and then, from time to time, insert more informative words and phrases in order to give our talk more semantic content. In doing so we use relatively infrequent words and structures, thus not violating basic principles, and hence still speaking recognizably Danish, English or whatever the case may be, so that the receiver or listener feels at ease and recognizes our talk as unmistakably Danish, English or ...
We see that very informative speech can be obtained by the use of infrequent expressions. Thus a conversation between, say, two physicists may use English supplemented with specialized words like "electron" and "magnetic flux". We recognize their language as English because the basic words and the grammar are the same in all English. The specialists only have to know special words, not a special grammar. In this sense languages are stable. If the entropy of our distribution is infinite, the language will behave in just about the same manner as described above. In fact, one would not notice any difference between a language with finite entropy and a language with infinite entropy.
We see that it is convenient that a language follows Zipf's law, but the information theoretic methods also give some explanation of how a language may have evolved into a state which obeys Zipf's law. The set of hyperbolic distributions is convex. Therefore, if two information sources both follow Zipf's law, then so does their mixture, and if two information sources both approximately follow Zipf's law, their mixture will do so even more. The information sources may be from different languages, but it is more interesting to consider a small child learning the language. The child gets input from different sources: the mother, the father, other children etc. Trying to imitate their language, the child will use the words with frequencies which are closer to Zipf's law than those of the sources. As the language develops over the centuries, the frequencies will converge to a hyperbolic distribution.
Here we have discussed entropy as bits per word and not bits per letter. The letters give an encoding of the words which should primarily be understood by others, and therefore the encoding cannot just be changed to obtain a better data compression. To stress the difference between bits per word and bits per letter, we remark that the words are the basic semantic structure of the language. We may therefore have an internal representation of the words which has very little to do with their length when spoken; this could explain why it is often much easier to remember a long word in a language you understand than a short word in a language you do not understand. It would be interesting to compare these ideas with empirical measurements of the entropy considered here but, precisely in the regime where Zipf's law holds, such a study is very difficult, as convergence of estimators of the entropy is very slow, cf. [1].
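To give a feeling for the estimation difficulty just mentioned, here is a rough simulation sketch (our own illustration; it is not the methodology of [1]): plug-in entropy estimates computed from samples of a truncated hyperbolic distribution approach the true entropy of that distribution only slowly. The support size, the sample sizes and the particular hyperbolic weights are arbitrary choices.

```python
import math
import random
from collections import Counter

random.seed(0)
N = 50_000                                  # truncated support, for illustration
weights = [1.0 / (i * math.log(i + 1) ** 2) for i in range(1, N + 1)]
total = sum(weights)
probs = [w / total for w in weights]
true_H = -sum(p * math.log(p) for p in probs)

population = list(range(N))
for sample_size in (1_000, 10_000, 100_000):
    sample = random.choices(population, weights=probs, k=sample_size)
    counts = Counter(sample)
    est_H = -sum((c / sample_size) * math.log(c / sample_size)
                 for c in counts.values())
    print(sample_size, "plug-in:", round(est_H, 3), " true:", round(true_H, 3))
# The plug-in estimate is biased downwards (unseen words contribute nothing)
# and closes the gap to true_H only slowly as the sample grows.
```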

A The partition function

In this appendix we collect some basic facts about partition functions associated with one linear constraint.
The point of departure is a code κ ∈ K(A). With κ we associate the partition function Z = Zκ, which maps ℝ into ]0, ∞] and is given by
Z(\beta) = \sum_{i \in I} e^{-\beta \kappa_i}.
Here we adopt the convention that e−βκi = 0 if β = 0 and κi = ∞. Clearly, Z is decreasing on [1; ∞[ and Z(β) → 0 for β → ∞ (note that, given K, e−βκi ≤ e−K · e−κi for all i when β is large enough).
The series defining Z is a Dirichlet-series, cf. Hardy and Riesz [7] or Mandelbrojt [17]. The abscissa of convergence we denote by γ. Thus, by definition, Z(β) < ∞ for β > γ and Z(β) = ∞ for β < γ. As Z(1) = 1, γ ≤ 1. If supp (κ) is infinite, Z(β) = ∞ for β ≤ 0, hence γ ≥ 0. If supp (κ) is finite, Z(β) < ∞ for all β ∈ ℝ and we then find that γ = −∞. Mostly, we shall have the case when supp (κ) is infinite in mind.
We shall characterize γ analytically. This is only a problem when supp (κ) is infinite. So assume that this is the case and also assume, for the sake of convenience, that κ has full support, that the indexing set I is the set of natural numbers, I = ℕ, and that κ1 ≤ κ2 ≤ ⋯.
Lemma A.1. 
With assumptions as just introduced,
\gamma = \limsup_{i \to \infty} \frac{\log i}{\kappa_i}. \qquad (A.43)
Proof. 
First assume that β > γ with γ defined by (A.43). Then, for some α > 1, α log i/κi ≤ β for all sufficiently large values of i. For these values of i, e−βκi ≤ i−α and we conclude that Z(β) < ∞; thus the abscissa of convergence is at most the lim sup in (A.43). Conversely, assume that, for some value of β, Z(β) < ∞. Then β > 0 and, since the terms e−βκi decrease in i, we find that i·e−βκi ≤ Z(β) for every i, i.e. βκi ≥ log i − log Z(β). Hence lim sup log i/κi ≤ β. As this holds whenever Z(β) < ∞, the lim sup in (A.43) is at most the abscissa of convergence, and (A.43) follows.  ☐
Remark
If no special ordering on A is given, the abscissa of convergence can be expressed analytically via the density function Ω : ℝ → ℕ0 (ℕ0 = ℕ ∪ {0}) which is defined by
\Omega(t) = \#\{\, a \in A \mid \kappa(a) \leq t \,\}
(# = “number of elements in”). In fact, as follows easily from (A.43),
\gamma = \limsup_{t \to \infty} \frac{\log \Omega(t)}{t}.
We can now introduce the exponential family associated with the code κ. It is the family of distributions (Qβ), with β ranging over all values with Z(β) < ∞, defined by
Q_\beta(a_i) = \frac{e^{-\beta \kappa_i}}{Z(\beta)}; \quad i \in I.
The family of adapted codes, denoted (ρβ), is also of significance. These codes are given by
ρβ(ai) = log Z(β) + βκi;  i ∈ I.
We also need certain approximations to Z, Qβ and ρβ. For convenience we stick to the assumption I = ℕ, κ1 ≤ κ2 ≤ ⋯. We then define Zn, Qn,β and ρn,β by
Z_n(\beta) = \sum_{i=1}^{n} e^{-\beta \kappa_i}; \quad \beta \in \mathbb{R},
Q_{n,\beta}(a_i) = \frac{e^{-\beta \kappa_i}}{Z_n(\beta)}; \quad i \leq n,
\rho_{n,\beta}(a_i) = \log Z_n(\beta) + \beta \kappa_i; \quad i \leq n,
it being understood that supp (Qn,β) = supp (ρn,β) = {1, 2, . . . , n}. Formally, the approximating quantities could be obtained from (non-compact) codes obtained from κ by replacing κi by the value ∞ for i > n.
We are particularly interested in the mean values 〈κ, Qβ〉 and 〈κ, Qn,β〉, and define functions Φ and Φn; n ≥ 1 by
\Phi(\beta) = \langle \kappa, Q_\beta \rangle; \quad Z(\beta) < \infty,
\Phi_n(\beta) = \langle \kappa, Q_{n,\beta} \rangle; \quad \beta \in \mathbb{R}.
Note that Φ(1) = H(P), where P denotes the distribution matching κ, and that
\Phi(\beta) = -\frac{Z'(\beta)}{Z(\beta)} = -\frac{d}{d\beta} \log Z(\beta), \qquad (A.53)
\Phi_n(\beta) = -\frac{Z_n'(\beta)}{Z_n(\beta)} = -\frac{d}{d\beta} \log Z_n(\beta). \qquad (A.54)
Furthermore, Z′ is a Dirichlet series with the same abscissa of convergence as Z, hence Φ(β) is well defined and finite for all β > γ.
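For readers who want to experiment with these objects, the following Python sketch (an illustration of ours, not part of the paper) evaluates the truncated partition function Zn and the means Φn(β) = 〈κ, Qn,β〉 for the assumed example code κi = 2 log(i + 1), whose abscissa of convergence is γ = 1/2; the monotone increase of Φn in n anticipates property (i) of Lemma A.2 below.

```python
import math

def kappa(i):
    # assumed example code lengths (natural logarithms); gamma = 1/2
    return 2.0 * math.log(i + 1)

def Z_n(beta, n):
    return sum(math.exp(-beta * kappa(i)) for i in range(1, n + 1))

def Phi_n(beta, n):
    # <kappa, Q_{n,beta}> for the truncated exponential family member
    num = sum(kappa(i) * math.exp(-beta * kappa(i)) for i in range(1, n + 1))
    return num / Z_n(beta, n)

beta = 0.8   # any beta > gamma = 1/2 gives finite limits
for n in (10, 100, 1_000, 10_000, 100_000):
    print(n, round(Z_n(beta, n), 4), round(Phi_n(beta, n), 4))
# Z_n(beta) increases towards Z(beta), and Phi_n(beta) increases towards
# Phi(beta) = -Z'(beta)/Z(beta), in accordance with (A.53) and Lemma A.2 (i).
```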
Lemma A.2. 
With assumptions and notation as above, the following properties hold:
(i)
Φ1 ≤ Φ2 ≤ . . . ,
(ii)
Φn is strictly decreasing on ℝ (except if κ1 = ⋯ = κn),
(iii)
\lim_{\beta \to \infty} \Phi_n(\beta) = \kappa_1, \qquad \lim_{\beta \to -\infty} \Phi_n(\beta) = \kappa_n,
(iv)
Φ is strictly decreasing on ]γ, ∞[,
(v)
\lim_{\beta \to \infty} \Phi(\beta) = \kappa_1,
(vi)
Φ(γ+) is infinite if and only if −Z′(γ) = ∞,
(vii)
If −Z′(β0) < ∞, then Φn → Φ uniformly on [β0, ∞[,
(viii)
\lim_{n \to \infty} \Phi_n(\gamma) = \Phi(\gamma+), the limit from the right at γ,
(ix)
for every β < γ, \lim_{n \to \infty} \Phi_n(\beta) = \infty.
Proof. 
(i) follows from
\Phi_{n+1}(\beta) - \Phi_n(\beta) = \frac{e^{-\beta \kappa_{n+1}}}{Z_{n+1}(\beta)\, Z_n(\beta)} \sum_{i=1}^{n} (\kappa_{n+1} - \kappa_i)\, e^{-\beta \kappa_i},
(ii) from
\Phi_n'(\beta) = -\frac{1}{Z_n(\beta)^2} \sum_{n \geq i > j \geq 1} (\kappa_i - \kappa_j)^2\, e^{-\beta(\kappa_i + \kappa_j)}
and (iv) from an obvious extension of this formula. Writing Φn in the form
\Phi_n(\beta) = \sum_{i=1}^{n} \frac{\kappa_i}{\sum_{j=1}^{n} e^{\beta(\kappa_i - \kappa_j)}},
we derive the limiting behaviour of (iii) and, with a little care, the limit relation of (v) follows in a similar way.
(vii) follows from (A.53) and (A.54) since the convergences Zn(x) → Z(x) and Z′n(x) → Z′(x) hold uniformly on [β0, ∞[ when −Z′(β0) < ∞. Actually, the uniform convergence is first derived only for intervals of the form [β0, K]. By (i) and (v) it is easy to extend the uniform convergence to [β0, ∞[.
It is now clear that for n ≥ 1 and x > γ, Φn(x) ≤ Φ(x) ≤ Φ(γ+); by continuity of Φn it follows that lim_{n→∞} Φn(γ) ≤ Φ(γ+). On the other hand, for x > γ, lim_{n→∞} Φn(γ) ≥ lim_{n→∞} Φn(x) = Φ(x). We conclude that (viii) holds.
If −Z′(γ) < ∞, then Z(γ) < ∞ and lim_{n→∞} Φn(γ) = −Z′(γ)/Z(γ). By (viii), this shows that Φ(γ+) < ∞. Now assume that −Z′(γ) = ∞. If Z(γ) < ∞, it is easy to see that Φ(γ+) = ∞. If also Z(γ) = ∞, we choose, to a given N ≥ 1, an n0 > N such that
Q_{n,\gamma}(\{1, 2, \ldots, N\}) \leq \frac{1}{2} \quad \text{for } n \geq n_0.
Then, for n ≥ n0,
\Phi_n(\gamma) = \sum_{i=1}^{n} \kappa_i\, Q_{n,\gamma}(i) \geq \kappa_N\, Q_{n,\gamma}(\{N+1, \ldots, n\}) \geq \frac{1}{2}\,\kappa_N.
This shows that Φn(γ) → ∞, hence Φ(γ+) = ∞. We have now proved (vi).
In order to prove (ix), let β < γ and choose, to a given K, an n0 such that κi ≥ K for i ≥ n0. Then, for n ≥ n0,
\Phi_n(\beta) \geq \frac{1}{Z_n(\beta)} \sum_{i=n_0}^{n} \kappa_i\, e^{-\beta \kappa_i} \geq \frac{K}{Z_n(\beta)} \sum_{i=n_0}^{n} e^{-\beta \kappa_i}.
As the series Σi≥n0 e−βκi diverges, we see that for n sufficiently large, Φn(β) ≥ K/2, and (ix) follows.
Remark
The formulas for Φ′n and Φ′ can be interpreted more probabilistically. Consider Φ and remark first that when we consider κ as a random variable defined on the discrete probability space (A, Qβ) = (ℕ, Qβ), then Φ(β) is the expectation of this random variable. A simple calculation shows that −Φ′(β) is the variance of this random variable.
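In detail, the following standard calculation (included here only for convenience) makes the remark precise:
\Phi(\beta) = -\frac{Z'(\beta)}{Z(\beta)} = \frac{\sum_i \kappa_i e^{-\beta\kappa_i}}{\sum_i e^{-\beta\kappa_i}} = \langle \kappa, Q_\beta \rangle = E_{Q_\beta}[\kappa],
-\Phi'(\beta) = \frac{Z''(\beta)}{Z(\beta)} - \left( \frac{Z'(\beta)}{Z(\beta)} \right)^{2} = E_{Q_\beta}[\kappa^2] - \big( E_{Q_\beta}[\kappa] \big)^{2} = \mathrm{Var}_{Q_\beta}(\kappa).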

References

1. Antos, A.; Kontoyiannis, I. Convergence Properties of Functional Estimates for Discrete Distributions. To appear.
2. Aubin, J.-P. Optima and Equilibria. An Introduction to Nonlinear Analysis; Springer: Berlin, 1993.
3. Cover, T.M.; Thomas, J.A. Information Theory; Wiley: New York, 1991.
4. Csiszár, I. I-divergence geometry of probability distributions and minimization problems. Ann. Probab. 1975, vol. 3, 146–158.
5. Csiszár, I. Sanov property, generalized I-projection and a conditional limit theorem. Ann. Probab. 1984, vol. 12, 763–793.
6. Gallager, R.G. Information Theory and Reliable Communication; Wiley: New York, 1968.
7. Hardy, G.H.; Riesz, M. The General Theory of Dirichlet's Series; Cambridge University Press: Cambridge, 1915.
8. Harremoës, P. Binomial and Poisson Distributions as Maximum Entropy Distributions. IEEE Trans. Inform. Theory 2001, vol. 47, 2039–2041.
9. Harremoës, P. The Information Topology. In preparation.
10. Haussler, D. A General Minimax Result for Relative Entropy. IEEE Trans. Inform. Theory 1997, vol. 43, 1276–1280.
11. Jaynes, E.T. Information Theory and Statistical Mechanics. Physical Review 1957, vol. 106, 620–630 and vol. 108, 171–190.
12. Jaynes, E.T. Clearing up mysteries – The original goal. In Maximum Entropy and Bayesian Methods; Skilling, J., Ed.; Kluwer: Dordrecht, 1989.
13. http://bayes.wustl.edu [ONLINE] – a web page dedicated to Edwin T. Jaynes, maintained by L. Brethorst.
14. Kapur, J.N. Maximum Entropy Models in Science and Engineering; Wiley: New York, 1993 (first edition 1989).
15. Kazakos, D. Robust Noiseless Source Coding Through a Game Theoretic Approach. IEEE Trans. Inform. Theory 1983, vol. 29, 577–583.
16. Kullback, S. Information Theory and Statistics; Wiley: New York, 1959 (Dover edition 1968).
17. Mandelbrojt, S. Séries de Dirichlet; Gauthier-Villars: Paris, 1969.
18. Mandelbrot, B.B. On the theory of word frequencies and on related Markovian models of discourse. In Structure of Language and its Mathematical Aspects; Jakobson, R., Ed.; American Mathematical Society: New York, 1961.
19. Schroeder, M. Fractals, Chaos, Power Laws; W. H. Freeman: New York, 1991.
20. Topsøe, F. Information Theoretical Optimization Techniques. Kybernetika 1979, vol. 15, 8–27.
21. Topsøe, F. Game theoretical equilibrium, maximum entropy and minimum information discrimination. In Maximum Entropy and Bayesian Methods; Mohammad-Djafari, A., Demoments, G., Eds.; Kluwer: Dordrecht, 1993; pp. 15–23.
22. Topsøe, F. Maximum Entropy versus Minimum Risk and Applications to Some Classical Discrete Distributions. Submitted for publication.
23. Topsøe, F. Basic Concepts, Identities and Inequalities – the Toolkit of Information Theory. Entropy 2001, vol. 3, 162–190. http://www.mdpi.org/entropy/ [ONLINE].
24. Zipf, G.K. Human Behavior and the Principle of Least Effort; Addison-Wesley: Cambridge, 1949.
  • 1The reader may want to note that it is in fact easy to prove directly that if 𝒫 is convex, Hmax(𝒫) finite and P* a consistent distribution with maximum entropy, then the adapted code κ* must be a Nash equilibrium code. To see this, let P0 and P1 be distributions with finite entropy and put Pα = (1 − α)P0 + αP1. Then h(α) = H(Pα); 0 ≤ α ≤ 1 is strictly concave and h’(α) = 〈κα, P1〉 − 〈κα, P0〉 with κα the code adapted to Pα. From this it is easy to derive the stated result. A more complete result is given in Theorem 7.3.
  • *Terminology is close to that adopted by Csiszár, cf. [4], [5], who first developed the concept for closed models. This was later extended, using a different terminology, in Topsøe [20]. In this paper we refrain from a closer study of I-projections and refer the reader to sources just cited.
