Article

Estimating the Mutual Information between Two Discrete, Asymmetric Variables with Limited Samples

by Damián G. Hernández * and Inés Samengo
Department of Medical Physics, Centro Atómico Bariloche and Instituto Balseiro, 8400 San Carlos de Bariloche, Argentina
* Author to whom correspondence should be addressed.
Submission received: 3 May 2019 / Revised: 11 June 2019 / Accepted: 13 June 2019 / Published: 25 June 2019
(This article belongs to the Special Issue Bayesian Inference and Information Theory)

Abstract

Determining the strength of nonlinear, statistical dependencies between two variables is a crucial matter in many research fields. The established measure for quantifying such relations is the mutual information. However, estimating mutual information from limited samples is a challenging task. Since the mutual information is the difference of two entropies, the existing Bayesian estimators of entropy may be used to estimate information. This procedure, however, is still biased in the severely under-sampled regime. Here, we propose an alternative estimator that is applicable to those cases in which the marginal distribution of one of the two variables (the one with minimal entropy) is well sampled. The other variable, as well as the joint and conditional distributions, can be severely undersampled. We obtain a consistent estimator that presents very low bias, outperforming previous methods even when the sampled data contain few coincidences. As with other Bayesian estimators, our proposal focuses on the strength of the interaction between the two variables, without seeking to model the specific way in which they are related. A distinctive property of our method is that the main data statistic determining the amount of mutual information is the inhomogeneity of the conditional distribution of the low-entropy variable in those states in which the large-entropy variable registers coincidences.

1. Introduction

Inferring the statistical dependencies between two variables from a few measured samples is a ubiquitous task in many areas of study. Variables are often linked through nonlinear relations that contain stochastic components. The standard measure employed to quantify the amount of dependency is the mutual information, defined as the reduction in entropy of one of the variables when conditioning on the other variable [1,2]. If the states of the joint distribution are well sampled, the joint probabilities can be estimated by the observed frequencies. Replacing such estimates in the formula for the mutual information yields the so-called "plug-in" estimator of mutual information. However, unless all the states of the joint distribution are sampled extensively, this procedure typically overestimates the mutual information [3,4,5]. In fact, when the number of samples is of the order of the effective number of joint states, even independent variables tend to appear as correlated.
The search for an estimator of mutual information that remains approximately unbiased even with small data samples is an open field of research [6,7,8,9,10,11]. Here, we focus on discrete variables, and assume it is not possible to overcome the scarceness of samples by grouping states that are close according to some metric. In addition to corrections that only work in the limit of large samples [12], the state of the art for this problem corresponds to quasi-Bayesian methods that estimate mutual information indirectly through measures of the entropies of the involved variables [8,13,14]. These approaches have the drawback of not being strictly Bayesian, since the linear combination of two or more Bayesian estimates of entropies does not, in general, yield a Bayesian estimator of the combination of entropies [8]. The concern is not so much to remain within theoretical Bayesian purity, but rather to avoid frameworks that may be unnecessarily biased, or where negative estimates of information may arise.
Here, we propose a new method for estimating mutual information that is valid in the specific case in which there is an asymmetry between the two variables: one variable has a large number of effective states, and the other only a few. Examples of asymmetric problems can be found, for instance, in neuroscience, when assessing the statistical dependence between the activity of a population of neurons and a few stereotyped behaviors, such as "lever to the left" or "lever to the right." If the neural activity is represented as a collection of binary strings, with zeroes and ones tagging the absence or presence of spikes in small time bins, the set of possible responses is huge, and typically remains severely undersampled in electrophysiological experiments. If the behavioral paradigm is formulated in terms of a binary forced choice, the behavioral response only has two possible states.
In our approach, no hypotheses are made about the probability distribution of the large-entropy variable, but the marginal distribution of the low-entropy variable is assumed to be well sampled. The prior is chosen so as to accurately represent the amount of dispersion of the conditional distribution of the low-entropy variable around its marginal distribution. This prior is motivated within the framework of Bayesian estimation of information, and then tested with selected examples in which the hypotheses implied in the prior are fulfilled to varying degrees, from high to low. The examples show that our estimator has very low bias, even in the severely under-sampled regime, where there are few coincidences, that is, when any given state of the large-entropy variable has a low probability of being sampled more than once. The key data statistic that determines the estimated information is the inhomogeneity of the distribution of the low-entropy variable in those states of the high-entropy variable where two or more samples are observed. In addition to providing a practical algorithm to estimate mutual information, our approach sheds light on the way in which just a few samples reveal those specific properties of the underlying joint probability distribution that determine the amount of mutual information.

2. Bayesian Approaches to the Estimation of Entropies

We seek a low-bias estimate of the mutual information between two discrete variables. Let $X$ be a random variable with a large number $k_x$ of effective states $\{x_1, \dots, x_{k_x}\}$ with probabilities $q_x$, and let $Y$ be a variable that varies in a small set $y \in \{y_1, \dots, y_{k_y}\}$, with $k_y \ll k_x$. Given the conditional probabilities $q_{y|x}$, the marginal and joint probabilities are $q_y = \sum_x q_x q_{y|x}$ and $q_{xy} = q_x q_{y|x}$, respectively. The entropy $H(Y)$ is
$$ H(Y) = -\sum_y q_y \log q_y , \qquad (1) $$
and can be interpreted as the average number of well-chosen yes/no questions required to guess the sampled value of $Y$ (when using a logarithm of base two). Since the entropy is a function of the probabilities, and not of the actual values taken by the random variable, here we follow the usual notation in which the expressions $H(Y)$ and $H(\mathbf{q}) = H(q_{y_1}, \dots, q_{y_{k_y}})$ are taken to represent the same concept, defined in Equation (1). The conditional entropy $H(Y|X)$ is the average uncertainty of the variable $Y$ once $X$ is known,
$$ H(Y|X) = -\sum_x q_x \sum_y q_{y|x} \log q_{y|x} = \sum_x q_x H(Y|x) . \qquad (2) $$
The mutual information is the reduction in uncertainty of one variable once we know the other [2],
$$ I(X,Y) = H(X) + H(Y) - H(X,Y) = H(Y) - H(Y|X) . \qquad (3) $$
Our aim is to estimate $I(X,Y)$ when $Y$ is well sampled, but $X$ is severely undersampled, in particular, when the sampled data contain few coincidences in $X$. Hence, for most values $x$, the number of samples $n_x$ is too small to estimate the conditional probability $q_{y|x}$ from the frequencies $n_{xy}/n_x$. The plug-in estimators of entropy, conditional entropy and mutual information are defined as the ones in which all the probabilities appearing in Equations (1)–(3) are estimated naïvely by the frequencies obtained in a given sample. In fact, when $n_x \sim \mathcal{O}(1)$, the plug-in estimator typically underestimates $H(Y|x)$ severely [5], and often leads to an overestimation of $I(X,Y)$.
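To make the plug-in procedure concrete, the following minimal Python sketch (an illustration for this discussion, not part of the code released with this paper) computes the plug-in mutual information in nats from a $k_x$-by-$k_y$ table of joint counts $n_{xy}$:

    import numpy as np

    def plugin_mutual_information(counts):
        # counts: k_x-by-k_y array with the joint counts n_xy
        n = np.asarray(counts, dtype=float)
        N = n.sum()
        p_xy = n / N                                 # naive estimate of q_xy
        p_x = p_xy.sum(axis=1, keepdims=True)        # naive q_x
        p_y = p_xy.sum(axis=0, keepdims=True)        # naive q_y
        nz = p_xy > 0                                # 0 log 0 = 0 convention
        return float(np.sum(p_xy[nz] * np.log(p_xy[nz] / (p_x * p_y)[nz])))

With $N$ of the order of the number of effective joint states, this estimator typically returns a strictly positive value even when $X$ and $Y$ are independent, which is precisely the bias discussed above.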
One possibility is to estimate $H(X)$, $H(Y)$ and $H(X,Y)$ using a Bayesian estimator, and then insert the obtained values in Equation (3) to estimate the mutual information. We now discuss previous approaches to Bayesian estimators for entropy, to later analyze the case of information. For definiteness, we focus on $H(X)$, but the same logic applies to $H(Y)$ or $H(X,Y)$.
We seek the Bayesian estimator of the entropy conditional on having sampled the state $x_i$ a number $n_i$ of times. We denote the collection of sampled data as a vector $\mathbf{n} = (n_1, \dots, n_{k_x})$, and the total number of samples $N = \sum_i n_i$. The Bayesian estimator of the entropy is a function that maps each vector $\mathbf{n}$ onto a non-negative real number $\hat H(\mathbf{n})$, in such a way that the posterior expected error is minimized [15,16]. In the Bayesian framework, the sampled data $\mathbf{n}$ are known, whereas the underlying distribution $\mathbf{q}$ that originated the data is unknown, and must be inferred. In this framework, the estimator of the entropy is the function $\hat H(\mathbf{n})$ for which
$$ \left\langle \left[ \hat H(\mathbf{n}) - H \right]^2 \right\rangle = \int \left[ \hat H(\mathbf{n}) - H(\mathbf{q}) \right]^2 p(\mathbf{q}|\mathbf{n})\, \mathrm{d}\mathbf{q} \qquad (4) $$
is minimized. The Bayesian nature of this framework is embodied in the fact that the integral weighs all candidate distributions $\mathbf{q}$ that could have originated the data with the posterior distribution $p(\mathbf{q}|\mathbf{n})$, which can be related to the multinomial distribution $p(\mathbf{n}|\mathbf{q})$ by means of Bayes' rule. The estimator $\hat H(\mathbf{n})$ can be found by taking the derivative of Equation (4) with respect to $\hat H$ and equating it to zero. The result is [17]
$$ \hat H(\mathbf{n}) = \langle H | \mathbf{n} \rangle = \int H(\mathbf{q})\, p(\mathbf{q}|\mathbf{n})\, \mathrm{d}\mathbf{q} = [p(\mathbf{n})]^{-1} \int H(\mathbf{q})\, p(\mathbf{n}|\mathbf{q})\, p(\mathbf{q})\, \mathrm{d}\mathbf{q} , \qquad (5) $$
where the first equality results from the minimization procedure, and the second from using Bayes' rule. As a result, the Bayesian estimator is the expected value of $H(\mathbf{q})$ under the posterior $p(\mathbf{q}|\mathbf{n})$.
Since $p(\mathbf{n}|\mathbf{q})$ is the multinomial distribution
$$ p(\mathbf{n}|\mathbf{q}) = N! \prod_x \frac{q_x^{n_x}}{n_x!} , \qquad (6) $$
and since the normalization constant $p(\mathbf{n})$ can be calculated from the integral
$$ p(\mathbf{n}) = \int \mathrm{d}\mathbf{q}\; p(\mathbf{n}|\mathbf{q})\, p(\mathbf{q}) , \qquad (7) $$
the entire gist of the Bayesian approach is to find an adequate prior $p(\mathbf{q})$ to plug into Equations (5) and (7). For the sake of analytical tractability, $p(\mathbf{q})$ is often decomposed into a weighted combination of distributions $p(\mathbf{q}|\beta)$ that can be easily integrated, each tagged by one or a few parameters, here generically called $\beta$, that vary within a certain domain,
$$ p(\mathbf{q}) = \int \mathrm{d}\beta\; p(\beta)\, p(\mathbf{q}|\beta) . \qquad (8) $$
The decomposition requires the introduction of a prior $p(\beta)$. Hence, the former search for an adequate prior $p(\mathbf{q})$ is now replaced by the search for an adequate prior $p(\beta)$. The replacement implies an assumption and also a simplification. The family of priors that can be generated by Equation (8) does not necessarily encompass the entire space of possible priors. The decomposition relies on the assumption that the remaining family is still rich enough to make good inferences about the quantity of interest, in this case, the entropy. The simplification stems from the fact that the search for $p(\beta)$ is more restricted than the search for $p(\mathbf{q})$, because the space of alternatives is smaller (the dimensionality of $\mathbf{q}$ is typically high, whereas that of $\beta$ is low). Two popular proposals of Bayesian estimators for entropies are Nemenman–Shafee–Bialek (NSB) [13] and Pitman–Yor Mixture (PYM) [14]. In NSB, the functions $p(\mathbf{q}|\beta)$ are Dirichlet distributions, in which $\beta$ takes the role of a concentration parameter. In PYM, these functions are Pitman–Yor processes, and $\beta$ stands for two parameters: one accounting for the concentration, and the other for the so-called discount. In both cases, the Bayesian machinery implies
$$ \langle H | \mathbf{n} \rangle = \frac{1}{p(\mathbf{n})} \int \mathrm{d}\beta\; p(\beta)\, W(\beta|\mathbf{n}) , \qquad (9) $$
where $W(\beta|\mathbf{n})$ is the weight of each $\beta$ in the estimation of the expected entropy,
$$ W(\beta|\mathbf{n}) = \int \mathrm{d}\mathbf{q}\; H(\mathbf{q})\, p(\mathbf{n}|\mathbf{q})\, p(\mathbf{q}|\beta) . \qquad (10) $$
When choosing the family of functions $p(\mathbf{q}|\beta)$, it is convenient to select them in such a way that the weight $W(\beta|\mathbf{n})$ can be solved analytically. However, this is not the only requirement. In order to calculate the integral in $\beta$, the prior $p(\beta)$ also plays a role. The decomposition of Equation (8) becomes most useful when the arbitrariness in the choice of $p(\beta)$ is less serious than the arbitrariness in the choice of $p(\mathbf{q})$. This assumption is justified when $W(\beta|\mathbf{n})$ is peaked around a specific $\beta$-value, so that, in practice, the shape of $p(\beta)$ hardly has an effect. In these cases, a narrow range of relevant $\beta$-values is selected by the sampled data, and all assumptions about the prior probability outside this range play a minor role. For the choices of the families $p(\mathbf{q}|\beta)$ proposed by NSB and PYM, $W(\beta|\mathbf{n})$ can be calculated analytically, and one can verify that, indeed, a few coincidences in the data suffice for a peak to develop. In both cases, the selected $\beta$ is one for which $p(\mathbf{q}|\beta)$ favours a range of $\mathbf{q}$ values that are compatible with the measured data (as assessed by $p(\mathbf{n}|\mathbf{q})$), and that also produce non-negligible entropies (Equation (10)).
When the chosen Bayesian estimates of the entropies are plugged into Equation (3) to obtain an estimate of the information, each term is dominated by its own preferred $\beta$. Since the different entropies are estimated independently, the $\beta$ values selected by the data to dominate the priors $p(\mathbf{q}_x)$ and $p(\mathbf{q}_y)$ need not be compatible with the ones dominating the priors of the joint or the conditional distributions. As a consequence, the estimation of the mutual information is no longer Bayesian, and can suffer from theoretical issues, such as yielding negative estimates [8].
A first alternative would be to consider an integrable prior containing a single $\beta$ for the joint probability distribution $\mathbf{q}_{xy}$, and then replace $H$ by $I$ in the equations above to calculate $\langle I \rangle$. This procedure was tested by Archer et al. [8], and the results were only good when the collection of $q_{xy}$ values governing the data was well described by a distribution contained in the family of proposed priors $p(\mathbf{q}|\beta)$. The authors concluded that mixtures of Dirichlet priors do not provide a flexible enough family of priors for highly-structured joint distributions, at least for the purpose of estimating mutual information.
To make progress, we note that $I(X,Y)$ can be written as
$$ I(X;Y) = \sum_x q_x \sum_y q_{y|x} \log \frac{q_{y|x}}{q_y} = \sum_x q_x\, D_{\mathrm{KL}}\!\left( \mathbf{q}_{y|x} \,\|\, \mathbf{q}_y \right) , \qquad (11) $$
where $\mathbf{q}_{y|x}$ and $\mathbf{q}_y$ stand for the $k_y$-dimensional vectors $(q_{y_1|x}, \dots, q_{y_{k_y}|x})$ and $(q_{y_1}, \dots, q_{y_{k_y}})$, and $D_{\mathrm{KL}}$ represents the Kullback–Leibler divergence. The average divergence between $\mathbf{q}_{y|x}$ and $\mathbf{q}_y$ captures a notion of spread. Therefore, the mutual information is sensitive not so much to the values of the probabilities $q_{y|x}$, but rather to their degree of scatter around the marginal $\mathbf{q}_y$. The parameters controlling the prior should hence be selected in order to match the width of the distribution of $q_{y|x}$ values, and not so much each individual probability. With this intuition in mind, in this paper, we put forward a new prior for the whole ensemble of conditional probabilities $\mathbf{q}_{y|x}$ obtained for different $x$ values. In this prior, the parameter $\beta$ controls the spread of the conditionals $\mathbf{q}_{y|x}$ around the marginal $\mathbf{q}_y$.

3. A Prior Distribution for the Conditional Entropies

Our approach is valid when the total number of samples $N$ is at least of the order of magnitude of $e^{H(X)}$, since in this regime, some of the $x$ states are expected to be sampled more than once [18,19]. In addition, the marginal distribution $q_y$ must be well sampled. This regime is typically achieved when $X$ has a much larger set of available states than $Y$. In this case, the maximum likelihood estimators $\hat q_y$ of the marginal probabilities $q_y$ can be assumed to be accurate, that is,
$$ \hat q_y = \frac{n_y}{N} \approx q_y , \quad \forall y . \qquad (12) $$
In this paper, we put forward a Dirichlet prior distribution centered at $\hat{\mathbf{q}}_y$, that is,
$$ p(\{\mathbf{q}_{y|x}\}|\beta) = \Gamma(\beta)^{k_x} \prod_x \prod_y \frac{q_{y|x}^{\beta \hat q_y - 1}}{\Gamma(\beta \hat q_y)} = \left[ \frac{\Gamma(\beta)}{\prod_y \Gamma(\beta \hat q_y)} \right]^{k_x} e^{-\beta k_x H(\hat{\mathbf{q}}_y)}\, \frac{\exp\!\left[ -\beta \sum_x \sum_y \hat q_y \left( \log \hat q_y - \log q_{y|x} \right) \right]}{\prod_{x,y} q_{y|x}} \propto \frac{\exp\!\left[ -\beta \sum_x D_{\mathrm{KL}}\!\left( \hat{\mathbf{q}}_y \,\|\, \mathbf{q}_{y|x} \right) \right]}{\prod_{x,y} q_{y|x}} , \qquad (13) $$
where $\{\mathbf{q}_{y|x}\}$ contains the $k_x$ conditional distributions $\mathbf{q}_{y|x}$ corresponding to the different $x$ values. Large $\beta$ values select conditional probabilities close to $\hat{\mathbf{q}}_y$, while small values imply a large spread that pushes the selection towards the border of the $k_y$-simplex.
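The role of $\beta$ in Equation (13) can be visualized by sampling the prior directly: each conditional distribution $\mathbf{q}_{y|x}$ is an independent Dirichlet draw with parameter vector $\beta \hat{\mathbf{q}}_y$. A minimal Python sketch (an illustration, not the code released with this paper):

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_conditionals(q_y_hat, k_x, beta):
        # One Dirichlet(beta * q_y_hat) draw per x state, as in Equation (13).
        # Large beta concentrates the q_{y|x} around q_y_hat; small beta
        # scatters them towards the corners of the k_y-simplex.
        return rng.dirichlet(beta * np.asarray(q_y_hat), size=k_x)

    spread_out = sample_conditionals([0.5, 0.5], k_x=1000, beta=0.3)
    clustered = sample_conditionals([0.5, 0.5], k_x=1000, beta=30.0)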
For the moment, for simplicity, we work with a prior $p(\{\mathbf{q}_{y|x}\})$ defined on the conditional probabilities $\mathbf{q}_{y|x}$, and make no effort to model the prior probability of $\mathbf{q}_x$. In practice, we estimate the values of $q_x$ with the maximum likelihood estimator $\hat q_x = n_x/N$. Since $X$ is assumed to be severely undersampled, this is a poor procedure to estimate $q_x$. Still, the effect on the mutual information turns out to be negligible, since the only role of $q_x$ in Equation (11) is to weigh each of the Kullback–Leibler divergences appearing in the average. If $k_x$ is large, each $D_{\mathrm{KL}}$-value appears in several terms of the sum, rendering the individual value of the accompanying $q_x$ irrelevant: only the sum of all those with the same $D_{\mathrm{KL}}$ matters. In Section 6, we tackle the full problem of making Bayesian inference on both $\mathbf{q}_x$ and $\{\mathbf{q}_{y|x}\}$.
The choice of prior of Equation (13) is inspired by three facts. First, $\beta$ captures the spread of $\mathbf{q}_{y|x}$ around $\mathbf{q}_y$, as implied by the Kullback–Leibler divergence in Equation (13). Admittedly, this divergence is not exactly the one governing the mutual information (Equation (11)), since $\mathbf{q}_{y|x}$ and $\mathbf{q}_y$ are swapped. Yet, it is still a measure of spread. The exchange, as well as the denominator in Equation (13), were introduced for the sake of the second fact, namely, analytical tractability. The third fact regards the emergence of a single relevant $\beta$ when the sampled data begin to register coincidences. If we follow the Bayesian rationale of the previous section, now replacing the entropy by the mutual information, we can again define a weight $W(\beta|\mathbf{n})$ for the parameter $\beta$,
$$ W(\beta|\mathbf{n}) = \int \prod_x \mathrm{d}\mathbf{q}_{y|x}\; I\!\left( \hat{\mathbf{q}}_x, \{\mathbf{q}_{y|x}\} \right) p\!\left( \mathbf{n} \,|\, \hat{\mathbf{q}}_x, \{\mathbf{q}_{y|x}\} \right) p\!\left( \{\mathbf{q}_{y|x}\} \,|\, \beta \right) = p(\beta|\mathbf{n})\, F(\beta, \mathbf{n}) , $$
where $F(\beta, \mathbf{n})$ can be obtained analytically, and is a well-behaved function of its arguments, whereas
$$ p(\beta|\mathbf{n}) = \frac{p(\beta)\, p(\mathbf{n}|\beta)}{p(\mathbf{n})} = \frac{p(\beta)}{p(\mathbf{n})} \int \prod_x \mathrm{d}\mathbf{q}_{y|x}\; p\!\left( \mathbf{n} \,|\, \hat{\mathbf{q}}_x, \{\mathbf{q}_{y|x}\} \right) p\!\left( \{\mathbf{q}_{y|x}\} \,|\, \beta \right) = \frac{p(\beta)}{p(\mathbf{n})} \prod_x \frac{\Gamma(\beta)}{\Gamma(n_x+\beta)} \prod_{y=1}^{k_y} \frac{\Gamma(n_{xy}+\beta \hat q_y)}{\Gamma(\beta \hat q_y)} . \qquad (14) $$
For each $x$, the vector $\mathbf{q}_{y|x}$ varies in a $k_y$-dimensional simplex. For $p(\mathbf{n}|\hat{\mathbf{q}}_x, \{\mathbf{q}_{y|x}\})$ we take the multinomial
$$ p\!\left( \mathbf{n} \,|\, \hat{\mathbf{q}}_x, \{\mathbf{q}_{y|x}\} \right) = N! \prod_{x,y} \frac{\left[ \hat q_x\, q_{y|x} \right]^{n_{xy}}}{n_{xy}!} . \qquad (15) $$
The important point here is that the ratio of Gamma functions in Equation (14) develops a peak in $\beta$ as soon as the collected data register a few coincidences in $x$. Hence, with few samples, the prior proposed in Equation (13) renders the choice of $p(\beta)$ inconsequential.
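The $\beta$-dependent factor of Equation (14) is straightforward to evaluate; the following minimal Python sketch (assuming SciPy is available; an illustration, not the released code) computes $\log p(\mathbf{n}|\beta)$ up to $\beta$-independent terms from the table of joint counts:

    import numpy as np
    from scipy.special import gammaln

    def log_marginal_likelihood(counts, beta):
        # log p(n | beta) up to beta-independent constants (Equation (14)).
        n = np.asarray(counts, dtype=float)
        N = n.sum()
        n_x = n.sum(axis=1)              # counts per x state
        q_y = n.sum(axis=0) / N          # well-sampled marginal of Y
        return float(np.sum(gammaln(beta) - gammaln(n_x + beta))
                     + np.sum(gammaln(n + beta * q_y) - gammaln(beta * q_y)))

States $x$ sampled only once contribute a $\beta$-independent term, so, as stated above, the coincidences are what shape this function and select the dominant $\beta$.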
Assuming that the marginal probability of $Y$ is well sampled, the entropy $H(Y)$ is well approximated by the plug-in estimator $\hat H(Y) = -\sum_y (n_y/N) \log(n_y/N)$. For each $\beta$, the expected posterior information can be calculated analytically,
$$ \langle I | \mathbf{n}, \beta \rangle = \hat H(Y) - \sum_x \frac{n_x}{N} \left[ \psi_0(\beta + n_x + 1) - \sum_y \frac{\beta \hat q_y + n_{xy}}{\beta + n_x}\, \psi_0(\beta \hat q_y + n_{xy} + 1) \right] , \qquad (16) $$
where $\psi_0$ is the digamma function. A code implementing the estimate of Equation (16) is publicly available at the site mentioned in the Supplementary Materials below.
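For concreteness, a minimal Python sketch of Equation (16) (again an illustration under the assumptions of this section, not the publicly available code mentioned above; it assumes SciPy):

    import numpy as np
    from scipy.special import digamma

    def posterior_information(counts, beta):
        # <I | n, beta> of Equation (16), in nats.
        # counts: k_x-by-k_y array with the joint counts n_xy.
        n = np.asarray(counts, dtype=float)
        N = n.sum()
        n_x = n.sum(axis=1)
        q_y = n.sum(axis=0) / N                        # plug-in marginal of Y
        H_y = -np.sum(q_y[q_y > 0] * np.log(q_y[q_y > 0]))
        w = (beta * q_y + n) / (beta + n_x[:, None])   # posterior weights per (x, y)
        H_cond = digamma(beta + n_x + 1) - np.sum(w * digamma(beta * q_y + n + 1), axis=1)
        return float(H_y - np.sum(n_x / N * H_cond))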
When the system is well sampled, $n_{xy} \gg 1$, so the effect of $\beta$ becomes negligible, the digamma functions tend to logarithms, and the frequencies match the probabilities. In this limit, Equation (16) coincides with the plug-in estimator, which is consistent [20]. As a consequence, our estimator is also consistent. The rest of the paper focuses on the case in which the marginal probability of $X$ is severely undersampled.

4. A Closer Look at the Case of a Symmetric and Binary Y-Variable

In this section, we analyze the case of a binary $Y$-variable, which, for simplicity, is assumed to be symmetric, that is, $q_{y=0} = q_{y=1} = \tfrac{1}{2}$, such that $H(Y) = \log 2$ nats. In this case, the Dirichlet prior in each factor of Equation (13) becomes a Beta distribution,
$$ p(q_{1|x}|\beta) = \frac{\Gamma(\beta)}{\Gamma(\beta/2)^2} \left[ q_{1|x}\,(1 - q_{1|x}) \right]^{\beta/2 - 1} , \qquad (17) $$
and $p(\{q_{1|x}\}|\beta) = \prod_x p(q_{1|x}|\beta)$. Large values of $\beta$ mostly select conditional probabilities $q_{y|x}$ close to $\tfrac{1}{2}$. If all conditional probabilities are similar, and similar to the marginal, the mutual information is low, since the probability of sampling a specific $y$ value hardly depends on $x$. Instead, small values of $\beta$ produce conditional probabilities $q_{y|x}$ near the borders ($q_{y|x} \approx 0$ or $q_{y|x} \approx 1$). In this case, $q_{y|x}$ is strongly dependent on $x$ (see Figure 1b), so the mutual information is large. The expected prior mutual information $\langle I | \mathbf{n}=0, \beta \rangle$ can be calculated using the analytical approach developed in [14,17],
$$ \langle I | \mathbf{n}=0, \beta \rangle = \log 2 - \psi_0(\beta + 1) + \psi_0(\beta/2 + 1) . \qquad (18) $$
The prior information is a slowly-varying function of the order of magnitude of $\beta$, namely of $\log \beta$. Therefore, if a uniform prior in information is desired, it suffices to choose a prior on $\log \beta$ such that $p(\log \beta) \propto \left| \partial_{\log\beta} \langle I | \mathbf{n}=0, \beta \rangle \right|$,
$$ p(\log \beta) = \frac{\beta/2}{\log 2} \left| 2\,\psi_1(\beta + 1) - \psi_1(\beta/2 + 1) \right| . \qquad (19) $$
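Both Equations (18) and (19) are immediate to evaluate numerically; a minimal Python sketch (assuming SciPy), useful when sampling or integrating over $\beta$:

    import numpy as np
    from scipy.special import digamma, polygamma

    def prior_information(beta):
        # <I | n = 0, beta> for a symmetric binary Y (Equation (18)), in nats.
        return np.log(2.0) - digamma(beta + 1.0) + digamma(beta / 2.0 + 1.0)

    def prior_log_beta(beta):
        # p(log beta) that is flat in the prior information (Equation (19)).
        return (beta / 2.0) / np.log(2.0) * np.abs(
            2.0 * polygamma(1, beta + 1.0) - polygamma(1, beta / 2.0 + 1.0))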
When $k_y = 2$, the expected posterior information (Equation (16)) becomes
$$ \langle I | \mathbf{n}, \beta \rangle = \hat H(Y) - \sum_x \frac{n_x}{N} \left[ \psi_0(\beta + n_x + 1) - \sum_{y \in \{0,1\}} \frac{\beta/2 + n_{xy}}{\beta + n_x}\, \psi_0(\beta/2 + n_{xy} + 1) \right] . \qquad (20) $$
The marginal likelihood of the data given $\beta$ is also analytically tractable. The likelihood is binomial for each $x$, so
$$ p(\mathbf{n}|\beta) = \prod_x \int_0^1 \mathrm{d}q_{1|x}\; p(n_{x1}, n_{x0} | q_{1|x})\, p(q_{1|x}|\beta) \propto \prod_x \frac{\Gamma(n_{x1} + \beta/2)\, \Gamma(n_{x0} + \beta/2)\, \Gamma(\beta)}{\Gamma(n_x + \beta)\, \Gamma(\beta/2)^2} . \qquad (21) $$
The posterior for $\beta$ can be obtained by adding a prior $p(\beta)$, as $p(\beta|\mathbf{n}) \propto p(\mathbf{n}|\beta)\, p(\beta)$. The role of the prior becomes relevant when the number of coincidences is too low for the posterior to develop a peak (see below).
In order to gain intuition about the statistical dependence between variables with few samples, we here highlight the specific aspects of the data that influence the estimator of Equation (20). Grouping together the terms of Equation (21) that are equal, the marginal likelihood can be rewritten in terms of the multiplicities $m_{nn'}$, that is, the number of states $x$ with specific occurrences $\{n_{x1} = n, n_{x0} = n'\}$ or $\{n_{x1} = n', n_{x0} = n\}$,
$$ \log p(\mathbf{n}|\beta) = \sum_{n \ge n'} m_{nn'} \log \left[ \frac{\Gamma(n + \beta/2)\, \Gamma(n' + \beta/2)\, \Gamma(\beta)}{\Gamma(n + n' + \beta)\, \Gamma(\beta/2)^2} \right] = \sum_{n \ge n'} m_{nn'} \log p_{nn'}(\beta) , \qquad (22) $$
where
$$ p_{10}(\beta) = \frac{\beta/2}{\beta} = \frac{1}{2} , \quad p_{11}(\beta) = \frac{(\beta/2)^2}{\beta(\beta+1)} = \frac{\beta}{4(\beta+1)} , \quad p_{20}(\beta) = \frac{(\beta/2)(\beta/2+1)}{\beta(\beta+1)} = \frac{\beta/2+1}{2(\beta+1)} , \quad p_{nn'}(\beta) = \frac{(\beta/2)(\beta/2+1)\cdots(\beta/2+n-1)\; (\beta/2)(\beta/2+1)\cdots(\beta/2+n'-1)}{\beta(\beta+1)\cdots(\beta+n+n'-1)} . \qquad (23) $$
The posterior for $\beta$ is independent of states $x$ with just a single count, as $p_{10}(\beta) = \text{constant}$. Only states $x$ with coincidences matter. In order to see how the sampled data favor a particular $\beta$, we search for the $\beta$-value (denoted as $\beta^*$) that maximizes $\log p(\mathbf{n}|\beta)$ in the particular case where at most two samples coincide on the same $x$, obtaining
$$ \frac{\partial}{\partial\beta} \log p(\mathbf{n}|\beta) \bigg|_{\beta^*} = \frac{m_{11}}{\beta^*} + \frac{m_{20}}{\beta^* + 2} - \frac{m_{11} + m_{20}}{\beta^* + 1} = 0 . \qquad (24) $$
Denoting the fraction of 2-count states that have one count for each $y$-value as $f_{11} = m_{11}/(m_{11}+m_{20})$, Equation (24) implies that the likelihood $p(\mathbf{n}|\beta)$ is maximized at
$$ \beta^* = \begin{cases} +\infty & \text{when } f_{11} \ge \tfrac{1}{2} , \\[4pt] \dfrac{f_{11}}{\tfrac{1}{2} - f_{11}} & \text{when } f_{11} < \tfrac{1}{2} . \end{cases} \qquad (25) $$
If the $y$-values are independent of $x$, we expect $f_{11} \approx \tfrac{1}{2}$. This case corresponds to a large $\beta^*$ and, consequently, to little information. On the other side, for small $f_{11}$, the parameter $\beta^*$ is also small and the information grows. Moreover, the width of $p(\mathbf{n}|\beta)$ is also modulated by $f_{11}$. When the information is large, the peak around $\beta^*$ is narrow. Low information values, instead, require more evidence, and they come about with more uncertainty around $\beta^*$.
In Equation (24), the data only intervene through $m_{11}$ and $m_{20}$, which characterize the degree of asymmetry of the $y$-values throughout the different $x$-states. This asymmetry, hence, constitutes a sufficient statistic for $\beta$. If a prior $p(\beta)$ is included, the $\beta$ that maximizes the posterior $p(\beta|\mathbf{n})$ may shift, but the effect becomes negligible as the number of coincidences grows.
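The selection rule of Equation (25) is simple enough to state as code; a minimal sketch (with hypothetical helper names, not taken from the paper's released code):

    def beta_star(m11, m20):
        # Beta maximizing p(n | beta) when no x state holds more than two samples
        # (Equation (25)); m11 and m20 count the two kinds of pairwise coincidences.
        f11 = m11 / (m11 + m20)
        if f11 >= 0.5:
            return float('inf')      # data compatible with independent variables
        return f11 / (0.5 - f11)

    print(beta_star(10, 30))          # uneven coincidences: small beta, large information
    print(beta_star(20, 20))          # balanced coincidences: beta -> infinity, information -> 0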
We now discuss the role of the selected $\beta$ in the estimation of information, Equation (20), focusing on the conditional entropy $H_{Y|X}(\beta)$. First, in terms of the multiplicities, the conditional entropy can be rewritten as
$$ H_{Y|X}(\beta) = \sum_r f_r \sum_{n+n'=r} f_{nn'}\, H_{nn'}(\beta) , \qquad (26) $$
where $f_r$ is the fraction of the $N$ samples that fall in states $x$ with $r$ counts, and $f_{nn'}$ is the fraction of all states $x$ with $n+n'$ counts of which $n$ correspond to one $y$-value (whichever) and $n'$ to the other. Finally, $H_{nn'}(\beta)$ is the estimation of the entropy of a binary variable after $\{n, n'\}$ samples,
$$ H_{nn'}(\beta) = \psi_0(n + n' + \beta + 1) - \frac{(n + \beta/2)\, \psi_0(n + \beta/2 + 1) + (n' + \beta/2)\, \psi_0(n' + \beta/2 + 1)}{n + n' + \beta} . \qquad (27) $$
A priori, $\langle I | \mathbf{n}=0, \beta \rangle = \log 2 - H_{00}(\beta)$, as in Figure 1d. Surprisingly, from the property $\psi_0(z+1) = \psi_0(z) + 1/z$, it turns out that $H_{00} = H_{10}$ (in fact, $H_{nn} = H_{(n+1)n}$). Hence, if only a single count breaks the symmetry between the two $y$-values, there is no effect on the conditional entropy. This is a reasonable result, since a single extra count is no evidence of an imbalance between the underlying conditional probabilities; it is just the natural consequence of comparing the counts falling on an even number of states (two) when taking an odd number of samples. Expanding the first terms of the conditional entropy,
$$ H_{Y|X} = f_1 H_{00}(\beta) + f_2 f_{11} H_{11}(\beta) + f_2 f_{20} H_{20}(\beta) + \cdots \qquad (28) $$
In the severely under-sampled regime, these first terms are the most important ones. Moreover, when evaluating these terms at $\beta = \beta^*$ (Equation (25)), the conditional entropy simplifies into
$$ H_{Y|X} = \begin{cases} \log 2 & \text{when } f_{11} \ge \tfrac{1}{2} , \\[4pt] (f_1 + f_2)\, H_{00}(\beta^*) + \mathcal{O}(f_3) & \text{when } f_{11} < \tfrac{1}{2} . \end{cases} \qquad (29) $$
Typically, $(f_1 + f_2)$ takes most of the weight, so the estimation is close to the prior $H_{00}$ evaluated at the $\beta$-value that maximizes the marginal likelihood (or the posterior).
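The identity $H_{00} = H_{10}$ can be verified directly from Equation (27); a minimal Python sketch (assuming SciPy):

    from scipy.special import digamma

    def H_nn(n, m, beta):
        # Posterior entropy of a binary Y after (n, m) counts at fixed beta
        # (Equation (27)), in nats.
        tot = n + m + beta
        return digamma(tot + 1) - ((n + beta / 2) * digamma(n + beta / 2 + 1)
                                   + (m + beta / 2) * digamma(m + beta / 2 + 1)) / tot

    # A single unmatched count carries no evidence of imbalance: H_00 equals H_10.
    assert abs(H_nn(0, 0, 1.7) - H_nn(1, 0, 1.7)) < 1e-12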
When dealing with few samples, it is important to have not just a good estimate of the mutual information, but also a confidence interval. Even a small amount of information may be relevant, if the evidence attests that it is strictly above zero. The theory developed here also allows us to estimate the posterior variance of the mutual information, as shown in Appendix A. The variance (Equation (A4)) is shown to be inversely proportional to the number of states $k_x$, thereby implying that our method benefits from a large number of available states of $X$, even if undersampled.
If an estimator, such as ours, is guaranteed to provide a non-negative estimate for all possible sets of sampled data, it cannot be free of bias, at least not if the samples are generated by an independent distribution ($q_{y|x} = q_y$), for which the true information vanishes. In Appendix B, we show that, in this specific case, the bias decreases with the square root of the number of coincidences. This number may be large, even in the severely undersampled regime, if $\exp[H(X)/2] < N \ll \exp[H(X)]$. If the distribution $q_x$ is approximately uniform, the number of coincidences is proportional to the square of the total number of samples, so the bias is inversely proportional to $N$.

5. Testing the Estimator

We now analyze the performance of our estimator in three examples where the number of samples $N$ is below or of the order of the effective size of the system, $\exp(H_{XY})$. In this regime, most observed $x$-states have very few samples. In each example, we define the probabilities $q_x$ and $q_{1|x}$ with three different criteria, giving rise to collections of probabilities that can be described with varying success by the prior proposed in this paper, Equation (17). Once the probabilities are defined, the true value $I_{XY}$ of the mutual information can be calculated, and compared to the one estimated by our method in 50 different sets of samples $\mathbf{n}$ of the measured data. We present the estimate obtained with $\langle I|\mathbf{n},\beta\rangle$ from Equation (20), evaluated at the $\beta$ that maximizes the marginal likelihood $p(\mathbf{n}|\beta)$. We did not observe any improvement when integrating over the whole posterior $p(\beta|\mathbf{n})$ with the prior $p(\beta)$ of Equation (19), except when $m_{20}$ or $m_{10}$ were of order 1. This fact implies the existence of a well-defined peak in the marginal likelihood.
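In practice, the procedure just described reduces to a one-dimensional maximization over $\log\beta$ followed by an evaluation of Equation (20). A minimal sketch that reuses the log_marginal_likelihood and posterior_information functions sketched in the previous sections (again an illustration, not the released code):

    import numpy as np
    from scipy.optimize import minimize_scalar

    def estimate_information(counts):
        # Pick the beta that maximizes log p(n | beta), then evaluate <I | n, beta*>.
        res = minimize_scalar(lambda lb: -log_marginal_likelihood(counts, np.exp(lb)),
                              bounds=(-10.0, 10.0), method='bounded')
        beta_hat = float(np.exp(res.x))
        return posterior_information(counts, beta_hat), beta_hat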
In Figure 2, the performance of our estimator is compared with that of three other methods widely employed in the literature: the plug-in estimator, NSB and PYM. In addition, two other estimators were evaluated, but are not shown in the figure to avoid clutter: the one of reference [21], which is a particularly convenient case of the Schuermann family of estimators [22], and the one of reference [23], extensively used in ecology. Their estimates fell between the plug-in estimator and NSB in the first case, and between NSB and PYM in the second case.
In the first example (Figure 2a,d), the probabilities $q_x$ are obtained by sampling a Pitman–Yor distribution with concentration parameter $\alpha = 50$ and tail parameter $d = 0.55$ (as described in [14]). These values correspond to a Pitman–Yor prior with a heavy tail. The conditional probabilities $q_{y|x}$ are defined by sampling a symmetric Beta distribution $q_{y|x} \sim \mathrm{Beta}(\beta/2, \beta/2)$, as in Equation (17). In Figure 2a, we use $\beta = 2.3$. Once the joint probability $q_{xy}$ is defined, 50 sets of samples $\mathbf{n}$ are generated. The effective size of the system is $\exp(H_{XY}) \approx 800$. We compare our estimator to the plug-in estimator (Plug-in), NSB and PYM when applied to $H_X$ and $H_{XY}$ (all methods coincide in the estimation of $H_Y$). Our estimator has a low bias, even when the number of samples per effective state is as low as $N/e^{H_{XY}} = 0.15$. The variance is larger than that of the plug-in estimator, comparable to NSB and smaller than PYM. All the other methods (Plug-in, NSB and, to a lesser extent, PYM) overestimate the mutual information. In Figure 2d, the performance of the estimators is also tested for different values of the exact mutual information $I_{XY}$, which we explore by varying $\beta \in (0.04, 14)$. For each $\beta$, the conditional probabilities $q_{1|x}$ are sampled once. Each vector $\mathbf{n}$ contains $N = 500$ samples, and $\mathbf{n}$ is sampled 50 times. Our estimates have very low bias, even as the mutual information goes to zero, namely, for independent variables.
Secondly, we analyze an example where the statistical relation between $X$ and $Y$ is remarkably intricate (example inspired by [25]), which underscores the fact that making inferences about the mutual information does not require inferences on the joint probability distribution. The variable $x$ is a binary vector of dimension 12. Each component represents the presence or absence of one of a maximum of 12 delta functions equally spaced on the surface of a sphere. There are $2^{12}$ possible $x$ vectors, and they are governed by a uniform prior probability: $q_x = 2^{-12}$. The conditional probabilities are generated in such a way that they are invariant under rotations of the sphere, that is, $q_{y|x} = q_{y|R(x)}$, where $R$ is a rotation. Using a spherical harmonic representation [24], the frequency components $\pi(f(x))$ of the spherical spectrum are obtained, where $f(x)$ is the combination of deltas. The conditional probabilities $q_{y|x}$ are defined as a sigmoid function of $(\pi_0\, \pi_1\, \pi_2)$. The offset of the sigmoid is chosen such that $q_{y=1} \approx 0.5$, and the gain such that $I_{XY} \approx 0.5$ nats. In this example, unlike the Dirichlet prior implied by our estimator, $p(q_{y|x})$ has some level of roughness (inset in Figure 2b), due to peaks coming from the invariant classes in $\{x_1, \dots, x_{2^{12}}\}$. Hence, the example does not truly fit the hypotheses of our method. With these settings, the effective size of the system is $\exp(H_{XY}) \approx 5000$. Our estimator has little bias (Figure 2b,e), even with $N/e^{H_{XY}} = 0.2$ samples per effective state. In this regime, around 80% of the samples fall on $x$ states that occur only once ($f_1 \approx 0.8$), 19% on states that occur twice, and 1% on states with three counts, or maybe four. As mentioned above, in such cases, the estimated value of $I_{XY}$ is very similar to the one that would be obtained by evaluating the prior information $\langle I|\mathbf{n}=0,\beta\rangle$ of Equation (18) at the $\beta$ that maximizes the marginal likelihood $p(\mathbf{n}|\beta)$, which in turn is mainly determined by $f_{11}$. In Figure 2e, the estimator is tested with a fixed number of samples $N = 2000$ for different values of the mutual information, which we explore by varying the gain of the sigmoid. The bias of the estimate is small in the entire range of mutual informations.
In the third place, we consider an example where the conditional probabilities are generated from a distribution that is poorly approximated by a Dirichlet prior. The conditional probabilities are sampled from three Dirac deltas, as $q_{y|x} \sim 0.5\,\delta(q - \tfrac{1}{2}) + 0.25\,\delta(q - q_0) + 0.25\,\delta(q - 1 + q_0)$, with $q_0 = 0.1$. The delta placed at $q = \tfrac{1}{2}$ could be approximated by a Dirichlet prior with a large $\beta$, while the other two deltas could be approximated by a small $\beta$, but there is no single value of $\beta$ that can approximate all three deltas at the same time. The $x$ states are generated as $\mathrm{Bernoulli}(p = 0.05)$ binary vectors of dimension $D = 40$, while the conditional probabilities $q_{1|x}$ depend on the parity of the sum of the components of the vector $x$. When the sum is even, we assign $q_{y|x} = \tfrac{1}{2}$, and when it is odd, we assign $q_{y|x} = q_0$ or $q_{y|x} = 1 - q_0$, both options with equal probability. Although in this case our method has some degree of bias, it still performs well in relation to the other approaches (see Figure 2c,f). The marginal likelihood $p(\mathbf{n}|\beta)$ contains a single peak at an intermediate value of $\beta$, coinciding with none of the deltas in $p(q_{1|x})$, but still capturing the right value of the mutual information. As in the previous examples, we also test the performance of the estimator for different values of the mutual information, varying in this case the value of $q_0$ (with $N = 2000$). Our method performs acceptably for all values of the mutual information. The other methods, instead, are challenged more severely, probably because a large fraction of the $x$ states have a very low probability, and are therefore difficult to sample. Those states, however, provide a crucial contribution to the relative weight of each of the three values of $q_{1|x}$. PYM, in particular, sometimes produces a negative estimate of $I_{XY}$, even on average.
Finally, we check numerically the accuracy of the analytically predicted mean posterior information (Equation (20)) and variance (Equation (A4)) in the severely under-sampled regime. The test is performed in a different spirit than the numerical evaluations of Figure 2. There, averages were taken over multiple samples of the vector $\mathbf{n}$, for a fixed choice of the probabilities $q_x$ and $q_{y|x}$. The averages of Equations (20) and (A4), however, must be interpreted in the Bayesian sense. The brackets in $\langle I|\mathbf{n}\rangle$ and $\langle H^2_{Y|X}\rangle$ represent averages taken for a fixed data sample $\mathbf{n}$, and unknown underlying probability distributions $q_x$ and $q_{y|x}$. We generate many such distributions with $\mathbf{q}_x \sim \mathrm{DP}(\alpha)$ (a Dirichlet process with concentration parameter $\alpha$) and $q_{y|x} \sim \mathrm{Beta}(\beta/2, \beta/2)$. A total of 13,500 distributions $q_{xy}$ are produced, with $\log\beta$ sampled from Equation (19), and three equiprobable values of $\alpha \in \{e^4, e^5, e^6\}$. For each of these distributions, we generate five sets of just $N = 40$ samples, thereby constructing a list of $5 \times 13{,}500$ cases, each case characterized by specific values of $\alpha$, $\beta$, $\mathbf{q}_x$, $\{q_{y|x}\}$, $I(\mathbf{q}_x, \{q_{y|x}\})$, $\mathbf{n}$, $\langle I|\mathbf{n}\rangle$ and $\sigma^2(I|\mathbf{n})$. Following the Bayesian rationale, we partition this list into classes, each class containing all the cases that end up in the same set of multiplicities $\{m_{nn'}\}$, for example, $\{m_{10} = 36, m_{20} = 2\}$. For each of the 100 most frequent sets of multiplicities (which together cover 70% of all the cases), we calculate the mean and the standard deviation of the mutual information $I(\mathbf{q}_x, \{q_{y|x}\})$ of the corresponding class, and compare them with our predicted estimates $\langle I\,|\,\{m_{nn'}\}\rangle$ and $\sigma^2(I\,|\,\{m_{nn'}\})$, using the prior $p(\log\beta)$ of Equation (19). Figure 3 shows a good match between the numerical (y-axis) and analytical (x-axis) averages that define the mean information (panel a) and the standard deviation (panel b). The small departures from the diagonal stem from the fact that the analytical average contains all the possible $\mathbf{q}_x$ and $\{q_{y|x}\}$, even if some of them are highly improbable for a given set of multiplicities. The numerical average, instead, includes only the subset of the 13,500 explored cases that produced the tested multiplicity. All the depicted subsets contained many cases but, still, they remained unavoidably far from the infinite collection covered by the theoretical result.
We have also tested cases where Y takes more than two values, and where the marginal distribution q y is not uniform, observing similar performance of our estimator.

6. A Prior Distribution for the Large Entropy Variable

The prior considered so far did not model the probability $q_x$ of the large-entropy variable $X$. Throughout the calculation, the probabilities $q_x$ were approximated by the maximum likelihood estimator $\hat q_x = n_x/N$. Here, we justify such a procedure by demonstrating that proper Bayesian inference on $q_x$ hardly modifies the estimation of the mutual information. To that end, we replace the prior of Equation (13) by another prior that depends on both $\mathbf{q}_x$ and $\{\mathbf{q}_{y|x}\}$.
The simplest hypothesis is to assume that the prior $p(\mathbf{q}_x, \{\mathbf{q}_{y|x}\})$ factorizes as $p(\mathbf{q}_x)\, p(\{\mathbf{q}_{y|x}\})$, implying that the marginal probabilities $q_x$ are independent of the conditional probabilities $q_{y|x}$. We propose $\mathbf{q}_x \sim \mathrm{DP}(\alpha)$, so that the marginal probabilities $q_x$ are drawn from a Dirichlet process with concentration parameter $\alpha$, associated with the total number of pseudo-counts. After integrating over $\mathbf{q}_x$ and $\mathbf{q}_{y|x}$, the mean posterior mutual information for fixed hyper-parameters $\beta$ and $\alpha$ is
$$ \langle I | \mathbf{n}, \alpha, \beta \rangle = \frac{N}{N+\alpha} \left\{ \hat H(Y) - \sum_{x,\, n_x > 0} \frac{n_x}{N} \left[ \psi_0(\beta + n_x + 1) - \sum_y \frac{\beta \hat q_y + n_{xy}}{\beta + n_x}\, \psi_0(\beta \hat q_y + n_{xy} + 1) \right] \right\} + \frac{\alpha}{N+\alpha} \left[ \hat H(Y) - \psi_0(\beta + 1) + \sum_y \hat q_y\, \psi_0(\beta \hat q_y + 1) \right] . \qquad (30) $$
Before including the prior $p(\mathbf{q}_x)$, in the severely undersampled regime, the mean posterior information was approximately equal to the prior information evaluated at the best $\beta$ (Equation (16)). The new calculation (Equation (30)) contains the prior information explicitly, weighted by $\alpha/(N+\alpha)$, that is, the ratio between the number of pseudo-counts from the prior and the total number of counts. Thereby, the role of the non-observed (but still inferred) states is established.
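A minimal Python sketch of Equation (30), reusing the posterior_information function sketched in Section 3 (an illustration under the same assumptions, not the released code):

    import numpy as np
    from scipy.special import digamma

    def posterior_information_dp(counts, alpha, beta):
        # <I | n, alpha, beta> of Equation (30): the observed-states term is weighted
        # by N/(N+alpha), the prior information by alpha/(N+alpha).
        n = np.asarray(counts, dtype=float)
        N = n.sum()
        q_y = n.sum(axis=0) / N
        H_y = -np.sum(q_y[q_y > 0] * np.log(q_y[q_y > 0]))
        observed = posterior_information(counts, beta)   # Equation (16)
        prior = H_y - digamma(beta + 1) + np.sum(q_y * digamma(beta * q_y + 1))
        return (N / (N + alpha)) * observed + (alpha / (N + alpha)) * prior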
The independence assumed between $\mathbf{q}_x$ and $\{\mathbf{q}_{y|x}\}$ implies that
$$ p(\mathbf{n} | \alpha, \beta) = p(\mathbf{n}_x | \alpha)\, p(\mathbf{n} | \beta) . \qquad (31) $$
The inference over $\alpha$ coincides with that of PYM with the tail parameter set to $d = 0$ [14], since
$$ p(\mathbf{n}_x | \alpha) \propto \frac{\Gamma(1+\alpha)}{\Gamma(N+\alpha)}\, \alpha^{k_1 - 1} , \qquad (32) $$
where $k_1 = \sum_{x,\, n_x > 0} 1$ is the number of states $x$ with at least one sample. With few coincidences in $x$, $p(\mathbf{n}_x|\alpha)$ develops a peak around a single $\alpha$-value that represents the number of effective states. Compared to the present Bayesian approach, maximum likelihood underestimates the number of effective states (or the entropy) of $x$. Since the expected variance of the mutual information decreases with the square root of the number of effective states, the Bayesian variance is reduced with respect to that of the plug-in estimator.
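The α-dependent part of the evidence is equally easy to evaluate; a minimal sketch (assuming SciPy) of the profile whose peak selects the effective number of states:

    import numpy as np
    from scipy.special import gammaln

    def log_evidence_alpha(n_x, alpha):
        # log p(n_x | alpha) up to alpha-independent terms (Equation (32)).
        n_x = np.asarray(n_x, dtype=float)
        N = n_x.sum()
        k1 = np.count_nonzero(n_x)       # states x observed at least once
        return float(gammaln(1 + alpha) - gammaln(N + alpha) + (k1 - 1) * np.log(alpha))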

7. Discussion

In this work, we propose a novel estimator for the mutual information of discrete variables $X$ and $Y$, which is adequate when $X$ has a much larger number of effective states than $Y$. If this condition does not hold, the performance of the estimator breaks down. Our proposal is inspired by the Bayesian framework, in which the core issue boils down to finding an adequate prior. The more the prior is dictated by the data, the less we need to assume from outside. Equation (11) implies that the mutual information $I(X,Y)$ is the spread of the conditional probabilities of one of the variables (for example, $\mathbf{q}_{y|x}$, but the same holds for $\mathbf{q}_{x|y}$) around the corresponding marginal ($\mathbf{q}_y$ or $\mathbf{q}_x$, respectively). This observation inspires the choice of our prior (Equation (13)), which is designed to capture the same idea and, in addition, to be analytically tractable. We choose to work with a hyper-parameter $\beta$ that regulates the scatter of $\mathbf{q}_{y|x}$ around $\mathbf{q}_y$, and not the scatter of $\mathbf{q}_{x|y}$ around $\mathbf{q}_x$, because the asymmetry in the number of available states of the two variables makes the $\beta$ of the first option (and not the second) strongly modulated by the data, through the emergence of a peak in $p(\mathbf{n}|\beta)$.
Although our proposal is inspired by previous Bayesian studies, the procedure described here is not strictly Bayesian, since our prior (Equation (13)) requires the knowledge of $\hat{\mathbf{q}}_y$, which depends on the sampled data. However, in the limit in which $q_y$ is well sampled, this is a pardonable crime, since $\hat{\mathbf{q}}_y$ is defined by a negligible fraction of the measured data. Still, Bayesian purists should employ a two-step procedure to define their priors. First, they should perform Bayesian inference on the center of the Dirichlet distribution of Equation (13) by maximizing $p(\mathbf{q}_y|\mathbf{n})$, and then replace $\hat{\mathbf{q}}_y$ in Equation (13) by the inferred $\mathbf{q}_y$. For all practical purposes, however, if the conditions of validity of our method hold, both procedures lead to the same result.
By confining the set of possible priors $p(\{\mathbf{q}_{y|x}\})$ to those generated by Equation (13), we relinquish all aspiration to model the prior of, say, $\mathbf{q}_{y|x=3}$, in terms of the observed frequencies at $x = 3$. In fact, the preferred $\beta$-value is totally blind to the specific $x$-value of each sampled datum. Only the number of $x$-values containing different counts of each $y$-value matters. Hence, the estimation of mutual information is performed without attempting to infer the specific way the variables $X$ and $Y$ are related, a property named equitability [26], which is also shared by other methods [8,13,14]. Although this fact may be seen as a disadvantage, deriving a functional relation between the variables can actually bias the inference on mutual information [26]. Moreover, fitting a relation is unreasonable in the severely under-sampled regime, in which not all $x$-states are observed, most sampled $x$-states contain a single count, and few $x$-states contain more than two counts, at least without a strong assumption about the probability space. In fact, if the space of probabilities of the involved variables has some known structure or smoothness condition, other approaches that estimate information by fitting the relation first may perform well [9,10,11]. Part of the approach developed here could be extended to continuous variables or spaces with a given metric. This extension is left for future work.
In the explored examples, our estimator had a small bias, even in the severely under-sampled regime, and it outperformed other estimators discussed in the literature. More importantly, the second and third examples of Section 5 showed that it even worked when the collection of true conditional probabilities $q_{y|x}$ was not contained in the family of priors generated by $p(q_{y|x}|\beta)$. In these cases, the success of the method relies on the peaked nature of the posterior distribution $p(\beta|\mathbf{n})$. Even if the selected $p(q_{y|x}|\beta)$ provides a poor description of the actual collection of probabilities, the dominant $\beta$ captures the right value of mutual information. This is precisely an instantiation of the equitability property discussed above.
When the number of samples $N$ is much larger than the total number of available states $k_x \times k_y$, our estimator of mutual information tends to the plug-in estimator, which is known to be consistent [20]. Consequently, our estimator is also consistent. By construction, $\langle I|\mathbf{n}\rangle$ is equal to the mutual information averaged over all distributions $\mathbf{q}_x$ and $\mathbf{q}_{y|x}$ compatible with the measured data, each weighted by its posterior probability. As such, it can never produce a negative result, which is a desirable property. The downside is that the estimator must be positively biased, at least when the true information vanishes. The derivation in Appendix B shows that, when the number of samples $N \in [e^{H(X)/2}, e^{H(X)}]$, this bias is inversely proportional to the square root of the number of pairwise coincidences or, when $q_x$ is fairly uniform, to the total number of samples. Moreover, the factor of proportionality is significantly smaller than the one obtained in the bound of the bias of other frequentist estimators [3,27]. If the number of samples $N$ grows even further, the bias tends to zero, since the bias of all consistent estimators vanishes asymptotically [28].
Our method also provides a transparent way to identify the statistics that matter, out of all the measured data. Quite naturally, the $x$-states that have not been sampled provide no evidence in shaping $p(\beta|\mathbf{n})$, as indicated by Equation (14), and only shift the posterior information towards the prior (Equation (30)). More interestingly, the $x$-states with just a single count are also irrelevant, both in shaping $p(\beta|\mathbf{n})$ and in modifying the posterior information away from the prior. These states are unable to provide evidence about the existence of either flat or skewed conditional probabilities $q_{y|x}$. Only the states $x$ that have been sampled at least twice contribute to the formation of a peak in $p(\beta|\mathbf{n})$, and to deviating the posterior information away from the prior.
Our method can also be extended to generalizations of mutual information designed to characterize the degree of interdependence of more than two variables [29,30,31,32,33,34,35,36,37,38,39,40,41], as long as all but one of the variables are extensively sampled. The applicability of the method to these measures will be the object of future work, once a certain degree of consensus has been reached regarding their meaning and range of applicability.
Several fields can benefit from the application of our estimator of mutual information. Examples can be found in neuroscience, when studying whether neural activity (a variable with many possible states) correlates with a few selected stimuli or behavioral responses [12,42,43], or in genomics, to understand associations between genes (large-entropy variable) and a few specific phenotypes [44]. The method can also shed light on the development of rate-distortion methods to be employed in situations in which only a few samples are available. In particular, it can be applied within the information bottleneck framework [45,46], aimed at extracting a maximally compressed representation of an input variable, but still preserving those features that are relevant for the prediction of an output variable. The possibility of detecting statistical dependencies with only few samples is of key importance, not just for analyzing data sets, but also to understand how living organisms quickly infer dependencies in their environments and adapt accordingly [47].

8. Conclusions

We have proposed a novel estimator for the mutual information $I(X;Y)$ between two variables, applicable to those cases in which the marginal distribution of one of the variables (the one with minimal entropy) is well sampled. The other variable, as well as the joint and conditional distributions, can be severely undersampled. We obtain a consistent estimator that presents very low bias, outperforming previous methods discussed in the literature. The main data statistic determining the estimated value is the inhomogeneity of the conditional distribution of the low-entropy variable in those states in which the large-entropy variable registers coincidences.

Supplementary Materials

The code with our estimator is available online at https://github.com/dghernandez/info-estimation.

Author Contributions

Conceptualization, D.G.H.; methodology, D.G.H.; software, D.G.H.; validation, D.G.H.; formal analysis, D.G.H. and I.S.; resources, I.S.; data curation, D.G.H.; writing—original draft preparation, D.G.H.; writing—review and editing, D.G.H. and I.S.; visualization, D.G.H.

Funding

This research was funded by CONICET, CNEA, ANPCyT Raíces 2016 grant number 1004.

Acknowledgments

We thank Ilya Nemenman for his fruitful comments and discussions.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Expected Variance for a Symmetric, Binary Y-Variable

The posterior variance of the mutual information is
$$ \sigma^2(I|\mathbf{n}) = \langle I^2 | \mathbf{n} \rangle - \langle I | \mathbf{n} \rangle^2 . \qquad (A1) $$
In the first place, we demonstrate that this quantity is proportional to $k_x^{-1}$, implying that our estimator becomes increasingly accurate as the number of states of the $X$-variable increases. Given that
$$ \langle I | \mathbf{n} \rangle \approx \hat H(Y) - \langle H_{Y|X} \rangle(\mathbf{n}) , \qquad (A2) $$
with
$$ H_{Y|X}(\{q_{1|x}\}) = \sum_x \hat q_x\, H_{Y|x}(q_{1|x}) , \qquad H_{Y|x}(q_{1|x}) = -\left[ q_{1|x} \log(q_{1|x}) + (1 - q_{1|x}) \log(1 - q_{1|x}) \right] , \qquad (A3) $$
it is easy to show that
$$ \sigma^2(I|\mathbf{n}) \approx \sigma^2(H_{Y|X}|\mathbf{n}) = \left\langle H^2_{Y|X}(\{q_{1|x}\}) \right\rangle - \left\langle H_{Y|X}(\{q_{1|x}\}) \right\rangle^2 . \qquad (A4) $$
In other words, if the marginal entropy is well sampled, the variance in the information is mainly due to the variance in the conditional entropy. In turn, $H_{Y|X}$ is defined as an average of $k_x$ terms (Equation (A3)). The independence hypothesis implied in the prior of Equation (13), and in the way the different $q_{1|x}$ and $q_x$ factor out in $p(\mathbf{n}|\mathbf{q}_x, \{\mathbf{q}_{y|x}\})$ (Equation (6)), implies that the different terms of Equation (A3) are all independent of each other. The average of $k_x$ independent terms has a variance proportional to $1/k_x$, so the estimator proposed here becomes increasingly accurate as $k_x$ grows.
We now derive the detailed dependence of $\langle H^2_{Y|X} \rangle - \langle H_{Y|X} \rangle^2$ on the sampled data $\mathbf{n}$. The mean conditional entropy $\langle H_{Y|X} \rangle$ can be written in terms of $\langle H_{Y|x} \rangle(n_{x1}, n_{x0}, \beta)$, that is, of the entropy of the variable $Y$ for a particular state $x$ with $n_x = n_{x0} + n_{x1}$ counts at fixed $\beta$,
$$ \langle H_{Y|X} \rangle(\mathbf{n}) = \int p(\beta|\mathbf{n})\, \mathrm{d}\beta \sum_x \frac{n_x}{N}\, \langle H_{Y|x} \rangle(n_{x0}, n_{x1}, \beta) , \qquad \langle H_{Y|x} \rangle(n_{x0}, n_{x1}, \beta) = \psi_0(\beta + n_x + 1) - \sum_{y \in \{0,1\}} \frac{\beta/2 + n_{xy}}{\beta + n_x}\, \psi_0(\beta/2 + n_{xy} + 1) . \qquad (A5) $$
Similarly, for the second moment,
$$ \langle H^2_{Y|X} \rangle(\mathbf{n}) = \int \mathrm{d}\beta\, p(\beta|\mathbf{n}) \int \prod_x \mathrm{d}q_{1|x}\, p(q_{1|x}|n_{x0}, n_{x1}, \beta) \left[ \sum_x \hat q_x H_{Y|x}(q_{1|x}) \right]^2 = \int \mathrm{d}\beta\, p(\beta|\mathbf{n}) \left[ \sum_{x \neq x'} \hat q_x \hat q_{x'}\, \langle H_{Y|x} \rangle(n_{x0}, n_{x1}, \beta)\, \langle H_{Y|x'} \rangle(n_{x'0}, n_{x'1}, \beta) + \sum_x \hat q_x^2\, \langle H^2_{Y|x} \rangle(n_{x0}, n_{x1}, \beta) \right] = \int \mathrm{d}\beta\, p(\beta|\mathbf{n}) \left[ \langle H_{Y|X} \rangle^2(\mathbf{n}, \beta) + \sum_x \hat q_x^2\, \mathrm{Var}[H_{Y|x}](n_{x0}, n_{x1}, \beta) \right] . \qquad (A6) $$
In turn, $\mathrm{Var}[H_{Y|x}](n_{x0}, n_{x1}, \beta) = \langle H^2_{Y|x} \rangle(n_{x0}, n_{x1}, \beta) - \langle H_{Y|x} \rangle(n_{x0}, n_{x1}, \beta)^2$, where the first moment $\langle H_{Y|x} \rangle(n_{x0}, n_{x1}, \beta)$ is given in Equation (A5). The second moment is [14]
$$ \langle H^2_{Y|x} \rangle(n_{x1}, n_{x0}, \beta) = \int_0^1 \mathrm{d}q_{1|x}\, p(q_{1|x}|n_{x0}, n_{x1}, \beta)\, H^2_{Y|x}(q_{1|x}) = \int_0^1 \mathrm{d}q_{1|x}\, p(q_{1|x}|n_{x0}, n_{x1}, \beta) \left[ q_{1|x}\log(q_{1|x}) + (1 - q_{1|x})\log(1 - q_{1|x}) \right]^2 = 2\, \frac{(\beta/2 + n_{x0})(\beta/2 + n_{x1})}{(\beta + n_x + 1)(\beta + n_x)}\, F\!\left( \tfrac{\beta}{2} + n_{x0}, \tfrac{\beta}{2} + n_{x1} \right) + \sum_{y \in \{0,1\}} \frac{(\beta/2 + n_{xy})(\beta/2 + n_{xy} + 1)}{(\beta + n_x + 1)(\beta + n_x)}\, G\!\left( \tfrac{\beta}{2} + n_{xy}, \beta + n_x \right) . \qquad (A7) $$
In this equation,
$$ F(z_0, z_1) = -\psi_1(z_0 + z_1 + 2) + \prod_{i \in \{0,1\}} \left[ \psi_0(z_i + 1) - \psi_0(z_0 + z_1 + 2) \right] , \qquad G(z_i, z) = \left[ \psi_0(z_i + 2) - \psi_0(z + 2) \right]^2 + \psi_1(z_i + 2) - \psi_1(z + 2) , \qquad (A8) $$
and $\psi_1(z)$ is the first polygamma (trigamma) function.
Replacing the obtained expressions in Equation (A4) yields the variance of the estimated information. The two terms of Equation (A6) represent the two sources of uncertainty of the conditional entropy: the uncertainty in $\beta$ (first term), manifest in the width of $p(\beta|\mathbf{n})$, and the uncertainty in the conditional entropies for a fixed $\beta$ (second term), manifest in the width of $p(\{q_{1|x}\}|\beta)$. As the number of samples decreases, the uncertainty in $\beta$ becomes the dominant term.
The approximate symbol in Equation (A2) stems from the fact that $H(Y)$ is assumed to be well approximated by the plug-in estimator. The error in the marginal entropy $H_Y$ is thereby neglected, and the error in the mutual information is assumed to derive from the uncertainty in the conditional entropy $H_{Y|X}$ (Equation (A4)) alone. The assumption is well justified in the context explored in this paper, that is, when $\exp[H(X)] \gg \exp[H(Y)]$.

Appendix B. Bounding the Bias for Independent Variables

The core idea of a Bayesian approach is that the measured data $\mathbf{n}$ are given, and the underlying probabilities are unknown. Bayesian estimators are hence constructed by averaging over all possible inferred probabilities. In this context, the calculation of the bias of an estimator is somewhat at odds with the core idea, since the bias is obtained by fixing the underlying probability, and averaging over the possible outcomes of the sampled data. Perhaps for this reason, the bias of Bayesian estimators is hardly ever calculated. Yet, if a Bayesian estimator is regarded as a function of $\mathbf{n}$, it is always possible to average the estimate over all the possible sampled data for a fixed underlying distribution. This average is performed in this appendix for the estimator $\langle I|\mathbf{n},\beta\rangle$, evaluated at the $\beta$-value that maximizes $p(\mathbf{n}|\beta)$, for a binary and symmetric $Y$ variable, in the severely undersampled regime, and when the variables $X$ and $Y$ are independent (that is, $q_{y|x} = q_y = \tfrac{1}{2}$, $\forall x$). In this regime, the true information vanishes, so the calculated average equals the bias.
We work with a number of samples $N$ for which the previous estimators begin to fail, but ours is still useful, that is, $\exp[H(X)/2] < N \ll \exp[H(X)]$. In this regime, the majority of the $x$-states are sampled at most once, and most of the $x$-states with coincidences are sampled only twice. As an approximation, we assume that no $x$-states are sampled more than twice. Since the $\beta$-value that maximizes $p(\mathbf{n}|\beta)$ cannot be calculated in the absence of coincidences (see Equation (25)), we assume that there is at least one coincidence. The total number $m$ of $x$-states with two-fold coincidences is decomposed as $m = m_{20} + m_{11}$, where $m_{20}$ is the number of states sampled with two equal $y$-values (whichever), and $m_{11} = m - m_{20}$ is the number of states with two different $y$-values.
We define the variable $z = m_{20}/m - \tfrac{1}{2}$. Equation (25) implies that the optimal $\beta$ is $\beta^*(z) = (\tfrac{1}{2} - z)/z$ if $z > 0$, and $\beta^* \to \infty$ if $z \le 0$. In turn, assuming that no more than two-fold coincidences occur (see Equations (24) and (29)), $I(\mathbf{n}, \beta^*)$ can be expressed as
$$ I(\mathbf{n}, \beta^*) \approx \log 2 - H_{00}(\beta^*(z)) = \begin{cases} \log 2 - \psi_0\!\left( \dfrac{1}{2z} \right) + \psi_0\!\left( \dfrac{1/2 + z}{2z} \right) & \text{if } z > 0 , \\[6pt] 0 & \text{if } z \le 0 . \end{cases} \qquad (A9) $$
In order to calculate the bias in the non-informative case, we take $q_{y|x} = \tfrac{1}{2}$, $\forall x$ (a distribution with $I(X;Y) = 0$). The bias $B$ is obtained by averaging $I(\mathbf{n}, \beta^*)$ over $p(\mathbf{n}|\mathbf{q}_x, \{\mathbf{q}_{y|x}\})$,
$$ B = \sum_{\mathbf{n}} I(\mathbf{n}, \beta^*)\, p(\mathbf{n}|\mathbf{q}_x, \{\mathbf{q}_{y|x}\}) = \sum_{m > 0} p(m|\mathbf{q}_x) \sum_{m_{20} \le m} p(m_{20}|m, \{\mathbf{q}_{y|x}\})\, I(\mathbf{n}, \beta^*) . \qquad (A10) $$
The distribution $p(m_{20}|m, \{\mathbf{q}_{y|x}\})$ is a $\mathrm{Binomial}(\tfrac{1}{2}, m)$. For non-negligible values of $m$, the binomial can be approximated by a Gaussian that we write in terms of the variable $z$, such that $p(z|m, \{\mathbf{q}_{y|x}\}) \approx \mathcal{N}(0, \tfrac{1}{4m})$. Then, Equation (A10) becomes
$$ B \approx \sum_{m > 0} p(m|\mathbf{q}_x) \int_0^{\infty} \mathrm{d}z\; \frac{e^{-2mz^2}}{\sqrt{\pi/(2m)}}\, I(\mathbf{n}, \beta^*) . \qquad (A11) $$
The function I ( n , β ) of Equation (A9) is continuous in z, and starts growing from zero when z = 0 ( m 20 = m / 2 ). As the main contributions to the integral of Equation (A11) come from small values of z, we expand I ( n , β ) in a Taylor series, obtaining I ( n , β ) z + z 2 2 z 4 + O ( z 5 ) . The integral of Equation (A11) can be calculated using the moments of the Half Gaussian, yielding
$$ B \;\approx\; \frac{1}{2\sqrt{2\pi}}\; \overline{m^{-1/2}} \;+\; \frac{1}{8}\; \overline{m^{-1}} \;-\; \frac{3}{16}\; \overline{m^{-2}}, \qquad \mathrm{(A12)} $$
where the averages indicated with a horizontal line are weighted by $p(m \mid q_x)$. We have checked in simulations that, in the regime $\exp[H(X)/2] < N \ll \exp[H(X)]$, the dependence of the bias on $m$ scales as predicted by Equation (A12). It is worth noticing that, when $m$ is as small as 20 coincidences, the bias is already as low as $B \approx 0.05$ nats. If the probabilities $q_x$ are approximately homogeneous, the number of coincidences scales as $m \approx N^2 / (2 \exp[H(X)])$ [18]. In that case, the leading term in the bias is
$$ B \;\approx\; \frac{1}{2} \sqrt{\frac{e^{H(X)}}{\pi}}\; \frac{1}{N}. \qquad \mathrm{(A13)} $$
For comparison, the bias of the entropy estimator $H_G$ proposed by Grassberger [21] is bounded by $\Delta H_G < 0.14\, e^{H(X)}/N$. Although the scaling in $N$ is the same, the accompanying factor is much larger than in our estimator, especially in the undersampled regime considered here, where $\exp[H(X)/2] < N \ll \exp[H(X)]$.
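The numbers quoted above can also be checked directly: for independent variables, $m_{20}$ follows a binomial distribution with parameters $m$ and $\tfrac{1}{2}$, so the bias at fixed $m$ can be estimated by averaging the estimator over many draws of $m_{20}$ and comparing with Equation (A12). A minimal sketch of this check (reusing the illustrative info_from_coincidences function defined above) is:

```python
# Sketch: Monte Carlo check of the bias formula (A12) at fixed m, for
# independent X and Y (true mutual information equal to zero).
import numpy as np

rng = np.random.default_rng(0)

def bias_monte_carlo(m, n_trials=100_000):
    m20 = rng.binomial(m, 0.5, size=n_trials)          # same-y coincidences
    estimates = [info_from_coincidences(k, m - k)[0] for k in m20]
    return np.mean(estimates)

def bias_formula(m):
    return (1.0 / (2.0 * np.sqrt(2.0 * np.pi))) * m**-0.5 \
        + m**-1.0 / 8.0 - 3.0 * m**-2.0 / 16.0

m = 20
print(bias_monte_carlo(m))   # ~ 0.050 nats
print(bias_formula(m))       # ~ 0.050 nats
```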

References

1. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423.
2. Cover, T.M.; Thomas, J.A. Elements of Information Theory; John Wiley & Sons: Hoboken, NJ, USA, 2012.
3. Panzeri, S.; Treves, A. Analytical estimates of limited sampling biases in different information measures. Network Comput. Neural Syst. 1996, 7, 87–107.
4. Samengo, I. Estimating probabilities from experimental frequencies. Phys. Rev. E 2002, 65, 046124.
5. Paninski, L. Estimation of entropy and mutual information. Neural Comput. 2003, 15, 1191–1253.
6. Kraskov, A.; Stögbauer, H.; Grassberger, P. Estimating mutual information. Phys. Rev. E 2004, 69, 066138.
7. Montemurro, M.A.; Senatore, R.; Panzeri, S. Tight data-robust bounds to mutual information combining shuffling and model selection techniques. Neural Comput. 2007, 19, 2913–2957.
8. Archer, E.; Park, I.M.; Pillow, J.W. Bayesian and quasi-Bayesian estimators for mutual information from discrete data. Entropy 2013, 15, 1738–1755.
9. Kolchinsky, A.; Tracey, B.D. Estimating mixture entropy with pairwise distances. Entropy 2017, 19, 361.
10. Belghazi, I.; Rajeswar, S.; Baratin, A.; Hjelm, R.D.; Courville, A. MINE: Mutual information neural estimation. arXiv 2018, arXiv:1801.04062.
11. Safaai, H.; Onken, A.; Harvey, C.D.; Panzeri, S. Information estimation using nonparametric copulas. Phys. Rev. E 2018, 98, 053302.
12. Strong, S.P.; Koberle, R.; van Steveninck, R.R.D.R.; Bialek, W. Entropy and information in neural spike trains. Phys. Rev. Lett. 1998, 80, 197.
13. Nemenman, I.; Bialek, W.; van Steveninck, R.D.R. Entropy and information in neural spike trains: Progress on the sampling problem. Phys. Rev. E 2004, 69, 056111.
14. Archer, E.; Park, I.M.; Pillow, J.W. Bayesian entropy estimation for countable discrete distributions. J. Mach. Learn. Res. 2014, 15, 2833–2868.
15. Wolpert, D.H.; DeDeo, S. Estimating functions of distributions defined over spaces of unknown size. Entropy 2013, 15, 4668–4699.
16. Jaynes, E.T. Probability Theory: The Logic of Science; Cambridge University Press: Cambridge, UK, 2007.
17. Wolpert, D.H.; Wolf, D.R. Estimating functions of probability distributions from a finite set of samples. Phys. Rev. E 1995, 52, 6841.
18. Ma, S.-K. Calculation of entropy from data of motion. J. Stat. Phys. 1981, 26, 221–240.
19. Nemenman, I. Coincidences and estimation of entropies of random variables with large cardinalities. Entropy 2011, 13, 2013–2023.
20. Antos, A.; Kontoyiannis, I. Convergence properties of functional estimates for discrete distributions. Random Struct. Algorithms 2001, 19, 163–193.
21. Grassberger, P. Entropy estimates from insufficient samplings. arXiv 2003, arXiv:physics/0307138.
22. Schürmann, T. Bias analysis in entropy estimation. J. Phys. A Math. Gen. 2004, 37, L295–L300.
23. Chao, A.; Wang, Y.; Jost, L. Entropy and the species accumulation curve: A novel entropy estimator via discovery rates of new species. Methods Ecol. Evol. 2013, 4, 1091–1100.
24. Kazhdan, M.; Funkhouser, T.; Rusinkiewicz, S. Rotation invariant spherical harmonic representation of 3D shape descriptors. Symp. Geom. Process. 2003, 6, 156–164.
25. Shwartz-Ziv, R.; Tishby, N. Opening the black box of deep neural networks via information. arXiv 2017, arXiv:1703.00810.
26. Kinney, J.B.; Atwal, G.S. Equitability, mutual information, and the maximal information coefficient. Proc. Natl. Acad. Sci. USA 2014, 111, 3354–3359.
27. Grassberger, P. Entropy estimates from insufficient samples. Archive 2001, 412, 787.
28. Barlow, R.J. Statistics: A Guide to the Use of Statistical Methods in the Physical Sciences; John Wiley & Sons: Hoboken, NJ, USA, 1993.
29. Amari, S.I. Information geometry on hierarchy of probability distributions. IEEE Trans. Inf. Theory 2001, 47, 1701–1711.
30. Panzeri, S.; Schultz, S.R.; Treves, A.; Rolls, E.T. Correlations and the encoding of information in the nervous system. Proc. R. Soc. B Biol. Sci. 1999, 266, 1001–1012.
31. Panzeri, S.; Schultz, S.R. Temporal correlations and neural spike train entropy. Phys. Rev. Lett. 2001, 86, 5823–5826.
32. Panzeri, S.; Schultz, S.R. A unified approach to the study of temporal, correlational, and rate coding. Neural Comput. 2001, 13, 1311–1349.
33. Pola, G.; Hoffmann, K.P.; Panzeri, S. An exact method to quantify the information transmitted by different mechanisms of correlational coding. Network 2003, 14, 35–60.
34. Hernández, D.G.; Zanette, D.H.; Samengo, I. Information-theoretical analysis of the statistical dependencies between three variables: Applications to written language. Phys. Rev. E 2015, 92, 022813.
35. Williams, P.L.; Beer, R.D. Nonnegative decomposition of multivariate information. arXiv 2010, arXiv:1004.2515.
36. Harder, M.; Salge, C.; Polani, D. Bivariate measure of redundant information. Phys. Rev. E 2013, 87, 012130.
37. Timme, N.; Alford, W.; Flecker, B.; Beggs, J.M. Synergy, redundancy, and multivariate information measures: An experimentalist’s perspective. J. Comput. Neurosci. 2013, 36, 119–140.
38. Griffith, V.; Koch, C. Quantifying synergistic mutual information. In Guided Self-Organization: Inception; Prokopenko, M., Ed.; Springer: Berlin/Heidelberg, Germany, 2014; pp. 159–190.
39. Bertschinger, N.; Rauh, J.; Olbrich, E.; Jost, J.; Ay, N. Quantifying unique information. Entropy 2014, 16, 2161–2183.
40. Ince, R.A.A. Measuring multivariate redundant information with pointwise common change in surprisal. Entropy 2017, 19, 318.
41. Yu, S.; Giraldo, S.; Gonzalo, L.; Jenssen, R.; Príncipe, J.C. Multivariate extension of matrix-based Rényi’s α-order entropy functional. arXiv 2018, arXiv:1808.07912.
42. Tang, C.; Chehayeb, D.; Srivastava, K.; Nemenman, I.; Sober, S.J. Millisecond-scale motor encoding in a cortical vocal area. PLoS Biol. 2014, 12, e1002018.
43. Maidana Capitán, M.; Kropff, E.; Samengo, I. Information-theoretical analysis of the neural code in the rodent temporal lobe. Entropy 2018, 20, 571.
44. Butte, A.J.; Tamayo, P.; Slonim, D.; Golub, T.R.; Kohane, I.S. Discovering functional relationships between RNA expression and chemotherapeutic susceptibility using relevance networks. Proc. Natl. Acad. Sci. USA 2000, 97, 12182–12186.
45. Tishby, N.; Pereira, F.C.; Bialek, W. The information bottleneck method. arXiv 2000, arXiv:physics/0004057.
46. Still, S.; Bialek, W. How many clusters? An information-theoretic perspective. Neural Comput. 2004, 16, 2483–2506.
47. Fairhall, A.L.; Lewen, G.D.; Bialek, W.; van Steveninck, R.R.D.R. Efficiency and ambiguity in an adaptive neural code. Nature 2001, 412, 787.
Figure 1. A scheme of our method to estimate the mutual information between two variables $X$ and $Y$. (a) We collect a few samples of a variable $x$ with a large number of effective states $x_1, x_2, \ldots$, each sample characterized by a binary variable $y$ (the two values represented in white and gray). We consider different hypotheses about the strength with which the probability of each $y$-value varies with $x$; (b) one possibility is that the conditional probability of each of the two $y$-values hardly varies with $x$. This situation is modeled by assuming that the different $q_{y|x}$ are random variables governed by a Beta distribution with a large hyper-parameter $\beta_1$; (c) on the other hand, the conditional probability $q_{y|x}$ could vary strongly with $x$. This situation is modeled by a Beta distribution with a small hyper-parameter $\beta_2$; (d) as $\beta$ varies, so does the prior mutual information (Equation (18)). This prior is obtained by averaging all the $I(\{q_{1|x}\})$ values obtained from the different sets of conditional distributions $\{q_{1|x}\}$ that can be generated when sampling the prior $p(\{q_{1|x}\} \mid \beta)$ of Equation (17). The shaded area around the solid line illustrates these fluctuations in $I(\{q_{1|x}\})$ when $k_x = 50$.
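The prior mean information of panel (d) can be approximated numerically by sampling sets $\{q_{1|x}\}$ from the Beta prior and averaging the resulting information values. A minimal sketch (assuming, for simplicity, a homogeneous marginal $q_x = 1/k_x$, which need not coincide with the choice made in the figure) is:

```python
# Sketch: prior mean mutual information as a function of beta, obtained by
# sampling conditionals q(1|x) ~ Beta(beta/2, beta/2) for k_x equally likely
# x-states and averaging the resulting information (in nats).
import numpy as np

rng = np.random.default_rng(1)

def binary_entropy(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def prior_information(beta, k_x=50, n_draws=1000):
    q1 = rng.beta(beta / 2, beta / 2, size=(n_draws, k_x))   # one set {q(1|x)} per draw
    h_y = binary_entropy(q1.mean(axis=1))                    # marginal entropy H(Y)
    h_y_given_x = binary_entropy(q1).mean(axis=1)            # conditional entropy H(Y|X)
    return (h_y - h_y_given_x).mean()

for beta in (0.1, 1.0, 10.0):
    print(beta, prior_information(beta))   # the prior information decreases with beta
```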
Figure 2. Comparison of the performance of four different estimators of $I_{XY}$: the plug-in estimator, the NSB estimator used in the limit of infinite states, the PYM estimator, and our estimator $\langle I \mid \mathbf{n}, \beta \rangle$ (Equation (20)), calculated with the $\beta^*$ that maximizes the marginal likelihood $p(\mathbf{n} \mid \beta)$ (Equation (21)). The curves represent the average over 50 different data sets $\mathbf{n}$, with the standard deviation displayed as a colored area around the mean. (a) estimates of mutual information as a function of the total number of samples $N$, when the values of $q_{1|x}$ are generated under the hypotheses of our method (Equation (17)). We sample once the marginal probabilities $q_x \sim \mathrm{PY}(d = 0.55, \alpha = 50)$ (as described in [14]), as well as the conditionals $q_{y|x} \sim \mathrm{Beta}(\beta/2, \beta/2)$ with $\beta = 2.3$. The effective size of the system is $\exp(H_{XY}) \approx 800$. The exact value of $I_{XY}$ is shown as a horizontal dashed line; (b) estimates of mutual information, for data sets in which the conditional probabilities have spherical symmetry. $X$, a binary variable of dimension 12, corresponds to the presence of 12 delta functions equally spaced on a sphere ($q_x = 2^{-12}$ for all $x$). We generate the conditional probabilities so that they are invariant under rotations of the sphere, namely $q_{y|x} = q_{y|R(x)}$, where $R$ is a rotation. To this aim, we set $q_{y|x}$ as a sigmoid function of a combination of frequency components ($\pi_0, \pi_1, \pi_2$) of the spherical spectrum [24]. The effective size of the system is $\exp(H_{XY}) \approx 5000$; (c) estimates of mutual information, for a conditional distribution far away from our hypotheses. The $x$-states are generated as Bernoulli ($p = 0.05$) binary vectors of dimension $D = 40$, while the conditional probabilities depend on the parity of the sum of the components of the vector. When the sum is even we set $q_{y|x} = 1/2$, and when it is odd, $q_{y|x}$ is generated by sampling a mixture of two deltas of equal weight, $q_{y|x} \sim [\delta(q - q_0) + \delta(q - 1 + q_0)]/2$, with $q_0 = 0.1$. The resulting distribution of $q_{y|x}$-values contains three peaks and, therefore, cannot be described with a Dirichlet distribution. The effective size of the system is $\exp(H_{XY}) \approx 4000$; (d) bias in the estimation as a function of the value of the mutual information. Settings as in (a), but fixing $N = 500$ and changing $\beta \in (0.04, 14)$ in the conditional; (e) bias in the estimation as a function of the value of the mutual information. Settings as in (b), but fixing $N = 2000$ and changing the gain of the sigmoid in the conditional; (f) bias in the estimation as a function of the value of the mutual information. Settings as in (c), but fixing $N = 2000$ and changing $q_0 \in (0.01, 0.4)$ in the conditional.
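The upward bias of the plug-in estimator in the undersampled settings of panels (a)–(c) is easy to reproduce. The sketch below is a simplified stand-in for the setting of panel (a): it replaces the Pitman–Yor marginal by a uniform marginal over $k_x$ states, draws the conditionals from $\mathrm{Beta}(\beta/2, \beta/2)$, and compares the plug-in estimate obtained from $N$ samples with the true information of the constructed joint distribution.

```python
# Sketch: with N samples spread over many x-states, the plug-in estimator
# of the mutual information greatly exceeds the true value.
import numpy as np

rng = np.random.default_rng(2)

def binary_entropy(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

k_x, beta, N = 2000, 2.3, 500
q_x = np.full(k_x, 1.0 / k_x)                 # uniform marginal (stand-in for PY)
q1 = rng.beta(beta / 2, beta / 2, size=k_x)   # conditional probabilities q(y=1|x)

# True information of the constructed joint distribution (in nats)
true_info = binary_entropy(np.dot(q_x, q1)) - np.dot(q_x, binary_entropy(q1))

# Draw N samples and compute the plug-in estimate from the empirical joint
x = rng.choice(k_x, size=N, p=q_x)
y = (rng.random(N) < q1[x]).astype(int)
joint = np.zeros((k_x, 2))
np.add.at(joint, (x, y), 1.0)
joint /= N
p_x, p_y = joint.sum(axis=1), joint.sum(axis=0)
nonzero = joint > 0
plugin_info = np.sum(joint[nonzero] * np.log(joint[nonzero] / np.outer(p_x, p_y)[nonzero]))

print(true_info, plugin_info)   # the plug-in estimate is typically several times larger
```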
Figure 3. Verification of the accuracy of the analytically predicted mean posterior information (Equation (20)) and variance (Equation (A4)) in the severely under-sampled regime. A collection of 13,500 distributions $q_{xy}$ is constructed by sampling $q_x \sim \mathrm{DP}(\alpha)$ and $q_{y|x} \sim \mathrm{Beta}(\beta/2, \beta/2)$, with $\alpha$ varying in the set $\{e^4, e^5, e^6\}$ and $\log \beta$ drawn from Equation (19). Each distribution $q_{xy}$ has an associated $I_{XY}(q_{xy})$. From each $q_{xy}$, we take five sets of just $N = 40$ samples. (a) the values of $I(q_{xy})$ are grouped according to the multiplicities $\{m_{n n_1}\}$ produced by the samples, averaged together, and depicted as the $y$ component of each data point. The $x$ component is the analytical result of Equation (20), based on the sampled multiplicities; (b) same analysis for the standard deviation of the information (the square root of the variance calculated in Equation (A4)).
