Article

Quantifying Redundant Information in Predicting a Target Random Variable

Virgil Griffith and Tracey Ho
1 School of Computing, National University of Singapore, Singapore 119077, Singapore
2 Computer Science and Electrical Engineering, Caltech, Pasadena, CA 91125, USA
* Author to whom correspondence should be addressed.
Entropy 2015, 17(7), 4644-4653; https://doi.org/10.3390/e17074644
Submission received: 18 March 2015 / Revised: 24 June 2015 / Accepted: 26 June 2015 / Published: 2 July 2015
(This article belongs to the Special Issue Information Processing in Complex Systems)

Abstract

We consider the problem of defining a measure of redundant information that quantifies how much common information two or more random variables specify about a target random variable. We discuss desired properties of such a measure and propose new measures with some of these properties.

1. Introduction

Many molecular and neurological systems involve multiple interacting factors affecting an outcome synergistically and/or redundantly. Attempts to shed light on issues such as population coding in neurons, or genetic contribution to a phenotype (e.g., eye-color), have motivated various proposals to leverage principled information-theoretic measures for quantifying informational synergy and redundancy, e.g., [1–5]. In these settings, we are concerned with the statistics of how two (or more) random variables X1, X2, called predictors, jointly or separately specify/predict another random variable Y, called a target random variable. This focus on a target random variable is in contrast to Shannon’s mutual information, which quantifies statistical dependence between two random variables, and to various notions of common information, e.g., [6–8].
The concepts of synergy and redundancy are based on several intuitive notions, e.g., positive informational synergy indicates that X1 and X2 act cooperatively or antagonistically to influence Y; positive redundancy indicates there is an aspect of Y that X1 and X2 can each separately predict. However, it has been challenging [9–12] to come up with precise information-theoretic definitions of synergy and redundancy that are consistent with all intuitively desired properties.

2. Background: Partial Information Decomposition

Partial Information Decomposition (PID) [13] defines the concepts of synergistic, redundant and unique information in terms of intersection information, I({X1,…,Xn}:Y), which quantifies the common information that each of the n predictors X1,…,Xn conveys about a target random variable Y. An antichain lattice [14] of redundant, unique, and synergistic partial informations is built from the intersection information.
Partial information diagrams (PI-diagrams) extend Venn diagrams to represent synergy. A PI-diagram is composed of nonnegative partial information regions (PI-regions). Unlike the standard Venn entropy diagram, in which the sum of all regions is the joint entropy H(X1,…,Xn, Y), in PI-diagrams the sum of all regions (i.e., the space of the PI-diagram) is the mutual information I(X1,…,Xn:Y). PI-diagrams show how the mutual information I(X1,…,Xn:Y) is distributed across subsets of the predictors. For example, in the PI-diagram for n = 2 (Figure 1): {1} denotes the unique information about Y that only X1 carries (likewise {2} denotes the information only X2 carries); {1, 2} denotes the redundant information about Y that X1 as well as X2 carries, while {12} denotes the information about Y that is specified only by X1 and X2 synergistically or jointly.
Each PI-region is either redundant, unique, or synergistic, and any combination of PI-regions may simultaneously be positive. Per [13], for two predictors, the four partial informations are defined as follows: the redundant information as I({X1, X2}:Y), the unique informations as
$$I(\{X_1\}:Y) = I(X_1:Y) - I(\{X_1,X_2\}:Y), \qquad I(\{X_2\}:Y) = I(X_2:Y) - I(\{X_1,X_2\}:Y),$$
and the synergistic information as
$$I(\{X_1 X_2\}:Y) = I(X_1,X_2:Y) - I(\{X_1\}:Y) - I(\{X_2\}:Y) - I(\{X_1,X_2\}:Y) = I(X_1,X_2:Y) - I(X_1:Y) - I(X_2:Y) + I(\{X_1,X_2\}:Y).$$
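To make the two-predictor decomposition concrete, the following minimal Python sketch recovers the four partial informations from the three ordinary mutual informations and a supplied value of the intersection information; the function name and the worked example (the RDNXOR values from Section 3) are illustrative additions, not part of the original text.

```python
def pid_two_predictors(i_x1_y, i_x2_y, i_x1x2_y, i_redundant):
    """Two-predictor partial information decomposition.

    Inputs are mutual informations in bits, plus a value of the
    intersection (redundant) information I({X1,X2}:Y) produced by some
    candidate measure (Imin, I_wedge, I_alpha, ...).
    """
    unique_x1 = i_x1_y - i_redundant                      # I({X1}:Y)
    unique_x2 = i_x2_y - i_redundant                      # I({X2}:Y)
    synergy = i_x1x2_y - i_x1_y - i_x2_y + i_redundant    # I({X1X2}:Y)
    return {"redundant": i_redundant, "unique_x1": unique_x1,
            "unique_x2": unique_x2, "synergy": synergy}

# RDNXOR (Section 3) with one bit of redundancy: the four partial
# informations sum to I(X1,X2:Y) = 2 bits.
print(pid_two_predictors(i_x1_y=1.0, i_x2_y=1.0, i_x1x2_y=2.0, i_redundant=1.0))
# {'redundant': 1.0, 'unique_x1': 0.0, 'unique_x2': 0.0, 'synergy': 1.0}
```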

3. Desired I properties and canonical examples

There are a number of intuitive properties, proposed in [5,9–13], that are considered desirable for the intersection information measure I to satisfy:
  • (S0) Weak Symmetry: I({X1,…,Xn}:Y) is invariant under reordering of X1,…,Xn.
  • (M0) Weak Monotonicity: I({X1,…,Xn, Z}:Y) ≤ I({X1,…,Xn}:Y) with equality if there exists Xi ∈ {X1,…,Xn} such that H(Z, Xi) = H(Z).
    Weak Monotonicity is a natural generalization of the monotonicity property from [13]. Weak monotonicity is inspired by the property of mutual information that if H(X|Z) = 0, then I(X:Y) ≤ I(Z:Y).
  • (SR) Self-Redundancy: I({X1}:Y) = I(X1:Y). The intersection information a single predictor X1 conveys about the target Y is equal to the mutual information between X1 and the target Y.
  • (M1) Strong Monotonicity: I({X1,…,Xn, Z}:Y) ≤ I({X1,…,Xn}:Y) with equality if there exists Xi ∈ {X1,…,Xn} such that I(Z,Xi:Y) = I(Z:Y).
    Strong Monotonicity captures more precisely what is meant by “redundant information”: it says explicitly that it is information about Y that must be redundant, not merely that there is any redundancy among the predictors (as in Weak Monotonicity).
  • (LP) Local Positivity: For all n, the derived “partial informations” defined in [13] are nonnegative. This is equivalent to requiring that I satisfy total monotonicity, a stronger form of supermodularity. For n = 2 this can be concretized as I({X1, X2}:Y) ≥ I(X1:X2) − I(X1:X2|Y).
  • (TM) Target Monotonicity: If H(Y|Z) = 0, then I({X1,…,Xn}:Y) ≤ I({X1,…,Xn}:Z).
There are also a number of canonical examples for which one or more of the partial informations have intuitive values, which are considered desirable for the intersection information measure I to attain.
Example UNQ, shown in Figure 2, is a canonical case of unique information, in which each predictor carries independent information about the target. Y has four equiprobable states: ab, aB, Ab, and AB. X1 uniquely specifies bit a/A, and X2 uniquely specifies bit b/B. Note that the states are named so as to highlight the two bits of unique information; it is equivalent to choose any four unique names for the four states.
Example RDNXOR, shown in Figure 3, is a canonical example of redundancy and synergy coexisting. The r/R bit is redundant, while the 0/1 bit of Y is synergistically specified as the XOR of the corresponding bits in X1 and X2.
Example AND, shown in Figure 4, is an example where the relationship between X1, X2 and Y is nonlinear, making the desired partial information values less intuitively obvious. Nevertheless, it is desired that the partial information values should be nonnegative.
Example IMPERFECTRDN, shown in Figure 5, is an example of “imperfect” or “lossy” correlation between the predictors, where it is intuitively desirable that the derived redundancy should be positive. Given (LP), we can determine the desired decomposition analytically. First, I(X1,X2:Y) = I(X1:Y) = 1 bit; therefore, I(X2:Y|X1) = I(X1,X2:Y) − I(X1:Y) = 0 bits. This determines two of the partial informations: the synergistic information I({X1X2}:Y) and the unique information I({X2}:Y) are both zero. Then, the redundant information I({X1, X2}:Y) = I(X2:Y) − I({X2}:Y) = I(X2:Y) = 0.99 bits. Having determined three of the partial informations, we compute the final unique information I({X1}:Y) = I(X1:Y) − 0.99 = 0.01 bits.
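As a quick numerical check of the arithmetic above, here is a minimal sketch using only the values stated in this paragraph (the variable names are illustrative):

```python
# Stated mutual informations for IMPERFECTRDN, in bits.
i_x1_y, i_x2_y, i_x1x2_y = 1.0, 0.99, 1.0

# I(X2:Y|X1) = I(X1,X2:Y) - I(X1:Y) = 0, forcing synergy and unique(X2) to zero.
synergy, unique_x2 = 0.0, 0.0
redundant = i_x2_y - unique_x2        # I({X1,X2}:Y) = 0.99 bits
unique_x1 = i_x1_y - redundant        # I({X1}:Y)   = 0.01 bits

# The four partial informations sum back to I(X1,X2:Y).
assert abs(redundant + unique_x1 + unique_x2 + synergy - i_x1x2_y) < 1e-12
print(redundant, round(unique_x1, 2))   # 0.99 0.01
```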

4. Previous candidate measures

In [13], the authors propose to use the following quantity, Imin, as the intersection information measure:
$$I_{\min}(X_1,\ldots,X_n:Y) \equiv \sum_{y \in Y} \Pr(y) \min_{i \in \{1,\ldots,n\}} I(X_i:Y{=}y) = \sum_{y \in Y} \Pr(y) \min_{i \in \{1,\ldots,n\}} D_{\mathrm{KL}}\!\left[\Pr(X_i|y)\,\|\,\Pr(X_i)\right],$$
where DKL is the Kullback-Leibler divergence.
Though Imin is an intuitive and plausible choice for the intersection information, [9] showed that Imin has counterintuitive properties. In particular, Imin calculates one bit of redundant information for example UNQ (Figure 2). It does this because each predictor shares one bit of information with the target. However, it is quite clear that the shared informations are, in fact, different: X1 provides the a/A bit, while X2 provides the b/B bit. This led to the conclusion that Imin overestimates the ideal intersection information measure by focusing only on how much information the predictors provide about the target. Another way to understand why Imin overestimates redundancy in example UNQ is to imagine a hypothetical example where every state y ∈ Y is specified by exactly two bits of unique information, one from each predictor, with no synergy or redundancy. Imin would calculate the redundancy as the minimum over both predictors, which would be min[1, 1] = 1 bit. Therefore Imin would report one bit of redundancy even though, by construction, there is no redundancy but merely two bits of unique information.
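For illustration, a minimal Python sketch of the Imin definition above (the helper names are ours) reproduces this behaviour on UNQ:

```python
from math import log2

# Example UNQ: X1 in {'a','A'}, X2 in {'b','B'}, Y = X1 + X2 (concatenation),
# with the four combinations equiprobable.
unq = {(x1, x2, x1 + x2): 0.25 for x1 in 'aA' for x2 in 'bB'}

def marginal(p, idx):
    """Marginal distribution of the idx-th coordinate of the joint states."""
    out = {}
    for state, pr in p.items():
        out[state[idx]] = out.get(state[idx], 0.0) + pr
    return out

def specific_information(p, x_idx, y_idx, y):
    """I(X:Y=y) = D_KL[ Pr(X|y) || Pr(X) ] in bits."""
    p_y = sum(pr for s, pr in p.items() if s[y_idx] == y)
    cond = {}
    for s, pr in p.items():
        if s[y_idx] == y:
            cond[s[x_idx]] = cond.get(s[x_idx], 0.0) + pr / p_y
    px = marginal(p, x_idx)
    return sum(pr * log2(pr / px[x]) for x, pr in cond.items() if pr > 0)

def i_min(p, predictor_idxs, y_idx):
    """Imin: expected minimum specific information over the predictors."""
    return sum(p_y * min(specific_information(p, i, y_idx, y) for i in predictor_idxs)
               for y, p_y in marginal(p, y_idx).items())

# Imin reports a full bit of "redundancy" for UNQ, even though X1 and X2
# carry different (unique) bits about Y.
print(i_min(unq, predictor_idxs=(0, 1), y_idx=2))   # -> 1.0
```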
Another candidate measure of synergy, WholeMinusSum (WMS) [9,16], calculates zero synergy and redundancy for Example RDNXOR, as opposed to the intuitive value of one bit of redundancy and one bit of synergy.
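Taking the WholeMinusSum synergy to be the whole minus the sum of the individual informations, I(X1,X2:Y) − I(X1:Y) − I(X2:Y), a short sketch on RDNXOR (constructed as described in Section 3; helper names are ours) makes the cancellation explicit:

```python
from itertools import product
from math import log2

# Example RDNXOR: r, a, b are independent fair bits; X1 = (r, a), X2 = (r, b),
# and Y = (r, a XOR b), so the r bit is redundant and the XOR bit is synergistic.
rdnxor = {((r, a), (r, b), (r, a ^ b)): 1 / 8 for r, a, b in product((0, 1), repeat=3)}

def mutual_information(p, idx_a, idx_b):
    """I(A:B) in bits for a joint distribution keyed by state tuples."""
    pa, pb, pab = {}, {}, {}
    for s, pr in p.items():
        a = tuple(s[i] for i in idx_a)
        b = tuple(s[i] for i in idx_b)
        pa[a] = pa.get(a, 0.0) + pr
        pb[b] = pb.get(b, 0.0) + pr
        pab[(a, b)] = pab.get((a, b), 0.0) + pr
    return sum(pr * log2(pr / (pa[a] * pb[b])) for (a, b), pr in pab.items() if pr > 0)

# WholeMinusSum "synergy": the whole minus the sum of the parts.
wms = (mutual_information(rdnxor, (0, 1), (2,))      # I(X1,X2:Y) = 2 bits
       - mutual_information(rdnxor, (0,), (2,))      # I(X1:Y)    = 1 bit
       - mutual_information(rdnxor, (1,), (2,)))     # I(X2:Y)    = 1 bit
print(wms)   # -> 0.0: the bit of synergy and the bit of redundancy cancel
```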

5. New candidate measures

5.1. The IΛ measure

Based on [17], we can consider as a candidate intersection information IΛ the maximum mutual information I(Q:Y) that some random variable Q conveys about Y, subject to Q being a function of each predictor X1,…,Xn. After some algebra, this leads to
$$I_{\wedge}(\{X_1,\ldots,X_n\}:Y) \equiv \max_{\Pr(Q|Y)} I(Q:Y) \quad \text{subject to} \quad \forall i \in \{1,\ldots,n\}: H(Q|X_i) = 0, \tag{4}$$
which reduces to a simple expression in [12].
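For discrete variables this optimum is attained by the Gács–Körner common random variable X1 ∧ ⋯ ∧ Xn (the construction used in [12] and in Lemma 1 of Appendix A). The sketch below (function names are ours) computes that common random variable via connected components of co-occurring predictor values and recovers the embedded redundant bit of RDNXOR, consistent with Figure 3:

```python
from itertools import product
from math import log2

def common_rv_labels(p, predictor_idxs):
    """Gács–Körner common random variable of the predictors.

    Support states are merged whenever they share the value of any single
    predictor; the resulting connected-component label is then a deterministic
    function of every predictor, and it is the finest such variable.
    """
    states = [s for s, pr in p.items() if pr > 0]
    parent = {s: s for s in states}

    def find(s):
        while parent[s] != s:
            parent[s] = parent[parent[s]]   # path halving
            s = parent[s]
        return s

    for i in predictor_idxs:
        by_value = {}
        for s in states:
            by_value.setdefault(s[i], []).append(s)
        for group in by_value.values():
            for s in group[1:]:
                parent[find(s)] = find(group[0])
    return {s: find(s) for s in states}

def i_wedge(p, predictor_idxs, y_idx):
    """I_wedge({X1,...,Xn}:Y) = I(Q:Y), with Q the common random variable."""
    q = common_rv_labels(p, predictor_idxs)
    pq, py, pqy = {}, {}, {}
    for s, pr in p.items():
        if pr == 0:
            continue
        pq[q[s]] = pq.get(q[s], 0.0) + pr
        py[s[y_idx]] = py.get(s[y_idx], 0.0) + pr
        pqy[(q[s], s[y_idx])] = pqy.get((q[s], s[y_idx]), 0.0) + pr
    return sum(pr * log2(pr / (pq[a] * py[b])) for (a, b), pr in pqy.items() if pr > 0)

# RDNXOR: X1 = (r, a), X2 = (r, b), Y = (r, a XOR b) with r, a, b fair bits.
rdnxor = {((r, a), (r, b), (r, a ^ b)): 1 / 8 for r, a, b in product((0, 1), repeat=3)}
print(i_wedge(rdnxor, predictor_idxs=(0, 1), y_idx=2))   # -> 1.0 (the shared r/R bit)
```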
Example IMPERFECTRDN highlights the foremost shortcoming of IΛ: IΛ does not detect “imperfect” or “lossy” correlations between X1 and X2. Instead, IΛ calculates zero redundant information, i.e., IΛ({X1, X2}:Y) = 0 bits. This arises from Pr(X1 = 1, X2 = 0) > 0. If this probability were zero, IMPERFECTRDN would revert to being determined by property (SR) and the (M0) equality condition. Due to the nature of the common random variable, IΛ only sees the “deterministic” correlations between X1 and X2; add even an iota of noise between X1 and X2 and IΛ plummets to zero. This highlights a related issue with IΛ: it is not continuous, in that an arbitrarily small change in the probability distribution can result in a discontinuous jump in the value of IΛ.
Despite this, IΛ is a useful stepping-stone: it captures what is inarguably redundant information (the common random variable). In addition, unlike earlier measures, IΛ satisfies (TM).

5.2. The Iα measure

Intuitively, we expect that if Q only specifies redundant information, then conditioning on any predictor Xi should destroy all of the information Q conveys about Y. We take this intuition to its conclusion and find that it yields a tighter lower bound on I than IΛ. Moreover, Iα takes a pleasingly simple form: it is IΛ with the constraint in Equation (4) loosened from H(Q|Xi) = 0 to H(Q|Xi) = H(Q|Xi,Y):
$$I_{\alpha}(\{X_1,\ldots,X_n\}:Y) \equiv \max_{\Pr(Q|Y)} I(Q:Y) \quad \text{subject to} \quad \forall i \in \{1,\ldots,n\}: I(Q,X_i:Y) = I(X_i:Y)$$
$$= \max_{\Pr(Q|Y)} I(Q:Y) \quad \text{subject to} \quad \forall i \in \{1,\ldots,n\}: H(Q|X_i) = H(Q|X_i,Y).$$
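The equivalence of the two constraint forms follows from standard identities; this short derivation is added here for completeness:
$$I(Q,X_i:Y) = I(X_i:Y) + I(Q:Y|X_i), \qquad \text{so} \qquad I(Q,X_i:Y) = I(X_i:Y) \iff I(Q:Y|X_i) = 0,$$
$$I(Q:Y|X_i) = H(Q|X_i) - H(Q|X_i,Y), \qquad \text{so} \qquad I(Q:Y|X_i) = 0 \iff H(Q|X_i) = H(Q|X_i,Y).$$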
This measure obtains the desired values for the canonical examples in Section 3. However, its implicit definition makes it more difficult to verify whether it satisfies the desired properties from Section 3. Pleasingly, Iα also satisfies (TM). We can also show (see Lemmas 1 and 2 in Appendix A) that
$$0 \le I_{\wedge}(\{X_1,\ldots,X_n\}:Y) \le I_{\alpha}(\{X_1,\ldots,X_n\}:Y) \le I_{\min}(\{X_1,\ldots,X_n\}:Y).$$
While Iα attains the desired values on the previously defined canonical examples, we have found another example, SUBTLE (shown in Figure 6), for which IΛ and Iα both calculate negative synergy. This example further complicates Example AND by making the predictors mutually dependent.
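As a numerical illustration, assume the SUBTLE distribution of Figure 6 places probability 1/3 on each of (x1, x2, y) ∈ {(0, 0, 00), (0, 1, 01), (1, 1, 11)}; this assumption is ours, read off from the structure used in the Appendix A proof. With Iα({X1, X2}:Y) = 0 (Appendix A), the synergy partial information comes out negative:

```python
from math import log2

# Assumed SUBTLE joint distribution: three equiprobable states, with Y
# naming the (x1, x2) pair (see Figure 6 and the Appendix A proof).
subtle = {(0, 0, '00'): 1 / 3, (0, 1, '01'): 1 / 3, (1, 1, '11'): 1 / 3}

def mutual_information(p, idx_a, idx_b):
    """I(A:B) in bits for a joint distribution keyed by state tuples."""
    pa, pb, pab = {}, {}, {}
    for s, pr in p.items():
        a, b = tuple(s[i] for i in idx_a), tuple(s[i] for i in idx_b)
        pa[a] = pa.get(a, 0.0) + pr
        pb[b] = pb.get(b, 0.0) + pr
        pab[(a, b)] = pab.get((a, b), 0.0) + pr
    return sum(pr * log2(pr / (pa[a] * pb[b])) for (a, b), pr in pab.items() if pr > 0)

i_joint = mutual_information(subtle, (0, 1), (2,))   # I(X1,X2:Y) = log2(3) ~ 1.585
i_x1 = mutual_information(subtle, (0,), (2,))        # I(X1:Y)    ~ 0.918
i_x2 = mutual_information(subtle, (1,), (2,))        # I(X2:Y)    ~ 0.918
redundancy = 0.0                                     # I_alpha({X1,X2}:Y) = 0 (Appendix A)

synergy = i_joint - i_x1 - i_x2 + redundancy
print(round(synergy, 3))   # -> -0.252, a negative partial information
```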

6. Conclusion

The most important contribution of this paper is exchanging (M0) for (M1), thus further constraining the space of acceptable I measures. The complexity community aspires to eventually find a unique I measure that satisfies a large portion of the desired properties, and any noncontroversial tightening of the space of possible I measures, even (or especially?) if obvious in hindsight, is immensely welcome.
As discussed in [12], I measures fail (LP) if and only if they are too strict a measure of redundant information. Loosening the constraints on IΛ yields Iα and achieves a nonnegative decomposition for example IMPERFECTRDN. A natural next step is to loosen the constraints on Iα until a nonnegative decomposition is achieved for example SUBTLE. Alternatively, there is a very plausible measure of the “unique information” [9,15,18] that satisfies (LP) for n = 2 yet does not satisfy (TM). It seems that (LP) and (TM) are incompatible, and it would be valuable to prove this.

Acknowledgments

We thank Jim Beck, Yaser Abu-Mostafa, Edwin Chong, Chris Ellison, and Ryan James for helpful discussions.

Appendix A

Proof that Iα does not satisfy (LP), by counterexample SUBTLE (Figure 6).
For I(Q:Y|X1) = 0 to hold, Q must not distinguish between the states Y = 00 and Y = 01 (because X1 does not distinguish between these two states); this entails that Pr(Q|Y = 00) = Pr(Q|Y = 01). Likewise, for I(Q:Y|X2) = 0, Q must not distinguish between the states Y = 01 and Y = 11. Altogether, this entails that Pr(Q|Y = 00) = Pr(Q|Y = 01) = Pr(Q|Y = 11), and hence Pr(q|yi) = Pr(q|yj) ∀ q ∈ Q, yi ∈ Y, yj ∈ Y, which is only achievable when Pr(q) = Pr(q|y) ∀ q ∈ Q, y ∈ Y. This makes I(Q:Y) = 0; therefore, for example SUBTLE, Iα({X1, X2}:Y) = 0.
Lemma 1. We have IΛ({X1,…,Xn}:Y) ≤ Iα({X1,…,Xn}:Y).
Proof. We define the random variable Q′ = X1 ∧ ⋯ ∧ Xn and substitute Q′ for Q in the definition of Iα. Since Q′ is a function of each Xi, it satisfies the constraint I(Q′:Y|Xi) = 0 for all i ∈ {1,…,n}. Therefore, Q′ is always a feasible choice for Q, and the maximum of I(Q:Y) in Iα must be at least as large as I(Q′:Y) = IΛ({X1,…,Xn}:Y). □
Lemma 2. We have Iα({X1,…,Xn}:Y) ≤ Imin(X1,…,Xn:Y).
Proof. For a given state y ∈ Y and two arbitrary random variables Q and X, given I(Q:y|X) = DKL[Pr(Q,X|y) ∥ Pr(Q|X) Pr(X|y)] = 0, we show that I(Q:y) ≤ I(X:y):
$$I(X:y) - I(Q:y) = \sum_{x \in X} \Pr(x|y) \log \frac{\Pr(x|y)}{\Pr(x)} - \sum_{q \in Q} \Pr(q|y) \log \frac{\Pr(q|y)}{\Pr(q)} \ge 0.$$
Generalizing to the n predictors X1,…,Xn, the above shows that the maximum of I(Q:y) under the constraints I(Q:y|Xi) = 0 is at most mini∈{1,…,n} I(Xi:y), which completes the proof. □
Lemma 3. Measure Imin satisfies desired property Strong Monotonicity, (M1).
Proof. Given H(Y|Z) = 0, the specific-surprise I(Z:y) evaluates to
$$I(Z:y) \equiv D_{\mathrm{KL}}\!\left[\Pr(Z|y)\,\|\,\Pr(Z)\right] = \sum_{z \in Z} \Pr(z|y) \log \frac{\Pr(z|y)}{\Pr(z)} = \sum_{z \in Z} \Pr(z|y) \log \frac{1}{\Pr(y)} = \log \frac{1}{\Pr(y)}.$$
Note that for an arbitrary random variable Xi, I(Xi:y) ≤ log(1/Pr(y)). As Imin uses only mini I(Xi:y), the minimum is unchanged by adding any predictor Z such that H(Y|Z) = 0. Therefore, measure Imin satisfies property (M1). □

Author Contributions

Both authors shared in this research equally. Both authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Schneidman, E.; Bialek, W.; Berry, M.J. Synergy, redundancy, and independence in population codes. J. Neurosci. 2003, 23, 11539–11553. [Google Scholar]
  2. Narayanan, N.S.; Kimchi, E.Y.; Laubach, M. Redundancy and Synergy of Neuronal Ensembles in Motor Cortex. J. Neurosci. 2005, 25, 4207–4216. [Google Scholar]
  3. Balduzzi, D.; Tononi, G. Integrated information in discrete dynamical systems: Motivation and theoretical framework. PLoS Comput. Biol. 2008, 4, e1000091. [Google Scholar]
  4. Anastassiou, D. Computational analysis of the synergy among multiple interacting genes. Mol. Syst. Biol. 2007, 3, 83. [Google Scholar]
  5. Lizier, J.T.; Flecker, B.; Williams, P.L. Towards a Synergy-based Approach to Measuring Information Modification. Proceedings of 2013 IEEE Symposium on Artificial Life (ALIFE), Singapore, Singapore, 16–19 April 2013; pp. 43–51.
  6. Gács, P.; Körner, J. Common information is far less than mutual information. Prob. Control Inf. Theory. 1973, 2, 149–162. [Google Scholar]
  7. Wyner, A.D. The common information of two dependent random variables. IEEE Trans. Inf. Theory. 1975, 21, 163–179. [Google Scholar]
  8. Kumar, G.R.; Li, C.T.; Gamal, A.E. Exact Common Information. Proceedings of 2014 IEEE International Symposium on Information Theory (ISIT), Honolulu, HI, USA, 29 June–4 July 2014; pp. 161–165.
  9. Griffith, V.; Koch, C. Quantifying synergistic mutual information. In Guided Self-Organization: Inception; Emergence, Complexity and Computation Series, Volume 9; Springer: Berlin/Heidelberg, Germany, 2014; pp. 159–190. [Google Scholar]
  10. Harder, M.; Salge, C.; Polani, D. Bivariate measure of redundant information. Phys. Rev. E 2013, 87, 012130. [Google Scholar]
  11. Bertschinger, N.; Rauh, J.; Olbrich, E.; Jost, J. Shared Information—New Insights and Problems in Decomposing Information in Complex Systems. In Proceedings of the European Conference on Complex Systems 2012; Springer Proceedings in Complexity Series; Springer: Switzerland, 2013; pp. 251–269. [Google Scholar]
  12. Griffith, V.; Chong, E.K.P.; James, R.G.; Ellison, C.J.; Crutchfield, J.P. Intersection Information based on Common Randomness. Entropy 2014, 16, 1985–2000. [Google Scholar]
  13. Williams, P.L.; Beer, R.D. Nonnegative Decomposition of Multivariate Information. arXiv 2010, arXiv:1004.2515.
  14. Weisstein, E.W. Antichain. Available online: http://mathworld.wolfram.com/Antichain.html (accessed on 29 June 2015).
  15. Bertschinger, N.; Rauh, J.; Olbrich, E.; Jost, J.; Ay, N. Quantifying unique information. Entropy 2014, 16, 2161–2183. [Google Scholar]
  16. Schneidman, E.; Still, S.; Berry, M.J.; Bialek, W. Network Information and Connected Correlations. Phys. Rev. Lett. 2003, 91, 238701–238705. [Google Scholar]
  17. Wolf, S.; Wullschleger, J. Zero-error information and applications in cryptography. Proceedings of IEEE Information Theory Workshop, San Antonio, TX, USA, 24–29 October 2004; pp. 1–6.
  18. Rauh, J.; Bertschinger, N.; Olbrich, E.; Jost, J. Reconsidering unique information: Towards a multivariate information decomposition. Proceedings of 2014 IEEE International Symposium on Information Theory (ISIT), Honolulu, HI, USA, 29 June–4 July 2014; pp. 2232–2236.
Figure 1. PI-diagrams for n = 2 predictors, showing the amount of redundant (yellow/bottom), unique (magenta/left and right) and synergistic (cyan/top) information with respect to the target Y.
Figure 2. Example UNQ. X1 and X2 each uniquely carry one bit of information about Y. I(X1X2:Y) = H(Y) = 2 bits.
Figure 3. Example RDNXOR. This is the canonical example of redundancy and synergy coexisting. Imin and IΛ each reach the desired decomposition of one bit of redundancy and one bit of synergy. This example demonstrates IΛ correctly extracting the embedded redundant bit within X1 and X2.
Figure 4. Example AND. It is universally agreed that the redundant information lies between 0 and 0.311 bits. The most compelling argument, from [15], is for 0.311 bits of redundant information.
Figure 5. Example IMPERFECTRDN. IΛ is blind to the noisy correlation between X1 and X2 and calculates zero redundant information. An ideal I measure would detect that all of the information X2 specifies about Y is also specified by X1, and would calculate I({X1, X2}:Y) = 0.99 bits.
Figure 6. Example SUBTLE.
