Article

Mechanism Integrated Information

by Leonardo S. Barbosa 1, William Marshall 1,2, Larissa Albantakis 1 and Giulio Tononi 1,*
1 Department of Psychiatry, University of Wisconsin-Madison, Madison, WI 53719, USA
2 Department of Mathematics and Statistics, Brock University, St. Catharines, ON L2S 3A1, Canada
* Author to whom correspondence should be addressed.
Submission received: 12 January 2021 / Revised: 5 March 2021 / Accepted: 12 March 2021 / Published: 18 March 2021
(This article belongs to the Special Issue Integrated Information Theory and Consciousness)

Abstract: The Integrated Information Theory (IIT) of consciousness starts from essential phenomenological properties, which are then translated into postulates that any physical system must satisfy in order to specify the physical substrate of consciousness. We recently introduced an information measure (Barbosa et al., 2020) that captures three postulates of IIT—existence, intrinsicality and information—and is unique. Here we show that the new measure also satisfies the remaining postulates of IIT—integration and exclusion—and create the framework that identifies maximally irreducible mechanisms. These mechanisms can then form maximally irreducible systems, which in turn will specify the physical substrate of conscious experience.

Graphical Abstract

1. Introduction

Integrated information theory (IIT; [1,2,3]) identifies the essential properties of consciousness and postulates that a physical system accounting for it—the physical substrate of consciousness (PSC)—must exhibit these same properties in physical terms. Briefly, IIT starts from the existence of one’s own consciousness, which is immediate and indubitable. The theory then identifies five essential phenomenal properties that are immediate, indubitable and true of every conceivable experience, namely intrinsicality, composition, information, integration and exclusion. These phenomenal properties, called axioms, are translated into essential physical properties of the PSC, called postulates. The postulates are conceptualized in terms of cause–effect power and given a mathematical formulation in order to make testable predictions and allow for inferences and explanations.
So far, the mathematical formulation employed well-established measures of information, such as Kullback–Leibler divergence (KLD) [4] or earth mover’s distance (EMD) [3]. Ultimately, however, IIT requires a measure that is based on the postulates of the theory and is unique, because the quantity and quality of consciousness are what they are and cannot vary with the measure chosen. Recently, we introduced an information measure, called intrinsic difference [5], which captures three postulates of IIT—existence, intrinsicality and information—and is unique. Our primary goal here is to explore the remaining postulates of IIT—composition, integration and exclusion—in light of this unique measure, focusing on the assessment of integrated information φ for the mechanisms of a system. In doing so, we will also revisit the way of performing partitions.
The plan of the paper is as follows. In Section 2, we briefly introduce the axioms and postulates of IIT; in Section 3, we introduce the mathematical framework for measuring φ based on intrinsic difference (ID), which satisfies the postulates of IIT and is unique; in Section 4, we explore the behavior of the measure in several examples; and in Section 5, we discuss the connection between the new framework, previous versions of IIT and future developments.

2. Axioms and Postulates

This section summarizes the axioms of IIT and the corresponding postulates. For a complete description of the axioms and their motivation, the reader should consult [2,3,6].
Briefly, the zeroth axiom, existence, says that experience exists, immediately and indubitably. The zeroth postulate requires that the PSC must exist in physical terms. The PSC is assumed to be a system of interconnected units, such as a network of neurons. Physical existence is taken to mean that the units of the system must be able to be causally affected by or causally affect other units (take and make a difference). To demonstrate that a unit has a potential cause, one can observe whether the unit’s state can be caused by manipulating its input, while to demonstrate that a unit has a potential effect one can manipulate the state of the unit and observe if it causes the state of some other unit [7].
The first axiom, intrinsicality, says that experience is subjective, existing from the intrinsic perspective of the subject of experience. The corresponding postulate requires that a PSC has potential causes and effects within itself.
The second axiom, composition, says that experience is structured, being composed of phenomenal distinctions bound by phenomenal relations. The corresponding postulate requires that a PSC, too, must be structured, being composed of causal distinctions specified by subsets of units (mechanisms) over subsets of units (cause and effect purviews) and of causal relations that bind together causes and effects overlapping over the same units. Purviews are thus subsets of units whose states are constrained by another subset of units, the mechanism, in its particular state. The set of all causal distinctions and relations within a system composes its cause–effect structure.
The third axiom, information, says that experience is specific, being the particular way it is, rather than generic. The corresponding postulate states that a PSC must specify a cause–effect structure composed of distinctions and relations that specify particular cause and effect states.
The fourth axiom, integration, says that experience is unified, in that it cannot be subdivided into parts that are experienced separately. The corresponding postulate states that a PSC must specify a cause–effect structure that is unified, being irreducible to the cause–effect structures specified by causally independent subsystems. Integrated information ( Φ ) is a measure of the irreducibility of the cause–effect structure specified by a system [8]. The degree Φ to which a system is irreducible can be interpreted as a measure of its existence. Mechanism integrated information ( φ ) is an analogous measure that quantifies the existence of a mechanism within a system. Only mechanisms that exist within a system ( φ > 0 ) contribute to its cause–effect structure.
Finally, the exclusion axiom says that experience is definite, in that it contains what it contains, neither less nor more. The corresponding postulate states that the cause–effect structure specified by a PSC should be definite: it must specify a definite set of distinctions and relations over a definite set of units, neither less nor more. The PSC and its associated cause–effect structure are given by the set of units for which the value of Φ is maximal, with distinctions and relations corresponding to maxima of φ. According to IIT, then, a system is a PSC if it is a maximum of integrated information, meaning that it has higher integrated information than any overlapping system [3,9]. Moreover, the cause–effect structure specified by the PSC is identical to the subjective quality of the experience [10].

3. Theory

We first describe the process for measuring the integrated information ( φ ) of a mechanism based on the postulates of IIT. In order to contribute to experience, a mechanism must satisfy the postulates described in Section 2 (note that mechanisms cannot be compositional because, as components of the cause–effect structure, they cannot have components themselves). We then present some theoretical developments related to partitioning a mechanism in order to assess integration and to measuring the difference between probability distributions for quantifying intrinsic information. The subsequent process of measuring the integrated information of the system ( Φ ) will be discussed elsewhere.

3.1. Mechanism Integrated Information

Our starting point is a stochastic system S = {S_1, S_2, …, S_n} with state space Ω_S and current state s_t ∈ Ω_S (Figure 1a). The system is constituted of n random variables that represent the units of a physical system and has a transition probability function

$$p(s_{t+1} \mid s_t) = P(S_{t+1} = s_{t+1} \mid S_t = s_t), \qquad s_t, s_{t+1} \in \Omega_S, \tag{1}$$

which describes how the system updates its state (see Appendix A.1 for details). The goal is to define the integrated information of a mechanism M ⊆ S in a state m_t ∈ Ω_M based on the postulates of IIT. To this end, we will develop a difference measure φ(m_t, Z_{t±1}, ψ), which quantifies how much a mechanism M in state m_t constrains the state of a purview, a set of units Z_{t±1} ⊆ S, compared to a partition

$$\psi = \{(M_1, Z_1),\ (M_2, Z_2),\ \dots,\ (M_k, Z_k)\} \tag{2}$$

of the mechanism and purview into k independent parts (Figure 1b). As we evaluate the IIT postulates step by step, we will provide mathematical definitions for the required quantities, introduce constraints on φ and eventually arrive at a unique measure. Since potential causes of M = m_t are always inputs to M, and potential effects of M = m_t are always outputs of M, we will omit the corresponding update indices (t − 1, t, t + 1) unless necessary.

3.1.1. Existence

For a mechanism to exist in a physical sense, it must be possible for something to change its state, and it must be able to change the state of something (it has potential causes and effects). To evaluate these potential causes and effects, we define the cause repertoire π_c(Z | m) (see Equation (A2)) and the effect repertoire π_e(Z | m) (see Equation (A1)), which describe how m constrains the potential input or output states of Z ⊆ S, respectively (Figure 1b) [3,11,12,13].
The cause and effect repertoires are probability distributions derived from the system’s transition probability function (Equation (1)) by conditioning on the state of the mechanism and causally marginalizing the variables outside the purview ( S \ Z ). Causal marginalization is also used to remove any contributions to the repertoire from units outside the mechanism ( S \ M ). In this way, we capture the constraints due to the mechanism in its state and nothing else. Note that the cause and effect repertoires generally differ from the corresponding conditional probability distributions.
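To make the construction concrete, the following sketch computes an effect repertoire for a toy binary system. The dynamics function `prob_on`, the connectivity and all names are illustrative assumptions, not the paper's system or code; the sketch conditions on the mechanism state, causally marginalizes (averages uniformly over) units outside the mechanism, and takes the product over purview units, following the factorization described above.

```python
import numpy as np
from itertools import product

def prob_on(unit, state):
    """P(unit = 1 at t+1 | full system state at t).
    Toy dynamics: every unit is a noisy majority gate over all units
    (all units share the same rule in this illustration)."""
    total = sum(1 if x == 1 else -1 for x in state)
    return 1.0 / (1.0 + np.exp(-2.0 * total))

def effect_repertoire(purview, mechanism, m_state, n):
    """pi_e(Z | m): condition on the mechanism state, causally marginalize
    (uniform average) over units outside the mechanism, then take the
    product over purview units."""
    outside = [i for i in range(n) if i not in mechanism]
    rep = np.array([1.0])
    for z in purview:
        probs = []
        for ext in product([0, 1], repeat=len(outside)):
            state = [0] * n
            for i, v in zip(mechanism, m_state):
                state[i] = v
            for i, v in zip(outside, ext):
                state[i] = v
            probs.append(prob_on(z, state))
        p1 = float(np.mean(probs))  # causal marginalization = uniform average
        rep = np.kron(rep, np.array([1.0 - p1, p1]))  # [P(off), P(on)] per unit
    return rep

# Repertoire of purview {1, 2} given mechanism {0} in state "on":
rep = effect_repertoire(purview=(1, 2), mechanism=(0,), m_state=(1,), n=3)
```

The resulting `rep` is a probability distribution over the four joint states of the purview (00, 01, 10, 11).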
Having introduced cause and effect repertoires, we can write the difference

$$\varphi_e(m, Z, \psi) = D\left(\pi_e(Z \mid m),\ \pi_e^{\psi}(Z \mid m)\right),$$

where π_e^ψ(Z | m) corresponds to the partitioned effect repertoire (see Equation (A3)), in which certain connections from M to Z are severed (causally marginalized). When there is no change after the partition, we require that

$$\varphi_e(m, Z, \psi) = 0.$$
The same analysis holds for causes, replacing π e with π c in the definition of φ c ( m , Z , ψ ) . Unless otherwise specified, in what follows we focus on effects.

3.1.2. Intrinsicality

The intrinsicality postulate states that, from the intrinsic perspective of the mechanism M = m over a purview Z, the effect repertoire π e ( Z | m ) is set and has to be taken as is. This means that, given the purview units and their connections to the mechanism, the constraints due to the mechanism are defined by how all its units at a particular state m at t constrain all units in the effect purview at t + 1 and cause purview at t 1 . For example, if the mechanism fully constrains all of its purview units except for one unit which remains fully unconstrained, the mechanism cannot just ignore the unconstrained unit or optimize its overall constraints by giving more weight to some states than others in the effect repertoire. For this reason, the intrinsicality postulate should make the difference measure D between the partitioned and unpartitioned repertoire sensitive to a tradeoff between “expansion” and “dilution”: the measure should increase if the purview includes more units that are highly constrained by the mechanism but decrease if the purview includes units that are weakly constrained. The mathematical formulation of this requirement is given in Section 3.3.

3.1.3. Information

The information postulate states that a mechanism M, by being in its particular state m, must have a specific effect, which means that it must specify a particular effect state z over the purview Z. The effect state should be the one for which m makes the most difference. To that end, we require a difference measure of the form
$$\varphi_e(m, Z, \psi) = D\left(\pi_e(Z \mid m),\ \pi_e^{\psi}(Z \mid m)\right) = \max_{z \in \Omega_Z} \left| f\left(\pi_e(z \mid m),\ \pi_e^{\psi}(z \mid m)\right) \right|,$$
such that the difference D between effect repertoires is evaluated as the maximum of the absolute value of some function f that is assessed for particular states. The function f is one of the main developments of the current work and is discussed in Section 3.3.

3.1.4. Integration

The integration postulate states that a mechanism must be unitary, being irreducible to independent parts. By comparing the effect repertoire π e ( Z m ) against the partitioned repertoire π e ψ ( Z m ) , we can assess how much of a difference the partition ψ makes to the effect of m. To quantify how irreducible m’s effect is on Z, one must compare all possible partitioned repertoires to the unpartitioned effect repertoire. In other words, one must evaluate each possible partition ψ . Of all partitions, we define the minimum information partition (MIP)
$$\psi^* = \underset{\psi}{\operatorname{argmin}}\ \varphi_e(m, Z, \psi),$$
which is the one that makes the least difference to the effect. The intrinsic integrated effect information (or integrated effect information for short) of the mechanism M in state m about a purview Z is then defined as
$$\varphi_e(m, Z) = \varphi_e(m, Z, \psi^*).$$
If φ e ( m , Z ) = 0 , there is a partition of the candidate mechanism that does not make a difference, which means that the candidate mechanism is reducible.

3.1.5. Exclusion

The exclusion postulate states that a mechanism must be definite: it must specify a definite effect over a definite set of units. That is, a mechanism must be about a maximally irreducible purview
$$Z_e^* = \underset{Z \subseteq S}{\operatorname{argmax}}\ \varphi_e(m, Z),$$
which maximizes integrated effect information and is in the effect state
$$z_e^* = \underset{z \in \Omega_{Z_e^*}}{\operatorname{argmax}}\ \left| f\left(\pi_e(z \mid m),\ \pi_e^{\psi^*}(z \mid m)\right) \right|.$$
The purview Z e * is then used to define the integrated effect information of the mechanism M
$$\varphi_e(m) = \varphi_e(m, Z_e^*).$$
Returning to the existence postulate, a mechanism must have both a cause and an effect. By an analogous process using cause repertoires π c instead of effect repertoires π e , we can define the integrated cause information of m
$$\varphi_c(m) = \varphi_c(m, Z_c^*),$$
and the integrated information of the mechanism
$$\varphi(m) = \min\left(\varphi_c(m),\ \varphi_e(m)\right).$$
Thus, if a candidate mechanism M in state m is reducible over every purview either on the cause or effect side, φ ( m ) = 0 and M does not contribute to experience. Otherwise, M = m is irreducible and forms a mechanism within the system. As such, it specifies a distinction
$$X(m) = \left(Z_c^* = z_c^*,\ Z_e^* = z_e^*,\ \varphi(m)\right), \qquad Z_c^*, Z_e^* \subseteq S,\quad z_c^* \in \Omega_{Z_c^*},\quad z_e^* \in \Omega_{Z_e^*},$$
which links its maximally irreducible cause with its maximally irreducible effect, for M ⊆ S, m ∈ Ω_M and φ(m) ∈ {x ∈ ℝ : x > 0}. While a mechanism always specifies a unique φ(m) value, due to symmetries in the system there may be multiple equivalent solutions for Z_c* = z_c* or Z_e* = z_e*. We expect such "ties" to be exceedingly rare in physical systems with variable connection strengths and a certain amount of indeterminism, and we outline possible ways to resolve them in the discussion (Section 5).

3.2. Disintegrating Partitions

According to the integration postulate, a mechanism can only exist from the intrinsic perspective of a system if it is irreducible, meaning that any partition of the mechanism would make a difference to its potential cause or effect. Accordingly, computing the integrated information of a mechanism requires partitioning the mechanism and assessing the difference between partitioned and unpartitioned repertoires. In this section we give additional mathematical details and theoretical considerations for how to partition a mechanism together with its purview Z.
Generally, a partition ψ of a mechanism M and a purview Z is a set of parts as defined in Equation (2), with some restrictions on (M_i, Z_i). The partition "cuts apart" the mechanism, severing any connections from M_i to Z_j (i ≠ j). We use causal marginalization (see Appendix A) to remove any causal power M_i has over Z_j (i ≠ j) and compute a partitioned repertoire. Practically, it is as though we do not condition on the state of M_i when considering Z_j. Before describing the restrictions on (M_i, Z_i), we will look at a few examples to highlight the conceptual issues. First, consider a third-order mechanism M = {A, B, C} with the same units (as inputs or outputs) in the corresponding third-order purview Z = {A, B, C}. A standard example of a partition of this mechanism is
$$\psi_1 = \{(\{A, B\}, \{A, B\}),\ (\{C\}, \{C\})\},$$
which cuts units { A , B } away from unit { C } . Now consider the situation where we would like to additionally cut { B } in the purview away from { A , B } in the mechanism. This partition can be represented as
$$\psi_2 = \{(\{A, B\}, \{A\}),\ (\{\}, \{B\}),\ (\{C\}, \{C\})\}.$$
This example raises the issue of whether to allow the empty set as part of a partition. The question is not only conceptual but also practical: in a situation where {A, B} and {C} have opposite effects (e.g., excitatory and inhibitory connections), it may be that the MIP ψ* = ψ_2 (see Section 4.2 for an example). Here, the mechanism is always partitioned together with a purview subset.
While the definition of ψ should include partitions such as ψ 2 above, this raises additional issues. Consider the partition
$$\psi_3 = \{(\{A, B, C\}, \{A, B\}),\ (\{\}, \{C\})\}.$$
In ψ 3 , the set of all mechanism units is contained in one part. Should such a partition count as "cutting apart" the mechanism? The same problem arises for partitions of first-order mechanisms. Consider, for example, M = { A } with purview Z = { A , B , C } and partition
$$\psi_4 = \{(\{A\}, \{A, B\}),\ (\{\}, \{C\})\}.$$
A first-order mechanism should be considered completely irreducible by definition, yet for the proposed partition only a small fraction of its constraint is considered integrated information: while M = A may constrain A, B, and C, only its constraints over C would be evaluated by ψ 4 . A similar argument applies to ψ 3 , which would only allow us to evaluate the constraint of the mechanism M = { A , B , C } on C, not the entire purview Z = { A , B , C } . In sum, ψ 3 and ψ 4 should not be permissible partitions by the integration postulate. The set of mechanism units may not remain integrated over a purview subset once a partition is applied.
Based on the above argument, we propose a set of disintegrating partitions
$$\Psi(M, Z) = \Big\{ \{(M_i, Z_i)\}_{i=1}^{k} \ \Big|\ k \in \{2, 3, 4, \dots\},\ M_i \in \mathcal{P}(M),\ Z_i \in \mathcal{P}(Z),\ \textstyle\bigcup_i M_i = M,\ \bigcup_i Z_i = Z,\ Z_i \cap Z_j = M_i \cap M_j = \emptyset \ \text{for all } i \neq j,\ M_i = M \Rightarrow Z_i = \emptyset \Big\},$$

such that for each ψ ∈ Ψ(M, Z): {M_i} is a partition of M and {Z_i} is a partition of Z, where the empty set is allowed as a part. Moreover, if the mechanism is not partitioned into at least two parts, then the mechanism must be cut away from the entire purview.
In summary, the above definition of possible partitions ensures that the mechanism set must be divided into at least two parts, except for the special case where one part contains the whole mechanism but no units in the purview (complete partition, ψ 0 ). This special partition can be interpreted as “destroying” the whole mechanism at once and observing the impact its absence has on the purview.
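A brute-force enumeration of this set can be sketched as follows. This is an illustrative transcription of the conditions in the definition above, not the authors' code; parts are represented as pairs of frozensets and duplicates are removed because partitions are unordered.

```python
from itertools import product

def disintegrating_partitions(M, Z):
    """Enumerate Psi(M, Z): unordered partitions {(M_i, Z_i)} of mechanism M
    and purview Z into k >= 2 parts. Empty parts on either side are allowed,
    but a part containing the whole mechanism must have an empty purview."""
    M, Z = tuple(M), tuple(Z)
    kmax = max(2, len(M) + len(Z))
    seen = set()
    for k in range(2, kmax + 1):
        # assign each mechanism unit and each purview unit to one of k parts
        for m_lab in product(range(k), repeat=len(M)):
            for z_lab in product(range(k), repeat=len(Z)):
                parts = []
                for i in range(k):
                    mi = frozenset(u for u, l in zip(M, m_lab) if l == i)
                    zi = frozenset(u for u, l in zip(Z, z_lab) if l == i)
                    if mi or zi:  # drop vacuous (empty, empty) parts
                        parts.append((mi, zi))
                if len(parts) < 2:
                    continue
                # whole mechanism in one part => that part's purview is empty
                if any(mi == frozenset(M) and zi for mi, zi in parts):
                    continue
                seen.add(frozenset(parts))
    return seen

psis = disintegrating_partitions(M=('A', 'B'), Z=('A',))
```

For M = {A, B} and Z = {A} this yields four partitions, including the complete partition ψ_0 = {(M, ∅), (∅, Z)}.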

3.3. Intrinsic Difference (ID)

In this section we define the measure D, which quantifies the difference between the unpartitioned and partitioned repertoires specified by a mechanism and thus plays an important role in measuring integrated information. We propose a set of properties that D should satisfy based on the postulates of IIT described above, and then identify the unique measure that satisfies them.
Our desired properties are described in terms of discrete probability distributions P n = [ p 1 , p 2 , , p n ] and Q n = [ q 1 , q 2 , , q n ] . Generally, P n represents the cause or effect repertoire of a mechanism π ( Z | m ) , while Q n represents the partitioned repertoire π ψ ( Z | m ) .
The first property, causality, captures the requirement for physical existence (Section 3.1.1) that a mechanism has a potential cause and effect,
$$D(P_n, Q_n) = 0 \iff P_n = Q_n. \tag{5}$$
The interpretation is that the integrated information m specifies about Z is zero only if the unpartitioned and partitioned repertoires are identical; in other words, only if, by being in state m, the mechanism M does not constrain the potential state of Z beyond its partition into independent parts.
The second property, intrinsicality, captures the requirement that physical existence must be assessed from the perspective of the mechanism itself (Section 3.1.2). The idea is that information should be measured from the intrinsic perspective of the candidate mechanism M in state m, which determines the potential state of the purview Z by itself, independent of external observers. In other words, the constraint m has over Z must depend only on their units and connections. In contrast, traditional information measures were conceived to quantify the amount of signal transmitted across a channel between a sender and a receiver from an extrinsic perspective, typically that of a channel designer who has the ability to optimize the channel’s capacity. This can be done by adjusting the mapping between the states of M and Z through encoders and decoders to reduce indeterminism in the signal transmission. However, such a remapping would require more than just the units and connections present in M and Z, thus violating intrinsicality [5].
The intrinsicality property is defined based on the behavior of the difference measure when distributions are extended by adding units to the purview or increasing the number of possible states of a unit [14]. A distribution P_1 is extended by a distribution P_2 to create a new distribution P_1 ⊗ P_2, where ⊗ is the Kronecker product. When a fully selective distribution (one where an outcome occurs with probability one) is extended by another fully selective distribution, the measure should increase additively (expansion). However, if a distribution is extended by a fully undetermined distribution of size n (one where all n outcomes are equally likely), then the measure should decrease by a factor of n (dilution). For expansion, suppose P_1 and P_2 are fully selective distributions; then for any Q_1 and Q_2 we have

$$D(P_1 \otimes P_2,\ Q_1 \otimes Q_2) = D(P_1, Q_1) + D(P_2, Q_2). \tag{6}$$

For dilution, suppose P_2 and Q_2 are fully undetermined distributions of size n; then for any P_1, Q_1 we have

$$D(P_1 \otimes P_2,\ Q_1 \otimes Q_2) = \frac{1}{n}\, D(P_1, Q_1). \tag{7}$$
Together, Equations (6) and (7) define the intrinsicality property.
The final property, specificity, requires that physical existence must be about a specific purview state (Section 3.1.3),
$$D(P_n, Q_n) = \max_{\alpha} \left| f(p_\alpha, q_\alpha) \right|. \tag{8}$$
The function f ( p , q ) defines the difference between two probability distributions at a specific state of the purview. The mechanism is defined based on the state that maximizes its difference within the system.
Previous work employed similar properties to quantify intrinsic information but used a version of the specificity property that did not include the absolute value [5]. In that work, the goal was to compute the intrinsic information of a communication channel, with an implicit assumption that the source is sending a specific message. In that context, a signal is only informative if it increases the probability of receiving the correct message. Here we are interested in integrated information within the context of the postulates of IIT as a means to quantify existence, which requires causes and effects. A mechanism can be seen as having an effect (or cause) whether it increases or decreases the probability of a specific state.
Together, the three properties (causality, specificity and intrinsicality) characterize a unique measure, the intrinsic difference, for measuring the integrated information of a mechanism. Note that while the causality (Equation (5)) and expansion (Equation (6)) properties are traditionally required of information measures (see [15]), here we also require dilution (Equation (7)) and specificity (Equation (8)). While the maximum operation in specificity, which selects one specific purview state, seems to us uncontroversial, one may argue that the dilution factor 1/n in Equation (7) is somewhat arbitrary. However, note that if specificity requires that information be specific to one state, then after adding a fully undetermined distribution of size n to the purview, the amount of causal power measured by the function f in state α will invariably be divided by n. For this reason, we believe that the dilution factor must necessarily be 1/n, at least in this particular case.
Theorem 1.
If D(P_n, Q_n) satisfies the causality, intrinsicality and specificity properties, then

$$D(P_n, Q_n) = \max_{\alpha} \left| f(p_\alpha, q_\alpha) \right|,$$

where

$$f(p, q) = k\, p \log \frac{p}{q}.$$
The full mathematical statement of the theorem and its proof are presented in Appendix B. For the rest of the manuscript we assume k = 1 without loss of generality. Here, our main interest is using ID to quantify the difference between unpartitioned and partitioned cause or effect repertoires when assessing the integrated information of a mechanism,
$$\varphi(m, Z) = D\left(\pi(Z \mid m),\ \pi^{\psi^*}(Z \mid m)\right) = \max_{z \in \Omega_Z} \left| \pi(z \mid m) \log \frac{\pi(z \mid m)}{\pi^{\psi^*}(z \mid m)} \right|.$$
One can interpret the integrated information as being composed of two terms. First, the informativeness

$$\log \frac{\pi(z \mid m)}{\pi^{\psi^*}(z \mid m)},$$

which reflects the difference in Hartley information contained in state z before and after the partition. Second, the selectivity

$$\pi(z \mid m),$$

which reflects the likelihood of the cause or effect. Together, the two terms can be interpreted as the density of information for a particular state [5].
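For concreteness, the intrinsic difference can be implemented directly from Theorem 1. The sketch below is our illustration (with k = 1 and natural logarithms, the base that matches the numerical values reported in Section 4); it also makes the causality, expansion and dilution properties easy to check numerically.

```python
import numpy as np

def intrinsic_difference(p, q):
    """ID(P, Q) = max_alpha | p_alpha * log(p_alpha / q_alpha) |.
    Assumes q > 0 wherever p > 0; states with p = 0 contribute zero."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    with np.errstate(divide='ignore', invalid='ignore'):
        f = np.where(p > 0, p * np.log(p / q), 0.0)
    return float(np.max(np.abs(f)))

# Distributions for checking the three properties:
p1, q1 = [1.0, 0.0], [0.5, 0.5]      # fully selective p1
p2, q2 = [0.0, 1.0], [0.25, 0.75]    # fully selective p2
u = [0.5, 0.5]                        # fully undetermined, size n = 2
```

Causality gives ID = 0 for identical distributions; extending two fully selective distributions adds their IDs (expansion); extending by `u` halves the ID (dilution).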

4. Methods and Results

Throughout this section we investigate each step necessary to compute φ(m), the integrated information of a mechanism M in state m. To this end, we construct systems S formed by units A, B, C, … that are either ↑ (1) or ↓ (−1) at time t, with the probability of being ↑ defined by (Figure 2a)

$$P(Y_t = 1 \mid A_{t-1} = a_{t-1},\ B_{t-1} = b_{t-1},\ \dots) = \frac{1}{1 + \exp\left(-\frac{2\,(a_{t-1} + b_{t-1} + \dots + h)}{\tau}\right)},$$

for all Y ∈ S, where A, B, … are the units that input to Y. Besides the sum of the input states, the function depends on two parameters: h ∈ ℝ defines a bias towards being ↑ (h > 0) or ↓ (h < 0), while τ ∈ {x ∈ ℝ : x ≥ 0} defines how deterministic the unit Y is. For τ → ∞, the unit turns ↑ or ↓ with equal probability (fully undetermined), while for τ = 0 it turns ↑ whenever the sum of the inputs is greater than the threshold η = −h, and turns ↓ otherwise (fully selective; Figure 2a). This way, τ = 0 means that the unit is fully constrained by the inputs (deterministic), τ = 1 means the unit is partially constrained, and τ = 10 means the unit is only weakly constrained, etc. Unless otherwise specified, in the following we focus on investigating effect purviews.
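In code, the activation rule and its two limiting regimes can be sketched as follows (a reconstruction from the equation above; the function name and signature are ours):

```python
import numpy as np

def p_up(inputs, h=0.0, tau=1.0):
    """P(Y_t = up | input states at t-1), with inputs in {+1, -1},
    bias h, and indeterminism tau."""
    s = sum(inputs) + h
    if tau == 0.0:
        return 1.0 if s > 0 else 0.0  # fully selective (deterministic) limit
    return 1.0 / (1.0 + np.exp(-2.0 * s / tau))
```

For example, `p_up([1, 1], tau=1e9)` is close to 0.5 (fully undetermined limit), while with `tau=0` the unit is a hard threshold on the summed inputs plus bias.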

4.1. Intrinsic Information

We start by investigating the role of intrinsicality in computing the integrated information of a mechanism. To this end, we will compare φ e ( m , Z , ψ 0 ) for various mechanism-purview pairs, which evaluates the ID over a complete partition
$$\psi_0 = \{(M, \emptyset),\ (\emptyset, Z)\}$$
of mechanism M and purview units Z, leaving the purview fully unconstrained after the partition (in this case, the partitioned repertoires are equivalent to the unconstrained repertoires defined in Equation (A5) and Equation (A4)). Intrinsicality requires that the ID must increase additively when fully constrained units are added to the purview (expansion, Equation (6)) and decrease exponentially when fully unconstrained units are added to the purview (dilution, Equation (7)). We define the system S depicted in Figure 2b to investigate the expansion and dilution of a mechanism M = { A } over different purviews Z S . Next, we fix the mechanism M in state m = 1 and measure the ID of this mechanism over effect purviews with varying levels of indeterminism τ but a fixed threshold h = 0 (partially deterministic majority gates).
First consider the purview Z = {B} with a fully constrained unit (τ_B = 0), such that (Figure 2b)

$$\varphi_e(m, Z, \psi_0) = \mathrm{ID}\left(\pi_e(B \mid A = {\uparrow}),\ \pi_e^{\psi_0}(B \mid A = {\uparrow})\right) = 0.69.$$
Now consider the same mechanism over a larger purview Z = { B , C } , which has an additional, partially constrained unit C ( τ C = 1 ). This purview has a larger repertoire of possible states, resulting in a larger difference between partitioned and unpartitioned probabilities of one state (high informativeness). At the same time, the probability of this state is still very high in absolute terms (high selectivity). Thus, the ID of m over { B , C } is higher than over { B } alone (Figure 2c):
$$\varphi_e(m, Z, \psi_0) = \mathrm{ID}\left(\pi_e(BC \mid A = {\uparrow}),\ \pi_e^{\psi_0}(BC \mid A = {\uparrow})\right) = 1.11.$$
The higher value for Z = { B , C } reflects the expansion that occurs whenever informativeness increases while selectivity is still high. Notice that the expansion here is subadditive since the new unit is constrained but not fully constrained (or fully selective).
Finally, consider another purview Z = { B , D } , where D is only weakly constrained ( τ D = 10 ). While the new purview has a state where informativeness is marginally higher than before, selectivity is much lower (the state has much lower probability). For this reason, φ e ( m , Z , ψ 0 ) is lower for Z = { B , D } than for the smaller purview Z = { B } , reflecting dilution (Figure 2c):
$$\varphi_e(m, Z, \psi_0) = \mathrm{ID}\left(\pi_e(BD \mid A = {\uparrow}),\ \pi_e^{\psi_0}(BD \mid A = {\uparrow})\right) = 0.43.$$
Notice that dilution here is not exactly a factor of 2 since the new unit is weakly constrained by the mechanism but not fully unconstrained.
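The three φ values above can be checked numerically under the connectivity we infer from the text (units B, C and D each receive a single connection from A, with h = 0 and the ID taken with natural logarithms). This is our reconstruction, not the authors' code:

```python
import numpy as np

def p_up(a, h, tau):
    """P(unit = up | single input a in {+1, -1}); tau = 0 is the hard threshold."""
    if tau == 0:
        return 1.0 if a + h > 0 else 0.0
    return 1.0 / (1.0 + np.exp(-2.0 * (a + h) / tau))

def phi_complete_partition(taus, a=+1, h=0.0):
    """phi_e(m, Z, psi_0) for purview units with the given taus, mechanism
    A = up. Under the complete partition the partitioned repertoire equals
    the unconstrained one (uniform average over the input state)."""
    cons, uncons = np.array([1.0]), np.array([1.0])
    for tau in taus:
        pc = p_up(a, h, tau)                              # constrained by A = up
        pu = 0.5 * (p_up(+1, h, tau) + p_up(-1, h, tau))  # causally marginalized
        cons = np.kron(cons, np.array([1 - pc, pc]))
        uncons = np.kron(uncons, np.array([1 - pu, pu]))
    with np.errstate(divide='ignore', invalid='ignore'):
        f = np.where(cons > 0, cons * np.log(cons / uncons), 0.0)
    return float(np.max(np.abs(f)))

print(round(phi_complete_partition([0]), 2))      # purview {B}
print(round(phi_complete_partition([0, 1]), 2))   # purview {B, C}
print(round(phi_complete_partition([0, 10]), 2))  # purview {B, D}
```

Under these assumptions the computation recovers 0.69, 1.11 and 0.43 for the purviews {B}, {B, C} and {B, D}, respectively, including the subadditive expansion and the less-than-factor-of-2 dilution noted above.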
Next we investigate the role of the information postulate, which requires that the mechanism must be specific, meaning that a mechanism must both be in a specific state and specify an effect state (or a cause state) of a specific purview. Consider the system in Figure 3a, where we focus on a high-order mechanism with four units M = {A, B, C, D} over a purview with three units Z = {A, B, C}. The threshold and amount of indeterminism of the purview units are fixed: h = −3 and τ = 1, which makes the purview units function like partially deterministic AND gates. We show not only that the mechanism can be more or less informative depending on its state but also that the specific purview state selected by the ID measure depends both on the probability of the state and on how much the state is constrained by the mechanism.
When the state of the mechanism is m = {↓,↓,↓,↓} (Figure 3b), the most informative state in the purview is z = {↓,↓,↓}, since all units are more likely to be turned ↓ than they are after partitioning (high informativeness), and at the same time this state still has high probability (high selectivity). Out of all states, z = {↓,↓,↓} maximizes informativeness and selectivity in combination, resulting in

$$\varphi_e(m, Z, \psi_0) = \mathrm{ID}\left(\pi_e(ABC \mid ABCD = {\downarrow}{\downarrow}{\downarrow}{\downarrow}),\ \pi_e^{\psi_0}(ABC \mid ABCD = {\downarrow}{\downarrow}{\downarrow}{\downarrow})\right) = 0.27.$$
A different scenario arises if we change the state of the mechanism to A B C D = { , , , } (Figure 3c). In this mechanism state, the constrained probability of A B C = { , , } is lower than the probability after partitioning. However, the mechanism is informative because the probabilities differ. At the same time, the state A B C = { , , } still has high probability while being constrained by the mechanism A B C D = { , , , } . Together, the product of informativeness and selectivity is higher for the purview state { , , } than for any other state, resulting in
φ e ( m , Z , ψ 0 ) = ID ( π e ( A B C A B C D = ) , π e ψ 0 ( A B C A B C D = ) ) = 0.08 .
Although it may be counterintuitive to identify an effect state whose probability is decreased by the mechanism, it highlights an important feature of intrinsic information: it balances informativeness and selectivity. Informativeness is about constraint, meaning how much the probability of observing a given state in the purview changes due to being constrained by the mechanism. At the same time, selectivity is about probability density at a given state, meaning that this constraint is only relevant if the state is realized by the purview. If the mechanism is informative while increasing selectivity, then there is no tension between the two. However, whenever the mechanism decreases the probability of a state, there is a tension between how informative and how selective that state is. As long as together the product of informativeness and selectivity of a state (in this case A B C = { , , } ) is higher than all other states, it is selected by the maximum operation in the ID function and thus determines the intrinsic information of the mechanism.
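The balance between informativeness and selectivity described above can be sketched in a few lines of code. The distributions below are hypothetical (they are not the repertoires of Figure 3), and base-2 logarithms are assumed; the sketch only illustrates how each purview state contributes its selectivity times its informativeness, and how the maximum over states defines the intrinsic difference.

```python
import math

def intrinsic_difference(p, q):
    """Sketch of the ID measure: each state contributes its selectivity
    p[s] times its informativeness log2(p[s] / q[s]); the ID is the
    largest absolute contribution over states (base-2 logs assumed)."""
    return max(abs(ps * math.log2(ps / qs)) if ps > 0 else 0.0
               for ps, qs in zip(p, q))

# Hypothetical constrained (p) and partitioned (q) repertoires.
p = [0.05, 0.60, 0.35]
q = [0.25, 0.25, 0.50]

# Per-state contributions: a state whose probability is decreased by the
# mechanism contributes a negative informativeness term, but it is only
# selected if its product with selectivity still dominates all others.
contributions = [ps * math.log2(ps / qs) for ps, qs in zip(p, q)]
id_value = intrinsic_difference(p, q)
```

Here the second state wins: its probability is raised from 0.25 to 0.60 (informative) and the state remains likely (selective).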

4.2. Integrated Information

The integration postulate of IIT requires that mechanisms be integrated, that is, irreducible to their parts. In this section we use the system defined in Figure 4a, with η = 0 and τ = 1 for all units, to investigate how mechanisms are impacted by different partitions. We compute the ID between the intact and all possible partitioned effect repertoires to measure the impact of each partition ψ ∈ Ψ ( M , Z ) . We identify the partition with the lowest ID as the MIP of the candidate mechanism over a purview.
First, when considering the mechanism M = { A , E } = { , } over the purview Z = { A , E } , the complete partition ψ 0 (partitioning the entire mechanism away from the entire purview) assigns a positive value
φ e ( m , Z , ψ 0 ) = ID ( π e ( A E A E = ) , π e ψ 0 ( A E A E = ) ) = 0.36 .
However, if we try the partition
ψ 1 = { ( { A } , { A } ) , ( { E } , { E } ) } ,
we find that the candidate mechanism is not integrated (as is obvious after inspecting Figure 4b):
φ e ( m , Z , ψ 1 ) = ID ( π e ( A E A E = ) , π e ψ 1 ( A E A E = ) ) = 0 .
We conclude that this candidate mechanism does not exist within the system over this purview.
Next, we consider the candidate mechanism M = { A , B } = { , } over the purview Z = { A , B } . We observe that the partition
ψ 2 = { ( { A } , { A , B } ) , ( { B } , { } ) } ,
defines an intrinsic difference
φ e ( m , Z , ψ 2 ) = ID ( π e ( A B A B = ) , π e ψ 2 ( A B A B = ) ) = 0.36 ,
which is smaller than the one assigned by the complete partition ψ 0 ,
φ e ( m , Z , ψ 0 ) = ID ( π e ( A B A B = ) , π e ψ 0 ( A B A B = ) ) = 0.51 .
Although the ID over ψ 2 is smaller than that over the complete partition, this information is not zero. Moreover, the partition ψ 2 yields an ID value that is smaller than any other partition ψ Ψ ( A B , A B ) . In this case, we say that ψ 2 is the MIP ( ψ * = ψ 2 ) , and that the candidate mechanism M = { A , B } has integrated effect information (Figure 4c):
φ e ( m , Z ) = ID ( π e ( A B A B = ) , π e ψ * ( A B A B = ) ) = 0.36 .
Finally, for the candidate mechanism M = { A , B , D } = { , , } over the purview Z = { E , F } , any partition that does not include the empty set as a part in { M i } leads to nonzero ID. However, if we allow the empty set for M i (as discussed in Section 3.2), the candidate mechanism is reducible because disintegrating it with the partition
ψ * = { ( { A } , { } ) , ( { } , { F } ) , ( { B , D } , { E } ) }
makes no difference to the purview states, resulting in
φ e ( m , Z ) = ID ( π e ( E F A B D = ) , π e ψ * ( E F A B D = ) ) = 0 .
This occurs since B and D have opposite effects over the purview unit E, and by cutting both inputs to E we avoid changing the repertoire. Therefore, M = { A , B , D } does not exist as a mechanism over the purview Z = { E , F } .
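The MIP search just described can be sketched as follows. The repertoires are hypothetical stand-ins (the real ones come from the equations of Appendix A); the point is only the selection logic: compute the ID for every admissible partition, keep the minimum, and declare the mechanism reducible when that minimum is zero.

```python
import math

def intrinsic_difference(p, q):
    # ID: max over states of |p * log2(p / q)| (sketch, base-2 logs assumed)
    return max(abs(ps * math.log2(ps / qs)) if ps > 0 else 0.0
               for ps, qs in zip(p, q))

def minimum_information_partition(intact, partitioned):
    """Return (label, phi) for the partition whose repertoire is closest
    to the intact one under ID; phi == 0 means the mechanism is reducible
    over this purview."""
    label = min(partitioned,
                key=lambda k: intrinsic_difference(intact, partitioned[k]))
    return label, intrinsic_difference(intact, partitioned[label])

# Hypothetical intact repertoire and partitioned repertoires.
intact = [0.7, 0.1, 0.1, 0.1]
partitioned = {
    "psi_0 (complete partition)": [0.25, 0.25, 0.25, 0.25],
    "psi_1": [0.7, 0.1, 0.1, 0.1],  # identical to intact -> phi = 0
}
label, phi = minimum_information_partition(intact, partitioned)
```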

4.3. Maximal Integrated Information

The last postulate we investigate is exclusion, which dictates that mechanisms are defined over a definite purview, the one over which the mechanism is maximally irreducible (has maximal integrated effect information). Using the system defined in Figure 4a, we investigate two candidate mechanisms. First, we study the candidate mechanism M = { A } = , similar to the one in Figure 2. Since M = { A } is first order (constituted of one unit), there is only one possible partition (the complete partition)
ψ * = ψ 0 = { ( { A } , { } ) , ( { } , { Z } ) } .
After computing φ e ( m , Z ) for all possible purviews Z S , we find that the mechanism has maximum integrated effect information over the purview Z e * = { A , F } , thus according to Equation (3) we have
φ e ( m ) = ID ( π e ( A F A = ) , π e ψ * ( A F A = ) ) = 0.36 .
Next, similarly to Figure 3, we investigate the candidate mechanism M = { A , B , C , D } = { , , , } . After computing φ e ( m , Z , ψ ) over all possible purviews in the system ( Z S ) and over all possible partitions for each purview ( ψ Ψ ( A B C D , Z ) ), we find that the mechanism has maximum integrated effect information over the purview Z e * = { A , B , C } , with partition
ψ * = { ( { A , B , C } , { A , B , C } ) , ( { D } , { } ) } ,
and that
φ e ( m ) = ID ( π e ( A B C A B C D = ) , π e ψ * ( A B C A B C D = ) ) = 0.04 .
Finally, IIT requires that mechanisms have both causes and effects within the system. We perform an analogous process using the cause repertoire π c ( Z A B C D = ) (see Equation (A2) and Figure 6) to identify the maximally irreducible cause purview Z c * . We find that Z c * = { A , B , F } with MIP
ψ * = { ( { A } , { A } ) , ( { B , C , D } , { F } ) } ,
and integrated cause information
φ c ( m ) = ID ( π c ( A B F A B C D = ) , π c ψ * ( A B F A B C D = ) ) = 0.09 .
Since M = { A , B , C , D } = { , , , } has an irreducible cause ( φ c > 0 ) and effect ( φ e > 0 ), we say that the mechanism A B C D exists within the system with integrated information
φ ( m ) = min { φ e ( m ) , φ c ( m ) } = 0.04 .
This means that, given the system S, the mechanism M = { A , B , C , D } S in state m = { , , , } Ω M specifies the distinction
X ( { A , B , C , D } = { , , , } ) = ( Z c * = { A , B , F } = { , , } , Z e * = { A , B , C } = { , , } , φ ( { A , B , C , D } ) = 0.04 ) .
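The final step, requiring both an irreducible cause and an irreducible effect, reduces to a minimum. A minimal sketch, using the φ values computed for the ABCD mechanism above:

```python
def mechanism_phi(phi_cause, phi_effect):
    """phi(m) = min(phi_c, phi_e): a mechanism exists within the system
    only if it has BOTH an irreducible cause and an irreducible effect."""
    if phi_cause > 0 and phi_effect > 0:
        return min(phi_cause, phi_effect)
    return 0.0

# Values from the ABCD example: phi_c = 0.09, phi_e = 0.04.
phi = mechanism_phi(0.09, 0.04)
```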

5. Discussion

Mechanism integrated information φ ( m ) is a measure of the intrinsic cause–effect power of a mechanism M = m within a system. It reflects how much a mechanism as a whole (above and beyond its parts) constrains the units in its cause and effect purview. We characterize three properties of information based on the postulates of IIT: causality, intrinsicality, and specificity, and demonstrate that there is a unique measure (ID) that satisfies these properties. Notably, intrinsicality requires that information increases when expanding a purview with a fully constrained unit (expansion) but decreases when expanding a purview with a fully unconstrained unit (dilution). In situations with partial constraint, finding a unique measure gives us a principled way to balance expansion and dilution.
Early versions of IIT used the KLD to measure the difference between probability distributions [4,16]. The KLD was a practical solution given its unique mathematical properties and ubiquity in information theory; however, there was no principled reason to select it over any other measure. In [3], the KLD was replaced by the EMD, which was an initial attempt to capture the idea of relations among distinctions. The more two distinctions overlap in their purview units and states, the smaller the EMD between them; this distance was used as the ground distance to compute the system integrated information ( Φ ). This aspect of the EMD is now encompassed by including relations as an explicit part of the cause–effect structure, defined in a way that is consistent with the postulates of IIT [10]. The new intrinsic difference measure is the first principled measure based on properties derived from the postulates of IIT. Importantly, ID is shown to be the unique measure that satisfies the three properties of causality, intrinsicality and specificity, whereas the KLD and EMD do not satisfy intrinsicality or specificity. See Appendix C for an example of how the different measures change the purview with maximum integrated information.
Furthermore, we define a set of possible partitions of a mechanism and its purview ( Ψ ( M , Z ) ), which ensures that the mechanism is destroyed ("disintegrated") after the partition operation is applied. Previous formulations of mechanism integrated information restricted the set of all possible partitions to bipartitions of a mechanism and its purview but allowed for partitions that do not qualify as "disintegrating" the mechanism (for example, cutting away a single purview unit) [3]. For most mechanisms the minimum information partition ψ * still partitions the mechanism into two parts; exceptions tend to occur if multiple inputs to the same unit counteract each other. The requirement for disintegrating partitions is more consequential, especially for first-order mechanisms (those composed of a single unit). Without this restriction, the ψ * of a first-order mechanism would always be to cut away its weakest purview unit, and the integrated information of the mechanism would then be equal to the information the mechanism specifies about its least constrained purview unit. With disintegrating partitions, a first-order mechanism must be cut away from its entire purview, reflecting the notion that everything a first-order mechanism does is irreducible (since it is unified).
The particular partition ψ * Ψ ( M , Z ) that yields the minimum ID between partitioned and unpartitioned repertoires defines the integrated information of a mechanism over a purview. The balance between expansion and dilution, together with the set of possible partitions, allows us to find the purviews Z c * and Z e * with maximum integrated cause and effect information. Moreover, the ID measure identifies the specific cause state z c * and effect state z e * that maximize the mechanism’s integrated cause and effect information. Finally, the overall integrated information of a mechanism M in state m is the minimum between its integrated cause and effect information: φ ( m ) = min { φ c ( m ) , φ e ( m ) } .
Mechanisms that exist within a system ( φ ( m ) > 0 ) specify a distinction (a cause and effect) for the system, and the set of all distinctions and the relations among them define the cause–effect structure of the system [10]. As mentioned above (Section 3.1.5), it is in principle possible that there are multiple solutions for Z c * = z c * or Z e * = z e * for a given mechanism m in degenerate systems with symmetries in connectivity and functionality (but note that φ ( m ) is uniquely defined). However, by the exclusion postulate, distinctions within the cause–effect structure of a conscious system should specify a definite cause and effect, which means that they should specify a definite cause and effect purview in a specific state. As also argued in [17], distinctions that are underdetermined should thus not be included in the cause–effect structure until the tie between purviews or states can be resolved. In physical systems that evolve in time with a certain amount of variability and indeterminism, ties are likely short lived and may typically resolve on a faster scale than the temporal scale of experience.
The principles and arguments applied to mechanism information will need to be extended to relation integrated information and system integrated information, laying the groundwork for an updated 4.0 version of the theory. Relations describe how causes and effects overlap in the cause–effect structure, by being over the same units and specifying the same state. Like distinctions, relations exist within the cause–effect structure, and their existence is quantified by an analogous notion of relation integrated information ( φ r ). Similarly, the intrinsic existence of a candidate system and its cause–effect structure as a PSC with an experience is quantified by system integrated information ( Φ ). Both φ r and Φ measure the difference made by "cutting apart" the object (relation or system) according to its ψ * . As measures of existence, the difference measures used for φ r and Φ must also satisfy the causality, intrinsicality and specificity properties. In the case of Φ , the expansion and dilution properties will need to be adapted to the combinatorial nature of the measure, since adding a single unit to a PSC doubles the number of potential distinctions.
According to IIT, a system is a PSC if its cause–effect structure is maximally irreducible (it is a maximum of system integrated information, Φ ). Moreover, if a system is a PSC, then its subjective experience is identical to its cause–effect structure [3]. Since the quantity and quality of consciousness are what they are, the cause–effect structure cannot vary arbitrarily with the chosen measure of intrinsic information. For this reason, a measure of intrinsic information that is based on the postulates and is unique is a critical requirement of the theory.

Author Contributions

Conceptualization, L.S.B., W.M., L.A. and G.T.; software, L.S.B.; investigation, L.S.B., W.M. and L.A.; writing—original draft preparation, L.S.B. and W.M.; writing—review and editing, L.S.B., L.A., W.M. and G.T.; visualization, L.S.B.; supervision, G.T.; project administration, G.T.; funding acquisition, G.T. All authors have read and agreed to the published version of the manuscript.

Funding

This project was made possible through the support of a grant from Templeton World Charity Foundation, Inc. (#TWCF0216) and by the Tiny Blue Dot Foundation (UW 133AAG3451). The opinions expressed in this publication are those of the authors and do not necessarily reflect the views of Templeton World Charity Foundation, Inc. and Tiny Blue Dot Foundation.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

We would like to thank Shuntaro Sasai, Andrew Haun, William Mayner, Graham Findlay and Matteo Grasso for their helpful comments.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Cause and Effect Repertoires

The cause and the effect repertoire can be derived from the system defined in Equation (1). The random variables S i define the system state space Ω S = × i = 1 n Ω S i , where × denotes the Cartesian product of the individual state spaces. We also require that the random variables are conditionally independent given the preceding system state,
p ( s t + 1 | s t ) = ∏ i = 1 n p ( s i , t + 1 | s t ) ,
that the transitions are time invariant
p ( s t + 1 | s t ) = p ( s t | s t 1 ) ,
and that the probabilities are well-defined for all possible states
p ( s t + 1 | s t ) for all s t , s t + 1 ∈ Ω S .
Given the former definitions, the stochastic system S corresponds to a causal network where p ( s t + 1 | s t ) = p ( s t + 1 | d o ( s t ) ) [12,18,19].
We use uppercase letters as parameters of the probability function to define probability distributions, e.g., p ( S t + 1 | s t ) = { p ( s t + 1 | s t ) : s t + 1 Ω S } , and the operators ∑ and ∏ are applied to each state independently.

Appendix A.1. Causal Marginalization

Given a mechanism M S in a state m t Ω M and a purview Z S , causal marginalization serves to remove any contributions to the repertoire of states Ω Z that are outside the mechanism M and purview Z. Explicitly, given the set W = S \ M , we define the effect repertoire of a single unit Z i Z as
π e ( Z i | m ) = ∑ w t ∈ Ω W p ( Z i , t + 1 | m t , w t ) | Ω W | − 1 .
Note that, for causal marginalization, we impose a uniform distribution as p ( W t ) . This ensures that the repertoire captures the constraints due to the mechanism alone and not to whatever external factors might bias the variables in W to one state or another.
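Causal marginalization with a uniform distribution over W can be sketched as a direct average over the states of W. The transition function below is a hypothetical OR unit, not one of the units in the figures; `tpm_unit` is assumed to return p(Z_i = 1 | s) for a full system state s.

```python
from itertools import product

def effect_repertoire_unit(tpm_unit, mech_idx, m_state, n_units):
    """pi_e(Z_i | m): average p(Z_i,t+1 | m_t, w_t) over a uniform
    distribution on the states of W = S minus M (sketch of Equation (A1)).
    Units are binary; states are tuples of 0/1."""
    w_idx = [i for i in range(n_units) if i not in mech_idx]
    total = 0.0
    for w_state in product((0, 1), repeat=len(w_idx)):
        s = [0] * n_units
        for i, v in zip(mech_idx, m_state):
            s[i] = v                      # clamp mechanism units to m
        for i, v in zip(w_idx, w_state):
            s[i] = v                      # sweep background units W
        total += tpm_unit(tuple(s))
    p_on = total / 2 ** len(w_idx)        # uniform p(W_t)
    return [1.0 - p_on, p_on]             # [p(off), p(on)]

def or_unit(s):
    # Hypothetical unit: a deterministic OR of system units 0 and 1.
    return 1.0 if (s[0] or s[1]) else 0.0
```

With the mechanism { unit 0 } fixed on, the OR unit is certainly on; with it fixed off, the unit follows the uniformly marginalized input from W.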
Given the set V = S \ Z , the cause repertoire for a single unit M i M , using Bayes’ rule, is
π c ( Z | m i ) = [ ∑ v t − 1 ∈ Ω V p ( m i , t | Z t − 1 , v t − 1 ) ] / [ ∑ s t − 1 ∈ Ω S p ( m i , t | s t − 1 ) ] ,
where again we impose the uniform distributions as p ( V t 1 ) , p ( Z t 1 ) , and p ( S t 1 ) .
Note that the transition probability function p ( Z t + 1 m t ) not only contains dependencies of Z t + 1 on m t but also correlations between the variables in Z due to common inputs from units in W, which should not be counted as constraints due to m t . To discount such correlations, we define the effect repertoire over a set Z of r units Z i as the product of the effect repertoires over individual units
π e ( Z | m ) = ⨂ i = 1 r π e ( Z i | m ) ,
where ⨂ is the Kronecker product of the probability distributions. In the same manner, given that the mechanism M has q units M i , we define the cause repertoire of Z as
π c ( Z | m ) = [ ∏ i = 1 q π c ( Z | m i ) ] / [ ∑ z ∈ Ω Z ∏ i = 1 q π c ( z | m i ) ] .
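Equations (A3) and (A4) combine the single-unit repertoires in different ways: effect repertoires multiply across purview units (a Kronecker product over state spaces), while cause repertoires multiply elementwise across mechanism units and are then renormalized. A small sketch with hypothetical distributions:

```python
def kron(a, b):
    # Kronecker product of two distributions, flattened over joint states
    return [x * y for x in a for y in b]

def effect_repertoire(unit_repertoires):
    """pi_e(Z | m) as the Kronecker product of the single-unit effect
    repertoires (sketch of Equation (A3))."""
    out = [1.0]
    for r in unit_repertoires:
        out = kron(out, r)
    return out

def cause_repertoire(per_mechanism_unit):
    """pi_c(Z | m) as the elementwise product over mechanism units,
    renormalized over purview states (sketch of Equation (A4))."""
    out = [1.0] * len(per_mechanism_unit[0])
    for r in per_mechanism_unit:
        out = [o * v for o, v in zip(out, r)]
    total = sum(out)
    return [o / total for o in out]
```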

Appendix A.2. Partitioned Repertoires

Given a partition ψ Ψ ( M , Z ) constituted of k parts (see Equation (4)), we can define the partitioned repertoire
π ψ ( Z | m ) = ⨂ j = 1 k π ( Z j | m j ) ,
with π ( ∅ | m j ) = π ( ∅ ) = 1 . In the case of m j = ∅ , π ( Z j | ∅ ) = π ( Z j ) corresponds to an unconstrained effect repertoire
π e ( Z ) = ⨂ i = 1 r π e ( Z i ) = ⨂ i = 1 r ∑ s t ∈ Ω S p ( Z i , t + 1 | s t ) | Ω S | − 1 ,
which follows from Equation (A1) and cause repertoire
π c ( Z ) = 1 / | Ω Z | ,
which follows from Equation (A2).
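A partitioned repertoire is then just the product of the parts' repertoires, where a part with an empty mechanism contributes the unconstrained repertoire of its purview (Equations (A5)–(A7)). A sketch with hypothetical part repertoires:

```python
def kron(a, b):
    # Kronecker product of two distributions, flattened over joint states
    return [x * y for x in a for y in b]

def unconstrained_cause_repertoire(n_units):
    """pi_c(Z) = 1 / |Omega_Z|: uniform over the 2**n_units purview
    states (Equation (A7))."""
    n_states = 2 ** n_units
    return [1.0 / n_states] * n_states

def partitioned_repertoire(part_repertoires):
    """pi^psi(Z | m): product over the k parts of the partition (sketch
    of Equation (A5)). An empty purview part contributes pi() = [1.0]."""
    out = [1.0]
    for r in part_repertoires:
        out = kron(out, r)
    return out

# One part constrained by its mechanism (hypothetical values), one part
# cut away from any mechanism and therefore left unconstrained.
parts = [[0.9, 0.1], unconstrained_cause_repertoire(1)]
```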

Appendix B. Full Statement and Proof of Theorem 1

We now give the full statement and proof of the Theorem 1, demonstrating the uniqueness of the function f (see also [15,20]). We start with some preliminary definitions:
R + = { x ∈ R : x > 0 } , N 2 = { 2 , 3 , 4 , … } ,
J = ( 0 , 1 ) , J ^ = ( 0 , 1 ] , J ¯ = [ 0 , 1 ] , K = ( J ¯ × J ¯ ) \ ( J ^ × { 0 } ) ,
I δ ( x ) = { y ∈ J : | x − y | < δ } .
For each n ∈ N 2 , we further define
Γ n = { X n = ( x 1 , … , x n ) : x 1 , … , x n ∈ J ¯ , ∑ α = 1 n x α = 1 } ,
V n = ( 1 , 0 , … , 0 ) ∈ Γ n , U n = ( 1 / n , … , 1 / n ) ∈ Γ n ,
Δ n = { ( X n , Y n ) ∈ Γ n × Γ n : ( x α , y α ) ∈ K , for all α ∈ { 1 , … , n } } .
Using these definitions, we further define the following properties.
  • Property I: Causality. Let ( P n , Q n ) Δ n . The difference D ( P n , Q n ) is defined as D : Δ n R , such that
    D ( P n , Q n ) = 0 ⟺ P n = Q n .
  • Property II: Intrinsicality. Let ( P l , Q l ) Δ l and ( P m , Q m ) Δ m . Then
    (a) expansion: D ( V l ⊗ V m , P l ⊗ Q m ) = D ( V l , P l ) + D ( V m , Q m ) ,
    (b) dilution: D ( P l ⊗ U m , Q l ⊗ U m ) = ( D ( P l , Q l ) + D ( U m , U m ) ) / m ,
    where P l ⊗ Q m = ( p 1 q 1 , … , p 1 q m , … , p l q 1 , … , p l q m ) ∈ Γ l m and, from Property I, D ( U m , U m ) = 0 .
  • Property III: Specificity. The difference must be state-specific, meaning there exists f : K R such that for all ( P n , Q n ) Δ n we have D ( P n , Q n ) = f ( p α , q α ) , where α { 1 , , n } , p α P n and q α Q n . More precisely, we define
    D ( P n , Q n ) : = max α { | f ( p α , q α ) | } ,
    where f is continuous on K, analytic on J ^ × J and f ( 0 , q α ) is analytic on J.
The following lemma allows the analytic extension of real analytic functions.
Lemma A1
(See Proposition 1.2.3 in [21]). If f and g are real analytic functions on an open interval U R and if there is a sequence of distinct points { x n } n U with x 0 = lim n x n U such that
f ( x n ) = g ( x n ) ,
then
f ( x ) = g ( x ) , f o r   a l l x U .
Corollary A1
(See Corollary 1.2.6 in [21]). If f and g are analytic functions on an open interval U and if there is an open interval W U such that
f ( x ) = g ( x ) , f o r   a l l x W ,
then
f ( x ) = g ( x ) , f o r   a l l x U .
The following lemma shows that a strict maximum over continuous functions, each evaluated at fixed points, must hold for an open interval around such fixed points.
Lemma A2.
Let g α : J R be continuous functions, where α { 1 , , n } , fix x 1 , , x n J . If there exists α * such that for α α * ,
g α * ( x α * ) > g α ( x α ) ,
then there exists δ > 0 such that
max α { g α ( x ^ α ) } = g α * ( x ^ α * ) ,
for all x ^ α I δ ( x α ) and for all α { 1 , , n } .
We now provide the solution to a functional equation similar to the Pexider logarithmic equation [22].
Lemma A3.
Let f , g , h : J R be analytic functions on J. Suppose the functional equation
| f ( p q ) | = max { | g ( p ) | , | h ( p ) | } + max { | g ( q ) | , | h ( q ) | } ,
holds for all p q I δ ( p q ) , where I δ ( p q ) J . Then there exists c , d R such that
f ( x ) = c log ( x ) + d , f o r   a l l x J .
Proof. 
First, for some i { g , h } suppose that there exists ( p i , q i ) J × J such that p i q i I δ ( p q ) and
| i ( p i ) | = max { | g ( p i ) | , | h ( p i ) | }
is a strict maximum. Then by Lemma A2 there exists δ p > 0 such that
| i ( p ) | = max { | g ( p ) | , | h ( p ) | } , for   all p I δ p ( p i ) .
Second, if there does not exist ( p i , q i ) J × J such that p i q i I δ ( p q ) and | i ( p i ) | is a strict maximum, then we set q i = q , p i = p and δ p = δ q so that Equation (A6) holds since | g ( p ) | = | h ( p ) | for all ( p , q ) J × J such that p q I δ ( p q ) . Next, define δ : = min { δ | p i q i p q | , δ p q } . Suppose that there exists q j J such that p i q j I δ ( p i q i ) , and for some j { g , h } ,
| j ( q j ) | = max { | g ( q j ) | , | h ( q j ) | }
is a strict maximum. Then by Lemma A2 there exists δ q > 0 such that
| j ( q ) | = max { | g ( q ) | , | h ( q ) | } , for   all q I δ q ( q j ) .
Finally, if there does not exist q j J such that p i q j I δ ( p i q i ) and | j ( q j ) | is a strict maximum, then we set q j = q i and δ q = δ p i so that Equation (A7) holds since | g ( q ) | = | h ( q ) | for all ( p , q ) J × J such that p q I δ ( p i q i ) . Let p q = x and define δ : = min { δ | p i q j p i q i | , δ q p i } , then
| f ( x ) | = | i ( p ) | + | j ( q ) | , for   all x I δ ( p i q j ) .
Moreover, it follows that one of the following options must be true
f ( x ) = ± i ( p ) ± j ( q ) , for   all x I δ ( p i q j ) .
Since the functions are analytic on J and therefore twice differentiable, then
∂ 2 ∂ q ∂ p [ f ( x ) ] = x d 2 d x 2 f ( x ) + d d x f ( x ) = ± ∂ 2 ∂ q ∂ p i ( p ) ± ∂ 2 ∂ q ∂ p j ( q ) = 0 .
Integrating with respect to x yields
f ( x ) = c log ( x ) + d ,
for c , d R and for all x I δ ( x ) where x = p q . Since f is analytic on J and since I δ ( x ) J , by Corollary A1, we can extend f ( x ) such that
f ( x ) = c log ( x ) + d , for   all x J .
Lemma A4.
If D : Δ n R satisfies properties I and III for some f : K R , then
f ( p , p ) = 0 , f o r   a l l p J ¯ .
Proof. 
Let P 2 = ( p , 1 − p ) ∈ Γ 2 . By properties I and III
0 = D ( P 2 , P 2 ) = max { | f ( p , p ) | , | f ( 1 − p , 1 − p ) | } , for all p ∈ J ¯ .
Theorem A1.
Let ( P n , Q n ) Δ n for some n N 2 and D : Δ n R where D satisfies properties I, II and III. Then
D ( P n , Q n ) = max α { | f ( p α , q α ) | } ,
where for some k R \ { 0 } ,
f ( p , q ) = k p log p q , f o r   a l l ( p , q ) K .
Proof of Theorem A1.
First we show that the function in Equation (A10) satisfies properties I, II and III. To see that the function satisfies Property I, notice that for each ( P n , Q n ) Δ n where P n Q n , since k 0 , then there exists β { 1 , , n } such that
D ( P n , Q n ) = max α k p α log p α q α k p β log p β q β > 0 ,
and for each P n = Q n ,
D ( P n , P n ) = max α k p α log p α p α = 0 .
To see that it satisfies Property II.a, for each P l Γ l and for each Q m Γ m
D ( V l V m , P l Q m ) = max k log 1 p 1 q 1 , 0 = k log 1 p 1 q 1 = k log 1 p 1 + k log 1 q 1 = max k log 1 p 1 , 0 + max k log 1 q 1 , 0 = D ( V l , P l ) + D ( V m , Q m ) .
Similarly by Property II.b notice that for each ( P l , Q l ) Δ l
D ( P l U m , Q l U m ) = max α k p α m log p α q α = 1 m max α k p α log p α q α = 1 m D ( P l , Q l ) .
It is clear that the function f in Equation (A10) satisfies Property III.
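As a numerical sanity check (outside the formal proof), the function f(p, q) = k p log(p/q) with the continuous extension f(0, q) = 0 can be verified against the expansion and dilution identities directly; the distributions below are arbitrary examples with l = m = 2 and natural logarithms (k = 1):

```python
import math

def f(p, q):
    # f(p, q) = p log(p / q), with the continuous extension f(0, q) = 0
    return 0.0 if p == 0 else p * math.log(p / q)

def D(P, Q):
    # D(P, Q) = max_alpha |f(p_alpha, q_alpha)|
    return max(abs(f(p, q)) for p, q in zip(P, Q))

def kron(a, b):
    # the tensor product of distributions used in Property II
    return [x * y for x in a for y in b]

V2, U2 = [1.0, 0.0], [0.5, 0.5]
P2, Q2 = [0.3, 0.7], [0.6, 0.4]

# Property II.a (expansion): D(V x V, P x Q) = D(V, P) + D(V, Q)
lhs_expansion = D(kron(V2, V2), kron(P2, Q2))
rhs_expansion = D(V2, P2) + D(V2, Q2)

# Property II.b (dilution): D(P x U_m, Q x U_m) = D(P, Q) / m, here m = 2
lhs_dilution = D(kron(P2, U2), kron(Q2, U2))
rhs_dilution = D(P2, Q2) / 2
```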
The remaining part of the proof is divided into two steps:
Step 1. First we show that under four assumptions, f satisfies properties I, II and III iff f ( p , q ) = k p log p q , k R \ { 0 } .
Step 2. Next we show that if any of our assumptions is violated, then no suitable f exists.
Verification of Step 1. We apply Property II.a with P 2 = ( p , 1 − p ) , Q 2 = ( q , 1 − q ) for some p , q ∈ J where p ≠ q . We then have
D ( V 2 V 2 , P 2 Q 2 ) = D ( V 2 , P 2 ) + D ( V 2 , Q 2 ) .
By Property III, the following identity holds
max { | f ( 1 , p q ) | , | f ( 0 , p ( 1 − q ) ) | , | f ( 0 , ( 1 − p ) q ) | , | f ( 0 , ( 1 − p ) ( 1 − q ) ) | } = max { | f ( 1 , p ) | , | f ( 0 , 1 − p ) | } + max { | f ( 1 , q ) | , | f ( 0 , 1 − q ) | } .
Our first assumption (AS1) states that there exists some p , q J such that | f ( 1 , p q ) | is a strict maximum. By Lemma A2, there exists a δ > 0 such that
| f ( 1 , p q ) | = max { | f ( 1 , p ) | , | f ( 0 , 1 − p ) | } + max { | f ( 1 , q ) | , | f ( 0 , 1 − q ) | } ,
for all p q I δ ( p q ) . Further, by Lemma A3 there exists c , d R such that
f ( 1 , q ) = c log ( q ) + d , for   all q J ,
and since by Property III f is continuous, the application of Lemma A4 yields
lim q → 1 f ( 1 , q ) = lim q → 1 c log ( q ) + d = d = 0 ,
i.e., for k 1 = − c , we have
f ( 1 , q ) = k 1 log 1 q , for   all q J .
Now applying Property II.b for l = 2 , P 2 = V 2 , Q 2 = ( r , 1 r ) for all r J and for each m N 2 , we have
D ( V 2 U m , Q 2 U m ) = 1 m D ( V 2 , Q 2 ) ,
and by Property III
max f 1 m , r m , f 0 , 1 r m = 1 m max | f ( 1 , r ) | , | f ( 0 , 1 r ) | ,
for all r J . Our second assumption (AS2) states that there exists q 1 J such that | f ( 1 , q 1 ) | > | f ( 0 , 1 q 1 ) | . For some a { 1 , 1 } , we have
max { | f ( 1 , q 1 ) | , | f ( 0 , 1 q 1 ) | } = a f ( 1 , q 1 ) .
Further, by Lemma A2 and Equation (A13), there exists δ > 0 such that
max { | f ( 1 , r ) | , | f ( 0 , 1 r ) | } = a f ( 1 , r ) = a k 1 log 1 r , for   all r I δ ( q 1 ) .
Plugging this result back into Equation (A14) yields
max f 1 m , r m , f 0 , 1 r m = a k 1 m log 1 r , for   all r I δ ( q 1 ) .
Our third assumption (AS3) states that f 0 , 1 r m is never a strict maximum in Equation (A15), so that for some b { 1 , 1 } , we have
max f 1 m , r m , f 0 , 1 r m = b f 1 m , r m , for   all r I δ ( q 1 ) .
Let q = r m and let I = I δ m r m I δ m q 1 m . Then by Equation (A15)
f 1 m , q = k m m log 1 m q , for   all q I ,
where k m = a k 1 b . By Corollary A1, we can extend f 1 m , q to J
f 1 m , q = k m m log 1 m q , for   all q J ,
where k m ∈ { + k 1 , − k 1 } . Let n ∈ N 2 and let 0 < q 2 < ( n − 1 ) / ( 2 n ) , then q 2 ∈ J . By Property II.b for l = 2 , P 2 = ( ( n − 1 ) / ( 2 n ) , ( n + 1 ) / ( 2 n ) ) , Q 2 = ( q 2 , 1 − q 2 ) and m = ( n − 1 ) ( n + 1 ) , we have
D ( P 2 U m , Q 2 U m ) = 1 m D ( P 2 , Q 2 ) ,
and by Property III
max f 1 2 n ( n + 1 ) , q 2 ( n 1 ) ( n + 1 ) , f 1 2 n ( n 1 ) , 1 q 2 ( n 1 ) ( n + 1 ) = 1 ( n 1 ) ( n + 1 ) max f n 1 2 n , q 2 , f n + 1 2 n , 1 q 2 .
By Equation (A16), we have
max k 2 n ( n + 1 ) 2 n ( n + 1 ) log n 1 2 n q 2 , k 2 n ( n 1 ) 2 n ( n 1 ) log n + 1 2 n ( 1 q 2 ) = 1 ( n 1 ) ( n + 1 ) max f n 1 2 n , q 2 , f n + 1 2 n , 1 q 2 .
Since q 2 < ( n − 1 ) / ( 2 n ) < 1 / 2 , this yields
a k 2 n ( n + 1 ) ( n 1 ) 2 n log n 1 2 n q 2 = max f n 1 2 n , q 2 , f n + 1 2 n , 1 q 2 ,
for some a { + 1 , 1 } . Then we have that for the sequence h 1 2 : = n 1 2 n n
k p n p n log p n q = max | f p n , q | , | f 1 p n , 1 q | ,
for all ( p n , q ) h 1 2 × ( 0 , p n ) , where k p n = a p n k 2 n ( n + 1 ) . Our fourth and last assumption (AS4) is that | f ( 1 p n , 1 q ) | is a strict maximum only for a finite number of p n h 1 2 . More specifically, let
P ¯ = { p n h 1 2 : q ( 0 , p n ) s . t . | f ( 1 p n , 1 q ) | > | f ( p n , q ) | } .
Then (AS4) states that sup { P ¯ } < 1 2 where, for convention, sup { P ¯ } : = − ∞ if P ¯ = ∅ . Let n ′ : = 0 if P ¯ = ∅ , else there exists n ′ ∈ N 2 such that sup { P ¯ } = n ′ − 1 2 n ′ . Define h 1 2 ′ = { n + n ′ − 1 2 ( n + n ′ ) } n . Then for a fixed p n ∈ h 1 2 ′ , there exists q n ∈ ( 0 , p n ) such that
max b f p n , q n , | f 1 p n , 1 q n | = b f p n , q n ,
for some b { + 1 , 1 } . By Lemma A2, there exists δ > 0 such that
max b f p n , q , | f 1 p n , 1 q | = b f p n , q , for   all q I δ ( q n ) .
By Equation (A17), for I n = ( 0 , p n ) I δ ( q n ) , we have
f p n , q = k p n p n log p n q , for   all ( p n , q ) h 1 2 × I n ,
where the sign b was absorbed by the constant k p n . By Corollary A1, for a fixed p n * h 1 2 , we can extend f ( p n * , q ) to J, i.e.,
f ( p n * , q ) = k p n * p n * log p n * q , for   all q J .
For a fixed q * J , since by Property III f is continuous, we have
lim n → ∞ k p n = k ∈ { + k 1 , − k 1 } .
By Lemma A1, we can uniquely extend f ( p n , q * ) to J such that
f ( p , q * ) = k p log p q * , for   all p J .
Since this is true for all q * J , we have that
f ( p , q ) = k p log p q , for   all ( p , q ) ( J × J ) .
Note that k = 0 violates Property I since for some q ∈ J and Q 2 = ( q , 1 − q ) ≠ V 2 , we have
D ( V 2 , Q 2 ) = max { | f ( 1 , q ) | , | f ( 0 , 1 q ) | } = 0 .
By Property III, f is continuous in K and the following limits exist for all p , q J :
f ( 1 , q ) = lim p → 1 k p log p q = k log 1 q , f ( p , 1 ) = lim q → 1 k p log p q = k p log ( p ) , f ( 0 , q ) = lim p → 0 + k p log p q = 0 , f ( 0 , 1 ) = lim p → 0 + k p log ( p ) = 0 , f ( 0 , 0 ) = lim p → 0 + k p log p p = 0 , f ( 1 , 1 ) = lim p → 1 k p log p p = 0 .
Consequently,
f ( p , q ) = k p log p q , for   all ( p , q ) K .
Verification of Step 2. Up to this point we have shown that Equation (A10) not only defines a function which satisfies properties I, II and III, but also defines the only function which satisfies properties I, II and III for l = m = 2 given the following assumptions
AS1:
there exist p , q ∈ J such that | f ( 1 , p q ) | is a strict maximum in Equation (A11),
AS2:
there exists q 1 ∈ J such that | f ( 1 , q 1 ) | > | f ( 0 , 1 − q 1 ) | in Equation (A14),
AS3:
f 0 , 1 r m is never a strict maximum in Equation (A15),
AS4:
sup { P ¯ } < 1 2 .
All that is left to prove the theorem is to show that violating any of these assumptions also violates some property. First assume that (AS1), (AS2) and (AS3) are true but (AS4) is violated, i.e., sup { P ¯ } = lim n → ∞ p n = 1 2 for p n ∈ h 1 2 . Let p n ′ = 1 − p n , q ′ = 1 − q and let g = { 1 − p i } i be the sequence of ordered elements in P ¯ such that p i < p i + 1 . Then by Equation (A17), for all p n ′ ∈ g , there exists q ′ ∈ ( p n ′ , 1 ) such that
f ( p n , q ) = k p n ( 1 p n ) log 1 p n 1 q .
By Lemma A2, for a fixed p n * g , there exists δ > 0 such that
f p n * , q = k p n * ( 1 p n * ) log 1 p n * 1 q , for   all q I δ ( q ) .
Since this holds for all p n * g , we have
f p n , q = k p n ( 1 p n ) log 1 p n 1 q , for   all ( p n , q ) g × I δ ( q ) .
Similarly to Equation (A18), using the sequence g instead of the sequence h 1 2 , this result can be extended to J × J , i.e.,
f ( p , q ) = k ( 1 p ) log 1 p 1 q , for   all ( p , q ) ( J × J ) .
Applying Property II.b to l = m = 2 , P 2 = ( p , 1 p ) , Q 2 = ( q , 1 q ) with q p yields
D ( P 2 * U 2 , Q 2 * U 2 ) = 1 2 D ( P 2 , Q 2 ) .
Further, by Property III
max f p 2 , q 2 , f 1 p 2 , 1 q 2 = 1 2 max f ( p , q ) , f ( 1 p , 1 q ) .
However this contradicts Equation (A19) since
max k 1 p 2 log 1 p 2 1 q 2 , k 1 1 p 2 log 1 1 p 2 1 1 q 2 1 2 max k 1 p log 1 p 1 q , k p log p q .
Next, we assume that AS1 and AS2 are true but AS3 is violated, meaning that there exists $(m, r) \in \mathbb{N}_{\geq 2} \times I_\delta(q_1)$ such that $|f(0, 1 - r^m)|$ is a strict maximum in Equation (A15), i.e.,
$$
f(0, 1 - r^m) = a\, k_1\, m \log\frac{1}{r},
$$
for some $a \in \{-1, 1\}$. For $q' = 1 - r^m \in I_{\delta_m}(1 - q_1^m)$, we have
$$
f(0, q') = a\, k_1\, m \log\frac{1}{(1 - q')^{1/m}}.
$$
By Lemma A2, there exists $\delta' > 0$ such that
$$
f(0, q) = a\, k_1\, m \log\frac{1}{(1 - q)^{1/m}}, \quad \text{for all } q \in I_{\delta'}(q').
$$
By Corollary A1, we can extend $f(0, q)$ to $(0, 1)$, i.e.,
$$
f(0, q) = a\, k_1\, m \log\frac{1}{(1 - q)^{1/m}}, \quad \text{for all } q \in (0, 1).
$$
However, this implies that $f$ is discontinuous and violates Property III, since
$$
\lim_{q \to 1^-} f(0, q) = \lim_{q \to 1^-} a\, k_1\, m \log\frac{1}{(1 - q)^{1/m}} = \pm\infty.
$$
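With the form as reconstructed above, the divergence at the right endpoint is immediate (a sketch; $a = k_1 = 1$ and $m = 2$ are arbitrary choices):

```python
import math

a, k1, m = 1.0, 1.0, 2

def f0(q):
    # candidate form f(0, q) = a * k1 * m * log(1 / (1 - q)^(1/m))
    return a * k1 * m * math.log(1.0 / (1.0 - q) ** (1.0 / m))

# f0 grows without bound as q -> 1-, so it cannot be continuous
# with the boundary value f(0, 1) = 0
vals = [f0(1 - 10.0 ** -e) for e in (2, 4, 8)]
assert vals[0] < vals[1] < vals[2]
assert vals[2] > 18.0
```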
We now assume that AS1 is true but AS2 is violated, i.e.,
$$
|f(1, q)| \leq |f(0, 1 - q)|, \quad \text{for all } q \in J.
$$
For some $p, q \in J$ and $\delta > 0$, by Equation (A11) and Equation (A13),
$$
k_1 \log\frac{1}{pq} = |f(0, 1 - p)| + |f(0, 1 - q)|, \quad \text{for all } pq \in I_\delta(pq).
$$
Let $q_1, q_2 \in J$ and let $p q_1, p q_2 \in I_\delta(pq)$; then
$$
F(q_1) = k_1 \log\frac{1}{q_1} - |f(0, 1 - q_1)| = |f(0, 1 - p)| - k_1 \log\frac{1}{p} = k_1 \log\frac{1}{q_2} - |f(0, 1 - q_2)| = F(q_2).
$$
Therefore, $F(q) = -d$ is constant and
$$
|f(0, 1 - q)| = k_1 \log\frac{1}{q} + d, \quad \text{for all } q \in I_{\delta_p}(q).
$$
Plugging this back into Equation (A20), we see that $d = 0$, and for some $a \in \{-1, 1\}$, there exists $q^* \in I_{\delta_p}(q)$ such that
$$
f(0, q^*) = a\, k_1 \log\frac{1}{1 - q^*}.
$$
By Lemma A2, there exists $\delta' > 0$ such that
$$
f(0, q) = a\, k_1 \log\frac{1}{1 - q}, \quad \text{for all } q \in I_{\delta'}(q^*),
$$
and by Corollary A1,
$$
f(0, q) = a\, k_1 \log\frac{1}{1 - q}, \quad \text{for all } q \in J.
$$
However, this is a contradiction, since by Equation (A13), for $k_1 \neq 0$ and $x < \frac{1}{2}$, we have
$$
|f(1, x)| = k_1 \log\frac{1}{x} > k_1 \log\frac{1}{1 - x} = |f(0, 1 - x)|.
$$
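This last inequality is elementary and easy to confirm (taking $k_1 > 0$; the sample points are arbitrary):

```python
import math

k1 = 1.0  # k1 = 0 is excluded separately via Property I
for x in (0.05, 0.2, 0.49):
    # for x < 1/2 we have 1/x > 1/(1 - x), so the left log dominates
    assert k1 * math.log(1 / x) > k1 * math.log(1 / (1 - x))
```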
Note that $k_1 = 0$ violates Property I, since for any $q \in J$ and $Q_2 = (q, 1 - q) \neq V_2$, we have
$$
D(V_2, Q_2) = \max\left\{ k_1 \log\frac{1}{q},\; k_1 \log\frac{1}{1 - q} \right\} = 0.
$$
Finally, if AS1 is violated, then there do not exist $p, q \in J$ such that $|f(1, pq)|$ is a strict maximum in Equation (A11). Hence, for all $p, q \in J$, there exists $x \in \{p(1 - q),\, q(1 - p),\, (1 - p)(1 - q)\}$ such that
$$
|f(0, x)| = \max\big\{ |f(1, p)|,\, |f(0, 1 - p)| \big\} + \max\big\{ |f(1, q)|,\, |f(0, 1 - q)| \big\}.
$$
If $|f(0, x)|$ is a strict maximum, then by Lemma A2 there exists $\delta > 0$ such that
$$
|f(0, x)| = \max\big\{ |f(1, p)|,\, |f(0, 1 - p)| \big\} + \max\big\{ |f(1, q)|,\, |f(0, 1 - q)| \big\},
$$
for all $x \in I_\delta(x)$. If there does not exist $x \in \{p(1 - q),\, q(1 - p),\, (1 - p)(1 - q)\}$ such that $|f(0, x)|$ is a strict maximum, then there must exist $x \in \{p(1 - q),\, q(1 - p),\, (1 - p)(1 - q)\}$ such that Equation (A21) holds for all $x \in J$. In both cases, by Lemma A3, there exist $c, d \in \mathbb{R}$ such that
$$
f(0, x) = c \log(x) + d, \quad \text{for all } x \in J.
$$
However, this implies that $f$ is discontinuous, since
$$
\lim_{x \to 0^+} f(0, x) = \lim_{x \to 0^+} \big( c \log(x) + d \big) = \pm\infty,
$$
even though $f(0, 0) = 0$ by Lemma A4. □

Appendix C. Comparison between ID and EMD

Since the different information measures satisfy different properties, the distinctions that exist in a given system may differ depending on which information measure is used to compute integrated information. Here, using the same system as in Figure 5, we provide an example in which the cause purview with maximum integrated information is larger under the EMD measure (Figure A1a) than under the ID measure (Figure A1b).
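To make the difference concrete, a minimal sketch of the two measures on a pair of repertoires is given below. The ID takes the maximum over purview states of selectivity times informativeness, as in the figures above; EMD is taken here with unit ground distance between any two distinct states, in which case it reduces to the total variation distance. The distributions are hypothetical, not those of Figure A1:

```python
import math

def intrinsic_difference(P, Q):
    """ID(P, Q) = max over states of p * |log(p / q)|
    (selectivity times informativeness)."""
    return max(p * abs(math.log(p / q)) for p, q in zip(P, Q) if p > 0)

def emd_unit(P, Q):
    """Earth mover's distance with unit ground distance between
    distinct states, i.e., total variation: 0.5 * sum |p - q|."""
    return 0.5 * sum(abs(p - q) for p, q in zip(P, Q))

# hypothetical constrained vs. partitioned repertoires over four states
P = [0.70, 0.10, 0.10, 0.10]
Q = [0.25, 0.25, 0.25, 0.25]

# ID is dominated by the single high-probability state, whereas EMD
# aggregates the difference across all states, so the two measures can
# rank purviews differently
assert intrinsic_difference(P, Q) > emd_unit(P, Q)
```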
Figure A1. Comparison between earth mover’s distance (EMD) and ID. Using the same system S used in Figure 4a, we find the cause purview with maximum integrated information for the mechanism M = { A , B } in state m = { , } , which is larger when using the EMD measure (a) when compared to the ID measure (b). The integrated information when using the EMD measure is also larger than the ID measure.

References

1. Albantakis, L. Integrated information theory. In Beyond Neural Correlates of Consciousness; Overgaard, M., Mogensen, J., Kirkeby-Hinrup, A., Eds.; Routledge: London, UK, 2020; pp. 87–103.
2. Tononi, G.; Boly, M.; Massimini, M.; Koch, C. Integrated information theory: From consciousness to its physical substrate. Nat. Rev. Neurosci. 2016, 17, 450–461.
3. Oizumi, M.; Albantakis, L.; Tononi, G. From the Phenomenology to the Mechanisms of Consciousness: Integrated Information Theory 3.0. PLoS Comput. Biol. 2014, 10, e1003588.
4. Balduzzi, D.; Tononi, G. Integrated information in discrete dynamical systems: Motivation and theoretical framework. PLoS Comput. Biol. 2008, 4, e1000091.
5. Barbosa, L.; Marshall, W.; Streipert, S.; Albantakis, L.; Tononi, G. A measure for intrinsic information. Sci. Rep. 2020, 10, 1–9.
6. Tononi, G. The Integrated Information Theory of Consciousness. In The Blackwell Companion to Consciousness; John Wiley & Sons, Ltd.: Hoboken, NJ, USA, 2017; pp. 243–256.
7. Albantakis, L.; Marshall, W.; Hoel, E.; Tononi, G. What caused what? A quantitative account of actual causation using dynamical causal networks. Entropy 2019, 21, 459.
8. Tononi, G. Consciousness as integrated information: A provisional manifesto. Biol. Bull. 2008, 215, 216–242.
9. Marshall, W.; Albantakis, L.; Tononi, G. Black-boxing and cause-effect power. PLoS Comput. Biol. 2018, 14, e1006114.
10. Haun, A.; Tononi, G. Why Does Space Feel the Way it Does? Towards a Principled Account of Spatial Experience. Entropy 2019, 21, 1160.
11. Albantakis, L.; Tononi, G. The Intrinsic Cause-Effect Power of Discrete Dynamical Systems—From Elementary Cellular Automata to Adapting Animats. Entropy 2015, 17, 5472–5502.
12. Albantakis, L.; Tononi, G. Causal Composition: Structural Differences among Dynamically Equivalent Systems. Entropy 2019, 21, 989.
13. Marshall, W.; Gomez-Ramirez, J.; Tononi, G. Integrated Information and State Differentiation. Front. Psychol. 2016, 7, 926.
14. Gomez, J.D.; Mayner, W.G.P.; Beheler-Amass, M.; Tononi, G.; Albantakis, L. Computing Integrated Information (Φ) in Discrete Dynamical Systems with Multi-Valued Elements. Entropy 2021, 23, 6.
15. Csiszár, I. Axiomatic Characterizations of Information Measures. Entropy 2008, 10, 261–273.
16. Tononi, G. An information integration theory of consciousness. BMC Neurosci. 2004, 5, 42.
17. Moon, K. Exclusion and Underdetermined Qualia. Entropy 2019, 21, 405.
18. Pearl, J. Causality; Cambridge University Press: Cambridge, UK, 2009.
19. Janzing, D.; Balduzzi, D.; Grosse-Wentrup, M.; Schölkopf, B. Quantifying causal influences. Ann. Stat. 2013, 41, 2324–2358.
20. Ebanks, B.; Sahoo, P.; Sander, W. Characterizations of Information Measures; World Scientific: Singapore, 1998.
21. Krantz, S.G.; Parks, H.R. A Primer of Real Analytic Functions, 2nd ed.; Birkhäuser Advanced Texts Basler Lehrbücher; Birkhäuser: Basel, Switzerland, 2002.
22. Aczél, J. Lectures on Functional Equations and Their Applications; Dover Publications: Mineola, NY, USA, 2006.
Figure 1. Theory. (a) System S with four random variables. (b) Example of a mechanism M = { A , C } in state m = { , } constraining a cause purview Z = { B } and an effect purview Z = { B , D } . Dashed lines show the partitions. The bar plots show the probability distributions, that is, the cause repertoire (left) and effect repertoire (right). The black bars show the probabilities when the mechanism is constraining the purview, and the white bars show the probabilities after partitioning the mechanism.
Figure 2. Intrinsicality. (a) Activation functions without bias ( η = 0 ) and different levels of constraint ( τ = 0 , τ = 1 and τ = 10 ). (b) System S analyzed in this figure. The remaining panels show on top the causal graph of the mechanism M = { A } at state m = { 1 } constraining different output purviews and on the bottom the probability distributions of the purviews (effect repertoires). The black bars show the probabilities when the mechanism is constraining the purview, and the white bars show the unconstrained probabilities after the complete partition ψ 0 . The “*” indicates the state selected by the maximum operation in the intrinsic difference (ID) function. (c) The mechanism fully constrains the unit B in the purview Z = { B } ( τ B = 0 ), resulting in state z = { } defining the amount of intrinsic information in the mechanism as φ ( m , Z , ψ 0 ) = I D ( π e ( B | M = ) π e ψ 0 ( B | M = ) ) = π e ( B = | A = ) · | log ( π e ( B = | A = ) / π e ψ 0 ( B = | M = ) ) | = 1 · 0.69 = 0.69 . (d) After adding a slightly undetermined unit ( τ C = 1 ) to the purview ( Z = { B , C } ), the intrinsic information increases to 1.11 . The new maximum state ( z = { , } ) now has much higher informativeness ( | log ( π e ( B C = | A = ) / π e ψ 0 ( B C = | A = ) ) | = 1.26 ) but only slightly lower selectivity ( π e ( B C = | A = ) = 0.89 ), resulting in expansion. (e) When, instead of C, we add the very undetermined unit D to the purview ( τ D = 10 ), the new purview ( Z = { B , D } ) has a new maximum state ( z = { , } ) with marginally higher informativeness ( | log ( π e ( B D = | A = ) / π e ψ 0 ( B D = | A = ) ) | = 0.79 ) and very low selectivity ( π e ( B D = | A = ) = 0.55 ), resulting in dilution.
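The rounded values in panels (c–e) of the Figure 2 caption are consistent with intrinsic information computed as selectivity times informativeness under natural logarithms (a check on the caption's arithmetic; the probabilities are those stated in the caption):

```python
import math

def phi(selectivity, informativeness):
    # intrinsic information of a state: selectivity * informativeness
    return selectivity * informativeness

# (c): fully determined unit, selectivity 1, informativeness log(1/0.5)
assert abs(phi(1.0, math.log(2)) - 0.69) < 0.005
# (d): expansion -- a small loss in selectivity is outweighed by the
# gain in informativeness (caption value 1.11, rounded inputs)
assert abs(phi(0.89, 1.26) - 1.11) < 0.02
# (e): dilution -- low selectivity wipes out the informativeness gain
assert phi(0.55, 0.79) < phi(1.0, math.log(2))
```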
Figure 3. Information. (a) System S analyzed in this figure. All units have τ = 1 and η = 3 (partially deterministic AND gates). The remaining panels show on the left the time unfolded graph of the mechanism M = { A , B , C , D } constraining different output purviews and on the right the probability distribution of the purview Z = { A , B , C } (effect repertoires). The black bars show the probabilities when the mechanism is constraining the purview, and the white bars show the unconstrained probabilities after the complete partition. The “*” indicates the state selected by the maximum operation in the ID function. (b) The mechanism at state m = { , , , } . The purview state z = { , , } is not only the most constrained by the mechanism (high informativeness) but also very dense (high selectivity). As a result, it has intrinsic information higher than all other states in the purview and defines the intrinsic information of the mechanism as 0.27. (c) If we change the mechanism state to m = { , , , } , the probability of observing the purview state z = { , , } is now smaller than chance. However, this probability is still very different from chance and therefore very constrained by the mechanism (high informativeness). At the same time, the state is still very dense, meaning it has a probability of happening much higher than all other states (high selectivity). Together, they define the intrinsic information of the state, which is higher than the intrinsic information of all other states in the purview, defining the intrinsic information of the mechanism as 0.08.
Figure 4. Integration. (a) System S analyzed in this figure and in Figure 5. All units have τ = 1 and η = 0 (partially deterministic MAJORITY gates). The remaining panels show on the top the time unfolded graph of different mechanisms constraining different output purviews and on the bottom the probability distributions (effect repertoires). The black bars show the probabilities when the mechanism is constraining the purview, and the white bars show the partitioned probabilities. The “*” indicates the state selected by the maximum operation in the ID function. (b) The mechanism M = { A , E } in state m = { , } constraining the purview Z = { A , E } . While the complete partition has nonzero intrinsic information, the mechanism is clearly not integrated, as revealed by the MIP partition ψ * = { ( { A , } , { A } ) , ( { E , } , { E } ) } , resulting in zero integrated information. (c) The mechanism M = { A , B } in state m = { , } constraining the purview Z = { A , B } . The partition ψ * = { ( { A , } , { A , B } ) , ( { B } , { } ) } has less intrinsic information than any other partition, i.e., it is the MIP of this mechanism, and it defines the integrated information as 0.36. (d) The mechanism M = { A , B , D } in state m = { , , } constraining the purview Z = { E , F } . The tri-partition ψ * = { ( { A } , { } ) , ( { , } , { F } ) , ( { B , D } , { E } ) } is the MIP and it shows that the mechanism is not integrated, i.e., the mechanism has zero integrated information.
Figure 5. Exclusion. Causal graphs of different mechanisms constraining different purviews. The system S used in these examples is the same as in Figure 4a. Each line shows the mechanism M constraining different purviews Z. (a) The mechanism M = { A } at state m = { } . The bottom line shows the purview Z S with maximum integrated effect information and the MIP is the complete partition. (b) The mechanism M = { A , B , C , D } at state m = { , , , } . The bottom line is the purview Z S with maximum integrated effect information and the MIP is ψ * = { ( { A , B , C } , { A , B , C } ) , ( { D } , { } ) } .
Figure 6. Integrated cause information. (a) Causal graph of mechanism M = { A , B , C , D } at state m = { , , , } constraining the purview Z = { A , B , F } , which has the maximum integrated information of all Z S and defines the mechanism integrated information. (b) The black bars show the probabilities when the mechanism is constraining the purview (cause repertoire), and the white bars show the probabilities after the partition (partitioned cause repertoire). The “*” indicates the state selected by the maximum operation in the ID function and defines Z c * .

Share and Cite

Barbosa, L.S.; Marshall, W.; Albantakis, L.; Tononi, G. Mechanism Integrated Information. Entropy 2021, 23, 362. https://0-doi-org.brum.beds.ac.uk/10.3390/e23030362
