Essay

An Order-Theoretic Quantification of Contextuality

Department of Physics, Saint Anselm College, 100 Saint Anselm Drive, Manchester, NH 03102, USA
Submission received: 15 May 2014 / Revised: 16 September 2014 / Accepted: 16 September 2014 / Published: 22 September 2014
(This article belongs to the Special Issue Physics of Information)

Abstract

In this essay, I develop order-theoretic notions of determinism and contextuality on domains and topoi. In the process, I develop a method for quantifying contextuality and show that the order-theoretic sense of contextuality is analogous to the sense embodied in the topos-theoretic statement of the Kochen–Specker theorem. Additionally, I argue that this leads to a relation between the entropy associated with measurements on quantum systems and the second law of thermodynamics. The idea that the second law has its origin in the ordering of quantum states and processes dates to at least 1958, and possibly earlier. The suggestion that the mechanism behind this relation is contextuality is made here for the first time.

1. Background

The theories of categories, topoi and domains have all been extensively applied to the study of physical systems, both quantum and classical [1,2,3,4,5,6]. In this essay, I extend the work of Coecke and Martin [7,8] on states and measurements, within the neo-realist framework developed by Döring and Isham [9], to the concept of quantum contextuality. I begin with a review of the basic principles and definitions that will be used throughout this essay.
A category is a mathematical structure that consists of objects and “arrows”, along with the requirements of compositeness, associativity and identity. The arrows (called morphisms) represent a connective pattern between the objects. A category is anything that satisfies these conditions [6,10]. Consider a physical system S. In general, S can be described by real-valued physical quantities (or expectation values) Q assigned by the state of the system ρ. In other words, there is a distinction between a physical quantity and its real-valued representation. For example, consider an ideal gas whose pressure is measured to be 10 Pa. Pressure is the physical quantity, whereas “10” is the real-valued representation of that quantity at a given instant. The state of the system ρ represents a specific mapping from the physical quantities of the system to the real-valued representations of those quantities, ρ : Q → ℝ.
This description has two notable features. The first is that in proposing this description, Döring and Isham adopt a specifically “neo-realist” structure. They interpret Q and R as objects in a category, referring to them as the state-object and the quantity-value object, respectively. As such, the category of which they are a part is of a special type known as a topos. As they note, “[w]hatever meaning can be ascribed to the concept of the “value” of a physical quantity is encoded in (or derived from) this representation” [9]. In this manner, their interpretation is technically independent of any type of agent, i.e., no agent is necessary to instantiate the state. This is intentional on their part for reasons related to quantum gravity. However, it should be noted that their description does not strictly preclude the existence of an agent. The framework is generally agnostic on the issue and allows for a variety of interpretations, both with and without agents.
The second notable feature of this model is that it lends itself to a description that encompasses both classical and quantum systems under a single structure. Thus, ρ can represent the state of a classical, quantum, or hybrid system, as long as it is more generally viewed as an arrow in a topos. One final cautionary note should be made regarding topoi, however. The aforementioned description focuses on one of three primary requirements for defining topoi. The other two are quite technical, and I will only briefly mention them. One is that propositions about a system are represented by sub-objects of the state-object. These sub-objects form a Heyting algebra (as do any sub-objects of any object within the topos). The other is that, in general, an object in a topos may not be determined by its “points” per se. As such, microstates are not always useful (though they are not strictly prohibited). In fact, in quantum theory the state object has no points (microstates) at all. Döring and Isham show that this lack of microstates is equivalent to the Kochen–Specker theorem.
In this manner, Döring and Isham confront the basic (and age-old) question, “What is a thing?” Their definition bears a strong resemblance to Eddington’s conception of a physical “object” as a somewhat loosely defined bundle of properties wherein these properties refer to values of physical quantities [11]. In other words, an object is defined by its state ρ. This means that for the purposes of this paper, we take an object or system to be equivalent to its state, i.e., S := ρ. Even though the state itself is a mapping, it is imperative to remember that this does not necessarily require the existence of an agent. Indeed, Döring and Isham are adopting an expressly neo-realist view. Nevertheless, neither does it preclude such an agent. This is a very subtle point that is somewhat counter-intuitive, as we are used to thinking of states as being representations of physical states. Within the present framework, however, we have essentially abstracted away the difference. Thus, ρ can be both an object and an arrow in a category (and, indeed, category theory itself recognizes this duality: a slice category, for example, has arrows as its objects [10]). In the case of a complex physical system S that is composed of smaller physical systems S_i ⊂ S, we generalize this by saying that the state of S would be given by the set Σ of all sub-system states ρ_i ∈ Σ, where ρ_i : Q_i → R_i.
Let us now consider a set of such sub-system state-objects Σ together with a partial order ⊑ that includes certain intrinsic notions of completeness and approximation that are defined by this order. Together, they form a domain, (Σ, ⊑). Given two objects ρ, σ ∈ Σ, the statement ρ ⊑ σ is interpreted as saying that ρ contains (or carries) some information about σ, i.e., σ is “more informative” than ρ [8]. We take “information” to be anything that may be represented by a state ρ. In the event that ρ contains complete information about σ, then ρ = σ and ρ is said to be a maximal element (object) of the domain, in which case it is an example of an ideal element. An object that is not ideal is said to be partial. The order ⊑ is interpreted as an information order in the sense that if a process generates some ordered sequence (ρ_n) of elements that increases with respect to the information order, then ρ_n ⊑ ρ_{n+1} for all n, i.e., ρ_{n+1} is more informative than ρ_n. To be clear, information is defined as anything that can be represented by ρ; no ordering relation is necessary. As such, not all information necessarily has an inherent order. A special type of information order is an approximation ⪯, where the statement ρ ⪯ σ is interpreted as saying that ρ approximates σ [12]. We take this to mean that ρ carries essential information about σ. In other words, any “information path” leading to σ must pass through ρ. An element ρ of a poset is compact if ρ ⪯ ρ.
If one takes a neo-realist viewpoint, information about reality exists independent of any agent or observer. On the other hand, it often only makes sense to talk about ordering information in the context of an agent or observer who is obtaining that information. The ordering relation ⊑ has to do with potential evolutions of a state and depends directly on the agent, whereas ⪯ has to do with the intrinsic information about a state. For example, consider a set of three closed boxes. We are told that there is a ball in one of the three boxes, A, B and C. Our task is to determine the color of the ball. Clearly, in order to determine its color, we must first locate it. Suppose we open box A and find that it does not contain the ball. We only have two boxes left; we have eliminated one possibility. Thus, the states of the other two boxes are clearly more informative than the state of the box we just opened, i.e., ρ_A ⊑ ρ_B, ρ_C. If we open box B and do not find the ball, we immediately know it is in box C. Thus, ρ_A ⊑ ρ_B ⊑ ρ_C. We may now observe the ball to determine its color. Suppose, instead, that we opened box B first. In that case, ρ_B ⊑ ρ_A ⊑ ρ_C. The neo-realist view assumes that the presence of the ball in box C is independent of any agent: ρ_C contains essential information about the ball’s color, regardless of whether or not the box is opened. Thus, the relation ⪯ is independent of any agent, whereas ⊑ is not.
Both ordering relations can be thought of as a collection of arrows where each arrow points from one state (mapping) ρ_i to another. As such, the domain consisting of the objects ρ_i along with the set of arrows associated with the information order forms a category. Each object ρ_i by itself is, of course, an arrow ρ_i : Q_i → R_i in a topos with a state-object Q_i and a quantity-value object R_i. Each of the objects ρ_i represents some amount of partial information about the larger system, as noted above. It is the amount of partiality that helps to define the order. In order to quantify this, we now more formally define a measurement μ on a domain Σ as a function μ : Σ → ℝ⁺ that assigns to each informative object on the domain, ρ_i ∈ Σ, a number μρ_i ∈ ℝ⁺ that measures the amount of partiality contained in ρ_i. This number is referred to as the information content of the object ρ_i. Thus, moving “up” a given information order (such as the order in which we opened the boxes in search of the ball) represents decreasing partiality, i.e.,
ρ ⊑ σ ⇒ μρ ≥ μσ.        (1)
Further, if ρ is a maximal element of a domain, then μρ = 0 [7]. Such a state is said to be pure. Intuitively, this means that if an object is fully known, then there is no partiality. It is important to remember that μ may take any form as long as (1) and the condition:
μρ = 0 ⇒ ρ ∈ max(Σ)        (2)
are satisfied. This requirement then constrains the possible functional forms that μ may take. There are several conditions that are implicit in (1) and (2). The key is recognizing that we are quantifying an ordering relation of mappings based on how informative each mapping is relative to the others, i.e., we seek to essentially rank these mappings. For example, (1) tells us that σ is more informative than ρ and, thus, essentially ranks ρ and σ according to the information that they provide.
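To make conditions (1) and (2) concrete, the three-box search described above can be modelled in a few lines of Python, with the states read as the agent’s updated probability assignments over the boxes and μ chosen (in anticipation of Equation (3) below) as the Shannon entropy. The box labels, the assumption that the ball is in box C, and the helper names are purely illustrative.

from math import log2

def mu(state):
    # Shannon entropy of a state given as a probability vector; it plays
    # the role of the measurement: zero on pure (maximal) states and
    # largest on the completely mixed (least) state.
    return -sum(p * log2(p) for p in state if p > 0)

# States of knowledge while hunting for the ball (assumed to be in box C).
rho_bottom = (1/3, 1/3, 1/3)   # nothing opened yet: completely mixed
rho_A      = (0, 1/2, 1/2)     # box A opened and found empty
rho_B      = (0, 0, 1)         # box B also found empty: the ball must be in C

# The information order rho_bottom ⊑ rho_A ⊑ rho_B is mirrored by
# non-increasing entropy, as required by condition (1):
assert mu(rho_bottom) >= mu(rho_A) >= mu(rho_B)

# The maximal (pure) state carries no partiality, condition (2):
assert mu(rho_B) == 0

print(mu(rho_bottom), mu(rho_A), mu(rho_B))   # approximately 1.585, 1.0, 0.0

Any function satisfying (1) and (2) would do for the ranking; the entropy is simply the choice justified in what follows.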
Consider a measurement μ ρ on a state ρ. The measurement will presumably have a number of possible outcomes. Whatever those outcomes may be, the information supplied by the measurement provides a certain amount of insight into the object or system under consideration. Suppose we now add additional outcomes, all of which are zero probability. It seems trivial to assume that adding these zero-probability outcomes will not add or subtract from the information provided by a measurement, i.e., that the measurement should not supply us with any more or any less information than it would without the additional zero-probability outcomes. This assumption is known as expansibility, and it implies that adding zero-probability outcomes to a given measurement will not change the ordering relation (1). In the example involving the boxes and the ball, this would be akin to adding a third outcome that has zero likelihood of occurring, say, for example, the possibility that the ball is both in the box and not in the box simultaneously (which, for classical systems, such as balls and boxes, is clearly absurd). The addition of this outcome does not fundamentally alter the ordering relation of the states of the various boxes. (Note that this is especially true in a neo-realist interpretation.)
It is also natural for us to assume that there is symmetry in the outcomes of any given measurement, i.e., permuting them does not change the amount of information given by the measurement. In other words, it is irrelevant whether or not we find the ball in a given box; the presence or absence of the ball is of equal “worth” to us as far as an individual measurement is concerned.
Now, consider two measurements μ ρ and μ σ on a pair of states ρ and σ, respectively. We assume that the information gained from a joint measurement on the two systems is less than or equal to the sum of the information gained from individual measurements on the two systems. This, of course, is the well-known property of subadditivity (cf. [13,14]). When the information gained from the joint measurement is identical to the sum of the information from the individual measurements (i.e., when equality holds), the information is said to be additive [15]. While subadditivity appears intuitive, there strangely do exist systems that may be additive, but not subadditive [16].
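The expansibility and subadditivity properties just described are easy to verify numerically for simple distributions. The following sketch is only an illustration of the two conditions, using the Shannon entropy that is introduced formally below and an arbitrarily chosen joint distribution.

from math import log2

def H(p):
    # Shannon entropy (in bits) of a probability vector.
    return -sum(x * log2(x) for x in p if x > 0)

p = (0.5, 0.3, 0.2)

# Expansibility: appending a zero-probability outcome changes nothing.
assert abs(H(p) - H(p + (0.0,))) < 1e-12

# Subadditivity: for a joint distribution over (X, Y),
# H(X, Y) <= H(X) + H(Y), with equality when X and Y are independent.
joint = {('a', 0): 0.4, ('a', 1): 0.1,
         ('b', 0): 0.2, ('b', 1): 0.3}
H_joint = H(tuple(joint.values()))
H_X = H((0.5, 0.5))          # marginal over {'a', 'b'}
H_Y = H((0.6, 0.4))          # marginal over {0, 1}
assert H_joint <= H_X + H_Y

print(H_joint, H_X + H_Y)    # roughly 1.85 versus 1.97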
Thus, an appropriate functional form for μ would satisfy both (1) and (2), as well as the conditions of expansibility, symmetry, subadditivity and additivity (together with a standard normalization condition). Aczél, Forte and Ng have shown that only linear combinations of the Shannon and Hartley entropies satisfy the conditions of expansibility, symmetry, subadditivity, additivity and normalization. Coecke and Martin have shown that Shannon entropy satisfies (1) and (2). Thus, we are justified in choosing the Shannon entropy:
μρ = −∑_{i=1}^{n} ρ_i log ρ_i        (3)
as a functional form of μ, where the individual mappings ρ_i must be chosen such that they yield real-number representations that allow (3) to satisfy (1) and (2) [17]. Note also that Aczél et al. employ Forte’s interpretation of “experiments (measurements)” and “outcomes” as partitions of a set [18]. This is particularly appropriate for our purposes, given that the domain-, category- and topos-theoretic approach presented here is fundamentally built on a set-theoretic foundation. Note also that Forte shows that Shannon entropy is the only function defined on n-tuples that fully satisfies the conditions of expansibility, symmetry, subadditivity and additivity (as well as probabilistic normalization) [18]. Thus, though the Shannon entropy is technically a scalar, we are interested in its ability to rank the mappings, which allows us to preserve the structure of the n-tuple in the ordering relation itself.
A decreasing partiality, then, corresponds to decreasing uncertainty about a physical system, such that when the complete state is known, there is no uncertainty about it. As such, regardless of the functional form that μ takes, we use the term “relative” entropy to describe anything that satisfies (1) and (2). Notice that this is entirely consistent with the usual counter-arguments to Maxwell’s Demon, since it necessarily requires an agent (hence, the use of the term “relative”). One might assume that in adopting the general domain-theoretic definition of a measurement as a broader definition for entropy, I am taking entropy to be a measure of knowledge where, in this sense, “knowledge” refers to information transferred to an agent. Thus “knowledge” would necessarily be an agent-dependent concept. In fact, it is the exact combination of an agent and a neo-realist view that raises the false specter of Maxwell’s Demon in the first place: the only reason the entropy decreases to begin with is precisely because of the presence of the agent (“demon”), whose own entropy is not being properly accounted for. To be clear, I am expressly not adopting this view of entropy. Rather, entropy is interpreted here solely as a method of rank-ordering states based on their relative informativeness about a system.
Returning to the mathematical aspects of measurements on domains, consider some information order on a domain, such that ρ_1 ⊑ ρ_2 ⊑ ρ_3 ⊑ ⋯ ⊑ ρ_n. The sense of decreasing partiality suggests that the information order is, in Martin’s words, “going somewhere” [8]. That is:
ρ_1 ⊑ ρ_2 ⊑ ρ_3 ⊑ ⋯ ⊑ ⨆_{n∈ℕ} ρ_n ∈ Σ        (4)
where the element ⨆_{n∈ℕ} ρ_n is a maximal element of the domain. Thus, any process that leads to ρ = ⨆_{n∈ℕ} ρ_n will yield an entropy of:
μ(⨆_{n∈ℕ} ρ_n) = lim_{n→∞} μρ_n.        (5)
With this in mind, it makes sense to ask if there is some way in which an agent can maximize the efficiency associated with obtaining the information about a particular domain. For instance, it might be that under one particular choice of information order, two of the chosen sub-states contain redundant information. Ideally, any set of sub-states considered by an agent should be maximally informative with no redundancy, i.e., no two sub-states should contain duplicate information, but all of the sub-states considered together should give complete information about the state. To better understand this point, consider a simple system that may be completely characterized by a single measurement (“query”) that yields a “yes” or a “no” to the query. The state of the system is thus a map ρ : Q → {0, 1}, and the purpose of a measurement by an agent is to distinguish between ρ : Q ↦ 0 and ρ : Q ↦ 1, i.e., to determine which of the two possible states the system is actually in. Maximizing the information requires choosing the correct basis within which to measure the system. In fact, Schumacher and Westmoreland define information as the probability of successfully distinguishing between orthogonal measurements [19].
These ideas are nicely summarized in a mathematical form via a directed-complete partial order or dcpo. Intuitively, a dcpo is a poset in which every directed sub-set has a supremum. In other words, every sub-state of a state should be maximally knowable, by which I mean that the maximum amount of information for a sub-state should be, under ideal conditions, fully transferable to an agent if one exists. Formally, a dcpo is defined as follows.
Definition 1 (dcpo). Let (P, ⊑) be a poset. Then, a nonempty subset P′ ⊆ P is said to be directed if for all x, y ∈ P′ there exists z ∈ P′ such that x, y ⊑ z. The supremum ⨆P′ of P′ ⊆ P is the least of its upper bounds, when it exists. A dcpo is a poset in which every directed set has a supremum.
Any continuous dcpo is an example of a domain [7,8].
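As a toy illustration of Definition 1 (not part of the formal development), directedness and suprema can be checked by brute force on a small finite poset. Here the poset is the powerset of the three boxes ordered by inclusion, standing in, loosely, for a domain of partial information about which boxes have been eliminated.

from itertools import combinations

# Elements: subsets of {'A', 'B', 'C'} (say, the boxes already eliminated),
# ordered by set inclusion. This simply makes Definition 1 concrete.
boxes = {'A', 'B', 'C'}
elements = [frozenset(c) for r in range(len(boxes) + 1)
            for c in combinations(sorted(boxes), r)]

def leq(x, y):
    # The partial order: set inclusion.
    return x <= y

def is_directed(subset):
    # Nonempty, and every pair has an upper bound inside the subset.
    return bool(subset) and all(
        any(leq(x, z) and leq(y, z) for z in subset)
        for x in subset for y in subset)

def supremum(subset):
    # Least upper bound within the ambient poset, if it exists.
    ubs = [z for z in elements if all(leq(x, z) for x in subset)]
    least = [u for u in ubs if all(leq(u, v) for v in ubs)]
    return least[0] if least else None

D = [frozenset(), frozenset({'A'}), frozenset({'A', 'B'})]   # a chain
print(is_directed(D), supremum(D))   # True, and the set {'A', 'B'} as supremum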
In the example in which a colored ball was in one of three boxes, we considered a sequence of processes that resulted in one of two possible outcomes in each case: either the ball was in the box or it was not. Any time we have a situation in which a process has more than one possible outcome (be it two or ten), we need a formal way to distinguish between those outcomes. Consider, then, n + 1 boxes and assume that within one of these boxes is a ball [20]. Now, suppose that Alice and Bob are each tasked with locating the ball and determining its color. Prior to opening any boxes, the state of the system, representing both Alice’s and Bob’s knowledge, is given by the completely mixed state:
ρ = (1/(n + 1), …, 1/(n + 1))        (6)
where we are representing the fact that Alice and Bob both initially assume that the ball is equally likely to be in any of the boxes. In this case, the state (or their knowledge of the state, if you prefer) is a probability distribution. Once the ball is located, the state is then given by the pure state:
ρ_i = (0, …, 1, …, 0)        (7)
where i indicates in which of the n + 1 boxes the ball is found. In a neo-realist view, (7) is independent of any agent. If we say that ρ ∈ Σ^{n+1} represents the state as it appears to Alice and σ ∈ Σ^{n+1} represents the state as it appears to Bob, then the statement ρ ⊑ σ indicates that Bob knows more about the position of the ball than Alice. For example, perhaps Bob was able to look in the boxes faster than Alice. For every box in which the ball is not found, the state can be updated, since one possibility has been eliminated. As such, Bob could eliminate possibilities faster than Alice. In this way, the completely mixed state is the least element of the domain of states Σ^{n+1}. The set of all pure states would thus be the set of all maximal elements. Coecke and Martin use a similar example to show that there exists a unique order on classical two-states given by the set Σ^2 and that a partial order ⊑ on the more general Σ^n respects a mixing law under certain restrictions [7]. The most important point here is that classical states have a unique ordering relation. I will return to this later.
Now, let (Σ, ⊑) be a dcpo. For elements ρ, σ ∈ Σ, we write ρ ⪯ σ if and only if, for every directed subset Δ ⊆ Σ with σ ⊑ ⨆Δ, we have ρ ⊑ τ for some τ ∈ Δ. In order to simplify notation, we introduce the following sets:
↑ρ := {σ ∈ Σ : ρ ⊑ σ}  and  ↓ρ := {σ ∈ Σ : σ ⊑ ρ}
⇑ρ := {σ ∈ Σ : ρ ⪯ σ}  and  ⇓ρ := {σ ∈ Σ : σ ⪯ ρ}        (8)
where the arrows suggest the “direction” of the information order. In fact, these may be viewed as defining a specific information order. Therefore, for example, ↑ρ is the set of states σ for which σ is more informative than ρ, whereas ↓ρ is the set of states σ for which ρ is more informative than σ. Thus, for some dcpo (Σ, ⊑), a pair of elements ρ, σ ∈ Σ are said to be orthogonal if:
μ(↑ρ ∩ ↑σ) ∈ {0}        (9)
where {0} is the null set and Σ is assumed to have a least element ρ_⊥. In a way, this formalizes the neo-realist viewpoint, since it says that there can be no reality in which σ is more informative than ρ and simultaneously ρ is more informative than σ, i.e., there exists only one reality for a set of processes. This offers another way to distinguish between the relations ⊑ and ⪯: the former is a statement about knowledge and, hence, as mentioned above, is agent-dependent, whereas the latter is a statement about processes and (in a neo-realist interpretation) is agent-independent. Crucially, this is related to the fact that, as Coecke and Martin have shown, there exists a unique order on classical states [7]. I discuss this further in Section 2.
Now consider a specific agent’s lack of information. We can quantify this lack of information via the Shannon entropy (Equation (3)), such that conditions (1) and (2) hold. In other words, for some dcpo (Σ, ⊑) and ρ, σ ∈ Σ, we set:
μρ = −∑_{i=1}^{n} ρ_i log ρ_i   with   ρ ⊑ σ ⇒ μρ ≥ μσ        (10)
where for some value n = N, we have ρ = σ and μρ = μσ. Thus, as our knowledge of Σ increases, ρ approaches the maximal (ideal) element σ, i.e., ρ → σ. Simultaneously, the entropy decreases, such that μρ → μσ, and μ is said to be monotone.
In classical physics, which assumes a neo-realist viewpoint, we typically can infer knowledge about a system given some minimum amount of information. For example, suppose that the information about the state of a system is fully encoded in the number π. Suppose via some process that we have been given information about the state in the form of the decimal 3.14 ± 0.01. This is clearly not enough information for us to say with certainty that the state of the system is given by π. For example, the number √9.9 ≈ 3.146 falls within our range of uncertainty; it is algebraic, whereas π is transcendental. While perfect certainty in this example is impossible, since both π and √9.9 are irrational, we can at least establish a limit, such that, at some point, we can be fairly certain that the state is π. In other words, in classical systems, there may be a threshold ρ_min, such that if ρ_min ⊑ ρ, σ may be predicted with near certainty, i.e., once ρ passes some threshold, we may say with confidence that ρ → σ. A system whose future states may be predicted with certainty based on complete knowledge of its prior states may be said to be physically deterministic. Note that this definition of determinism inherently assumes that measurements do not disturb the system [21].
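The π example can be made concrete with a small sketch: as the reported interval around the measured value shrinks, the set of candidate “states” consistent with it shrinks, until only π survives. That is the sense in which a threshold ρ_min licenses near-certain prediction. The candidate list and the tolerances below are, of course, only illustrative.

from math import pi, sqrt

# Hypothetical candidate states, each encoded by a single real number.
candidates = {'pi': pi, 'sqrt(9.9)': sqrt(9.9), '22/7': 22 / 7}

def consistent(measured, uncertainty):
    # Candidates that fall inside the reported interval.
    return [name for name, value in candidates.items()
            if abs(value - measured) <= uncertainty]

# A coarse measurement cannot distinguish the candidates ...
print(consistent(3.14, 0.01))       # ['pi', 'sqrt(9.9)', '22/7']

# ... but past some threshold of precision only pi remains, and the state
# of the system can be predicted with near certainty.
print(consistent(3.1416, 0.0005))   # ['pi']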
Condition 1 (physical determinism). Let I ≡ σ be the maximal (ideal) element on a domain of physical states ρ_n ∈ (Σ, ⊑), where for some value n = N, ρ_n = σ. For a sequence of measurements μρ_1, μρ_2, …, μρ_N, if ρ_min ⊑ ρ_N and N is finite, then, if ρ = I and μρ = μI = 0, the system is said to be physically deterministic.
A hypothetically omniscient being who happens to be in possession of ρ_min ⊑ ρ_N (i.e., who possesses enough information about a system to fully predict its future states) is known as Laplace’s demon. A system for which N is not finite, but that otherwise satisfies all other aspects of the condition, may be said to be approximately deterministic. In the example in which the state of a system is given by π, there is clearly a point at which the system becomes approximately deterministic (though, to some extent, this point is arbitrary). It is clear from this that physical determinism inherently assumes a neo-realist viewpoint. This is not without its problems, since it assumes that, if the universe is a valid system, as we obtain more and more information about it, the entropy should decrease. Of course, it is well known that the exact opposite is actually happening. The explanation for this lies in quantum contextuality.

2. Contextuality

Consider three states, ρ, σ, τ ∈ Σ, and suppose that ρ ⪯ σ. Furthermore, suppose that σ ⊑ τ. This means that ρ carries essential information about σ and σ carries some (not necessarily essential) information about τ. In order for us to conclude from this that ρ ⪯ τ, we would need to know that the statement σ ⊑ τ is being made in the same context as ρ ⪯ σ. Coecke and Martin define context in the form of the following proposition [7].
Proposition 2 (context). For all states ρ, σ, τ ∈ Σ, if ρ ⪯ σ ⊑ τ and ⇓τ ≠ {0}, then ρ ⪯ τ.
I refer the interested reader to [7] for the proof of this proposition. Intuitively, what this says is that if a unique information order can be established for τ that includes ρ and σ, then ρ and τ have the same “context”, which means the former carries essential information about the latter. Considering the example of the ball in the boxes, since the ball is assumed to be in one and only one box, the states of all of the other boxes, whether or not they are opened, contain essential information about the ball: it is not in them. That is the essence of context. As it happens, the results of classical measurements are elements of continuous domains, and approximation on continuous domains is context independent [7]. Note that this is a statement about domains in the mathematical sense set out above and is not a statement concerning actual physical experiments. Recall that measurements here simply refer to a rank ordering of states. What this means is that for classical measurements, it is automatically true that if ρ ⪯ σ and σ ⊑ τ, then ρ ⪯ τ (for ρ, σ, τ ∈ Σ). This is at the root of the fact that there is a unique order on classical states, as I briefly mentioned in Section 1. Again, this formalizes the fact that classical states naturally have a neo-realist interpretation.
Quantum states are not necessarily representable by measurements on continuous domains and, so, are not necessarily context independent. Recall that on the most fundamental level, we are actually working with topoi and ρ is a mapping from a state object to a quantity-value object. In topos theory, the state object Q has no points (microstates), and as briefly mentioned above, Döring and Isham show that this is equivalent to the Kochen–Specker (KS) theorem [9]. The KS theorem essentially points out a conflict between two neo-realist assumptions: (i) that measurable quantities have definite values at all times and (ii) that the values of those measurables are intrinsic and independent of the measurement process used to obtain them [22]. In the language we developed in Section 1, Assumption (i) says that a definite state of a sub-system ρ : Q → R exists at all times, while Assumption (ii) says that the value corresponding to R is unique. For quantum systems, we are forced to abandon one or the other of the two assumptions.
The KS theorem thus establishes the notion of context as fundamental to quantum measurements: the state of the sub-system and/or the real-valued quantity that is associated with that state is dependent on the details of the measurement on that state. The notion of context set forth in Proposition 2 formalizes this idea in an order-theoretic way: for any set of states ρ, σ, τ ∈ Σ, if ρ ⪯ σ ⊑ τ, we can only conclude that ρ ⪯ τ if ⇓τ ≠ {0}. Recall that the relation ⊑ pertains to knowledge and is thus agent-dependent, whereas the relation ⪯ pertains to processes and is entirely independent of any agent. In other words, the relation ⪯ can only apply to a pair of quantum states if those states exist in the same context. Or, to put it another way, there is no unique order on quantum states. Let us consider two examples.
First, let us return to the entirely classical example of the ball in one of three boxes. We established that if the ball was in box C and the boxes were opened in the order A, then B, then C, the information order would be ρ_A ⊑ ρ_B ⊑ ρ_C. Conversely, if we swapped the order in which we opened boxes A and B, then the information order would be ρ_B ⊑ ρ_A ⊑ ρ_C. Since this is a classical system and the only information of interest is whether or not the ball is in a particular box, it should be clear that these two cases are essentially equivalent, i.e., ρ_A = ρ_B. As such:
ρ_A ⊑ ρ_B ⊑ ρ_C ≡ ρ_B ⊑ ρ_A ⊑ ρ_C
and we may substitute ⪯ for ⊑ in both cases. This is the essence of a classical system: it is completely independent of context and thus independent of any agent.
Now, consider a sequence of spin-1/2 measurements on a certain quantum system, as shown in Figure 1. Due to the nature of quantum mechanical spin, we generally assume that it is an intrinsic property, such that it has a definite value along a given axis if measured along that axis. In other words, a neo-realist interpretation would assume that, if the spin is measured along, say, axis A (corresponding to the basis {|a_+⟩, |a_−⟩}), and was found to be aligned with that axis, then regardless of any intermediate measurements along other axes, any subsequent measurement along A must necessarily show the spin to be aligned with that axis. Quantum mechanics, however, tells us that the probabilities associated with the two possible outcomes of a measurement along axis C (basis {|c_+⟩, |c_−⟩}) in the figure depend solely on the relative alignment of axes B and C and the state as it enters the device measuring along axis C. For example, if, as in the figure, the state exiting the middle device measuring axis B in basis {|b_+⟩, |b_−⟩} is |b_−⟩, then the probabilities for the outcomes from the third device are Pr(c_+) = sin²(½θ_BC) and Pr(c_−) = cos²(½θ_BC), where θ_BC is the angle between the B and C axes [23]. If, for instance, θ_BC = π/2, then Pr(c_+) = Pr(c_−) = 0.5, meaning that the state as measured along axis C could equally well be aligned or anti-aligned with that axis. This is independent of the outcome of any previous measurement. That means that if A and C represent the same axis, it is possible for the state to be measured to be |a_+⟩ initially, but then later to be found to be |a_−⟩. In the classical example with the ball in one of three boxes, this would be the equivalent of opening box A and finding the ball, then opening box B (finding nothing) and, finally, opening box C and finding the ball again.
Figure 1. Each box represents a measurement of the spin for a spin-1/2 particle along some axis, with the top output indicating that the state is aligned (+) with the measurement axis and the bottom output indicating that the state is anti-aligned (−) with the measurement axis. Red and blue lights on the top simply indicate to the experimenter which of the two results is obtained (e.g., red might indicate aligned and blue might indicate anti-aligned).
From these examples, it should be clear that no unique ordering relation exists for quantum states. Recall that the statement ρ ⪯ σ is interpreted as saying that ρ contains essential information about σ. Therefore, in the classical example, if the ball is in box C, it clearly is not in any of the other boxes. Thus, the other boxes contain essential information about the location of the ball: it is not there. In the quantum example, even though the prior state does affect the probabilities, it does not guarantee anything. It merely establishes whether an information order exists or not. For example, consider just the first two spin measurement devices in Figure 1, and as in the figure, assume that the measurement along A finds the state aligned with that axis. The probabilities associated with a measurement along axis B are Pr(b_+) = cos²(½θ_AB) and Pr(b_−) = sin²(½θ_AB), respectively. Let us consider three cases [24]:
(i): θ_AB = π/2,   (ii): 0 < θ_AB < π/2,   (iii): θ_AB = 0.
In Case (i), the probabilities for the outcome of a measurement along axis B will be Pr(b_+) = Pr(b_−) = 1/2. This is equivalent to a completely random outcome, meaning that we have no information whatsoever about ρ_B prior to making the measurement along B. Order theoretically, prior to the measurement, ρ_B is a completely mixed state as in (6) and is said to be a least element on the domain. What this means is that no information order can be established for ρ_A and ρ_B; neither ⊑ nor ⪯ apply, since knowledge of the outcome of the measurement along A does nothing to improve our chances of predicting the outcome of a measurement along B.
Now consider Case (ii). In this case, the probabilities are not 1/2, and so, knowledge of the outcome of the measurement along A does improve our chances of correctly predicting the outcome of a measurement along B, but it still does not guarantee a specific result. In this way, we may write ρ_A ⊑ ρ_B, since this partial knowledge does allow us to establish some kind of order on the states. Likewise, for Case (iii), if θ_AB = 0, the outcome of the measurement along B is guaranteed to be exactly the same as it was along A. As such, we may write ρ_A ⪯ ρ_B.
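The three cases can also be checked with a small Monte Carlo sketch of the sequence of devices in Figure 1, using the standard quantum probabilities cos²(½θ) and sin²(½θ) for successive projective spin measurements. The device angles chosen below are illustrative only; the point is that the outcome of the final A-axis measurement is independent of the first one whenever an intermediate measurement at 90° intervenes.

import random
from math import cos, pi

def measure(prev_angle, prev_outcome, axis_angle):
    # Projective spin-1/2 measurement along axis_angle (axes in one plane),
    # given the previous measurement axis and its outcome (+1 or -1).
    # The probability that the new outcome carries the same label as the
    # previous one is cos^2 of half the angle between the two axes.
    p_same = cos((axis_angle - prev_angle) / 2) ** 2
    return prev_outcome if random.random() < p_same else -prev_outcome

random.seed(0)
trials = 100_000
flips = 0
for _ in range(trials):
    a = +1                              # device A: keep only the + output
    b = measure(0.0, a, pi / 2)         # device B at 90 degrees to A
    c = measure(pi / 2, b, 0.0)         # device C along the same axis as A
    if c != a:
        flips += 1

# Roughly half the runs end with the final A-axis result disagreeing with
# the initial one: the intermediate B measurement destroys the unique
# information order a neo-realist reading of Case (iii) would require.
print(flips / trials)                   # close to 0.5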
In order to generalize this, it is necessary to introduce the order-theoretic notion of a basis. Consider some subset of states m on a domain (Σ, ⊑), such that m ⊆ Σ. The subset m is said to be a basis when m ∩ ⇓ρ is directed with supremum ρ for each ρ ∈ Σ. Recall that a dcpo is a poset in which every directed set has a supremum. Notice that this definition inherently includes the relation ⪯ (via ⇓ρ). As such, it codifies the notion that a basis is any subset for which the neo-realist assumption holds within that subset. In order to see why the subset must be directed with a supremum, consider just two boxes, one of which contains a ball of some indeterminate color. The state representing the box that contains the ball is the supremum, since it contains more information about the color of the ball than the box that does not contain the ball. It is necessarily directed, because, regardless of the order in which the boxes are opened, ρ_{no ball} ⊑ ρ_{ball}.
Now, consider two quantum states, ρ and σ. If they are measured in the same basis, i.e., if ρ, σ ∈ m ⊆ Σ, then we can establish relationships, such as ρ ⊑ σ. If they are measured in different bases, the establishment of any kind of information order is dependent on how “close” they are. We might be tempted to use the term “orthogonal” here to describe two bases for which no information order can be established, based on our example with the spin measurements. This can be a bit confusing, since each of these bases is individually said to be orthogonal, since measurements on different elements within a given basis should satisfy (9). In fact, it is the orthogonality of various information orders that defines a basis in the first place. Consider the classical example of the three boxes, A, B and C, with a ball in one of them. Without knowledge of the ball’s location (i.e., before measurement), there are three possible information orders:
ρ_A, ρ_B ⊑ ρ_C,    ρ_A, ρ_C ⊑ ρ_B,    ρ_B, ρ_C ⊑ ρ_A
given by ρ_C, ρ_B and ρ_A, respectively, depending on where the ball is located. Since only one may be correct in a neo-realist interpretation, a measurement on the intersection of any two must satisfy (9). Thus, we can think of measurements on these boxes (which entails opening them) as representing a method for identifying an orthogonal basis for measurements, where those measurements are aimed at determining the location and color of the ball. In other words, the set {ρ_A, ρ_B, ρ_C} =: m ⊆ Σ defines the basis. Classically, of course, this is somewhat irrelevant, but it serves to illustrate the definition.
Clearly, then, if a measurement on states at the intersection of any two information orders is anything other than an element of the null set, the two information orders must necessarily be defined on different bases that share some information, i.e., for ρ ∈ m ⊆ Σ and σ ∈ n ⊆ Σ:
μ(↑ρ ∩ ↑σ) ∉ {0} ⇒ m ≠ n.
This adequately generalizes Case (ii) above. Distinguishing between Case (i) and Case (iii) is a bit more problematic, since the result μ(↑ρ ∩ ↑σ) ∈ {0} does not automatically (in a mathematical sense) tell us whether ρ and σ belong to the same basis or not (recall that orthogonality is defined here by an information order).
Consider the set M(↑ρ ∩ ↑σ) of all measurements on states at the intersection of any two information orders, where μ ∈ M. Recall that μ is defined as a function on a domain that assigns to each informative object a number that measures the amount of “partiality”, such that continued measurements lead to decreasing partiality (see (1)). The greatest amount of partiality, then, corresponds to the greatest lack of knowledge (which is why μ is often most conveniently expressed as entropy). The greatest lack of knowledge regarding any two information orders is associated with complete randomness. In the example given in Figure 1, this corresponds to Case (i), where we may write the state in one basis in terms of a different basis’ least element. For example, suppose that the a basis is the z basis and the b basis is the x basis. The state of the system exiting the second spin measurement device in Figure 1 can be written in terms of the first:
|x_−⟩ = (1/√2)|z_+⟩ − (1/√2)|z_−⟩.        (13)
Two points should be clear from this. First, this demonstrates that the set M is partially ordered. Second, it allows us to clearly define a supremum for M as representing the case in which knowledge of a unique basis corresponds to a least element on one of the bases (i.e., as in the example given by (13)). Thus, we may say that for ρ ∈ m ⊆ Σ and σ ∈ n ⊆ Σ:
μ(↑ρ ∩ ↑σ) = ⨆M(↑ρ ∩ ↑σ) ⇒ m ⊥ n
where I use the symbol “⊥” intentionally to tie this to the concept of a least element and a completely mixed state (e.g., Equation (6)). This, then, adequately generalizes Case (i) above. Thus Case (iii) is simply the definition of orthogonality given by (9), i.e.,
μ(↑ρ ∩ ↑σ) ∈ {0} ⇒ m = n.
Intuitively, this says that within any given basis, there is a single, unique information order, and thus, neo-realism holds (within that basis; see Section 3). Any difference in basis necessarily eliminates the uniqueness of the ordering relation, and a neo-realist interpretation is no longer tenable under these conditions. It is the dependency of quantum systems on a measurement basis (vis-à-vis projective measurements) that is at the heart of this behavior. Classical systems do not suffer from this complication and, thus, are context independent, as discussed before.
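A short numerical check of the basis dependence behind (13), assuming only the usual two-dimensional spin formalism: expressing an x-basis eigenstate in the z basis gives equal probabilities of 1/2 for either z outcome, so perfect knowledge in one basis corresponds to the completely mixed (least) element with respect to the other.

import numpy as np

# z-basis eigenstates
z_plus  = np.array([1.0, 0.0])
z_minus = np.array([0.0, 1.0])

# x-basis eigenstate written in the z basis
x_minus = (z_plus - z_minus) / np.sqrt(2)

# Born-rule probabilities for the two z outcomes given the state |x_->:
p_plus  = abs(np.dot(z_plus,  x_minus)) ** 2
p_minus = abs(np.dot(z_minus, x_minus)) ** 2
print(round(p_plus, 10), round(p_minus, 10))     # 0.5 0.5, the mixed distribution

# The Shannon entropy of that distribution is maximal (one bit), so exact
# knowledge of the state in the x basis leaves the z-basis information
# order at its least element.
print(round(-(p_plus * np.log2(p_plus) + p_minus * np.log2(p_minus)), 10))   # 1.0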
This idea may now be related to Proposition 2 as a means of quantifying (quantum) contextuality, since some measurement μ(↑ρ ∩ ↑σ) is a number that necessarily lies on the interval 0 ≤ μ(↑ρ ∩ ↑σ) ≤ ⨆M(↑ρ ∩ ↑σ). A specific measurement context can then be defined as follows, and the value of μ(↑ρ ∩ ↑σ) is akin to a “distance” measure telling us how “far” one measurement context is from another.
Definition 3 (measurement context). For any three quantum states ρ ∈ j ⊆ Σ, σ ∈ k ⊆ Σ and τ ∈ l ⊆ Σ, where j, k and l are bases, if μ(↑ρ ∩ ↑σ) ∈ {0} and μ(↑σ ∩ ↑τ) ∈ {0}, then it follows that μ(↑ρ ∩ ↑τ) ∈ {0}, and ρ, σ and τ are said to have an identical context, i.e., ρ ⪯ σ ⊑ τ.
Note that this definition necessarily ensures that a unique information order exists, e.g., ⇓τ ≠ {0}. Any values of μ other than elements of the null set imply that, at best, the states only possess a partial context. Measurements on states that represent a supremum on the set of all possible measurements imply that those states share no context. Thus, we have established an order-theoretic method for quantifying contextuality.

3. Context, Determinism and Entropy

What does it mean to say that an information order may be defined within a basis? In Figure 1, a single particle always exits one and only one of the output channels of any given spin measurement device. Subsequent measurements of the same particle in that basis (i.e., without ever changing the basis) yield the same result. In other words, as long as a basis does not change, it behaves much like the classical example of a single ball that is inside one of multiple boxes. As such, it possesses a unique information order (just as the ball and box example does), and thus, neo-realism holds, even though only a single measurement is usually required to produce the outcome. For example, in Figure 1, the information order ρ_{b_+} ⊑ ρ_{b_−} remains unique as long as the measurement basis is not changed, even though only a single measurement is required to establish the order. This is simply due to the nature of quantum measurement devices. The same would be true, for example, in the classical case involving the boxes and the ball if all of the boxes could be opened and viewed simultaneously. Any restriction on this is practical, not theoretical.
This concept, then, is easily generalized to an n-dimensional basis m_n: though only a single measurement may be required in order to determine the state, this measurement nevertheless establishes a unique information order. As such, following the prescription given by (10), μ is monotone, so long as the basis does not change. It is thus trivially true that Condition 1 holds for any sequence of measurements on quantum states in which the basis does not change and the basis itself is finite-dimensional. Equation (10) tells us that this corresponds to a decrease in entropy.
Condition 1 does not hold, however, if the basis changes. This is because, as the above examples clearly demonstrate, each set of basis states has its own maximal element. In other words, inherent in the order theoretic definition of physical determinism given by Condition 1 is the notion of the uniqueness of a maximal element on the complete set of states for some physical system. A change of basis in a quantum system introduces a new maximal element associated with the new basis. Thus, the complete set of states for a quantum system does not possess a single unique maximal element, but, rather, possesses many. Therefore, a full characterization of the complete set of states for a quantum system is not physically deterministic. In addition, for a sequence of measurements on this complete set of states, the entropy will not decrease. I formalize this with the following remark.
Remark 1. For some complete set of quantum states Σ̄ measured on a complete set of finite-dimensional bases, no single, unique maximal element I exists.
This result is very closely related to the topos-theoretic statement of the Kochen–Specker (KS) theorem, as given in [9]. The details in [9] are quite involved, but some general remarks are in order.
The topos-theoretic statement of the KS theorem as given in [9] employs the concept of a spectral presheaf. While the details are beyond the scope of this essay, suffice it to say that a spectral presheaf is a particular type of category-theoretic mapping called a contravariant functor. For our purposes, it is essentially representative of the complete set of states on a system, and we will label it Σ̄ to distinguish it from Σ. Very roughly, a global element of Σ̄ would amount to a function that consistently assigns a definite value to every physical quantity at once. In other words, it guarantees that there should be one and only one maximal element. As stated in [9], the KS theorem only applies to systems described by a finite-dimensional Hilbert space with dim H > 2. What it says is that Σ̄ has no global elements.
In the notion of contextuality that I have developed here, there is no restriction on the dimensionality of the Hilbert space, per se. There is, however, a restriction to finite-dimensional bases. This prompts the following observation. In both statements of KS contextuality (i.e., here and in [9]), complications with infinite-dimensional basis states arise from the fact that they can have continuous parts. As Coecke and Martin show, continuous bases are context independent [7]. Thus, Remark 1 is analogous to the topos-theoretic statement of the KS theorem.
One of the key points related to the lack of a unique maximal element on sets of quantum states, as pointed out earlier, is the fact that if measurements are made on a complete set of bases, the overall entropy of the system will not decrease. Each change of basis essentially “resets” the system and, thus, the entropy. Consider a complete set of N measurement bases on a system of states Σ. If the system behaves classically, then (10) holds and a sequence of measurements will result in decreasing partiality, i.e., increasing knowledge about the system. This corresponds to the existence of a single, unique maximal element. It must necessarily be true, then, that the existence of multiple maximal elements would prevent the entropy from decreasing. We might imagine, though, a system for which there are N maximal elements corresponding to sub-systems, such that the entropy could decrease for any given sub-system individually. This, however, would require neo-realism to hold for a given sub-system regardless of whether the measurements on that sub-system are interrupted by measurements on a different sub-system.
Quantum systems, of course, do not behave in this manner. In the example given in Figure 1, there is no guarantee that the outcome of the third device will match that of the first device, even if they measure along the same axis. As such, it is possible that re-measuring in a given basis may result in a different maximal element for that basis. In other words, in quantum systems, it is possible for the maximal element of a given basis to change. This means that even if the complete set of bases is finite, the number of maximal elements may be infinite. This may at first appear paradoxical, but the paradox is resolved if we consider that the state as measured by the first device is not the same state as that measured by the third device, regardless of the outcome. This is because the object being measured fundamentally possesses a world line in spacetime. In the example given in Figure 1, the object is a localized qubit. However, as pointed out in [25,26], in a strict sense, the “location” of a qubit on a world line is fundamentally a part of its state, i.e., a localized qubit is really best understood as a sequence of quantum states associated with points on a world line. In other words, quantum states are constantly changing, since they are associated with objects that possess world lines. If the world line is infinite, then, regardless of the number of possible measurement bases, the number of maximal elements must be infinite, since one exists for each possible measurement. This, in fact, is the very essence of contextuality: neo-realism fails spectacularly. In this case, the overall entropy of the complete system will tend to increase. This warrants an additional remark.
Remark 2. For a sequence of measurements on a complete set of finite-dimensional bases for some complete set of quantum states ρ_n ∈ Σ̄, the entropy must be greater than or equal to zero, i.e., μρ_n ≥ 0 (recall that complete knowledge of a system corresponds to μρ = 0).
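A minimal sketch of the intuition behind Remark 2, under stated assumptions: each measurement is modelled as a projective spin query in whichever basis the device happens to use, and what is tracked is the agent’s predictive entropy for the next outcome. Within a fixed basis that entropy drops to zero after a single measurement and stays there, but every change to a mutually unbiased basis (at 90°) resets it to a full bit, so over a sequence that keeps changing basis the entropy never settles to zero.

from math import cos, log2, pi

def predictive_entropy(theta):
    # Entropy (bits) of the next spin outcome given the last result, where
    # theta is the angle between the last and the next measurement axes.
    p = cos(theta / 2) ** 2
    if p in (0.0, 1.0):
        return 0.0
    return -(p * log2(p) + (1 - p) * log2(1 - p))

# A sequence of measurement axes: repeat an axis, switch to a mutually
# unbiased one, switch back, and so on.
axes = [0.0, 0.0, pi / 2, pi / 2, 0.0, pi / 2, 0.0]

entropies = [1.0]                       # before any measurement: one full bit
for prev, nxt in zip(axes, axes[1:]):
    entropies.append(predictive_entropy(nxt - prev))

print([round(h, 3) for h in entropies])
# [1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0] -- repeating a basis drives the
# entropy to zero, but every basis change resets it.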
Remark 2 bears a striking resemblance to the second law of thermodynamics, sometimes called the “law of increase of entropy.” Indeed, the quantum mechanical origin of this law is not a new suggestion. In the 1958 English translation of their volume on statistical physics, Landau and Lifshitz write that “[i]t is more natural to think of the origin of the law of increase of entropy in its general formulation …as being bound up with the quantum mechanical effects”. They continue:
[I]f two [quantum mechanical] processes of interaction take place consecutively with a given quantum object (let us call them A and B) then the assertion that the probability of some result of process B is determined by the results of process A can be true only if process A takes place before process B.
[27] (p.31)
This statement is equivalent to the order-theoretic statement “ρ_A ⊑ ρ_B for ρ_A, ρ_B ∈ Σ̄ is not a unique information order” (where Σ̄ is understood to represent a quantum mechanical system). As I have argued, this lack of a unique ordering relation on quantum states is an order-theoretic manifestation of the phenomenon of quantum contextuality. Thus, it would appear as if quantum contextuality is at the root of the second law of thermodynamics. However, note that I have also shown that the lack of a unique ordering relation arises from the presence of an agent. This would seem to suggest, then, that the second law itself is somehow agent-dependent [28].

4. Summary and Concluding Remarks

In this essay, I have developed order-theoretic notions of determinism and contextuality on domains and topoi, in the process developing an order-theoretic quantification of contextuality that is compatible with the sense of the term embodied in the Kochen–Specker theorem. The order-theoretic view has allowed me to show that, while a unique ordering relation exists for classical states, no such unique relation exists for quantum states. I have argued that this lack of a unique ordering relation necessarily appears with the introduction of an agent. As such, quantum states do not allow for a neo-realist interpretation. This fact is a result of the contextual nature of quantum states. Thus, contextuality (at least in the sense given by the Kochen–Specker theorem) is deeply connected to the concept of a measuring agent. Contextuality also assures us that no sequence of measurements on quantum states can lead to the complete characterization of a quantum system in the same sense that such a sequence of measurements on classical states could completely characterize a classical system. In fact, the entropy associated with such a sequence of measurements on a quantum system will necessarily never decrease. Incidentally, this is perfectly consistent with the notion of conditional quantum entropy, which can be negative (cf. [29,30]). Entropy, as the term is applied here, simply refers to the entropy associated with measurements on the system that establish an information order. Nevertheless, this non-decrease in entropy for measurements on quantum systems bears a striking resemblance to the second law of thermodynamics when applied to sequential measurements. Indeed it essentially formalizes a suggestion made by Landau and Lifshitz in 1958. This does seem to suggest that the second law is an agent-dependent phenomenon. Additionally, it suggests some relation between contextuality and thermodynamics.
This essay suggests at least two pieces of additional work. First, Remark 1 should be put on more solid ground by stating it as a proposition, lemma or theorem supported by a formal proof. Second, a deeper relation between Remark 2 and the second law of thermodynamics should be found by formalizing the latter in order-theoretic terms on domains of generalized states. This would necessarily involve additional work related to coarse-graining and, potentially, could involve extensions to generalized probability theories. It is interesting to note in passing that coarse-graining is inherent in the derivation of the Kochen–Specker theorem in terms of spectral presheaves given by Döring and Isham. Given the close analogy between Remark 1 and their statement of the KS theorem, it stands to reason that additional work on coarse-graining in relation to Remark 2 should yield a deeper connection and should solidify any relation between contextuality and thermodynamics.

Acknowledgments

I wish to thank Bob Coecke for introducing me to categories and domains and for sending me a copy of New Structures for Physics, by which I was inspired. I also wish to thank two anonymous referees for very helpful comments that aided in making my arguments more succinct. In particular, I would like to thank one referee for introducing me to the work of Aczél, Forte and Ng, which has yielded additional insights. Finally, I acknowledge financial support from FQXi.

Conflicts of Interest

The author declares no conflict of interest.

References and Notes

  1. Martin, K. A foundation for computation. Ph.D. Thesis, Tulane University, New Orleans, LA, USA, 2000. [Google Scholar]
  2. Knuth, K.H. Deriving laws from ordering relations. In Bayesian Inference and Maximum Entropy Methods in Science and Engineering (AIP Conference Proceedings); Zhai, Y., Erickson, G., Eds.; American Institute of Physics: Melville, NY, USA, 2004; pp. 203–235. [Google Scholar]
  3. Coecke, B. Introducing categories to the practicing physicist. In What Is Category Theory? Sica, G., Ed.; Volume 30, Advanced Studies in Mathematics and Logic; Polimetrica: Milan, Italy, 2006; pp. 45–74. [Google Scholar]
  4. Abramsky, S.; Coecke, B. Categorical quantum mechanics. In Handbook of Quantum Logic and Quantum Structures; Elsevier: Amsterdam, The Netherlands, 2008; Volume II. [Google Scholar]
  5. Isham, C. Topos Methods in the Foundations of Physics. In Deep Beauty; Halvorson, H., Ed.; Cambridge University Press: Cambridge, UK, 2010. [Google Scholar]
  6. Spivak, D.I. Category theory for scientists. 2013; arXiv:1302.6946. [Google Scholar]
  7. Coecke, B.; Martin, K. A Partial Order on Classical and Quantum States. In New Structures for Physics; Lecture Notes in Physics, Volume 813; Springer: Berlin/Heidelberg, Germany, 2011; Chapter 10; pp. 593–683. [Google Scholar]
  8. Martin, K. Domain Theory and Measurement. In New Structures for Physics; Lecture Notes in Physics, Volume 813; Springer: Berlin/Heidelberg, Germany, 2011; Chapter 9; pp. 491–591. [Google Scholar]
  9. Döring, A.; Isham, C. “What is a Thing?”: Topos Theory in the Foundations of Physics. In New Structures for Physics; Lecture Notes in Physics, Volume 813; Springer: Berlin/Heidelberg, Germany, 2011; Chapter 13; pp. 753–937. [Google Scholar]
  10. Awodey, S. Category Theory, 2nd ed.; Oxford University Press: Oxford, UK, 2010. [Google Scholar]
  11. Eddington, A.S. The Philosophy of Physical Science; Cambridge University Press: Cambridge, UK, 1939. [Google Scholar]
  12. The notation ≪ is standard, but in order to avoid confusion with other inequalities, we adopt ⪯, so as to clearly distinguish it from the usual meaning of ≪ in inequalities.
  13. Lieb, E.H. Some convexity and subadditivity properties of entropy. Bull. Am. Math. Soc. 1975, 81, 1–13. [Google Scholar] [CrossRef]
  14. Nielsen, M.A.; Chuang, I.L. Quantum Computation and Quantum Information; Cambridge University Press: Cambridge, UK, 2000. [Google Scholar]
  15. Aczél, J.; Forte, B.; Ng, C. Why the Shannon and Hartley Entropies are ‘Natural’. Adv. Appl. Probab. 1974, 6, 131–146. [Google Scholar] [CrossRef]
  16. The difference can be better understood by noting that some systems obey the strong subadditivity condition, while others do not [14].
  17. It is important to remember that, while the functional form of μ generally depends on an agent, the basic conditions set by (1) and (2) always hold under a neo-realist interpretation.
  18. Forte, B. Why Shannon’s entropy. In Symposia Mathematica, Instituto Nazionale di Alta Mathematica ed.; Academic Press: New York, NY, USA, 1975; Volume XI, pp. 137–152. [Google Scholar]
  19. Schumacher, B.; Westmoreland, M. Quantum Processes, Systems, and Information; Cambridge University Press: Cambridge, UK, 2010. [Google Scholar]
  20. This example is adapted from one given in [7].
  21. There are, of course, other forms of determinism that may allow for invasive measurements. While I do not consider those here, a domain-theoretic treatment of such an alternate definition of determinism would be an intriguing line of enquiry.
  22. Kochen, S.B.; Specker, E. The problem of hidden variables in quantum mechanics. J. Math. Mech. 1967, 17, 59–87. [Google Scholar] [CrossRef]
  23. Sakurai, J. Modern Quantum Mechanics, revised ed.; Addison Wesley Longman: Reading, MA, USA, 1994. [Google Scholar]
  24. There are many more cases than these three, corresponding to various multiples of π and π 2 . I am simply using these as examples to illustrate how the ordering relations apply in the quantum case.
  25. Palmer, M.C.; Takahashi, M.; Westman, H.F. Localized qubits in curved spacetimes. Ann. Phys. 2012, 327, 1078–1131. [Google Scholar] [CrossRef]
  26. Palmer, M.C. Relativistic quantum information theory and quantum reference frames. Ph.D. Thesis, University of Sydney, Sydney, Australia, 2013. [Google Scholar]
  27. Landau, L.; Lifshitz, E. Statistical Physics, 2nd ed.; Volume 5, Course of Theoretical Physics; Addison Wesley: Reading, MA, USA, 1958. [Google Scholar]
  28. This idea is very similar to one that was relayed to me by Chris Adami, who has suggested that the second law actually refers to relative entropy. He has hinted at the concept in a few of his publications (see, for example, [31,32]), but to my knowledge, has never stated it explicitly in print.
  29. Cerf, N.J.; Adami, C. Negative Entropy and Information in Quantum Mechanics. Phys. Rev. Lett. 1997, 79. [Google Scholar] [CrossRef]
  30. Cerf, N.J.; Adami, C. Quantum extension of conditional probability. Phys. Rev. A 1999, 60. [Google Scholar] [CrossRef]
  31. Cerf, N.J.; Adami, C. Information theory of quantum entanglement and measurement. Physica D 1998, 120, 62–81. [Google Scholar] [CrossRef]
  32. Adami, C. Quantum Mechanics of Consecutive Measurements. 2010; arXiv:0911.1142. [Google Scholar]
