Article

Symmetry and Correspondence of Algorithmic Complexity over Geometric, Spatial and Topological Representations †

Hector Zenil, Narsis A. Kiani and Jesper Tegnér
1. Algorithmic Dynamics Lab, Centre for Molecular Medicine, Karolinska Institute, Stockholm 171 77, Sweden
2. Unit of Computational Medicine, Department of Medicine, Karolinska Institute, Stockholm 171 77, Sweden
3. Science for Life Laboratory (SciLifeLab), Stockholm 171 77, Sweden
4. Algorithmic Nature Group, Laboratoire de Recherche Scientifique (LABORES) for the Natural and Digital Sciences, Paris 75005, France
5. Biological and Environmental Sciences and Engineering Division (BESE), King Abdullah University of Science and Technology (KAUST), Thuwal 23955, Saudi Arabia
6. Computer, Electrical and Mathematical Sciences and Engineering Division (CEMSE), King Abdullah University of Science and Technology (KAUST), Thuwal 23955, Saudi Arabia
* Author to whom correspondence should be addressed.
† An online implementation for estimating graph complexity is available at http://www.complexitycalculator.com (accessed on 17 July 2018).
Submission received: 25 June 2018 / Revised: 12 July 2018 / Accepted: 14 July 2018 / Published: 18 July 2018

Abstract:
We introduce a definition of algorithmic symmetry in the context of geometric and spatial complexity that is able to capture mathematical aspects of different objects, using polyominoes and polyhedral graphs as case studies. We review, study and apply a method for approximating the algorithmic complexity (also known as Kolmogorov–Chaitin complexity) of graphs and networks based on the concept of Algorithmic Probability (AP). AP is a concept (and method) capable of recursively enumerating all properties of a computable (causal) nature, beyond statistical regularities. We explore the connections of algorithmic complexity—both theoretical and numerical—with geometric properties, mainly symmetry and topology, from an (algorithmic) information-theoretic perspective. We show that approximations to algorithmic complexity by lossless compression and an Algorithmic Probability-based method can characterize spatial, geometric, symmetric and topological properties of mathematical objects and graphs.

1. Introduction

The literature on connections between symmetry and complexity is sparse, in particular with respect to information theory and algorithmic complexity. In [1], a relation between symmetry and entropy was suggested in the context of molecular complexity, establishing connections between low symmetry and low entropy, or higher (classical—as opposed to algorithmic) information content.
A measure of structural complexity must not be based on classical symmetry. The use of symmetry in the realm of complexity is justified only in the context of combinatorial complexity, establishing equivalences among the diverse elements of an object with symmetry by way of statistical regularities. An illustrative example is an object such as the mathematical constant π, which would appear random to an observer uninformed about its deterministic origin, and which has no apparent symmetry even though it is often, if not always, involved in aspects of symmetry related to, e.g., the sphere. The constant π, however, can be briefly described algorithmically (by a formula or a computer program), and is therefore considered to be of low complexity [2]. It is clear, then, that classical and algorithmic information theory measure different properties. Moreover, Shannon entropy provides no method for accessing the underlying probability distributions, and therefore relies heavily on assumptions about the ensemble to which the object in question is supposed to belong.
There is therefore a need for tools to extract meaningful generalizations from graphs and networks in, for example, computational chemistry. The concept of symmetry has traditionally been important in areas ranging from pure mathematics to biology and molecular complexity in chemistry. In developmental biology, for example, symmetry is a powerful means of transferring information and building organisms by self-replication, communicating information across long distances. In chemistry, the symmetries of a molecule and of the molecular orbitals forming covalent bonds have been studied extensively.
In [3,4], we explore connections related to the structural networks of chemical compounds, with toxicological applications. Here, in the current paper, we provide a unified and robust platform for estimating the algorithmic complexity of a graph on the basis of algorithmic information theory, applicable to abstract objects, e.g., geometric ones such as polyominoes, polytopes and graphs.
We test this notion in terms of simplicity and complexity, and in our analysis we find connections between symmetry and complexity. We then examine properties of symmetry captured by graph descriptions, in order to test and illustrate the algorithmic measure of complexity for graphs and networks.

1.1. Polyhedra, Polytopes and Polyhedral Networks

Definition 1.
A polytope is a geometric object with flat sides, and may exist in any general number of dimensions n as an n-dimensional polytope or n-polytope.
For example, a two-dimensional polygon is a 2-polytope and a three-dimensional polyhedron is a 3-polytope.
Definition 2.
A Platonic solid is a regular, convex polyhedron.
There are five polyhedra with the properties of a Platonic solid in three dimensions, and 13 that are Archimedean.
Definition 3.
An Archimedean solid is a semi-regular convex polyhedron composed of regular polygons meeting in identical vertices, excluding the five Platonic solids and also the prisms and antiprisms.
Definition 4.
Every n-polytope has a dual structure, obtained by interchanging its vertices for facets, edges for ridges, and so on, generally interchanging its (j − 1)-dimensional elements for (n − j)-dimensional elements (for j = 1 to n − 1), while retaining the connectivity or incidence between elements.
As an example, the tetrahedron is dual to itself, the cube is dual to the octahedron, and the icosahedron is dual to the dodecahedron. Thus, to find the direct symmetry groups of all five Platonic solids, it suffices to find the groups of the tetrahedron, the cube and the dodecahedron.
Definition 5.
An n-polyhedral graph (or c-net) is a three-connected simple planar graph on n nodes.

1.2. Basics of Graph Theory

Definition 6.
A graph is an ordered pair G = ( V , E ) comprising a set V of nodes or vertices and a set E of edges or links, which are 2-element subsets of V.
Definition 7.
A graph is planar if it can be drawn in a plane without graph edges crossing.
Planarity is an interesting property because only planar graphs have duals.
Definition 8.
A dual graph of a planar graph G is a graph that has a vertex corresponding to each face of G, and an edge joining two neighbouring faces for each edge in G. Dual polyhedra share the same symmetry axes and planes.
Definition 9.
A uniform polyhedron is a polyhedron that has regular polygons as faces and is vertex-transitive, that is, there is an isometry mapping any vertex onto any other.
Every convex polyhedron can be represented in the plane or on the surface of a sphere by a three-connected planar graph. In what follows, we will use the terms node and vertex, and link and edge, interchangeably.

1.3. Polyominoes

A polyomino is a collection of cells of equal size that share at least one side. One can think of polyominoes as an extension of dominoes. Polyominoes are familiar from the use of tetrominoes (polyominoes of size 4) in the game of Tetris. The goal of Tetris can in fact be framed as the reduction of tetromino complexity by tiling. While solutions to Tetris are easily achieved by minimizing the spaces between tetrominoes, the method of algorithmic complexity, by means of finite approximations to algorithmic probability (AP), provides a (numerical as well as practical) alternative: rather than minimizing spaces, it produces interesting, efficient packings by minimizing shape complexity, that is, by producing tilings of low complexity.

2. Methodology

2.1. Computability/Recursivity

In computability theory, computable (or Turing-computable) functions are also called recursive functions. Computable functions are the mathematical formalization of the intuitive notion of an algorithm.
Definition 10.
A function $f : \mathbb{N}^k \to \mathbb{N}$ is computable if and only if there is a Turing machine that, given any k-tuple $x$ of natural numbers, will produce the value $f(x)$.
In other words, a function is computable if there exists an algorithm (a Turing machine) that can implement the function.
Definition 11.
A set of natural numbers is called recursively enumerable if there is a computable function f such that for each number n, f ( n ) is defined if and only if n is in the set.
Thus, a set is recursively enumerable if and only if it is the domain of some computable function.
That an operation or transformation is recursively enumerable simply means that it can be implemented as a computer program running on a universal Turing machine.
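To make the notion concrete, the following sketch (our own illustration, not part of the original text) enumerates a recursively enumerable set by dovetailing: the domain of a partial computable function is listed by interleaving inputs and step budgets, so that every halting input eventually appears even if some inputs never halt. The toy function used here (a Collatz-style iteration) is only a stand-in for an arbitrary program running on a universal machine.

```python
def f_steps(n, budget):
    """Run a toy partial computation on n for at most `budget` steps.
    Returns True if it halts within the budget, False otherwise.
    Here f(n) 'halts' when the 3n+1 (Collatz) iteration reaches 1."""
    x = n
    for _ in range(budget):
        if x == 1:
            return True
        x = 3 * x + 1 if x % 2 else x // 2
    return False

def enumerate_domain():
    """Yield, without repetition, every n on which f halts (dovetailing)."""
    seen = set()
    budget = 1
    while True:
        for n in range(1, budget + 1):
            if n not in seen and f_steps(n, budget):
                seen.add(n)
                yield n
        budget += 1

gen = enumerate_domain()
print([next(gen) for _ in range(10)])
```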

2.2. Group-Theoretic Symmetry

Definition 12.
Let X be an object in $\mathbb{R}^3$. A symmetry axis $l$ of X is a line about which there exists an angle $\theta \in (0, 2\pi)$ such that the object X, rotated by $\theta$ about $l$, appears identical to X.
Definition 13.
A symmetry plane P of X is a plane through which the mirror image of the object X appears unchanged.
In group theory, the symmetry group of an object is the group of all transformations under which the object is invariant, with composition as the group operation. Here, we consider symmetry groups in Euclidean geometry and in polyhedral networks.
For example, given an equilateral triangle, the counterclockwise rotation by 120 degrees around the centre leaves the triangle invariant as it maps every vertex and edge of the triangle to another one occupying exactly the same space.
Definition 14.
The direct symmetry group of an object X, denoted $S_d(X)$, is the symmetry group of X when only rotations are allowed.
Definition 15.
The full symmetry group of an object X, denoted $S(X)$, is the symmetry group of X comprising both rotations and reflections.
A (full) symmetry group is thus a set of symmetry-preserving operations, such as rotations and reflections. Dual polyhedra have the same symmetry group. For example, a tetrahedron T has a total of 24 symmetries, that is, $|S(T)| = 24$.
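As a small computational aside (our own sketch, not from the paper), the order of the full symmetry group can be recovered combinatorially for the tetrahedron and the cube as the number of automorphisms of the skeleton graph, giving 24 and 48, respectively, and matching $|S(X)|$.

```python
# Counting graph automorphisms by brute force as a combinatorial stand-in for |S(X)|.
from itertools import permutations, combinations

def automorphism_count(n, edges):
    edge_set = {frozenset(e) for e in edges}
    count = 0
    for perm in permutations(range(n)):
        # perm is an automorphism iff it maps every edge onto an edge
        if all(frozenset((perm[a], perm[b])) in edge_set for a, b in edge_set):
            count += 1
    return count

tetrahedron = list(combinations(range(4), 2))              # K4: skeleton of the tetrahedron
cube = [(a, b) for a in range(8) for b in range(a + 1, 8)
        if bin(a ^ b).count("1") == 1]                      # Q3: vertices as 3-bit strings

print(automorphism_count(4, tetrahedron))  # 24
print(automorphism_count(8, cube))         # 48
```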
While symmetry groups may be continuous or discrete, here we are interested in recursively enumerable discrete symmetry groups and actions such that, for every point of the space, the set of images of the point under the isometries in the full symmetry group is a recursively enumerable set.
In what follows, we will use this recursively enumerable variation of the group-theoretic characterization of mathematical symmetry, and symmetry will thus be taken as a spatial or graph-theoretic invariant under recursively enumerably defined transformations.
The direct and full symmetry groups of the tetrahedron, of the cube and octahedron, and of the dodecahedron and icosahedron are, respectively, $A_4$ and $S_4$; $S_4$ and $S_4 \times Z_2$; and $A_5$ and $A_5 \times Z_2$, suggesting a natural but limited partial order of symmetry: the larger the group index, the more symmetric the object. However, it is not clear how to compare different types of symmetry.
The sphere is characterized by spherical symmetry, a type of continuous symmetry, as it has an infinite number of symmetries (both rotations and reflections). Here, we will only require that the scalars and reflection lines involved be recursively enumerable.
Here, we will advance a notion of symmetry based on both group-theoretic and algorithmic information, in order to find the correspondences between the two. As a result, we will propose a total order of symmetry.

2.3. Information Theory

A random configuration of, let us say, a gas in a room has little symmetry but high entropy, whereas a specific symmetric configuration has low entropy, because any change will destroy the symmetry in favour of a more likely disordered configuration. However, the extent to which entropy can be used to characterize symmetry is limited to apparent symmetry only, and it is not robust to changes in object description, owing to its dependence on probability distributions [5]. Entropy measures the uncertainty associated with a random variable, i.e., the expected value of the information in a message (e.g., a string) in bits. Formally,
Definition 16.
The Shannon entropy of a string x is defined by:
$H(x) = -\sum_{i=1}^{n} p(x_i) \log_2 p(x_i).$
Shannon entropy allows the estimation of the average minimum number of bits needed to encode a string (or object), based on the alphabet size and the occurrence of the alphabet symbols under a probability distribution. Despite its limitations, classical information theory (Shannon entropy) can be consistent with symmetry. One of its main properties is the symmetry property $H(x_1, x_2, \ldots, x_n) = H(x_{\tau(1)}, x_{\tau(2)}, \ldots, x_{\tau(n)})$, where $\tau$ is any permutation of $1, \ldots, n$; this property also holds for variations of H such as block entropy, where the bits of the string are taken in tuples, e.g., bytes. In other words, H remains invariant under reordering of the elements. While entropy may appear to preserve certain properties of symmetrical objects, it also fails to some extent to characterize symmetry. For example, $s_1 = 0000011111$ and $s_2 = 1101001011$ have $H(s_1) = H(s_2)$ because there is a permutation $\tau$ that sends $s_1$ onto $s_2$ and vice versa. However, H misses the fact that $s_2$ looks significantly less symmetrical than $s_1$, which has a reflection symmetry at the centre bit. When taking 2-bit elements as units for H (hence applying what we will call block entropy, denoted by $H_2$ for blocks of size two bits), this is resolved and $H_2(s_1) < H_2(s_2)$, but then we may miss possible 2-bit symmetries, and so on, taking $H_b$ for $b = 1, \ldots, n$, where $n = |s_i|$. We will see that, for algorithmic complexity, there are broadly similar results, but with far more interesting subtleties.
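The following short sketch (ours; the choice of non-overlapping blocks is one of several possible conventions) makes the distinction concrete: single-symbol entropy is invariant under any permutation of a string, while block entropy $H_2$ can separate a string with an ordered profile from a scrambled permutation of it.

```python
from collections import Counter
from math import log2
import random

def entropy(seq):
    counts = Counter(seq)
    total = len(seq)
    return -sum(c / total * log2(c / total) for c in counts.values())

def block_entropy(s, b):
    # non-overlapping blocks of b symbols
    blocks = [s[i:i + b] for i in range(0, len(s) - b + 1, b)]
    return entropy(blocks)

s1 = "0000011111"                                  # reflection-symmetric profile
s2 = "".join(random.sample(s1, len(s1)))           # a random permutation of s1
print(entropy(s1), entropy(s2))                    # identical: H is permutation-invariant
print(block_entropy(s1, 2), block_entropy(s2, 2))  # H_2 typically differs
```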

2.4. Graph Entropy

We will define the Shannon entropy (or simply entropy) of a graph G represented by its adjacency matrix A ( G ) by
Definition 17.
$H(A(G)) = -\sum_{i=1}^{n} P(A(x_i)) \log_2 P(A(x_i)),$
where G is the random variable with n possible outcomes (all possible adjacency matrices of size $|V(G)|$). For example, a completely disconnected graph G, with all adjacency matrix entries equal to zero, has entropy $H(A(G)) = 0$, because the number of different symbols in the adjacency matrix is 1. However, if different numbers of 1s and 0s occur in $A(G)$, then $H(A(G)) \neq 0$. In general, we will use block entropy in order to detect more graph regularities (through the adjacency matrix) at a greater resolution. For block entropy, however, there is an extra factor to be taken into account: the adjacency matrix of a graph is not invariant under graph relabellings. This means that the correct calculation of the block entropy (not relevant for 1-bit entropy) of a graph has to take into consideration all possible adjacency matrix representations for all possible labellings. Therefore,
Definition 18.
The Block Entropy of a graph is given by:
$H(G) = \min\{ H(A(G_L)) : G_L \in L(G) \},$
where $L(G)$ is the group of all possible labellings of G.
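A brute-force sketch of Definition 18 (our own construction; flattening the adjacency matrix row by row into b-bit blocks is one possible decomposition) is shown below. It is feasible only for very small graphs, since it minimizes over all $|V|!$ labellings.

```python
from collections import Counter
from itertools import permutations
from math import log2

def entropy(seq):
    counts, total = Counter(seq), len(seq)
    return -sum(c / total * log2(c / total) for c in counts.values())

def adjacency_bits(edges, order):
    """Row-major 0/1 string of the adjacency matrix under a given labelling."""
    n = len(order)
    pos = {v: i for i, v in enumerate(order)}
    m = [[0] * n for _ in range(n)]
    for a, b in edges:
        m[pos[a]][pos[b]] = m[pos[b]][pos[a]] = 1
    return "".join(str(x) for row in m for x in row)

def graph_block_entropy(vertices, edges, b=2):
    best = float("inf")
    for order in permutations(vertices):
        bits = adjacency_bits(edges, order)
        blocks = [bits[i:i + b] for i in range(0, len(bits) - b + 1, b)]
        best = min(best, entropy(blocks))
    return best

# Example: a 4-cycle (highly symmetric) versus a path on 4 nodes.
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
path = [(0, 1), (1, 2), (2, 3)]
print(graph_block_entropy(range(4), cycle))
print(graph_block_entropy(range(4), path))
```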

2.5. Algorithmic Complexity

The algorithmic (Kolmogorov–Chaitin) complexity of a string x is the length of the shortest effective description of x. There are several versions of this notion. Here, we use mainly the plain complexity, denoted by $C(x)$, and the conditional plain complexity of a string x given a string y, denoted by $C(x|y)$, which is the length of the shortest effective description of x given y. The formal definitions are as follows. We work over the binary alphabet $\{0,1\}$. A string is an element of $\{0,1\}^*$. If x is a string, $|x|$ denotes its length. Let M be a universal Turing machine that takes two input strings and outputs one string. For any strings x and y,
Definition 19.
The algorithmic complexity of x conditioned by y with respect to M is defined as
$C_M(x|y) = \min\{ |p| : M(p, y) = x \}.$
We will often drop the subscript M in $C_M(x|y)$ because of the invariance theorem, and we will also write $C(x)$ instead of $C(x|\lambda)$ (where $\lambda$ is the empty string). If n is a natural number, $C(n)$ denotes the algorithmic complexity of the binary representation of n. Prefix-free complexity K is defined in a similar way, the difference being that the universal machine is required to be prefix-free; that is, only self-delimited programs are valid programs and no program is a prefix of any other, a property necessary to keep $0 < m(s) < 1$ a (semi-)probability measure.

2.6. Algorithmic Probability

Algorithmic Probability is a seminal concept in the theory of algorithmic information. The algorithmic probability of a string s is a measure that describes the probability that a valid random program p produces the string s when run on a universal Turing machine U. In equation form, this can be rendered as
Definition 20.
$m(s) = \sum_{p : U(p) = s} 1/2^{|p|}.$
That is, the sum over all the programs p for which U outputs s and halts.
The Algorithmic Probability [6,7,8] measure m ( s ) is related to algorithmic complexity K ( s ) in that m ( s ) is at least the maximum term in the summation of programs, given that the shortest program carries the greatest weight in the sum.
Definition 21.
The Coding Theorem further establishes the connection between m ( s ) and K ( s ) as follows:
$| -\log_2 m(s) - K(s) | < c, \qquad (1)$
where c is a fixed constant, independent of s. The Coding Theorem implies [9,10] that one can estimate the algorithmic complexity of a string from its frequency. By rewriting Equation (1) as:
$K_m(s) = -\log_2 m(s) + O(1), \qquad (2)$
where $O(1)$ is a constant independent of s, one can see that it is possible to approximate K by approximations to m (such finite approximations have also been explored in [11] on integer sequences), with the added advantage that $m(s)$ is more sensitive to small objects [12] than the traditional approach to K using lossless compression algorithms, which typically perform poorly on small objects (e.g., small graphs).
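As a purely illustrative sketch (our own toy construction, not the actual CTM, which exhaustively runs large sets of small Turing machines), the following code estimates complexity from the output frequency of random programs in a trivial two-instruction language, applying $K(s) \approx -\log_2 m(s)$: frequently produced outputs receive low estimated complexity.

```python
import random
from collections import Counter
from math import log2

def run(program):
    """Interpret a toy program: '0' appends a 0, '1' appends the flipped last bit."""
    out = [0]
    for op in program:
        out.append(0 if op == "0" else 1 - out[-1])
    return "".join(map(str, out))

random.seed(0)
counts = Counter()
trials = 100_000
for _ in range(trials):
    length = random.randint(1, 8)
    program = "".join(random.choice("01") for _ in range(length))
    counts[run(program)] += 1

for s, c in counts.most_common(5):
    print(s, round(-log2(c / trials), 2))   # frequent outputs -> low estimated complexity
```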
As shown in [13], estimations of algorithmic complexity are able to distinguish complex from random networks (of the same size, or asymptotically growing), which are both in turn distinguished from regular graphs (also of the same size). K calculated by the BDM assigns low algorithmic complexity to regular graphs, medium complexity to complex networks following Watts–Strogatz or Barabási–Albert algorithms, and higher algorithmic complexity to random networks. That random graphs are the most algorithmically complex is clear from a theoretical point of view: nearly all long binary strings are algorithmically random, and so nearly all random unlabelled graphs are algorithmically random [13].

The Coding Theorem and Block Decomposition Methods

The Coding Theorem Method (CTM) [12,14] is rooted in the relation provided by Algorithmic Probability between frequency of production of a string from a random program and its Kolmogorov complexity (Equation (1), also called the algorithmic Coding theorem, as distinct from another coding theorem in classical information theory). Essentially, it uses the fact that the more frequent a string (or object), the lower its algorithmic complexity; and strings of lower frequency have higher algorithmic complexity.
Here, we report results that would not have been possible without methods specific enough to correctly identify small patterns representing signatures of the algorithmic content of an object, namely CTM and BDM. We show that the AP-based measures constitute either an equivalent or a superior alternative to other, more limited measures, such as lossless compression algorithms, widely used as estimators of algorithmic complexity, and to Shannon entropy and related measures that are based on traditional statistics and require broad assumptions to be encoded in their underlying probability distributions.
The Block Decomposition Method (BDM) determines the algorithmic complexity of a matrix by quantifying the likelihood that a random Turing machine operating on a two-dimensional tape (also called a termite or Langton's ant [15]) can generate it and halt. It decomposes the adjacency matrix of a graph into smaller matrices for which we can numerically calculate the algorithmic probability, by running a large set of small two-dimensional deterministic Turing machines, and hence the algorithmic complexity, upon application of the algorithmic Coding Theorem. The overall complexity of the original adjacency matrix is then the sum of the complexities of its parts, albeit with a logarithmic penalization for repetitions, given that n repetitions of the same object add only $\log_2 n$ to its overall complexity, as one can simply describe a repetition in terms of the multiplicity of the first occurrence. The following definition of graph complexity also introduces BDM.

2.7. The Algorithmic Complexity of a Graph

We define the algorithmic complexity estimation of a graph as follows:
Definition 22.
The Kolmogorov complexity of a graph G is defined as follows:
$BDM(G, d) = \sum_{(r_u, n_u) \in A(G)_{d \times d}} \left[ \log_2(n_u) + CTM(r_u) \right],$
where $CTM(r_u)$ is the approximation of the algorithmic (Kolmogorov–Chaitin) complexity of the subarray $r_u$ obtained with the Algorithmic Coding Theorem (Equation (2)) method, which we denote by CTM, and $A(G)_{d \times d}$ represents the set of pairs $(r_u, n_u)$ obtained when decomposing the adjacency matrix $A(G)$ of G into non-overlapping squares of size d by d. In each pair $(r_u, n_u)$, $r_u$ is one such square and $n_u$ its multiplicity (number of occurrences). From now on, $K_{BDM}(G, d = 4)$ may be denoted simply by $K(G)$, but it should be taken as an approximation to $K(G)$ unless otherwise stated (e.g., when referring to the theoretical true $K(G)$ value).
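The following is a minimal sketch of the decomposition in Definition 22 (our own simplification). The `ctm_lookup` table is a hypothetical placeholder: in practice the values come from the precomputed CTM tables behind the authors' online calculator, and matrices whose size is not a multiple of d need additional handling (here leftover rows and columns are simply dropped).

```python
from collections import Counter
from itertools import product
from math import log2

def blocks(matrix, d):
    """Yield non-overlapping d x d sub-blocks of a square 0/1 matrix."""
    n = len(matrix)
    for i in range(0, n - n % d, d):
        for j in range(0, n - n % d, d):
            yield tuple(tuple(row[j:j + d]) for row in matrix[i:i + d])

def bdm(matrix, ctm_lookup, d=4):
    """Sum CTM(r_u) + log2(n_u) over distinct blocks r_u with multiplicity n_u."""
    counts = Counter(blocks(matrix, d))
    return sum(ctm_lookup[block] + log2(n_u) for block, n_u in counts.items())

# Toy usage with 2x2 blocks and MADE-UP lookup values (not real CTM numbers).
dummy_ctm = {(bits[:2], bits[2:]): 3.0 + sum(bits)
             for bits in product((0, 1), repeat=4)}
A = [[0, 1, 1, 0],
     [1, 0, 0, 1],
     [1, 0, 0, 1],
     [0, 1, 1, 0]]
print(bdm(A, dummy_ctm, d=2))
```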
Considering relabellings, the correct evaluation of the algorithmic complexity of a graph is given by:
Definition 23.
$K(G) = \min\{ K(A(G_L)) : G_L \in L(G) \},$
where $A(G_L)$ is the adjacency matrix of $G_L$ and $L(G)$ is the group of all possible labellings of G.
One contribution of these algorithmic-based measures is that the two-dimensional versions of both CTM and BDM are native bidimensional measures of complexity and thus do not destroy the two-dimensional structure of an adjacency matrix.
This is achieved by generalizing the Algorithmic Coding Theorem to two-dimensional Turing machines, which makes it possible to define the probability of production of an adjacency matrix as the output of a randomly chosen deterministic two-dimensional-tape Turing machine, without any transformation of the array into a string. Approaches requiring such a transformation depend on yet another mapping between graphs and strings, unlike our approach, which deals natively and directly with the complexity of the graph adjacency matrix.
Most algorithms implementing a computable measure of graph complexity are based either on a computable graph-theoretic/topological feature or on Shannon entropy. An example of an improvement on Shannon entropy is the introduction of lossless graph compression, such as Graphitour [16]. A drawback of graph compression is that lossless compression based on popular algorithms such as LZW (Gzip, PNG, Compress), traditionally considered an approximation to algorithmic (Kolmogorov) complexity, is more closely related to Shannon entropy than to algorithmic complexity (which we will denote by K). This is because these popular algorithms implement a method that traverses an object looking for trivial repetitions, from which a basic grammar is produced based on frequency.
A major improvement in approximating the algorithmic complexity of strings, graphs and networks, based on the concept of algorithmic probability (AP), offers different, more stable and more robust approximations to algorithmic complexity by way of the so-called Algorithmic Coding Theorem (cf. below). The method, called the Coding Theorem Method, suffers the same drawback as other approximations of K, including lossless compression, related to the additive constant involved in the Invariance Theorem as introduced by Kolmogorov, Chaitin and Solomonoff [7,8,17,18], which guarantees convergence towards K, though at an unknown rate. The chief advantage of the algorithm, however, is that algorithmic probability (AP) [6,7,8] looks not only for repetitions but for algorithmically causal segments, such as the deterministic digits of π, without the need for distributional assumptions. As with π, a graph produced by a recursively enumerable process will eventually be characterized by algorithmic probability as having low algorithmic complexity, unlike by traditional compression algorithms, which essentially implement a version of classical block Shannon entropy. In previous work, this kind of recursively enumerable graph has been featured [5], illustrating how inappropriate Shannon entropy can be when a universal, unbiased measure is needed, where no feature has to be pre-selected.
The method studied and applied here was first defined in [2,19] and is, to the greatest possible extent, independent of the observer. For example, unlike popular implementations of lossless compression used to approximate algorithmic complexity (such as LZW), the method based on Algorithmic Probability averages over a large number of computer programs found to reproduce the output, thus making the choice of enumeration less relevant compared to the more arbitrary choice of a lossless compression algorithm. A further advantage is that, when the measure diverges from algorithmic complexity (because approximating it would require unbounded computational power), it collapses into Shannon entropy [2].
We have previously reported connections between algebraic and topological properties using algorithmic complexity [13], where we introduced a definition and numerical method for labelled graph complexity; and in applications to the clustering capabilities of network superfamilies in [20], as well as in applications to biology [21], where we also introduced a generalization of unlabelled graph complexity. We have also provided theoretical estimations of the error of approximations to the algorithmic complexity of graphs and complex networks in [21], offering both exact and numerical approximations.
The algorithm here considered can deal with a variety of graph types including directed graphs and weighted graphs. The resulting structure could be used for representation and classification as we will see.

3. Results

3.1. Algorithmic Characterization of Geometric Symmetry

It is not difficult to see that a group of (recursively enumerable) symmetries is of low conditional algorithmic (Kolmogorov–Chaitin) complexity, because a recursively enumerable transformation can be encoded by a computer program of fixed length implementing it.
Here, we propose an algorithmic information characterization of symmetry:
Definition 24.
We define a transformation of a recursively enumerable object s into s′ as a transformation T such that there exists $c = |p|$, with $U(p) = T$, for which $|K(s) - K(s')| \leq c$,
where c is the length of a program p that implements the transformation T on a (prefix-free) universal Turing machine U, and so $K(s') = K(s) + c'$ with $c' \leq c$, i.e., $K(s') \approx K(s)$. The Kolmogorov complexity of the most symmetric geometrical object, the n-dimensional hypersphere, can be given by:
Definition 25.
$K(c) = \min\{ |p| : p(r, n) = \{ x \in \mathbb{Q}^n : \|x\| = r \} \},$
where x is a computable number. It can be seen that K depends only on the dimension n and the radius r.
Let $T(s)$ be the recursively enumerable symmetry group of the object s. It follows that, if there is a recursively enumerable function $t \in T$ such that $s' = t(s)$, with a recursively enumerable inverse $t^{-1}$ mapping s′ back to s, then $|K(s) - K(s')| < c$. In other words, K is invariant.
For example, a tetrahedron s can be placed in 12 distinct positions by rotation alone, denoted $s_i$ for $i \in \{1, \ldots, 12\}$. These 12 rotations form the rotation (symmetry) group. Let t be one of these rotations. Without loss of generality, if t is recursively enumerable, let $p(t)$ be the program implementing t such that $s_i = t(s)$; then $|K(s) - K(s_i)| < |p(t)|$.
Just as in Euclidean geometry, algorithmic complexity remains invariant under symmetric transformations in any space, by the same argument, with the only requirement that $t \in T$ be a recursively enumerable transformation acting on a recursively enumerable space.
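A toy numerical companion to this argument (our own, using zlib's DEFLATE as a very crude, entropy-like stand-in for K): we compare descriptions of a bitmap and of its rotated and reflected copies. The theoretical statement is that $|K(s) - K(t(s))|$ is bounded by the length of a program implementing the transformation t; the compressor here only serves to make the setup concrete.

```python
import zlib

def compressed_len(bitmap):
    flat = "".join("".join(map(str, row)) for row in bitmap)
    return len(zlib.compress(flat.encode(), 9))

def rotate(bitmap):                      # 90-degree rotation
    return [list(row) for row in zip(*bitmap[::-1])]

def reflect(bitmap):                     # horizontal mirror
    return [row[::-1] for row in bitmap]

# an L-shaped 16x16 bitmap
bitmap = [[1 if (i < 12 and j < 4) or (i >= 12 and j < 12) else 0
           for j in range(16)] for i in range(16)]

variants = {"original": bitmap, "rotated": rotate(bitmap), "reflected": reflect(bitmap)}
for name, b in variants.items():
    print(name, compressed_len(b))
```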

3.2. Correspondence of Algorithmic Ranking

The previous results suggest a symmetry ranking of objects subject to symmetry comparison that can be achieved numerically as demonstrated in Figure 1. The algorithmic characterization of symmetry is a particular case of the more general case of algorithmic complexity alone and is thus rather an application of algorithmic complexity to symmetry with the provisions of recursivity/computability required.
Figure 2 illustrates the way in which estimations of the algorithmic information content of graphs capture symmetry breaking, demonstrating the ability of these methods to characterize it. Figure 2 (top) shows how removing edges from a complete graph drives the estimates much higher, while removing nodes preserves the symmetries of the complete graph (by producing another complete graph), so that the estimates remain stable for graphs of growing size. In contrast, Figure 2 (bottom) shows that random graphs are immune to symmetry breaking, being deprived of symmetries to begin with: node and edge removal have the same effect.
Figure 2 (top) illustrates how symmetry breaking generates information. The results show how to detect and produce information content starting from a highly symmetrical object (a complete graph). Similar results were reported in [13]. They also suggest that growing a symmetrical object converges and cannot be a source of new information if the symmetry remains unbroken; only symmetry breaking can generate the differences that produce information, moving a symmetrical object from simplicity to complexity. Likewise, breaking a random object does not produce information that could not already have been generated by the underlying source of random graphs.
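The experimental setup behind Figure 2 can be sketched as follows (our own code; we use 1-bit entropy and zlib as baseline proxies rather than BDM, and, as the paper notes, such baselines largely fail to register the symmetry breaking that the AP-based measure detects).

```python
import zlib
from collections import Counter
from math import log2

def complete_adj(n):
    return [[1 if i != j else 0 for j in range(n)] for i in range(n)]

def entropy(bits):
    counts, total = Counter(bits), len(bits)
    return -sum(c / total * log2(c / total) for c in counts.values())

def proxies(matrix):
    flat = "".join("".join(map(str, row)) for row in matrix)
    return round(entropy(flat), 4), len(zlib.compress(flat.encode(), 9))

n = 64
full = complete_adj(n)
node_removed = complete_adj(n - 1)             # deleting a node: still a complete graph
edge_removed = [row[:] for row in full]
edge_removed[0][1] = edge_removed[1][0] = 0    # deleting one edge breaks the symmetry

for name, m in [("complete", full), ("node removed", node_removed),
                ("edge removed", edge_removed)]:
    print(name, proxies(m))
```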

3.3. Classification of Polyominoes

When exploring free polyominoes (shapes equivalent under the symmetry group are considered the same) of 4 and 5 non-zero (black) cells, we found that the classification by AP-based measures yields more robust characterizations, in the sense that they vary less with changes in the description of the same object. In Figure 3, we classified the same set of polyominoes by their geometric versus their topological representations. For the geometric representation, we considered a binary array of black and white cells of size n × m, such that every row and every column contains at least one non-zero element. The topological representation is a network whose vertices represent the corners of the geometrical representation and whose edges are cell boundaries, at least one of which is shared with another cell.
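A sketch of how the two representations can be built from a list of unit-cell coordinates (our own reading of the construction: the bitmap is trimmed to its bounding box, and the graph takes cell corners as vertices and cell sides as edges):

```python
def bitmap(cells):
    """Binary array of the polyomino trimmed to its bounding box."""
    xs, ys = [c[0] for c in cells], [c[1] for c in cells]
    w, h = max(xs) - min(xs) + 1, max(ys) - min(ys) + 1
    grid = [[0] * w for _ in range(h)]
    for x, y in cells:
        grid[y - min(ys)][x - min(xs)] = 1
    return grid

def corner_graph(cells):
    """Graph whose vertices are cell corners and whose edges are cell sides."""
    edges = set()
    for x, y in cells:
        corners = [(x, y), (x + 1, y), (x + 1, y + 1), (x, y + 1)]
        for a, b in zip(corners, corners[1:] + corners[:1]):
            edges.add(frozenset((a, b)))
    vertices = {v for e in edges for v in e}
    return vertices, edges

# the T-tetromino as cell coordinates
t_tetromino = [(0, 0), (1, 0), (2, 0), (1, 1)]
print(bitmap(t_tetromino))
v, e = corner_graph(t_tetromino)
print(len(v), "vertices,", len(e), "edges")
```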
While the precise orderings of the solid and network representations in Figure 3 (and the additional results reported in the Appendix) may look different, the high correspondence between them is evident, and for both representations the BDM values divide the objects into the same two clusters. In both cases, the clusters with the greatest values contain almost the same elements, and the intersection of the elements with the lowest values is also significant, hence the high correlation value.
Spearman rank correlation values are given in Figure 3, establishing the ability of the AP-based measure (BDM) to deal with small patterns in a robust but sensitive fashion, unlike the two other methods we compared it with (a lossless compression algorithm and Shannon entropy). The compression algorithm used is Compress, based on a Unix shell compression program, itself based on the LZW compression algorithm, one of the most common dictionary-based lossless compression algorithms (in the same family as those behind, e.g., png, zip and gzip). Shannon entropy is applied to the arrays, taking as probability space the set of all possible uniformly distributed binary arrays of the size of either the bitmap or the adjacency matrix of the network representation.

3.4. Polytope Profiling

Regular polyhedra, the so-called Platonic solids, yield networks of low complexity compared to the sphere, whose complexity we showed grows with $\log_2 n$ and $\log_2 r$, where r is the radius and n the dimension of the hypersphere.
Polygonal approximations for rendering a sphere depend only on the number of polygons used, and therefore will tend to be of greater complexity than, say, Platonic solids. Platonic solids are rough approximations of spheres with low polygon face counts, and hence provide lower bounds for any other sphere approximation.
Unlike the Platonic solids, which are each formed by a single type of polygon, each Archimedean polyhedron is formed by two or more different polygons, and they are thus expected to be slightly more complex, as shown numerically in Figure 1. The figure shows the expected agreement between the classical information content and the algorithmic complexity of objects of similar complexity, in particular graphs and their duals, where vertices are exchanged for edges and edges for vertices of the original graphs. Figure 4 reports the change of complexity as a function of dimension for a hypercube of growing dimension, and as a function of symmetric group for the set of all graphs grouped by symmetric group, extracted from GraphData[] in the Wolfram Language.
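For the hypercube series in Figure 4 (left), the adjacency matrices of $Q_n$ can be generated as below (our own sketch; the printed value is a normalized zlib-based proxy per matrix bit, which is not expected to reproduce the BDM values reported in the figure).

```python
import zlib

def hypercube_adj(n):
    """Adjacency matrix of Q_n: vertices are n-bit strings, edges join strings
    differing in exactly one bit."""
    size = 2 ** n
    return [[1 if bin(i ^ j).count("1") == 1 else 0 for j in range(size)]
            for i in range(size)]

for n in range(1, 8):
    m = hypercube_adj(n)
    flat = "".join("".join(map(str, row)) for row in m).encode()
    proxy = len(zlib.compress(flat, 9)) * 8 / len(flat)   # compressed bits per matrix bit
    print(n, round(proxy, 3))
```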

4. Conclusions

The complexity of graphs has historically been based on graph-theoretic and, more recently, entropy-based indices. While some of these may continue to be of interest in approaches to molecular similarity (such as QSPR and QSAR [23]), here we have instead explored more universal approaches to the problem of feature-free approximation of the complexity of objects, both geometric and topological. An essential requirement for a complexity measure is that it integrate all of an object's properties, something that only approximations to a noncomputable measure, such as CTM/BDM, can achieve. We have shown suggestive connections between algorithmic information, geometry and symmetry in the context of graphs and mathematical objects. For the objects studied in this work (polyominoes and Platonic solids), together with our previous results [13,21], we have demonstrated that we can characterize objects by estimations of algorithmic complexity, and that their characterization is robust whether they are taken as geometric or as topological objects, going beyond statistical or group-theoretic characterizations and achieving better results than means such as those considered in this paper based on lossless compression and classical information theory.

Author Contributions

Conceptualization, H.Z.; Methodology, H.Z. and N.A.K.; Software, H.Z.; Validation, H.Z.; Formal Analysis, H.Z.; Investigation, H.Z., N.A.K. and J.T.; Resources, H.Z., N.A.K. and J.T.; Data Curation, H.Z.; Writing—Original Draft Preparation, H.Z.; Writing— Review & Editing, H.Z. and J.T.; Visualization, H.Z.; Supervision, H.Z. and J.T.; Project Administration, H.Z.; Funding Acquisition, H.Z. and J.T.

Funding

This research was funded by the Swedish Research Council (Vetenskapsrådet), grant number 2015-05299.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Figure A1. Another illustration of the distribution of polyominoes according to different order parameters. Entropy is sometimes too sensitive and sometimes too insensitive, lossless compression (Compress) is consistently too insensitive, and the AP-based measure BDM appears to be robust, assigning similar complexity values and respecting the relative order shown in Figure 3.

References

1. Lin, S.-K. Correlation of Entropy with Similarity and Symmetry. J. Chem. Inf. Comput. Sci. 1996, 36, 367–376.
2. Zenil, H.; Soler-Toscano, F.; Kiani, N.A.; Hernández-Orozco, S.; Rueda-Toicen, A. A Decomposition Method for Global Evaluation of Shannon Entropy and Local Estimations of Algorithmic Complexity. arXiv 2016, arXiv:1609.00110.
3. Kiani, N.A.; Shang, M.; Zenil, H.; Tegnér, J. Predictive Systems Toxicology. In Computational Toxicology—Methods and Protocols, Methods in Molecular Biology; Nicolotti, O., Ed.; Springer: Berlin/Heidelberg, Germany, 2017.
4. Zenil, H.; Kiani, N.A.; Shang, M.-M.; Tegnér, J. Algorithmic Complexity and Reprogrammability of Chemical Structure Networks. Parallel Process. Lett. 2018, 28, 1850005.
5. Zenil, H.; Kiani, N.A.; Tegnér, J. Low Algorithmic Complexity Entropy-Deceiving Graphs. Phys. Rev. E 2017, 96, 012308.
6. Levin, L. Laws of information conservation (non-growth) and aspects of the foundation of probability theory. Probl. Inf. Transm. 1974, 10, 206–210.
7. Solomonoff, R.J. A formal theory of inductive inference. Part 1. Inf. Control 1964, 7, 1–22.
8. Solomonoff, R.J. A formal theory of inductive inference. Part 2. Inf. Control 1964, 7, 224–254.
9. Calude, C.S. Information and Randomness: An Algorithmic Perspective, 2nd ed.; EATCS Series; Springer: Berlin/Heidelberg, Germany, 2010.
10. Cover, T.M.; Thomas, J.A. Elements of Information Theory, 2nd ed.; Wiley-Blackwell: Oxford, UK, 2009.
11. Soler-Toscano, F.; Zenil, H. A Computable Measure of Algorithmic Probability by Finite Approximations with an Application to Integer Sequences. Complexity 2017, 2017, 7208216.
12. Delahaye, J.-P.; Zenil, H. Numerical Evaluation of the Complexity of Short Strings: A Glance into the Innermost Structure of Algorithmic Randomness. Appl. Math. Comput. 2012, 219, 63–77.
13. Zenil, H.; Soler-Toscano, F.; Dingle, K.; Louis, A. Correlation of automorphism group size and topological properties with program-size complexity evaluations of graphs and complex networks. Phys. A Stat. Mech. Appl. 2014, 404, 341–358.
14. Soler-Toscano, F.; Zenil, H.; Delahaye, J.-P.; Gauvrit, N. Calculating Kolmogorov Complexity from the Output Frequency Distributions of Small Turing Machines. PLoS ONE 2014, 9, e96223.
15. Langton, C.G. Studying artificial life with cellular automata. Phys. D Nonlinear Phenom. 1986, 22, 120–149.
16. Peshkin, L. Structure induction by lossless graph compression. In Proceedings of the 2007 Data Compression Conference, Snowbird, UT, USA, 27–29 March 2007; pp. 53–62.
17. Chaitin, G.J. On the length of programs for computing finite binary sequences. J. ACM 1966, 13, 547–569.
18. Kolmogorov, A.N. Three approaches to the quantitative definition of information. Probl. Inf. Transm. 1965, 1, 1–7.
19. Zenil, H.; Soler-Toscano, F.; Delahaye, J.-P.; Gauvrit, N. Two-Dimensional Kolmogorov Complexity and Validation of the Coding Theorem Method by Compressibility. PeerJ Comput. Sci. 2013, 1, e23.
20. Zenil, H.; Kiani, N.A.; Tegnér, J. Algorithmic complexity of motifs clusters superfamilies of networks. In Proceedings of the IEEE International Conference on Bioinformatics and Biomedicine, Shanghai, China, 18–21 December 2013.
21. Zenil, H.; Kiani, N.A.; Tegnér, J. Methods of Information Theory and Algorithmic Complexity for Network Biology. Semin. Cell Dev. Biol. 2016, 51, 32–43.
22. Weisstein, E.W. "Polyomino." From MathWorld—A Wolfram Web Resource. Available online: http://mathworld.wolfram.com/Polyomino.html (accessed on 17 July 2018).
23. Bonchev, D. Overall Connectivity and Topological Complexity: A New Tool for QSPR/QSAR. In Topological Indices and Related Descriptors in QSAR and QSPR; Devillers, J., Balaban, A.T., Eds.; Gordon & Breach: Langhorne, PA, USA, 1999; pp. 361–401.
Figure 1. Top: platonic solid networks with order parameter values (Block Decomposition Method (BDM), entropy and lossless compression by Compress); Bottom left: different polyhedra sorted by algorithmic symmetry (BDM); Bottom right: platonic graphs and their duals classified by order parameters, duals are in same colour. Numerical approximations of graphs and duals have similar values, as theoretically expected, given that there is an algorithm of fixed size that sends a graph to its unique dual and back.
Figure 2. Symmetry breaking by edge removal. Top: starting from a growing complete graph (perfect symmetry), removing a node produces another perfectly symmetrical object (another complete graph) hence preserving the symmetry. However, as soon as a random edge is deleted from the complete graph, the symmetry is broken and information is generated as quantified by the difference between the original algorithmic content of the complete graph and the mutated one without a single edge; Bottom: in a random graph, symmetry breaking does not produce algorithmic information. Entropy and lossless compression both fail to characterize these instances of graph-theoretic symmetry breaking.
Figure 3. (a) Spearman rank correlation values between the same polyominoes represented as solids and as graphs from their adjacency matrices were: ρ = 0.99, p = 3.216 × 10^{-14} for the Algorithmic Probability (AP)-based (BDM) ranking; ρ = 0.44, p = 0.0779 for Compress; and ρ = 0.227, p = 0.38 for Entropy; the correlation is thus statistically significant only for the AP-based ranking. This means that it is the AP-based measure that also assigns the same complexity to qualitatively similar polyominoes, something that neither compression nor entropy was able to do consistently, due to under- or over-sensitivity; (b,c) display the AP-estimated values for some examples, together with adjacency matrices of net-form representations of simply connected free polyominoes (without holes). Adjacency matrices representing networks and solid objects may look very different, yet the AP-based measure produces a robust classification, more independent of the object description than the other measures (entropy and compression). Some functions used to generate the objects, from a library of polyominoes and net polyominoes written by Eric W. Weisstein and published in Wolfram MathWorld, were adapted for our purposes [22].
Figure 4. Left: Numerical approximation of algorithmic complexity by BDM over a polytope (hypercube) of growing dimensions, showing how algorithmic complexity monotonically decreases; Right: Graphs belonging to different symmetric groups have different growing numerical algorithmic complexity values as theoretically expected.
