Article

Towards a Universal Measure of Complexity

by Jarosław Klamut 1, Ryszard Kutner 1,* and Zbigniew R. Struzik 1,2,3

1 Faculty of Physics, University of Warsaw, Pasteura 5, 02-093 Warsaw, Poland
2 Graduate School of Education, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan
3 Advanced Center for Computing and Communication, RIKEN, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan
* Author to whom correspondence should be addressed.
Submission received: 27 May 2020 / Revised: 27 July 2020 / Accepted: 3 August 2020 / Published: 6 August 2020
(This article belongs to the Special Issue Complexity in Economic and Social Systems)

Abstract

Recently, it has been argued that entropy can be a direct measure of complexity, where a smaller value of entropy indicates lower system complexity, while a larger value indicates higher system complexity. We dispute this view and propose a universal measure of complexity that is based on Gell-Mann's view of complexity. Our universal measure of complexity is based on a non-linear transformation of time-dependent entropy, in which the system state with the highest complexity is the one most distant from all the states of the system of lesser or no complexity. We have shown that the most complex state is the optimally mixed state consisting of pure states, i.e., of the most regular and the most disordered states that the space of states of a given system allows. A parsimonious paradigmatic example of the simplest system with a small and a large number of degrees of freedom is shown to support this methodology. Several important features of this universal measure are pointed out, especially its flexibility (i.e., its openness to extensions), its suitability for the analysis of a system's critical behaviour, and its suitability for studying dynamic complexity.

1. Introduction

Analysis of the concept of complexity is a non-trivial task due to its diversity, arbitrariness, uncertainty, and contextual nature [1,2,3,4,5,6,7,8,9,10]. There are many different levels/scales, faces, and types of complexity, researched with very different technologies/techniques and tools [11,12,13] (and refs. therein). In the context of dynamical systems, Grassberger suggested [14] that a slow convergence of the entropy to its extensive asymptotic limit is a signature of complexity. This idea was further developed [15,16] using information-theoretic and statistical-mechanics techniques. It generalizes many previous approaches to complexity, unifying physical ideas with ideas from learning and coding theory [17]. There also exists a connection of this approach to algorithmic or Kolmogorov complexity. A hidden pattern can be the essence of complexity [18,19,20,21]. Techniques adapted from the theories of information and computation have led physical science (in particular, in the region lying between classical determinism and deterministic chaos) to discover hidden patterns and quantify their dynamic structural complexity [22]. The above approaches are not universal—they only capture small fragments of the concept of complexity.
We must remember that complexity also depends on the conditions imposed (e.g., boundary or initial conditions), as well as on the restrictions adopted. This creates a challenge for every complexity study. It concerns the complexity that can appear in the movement of a single entity and of a collection of entities braided together. These entities can be irreducible, straightforward, simple systems, but they can also themselves be complex systems.
When we talk about complexity, we mean irreducible complexity, which can no longer be divided into smaller sub-complexities. We refer to this as a primary complexity. Considering the primary complexity here, we mean one that can be expressed at least in an algorithmic way—it is an effective complexity if it also contains a logical depth [23,24,25,26,27]. We should take into account that our models (analytical and numerical) and theories describing reality are not fully deterministic. The evolution of a complex system is potentially multi-branched and the selection of an alternative trajectory (or branch selection) is based on decisions taken randomly.
One of the essential questions concerning a complex system is the problem of its stability/robustness and the question of the stationarity of its evolution [28]. Moreover, the relationship between complexity and disorder on the one hand, and complexity and pattern on the other is an important question—especially in the context of irreversible processes, where non-linear processes, running away from the equilibrium, play a central role. Financial markets can be a spectacular example of these processes [29,30,31,32,33,34,35,36,37,38,39].
The central question of whether entropy is a direct measure of complexity is one we answer in the negative. In our opinion, based on the Gell–Mann concept of complexity, the measure of complexity is an appropriately, non-linearly transformed entropy. This work is devoted to finding this transformation and examining the resulting consequences.

2. Definition of a Universal Measure of Complexity and Its Properties

In this Section, we translate the Gell–Mann general qualitative concept of complexity into the language of mathematics, and we present the consequences of this.

2.1. The Gell–Mann Concept of Complexity

The problem of defining a universal measure of complexity is urgent. For this work, the Gell–Mann concept [23,40] of complexity is the inspirational starting point. We apply this concept to irreversible processes, assuming that neither fully ordered nor fully disordered systems can be complex. The fully ordered system essentially has no complexity because of the maximal possible symmetry of the system, while the fully disordered system contains no information, as the information is entirely dissipated. Hence, the maximum of complexity should be sought somewhere in between these pure extreme states. This point of view allows for the introduction of a formal quantitative phenomenological measure of complexity based on entropy as an order parameter [29,41]. This measure reflects the dynamics of the system through the dependence of entropy on time. The vast majority of works analyzing the general aspects of complexity, including its basis, are based on information theory and computational analysis. Such an approach requires supplementing with a provision allowing a return from the bit representation to the physical representation—only this will allow physical interpretations, including an understanding of the causes of complexity.
We define the phenomenological partial measure of complexity as a non-linear function of entropy $S$ of order $(m,n)$,
$$C_X(S;m,n) \stackrel{\mathrm{def}}{=} (S_{\max}-S)^m\,(S-S_{\min})^n = C_X(S;m-1,n-1)\left[\left(\frac{Z}{2}\right)^2 - (S-S_{arit})^2\right], \quad m,n \ge 1,$$ (1)
where $S_{\min}$ and $S_{\max}$ are the minimal and maximal values of the entropy $S$, respectively, $S_{arit} = \frac{S_{\min}+S_{\max}}{2}$, and the entropic span $Z \stackrel{\mathrm{def}}{=} S_{\max} - S_{\min}$, whereas $m$ and $n$ are natural numbers (an extension to real positive numbers is possible but is not the subject of this work). They define the order $(m,n)$ of the partial measure of complexity $C_X$. Let us add that this formula is also applicable at a mesoscopic scale. In other words, complexity appears in all systems for which we can build entropy. Notably, $S_{\max}$ does not have to concern the state of thermodynamic equilibrium of the system. It may refer to the state for which entropy reaches its maximum value in the observed time interval. However, in this work, we restrict ourselves to systems having a state of thermodynamic equilibrium. Below, we discuss Equation (1), indicating that it satisfies all the properties of a measure of complexity. Of course, when $m = 0$ and $n = 1$, then $C_X$ simply becomes $S - S_{\min}$, i.e., the entropy of the system (the constant is not important here). However, when $m = 1$, $n = 0$, we obtain $S_{\max} - S$, i.e., the information contained in the system (the constant does not play a role here). Equation (1) gives us a lot more—showing this is the purpose of this work (helpful features of $C_X$ are shown in Appendix A).
The partial measure of complexity given by Equation (1) is determined with the accuracy of the additive constant of S, i.e., this constant does not contribute to the measure of the complexity of the system.
Using Equation (1), we can also introduce the partial measure of specific complexity, as follows,
$$c_x(s;m,n) \stackrel{\mathrm{def}}{=} \frac{1}{N^{m+n}}\, C_X(Ns;m,n),$$ (2)
where N is the number of entities that make up the system and specific entropy s = S / N . As one can see, the partial measure of specific complexity c x is independent of N for an extensive system. Specific entropy and specific complexity are particularly convenient when comparing different extensive systems and when we do not examine the complexity dependence on N.
However, the extraction of an additional multiplicative constant (e.g., particle number) to have s independent of N often presents a technical difficulty, or may even be impossible, especially for non-extensive systems. Then it is more convenient to use the entropy of the system instead of the specific entropy. It is also important to realize that determining extreme entropy values (or extreme specific entropy values) of actual systems can be complicated and it requires additional dedicated tools/technologies, algorithms, and models.
The partial measures of complexity are enslaved by entropy in every order ( m , n ) of complexity. However, the kind of entropy we use in Equation (1) depends on the specific situation of the system and what we want to know about the system, because our definition of complexity does not specify this. From our point of view, relative entropies formulated in the spirit of Kullback–Leibler seem to be the most appropriate (this is referred to in Appendix B). Using the Kullback–Leibler type of entropy, one can express both ordinary entropies and conditional entropies, in particular one can describe the entropy rate increasingly used in the context of complexity analysis.
The entropy here can be both additive (the Boltzmann–Gibbs thermodynamic one [42], Shannon information [17], Rényi [43]) and non-additive (Tsallis [44]). The measure $C_X(S)$ is a concave (convex-up) function of the entropy $S$, which vanishes at the edges, at the points $S = S_{\min}$ and $S = S_{\max}$.
It has a maximum
$$C_X^{\max} = C_X\!\left(S = S_{C_X^{\max}}; m, n\right) = \frac{m^m n^n\, Z^{m+n}}{(m+n)^{m+n}}$$ (3)
at the point
$$S = S_{C_X^{\max}} = \bar{S} = \frac{m S_{\min} + n S_{\max}}{m+n} = \frac{\frac{1}{m}S_{\max} + \frac{1}{n}S_{\min}}{\frac{1}{m}+\frac{1}{n}},$$ (4)
as at this point $\frac{dC_X(S)}{dS}\big|_{S=\bar{S}} = 0$ and $\frac{d^2 C_X(S)}{dS^2}\big|_{S=\bar{S}} < 0$. The quantity $S_{C_X^{\max}}$ is also characteristic because it is a weighted average. The quantity $C_X^{\max}$ is well suited to global universal measurements of complexity because (at a given order $(m,n)$) it depends only on the entropic span $Z$. The quantity $c_x^{\max} \stackrel{\mathrm{def}}{=} C_X^{\max}/N^{m+n}$ might also be a good candidate for measuring the logical depth of complexity.
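The location and height of this maximum can be checked directly. The following minimal sketch in Python (not part of the original work; all parameter values are illustrative assumptions) evaluates $C_X(S;m,n)$ of Equation (1) on an entropy grid and compares the grid maximum with Equations (3) and (4).

```python
# A minimal sketch (not from the paper; all parameter values are illustrative
# assumptions): it evaluates the partial measure of complexity C_X(S; m, n) of
# Equation (1) on an entropy grid and checks that the location and height of
# its maximum agree with Equations (4) and (3), respectively.
import numpy as np

def c_x(S, m, n, S_min=0.0, S_max=1.0):
    """Partial measure of complexity, Equation (1)."""
    return (S_max - S) ** m * (S - S_min) ** n

m, n, S_min, S_max = 2, 3, 0.0, 1.0
Z = S_max - S_min

S = np.linspace(S_min, S_max, 100001)
C = c_x(S, m, n, S_min, S_max)

S_bar = (m * S_min + n * S_max) / (m + n)                 # Equation (4)
C_max = m**m * n**n * Z**(m + n) / (m + n) ** (m + n)     # Equation (3)

print("grid argmax:", S[np.argmax(C)], "  analytic S_bar:", S_bar)
print("grid max   :", C.max(), "  analytic C_X^max:", C_max)
```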

2.2. The Most Complex Structure

The question now arises about the structure of the system corresponding to the entropy $S_{C_X^{\max}}$ given by Equation (4). The answer is given by the following constitutive equation,
$$S\!\left(Y = Y_{C_X^{\max}}\right) = S_{C_X^{\max}},$$ (5)
where $Y$ is the set of variables and parameters (e.g., thermodynamic) on which the state of the system depends, and $Y = Y_{C_X^{\max}}$ is the set of values of these variables and parameters that solves Equation (5). This solution gives the entropy value $S = S_{C_X^{\max}}$ that maximizes the partial measure of complexity, that is, $C_X = C_X^{\max}$. Hence, with the value of $Y_{C_X^{\max}}$, we can finally answer the key question: what structure/pattern lies behind $C_X^{\max}$, or what the structure of maximum complexity looks like.
There are a few comments to be made regarding the constitutive Equation (5) itself. It is a (non-linear) transcendental equation in the untangled form relative to the Y. This equation should be numerically solved, because we do not expect it to have an analytical solution for maximally complex systems. An instructive example of a specific form of this equation and its solution for a specific physical problem is presented in Section 3. However, this will help us to understand how our machinery works.
Equation (4) legitimizes the measure of complexity we have introduced. Namely, its maximum value falls on the weighted average entropy value, which describes the optimal mixture of completely ordered and completely disordered phases. To the left of S ¯ , we have a phase with dominance of order and to the right a phase with dominance of disorder. The transition between both phases at S ¯ is continuous. Thus, we can say that the partial measure of complexity that we have introduced also defines a certain type of phase diagram in S and C X variables (phase diagram plain). Section 2.5 provides more detailed information.

2.3. Evolution of the Partial Measure of Complexity

Differentiating Equation (1) over time $t$, we obtain the following non-linear dynamics equation,
$$\frac{dC_X(S(t);m,n)}{dt} = \chi_{C_X}(S;m,n)\,\frac{dS(t)}{dt} = (m+n)\left[S_{C_X^{\max}} - S(t)\right] C_X(S(t);m-1,n-1)\,\frac{dS(t)}{dt},$$ (6)
where the entropic $S$-dependent (non-linear) susceptibility is defined by
$$\chi_{C_X}(S;m,n) \stackrel{\mathrm{def}}{=} \frac{\partial C_X(S;m,n)}{\partial S} = (m+n)\left[S_{C_X^{\max}} - S(t)\right] C_X(S(t);m-1,n-1)$$ (7)
and $\frac{dS(t)}{dt}$ can be expressed, for example, using the right-hand side of the master Markov equation (see Ref. [45] for details). However, we must realize that the dependence of entropy on time can, in general, be non-monotonic, because real systems are not isolated (cf. the schematic plot in Figure 2). One can see how the dynamics of complexity is controlled in a non-linear way by the evolution of the entropy of the system.
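As a rough illustration of Equations (6) and (7), the following sketch (with an assumed toy relaxation $S(t)$, not taken from the paper) propagates the complexity via the entropic susceptibility and cross-checks the result against a direct finite difference of $C_X(S(t))$.

```python
# A rough sketch (assumed toy relaxation S(t), not taken from the paper):
# the complexity dynamics of Equation (6) is driven by the entropic
# susceptibility of Equation (7) and cross-checked against a direct
# finite difference of C_X(S(t)).
import numpy as np

S_min, S_max, m, n = 0.0, 1.0, 2, 2
S_cx_max = (m * S_min + n * S_max) / (m + n)

def c_x(S, mm, nn):
    return (S_max - S) ** mm * (S - S_min) ** nn

def chi(S):
    # Equation (7): chi = (m + n) (S_CXmax - S) C_X(S; m-1, n-1)
    return (m + n) * (S_cx_max - S) * c_x(S, m - 1, n - 1)

t = np.linspace(0.0, 5.0, 2001)
S_t = S_max - (S_max - S_min) * np.exp(-t)      # assumed relaxing entropy
dS_dt = (S_max - S_min) * np.exp(-t)

dCX_dt_eq6 = chi(S_t) * dS_dt                   # Equation (6)
dCX_dt_num = np.gradient(c_x(S_t, m, n), t)     # direct differentiation

print("max |difference| =", np.max(np.abs(dCX_dt_eq6 - dCX_dt_num)))
```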
In concluding this Section, we state that Equations (1)–(6) together provide a technology for studying the multi-scale aspects of complexity, including the dynamic complexity. However, it is still a simplified approach, as we show in Section 4.

2.4. Significant Partial Measure of Complexity

We consider the partial measure of complexity to be significant when the entropy of the system is located between the two inflection points of the $C_X(S;m,n)$ curve, i.e., in the range $S_{ip}^- \le S \le S_{ip}^+$. This case occurs for $n, m \ge 2$. We then obtain
$$S_{\min} < S_{ip}^{\mp} = S_{\min} + \frac{n(n-1)}{\sqrt{n(n+m-1)}}\;\frac{S_{\max}-S_{\min}}{\sqrt{n(n+m-1)} \pm \sqrt{m}} < S_{\max},$$ (8)
see Figure 1d for details.
There are two different cases where a single inflection point is present. Namely,
$$S_{\min} < S_{ip}^{-} = \frac{2 S_{\max} + \sqrt{m(m-1)}\, S_{\min}}{2 + \sqrt{m(m-1)}} < \bar{S}, \quad \text{for } m \ge 2,\ n = 1,$$ (9)
and
$$\bar{S} < S_{ip}^{+} = \frac{2 S_{\min} + \sqrt{n(n-1)}\, S_{\max}}{2 + \sqrt{n(n-1)}} < S_{\max}, \quad \text{for } m = 1,\ n \ge 2.$$ (10)
In Figure 1b, we present the case defined by Equation (9), while that defined by Equation (10) is shown in Figure 1c.
For n = m = 1 , the curve C X ( S ; m , n ) vs. S has no inflection points and it looks like a horseshoe (cf. Figure 1a).
Notably, we can equivalently write
$$S_{\min} < S_{ip}^{\mp} = S_{\max} - \frac{m(m-1)}{\sqrt{m(n+m-1)}}\;\frac{S_{\max}-S_{\min}}{\sqrt{m(n+m-1)} \mp \sqrt{n}} < S_{\max}, \quad \text{for } n, m \ge 2.$$ (11)
Let us consider the span $Z_{ip} = S_{ip}^{+} - S_{ip}^{-}$ of the two-phase area. From Equation (8), or equivalently from Equation (11), we obtain
$$Z_{ip} = \frac{2\sqrt{nm}}{(n+m)\sqrt{n+m-1}}\, Z.$$ (12)
As one can see, the span $Z_{ip}$ depends linearly on the span $Z$ and in a non-trivial way on the exponents $n$ and $m$. Thus, for a fixed $Z$, only a non-trivial dependence of $Z_{ip}$ on the order $(m,n)$ of the measure of complexity $C_X$ remains, and this dependence differs from that of $C_X^{\max}$. In other words, $Z_{ip}$ is less sensitive to complexity than $C_X^{\max}$.
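The span of the two-phase area can be checked numerically. The sketch below (illustrative parameters, not from the paper) locates the inflection points of $C_X(S;m,n)$ from sign changes of a numerically estimated second derivative and compares their distance with Equation (12).

```python
# A numerical sketch (illustrative parameters): the inflection points of
# C_X(S; m, n) are located from the sign changes of a numerically estimated
# second derivative, and their span is compared with Equation (12).
import numpy as np

S_min, S_max, m, n = 0.0, 1.0, 3, 2
Z = S_max - S_min

S = np.linspace(S_min, S_max, 100001)
C = (S_max - S) ** m * (S - S_min) ** n

d2C = np.gradient(np.gradient(C, S), S)
interior = d2C[1:-1]
idx = np.where(np.diff(np.sign(interior)) != 0)[0] + 1
S_ip = S[idx]                                   # approximate inflection points

Z_ip_numeric = S_ip.max() - S_ip.min()
Z_ip_eq12 = 2 * np.sqrt(n * m) / ((n + m) * np.sqrt(n + m - 1)) * Z

print("inflection points:", S_ip)
print("Z_ip numeric:", Z_ip_numeric, "  Z_ip from Eq. (12):", Z_ip_eq12)
```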
The significant partial measure of complexity ranges between the two inflection points only for the case $n, m \ge 2$ (cf. Figure 1d). Indeed, a mixture of phases is observed in this area. In the areas where $S_{\min} \le S < S_{ip}^-$ and $S_{ip}^+ < S \le S_{\max}$, we have (practically speaking) only single phases, ordered and disordered, respectively (see Section 2.5 for details). The case defined by Equation (8), and equivalently by Equation (11), is the most general, as it takes into account the full behaviour of complexity as a function of entropy. Other cases impoverish the description of complexity. Therefore, we will continue to consider the situation where $n, m \ge 2$.
The choice of a particular form of $C_X(S;m,n)$ (i.e., of the exponents $n$ and $m$) is somewhat arbitrary; in any case, $C_X$ is a function of the state of the system, as it depends on a function of state, namely the entropy. In our opinion, the shape of the measure $C_X(S;m,n)$ vs. $S$ that we present in Figure 1d is the most appropriate, because only then does the significant complexity range between the non-vanishing inflection points $S_{ip}^-$ and $S_{ip}^+$.
In the generic case, however, we should use the series of partial measures defined by Equation (1). Then, we define the order of the partial complexity using the pair of exponents $(n,m)$. The introduction of the order of the partial complexity is in line with our perception of the existence of multiple levels of (full) complexity.
We are able to discover the nature of the C X measure, i.e., its dynamics and, in particular, its dynamical structures, when we analyze the entropy dynamics S ( t ) (see Figure 2 for details).
The measurability of the partial measure of complexity is necessary for characterizing it quantitatively and to be able to compare different complexities. Following Gell–Mann [40], we must identify the scales at which we perform the analysis and thus determine coarse-graining to define the entropy. Its dependence on complexity cannot be ruled out.
However, the question of direct measurement of the partial measure of complexity in an experiment (real or numerical) remains a challenge.

2.5. Remarks on the Partial Entropic Susceptibility

An essential tool for studying phase transitions is the system susceptibility—in our case, the partial entropic susceptibility of the partial measure of complexity. Here, it (additionally) plays the role of the partial order parameter.
The plot of susceptibility χ C X ( S ; m , n ) vs. S is presented in Figure 3. Four phases, already marked in Figure 1, are visible (also numbered 1 to 4).
Phase number 1 is almost entirely ordered—the input of the disordered phase is residual. At the point $S_{ip}^-$, there is a phase transition to the mixed phase marked with number 2, still with a predominance of the ordered phase. At the $S_{ip}^-$ inflection point, the entropic susceptibility reaches a local maximum. As the entropy of the system increases further, the system enters phase 3 as a result of a phase transition at the very specific transition point $S_{C_X^{\max}}$. At this point, the entropic susceptibility of the partial measure of complexity vanishes. This mixed phase (number 3) is already characterized by the advantage of the disordered phase over the ordered one. Finally, the last transition, which occurs at $S_{ip}^+$, leads the system to a phase dominated by disorder—the input of the ordered phase is residual here. At this transition point, the susceptibility reaches a local minimum. Intriguingly, the entropic susceptibility can take both positive and negative values, passing smoothly through zero at $S = S_{C_X^{\max}}$, where the system is exceptionally robust. The presence of phases with positive and negative entropic susceptibility is an exceptionally intriguing phenomenon. The phases discussed above, together with the above-mentioned inflection points, are also marked in Figure 1d. Let us add that the location of the phases mentioned above, i.e., the location of the inflection points, depends on the order $(m,n)$ of the partial measure of complexity. This is clearly seen in Figure 4 and Figure 5.
The values of local extremes of the entropic susceptibility of the partial measure of complexity are finite here and not divergent, as in the case of (equilibrium and non-equilibrium) phase transitions in the absence of an external field. We use this definition to describe the critical behaviour of a system that we demonstrate in Section 2.7, where it requires an explicit dependence on N.

2.6. Universal Full Measure of Complexity

The full universal measure of complexity X is a weighted sum of the partial measures of complexity C X ( S ; m , n ) for individual scales. That is,
$$X(S;m_0,n_0) = \sum_{m \ge m_0,\, n \ge n_0} w(m,n)\, C_X(S;m,n), \quad m_0, n_0 \ge 0,$$ (13)
where w ( m , n ) is a normalized weight, which must be given in an explicit form. This form is to some extent imposed by the power-law form of partial complexity. Namely, we can assume
$$w(m,n) = \left(1 - \frac{1}{M}\right)^2 \left(\frac{1}{M}\right)^{(m - m_0) + (n - n_0)}, \quad M > 1,$$ (14)
which seems to be particularly simple because
$$\frac{w(m+1,n)}{w(m,n)} = \frac{w(m,n+1)}{w(m,n)} = \frac{1}{M},$$ (15)
independently of m and n.
As one can see, Equation (13), supported by Equation (15), is the product of the sums of two geometric series,
$$X(S;m_0,n_0) = (S_{\max}-S)^{m_0}\left(1 - \frac{1}{M}\right)\sum_{m \ge m_0}\frac{(S_{\max}-S)^{m-m_0}}{M^{m-m_0}} \times (S - S_{\min})^{n_0}\left(1 - \frac{1}{M}\right)\sum_{n \ge n_0}\frac{(S - S_{\min})^{n-n_0}}{M^{n-n_0}}.$$ (16)
If both series converge for any $S_{\min} \le S \le S_{\max}$, which is the case if and only if the condition $Z\,(= S_{\max} - S_{\min}) < M$ is met, then we directly obtain
$$X(S;m_0,n_0) = \left(1 - \frac{1}{M}\right)^2 \frac{(S_{\max}-S)^{m_0}}{1 - \frac{S_{\max}-S}{M}}\; \frac{(S - S_{\min})^{n_0}}{1 - \frac{S - S_{\min}}{M}}.$$ (17)
In other words, the $M$ parameter can always be chosen so that the sums of both series in Equation (16) converge for all $S$ values. Thus, $m_0, n_0 \ge 1$ is the natural lower limit of $m_0, n_0$, satisfying the condition that $X(S;m_0,n_0)$ vanishes for $S = S_{\min}, S_{\max}$. We still assume, more strongly, that $m_0, n_0 \ge 2$, which has already been explained above.
For extensive systems, Equation (17) can be presented in a form that clearly shows the dependence of the $X$ complexity on the number of entities $N$, simply by replacing the entropy $S$ by $Ns$, where $s$ is the $N$-independent specific entropy. Subsequently,
$$X(Ns;m_0,n_0) = \left(1 - \frac{1}{M}\right)^2 N^{m_0+n_0}\, \frac{(s_{\max}-s)^{m_0}}{1 - \frac{N}{M}(s_{\max}-s)}\; \frac{(s - s_{\min})^{n_0}}{1 - \frac{N}{M}(s - s_{\min})}.$$ (18)
We emphasize that X does not scale with N, as opposed to partial measures of complexity.
In Figure 6 and Figure 7, we show the dependence of $X$ on $N$ (in the plane) and on $N$ and $s$ (in three dimensions), respectively. We obtained the singularities of the full complexity, $N_j^{cr}(s)$, $j = 1, 2$, as a result of the zeroing of the denominators in Equation (17) at non-zero numerators.
Note that, for $M \gg Z$, both measures of complexity have approximately equal values, $X(S;m_0,n_0) \approx C_X(S;m_0,n_0)$. Important differences between these two measures only appear for $Z/M$ close to 1, because only then does the denominator in Equation (17) play an important role. Of course, $M$ is a free parameter, and possibly its specific value could be obtained from some additional (e.g., external) constraint.
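As a consistency check of Equations (13), (14) and (17), the following sketch (with assumed values of $S_{\min}$, $S_{\max}$, $M$, $m_0$, $n_0$) compares the closed form of the full measure with a truncated weighted sum of partial measures for $Z < M$.

```python
# A consistency sketch (assumed values of S_min, S_max, M, m0, n0): the
# closed form of the full measure X, Equation (17), is compared with a
# truncated version of the weighted sum of partial measures, Equations
# (13) and (14), valid for Z < M.
import numpy as np

S_min, S_max, M = 0.0, 1.0, 2.5          # Z = 1 < M, so both series converge
m0, n0 = 2, 2
S = 0.35                                 # an arbitrary entropy value

def C_X(S, m, n):
    return (S_max - S) ** m * (S - S_min) ** n

# Equation (17): closed form.
X_closed = (1 - 1 / M) ** 2 \
    * (S_max - S) ** m0 / (1 - (S_max - S) / M) \
    * (S - S_min) ** n0 / (1 - (S - S_min) / M)

# Equations (13)-(14): weighted sum, truncated at a large upper order.
X_sum = 0.0
for m in range(m0, m0 + 200):
    for n in range(n0, n0 + 200):
        w = (1 - 1 / M) ** 2 * (1 / M) ** ((m - m0) + (n - n0))
        X_sum += w * C_X(S, m, n)

print("closed form:", X_closed, "  truncated sum:", X_sum)
```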
In Figure 4, we compare the behaviour of the partial (black curve) and full (orange curve) measures of complexity, where we used the entropy instead of the specific entropy. Whether $C_X$ lies below or above $X$ depends both on the $M$ parameter (determining the weight with which individual partial measures enter the full measure of complexity) and on the $Z/M$ ratio.
We continue to determine the full entropic susceptibility of the full measure of complexity,
$$\chi_X(S;m_0,n_0) = \frac{dX(S;m_0,n_0)}{dS} = (m_0+n_0)\left(S_{C_X^{\max}} - S\right) X(S;m_0-1,n_0-1) + \frac{2}{M^2}\, \frac{X(S;m_0,n_0)\left(S - S_{arit}\right)}{\left(1 - \frac{S_{\max}-S}{M}\right)\left(1 - \frac{S - S_{\min}}{M}\right)}, \quad m_0, n_0 \ge 1,$$ (19)
where $S_{C_X^{\max}}$ is given here by Equation (4) but for $m = m_0$ and $n = n_0$. Notably, for the symmetric cases $m = n$ and/or $m_0 = n_0$, we have $S_{C_X^{\max}} = S_{X^{\max}} = S_{arit}$, which are independent of $m, m_0$.
Similarly to the partial entropic susceptibility of a partial measure of complexity, we obtain the full entropic susceptibility of a full measure of complexity,
$$\chi_X(Ns;m_0,n_0) = \frac{dX(S;m_0,n_0)}{dS} = (m_0+n_0)\, N\left(s_{C_X^{\max}} - s\right) X(Ns;m_0-1,n_0-1) + \frac{2}{M^2}\, \frac{X(Ns;m_0,n_0)\, N\left(s - s_{arit}\right)}{\left(1 - \frac{N}{M}(s_{\max}-s)\right)\left(1 - \frac{N}{M}(s - s_{\min})\right)}, \quad m_0, n_0 \ge 1,$$ (20)
where $s_{C_X^{\max}} = S_{C_X^{\max}}/N$, $s_{arit} = S_{arit}/N$, $s_{\min} = S_{\min}/N$, and $s_{\max} = S_{\max}/N$. The progression of the susceptibility $\chi_X(S;m_0,n_0)$ with $S$, for selected parameter values, is shown in Figure 5. Its course is similar to the analogous one presented in Figure 3.
Thus, the evolution of X is governed by an equation that is analogous to Equation (6), except that χ C X present in that equation should be replaced by χ X given by Equation (19). Therefore, we have
$$\frac{dX(S(t);m_0,n_0)}{dt} = \chi_X(S(t);m_0,n_0)\,\frac{dS(t)}{dt}.$$ (21)
The relationship between measures of complexity and time is implicit in our work—complexity indirectly depends on time through the dependence of entropy on time. It should be emphasized that the dependence of entropy on time is external in our approach—it can be taken into account based on additional modelling that is dedicated to specific real situations. We have already signalled this when discussing Equation (6).

2.7. Criticality in Extensive Systems

By using Equation (17), we show when the universal full measure of complexity diverges and, thus, the system enters a critical state. We assume that we are dealing with an extensive system, i.e., that Equation (17) can be represented as
$$X(Ns;m_0,n_0) = \left(1 - \frac{1}{M}\right)^2 N^{m_0+n_0}\, \frac{(s_{\max}-s)^{m_0}}{1 - \frac{N}{M}(s_{\max}-s)} \times \frac{(s - s_{\min})^{n_0}}{1 - \frac{N}{M}(s - s_{\min})}, \quad \frac{Nz}{M} < 1,$$ (22)
where the entropy densities $s\,(= S/N)$, $s_{\min}\,(= S_{\min}/N)$, and $s_{\max}\,(= S_{\max}/N)$ are (at most) slowly varying functions of the number $N$ of elements making up the system, and $z = s_{\max} - s_{\min}$ is the specific entropy span. As one can see, the measure $X$ diverges at two critical points, $N_{cr}^{\max}(s) = \frac{M}{s_{\max} - s}$ and $N_{cr}^{\min}(s) = \frac{M}{s - s_{\min}}$, where $s_{\min} < s < s_{\max}$. Moreover, the susceptibilities given by Equations (19) and (20) diverge at the same points where the measures of complexity given by Equations (17) and (18) diverge, which underlines the self-consistency of our approach.
Equation (22) can now be written in a form that explicitly includes both critical points (both physical and non-physical):
$$X(Ns;m_0,n_0) = \left(1 - \frac{1}{M}\right)^2 N^{m_0+n_0}\, \frac{(s_{\max}-s)^{m_0}}{\left(1 - \frac{N}{N_{cr}^{\max}(s)}\right)^{\beta_{\max}}} \times \frac{(s - s_{\min})^{n_0}}{\left(1 - \frac{N}{N_{cr}^{\min}(s)}\right)^{\beta_{\min}}},$$ (23)
where the critical exponents assume the mean-field values $\beta_{\max} = \beta_{\min} = 1$. In this case, we could speak of two-criticality were it not for the fact that one of these criticalities is unphysical.
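For orientation, the sketch below evaluates the two critical system sizes implied by Equation (22) for an assumed hierarchy parameter $M$ and specific entropy $s$; the numbers are purely illustrative.

```python
# An orientation sketch (purely illustrative numbers): the two critical
# system sizes at which Equation (22) diverges, N_cr^max(s) = M/(s_max - s)
# and N_cr^min(s) = M/(s - s_min), for an assumed hierarchy parameter M and
# specific entropy s.
M = 10.0
s_min, s_max = 0.0, 1.0
s = 0.2                                   # s < s_arit, i.e., closer to s_min

N_cr_max = M / (s_max - s)                # zero of the first denominator
N_cr_min = M / (s - s_min)                # zero of the second denominator

print("N_cr^max(s) =", N_cr_max)          # 12.5
print("N_cr^min(s) =", N_cr_min)          # 50.0
```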
Figure 6 shows the dependence of $X(Ns;m_0,n_0)$ on $N$ at fixed $s = 0.8$. The values of the parameters are shown there, while the specific entropy $s$ is chosen so that the condition $s - s_{\min} < s_{\max} - s$ is satisfied (this is equivalent to the condition $s < s_{arit}$). This means that $s$ is closer to $s_{\min}$ than to $s_{\max}$. The existence of these divergences is a signature of criticality. However, the situation for the borderline cases $s = s_{\min}$ or $s = s_{\max}$ changes rapidly—it requires a separate consideration.
Critical numbers of entities in the system, $N_{cr}^{\max}(s)$ and $N_{cr}^{\min}(s)$, are determined by the ratio of the $M$ parameter, characterizing the hierarchy/cascade of scales in the system, to the distance between the entropy density $s$ and its extreme values $s_{\min}$ and $s_{\max}$. The construction of these critical numbers resembles the canonical critical temperature structure for the Ising model in the mean-field approximation, where $\beta_c J z = 1$ (here $\beta_c = 1/k_B T_c$ and $k_B$ is the Boltzmann constant). In our case, the role of the inverse temperature $\beta_c$ is played by $N_{cr}^{\max}$ and $N_{cr}^{\min}$, the role of the coupling constant $J$ by $1/M$, while the role of the mean coordination number $z$ is played by $s_{\max} - s$ and $s - s_{\min}$, respectively.
The hierarchy is the source of criticality here. Criticality is an immanent feature of our full description of complexity. Nevertheless, in this work, we do not specify the sources of this hierarchy—it could be self-organized criticality or due to some other sources.
For the sake of completeness, note that the dependence of the partial measure of complexity on $N$ is given by Equation (2). This means that for extensive systems this measure increases as a power of $N$. Therefore, only the weighted infinite sum of these measures generates the existence of a singularity.
Let us now consider in more detail the behaviour of $X(Ns;m_0,n_0)$ as a function of $N$ and $s$. The three-dimensional plot of Figure 7 is helpful here. One can see how the mutual location of the singularities $N_{cr}^{\max}(s)$ and $N_{cr}^{\min}(s)$ changes with increasing $s$: from the situation $s < s_{arit}$, in which $N_{cr}^{\max}(s) < N_{cr}^{\min}(s)$, through the situation $s = s_{arit}$, in which $N_{cr}^{\max}(s) = N_{cr}^{\min}(s)$, up to the situation in which $N_{cr}^{\max}(s) > N_{cr}^{\min}(s)$ for $s > s_{arit}$.
It must be clearly stated that the area physically accessible is the one in front of the first singularity, which is further emphasized in Figure 7 by blue curves. Let us emphasize that the N range in which criticality occurs is sufficient to cover the corresponding values of N discussed in the literature to date, especially the Dunbar numbers [46,47,48,49] (e.g., N = 5 , 15 , 50 , and N = 150 ). However, it should be noted that our view of complexity is complementary to that presented in the literature.

3. Fingerprint of Complexity in Simplicity

Let us consider a perfect gas at a fixed temperature, which is initially confined to the left half of an isolated container. The partition is then removed, and the gas undergoes a spontaneous expansion. Here we are dealing (practically speaking) with an irreversible process, even for a small number of particles (at least of the order of $10^2$).
Let us recall the definition of a 'perfect gas'. It is a gas of particles that cannot 'see' each other, i.e., there are no interactions between them. Thus, from a physical point of view, it is a dilute gas at high temperature. We further assume that all of the particles have the same kinetic energy. A legitimate question is whether such a gas will expand after the partition is removed. We notice that a thermodynamic force is at work here, being roughly proportional to the difference between the numbers of particles in the right and left parts of the container. This force drives the expansion process. Thus, we are dealing with the simplest paradigmatic irreversible process [50]. The particles remain stuck in the final state and will not leave it (up to slight fluctuations in the number of particles in the right half of the container). Such a final state of the whole system is referred to as the equilibrium state. A simple coarse-grained description of the system allows us to introduce here the concept of configuration entropy.
Note that the macroscopic state of the system (generally, a non-equilibrium and non-stationary/relaxing one) can be described by the instantaneous numbers of particles in the left ($N_L(t)$) and right ($N_R(t)$) parts of the container, with $N = N_L(t) + N_R(t)$, where $N$ is the fixed total number of particles in the container (an isolated system). This allows one to define the weight of the macroscopic state, $\Gamma(N_L(t))$, also called the thermodynamic probability. This is the number of ways to arrange $N_L(t)$ particles in the left part of the container and $N_R(t) = N - N_L(t)$ in the right. Hence,
$$\Gamma(N_L(t)) = \frac{N!}{N_L(t)!\,(N - N_L(t))!}.$$ (24)
Here we do not distinguish permutations of particles inside each part of the container separately. We only take into account permutations of particles located in different halves of the container. This is because our resolution here is too small to observe the location of particles inside each half of the container separately. Such a coarse-graining creates an information barrier: more information can mask the complexity of the system. We would not be able to see the complexity, because we would not be able to construct the entropy. This creates a paradoxical situation: a surplus of information makes the task more difficult and does not facilitate insight into the system. Here we have an analogy with chaotic dynamics, where chaos is only visible in the Poincaré surface of section of the phase space and not in the entire phase space.
We define the configuration entropy at a given time $t$ as follows,
$$S(N_L(t)) = \ln \Gamma(N_L(t)),$$ (25)
where Γ ( N L ( t ) ) is given by Equation (24). The above expression can be used both for the equilibrium and non-equilibrium states.
It can be demonstrated using the Stirling formula that for large N, entropy S is reduced to the BGS form,
$$\ln \Gamma(N_L(t)) = -N\left[p_L(t)\ln p_L(t) + p_R(t)\ln p_R(t)\right] = N s(t),$$ (26)
where $p_J(t) \stackrel{\mathrm{def}}{=} \frac{N_J(t)}{N}$, $J = L, R$, and $s(t)$ is the specific entropy. The law of entropy increase, Equation (A8), is also fulfilled here, as expected.
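The configuration entropy of Equations (24)–(26) is easy to evaluate exactly via log-factorials; the following sketch (with illustrative $N$ and $N_L$) compares the exact value with the Stirling (BGS) form.

```python
# A sketch checking Equations (24)-(26): the configuration entropy
# S(N_L) = ln Gamma(N_L), computed exactly via log-factorials, is compared
# with its Boltzmann-Gibbs-Shannon (Stirling) form for a large N.
# The values of N and N_L are illustrative.
from math import lgamma, log

def S_exact(N_L, N):
    """ln Gamma(N_L) = ln[N! / (N_L! (N - N_L)!)], Equations (24)-(25)."""
    return lgamma(N + 1) - lgamma(N_L + 1) - lgamma(N - N_L + 1)

def S_stirling(N_L, N):
    """BGS form of Equation (26): -N [p_L ln p_L + p_R ln p_R]."""
    p_L, p_R = N_L / N, 1.0 - N_L / N
    return -N * sum(p * log(p) for p in (p_L, p_R) if p > 0.0)

N, N_L = 10_000, 3_000
print("exact   :", S_exact(N_L, N))
print("Stirling:", S_stirling(N_L, N))
```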
We now prepare the equation for determining $N_L^{C_X^{\max}}$, i.e., the number of particles in the left part of the container that maximizes the partial measure of complexity $C_X$. To this end, we assume, for instance, the symmetric partial measure of complexity of order $(m=2, n=2)$. Next, we substitute $N_L = N_L^{C_X^{\max}}$ into both sides of Equation (25) and, according to the constitutive Equation (5), we equate Equation (25) to $S_{C_X^{\max}}$. Hence, we obtain a constitutive equation for the relaxing perfect gas,
$$S\!\left(N_L(t) = N_L^{C_X^{\max}}\right) = S_{C_X^{\max}},$$ (27)
where $N_L^{C_X^{\max}}$ is our sought quantity.
Now we need to determine $S_{C_X^{\max}}$ independently. Recall that the number of particles $N_L$ that maximizes the entropy is the number of particles $N_L^{eq}$ in the statistical/thermodynamic equilibrium state of the system. This number is equal to half of all particles in the container, i.e., $N_L^{eq} = N/2$. It can also be assumed (without reducing the generality of the considerations) that $S_{\min} = 0$. Therefore,
$$S_{\max} = S(N/2).$$ (28)
However, from Equation (A4), we know that $S_{C_X^{\max}} = S_{\max}/2$. Using this, we transform Equation (27) into the form,
$$S\!\left(N_L^{C_X^{\max}}\right) = \frac{1}{2}\, S(N/2).$$ (29)
Equation (27) is an example of the general constitutive Equation (5), where $N_L^{C_X^{\max}}$ plays the role of $Y_{C_X^{\max}}$. This equation has the following explicit form,
$$\left[\prod_{j=1}^{N - N_L^{C_X^{\max}}}\left(1 + \frac{N_L^{C_X^{\max}}}{j}\right)\right]^2 = \prod_{j=1}^{N/2}\left(1 + \frac{N}{2j}\right), \quad \text{for } n = m = 2.$$ (30)
Deriving Equation (30) (see Appendix C for details) is the primary purpose of this example. It is a transcendental equation whose exact analytical solution is unknown. When deriving Equation (30), we used the initial condition for the entropy, that is, $S(t=0) = S_{\min} = \ln\Gamma(N_L = N) = 0$, which follows from Equations (24) and (25). Even for such a simple toy model, determining the partial measure of complexity is a non-trivial task, also because $N_L^{C_X^{\max}}$ is different from $N/2$ (as we show below).
The numerical solutions of Equation (30), i.e., the dependence of $N_L^{C_X^{\max}}$ on $N$, are shown in Figure 8 (for simplicity, the label $L$ on the vertical axis of the plot denotes $N_L^{C_X^{\max}}$). Both solutions (small circles above and below the solid straight line) show that $N_L^{C_X^{\max}}$ differs significantly from $N/2$. Thus, the most complex state is significantly different from the equilibrium state.
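A possible way to obtain such solutions numerically is sketched below (this is not the authors' code): Equation (29) is solved for $N_L^{C_X^{\max}}$ by plain bisection, using the exact configuration entropy expressed through log-factorials; the value of $N$ is an illustrative assumption.

```python
# A sketch of how Equation (29) (equivalently Equation (30)) can be solved
# numerically; this is not the authors' code. Each of the two solutions
# N_L^{CXmax} is bracketed on one side of N/2 and found by plain bisection,
# with the configuration entropy expressed through log-factorials.
from math import lgamma

def S(N_L, N):
    """Configuration entropy ln Gamma(N_L), Equations (24)-(25)."""
    return lgamma(N + 1) - lgamma(N_L + 1) - lgamma(N - N_L + 1)

def bisect(f, a, b, tol=1e-9):
    fa = f(a)
    for _ in range(200):
        c = 0.5 * (a + b)
        fc = f(c)
        if fa * fc <= 0.0:
            b = c
        else:
            a, fa = c, fc
        if b - a < tol:
            break
    return 0.5 * (a + b)

N = 1000                                   # illustrative number of particles
target = 0.5 * S(N / 2, N)                 # right-hand side of Equation (29)
g = lambda N_L: S(N_L, N) - target

# One solution lies below N/2 and one above (cf. Figure 8).
N_L_low = bisect(g, 1e-6, N / 2)
N_L_high = bisect(g, N / 2, N - 1e-6)
print("N_L^{CXmax}:", N_L_low, N_L_high, "  (N/2 =", N / 2, ")")
```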
Having the dependence of $N_L^{C_X^{\max}}$ on $N$, we can obtain the dependence of the partial measure of specific complexity $c_x^{\max} \stackrel{\mathrm{def}}{=} C_X^{\max}/N^{m+n}$ on $N$ at order $(m=2, n=2)$. We can write
$$c_x^{\max} = \left(\frac{s(N/2)}{2}\right)^4 = \left[\frac{1}{2N}\ln\prod_{j=1}^{N/2}\left(1 + \frac{N}{2j}\right)\right]^4,$$ (31)
as in our case $s_{\max} = s(N/2)$ equals the logarithm of the right-hand side of Equation (30) divided by $N$. Notably, Equation (31) is based on Equation (A12).
In Figure 10, we present the dependence of $c_x^{\max}$ on $N$. The quantity $c_x^{\max}$ is a non-extensive function—it reaches a plateau for $N \gg 1$. For $N \approx 10^4$ the plateau is already reached to a good approximation. This is important for researching complexity: systems can attain complexity already on a mesoscopic scale. Although the absolute value of the complexity measure is relatively small, it is evident and possesses a structure related to the inflection point present there (near $N = 10$).
This example shows that even such a simple arrangement of non-interacting objects may have non-equilibrium non-stationary complexity. A necessary (but not sufficient) condition is the possibility of constructing entropy and the presence of a time arrow.

4. Concluding Remarks

In many recent publications [5,8,9,51] it is argued that entropy can be a direct measure of complexity. Namely, a smaller value of entropy indicates more regularity or lower system complexity, while its larger value indicates more disorder, randomness and higher system complexity. However, according to Gell–Mann, more disorder means less, and not more, system complexity. These two viewpoints are contradictory—this is a serious problem, which we have addressed.
Our motivation in solving the above problem was based on Gell–Mann’s view of complexity. This is because we fail to agree that the loss of information by the system as it approaches equilibrium increases its complexity; notably, Δ I ( p e q , p e q ) (see Appendix B for detail) takes its minimum value then, and complexity must decrease.
In addition, the differences between entropies in Equation (1) eliminate the useless dependence of complexity on the additive constant that may appear in the definition of entropy. It can be said that the system state with the highest complexity is the state most distant from all of the states of the system of lesser or no complexity.
Thus, in the sense of Gell–Mann, the measure of complexity should supply complementary information to the entropy or its monotonic mapping.
Therefore, in this work, we have presented a methodology which allows building a universal measure of complexity as a function of a system state based on non-linearly transformed entropy. This is a non-extensive measure. This measure should meet a number of conditions/axioms, which we have indicated in this work. A parsimonious example, of the simplest system with a small and a large number of degrees of freedom, is presented in order to support our methodology. As a result of this approach, we have shown that (generally speaking) the most complex are optimally mixed states consisting of pure states, i.e., of the most regular and most disordered, which the space of states of a given system allows. This also applies to the distinctive examples outlined in Appendix D and Appendix E (although this requires a redefinition of some variables and parameters).
We should pay attention to an essential issue regarding the definition of the phenomenological partial measure of complexity given by Equation (1). This definition is open in the sense that, if the description of complexity requires, for example, one additional quantity, then Equation (1) takes on an extended form,
$$C_X(S,E;m_1,n_1,m_2,n_2) \stackrel{\mathrm{def}}{=} (S_{\max}-S)^{m_1}(S-S_{\min})^{n_1}\,(E_{\max}-E)^{m_2}(E-E_{\min})^{n_2} \ge 0,$$ (32)
where this new quantity $E$ satisfies $E_{\min} \le E \le E_{\max}$. This definition still has an open character. Specifically, it also allows (if the situation requires) the replacement of one quantity with another, e.g., entropy with free energy, or the consideration of some derivatives (e.g., of the type $\frac{\partial S}{\partial E}$). Openness and substitutability should be the key features of the measure of complexity. Moreover, the exponents $m_j, n_j$, $j = 1, 2$, determine the order of complexity, i.e., its level or scale. We emphasize that the measure of complexity introduced here can describe isolated and closed systems (although in contact with a reservoir), as well as open systems that can exchange their elements.
From Equations (13) and (32), we get the phenomenological universal full measure of complexity in the form, which extends Equation (17),
$$X(S,E;m_1^0,n_1^0,m_2^0,n_2^0) = \left(1-\frac{1}{M_1}\right)^2 \frac{(S_{\max}-S)^{m_1^0}}{1-\frac{S_{\max}-S}{M_1}}\;\frac{(S-S_{\min})^{n_1^0}}{1-\frac{S-S_{\min}}{M_1}} \times \left(1-\frac{1}{M_2}\right)^2 \frac{(E_{\max}-E)^{m_2^0}}{1-\frac{E_{\max}-E}{M_2}}\;\frac{(E-E_{\min})^{n_2^0}}{1-\frac{E-E_{\min}}{M_2}} \ge 0.$$ (33)
The full measure of complexity is a weighted sum of partial measures of complexity across all complexity scales. As one can see, this full measure may contain singularities. They are the necessary signatures of criticality existing in the system. This meets the expectations presented in the literature.
Definitions of the measures of complexity, Equations (1) and (17), and their possible extensions are universal and useful. This is due to entropy, which is associated not only with thermodynamics (Carnot, Clausius, Kelvin) and statistical physics (Boltzmann, Gibbs, Planck, Rényi, Tsallis), but also with the information approach (Shannon, Kolmogorov, Lyapunov, Takens, Grassberger, Hentschel, Procaccia), and with the approach from the side of cellular automata (von Neumann, Ulam, Turing, Conway, Wolfram, et al.), i.e., with any representation of the real world using a binary string. Today, we already have several very effective methods for counting the entropy of such strings, as well as other macroscopic characteristics sensitive to organization and to self-organizing systems, as well as to their synchronization (synergy, coherence), competition, reproduction, and adaptation—all of them sometimes having local and sometimes global characters.
Our definition of complexity also extends to research into the complexity of biologically active matter. Here, in particular, research on the consciousness of the human brain can derive a fresh impulse. The point is that most researchers believe that the main feature of conscious action is a maximal complexity or even a critical complexity [52]. In our approach, these would be $C_X^{\max}$ and $X(N_1^{cr} s)$.
We hope that our approach will enable: (i) the universal classification of complexity, (ii) the analysis of a system critical behaviour and its applications, and (iii) the study of dynamic complexity. All of these constitute the background to the science of complexity.

Author Contributions

R.K. and Z.R.S. conceptualised the work; R.K. wrote the draft and conducted the formal analysis and prepared the draft of figures; J.K. provided numerical calculations and provided the final figures; Z.R.S. finalised the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

One of the authors of the work (Z.R.S.) partially benefited from the financial support of the ZIP Program. This program of integrated development activities of the University of Warsaw is implemented under the operational program Knowledge Education Development, priority axis III. Higher education for economy and development, action: 3.5 Comprehensive university programs, from 2 April 2018 to 31 March 2022, based on the contract signed between the University of Warsaw and the National Center for Research and Development. The program is co-financed by the European Union from the European Social Fund; http://zip.uw.edu.pl/node/192. The author (Z.R.S.) is also partially supported by the Center for Study of Systemic Risk, Faculty of “Artes Liberales” University of Warsaw; http://al.uw.edu.pl/. Apart from that, there was no other financial support.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Properties of the Partial Measure of Complexity

It is worth paying attention to Equations (1)–(4). For a fixed span $Z$ and order $(m,n)$, there may still be systems of different complexities. A description of complexity using only $C_X^{\max}$ is insufficient, because there can be many systems with the same span and order. However, we assume systems to be equivalent, i.e., belonging to the same complexity class $(Z, m, n)$, if they have the same span at a given order. We can still distinguish them as, in general, they differ in the location of $S_{C_X^{\max}}$. We can say that a given class has a greater partial measure of complexity if it has a larger $C_X^{\max}$. Within a given class, a system has a larger complexity if it stays closer to $C_X^{\max}$, i.e., if its current entropy $S$ is closer to $S_{C_X^{\max}}$. For a given $C_X$, from Equation (1) we obtain (for each order $(m,n)$) two solutions for $S$: one to the left and the other to the right of $S_{C_X^{\max}}$ (except when $S = S_{C_X^{\max}}$). Division into classes allows us to introduce an additional classification of complexity.
A distinction should be made between two cases of measuring complexity: (i) Z < m + n and (ii) Z > m + n . This is particularly evident when we consider the ratio of both types of complexity measures for m + n > 1 ,
$$\frac{C_X^{\max}(Z_i)}{C_X^{\max}(Z_{ii})} = \left(\frac{Z_i}{Z_{ii}}\right)^{m+n} < 1,$$ (A1)
where Z i belongs to case (i), while Z i i to case (ii). Thus, the greater the exponent m + n , the greater the difference between C X m a x ( Z i i ) and C X m a x ( Z i ) .
The alternate form of Equation (1),
$$C_X(\Delta) = \left(\frac{n}{n+m}Z + \Delta\right)^n\left(\frac{m}{n+m}Z - \Delta\right)^m,$$ (A2)
where the deviation $\Delta = \Delta(t) = S(t) - S_{C_X^{\max}}$ makes operating with the $C_X$ coefficient easier in the vicinity of $S_{C_X^{\max}}$, where the parabolic expansion is valid. We then have:
$$C_X(\Delta) \approx C_X^{\max}\left[1 - \frac{1}{2}\frac{(n+m)^3}{mn}\left(\frac{\Delta}{Z}\right)^2\right] \approx C_X^{\max}\exp\left[-\frac{1}{2}\frac{(n+m)^3}{mn}\left(\frac{\Delta}{Z}\right)^2\right],$$ (A3)
that is, a Gaussian form, which has variance $\sigma^2 = \frac{nm}{(n+m)^3}Z^2$. Only in the narrow range of $S$ around $S_{C_X^{\max}}$ is the measure of complexity $C_X$ symmetrical regardless of the order $(m,n)$.
In fact, only the location of the maximum of C X ( S ; m , n ) is determined (for a given range of S) by the ratio of m to n. However, to have dependence of coefficient C X on entropy in the entire entropy range S m i n S S m a x , it is necessary to determine two extreme values of entropy ( S m i n and S m a x ) and two exponents (n and m). In general, finding these parameters and exponents is still far from trivial because they have a contextual (and not a universal) character.
However, in a particular situation, when the maximum complexity is symmetrical, i.e., when m = n , we obtain
$$S_{C_X^{\max}} = \bar{S} = \frac{S_{\min} + S_{\max}}{2}$$ (A4)
and
$$C_X^{\max} = \left(\frac{Z}{2}\right)^{2n}.$$ (A5)
Equation (1) for the partial measure of complexity applies both to single- and multi-particle problems, because entropy can be built even for a very long single-particle trajectory. Moreover, Equation (1) emphasizes our point of view that any evolving system for which one can introduce the concept of entropy, and which has a state of thermodynamic equilibrium (for which entropy reaches a global maximum), contains at least a signature of complexity. For systems of negligible complexity, i.e., for which $S \approx S_{\min}$ or $S \approx S_{\max}$, the measure $C_X(S;m,n)$ is close to zero. This does not mean, however, that we cannot locate $S_{C_X^{\max}}$ near $S_{\min}$ or $S_{\max}$. It is then sufficient to have strongly asymmetric situations with $n \ll m$ or $n \gg m$, respectively.

Appendix B. Non-Stationary Entropies

In this Appendix, we sketch a non-stationary situation, which is what we are dealing with throughout this work. We are dealing with systems evolving to a state of statistical equilibrium, even from states far from statistical equilibrium.
Non-stationary entropy is understood to be entropy based on coarse-grained time-dependent probability distributions—this type of entropy is most often used [43,44,45,51]. A very characteristic example is the entropy class built on time-dependent probability distributions, { p j ( t ) } , satisfying the master (Markovian) or M-equation, in the presence of detailed balance conditions. Thus, we are only considering systems evolving to statistical equilibrium.
Here we give two very characteristic (non-equivalent) examples of non-stationary entropies (more precisely, one should speak of specific entropies). In addition, the entropies given by Equations (A6) and (A7) belong to the category of relative entropies. Namely,
$$S(t) = S_0\left[1 - \sum_j p_j^{eq}\, f\!\left(\frac{p_j(t)}{p_j^{eq}}\right)\right]$$ (A6)
and
$$S(t) = -S_0 \ln\left[\sum_j p_j^{eq}\, f\!\left(\frac{p_j(t)}{p_j^{eq}}\right)\right],$$ (A7)
where $p_j(t)$ is the probability of finding the system in state $j$ at time $t$, while $p_j^{eq}$ is the corresponding equilibrium probability. We are considering only discrete states here. The function $S_0 f(x) \ge 0$, with domain $0 \le x$, is a non-negative convex function obeying $S_0 \frac{d^2 f}{dx^2} \ge 0$. It can be shown [45] that entropies defined in this way obey the law of entropy increase, i.e., the derivative
$$\frac{dS(t)}{dt} \ge 0;$$ (A8)
therefore $S(t) \to S_{\max}$ from below when $p_j(t) \to p_j^{eq}$, for any $j$. Equation (A8) is the key property of entropy. Let us add that in the limit $p_j(t) = p_j^{eq}$, for any $j$, the entropy defined by Equations (A6) and (A7) vanishes. In other words, these entropies are negative and grow to zero as the system tends to equilibrium.
It is worth paying attention to the possibility of defining generalized information gain, whereby this information gain is calculated here relative to the equilibrium distribution. We can write,
$$\Delta I(p(t), p^{eq}) = -S(t),$$ (A9)
where $p(t) = \{p_j(t)\}$ and $p^{eq} = \{p_j^{eq}\}$. Furthermore, the entropy $S(t)$ is closely related to the partition function. Therefore, in this approach, the entropy is a base function.
Most often the function f ( x ) is selected in the form [45,56],
$$f(x) = x^{\alpha}, \quad \alpha > 1,$$ (A10)
coupled with the constant $S_0 = \frac{1}{\alpha - 1}$, where $\alpha$ can converge to 1. With these choices, the entropy given by Equation (A6) is called the Tsallis (relative) entropy and the entropy given by Equation (A7) the Rényi (relative) entropy. Usually, the entropic index $\alpha$ is denoted by $q$ in the case of the Tsallis entropy.
The entropies given by Equations (A6) and (A7) converge, with the help of Equation (A10), to the Kullback–Leibler entropy [9] when the entropic index $\alpha \to 1$. However, the Rényi and Tsallis entropies are essentially different for $\alpha \neq 1$. The Rényi entropy is an additive function describing extensive systems, while the Tsallis entropy is not: it is a non-additive function describing non-extensive systems.
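The behaviour described above can be illustrated with a short sketch (assumed example distributions, not from the paper) that evaluates the Tsallis and Rényi relative entropies of Equations (A6), (A7) and (A10) and their common Kullback–Leibler limit as $\alpha \to 1$.

```python
# A sketch with assumed example distributions (not from the paper): the
# Tsallis and Renyi relative entropies of Equations (A6), (A7) and (A10)
# are non-positive and approach the (negative) Kullback-Leibler divergence
# as the entropic index alpha tends to 1.
import numpy as np

p_eq = np.array([0.5, 0.3, 0.2])          # assumed equilibrium distribution
p_t = np.array([0.7, 0.2, 0.1])           # assumed instantaneous distribution

def S_tsallis(p, q, alpha):
    """Equation (A6) with f(x) = x**alpha and S_0 = 1/(alpha - 1)."""
    return (1.0 - np.sum(q * (p / q) ** alpha)) / (alpha - 1.0)

def S_renyi(p, q, alpha):
    """Equation (A7) with f(x) = x**alpha and S_0 = 1/(alpha - 1)."""
    return -np.log(np.sum(q * (p / q) ** alpha)) / (alpha - 1.0)

minus_KL = -np.sum(p_t * np.log(p_t / p_eq))

for alpha in (2.0, 1.5, 1.1, 1.01):
    print(alpha, S_tsallis(p_t, p_eq, alpha), S_renyi(p_t, p_eq, alpha))
print("-D_KL limit:", minus_KL)
```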
Using relative entropy in the definition of complexity measures is productive. It is because other types of entropy can be derived from it, such as ordinary entropy (or Boltzmann-Gibbs-Shannon one) and conditional entropy.

Appendix C. Derivation of the Constitutive Equation for Perfect Gas

The derivation of constitutive Equation (30) comes down to presenting both sides of Equation (29) in explicit form, shortening of common factors, proper organization and presentation. Accordingly, the left side of Equation (29) takes the form,
S C X m a x = S N L C X m a x = ln Γ N L C X m a x = ln N ! N L C X m a x ! N N L C X m a x ! = ln N L C X m a x + 1 N L C X m a x + 2 · · ( N 1 ) N 1 · 2 · 3 · · N N L C X m a x = ln Π j = 1 N N L C X m a x 1 + N L C X m a x j ,
where the number of factors in the numerator and denominator is the same and equals N N L C X m a x .
As for the right side of Equation (29), we present it in an explicit form,
1 2 S ( N / 2 ) = 1 2 ln Γ ( N / 2 ) = 1 2 ln N ! N 2 ! N 2 ! = 1 2 ln N 2 + 1 N 2 + 2 N 2 + 3 · · N 2 + N 2 1 · 2 · 3 · · N 2 = 1 2 ln Π j = 1 N / 2 1 + N 2 j .
Comparison of Equations (A11) and (A12) just leads to Equation (30).

Appendix D. Entropies of Time Series—A Sketch

The entropy study of various time series is a crucial issue in system dynamics. The point is that the activity of the systems is perceived precisely through time series. The study of nonlinear time series is particularly important. Below we outline two essential methodologies for constructing entropy (including a multi-scale one). Then we show how to connect our complexity measure with these methodologies.

Appendix D.1. Entropy of Embedded Time Series

Various time series from stock exchanges or Forex quotations are the central sources of empirical data available from financial markets. The main question is about the entropy of the time series and hence about the measure of time series complexity. Following Li et al. [51], we present a method of constructing entropy for a finite time series.
We consider the time series $\{x_j\}_{j=1}^{N}$ consisting of $N$ elements $x_j$, $j = 1, 2, \ldots, N$. From this we select sub-series indexed by $i$, with $1 \le i \le N - l + 1$. Each sub-series consists of $l$ components defined as follows: $y_i^l(k)$, $0 \le k \le l-1$, where (for given indexes $i$ and $l$) the component $y_i^l(k) \stackrel{\mathrm{def}}{=} x(i+k)$. As one can see, $l$ here plays the role of the embedding dimension.
Moreover, two subsequent sub-series characterized by the same $l$ have $l-1$ elements in common. In the collection of sub-series, which creates an $l$-dimensional vector space, one can introduce a topology defined by the metric $d_{ij}$, giving the distance between arbitrary sub-series $i$ and $j$. Then one can build a distribution (histogram), $p(d_{ij})$, of the distances between vectors. Of course, the question of how to choose the embedding dimension $l$ is fully justified. For example, one could assume that this dimension is equal to the correlation dimension [7]. However, we treat $l$ as a free parameter, and we do not impose any additional restrictions on it. With the probability distribution $p(d_{ij})$, one can build the entropy (for example, $S(l) = -\langle \ln p(d_{ij})\rangle$) and hence the complexity of the time series (see Section 2.1 and Appendix B for details).
Similarly to the example with expanding gas in Section 3 and based on Equation (5), we can formulate the constitutive equation in the form
$$S\!\left(l_{C_X^{\max}}\right) = \frac{\frac{1}{m} S(l_{\max}) + \frac{1}{n} S(l_{\min})}{\frac{1}{m} + \frac{1}{n}},$$ (A13)
where $S(l_{\max}) = S_{\max}$ and $S(l_{\min}) = S_{\min}$.
The transcendental Equation (A13) has to be solved numerically for the sought value of the unknown $l_{C_X^{\max}}$. However, first, the values of $l_{\min}$ and $l_{\max}$ must be found (also numerically), which leads to the determination of $S_{\min}$ and $S_{\max}$, respectively. This allows us to determine $C_X^{\max}$. In the final step, one can (similarly to what is presented in Appendix B) find informative relationships between the above quantities $l_{C_X^{\max}}$, $l_{C_X^{\max}}/N$, $C_X^{\max}$ and $N$.
This is all possible if one has sufficiently good statistics, i.e., when $N - l + 1 \gg l$, that is, $N + 1 \gg 2l$.

Appendix D.2. Multi-Scale Entropy

We can now proceed to define multi-scale complexity, but first we need to define multi-scale entropy, or the hierarchy of entropies. For this purpose, we prepare the coarse-graining scheme. The primary time series consists of $N$ elements. We divide it into $n = \mathrm{Int}[N/\tau]$ non-overlapping intervals, where $\tau$ is the time horizon/scale, i.e., the number of time steps that we use to separate the elements of the primary time series. Now we can build a new time series, whose non-overlapping elements are defined as arithmetic means over subsequent intervals of length $\tau$,
$$y_j^{\tau} = \frac{1}{\tau}\sum_{i=j\tau+1}^{(j+1)\tau} x_i, \quad 0 \le j \le n-1.$$ (A14)
More can be said about the choice of $\tau$ using the (bilinear) autocorrelation function
$$AC(t) \approx \frac{1}{n-t}\sum_{j=0}^{n-t} y_j^{\tau}\, y_{j+t}^{\tau}, \quad n \gg t,$$ (A15)
if we are dealing with a stationary and sufficiently long time series. Then the time $\tau = \tau_c$ can be chosen, e.g., as the half-life of this function, i.e., $AC(\tau_c) \approx \frac{1}{2}AC(0)$. Other choices of $\tau$ can also be considered [43]. Note that for $\tau > \tau_c$ the time series $\{y_j^{\tau}\}$ can consist of statistically independent elements.
With time series dependent on the time scale $\tau$, we can build the scale-dependent entropy $S^{\tau}(t)$ and the corresponding complexity $C_X(S^{\tau})$ by the methods presented in Appendix D.1. Thanks to this, for each scale $\tau$ separately (i.e., for each $n$ separately), we can find quantities such as $n_{C_X^{\max}}$, $n_{C_X^{\max}}/N$ and $C_X^{\max}$.
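A minimal sketch of this coarse-graining procedure is given below (the input series is an assumed toy AR(1) signal and all parameters are illustrative): it builds the series $\{y_j^{\tau}\}$ of Equation (A14) and estimates the autocorrelation of Equation (A15).

```python
# A sketch of the coarse-graining of Equation (A14) and the autocorrelation
# estimate of Equation (A15); the input series is an assumed toy AR(1)
# signal and all parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N = 10_000
x = np.zeros(N)
for i in range(1, N):                     # toy correlated series (assumption)
    x[i] = 0.9 * x[i - 1] + rng.normal()

tau = 25
n = N // tau                              # n = Int[N / tau]
y = x[: n * tau].reshape(n, tau).mean(axis=1)      # Equation (A14)

def ac(y, t):
    """Bilinear autocorrelation of Equation (A15), for n >> t."""
    return np.mean(y[: len(y) - t] * y[t:])

print("AC(0), AC(1), AC(5):", ac(y, 0), ac(y, 1), ac(y, 5))
```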

Appendix D.3. Elements of Deterministic Chaos: A Cyclically Kicked Damped Rotor

Deterministic chaos can be an example of the complexity of single-particle motion (i.e., the complexity of its phase space). It is caused by instability with respect to initial conditions. A typical example is a cyclically kicked damped rotor [20] or a damped pendulum with a cyclic driving force [19]. Here we sketch this example.
The starting point is Newton’s equation for a rotor motion in the presence of viscous drag, which takes the form
$$\frac{d^2\phi(t)}{dt^2} = -\gamma\,\frac{d\phi(t)}{dt},$$ (A16)
where $\phi(t)$ is the time-dependent rotation angle of the rotor, $\gamma$ is the viscous drag coefficient, and the moment of inertia is set equal to unity. The exact solution of Equation (A16) is based on a time-dependent exponential function.
Next, we enrich this equation with a non-linear impulsive driving force,
F = \kappa f(\phi) \sum_{n=0}^{\infty} \delta(t - nT), (A17)
where f ( ϕ ) is a non-linear function of ϕ and κ is its amplitude, while T is the period of this force. Hence, in the stroboscopic variables (or in the Poincaré representation) we obtain the Poincaré map,
\omega_{n+1} = \exp(-\gamma T) \left[ \omega_n + \kappa f(\phi_n) \right], (A18)
\phi_{n+1} = \phi_n + \frac{1 - \exp(-\gamma T)}{\gamma} \left[ \omega_n + \kappa f(\phi_n) \right], (A19)
where ω = dφ/dt. The above set of recursive equations allows us to examine, on the Poincaré surface, both dissipative deterministic chaos (e.g., the logistic or Hénon map) and conservative chaos (e.g., the Chirikov map), depending on the values of the γ and κ parameters and on the form of the function f.
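Iterating the map (A18) and (A19) is straightforward; in the sketch below the choice f(φ) = sin φ, the parameter values, and the wrapping of φ modulo 2π are illustrative assumptions rather than choices fixed in the text.

```python
import numpy as np

def kicked_rotor_map(omega0, phi0, gamma=0.2, kappa=3.0, T=1.0,
                     f=np.sin, n_steps=10_000):
    """Iterate the Poincare map (A18)-(A19) of the cyclically kicked damped rotor."""
    omega = np.empty(n_steps)
    phi = np.empty(n_steps)
    omega[0], phi[0] = omega0, phi0
    decay = np.exp(-gamma * T)
    for n in range(n_steps - 1):
        kicked = omega[n] + kappa * f(phi[n])
        omega[n + 1] = decay * kicked
        # wrapping the angle modulo 2*pi is an added convention, not part of (A19)
        phi[n + 1] = (phi[n] + (1.0 - decay) / gamma * kicked) % (2.0 * np.pi)
    return omega, phi

omega, phi = kicked_rotor_map(0.1, 0.5)
print(omega[:3], phi[:3])
```

The resulting pairs (ω_n, φ_n) form exactly the two-component time series whose entropy is discussed next.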
It is commonly believed that the phase space {ω, φ} of the system presented above is complex. Quantifying this complexity requires knowledge of the entropy. The total entropy of the system is the entropy of its long phase trajectory. However, we consider the complexity of the phase-space structure mapped onto the Poincaré surface, i.e., based on a narrower entropy describing only this mapped structure. Obviously, this entropy depends on the assumed initial conditions and parameters. Constructing the entropy first requires defining a long time series; here it consists of N two-component elements {ω_n, φ_n}_{n=0}^{N−1}. Indeed, the approach presented in Appendix D.1 can be used to calculate this entropy.
The goal here is to find the entropy value S_{CX_{max}} at which the complexity measure reaches its maximum value CX_{max}. This comes down to finding the embedding dimension l_{CX_{max}} for which the entropy S = S_{CX_{max}}. It is l that is the optimized parameter (in Section 3 it was the dynamic variable N_L). It can be expected that l_{CX_{max}} is neither too small nor too large compared with the dimension N of the base space.

Appendix D.4. Elements of Anomalous Diffusion

Let us consider, as a typical example, the deterministic dissipative motion of an extremely massive single molecule in a viscous fluid. This simple motion is described by Newtonian dynamics, i.e., by the linear ordinary differential equation,
\frac{du(t)}{dt} = -\gamma u(t), (A20)
where u(t) is the time-dependent molecule velocity and γ is a constant friction coefficient. The exact solution of Equation (A20) is an exponentially decaying function of time.
We now move from the deterministic level to a stochastic one by a relatively small extension of Equation (A20). That is, we extend Equation (A20) so as to obtain the retarded generalized Langevin equation (GLE) [50],
\frac{du(t)}{dt} = -\int_0^t \gamma(t - t')\, u(t')\, dt' + \frac{R(t)}{m}, (A21)
where γ(t) is the retarded friction coefficient, R(t) is a random force of thermal origin, i.e., caused by the random action of fluid particles, and m is the mass of the molecule. Notably, the retardation is present only when the mass of the molecule suspended in the fluid is not too large.
Equation (A21), which is the essence of the generalized Ornstein-Uhlenbeck (OU) theory of Brownian motion, takes into account both the feedback effect associated with the reverse fluid flow pushed by the molecule and the erratic nature of the molecule's motion. Although it is still a linear equation in u, the velocity autocorrelation function, C(t), is no longer a simple exponential but exhibits a slower, power-law decay,
C(t) \propto \frac{1}{t^{2 - \alpha}}, \quad \alpha = \frac{1}{2}, (A22)
for long times, which is indeed more realistic. Equation (A22) is a central result of the generalized OU theory. As one can see, a relatively small extension of Equation (A20) (small because it still leaves the equation in the domain of linear equations in u) led to a decay according to a power law. Such a law is an essential attribute of a complex system. This algebraic fat tail was noticed for the first time by Alder and Wainwright in molecular-dynamics simulations of a hard-sphere fluid [53], at least at intermediate fluid densities. Equation (A22) is the result of a cooperative phenomenon in the form of a positive feedback arising in the system between the molecule suspended in the fluid and the particles of the fluid. The non-linear nature of this coupling in time (that is, the non-linear dependence of γ on time) is contained in the integral kernel of Equation (A21). There is a wide class of physical problems that can be modelled using this equation [28,54] (for almost arbitrary α). Equation (A21) is the first level of complexity here.
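Equation (A21) can be integrated numerically with a simple explicit Euler scheme; in the sketch below the power-law memory kernel, the white-noise forcing, and all parameter values are illustrative assumptions only and are not tuned to reproduce the exponent of Equation (A22).

```python
import numpy as np

def gle_euler(n_steps=2000, dt=0.01, m=1.0, u0=1.0, sigma=1.0, seed=2):
    """Explicit Euler scheme for du/dt = -int_0^t gamma(t-t') u(t') dt' + R(t)/m."""
    rng = np.random.default_rng(seed)
    t = np.arange(n_steps) * dt
    gamma = (1.0 + t) ** (-1.5)                  # assumed retarded friction kernel
    u = np.empty(n_steps)
    u[0] = u0
    for k in range(n_steps - 1):
        # discretized memory integral: sum_j gamma(t_k - t_j) u(t_j) dt
        memory = np.sum(gamma[k::-1] * u[:k + 1]) * dt
        noise = sigma * np.sqrt(dt) * rng.standard_normal() / m   # Euler-Maruyama noise step
        u[k + 1] = u[k] - dt * memory + noise
    return t, u

t, u = gle_euler()
print(u[:5])
```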
Note that the direct consequence of Equation (A22) is the sub-diffusive behaviour of the suspended molecule, i.e., for long times the variance of the stochastic process X (t) (without a drift) takes the form,
\langle X(t)^2 \rangle \propto t^{\alpha}. (A23)
Now one can ask what the family of distributions with the variance given by the above formula looks like. The answer is almost immediate, namely,
P(X, t) = \frac{1}{t^{\alpha/2}}\, f\!\left( \frac{X}{t^{\alpha/2}} \right), (A24)
which is a positive and normalized time-dependent probability distribution, where the scaling function f is almost arbitrary.
The results given by Equations (A23) and (A24) became the inspiration for extensive research on anomalous diffusion [28], for example, by lifting the restriction on the shape exponent α, i.e., allowing α to be, in general, different from 1/2. This type of extension permits more realistic considerations [28].

Characteristic Examples

Here we show how comprehensive the probability distribution given by Equation (A24) is.
Example A1.
Brownian and Gaussian random walk.
Let us assume for α = 1 ,
f\!\left( \frac{X}{t^{\alpha/2}} \right) = \frac{1}{\sqrt{2\pi}} \exp\!\left( -\frac{X^2}{2t} \right). (A25)
As one can see, the variance of the distribution given by Equation (A24) with the substitution given by Equation (A25) satisfies Equation (A23)—it is a linear function of time, as it should be. This example is our reference case.
Example A2.
Brownian and non-Gaussian random walk.
Let us now assume that still α = 1 but
f\!\left( \frac{X}{t^{\alpha/2}} \right) = \frac{1}{2} \exp\!\left( -\frac{|X|}{t^{1/2}} \right), (A26)
that is, the probability distribution P is a Laplace distribution. Despite this, the variance of this distribution remains a linear function of time. Thus we are dealing here with a Brownian and Laplace random walk.
Example A3.
Non-Brownian and Gaussian random walk.
Let us here assume that
f\!\left( \frac{X}{t^{\alpha/2}} \right) = \frac{1}{\sqrt{2\pi}} \exp\!\left( -\frac{X^2}{2t^{\alpha}} \right), (A27)
where α is an arbitrary shape exponent. As one can see, the variance is here (in general) a non-linear function of time: it can be both sub- and super-linear. Thus, we are dealing here with the fractional Brownian motion.
Example A4.
Non-Brownian and Non-Gaussian random walk.
Let us consider the Weibull distribution—this gives f function in the form,
f\!\left( \frac{X}{t^{\alpha/2}} \right) = \kappa \left( \frac{X}{t^{\alpha/2}} \right)^{\kappa - 1} \exp\!\left[ -\left( \frac{X}{t^{\alpha/2}} \right)^{\kappa} \right], \quad X \ge 0, (A28)
where κ > 1. The variance of the Weibull distribution scales according to Equation (A23). This means that we can model both sub-diffusion (when α < 1) and super-diffusion (when α > 1) using the Weibull distribution. Of course, with this distribution one can also model a Brownian (when α = 1) but non-Gaussian random walk.
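The common scaling of the variance in Examples A1–A4 can be verified by direct numerical integration of Equation (A24); the sketch below does this for the Gaussian case of Example A3 and the Weibull case of Example A4 (the value of α, the shape κ, and the integration grids are arbitrary illustrative choices).

```python
import numpy as np

def variance(P, x):
    """Variance of a density P sampled on a uniform grid x (simple Riemann sums)."""
    dx = x[1] - x[0]
    norm = P.sum() * dx
    mean = (x * P).sum() * dx / norm
    return ((x - mean) ** 2 * P).sum() * dx / norm

alpha, kappa = 0.6, 2.5                          # illustrative values only
x_g = np.linspace(-60.0, 60.0, 200_001)          # grid for the Gaussian case (full line)
x_w = np.linspace(0.0, 60.0, 100_001)            # grid for the Weibull case (X >= 0)
for t in (1.0, 4.0, 16.0):
    s = t ** (alpha / 2.0)                       # scale factor t^(alpha/2)
    P_gauss = np.exp(-x_g ** 2 / (2.0 * t ** alpha)) / (s * np.sqrt(2.0 * np.pi))
    u = x_w / s
    P_weib = kappa * u ** (kappa - 1.0) * np.exp(-u ** kappa) / s
    # both ratios stay constant in t, confirming <X^2> proportional to t^alpha
    print(t, variance(P_gauss, x_g) / t ** alpha, variance(P_weib, x_w) / t ** alpha)
```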

Appendix D.5. Nonequilibrium Configuration Entropy

We are now constructing configuration entropy (i.e., based on configuration rather than phase space). We first discretize the space, i.e., the X axis. Let the size of the discretization step be Δ X . The time-dependent probability of finding a stochastic process at time t in the nth cell with size Δ X is,
p_n(\Delta X, t) = \int_{n \Delta X}^{(n+1) \Delta X} P(X, t)\, dX, (A29)
where the free parameter ΔX defines the level of coarse-graining. Indeed, we use this probability in Equation (A6) or (A7). The equilibrium probability needed there, p_n^{eq}(ΔX) = p_n(ΔX, t → ∞), is constructed in the usual way. That is, we confine the system to a very large but finite size and let time t go to infinity. Next, both probabilities (the nonequilibrium one for finite time t and the equilibrium one) are substituted into Equation (A6) or Equation (A7). Finally, the system is taken to the thermodynamic limit.
The goal is to find the cell size ΔX equal to ΔX_{CX_{max}}, i.e., the one that maximizes CX and thus yields CX_{max}. This can only be done by numerical means.
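A sketch of this numerical scan over ΔX is given below; the Gaussian form of P(X, t), the finite interval [−L, L], and the crude stand-in for the t → ∞ equilibrium distribution are illustrative assumptions, and the resulting entropies would then be inserted into Equation (A6) or (A7) to obtain CX.

```python
import numpy as np
from math import erf, sqrt

def cell_probabilities(dX, t, L=50.0, alpha=1.0):
    """p_n(dX, t) of Equation (A29) for an assumed Gaussian P(X, t) with variance t**alpha,
    restricted to the finite interval [-L, L] (the Gaussian form and L are illustrative)."""
    sigma = sqrt(t ** alpha)
    edges = np.arange(-L, L + dX / 2.0, dX)
    cdf = np.array([0.5 * (1.0 + erf(e / (sigma * sqrt(2.0)))) for e in edges])
    p = np.diff(cdf)
    p = p[p > 0.0]
    return p / p.sum()

def shannon(p):
    return float(-(p * np.log(p)).sum())

# scan the coarse-graining level dX; S_t and S_eq would then enter Equations (A6)/(A7)
for dX in (0.1, 0.5, 1.0, 2.0, 5.0):
    S_t = shannon(cell_probabilities(dX, t=4.0))
    S_eq = shannon(cell_probabilities(dX, t=1.0e6))   # crude stand-in for t -> infinity
    print(dX, round(S_t, 3), round(S_eq, 3))
```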
We emphasize that Equation (A21) is the dynamic basis of the (stochastic) Fokker-Planck equation [45] as well as of the fractional Langevin equation [55] and hence of the fractional Fokker-Planck equation [28]. The same Equation (A21) is a springboard to move to higher levels of complexity.

Appendix E. Dynamic Self-Organisation on a Complex Network—A Sketch

It is hard to imagine not using complex-network technologies to analyze collective processes in the socio-economic world. One of the fascinating sources of complexity is the capacity for self-organization, either spontaneous (of endogenous character) or stimulated (of exogenous character). Here we consider a dynamic phase transition in which the network of stock market companies evolves towards a characteristic big-star structure, i.e., it condenses [6,32].

Appendix E.1. Minimal Spanning Tree of the Frankfurt Stock Exchange

Here we consider the canonical Minimal Spanning Tree (MST) of the Frankfurt Stock Exchange (FSE) companies, belonging to the widely exploited class of correlation-based complex networks (the content of this section was created with the participation of Mateusz Wiliński). Such networks describe well the dynamics of the relationships between companies, that is, the evolution of the MST. The MST is a very simple type of complex network that, although its clustering coefficient vanishes by construction, provides a lot of important information about the structure and dynamics of real networks.
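For orientation, a minimal sketch of the standard construction of such a correlation-based MST is given below; the surrogate Gaussian returns and the Mantegna distance d_ij = sqrt(2(1 − ρ_ij)) are illustrative assumptions, whereas the original study [32] scans real FSE quotations with a moving window.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(3)
n_assets, n_days = 20, 250
returns = rng.standard_normal((n_days, n_assets))   # surrogate daily returns

rho = np.corrcoef(returns, rowvar=False)             # inter-company correlation matrix
dist = np.sqrt(2.0 * (1.0 - rho))                    # Mantegna distance d_ij

G = nx.Graph()
for i in range(n_assets):
    for j in range(i + 1, n_assets):
        G.add_edge(i, j, weight=dist[i, j])

mst = nx.minimum_spanning_tree(G)                    # the MST snapshot of the 'market'
print("edges:", mst.number_of_edges(),
      "max degree:", max(d for _, d in mst.degree()))
```

In a condensed, star-like snapshot the resulting tree contains a single node of very high degree, as discussed for the SZG company below.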
For example, in Figure A1, we present the self-organized structure of the Frankfurt Stock Exchange as of 29 January 2007. As one can see, the prominent star centred at SALZGITTER (SZG) AG-Stahl und Technologie company (the network node marked by the red circle located at the centre of the network) dominates the stock market structure. The middle plot on the top and the plot on the right there correctly reflect this type of behaviour. The former plot clearly shows the local minimum determining the size of the average coordination zone of the SZG node. Its average radius (i.e., the MOL equals about 2.5 separation steps) confirms that the SZG company is the centre of the star. The last plot shows a red circle with a multiplicity of 1 and a degree close to 100, which represents the SZG node. This node is an outlier—an excellent example of the appearance of a super-extreme event, i.e., dragon-king event [29,30,31].
Figure A1. A snapshot picture of Monday, 29 January 2007. The plot on the top left shows the DAX index in the years 2005–2010. This index is the emanation of the entire stock exchange. Of course, a computer analyzes quotations of all companies on this stock exchange in the range mentioned above. Two vertical red straight lines mark the scanning window, and its centre is marked by a blue line. The beginning and end dates of the current window location are given just below the DAX and MOL plots. The same scanning window is applied to the MOL indicator. The middle plot at the top shows the course of the widely used mean-occupation layer (MOL) [32]. The upper plot on the right shows the distribution of degrees of the vertices of the network shown below. The red circle on this plot, which stands out significantly from the power-law distribution, concerns the centre of the star (the most prominent red circle). The coloured circles and name abbreviations in the legend denote the companies presented in the graph. The figure was taken from our publication [32].
The structural dynamics of the dragon-king is presented in Figure A2, i.e., we present there the dependence of the degree of this node on time, yielding a very meaningful λ-peak that characterizes a structural condensate (for details, see [32]).
In Figure A3 we compare the empirical degree entropy, S(t) = −Σ_k P(k) ln P(k), where P(k) is the empirical degree distribution, versus time in the presence and in the absence of the SZG company on the network. In Figure A1, the upper plot on the right shows an example of the (non-normalized) distribution P(k), where k denotes the degree of a node. Various nodes are marked there with coloured circles defined in the legend. As one can see, the presence of the SZG node significantly changes the structure of the MST network. It is worth determining the complexity of such a network.
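The degree entropy used in Figure A3 can be computed directly from any network snapshot; the sketch below contrasts a hub-dominated star (a stand-in for the condensed, SZG-centred state) with a random recursive tree (a stand-in for a more disordered, skeletal state). Both toy graphs are illustrative assumptions, not the empirical MST snapshots of [32].

```python
import numpy as np
import networkx as nx

def degree_entropy(graph):
    """S = -sum_k P(k) ln P(k) over the empirical degree distribution of a graph."""
    degrees = np.array([d for _, d in graph.degree()])
    _, counts = np.unique(degrees, return_counts=True)
    P = counts / counts.sum()
    return float(-(P * np.log(P)).sum())

star = nx.star_graph(99)               # one hub of degree 99, as for the SZG node
rng = np.random.default_rng(4)
tree = nx.Graph()
tree.add_node(0)
for v in range(1, 100):                # random recursive tree: attach to a random earlier node
    tree.add_edge(v, int(rng.integers(0, v)))

print("star:", round(degree_entropy(star), 3),
      "random tree:", round(degree_entropy(tree), 3))
```

The hub-dominated star has a markedly lower degree entropy than the disordered tree, in line with the entropy minimum at the condensation point seen in Figure A3.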
Figure A2. The temporal SZG vertex degree, k_SZG, vs. time. It forms the so-called λ-peak, marked with a red vertical dashed straight line. It shows temporary edge condensation on the SZG node. The span of this peak is marked by the blue vertical dashed straight lines. These lines are also plotted in Figure A3 below. The equilibrium scale-free networks are placed outside this area. The centre of this peak has been extrapolated to Thursday, 25 January 2007; a deviation of two trading days from the result shown in Figure A1 is irrelevant here. The plot was taken from the publication [32] with the consent of the editors.
Figure A3. Degree entropy (defined in [32] by degree distributions) vs. time. The solid line marks the entropy in the presence of the SZG vertex; the dotted line shows it after removing this node. One can see the crucial role of this node in preparing the temporary network structure. The well-defined local absolute minimum of the degree entropy is placed on Thursday, 25 January 2007. The plot was taken from the publication [32] with the consent of the editors.

Appendix E.2. Analysis for Case Z<1

We note that the extreme entropy values for the data shown in Figure A3 are as follows: S_{min}(t = 2007-01-25) = 5.690 and S_{max}(t = 2005-03-15) = 5.948, which gives the span Z = 0.258 < 1. Substituting these numbers into Equations (A4) and (A5), we get, for the order (m = 2, n = 2), the entropy
S_{CX_{max}} = 5.819, (A30)
and the complexity measure of the same order,
CX_{max} = 2.769 \cdot 10^{-4}. (A31)
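These two numbers can be reproduced directly; the short check below assumes the product form CX(S) = (S − S_min)^m (S_max − S)^n of the partial measure (for m = n = 2 the assignment of the exponents to the two factors is immaterial), an assumption consistent with the values quoted above rather than a formula restated in this appendix.

```python
import numpy as np

S_min, S_max = 5.690, 5.948        # extreme degree entropies read off Figure A3
m = n = 2

# assumed product form of the partial measure of complexity
CX = lambda S: (S - S_min) ** m * (S_max - S) ** n

S = np.linspace(S_min, S_max, 100_001)
i = int(np.argmax(CX(S)))
print("S_CXmax ~", round(float(S[i]), 3))   # reproduces 5.819
print("CX_max  ~", float(CX(S[i])))         # reproduces 2.769e-4
```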
From the 'Entropy with SZG' data in Figure A3, the dates corresponding to the entropy given by Equation (A30) can be read. There are many of them; we chose three particular ones: 6 January 2006, 21 December 2007, and 15 April 2008. All of them concern the most complex networks.
Figure A4 shows the network corresponding to the second date. One can see how much it differs from the least complex network shown in Figure A1. The figure shows a mixture of a few mini-stars with degrees not greater than about 20 (but no central one with a degree of almost 100, as in Figure A1), as indicated by the degree distribution on the right-hand side of the plot, alongside several skeletons and developed cascades arranged in a somewhat disordered way. It is a richer and more disordered structure, i.e., more complex than that shown in Figure A1.
Figure A4. The snapshot picture of the Frankfurt Stock Exchange complex network on 21 December 2007. This network represents the most sophisticated state of the stock market, that is, C X m a x given by Equation (A31). One can see how much this network differs from that shown in Figure A1—e.g., the dominant central star has ceased to exist. The network structure is now skeletal.

References

  1. Nicolis, G.; Nicolis, C. Foundations of Complex Systems. Emergence, Information and Prediction, 2nd ed.; World Science Publication: Singapore, 2012; ISBN 981-4366-60-9. [Google Scholar]
  2. Kwapień, J.; Drożdż, S. Physical approach to complex systems. Phys. Rep. 2012, 515, 115–226. [Google Scholar] [CrossRef]
  3. Dorogovtsev, S.N.; Goltsev, A.V. Critical phenomena in complex networks. Rev. Mod. Phys. 2008, 80, 1275–1335. [Google Scholar] [CrossRef] [Green Version]
  4. Albert, R.; Barabási, A.-L. Statistical mechanics of complex networks. Rev. Mod. Phys. 2002, 74, 47–97. [Google Scholar] [CrossRef] [Green Version]
  5. Pincus, S.M. Approximate entropy as a measure of system complexity. Proc. Natl. Acad. Sci. USA 1991, 88, 2297–2301. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  6. Dorogovtsev, S.N. Lectures on Complex Networks; Clarendon Press: Oxford, UK, 2010. [Google Scholar]
  7. Grassberger, P.; Procaccia, I. On the characterization of strange attractors. Phys. Rev. Lett. 1983, 50, 346–349. [Google Scholar] [CrossRef]
  8. Richman, J.S.; Moorman, J.R. Physiological time-series analysis using approximate entropy and sample entropy. Am. J. Physiol. Heart Circ. Physiol. 2000, 278, 2039–2049. [Google Scholar] [CrossRef] [Green Version]
  9. Prehl, J.; Boldt, F.; Essex, C.H.; Hoffmann, K.H. Time evolution of relative entropies for anomalous diffusion. Entropy 2013, 15, 2989–3006. [Google Scholar] [CrossRef]
  10. Thurner, S.; Hanel, R.; Klimek, P. Introduction to the Theory of Complex Systems; Oxford Univ. Press: Oxford, UK, 2018; ISBN 9780198821939. [Google Scholar]
  11. Popiel, N.J.M.; Khajehabdollahi, S.; Abeyasinghe, P.M.; Riganello, F.; Nichols, E.; Owen, A.M.; Soddu, A. The Emergence of Integrated Information, Complexity, and ‘Consciousness’ at Criticality. Entropy 2020, 22, 339. [Google Scholar] [CrossRef] [Green Version]
  12. Available online: https://en.wikipedia.org/wiki/Complexity (accessed on 5 August 2020).
  13. Thurner, S.; Hanel, R.; Gell-Mann, M. How multiplicity of random processes determines entropy and the derivation of the maximum entropy principle for complex systems. Proc. Natl. Acad. Sci. USA 2014, 111, 6905–6910. [Google Scholar]
  14. Grassberger, P. Toward a quantitative theory of self-generated complexity. Int. J. Theor. Phys. 1986, 25, 907–938. [Google Scholar] [CrossRef]
  15. Bialek, W.; Nemenmana, I.; Tishby, N. Complexity through nonextensivity. Phys. A 2001, 302, 89–99. [Google Scholar] [CrossRef] [Green Version]
  16. Prokopenko, M.; Boschetti, F.; Ryan, A.J. An information-theoretic primer on complexity, self-organization, and emergence. Complexity 2008, 15, 11–28. [Google Scholar] [CrossRef]
  17. Borda, M. Fundamentals in Information Theory and Coding; Springer: Berlin/Heidelberg, Germany, 2011. [Google Scholar]
  18. Crutchfield, J.P. Between order and chaos. Nat. Phys. 2012, 8, 17–24. [Google Scholar] [CrossRef]
  19. Baker, L.G.; Gollub, J.P. Chaotic Dynamics: An Introduction, 2nd ed.; Cambridge University Press: Cambridge, UK, 1996; Section 3. [Google Scholar]
  20. Schuster, H.G. Deterministic Chaos. An Introduction, 2nd ed.; VCH: Verlagsgesellschaft, Germany, 1988. [Google Scholar]
  21. Henkel, M.; Hinrichsen, H.; Lübeck, S. Non-Equilibrium Phase Transitions. Volume I: Absorbing Phase Transitions; Springer: Berlin/Heidelberg, Germany, 2008; Section 4.1.7; pp. 112–116. [Google Scholar]
  22. Wolfram, S. A New Kind of Science; Wolfram Media Inc.: Champaign, IL, USA, 2002. [Google Scholar]
  23. Gell-Mann, M. Plectics: The study of simplicity and complexity. Europhysics News 2002, 1, 17–20. [Google Scholar] [CrossRef]
  24. Gell-Mann, M.; Lloyd, S. Information measures, effective complexity and total information. Complexity 1996, 2, 44–52. [Google Scholar] [CrossRef]
  25. Gell-Mann, M.; Lloyd, S. Effective complexity. In Nonextensive Entropy: Interdisciplinary Applications; Gell-Mann, M., Tsallis, C., Eds.; Oxford University Press: New York, NY, USA, 2004; pp. 387–398. ISBN 0-19-515976-4. [Google Scholar]
  26. Ay, N.; Müller, M.; Szkoła, A. Effective complexity and its relation to logical depth. IEEE Trans. Inf. Theory 2010, 56, 4593–4607. [Google Scholar] [CrossRef] [Green Version]
  27. Gell-Mann, M. The Quark and the Jaguar: Adventures in the Simple and the Complex, 8th ed.; Freeman, W.H., Ed.; W.H. Freeman: New York, NY, USA, 1994; ISBN 0-7167-2725-0. [Google Scholar]
  28. Kutner, R.; Masoliver, J. The continuous time random walk still trendy: Fifty-year history, state of art, and outlook. Eur. Phys. J. 2017, 90, 50. [Google Scholar] [CrossRef] [Green Version]
  29. Sornette, D. Dragon-kings, black swans and the prediction of crises. Int. J. Terraspace Sci. Eng. 2009, 1, 1–17. [Google Scholar] [CrossRef]
  30. Sornette, D.; Ouillon, G. Dragon-kings: Mechanisms, statistical methods and empirical evidence. Eur. Phys. J. Spec. Top. 2012, 205, 1–26. [Google Scholar] [CrossRef] [Green Version]
  31. Wiliński, M.; Sienkiewicz, A.; Gubiec, T.; Kutner, R.; Struzik, Z.R. Structural and topological phase transition on the german stock exchange. Phys. A 2013, 392, 5963–5973. [Google Scholar] [CrossRef] [Green Version]
  32. Wiliński, M.; Szewczak, B.; Gubiec, T.; Kutner, R.; Struzik, Z.R. Temporal condensation and dynamic λ-transition within the complex network: An application to real-life market evolution. Eur. Phys. J. B 2015, 34, 1–15. [Google Scholar]
  33. Kozłowska, M.; Denys, M.; Wiliński, M.; Link, G.; Gubiec, T.; Werner, T.R.; Kutner, R.; Struzik, Z.R. Dynamic bifurcations on financial markets. Chaos Solitons Fractals 2016, 88, 126–142. [Google Scholar] [CrossRef]
  34. Jakimowicz, A. The role of entropy in the development of economics. Entropy 2020, 22, 452. [Google Scholar] [CrossRef]
  35. Jakimowicz, A. Fundamental sources of economic complexity. Int. J. Nonlinear Sci. Numer. 2016, 17, 1–13. [Google Scholar] [CrossRef]
  36. Rosser, J.B., Jr. Econophysics and economic complexity. Adv. Complex Syst. 2008, 11, 745–760. [Google Scholar] [CrossRef]
  37. Rosser, J.B., Jr. Entropy and econophysics. Eur. Phys. J. Spec. Top. 2016, 225, 3091–3104. [Google Scholar] [CrossRef]
  38. Zambelli, S.; George, D.A.R. Nonlinearity, Complexity, and Randomness in Economics: Towards Algorithmic Foundations for Economic; Wiley–Blackwell: Chichester, UK, 2012; ISBN 978-1-4443-5031-9. [Google Scholar]
  39. Kutner, R.; Ausloos, M.; Grech, D.; Di Matteo, T.; Schinckus, C.H.; Stanley, H.E. Econophysics and sociophysics: Their milestones & challenges. Physica A 2019, 516, 240–253. [Google Scholar]
  40. Gell-Mann, M. What is Complexity? Complexity 1995, 1, 16–19. [Google Scholar] [CrossRef]
  41. Bertin, E. A Concise Introduction to the Statistical Physics of Complex Systems; Springer Briefs in Complexity; Springer: Berlin/Heidelberg, Germany, 2012; pp. 33–38. ISBN 978-3-642-23922-9. [Google Scholar]
  42. Jaynes, E.T. Gibbs vs Boltzmann entropies. Am. J. Phys. 1965, 33, 391–398. [Google Scholar] [CrossRef]
  43. Beck, C.H.; Schlögl, F. Thermodynamics of Chaotic Systems. An Introduction; Cambridge Nonlinear Science Series 4; Cambridge University Press: Cambridge, UK, 1995; pp. 50–55. [Google Scholar]
  44. Tsallis, C. Possible generalization of Boltzmann-Gibbs statistics. J. Stat. Phys. 1988, 52, 479–487. [Google Scholar] [CrossRef]
  45. Van Kampen, N.G. Stochastic Processes in Physics and Chemistry, 3rd ed.; Elsevier: Amsterdam, The Netherlands, 2007; pp. 111–114. [Google Scholar]
  46. Bahcall, S. Loonshots: How to Nurture the Crazy Ideas that Win Wars, Cure Diseases, and Transform Industries; St. Martin’s Press: New York, NY, USA, 2019; ISBN 978-1-250-18596-9. [Google Scholar]
  47. West, G. Scale: The Universal Laws of Life, Growth, and Death in Organisms, Cities, and Companies; Penguin Books: London, UK, 2017; ISBN 978-1594205583. [Google Scholar]
  48. Dunbar, R.I.M. The social brain hypothesis. Evol. Anthropol. 1998, 6, 178–190. [Google Scholar] [CrossRef]
  49. Acharjee, S.; Bora, B.; Dunbar, R.I.M. On M-polynomials of dunbar graphs in social networks. Symmetry 2020, 12, 932. [Google Scholar] [CrossRef]
  50. Kubo, R.; Toda, M.; Hashitsume, N. Statistical Physics II. Nonequilibrium Statistical Mechanics; Springer: Tokyo, Japan, 1985. [Google Scholar]
  51. Li, P.; Liu, C.; Li, K.; Zheng, D.; Liu, C.; Hou, Y. Assessing the complexity of short-term heartbeat interval series by distribution entropy. Med. Biol. Eng. Comput. 2015, 53, 77–87. [Google Scholar] [CrossRef] [PubMed]
  52. Sornette, D. Critical Phenomena in Natural Sciences. Chaos, Fractals, Selforganization and Disorder: Concepts and Tools; Springer: Berlin/Heidelberg, Germany, 2000; ISBN 3-540-67462-4. [Google Scholar]
  53. Alder, B.J.; Wainwright, T.E. Decay of the velocity autocorrelation function. Phys. Rev. A 1970, 1, 18–21. [Google Scholar] [CrossRef]
  54. Bunde, A.; Havlin, S. (Eds.) Fractals and Disordered Systems, Second Revised and Enlarged Edition; Springer: Berlin/Heidelberg, Germany, 1996. [Google Scholar]
  55. Camargo, R.F.; Chiacchio, A.O.; Charnet, R.; de Oliveira, C.E. Solution of the fractional Langevin equation and the Mittag–Leffler functions. J. Math. Phys. 2009, 50, 063507. [Google Scholar] [CrossRef] [Green Version]
  56. Wehrl, A. General properties of entropy. Rev. Mod. Phys. 1978, 50, 221–260. [Google Scholar] [CrossRef]
Figure 1. Plots of the partial measure of complexity C X ( S ; m , n ) vs. S given by Equation (1) for four characteristic cases: (a) Case n = m = 1 where no inflection points, S i p are present. (b) Case m = 2 and n = 1 where a single inflection point S i p + is present. (c) Case m = 1 and n = 2 where a single inflection point S i p is present. (d) Case m = 2 and n = 2 where both inflection points are present. The shape of the curve, containing two inflection points, is typical for partial measures of complexity, characterized by exponents m , n 2 . Numbers 1–4 mark individual phases differing in the degree of order.
Figure 2. Schematic plot of the partial measure of complexity C X ( S ; m , n ) vs. S and t given by Equation (1). The red curve shows the dependence of entropy S on time t. The black curve represents C X ( S ( t ) ; m , n ) in three dimensions. The blue curve represents the projection of the black curve onto the ( S , C X ) plane. Different variants of this blue curve are presented in Figure 1. The non-monotonic dependence of the entropy on time visible here indicates the open nature of the system. However, this non-monotonicity is not visible through the blue curve. For instance, the three local maxima of the black curve collapse to a single maximum of the blue curve.
Figure 3. Plot of the partial entropic (non-equilibrium) susceptibility χ C X ( S ; m , n ) of the partial measure of complexity vs. S given by Equation (7) at fixed order ( m = 2 , n = 2 ). The finite susceptibility value at the S_ip^- and S_ip^+ phase transition points (cf. Figure 1) may be considered to correspond to the finite susceptibility value at the absorbing non-equilibrium phase transition in the directed percolation model at a critical point in the presence of an external field [21]. However, the situation presented here is richer, because the susceptibility changes its sign, smoothly passing through zero at S = S_CXmax. At this point, the system is exceptionally robust and, therefore, is poorly affected by data artefacts, because its susceptibility vanishes there.
Figure 4. Comparison of the partial measure of complexity C X ( S ; m = 2 , n = 2 ) given by Equation (1) and the full measure of complexity X ( S ; m 0 = 2 , n 0 = 2 ) given by Equation (17), for instance, for the symmetric case of m = n = m 0 = n 0 . In addition, we assume that S_min = 0, S_max = 8 and M = 10. Vertical dashed lines mark the locations of the inflection points of both curves: black for the C X curve and orange for the X curve, while S_CXmax = S_Xmax = 4. Notably, S_Xmax maximizes X (here at a given ratio Z / M = 0.8 ).
Figure 5. Plot of the full entropic susceptibility χ X ( S ; m 0 , n 0 ) of the full measure of complexity vs. S given by Equation (19), at arbitrary fixed order ( m 0 = 2 , n 0 = 2 ) . As expected from the comparison with Figure 3, the turning points of C X (cf. Figure 4) lie within the S interval bounded by inflection points of X.
Figure 6. Dependence of the universal full measure of complexity X vs. the number of entities N given by Equation (23). It should be emphasized that the full measure of complexity and its susceptibility have singularities at the same points. As one can see, we are dealing here with complexity barriers separating the phases/states of the system formed by a small and by a large number of objects, respectively. The parameters we adopted here are as follows: M = 30, s_min = 0, s_max = 2, s = 0.8, m_0 = n_0 = 2; hence, the points N_cr^max(s = 0.8) = 25 and N_cr^min(s = 0.8) = 37.5.
Figure 7. Dependence of the universal full measure of complexity X vs. the number of entities N and the specific entropy s given by Equation (23), for m_0, n_0 > 1. Notably, the full measure of complexity and its susceptibility have singularities at the same points N_cr^max(s) and N_cr^min(s). We are dealing here with complexity barriers separating the phases/states of the system formed by a small and by a large number of entities, respectively. The parameters we adopted here are as follows: M = 30, s_min = 0, s_max = 2, s = 0.8, m_0 = n_0 = 2. These are the same parameters that we used to construct the plain plot in Figure 6.
Figure 8. Dependence of L (= N_{L_{CXmax}}) vs. N. There are two solutions of Equation (30): one marked with blue circles and the other with orange ones. Above N ≈ 10^2, both dependencies are linear, which is particularly clearly confirmed in Figure 9. That is, on a log-log scale, their slopes equal 1. However, on a linear scale, the slopes of these straight lines equal 0.11 and 0.89, respectively. This is clearly shown in Figure 9. Only the solution with orange circles is realistic, because the chance that 89% of particles will pass in a finite time to the second part of the container (as indicated by the solution marked with blue circles) is negligibly small. The black solid tangent straight line indicates the reference case N_{L_{CXmax}} = N/2.
Figure 9. The slope of the linear dependence of L on N, plotted as a function of N. For N greater than 10^2, no N-dependence of this slope is observed. Both of the solutions (having L/N = 0.11 and L/N = 0.89) are mutually symmetric about the horizontal straight line L/N = 1/2, but we only consider the solution L/N = 0.89 to be realistic. The black horizontal solid straight line indicates the reference case N_{L_{CXmax}} = N/2.
Figure 10. Dependence of cx_max on N given by Equation (31). As one can see, cx_max is a non-extensive function: it reaches a plateau for N ≫ 1. For N ≳ 10^4 the plateau is achieved to a good approximation. This is an important issue for researching complexity. Namely, systems can attain complexity already on a mesoscopic scale. It can be said that the curve's inflection point (located near N = 10) marks the beginning of the complexity stabilization region.
