Article

Aspects of a Phase Transition in High-Dimensional Random Geometry

Axel Prüser, Imre Kondor and Andreas Engel
1 Institute of Physics, Carl von Ossietzky University of Oldenburg, D-26111 Oldenburg, Germany
2 Parmenides Foundation, 82049 Pullach, Germany
3 London Mathematical Laboratory, London W6 8RH, UK
4 Complexity Science Hub, 1080 Vienna, Austria
* Author to whom correspondence should be addressed.
Submission received: 10 May 2021 / Revised: 16 June 2021 / Accepted: 17 June 2021 / Published: 24 June 2021
(This article belongs to the Special Issue Three Risky Decades: A Time for Econophysics?)

Abstract
A phase transition in high-dimensional random geometry is analyzed as it arises in a variety of problems. A prominent example is the feasibility of a minimax problem that represents the extremal case of a class of financial risk measures, among them the current regulatory market risk measure Expected Shortfall. Others include portfolio optimization with a ban on short-selling, the storage capacity of the perceptron, the solvability of a set of linear equations with random coefficients, and competition for resources in an ecological system. These examples shed light on various aspects of the underlying geometric phase transition, create links between problems belonging to seemingly distant fields, and offer the possibility for further ramifications.
PACS:
05.20.-y; 05.40.-a; 05.70.Fh; 87.23.Ge

1. Introduction

A large class of problems in random geometry is concerned with the collocation of points in high-dimensional space. Applications range from the optimization of financial portfolios [1], binary classifications of data strings [2] and optimal strategies in game theory [3] to the existence of non-negative solutions to systems of linear equations [4,5], the emergence of cooperation in competitive ecosystems [6,7], and linear programming with random parameters [8]. It is frequently relevant to consider the case where both the number of points T and the dimension of space N tend to infinity. This limit is often characterized by abrupt qualitative changes reminiscent of phase transitions when an external parameter or the ratio T/N varies and crosses a critical value. At the same time, this high-dimensional case is amenable to methods from the statistical mechanics of disordered systems, which offer additional insight.
Some results obtained in different disciplines are closely related to each other without the connection always being appreciated. In the present paper, we discuss some particular cases. We will show that the boundedness of the expected maximal loss, as well as the possibility of zero variance of a random financial portfolio, is closely related to the existence of a linearly separable binary coloring of random points called a dichotomy. Moreover, we point out the connection with the existence of non-negative solutions to systems of linear equations and with mixed strategies in zero-sum games. On a more technical level and for the above-mentioned limit of large instances in high-dimensional spaces, we also make contact between replica calculations performed for different problems in different fields.
In addition to uncovering the common random geometrical background of seemingly very different problems, our comparative analysis sheds light on each of them from various angles and points to ramifications in their respective fields.

2. Dichotomies of Random Points

Consider an N-dimensional Euclidean space with a fixed coordinate system. Choose T points in this space and color them either black or white. The coloring is called a dichotomy if a hyperplane through the origin of the coordinate system exists that separates black points from white ones, see Figure 1.
To avoid special arrangements like all points falling on one line, the points are required to be in what is called a general position: the position vectors of any subset of N points should be linearly independent. Under this rather mild prerequisite, the number C(T,N) of dichotomies of T points in N dimensions only depends on T and N and not on the particular location of the points. This remarkable result was proven in several works, among them a classical paper by Cover [2]. By establishing a recursion relation for C(T,N), the explicit result was derived:
$$C(T,N) = 2\sum_{i=0}^{N-1}\binom{T-1}{i}. \qquad (1)$$
If the coordinates of the points are chosen at random from a continuous distribution, the points are in a general position with probability one. Since there are in total $2^T$ different binary colorings of these points and only C(T,N) of them are dichotomies, we find that the probability for T random points in N dimensions with random coloring to form a dichotomy is given by the cumulative binomial distribution:
$$P_d(T,N) = \frac{C(T,N)}{2^T} = \frac{1}{2^{T-1}}\sum_{i=0}^{N-1}\binom{T-1}{i}. \qquad (2)$$
Hence, $P_d(T,N) = 1$ for $T \le N$, $P_d(T,N) = 1/2$ for $T = 2N$, and $P_d(T,N) \to 0$ for $T \to \infty$. The transition from $P \approx 1$ at $T = N$ to $P \approx 0$ at large $T$ becomes sharper with increasing $N$. This is clearly seen when considering the case of constant ratio
$$\alpha := \frac{T}{N} \qquad (3)$$
between the number of points and the dimension of space for different values of $N$, which shows an abrupt transition at $\alpha_c = 2$ for $N \to \infty$, cf. Figure 2.
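The finite-size rounding of this transition is easy to reproduce numerically. The short Python sketch below simply evaluates (2) with math.comb; the helper name p_dichotomy is ours, not from the original paper.

```python
from math import comb

def p_dichotomy(T, N):
    """Cover's probability (2): P_d(T, N) = 2^(1-T) * sum_{i<N} C(T-1, i)."""
    return sum(comb(T - 1, i) for i in range(N)) / 2 ** (T - 1)

# The step at alpha_c = 2 sharpens as N grows.
for N in (5, 20, 100):
    probs = [round(p_dichotomy(int(a * N), N), 3) for a in (1.0, 1.5, 2.0, 2.5, 3.0)]
    print(f"N = {N:3d}: P_d at alpha = 1.0, 1.5, 2.0, 2.5, 3.0 -> {probs}")
```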
For later convenience, it is useful to reformulate the condition for a certain coloring to be a dichotomy in different ways. Let us denote the position vector of point $t$, $t = 1,\dots,T$, by $\xi_t \in \mathbb{R}^N$ and its coloring by the binary variable $\zeta_t = \pm 1$. If a separating hyperplane exists, it has a normal vector $w \in \mathbb{R}^N$ that fulfills
$$\zeta_t = \mathrm{sign}(w \cdot \xi_t), \qquad t = 1,\dots,T, \qquad (4)$$
where we define $\mathrm{sign}(x) = 1$ for $x \ge 0$ and $\mathrm{sign}(x) = -1$ otherwise. With the abbreviation
$$r_t := \zeta_t\,\xi_t, \qquad (5)$$
Equation (4) translates into $w \cdot r_t \ge 0$ for all $t = 1,\dots,T$, which, for points in a general position, is equivalent to the somewhat stronger condition
$$w \cdot r_t > 0, \qquad t = 1,\dots,T. \qquad (6)$$
A certain coloring $\zeta_t$ of points $\xi_t$ is hence a dichotomy if a vector $w$ exists such that (6) is fulfilled, that is, if its scalar product with all vectors $r_t$ is positive. This is quite intuitive, since by going from the vectors $\xi_t$ to $r_t$ according to (5), we replace all points colored black by their white-colored mirror images (or vice versa). If we started out with a dichotomy, after the transformation all points will lie on the same side of the separating hyperplane. The meaning of Equation (6) is clear: for $T$ random points in $N$ dimensions with coordinates chosen independently from a symmetric distribution, there exists with probability $P_d(T,N)$ a hyperplane such that all these points lie on the same side of it. This formulation will be crucial in Section 3 to relate dichotomies to bounded cones characterizing financial portfolios.
Singling out one particular point $s \in \{1,\dots,T\}$, this in turn implies that there is, for any choice of $s$, a vector $w$ with
$$w \cdot r_t > 0, \quad t = 1,\dots,T,\ t \ne s \qquad \text{and} \qquad w \cdot (-r_s) < 0. \qquad (7)$$
Consider now all vectors $\bar r$ of the form
$$\bar r = \sum_{t \ne s} c_t\, r_t, \qquad \text{with } c_t \ge 0,\ t = 1,\dots,T,\ t \ne s, \qquad (8)$$
that is, all vectors that may be written as a linear combination of the $r_t$ with $t \ne s$ and all expansion parameters $c_t$ being non-negative. The set of these vectors $\bar r$ is called the non-negative cone of the $r_t$, $t \ne s$. Equation (7) then means that $-r_s$ cannot be an element of this non-negative cone. This is clear since the hyperplane perpendicular to $w$ separates $-r_s$ from this very cone, an observation that is known as Farkas’ lemma [9]. Therefore, if a set of vectors $r_t$ forms a dichotomy, no mirror image $-r_s$ of any of them may be written as a linear combination of the remaining ones with non-negative expansion coefficients:
$$\sum_{t \ne s} c_t\, r_t \ne -r_s, \qquad c_t \ge 0. \qquad (9)$$
Finally, adding $r_s$ to both sides of (9), we find
$$\sum_{t} c_t\, r_t \ne o, \qquad \text{with } c_t \ge 0,\ t = 1,\dots,T, \text{ and } \sum_t c_t > 0, \qquad (10)$$
where o denotes the null vector in N dimensions. Given T points r t in N dimensions forming a dichotomy, it is therefore impossible to find a nontrivial linear combination of these vectors with non-negative coefficients that equals the null vector.
Additionally, this corollary to the Cover result is easy to understand intuitively. Assume there were some coefficients $c_t \ge 0$ that were not all zero at the same time, and that realize
$$\sum_t c_t\, r_t = o. \qquad (11)$$
If the points r t form a dichotomy, then according to (6), there is a vector w that makes a positive scalar product with all of them. Multiplying (11) with this vector, we immediately arrive at a contradiction, since the l.h.s. of this equation is positive and the r.h.s. is zero.
Note that the converse of (10) is also true: if the points do not form a dichotomy, a decomposition of the null vector of the type (11) can always be found. This is related to the fact that the non-negative cone of the corresponding position vectors is then the complete $\mathbb{R}^N$. For if there were a vector $b \in \mathbb{R}^N$ not lying in this cone, then by Farkas’ lemma there would be a hyperplane separating the cone from $b$. However, the very existence of this hyperplane would qualify the points $r_t$ to be a dichotomy, in contradiction to what was assumed.
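Condition (6), or equivalently the Farkas alternative (10), can be tested directly on random data: by scale invariance, a separating vector $w$ exists exactly when the linear program asking for $w \cdot r_t \ge 1$ for all $t$ is feasible. A minimal sketch, assuming numpy and scipy are available; is_dichotomy is a name we introduce here.

```python
import numpy as np
from math import comb
from scipy.optimize import linprog

rng = np.random.default_rng(1)

def is_dichotomy(xi, zeta):
    """Feasibility of (6): is there a w with w.(zeta_t xi_t) > 0 for all t?
    Scale invariance lets us demand a margin of 1, giving a plain LP."""
    r = zeta[:, None] * xi                          # rows are the vectors r_t
    res = linprog(c=np.zeros(xi.shape[1]), A_ub=-r, b_ub=-np.ones(len(zeta)),
                  bounds=[(None, None)] * xi.shape[1], method="highs")
    return res.status == 0                          # 0 = a feasible point was found

N, trials = 20, 200
for alpha in (1.5, 2.0, 2.5):
    T = int(alpha * N)
    hits = sum(is_dichotomy(rng.standard_normal((T, N)),
                            rng.choice([-1.0, 1.0], size=T)) for _ in range(trials))
    exact = sum(comb(T - 1, i) for i in range(N)) / 2 ** (T - 1)
    print(f"alpha = {alpha}: simulated {hits / trials:.3f}, Equation (2) gives {exact:.3f}")
```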
In the limit $N, T \to \infty$ with $\alpha = T/N$ kept constant, the problem of random dichotomies can be investigated within statistical mechanics. To make this connection explicit, we first note that no inequality in (6) is altered if $w$ is multiplied by a positive constant. To decide whether an appropriate vector $w$ fulfilling (6) may be found or not, it is hence sufficient to study vectors of a given length. It is convenient to choose this length as $\sqrt{N}$, requiring
$$\sum_{i=1}^N w_i^2 = N. \qquad (12)$$
Next, we introduce for each realization of the random vectors r t an energy function
$$E(w) := \sum_{t=1}^{T} \Theta\Big(-\sum_i w_i r_i^t\Big), \qquad (13)$$
where $\Theta(x) = 1$ if $x > 0$ and $\Theta(x) = 0$ otherwise is the Heaviside step function. This energy is nothing but the number of points violating (6) for a given vector $w$. Our central quantity of interest is the entropy of the ground state of the system, that is, the logarithm of the fraction of points on the sphere defined by (12) that realize zero energy:
$$S(\kappa,\alpha) := \lim_{N\to\infty}\frac{1}{N}\,\ln\frac{\displaystyle\int\prod_{i=1}^N dw_i\,\delta\Big(\sum_i w_i^2 - N\Big)\prod_{t=1}^{\alpha N}\Theta\Big(\sum_i w_i r_i^t - \kappa\Big)}{\displaystyle\int\prod_{i=1}^N dw_i\,\delta\Big(\sum_i w_i^2 - N\Big)}. \qquad (14)$$
Here, δ ( x ) denotes the Dirac δ -function, and we have introduced the positive stability parameter κ to additionally sharpen the inequalities (6).
The main problem in the explicit determination of $S(\kappa,\alpha)$ is its dependence on the many random parameters $r_i^t$. Luckily, for large values of $N$, deviations of $S$ from its typical value $S_{\mathrm{typ}}$ become extremely rare and, moreover, this typical value is given by the average over the realizations of the $r_i^t$:
$$S_{\mathrm{typ}}(\kappa,\alpha) = \langle S(\kappa,\alpha)\rangle. \qquad (15)$$
This average was determined in a classical calculation [10], with the result
$$S_{\mathrm{typ}}(\kappa,\alpha) = \operatorname*{extr}_{q}\left\{\frac{1}{2}\ln(1-q) + \frac{q}{2(1-q)} + \alpha\int Dt\,\ln H\!\left(\frac{\kappa - \sqrt{q}\,t}{\sqrt{1-q}}\right)\right\}, \qquad (16)$$
where the extremum is over the auxiliary quantity q, and we have used the shorthand notations
$$Dt := \frac{dt}{\sqrt{2\pi}}\, e^{-t^2/2} \qquad \text{and} \qquad H(x) := \int_x^\infty Dt. \qquad (17)$$
More details of the calculation may be found in the original reference, and in chapter 6 of [11]. Appendix A contains some intermediate steps for a closely related analysis.
Studying the limit $q \to 1$ of (16) reveals
$$S_{\mathrm{typ}}(\kappa,\alpha)\ \begin{cases} > -\infty & \text{if } \alpha < \alpha_c(\kappa), \\ \to -\infty & \text{if } \alpha > \alpha_c(\kappa), \end{cases} \qquad (18)$$
corresponding to a sharp transition from solvability to non-solvability at a critical value $\alpha_c(\kappa)$. For $\kappa = 0$, one finds $\alpha_c = 2$ in agreement with (2), cf. Figure 2.
Note that Cover’s result (2) holds for all values of T and N, whereas the statistical mechanics analysis is restricted to the thermodynamic limit N . On the other hand, the latter can deal with all values of the stability parameter κ , whereas no generalization of Cover’s approach to the case κ 0 is known.
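For readers who want explicit numbers for $\alpha_c(\kappa)$: the $q \to 1$ analysis of (16) leads to the well-known Gardner formula $\alpha_c(\kappa) = \big[\int_{-\kappa}^{\infty} Dt\,(t+\kappa)^2\big]^{-1}$ [10]. Since the text does not spell this formula out, the following numerical sketch (using scipy.integrate.quad) should be read as an illustration under that assumption.

```python
import numpy as np
from scipy.integrate import quad

def alpha_c(kappa):
    """Gardner capacity: alpha_c(kappa) = 1 / int_{-kappa}^{inf} Dt (t + kappa)^2,
    with Dt the Gaussian measure defined in (17)."""
    integral, _ = quad(lambda t: np.exp(-t**2 / 2) / np.sqrt(2 * np.pi) * (t + kappa)**2,
                       -kappa, np.inf)
    return 1.0 / integral

for kappa in (0.0, 0.5, 1.0, 2.0):
    print(f"kappa = {kappa}: alpha_c = {alpha_c(kappa):.4f}")   # kappa = 0 gives 2.0
```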

3. Phase Transitions in Portfolio Optimization under the Variance and the Maximal Loss Risk Measure

3.1. Risk Measures

The purpose of this subsection is to indicate the financial context, in which the geometric problem discussed in this paper appears. A portfolio is the weighted sum of financial assets. The weights represent the parts of the total wealth invested in the various assets. Some of the weights are allowed to be negative (short positions), but the weights sum to 1; this is called the budget constraint. Investment carries risk, and higher returns usually carry higher risk. Portfolio optimization seeks a trade-off between risk and return by the appropriate choice of the portfolio weights. Markowitz was the first to formulate the portfolio choice as a risk-reward problem [12]. Reward is normally regarded as the expected return on the portfolio. Assuming return fluctuations to be Gaussian-distributed random variables, portfolio variance offered itself as the natural risk measure. This setup made the optimization of portfolios a quadratic programming problem, which, especially in the case of large institutional portfolios, posed a serious numerical difficulty in its time. Another critical point concerning variance as a risk measure was that variance is symmetric in gains and losses, whereas investors are believed not to be afraid of big gains, only big losses. This consideration led to the introduction of downside risk measures, starting already with the semivariance [13]. Later it was recognized that the Gaussian assumption was not realistic, and alternative risk measures were sought to grasp the risk of rare but large events, and also to allow risk to be aggregated across the ever-increasing and increasingly heterogeneous institutional portfolios. Around the end of the 1980s, Value at Risk (VaR) was introduced by JP Morgan [14], and subsequently it was widely spread over the industry by their RiskMetrics methodology [15]. VaR is a high quantile, a downside risk measure (note that in the literature, the profit and loss axis is often reflected, so that losses are assigned a positive sign. It is under this convention that VaR is a high quantile, rather than a low one). It soon came under academic criticism for its insensitivity to the details of the distribution beyond the quantile, and for its lack of sub-additivity. Expected Shortfall (ES), the average loss above the VaR quantile, appeared around the turn of the century [16]. An axiomatic approach to risk measures was proposed by Artzner et al. [17] who introduced a set of postulates which any coherent risk measure was required to satisfy. ES turned out to be coherent [18,19] and was strongly advocated by academics. After a long debate, international regulation embraced it as the official risk measure in 2016 [20].
The various risk measures discussed all involved averages. Since the distributions of financial data are not known, the relative price movements of assets are observed at a number T of time points, and the true averages are replaced by empirical averages from these data. This works well if T is sufficiently large; however, in addition to all the aforementioned problems, a general difficulty of portfolio optimization lies in the fact that the dimension N of institutional portfolios (the number of different assets) is large, but the number T of observed data per asset is never large enough, due to lack of stationarity of the time series and the natural limits (transaction costs, technical difficulties of rebalancing) on the sampling frequency. Therefore, portfolio optimization in large dimensions suffers from a high degree of estimation error, which renders the exercise more or less illusory (see e.g., [21]). Estimation of returns is even more error-prone than the risk part, so several authors disregard the return completely, and seek the minimum risk portfolio (e.g., [22,23,24]). We follow the same approach here.
In the two subsections that follow, we also assume that the returns are independent, symmetrically distributed random variables. This is, of course, not meant to be a realistic market model, but it allows us to make an explicit connection between the optimization of the portfolio variance under a constraint excluding short positions and the geometric problem of dichotomies discussed in Section 2. This is all the more noteworthy because analytic results are notoriously scarce for portfolio optimization with no short positions. We note that similar simplifying assumptions (Gaussian fluctuations, independence) were built into the original JP Morgan methodology, which was industry standard in its time, and influences the thinking of practitioners even today.

3.2. Vanishing of the Estimated Variance

We consider a portfolio of $N$ assets with weights $w_i$, $i = 1,\dots,N$. The observations $r_i^t$ of the corresponding returns at various times $t = 1,\dots,T$ are assumed to be independent, symmetrically distributed random variables. Correspondingly, the average value of the portfolio is zero. Its variance is given by
$$\sigma_p^2 = \frac{1}{T}\sum_t\Big(\sum_i w_i r_i^t\Big)^2 = \sum_{i,j} w_i w_j\,\frac{1}{T}\sum_t r_i^t r_j^t =: \sum_{i,j} w_i w_j\, C_{ij}, \qquad (19)$$
where C i j denotes the covariance matrix of the observations. Note that the variance of a portfolio optimized in a given sample depends on the sample, so it is itself a random variable.
The variance of a portfolio obviously vanishes if the returns are fixed quantities that do not fluctuate. This subsection is not about such a trivial case. We shall see, however, that the variance optimized under a no-short constraint can vanish with a certain probability if the dimension N is larger than the number of observations T.
The rank of the covariance matrix is the smaller of $N$ and $T$, and for $N \le T$ the estimated variance is positive with probability one. Thus, the optimization of variance can always be carried out as long as the number of observations $T$ is larger than the dimension $N$, albeit with an increasingly larger error as $T/N$ decreases. For large $N$ and $T$ and fixed $\alpha = T/N$, the estimation error increases as $\alpha/(\alpha-1)$ with decreasing $\alpha$ and diverges at $\alpha \to 1$ [25,26]. The divergence of the estimation error can be regarded as a phase transition. Below the critical value $\alpha_d := 1$, the optimization of variance becomes impossible. Of course, in practice, one never has such an optimization task without some additional constraints. Note that because of the possibility of short-selling (negative portfolio weights), the budget constraint (a hyperplane) in itself is not sufficient to forbid the appearance of large positive and negative positions, which then destabilize the optimization. In contrast, any constraint that makes the allowed weights finite can act as a regularizer. The usual regularizers are constraints on the norm of the portfolio vector. It was shown in [27,28] how liquidity considerations naturally lead to regularization. Ridge regression (a constraint on the $\ell_2$ norm of the portfolio vector) prevents the covariance matrix from developing zero eigenvalues, and, especially in its nonlinear form [29], results in very satisfactory out-of-sample performance.
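A few lines of simulation illustrate the $\alpha/(\alpha-1)$ growth of the unregularized estimation error. The sketch below assumes iid standard normal returns, so the true covariance is the identity and the true minimum variance is $1/N$; the ratio reported is the true variance of the sample-optimized portfolio relative to this optimum, which approaches $\alpha/(\alpha-1)$ for large $N$.

```python
import numpy as np

rng = np.random.default_rng(0)
N, samples = 50, 200
for alpha in (1.5, 2.0, 3.0, 5.0, 10.0):
    T = int(alpha * N)
    ratios = []
    for _ in range(samples):
        r = rng.standard_normal((T, N))          # returns with true covariance = identity
        C = r.T @ r / T                          # sample covariance matrix as in (19)
        w = np.linalg.solve(C, np.ones(N))
        w /= w.sum()                             # sample-optimal minimum-variance weights
        ratios.append(N * w @ w)                 # true variance relative to the optimum 1/N
    print(f"alpha = {alpha:5.1f}: simulated {np.mean(ratios):.2f}, "
          f"alpha/(alpha-1) = {alpha / (alpha - 1):.2f}")
```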
An alternative is the $\ell_1$ regularizer, of which the exclusion of short positions is a special case. Together with the budget constraint, it prevents large sample fluctuations of the weights. Let us then impose the no-short ban, as it is indeed imposed in practice on a number of special portfolios (e.g., on pension funds), or, in episodes of crisis, on the whole industry. The ban on short-selling extends the region where the variance can be optimized, but below α = 1 the optimization acquires a probabilistic character in that the regularized variance vanishes with a certain probability, and the optimization can only be carried out when it is positive. (Otherwise, there is a continuum of solutions, namely any combination of the eigenvectors belonging to zero eigenvalues, which makes the optimized variance zero.)
Interestingly, the probability of the variance vanishing is related to the problem of random dichotomies in the following way. For the portfolio variance (19) to become zero, we need to have
$$\sum_i w_i r_i^t = 0 \qquad (20)$$
for all $t$. If we interchange $t$ and $i$, we see that according to (11), this is possible as long as the $N$ points in $\mathbb{R}^T$ with position vectors $r_i := \{r_i^t\}_t$ do not form a dichotomy. Hence, the probability for zero variance is, from (2),
$$P_{\mathrm{zv}}(T,N) = 1 - P_d(N,T) = 1 - \frac{1}{2^{N-1}}\sum_{i=0}^{T-1}\binom{N-1}{i} = \frac{1}{2^{N-1}}\sum_{i=T}^{N-1}\binom{N-1}{i}. \qquad (21)$$
Therefore, the probability of the variance vanishing is almost 1 for small α , decreases to the value 1/2 at α = 1 / 2 , decreases further to 0 as α increases to 1, and remains identically zero for α > 1 [30,31]. This is similar but also somewhat complementary to the curve shown in Figure 2. Equation (21) for the vanishing of the variance was first written up in [30,31] on the basis of analogy with the minimax problem to be considered below, and it was also verified by extended numerical simulations. The above link to the Cover problem is a new result, and it is rewarding to see how a geometric proof establishes a bridge between the two problems.
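Equation (21) is easy to check by brute force: for a given sample, the in-sample variance can be driven to zero exactly when the feasibility problem below (non-negative weights, unit budget, zero portfolio return at every observation) has a solution. A sketch assuming numpy/scipy; zero_variance_possible is our naming.

```python
import numpy as np
from math import comb
from scipy.optimize import linprog

rng = np.random.default_rng(6)

def zero_variance_possible(r):
    """Is there a w >= 0 with sum_i w_i = 1 and sum_i w_i r_i^t = 0 for all t?
    Any feasible point makes the in-sample variance (19) vanish."""
    T, N = r.shape
    A_eq = np.vstack([r, np.ones(N)])
    b_eq = np.concatenate([np.zeros(T), [1.0]])
    res = linprog(c=np.zeros(N), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * N, method="highs")
    return res.status == 0

N, trials = 40, 200
for alpha in (0.25, 0.5, 0.75):
    T = int(alpha * N)
    freq = sum(zero_variance_possible(rng.standard_normal((T, N)))
               for _ in range(trials)) / trials
    exact = 1 - sum(comb(N - 1, i) for i in range(T)) / 2 ** (N - 1)
    print(f"alpha = {alpha}: simulated {freq:.3f}, Equation (21) gives {exact:.3f}")
```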
In [30,31], an intriguing analogy with, for example, the condensed phase of an ideal Bose gas was pointed out. The analogous features are the vanishing of the chemical potential in the Bose gas, resp. the vanishing of the Lagrange multiplier enforcing the budget constraint in the portfolio problem; the onset of Bose condensation, resp. the appearance of zero weights (“condensation” of the solutions on the coordinate planes) due to the no-short constraint; the divergence of the transverse susceptibility, and the emergence of zero modes in both models.

3.3. The Maximal Loss

The introduction of the Maximal Loss (ML) or minimax risk measure by Young [32] in 1998 was motivated by numerical expediency. In contrast to the variance whose optimization demands a quadratic program, ML is constructed such that it can be optimized by linear programming, which could be performed very efficiently even on large datasets already at the end of the last century. Maximal Loss combines the worst outcomes of each asset and seeks the best combination of them. This may seem to be an over-pessimistic risk measure, but there are occasions when considering the worst outcomes is justifiable (think of an insurance portfolio in the time of climate change), and, as will be seen, the present regulatory market risk measure is not very far from ML.
Omitting the portfolio’s return again and focusing on the risk part, the maximal loss of a portfolio is given by
$$\mathrm{ML} := \min_{w}\,\max_{1 \le t \le T}\Big(-\sum_i w_i r_i^t\Big) \qquad (22)$$
with the constraint
$$\sum_i w_i = N. \qquad (23)$$
We are interested in the probability $P_{\mathrm{ML}}(T,N)$ that this minimax problem becomes infeasible, that is, that ML diverges to $-\infty$. To this end, we first eliminate the constraint (23) by putting
$$w_N = N - \sum_{i=1}^{N-1} w_i. \qquad (24)$$
This results in
$$\mathrm{ML} = \min_{\tilde w}\,\max_{1 \le t \le T}\Big(-\sum_{i=1}^{N-1} w_i\,(r_i^t - r_N^t) - N r_N^t\Big) =: \min_{\tilde w}\,\max_{1 \le t \le T}\Big(-\sum_{i=1}^{N-1} w_i\,\tilde r_i^t - N r_N^t\Big) \qquad (25)$$
with $\tilde w := \{w_1,\dots,w_{N-1}\} \in \mathbb{R}^{N-1}$ and $\tilde r^t := \{r_1^t - r_N^t,\dots,r_{N-1}^t - r_N^t\} \in \mathbb{R}^{N-1}$. For ML to stay finite for all choices of $\tilde w$, the $T$ random hyperplanes with normal vectors $\tilde r^t$ have to form a bounded cone. If the points $\tilde r^t$ form a dichotomy, then according to (6), there is a vector $W \in \mathbb{R}^{N-1}$ with $W \cdot \tilde r^t > 0$ for all $t$. Since there is no constraint on the norm of $\tilde w$, the maximal loss (25) can become arbitrarily small for $\tilde w = \lambda W$ and $\lambda \to \infty$. The cone then is not bounded. We therefore find
$$P_{\mathrm{ML}}(T,N) = P_d(T,N-1) = \frac{1}{2^{T-1}}\sum_{i=0}^{N-2}\binom{T-1}{i} \qquad (26)$$
for the probability that ML cannot be optimized.
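The minimax problem (22)–(23) is a small linear program, so this statement can be checked by sampling: an unbounded LP signals that ML runs away to $-\infty$. The following sketch assumes scipy's HiGHS backend and iid standard normal returns; ml_unbounded is a name introduced here.

```python
import numpy as np
from math import comb
from scipy.optimize import linprog

rng = np.random.default_rng(2)

def ml_unbounded(r):
    """Minimize kappa subject to -sum_i w_i r_i^t <= kappa for all t and
    sum_i w_i = N, with unrestricted weights; LP status 3 means the objective
    is unbounded below, i.e., ML = -infinity."""
    T, N = r.shape
    c = np.concatenate([np.zeros(N), [1.0]])             # minimize kappa
    A_ub = np.hstack([-r, -np.ones((T, 1))])              # -r w - kappa <= 0
    A_eq = np.concatenate([np.ones(N), [0.0]])[None, :]   # budget constraint (23)
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(T), A_eq=A_eq, b_eq=[float(N)],
                  bounds=[(None, None)] * (N + 1), method="highs")
    return res.status == 3

N, trials = 20, 200
for alpha in (1.5, 2.0, 2.5):
    T = int(alpha * N)
    freq = sum(ml_unbounded(rng.standard_normal((T, N))) for _ in range(trials)) / trials
    exact = sum(comb(T - 1, i) for i in range(N - 1)) / 2 ** (T - 1)
    print(f"alpha = {alpha}: simulated {freq:.3f}, P_d(T, N-1) = {exact:.3f}")
```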
In the limit $N, T \to \infty$ with $\alpha = T/N$ kept finite, (25) displays the same abrupt change as in the problem of dichotomies, a phase transition at $\alpha_c = 2$. Note that this is larger than the critical point $\alpha_d = 1$ of the unregularized variance, which is quite natural, since ML uses only the extremal values in the data set. The probability for the feasibility of ML was first written up without proof in [1], where a comparative study of the noise sensitivity of four risk measures, including ML, was performed. There are two important remarks we can make at this point. First, the geometric consideration above does not require any assumption about the data generating process; as long as the returns are independent, they can be drawn from any symmetric distribution without changing the value of the critical point. This is a special case of the universality of critical points discovered by Donoho and Tanner [33].
The second remark is that the problem of bounded cones is closely related to that of bounded polytopes [34]. The difference is just the additional dimension of the ML itself. If the random hyperplanes perpendicular to the vectors $\tilde r^t$ form a bounded cone for ML according to (25), then they will trace out a bounded polytope on hyperplanes perpendicular to the ML axis at sufficiently high values of ML. In fact, after the replacement $N-1 \to N$, Equation (26) coincides with the result in Theorem 4 of [34] for the probability of $T$ random hyperplanes forming a bounded polytope in $N$ dimensions (there is a typo in Theorem 4 in [34]; the summation has to start at $i = 0$). The close relationship between the ML problem and the bounded polytope problem, on the one hand, and the Cover problem on the other hand, was apparently not clarified before.
If we spell out the financial meaning of the above result, we are led to interesting ramifications. To gain an intuition, let us consider just two assets, N = 2. If asset 1 produces a return sometimes above, sometimes below that of asset 2, then the minimax problem will have a finite solution. If, however, asset 1 dominates asset 2 (i.e., yields a return which is at least as large, and, at least at one time point, larger, than the return on asset 2 in a given sample), then, with unlimited short positions allowed, the investor will be induced to take an arbitrarily large long position in asset 1 and go correspondingly short in asset 2. This means that the solution of the minimax problem will run away to infinity, and the risk of ML will be equal to minus infinity [1]. The generalization to N assets is immediate: if among the assets there is one that dominates the rest, or there is a combination of assets that dominates some of the rest, the solution will run away to infinity, and ML will take the value $-\infty$. This scenario corresponds to an arbitrage, and the investor gains an arbitrarily large profit without risk [35]. Of course, if such a dominance is realized in one given sample, it may disappear in the next time interval, or the dominance relations can rearrange to display another mirage of an arbitrage.
Clearly, the ML risk measure is unstable against these fluctuations. In practice, such a brutal instability can never be observed, because there are always some constraints on the short positions, or groups of assets corresponding to branches of industries, geographic regions, and so forth. These constraints will prevent instabilities from taking place, and the solution cannot run away to infinity, but will go as far as allowed by the constraints and then stick to the boundary of the allowed region. Note, however, that in such a case, the solution will be determined more by the constraints (and ultimately by the risk manager imposing the constraints) rather than by the structure of the market. In addition, in the next period, a different configuration can be realized, so the solution will jump around on the boundary defined by the constraints.
We may illustrate the role of short positions for the instability of ML further by investigating the case of portfolio weights $w_i$ that have to be larger than a threshold $\gamma \le 0$. For $\gamma \to -\infty$, there are no restrictions on short positions, whereas $\gamma = 0$ corresponds to a complete ban on them. For $N, T \to \infty$ with fixed $\alpha = T/N$, the problem may be solved within the framework of statistical mechanics. The minimax problem for ML is equivalent to the following problem in linear programming: minimize the threshold variable $\kappa$ under the constraints (23), $w_i \ge \gamma$, and
$$\sum_i w_i r_i^t \ge -\kappa, \qquad t = 1,\dots,T. \qquad (27)$$
Similarly to (14), the central quantity of interest is
$$\Omega(\kappa,\gamma,\alpha) = \frac{\displaystyle\int_\gamma^\infty\prod_{i=1}^N dw_i\,\delta\Big(\sum_i w_i - N\Big)\prod_{t=1}^{\alpha N}\Theta\Big(\sum_i w_i r_i^t + \kappa\Big)}{\displaystyle\int_\gamma^\infty\prod_{i=1}^N dw_i\,\delta\Big(\sum_i w_i - N\Big)}, \qquad (28)$$
giving the fractional volume of points on the simplex defined by (23) that fulfill all constraints (27). For given α and γ , we decrease κ down to the point κ c , where the typical value of this fractional volume vanishes. The ML is then given by κ c ( α , γ ) .
Some details of the corresponding calculations are given in Appendix A. In Figure 3, we show some results. As discussed above, the divergence of ML for $\alpha < 2$ is indeed formally eliminated for all $\gamma > -\infty$, and the functions $\mathrm{ML}(\alpha;\gamma)$ smoothly interpolate between the cases $\gamma = 0$ and $\gamma \to -\infty$. However, the situation is now even more dangerous, since the unreliability of ML as a risk measure for small $\alpha$ remains without being deducible from its divergence.
The recognition of the instability of ML as a dominance problem has proved very fruitful and led to a series of generalizations. First, it was realized [1] that the instability of the expected shortfall, of which ML is an extreme special case, has a very similar geometric origin. (The current regulatory ES is the expected loss above a 97.5% quantile, whereas ML corresponds to 100%.) Both ES and ML are so-called coherent risk measures [17], and it was proved [35] that the root of this instability lies in the coherence axioms themselves, so every coherent risk measure suffers from a similar instability. Furthermore, it was proved [35] that the existence of a dominant/dominated pair of assets in the portfolio was a necessary and sufficient condition for the instability of ML, whereas it was only sufficient for other coherent risk measures. It follows that in terms of the variable α used in this paper (which is the reciprocal of the aspect ratio N / T used in some earlier works, such as [35,36,37]), the critical point of ML is a lower bound for the critical points of other coherent measures. Indeed, the critical line of ES was found to lie above the ML critical value of α c = 2 [36]. Value at Risk is not a coherent measure and can violate convexity, so it is not amenable to a similar study of its critical point. However, parametric VaR (that is, the quantile where the underlying distribution is given, only its expectation value and variance is determined from empirical data) is convex, and it was shown to possess a critical line that runs above that of ES [37]. The investigation of the semi-variance yielded similar results [37]. It seems, then, that the geometrical analysis of ML provides important information for a variety of risk measures, including some of the most widely used measures in the industry (VaR and ES), and also other downside risk measures.

4. Related Problems

In this section, we list a few problems from different fields of mathematics and physics that are linked to the random coloring of points in high-dimensional space and point out their connection with the questions discussed above.

4.1. Binary Classifications with a Perceptron

Feed-forward networks of formal neurons perform binary classifications of input data [38]. The simplest conceivable network of this type—the perceptron—consists of just an input layer of N units ξ i and a single output bit ζ = ± 1 [39]. Each input ξ i is directly connected to the output by a real valued coupling w i . The output is computed as the sign of the weighted inputs
$$\zeta = \mathrm{sign}\Big(\sum_{i=1}^N w_i\,\xi_i\Big). \qquad (29)$$
Consider now a family of random inputs $\{\xi_i^t\}$, $t = 1,\dots,T$, and ask for the probability $P_p(T,N)$ that the perceptron is able to implement a randomly chosen binary classification $\{\zeta^t\}$ of these inputs. Interpreting the vectors $\xi^t := \{\xi_i^t\}$ as position vectors of $T$ points in $N$ dimensions and the required classifications $\zeta^t$ as a black/white coloring, we hence need to know the probability that this particular coloring is a dichotomy. Indeed, if a hyperplane exists that separates black points from white ones, it has a normal vector $w$ that gives a suitable choice for the perceptron weights to get all classifications right. Therefore, we have
$$P_p(T,N) = P_d(T,N) = \frac{1}{2^{T-1}}\sum_{i=0}^{N-1}\binom{T-1}{i}. \qquad (30)$$
In the thermodynamic limit $N, T \to \infty$, this problem, together with a variety of modifications, can be analyzed using methods from the statistical mechanics of disordered systems along the lines of Equations (14)–(16), see [11].
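The transition can also be probed with the classical perceptron learning rule instead of a linear program; the rule converges whenever the classification is realizable, so with a generous sweep cap it gives an approximate estimate of $P_p(T,N)$. This is only a heuristic sketch (the cap may cause undercounting near $\alpha_c = 2$); learnable is our naming.

```python
import numpy as np

rng = np.random.default_rng(3)

def learnable(xi, zeta, max_sweeps=200):
    """Classical perceptron rule: add zeta_t * xi_t whenever pattern t violates (4).
    Returns True once a full sweep passes with no updates."""
    T, N = xi.shape
    w = np.zeros(N)
    for _ in range(max_sweeps):
        updated = False
        for t in range(T):
            if zeta[t] * (xi[t] @ w) <= 0:
                w += zeta[t] * xi[t]
                updated = True
        if not updated:
            return True
    return False

N, trials = 30, 100
for alpha in (1.5, 2.0, 2.5):
    T = int(alpha * N)
    ok = sum(learnable(rng.standard_normal((T, N)), rng.choice([-1.0, 1.0], size=T))
             for _ in range(trials))
    print(f"alpha = {alpha}: fraction of implementable classifications ~ {ok / trials:.2f}")
```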

4.2. Zero-Sum Games with Random Pay-Off Matrices

In game theory, two or more players choose among different strategies at their disposal and receive a pay-off (that may be negative) depending on the choices of all participating players. A particularly simple situation is given by a zero-sum game between two players, where one player’s profit is the other player’s loss. If the first player may choose among N strategies and the second among T, the setup is defined by an $N \times T$ pay-off matrix $r_i^t$, giving the reward for the first player if he plays strategy i and his opponent strategy t. Barring rare situations in which it is advantageous for one or both players to always choose one and the same strategy, it is known from the classical work of von Neumann and Morgenstern [40] that the best the players can do is to choose at random with different probabilities among their available strategies. The set of these probabilities $p_i$ and $q_t$, respectively, is called a mixed strategy.
For large numbers of available strategies, it is sensible to investigate typical properties of such mixed strategies for random pay-off matrices. This can be done in a way rather similar to the calculation of ML presented in Appendix A of the present paper [3]. One interesting result is that an extensive fraction of the probabilities $p_i$ and $q_t$ forming the respective optimal mixed strategies has to be identically zero: for both players, there are strategies they should never touch.
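An optimal mixed strategy for a random matrix game is itself the solution of a small linear program, which makes the "strategies never touched" statement easy to observe. A sketch under the usual assumptions (scipy available, Gaussian pay-offs); the variable names are ours.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
N, T = 50, 100                                   # numbers of pure strategies
R = rng.standard_normal((N, T))                  # random pay-off matrix r_it

# Player 1 maximizes the game value v subject to sum_i p_i R[i, t] >= v for all t,
# with p on the simplex. Variables are (p_1, ..., p_N, v); linprog minimizes, so use -v.
c = np.concatenate([np.zeros(N), [-1.0]])
A_ub = np.hstack([-R.T, np.ones((T, 1))])        # v - sum_i p_i R[i, t] <= 0
A_eq = np.concatenate([np.ones(N), [0.0]])[None, :]
res = linprog(c, A_ub=A_ub, b_ub=np.zeros(T), A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * N + [(None, None)], method="highs")
p = res.x[:N]
print("game value:", -res.fun)
print("fraction of strategies with p_i = 0:", np.mean(p < 1e-9))
```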

4.3. Non-Negative Solutions to Large Systems of Linear Equations

Consider a random $N \times T$ matrix $r_i^t$ and a random vector $b \in \mathbb{R}^N$. When will the system of linear equations
$$\sum_t r_i^t\, x_t = b_i, \qquad i = 1,\dots,N \qquad (31)$$
typically have a solution with all $x_t$ being non-negative? This question is related to the optimization of financial portfolios under a ban on short-selling as discussed above, and also occurs when investigating the stability of chemical or ecological problems [6,41]. Here, the $x_t$ denote concentrations of chemical or biological species, and hence have to be non-negative. Similar to the optimal mixed strategies considered in the previous subsection, the solution typically has a number of entries $x_t$ that are strictly zero (species that died out), the remaining ones being positive (surviving species). Again, for $T = \alpha N$ and $N \to \infty$, a sharp transition at a critical value $\alpha_c$ separates situations with typically no non-negative solution from those in which typically such a solution can be found [4].
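Whether (31) admits a non-negative solution can be decided in practice with a non-negative least-squares solve: a (numerically) zero residual certifies existence. The sketch below uses scipy.optimize.nnls with purely illustrative distribution parameters; the location of the transition depends on these through the parameter γ of (35).

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(5)

def has_nonneg_solution(A, b, tol=1e-8):
    """nnls minimizes ||A x - b|| over x >= 0; zero residual means (31) is solvable."""
    _, rnorm = nnls(A, b)
    return rnorm < tol

N, trials = 30, 200
for alpha in (1.0, 2.0, 3.0, 4.0):
    T = int(alpha * N)
    hits = sum(has_nonneg_solution(1.0 + rng.standard_normal((N, T)),      # r_it, mean R = 1
                                   1.0 + 0.1 * rng.standard_normal(N))     # b_i, mean B = 1
               for _ in range(trials))
    print(f"alpha = {alpha}: P(non-negative solution) ~ {hits / trials:.2f}")
```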
To make contact with the cases discussed before, it is useful to map the problem to a dual one by again using Farkas’ lemma. Let us denote by
$$\bar r = \sum_t c_t\, r_t, \qquad c_t \ge 0,\ t = 1,\dots,T \qquad (32)$$
the vectors in the non-negative cone of the column vectors r t of matrix r i t . It is clear that (31) has a non-negative solution x if b belongs to this cone, and that no such solution exists if b lies outside the cone. In the latter case, however, there must be a hyperplane separating b from the cone. Denoting the normal of this hyperplane by w , we hence have the following duality: either the system (31) has a non-negative solution x , or there exists a vector w with
$$w \cdot r_t \ge 0, \quad t = 1,\dots,T \qquad \text{and} \qquad w \cdot b < 0. \qquad (33)$$
If the $r_i^t$ are drawn independently from a distribution with finite first and second cumulants $R$ and $\sigma_r^2$, respectively, and the components $b_i$ are independent random numbers with average $B$ and variance $\sigma_b^2/N$, the dual problem (33) may be analyzed along the lines of (14)–(16). The result for the typical entropy of solution vectors $w$ reads [4]
$$S_{\mathrm{typ}}(\gamma,\alpha) = \operatorname*{extr}_{q,\kappa}\left\{\frac{1}{2}\ln(1-q) + \frac{q}{2(1-q)} - \frac{\kappa^2\gamma^2}{2(1-q)} + \alpha\int Dt\,\ln H\!\left(\frac{\kappa - \sqrt{q}\,t}{\sqrt{1-q}}\right)\right\}, \qquad (34)$$
where the parameter
$$\gamma := \frac{B\,\sigma_r}{R\,\sigma_b\,\sqrt{2}} \qquad (35)$$
characterizes the distributions of $r_i^t$ and $b_i$. The main difference to (16) is the additional extremum over $\kappa$, regularized by the penalty term proportional to $\kappa^2$. Considering the limit $q \to 1$ in (34), it is possible to determine the critical value $\alpha_c(\gamma)$ bounding the region where typically no solution $w$ may be found. For nonrandom $b$, that is, $\sigma_b \to 0$ implying $\gamma \to \infty$, we recover the Cover result $\alpha_c = 2$.
The problem is closely related to a phase transition found recently in MacArthur’s resource competition model [4,6,7], in which a community of purely competing species builds up a collective cooperative phase above a critical threshold of the biodiversity.

5. Discussion

In this paper, we have reviewed various problems from different disciplines, including high-dimensional random geometry, finance, binary classification with a perceptron, game theory, and random linear algebra, which all have at their root the problem of dichotomies, that is, the linear separability of points carrying a binary label and scattered randomly over a high-dimensional space. No doubt there are several further problems belonging to this class; those that spring to mind are the theoretical ecology problem alluded to at the end of the previous section, or linear programming with random parameters [8]. Some of these conceptual links are obvious, and have been known for decades (for example, the link between dichotomies and the perceptron), and others are far less clear at first sight, such as the relationship with the two finance problems discussed in Section 3. We regard as one of the merits of this paper the establishment of this network of conceptual connections between seemingly faraway areas of study. Apart from the occasional use of the heavy machinery of the replica theory, in most of the paper we offered transparent geometric arguments, where our only tool was basically Farkas’ lemma.
The phase transitions we encountered in all of the problems discussed here are similar in spirit to the geometric transitions discovered by Donoho and Tanner [33] and interpreted at a very high level of abstraction by [42]. One of the central features of these transitions is the universality of the critical point. This universality is different from the one observed in the vicinity of continuous phase transitions in physics, where the value of the critical point can vary widely, even between transitions belonging to the same universality class. The universality in physical phase transitions is a property of the critical indices and other critical parameters. Critical indices also appear in our abstract geometric problems, and they are universal, but we omitted their discussion which might have led far from the main theme.
At the bottom of our geometric problems, there is the optimization of a convex objective function (which is, by the way, the key to the replica symmetric solutions we found). The recent evolution of neural networks, machine learning, and artificial intelligence is mainly concerned with a radical lack of convexity, which points to the direction in which we may try to extend our studies. Another simplifying feature we exploited was the independence of the random variables. The moment that correlations appear, these problems become hugely more complicated. We left this direction for future exploration. However, it is evident that progress in any of these problems will induce progress in the other fields, and we feel that revealing their fundamental unity may help the transfer of methods and ideas between these fields. This may be the most important achievement of this analysis.

Author Contributions

Conceptualization, I.K. and A.E.; formal analysis, A.P., I.K. and A.E.; software, A.P.; writing—original draft, A.P., I.K. and A.E. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

A.P. and A.E. are grateful to Stefan Landmann for many interesting discussions.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Replica Calculation of Maximal Loss

In this appendix, we provide some details for the determination of the maximal loss of a random portfolio using the replica trick. The calculation is a generalization of the one presented in [3] for random zero-sum games. A presentation at full length can be found in [43]. As we pointed out in the main text, maximal loss is a special limit of the Expected Shortfall risk measure, corresponding to the so-called confidence level going to 100%. In [44], a detailed study of the behavior of ES was carried out, including the limiting case of maximal loss. That treatment is completely different from the one presented here, so the present calculation can be regarded as complementary to that in [44].
The central quantity of interest is the fractional volume
$$\Omega(\kappa,\gamma,\alpha) = \frac{\displaystyle\int_\gamma^\infty\prod_{i=1}^N dw_i\,\delta\Big(\sum_i w_i - N\Big)\prod_{t=1}^{\alpha N}\Theta\Big(\sum_i w_i r_i^t + \kappa\Big)}{\displaystyle\int_\gamma^\infty\prod_{i=1}^N dw_i\,\delta\Big(\sum_i w_i - N\Big)} \qquad (A1)$$
defined in (28). Although not explicitly indicated, Ω ( κ , γ , α ) depends on all the random parameters r i t and is therefore by itself a random quantity. The calculation of its complete probability density P ( Ω ) is hopeless but for large N this distribution gets concentrated around the typical value Ω typ ( κ , γ , α ) . Because Ω involves a product of many independent random factors this typical value is given by
$$\Omega_{\mathrm{typ}}(\kappa,\gamma,\alpha) = e^{\langle \ln\Omega(\kappa,\gamma,\alpha)\rangle} \qquad (A2)$$
rather than by $\langle\Omega(\kappa,\gamma,\alpha)\rangle$. Here, $\langle\cdots\rangle$ denotes the average over the $r_i^t$. A direct calculation of $\langle\ln\Omega\rangle$ is hardly possible. It may be circumvented by exploiting the identity
$$\langle\ln\Omega(\kappa,\gamma,\alpha)\rangle = \lim_{n\to 0}\frac{1}{n}\Big(\langle\Omega^n(\kappa,\gamma,\alpha)\rangle - 1\Big). \qquad (A3)$$
For natural $n$, the determination of $\langle\Omega^n\rangle$ is feasible. The main problem then is to continue the result to real $n$ in order to perform the limit $n \to 0$.
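The analytic continuation behind (A3) can be illustrated on a toy example: for any positive random variable X, $(\langle X^n\rangle - 1)/n$ approaches $\langle \ln X\rangle$ as $n \to 0$, and the convergence is visible already at moderately small n. A minimal numerical check (not part of the original calculation):

```python
import numpy as np

rng = np.random.default_rng(7)
x = np.exp(1.0 + rng.standard_normal(10**6))   # a positive random variable with <ln X> = 1

direct = np.mean(np.log(x))
for n in (1e-1, 1e-2, 1e-3):
    via_replica = (np.mean(x**n) - 1) / n      # right-hand side of (A3) at finite n
    print(f"n = {n}: {via_replica:.4f}   (direct <ln X> = {direct:.4f})")
```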
The explicit calculation starts with
$$\langle\Omega(\kappa,\gamma,\alpha)^n\rangle = \left\langle\frac{\displaystyle\int_\gamma^\infty\prod_{i=1}^N\prod_{a=1}^n dw_i^a\,\prod_{a=1}^n\delta\Big(\sum_i w_i^a - N\Big)\prod_{t=1}^{\alpha N}\prod_{a=1}^n\Theta\Big(\sum_i w_i^a r_i^t + \kappa\Big)}{\displaystyle\int_\gamma^\infty\prod_{i=1}^N\prod_{a=1}^n dw_i^a\,\prod_{a=1}^n\delta\Big(\sum_i w_i^a - N\Big)}\right\rangle. \qquad (A4)$$
Using
$$\int_\gamma^\infty\prod_{i=1}^N dw_i\,\delta\Big(\sum_i w_i - N\Big) \simeq \exp\big\{N[1 + \ln(1-\gamma)]\big\} \qquad (A5)$$
for large $N$ and representing the $\delta$-functions and $\Theta$-functions by integrals over auxiliary variables $E^a$, $\lambda_t^a$, and $y_t^a$, we arrive at
$$\begin{aligned}\langle\Omega(\kappa,\gamma,\alpha)^n\rangle = \exp\big\{-nN[1 + \ln(1-\gamma)]\big\} &\times\int_\gamma^\infty\prod_{i,a} dw_i^a\int\prod_a\frac{dE^a}{2\pi}\,\exp\Big\{iN\sum_a E^a\Big(\frac{1}{N}\sum_i w_i^a - 1\Big)\Big\}\\ &\times\int_{-\kappa}^\infty\prod_{t,a} d\lambda_t^a\int\prod_{t,a}\frac{dy_t^a}{2\pi}\,\exp\Big\{i\sum_{t,a} y_t^a\lambda_t^a\Big\}\,\Big\langle\exp\Big(-i\sum_{i,t,a} y_t^a w_i^a r_i^t\Big)\Big\rangle. \end{aligned} \qquad (A6)$$
The average over the $r_i^t$ may now be performed for independent Gaussian $r_i^t$ with average zero and variance $\sigma^2 = 1/N$. The result is valid also for more general distributions. First, multiplying the variance by a constant just rescales the maximal loss but does not influence the optimal $w$. Second, for $N \to \infty$ only the first two cumulants of the distribution matter due to the central limit theorem. Crucial is, however, the assumption of the $r_i^t$ being independent.
Performing the average we find
$$\Big\langle\exp\Big(-i\sum_{i,t,a} y_t^a w_i^a r_i^t\Big)\Big\rangle = \prod_{i,t}\int\frac{dr_i^t}{\sqrt{2\pi\sigma^2}}\,\exp\Big(-\frac{(r_i^t)^2}{2\sigma^2} - i\, r_i^t\sum_a y_t^a w_i^a\Big) = \exp\Big(-\frac{1}{2N}\sum_{i,t}\sum_{a,b} w_i^a w_i^b\, y_t^a y_t^b\Big). \qquad (A7)$$
To disentangle in (A6) the w-integrals from those over λ and y we introduce the order parameters
$$q^{ab} = \frac{1}{N}\sum_i w_i^a w_i^b, \qquad a \le b, \qquad (A8)$$
together with the conjugate ones $\hat q^{ab}$. Using standard techniques [11], we end up with
$$\langle\Omega(\kappa,\gamma,\alpha)^n\rangle = \int\prod_{a\le b}\frac{dq^{ab}\, d\hat q^{ab}}{2\pi/N}\int\prod_a\frac{dE^a}{2\pi}\,\exp\Big\{iN\sum_{a\le b} q^{ab}\hat q^{ab} - iN\sum_a E^a - nN\big[1 + \ln(1-\gamma)\big] + N G_S + \alpha N G_E\Big\}, \qquad (A9)$$
where
$$G_S = \ln\int_\gamma^\infty\prod_a dw^a\,\exp\Big\{-i\sum_{a\le b}\hat q^{ab} w^a w^b + i\sum_a E^a w^a\Big\} \qquad (A10)$$
and
$$G_E = \ln\int_{-\kappa}^\infty\prod_a d\lambda^a\int\prod_a\frac{dy^a}{2\pi}\,\exp\Big\{-\frac{1}{2}\sum_{a,b} q^{ab} y^a y^b + i\sum_a y^a\lambda^a\Big\}. \qquad (A11)$$
For N the integrals over the order parameters in (A9) may be calculated using the saddle-point method. The essence of the so-called replica-symmetric ansatz is the assumption that the values of the order parameters at the saddle-point are invariant under permutation of the replica indices a and b. In [43] arguments are given why the replica-symmetric saddle-point should yield correct results in the present context. We therefore assume for the saddle-point values of the order parameters
$$q^{aa} = q_1,\quad i\hat q^{aa} = \tfrac{1}{2}\hat q_1,\quad iE^a = E \quad \forall a; \qquad q^{ab} = q_0,\quad i\hat q^{ab} = \hat q_0 \quad a > b, \qquad (A12)$$
which implies various simplifications in (A9)–(A11). Employing standard manipulations [11] we arrive at
$$\langle\Omega(\kappa,\gamma,\alpha)^n\rangle \simeq \exp\Big\{N\operatorname*{extr}_{q_0,\hat q_0,q_1,\hat q_1,E}\Big[-\frac{n(n-1)}{2}\,q_0\hat q_0 + \frac{n}{2}\,q_1\hat q_1 - nE - n\big(1 + \ln(1-\gamma)\big) + G_S + \alpha\, G_E\Big]\Big\}. \qquad (A13)$$
Using the shorthand notations (17) the functions G S and G E are now given by
$$G_S = \ln\int Dl\,\left[\exp\left(\frac{(\sqrt{\hat q_0}\,l + E)^2}{2(\hat q_0 + \hat q_1)}\right)\sqrt{\frac{2\pi}{\hat q_0 + \hat q_1}}\; H\!\left(\frac{\sqrt{\hat q_0}\,l + E - \gamma(\hat q_0 + \hat q_1)}{\sqrt{\hat q_0 + \hat q_1}}\right)\right]^n \qquad (A14)$$
and
$$G_E = \ln\int Dm\,\left[H\!\left(\frac{\sqrt{q_0}\,m - \kappa}{\sqrt{q_1 - q_0}}\right)\right]^n. \qquad (A15)$$
We may now treat $n$ as a real number and perform the limit $n \to 0$. In this way, we find for the averaged entropy
$$S(\kappa,\gamma,\alpha) := \lim_{N\to\infty}\frac{1}{N}\,\langle\ln\Omega(\kappa,\gamma,\alpha)\rangle = \lim_{N\to\infty}\frac{1}{N}\lim_{n\to 0}\frac{1}{n}\Big(\langle\Omega(\kappa,\gamma,\alpha)^n\rangle - 1\Big) \qquad (A16)$$
the expression
$$\begin{aligned} S(\kappa,\gamma,\alpha) = \operatorname*{extr}_{q_0,\hat q_0,q_1,\hat q_1,E}\Big\{&\frac{q_0\hat q_0}{2} + \frac{q_1\hat q_1}{2} - E - 1 - \ln(1-\gamma) + \frac{1}{2}\ln(2\pi) - \frac{1}{2}\ln(\hat q_0 + \hat q_1) + \frac{\hat q_0 + E^2}{2(\hat q_0 + \hat q_1)}\\ &+ \int Dl\,\ln H\!\left(\frac{\sqrt{\hat q_0}\,l + E - \gamma(\hat q_0 + \hat q_1)}{\sqrt{\hat q_0 + \hat q_1}}\right) + \alpha\int Dm\,\ln H\!\left(\frac{\sqrt{q_0}\,m - \kappa}{\sqrt{q_1 - q_0}}\right)\Big\}. \end{aligned} \qquad (A17)$$
The remaining extremization has to be done numerically. Before embarking on this task, it is useful to remember that $\Omega$ and $S$ are only instrumental in determining the maximal loss, which in turn is given by the value $\kappa_c$ of $\kappa$ for which $\Omega$ tends to zero. At the same time, the typical overlap $q_0$ between two different vectors in $\Omega$ has to tend to the self-overlap $q_1$. To investigate this limit, we replace the order parameter $q_1$ by
$$v := q_1 - q_0 \qquad (A18)$$
and study the saddle-point equations for $v \to 0$. In this limit, it turns out that the remaining order parameters may either also tend to zero or diverge. It is therefore convenient to make the replacements
$$\hat q_0 \to \hat q_0\, v^2, \qquad \hat q_1 \to \hat w := (\hat q_1 + \hat q_0)\,v, \qquad E \to E\,v. \qquad (A19)$$
Rescaled in this way, the saddle-point values of the order parameters remain $O(1)$ for $v \to 0$. After some tedious calculations, the saddle-point equations acquire the form
$$\begin{aligned} 0 &= \hat w - \alpha\, H\!\left(\frac{\kappa_c}{\sqrt{q_0}}\right)\\ 0 &= \hat q_0 + \hat w\,(q_0 + \kappa_c^2) - \alpha\,\sqrt{q_0}\,\kappa_c\, G\!\left(\frac{\kappa_c}{\sqrt{q_0}}\right)\\ 0 &= E\,(1-\gamma) - \hat w\,(q_0 - \gamma) + \hat q_0\\ 0 &= \hat w - H\!\left(\frac{E - \gamma\hat w}{\sqrt{\hat q_0}}\right)\\ 0 &= \hat w\,(E - 1) + \sqrt{\hat q_0}\; G\!\left(\frac{E - \gamma\hat w}{\sqrt{\hat q_0}}\right) + \gamma\hat w\,(1 - \hat w) \end{aligned} \qquad (A20)$$
where
$$G(x) := \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}. \qquad (A21)$$
From the numerical solution of the system (A20) we determine κ c ( α , γ ) as shown in Figure 3.

References

  1. Kondor, I.; Pafka, S.; Nagy, G. Noise sensitivity of portfolio selection under various risk measures. J. Bank. Financ. 2007, 31, 1545–1573. [Google Scholar] [CrossRef] [Green Version]
  2. Cover, T.M. Geometrical and Statistical Properties of Systems of Linear Inequalities with Applications in Pattern Recognition. IEEE Trans. Electron. Comput. 1965, EC-14, 326–334. [Google Scholar] [CrossRef] [Green Version]
  3. Berg, J.; Engel, A. Matrix Games, Mixed Strategies, and Statistical Mechanics. Phys. Rev. Lett. 1998, 81, 4999–5002. [Google Scholar] [CrossRef] [Green Version]
  4. Landmann, S.; Engel, A. On non-negative solutions to large systems of random linear equations. Physica A 2020, 552, 122544. [Google Scholar] [CrossRef]
  5. Garnier-Brun, J.; Benzaquen, M.; Ciliberti, S.; Bouchaud, J.P. A New Spin on Optimal Portfolios and Ecological Equilibria. 2021. Available online: https://arxiv.org/abs/2104.00668 (accessed on 17 June 2021).
  6. MacArthur, R. Species packing and competitive equilibrium for many species. Theor. Popul. Biol. 1970, 1, 1–11. [Google Scholar] [CrossRef]
  7. Tikhonov, M.; Monasson, R. Collective Phase in Resource Competition in a Highly Diverse Ecosystem. Phys. Rev. Lett. 2017, 118, 048103. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  8. Todd, M. Probabilistic models for linear programming. Math. Oper. Res. 1991, 16, 671–693. [Google Scholar] [CrossRef] [Green Version]
  9. Farkas, J. Theorie der einfachen Ungleichungen. J. Reine Angew. Math. (Crelles J.) 1902, 1902, 1–27. [Google Scholar]
  10. Gardner, E. The space of interactions in neural network models. J. Phys. A Math. Gen. 1988, 21, 257–270. [Google Scholar] [CrossRef]
  11. Engel, A.; Van den Broeck, C. Statistical Mechanics of Learning; Cambridge University Press: Cambridge, UK, 2001. [Google Scholar]
  12. Markowitz, H. Portfolio selection. J. Financ. 1952, 7, 77–91. [Google Scholar]
  13. Markowitz, H. Portfolio Selection: Efficient Diversification of Investments; J. Wiley and Sons: New York, NY, USA, 1959. [Google Scholar]
  14. JP Morgan. Riskmetrics Technical Manual; JP Morgan: New York, NY, USA, 1995. [Google Scholar]
  15. JP Morgan and Reuters. Riskmetrics. In Technical Document; JP Morgan: New York, NY, USA, 1996. [Google Scholar]
  16. Acerbi, C.; Nordio, C.; Sirtori, C. Expected Shortfall as a Tool for Financial Risk Management. 2001. Available online: https://arxiv.org/abs/cond-mat/0102304 (accessed on 17 June 2021).
  17. Artzner, P.; Delbaen, F.; Eber, J.M.; Heath, D. Coherent Measures of Risk. Math. Financ. 1999, 9, 203–228. [Google Scholar] [CrossRef]
  18. Acerbi, C.; Tasche, D. Expected Shortfall: A Natural Coherent Alternative to Value at Risk. Econ. Notes 2002, 31, 379–388. [Google Scholar] [CrossRef] [Green Version]
  19. Pflug, G.C. Some remarks on the value-at-risk. In Probabilistic Constrained Optimization; Uryasev, S., Ed.; Springer: Berlin/Heidelberg, Germany, 2000; pp. 272–281. [Google Scholar]
  20. Basel Committee on Banking Supervision. Minimum Capital Requirements for Market Risk; Basel Committee on Banking Supervision: Basel, Switzerland, 2016. [Google Scholar]
  21. Michaud, R.O. The Markowitz optimization enigma: Is ‘optimized’ optimal? Financ. Anal. J. 1989, 45, 31–42. [Google Scholar] [CrossRef]
  22. Kempf, A.; Memmel, C. Estimating the global minimum variance portfolio. Schmalenbach Bus. Rev. 2006, 58, 332–348. [Google Scholar] [CrossRef]
  23. Basak, G.K.; Jagannathan, R.; Ma, T. A jackknife estimator for tracking error variance of optimal portfolios constructed using estimated inputs. Manag. Sci. 2009, 55, 990–1002. [Google Scholar] [CrossRef]
  24. Frahm, G.; Memmel, C. Dominating estimators for minimum-variance portfolios. J. Econom. 2010, 159, 289–302. [Google Scholar] [CrossRef] [Green Version]
  25. Pafka, S.; Kondor, I. Noisy Covariance Matrices and Portfolio Optimization II. Physica A 2003, 319, 487–494. [Google Scholar] [CrossRef] [Green Version]
  26. Burda, Z.; Jurkiewicz, J.; Nowak, M.A. Is Econophysics a Solid Science? Acta Phys. Pol. B 2003, 34, 87–132. [Google Scholar]
  27. Caccioli, F.; Still, S.; Marsili, M.; Kondor, I. Optimal liquidation strategies regularize portfolio selection. Eur. J. Financ. 2013, 19, 554–571. [Google Scholar] [CrossRef] [Green Version]
  28. Caccioli, F.; Kondor, I.; Marsili, M.; Still, S. Liquidity Risk And Instabilities In Portfolio Optimization. Int. J. Theor. Appl. Financ. 2016, 19, 1650035. [Google Scholar] [CrossRef] [Green Version]
  29. Ledoit, O.; Wolf, M. Nonlinear shrinkage estimation of large-dimensional covariance matrices. Ann. Stat. 2012, 40, 1024–1060. [Google Scholar] [CrossRef]
  30. Kondor, I.; Papp, G.; Caccioli, F. Analytic solution to variance optimization with no short positions. J. Stat. Mech. Theory Exp. 2017, 2017, 123402. [Google Scholar] [CrossRef] [Green Version]
  31. Kondor, I.; Papp, G.; Caccioli, F. Analytic approach to variance optimization under an ℓ1 constraint. Eur. Phys. J. B 2019, 92, 8. [Google Scholar] [CrossRef]
  32. Young, M.R. A minimax portfolio selection rule with linear programming solution. Manag. Sci. 1998, 44, 673–683. [Google Scholar] [CrossRef] [Green Version]
  33. Donoho, D.; Tanner, J. Observed universality of phase transitions in high-dimensional geometry, with implications for modern data analysis and signal processing. Philos. Trans. R. Soc. Math. Phys. Eng. Sci. 2009, 367, 4273–4293. [Google Scholar] [CrossRef]
  34. Schmidt, B.K.; Mattheiss, T. The probability that a random polytope is bounded. Math. Oper. Res. 1977, 2, 292–296. [Google Scholar] [CrossRef]
  35. Kondor, I.; Varga-Haszonits, I. Instability of portfolio optimization under coherent risk measures. Adv. Complex Syst. 2010, 13, 425–437. [Google Scholar] [CrossRef]
  36. Ciliberti, S.; Kondor, I.; Mézard, M. On the Feasibility of Portfolio Optimization under Expected Shortfall. Quant. Financ. 2007, 7, 389–396. [Google Scholar] [CrossRef]
  37. Varga-Haszonits, I.; Kondor, I. The instability of downside risk measures. J. Stat. Mech. Theory Exp. 2008, 2008, P12007. [Google Scholar] [CrossRef] [Green Version]
  38. Hertz, J.; Krogh, A.; Palmer, R.G. Introduction to the Theory of Neural Computation; Addison-Wesley: Redwood City, CA, USA, 1991. [Google Scholar]
  39. Rosenblatt, F. Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms; Spartan: Washington, DC, USA, 1962. [Google Scholar]
  40. Von Neumann, J.; Morgenstern, O. Theory of Games and Economic Behavior; Princeton University Press: Princeton, NJ, USA, 1953. [Google Scholar]
  41. May, R. Will a large complex system be stable? Nature 1972, 238, 413–414. [Google Scholar] [CrossRef]
  42. Amelunxen, D.; Lotz, M.; McCoy, M.B.; Tropp, J.A. Living on the edge: A geometric theory of phase transitions in convex optimization. Inform. Inference 2013, 3, 224–294. [Google Scholar] [CrossRef]
  43. Prüser, A. Phasenübergänge in Zufälligen Geometrischen Problemen. Master’s Thesis, University of Oldenburg, Oldenburg, Germany, 2020. [Google Scholar]
  44. Caccioli, F.; Kondor, I.; Papp, G. Portfolio optimization under expected shortfall: Contour maps of estimation error. Quant. Financ. 2018, 18, 1295–1313. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Two colorings of three points in two dimensions. In the left one, black and white points can be separated by a line through the origin; this coloring therefore represents a dichotomy. For the right one, no such separating line exists.
Figure 2. Probability P_d(T,N) that T randomly colored points in a general position in N-dimensional space form a dichotomy as a function of the ratio α between T and N for different values of N. The transition between the limiting values P = 1 at α = 1 and P = 0 at large α becomes increasingly sharp when N grows.
Figure 3. Left: The Maximal Loss ML = κ_c as a function of α. The analytical results (solid line) are compared to simulation results (circles) with N = 200 averaged over 100 samples. The symbol size corresponds to the statistical error. Right: Same as left with largely extended axis of ML.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
