Article

Consensus towards Partially Cooperative Strategies in Self-Regulated Evolutionary Games on Networks

Department of Information Engineering and Mathematics, Via Roma, 56, 53100 Siena, Italy
Author to whom correspondence should be addressed.
The authors contributed equally to this work.
Academic Editors: Paolo Pin and Ulrich Berger
Received: 27 June 2021 / Revised: 19 July 2021 / Accepted: 27 July 2021 / Published: 29 July 2021
(This article belongs to the Special Issue Game Theory in Social Networks)

Abstract

Cooperation is widely recognized to be fundamental for the well-balanced development of human societies. Several different approaches have been proposed to explain the emergence of cooperation in populations of individuals playing the Prisoner’s Dilemma game, characterized by two concurrent natural mechanisms: the temptation to defect and the fear to be betrayed by others. Few results are available for analyzing situations where only the temptation to defect (Chicken game) or the fear to be betrayed (Stag-Hunt game) is present. In this paper, we analyze the emergence of full and partial cooperation for these classes of games. We find the conditions for which these Nash equilibria are asymptotically stable, and we show that the partial one is also globally stable. Furthermore, in the Chicken and Stag-Hunt games, partial cooperation has been found to be more rewarding than the full one of the Prisoner’s Dilemma game. This result highlights the importance of such games for understanding and sustaining different levels of cooperation in social networks.
Keywords: evolutionary games; cooperation; consensus; dynamics on networks; stag-hunt game; chicken game; mixed Nash equilibrium; self-regulation; stable equilibrium; complex systems

1. Introduction

Cooperation in a population is a key emerging phenomenon, which has fascinated many scientists in several fields, ranging from biology to the social and economic sciences [1,2,3,4,5,6,7], and has recently been considered also in technological applications [8,9]. Although cooperation has sometimes been seen to contrast with the Darwinian concept of natural selection, it emerges in many complex systems, providing substantial benefits for all members of groups and organizations [10,11,12,13,14,15,16,17]. Generally, this topic is tackled using the tools of evolutionary game theory, which constitute the mathematical foundations for modeling the decision making process of players taking part in a replication/selection competition. These mechanisms are described by means of the well-known replicator equation [3,18,19]. Moreover, the structure of the society plays an important role: studies aiming to embed networks into games with a finite set of strategies have been developed for infinite lattices [20]. Further developments have been proposed in [21,22], where the graph topology is general and players are allowed to choose a strategy in the continuous set $\Delta = [0,1]$. In general, it has been shown that particular networks act as a catalyst for the emergence of cooperation [14,23,24,25,26,27]. In [28], it has been shown that cooperation can emerge if the average connectivity, measured as the average degree of the underlying graph, is smaller than the benefits/costs ratio of altruistic behavior. Moreover, to solve the problem of cooperation, the majority of papers on the topic add and/or change the rules of the played games [29]. For example, as part of a society, individuals often use punishment mechanisms which limit the detrimental behavior of free riders, or they can be rewarded if they prefer cooperative behaviors [10,17,30,31,32,33]. Furthermore, mechanisms based on discounts and synergies among players have also been proposed [13].
In summary, all these efforts are aimed to point out the role of exogenous factors in the emergence of cooperation.
Recently, it has been emphasized that every human player is characterized by endogenous factors, such as the awareness of conflicts, which can act as a motivation for cooperation. In other words, the rules of the game are important, but other elements that distinguish humans from animals must also have a crucial effect. In particular, in [34] the Self-Regulated Evolutionary Game on Network (SR-EGN) equation has been introduced, extending the EGN equation studied in [21,22]. The SR-EGN equation is a set of ordinary differential equations modeling the social pressure on each individual (exogenous factors) and the innate tendency of individuals to cooperate with one another even when it contrasts with their rational self-interest (endogenous factors). The introduced self-regulation is not imposed by internal norms for cooperation; instead, it is inspired by the fact that individuals are able to look at their interactions from the point of view of the others [35,36]. In this way, selfish behaviors are subject to an inertial mechanism, driving the final decision towards a more cooperative one.
Consensus solutions, where all players agree to converge to a common level of cooperation, are also significant for the sustainable development of interacting real societies. Consensus is a puzzling topic since it is often achieved without centralized control [37,38,39]. Remarkably, when cooperation spreads all over a population, consensus of all individuals to the same strategy is reached, thus allowing social individuals to wipe out the cost of indecision [40]. In the context of cooperation, consensus is usually studied in the full sense, where all players have the ability to make fully cooperative decisions (pure NE equilibrium). For the purpose of being more realistic, we notice that real players can be also partially cooperative. Thus, a revision of the concept of consensus towards cooperation in a more general sense is required.
The SR-EGN model is suitable for this scope, since the modeled individuals are naturally able to exhibit both full and partial levels of cooperation, as well as full defection. Additionally, the SR-EGN equation has three different consensus steady states: $x^*_{AC}$, where all players are fully cooperative; $x^*_{AM}$, where all players are partially cooperative; and $x^*_{AD}$, where all players are fully defective. While $x^*_{AD}$ should be avoided, $x^*_{AC}$ and $x^*_{AM}$ are both desirable. In this paper, we study and compare the stability conditions and the effectiveness of the last two states in the Stag-Hunt (SH) [13,41,42,43] and Chicken (CH) [12,13,25,41] games. In [34], the conditions for the onset of the fully cooperative consensus steady state have been found for the Prisoner's Dilemma (PD) game. However, in that case, convergence towards a partially cooperative consensus is not possible.
The main finding of this paper is that, in both SH and CH games, the steady states $x^*_{AC}$ and $x^*_{AD}$ are simultaneously asymptotically stable. When they are unstable, we find the conditions under which the partially cooperative state $x^*_{AM}$ is globally asymptotically stable. Additionally, we show that $x^*_{AM}$ can produce a higher payoff than $x^*_{AC}$. These results highlight the importance of studying SH and CH games, thus providing a deeper understanding of the mechanisms leading towards cooperation in real-world situations.
The paper is structured as follows: Section 2 illustrates the main concepts and the SR-EGN equation. Section 3 presents the results on the asymptotic stability of $x^*_{AC}$ and $x^*_{AD}$, and on the asymptotic and global stability of $x^*_{AM}$. In Section 4, several numerical experiments are developed and the main findings are discussed. Finally, conclusions and further developments are presented in Section 5.

2. Emergence of Cooperation and Consensus

The study developed in this paper is grounded on a recently introduced equation, namely the SR-EGN model [34], which represents a framework for understanding the evolution of cooperation under the effect of self regulation in a structured population.
Following [34], we consider a population of $N$ players, labeled by $v \in V = \{1, \dots, N\}$, arranged on an undirected graph described by the adjacency matrix $A = \{a_{v,w}\}$ ($a_{v,w} = a_{w,v} = 1$ when $v$ plays against $w$, $0$ otherwise). We assume $a_{v,v} = 0 \; \forall v \in V$. Moreover, the degree of player $v$, i.e., the number of its connections, is denoted by $k_v = \sum_{w=1}^N a_{v,w}$. At each time instant, an individual $v$ plays $k_v$ two-person games with its neighbors. The vector $k$ collects all degrees $k_v$. We denote with $x_v \in \Delta = [0,1]$ the level of cooperation of player $v$, and with $1 - x_v$ its level of defection. A player with $x_v \in \mathrm{int}(\Delta) = (0,1)$ exhibits a partial level of cooperation, while a full cooperator has $x_v = 1$ and a free rider is characterized by $x_v = 0$. The cooperation levels of all individuals are collected in the vector $x = [x_1, \dots, x_N]^\top \in \Delta^N$.

2.1. Games and Payoffs Calculation

The reward of player $v$ playing against a connected individual $w$ is defined by the payoff function $\phi : \Delta \times \Delta \to \mathbb{R}$:
$$\phi(x_v, x_w) = \begin{bmatrix} x_v & 1 - x_v \end{bmatrix} B \begin{bmatrix} x_w \\ 1 - x_w \end{bmatrix},$$
where $B$ represents the payoff matrix defined as:
$$B = \begin{bmatrix} 1 & S \\ T & 0 \end{bmatrix},$$
where $1$ is the reward collected by $v$ when both players cooperate, $T$ is earned by $v$ when it defects and $w$ cooperates (temptation to defect), $S$ is the payoff for a cooperative $v$ facing a defective $w$ (sucker's payoff), and $0$ is obtained when both players defect (punishment for mutual defection). The classification of games with respect to the parameters $T$ and $S$, as well as the characterizing social tensions [13,24,25], is reported in Figure 1.
Using (2), Equation (1) can be rewritten as follows:
$$\phi(x_v, x_w) = x_v \left( 1 \cdot x_w + S (1 - x_w) \right) + (1 - x_v) \left( T x_w + 0 \cdot (1 - x_w) \right) = \left( (1 - T - S) x_w + S \right) x_v + T x_w.$$
Accordingly, the derivative of (3) with respect to $x_v$ is:
$$\frac{\partial \phi(x_v, x_w)}{\partial x_v} = (1 - T - S) x_w + S.$$
Over the network of connections, the total payoff function $\phi_v : \Delta^N \to \mathbb{R}$ of a generic player $v$ corresponds to the sum of all payoffs gained against its neighbors, and it is defined as follows:
$$\phi_v(x) = \sum_{w=1}^N a_{v,w} \, \phi(x_v, x_w).$$
Player $v$ is able to appraise whether a change of its own strategy $x_v$ leads to an improvement of the payoff $\phi_v$. That is, if the derivative of $\phi_v$ with respect to $x_v$ is positive (negative), the player will increase (decrease) its strategy over time in order to raise its payoff. When this derivative is null, the player has reached a steady state. The derivative of the total payoff (5) is:
$$\frac{\partial \phi_v}{\partial x_v} = \frac{\partial}{\partial x_v} \sum_{w=1}^N a_{v,w} \phi(x_v, x_w) = \sum_{w=1}^N a_{v,w} \frac{\partial \phi(x_v, x_w)}{\partial x_v} = \sum_{w=1}^N a_{v,w} \left[ (1 - T - S) x_w + S \right] = k_v \left[ (1 - T - S) \bar{x}_v + S \right],$$
where $\bar{x}_v = \frac{1}{k_v} \sum_{w=1}^N a_{v,w} x_w$ represents the average strategy of all its neighbors. In particular, this quantity can be interpreted as the strategy of an equivalent neighboring player for $v$. The derivative $\frac{\partial \phi_v}{\partial x_v}$ represents the external feedback perceived by player $v$ from the environment, which influences its own strategy dynamics [34].
The similarity of Equations (4) and (6) suggests the introduction of a function modeling the dependency of the game mechanics on the elements of the payoff matrix:
$$\frac{\partial \phi_v}{\partial x_v} = k_v \left[ (1 - T - S) \bar{x}_v + S \right] := k_v f(\bar{x}_v).$$
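As an illustration, the payoff (3) and the marginal payoff (7) are straightforward to evaluate numerically. The following Python sketch is our own illustration (not code from the paper); the adjacency matrix and parameter values are arbitrary examples:

```python
import numpy as np

def phi(xv, xw, T, S):
    """Two-player payoff, Equation (3): ((1 - T - S) x_w + S) x_v + T x_w."""
    return ((1 - T - S) * xw + S) * xv + T * xw

def f(x, T, S):
    """Game kernel of Equation (7): f(x) = (1 - T - S) x + S."""
    return (1 - T - S) * x + S

def marginal_payoff(A, x, T, S):
    """Marginal payoff d(phi_v)/d(x_v) = k_v f(xbar_v) for all players at once."""
    k = A.sum(axis=1)           # degrees k_v
    xbar = (A @ x) / k          # neighborhood averages xbar_v
    return k * f(xbar, T, S)
```

On any graph, the identity $k_v f(\bar{x}_v) = \sum_w a_{v,w} f(x_w)$ used in the derivation of Equation (6) can be checked directly, together with the corner payoffs $\phi(1,1) = 1$, $\phi(0,1) = T$, and $\phi(1,0) = S$.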

2.2. The SR-EGN Model

“What kind of reward can I earn if I use a certain strategy against myself?”. In order to answer such questions appropriately, we need to account for internal feedback induced by cultural traits, awareness, altruism, learning, etc. [1,11]. When an individual judges cooperation as a greater good, some inertial mechanisms, able to reduce the (rational) temptation to defect, may be activated. The most intuitive way to account for these factors is to use the same game mechanics, assuming that the opponent is the player itself. In this way, a player is capable of assuming the position of the other agent, thus evaluating the payoff internally as an additional driver for its decision. This is accounted for by the SR-EGN Equation [34], which reads as follows:
$$\dot{x}_v = x_v (1 - x_v) \left[ k_v f(\bar{x}_v) - \beta_v f(x_v) \right],$$
where we used Equation (7), and $\beta_v$ is a parameter weighting the importance of the internal factors.
Equation (8) embeds both external and internal feedback mechanisms driving the player's decisions. When $\beta_v = 0$, the individual is somehow a “member of the flock”, since its strategy changes only according to the outcomes of the game interactions with neighbors. This effect is particularly dramatic in the classical PD context, as cooperation completely disappears from the population. In this direction, $\beta_v$ can also be interpreted as the resistance strength of player $v$ to external feedback [39]. This corresponds to the presence of an internal feedback, which can be positive ($\beta_v > 0$) or negative ($\beta_v < 0$).
It is useful to introduce the following matrix:
$$A(\beta) = A - \mathrm{diag}(\beta),$$
where
$$\beta = [\beta_1, \dots, \beta_N]^\top.$$
The diagonal of $A(\beta)$ contains the self-game weights $-\beta_v$.
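A minimal numerical sketch of Equation (8) can be obtained with explicit Euler integration. This is our own illustration (the step size and parameters are hypothetical choices, not the integrator used by the authors):

```python
import numpy as np

def sr_egn_trajectory(x0, A, beta, T, S, dt=0.01, steps=20000):
    """Integrate the SR-EGN equation (8),
    x_v' = x_v (1 - x_v) [k_v f(xbar_v) - beta_v f(x_v)],
    by explicit Euler and return the final state."""
    f = lambda y: (1 - T - S) * y + S
    k = A.sum(axis=1)
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        xbar = (A @ x) / k          # neighborhood averages
        x = x + dt * x * (1 - x) * (k * f(xbar) - beta * f(x))
    return x
```

For example, with SH parameters ($T = 0.5$, $S = -1$) on a triangle and $\beta_v = 2 k_v > k_v$, the trajectory approaches the mixed consensus $m = S/(S + T - 1) = 2/3$, in line with the stability results of Section 3.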

2.3. The Concepts of Emergence of Cooperation and Consensus

Figure 2 shows five possible asymptotic configurations in a simple social network with N = 5 individuals.
The color of each node of the graph denotes the level of cooperation of the corresponding player (red for full defectors, yellow for full cooperators, and orange shadings for intermediate levels). The first graph of Figure 2 illustrates a generic configuration, including defectors (players 1 and 5), one cooperator (player 2), and mixed players (players 3 and 4). The second graph in Figure 2 depicts a population without full defectors (i.e., $x_v > 0 \; \forall v \in V$). Moreover, the third graph shows consensus towards the partially cooperative steady state $x^*_v = m \in \mathrm{int}(\Delta) \; \forall v \in V$.
In this study, we focus on the following consensus steady states of (8):
  • Full cooperation (pure strategy): $x^*_{AC} = [1, 1, \dots, 1]^\top$.
  • Full defection (pure strategy): $x^*_{AD} = [0, 0, \dots, 0]^\top$.
  • Partial cooperation (mixed strategy): $x^*_{AM} = [m, m, \dots, m]^\top$, where $m \in \mathrm{int}(\Delta)$.
We will refer to them as consensus steady states, and their stability will be deeply analyzed later in this paper.
Following [24,25,26,28,44], full cooperation is reached when all members of a social network turn their strategies to cooperation. This concept can be formally defined as follows:
Definition 1.
In the SR-EGN Equation (8), consensus on full cooperation emerges if
$$\lim_{t \to +\infty} x_v(t) = 1 \quad \forall v \in V,$$
for any initial condition $x(0) \in S \subseteq \Delta^N$.
Definition 1 corresponds to the asymptotic stability of $x^*_{AC}$ with basin of attraction $S$. When dealing with partially cooperative players, a weaker definition of consensus is required:
Definition 2.
In the SR-EGN Equation (8), consensus on partial cooperation emerges if:
$$\lim_{t \to +\infty} x_v(t) = m \quad \forall v \in V,$$
with $m \in \mathrm{int}(\Delta)$, for any initial condition $x(0) \in S \subseteq \Delta^N$.
Definition 2 corresponds to the asymptotic stability of $x^*_{AM}$ with basin of attraction $S$.
In [34], sufficient conditions for full cooperation and full defection have been found for the PD game (see Main Results 2 and 3). In the next sections, we will develop results on cooperation and consensus for SH and CH games. Specifically, we start by analyzing the asymptotic stability of $x^*_{AC}$, $x^*_{AD}$ and $x^*_{AM}$. Finally, an appropriate Lyapunov function is found, guaranteeing the emergence of partial cooperation starting from any initial condition in $\mathrm{int}(\Delta^N)$ (global asymptotic stability).

3. Results on the Emergence of Cooperative Consensus

3.1. Steady States

A steady state $x^*$ is a solution of Equation (8) satisfying $\dot{x}_v = 0 \; \forall v \in V$. In order to be feasible, the components of the steady state must lie in $\Delta$. Formally, the set of feasible steady states is:
$$\Theta = \left\{ x^* \in \Delta^N : x^*_v (1 - x^*_v) \left[ k_v f(\bar{x}^*_v) - \beta_v f(x^*_v) \right] = 0 \;\; \forall v \in V \right\}.$$
It is clear that all points such that, for all $v$, $x^*_v = 0$ or $x^*_v = 1$ belong to the set $\Theta$. There are $2^N$ such points, and we will refer to them as pure steady states. We denote their set by $\Theta_P \subseteq \Theta$, which includes, among others, $x^*_{AC}$ and $x^*_{AD}$.
Mixed steady states may exist when:
$$k_v f(\bar{x}^*_v) - \beta_v f(x^*_v) = 0 \quad \forall v \in V,$$
and $x^*_v \in \mathrm{int}(\Delta)$. We denote the set of mixed steady states by $\Theta_M \subseteq \Theta$. In [21], it has been shown that, if $1 - T - S \neq 0$, the solution of (10) is $x^*_{AM} = [m, \dots, m]^\top$, where:
$$m = \frac{S}{S + T - 1},$$
feasible when $m \in \mathrm{int}(\Delta)$; this is possible only in SH and CH games. In particular:
  • SH games: $S < 0$ and $0 < T < 1$. Then, $S + T - 1 < S < 0$. Thus, $m \in \mathrm{int}(\Delta)$;
  • CH games: $S > 0$ and $T > 1$. Then, $0 < S < S + T - 1$. Thus, $m \in \mathrm{int}(\Delta)$.
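The feasibility of the mixed consensus value (11) is easy to verify numerically; the small check below is our own sketch, with hypothetical parameter choices for the three game classes:

```python
def mixed_consensus(T, S):
    """Mixed consensus strategy m = S / (S + T - 1), Equation (11).
    Feasible as a strategy only when 0 < m < 1."""
    return S / (S + T - 1)

# Illustrative parameters (our own choices):
# SH: S < 0, 0 < T < 1   -> m in (0, 1)
# CH: S > 0, T > 1       -> m in (0, 1)
# PD: S < 0, T > 1       -> m outside (0, 1), no mixed consensus
```

For instance, with SH parameters $T = 0.5$, $S = -1$ one gets $m = (-1)/(-1.5) = 2/3$, and with CH parameters $T = 2$, $S = 2$ again $m = 2/3$, whereas PD parameters such as $T = 2$, $S = -0.5$ give $m = -1 \notin (0,1)$.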
For the sake of completeness, we remark that the SR-EGN may also have pure-mixed steady states, which belong to the set $\Theta_{PM} = \Theta \setminus (\Theta_P \cup \Theta_M)$. These are not considered in this work, since they do not represent consensus steady states.

3.2. Linearization of SR-EGN Model

The Jacobian matrix of system (8), $J(x) = \{j_{v,w}(x)\}$, is defined as follows:
$$j_{v,w}(x) = \frac{\partial \dot{x}_v}{\partial x_w} = \begin{cases} x_v (1 - x_v)(1 - T - S), & \text{if } a_{v,w} = 1 \\ -\beta_v x_v (1 - x_v)(1 - T - S) + (1 - 2 x_v) \left[ k_v f(\bar{x}_v) - \beta_v f(x_v) \right], & \text{if } w = v \\ 0, & \text{otherwise}. \end{cases}$$
Evaluating the Jacobian matrix at a generic steady state $x^* \in \Theta$, we have the following cases:
  • If $x^*_v \in \{0, 1\}$ (player $v$ uses a pure strategy at steady state), then all non-diagonal entries $j_{v,w}(x^*)$ in (12) with $v \neq w$ are null, while
$$j_{v,v}(x^*) = \begin{cases} k_v f(\bar{x}^*_v) - \beta_v f(x^*_v), & \text{if } x^*_v = 0 \\ -\left[ k_v f(\bar{x}^*_v) - \beta_v f(x^*_v) \right], & \text{if } x^*_v = 1. \end{cases}$$
  • If $x^*_v \in (0, 1)$ (player $v$ uses a mixed strategy at steady state), then, according to Equations (10) and (12), the entries of the $v$-th row of $J(x^*)$ are:
$$j_{v,w}(x^*) = \begin{cases} x^*_v (1 - x^*_v)(1 - T - S), & \text{if } a_{v,w} = 1 \\ -\beta_v x^*_v (1 - x^*_v)(1 - T - S), & \text{if } w = v \\ 0, & \text{otherwise}. \end{cases}$$
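The entrywise formula (12) can be validated against a finite-difference Jacobian of the vector field. The sketch below is our own illustration, on an arbitrary small path graph with arbitrary parameters:

```python
import numpy as np

def rhs(x, A, beta, T, S):
    """Right-hand side of the SR-EGN Equation (8)."""
    f = lambda y: (1 - T - S) * y + S
    k = A.sum(axis=1)
    xbar = (A @ x) / k
    return x * (1 - x) * (k * f(xbar) - beta * f(x))

def jacobian(x, A, beta, T, S):
    """Jacobian (12): off-diagonal entries a_vw x_v(1-x_v)(1-T-S),
    diagonal entries -beta_v x_v(1-x_v)(1-T-S) + (1-2x_v)[k_v f(xbar_v) - beta_v f(x_v)]."""
    f = lambda y: (1 - T - S) * y + S
    k = A.sum(axis=1)
    xbar = (A @ x) / k
    c = 1 - T - S
    J = A * (x * (1 - x) * c)[:, None]
    np.fill_diagonal(J, -beta * x * (1 - x) * c
                        + (1 - 2 * x) * (k * f(xbar) - beta * f(x)))
    return J
```

Comparing this assembly with central finite differences of `rhs` gives agreement to numerical precision, which is a useful sanity check before using (12) in the stability analysis below.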

3.3. Stability of Pure Consensus Steady States $x^*_{AC}$ and $x^*_{AD}$

Recall that the spectrum of $J(x^*)$ characterizes the linear stability of a steady state $x^*$ of the SR-EGN Equation (see [45]). Therefore, the eigenvalues of the Jacobian matrix $J(x^*)$ are related to the emergence of cooperation, provided that $x^*$ is a consensus steady state.
According to Equation (13), the Jacobian matrices evaluated at the consensus steady states $x^*_{AC}$ and $x^*_{AD}$ are both diagonal, and they read as:
$$J(x^*_{AC}) = -(1 - T) \cdot \mathrm{diag}(k - \beta),$$
and
$$J(x^*_{AD}) = S \cdot \mathrm{diag}(k - \beta).$$
The following results hold.
Theorem 1.
  • SH game. If $\beta_v < k_v \; \forall v \in V$, then $x^*_{AC}$ is asymptotically stable for Equation (8);
  • CH game. If $\beta_v > k_v \; \forall v \in V$, then $x^*_{AC}$ is asymptotically stable for Equation (8).
Proof. 
The diagonal elements of the Jacobian matrix evaluated at $x^*_{AC}$ (Equation (15)) correspond to its eigenvalues, and they read as:
$$j_{v,v}(x^*_{AC}) = \lambda_v = -(1 - T)(k_v - \beta_v).$$
In the SH game, $T < 1$ and $\beta_v < k_v$; hence, all eigenvalues are negative. In the CH game, $T > 1$ and $\beta_v > k_v$, and again all eigenvalues are negative. Thus, $x^*_{AC}$ is asymptotically stable in both cases. □
Notice that Theorem 1 extends Main Result 1 of [34] to the SH and CH games. Additionally, Theorem 2 stated in [21] ensures that any asymptotically stable pure steady state is also a Nash equilibrium. Then, under the hypotheses of Theorem 1, $x^*_{AC}$ is a Nash equilibrium of the networked game.
Theorem 2.
  • SH game. If $\beta_v < k_v \; \forall v \in V$, then $x^*_{AD}$ is asymptotically stable for Equation (8);
  • CH game. If $\beta_v > k_v \; \forall v \in V$, then $x^*_{AD}$ is asymptotically stable for Equation (8).
Proof. 
The eigenvalues of the Jacobian matrix relative to the steady state $x^*_{AD}$ (Equation (16)) are:
$$j_{v,v}(x^*_{AD}) = \lambda_v = S(k_v - \beta_v).$$
In the SH game, $S < 0$ and $\beta_v < k_v$; hence, all eigenvalues are negative. In the CH game, $S > 0$ and $\beta_v > k_v$; hence, all eigenvalues are negative. Thus, $x^*_{AD}$ is an asymptotically stable steady state in both cases. □
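The closed-form eigenvalues of Theorems 1 and 2 can be checked in a few lines. This is our own sketch; the degree and parameter values are illustrative:

```python
def pure_state_eigs(k, beta, T, S):
    """Eigenvalues at the two pure consensus states:
    lambda_AC,v = -(1 - T)(k_v - beta_v)   (full cooperation, Theorem 1)
    lambda_AD,v =  S (k_v - beta_v)        (full defection, Theorem 2)."""
    lam_ac = [-(1 - T) * (kv - bv) for kv, bv in zip(k, beta)]
    lam_ad = [S * (kv - bv) for kv, bv in zip(k, beta)]
    return lam_ac, lam_ad
```

In the SH case ($T < 1$, $S < 0$) with $\beta_v < k_v$, both spectra are negative, so $x^*_{AC}$ and $x^*_{AD}$ are simultaneously attractive (bistability); reversing the inequality destabilizes both pure states.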
Theorem 2 represents an extension of Main Result 1 on PD in [34] to SH and CH games.
Following Theorems 1 and 2, when $x^*_{AC}$ and $x^*_{AD}$ are unstable, it is interesting to investigate the presence of other asymptotically stable steady states in $\mathrm{int}(\Delta^N)$. In the next Section 3.4, we find the conditions under which the state $x^*_{AM}$ is asymptotically stable, and we prove that it is also globally asymptotically stable. This means that no other attractive states exist in $\mathrm{int}(\Delta^N)$.

3.4. Stability of Mixed Consensus Steady State $x^*_{AM}$

According to Equations (11) and (14), the Jacobian matrix at the mixed steady state $x^*_{AM}$ is:
$$J(x^*_{AM}) = m(1 - m)(1 - T - S) \left[ A - \mathrm{diag}(\beta) \right] = \frac{S(T - 1)}{1 - T - S} A(\beta).$$
The following result holds.
Theorem 3.
  • SH game. If $\beta_v > k_v \; \forall v \in V$, then $x^*_{AM}$ is asymptotically stable for Equation (8);
  • CH game. If $\beta_v < -k_v \; \forall v \in V$, then $x^*_{AM}$ is asymptotically stable for Equation (8).
Proof. 
If $|\beta_v| > k_v \; \forall v \in V$, then $A(\beta)$ is a strictly diagonally dominant matrix. Indeed, $k_v$ corresponds to the sum of all non-diagonal entries of the $v$-th row of $A(\beta)$, while $-\beta_v$ is its diagonal entry.
In the SH game, since $\beta_v > k_v \; \forall v \in V$ and $A$ has null diagonal, the diagonal entries of $A(\beta)$ are negative. Therefore, from the strict diagonal dominance of $A(\beta)$, it follows that all eigenvalues of $A(\beta)$ are negative. Moreover, since $T < 1$ and $S < 0$, then $1 - T - S > 0$ and $S(T - 1) > 0$. Thus, according to Equation (17), the eigenvalues of $J(x^*_{AM})$ are all negative.
In the CH game, for the same reasons as above, since $\beta_v < -k_v \; \forall v \in V$, all eigenvalues of $A(\beta)$ are positive. Moreover, since $T > 1$ and $S > 0$, then $1 - T - S < 0$ and $S(T - 1) > 0$. Thus, according to Equation (17), the eigenvalues of $J(x^*_{AM})$ are all negative.
In both cases, x A M * is an asymptotically stable steady state for Equation (8). □
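A numerical check of Theorem 3 (our own sketch, on a small complete graph with illustrative SH and CH parameters) can be done by computing the spectrum of $J(x^*_{AM})$ from Equation (17):

```python
import numpy as np

def mixed_state_eigs(A, beta, T, S):
    """Eigenvalues of J(x_AM*) = m(1-m)(1-T-S)(A - diag(beta)), Equation (17)."""
    m = S / (S + T - 1)
    Ab = A - np.diag(beta)                 # A(beta), symmetric for undirected graphs
    return m * (1 - m) * (1 - T - S) * np.linalg.eigvalsh(Ab)
```

On the complete graph with three players ($k_v = 2$), SH parameters $T = 0.5$, $S = -1$ with $\beta_v = 4 > k_v$ and CH parameters $T = 2$, $S = 2$ with $\beta_v = -4 < -k_v$ both yield strictly negative spectra, as the theorem predicts.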
The results of Theorems 1, 2 and 4 are synthesized in Figure 3. In the PD game, if $\beta_v > k_v$ for all members of the population, $x^*_{AD}$ is destabilized and $x^*_{AC}$ becomes attractive (subplot a of Figure 3). Conversely, when $\beta_v < k_v \; \forall v \in V$, defection is dominant (subplot b of Figure 3), while $x^*_{AM}$ is not feasible.
The SH game naturally exhibits bistability, which is a common feature of many social and biological systems [13,41,42,43]. Indeed, in the natural case where $\beta = 0$, we have a repulsive equilibrium $x^*_{AM}$ standing between the two attractive equilibria $x^*_{AC}$ and $x^*_{AD}$. In the SR-EGN, thanks to Theorems 1 and 2, for $\beta_v < k_v \; \forall v \in V$, $x^*_{AC}$ and $x^*_{AD}$ are both attractive, while they are both unstable for $\beta_v > k_v \; \forall v \in V$ (subplots a,b of Figure 3). Moreover, according to Theorem 4, when $\beta_v > k_v \; \forall v \in V$, $x^*_{AM}$ is asymptotically stable (subplot c of Figure 3).
CH games represent an important class of social dilemmas [12,13,25,41], where cooperators and free riders coexist. In the standard replicator equation, total cooperation and total defection are repulsive equilibria, while the mixed steady state is attractive. In the SR-EGN model, when $\beta = 0$, the steady states $x^*_{AM}$, $x^*_{AC}$ and $x^*_{AD}$ are all present and repulsive. When self-regulation is active, and in particular $\beta_v > k_v \; \forall v \in V$, then by Theorems 1 and 2, $x^*_{AC}$ and $x^*_{AD}$ are asymptotically stable (subplots a,b of Figure 3), while for $\beta_v < k_v \; \forall v \in V$ they are unstable. Moreover, according to Theorem 4, when $\beta_v < -k_v \; \forall v \in V$, $x^*_{AM}$ is asymptotically stable (subplot c of Figure 3). In the interval $(-k_v, k_v)$, other steady states belonging to $\Theta_{PM}$ are asymptotically stable.

3.5. Global Stability of $x^*_{AM}$

The consensus on full cooperation can emerge under the condition of Theorem 1. However, the basin of attraction of $x^*_{AC}$ does not correspond to the whole set $\mathrm{int}(\Delta^N)$, since $x^*_{AD}$ is asymptotically stable for the same parameters. Hence, we cannot expect to reach global consensus towards full cooperation. Moreover, we showed in Theorem 3 that $x^*_{AM}$ is asymptotically stable under suitable conditions. We now check whether there are conditions for which it is also globally asymptotically stable.
Hereafter, we introduce a Lyapunov function $V(x)$ allowing us to prove that $x^*_{AM}$ is globally asymptotically stable:
$$V(x) = \sum_{v=1}^N \left[ m \log \frac{m}{x_v} + (1 - m) \log \frac{1 - m}{1 - x_v} \right],$$
for $x \in \mathrm{int}(\Delta^N)$. First of all, notice that $V(x^*_{AM}) = 0$. Moreover, the gradient of $V(x)$ is null at $x^*_{AM}$. Indeed, the partial derivatives of $V(x)$ with respect to $x$ are:
$$\frac{\partial V(x)}{\partial x_v} = -\frac{m}{x_v} + \frac{1 - m}{1 - x_v} = \frac{x_v - m}{x_v (1 - x_v)}.$$
It is straightforward to see that the Hessian matrix $H(x) = \{h_{v,w}(x)\}$ of $V(x)$ is diagonal:
$$h_{v,v}(x) = \frac{\partial^2 V(x)}{\partial x_v^2} = \frac{x_v^2 - 2 m x_v + m}{x_v^2 (1 - x_v)^2}.$$
From (19), it follows that $V(x) \in C^2$ and that $H(x)$ is positive definite $\forall x \in \mathrm{int}(\Delta^N)$. Indeed, the denominator of $h_{v,v}(x)$ is always positive, as is its numerator:
$$x_v^2 - 2 m x_v + m = x_v^2 - 2 m x_v + m^2 + m - m^2 = (x_v - m)^2 + m(1 - m) > 0.$$
This proves that V ( x ) is a strictly convex function in the set int ( Δ N ) . Thus, x AM * is a global minimum of V ( x ) in the set int ( Δ N ) .
The time derivative of $V(x)$ is:
$$\dot{V}(x) = \sum_{v=1}^N \frac{\partial V(x)}{\partial x_v} \, \dot{x}_v = \sum_{v=1}^N \frac{x_v - m}{x_v (1 - x_v)} \, x_v (1 - x_v) \left[ k_v f(\bar{x}_v) - \beta_v f(x_v) \right] = \sum_{v=1}^N (x_v - m) \left[ k_v f(\bar{x}_v) - \beta_v f(x_v) \right].$$
Notice that the function $f$ defined in Equation (7) can be rewritten using the explicit value of $m$ in Equation (11) as follows:
$$f(x_v) = (1 - T - S) x_v + S = (1 - T - S) \left( x_v + \frac{S}{1 - T - S} \right) = (1 - T - S)(x_v - m).$$
Similarly:
$$f(\bar{x}_v) = (1 - T - S)(\bar{x}_v - m).$$
Then, Equation (20) becomes:
$$\dot{V}(x) = (1 - T - S) \sum_{v=1}^N (x_v - m) \left[ k_v (\bar{x}_v - m) - \beta_v (x_v - m) \right].$$
Notice that:
$$k_v (\bar{x}_v - m) = k_v \bar{x}_v - k_v m = k_v \frac{1}{k_v} \sum_{w=1}^N a_{v,w} x_w - \sum_{w=1}^N a_{v,w} m = \sum_{w=1}^N a_{v,w} (x_w - m).$$
This yields:
$$\dot{V}(x) = (1 - T - S) \sum_{v=1}^N (x_v - m) \left[ \sum_{w=1}^N a_{v,w} (x_w - m) - \beta_v (x_v - m) \right] = (1 - T - S) \left( x - x^*_{AM} \right)^\top \left[ A - \mathrm{diag}(\beta) \right] \left( x - x^*_{AM} \right) = \left( x - x^*_{AM} \right)^\top (1 - T - S) A(\beta) \left( x - x^*_{AM} \right) = \left( x - x^*_{AM} \right)^\top M \left( x - x^*_{AM} \right).$$
Indeed, $\dot{V}(x)$ is a quadratic form, where $M = (1 - T - S) A(\beta)$ is a symmetric matrix. Additionally, notice that $\dot{V}(x^*_{AM}) = 0$.
We now prove that $x^*_{AM}$ is globally asymptotically stable.
Theorem 4.
 
Consider the set $\mathrm{int}(\Delta^N)$.
  • SH game. If $\beta_v > k_v \; \forall v \in V$, then $x^*_{AM}$ is globally asymptotically stable for Equation (8);
  • CH game. If $\beta_v < -k_v \; \forall v \in V$, then $x^*_{AM}$ is globally asymptotically stable for Equation (8).
Proof. 
According to Equation (17), the matrix $M$ is related to the Jacobian matrix evaluated at the internal steady state $x^*_{AM}$ as follows:
$$M = \frac{1}{m(1 - m)} J(x^*_{AM}).$$
Since $m(1 - m) > 0$, using the same arguments as in Theorem 3, all eigenvalues of $M$ are negative. Therefore, the quadratic form $\dot{V}(x)$ is negative definite in the set $\mathrm{int}(\Delta^N)$, and then $V(x)$ in Equation (18) is a Lyapunov function in $\mathrm{int}(\Delta^N)$. Thus, $x^*_{AM}$ is a global attractor in $\mathrm{int}(\Delta^N)$. □
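The Lyapunov argument can also be verified numerically. The sketch below is our own illustration (SH parameters, network, and step size chosen for convenience): it integrates Equation (8) by explicit Euler and samples $V$ from Equation (18) along the trajectory, which should decrease towards zero.

```python
import numpy as np

def lyapunov_V(x, m):
    """Lyapunov function (18): sum_v [m log(m/x_v) + (1-m) log((1-m)/(1-x_v))]."""
    return float(np.sum(m * np.log(m / x) + (1 - m) * np.log((1 - m) / (1 - x))))

def simulate_with_V(x0, A, beta, T, S, dt=0.005, steps=40000, every=200):
    """Euler-integrate the SR-EGN equation, sampling V along the trajectory."""
    f = lambda y: (1 - T - S) * y + S
    m = S / (S + T - 1)
    k = A.sum(axis=1)
    x = np.array(x0, dtype=float)
    samples = [lyapunov_V(x, m)]
    for i in range(1, steps + 1):
        xbar = (A @ x) / k
        x = x + dt * x * (1 - x) * (k * f(xbar) - beta * f(x))
        if i % every == 0:
            samples.append(lyapunov_V(x, m))
    return x, samples
```

With $T = 0.5$, $S = -1$, a triangle network, and $\beta_v = 2 k_v > k_v$ (Theorem 4, SH case), the sampled values of $V$ decrease monotonically (up to discretization error) and the state converges to $x^*_{AM} = [2/3, 2/3, 2/3]^\top$.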
Theorem 4 states that system (8) reaches consensus towards partial cooperation. Notice that, under the assumptions of Theorem 4, the hypotheses of Theorems 1 and 2 are not satisfied; hence, both $x^*_{AC}$ and $x^*_{AD}$ are unstable.

4. Discussion and Numerical Results

The theoretical results presented in the previous Section have been numerically tested by means of several simulation experiments.
The first experiment is shown in Figure 4 for the SH and CH games, where the reported numerical solutions of Equation (8) are evaluated over a Scale-Free (SF) network composed of 20 nodes (players) with average degree 4. Subplots a,d of Figure 4 have been carried out using $\beta_v = -2 k_v \; \forall v \in V$; in subplots b,e of Figure 4, we fixed $\beta_v = 0 \; \forall v \in V$; finally, for subplots c,f of Figure 4, we set $\beta_v = 2 k_v \; \forall v \in V$. In all cases, the first and third slices show the dynamics obtained using random initial conditions (i.c.s) close, for all players, to defection and cooperation, respectively. The central slice is obtained using i.c.s randomly chosen in $\mathrm{int}(\Delta^N)$. The bistable dynamics of the two games is visible: i.c.s close to 0 lead to defective asymptotic states, while i.c.s near 1 drive towards cooperation (subplots a,b,f of Figure 4). In these three cases, both Theorems 1 and 2 are satisfied. Subplot e of Figure 4 shows that the solutions converge to pure-mixed steady states ($\beta_v = 0 \; \forall v \in V$). Finally, the parameters used to obtain subplots c,d of Figure 4 satisfy the hypotheses of Theorems 3 and 4, thus ensuring the consensus towards partial cooperation.
In order to evaluate the performance of players' decisions, in Figure 5 (first row) we draw the payoffs $\phi(x_v, x_w)$ obtained by each individual in a two-player game at the stable steady state $x^*_{AM}$, as a function of the parameters $T$ and $S$. The steady state itself is reported in the second row of the same figure. Subplots a,b of Figure 5 show that in a PD game the reward for full cooperation $\phi(x_v, x_w)$ is equal to 1 (white line), while $\phi(x_v, x_w) > 1$ for SH and CH games (green regions). At the same time, $x^*_{AM} < x^*_{AC}$ (subplots c,d of Figure 5). A larger payoff is retrieved by paying the cost of a lower level of cooperation, which in turn is more realistic and achievable. This fact highlights the importance of studying SH and CH games with respect to the over-exploited PD game.
Further experiments, aimed at analyzing the distribution of altruism for different topologies of the population network, have been carried out. In particular, we considered populations of $N = 100$ individuals arranged on Erdös-Rényi (ER) or Scale-Free (SF) random networks with average degree equal to 10. The ER networks are characterized by a binomial degree distribution, with degrees evenly distributed close to the average. The SF networks are characterized by a power-law degree distribution $p(k) \propto k^{-3}$, where there are several peripheral individuals (small $k_v$) and few hubs (high $k_v$).
For each network type, and for each game type, SH ($T = 1$ and $S = -1$) and CH ($T = 2$ and $S = 2$), we simulate 500 instances of the problem in order to obtain robust results. More specifically, for each instance, the starting i.c.s are randomly chosen in the set $\mathrm{int}(\Delta^N)$, the network is randomly generated, and the same value of $\beta_v$ is set for all members of the population: $\beta_v = 10 \; \forall v \in V$ for SH, and $\beta_v = 5 \; \forall v \in V$ for CH. For each simulation, the indicator
$$c_v = x_v(\infty) - \bar{x}_v(\infty)$$
has been evaluated for each player. These values are reported as colored dots in Figure 6. The indicator measures the difference between the cooperation $x_v$ of player $v$ and the average cooperation $\bar{x}_v$ of its neighbors, at steady state. If $c_v > 0$, then player $v$ is considered an altruist, since it cooperates more than the average of its neighbors, while $c_v < 0$ indicates a more selfish behavior. In each subplot, we also depict the degree distributions of the underlying connection networks with superposed gray lines. The subdivision into two subgroups (non-central players and hubs) is observed for all considered games, especially in the SF case, where hubs with very high degree are present. In the SH case (subplots a,c of Figure 6), this subdivision is coherent with the players' connectivity: poorly connected individuals (blue dots) present negative $c_v$ (selfish behavior), while high connectivity drives players to adopt an altruistic strategy, as indicated by positive $c_v$ (purple dots). In the CH case (subplots b,d of Figure 6), the relationship between the degree $k_v$ and $c_v$ vanishes: altruistic as well as selfish players are present, independently of their connectivity. This phenomenon is more prominent in the SF case, due to the higher number of hubs with respect to the ER network.
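The indicator (21) is a one-liner on top of any steady state. The sketch below is our own illustration, on a hypothetical star-network state; it also checks the identity $\sum_v k_v c_v = 0$, which follows from exchanging the order of summation in $\sum_v \sum_w a_{v,w} x_w$:

```python
import numpy as np

def altruism_indicator(A, x):
    """Indicator (21): c_v = x_v - xbar_v.
    Positive for altruists, negative for selfish players."""
    k = A.sum(axis=1)
    return x - (A @ x) / k
```

For instance, on a star where the hub cooperates more than its leaves, the hub has $c_v > 0$ (altruist) while every leaf has $c_v < 0$, consistent with the SH picture of highly connected altruistic hubs in Figure 6.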

5. Conclusions

In this paper, the emergence of cooperative consensus for SH and CH games has been tackled in the framework of the SR-EGN equation, which describes the behavior of a population of randomly interconnected individuals, driven by game theoretic mechanisms, jointly with internal self-regulating factors.
We proved that in both SH and CH games, consensus on the fully cooperative state is asymptotically stable for feasible values of the self-regulating parameters. Unfortunately, the same conditions also ensure the asymptotic stability of the fully defective consensus. Starting from this point, the possibility of observing global convergence of the SR-EGN towards different consensus steady states has also been investigated. In particular, we found a Lyapunov function showing that the unique partially cooperative consensus is globally asymptotically stable. An important consequence of our findings is that the fully cooperative consensus is reached only from a suitable set of initial conditions, while for the partially cooperative one the basin of attraction corresponds to the whole feasible set of strategies. We also showed that the partially cooperative consensus of SH and CH games is more rewarding than the fully cooperative one of a PD game.
The global asymptotic stability of the partially cooperative steady state on the one hand, and the simultaneous stability of full cooperation and full defection, which induces bistable behavior, on the other, can be fruitfully exploited for planning policies that take into account the actual state of the population, described by the initial state of the SR-EGN equation. In the first case, cooperation is independent of the initial state, while in the second a more aware population is required in order to observe full cooperation. Indeed, the control parameters β_v indicate how strongly individuals are able to look at their interactions from the point of view of the others, which in turn depends on their awareness of social dilemmas and conflicts. Hence, improving the education and culture of individuals is the best way to make human populations able to clearly understand societal problems, with the direct consequence that widespread well-being can be achieved. Further investigations are required in this direction. Currently, the authors are working on the capability of the presented model to reproduce real-world cooperation and awareness of people by means of data collected in several European countries. Additionally, the authors are investigating how strongly self-regulating mechanisms in the personal decision-making process affect the control measures adopted to counter the COVID-19 pandemic, assuming that the parameter β_v is influenced by the available information on infections and deaths. All the above results highlight the importance of studying SH and CH games, which can be exploited to understand more deeply the mechanisms fostering cooperation in social networks.
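The bistability for small β_v and the global convergence to the partially cooperative consensus for large β_v can be reproduced numerically. The sketch below is hedged: Equation (8) is not reproduced in this section, so we assume a replicator-type form in which each player weighs the payoff advantage of cooperation signalled by its neighbors against the same advantage evaluated on its own strategy, scaled by β_v; with payoffs normalized to R = 1 and P = 0, that advantage against a y-cooperator is Δp(y) = S + y(1 − S − T), and the partially cooperative consensus is x* = S/(S + T − 1).

```python
import numpy as np

def dp(y, T, S):
    # Assumed payoff advantage of cooperating against a y-cooperator (R=1, P=0).
    return S + y * (1.0 - S - T)

def simulate(x0, A, beta, T, S, dt=0.01, steps=50_000):
    """Forward-Euler integration of the assumed SR-EGN-type dynamics:
    dx_v/dt = x_v (1 - x_v) [ sum_w a_vw dp(x_w) - beta_v dp(x_v) ]."""
    x = x0.astype(float)
    for _ in range(steps):
        x = x + dt * x * (1.0 - x) * (A @ dp(x, T, S) - beta * dp(x, T, S))
        x = np.clip(x, 0.0, 1.0)  # keep strategies in the feasible set
    return x

# Stag-Hunt example (T = 0.5, S = -0.5) on a complete graph of 4 players
# (k_v = 3); the partially cooperative consensus is x* = S/(S+T-1) = 0.5.
A = np.ones((4, 4)) - np.eye(4)

# beta_v > k_v: convergence to the consensus x* = 0.5 from an arbitrary
# interior initial condition (global stability).
x_sr = simulate(np.array([0.1, 0.2, 0.3, 0.4]), A, beta=6.0, T=0.5, S=-0.5)

# beta_v < k_v (here beta_v = 0): bistability -- the outcome depends on
# whether players start closer to full defection or full cooperation.
x_low = simulate(np.full(4, 0.1), A, beta=0.0, T=0.5, S=-0.5)
x_high = simulate(np.full(4, 0.9), A, beta=0.0, T=0.5, S=-0.5)
```

The threshold role of β_v relative to the degree k_v is visible at a consensus state x, where the drive reduces to (k_v − β_v)Δp(x): for β_v > k_v the sign flips and the mixed state attracts, consistently with the simulations of Figure 4.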

Author Contributions

Conceptualization, C.M. and D.M.; methodology, C.M. and D.M.; software, D.M.; formal analysis, C.M. and D.M.; writing—review and editing, C.M. and D.M.; supervision and funding acquisition, C.M. Both authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Science without Borders Program of CNPq/Brazil, grant number 313773/2013–0 and by Regione Toscana, Program FESR 2014–2020.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The study does not report any data.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Different game types according to the values of parameters T and S.
Figure 2. Emergence of cooperation and consensus. Five configurations of a social network of 5 players. The cooperation level of each player is represented by a color ranging from red (full defection, x_v = 0) to yellow (full cooperation, x_v = 1), as reported in the shaded box on the rightmost part of the figure. The first graph shows a generic configuration with cooperators, defectors and mixed players. The graphs highlighted by the blue dashed-line box are examples of the emergence of cooperation, since no full defector is present. In particular, the middle graph corresponds to the steady state x*_AM, while the last one represents the steady state x*_AC. The fifth graph shows a population composed of full defectors only; thus, the SR-EGN converged to the steady state x*_AD. The graphs in the green dashed-line box represent consensus states, since all players reach the same level of cooperation.
Figure 3. Graphical representation of Theorems 1 (subplot a), 2 (subplot b) and 3 (subplot c).
Figure 4. Simulations of the SR-EGN Equation (8) for the SH game ( T = 0.5 and S = −0.5 ) and CH game ( T = 1.5 and S = 0.5 ). In subplots (a,d), β_v = k_v/2 < k_v ∀ v ∈ V; in (b,e), β_v = 0 ∀ v ∈ V; and in (c,f), β_v = 2 k_v > k_v ∀ v ∈ V. In all subplots, the first slice refers to simulations with random initial conditions close to full defection, the middle one to random initial conditions in int ( Δ N ), and the last one to random initial conditions close to full cooperation. The numerical simulations have been performed using the built-in explicit Runge–Kutta (4,5) ODE solver ode45 of Matlab R2020b.
Figure 5. Payoff ϕ ( x_v , x_w ) (Equation (3)) and mixed equilibrium x*_AM (Equation (11)). SH game (subplots a,c): T ∈ [ 0 , 1 ] and S ∈ [ −1 , 0 ]. CH game (subplots b,d): T ∈ [ 1 , 2 ] and S ∈ [ 0 , 1 ]. The white lines indicate the locus of values of S and T for which the players choose full cooperation (subplots a,b) and the corresponding values of x*_AM (subplots c,d).
Figure 6. Altruistic and selfish behavior at steady state. The indicator c_v is evaluated for SH (subplots a,c) and CH (subplots b,d) games in both ER (subplots a,b) and SF (subplots c,d) networks. β_v has been set equal for all players: values of k_v lower and larger than β_v are reported in blue and pink, respectively. The parameters used for the SH game are T = 1, S = −1 and β_v = 10 ∀ v ∈ V, while those of the CH game are T = 2, S = 2 and β_v = 5 ∀ v ∈ V. The numerical simulations have been performed using the built-in explicit Runge–Kutta (4,5) ODE solver ode45 of Matlab R2020b.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.