Article

Multi-Leader Multi-Follower Model with Aggregative Uncertainty

by Lina Mallozzi and Roberta Messalli †
1 Department of Mathematics and Applications, University of Naples Federico II, Via Claudio 21, 80125 Naples, Italy
2 Department of Economic and Statistic Science, University of Naples Federico II, Complesso Monte Sant’Angelo 21, 80125 Naples, Italy
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Submission received: 28 April 2017 / Revised: 9 June 2017 / Accepted: 19 June 2017 / Published: 22 June 2017

Abstract

We study a non-cooperative game with aggregative structure, namely a game in which the payoffs depend on the strategies of the opponent players through an aggregator function. We assume that a subset of players behave as leaders in a Stackelberg model. The leaders, as well as the followers, act non-cooperatively among themselves and solve a Nash equilibrium problem. We assume an exogenous uncertainty affecting the aggregator, and we obtain existence results for the resulting stochastic game. Some examples are illustrated.

1. Introduction

The well-known Nash equilibrium is a solution concept for non-cooperative situations in which all the players act simultaneously, each optimizing his own payoff while taking into account the decisions of the opponents. Players may also not act simultaneously: for example, in the classical Stackelberg leader–follower model, a player called the leader acts first, anticipating the strategy of the opponent, known as the follower, who reacts optimally to the leader’s decision. In this case, the leader’s optimization problem contains a nested optimization task that corresponds to the follower’s optimization problem.
In the case of more than two players, it is possible to have a hierarchy between two groups of players: one group, acting as a leader, decides first, and the other group reacts to the leaders’ decision. It is then necessary to specify the behavior within each group. In several applications, players at the same hierarchical level decide in a non-cooperative way: each player in the group knows that any other group member optimizes his own payoff taking into account the behavior of the rest. Thus, a Nash equilibrium problem is solved within each group and a Stackelberg model is assumed between the two groups. This leads to the multi-leader multi-follower equilibrium concept that we analyze in this paper.
In the literature, this model first appeared in the context of oligopolistic markets (see [1]), with one leader firm and several follower firms acting as Cournot competitors. Other applications can be found, for example, in transportation (see [2]), in the analysis of deregulated electricity markets (see [3]), in water management systems (see [4]), and in wireless networks (see [5]). See [6] for a survey on the topic.
As happens in concrete situations, some uncertainty may appear in the data, and a stochastic model can be formulated. Usually, a random variable may affect the payoffs, and then one can consider the expected payoffs with respect to its probability distribution. The players then optimize the expected payoffs according to the considered solution concept. De Miguel and Xu (see [7]) extend the multiple-leader Stackelberg–Nash–Cournot model studied in [1] to the stochastic case, assuming uncertainty in the demand function: the leaders choose their supply levels first, knowing the demand function only in distribution, and the followers make their decisions after observing the leaders’ supply levels and the realized demand function.
In this paper, we generalize the multi-leader multi-follower equilibrium concept for the class of aggregative games (see [8,9,10]), namely games where each player’s payoff depends on his own actions and an aggregate of the actions of all the players. Many common games in industrial organization, public economics and macroeconomics are aggregative games.
The concept of aggregative games goes back to Selten ([11]), who considered the summation of the players’ strategies as the aggregation function. Later, this concept was studied for other aggregation functions and generalized to the concept of quasi-aggregative games (see [7,8,9,10,12]). Computational results for the class of aggregative games have also been investigated (see, for example, [13,14]).
We present the multi-leader multi-follower equilibrium model under uncertainty, assuming an exogenous uncertainty affecting the aggregator, and we obtain existence results for the resulting stochastic game both in the smooth case of nice aggregative games, where payoff functions are continuous and concave in the players’ own strategies, and in the general case of aggregative games with strategic substitutes. Applicative examples, such as the global emission game and the teamwork project game, are illustrated.
We point out that our results hold for the general class of aggregative games and generalize the ones obtained by De Miguel and Xu (see [7]) and by Nakamura (see [15]) for the Cournot oligopoly games. Moreover, we briefly discuss the experimental evaluation based on the Sample Average Approximation (SAA) method (see [16]) for the global emission game.
After the introduction, some preliminaries are recalled in Section 2. The model is presented in Section 3, then studied in the smooth case in Section 4 and in the strategic substitutes case in Section 5, providing existence theorems and examples. Some concluding remarks are in Section 6.

2. Preliminaries

Let us consider a parametric non-cooperative game in normal form $\Gamma_t = \langle I, (S_i, \pi_i)_{i \in I}, t \rangle$ where
  • $I = \{1, \ldots, I\}$ is the finite set of players ($I \in \mathbb{N}$ is a natural number);
  • for any $i \in I$, $S_i \subseteq \mathbb{R}^N$ is the finite-dimensional strategy set and $\pi_i : S \times T \to \mathbb{R}$ the payoff function of player $i$;
  • $t \in T \subseteq \mathbb{R}^M$ is a vector of exogenous parameters.
We denote $S = \prod_{i=1}^{I} S_i$ and $S_{-i} = \prod_{j \neq i} S_j$, $i \in I$, and we recall some definitions that are useful for a better understanding of this paper.
Definition 1.
The game $\Gamma_t = \langle I, (S_i, \pi_i)_{i \in I}, t \rangle$ is called aggregative if there exist a continuous and additively separable function $g : S \to \mathbb{R}$ (the aggregator) and functions $\Pi_i : S_i \times \mathbb{R} \times T \to \mathbb{R}$ (the reduced payoff functions) such that for each player $i \in I$:
$\pi_i(s_i, s_{-i}, t) = \Pi_i(s_i, g(s), t)$
for $s_i \in S_i$, for all $s \in S$ and for all $t \in T$ (see [8,12,17]).
Recall that a function $g : S \to \mathbb{R}$ is additively separable if there exist strictly increasing functions $H : \mathbb{R} \to \mathbb{R}$ and $h_i : S_i \to \mathbb{R}$, $i \in I$, such that $g(s) = H\left(\sum_{i \in I} h_i(s_i)\right)$ for all $s \in S$ (see [18]).
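As a concrete illustration (not taken from the original text), two aggregators that fit this definition are the plain sum used in Cournot-type models and an exponential transform of the sum:
$ g(s) = \sum_{i \in I} s_i \quad \big(H = \mathrm{id},\ h_i(s_i) = s_i\big), \qquad g(s) = \exp\Big(\sum_{i \in I} s_i\Big) \quad \big(H = \exp,\ h_i(s_i) = s_i\big). $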
Let us suppose that $S_i$ is equipped with a partial order $\ge$ that is transitive, reflexive and antisymmetric¹ (see [19]). Given $S_i' \subseteq S_i$, $\bar{s} \in S_i$ is called an upper bound for $S_i'$ if $\bar{s} \ge x$ for all $x \in S_i'$; $\bar{s}$ is called the supremum of $S_i'$ (denoted by $\sup(S_i')$) if for all upper bounds $s$ of $S_i'$, $s \ge \bar{s}$. Lower bounds and infima are defined analogously. A point $x$ is a maximal element of $S_i$ if there is no $y \in S_i$ such that $y > x$ (that is, no $y$ such that $y \ge x$ but not $x \ge y$); it is the largest element of $S_i$ if $x \ge y$ for all $y \in S_i$. Minimal and smallest elements are defined similarly. A set may have many maximal and minimal elements, but it can have at most one largest and one smallest element.
The set $S_i$ is a lattice if $s, s' \in S_i$ implies $s \wedge s', s \vee s' \in S_i$, where $s \wedge s'$ and $s \vee s'$ denote, respectively, the infimum and the supremum of $s$ and $s'$. The lattice is complete if for all nonempty subsets $S_i' \subseteq S_i$, $\inf(S_i') \in S_i$ and $\sup(S_i') \in S_i$.
The real line (with the usual order) is a lattice, and any compact subset of it is in fact a complete lattice, as is any set in $\mathbb{R}^n$ formed as the product of $n$ compact sets (with the product order).
A sublattice $S_i'$ of a lattice $S_i$ is a subset of $S_i$ that is closed under $\vee$ and $\wedge$. A complete sublattice $S_i'$ is a sublattice such that the infimum and the supremum of every subset of $S_i'$ belong to $S_i'$.
A function $\pi_i : S_i \times S_{-i} \times T \to \mathbb{R}$ is supermodular in $s_i$ if, for all fixed $(s_{-i}, t) \in S_{-i} \times T$,
$\pi_i(s_i \vee s_i', s_{-i}, t) - \pi_i(s_i', s_{-i}, t) \ge \pi_i(s_i, s_{-i}, t) - \pi_i(s_i \wedge s_i', s_{-i}, t)$
for all $s_i, s_i' \in S_i$.
Supermodularity represents the economic notion of complementary inputs. The theory of supermodular optimization has been developed by Topkis (see [20,21]), Vives (see [10,22]) and Granot and Veinott (see [23]). The following result is a characterization of supermodularity for twice continuously differentiable functions with Euclidean domains. The standard order on such domains is the so-called “product order”, i.e., $x \ge y$ if and only if $x_i \ge y_i$ for all $i$.
Topkis’s Characterization Theorem
Let $I = [\underline{s}, \bar{s}]$ be an interval in $\mathbb{R}^n$. Suppose that $\pi : \mathbb{R}^n \to \mathbb{R}$ is twice continuously differentiable on some open set containing $I$. Then, $\pi$ is supermodular on $I$ if and only if, for all $s \in I$ and all $i \neq j$, $\partial^2 \pi / \partial s_i \partial s_j \ge 0$.
A function $\pi_i : S_i \times S_{-i} \times T \to \mathbb{R}$ exhibits decreasing differences in $s_i$ and $s_{-i}$ if, for all $t \in T$ and $s_i' > s_i$, the function $\pi_i(s_i', s_{-i}, t) - \pi_i(s_i, s_{-i}, t)$ is non-increasing in $s_{-i}$.
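A computational check of these differential criteria can be useful in examples: for smooth payoffs, supermodularity corresponds to non-negative cross-partials (Topkis’s characterization above) and decreasing differences in $s_i$ and $s_{-i}$ to non-positive ones. The sketch below estimates the cross-partial numerically on a grid for a simple quadratic payoff; the payoff `pi1` and the parameter values are hypothetical, chosen only for illustration.

```python
import numpy as np

def cross_partial(pi, s, i, j, h=1e-4):
    """Central finite-difference estimate of d^2 pi / ds_i ds_j at the point s."""
    def shift(v, k, d):
        w = v.copy()
        w[k] += d
        return w
    return (pi(shift(shift(s, i, h), j, h)) - pi(shift(shift(s, i, h), j, -h))
            - pi(shift(shift(s, i, -h), j, h)) + pi(shift(shift(s, i, -h), j, -h))) / (4 * h * h)

# Hypothetical payoff of player 1 in a two-player game with aggregate s1 + s2:
# linear benefit minus quadratic own cost minus quadratic damage from the aggregate.
a, b = 1.0, 0.5
pi1 = lambda s: a * s[0] - 0.5 * s[0] ** 2 - 0.5 * b * (s[0] + s[1]) ** 2

# Decreasing differences in s1 and s2 <=> d^2 pi1 / ds1 ds2 <= 0 on the whole domain.
grid = np.linspace(0.0, 1.0, 5)
dd = all(cross_partial(pi1, np.array([x, y]), 0, 1) <= 1e-8 for x in grid for y in grid)
print("decreasing differences on the grid:", dd)  # True here, since the cross-partial equals -b
```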
We deal in the following with parametric aggregative games
$\Gamma_t = \langle I, (S_i, \Pi_i)_{i \in I}, g, t \rangle,$
where $g$ is the additively separable aggregator and the $\Pi_i(s_i, g(s), t)$, which are real-valued functions defined on $S_i \times \mathbb{R} \times T$ for any $i \in I$, are the reduced payoffs.
Given these notions, it is useful to recall two existence results obtained by Acemoglu and Jensen (see [12]).
In order to give the first result, let us introduce the following definition:
Definition 2.
An aggregative game $\Gamma_t = \langle I, (S_i, \Pi_i)_{i \in I}, g, t \rangle$ is said to be a nice aggregative game for any $t \in T$ if:
  • the aggregator $g$ is twice continuously differentiable;
  • each strategy set $S_i$ is compact and convex, and every payoff function $\pi_i(s, t) = \Pi_i(s_i, g(s), t)$ is twice continuously differentiable and pseudo-concave in the player’s own strategies²;
  • for each player, the first-order conditions hold whenever a boundary strategy is a best response, i.e., $D_{s_i} \Pi_i(s_i, g(s), t) = 0$ whenever $s_i \in \partial S_i$ and $(v - s_i)^T D_{s_i} \Pi_i(s_i, g(s), t) \le 0$ for all $v \in S_i$.
Thus, under convexity assumptions, the following holds:
Theorem 1.
Let $\Gamma_t = \langle I, (S_i, \Pi_i)_{i \in I}, g, t \rangle$ be a nice aggregative game for any $t \in T$. Then, there exists an equilibrium $s^*(t) \in S$, and also the smallest and largest equilibrium aggregates $Q_*(t)$ and $Q^*(t)$³ exist. Moreover, $Q_* : T \to \mathbb{R}$ is a lower semi-continuous function and $Q^* : T \to \mathbb{R}$ is an upper semi-continuous function.
The second useful result is another existence result, obtained without any convexity assumption, but under assumptions of supermodularity and decreasing differences.
Definition 3.
The game $\Gamma_t = \langle I, (S_i, \Pi_i)_{i \in I}, g, t \rangle$ is an aggregative game with strategic substitutes for any $t \in T$ if it is aggregative, the strategy sets are lattices, and each player’s payoff function $\pi_i(s_i, s_{-i}, t)$ is supermodular in $s_i$ and exhibits decreasing differences in $s_i$ and $s_{-i}$.
Theorem 2.
Let $\Gamma_t = \langle I, (S_i, \Pi_i)_{i \in I}, g, t \rangle$ be an aggregative game with strategic substitutes for any $t \in T$. Then, there exists an equilibrium $s^*(t) \in S$, and also the smallest and largest equilibrium aggregates $Q_*(t)$ and $Q^*(t)$ exist. Moreover, $Q_* : T \to \mathbb{R}$ is a lower semi-continuous function and $Q^* : T \to \mathbb{R}$ is an upper semi-continuous function.

3. The Model

We consider a parametric $(M+N)$-player game where $M$ players (the leaders) have the leadership in the decision process: they commit to a strategy knowing the best-reply response of the other $N$ players (the followers), who are involved in a non-cooperative Nash equilibrium problem. Here, $M, N \in \mathbb{N}_0 = \{0, 1, 2, \ldots\}$; if $M$ or $N$ is equal to zero, the game reduces to a non-cooperative Nash equilibrium problem.
We consider an aggregative structure and let $\langle M+N, (U_i)_{i=1}^{M}, (V_j)_{j=1}^{N}, (l_i)_{i=1}^{M}, (f_j)_{j=1}^{N}, g, t \rangle$ be the normal form of the game where, for every $i \in \{1, \ldots, M\}$, the finite-dimensional strategy set of leader $i$ is denoted by $U_i \subseteq \mathbb{R}_+$, and, for every $j \in \{1, \ldots, N\}$, the finite-dimensional strategy set of follower $j$ is denoted by $V_j \subseteq \mathbb{R}_+$. Let us denote $U = \prod_{i=1}^{M} U_i$ and $V = \prod_{j=1}^{N} V_j$.
The aggregative structure means that there exists an aggregator function $g : U \times V \to \mathbb{R}$ that is additively separable:
$g(x, y) = H\left(\sum_{i=1}^{M} x_i + \sum_{j=1}^{N} y_j\right) = H(X + Y),$
where $x = (x_1, \ldots, x_M)$ with $x_i \in U_i$, $i = 1, \ldots, M$, $y = (y_1, \ldots, y_N)$ with $y_j \in V_j$, $j = 1, \ldots, N$, and $H : \mathbb{R} \to \mathbb{R}$ is a strictly increasing function.
In order to introduce the payoff functions of both leaders and followers, which in our model are profits, let us suppose that there is a shock in the game that hits the payoff functions.
Since the $M$ leaders must make a decision at the present time on their future strategies, they must decide their strategies before the shock is realized, while the $N$ followers can wait to choose their strategies, so they first observe the strategies chosen by the $M$ leaders and the realization of the shock.
Let us represent this shock by a continuous random variable $\xi : \Omega \to \mathbb{R}$, where $\Omega$ is the set of all possible events. Of course, we obtain a different payoff for every realization of the random variable $\xi$, and thus, through the distribution of $\xi$, we can characterize the uncertainty in the payoff functions.
The $j$th follower chooses his strategy after observing the realization of the shock and the strategies chosen by all the leaders, and he keeps fixed the aggregate quantity of the leaders and the quantities of the other followers. Thus, denoting by $T$ the set of all possible realizations of the random variable and by $f_j : V_j \times \mathbb{R} \times T \to \mathbb{R}$ his payoff function, for all $(x, y) \in U \times V$,
$f_j(y_j, g(x, y), t) = f_j(y_j, H(X + Y), t)$
gives the profit he receives when $t$ is the realization of the random variable. For a fixed $x \in U$ and $t \in T$, the followers solve a Nash equilibrium problem:
$\max_{y_j \in V_j} f_j(y_j, H(X + y_j + Y_{-j}), t) \qquad (1)$
for any $j = 1, \ldots, N$, where $Y_{-j} = \sum_{k \neq j} y_k$ and $X = \sum_{i=1}^{M} x_i$ is the aggregate of the leaders’ committed strategies.
The $i$th leader chooses his strategy knowing the payoff function only in distribution, since the shock $\xi(\omega)$ has not been realized yet: for all $(x, y) \in U \times V$, his profit is given by the function $l_i : U_i \times \mathbb{R} \times \xi(\Omega) \to \mathbb{R}$ defined as
$l_i(x_i, g(x, y), \xi(\omega)) = l_i(x_i, H(X + Y), \xi(\omega)).$
Moreover, since he acts simultaneously with all the other leaders, he must take into account that the strategies of the other leaders, $x_{-i} \in U_{-i} = \prod_{k \neq i} U_k$, are fixed, and, since he acts before every follower, he must also consider the reaction of the followers to the aggregate leaders’ strategy, which is a solution to problem (1), i.e., $y_1(H(X, \xi(\omega))), \ldots, y_N(H(X, \xi(\omega)))$. Then, setting $Y(H(X, \xi(\omega))) = \sum_{j=1}^{N} y_j(H(X, \xi(\omega)))$, each leader considers the expectation with respect to $\xi(\omega)$ of his profit $l_i(x_i, H(X + Y(H(X, \xi(\omega)))), \xi(\omega))$ and solves the problem:
$\max_{x_i \in U_i} \mathbb{E}\left[l_i(x_i, H(x_i + X_{-i} + Y(H(x_i + X_{-i}, \xi(\omega)))), \xi(\omega))\right],$
where $X_{-i} = \sum_{k \neq i} x_k$.
In the following, $U$ and $V$ are assumed to be compact, each $f_j : V_j \times \mathbb{R} \times T \to \mathbb{R}$ is assumed to be upper semi-continuous on $V_j \times \mathbb{R} \times T$ and continuous in $\mathbb{R} \times T$, and, analogously, each $l_i : U_i \times \mathbb{R} \times \xi(\Omega) \to \mathbb{R}$ is assumed to be upper semi-continuous on $U_i \times \mathbb{R} \times \xi(\Omega)$ and continuous in $\mathbb{R} \times \xi(\Omega)$.
Suppose now that the followers’ problem (1) has a unique solution.
Definition 4.
A multi-leader multi-follower equilibrium with aggregate uncertainty (MLMFA equilibrium) is an $(M+N)$-tuple
$(x_1, \ldots, x_M, y_1(H(X, \cdot)), \ldots, y_N(H(X, \cdot))),$
such that
$\mathbb{E}\left[l_i(x_i, H(X + Y(H(X, \xi(\omega)))), \xi(\omega))\right] = \max_{x_i \in U_i} \mathbb{E}\left[l_i(x_i, H(x_i + X_{-i} + Y(H(x_i + X_{-i}, \xi(\omega)))), \xi(\omega))\right]$
for any $i = 1, \ldots, M$, where
$y_j(H(X, t)) \in \operatorname{argmax}_{y_j \in V_j} f_j(y_j, H(X + y_j + Y_{-j}(H(X, t))), t) \qquad (4)$
for any $j = 1, \ldots, N$, and $(y_1(H(X, t)), \ldots, y_N(H(X, t)))$ is the Nash equilibrium among the followers given the aggregate leaders’ strategy and the realized shock $\xi(\omega) = t$, for all $t \in T$.
The given definition generalizes to aggregative games the multiple-leader Stackelberg equilibrium given by De Miguel and Xu in the case of a Nash–Cournot oligopoly game (see [7]).
If $\xi(\omega)$ is a continuous random variable, we suppose that it has density function $\rho(t)$ with support $T$, and we can rewrite the leaders’ payoff functions in the following way:
$\mathbb{E}\left[l_i(x_i, H(X + Y(H(X, \xi(\omega)))), \xi(\omega))\right] = \int_T l_i(x_i, H(x_i + X_{-i} + Y(H(x_i + X_{-i}, t))), t)\, \rho(t)\, dt$
for any $i = 1, \ldots, M$.
If $\xi : \Omega \to \mathbb{R}$ is a discrete random variable, i.e., $\xi(\Omega)$ is finite or countable, $\xi(\Omega) = \{t_1, \ldots, t_h, \ldots\}$, the leaders’ payoff functions are defined by
$\mathbb{E}\left[l_i(x_i, H(X + Y(H(X, \xi(\omega)))), \xi(\omega))\right] = \sum_{h} l_i(x_i, H(x_i + X_{-i} + Y(H(x_i + X_{-i}, t_h))), t_h)\, p(t_h)$
for $i = 1, \ldots, M$, where $p(t_h) = P(\xi(\omega) = t_h)$, i.e., the probability that the realization of the random variable is $t_h$, for any $h \in \mathbb{N}$.
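In the discrete case, the expected payoff is a finite (or countable) weighted sum, which is straightforward to evaluate. The sketch below does this for a hypothetical specification: the reduced profit `l_i`, the aggregate follower reaction `Y_reaction`, the identity aggregator `H` and the shock support are all illustrative placeholders, not objects defined in the paper.

```python
import numpy as np

# Hypothetical discrete shock: realizations t_h with probabilities p(t_h).
t_vals = np.array([0.0, 0.5, 1.0])
p_vals = np.array([0.25, 0.50, 0.25])            # probabilities, summing to one

H = lambda z: z                                   # identity aggregator (an assumed choice)

def Y_reaction(X, t):
    """Placeholder aggregate follower reaction to the leaders' aggregate X and the shock t."""
    return max(0.0, 1.0 - 0.5 * (X + t))          # assumed linear response, for illustration only

def l_i(x_i, aggregate, t):
    """Placeholder reduced profit of a leader: shock-shifted linear revenue minus quadratic cost."""
    return x_i * (2.0 + 0.5 * t - aggregate) - 0.1 * x_i ** 2

def expected_leader_payoff(x_i, X_minus_i):
    """E[l_i] computed as the finite sum over the shock realizations (discrete case)."""
    X = x_i + X_minus_i
    return sum(p * l_i(x_i, H(X + Y_reaction(X, t)), t) for t, p in zip(t_vals, p_vals))

print(expected_leader_payoff(x_i=0.4, X_minus_i=0.6))
```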
Remark 1.
In the deterministic case, when $M = N = 1$, the model corresponds to the classical Stackelberg leader–follower problem (see [25]). The case $M = 1$ and $N \ge 1$ was introduced in the oligopolistic market context in [1,26] and studied from a computational point of view in [27], where it has been called an MPEC; it has also been applied in several other contexts, for example in transportation in [2].

4. The Regular Case

In order to prove the existence and uniqueness of the followers’ Nash equilibrium and the existence of a multi-leader multi-follower equilibrium with aggregate uncertainty, we will use the following assumptions, already presented in the literature (see [12,28]).
Assumption A1.
The aggregative game $\langle M+N, (U_i)_{i=1}^{M}, (V_j)_{j=1}^{N}, (l_i)_{i=1}^{M}, (f_j)_{j=1}^{N}, g, t \rangle$ is a nice aggregative game for every $t \in T$, i.e.,
  • the aggregator $g$ is twice continuously differentiable;
  • every strategy set $U_i$ and $V_j$, for $i = 1, \ldots, M$ and $j = 1, \ldots, N$, is compact and convex;
  • the payoff functions $l_i(x_i, g(x, y), t)$ and $f_j(y_j, g(x, y), t)$ are twice continuously differentiable and pseudo-concave in the player’s own strategy for all $i = 1, \ldots, M$ and for all $j = 1, \ldots, N$;
  • $D_{x_i} l_i(x_i, g(x, y), t) = 0$ whenever $x_i \in \partial U_i$ and $(v - x_i) D_{x_i} l_i(x_i, g(x, y), t) \le 0$ for all $v \in U_i$, and $D_{y_j} f_j(y_j, g(x, y), t) = 0$ whenever $y_j \in \partial V_j$ and $(u - y_j) D_{y_j} f_j(y_j, g(x, y), t) \le 0$ for all $u \in V_j$, for all $i = 1, \ldots, M$ and for all $j = 1, \ldots, N$ (that is, the first-order conditions hold whenever a best response is on the boundary).
Remark 2.
By using these assumptions on $l_i$, for all $i = 1, \ldots, M$, and the theorem of differentiation under the integral sign, it also follows that the expected payoff functions $\mathbb{E}\left[l_i(x_i, H(X + Y(H(X, \xi(\omega)))), \xi(\omega))\right]$ satisfy Assumption 1 for any $i = 1, \ldots, M$.
Now let us give another assumption on the followers’ side.
Let us introduce, for any $t \in T$ and $j = 1, \ldots, N$, the marginal profit, which we denote by
$\pi_j(y_j, H(X+Y), t) := D_1 f_j(y_j, H(X+Y), t) + D_2 f_j(y_j, H(X+Y), t)\, \frac{\partial H}{\partial y_j}(X+Y),$
where $D_1 f_j = \frac{\partial f_j}{\partial y_j}$ and $D_2 f_j = \frac{\partial f_j}{\partial H}$.
Assumption A2.
If $(y_j, H(X+Y))$ satisfies $y_j < H(X+Y)$ and the marginal profit $\pi_j(y_j, H(X+Y), t) = 0$, then
(i)
$\frac{\partial \pi_j}{\partial y_j} < 0$, where $\frac{\partial \pi_j}{\partial y_j} = D_{11} f_j(y_j, H(X+Y), t) + 2 D_{12} f_j(y_j, H(X+Y), t)\, \frac{\partial H}{\partial y_j}(X+Y) + D_{22} f_j(y_j, H(X+Y), t) \left(\frac{\partial H}{\partial y_j}(X+Y)\right)^{2} + D_2 f_j(y_j, H(X+Y), t)\, \frac{\partial^2 H}{\partial y_j^2}(X+Y)$;
(ii)
$y_j \frac{\partial \pi_j}{\partial y_j} + H(X+Y) \frac{\partial \pi_j}{\partial H} < 0$.
Note that (i) corresponds to the law of diminishing marginal utility, while (ii) is assumed in order to obtain that the share functions are strictly decreasing and so to ensure the uniqueness of the followers’ Nash equilibrium (see [28]).
Example 1.
(see [7]). An oligopolistic situation is given with $M+N$ firms that supply a homogeneous product non-cooperatively. The $M$ leader firms announce their quantities in $U_1, \ldots, U_M$, and the remaining $N$ firms react by choosing a Cournot–Nash equilibrium in $V_1, \ldots, V_N$. We consider:
  • $U_i$ and $V_j$, for all $i = 1, \ldots, M$ and for all $j = 1, \ldots, N$, are compact subsets of $\mathbb{R}_+$⁴;
  • the aggregator is the sum of the strategies, i.e., $g(x, y) = X + Y$;
  • given an exogenous random variable $\xi(\omega)$, the payoff function for every leader $i$ is
    $\mathbb{E}\left[l_i(x_i, X + Y(X, \xi(\omega)), \xi(\omega))\right] = \int_T x_i\, p(x_i + X_{-i} + Y(x_i + X_{-i}, t), t)\, \rho(t)\, dt - C_i(x_i),$
    where $p$ is the inverse demand function, which depends on the aggregate quantity and the random variable $\xi(\omega)$, and $C_i$ is the $i$th leader’s cost function, with $p$ and $C_i$, for all $i = 1, \ldots, M$, twice continuously differentiable functions;
  • the payoff function for every follower $j$ is
    $f_j(y_j, X + Y, t) = y_j\, p(X + y_j + Y_{-j}, t) - c_j(y_j),$
    where $c_j$ is the $j$th follower’s cost function, which is twice continuously differentiable, and $t$ is the realization of the random variable.
Remark 3.
Note that Assumptions 1 and 2 hold for Example 1.
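As a quick check, consider an additional illustrative specification that is not stated in the example: a linear inverse demand $p(Q, t) = a + t - bQ$ and quadratic costs $c_j(y_j) = \frac{c}{2} y_j^2$, with $a, b, c > 0$ and $H$ the identity. Then
$ f_j(y_j, X+Y, t) = y_j\big(a + t - b(X + y_j + Y_{-j})\big) - \tfrac{c}{2}\, y_j^2, \qquad \pi_j(y_j, X+Y, t) = a + t - b\,(X+Y) - (b + c)\, y_j, $
$ \frac{\partial \pi_j}{\partial y_j} = -2b - c < 0, \qquad y_j\, \frac{\partial \pi_j}{\partial y_j} + (X+Y)\, \frac{\partial \pi_j}{\partial H} = -(2b + c)\, y_j - b\,(X+Y) < 0 \quad \text{for } 0 \le y_j < X+Y, $
so conditions (i) and (ii) of Assumption 2 hold, while the smoothness and pseudo-concavity requirements of Assumption 1 are immediate for this specification.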

4.1. Existence of an MLMFA Equilibrium

In this section, based on the results obtained by Acemoglu and Jensen in [12] and by Cornes and Hartley in [28], we give theorems that establish first the existence and uniqueness of the followers’ Nash equilibrium and then the existence of a multi-leader multi-follower equilibrium with aggregate uncertainty.
Fixing $x = (x_1, \ldots, x_M)$, for any $t \in T$, we consider the reduced aggregative game $\langle N, (V_j, f_j)_{j=1}^{N}, g, t \rangle$.
Theorem 3.
Under Assumptions 1 and 2, the following hold:
(i)
there exists an equilibrium $(y_1(H(X, t)), \ldots, y_N(H(X, t))) \in V$, i.e., this $N$-tuple satisfies (4) with $X = \sum_{i=1}^{M} x_i$;
(ii)
denoting by $Q(x, t) = g(x, y_1(H(X, t)), \ldots, y_N(H(X, t)))$ an equilibrium aggregate given $t$ and $x$, there exist the smallest and largest equilibrium aggregates with respect to $t$, with $x$ fixed, denoted by $Q_*(x, t)$ and $Q^*(x, t)$, respectively;
(iii)
$Q_* : U \times T \to \mathbb{R}$ is a lower semi-continuous function for every $x$, and $Q^* : U \times T \to \mathbb{R}$ is an upper semi-continuous function for every $x$;
(iv)
the equilibrium $(y_1(H(X, t)), \ldots, y_N(H(X, t)))$ is unique.
Proof. 
Result (i) is obtained easily by applying Kakutani’s fixed point theorem. In fact, the best-reply correspondences are upper hemi-continuous and have convex values since, by Assumption 1, the $f_j$, $j = 1, \ldots, N$, are quasi-concave functions (because pseudo-concavity implies quasi-concavity).
Points (ii) and (iii) follow straightforwardly from Theorem 1.
Point (iv) follows from Cornes–Hartley (see [28]) because of Assumption 2. ☐
Remark 4.
By points (ii) and (iv), it follows that $Q_*(x, t) = Q^*(x, t) = Q(x, t)$ and so, by point (iii), we can conclude that the function $Q : U \times T \to \mathbb{R}$ is continuous.
Theorem 4.
Under Assumptions 1 and 2, there exists an MLMFA equilibrium.
Proof. 
The existence follows from Theorem 3, Remark 2 and Theorem 1. ☐
Example 2.
(Global Emission Game)
Let us consider the game
$\langle 4, (U_i, l_i)_{i=1}^{2}, (V_j, f_j)_{j=3}^{4}, g, t \rangle,$
where, fixing $s_{max} > 0$, $U_i = [0, s_{max}]$ for $i = 1, 2$, $V_j = [0, s_{max}]$ for $j = 3, 4$ and, denoting by $s = (s_1, s_2, s_3, s_4)$ the vector of strategies, $g(s) = \sum_{h=1}^{4} s_h$. Assume a random variable $\xi(\omega)$ with uniform density $\rho(t) = \frac{1}{T}$, $t \in [0, T]$ ($T > 0$). Let us consider $\alpha, \beta > 0$. Then,
  • for any $t \in [0, T]$, the payoff functions for the followers are
    $f_j(s_j, g(s), t) = \alpha s_j - \frac{s_j^2}{2} - \frac{\beta}{2}\,(s_1 + s_2 + s_3 + s_4 + t)^2$
    for $j = 3, 4$;
  • the payoff functions for the leaders are
    $\mathbb{E}\left[l_i(s_i, g(s), \xi(\omega))\right] = \alpha s_i - \frac{s_i^2}{2} - \frac{\beta}{2} \int_0^T (s_1 + s_2 + s_3 + s_4 + t)^2\, \frac{1}{T}\, dt$
    for $i = 1, 2$.
Fixing $(s_1, s_2)$ and $t$, the followers choose the unique symmetric Nash equilibrium
$s_3(s_1, s_2, t) = s_4(s_1, s_2, t) = \frac{\alpha - \beta\,(s_1 + s_2 + t)}{1 + 2\beta},$
which is nonnegative under the condition $\alpha \ge \beta\,(2 s_{max} + T)$. Thus, the leaders maximize, with respect to their own strategy, the following payoff function:
$\mathbb{E}\left[l_i(s_i, g(s), \xi(\omega))\right] = \alpha s_i - \frac{s_i^2}{2} - \frac{\beta}{2} \int_0^T \left(\frac{s_1 + s_2 + 2\alpha + t}{1 + 2\beta}\right)^2 \frac{1}{T}\, dt,$
and the MLMFA equilibrium is $(s_1^*, s_2^*, s_3^*, s_4^*)$ with
$s_1^* = s_2^* = \frac{2\alpha\,(1 + 2\beta)^2 - \beta\,(4\alpha + T)}{2\,[(1 + 2\beta)^2 + 2\beta]},$
$s_3^* = s_4^* = \frac{(1 + 2\beta)^2\,(\alpha - \beta t - 2\alpha\beta) + 2\beta\,(\alpha - \beta t + 2\alpha\beta) + \beta^2 T}{[(1 + 2\beta)^2 + 2\beta]\,(1 + 2\beta)},$
for all $t \in [0, T]$, $\beta > 0$ and $\alpha \ge \beta\,(2 s_{max} + T)$.
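For completeness, the first-order conditions behind these expressions (a sketch reconstructed from the payoffs stated above) are:
$ \text{followers } (j = 3, 4): \quad \alpha - s_j - \beta\,(s_1 + s_2 + s_3 + s_4 + t) = 0, \quad \text{which, imposing } s_3 = s_4, \text{ gives } s_3 = s_4 = \frac{\alpha - \beta\,(s_1 + s_2 + t)}{1 + 2\beta}; $
$ \text{leaders } (i = 1, 2): \quad \alpha - s_i - \frac{\beta}{(1 + 2\beta)^2}\left(s_1 + s_2 + 2\alpha + \frac{T}{2}\right) = 0, \quad \text{using } \int_0^T \frac{t}{T}\, dt = \frac{T}{2}; $
imposing $s_1 = s_2$ in the leaders’ condition yields the value of $s_1^* = s_2^*$ reported above, and substituting it into the followers’ reaction gives $s_3^*$ and $s_4^*$.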
This model corresponds to a global emission game in the context of an IEA (International Environmental Agreement) under the Stackelberg assumption (see [29,30]), where the leaders are signatory countries and the followers are non-signatory countries in a non-cooperative strategic game, and the strategies describe their emission levels.
Note that Assumptions 1 and 2 are satisfied for this game and Theorems 3 and 4 hold.
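The closed-form equilibrium can also be checked numerically. The sketch below uses arbitrary illustrative parameter values (not taken from the text) satisfying $\alpha \ge \beta\,(2 s_{max} + T)$, computes the leaders’ symmetric equilibrium strategy from the formula above, and verifies that it is a best reply by maximizing the leader’s expected payoff on a grid.

```python
import numpy as np

# Illustrative parameters (assumed, not from the paper); they satisfy alpha >= beta*(2*s_max + T)
# and keep the closed-form equilibrium inside the strategy set [0, s_max].
alpha, beta, T, s_max = 4.0, 0.5, 1.0, 3.0

den = (1 + 2 * beta) ** 2 + 2 * beta
s_leader = (2 * alpha * (1 + 2 * beta) ** 2 - beta * (4 * alpha + T)) / (2 * den)

def expected_leader_payoff(s_i, s_other):
    """E[l_i] after substituting the followers' reaction; for t ~ U[0, T],
    E[(c + t)^2] = c^2 + c*T + T^2/3 with c = s_i + s_other + 2*alpha."""
    c = s_i + s_other + 2 * alpha
    avg_sq = (c ** 2 + c * T + T ** 2 / 3) / (1 + 2 * beta) ** 2
    return alpha * s_i - 0.5 * s_i ** 2 - 0.5 * beta * avg_sq

# The best reply of leader 1 to the other leader playing s_leader should be s_leader itself.
grid = np.linspace(0.0, s_max, 3001)
best = grid[np.argmax([expected_leader_payoff(x, s_leader) for x in grid])]
print(s_leader, best)   # the two values agree up to the grid resolution
```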

4.2. An Illustrative Computational Evaluation

A classical algorithm for solving stochastic optimization problems is the Sample Average Approximation (SAA) method (see, for example, [16]). This method is particularly useful because, rather than using the distribution of the random variable $\xi(\omega)$, it uses only a sample of $\xi(\omega)$. Thus, the idea is to approximate the expected value function by the sample average of the random function and then to give convergence results that guarantee the convergence of the solutions of the SAA problem as the sample size increases.
The SAA method has been used for Cournot oligopoly games (see [7]). As an illustrative example, we apply this method to the Global Emission Game of Example 2.
Let $\xi^1, \ldots, \xi^k$ be an independently and identically distributed (i.i.d.) random sample of $k$ realizations of the random variable $\xi(\omega)$. We approximate the $i$th leader’s decision problem by the following SAA problem:
$\max_{x_i \in U_i} \phi_i^k(x_i, X_{-i}) := \frac{1}{k} \sum_{h=1}^{k} \bar{l}_i(x_1, \ldots, x_i, \ldots, x_M, \xi^h),$
where, for simplicity, we denote $\bar{l}_i(x_1, \ldots, x_i, \ldots, x_M, \xi^h) = l_i(x_i, H(x_i + X_{-i} + Y(H(X, \xi^h))), \xi^h)$.
If $(x_1^k, \ldots, x_M^k)$ satisfies $\phi_i^k(x_i^k, X_{-i}^k) = \max_{x_i \in U_i} \phi_i^k(x_i, X_{-i}^k)$ for $i = 1, \ldots, M$, then $(x_1^k, \ldots, x_M^k)$ is called a multi-leader multi-follower equilibrium with aggregate uncertainty of the SAA problem (MLMFA-SAA equilibrium).
If we introduce the function
$L(x, z, \xi^h) := \sum_{i=1}^{M} \bar{l}_i(x_1, \ldots, z_i, \ldots, x_M, \xi^h)$
and the function
$\phi^k(x, z) := \frac{1}{k} \sum_{h=1}^{k} L(x, z, \xi^h),$
then $x^k = (x_1^k, \ldots, x_M^k)$ is an MLMFA-SAA equilibrium if and only if
$\phi^k(x^k, x^k) = \max_{z \in U} \phi^k(x^k, z).$
Note that if we consider $L(x, z, \xi(\omega)) := \sum_{i=1}^{M} \bar{l}_i(x_1, \ldots, z_i, \ldots, x_M, \xi(\omega))$ and $\phi(x, z) := \mathbb{E}\left[L(x, z, \xi(\omega))\right]$, it is straightforward to see that the vector $x^* = (x_1^*, \ldots, x_M^*)$ is an MLMFA equilibrium if and only if
$\phi(x^*, x^*) = \max_{z \in U} \phi(x^*, z).$
Let us consider the case of the Global Emission Game (Example 2). We have that
$\bar{l}_1(z_1, x_2, \xi(\omega)) = \alpha z_1 - \frac{z_1^2}{2} - \frac{\beta}{2}\left(\frac{z_1 + x_2 + 2\alpha + \xi(\omega)}{1 + 2\beta}\right)^2$
and
$\bar{l}_2(x_1, z_2, \xi(\omega)) = \alpha z_2 - \frac{z_2^2}{2} - \frac{\beta}{2}\left(\frac{x_1 + z_2 + 2\alpha + \xi(\omega)}{1 + 2\beta}\right)^2$
and so
$L(x, z, \xi(\omega)) = \alpha\,(z_1 + z_2) - \frac{z_1^2 + z_2^2}{2} - \frac{\beta}{2}\left(\frac{z_1 + x_2 + 2\alpha + \xi(\omega)}{1 + 2\beta}\right)^2 - \frac{\beta}{2}\left(\frac{x_1 + z_2 + 2\alpha + \xi(\omega)}{1 + 2\beta}\right)^2.$
With the method proposed above, for a fixed $k$, we can compute an MLMFA-SAA equilibrium by maximizing over $z \in U$ the function $\phi^k(x^k, z)$:
$x^k = (x_1^k, x_2^k), \qquad x_1^k = x_2^k = \frac{\alpha\,(1 + 2\beta)^2}{(1 + 2\beta)^2 + 2\beta} - \frac{2\alpha\beta}{(1 + 2\beta)^2 + 2\beta} - \frac{\beta \sum_{h=1}^{k} \xi^h}{k\,[(1 + 2\beta)^2 + 2\beta]}.$
In order to investigate the convergence of the sequence of MLMFA-SAA equilibria as $k \to +\infty$, let us note that $L(x, z, \xi(\omega))$ is a Lipschitz continuous function. Thus, in this case, we can easily obtain that $\phi^k(x, z)$ converges to $\phi(x, z)$ uniformly and, with probability one, the sequence $\{x^k\}$ converges to the unique MLMFA equilibrium $x^*$.
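The following sketch illustrates this convergence for the Global Emission Game: it draws i.i.d. uniform samples of the shock, evaluates the closed-form MLMFA-SAA leader strategy obtained above, and compares it with the exact MLMFA leader strategy. The parameter values are illustrative and not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta, T = 4.0, 0.5, 1.0                      # assumed illustrative parameters
den = (1 + 2 * beta) ** 2 + 2 * beta

def saa_leader(xis):
    """Closed-form MLMFA-SAA leader strategy for a given sample of shock realizations."""
    return (alpha * (1 + 2 * beta) ** 2 - 2 * alpha * beta - beta * np.mean(xis)) / den

# Exact MLMFA leader strategy (the sample mean replaced by E[xi] = T/2).
x_star = (2 * alpha * (1 + 2 * beta) ** 2 - beta * (4 * alpha + T)) / (2 * den)

for k in (10, 100, 1000, 10000):
    xis = rng.uniform(0.0, T, size=k)               # i.i.d. sample of the uniform shock
    print(k, abs(saa_leader(xis) - x_star))         # the error shrinks as k grows (law of large numbers)
```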

5. A More General Case in an Optimistic View

In the previous sections, we assumed that the payoff functions are twice continuously differentiable; in this section, we want to drop this assumption, and, in order to obtain results on the existence of an MLMFA equilibrium in this more general framework, we need an assumption taken from Acemoglu and Jensen ([12]).
Assumption A3.
The aggregative game $\langle M+N, (U_i)_{i=1}^{M}, (V_j)_{j=1}^{N}, (l_i)_{i=1}^{M}, (f_j)_{j=1}^{N}, g, t \rangle$ is an aggregative game with strategic substitutes for any $t \in T$, i.e.,
  • every strategy set $U_i$ and $V_j$, for $i = 1, \ldots, M$ and $j = 1, \ldots, N$, is a lattice;
  • for all $i = 1, \ldots, M$ and for all $j = 1, \ldots, N$, the payoff functions $l_i(x_i, g(x, y), t)$ and $f_j(y_j, g(x, y), t)$ are supermodular in the player’s own strategy and exhibit decreasing differences in $x_i$ and $X_{-i}$ and in $y_j$ and $Y_{-j}$, respectively, for any $t \in T$.
For any $x = (x_1, \ldots, x_M)$ and for any $t \in T$, we consider the reduced aggregative game $\langle N, (V_j, f_j)_{j=1}^{N}, g, t \rangle$.
Theorem 5.
Under Assumption 3, the following hold:
(i)
there exists an equilibrium $(y_1(H(X, t)), \ldots, y_N(H(X, t))) \in V$, i.e., this $N$-tuple satisfies (4);
(ii)
there exist the smallest and largest equilibrium aggregates, denoted by $Q_*(x, t)$ and $Q^*(x, t)$, respectively;
(iii)
$Q_* : U \times T \to \mathbb{R}$ is a lower semi-continuous function and $Q^* : U \times T \to \mathbb{R}$ is an upper semi-continuous function.
Proof. 
This result is an immediate consequence of Theorem 2. ☐
This theorem gives us the existence of a Nash equilibrium among the followers, but not its uniqueness. Thus, in principle, there are multiple equilibria, whose set we denote by $NE(X, t)$. We can consider a selection of the correspondence $(X, t) \rightrightarrows NE(X, t)$, namely a function
$\lambda : (X, t) \mapsto (y_1^{\lambda}(H(X, t)), \ldots, y_N^{\lambda}(H(X, t))),$
in order to choose a profile in the set of the possible Nash equilibria of the followers.
We suppose that $l_i(x_i, g(x, y), t) = l_i(x_i, H(X+Y), t)$ is increasing in the second variable, i.e., in the aggregator $g$. Since $H$ is a strictly increasing function, $l_i(x_i, H(X+Y), t)$ is increasing in the aggregate of the strategies. By Theorem 5, the equilibrium aggregates are ordered from the smallest one to the largest one, and we assume that the leaders adopt the max-selection (in line with [31]), i.e.,
$(y_1^{max}(H(X, t)), \ldots, y_N^{max}(H(X, t))),$
such that $\sum_{j=1}^{N} y_j^{max}(H(X, t)) = Y^*(x, t)$; they then take into account the functions
$l_i(x_i, g(x, Y^*(x, t)), t) = l_i(x_i, H(X + Y^*(x, t)), t),$
and solve a Nash equilibrium problem.
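A minimal computational sketch of this optimistic (max-selection) scheme is given below, under assumed finite strategy grids and a hypothetical two-follower subgame: it enumerates the followers’ pure Nash equilibria on the grid and keeps the one with the largest aggregate.

```python
import itertools
import numpy as np

grid = np.linspace(0.0, 1.0, 11)           # assumed common finite strategy grid

def follower_payoff(own, other, X, t):
    """Hypothetical follower payoff with teamwork-style complementarity in X, own and other."""
    return (X * own * other) ** (1 + t)

def follower_equilibria(X, t):
    """All pure Nash equilibria of the two-follower subgame on the grid, for fixed X and t."""
    eqs = []
    for y1, y2 in itertools.product(grid, repeat=2):
        br1 = max(grid, key=lambda a: follower_payoff(a, y2, X, t))
        br2 = max(grid, key=lambda a: follower_payoff(a, y1, X, t))
        if (follower_payoff(y1, y2, X, t) >= follower_payoff(br1, y2, X, t) - 1e-12
                and follower_payoff(y2, y1, X, t) >= follower_payoff(br2, y1, X, t) - 1e-12):
            eqs.append((y1, y2))
    return eqs

def max_selection(X, t):
    """Optimistic selection: the followers' equilibrium with the largest aggregate."""
    return max(follower_equilibria(X, t), key=sum)

print(max_selection(X=0.5, t=0.3))         # the aggregate-maximizing equilibrium, here (1, 1)
```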
Remark 5.
By the monotonicity of the integral and by Assumption 3, the function $\mathbb{E}\left[l_i(x_i, H(X + Y^*(x, t)), t)\right]$ is supermodular in the player’s own strategy and exhibits decreasing differences in $x_i$ and $X_{-i}$, for all $i = 1, \ldots, M$ and for any $t \in T$.
Remark 6.
In the case of multiple followers’ responses, the max-selection corresponds (for M = 1 ) to the so-called strong Stackelberg–Nash solution or optimistic Stackelberg–Nash solution (see [32,33,34,35]).
Theorem 6.
If Assumption 3 holds and $l_i(x_i, g(x, y), t)$ is increasing in the aggregator, then there exists an MLMFA equilibrium.
Proof. 
Since Assumption 3 holds, Theorem 5 applies, and, since $l_i(x_i, g(x, y), t)$ is increasing in the aggregator, using the max-selection we can consider the reduced aggregative game $\langle M, (U_i, \tilde{l}_i)_{i=1}^{M}, g, t \rangle$, where $\tilde{l}_i = l_i(x_i, H(X + Y^*(x, t)), t)$, $i = 1, \ldots, M$, which is an aggregative game with strategic substitutes. Considering the functions $\mathbb{E}\left[l_i(x_i, H(X + Y^*(x, t)), t)\right]$, for any $i = 1, \ldots, M$, by Remark 5 and using Theorem 2, the result is proved. ☐
Example 3.
(Teamwork Project)
Let us consider the game
$\langle 3, (U_1, l_1), (V_j, f_j)_{j=2}^{3}, g, t \rangle,$
where $U_1 = [0, 1]$, $V_j = [0, 1]$ for $j = 2, 3$, and, denoting by $s = (s_1, s_2, s_3)$ the vector of strategies, $g(s) = \prod_{h=1}^{3} s_h$. Assume a random variable $\xi(\omega)$ with uniform density $\rho(t) = 1$, $t \in [0, 1]$. Then,
  • for any $t \in [0, 1]$, the payoff functions for the followers are
    $f_j(s_j, g(s), t) = (s_1 s_2 s_3)^{1+t}$
    for $j = 2, 3$;
  • the payoff function for the leader is
    $\mathbb{E}\left[l_1(s_1, g(s), \xi(\omega))\right] = \mathbb{E}\left[(s_1 s_2 s_3)^{1+\xi(\omega)}\right] - \left(\frac{s_1 + 1}{4}\right)^4 = \int_0^1 (s_1 s_2 s_3)^{1+t}\, dt - \left(\frac{s_1 + 1}{4}\right)^4.$
Fixing $s_1$ and $t$, the followers’ Nash equilibria are
$NE(s_1, t) = \{(0, 0), (1, 1)\}$ if $s_1 \neq 0$, and $NE(s_1, t) = [0, 1]^2$ if $s_1 = 0$
(indeed, when $s_1 = 0$ every follower is indifferent among all his strategies, while, when $s_1 \neq 0$, each follower’s payoff is strictly increasing in his own strategy whenever the other follower’s strategy is positive, and constant when it is zero).
By using the max-selection, the leader considers $(s_2^*, s_3^*) = (1, 1)$ and so he maximizes, with respect to his own strategy, the following payoff function:
$\mathbb{E}\left[s_1^{1+\xi(\omega)}\right] - \left(\frac{s_1 + 1}{4}\right)^4 = \int_0^1 s_1^{1+t}\, dt - \left(\frac{s_1 + 1}{4}\right)^4 = \frac{s_1\,(s_1 - 1)}{\log s_1} - \left(\frac{s_1 + 1}{4}\right)^4.$
It can be proved that there exists $s^M \in [0, 1]$ where this function has a positive maximum.
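This maximum can also be located numerically. The sketch below evaluates the reduced leader payoff above on a fine grid, handling the removable singularity of $s_1(s_1-1)/\log s_1$ at $s_1 \in \{0, 1\}$ by its limits, and assuming the cost term $\left(\frac{s_1+1}{4}\right)^4$ reconstructed above.

```python
import numpy as np

def benefit(s1):
    """s1*(s1 - 1)/log(s1), extended by continuity: 0 at s1 = 0 and 1 at s1 = 1."""
    if s1 <= 0.0:
        return 0.0
    if s1 >= 1.0:
        return 1.0
    return s1 * (s1 - 1.0) / np.log(s1)

def leader_payoff(s1):
    return benefit(s1) - ((s1 + 1.0) / 4.0) ** 4

grid = np.linspace(0.0, 1.0, 10001)
values = np.array([leader_payoff(s) for s in grid])
s_M = grid[np.argmax(values)]
print(s_M, values.max())   # the maximum value is positive, as claimed in the text
```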
This model corresponds to the teamwork project with multiple tasks (see [9,36]).
Note that Assumption 3 is satisfied in this game and, since $l_1$ is increasing in the aggregator $g(s) = \prod_{h=1}^{3} s_h$, Theorems 5 and 6 hold.

6. Conclusions

We have studied a non-cooperative aggregative game, with M players acting as leaders and N players acting as followers in a hierarchical model, and we have assumed that there is an exogenous uncertainty that affects the aggregator. We have proved an existence result in the case of followers’ Nash equilibrium uniqueness as well as an existence result in the case of non-uniqueness, using the max-selection.
A further possible direction of research could be to investigate the existence of MLMFA equilibria, in the case of non-uniqueness of the followers’ reactions, when other selections are adopted (see [37]).
Moreover, in order to obtain existence results in more general settings, it would be interesting to consider a broader class of aggregative games (see [9,17]).

Acknowledgments

This work has been supported by STAR 2014 (linea 1) “Variational Analysis and Equilibrium Models in Physical and Social Economic Phenomena”, the University of Naples Federico II, Italy and by GNAMPA 2017 “Approccio stocastico per le disequazioni quasi-variazionali e applicazioni”.

Author Contributions

L.M. and R.M. contributed equally to all parts of the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Sherali, H.D.; Soyster, A.L.; Murphy, F.H. Stackelberg-Nash-Cournot Equilibria: Characterizations and computations. Oper. Res. 1983, 31, 253–276. [Google Scholar] [CrossRef]
  2. Marcotte, P.; Blain, M.A. Stackelberg-Nash model for the design of deregulated transit system. In Dynamic Games in Economic Analysis; Lecture Notes in Control and Information Sciences; Hamalainen, R.H., Ethamo, H.K., Eds.; Springer: Berlin, Germany, 1991; Volume 157, pp. 21–28. [Google Scholar]
  3. Hobbs, B.F.; Metzler, C.B.; Pang, J.S. Strategic gaming analysis for electric power systems: An MPEC approach. IEEE Trans. Power Syst. 2000, 15, 638–645. [Google Scholar] [CrossRef]
  4. Ramos, M.A.; Boix, M.; Aussel, D.; Montastruc, L.; Domenech, S. Water integration in eco-industrial parks using a multi-leader-follower approach. Comput. Chem. Eng. 2016, 87, 190–207. [Google Scholar] [CrossRef]
  5. Kim, S. Multi-leader multi-follower Stackelberg model for cognitive radio spectrum sharing scheme. Comput. Netw. 2012, 56, 3682–3692. [Google Scholar] [CrossRef]
  6. Hu, F. Multi-Leader-Follower Games: Models, Methods and Applications. J. Oper. Res. Soc. Jpn. 2015, 58, 1–23. [Google Scholar] [CrossRef]
  7. De Miguel, V.; Xu, H. A Stochastic Multiple-Leader Stackelberg Model: Analysis, Computation, and Application. Oper. Res. 2009, 57, 1220–1235. [Google Scholar] [CrossRef]
  8. Cornes, R.; Hartley, R. Asymmetric contests with general technologies. Econ. Theory 2005, 3, 923–946. [Google Scholar] [CrossRef]
  9. Jensen, M.K. Aggregative Games and Best-Reply Potentials. Econ. Theory 2010, 43, 45–66. [Google Scholar] [CrossRef]
  10. Vives, X. Nash equilibrium with strategic complementarities. J. Math. Econ. 1990, 19, 305–321. [Google Scholar] [CrossRef]
  11. Selten, R. Preispolitik der Mehrproduktenunternehmung in der Statichen Theorie; Springer: Berlin/Heidelberg, Germany, 1970. [Google Scholar]
  12. Acemoglu, D.; Jensen, M.K. Aggregate comparative statics. Games Econ. Behav. 2013, 81, 27–49. [Google Scholar] [CrossRef]
  13. Grammatico, S. Dynamic Control of Agents playing Aggregative Games with Coupling Constraints. IEEE Trans. Autom. Control 2017. [Google Scholar] [CrossRef]
  14. Koshal, J.; Nedić, A.; Shanbhag, U.V. Distributed Algorithms for Aggregative Games on Graphs. Oper. Res. 2016, 64, 680–704. [Google Scholar] [CrossRef]
  15. Nakamura, T. One-leader and multiple-follower Stackelberg games with private information. Econ. Lett. 2015, 127, 27–30. [Google Scholar] [CrossRef]
  16. Kleywegt, A.J.; Shapiro, A.; Homem-De-Mello, T. The Sample Average Approximation Method for Stochastic Discrete Optimization. SIAM J. Optim. 2001, 12, 479–502. [Google Scholar] [CrossRef]
  17. Alos-Ferrer, C.; Ania, A.B. The evolutionary stability of perfectly competitive behavior. Econ. Theory 2005, 26, 497–516. [Google Scholar] [CrossRef]
  18. Gorman, W.M. The structure of utility functions. Rev. Econ. Stud. 1968, 35, 367–390. [Google Scholar] [CrossRef]
  19. Milgrom, P.; Roberts, J. Rationalizability, Learning and Equilibrium in Games with Strategic Complementarities. Econometrica 1990, 58, 1255–1277. [Google Scholar] [CrossRef]
  20. Topkis, D.M. Minimizing a Submodular Function on a Lattice. Oper. Res. 1978, 26, 305–321. [Google Scholar] [CrossRef]
  21. Topkis, D.M. Supermodularity and Complementarity; Princeton University Press: Princeton, NJ, USA, 1998. [Google Scholar]
  22. Vives, X. Oligopoly Pricing: Old Ideas and New Tools; MIT Press: Cambridge, MA, USA, 2000. [Google Scholar]
  23. Granot, F.; Veinott, A.F. Substitutes Complements and Ripples in Network Flows. Math. Oper. Res. 1985, 10, 471–497. [Google Scholar] [CrossRef]
  24. Mangasarian, O.L. Pseudo-convex functions. SIAM J. Control 1965, 3, 281–290. [Google Scholar] [CrossRef]
  25. Von Stackelberg, H. Marktform und Gleichgewicht; Julius Springer: Vienna, Austria, 1934; (The Theory of the Market Economy, English Edition; Peacock, A., Ed.; William Hodge: London, UK, 1952.). [Google Scholar]
  26. Sherali, H.D. A Multiple Leader Stackelberg Model and Analysis. Oper. Res. 1984, 32, 390–404. [Google Scholar] [CrossRef]
  27. Luo, Z.Q.; Pang, J.S.; Ralph, D. Mathematical Programs with Equilibrium Constraints; Cambridge University Press: Cambridge, UK, 1996. [Google Scholar]
  28. Cornes, R.; Hartley, R. Well-behaved Aggregative Games. In Proceedings of the 2011 Workshop on Aggregative Games at Strathclyde University, Glasgow, UK, 6–7 April 2011. [Google Scholar]
  29. Brahim, M.B.; Zaccour, G.; Abdelaziz, F.B. Strategic Investments in R & D and Efficiency in the presence of free riders. RAIRO-Oper. Res. 2015, 50, 611–625. [Google Scholar] [CrossRef]
  30. Finus, M. Game Theory and International Environmental Cooperation: A Survey with an Application to the Kyoto-Protocol; Fondazione Eni Enrico Mattei: Milan, Italy, 2000. [Google Scholar]
  31. Mallozzi, L.; Tijs, S. Partial Cooperation and Non-Signatories Multiple Decision. AUCO Czech Econ. Rev. 2008, 2, 23–30. [Google Scholar]
  32. Breton, M.; Alj, A.; Haurie, A. Sequential Stackelberg equilibria in two-person games. J. Optim. Theory Appl. 1988, 59, 71–97. [Google Scholar] [CrossRef]
  33. Kulkarni, A.A.; Shanbhag, U.V. An Existence Result for Hierarchical Stackelberg v/s Stackelberg Games. IEEE Trans. Autom. Control 2015, 60, 3379–3384. [Google Scholar] [CrossRef]
  34. Leitmann, G. On generalized Stackelberg strategies. J. Optim. Theory Appl. 1978, 26, 637–643. [Google Scholar] [CrossRef]
  35. Nishizaki, I.; Sakawa, M. Stackelberg solutions to multi objective two-level linear programming problems. J. Optim. Theory Appl. 1999, 103, 161–182. [Google Scholar] [CrossRef]
  36. Dubey, P.; Haimanko, O.; Zapechelnyuk, A. Strategic complements and substitutes, and potential games. Games Econ. Behav. 2006, 54, 77–94. [Google Scholar] [CrossRef]
  37. Mallozzi, L.; Morgan, J. Oligopolistic markets with leadership and demand functions possibly discontinuous. J. Optim. Theory Appl. 2005, 125, 393–407. [Google Scholar] [CrossRef]
1.
Recall that transitive means that $x \ge y$ and $y \ge z$ implies $x \ge z$; reflexive means that $x \ge x$; antisymmetric means that $x \ge y$ and $y \ge x$ implies $x = y$.
2.
A differentiable function $\pi_i(s_i, s_{-i}, t)$ is pseudo-concave in $s_i$ if for all $s_i, s_i' \in S_i$
$(s_i' - s_i)^T D_{s_i} \pi_i(s_i, s_{-i}, t) \le 0 \implies \pi_i(s_i', s_{-i}, t) \le \pi_i(s_i, s_{-i}, t)$
(see [24]).
3.
Whenever $s^*(t)$ is an equilibrium, $Q(t) := g(s^*(t))$ is called an equilibrium aggregate given $t$. Furthermore, if the smallest and largest equilibrium aggregates exist, these are denoted by $Q_*(t)$ and $Q^*(t)$.
4.
$U_i$ and $V_j$ are subsets of $\mathbb{R}_+$ with capacity limits.
