Article

Non-Zero Sum Nash Game for Discrete-Time Infinite Markov Jump Stochastic Systems with Applications

College of Mathematics and Systems Science, Shandong University of Science and Technology, Qingdao 266590, China
*
Author to whom correspondence should be addressed.
Submission received: 24 July 2023 / Revised: 8 September 2023 / Accepted: 11 September 2023 / Published: 15 September 2023
(This article belongs to the Special Issue Advances in Analysis and Control of Systems with Uncertainties II)

Abstract

This paper studies the finite horizon linear quadratic (LQ) non-zero sum Nash game for discrete-time infinite Markov jump stochastic systems (IMJSSs). Based on the theory of stochastic analysis, a countably infinite set of coupled generalized algebraic Riccati equations is solved, and a necessary and sufficient condition for the existence of Nash equilibrium points is obtained. From a new perspective, the finite horizon mixed robust $H_2/H_\infty$ control is investigated, and the relationship between the Nash game and the $H_2/H_\infty$ control problem is summarized. Moreover, the feasibility and validity of the proposed method are demonstrated by applying it to a numerical example.

1. Introduction

Markov jump stochastic systems (MJSSs), as a typical class of stochastic hybrid dynamical systems, are widely used in practical engineering control systems. A great deal of related research on their stability, ergodicity, robust control and filtering, and so on, has been undertaken [1,2,3,4,5,6,7,8,9]. However, when the state space of the Markov chain is a countably infinite set, the model is more widely applicable from the application point of view. Consequently, a rich theoretical literature and many research achievements have emerged for infinite Markov jump stochastic systems (IMJSSs). In recent years, various efforts have been made to cope with IMJSSs in a wide variety of settings. To be specific, via the operator spectrum method, exponential stability for discrete-time [10] and continuous-time [11] IMJSSs has been investigated, respectively. Aiming at practical time-delay factors, with the aid of Lyapunov stability theory, ref. [12] discussed the stability analysis of IMJSSs with time delay; further, stability of uncertain discrete-time IMJSSs with time delay was developed in [13], and on this basis, ref. [14] addressed the finite horizon $H_2/H_\infty$ control problem. $H_2/H_\infty$ fuzzy filtering has been solved for nonlinear IMJSSs by the T-S fuzzy model approach in [15].
Dynamic game theory, which has come into wide use in many fields such as engineering, economics, and management science, has attracted great attention, and a large number of results have been obtained in the literature [16,17,18,19,20,21]. Furthermore, the Nash game problem has been studied for MJSSs and, further, a unified treatment of the $H_2$, $H_\infty$ and $H_2/H_\infty$ control design problems was presented in [22]. In [23,24], the authors revealed the relationship between Nash equilibrium points and $H_2/H_\infty$ control for continuous-time MJSSs. While many theoretical results have been established for stochastic systems governed by a finite-state Markov chain, more research effort is required for IMJSSs. Notably, the causal and anticausal Lyapunov operators of IMJSSs are no longer adjoint, which is the essential difference between finite and infinite MJSSs [10,25]. In many practical applications, IMJSSs have broader application prospects than MJSSs with finitely many jumps. As a consequence, research on IMJSSs is of critical importance. In the game field, ref. [26] discussed an infinite horizon linear quadratic (LQ) Nash game for continuous-time IMJSSs. Unfortunately, to the best of our knowledge, there is almost no research on the Nash game problem for discrete-time IMJSSs.
In this article, we study a finite horizon LQ non-zero sum Nash game for discrete-time IMJSSs, which covers a more general class of systems. From a new perspective, the finite horizon mixed $H_2/H_\infty$ control is further investigated. On the one hand, this is an extension of the previous study from MJSSs [22] to IMJSSs; on the other, it is a discrete-time counterpart of [26]. Concretely, the main work and contributions of this paper are as follows. First, the existence of Nash equilibrium points, which boils down to the solvability of a countably infinite set of coupled generalized algebraic Riccati equations, is shown. For infinite Markov jump systems, the causal and anticausal Lyapunov operators are no longer adjoint, which leads to the inequivalence between stochastic stability, asymptotic mean square stability, and exponential mean square stability; this indicates the essential difference between finite and infinite Markov jump systems. To this end, we introduce infinite dimensional Banach spaces whose elements are countably infinite sequences of linear and bounded operators. Thus, the crux of the problem resides in how to solve this kind of equation, which is harder than that in [22]. Second, the finite horizon mixed $H_2/H_\infty$ control is solved from the new viewpoint of a Nash game and, further, the relationship between the Nash game and the $H_2/H_\infty$ control problem is summarized with some remarks. Finally, a typical example demonstrates the validity of the proposed method.
The rest of this article is arranged as follows. Some useful preliminary results are introduced in Section 2. The existence of Nash equilibrium points is characterized by a necessary and sufficient condition in Section 3. In Section 4, some special cases with some remarks are given. In Section 5, a numerical example is presented, and a summary is provided in Section 6.
For convenience, the following notations are adopted. $\mathbb{R}^n$: $n$-dimensional real Euclidean space; $\mathbb{R}^{m\times n}$: the linear space of all $m$ by $n$ real matrices; $\|\cdot\|$: the Euclidean norm of $\mathbb{R}^n$ or the operator norm of $\mathbb{R}^{m\times n}$; $I_n$: the $n\times n$ identity matrix; $N'$: the transpose of a matrix (or vector) $N$; $N^\dagger$: the pseudo-inverse of a matrix $N$; $N>0\ (\ge 0)$: $N$ is positive (semi-positive) definite; $\mathcal{N}_T:=\{0,1,\ldots,T\}$; $\mathcal{D}:=\{1,2,\ldots\}$.

2. Preliminaries

On a complete probability space ( Ω , F , P ), we consider the following discrete-time IMJSS:
$$
\begin{cases}
y(t+1) = Q_0(t,\xi_t)y(t) + R_0(t,\xi_t)u(t) + U_0(t,\xi_t)v(t) + \sum_{k=1}^{r}\left[Q_k(t,\xi_t)y(t) + R_k(t,\xi_t)u(t) + U_k(t,\xi_t)v(t)\right]w_k(t),\\[4pt]
z(t) = \begin{bmatrix} L(t,\xi_t)y(t)\\ M(t,\xi_t)u(t)\end{bmatrix},\quad M(t,\xi_t)'M(t,\xi_t)=I_{n_u},\\[4pt]
y(0)=y_0\in\mathbb{R}^n,\quad \xi(0)=\xi_0\in\mathcal{D},\quad t\in\mathcal{N}_T,
\end{cases}\tag{1}
$$
where $y(t)\in\mathbb{R}^n$ represents the system state, and $u(t)\in\mathbb{R}^{n_u}$ and $v(t)\in\mathbb{R}^{n_v}$ are the control processes of two different players, respectively. $z(t)\in\mathbb{R}^{n_z}$ stands for the controlled output. $w(t)=(w_1(t),w_2(t),\ldots,w_r(t))'$, $t\in\mathcal{N}_T=\{0,1,\ldots,T\}$, is a standard $r$-dimensional Brownian motion. $\{\xi_t\}_{t\in\mathcal{N}_T}$ denotes an infinite Markov jump process taking values in $\mathcal{D}=\{1,2,\ldots\}$, and the transition probability matrix is $\mathcal{P}=[p(\varsigma,\iota)]$ with $p(\varsigma,\iota)=\mathbb{P}(\xi_{t+1}=\iota\,|\,\xi_t=\varsigma)$. In this paper, we assume that $\mathcal{P}$ is nondegenerate, that $\pi_0(\varsigma)=\mathbb{P}(\xi_0=\varsigma)>0$ for all $\varsigma\in\mathcal{D}$, and that the stochastic processes $\{w_t\}_{t\in\mathcal{N}_T}$ and $\{\xi_t\}_{t\in\mathcal{N}_T}$ are mutually independent. Let $\mathcal{F}_t=\sigma\{\xi_k, w_s \mid 0\le k\le t,\ 0\le s\le t-1\}$, $\mathcal{F}_0=\sigma(\xi_0)$, and $l^2(\mathcal{N}_T;\mathbb{R}^m):=\{x\in\mathbb{R}^m \mid x(t)\text{ is }\mathcal{F}_t\text{-measurable and }(\sum_{t=0}^{T}\mathbb{E}\|x(t)\|^2)^{1/2}<\infty\}$.
$\mathcal{H}^{m\times n}$ is defined as the Banach space of the set $\{H \mid H=(H(1),H(2),\ldots),\ H(\varsigma)\in\mathbb{R}^{m\times n}\}$ with the norm $\|H\|_\infty=\sup_{\varsigma\in\mathcal{D}}\|H(\varsigma)\|<\infty$. In the sequel, we assume that all coefficients of the considered systems have a finite norm $\|\cdot\|_\infty$. We write $\mathcal{H}^n$ for $m=n$. Further, $\mathcal{H}^{n+}$ denotes the subspace of $\mathcal{H}^n$ formed by all $H$ with $H(\varsigma)\in\mathcal{S}^n$ and $H(\varsigma)\ge 0$, $\varsigma\in\mathcal{D}$. For $L,M\in\mathcal{H}^{n+}$, $L\ge M$ ($L>M$) means $L(\varsigma)\ge M(\varsigma)$ ($L(\varsigma)>M(\varsigma)$) for all $\varsigma\in\mathcal{D}$. We set $Q_k(t)=\{Q_k(t,\varsigma)\}_{\varsigma\in\mathcal{D}}$, $R_k(t)=\{R_k(t,\varsigma)\}_{\varsigma\in\mathcal{D}}$, $U_k(t)=\{U_k(t,\varsigma)\}_{\varsigma\in\mathcal{D}}$, $0\le k\le r$, $L(t)=\{L(t,\varsigma)\}_{\varsigma\in\mathcal{D}}$, $M(t)=\{M(t,\varsigma)\}_{\varsigma\in\mathcal{D}}$, and assume $Q_k(t)\in\mathcal{H}^n$, $R_k(t)\in\mathcal{H}^{n\times n_u}$, $U_k(t)\in\mathcal{H}^{n\times n_v}$, $L(t)\in\mathcal{H}^{n_z\times n}$, $M(t)\in\mathcal{H}^{n_z\times n_u}$ for $t\in\mathcal{N}_T$.
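To make the setup concrete, system (1) can be simulated once the countable mode set $\mathcal{D}$ is truncated to finitely many modes. The sketch below is only illustrative: the truncation level, all scalar coefficients, the placeholder feedback strategies, and the chain parameters are assumptions, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

T, r, N_modes = 10, 1, 20          # horizon, noise dimension, truncation of D
# illustrative scalar coefficients Q_k, R_k, U_k indexed by mode (assumptions)
Q = {k: 0.9 - 0.01 * np.arange(1, N_modes + 1) for k in range(r + 1)}
R = {k: 0.1 * np.ones(N_modes) for k in range(r + 1)}
U = {k: 0.05 * np.ones(N_modes) for k in range(r + 1)}
# transition matrix truncated to N_modes states (boundary row kept stochastic)
P = np.zeros((N_modes, N_modes))
for s in range(N_modes):
    P[s, s] = 0.5
    P[s, min(s + 1, N_modes - 1)] += 0.5

y, xi = 1.0, 0                      # y(0) = y0, xi(0) = first mode
for t in range(T + 1):
    u, v = -0.5 * y, 0.1 * y        # placeholder feedback strategies
    w = rng.standard_normal(r)      # w_k(t): i.i.d. standard normal increments
    drift = Q[0][xi] * y + R[0][xi] * u + U[0][xi] * v
    noise = sum((Q[k][xi] * y + R[k][xi] * u + U[k][xi] * v) * w[k - 1]
                for k in range(1, r + 1))
    y = drift + noise
    xi = rng.choice(N_modes, p=P[xi])   # jump according to the Markov chain
print(y)
```

The truncation of $\mathcal{D}$ is purely for computation; the theory below works directly with the countably infinite mode set.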
The two cost functions for the Nash game problem are given by
$$
J_1(y_0,\xi_0,u^*(\cdot),v(\cdot)) = \sum_{t=0}^{T}\mathbb{E}\left[\gamma^2\|v(t)\|^2 - \|z(t)\|^2\right],\tag{2}
$$
$$
J_2(y_0,\xi_0,u(\cdot),v^*(\cdot)) = \sum_{t=0}^{T}\mathbb{E}\left[\|z(t)\|^2\right],\tag{3}
$$
where $\gamma>0$ is a given prescribed disturbance attenuation level. For simplicity, we denote the Nash game problem for the cost functions (2) and (3) subject to Equation (1) by problem $\mathcal{P}$. The Nash game problem $\mathcal{P}$ is to find an admissible control pair $(u^*(\cdot),v^*(\cdot))$ minimizing the cost functions (2) and (3) subject to Equation (1).
Next, we list some definitions and lemmas that are needed for the follow-up procedures.
Definition 1. 
A strategy pair ( u * ( · ) , v * ( · ) ) l 2 ( N T ; R n u ) × l 2 ( N T ; R n v ) is a Nash equilibrium point if
$$
J_1(y_0,\xi_0,u^*(\cdot),v^*(\cdot)) \le J_1(y_0,\xi_0,u^*(\cdot),v(\cdot)),\tag{4}
$$
$$
J_2(y_0,\xi_0,u^*(\cdot),v^*(\cdot)) \le J_2(y_0,\xi_0,u(\cdot),v^*(\cdot))\tag{5}
$$
for all ( u ( · ) , v ( · ) ) l 2 ( N T ; R n u ) × l 2 ( N T ; R n v ) .
Lemma 1 
([27]). For a symmetric matrix S, we have
(i) 
$S^\dagger = (S^\dagger)'$;
(ii) 
$SS^\dagger = S^\dagger S$;
(iii) 
$S\ge 0$ if and only if $S^\dagger\ge 0$.
Lemma 2 
([27]). Let $F=F'$, $H$, and $G=G'$ be matrices with appropriate dimensions, and consider the following quadratic form:
$$
q(y,u) = \mathbb{E}\left[y'Fy + 2y'Hu + u'Gu\right],
$$
where $y$ and $u$ are random variables defined on a probability space $(\Omega,\mathcal{F},\mathbb{P})$. Then, the following conditions are equivalent:
(i) 
$\inf_{u} q(y,u) > -\infty$ for any random variable $y$.
(ii) 
There exists a symmetric matrix $S=S'$ such that $\inf_{u} q(y,u) = \mathbb{E}[y'Sy]$ for any random variable $y$.
(iii) 
$G\ge 0$ and $H(I-GG^\dagger)=0$.
(iv) 
$G\ge 0$ and $\mathrm{Ker}(G)\subseteq \mathrm{Ker}(H)$.
(v) 
There exists a symmetric matrix $T=T'$ such that
$$
\begin{bmatrix} F-T & H\\ H' & G\end{bmatrix}\ge 0.
$$
Moreover, if any of the above conditions holds, then (ii) is satisfied by $S=F-HG^\dagger H'$. In addition, $S\ge T$ for any $T$ satisfying (v). Finally, for any random variable $y$, the random variable $u^*=-G^\dagger H'y$ is optimal, and the optimal value is $q(y,u^*)=\mathbb{E}\left[y'(F-HG^\dagger H')y\right]$.
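A quick numerical check of the lemma, with arbitrarily chosen small matrices (assumptions, not taken from the paper), illustrates the role of the pseudo-inverse when $G$ is singular:

```python
import numpy as np

# F = F', H, G = G' with G positive semidefinite but singular
F = np.array([[2.0, 0.0], [0.0, 3.0]])
H = np.array([[1.0, 0.0], [0.0, 0.0]])   # Ker(G) ⊆ Ker(H) holds here
G = np.array([[1.0, 0.0], [0.0, 0.0]])
Gd = np.linalg.pinv(G)                    # Moore-Penrose pseudo-inverse G†

# condition (iii): H (I - G G†) = 0
assert np.allclose(H @ (np.eye(2) - G @ Gd), 0)

S = F - H @ Gd @ H.T                      # S = F - H G† H'
y = np.array([1.0, -2.0])
u_star = -Gd @ H.T @ y                    # optimal u* = -G† H' y

def q(y, u):                              # q(y,u) = y'Fy + 2y'Hu + u'Gu
    return y @ F @ y + 2 * y @ H @ u + u @ G @ u

# the optimal value equals y'Sy, and no perturbation of u* does better
assert np.isclose(q(y, u_star), y @ S @ y)
for _ in range(100):
    assert q(y, u_star) <= q(y, u_star + np.random.randn(2)) + 1e-9
print(q(y, u_star))   # → 13.0 for these matrices
```

Perturbations of $u^*$ along $\mathrm{Ker}(G)$ leave $q$ unchanged, which is why the minimizer is only unique up to that kernel.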
The following LQ result for discrete-time IMJSSs can be directly derived from Theorem 3.1 of [14], which addresses the continuous-time case.
Lemma 3. 
For the following standard LQ optimal control problem with discrete-time IMJSSs,
$$
\min_{u(\cdot)\in l^2(\mathcal{N}_T;\mathbb{R}^{n_u})}\Big\{J_2(y_0,\xi_0,u(\cdot)) = \sum_{t=0}^{T}\mathbb{E}\left[\|z(t)\|^2\right]\Big\},\tag{6}
$$
subject to
$$
\begin{cases}
y(t+1) = Q_0(t,\xi_t)y(t) + R_0(t,\xi_t)u(t) + \sum_{k=1}^{r}\left[Q_k(t,\xi_t)y(t) + R_k(t,\xi_t)u(t)\right]w_k(t),\\[4pt]
z(t) = \begin{bmatrix} L(t,\xi_t)y(t)\\ M(t,\xi_t)u(t)\end{bmatrix},\quad M(t,\xi_t)'M(t,\xi_t)=I_{n_u},\\[4pt]
y(0)=y_0\in\mathbb{R}^n,\quad \xi(0)=\xi_0\in\mathcal{D},\quad t\in\mathcal{N}_T,
\end{cases}\tag{7}
$$
we find that $P(t,\varsigma)$, $(t,\varsigma)\in\mathcal{N}_T\times\mathcal{D}$, solves the following coupled generalized algebraic Riccati equations:
$$
\begin{cases}
P(t,\varsigma) = \sum_{k=0}^{r} Q_k(t,\varsigma)'\varepsilon_\varsigma(t,P)Q_k(t,\varsigma) + L(t,\varsigma)'L(t,\varsigma)\\[4pt]
\qquad\qquad -\Big[\sum_{k=0}^{r} Q_k(t,\varsigma)'\varepsilon_\varsigma(t,P)R_k(t,\varsigma)\Big]H(t,\varsigma,P)^{-1}\Big[\sum_{k=0}^{r} Q_k(t,\varsigma)'\varepsilon_\varsigma(t,P)R_k(t,\varsigma)\Big]',\\[4pt]
P(T+1,\varsigma)=0,\quad H(t,\varsigma,P)>0,\quad (t,\varsigma)\in\mathcal{N}_T\times\mathcal{D},
\end{cases}\tag{8}
$$
where
$$
H(t,\varsigma,P) = I_{n_u} + \sum_{k=0}^{r} R_k(t,\varsigma)'\varepsilon_\varsigma(t,P)R_k(t,\varsigma),\qquad
\varepsilon_\varsigma(t,P) = \sum_{\iota=1}^{\infty} p(\varsigma,\iota)P(t+1,\iota),\tag{9}
$$
and
$$
\min_{u(\cdot)\in l^2(\mathcal{N}_T;\mathbb{R}^{n_u})} J_2(y_0,\xi_0,u(\cdot)) = J_2(y_0,\xi_0,u^*(\cdot)) = \sum_{\varsigma=1}^{\infty}\pi_0(\varsigma)\,y_0'P(0,\varsigma)y_0,
$$
$$
u^*(t) = -H(t,\xi_t,P)^{-1}\Big[\sum_{k=0}^{r} Q_k(t,\xi_t)'\varepsilon_{\xi_t}(t,P)R_k(t,\xi_t)\Big]'y(t).
$$
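In practice, the coupled equations of Lemma 3 can only be iterated after truncating the countable mode set $\mathcal{D}$ and the series defining $\varepsilon_\varsigma(t,P)$. The sketch below runs the backward recursion (8)–(9) for a scalar system; the coefficients, the truncation level, and the time-invariance of the data are all assumptions made for illustration.

```python
import numpy as np

T, r, N = 5, 1, 30                       # horizon, noise terms, truncation of D
Q = 0.9 * np.ones((r + 1, N))            # scalar Q_k(t,ς), time-invariant here
R = 0.2 * np.ones((r + 1, N))            # scalar R_k(t,ς)
LtL = 1.0 * np.ones(N)                   # L(t,ς)'L(t,ς)
Ptrans = np.zeros((N, N))                # truncated transition matrix
for s in range(N):
    Ptrans[s, s] = 0.5
    Ptrans[s, min(s + 1, N - 1)] += 0.5

P = np.zeros((T + 2, N))                 # terminal condition P(T+1, ς) = 0
for t in range(T, -1, -1):
    eps = Ptrans @ P[t + 1]              # ε_ς(t,P) = Σ_ι p(ς,ι) P(t+1,ι)
    H = 1.0 + (R * R * eps).sum(axis=0)  # H = I + Σ_k R_k' ε R_k  (scalar case)
    G = (Q * R * eps).sum(axis=0)        # Σ_k Q_k' ε R_k
    P[t] = (Q * Q * eps).sum(axis=0) + LtL - G * G / H
    assert (H > 0).all()                 # required for the recursion to be valid

K = -G / H                               # feedback gain of u*(t) at t = 0
print(P[0, :3], K[:3])
```

Each backward step costs one matrix-vector product per mode; the truncation level controls how well $\varepsilon_\varsigma$ approximates the infinite series.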

3. Nash Equilibrium Points

This section focuses on solving the Nash game problem $\mathcal{P}$, and we assume that the linear, memoryless feedback strategies have the following form [28]:
$$
u(t) = G_2(t,\xi_t)y(t),\qquad v(t) = G_1(t,\xi_t)y(t).
$$
Under this assumption, we obtain the feedback Nash equilibrium points through a countably infinite set of coupled generalized algebraic Riccati equations.
Theorem 1. 
The Nash game problem P has unique Nash equilibrium points
$$
\big(u^*(t) = G_2(t,\xi_t,\Lambda_2)y(t),\quad v^*(t) = G_1(t,\xi_t,\Lambda_1)y(t)\big)
$$
iff the following coupled generalized algebraic Riccati equations,
$$
\begin{cases}
\Lambda_1(t,\varsigma) = \sum_{k=0}^{r}\left[Q_k(t,\varsigma)+R_k(t,\varsigma)G_2(t,\varsigma,\Lambda_2)\right]'\varepsilon_\varsigma(t,\Lambda_1)\left[Q_k(t,\varsigma)+R_k(t,\varsigma)G_2(t,\varsigma,\Lambda_2)\right]\\[4pt]
\qquad\qquad - L(t,\varsigma)'L(t,\varsigma) - G_2(t,\varsigma,\Lambda_2)'G_2(t,\varsigma,\Lambda_2) - G_3(t,\varsigma,\Lambda_1)H_1(t,\varsigma,\Lambda_1)^{\dagger}G_3(t,\varsigma,\Lambda_1)',\\[4pt]
\left(I - H_1(t,\varsigma,\Lambda_1)H_1(t,\varsigma,\Lambda_1)^{\dagger}\right)G_3(t,\varsigma,\Lambda_1)' = 0,\\[4pt]
\Lambda_1(T+1,\varsigma)=0,\quad H_1(t,\varsigma,\Lambda_1)\ge 0,\quad (t,\varsigma)\in\mathcal{N}_T\times\mathcal{D},
\end{cases}\tag{10}
$$
$$
G_1(t,\varsigma,\Lambda_1) = -H_1(t,\varsigma,\Lambda_1)^{\dagger}G_3(t,\varsigma,\Lambda_1)',\tag{11}
$$
$$
\begin{cases}
\Lambda_2(t,\varsigma) = \sum_{k=0}^{r}\left[Q_k(t,\varsigma)+U_k(t,\varsigma)G_1(t,\varsigma,\Lambda_1)\right]'\varepsilon_\varsigma(t,\Lambda_2)\left[Q_k(t,\varsigma)+U_k(t,\varsigma)G_1(t,\varsigma,\Lambda_1)\right]\\[4pt]
\qquad\qquad + L(t,\varsigma)'L(t,\varsigma) - G_4(t,\varsigma,\Lambda_2)H_2(t,\varsigma,\Lambda_2)^{-1}G_4(t,\varsigma,\Lambda_2)',\\[4pt]
\Lambda_2(T+1,\varsigma)=0,\quad H_2(t,\varsigma,\Lambda_2)>0,\quad (t,\varsigma)\in\mathcal{N}_T\times\mathcal{D},
\end{cases}\tag{12}
$$
$$
G_2(t,\varsigma,\Lambda_2) = -H_2(t,\varsigma,\Lambda_2)^{-1}G_4(t,\varsigma,\Lambda_2)'\tag{13}
$$
admit a group of solutions $(\Lambda_1(t,\varsigma), G_1(t,\varsigma,\Lambda_1); \Lambda_2(t,\varsigma), G_2(t,\varsigma,\Lambda_2))$ with $\Lambda_1(t,\varsigma)\le 0$, $\Lambda_2(t,\varsigma)\ge 0$ for $(t,\varsigma)\in\mathcal{N}_T\times\mathcal{D}$, where
$$
\begin{aligned}
H_1(t,\varsigma,\Lambda_1) &= \gamma^2 I_{n_v} + \sum_{k=0}^{r} U_k(t,\varsigma)'\varepsilon_\varsigma(t,\Lambda_1)U_k(t,\varsigma),\\
H_2(t,\varsigma,\Lambda_2) &= I_{n_u} + \sum_{k=0}^{r} R_k(t,\varsigma)'\varepsilon_\varsigma(t,\Lambda_2)R_k(t,\varsigma),\\
G_3(t,\varsigma,\Lambda_1) &= \sum_{k=0}^{r}\left[Q_k(t,\varsigma)+R_k(t,\varsigma)G_2(t,\varsigma,\Lambda_2)\right]'\varepsilon_\varsigma(t,\Lambda_1)U_k(t,\varsigma),\\
G_4(t,\varsigma,\Lambda_2) &= \sum_{k=0}^{r}\left[Q_k(t,\varsigma)+U_k(t,\varsigma)G_1(t,\varsigma,\Lambda_1)\right]'\varepsilon_\varsigma(t,\Lambda_2)R_k(t,\varsigma).
\end{aligned}
$$
Proof. 
Sufficiency: Since Equations (10)–(13) have a group of solutions
( Λ 1 ( t , ς ) , G 1 ( t , ς , Λ 1 ) ; Λ 2 ( t , ς ) , G 2 ( t , ς , Λ 2 ) ) ,
we construct $u^*(t)=G_2(t,\xi_t,\Lambda_2)y(t)$ and substitute it into Equation (1) to obtain
$$
\begin{cases}
y(t+1) = \left[Q_0(t,\xi_t)+R_0(t,\xi_t)G_2(t,\xi_t,\Lambda_2)\right]y(t) + U_0(t,\xi_t)v(t)\\[4pt]
\qquad\qquad + \sum_{k=1}^{r}\left\{\left[Q_k(t,\xi_t)+R_k(t,\xi_t)G_2(t,\xi_t,\Lambda_2)\right]y(t) + U_k(t,\xi_t)v(t)\right\}w_k(t),\\[4pt]
z(t) = \begin{bmatrix} L(t,\xi_t)y(t)\\ M(t,\xi_t)G_2(t,\xi_t,\Lambda_2)y(t)\end{bmatrix},\quad M(t,\xi_t)'M(t,\xi_t)=I_{n_u},\\[4pt]
y(0)=y_0\in\mathbb{R}^n,\quad \xi(0)=\xi_0\in\mathcal{D},\quad t\in\mathcal{N}_T.
\end{cases}\tag{14}
$$
By a similar argument, Lemma 1 in [29] can be generalized to infinite Markov jump systems. In fact, since $\{w_t\}_{t\in\mathcal{N}_T}$ and $\{\xi_t\}_{t\in\mathcal{N}_T}$ are mutually independent and $\{w_t\}_{t\in\mathcal{N}_T}$ is also independent of $v(t)$, we have
$$
\mathbb{E}\left[y(t+1)'\Lambda_1(t+1,\xi_{t+1})y(t+1) - y(t)'\Lambda_1(t,\xi_t)y(t)\,\middle|\,\mathcal{F}_t,\xi_t=\varsigma\right]
= \begin{bmatrix} y(t)\\ v(t)\end{bmatrix}'
\begin{bmatrix} C(t,\varsigma,\Lambda_1)-\Lambda_1(t,\varsigma) & G_3(t,\varsigma,\Lambda_1)\\ G_3(t,\varsigma,\Lambda_1)' & H_1(t,\varsigma,\Lambda_1)-\gamma^2 I_{n_v}\end{bmatrix}
\begin{bmatrix} y(t)\\ v(t)\end{bmatrix}.\tag{15}
$$
Summing both sides of (15) from $0$ to $T$ yields
$$
\mathbb{E}\left[y_{T+1}'\Lambda_1(T+1,\xi_{T+1})y_{T+1} - y_0'\Lambda_1(0,\xi_0)y_0\right]
= \sum_{t=0}^{T}\mathbb{E}\begin{bmatrix} y(t)\\ v(t)\end{bmatrix}'
\begin{bmatrix} C(t,\varsigma,\Lambda_1)-\Lambda_1(t,\varsigma) & G_3(t,\varsigma,\Lambda_1)\\ G_3(t,\varsigma,\Lambda_1)' & H_1(t,\varsigma,\Lambda_1)-\gamma^2 I_{n_v}\end{bmatrix}
\begin{bmatrix} y(t)\\ v(t)\end{bmatrix}.
$$
Further, we can obtain the following result:
$$
\begin{aligned}
J_1(y_0,\xi_0,u^*(\cdot),v(\cdot)) &= \sum_{t=0}^{T}\mathbb{E}\left[\gamma^2\|v(t)\|^2 - \|z(t)\|^2\right]\\
&= \mathbb{E}\left[y_0'\Lambda_1(0,\xi_0)y_0\right] - \mathbb{E}\left[y_{T+1}'\Lambda_1(T+1,\xi_{T+1})y_{T+1}\right]
+ \sum_{t=0}^{T}\mathbb{E}\begin{bmatrix} y(t)\\ v(t)\end{bmatrix}' D(t,\xi_t,\Lambda_1)\begin{bmatrix} y(t)\\ v(t)\end{bmatrix},
\end{aligned}\tag{16}
$$
where, for $\xi_t=\varsigma$,
$$
\begin{aligned}
D(t,\xi_t,\Lambda_1) = D(t,\varsigma,\Lambda_1) &= \begin{bmatrix} C(t,\varsigma,\Lambda_1)-L(t,\varsigma)'L(t,\varsigma)-G_2(t,\varsigma,\Lambda_2)'G_2(t,\varsigma,\Lambda_2)-\Lambda_1(t,\varsigma) & G_3(t,\varsigma,\Lambda_1)\\ G_3(t,\varsigma,\Lambda_1)' & H_1(t,\varsigma,\Lambda_1)\end{bmatrix},\\
C(t,\varsigma,\Lambda_1) &= \sum_{k=0}^{r}\left[Q_k(t,\varsigma)+R_k(t,\varsigma)G_2(t,\varsigma,\Lambda_2)\right]'\varepsilon_\varsigma(t,\Lambda_1)\left[Q_k(t,\varsigma)+R_k(t,\varsigma)G_2(t,\varsigma,\Lambda_2)\right].
\end{aligned}
$$
A combination of completing the square and (10) turns Equation (16) into
$$
\begin{aligned}
J_1(y_0,\xi_0,u^*(\cdot),v(\cdot)) &= \mathbb{E}\left[y_0'\Lambda_1(0,\xi_0)y_0\right]
+ \sum_{t=0}^{T}\mathbb{E}\,\mathbb{E}\left\{\begin{bmatrix} y(t)\\ v(t)\end{bmatrix}' D(t,\xi_t,\Lambda_1)\begin{bmatrix} y(t)\\ v(t)\end{bmatrix}\,\middle|\,\mathcal{F}_{t-1},\xi_t=\varsigma\right\}\\
&= \mathbb{E}\left[y_0'\Lambda_1(0,\xi_0)y_0\right]
+ \sum_{t=0}^{T}\mathbb{E}\,\mathbb{E}\left\{\left[v(t)-v^*(t)\right]'H_1(t,\varsigma,\Lambda_1)\left[v(t)-v^*(t)\right]\,\middle|\,\mathcal{F}_{t-1},\xi_t=\varsigma\right\},
\end{aligned}\tag{17}
$$
where $v^*(t)=G_1(t,\xi_t,\Lambda_1)y(t)$; therefore, the Nash equilibrium inequality (4) follows naturally, that is,
$$
J_1(y_0,\xi_0,u^*(\cdot),v(\cdot)) \ge J_1(y_0,\xi_0,u^*(\cdot),v^*(\cdot)) = \mathbb{E}\left[y_0'\Lambda_1(0,\xi_0)y_0\right];
$$
additionally, by plugging $v^*(t)=G_1(t,\xi_t,\Lambda_1)y(t)$ into Equation (1), Equation (1) can be converted into
$$
\begin{cases}
y(t+1) = \left[Q_0(t,\xi_t)+U_0(t,\xi_t)G_1(t,\xi_t,\Lambda_1)\right]y(t) + R_0(t,\xi_t)u(t)\\[4pt]
\qquad\qquad + \sum_{k=1}^{r}\left\{\left[Q_k(t,\xi_t)+U_k(t,\xi_t)G_1(t,\xi_t,\Lambda_1)\right]y(t) + R_k(t,\xi_t)u(t)\right\}w_k(t),\\[4pt]
z(t) = \begin{bmatrix} L(t,\xi_t)y(t)\\ M(t,\xi_t)u(t)\end{bmatrix},\quad M(t,\xi_t)'M(t,\xi_t)=I_{n_u},\\[4pt]
y(0)=y_0\in\mathbb{R}^n,\quad \xi(0)=\xi_0\in\mathcal{D},\quad t\in\mathcal{N}_T;
\end{cases}\tag{18}
$$
in the meantime, we also have
$$
J_2(y_0,\xi_0,u(\cdot),v^*(\cdot)) = \sum_{t=0}^{T}\mathbb{E}\left[y(t)'L(t,\xi_t)'L(t,\xi_t)y(t) + u(t)'u(t)\right].\tag{19}
$$
Obviously, to show that $u^*(t)$ minimizes (3), we only need to solve an LQ optimal control problem of IMJSSs that minimizes (19) subject to (18). Via Lemma 3, the Nash equilibrium inequality (5) then follows easily.
Necessity: Assume that $(u^*(t),v^*(t)) = (G_2(t,\xi_t)y(t), G_1(t,\xi_t)y(t))$ is a linear feedback Nash equilibrium point for the Nash game (4) and (5). Substituting $u^*(t)$ into Equation (1), we obtain
$$
\begin{cases}
y(t+1) = \left[Q_0(t,\xi_t)+R_0(t,\xi_t)G_2(t,\xi_t)\right]y(t) + U_0(t,\xi_t)v(t)\\[4pt]
\qquad\qquad + \sum_{k=1}^{r}\left\{\left[Q_k(t,\xi_t)+R_k(t,\xi_t)G_2(t,\xi_t)\right]y(t) + U_k(t,\xi_t)v(t)\right\}w_k(t),\\[4pt]
z(t) = \begin{bmatrix} L(t,\xi_t)y(t)\\ M(t,\xi_t)G_2(t,\xi_t)y(t)\end{bmatrix},\quad M(t,\xi_t)'M(t,\xi_t)=I_{n_u},\\[4pt]
y(0)=y_0\in\mathbb{R}^n,\quad \xi(0)=\xi_0\in\mathcal{D},\quad t\in\mathcal{N}_T.
\end{cases}\tag{20}
$$
Now the above problem is transformed into the following indefinite LQ optimal control problem with optimal solution $v^*(t)=G_1(t,\xi_t)y(t)$:
$$
\min_{v(\cdot)\in l^2(\mathcal{N}_T;\mathbb{R}^{n_v})}\Big\{J_1(y_0,\xi_0,u^*(\cdot),v(\cdot)) = \sum_{t=0}^{T}\mathbb{E}\left[\gamma^2\|v(t)\|^2 - \|z(t)\|^2\right]\Big\},\quad \text{subject to (20)}.\tag{21}
$$
It is obvious that the indefinite LQ problem (21) is well-posed. Next, we prove by mathematical induction that the coupled generalized algebraic Riccati Equation (22) is solvable:
$$
\begin{cases}
\tilde\Lambda_1(t,\varsigma) = \sum_{k=0}^{r}\left[Q_k(t,\varsigma)+R_k(t,\varsigma)G_2(t,\varsigma)\right]'\varepsilon_\varsigma(t,\tilde\Lambda_1)\left[Q_k(t,\varsigma)+R_k(t,\varsigma)G_2(t,\varsigma)\right]\\[4pt]
\qquad\qquad - L(t,\varsigma)'L(t,\varsigma) - G_2(t,\varsigma)'G_2(t,\varsigma) - G_3(t,\varsigma,\tilde\Lambda_1)H_1(t,\varsigma,\tilde\Lambda_1)^{\dagger}G_3(t,\varsigma,\tilde\Lambda_1)',\\[4pt]
\left(I - H_1(t,\varsigma,\tilde\Lambda_1)H_1(t,\varsigma,\tilde\Lambda_1)^{\dagger}\right)G_3(t,\varsigma,\tilde\Lambda_1)' = 0,\\[4pt]
\tilde\Lambda_1(T+1,\varsigma)=0,\quad H_1(t,\varsigma,\tilde\Lambda_1)\ge 0,\quad (t,\varsigma)\in\mathcal{N}_T\times\mathcal{D}.
\end{cases}\tag{22}
$$
To this end, the value function is introduced as follows:
$$
\begin{aligned}
V(\tau,y(\tau),\xi_\tau) &= \inf_{v(\cdot)\in l^2([\tau,T];\mathbb{R}^{n_v})} J_1(y(\tau),\xi_\tau,u^*(\cdot),v(\cdot))\\
&= \inf_{v(\cdot)\in l^2([\tau,T];\mathbb{R}^{n_v})}\sum_{t=\tau}^{T}\mathbb{E}\left\{-y(t)'\left[L(t,\xi_t)'L(t,\xi_t)+G_2(t,\xi_t)'G_2(t,\xi_t)\right]y(t) + \gamma^2 v(t)'v(t)\right\}.\tag{23}
\end{aligned}
$$
For $\tau=T$, it follows from $\tilde\Lambda_1(T+1,\varsigma)=0$ that $\tilde\Lambda_1(T,\varsigma)$ exists and
$$
\begin{aligned}
V(T,y(T),\xi_T) &= \inf_{v(T)}\mathbb{E}\left\{-y(T)'\left[L(T,\xi_T)'L(T,\xi_T)+G_2(T,\xi_T)'G_2(T,\xi_T)\right]y(T) + \gamma^2 v(T)'v(T)\right\}\\
&= \inf_{v(T)}\mathbb{E}\left[y(T)'\tilde\Lambda_1(T,\xi_T)y(T) + \gamma^2 v(T)'v(T)\right].
\end{aligned}
$$
It is noteworthy that Equation (22) holds for $t=T$ with $\tilde\Lambda_1(T,\varsigma)$, and the optimal value function is
$$
V(T,y(T),\xi_T) = \mathbb{E}\left[y(T)'\tilde\Lambda_1(T,\xi_T)y(T)\right],
$$
with
$$
v^*(T) = 0 = -H_1(T,\xi_T,\tilde\Lambda_1)^{\dagger}G_3(T,\xi_T,\tilde\Lambda_1)'y(T).
$$
Then, for $t=\tau$, assume that the coupled generalized algebraic Riccati Equation (22) has a solution $\tilde\Lambda_1(\tau,\varsigma)$, $\varsigma\in\mathcal{D}$. Meanwhile, the optimal value function is
$$
V(\tau,y(\tau),\xi_\tau) = \sum_{\varsigma=1}^{\infty}\pi_\tau(\varsigma)\,\mathbb{E}\left[y(\tau)'\tilde\Lambda_1(\tau,\varsigma)y(\tau)\right],
$$
and the optimal control is
$$
v^*(\tau) = -H_1(\tau,\xi_\tau,\tilde\Lambda_1)^{\dagger}G_3(\tau,\xi_\tau,\tilde\Lambda_1)'y(\tau).
$$
Our goal now is to show, for $t=\tau-1$, the existence of a solution $\tilde\Lambda_1(\tau-1,\varsigma)$, $\varsigma\in\mathcal{D}$, to Equation (22). With the aid of (20), the dynamic programming optimality principle gives
$$
\begin{aligned}
V(\tau-1,y(\tau-1),\xi_{\tau-1}) &= \inf_{v(\tau-1)}\mathbb{E}\Big\{-y(\tau-1)'\left[L(\tau-1,\xi_{\tau-1})'L(\tau-1,\xi_{\tau-1})+G_2(\tau-1,\xi_{\tau-1})'G_2(\tau-1,\xi_{\tau-1})\right]y(\tau-1)\\
&\qquad\qquad + \gamma^2 v(\tau-1)'v(\tau-1) + V(\tau,y(\tau),\xi_\tau)\Big\}\\
&= \inf_{v(\tau-1)}\mathbb{E}\Big[y(\tau-1)'\Gamma(\tau-1,\xi_{\tau-1},\tilde\Lambda_1)y(\tau-1) + 2v(\tau-1)'G_3(\tau-1,\xi_{\tau-1},\tilde\Lambda_1)'y(\tau-1)\\
&\qquad\qquad + v(\tau-1)'H_1(\tau-1,\xi_{\tau-1},\tilde\Lambda_1)v(\tau-1)\Big],
\end{aligned}
$$
where
$$
\begin{aligned}
\Gamma(\tau-1,\xi_{\tau-1},\tilde\Lambda_1) &= \sum_{k=0}^{r}\left[Q_k(\tau-1,\xi_{\tau-1})+R_k(\tau-1,\xi_{\tau-1})G_2(\tau-1,\xi_{\tau-1})\right]'\varepsilon_{\xi_{\tau-1}}(\tau-1,\tilde\Lambda_1)\\
&\qquad\cdot\left[Q_k(\tau-1,\xi_{\tau-1})+R_k(\tau-1,\xi_{\tau-1})G_2(\tau-1,\xi_{\tau-1})\right]\\
&\qquad - L(\tau-1,\xi_{\tau-1})'L(\tau-1,\xi_{\tau-1}) - G_2(\tau-1,\xi_{\tau-1})'G_2(\tau-1,\xi_{\tau-1}).
\end{aligned}
$$
Since results similar to Lemma 2 apply in the infinite Markov jump case, it follows that $\tilde\Lambda_1(\tau-1,\varsigma)$, $\varsigma\in\mathcal{D}$, satisfies the following equation:
$$
\begin{cases}
\tilde\Lambda_1(\tau-1,\varsigma) = \Gamma(\tau-1,\varsigma,\tilde\Lambda_1) - G_3(\tau-1,\varsigma,\tilde\Lambda_1)H_1(\tau-1,\varsigma,\tilde\Lambda_1)^{\dagger}G_3(\tau-1,\varsigma,\tilde\Lambda_1)',\\[4pt]
\left(I - H_1(\tau-1,\varsigma,\tilde\Lambda_1)H_1(\tau-1,\varsigma,\tilde\Lambda_1)^{\dagger}\right)G_3(\tau-1,\varsigma,\tilde\Lambda_1)' = 0,\\[4pt]
\tilde\Lambda_1(T+1,\varsigma)=0,\quad H_1(\tau-1,\varsigma,\tilde\Lambda_1)\ge 0,\quad \varsigma\in\mathcal{D}.
\end{cases}
$$
The following conclusions can be obtained in the same manner as (17), that is,
$$
V(\tau-1,y(\tau-1),\xi_{\tau-1}) = \sum_{\varsigma=1}^{\infty}\pi_{\tau-1}(\varsigma)\,\mathbb{E}\left[y(\tau-1)'\tilde\Lambda_1(\tau-1,\varsigma)y(\tau-1)\right]
$$
and
$$
v^*(\tau-1) = -H_1(\tau-1,\xi_{\tau-1},\tilde\Lambda_1)^{\dagger}G_3(\tau-1,\xi_{\tau-1},\tilde\Lambda_1)'y(\tau-1).
$$
So far, one can infer that there exists $\tilde\Lambda_1(t,\varsigma)$, $(t,\varsigma)\in\mathcal{N}_T\times\mathcal{D}$, satisfying Equation (22). Furthermore, an optimal solution of the indefinite LQ problem (21) is $v^*(t) = -H_1(t,\xi_t,\tilde\Lambda_1)^{\dagger}G_3(t,\xi_t,\tilde\Lambda_1)'y(t)$ with $G_1(t,\xi_t) = -H_1(t,\xi_t,\tilde\Lambda_1)^{\dagger}G_3(t,\xi_t,\tilde\Lambda_1)' = G_1(t,\xi_t,\tilde\Lambda_1)$. Combining the above results with Equation (22), it follows that $\tilde\Lambda_1(t,\varsigma) = \Lambda_1(t,\varsigma)$, $(t,\varsigma)\in\mathcal{N}_T\times\mathcal{D}$. It only remains to show $\Lambda_1(t,\varsigma)\le 0$, $(t,\varsigma)\in\mathcal{N}_T\times\mathcal{D}$. In fact,
$$
J_1(y_0,\xi_0,u^*(\cdot),v^*(\cdot)) = \sum_{\varsigma=1}^{\infty}\pi_0(\varsigma)\,y_0'\Lambda_1(0,\varsigma)y_0 \le J_1(y_0,\xi_0,u^*(\cdot),0) = -\sum_{t=0}^{T}\mathbb{E}\left[\|z(t)\|^2\right] \le 0;
$$
in addition, if we plug $v^*(t)=G_1(t,\xi_t)y(t)$ into Equation (1), then (18) is obtained. We can deduce that (5) is a standard LQ control problem subject to (18). Using Lemma 3, $\Lambda_2(t,\varsigma)\ge 0$, $(t,\varsigma)\in\mathcal{N}_T\times\mathcal{D}$, follows easily. Moreover, $J_2(y_0,\xi_0,u^*(\cdot),v^*(\cdot)) = \sum_{\varsigma=1}^{\infty}\pi_0(\varsigma)\,y_0'\Lambda_2(0,\varsigma)y_0$ with $u^*(t) = G_2(t,\xi_t)y(t) = G_2(t,\xi_t,\Lambda_2)y(t) = -H_2(t,\xi_t,\Lambda_2)^{-1}G_4(t,\xi_t,\Lambda_2)'y(t)$. The proof is complete. □
Remark 1. 
Note that Theorem 1 can be considered as an extension of [22] to infinite jumps and multiplicative noise and a discrete-time version of [23].
Remark 2. 
If the infinite horizon cost function is considered, the problem is much more challenging owing to the stabilization requirement for the closed-loop system. As discussed in [26], the infinite horizon LQ Nash game has been considered for continuous-time IMJSSs.

4. Application to Special Case

In the previous section, the Nash game problem for discrete-time IMJSSs was solved. Special cases arise when $\gamma$ takes particular values or when $v(t)$ is regarded as an exogenous disturbance. As such a special case, we discuss the finite horizon robust $H_2/H_\infty$ control problem from a new perspective and further explore the relationship between Nash equilibrium points and finite horizon $H_2/H_\infty$ control with some remarks.

4.1. Finite Horizon $H_2/H_\infty$ Control

In the Nash game problem, $v(t)$ in system (1) is seen as one of the players; in most control systems, however, it is more natural to consider $v(t)$ as an exogenous disturbance. Let $\gamma>0$ be a prescribed disturbance attenuation level. In consequence, the original Nash game problem turns into finding a controller $u^*(\cdot)\in l^2(\mathcal{N}_T;\mathbb{R}^{n_u})$ such that
(i)
$\|\mathcal{L}_T\| < \gamma$ for the closed-loop system of Equation (1) with $y_0=0$;
(ii)
if v * ( · ) exists, u * ( · ) minimizes the output energy:
$$
J_2(y_0,\xi_0,u(\cdot),v^*(\cdot)) = \sum_{t=0}^{T}\mathbb{E}\left[\|z(t)\|^2\right],
$$
where $v^*(\cdot)$ is the worst-case disturbance, i.e.,
$$
v^*(\cdot) = \arg\min_{v} J_1(y_0,\xi_0,u^*(\cdot),v(\cdot)) = \arg\min_{v}\sum_{t=0}^{T}\mathbb{E}\left[\gamma^2\|v(t)\|^2 - \|z(t)\|^2\right].
$$
In other words, the above problem is called the finite horizon $H_2/H_\infty$ control problem. The perturbation operator $\mathcal{L}_T$ is defined as in [29,30].
As an application of the Nash game problem, it can be obtained from Theorem 1 that
$$
\|\mathcal{L}_T\| \le \gamma\tag{31}
$$
and
$$
J_2(y_0,\xi_0,u^*(\cdot),v^*(\cdot)) \le J_2(y_0,\xi_0,u(\cdot),v^*(\cdot)),\tag{32}
$$
where u * ( t ) and v * ( t ) are defined in Theorem 1.
Remark 3. 
In accordance with the definition of the $H_2/H_\infty$ control problem, it is crucial to note that $\|\mathcal{L}_T\|\le\gamma$ does not reduce to $\|\mathcal{L}_T\|<\gamma$. If $\|\mathcal{L}_T\|\le\gamma$ can be replaced by $\|\mathcal{L}_T\|<\gamma$, then the following result is obtained naturally.
Theorem 2. 
For system (1), assume the following coupled generalized algebraic Riccati equations:
$$
\begin{cases}
\Lambda_1(t,\varsigma) = \sum_{k=0}^{r}\left[Q_k(t,\varsigma)+R_k(t,\varsigma)G_2(t,\varsigma,\Lambda_2)\right]'\varepsilon_\varsigma(t,\Lambda_1)\left[Q_k(t,\varsigma)+R_k(t,\varsigma)G_2(t,\varsigma,\Lambda_2)\right]\\[4pt]
\qquad\qquad - L(t,\varsigma)'L(t,\varsigma) - G_2(t,\varsigma,\Lambda_2)'G_2(t,\varsigma,\Lambda_2) - G_3(t,\varsigma,\Lambda_1)H_1(t,\varsigma,\Lambda_1)^{-1}G_3(t,\varsigma,\Lambda_1)',\\[4pt]
\Lambda_1(T+1,\varsigma)=0,\quad H_1(t,\varsigma,\Lambda_1)>0,\quad (t,\varsigma)\in\mathcal{N}_T\times\mathcal{D},
\end{cases}\tag{35}
$$
$$
G_1(t,\varsigma,\Lambda_1) = -H_1(t,\varsigma,\Lambda_1)^{-1}G_3(t,\varsigma,\Lambda_1)',\tag{36}
$$
$$
\begin{cases}
\Lambda_2(t,\varsigma) = \sum_{k=0}^{r}\left[Q_k(t,\varsigma)+U_k(t,\varsigma)G_1(t,\varsigma,\Lambda_1)\right]'\varepsilon_\varsigma(t,\Lambda_2)\left[Q_k(t,\varsigma)+U_k(t,\varsigma)G_1(t,\varsigma,\Lambda_1)\right]\\[4pt]
\qquad\qquad + L(t,\varsigma)'L(t,\varsigma) - G_4(t,\varsigma,\Lambda_2)H_2(t,\varsigma,\Lambda_2)^{-1}G_4(t,\varsigma,\Lambda_2)',\\[4pt]
\Lambda_2(T+1,\varsigma)=0,\quad H_2(t,\varsigma,\Lambda_2)>0,\quad (t,\varsigma)\in\mathcal{N}_T\times\mathcal{D},
\end{cases}\tag{37}
$$
$$
G_2(t,\varsigma,\Lambda_2) = -H_2(t,\varsigma,\Lambda_2)^{-1}G_4(t,\varsigma,\Lambda_2)'\tag{38}
$$
admit solutions $(\Lambda_1(t,\varsigma),G_1(t,\varsigma,\Lambda_1);\Lambda_2(t,\varsigma),G_2(t,\varsigma,\Lambda_2))$ with $\Lambda_1(t,\varsigma)\le 0$, $\Lambda_2(t,\varsigma)\ge 0$ for $(t,\varsigma)\in\mathcal{N}_T\times\mathcal{D}$; then the finite horizon $H_2/H_\infty$ control problem is solved by $u^*(t)=G_2(t,\xi_t,\Lambda_2)y(t)$, $v^*(t)=G_1(t,\xi_t,\Lambda_1)y(t)$. Conversely, if the finite horizon $H_2/H_\infty$ control problem has the solution $u^*(t)=G_2(t,\xi_t,\Lambda_2)y(t)$, $v^*(t)=G_1(t,\xi_t,\Lambda_1)y(t)$, and $H_1(t,\varsigma,\Lambda_1)>0$, then the coupled generalized algebraic Riccati Equations (35)–(38) admit a group of solutions $(\Lambda_1(t,\varsigma),G_1(t,\varsigma,\Lambda_1);\Lambda_2(t,\varsigma),G_2(t,\varsigma,\Lambda_2))$ with $\Lambda_1(t,\varsigma)\le 0$, $\Lambda_2(t,\varsigma)\ge 0$ for $(t,\varsigma)\in\mathcal{N}_T\times\mathcal{D}$.
Proof. 
Sufficiency: From the sufficiency part of Theorem 1, we only need to show $\|\mathcal{L}_T\|<\gamma$. In fact, in light of the definition of the perturbation operator $\mathcal{L}_T$ with $y_0=0$, and noting the condition $H_1(t,\varsigma,\Lambda_1)>0$ in (35), Equation (17) implies that $J_1(0,\xi_0,u^*(\cdot),v(\cdot))=0$ iff $v(t)=v^*(t)=G_1(t,\xi_t,\Lambda_1)y(t)$. Substituting $v^*(t)$ into Equation (14), the resulting closed-loop system with initial state $y_0=0$ has state response $y(t)\equiv 0$, $t\in\mathcal{N}_T$; hence $v(t)=v^*(t)=0$. The inescapable conclusion is that $J_1(0,\xi_0,u^*(\cdot),v(\cdot))>0$ for every $v(\cdot)\not\equiv 0$, which stands for $\|\mathcal{L}_T\|<\gamma$.
Necessity: Assume that the finite horizon $H_2/H_\infty$ control problem has the solution $u^*(t)=G_2(t,\xi_t,\Lambda_2)y(t)$, $v^*(t)=G_1(t,\xi_t,\Lambda_1)y(t)$. A combination of (31) and (32) gives $J_1(y_0,\xi_0,u^*(\cdot),v^*(\cdot))\le J_1(y_0,\xi_0,u^*(\cdot),v(\cdot))$ and $J_2(y_0,\xi_0,u^*(\cdot),v^*(\cdot))\le J_2(y_0,\xi_0,u(\cdot),v^*(\cdot))$ for all $(u(\cdot),v(\cdot))\in l^2(\mathcal{N}_T;\mathbb{R}^{n_u})\times l^2(\mathcal{N}_T;\mathbb{R}^{n_v})$. It can then be deduced from the necessity part of Theorem 1 that Equations (10)–(13) have solutions. Keeping in mind that $H_1(t,\varsigma,\Lambda_1)>0$, Equations (35)–(38) admit a group of solutions $(\Lambda_1(t,\varsigma),G_1(t,\varsigma,\Lambda_1);\Lambda_2(t,\varsigma),G_2(t,\varsigma,\Lambda_2))$ with $\Lambda_1(t,\varsigma)\le 0$, $\Lambda_2(t,\varsigma)\ge 0$ for $(t,\varsigma)\in\mathcal{N}_T\times\mathcal{D}$. The proof is complete. □
Remark 4. 
Although the finite horizon $H_2/H_\infty$ control problem has been solved in [30], we derive a similar result from the new perspective of a Nash game.
Remark 5. 
By comparing Theorem 1 with Theorem 2, the existence of Nash equilibrium points and the solvability of finite horizon $H_2/H_\infty$ control are clearly not equivalent for system (1), which differs from the continuous-time case described in [26]. The main cause of the inequivalence is that the condition $H_1(t,\varsigma,\Lambda_1)>0$ of Equation (35) need not hold at a Nash equilibrium; namely, $\|\mathcal{L}_T\|<\gamma$ is not equivalent to $\|\mathcal{L}_T\|\le\gamma$.

4.2. Some Remarks on Nash Equilibrium Points

As a matter of fact, only when $H_1(t,\varsigma,\Lambda_1)>0$ is the equivalence between Nash equilibrium points and finite horizon $H_2/H_\infty$ control valid. Accordingly, the relationship between the Nash game and the $H_2/H_\infty$ control problem can be summarized in the following theorem.
Theorem 3. 
For system (1), under the condition of
$$
H_1(t,\varsigma,\Lambda_1) = \gamma^2 I_{n_v} + \sum_{k=0}^{r} U_k(t,\varsigma)'\varepsilon_\varsigma(t,\Lambda_1)U_k(t,\varsigma) > 0,
$$
the following statements are equivalent:
(i) 
There exists a linear memoryless Nash equilibrium point $(u^*(\cdot),v^*(\cdot))$ with $u^*(t)=G_2(t,\xi_t,\Lambda_2)y(t)$, $v^*(t)=G_1(t,\xi_t,\Lambda_1)y(t)$;
(ii) 
The finite horizon $H_2/H_\infty$ control problem is solvable with $u^*(t)=G_2(t,\xi_t,\Lambda_2)y(t)$, $v^*(t)=G_1(t,\xi_t,\Lambda_1)y(t)$;
(iii) 
The coupled generalized algebraic Riccati Equations (35)–(38) have a group of solutions $(\Lambda_1(t,\varsigma),G_1(t,\varsigma,\Lambda_1);\Lambda_2(t,\varsigma),G_2(t,\varsigma,\Lambda_2))$ with $\Lambda_1(t,\varsigma)\le 0$, $\Lambda_2(t,\varsigma)\ge 0$ for $(t,\varsigma)\in\mathcal{N}_T\times\mathcal{D}$.
Proof. 
Theorem 3 can be demonstrated through Theorem 1 and Theorem 2. □
Remark 6. 
Keep in mind that when $H_1(t,\varsigma,\Lambda_1)>0$, $H_1(t,\varsigma,\Lambda_1)^{\dagger}=H_1(t,\varsigma,\Lambda_1)^{-1}$; at this point, Equations (10)–(13) coincide with Equations (35)–(38). In other words, for system (1), the existence of Nash equilibrium points, the solvability of finite horizon $H_2/H_\infty$ control, and the solvability of Equations (35)–(38) are equivalent. In addition, under the restriction $H_1(t,\varsigma,\Lambda_1)>0$, a unified treatment of $H_2$, $H_\infty$ and $H_2/H_\infty$ control can be developed as in [22].

5. Numerical Example

In this section, to solve the coupled generalized algebraic Riccati Equations (35)–(38), we provide an iterative algorithm, which can be summarized as follows:
(i)
When $t=T$, the terminal conditions $\Lambda_1(T+1,\varsigma)=0$ and $\Lambda_2(T+1,\varsigma)=0$ yield $H_1(T,\varsigma,\Lambda_1)$ and $H_2(T,\varsigma,\Lambda_2)$;
(ii)
Solving (36) and (38), $G_1(T,\varsigma,\Lambda_1)$ and $G_2(T,\varsigma,\Lambda_2)$ can be computed;
(iii)
Computing (35) and (37) then gives $\Lambda_1(T,\varsigma)\le 0$ and $\Lambda_2(T,\varsigma)\ge 0$;
(iv)
Repeating the above procedures for $t=T-1,T-2,\ldots,0$, we can compute $\Lambda_1(t,\varsigma)\le 0$, $\Lambda_2(t,\varsigma)\ge 0$, $G_1(t,\varsigma,\Lambda_1)$, and $G_2(t,\varsigma,\Lambda_2)$.
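Steps (i)–(iv) can be sketched numerically for a scalar system once the mode set and the series in $\varepsilon_\varsigma$ are truncated. All coefficients below are illustrative assumptions (they are not the data of Example 1), and the inner Gauss–Seidel sweep used to resolve the mutual coupling between $G_1$ and $G_2$ at each time step is likewise an assumption, not a procedure stated in the paper.

```python
import numpy as np

T, r, N, gamma = 3, 1, 25, 1.0          # horizon, noise terms, truncation, γ
Q = 0.8 * np.ones((r + 1, N))           # scalar Q_k(t,ς) (time-invariant here)
R = 0.3 * np.ones((r + 1, N))           # scalar R_k(t,ς)
U = 0.2 * np.ones((r + 1, N))           # scalar U_k(t,ς)
LtL = 1.0 * np.ones(N)                  # L(t,ς)'L(t,ς)
P = np.zeros((N, N))                    # truncated transition matrix
for s in range(N):
    P[s, s] = 0.5
    P[s, min(s + 1, N - 1)] += 0.5

Lam1 = np.zeros((T + 2, N))             # Λ1, terminal condition Λ1(T+1,ς) = 0
Lam2 = np.zeros((T + 2, N))             # Λ2, terminal condition Λ2(T+1,ς) = 0
for t in range(T, -1, -1):
    e1, e2 = P @ Lam1[t + 1], P @ Lam2[t + 1]      # ε_ς(t,Λ1), ε_ς(t,Λ2)
    # step (i): H1 = γ²I + Σ U'ε(Λ1)U and H2 = I + Σ R'ε(Λ2)R
    H1 = gamma**2 + (U * U * e1).sum(axis=0)
    H2 = 1.0 + (R * R * e2).sum(axis=0)
    assert (H1 > 0).all() and (H2 > 0).all()       # prerequisites (Remark 7)
    # step (ii): the gains are mutually coupled through G3, G4, so a
    # fixed-point sweep is used (an assumption, not from the paper)
    G2 = np.zeros(N)
    for _ in range(20):
        G3 = ((Q + R * G2) * U * e1).sum(axis=0)   # Σ (Q+R G2)' ε(Λ1) U
        G1 = -G3 / H1
        G4 = ((Q + U * G1) * R * e2).sum(axis=0)   # Σ (Q+U G1)' ε(Λ2) R
        G2 = -G4 / H2
    # step (iii): update Λ1(t,ς) ≤ 0 and Λ2(t,ς) ≥ 0
    A1, A2 = Q + R * G2, Q + U * G1
    Lam1[t] = (A1 * A1 * e1).sum(axis=0) - LtL - G2**2 - G3**2 / H1
    Lam2[t] = (A2 * A2 * e2).sum(axis=0) + LtL - G4**2 / H2
print(Lam1[0, :3], Lam2[0, :3])
```

With these data the recursion preserves the sign pattern of Theorem 1: every term of the $\Lambda_1$ update is nonpositive once $\varepsilon_\varsigma(t,\Lambda_1)\le 0$, so $\Lambda_1(t,\varsigma)\le 0$ and $\Lambda_2(t,\varsigma)\ge 0$ propagate backward from the terminal conditions.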
Remark 7. 
It should be noted that for the coupled generalized algebraic Riccati Equations (35)–(38), H 1 ( t , ς , Λ 1 ) > 0 and H 2 ( t , ς , Λ 2 ) > 0 are prerequisites for the effectiveness of the above iterative algorithm. Similarly, we can easily derive the algorithm used to solve the coupled generalized algebraic Riccati Equations (10)–(13).
Next, a numerical example will be presented to show the validity of the proposed method.
Example 1. 
Consider a three-stage one-dimensional discrete-time IMJSS with the coefficients listed in Table 1.
For { ξ t } t ∈ N_T, the elements of P = [ p(ς, ι) ] are given by p(ς, ς) = 1/2, p(ς, ς + 1) = 1/2 and p(ς, ι) = 0 for ς ∈ D, ι ∈ D \ {ς, ς + 1}. Setting γ = 2 2, the coupled generalized algebraic Riccati Equations (10)–(13) are solved by the above iterative algorithm, with
G1(0, ς, Λ1) = 3(ς + 1)/(3(ς + 1)² − 4), G2(0, ς, Λ2) = 0.53, Λ1(0, ς) = 0.36 + 3/(6(ς + 1)² − 8) + ς/(ς + 1) ≥ 0, Λ2(0, ς) = 0.55 + (110 + 33(ς + 1)² + 27(ς + 1)⁴)/(72(ς + 1)² + 48) + ς/(ς + 1) ≥ 0.
Therefore, the Nash equilibrium points and the optimal H2/H∞ controller of the considered system are obtained.
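In practice, the countably infinite mode set of the chain must be truncated to finitely many modes before the iteration can run. A small sketch of one such truncation of the transition structure above; the mode count N and the choice of folding the overflow mass p(N−1, N) onto the last retained mode are our illustrative assumptions, not prescribed by the paper:

```python
import numpy as np

def truncated_transition_matrix(N):
    """Truncated matrix for the example chain p(m, m) = p(m, m + 1) = 1/2.

    Only the first N modes are kept; the mass that would leave the
    truncated set from the last mode is absorbed there so that every
    row still sums to one (a truncation convention chosen here).
    """
    P = np.zeros((N, N))
    for m in range(N - 1):
        P[m, m] = 0.5      # stay in the current mode
        P[m, m + 1] = 0.5  # move to the next mode
    P[N - 1, N - 1] = 1.0  # absorb the tail in the last retained mode
    return P
```

Each row of the resulting matrix is a valid probability distribution, so the truncated chain can drive a finite-mode approximation of the iterative algorithm.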

6. Conclusions

This paper mainly explores a finite horizon LQ non-zero sum Nash game for discrete-time IMJSSs, whose dynamics are governed by a Markov chain with a countably infinite state space. The Nash equilibrium points of the considered system are characterized by a countably infinite set of coupled generalized algebraic Riccati equations. Then, from this new Nash-game perspective, the finite horizon mixed H2/H∞ control problem is treated as a special case, with some remarks, and the relationship between the Nash game and the H2/H∞ control problem is summarized. The contents of this paper extend and improve the previous works [22,23] on the MJSS case. In fact, to handle the difficulties caused by the countable Markov chain, we introduce infinite-dimensional Banach spaces whose elements are countably infinite sequences of linear bounded operators. In addition, to overcome the difficulty of solving a countably infinite set of coupled generalized algebraic Riccati equations, we present an iterative algorithm. In the future, the infinite horizon Nash game for IMJSSs can be considered.

Author Contributions

Conceptualization, Y.L.; methodology, Y.L.; formal analysis, Y.L.; investigation, Z.W. and X.L.; writing—original draft preparation, Y.L.; writing—review and editing, Y.L.; supervision, X.L. and Z.W.; project administration, Y.L.; funding acquisition, Y.L., Z.W. and X.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of Qingdao under Grant 23-2-1-7-zyyd-jch, the Social Science Planning and Research Special Project of Shandong Province under Grant 22CSDJ43, the Natural Science Foundation of China under Grant 62273212, the Natural Science Foundation of Shandong Province under Grant ZR2020MF062, and the People Benefit Project of Qingdao under Grant 22-3-7-smjk-16-nsh.

Data Availability Statement

Not applicable.

Acknowledgments

We would like to thank the anonymous reviewers for their constructive suggestions to improve the quality of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Shafieepoorfard, E.; Raginsky, M.; Meyn, S.P. Rationally inattentive control of Markov processes. SIAM J. Control Optim. 2016, 54, 987–1016. [Google Scholar] [CrossRef]
  2. Veretennikov, A.Y.; Veretennikova, M.A. On improved bounds and conditions for the convergence of Markov chains. Izv. Math. 2022, 86, 92–125. [Google Scholar] [CrossRef]
  3. Khasminskii, R.Z. Stability of regime-switching stochastic differential equations. Probl. Inform. Transm. 2012, 48, 259–270. [Google Scholar] [CrossRef]
  4. Li, F.; Xu, S.; Shen, H.; Zhang, Z. Extended dissipativity-based control for hidden Markov jump singularly perturbed systems subject to general probabilities. IEEE Trans. Syst. Man Cybern. Syst. 2021, 51, 5752–5761. [Google Scholar] [CrossRef]
  5. Wang, L.; Wu, Z.G.; Shen, Y. Asynchronous mean stabilization of positive jump systems with piecewise-homogeneous Markov chain. IEEE Trans. Circuits Syst. II Exp. Briefs 2021, 68, 3266–3270. [Google Scholar] [CrossRef]
  6. Wang, B.; Zhu, Q. Stability analysis of discrete-time semi-Markov jump linear systems with partly unknown semi-Markov kernel. Syst. Control Lett. 2020, 140, 104688. [Google Scholar] [CrossRef]
  7. Zhao, X.; Deng, F.; Gao, W. Exponential stability of stochastic Markovian jump systems with time-varying and distributed delays. Sci. China Inf. Sci. 2021, 64, 209202:1–209202:3. [Google Scholar] [CrossRef]
  8. Han, X.; Wu, K.N.; Niu, Y. Asynchronous boundary control of Markov jump Neural networks with diffusion terms. IEEE Trans. Cybern. 2023, 53, 4962–4971. [Google Scholar] [CrossRef]
  9. Xue, M.; Yan, H.; Zhang, H.; Shen, H.; Peng, S. Dissipativity-based filter design for Markov jump systems with packet loss compensation. Automatica 2021, 133, 109843. [Google Scholar] [CrossRef]
  10. Hou, T.; Ma, H. Exponential stability for discrete-time infinite Markov jump systems. IEEE Trans. Autom. Control. 2016, 61, 4241–4246. [Google Scholar] [CrossRef]
  11. Ma, H.; Jia, Y. Stability analysis for stochastic differential equations with infinite Markovian switchings. J. Math. Anal. Appl. 2016, 435, 593–605. [Google Scholar] [CrossRef]
  12. Song, R.; Zhu, Q. Stability of linear stochastic delay differential equations with infinite Markovian switchings. Int. J. Robust Nonlinear Control 2018, 28, 825–837. [Google Scholar] [CrossRef]
  13. Hou, T.; Liu, Y.; Deng, F. Stability for discrete-time uncertain systems with infinite Markov jump and time-delay. Sci. China Inf. Sci. 2021, 64, 152202:1–152202:11. [Google Scholar] [CrossRef]
  14. Hou, T.; Liu, Y.; Deng, F. Finite horizon H2/H∞ control for SDEs with infinite Markovian jumps. Nonlinear Anal. Hybrid Syst. 2019, 34, 108–120. [Google Scholar] [CrossRef]
  15. Liu, Y.; Hou, T. Robust H2/H∞ fuzzy filtering for nonlinear stochastic systems with infinite Markov jump. J. Syst. Sci. Complex. 2020, 33, 1023–1039. [Google Scholar] [CrossRef]
  16. Dockner, E.J.; Jorgensen, S.; Long, N.V. Differential Games in Economics and Management Science; Cambridge University Press: Cambridge, UK, 2000. [Google Scholar]
  17. Chen, B.S.; Tseng, C.S.; Uang, H.J. Fuzzy differential games for nonlinear stochastic systems: Suboptimal approach. IEEE Trans. Fuzzy Syst. 2002, 10, 222–233. [Google Scholar] [CrossRef]
  18. Lin, Y.; Zhang, T.; Zhang, W. Infinite horizon linear quadratic Pareto game of the stochastic singular systems. J. Frankl. Inst. 2018, 355, 4436–4452. [Google Scholar] [CrossRef]
  19. Moon, J. Linear-quadratic stochastic leader-follower differential games for Markov jump-diffusion models. Automatica 2023, 147, 110713. [Google Scholar] [CrossRef]
  20. Gao, X.; Deng, F.; Zeng, P. Zero-sum game-based security control of unknown nonlinear Markov jump systems under false data injection attacks. Int. J. Robust Nonlinear Control 2022. Early Access. [Google Scholar] [CrossRef]
  21. Dufour, F.; Prieto-Rumeau, T. Stationary Markov Nash equilibria for nonzero-sum constrained ARAT Markov games. SIAM J. Control Optim. 2022, 60, 945–967. [Google Scholar] [CrossRef]
  22. Hou, T.; Zhang, W. A game-based control design for discrete-time Markov jump systems with multiplicative noise. IET Control Theory Appl. 2013, 7, 773–783. [Google Scholar] [CrossRef]
  23. Sheng, L.; Zhang, W.; Gao, M. Relationship between Nash equilibrium strategies and H2/H∞ control of stochastic Markov jump systems with multiplicative noise. IEEE Trans. Autom. Control. 2014, 59, 2592–2597. [Google Scholar] [CrossRef]
  24. Sheng, L.; Zhang, W.; Gao, M. Some remarks on infinite horizon stochastic H2/H∞ control with (x, u, v) dependent noise and Markov jumps. J. Frankl. Inst. 2015, 352, 3929–3946. [Google Scholar] [CrossRef]
  25. Dragan, V.; Morozan, T.; Stoica, A.M. Mathematical Methods in Robust Control of Linear Stochastic Systems, 2nd ed.; Springer: New York, NY, USA, 2013. [Google Scholar]
  26. Liu, Y.; Hou, T. Infinite horizon LQ Nash Games for SDEs with infinite jumps. Asian J. Control 2021, 23, 2431–2443. [Google Scholar] [CrossRef]
  27. Rami, M.A.; Chen, X.; Zhou, X. Discrete-time indefinite LQ control with state and control dependent noises. J. Glob. Optim. 2002, 23, 245–265. [Google Scholar] [CrossRef]
  28. Basar, T.; Olsder, G.J. Dynamic Noncooperative Game Theory; SIAM: Philadelphia, PA, USA, 1999. [Google Scholar]
  29. Hou, T.; Zhang, W.; Ma, H. Finite horizon H2/H∞ control for discrete-time stochastic systems with Markovian jumps and multiplicative noise. IEEE Trans. Autom. Control. 2010, 55, 1185–1191. [Google Scholar]
  30. Wang, J.; Hou, T. Finite horizon H2/H∞ control for discrete-time time-varying stochastic systems with infinite Markov jumps. In Proceedings of the 36th Chinese Control Conference, Dalian, China, 26–28 July 2017. [Google Scholar]
Table 1. Coefficients of the considered system.

Coefficients | t = 0     | t = 1     | t = 2
Q0(t, ς)     | 1/2       | 1         | 1/(3(ς + 1))
Q1(t, ς)     | 1         | 1         | 1/(ς + 1)
R0(t, ς)     | 1         | 1/(ς + 1) | ς/(ς + 1)
R1(t, ς)     | 1         | 1/(ς + 1) | 1/(2(ς + 1)²)
U0(t, ς)     | 1/(ς + 1) | 1         | 1/(ς + 1)
U1(t, ς)     | 1/(ς + 1) | 1         | 1
L(t, ς)      | ς/(ς + 1) | 1         | 1
M(t, ς)      | 1         | 1         | 1

Liu, Y.; Wang, Z.; Lin, X. Non-Zero Sum Nash Game for Discrete-Time Infinite Markov Jump Stochastic Systems with Applications. Axioms 2023, 12, 882. https://0-doi-org.brum.beds.ac.uk/10.3390/axioms12090882
