Article

Finite-Time H∞ Controller Design for Stochastic Time-Delay Markovian Jump Systems with Partly Unknown Transition Probabilities

1 College of Mathematics and Systems Science, Shandong University of Science and Technology, Qingdao 266590, China
2 Department of Fundamental Courses, Shandong University of Science and Technology, Jinan 250031, China
3 Department of Electrical Engineering and Information Technology, Shandong University of Science and Technology, Jinan 250031, China
* Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Submission received: 28 January 2024 / Revised: 22 March 2024 / Accepted: 23 March 2024 / Published: 27 March 2024

Abstract

This paper concentrates on the finite-time H∞ control problem for a class of stochastic discrete-time Markovian jump systems characterized by time delay and partly unknown transition probabilities. First, a stochastic finite-time (SFT) H∞ state feedback controller and an SFT H∞ observer-based state feedback controller are constructed to realize closed-loop control. Then, based on the Lyapunov–Krasovskii functional (LKF) method, sufficient conditions are established to guarantee that the closed-loop systems (CLSs) satisfy SFT boundedness and SFT H∞ boundedness. Furthermore, the controller gains are obtained via the linear matrix inequality (LMI) approach. Finally, numerical examples demonstrate the reasonableness and effectiveness of the proposed design schemes.

1. Introduction

The structure or parameters of practical systems often undergo changes due to environmental mutations, component failures, and other factors, resulting in degraded performance and potential instability [1]. How to ensure the stability of such abruptly changing systems has long been a topic of interest. Markovian jump systems (MJSs), a class of hybrid systems consisting of several subsystems, can model dynamical systems with structural mutations and have been extensively researched in both practical and theoretical domains [2,3]. An adaptive neural network-based control approach was devised in [4] to address fault-tolerant control for nonlinear MJSs. In [5], the asynchronous filtering problem of MJSs affected by time-varying and infinite distributed delays was studied using the homogeneous polynomial method. For stochastic T-S fuzzy singular MJSs, the robust H∞ sliding mode control problem was studied in [6,7]. The authors of [8] studied the fault-detection filter design problem of uncertain singular MJSs by means of the LKF and convex polyhedron techniques. For further results on the stability and stabilization of MJSs, readers may consult [9,10] and the references therein.
It should be emphasized that references [4,5,6,7,8,9,10] share a restrictive assumption: the transition probability (TP) information of the MJSs must be exactly and completely known. However, due to the limitations of measurement costs and instruments, this condition is difficult to meet in actual system modeling. It is therefore essential to investigate MJSs with partly unknown TPs [11]. For networked MJSs with partly unknown TPs, the event-triggered dynamic output feedback control problem and the sliding mode control problem were solved in [12,13], respectively. For a class of singular MJSs with partly unknown TPs, the H∞ filtering problem was studied in [14,15]. The authors of [16] achieved event-triggered guaranteed cost control for time-delay MJSs with partly unknown TPs and established sufficient conditions for the existence of guaranteed cost controllers. In [17], a state feedback controller was constructed to ensure that MJSs with partly unknown TPs are stochastically stable. A sliding mode controller based on an adaptive neural network was proposed in [18] to study the reliable control problem of uncertain MJSs with partially unknown TPs.
Notably, most of the above results concentrate on the asymptotic behavior of systems over an infinite time interval, in the sense of Lyapunov stability theory. However, many practical systems, such as vehicle emergency braking systems [19], aircraft-tracking systems [20], and ship-maneuvering systems [21], are required to exhibit an acceptable response within a finite time interval. To capture this practical need, Dorato proposed finite-time stability in 1961 [22]. Because finite-time stability offers better transient performance, a faster response, and higher tracking accuracy, it has been applied to MJSs [23,24,25,26], T-S fuzzy systems [27,28,29], nonlinear impulsive systems [30,31], mean-field systems [32,33,34], and so on.
In addition to better transient performance, modern industry increasingly emphasizes the disturbance-rejection performance of control systems. Both external disturbances and imprecise modeling can adversely affect control performance, and H∞ control emerged to attenuate the effect of external disturbances. Recently, many scholars have carried out extensive research on finite-time H∞ control [35,36,37,38,39,40,41]. Specifically, ref. [35] introduced a new switching dynamic event-triggering mechanism and discussed the finite-time H∞ control problem for switched fuzzy systems. In [36], the finite-time H∞ control problem of nonlinear impulsive switched systems was studied to guarantee that the CLS is bounded. On the other hand, due to the constraints of measurement technology and costs, the system state is frequently difficult to measure directly. To tackle this challenge, many meaningful results on finite-time H∞ observer-based controller design have been obtained; see [37,38,39,40,41] and the references therein.
At present, the study of continuous-time MJSs has yielded rich results. With the popularization of digital controllers and the development of computer science and technology, research on discrete-time systems has attracted much attention. Discrete-time MJSs provide a framework for modeling and analyzing a variety of complex real-world systems [42]. Through a discrete description of the system, it is easier to analyze its dynamic behavior, stability, and convergence [43]. This kind of modeling and analysis is essential for understanding and predicting system behavior [44] and is widely used in control systems.
Inspired by the preceding analysis, this article presents design schemes for a stochastic finite-time H∞ state feedback controller and a stochastic finite-time H∞ observer-based state feedback controller for discrete-time MJSs. Unlike [17,38], the MJS considered here is influenced by both a time delay and stochastic white noise, which better matches practical demands but also increases the difficulty of the derivation. Compared with the existing literature, the primary contributions of this study are as follows:
(I) The state feedback control strategy and the observer-based state feedback control strategy are both adopted. The concepts of SFT H∞ state feedback stabilization and SFT H∞ observer-based state feedback stabilization for time-delay MJSs are defined for the first time. The results of [17,38,44] are extended to time-delay MJSs with partially unknown TPs.
(II) By constructing a delay-dependent LKF, several sufficient conditions are given to ensure that the CLS is SFT H∞-bounded under the two control strategies.
The article is structured as follows: Section 2 presents the system description and some preliminary knowledge. In Section 3, a state feedback controller is designed, and sufficient conditions for the MJS to achieve SFT H∞ state feedback stabilization are obtained through the LKF and LMI methods. Analogously, in Section 4 we design an observer-based state feedback controller and verify that the MJS achieves SFT H∞ observer-based state feedback stabilization. In Section 5, the feasibility and effectiveness of this work are validated through two simulation examples. Section 6 summarizes the article and outlines future research directions.
Notation: A^{−1} and A^T represent the inverse and transpose of a matrix A, respectively. A > 0 means that A is a real positive definite matrix. diag{P1, P2, …, Pn} is the block diagonal matrix with P1, P2, …, Pn on the diagonal. I_{n×n} denotes the identity matrix of dimension n×n. N+ is the set of positive integers and R is the set of real numbers. R^m and R^{m×n} are the m-dimensional Euclidean space with 2-norm ‖·‖ and the space of all m×n real matrices, respectively. E{σ} represents the mathematical expectation of σ. The symbol ∗ denotes the symmetric entries of a matrix. Every matrix in this paper is assumed to have compatible dimensions. For ease of reference, the acronyms used in this paper and their meanings are listed in Table 1.

2. System Description and Preliminary Knowledge

Consider an MJS with a time delay, as outlined below:
x(k+1) = A1(mk)x(k) + Ad1(mk)x(k−τ) + B1(mk)u(k) + C1(mk)v(k) + [A2(mk)x(k) + Ad2(mk)x(k−τ) + B2(mk)u(k) + C2(mk)v(k)]ω(k),
y(k) = D(mk)x(k) + G(mk)u(k),
z(k) = D1(mk)x(k) + Dd1(mk)x(k−τ) + G1(mk)u(k) + G2(mk)v(k), k ∈ {0, 1, 2, …, T̂},
x(n) = ψ(n), n ∈ {−τ, −τ+1, …, 0},   (1)
where x(k) ∈ R^n is the system state, y(k) ∈ R^p is the measured output, z(k) ∈ R^r is the control output, and u(k) ∈ R^q is the control input. ψ(n), n ∈ {−τ, −τ+1, …, 0}, are the initial conditions. τ is a positive integer that denotes the fixed time delay. The sequence ω(k) is a one-dimensional white noise on the complete probability space (Ω, F, P) satisfying E{ω(k)} = 0 and E{ω(k)ω(s)} = δ_{ks}, where δ_{ks} is the Kronecker delta. v(k) ∈ R^l stands for the external disturbance, which satisfies the following:
Σ_{k=0}^{T̂} v^T(k)v(k) ≤ h, h ≥ 0.   (2)
A1(mk), A2(mk), Ad1(mk), Ad2(mk), B1(mk), B2(mk), C1(mk), C2(mk), D(mk), D1(mk), Dd1(mk), G(mk), G1(mk), and G2(mk) are coefficient matrices with appropriate dimensions. These matrices depend on the Markovian jump process {mk, k ≥ 0}, a discrete-time, discrete-state Markov chain taking values in a finite state space S = {1, 2, …, N} with transition probabilities π_ij = Pr{m_{k+1} = j | mk = i}, i, j ∈ S, i.e., the probability of jumping from mode i at time k to mode j at time k+1, which satisfy Σ_{j=1}^N π_ij = 1 and π_ij ≥ 0 (i ∈ S). When mk = i, i ∈ S, the system parameter matrices are written A1i, A2i, Ad1i, Ad2i, B1i, B2i, and so on. In addition, ω(k) and mk are independent of each other.
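To make the mode process concrete, the sketch below samples a trajectory of {mk} from a fully known row-stochastic TP matrix. This is an illustrative aid only; the matrix values are placeholders and not taken from the paper.

```python
import random

def simulate_modes(Pi, m0, steps, rng=random.Random(0)):
    """Sample a mode trajectory m_0, ..., m_steps from a row-stochastic TP matrix Pi."""
    modes = [m0]
    for _ in range(steps):
        r, acc = rng.random(), 0.0
        row = Pi[modes[-1]]
        for j, p in enumerate(row):
            acc += p
            if r < acc:
                modes.append(j)
                break
        else:
            modes.append(len(row) - 1)  # guard against floating-point rounding

    return modes

# Illustrative 3-mode chain (placeholder probabilities, 0-based mode indices)
Pi = [[0.5, 0.2, 0.3],
      [0.1, 0.1, 0.8],
      [0.1, 0.6, 0.3]]
modes = simulate_modes(Pi, m0=0, steps=30)
```

A trajectory like this is what drives the switching of the coefficient matrices A1(mk), B1(mk), etc. in system (1).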
In this paper, it is presumed that the information in the TP matrix is partially available. In this situation, the TP matrix  Π  for an MJS with N modes may be represented by:
Π = [π̂11 π12 … π1N; π̂21 π22 … π̂2N; … ; π̂N1 πN2 … πNN],   (3)
where π̂_ij denotes an unknown TP. For each i ∈ S, the index set S is decomposed as S = S_k^i ∪ S_uk^i, where:
S_k^i = {j : π_ij is known}, S_uk^i = {j : π_ij is unknown}.   (4)
Moreover, when S_k^i ≠ ∅, it can be described as:
S_k^i = {ζ1, ζ2, …, ζ_{p_i}}, p_i ∈ {1, 2, …, N−2},   (5)
where ζ_g ∈ N+, g ∈ {1, 2, …, p_i}, denotes the g-th known element in the i-th row of the TP matrix Π. Similarly, when S_uk^i ≠ ∅, it can be expressed as follows:
S_uk^i = {ζ_u1, ζ_u2, …, ζ_uq_i}, q_i ∈ {2, …, N},   (6)
where ζ_ug ∈ N+, g ∈ {1, 2, …, q_i}, is the g-th unknown element in the i-th row of the TP matrix Π.
Remark 1.
Since Σ_{j=1}^N π_ij = 1, any row of (3) that contains unknown elements must contain at least two of them: a single unknown entry would be determined by the remaining entries of the row.
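For implementation purposes, the partition of each TP row into S_k^i and S_uk^i, together with the known mass π_k^i = Σ_{j∈S_k^i} π_ij, can be sketched as follows. Unknown entries are marked None; the matrix reproduces the partly known Π of Example 1 in Section 5 (0-based indices here, while the paper numbers modes from 1).

```python
# Partition row i of a partly known TP matrix into the index sets
# S_k^i (known entries) and S_uk^i (unknown entries, stored as None),
# and compute pi_k^i, the known probability mass of the row.
def partition_row(row):
    S_k = [j for j, p in enumerate(row) if p is not None]
    S_uk = [j for j, p in enumerate(row) if p is None]
    pi_k = sum(row[j] for j in S_k)
    return S_k, S_uk, pi_k

# Partly known TP matrix of Example 1: None marks an unknown pi-hat entry.
Pi = [[None, 0.2, None],
      [None, None, 0.8],
      [0.1, None, None]]
for row in Pi:
    S_k, S_uk, pi_k = partition_row(row)
    # Remark 1: a row with unknown entries has at least two of them
    assert len(S_uk) == 0 or len(S_uk) >= 2
```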
Lemma 1.
(Schur’s complement [38]) The LMI
S = [S11 S12^T; S12 S22] < 0
is equivalent to S22 < 0 and S11 − S12^T S22^{−1} S12 < 0.
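Lemma 1 can be sanity-checked numerically on random symmetric block matrices; the sketch below (assuming numpy is available, with arbitrary illustrative data) verifies that negative definiteness of the full block matrix coincides with negative definiteness of S22 and of the Schur complement.

```python
import numpy as np

rng = np.random.default_rng(0)

def is_neg_def(A):
    """Negative definiteness via eigenvalues of the symmetric part."""
    return bool(np.all(np.linalg.eigvalsh((A + A.T) / 2) < 0))

# Random symmetric block matrix S = [[S11, S12^T], [S12, S22]]
n = 3
S11 = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
S11 = (S11 + S11.T) / 2
S22 = -2 * np.eye(n)
S12 = 0.3 * rng.standard_normal((n, n))
S = np.block([[S11, S12.T], [S12, S22]])

# Lemma 1: S < 0  iff  S22 < 0 and S11 - S12^T S22^{-1} S12 < 0
lhs = is_neg_def(S)
rhs = is_neg_def(S22) and is_neg_def(S11 - S12.T @ np.linalg.inv(S22) @ S12)
assert lhs == rhs
```

The equivalence holds for any sampled data with S22 < 0, so the assertion is a direct check of the lemma rather than of a particular instance.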
Definition 1.
(SFT stability)
The MJS (1) with v(k) = 0 is said to be SFT-stable with respect to (ρ1, ρ2, Ri, T̂), if:
sup_{k0 ∈ {−τ, …, 0}} E{x^T(k0)Ri x(k0)} ≤ ρ1 ⟹ E{x^T(k)Ri x(k)} < ρ2, ∀k ∈ {0, 1, 2, …, T̂},   (7)
holds for matrices Ri > 0, i ∈ S, and given scalars 0 < ρ1 < ρ2.
Remark 2.
Definition 1 means that if the initial state is bounded, then under certain conditions the state trajectory of the system does not exceed a predetermined bound over a finite time interval, which differs from asymptotic stability. An asymptotically stable system may fail to be finite-time stable if its state trajectory exceeds the given bound within the finite time interval, and vice versa.
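The distinction in Remark 2 is easy to exhibit numerically. In the sketch below (illustrative values, not from the paper), the matrix A is Schur stable with spectral radius 0.5, yet the non-normal transient pushes x^T(k)x(k) far above the initial level before it decays; with a finite-time bound ρ2 = 1 the system would not be finite-time stable.

```python
import numpy as np

# Schur-stable but strongly non-normal: eigenvalues 0.5, large transient growth.
A = np.array([[0.5, 10.0],
              [0.0, 0.5]])
x = np.array([0.1, 0.1])

norms = []
for _ in range(20):
    norms.append(float(x @ x))  # record x^T(k) x(k)
    x = A @ x

assert max(abs(np.linalg.eigvals(A))) < 1   # asymptotically stable
assert norms[0] <= 0.1                      # initial bound ("rho1") satisfied
assert max(norms) > 1.0                     # transient exceeds the bound ("rho2")
```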
Definition 2.
(SFT boundedness)
The MJS (1) is said to be SFT-bounded with respect to (ρ1, ρ2, Ri, T̂, h) if the system state x(k) and the external disturbance v(k) satisfy (7) and (2), respectively.

3. Finite-Time H∞ State Feedback Control

3.1. State Feedback Controller

We design the following state feedback controller for MJS (1):
u(k) = K(mk)x(k),   (8)
where K(mk) is the state feedback controller gain to be designed, denoted Ki when mk = i, i ∈ S. Then, the resulting CLS can be described as follows:
x(k+1) = Ā1i x(k) + Ad1i x(k−τ) + C1i v(k) + [Ā2i x(k) + Ad2i x(k−τ) + C2i v(k)]ω(k),
z(k) = D̄1i x(k) + Dd1i x(k−τ) + G2i v(k),   (9)
where Ā1i = A1i + B1iKi, Ā2i = A2i + B2iKi, D̄1i = D1i + G1iKi.
Definition 3.
(SFT H∞ boundedness)
The CLS (9) is said to be SFT H∞-bounded with respect to (ρ1, ρ2, Ri, T̂, h, γ) if the following two conditions hold:
(a) The CLS (9) satisfies SFT boundedness with respect to (ρ1, ρ2, Ri, T̂, h);
(b) Under the zero initial condition, for any external disturbance v(k) satisfying (2), the control output z(k) satisfies
E{Σ_{k=0}^{T̂} z^T(k)z(k)} < γ² E{Σ_{k=0}^{T̂} v^T(k)v(k)},   (10)
where Ri > 0, i ∈ S, γ > 0, 0 < ρ1 < ρ2.
Definition 4.
(SFT H∞ state feedback stabilization)
The MJS (1) is said to achieve SFT H∞ state feedback stabilization with respect to (ρ1, ρ2, Ri, T̂, h, γ) if there exists a state feedback controller (8) such that the CLS (9) satisfies SFT H∞ boundedness. Moreover, the controller (8) is called the SFT H∞ state feedback controller.

3.2. Main Results

This section will present some sufficient conditions for the existence of a state feedback controller (8) for system (1).
Theorem 1.
The CLS (9) with partly unknown TPs is SFT H∞-bounded with respect to (ρ1, ρ2, Ri, T̂, h, γ) if there exist scalars α > 1 and γ > 0 and matrices M > 0 and Pi > 0 for all i ∈ S, satisfying the following:
[Θ1i^T − αPi + M + D̄1i^T D̄1i   Θ2i^T + D̄1i^T Dd1i   Θ3i^T + D̄1i^T G2i;
 ∗   Θ4i^T − M + Dd1i^T Dd1i   Θ5i^T + Dd1i^T G2i;
 ∗   ∗   Θ6i^T − γ²I + G2i^T G2i] < 0,   (11)
α^T̂ ρ1 [sup_{i∈S}{λmax(P̄i)} + sup_{i∈S}{λmax(M̄i)} τ] + γ² α^T̂ h < inf_{i∈S}{λmin(P̄i)} ρ2,   (12)
where Θ1i^T = Ā1i^T Ψi Ā1i + Ā2i^T Ψi Ā2i, Θ2i^T = Ā1i^T Ψi Ad1i + Ā2i^T Ψi Ad2i,
Θ3i^T = Ā1i^T Ψi C1i + Ā2i^T Ψi C2i, Θ4i^T = Ad1i^T Ψi Ad1i + Ad2i^T Ψi Ad2i,
Θ5i^T = Ad1i^T Ψi C1i + Ad2i^T Ψi C2i, Θ6i^T = C1i^T Ψi C1i + C2i^T Ψi C2i,
Ψi = Σ_{j∈S_k^i} π_ij Pj + (1 − π_k^i)(Σ_{j∈S_uk^i} Pj), P̄i = Ri^{−1/2} Pi Ri^{−1/2}, M̄i = Ri^{−1/2} M Ri^{−1/2}.
Proof. 
For the CLS (9), we consider the following LKF:
V(x(k), mk = i) = x^T(k)Pi x(k) + Σ_{l=k−τ}^{k−1} x^T(l) M x(l).   (13)
Then, we compute the following:
E{ΔV(x(k), mk = i)} = E{V(x(k+1), m_{k+1} = j)} − E{V(x(k), mk = i)}
= Σ_{j∈S} π_ij x^T(k+1)Pj x(k+1) + Σ_{l=k+1−τ}^{k} x^T(l)Mx(l) − x^T(k)Pi x(k) − Σ_{l=k−τ}^{k−1} x^T(l)Mx(l)
= x^T(k+1)(Σ_{j∈S} π_ij Pj)x(k+1) + x^T(k)[M − Pi]x(k) − x^T(k−τ)Mx(k−τ).   (14)
Since the TP matrix Π contains only partly accessible information, not all the probabilities π_ij (j ∈ S) are known. Thus, we denote π_k^i = Σ_{j∈S_k^i} π_ij, while π̂_ij are the unknown TPs of Π. Moreover, from Σ_{j=1}^N π_ij = 1, it is obvious that Σ_{j∈S_uk^i} π̂_ij = 1 − π_k^i ≥ 0. Supposing that π_k^i < 1, we can obtain the following:
Σ_{j∈S} π_ij Pj = Σ_{j∈S_k^i} π_ij Pj + Σ_{j∈S_uk^i} π̂_ij Pj
= Σ_{j∈S_k^i} π_ij Pj + (1 − π_k^i) Σ_{j∈S_uk^i} [π̂_ij / (1 − π_k^i)] Pj
≤ Σ_{j∈S_k^i} π_ij Pj + (1 − π_k^i)(Σ_{j∈S_uk^i} Pj) = Ψi.   (15)
By (15), we can rewrite (14) as follows:
E{ΔV(x(k), mk = i)} ≤ x^T(k+1)Ψi x(k+1) + x^T(k)[M − Pi]x(k) − x^T(k−τ)Mx(k−τ)
= [x(k); x(k−τ); v(k)]^T [Θ1i^T − Pi + M   Θ2i^T   Θ3i^T; ∗   Θ4i^T − M   Θ5i^T; ∗   ∗   Θ6i^T] [x(k); x(k−τ); v(k)].   (16)
From (16) and (11), we have the following:
E{ΔV(x(k), mk = i)} < (α − 1)E{x^T(k)Pi x(k)} + γ²E{v^T(k)v(k)} − E{z^T(k)z(k)}
≤ (α − 1)E{x^T(k)Pi x(k)} + γ²E{v^T(k)v(k)}
≤ (α − 1)E{V(x(k), mk = i)} + γ²E{v^T(k)v(k)}.
Thus, we can obtain the following:
E{V(x(k+1), m_{k+1} = j)} < α E{V(x(k), mk = i)} + γ² E{v^T(k)v(k)}.   (17)
Observing that  α > 1 , from (17) we obtain the following:
E{V(x(k), mk = i)} < α^k E{V(x(0), m0)} + γ² Σ_{l=0}^{k−1} α^{k−1−l} E{v^T(l)v(l)} ≤ α^T̂ E{V(x(0), m0)} + γ² α^T̂ h.   (18)
Let P̄i = Ri^{−1/2} Pi Ri^{−1/2} and M̄i = Ri^{−1/2} M Ri^{−1/2}. According to (7), we have the following:
E{V(x(0), m0)} ≤ sup_{i∈S}{λmax(P̄i)} E{x^T(0)Ri x(0)} + sup_{i∈S}{λmax(M̄i)} E{Σ_{l=−τ}^{−1} x^T(l)Ri x(l)}
≤ [sup_{i∈S}{λmax(P̄i)} + sup_{i∈S}{λmax(M̄i)} τ] ρ1,   (19)
and
E{V(x(k), mk = i)} ≥ E{x^T(k)Pi x(k)} = E{x^T(k) Ri^{1/2} P̄i Ri^{1/2} x(k)} ≥ inf_{i∈S}{λmin(P̄i)} E{x^T(k)Ri x(k)}.   (20)
Combining with (18)–(20), it can be inferred that:
E{x^T(k)Ri x(k)} < [α^T̂ ρ1 (sup_{i∈S}{λmax(P̄i)} + sup_{i∈S}{λmax(M̄i)} τ) + γ² α^T̂ h] / inf_{i∈S}{λmin(P̄i)}.   (21)
Together with (12) and (21), it is clear that E{x^T(k)Ri x(k)} < ρ2, ∀k ∈ {0, 1, 2, …, T̂}. This implies that the CLS (9) satisfies SFT boundedness. Next, we demonstrate that the H∞ condition (10) holds under the zero initial condition. From (16) and (11), we can obtain the following:
E{V(x(k+1), m_{k+1} = j)} < α E{V(x(k), mk = i)} − E{z^T(k)z(k)} + γ² E{v^T(k)v(k)}.   (22)
Then, we have the following:
E{V(x(k), mk = i)} < α^k E{V(x(0), m0)} − Σ_{l=0}^{k−1} α^{k−1−l} E{z^T(l)z(l)} + γ² E{Σ_{l=0}^{k−1} α^{k−1−l} v^T(l)v(l)}.   (23)
Assuming a zero initial condition and recognizing that  V ( x ( k ) , m k = i ) 0  for all  k { 0 , 1 , 2 , , T ^ } , we have the following:
Σ_{l=0}^{k−1} α^{k−1−l} E{z^T(l)z(l)} < γ² E{Σ_{l=0}^{k−1} α^{k−1−l} v^T(l)v(l)}.   (24)
Noting that  α > 1 , from (24) we obtain the following:
E{Σ_{k=0}^{T̂} z^T(k)z(k)} < γ² E{Σ_{k=0}^{T̂} v^T(k)v(k)}.   (25)
Therefore, the closed-loop MJS (9) is SFT H∞-bounded. □
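The TP over-approximation Ψi used in step (15) of the proof can be sanity-checked numerically: for any positive definite Pj and any row in which each unknown entry is at most the missing mass 1 − π_k^i, the difference Ψi − Σ_j π_ij Pj is positive semidefinite. The sketch below (assuming numpy is available, with arbitrary illustrative data) performs this check.

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_pd(n):
    """Random symmetric positive definite matrix."""
    B = rng.standard_normal((n, n))
    return B @ B.T + n * np.eye(n)

n, N = 3, 4
P = [rand_pd(n) for _ in range(N)]
pi_row = np.array([0.3, 0.2, 0.4, 0.1])   # full row, fixed here only for testing
S_k, S_uk = [0, 1], [2, 3]                # designer knows only pi_i0 and pi_i1
pi_k = pi_row[S_k].sum()                  # known mass: 0.5

exact = sum(pi_row[j] * P[j] for j in range(N))
Psi = sum(pi_row[j] * P[j] for j in S_k) + (1 - pi_k) * sum(P[j] for j in S_uk)

# Psi - exact = sum over S_uk of (1 - pi_k - pi_j) P_j, a PSD combination
assert np.linalg.eigvalsh(Psi - exact).min() >= -1e-9
```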
Remark 3.
Theorem 1 is preliminary in that it does not provide a way to choose Ki: conditions (11) and (12) can be checked on the closed-loop matrices only after Ki has already been chosen. This problem is resolved in Theorem 2.
Theorem 2.
Consider the state feedback controller (8). If there exist scalars α > 1, γ > 0, ρ2 > 0, σ1 > 0, ξ1 > 0, ξ2 > 0 and matrices J > 0, Xi > 0, and Yi, for all i ∈ S, satisfying the following conditions:
[−αXi   0   ΩĀ1i^T   ΩĀ2i^T   D̄1i^T   Xi;
 ∗   −γ²I   ΩC1i^T   ΩC2i^T   G2i^T   0;
 ∗   ∗   −X + ΩAd1i J ΩAd1i^T   0   0   0;
 ∗   ∗   ∗   −X + ΩAd2i J ΩAd2i^T   0   0;
 ∗   ∗   ∗   ∗   −I + Dd1i J Dd1i^T   0;
 ∗   ∗   ∗   ∗   ∗   −J] < 0,   (26)
σ1 Ri^{−1} < Xi < Ri^{−1},   (27)
ξ1 Ri^{−1} < J < ξ2 Ri^{−1},   (28)
[−α^{−T̂} ρ2 + γ²h   √ρ1   √(τρ1); ∗   −σ1   0; ∗   ∗   −ξ1] < 0,   (29)
where ΩĀ1i^T = [√π_{iζ1}(A1iXi + B1iYi)^T   √π_{iζ2}(A1iXi + B1iYi)^T   …   √π_{iζp_i}(A1iXi + B1iYi)^T   √(1−π_k^i)(A1iXi + B1iYi)^T   …   √(1−π_k^i)(A1iXi + B1iYi)^T],
ΩĀ2i^T = [√π_{iζ1}(A2iXi + B2iYi)^T   √π_{iζ2}(A2iXi + B2iYi)^T   …   √π_{iζp_i}(A2iXi + B2iYi)^T   √(1−π_k^i)(A2iXi + B2iYi)^T   …   √(1−π_k^i)(A2iXi + B2iYi)^T],
ΩAd1i^T = [√π_{iζ1} Ad1i^T   √π_{iζ2} Ad1i^T   …   √π_{iζp_i} Ad1i^T   √(1−π_k^i) Ad1i^T   …   √(1−π_k^i) Ad1i^T],
ΩAd2i^T = [√π_{iζ1} Ad2i^T   √π_{iζ2} Ad2i^T   …   √π_{iζp_i} Ad2i^T   √(1−π_k^i) Ad2i^T   …   √(1−π_k^i) Ad2i^T],
ΩC1i^T = [√π_{iζ1} C1i^T   √π_{iζ2} C1i^T   …   √π_{iζp_i} C1i^T   √(1−π_k^i) C1i^T   …   √(1−π_k^i) C1i^T],
ΩC2i^T = [√π_{iζ1} C2i^T   √π_{iζ2} C2i^T   …   √π_{iζp_i} C2i^T   √(1−π_k^i) C2i^T   …   √(1−π_k^i) C2i^T],
X = diag{X_{ζ1}, X_{ζ2}, …, X_{ζp_i}, X_{ζu1}, X_{ζu2}, …, X_{ζuq_i}}, D̄1i = D1iXi + G1iYi,
then the CLS (9) with partly unknown TPs is SFT H∞-bounded with respect to (ρ1, ρ2, Ri, T̂, γ, h), i.e., MJS (1) achieves SFT H∞ state feedback stabilization, and the controller gain is Ki = YiXi^{−1}.
Proof. 
First, we demonstrate the equivalence between condition (26) and condition (11). According to Lemma 1, (26) is equivalent to the following:
[−αXi + XiJ^{−1}Xi   0   0   ΩĀ1i^T   ΩĀ2i^T   D̄1i^T;
 ∗   −J^{−1}   0   ΩAd1i^T   ΩAd2i^T   Dd1i^T;
 ∗   ∗   −γ²I   ΩC1i^T   ΩC2i^T   G2i^T;
 ∗   ∗   ∗   −X   0   0;
 ∗   ∗   ∗   ∗   −X   0;
 ∗   ∗   ∗   ∗   ∗   −I] < 0.   (30)
Let Xi = Pi^{−1}, J = M^{−1}, X = P^{−1}, Ki = YiXi^{−1}. Pre- and post-multiplying (30) by diag{Xi^{−1}, I, I, I, I, I} and applying Lemma 1 again, we observe that (30) is equivalent to (11).
On the other hand, from Lemma 1, (29) can be expressed as the following inequality:
α^T̂ ρ1 (σ1^{−1} + ξ1^{−1} τ) + γ² α^T̂ h < ρ2.   (31)
Noting that P̄i = Ri^{−1/2} Pi Ri^{−1/2} and M̄i = Ri^{−1/2} M Ri^{−1/2}, and combining conditions (27) and (28), we obtain:
sup_{i∈S}{λmax(P̄i)} < σ1^{−1}, inf_{i∈S}{λmin(P̄i)} > 1, sup_{i∈S}{λmax(M̄i)} < ξ1^{−1}.
Therefore, it is easy to observe that (12) holds. This completes the proof. □
Remark 4.
Theorem 2 generalizes the results of [17] to time-delay MJSs and gives sufficient conditions for MJS (1) to achieve SFT H∞ state feedback stabilization.
Remark 5.
Theorem 2 shows that the controller gain Ki depends on Xi and Yi, which in turn depend on the system matrices of mode i. The dynamic characteristics of the system in each mode and the transition probabilities between modes must therefore both be taken into account to guarantee stability and to design the controller for each mode.

4. Finite-Time H∞ Observer-Based State Feedback Control

4.1. Observer-Based State Feedback Controller

In the presence of a system state that is not fully measurable, the following observer-based state feedback controller is designed:
x̂(k+1) = A1(mk)x̂(k) + Ad1(mk)x̂(k−τ) + B1(mk)u(k) + H(mk)[y(k) − ŷ(k)],
ŷ(k) = D(mk)x̂(k) + G(mk)u(k),
u(k) = K̂(mk)x̂(k),
x̂(n) = ψ(n), n ∈ {−τ, −τ+1, …, 0},   (32)
where x̂(k) is the estimated state, ŷ(k) is the estimated output, and K̂(mk) and H(mk) denote the state feedback gain and observer gain to be determined, respectively. The estimation error is defined as e(k) = x(k) − x̂(k), and η^T(k) = [x^T(k) e^T(k)]. For mk = i (i ∈ S), the CLS is represented by the following:
η(k+1) = Â1i η(k) + Âd1i η(k−τ) + Ĉ1i v(k) + [Â2i η(k) + Âd2i η(k−τ) + Ĉ2i v(k)]ω(k),
z(k) = D̂1i η(k) + D̂d1i η(k−τ) + G2i v(k),   (33)
where Â1i = [A1i + B1iK̂i   −B1iK̂i; 0   A1i − HiDi], Âd1i = [Ad1i   0; 0   Ad1i], Ĉ1i = [C1i; C1i],
Â2i = [A2i + B2iK̂i   −B2iK̂i; A2i + B2iK̂i   −B2iK̂i], Âd2i = [Ad2i   0; Ad2i   0], Ĉ2i = [C2i; C2i],
D̂1i = [D1i + G1iK̂i   −G1iK̂i], D̂d1i = [Dd1i   0].
Definition 5.
(SFT H∞ boundedness)
The CLS (33) is said to be SFT H∞-bounded with respect to (ρ1, ρ2, R̂i, T̂, h, γ) if the following two conditions hold:
(a) The CLS (33) satisfies SFT boundedness with respect to (ρ1, ρ2, R̂i, T̂, h);
(b) Under the zero initial condition, for any external disturbance v(k) satisfying condition (2), the control output z(k) satisfies the following:
E{Σ_{k=0}^{T̂} z^T(k)z(k)} < γ² E{Σ_{k=0}^{T̂} v^T(k)v(k)},   (34)
where R̂i > 0, i ∈ S, 0 < ρ1 < ρ2, γ > 0.
Definition 6.
(SFT H∞ observer-based state feedback stabilization)
The MJS (1) is said to achieve SFT H∞ observer-based state feedback stabilization with respect to (ρ1, ρ2, R̂i, T̂, h, γ) if there exists an observer-based state feedback controller (32) such that the CLS (33) satisfies SFT H∞ boundedness; the controller (32) is then called the SFT H∞ observer-based state feedback controller.

4.2. Main Results

Theorem 3.
The CLS (33) with partly unknown TPs is SFT H∞-bounded with respect to (ρ1, ρ2, R̂i, T̂, γ, h) if there exist scalars β > 1 and γ > 0, a matrix M̂ > 0, and positive-definite matrices P̂i for all i ∈ S, satisfying the following conditions:
[Γ1i^T − βP̂i + M̂ + D̂1i^T D̂1i   Γ2i^T + D̂1i^T D̂d1i   Γ3i^T + D̂1i^T G2i;
 ∗   Γ4i^T − M̂ + D̂d1i^T D̂d1i   Γ5i^T + D̂d1i^T G2i;
 ∗   ∗   Γ6i^T − γ²I + G2i^T G2i] < 0,   (35)
β^T̂ ρ1 [sup_{i∈S}{λmax(P̃i)} + sup_{i∈S}{λmax(M̃i)} τ] + γ² β^T̂ h < inf_{i∈S}{λmin(P̃i)} ρ2,   (36)
where P̂i = diag{Pi, Pi}, M̂ = diag{M, M}, P̃i = R̂i^{−1/2} P̂i R̂i^{−1/2}, M̃i = R̂i^{−1/2} M̂ R̂i^{−1/2},
Γ1i^T = Â1i^T Ψ̂i Â1i + Â2i^T Ψ̂i Â2i, Γ2i^T = Â1i^T Ψ̂i Âd1i + Â2i^T Ψ̂i Âd2i, Γ3i^T = Â1i^T Ψ̂i Ĉ1i + Â2i^T Ψ̂i Ĉ2i,
Γ4i^T = Âd1i^T Ψ̂i Âd1i + Âd2i^T Ψ̂i Âd2i, Γ5i^T = Âd1i^T Ψ̂i Ĉ1i + Âd2i^T Ψ̂i Ĉ2i, Γ6i^T = Ĉ1i^T Ψ̂i Ĉ1i + Ĉ2i^T Ψ̂i Ĉ2i,
Ψ̂i = Σ_{j∈S_k^i} π_ij P̂j + (1 − π_k^i)(Σ_{j∈S_uk^i} P̂j).
Proof. 
The proof procedure is similar to Theorem 1, and thus will not be reiterated. □
Theorem 3 establishes conditions under which system (33) is SFT H∞-bounded. The following theorem develops the observer-based state feedback controller for system (33).
Theorem 4.
The CLS (33) with partly unknown TPs is SFT H∞-bounded with respect to (ρ1, ρ2, R̂i, T̂, γ, h) if there exist scalars β > 1, γ > 0, ρ2 > 0, σ2 > 0, ϱ1 > 0, ϱ2 > 0, a matrix J > 0, positive-definite matrices Xi, nonsingular matrices Zi, and matrices Yi and Fi for all i ∈ S, satisfying the following conditions:
[−βX̂i   0   ΞÃ1i^T   ΞÃ2i^T   D̃1i^T   X̂i;
 ∗   −γ²I   ΞĈ1i^T   ΞĈ2i^T   G2i^T   0;
 ∗   ∗   −X̂ + ΞÂd1i Ĵ ΞÂd1i^T   0   0   0;
 ∗   ∗   ∗   −X̂ + ΞÂd2i Ĵ ΞÂd2i^T   0   0;
 ∗   ∗   ∗   ∗   −I + D̂d1i Ĵ D̂d1i^T   0;
 ∗   ∗   ∗   ∗   ∗   −Ĵ] < 0,   (37)
DiXi = ZiDi,   (38)
σ2 R̂i^{−1} < X̂i < R̂i^{−1},   (39)
ϱ1 R̂i^{−1} < Ĵ < ϱ2 R̂i^{−1},   (40)
[−β^{−T̂} ρ2 + γ²h   √ρ1   √(τρ1); ∗   −σ2   0; ∗   ∗   −ϱ1] < 0,   (41)
where ΞÃ1i^T = [√π_{iζ1} Ã1i^T   √π_{iζ2} Ã1i^T   …   √π_{iζp_i} Ã1i^T   √(1−π_k^i) Ã1i^T   …   √(1−π_k^i) Ã1i^T],
ΞÃ2i^T = [√π_{iζ1} Ã2i^T   √π_{iζ2} Ã2i^T   …   √π_{iζp_i} Ã2i^T   √(1−π_k^i) Ã2i^T   …   √(1−π_k^i) Ã2i^T],
Ã1i = [A1iXi + B1iYi   −B1iYi; 0   A1iXi − FiDi], Ã2i = [A2iXi + B2iYi   −B2iYi; A2iXi + B2iYi   −B2iYi],
ΞÂd1i^T = [√π_{iζ1} Âd1i^T   √π_{iζ2} Âd1i^T   …   √π_{iζp_i} Âd1i^T   √(1−π_k^i) Âd1i^T   …   √(1−π_k^i) Âd1i^T],
ΞÂd2i^T = [√π_{iζ1} Âd2i^T   √π_{iζ2} Âd2i^T   …   √π_{iζp_i} Âd2i^T   √(1−π_k^i) Âd2i^T   …   √(1−π_k^i) Âd2i^T],
ΞĈ1i^T = [√π_{iζ1} Ĉ1i^T   √π_{iζ2} Ĉ1i^T   …   √π_{iζp_i} Ĉ1i^T   √(1−π_k^i) Ĉ1i^T   …   √(1−π_k^i) Ĉ1i^T],
ΞĈ2i^T = [√π_{iζ1} Ĉ2i^T   √π_{iζ2} Ĉ2i^T   …   √π_{iζp_i} Ĉ2i^T   √(1−π_k^i) Ĉ2i^T   …   √(1−π_k^i) Ĉ2i^T],
D̃1i = [D1iXi + G1iYi   −G1iYi], X̂i = diag{Xi, Xi}, Ĵ = diag{J, J}, R̂i = diag{Ri, Ri},
X̂ = diag{X̂_{ζ1}, X̂_{ζ2}, …, X̂_{ζp_i}, X̂_{ζu1}, X̂_{ζu2}, …, X̂_{ζuq_i}}.
Then, MJS (1) achieves SFT H∞ observer-based state feedback stabilization, and the controller gain K̂i and the observer gain Hi are given by:
K̂i = YiXi^{−1}, Hi = FiZi^{−1}.   (42)
Proof. 
Define R̂i = diag{Ri, Ri}, P̂i = diag{Pi, Pi}, Xi = Pi^{−1}, X̂ = P̂^{−1}, M̂ = diag{M, M}, J = M^{−1}, K̂i = YiXi^{−1}, and Hi = FiZi^{−1}. Taking into account condition (38), (35) is obtained from (37) via Lemma 1. In addition, denote P̃i = R̂i^{−1/2} P̂i R̂i^{−1/2} and M̃i = R̂i^{−1/2} M̂ R̂i^{−1/2}. Following the proof of Theorem 2, condition (36) is guaranteed by (39) to (41). □
Remark 6.
Theorems 3 and 4 extend the results of [38,44] to MJSs with partly unknown TPs.
Remark 7.
Addressing condition (38) through the application of the LMI toolbox is a challenging task. As a solution, constraint (38) can be approximated by the following inequality:
(DiXi − ZiDi)^T (DiXi − ZiDi) < ϖI,   (43)
where ϖ represents an exceedingly small positive scalar. According to Lemma 1, the above inequality can be formulated as follows:
[−ϖI   (DiXi − ZiDi)^T; ∗   −I] < 0.   (44)
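The intent of the relaxation in Remark 7 can be checked numerically: when DiXi is close to ZiDi, the residual satisfies the quadratic bound (43) for a small ϖ. The sketch below (assuming numpy is available; the particular Xi and Zi values are illustrative, not solver output) uses D1 = [1 1] from Example 1, for which a scalar Zi makes the constraint hold exactly.

```python
import numpy as np

Di = np.array([[1.0, 1.0]])               # D_1 of Example 1, mode 1 (1x2)
Xi = np.array([[0.4, 0.0],
               [0.0, 0.4]])               # illustrative positive definite X_i
Zi = np.array([[0.4]])                    # scalar Z_i gives D_i X_i = Z_i D_i exactly

R = Di @ Xi - Zi @ Di                     # residual, here the zero matrix
varpi = 1e-6                              # exceedingly small positive scalar

# Quadratic bound (43): R^T R < varpi * I
assert np.linalg.eigvalsh(varpi * np.eye(2) - R.T @ R).min() > 0
```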
Remark 8.
We note that conditions (26), (29), (37), and (41) are not strict LMIs; however, once the parameters α and β are fixed, they become LMI-based feasibility problems. Therefore, the conditions stated in Theorems 2 and 4 can be checked through the following optimization problems with fixed α and β, respectively:
min (ρ2 + γ²) s.t. LMIs (26), (27), (28) and (29),   (45)
min (ρ2 + γ²) s.t. LMIs (37), (39), (40), (41) and (44).   (46)

5. Numerical Examples

In this section, we present two examples to validate the effectiveness and practicality of the proposed method. The first example is used to show the effectiveness of the state feedback controller (8) design approach developed in Theorem 2 for MJSs (1) with partly unknown transition probabilities.
Example 1.
Consider MJS (1) with three modes, and the coefficient matrices are given as follows:
Mode 1 ( i = 1 ):
A11 = [0.1 0; 0 0.1], Ad11 = [0.8 0; 0.2 0.1], B11 = [1; 0.1], C11 = [0.1; 0.1], A21 = [0.1 0; 0 0.1], Ad21 = [0.1 1; 0 0.1], B21 = [0.1; 0.3], C21 = [0.1; 0.1], D11 = [0.2 1], Dd11 = [0.1 0], D1 = [1 1], G1 = 1, G11 = 1, G21 = 0.1.
Mode 2 ( i = 2 ):
A12 = [1 0; 0.3 1], Ad12 = [0 0.1; 0.1 0.1], B12 = [0.1; 0.1], C12 = [0.1; 0.1], A22 = [0.1 0; 0 0.1], Ad22 = [0.1 1; 0 0.1], B22 = [0.1; 0.1], C22 = [0.1; 0.1], D12 = [1 0.1], Dd12 = [0.1 0.1], D2 = [2 1], G2 = 1, G12 = 1, G22 = 0.2.
Mode 3 ( i = 3 ):
A13 = [1, 0.1; 0, 0.1], Ad13 = [0.1, 0.1; 0, 0.1], B13 = [1.1, 0.1], C13 = [0.1, 0.1], A23 = [0.3, 0; 0, 0.1], Ad23 = [0.2, 1; 0, 0.1], B23 = [0.1, 0.4], C23 = [0.1, 0.1], D13 = [0.1, 1], Dd13 = [0.1, 0.1], D3 = [1, 3], G3 = 1, G13 = 1, G23 = 0.1.
The partly unknown TP matrix  Π  with three modes is given as follows:
Π = [ π̂11, 0.2, π̂13 ;  π̂21, π̂22, 0.8 ;  0.1, π̂32, π̂33 ],
where π̂_ij (i, j = 1, 2, 3) denotes an unknown element. One possible mode evolution is given in Figure 1.
According to Remark 8, the minimum value of ρ2 + γ2 depends on the parameter α. A feasible solution of (45) exists when 1.01 ≤ α ≤ 2.01. Figure 2 and Figure 3 show the optimal values of ρ2, γ2, and ρ2 + γ2 for different α values. The optimal values are attained at α = 1.02, namely ρ2 = 2.6437, γ2 = 7.5107, and γ2 + ρ2 = 10.1544.
Next, letting  τ = 3 , h = 3 , T ^ = 30 , ρ 1 = 0.1 , and  R i = I 2 × 2 ( i = 1 , 2 , 3 ) , and solving LMIs (26) to (29), we obtain  σ 1 = 0.1888 ξ 1 = 0.3535 , and  ξ 2 = 21.5646 , and the gains of the state feedback controller (8) are as follows:
K 1 = 0.1176 0.2082 , K 2 = 0.9916 0.1625 , K 3 = 0.4783 0.3299 .
Then, we set the initial value x(0) = [0 0]^T for MJS (1) and CLS (9), and the external disturbance signal v(k) = 0.4 sin k, which satisfies ∑_{k=0}^{T̂} v^T(k) v(k) ≤ h = 3.
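The disturbance energy bound can be verified directly with a few lines of standard-library code (T̂ = 30, h = 3, v(k) = 0.4 sin k):

```python
import math

T_hat, h = 30, 3.0
v = [0.4 * math.sin(k) for k in range(T_hat + 1)]
energy = sum(vk * vk for vk in v)     # sum_{k=0}^{T_hat} v(k)^2
print(f"disturbance energy = {energy:.4f} <= h = {h}")
assert energy <= h
```

The sum evaluates to roughly 2.5, comfortably inside the admissible disturbance set.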
Figure 4 shows the trajectories of  x T ( k ) R i x ( k )  (50 curves) and  E { x T ( k ) R i x ( k ) }  of the open-loop system (1) ( u ( k ) = 0 ). It can be seen that the trajectory of  E { x T ( k ) R i x ( k ) }  exceeds the upper bound  ρ 2 , despite  E { x T ( 0 ) R i x ( 0 ) } = 0 < ρ 1 = 0.1 . This implies that the open-loop system (1) is not finite-time bounded.
Figure 5 shows the trajectories of the system state  x ( k )  for closed-loop system (9) and the control input  u ( k )  of MJS (1). The trajectories of  x T ( k ) R i x ( k )  (50 curves) and  E { x T ( k ) R i x ( k ) }  of closed-loop system (9) are illustrated in Figure 6. From Figure 6, it is seen that  E { x T ( k ) R i x ( k ) } < ρ 2 = 2.6437  with  E { x T ( 0 ) R i x ( 0 ) } = 0 < ρ 1 = 0.1 , which means that CLS (9) is SFT H∞-bounded; that is, MJS (1) is SFT H∞ state feedback stabilizable. This confirms the effectiveness of the state feedback controller (8) designed in this paper.
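Curves such as those in Figures 4 and 6 come from Monte Carlo averaging over sample paths. The sketch below shows the idea only: it uses hypothetical closed-loop matrices (stand-ins for A_1i + B_1i K_i and A_2i + B_2i K_i, with the delay and disturbance terms omitted) and fills the unknown entries of the TP matrix with one admissible completion whose rows sum to 1; it then estimates E{x^T(k)R_i x(k)} by averaging 50 sample paths:

```python
import numpy as np

rng = np.random.default_rng(42)

# One admissible completion of the partly unknown TP matrix (rows sum to 1);
# the hatted entries are unknown in the paper and are filled in here only
# for simulation purposes.
Pi = np.array([[0.5, 0.2, 0.3],
               [0.1, 0.1, 0.8],
               [0.1, 0.4, 0.5]])

# Hypothetical closed-loop matrices: A_hat[i] stands in for A_1i + B_1i K_i
# (drift) and A_tld[i] for A_2i + B_2i K_i (diffusion); not the paper's data.
A_hat = [0.5 * np.eye(2), np.array([[0.3, 0.1], [0.0, 0.4]]), 0.2 * np.eye(2)]
A_tld = [0.1 * np.eye(2)] * 3
R = np.eye(2)

T_hat, runs = 30, 50
vals = np.zeros((runs, T_hat + 1))
for r in range(runs):
    x = np.array([0.5, -0.5])          # nonzero start so the curves are visible
    mode = 0
    for k in range(T_hat + 1):
        vals[r, k] = x @ R @ x         # x^T R_i x along this sample path
        w = rng.standard_normal()      # i.i.d. multiplicative noise w(k)
        x = A_hat[mode] @ x + A_tld[mode] @ x * w
        mode = rng.choice(3, p=Pi[mode])   # Markovian mode switch

mean_val = vals.mean(axis=0)           # estimate of E{x^T(k) R_i x(k)}
print(mean_val[0], mean_val.max())
```

In the terms of Figure 6, the rows of vals correspond to the 50 thin trajectories and mean_val to the bold expectation estimate.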
Next, the second example focuses on the effectiveness of the observer-based state feedback controller (32) designed in Theorem 4 for MJS (1) with partly unknown transition probabilities.
Example 2.
The parameters of MJS (1) with three modes and partly unknown TPs are given as follows:
Mode 1 ( i = 1 ):
A11 = [0.1, 0; 0, 0.1], Ad11 = [0.1, 0; 0, 0.1], B11 = [1, 0.1], C11 = [0, 0.1], A21 = [0.1, 0; 0, 0.2], Ad21 = [0.1, 0; 0, 0.1], B21 = [0.1, 0.1], C21 = [0.1, 0.1], D11 = [0.5, 0.1], Dd11 = [0, 0.5], D1 = [1, 1], G1 = 1, G11 = 0.9, G21 = 0.1.
Mode 2 ( i = 2 ):
A12 = [0.1, 0.1; 0.1, 0.2], Ad12 = [0.2, 0; 0, 0.2], B12 = [0.1, 0.1], C12 = [0.1, 0.1], A22 = [0.1, 0; 0, 0.1], Ad22 = [0.1, 0; 0.2, 0.1], B22 = [0.1, 0.1], C22 = [0.1, 0.1], D12 = [1, 0.1], Dd12 = [0.3, 0.1], D2 = [2, 1], G2 = 1, G12 = 1, G22 = 0.1.
Mode 3 ( i = 3 ):
A13 = [0.1, 0; 0.2, 0.1], Ad13 = [0.2, 0.2; 0.1, 0.2], B13 = [0.1, 0.1], C13 = [0.1, 0.1], A23 = [0.2, 0; 0, 0.2], Ad23 = [0, 0.2; 0.1, 0.1], B23 = [1, 0.1], C23 = [0.1, 0.1], D13 = [0.1, 0.2], Dd13 = [0.2, 0], D3 = [1, 1], G3 = 2, G13 = 2, G23 = 0.1.
Then, letting  ϖ = 10^{−10}  and  R̂_i = I_{4×4} ( i = 1 , 2 , 3 ) , the partly unknown TP matrix  Π  and the remaining parameters take the same values as in Example 1. Similar to Example 1, a feasible solution of (46) exists when  1.01 ≤ β ≤ 2.04 . The relationships between  β  and  γ 2  and  ρ 2 , and between  β  and  γ 2 + ρ 2 , are shown in Figure 7 and Figure 8, respectively. From Figure 7 and Figure 8, we can see that the optimal values are  ρ 2 = 28.5424  and  γ 2 = 92.5307  at  β = 1.03 .
Then, we compare the results of Theorem 4 with Theorem 2 in [44]. The optimal values of  ρ 2  (i.e.,  τ  in [44]) and  γ  obtained from the two works are shown in Table 2.
From Table 2, it can be seen that the optimal values of  γ ,  ρ 2 , and  γ 2 + ρ 2  obtained in this paper are smaller than those of [44], which indicates that the present results are less conservative. In addition, ref. [44] assumed that the transition probabilities are completely known, so the results of [44] are special cases of those obtained here.
In addition, by solving LMIs (37), (39), (40), (41), and (44), we have  σ 2 = 0.1779 ϱ 1 = 0.9901 , and  ϱ 2 = 5.2268 , and the gains of observer-based state feedback controller (32) are as follows:
K ^ 1 = 0.2337 0.0433 , K ^ 2 = 0.4995 0.0534 , K ^ 3 = 0.0599 0.0414 ,
H 1 = 0.0501 0.0500 , H 2 = 0.0541 0.0623 , H 3 = 0.0429 0.0527 .
Next, we set the initial values  x ( 0 ) = x ^ ( 0 ) = [ 0   0 ] T  for systems (1) and (33), respectively. The external disturbance signal  v ( k )  is the same as in Example 1. The Markovian switching process of MJS (1) and CLS (33) is shown in Figure 9. Figure 10 shows the trajectories of  x T ( k ) R i x ( k )  (50 curves) and  E { x T ( k ) R i x ( k ) }  of open-loop system (1) ( u ( k ) = 0 ), which implies that open-loop system (1) is not finite-time bounded.
The trajectories of system state  η ( k )  for CLS (33) and the curve of the control input  u ( k )  of (32) are illustrated in Figure 11. Moreover, Figure 12 shows the trajectories of  η T ( k ) R ^ i η ( k )  (50 curves) and  E { η T ( k ) R ^ i η ( k ) }  of closed-loop system (33). From Figure 12, it can be observed that CLS (33) is SFT H∞-bounded, i.e., MJS (1) is SFT H∞ observer-based state feedback stabilizable. Furthermore, comparing Figure 10 and Figure 12 confirms that the observer-based state feedback controller (32) is effective.

6. Conclusions

In this paper, design schemes for a stochastic finite-time H∞ state feedback controller and a stochastic finite-time H∞ observer-based state feedback controller were developed for MJSs with time delay and partly unknown TPs. Sufficient conditions guaranteeing that the CLSs satisfy SFT H∞ boundedness were established via the LKF technique, and the controller gains were obtained by the LMI method. Two numerical examples verified the validity of the proposed design schemes. Future work will address finite-time guaranteed cost control and event-triggered control of discrete-time MJSs on the basis of these results.

Author Contributions

Data curation, X.G.; Writing—original draft, X.G.; Conceptualization, Y.L.; Methodology, Y.L.; Writing—review & editing, Y.L.; Supervision, X.L.; Project administration, X.L.; Funding acquisition, X.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (No. 61972236) and the Shandong Provincial Natural Science Foundation (No. ZR2022MF233).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Liu, Z.; Karimi, H.R.; Yu, J. Passivity-based robust sliding mode synthesis for uncertain delayed stochastic systems via state observer. Automatica 2020, 111, 108596. [Google Scholar] [CrossRef]
  2. Li, Y.; Zhang, W.; Liu, X. H- index for discrete-time stochastic systems with Markovian jump and multiplicative noise. Automatica 2018, 90, 286–293. [Google Scholar] [CrossRef]
  3. Liu, X.; Zhang, W.; Li, Y. H- index for continuous-time stochastic systems with Markovian jump and multiplicative noise. Automatica 2019, 105, 167–178. [Google Scholar] [CrossRef]
  4. Yang, H.; Yin, S.; Kaynak, O. Neural network-based adaptive fault-tolerant control for Markovian jump systems with nonlinearity and actuator faults. IEEE Trans. Syst. Man Cybern. Syst. 2020, 51, 3687–3698. [Google Scholar] [CrossRef]
  5. Li, F.; Li, X.; Zhang, X.; Yang, C. Asynchronous filtering for delayed Markovian jump systems via homogeneous polynomial approach. IEEE Trans. Autom. Control. 2019, 65, 2163–2170. [Google Scholar] [CrossRef]
  6. Ren, J.; He, G.; Fu, J. Robust H sliding mode control for nonlinear stochastic T-S fuzzy singular Markovian jump systems with time-varying delays. Inf. Sci. 2020, 535, 42–63. [Google Scholar] [CrossRef]
  7. Wang, Y.; Han, Y.; Gao, C. Robust H sliding mode control for uncertain discrete singular T-S fuzzy Markov jump systems. Asian J. Control. 2023, 25, 524–536. [Google Scholar] [CrossRef]
  8. Shi, Y.; Peng, X. Fault detection filters design of polytopic uncertain discrete-time singular Markovian jump systems with time-varying delays. J. Frankl. Inst. 2020, 357, 7343–7367. [Google Scholar] [CrossRef]
  9. Wang, G.; Xu, L. Almost sure stability and stabilization of Markovian jump systems with stochastic switching. IEEE Trans. Autom. Control. 2021, 67, 1529–1536. [Google Scholar] [CrossRef]
  10. Li, X.; Zhang, W.; Lu, D. Stability and stabilization analysis of Markovian jump systems with generally bounded transition probabilities. J. Frankl. Inst. 2020, 357, 8416–8434. [Google Scholar] [CrossRef]
  11. Liu, X.; Zhuang, J.; Li, Y. H filtering for Markovian jump linear systems with uncertain transition probabilities. Int. J. Control. Autom. Syst. 2021, 19, 2500–2510. [Google Scholar] [CrossRef]
  12. Pan, S.; Zhou, J.; Ye, Z. Event-triggered dynamic output feedback control for networked Markovian jump systems with partly unknown transition rates. Math. Comput. Simul. 2021, 181, 539–561. [Google Scholar] [CrossRef]
  13. Su, X.; Wang, C.; Chang, H.; Yang, Y.; Assawinchaichote, W. Event-triggered sliding mode control of networked control systems with Markovian jump parameters. Automatica 2021, 125, 109405. [Google Scholar] [CrossRef]
  14. Shen, A.; Li, L.; Li, C. H filtering for discrete-time singular Markovian jump systems with generally uncertain transition rates. Circuits, Syst. Signal Process. 2021, 40, 3204–3226. [Google Scholar] [CrossRef]
  15. Park, C.; Kwon, N.K.; Park, I.S.; Park, P. H filtering for singular Markovian jump systems with partly unknown transition rates. Automatica 2019, 109, 108528. [Google Scholar] [CrossRef]
  16. Xue, M.; Yan, H.; Zhang, H.; Li, Z.; Chen, S.; Chen, C. Event-triggered guaranteed cost controller design for T-S fuzzy Markovian jump systems with partly unknown transition probabilities. IEEE Trans. Fuzzy Syst. 2020, 29, 1052–1064. [Google Scholar] [CrossRef]
  17. Sun, H.; Zhang, Y.; Wu, A. H control for discrete-time Markovian jump linear systems with partially uncertain transition probabilities. Optim. Control. Appl. Methods 2020, 41, 1796–1809. [Google Scholar] [CrossRef]
  18. Zhang, J.; Liu, Z.; Jiang, B. Neural network-based adaptive reliable control for nonlinear Markov jump systems against actuator attacks. Nonlinear Dyn. 2023, 111, 13985–13999. [Google Scholar] [CrossRef]
  19. Guo, G.; Zhang, X.; Liu, Y.; Zhao, Z.; Zhang, R.; Zhang, C. Disturbance observer-based finite-time braking control of vehicular platoons. IEEE Trans. Intell. Veh. 2023. [Google Scholar] [CrossRef]
  20. Hamrah, R.; Sanyal, A.K.; Viswanathan, S.P. Discrete finite-time stable attitude tracking control of unmanned vehicles on SO(3). In Proceedings of the 2020 American Control Conference (ACC), Denver, CO, USA, 1–3 July 2020; pp. 824–829. [Google Scholar] [CrossRef]
  21. Zhao, J.; Qiu, L.; Xie, X.; Sun, Z. Finite-time stabilization of stochastic nonlinear systems and its applications in ship maneuvering systems. IEEE Trans. Fuzzy Syst. 2023, 32, 1023–1035. [Google Scholar] [CrossRef]
  22. Dorato, P. Short-Time Stability in Linear Time-Varying Systems. Ph.D. Thesis, Polytechnic Institute of Brooklyn, Brooklyn, NY, USA, 1961. [Google Scholar]
  23. Ren, C.; He, S. Finite-time stabilization for positive Markovian jumping neural networks. Appl. Math. Comput. 2020, 365, 124631. [Google Scholar] [CrossRef]
  24. Zhang, Y.; Jiang, T. Finite-time boundedness and chaos-like dynamics of a class of Markovian jump linear systems. J. Frankl. Inst. 2020, 357, 2083–2098. [Google Scholar] [CrossRef]
  25. Chen, Q.; Tong, D.; Zhou, W. Finite-time stochastic boundedness for Markovian jumping systems via the sliding mode control. J. Frankl. Inst. 2022, 359, 4678–4698. [Google Scholar] [CrossRef]
  26. Zhong, S.; Zhang, W.; Feng, L. Finite-time stability and asynchronous resilient control for Itô stochastic semi-Markovian jump systems. J. Frankl. Inst. 2022, 359, 1531–1557. [Google Scholar] [CrossRef]
  27. Sang, H.; Wang, P.; Zhao, Y.; Nie, H.; Fu, J. Input-output finite-time stability for switched T-S fuzzy delayed systems with time-dependent Lyapunov-Krasovskii functional approach. IEEE Trans. Fuzzy Syst. 2023, 31, 3823–3837. [Google Scholar] [CrossRef]
  28. Kaviarasan, B.; Kwon, O.; Park, M.J.; Sakthivel, R. Input-output finite-time stabilization of T-S fuzzy systems through quantized control strategy. IEEE Trans. Fuzzy Syst. 2021, 30, 3589–3600. [Google Scholar] [CrossRef]
  29. Hu, X.; Wang, L.; Sheng, Y.; Hu, J. Finite-time stabilization of fuzzy spatiotemporal competitive neural networks with hybrid time-varying delays. IEEE Trans. Fuzzy Syst. 2023, 31, 3015–3024. [Google Scholar] [CrossRef]
  30. Li, X.; Ho, D.W.; Cao, J. Finite-time stability and settling-time estimation of nonlinear impulsive systems. Automatica 2019, 99, 361–368. [Google Scholar] [CrossRef]
  31. Yang, X.; Li, X. Finite-time stability of nonlinear impulsive systems with applications to neural networks. IEEE Trans. Neural Netw. Learn. Syst. 2021, 34, 243–251. [Google Scholar] [CrossRef]
  32. Zhang, T.; Deng, F.; Shi, P. Non-fragile finite-time stabilization for discrete mean-field stochastic systems. IEEE Trans. Autom. Control. 2023, 68, 6423–6430. [Google Scholar] [CrossRef]
  33. Liu, X.; Liu, Q.; Li, Y. Finite-time guaranteed cost control for uncertain mean-field stochastic systems. J. Frankl. Inst. 2020, 357, 2813–2829. [Google Scholar] [CrossRef]
  34. Liu, X.; Teng, Y.; Li, Y. A design proposal of finite-time H controller for stochastic mean-field systems. Asian J. Control. 2024. [Google Scholar] [CrossRef]
  35. Sun, X.; Yang, D.; Zong, G. Annular finite-time H control of switched fuzzy systems: A switching dynamic event-triggered control approach. Nonlinear Anal. Hybrid Syst. 2021, 41, 101050. [Google Scholar] [CrossRef]
  36. Zhu, C.; Li, X.; Cao, J. Finite-time H dynamic output feedback control for nonlinear impulsive switched systems. Nonlinear Anal. Hybrid Syst. 2021, 39, 100975. [Google Scholar] [CrossRef]
  37. Wang, G.; Zhao, F.; Chen, X.; Qiu, J. Observer-based finite-time H control of Itô-type stochastic nonlinear systems. Asian J. Control. 2023, 25, 2378–2387. [Google Scholar] [CrossRef]
  38. Zhang, Y.; Liu, C. Observer-based finite-time H control of discrete-time Markovian jump systems. Appl. Math. Model. 2013, 37, 3748–3760. [Google Scholar] [CrossRef]
  39. Gao, X.; Ren, H.; Deng, F.; Zhou, Q. Observer-based finite-time H control for uncertain discrete-time nonhomogeneous Markov jump systems. J. Frankl. Inst. 2019, 356, 1730–1749. [Google Scholar] [CrossRef]
  40. Mu, X.; Li, X.; Fang, J.; Wu, X. Reliable observer-based finite-time H control for networked nonlinear semi-Markovian jump systems with actuator fault and parameter uncertainties via dynamic event-triggered scheme. Inf. Sci. 2021, 546, 573–595. [Google Scholar] [CrossRef]
  41. He, Q.; Xing, M.; Gao, X.; Deng, F. Robust finite-time H synchronization for uncertain discrete-time systems with nonhomogeneous Markovian jump: Observer-based case. Int. J. Robust Nonlinear Control. 2020, 30, 3982–4002. [Google Scholar] [CrossRef]
  42. Liu, X.; Wei, X.; Li, Y. Observer-based finite-time fuzzy H control for Markovian jump systems with time-delay and multiplicative noises. Int. J. Fuzzy Syst. 2023, 25, 1643–1655. [Google Scholar] [CrossRef]
  43. Liu, X.; Li, W.; Wang, J.; Li, Y. Robust finite-time stability for uncertain discrete-time stochastic nonlinear systems with time-varying delay. Entropy 2022, 24, 828. [Google Scholar] [CrossRef] [PubMed]
  44. Wei, X.; Liu, N.; Liu, X.; Li, Y. Observer-based finite-time H control for discrete-time Markovian jump systems with time-delays. J. Shandong Univ. Technol. Nat. Sci. Ed. 2022, 36, 17–27. [Google Scholar] [CrossRef]
Figure 1. Markovian switching process of MJS (1) and CLS (9).
Figure 2. The values of  ρ 2  and  γ 2  with different  α  values.
Figure 3. The values of  γ 2 + ρ 2  with different  α  values.
Figure 4. The trajectories of  x T ( k ) R i x ( k )  and  E { x T ( k ) R i x ( k ) }  for open-loop system (1) ( u ( k ) = 0 ).
Figure 5. The trajectories of system state  x ( k )  for CLS (9) and control input  u ( k )  (8).
Figure 6. The trajectories of  x T ( k ) R i x ( k )  and  E { x T ( k ) R i x ( k ) }  for MJS (1) and CLS (9).
Figure 7. The optimal values of  ρ 2  and  γ 2  with different  β  values.
Figure 8. The values of  γ 2 + ρ 2  with different  β  values.
Figure 9. Markovian switching process of MJS (1) and CLS (33).
Figure 10. The trajectories of  x T ( k ) R i x ( k )  and  E { x T ( k ) R i x ( k ) }  for open-loop system (1) ( u ( k ) = 0 ).
Figure 11. The trajectories of system state  η ( k )  for CLS (33) and control input  u ( k )  (32).
Figure 12. The trajectories of  η T ( k ) R ^ i η ( k )  and  E { η T ( k ) R ^ i η ( k ) }  for CLS (33).
Table 1. The acronyms used in this article and their meanings.
Acronyms | Meaning of Acronyms
MJS | Markovian jump system
SFT | Stochastic finite-time
LKF | Lyapunov–Krasovskii functional
CLS | Closed-loop system
LMI | Linear matrix inequality
TP | Transition probability
Table 2. The optimal values of  γ ρ 2 , and  γ 2 + ρ 2 .
Method | Theorem 4 in This Paper | Theorem 2 in Reference [44]
γ | 9.6193 | 105.9526
ρ2 | 28.5424 | 150.4146
γ2 + ρ2 | 121.0731 | 11,376.368

Guo, X.; Li, Y.; Liu, X. Finite-Time H Controllers Design for Stochastic Time-Delay Markovian Jump Systems with Partly Unknown Transition Probabilities. Entropy 2024, 26, 292. https://0-doi-org.brum.beds.ac.uk/10.3390/e26040292

