Article

An Extended Analysis on Robust Dissipativity of Uncertain Stochastic Generalized Neural Networks with Markovian Jumping Parameters

by
Usa Humphries
1,
Grienggrai Rajchakit
2,*,
Ramalingam Sriraman
3,
Pramet Kaewmesri
1,
Pharunyou Chanthorn
4,
Chee Peng Lim
5 and
Rajendran Samidurai
6
1
Department of Mathematics, Faculty of Science, King Mongkut’s University of Technology Thonburi (KMUTT), Thung Khru 10140, Thailand
2
Department of Mathematics, Faculty of Science, Maejo University, Chiang Mai 50290, Thailand
3
Department of Science and Humanities, Vel Tech High Tech Dr. Rangarajan Dr. Sakunthala Engineering College, Avadi, Tamil Nadu 600062, India
4
Research Center in Mathematics and Applied Mathematics, Department of Mathematics, Faculty of Science, Chiang Mai University, Chiang Mai 50200, Thailand
5
Institute for Intelligent Systems Research and Innovation, Deakin University, Waurn Ponds, VIC 3216, Australia
6
Department of Mathematics, Thiruvalluvar University, Vellore, Tamil Nadu 632115, India
*
Author to whom correspondence should be addressed.
Submission received: 15 May 2020 / Revised: 11 June 2020 / Accepted: 12 June 2020 / Published: 20 June 2020
(This article belongs to the Special Issue Symmetry in Nonlinear Studies)

Abstract:
The main focus of this research is a comprehensive analysis of robust dissipativity issues pertaining to a class of uncertain stochastic generalized neural network (USGNN) models in the presence of time-varying delays and Markovian jumping parameters (MJPs). In real-world environments, most practical systems are subject to uncertainties. As a result, we take norm-bounded parameter uncertainties, as well as stochastic disturbances, into consideration in our study. To address the task, we formulate an appropriate Lyapunov–Krasovskii functional (LKF) and, through the use of effective integral inequalities, derive sufficient conditions in the form of simplified linear matrix inequalities (LMIs). We validate the feasibility of the solutions through numerical examples in MATLAB. The simulation results are analyzed and discussed, and they positively indicate the feasibility and effectiveness of the theoretical findings.

1. Introduction

Over the last few decades, many studies on a wide variety of neural network (NN) models and their applications to different fields, e.g., optimization, image analysis, pattern recognition, and signal processing, have been conducted [1,2,3,4,5,6,7]. With regard to the stability analysis of NNs, two mathematical models are commonly adopted: local field NN models and static NN models. Nevertheless, the two categories of models are not always equivalent. Under certain assumptions, we are able to transform them into a compact representation. As such, there exists a class of unified generalized neural network (GNN) models in the literature [8,9,10]. Indeed, theoretical investigations on various dynamical properties of GNNs have become available recently, e.g., [8,9,10,11,12,13]. On the other hand, time delays arise naturally in nearly all dynamical systems, e.g., in chemical processes, nuclear reactors, and other fields [14,15,16,17,18,19,20]. In real environments, time delays are commonly viewed as one of the main factors that contribute to unstable system performance. In general, time delays can be categorized as either constant delays or time-varying delays (which constitute a generalized case of constant delays). In the literature, a number of aspects concerning various dynamical characteristics of time-delayed NN models have been examined, and effective methods that produce significant results have been reported [18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35]. On the other hand, the Markovian jumping neural network (MJNN) has recently received significant research interest. It is an extremely useful model for understanding the underlying dynamics when NNs incorporate abrupt changes in their structure. Studies on MJNN models with various dynamical properties are available in the literature [17,18,19,20,21]. In practical modeling problems, it is inevitable for most NN models to exhibit stochastic effects.
A comprehensive investigation of NN models with certain stochastic inputs is necessary [22,23,24,25,26]. The stability of stochastic nonlinear systems has recently become an important research field. Considerable efforts have been devoted to stochastic NNs with Markovian jumping parameters (MJPs), and several stability conditions have been published recently [23,24,25,26,27]. In [19], issues on exponential stability pertaining to stochastic NNs with MJPs were tackled using the Lyapunov functional method. The research in [20] focused on the stability issues related to stochastic NN models with Markovian switching. Similar results with respect to the proposed problem have also been published, e.g., [18,19,20,21,22,23,24,25,26,27,28]. Another concern in modeling practical systems is uncertainties associated with the system parameters. Indeed, many practical systems in real environments are susceptible to uncertainties. Thus, the investigations on NN models along with their uncertain parameters are important [27,28,29,30,31].
Undoubtedly, the dissipative behavior is essential in control and engineering problems. As a result, the dissipativity analysis of USGNN models is of importance, and this area has attracted attention from many researchers [34,35,36,37,38,39,40]. In [37], three types of neuron activation functions were discussed for global dissipativity of delayed recurrent NN models: monotonous non-decreasing, Lipschitz-continuous, and bounded. In [38], the problem with respect to the global dissipativity of NN models subject to unbounded, as well as time-varying delays was addressed. Meanwhile, by exploiting the multi-dynamic behaviors derived from the ( Q , S , R ) dissipativity principles, researchers were able to obtain effective results by changing the system weight matrices. These results are useful for undertaking various control and engineering problems [35,36,37,38,39]. Recently, analyses on the ( Q , S , R ) dissipativity issues in NN models became available, e.g., [40,41,42]. Nonetheless, there are only a few studies on the dissipativity of GNNs with MJPs. In accordance with our literature analysis, robust dissipativity analyses pertaining to USGNN models that incorporate the Markovian jumping parameters and time-varying delays constitute a new research topic, which is the main focus and contribution of our current paper.
In view of the limitations of many existing studies, our goal is to establish robust dissipativity and stability criteria for USGNNs with Markovian jumping parameters. By leveraging Lyapunov stability theory, we incorporate time-delay information into the formulation of an appropriate Lyapunov–Krasovskii functional (LKF). The LKF derivatives are estimated with new integral inequalities, which yield less conservative results. By employing Itô's formula and some analytic techniques, robust dissipativity and stability conditions can be formulated as simplified LMIs. We present several numerical examples to ascertain the results.
In Section 2 and Section 3, we present the problem definition and the main results. Section 4 presents the numerical examples, while Section 5 outlines the conclusions.
Notations: In the following presentation, $\mathbb{R}^n$ indicates the n-dimensional Euclidean space; $\mathbb{R}^{n\times n}$ indicates the set of $n\times n$ real matrices, while $P>0$ indicates a symmetric positive definite matrix. The transpose of $X$ is denoted by $X^T$. In addition, $\mathrm{tr}\{D\}$ denotes the trace of the matrix $D$. Given a symmetric block matrix, the elements below its main diagonal are denoted by $\star$. On the other hand, $(\Omega,\mathcal{F},\mathcal{P})$ indicates a complete probability space that incorporates a natural filtration. Besides that, $\mathrm{diag}\{\cdot\}$ indicates a block diagonal matrix. An identity matrix of appropriate dimensions is denoted by $I_n$. The mathematical expectation is denoted by $\mathbb{E}\{\cdot\}$, while $\mathcal{L}_2[0,\infty)$ indicates the space of square-integrable n-dimensional vector functions on $[0,\infty)$.

2. Problem Statement and Basic Information

By using $\{e(t),\,t\ge 0\}$ to express a right-continuous Markovian process on $(\Omega,\mathcal{F},\mathcal{P})$, we have the transition probability matrix $\Pi=[\pi_{xy}]_{N\times N}$ on a finite state space $S=\{1,2,\ldots,N\}$ as:
$ \Pr\{e(t+\Delta t)=y \mid e(t)=x\} = \begin{cases} \pi_{xy}\Delta t + o(\Delta t), & \text{if } x\neq y,\\ 1+\pi_{xx}\Delta t + o(\Delta t), & \text{if } x=y, \end{cases} $
where $\pi_{xy}\ge 0$ ($x\neq y$) indicates the transition rate from $x$ to $y$ and $\pi_{xx}=-\sum_{y=1,\,y\neq x}^{N}\pi_{xy}$. In addition, $\Delta t>0$ and $\lim_{\Delta t\to 0} o(\Delta t)/\Delta t = 0$.
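As an aside, the transition mechanism above is easy to sample numerically. The sketch below (Python/NumPy; the generator values are illustrative, not from this paper) draws a right-continuous Markov chain path by applying the one-step probabilities $\pi_{xy}\Delta t$ over a small step $\Delta t$:

```python
import numpy as np

def simulate_markov_chain(Pi, t_end, dt, x0=0, seed=0):
    """Sample a right-continuous Markov chain e(t) on S = {0, ..., N-1}
    from its transition-rate (generator) matrix Pi, using
    Pr{e(t+dt) = y | e(t) = x} ~ Pi[x, y] * dt for x != y and small dt."""
    rng = np.random.default_rng(seed)
    n_steps = int(t_end / dt)
    states = np.empty(n_steps, dtype=int)
    x = x0
    for k in range(n_steps):
        states[k] = x
        # One-step jump probabilities: row x of (I + Pi * dt).
        probs = np.eye(Pi.shape[0])[x] + Pi[x] * dt
        x = rng.choice(Pi.shape[0], p=probs)
    return states

# A valid generator has non-negative off-diagonal rates and zero row sums,
# i.e., pi_xx = -sum_{y != x} pi_xy.
Pi = np.array([[-3.0, 3.0],
               [2.0, -2.0]])
assert np.allclose(Pi.sum(axis=1), 0.0)
path = simulate_markov_chain(Pi, t_end=10.0, dt=0.001)
```

For small $\Delta t$, each row of $I+\Pi\Delta t$ is a valid probability vector, which is what the sampler exploits.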
A GNN model that incorporates time-varying delays and MJPs is considered, i.e.,
$ \dot p(t) = -D(e(t))p(t) + A(e(t))g(W(e(t))p(t)) + B(e(t))g(W(e(t))p(t-r(t))) + u(t), \qquad q(t) = g(W(e(t))p(t)), $
where the state vector is denoted by $p(t)=[p_1(t),p_2(t),\ldots,p_n(t)]^T\in\mathbb{R}^n$, while $g(W(e(t))p(\cdot))=[g_1(W(e(t))p_1(\cdot)),\ldots,g_n(W(e(t))p_n(\cdot))]^T\in\mathbb{R}^n$ is the nonlinear neuron activation function. In addition, the external disturbance is indicated by $u(t)=[u_1(t),\ldots,u_n(t)]^T\in\mathbb{R}^n$, which belongs to $\mathcal{L}_2[0,\infty)$. Note that $r(t)$ corresponds to the transmission delay, while the output vector is indicated by $q(t)=[q_1(t),\ldots,q_n(t)]^T\in\mathbb{R}^n$. Let $p(t)=\phi(t)$ on $-r\le t\le 0$, with $\phi\in C([-r,0];\mathbb{R}^n)$, be the initial condition of (2). Besides that, $D(e(t))$, $A(e(t))$, $B(e(t))$, and $W(e(t))$ are matrix functions of $e(t)$. For each $e(t)\in S$,
$ D(e(t)) = \mathrm{diag}\{d_1(e(t)),\ldots,d_n(e(t))\}, \quad A(e(t)) = [a_{ij}(e(t))]_{n\times n}, \quad B(e(t)) = [b_{ij}(e(t))]_{n\times n}, \quad W(e(t)) = [w_{ij}(e(t))]_{n\times n}, $ all in $\mathbb{R}^{n\times n}$.
$(A_1)$: The known time delay of the NN model (2), i.e., $r(t)$, satisfies:
$ 0 \le r(t) \le r, \qquad \dot r(t) \le \mu, $
where $\mu$ and $r$ are known real constants.
$(A_2)$: For all $\zeta_1,\zeta_2\in\mathbb{R}$, $\zeta_1\neq\zeta_2$, the neuron activation function $g(\cdot)$ is continuous and bounded, and fulfills:
$ [g(\zeta_1)-g(\zeta_2)-\Delta_1(\zeta_1-\zeta_2)]^T[g(\zeta_1)-g(\zeta_2)-\Delta_2(\zeta_1-\zeta_2)] \le 0, $
where $\Delta_1$ and $\Delta_2$ are known constant matrices.
Remark 1.
Assumption ( A 2 ) is imposed on the activation function of a neuron, which is known as the sector-bounded activation function. In the numerical example later, we can see that this sector bound condition (4) achieves a less conservative result than those from both the sigmoid and Lipschitz based activation functions.
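For instance, the common choice $g=\tanh$ satisfies $(A_2)$ with $\Delta_1=0$ and $\Delta_2=I$, since its difference quotient lies in $[0,1]$. A quick numerical check of this (Python/NumPy, illustrative only), together with the matrices $K_1$, $K_2$ of (32) for this choice:

```python
import numpy as np

# Scalar sector check for g = tanh with Delta1 = 0, Delta2 = 1:
# [g(z1)-g(z2) - 0*(z1-z2)] * [g(z1)-g(z2) - 1*(z1-z2)] <= 0,
# which holds because the difference quotient of tanh lies in [0, 1].
rng = np.random.default_rng(1)
z1, z2 = rng.normal(size=10000), rng.normal(size=10000)
dg = np.tanh(z1) - np.tanh(z2)
dz = z1 - z2
assert np.all(dg * (dg - dz) <= 1e-12)

# The sector matrices of (32) for these bounds (scalar case):
Delta1, Delta2 = 0.0, 1.0
K1 = (Delta1 * Delta2 + Delta2 * Delta1) / 2  # = 0
K2 = (Delta1 + Delta2) / 2                    # = 0.5
```

This is exactly the setting used in Example 1, where $K_1=0$ and $K_2=0.5I$.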
As stated before, NN models are affected by environmental noise in the real world, which compromises the stability of the equilibrium. To address this challenge, we study a stochastic model in which the GNN model is subject to stochastic disturbances with MJPs and time-varying delays, as follows:
$ dp(t) = [-D(e(t))p(t) + A(e(t))g(W(e(t))p(t)) + B(e(t))g(W(e(t))p(t-r(t))) + u(t)]\,dt + \sigma(t,p(t),p(t-r(t)),e(t))\,d\omega(t), \qquad q(t) = g(W(e(t))p(t)), $
where $\omega(t)=[\omega_1(t),\ldots,\omega_n(t)]^T\in\mathbb{R}^n$ is an n-dimensional Brownian motion on $(\Omega,\mathcal{F},\mathcal{P})$, while the stochastic perturbation is denoted by $\sigma(t,p(t),p(t-r(t)),e(t))$. Besides that, $\sigma(\cdot):\mathbb{R}^{+}\times\mathbb{R}^n\times\mathbb{R}^n\times S\to\mathbb{R}^{n\times n}$ is Borel measurable with $\sigma(0,0,0,e(t))\equiv 0$.
For simplicity, let $e(t)=x$ $(x\in S)$. As such, $W(e(t))=W_x$, $D(e(t))=D_x$, $A(e(t))=A_x$, $B(e(t))=B_x$. Model (5) becomes:
$ dp(t) = [-D_xp(t) + A_xg(W_xp(t)) + B_xg(W_xp(t-r(t))) + u(t)]\,dt + \sigma(t,p(t),p(t-r(t)),x)\,d\omega(t), \qquad q(t) = g(W_xp(t)). $
For convenience, we adopt the following abbreviations:
$ \varphi(t) \triangleq -D_xp(t) + A_xg(W_xp(t)) + B_xg(W_xp(t-r(t))) + u(t), \qquad \sigma(t) \triangleq \sigma(t,p(t),p(t-r(t)),x). $
As such, Model (6) becomes:
$ dp(t) = \varphi(t)\,dt + \sigma(t)\,d\omega(t), \qquad q(t) = g(W_xp(t)). $
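Although the analysis below is purely LMI-based, Model (7) can also be simulated directly with an Euler–Maruyama scheme. The sketch below is illustrative: the parameter values, the choice $g=\tanh$, and the diffusion $\sigma$ are hypothetical, and the constant history mimics the initial function $p(t)=\phi(t)$ on $[-r,0]$:

```python
import numpy as np

def euler_maruyama_step(p, p_delay, D, A, B, W, u, sigma, dt, rng):
    """One Euler-Maruyama step for dp = phi dt + sigma dw of Model (7),
    with phi = -D p + A g(W p) + B g(W p(t - r)) and g = tanh here.
    sigma(p, p_delay) returns the n x n diffusion matrix."""
    g = np.tanh
    phi = -D @ p + A @ g(W @ p) + B @ g(W @ p_delay) + u
    dw = rng.normal(scale=np.sqrt(dt), size=p.shape)
    return p + phi * dt + sigma(p, p_delay) @ dw

# Illustrative run (hypothetical 2-d parameters, constant delay r = 0.3):
n, dt, r = 2, 0.001, 0.3
lag = round(r / dt)
D = np.diag([2.0, 1.5]); A = 0.2 * np.eye(n); B = 0.1 * np.eye(n); W = np.eye(n)
sigma = lambda p, pd: 0.1 * np.diag(p)      # satisfies sigma(0, 0) = 0
rng = np.random.default_rng(2)
hist = [np.array([0.3, -0.6])] * (lag + 1)  # constant initial function
for _ in range(3000):
    p_new = euler_maruyama_step(hist[-1], hist[-1 - lag], D, A, B, W,
                                np.zeros(n), sigma, dt, rng)
    hist.append(p_new)
```

The delay is handled by indexing a history buffer `lag` steps back, which is the standard discrete treatment of $p(t-r)$.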
$(A_3)$: For all $x\in S$, matrices $L_{1x}>0$, $L_{2x}>0$ exist that satisfy:
$ \mathrm{tr}\{\sigma^T(t)\sigma(t)\} \le p^T(t)L_{1x}p(t) + p^T(t-r(t))L_{2x}p(t-r(t)). $
Consider $V\in C(\mathbb{R}^{+}\times\mathbb{R}^n\times\mathbb{R}^n\times S;\mathbb{R}^{+})$. Pertinent to the trajectory of Model (6), we can formulate an operator $\mathcal{L}V$ from $\mathbb{R}^{+}\times\mathbb{R}^n\times\mathbb{R}^n\times S$ to $\mathbb{R}$, i.e.,
$ \mathcal{L}V(t,p(t),x) = V_t(t,p(t),x) + V_p(t,p(t),x)[-D_xp(t) + A_xg(W_xp(t)) + B_xg(W_xp(t-r(t))) + u(t)] + \tfrac{1}{2}\mathrm{tr}[\sigma^T(t,p(t),p(t-r(t)),x)\,V_{pp}(t,p(t),x)\,\sigma(t,p(t),p(t-r(t)),x)] + \sum_{y=1}^{N}\pi_{xy}V(t,p(t),y), $
where:
$ V_t(t,p(t),x) = \frac{\partial V(t,p(t),x)}{\partial t}, \quad V_p(t,p(t),x) = \left(\frac{\partial V(t,p(t),x)}{\partial p_1},\ldots,\frac{\partial V(t,p(t),x)}{\partial p_n}\right), \quad V_{pp}(t,p(t),x) = \left[\frac{\partial^2 V(t,p(t),x)}{\partial p_i\,\partial p_j}\right]_{n\times n}. $
Definition 1
([28]). Model (6) is mean-square stable if for any $\epsilon>0$ there exists a scalar $\upsilon(\epsilon)>0$ such that $\mathbb{E}\{\|p(t)\|^2\}<\epsilon$, $t>0$, whenever $\sup_{-r\le t\le 0}\mathbb{E}\{\|\phi(t)\|^2\}<\upsilon(\epsilon)$. Besides that, Model (6) is mean-square asymptotically stable if, for any initial condition, $\lim_{t\to\infty}\mathbb{E}\{\|p(t)\|^2\}=0$.
Definition 2
([36]). Model (6) is strictly $(\mathcal{Q},\mathcal{S},\mathcal{R})$-$\gamma$-dissipative if for some $\gamma>0$ and a zero initial condition, the inequality below is satisfied:
$ \mathbb{E}\{G(u,q,t_d)\} \ge \mathbb{E}\{\gamma\langle u,u\rangle_{t_d}\}, \qquad \forall\, t_d \ge 0. $
Remark 2.
The energy supply function G ( u , q , t d ) can be expressed as follows:
$ G(u,q,t_d) = \langle q,\mathcal{Q}q\rangle_{t_d} + 2\langle q,\mathcal{S}u\rangle_{t_d} + \langle u,\mathcal{R}u\rangle_{t_d}, \qquad t_d\ge 0, $
where $\mathcal{Q},\mathcal{S},\mathcal{R}\in\mathbb{R}^{n\times n}$, and $\mathcal{Q}$, $\mathcal{R}$ are symmetric. In addition, $\langle q,\mathcal{Q}q\rangle_{t_d}$, $\langle q,\mathcal{S}u\rangle_{t_d}$, and $\langle u,\mathcal{R}u\rangle_{t_d}$ represent $\int_0^{t_d} q^T(t)\mathcal{Q}q(t)\,dt$, $\int_0^{t_d} q^T(t)\mathcal{S}u(t)\,dt$, and $\int_0^{t_d} u^T(t)\mathcal{R}u(t)\,dt$, respectively.
As a result, the following dissipativity condition represents the relation in (10):
$ J_{\gamma,t_d} = \int_0^{t_d} \mathbb{E}\left\{ \begin{bmatrix} q(t)\\ u(t) \end{bmatrix}^T \begin{bmatrix} \mathcal{Q} & \mathcal{S}\\ \star & \mathcal{R}-\gamma I \end{bmatrix} \begin{bmatrix} q(t)\\ u(t) \end{bmatrix} \right\} dt \ge 0. $
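The equivalence between the quadratic form in (12) and the supply rate of Remark 2 can be checked numerically on sampled trajectories. In the sketch below (Python/NumPy), the weight matrices and the trajectories are arbitrary test data, not values from this paper:

```python
import numpy as np

# Check that the block quadratic form of (12) integrates to
# G(u, q, t_d) - gamma * <u, u>_{t_d} on a discretized trajectory.
rng = np.random.default_rng(3)
n, steps, dt, gamma = 2, 500, 0.01, 0.5
Q = -np.eye(n); S = 0.3 * np.eye(n); R = 2.0 * np.eye(n)
q = rng.normal(size=(steps, n)); u = rng.normal(size=(steps, n))

# Supply function G and the L2-type term <u, u> (rectangle rule).
G = dt * sum(qt @ Q @ qt + 2 * qt @ S @ ut + ut @ R @ ut
             for qt, ut in zip(q, u))
uu = dt * sum(ut @ ut for ut in u)

# The block form of (12).
block = np.block([[Q, S], [S.T, R - gamma * np.eye(n)]])
J = dt * sum(np.concatenate([qt, ut]) @ block @ np.concatenate([qt, ut])
             for qt, ut in zip(q, u))
assert np.isclose(J, G - gamma * uu)
```

Expanding the block form gives $q^T\mathcal{Q}q + 2q^T\mathcal{S}u + u^T(\mathcal{R}-\gamma I)u$, which is exactly the integrand of $G - \gamma\langle u,u\rangle_{t_d}$.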
Definition 3
([29]). Model (6) is passive if there exists a scalar $\gamma>0$ such that, for all $t_d\ge 0$,
$ 2\int_0^{t_d}\mathbb{E}\{q^T(t)u(t)\}\,dt \ge -\gamma\int_0^{t_d}\mathbb{E}\{u^T(t)u(t)\}\,dt. $
The condition in (13) holds for all solutions of (6) with $p(0)=0$.
Lemma 1
([43]). Consider scalars $s_1$ and $s_2$ that satisfy $s_1<s_2$ and a matrix $W=W^T>0$. For any continuously differentiable function $\vartheta:[s_1,s_2]\to\mathbb{R}^n$, the following inequality holds:
$ \int_{s_1}^{s_2}\vartheta^T(z_1)W\vartheta(z_1)\,dz_1 \ge \frac{1}{s_2-s_1}\,\varpi_1^T\Theta_1\varpi_1, $
where:
$ \varpi_1 = \begin{bmatrix} \int_{s_1}^{s_2}\vartheta^T(z_1)\,dz_1 & \int_{s_1}^{s_2}\!\int_{s_1}^{z_1}\vartheta^T(z_2)\,dz_2dz_1 & \int_{s_1}^{s_2}\!\int_{s_1}^{z_1}\!\int_{s_1}^{z_2}\vartheta^T(z_3)\,dz_3dz_2dz_1 \end{bmatrix}^T, \qquad \Theta_1 = \begin{bmatrix} 9W & -\frac{36}{s_2-s_1}W & \frac{60}{(s_2-s_1)^2}W \\ \star & \frac{192}{(s_2-s_1)^2}W & -\frac{360}{(s_2-s_1)^3}W \\ \star & \star & \frac{720}{(s_2-s_1)^4}W \end{bmatrix}. $
Lemma 2
([44]). Consider scalars $s_1$ and $s_2$ that satisfy $s_1<s_2$ and a matrix $R=R^T>0$. For any continuously differentiable function $\vartheta:[s_1,s_2]\to\mathbb{R}^n$, the following inequality holds:
$ \int_{s_1}^{s_2}\!\int_{s_1}^{z_1}\vartheta^T(z_2)R\vartheta(z_2)\,dz_2dz_1 \ge \frac{2}{(s_2-s_1)^2}\,\varpi_2^T\Theta_2\varpi_2, $
where:
$ \varpi_2 = \begin{bmatrix} \int_{s_1}^{s_2}\!\int_{s_1}^{z_1}\vartheta^T(z_2)\,dz_2dz_1 & \int_{s_1}^{s_2}\!\int_{s_1}^{z_1}\!\int_{s_1}^{z_2}\vartheta^T(z_3)\,dz_3dz_2dz_1 & \int_{s_1}^{s_2}\!\int_{s_1}^{z_1}\!\int_{s_1}^{z_2}\!\int_{s_1}^{z_3}\vartheta^T(z_4)\,dz_4dz_3dz_2dz_1 \end{bmatrix}^T, \qquad \Theta_2 = \begin{bmatrix} 6R & -\frac{30}{s_2-s_1}R & \frac{60}{(s_2-s_1)^2}R \\ \star & \frac{210}{(s_2-s_1)^2}R & -\frac{480}{(s_2-s_1)^3}R \\ \star & \star & \frac{1200}{(s_2-s_1)^4}R \end{bmatrix}. $
Lemma 3
([45]). Consider scalars $s_1$ and $s_2$ that satisfy $s_1<s_2$ and a matrix $M=M^T>0$. For any continuously differentiable function $\vartheta:[s_1,s_2]\to\mathbb{R}^n$, the following inequality holds:
$ \int_{s_1}^{s_2}\vartheta^T(z_1)M\vartheta(z_1)\,dz_1 \ge \frac{1}{s_2-s_1}\left(\int_{s_1}^{s_2}\vartheta(z_1)\,dz_1\right)^T M \left(\int_{s_1}^{s_2}\vartheta(z_1)\,dz_1\right). $
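Lemma 3 is a Jensen-type inequality, and its discrete analogue is easy to verify numerically. In the sketch below (Python/NumPy), the positive definite matrix and the samples of $\vartheta$ are arbitrary:

```python
import numpy as np

# Discretized check of Lemma 3:
# int vartheta^T M vartheta >= (1/(s2-s1)) (int vartheta)^T M (int vartheta).
rng = np.random.default_rng(4)
s1, s2, steps, n = 0.0, 1.5, 2000, 3
dz = (s2 - s1) / steps
M = rng.normal(size=(n, n))
M = M @ M.T + n * np.eye(n)          # make M = M^T > 0
vals = rng.normal(size=(steps, n))   # samples of vartheta on the grid

lhs = dz * sum(v @ M @ v for v in vals)
integral = dz * vals.sum(axis=0)
rhs = integral @ M @ integral / (s2 - s1)
assert lhs >= rhs - 1e-9
```

The discrete version is exactly Jensen's inequality for the quadratic form $v\mapsto v^TMv$, which is convex because $M>0$.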
Lemma 4
([46]). Consider real matrices $J_1$ and $J_2$, $\Theta=\Theta^T$, and $F(t)$ fulfilling $F^T(t)F(t)\le I$. Then $\Theta + (J_1F(t)J_2) + (J_1F(t)J_2)^T < 0$ if and only if there exists a scalar $\epsilon>0$ such that $\Theta + \epsilon^{-1}J_1J_1^T + \epsilon J_2^TJ_2 < 0$, or equivalently:
$ \begin{bmatrix} \Theta & J_1 & \epsilon J_2^T \\ \star & -\epsilon I & 0 \\ \star & \star & -\epsilon I \end{bmatrix} < 0. $
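The two formulations in Lemma 4 can be sanity-checked numerically via eigenvalues; the matrices below are small illustrative examples, not taken from this paper:

```python
import numpy as np

def is_neg_def(X):
    """Negative definiteness via the symmetric part's eigenvalues."""
    return bool(np.all(np.linalg.eigvalsh((X + X.T) / 2) < 0))

rng = np.random.default_rng(5)
n, eps = 3, 1.0
Theta = -2.0 * np.eye(n)
J1 = 0.2 * rng.normal(size=(n, n))
J2 = 0.2 * rng.normal(size=(n, n))

# Epsilon form of Lemma 4.
cond1 = is_neg_def(Theta + J1 @ J1.T / eps + eps * J2.T @ J2)
# Equivalent block LMI (Schur complement of the two -eps*I blocks).
block = np.block([
    [Theta,    J1,               eps * J2.T],
    [J1.T,     -eps * np.eye(n), np.zeros((n, n))],
    [eps * J2, np.zeros((n, n)), -eps * np.eye(n)],
])
cond2 = is_neg_def(block)
assert cond1 and cond2

# The robust inequality then holds for any F with spectral norm <= 1.
F = rng.normal(size=(n, n))
F /= max(1.0, np.linalg.norm(F, 2))
assert is_neg_def(Theta + J1 @ F @ J2 + (J1 @ F @ J2).T)
```

Taking the Schur complement of the block matrix with respect to its two $-\epsilon I$ diagonal blocks recovers $\Theta + \epsilon^{-1}J_1J_1^T + \epsilon J_2^TJ_2 < 0$.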

3. Main Results

For clarity of the notations, we adopt the following abbreviations in the remaining part of this paper:
$ p_t \triangleq p(t), \quad p_{r(t)} \triangleq p(t-r(t)), \quad p_r \triangleq p(t-r), \quad g_t \triangleq g(W_xp(t)), \quad g_{r(t)} \triangleq g(W_xp(t-r(t))), \quad q_t \triangleq q(t), \quad u_t \triangleq u(t), \quad \varphi_t \triangleq \varphi(t), \quad \varphi_{r(t)} \triangleq \int_{t-r}^{t}\varphi(z_1)\,dz_1, $
$ \chi_1 \triangleq \int_{t-r}^{t}p(z_1)\,dz_1, \quad \chi_2 \triangleq \int_{t-r}^{t}\!\int_{t-r}^{z_1}p(z_2)\,dz_2dz_1, \quad \chi_3 \triangleq \int_{t-r}^{t}\!\int_{t-r}^{z_1}\!\int_{t-r}^{z_2}p(z_3)\,dz_3dz_2dz_1, \quad \chi_4 \triangleq \int_{t-r}^{t}\!\int_{t-r}^{z_1}\!\int_{t-r}^{z_2}\!\int_{t-r}^{z_3}p(z_4)\,dz_4dz_3dz_2dz_1, $
$ \xi(t) \triangleq [\,p_t^T\ \ \varphi_t^T\ \ p_{r(t)}^T\ \ p_r^T\ \ g_t^T\ \ g_{r(t)}^T\ \ \varphi_{r(t)}^T\ \ \chi_1^T\ \ \chi_2^T\ \ \chi_3^T\ \ \chi_4^T\ \ u_t^T\,]^T. $

3.1. Dissipativity Analysis

In this subsection, we use the LKF and LMI methods to establish several key sufficient conditions for dissipativity analysis of Model (6).
Theorem 1.
Model (6) is strictly $(\mathcal{Q},\mathcal{S},\mathcal{R})$-dissipative for any given scalars $r>0$ and $\mu>0$ if there exist matrices $P_x>0$ $(x\in S)$, $Q>0$, $R>0$, $S>0$, $U>0$, $V>0$, $X>0$, any matrices $G_1$, $G_2$, diagonal matrices $H_1>0$, $H_2>0$, and scalars $\delta_x>0$ $(x\in S)$, $\gamma>0$ such that, for all $x\in S$, the LMIs below are valid:
$ P_x \le \delta_x I, $
$ \Theta_x = (\Theta_{i,j,x})_{12\times 12} < 0, $
where
$\Theta_{1,1,x} \triangleq \sum_{y=1}^{N}\pi_{xy}P_y + Q + R + r^2U + \frac{r^4}{4}V + \delta_xL_{1x} - G_1D_x - (G_1D_x)^T - W_x^TH_1K_1W_x$, $\Theta_{1,2,x} \triangleq P_x - G_1 - (G_2D_x)^T$, $\Theta_{1,5,x} \triangleq G_1A_x + W_x^TH_1K_2$, $\Theta_{1,6,x} \triangleq G_1B_x$, $\Theta_{1,12,x} \triangleq G_1$, $\Theta_{2,2,x} \triangleq rX - G_2 - G_2^T$, $\Theta_{2,5,x} \triangleq G_2A_x$, $\Theta_{2,6,x} \triangleq G_2B_x$, $\Theta_{2,12,x} \triangleq G_2$, $\Theta_{3,3,x} \triangleq -(1-\mu)Q + \delta_xL_{2x} - W_x^TH_2K_1W_x$, $\Theta_{3,6,x} \triangleq W_x^TH_2K_2$, $\Theta_{4,4,x} \triangleq -R$, $\Theta_{5,5,x} \triangleq S - H_1 - \mathcal{Q}$, $\Theta_{5,12,x} \triangleq -\mathcal{S}$, $\Theta_{6,6,x} \triangleq -(1-\mu)S - H_2$, $\Theta_{7,7,x} \triangleq -\frac{1}{r}X$, $\Theta_{8,8,x} \triangleq -9U$, $\Theta_{8,9,x} \triangleq \frac{36}{r}U$, $\Theta_{8,10,x} \triangleq -\frac{60}{r^2}U$, $\Theta_{9,9,x} \triangleq -\frac{192}{r^2}U - 6V$, $\Theta_{9,10,x} \triangleq \frac{360}{r^3}U + \frac{30}{r}V$, $\Theta_{9,11,x} \triangleq -\frac{60}{r^2}V$, $\Theta_{10,10,x} \triangleq -\frac{720}{r^4}U - \frac{210}{r^2}V$, $\Theta_{10,11,x} \triangleq \frac{480}{r^3}V$, $\Theta_{11,11,x} \triangleq -\frac{1200}{r^4}V$, $\Theta_{12,12,x} \triangleq -\mathcal{R} + \gamma I$.
Proof. 
For Model (6), the LKF candidate is as follows:
$ V(t,p_t,x) = \sum_{i=1}^{4}V_i(t,p_t,x), $
where:
$ V_1(t,p_t,x) = p_t^TP_xp_t, \qquad V_2(t,p_t,x) = \int_{t-r(t)}^{t}p^T(z_1)Qp(z_1)\,dz_1 + \int_{t-r}^{t}p^T(z_1)Rp(z_1)\,dz_1 + \int_{t-r(t)}^{t}g^T(W_xp(z_1))Sg(W_xp(z_1))\,dz_1, $
$ V_3(t,p_t,x) = r\int_{-r}^{0}\!\int_{t+z_1}^{t}p^T(z_2)Up(z_2)\,dz_2dz_1 + \frac{r^2}{2}\int_{-r}^{0}\!\int_{z_1}^{0}\!\int_{t+z_2}^{t}p^T(z_3)Vp(z_3)\,dz_3dz_2dz_1, \qquad V_4(t,p_t,x) = \int_{-r}^{0}\!\int_{t+z_1}^{t}\varphi^T(z_2)X\varphi(z_2)\,dz_2dz_1. $
Let $\mathcal{L}$ denote the weak infinitesimal operator. Itô's formula can then be used to compute $dV(t,p_t,x)$, i.e.,
$ dV(t,p_t,x) = \mathcal{L}V(t,p_t,x)\,dt + V_p(t,p_t,x)\,\sigma(t,p_t,p_{r(t)},x)\,d\omega(t), $
where:
$ \mathcal{L}V(t,p_t,x) = \sum_{i=1}^{4}\mathcal{L}V_i(t,p_t,x). $
Evaluating $\mathcal{L}V_i(t,p_t,x)$ along the solution of Model (6) yields:
$ \mathcal{L}V_1(t,p_t,x) = 2p_t^TP_x\varphi_t + p_t^T\sum_{y=1}^{N}\pi_{xy}P_yp_t + \mathrm{tr}\{\sigma^T(t)P_x\sigma(t)\}, $
$ \mathcal{L}V_2(t,p_t,x) = p_t^TQp_t - (1-\dot r(t))p_{r(t)}^TQp_{r(t)} + p_t^TRp_t - p_r^TRp_r + g_t^TSg_t - (1-\dot r(t))g_{r(t)}^TSg_{r(t)} \le p_t^TQp_t - (1-\mu)p_{r(t)}^TQp_{r(t)} + p_t^TRp_t - p_r^TRp_r + g_t^TSg_t - (1-\mu)g_{r(t)}^TSg_{r(t)}, $
$ \mathcal{L}V_3(t,p_t,x) = r^2p_t^TUp_t - r\int_{t-r}^{t}p^T(z_1)Up(z_1)\,dz_1 + \frac{r^4}{4}p_t^TVp_t - \frac{r^2}{2}\int_{-r}^{0}\!\int_{t+z_1}^{t}p^T(z_2)Vp(z_2)\,dz_2dz_1, $
$ \mathcal{L}V_4(t,p_t,x) = r\varphi_t^TX\varphi_t - \int_{t-r}^{t}\varphi^T(z_1)X\varphi(z_1)\,dz_1. $
By using Lemmas 1–3, we can estimate the integral terms in (21) and (22):
$ -r\int_{t-r}^{t}p^T(z_1)Up(z_1)\,dz_1 \le -\begin{bmatrix}\chi_1\\ \chi_2\\ \chi_3\end{bmatrix}^T \begin{bmatrix} 9U & -\frac{36}{r}U & \frac{60}{r^2}U\\ \star & \frac{192}{r^2}U & -\frac{360}{r^3}U\\ \star & \star & \frac{720}{r^4}U \end{bmatrix} \begin{bmatrix}\chi_1\\ \chi_2\\ \chi_3\end{bmatrix}, $
$ -\frac{r^2}{2}\int_{-r}^{0}\!\int_{t+z_1}^{t}p^T(z_2)Vp(z_2)\,dz_2dz_1 \le -\begin{bmatrix}\chi_2\\ \chi_3\\ \chi_4\end{bmatrix}^T \begin{bmatrix} 6V & -\frac{30}{r}V & \frac{60}{r^2}V\\ \star & \frac{210}{r^2}V & -\frac{480}{r^3}V\\ \star & \star & \frac{1200}{r^4}V \end{bmatrix} \begin{bmatrix}\chi_2\\ \chi_3\\ \chi_4\end{bmatrix}, $
$ -\int_{t-r}^{t}\varphi^T(z_1)X\varphi(z_1)\,dz_1 \le -\frac{1}{r}\varphi_{r(t)}^TX\varphi_{r(t)}. $
From (9) and (14),
$ \mathrm{tr}\{\sigma^T(t)P_x\sigma(t)\} \le \delta_x\,\mathrm{tr}\{\sigma^T(t)\sigma(t)\} \le p_t^T\delta_xL_{1x}p_t + p_{r(t)}^T\delta_xL_{2x}p_{r(t)}. $
For any constant matrices $G_1$, $G_2$ of suitable dimensions, the following identity holds:
$ 2[p_t^TG_1 + \varphi_t^TG_2][-\varphi_t - D_xp_t + A_xg_t + B_xg_{r(t)} + u_t] = 0. $
Besides that, we can obtain the following inequalities from (4):
$ (g_t - \Delta_1W_xp_t)^T(g_t - \Delta_2W_xp_t) \le 0, $
$ (g_{r(t)} - \Delta_1W_xp_{r(t)})^T(g_{r(t)} - \Delta_2W_xp_{r(t)}) \le 0. $
Consequently, for any positive diagonal matrices $H_1$, $H_2$, the inequalities below hold, with $K_1$, $K_2$ as defined in (32):
$ 0 \le -\begin{bmatrix}p_t\\ g_t\end{bmatrix}^T \begin{bmatrix} W_x^TH_1K_1W_x & -W_x^TH_1K_2\\ \star & H_1 \end{bmatrix} \begin{bmatrix}p_t\\ g_t\end{bmatrix}, $
$ 0 \le -\begin{bmatrix}p_{r(t)}\\ g_{r(t)}\end{bmatrix}^T \begin{bmatrix} W_x^TH_2K_1W_x & -W_x^TH_2K_2\\ \star & H_2 \end{bmatrix} \begin{bmatrix}p_{r(t)}\\ g_{r(t)}\end{bmatrix}, $
where:
$ K_1 = \frac{\Delta_1^T\Delta_2 + \Delta_2^T\Delta_1}{2}, \qquad K_2 = \frac{\Delta_1^T + \Delta_2^T}{2}. $
Combining (19)–(31), grouping all terms with respect to the augmented vector $\xi(t)$, and taking the mathematical expectation yields:
$ \mathbb{E}\{\mathcal{L}V(t,p_t,x) - q_t^T\mathcal{Q}q_t - 2q_t^T\mathcal{S}u_t - u_t^T(\mathcal{R}-\gamma I)u_t\} \le \mathbb{E}\{\xi^T(t)\Theta_x\xi(t)\}, $
where $\Theta_x$ is defined in (15) and $\xi(t)$ is defined at the beginning of Section 3.
Suppose $\Theta_x<0$; it is straightforward to obtain:
$ \mathbb{E}\{q_t^T\mathcal{Q}q_t + 2q_t^T\mathcal{S}u_t + u_t^T\mathcal{R}u_t\} \ge \mathbb{E}\{\mathcal{L}V(t,p_t,x) + \gamma u_t^Tu_t\}. $
Subject to zero initial conditions, integrating (35) from zero to $t_d$ yields:
$ \mathbb{E}\{G(u,q,t_d)\} \ge \mathbb{E}\{\gamma\langle u,u\rangle_{t_d} + V(t_d,p(t_d),x) - V(0,p(0),x)\} \ge \mathbb{E}\{\gamma\langle u,u\rangle_{t_d}\}, $
for all $t_d\ge 0$. As a result, Model (6) is strictly $(\mathcal{Q},\mathcal{S},\mathcal{R})$-dissipative pertaining to Definition 2. This completes the proof. □
Remark 3.
Given the unavoidable influence of stochastic disturbances, many stability-related issues in different NN models with stochastic inputs have been investigated, e.g., the local-field NN model [19], the static NN model [20], the Hopfield NN model [24], and the Cohen–Grossberg NN model [46]. These results are derived without considering GNN models. Compared with the results in [19,20,24], ours are more general, since we adopt a general form of the model for analysis.
Remark 4.
It should be noted that, by appropriately setting the weight matrices, multiple dynamic behaviors can be depicted by $(\mathcal{Q},\mathcal{S},\mathcal{R})$-dissipativity. Suppose $\mathcal{Q}=0$, $\mathcal{S}=I$, and $\mathcal{R}=2\gamma I$; then (10) yields the passivity condition $2\mathbb{E}\{\langle q,\mathcal{S}u\rangle_{t_d}\} \ge -\gamma\mathbb{E}\{\langle u,u\rangle_{t_d}\}$.
Corollary 2.
Model (6) is passive for any given scalars $r>0$ and $\mu>0$ if there exist matrices $P_x>0$ $(x\in S)$, $Q>0$, $R>0$, $S>0$, $U>0$, $V>0$, $X>0$, any matrices $G_1$, $G_2$, diagonal matrices $H_1>0$, $H_2>0$, and scalars $\delta_x>0$ $(x\in S)$, $\gamma>0$ such that, for all $x\in S$, the LMIs below are valid:
$ P_x \le \delta_x I, $
$ \bar\Theta_x = (\bar\Theta_{i,j,x})_{12\times 12} < 0, $
where
$\bar\Theta_x$ shares the entries of $\Theta_x$ in Theorem 1, except that $\bar\Theta_{5,5,x} \triangleq S - H_1$, $\bar\Theta_{5,12,x} \triangleq I$, and $\bar\Theta_{12,12,x} \triangleq -\gamma I$.
Proof. 
Using an LKF similar to (16), the passivity condition for Model (6) can be written as:
$ 2\int_0^{t_d}\mathbb{E}\{q_t^Tu_t\}\,dt \ge -\gamma\int_0^{t_d}\mathbb{E}\{u_t^Tu_t\}\,dt. $
Following the proof of Theorem 1, we obtain:
$ \mathbb{E}\{\mathcal{L}V(t,p_t,x) - 2q_t^Tu_t - \gamma u_t^Tu_t\} \le \mathbb{E}\{\xi^T(t)\bar\Theta_x\xi(t)\}. $
As a result, if $\bar\Theta_x<0$ holds, then (40) implies that:
$ \mathbb{E}\{\mathcal{L}V(t,p_t,x) - 2q_t^Tu_t - \gamma u_t^Tu_t\} \le 0. $
Subject to zero initial conditions, integrating (41) from zero to $t_d$ yields:
$ 2\int_0^{t_d}\mathbb{E}\{q_t^Tu_t\}\,dt \ge \mathbb{E}\left\{V(t_d,p(t_d),x) - V(0,p(0),x) - \gamma\int_0^{t_d}u_t^Tu_t\,dt\right\} \ge -\gamma\int_0^{t_d}\mathbb{E}\{u_t^Tu_t\}\,dt, $
for all $t_d\ge 0$. As a result, Model (6) is passive with respect to Definition 3. The proof is completed. □
Remark 5.
When we consider $u_t=0$, Model (6) reduces to:
$ dp(t) = [-D_xp(t) + A_xg(W_xp(t)) + B_xg(W_xp(t-r(t)))]\,dt + \sigma(t,p(t),p(t-r(t)),x)\,d\omega(t). $
We can obtain Corollary 3 using Theorem 1.
Corollary 3.
In the mean-square sense, Model (43) is globally asymptotically stable for any given scalars $r>0$ and $\mu>0$ if there exist matrices $P_x>0$ $(x\in S)$, $Q>0$, $R>0$, $S>0$, $U>0$, $V>0$, $X>0$, any matrices $G_1$, $G_2$, diagonal matrices $H_1>0$, $H_2>0$, and scalars $\delta_x>0$ $(x\in S)$ such that, for all $x\in S$, the LMIs below hold:
$ P_x \le \delta_x I, $
$ \breve\Theta_x = (\breve\Theta_{i,j,x})_{11\times 11} < 0, $
where
$\breve\Theta_x$ consists of the first eleven rows and columns of $\Theta_x$ in Theorem 1 (the blocks associated with $u_t$ are removed), except that $\breve\Theta_{5,5,x} \triangleq S - H_1$.
Remark 6.
When $u_t=0$ and no stochastic disturbance exists, Model (6) reduces to:
$ dp(t) = [-D_xp(t) + A_xg(W_xp(t)) + B_xg(W_xp(t-r(t)))]\,dt. $
We obtain Corollary 4 using Theorem 1.
Corollary 4.
Model (46) is globally asymptotically stable for any given scalars $r>0$ and $\mu>0$ if there exist matrices $P_x>0$ $(x\in S)$, $Q>0$, $R>0$, $S>0$, $U>0$, $V>0$, and diagonal matrices $H_1>0$, $H_2>0$ such that, for all $x\in S$, the LMI below holds:
$ \tilde\Theta_x = (\tilde\Theta_{i,j,x})_{9\times 9} < 0, $
where
$\tilde\Theta_{1,1,x} \triangleq -P_xD_x - (P_xD_x)^T + \sum_{y=1}^{N}\pi_{xy}P_y + Q + R + r^2U + \frac{r^4}{4}V - W_x^TH_1K_1W_x$, $\tilde\Theta_{1,4,x} \triangleq P_xA_x + W_x^TH_1K_2$, $\tilde\Theta_{1,5,x} \triangleq P_xB_x$, $\tilde\Theta_{2,2,x} \triangleq -(1-\mu)Q - W_x^TH_2K_1W_x$, $\tilde\Theta_{2,5,x} \triangleq W_x^TH_2K_2$, $\tilde\Theta_{3,3,x} \triangleq -R$, $\tilde\Theta_{4,4,x} \triangleq S - H_1$, $\tilde\Theta_{5,5,x} \triangleq -(1-\mu)S - H_2$, $\tilde\Theta_{6,6,x} \triangleq -9U$, $\tilde\Theta_{6,7,x} \triangleq \frac{36}{r}U$, $\tilde\Theta_{6,8,x} \triangleq -\frac{60}{r^2}U$, $\tilde\Theta_{7,7,x} \triangleq -\frac{192}{r^2}U - 6V$, $\tilde\Theta_{7,8,x} \triangleq \frac{360}{r^3}U + \frac{30}{r}V$, $\tilde\Theta_{7,9,x} \triangleq -\frac{60}{r^2}V$, $\tilde\Theta_{8,8,x} \triangleq -\frac{720}{r^4}U - \frac{210}{r^2}V$, $\tilde\Theta_{8,9,x} \triangleq \frac{480}{r^3}V$, $\tilde\Theta_{9,9,x} \triangleq -\frac{1200}{r^4}V$.

3.2. An Analysis on Robust Dissipativity

We examine robust dissipativity by extending the previous dissipativity condition to the following uncertain GNN model:
$ dp(t) = [-(D_x+\Delta D_x(t))p(t) + (A_x+\Delta A_x(t))g(W_xp(t)) + (B_x+\Delta B_x(t))g(W_xp(t-r(t))) + u(t)]\,dt + \sigma(t,p(t),p(t-r(t)),x)\,d\omega(t), \qquad q(t) = g(W_xp(t)), $
where $\Delta D_x(t)$, $\Delta A_x(t)$, and $\Delta B_x(t)$ are the time-varying parameter uncertainties, represented as:
$ [\,\Delta D_x(t)\ \ \Delta A_x(t)\ \ \Delta B_x(t)\,] = M_xF_x(t)[\,N_{1x}\ \ N_{2x}\ \ N_{3x}\,], $
where $F_x(t)$ is an unknown time-varying matrix function satisfying $F_x^T(t)F_x(t)\le I$, while $M_x$, $N_{1x}$, $N_{2x}$, and $N_{3x}$ are known real matrices.
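A concrete admissible uncertainty of the form (49) is easy to construct. In the sketch below (Python/NumPy), all matrices and the choice $F(t)=\sin(t)\,I$ are illustrative, not values from this paper:

```python
import numpy as np

# Build Delta(t) = M F(t) N with F(t)^T F(t) <= I (illustrative sizes/values).
rng = np.random.default_rng(6)
n = 2
M = 0.1 * np.eye(n)
N1, N2, N3 = 0.2 * np.eye(n), 0.3 * np.eye(n), 0.1 * np.eye(n)

def F(t):
    # Any matrix with spectral norm <= 1 is admissible, e.g. sin(t) * I.
    return np.sin(t) * np.eye(n)

t = 1.7
# Verify F(t)^T F(t) <= I, i.e. I - F^T F is positive semidefinite.
assert np.all(np.linalg.eigvalsh(np.eye(n) - F(t).T @ F(t)) >= -1e-12)
dD, dA, dB = M @ F(t) @ N1, M @ F(t) @ N2, M @ F(t) @ N3
# Each perturbation is norm-bounded: ||dD||_2 <= ||M||_2 ||N1||_2, etc.
assert np.linalg.norm(dD, 2) <= np.linalg.norm(M, 2) * np.linalg.norm(N1, 2) + 1e-12
```

This makes explicit why the uncertainties are called norm-bounded: the bound depends only on the known matrices $M_x$ and $N_{ix}$, not on $F_x(t)$.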
We can derive Theorem 5 using Theorem 1.
Theorem 5.
Model (48) is robustly $(\mathcal{Q},\mathcal{S},\mathcal{R})$-dissipative for any given scalars $r>0$ and $\mu>0$ if there exist matrices $P_x>0$ $(x\in S)$, $Q>0$, $R>0$, $S>0$, $U>0$, $V>0$, $X>0$, any matrices $G_1$, $G_2$, diagonal matrices $H_1>0$, $H_2>0$, and scalars $\delta_x>0$ $(x\in S)$, $\epsilon>0$, $\gamma>0$ such that, for all $x\in S$, the LMIs below hold:
$ P_x \le \delta_x I, $
$ \begin{bmatrix} \hat\Theta_x & \Gamma_1 & \epsilon\Gamma_2^T \\ \star & -\epsilon I & 0 \\ \star & \star & -\epsilon I \end{bmatrix} < 0, \qquad \hat\Theta_x = (\hat\Theta_{i,j,x})_{12\times 12}, $
where
$\hat\Theta_x$ has the same entries as $\Theta_x$ in Theorem 1, and:
$ \Gamma_1 = [\,M_x^TG_1^T\ \ M_x^TG_2^T\ \ 0\ \cdots\ 0\,]^T, \qquad \Gamma_2 = [\,-N_{1x}\ \ 0\ \ 0\ \ 0\ \ N_{2x}\ \ N_{3x}\ \ 0\ \cdots\ 0\,]. $
Proof. 
Replacing $D_x$, $A_x$, $B_x$ in LMI (15) with $D_x+\Delta D_x(t)$, $A_x+\Delta A_x(t)$, $B_x+\Delta B_x(t)$ yields:
$ \hat\Theta_x + (\Gamma_1F_x(t)\Gamma_2) + (\Gamma_1F_x(t)\Gamma_2)^T < 0, $
where Θ ^ , Γ 1 , and Γ 2 are given in (51).
Based on Lemma 4, a scalar $\epsilon>0$ exists such that:
$ \hat\Theta_x + \epsilon^{-1}\Gamma_1\Gamma_1^T + \epsilon\Gamma_2^T\Gamma_2 < 0, $
where $\Gamma_1$ and $\Gamma_2$ are as defined in (51).
We can deduce that Inequality (53) is equivalent to Inequality (51) by using the Schur complement lemma. The proof is completed. □

4. Simulation Studies

The usefulness of the obtained results is evaluated using three simulation examples.
Example 1.
Consider Model (6) with respect to both modes below:
$$D_1 = \begin{bmatrix} 2.2 & 0 \\ 0 & 1.8 \end{bmatrix}, \quad A_1 = \begin{bmatrix} 0.3 & 0.2 \\ 0.3 & 0.2 \end{bmatrix}, \quad B_1 = \begin{bmatrix} 0.2 & 0.3 \\ 0.4 & 0.2 \end{bmatrix}, \quad W_1 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad L_{11} = \begin{bmatrix} 0.22 & 0 \\ 0 & 0.22 \end{bmatrix}, \quad L_{21} = \begin{bmatrix} 0.18 & 0 \\ 0 & 0.18 \end{bmatrix},$$
$$D_2 = \begin{bmatrix} 1.4 & 0 \\ 0 & 2 \end{bmatrix}, \quad A_2 = \begin{bmatrix} 0.3 & 0.5 \\ 0.2 & 0.1 \end{bmatrix}, \quad B_2 = \begin{bmatrix} 0.3 & 0.2 \\ 0.3 & 0.5 \end{bmatrix}, \quad W_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad L_{12} = \begin{bmatrix} 0.20 & 0 \\ 0 & 0.20 \end{bmatrix}, \quad L_{22} = \begin{bmatrix} 0.12 & 0 \\ 0 & 0.12 \end{bmatrix}.$$
Moreover, we take,
$$Q = \begin{bmatrix} 7 & 0 \\ 0 & 7 \end{bmatrix}, \quad S = \begin{bmatrix} 0.1 & 0.1 \\ 0.1 & 0.5 \end{bmatrix}, \quad R = \begin{bmatrix} 12 & 0 \\ 0 & 12 \end{bmatrix}.$$
Let $\Pi = \begin{bmatrix} -3 & 3 \\ 2 & -2 \end{bmatrix}$ and $r(t) = 0.2 + 0.1\sin t$, which satisfies $r = 0.3$ and $\mu = 0.2$. Furthermore, choose $g_i(p_i(t)) = \tanh(p_i(t))$, $i = 1, 2$. As a result, we have $\Delta_1 = 0$ and $\Delta_2 = I$. In addition, from (32), we have $K_1 = 0$ and $K_2 = 0.5I$. Using MATLAB, we verify that LMIs (14) and (15) are feasible. The initial values are chosen as $p(0) = [0.3, 0.6]^T$. As such, the following simulation results are obtained by taking $u(t) = 0.01 e^{-t}\sin(0.02t)$, $t > 0$, with respect to the model in (6). Figure 1 shows the time responses of Model (6). The state transient response of Model (6) is displayed in Figure 2. Figure 3 depicts the Markovian switching signal.
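The sector data Δ₁ = 0, Δ₂ = I used in this example can be checked directly: tanh lies in the sector [0, 1], i.e., (g(p) − 0·p)(g(p) − 1·p) ≤ 0 for every p. A short sketch (the sampling grid is arbitrary):

```python
import numpy as np

p = np.linspace(-10, 10, 2001)
p = p[p != 0]              # the sector condition is trivial at p = 0
g = np.tanh(p)

# g lies in the sector [delta1, delta2] = [0, 1]:
# (g - delta1*p) * (g - delta2*p) <= 0 for every sampled p
sector = (g - 0.0 * p) * (g - 1.0 * p)
print(bool(np.all(sector <= 1e-12)))
```

With these sector bounds, the quantities used in the LMIs follow as K₁ = Δ₁Δ₂ = 0 and K₂ = (Δ₁ + Δ₂)/2 = 0.5I, matching the values quoted above.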
Example 2.
We consider Model (46) subject to x = 1 along with the following parameters:
$$A = \begin{bmatrix} 0.0373 & 0.4852 & 0.3351 & 0.2336 \\ 1.6033 & 0.5988 & 0.3224 & 1.2352 \\ 0.3394 & 0.0860 & 0.3894 & 0.5785 \\ 0.1311 & 0.3253 & 0.9534 & 0.5015 \end{bmatrix}, \quad B = \begin{bmatrix} 0.8674 & 1.2405 & 0.5325 & 0.0220 \\ 0.0474 & 0.9164 & 0.0360 & 0.9816 \\ 1.8495 & 2.6117 & 0.3788 & 0.8428 \\ 2.0413 & 0.5179 & 1.1734 & 0.2775 \end{bmatrix},$$
$$D = \mathrm{diag}\{1.2769, 0.6231, 0.9230, 0.4480\}, \quad W = \mathrm{diag}\{1, 1, 1, 1\}.$$
This simulation study facilitates comparison with the criteria in [11,12,13]. The neuron activation function is chosen as $g_i(p_i(t)) = 0.2\tanh(p_i(t))$, $i = 1, 2, 3, 4$. As a result, we have $\Delta_1 = 0$ and $\Delta_2 = \mathrm{diag}\{0.1137, 0.1279, 0.7994, 0.2368\}$. In addition, from (32), we have $K_1 = 0$ and $K_2 = \mathrm{diag}\{0.1137/2, 0.1279/2, 0.7994/2, 0.2368/2\}$. Using MATLAB, we verify that LMI (47) is feasible. For various settings of $\mu$, the maximum permissible delay limit $r$ is listed in Table 1. It is evident that the results obtained in this paper are less conservative than those in [11,12,13]. For simulation purposes, we used $r(t) = 2.6272 + 0.2\sin t$, which satisfies $r = 2.8272$. The following simulation results are obtained for the initial condition $p(0) = [1.5, 1, 1.6, 0.5]^T$. The time responses of the state variables $p_1(t)$, $p_2(t)$, $p_3(t)$, $p_4(t)$ of Model (46) are depicted in Figure 4. According to Corollary 4, the GNN (46) is globally asymptotically stable.
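The full LMI (47) requires a semidefinite-programming solver, but a delay-free sanity check of the stability conclusion is straightforward with the data above: dropping the delayed terms leaves the linear part $\dot{p}(t) = -Dp(t)$, and the Lyapunov equation $(-D)^T P + P(-D) = -I$ is solved in closed form by $P = \mathrm{diag}(1/(2d_i)) \succ 0$. This sketch checks only that special case, not Corollary 4 itself:

```python
import numpy as np

# Self-feedback matrix D from Example 2 (diagonal, positive)
D = np.diag([1.2769, 0.6231, 0.9230, 0.4480])
A = -D                                   # linear part of the GNN, delays dropped
P = np.diag(1.0 / (2.0 * np.diag(D)))    # candidate Lyapunov matrix

# The Lyapunov equation A^T P + P A = -I must hold, with P positive definite
residual = A.T @ P + P @ A + np.eye(4)
print(bool(np.allclose(residual, 0.0)))
print(bool(np.all(np.linalg.eigvalsh(P) > 0.0)))
```

The existence of such a P confirms that the linear part is Hurwitz; the LMI conditions of Corollary 4 extend this kind of certificate to the delayed, nonlinear model.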
Remark 7.
In regard to computational complexity, the main governing factor is the number of decision variables in the LMIs. The use of delay-augmented LKFs [32] and free-matrix based methods [33] leads to an increase in the number of decision variables; the computational load and complexity therefore grow with the number of delay subintervals. To address this problem, we chose suitable LKFs and exploited new inequalities with tighter bounds, in order to derive simplified LMI based sufficient conditions and yield less conservative results than those in [11,12,13]. In addition to being less conservative, our results require a lower computational load, because we did not adopt any free-matrix based or delay-decomposition methods in our theoretical analysis. In [13], enhanced stability criteria for the GNN model were derived with free-matrix based methods coupled with augmented LKFs, which resulted in many decision variables, i.e., 82n² + 5n. Comparatively, our results require only 18n² + 5n decision variables. Therefore, our results are less conservative with a smaller computational load.
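The counts quoted in this remark are easy to tabulate: for a network of size n, the method of [13] uses 82n² + 5n scalar decision variables against 18n² + 5n here, so the gap grows quadratically in n. A one-line helper (the function name is ours):

```python
def decision_vars(n: int, quad: int, lin: int = 5) -> int:
    """Number of scalar LMI decision variables of the form quad*n^2 + lin*n."""
    return quad * n * n + lin * n

# n = 4 neurons, as in Example 2
print(decision_vars(4, 82), decision_vars(4, 18))  # 1332 vs 308
```

Even at n = 4, the proposed conditions involve roughly a quarter of the decision variables of the free-matrix based criteria, and the ratio approaches 18/82 as n grows.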
Example 3.
Consider Model (48) with respect to both modes below.
$$D_1 = \begin{bmatrix} 2.2 & 0 \\ 0 & 1.8 \end{bmatrix}, \quad A_1 = \begin{bmatrix} 0.3 & 0.2 \\ 0.3 & 0.2 \end{bmatrix}, \quad B_1 = \begin{bmatrix} 0.2 & 0.3 \\ 0.4 & 0.2 \end{bmatrix}, \quad W_1 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad L_{11} = \begin{bmatrix} 0.22 & 0 \\ 0 & 0.22 \end{bmatrix}, \quad L_{21} = \begin{bmatrix} 0.18 & 0 \\ 0 & 0.18 \end{bmatrix},$$
$$M_1 = \begin{bmatrix} 0.2 & 0 \\ 0 & 0.2 \end{bmatrix}, \quad N_{11} = \begin{bmatrix} 0.3 & 0 \\ 0 & 0.3 \end{bmatrix}, \quad N_{21} = \begin{bmatrix} 0.2 & 0 \\ 0 & 0.2 \end{bmatrix}, \quad N_{31} = \begin{bmatrix} 0.1 & 0 \\ 0 & 0.1 \end{bmatrix},$$
$$D_2 = \begin{bmatrix} 1.4 & 0 \\ 0 & 2 \end{bmatrix}, \quad A_2 = \begin{bmatrix} 0.3 & 0.5 \\ 0.2 & 0.1 \end{bmatrix}, \quad B_2 = \begin{bmatrix} 0.3 & 0.2 \\ 0.3 & 0.5 \end{bmatrix}, \quad W_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad L_{12} = \begin{bmatrix} 0.20 & 0 \\ 0 & 0.20 \end{bmatrix}, \quad L_{22} = \begin{bmatrix} 0.12 & 0 \\ 0 & 0.12 \end{bmatrix},$$
$$M_2 = \begin{bmatrix} 0.2 & 0 \\ 0 & 0.2 \end{bmatrix}, \quad N_{12} = \begin{bmatrix} 0.3 & 0 \\ 0 & 0.3 \end{bmatrix}, \quad N_{22} = \begin{bmatrix} 0.2 & 0 \\ 0 & 0.2 \end{bmatrix}, \quad N_{32} = \begin{bmatrix} 0.1 & 0 \\ 0 & 0.1 \end{bmatrix}.$$
Moreover, we take,
$$Q = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad S = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}, \quad R = \begin{bmatrix} 3 & 0 \\ 0 & 3 \end{bmatrix}.$$
Let $\Pi = \begin{bmatrix} -3 & 3 \\ 2 & -2 \end{bmatrix}$ and $r(t) = 0.2 + 0.1\sin t$, which satisfies $r = 0.3$ and $\mu = 0.2$. Furthermore, choose $g_i(p_i(t)) = \tanh(p_i(t))$, $i = 1, 2$, so that $\Delta_1 = 0$ and $\Delta_2 = I$. In addition, from (32), we have $K_1 = 0$ and $K_2 = 0.5I$. Using MATLAB, we verify that LMIs (50) and (51) are feasible. Under the initial values $p(0) = [0.3, 0.6]^T$, the following results are obtained by taking $u(t) = 0.01 e^{-t}\sin(0.02t)$, $t > 0$. For Model (48), the time responses of the variables $p_1(t)$, $p_2(t)$ are given in Figure 5. Figure 6 describes the transient responses of the variables $p_1(t)$, $p_2(t)$. The Markovian switching signals are depicted in Figure 7.
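The switching signal in Figure 7 is a two-state continuous-time Markov chain generated by Π. Reading the generator as Π = [−3, 3; 2, −2] (diagonal entries of a generator are necessarily negative), the chain leaves mode 1 at rate 3 and mode 2 at rate 2, so its stationary distribution is (0.4, 0.6). A rough simulation by exponential holding times (seed and horizon are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
rates = {0: 3.0, 1: 2.0}   # exit rates from the generator Pi = [[-3, 3], [2, -2]]

t, state, T = 0.0, 0, 50_000.0
occupancy = np.zeros(2)
while t < T:
    dwell = rng.exponential(1.0 / rates[state])  # holding time in current mode
    occupancy[state] += min(dwell, T - t)        # clip the last sojourn at T
    t += dwell
    state = 1 - state                            # two states: jump to the other

pi_hat = occupancy / occupancy.sum()
print(np.round(pi_hat, 2))   # empirical occupancy, close to (0.4, 0.6)
```

The empirical mode occupancies converge to the stationary distribution, which is why both modes contribute persistently to the switched trajectories shown in Figures 5 and 6.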

5. Conclusions

In this article, we analyzed the robust dissipativity of USGNN models incorporating MJPs and time-varying delays. Our analyses covered a more general form of USGNN models, which took both stochastic effects and parameter uncertainties into consideration. To facilitate our analyses, we formulated appropriate LKFs along with effective integral inequalities and sector bound conditions. As such, we derived several simplified LMI based sufficient conditions. The corresponding feasible solutions were validated using MATLAB. We also ascertained the usefulness of our results with three simulation examples.
For further research, we will analyze other types of stochastic NN models with the proposed method. The stability and synchronization analyses of stochastic fuzzy NN models, coupled stochastic NN models, fractional-order NN models, and memristor based stochastic NN models can be conducted. Applications of the resulting NN models to various control and engineering problems will also be examined.

Author Contributions

Conceptualization, G.R.; data curation, G.R. and R.S. (Ramalingam Sriraman); formal analysis, G.R. and R.S. (Ramalingam Sriraman); funding acquisition, U.H.; investigation, G.R.; methodology, G.R.; software, P.K., P.C., and R.S. (Rajendran Samidurai); supervision, C.P.L.; validation, G.R.; writing, original draft, G.R.; writing, review and editing, G.R. All authors read and agreed to the published version of the manuscript.

Funding

This research is financially supported by King Mongkut’s University of Technology Thonburi (KMUTT) and the Thailand Research Grant Fund (RSA6280004).

Acknowledgments

The authors gratefully acknowledge the support from King Mongkut’s University of Technology Thonburi for funding the Postdoctoral Fellowship of Pramet Kaewmesri and the Thailand Research Grant Fund (RSA6280004). The authors are very grateful to the Department of Mathematics, Faculty of Science, King Mongkut’s University of Technology Thonburi (KMUTT) for all the support.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Cao, J. Global asymptotic stability of neural networks with transmission delays. Int. J. Syst. Sci. 2000, 31, 1313–1316.
2. Arik, S. An analysis of global asymptotic stability of delayed cellular neural networks. IEEE Trans. Neural Netw. 2002, 13, 1239–1242.
3. Zhao, Z.; Song, Q.; He, S. Passivity analysis of stochastic neural networks with time-varying delays and leakage delay. Neurocomputing 2014, 125, 22–27.
4. Wei, H.; Li, R.; Chen, C. State estimation for memristor-based neural networks with time-varying delays. Int. J. Mach. Learn. Cybern. 2015, 6, 213–225.
5. Huang, H.; Huang, T.; Cao, Y. Reduced-order filtering of delayed static neural networks with Markovian jumping parameters. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 5606–5618.
6. Arunkumar, A.; Sakthivel, R.; Mathiyalagan, K.; Park, J.H. Robust stochastic stability of discrete-time fuzzy Markovian jump neural networks. ISA Trans. 2014, 53, 1006–1014.
7. Chen, Y.; Wang, Z.; Liu, Y.; Alsaadi, F.E. Stochastic stability for distributed delay neural networks via augmented Lyapunov–Krasovskii functionals. Appl. Math. Comput. 2018, 338, 869–881.
8. Chen, G.; Xia, J.; Zhuang, G. Delay-dependent stability and dissipativity analysis of generalized neural networks with Markovian jump parameters and two delay components. J. Frankl. Inst. 2016, 353, 2137–2158.
9. Samidurai, R.; Manivannan, R.; Ahn, C.K.; Karimi, H.R. New criteria for stability of generalized neural networks including Markov jump parameters and additive time delays. IEEE Trans. Syst. Man Cybern. Syst. 2018, 48, 485–499.
10. Jiao, S.; Shen, H.; Wei, Y.; Huang, X.; Wang, Z. Further results on dissipativity and stability analysis of Markov jump generalized neural networks with time-varying interval delays. Appl. Math. Comput. 2018, 336, 338–350.
11. Zhang, C.K.; He, Y.; Jiang, L.; Wu, Q.H.; Wu, M. Delay-dependent stability criteria for generalized neural networks with two delay components. IEEE Trans. Neural Netw. Learn. Syst. 2014, 25, 1263–1276.
12. Zeng, H.B.; He, Y.; Wu, M.; Xiao, S. Stability analysis of generalized neural networks with time-varying delays via a new integral inequality. Neurocomputing 2015, 161, 148–154.
13. Wang, B.; Yan, J.; Cheng, J.; Zhong, S. New criteria of stability analysis for generalized neural networks subject to time-varying delayed signals. Appl. Math. Comput. 2017, 314, 322–333.
14. Sriraman, R.; Cao, Y.; Samidurai, R. Global asymptotic stability of stochastic complex-valued neural networks with probabilistic time-varying delays. Math. Comput. Simulat. 2020, 171, 103–118.
15. Wang, C.; Shen, Y. Delay-dependent non-fragile robust stabilization and H∞ control of uncertain stochastic systems with time-varying delay and nonlinearity. J. Frankl. Inst. 2011, 348, 2174–2190.
16. Samidurai, R.; Sriraman, R. Robust dissipativity analysis for uncertain neural networks with additive time-varying delays and general activation functions. Math. Comput. Simulat. 2019, 155, 201–216.
17. Boukas, E.K.; Liu, Z.K.; Liu, G.X. Delay-dependent robust stability and H∞ control of jump linear systems with time-delay. Int. J. Control 2001, 74, 329–340.
18. Cao, Y.Y.; Lam, J.; Hu, L.S. Delay-dependent stochastic stability and H∞ analysis for time-delay systems with Markovian jumping parameters. J. Frankl. Inst. 2003, 340, 423–434.
19. Zhu, Q.; Cao, J. Exponential stability of stochastic neural networks with both Markovian jump parameters and mixed time delays. IEEE Trans. Syst. Man Cybern. Part B 2011, 41, 341–353.
20. Tan, H.; Hua, M.; Chen, J.; Fei, J. Stability analysis of stochastic Markovian switching static neural networks with asynchronous mode-dependent delays. Neurocomputing 2015, 151, 864–872.
21. Zhu, S.; Shen, M.; Lim, C.C. Robust input-to-state stability of neural networks with Markovian switching in presence of random disturbances or time delays. Neurocomputing 2017, 249, 245–252.
22. Blythe, S.; Mao, X.; Liao, X. Stability of stochastic delay neural networks. J. Frankl. Inst. 2001, 338, 481–495.
23. Chen, Y.; Zheng, W. Stability analysis of time-delay neural networks subject to stochastic perturbations. IEEE Trans. Cybern. 2013, 43, 2122–2134.
24. Yang, R.; Gao, H.; Shi, P. Novel robust stability criteria for stochastic Hopfield neural networks with time delays. IEEE Trans. Syst. Man Cybern. Part B 2009, 39, 467–474.
25. Zhu, S.; Shen, Y. Passivity analysis of stochastic delayed neural networks with Markovian switching. Neurocomputing 2011, 74, 1754–1761.
26. Cao, Y.; Samidurai, R.; Sriraman, R. Stability and dissipativity analysis for neutral type stochastic Markovian jump static neural networks with time delays. J. Artif. Intell. Soft Comput. Res. 2019, 9, 189–204.
27. Liu, G.; Yang, S.X.; Chai, Y.; Feng, W.; Fu, W. Robust stability criteria for uncertain stochastic neural networks of neutral-type with interval time-varying delays. Neural Comput. Appl. 2013, 22, 349–359.
28. Pradeep, C.; Chandrasekar, A.; Murugesu, M.; Rakkiyappan, R. Robust stability analysis of stochastic neural networks with Markovian jumping parameters and probabilistic time-varying delays. Complexity 2014, 21, 59–72.
29. Sakthivel, R.; Arunkumar, A.; Mathiyalagan, K.; Marshal Anthoni, S. Robust passivity analysis of fuzzy Cohen–Grossberg BAM neural networks with time-varying delays. Appl. Math. Comput. 2011, 275, 213–228.
30. Kwon, O.M.; Lee, S.M.; Park, J.H. Improved delay-dependent exponential stability for uncertain stochastic neural networks with time-varying delays. Phys. Lett. A 2010, 374, 1232–1241.
31. Muthukumar, P.; Subramanian, K.; Lakshmanan, S. Robust finite time stabilization analysis for uncertain neural networks with leakage delay and probabilistic time-varying delays. J. Frankl. Inst. 2016, 353, 4091–4113.
32. Lee, C.H.; Lee, S.H.; Park, M.J.; Kwon, O.M. Stability and stabilization criteria for sampled-data control system via augmented Lyapunov–Krasovskii functionals. Int. J. Control. Autom. Syst. 2018, 16, 2290–2302.
33. Park, M.J.; Lee, S.H.; Kwon, O.M.; Ryu, J.H. Enhanced stability criteria of neural networks with time-varying delays via a generalized free-weighting matrix integral inequality. J. Frankl. Inst. 2018, 355, 6531–6548.
34. Willems, J.C. Dissipative dynamical systems part I: General theory. Arch. Ration. Mech. Anal. 1972, 45, 321–351.
35. Hill, D.L.; Moylan, P.J. Dissipative dynamical systems: Basic input-output and state properties. J. Frankl. Inst. 1980, 309, 327–357.
36. Wu, Z.G.; Park, J.H.; Su, H.; Chu, J. Robust dissipativity analysis of neural networks with time-varying delay and randomly occurring uncertainties. Nonlinear Dyn. 2012, 69, 1323–1332.
37. Liao, X.; Wang, J. Global dissipativity of continuous-time recurrent neural networks with time delay. Phys. Rev. E 2003, 68, 016118.
38. Song, Q.; Zhao, Z. Global dissipativity of neural networks with both variable and unbounded delays. Chaos Solitons Fract. 2005, 25, 393–401.
39. Feng, Z.; Lam, J. Stability and dissipativity analysis of distributed delay cellular neural networks. IEEE Trans. Neural Netw. 2011, 22, 976–981.
40. Raja, R.; Raja, U.K.; Samidurai, R.; Leelamani, A. Dissipativity of discrete-time BAM stochastic neural networks with Markovian switching and impulses. J. Frankl. Inst. 2013, 350, 3217–3247.
41. Zeng, H.B.; Park, J.H.; Zhang, C.F.; Wang, W. Stability and dissipativity analysis of static neural networks with interval time-varying delay. J. Frankl. Inst. 2015, 352, 1284–1295.
42. Cao, J.; Yuan, K.; Ho, D.W.C.; Lam, J. Global point dissipativity of neural networks with mixed time-varying delays. Chaos 2006, 16, 013105.
43. Zhang, L.; He, L.; Song, Y. New results on stability analysis of delayed systems derived from extended Wirtinger's integral inequality. Neurocomputing 2018, 283, 98–106.
44. Samidurai, R.; Sriraman, R. Non-fragile sampled-data stabilization analysis for linear systems with probabilistic time-varying delays. J. Frankl. Inst. 2019, 356, 4335–4357.
45. Gu, K.; Kharitonov, V.L.; Chen, J. Stability of Time-Delay Systems; Birkhäuser: Boston, MA, USA, 2003.
46. Zhu, Q.; Cao, J. Robust exponential stability of Markovian jump impulsive stochastic Cohen–Grossberg neural networks with mixed time delays. IEEE Trans. Neural Netw. 2010, 21, 1314–1325.
Figure 1. Time responses of the state variables p 1 ( t ) , p 2 ( t ) with respect to Model (6) in Example 1.
Figure 2. Transient responses of the state variables p 1 ( t ) , p 2 ( t ) with respect to Model (6) in Example 1.
Figure 3. The Markovian switching signal e ( t ) in Example 1.
Figure 4. Time responses of the state variables p 1 ( t ) , p 2 ( t ) , p 3 ( t ) , p 4 ( t ) with respect to Model (46) in Example 2.
Figure 5. Time responses of the state variables p 1 ( t ) , p 2 ( t ) with respect to Model (48) in Example 3.
Figure 6. Transient responses of the state variables p 1 ( t ) , p 2 ( t ) with respect to Model (48) in Example 3.
Figure 7. The Markovian switching signal e ( t ) in Example 3.
Table 1. The maximum permissible delay limit r with different μ settings.
Methods        r (μ = 0.1)    r (μ = 0.5)    r (μ = 0.9)
[11]           3.8739         2.7821         2.3279
[12]           4.1903         3.0779         2.8268
[13]           4.1919         3.0790         2.8271
Corollary 4    4.1920         3.0791         2.8272

MDPI and ACS Style

Humphries, U.; Rajchakit, G.; Sriraman, R.; Kaewmesri, P.; Chanthorn, P.; Lim, C.P.; Samidurai, R. An Extended Analysis on Robust Dissipativity of Uncertain Stochastic Generalized Neural Networks with Markovian Jumping Parameters. Symmetry 2020, 12, 1035. https://0-doi-org.brum.beds.ac.uk/10.3390/sym12061035
