Article

LMI-Based Results on Robust Exponential Passivity of Uncertain Neutral-Type Neural Networks with Mixed Interval Time-Varying Delays via the Reciprocally Convex Combination Technique

Nayika Samorn, Narongsak Yotha, Pantiwa Srisilp and Kanit Mukdasai
1 Department of Mathematics, Faculty of Science, Khon Kaen University, Khon Kaen 40002, Thailand
2 Department of Applied Mathematics and Statistics, Faculty of Science and Liberal Arts, Rajamangala University of Technology Isan, Nakhon Ratchasima 30000, Thailand
3 Rail System Institute, Rajamangala University of Technology Isan, Nakhon Ratchasima 30000, Thailand
* Author to whom correspondence should be addressed.
Submission received: 30 April 2021 / Revised: 6 June 2021 / Accepted: 7 June 2021 / Published: 10 June 2021

Abstract: The issue of robust exponential passivity analysis for uncertain neutral-type neural networks with mixed interval time-varying delays is addressed in this work. The lower bounds of the delays are allowed to be either positive or zero. By adopting a combination of a model transformation, various integral inequalities, the reciprocally convex combination technique, and a suitable Lyapunov–Krasovskii functional, a new robust exponential passivity criterion is derived and formulated in the form of linear matrix inequalities (LMIs). Moreover, a new exponential passivity criterion is also established for systems without uncertainty. Four numerical examples indicate that our results improve upon previously reported ones.

1. Introduction

Delayed dynamical systems have been studied rather extensively because they can be exploited as models of transportation systems, communication networks, teleoperation systems, physical systems, and biological systems. Time delays regularly appear in practical systems and are frequently a cause of instability and poor performance. Broadly speaking, stability criteria for systems with time delays are divided into two categories: delay-independent and delay-dependent. Delay-independent stability criteria tend to be more conservative, particularly for small delays, since such criteria give no information on the size of the delay. On the other hand, delay-dependent stability criteria take the size of the delay into account and generally provide a maximal admissible delay. Meanwhile, time delays can also be separated into two types, information processing delay and information communication delay, for which a number of delayed scaled consensus problems have been presented [1]. Lately, practical engineering systems have been examined in which the lower bound of the delay is not restricted to zero; such a delay is called an interval time-varying delay. Several delay-interval-dependent criteria are reported in [2,3,4,5,6,7,8]. Moreover, the neutral time delay is a type of delay that is currently drawing attention. This stems from the fact that the delay appears in the system both in the state variables and in their derivatives [9,10], which can be observed in various fields such as mechanics, automatic control, distributed networks, heat exchangers, and robots in contact with rigid environments [11,12], etc.
Nowadays, neural networks are widely discussed because they can be applied in many fields, especially in engineering and applied science, including signal processing, pattern recognition, industrial automation, image processing, and parallel computation [13,14,15,16,17,18], etc. Therefore, many researchers have been interested in studying neural networks with time delays [5,19,20,21,22]. Furthermore, in neural networks it may happen that past state derivatives enter the system dynamics. As a result, it is more natural to consider neural networks whose dynamics involve past state derivatives. Neural networks of this type are called neutral-type neural networks (NTNNs), which have proved useful in a variety of applications, including population ecology, propagation and diffusion models, and so on. Meanwhile, as is well known, in biochemistry experiments on neural network dynamics, neural information may transfer through chemical reactivity, which results in a neutral-type process. Recently, research on such systems has been extensive, and there are many findings in this area [8,23,24,25,26].
Exponential stability is also important in stability analysis because it identifies the rate of convergence of the system toward its equilibrium points; accordingly, the exponential stability of various systems has received a lot of attention from researchers (see, for example, [19,21,22,23,25,26]). Meanwhile, passivity theory has played an important role in establishing the stability of time-delay systems [27,28]. It has practical uses in signal processing [29], complexity [30], chaos and synchronization control [31], and fuzzy control [32]. The main idea of the theory is that the stability of the system can be maintained through its passivity properties, so general conclusions on stability can be drawn using only input–output characteristics. Consequently, quite a few researchers have studied this issue (e.g., [8,20,21,22,25,28,29,30,31,32]). The properties of exponential passivity for dynamical systems were studied in [33,34,35], where it is remarked that exponential passivity implies passivity, but the converse does not necessarily hold. Note that most previous studies have mainly focused on the stability and passivity analysis of neutral-type neural networks [8,23,24,25,26] and the exponential passivity analysis of neural networks [21,22]. As far as we can tell, the robust exponential passivity of uncertain neutral-type neural networks with mixed interval time-varying delays has never been presented.
In this paper, a new robust exponential passivity criterion is designed for uncertain NTNNs with mixed interval time-varying delays, including discrete, neutral, and distributed delays. One of the aims of the criterion is to obtain the maximum upper bounds of the time delays or the maximum value of the rate of convergence. Thus, we concentrate on interval time-varying delays in which the lower bounds are allowed to be either positive or zero. A model transformation, various integral inequalities, and the reciprocally convex combination technique are adopted together with a suitable Lyapunov–Krasovskii functional when estimating its derivative, in order to reduce the conservatism of the conditions for the uncertain NTNNs. A new robust exponential passivity criterion is derived and formulated in the form of LMIs. Moreover, a new exponential passivity criterion for NTNNs without uncertainty is also established. The main contributions of this work are highlighted as follows: (i) the proposed criterion differs from those for the NTNNs reported in [8,23,24,25,26]; (ii) the method suggested here also covers general neural networks with distributed time-varying delays [22] and general neural networks [21] as special cases. Finally, we present numerical results showing that our results improve upon those previously reported.
Notations. 
$\mathbb{R}^n$ and $\mathbb{R}^{n \times r}$ denote the $n$-dimensional Euclidean space and the set of all $n \times r$ real matrices, respectively. $B > 0$ ($B \geq 0$) means that the symmetric matrix $B$ is positive (semi-positive) definite; $B < 0$ ($B \leq 0$) means that the symmetric matrix $B$ is negative (semi-negative) definite. $I$ is the identity matrix with appropriate dimensions. $*$ represents the elements below the main diagonal of a symmetric matrix. $\dot{z}(t)$ denotes the upper right-hand derivative of $z$ at $t$. $z_t = \{ z(t+\theta) : \theta \in [-\max\{\delta_2, \tau_2, \eta_2\}, 0] \}$. $\dot{V}(t, \phi) = \limsup_{\theta \to 0^{+}} \{ V(t+\theta, z_{t+\theta}(t, \phi)) - V(t, \phi) \}/\theta$, where $\phi(t)$ is the initial function that is continuously differentiable on $C([-\max\{\delta_2, \tau_2, \eta_2\}, 0], \mathbb{R}^n)$.

2. Preliminaries

First, we consider the uncertain NTNNs of the form
$$\begin{aligned}
\dot{z}(t) &= -(A + \Delta A(t)) z(t) + (W_0 + \Delta W_0(t)) g(z(t)) + (W_1 + \Delta W_1(t)) g(z(t-\delta(t))) \\
&\quad + (W_2 + \Delta W_2(t)) \dot{z}(t-\tau(t)) + (W_3 + \Delta W_3(t)) \int_{t-\eta(t)}^{t} g(z(s))\,ds + u(t), \quad t \geq 0, \\
y(t) &= C_0 g(z(t)) + C_1 g(z(t-\delta(t))) + C_2 \int_{t-\eta(t)}^{t} g(z(s))\,ds + C_3 u(t), \\
z(t) &= \phi(t), \quad t \in [-\max\{\tau_2, \delta_2, \eta_2\}, 0],
\end{aligned} \tag{1}$$
where $z(t) = [z_1(t), z_2(t), \ldots, z_n(t)]^T \in \mathbb{R}^n$ is the neuron state vector, $y(t)$ is the output vector of the neural network, $A = \mathrm{diag}\{a_i\}$ is a diagonal matrix with $a_i > 0$, $i = 1, 2, \ldots, n$, $W_0$ is the connection weight matrix, $W_1$, $W_2$, and $W_3$ are the delayed connection weight matrices, $C_0$, $C_1$, $C_2$, and $C_3$ are given real matrices, $u(t) \in \mathbb{R}^n$ is an external input vector to the neurons, and the continuous function $\phi(t)$ is the initial condition.
The delays τ ( t ) , δ ( t ) and η ( t ) satisfy
$$0 \leq \tau_1 \leq \tau(t) \leq \tau_2, \quad \dot{\tau}(t) \leq \tau_d, \tag{2}$$
$$0 \leq \delta_1 \leq \delta(t) \leq \delta_2, \quad \dot{\delta}(t) \leq \delta_d, \tag{3}$$
$$0 \leq \eta_1 \leq \eta(t) \leq \eta_2, \tag{4}$$
where τ 1 , τ 2 , δ 1 , δ 2 , η 1 , η 2 , τ d and δ d are non-negative real constants.
Assumption 1.
The activation function $g(z(t)) = [g_1(z_1(t)), g_2(z_2(t)), \ldots, g_n(z_n(t))]^T \in \mathbb{R}^n$ is assumed to satisfy the following condition:
$$0 \leq \frac{g_i(\zeta_1) - g_i(\zeta_2)}{\zeta_1 - \zeta_2} \leq l_i, \quad g(0) = 0, \quad \forall \zeta_1, \zeta_2 \in \mathbb{R}, \ \zeta_1 \neq \zeta_2, \ i = 1, 2, \ldots, n,$$
where $l_i$, $i = 1, 2, \ldots, n$, are positive real constants, and we denote by $L = \mathrm{diag}\{l_1, l_2, \ldots, l_n\}$ the corresponding diagonal matrix.
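For instance, the activation function $g(z) = \tanh(z)$ used later in Example 4 satisfies Assumption 1 with $l_i = 1$. A minimal numerical check of the sector condition (our own sketch, not part of the original derivation) is:

```python
import numpy as np

# Check that g(z) = tanh(z) satisfies Assumption 1 with l_i = 1, i.e.
# 0 <= (g(z1) - g(z2)) / (z1 - z2) <= 1 for all z1 != z2, and g(0) = 0.
z1, z2 = np.meshgrid(np.linspace(-5.0, 5.0, 201), np.linspace(-5.0, 5.0, 201))
mask = z1 != z2
ratio = (np.tanh(z1[mask]) - np.tanh(z2[mask])) / (z1[mask] - z2[mask])
print(np.tanh(0.0) == 0.0, ratio.min() >= 0.0, ratio.max() <= 1.0)  # True True True
```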
The uncertainty matrices $\Delta A(t)$, $\Delta W_0(t)$, $\Delta W_1(t)$, $\Delta W_2(t)$, and $\Delta W_3(t)$ are assumed to be of the form
$$[\,\Delta A(t) \;\; \Delta W_0(t) \;\; \Delta W_1(t) \;\; \Delta W_2(t) \;\; \Delta W_3(t)\,] = E\, \Delta(t)\, [\,G_a \;\; G_0 \;\; G_1 \;\; G_2 \;\; G_3\,],$$
where $E$, $G_a$, and $G_i$, $i = 0, 1, 2, 3$, are known real constant matrices; the uncertain matrix $\Delta(t)$, given by
$$\Delta(t) = F(t)\,[I - J F(t)]^{-1},$$
is said to be admissible, where $J$ is an unknown matrix satisfying
$$I - J J^T > 0,$$
and the uncertainty matrix $F(t)$ satisfies
$$F^T(t) F(t) \leq I.$$
Assumption 2.
All eigenvalues of the matrix W 2 + Δ W 2 ( t ) are inside the unit circle.
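Assumption 2 is a Schur-type condition on the neutral coefficient matrix and can be verified directly once $W_2$ and the admissible uncertainty are fixed. A minimal sketch for the nominal $W_2$ of Example 4 (ignoring the perturbation $\Delta W_2(t)$, which would additionally have to respect the uncertainty structure above):

```python
import numpy as np

# Spectral radius test for Assumption 2 with the nominal W2 from Example 4.
W2 = np.array([[0.1, 0.0],
               [0.0, 0.1]])
spectral_radius = max(abs(np.linalg.eigvals(W2)))
print(spectral_radius, spectral_radius < 1.0)  # 0.1 True: eigenvalues lie inside the unit circle
```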
Then, the following definition and lemmas are used to prove our main results.
Definition 1
([25]). The system (1) is said to be robustly exponentially passive from input $u(t)$ to output $y(t)$ if there exist an exponential Lyapunov function $V(z_t)$ and a constant $\rho > 0$ such that, for all $u(t)$, all initial conditions $z(t_0)$, and all $t \geq t_0$, the following inequality holds:
$$\dot{V}(z_t) + \rho V(z_t) \leq 2 y^T(t) u(t), \quad t \geq t_0,$$
where $\dot{V}(z_t)$ denotes the total derivative of $V(z_t)$ along the state trajectories $z(t)$ of the system (1).
Lemma 1
(Jensen's inequality [14]). Let $Q \in \mathbb{R}^{n \times n}$, $Q = Q^T > 0$, be any constant matrix, let $\delta_2$ be a positive real constant, and let $\omega : [-\delta_2, 0] \to \mathbb{R}^n$ be a vector-valued function. Then,
$$\delta_2 \int_{t-\delta_2}^{t} \omega^T(s)\, Q\, \omega(s)\, ds \geq \left( \int_{t-\delta_2}^{t} \omega(s)\, ds \right)^{T} Q \left( \int_{t-\delta_2}^{t} \omega(s)\, ds \right).$$
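A quick numerical illustration of Lemma 1 (a scalar sketch of our own, not part of the paper) makes the direction of the bound concrete:

```python
import numpy as np

# Verify delta2 * int w(s)^T Q w(s) ds >= (int w(s) ds)^T Q (int w(s) ds)
# for an arbitrary scalar example on [t - delta2, t] with t = 0.
delta2 = 2.0
Q = 1.5                                    # any Q = Q^T > 0 (scalar case)
s = np.linspace(-delta2, 0.0, 20001)
w = np.sin(3.0 * s) + 0.5                  # an arbitrary integrable function
lhs = delta2 * np.trapz(w * Q * w, s)
rhs = np.trapz(w, s) * Q * np.trapz(w, s)
print(lhs >= rhs)  # True
```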
Lemma 2
([36]). Let $f_1, f_2, \ldots, f_N : \mathbb{R}^n \to \mathbb{R}$ have positive values in an open subset $D$ of $\mathbb{R}^n$. Then, the reciprocally convex combination of $f_i$ over $D$ satisfies
$$\min_{\{\alpha_i \,\mid\, \alpha_i > 0,\ \sum_i \alpha_i = 1\}} \sum_i \frac{1}{\alpha_i} f_i(t) = \sum_i f_i(t) + \max_{g_{i,j}(t)} \sum_{i \neq j} g_{i,j}(t),$$
subject to
$$g_{i,j} : \mathbb{R}^n \to \mathbb{R}, \qquad g_{i,j}(t) = g_{j,i}(t), \qquad \begin{bmatrix} f_i(t) & g_{i,j}(t) \\ g_{j,i}(t) & f_j(t) \end{bmatrix} \geq 0.$$
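To see what the lemma yields in the simplest case $N = 2$ (the case used below when estimating $\dot{V}_4(z_t)$), the following short verification (ours, not taken from [36]) applies: for any $\alpha \in (0, 1)$,

$$\frac{1}{\alpha} f_1 + \frac{1}{1-\alpha} f_2 - \bigl( f_1 + f_2 + 2 g_{1,2} \bigr) = \frac{1-\alpha}{\alpha} f_1 + \frac{\alpha}{1-\alpha} f_2 - 2 g_{1,2} \geq 2\sqrt{f_1 f_2} - 2 g_{1,2} \geq 0,$$

where the first inequality is the arithmetic-geometric mean (AM-GM) inequality and the second follows from $\begin{bmatrix} f_1 & g_{1,2} \\ g_{1,2} & f_2 \end{bmatrix} \geq 0$, which gives $g_{1,2} \leq \sqrt{f_1 f_2}$.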
Lemma 3
([6]). For $Q \in \mathbb{R}^{n \times n}$, $Q = Q^T > 0$, and any continuously differentiable function $z : [\sigma_1, \sigma_2] \to \mathbb{R}^n$, the following inequalities hold:
$$\begin{aligned}
(\sigma_2-\sigma_1)\int_{\sigma_1}^{\sigma_2} \dot{z}^T(s) Q \dot{z}(s)\,ds &\geq \Omega_1^T Q \Omega_1 + 3\Omega_2^T Q \Omega_2 + 5\Omega_3^T Q \Omega_3 + 7\Omega_4^T Q \Omega_4, \\
\int_{\sigma_1}^{\sigma_2}\!\int_{\theta}^{\sigma_2} \dot{z}^T(s) Q \dot{z}(s)\,ds\,d\theta &\geq 2\Omega_5^T Q \Omega_5 + 4\Omega_6^T Q \Omega_6 + 6\Omega_7^T Q \Omega_7, \\
\int_{\sigma_1}^{\sigma_2}\!\int_{\sigma_1}^{\theta} \dot{z}^T(s) Q \dot{z}(s)\,ds\,d\theta &\geq 2\Omega_8^T Q \Omega_8 + 4\Omega_9^T Q \Omega_9 + 6\Omega_{10}^T Q \Omega_{10},
\end{aligned}$$
where
$$\begin{aligned}
\Omega_1 &= z(\sigma_2) - z(\sigma_1), \\
\Omega_2 &= z(\sigma_2) + z(\sigma_1) - \tfrac{2}{\sigma_2-\sigma_1}\int_{\sigma_1}^{\sigma_2} z(s)\,ds, \\
\Omega_3 &= z(\sigma_2) - z(\sigma_1) + \tfrac{6}{\sigma_2-\sigma_1}\int_{\sigma_1}^{\sigma_2} z(s)\,ds - \tfrac{12}{(\sigma_2-\sigma_1)^2}\int_{\sigma_1}^{\sigma_2}\!\int_{\theta}^{\sigma_2} z(s)\,ds\,d\theta, \\
\Omega_4 &= z(\sigma_2) + z(\sigma_1) - \tfrac{12}{\sigma_2-\sigma_1}\int_{\sigma_1}^{\sigma_2} z(s)\,ds + \tfrac{60}{(\sigma_2-\sigma_1)^2}\int_{\sigma_1}^{\sigma_2}\!\int_{\theta}^{\sigma_2} z(s)\,ds\,d\theta - \tfrac{120}{(\sigma_2-\sigma_1)^3}\int_{\sigma_1}^{\sigma_2}\!\int_{u}^{\sigma_2}\!\int_{\theta}^{\sigma_2} z(s)\,ds\,d\theta\,du, \\
\Omega_5 &= z(\sigma_2) - \tfrac{1}{\sigma_2-\sigma_1}\int_{\sigma_1}^{\sigma_2} z(s)\,ds, \\
\Omega_6 &= z(\sigma_2) + \tfrac{2}{\sigma_2-\sigma_1}\int_{\sigma_1}^{\sigma_2} z(s)\,ds - \tfrac{6}{(\sigma_2-\sigma_1)^2}\int_{\sigma_1}^{\sigma_2}\!\int_{\theta}^{\sigma_2} z(s)\,ds\,d\theta, \\
\Omega_7 &= z(\sigma_2) - \tfrac{3}{\sigma_2-\sigma_1}\int_{\sigma_1}^{\sigma_2} z(s)\,ds + \tfrac{24}{(\sigma_2-\sigma_1)^2}\int_{\sigma_1}^{\sigma_2}\!\int_{\theta}^{\sigma_2} z(s)\,ds\,d\theta - \tfrac{60}{(\sigma_2-\sigma_1)^3}\int_{\sigma_1}^{\sigma_2}\!\int_{u}^{\sigma_2}\!\int_{\theta}^{\sigma_2} z(s)\,ds\,d\theta\,du, \\
\Omega_8 &= z(\sigma_1) - \tfrac{1}{\sigma_2-\sigma_1}\int_{\sigma_1}^{\sigma_2} z(s)\,ds, \\
\Omega_9 &= z(\sigma_1) - \tfrac{4}{\sigma_2-\sigma_1}\int_{\sigma_1}^{\sigma_2} z(s)\,ds + \tfrac{6}{(\sigma_2-\sigma_1)^2}\int_{\sigma_1}^{\sigma_2}\!\int_{\theta}^{\sigma_2} z(s)\,ds\,d\theta, \\
\Omega_{10} &= z(\sigma_1) - \tfrac{9}{\sigma_2-\sigma_1}\int_{\sigma_1}^{\sigma_2} z(s)\,ds + \tfrac{36}{(\sigma_2-\sigma_1)^2}\int_{\sigma_1}^{\sigma_2}\!\int_{\theta}^{\sigma_2} z(s)\,ds\,d\theta - \tfrac{60}{(\sigma_2-\sigma_1)^3}\int_{\sigma_1}^{\sigma_2}\!\int_{u}^{\sigma_2}\!\int_{\theta}^{\sigma_2} z(s)\,ds\,d\theta\,du.
\end{aligned}$$
Lemma 4
([37]). For any real constant matrices of appropriate dimensions $M$, $S$, and $N$ with $M = M^T$, and $\Delta(t)$ given by (6)–(8), the inequality
$$M + S \Delta(t) N + N^T \Delta(t)^T S^T < 0$$
holds if and only if
$$\begin{bmatrix} M & S & \beta N^T \\ * & -\beta I & \beta J^T \\ * & * & -\beta I \end{bmatrix} < 0$$
for some positive real constant $\beta$.

3. Main Results

This section is intended to develop new criteria for system (1) under conditions (2)–(4). We separate the analysis into two parts. First, we consider the nominal system; then we treat the full uncertain system. In both parts, new criteria are introduced via the LMI approach.
Theorem 1.
Assume that Assumptions 1 and 2 hold. For given scalars $\delta_1$, $\delta_2$, $\delta_d$, $\tau_1$, $\tau_2$, $\tau_d$, $\eta_1$, $\eta_2$, $\rho$ with conditions (2)–(4) and $\rho > 0$, if there exist matrices $P > 0$, $M_n$, $n = 1, 2, \ldots, 6$, with $\begin{bmatrix} M_1 & M_2 \\ * & M_3 \end{bmatrix} > 0$ and $\begin{bmatrix} M_4 & M_5 \\ * & M_6 \end{bmatrix} > 0$, $N_j > 0$, $O_j > 0$, $S_j > 0$, $T_j > 0$, $j = 1, 2, 3$, any diagonal matrices $D_1 > 0$, $D_2 > 0$, $U_i > 0$, $i = 1, 2, 3, 4$, and any appropriately dimensioned matrices $X_k$, $Y_l$, $Z_i$, $k = 1, 2, \ldots, 7$, $l = 1, 2, \ldots, 5$, $i = 1, 2, 3, 4$, satisfying the following
$$\Lambda < 0, \tag{9}$$
$$\begin{bmatrix} O_1 + O_2 & Z_i \\ * & O_1 + O_3 \end{bmatrix} \geq 0, \quad i = 1, 2, 3, 4, \tag{10}$$
then the system (11) is exponentially passive, where Λ is defined in Appendix A.
Proof. 
Firstly, we establish exponential passivity for the nominal system
$$\dot{z}(t) = -A z(t) + W_0 g(z(t)) + W_1 g(z(t-\delta(t))) + W_2 \dot{z}(t-\tau(t)) + W_3 \int_{t-\eta(t)}^{t} g(z(s))\,ds + u(t), \quad t \geq 0. \tag{11}$$
Second, we rewrite system (11) using the following model transformation:
$$\dot{z}(t) = w(t), \qquad 0 = -w(t) - A z(t) + W_0 g(z(t)) + W_1 g(z(t-\delta(t))) + W_2 \dot{z}(t-\tau(t)) + W_3 \int_{t-\eta(t)}^{t} g(z(s))\,ds + u(t).$$
Then, the Lyapunov–Krasovskii functional is designed for the system (11) and (13):
V ( z t ) = i = 1 6 V i ( z t ) ,
where
V 1 ( z t ) = z T ( t ) P z ( t ) + 2 i = 1 n 0 z i ( t ) d 1 i ( g i ( s ) ) + d 2 i ( l i s g i ( s ) ) d s ,
V 2 ( z t ) = t δ 2 t δ 1 e 2 α ( s t ) z ( s ) g ( z ( s ) ) T M 1 M 2 * M 3 z ( s ) g ( z ( s ) ) d s + t δ ( t ) t e 2 α ( s t ) z ( s ) g ( z ( s ) ) T M 4 M 5 * M 6 z ( s ) g ( z ( s ) ) d s , V 3 ( z t ) = δ 1 t δ 1 t θ t e 2 α ( s t ) z ˙ T ( s ) N 1 z ˙ ( s ) d s d θ + t δ 1 t u t θ t e 2 α ( s t ) z ˙ T ( s ) N 2 z ˙ ( s ) d s d θ d u + t δ 1 t t δ 1 u θ t e 2 α ( s t ) z ˙ T ( s ) N 3 z ˙ ( s ) d s d θ d u , V 4 ( z t ) = ( δ 2 δ 1 ) t δ 2 t δ 1 θ t e 2 α ( s t ) z ˙ T ( s ) O 1 z ˙ ( s ) d s d θ + t δ 2 t δ 1 u t δ 1 θ t e 2 α ( s t ) z T ( s ) O 2 z ( s ) d s d θ d u + t δ 2 t δ 1 t δ 2 u θ t e 2 α ( s t ) z ˙ T O 3 z ˙ ( s ) d s d θ d u , V 5 ( z t ) = t τ 2 t e 2 α ( s t ) z ˙ T ( s ) S 1 z ˙ ( s ) d s + t τ ( t ) t e 2 α ( s t ) z ˙ T ( s ) S 2 z ˙ ( s ) d s + t τ 1 t e 2 α ( s t ) z ˙ T ( s ) S 3 z ˙ ( s ) d s , V 6 ( z t ) = η 2 t η 2 t θ t e 2 α ( s t ) g T ( z ( s ) ) ( T 1 + T 2 ) g ( z ( s ) ) d s d θ + η 1 t η 1 t θ t e 2 α ( s t ) g T ( z ( s ) ) T 3 g ( z ( s ) ) d s d θ .
From the time derivative of V 1 ( z t ) along the trajectory of system (11) and (13), we obtain
V ˙ 1 ( z t ) = 2 z ( t ) t δ 2 t z ˙ ( s ) d s g ( z ( t ) ) g ( z ( t δ ( t ) ) ) T P Q 1 T Q 5 T Q 9 T 0 Q 2 T Q 6 T Q 10 T 0 Q 3 T Q 7 T Q 11 T 0 Q 4 T Q 8 T Q 12 T z ˙ ( t ) 0 0 0 + 2 g T ( z ( t ) ) D 1 z ˙ ( t ) + 2 z T ( t ) D 2 L z ˙ ( t ) 2 g T ( z ( t ) ) D 2 z ˙ ( t ) = 2 z T ( t ) P [ A z ( t ) + W 0 g ( z ( t ) ) + W 1 g ( z ( t δ ( t ) ) ) + W 2 z ˙ ( t τ ( t ) ) + W 3 t η ( t ) t g ( z ( s ) ) d s + u ( t ) ] + [ z T ( t ) Q 1 T + t δ 2 t z ˙ T ( s ) d s Q 2 T + g T ( z ( t ) ) ( s ) Q 3 T + g T ( z ( t δ ( t ) ) Q 4 T ] [ z ( t ) z ( t δ ( t ) ) t δ 2 t z ˙ ( s ) d s ] + [ z T ( t ) Q 5 T + t δ 2 t z ˙ T ( s ) d s Q 6 T + g T ( z ( t ) ) ( s ) Q 7 T + g T ( z ( t δ ( t ) ) Q 8 T ] z ( t ) z ( t δ ( t ) ) t δ 2 t z ˙ ( s ) d s + [ z T ( t ) Q 9 T + t δ 2 t z ˙ T ( s ) d s Q 10 T + g T ( z ( t ) ) ( s ) Q 11 T + g T ( z ( t δ ( t ) ) Q 12 T ] [ w ( t ) A z ( t ) + W 0 g ( z ( t ) ) + W 1 g ( z ( t δ ( t ) ) ) + W 2 z ˙ ( t τ ( t ) ) + W 3 t η ( t ) t g ( z ( t ) ) + u ( t ) ] + 2 g T ( z ( t ) ) D 1 z ˙ ( t ) + 2 z T ( t ) D 2 L z ˙ ( t ) 2 g T ( z ( t ) ) D 2 z ˙ ( t ) + 2 α z T ( t ) P 1 z ( t ) + 4 α g T ( z ( t ) ) D 1 z ( t ) + 4 α z T ( t ) L g T ( z ( t ) ) D 2 z ( t ) 2 α V 1 ( z t ) .
Calculating V ˙ 2 ( z t ) leads to
V 2 ˙ ( z t ) e 2 α δ 1 z ( t δ 1 ) g ( z ( t δ 1 ) ) T M 1 M 2 * M 3 z ( t δ 1 ) g ( z ( t δ 1 ) ) e 2 α δ 2 z ( t δ 2 ) g ( z ( t δ 2 ) ) T M 1 M 2 * M 3 z ( t δ 2 g ( z ( t δ 2 ) ) + z ( t ) g ( z ( t ) ) T M 4 M 5 * M 6 z ( t ) g ( z ( t ) ) + ( δ d e 2 α δ 2 ) z ( t δ ( t ) ) g ( z ( t δ ( t ) ) ) T × M 4 M 5 * M 6 z ( t δ ( t ) ) g ( z ( t δ ( t ) ) 2 α V 2 ( z t ) .
By employing Lemma 3 to estimate the integral terms in V ˙ 3 ( z t ) , we readily obtain
V 3 ˙ ( z t ) z ˙ ( t ) δ 1 N 1 + δ 1 2 2 ( N 2 + N 3 ) z ˙ ( t ) e 2 α δ 2 { Ω 1 T [ t δ 1 ] , t N 1 Ω 1 [ t δ 1 , t ] + 3 Ω 2 T [ t δ 1 , t ] N 1 Ω 2 [ t δ 1 , t ] + 5 Ω 3 T [ t δ 1 , t ] N 1 Ω 3 T [ t δ 1 , t ] + 7 Ω 4 T [ t δ 1 , t ] N 1 × Ω 4 [ t δ 1 , t ] + 2 Ω 5 T [ t δ 1 , t ] N 2 Ω 5 [ t δ 1 , t ] + 4 Ω 6 T [ t δ 1 , t ] N 2 Ω 6 [ t δ 1 , t ] + 6 Ω 7 T [ t δ 1 , t ] N 2 Ω 7 T [ t δ 1 , t ] + 2 Ω 8 T [ t δ 1 , t ] N 3 Ω 8 [ t δ 1 , t ] + 4 Ω 9 T [ t δ 1 , t ] N 3 × Ω 9 [ t δ 1 , t ] + 6 Ω 10 T [ t δ 1 , t ] N 3 Ω 10 [ t δ 1 , t ] } 2 α V 3 ( z t ) ,
where
Ω 1 [ σ 1 , σ 2 ] = z ( σ 2 ) z ( σ 1 ) , Ω 2 [ σ 1 , σ 2 ] = z ( σ 2 ) + z ( σ 1 ) 2 σ 2 σ 1 σ 1 σ 2 z ( s ) d s , Ω 3 [ σ 1 , σ 2 ] = z ( σ 2 ) z ( σ 1 ) + 6 σ 2 σ 1 σ 1 σ 2 z ( s ) d s 12 ( σ 2 σ 1 ) 2 σ 1 σ 2 θ σ 2 z ( s ) d s d θ , Ω 4 [ σ 1 , σ 2 ] = z ( σ 2 ) + z ( σ 1 ) 12 σ 2 σ 1 σ 1 σ 2 z ( s ) d s + 60 ( σ 2 σ 1 ) 2 σ 1 σ 2 θ σ 2 z ( s ) d s d θ 120 ( σ 2 σ 1 ) 3 σ 1 σ 2 u σ 2 θ σ 2 z ( s ) d s d θ d u , Ω 5 [ σ 1 , σ 2 ] = z ( σ 2 ) 1 σ 2 σ 1 σ 1 σ 2 z ( s ) d s , Ω 6 [ σ 1 , σ 2 ] = z ( σ 2 ) + 2 σ 2 σ 1 σ 1 σ 2 z ( s ) d s 6 ( σ 2 σ 1 ) 2 σ 1 σ 2 θ σ 2 z ( s ) d s d θ , Ω 7 [ σ 1 , σ 2 ] = z ( σ 2 ) 3 σ 2 σ 1 σ 1 σ 2 z ( s ) d s + 24 ( σ 2 σ 1 ) 2 σ 1 σ 2 θ σ 2 z ( s ) d s d θ 60 ( σ 2 σ 1 ) 3 σ 1 σ 2 u σ 2 θ σ 2 z ( s ) d s d θ d u , Ω 8 [ σ 1 , σ 2 ] = z ( σ 1 ) 1 σ 2 σ 1 σ 1 σ 2 z ( s ) d s , Ω 9 [ σ 1 , σ 2 ] = z ( σ 1 ) 4 σ 2 σ 1 σ 1 σ 2 z ( s ) d s + 6 ( σ 2 σ 1 ) 2 σ 1 σ 2 θ σ 2 z ( s ) d s d θ , Ω 10 [ σ 1 , σ 2 ] = z ( σ 1 ) 9 σ 2 σ 1 σ 1 σ 2 z ( s ) d s + 36 ( σ 2 σ 1 ) 2 σ 1 σ 2 θ σ 2 z ( s ) d s d θ 60 ( σ 2 σ 1 ) 3 σ 1 σ 2 u σ 2 θ σ 2 z ( s ) d s d θ d u , t δ 2 σ 1 σ 2 t .
By calculating the derivative of V 4 ( z t ) , we obtain
V 4 ˙ ( z t ) z ˙ ( t ) [ ( δ 2 δ 1 ) 2 O 1 + ( δ 2 δ 1 ) 2 2 O 2 + O 3 ) z ˙ ( t ) e 2 α δ 2 ( δ 2 δ 1 ) × t δ 2 t δ 1 z ˙ T ( s ) O 2 z ˙ ( s ) d s e 2 α δ 2 t δ 2 t δ 1 θ t δ 1 z ˙ T ( s ) O 2 z ˙ ( s ) d s d θ e 2 α δ 2 t δ 2 t δ 1 t δ 2 θ z ˙ T ( s ) O 3 z ˙ ( s ) d s d θ 2 α V 4 ( z t ) z ˙ ( t ) [ ( δ 2 δ 1 ) 2 O 1 + ( δ 2 δ 1 ) 2 2 O 2 + O 3 ) z ˙ ( t ) e 2 α δ 2 { ( δ 2 δ 1 ) × t δ 2 t δ ( t ) z ˙ T ( s ) O 1 z ˙ ( s ) d s + ( δ 2 δ 1 ) t δ ( t ) t δ 1 z ˙ T ( s ) O 1 z ˙ ( s ) d s + t δ 2 t δ ( t ) θ t δ ( t ) z ˙ T ( s ) O 2 z ˙ ( s ) d s + t δ ( t ) t δ 1 θ t δ 1 z ˙ T ( s ) O 2 z ˙ ( s ) d s + ( δ 2 δ ( t ) ) t δ ( t ) t δ 1 z ˙ T ( s ) O 2 z ˙ ( s ) d s + t δ 2 t δ ( t ) t δ 2 θ z ˙ T ( s ) O 3 z ˙ ( s ) d s + t δ ( t ) t δ 1 t δ ( t ) θ z ˙ T ( s ) O 3 z ˙ ( s ) d s + ( δ ( t ) δ 1 ) t δ 2 t δ ( t ) z ˙ T ( s ) O 3 z ˙ ( s ) d s } 2 α V 4 ( z t ) .
Since O 1 > 0 , O 2 > 0 and O 3 > 0 , using Lemma 3, we observe that
e 2 α δ 2 { ( δ 2 δ 1 ) t δ ( t ) t δ 1 z ˙ T ( s ) O 1 z ˙ ( s ) d s + ( δ 2 δ 1 ) t δ 2 t δ ( t ) z ˙ T ( s ) O 1 z ˙ ( s ) d s + ( δ 2 δ ( t ) ) t δ ( t ) t δ 1 z ˙ T ( s ) O 2 z ˙ ( s ) d s + ( δ ( t ) δ 1 ) t δ 2 t δ ( t ) z ˙ T ( s ) O 3 z ˙ ( s ) d s } e 2 α δ 2 { ( δ 2 δ 1 ) ( δ ( t ) δ 1 ) ( Ω 1 T [ t δ ( t ) , t δ 1 ] ( O 1 + O 2 ) Ω 1 [ t δ ( t ) , t δ 1 ] + 3 Ω 2 T [ t δ ( t ) , t δ 1 ] ( O 1 + O 2 ) Ω 2 [ t δ ( t ) , t δ 1 ] + 5 Ω 3 T [ t δ ( t ) , t δ 1 ] ( O 1 + O 2 ) × Ω 3 [ t δ ( t ) , t δ 1 ] + 7 Ω 4 T [ t δ ( t ) , t δ 1 ] ( O 1 + O 2 ) Ω 4 [ t δ ( t ) , t δ 1 ] ) + ( Ω 1 T [ t δ ( t ) , t δ 1 ] O 2 Ω 1 T [ t δ ( t ) , t δ 1 ] + 3 Ω 2 T [ t δ ( t ) , t δ 1 ] O 2 Ω 2 [ t δ ( t ) , t δ 1 ] + 5 Ω 2 T [ t δ ( t ) , t δ 1 ] O 2 Ω 2 [ t δ ( t ) , t δ 1 ] + 7 Ω 2 T [ t δ ( t ) , t δ 1 ] O 2 Ω 2 [ t δ ( t ) , t δ 1 ] ) ( δ 2 δ 1 ) ( δ 2 δ ( t ) ) ( Ω 1 T [ t δ 2 , t δ ( t ) ) ] ( O 1 + O 3 ) Ω 1 [ t δ 2 , t δ ( t ) ) ] + 3 Ω 2 T [ t δ 2 , t δ ( t ) ) ] ( O 1 + O 3 ) Ω 2 [ t δ 2 , t δ ( t ) ) ] + 5 Ω 3 T [ t δ 2 , t δ ( t ) ) ] ( O 1 + O 3 ) Ω 3 [ t δ 2 , t δ ( t ) ) ] + 7 Ω 4 T [ t δ 2 , t δ ( t ) ) ] ( O 1 + O 3 ) Ω 4 [ t δ 2 , t δ ( t ) ) ] ) + ( Ω 1 T [ t δ 2 , t δ ( t ) ) ] O 3 Ω 1 T [ t δ 2 , t δ ( t ) ) ] + 3 Ω 2 T [ t δ 2 , t δ ( t ) ) ] O 3 Ω 2 [ t δ 2 , t δ ( t ) ) ] + 5 Ω 3 T [ t δ 2 , t δ ( t ) ) ] O 3 Ω 3 [ t δ 2 , t δ ( t ) ) ] + 7 Ω 4 T [ t δ 2 , t δ ( t ) ) ] O 3 Ω 4 [ t δ 2 , t δ ( t ) ) ] ) } .
Next, we use the reciprocally convex combination technique to estimate the following inequality:
e 2 α δ 2 { ( δ 2 δ 1 ) t δ ( t ) t δ 1 z ˙ T ( s ) O 1 z ˙ ( s ) d s + ( δ 2 δ 1 ) t δ 2 t δ ( t ) z ˙ T ( s ) O 1 z ˙ ( s ) d s + ( δ 2 δ ( t ) ) t δ ( t ) t δ 1 z ˙ T ( s ) O 2 z ˙ ( s ) d s + ( δ ( t ) δ 1 ) t δ 2 t δ ( t ) z ˙ T ( s ) O 3 z ˙ ( s ) d s } e 2 α δ 2 { Ω 1 T [ t δ ( t ) , t δ 1 ] O 1 Ω 1 [ t δ ( t ) , t δ 1 ] + 3 Ω 2 T [ t δ ( t ) , t δ 1 ] O 1 Ω 2 [ t δ ( t ) , t δ 1 ] + 5 Ω 3 T [ t δ ( t ) , t δ 1 ] O 1 Ω 3 [ t δ ( t ) , t δ 1 ] + 7 Ω 4 T [ t δ ( t ) , t δ 1 ] O 1 Ω 4 [ t δ ( t ) , t δ 1 ] + Ω 1 T [ t δ 2 , t δ ( t ) ] O 1 Ω 1 [ t δ 2 , t δ ( t ) ] + 3 Ω 2 T [ t δ 2 , t δ ( t ) ] O 1 Ω 2 [ t δ 2 , t δ ( t ) ] + 5 Ω 3 T [ t δ 2 , t δ ( t ) ] O 1 Ω 3 [ t δ 2 , t δ ( t ) ] + 7 Ω 4 T [ t δ 2 , t δ ( t ) ] O 1 Ω 4 [ t δ 2 , t δ ( t ) ] + Ω 1 T [ t δ ( t ) , t δ 1 ] Z 1 Ω 1 [ t δ 2 , t δ ( t ) ] + Ω 1 T [ t δ 2 , t δ ( t ) ] Z 1 T Ω 1 [ t δ ( t ) , t δ 1 ] + 3 Ω 2 T [ t δ ( t ) , t δ 1 ] Z 2 Ω 2 [ t δ 2 , t δ ( t ) + 3 Ω 2 T [ t δ 2 , t δ ( t ) ] Z 2 T Ω 2 [ t δ ( t ) , t δ 1 ] + 5 Ω 3 T [ t δ ( t ) , t δ 1 ] Z 3 Ω 3 [ t δ 2 , t δ ( t ) ] + 5 Ω 3 T [ t δ 2 , t δ ( t ) ] Z 3 T Ω 3 [ t δ ( t ) , t δ 1 ] + 7 Ω 4 T [ t δ ( t ) , t δ 1 ] Z 4 Ω 4 [ t δ 2 , t δ ( t ) ] + 7 Ω 4 T [ t δ 2 , t δ ( t ) ] Z 4 T Ω 4 [ t δ ( t ) , t δ 1 ] } .
Then, we estimate V ˙ 4 ( z t ) as
V 4 ˙ ( z t ) z ˙ ( t ) [ ( δ 2 δ 1 ) 2 O 1 + ( δ 2 δ 1 ) 2 2 O 2 + O 3 ) z ˙ ( t ) e 2 α δ 2 { 2 Ω 5 T [ t δ ( t ) , t δ 1 ] O 2 Ω 5 [ t δ ( t ) , t δ 1 ] + 4 Ω 6 T [ t δ ( t ) , t δ 1 ] O 2 × Ω 6 [ t δ ( t ) , t δ 1 ] + 6 Ω 7 T [ t δ ( t ) , t δ 1 ] O 2 Ω 7 T [ t δ ( t ) , t δ 1 ] + 2 Ω 5 T [ t δ 2 , t δ ( t ) ] × O 2 Ω 5 [ t δ 2 , t δ ( t ) ] + 4 Ω 6 T [ t δ 2 , t δ ( t ) ] O 2 Ω 6 [ t δ 2 , t δ ( t ) ] + 6 Ω 7 T [ t δ 2 , t δ ( t ) ] O 2 Ω 7 [ t δ 2 , t δ ( t ) ] + 2 Ω 8 T [ t δ ( t ) , t δ 1 ] O 3 Ω 8 [ t δ ( t ) , t δ 1 ] + 4 Ω 9 T [ t δ ( t ) , t δ 1 ] O 3 Ω 9 [ t δ ( t ) , t δ 1 ] + 6 Ω 10 T [ t δ ( t ) , t δ 1 ] O 3 Ω 10 [ t δ ( t ) , t δ 1 ] + 2 Ω 8 T [ t δ 2 , t δ ( t ) ] O 3 Ω 8 [ t δ 2 , t δ ( t ) ] + 4 Ω 9 T [ t δ 2 , t δ ( t ) ] O 3 Ω 9 [ t δ 2 , t δ ( t ) ] + 6 Ω 10 T [ t δ 2 , t δ ( t ) ] O 3 Ω 10 T [ t δ 2 , t δ ( t ) ] + Ω 1 T [ t δ ( t ) , t δ 1 ] O 1 Ω 1 [ t δ ( t ) , t δ 1 ] + 3 Ω 2 T [ t δ ( t ) , t δ 1 ] O 1 Ω 2 [ t δ ( t ) , t δ 1 ] + 5 Ω 3 T [ t δ ( t ) , t δ 1 ] O 1 Ω 3 [ t δ ( t ) , t δ 1 ] + 7 Ω 4 T [ t δ ( t ) , t δ 1 ] O 1 Ω 4 [ t δ ( t ) , t δ 1 ] + Ω 1 T [ t δ 2 , t δ ( t ) ] O 1 Ω 1 [ t δ 2 , t δ ( t ) ] + 3 Ω 2 T [ t δ 2 , t δ ( t ) ] O 1 Ω 2 [ t δ 2 , t δ ( t ) ] + 5 Ω 3 T [ t δ 2 , t δ ( t ) ] O 1 Ω 3 [ t δ 2 , t δ ( t ) ] + 7 Ω 4 T [ t δ 2 , t δ ( t ) ] O 1 Ω 4 [ t δ 2 , t δ ( t ) ] + Ω 1 T [ t δ ( t ) , t δ 1 ] Z 1 Ω 1 [ t δ 2 , t δ ( t ) ] + Ω 1 T [ t δ 2 , t δ ( t ) ] Z 1 T Ω 1 [ t δ ( t ) , t δ 1 ] + 3 Ω 2 T [ t δ ( t ) , t δ 1 ] Z 2 Ω 2 [ t δ 2 , t δ ( t ) ] + 3 Ω 2 T [ t δ 2 , t δ ( t ) ] Z 2 T Ω 2 [ t δ ( t ) , t δ 1 ] + 5 Ω 3 T [ t δ ( t ) , t δ 1 ] Z 3 Ω 3 [ t δ 2 , t δ ( t ) ] + 5 Ω 3 T [ t δ 2 , t δ ( t ) ] Z 3 T Ω 3 [ t δ ( t ) , t δ 1 ] + 7 Ω 4 T [ t δ ( t ) , t δ 1 ] Z 4 Ω 4 [ t δ 2 , t δ ( t ) ] + 7 Ω 4 T [ t δ 2 , t δ ( t ) ] Z 4 T Ω 4 [ t δ ( t ) , t δ 1 ] } 2 α V 4 ( z t ) .
The time derivative of $V_5(z_t)$ is calculated as
V 5 ˙ ( z t ) z ˙ T ( t ) ( S 1 + S 2 + S 3 ) z ˙ ( t ) e 2 α τ 2 z ˙ T ( t τ 2 ) S 1 z ˙ ( t τ 2 )
e 2 α τ 2 z ˙ T ( t τ ( t ) ) S 2 z ˙ ( t τ ( t ) ) + τ d z ˙ T ( t τ ( t ) ) S 2 z ˙ ( t τ ( t ) ) e 2 α τ 1 z ˙ T ( t τ 1 ) S 3 z ˙ ( t τ 1 ) 2 α V 5 ( z t ) .
Further, from Lemma 1, we obtain $\dot{V}_6(z_t)$ as follows
V ˙ 6 ( z t ) g T ( z ( t ) ) η 2 2 T 1 + η 2 2 T 2 + η 1 2 T 3 g ( z ( t ) ) e 2 α η 2 t η 2 t g T ( z ( s ) ) d s T 1 × t η 2 t g ( z ( s ) ) d s e 2 α η 2 t η ( t ) t g T ( z ( s ) ) d s T 2 t η ( t ) t g ( z ( s ) ) d s e 2 α η 1 t η 1 t g T ( z ( s ) ) d s T 3 t η 1 t g ( z ( s ) ) d s 2 α V 6 ( z t ) .
Since $0 \leq g_i(s)/s \leq l_i$, $i = 1, 2, \ldots, n$, for diagonal matrices $U_m > 0$, $m = 1, 2, 3, 4$, we have
$$[z^T(t) L - g^T(z(t))]\, U_1^T\, g(z(t)) \geq 0,$$
$$[z^T(t-\delta_1) L - g^T(z(t-\delta_1))]\, U_2^T\, g(z(t-\delta_1)) \geq 0,$$
$$[z^T(t-\delta(t)) L - g^T(z(t-\delta(t)))]\, U_3^T\, g(z(t-\delta(t))) \geq 0,$$
$$[z^T(t-\delta_2) L - g^T(z(t-\delta_2))]\, U_4^T\, g(z(t-\delta_2)) \geq 0.$$
Through the use of zero equations, for any matrices of appropriate dimensions $X_k$, $k = 1, 2, \ldots, 7$, and $Y_l$, $l = 1, 2, \ldots, 5$, we obtain the following equations
2 [ z ˙ T ( t ) X 1 T + z T ( t ) X 2 T + g T ( z ( t ) ) X 3 T + g T ( z ( t δ ( t ) ) ) X 4 T + z ˙ T ( t τ ( t ) ) X 5 T + t η ( t ) t g T ( z ( s ) ) d s X 6 T + u T ( t ) X 7 T ] [ z ˙ ( t ) A z ( t ) + W 0 g ( z ( t ) ) + W 1 g ( z ( t δ ( t ) ) ) + W 2 z ˙ ( t τ ( t ) ) + W 3 t η ( t ) t g ( z ( s ) ) d s + u ( t ) ] = 0 ,
2 z ˙ T ( t ) Y 1 T + w T ( t ) Y 2 T z ˙ ( t ) w ( t ) = 0 ,
2 z T ( t ) Y 3 T + z T ( t δ ( t ) ) Y 4 T + t δ ( t ) t z ˙ T ( s ) d s Y 5 T [ z ( t ) z ( t δ ( t ) ) t δ ( t ) t z ˙ ( s ) d s ] = 0 .
Recalling (14)–(20) and the above estimates of the time derivative of $V(z_t)$, it is apparent that
$$\dot{V}(z_t) + 2\alpha V(z_t) - 2 y^T(t) u(t) \leq \xi^T(t) \Lambda \xi(t),$$
where ξ ( t ) = [ z ( t ) , z ( t δ 1 ) , z ( t δ ( t ) ) , z ( t δ 2 ) , g ( z ( t ) ) , g ( z ( t δ 1 ) ) , g ( z ( t δ ( t ) ) ) , g ( z ( t δ 2 ) ) , z ˙ ( t ) , t δ 2 t z ˙ ( s ) d s , ω ( t ) , 1 δ 1 t δ 1 t z ( s ) d s , 1 δ 1 2 t δ 1 t θ t z ( s ) d s d θ , 1 δ 1 3 t δ 1 t u t θ t z ( s ) d s d θ d u , 1 δ ( t ) δ 1 t δ ( t ) t δ 1 z ( s ) d s , 1 ( δ ( t ) δ 1 ) 2 t δ ( t ) t δ 1 θ t δ 1 z ( s ) d s d θ , 1 ( δ ( t ) δ 1 ) 3 × t δ ( t ) t δ 1 u t δ 1 θ t δ 1 z ( s ) d s d θ d u , 1 δ 2 δ ( t ) t δ 2 t δ ( t ) z ( s ) d s , 1 ( δ 2 δ ( t ) ) 2 t δ 2 t δ ( t ) θ t δ ( t ) z ( s ) d s d θ , 1 ( δ 2 δ ( t ) ) 3 t δ 2 t δ ( t ) u t δ ( t ) θ t δ ( t ) z ( s ) d s d θ d u , z ˙ ( t τ 1 ) , z ˙ ( t τ ( t ) ) , z ˙ ( t τ 2 ) , t η 1 t g ( z ( s ) ) d s , t η ( t ) t g ( z ( s ) ) d s , t η 2 t g ( z ( s ) ) d s , u ( t ) ] and Λ is defined in (9). From assumption (21), it is readily visible that
$$\dot{V}(z_t) + 2\alpha V(z_t) - 2 y^T(t) u(t) \leq 0, \quad t \geq 0,$$
or
$$\dot{V}(z_t) + \rho V(z_t) \leq 2 y^T(t) u(t), \quad t \geq 0,$$
where $\rho = 2\alpha$. Therefore, if the LMI conditions (9) and (10) hold, we conclude that the system (11) is exponentially passive. The proof is complete. □
Next, we present the robust exponential passivity criterion for the uncertain NTNNs (1) with mixed interval time-varying delays; the corresponding result is summarized in Theorem 2.
Theorem 2.
Assume that Assumptions 1 and 2 hold. For given scalars $\delta_1$, $\delta_2$, $\delta_d$, $\tau_1$, $\tau_2$, $\tau_d$, $\eta_1$, $\eta_2$, $\rho$, $\beta$ with conditions (2)–(4), $\rho > 0$, and $\beta > 0$, if there exist matrices $P > 0$, $M_n$, $n = 1, 2, \ldots, 6$, with $\begin{bmatrix} M_1 & M_2 \\ * & M_3 \end{bmatrix} > 0$ and $\begin{bmatrix} M_4 & M_5 \\ * & M_6 \end{bmatrix} > 0$, $N_j > 0$, $O_j > 0$, $S_j > 0$, $T_j > 0$, $j = 1, 2, 3$, any diagonal matrices $D_1 > 0$, $D_2 > 0$, $U_i > 0$, $i = 1, 2, 3, 4$, and any appropriately dimensioned matrices $X_k$, $Y_l$, $Z_i$, $k = 1, 2, \ldots, 7$, $l = 1, 2, \ldots, 5$, $i = 1, 2, 3, 4$, satisfying LMIs (9), (10) and
$$\begin{bmatrix} \Lambda & \Gamma_1 & \beta \Gamma_2^T \\ * & -\beta I & \beta J^T \\ * & * & -\beta I \end{bmatrix} < 0,$$
then the system (1) is robustly exponentially passive.
Proof. 
Based on Theorem 1, if $A$, $W_0$, $W_1$, $W_2$, and $W_3$ are replaced with $A + E\Delta(t)G_a$, $W_0 + E\Delta(t)G_0$, $W_1 + E\Delta(t)G_1$, $W_2 + E\Delta(t)G_2$, and $W_3 + E\Delta(t)G_3$, respectively, then a criterion for the uncertain NTNNs (1) is obtained from the following condition:
$$\Lambda + \Gamma_1 \Delta(t) \Gamma_2 + \Gamma_2^T \Delta(t)^T \Gamma_1^T < 0,$$
where $\Gamma_1 = \big[\,(P+Q_9+X_2)^T E,\ 0,\ 0,\ 0,\ (Q_{11}+X_3)^T E,\ 0,\ (Q_{12}+X_4)^T E,\ 0,\ X_1^T E,\ Q_{10}^T E,\ \underbrace{0, \ldots, 0}_{11\ \text{items}},\ X_5^T E,\ 0,\ 0,\ X_6^T E,\ 0,\ X_7^T E\,\big]$ and $\Gamma_2 = \big[\,G_a,\ 0,\ 0,\ 0,\ G_0,\ 0,\ G_1,\ \underbrace{0, \ldots, 0}_{14\ \text{items}},\ G_2,\ 0,\ 0,\ G_3,\ 0,\ 0\,\big]$. By Lemma 4, an adequate condition ensuring the above inequality is that there exists a scalar $\beta > 0$ such that
$$\begin{bmatrix} \Lambda & \Gamma_1 & \beta \Gamma_2^T \\ * & -\beta I & \beta J^T \\ * & * & -\beta I \end{bmatrix} < 0$$
holds, which is the LMI assumed in the theorem. Together with arguments similar to the proof of Theorem 1, we have
$$\dot{V}(z_t) + \rho V(z_t) \leq 2 y^T(t) u(t), \quad t \geq 0.$$
Based on Definition 1, the uncertain NTNNs (1) is robustly exponentially passive. This completes the proof. □

4. Numerical Examples

In this section, numerical examples are presented to show the performance of the proposed criteria for systems (1) and (11).
Example 1.
Consider the NTNNs (11), with the parameters [2,7,24] being as follows:
$A = \begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix}$, $W_0 = \begin{bmatrix} \gamma & 0.3 \\ 0.3 & 0.5 \end{bmatrix}$, $W_1 = \begin{bmatrix} 0.2 & 0.1 \\ 0.1 & 0.2 \end{bmatrix}$, $W_2 = \begin{bmatrix} 0.15 & 0 \\ 0 & 0.15 \end{bmatrix}$, $W_3 = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$, $L = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$, $u(t) = 0$, $y(t) = 0$.
By applying Theorem 1, we find that the LMIs (9) and (10) are feasible. In Table 1, we compare the maximum allowable bound of γ for $\delta_1 = 0.5$, $\delta_2 = 2.0$, $\tau_1 = 0.5$, $\tau_2 = 1.0$, and $\delta_d = 0.9$ that ensures Theorem 1 holds for system (11); our result outperforms the previous results in [2,7,24].
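For readers who wish to reproduce such feasibility checks, the general pattern is to declare the decision variables of Theorem 1, impose (9) and (10) as semidefinite constraints, and ask a solver for a feasible point. The sketch below is our own illustration in Python with CVXPY (the paper itself uses the Matlab LMI toolbox); only a representative subset of variables is shown, and build_Lambda is a hypothetical helper that would have to assemble the full matrix Λ of Appendix A:

```python
import numpy as np
import cvxpy as cp

n = 2  # state dimension in the numerical examples

# A representative subset of the decision variables of Theorem 1.
P  = cp.Variable((n, n), symmetric=True)
O1 = cp.Variable((n, n), symmetric=True)
O2 = cp.Variable((n, n), symmetric=True)
O3 = cp.Variable((n, n), symmetric=True)
Z1 = cp.Variable((n, n))

constraints = [P >> 1e-6 * np.eye(n), O1 >> 0, O2 >> 0, O3 >> 0]

# Condition (10): [[O1 + O2, Z1], [Z1^T, O1 + O3]] >= 0, built as a symmetric block matrix.
G = cp.Variable((2 * n, 2 * n), symmetric=True)
constraints += [G[:n, :n] == O1 + O2, G[:n, n:] == Z1, G[n:, n:] == O1 + O3, G >> 0]

# Condition (9) would be added in the same way:
#   Lam = build_Lambda(P, O1, O2, O3, Z1, ...)  # hypothetical helper assembling the
#   constraints += [Lam << 0]                   # block matrix Lambda of Appendix A

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status)  # 'optimal' means the stated LMIs admit a feasible point
```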
Example 2.
Consider the uncertain NTNNs (1) with the parameters taken from [22] as follows:
$A = \begin{bmatrix} 4 & 0 \\ 0 & 4 \end{bmatrix}$, $W_0 = \begin{bmatrix} 0.6 & 0.4 \\ 0.5 & 0.4 \end{bmatrix}$, $W_1 = \begin{bmatrix} 0.9993 & 0.3554 \\ 0.0471 & 0.2137 \end{bmatrix}$, $W_2 = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$, $W_3 = \begin{bmatrix} 0.3978 & 1.3337 \\ 0.2296 & 0.9361 \end{bmatrix}$, $G_a = \begin{bmatrix} 0.2 & 0 \\ 0 & 0.2 \end{bmatrix}$, $G_0 = \begin{bmatrix} 0.6 & 0 \\ 0 & 0.6 \end{bmatrix}$, $G_1 = \begin{bmatrix} 0.3 & 0 \\ 0 & 0.3 \end{bmatrix}$, $G_2 = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$, $G_3 = \begin{bmatrix} 0.4 & 0 \\ 0 & 0.4 \end{bmatrix}$, $L = E = C_0 = C_1 = C_3 = I$, $C_2 = 0$.
Through adaptation of Theorem 2 and using the Matlab LMI toolbox, the proposed criterion is found to be feasible. A similar result was reported previously in [22], where $\delta(t) = 1 + \sin(t)$ and $\eta(t) = 1.5 + \cos(t)$. Meanwhile, we show that the robust exponential passivity of system (1) is guaranteed by calculating the maximum allowable bound of ρ for $\delta_2 = 2.0$, $\eta_1 = 0.0$, $\eta_2 = 2.5$, different $\delta_d$, and various $\delta_1$ in Table 2.
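As a quick sanity check (ours), the delay functions quoted from [22] indeed satisfy the interval bounds used in Table 2:

```python
import numpy as np

# delta(t) = 1 + sin(t) and eta(t) = 1.5 + cos(t) versus the bounds
# 0 <= delta(t) <= delta_2 = 2.0 and 0 <= eta(t) <= eta_2 = 2.5 of Example 2.
t = np.linspace(0.0, 50.0, 100001)
delta, eta = 1.0 + np.sin(t), 1.5 + np.cos(t)
print(delta.min() >= 0.0, delta.max() <= 2.0)  # True True
print(eta.min() >= 0.0, eta.max() <= 2.5)      # True True
# For this particular delta(t) the derivative bound would be delta_d = 1;
# Table 2 reports the achievable rate rho for several smaller values of delta_d.
```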
Example 3.
Consider the uncertain NTNNs (1), with the parameters [21,22] being as follows:
$A = \begin{bmatrix} 4 & 0 \\ 0 & 7 \end{bmatrix}$, $W_0 = \begin{bmatrix} 0 & 0.5 \\ 0.5 & 0 \end{bmatrix}$, $W_1 = \begin{bmatrix} 1 & 1 \\ 1 & 2 \end{bmatrix}$, $W_2 = W_3 = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$, $G_a = \begin{bmatrix} 0.02 & 0.04 \\ 0.03 & 0.06 \end{bmatrix}$, $G_0 = \begin{bmatrix} 0.02 & 0.04 \\ 0.02 & 0.04 \end{bmatrix}$, $G_1 = \begin{bmatrix} 0.03 & 0.06 \\ 0.02 & 0.04 \end{bmatrix}$, $G_2 = G_3 = 0$, $L = E = I$, $C_0 = C_1 = C_3 = I$, $C_2 = 0$.
This example gives the maximum allowable values of ρ for $\delta_1 = 0.0$ and $\delta_2 = 0.16$ that guarantee that system (1) is robustly exponentially passive. Table 3 shows that our results are superior to those in [21,22].
Example 4.
Consider the uncertain NTNNs (1), the parameters being as follows:
$A = \begin{bmatrix} 4 & 0 \\ 0 & 5 \end{bmatrix}$, $W_0 = \begin{bmatrix} 0.4 & 0 \\ 0.1 & 0.1 \end{bmatrix}$, $W_1 = \begin{bmatrix} 0.1 & 0.2 \\ 0.15 & 0.18 \end{bmatrix}$, $W_2 = \begin{bmatrix} 0.1 & 0 \\ 0 & 0.1 \end{bmatrix}$, $W_3 = \begin{bmatrix} 0.41 & 0.5 \\ 0.69 & 0.31 \end{bmatrix}$, $G_a = \begin{bmatrix} 0 & 0 \\ 0.1 & 0.1 \end{bmatrix}$, $G_0 = G_1 = \begin{bmatrix} 0 & 0 \\ 0.02 & 0.03 \end{bmatrix}$, $G_2 = \begin{bmatrix} 0 & 0 \\ 0.001 & 0.001 \end{bmatrix}$, $G_3 = \begin{bmatrix} 0 & 0 \\ 0.02 & 0.02 \end{bmatrix}$, $L = E = C_0 = C_1 = C_2 = C_3 = I$.
We confirm the feasibility of the criterion of Theorem 2 by using the Matlab LMI toolbox, where $0.5 \leq \delta(t) \leq 4$, $\delta_d = 0.5$, $0.5 \leq \tau(t) \leq 3.5$, $\tau_d = 0.7$, $0.5 \leq \eta(t) \leq 1$, and $\rho = 0.094$; a feasible solution is as follows:
P = 1.3322 0.2119 0.2119 1.0924 ,  D 1 = 1.4453 0 0 1.4453 ,
D 2 = 0.9707 0 0 0.9707 ,  M 1 = 0.1220 0.0042 0.0042 0.1062 ,
M 2 = 0.1326 0.0016 0.0016 0.1384 ,  M 3 = 0.3416 0.0051 0.0051 0.3610 ,
M 4 = 4.7528 0.3159 0.3159 4.4572 ,  M 5 = 2.0930 0.2243 0.2243 0.9815 ,
M 6 = 8.2592 0.9770 0.9770 7.3324 ,  N 1 = 0.0509 0.0031 0.0031 0.0370 ,
N 2 = 10 3 0.7335 0.0602 0.0602 0.5430 ,  N 3 = 0.0769 0.0033 0.0033 0.0656 ,
O 1 = 0.0012 0.0001 0.0001 0.0008 ,  O 2 = 0.0016 0.0001 0.0001 0.0011 ,
O 3 = 0.0016 0.0001 0.0001 0.0011 ,  S 1 = 0.4670 0.0273 0.0273 0.3901 ,
S 2 = 0.0040 0.0003 0.0003 0.0028 ,  S 3 = 0.0039 0.0003 0.0003 0.0027 ,
T 1 = 0.2694 0.0200 0.0200 0.1852 ,  T 2 = 3.2174 0.0233 0.0233 2.5143 ,
T 3 = 1.0763 0.0798 0.0798 0.7395 ,  Q 1 = 10 3 1.1925 0.0000 0.0000 1.1924 ,
Q 2 = 10 3 1.1927 0.0001 0.0001 1.1926 ,  Q 3 = 0.5234 0.0422 0.0422 0.4805 ,
Q 4 = 0.2096 0.0098 0.0098 0.0691 ,  Q 5 = 10 3 1.1923 0.0000 0.0000 1.1924 ,
Q 6 = 10 3 1.1927 0.0000 0.0000 1.1928 ,  Q 7 = 0.5771 0.0465 0.0465 0.3998 ,
Q 8 = 0.2180 0.0171 0.0171 0.0950 ,  Q 9 = 0.9438 0.6339 0.6339 0.7053 ,
Q 10 = 0.0146 0.0911 0.0911 0.0433 ,  Q 11 = 0.4912 0.8620 0.8620 0.6054 ,
Q 12 = 0.7129 0.3630 0.3630 1.0636 ,  Z 1 = 10 3 0.1163 0.0042 0.0042 0.0458 ,
Z 2 = 10 6 0.2203 0.0787 0.0787 0.5239 ,  Z 3 = 10 8 0.0204 0.0252 0.0252 0.1607 ,
Z 4 = 10 9 0.4913 0.0730 0.0730 0.2132 ,  U 1 = 8.1698 0 0 8.1698 ,
U 2 = 0.1671 0 0 0.1671 ,  U 3 = 0.5990 0 0 0.5990 ,
U 4 = 0.0479 0 0 0.0479 ,  X 1 = 6.4830 0.7995 0.7995 4.5869 ,
X 2 = 25.5529 4.0582 4.0582 21.7320 ,  X 3 = 2.0164 0.5346 0.5346 0.2008 ,
X 4 = 1.5646 0.1081 0.1081 0.2932 ,  X 5 = 0.6444 0.0799 0.0799 0.4607 ,
X 6 = 2.9287 0.5075 0.5075 1.7624 ,  X 7 = 6.6621 0.9600 0.9600 4.4849 ,
Y 1 = 10 3 5.3746 0.0050 0.0050 5.3748 ,  Y 2 = 10 3 5.3750 0.0050 0.0050 5.3750 ,
Y 3 = 10 3 1.1923 0.0000 0.0000 1.1924 ,  Y 4 = 10 3 3.5779 0.0000 0.0000 3.5778 ,
Y 5 = 10 3 1.1927 0.0000 0.0000 1.1927 , and β = 87.2598 .
Figure 1 gives the state response of the uncertain NTNNs (1) under zero input with the initial condition $[0.2, 0.2]$, where the interval time-varying delays are chosen as $\delta(t) = 3 + 0.25|\sin(2t)|$, $\tau(t) = 3 + 0.35|\sin(2t)|$, and $\eta(t) = |\cos(t)|$, and the activation function is set as $g(z(t)) = \tanh(z(t))$. Meanwhile, the solution trajectory of system (1) in Example 4 with the initial condition $[3.5, 3.5]$ is sketched in Figure 2.
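A state response of the kind shown in Figure 1 can be reproduced qualitatively with a simple fixed-step Euler scheme. The sketch below is our own simulation of the nominal part of system (1) with the Example 4 parameters, zero input, and the delays quoted above; the derivative history needed by the neutral term is taken as zero on the initial interval (consistent with a constant initial function), and the parameter entries are used exactly as printed, so any signs lost in the source extraction are an assumption here:

```python
import numpy as np
import matplotlib.pyplot as plt

# Example 4 parameters, entries as printed in the text.
A  = np.diag([4.0, 5.0])
W0 = np.array([[0.4, 0.0], [0.1, 0.1]])
W1 = np.array([[0.1, 0.2], [0.15, 0.18]])
W2 = np.array([[0.1, 0.0], [0.0, 0.1]])
W3 = np.array([[0.41, 0.5], [0.69, 0.31]])

h, T = 0.001, 10.0
N = int(T / h)
m = int(3.5 / h) + 1                            # history length covering the largest delay
z  = np.zeros((N + m, 2)); z[:m] = [0.2, 0.2]   # constant initial function
dz = np.zeros((N + m, 2))                       # derivative history, zero on the initial interval

for k in range(m, N + m):
    t = (k - m) * h
    nd = int((3.0 + 0.25 * abs(np.sin(2 * t))) / h)   # delta(t) in steps
    nt = int((3.0 + 0.35 * abs(np.sin(2 * t))) / h)   # tau(t) in steps
    ne = int(abs(np.cos(t)) / h)                      # eta(t) in steps
    zc = z[k - 1]
    dist = h * np.tanh(z[k - 1 - ne:k]).sum(axis=0)   # ~ int_{t-eta}^{t} g(z(s)) ds
    dz[k - 1] = (-A @ zc + W0 @ np.tanh(zc) + W1 @ np.tanh(z[k - 1 - nd])
                 + W2 @ dz[k - 1 - nt] + W3 @ dist)
    z[k] = zc + h * dz[k - 1]

plt.plot(np.arange(N) * h, z[m:, 0], label='z1(t)')
plt.plot(np.arange(N) * h, z[m:, 1], label='z2(t)')
plt.xlabel('t'); plt.legend(); plt.show()
```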

5. Conclusions

In this paper, we considered the robust exponential passivity analysis for uncertain NTNNs with mixed interval time-varying delays, including discrete, neutral, and distributed delays. We concentrated on interval time-varying delays in which the lower bounds are allowed to be either positive or zero. To strengthen the results, we simultaneously adopted a model transformation, various integral inequalities, the reciprocally convex combination technique, and a suitable Lyapunov–Krasovskii functional. First, a new exponential passivity criterion for NTNNs was derived and formulated in the form of LMIs. Second, a new robust exponential passivity criterion for uncertain NTNNs was obtained. Note that the proposed method can be adapted to related problems, such as the exponential passivity of uncertain neural networks with distributed time-varying delays, the exponential passivity of uncertain neural networks with time-varying delays, and the stability of NTNNs with interval time-varying delays. Lastly, the feasibility of the proposed criteria was demonstrated through numerical examples. We achieved the aim of obtaining larger maximum values of the rate of convergence, so that our results improve upon those previously reported, and we introduced a new example to demonstrate the existence of a feasible solution to the proposed criteria.

Author Contributions

Conceptualization, N.S.; methodology, N.S.; software, N.S. and N.Y.; validation, N.S. and K.M.; formal analysis, K.M.; investigation, N.S., N.Y., P.S. and K.M.; writing—original draft preparation, N.S., N.Y., P.S. and K.M.; writing—review and editing, N.S., N.Y., P.S. and K.M.; visualization, N.Y., P.S. and K.M.; supervision, N.Y. and K.M.; project administration, N.S., N.Y., P.S. and K.M.; funding acquisition, K.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Research Council of Thailand (NRCT) and Khon Kaen University (Mid-Career Research Grant NRCT5-RSA63003).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors thank the reviewers for their valuable comments and suggestions, which led to the improvement of the content of the paper.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Λ = [ Λ i , j ] 27 × 27 , Λ i , j = Λ j , i T , i , j = 1 , 2 , , 27 , Λ 1 , 1 = 2 α P ( P + Q 9 + X 2 ) T A A T ( P + Q 9 + X 2 ) + Q 1 + Q 1 T + Q 5 + Q 5 T + M 4 + 2 α L T D 2 T + 2 α D 2 L + Y 3 + Y 3 T e 2 α δ 1 ( 16 N 1 + 12 N 2 ) , Λ 1 , 2 = 4 e 2 α δ 1 N 1 , Λ 1 , 3 = Q 1 T Q 5 T Y 3 T + Y 4 , Λ 1 , 5 = ( P + Q 9 + X 2 ) T W 0 + Q 3 + Q 7 A T ( Q 11 + X 3 ) + 2 α D 1 T 2 α D 2 + M 5 + L U 1 T , Λ 1 , 7 = ( P + Q 9 + X 2 ) T W 1 + Q 4 + Q 8 A T ( Q 12 + X 4 ) , Λ 1 , 9 = D 2 L X 2 T A T X 1 , Λ 1 , 10 = Q 1 T Q 5 T + Q 2 + Q 6 A Q 10 Y 3 T + Y 5 , Λ 1 , 11 = Q 9 , Λ 1 , 12 = e 2 α δ 1 ( 60 N 1 + 12 N 2 ) , Λ 1 , 13 = e 2 α δ 1 ( 360 N 1 + 120 N 2 ) , Λ 1 , 14 = e 2 α δ 1 ( 840 N 1 + 360 N 2 ) , Λ 1 , 22 = ( P + Q 9 + X 2 ) T W 2 A T X 5 , Λ 1 , 25 = ( P + Q 9 + X 2 ) T W 3 A T X 6 , Λ 1 , 27 = P T + Q 9 T + X 2 T A T X 7 , Λ 2 , 2 = e 2 α δ 1 M 1 e 2 α δ 1 ( 16 N 1 + 12 N 3 ) e 2 α δ 2 ( 16 O 1 + 12 O 2 ) , Λ 2 , 3 = e 2 α δ 2 ( Z 1 + 3 Z 2 + 5 Z 3 + 7 Z 4 ) T 4 e 2 α δ 2 O 1 , Λ 2 , 4 = e 2 α δ 2 ( Z 1 3 Z 2 + 5 Z 3 7 Z 4 ) T , Λ 2 , 6 = e 2 α δ 1 M 2 + L U 2 T , Λ 2 , 12 = e 2 α δ 1 ( 120 N 1 + 72 N 3 ) , Λ 2 , 13 = e 2 α δ 1 ( 480 N 1 + 240 N 3 ) , Λ 2 , 14 = e 2 α δ 1 ( 840 N 1 + 360 N 3 ) , Λ 2 , 15 = e 2 α δ 2 ( 60 O 1 + 12 O 2 ) , Λ 2 , 16 = e 2 α δ 2 ( 360 O 1 + 120 O 2 ) , Λ 2 , 17 = e 2 α δ 2 ( 840 O 1 + 360 O 2 ) , Λ 2 , 18 = e 2 α δ 2 ( 6 Z 2 30 Z 3 + 84 Z 4 ) T , Λ 2 , 19 = e 2 α δ 2 ( 60 Z 3 420 Z 4 ) T , Λ 2 , 20 = 840 e 2 α δ 2 Z 4 T , Λ 3 , 3 = ( δ d e 2 α δ 2 ) M 4 Y 4 Y 4 T + e 2 α δ 2 ( Z 1 3 Z 2 + 5 Z 3 7 Z 4 ) T + e 2 α δ 2 ( Z 1 3 Z 2 + 5 Z 3 7 Z 4 ) e 2 α δ 2 ( 32 O 1 + 12 O 2 + 12 O 3 ) , Λ 3 , 4 = e 2 α δ 2 ( Z 1 + 3 Z 2 + 5 Z 3 + 7 Z 4 ) T 4 e 2 α δ 2 O 1 , Λ 3 , 5 = Q 3 Q 7 , Λ 3 , 7 = Q 4 Q 8 + ( δ d e 2 α δ 2 ) M 5 + L U 3 T , Λ 3 , 10 = Q 2 Q 6 Y 4 T Y 5 , Λ 3 , 15 = e 2 α δ 2 ( 6 Z 2 30 Z 3 + 84 Z 4 ) T + e 2 α δ 2 ( 120 O 1 + 72 O 3 ) , Λ 3 , 16 = e 2 α δ 2 ( 60 Z 3 420 Z 4 ) e 2 α δ 2 ( 480 O 1 + 240 O 3 ) , Λ 3 , 17 = 840 e 2 α δ 2 Z 4 + e 2 α δ 2 ( 840 O 1 + 360 O 3 ) , Λ 3 , 18 = e 2 α δ 2 ( 6 Z 2 + 30 Z 3 + 84 Z 4 ) T + e 2 α δ 2 ( 60 O 1 + 12 O 2 ) , Λ 3 , 19 = e 2 α δ 2 ( 60 Z 3 + 420 Z 4 ) T e 2 α δ 2 ( 360 O 1 + 120 O 2 ) , Λ 3 , 20 = 840 e 2 α δ 2 Z 4 T + e 2 α δ 2 ( 840 O 1 + 360 O 2 ) , Λ 4 , 4 = e 2 α δ 2 M 1 e 2 α δ 2 ( 16 O 1 + 12 O 3 ) , Λ 4 , 8 = e 2 α δ 2 M 2 + L U 4 T , Λ 4 , 15 = e 2 α δ 2 ( 6 Z 2 + 30 Z 3 + 84 Z 4 ) , Λ 4 , 16 = e 2 α δ 2 ( 60 Z 3 + 420 Z 4 ) , Λ 4 , 17 = 840 e 2 α δ 2 Z 4 , Λ 4 , 18 = e 2 α δ 2 ( 120 O 1 + 72 O 3 ) , Λ 4 , 19 = e 2 α δ 2 ( 480 O 1 + 240 O 3 ) , Λ 4 , 20 = e 2 α δ 2 ( 840 O 1 + 360 O 3 ) , Λ 5 , 5 = ( Q 11 + X 3 ) T W 0 + W 0 T ( Q 11 + X 3 ) + M 6 U 1 U 1 T + η 2 2 ( T 1 + T 2 ) + η 1 2 T 3 , Λ 5 , 7 = ( Q 11 T + X 3 ) T W 1 + W 0 T ( Q 12 + X 4 ) , Λ 5 , 9 = D 1 D 2 X 3 T + W 0 T X 1 , Λ 5 , 10 = Q 3 T Q 7 T + W 0 T Q 10 , Λ 5 , 11 = Q 11 T , Λ 5 , 22 = ( Q 11 + X 3 ) T W 2 + W 0 T X 5 , Λ 5 , 25 = ( Q 11 + X 3 ) T W 3 + W 0 T X 6 , Λ 5 , 27 = W 0 T X 7 + X 3 T + Q 11 T C 0 T , Λ 6 , 6 = e 2 α δ 2 M 3 U 2 U 2 T , Λ 7 , 7 = ( Q 12 + X 4 ) T W 1 + W 1 T ( Q 12 + X 4 ) + ( δ d e 2 α δ 2 ) M 6 U 3 U 3 T , Λ 7 , 9 = X 4 T + W 1 T X 1 , Λ 7 , 10 = Q 4 Q 8 + W 1 T Q 10 , Λ 7 , 11 = Q 12 , Λ 7 , 22 = ( Q 12 + X 4 ) T W 2 + W 1 T X 5 , Λ 7 , 25 = ( Q 12 + X 4 ) T W 3 + W 1 T X 6 , Λ 7 , 27 = W 1 T X 7 + X 4 T + Q 12 T C 1 T , Λ 8 , 8 = e 2 α δ 2 M 3 T U 4 U 4 T , Λ 9 , 9 = Y 1 + Y 1 T X 1 X 1 T + δ 1 2 N 1 + δ 1 2 2 ( N 2 + N 3 ) , Λ 9 , 11 = Y 1 T + Y 2 T , Λ 9 , 22 = X 1 T W 2 X 5 , Λ 9 , 25 = X 1 T W 
3 X 6 , Λ 9 , 27 = X 7 + X 1 , Λ 10 , 10 = Q 2 Q 2 T Q 6 Q 6 T Y 5 Y 5 T , Λ 10 , 11 = Q 10 T ,
Λ 10 , 22 = Q 10 T W 2 , Λ 10 , 25 = Q 10 T W 3 , Λ 10 , 27 = Q 10 , Λ 11 , 11 = Y 2 Y 2 T + ( δ 2 δ 1 ) 2 × O 1 + ( δ 2 δ 1 ) 2 2 ( O 2 + O 3 ) + S 1 + S 2 + S 3 , Λ 12 , 12 = e 2 α δ 1 ( 1200 N 1 + 720 N 2 + 552 N 3 ) , Λ 12 , 13 = e 2 α δ 1 ( 5400 N 1 + 480 N 2 + 2040 N 3 ) , Λ 12 , 14 = e 2 α δ 1 ( 10080 N 1 + 10080 N 2 + 3240 N 3 ) , Λ 13 , 13 = e 2 α δ 1 ( 25920 N 1 + 3600 N 2 + 7920 N 3 ) , Λ 13 , 14 = e 2 α δ 1 ( 50400 N 1 + 8640 N 2 + 12960 N 3 , Λ 14 , 14 = e 2 α δ 1 ( 100800 N 1 + 21600 N 2 + 21600 N 3 ) , Λ 15 , 15 = e 2 α δ 2 ( 1200 O 1 + 72 O 2 + 552 O 3 ) , Λ 15 , 16 = e 2 α δ 2 ( 5400 O 1 + 480 O 2 + 2040 O 3 ) , Λ 15 , 17 = e 2 α δ 2 ( 10080 O 1 + 1080 O 2 + 3240 O 3 ) , Λ 15 , 18 = e 2 α δ 2 ( 12 Z 2 + 180 Z 3 + 1008 Z 4 ) T , Λ 15 , 19 = e 2 α δ 2 ( 360 Z 3 + 5040 Z 4 ) T , Λ 15 , 20 = 10080 e 2 α δ 2 Z 4 T , Λ 16 , 16 = e 2 α δ 2 ( 25920 O 1 + 3600 O 2 + 7920 O 3 ) , Λ 16 , 17 = e 2 α δ 2 ( 50400 O 1 + 8640 O 2 + 12960 O 3 ) , Λ 16 , 18 = e 2 α δ 2 ( 360 Z 3 + 5040 Z 4 ) T , Λ 16 , 19 = e 2 α δ 2 ( 720 Z 3 + 25200 Z 4 ) T , Λ 16 , 20 = 50400 e 2 α δ 2 Z 4 T , Λ 17 , 17 = e 2 α δ 2 ( 100800 O 1 + 21600 O 2 + 21600 O 3 ) , Λ 17 , 18 = 10080 e 2 α δ 2 Z 4 T , Λ 17 , 19 = 50400 e 2 α δ 2 Z 4 T , Λ 17 , 20 = 100800 e 2 α δ 2 Z 4 T , Λ 18 , 18 = e 2 α δ 2 ( 1200 O 1 + 72 O 2 + 552 O 3 ) , Λ 18 , 19 = e 2 α δ 2 ( 5400 O 1 + 480 O 2 + 2040 O 3 ) , Λ 18 , 20 = e 2 α δ 2 ( 10080 O 1 + 1080 O 2 + 3240 O 3 ) , Λ 19 , 19 = e 2 α δ 2 ( 25920 O 1 + 3600 O 2 + 7920 O 3 ) , Λ 19 , 20 = e 2 α δ 2 ( 50400 O 1 + 8640 O 2 + 12960 O 3 ) , Λ 20 , 20 = e 2 α δ 2 ( 100800 O 1 + 21600 O 2 + 21600 O 3 ) , Λ 21 , 21 = e 2 α τ 1 S 3 , Λ 22 , 22 = ( τ d e 2 α τ 2 ) S 2 + X 5 T W 2 + W 2 X 5 , Λ 22 , 25 = X 5 T W 3 + W 2 X 6 , Λ 22 , 27 = X 5 T + W 2 T X 7 , Λ 23 , 23 = e 2 α τ 2 S 1 , Λ 24 , 24 = e 2 α δ 2 T 3 , Λ 25 , 25 = e 2 α δ 2 T 2 + X 6 T W 3 + W 3 T X 6 , Λ 25 , 27 = X 6 T + W 3 T X 7 C 2 T , Λ 26 , 26 = e 2 α δ 2 T 1 , Λ 27 , 27 = X 7 + X 7 T C 3 C 3 T ,
and the other terms are 0.

References

1. Shang, Y. On the delayed scaled consensus problems. Appl. Sci. 2017, 7, 713.
2. Park, J.H.; Kwon, O.M. Global stability for neural networks of neutral-type with interval time-varying delays. Chaos Solitons Fractals 2009, 41, 1174–1181.
3. Sun, J.; Liu, G.P.; Chen, J.; Rees, D. Improved delay-range-dependent stability criteria for linear systems with time-varying delays. Automatica 2010, 46, 466–470.
4. Peng, C.; Fei, M.R. An improved result on the stability of uncertain T-S fuzzy systems with interval time-varying delay. Fuzzy Sets Syst. 2013, 212, 97–109.
5. Manivannan, R.; Mahendrakumar, G.; Samidurai, R.; Cao, J.; Alsaedi, A. Exponential stability and extended dissipativity criteria for generalized neural networks with interval time-varying delay signals. J. Frankl. Inst. 2017, 354, 4353–4376.
6. Zhang, S.; Qi, X. Improved integral inequalities for stability analysis of interval time-delay systems. Algorithms 2017, 10, 134.
7. Manivannan, R.; Samidurai, R.; Cao, J.; Alsaedi, A.; Alsaadi, F.E. Delay-dependent stability criteria for neutral-type neural networks with interval time-varying delay signals under the effects of leakage delay. Adv. Differ. Equ. 2018, 2018, 53.
8. Klamnoi, A.; Yotha, N.; Weera, W.; Botmart, T. Improved results on passivity analysis of neutral-type neural networks with time-varying delays. J. Res. Appl. Mech. Eng. 2018, 6, 71–81.
9. Hale, J.K. Introduction to Functional Differential Equations; Springer: New York, NY, USA, 2001.
10. Niculescu, S.I. Delay Effects on Stability: A Robust Control Approach; Springer: London, UK, 2001.
11. Brayton, R.K. Bifurcation of periodic solution in a nonlinear difference-differential equation of neutral type. Quart. Appl. Math. 1966, 24, 215–224.
12. Kuang, Y. Delay Differential Equations with Applications in Population Dynamics; Academic Press: Boston, MA, USA, 1993.
13. Bevelevich, V. Classical Network Synthesis; Van Nostrand: New York, NY, USA, 1968.
14. Gu, K.; Kharitonov, V.L.; Chen, J. Stability of Time-Delay Systems; Birkhäuser: Berlin, Germany, 2003.
15. Haykin, S. Neural Networks: A Comprehensive Foundation; Prentice–Hall: Englewood Cliffs, NJ, USA, 1998.
16. Joya, G.; Atencia, M.A.; Sandoval, F. Hopfield neural networks for optimization: Study of the different dynamics. Neurocomputing 2002, 43, 219–237.
17. Manivannan, R.; Samidurai, R.; Zhu, Q. Further improved results on stability and dissipativity analysis of static impulsive neural networks with interval time-varying delays. J. Frankl. Inst. 2017, 354, 6312–6340.
18. Marcu, T.; Köppen-Seliger, B.; Stucher, R. Design of fault detection for a hydraulic looper using dynamic neural networks. Control Eng. Pract. 2008, 16, 192–213.
19. Stamova, I.; Stamov, T.; Li, X. Global exponential stability of a class of impulsive cellular neural networks with supremums. Int. J. Adapt. Control Signal Process. 2014, 28, 1227–1239.
20. Park, M.J.; Kwon, O.M.; Ryu, J.H. Passivity and stability analysis of neural networks with time-varying delays via extended free-weighting matrices integral inequality. Neural Netw. 2018, 106, 67–78.
21. Wu, Z.G.; Park, J.H.; Su, H.; Chu, J. New results on exponential passivity of neural networks with time-varying delays. Nonlinear Anal. Real World Appl. 2012, 13, 1593–1599.
22. Du, Y.; Zhong, S.; Xu, J.; Zhou, N. Delay-dependent exponential passivity of uncertain cellular neural networks with discrete and distributed time-varying delays. ISA Trans. 2015, 56, 1–7.
23. Xu, S.; Lam, J.; Ho, D.W.C.; Zou, Y. Delay-dependent exponential stability for a class of neutral neural networks with time delays. J. Comput. Appl. Math. 2005, 183, 16–28.
24. Tu, Z.W.; Cao, J.; Alsaedi, A.; Alsaadi, F.E.; Hayat, T. Global Lagrange stability of complex-valued neural networks of neutral type with time-varying delays. Complexity 2016, 21, 438–450.
25. Maharajan, C.; Raja, R.; Cao, J.; Rajchakit, G.; Alsaedi, A. Novel results on passivity and exponential passivity for multiple discrete delayed neutral-type neural networks with leakage and distributed time-delays. Chaos Solitons Fractals 2018, 115, 268–282.
26. Weera, W.; Niamsup, P. Novel delay-dependent exponential stability criteria for neutral-type neural networks with non-differentiable time-varying discrete and neutral delays. Neurocomputing 2016, 173, 886–898.
27. Hill, D.J.; Moylan, P.J. Stability results for nonlinear feedback systems. Automatica 1977, 13, 377–382.
28. Santosuosso, G.J. Passivity of nonlinear systems with input-output feedthrough. Automatica 1997, 33, 693–697.
29. Xie, L.; Fu, M.; Li, H. Passivity analysis and passification for uncertain signal processing systems. IEEE Trans. Signal Process. 1998, 46, 2394–2403.
30. Chua, L.O. Passivity and complexity. IEEE Trans. Circuits Syst. I Fundam. Theory Appl. 1999, 46, 71–82.
31. Calcev, G.; Gorez, R.; Neyer, M.D. Passivity approach to fuzzy control systems. Automatica 1998, 34, 339–344.
32. Wu, C.W. Synchronization in arrays of coupled nonlinear systems: Passivity, circle criterion, and observer design. IEEE Trans. Circuits Syst. I Fundam. Theory Appl. 2001, 48, 1257–1261.
33. Chellaboina, V.; Haddad, W.M. Exponentially dissipative dynamical systems: A nonlinear extension of strict positive realness. J. Math. Prob. Eng. 2003, 2003, 25–45.
34. Fradkov, A.L.; Hill, D.J. Exponential feedback passivity and stabilizability of nonlinear systems. Automatica 1998, 34, 697–703.
35. Hayakawa, T.; Haddad, W.M.; Bailey, J.M.; Hovakimyan, N. Passivity-based neural network adaptive output feedback control for nonlinear nonnegative dynamical systems. IEEE Trans. Neural Netw. 2005, 16, 387–398.
36. Park, P.G.; Ko, J.W.; Jeong, C. Reciprocally convex approach to stability of systems with time-varying delays. Automatica 2011, 47, 235–238.
37. Li, T.; Guo, L.; Lin, C. A new criterion of delay-dependent stability for uncertain time-delay systems. IET Control Theory Appl. 2007, 1, 611–616.
Figure 1. The state response of the system (1).
Figure 2. The solution trajectory of the system (1).
Table 1. The maximum upper bound of γ for different τ_d.

| Method | τ_d = 0.5 | τ_d = 0.8 |
| Park and Kwon [2] | 1.65 | - |
| Tu et al. [24] | 2.66 | - |
| Manivannan et al. [7] | 3.94 | 3.43 |
| Theorem 1 | 4.06 | 3.68 |
Table 2. The maximum upper bound of ρ for different δ_d and various δ_1.

| δ_d | δ_1 = 0.0 | δ_1 = 0.1 | δ_1 = 0.2 |
| 0.1 | 0.5224 | 0.9252 | 0.4510 |
| 0.3 | 0.3204 | 0.5790 | 0.3782 |
| 0.5 | 0.1734 | 0.3362 | 0.2642 |
| 0.8 | 0.0092 | 0.1080 | 0.0804 |
Table 3. The maximum upper bound of ρ for different δ_d.

| Method | δ_d = 0.1 | δ_d = 0.3 | δ_d = 0.5 | δ_d = 0.9 |
| Wu et al. [21] | 5.4753 | 5.4121 | 5.3518 | 5.2864 |
| Du et al. [22] (N = 3, M = 3) | 6.0297 | 5.8124 | 5.7735 | 5.7294 |
| Theorem 2 | 6.0374 | 5.9652 | 5.9634 | 5.9634 |