Article

Double Features Zeroing Neural Network Model for Solving the Pseudoinverse of a Complex-Valued Time-Varying Matrix

1 College of Mathematics and Statistics, Jishou University, Jishou 416000, China
2 College of Information Science and Engineering, Jishou University, Jishou 416000, China
* Author to whom correspondence should be addressed.
Submission received: 8 May 2022 / Revised: 14 June 2022 / Accepted: 14 June 2022 / Published: 18 June 2022

Abstract: The pseudoinverse of a complex-valued matrix is a key quantity in various science and engineering fields, and many algorithms have been put forward to compute it. As research has developed, the time-varying matrix pseudoinverse has received more attention than the time-invariant one, and the zeroing neural network (ZNN) is known to be an efficient method for calculating the pseudoinverse of a complex-valued time-varying matrix. However, the initial ZNN (IZNN) and its extensions lack a mechanism that handles convergence and robustness together; that is, most existing research on ZNN models has studied convergence and robustness separately. In order to simultaneously improve these double features (i.e., convergence and robustness) of the ZNN in solving the complex-valued time-varying pseudoinverse, this paper puts forward a double features ZNN (DFZNN) model by adopting a specially designed time-varying parameter and a novel nonlinear activation function. Moreover, two nonlinear activation types for complex numbers are investigated. The global convergence, predefined-time convergence, and robustness are proven in theory, and the upper bound of the predefined convergence time is formulated exactly. The numerical simulation results verify the theoretical proofs: in contrast to existing complex-valued ZNN models, the DFZNN model has a shorter predefined convergence time in the zero-noise state and enhanced robustness in different noise states. Both the theoretical and empirical results show that the DFZNN model has a better ability to solve the time-varying complex-valued matrix pseudoinverse. Finally, the proposed DFZNN model is used to track the trajectory of a manipulator, which further verifies the reliability of the model.

1. Introduction

As an extended form of the matrix inverse, the pseudoinverse, also known as the Moore–Penrose inverse (or generalized inverse), appears frequently in mathematics and engineering fields, e.g., stochastic systems estimation [1], optimization [2], hyperspectral image processing [3], robotics [4], and forecasting [5]. Owing to the important role of the pseudoinverse in these fields, considerable effort has been devoted to numerical algorithms that compute a matrix pseudoinverse quickly and accurately. For instance, a third-order iterative method for obtaining the Drazin and Moore–Penrose inverses was presented in [6]. Some direct algorithms were established for computing a pseudoinverse [7,8]. Santiago Artidiello et al. designed a secant-type method for approximating generalized inverses of complex matrices [9]. In general, most existing methods for solving the real-valued, time-invariant matrix pseudoinverse are iterative algorithms. However, these iterative algorithms are not an optimal scheme when extended to the time-variant case, and they may even prove powerless or fail to complete the calculation.
Neural networks have proven to be an effective means of real-time computing. Over the past two decades, recurrent neural network (RNN) methods have been widely applied in science and engineering [10,11,12]. For example, Xiao et al. proposed an RNN for computing quadratic minimization [12]. As a classical RNN, the gradient-based neural network (GNN) and its extensions were designed for handling scientific issues [13,14,15]. The GNN takes the Frobenius norm of the error matrix as the performance index. In the time-invariant case, the GNN runs along the negative gradient direction so that the error norm converges to zero in infinite time. In the time-variant case, because it does not utilize time-derivative information, the GNN produces a lagging error between the theoretical solution and the neural output solution. Thus, the error norm of the GNN fails to vanish, which cannot meet the requirements of dynamic real-time computing. To overcome this defect, Zhang et al. proposed the zeroing neural network (ZNN) method [16]. By adopting the time derivative to track the theoretical solution, the ZNN makes the error decrease exponentially to zero as time goes on. ZNNs were therefore developed to solve time-variant problems as required [17,18,19,20,21,22]. However, the implementations of the ZNN models mentioned above assume a noise-free environment. In practice, neural network systems are inevitably perturbed by various noises, and the invading noise greatly affects the accuracy of solving a time-variant problem [23,24]. Improving the robustness of the ZNN in noisy environments has therefore become a hot topic, and researchers have developed several noise-resistant ZNN models. For instance, in [25], an integration-enhanced ZNN was first proposed for solving real-time-varying matrix inversion in the presence of different noises, and a noise-tolerant ZNN was proposed to solve a time-dependent matrix inverse [26]. The proposal of these noise-resistant ZNN models greatly extends the practical applications of the ZNN.
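The lagging-error behavior of the gradient design and the tracking behavior of the zeroing design can be illustrated with a minimal scalar experiment; the sketch below is our own illustration (the problem $a(t)y(t) = 1$, the gain, and the step size are arbitrary choices, not taken from the cited works):

```python
import numpy as np

# Illustrative sketch (our own, not from the cited works): for the scalar
# time-varying problem a(t) * y(t) = 1, compare a gradient-based design,
# which ignores da/dt, with a zeroing design, which uses it.
dt, T, gamma = 1e-3, 5.0, 50.0
a  = lambda t: 2.0 + np.sin(t)   # a(t) is kept away from zero
da = lambda t: np.cos(t)

y_gnn = y_znn = 0.5
for k in range(int(T / dt)):
    t = k * dt
    # GNN: gradient descent on (a*y - 1)^2 / 2  ->  lagging error
    y_gnn += dt * (-gamma * a(t) * (a(t) * y_gnn - 1.0))
    # ZNN: impose de/dt = -gamma*e with e = a*y - 1, i.e.
    # a * dy/dt = -gamma*e - (da/dt) * y
    e = a(t) * y_znn - 1.0
    y_znn += dt * (-gamma * e - da(t) * y_znn) / a(t)

err_gnn = abs(a(T) * y_gnn - 1.0)   # residual lag of the gradient design
err_znn = abs(a(T) * y_znn - 1.0)   # near zero for the zeroing design
```

Because the zeroing design feeds the time derivative $\dot{a}(t)$ back into the update, its residual is limited only by the integration step, while the gradient design settles at a nonzero lag.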
It is worth pointing out that the initial ZNN and the noise-resistant ZNN models take infinite time to converge, which cannot satisfy real-time applications. Time is precious when handling a time-varying problem in practice, and the convergence speed is a vital performance indicator of a neural system. Thus, there has been a wave of research on limited-time convergence ZNN (LTCZNN) models [27,28]. In [27], Li embedded a delicately designed nonlinear function in the ZNN model to accelerate the convergence speed; the resulting ZNN model achieved limited-time convergence, which is seminal in the ZNN field. Xiao proposed two novel activation functions that are better structured and more convenient to implement [28]. By embedding certain nonlinear activation functions, these LTCZNN models greatly improve the convergence speed. A fly in the ointment is that the initial values and noise significantly impact their convergence speed. To overcome these disadvantages of the LTCZNN, a predefined-time convergence ZNN (PTCZNN) with an improved nonlinear activation function was proposed in [29]. However, the design formula of the PTCZNN only controls the influence of the initial values within a particular scope, and the PTCZNN still does not improve robustness.
On the other hand, complex-valued neural networks have been widely used in numerous scientific and engineering areas, such as signal processing [30], imaging [31], automatic control [32], and optimization [33]. Owing to the high storage capacity of stable equilibrium points, the stability analysis of complex-valued neural networks has received considerable attention, and many interesting results have been obtained [34,35,36,37]. In contrast to real-valued neural networks, some complex-valued ZNN models have demonstrated competitive advantages in compatibility and processing power [38,39,40]. Inspired by the capability of these complex-valued ZNN models, in this paper we explore a complex-valued ZNN with better performance for solving the pseudoinverse of a complex-valued time-varying matrix.
In [41], an RNN model with time-varying parameters was first designed for solving a time-varying Sylvester equation; compared with the RNN with fixed parameters, its robustness is improved significantly. This result indicates that time-varying parameters have crucial implications for the ZNN. Recently, increasing attention has been focused on time-varying parameters in neural networks; for example, a varying-parameter ZNN model was designed for approximating outer inverses [42], and Xiao and He designed a flexible variable-parameter ZNN model with noise tolerance [43]. In order to simultaneously improve the robustness and convergence of the ZNN for solving the pseudoinverse of a complex-valued time-varying matrix, inspired by [29,41], we explore a double features ZNN (DFZNN) model that embeds a novel, delicately designed time-varying parameter and nonlinear activation function. Moreover, in this paper, we find two ways (called type I and type II) to activate a complex-valued number. Both activation types are proven to ensure the predefined-time convergence and good robustness of the DFZNN when solving the pseudoinverse of a complex-valued time-varying matrix under different noise circumstances. This research further enhances the compatibility and processing power of the ZNN in real-time computation. The innovations and contributions of this paper are listed below:
  • For solving complex-valued time-varying problems, the DFZNN model constructs a mechanism that addresses both convergence and robustness, which enriches the applications of complex-valued ZNNs;
  • The convergence rate is improved; specifically, the DFZNN model converges within a predefined time, and the convergence rate is not affected by the initial value;
  • The robustness is better: compared with existing ZNNs, the DFZNN model has stronger stability in noisy situations;
  • A simulation example and an application further illustrate the reliability of the proposed DFZNN model.

2. Problem Formulation and ZNN Models

In this section, the corresponding preliminaries and the formulations of the complex-valued time-varying matrix pseudoinverse are introduced first. Secondly, we provide the design process of the ZNN model. Different evolutions of IZNN, LTCZNN, PTCZNN, and DFZNN are introduced. Finally, based on two complex-valued activation types, we construct the DFZNN model.

2.1. Problem Formulation

Definition 1.
Given a complex-valued time-varying matrix $A(t) \in \mathbb{C}^{m \times n}$, if $Y(t) \in \mathbb{C}^{n \times m}$ satisfies all of the Penrose equations below:
$$A(t)Y(t)A(t) = A(t), \quad Y(t)A(t)Y(t) = Y(t), \quad \bigl(A(t)Y(t)\bigr)^H = A(t)Y(t), \quad \bigl(Y(t)A(t)\bigr)^H = Y(t)A(t),$$
then $Y(t)$ is named the pseudoinverse of the matrix $A(t)$, denoted by $A^+(t)$, which exists and is unique; here, $A^H(t)$ denotes the conjugate transpose of the matrix $A(t)$.
Moreover, if the complex-valued time-varying matrix $A(t) \in \mathbb{C}^{m \times n}$ is of full rank, which means $\operatorname{rank}(A(t)) = \min(m, n)$ for any $t$, we can obtain the pseudoinverse of $A(t)$ according to the following lemma.
Lemma 1.
If the complex-valued time-varying matrix $A(t) \in \mathbb{C}^{m \times n}$ is of full rank, then $A^H(t)A(t)$ or $A(t)A^H(t)$ is nonsingular. Hence, the pseudoinverse of $A(t)$ can be obtained as
$$A^+(t) = \begin{cases} \bigl(A^H(t)A(t)\bigr)^{-1} A^H(t), & m > n, \\ A^{-1}(t), & m = n, \\ A^H(t)\bigl(A(t)A^H(t)\bigr)^{-1}, & m < n. \end{cases}$$
For the situations $m > n$ and $m < n$, the procedure for computing the pseudoinverse is analogous. Thus, this paper investigates the full-rank complex-valued time-varying matrix $A(t) \in \mathbb{C}^{m \times n}$ with $m < n$. The complex-valued time-varying pseudoinverse problem can then be presented as
$$Y(t)A(t)A^H(t) = A^H(t),$$
where $Y(t)$ is an unknown complex-valued time-varying matrix; the purpose of this work is to design a DFZNN model that solves for $Y(t)$ within a predefined time under additive noises.
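As a quick numerical illustration of this formulation (our own sketch using NumPy, not part of the paper), the following code builds a random full-row-rank complex matrix with $m < n$, forms $Y = A^H(AA^H)^{-1}$, and checks the four Penrose conditions:

```python
import numpy as np

# A quick numerical check of Definition 1 and Lemma 1 (our own sketch):
# for a random full-row-rank complex matrix A with m < n, form
# Y = A^H (A A^H)^{-1} and verify the four Penrose conditions.
rng = np.random.default_rng(0)
m, n = 2, 3
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
AH = A.conj().T

Y = AH @ np.linalg.inv(A @ AH)     # pseudoinverse for the full-row-rank case

assert np.allclose(A @ Y @ A, A)               # A Y A = A
assert np.allclose(Y @ A @ Y, Y)               # Y A Y = Y
assert np.allclose((A @ Y).conj().T, A @ Y)    # (A Y)^H = A Y
assert np.allclose((Y @ A).conj().T, Y @ A)    # (Y A)^H = Y A
assert np.allclose(Y, np.linalg.pinv(A))       # matches NumPy's pinv
```

Note that $Y$ also satisfies $Y A A^H = A^H$, which is exactly the form the neural models below are designed to zero out.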

2.2. Design Process of ZNN Model

The design process of the ZNN model for solving a complex-valued time-varying pseudoinverse follows the procedure widely described in the aforementioned literature [16,17,18,19,20,21,22], which can be listed as follows.
Firstly, define an error-monitoring function:
$$D(t) = Y(t)A(t)A^H(t) - A^H(t). \qquad (1)$$
Secondly, define an evolution formula for $D(t)$:
$$\dot{D}(t) = \frac{\mathrm{d}D(t)}{\mathrm{d}t} = -\alpha\, W(D(t)), \qquad (2)$$
where $\alpha > 0$ is a scaling parameter and $W(\cdot)$ is an activation mapping applied element-wise to complex values. Thirdly, substituting (1) into (2), one can obtain a ZNN model:
$$\dot{Y}(t)A(t)A^H(t) = \dot{A}^H(t) - Y(t)\bigl(\dot{A}(t)A^H(t) + A(t)\dot{A}^H(t)\bigr) - \alpha\, W(D(t)). \qquad (3)$$
For complex-valued elements, the activation operates in two types in our work: in one, the activation function operates on the real part and the imaginary part separately; in the other, the activation function operates on the modulus while the argument is preserved. The definitions of the two types are given as follows:
Definition 2.
The activation type I:
$$W_1(P + iQ) = F(P) + iF(Q). \qquad (4)$$
The activation type II:
$$W_2(P + iQ) = F(\tau) \circ \exp(i\Theta), \qquad (5)$$
where $P \in \mathbb{R}^{m \times n}$, $Q \in \mathbb{R}^{m \times n}$, $i$ is the imaginary unit, and $\tau \in \mathbb{R}^{m \times n}$ and $\Theta \in [-\pi, \pi)^{m \times n}$ respectively denote the modulus and the argument of the complex matrix $P + iQ$. $F(\cdot)$ denotes an activation function, and $\circ$ denotes element-wise (Hadamard) multiplication.
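A small sketch of the two activation types (our own illustration; the elementwise function F below is an arbitrary odd function, not one of the activation functions used later in the paper):

```python
import numpy as np

# Sketch of the two complex activation types of Definition 2 (our own
# illustration); F is an arbitrary odd elementwise function.
def F(x):
    return np.sign(x) * np.abs(x) ** 0.5

def W1(Z):
    # type I: activate the real and imaginary parts separately
    return F(Z.real) + 1j * F(Z.imag)

def W2(Z):
    # type II: activate the modulus and keep the argument unchanged
    return F(np.abs(Z)) * np.exp(1j * np.angle(Z))

Z = np.array([[3.0 - 4.0j, 0.25j], [-1.0 + 0.0j, 1.0 + 1.0j]])
print(W1(Z))
print(W2(Z))   # same arguments as Z, moduli mapped through F
```

Note that type II never rotates an entry: only its modulus passes through F, while the argument is preserved.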
In the initial ZNN (IZNN), $F(\cdot)$ is the linear activation function, i.e., $F(x) = x$; according to (3), the IZNN for solving the complex-valued time-varying pseudoinverse can be expressed as:
$$\dot{Y}(t)A(t)A^H(t) = \dot{A}^H(t) - Y(t)\bigl(\dot{A}(t)A^H(t) + A(t)\dot{A}^H(t)\bigr) - \alpha D(t),$$
where $\dot{Y}(t)$, $\dot{A}^H(t)$, and $\dot{A}(t)$ respectively denote the time derivatives of $Y(t)$, $A^H(t)$, and $A(t)$.
The IZNN is an effective approach for dealing with time-varying problems, but its convergence speed is restricted because it adopts a linear activation function. To improve the convergence rate, Li proposed an LTCZNN embedded with a delicately designed nonlinear activation function, which achieves limited-time convergence [27]. The designed nonlinear activation function is expressed as:
$$F_1(x) = \frac{\operatorname{sign}(x)\bigl(|x|^{\mu} + |x|^{1/\mu}\bigr)}{2}, \qquad (7)$$
where $x \in \mathbb{R}$ and $0 < \mu < 1$; $\operatorname{sign}(\cdot)$ is defined as:
$$\operatorname{sign}(x) = \begin{cases} 1, & x > 0, \\ 0, & x = 0, \\ -1, & x < 0. \end{cases}$$
In terms of convergence, the LTCZNN achieves a significant improvement. However, the initial value greatly impacts its convergence. In order to reduce the effect of the initial value, Li et al. proposed a PTCZNN with a modified nonlinear activation function:
$$F_2(x) = \operatorname{sign}(x)\bigl(l_1|x|^{\mu} + l_2|x|^{\nu} + l_3\bigr) + l_4 x, \qquad (8)$$
where the parameter $\mu$ has the same definition as in (7), and $\nu > 1$, $l_1, l_2 > 0$, $l_3, l_4 \ge 0$.
In all of the above ZNN models, the parameters in (2) are fixed. To improve the performance of these models, the parameters often need to be adjusted frequently in search of an approximately optimal value, which is inefficient in practical applications. Thus, Zhang et al. first proposed a VPZNN; in contrast to fixed-parameter ZNNs, the VPZNN has better robustness, which is beneficial for online processing [41]. In the VPZNN, the evolution of $D(t)$ is defined as
$$\dot{D}(t) = -(t^{\theta} + \theta)\, F_1(D(t)),$$
where $\theta > 0$, $t^{\theta} + \theta > 0$ is a time-varying scaling parameter, and $F_1$ denotes the activation function defined in (7).
The proposal of the VPZNN makes a breakthrough in the design of time-varying parameters; however, the initial value still impacts the solving task of the VPZNN model, and increasing its convergence rate sacrifices efficiency. In addition, the PTCZNN achieves predefined-time convergence only by controlling the influence of the initial value within a limited scope. This inspires us to explore a new time-varying parameter model that is not disturbed by the initial value and accelerates the convergence rate without remarkable efficiency loss. The novel evolution of $D(t)$ is designed as:
$$\dot{D}(t) = -\alpha \exp\bigl(\zeta_1 \operatorname{arccot}(t) + \zeta_2 t\bigr)\, W(D(t)), \qquad (9)$$
where $\alpha, \zeta_1, \zeta_2 > 0$. The exponential part of the time-varying parameter contains the attenuation factor $\zeta_1 \operatorname{arccot}(t)$ and the noise-suppression factor $\zeta_2 t$. The attenuation factor is designed on the basis of the properties of the arccotangent function, which guarantees that the model converges quickly and possesses short-term noise-suppression capability; the noise-suppression factor ensures that the model possesses long-term noise-suppression capability. The nonlinear activation function $F(\cdot)$ in $W(\cdot)$ is designed as:
$$F_3(x) = \begin{cases} l_1 |x|^{\mu} \operatorname{sign}(x) + l_5 x, & \text{if } |x| \le 1, \\ l_2 |x|^{\nu} \operatorname{sign}(x) + l_5 x, & \text{if } |x| > 1, \end{cases} \qquad (10)$$
where $l_1, l_2$ are defined as in (8) and $l_5 > 0$. Then, the DFZNN model is constructed as:
$$\dot{Y}(t)A(t)A^H(t) = \dot{A}^H(t) - Y(t)\bigl(\dot{A}(t)A^H(t) + A(t)\dot{A}^H(t)\bigr) - \alpha \exp\bigl(\zeta_1 \operatorname{arccot}(t) + \zeta_2 t\bigr)\, W(D(t)).$$
In addition, the DFZNN model with noise intervention can be described as:
$$\dot{Y}(t)A(t)A^H(t) = \dot{A}^H(t) - Y(t)\bigl(\dot{A}(t)A^H(t) + A(t)\dot{A}^H(t)\bigr) - \alpha \exp\bigl(\zeta_1 \operatorname{arccot}(t) + \zeta_2 t\bigr)\, W(D(t)) + \Delta(t),$$
where Δ ( t ) denotes additive complex-valued noise.
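The DFZNN evolution can be integrated with a simple Euler rule. The sketch below is our own illustration: the matrix $A(t)$, the step size, and the parameter values are arbitrary choices, and the noiseless model with activation type I is integrated and compared against the true pseudoinverse:

```python
import numpy as np

# Euler-discretized sketch of the DFZNN (our own illustration): the matrix
# A(t), step size, and parameters below are arbitrary choices, not the
# paper's experiment. Activation type I with the piecewise function F3.
alpha, zeta1, zeta2 = 3.0, 1.0, 0.8
l1, l2, l5, mu, nu = 0.5, 0.5, 4.0, 0.5, 2.0

def F3(x):                       # piecewise activation, elementwise on reals
    return np.where(np.abs(x) <= 1.0,
                    l1 * np.abs(x) ** mu * np.sign(x) + l5 * x,
                    l2 * np.abs(x) ** nu * np.sign(x) + l5 * x)

def W1(Z):                       # activation type I: real and imaginary parts
    return F3(Z.real) + 1j * F3(Z.imag)

def A_of(t):                     # an arbitrary full-row-rank 2x3 complex matrix
    return np.array([[np.exp(1j * t), 1.0, 2.0],
                     [0.5, np.exp(-1j * t), 1.0]])

def dA_of(t):
    return np.array([[1j * np.exp(1j * t), 0.0, 0.0],
                     [0.0, -1j * np.exp(-1j * t), 0.0]])

dt, T = 1e-4, 2.0
rng = np.random.default_rng(1)
Y = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))

for k in range(int(T / dt)):
    t = k * dt
    A, dA = A_of(t), dA_of(t)
    AH, dAH = A.conj().T, dA.conj().T
    D = Y @ A @ AH - AH                            # error function (1)
    gain = alpha * np.exp(zeta1 * (np.pi / 2 - np.arctan(t)) + zeta2 * t)
    rhs = dAH - Y @ (dA @ AH + A @ dAH) - gain * W1(D)
    Y += dt * rhs @ np.linalg.inv(A @ AH)          # Euler step of the model

# after T seconds the state should be close to the true pseudoinverse
print(np.linalg.norm(Y - np.linalg.pinv(A_of(T))))
```

Here $\operatorname{arccot}(t)$ is evaluated as $\pi/2 - \arctan(t)$; the growing factor $\exp(\zeta_2 t)$ is what later provides the noise suppression.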

3. Theoretical Analysis

In this section, we mainly present the theoretical results of the DFZNN model. At first, the global stability and predefined time convergence of the DFZNN model are proven in theory. Then, the robustness of the DFZNN model with unknown additive noises is discussed.

3.1. Global Stability

Theorem 1.
Given a complex-valued time-varying matrix $A(t)$ of full rank, the state matrix $Y(t)$ synthesized by the DFZNN with activation type I, starting from a stochastic initial state $Y(0)$, always converges to the theoretical solution; i.e., the error matrix $D(t)$ globally converges to 0.
Proof. 
Defining the error matrix $D(t)$, according to Equation (9), we obtain
$$\dot{D}(t) = -\alpha \exp\bigl(\zeta_1 \operatorname{arccot}(t) + \zeta_2 t\bigr)\, W_1(D(t)).$$
Based on the definition of $W_1$ in (4), element-wise,
$$\dot{D}_{ij}(t) = -\alpha \exp\bigl(\zeta_1 \operatorname{arccot}(t) + \zeta_2 t\bigr) W_1(D_{ij}(t)) = -\alpha \exp\bigl(\zeta_1 \operatorname{arccot}(t) + \zeta_2 t\bigr)\bigl(F_3(p_{ij}(t)) + i F_3(q_{ij}(t))\bigr),$$
where $D_{ij}(t) \in \mathbb{C}$ is the $ij$th entry of the complex matrix $D(t)$, and $p_{ij}(t)$ and $q_{ij}(t)$ respectively denote the real and imaginary parts of $D_{ij}(t)$. We study the real and imaginary parts of $\dot{D}_{ij}(t)$ separately; two equations can be obtained as follows:
$$\dot{p}_{ij}(t) = -\alpha \exp\bigl(\zeta_1 \operatorname{arccot}(t) + \zeta_2 t\bigr) F_3(p_{ij}(t)),$$
$$\dot{q}_{ij}(t) = -\alpha \exp\bigl(\zeta_1 \operatorname{arccot}(t) + \zeta_2 t\bigr) F_3(q_{ij}(t)).$$
For the variable $p_{ij}(t)$, we choose the Lyapunov function $\upsilon_{ij}(t) = p_{ij}^2(t)$. Then, its derivative is
$$\dot{\upsilon}_{ij}(t) = 2 p_{ij}(t) \dot{p}_{ij}(t) = -2\alpha \exp\bigl(\zeta_1 \operatorname{arccot}(t) + \zeta_2 t\bigr)\, p_{ij}(t) F_3(p_{ij}(t)) \le -2\alpha \exp(\zeta_2 t)\, p_{ij}(t) F_3(p_{ij}(t)). \qquad (15)$$
According to the magnitude of $p_{ij}(t)$, i.e., $|p_{ij}(t)| > 1$ or $|p_{ij}(t)| \le 1$, two situations are discussed below.
First situation: $|p_{ij}(t)| > 1$. Based on the definition of $F_3(\cdot)$ in (10), (15) can be divided into two steps.
(1) For $|p_{ij}(t)| > 1$, Equation (15) can be represented as
$$\dot{\upsilon}_{ij}(t) \le -2\alpha \exp(\zeta_2 t)\, p_{ij}(t)\bigl(l_2 |p_{ij}(t)|^{\nu} \operatorname{sign}(p_{ij}(t)) + l_5 p_{ij}(t)\bigr) = -2\alpha \exp(\zeta_2 t)\bigl(l_2 |p_{ij}(t)|^{\nu+1} + l_5 |p_{ij}(t)|^2\bigr).$$
Thus, $\dot{\upsilon}_{ij}(t)$ is negative definite, which means that $|p_{ij}(t)|$ will decrease until $|p_{ij}(t)| \le 1$; the process then enters the second step:
(2) For $|p_{ij}(t)| \le 1$, Equation (15) can be represented as
$$\dot{\upsilon}_{ij}(t) \le -2\alpha \exp(\zeta_2 t)\, p_{ij}(t)\bigl(l_1 |p_{ij}(t)|^{\mu} \operatorname{sign}(p_{ij}(t)) + l_5 p_{ij}(t)\bigr) = -2\alpha \exp(\zeta_2 t)\bigl(l_1 |p_{ij}(t)|^{\mu+1} + l_5 |p_{ij}(t)|^2\bigr) \le 0;$$
clearly, $\dot{\upsilon}_{ij}(t)$ is negative definite.
Second situation: $|p_{ij}(t)| \le 1$, which is just like the second step of the first situation; thus, $\dot{\upsilon}_{ij}(t)$ is negative definite. In general, based on Lyapunov stability theory, it can be concluded that $p_{ij}(t)$ globally converges to 0. Similarly, it can be proven that $q_{ij}(t)$ globally converges to 0. Then, $D_{ij}(t)$ globally converges to 0. In summary, the error matrix $D(t)$ globally converges to 0. □
Theorem 2.
Given a complex-valued time-varying matrix $A(t)$ of full rank, the state matrix $Y(t)$ synthesized by the DFZNN with activation type II, starting from a stochastic initial state $Y(0)$, always converges to the theoretical solution; i.e., the error matrix $D(t)$ globally converges to 0.
Proof. 
According to activation type II, we obtain the derivative of $D(t)$ as
$$\dot{D}(t) = -\alpha \exp\bigl(\zeta_1 \operatorname{arccot}(t) + \zeta_2 t\bigr)\, W_2(D(t)).$$
Element-wise,
$$\dot{D}_{ij}(t) = -\alpha \exp\bigl(\zeta_1 \operatorname{arccot}(t) + \zeta_2 t\bigr)\, W_2(D_{ij}(t)),$$
where $D_{ij}(t) \in \mathbb{C}$ is the $ij$th entry of the complex matrix $D(t)$. Note that $D_{ij}(t)$ is a complex scalar; we construct the Lyapunov function
$$\upsilon_{ij}(t) = |D_{ij}(t)|^2 = D_{ij}(t)\overline{D_{ij}(t)},$$
where $|D_{ij}(t)|$ denotes the modulus of $D_{ij}(t)$. The derivative of $\upsilon_{ij}(t)$ is
$$\dot{\upsilon}_{ij}(t) = \dot{D}_{ij}(t)\overline{D_{ij}(t)} + D_{ij}(t)\dot{\overline{D_{ij}(t)}} = -\alpha \exp\bigl(\zeta_1 \operatorname{arccot}(t) + \zeta_2 t\bigr)\Bigl(W_2(D_{ij}(t))\overline{D_{ij}(t)} + D_{ij}(t)\overline{W_2(D_{ij}(t))}\Bigr).$$
According to the definition of $W_2(\cdot)$ in (5), we have
$$W_2(D_{ij}(t)) = F_3\bigl(|D_{ij}(t)|\bigr)\exp\bigl(\sqrt{-1}\cdot\arg(D_{ij}(t))\bigr),$$
where $\sqrt{-1}$ represents the imaginary unit; therefore,
$$\begin{aligned} \dot{\upsilon}_{ij}(t) &= -\alpha \exp\bigl(\zeta_1 \operatorname{arccot}(t) + \zeta_2 t\bigr)\Bigl(\overline{D_{ij}(t)}\, F_3(|D_{ij}(t)|)\, e^{\sqrt{-1}\arg(D_{ij}(t))} + D_{ij}(t)\, F_3(|D_{ij}(t)|)\, e^{-\sqrt{-1}\arg(D_{ij}(t))}\Bigr) \\ &= -\alpha \exp\bigl(\zeta_1 \operatorname{arccot}(t) + \zeta_2 t\bigr)\Bigl(|D_{ij}(t)|\, e^{-\sqrt{-1}\arg(D_{ij}(t))} F_3(|D_{ij}(t)|)\, e^{\sqrt{-1}\arg(D_{ij}(t))} + |D_{ij}(t)|\, e^{\sqrt{-1}\arg(D_{ij}(t))} F_3(|D_{ij}(t)|)\, e^{-\sqrt{-1}\arg(D_{ij}(t))}\Bigr) \\ &= -2\alpha \exp\bigl(\zeta_1 \operatorname{arccot}(t) + \zeta_2 t\bigr)\, |D_{ij}(t)|\, F_3\bigl(|D_{ij}(t)|\bigr). \end{aligned} \qquad (17)$$
Considering similarly the magnitude of $|D_{ij}(t)|$, i.e., $|D_{ij}(t)| > 1$ or $|D_{ij}(t)| \le 1$, two situations are discussed below.
First situation: $|D_{ij}(t)| > 1$. Based on the definition of $F_3(\cdot)$ in (10), Equation (17) can be divided into two steps.
(1) For $|D_{ij}(t)| > 1$, Equation (17) can be represented as
$$\dot{\upsilon}_{ij}(t) \le -2\alpha \exp(\zeta_2 t)\, |D_{ij}(t)|\bigl(l_2 |D_{ij}(t)|^{\nu} + l_5 |D_{ij}(t)|\bigr) = -2\alpha \exp(\zeta_2 t)\bigl(l_2 |D_{ij}(t)|^{\nu+1} + l_5 |D_{ij}(t)|^2\bigr).$$
Thus, $\dot{\upsilon}_{ij}(t)$ is negative definite, which means that $|D_{ij}(t)|$ will decrease until $|D_{ij}(t)| \le 1$; the process then enters the second step:
(2) For $|D_{ij}(t)| \le 1$, Equation (17) can be represented as
$$\dot{\upsilon}_{ij}(t) \le -2\alpha \exp(\zeta_2 t)\, |D_{ij}(t)|\bigl(l_1 |D_{ij}(t)|^{\mu} + l_5 |D_{ij}(t)|\bigr) = -2\alpha \exp(\zeta_2 t)\bigl(l_1 |D_{ij}(t)|^{\mu+1} + l_5 |D_{ij}(t)|^2\bigr) \le 0;$$
clearly, $\dot{\upsilon}_{ij}(t)$ is negative definite.
Second situation: $|D_{ij}(t)| \le 1$, which is just like the second step of the first situation; thus, $\dot{\upsilon}_{ij}(t)$ is negative definite. In general, based on Lyapunov stability theory, it can be concluded that $D_{ij}(t)$ globally converges to 0. Therefore, the error matrix $D(t)$ globally converges to 0. □

3.2. Predefined Time Convergence

Theorem 3.
Given the complex-valued time-varying matrix $A(t)$ of full rank, the state matrix $Y(t)$ synthesized by the DFZNN with activation type I (or type II), starting from a stochastic initial state $Y(0)$, always converges to the theoretical solution within the predefined time $t_{\mathrm{top}}$:
$$t_{\mathrm{top}} \le \frac{1}{\zeta_2}\ln\Bigl(1 + \frac{\zeta_2}{\alpha l_1 (1-\mu)} + \frac{\zeta_2}{\alpha l_2 (\nu-1)}\Bigr).$$
Proof. 
In this proof, we first define the same error matrix $D(t)$ as in Theorem 1, and then consider the cases of activation type I and activation type II, respectively.
When choosing activation type I, as in Theorem 1, we study the real and imaginary parts of $\dot{D}_{ij}(t)$ separately; two equations can be obtained as follows:
$$\dot{p}_{ij}(t) = -\alpha \exp\bigl(\zeta_1 \operatorname{arccot}(t) + \zeta_2 t\bigr) F_3(p_{ij}(t)),$$
$$\dot{q}_{ij}(t) = -\alpha \exp\bigl(\zeta_1 \operatorname{arccot}(t) + \zeta_2 t\bigr) F_3(q_{ij}(t)),$$
where $p_{ij}(t)$ and $q_{ij}(t)$ respectively denote the real and imaginary parts of $D_{ij}(t)$. For the real variable $p_{ij}(t)$, we choose the Lyapunov function $\upsilon_{ij}(t) = |p_{ij}(t)|$. Then, its derivative is
$$\frac{\mathrm{d}\upsilon_{ij}(t)}{\mathrm{d}t} = \dot{p}_{ij}(t)\operatorname{sign}(p_{ij}(t)) = -\alpha \exp\bigl(\zeta_1 \operatorname{arccot}(t) + \zeta_2 t\bigr) F_3(p_{ij}(t))\operatorname{sign}(p_{ij}(t)) \le -\alpha \exp(\zeta_2 t) F_3(p_{ij}(t))\operatorname{sign}(p_{ij}(t)). \qquad (18)$$
Similarly, according to $|p_{ij}(t)| > 1$ or $|p_{ij}(t)| \le 1$, the whole process is divided into two steps.
First step: the elapsed time $t_1$ denotes the time during which $|p_{ij}(t)|$ decreases from the initial value $|p_{ij}(0)| > 1$ to $|p_{ij}(t_1)| = 1$; then inequality (18) yields
$$\frac{\mathrm{d}\upsilon_{ij}(t)}{\mathrm{d}t} \le -\alpha \exp(\zeta_2 t)\bigl(l_2 |p_{ij}(t)|^{\nu} + l_5 |p_{ij}(t)|\bigr) \le -\alpha l_2 \exp(\zeta_2 t)\, |p_{ij}(t)|^{\nu};$$
then we have
$$|p_{ij}(t)|^{-\nu}\, \mathrm{d}\upsilon_{ij}(t) \le -\alpha l_2 \exp(\zeta_2 t)\, \mathrm{d}t,$$
that is,
$$\upsilon_{ij}(t)^{-\nu}\, \mathrm{d}\upsilon_{ij}(t) \le -\alpha l_2 \exp(\zeta_2 t)\, \mathrm{d}t. \qquad (19)$$
Integrating both sides of (19), we obtain
$$\int_{\upsilon_{ij}(0)}^{\upsilon_{ij}(t_1)} \upsilon_{ij}^{-\nu}\, \mathrm{d}\upsilon_{ij} \le \int_{0}^{t_1} -\alpha l_2 \exp(\zeta_2 t)\, \mathrm{d}t \;\Longrightarrow\; \frac{1}{1-\nu}\,\upsilon_{ij}^{1-\nu}\Big|_{\upsilon_{ij}(0)}^{1} \le -\frac{\alpha l_2}{\zeta_2}\exp(\zeta_2 t)\Big|_{0}^{t_1} \;\Longrightarrow\; \frac{1}{1-\nu}\bigl(1 - \upsilon_{ij}(0)^{1-\nu}\bigr) \le -\frac{\alpha l_2}{\zeta_2}\bigl(\exp(\zeta_2 t_1) - 1\bigr).$$
Then, we have
$$t_1 \le \frac{1}{\zeta_2}\ln\Bigl(1 + \frac{\zeta_2\bigl(1 - \upsilon_{ij}(0)^{1-\nu}\bigr)}{\alpha l_2 (\nu-1)}\Bigr) \le \frac{1}{\zeta_2}\ln\Bigl(1 + \frac{\zeta_2}{\alpha l_2 (\nu-1)}\Bigr). \qquad (20)$$
Second step: the elapsed time $t_2$ denotes the time during which $|p_{ij}(t)|$ decreases from $|p_{ij}(t_1)| = 1$ to $|p_{ij}(t_1 + t_2)| = 0$; according to (10), inequality (18) yields
$$\frac{\mathrm{d}\upsilon_{ij}(t)}{\mathrm{d}t} \le -\alpha \exp(\zeta_2 t)\bigl(l_1 |p_{ij}(t)|^{\mu} + l_5 |p_{ij}(t)|\bigr) \le -\alpha l_1 \exp(\zeta_2 t)\, |p_{ij}(t)|^{\mu},$$
that is,
$$\upsilon_{ij}(t)^{-\mu}\, \mathrm{d}\upsilon_{ij}(t) \le -\alpha l_1 \exp(\zeta_2 t)\, \mathrm{d}t. \qquad (22)$$
Integrating both sides of (22), we obtain
$$\int_{\upsilon_{ij}(t_1)}^{\upsilon_{ij}(t_1+t_2)} \upsilon_{ij}^{-\mu}\, \mathrm{d}\upsilon_{ij} \le \int_{t_1}^{t_1+t_2} -\alpha l_1 \exp(\zeta_2 t)\, \mathrm{d}t \;\Longrightarrow\; \frac{1}{1-\mu}\,\upsilon_{ij}^{1-\mu}\Big|_{1}^{0} \le -\frac{\alpha l_1}{\zeta_2}\exp(\zeta_2 t)\Big|_{t_1}^{t_1+t_2} \;\Longrightarrow\; \exp\bigl(\zeta_2 (t_1 + t_2)\bigr) \le \exp(\zeta_2 t_1) + \frac{\zeta_2}{\alpha l_1 (1-\mu)}.$$
Combining this with (20), the predefined convergence time $t_{\mathrm{top}}$ of the real variable $p_{ij}(t)$ is obtained as
$$t_{\mathrm{top}} = t_1 + t_2 \le \frac{1}{\zeta_2}\ln\Bigl(1 + \frac{\zeta_2\bigl(1 - \upsilon_{ij}(0)^{1-\nu}\bigr)}{\alpha l_2 (\nu-1)} + \frac{\zeta_2}{\alpha l_1 (1-\mu)}\Bigr) \le \frac{1}{\zeta_2}\ln\Bigl(1 + \frac{\zeta_2}{\alpha l_1 (1-\mu)} + \frac{\zeta_2}{\alpha l_2 (\nu-1)}\Bigr).$$
Identically, the same predefined convergence time $t_{\mathrm{top}}$ is obtained for the imaginary variable $q_{ij}(t)$.
When choosing activation type II, for $\dot{D}_{ij}(t) = -\alpha \exp(\zeta_1 \operatorname{arccot}(t) + \zeta_2 t) W_2(D_{ij}(t))$, we also define the Lyapunov function $\upsilon_{ij}(t) = |D_{ij}(t)|$. Then, by (17), the derivative of $\upsilon_{ij}(t)$ is
$$\dot{\upsilon}_{ij}(t) = \frac{1}{2|D_{ij}(t)|}\,\frac{\mathrm{d}|D_{ij}(t)|^2}{\mathrm{d}t} = -\alpha \exp\bigl(\zeta_1 \operatorname{arccot}(t) + \zeta_2 t\bigr) F_3\bigl(|D_{ij}(t)|\bigr).$$
Similarly to activation type I, from $|D_{ij}(t)| > 1$ to $|D_{ij}(t)| = 0$, the whole process is divided into two steps, and the predefined convergence time $t_{\mathrm{top}}$ of $D_{ij}(t)$ is obtained as
$$t_{\mathrm{top}} = t_1 + t_2 \le \frac{1}{\zeta_2}\ln\Bigl(1 + \frac{\zeta_2\bigl(1 - \upsilon_{ij}(0)^{1-\nu}\bigr)}{\alpha l_2 (\nu-1)} + \frac{\zeta_2}{\alpha l_1 (1-\mu)}\Bigr) \le \frac{1}{\zeta_2}\ln\Bigl(1 + \frac{\zeta_2}{\alpha l_1 (1-\mu)} + \frac{\zeta_2}{\alpha l_2 (\nu-1)}\Bigr).$$
This completes the proof. □
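The predefined-time bound of Theorem 3 is straightforward to evaluate numerically; the following sketch (our own, with illustrative parameter values rather than those used later in Section 4) shows how the bound tightens as the gain $\alpha$ grows:

```python
import numpy as np

# Evaluating the predefined-time bound of Theorem 3 (our own sketch;
# the parameter values are illustrative).
def t_top(alpha, zeta2, l1, l2, mu, nu):
    return np.log(1.0 + zeta2 / (alpha * l1 * (1.0 - mu))
                      + zeta2 / (alpha * l2 * (nu - 1.0))) / zeta2

print(t_top(1.0, 1.0, 1.0, 1.0, 0.5, 2.0))   # ln(4) ~ 1.386
print(t_top(2.0, 1.0, 1.0, 1.0, 0.5, 2.0))   # larger gain alpha tightens the bound
```

Note that the bound is independent of the initial state $Y(0)$, which is exactly the property that distinguishes predefined-time from limited-time convergence.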

3.3. Robustness

Theorem 4.
Given a complex-valued time-varying matrix $A(t)$ of full rank, the state matrix $Y(t)$ synthesized by the DFZNN under an unknown bounded noise $\Delta(t)$, starting from a stochastic initial state $Y(0)$, still converges to the theoretical solution; i.e., the error matrix $D(t)$ globally converges to 0.
Proof. 
According to the error matrix $D(t)$, when choosing activation type I, we obtain
$$\dot{D}(t) = -\alpha \exp\bigl(\zeta_1 \operatorname{arccot}(t) + \zeta_2 t\bigr)\, W_1(D(t)) + \Delta(t).$$
Element-wise,
$$\dot{D}_{ij}(t) = -\alpha \exp\bigl(\zeta_1 \operatorname{arccot}(t) + \zeta_2 t\bigr)\, W_1(D_{ij}(t)) + \delta_{ij}(t);$$
then, two equations for the real part and the imaginary part can be obtained as follows:
$$\dot{p}_{ij}(t) = -\alpha \exp\bigl(\zeta_1 \operatorname{arccot}(t) + \zeta_2 t\bigr) F_3(p_{ij}(t)) + \delta_a(t),$$
$$\dot{q}_{ij}(t) = -\alpha \exp\bigl(\zeta_1 \operatorname{arccot}(t) + \zeta_2 t\bigr) F_3(q_{ij}(t)) + \delta_b(t),$$
where $\delta_a(t)$ and $\delta_b(t)$ respectively denote the real part and imaginary part of $\delta_{ij}(t)$. For the real variable $p_{ij}(t)$, we again choose the Lyapunov function $\upsilon_{ij}(t) = \frac{1}{2} p_{ij}^2(t)$; then,
$$\dot{\upsilon}_{ij}(t) = p_{ij}(t)\dot{p}_{ij}(t) = p_{ij}(t)\Bigl(-\alpha \exp\bigl(\zeta_1 \operatorname{arccot}(t) + \zeta_2 t\bigr) F_3(p_{ij}(t)) + \delta_a(t)\Bigr).$$
If $\dot{\upsilon}_{ij}(t) < 0$, $|p_{ij}(t)|$ approaches zero. If $\dot{\upsilon}_{ij}(t) > 0$, $|p_{ij}(t)|$ increases, i.e., $p_{ij}(t)\bigl(-\alpha \exp(\zeta_1 \operatorname{arccot}(t) + \zeta_2 t) F_3(p_{ij}(t)) + \delta_a(t)\bigr) > 0$. Furthermore, we have
$$|\delta_a(t)| > \bigl|\alpha \exp\bigl(\zeta_1 \operatorname{arccot}(t) + \zeta_2 t\bigr) F_3(p_{ij}(t))\bigr|. \qquad (25)$$
$|p_{ij}(t)|$ will not increase until $-\alpha \exp(\zeta_1 \operatorname{arccot}(t) + \zeta_2 t) F_3(p_{ij}(t)) + \delta_a(t)$ decreases to zero. According to Equation (25), we have
$$\bigl|F_3(p_{ij}(t))\bigr| = \Bigl|\frac{\delta_a(t)}{\alpha \exp\bigl(\zeta_1 \operatorname{arccot}(t) + \zeta_2 t\bigr)}\Bigr|.$$
Therefore, as time goes on, $|p_{ij}(t)|$ is bounded by
$$0 < |p_{ij}(t)| < \Bigl|F_3^{-1}\Bigl(\frac{\delta_a(t)}{\alpha \exp\bigl(\zeta_1 \operatorname{arccot}(t) + \zeta_2 t\bigr)}\Bigr)\Bigr|, \qquad (26)$$
where $F_3^{-1}$ denotes the inverse function of $F_3$. According to (10) and $|F_3(x)| \ge |l_5 x|$, we have $|F_3^{-1}(x)| \le |x / l_5|$. Inequality (26) can thus be expressed as
$$0 < |p_{ij}(t)| < \Bigl|\frac{\delta_a(t)}{\alpha l_5 \exp\bigl(\zeta_1 \operatorname{arccot}(t) + \zeta_2 t\bigr)}\Bigr|.$$
Because $\Delta(t)$ is an unknown bounded noise, there must exist a constant $\xi$ satisfying $|\delta_a(t)| \le \xi$. Therefore, we have
$$0 < |p_{ij}(t)| < \Bigl|\frac{\xi}{\alpha l_5 \exp\bigl(\zeta_1 \operatorname{arccot}(t) + \zeta_2 t\bigr)}\Bigr|.$$
This means that $|p_{ij}(t)|$ approaches zero over time, i.e.,
$$0 \le \lim_{t \to \infty} |p_{ij}(t)| \le \lim_{t \to \infty} \Bigl|\frac{\xi}{\alpha l_5 \exp\bigl(\zeta_1 \operatorname{arccot}(t) + \zeta_2 t\bigr)}\Bigr| = 0.$$
Similarly, $|q_{ij}(t)|$ approaches zero over time; thus, when $t \to \infty$, $|D_{ij}(t)| \to 0$. Hence,
$$\lim_{t \to \infty} \|D(t)\|_F = 0.$$
When choosing activation type II, we also define the Lyapunov function $\upsilon_{ij}(t) = \frac{1}{2} |D_{ij}(t)|^2$. According to the boundedness of the modulus $|D_{ij}(t)|$ and Equation (17), $\dot{\upsilon}_{ij}(t)$ is negative definite, and it can be concluded that $|D_{ij}(t)|$ globally converges to 0. Hence,
$$\lim_{t \to \infty} \|D(t)\|_F = 0.$$
This completes the proof. □

4. Numerical Experiments

In this section, we employ an example to verify the superiority of the proposed DFZNN in solving the pseudoinverse of the complex-valued time-varying matrix.
Example 1.
For the purposes of comparison and demonstration, the following complex-valued time-dependent matrix $A(t)$ of full rank is exhibited:
$$A(t) = \begin{bmatrix} e^{it} & e^{i(t+0.5\pi)} & 2e^{it} \\ e^{i(t-0.5\pi)} & e^{it} & e^{i(t+0.5\pi)} \end{bmatrix};$$
the theoretical pseudoinverse of $A(t)$ is
$$A^+(t) = \begin{bmatrix} 0.5\, e^{it} & e^{i(t+0.5\pi)} \\ 0.5\, e^{i(t+0.5\pi)} & e^{it} \\ e^{it} & e^{i(t-0.5\pi)} \end{bmatrix}.$$
To compare the performance of the models, the relevant parameters in the four models (DFZNN, PTCZNN, LTCZNN, IZNN) are set to the same values. Without loss of generality, in this paper, $\alpha = 3$, $\mu = 0.5$, $\nu = 2$, $l_1 = l_2 = l_4 = 0.5$, $l_3 = l_5 = 4$, $\zeta_1 = 1$, $\zeta_2 = 0.8$.
Firstly, for computing the pseudoinverse of a complex-valued time-varying matrix in the noiseless situation, the experimental results generated by the proposed DFZNN model with the two activation types are shown in Figure 1. In Figure 1a, beginning from a random initial state $Y(0) \in \mathbb{C}^{n \times m}$, the dynamic trajectories $Y(t)$ generated by the proposed DFZNN with activation type I converge to the theoretical solution precisely and rapidly. Figure 1b depicts the profile of the dynamic trajectories generated by the proposed DFZNN with activation type II; it can also be observed that the neural network output $Y(t)$ arrives at the theoretical solution precisely and rapidly. The estimation error is measured by the Frobenius norm $\|D(t)\|_F$, and the estimation errors of the DFZNN with activation types I and II are plotted in Figure 2. The simulation results indicate that the DFZNN with the two activation types has the same stability and convergence. The results verify the performance proven in Theorems 1–3.
Secondly, we compare the convergence of the four ZNN models under different noise situations in Figure 3. Figure 3a shows the estimation error declining to zero when δ i j ( t ) = 0 . The convergence times of the DFZNN, PTCZNN, LTCZNN, and IZNN are about 0.15 s, 0.3 s, 0.8 s, and 2 s, respectively. According to Theorem 3, the theoretical upper bound of the convergence time of the DFZNN can be calculated as:
t_{\mathrm{top}} \leq \frac{1}{\zeta_2} \ln \left( 1 + \frac{\zeta_2}{\alpha l_1 (1-\mu)} + \frac{\zeta_2}{\alpha l_2 (\nu - 1)} \right) = 1.47\,\mathrm{s} .
For PTCZNN and LTCZNN, from Refs. [27,41], the theoretical upper bounds of the convergence time can be obtained, respectively, as:
t_{\mathrm{top}} \leq \frac{1}{\alpha l_1 (1-\mu)} + \frac{1}{\alpha l_2 (\nu - 1)} = 2\,\mathrm{s} ,
and
t_{\mathrm{top}} \leq \frac{d_0^{\,1-\mu}}{\alpha (1-\mu)} = 1.73\,\mathrm{s} ,
where d 0 = max i , j | D i j ( 0 ) | ; hence, the convergence of the LTCZNN is affected by the initial value. At this point, it is obvious that the DFZNN has an advantage in convergence. In order to confirm the robustness stated in Theorem 4, four complex-valued noise situations are considered: constant noise δ i j ( t ) = 5 + 5 i , bounded random noise δ i j ( t ) ∈ [ 2 , 4 ] + [ 2 , 4 ] i , bounded time-varying noise δ i j ( t ) = 0.5 cos ( 2 t ) + 0.5 cos ( 2 t ) i , and unbounded time-varying noise δ i j ( t ) = 0.3 exp ( 0.2 t ) + 0.3 exp ( 0.2 t ) i . The results shown in Figure 4 demonstrate that the proposed DFZNN has stronger robustness than the other models.
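These bounds can be checked directly from the stated parameter values. A small sketch follows (variable names are ours); only the PTCZNN and LTCZNN formulas, which are fully determined by the parameters above, are evaluated:

```python
# Evaluating the theoretical bounds with the parameter values stated above
# (alpha = 3, mu = 0.5, nu = 2, l1 = l2 = 0.5).
alpha, mu, nu = 3.0, 0.5, 2.0
l1 = l2 = 0.5

# PTCZNN predefined-time bound: 1/(alpha*l1*(1 - mu)) + 1/(alpha*l2*(nu - 1))
t_ptc = 1.0 / (alpha * l1 * (1.0 - mu)) + 1.0 / (alpha * l2 * (nu - 1.0))  # = 2 s

# LTCZNN bound d0^(1 - mu) / (alpha*(1 - mu)) depends on the initial error
# d0 = max_ij |D_ij(0)|; e.g., d0 ~ 6.75 reproduces the 1.73 s figure above.
def t_ltc(d0):
    return d0 ** (1.0 - mu) / (alpha * (1.0 - mu))
```

Unlike the PTCZNN bound, the LTCZNN bound grows with the random initial error d0, which is why the LTCZNN convergence time varies between runs.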
Finally, we discuss the convergence property of the DFZNN model with different parameter values for the complex-valued time-varying pseudoinverse. From the designed evolution of D ( t ) in (9), the time-varying parameter contains the attenuation factor ζ 1 arccot ( t ) and the noise-suppression factor ζ 2 t . The convergence and robustness of the DFZNN mainly depend on the time-varying parameter in the exponential part. Therefore, we examine the convergence time of the DFZNN for different values of ζ 1 and ζ 2 . Furthermore, we measure the convergence time T c under zero noise and under time-varying noise; sixteen groups of experiments are summarized in Table 1. Without loss of generality, here we set the bounded time-varying noise to 10 sin ( 5 t ) + 10 sin ( 5 t ) i . From the table, we conclude that a larger ζ 1 (or ζ 2 ) yields a shorter convergence time. In terms of robustness, the parameter ζ 2 stabilizes the neural system: for example, across numbers 1–4 and 5–8, the convergence time is relatively stable. In particular, for numbers 12 and 16, the convergence time of the DFZNN is nearly identical under zero noise and under time-varying noise. In terms of convergence speed, the effect of ζ 1 is more significant: across numbers 9–12 and 13–16, the convergence time decreases markedly as ζ 1 increases.
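The stabilizing role of the ζ2 t term can be illustrated with a scalar sketch of the error dynamics under constant noise. Everything below is an illustrative assumption: the power-sum activation is a generic stand-in for the paper's activation function, and forward-Euler integration is used:

```python
import numpy as np

alpha, zeta1, zeta2 = 3.0, 1.0, 0.8
l1, l2, l3, mu, nu = 0.5, 0.5, 4.0, 0.5, 2.0

def act(d):
    # Generic power-sum activation (an assumption, not the paper's exact one).
    return (l1 * abs(d) ** mu + l2 * abs(d) ** nu + l3 * abs(d)) * np.sign(d)

def arccot(t):
    return np.pi / 2.0 - np.arctan(t)

dt, T, noise = 1e-4, 5.0, 5.0            # constant disturbance delta(t) = 5
d_var = d_fix = 1.0                      # same initial error for both schemes
for k in range(int(T / dt)):
    t = k * dt
    gain = alpha * np.exp(zeta1 * arccot(t) + zeta2 * t)
    d_var += dt * (-gain * act(d_var) + noise)   # growing gain (DFZNN-style)
    d_fix += dt * (-alpha * d_fix + noise)       # constant gain (IZNN-style)

# d_var is driven close to zero, while d_fix saturates near noise/alpha = 5/3.
```

This mirrors Figure 4: the exponentially growing gain overwhelms a bounded disturbance, while a constant gain leaves a steady-state error proportional to the noise level.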

5. Application

Referring to the mechanical structure of the manipulator in [44], this section introduces a mobile manipulator trajectory control task to verify the effectiveness of the DFZNN model. The kinematic model of the mobile manipulator, considering only the position of the end effector, can be obtained as follows:
P_a(t) = F(\Theta(t)) \in \mathbb{R}^m ,
where Θ ( t ) = [ ψ T ( t ) , θ T ( t ) ] T ∈ R n + 2 denotes the angle vector, including the mobile platform angle vector ψ ( t ) = [ ψ L ( t ) , ψ R ( t ) ] T and the manipulator angle vector θ ( t ) = [ θ 1 ( t ) , … , θ n ( t ) ] T . In addition, P a ( t ) = [ r x ( t ) , r y ( t ) , r z ( t ) ] T denotes the actual position of the end effector in space coordinates, and F ( · ) represents a known nonlinear mapping from Θ ( t ) to P a ( t ) .
Similarly, a new error function E ( t ) is defined:
E(t) = P_d(t) - P_a(t) ,
where P d ( t ) stands for the desired trajectory. Then, the derivative of E ( t ) can be obtained:
\dot{E}(t) = \dot{P}_d(t) - J(\Theta(t)) \dot{\Theta}(t) ,
where J ( Θ ( t ) ) = ∂ F ( Θ ( t ) ) / ∂ Θ ∈ R m × ( n + 2 ) . Based on the DFZNN model design method, the dynamic equation of the mobile manipulator can be obtained to achieve the task:
J(\Theta(t)) \dot{\Theta}(t) = \dot{P}_d(t) + \alpha \exp\!\left( \zeta_1 \operatorname{arccot}(t) + \zeta_2 t \right) F_3(E(t)) ,
in which E ( t ) = P d ( t ) − F ( Θ ( t ) ) . In a similar way, based on the LTCZNN model design method, another dynamic equation can be obtained to achieve the task:
J(\Theta(t)) \dot{\Theta}(t) = \dot{P}_d(t) + \alpha W_1(E(t)) .
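To make the control scheme concrete, a minimal resolved-rate tracking sketch is given below. Everything specific in it is an illustrative assumption: a fixed-base planar two-link arm with unit link lengths stands in for the mobile manipulator, the desired path is a circle rather than the infinity symbol, and a plain proportional gain k replaces the activation-function arrays F3 and W1:

```python
import numpy as np

L1 = L2 = 1.0                        # assumed unit link lengths

def fkine(th):                       # forward kinematics F(theta)
    return np.array([L1 * np.cos(th[0]) + L2 * np.cos(th[0] + th[1]),
                     L1 * np.sin(th[0]) + L2 * np.sin(th[0] + th[1])])

def jacobian(th):                    # J = dF/dtheta
    s1, c1 = np.sin(th[0]), np.cos(th[0])
    s12, c12 = np.sin(th[0] + th[1]), np.cos(th[0] + th[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def p_d(t):                          # assumed circular desired trajectory
    return np.array([1.2 + 0.3 * np.cos(t), 1.0 + 0.3 * np.sin(t)])

def p_d_dot(t):
    return np.array([-0.3 * np.sin(t), 0.3 * np.cos(t)])

k, dt, T = 10.0, 1e-3, 5.0           # feedback gain and Euler step
th = np.array([0.5, 0.5])
for i in range(int(T / dt)):
    t = i * dt
    e = p_d(t) - fkine(th)           # E(t) = P_d(t) - P_a(t)
    th_dot = np.linalg.pinv(jacobian(th)) @ (p_d_dot(t) + k * e)
    th = th + dt * th_dot

final_err = np.linalg.norm(p_d(T) - fkine(th))
```

Because J is square and nonsingular along this path, the error obeys Ė = −kE and decays exponentially; for the redundant mobile manipulator, pinv(J) additionally picks the minimum-norm joint velocity.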
In this simulation, we make use of the manipulator to track an infinity-symbol trajectory, and the corresponding results are shown in Figure 5 and Figure 6. The parameters are set as α = 2 and Θ ( 0 ) = [ 0 , 0 , π / 6 , π / 2 , π / 3 , π / 3 , π / 4 , π / 3 ] .
Figure 5 shows the whole tracking process of the mobile manipulator controlled by the DFZNN model and the LTCZNN model. Specifically, Figure 5a,c show the entire space trajectories, and Figure 5b,d the desired and actual plane trajectories, of the LTCZNN and DFZNN models, respectively. In the noisy environment ( δ i ( t ) = 0.3 sin ( 5 t ) , i = 1 , 2 , … , m ), we can observe that only the manipulator controlled by the DFZNN completes the desired trajectory task well. Figure 6a,b show the errors in tracking the trajectory of the mobile manipulator controlled by the LTCZNN and the DFZNN. After two seconds, the upper bound of the trajectory error under LTCZNN control is about 5 × 10^{−2}, while the upper bound under DFZNN control is only about 1.344 × 10^{−3}. Obviously, the mobile manipulator controlled by the DFZNN can satisfactorily complete the infinity-symbol trajectory tracking task under the noise δ i ( t ) .

6. Conclusions

In this paper, a double features ZNN with a time-varying parameter is proposed for solving the complex-valued time-varying matrix pseudoinverse. A new nonlinear activation function is designed to accelerate convergence, and the time-varying parameter further improves the convergence speed and enhances robustness against additive noises. Moreover, two complex-valued activation types are investigated in our work. The stability and robustness of the proposed DFZNN are proven theoretically. For the DFZNN, the upper bound of the predefined-time convergence is deduced; it is smaller than that of the PTCZNN and is independent of the initial value. Furthermore, the validity and superiority of the DFZNN are demonstrated by comparison with other zeroing neural network models in different noise environments. An application to controlling manipulator movement illustrates the practical value of the DFZNN model in artificial intelligence. Compared with a complex-valued ZNN, a quaternion-valued ZNN has advantages in storage capacity and in handling color-image problems; extending the DFZNN to solve quaternion-valued time-varying problems is a worthwhile direction for future work.

Author Contributions

Conceptualization, B.L. and Y.L.; writing—original draft preparation, Y.L., Y.H. and Z.D.; writing—review and editing, B.L., Z.D., G.X. and Y.H.; supervision, B.L. and Y.L.; funding acquisition, B.L. and Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China Grant 62066015; and the Natural Science Foundation of Hunan Province of China under grants 2020JJ4511 and 2019JJ50478; and the Research Foundation of Education Bureau of Hunan Province under grant 20A407.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ZNN: zeroing neural network
IZNN: initial ZNN
RNN: recurrent neural network
GNN: gradient-based RNN
LTCZNN: limited time convergence ZNN
PTCZNN: predefined time convergence ZNN
VPZNN: varying-time parameter ZNN
DFZNN: double features ZNN

References

  1. Kulikov, G.Y.; Kulikova, M.V. Moore–Penrose-pseudo-inverse-based Kalman-like filtering methods for estimation of stiff continuous-discrete stochastic systems with ill-conditioned measurements. IET Control Theory Appl. 2018, 12, 2205–2212. [Google Scholar] [CrossRef]
  2. Nabavi, S.; Zhang, J.; Chakrabortty, A. Distributed optimization algorithms for wide-area oscillation monitoring in power systems using interregional PMU-PDC architectures. IEEE Trans. Smart. Grid. 2015, 6, 2529–2538. [Google Scholar] [CrossRef]
  3. Arias, F.X.; Sierra, H.; Arzuaga, E. Improving execution time for supervised sparse representation classification of hyperspectral images using the Moore–Penrose pseudoinverse. J. Appl. Remote Sens. 2019, 13, 026512. [Google Scholar] [CrossRef]
  4. Guo, D.; Xu, F.; Yan, L. New pseudoinverse-based path-planning scheme with PID characteristic for redundant robot manipulators in the presence of noise. IEEE Trans. Control Syst. Technol. 2017, 26, 2008–2019. [Google Scholar] [CrossRef]
  5. Filelis-Papadopoulos, C.K.; Kyziropoulos, P.E.; Morrison, J.P.; O’Reilly, P. Modelling and forecasting based on recurrent pseudoinverse matrices. In Proceedings of the International Conference on Computational Science, Krakow, Poland, 16–18 June 2021; Springer: Berlin/Heidelberg, Germany, 2021; pp. 229–242. [Google Scholar]
  6. Sayevand, K.; Pourdarvish, A.; Machado, J.A.T.; Erfanifar, R. On the calculation of the Moore–Penrose and Drazin inverses: Application to fractional calculus. Mathematics 2021, 9, 2501. [Google Scholar] [CrossRef]
  7. Stanimirović, P.S.; Tasić, M.B. Computing generalized inverses using LU factorization of matrix product. Int. J. Comput. Math. 2008, 85, 1865–1878. [Google Scholar] [CrossRef] [Green Version]
  8. Kyrchei, I.I. Analogs of the adjoint matrix for generalized inverses and corresponding Cramer rules. Linear Multilinear Algebra 2008, 56, 453–469. [Google Scholar] [CrossRef]
  9. Artidiello, S.; Cordero, A.; Torregrosa, J.R.; Vassileva, M.P. Generalized inverses estimations by means of iterative methods with memory. Mathematics 2019, 8, 2. [Google Scholar] [CrossRef] [Green Version]
  10. Chen, C.; Li, K.; Ouyang, A.; Tang, Z.; Li, K. Gpu-accelerated parallel hierarchical extreme learning machine on flink for big data. IEEE Trans. Syst. Man Cybern. Syst. 2017, 47, 2740–2753. [Google Scholar] [CrossRef]
  11. Shao, Y.E.; Hu, Y.T. Using machine learning classifiers to recognize the mixture control chart patterns for a multiple-input multiple-output process. Mathematics 2020, 8, 102. [Google Scholar] [CrossRef] [Green Version]
  12. Xiao, L.; Lu, R. A finite-time recurrent neural network for computing quadratic minimization with time-varying coefficients. Chin. J. Electron. 2019, 28, 253–258. [Google Scholar] [CrossRef]
  13. Zhang, Y.; Chen, K.; Tan, H.Z. Performance analysis of gradient neural network exploited for online time-varying matrix inversion. IEEE Trans. Autom. Control 2009, 54, 1940–1945. [Google Scholar] [CrossRef]
  14. Jin, L.; Li, S.; Hu, B. RNN models for dynamic matrix inversion: A control-theoretical perspective. IEEE Trans. Ind. Inform. 2017, 14, 189–199. [Google Scholar]
  15. Xiao, L.; Li, K.; Tan, Z.; Zhang, Z.; Liao, B.; Chen, K.; Jin, L.; Li, S. Nonlinear gradient neural network for solving system of linear equations. Inf. Process. Lett. 2019, 142, 35–40. [Google Scholar] [CrossRef]
  16. Zhang, Y.; Jiang, D.; Wang, J. A recurrent neural network for solving Sylvester equation with time-varying coefficients. IEEE Trans. Neural Netw. 2002, 13, 1053–1063. [Google Scholar] [CrossRef]
  17. Liao, B.; Zhang, Y. From different ZFs to different ZNN models accelerated via Li activation functions to finite-time convergence for time-varying matrix pseudoinversion. Neurocomputing 2014, 133, 512–522. [Google Scholar] [CrossRef]
  18. Xu, F.; Li, Z.; Nie, Z.; Shao, H.; Guo, D. Zeroing neural network for solving time-varying linear equation and inequality systems. IEEE Trans. Neural Netw. Learn. Syst. 2018, 30, 2346–2357. [Google Scholar] [CrossRef]
  19. Guo, D.; Zhang, Y. Zhang neural network for online solution of time-varying linear matrix inequality aided with an equality conversion. IEEE Trans. Neural Netw. Learn. Syst. 2013, 25, 370–382. [Google Scholar] [CrossRef]
  20. Jin, L.; Li, S.; Liao, B.; Zhang, Z. Zeroing neural networks: A survey. Neurocomputing 2017, 267, 597–604. [Google Scholar] [CrossRef]
  21. Li, S.; Li, Y. Nonlinearly activated neural network for solving time-varying complex Sylvester equation. IEEE Trans. Cybern. 2013, 44, 1397–1407. [Google Scholar] [CrossRef]
  22. Zhang, Y.; Yan, X.; Liao, B.; Zhang, Y.; Ding, Y. Z-type control of populations for Lotka–Volterra model with exponential convergence. Math. Biosci. 2016, 272, 15–23. [Google Scholar] [CrossRef] [PubMed]
  23. Liao, B.; Wang, Y.; Li, W.; Peng, C.; Xiang, Q. Prescribed-time convergent and noise-tolerant Z-type neural dynamics for calculating time-dependent quadratic programming. Neural Comput. Applic. 2021, 33, 5327–5337. [Google Scholar] [CrossRef]
  24. Zhang, Z.; Deng, X.; Kong, L.; Li, S. A circadian rhythms learning network for resisting cognitive periodic noises of time-varying dynamic system and applications to robots. IEEE Trans. Cogn. Dev. Syst. 2019, 12, 575–587. [Google Scholar] [CrossRef]
  25. Jin, L.; Zhang, Y.; Li, S. Integration-enhanced Zhang neural network for real-time-varying matrix inversion in the presence of various kinds of noises. IEEE Trans. Neural Netw. Learn. Syst. 2015, 27, 2615–2627. [Google Scholar] [CrossRef]
  26. Xiao, L.; Zhang, Y.; Zuo, Q.; Dai, J.; Li, J.; Tang, W. A noise-tolerant zeroing neural network for time-dependent complex matrix inversion under various kinds of noises. IEEE Trans. Ind. Inform. 2019, 16, 3757–3766. [Google Scholar] [CrossRef] [Green Version]
  27. Li, S.; Chen, S.; Liu, B. Accelerating a recurrent neural network to finite-time convergence for solving time-varying Sylvester equation by using a sign-bi-power activation function. Neural Process Lett. 2013, 37, 189–205. [Google Scholar] [CrossRef]
  28. Xiao, L.; Liao, B.; Li, S.; Chen, K. Nonlinear recurrent neural networks for finite-time solution of general time-varying linear matrix equations. Neural Netw. 2018, 98, 102–113. [Google Scholar] [CrossRef]
  29. Yu, F.; Liu, L.; Xiao, L.; Li, K.; Cai, S. A robust and fixed-time zeroing neural dynamics for computing time-variant nonlinear equation using a novel nonlinear activation function. Neurocomputing 2019, 350, 108–116. [Google Scholar] [CrossRef]
  30. Lustig, M.; Donoho, D.; Pauly, J.M. Sparse MRI: The application of compressed sensing for rapid MR imaging. Magn. Reson. Med. 2007, 58, 1182–1195. [Google Scholar] [CrossRef]
  31. Loesch, B.; Yang, B. Cramér-Rao bound for circular complex independent component analysis. In Proceedings of the International Conference on Latent Variable Analysis and Signal Separation, Tel Aviv, Israel, 12–15 March 2012; Springer: Berlin/Heidelberg, Germany, 2012; pp. 42–49. [Google Scholar]
  32. Bobrovnikova, E.Y.; Vavasis, S.A. A norm bound for projections with complex weights. Linear Algebra Appl. 2000, 307, 69–75. [Google Scholar] [CrossRef] [Green Version]
  33. Zhang, S.; Xia, Y.; Zheng, W. A complex-valued neural dynamical optimization approach and its stability analysis. Neural Netw. 2015, 61, 59–67. [Google Scholar] [CrossRef] [PubMed]
  34. Syed Ali, M.; Narayanan, G.; Orman, Z.; Shekher, V.; Arik, S. Finite time stability analysis of fractional-order complex-valued memristive neural networks with proportional delays. Neural Process Lett. 2020, 51, 407–426. [Google Scholar] [CrossRef]
  35. Gunasekaran, N.; Zhai, G. Sampled-data state-estimation of delayed complex-valued neural networks. Int. J. Syst. Sci. 2020, 51, 303–312. [Google Scholar] [CrossRef]
  36. Gunasekaran, N.; Zhai, G. Stability analysis for uncertain switched delayed complex-valued neural networks. Neurocomputing 2019, 367, 198–206. [Google Scholar] [CrossRef]
  37. Rakkiyappan, R.; Cao, J.; Velmurugan, G. Existence and uniform stability analysis of fractional-order complex-valued neural networks with time delays. IEEE Trans. Neural Netw. Learn. Syst. 2014, 26, 84–97. [Google Scholar] [CrossRef]
  38. Wang, X.Z.; Stanimirović, P.S.; Wei, Y. Complex ZFs for computing time-varying complex outer inverses. Neurocomputing 2018, 275, 983–1001. [Google Scholar] [CrossRef]
  39. Zhang, Q.; Wang, X. Complex-valued neural network for hermitian matrices. Eng. Lett. 2017, 25, 312–320. [Google Scholar]
  40. Qiao, S.; Wang, X.Z.; Wei, Y. Two finite-time convergent Zhang neural network models for time-varying complex matrix Drazin inverse. Linear Algebra Appl. 2018, 542, 101–117. [Google Scholar] [CrossRef]
  41. Zhang, Z.; Zheng, L.; Weng, J.; Mao, Y.; Lu, W.; Xiao, L. A new varying-parameter recurrent neural-network for online solution of time-varying Sylvester equation. IEEE Trans. Cybern. 2018, 48, 3135–3148. [Google Scholar] [CrossRef]
  42. Stanimirović, P.S.; Katsikis, V.N.; Zhang, Z.; Li, S.; Chen, J.; Zhou, M. Varying-parameter Zhang neural network for approximating some expressions involving outer inverses. Optim. Methods Softw. 2020, 35, 1304–1330. [Google Scholar] [CrossRef]
  43. Xiao, L.; He, Y. A noise-suppression ZNN model with new variable parameter for dynamic Sylvester equation. IEEE Trans. Ind. Inform. 2021, 17, 7513–7522. [Google Scholar] [CrossRef]
  44. Zhang, Z.; Zheng, L.; Yu, J.; Li, Y.; Yu, Z. Three recurrent neural networks and three numerical methods for solving a repetitive motion planning scheme of redundant robot manipulators. IEEE/ASME Trans. Mechatron. 2017, 22, 1423–1434. [Google Scholar] [CrossRef]
Figure 1. Dynamic characteristics of Y ( t ) synthesized by the proposed DFZNN with two activation types. (a) the DFZNN model with the activation type I; (b) profile synthesized by the DFZNN model with the activation type II.
Figure 2. The estimation error D ( t ) F of Y ( t ) synthesized by the DFZNN with two activation types in different situations. (a) with the activation type I; (b) with the activation type II.
Figure 3. The estimation error D ( t ) F of Y ( t ) synthesized by the DFZNN, PTCZNN, LTCZNN, IZNN with the activation type II. (a) zero noise; (b) linear time varying noise δ i j ( t ) = 0.5 t + 0.5 t i .
Figure 4. Trajectories of state solution of the DFZNN, PTCZNN, LTCZNN, IZNN with the activation type II in four noise situations. (a) δ i j ( t ) = 5 + 5 i ; (b) δ i j ( t ) = [ 2 , 4 ] + [ 2 , 4 ] i ; (c) δ i j ( t ) = 0.5 cos ( 2 t ) + 0.5 cos ( 2 t ) i ; (d) δ i j ( t ) = 0.3 exp ( 0.2 t ) + 0.3 exp ( 0.2 t ) i .
Figure 5. The space trajectory, plane actual trajectory, and plane desired trajectory of the butterfly using the LTCZNN model and the DFZNN model under noise δ i ( t ) = 0.3 sin ( 5 t ) . (a) By the LTCZNN model. (b) By the LTCZNN model. (c) By the DFZNN model. (d) By the DFZNN model.
Figure 6. The tracking errors of the mobile manipulator controlled by the LTCZNN model and the DFZNN model under noise δ i ( t ) = 0.3 sin ( 5 t ) . In addition, e x , e y , e z are the errors along the X , Y , Z axes. (a) By the LTCZNN model. (b) By the DFZNN model.
Table 1. The convergence time T c (in seconds) of the DFZNN model under different parameters.
| Number | ζ1 | ζ2 | T_c (Zero Noise) | T_c (Time-Varying Noise) |
|--------|-----|-----|------------------|--------------------------|
| 1 | 0.5 | 0.5 | 0.1694 | 1.2766 |
| 2 | 0.5 | 1 | 0.1664 | 0.6436 |
| 3 | 0.5 | 1.5 | 0.1640 | 0.6388 |
| 4 | 0.5 | 2 | 0.1592 | 0.6338 |
| 5 | 1 | 0.5 | 0.0822 | 0.6384 |
| 6 | 1 | 1 | 0.0774 | 0.6326 |
| 7 | 1 | 1.5 | 0.0758 | 0.6284 |
| 8 | 1 | 2 | 0.0750 | 0.6284 |
| 9 | 0.5 | 0.5 | 0.1694 | 1.2766 |
| 10 | 1 | 0.5 | 0.0822 | 0.6384 |
| 11 | 1.5 | 0.5 | 0.0374 | 0.6286 |
| 12 | 2 | 0.5 | 0.0170 | 0.0168 |
| 13 | 0.5 | 1 | 0.1640 | 0.6436 |
| 14 | 1 | 1 | 0.0774 | 0.6326 |
| 15 | 1.5 | 1 | 0.0368 | 0.6256 |
| 16 | 2 | 1 | 0.0168 | 0.0168 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Lei, Y.; Dai, Z.; Liao, B.; Xia, G.; He, Y. Double Features Zeroing Neural Network Model for Solving the Pseudoninverse of a Complex-Valued Time-Varying Matrix. Mathematics 2022, 10, 2122. https://0-doi-org.brum.beds.ac.uk/10.3390/math10122122