Article

The Stability Analysis of Linear Systems with Cauchy—Polynomial-Vandermonde Matrices

1 Department of Mathematical Analysis, Bukhara State University, Bukhara 200100, Uzbekistan
2 Department of Mathematics and Sciences, Prince Sultan University, Riyadh 11586, Saudi Arabia
3 Department of Industrial Engineering, OSTİM Technical University, Ankara 06374, Türkiye
* Authors to whom correspondence should be addressed.
Submission received: 30 May 2023 / Revised: 20 August 2023 / Accepted: 24 August 2023 / Published: 28 August 2023
(This article belongs to the Special Issue Advances in Analysis and Control of Systems with Uncertainties II)

Abstract

The numerical approximation of eigenvalues and singular values of totally positive Bernstein–Vandermonde matrices, Bernstein–Bezoutian structured matrices, Cauchy—polynomial-Vandermonde structured matrices, and quasi-rational Bernstein–Vandermonde structured matrices is well studied in the literature. We present new results on the numerical approximation of the largest singular values of these four classes of structured matrices. The reciprocal of the numerically approximated largest singular value then yields the structured singular value. New lower bounds for the structured singular values are obtained by computing the largest singular values of totally positive Bernstein–Vandermonde, Bernstein–Bezoutian, Cauchy—polynomial-Vandermonde, and quasi-rational Bernstein–Vandermonde structured matrices. Furthermore, we present the spectral properties of these four classes of matrices by computing their eigenvalues, singular values, structured singular values together with lower and upper bounds, and condition numbers.
MSC:
15A18; 05A05

1. Introduction

In numerical linear algebra, the design of accurate and efficient numerical algorithms for classes of structured matrices has remained an active research topic in recent years. Among these classes, totally positive matrices are of particular interest. Several classes of highly structured matrices are studied in [1,2,3,4]. The literature listed there covers many aspects of both theory and application but does not address accurate and efficient numerical computation with such structured matrices. More details on the computational aspects of structured totally positive matrices can be found in [5,6,7].
We consider a special class of totally positive structured matrices, Bernstein–Vandermonde matrices, which are studied in depth in [8] for solving systems of linear equations. This class of matrices has also been used to solve and analyze least squares fitting problems in the Bernstein basis [9]. Bernstein–Vandermonde matrices are the straightforward generalization of Vandermonde matrices obtained when the Bernstein basis, rather than the monomial basis, is chosen for the space of algebraic polynomials of degree at most n. The Bernstein polynomials were introduced about a hundred years ago by Sergei Natanovich Bernstein in order to give a constructive proof of the Weierstrass approximation theorem. The work of Bézier and de Casteljau introduced the Bernstein polynomials into computer-aided geometric design; see [10] for more details.
Numerical methods based on Bézier curves are popular in computer-aided geometric design (CAGD); see [11,12,13,14,15]. Bézier curves are parameterized in terms of the Bernstein polynomial basis.
The theoretical basis for designing fast and accurate algorithms to compute the greatest common divisor of real polynomials p(a) and q(a) of degree at most n, expressed in the Bernstein polynomial basis \{\alpha_0^{(n)}, \ldots, \alpha_n^{(n)}\} with \alpha_i^{(n)}(a) = \binom{n}{i}(1-a)^{n-i} a^i, 0 \le i \le n, is developed in [16]. Fast O(n^2) algorithms that determine the power form of the polynomials p(a) and q(a), or their matrix counterparts [19], for the evaluation of the GCD are studied in [17,18].
In [16], the Bezout form B = (b_{i,j}) \in R^{n \times n} for the polynomials p(a) and q(a) is defined by the expression
\[
p(a)\,q(w) - p(w)\,q(a) = (a - w) \sum_{i,j=1}^{n} b_{i,j}\, \alpha_{i-1}^{(n-1)}(a)\, \alpha_{j-1}^{(n-1)}(w).
\]
Furthermore, the Bezoution matrices, while considering different polynomial bases, are studied by various authors, see [20,21,22,23,24].
The structured matrices, particularly both Vandermonde matrices and Cauchy matrices, appear in the vast areas of computation, see [25,26]. The Cauchy–Vandermonde structured matrices act as useful tools to study the numerical approximation of solution corresponding to singular integral equations; for more detail, we refer [27]. Such a type of structured matrices does occur in connection with numerical approximation of solutions of problems related to study quadrature [28]. In fact, Cauchy–Vandermonde matrices are ill-conditional matrices. The high accuracy of numerical approximation has been achieved for such a class of structured matrices while very carefully studying their specific structure properties [1,29,30,31,32,33,34,35,36].
The Vandermonde matrices appear during the study of interpolation problems in order to exploit the monomial basis [25]. The polynomial-Vandermonde matrices appear when the polynomial basis is considered rather than the monomial basis, and such matrices help to study many applications. A few included in the list are approximation, interpolation, and Gaussian quadrature [37,38,39,40,41,42].
An extensive amount of research has been done in order to analyze the high accuracy of numerical approximations for many classes of matrices having specific structures. This does includes a class of structured matrices includes: totally positive and totally negative matrices [6,43], totally non-positive matrices [44,45], matrices having rank-revealing decomposition [46,47], rank structured matrices [48,49], the diagonally dominant structured M-matrices [50,51] and the structured sign regular matrices [7,52]. The numerical approximation of eigenvalues for structured quasi-rational Bernstein–Vandermonde matrices up to high accuracy (relative) are studied in much greater detail in [53,54].
An extensive amount of work has been done in the direction of discussing both necessary and sufficient criteria for swarm stability asymptotically. The system under consideration achieves the asymptotically if and only if there exists Hermitian matrices satisfying complex Lyapunov inequality for all of the system vertex matrices [55].
The symmetry and asymmetry properties of orthogonal polynomials play a key role in solving system of differential equations that appear in mathematical modeling corresponding to real-world problems. The classical orthogonal polynomials, for instance, Hermit, Legendre, Laguerre, and Discrete, including Krawtchouk and Chebyshev, have numerous widespread applications across many very important branches of science and engineering. In [56], Chebyshev polynomials are used to discuss the simulation of a two-dimensional mass transfer equation subject to Robin and Neumann boundary conditions.
The Boolean complexity for the multiplication of structured matrices by a vector and the solution of nonsingular linear systems of equations with a class of such matrices is studied in [57]. The main focus is to study four basic and most popular classes, that is, Toeplitz, Hankel, Cauchy, and Vandermonde matrices, for which the cited computational problems are equivalent to the task of polynomial multiplication and division and polynomial and rational multipoint evaluation and interpolation.
In this article, we present the spectral properties of a class of structured matrices. We study the behavior of eigenvalues, singular values, and structured singular values for a class of structured matrices, that is, totally positive Bernstein–Vandermonde matrices, Bernstein–Bezoutian structured matrices, Cauchy—polynomial-Vandermonde structured matrices and quasi-rational Bernstein–Vandermonde structured matrices. Furthermore, we also present the numerical approximation of conditioned numbers for structured matrices considered in the current study. Our proposed approach in a recent study differs from the methodology [58] where a low-rank ODE-based technique was developed to study the stability and instability analysis of linear time-invariant system appearing in control and was mainly based on a two level-algorithm, that is, inner-outer algorithm.
The key contribution of this paper is to study and investigate the spectral properties (particularly the computation of structured singular) of totally positive Bernstein–Vandermonde matrices, Bernstein–Bezoutian matrices, Cauchy—polynomial-Vandermonde matrices, and structured quasi-rational Bernstein–Vandermonde matrices and this act as the novel contribution to this paper.
In Section 2 of this article, we aim to present the definitions of structured totally positive Bernstein–Vandermonde matrices, Bernstein–Bezoutian matrices, Cauchy—polynomial-Vandermonde matrices and quasi-rational Bernstein–Vandermonde matrices.
We give a brief and concise introduction to the numerical approximation of the structured singular values in Section 3. Section 4 contains the main results on the numerical computation of the largest and the smallest singular values. Furthermore, the exact behavior of the largest and smallest singular values is also discussed. In Section 5, we present numerical experimentation for Bernstein–Vandermonde and Bernstein–Bezoutian matrices. The numerical approximation of eigenvalues and both singular values and structured singular values are also analyzed and presented. The numerical experimentation for spectral quantities of Cauchy-polynomial-Vandermonde structured matrices and quasi-rational Bernstein–Vandermonde structured matrices are presented in Section 6 and Section 7, respectively. Section 8 contains the numerical testing for the comparison between the approximated lower bounds of structured singular values for a class of higher dimensional structured Bernstein–Vandermonde matrices. Finally, in Section, we present concluding remarks.

2. Preliminaries

Definition 1
([59]). The Bernstein basis on the closed interval [ 0 , 1 ] for the space S n ( x ) of polynomials of degree less than or equal to n is defined as
\[
\mathcal{B}_n = \Big\{ \hat{b}_i^{(n)}(x) = \binom{n}{i} (1-x)^{n-i} x^i, \quad i = 0, \ldots, n \Big\}.
\]
Definition 2
([59]). The (m+1) × (n+1) Bernstein–Vandermonde structured matrix for the basis \mathcal{B}_n and the nodes \{x_i\}_{1 \le i \le m+1} is defined as
\[
B = \begin{pmatrix}
\binom{n}{0}(1-x_1)^n & \binom{n}{1} x_1 (1-x_1)^{n-1} & \cdots & \binom{n}{n} x_1^{\,n} \\
\binom{n}{0}(1-x_2)^n & \binom{n}{1} x_2 (1-x_2)^{n-1} & \cdots & \binom{n}{n} x_2^{\,n} \\
\vdots & \vdots & & \vdots \\
\binom{n}{0}(1-x_{m+1})^n & \binom{n}{1} x_{m+1} (1-x_{m+1})^{n-1} & \cdots & \binom{n}{n} x_{m+1}^{\,n}
\end{pmatrix}.
\]
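For illustration, the following MATLAB sketch (not code from the paper) assembles the Bernstein–Vandermonde matrix of Definition 2 for given nodes; with nodes {1/4, 1/2, 3/4} and n = 2 it reproduces the 3 × 3 matrix M_1 used later in Example 1.

% Sketch: assemble the (m+1)-by-(n+1) Bernstein-Vandermonde matrix of
% Definition 2 for the nodes x(1),...,x(m+1).
function B = bernstein_vandermonde(x, n)
    m1 = numel(x);                 % number of nodes, i.e., m+1 rows
    B  = zeros(m1, n + 1);
    for i = 1:m1
        for j = 0:n
            B(i, j + 1) = nchoosek(n, j) * x(i)^j * (1 - x(i))^(n - j);
        end
    end
end
% Example: B = bernstein_vandermonde([1/4 1/2 3/4], 2) gives the matrix M_1 of Example 1.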
Definition 3
([16]). The transformation matrix between Bernstein and power basis is defined as
\[
T_n \begin{pmatrix} \alpha_0^{(n)}(a) \\ \vdots \\ \alpha_n^{(n)}(a) \end{pmatrix} = \begin{pmatrix} 1 \\ a \\ \vdots \\ a^{\,n} \end{pmatrix},
\]
where
\[
t_{i,j}^{(n)} = \begin{cases} 0, & \text{if } i > j, \\[4pt] \binom{j-1}{i-1} \binom{n}{i-1}^{-1}, & \text{if } i \le j. \end{cases}
\]
Definition 4
([16]). The Bezoutian matrix B \in R^{n \times n} for polynomials of degree at most n, that is, p(a) = \sum_{i=0}^{n} p_i\, \alpha_i^{(n)}(a) and q(a) = \sum_{i=0}^{n} q_i\, \alpha_i^{(n)}(a), in the Bernstein basis \{\alpha_0^{(n-1)}(a), \ldots, \alpha_{n-1}^{(n-1)}(a)\} is defined by
\[
\frac{p(a)\,q(w) - p(w)\,q(a)}{a - w} = \sum_{i,j=1}^{n} b_{i,j}\, \alpha_{i-1}^{(n-1)}(a)\, \alpha_{j-1}^{(n-1)}(w).
\]
Definition 5
([60]). The coefficient matrix (associated with an interpolation problem) of the form
\[
M = \begin{pmatrix}
\dfrac{1}{x_1+y_1} & \cdots & \dfrac{1}{x_1+y_l} & a_0 & a_0 + a_1 x_1 & \cdots & \sum_{k=0}^{n-l-1} a_k x_1^{\,k} \\[6pt]
\dfrac{1}{x_2+y_1} & \cdots & \dfrac{1}{x_2+y_l} & a_0 & a_0 + a_1 x_2 & \cdots & \sum_{k=0}^{n-l-1} a_k x_2^{\,k} \\
\vdots & & \vdots & \vdots & \vdots & & \vdots \\
\dfrac{1}{x_n+y_1} & \cdots & \dfrac{1}{x_n+y_l} & a_0 & a_0 + a_1 x_n & \cdots & \sum_{k=0}^{n-l-1} a_k x_n^{\,k}
\end{pmatrix}
\]
is known to be a Polynomial-Vandermonde structured matrix if l = 0 and a Cauchy matrix if l = n . Otherwise, it is a Cauchy—polynomial-Vandermonde structured matrix.
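A minimal MATLAB sketch of how a matrix with the layout of Definition 5 can be assembled is given below; the function name, the input conventions (x holds the n nodes, y the l poles of the Cauchy part, a the coefficients a_0, ..., a_{n-l-1}), and the ordering of the blocks are our assumptions, not code from [60].

% Sketch: assemble an n-by-n Cauchy--polynomial-Vandermonde matrix as in Definition 5.
function M = cauchy_poly_vandermonde(x, y, a)
    n = numel(x);                                 % number of rows
    l = numel(y);                                 % number of Cauchy columns
    M = zeros(n, n);
    for i = 1:n
        for j = 1:l
            M(i, j) = 1 / (x(i) + y(j));          % Cauchy part
        end
        s = 0;
        for j = 1:(n - l)
            s = s + a(j) * x(i)^(j - 1);          % partial sums a_0 + a_1*x_i + ...
            M(i, l + j) = s;                      % polynomial part
        end
    end
end

For l = 0 this reduces to a polynomial-Vandermonde matrix and for l = n to a Cauchy matrix, matching the cases in Definition 5; for instance, cauchy_poly_vandermonde([1/2 3/2], [1/3 2/3], []) reproduces the 2 × 2 matrix M_3 of Example 3.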
Definition 6
([60]). For a sequence \{w_i\}_{0 \le i \le n-1} of strictly positive numbers and the Bernstein basis of the space of polynomials of degree at most (n-1) on the closed interval [0, 1], the rational Bernstein basis is defined as
\[
\mathcal{B}_n^{r} = \Big\{ r_i^{(n-1)}(x) = \frac{w_i\, b_i^{(n-1)}(x)}{W(x)}, \quad i \in \{0, \ldots, n-1\} \Big\}
\]
with b_i^{(n-1)}(x) = \binom{n-1}{i} x^i (1-x)^{n-i-1} and W(x) = \sum_{i=0}^{n-1} w_i\, b_i^{(n-1)}(x).
Definition 7
([60]). For the rational Bernstein basis \mathcal{B}_n^{r}, the rational Bernstein–Vandermonde matrix is defined as
\[
B = \begin{pmatrix}
\dfrac{w_0 \binom{n-1}{0} (1-x_1)^{n-1}}{W(x_1)} & \cdots & \dfrac{w_{n-1} \binom{n-1}{n-1} x_1^{\,n-1}}{W(x_1)} \\
\vdots & & \vdots \\
\dfrac{w_0 \binom{n-1}{0} (1-x_n)^{n-1}}{W(x_n)} & \cdots & \dfrac{w_{n-1} \binom{n-1}{n-1} x_n^{\,n-1}}{W(x_n)}
\end{pmatrix}.
\]
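The following sketch assembles the rational Bernstein–Vandermonde matrix of Definition 7; the function name and the convention that w(1), ..., w(n) play the role of w_0, ..., w_{n-1} are ours. Since the rational Bernstein basis forms a partition of unity, every row of the resulting matrix sums to one, which is a convenient sanity check.

% Sketch: rational Bernstein-Vandermonde matrix for nodes x in (0,1) and positive weights w.
function B = rational_bernstein_vandermonde(x, w)
    n = numel(x);
    B = zeros(n, n);
    for r = 1:n
        bi = zeros(1, n);
        for i = 0:n - 1
            bi(i + 1) = nchoosek(n - 1, i) * x(r)^i * (1 - x(r))^(n - 1 - i);  % b_i^(n-1)(x_r)
        end
        Wx = sum(w(:)' .* bi);          % W(x_r)
        B(r, :) = (w(:)' .* bi) / Wx;   % r_i^(n-1)(x_r), i = 0,...,n-1
    end
end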
Definition 8
([60]). The quasi-rational Bernstein–Vandermonde structured matrix B_{qr} = (r_{ij}) \in R^{n \times n} is defined as
\[
r_{ij} = \begin{cases}
\dfrac{w_{j-1} \binom{n-1}{j-1} x_i^{\,j-1} (1-x_i)^{n-j}}{W(x_i)}, & \text{if } 1 \le i, j \le n,\ (i,j) \ne (n,n), \\[8pt]
\dfrac{w_{n-1} \binom{n-1}{n-1} x_n^{\,n-1}}{W(x_n)} - \hat{w}, & \text{if } (i,j) = (n,n),
\end{cases}
\]
where the perturbation parameter \hat{w} satisfies
\[
0 < x_1 < x_2 < \cdots < x_n < 1, \qquad \frac{x_n - x_1}{1 - x_1}\,\gamma < \hat{w} \le x_n\,\gamma, \qquad \text{with } \gamma = \frac{w_{n-1}}{W(x_n)} \prod_{k=2}^{n-1} \frac{x_n - x_k}{1 - x_k}.
\]

3. Structured Singular Values

In this section, we introduce a mathematical quantity appearing in control theory known as the structured singular value, or μ-value. Let A \in R^{n \times n} and Δ \in R^{n \times n}. The matrix Δ represents the admissible uncertainty from the set of block diagonal matrices (the set of uncertainties), that is,
\[
\mathbb{B} := \{ \operatorname{diag}(\Delta_i) : \Delta_i \in \mathbb{R}^{n \times n},\ i = 1, \ldots, n \}.
\]
Definition 9.
For a given A, the structured singular value, denoted μ_Δ(A), is defined as
\[
\mu_{\Delta}(A) := \frac{1}{\min\{\sigma_{\max}(\Delta) : \Delta \in \mathbb{B},\ \det(I - A\Delta) = 0\}}.
\]
If there is no uncertainty Δ \in \mathbb{B} such that det(I − AΔ) = 0, then μ_Δ(A) := 0.
In Definition 9, det(·) denotes the determinant of the structured matrix under consideration. The exact computation of the structured singular value, or μ-value, for a class of very large-scale structured matrices is NP-hard [61]. For this reason, one numerically approximates tight bounds on the structured singular value. Tight lower bounds on the structured singular value, or μ-value, provide enough information to study the instability of the underlying dynamical system, while tight upper bounds are of great help for its stability analysis.
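As a minimal illustration of how such bounds are computed in practice (assuming the Robust Control Toolbox is available), the snippet below calls mussv with the uncertainty modeled, purely for simplicity, as a single full complex block; encoding the block-diagonal set defined above would require one row of the block-structure matrix per block.

A   = rand(4);            % any square test matrix
blk = [4 4];              % block structure: one full 4-by-4 uncertainty block
bounds = mussv(A, blk);   % returns [upper bound, lower bound] on mu
mu_upper = bounds(1);
mu_lower = bounds(2);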

4. Main Results

In this section, we present our new and main results concerning the numerical approximation of the largest singular value σ_max of A \in R^{n \times n}. Furthermore, we also discuss the increasing behavior of σ_max and the decreasing behavior of σ_min. The following Theorem 1 allows the computation of σ_max.
Definition 10.
For a given matrix A \in R^{n \times n}, the scalars λ_i are called the eigenvalues of A if det(A − λ_i I) = 0, where I denotes the identity matrix of the same dimension as A.
Definition 11.
For a given matrix A \in R^{n \times n}, the non-negative numbers σ_i are known as the singular values of A if A can be decomposed as A = U Σ V^t, where the matrices U and V are orthogonal and Σ is a diagonal matrix with the σ_i on its main diagonal.
Lemma 1.
Let A : R → C^{n \times n} be a smooth matrix family and let λ(t) denote the eigenvalue of A(t), t \in R, that converges to a simple eigenvalue λ_0 of A_0 as t → 0. Then the continuous branch of eigenvalues λ(t) is analytic near t = 0, with
\[
\frac{d}{dt}\lambda(0) = \frac{y_0^{*} A_1 x_0}{y_0^{*} x_0}.
\]
Here, A_1 = \dot{A}(0), and x_0 and y_0 denote the right and left eigenvectors of A_0 corresponding to λ_0.
Theorem 1.
For a given A \in R^{n \times n}, the largest singular value σ_max(A) is obtained as
σ m a x ( A ) = α β ,
where
α = i λ i 1 2 n ( D ) ( 1 2 ) n 1 ( n 2 ) ( 1 n 1 ) n 1 2 ( n 2 ) i ( σ i 2 ) n 1 n 2 ( D ) n 1 2 i σ i 2 ( D ) ,
and
β = i λ i 1 n 2 4 n + 4 ( E ) ( 1 2 ) n 2 2 n + 1 n 2 4 n + 4 ( 1 n 1 ) n 2 2 n + 1 2 n 2 8 n + 8 i ( σ i 2 ) n 2 2 n + 1 n 2 4 n + 4 ( E ) n 1 i ( σ i 4 ) ( E ) .
Proof. 
First, we show that the given expressions for α and β are valid. We consider the following factorization of A, for which we refer the interested reader to [62]:
\[
A = D^{-1} E \,(E^{t} D^{-1} E)^{-1} E^{t}.
\]
Here, D denotes a non-negative diagonal matrix, while E has full rank.
For the numerator of α, we use the arithmetic–geometric mean inequality, which allows us to write the following inequality for the singular values of D:
\[
\sigma_1^2(D)\,\big(\sigma_2^2(D) + \sigma_3^2(D) + \cdots + \sigma_n^2(D)\big) \le \frac{\big(\sigma_1^2(D) + \sigma_2^2(D) + \cdots + \sigma_n^2(D)\big)^2}{4}.
\]
Next, we apply the arithmetic–geometric mean inequality to the quantity \sigma_1^{2n-4}(D) \prod_i \sigma_i^2(D), which yields
\[
\sigma_1^{2n-4}(D) \prod_i \sigma_i^2(D) = \sigma_1^{2n-4}(D)\, \sigma_1^2(D) \cdots \sigma_n^2(D) = \sigma_1^{2n-2}(D)\, \sigma_2^2(D) \cdots \sigma_n^2(D) \le \frac{\big(\sigma_1^2(D)\,(\sigma_2^2(D) + \cdots + \sigma_n^2(D))\big)^{n-1}}{(n-1)^{n-1}}.
\]
Equations (12) and (13) allow us to write
\[
\sigma_1^{2n-4}(D) \prod_i \sigma_i^2(D) \le \frac{\big(\sum_i \sigma_i^2(D)\big)^{2(n-1)}}{(4n-4)^{n-1}}.
\]
Finally, from Equation (14), we have
σ 1 ( D ) = σ m a x ( D ) i λ i 1 2 n ( D ) 1 4 n 4 n 1 2 ( n 2 ) i ( σ i 2 ) n 1 n 2 ( D ) .
For the denominator of α, we again make use of the arithmetic–geometric mean inequality for the singular values of D:
\[
\prod_i \big(\sigma_i^2(D)\big)^{1/n} \le \frac{1}{n} \sum_i \sigma_i^2(D).
\]
In addition,
\[
n \prod_i \Big(\frac{1}{\sigma_i^2(D)}\Big)^{1/n} \le \sum_i \frac{1}{\sigma_i^2(D)}.
\]
The inequalities in Equations (15) and (16) yield
\[
n^2 \le \Big(\sum_i \sigma_i^2(D)\Big) \Big(\sum_i \frac{1}{\sigma_i^2(D)}\Big).
\]
The matrix 2-norm of D can be written as
\[
\|D\|_2^2 = \sigma_{\max}^2(D) = \sigma_1^2(D).
\]
Therefore, Equations (17) and (18) imply that
n 2 i σ i 2 ( D ) i 1 σ i 2 ( D ) .
or
σ n ( D ) = σ m i n ( D ) = n 1 2 i σ i 2 ( D ) .
In a similar way, we can obtain the expressions for the numerator and denominator of β . Now, we aim to prove that σ m a x ( A ) = α β .
Since, D = D t and λ i ( D ) 0 , i . Thus, the matrix D takes the form of D = ( D 1 2 ) 2 and D 1 2 = ( D 1 2 ) 1 . We obtain the following expression while making use of the singular value decomposition of D 1 2 E yields
D 1 2 E = u 1 u n u n + 1 u N σ 1 0 0 0 σ 2 0 0 0 0 σ n 0 0 0 0 0 0 [ v 1 v n v n + 1 v N ] t , = u 1 u n u n + 1 u N 1 0 v 1 v n v n + 1 v N t .
From [63], the σ m a x ( D 1 2 E ) can be written as
σ m a x ( D 1 2 E ) σ m a x ( D 1 2 ) σ m a x ( E ) , = σ m a x ( E ) σ m a x ( D 1 2 ) , = i λ i 1 n 2 4 n + 4 ( E ) ( 1 2 ) n 2 2 n + 1 n 2 4 n + 4 ( 1 n 1 ) n 2 2 n + 1 2 n 2 8 n + 8 i ( σ i 2 ) n 2 2 n + 1 n 2 4 n + 4 ( E ) n 1 i σ i 4 ( D )
with σ m i n ( D ) = n 1 2 i σ i 2 ( D ) 0 .
In a similar way from [63], σ m i n ( D 1 2 E ) takes the following form as, that is,
σ m i n ( D ) σ m i n ( D 1 2 ) σ m i n ( E ) = σ m i n ( E ) ( σ m a x ( D ) ) 1 2 . = n 1 i σ i 4 ( E ) i λ i 1 ( 2 n ) 2 ( D ) ( 1 2 ) n 1 2 ( n 2 ) 2 1 n 1 n 1 2 ( n 2 ) 2 i ( σ i 2 ) n 1 n 2 2 ( D )
with σ m a x ( D ) = i λ i 1 2 n ( D ) ( 1 2 ) n 1 2 ( n 2 ) ( 1 n 1 ) n 1 2 ( n 2 ) i ( σ i 2 ) n 1 n 2 ( D ) 0 .
The singular value decomposition of ( E t D 1 E ) 1 yields ( E t D 1 E ) 1 = V Σ 1 2 V t where Σ 1 = σ 1 0 0 0 σ 2 0 0 0 σ n . As,
A = D 1 E ( E D 1 E ) 1 E t , = svd D 1 E V t Σ 1 2 V E t .
and
σ m a x ( A ) σ m a x ( D 1 ) σ m a x 2 ( E ) σ m a x ( Σ 1 2 ) , σ m a x 2 ( E ) σ m i n ( D ) σ m i n ( 1 2 ) σ m a x 2 ( E ) σ m i n ( D ) σ m i n ( E ) ( σ m a x ( D ) ) 1 2 2 , = σ m a x ( D ) σ m i n ( D ) σ m a x 2 ( E ) σ m i n 2 ( E ) , = i λ i 1 2 n ( D ) ( 1 2 ) n 1 2 ( n 2 ) ( 1 n 1 ) n 1 2 ( n 2 ) i ( σ i 2 ) n 1 n 2 ( D ) n 1 2 i σ i 2 ( D ) , = i λ i 1 n 2 4 n + 4 ( E ) ( 1 2 ) n 2 2 n + 1 n 2 4 n + 4 ( 1 n 1 ) n 2 2 n + 1 2 n 2 8 n + 8 i ( σ i 2 ) n 2 2 n + 1 n 2 4 n + 4 ( E ) n 1 i ( σ i 4 ) ( E ) , = α β .
   □
The increasing behavior of σ_max for A \in R^{n \times n} is given in Theorem 2. Furthermore, A_i = A_i(t) and σ_i = σ_i(t); for simplicity, we omit the dependence of A_i and σ_i on t in Theorem 2.
Theorem 2.
Let A_i \in R^{n_i \times n_i}, i \le j, be submatrices of A_{j+1} \in R^{n_{j+1} \times n_{j+1}}, let σ_i(A_i) denote the largest singular value of A_i, and let σ_{j+1}(A_{j+1}) denote the largest singular value of A_{j+1}. The largest singular value σ_{j+1}(A_{j+1}) satisfies the inequality
\[
\frac{d}{dt}\,\sigma_{j+1}\big(A_{j+1}^{t} A_{j+1}\big) \ge \frac{d}{dt}\,\sigma_i\big(A_i^{t} A_i\big).
\]
Proof. 
For i \le j, the submatrices A_i and the matrices A_{j+1} can be written as
\[
A_i^{t} A_i = \sum_{k=1}^{i} a_k^{t} a_k,
\]
and
\[
A_{j+1}^{t} A_{j+1} = A_i^{t} A_i + r_{j+1}^{t} r_{j+1}.
\]
In Equations (19) and (20), a_k denotes the k-th component of the sub-matrix A_i and r_{j+1} denotes the remaining component of A_{j+1} for j \ge i. Let u_i and v_i denote the left and right singular vectors of A_i^{t} A_i; then
\[
\frac{d}{dt}\,\sigma_i\big(A_i^{t} A_i\big) = \frac{d}{dt}\,\sigma_i\Big(\sum_{k=1}^{i} a_k^{t} a_k\Big) = \frac{u_i^{t}\,\frac{d}{dt}\big(\sum_{k=1}^{i} a_k^{t} a_k\big)\, v_i}{u_i^{t} v_i} = u_i^{t}\,\frac{d}{dt}\Big(\sum_{k=1}^{i} a_k^{t} a_k\Big)\, v_i.
\]
From Equation (21), we have
\[
\frac{d}{dt}\,\sigma_{j+1}\big(A_{j+1}^{t} A_{j+1}\big) = \frac{d}{dt}\,\sigma_i\big(A_i^{t} A_i\big) + \frac{d}{dt}\,\sigma_{j+1}\big(r_{j+1}^{t} r_{j+1}\big) = \frac{d}{dt}\,\sigma_i\big(A_i^{t} A_i\big) + \hat{u}_{j+1}^{t}\,\frac{d}{dt}\big(r_{j+1}^{t} r_{j+1}\big)\, \hat{v}_{j+1}.
\]
In Equation (22), \hat{u}_{j+1} and \hat{v}_{j+1} denote the left and right singular vectors corresponding to the family of matrices A_{j+1}^{t} A_{j+1}, respectively. From Equations (21) and (22), we have
\[
\frac{d}{dt}\,\sigma_{j+1}\big(A_{j+1}^{t} A_{j+1}\big) \ge \frac{d}{dt}\,\sigma_i\big(A_i^{t} A_i\big).
\]
   □
The decreasing behavior of σ_min for A \in R^{n \times n} is given in Theorem 3.
Theorem 3.
Let A_i \in R^{n_i \times n_i}, i \le j, be sub-matrices of A_{j+1} \in R^{n_{j+1} \times n_{j+1}}, let σ̂_i(A_i) denote the smallest singular value of A_i, and let σ̂_{j+1}(A_{j+1}) denote the smallest singular value of A_{j+1}. The smallest singular value σ̂_{j+1}(A_{j+1}) satisfies the inequality
\[
\frac{d}{dt}\,\hat{\sigma}_{j+1}\big(A_{j+1}^{t} A_{j+1}\big) \le \frac{d}{dt}\,\hat{\sigma}_i\big(A_i^{t} A_i\big).
\]
Proof. 
For i \le j, the matrices A_i and A_{j+1} can be written as
\[
A_i^{t} A_i = \sum_{k=1}^{i} a_k^{t} a_k,
\]
and
\[
A_{j+1}^{t} A_{j+1} = A_i^{t} A_i + b_{j+1}^{t} b_{j+1}.
\]
Let \hat{u}_i and \hat{v}_i be the left and right singular vectors of A_i^{t} A_i; then
\[
\frac{d}{dt}\,\hat{\sigma}_i\big(A_i^{t} A_i\big) = \frac{d}{dt}\,\hat{\sigma}_i\Big(\sum_{k=1}^{i} a_k^{t} a_k\Big) = \hat{u}_i^{t}\,\frac{d}{dt}\Big(\sum_{k=1}^{i} a_k^{t} a_k\Big)\, \hat{v}_i.
\]
Now,
\[
\frac{d}{dt}\,\hat{\sigma}_{j+1}\big(A_{j+1}^{t} A_{j+1}\big) = \frac{d}{dt}\,\hat{\sigma}_i\big(A_i^{t} A_i\big) + \hat{u}_{j+1}^{t}\,\frac{d}{dt}\big(b_{j+1}^{t} b_{j+1}\big)\, \hat{v}_{j+1}.
\]
Since σ̂_i and σ̂_{j+1} are the smallest singular values of A_i^{t} A_i and A_{j+1}^{t} A_{j+1} for i \le j, Equations (24) and (25) yield
\[
\frac{d}{dt}\,\hat{\sigma}_{j+1}\big(A_{j+1}^{t} A_{j+1}\big) \le \frac{d}{dt}\,\hat{\sigma}_i\big(A_i^{t} A_i\big).
\]
   □
Next, we aim to fix the largest singular value σ_max(A) of A \in R^{n \times n} so that σ_max = 1. For this purpose, we make use of an inner–outer algorithm. The inner algorithm formulates an optimization problem which, in turn, yields a system of ordinary differential equations (ODEs). The outer algorithm, on the other hand, adjusts the perturbation level ϵ via a fast Newton iteration. For more details, we refer to [58].
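A heavily simplified sketch of the outer iteration just described is given below, assuming A, an initial level eps0, a tolerance tol, and an iteration limit maxit are given. Here, inner_iteration is a placeholder for the ODE-based inner algorithm of [58] (not reproduced here) that, for a fixed perturbation level, returns the smallest eigenvalue xi of I − eps·A·Δ(eps) together with an estimate of its derivative with respect to eps.

% Placeholder sketch of the outer Newton iteration on the perturbation level.
eps_k = eps0;                                 % initial perturbation level
for k = 1:maxit
    [xi, dxi] = inner_iteration(A, eps_k);    % placeholder: xi(eps) and d xi / d eps
    eps_next  = eps_k - xi / dxi;             % one Newton step towards xi(eps) = 0
    if abs(eps_next - eps_k) < tol
        eps_k = eps_next;
        break
    end
    eps_k = eps_next;
end
mu_lower_bound = 1 / eps_k;                   % reciprocal of the critical perturbation level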

5. Totally Positive Bernstein–Vandermonde and Bernstein–Bezoutian Matrices

The following theorem guarantees that if all of the computed minors are non-negative (respectively, positive), then the given matrix is totally positive (respectively, strictly totally positive).
Theorem 4
([59]). A structured matrix A is strictly totally positive if the Neville elimination [21,22] of A and of A^t can be performed without any row or column exchanges, all the multipliers of the Neville elimination of A and A^t are positive, and the diagonal pivots of the Neville elimination of A are positive.
Proof. 
For proof, see [59].    □
The Bernstein–Vandermonde matrix is a strictly totally positive structured matrix for nodes satisfying 0 < x_1 < x_2 < ⋯ < x_{k+1} < 1. This result is a consequence of the following theorem.
Theorem 5
([59]). Let M = (m_{ij})_{1 \le i,j \le k+1} be a Bernstein–Vandermonde structured matrix whose nodes satisfy 0 < x_1 < x_2 < ⋯ < x_{k+1} < 1. Then M^{-1} admits a matrix factorization of the form
\[
M^{-1} = A_1 A_2 \cdots A_n\, D^{-1}\, B_n B_{n-1} \cdots B_1,
\]
where B_i, 1 ≤ i ≤ n, are (n+1) × (n+1) bi-diagonal structured matrices of the form
B i = 1 0 1 0 1 r i + 1 , i 1 r i + 2 , i 1 r n + 1 , i 1 ,
B_i^t, 1 ≤ i ≤ n, are (n+1) × (n+1) bi-diagonal structured matrices of the form
B i t = 1 0 1 0 1 r ^ i + 1 , i 1 r ^ i + 2 , i 1 r ^ n + 1 , i 1 ,
and the matrix D is diagonal of order (n+1) such that
D = d i a g { d 1 , 1 , d 2 , 2 , , d n + 1 , n + 1 } .
For the given matrix M, r i j are the possible multipliers of Neville elimination and are obtained as
m i , j = ( 1 x i ) n j + 1 ( 1 x i j l = 1 j 1 ( x i x i l ) ) ( 1 x i 1 ) n j 2 l = 2 j ( x i 1 x i l ) , j = 1 : n , i = j + 1 : n + 1 .
For M^t, the quantities m̂_{i,j} are the multipliers of the Neville elimination and are obtained as
m ^ i , j = ( n i + 2 ) x j ( i 1 ) ( 1 x j ) , j = 1 : n , i = j + 1 : n + 1 .
Finally, the i-th diagonal element of D is the diagonal pivot of the Neville elimination of M and is expressed as
d i , i = n i 1 ( 1 x i ) n i + 1 l < i ( x i x k ) l = 1 i 1 ( 1 x k ) , i = 1 : n .
Proof. 
It can be seen in [59].    □

Spectral Properties of Totally Positive Bernstein–Vandermonde and Bernstein–Bezoutian Matrices

In this subsection, we present the spectral properties of Bernstein–Vandermonde structured matrices and Bernstein–Bezoutian structured matrices. These structured matrices are taken from [59] for the numerical approximation of structured singular values.
In our numerical experimentation, we make use of the MATLAB functions eig(·) and svd(·) for the numerical approximation of eigenvalues and singular values. The main aim is to numerically approximate lower bounds of the structured singular value, or μ-value, which is a straightforward generalization of the singular value for a class of structured matrices. Furthermore, the MATLAB function mussv is used for the numerical approximation of both lower and upper bounds of the structured singular values.
Example 1.
We consider a 3 × 3 Bernstein–Vandermonde structured matrix
\[
M_1 = \begin{pmatrix} \frac{9}{16} & \frac{3}{8} & \frac{1}{16} \\[2pt] \frac{1}{4} & \frac{1}{2} & \frac{1}{4} \\[2pt] \frac{1}{16} & \frac{3}{8} & \frac{9}{16} \end{pmatrix}.
\]
The first column of Table 1 represents the numerically approximated eigenvalues of M 1 via MATLAB function e i g ( · ) . The second column represents the singular values approximated by MATLAB function s v d ( · ) . The third and fourth columns represent the numerical approximation of both upper and lower bounds of structured singular values approximated by MATLAB function mussv. The numerical approximation to the lower bounds of structured singular values or μ -values approximated by methodology based on low-rank ODEs [58] is represented in the very last column of the table.
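The quantities reported in Table 1 can be reproduced, up to the choice of block structure passed to mussv (a single full block here, which is our assumption), with a few MATLAB calls:

M1 = [9/16 3/8 1/16; 1/4 1/2 1/4; 1/16 3/8 9/16];
ev = eig(M1);              % eigenvalues (first column of Table 1)
sv = svd(M1);              % singular values (second column of Table 1)
bounds = mussv(M1, [3 3]); % [upper, lower] bounds on the structured singular value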
In Figure 1, the left-hand subfigure plots the eigenvalues, singular values, and the numerically approximated lower and upper bounds of the structured singular values against the time t. The blue dotted line starting from the point (1.0, 0.0) at the bottom of the subfigure denotes the spectrum, that is, the eigenvalues of M_1. Since singular values are non-negative, the red dotted line starting from the point (1.0, 0.4) indicates that the eigenvalues are bounded above by the singular values. The purple dotted line starting from the point (1.0, 2.1) shows the lower bounds of the structured singular values approximated by the mussv function, and the turquoise dotted line starting from the point (1.0, 2.3) shows the lower bounds approximated with [58]. The golden dotted line starting from the point (1.0, 2.6) shows that all of these quantities, that is, the eigenvalues, the singular values, and both sets of numerically approximated lower bounds, are strictly bounded by the numerically approximated upper bounds of the structured singular values, or μ-values, obtained with the MATLAB function mussv.
In Figure 1, the right-hand subfigure plots the condition numbers against time. It shows the behaviour of the spectral condition number k_2(M_1), starting from the point (1.0, 124), together with k_1(M_1), starting from the point (1.0, 0.0), and K(M_1), starting from the point (1.0, 100).
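The condition numbers plotted in the right subfigure can be obtained with MATLAB's cond; reading k_1, k_2 and K as the 1-, 2- and infinity-norm condition numbers is our interpretation, with M1 as in the snippet above.

k1   = cond(M1, 1);        % 1-norm condition number
k2   = cond(M1, 2);        % spectral (2-norm) condition number
Kinf = cond(M1, inf);      % infinity-norm condition number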
In Figure 1, the behavior of the continuous branch of eigenvalues (in absolute value) of the three-dimensional Bernstein–Vandermonde matrix is shown with a dotted blue line in the left subfigure. This continuous branch of eigenvalues is dominated by the continuous branch of singular values (red dotted line) for t ≤ 3.5, reflecting the fact that singular values generalize eigenvalues. Surprisingly, however, after t ≈ 3.5 the behavior of both eigenvalues and singular values changes abruptly. The dotted line in light olive (the topmost line) represents the numerically approximated upper bounds of the structured singular values computed with the MATLAB function mussv. All continuous branches of eigenvalues, singular values, and numerically approximated lower bounds of the structured singular values are bounded above by this upper bound, which is possible because the structured singular value, or μ-value, generalizes both eigenvalues and singular values.
Example 2.
The following Bernstein–Bezoutian matrix is taken from [16]. The polynomials
\[
p(a) = 4 - 5a^2 + a^4 = 4\,\alpha_0^{(4)}(a) + 4\,\alpha_1^{(4)}(a) + \tfrac{19}{6}\,\alpha_2^{(4)}(a) + \tfrac{3}{2}\,\alpha_3^{(4)}(a),
\]
and
\[
q(a) = \tfrac{1}{2} - \tfrac{1}{4}a - 2a^2 + a^3 = \tfrac{1}{2}\,\alpha_0^{(4)}(a) + \tfrac{7}{16}\,\alpha_1^{(4)}(a) + \tfrac{1}{24}\,\alpha_2^{(4)}(a) - \tfrac{7}{16}\,\alpha_3^{(4)}(a) - \tfrac{3}{4}\,\alpha_4^{(4)}(a),
\]
give the matrix
\[
M_2 = \begin{pmatrix} 1 & \frac{17}{6} & \frac{10}{3} & 3 \\[2pt] \frac{17}{6} & \frac{157}{36} & \frac{83}{18} & 4 \\[2pt] \frac{10}{3} & \frac{83}{18} & \frac{187}{36} & \frac{19}{4} \\[2pt] 3 & 4 & \frac{19}{4} & \frac{9}{2} \end{pmatrix}.
\]
The first column appearing in Table 2 denotes the eigenvalues of M 2 computed by MATLAB function e i g ( · ) . The second column represents the singular values approximated with MATLAB function s v d ( · ) . The third and fourth columns represent the approximation (numerically) of both upper and lower bounds of structured singular values or μ -values approximated by MATLAB function mussv. The numerically computed lower bounds of structured singular values or μ -values via [58] are represented in the very last column.
The behaviour of the spectral condition numbers k_2(M_i), the condition numbers k_1(M_i), and K(M_i) for i = 2, 3, 4 is shown in Table 3.

6. Cauchy—Polynomial-Vandermonde Matrices (CPV Matrices)

The following lemma by Zhao Yang et al. [60] provides the determinant of CPV matrices.
Lemma 2
([60]). Let A \in R^{n \times n} be a Cauchy—polynomial-Vandermonde matrix. Then its determinant d(A) is given by
\[
d(A) = \prod_{i=0}^{n-k-1} a_i \cdot \frac{\prod_{1 \le t < s \le n} (x_s - x_t)\; \prod_{1 \le t < s \le k} (y_s - y_t)}{\prod_{1 \le s \le n,\ 1 \le t \le k} (x_s + y_t)}.
\]
Proof. 
For proof, see [60].    □
The minors of Cauchy—polynomial-Vandermonde matrices are computed with the help of the following result.
Theorem 6
([60]). Let A R n × n be a Cauchy—polynomial-Vandermonde matrix, then
d ( A ( i j + 1 : i , 1 : j ) ) = i j + 1 t < s i ( x s x t ) 1 t < s j ( y s y t ) i j + 1 s i , 1 t j ( x s + y t ) , j 1 r = 0 j k 1 a r i j + 1 s i , 1 t k ( x s + x t ) 1 t < s k ( y s y t ) i j + 1 s i , 1 t k ( x s + y t ) , j k ,
which holds true for all i j .
Furthermore, we have that
d ( A ( 1 : i , j i + 1 : j ) ) = 1 t < s i ( x s x t ) j i + 1 t < s j ( y s y t ) 1 s i , j i + 1 t j ( x s + y t ) , j k r = 0 j k 1 a r 1 t < s i ( x s x t ) j i + 1 t < s k ( y s y t ) 1 s i , j i + 1 t k ( x s + y t ) , j i + 1 k , j k + 1 r = 0 j k 1 a r 1 t < s i ( x s x t ) , j i + 1 = k + 1 r = j i k + 1 j k 1 a r 1 t < s i ( x s x t ) g ( i , j , k ) , j i + 1 k + 2 .
where
g ( i , j , k ) = r = 0 j i k ( a r s λ ( j i k , r ) ( x 1 : i ) ) , j i k 1 .
with
λ ( j i k , r ) = ( j i k , , j i k , r ) R i
Proof. 
For proof, see [60].    □

Spectral Properties of Cauchy—Polynomial-Vandermonde Matrices

In this subsection, we present and discuss the meaningful spectral properties of Cauchy—polynomial-Vandermonde structured matrices. For the numerical approximation of the desired structured singular values, or μ-values, we take a class of test matrices from [60].
We make use of the built-in MATLAB functions eig(·) and svd(·) to numerically approximate the eigenvalues and singular values. Our main aim is to numerically approximate lower bounds of the structured singular value, a straightforward generalization of the singular value for structured matrices. Furthermore, we make use of the built-in MATLAB function mussv to numerically approximate both lower and upper bounds of the structured singular values, or μ-values, for the structured matrices.
Example 3.
Consider a 2 × 2 Cauchy—polynomial-Vandermonde matrix
\[
M_3 = \begin{pmatrix} \frac{6}{5} & \frac{6}{7} \\[2pt] \frac{6}{11} & \frac{6}{13} \end{pmatrix}.
\]
The matrix M_3 is obtained with the generators \{a_i\}_{0 \le i \le n-k-1}, \{x_i\}_{1 \le i \le n}, and \{y_j\}_{1 \le j \le k}, which are given as
\[
a_r = \Big(\tfrac{1}{2}\Big)^{r+1}, \quad r = 0, 1, 2, \ldots, n-k-1, \qquad x_i = i - \tfrac{1}{2}, \quad i = 1, 2, 3, \ldots, n, \qquad y_j = \tfrac{j}{3}, \quad j = 1, 2, 3, \ldots, k.
\]
For M_3, we consider
\[
a_0 = \tfrac{1}{2},\ a_1 = \tfrac{1}{4}, \qquad x_1 = \tfrac{1}{2},\ x_2 = \tfrac{3}{2},\ x_3 = \tfrac{5}{2}, \qquad \text{and} \qquad y_1 = \tfrac{1}{3},\ y_2 = \tfrac{2}{3},\ y_3 = 1
\]
in the generators above.
The first column of Table 4 gives the approximated eigenvalues of M_3 computed by the MATLAB function eig(·). The second column gives the singular values computed with the MATLAB function svd(·). The third and fourth columns give the numerically approximated upper and lower bounds of the structured singular values obtained with the MATLAB function mussv. The numerical approximation of the lower bounds of the structured singular values, or μ-values, obtained with the algorithm of [58] is given in the last column.
In Figure 2, the left-hand subfigure plots the eigenvalues, singular values, and the numerically approximated lower and upper bounds of the structured singular values, or μ-values, against the time t. The blue dotted line identified with the point (2.0, 1.75) denotes the spectrum, that is, the eigenvalues of M_3. Since singular values are non-negative, the red dotted line starting from the point (1.0, 1.60) indicates that all eigenvalues are bounded above by the singular values. The purple dotted line starting from the point (1.0, 0.40) shows the lower bounds of the structured singular values, or μ-values, approximated by the mussv function, and the turquoise dotted line starting from the point (1.0, 0.26) shows the lower bounds approximated with [58]. The golden dotted line starting from the point (1.0, 0.60) shows that all of these quantities, that is, the positive eigenvalues, the singular values, and both sets of numerically approximated lower bounds, are strictly bounded by the numerically approximated upper bounds of the structured singular values obtained with the MATLAB function mussv.
In Figure 2, the right-hand subfigure plots the condition numbers against time. It shows the behaviour of the spectral condition number k_2(M_3), with end point (3.0, 8000), together with k_1(M_3), with end point (3.0, 11,000), and K(M_3), with end point (3.0, 15,000).

7. Quasi-Rational Bernstein–Vandermonde Matrices

The following result from [60] gives the determinant of quasi-rational Bernstein–Vandermonde structured matrices.
Theorem 7
([60]). Let M b = ( m i j ) R n × n be a quasi-rational Bernstein–Vandermonde matrix. Then, the determinant d ( · ) is computed as
d ( M b ( i j + 1 : i | i : j ) ) = n 1 0 n 1 j 1 k = i j + 1 i W ( x k ) k = 0 j 1 w k k = i j + 1 ( 1 x k ) n j i j + 1 t < s i ( x s x t ) , i j n .
In addition,
d M b ( 1 : i | j i + 1 : j ) = n 1 j 1 n 1 j 1 k = 1 i W ( x k ) k = 1 i x k j i k = j i j 1 w k k = 1 i ( 1 x k ) n j 1 t < s i ( x s x t ) , i < j
and
d e t ( M b ) = ( w n 1 W ( x n ) k = 1 n 1 ( x n x k ) w k = 1 n 1 ( 1 x k ) ) n 1 0 n 1 n 2 k = 1 n 1 W ( x k ) k = 0 n 2 w k 1 t < s n 1 ( x s x t ) , d e t ( M b ( 2 : n ) ) = ( w n 1 x n W ( x n ) k = 2 n 1 ( x n x k ) w k = 2 n 1 ( 1 x k ) ) n 1 1 n 1 n 2 k = 2 n 1 W ( x k ) k = 1 n 2 w k k = 2 n 1 x k 2 t < s ( n 1 ) ( x S x t ) .
Proof. 
For proof, see [60].    □
The parametric matrix P r M ( M b ) for a quasi-rational Bernstein–Vandermonde matrix is given by the following theorem.
Theorem 8.
Let M_b = (m_{ij}) \in R^{n \times n} be a non-singular quasi-rational Bernstein–Vandermonde matrix. The parametric matrix PrM(M_b) \in R^{n \times n} is the following matrix
P r M ( M b ) = d i i = n 1 i 1 w i 1 ( 1 x i ) n i W ( x i ) k = 1 i ( x i x k ) ( 1 x k ) , 1 i n 2 d n 1 , n 1 = n 1 n 2 w n 2 ( 1 x n ) W ( x n ) k = 2 n 1 ( x n x k ) ( 1 x k ) , i = n 1 d n , n = ( w k = 1 n 1 ( 1 x k ) w n 1 W ( x n ) k = 1 n 1 ( x n x k ) ) B e t a W ( x n ) ( 1 x n 1 ) W ( x n 1 ) ( 1 x 1 ) ( 1 x n ) k = 2 n 1 ( x n x k ) k = 1 n 2 ( x n 1 x k ) ( 1 x k ) , i = n , 1 i n . ,
P r M ( m b ) i j = x i j = w ( x i 1 ) ( 1 x i ) n j ( 1 x i j ) ( x i x i 1 ) W ( x i ) ( 1 x i 1 ) n j + 1 ( x i 1 x i j ) k = i j + 1 i 2 x i x k x i 1 x k , ( i , j ) ( n , n 1 ) x n , n 1 = w ( x n ) ( 1 x n ) 2 ( x n 1 x 1 ) W ( x n 1 ) ( 1 x n ) ( 1 x 1 ) ( x n x n 1 ) k = 2 n 2 x n 1 x k x n x k , ( i , j ) = ( n , n 1 ) , 1 j i n . and
P r M ( m b ) i j = B i j = ( n j + 1 ) w j 1 x i ( j 1 ) w j 2 ( 1 x i ) , 1 i n 2 δ = ( w n 1 x n W ( x n ) k = 2 n 1 ( x n x k ) w k = 2 n 1 ( 1 x k ) ) B e t a n 1 1 . . . n 1 n 2 k = 1 n 2 w k k = 2 n 1 x k 2 t < s n 1 k = 2 ( x s x t ) d n 1 , n 1 t = 3 n d t 2 , t 2 β t 2 , t 1 γ t 1 , t 2 k = 2 n 1 W ( x k ) , 1 i i n .

Spectral Properties of Quasi-Rational Bernstein–Vandermonde Matrices

In this subsection, we present important and meaningful spectral properties of quasi-rational Bernstein–Vandermonde matrices. These matrices are taken from [60] for the numerical approximation of structured singular values.
We make use of the well-known MATLAB functions eig(·) and svd(·) to numerically approximate the eigenvalues and singular values. Our main objective is to numerically approximate lower bounds of the structured singular value, or μ-value, which is a straightforward generalization of the singular value for constant structured matrices. Furthermore, we make use of the MATLAB function mussv to numerically compute both lower and upper bounds of the structured singular values, or μ-values, for constant structured matrices.
Example 4.
Consider a 6 × 6 quasi-rational Bernstein–Vandermonde matrix
\[
M_4 = \begin{pmatrix}
2.9091 \times 10^{5} & 2.1818 \times 10^{5} & 3.6364 \times 10^{4} & 2.5455 \times 10^{3} & 8.1818 \times 10^{1} & 1.0000 \\
7.0344 \times 10^{3} & 1.1107 \times 10^{4} & 3.8972 \times 10^{3} & 5.7432 \times 10^{2} & 3.8864 \times 10^{1} & 1.0000 \\
7.0691 \times 10^{2} & 1.7673 \times 10^{3} & 9.8182 \times 10^{2} & 2.2909 \times 10^{2} & 2.4545 \times 10^{1} & 1.0000 \\
1.2605 \times 10^{2} & 4.4489 \times 10^{2} & 3.4893 \times 10^{2} & 1.1494 \times 10^{2} & 1.7386 \times 10^{1} & 1.0000 \\
3.0504 \times 10^{1} & 1.4299 \times 10^{2} & 1.4895 \times 10^{2} & 6.5164 \times 10^{1} & 1.3091 \times 10^{1} & 9.9999 \times 10^{-1} \\
8.8778 & 5.3267 \times 10^{1} & 7.1023 \times 10^{1} & 3.9772 \times 10^{1} & 1.0227 \times 10^{1} & 9.6161 \times 10^{-1}
\end{pmatrix}.
\]
The eigenvalues and singular values are computed as 10^5 × (2.9639, 0.0642, 0.0030, 0.0002, 0, 0) and 10^5 × (3.6568, 0.0555, 0.0030, 0.0002, 0, 0), respectively. The first and second columns of Table 5 give the numerically approximated upper and lower bounds of the structured singular values, or μ-values, obtained with the MATLAB function mussv. The numerically approximated lower bounds of the structured singular values, or μ-values, obtained with the algorithm of [58] are given in the last column of Table 5.
In Figure 3, the left-hand subfigure plots the eigenvalues, singular values, and the numerically approximated lower and upper bounds of the structured singular values, or μ-values, against the time t. The blue dotted line starting from the point (1.0, 0.3) at the bottom of the subfigure denotes the spectrum, that is, the eigenvalues of M_4. Since singular values are non-negative, the red dotted line starting from the point (1.0, 0.37) indicates that the numerically approximated eigenvalues are bounded above by the singular values. The purple dotted line starting from the point (1.0, 0.37) shows the lower bounds of the structured singular values, or μ-values, approximated by the MATLAB mussv function, and the turquoise dotted line starting from the point (1.0, 0.37) shows the lower bounds approximated with the algorithm of [58]. The golden dotted line starting from the point (1.0, 0.37) shows that all of these quantities, that is, the eigenvalues, the singular values, and both sets of numerically approximated lower bounds, are strictly bounded by the numerically computed upper bounds of the structured singular values obtained with the MATLAB function mussv.
In Figure 3, the right-hand subfigure plots the condition numbers against time. It shows the behaviour of the spectral condition number k_2(M_4), with end point (3.0, 6000), together with k_1(M_4), with end point (3.0, 12,000), and K(M_4), with end point (3.0, 4000).

8. Numerical Testing for Matrices in Higher Dimensions

In this section, we compare the numerically approximated bounds of the structured singular values for Bernstein–Vandermonde structured matrices in higher dimensions. For the numerical testing, we choose Bernstein–Vandermonde matrices of sizes 10, 15, 20, 25, and 30, respectively.
The first column of Table 6 denotes the size of the square Bernstein–Vandermonde structured matrices under consideration. The second and third columns give the numerically approximated upper and lower bounds of the structured singular values, or μ-values, obtained with the MATLAB function mussv, respectively. The fourth and last column of Table 6 shows the numerically approximated lower bounds of the structured singular value, or μ-value, computed with the algorithm of [58]. For size 10, the lower bound of the structured singular value approximated numerically by the MATLAB mussv function is much better. However, for sizes 15, 20, 25, and 30, the lower bounds approximated by [58] are significantly better than those approximated by the MATLAB function mussv.
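A sketch of the experiment behind Table 6 is given below; the choice of equispaced interior nodes and of a single full uncertainty block are our assumptions, since the paper does not list the generators used. It reuses the bernstein_vandermonde helper from the sketch after Definition 2.

% Sketch: bounds on the structured singular value for Bernstein-Vandermonde
% matrices of increasing size.
for n = [10 15 20 25 30]
    x = (1:n) / (n + 1);                     % assumed nodes in (0,1)
    B = bernstein_vandermonde(x, n - 1);     % n-by-n Bernstein-Vandermonde matrix
    bounds = mussv(B, [n n]);                % [upper, lower] bounds
    fprintf('n = %2d: upper = %.4e, lower = %.4e\n', n, bounds(1), bounds(2));
end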
Algorithm 1 Approximate perturbation level.
  • procedure: given A, B, the tolerance tol > 0, ϵ^(0) (bound from below), ϵ_l (bound from above), ϵ_u (given upper bound), and i_max (the number of initial eigenvalues)
  • for i ← 1 to i_max do
  • Determine the solution of the system of ODEs (4.10) in [58] for all cases, beginning with the initial choice Δ_i(0).
  • Let Δ_i denote the stationary solution and let ξ_i denote the smallest eigenvalue of the perturbed structured matrix I − ϵ^(0) A Δ_i.
  • Take i = argmin |ξ_i|.
  • Take Δ^(0) = Δ_i, ξ^(0) = ξ_i, and x^(0), y^(0) the computed eigenvectors.
  • Determine ϵ^(1) via a single step of the fast Newton iteration.
  • Set k = 1.
  • while |ϵ^(k) − ϵ^(k−1)| > tol do
  • Determine the solution of the ODEs (4.10) in [58] with ϵ = ϵ^(k), starting from Δ^(0) = Δ^(k−1).
  • Consider Δ^(k), the stationary solution of (4.10) in [58].
  • Consider ξ^(k), the smallest eigenvalue of the perturbed structured matrix I − ϵ^(k) A Δ^(k).
  • if |ξ^(k)| > tol, then
  • set the perturbation level ϵ_l = ϵ^(k)
  • Determine a suitable value of ϵ^(k+1) with one step of the fast Newton iteration.
  • end procedure

9. Conclusions

In this article, we have presented and analyzed the numerical approximation of a mathematical tool commonly known as the structured singular value for a class of totally positive Bernstein–Vandermonde structured matrices, Bernstein–Bezoutian structured matrices, Cauchy—polynomial-Vandermonde structured matrices, and quasi-rational Bernstein–Vandermonde structured matrices. The numerical computation of both eigenvalues and singular values for these classes of structured matrices is presented and investigated with the MATLAB functions eig(·) and svd(·). The new contribution of this article is the comparison of the numerically approximated lower bounds of the structured singular value with those obtained by the MATLAB function mussv available in the MATLAB Robust Control Toolbox.

Author Contributions

M.-U.R. introduced the problem, and J.A. and N.F. validated the results. M.-U.R. and T.H.R. wrote the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No datasets were generated or analyzed during the current study.

Acknowledgments

J. Alzabut and N. Fatima are thankful to Prince Sultan University for its endless support. J. Alzabut is appreciative of OSTIM Technical University for unwavering assistance.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. James, D.; Ioana, D.; Olga, H.; Plamen, K. Accurate and Efficient Expression Evaluation and Linear Algebra; Acta Numerica; Cambridge University Press: Cambridge, UK, 2008; Volume 10, pp. 87–145. [Google Scholar]
  2. Fallat, S.M.; Johnson, C.R. Totally Nonnegative Matrices; Princeton University Press: Princeton, NJ, USA, 2011. [Google Scholar]
  3. Ando, T. Totally Positive Matrices; Cambridge University Press: Cambridge, UK, 2010; p. 181. [Google Scholar]
  4. Andalib, T.W.; Azizan, N.A.; Halim, H.A. Case Matrices and Connections of Entrepreneurial Career Management Module. Int. J. Entrep. 2019, 23, 1–10. [Google Scholar]
  5. James, D.; Plamen, K. The accurate and efficient solution of a totally positive generalized Vandermonde linear system. SIAM J. Matrix Anal. Appl. 2005, 27, 142–152. [Google Scholar]
  6. Koev, P. Accurate eigenvalues and SVDs of totally nonnegative matrices. SIAM J. Matrix Anal. Appl. 2005, 27, 1–23. [Google Scholar] [CrossRef]
  7. Koev, P. Accurate computations with totally nonnegative matrices. SIAM J. Matrix Anal. Appl. 2007, 29, 731–751. [Google Scholar] [CrossRef]
  8. Marco, A.; Martínez, J.J. A fast and accurate algorithm for solving Bernstein-Vandermonde linear systems. Linear Algebra Its Appl. 2007, 4, 616–628. [Google Scholar] [CrossRef]
  9. Marco, A.; Martínez, J.J. Polynomial least squares fitting in the Bernstein basis. Linear Algebra Its Appl. 2010, 433, 1254–1264. [Google Scholar] [CrossRef]
  10. Farin, G.E.; Farin, G. Curves and Surfaces for CAGD: A Practical Guide; Morgan Kaufmann: Burlington, MA, USA, 2002. [Google Scholar]
  11. Farin, G. Curves and Surfaces for Computer-Aided Geometric Design: A Practical Guide; Elsevier: Amsterdam, The Netherlands, 2014. [Google Scholar]
  12. Farin, G.; Hamann, B. Current trends in geometric modeling and selected computational applications. J. Comput. Phys. 1997, 138, 1–15. [Google Scholar] [CrossRef]
  13. Farin, G.E.; Hansford, D. The Geometry Toolbox for Graphics and Modeling; AK Peters/CRC Press: Boca Raton, FL, USA, 2017. [Google Scholar]
  14. Forrest, A.R. Interactive interpolation and approximation by Bézier polynomials. Comput. J. 1972, 15, 71–79. [Google Scholar] [CrossRef]
  15. Wolters, H.J.; Farin, G. Geometric curve approximation. Comput. Aided Geom. Des. 1997, 14, 499–513. [Google Scholar] [CrossRef]
  16. Bini, D.A.; Gemignani, L. Bernstein-bezoutian matrices. Theor. Comput. Sci. 2004, 315, 319–333. [Google Scholar] [CrossRef]
  17. Brown, W.S. On Euclid’s algorithm and the computation of polynomial greatest common divisors. J. ACM 1971, 18, 478–504. [Google Scholar] [CrossRef]
  18. Collins, G.E. Subresultants and reduced polynomial remainder sequences. J. ACM 1967, 14, 128–142. [Google Scholar] [CrossRef]
  19. Bini, D.A.; Gemignani, L. Fast fraction-free triangularization of Bezoutians with applications to sub-resultant chain computation. Linear Algebra Its Appl. 1998, 284, 19–39. [Google Scholar] [CrossRef]
  20. Barnett, S. A Bezoutian Matrix for Chebyshev Polynomials; University of Bradford, School of Mathematical Sciences: Bradford, UK, 1987. [Google Scholar]
  21. Gemignani, L. Fast and Stable Computation of the Barycentric Representation of Rational Interpolants. Calcolo 1996, 33, 371–388. [Google Scholar] [CrossRef]
  22. Gohberg, I.; Olshevsky, V. Fast inversion of Chebyshev-Vandermonde matrices. Numer. Math. 1994, 67, 71–92. [Google Scholar] [CrossRef]
  23. Kailath, T.; Olshevsky, V. Displacement-structure approach to polynomial Vandermonde and related matrices. Linear Algebra Its Appl. 1997, 261, 49–90. [Google Scholar] [CrossRef]
  24. Rost, K. Generalized companion matrices and matrix representations for generalized Bezoutians. Linear Algebra Its Appl. 1993, 193, 151–172. [Google Scholar] [CrossRef]
  25. Pan, V. Structured Matrices and Polynomials: Unified Superfast Algorithms; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2001. [Google Scholar]
  26. Phillips, G.M. Interpolation and Approximation by Polynomials; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2003. [Google Scholar]
  27. Junghanns, P.; Oestreich, D. Numerische Lösung des Staudammproblems mit Drainage. Z. Angew. Math. Mech. 1989, 69, 83–92. [Google Scholar] [CrossRef]
  28. Weideman, J.A.C.; Laurie, D.P. Quadrature rules based on partial fraction expansions. Numer. Algorithms 2000, 24, 159–178. [Google Scholar] [CrossRef]
  29. Dailey, M.; Dopico, F.M.; Ye, Q. Relative perturbation theory for diagonally dominant matrices. SIAM J. Matrix Anal. Appl. 2014, 35, 1303–1328. [Google Scholar] [CrossRef]
  30. Dailey, M.; Dopico, F.M.; Ye, Q. A new perturbation bound for the LDU factorization of diagonally dominant matrices. SIAM J. Matrix Anal. Appl. 2014, 35, 904–930. [Google Scholar] [CrossRef]
  31. Demmel, J.; Kahan, W. Accurate singular values of bidiagonal matrices. SIAM J. Sci. Stat. Comput. 1990, 11, 873–912. [Google Scholar] [CrossRef]
  32. Demmel, J.; Gu, M.; Eisenstat, S.; Slapničar, I.; Veselić, K.; Drmač, Z. Computing the singular value decomposition with high relative accuracy. Linear Algebra Its Appl. 1999, 299, 21–80. [Google Scholar] [CrossRef]
  33. Demmel, J.; Koev, P. Accurate SVDs of weakly diagonally dominant M-matrices. Numer. Math. 2004, 98, 99–104. [Google Scholar] [CrossRef]
  34. Dopico, F.M.; Koev, P. Accurate symmetric rank revealing and eigendecompositions of symmetric structured matrices. SIAM J. Matrix Anal. Appl. 2006, 28, 1126–1156. [Google Scholar] [CrossRef]
  35. Dopico, F.; Koev, P. Perturbation theory for the LDU factorization and accurate computations for diagonally dominant matrices. Numer. Math. 2011, 119, 337–371. [Google Scholar] [CrossRef]
  36. Arif, M.S.; Abodayeh, K.; Nawaz, Y. Numerical Schemes for Fractional Energy Balance Model of Climate Change with Diffusion Effects. Emerg. Sci. J. 2023, 7, 808–820. [Google Scholar] [CrossRef]
  37. Boros, T.; Kailath, T.; Olshevsky, V. Fast algorithms for solving Vandermonde and Chebyshev-Vandermonde systems. Stanf. Inf. Syst. Lab. Rep. 1994. [Google Scholar]
  38. Higham, N.J. Fast solution of Vandermonde-like systems involving orthogonal polynomials. IMA J. Numer. Anal. 1988, 8, 473–486. [Google Scholar] [CrossRef]
  39. Higham, N.J. Stability analysis of algorithms for solving confluent Vandermonde-like systems. SIAM J. Matrix Anal. Appl. 1990, 11, 23–41. [Google Scholar] [CrossRef]
  40. Kailath, T.; Olshevsky, V. Displacement structure approach to Chebyshev-Vandermonde and related matrices. Integral Equ. Oper. Theory 1995, 22, 65–92. [Google Scholar] [CrossRef]
  41. Reichel, L.; Opfer, G. Chebyshev-vandermonde systems. Math. Comput. 1991, 57, 703–721. [Google Scholar] [CrossRef]
  42. Verde-Star, L. Inverses of generalized Vandermonde matrices. J. Math. Anal. Appl. 1988, 131, 341–353. [Google Scholar] [CrossRef]
  43. Marco, A.; Martínez, J.-J.; Peña, J.M. Accurate bidiagonal decomposition of totally positive Cauchy–Vandermonde matrices and applications. Linear Algebra Its Appl. 2017, 517, 63–84. [Google Scholar] [CrossRef]
  44. Huang, R.; Chu, D. Relative perturbation analysis for eigenvalues and singular values of totally nonpositive matrices. SIAM J. Matrix Anal. Appl. 2015, 36, 476–495. [Google Scholar] [CrossRef]
  45. Huang, R.; Chu, D. Computing singular value decompositions of parameterized matrices with total nonpositivity to high relative accuracy. J. Sci. Comput. 2017, 71, 682–711. [Google Scholar] [CrossRef]
  46. Dopico, F.M.; Molera, J.M.; Moro, J. An orthogonal high relative accuracy algorithm for the symmetric eigenproblem. SIAM J. Matrix Anal. Appl. 2003, 25, 301–351. [Google Scholar] [CrossRef]
  47. Dopico, F.M.; Koev, P.; Molera, J.M. Implicit standard Jacobi gives high relative accuracy. Numer. Math. 2009, 113, 519–553. [Google Scholar] [CrossRef]
  48. Huang, R. Rank structure properties of rectangular matrices admitting bidiagonal-type factorizations. Linear Algebra Its Appl. 2015, 465, 1–14. [Google Scholar] [CrossRef]
  49. Yang, Z.; Huang, R.; Zhu, W.; Liu, J. Accurate solutions of structured generalized Kronecker product linear systems. Numer. Algorithms 2021, 87, 797–818. [Google Scholar] [CrossRef]
  50. Alfa, A.; Xue, J.; Ye, Q. Accurate computation of the smallest eigenvalue of a diagonally dominant M-matrix. Math. Comput. 2002, 71, 217–236. [Google Scholar] [CrossRef]
  51. Ye, Q. Computing singular values of diagonally dominant matrices to high relative accuracy. Math. Comput. 2008, 77, 2195–2230. [Google Scholar] [CrossRef]
  52. Huang, R. A test and bidiagonal factorization for certain sign regular matrices. Linear Algebra Its Appl. 2013, 438, 1240–1251. [Google Scholar] [CrossRef]
  53. Yang, Z.; Ma, X.-X. Computing eigenvalues of quasi-rational Bernstein-Vandermonde matrices to high relative accuracy. Numer. Linear Algebra Appl. 2022, 29, 2421. [Google Scholar] [CrossRef]
  54. Ameer, E.; Nazam, M.; Aydi, H.; Arshad, M.; Mlaiki, N. On (Λ, Y, R)-contractions and applications to nonlinear matrix equations. Mathematics 2019, 7, 443. [Google Scholar] [CrossRef]
  55. Riazat, M.; Azizi, A.; Naderi Soorki, M.; Koochakzadeh, A. Robust Consensus in a Class of Fractional-Order Multi-Agent Systems with Interval Uncertainties Using the Existence Condition of Hermitian Matrices. Axioms 2023, 12, 65. [Google Scholar] [CrossRef]
  56. Ali, I.; Saleem, M.T. Applications of Orthogonal Polynomials in Simulations of Mass Transfer Diffusion Equation Arising in Food Engineering. Symmetry 2023, 15, 527. [Google Scholar] [CrossRef]
  57. Pan, V.Y.; Tsigaridas, E.P. Nearly optimal computations with structured matrices. In Proceedings of the 2014 Symposium on Symbolic-Numeric Computation, Shanghai, China, 28–31 July 2014; pp. 21–30. [Google Scholar]
  58. Guglielmi, N.; Rehman, M.-U.; Kressner, D. A novel iterative method to approximate structured singular values. SIAM J. Matrix Anal. Appl. 2017, 38, 361–386. [Google Scholar] [CrossRef]
  59. Marco, A.; Martínez, J.-J. Accurate computations with totally positive Bernstein-Vandermonde matrices. Electron. J. Linear Algebra 2013, 26, 357–380. [Google Scholar] [CrossRef]
  60. Yang, Z.; Huang, R.; Zhu, W. Accurate computations for eigenvalues of products of Cauchy-polynomial-Vandermonde matrices. Numer. Algorithms 2020, 85, 329–351. [Google Scholar] [CrossRef]
  61. Braatz, R.P.; Young, P.M.; Doyle, J.C.; Morari, M. Computational complexity of μ calculation. IEEE Trans. Autom. Control. 1994, 39, 1000–1002. [Google Scholar] [CrossRef]
  62. Levin, D. The approximation power of moving least-squares. Math. Comput. 1998, 67, 1517–1531. [Google Scholar] [CrossRef]
  63. Jabbari, F. Linear System Theory II, Chapter 3: Eigenvalue, Singular Values, Pseudoinverse; The Henry Samueli School of Engineering, University of California: Irvine, CA, USA, 2015; Available online: http://gram.eng.uci.edu/fjabbari/me270b/me270b.html (accessed on 20 August 2023).
Figure 1. Behaviour of eigenvalues, singular values, structured singular values and condition numbers.
Figure 2. Behavior of eigenvalues, singular values, structured singular values, and condition numbers.
Figure 3. Behavior of eigenvalues, singular values, structured singular values, and condition numbers.
Table 1. Spectral properties of M_1.
Eigenvalues | Singular Values | ssv (Upper) | ssv (Lower) | ssv (Lower New)
0.9153, 0.5000, 0.2097 | 0.9698, 0.5113, 0.1936 | 0.9698 | 0.9698 | 0.9698
Table 2. Spectral properties of M_2.
Eigenvalues | Singular Values | ssv (Upper) | ssv (Lower) | ssv (Lower New)
−0.9403, 0, 0.4361, 15.5597 | 15.5597, 0.9403, 0.4361, 0 | 0.0643 | 0.0621 | 0.0634
Table 3. The condition numbers of M_2.
k_1(M_2) | k_2(M_2) | K(M_2)
8.4464 × 10^16 | 3.3158 × 10^16 | 8.4464 × 10^16
Table 4. Spectral properties of M_3.
Eigenvalues | Singular Values | ssv (Upper) | ssv (Lower) | ssv (Lower New)
1.6079, 0.0537 | 1.6378, 0.0527 | 1.7415 | 1.7415 | 1.7415
Table 5. Spectral properties of M_4.
ssv (Upper) | ssv (Lower) | ssv (Lower New)
3.6568 × 10^5 | 3.6568 × 10^5 | 3.6568 × 10^5
Table 6. Comparison of bounds of structured singular values for Bernstein–Vandermonde matrices in higher dimensions.
n | ssv (Upper) | ssv (Lower) | ssv (Lower New)
10 | 4.5711 × 10^−4 | 4.5711 × 10^−4 | 4.4640 × 10^−4
15 | 2.3241 × 10^−8 | 2.2732 × 10^−8 | 2.2878 × 10^−8
20 | 3.3512 × 10^−12 | 3.30011 × 10^−12 | 3.30035 × 10^−12
25 | 7.9028 × 10^−16 | 7.7121 × 10^−16 | 7.7987 × 10^−16
30 | 6.2264 × 10^−24 | 0.0005 × 10^−21 | 0.0005 × 10^−24