Article

A Modified Conjugate Residual Method and Nearest Kronecker Product Preconditioner for the Generalized Coupled Sylvester Tensor Equations

Tao Li, Qing-Wen Wang and Xin-Fang Zhang
1 Department of Mathematics, Key Laboratory of Engineering Modeling and Statistical Computation of Hainan Province, Hainan University, Haikou 570228, China
2 Department of Mathematics, Shanghai University, Shanghai 200444, China
3 Collaborative Innovation Center for the Marine Artificial Intelligence, Shanghai 200444, China
* Author to whom correspondence should be addressed.
Submission received: 17 April 2022 / Revised: 10 May 2022 / Accepted: 13 May 2022 / Published: 18 May 2022

Abstract: This paper is devoted to proposing a modified conjugate residual method for solving the generalized coupled Sylvester tensor equations. To further improve its convergence rate, we derive a preconditioned modified conjugate residual method based on the Kronecker product approximations for solving the tensor equations. A theoretical analysis shows that the proposed method converges to an exact solution for any initial tensor within at most finitely many steps in the absence of round-off errors. Compared with a modified conjugate gradient method, the obtained numerical results illustrate that our methods perform much better in terms of the number of iteration steps and computing time.

1. Introduction

As is well known, tensor equations, as generalizations of matrix equations, have been studied intensively in both theory and applications. In particular, the generalized coupled Sylvester tensor equations have drawn significant attention because they play an increasingly important role in control theory [1,2,3,4,5,6,7,8], finite difference methods [9], finite element methods [10], information retrieval [11], etc. This motivated us to propose two effective iterative methods for solving such tensor equations.
In this paper, we are interested in developing two iterative algorithms to solve the following generalized coupled Sylvester tensor equations:
$$\begin{cases}
\mathcal{X}_1\times_1A_{11}+\mathcal{X}_2\times_2A_{12}+\cdots+\mathcal{X}_{n-1}\times_{n-1}A_{1(n-1)}+\mathcal{X}_n\times_nA_{1n}=\mathcal{B}_1,\\
\mathcal{X}_2\times_1A_{21}+\mathcal{X}_3\times_2A_{22}+\cdots+\mathcal{X}_n\times_{n-1}A_{2(n-1)}+\mathcal{X}_1\times_nA_{2n}=\mathcal{B}_2,\\
\qquad\vdots\\
\mathcal{X}_n\times_1A_{n1}+\mathcal{X}_1\times_2A_{n2}+\cdots+\mathcal{X}_{n-2}\times_{n-1}A_{n(n-1)}+\mathcal{X}_{n-1}\times_nA_{nn}=\mathcal{B}_n,
\end{cases}\tag{1}$$
where the matrices $A_{ij}\in\mathbb{R}^{I_j\times I_j}$ and the right-hand side tensors $\mathcal{B}_i\in\mathbb{R}^{I_1\times I_2\times\cdots\times I_n}$ are given ($i,j=1,\ldots,n$), and the tensors $\mathcal{X}_j\in\mathbb{R}^{I_1\times I_2\times\cdots\times I_n}$ are the unknowns to be determined. As a special case of Equation (1) ($\mathcal{X}_j=\mathcal{X}$ for $j=1,2,3$), the third-order Sylvester tensor equation (also known as a 3D matrix equation) is
$$\mathcal{X}\times_1A_1+\mathcal{X}\times_2A_2+\mathcal{X}\times_3A_3=\mathcal{B},\tag{2}$$
which arises frequently from the discretization of the radiative discrete ordinates equations [12,13] and of high-dimensional partial differential equations [14]. One application of Equation (2) in heat transfer is the three-dimensional microscopic heat transport problem [15]
$$\frac{\rho C_p}{\kappa}\left(\frac{\partial U}{\partial t}+\tau_q\frac{\partial^2U}{\partial t^2}\right)=\nabla^2U+\tau_q\left(\frac{\partial^3U}{\partial t\,\partial x^2}+\frac{\partial^3U}{\partial t\,\partial y^2}\right)+\tau_U\frac{\partial^3U}{\partial t\,\partial z^2}+\frac{Q+\tau_q\,\partial Q/\partial t}{\kappa},\tag{3}$$
where $Q$ is a heat source, $U$ is the temperature, $C_p$ is the specific heat, $\rho$ is the density, $\kappa$ is the conductivity, and $\tau_q$ and $\tau_U$ are the time lags of the heat flux and the temperature gradient, respectively. With a mixed collocation-finite difference method, Equation (3) can be discretized as Equation (2) with coefficient matrices given by
$$(A_1)_{ji}=\begin{cases}\tau_q\left(\dfrac{\pi}{\delta_x}\right)^2\left(\dfrac{I_x^2+2}{3}\right)\Delta t+\dfrac{\tau_q}{\alpha\,\Delta t},&\text{if }i=j,\\[1ex]2\tau_q\left(\dfrac{\pi}{\delta_x}\right)^2(-1)^{i+j}\sin^{-2}\!\left[\dfrac12\left(\dfrac{2\pi\beta_j}{\delta_x}-x_i\right)\right],&\text{if }i\neq j,\end{cases}$$
$$(A_2)_{sk}=\begin{cases}\tau_q\left(\dfrac{\pi}{\delta_y}\right)^2\left(\dfrac{I_y^2+2}{3}\right),&\text{if }k=s,\\[1ex]2\tau_q\left(\dfrac{\pi}{\delta_y}\right)^2(-1)^{k+s}\sin^{-2}\!\left[\dfrac12\left(\dfrac{2\pi\gamma_s}{\delta_y}-y_k\right)\right],&\text{if }k\neq s,\end{cases}$$
$$(A_3)_{qp}=\begin{cases}\tau_U\left(\dfrac{\pi}{\delta_z}\right)^2\left(\dfrac{I_z^2+2}{3}\right),&\text{if }p=q,\\[1ex]2\tau_U\left(\dfrac{\pi}{\delta_z}\right)^2(-1)^{p+q}\sin^{-2}\!\left[\dfrac12\left(\dfrac{2\pi\eta_q}{\delta_z}-z_p\right)\right],&\text{if }p\neq q,\end{cases}\tag{4}$$
where $x_i=\frac{2\pi i}{I_x}$, $y_k=\frac{2\pi k}{I_y}$ and $z_p=\frac{2\pi p}{I_z}$, and $\beta_j=\frac{j\delta_x}{I_x}$, $\gamma_s=\frac{s\delta_y}{I_y}$ and $\eta_q=\frac{q\delta_z}{I_z}$ for $i,j=0,\ldots,I_x-1$, $k,s=0,\ldots,I_y-1$ and $p,q=0,\ldots,I_z-1$. Some effective iterative algorithms have been proposed for solving Equation (2). For example, based on the hierarchical identification principle, Chen and Lu proposed a gradient-based iterative (GI) method and a modified version of it [16] for solving the tensor equation when it has a unique solution. To accelerate the convergence of the GI method, Ali Beik and Ahmadi Asl [17] derived a so-called residual norm steepest descent algorithm for solving the tensor equation. If $\mathcal{X}_j=\mathcal{X}$ for $j=1,\ldots,n$, then Equation (1) reduces to the generalized Sylvester tensor equation
$$\mathcal{X}\times_1A_1+\mathcal{X}\times_2A_2+\cdots+\mathcal{X}\times_nA_n=\mathcal{B},\tag{5}$$
which has been widely applied in signal processing, blind source separation, and the restoration of color and hyperspectral images. For more details, one may refer to [18,19,20,21,22,23] and the references therein.
In the last decade, efficient algorithms for solving Equation (5) have been developed. For example, Chen and Lu [24] proposed a generalized minimum residual (GMRES) method and its preconditioned version in tensor form for solving Equation (5). By employing the Arnoldi process and the full orthogonalization method, Ali Beik et al. [25] introduced the conjugate gradient and nested conjugate gradient methods, respectively, to find the solution of the generalized Sylvester tensor equation. Moreover, Karimi and Dehghan [26] presented a promising method based on the global Bidiag 1 process [27] for finding an approximate solution of (5). Wang et al. [28] studied the performance of an iterative algorithm that obtains the least squares solution of Equation (5) over the quaternion algebra. Additionally, Xu and Wang [29] extended the BiCG and BiCR methods to solve the Stein tensor equation, which is a general form of the Lyapunov matrix equation. Li et al. [30] developed five numerical algorithms for solving the discrete Lyapunov tensor equation. Huang and Li [31] proposed several Krylov subspace methods, including the conjugate residual, generalized conjugate residual, biconjugate gradient, conjugate gradient squared and biconjugate gradient stabilized methods, in tensor form for solving a tensor equation. Hajarian [32] gave the matrix form of the biconjugate residual algorithm for studying the generalized reflexive and anti-reflexive solutions of a generalized Sylvester matrix equation. Then, Hajarian [33] derived four efficient iterative methods for obtaining the reflexive periodic solutions of general periodic matrix equations. Additionally, Hajarian [34] presented a conjugate gradient-like method for solving a general tensor equation with respect to the Einstein product. In [35], Najafi-Kalyani et al. derived some effective iterative methods, based on the tensor form of the global Hessenberg process, for solving the tensor Equation (5); combined with the standard Tikhonov regularization technique, a new iterative method for the ill-posed tensor Equation (5) was also established there. Liu et al. [36] investigated the solvability of a quaternion matrix equation. Mehany and Wang [37] studied some necessary and sufficient conditions for the solvability of three symmetrical systems of coupled Sylvester-like quaternion matrix equations. By CP decomposition, Bentbib et al. [38] developed Arnoldi-based methods (block and global), which possess global convergence, for solving the tensor Equation (5) efficiently. Additionally, Heyouni et al. [39] established the generalized Hessenberg method in tensor form for solving (5). However, so far, there has been little research on solving the generalized coupled Sylvester tensor Equation (1) numerically; only Lv and Ma [40] presented a modified conjugate gradient (MCG) method for investigating its iterative solutions.
For solving an n-by-n symmetric linear system $Ax=b$, the classical conjugate residual (CR) method, a special case of the GMRES method, was derived in [41]. In this case, the residual vectors are always $A$-orthogonal, and the vectors $Ap_i$ ($i=0,1,\ldots,n$) are orthogonal to each other. Hence, the method converges to an exact solution within finitely many steps in the absence of round-off errors. Compared with the CG method, the computational work and storage costs of the CR method may increase slightly, but its convergence behavior is smoother. Note that the classical CR method cannot be extended directly to solve the tensor Equation (1), because the corresponding coefficient matrix is nonsymmetric. We therefore propose a modified conjugate residual (MCR) method for solving Equation (1). Note also that preconditioning is an effective means of improving the convergence of iterative methods. Preconditioners, such as the incomplete LU factorization preconditioner [42], the Neumann preconditioner [43], and the nearest Kronecker product (NKP) preconditioner [44], have been confirmed as powerful tools for solving large linear systems. In [45], iterative methods with the NKP preconditioner performed much better than with other preconditioners. In this paper, we present a preconditioned MCR method based on Kronecker product approximations for solving Equation (1), which is superior to the plain MCR method.
The rest of this paper is organized as follows: In Section 2, we recall some notations and preliminaries that will be used in the sequel. In Section 3, we formulate a modified conjugate residual method to solve the generalized coupled Sylvester tensor Equation (1), and present its preconditioned version to speed up the convergence of the MCR method. The established convergence analysis shows that an exact solution to Equation (1) is obtained, for any initial tensor, within at most finitely many steps in the absence of round-off errors. In Section 4, we provide some numerical examples to illustrate the effectiveness of the proposed methods. Finally, we draw some conclusions in Section 5.

2. Preliminaries

In this section, we recall some definitions and useful results that will be used in the sequel. Let $\mathbb{R}$ denote the real number field. Given a positive integer $n$, an order-$n$ real tensor $\mathcal{X}=(x_{i_1i_2\cdots i_n})$ ($1\le i_j\le I_j$, $j=1,2,\ldots,n$) is a multidimensional array consisting of $I_1I_2\cdots I_n$ entries [46,47]. We denote the set of all such real tensors by $\mathbb{R}^{I_1\times I_2\times\cdots\times I_n}$. If $n=2$, then the tensor reduces to an $I_1\times I_2$ matrix, denoted by $A$. We denote the set of real matrices by $\mathbb{R}^{I_1\times I_2}$. The $I_1\times I_1$ identity matrix is denoted by $I_{I_1}$. For a matrix $A\in\mathbb{R}^{I_1\times I_2}$, $A^T\in\mathbb{R}^{I_2\times I_1}$ denotes the transpose of $A$. Given matrices $A=(a_{ij})\in\mathbb{R}^{m\times n}$ and $B$, their Kronecker product, denoted $A\otimes B$, is defined as
$$A\otimes B=\begin{bmatrix}a_{11}B&a_{12}B&\cdots&a_{1n}B\\a_{21}B&a_{22}B&\cdots&a_{2n}B\\\vdots&\vdots&\ddots&\vdots\\a_{m1}B&a_{m2}B&\cdots&a_{mn}B\end{bmatrix}.$$
For nonsingular matrices $A$ and $B$, $(A\otimes B)^{-1}=A^{-1}\otimes B^{-1}$. The unfolding of a tensor $\mathcal{X}$ along mode 1, denoted by $X_{(1)}$, is an $I_1\times I_2I_3\cdots I_n$ matrix whose columns are the mode-1 fibers of $\mathcal{X}$. The symbol $\operatorname{vec}(X)$ denotes the vector obtained by stacking the columns of the matrix $X$. $\mathcal{O}$ denotes the zero tensor, all of whose entries are zero. Given tensors $\mathcal{X},\mathcal{Y}\in\mathbb{R}^{I_1\times I_2\times\cdots\times I_n}$, their inner product is the scalar given by
$$\langle\mathcal{X},\mathcal{Y}\rangle=\operatorname{vec}(X_{(1)})^T\operatorname{vec}(Y_{(1)})=\sum_{i_1=1}^{I_1}\sum_{i_2=1}^{I_2}\cdots\sum_{i_n=1}^{I_n}x_{i_1i_2\cdots i_n}y_{i_1i_2\cdots i_n}.$$
If $\langle\mathcal{X},\mathcal{Y}\rangle=0$, then $\mathcal{X}$ and $\mathcal{Y}$ are said to be orthogonal. The Frobenius norm on $\mathbb{R}^{I_1\times I_2\times\cdots\times I_n}$ is defined as
$$\|\mathcal{X}\|=\sqrt{\langle\mathcal{X},\mathcal{X}\rangle}=\sqrt{\sum_{i_1=1}^{I_1}\sum_{i_2=1}^{I_2}\cdots\sum_{i_n=1}^{I_n}x_{i_1i_2\cdots i_n}^2}.$$
For a tensor $\mathcal{X}\in\mathbb{R}^{I_1\times I_2\times\cdots\times I_n}$ and a matrix $A\in\mathbb{R}^{m\times I_k}$, their $k$-mode product, denoted by $\mathcal{X}\times_kA$, is an order-$n$ tensor of size $I_1\times\cdots\times I_{k-1}\times m\times I_{k+1}\times\cdots\times I_n$ with entries
$$(\mathcal{X}\times_kA)_{i_1\cdots i_{k-1}\,j\,i_{k+1}\cdots i_n}=\sum_{i_k=1}^{I_k}x_{i_1\cdots i_k\cdots i_n}a_{ji_k},\qquad j=1,\ldots,m.$$
From the definition of unfolding and the results in [48,49], the following statements hold (a small numerical check follows the list):
(1) If $\mathcal{Y}=\mathcal{X}\times_1A_1\times_2A_2\times_3\cdots\times_nA_n$, then $Y_{(1)}=A_1X_{(1)}(A_n\otimes A_{n-1}\otimes\cdots\otimes A_2)^T$;
(2) for different modes ($k\neq n$), it follows that $\mathcal{X}\times_nA\times_kB=\mathcal{X}\times_kB\times_nA$;
(3) if $k=n$, then $\mathcal{X}\times_nA\times_nB=\mathcal{X}\times_n(BA)$;
(4) the $k$-mode multiplication commutes with respect to the inner product, that is, $\langle\mathcal{X},\mathcal{Y}\times_kA\rangle=\langle\mathcal{X}\times_kA^T,\mathcal{Y}\rangle$.
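The following MATLAB sketch gives a quick numerical check of statement (1) for a random third-order tensor; it assumes the Tensor Toolbox of Bader and Kolda [49] (functions tenrand, ttm and tenmat) is on the path.

```matlab
% Numerical check of statement (1): Y(1) = A1 * X(1) * (A3 kron A2)^T
% for Y = X x1 A1 x2 A2 x3 A3 (requires the Tensor Toolbox [49]).
X  = tenrand([4 5 6]);                          % random 4 x 5 x 6 tensor
A1 = randn(4,4);  A2 = randn(5,5);  A3 = randn(6,6);
Y  = ttm(X, {A1, A2, A3}, [1 2 3]);             % Y = X x1 A1 x2 A2 x3 A3
Y1 = A1 * double(tenmat(X,1)) * kron(A3, A2)';  % right-hand side of statement (1)
disp(norm(double(tenmat(Y,1)) - Y1, 'fro'))     % zero up to round-off
```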

3. A Modified Conjugate Residual Method for Solving the Tensor Equations

In this section, we propose a modified conjugate residual method for solving the tensor Equation (1), and establish its preconditioned version to speed up the convergence. First of all, we present a necessary and sufficient condition for the existence of a solution of Equation (1). From the results in Section 2, Equation (1) can be rewritten as the linear system of equations
$$Ax=b,\tag{8}$$
where
$$A=\begin{bmatrix}
I_{I_n}\otimes\cdots\otimes I_{I_2}\otimes A_{11} & I_{I_n}\otimes\cdots\otimes A_{12}\otimes I_{I_1} & \cdots & A_{1n}\otimes I_{I_{n-1}}\otimes\cdots\otimes I_{I_1}\\
A_{2n}\otimes I_{I_{n-1}}\otimes\cdots\otimes I_{I_1} & I_{I_n}\otimes\cdots\otimes I_{I_2}\otimes A_{21} & \cdots & I_{I_n}\otimes\cdots\otimes A_{2(n-1)}\otimes I_{I_1}\\
\vdots & \vdots & \ddots & \vdots\\
I_{I_n}\otimes\cdots\otimes A_{n2}\otimes I_{I_1} & I_{I_n}\otimes\cdots\otimes A_{n3}\otimes I_{I_2}\otimes I_{I_1} & \cdots & I_{I_n}\otimes\cdots\otimes I_{I_2}\otimes A_{n1}
\end{bmatrix},$$
$$x=\begin{bmatrix}\operatorname{vec}(X_{1(1)})\\\operatorname{vec}(X_{2(1)})\\\vdots\\\operatorname{vec}(X_{n(1)})\end{bmatrix},\qquad
b=\begin{bmatrix}\operatorname{vec}(B_{1(1)})\\\operatorname{vec}(B_{2(1)})\\\vdots\\\operatorname{vec}(B_{n(1)})\end{bmatrix}.$$
Then, we have the following results.
Lemma 1
([40]). Equation (1) has a solution if and only if the linear system of Equation (8) is consistent. In this case, its solution is unique if the coefficient matrix A is nonsingular.
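As a sanity check on the structure of $A$ in Equation (8), the following MATLAB sketch (Tensor Toolbox assumed; the dimensions are illustrative) compares the first block row of $A$ with the operator form of the first equation in (1) for $n=3$.

```matlab
% Compare the first block row of A in (8) with X1 x1 A11 + X2 x2 A12 + X3 x3 A13.
I = [2 3 4];  A = cell(3,3);
for i = 1:3, for j = 1:3, A{i,j} = randn(I(j)); end, end
X1 = tenrand(I);  X2 = tenrand(I);  X3 = tenrand(I);
row1 = [kron(eye(I(3)), kron(eye(I(2)), A{1,1})), ...   % block (1,1)
        kron(eye(I(3)), kron(A{1,2}, eye(I(1)))), ...   % block (1,2)
        kron(A{1,3},    kron(eye(I(2)), eye(I(1))))];   % block (1,3)
vec1 = @(T) reshape(double(tenmat(T,1)), [], 1);        % vec of the mode-1 unfolding
x  = [vec1(X1); vec1(X2); vec1(X3)];
B1 = ttm(X1,A{1,1},1) + ttm(X2,A{1,2},2) + ttm(X3,A{1,3},3);
disp(norm(row1*x - vec1(B1)))                           % zero up to round-off
```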
Actually, it is difficult to solve Equation (8) by the classical conjugate residual method, because the matrix $A$ cannot be guaranteed to be symmetric. Moreover, solving Equation (8) directly is impractical, because its computational work and storage costs become prohibitively expensive for larger $I_j$ ($j=1,\ldots,n$). We therefore propose a modified conjugate residual method in tensor form for solving Equation (1). Similar to [40], we define two families of operators $\mathcal{L}_i$ and $\bar{\mathcal{L}}_i$ ($i=1,\ldots,n$) on $\mathbb{R}^{I_1\times I_2\times\cdots\times I_n}$ as
$$\begin{cases}\mathcal{L}_1(\mathcal{X}_1,\ldots,\mathcal{X}_n)=\mathcal{X}_1\times_1A_{11}+\mathcal{X}_2\times_2A_{12}+\cdots+\mathcal{X}_n\times_nA_{1n},\\\mathcal{L}_2(\mathcal{X}_1,\ldots,\mathcal{X}_n)=\mathcal{X}_2\times_1A_{21}+\mathcal{X}_3\times_2A_{22}+\cdots+\mathcal{X}_1\times_nA_{2n},\\\qquad\vdots\\\mathcal{L}_n(\mathcal{X}_1,\ldots,\mathcal{X}_n)=\mathcal{X}_n\times_1A_{n1}+\mathcal{X}_1\times_2A_{n2}+\cdots+\mathcal{X}_{n-1}\times_nA_{nn}\end{cases}\tag{9}$$
and
$$\begin{cases}\bar{\mathcal{L}}_1(\mathcal{R}_1,\ldots,\mathcal{R}_n)=\mathcal{R}_1\times_1A_{11}^T+\mathcal{R}_2\times_nA_{2n}^T+\cdots+\mathcal{R}_n\times_2A_{n2}^T,\\\bar{\mathcal{L}}_2(\mathcal{R}_1,\ldots,\mathcal{R}_n)=\mathcal{R}_1\times_2A_{12}^T+\mathcal{R}_2\times_1A_{21}^T+\cdots+\mathcal{R}_n\times_3A_{n3}^T,\\\qquad\vdots\\\bar{\mathcal{L}}_n(\mathcal{R}_1,\ldots,\mathcal{R}_n)=\mathcal{R}_1\times_nA_{1n}^T+\mathcal{R}_2\times_{n-1}A_{2(n-1)}^T+\cdots+\mathcal{R}_n\times_1A_{n1}^T,\end{cases}\tag{10}$$
where
$$\mathcal{R}_1=\mathcal{B}_1-\mathcal{L}_1(\mathcal{X}_1,\ldots,\mathcal{X}_n),\qquad \mathcal{R}_2=\mathcal{B}_2-\mathcal{L}_2(\mathcal{X}_1,\ldots,\mathcal{X}_n),\qquad\ldots,\qquad \mathcal{R}_n=\mathcal{B}_n-\mathcal{L}_n(\mathcal{X}_1,\ldots,\mathcal{X}_n).\tag{11}$$
From Equations (9) and (10), we can obtain the following lemma.
Lemma 2.
If $\mathcal{X}_i,\mathcal{Y}_i\in\mathbb{R}^{I_1\times I_2\times\cdots\times I_n}$ ($i=1,\ldots,n$) and $\mu,\nu\in\mathbb{R}$, then
$$\begin{aligned}\mathcal{L}_i(\mu\mathcal{X}_1+\nu\mathcal{Y}_1,\ldots,\mu\mathcal{X}_n+\nu\mathcal{Y}_n)&=\mu\mathcal{L}_i(\mathcal{X}_1,\ldots,\mathcal{X}_n)+\nu\mathcal{L}_i(\mathcal{Y}_1,\ldots,\mathcal{Y}_n),\\\bar{\mathcal{L}}_i(\mu\mathcal{X}_1+\nu\mathcal{Y}_1,\ldots,\mu\mathcal{X}_n+\nu\mathcal{Y}_n)&=\mu\bar{\mathcal{L}}_i(\mathcal{X}_1,\ldots,\mathcal{X}_n)+\nu\bar{\mathcal{L}}_i(\mathcal{Y}_1,\ldots,\mathcal{Y}_n),\end{aligned}\tag{12}$$
i.e., $\mathcal{L}_i$ and $\bar{\mathcal{L}}_i$ are linear operators. Moreover,
$$\sum_{i=1}^n\langle\mathcal{X}_i,\mathcal{L}_i(\mathcal{Y}_1,\mathcal{Y}_2,\ldots,\mathcal{Y}_n)\rangle=\sum_{i=1}^n\langle\mathcal{Y}_i,\bar{\mathcal{L}}_i(\mathcal{X}_1,\mathcal{X}_2,\ldots,\mathcal{X}_n)\rangle.\tag{13}$$
Proof. 
For the first statement, we only prove that Equation (12) holds for $i=1$; the rest ($i=2,\ldots,n$) can be deduced similarly. If $\mu,\nu\in\mathbb{R}$, then
$$\begin{aligned}\mathcal{L}_1(\mu\mathcal{X}_1+\nu\mathcal{Y}_1,\ldots,\mu\mathcal{X}_n+\nu\mathcal{Y}_n)&=(\mu\mathcal{X}_1+\nu\mathcal{Y}_1)\times_1A_{11}+\cdots+(\mu\mathcal{X}_n+\nu\mathcal{Y}_n)\times_nA_{1n}\\&=\mu\mathcal{L}_1(\mathcal{X}_1,\ldots,\mathcal{X}_n)+\nu\mathcal{L}_1(\mathcal{Y}_1,\ldots,\mathcal{Y}_n),\\\bar{\mathcal{L}}_1(\mu\mathcal{X}_1+\nu\mathcal{Y}_1,\ldots,\mu\mathcal{X}_n+\nu\mathcal{Y}_n)&=(\mu\mathcal{X}_1+\nu\mathcal{Y}_1)\times_1A_{11}^T+\cdots+(\mu\mathcal{X}_n+\nu\mathcal{Y}_n)\times_2A_{n2}^T\\&=\mu\bar{\mathcal{L}}_1(\mathcal{X}_1,\ldots,\mathcal{X}_n)+\nu\bar{\mathcal{L}}_1(\mathcal{Y}_1,\ldots,\mathcal{Y}_n).\end{aligned}$$
For Equation (13), it follows that
$$\begin{aligned}
&\langle\mathcal{X}_1,\mathcal{Y}_1\times_1A_{11}+\mathcal{Y}_2\times_2A_{12}+\cdots+\mathcal{Y}_n\times_nA_{1n}\rangle+\langle\mathcal{X}_2,\mathcal{Y}_2\times_1A_{21}+\mathcal{Y}_3\times_2A_{22}+\cdots+\mathcal{Y}_1\times_nA_{2n}\rangle\\
&\qquad+\cdots+\langle\mathcal{X}_n,\mathcal{Y}_n\times_1A_{n1}+\mathcal{Y}_1\times_2A_{n2}+\cdots+\mathcal{Y}_{n-1}\times_nA_{nn}\rangle\\
&\quad=\bigl[\langle\mathcal{X}_1\times_1A_{11}^T,\mathcal{Y}_1\rangle+\langle\mathcal{X}_1\times_2A_{12}^T,\mathcal{Y}_2\rangle+\cdots+\langle\mathcal{X}_1\times_nA_{1n}^T,\mathcal{Y}_n\rangle\bigr]\\
&\qquad+\bigl[\langle\mathcal{X}_2\times_1A_{21}^T,\mathcal{Y}_2\rangle+\langle\mathcal{X}_2\times_2A_{22}^T,\mathcal{Y}_3\rangle+\cdots+\langle\mathcal{X}_2\times_nA_{2n}^T,\mathcal{Y}_1\rangle\bigr]\\
&\qquad+\cdots+\bigl[\langle\mathcal{X}_n\times_1A_{n1}^T,\mathcal{Y}_n\rangle+\langle\mathcal{X}_n\times_2A_{n2}^T,\mathcal{Y}_1\rangle+\cdots+\langle\mathcal{X}_n\times_nA_{nn}^T,\mathcal{Y}_{n-1}\rangle\bigr]\\
&\quad=\langle\mathcal{Y}_1,\mathcal{X}_1\times_1A_{11}^T+\mathcal{X}_2\times_nA_{2n}^T+\cdots+\mathcal{X}_n\times_2A_{n2}^T\rangle+\langle\mathcal{Y}_2,\mathcal{X}_1\times_2A_{12}^T+\mathcal{X}_2\times_1A_{21}^T+\cdots+\mathcal{X}_n\times_3A_{n3}^T\rangle\\
&\qquad+\cdots+\langle\mathcal{Y}_n,\mathcal{X}_1\times_nA_{1n}^T+\mathcal{X}_2\times_{n-1}A_{2(n-1)}^T+\cdots+\mathcal{X}_n\times_1A_{n1}^T\rangle\\
&\quad=\langle\mathcal{Y}_1,\bar{\mathcal{L}}_1(\mathcal{X}_1,\ldots,\mathcal{X}_n)\rangle+\langle\mathcal{Y}_2,\bar{\mathcal{L}}_2(\mathcal{X}_1,\ldots,\mathcal{X}_n)\rangle+\cdots+\langle\mathcal{Y}_n,\bar{\mathcal{L}}_n(\mathcal{X}_1,\ldots,\mathcal{X}_n)\rangle.
\end{aligned}$$
   □
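For $n=3$, the duality (13) is easy to check numerically. The sketch below (Tensor Toolbox assumed) instantiates $\mathcal{L}_i$ and $\bar{\mathcal{L}}_i$ from Equations (9) and (10) and compares the two sides.

```matlab
% Numerical check of the duality (13) for n = 3 (Tensor Toolbox assumed).
I = [3 4 5];  A = cell(3,3);
for i = 1:3, for j = 1:3, A{i,j} = randn(I(j)); end, end
X = {tenrand(I), tenrand(I), tenrand(I)};
Y = {tenrand(I), tenrand(I), tenrand(I)};
% L_i from Equation (9) and Lbar_i from Equation (10), written out for n = 3
LY  = { ttm(Y{1},A{1,1},1) + ttm(Y{2},A{1,2},2) + ttm(Y{3},A{1,3},3), ...
        ttm(Y{2},A{2,1},1) + ttm(Y{3},A{2,2},2) + ttm(Y{1},A{2,3},3), ...
        ttm(Y{3},A{3,1},1) + ttm(Y{1},A{3,2},2) + ttm(Y{2},A{3,3},3) };
LtX = { ttm(X{1},A{1,1}',1) + ttm(X{2},A{2,3}',3) + ttm(X{3},A{3,2}',2), ...
        ttm(X{1},A{1,2}',2) + ttm(X{2},A{2,1}',1) + ttm(X{3},A{3,3}',3), ...
        ttm(X{1},A{1,3}',3) + ttm(X{2},A{2,2}',2) + ttm(X{3},A{3,1}',1) };
lhs = innerprod(X{1},LY{1}) + innerprod(X{2},LY{2}) + innerprod(X{3},LY{3});
rhs = innerprod(Y{1},LtX{1}) + innerprod(Y{2},LtX{2}) + innerprod(Y{3},LtX{3});
disp(abs(lhs - rhs))    % zero up to round-off
```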
Now, we present the following modified conjugate residual method (Algorithm 1) for solving Equation (1):
From Algorithm 1, one can see that the key computations are those of $\mathcal{R}_i^{k+1}$ and $\mathcal{P}_i^{k+1}$, $i=1,2,\ldots,n$. Each of them requires $O\bigl(I_1I_2\cdots I_n(\sum_{i=1}^nI_i+1)\bigr)$ floating point operations (flops). Hence, each iteration of Algorithm 1 requires $O\bigl(4nI_1I_2\cdots I_n(\sum_{i=1}^nI_i+1)\bigr)$ flops.
To investigate the convergence of Algorithm 1, we first give some properties of the tensor sequences generated by the algorithm as follows:
Theorem 1.
Let $\{\mathcal{R}_l^k\}$, $\{\mathcal{P}_l^k\}$, $\{\bar{\mathcal{L}}_l(\mathcal{R}_1^k,\ldots,\mathcal{R}_n^k)\}$ and $\{\mathcal{L}_l(\mathcal{P}_1^k,\ldots,\mathcal{P}_n^k)\}$ ($l=1,\ldots,n$, $k=0,1,\ldots$) be the tensor sequences generated by Algorithm 1. Then, for $i\neq j$, $i,j=0,1,\ldots$, we have
$$\begin{aligned}\langle\bar{\mathcal{L}}_1(\mathcal{R}_1^i,\ldots,\mathcal{R}_n^i),\bar{\mathcal{L}}_1(\mathcal{R}_1^j,\ldots,\mathcal{R}_n^j)\rangle+\cdots+\langle\bar{\mathcal{L}}_n(\mathcal{R}_1^i,\ldots,\mathcal{R}_n^i),\bar{\mathcal{L}}_n(\mathcal{R}_1^j,\ldots,\mathcal{R}_n^j)\rangle&=0,\\\langle\mathcal{L}_1(\mathcal{P}_1^i,\ldots,\mathcal{P}_n^i),\mathcal{L}_1(\mathcal{P}_1^j,\ldots,\mathcal{P}_n^j)\rangle+\cdots+\langle\mathcal{L}_n(\mathcal{P}_1^i,\ldots,\mathcal{P}_n^i),\mathcal{L}_n(\mathcal{P}_1^j,\ldots,\mathcal{P}_n^j)\rangle&=0.\end{aligned}\tag{14}$$
Moreover, for $j<i$, it follows that
$$\langle\mathcal{R}_1^i,\mathcal{L}_1(\mathcal{P}_1^j,\ldots,\mathcal{P}_n^j)\rangle+\cdots+\langle\mathcal{R}_n^i,\mathcal{L}_n(\mathcal{P}_1^j,\ldots,\mathcal{P}_n^j)\rangle=\langle\mathcal{P}_1^j,\bar{\mathcal{L}}_1(\mathcal{R}_1^i,\ldots,\mathcal{R}_n^i)\rangle+\cdots+\langle\mathcal{P}_n^j,\bar{\mathcal{L}}_n(\mathcal{R}_1^i,\ldots,\mathcal{R}_n^i)\rangle=0.\tag{15}$$
Algorithm 1 A modified conjugate residual method for solving Equation (1)
  • Given initial values $\mathcal{X}_i^0,\mathcal{B}_i\in\mathbb{R}^{I_1\times I_2\times\cdots\times I_n}$ and $A_{ij}\in\mathbb{R}^{I_j\times I_j}$ ($i,j=1,\ldots,n$), set $k=0$ and a tolerance $\epsilon>0$.
  • Compute
$$\mathcal{R}_i^0=\mathcal{B}_i-\mathcal{L}_i(\mathcal{X}_1^0,\ldots,\mathcal{X}_n^0),\qquad \mathcal{P}_i^0=\bar{\mathcal{L}}_i(\mathcal{R}_1^0,\ldots,\mathcal{R}_n^0),\qquad i=1,\ldots,n,$$
    and
$$s_0=\sum_{l=1}^n\|\bar{\mathcal{L}}_l(\mathcal{R}_1^0,\ldots,\mathcal{R}_n^0)\|^2,\qquad p_0=\sum_{l=1}^n\|\mathcal{L}_l(\mathcal{P}_1^0,\ldots,\mathcal{P}_n^0)\|^2,\qquad \eta_k=\sum_{l=1}^n\|\mathcal{R}_l^k\|^2.$$
  • For $k=0,1,2,\ldots$ until $\eta_k<\epsilon$ do:
$$\alpha_k=\frac{s_k}{p_k},$$
$$\mathcal{X}_i^{k+1}=\mathcal{X}_i^k+\alpha_k\mathcal{P}_i^k,\qquad \mathcal{R}_i^{k+1}=\mathcal{R}_i^k-\alpha_k\mathcal{L}_i(\mathcal{P}_1^k,\ldots,\mathcal{P}_n^k),\qquad i=1,\ldots,n,$$
$$\bar{\mathcal{L}}_1(\mathcal{R}_1^{k+1},\ldots,\mathcal{R}_n^{k+1})=\mathcal{R}_1^{k+1}\times_1A_{11}^T+\mathcal{R}_2^{k+1}\times_nA_{2n}^T+\cdots+\mathcal{R}_n^{k+1}\times_2A_{n2}^T,$$
$$\bar{\mathcal{L}}_2(\mathcal{R}_1^{k+1},\ldots,\mathcal{R}_n^{k+1})=\mathcal{R}_1^{k+1}\times_2A_{12}^T+\mathcal{R}_2^{k+1}\times_1A_{21}^T+\cdots+\mathcal{R}_n^{k+1}\times_3A_{n3}^T,$$
$$\vdots$$
$$\bar{\mathcal{L}}_n(\mathcal{R}_1^{k+1},\ldots,\mathcal{R}_n^{k+1})=\mathcal{R}_1^{k+1}\times_nA_{1n}^T+\mathcal{R}_2^{k+1}\times_{n-1}A_{2(n-1)}^T+\cdots+\mathcal{R}_n^{k+1}\times_1A_{n1}^T,$$
$$s_{k+1}=\sum_{l=1}^n\|\bar{\mathcal{L}}_l(\mathcal{R}_1^{k+1},\ldots,\mathcal{R}_n^{k+1})\|^2.$$
  • Compute $\beta_k=\dfrac{s_{k+1}}{s_k}$,
$$\mathcal{P}_i^{k+1}=\bar{\mathcal{L}}_i(\mathcal{R}_1^{k+1},\ldots,\mathcal{R}_n^{k+1})+\beta_k\mathcal{P}_i^k,\qquad i=1,\ldots,n,$$
$$p_{k+1}=\sum_{l=1}^n\|\mathcal{L}_l(\mathcal{P}_1^{k+1},\ldots,\mathcal{P}_n^{k+1})\|^2.$$
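To make the steps of Algorithm 1 concrete, here is a minimal MATLAB sketch for the case $n=3$, again assuming the Tensor Toolbox; the function name mcr3 and the cell-array calling convention are ours, not part of the paper.

```matlab
function [X1,X2,X3,res] = mcr3(A, B1, B2, B3, tol, maxit)
% Minimal sketch of Algorithm 1 (MCR) for n = 3.
% A is a 3x3 cell array of coefficient matrices A{i,j};
% requires the Tensor Toolbox (ttm, tenzeros, norm) of Bader & Kolda.
L  = @(X1,X2,X3) { ttm(X1,A{1,1},1)+ttm(X2,A{1,2},2)+ttm(X3,A{1,3},3), ...
                   ttm(X2,A{2,1},1)+ttm(X3,A{2,2},2)+ttm(X1,A{2,3},3), ...
                   ttm(X3,A{3,1},1)+ttm(X1,A{3,2},2)+ttm(X2,A{3,3},3) };
Lt = @(R1,R2,R3) { ttm(R1,A{1,1}',1)+ttm(R2,A{2,3}',3)+ttm(R3,A{3,2}',2), ...
                   ttm(R1,A{1,2}',2)+ttm(R2,A{2,1}',1)+ttm(R3,A{3,3}',3), ...
                   ttm(R1,A{1,3}',3)+ttm(R2,A{2,2}',2)+ttm(R3,A{3,1}',1) };
X1 = tenzeros(size(B1)); X2 = tenzeros(size(B2)); X3 = tenzeros(size(B3));
R  = {B1, B2, B3};                          % R_i^0 = B_i for a zero initial guess
P  = Lt(R{1},R{2},R{3});                    % P_i^0 = Lbar_i(R^0)
s  = norm(P{1})^2 + norm(P{2})^2 + norm(P{3})^2;
LP = L(P{1},P{2},P{3});
p  = norm(LP{1})^2 + norm(LP{2})^2 + norm(LP{3})^2;
for k = 1:maxit
    res = norm(R{1})^2 + norm(R{2})^2 + norm(R{3})^2;   % eta_k
    if res < tol, break; end
    alpha = s/p;
    X1 = X1 + alpha*P{1}; X2 = X2 + alpha*P{2}; X3 = X3 + alpha*P{3};
    for i = 1:3, R{i} = R{i} - alpha*LP{i}; end
    Z  = Lt(R{1},R{2},R{3});                % Lbar_i(R^{k+1})
    s1 = norm(Z{1})^2 + norm(Z{2})^2 + norm(Z{3})^2;
    beta = s1/s;  s = s1;
    for i = 1:3, P{i} = Z{i} + beta*P{i}; end
    LP = L(P{1},P{2},P{3});
    p  = norm(LP{1})^2 + norm(LP{2})^2 + norm(LP{3})^2;
end
end
```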
Proof. 
The first equality in Equation (15) follows directly from Lemma 2. We now prove Equations (14) and (15) by induction. For brevity, write $\bar{\mathcal{L}}_l^i:=\bar{\mathcal{L}}_l(\mathcal{R}_1^i,\ldots,\mathcal{R}_n^i)$ and $\mathcal{L}_l^i:=\mathcal{L}_l(\mathcal{P}_1^i,\ldots,\mathcal{P}_n^i)$. For $k=1$, it follows from Lemma 2 that
$$\begin{aligned}
\sum_{l=1}^n\langle\bar{\mathcal{L}}_l^1,\bar{\mathcal{L}}_l^0\rangle
&=\sum_{l=1}^n\bigl\langle\bar{\mathcal{L}}_l(\mathcal{R}_1^0-\alpha_0\mathcal{L}_1^0,\ldots,\mathcal{R}_n^0-\alpha_0\mathcal{L}_n^0),\bar{\mathcal{L}}_l^0\bigr\rangle\\
&=\sum_{l=1}^n\|\bar{\mathcal{L}}_l^0\|^2-\frac{\sum_{l=1}^n\|\bar{\mathcal{L}}_l^0\|^2}{\sum_{l=1}^n\|\mathcal{L}_l^0\|^2}\sum_{l=1}^n\|\mathcal{L}_l^0\|^2=0,
\end{aligned}$$
$$\begin{aligned}
\sum_{l=1}^n\langle\mathcal{L}_l^1,\mathcal{L}_l^0\rangle
&=\sum_{l=1}^n\bigl\langle\mathcal{L}_l(\bar{\mathcal{L}}_1^1,\ldots,\bar{\mathcal{L}}_n^1),\mathcal{L}_l^0\bigr\rangle+\beta_0\sum_{l=1}^n\langle\mathcal{L}_l^0,\mathcal{L}_l^0\rangle\\
&=\sum_{l=1}^n\Bigl\langle\mathcal{L}_l(\bar{\mathcal{L}}_1^1,\ldots,\bar{\mathcal{L}}_n^1),\frac{1}{\alpha_0}(\mathcal{R}_l^0-\mathcal{R}_l^1)\Bigr\rangle+\beta_0\sum_{l=1}^n\|\mathcal{L}_l^0\|^2\\
&=\frac{1}{\alpha_0}\sum_{l=1}^n\langle\bar{\mathcal{L}}_l^1,\bar{\mathcal{L}}_l^0-\bar{\mathcal{L}}_l^1\rangle+\beta_0\sum_{l=1}^n\|\mathcal{L}_l^0\|^2\\
&=-\frac{1}{\alpha_0}\sum_{l=1}^n\|\bar{\mathcal{L}}_l^1\|^2+\beta_0\sum_{l=1}^n\|\mathcal{L}_l^0\|^2=0.
\end{aligned}$$
Since
$$\mathcal{P}_l^0=\bar{\mathcal{L}}_l^0,\qquad l=1,\ldots,n,$$
it follows that
$$\langle\mathcal{P}_1^0,\bar{\mathcal{L}}_1^1\rangle+\cdots+\langle\mathcal{P}_n^0,\bar{\mathcal{L}}_n^1\rangle=0.$$
Suppose Equations (14) and (15) hold for all $0\le j<i\le k$ ($k>1$). Then, for the case $0\le j<i\le k+1$, we have (first for $0\le j<k$)
$$\begin{aligned}
\sum_{l=1}^n\langle\bar{\mathcal{L}}_l^{k+1},\bar{\mathcal{L}}_l^j\rangle
&=\sum_{l=1}^n\langle\bar{\mathcal{L}}_l^k,\mathcal{P}_l^j\rangle-\beta_{j-1}\sum_{l=1}^n\langle\bar{\mathcal{L}}_l^k,\mathcal{P}_l^{j-1}\rangle-\alpha_k\sum_{l=1}^n\langle\mathcal{L}_l^k,\mathcal{L}_l^j\rangle+\alpha_k\beta_{j-1}\sum_{l=1}^n\langle\mathcal{L}_l^k,\mathcal{L}_l^{j-1}\rangle=0,\\
\sum_{l=1}^n\langle\bar{\mathcal{L}}_l^{k+1},\bar{\mathcal{L}}_l^k\rangle
&=\sum_{l=1}^n\bigl\langle\bar{\mathcal{L}}_l(\mathcal{R}_1^k-\alpha_k\mathcal{L}_1^k,\ldots,\mathcal{R}_n^k-\alpha_k\mathcal{L}_n^k),\bar{\mathcal{L}}_l^k\bigr\rangle\\
&=\sum_{l=1}^n\|\bar{\mathcal{L}}_l^k\|^2-\alpha_k\sum_{l=1}^n\bigl\langle\bar{\mathcal{L}}_l(\mathcal{L}_1^k,\ldots,\mathcal{L}_n^k),\mathcal{P}_l^k-\beta_{k-1}\mathcal{P}_l^{k-1}\bigr\rangle\\
&=\sum_{l=1}^n\|\bar{\mathcal{L}}_l^k\|^2-\alpha_k\sum_{l=1}^n\bigl\langle\mathcal{L}_l^k,\mathcal{L}_l^k-\beta_{k-1}\mathcal{L}_l^{k-1}\bigr\rangle=\sum_{l=1}^n\|\bar{\mathcal{L}}_l^k\|^2-\alpha_k\sum_{l=1}^n\|\mathcal{L}_l^k\|^2=0,\\
\sum_{l=1}^n\langle\mathcal{L}_l^{k+1},\mathcal{L}_l^j\rangle
&=\sum_{l=1}^n\bigl\langle\mathcal{L}_l(\bar{\mathcal{L}}_1^{k+1},\ldots,\bar{\mathcal{L}}_n^{k+1}),\mathcal{L}_l^j\bigr\rangle+\beta_k\sum_{l=1}^n\langle\mathcal{L}_l^k,\mathcal{L}_l^j\rangle\\
&=\sum_{l=1}^n\Bigl\langle\mathcal{L}_l(\bar{\mathcal{L}}_1^{k+1},\ldots,\bar{\mathcal{L}}_n^{k+1}),\frac{1}{\alpha_j}(\mathcal{R}_l^j-\mathcal{R}_l^{j+1})\Bigr\rangle\\
&=\frac{1}{\alpha_j}\sum_{l=1}^n\langle\bar{\mathcal{L}}_l^{k+1},\bar{\mathcal{L}}_l^j\rangle-\frac{1}{\alpha_j}\sum_{l=1}^n\langle\bar{\mathcal{L}}_l^{k+1},\bar{\mathcal{L}}_l^{j+1}\rangle=0,\\
\sum_{l=1}^n\langle\mathcal{L}_l^{k+1},\mathcal{L}_l^k\rangle
&=\sum_{l=1}^n\bigl\langle\mathcal{L}_l(\bar{\mathcal{L}}_1^{k+1},\ldots,\bar{\mathcal{L}}_n^{k+1}),\mathcal{L}_l^k\bigr\rangle+\beta_k\sum_{l=1}^n\langle\mathcal{L}_l^k,\mathcal{L}_l^k\rangle\\
&=-\frac{1}{\alpha_k}\sum_{l=1}^n\|\bar{\mathcal{L}}_l^{k+1}\|^2+\beta_k\sum_{l=1}^n\|\mathcal{L}_l^k\|^2=0,\\
\sum_{l=1}^n\langle\mathcal{P}_l^j,\bar{\mathcal{L}}_l^{k+1}\rangle
&=\sum_{l=1}^n\bigl\langle\mathcal{P}_l^j,\bar{\mathcal{L}}_l(\mathcal{R}_1^k-\alpha_k\mathcal{L}_1^k,\ldots,\mathcal{R}_n^k-\alpha_k\mathcal{L}_n^k)\bigr\rangle
=\sum_{l=1}^n\bigl\langle\mathcal{R}_l^k-\alpha_k\mathcal{L}_l^k,\mathcal{L}_l^j\bigr\rangle\\
&=\sum_{l=1}^n\langle\mathcal{R}_l^k,\mathcal{L}_l^j\rangle-\alpha_k\sum_{l=1}^n\langle\mathcal{L}_l^k,\mathcal{L}_l^j\rangle=0,\\
\sum_{l=1}^n\langle\mathcal{P}_l^k,\bar{\mathcal{L}}_l^{k+1}\rangle
&=\sum_{l=1}^n\langle\mathcal{R}_l^k,\mathcal{L}_l^k\rangle-\alpha_k\sum_{l=1}^n\|\mathcal{L}_l^k\|^2
=\sum_{l=1}^n\langle\bar{\mathcal{L}}_l^k,\mathcal{P}_l^k\rangle-\alpha_k\sum_{l=1}^n\|\mathcal{L}_l^k\|^2\\
&=\sum_{l=1}^n\bigl\langle\bar{\mathcal{L}}_l^k,\bar{\mathcal{L}}_l^k+\beta_{k-1}\mathcal{P}_l^{k-1}\bigr\rangle-\alpha_k\sum_{l=1}^n\|\mathcal{L}_l^k\|^2
=\sum_{l=1}^n\|\bar{\mathcal{L}}_l^k\|^2-\alpha_k\sum_{l=1}^n\|\mathcal{L}_l^k\|^2=0.
\end{aligned}$$
Consequently, Equations (14) and (15) hold for $0\le j<i\le k+1$, and hence, by induction, for all $0\le j<i$. From the definition of the inner product on $\mathbb{R}^{I_1\times I_2\times\cdots\times I_n}$, it also follows that
$$\sum_{l=1}^n\langle\bar{\mathcal{L}}_l^i,\bar{\mathcal{L}}_l^j\rangle=\sum_{l=1}^n\langle\bar{\mathcal{L}}_l^j,\bar{\mathcal{L}}_l^i\rangle,\qquad
\sum_{l=1}^n\langle\mathcal{L}_l^i,\mathcal{L}_l^j\rangle=\sum_{l=1}^n\langle\mathcal{L}_l^j,\mathcal{L}_l^i\rangle.$$
This shows that Equation (14) holds for all $i\neq j$.    □
Next, we present a useful lemma that can be used to study the convergence of Algorithm 1.
Lemma 3.
Assume that the generalized coupled Sylvester tensor Equation (1) is consistent, and let $\mathcal{X}_1^*,\ldots,\mathcal{X}_n^*$ be a solution. Then
$$\langle\bar{\mathcal{L}}_1(\mathcal{R}_1^k,\ldots,\mathcal{R}_n^k),\mathcal{X}_1^*-\mathcal{X}_1^k\rangle+\cdots+\langle\bar{\mathcal{L}}_n(\mathcal{R}_1^k,\ldots,\mathcal{R}_n^k),\mathcal{X}_n^*-\mathcal{X}_n^k\rangle=\|\mathcal{R}_1^k\|^2+\cdots+\|\mathcal{R}_n^k\|^2.\tag{16}$$
Moreover,
$$\langle\mathcal{L}_1(\mathcal{P}_1^k,\ldots,\mathcal{P}_n^k),\mathcal{R}_1^k\rangle+\cdots+\langle\mathcal{L}_n(\mathcal{P}_1^k,\ldots,\mathcal{P}_n^k),\mathcal{R}_n^k\rangle=\|\bar{\mathcal{L}}_1(\mathcal{R}_1^k,\ldots,\mathcal{R}_n^k)\|^2+\cdots+\|\bar{\mathcal{L}}_n(\mathcal{R}_1^k,\ldots,\mathcal{R}_n^k)\|^2.\tag{17}$$
Proof. 
It follows from Equation (13) that
$$\begin{aligned}
&\langle\bar{\mathcal{L}}_1(\mathcal{R}_1^k,\ldots,\mathcal{R}_n^k),\mathcal{X}_1^*-\mathcal{X}_1^k\rangle+\cdots+\langle\bar{\mathcal{L}}_n(\mathcal{R}_1^k,\ldots,\mathcal{R}_n^k),\mathcal{X}_n^*-\mathcal{X}_n^k\rangle\\
&\quad=\langle\mathcal{R}_1^k,\mathcal{L}_1(\mathcal{X}_1^*-\mathcal{X}_1^k,\ldots,\mathcal{X}_n^*-\mathcal{X}_n^k)\rangle+\cdots+\langle\mathcal{R}_n^k,\mathcal{L}_n(\mathcal{X}_1^*-\mathcal{X}_1^k,\ldots,\mathcal{X}_n^*-\mathcal{X}_n^k)\rangle\\
&\quad=\langle\mathcal{R}_1^k,\mathcal{L}_1(\mathcal{X}_1^*,\ldots,\mathcal{X}_n^*)-\mathcal{L}_1(\mathcal{X}_1^k,\ldots,\mathcal{X}_n^k)\rangle+\cdots+\langle\mathcal{R}_n^k,\mathcal{L}_n(\mathcal{X}_1^*,\ldots,\mathcal{X}_n^*)-\mathcal{L}_n(\mathcal{X}_1^k,\ldots,\mathcal{X}_n^k)\rangle\\
&\quad=\langle\mathcal{R}_1^k,\mathcal{B}_1-\mathcal{L}_1(\mathcal{X}_1^k,\ldots,\mathcal{X}_n^k)\rangle+\cdots+\langle\mathcal{R}_n^k,\mathcal{B}_n-\mathcal{L}_n(\mathcal{X}_1^k,\ldots,\mathcal{X}_n^k)\rangle\\
&\quad=\|\mathcal{R}_1^k\|^2+\cdots+\|\mathcal{R}_n^k\|^2.
\end{aligned}$$
For Equation (17), we have
$$\begin{aligned}
&\langle\mathcal{L}_1(\mathcal{P}_1^k,\ldots,\mathcal{P}_n^k),\mathcal{R}_1^k\rangle+\cdots+\langle\mathcal{L}_n(\mathcal{P}_1^k,\ldots,\mathcal{P}_n^k),\mathcal{R}_n^k\rangle\\
&\quad=\langle\mathcal{P}_1^k,\bar{\mathcal{L}}_1(\mathcal{R}_1^k,\ldots,\mathcal{R}_n^k)\rangle+\cdots+\langle\mathcal{P}_n^k,\bar{\mathcal{L}}_n(\mathcal{R}_1^k,\ldots,\mathcal{R}_n^k)\rangle\\
&\quad=\langle\bar{\mathcal{L}}_1(\mathcal{R}_1^k,\ldots,\mathcal{R}_n^k)+\beta_{k-1}\mathcal{P}_1^{k-1},\bar{\mathcal{L}}_1(\mathcal{R}_1^k,\ldots,\mathcal{R}_n^k)\rangle+\cdots+\langle\bar{\mathcal{L}}_n(\mathcal{R}_1^k,\ldots,\mathcal{R}_n^k)+\beta_{k-1}\mathcal{P}_n^{k-1},\bar{\mathcal{L}}_n(\mathcal{R}_1^k,\ldots,\mathcal{R}_n^k)\rangle\\
&\quad=\|\bar{\mathcal{L}}_1(\mathcal{R}_1^k,\ldots,\mathcal{R}_n^k)\|^2+\cdots+\|\bar{\mathcal{L}}_n(\mathcal{R}_1^k,\ldots,\mathcal{R}_n^k)\|^2+\beta_{k-1}\bigl[\langle\mathcal{P}_1^{k-1},\bar{\mathcal{L}}_1(\mathcal{R}_1^k,\ldots,\mathcal{R}_n^k)\rangle+\cdots+\langle\mathcal{P}_n^{k-1},\bar{\mathcal{L}}_n(\mathcal{R}_1^k,\ldots,\mathcal{R}_n^k)\rangle\bigr]\\
&\quad=\|\bar{\mathcal{L}}_1(\mathcal{R}_1^k,\ldots,\mathcal{R}_n^k)\|^2+\cdots+\|\bar{\mathcal{L}}_n(\mathcal{R}_1^k,\ldots,\mathcal{R}_n^k)\|^2.
\end{aligned}$$
   □
From Equation (16), if $\|\mathcal{R}_1^k\|^2+\cdots+\|\mathcal{R}_n^k\|^2\neq0$, then $\bar{\mathcal{L}}_l(\mathcal{R}_1^k,\ldots,\mathcal{R}_n^k)\neq\mathcal{O}$ for at least one $l\in\{1,\ldots,n\}$; otherwise, the tensor Equation (1) would be inconsistent. Similarly, from Equation (17), if $\|\bar{\mathcal{L}}_1(\mathcal{R}_1^k,\ldots,\mathcal{R}_n^k)\|^2+\cdots+\|\bar{\mathcal{L}}_n(\mathcal{R}_1^k,\ldots,\mathcal{R}_n^k)\|^2\neq0$, then $\mathcal{R}_l^k\neq\mathcal{O}$ and $\mathcal{L}_l(\mathcal{P}_1^k,\ldots,\mathcal{P}_n^k)\neq\mathcal{O}$ for at least one $l$. Overall, we have the following results.
Proposition 1.
Let $\{\mathcal{R}_l^k\}$, $\{\mathcal{P}_l^k\}$, $\{\bar{\mathcal{L}}_l(\mathcal{R}_1^k,\ldots,\mathcal{R}_n^k)\}$ and $\{\mathcal{L}_l(\mathcal{P}_1^k,\ldots,\mathcal{P}_n^k)\}$ ($l=1,\ldots,n$, $k=0,1,\ldots$) be the tensor sequences generated by Algorithm 1. Then
$$\|\mathcal{R}_1^k\|^2+\cdots+\|\mathcal{R}_n^k\|^2=0$$
if and only if
$$\|\bar{\mathcal{L}}_1(\mathcal{R}_1^k,\ldots,\mathcal{R}_n^k)\|^2+\cdots+\|\bar{\mathcal{L}}_n(\mathcal{R}_1^k,\ldots,\mathcal{R}_n^k)\|^2=0.$$
According to the foregoing observations, we present the following theorem, which shows that Algorithm 1 converges to an exact solution within at most finitely many steps in the absence of round-off errors.
Theorem 2.
Assume that the generalized coupled Sylvester tensor Equation (1) is consistent. If the tensor sequences $\{\mathcal{R}_l^k\}$, $\{\mathcal{P}_l^k\}$, $\{\bar{\mathcal{L}}_l(\mathcal{R}_1^k,\ldots,\mathcal{R}_n^k)\}$ and $\{\mathcal{L}_l(\mathcal{P}_1^k,\ldots,\mathcal{P}_n^k)\}$ ($l=1,\ldots,n$, $k=0,1,\ldots$) are generated by Algorithm 1, then a solution is obtained, for any initial tensors $\mathcal{X}_1^0,\ldots,\mathcal{X}_n^0$, within at most $nI_1I_2\cdots I_n+1$ steps in the absence of round-off errors.
Proof. 
If
$$\|\bar{\mathcal{L}}_1(\mathcal{R}_1^k,\ldots,\mathcal{R}_n^k)\|^2+\cdots+\|\bar{\mathcal{L}}_n(\mathcal{R}_1^k,\ldots,\mathcal{R}_n^k)\|^2=0\quad\text{for some }k\in\{1,\ldots,nI_1I_2\cdots I_n+1\},$$
then it follows from Proposition 1 that $\mathcal{R}_l^k=\mathcal{O}$ ($l=1,\ldots,n$), i.e., $\mathcal{X}_1^k,\ldots,\mathcal{X}_n^k$ is a solution of Equation (1). If
$$\|\bar{\mathcal{L}}_1(\mathcal{R}_1^k,\ldots,\mathcal{R}_n^k)\|^2+\cdots+\|\bar{\mathcal{L}}_n(\mathcal{R}_1^k,\ldots,\mathcal{R}_n^k)\|^2\neq0\quad(k=1,\ldots,nI_1I_2\cdots I_n),$$
then we have $\mathcal{L}_l(\mathcal{P}_1^k,\ldots,\mathcal{P}_n^k)\neq\mathcal{O}$ ($l=1,\ldots,n$), i.e.,
$$\|\mathcal{L}_1(\mathcal{P}_1^k,\ldots,\mathcal{P}_n^k)\|^2+\cdots+\|\mathcal{L}_n(\mathcal{P}_1^k,\ldots,\mathcal{P}_n^k)\|^2\neq0.$$
So, the iteration can be carried on to form $\bar{\mathcal{L}}_l(\mathcal{R}_1^{nI_1I_2\cdots I_n+1},\ldots,\mathcal{R}_n^{nI_1I_2\cdots I_n+1})$, $l=1,\ldots,n$. Therefore, by Equation (14), it follows that
$$\langle\bar{\mathcal{L}}_1(\mathcal{R}_1^i,\ldots,\mathcal{R}_n^i),\bar{\mathcal{L}}_1(\mathcal{R}_1^j,\ldots,\mathcal{R}_n^j)\rangle+\cdots+\langle\bar{\mathcal{L}}_n(\mathcal{R}_1^i,\ldots,\mathcal{R}_n^i),\bar{\mathcal{L}}_n(\mathcal{R}_1^j,\ldots,\mathcal{R}_n^j)\rangle=\begin{cases}\displaystyle\sum_{l=1}^n\|\bar{\mathcal{L}}_l(\mathcal{R}_1^i,\ldots,\mathcal{R}_n^i)\|^2,&j=i,\\0,&j\neq i,\end{cases}\tag{18}$$
holds for $0\le j,i\le nI_1I_2\cdots I_n+1$. Let
$$M_i=\bigl[\bar{\mathcal{L}}_1(\mathcal{R}_1^i,\ldots,\mathcal{R}_n^i)_{(1)}\;\;\bar{\mathcal{L}}_2(\mathcal{R}_1^i,\ldots,\mathcal{R}_n^i)_{(1)}\;\;\cdots\;\;\bar{\mathcal{L}}_n(\mathcal{R}_1^i,\ldots,\mathcal{R}_n^i)_{(1)}\bigr],$$
where $\bar{\mathcal{L}}_l(\mathcal{R}_1^i,\ldots,\mathcal{R}_n^i)_{(1)}$ denotes the mode-1 unfolding of $\bar{\mathcal{L}}_l(\mathcal{R}_1^i,\ldots,\mathcal{R}_n^i)$. From Equation (18), we have
$$\langle M_i,M_j\rangle=\begin{cases}\|M_i\|^2,&j=i,\\0,&j\neq i.\end{cases}$$
Hence, the matrices $M_i$ ($i=1,\ldots,nI_1I_2\cdots I_n+1$) form an orthogonal set in the matrix space
$$\mathbb{L}=\bigl\{S\;\big|\;S=\bigl[S_1\;\,S_2\;\,\cdots\;\,S_n\bigr]\bigr\},$$
where $S_l\in\mathbb{R}^{I_1\times I_2\cdots I_n}$ ($l=1,\ldots,n$). However, it is impossible that $M_{nI_1I_2\cdots I_n+1}\neq O$, because the dimension of $\mathbb{L}$ is $nI_1I_2\cdots I_n$. Using Proposition 1, it follows that $\mathcal{R}_l^{nI_1I_2\cdots I_n+1}=\mathcal{O}$. This shows that $\mathcal{X}_1^{nI_1I_2\cdots I_n+1},\ldots,\mathcal{X}_n^{nI_1I_2\cdots I_n+1}$ is a solution of Equation (1).    □
Since the preconditioning technique is designed for nonsingular linear systems, in the rest of this paper we assume, without further specification, that Equation (1) always has a unique solution, i.e., that the matrix $A$ of Equation (8) is nonsingular. We now want to find a preconditioning matrix $P$ approximating $A^{-1}$ such that
$$PAx=Pb\tag{19}$$
and Equation (8) have the same solution, and such that Equation (19) can be solved more efficiently than the original system.
The nearest Kronecker product (NKP) preconditioner, which is based on Kronecker product approximations, was first described by Pitsianis and Van Loan in [44]. For a general matrix $A$, they derived a method for finding its nearest Kronecker product $M\otimes N$, so that $P=M^{-1}\otimes N^{-1}$ approximates $A^{-1}$ well. Langville and Stewart [45] then extended these results to matrices with special structure. The goal is to find matrices $Q_1,Q_2,\ldots,Q_n$ such that
$$\|A-Q_1\otimes Q_2\otimes\cdots\otimes Q_n\|_F$$
is minimized, where $A=\sum_{j=1}^m A_j^{(1)}\otimes A_j^{(2)}\otimes\cdots\otimes A_j^{(n)}$. From [45], we can see that
$$\begin{aligned}Q_1&\approx a_{11}A_1^{(1)}+a_{12}A_2^{(1)}+\cdots+a_{1m}A_m^{(1)},\\Q_2&\approx a_{21}A_1^{(2)}+a_{22}A_2^{(2)}+\cdots+a_{2m}A_m^{(2)},\\&\;\;\vdots\\Q_n&\approx a_{n1}A_1^{(n)}+a_{n2}A_2^{(n)}+\cdots+a_{nm}A_m^{(n)}.\end{aligned}\tag{20}$$
Similarly, we now want to find matrices $R,P_1,P_2,\ldots,P_n$ such that
$$\|A-R\otimes P_1\otimes P_2\otimes\cdots\otimes P_n\|_F$$
is minimized. Here, $A$ is the same as in Equation (8). It is easy to see that the matrix $A$ can be written as
$$\begin{aligned}A&=R_{11}\otimes I_{I_n}\otimes\cdots\otimes I_{I_2}\otimes A_{11}+R_{12}\otimes I_{I_n}\otimes\cdots\otimes A_{12}\otimes I_{I_1}+\cdots+R_{1n}\otimes A_{1n}\otimes I_{I_{n-1}}\otimes\cdots\otimes I_{I_1}\\&\quad+R_{21}\otimes A_{2n}\otimes I_{I_{n-1}}\otimes\cdots\otimes I_{I_1}+\cdots+R_{2n}\otimes I_{I_n}\otimes\cdots\otimes A_{2(n-1)}\otimes I_{I_1}+\cdots\\&\quad+R_{n1}\otimes I_{I_n}\otimes\cdots\otimes A_{n2}\otimes I_{I_1}+\cdots+R_{nn}\otimes I_{I_n}\otimes\cdots\otimes I_{I_2}\otimes A_{n1},\end{aligned}\tag{21}$$
where $R_{ij}\in\mathbb{R}^{n\times n}$ denotes the matrix whose $(i,j)$ entry equals 1 and whose other entries are 0, $i,j=1,\ldots,n$. By Equation (20), it follows that
$$\begin{aligned}R&\approx r_{11}R_{11}+r_{12}R_{12}+\cdots+r_{1n}R_{1n}+r_{21}R_{21}+\cdots+r_{2n}R_{2n}+\cdots+r_{n1}R_{n1}+\cdots+r_{nn}R_{nn},\\P_1&\approx p_{10}I_{I_n}+p_{11}A_{1n}+p_{12}A_{2n}+\cdots+p_{1n}A_{nn},\\P_2&\approx p_{20}I_{I_{n-1}}+p_{21}A_{1(n-1)}+p_{22}A_{2(n-1)}+\cdots+p_{2n}A_{n(n-1)},\\&\;\;\vdots\\P_n&\approx p_{n0}I_{I_1}+p_{n1}A_{11}+p_{n2}A_{21}+\cdots+p_{nn}A_{n1},\end{aligned}\tag{22}$$
where $r_{ij}$ and $p_{ik}$ are unknown parameters, $i,j=1,\ldots,n$, $k=0,1,\ldots,n$. Moreover, in order to reduce the complexity of inverting $R$, we only take $R\approx r_{11}R_{11}+r_{22}R_{22}+\cdots+r_{nn}R_{nn}$, i.e.,
$$R\approx\operatorname{diag}(r_{11},r_{22},\ldots,r_{nn}).$$
As stated in [24,45], the optimal parameters can then be found quickly by nonlinear optimization software, such as fminsearch in MATLAB, nlp in SAS, or multilevel coordinate search.
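As a rough illustration (not the authors' exact procedure), the parameter fit could be set up with fminsearch as follows; kronApprox is a hypothetical helper that assembles $R(x)\otimes P_1(x)\otimes\cdots\otimes P_n(x)$ from the parameter vector $x$ according to Equation (22), and Amat denotes the matrix $A$ of Equation (8), which is only practical to form for small dimensions.

```matlab
% Hedged sketch of the parameter fit for Equation (22) via fminsearch.
% kronApprox is a hypothetical helper assembling R(x) kron P1(x) kron ... kron Pn(x);
% Amat is the (small) coefficient matrix of Equation (8).
n    = 3;                                       % number of coupled equations (illustrative)
f    = @(x) norm(Amat - kronApprox(x), 'fro');  % NKP objective
x0   = [ones(n,1); zeros(n*(n+1),1)];           % r_11..r_nn, then p_ik, i=1..n, k=0..n
xopt = fminsearch(f, x0);                       % derivative-free minimization
```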
So, we have $A\approx R\otimes P_1\otimes P_2\otimes\cdots\otimes P_n$, and take
$$P=(R\otimes P_1\otimes P_2\otimes\cdots\otimes P_n)^{-1}=R^{-1}\otimes P_1^{-1}\otimes P_2^{-1}\otimes\cdots\otimes P_n^{-1}=\begin{bmatrix}r_{11}^{-1}P_1^{-1}\otimes P_2^{-1}\otimes\cdots\otimes P_n^{-1}&&\\&\ddots&\\&&r_{nn}^{-1}P_1^{-1}\otimes P_2^{-1}\otimes\cdots\otimes P_n^{-1}\end{bmatrix}\tag{23}$$
as a preconditioner for solving Equation (8). Obviously, according to Equation (23) and the fact that $(A\otimes B)(C\otimes D)=AC\otimes BD$, the tensor Equation (1) can be rewritten as
$$\begin{cases}
\mathcal{X}_1\times_1P_n^{-1}A_{11}\times_2P_{n-1}^{-1}\times_3\cdots\times_nP_1^{-1}+\mathcal{X}_2\times_1P_n^{-1}\times_2P_{n-1}^{-1}A_{12}\times_3\cdots\times_nP_1^{-1}+\cdots+\mathcal{X}_n\times_1P_n^{-1}\times_2\cdots\times_nP_1^{-1}A_{1n}=\mathcal{B}_1\times_1P_n^{-1}\times_2\cdots\times_nP_1^{-1},\\
\mathcal{X}_2\times_1P_n^{-1}A_{21}\times_2P_{n-1}^{-1}\times_3\cdots\times_nP_1^{-1}+\mathcal{X}_3\times_1P_n^{-1}\times_2P_{n-1}^{-1}A_{22}\times_3\cdots\times_nP_1^{-1}+\cdots+\mathcal{X}_1\times_1P_n^{-1}\times_2\cdots\times_nP_1^{-1}A_{2n}=\mathcal{B}_2\times_1P_n^{-1}\times_2\cdots\times_nP_1^{-1},\\
\qquad\vdots\\
\mathcal{X}_n\times_1P_n^{-1}A_{n1}\times_2P_{n-1}^{-1}\times_3\cdots\times_nP_1^{-1}+\mathcal{X}_1\times_1P_n^{-1}\times_2P_{n-1}^{-1}A_{n2}\times_3\cdots\times_nP_1^{-1}+\cdots+\mathcal{X}_{n-1}\times_1P_n^{-1}\times_2\cdots\times_nP_1^{-1}A_{nn}=\mathcal{B}_n\times_1P_n^{-1}\times_2\cdots\times_nP_1^{-1}.
\end{cases}\tag{24}$$
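In implementations, the preconditioner is never formed as the big Kronecker matrix (23); each right-hand side and operator evaluation is instead transformed by $k$-mode products as in Equation (24). For $n=3$, a sketch follows (Tensor Toolbox assumed; in practice one would reuse LU factorizations of $P_1,P_2,P_3$ rather than explicit inverses).

```matlab
% Apply the NKP preconditioner tensor-wise for n = 3, cf. Equation (24):
% Bbar = B x1 inv(P3) x2 inv(P2) x3 inv(P1); Tensor Toolbox assumed.
I1 = 4; I2 = 5; I3 = 6;                     % illustrative dimensions
P1 = eye(I3) + 0.1*randn(I3);               % illustrative NKP factors:
P2 = eye(I2) + 0.1*randn(I2);               % P1 acts on mode 3, P2 on mode 2,
P3 = eye(I1) + 0.1*randn(I1);               % and P3 on mode 1 (see Equation (24))
B  = tenrand([I1 I2 I3]);
Bbar = ttm(B, {inv(P3), inv(P2), inv(P1)}, [1 2 3]);   % preconditioned right-hand side
```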
From the above discussion, we propose a preconditioned modified conjugate residual (PMCR) method for solving Equation (1) (Algorithm 2).
Algorithm 2 A preconditioned modified conjugate residual method for solving Equation (1)
  • Given initial values $\mathcal{X}_i^0,\mathcal{B}_i\in\mathbb{R}^{I_1\times I_2\times\cdots\times I_n}$ and $A_{ij}\in\mathbb{R}^{I_j\times I_j}$ ($i,j=1,\ldots,n$), set $k=0$ and a tolerance $\epsilon>0$.
  • Compute $\bar{\mathcal{B}}_i=\mathcal{B}_i\times_1P_n^{-1}\times_2\cdots\times_nP_1^{-1}$, and set
$$\mathcal{M}_1(\mathcal{X}_1^0,\ldots,\mathcal{X}_n^0)=\mathcal{X}_1^0\times_1P_n^{-1}A_{11}\times_2\cdots\times_nP_1^{-1}+\cdots+\mathcal{X}_n^0\times_1P_n^{-1}\times_2\cdots\times_nP_1^{-1}A_{1n},$$
$$\mathcal{M}_2(\mathcal{X}_1^0,\ldots,\mathcal{X}_n^0)=\mathcal{X}_2^0\times_1P_n^{-1}A_{21}\times_2\cdots\times_nP_1^{-1}+\cdots+\mathcal{X}_1^0\times_1P_n^{-1}\times_2\cdots\times_nP_1^{-1}A_{2n},$$
$$\vdots$$
$$\mathcal{M}_n(\mathcal{X}_1^0,\ldots,\mathcal{X}_n^0)=\mathcal{X}_n^0\times_1P_n^{-1}A_{n1}\times_2\cdots\times_nP_1^{-1}+\cdots+\mathcal{X}_{n-1}^0\times_1P_n^{-1}\times_2\cdots\times_nP_1^{-1}A_{nn},$$
$$\bar{\mathcal{M}}_1(\mathcal{R}_1^0,\ldots,\mathcal{R}_n^0)=\mathcal{R}_1^0\times_1(P_n^{-1}A_{11})^T\times_2\cdots\times_n(P_1^{-1})^T+\cdots+\mathcal{R}_n^0\times_1(P_n^{-1})^T\times_2(P_{n-1}^{-1}A_{n2})^T\times_3\cdots\times_n(P_1^{-1})^T,$$
$$\bar{\mathcal{M}}_2(\mathcal{R}_1^0,\ldots,\mathcal{R}_n^0)=\mathcal{R}_1^0\times_1(P_n^{-1})^T\times_2(P_{n-1}^{-1}A_{12})^T\times_3\cdots\times_n(P_1^{-1})^T+\cdots+\mathcal{R}_n^0\times_1(P_n^{-1})^T\times_2(P_{n-1}^{-1})^T\times_3(P_{n-2}^{-1}A_{n3})^T\times_4\cdots\times_n(P_1^{-1})^T,$$
$$\vdots$$
$$\bar{\mathcal{M}}_n(\mathcal{R}_1^0,\ldots,\mathcal{R}_n^0)=\mathcal{R}_1^0\times_1(P_n^{-1})^T\times_2\cdots\times_n(P_1^{-1}A_{1n})^T+\cdots+\mathcal{R}_n^0\times_1(P_n^{-1}A_{n1})^T\times_2\cdots\times_n(P_1^{-1})^T.$$
  • Define $\mathcal{R}_i^0=\bar{\mathcal{B}}_i-\mathcal{M}_i(\mathcal{X}_1^0,\ldots,\mathcal{X}_n^0)$ and $\mathcal{P}_i^0=\bar{\mathcal{M}}_i(\mathcal{R}_1^0,\ldots,\mathcal{R}_n^0)$, and compute
$$s_0=\sum_{l=1}^n\|\bar{\mathcal{M}}_l(\mathcal{R}_1^0,\ldots,\mathcal{R}_n^0)\|^2,\qquad p_0=\sum_{l=1}^n\|\mathcal{M}_l(\mathcal{P}_1^0,\ldots,\mathcal{P}_n^0)\|^2,\qquad \eta_k=\sum_{l=1}^n\|\mathcal{R}_l^k\|^2.$$
  • For $k=0,1,2,\ldots$ until $\eta_k<\epsilon$ do: $\alpha_k=\dfrac{s_k}{p_k}$,
$$\mathcal{X}_i^{k+1}=\mathcal{X}_i^k+\alpha_k\mathcal{P}_i^k,\qquad \mathcal{R}_i^{k+1}=\mathcal{R}_i^k-\alpha_k\mathcal{M}_i(\mathcal{P}_1^k,\ldots,\mathcal{P}_n^k),\qquad i=1,\ldots,n,$$
    compute $\bar{\mathcal{M}}_l(\mathcal{R}_1^{k+1},\ldots,\mathcal{R}_n^{k+1})$ ($l=1,\ldots,n$) as above with $\mathcal{R}^{k+1}$ in place of $\mathcal{R}^0$, and
$$s_{k+1}=\sum_{l=1}^n\|\bar{\mathcal{M}}_l(\mathcal{R}_1^{k+1},\ldots,\mathcal{R}_n^{k+1})\|^2.$$
  • Calculate $\beta_k=\dfrac{s_{k+1}}{s_k}$,
$$\mathcal{P}_i^{k+1}=\bar{\mathcal{M}}_i(\mathcal{R}_1^{k+1},\ldots,\mathcal{R}_n^{k+1})+\beta_k\mathcal{P}_i^k,\qquad i=1,\ldots,n,$$
$$p_{k+1}=\sum_{l=1}^n\|\mathcal{M}_l(\mathcal{P}_1^{k+1},\ldots,\mathcal{P}_n^{k+1})\|^2.$$

4. Numerical Experiments

In this section, we give some numerical experiments to illustrate the effectiveness of the proposed algorithms. All codes were written in MATLAB 2016b and run on a PC with an Intel(R) Core(TM) i7-9750H @ 2.6 GHz and 8.00 GB of memory. Additionally, all tensor operations were based on the Tensor Toolbox (version 2.5) by Bader and Kolda [49]. We compare the MCR and PMCR algorithms with MCG [40] in terms of the number of iteration steps ("IT"), the elapsed CPU time in seconds ("CPU") and the residual Frobenius norm ("RES") defined as $\eta_k$. In all of the following examples, we take $\mathcal{X}_i=\mathcal{O}$ as initial tensors, and stop when $\eta_k<10^{-16}$ or when the number of iteration steps exceeds 5000.
Example 1.
Consider the third-order Sylvester tensor equation, Equation (2), with coefficient matrices of the form of Equation (4), scaled by the common factor $\frac{1}{I_xI_yI_z}$, and $\mathcal{B}=\mathtt{tenrand}(I_x,I_y,I_z)$. Following [15], we let $\tau_q=\frac{1}{\pi^2}+10^{-3}$, $\tau_U=\frac{1.99\times10^{-5}}{\pi^2}$, $\alpha=\frac{\rho C_p}{\kappa}=1$, $\Delta t=10^{-3}$, $\delta_x=\delta_y=0.1$ and $\delta_z=10^{-5}$.
In this case, we implement the proposed algorithms and MCG to find the iterative solution of Equation (2), and report the numerical results in Table 1. From this table, we can observe that our algorithms are feasible and effective, and that the elapsed CPU time of the MCR algorithm is slightly less than that of MCG. The number of iteration steps of the MCR algorithm is no more than $I_xI_yI_z+1$, which coincides with the results of Theorem 2. It is also shown that the PMCR algorithm outperforms the other algorithms, and its convergence rate is about four times that of the MCG method. Additionally, from Figure 1, we can see that the convergence behavior of MCR is more stable than that of MCG.
Example 2.
As the second example, we consider the following convection-diffusion equation [9,16,25,29]
$$\begin{cases}-v\,\Delta u+\mathbf{c}^T\nabla u=f&\text{in }\Gamma=[0,1]^n,\\u=0&\text{on }\partial\Gamma.\end{cases}\tag{25}$$
From the standard finite difference discretization on equidistant nodes and a second-order convergent scheme (Fromm’s scheme), Equation (25) can be discretized as Equation (5) with
$$A_i=\frac{v}{h^2}\begin{bmatrix}2&-1&&&\\-1&2&-1&&\\&\ddots&\ddots&\ddots&\\&&-1&2&-1\\&&&-1&2\end{bmatrix}_{m\times m}+\frac{c_i}{4h}\begin{bmatrix}3&-5&1&&\\1&3&-5&1&\\&\ddots&\ddots&\ddots&\ddots\\&&1&3&-5\\&&&1&3\end{bmatrix}_{m\times m},\tag{26}$$
where $h=\frac{1}{m+1}$ is the mesh size, $i=1,\ldots,n$. Set $n=5$, $m=10$, and
(1) $v=100$, $c_1=c_2=c_3=c_4=c_5=1$;
(2) $v=100$, $c_1=10$, $c_2=30$, $c_3=50$, $c_4=70$, $c_5=90$;
(3) $v=1$, $c_1=c_2=c_3=c_4=c_5=1$;
(4) $v=1$, $c_1=10$, $c_2=30$, $c_3=50$, $c_4=70$, $c_5=90$;
(5) $v=0.01$, $c_1=c_2=c_3=c_4=c_5=1$;
(6) $v=0.01$, $c_1=10$, $c_2=30$, $c_3=50$, $c_4=70$, $c_5=90$;
and $\mathcal{B}=\mathtt{tenrand}(10,10,10,10,10)$.
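For reference, the matrices of Equation (26) are straightforward to assemble in MATLAB; the sketch below uses parameter set (1) and the Tensor Toolbox's tenrand for the right-hand side.

```matlab
% Assemble A_1,...,A_5 of Equation (26) for parameter set (1).
n = 5;  m = 10;  h = 1/(m+1);  v = 100;  c = [1 1 1 1 1];
D = toeplitz([2 -1 zeros(1,m-2)]);                        % tridiag(-1, 2, -1)
C = toeplitz([3 1 zeros(1,m-2)], [3 -5 1 zeros(1,m-3)]);  % Fromm convection stencil
A = cell(1,n);
for i = 1:n
    A{i} = (v/h^2)*D + (c(i)/(4*h))*C;
end
B = tenrand([m m m m m]);                                 % random right-hand side
```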
In this example, we utilize the proposed algorithms and MCG to solve Equation (5). The obtained numerical results are reported in Table 2. From this table, one can see that the number of iteration steps of MCR is less than $10^5+1$. On the other hand, the performance of MCR is slightly better than that of MCG in terms of the elapsed CPU time. It is also shown that the PMCR algorithm converges faster than both MCG and MCR.
Example 3.
Consider the following generalized coupled Sylvester tensor equations
$$\begin{cases}\mathcal{X}_1\times_1A_{11}+\mathcal{X}_2\times_2A_{12}+\mathcal{X}_3\times_3A_{13}=\mathcal{B}_1,\\\mathcal{X}_2\times_1A_{21}+\mathcal{X}_3\times_2A_{22}+\mathcal{X}_1\times_3A_{23}=\mathcal{B}_2,\\\mathcal{X}_3\times_1A_{31}+\mathcal{X}_1\times_2A_{32}+\mathcal{X}_2\times_3A_{33}=\mathcal{B}_3,\end{cases}\tag{27}$$
where A 11 and A 22 are the matrices A 1 and A 2 of Equation (4), respectively, and the remaining matrices and tensors are given by
```matlab
alpha = 100;
A12 = rand(I2,I2) + diag(ones(I2,1))*alpha;  A13 = rand(I3,I3) + diag(ones(I3,1))*alpha;
A21 = rand(I1,I1) + diag(ones(I1,1))*alpha;  A23 = rand(I3,I3) + diag(ones(I3,1))*alpha;
A31 = rand(I1,I1) + diag(ones(I1,1))*alpha;  A32 = rand(I2,I2) + diag(ones(I2,1))*alpha;
A33 = rand(I3,I3) + diag(ones(I3,1))*alpha;
Bi  = tenrand(I1,I2,I3), i = 1,2,3.
```
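Putting the pieces together, this data can be fed to the mcr3 sketch given after Algorithm 1; buildA1 and buildA2 below are hypothetical helpers assembling $A_1$ and $A_2$ of Equation (4).

```matlab
% Hedged driver for Example 3 using the mcr3 sketch from Section 3.
% buildA1/buildA2 are hypothetical helpers returning A1, A2 of Equation (4).
I1 = 5; I2 = 10; I3 = 15; alpha = 100;
A = cell(3,3);
A{1,1} = buildA1(I1);  A{2,2} = buildA2(I2);       % from Equation (4), hypothetical
A{1,2} = rand(I2) + alpha*eye(I2);  A{1,3} = rand(I3) + alpha*eye(I3);
A{2,1} = rand(I1) + alpha*eye(I1);  A{2,3} = rand(I3) + alpha*eye(I3);
A{3,1} = rand(I1) + alpha*eye(I1);  A{3,2} = rand(I2) + alpha*eye(I2);
A{3,3} = rand(I3) + alpha*eye(I3);
B1 = tenrand([I1 I2 I3]); B2 = tenrand([I1 I2 I3]); B3 = tenrand([I1 I2 I3]);
[X1,X2,X3,res] = mcr3(A, B1, B2, B3, 1e-16, 5000);
```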
Our experimental results in Table 3 show that the elapsed CPU time of all the algorithms increases as the dimension increases, and that the number of iteration steps of the MCR algorithm does not exceed $I_1I_2I_3+1$. They also show that the elapsed CPU time of MCR with the nearest Kronecker product preconditioner is much lower than that of MCR and MCG. From Figure 2, one can see that the MCG algorithm has a quite irregular convergence behavior, whereas MCR is rather stable. Additionally, MCR requires less CPU time than MCG.

5. Conclusions

In this paper, we proposed a modified conjugate residual algorithm and its preconditioned version based on the Kronecker product approximations for solving the generalized coupled Sylvester tensor Equation (1). Our convergence analysis showed that the MCR algorithm converges to an exact solution for any initial tensors within at most finitely many steps in the absence of round-off errors, and that its convergence behavior is smoother than that of MCG for some problems. Based on the Kronecker product approximations, we developed the PMCR method for solving the original tensor Equation (1). PMCR outperforms the other algorithms in terms of the elapsed CPU time and the number of iteration steps. Note that the nearest Kronecker product technique can also be applied to some existing methods, such as the biconjugate gradient method and the biconjugate residual method. Hence, the preconditioning technique is very promising.

Author Contributions

All authors contributed equally to conceptualization, formal analysis, investigation, methodology, software, validation, writing the original draft, and review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research is supported by National Natural Science Foundation of China [grant numbers 11971294 and 12171369], and Hainan Provincial Natural Science Foundation of China [grant numbers 122QN214 and 122MS001].

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the anonymous referees for their valuable suggestions and comments. This work was supported by the National Natural Science Foundation of China [grant numbers 11971294 and 12171369] and the Hainan Provincial Natural Science Foundation of China [grant numbers 122QN214 and 122MS001].

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Weaver, J. Centrosymmetric (cross-symmetric) matrices, their basic properties, eigenvalues, and eigenvectors. Am. Math. Mon. 1985, 92, 711–717.
  2. Datta, L.; Morgera, S. Some results on matrix symmetries and a pattern recognition application. IEEE Trans. Signal Process. 1986, 34, 992–994.
  3. Datta, L.; Morgera, S. On the reducibility of centrosymmetric matrices—applications in engineering problems. Circ. Syst. Signal Process. 1989, 8, 71–96.
  4. Datta, B.N.; Lin, W.; Wang, J.N. Robust partial pole assignment for vibrating systems with aerodynamic effects. IEEE Trans. Automat. Control 2006, 51, 1979–1984.
  5. Liang, M.L.; Zheng, B.; Zhao, R.J. Alternating iterative methods for solving tensor equations with applications. Numer. Algorithms 2018, 80, 1437–1465.
  6. Li, Q.; Zheng, B. Scaled three-term derivative-free methods for solving large-scale nonlinear monotone equations. Numer. Algorithms 2021, 87, 1343–1367.
  7. Zhang, F.X.; Wei, M.S.; Li, Y.; Zhao, J. An efficient method for least-squares problem of the quaternion matrix equation. Linear Multilinear Algebra 2020, 1–13.
  8. Zhang, F.X.; Wei, M.S.; Li, Y.; Zhao, J. An efficient real representation method for least squares problem of the quaternion constrained matrix equation AXB + CYD = E. Int. J. Comput. Math. 2021, 98, 1408–1419.
  9. Grasedyck, L. Existence and computation of low Kronecker-rank approximations for large linear systems of tensor product structure. Computing 2004, 72, 247–265.
  10. Bai, Z.Z.; Golub, G.H.; Ng, M.K. Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems. SIAM J. Matrix Anal. Appl. 2003, 24, 603–626.
  11. Li, X.T.; Ng, M.K. Solving sparse non-negative tensor equations: Algorithms and applications. Front. Math. China 2015, 10, 649–680.
  12. Li, B.W.; Sun, Y.S.; Yu, Y. Iterative and direct Chebyshev collocation spectral methods for one-dimensional radiative heat transfer. Int. J. Heat Mass Transf. 2008, 51, 5887–5894.
  13. Sun, Y.S.; Ma, J.; Li, B.W. Chebyshev collocation spectral method for three-dimensional transient coupled radiative–conductive heat transfer. J. Heat Transf. 2012, 134, 092701.
  14. Li, B.W.; Tian, S.; Sun, Y.S.; Hu, Z.M. Schur-decomposition for 3D matrix equations and its application in solving radiative discrete ordinates equations discretized by Chebyshev collocation spectral method. J. Comput. Phys. 2010, 229, 1198–1212.
  15. Malek, A.; Momeni-Masuleh, S.H. A mixed collocation–finite difference method for 3D microscopic heat transport problems. J. Comput. Appl. Math. 2008, 217, 137–147.
  16. Chen, Z.; Lu, L.Z. A gradient based iterative solutions for Sylvester tensor equations. Math. Probl. Eng. 2013, 2013, 819479.
  17. Beik, F.P.A.; Asl, S.A. Residual norm steepest descent based iterative algorithms for Sylvester tensor equations. J. Math. Model. 2015, 2, 115–131.
  18. Li, N.; Navasca, C.; Glenn, C. Iterative methods for symmetric outer product tensor decomposition. Electron. Trans. Numer. Anal. 2015, 44, 124–139.
  19. Bentbib, A.H.; Guide, M.E.; Jbilou, K.; Onunwor, E.; Reichel, L. Solution methods for linear discrete ill-posed problems for color image restoration. BIT Numer. Math. 2018, 58, 555–578.
  20. Li, F.; Ng, M.K.; Plemmons, R.J. Coupled segmentation and denoising/deblurring for hyperspectral material identification. Numer. Linear Algebra Appl. 2012, 19, 153–173.
  21. Hajarian, M. Symmetric solutions of the coupled generalized Sylvester matrix equations via BCR algorithm. J. Frankl. Inst. 2016, 353, 3233–3248.
  22. Hajarian, M. Computing symmetric solutions of general Sylvester matrix equations via Lanczos version of biconjugate residual algorithm. Comput. Math. Appl. 2018, 76, 686–700.
  23. Wang, W.; Ding, F.; Dai, J. Maximum likelihood least squares identification for systems with autoregressive moving average noise. Appl. Math. Model. 2012, 36, 1842–1853.
  24. Chen, Z.; Lu, L.Z. A projection method and Kronecker product preconditioner for solving Sylvester tensor equations. Sci. China Math. 2012, 55, 1281–1292.
  25. Beik, F.P.A.; Movahed, F.S.; Ahmadi-Asl, S. On the Krylov subspace methods based on tensor format for positive definite Sylvester tensor equations. Numer. Linear Algebra Appl. 2016, 23, 444–466.
  26. Karimi, S.; Dehghan, M. Global least squares method based on tensor form to solve linear systems in Kronecker format. Trans. Inst. Meas. Control 2018, 40, 2378–2386.
  27. Toutounian, F.; Karimi, S. Global least squares method (Gl-LSQR) for solving general linear systems with several right-hand sides. Appl. Math. Comput. 2006, 178, 452–460.
  28. Wang, Q.W.; Xu, X.X.; Duan, X.F. Least squares solution of the quaternion Sylvester tensor equation. Linear Multilinear Algebra 2021, 69, 104–130.
  29. Xu, X.J.; Wang, Q.W. Extending BiCG and BiCR methods to solve the Stein tensor equation. Comput. Math. Appl. 2019, 77, 3117–3127.
  30. Li, T.; Wang, Q.W.; Duan, X.F. Numerical algorithms for solving discrete Lyapunov tensor equation. J. Comput. Appl. Math. 2020, 370, 112676.
  31. Huang, B.H.; Li, W. Numerical subspace algorithms for solving the tensor equations involving Einstein product. Numer. Linear Algebra Appl. 2021, 28, e2351.
  32. Hajarian, M. Convergence properties of BCR method for generalized Sylvester matrix equation over generalized reflexive and anti-reflexive matrices. Linear Multilinear Algebra 2017, 66, 1975–1990.
  33. Hajarian, M. Reflexive periodic solutions of general periodic matrix equations. Math. Methods Appl. Sci. 2019, 42, 3527–3548.
  34. Hajarian, M. Conjugate gradient-like methods for solving general tensor equation with Einstein product. J. Frankl. Inst. 2020, 357, 4272–4285.
  35. Najafi-Kalyani, M.; Beik, F.P.A.; Jbilou, K. On global iterative schemes based on Hessenberg process for (ill-posed) Sylvester tensor equations. J. Comput. Appl. Math. 2020, 373, 112216.
  36. Liu, L.S.; Wang, Q.W.; Chen, J.F.; Xie, Y.Z. An exact solution to a quaternion matrix equation with an application. Symmetry 2022, 14, 375.
  37. Mehany, M.S.; Wang, Q.W. Three symmetrical systems of coupled Sylvester-like quaternion matrix equations. Symmetry 2022, 14, 550.
  38. Bentbib, A.H.; El-Halouy, S.; Sadek, E.M. Krylov subspace projection method for Sylvester tensor equation with low rank right-hand side. Numer. Algorithms 2020, 84, 1411–1430.
  39. Heyouni, M.; Saberi-Movahed, F.; Tajaddini, A. A tensor format for the generalized Hessenberg method for solving Sylvester tensor equations. J. Comput. Appl. Math. 2020, 377, 112878.
  40. Lv, C.Q.; Ma, C.F. A modified CG algorithm for solving generalized coupled Sylvester tensor equations. Appl. Math. Comput. 2020, 365, 124699.
  41. Saad, Y. Iterative Methods for Sparse Linear Systems, 2nd ed.; SIAM: Philadelphia, PA, USA, 2003.
  42. Saad, Y. ILUT: A dual threshold incomplete LU factorization. Numer. Linear Algebra Appl. 1994, 1, 387–402.
  43. Tallec, P.L.; Roeck, Y.H.D.; Vidrascu, M. Domain decomposition methods for large linearly elliptic three-dimensional problems. J. Comput. Appl. Math. 1991, 34, 93–117.
  44. Van Loan, C.F.; Pitsianis, N. Approximation with Kronecker products. In Linear Algebra for Large Scale and Real-Time Applications (Leuven, 1992); Kluwer Academic Publishers: Dordrecht, The Netherlands, 1993; pp. 293–314.
  45. Langville, A.N.; Stewart, W.J. A Kronecker product approximate preconditioner for SANs. Numer. Linear Algebra Appl. 2004, 11, 723–752.
  46. Qi, L.Q.; Luo, Z.Y. Tensor Analysis: Spectral Theory and Special Tensors; SIAM: Philadelphia, PA, USA, 2017.
  47. Qi, L.Q.; Chen, H.; Chen, Y. Tensor Eigenvalues and Their Applications; Springer: Berlin/Heidelberg, Germany, 2018.
  48. Kolda, T.; Bader, B. The TOPHITS model for higher-order web link analysis. In Proceedings of the Workshop on Link Analysis, Counterterrorism and Security, Minnesota, 2006; pp. 26–29.
  49. Kolda, T.; Bader, B. Tensor decompositions and applications. SIAM Rev. 2009, 51, 455–500.
Figure 1. Convergence history of Example 1, $I_1=15$, $I_2=15$, $I_3=20$.
Figure 2. Convergence history of Example 3: $I_1=5$, $I_2=10$, $I_3=15$ (left); $I_1=15$, $I_2=15$, $I_3=15$ (right).
Table 1. Comparison results of Example 1.

| [Ix, Iy, Iz] | MCG [40] (IT, CPU, RES) | MCR (IT, CPU, RES) | PMCR (IT, CPU, RES) |
| --- | --- | --- | --- |
| [5, 5, 5] | 15, 0.1463, 5.9784 × 10^{-18} | 14, 0.1396, 3.6714 × 10^{-18} | 5, 0.0313, 6.2438 × 10^{-19} |
| [10, 10, 15] | 301, 0.9928, 9.6023 × 10^{-17} | 296, 0.9434, 8.8758 × 10^{-17} | 62, 0.2183, 6.2891 × 10^{-17} |
| [15, 15, 20] | 632, 2.7739, 9.6572 × 10^{-17} | 624, 2.6599, 9.5403 × 10^{-17} | 149, 0.5866, 8.6559 × 10^{-17} |
| [15, 20, 25] | 1189, 8.0489, 6.4185 × 10^{-17} | 1152, 7.8583, 4.5580 × 10^{-17} | 359, 2.2154, 1.5868 × 10^{-17} |
| [20, 20, 30] | 1696, 10.2321, 9.2562 × 10^{-17} | 1648, 9.7157, 9.1825 × 10^{-17} | 478, 3.0135, 4.8919 × 10^{-17} |
Table 2. Comparison results of Example 2.

| Parameter set | MCG [40] (IT, CPU, RES) | MCR (IT, CPU, RES) | PMCR (IT, CPU, RES) |
| --- | --- | --- | --- |
| (1) | 232, 3.1112, 9.7305 × 10^{-17} | 226, 3.0947, 8.6237 × 10^{-17} | 58, 0.8934, 8.1465 × 10^{-17} |
| (2) | 321, 4.2594, 8.8469 × 10^{-17} | 299, 3.9742, 8.9632 × 10^{-17} | 66, 1.0423, 7.7125 × 10^{-17} |
| (3) | 283, 3.9432, 9.9765 × 10^{-17} | 278, 3.9099, 9.3321 × 10^{-17} | 63, 0.9578, 9.0356 × 10^{-17} |
| (4) | 400, 5.2723, 8.9944 × 10^{-17} | 382, 5.2150, 9.1298 × 10^{-17} | 82, 2.1283, 7.9046 × 10^{-17} |
| (5) | 376, 5.0942, 9.4951 × 10^{-17} | 358, 4.9285, 9.8407 × 10^{-17} | 79, 1.9201, 8.9957 × 10^{-17} |
| (6) | 487, 6.0027, 9.3902 × 10^{-17} | 462, 5.9755, 9.4925 × 10^{-17} | 124, 2.4562, 9.1929 × 10^{-17} |
Table 3. Comparison results of Example 3.

| [I1, I2, I3] | MCG [40] (IT, CPU, RES) | MCR (IT, CPU, RES) | PMCR (IT, CPU, RES) |
| --- | --- | --- | --- |
| [5, 10, 15] | 76, 0.5844, 3.1415 × 10^{-17} | 74, 0.5179, 4.3350 × 10^{-17} | 20, 0.1248, 2.5513 × 10^{-17} |
| [10, 15, 20] | 465, 3.4385, 8.1439 × 10^{-17} | 442, 3.1806, 6.1823 × 10^{-17} | 102, 0.6026, 7.6829 × 10^{-17} |
| [15, 15, 15] | 130, 0.9715, 1.1913 × 10^{-17} | 128, 0.9237, 1.0795 × 10^{-17} | 42, 0.2556, 6.4513 × 10^{-17} |
| [20, 25, 30] | 1310, 9.5945, 9.1953 × 10^{-17} | 1289, 9.1217, 8.3788 × 10^{-17} | 363, 2.0159, 7.2917 × 10^{-17} |
| [25, 25, 25] | 1255, 9.0559, 8.2675 × 10^{-17} | 1210, 8.8567, 7.4489 × 10^{-17} | 325, 1.8219, 7.6519 × 10^{-17} |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
