A FEAST Algorithm for the Linear Response Eigenvalue Problem

1 College of Computer and Information Science, Fujian Agriculture and Forestry University, Fuzhou 350002, China
2 School of Mathematical Sciences, Guizhou Normal University, Guiyang 550001, China
* Author to whom correspondence should be addressed.
Submission received: 22 July 2019 / Revised: 25 August 2019 / Accepted: 27 August 2019 / Published: 29 August 2019

Abstract: In the linear response eigenvalue problem arising from quantum chemistry and physics, one needs to compute several positive eigenvalues together with the corresponding eigenvectors. For such a task, we present in this paper a FEAST algorithm based on complex contour integration for the linear response eigenvalue problem. By simply dividing the spectrum into a collection of disjoint regions, the algorithm is able to parallelize the process of solving the linear response eigenvalue problem. The associated convergence results are established to reveal the accuracy of the approximated eigenspace. Numerical examples are presented to demonstrate the effectiveness of our proposed algorithm.

1. Introduction

In computational quantum chemistry and physics, the random phase approximation (RPA) or the Bethe–Salpeter (BS) equation describes the excitation states and absorption spectra of molecules or surfaces of solids [1,2]. One important question in the RPA or BS equation is how to compute a few eigenpairs associated with several of the smallest positive eigenvalues of the following eigenvalue problem:
$$H w = \begin{bmatrix} A & B \\ -B & -A \end{bmatrix}\begin{bmatrix} u \\ v \end{bmatrix} = \lambda \begin{bmatrix} u \\ v \end{bmatrix} = \lambda w, \tag{1}$$
where $A, B \in \mathbb{R}^{N \times N}$ are both symmetric matrices and $\begin{bmatrix} A & B \\ B & A \end{bmatrix}$ is positive definite [3,4,5]. Through a similarity transformation, the eigenvalue problem (1) can be equivalently transformed into:
$$H z = \begin{bmatrix} 0 & K \\ M & 0 \end{bmatrix}\begin{bmatrix} y \\ x \end{bmatrix} = \lambda \begin{bmatrix} y \\ x \end{bmatrix} = \lambda z, \tag{2}$$
where $K = A - B$ and $M = A + B$. The eigenvalue problem (2) is still referred to as the linear response eigenvalue problem (LREP) [3,6], and it will be so in this paper, as well. The condition imposed upon A and B in (1) implies that both K and M are $N \times N$ real symmetric positive definite matrices. However, there are cases where one of them may be indefinite [7]. Therefore, to be consistent with [8,9,10], throughout the rest of the paper, we relax the condition on K and M to require only that they are symmetric and that one of them is positive definite, while the other may be indefinite. There has been immense recent interest in developing new theories, efficient numerical algorithms for LREP, and the associated excitation response calculations of molecules for materials' design in energy science [11,12,13,14,15].
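For completeness, one explicit choice of this similarity transformation (a standard observation, stated here in our notation rather than quoted from the paper) uses the symmetric orthogonal matrix J below:
$$J = \frac{1}{\sqrt{2}}\begin{bmatrix} I_N & I_N \\ I_N & -I_N \end{bmatrix}, \qquad J\begin{bmatrix} A & B \\ -B & -A \end{bmatrix}J = \begin{bmatrix} 0 & A-B \\ A+B & 0 \end{bmatrix} = \begin{bmatrix} 0 & K \\ M & 0 \end{bmatrix},$$
since $J^2 = I_{2N}$; hence (1) and (2) share the same eigenvalues, with eigenvectors related by $z = Jw$.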
From (2), we have $Kx = \lambda y$ and $My = \lambda x$, which together lead to:
$$KMy = \lambda^2 y, \qquad MKx = \lambda^2 x. \tag{3}$$
Since $KM = (KM^{1/2})M^{1/2}$ has the same eigenvalues as $M^{1/2}KM^{1/2}$, which is symmetric, all eigenvalues of KM are real. Denote these eigenvalues by $\omega_i$ ($1 \le i \le N$) in ascending order, i.e.,
$$\omega_1 \le \omega_2 \le \cdots \le \omega_N.$$
The eigenvalues of MK are $\omega_i$ ($1 \le i \le N$), as well. Let $\iota = \sqrt{-1}$, the imaginary unit, and:
$$\lambda_i = \begin{cases} \sqrt{\omega_i}, & \text{if } \omega_i \ge 0, \\ \iota\sqrt{-\omega_i}, & \text{if } \omega_i < 0. \end{cases}$$
The eigenvalues of H are:
$$\pm\lambda_i \quad\text{for } i = 1, 2, \ldots, N.$$
This practice of enumerating the eigenvalues of H will be used later for the much smaller projection of H, as well.
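As a small numerical illustration of this enumeration (an illustrative sketch in NumPy, not code from the paper), the $\omega_i$ can be computed from the symmetric matrix $M^{1/2}KM^{1/2}$:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
N = 6
S = rng.standard_normal((N, N))
M = S @ S.T + N * np.eye(N)        # symmetric positive definite
T = rng.standard_normal((N, N))
K = (T + T.T) / 2                  # symmetric, possibly indefinite

w, Q = eigh(M)
Mh = Q @ np.diag(np.sqrt(w)) @ Q.T               # symmetric square root M^{1/2}
omega = eigh(Mh @ K @ Mh, eigvals_only=True)     # eigenvalues of KM: real, ascending
lam = np.where(omega >= 0, np.sqrt(np.abs(omega)),
               1j * np.sqrt(np.abs(omega)))      # lambda_i per the convention above
# The eigenvalues of H = [[0, K], [M, 0]] are then +/- lam[i], i = 1, ..., N.
```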
Since the dimension N is usually very large for systems of practical interest, solving LREP (2) is a very challenging problem. Iterative methods are among the recent efforts for computing the partial spectrum of (2), such as the locally optimal block preconditioned 4D conjugate gradient method (LOBP4DCG) [6] and its space-extended variant [16], the block Chebyshev–Davidson method [10], as well as the generalized Lanczos-type methods [8,9]. These algorithms are all based on the so-called pair of deflating subspaces, which is a generalization of the concept of the invariant subspace in standard eigenvalue problems. Thus, these methods can be regarded as extensions of the associated classical methods for the standard eigenvalue problem. Recently, in [17], a density-matrix-based algorithm, called FEAST, was proposed by Polizzi for the calculation of a segment of eigenvalues and their associated eigenvectors of a real symmetric or Hermitian matrix. The algorithm promises high potential for parallelism by dividing the spectrum of a matrix into an arbitrary number of nonintersecting intervals. The eigenvalues in each interval and the associated eigenvectors can be solved for independently of those in the other intervals. As a result, one can compute a large number of eigenpairs at once on modern computing architectures, while typical Krylov- or Davidson-type solvers fail in this respect because they require orthogonalization of any new basis vector against the previously converged ones.
The basic idea of the FEAST algorithm is based on the contour integral function:
$$f(\alpha) = \frac{1}{2\pi\iota}\oint_C (\mu - \alpha)^{-1}\,d\mu$$
to construct bases of particular subspaces and obtain the corresponding approximate eigenpairs by the Rayleigh–Ritz procedure, where C is any contour enclosing the set of wanted eigenvalues. The corresponding theoretical analysis was established in [18]. In particular, it was shown that the FEAST algorithm can be understood as an accelerated subspace iteration algorithm. Due to its advantage in parallelism, the FEAST algorithm has been well received in the electronic structure calculations community. Additionally, improvements and generalizations of the FEAST algorithm have been active research subjects in eigenvalue problems for nearly a decade. For example, in [19], Kestyn, Polizzi, and Tang presented a FEAST eigensolver for non-Hermitian problems by using dual subspaces for computing the left and right eigenvectors. In [20], the FEAST algorithm was developed to solve nonlinear eigenvalue problems for eigenvalues that lie in a user-defined region. Some other work on the FEAST method can be found in [21,22,23,24]. Motivated by these facts, in this paper, we continue this line of effort by extending the FEAST algorithm to LREP.
The rest of the paper is organized as follows. In Section 2, some notation and preliminaries, including the canonical angles of two subspaces and basic results for LREP, are collected for later use. Section 3 contains our main algorithm, implementation issues, and the corresponding convergence analysis. In Section 4, we present some numerical examples to show the numerical behavior of our algorithm. Finally, conclusions are drawn in Section 5.

2. Preliminaries

$\mathbb{R}^{m \times n}$ is the set of all $m \times n$ real matrices. For a subspace $\mathcal{X} \subseteq \mathbb{R}^n$, $\dim(\mathcal{X})$ is the dimension of $\mathcal{X}$. For $X \in \mathbb{R}^{m \times n}$, $X^T$ is its transpose, $\mathcal{R}(X)$ is the column space of X, and the submatrix $X(:, i\!:\!j)$ of X consists of column i to column j. $I_n$ is the $n \times n$ identity matrix, or simply I if its dimension is clear from the context. For matrices or scalars $X_1, X_2, \ldots, X_k$, $\operatorname{diag}(X_1, X_2, \ldots, X_k)$ denotes the block diagonal matrix with diagonal blocks $X_1, X_2, \ldots, X_k$.
For any given symmetric and positive definite matrix $W \in \mathbb{R}^{N \times N}$, the W-inner product and its induced W-norm are defined by:
$$\langle x, y\rangle_W = y^T W x, \qquad \|x\|_W = \sqrt{\langle x, x\rangle_W}.$$
If $\langle x, y\rangle_W = 0$, then we say $x \perp_W y$ or $y \perp_W x$. The projector $\Pi_W$ is said to be the W-orthogonal projector onto $\mathcal{X}$ if, for any vector $x \in \mathbb{R}^N$,
$$\Pi_W x \in \mathcal{X} \quad\text{and}\quad (I - \Pi_W)x \perp_W \mathcal{X}.$$
For two subspaces $\mathcal{X}, \mathcal{Y} \subseteq \mathbb{R}^N$ with $k = \dim(\mathcal{X}) \le \dim(\mathcal{Y}) = \ell$, let X and Y be W-orthonormal basis matrices of the subspaces $\mathcal{X}$ and $\mathcal{Y}$, respectively, i.e.,
$$X^T W X = I_k, \quad \mathcal{R}(X) = \mathcal{X}, \qquad\text{and}\qquad Y^T W Y = I_\ell, \quad \mathcal{R}(Y) = \mathcal{Y}.$$
Denote the singular values of $X^T W Y$ by $\sigma_j$ for $1 \le j \le k$ in ascending order, i.e., $\sigma_1 \le \cdots \le \sigma_k$. The diagonal matrix of W-canonical angles from $\mathcal{X}$ to $\mathcal{Y}$ (if $\ell = k$, we may say that these angles are between $\mathcal{X}$ and $\mathcal{Y}$ [25]) in descending order is defined by:
$$\Theta_W(\mathcal{X}, \mathcal{Y}) = \operatorname{diag}\big(\theta_W^{(1)}(\mathcal{X}, \mathcal{Y}), \ldots, \theta_W^{(k)}(\mathcal{X}, \mathcal{Y})\big),$$
where $\theta_W^{(1)}(\mathcal{X}, \mathcal{Y}) \ge \cdots \ge \theta_W^{(k)}(\mathcal{X}, \mathcal{Y})$ and:
$$0 \le \theta_W^{(j)}(\mathcal{X}, \mathcal{Y}) = \arccos\sigma_j \le \frac{\pi}{2}, \quad\text{for } 1 \le j \le k.$$
In what follows, we sometimes place a vector or matrix in one or both arguments of $\Theta_W(\cdot, \cdot)$ with the understanding that it refers to the subspace spanned by the vector or by the columns of the matrix argument.
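For readers implementing the bounds below, the W-canonical angles can be computed directly from this definition; the following helper is an illustrative sketch (our own naming, not from [25]):

```python
import numpy as np
from scipy.linalg import cholesky, qr, svd

def canonical_angles_W(X, Y, W):
    """W-canonical angles (radians, descending) from range(X) to range(Y).
    Assumes dim range(X) <= dim range(Y) and W symmetric positive definite."""
    L = cholesky(W, lower=True)            # W = L L^T
    QX, _ = qr(L.T @ X, mode='economic')   # L^T X orthonormal <=> X W-orthonormal
    QY, _ = qr(L.T @ Y, mode='economic')
    s = svd(QX.T @ QY, compute_uv=False)   # singular values of X^T W Y
    s = np.clip(s, 0.0, 1.0)
    return np.arccos(np.sort(s))           # ascending sigma -> descending angles
```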
Several theoretical properties of the canonical angle and LREP have been established in [25] and [3], respectively. In particular, the following lemmas are necessary for our later developments.
Lemma 1
([9] Lemma 3.2). Let $\mathcal{X}$ and $\mathcal{Y}$ be two subspaces in $\mathbb{R}^N$ with equal dimension $\dim(\mathcal{X}) = \dim(\mathcal{Y}) = k$. Suppose $\theta_W^{(1)}(\mathcal{X}, \mathcal{Y}) < \pi/2$. Then, for any set $\{y_1, y_2, \ldots, y_{k_1}\}$ of basis vectors in $\mathcal{Y}$, where $1 \le k_1 \le k$, there is a set $\{x_1, x_2, \ldots, x_{k_1}\}$ of linearly independent vectors in $\mathcal{X}$ such that $\Pi_W x_j = y_j$ for $1 \le j \le k_1$, where $\Pi_W$ is the W-orthogonal projector onto $\mathcal{Y}$.
Lemma 2
([3] Theorem 2.3). The following statements hold for any symmetric matrices $K, M \in \mathbb{R}^{N \times N}$ with M being positive definite.
(a) 
There exists a nonsingular $\Psi \in \mathbb{R}^{N \times N}$ such that:
$$K = \Psi\Lambda^2\Psi^T \quad\text{and}\quad M = \Phi\Phi^T,$$
where $\Lambda = \operatorname{diag}(\lambda_1, \lambda_2, \ldots, \lambda_N)$, $\lambda_1^2 \le \lambda_2^2 \le \cdots \le \lambda_N^2$, and $\Phi = \Psi^{-T}$.
(b) 
If K is also definite, then all $\lambda_i > 0$, and H is diagonalizable:
$$H\begin{bmatrix} \Psi\Lambda & \Psi\Lambda \\ \Phi & -\Phi \end{bmatrix} = \begin{bmatrix} \Psi\Lambda & \Psi\Lambda \\ \Phi & -\Phi \end{bmatrix}\begin{bmatrix} \Lambda & 0 \\ 0 & -\Lambda \end{bmatrix}.$$
(c) 
The eigen-decompositions of KM and MK are:
$$(KM)\Psi = \Psi\Lambda^2 \quad\text{and}\quad (MK)\Phi = \Phi\Lambda^2,$$
respectively.
Lemma 3
([10] Lemma 2.2). Let $\mathcal{Z}$ be an invariant subspace of H and $Z = [V^T, U^T]^T$ be a basis matrix of $\mathcal{Z}$ with both V and U having N rows; then $\mathcal{R}(MV) = \mathcal{R}(U)$.
Lemma 4
([25] Lemma 2.1). Let A and B be two symmetric matrices and $\lambda(A)$ and $\lambda(B)$ be the sets of eigenvalues of A and B, respectively. For the Sylvester equation $AY - YB = S$, if $\lambda(A) \cap \lambda(B) = \emptyset$, then the equation has a unique solution Y, and moreover:
$$\|Y\|_F \le \frac{1}{\eta}\|S\|_F,$$
where $\eta = \min|\mu - \omega|$ over all $\mu \in \lambda(A)$ and $\omega \in \lambda(B)$.
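Lemma 4 is easy to check numerically. The sketch below (illustrative only) uses SciPy's solve_sylvester, which solves $AX + XB = Q$, so we pass $-B$ to obtain $AY - YB = S$:

```python
import numpy as np
from scipy.linalg import solve_sylvester, eigvalsh

rng = np.random.default_rng(1)
A = np.diag([5.0, 6.0, 7.0])       # symmetric, spectrum {5, 6, 7}
B = np.diag([1.0, 2.0])            # symmetric, spectrum {1, 2}; eta = 3
S = rng.standard_normal((3, 2))

Y = solve_sylvester(A, -B, S)      # solves A Y + Y (-B) = S, i.e. A Y - Y B = S
eta = np.min(np.abs(eigvalsh(A)[:, None] - eigvalsh(B)[None, :]))
assert np.linalg.norm(Y, 'fro') <= np.linalg.norm(S, 'fro') / eta + 1e-12
```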

3. The FEAST Algorithm for LREP

3.1. The Main Algorithm

For LREP, suppose $\lambda_i$ for $i = 1, \ldots, s$ are all the wanted eigenvalues, whose squares lie inside the circle C with center c and radius r in the complex plane, as illustrated in Figure 1.
Given a random matrix $Y \in \mathbb{R}^{N \times s}$, for any filter function $f(\cdot)$ that serves the purpose of filtering out the unwanted eigen-directions, we have, by Lemma 2,
$$f(KM)Y = \Psi f(\Lambda^2)\Phi^T Y.$$
As noted in [10], the ideal filter for computing $\lambda_i$ for $i = 1, \ldots, s$ should map all s wanted eigenvalues to one and all unwanted ones to zero. For such a purpose, we also consider the filter function applied in [17], i.e.,
$$f(\alpha) = \frac{1}{2\pi\iota}\oint_C (\mu - \alpha)^{-1}\,d\mu. \tag{5}$$
By Cauchy’s residue theorem [26] in complex analysis, we have:
$$f(KM)Y = \Psi f(\Lambda^2)\Phi^T Y = \Psi\left[\frac{1}{2\pi\iota}\oint_C (\mu I - \Lambda^2)^{-1}\,d\mu\right]\Phi^T Y = \Psi(:,1\!:\!s)\,\Phi(:,1\!:\!s)^T\,Y.$$
As stated in [24] (Lemma 1), the matrix $\Phi(:,1\!:\!s)^T Y$ is, with probability one, nonsingular if the entries of Y are random numbers from a continuous distribution and are independent and identically distributed. Therefore, the columns of $f(KM)Y$ naturally span the invariant subspace of KM corresponding to $\lambda_i^2$ for $i = 1, \ldots, s$. However, this requires evaluating a contour integral; in practice, the filter $f(KM)$ is computed approximately by numerical quadrature. Note that $\lambda_i^2$ for $i = 1, \ldots, s$ are all real. Let $\mu = c + re^{\iota t}$ for $0 \le t \le 2\pi$. Then, we have:
$$\begin{aligned}
\frac{1}{2\pi\iota}\oint_C (\mu I - \Lambda^2)^{-1}\,d\mu
&= \frac{r}{2\pi}\int_0^{2\pi} e^{\iota t}\big[(c + re^{\iota t})I - \Lambda^2\big]^{-1}\,dt \\
&= \frac{r}{2\pi}\left[\int_0^{\pi} e^{\iota t}(\mu I - \Lambda^2)^{-1}\,dt + \int_{\pi}^{2\pi} e^{\iota t}(\mu I - \Lambda^2)^{-1}\,dt\right] \\
&= \frac{r}{2\pi}\int_0^{\pi}\left[e^{\iota t}(\mu I - \Lambda^2)^{-1} + \overline{e^{\iota t}}(\bar{\mu}I - \Lambda^2)^{-1}\right]dt \\
&= \frac{r}{\pi}\int_0^{\pi}\operatorname{Re}\left[e^{\iota t}(\mu I - \Lambda^2)^{-1}\right]dt,
\end{aligned}$$
where $\operatorname{Re}(\cdot)$ stands for the real part. A q-point interpolatory quadrature rule, such as the trapezoidal rule or Gauss–Legendre quadrature, leads to:
$$f(KM)Y \approx \frac{r}{\pi}\sum_{i=1}^{q}\omega_i\operatorname{Re}\left[e^{\iota t_i}(\mu_i I - KM)^{-1}\right]Y, \tag{6}$$
where $t_i$ and $\omega_i$ are the quadrature nodes and weights, respectively, and $\mu_i = c + re^{\iota t_i}$ for $1 \le i \le q$. Let $V = \tilde{f}(KM)Y = \frac{r}{\pi}\sum_{i=1}^{q}\omega_i\operatorname{Re}\big[e^{\iota t_i}(\mu_i I - KM)^{-1}\big]Y$. Then, by (6), V is computed by solving shifted linear systems of the form:
$$(\mu_i I - KM)X_i = Y, \quad i = 1, 2, \ldots, q. \tag{7}$$
Furthermore, for any interpolatory quadrature rule, we have by [18] that:
$$\tilde{f}(\alpha) > \frac{1}{2} \quad\text{for } |\alpha - c| < r. \tag{8}$$
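The behavior of $\tilde{f}$ in (8) can be checked pointwise with a scalar version of the quadrature (6); the sketch below (illustrative only; the nodes and weights anticipate the trapezoidal rule of Remark 1(b)) shows $\tilde{f} \approx 1$ at the center, $\tilde{f} > 1/2$ inside the circle, and decay outside:

```python
import numpy as np

def f_tilde(alpha, c=1.0, r=0.2, q=8):
    """Scalar trapezoidal-rule approximation of the filter f(alpha)."""
    t = np.pi * np.arange(q) / (q - 1)        # quadrature nodes on [0, pi]
    w = np.full(q, np.pi / (q - 1))
    w[0] = w[-1] = np.pi / (2 * (q - 1))
    mu = c + r * np.exp(1j * t)
    return (r / np.pi) * np.sum(w * np.real(np.exp(1j * t) / (mu - alpha)))

print(f_tilde(1.00))   # at the center: exactly 1 for this rule
print(f_tilde(1.15))   # inside |alpha - c| < r: greater than 1/2
print(f_tilde(1.50))   # outside: decays toward 0
```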
It follows by Lemma 3 that if the columns of V span the y-component of an approximate invariant subspace of H, then the columns of $U = MV$ should span the x-component of the same approximate invariant subspace. Therefore, $\{\mathcal{R}(U), \mathcal{R}(V)\}$ is a pair of approximate deflating subspaces of H. By [3], the best approximations of $\lambda_i$ for $i = 1, \ldots, s$ obtainable from the pair of approximate deflating subspaces $\{\mathcal{R}(U), \mathcal{R}(V)\}$ are the eigenvalues of:
$$H_{SR} = \begin{bmatrix} 0 & W_1^{-T}U^T K U W_1^{-1} \\ W_2^{-T}V^T M V W_2^{-1} & 0 \end{bmatrix},$$
where $W_1$ and $W_2$ are two nonsingular factors of $V^T U = W_1^T W_2$. In particular, we choose $W_1 = W_2 = R$, where $R^T R$ is the Cholesky decomposition of $V^T U$. This leads to:
$$H_{SR} = \begin{bmatrix} 0 & G \\ I & 0 \end{bmatrix}, \quad\text{where } G = R^{-T}U^T K U R^{-1}.$$
Let the eigenvalues of G be $\rho_j^2$ in ascending order and the associated eigenvectors be $q_j$, i.e., $Gq_j = \rho_j^2 q_j$ for $1 \le j \le s$. Then,
$$H_{SR}\hat{z}_j = \rho_j\hat{z}_j, \qquad \hat{z}_j = \begin{bmatrix} \rho_j q_j \\ q_j \end{bmatrix}.$$
The approximate eigenpairs of H are then taken to be $(\rho_j, \tilde{z}_j)$, where:
$$\tilde{z}_j = \begin{bmatrix} \rho_j\tilde{\psi}_j \\ \tilde{\phi}_j \end{bmatrix}, \qquad \tilde{\phi}_j = UR^{-1}q_j \quad\text{and}\quad \tilde{\psi}_j = VR^{-1}q_j. \tag{9}$$
We summarize the pseudo-code of the FEAST algorithm for LREP in Algorithm 1. A few remarks regarding Algorithm 1 are in order:
Remark 1.
 
(a) In practice, we do not need to save $V_{(i)}$, $U_{(i)}$, and $G_{(i)}$ for every i in Algorithm 1. In fact, the subscript "(i)" is kept only for convenience in the convergence analysis in the next subsection.
(b) Since the trapezoidal rule yields much faster decay outside C than the Gauss–Legendre quadrature [23], our numerical examples use the trapezoidal rule to evaluate $\tilde{f}(KM)Y$, i.e., in (6),
$$\mu_i = c + re^{\iota\pi t_i}, \quad t_i = \frac{i-1}{q-1} \quad\text{for } 1 \le i \le q, \qquad \omega_1 = \omega_q = \frac{\pi}{2(q-1)}, \quad \omega_i = \frac{\pi}{q-1} \quad\text{for } 2 \le i \le q-1.$$
(c) In Algorithm 1, it is usually required to know the eigenvalue count s inside the contour C in advance. In practice, s is unknown a priori, and an estimator has to be used instead. Some methods have been proposed in [27,28] to estimate eigenvalue counts, based on stochastic evaluations of:
$$\frac{1}{2\pi\iota}\oint_C \operatorname{trace}\big[(\mu I - KM)^{-1}\big]\,d\mu$$
(a sketch of such an estimator is given after this remark).
(d) In our numerical implementation of Algorithm 1, we monitor the convergence of a computed eigenpair $(\rho_j, \tilde{z}_j)$ by its normalized relative residual norm:
$$r(\rho_j) = \frac{\|H\tilde{z}_j - \rho_j\tilde{z}_j\|_1}{\big(\|H\|_1 + |\rho_j|\big)\|\tilde{z}_j\|_1}. \tag{10}$$
The approximate eigenpair $(\rho_j, \tilde{z}_j)$ is considered converged when:
$$\rho_j^2 \in (\underline{\omega}, \overline{\omega}) \quad\text{and}\quad r(\rho_j) < \varepsilon,$$
where $\underline{\omega} = c - r$, $\overline{\omega} = c + r$, and $\varepsilon$ is a preset tolerance.
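As mentioned in Remark 1(c), a stochastic estimator of the eigenvalue count can be built from the same quadrature machinery; the following is an illustrative sketch in the spirit of [27,28] (function name, probe count, and defaults are ours):

```python
import numpy as np

def estimate_count(K, M, c, r, q=8, nv=20, seed=0):
    """Hutchinson-type estimate of the number of eigenvalues of K@M inside
    |mu - c| < r, via quadrature of trace[(mu I - KM)^{-1}]."""
    N = K.shape[0]
    Y = np.random.default_rng(seed).choice([-1.0, 1.0], size=(N, nv))  # probes
    t = np.pi * np.arange(q) / (q - 1)           # trapezoidal rule on [0, pi]
    w = np.full(q, np.pi / (q - 1))
    w[0] = w[-1] = np.pi / (2 * (q - 1))
    KM = K @ M
    total = 0.0
    for ti, wi in zip(t, w):
        mu = c + r * np.exp(1j * ti)
        X = np.linalg.solve(mu * np.eye(N) - KM, Y)   # (mu I - KM)^{-1} Y
        total += wi * np.real(np.exp(1j * ti) * np.sum(Y * X))
    return (r / np.pi) * total / nv              # approx. trace f~(KM) = s
```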
Algorithm 1 The FEAST algorithm for LREP.
Input: an initial block $Y_{(0)} \in \mathbb{R}^{N \times s}$.
Output: converged approximate eigenpairs $(\rho_j, \tilde{z}_j)$.
 1: for $i = 1, 2, \ldots$, until convergence do
 2:  Compute $V_{(i)} = \tilde{f}(KM)\,Y_{(i-1)}$ by (6), and $U_{(i)} = MV_{(i)}$.
 3:  Compute the Cholesky decomposition $V_{(i)}^T U_{(i)} = R_{(i)}^T R_{(i)}$; then $\tilde{U}_{(i)} = U_{(i)}R_{(i)}^{-1}$, $\tilde{V}_{(i)} = V_{(i)}R_{(i)}^{-1}$, and $G_{(i)} = \tilde{U}_{(i)}^T K\tilde{U}_{(i)}$.
 4:  Compute the spectral decomposition $G_{(i)} = Q_{(i)}\Omega_{(i)}Q_{(i)}^T$ and the approximate eigenpairs $(\rho_j^{(i)}, \tilde{z}_j^{(i)})$, where $\tilde{z}_j^{(i)} = \big[\rho_j^{(i)}\tilde{\psi}_j^{(i)T}, \tilde{\phi}_j^{(i)T}\big]^T$ by (9), for $j = 1, \ldots, s$.
 5:  If convergence is not reached, go to Step 2 with $Y_{(i)} = \tilde{V}_{(i)}Q_{(i)}$.
 6: end for
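To make the steps concrete, here is a dense, single-interval translation of Algorithm 1 into NumPy (an illustrative sketch under simplifying assumptions: dense matrices, direct solves for (7), a fixed number of iterations, and no convergence test; it is not the authors' implementation):

```python
import numpy as np
from scipy.linalg import cholesky, eigh, solve, solve_triangular

def feast_lrep(K, M, c, r, s, q=8, iters=4, Y0=None, seed=0):
    """Sketch of Algorithm 1 for H = [[0, K], [M, 0]]."""
    N = K.shape[0]
    KM = K @ M
    t = np.pi * np.arange(q) / (q - 1)           # trapezoidal nodes on [0, pi]
    w = np.full(q, np.pi / (q - 1))
    w[0] = w[-1] = np.pi / (2 * (q - 1))
    Y = Y0 if Y0 is not None else np.random.default_rng(seed).standard_normal((N, s))
    for _ in range(iters):
        # Step 2: V = f~(KM) Y via quadrature (6); each term is a shifted solve (7).
        V = np.zeros((N, s))
        for ti, wi in zip(t, w):
            mu = c + r * np.exp(1j * ti)
            X = solve(mu * np.eye(N) - KM, Y)
            V += (r / np.pi) * wi * np.real(np.exp(1j * ti) * X)
        U = M @ V
        # Step 3: Cholesky V^T U = R^T R; G = R^{-T} U^T K U R^{-1}.
        R = cholesky(V.T @ U)                        # upper triangular
        Ut = solve_triangular(R, U.T, trans='T').T   # U R^{-1}
        Vt = solve_triangular(R, V.T, trans='T').T   # V R^{-1}
        G = Ut.T @ K @ Ut
        # Step 4: Ritz values rho_j = sqrt(eig(G)) with eigenvectors q_j.
        Om, Q = eigh(G)
        # Step 5: restart with Y = V~ Q.
        Y = Vt @ Q
    rho = np.sqrt(np.maximum(Om, 0.0))
    return rho, Vt @ Q, Ut @ Q                       # rho_j, psi~_j, phi~_j columns
```

The returned rho approximate the wanted $\lambda_i$, and the columns of the two returned blocks approximate $\tilde{\psi}_j$ and $\tilde{\phi}_j$ in (9).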

3.2. Convergence Analysis

Without loss of generality, we suppose in this subsection:
$$|\tilde{f}(\lambda_1^2)| \ge \cdots \ge |\tilde{f}(\lambda_s^2)| > |\tilde{f}(\lambda_{s+1}^2)| \ge \cdots \ge |\tilde{f}(\lambda_N^2)|,$$
and use the following simplifying notation:
$$\Psi_1 = \Psi(:,1\!:\!s), \quad \Psi_2 = \Psi(:,(s{+}1)\!:\!N), \quad \Phi_1 = \Phi(:,1\!:\!s), \quad \Phi_2 = \Phi(:,(s{+}1)\!:\!N),$$
$$\Lambda_1 = \Lambda(1\!:\!s, 1\!:\!s), \quad \Lambda_2 = \Lambda((s{+}1)\!:\!N, (s{+}1)\!:\!N), \quad \tilde{\Psi}^{(k)} = [\tilde{\psi}_1^{(k)}, \ldots, \tilde{\psi}_s^{(k)}], \quad \tilde{\Phi}^{(k)} = [\tilde{\phi}_1^{(k)}, \ldots, \tilde{\phi}_s^{(k)}].$$
In Algorithm 1, for a given scalar k, we would naturally use $\tilde{\Psi}^{(k)}$ as an approximation to $\Psi_1$ and $\tilde{\Phi}^{(k)}$ as an approximation to $\Phi_1$. In this subsection, we investigate how good such approximations may be. Notice that $\tilde{f}(\Lambda_1^2)$ is invertible, since $\tilde{f}(\lambda_i^2) > 1/2$ for $i = 1, \ldots, s$ by (8).
Lemma 5.
Suppose $\Psi_1^T M Y_{(0)}$ is nonsingular. Then, $Y_{(i)}$ has full column rank for all i in Algorithm 1.
Proof. 
We prove this statement by induction. If $\Psi_1^T M Y_{(j)}$ is invertible for some j, then:
$$\begin{aligned}
V_{(j+1)} = \tilde{f}(KM)Y_{(j)} &= \tilde{f}(KM)\Psi\Phi^T Y_{(j)} = \tilde{f}(KM)\big(\Psi_1\Phi_1^T + \Psi_2\Phi_2^T\big)Y_{(j)} \\
&= \big[\Psi_1\tilde{f}(\Lambda_1^2)\Phi_1^T + \Psi_2\tilde{f}(\Lambda_2^2)\Phi_2^T\big]Y_{(j)} = \big[\Psi_1 + \Psi_2\tilde{f}(\Lambda_2^2)\Phi_2^T Y_{(j)}E_{(j)}^{-1}\big]E_{(j)},
\end{aligned}$$
where $E_{(j)} = \tilde{f}(\Lambda_1^2)\Phi_1^T Y_{(j)} = \tilde{f}(\Lambda_1^2)\Psi_1^T M Y_{(j)}$ is nonsingular. Thus, $V_{(j+1)}$ has full column rank. This implies that $R_{(j+1)}$ and $Q_{(j+1)}$ are both invertible. By Step 5 of Algorithm 1, we have $Y_{(j+1)} = \tilde{V}_{(j+1)}Q_{(j+1)} = V_{(j+1)}R_{(j+1)}^{-1}Q_{(j+1)}$, which also has full column rank. Furthermore,
$$\Psi_1^T M Y_{(j+1)} = \Psi_1^T M V_{(j+1)}R_{(j+1)}^{-1}Q_{(j+1)} = \Psi_1^T M\big[\Psi_1 + \Psi_2\tilde{f}(\Lambda_2^2)\Phi_2^T Y_{(j)}E_{(j)}^{-1}\big]E_{(j)}R_{(j+1)}^{-1}Q_{(j+1)} = E_{(j)}R_{(j+1)}^{-1}Q_{(j+1)}$$
is nonsingular. We conclude by induction that $Y_{(i)}$ has full column rank for $i = 1, 2, \ldots$.  □
Theorem 1.
Suppose the condition of Lemma 5 holds. For $1 \le k_1 \le k_2 \le s$, there exists $\hat{Y} \in \mathbb{R}^{N\times(k_2-k_1+1)}$ with $\mathcal{R}(\hat{Y}) \subseteq \mathcal{R}(Y_{(0)})$ and $\hat{X} = M\hat{Y}$ such that, for any k,
$$\|\tan\Theta_M(\Psi(:,k_1\!:\!k_2), V_{(k)})\|_F \le \left|\frac{\tilde{f}(\lambda_{s+1}^2)}{\tilde{f}(\lambda_{k_2}^2)}\right|^k\,\|\tan\Theta_M(\Psi(:,k_1\!:\!k_2), \hat{Y})\|_F, \tag{11}$$
$$\|\tan\Theta_{M^{-1}}(\Phi(:,k_1\!:\!k_2), U_{(k)})\|_F \le \left|\frac{\tilde{f}(\lambda_{s+1}^2)}{\tilde{f}(\lambda_{k_2}^2)}\right|^k\,\|\tan\Theta_{M^{-1}}(\Phi(:,k_1\!:\!k_2), \hat{X})\|_F. \tag{12}$$
Proof. 
We first prove:
$$\mathcal{R}(V_{(i)}) = \mathcal{R}\big((\tilde{f}(KM))^i\,Y_{(0)}\big), \quad\text{for } 1 \le i \le k, \tag{13}$$
by induction. It is obviously true for $i = 1$. Suppose that (13) holds for $i = j - 1$. Then, there exists a nonsingular matrix $F \in \mathbb{R}^{s\times s}$ such that:
$$V_{(j-1)} = \big(\tilde{f}(KM)\big)^{j-1}Y_{(0)}\,F.$$
Now, for $i = j \le k$, we have:
$$Y_{(j-1)} = V_{(j-1)}R_{(j-1)}^{-1}Q_{(j-1)} = \big(\tilde{f}(KM)\big)^{j-1}Y_{(0)}\,FR_{(j-1)}^{-1}Q_{(j-1)},$$
where $R_{(j-1)}$ and $Q_{(j-1)}$ are both invertible by Lemma 5. Then,
$$V_{(j)} = \tilde{f}(KM)\,Y_{(j-1)} = \tilde{f}(KM)\big(\tilde{f}(KM)\big)^{j-1}Y_{(0)}\,FR_{(j-1)}^{-1}Q_{(j-1)} = \big(\tilde{f}(KM)\big)^j Y_{(0)}\,FR_{(j-1)}^{-1}Q_{(j-1)}.$$
Thus, we have $\mathcal{R}(V_{(j)}) = \mathcal{R}\big((\tilde{f}(KM))^j Y_{(0)}\big)$. This completes the proof of (13).
By Lemma 1, there exists $\hat{Y} \in \mathbb{R}^{N\times(k_2-k_1+1)}$ with $\mathcal{R}(\hat{Y}) \subseteq \mathcal{R}(Y_{(0)})$ such that:
$$\Pi_M\hat{Y} = \Psi_1\Psi_1^T M\hat{Y} = \Psi(:,k_1\!:\!k_2),$$
where $\Pi_M$ is the M-orthogonal projector onto $\mathcal{R}(\Psi_1)$. Let $\hat{\Lambda}_1^2 = \operatorname{diag}(\lambda_{k_1}^2, \ldots, \lambda_{k_2}^2)$ and $\hat{\Psi}_1 = \Psi(:,k_1\!:\!k_2)$. Consider:
$$\hat{Z} = \big(\tilde{f}(KM)\big)^k\hat{Y} = \Psi_1\big(\tilde{f}(\Lambda_1^2)\big)^k\Phi_1^T\hat{Y} + \Psi_2\big(\tilde{f}(\Lambda_2^2)\big)^k\Phi_2^T\hat{Y} = \hat{\Psi}_1\big(\tilde{f}(\hat{\Lambda}_1^2)\big)^k + \Psi_2\big(\tilde{f}(\Lambda_2^2)\big)^k\Phi_2^T\hat{Y}.$$
It follows by (13) that $\mathcal{R}(\hat{Z}) \subseteq \mathcal{R}(V_{(k)})$. Therefore, we have:
$$\begin{aligned}
\|\tan\Theta_M(\Psi(:,k_1\!:\!k_2), V_{(k)})\|_F
&\le \|\tan\Theta_M(\Psi(:,k_1\!:\!k_2), \hat{Z})\|_F \\
&= \left\|\big[\Psi_2^T M\hat{Z}(\hat{Z}^T M\hat{Z})^{-1/2}\big]\big[\hat{\Psi}_1^T M\hat{Z}(\hat{Z}^T M\hat{Z})^{-1/2}\big]^{-1}\right\|_F \\
&= \left\|\Psi_2^T M\hat{Z}\big(\hat{\Psi}_1^T M\hat{Z}\big)^{-1}\right\|_F \\
&= \left\|\big(\tilde{f}(\Lambda_2^2)\big)^k\Phi_2^T\hat{Y}\big(\tilde{f}(\hat{\Lambda}_1^2)\big)^{-k}\big(\hat{\Psi}_1^T M\hat{Y}\big)^{-1}\right\|_F \\
&= \left\|\big(\tilde{f}(\Lambda_2^2)\big)^k\Psi_2^T M\hat{Y}\big(\tilde{f}(\hat{\Lambda}_1^2)\big)^{-k}\big(\hat{\Psi}_1^T M\hat{Y}\big)^{-1}\right\|_F \\
&\le \max_{s+1\le j\le N}\big|\tilde{f}(\lambda_j^2)\big|^k\cdot\max_{k_1\le j\le k_2}\frac{1}{\big|\tilde{f}(\lambda_j^2)\big|^k}\cdot\big\|\tan\Theta_M(\hat{\Psi}_1, \hat{Y})\big\|_F \\
&\le \left|\frac{\tilde{f}(\lambda_{s+1}^2)}{\tilde{f}(\lambda_{k_2}^2)}\right|^k\big\|\tan\Theta_M(\hat{\Psi}_1, \hat{Y})\big\|_F,
\end{aligned}$$
which gives (11). Similarly, we can prove (12). □
As is known from [29], an important quantity for the convergence properties of projection methods is the distance of the exact eigenspace from the search subspace. Theorem 1 establishes bounds on the M-canonical angles from $\Psi(:,k_1\!:\!k_2)$ to the search subspace $\mathcal{R}(V_{(k)})$ and the $M^{-1}$-canonical angles from $\Phi(:,k_1\!:\!k_2)$ to the search subspace $\mathcal{R}(U_{(k)})$, respectively, to illustrate the effectiveness of the FEAST algorithm for LREP. As stated in [18], if the spectrum of KM is distributed somewhat uniformly, then we would expect $|\tilde{f}(\lambda_{s+1}^2)/\tilde{f}(\lambda_{k_2}^2)|$ to be as small as $10^{-3}$. That means $\|\tan\Theta_M(\Psi(:,k_1\!:\!k_2), V_{(k)})\|_F \to 0$ and $\|\tan\Theta_{M^{-1}}(\Phi(:,k_1\!:\!k_2), U_{(k)})\|_F \to 0$ at a rate of $10^{-3k}$. Based on Theorem 1, the following theorem is obtained to bound the M-canonical angles between $\Psi(:,k_1\!:\!k_2)$ and $\tilde{\Psi}^{(k)}(:,k_1\!:\!k_2)$ and the $M^{-1}$-canonical angles between $\Phi(:,k_1\!:\!k_2)$ and $\tilde{\Phi}^{(k)}(:,k_1\!:\!k_2)$, respectively.
Theorem 2.
Suppose the condition of Lemma 5 holds. Using the notation of Theorem 1, we have:
$$\|\sin\Theta_M(\Psi(:,k_1\!:\!k_2), \tilde{\Psi}^{(k)}(:,k_1\!:\!k_2))\|_F \le \gamma\,\|\tan\Theta_M(\Psi(:,k_1\!:\!k_2), \hat{Y})\|_F, \tag{15}$$
$$\|\sin\Theta_{M^{-1}}(\Phi(:,k_1\!:\!k_2), \tilde{\Phi}^{(k)}(:,k_1\!:\!k_2))\|_F \le \gamma\,\|\tan\Theta_{M^{-1}}(\Phi(:,k_1\!:\!k_2), \hat{X})\|_F, \tag{16}$$
where:
$$\gamma = \sqrt{1 + \frac{1}{\eta^2}\big\|\Pi_M^V\,KM\,(I - \Pi_M^V)\big\|_2^2}\;\left|\frac{\tilde{f}(\lambda_{s+1}^2)}{\tilde{f}(\lambda_{k_2}^2)}\right|^k, \qquad \eta = \min_{\substack{k_1\le i\le k_2\\ j<k_1\ \text{or}\ j>k_2}}\big|\lambda_i^2 - \rho_j^2\big|,$$
and $\Pi_M^V$ is the M-orthogonal projector onto $\mathcal{R}(V_{(k)})$.
Proof. 
Let $\tilde{Q}_{(k)} = [Q_{(k)}(:,1\!:\!k_1{-}1),\ Q_{(k)}(:,k_2{+}1\!:\!s)]$, $\tilde{\Omega}_{(k)} = \operatorname{diag}\big(\Omega_{(k)}(1\!:\!k_1{-}1, 1\!:\!k_1{-}1),\ \Omega_{(k)}(k_2{+}1\!:\!s, k_2{+}1\!:\!s)\big)$, and let $\tilde{V}_{(k)}^{\perp} \in \mathbb{R}^{N\times(N-s)}$ be such that $[\tilde{V}_{(k)}, \tilde{V}_{(k)}^{\perp}]$ is an M-orthogonal matrix, i.e.,
$$[\tilde{V}_{(k)}, \tilde{V}_{(k)}^{\perp}]^T M\,[\tilde{V}_{(k)}, \tilde{V}_{(k)}^{\perp}] = I_N.$$
Then, we can write $\Psi(:,k_1\!:\!k_2) = \tilde{V}_{(k)}C + \tilde{V}_{(k)}^{\perp}C_{\perp}$, where $C = \tilde{V}_{(k)}^T M\Psi(:,k_1\!:\!k_2)$ and $C_{\perp} = (\tilde{V}_{(k)}^{\perp})^T M\Psi(:,k_1\!:\!k_2)$. Furthermore, we have:
$$\|\sin\Theta_M(\Psi(:,k_1\!:\!k_2), V_{(k)})\|_F = \|\sin\Theta_M(\Psi(:,k_1\!:\!k_2), \tilde{V}_{(k)})\|_F = \|C_{\perp}\|_F, \tag{17}$$
$$\|\sin\Theta_M(\Psi(:,k_1\!:\!k_2), \tilde{\Psi}^{(k)}(:,k_1\!:\!k_2))\|_F = \big\|[\tilde{V}_{(k)}\tilde{Q}_{(k)},\ \tilde{V}_{(k)}^{\perp}]^T M\,\Psi(:,k_1\!:\!k_2)\big\|_F = \sqrt{\|\tilde{Q}_{(k)}^T C\|_F^2 + \|C_{\perp}\|_F^2}. \tag{18}$$
Now, we turn to bound the term $\|\tilde{Q}_{(k)}^T C\|_F$. Notice that $[\tilde{V}_{(k)}, \tilde{V}_{(k)}^{\perp}][\tilde{V}_{(k)}, \tilde{V}_{(k)}^{\perp}]^T M = I_N$. It follows by Lemma 2 that:
$$KM\,[\tilde{V}_{(k)}, \tilde{V}_{(k)}^{\perp}][\tilde{V}_{(k)}, \tilde{V}_{(k)}^{\perp}]^T M\,\Psi(:,k_1\!:\!k_2) = \Psi(:,k_1\!:\!k_2)\,\hat{\Lambda}_1^2, \tag{19}$$
with $\hat{\Lambda}_1^2 = \operatorname{diag}(\lambda_{k_1}^2, \ldots, \lambda_{k_2}^2)$ as in the proof of Theorem 1.
Multiplying (19) by $\tilde{V}_{(k)}^T M$ from the left gives:
$$\tilde{V}_{(k)}^T M\,KM\,[\tilde{V}_{(k)}, \tilde{V}_{(k)}^{\perp}][\tilde{V}_{(k)}, \tilde{V}_{(k)}^{\perp}]^T M\,\Psi(:,k_1\!:\!k_2) = \tilde{V}_{(k)}^T M\,\Psi(:,k_1\!:\!k_2)\,\hat{\Lambda}_1^2,$$
i.e., $G_{(k)}C + \tilde{V}_{(k)}^T MKM\tilde{V}_{(k)}^{\perp}C_{\perp} = C\hat{\Lambda}_1^2$, or equivalently,
$$G_{(k)}C - C\hat{\Lambda}_1^2 = -\tilde{V}_{(k)}^T MKM\tilde{V}_{(k)}^{\perp}C_{\perp}.$$
Furthermore, multiplying by $\tilde{Q}_{(k)}^T$ from the left and using $\tilde{Q}_{(k)}^T G_{(k)} = \tilde{\Omega}_{(k)}\tilde{Q}_{(k)}^T$, we have:
$$\tilde{\Omega}_{(k)}\tilde{Q}_{(k)}^T C - \tilde{Q}_{(k)}^T C\,\hat{\Lambda}_1^2 = -\tilde{Q}_{(k)}^T\tilde{V}_{(k)}^T MKM\tilde{V}_{(k)}^{\perp}C_{\perp}.$$
By Lemma 4, we conclude that:
$$\begin{aligned}
\|\tilde{Q}_{(k)}^T C\|_F &\le \frac{1}{\eta}\left\|\tilde{Q}_{(k)}^T\tilde{V}_{(k)}^T MKM\tilde{V}_{(k)}^{\perp}C_{\perp}\right\|_F \le \frac{1}{\eta}\left\|\tilde{V}_{(k)}^T MKM\tilde{V}_{(k)}^{\perp}C_{\perp}\right\|_F \le \frac{1}{\eta}\left\|\tilde{V}_{(k)}^T MKM\tilde{V}_{(k)}^{\perp}\right\|_2\|C_{\perp}\|_F \\
&= \frac{1}{\eta}\left\|M^{1/2}\tilde{V}_{(k)}\tilde{V}_{(k)}^T MKM\tilde{V}_{(k)}^{\perp}(\tilde{V}_{(k)}^{\perp})^T M^{1/2}\right\|_2\|C_{\perp}\|_F = \frac{1}{\eta}\left\|\tilde{V}_{(k)}\tilde{V}_{(k)}^T MKM\tilde{V}_{(k)}^{\perp}(\tilde{V}_{(k)}^{\perp})^T M\right\|_2\|C_{\perp}\|_F \\
&= \frac{1}{\eta}\left\|\Pi_M^V\,KM\,(I - \Pi_M^V)\right\|_2\|C_{\perp}\|_F.
\end{aligned} \tag{20}$$
The inequality (15) is now a consequence of (17), (18), (20), and Theorem 1. Similarly, we can prove (16). □

4. Numerical Examples

In this section, we present some numerical examples to illustrate the effectiveness of Algorithm 1 and the upper bounds in Theorem 1.
Example 1.
We first examine the upper bounds of Theorem 1. For simplicity, we consider diagonal matrices for K and M. Take $M = K = \operatorname{diag}(\lambda_1, \ldots, \lambda_N)$ with $N = 100$, where:
$$\lambda_1 = 1 + \eta, \quad \lambda_2 = 1, \quad \lambda_3 = 1 - \eta, \quad \lambda_j = \frac{N + 4 - j}{2N} \quad\text{for } j = 4, \ldots, N.$$
In such a case, there are two eigenvalue clusters, $\{\pm\lambda_1, \pm\lambda_2, \pm\lambda_3\}$ and $\{\pm\lambda_4, \ldots, \pm\lambda_N\}$, and $\Psi = K^{-1/2}$.
We seek the approximations associated with the first cluster { ± λ 1 , ± λ 2 , ± λ 3 } and vary the parameter η > 0 to control the tightness among eigenvalues. To make the numerical example repeatable, the initial block Y ( 0 ) is chosen to be:
$$Y_{(0)} = \begin{bmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1 \\
\frac{1}{N} & \sin 1 & \cos 1 \\
\vdots & \vdots & \vdots \\
\frac{N-3}{N} & \sin(N-3) & \cos(N-3)
\end{bmatrix}.$$
In such a way, $Y_{(0)}$ satisfies the condition that $Y_{(0)}^T M\Psi(:,1\!:\!3)$ is nonsingular. We run Algorithm 1 and check the bound for $\|\tan\Theta_M(\Psi(:,1\!:\!3), V_{(k)})\|_F$ given by (11) with $k = 1$ and $k = 2$, respectively. In Algorithm 1, the trapezoidal rule is applied to calculate $\tilde{f}(KM)Y_{(0)}$ with the parameters $c = 1$ and $r = 0.2$. We compute the following factors:
$$\varepsilon_1 = \|\tan\Theta_M(\Psi(:,1\!:\!3), V_{(k)})\|_F, \qquad \varepsilon_2 = \left|\frac{\tilde{f}(\lambda_4^2)}{\tilde{f}(\lambda_3^2)}\right|^k\,\|\tan\Theta_M(\Psi(:,1\!:\!3), \hat{Y})\|_F,$$
reported in Table 1. From Table 1, we can see that, as η goes to zero, the bounds ε₂ for k = 1, 2 are sharp and insensitive to η.
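For reference, Example 1 can be scripted along the following lines (an illustrative sketch; feast_lrep is the sketch given after Algorithm 1, with its optional initial block Y0):

```python
import numpy as np

N, eta = 100, 1e-3
lam = np.empty(N)
lam[:3] = [1 + eta, 1.0, 1 - eta]
j = np.arange(4, N + 1)
lam[3:] = (N + 4 - j) / (2 * N)           # lambda_j for j = 4, ..., N
K = M = np.diag(lam)                       # then Psi = K^{-1/2}

i = np.arange(1, N - 2)                    # i = 1, ..., N - 3
Y0 = np.vstack([np.eye(3),
                np.column_stack([i / N, np.sin(i), np.cos(i)])])

rho, Psi_t, Phi_t = feast_lrep(K, M, c=1.0, r=0.2, s=3, iters=2, Y0=Y0)
# rho (ascending) approximates (1 - eta, 1, 1 + eta).
```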
Example 2.
To test the effectiveness of Algorithm 1, we chose three test problems, i.e., test1 to test3 in Table 2, which come from the linear response analysis of Na2, Na4, and silane (SiH4) compounds, respectively [9]. The matrices K and M of test1, test2, and test3 are both symmetric positive definite, with order N = 1862, 2834, and 5660, respectively. We compute the eigenvalues whose squares lie in the interval $(\underline{\omega}_t, \overline{\omega}_t)$ for $t = 1, 2, 3$ and the associated eigenvectors by using the MATLAB parallel pool. The intervals $(\underline{\omega}_t, \overline{\omega}_t)$ and the corresponding eigenvalue counts $s_t$ for $t = 1, 2, 3$ are detailed in Table 2. All our experiments were performed on a Windows 10 (64 bit) PC with an Intel(R) Core(TM) i7-6700 CPU at 3.40 GHz and 16 GB of RAM, using MATLAB Version 8.5 (R2015a) with machine epsilon $2.22 \times 10^{-16}$ in double precision floating point arithmetic. In demonstrating the quality of computed approximations, we calculated the normalized residual norms $r(\rho_j)$ defined in (10) and the relative eigenvalue errors:
$$e(\rho_j) = \frac{|\rho_j - \lambda_j|}{\lambda_j}$$
for the jth approximate eigenpair $(\rho_j, \tilde{z}_j)$, where the $\lambda_j$ were computed by MATLAB's function eig applied to H and are considered to be the "exact" eigenvalues for testing purposes.
In this example, we first investigated how the maximal relative eigenvalue errors and normalized residual norms, i.e., max $e(\rho_j)$ and max $r(\rho_j)$ in Table 3, respectively, change as the number of quadrature points q in (6) increases. We used direct methods, such as Gaussian elimination, to solve the shifted linear systems (7) in this example. The numerical results of Algorithm 1 after running four FEAST iterations with q varying from four to nine are presented in Table 3. Table 3 shows that the trapezoidal rule with 6, 7, or 8 quadrature nodes already achieves satisfactory results.
Notice that the wanted eigenvalues are inside the intervals presented in Table 2. The shift-and-invert strategy is usually not available for LREP, since combining the traditional algorithms for computing the partial spectrum of LREP, such as the methods proposed in [6,8], with shift-and-invert would destroy the structure of (2).
In addition, as stated in [17], the FEAST algorithm can be cast as a direct technique, in contrast to iterative Krylov subspace-type methods, because it is based on the exact mathematical derivation (5). Therefore, by dividing the whole spectrum of LREP into a number of disjoint intervals, the FEAST algorithm is able to compute all eigenvalues simultaneously on parallel architectures. From this aspect, as in [24], we compared Algorithm 1 with the MATLAB built-in function eig, which was used to compute all eigenvalues of H, from which the target eigenvalues were then selected. Let the number of quadrature nodes q = 7 and the convergence tolerance $\varepsilon = 10^{-8}$. Table 4 reports the total required CPU time in seconds of Algorithm 1 and eig for test1, test2, and test3, respectively. It is clear from Table 4 that Algorithm 1 was much faster than the MATLAB built-in function eig in this example. A similar comparison between FEAST-type algorithms and the MATLAB built-in function eig in terms of timing can be found in [30,31].
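The interval-wise parallelism used here is embarrassingly parallel. The paper's experiments used the MATLAB parallel pool; purely as an illustration, the same division of labor could be driven from Python as sketched below (feast_lrep is our earlier sketch; the intervals are those of test1 in Table 2, and K and M are assumed to be module-level arrays visible to the worker processes):

```python
from concurrent.futures import ProcessPoolExecutor

# Disjoint target intervals for lambda^2 and their eigenvalue counts s_t.
intervals = [(0.40, 0.50), (0.60, 0.70), (1.50, 1.60)]
counts = [3, 3, 3]

def run_interval(args):
    (lo, hi), s = args
    c, r = (lo + hi) / 2, (hi - lo) / 2   # circle enclosing this interval only
    return feast_lrep(K, M, c, r, s)      # independent of the other intervals

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(run_interval, zip(intervals, counts)))
```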
Example 3.
As stated in Section 3, the implementation of Algorithm 1 involves solving several shifted linear systems of the form:
$$(\mu_i I - KM)X_i = Y, \quad\text{for } i = 1, 2, \ldots, q.$$
Direct solvers, which require $O(N^3)$ operations, are prohibitively expensive for large-scale sparse linear systems. Iterative methods, such as Krylov subspace-type methods [32], are usually preferred for large-scale sparse linear systems. However, Krylov subspace solvers may need a large number of iterations to converge. A way to speed up the FEAST method for LREP is to terminate the iterative solvers for the linear systems before full convergence is reached. Therefore, in this example, we investigated the effect of the accuracy of the solution of the linear systems on the relative eigenvalue errors and normalized residual norms. The linear systems were solved column by column by running GMRES [33] until:
$$\frac{\|Y(:,j) - (\mu_i I - KM)X_i(:,j)\|_2}{\|Y(:,j)\|_2} \le \varepsilon_{lin}.$$
Figure 2 plots the maximal relative eigenvalue errors and normalized residual norms of Algorithm 1 with q = 7 in solving test1, test2, and test3 with $\varepsilon_{lin} = 10^{-4}$, $10^{-6}$, and $10^{-8}$. Figure 2 reveals that the accuracy of the solution of the linear systems translates into the accuracy of the eigenpairs obtained by the FEAST algorithm for LREP; that is to say, higher accuracy in solving the linear systems leads to better numerical performance of Algorithm 1. In particular, for a rather large $\varepsilon_{lin}$, such as $\varepsilon_{lin} = 10^{-4}$, it is hard to obtain satisfactory normalized residual norms in this example.
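For concreteness, the inner solves with the stopping criterion above might look as follows (an illustrative sketch using scipy.sparse.linalg; note the keyword caveat in the comment):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def shifted_solve_gmres(K, M, mu, Y, eps_lin=1e-8):
    """Solve (mu I - K M) X = Y column by column with GMRES, stopping at the
    relative residual eps_lin; K and M may be sparse matrices."""
    N = Y.shape[0]
    A = LinearOperator((N, N), dtype=complex,
                       matvec=lambda v: mu * v - K @ (M @ v))
    X = np.empty(Y.shape, dtype=complex)
    for j in range(Y.shape[1]):
        # NOTE: SciPy < 1.12 spells this keyword 'tol' instead of 'rtol'.
        X[:, j], info = gmres(A, Y[:, j], rtol=eps_lin, atol=0.0)
        if info != 0:
            raise RuntimeError(f"GMRES did not converge for column {j}")
    return X
```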

5. Conclusions

In this paper, we proposed a FEAST algorithm, i.e., Algorithm 1, for the linear response eigenvalue problem (LREP). The algorithm can effectively calculate the eigenvectors associated with eigenvalues located inside preset intervals. Compared with other methods for LREP, the attractive computational advantage of Algorithm 1 is that it is easily parallelizable. We implemented the numerical examples by using the MATLAB parallel pool to compute the eigenpairs in each interval independently of those in the other intervals. The corresponding numerical results demonstrated that Algorithm 1 was fast and could achieve high accuracy. In addition, theoretical convergence results for eigenspace approximations of Algorithm 1 were established in Theorems 1 and 2. These theoretical bounds revealed the accuracy of the eigenspace approximations. Numerical examples showed that the bounds provided by Theorem 1 are sharp.
Notice that the main computational tasks in Algorithm 1 consist of solving q independent shifted linear systems with s right-hand sides. Meanwhile, as described above, the relative residual bounds in the solution of the inner shifted linear systems were translated almost one-to-one into the residuals of the approximate eigenpairs. Therefore, the subjects of further study include the acceleration of solving shifted linear systems on parallel architectures and a theoretical analysis of how the accuracy of inexact solvers for the shifted linear systems interacts with the convergence behavior of the FEAST algorithm. In addition, while the algorithm in this paper is presented for real symmetric K and M, it also applies to the case of Hermitian K and M, simply by replacing $\mathbb{R}$ by $\mathbb{C}$ (the set of complex numbers) and each matrix/vector transpose by the conjugate transpose.

Author Contributions

Writing, original draft, Z.T.; writing, review and editing, Z.T. and L.L.

Funding

The work of the first author is supported in part by National Natural Science Foundation of China NSFC-11601081 and the research fund for distinguished young scholars of Fujian Agriculture and Forestry University No. xjq201727. The work of the second author is supported in part by the National Natural Science Foundation of China Grant NSFC-11671105.

Acknowledgments

The authors are grateful to the anonymous referees for their careful reading, useful comments, and suggestions for improving the presentation of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Saad, Y.; Chelikowsky, J.R.; Shontz, S.M. Numerical methods for electronic structure calculations of materials. SIAM Rev. 2010, 52, 3–54.
2. Shao, M.; da Jornada, F.H.; Lin, L.; Yang, C.; Deslippe, J.; Louie, S.G. A structure preserving Lanczos algorithm for computing the optical absorption spectrum. SIAM J. Matrix Anal. Appl. 2018, 39, 683–711.
3. Bai, Z.; Li, R.C. Minimization principle for linear response eigenvalue problem, I: Theory. SIAM J. Matrix Anal. Appl. 2012, 33, 1075–1100.
4. Li, T.; Li, R.C.; Lin, W.W. A symmetric structure-preserving ΓQR algorithm for linear response eigenvalue problems. Linear Algebra Appl. 2017, 520, 191–214.
5. Wang, W.G.; Zhang, L.H.; Li, R.C. Error bounds for approximate deflating subspaces for linear response eigenvalue problems. Linear Algebra Appl. 2017, 528, 273–289.
6. Bai, Z.; Li, R.C. Minimization principle for linear response eigenvalue problem, II: Computation. SIAM J. Matrix Anal. Appl. 2013, 34, 392–416.
7. Papakonstantinou, P. Reduction of the RPA eigenvalue problem and a generalized Cholesky decomposition for real-symmetric matrices. Europhys. Lett. 2007, 78, 12001.
8. Teng, Z.; Li, R.C. Convergence analysis of Lanczos-type methods for the linear response eigenvalue problem. J. Comput. Appl. Math. 2013, 247, 17–33.
9. Teng, Z.; Zhang, L.H. A block Lanczos method for the linear response eigenvalue problem. Electron. Trans. Numer. Anal. 2017, 46, 505–523.
10. Teng, Z.; Zhou, Y.; Li, R.C. A block Chebyshev–Davidson method for linear response eigenvalue problems. Adv. Comput. Math. 2016, 42, 1103–1128.
11. Rocca, D.; Lu, D.; Galli, G. Ab initio calculations of optical absorption spectra: Solution of the Bethe–Salpeter equation within density matrix perturbation theory. J. Chem. Phys. 2010, 133, 164109.
12. Shao, M.; da Jornada, F.H.; Yang, C.; Deslippe, J.; Louie, S.G. Structure preserving parallel algorithms for solving the Bethe–Salpeter eigenvalue problem. Linear Algebra Appl. 2016, 488, 148–167.
13. Vecharynski, E.; Brabec, J.; Shao, M.; Govind, N.; Yang, C. Efficient block preconditioned eigensolvers for linear response time-dependent density functional theory. Comput. Phys. Commun. 2017, 221, 42–52.
14. Zhong, H.X.; Xu, H. Weighted Golub–Kahan–Lanczos bidiagonalization algorithms. Electron. Trans. Numer. Anal. 2017, 47, 153–178.
15. Zhong, H.X.; Teng, Z.; Chen, G. Weighted block Golub–Kahan–Lanczos algorithms for linear response eigenvalue problem. Mathematics 2019, 7.
16. Bai, Z.; Li, R.C.; Lin, W.W. Linear response eigenvalue problem solved by extended locally optimal preconditioned conjugate gradient methods. Sci. China Math. 2016, 59, 1–18.
17. Polizzi, E. Density-matrix-based algorithm for solving eigenvalue problems. Phys. Rev. B 2009, 79, 115112.
18. Tang, P.T.P.; Polizzi, E. FEAST as a subspace iteration eigensolver accelerated by approximate spectral projection. SIAM J. Matrix Anal. Appl. 2014, 35, 354–390.
19. Kestyn, J.; Polizzi, E.; Tang, P.T.P. FEAST eigensolver for non-Hermitian problems. SIAM J. Sci. Comput. 2016, 38, S772–S799.
20. Gavin, B.; Miedlar, A.; Polizzi, E. FEAST eigensolver for nonlinear eigenvalue problems. J. Comput. Sci. 2018, 27, 107–117.
21. Krämer, L.; Di Napoli, E.; Galgon, M.; Lang, B.; Bientinesi, P. Dissecting the FEAST algorithm for generalized eigenproblems. J. Comput. Appl. Math. 2013, 244, 1–9.
22. Güttel, S.; Polizzi, E.; Tang, P.T.P.; Viaud, G. Zolotarev quadrature rules and load balancing for the FEAST eigensolver. SIAM J. Sci. Comput. 2015, 37, A2100–A2122.
23. Ye, X.; Xia, J.; Chan, R.H.; Cauley, S.; Balakrishnan, V. A fast contour-integral eigensolver for non-Hermitian matrices. SIAM J. Matrix Anal. Appl. 2017, 38, 1268–1297.
24. Yin, G.; Chan, R.H.; Yeung, M.C. A FEAST algorithm with oblique projection for generalized eigenvalue problems. Numer. Linear Algebra Appl. 2017, 24, e2092.
25. Li, R.C.; Zhang, L.H. Convergence of the block Lanczos method for eigenvalue clusters. Numer. Math. 2015, 131, 83–113.
26. Stein, E.M.; Shakarchi, R. Complex Analysis; Princeton University Press: Princeton, NJ, USA, 2010.
27. Futamura, Y.; Tadano, H.; Sakurai, T. Parallel stochastic estimation method of eigenvalue distribution. JSIAM Lett. 2010, 2, 127–130.
28. Di Napoli, E.; Polizzi, E.; Saad, Y. Efficient estimation of eigenvalue counts in an interval. Numer. Linear Algebra Appl. 2016, 23, 674–692.
29. Saad, Y. Numerical Methods for Large Eigenvalue Problems: Revised Edition; SIAM: Philadelphia, PA, USA, 2011.
30. Yin, G. A harmonic FEAST algorithm for non-Hermitian generalized eigenvalue problems. Linear Algebra Appl. 2019, 578, 75–94.
31. Yin, G. On the non-Hermitian FEAST algorithms with oblique projection for eigenvalue problems. J. Comput. Appl. Math. 2019, 355, 23–35.
32. Saad, Y. Iterative Methods for Sparse Linear Systems; SIAM: Philadelphia, PA, USA, 2003.
33. Saad, Y.; Schultz, M.H. GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems. SIAM J. Sci. Stat. Comput. 1986, 7, 856–869.
Figure 1. Schematic representation of the wanted eigenvalues.
Figure 2. Convergence behavior of Algorithm 1 for test1, test2, and test3 with $\varepsilon_{lin} = 10^{-4}$, $10^{-6}$, and $10^{-8}$, respectively.
Table 1. ε₁ together with the corresponding upper bounds ε₂ of Example 1.

| η | ε₁ (k = 1) | ε₂ (k = 1) | ε₁ (k = 2) | ε₂ (k = 2) |
|---|---|---|---|---|
| 10^-1 | 4.4772 × 10^-5 | 3.6089 × 10^-3 | 7.9793 × 10^-10 | 2.4574 × 10^-6 |
| 10^-2 | 5.9244 × 10^-5 | 3.5685 × 10^-3 | 1.0762 × 10^-9 | 2.3402 × 10^-6 |
| 10^-3 | 5.9128 × 10^-5 | 3.5639 × 10^-3 | 1.0740 × 10^-9 | 2.3372 × 10^-6 |
| 10^-4 | 5.9116 × 10^-5 | 3.5635 × 10^-3 | 1.0738 × 10^-9 | 2.3369 × 10^-6 |
| 10^-5 | 5.9115 × 10^-5 | 3.5635 × 10^-3 | 1.0738 × 10^-9 | 2.3369 × 10^-6 |
Table 2. Test matrices.

| Problem | N | K | M | (ω̲₁, ω̄₁) | s₁ | (ω̲₂, ω̄₂) | s₂ | (ω̲₃, ω̄₃) | s₃ |
|---|---|---|---|---|---|---|---|---|---|
| test1 | 1862 | Na2 | Na2 | (0.40, 0.50) | 3 | (0.60, 0.70) | 3 | (1.50, 1.60) | 3 |
| test2 | 2834 | Na4 | Na4 | (0.03, 0.04) | 4 | (0.05, 0.06) | 4 | (0.32, 0.33) | 5 |
| test3 | 5660 | SiH4 | SiH4 | (0.25, 0.30) | 3 | (0.75, 0.80) | 6 | (1.75, 1.80) | 8 |
Table 3. Changes of max e(ρⱼ) and max r(ρⱼ) with the number of quadrature nodes q varying from 4 to 9.

| q | test1 max e(ρⱼ) | test1 max r(ρⱼ) | test2 max e(ρⱼ) | test2 max r(ρⱼ) | test3 max e(ρⱼ) | test3 max r(ρⱼ) |
|---|---|---|---|---|---|---|
| 4 | 1.25 × 10^-5 | 7.54 × 10^-6 | 2.35 × 10^-7 | 6.82 × 10^-7 | 1.87 × 10^-11 | 7.48 × 10^-8 |
| 5 | 7.08 × 10^-8 | 5.69 × 10^-7 | 8.13 × 10^-10 | 3.99 × 10^-8 | 1.26 × 10^-13 | 1.07 × 10^-9 |
| 6 | 5.64 × 10^-10 | 5.08 × 10^-8 | 4.23 × 10^-12 | 2.72 × 10^-9 | 1.29 × 10^-13 | 1.68 × 10^-11 |
| 7 | 5.39 × 10^-12 | 4.97 × 10^-9 | 9.53 × 10^-13 | 2.01 × 10^-10 | 1.29 × 10^-13 | 2.71 × 10^-13 |
| 8 | 9.95 × 10^-14 | 5.10 × 10^-10 | 9.52 × 10^-13 | 1.54 × 10^-11 | 1.29 × 10^-13 | 4.29 × 10^-15 |
| 9 | 9.93 × 10^-14 | 5.37 × 10^-11 | 9.52 × 10^-13 | 1.21 × 10^-12 | 1.29 × 10^-13 | 4.00 × 10^-16 |
Table 4. Comparison of Algorithm 1 and eig in terms of CPU time in seconds.

| Problem | eig | Algorithm 1 |
|---|---|---|
| test1 | 45.82 | 15.88 |
| test2 | 159.77 | 37.08 |
| test3 | 1200.71 | 233.12 |
