Article

A Preconditioned Variant of the Refined Arnoldi Method for Computing PageRank Eigenvectors

Zhao-Li Shen, Hao Yang, Bruno Carpentieri, Xian-Ming Gu and Chun Wen

1 College of Science, Sichuan Agricultural University, Ya’an 625000, China
2 Johann Bernoulli Institute for Mathematics and Computer Science, Faculty of Science and Engineering, University of Groningen, 9700 AK Groningen, The Netherlands
3 Faculty of Computer Science, Free University of Bozen-Bolzano, 39100 Bolzano, Italy
4 School of Economic Mathematics, Southwestern University of Finance and Economics, Chengdu 611130, China
5 School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 611731, China
* Authors to whom correspondence should be addressed.
† These authors contributed equally to this work.
Submission received: 30 June 2021 / Revised: 19 July 2021 / Accepted: 19 July 2021 / Published: 23 July 2021

Abstract

The PageRank model computes the stationary distribution of a Markov random walk on the linking structure of a network, and uses its values to represent the importance or centrality of each node. The model was first proposed by Google for ranking web pages, and has since been widely applied as a centrality measure for networks arising in various fields such as chemistry, bioinformatics, neuroscience and social networks. For example, it can measure the node centralities of a gene-gene annotation network to evaluate the relevance of each gene to a certain disease. The networks in some fields, including bioinformatics, are undirected, so the corresponding adjacency matrices are symmetric. Mathematically, the PageRank model can be stated as finding the unit positive eigenvector corresponding to the largest eigenvalue of a transition matrix built upon the linking structure. With the rapid development of science and technology, the networks in real applications grow larger and larger, so the PageRank model constantly demands numerical algorithms with reduced algorithmic or memory complexity. In this paper, we propose a novel preconditioning approach for solving the PageRank model. This approach transforms the original PageRank eigen-problem into a new one that is more amenable to solution. We then present a preconditioned version of the refined Arnoldi method for solving this model. We demonstrate theoretically that the preconditioned Arnoldi method has higher execution efficiency and parallelism than the refined Arnoldi method. In numerous numerical experiments, the preconditioned method exhibits noticeably faster convergence than its standard counterpart, especially for difficult cases with large damping factors. Moreover, this superiority is maintained when the technique is applied to other variants of the refined Arnoldi method. Overall, the proposed technique can speed up the solution of the PageRank model, which may improve the efficiency of research, engineering projects and services wherever this model is applied.

1. Introduction

With the rapid development of the Internet, web search engines have become very popular tools for information retrieval. Because a web search engine can usually find an immense set of Web pages matching a search query, it must rank the most important pages higher to make the tool practical. For this purpose, the PageRank model was developed by the Google team to rank the importance of Web pages, based on the frequency of page visits of a random user who keeps browsing the World Wide Web, choosing the hyperlinks on each page with equal probability. Mathematically speaking, it requires the computation of the stationary distribution of this Markov random walk on the linking structure of pages; the values within the distribution represent the frequency of visits to each Web page. The linking structure of the Web is represented by a large directed graph (called the Web link graph) and by its adjacency matrix $G \in \mathbb{N}^{n \times n}$ (here n is the number of Web pages) such that $G(i,j) \neq 0$ (in fact, equal to 1) only when page j has a hyperlink pointing to page i. The transition probability matrix $P \in \mathbb{R}^{n \times n}$ of this random walk process is defined as
$$P(i,j) = \begin{cases} \dfrac{1}{\sum_{k=1}^{n} G(k,j)}, & \text{if } G(i,j) = 1, \\[4pt] 0, & \text{otherwise}. \end{cases} \qquad (1)$$
However, to prevent the process from stagnating when a dangling page (a page without hyperlinks) is visited, in the model P is usually modified as
$$\tilde{P} = P + v d^{T},$$
where $v \in \mathbb{R}^{n \times 1}$ is a probability distribution vector, $d \in \mathbb{N}^{n \times 1}$ is a binary vector, and $d(i) = 1$ ($1 \le i \le n$) if page i has no hyperlinks. According to the Perron-Frobenius theorem, the unique existence of the stationary distribution vector is guaranteed when $\tilde{P}$ is an irreducible stochastic matrix. To enforce this assumption, $\tilde{P}$ is further modified into the matrix
$$A = \alpha \tilde{P} + (1 - \alpha)\, v e^{T}, \qquad (2)$$
where $\alpha \in (0, 1)$ is called the damping factor and $e = [1, 1, \ldots, 1]^{T}$. The matrix A is often called the Google matrix [1]. Finally, the PageRank model can be stated mathematically as finding the unit positive eigenvector corresponding to the eigenvalue 1 of A, that is, the solution of the eigen-problem
$$A x = x, \qquad \|x\|_{1} = 1, \qquad x > 0. \qquad (3)$$
Because of the assumption that A is an irreducible stochastic matrix, guaranteed by the construction (2), it follows that Equation (3) is uniquely solvable. The unique stationary distribution vector x is called the PageRank vector. Note that the PageRank model is nowadays often used as a network centrality measure to identify the most important nodes within large networks arising in several applications such as chemistry, bioinformatics, neuroscience, bibliometrics, web search engines, social networks, etc. [2]. In some fields, such as bioinformatics, the related networks are usually undirected, so the corresponding adjacency matrices are symmetric. This symmetry can be exploited to build efficient storage formats and efficient implementations of matrix-vector multiplications, which reduces the difficulty of solving the PageRank model (also called the PageRank problem).
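To make these definitions concrete, the following MATLAB sketch (our own toy illustration, not taken from the paper; the 4-node graph and all variable names are hypothetical) builds P, P̃ and A for a small directed graph with one dangling page, and verifies that the dominant eigenvector of A solves Equation (3):

% Toy illustration of Equations (1)-(3); the 4-node graph is hypothetical.
n = 4;
% Page 1 links to pages 2, 3 and 4; page 2 links to 1; page 3 links to 4;
% page 4 has no out-links (a dangling page).
G = sparse([2 3 4 1 4], [1 1 1 2 3], 1, n, n);   % G(i,j) = 1 iff page j links to page i
outdeg = full(sum(G, 1))';                       % column sums of G (out-degrees)
d = double(outdeg == 0);                         % d(i) = 1 iff page i is dangling
P = G * spdiags(1 ./ max(outdeg, 1), 0, n, n);   % Equation (1)
v = ones(n, 1) / n;                              % uniform personalization vector
Ptilde = P + v * d';                             % dangling columns replaced by v
alpha = 0.85;
A = alpha * Ptilde + (1 - alpha) * v * ones(1, n);   % Google matrix, Equation (2)
[V, D] = eig(full(A));
[~, idx] = max(real(diag(D)));                   % locate the eigenvalue 1
x = V(:, idx); x = x / sum(x);                   % normalize: ||x||_1 = 1 and x > 0
disp(norm(A * x - x));                           % ~1e-16: x solves Equation (3)

For realistic Web graphs, A is of course never assembled as a dense matrix; Section 3 shows how all products with A (and with its powers) are carried out matrix-free.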
With the rapid development of science and technology, the dimension of PageRank problems from various application fields has grown enormously over the last decades and still keeps growing. Accordingly, iterative methods have become the only viable option for solving Equation (3) numerically. Stationary iterative methods such as the Power, Jacobi and Gauss-Seidel methods are effective when the damping parameter α is not too close to 1; e.g., for search engine applications Google initially used α = 0.85. However, larger values of α (though still smaller than 1), such as 0.99, may sometimes give a better ranking result [3], and stationary methods tend to converge significantly more slowly when α is large. These computational issues mean that the PageRank problem constantly calls for new algorithmic solutions that are more time- and memory-efficient.
The development of more efficient algorithms for solving PageRank problems has been ongoing for the past decade or so. One research direction is to accelerate the convergence of stationary iterative methods; related works include adaptive [4] and extrapolation methods [5,6,7], multigrid solvers [8,9], and inner-outer iterations [10,11,12,13,14]. Meanwhile, a significant amount of work has been devoted in particular to the analysis of Krylov subspace methods for computing PageRank. Golub and Greif proposed in [3] a new variant of the Arnoldi algorithm that is particularly suited for solving problems with a large range of damping values, and can outperform many conventional stationary iterative solvers when α is close to 1. Yin et al. used weighted inner-products instead of the standard inner-products in the Arnoldi process, reporting faster convergence due to a more favourable distribution of the harmonic Ritz values [15]. The Power-Arnoldi method [16], its weighted version [17] and the Arnoldi-Inout method [18] are hybrid variants of the Arnoldi process that periodically combine stationary and Krylov iterations, and can improve upon other established PageRank solvers on many examples. All of these techniques can accelerate the Arnoldi method from [3], but there is likely still much room for further improvement. One obvious observation is that preconditioning, which is widely used to accelerate Krylov subspace methods for solving linear systems, has not yet been considered for accelerating Arnoldi-type methods applied to the PageRank eigen-problem. This gap is worth investigating, because successful preconditioning usually yields a significant acceleration of the solution process, and it can be combined with other acceleration techniques.
In this work we propose a new formulation of the PageRank eigenvector problem (3) that is characterized by a better separation between the dominant and the second dominant eigenvalues of the coefficient matrix, so that the Arnoldi process may require significantly fewer iterations to converge, and consequently fewer BLAS-1 operations (inner-products and SAXPYs) in the Gram-Schmidt orthogonalization. Our strategy of transforming the original problem into a new one that is more amenable to iterative solution can be seen as a form of preconditioning. The optimal parameter setting for this preconditioning technique is also discussed. Our experiments confirm the theoretical findings and demonstrate that Arnoldi-type methods applied to the preconditioned eigen-problem can converge much faster than their standard counterparts. Thus the approach has the potential to solve large-scale PageRank computations more effectively on both sequential and parallel machines. Accordingly, we can expect the proposed technique to speed up projects from various fields that use the PageRank model. For example, in the GeneRank problem [19] and the ProteinRank problem [20], which apply the PageRank model to gene-gene and protein-protein networks, users can identify the genes and proteins that are most likely pathogenic more quickly.
The paper is organized as follows. In Section 2, we outline the refined Arnoldi method proposed in [3] that is at the basis of our development, along with its main convergence results. In Section 3, we present a preconditioned variant of the refined Arnoldi method for computing PageRank eigenvectors. Numerical experiments are reported in Section 4 to support our theoretical findings. Finally, some conclusions from this study are presented in Section 5. Note that MATLAB notation is used throughout this article.

2. The Refined Arnoldi Method for PageRank

The Arnoldi process, proposed in 1951, is an algorithm based on the modified Gram-Schmidt orthogonalization that, after m steps, computes an orthonormal basis $\{v_1, v_2, \ldots, v_{m+1}\}$ of the Krylov subspace $\mathcal{K}_{m+1}(A, v_0) = \mathrm{span}\{v_0, A v_0, \ldots, A^m v_0\}$, where $A \in \mathbb{R}^{n \times n}$ and $v_0 \in \mathbb{R}^{n \times 1}$ are a given matrix and an initial vector, respectively. We sketch it in Algorithm 1.
In matrix form, Algorithm 1 yields after m steps the Arnoldi decompositions
$$A V_m = V_{m+1} \bar{H}_m \qquad (4)$$
$$\phantom{A V_m} = V_m H_m + h_{m+1,m}\, v_{m+1} e_m^{T}, \qquad (5)$$
and
$$V_m^{T} A V_m = H_m, \qquad (6)$$
where $H_m = \{h_{i,j}\} \in \mathbb{R}^{m \times m}$ is an upper Hessenberg matrix, and we denote $\bar{H}_m = [H_m;\, h_{m+1,m} e_m^{T}] \in \mathbb{R}^{(m+1) \times m}$, $e_m = [0, 0, \ldots, 0, 1]^{T}$ and $V_{m+1} = [V_m, v_{m+1}]$. In Table 1 we summarize the computational cost of Algorithm 1 for the case of a general matrix A, and in Table 2 for the special case of the Google matrix (2). Note that in the latter case, from the expression $A = \alpha P + v((1-\alpha)e + \alpha d)^{T}$, it follows that the product $Au$ requires one sparse matrix-vector operation $\alpha P u$ (hereafter u denotes an arbitrary vector of appropriate dimension), one inner product $f^{T} u$, where the vector $f = (1-\alpha)e + \alpha d$ is stored, one vector scaling operation and one vector addition.
Algorithm 1 The Arnoldi process [ V_m, H_m, v_{m+1}, h_{m+1,m}, β ] = Arnoldi(A, v_0, m)
1: Compute β = ||v_0||_2, v_1 = v_0/β.
2: for j = 1 : m do
3:     Compute w = A v_j.
4:     for i = 1 : j do
5:         Compute h_{i,j} = v_i^T w.
6:         Compute w = w − h_{i,j} v_i.
7:     end for
8:     Compute h_{j+1,j} = ||w||_2.
9:     if h_{j+1,j} = 0 then
10:        Set m = j, v_{m+1} = 0, and break.
11:    else
12:        Set v_{j+1} = w / h_{j+1,j}.
13:    end if
14: end for
15: Generate V_m = [ v_1, v_2, ..., v_m ].
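For concreteness, Algorithm 1 can be transcribed into MATLAB as follows (our own sketch; the function name arnoldi and the function handle Afun for the matrix-vector product are implementation choices, not prescribed by the paper):

function [V, H, v_next, h_next, beta] = arnoldi(Afun, v0, m)
% Algorithm 1: the Arnoldi process with modified Gram-Schmidt.
% Afun: handle computing A*u; v0: initial vector; m: number of steps.
n = length(v0);
V = zeros(n, m + 1);
H = zeros(m + 1, m);
beta = norm(v0);
V(:, 1) = v0 / beta;                  % line 1
for j = 1:m
    w = Afun(V(:, j));                % line 3: w = A*v_j
    for i = 1:j                       % lines 4-7: orthogonalize against v_1,...,v_j
        H(i, j) = V(:, i)' * w;
        w = w - H(i, j) * V(:, i);
    end
    H(j + 1, j) = norm(w);            % line 8
    if H(j + 1, j) == 0               % lines 9-10: lucky breakdown
        m = j;
        break;
    end
    V(:, j + 1) = w / H(j + 1, j);    % line 12
end
v_next = V(:, m + 1);                 % v_{m+1}
h_next = H(m + 1, m);                 % h_{m+1,m}
V = V(:, 1:m);                        % line 15: V_m
H = H(1:m, 1:m);                      % H_m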
The Arnoldi process is the main ingredient of many efficient numerical techniques for solving linear systems and eigen-problems. After m steps, the scalars $\lambda_i^{(m)}$ and vectors $\phi_i^{(m)} = V_m y_i^{(m)}$ ($i = 1, 2, \ldots, m$) satisfying $H_m y_i^{(m)} = \lambda_i^{(m)} y_i^{(m)}$ are called the Ritz values and the Ritz vectors of A with respect to the Krylov subspace $\mathcal{K}_m(A, v_0)$, respectively. The Ritz values with largest real parts and their corresponding Ritz vectors are often used to approximate the eigenvalues with largest real parts of A and their associated eigenvectors [21]. However, in-depth convergence analysis of the Arnoldi method shows that the Ritz vectors are not guaranteed to converge to the actual eigenvectors, even when the Ritz values do converge. To overcome this difficulty, Jia proposed in [22] to compute the refined Ritz vector $u_i^{(m)}$ associated with the Ritz value $\lambda_i^{(m)}$, defined by
$$\|(A - \lambda_i^{(m)} I)\, u_i^{(m)}\|_2 = \min_{u \in \mathcal{K}_m(A, v_0),\ \|u\|_2 = 1} \|(A - \lambda_i^{(m)} I)\, u\|_2. \qquad (7)$$
The solution of Equation (7) is given by
$$[H_m;\, h_{m+1,m} e_m^{T}] - \lambda_i^{(m)} [I;\, O] = U \Sigma W^{T}, \qquad u_i^{(m)} = V_m W(:, m), \qquad (8)$$
where $U \Sigma W^{T}$ is the singular value decomposition of the matrix $[H_m;\, h_{m+1,m} e_m^{T}] - \lambda_i^{(m)} [I;\, O]$, the singular values are stored on the diagonal of Σ in decreasing order, and the columns of W are the right singular vectors. Moreover, the 2-norm of the residual vector, $\|(A - \lambda_i^{(m)} I)\, u_i^{(m)}\|_2$, equals the smallest singular value $\Sigma(m, m)$.
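In MATLAB notation, the refined Ritz vector of Equation (8) is extracted from a small (m+1) × m matrix, so its cost is negligible compared with the Arnoldi process itself. A minimal sketch, reusing the outputs of the arnoldi function above and assuming a given Ritz value lam:

% Refined Ritz vector for a Ritz value lam, following Equation (8).
Hbar = [H; zeros(1, m - 1), h_next];      % (m+1) x m matrix [H_m; h_{m+1,m}*e_m']
B = Hbar - lam * [eye(m); zeros(1, m)];   % [H_m; h_{m+1,m}*e_m'] - lam*[I; O]
[~, S, W] = svd(B);                       % SVD of a small dense matrix
u = V * W(:, m);                          % refined Ritz vector u = V_m * W(:, m)
res = S(m, m);                            % residual 2-norm = smallest singular value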
Although in exact arithmetic the refined Ritz vector $u_i^{(m)}$ is guaranteed to converge to the eigenvector associated with $\lambda_i$ when $\lambda_i^{(m)} \to \lambda_i$, some numerical difficulties may still arise in practice. Firstly, the largest Ritz value may be complex, and the use of complex arithmetic may be a memory burden for large-scale computations. Secondly, when α is close to 1, slow or irregular convergence can still occur due to a weak separation of the eigenvalues of A. Golub and Greif proposed to use the largest eigenvalue of the Google matrix, which is known and equal to 1, as the shift in (8) instead of the largest Ritz value $\lambda_1^{(m)}$, in an attempt to overcome these convergence problems [3]. We present the Golub and Greif variant of the refined Arnoldi method for PageRank in Algorithm 2, and refer to it as the GG-Arnoldi method hereafter.
Algorithm 2 The Golub and Greif variant of the restarted refined Arnoldi method (shortly referred to as GG-Arnoldi)
Input: The PageRank coefficient matrix A, initial guess x_0, parameters m and tol.
1: Run Algorithm 1 as [ V_m, H_m, v_{m+1}, h_{m+1,m}, β ] = Arnoldi(A, x_0, m).
2: Compute the SVD factorization [ H_m; h_{m+1,m} e_m^T ] − [ I; O ] = U Σ W^T.
3: Compute the updated approximate solution x = V_m W(:, m).
4: Compute the residual 2-norm as the smallest singular value, r = Σ(m, m).
5: if r / ||x||_1 ≤ tol then
6:     Output x = x/||x||_1 and end.
7: else
8:     Set x_0 = x and go to step 1.
9: end if
10: return x.
The overall computational cost of Algorithm 2 is summarized in Table 3. Note that the singular value decomposition computed at step 2 is omitted because its cost is negligible when m is very small. Analogously, the vector scaling x = x/||x||_1 at step 6 is not reported in the table, as it is computed only at the last cycle. Finally, the operation x = V_m W(:, m) is equivalent to m vector scaling operations and m − 1 vector additions.
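Putting the pieces together, Algorithm 2 admits the following MATLAB sketch (again our own illustration, built on the arnoldi and refined-Ritz fragments above; the handle Afun abstracts the product Au):

function x = gg_arnoldi(Afun, x0, m, tol)
% Algorithm 2: restarted refined Arnoldi with the known shift 1 (GG-Arnoldi).
while true
    [V, H, ~, h_next, ~] = arnoldi(Afun, x0, m);              % step 1
    B = [H; zeros(1, m - 1), h_next] - [eye(m); zeros(1, m)];
    [~, S, W] = svd(B);                                       % step 2
    x = V * W(:, m);                                          % step 3
    r = S(m, m);                                              % step 4: residual 2-norm
    if r / norm(x, 1) <= tol                                  % step 5
        x = x / norm(x, 1);                                   % step 6
        return;
    end
    x0 = x;                                                   % step 8: restart
end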
The convergence analysis of the GG-Arnoldi method is presented in [20]; here we recall the main result. We denote by X an orthonormal basis of the orthogonal complement of the space span{x}, so that [x, X] is orthonormal. Then, we have
$$[x, X]^{T} A\, [x, X] = \begin{bmatrix} \|x\|_2^2 & x^{T} A X \\ O & A_2 \end{bmatrix}, \qquad (9)$$
where $A_2 = X^{T} A X$. According to the Perron-Frobenius theorem, the spectral radius of A is equal to 1, A has only one eigenvalue (namely 1) on the unit circle, and the eigenspace of this eigenvalue is one-dimensional. As 1 is a simple eigenvalue of A, we introduce a separation function, named “sep” and defined as
$$\mathrm{sep}(1, A_2) = \|(I - A_2)^{-1}\|_2^{-1} = \sigma_{\min}(I - A_2). \qquad (10)$$
Since $\sigma_{\min}(I - A_2) \le |1 - \lambda_2|$, where $\lambda_2$ is the second largest eigenvalue of A [20], we obtain
$$\mathrm{sep}(1, A_2) \le |1 - \lambda_2|.$$
Under these assumptions, the theorem below suggests that the larger the second dominant eigenvalue $\lambda_2$ is in modulus, the smaller the separation can become, and hence the slower the convergence of the GG-Arnoldi method applied to the eigen-problem (3).
Theorem 1
(Theorem 2.2 in [20]). Let $\mathcal{P}_m$ be the orthogonal projector onto the subspace $\mathcal{K}_m(A, v_0)$, let $\epsilon_m = \|(I - \mathcal{P}_m)x\|_2$ be the distance between the PageRank vector x and the subspace $\mathcal{K}_m(A, v_0)$, and let $x_{AT}$ be the approximate solution generated by the GG-Arnoldi method. Then it holds that
$$\sin\angle(x, x_{AT}) \le \frac{\epsilon_m}{\sqrt{1 - \epsilon_m^2}} \cdot \frac{\|A - I\|_2}{\mathrm{sep}(1, A_2)}.$$
We recall below another property of the eigenvalues of the Google matrix A that is very relevant to our analysis.
Theorem 2
([23]). If a column-stochastic matrix $\tilde{P}$ has at least two irreducible closed subsets (which is the case for the web hyperlink matrix), then the second eigenvalue $\lambda_2$ of $A = \alpha \tilde{P} + (1 - \alpha) v e^{T}$, where $0 < \alpha < 1$ and v is a vector with non-negative elements satisfying $\|v\|_1 = 1$, is given by $\lambda_2 = \alpha$.
Therefore, slow convergence of the GG-Arnoldi method for PageRank problems is to be expected in particular when α approaches 1. As this situation arises in several applications, some acceleration techniques have been developed in recent years to enhance the robustness of the GG-Arnoldi method. They can be classified into two types: (1) using weighted inner products instead of the standard inner products in Algorithm 2 to obtain a more favourable distribution of Ritz values [15,17]; (2) combining the GG-Arnoldi method with a few cycles of stationary iterative solvers, as in the Power-Arnoldi method [16], the Arnoldi-Inout method [18], the Arnoldi-PET method [17] and others. Both approaches attempt to provide better initial guesses for the Arnoldi process. To the best of our knowledge, however, no research has investigated efficient ways to precondition the PageRank eigen-problem (3) so as to make it easier to solve by Arnoldi-type methods. This is the main objective of our study.

3. Preconditioning the Refined Arnoldi Method

Preconditioning is an established technique for accelerating Krylov subspace methods for solving linear systems. A nonsingular linear system $Ay = c$ can be transformed into an equivalent one of either the form $MAy = Mc$ (left preconditioned system) or the form $AMz = c$ with $y = Mz$ (right preconditioned system), where $M \approx A^{-1}$ is called the preconditioner matrix, or simply the preconditioner. The goal of preconditioning is to improve the spectral distribution of the coefficient matrix A so that a Krylov subspace algorithm solves the transformed preconditioned system much faster than the original one. The development of efficient preconditioners is a very important topic in numerical linear algebra, because it can enable the rapid solution of problems that may initially appear numerically intractable. However, preconditioning is seldom used for solving eigen-problems $Ay = \lambda y$, because the left preconditioned ($MAy = \lambda y$) and right preconditioned ($AMz = \lambda z$, $y = Mz$) eigen-problems are no longer equivalent to the original one.
Here we propose a new approach to precondition the PageRank eigen-problem (3) for the computation of the dominant eigenvector x corresponding to the largest eigenvalue $\lambda_1 = 1$ of the Google matrix A. According to the analysis presented in Section 2, the convergence speed of the GG-Arnoldi method is mainly determined by the separation between the largest and second largest eigenvalues of A, namely by the quantity $|\lambda_1 - \lambda_2| = 1 - \alpha$. Therefore, our strategy for preconditioning Equation (3) is to transform it into an equivalent eigen-problem with a better separation between its two dominant eigenvalues. The main theoretical result underlying our method is presented in the theorem below.
Theorem 3.
For the Google matrix A in (2) with eigenvalues $1 > |\lambda_2| > |\lambda_3| > \cdots > |\lambda_s|$, if a polynomial P satisfies $P(\lambda_i) \neq P(1)$ ($2 \le i \le s$), then the PageRank eigen-problem (3) is equivalent to the eigen-problem:
$$P(A)\, x = P(1)\, x, \qquad x > 0 \ \text{and} \ \|x\|_1 = 1. \qquad (11)$$
Proof.
It is clear that any solution of $Ax = x$ is also a solution of $P(A)x = P(1)x$. Suppose
$$A = T^{-1} \begin{bmatrix} 1 & & & \\ & J_2 & & \\ & & \ddots & \\ & & & J_s \end{bmatrix} T,$$
where each $J_i$ ($2 \le i \le s$) is a Jordan matrix whose diagonal elements equal $\lambda_i$. Then,
$$P(A) = T^{-1} \begin{bmatrix} P(1) & & & \\ & P(J_2) & & \\ & & \ddots & \\ & & & P(J_s) \end{bmatrix} T.$$
Clearly, each $P(J_i)$ ($2 \le i \le s$) is still a triangular matrix whose diagonal elements equal $P(\lambda_i)$. Because $P(\lambda_i) \neq P(1)$ ($2 \le i \le s$), the algebraic multiplicity of the eigenvalue $P(1)$ of $P(A)$ equals 1, the same as that of the eigenvalue 1 of A. Therefore $P(A)x = P(1)x$ has the same solution space as $Ax = x$, i.e., problem (3) is equivalent to problem (11).    □
The obvious questions to address are whether a polynomial P satisfying the condition $P(\lambda_i) \neq P(1)$ ($2 \le i \le s$) on the eigenvalues $1 > |\lambda_2| > |\lambda_3| > \cdots > |\lambda_s|$ of A exists, and how to make $P(A)x = P(1)x$ easier to solve than $Ax = x$. The simple polynomial $P(x) = x^k$ ($k \in \mathbb{N}_+$) clearly satisfies $P(\lambda_i) \neq P(1) = 1$ ($2 \le i \le s$). According to Theorem 3, the two eigen-problems
$$A^k x = x, \qquad x > 0 \ \text{and} \ \|x\|_1 = 1 \qquad (12)$$
and
$$A x = x, \qquad x > 0 \ \text{and} \ \|x\|_1 = 1 \qquad (13)$$
are equivalent. Moreover, for $A^k$ the distance between the two largest eigenvalues equals $|P(1) - P(\alpha)| = 1 - \alpha^k \gg 1 - \alpha$; for instance, with α = 0.99 the gap grows from 0.01 for A to $1 - 0.99^5 \approx 0.049$ for $A^5$. As a result, we can expect the GG-Arnoldi method to require fewer cycles and operations to converge when applied to $A^k x = x$ than to $Ax = x$. We sketch the complete preconditioned version of the Golub and Greif variant of the refined Arnoldi method with the polynomial choice $P(x) = x^k$, hereafter referred to as the PGG-Arnoldi method, in Algorithm 3.
Algorithm 3 The preconditioned version of the Golub and Greif variant of the refined Arnoldi method, with the polynomial choice P(x) = x^k (shortly, PGG-Arnoldi) for PageRank
Input: The PageRank coefficient matrix A, initial guess x_0, parameters m, k and tol.
1: Run Algorithm 1 as [ V_m, H_m, v_{m+1}, h_{m+1,m}, β ] = Arnoldi(A^k, x_0, m).
2: Compute the SVD factorization [ H_m; h_{m+1,m} e_m^T ] − [ I; O ] = U Σ W^T.
3: Compute the updated approximate solution x = V_m W(:, m).
4: Compute the residual 2-norm as the smallest singular value, r = min(diag(Σ)).
5: if r / ||x||_1 ≤ tol then
6:     Output x = x/||x||_1 and end.
7: else
8:     Set x_0 = x and go to step 1.
9: end if
10: return x.
Note that the matrix $A^k$ in Algorithm 3 is never formed explicitly. All operations involving $A^k$ in this algorithm are matrix-vector multiplications, and such an operation $A^k u$ is computed by carrying out k sparse matrix-vector products, without assembling the Google matrix A, as shown in Algorithm 4.
Algorithm 4 Implementation of the matrix-vector product y = A^k u
1: for i = 1 : k do
2:     Compute u = α P u + (f^T u) v.
3: end for
4: Set y = u.
5: return y.
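In MATLAB, Algorithm 4 amounts to the sketch below (our own illustration; for readability it also includes the one-time construction of P and f, which in an actual solver would be precomputed once outside the Arnoldi loop rather than at every call):

function y = powered_google_matvec(G, alpha, v, u, k)
% Compute y = A^k * u without assembling the Google matrix A (Algorithm 4).
n = size(G, 1);
outdeg = full(sum(G, 1))';                        % out-degrees (column sums of G)
d = double(outdeg == 0);                          % dangling-page indicator
P = G * spdiags(1 ./ max(outdeg, 1), 0, n, n);    % sparse transition matrix, Equation (1)
f = (1 - alpha) * ones(n, 1) + alpha * d;         % stored vector f = (1-alpha)*e + alpha*d
for i = 1:k
    u = alpha * (P * u) + (f' * u) * v;           % line 2 of Algorithm 4
end
y = u;

With this building block, the PGG-Arnoldi method of Algorithm 3 is obtained simply by passing the handle Afun = @(u) powered_google_matvec(G, alpha, v, u, k) to the gg_arnoldi sketch of Section 2.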
The overall algorithmic complexity of a cycle of the PGG-Arnoldi method (Algorithm 3) is summarized in Table 4.
It is clear that one cycle of Algorithm 3 is computationally more expensive than one cycle of the GG-Arnoldi method (Algorithm 2) for the same value of m, because the former requires computing the matrix-vector product $A^k u$. In detail, the PGG-Arnoldi algorithm needs an additional $(k-1)m$ sparse matrix-vector products, $(k-1)m$ inner products, $(k-1)m$ vector scalings and $(k-1)m$ vector additions per cycle compared with the GG-Arnoldi algorithm. The computation of an inner product requires n floating-point multiplications and $n-1$ floating-point additions, while scaling a vector needs only n floating-point multiplications. Thus, we assume that a vector scaling operation has half the cost of an inner product between vectors of the same dimension. For the same reason, the costs of a vector addition, a vector 2-norm and a vector 1-norm can be counted as 0.5, 1 and 0.5 inner products, respectively. Finally, the algorithmic complexity estimates of one cycle of GG-Arnoldi and PGG-Arnoldi, in terms of sparse matrix-vector multiplications and vector inner products, are presented in Table 5.
We observe from Table 5 that one cycle of PGG-Arnoldi costs less than k times one cycle of GG-Arnoldi; that is, the operations other than the sparse matrix-vector products grow by a factor smaller than k. Therefore, if the number of cycles required by PGG-Arnoldi is less than 1/k times that of GG-Arnoldi, the former must be faster. In fact, it may be faster even when the number of cycles required by PGG-Arnoldi is larger than 1/k times that of GG-Arnoldi, as the numerical experiments will show. Besides, in real applications the computation of a matrix-vector product is more efficient than that of vector operations in both serial and parallel environments, as the former belongs to the BLAS-2 class while the latter belong to BLAS-1; improving this ratio is very desirable for the parallel solution of large problems such as PageRank. Given a good estimate of the relative convergence rates of the two methods and of the density of the matrix G, the results in Table 5 can guide the choice between PGG-Arnoldi and GG-Arnoldi for the problem at hand. However, little is known quantitatively about the convergence rate of the GG-Arnoldi method beyond the fact that it improves as the gap $|1 - \lambda_2|$ grows, and therefore we rely on numerical experiments to analyze the performance of the PGG-Arnoldi method.
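As a concrete instance of this comparison (our own arithmetic based on Table 5), take the values m = 10 and k = 5 used later in the experiments: one GG-Arnoldi cycle costs 10 sparse matrix-vector products and $(2 \cdot 10^2 + 11 \cdot 10 + 4)/2 = 157$ inner-product equivalents, whereas one PGG-Arnoldi cycle costs 50 sparse matrix-vector products and $(2 \cdot 10^2 + 4 \cdot 5 \cdot 10 + 7 \cdot 10 + 4)/2 = 237$ inner-product equivalents. The matrix-vector work thus grows by the full factor k = 5, but the BLAS-1 work grows only by a factor of about 1.5.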
Finally, we remark that:
  • acceleration techniques designed to work with the GG-Arnoldi method, such as extrapolation methods [6,7], formulations based on weighted inner products [3] and hybrid solution schemes [16,17,18] that combine stationary iterations with the Arnoldi method, can still be applied to the PGG-Arnoldi method;
  • the performance of the PGG-Arnoldi method can be further improved by pre-processing algorithms, such as the elimination strategy in [24] and the low-rank factorization in [25], which reduce the time and memory costs of computing the sparse matrix-vector product αPu; this matters because the sparse matrix-vector products account for a larger proportion of the computation in PGG-Arnoldi than in GG-Arnoldi.

4. Numerical Results

In this section, we present the results of numerical experiments on a suite of Web matrix problems obtained from the University of Florida sparse matrix collection [26] and from the Laboratory for Web Algorithmics [27,28,29]. The characteristics of our test problems are presented in Table 6. For each Web adjacency matrix G, we build the Google matrix using Equation (2) with personalization vector $v = [1, 1, \ldots, 1]^{T}/n$. We use the value α = 0.99 for the damping parameter, so that the resulting PageRank problems are rather difficult for iterative methods. All runs with iterative solvers start from the initial guess $x_0 = [1/n, 1/n, \ldots, 1/n]^{T}$ and stop when either the approximate solution $x_i$ satisfies $\|(I - A)x_i\|_2 / \|x_i\|_1 < 10^{-8}$, or the number of matrix-vector products computed exceeds 20,000. All runs are carried out in MATLAB R2016b on a 64-bit Windows 10 computer equipped with an Intel Core i7-8750H processor and 16 GB of RAM.
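In MATLAB, this setup corresponds to the following sketch (the file name web_matrix.mat is a placeholder, and the solver call reuses the hypothetical functions sketched in Sections 2 and 3):

% Experimental setup: uniform v and x0, alpha = 0.99, matrix-free A^k products.
load('web_matrix.mat', 'G');               % placeholder for a test matrix from [26,27,28,29]
n = size(G, 1);
alpha = 0.99;
v  = ones(n, 1) / n;                       % personalization vector
x0 = ones(n, 1) / n;                       % initial guess
m = 10; k = 5; tol = 1e-8;
Afun = @(u) powered_google_matvec(G, alpha, v, u, k);   % u -> A^k * u (Algorithm 4)
x = gg_arnoldi(Afun, x0, m, tol);          % PGG-Arnoldi: GG-Arnoldi applied to A^k
Au = powered_google_matvec(G, alpha, v, x, 1);          % A * x
resid = norm(x - Au) / norm(x, 1);         % stopping quantity ||(I - A)x||_2 / ||x||_1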

4.1. Performance Analysis of the PGG-Arnoldi Method

We first assess the performance of the PGG-Arnoldi method with the polynomial choice $P(x) = x^k$, using different values of the degree k and increasing dimensions m of the Krylov search space. Because of the very large size of PageRank problems in real applications, the dimension of the Krylov subspace should be kept as small as possible. In our experiments on the web_NotreDame and in-2004 matrices, we use m = 3, 4, …, 10 and k = 1, 2, …, 10. In Tables 7 and 8 we report the results of our experiments in terms of the number of iterations and CPU time (in seconds) for each run.
We observe that, in general, the number of iterations decreases when higher degree polynomials and Krylov subspaces of higher dimension are used. A simple multiple linear regression of the number of iterations (Iter) against 1/m and 1/k for Table 7 gives Iter = 66.4 + 394.2(1/m) + 157.1(1/k), with an R² statistic of 0.8, a p-value of the F-statistic below 0.05, and none of the confidence intervals containing zero. Therefore, the linear relationship can be considered significant. Similar results are obtained for Table 8. The CPU time generally decreases as m increases, while it oscillates as k increases. To save computing time, it is suggested to set the dimension of the Krylov subspace to the maximum value allowed by the available memory. For a given m, the number of iterations tends to decrease with k roughly in proportion to 1/k. Then, if the ratio between the cost of computing the matrix-vector product αPu and that of an inner product between vectors of the same dimension is known, the complexity estimates in Table 5 can guide the user in choosing a nearly optimal k in the range 1:10. Here we set m = 10 and k = 5 based on our experiments.

4.2. Preconditioning Combined with Weighted Inner-Product

We study the effect of using weighted inner products instead of standard inner products on the performance of the PGG-Arnoldi method. The experiments are carried out on the same problems as before. The results are presented in Tables 9 and 10.
The general trend is the same: the number of iterations generally decreases as the polynomial degree k and the Krylov subspace dimension m increase, though some exceptions to this trend can be observed. Here we also carried out a simple multiple linear regression of the number of iterations (Iter) against 1/m and 1/k. For Table 9 the result is Iter = 72.4 + 426.3(1/m) + 146.0(1/k), with an R² statistic of 0.75, smaller than the 0.8 obtained for Table 7. This may be explained by the fact that the adaptive weighting technique [15] makes the convergence of the Arnoldi method more irregular. Moreover, we can see that the preconditioning strategy proposed in this paper also accelerates the weighted Arnoldi method remarkably on these difficult PageRank problems. For nearly all values of m, the CPU time corresponding to the optimal value of k is less than 50% of the time for k = 1, i.e., the case without preconditioning.

4.3. Comparisons with Other Methods

In this section, we compare the performance of the PGG-Arnoldi algorithm against several other algorithms: the FOM method of [30] and its weighted version (referred to as W-FOM), the Golub and Greif variant of the refined Arnoldi method for PageRank (GG-Arnoldi) and its weighted version from [15] (W-Arnoldi), the Power method (Power), the extrapolation accelerated Power-Arnoldi method of [7,17] (EXT-Arnoldi) and the multi-step Power-inner-outer method of [12] (MPIO). We also test the preconditioned weighted Arnoldi method (PW-Arnoldi) and the preconditioned extrapolation accelerated Power-Arnoldi method (EXT-PArnoldi), in which the GG-Arnoldi method is replaced by our preconditioned Arnoldi method. All the matrices listed in Table 6 are tested. For all methods, the Krylov subspace dimension is set to m = 10, and the polynomial degree of the preconditioner is set to k = 5. For the MPIO method, the number of Power steps is set to 7, and the parameters β and η controlling the inner iterations are set to 0.5 and 0.1, respectively. For the EXT-Arnoldi and EXT-PArnoldi methods, the extrapolation technique is applied every 40 Power iterations, and the residual tolerance $\epsilon_1$ of the Power method is set to $10^{-6}$. Note that the parameter settings of the MPIO and EXT-Arnoldi methods are nearly the best settings reported in the literature [7,12]. The numerical results are presented in Table 11, where the smallest CPU time cost for each problem is marked with an asterisk.
Table 11 shows the clear potential of the proposed preconditioning strategy for accelerating the GG-Arnoldi method. The time costs of PGG-Arnoldi and PW-Arnoldi are significantly smaller than those of their unpreconditioned versions. Moreover, the time cost can often be reduced further when PGG-Arnoldi is combined with Power iterations and extrapolation techniques: the resulting EXT-PArnoldi outperforms all the other methods on four out of six problems, while PGG-Arnoldi is the fastest method on one problem and close to the best on the remaining one. Note that the GG-Arnoldi method is always slower than W-FOM or MPIO, whereas PGG-Arnoldi outperforms them in most cases. We conclude that the proposed preconditioning strategy can improve the efficiency of Arnoldi-type methods and make them faster than some other state-of-the-art methods for solving difficult PageRank problems.
Because Arnoldi-type methods are very competitive for computing PageRank, the developed preconditioning technique can be expected to accelerate the computation of PageRank problems arising in various fields. Besides, this technique can improve the efficiency of parallel PageRank computations. Accordingly, any project using the PageRank model can enjoy a faster solution process, which may improve the efficiency of scientific research, engineering and services. As described earlier, for the GeneRank problem [19] and the ProteinRank problem [20], users can identify the genes and proteins that are most likely pathogenic more quickly.

5. Conclusions

In this paper, we show that if a polynomial P satisfies $P(1) \neq P(\lambda_i)$ for all eigenvalues $|\lambda_i| < 1$, then the PageRank problem $Ax = x$ is equivalent to the new eigen-problem $P(A)x = P(1)x$. Moreover, with a suitable choice of the polynomial P, such as $P(x) = x^k$ chosen in this paper, the new eigen-problem can exhibit a much better separation between the two largest eigenvalues, so that Arnoldi-type methods can solve it in fewer iterations. Accordingly, the number of vector-vector operations with low parallelism in the solution process is reduced. Based on this result, we introduce a preconditioned version of the Golub and Greif variant of the refined Arnoldi method for computing PageRank. Numerical experiments demonstrate that this method can solve PageRank problems much faster than the refined Arnoldi method over a wide range of parameter settings, and that the weighted-Arnoldi and extrapolated-Arnoldi methods can also be accelerated by this preconditioning strategy. Finally, the preconditioned Arnoldi-type methods show superiority over some other state-of-the-art methods for solving this problem class.

Author Contributions

Methodology, Z.-L.S., B.C. and H.Y.; software, Z.-L.S. and H.Y.; writing—original draft preparation, Z.-L.S. and H.Y.; writing—review and editing, B.C., X.-M.G. and C.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Two-Way Support Programs of Sichuan Agricultural University (Grant No.1921993077). The third author is a member of the Gruppo Nazionale per il Calcolo Scientifico (GNCS) of the Istituto Nazionale di Alta Matematica (INdAM) and this work was partially supported by INdAM-GNCS under Progetti di Ricerca 2020.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Langville, A.N.; Meyer, C.D. Google’s PageRank and Beyond: The Science of Search Engine Rankings, 3rd ed.; Princeton University Press: Princeton, NJ, USA, 2006.
  2. Gleich, D.F. PageRank beyond the web. SIAM Rev. 2015, 57, 321–363.
  3. Golub, G.H.; Greif, C. An Arnoldi-type algorithm for computing PageRank. BIT 2006, 46, 759–771.
  4. Kamvar, S.D.; Haveliwala, T.H.; Golub, G.H. Adaptive methods for the computation of the PageRank. Linear Algebra Appl. 2004, 386, 51–65.
  5. Kamvar, S.D.; Haveliwala, T.H.; Manning, C.D.; Golub, G.H. Extrapolation methods for accelerating PageRank computation. In Proceedings of the 12th International World Wide Web Conference, Budapest, Hungary, 20–24 May 2003; pp. 261–270.
  6. Wu, G.; Wei, Y. An Arnoldi-Extrapolation algorithm for computing PageRank. J. Comput. Appl. Math. 2010, 234, 3196–3212.
  7. Tan, X. A new extrapolation method for PageRank computations. J. Comput. Appl. Math. 2017, 313, 383–392.
  8. Sterck, H.D.; Manteuffel, T.A.; McCormick, S.F.; Nguyen, Q.; Ruge, J. Multilevel adaptive aggregation for Markov chains, with application to web ranking. SIAM J. Sci. Comput. 2008, 30, 2235–2262.
  9. Shen, Z.-L.; Huang, T.-Z.; Carpentieri, B.; Wen, C.; Gu, X.-M. Block-accelerated aggregation multigrid for Markov chains with application to PageRank problems. Commun. Nonlinear Sci. Numer. Simul. 2018, 59, 472–487.
  10. Gleich, D.F.; Gray, A.P.; Greif, C.; Lau, T. An inner-outer iteration for computing PageRank. SIAM J. Sci. Comput. 2010, 32, 349–371.
  11. Gu, C.-Q.; Xie, F.; Zhang, K. A two-step matrix splitting iteration for computing PageRank. J. Comput. Appl. Math. 2015, 278, 19–28.
  12. Wen, C.; Huang, T.-Z.; Shen, Z.-L. A note on the two-step matrix splitting iteration for computing PageRank. J. Comput. Appl. Math. 2017, 315, 87–97.
  13. Tian, Z.-L.; Liu, L.; Zhang, Y.; Liu, Z.-Y.; Tian, M.-Y. The general inner-outer iteration method based on regular splittings for the PageRank problem. Appl. Math. Comput. 2019, 356, 479–501.
  14. Tian, M.-Y.; Liu, L.; Zhang, Y.; Wang, Y.-D. A general multi-splitting iteration method for computing PageRank. Comput. Appl. Math. 2019, 38, 1–29.
  15. Yin, J.-F.; Yin, G.-J.; Ng, M. On adaptively accelerated Arnoldi method for computing PageRank. Numer. Linear Algebra Appl. 2012, 19, 73–85.
  16. Wu, G.; Wei, Y. A Power-Arnoldi algorithm for computing PageRank. Numer. Linear Algebra Appl. 2007, 14, 521–546.
  17. Hu, Q.-Y.; Wen, C.; Huang, T.-Z.; Shen, Z.-L.; Gu, X.-M. A variant of the Power-Arnoldi algorithm for computing PageRank. J. Comput. Appl. Math. 2021, 381, 113034.
  18. Gu, C.-Q.; Wang, W.-W. An Arnoldi-Inout algorithm for computing PageRank problems. J. Comput. Appl. Math. 2017, 309, 219–229.
  19. Morrison, J.L.; Breitling, R.; Higham, D.J.; Gilbert, D.R. GeneRank: Using search engine technology for the analysis of microarray experiments. BMC Bioinform. 2005, 6, 233.
  20. Wu, G.; Zhang, Y.; Wei, Y. Accelerating the Arnoldi-type algorithm for the PageRank problem and the ProteinRank problem. J. Sci. Comput. 2013, 57, 74–104.
  21. Saad, Y. Variations on Arnoldi’s method for computing eigenelements of large unsymmetric matrices. Linear Algebra Appl. 1980, 34, 269–295.
  22. Jia, Z. Refined iterative algorithms based on Arnoldi’s process for large unsymmetric eigenproblems. Linear Algebra Appl. 1997, 259, 1–23.
  23. Haveliwala, T.; Kamvar, S. The Second Eigenvalue of the Google Matrix; Stanford University Technical Report; Stanford InfoLab: Stanford, CA, USA, 2004.
  24. Shen, Z.-L.; Huang, T.-Z.; Carpentieri, B.; Gu, X.-M.; Wen, C. An efficient elimination strategy for solving PageRank problems. Appl. Math. Comput. 2017, 298, 111–122.
  25. Shen, Z.-L.; Huang, T.-Z.; Carpentieri, B.; Wen, C.; Gu, X.-M.; Tan, X.-Y. Off-diagonal low-rank preconditioner for difficult PageRank problems. J. Comput. Appl. Math. 2019, 346, 456–470.
  26. Davis, T.A.; Hu, Y. The University of Florida sparse matrix collection. ACM Trans. Math. Softw. 2011, 38, 1:1–1:25.
  27. Boldi, P.; Vigna, S. The WebGraph Framework I: Compression Techniques. In Proceedings of the 13th International Conference on World Wide Web, New York, NY, USA, 17–20 May 2004; pp. 595–602.
  28. Boldi, P.; Rosa, M.; Santini, M.; Vigna, S. Layered label propagation: A multiresolution coordinate-free ordering for compressing social networks. In Proceedings of the 20th International Conference on World Wide Web, Hyderabad, India, 28 March–1 April 2011; pp. 587–596.
  29. Boldi, P.; Codenotti, B.; Santini, M.; Vigna, S. UbiCrawler: A scalable fully distributed Web crawler. Softw. Pract. Exp. 2004, 34, 711–726.
  30. Zhang, H.-F.; Huang, T.-Z.; Shen, Z.-L. FOM accelerated by an extrapolation method for solving PageRank problems. J. Comput. Appl. Math. 2016, 296, 397–409.
Table 1. Computational cost of the Arnoldi process (Algorithm 1).

Operation                        Line    Times
Matrix-vector multiplication     3       m
Inner-product                    5       m(m+1)/2
Vector scaling                   1, 12   m+1
Vector 2-norm computation        1, 8    m+1
SAXPY *                          6       m(m+1)/2

* SAXPY denotes the operation x + a*y, where a is a scalar and x, y are vectors.
Table 2. Computational cost of the Arnoldi process (Algorithm 1) for PageRank.

Operation                             Times
Sparse matrix-vector multiplication   m
Inner product                         m(m+3)/2
Vector scaling                        (m^2+5m+2)/2
Vector 2-norm computation             m+1
Vector addition                       m(m+3)/2
Table 3. The computational cost per cycle of the GG-Arnoldi method for PageRank.

Operation                             Times
Sparse matrix-vector multiplication   m
Inner-product                         m(m+3)/2
Vector scaling                        (m^2+7m+4)/2
Vector 2-norm computation             m+1
Vector addition                       (m^2+5m-2)/2
Vector 1-norm computation             1
Table 4. The computational cost per cycle of the PGG-Arnoldi method for PageRank.

Operation                             Times
Sparse matrix-vector multiplication   km
Inner-product                         m(m+2k+1)/2
Vector scaling                        (m^2+2km+5m+4)/2
Vector 2-norm computation             m+1
Vector addition                       (m^2+2km+3m-2)/2
Vector 1-norm computation             1
Table 5. Estimated algorithmic complexity of the GG-Arnoldi (Algorithm 2) and of its preconditioned version PGG-Arnoldi (Algorithm 3).

Operation                              GG-Arnoldi         PGG-Arnoldi
Sparse matrix-vector multiplications   m                  km
Inner-products                         (2m^2+11m+4)/2     (2m^2+4km+7m+4)/2
Table 6. Characteristics of the Web adjacency matrices G tested in our experiments. The symbol n is the dimension of the matrix, and nnz is the number of nonzero elements (listed in increasing matrix size).

Name             n           nnz          nnz/n^2
web-Stanford     281,903     2,312,497    2.9e-5
cnr-2000         325,557     3,216,152    3.0e-5
web-BerkStan     685,230     7,600,595    1.6e-5
eu-2005          862,664     19,235,140   2.6e-5
in-2004          1,382,908   16,917,053   8.8e-6
indochina-2004   7,414,866   194,109,311  3.5e-6
wb-edu           9,845,725   57,156,537   5.9e-7
Table 7. Performances of the PGG-Arnoldi method on the web_NotreDame problem.

Number of iterations
m     k=1   k=2   k=3   k=4   k=5   k=6   k=7   k=8   k=9   k=10
3     351   226   117   110   73    72    53    55    39    44
4     233   147   84    71    50    47    35    34    25    27
5     173   109   61    53    37    35    27    25    22    19
6     139   87    49    42    29    27    22    20    17    15
7     112   70    35    34    23    22    17    16    13    13
8     90    58    34    27    19    18    15    13    10    11
9     78    47    28    23    16    16    12    12    10    8
10    62    39    24    18    14    12    10    8     8     7

CPU time (in seconds)
m     k=1   k=2   k=3   k=4   k=5   k=6   k=7   k=8   k=9   k=10
3     8.0   6.7   4.6   5.1   4.0   4.5   3.8   4.3   3.5   4.4
4     6.4   5.8   4.3   4.4   3.7   4.0   3.4   3.7   3.0   3.6
5     5.9   5.4   3.9   4.2   3.5   3.7   3.2   3.4   3.3   3.1
6     5.9   5.2   3.7   4.0   3.4   3.4   3.2   3.3   3.0   2.9
7     5.5   4.8   3.1   3.7   2.9   3.3   2.9   3.0   2.7   3.0
8     5.0   4.6   3.5   3.3   2.8   3.1   2.9   2.8   2.4   2.9
9     5.1   4.3   3.2   3.2   2.7   3.1   2.7   2.9   2.7   2.4
10    4.4   3.9   3.1   2.9   2.6   2.6   2.4   2.2   2.4   2.3
Table 8. Performances of the PGG-Arnoldi method on the in-2004 problem.

Number of iterations
m     k=1   k=2   k=3   k=4   k=5   k=6   k=7   k=8   k=9   k=10
3     355   272   121   137   75    88    56    67    42    52
4     235   163   82    82    52    57    35    41    27    30
5     175   114   61    58    39    37    25    28    20    18
6     110   74    44    38    28    21    20    18    15    12
7     95    44    37    25    23    17    16    12    12    10
8     62    42    26    20    17    14    12    10    10    8
9     57    30    21    17    12    10    10    9     8     6
10    46    24    17    18    10    10    8     7     7     6

CPU time (in seconds)
m     k=1   k=2   k=3   k=4   k=5   k=6   k=7   k=8   k=9   k=10
3     41.5  46.3  27.3  38.6  25.4  34.9  25.3  34.0  23.7  32.2
4     35.9  37.2  24.9  30.9  23.9  30.0  20.9  27.5  20.1  24.6
5     33.2  32.2  22.9  27.3  22.3  24.2  18.9  23.7  18.8  21.4
6     27.7  25.5  21.6  22.7  20.2  20.4  18.2  18.2  16.8  14.8
7     25.7  17.6  19.8  16.5  18.3  15.7  17.6  14.7  16.3  14.7
8     19.8  22.0  15.9  15.2  15.5  14.8  14.5  13.6  15.0  13.3
9     20.7  15.8  14.6  14.6  12.3  12.0  14.1  13.8  13.8  11.2
10    18.6  14.3  13.2  17.3  11.5  13.3  12.1  11.9  13.2  12.5
Table 9. Performances of the weighted PGG-Arnoldi method on the web_NotreDame problem.

Number of iterations
m     k=1   k=2   k=3   k=4   k=5   k=6   k=7   k=8   k=9   k=10
3     355   226   114   112   71    71    52    54    39    43
4     231   145   82    69    48    47    35    33    25    27
5     199   108   62    54    32    35    25    24    19    20
6     128   81    48    41    25    26    19    19    14    15
7     86    60    30    31    20    22    14    15    11    13
8     59    33    27    25    14    17    12    12    10    11
9     60    30    19    17    11    12    11    9     7     7
10    42    24    13    16    11    9     8     9     7     7

CPU time (in seconds)
m     k=1   k=2   k=3   k=4   k=5   k=6   k=7   k=8   k=9   k=10
3     10.3  8.3   5.2   6.0   4.4   5.0   4.1   4.7   3.7   4.4
4     8.6   7.0   4.9   4.9   3.9   4.3   3.6   3.8   3.5   4.0
5     9.6   6.8   4.7   4.7   3.2   4.0   3.3   3.5   2.9   3.4
6     7.1   5.8   4.4   4.3   3.0   3.6   2.9   3.2   2.6   3.0
7     5.6   5.0   3.1   3.8   2.8   3.5   2.5   3.0   2.4   3.1
8     4.4   3.2   3.2   3.5   2.3   3.1   2.5   2.7   2.5   3.0
9     5.1   3.3   2.5   2.7   2.0   2.5   2.5   2.3   2.0   2.1
10    4.0   2.9   1.9   2.8   2.2   2.1   2.1   2.6   2.2   2.4
Table 10. Performances of the weighted PGG-Arnoldi method on the in-2004 problem.

Number of iterations
m     k=1   k=2   k=3   k=4   k=5   k=6   k=7   k=8   k=9   k=10
3     354   269   118   133   73    87    53    65    40    53
4     231   156   81    82    49    50    33    38    26    17
5     161   59    53    33    33    28    23    20    17    13
6     88    35    32    21    21    15    16    13    12    9
7     49    25    24    15    16    10    13    8     9     9
8     38    24    15    14    10    9     10    8     8     6
9     36    23    14    10    11    8     8     7     7     5
10    34    17    12    10    8     8     6     6     6     5

CPU time (in seconds)
m     k=1   k=2   k=3   k=4   k=5   k=6   k=7   k=8   k=9   k=10
3     53.5  54.2  30.4  41.7  27.3  37.2  25.7  35.2  24.2  34.7
4     44.2  41.6  27.6  34.6  24.0  28.4  21.1  27.1  20.5  14.6
5     38.1  19.5  22.5  17.1  20.2  20.3  18.8  17.8  16.7  14.0
6     25.0  13.9  16.3  13.0  15.6  12.7  15.2  13.9  14.1  11.6
7     16.3  11.6  14.2  10.8  13.7  9.9   14.5  10.0  12.4  13.5
8     14.5  12.7  10.2  11.6  9.8   10.1  12.8  11.4  12.6  10.3
9     15.7  13.9  10.8  9.4   12.2  10.2  11.6  11.3  12.9  9.9
10    16.8  11.5  10.4  10.7  9.9   11.4  9.7   10.8  11.9  10.9
Table 11. Comparative analysis of different methods for solving PageRank problems with α = 0.99. Notation: MV denotes the number of matrix-vector multiplications computed to achieve convergence, CPUt is the CPU time cost (in seconds); the smallest CPUt for each problem is marked with *.

              web-Stanford      web-NotreDame     in-2004
Method        MV     CPUt       MV     CPUt       MV     CPUt
FOM           271    2.10       721    4.87       491    18.70
W-FOM         291    3.16       411    4.12       281    13.77
GG-Arnoldi    421    3.40       621    4.35       461    17.89
PGG-Arnoldi   401    2.12       701    2.58       501    11.38
PW-Arnoldi    401    2.30       551    2.29       401    9.87
Power         1141   5.75       910    3.04       1094   23.78
EXT-Arnoldi   375    2.02*      501    2.33       430    11.21
EXT-PArnoldi  415    2.20       451    1.69*      400    9.33*
MPIO          800    3.99       803    2.65       809    17.20

              wiki-Talk         indochina         web-edu
Method        MV     CPUt       MV     CPUt       MV     CPUt
FOM           61     2.99       641    187.39     621    144.03
W-FOM         61     4.21       551    184.88     601    183.14
GG-Arnoldi    101    5.22       611    175.33     591    138.89
PGG-Arnoldi   101    2.28*      651    129.38     701    81.56
PW-Arnoldi    101    2.56       501    104.23     551    70.53
Power         688    15.16      933    179.49     942    104.95
EXT-Arnoldi   252    6.10       453    105.44     459    78.58
EXT-PArnoldi  292    6.71       513    101.77*    529    62.76*
MPIO          784    16.53      715    135.01     650    70.08