Article

Observations on the Computation of Eigenvalue and Eigenvector Jacobians †

Andrew J. Liounis, John A. Christian and Shane B. Robinson
1 NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA
2 Rensselaer Polytechnic Institute, Troy, NY 12180, USA
3 NASA Johnson Space Center, Houston, TX 77058, USA
* Author to whom correspondence should be addressed.
† A portion of this work appears in A.J. Liounis’s M.S. Thesis from West Virginia University.
Submission received: 24 September 2019 / Revised: 16 November 2019 / Accepted: 18 November 2019 / Published: 20 November 2019

Abstract
Many scientific and engineering problems benefit from analytic expressions for eigenvalue and eigenvector derivatives with respect to the elements of the parent matrix. While there exists extensive literature on the calculation of these derivatives, which take the form of Jacobian matrices, there are a variety of deficiencies that have yet to be addressed—including the need for both left and right eigenvectors, limitations on the matrix structure, and issues with complex eigenvalues and eigenvectors. This work addresses these deficiencies by proposing a new analytic solution for the eigenvalue and eigenvector derivatives. The resulting analytic Jacobian matrices are numerically efficient to compute and are valid for the general complex case. It is further shown that this new general result collapses to previously known relations for the special cases of real symmetric matrices and real diagonal matrices. Finally, the new Jacobian expressions are validated using forward finite differencing and performance is compared with another technique.

1. Introduction

There are many problems that make use of the eigenvalue or eigenvector of a matrix in their solution. Because of this, it is often beneficial to be able to calculate the Jacobians of eigenvalues and eigenvectors with respect to the elements of the matrix from which they were computed. For example, eigenvalues and eigenvectors are used throughout finite-element analysis (FEA) solutions to vibration problems, where the goal is often to minimize a structure’s sensitivity to various parameters through the use of eigenvalue/eigenvector derivatives [1]. As a second example, there are many instances in which the solution for the optimal estimate of a parameter vector takes the form of an eigenvalue problem—such as quaternion-based spacecraft attitude estimation [2,3] or ellipse fitting in computer vision [4]. For these types of problems, it is often important to understand the uncertainty in the optimal estimate, which requires knowledge of the eigenvector Jacobian. Other examples abound, such as direction-of-arrival estimation of plane waves [5], epipolar geometry in computer vision [6], pose estimation in robotics [7], spacecraft optical navigation [8], or infectious disease epidemiology [9]. In all of these problems, and in many others, it can be useful (or, in some cases, necessary) to have an analytic expression for the Jacobians of eigenvalues and eigenvectors with respect to the entries in their parent matrix.
Because of the popularity of problems requiring the calculation of eigenvalues, eigenvectors, and their Jacobians, much work has already been devoted to these topics. The prior work on eigenvalue and eigenvector Jacobians can largely be grouped into three basic categories: modal expansion techniques, direct techniques, and techniques based on Nelson’s method. Modal techniques mimic the idea of modal expansion used in FEA, whereby the eigenvector Jacobian is expressed as a linear combination of the other eigenvectors of the system [10,11,12,13,14,15,16,17]. Direct solutions find the Jacobians through the solution of a linear system of equations [18,19,20,21,22,23,24,25]. Solutions based on Nelson’s method find the Jacobians by considering a homogeneous and particular solution [26,27,28,29,30,31,32,33]. Finally, there is a grouping of solutions that are not easily fit into the above categories [34,35,36,37,38,39]. For a more thorough discussion of this literature, see [39].
In light of the extensive literature already available on the topic, it may not be clear why more work is needed. After careful review, however, a few limitations are apparent with the existing methods. First, the majority of the above literature only presents solutions for the derivatives of the eigenvalues and eigenvectors with respect to a single element of the parent matrix. While these single element derivatives may be assembled to create a Jacobian, their use is cumbersome and the expressions are not compact. In the literature reviewed by these authors, expressions for the full Jacobians of the eigenvalues and eigenvectors only appear once and only for the special case of a symmetric parent matrix [34]. Second, in the majority of the literature, the derivatives are found using both the left and right eigenvectors of the system or by simultaneously solving a system of equations for both derivatives at once. While it is straightforward to find both the left and right eigensystems, calculating the left eigenvector in order to find the Jacobian of the right eigenvector is an unnecessary extra step that we seek to circumvent. Finally, much of the literature on this topic is only valid when the parent matrix and its eigenvalues and eigenvectors are real due to the choice of the normalization of the eigenvectors.
Due to these observed deficiencies, the authors of this paper proposed a new method in [39] that did not involve the left eigenvector, did not require the simultaneous solution of both the eigenvector and eigenvalue derivatives, provided the full Jacobian matrices with respect to every element of the parent matrix, and provided a solution capable of handling any matrix with real or complex elements and real or complex eigenvalues and eigenvectors. This new method found the eigenvalue Jacobian by considering the characteristic equation and solved for the eigenvector Jacobian by using the results of the eigenvalue Jacobian along with the normalization condition for the eigenvector.
While the method introduced in [39] addressed the deficiencies described above, it did produce some new issues—namely that the calculation of the eigenvalue Jacobian was computationally expensive. Additionally, the eigenvector Jacobian relied on the computation of the eigenvalue Jacobian, which could lead to numerical stability issues in large matrices ( n > 30 ).
This paper introduces novel improvements to [39] that address both of these shortcomings. Specifically, the present work introduces a much simpler solution that addresses all of the original deficiencies in the eigenvalue and eigenvector derivative literature and also addresses the new issues associated with the technique in [39]. The simpler solution leads to an efficient, compact, and intuitive algorithm. The compact results presented for an arbitrary real or complex matrix (with real or complex eigenvalues and eigenvectors) are found to collapse even further for special cases and to reveal new insight into the structure of this fascinating problem. Thus, the novel approach for computing eigenvalue and eigenvector Jacobians introduced in this manuscript is exceedingly general, is applicable to a wide range of problems, and represents a significant improvement to earlier methods.
This paper is organized as follows. First, a brief discussion on the analyticity of eigenvector normalizations is presented. Then, a review of the solution proposed in [39] is presented. Next, these results are modified to arrive at the new technique, which is efficient to compute and is valid for general complex matrices. This general result is then shown to collapse to known relations from the literature for the special cases of real symmetric matrices and real diagonal matrices. The new derivatives are then validated numerically through comparison with forward finite differencing. Finally, the execution time on a digital computer is compared for the technique proposed in [39] and for the new technique proposed in this paper.

2. Existence of Eigenvalue and Eigenvector Jacobians

The standard algebraic eigenvalue problem is defined as
$\mathbf{A} \mathbf{v} = \lambda \mathbf{v} ,$
where $\mathbf{A} \in \mathbb{C}^{n \times n}$ is an $n \times n$ square matrix, $\mathbf{v} \in \mathbb{C}^{n}$ is an eigenvector of the system, and $\lambda \in \mathbb{C}$ is an eigenvalue of the system. As should be apparent, each eigenvalue and eigenvector pair is only implicitly defined according to
$( \mathbf{A} - \lambda \mathbf{I} ) \mathbf{v} = \mathbf{0} .$
It is well known that eigenvectors corresponding to simple eigenvalues are only unique when a constraint is placed on their norm (i.e., “length”). In addition, the eigenvector normalization can be expressed implicitly as
$\mathbf{x} \mathbf{v} - \alpha = 0 ,$
where x is a row vector formed by choosing a column normalization vector and transposition method and α is a normalization constant. Although the choice of α is arbitrary, we often choose α = 1 . These two implicit vector functions can be combined to form a single implicit vector equation:
$\mathbf{f} ( \mathbf{A} , \mathbf{y} ) = \begin{bmatrix} ( \mathbf{A} - \lambda \mathbf{I} ) \mathbf{v} \\ \mathbf{x} \mathbf{v} - \alpha \end{bmatrix} = \mathbf{0}_{( n + 1 ) \times 1} ,$
where $\mathbf{y} = [ \mathbf{v}^T \;\; \lambda ]^T$. To determine if eigenvalue and eigenvector derivatives exist with respect to the elements of $\mathbf{A}$, we must determine if both $\lambda$ and $\mathbf{v}$ can be expressed as an explicit function of $\mathbf{A}$. The implicit function theorem can be used to test if this is possible.
The implicit function theorem states that, as long as the determinant of the Jacobian of $\mathbf{f}$ with respect to $\mathbf{y}$ can be found and is nonzero, $\mathbf{y}$ can be expressed as an explicit function of the remaining arguments of $\mathbf{f}$ (here, the elements of $\mathbf{A}$). Therefore, consider the Jacobian of $\mathbf{f}$ as defined in Equation (4):
$\mathbf{J} = \frac{\partial \mathbf{f}}{\partial \mathbf{y}} = \begin{bmatrix} \mathbf{A} - \lambda \mathbf{I} & -\mathbf{v} \\ \frac{\partial ( \mathbf{x} \mathbf{v} )}{\partial \mathbf{v}} & 0 \end{bmatrix} ,$
where $\partial ( \mathbf{x} \mathbf{v} ) / \partial \mathbf{v}$ will be a $1 \times n$ row vector from the rules of matrix vector calculus. Recognizing the block structure of the Jacobian, it can be shown that the determinant is given by
$| \mathbf{J} | = ( -1 ) | \mathbf{A} - \lambda \mathbf{I} | + \left| \mathbf{A} - \lambda \mathbf{I} + \mathbf{v} \frac{\partial ( \mathbf{x} \mathbf{v} )}{\partial \mathbf{v}} \right| = \left| \mathbf{A} - \lambda \mathbf{I} + \mathbf{v} \frac{\partial ( \mathbf{x} \mathbf{v} )}{\partial \mathbf{v}} \right| .$
Therefore, in order to evaluate the determinant of the Jacobian, we must evaluate $\partial ( \mathbf{x} \mathbf{v} ) / \partial \mathbf{v}$ for the specific normalization described by the row vector $\mathbf{x}$. The following subsections discuss two potential eigenvector normalizations.

2.1. Constraining the Eigenvectors to the Unit Hypersphere

Begin with the most commonly used normalization for the standard eigenvalue/eigenvector problem, x = v H . Substitution into Equation (3) leads to:
$\mathbf{v}^H \mathbf{v} = \alpha \;\;\Longrightarrow\;\; -\alpha + \mathbf{v}^H \mathbf{v} = 0 ,$
which constrains the eigenvector to lie on the hypersphere of radius $\sqrt{\alpha}$. As usual, the superscript $H$ indicates the Hermitian (conjugate) transpose ($\mathbf{A}^H = \overline{\mathbf{A}}^{\,T}$). Expressing this normalization in terms of the elements of $\mathbf{v}$ gives
$-\alpha + \sum_{k=1}^{n} \overline{v_k} \, v_k = -\alpha + \sum_{k=1}^{n} ( x_k - i y_k ) ( x_k + i y_k ) = -\alpha + \sum_{k=1}^{n} x_k^2 + y_k^2 ,$
where $v_k = x_k + i y_k$ is the $k$th element of the eigenvector $\mathbf{v}$ and $\overline{\bullet}$ is the complex conjugate of $\bullet$.
We proceed by attempting to determine the partial derivative of this normalization with respect to the eigenvector in a domain around the eigenvector. First, since the eigenvector may be complex requiring a complex partial derivative, we need to check that the normalization equation is analytic in a region around the eigenvector. This is done by using the Cauchy–Riemann equations. Since the partial derivative and summation operators are linear, the required partial derivatives for checking the Cauchy–Riemann equations are
$\frac{\partial}{\partial x_j} \mathrm{Re} \left[ \sum_{k=1}^{n} x_k^2 + y_k^2 \right] = 2 x_j ,$
$\frac{\partial}{\partial x_j} \mathrm{Im} \left[ \sum_{k=1}^{n} x_k^2 + y_k^2 \right] = 0 ,$
and
$\frac{\partial}{\partial y_j} \mathrm{Re} \left[ \sum_{k=1}^{n} x_k^2 + y_k^2 \right] = 2 y_j ,$
$\frac{\partial}{\partial y_j} \mathrm{Im} \left[ \sum_{k=1}^{n} x_k^2 + y_k^2 \right] = 0 ,$
where $\mathrm{Re}[\bullet]$ takes the real component of $\bullet$ and $\mathrm{Im}[\bullet]$ takes the imaginary component of $\bullet$. Now, using the Cauchy–Riemann equations, it is easy to see that this function is differentiable only when $x_j$ and $y_j$ are 0 and is analytic nowhere. In other words, this normalization is only differentiable when $\mathbf{v} = \mathbf{0}$, which is never a valid eigenvector. Therefore, this normalization cannot be used when the eigenvectors are complex, because the normalization is neither differentiable nor analytic for valid eigenvectors. We do note, however, that when $\mathbf{v} \in \mathbb{R}^{n}$ there are no issues with using this normalization.
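This analyticity argument is easy to see numerically. The short sketch below is an illustration added here (not part of the original analysis); the vectors and seed are arbitrary. It evaluates the complex difference quotient of each candidate normalization along a real and an imaginary step in one component: the two limits disagree for $\mathbf{v}^H \mathbf{v}$ but agree for the $\mathbf{v}_0^H \mathbf{v}$ normalization introduced in the next subsection.

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.standard_normal(4) + 1j * rng.standard_normal(4)
v0 = rng.standard_normal(4) + 1j * rng.standard_normal(4)
eps, j = 1e-7, 2
e = np.zeros(4)
e[j] = 1.0

def quotient(f, step):
    # complex difference quotient of f at v along a perturbation of component j
    return (f(v + step * e) - f(v)) / step

for name, f in [("v^H v ", lambda x: np.vdot(x, x)),   # np.vdot conjugates its first argument
                ("v0^H v", lambda x: np.vdot(v0, x))]:
    print(name, quotient(f, eps), quotient(f, 1j * eps))
# v^H v  : the two quotients differ (~2*Re(v[j]) vs ~2j*Im(v[j])) -> not analytic
# v0^H v : both quotients equal conj(v0[j])                       -> analytic
```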

2.2. Constraining the Eigenvectors to a Hyperplane

Now, consider an eigenvector normalization which constrains the length of the projection of $\mathbf{v}$ onto an arbitrary vector, $\mathbf{v}_0$ (so long as $\mathbf{v}_0^H \mathbf{v} \neq 0$, where we stress here that $\mathbf{v}_0$ is not functionally dependent on $\mathbf{v}$, $\lambda$, or $\mathbf{A}$). Such a normalization occurs when $\mathbf{x} = \mathbf{v}_0^H$ and constrains $\mathbf{v}$ to lie on a hyperplane. Therefore, substituting this into Equation (3) leads to:
$\mathbf{v}_0^H \mathbf{v} = \alpha \;\;\Longrightarrow\;\; -\alpha + \mathbf{v}_0^H \mathbf{v} = 0 .$
Specifically, note that the eigenvector $\mathbf{v}$ is now constrained to lie on a hyperplane tangent to the hypersphere of radius $\alpha / \| \mathbf{v}_0 \|$ at the point $\mathbf{v}_0$. Expressing this normalization in terms of the elements of $\mathbf{v}$ and $\mathbf{v}_0$ gives
$-\alpha + \sum_{k=1}^{n} \overline{v_{0_k}} \, v_k = -\alpha + \sum_{k=1}^{n} ( a_k - i b_k ) ( x_k + i y_k ) = -\alpha + \sum_{k=1}^{n} a_k x_k + b_k y_k + i ( a_k y_k - b_k x_k ) ,$
where $v_k$ is as defined before and $v_{0_k} = a_k + i b_k$ is the $k$th element of $\mathbf{v}_0$.
As before, since we are seeking a complex partial derivative, the Cauchy–Riemann equations are used to check if the partial derivative of the normalization is analytic in a region around the eigenvector. The partial derivatives with respect to the components of the $j$th element of the eigenvector are
$\frac{\partial}{\partial x_j} \mathrm{Re} \left[ \sum_{k=1}^{n} a_k x_k + b_k y_k + i ( a_k y_k - b_k x_k ) \right] = a_j ,$
$\frac{\partial}{\partial x_j} \mathrm{Im} \left[ \sum_{k=1}^{n} a_k x_k + b_k y_k + i ( a_k y_k - b_k x_k ) \right] = -b_j ,$
and
$\frac{\partial}{\partial y_j} \mathrm{Re} \left[ \sum_{k=1}^{n} a_k x_k + b_k y_k + i ( a_k y_k - b_k x_k ) \right] = b_j ,$
$\frac{\partial}{\partial y_j} \mathrm{Im} \left[ \sum_{k=1}^{n} a_k x_k + b_k y_k + i ( a_k y_k - b_k x_k ) \right] = a_j ,$
which satisfy the Cauchy–Riemann equations for any choice of $a_j$ and $b_j$, indicating that the normalization is analytic in all of $\mathbb{C}^{n}$. Since the Cauchy–Riemann equations are satisfied, the vector form of the partial derivative of the eigenvector normalization constraint with respect to the eigenvector is given as
$\frac{\partial}{\partial \mathbf{v}} \left( \mathbf{v}_0^H \mathbf{v} - \alpha \right) = \mathbf{v}_0^H$
using the rules of matrix vector calculus.
This allows the implicit function theorem to be used to check whether the eigenvalues and eigenvectors are guaranteed to be analytic functions of $\mathbf{A}$ on some domain centered at their nominal values. Substituting Equation (19) into Equation (6) gives
$| \mathbf{J} | = \left| \mathbf{A} - \lambda \mathbf{I} + \mathbf{v} \mathbf{v}_0^H \right| ,$
which is guaranteed to be full rank and have a non-zero determinant if $\lambda$ is a simple eigenvalue (defined as an eigenvalue with a multiplicity of 1) and $\mathbf{v}_0^H \mathbf{v} \neq 0$, as required above. Therefore, using the implicit function theorem with this normalization, it is guaranteed that the eigenvectors and eigenvalues are analytic functions of $\mathbf{A}$ in some domain around their nominal values for simple eigenvalues. Due to this, the rest of this paper will use the normalization $\mathbf{v}_0^H \mathbf{v} = \alpha$ and consider the case when the eigenvalue is simple.
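As a quick numerical sanity check (not from the paper), the sketch below confirms that, for a simple eigenvalue of an arbitrary random complex matrix, $| \mathbf{A} - \lambda \mathbf{I} |$ vanishes while the determinant of Equation (20) does not, so the implicit function theorem applies. The random matrix, seed, and the choice $\mathbf{v}_0 = \mathbf{v}$ are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
lams, vecs = np.linalg.eig(A)
lam, v = lams[0], vecs[:, 0]      # a (generically simple) eigenpair
v0 = v.copy()                     # any v0 with v0^H v != 0 works; v0 = v is the usual choice

print(abs(np.linalg.det(A - lam * np.eye(n))))                            # ~0: singular by definition
print(abs(np.linalg.det(A - lam * np.eye(n) + np.outer(v, v0.conj()))))   # clearly nonzero
```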

3. Previous Work

In [39], the authors of this paper presented a new technique for calculating the eigenvalue and eigenvector Jacobians. This technique allows the calculation of the full Jacobian matrices for both the eigenvalues and eigenvectors using just the eigenvalue and eigenvector being considered. Some key results from this prior work are now reviewed.
Begin with the standard eigenvalue problem
$\mathbf{A} \mathbf{v} = \lambda \mathbf{v} ,$
where λ is a simple eigenvalue of A and v is its corresponding eigenvector. Now, take the partial derivative of this relation with respect to the vectorized version of the matrix A to get
$( \mathbf{A} - \lambda \mathbf{I} ) \frac{\partial \mathbf{v}}{\partial \mathbf{A}_{\mathrm{vec}}} = \mathbf{v} \frac{\partial \lambda}{\partial \mathbf{A}_{\mathrm{vec}}} - \left( \mathbf{v}^T \otimes \mathbf{I} \right) ,$
where $\mathbf{A}_{\mathrm{vec}}$ is formed by stacking the columns of $\mathbf{A}$ into a column vector and $\otimes$ indicates the Kronecker product. This results in a single equation with two unknowns, $\partial \mathbf{v} / \partial \mathbf{A}_{\mathrm{vec}}$ and $\partial \lambda / \partial \mathbf{A}_{\mathrm{vec}}$, and thus a second equation is needed. As described in [39], the eigenvalue derivative can be calculated by considering the characteristic equation for the matrix $\mathbf{A}$. The characteristic equation for $\mathbf{A}$ can be expressed using the notation of exterior algebra [40] as
$| \lambda \mathbf{I} - \mathbf{A} | = \sum_{k=0}^{n} \lambda^{n-k} ( -1 )^k \, \mathrm{Tr} \left( \wedge^k \mathbf{A} \right) = 0 ,$
where $n$ is the dimension of $\mathbf{A}$, $| \bullet |$ indicates the determinant, $\wedge^k \mathbf{A}$ is the $k$th exterior power of $\mathbf{A}$, and [41]
$\mathrm{Tr} \left( \wedge^k \mathbf{A} \right) = \frac{| \mathbf{Q}_k |}{k !}$
$\mathbf{Q}_k = \begin{bmatrix} \mathrm{Tr} \, \mathbf{A} & k-1 & 0 & \cdots & 0 \\ \mathrm{Tr} \, \mathbf{A}^2 & \mathrm{Tr} \, \mathbf{A} & k-2 & \cdots & 0 \\ \mathrm{Tr} \, \mathbf{A}^3 & \mathrm{Tr} \, \mathbf{A}^2 & \mathrm{Tr} \, \mathbf{A} & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \mathrm{Tr} \, \mathbf{A}^k & \mathrm{Tr} \, \mathbf{A}^{k-1} & \mathrm{Tr} \, \mathbf{A}^{k-2} & \cdots & \mathrm{Tr} \, \mathbf{A} \end{bmatrix} .$
With the above expression for the characteristic equation, the solution for the eigenvalue Jacobian is easy to find:
$\frac{\partial \lambda}{\partial \mathbf{A}_{\mathrm{vec}}} = - \frac{\sum_{k=0}^{n} \lambda^{n-k} \frac{\partial c_k}{\partial \mathbf{A}_{\mathrm{vec}}}}{\sum_{k=0}^{n-1} c_k ( n - k ) \lambda^{n-k-1}} ,$
where
$c_k = ( -1 )^k \, \mathrm{Tr} \left( \wedge^k \mathbf{A} \right) ,$
$\frac{\partial c_k}{\partial \mathbf{A}_{\mathrm{vec}}} = \frac{( -1 )^k}{k !} \frac{\partial | \mathbf{Q}_k |}{\partial \mathbf{A}_{\mathrm{vec}}} ,$
$\frac{\partial | \mathbf{Q}_k |}{\partial \mathbf{A}_{\mathrm{vec}}} = | \mathbf{Q}_k | \, \mathrm{vec} \left( \mathbf{Q}_k^{-T} \right)^T \begin{bmatrix} \mathrm{vec} ( \mathbf{I} )^T \\ 2 \, \mathrm{vec} ( \mathbf{A}^T )^T \\ 3 \, \mathrm{vec} ( ( \mathbf{A}^2 )^T )^T \\ \vdots \\ k \, \mathrm{vec} ( ( \mathbf{A}^{k-1} )^T )^T \\ \mathbf{0}_{1 \times n^2} \\ \mathrm{vec} ( \mathbf{I} )^T \\ 2 \, \mathrm{vec} ( \mathbf{A}^T )^T \\ \vdots \\ ( k-1 ) \, \mathrm{vec} ( ( \mathbf{A}^{k-2} )^T )^T \\ \mathbf{0}_{2 \times n^2} \\ \vdots \\ \mathbf{0}_{( k-1 ) \times n^2} \\ \mathrm{vec} ( \mathbf{I} )^T \end{bmatrix} .$
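The characteristic-equation machinery reviewed above can be exercised numerically. The sketch below is not the reference implementation from [39]; it simply builds the $\mathbf{Q}_k$ matrices as reconstructed here and checks that $c_k = (-1)^k | \mathbf{Q}_k | / k!$ reproduces the characteristic-polynomial coefficients returned by numpy.poly. The helper name char_poly_coeffs and the random test matrix are assumptions.

```python
import numpy as np
from math import factorial

def char_poly_coeffs(A):
    """Coefficients c_0..c_n of |lam*I - A| = sum_k lam^(n-k) c_k via the Q_k determinants."""
    n = A.shape[0]
    traces = [np.trace(np.linalg.matrix_power(A, j)) for j in range(1, n + 1)]
    coeffs = [1.0 + 0j]                              # c_0 = 1
    for k in range(1, n + 1):
        Q = np.zeros((k, k), dtype=complex)
        for r in range(k):
            for c in range(k):
                if c <= r:
                    Q[r, c] = traces[r - c]          # Tr(A^(r-c+1)) on and below the diagonal
                elif c == r + 1:
                    Q[r, c] = k - 1 - r              # superdiagonal k-1, k-2, ..., 1
        coeffs.append((-1) ** k * np.linalg.det(Q) / factorial(k))
    return np.array(coeffs)

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
print(np.max(np.abs(char_poly_coeffs(A) - np.poly(A))))   # agreement to near machine precision
```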
Now, with an independent equation for the eigenvalue Jacobian in hand, return to Equation (22) to solve for the eigenvector Jacobian. While it would appear simple to calculate the derivative of the eigenvectors directly from Equation (22), remember that the term $\mathbf{A} - \lambda \mathbf{I}$ is rank deficient (by the definition of an eigenvalue) and is therefore not invertible. In order to solve this problem, we must make use of a normalization condition to make the eigenvectors unique. As discussed in the preceding section (and in [34]), it is important to ensure that the normalization chosen makes the eigenvector an analytic function of $\mathbf{A}$; therefore, choose the normalization
$\mathbf{v}_0^H \mathbf{v} = \alpha ,$
where $\mathbf{v}_0$ is any non-zero vector that is not orthogonal to $\mathbf{v}$ and $\alpha$ is any real non-zero scalar value. In practice, $\mathbf{v}_0$ is usually chosen to be numerically equal to $\mathbf{v}$, as this gives rise to a more intuitive interpretation (as we discuss later); however, even when this is the case, it is still important to remember that $\mathbf{v}_0$ is not a function of $\mathbf{A}$ or $\mathbf{v}$.
In addition to making the eigenvectors analytic, Equation (30) also leads to another important relation. Consider the derivative of Equation (30) with respect to A vec ,
$\mathbf{v}_0^H \frac{\partial \mathbf{v}}{\partial \mathbf{A}_{\mathrm{vec}}} = \mathbf{0}_{1 \times n^2} ,$
which shows that the normalization vector v 0 is orthogonal to the column space of the eigenvector Jacobian.
Using the above normalization condition and its derivative, perform a rank-one update to the matrix on the left-hand side of Equation (22) with the so-called Null Space Matrix, $\mathbf{N}$, in order to make $\mathbf{A} - \lambda \mathbf{I}$ invertible. This approach is discussed further in [39]. This allows for the solution of the eigenvector derivative as
$\frac{\partial \mathbf{v}}{\partial \mathbf{A}_{\mathrm{vec}}} = ( \mathbf{A} - \lambda \mathbf{I} + \mathbf{N} )^{-1} \left[ \mathbf{v} \frac{\partial \lambda}{\partial \mathbf{A}_{\mathrm{vec}}} - \left( \mathbf{v}^T \otimes \mathbf{I} \right) \right] ,$
where
$\mathbf{N} = \sigma \, \mathbf{v}_0 \mathbf{v}_0^H$
is the Null Space Matrix and $\sigma$ is a scaling term chosen to make $\mathbf{N}$ approximately the same order as $\mathbf{A} - \lambda \mathbf{I}$ (it is usually sufficient to let the scale be $\sigma = \mathrm{Tr} \, \mathbf{A}$). Including the Null Space Matrix, as in Equation (33), is the same as adding zero due to the relation in Equation (31). Note, of course, that a similar end objective may be accomplished by using the pseudoinverse of $\mathbf{A} - \lambda \mathbf{I}$ instead of including a Null Space Matrix (an example of this alternative approach may be found in [34]).
The term $\mathbf{A} - \lambda \mathbf{I}$ is rank deficient with a null space in the direction of $\mathbf{v}$, while the rank-one Null Space Matrix only contains information in the direction of $\mathbf{v}_0$. Thus, because $\mathbf{v}_0$ is required to not be orthogonal to $\mathbf{v}$, the quantity $\mathbf{A} - \lambda \mathbf{I} + \mathbf{N}$ is guaranteed to be full rank. For further discussion of the use of the Null Space Matrix and its properties, refer to [39].
This concludes the review of the technique proposed in [39] and attention is now turned to the simplification of these methods.

4. Compact Expressions for Eigenvalue and Eigenvector Jacobians

Using the insights from [39], it is now possible to arrive at a simpler and more efficient solution. Beginning again with Equation (21), left multiply by v 0 H in order to form a new equation:
$\mathbf{v}_0^H \mathbf{A} \mathbf{v} = \lambda \mathbf{v}_0^H \mathbf{v} .$
Recall that, while it is common to set $\mathbf{v}_0 = \mathbf{v}$ numerically, $\mathbf{v}_0$ is not a function of $\mathbf{A}$. Taking the derivative of Equation (34) leads to
$\mathbf{v}_0^H \mathbf{A} \frac{\partial \mathbf{v}}{\partial \mathbf{A}_{\mathrm{vec}}} + \left( \mathbf{v}^T \otimes \mathbf{v}_0^H \right) = \mathbf{v}_0^H \mathbf{v} \frac{\partial \lambda}{\partial \mathbf{A}_{\mathrm{vec}}} + \lambda \, \mathbf{v}_0^H \frac{\partial \mathbf{v}}{\partial \mathbf{A}_{\mathrm{vec}}}$
through simple application of the chain rule and identities pertaining to the vectorization of a matrix. Note again that a superscript $T$ indicates a standard transpose while a superscript $H$ indicates the Hermitian (or conjugate) transpose. Now, recalling Equation (31), the rightmost term of Equation (35) vanishes. Thus, after incorporating Equation (30) and a few simple rearrangements, one finds
$\frac{\partial \lambda}{\partial \mathbf{A}_{\mathrm{vec}}} = \frac{\mathbf{v}_0^H \mathbf{A} \frac{\partial \mathbf{v}}{\partial \mathbf{A}_{\mathrm{vec}}} + \left( \mathbf{v}^T \otimes \mathbf{v}_0^H \right)}{\alpha} ,$
which expresses the eigenvalue Jacobian as a function of $\mathbf{A}$, $\mathbf{v}$, $\mathbf{v}_0$, $\partial \mathbf{v} / \partial \mathbf{A}_{\mathrm{vec}}$, and $\alpha$.
We now turn our attention to finding a compact expression for the eigenvector Jacobian. Observe that Equations (22) and (36) create a system of two equations with two unknowns. Therefore, substituting Equation (36) into Equation (22),
$( \mathbf{A} - \lambda \mathbf{I} ) \frac{\partial \mathbf{v}}{\partial \mathbf{A}_{\mathrm{vec}}} = \mathbf{v} \frac{\mathbf{v}_0^H \mathbf{A} \frac{\partial \mathbf{v}}{\partial \mathbf{A}_{\mathrm{vec}}} + \left( \mathbf{v}^T \otimes \mathbf{v}_0^H \right)}{\alpha} - \left( \mathbf{v}^T \otimes \mathbf{I} \right) ,$
which can be arranged to give
$\left[ \mathbf{A} - \lambda \mathbf{I} - \frac{\mathbf{v} \mathbf{v}_0^H \mathbf{A}}{\alpha} \right] \frac{\partial \mathbf{v}}{\partial \mathbf{A}_{\mathrm{vec}}} = \frac{\mathbf{v} \left( \mathbf{v}^T \otimes \mathbf{v}_0^H \right)}{\alpha} - \left( \mathbf{v}^T \otimes \mathbf{I} \right)$
as an equation that isolates the eigenvector derivative. This expression can be simplified to
$\left[ \mathbf{A} - \lambda \mathbf{I} - \frac{\mathbf{v} \mathbf{v}_0^H \mathbf{A}}{\alpha} \right] \frac{\partial \mathbf{v}}{\partial \mathbf{A}_{\mathrm{vec}}} = \mathbf{v}^T \otimes \left[ \frac{\mathbf{v} \mathbf{v}_0^H}{\alpha} - \mathbf{I} \right]$
by manipulating the Kronecker products.

4.1. Eigenvector Jacobian

The objective is now to solve Equation (39) for $\partial \mathbf{v} / \partial \mathbf{A}_{\mathrm{vec}}$. Unlike the result from [39], there is no need to incorporate a Null Space Matrix (or to use a pseudoinverse), since the term $\mathbf{A} - \lambda \mathbf{I} - \mathbf{v} \mathbf{v}_0^H \mathbf{A} / \alpha$ is already full rank and invertible as long as the eigenvalue under consideration is non-zero. This fact is straightforward to show by considering the column space of the term $\mathbf{A} - \lambda \mathbf{I}$, which will generally have rank $n - 1$ (assuming $\lambda$ is simple and $\mathbf{A}$ is full rank). Specifically, $\mathbf{A} - \lambda \mathbf{I}$ spans an $(n-1)$-dimensional subspace of $\mathbb{C}^{n}$ and has a null space in the direction of $\mathbf{v}$. Now, consider the column space of the term $-\mathbf{v} \mathbf{v}_0^H \mathbf{A} / \alpha$, which is rank one and spans only the direction of $\mathbf{v}$. Therefore, by adding $-\mathbf{v} \mathbf{v}_0^H \mathbf{A} / \alpha$ to $\mathbf{A} - \lambda \mathbf{I}$, the resulting column space spans all of $\mathbb{C}^{n}$, making the overall term full rank and invertible.
In light of this fact, the solution for the eigenvector Jacobian for a non-zero eigenvalue is given by
$\frac{\partial \mathbf{v}}{\partial \mathbf{A}_{\mathrm{vec}}} = \left[ \mathbf{A} - \lambda \mathbf{I} - \frac{\mathbf{v} ( \mathbf{v}_0^H \mathbf{A} )}{\alpha} \right]^{-1} \left( \mathbf{v}^T \otimes \left[ \frac{\mathbf{v} \mathbf{v}_0^H}{\alpha} - \mathbf{I} \right] \right) ,$
which is a function of only $\mathbf{A}$, $\lambda$, $\mathbf{v}$, $\mathbf{v}_0$, and $\alpha$. Additionally, manipulating the Kronecker products allows for a final form of
$\frac{\partial \mathbf{v}}{\partial \mathbf{A}_{\mathrm{vec}}} = \mathbf{v}^T \otimes \left\{ \left[ \mathbf{A} - \lambda \mathbf{I} - \frac{\mathbf{v} \mathbf{v}_0^H \mathbf{A}}{\alpha} \right]^{-1} \left[ \frac{\mathbf{v} \mathbf{v}_0^H}{\alpha} - \mathbf{I} \right] \right\} .$
In the case where it is necessary to find the eigenvector Jacobians when the eigenvalue is zero, the Null Space Matrix can be used as discussed in [39] to make the left-hand side invertible, resulting in
$\frac{\partial \mathbf{v}}{\partial \mathbf{A}_{\mathrm{vec}}} = \mathbf{v}^T \otimes \left\{ \left[ \mathbf{A} - \lambda \mathbf{I} - \frac{\mathbf{v} \mathbf{v}_0^H \mathbf{A}}{\alpha} + \mathbf{N} \right]^{-1} \left[ \frac{\mathbf{v} \mathbf{v}_0^H}{\alpha} - \mathbf{I} \right] \right\} .$
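A minimal NumPy sketch of Equation (41) follows. This is an illustration rather than the authors' implementation: the function name eigvec_jacobian, the random test matrix, and the choice $\mathbf{v}_0 = \mathbf{v}$ are assumptions. The finite-difference check at the end rescales each perturbed eigenvector so that $\mathbf{v}_0^H \mathbf{v} = \alpha$ is maintained, which is the normalization this Jacobian assumes.

```python
import numpy as np

def eigvec_jacobian(A, lam, v, v0=None):
    """d v / d vec(A) for a simple, non-zero eigenvalue lam with eigenvector v (n x n^2)."""
    n = A.shape[0]
    v0 = v if v0 is None else v0
    alpha = np.vdot(v0, v)                                  # v0^H v
    M = A - lam * np.eye(n) - np.outer(v, v0.conj()) @ A / alpha
    core = np.linalg.solve(M, np.outer(v, v0.conj()) / alpha - np.eye(n))
    return np.kron(v[np.newaxis, :], core)                  # v^T (x) [M^-1 (v v0^H/alpha - I)]

# finite-difference check on a random complex matrix
rng = np.random.default_rng(3)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
lams, vecs = np.linalg.eig(A)
lam, v = lams[0], vecs[:, 0]
v0, alpha = v.copy(), np.vdot(v, v)

J = eigvec_jacobian(A, lam, v)
h = 1e-7
J_fd = np.zeros((n, n * n), dtype=complex)
for col in range(n):
    for row in range(n):
        dA = np.zeros((n, n), dtype=complex)
        dA[row, col] = h
        lams_p, vecs_p = np.linalg.eig(A + dA)
        k = np.argmin(np.abs(lams_p - lam))                 # match the perturbed eigenpair
        v_p = vecs_p[:, k]
        v_p = v_p * alpha / np.vdot(v0, v_p)                # re-impose v0^H v = alpha
        J_fd[:, col * n + row] = (v_p - v) / h
print(np.max(np.abs(J - J_fd)))                             # small (finite-difference truncation error)
```

Note that the column ordering of the Jacobian follows the column-stacking convention used for $\mathbf{A}_{\mathrm{vec}}$, so the derivative with respect to element $(i, j)$ of $\mathbf{A}$ occupies column $j n + i$ (zero-based).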

4.2. Eigenvalue Jacobian

Now, consider how the eigenvector Jacobian may be used to find the eigenvalue Jacobian. Substitute the result of Equation (41) into Equation (36) and collect like terms
$\frac{\partial \lambda}{\partial \mathbf{A}_{\mathrm{vec}}} = \frac{\mathbf{v}^T}{\alpha} \otimes \left\{ \mathbf{v}_0^H \mathbf{A} \left[ \mathbf{A} - \lambda \mathbf{I} - \frac{\mathbf{v} \mathbf{v}_0^H \mathbf{A}}{\alpha} \right]^{-1} \left[ \frac{\mathbf{v} \mathbf{v}_0^H}{\alpha} - \mathbf{I} \right] + \mathbf{v}_0^H \right\} .$
For the zero-eigenvalue case, the equivalent expression for the eigenvalue Jacobian becomes
$\frac{\partial \lambda}{\partial \mathbf{A}_{\mathrm{vec}}} = \frac{\mathbf{v}^T}{\alpha} \otimes \left\{ \mathbf{v}_0^H \mathbf{A} \left[ \mathbf{A} - \lambda \mathbf{I} - \frac{\mathbf{v} \mathbf{v}_0^H \mathbf{A}}{\alpha} + \mathbf{N} \right]^{-1} \left[ \frac{\mathbf{v} \mathbf{v}_0^H}{\alpha} - \mathbf{I} \right] + \mathbf{v}_0^H \right\} .$
Focusing on the general case from Equation (43), we observe that further simplification is possible. Begin simplification by recognizing that the following are true:
$\left[ \mathbf{A} - \lambda \mathbf{I} - \frac{\mathbf{v} \mathbf{v}_0^H \mathbf{A}}{\alpha} \right] \mathbf{v} = ( \mathbf{A} - \lambda \mathbf{I} ) \mathbf{v} - \frac{\mathbf{v} \mathbf{v}_0^H \mathbf{A}}{\alpha} \mathbf{v} = -\frac{1}{\alpha} \mathbf{v} \mathbf{v}_0^H \mathbf{A} \mathbf{v} = -\frac{\lambda}{\alpha} \mathbf{v} \mathbf{v}_0^H \mathbf{v} = -\lambda \mathbf{v}$
and, therefore,
$\left[ \mathbf{A} - \lambda \mathbf{I} - \frac{\mathbf{v} \mathbf{v}_0^H \mathbf{A}}{\alpha} \right]^{-1} \mathbf{v} = -\frac{1}{\lambda} \mathbf{v} .$
Applying this result, the intermediate term in Equation (43) may be simplified
$\mathbf{v}_0^H \mathbf{A} \left[ \mathbf{A} - \lambda \mathbf{I} - \frac{\mathbf{v} \mathbf{v}_0^H \mathbf{A}}{\alpha} \right]^{-1} \left[ \frac{\mathbf{v} \mathbf{v}_0^H}{\alpha} - \mathbf{I} \right]$
$= -\frac{\mathbf{v}_0^H \mathbf{A} \mathbf{v} \, \mathbf{v}_0^H}{\lambda \alpha} - \mathbf{v}_0^H \mathbf{A} \left[ \mathbf{A} - \lambda \mathbf{I} - \frac{\mathbf{v} \mathbf{v}_0^H \mathbf{A}}{\alpha} \right]^{-1} = -\frac{\mathbf{v}_0^H \mathbf{v} \, \mathbf{v}_0^H}{\alpha} - \mathbf{v}_0^H \mathbf{A} \left[ \mathbf{A} - \lambda \mathbf{I} - \frac{\mathbf{v} \mathbf{v}_0^H \mathbf{A}}{\alpha} \right]^{-1}$
$= -\mathbf{v}_0^H - \mathbf{v}_0^H \mathbf{A} \left[ \mathbf{A} - \lambda \mathbf{I} - \frac{\mathbf{v} \mathbf{v}_0^H \mathbf{A}}{\alpha} \right]^{-1} ,$
which, upon substitution into Equation (43), leads to
$\frac{\partial \lambda}{\partial \mathbf{A}_{\mathrm{vec}}} = -\mathbf{v}^T \otimes \left\{ \frac{1}{\alpha} \mathbf{v}_0^H \mathbf{A} \left[ \mathbf{A} - \lambda \mathbf{I} - \frac{\mathbf{v} \mathbf{v}_0^H \mathbf{A}}{\alpha} \right]^{-1} \right\} ,$
providing a concise expression for $\partial \lambda / \partial \mathbf{A}_{\mathrm{vec}}$ only in terms of the right eigenvector and the chosen normalization convention, $\mathbf{v}_0^H \mathbf{v} = \alpha$.
Intuitively, however, we expect the eigenvalue Jacobian to be unrelated to the choice of eigenvector normalization. Indeed, we find $\partial \lambda / \partial \mathbf{A}_{\mathrm{vec}}$ to be the same for any choice of $\mathbf{v}_0$, so long as $\mathbf{v}_0^H \mathbf{v} \neq 0$. We may show this by rewriting Equation (49) without the use of $\mathbf{v}_0$ or $\alpha$, although doing so does require consideration of the left eigenvector. Specifically, let $\mathbf{w}$ be the left eigenvector corresponding to the eigenvalue $\lambda$ (which also has a right eigenvector $\mathbf{v}$),
$\mathbf{w}^H \mathbf{A} = \lambda \mathbf{w}^H ,$
$\mathbf{w}^H ( \mathbf{A} - \lambda \mathbf{I} ) = \mathbf{0} .$
Observe, therefore, that
$\frac{\mathbf{w}^H}{\mathbf{w}^H \mathbf{v}} \left[ \mathbf{A} - \lambda \mathbf{I} - \frac{\mathbf{v} \mathbf{v}_0^H \mathbf{A}}{\alpha} \right] = \frac{1}{\mathbf{w}^H \mathbf{v}} \left[ \mathbf{w}^H ( \mathbf{A} - \lambda \mathbf{I} ) - \mathbf{w}^H \mathbf{v} \frac{\mathbf{v}_0^H \mathbf{A}}{\alpha} \right] = -\frac{1}{\alpha} \mathbf{v}_0^H \mathbf{A} .$
Substituting this result into Equation (49) produces the following compact form that is independent of the choice of eigenvector normalization
$\frac{\partial \lambda}{\partial \mathbf{A}_{\mathrm{vec}}} = \mathbf{v}^T \otimes \frac{\mathbf{w}^H}{\mathbf{w}^H \mathbf{v}} .$
This result leads to a few important observations about the sensitivity of $\lambda$ to perturbations in $\mathbf{A}$. First, it demonstrates that the eigenvalue Jacobian from Equation (49) is indeed the same for any choice of $\mathbf{v}_0$ satisfying $\mathbf{v}_0^H \mathbf{v} \neq 0$. Second, Equation (49) gives the eigenvalue Jacobian in terms of only the right eigenvector, whereas Equation (53) gives the same eigenvalue Jacobian in terms of both the right and left eigenvectors. Thus, in cases where only the right eigenvector is known (and it is not desirable to compute the corresponding left eigenvector), the result of Equation (49) provides a compact means of computing the eigenvalue Jacobian. Third, and finally, observe the division by $\mathbf{w}^H \mathbf{v}$ in Equation (53). In general, this suggests that eigenvalues will be most stable when their corresponding left and right eigenvectors are colinear (as happens in a Hermitian matrix) and become unstable as the angle between $\mathbf{v}$ and $\mathbf{w}$ increases ($\mathbf{w}^H \mathbf{v} \to 0$).
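The equivalence of Equations (49) and (53) is straightforward to confirm numerically. The sketch below is illustrative (the random matrix, seed, and arbitrary $\mathbf{v}_0$ are assumptions); the left eigenvector is obtained from $\mathbf{A}^H$ only for the comparison.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
lams, V = np.linalg.eig(A)
lam, v = lams[0], V[:, 0]

# Equation (49): only the right eigenvector and a normalization vector v0 are needed
v0 = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # arbitrary, with v0^H v != 0
alpha = np.vdot(v0, v)
M = A - lam * np.eye(n) - np.outer(v, v0.conj()) @ A / alpha
row_49 = -(v0.conj() @ A @ np.linalg.inv(M)) / alpha
J_49 = np.kron(v[np.newaxis, :], row_49[np.newaxis, :])

# Equation (53): the same Jacobian from the left eigenvector w (w^H A = lam w^H)
lams_l, W = np.linalg.eig(A.conj().T)
w = W[:, np.argmin(np.abs(lams_l.conj() - lam))]            # left eigenvector for lam
J_53 = np.kron(v[np.newaxis, :], w.conj()[np.newaxis, :] / np.vdot(w, v))

print(np.max(np.abs(J_49 - J_53)))                          # agreement to near machine precision
```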

5. On the Choice of a Normalization Vector

As mentioned earlier, the choice of $\mathbf{v}_0$ is arbitrary, so long as $\mathbf{v}_0^H \mathbf{v} \neq 0$. While this is true, the choice of $\mathbf{v}_0$ can play an important role in the numerical stability of the system in finite precision computing. In particular, the closer the normalization vector gets to being orthogonal to the eigenvector, the more numerically unstable the system becomes. To demonstrate this phenomenon, consider Figure 1, which shows the condition number of the term $\mathbf{A} - \lambda \mathbf{I} - \mathbf{v} \mathbf{v}_0^H \mathbf{A} / \alpha$ normalized by the number of dimensions, $n$, as a function of the angle between $\mathbf{v}_0$ and $\mathbf{v}$. A higher condition number indicates a more poorly conditioned matrix prone to issues in finite precision computing.
The samples were generated by producing 10,000 random matrices for each integer value in degrees for the angle between v 0 and v . Figure 1 shows matrices whose elements are real values which are normally distributed with zero mean and unit variance. In these experiments, the eigenvector, v , for each matrix is randomly selected. Since v 0 and/or v may be complex, the angle between the vectors is computed using
$\cos \theta = \frac{\mathrm{Re} \left( \mathbf{v}_0^H \mathbf{v} \right)}{\| \mathbf{v}_0 \| \, \| \mathbf{v} \|} .$
Figure 1 indicates that the probability of poor matrix conditioning is greatest when $\mathbf{v}_0$ and $\mathbf{v}$ are nearly orthogonal. Presuming they are in random directions, as $n$ increases it becomes highly probable that $\mathbf{v}_0$ and $\mathbf{v}$ will be nearly orthogonal.
Therefore, it is clear that choosing the normalization vector $\mathbf{v}_0 = \mathbf{v}$ (leading to an angle of 0 deg; or, equivalently, $\mathbf{v}_0 = -\mathbf{v}$, leading to an angle of 180 deg) will ensure the best conditioning for the system and will usually lead to the most intuitive solution. The worst conditioning occurs as $\mathbf{v}_0^H \mathbf{v} \to 0$ (an angle of 90 deg). This is why $\mathbf{v}_0 = \mathbf{v}$ is often chosen in practice, unless there is a compelling reason to choose otherwise.
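The conditioning trend of Figure 1 can be reproduced qualitatively in a few lines. The sketch below is illustrative only: it is not the Monte Carlo study used to generate the figure, and the construction of a $\mathbf{v}_0$ at a prescribed angle from $\mathbf{v}$ is simply one convenient choice.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 20
A = rng.standard_normal((n, n))                        # real A, as in Figure 1
lams, V = np.linalg.eig(A)
lam, v = lams[0], V[:, 0]

# build a unit vector orthogonal to v so that v0 can be placed at a chosen angle
u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
u = u - np.vdot(v, u) / np.vdot(v, v) * v
u = u / np.linalg.norm(u)

for theta_deg in (0, 45, 80, 89, 89.9):
    theta = np.deg2rad(theta_deg)
    v0 = np.cos(theta) * v / np.linalg.norm(v) + np.sin(theta) * u
    alpha = np.vdot(v0, v)
    M = A - lam * np.eye(n) - np.outer(v, v0.conj()) @ A / alpha
    print(theta_deg, np.linalg.cond(M) / n)            # conditioning degrades toward 90 deg
```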
Note that selecting $\mathbf{v}_0 = \mathbf{v}$ causes the condition number of $\mathbf{A} - \lambda \mathbf{I} - \mathbf{v} \mathbf{v}_0^H \mathbf{A} / \alpha$ to share many of the statistical properties of the condition number of $\mathbf{A}$. This is especially true for large values of the condition number. Edelman [42] developed probability density functions for the condition number of $\mathbf{A}$ normalized by $n$ when the elements of $\mathbf{A}$ are real values with zero mean and unit variance,
$p ( x ) = \frac{2 x + 4}{x^3} \exp \left( -\frac{2}{x} - \frac{2}{x^2} \right) .$
Likewise, for the case where $\mathbf{A}$ is composed of complex elements with real and imaginary parts which are each normally distributed with zero mean and unit variance,
$p ( x ) = \frac{8}{x^3} \exp \left( -\frac{4}{x^2} \right) .$
Histograms for the case when $\mathbf{v}_0 = \mathbf{v}$ are overlaid with Edelman’s probability density functions in Figure 2. These figures show that the probability of large condition numbers occurring is well described by Edelman’s result. Unfortunately, Edelman’s density function does not work as well for small condition numbers, but these are of little consequence.
In light of the above discussion, we stress again that $\mathbf{v}_0$ is not the same as $\mathbf{v}$. These results simply demonstrate that the best numerical performance is achieved when the values of the arbitrary vector $\mathbf{v}_0$ are set to be the same as $\mathbf{v}$. Consequently, one may wonder how the usual normalization choice of $\mathbf{v}^H \mathbf{v} = \alpha$ (instead of the $\mathbf{v}_0^H \mathbf{v} = \alpha$ normalization suggested in this paper) affects the Jacobians. Observation of the problem geometry reveals that the normalization proposed in Equation (30) ($\mathbf{v}_0^H \mathbf{v} = \alpha$) serves as a good approximation for the usual normalization of $\mathbf{v}^H \mathbf{v} = \alpha$ when $\mathbf{A}$ is well conditioned and the values of $\mathbf{v}_0$ are set to those of $\mathbf{v}$. This is because the usual normalization constrains the eigenvectors to fall on the hypersphere of radius $\sqrt{\alpha}$, while the normalization from Equation (30) constrains the perturbed eigenvectors to fall on the hyperplane tangent to the hypersphere of radius $\alpha / \| \mathbf{v}_0 \|$ at $\mathbf{v}_0$. Therefore, as long as the perturbation of $\mathbf{A}$ leaves the perturbed eigenvector near the original eigenvector, the proposed normalization approximates the normalization constraining the eigenvectors to the hypersphere to first order. Furthermore, when $\mathbf{v}_0$ is chosen to be $\mathbf{v}$, the difference between the usual normalization and Equation (30) rarely needs to be considered in practice except in very rare situations, such as when attempting to numerically verify the analytic expressions, as was done in the “Numerical Validation” section of this paper.
With these observations in mind, additional remarks are necessary regarding the expressions in the literature that rely on the normalization $\mathbf{v}^H \mathbf{v} = \alpha$. Recall from our earlier discussion that using $\mathbf{v}^H \mathbf{v} = \alpha$ does not result in a valid expression for the eigenvector derivatives because the normalization is not analytic. Despite this, the numerical results produced by these methods will be equivalent to the numerical results of the expressions developed in this paper when $\mathbf{v}_0$ is chosen to be $\mathbf{v}$. While the expressions for the derivatives are numerically equivalent, the derivatives from the existing literature are not valid for the normalization employed. This means that the derivatives in the existing literature cannot be used to predict the perturbations caused to eigenvectors by perturbations to $\mathbf{A}$ when the eigenvectors are complex. The derivatives and normalization in this paper are valid for all complex eigenvectors and can be used to predict the perturbations caused to the eigenvectors by perturbations to $\mathbf{A}$.

6. Simplified Cases

The goal of the work presented in this paper was to create a compact, efficient, and intuitive algorithm for the calculation of the Jacobians of an eigenvalue and eigenvector with respect to the elements of the parent matrix. Now that these expressions have been developed, it is beneficial to discuss how they simplify as assumptions are placed on the parent matrix. A variety of simplifications are possible by imposing structure on A and subsequently Equations (41) and (49). This work focuses on two particularly useful simplifications as a way to gain key insights and show connections with existing literature.

6.1. Real Symmetric Parent Matrix

The first simplified case considered is when the parent matrix of the eigenvalues and eigenvectors is real and symmetric. Matrices of this structure frequently appear in practical problems from science and engineering (for instance, the Davenport solution to Wahba’s Problem [3]). In order to simplify and to parallel results from the existing literature, it is also necessary to make a choice for v 0 ; therefore, choose that v 0 = v and α = 1 . Note that, as discussed previously, this is the choice usually made in practice, as it leads to the best condition for the calculation of the eigenvector derivative.
To begin the simplifications for the symmetric case consider Equation (38), repeated here for convenience:
$\left[ \mathbf{A} - \lambda \mathbf{I} - \mathbf{v} \mathbf{v}^T \mathbf{A} \right] \frac{\partial \mathbf{v}}{\partial \mathbf{A}_{\mathrm{vec}}} = -\mathbf{v}^T \otimes \left( \mathbf{I} - \mathbf{v} \mathbf{v}^T \right) .$
Note that the Hermitian transposes have been replaced by standard transposes, since the eigenvalues and eigenvectors are guaranteed to be real because $\mathbf{A}$ is real and symmetric. In addition, note that $\mathbf{v}_0$ has been replaced by $\mathbf{v}$. Now, as discussed before, the coefficient matrix of the eigenvector Jacobian is full rank (and invertible) as long as the matrix $\mathbf{A}$ is full rank and invertible. Making use of the fact that for an invertible matrix
$\mathbf{A}^{+} = \mathbf{A}^{-1} ,$
where $\mathbf{A}^{+}$ is the Moore–Penrose pseudoinverse of $\mathbf{A}$ [43], it is possible to write
$\frac{\partial \mathbf{v}}{\partial \mathbf{A}_{\mathrm{vec}}} = -\mathbf{v}^T \otimes \left\{ \left[ \mathbf{A} - \lambda \mathbf{I} - \mathbf{v} \mathbf{v}^T \mathbf{A} \right]^{+} \left( \mathbf{I} - \mathbf{v} \mathbf{v}^T \right) \right\} .$
From here, it is necessary to further consider the pseudoinverse term. Recognize that the pseudoinverse term in Equation (59) can be expressed as the pseudoinverse of a matrix plus an outer product,
$\left[ \mathbf{A} - \lambda \mathbf{I} - \mathbf{v} \mathbf{v}^T \mathbf{A} \right]^{+} = \left[ \mathbf{B} + \mathbf{c} \mathbf{d}^T \right]^{+} ,$
where $\mathbf{B} = \mathbf{A} - \lambda \mathbf{I}$, $\mathbf{c} = -\mathbf{v}$, and $\mathbf{d} = \mathbf{A} \mathbf{v} = \lambda \mathbf{v}$. Using the case (i) identities presented in [44] when $\mathbf{A}$ is real and symmetric, it can be shown that
$\left[ \mathbf{A} - \lambda \mathbf{I} - \mathbf{v} \mathbf{v}^T \mathbf{A} \right]^{+} = ( \mathbf{A} - \lambda \mathbf{I} )^{+} - \lambda^{-1} \mathbf{v} \mathbf{v}^T .$
Substituting this result into Equation (59) yields
$\frac{\partial \mathbf{v}}{\partial \mathbf{A}_{\mathrm{vec}}} = -\mathbf{v}^T \otimes \left\{ \left[ ( \mathbf{A} - \lambda \mathbf{I} )^{+} - \lambda^{-1} \mathbf{v} \mathbf{v}^T \right] \left( \mathbf{I} - \mathbf{v} \mathbf{v}^T \right) \right\} .$
Now, expanding the matrix multiplication in the Kronecker product gives
$\frac{\partial \mathbf{v}}{\partial \mathbf{A}_{\mathrm{vec}}} = -\mathbf{v}^T \otimes \left\{ ( \mathbf{A} - \lambda \mathbf{I} )^{+} - \lambda^{-1} \mathbf{v} \mathbf{v}^T - ( \mathbf{A} - \lambda \mathbf{I} )^{+} \mathbf{v} \mathbf{v}^T + \lambda^{-1} \mathbf{v} \mathbf{v}^T \mathbf{v} \mathbf{v}^T \right\} ,$
which, when taking into account that for symmetric matrices the pseudoinverse has the same null space as the matrix itself (and that $\mathbf{v}^T \mathbf{v} = \alpha = 1$), simplifies to
$\frac{\partial \mathbf{v}}{\partial \mathbf{A}_{\mathrm{vec}}} = -\mathbf{v}^T \otimes ( \mathbf{A} - \lambda \mathbf{I} )^{+} ,$
which is exactly the same result presented in [34].
With the simplified version of the eigenvector derivative in hand, the simplified eigenvalue Jacobian is trivial to find. To begin, substitute Equation (64) into Equation (36) to obtain
$\frac{\partial \lambda}{\partial \mathbf{A}_{\mathrm{vec}}} = -\mathbf{v}^T \otimes \left[ \mathbf{v}^T \mathbf{A} ( \mathbf{A} - \lambda \mathbf{I} )^{+} \right] + \mathbf{v}^T \otimes \mathbf{v}^T .$
Now, making use of the fact that $\mathbf{v}^T \mathbf{A} = \lambda \mathbf{v}^T$ for symmetric matrices and the sharing of null spaces (a symmetric matrix and its pseudoinverse share the same null space), this reduces to
$\frac{\partial \lambda}{\partial \mathbf{A}_{\mathrm{vec}}} = \mathbf{v}^T \otimes \mathbf{v}^T ,$
which, again, is exactly the same as that presented in [34]. Note that these expressions do not assume that the parent matrix is perturbed symmetrically.
Thus, the general eigenvalue and eigenvector Jacobians presented in Equations (41) and (49) cleanly simplify to the results from [34] for the special case when $\mathbf{A}$ is symmetric. This same result can be achieved by assuming the symmetry of $\mathbf{A}$ and enforcing $\mathbf{v}_0 = \mathbf{v}$ in Equations (22) and (36), as was done in [34].
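The collapse to the symmetric-case results is easy to verify directly. The sketch below (with an arbitrary random symmetric matrix) evaluates the general Equations (41) and (49) with $\mathbf{v}_0 = \mathbf{v}$ and $\alpha = 1$ and compares them against the simplified Equations (64) and (66).

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                                  # real symmetric parent matrix
lams, V = np.linalg.eigh(A)
lam, v = lams[0], V[:, 0]                          # unit-norm eigenvector, so alpha = 1

# general expressions, Equations (41) and (49), with v0 = v
M = A - lam * np.eye(n) - np.outer(v, v) @ A       # alpha = v^T v = 1
J_vec_gen = np.kron(v[np.newaxis, :], np.linalg.solve(M, np.outer(v, v) - np.eye(n)))
J_val_gen = np.kron(v[np.newaxis, :], -(v @ A @ np.linalg.inv(M))[np.newaxis, :])

# simplified symmetric-case expressions, Equations (64) and (66)
J_vec_sym = np.kron(v[np.newaxis, :], -np.linalg.pinv(A - lam * np.eye(n)))
J_val_sym = np.kron(v[np.newaxis, :], v[np.newaxis, :])

print(np.max(np.abs(J_vec_gen - J_vec_sym)))       # small
print(np.max(np.abs(J_val_gen - J_val_sym)))       # small
```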

6.2. Real Diagonal Parent Matrix

The second simplified case considered is a diagonal parent matrix with only real valued elements. In this case, the eigenvalues of the matrix are simply the diagonal elements of A and the eigenvectors are the standard basis. While this case is trivial, it leads to some powerful insights into the overall problem.

6.2.1. Simplified Jacobians for a Diagonal Matrix

To develop the simplified derivatives for a diagonal matrix, begin with the simplified derivatives for the symmetric case given in Equations (64) and (66). Now, recognizing that the pseudoinverse of a diagonal matrix is just the reciprocal of the non-zero diagonal elements, the eigenvector derivative simplifies to
$\frac{\partial \mathbf{v}_i}{\partial \mathbf{A}_{\mathrm{vec}}} = -\mathbf{e}_i^T \otimes ( \mathbf{A} - \lambda_i \mathbf{I} )^{+} = \begin{bmatrix} \mathbf{0}_{n \times ( i-1 ) n} & -( \mathbf{A} - \lambda_i \mathbf{I} )^{+} & \mathbf{0}_{n \times ( n-i ) n} \end{bmatrix} ,$
where
$( \mathbf{A} - \lambda_i \mathbf{I} )^{+} = \mathrm{diag} \left[ ( \lambda_1 - \lambda_i )^{-1} , \; \ldots , \; ( \lambda_{i-1} - \lambda_i )^{-1} , \; 0 , \; ( \lambda_{i+1} - \lambda_i )^{-1} , \; \ldots , \; ( \lambda_n - \lambda_i )^{-1} \right]$
and where $\mathbf{e}_i$ is the $i$th standard basis vector and the derivatives presented are for the $i$th eigenvalue and eigenvector (at this point, it becomes necessary to distinguish the eigenvalue and eigenvector being considered). Recall that the pseudoinverse of a non-zero scalar is the reciprocal, while the pseudoinverse of a zero scalar is 0.
The eigenvalue derivative simplifies similarly:
$\frac{\partial \lambda_i}{\partial \mathbf{A}_{\mathrm{vec}}} = \mathbf{e}_i^T \otimes \mathbf{e}_i^T = \begin{bmatrix} \mathbf{0}_{1 \times ( i-1 ) n} & \mathbf{e}_i^T & \mathbf{0}_{1 \times ( n-i ) n} \end{bmatrix} .$

6.2.2. Perturbation to the Eigenspace of a Diagonal Matrix

With the simplified relationships in hand, it is possible to make some interesting observations on the perturbation of the eigenspace. (Again, note that it is only assumed that the parent matrix is diagonal. The perturbation matrix is not constrained to be diagonal.) The first observation is that to perturb the $i$th eigenvalue, one must perturb the $i$th diagonal element of the diagonal parent matrix, at least to first order. Furthermore, the perturbation to the eigenvalue in this case is exactly the perturbation to the parent matrix. While this observation should be trivial (since the diagonal elements are the eigenvalues themselves), it leads to a more interesting observation for the general case as will be discussed later. This observation can be expressed mathematically as
$\Delta \lambda_i = \delta_i$
when
$\Delta \mathbf{A} = \delta_i \mathbf{e}_i \mathbf{e}_i^T = \delta_i \mathbf{v}_i \mathbf{v}_i^T .$
The next observation is that the eigenvector is only perturbed when the $i$th column of the parent matrix is perturbed. Furthermore, there is an analytic relationship between the change to the eigenvector, the eigenvalues, and the perturbation itself. Mathematically, this is expressed as
$\Delta \mathbf{v}_i = -\sum_{k=1, k \neq i}^{n} \frac{\delta_k}{\lambda_k - \lambda_i} \mathbf{e}_k = -\sum_{k=1, k \neq i}^{n} \frac{\delta_k}{\lambda_k - \lambda_i} \mathbf{v}_k$
when
$\Delta \mathbf{A} = \sum_{k=1, k \neq i}^{n} \delta_k \mathbf{e}_k \mathbf{e}_i^T = \sum_{k=1, k \neq i}^{n} \delta_k \mathbf{v}_k \mathbf{v}_i^T .$
These relations show that the eigenvector derivative for a diagonal matrix can be expressed as a modal expansion of the other eigenvectors if the coefficients δ k can be calculated (which is quite simple for a diagonal matrix since the eigenvectors are the standard basis).
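These first-order predictions can be exercised with a tiny numerical experiment (not from the paper; the diagonal entries and perturbation values are arbitrary). The sketch perturbs the $i$th column of a diagonal matrix and compares the exact eigenpair shift with the expressions above, holding the $i$th component of the eigenvector fixed at 1, which corresponds to choosing $\mathbf{v}_0 = \mathbf{e}_i$.

```python
import numpy as np

lams = np.array([1.0, 3.0, -2.0, 5.0])                 # distinct eigenvalues of a diagonal A
n, i = len(lams), 1                                    # examine the second eigenpair
A = np.diag(lams)

eps = 1e-6
delta = eps * np.array([0.3, 0.7, -0.4, 0.2])          # small perturbation applied to column i
dA = np.zeros((n, n))
dA[:, i] = delta

lams_p, V_p = np.linalg.eig(A + dA)
k = np.argmin(np.abs(lams_p - lams[i]))                # track the perturbed i-th eigenpair
v_p = V_p[:, k] / V_p[i, k]                            # hold the i-th component at 1 (v0 = e_i)

dv_pred = np.array([0.0 if m == i else -delta[m] / (lams[m] - lams[i]) for m in range(n)])
print(lams_p[k] - lams[i], "vs predicted", delta[i])   # eigenvalue shift matches delta_i
print(np.max(np.abs(v_p - np.eye(n)[:, i] - dv_pred))) # small: first-order prediction holds
```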

6.2.3. Perturbations to the Eigenspace of a Diagonalizable Matrix

Now, reconsider the case when the parent matrix is symmetric (as was done in the previous section). Since the parent matrix is symmetric, the eigenvectors will form an orthonormal basis for $\mathbb{R}^{n}$ and the matrix is diagonalizable as
A = V Λ V T ,
where Λ is a diagonal matrix of the eigenvalues and V is an orthogonal matrix whose columns are the eigenvectors of A . Substituting this into the standard eigenvalue problem gives
V Λ V T v i = λ i v i ,
which can be rewritten as
Λ V T v i = λ i V T v i .
Since the columns of V are made up of the orthogonal eigenvectors of A ,
V T v i = e i ,
which quickly reduces Equation (75) down to the diagonalized eigensystem,
Λ e i = λ i e i .
Now, suppose that the matrix A is perturbed by adding a matrix Δ A . In the diagonalized space this matrix perturbation can be expressed as
Δ Λ = V T Δ A V ,
where Δ Λ is an additive update to Λ . The matrix Δ Λ will not generally be diagonal.
Now that the problem has been diagonalized, the observations described above can be utilized. Since the eigenvectors in the diagonalized space form the standard basis, it is clear that the matrix Δ Λ can be decomposed as
$\Delta \mathbf{\Lambda} = \sum_{i=1}^{n} \sum_{k=1}^{n} \delta_{ki} \mathbf{e}_k \mathbf{e}_i^T ,$
where
$\Delta \mathbf{\Lambda} = \begin{bmatrix} \delta_{11} & \delta_{12} & \cdots & \delta_{1n} \\ \delta_{21} & \delta_{22} & \cdots & \delta_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ \delta_{n1} & \delta_{n2} & \cdots & \delta_{nn} \end{bmatrix}$
and
$\delta_{ki} = \mathbf{v}_k^T \, \Delta \mathbf{A} \, \mathbf{v}_i .$
Now, analogous to relations in Equations (70) and (72), the perturbations in the diagonalized space are described by
$\Delta \lambda_i = \delta_{ii} ,$
$\Delta \mathbf{e}_i = -\sum_{k=1, k \neq i}^{n} \frac{\delta_{ki}}{\lambda_k - \lambda_i} \mathbf{e}_k .$
These perturbations must now be related to perturbations in the original eigenvectors, v i . For an additive update of the diagonalized eigenvector,
$\mathbf{v}_i + \Delta \mathbf{v}_i = \mathbf{V} \left( \mathbf{e}_i + \Delta \mathbf{e}_i \right) .$
Thus, it becomes apparent that the update to the original eigenvector is given by
$\Delta \mathbf{v}_i = \mathbf{V} \, \Delta \mathbf{e}_i = -\sum_{k=1, k \neq i}^{n} \frac{\delta_{ki}}{\lambda_k - \lambda_i} \mathbf{v}_k .$
Furthermore, taking the inverse of Equation (78) combined with Equation (79) gives
$\Delta \mathbf{A} = \mathbf{V} \left( \sum_{i=1}^{n} \sum_{k=1}^{n} \delta_{ki} \mathbf{e}_k \mathbf{e}_i^T \right) \mathbf{V}^T = \sum_{i=1}^{n} \sum_{k=1}^{n} \delta_{ki} \mathbf{v}_k \mathbf{v}_i^T ,$
which shows that any perturbation can be expressed as a linear combination of the outer products of the eigenvectors of any symmetric matrix where the coefficients are found using Equation (81). While this approach is not efficient, it provides an interesting parallel to the modal expansion techniques discussed in [10,11,12,13,14,15,16,17] as well as stability theory for eigenvectors [45]. Interestingly, the result arrived at in Equation (85) is exactly that arrived at in [45] for the symmetric case.
Furthermore, if the update to $\mathbf{A}$ is defined to be $\Delta \mathbf{A} = \Delta A_{lm} \mathbf{e}_l \mathbf{e}_m^T$, then it becomes possible to find the derivatives for a perturbation to any element of $\mathbf{A}$ as
$\frac{\partial \mathbf{v}_i}{\partial A_{lm}} = -\sum_{k=1, k \neq i}^{n} \frac{v_{k_l} v_{i_m}}{\lambda_k - \lambda_i} \mathbf{v}_k ,$
where $\partial \mathbf{v}_i / \partial A_{lm}$ is the partial derivative of the $i$th eigenvector with respect to the $( l , m )$th element of the matrix $\mathbf{A}$, and $v_{a_b}$ is the $b$th element of the $a$th eigenvector of $\mathbf{A}$, which is very similar to what is done in [10,11,12,13,14,15,16,17]. A similar proof using left eigenvectors can be shown for the case of any diagonalizable matrix.
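For a symmetric matrix, the modal-expansion form above and the Kronecker-product form of Equation (64) are two views of the same derivative. The short sketch below (arbitrary indices and random matrix) checks one element-wise derivative both ways.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5
A = rng.standard_normal((n, n))
A = (A + A.T) / 2
lams, V = np.linalg.eigh(A)                         # orthonormal eigenvectors
i, l, m = 2, 1, 3                                   # eigenpair i, matrix element (l, m)

# modal expansion over the other eigenvectors
dv_modal = sum(-V[l, k] * V[m, i] / (lams[k] - lams[i]) * V[:, k]
               for k in range(n) if k != i)

# column (m*n + l) of -v_i^T (x) (A - lam_i*I)^+, i.e., -v_i[m] * pinv(A - lam_i*I)[:, l]
dv_kron = -V[m, i] * np.linalg.pinv(A - lams[i] * np.eye(n))[:, l]

print(np.max(np.abs(dv_modal - dv_kron)))           # agreement to near machine precision
```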
In summary, it once again becomes evident that the very general and very efficient expressions for eigenvalue and eigenvector Jacobians presented in this manuscript may be reduced to a variety of important special cases presented elsewhere in the literature. In addition, these simplifications provide powerful insight into the structure and dynamics of the eigenvalue and eigenvector Jacobian problem.

7. Numerical Validation

Forward finite differencing was used to validate the formulation of the new eigenvalue and eigenvector Jacobians presented in this manuscript. This provides a numerical approximation of the Jacobians which may be compared with the analytic expressions developed in this paper.
The forward finite differencing was performed by perturbing each element of the parent matrix individually in order to calculate each element of the Jacobians. The analytic derivatives from Equations (40) and (49) were then compared with the finite differences and the percent differences were calculated as
$\% \, \mathrm{Difference} = 100 \left| \frac{x_{\mathrm{numeric}} - x_{\mathrm{analytic}}}{x_{\mathrm{numeric}}} \right| .$
This was performed for 5000 randomly generated complex matrices of size 2 × 2, 5000 randomly generated complex matrices of size 3 × 3, and 5000 randomly generated complex matrices of size 10 × 10. The results for both the eigenvalue and eigenvector derivatives are shown in the histograms in Figure 3. Note that, due to finite precision issues, matrices had to be ignored where the smallest component of the eigenvector derivatives was less than the perturbation size used in the finite differencing. As can be seen in the figure, the new method performed well in every instance, with well below 0.1% difference for each and every element of the eigenvalue and eigenvector Jacobians. In addition, the output from the techniques derived in this paper matched the outputs from [39] to within machine precision.
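A condensed sketch of this validation procedure is given below. It is not the authors' code: it uses far fewer and smaller samples, reports summary statistics rather than histograms, and the helper name eigval_jacobian is an assumption. As noted above, the percent difference can grow for elements whose derivatives are near zero, which is why the median and maximum are reported separately.

```python
import numpy as np

def eigval_jacobian(A, lam, v, v0):
    """d lam / d vec(A) from Equation (49), shape (1, n^2)."""
    n = A.shape[0]
    alpha = np.vdot(v0, v)
    M = A - lam * np.eye(n) - np.outer(v, v0.conj()) @ A / alpha
    row = -(v0.conj() @ A @ np.linalg.inv(M)) / alpha
    return np.kron(v[np.newaxis, :], row[np.newaxis, :])

rng = np.random.default_rng(8)
n, h, pct = 3, 1e-7, []
for _ in range(200):                                   # 200 random 3x3 complex matrices
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    lams, V = np.linalg.eig(A)
    lam, v = lams[0], V[:, 0]
    J = eigval_jacobian(A, lam, v, v)
    for col in range(n):
        for row in range(n):
            dA = np.zeros((n, n), dtype=complex)
            dA[row, col] = h                           # forward perturbation of one element
            lam_p = np.linalg.eigvals(A + dA)
            numeric = (lam_p[np.argmin(np.abs(lam_p - lam))] - lam) / h
            pct.append(100 * abs((numeric - J[0, col * n + row]) / numeric))
print(np.median(pct), np.max(pct))                     # median far below 0.1; outliers come from
                                                       # near-zero derivative elements
```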

8. Comparison of Performance

The primary goal of the derivations presented in this paper was to decrease the computational complexity relative to those presented in [39]. An examination of the two formulations indicates that both techniques are $O(n^4)$, due to the $n \times n$ by $n \times n^2$ multiplication in Equation (40) and the $\mathrm{Tr}(\mathbf{A}^n)$ term in Equation (25) (assuming that the technique used to calculate the determinant is faster than $O(n!)$, as is the case in most modern linear algebra libraries). Despite the fact that both formulations have the same upper limit on their computational complexity, it should be clear that the new formulation is much simpler, both in terms of operations performed (the formulation from [39] has two operations of order $O(n^4)$, as opposed to one for the formulation proposed here) and in terms of memory use.
A simulation was run in an attempt to quantify the increase in computational efficiency from the technique in [39] to the technique presented in this paper. The simulation was performed by applying each technique in turn to 50 randomly generated matrices (the same 50 matrices for each technique) at matrix sizes varying from 2 to 50. For each run, the computation time of each method was recorded. Finally, the minimum computation time for each matrix size was chosen for each method, and the results are shown in Figure 4. As can be seen in the figure, the new method is at minimum an order of magnitude faster, and the gap between the two methods grows as the matrix size increases. In addition, note that the new method is less susceptible to numerical precision issues, as is evidenced by the cut-off of the results for the method from [39].
These simulations were performed by the authors on the campus of West Virginia University (WVU) in Morgantown, WV in 2016, using an Intel Core i7-3770 processor at 3.4 GHz, and executed within the Matlab programming language (version R2015b).
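The original timing study was run in Matlab and also requires an implementation of the method from [39], which is not reproduced here. The rough Python sketch below (sizes, seed, and repetition count are arbitrary) times only the new formulation, whose cost is dominated by forming the $n \times n^2$ Kronecker products, consistent with the $O(n^4)$ bound discussed above.

```python
import time
import numpy as np

def jacobians(A, lam, v, v0):
    """Eigenvalue and eigenvector Jacobians from Equations (49) and (41)."""
    n = A.shape[0]
    alpha = np.vdot(v0, v)
    M = A - lam * np.eye(n) - np.outer(v, v0.conj()) @ A / alpha
    Minv = np.linalg.inv(M)
    dv = np.kron(v[np.newaxis, :], Minv @ (np.outer(v, v0.conj()) / alpha - np.eye(n)))
    dlam = np.kron(v[np.newaxis, :], -(v0.conj() @ A @ Minv)[np.newaxis, :] / alpha)
    return dlam, dv

rng = np.random.default_rng(9)
for n in (10, 20, 40, 80):
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    lams, V = np.linalg.eig(A)
    times = []
    for _ in range(5):
        t0 = time.perf_counter()
        jacobians(A, lams[0], V[:, 0], V[:, 0])
        times.append(time.perf_counter() - t0)
    print(n, min(times))                               # minimum over repetitions, as in Figure 4
```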

9. Conclusions

A new formulation is derived for the complete Jacobians of eigenvalues and eigenvectors with respect to the elements of their parent matrix. The new solution relies on only the eigenvalue and eigenvector being considered and is valid for any simple complex eigenvalue/eigenvector pair. Furthermore, the parent matrix may contain complex entries and need not be symmetric. As a result, the method presented here is extremely general, with applications to finite-element analysis (FEA) solutions to vibration problems, fitting of an ellipse to scattered data points, quaternion-based attitude estimation, and a host of other important scientific and engineering problems.
The new eigenvalue and eigenvector Jacobians developed in this manuscript are shown to collapse to well-known results if the parent matrix is either (1) real and symmetric or (2) real and diagonal. This new method may also be reinterpreted to gain a deeper understanding of perturbations of the eigenspace.
Finally, the new eigenvalue and eigenvector Jacobians are validated by comparison with forward finite differencing. The computational speed of this new technique was shown to be better by a factor of 10 (or more for large matrices) when compared with the performance of the technique proposed in [39].

Author Contributions

A.J.L.: Writing—Original Draft Preparation, Writing—Review and Editing, Formal Analysis, Conceptualization, Software, Methodology, Investigation. J.A.C.: Writing—Original Draft Preparation, Writing—Review and Editing, Formal Analysis, Conceptualization, Supervision, Project Administration. S.B.R.: Writing—Original Draft Preparation, Writing—Review and Editing, Formal Analysis, Conceptualization.

Funding

A portion of this work was made possible by NASA under award NNX13AJ25A.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hughes, T.J. The Finite Element Method: Linear Static and Dynamic Finite Element Analysis; Dover Publications, Inc.: Mineola, NY, USA, 2012; pp. 429–456. [Google Scholar]
  2. Keat, J. Analysis of Least-Squares Attitude Determination Routine DOAOP; Technical Report CSC/TM-77/6034; Computer Sciences Corp.: Falls Church, VA, USA, 1977. [Google Scholar]
  3. Markley, F.L.; Crassidis, J.L. Fundamentals of Spacecraft Attitude Determination and Control; Springer: New York, NY, USA, 2014; pp. 187–189. [Google Scholar]
  4. Fitzgibbon, A.; Pilu, M.; Fisher, R.B. Direct Least Square Fitting of Ellipses. IEEE Trans. Pattern Anal. Mach. Intell. 1999, 21, 476–480. [Google Scholar] [CrossRef]
  5. Roy, R.; Kailath, T. ESPRIT—Estimation of Signal Parameters Via rotational Invariance Techniques. IEEE Trans. Acoust. Speech Signal Process. 1989, 37, 984–995. [Google Scholar] [CrossRef]
  6. Ma, Y.; Soatto, S.; Košecká, J.; Sastry, S. An Invitation to 3D Vision: From Images to Geometric Models; Springer: New York, NY, USA, 2010; pp. 52–58. [Google Scholar]
  7. Horn, B. Closed-form Solution of Absolute Orientation Using Unit Quaternions. J. Opt. Soc. Am. 1987, 4, 629–642. [Google Scholar] [CrossRef]
  8. Christian, J.A. Optical Navigation Using Planet’s Centroid and Apparent Diameter in Image. J. Guid. Control. Dyn. 2015, 38, 192–204. [Google Scholar] [CrossRef]
  9. Diekmann, O.; Heesterbeek, J.; Roberts, M. The construction of next-generation matrices for compartmental epidemic models. J. R. Soc. Interface 2010, 7, 873–885. [Google Scholar] [CrossRef] [PubMed]
  10. Fox, R.; Kapoor, M. Rates of Change of Eigenvalues and Eigenvectors. AIAA J. 1968, 6, 2426–2429. [Google Scholar] [CrossRef]
  11. Rogers, L.C. Derivatives of Eigenvalues and Eigenvectors. AIAA J. 1970, 8, 943–944. [Google Scholar] [CrossRef]
  12. Plaut, R.; Huseyin, K. Derivatives of Eigenvalues and Eigenvectors in Non-Self-Adjoint Systems. AIAA J. 1973, 11, 250–251. [Google Scholar] [CrossRef]
  13. Kalaba, R.; Spingarn, K.; Tesfatsion, L. Variational Equations for the Eigenvalues and Eigenvectors of Nonsymmetric Matrices. J. Optim. Theory Appl. 1981, 33, 1–8. [Google Scholar] [CrossRef]
  14. Lim, K.; Junkins, J.; Wang, B. Re-examination of eigenvector derivatives. J. Guid. Control. Dyn. 1987, 10, 581–587. [Google Scholar] [CrossRef]
  15. Andrew, A.L.; Tan, R.C. Computation of derivatives of repeated eigenvalues and corresponding eigenvectors by simultaneous iteration. AIAA J. 1996, 34, 2214–2216. [Google Scholar] [CrossRef]
  16. Andrew, A.L.; Tan, R.C. Computation of Derivatives of Repeated Eigenvalues and the Corresponding Eigenvectors of Symmetric Matrix Pencils. SIAM J. Matrix Anal. Appl. 1998, 20, 78–100. [Google Scholar] [CrossRef]
  17. Van Der Aa, N.; Ter Morsche, H.; Mattheij, R. Computation of Eigenvalue and Eigenvector Derivatives for a General Complex-Valued Eigensystem. Electron. J. Linear Algebra 2007, 16, 300–314. [Google Scholar] [CrossRef]
  18. Garg, S. Derivatives of Eigensolutions for a General Matrix. AIAA J. 1973, 11, 1191–1194. [Google Scholar] [CrossRef]
  19. Rudisill, C.S. Derivatives of Eigenvalues and Eigenvectors for a General Matrix. AIAA J. 1974, 12, 721–722. [Google Scholar] [CrossRef]
  20. Rudisill, C.S.; Chu, Y.Y. Numerical Methods for Evaluating the Derivatives of Eigenvalues and Eigenvectors. AIAA J. 1975, 13, 834–837. [Google Scholar] [CrossRef]
  21. Juang, J.N.; Lim, K.B. On the Eigenvalue and Eigenvector Derivatives of a General Matrix. Available online: https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19870012842.pdf (accessed on 19 November 2019).
  22. Lim, K.B.; Juang, J.N.; Ghaemmaghami, P. Eigenvector derivatives of repeated eigenvalues using singular value decomposition. J. Guid. Control. Dyn. 1989, 12, 282–283. [Google Scholar] [CrossRef]
  23. Juang, J.N.; Ghaemmaghami, P.; Lim, K.B. Eigenvalue and eigenvector derivatives of a nondefective matrix. J. Guid. Control. Dyn. 1989, 12, 480–486. [Google Scholar] [CrossRef]
  24. Murthy, D.V.; Haftka, R.T. Derivatives of Eigenvalues and Eigenvectors of a General Complex Matrix. Int. J. Numer. Methods Eng. 1988, 26, 293–311. [Google Scholar] [CrossRef]
  25. Xu, Z.; Wu, B. Derivatives of Complex Eigenvectors with Distinct and Repeated Eigenvalues. Int. J. Numer. Methods Eng. 2008, 75, 945–963. [Google Scholar] [CrossRef]
  26. Nelson, R.B. Simplified Calculation of Eigenvector Derivatives. AIAA J. 1976, 14, 1201–1205. [Google Scholar] [CrossRef]
  27. Cardani, C.; Mantegazza, P. Calculation of Eigenvalue and Eigenvector Derivatives for Algebraic Flutter and Divergence Eigenproblems. AIAA J. 1979, 17, 408–412. [Google Scholar] [CrossRef]
  28. Ojalvo, I. Gradients for Large Structural Models With Repeated Frequencies; Technical Report 861789, SAE Technical Paper; University of Bridgeport: Bridgeport, CT, USA, 1986. [Google Scholar]
  29. Dailey, R.L. Eigenvector Derivatives With Repeated Eigenvalues. AIAA J. 1989, 27, 486–491. [Google Scholar] [CrossRef]
  30. Mills-Curran, W.C. Calculation of Eigenvector Derivatives for Structures with Repeated Eigenvalues. AIAA J. 1988, 26, 867–871. [Google Scholar] [CrossRef]
  31. Friswell, M. The Derivatives of Repeated Eigenvalues and Their Associated Eigenvectors. J. Vib. Acoust. 1996, 118, 390–397. [Google Scholar] [CrossRef]
  32. Friswell, M.I.; Adhikari, S. Derivatives of Complex Eigenvectors Using Nelson’s Method. AIAA J. 2000, 38, 2355–2357. [Google Scholar] [CrossRef]
  33. Wu, B.; Xu, Z.; Li, Z. Improved Nelson’s Method for Computing Eigenvector Derivatives with Distinct and Repeated Eigenvalues. AIAA J. 2007, 45, 950–952. [Google Scholar] [CrossRef]
  34. Magnus, J.R. On Differentiating Eigenvalues and Eigenvectors. Econom. Theory 1985, 1, 179–191. [Google Scholar] [CrossRef]
  35. Meyer, C.D.; Stewart, G.W. Derivatives and Perturbations of Eigenvectors. SIAM J. Numer. Anal. 1988, 25, 679–691. [Google Scholar] [CrossRef]
  36. Crouzeix, M.; Philippe, B.; Sadkane, M. The Davidson Method. SIAM J. Sci. Comput. 1994, 15, 62–76. [Google Scholar] [CrossRef]
  37. Xie, H.; Dai, H. Davidson Method for Eigenpairs and Their Partial Derivatives of Generalized Eigenvalue Problems. Commun. Numer. Methods Eng. 2006, 22, 155–165. [Google Scholar] [CrossRef]
  38. De Leeuw, J. Derivatives of Generalized Eigensystems with Applications. Available online: https://escholarship.org/content/qt2s67h3nv/qt2s67h3nv.pdf (accessed on 19 November 2019).
  39. Liounis, A.; Christian, J. Techniques for Generating Analytic Covariance Expressions for Eigenvalues and Eigenvectors. IEEE Trans. Signal Process. 2015, 64, 1808–1821. [Google Scholar] [CrossRef]
  40. Winitzki, S. Linear Algebra Via Exterior Products; Lulu Press: Morrisville, NC, USA, 2010; pp. 69–98. [Google Scholar]
  41. Bourbaki, N. Algebra I; Hermann: Paris, France, 1971; pp. 507–549. [Google Scholar]
  42. Edelman, A. Eigenvalue and Condition Numbers of Random Matrices. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 1989. [Google Scholar]
  43. Penrose, R. A Generalized Inverse for Matrices. Proc. Camb. Philos. Soc. 1955, 51, 406–413. [Google Scholar] [CrossRef] [Green Version]
  44. Meyer, C.D. Generalized Inversion of Modified Matrices. SIAM J. Appl. Math. 1973, 24, 315–323. [Google Scholar] [CrossRef]
  45. Atkinson, K.E. An Introduction to Numerical Analysis; John Wiley & Sons: Hoboken, NJ, USA, 2008; pp. 591–601. [Google Scholar]
Figure 1. The condition number of the term $\mathbf{A} - \lambda \mathbf{I} - \mathbf{v} \mathbf{v}_0^H \mathbf{A} / \alpha$ normalized by the number of dimensions, $n$, as a function of the angle between $\mathbf{v}_0$ and $\mathbf{v}$. Here, $\mathbf{A}$ is a real matrix. The numerical results are annotated to make the statistics more clearly visible according to the legend in the bottom right frame. Results are similar for a complex $\mathbf{A}$.
Figure 2. Histograms of the condition number of the term $\mathbf{A} - \lambda \mathbf{I} - \mathbf{v} \mathbf{v}_0^H \mathbf{A} / \alpha$ normalized by the number of dimensions, $n$, compared to Edelman’s probability density function for the condition number of $\mathbf{A}$. Here, $\mathbf{A}$ is a real matrix.
Figure 3. Histograms of percent difference between analytic derivatives computed using Equations (36) and (40) (eigenvalue derivatives top and eigenvector derivatives bottom) and finite forward differencing for 5000 randomly generated matrices of each size. The histograms are of the percent difference for each element of the eigenvalue and eigenvector derivatives (for example, for each $n \times n$ matrix there are $n^2$ eigenvalue derivative elements and $n \times n^2$ eigenvector derivative elements). Similar histograms are presented in [39] for the method discussed in that paper.
Figure 4. A plot of minimum computation time versus matrix size for the method from [39] (original method) and the method proposed in this paper (new method). Note that the method from [39] encounters numerical stability issues around a matrix size of 35 due to Equation (29). This is why there is a cut-off in the data.
