
Free Vibrations of Multi-Degree Structures: Solving Quadratic Eigenvalue Problems with an Excitation and Fast Iterative Detection Method

1 Center of Excellence for Ocean Engineering, National Taiwan Ocean University, Keelung 202301, Taiwan
2 Department of Mechanical Engineering, National United University, Miaoli 36063, Taiwan
* Author to whom correspondence should be addressed.
Submission received: 1 October 2022 / Revised: 31 October 2022 / Accepted: 15 December 2022 / Published: 18 December 2022
(This article belongs to the Special Issue Feature Papers in Vibration)

Abstract: For the free vibrations of multi-degree mechanical structures arising in structural dynamics, we solve the quadratic eigenvalue problem either by linearizing it to a generalized eigenvalue problem or by treating it directly with iterative detection methods for the real and complex eigenvalues. To solve the generalized eigenvalue problem, we impose a nonzero exciting vector on the eigen-equation and solve a nonhomogeneous linear system to obtain a response curve, which consists of the magnitudes of the n-vectors with respect to the eigen-parameter over a range. The n-dimensional eigenvector is assumed to be a superposition of a constant exciting vector and an m-vector, which can be obtained in terms of the eigen-parameter by solving the projected eigen-equation. In doing so, we save computational cost because the response curve is generated from data acquired in a lower-dimensional subspace. We develop a fast iterative detection method that maximizes the magnitude to locate an eigenvalue, which appears as a peak in the response curve. By zooming in sequentially, very accurate eigenvalues can be obtained. We reduce the dimension of the eigen-equation to n - 1 to find the eigen-mode, with one of its components normalized to unity. The real and complex eigenvalues and eigen-modes can be determined simultaneously, quickly, and accurately by the proposed methods.

1. Introduction

In the free vibration of a q-degree mass-damping-spring structure, the system of differential equations describing the motion is [1]
$$\mathbf{M}\ddot{\mathbf{q}}(t) + \mathbf{C}\dot{\mathbf{q}}(t) + \mathbf{K}\mathbf{q}(t) = \mathbf{0}, \tag{1}$$
where $\mathbf{q}(t)$ is a time-dependent q-dimensional vector that signifies the generalized displacements of the system.
In engineering applications the mass matrix M and the stiffness matrix K are positive definite because they are related to the kinetic energy and the elastic strain energy. However, the damping properties of a system, reflected in the viscous damping matrix C, are rarely known, making them difficult to evaluate exactly [2,3].
In terms of the vibration mode u, we can express the fundamental solution of Equation (1) as
$$\mathbf{q}(t) = e^{\lambda t}\mathbf{u}, \tag{2}$$
which leads to a nonlinear eigen-equation for $(\lambda, \mathbf{u})$:
$$\mathbf{Q}(\lambda)\mathbf{u} := (\lambda^2\mathbf{M} + \lambda\mathbf{C} + \mathbf{K})\mathbf{u} = \mathbf{0}, \tag{3}$$
where $\mathbf{Q}(\lambda)$ is the quadratic matrix polynomial of the structure. Equation (3) is a quadratic eigenvalue problem (QEP) that determines the eigen-pair (λ, u). In addition to the free vibrations of mechanical structures, many related systems that lead to quadratic eigenvalue problems were discussed by Tisseur and Meerbergen [4]. In the design of a linear structure, knowing the natural frequencies of the structure is vitally important to avoid resonance with external excitation. The natural frequency is the imaginary part of the eigenvalue λ in Equation (3); it is a frequency at which the structure vibrates readily [5,6].
Using the generalized Bezout method [7] to tackle Equation (3), one introduces a solvent matrix S determined by
$$\mathbf{Q}(\mathbf{S}) = \mathbf{M}\mathbf{S}^2 + \mathbf{C}\mathbf{S} + \mathbf{K} = \mathbf{0}_q, \tag{4}$$
where $\mathbf{0}_q$ is a $q\times q$ zero matrix, such that a factorization of $\mathbf{Q}(\lambda)$ reads as
$$\mathbf{Q}(\lambda) = (\lambda\mathbf{M} + \mathbf{M}\mathbf{S} + \mathbf{C})(\lambda\mathbf{I}_q - \mathbf{S}). \tag{5}$$
This gives us an opportunity to determine the eigenvalues of Equation (3) through the eigenvalues solved from two q-dimensional eigenvalue problems, a generalized one and a standard one:
$$-(\mathbf{M}\mathbf{S} + \mathbf{C})\mathbf{u} = \lambda\mathbf{M}\mathbf{u}, \quad \mathbf{S}\mathbf{u} = \lambda\mathbf{u}. \tag{6}$$
The key point of the solvent matrix method is solving the nonlinear matrix Equation (4) to obtain an accurate matrix S [8,9]. Most numerical methods that deal directly with the quadratic eigenvalue problem by solving Equations (4) and (6) are variants of Newton's method [10,11,12,13]. These Newton variants converge when the initial guess is close enough to the solution; but even for a good initial guess there is no guarantee that the method will converge to the desired eigenvalue. Various other methods exist to solve quadratic eigenvalue problems [14,15,16,17,18,19].
Let
$$\mathbf{v} = \lambda\mathbf{u} \tag{7}$$
be the generalized velocity of the vibration mode. We can combine Equations (7) and (3) as
$$\begin{bmatrix} \mathbf{0}_q & \mathbf{I}_q \\ -\mathbf{K} & -\mathbf{C} \end{bmatrix}\begin{bmatrix}\mathbf{u}\\ \mathbf{v}\end{bmatrix} = \lambda \begin{bmatrix}\mathbf{I}_q & \mathbf{0}_q \\ \mathbf{0}_q & \mathbf{M}\end{bmatrix}\begin{bmatrix}\mathbf{u}\\ \mathbf{v}\end{bmatrix}. \tag{8}$$
Defining
$$\mathbf{x} := \begin{bmatrix}\mathbf{u}\\ \mathbf{v}\end{bmatrix}, \quad \mathbf{A} := \begin{bmatrix}\mathbf{0}_q & \mathbf{I}_q \\ -\mathbf{K} & -\mathbf{C}\end{bmatrix}, \quad \mathbf{B} := \begin{bmatrix}\mathbf{I}_q & \mathbf{0}_q \\ \mathbf{0}_q & \mathbf{M}\end{bmatrix}, \tag{9}$$
Equation (8) becomes a generalized eigenvalue problem for the n-vector x:
$$\mathbf{A}\mathbf{x} = \lambda\mathbf{B}\mathbf{x}, \tag{10}$$
where $\mathbf{A}, \mathbf{B} \in \mathbb{R}^{n\times n}$ with n = 2q. Equation (10) is used to determine the eigen-pair (λ, x); it is a linear eigen-equation associated with the pencil $\mathbf{A} - \lambda\mathbf{B}$, where λ is an eigen-parameter.
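To make the linearization concrete, the following sketch (in Python with NumPy, used throughout for illustration; it is our transcription of Equation (9), not the authors' code) assembles the pencil (A, B) of Equation (10) from given M, C, K:

```python
import numpy as np

def linearize_qep(M, C, K):
    """Build the pencil (A, B) of Eq. (10) from the QEP (3):
    A = [[0, I], [-K, -C]], B = [[I, 0], [0, M]],
    so that A x = lambda * B x with x = [u; lambda*u]."""
    q = M.shape[0]
    Z, I = np.zeros((q, q)), np.eye(q)
    A = np.block([[Z, I], [-K, -C]])
    B = np.block([[I, Z], [Z, M]])
    return A, B
```

With (A, B) in hand, any of the detection methods below applies to the 2q-dimensional pencil.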
In the linearization from Equation (3) to Equation (10), an unsatisfactory aspect is that the dimension of the working space is doubled to n = 2q. However, for generalized eigenvalue problems many powerful numerical methods are available [20,21]. The numerical computations in [22,23] revealed that methods based on the Krylov subspace can be very effective for nonsymmetric eigenvalue problems, using the Lanczos biorthogonalization algorithm and Arnoldi's algorithm. The Arnoldi and nonsymmetric Lanczos methods are both Krylov subspace methods. Among the many algorithms for matrix eigenvalue problems, the Arnoldi method [23,24,25,26], the nonsymmetric Lanczos algorithm [27], and the subspace iteration method [28] are well known. The affine Krylov subspace method was first developed by Liu [29] to solve linear systems of equations; it has, however, not yet been used to solve generalized eigenvalue problems. It is well known that the eig function in MATLAB is an effective code for computing the eigenvalues of generalized eigenvalue problems. We will test it on an eigenvalue problem with highly ill-conditioned matrices and point out its limitations.
The paper sets up a new detection method for determining the real and complex eigenvalues of Equation (10) in Section 2, where the idea of an exciting vector and the excitation method (EM) are introduced and two examples are demonstrated. In Section 3, we express the generalized eigenvalue problem (10) in an affine Krylov subspace, and a nonhomogeneous linear system is derived. To precisely locate the position of a real eigenvalue in the response curve, we develop an iterative detection method (IDM) in Section 4, where we derive new methods to compute the eigenvalue and eigenvector. In Section 5 some examples of generalized eigenvalue problems are given. The EM and IDM are extended in Section 6 to directly solve the quadratic eigenvalue problem (3), where we derive the IDM in the eigen-parametric plane to determine complex eigenvalues. In Section 7, several examples are solved either by linearizing them to generalized eigenvalue problems or by treating them in their original quadratic form with the direct detection method. Finally, conclusions are drawn in Section 8.

2. A New Detection Method

Let λ(A, B) be the set of all eigenvalues of Equation (10), which may include pairs of conjugate complex eigenvalues. It is known that if λ ∉ λ(A, B) then Equation (10) has only the trivial solution x = 0 with ‖x‖ = 0. In contrast, if λ ∈ λ(A, B) then Equation (10) has a non-trivial solution with x ≠ 0 and ‖x‖ > 0, which is called the eigenvector. However, when n is large it is difficult to solve Equation (10) directly for λ and x by manual operations. Instead, numerical methods must be employed; but applied directly to Equation (10) they always return x as a zero vector no matter what λ is, since the right-hand side of (A - λB)x = 0 is zero.
To ensure that x has a finite magnitude with 0 < ‖x‖ < ∞, we consider a variable transformation from x to y by
$$\mathbf{x} = \mathbf{x}_0 + \mathbf{y}, \tag{11}$$
where $\mathbf{x}_0$ is a given nonzero vector; inserting Equation (11) into Equation (10) generates
$$(\mathbf{A} - \lambda\mathbf{B})\mathbf{y} = -(\mathbf{A} - \lambda\mathbf{B})\mathbf{x}_0. \tag{12}$$
It is important that the right-hand side is not zero because $\mathbf{x}_0 \neq \mathbf{0}$; it is a given exciting vector that renders a nonzero response y ≠ 0, whereby x ≠ 0 becomes available through Equation (11). We must emphasize that when A - λB is close to a singular matrix, we cannot eliminate A - λB in Equation (12) by inverting it to obtain y = -x0.

2.1. Real Eigenvalue

Let the eigen-parameter in Equation (12) run over an interval λ ∈ [a, b]; we can solve Equation (12) by the Gaussian elimination method to obtain y and then x = x0 + y. Hence, the response curve is formed by plotting the magnitude ‖x(λ)‖ vs. the eigen-parameter λ in the interval [a, b]. Unlike Equation (10), we can now compute y from Equation (12) and then x from Equation (11), when x0 is a given nonzero vector. Through this transformation, by solving the nonhomogeneous linear system (12) rather than the homogeneous system (10), the resultant vector x generates a nonzero finite response ‖x‖ when the eigen-parameter tends to an eigenvalue, while for eigen-parameters not near any eigenvalue the response ‖x‖ is very small, nearly zero. This technique of constructing a nonzero response curve is called the excitation method (EM).
For instance, we consider Equation (10) endowed with [20]:
$$\mathbf{A} = \begin{bmatrix} 2&3&4&5&6\\ 4&4&5&6&7\\ 0&3&6&7&8\\ 0&0&2&8&9\\ 0&0&0&1&10 \end{bmatrix}, \quad \mathbf{B} = \begin{bmatrix} 1&1&1&1&1\\ 0&1&1&1&1\\ 0&0&1&1&1\\ 0&0&0&1&1\\ 0&0&0&0&1 \end{bmatrix}. \tag{13}$$
We take [a, b] = [-5, 25] and x0 = 0, and plot ‖x(λ)‖ vs. the eigen-parameter λ ∈ [-5, 25] in Figure 1a, wherein only zero values of ‖x(λ)‖ appear.
However, under a nonzero excitation with x0 = 1_5 := (1, 1, 1, 1, 1)^T, we plot ‖x(λ)‖ vs. λ ∈ [-5, 25] in Figure 1b, where we can observe five peaks of the response curve, signifying the locations of the five real eigenvalues to be sought. When λ is not located at a peak point, ‖x(λ)‖ is zero from the theoretical point of view; the value of ‖x(λ)‖ shown in Figure 1b is, however, not exactly zero but very small, due to the machine round-off error incurred by using the Gaussian elimination method to solve Equation (12). In Section 4, we will develop an iterative method to precisely locate those eigenvalues based on the EM.
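A minimal sketch of the EM for this example (Python/NumPy; the matrices are those of Equation (13), and the grid density is our choice):

```python
import numpy as np

A = np.array([[2., 3., 4., 5., 6.],
              [4., 4., 5., 6., 7.],
              [0., 3., 6., 7., 8.],
              [0., 0., 2., 8., 9.],
              [0., 0., 0., 1., 10.]])
B = np.triu(np.ones((5, 5)))   # upper-triangular matrix of ones, Eq. (13)
x0 = np.ones(5)                # nonzero exciting vector 1_5

def response(lam):
    """||x(lam)|| from the excitation method, Eqs. (11)-(12)."""
    P = A - lam * B
    y = np.linalg.solve(P, -P @ x0)   # Gaussian elimination
    return np.linalg.norm(x0 + y)

lams = np.linspace(-5.0, 25.0, 3001)
curve = [response(lam) for lam in lams]
# The five spikes of `curve` mark the real eigenvalues seen in Figure 1b.
```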

2.2. Complex Eigenvalue

Because A and B are real matrices, the eigenvalue may be a complex number, which is assumed to be
$$\lambda = \lambda_R + i\lambda_I, \tag{14}$$
where $i^2 = -1$, and $\lambda_R$ and $\lambda_I$ are, respectively, the real and imaginary parts of λ. Correspondingly, we take
$$\mathbf{x} = \mathbf{z} + i\mathbf{w}. \tag{15}$$
Inserting Equations (14) and (15) into Equation (10) yields
$$\begin{bmatrix} \mathbf{A} - \lambda_R\mathbf{B} & \lambda_I\mathbf{B} \\ -\lambda_I\mathbf{B} & \mathbf{A} - \lambda_R\mathbf{B} \end{bmatrix}\begin{bmatrix}\mathbf{z}\\ \mathbf{w}\end{bmatrix} = \mathbf{0}. \tag{16}$$
Letting
$$\mathbf{X} := \begin{bmatrix}\mathbf{z}\\ \mathbf{w}\end{bmatrix}, \quad \mathbf{D} := \begin{bmatrix}\mathbf{A} - \lambda_R\mathbf{B} & \lambda_I\mathbf{B} \\ -\lambda_I\mathbf{B} & \mathbf{A} - \lambda_R\mathbf{B}\end{bmatrix}, \tag{17}$$
Equation (16) becomes
$$\mathbf{D}\mathbf{X} = \mathbf{0}. \tag{18}$$
Similarly, by taking
$$\mathbf{X} = \mathbf{X}_0 + \mathbf{Y}, \tag{19}$$
where $\mathbf{X}_0 \neq \mathbf{0}$ is a given exciting vector, we obtain from Equation (18)
$$\mathbf{D}\mathbf{Y} = -\mathbf{D}\mathbf{X}_0. \tag{20}$$
No matter what D is, since Equation (20) is a consistent linear system of dimension 2n, we can solve it by the Gaussian elimination method to obtain Y and then X by Equation (19).
When $\lambda_R$ and $\lambda_I$ take values inside a rectangle $(\lambda_R, \lambda_I) \in [a, b] \times [c, d]$, we can plot ‖X(λ_R, λ_I)‖ vs. (λ_R, λ_I) over the eigen-parametric plane and investigate the property of the response surface.
For instance, for
$$\mathbf{A} = \begin{bmatrix} 1 & -1 \\ 1 & 2 \end{bmatrix}, \quad \mathbf{B} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \tag{21}$$
we have a pair of complex eigenvalues:
$$\lambda = \frac{3}{2} \pm i\,\frac{\sqrt{3}}{2}. \tag{22}$$
We take [a, b] = [1, 2], [c, d] = [0, 1], and with X0 = 1_4 we plot ‖X(λ_R, λ_I)‖ over the (λ_R, λ_I) plane in Figure 2, where we can observe one peak near the point (3/2, √3/2). More precise values of (λ_R, λ_I) will be obtained by the iterative detection method developed in Section 6.
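The same excitation idea applies to the real 2n × 2n embedding of Equations (16)-(20); a sketch for the 2 × 2 matrices of Equation (21) (again Python/NumPy, with an illustrative grid):

```python
import numpy as np

A = np.array([[1., -1.],
              [1.,  2.]])
B = np.eye(2)
X0 = np.ones(4)   # exciting vector 1_4

def response_c(lr, li):
    """||X(lr, li)|| from Eqs. (17)-(20)."""
    D = np.block([[A - lr * B,  li * B],
                  [-li * B,     A - lr * B]])
    Y = np.linalg.solve(D, -D @ X0)
    return np.linalg.norm(X0 + Y)

surface = [[response_c(lr, li) for lr in np.linspace(1.0, 2.0, 201)]
           for li in np.linspace(0.0, 1.0, 201)]
# The single peak sits near (3/2, sqrt(3)/2), cf. Eq. (22) and Figure 2.
```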

3. Generalized Eigenvalue Problem in an Affine Krylov Subspace

The detection method developed in the previous section has a weakness: we need to solve an n-dimensional linear system (12) for a real eigenvalue and a 2n-dimensional linear system (20) for a complex eigenvalue. Therefore, over the many selected points inside the range we may spend a lot of computational time constructing the response curve to detect real eigenvalues, or the response surface to detect complex eigenvalues. In this section we develop the detection methods in a lower-dimensional subspace, instead of carrying them out in the full space.

3.1. The Krylov Subspace

When n is large, Equation (10) is a high-dimensional linear system. In order to reduce the dimension of the governing equation to m, we work in an m-dimensional Krylov subspace, beginning with
$$\mathbb{K}_m := \operatorname{Span}\{\mathbf{A}\mathbf{x}_0, \ldots, \mathbf{A}^m\mathbf{x}_0\}. \tag{23}$$
The Krylov matrix is fixed by
$$\mathbf{U} := [\mathbf{u}_1, \ldots, \mathbf{u}_m], \tag{24}$$
which is an n × m matrix with its jth column being the vector $\mathbf{u}_j$.
The Arnoldi process is used to normalize and orthogonalize the Krylov vectors $\mathbf{A}^j\mathbf{x}_0$, j = 1, ..., m, such that the resultant vectors $\mathbf{u}_i$, i = 1, ..., m satisfy $\mathbf{u}_i \cdot \mathbf{u}_j = \delta_{ij}$, i, j = 1, ..., m, where $\delta_{ij}$ is the Kronecker delta symbol. The Arnoldi procedure for the orthogonalization of U can be written as follows [30].
Select m and give an initial vector A x0; u1 = A x0 / ‖A x0‖.
Do j = 1 : m - 1:
  w_j = A u_j,
  Do i = 1 : j:
    h_ij = u_i · w_j,
    w_j = w_j - h_ij u_i,
  Enddo of i,
  h_{j+1,j} = ‖w_j‖; if h_{j+1,j} = 0, stop,
  u_{j+1} = w_j / h_{j+1,j},
Enddo of j.   (25)
Here U denotes the Krylov matrix, where the subscript k in $\mathbf{U}_k$ means the kth column of U; it possesses the following property:
$$\mathbf{U}^T\mathbf{U} = \mathbf{I}_m. \tag{26}$$
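A direct NumPy transcription of this Arnoldi orthogonalization (a sketch for illustration; a breakdown with h_{j+1,j} = 0 simply returns the basis built so far):

```python
import numpy as np

def arnoldi_basis(A, x0, m):
    """Orthonormal basis U (n x m) of K_m = Span{A x0, ..., A^m x0},
    built by the Arnoldi process of Eq. (25); U.T @ U = I_m, Eq. (26)."""
    n = A.shape[0]
    U = np.zeros((n, m))
    v = A @ x0
    U[:, 0] = v / np.linalg.norm(v)
    for j in range(m - 1):
        w = A @ U[:, j]
        for i in range(j + 1):
            h = U[:, i] @ w          # h_ij = u_i . w_j
            w -= h * U[:, i]
        hn = np.linalg.norm(w)       # h_{j+1,j}
        if hn == 0.0:                # breakdown: invariant subspace reached
            return U[:, : j + 1]
        U[:, j + 1] = w / hn
    return U
```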

3.2. Linear Nonhomogeneous Equations in an Affine Krylov Subspace

Because $\mathbf{x} \in \mathbf{x}_0 + \mathbb{K}_m$, we can expand x as
$$\mathbf{x} = \mathbf{x}_0 + \mathbf{U}\boldsymbol{\alpha}, \tag{27}$$
where $\boldsymbol{\alpha}$ is an unknown m-vector to be determined. From Equations (10) and (27) it follows that
$$\mathbf{A}\mathbf{U}\boldsymbol{\alpha} + \mathbf{A}\mathbf{x}_0 = \lambda\mathbf{B}\mathbf{U}\boldsymbol{\alpha} + \lambda\mathbf{B}\mathbf{x}_0, \tag{28}$$
which is projected onto the Krylov subspace $\mathbb{K}_m$ by
$$\mathbf{U}^T\mathbf{A}\mathbf{U}\boldsymbol{\alpha} + \mathbf{U}^T\mathbf{A}\mathbf{x}_0 = \lambda\mathbf{U}^T\mathbf{B}\mathbf{U}\boldsymbol{\alpha} + \lambda\mathbf{U}^T\mathbf{B}\mathbf{x}_0. \tag{29}$$

4. An Iterative Detection Method for Real Eigenvalue

It follows from Equation (29) that
$$(\mathbf{U}^T\mathbf{A}\mathbf{U} - \lambda\mathbf{U}^T\mathbf{B}\mathbf{U})\boldsymbol{\alpha} = \lambda\mathbf{U}^T\mathbf{B}\mathbf{x}_0 - \mathbf{U}^T\mathbf{A}\mathbf{x}_0, \tag{30}$$
which is the projection of Equation (10) into the affine Krylov subspace $\mathbf{x}_0 + \mathbb{K}_m$ as a nonhomogeneous projected eigen-equation. Compared with the original eigen-equation (A - λB)x = 0 in Equation (10), Equation (30) differs by the appearance of the nonhomogeneous term λU^T B x0 - U^T A x0 on the right-hand side. When we take x0 = 0 in the right-hand side, α is a zero vector as solved by the numerical method, and thus x = 0. To excite a nonzero response of x, we must give a nonzero exciting vector x0 ≠ 0 in the right-hand side.
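Combining the Arnoldi basis with Equation (30), each response evaluation now costs only an m × m solve; a sketch (using the `arnoldi_basis` routine given above):

```python
import numpy as np

def response_subspace(lam, A, B, U, x0):
    """||x(lam)|| with x = x0 + U @ alpha, where alpha solves the
    projected nonhomogeneous eigen-equation (30)."""
    UA, UB = U.T @ A @ U, U.T @ B @ U              # m x m projections
    rhs = lam * (U.T @ (B @ x0)) - U.T @ (A @ x0)  # nonhomogeneous term
    alpha = np.linalg.solve(UA - lam * UB, rhs)
    return np.linalg.norm(x0 + U @ alpha)
```

In a scan over many λ the projections U^T A U, U^T B U, U^T A x0, and U^T B x0 would naturally be computed once outside the loop.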
In Section 2 we gave an example showing that the peaks of ‖x(λ)‖ occur at the eigenvalues in the response curve of ‖x(λ)‖ vs. λ, which motivates using a simple maximum method to determine the eigenvalue by collocating points inside an interval:
$$\max_{\lambda\in[a,b]} \|\mathbf{x}(\lambda)\|, \tag{31}$$
where the size of [a, b] must be sufficiently large to include at least one eigenvalue λ ∈ [a, b]. The numerical procedure for determining an eigenvalue of a generalized eigenvalue problem is therefore summarized as follows: (i) select m, x0, a, and b; (ii) construct U; (iii) for each collocating point λ_i ∈ [a, b], solve Equation (30), set x = x0 + Uα, and take the optimal λ_i that satisfies Equation (31).
By repeated use of Equation (31), we gradually reduce the size of the interval centered at the previous peak point, renewing the interval to a finer one. Given an initial interval [a_k, b_k], k = 0, we place the collocating points λ(i) = a_k + i(b_k - a_k)/N0, i = 0, ..., N0, and pick the maximal point, denoted λ_k. Then a finer interval is given by a_{k+1} = λ_k - (b_k - a_k)/N1 and b_{k+1} = λ_k + (b_k - a_k)/N1, which is centered at the previous peak point λ_k and has a smaller length 2(b_k - a_k)/N1 < b_k - a_k. In the new interval we pick the new maximum point, denoted λ_{k+1}. Continuing this process until |λ_{k+1} - λ_k| < ε, we obtain the eigenvalue with high accuracy. This algorithm is abbreviated as the iterative detection method (IDM).
To construct the response curve, we choose a large interval [a, b] that includes all eigenvalues, so that the rough locations of the eigenvalues can be observed as peaks in the response curve. Then, to precisely determine an individual eigenvalue, we choose a small initial interval [a0, b0] that includes that eigenvalue as an interior point. A few iterations of the IDM compute a very accurate eigenvalue.
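A sketch of the IDM zoom-in loop (Python; `response` stands for any of the magnitude functions above, and N0, N1, ε are the tuning parameters of the text, here with illustrative defaults):

```python
import numpy as np

def idm(response, a, b, N0=100, N1=10, eps=1e-15, max_iter=50):
    """Iterative detection method: maximize the response over a
    sequence of shrinking intervals centered at the previous peak."""
    lam_prev = None
    for _ in range(max_iter):
        grid = np.linspace(a, b, N0 + 1)
        lam = grid[np.argmax([response(t) for t in grid])]
        if lam_prev is not None and abs(lam - lam_prev) < eps:
            break
        half = (b - a) / N1         # new interval has length 2(b-a)/N1
        a, b = lam - half, lam + half
        lam_prev = lam
    return lam
```

For instance, `idm(response, -1.0, 1.0)` localizes the smallest eigenvalue of the pencil in Equation (13).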
When the eigenvalue λ is obtained, if one wants to compute the eigenvector, we can normalize a nonzero j0th component of x by setting x_{j0} = 1. Let
$$c_{ij} = a_{ij} - \lambda b_{ij}, \quad e_i = -c_{ij_0}, \quad i, j = 1, \ldots, n, \tag{32}$$
where $a_{ij}$ and $b_{ij}$ are the components of A and B, respectively. Then from Equation (10) follows an n0 = (n - 1)-dimensional linear system:
$$d_{ij}y_j = e_i, \quad i, j = 1, \ldots, n_0, \tag{33}$$
where the $d_{ij}$ are constructed by
Do i = 1 : n0:
  k = 0,
  Do j = 1 : n:
    if j = j0, next j,
    k = k + 1,
    d_ik = c_ij,
  Enddo of j,
Enddo of i.   (34)
We can apply the Gaussian elimination method or the conjugate gradient method to solve y = (y1, ..., y_{n0})^T in Equation (33). Then x = (x1, ..., xn)^T is recovered by
k = 0,
Do j = 1 : n:
  if j = j0, set x_j = 1 and next j,
  k = k + 1,
  x_j = y_k,
Enddo of j.   (35)
To evaluate the accuracy of the obtained eigenvalue λ and eigenvector x, we can investigate the error ‖Ax - λBx‖ in satisfying the eigen-equation (10). We will extend the above IDM to detect complex eigenvalues in Section 6.
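The column-deletion recipe of Equations (32)-(35) is compactly expressed with NumPy slicing (a sketch for the real-eigenvalue case; j0 is zero-based here, and `lam` is the eigenvalue returned by the IDM):

```python
import numpy as np

def eigenvector(A, B, lam, j0):
    """Normalize x[j0] = 1 and solve the (n-1)-dimensional system of
    Eq. (33) for the remaining components, following Eqs. (32)-(35)."""
    n = A.shape[0]
    C = A - lam * B                     # c_ij of Eq. (32)
    rows = list(range(n - 1))           # the first n0 = n-1 equations
    cols = [j for j in range(n) if j != j0]
    D = C[np.ix_(rows, cols)]           # d_ij of Eq. (34)
    e = -C[rows, j0]                    # e_i of Eq. (32)
    y = np.linalg.solve(D, e)
    x = np.empty(n)
    x[j0] = 1.0
    x[cols] = y                         # Eq. (35)
    return x
```

The residual ‖Ax - λBx‖, used as the accuracy measure in all the examples below, is then `np.linalg.norm(A @ x - lam * B @ x)`.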

5. Examples of Generalized Eigenvalue Problems

Example 1.
We consider
$$\mathbf{A} = \begin{bmatrix} 1&2&3\\ 2&4&5\\ 3&5&6 \end{bmatrix}, \quad \mathbf{G} = \begin{bmatrix} 0.001&0&0\\ 1&0.001&0\\ 2&1&0.001 \end{bmatrix}, \tag{36}$$
and $\mathbf{B} = \mathbf{G}\mathbf{G}^T$. The two smallest eigenvalues are {-0.619402940600584, 1.627440079051887}. This is a highly ill-conditioned generalized eigenvalue problem, since Cond(B) ≈ 10^{18}.
We take m = 3 and plot ‖x(λ)‖ vs. λ in Figure 3, where we can observe two peaks located at the two eigenvalues. The zigzags in the response curve are due to the ill-conditioned B in Equation (10).
Starting from [a0, b0] = [-1, 0] and under a convergence criterion of 10^{-15}, with seven iterations we obtain λ = -0.6194029406657194, which is very close to -0.619402940600584 with a difference of 4.38 × 10^{-9}. Moreover, the error in satisfying Equation (10) is very small, with ‖Ax - λBx‖ = 1.93 × 10^{-15}, where x is solved from Equations (32)-(35) with j0 = 3.
Starting from [a0, b0] = [1, 2], with six iterations we obtain λ = 1.627440079500534, which is very close to 1.627440079051887 with a difference of 1.55 × 10^{-8}, and ‖Ax - λBx‖ = 1.25 × 10^{-15} is obtained.
Example 2.
In Equation (10), A and B are given by Equation (13). We take m = 5 and plot ‖x(λ)‖ vs. λ in Figure 4, where five peaks in the response curve occur at the five eigenvalues.
Starting from [a0, b0] = [-1, 1] and under a convergence criterion of 10^{-12}, through five iterations λ = 0.1873528931969792 is obtained, with ‖Ax - λBx‖ = 4.96 × 10^{-14}, where x is solved from Equations (32)-(35) with j0 = 1.
Starting from [a0, b0] = [1, 5] and under a convergence criterion of 10^{-15}, we obtain λ = 1.313278952662423 and ‖Ax - λBx‖ = 3.77 × 10^{-14} after seven iterations.
With [a0, b0] = [5, 10] and seven iterations, we obtain λ = 5.537956370847875 and ‖Ax - λBx‖ = 3.19 × 10^{-15}.
With [a0, b0] = [10, 15] and seven iterations, we obtain λ = 12.089692853066820 and ‖Ax - λBx‖ = 2.84 × 10^{-15}.
With [a0, b0] = [15, 25] and seven iterations, we obtain λ = 21.246424716619960 and ‖Ax - λBx‖ = 3.98 × 10^{-15}.
Comparing the response curve in Figure 4 with that in Figure 1b, the numerical noise disappears when the IDM is carried out in the affine Krylov subspace.
Example 3.
Let A = [a_ij] and B = [b_ij], i, j = 1, ..., n. In Equation (10), we take [21]:
$$a_{ii} = \frac{n-i}{100}, \quad i = 1, \ldots, n; \qquad a_{i,i+1} = \frac{1}{100}, \quad i = 1, \ldots, n-1; \qquad b_{ii} = 1, \quad i = 81, \ldots, n. \tag{37}$$
All other elements are zeros, and we take n = 100. The eigenvalues are λ_j = (j - 1)/100 for j = 1, ..., 20.
We take m = 50 and plot ‖x(λ)‖ vs. λ in Figure 5a, where five peaks obtained by the IDM correspond to the first five eigenvalues. Within a finer interval [a, b] = [0.005, 0.015] in Figure 5b, one peak appears at a more precise position.
Starting from [a0, b0] = [0.005, 0.015], with one iteration under a convergence criterion of 10^{-15}, the eigenvalue obtained is very close to 0.01, with an error of 2.24 × 10^{-10}; ‖Ax - λBx‖ = 1.803 × 10^{-18} is obtained, where x is solved from Equations (32)-(35) with j0 = 99.
With [a0, b0] = [0.011, 0.021] and j0 = 99, after two iterations the eigenvalue obtained is very close to 0.02, with an error of 4.47 × 10^{-10}; ‖Ax - λBx‖ = 2.236 × 10^{-19} is obtained.
For this problem, if we take m = n and U = I_n to compute the eigenvalues in the full space, the computational time increases: the CPU time to construct the response curve is 6.09 s, and for the eigenvalue 0.01 the error is the same, 2.24 × 10^{-10}, but the CPU time rises to 5.83 s. In contrast, the subspace method spends 1.05 s to construct the response curve, and the CPU time for the eigenvalue 0.01 is likewise 1.05 s. When n is increased to 200, the subspace method with m = 50 spends 1.05 s to construct the response curve and 6.53 s for the eigenvalue 0.01; with the full-space method at n = 200, however, the CPU time is 44.91 s for the response curve and 44.67 s for the eigenvalue 0.01. Therefore, the computational efficiency of the m-dimensional affine Krylov subspace method is better than that of the full-space method with dimension n.
To further test the efficiency of the proposed method, we consider the eigenvalue problem of the Hilbert matrix [31]. In Equation (10), we take B = I_n and
$$a_{ij} = \frac{1}{i+j-1}, \quad i, j = 1, \ldots, n. \tag{38}$$
Due to the highly ill-conditioned nature of the Hilbert matrix, this is a quite difficult eigenvalue problem. We take n = 100, m = 30, and j0 = 50 to compute the largest eigenvalue, which is found to be 2.182696097757424. The affine Krylov subspace method converges very fast, within six iterations, and the CPU time is 1.52 s. The error of the eigen-equation is ‖Ax - λBx‖ = 2.005 × 10^{-13}. However, finding the largest eigenvalue in the full space with dimension n = 100 does not converge within 100 iterations, and the CPU time increases to 33.12 s. Using MATLAB, we obtain 2.182696097757423, which is close to the value obtained by the Krylov subspace method; however, MATLAB leads to a large error of Det(A - λB) = 6.94680072753434 × 10^{-17}, which indicates that the characteristic equation for the Hilbert matrix is highly ill-posed. Notice that the smallest eigenvalue is very difficult to compute, since it is very close to zero; nevertheless, we can obtain the smallest eigenvalue 3.1 × 10^{-19} with one iteration, with the error ‖Ax - λBx‖ = 9.412 × 10^{-17}. For the eigenvalue problem of the Hilbert matrix, MATLAB produces a wrong eigenvalue -9.693294591063901 × 10^{-17}, which is negative and contradicts the positivity of the eigenvalues of the Hilbert matrix. The eig function in MATLAB cannot guarantee a positive eigenvalue for the positive definite Hilbert matrix: the first 41 computed eigenvalues are all negative, and the first 73 are less than 10^{-17}, so most of these eigenvalues computed by MATLAB must be spurious. MATLAB is effective for general-purpose eigenvalue problems with normal matrices, but for highly ill-conditioned matrices its effectiveness may be lost.
In addition to its computational efficiency, the Krylov subspace method has several advantages, including easy implementation and the ability to detect all eigenvalues and compute all the corresponding eigenvectors simultaneously. One can roughly locate the eigenvalues from the peaks in the response curve and then determine their precise values by using the IDM. Even for the eigenvalue problem of the highly ill-conditioned Hilbert matrix, the Krylov subspace method is reliable.

6. An Iterative Detection Method for Complex Eigenvalue

Instead of linearizing Equation (3) to the linear generalized eigenvalue problem (10), we can treat the quadratic eigenvalue problem (3) directly. We now consider the detection of a complex eigenvalue of the quadratic eigenvalue problem (3). Because M, C, and K are real matrices, the complex eigenvalue is written as in Equation (14). When we take
$$\mathbf{u} = \mathbf{v} + i\mathbf{w}, \tag{39}$$
inserting Equations (14) and (39) into Equation (3) yields
$$\begin{bmatrix} (\lambda_R^2-\lambda_I^2)\mathbf{M}+\lambda_R\mathbf{C}+\mathbf{K} & -2\lambda_R\lambda_I\mathbf{M}-\lambda_I\mathbf{C} \\ 2\lambda_R\lambda_I\mathbf{M}+\lambda_I\mathbf{C} & (\lambda_R^2-\lambda_I^2)\mathbf{M}+\lambda_R\mathbf{C}+\mathbf{K} \end{bmatrix}\begin{bmatrix}\mathbf{v}\\ \mathbf{w}\end{bmatrix} = \mathbf{0}. \tag{40}$$
Letting
$$\mathbf{X} := \begin{bmatrix}\mathbf{v}\\ \mathbf{w}\end{bmatrix}, \quad \mathbf{D} := \begin{bmatrix} (\lambda_R^2-\lambda_I^2)\mathbf{M}+\lambda_R\mathbf{C}+\mathbf{K} & -2\lambda_R\lambda_I\mathbf{M}-\lambda_I\mathbf{C} \\ 2\lambda_R\lambda_I\mathbf{M}+\lambda_I\mathbf{C} & (\lambda_R^2-\lambda_I^2)\mathbf{M}+\lambda_R\mathbf{C}+\mathbf{K} \end{bmatrix}, \tag{41}$$
Equation (40) becomes
$$\mathbf{D}\mathbf{X} = \mathbf{0}, \tag{42}$$
which is an n = 2q dimensional homogeneous linear system.
Taking
$$\mathbf{X} = \mathbf{X}_0 + \mathbf{Y}, \tag{43}$$
where $\mathbf{X}_0 \neq \mathbf{0}$ is a given exciting vector, we obtain from Equation (42)
$$\mathbf{D}\mathbf{Y} = -\mathbf{D}\mathbf{X}_0. \tag{44}$$
The numerical procedure for determining the complex eigenvalues of the quadratic eigenvalue problem (3) is summarized as follows: (i) give q, X0, a, b, c, and d; (ii) for each collocating point (λ_R, λ_I) ∈ [a, b] × [c, d], solve Equation (44) to obtain X = X0 + Y, and take the optimal (λ_R, λ_I) that satisfies
$$\max_{(\lambda_R,\lambda_I)\in[a,b]\times[c,d]} \|\mathbf{X}(\lambda_R,\lambda_I)\|, \tag{45}$$
where the size of [a, b] × [c, d] must be large enough to include at least one complex eigenvalue.
By Equation (45), we gradually reduce the size of the rectangle centered at the previous peak point, renewing the range to a finer one. Given an initial rectangle [a_k, b_k] × [c_k, d_k], k = 0, we fix the collocating points λ_R(i) = a_k + i(b_k - a_k)/N1, i = 0, ..., N1, and λ_I(j) = c_k + j(d_k - c_k)/N2, j = 0, ..., N2, and pick the maximal point, denoted (λ_R^k, λ_I^k). Then a finer rectangle is given by (a_{k+1}, b_{k+1}) = (λ_R^k - (b_k - a_k)/N3, λ_R^k + (b_k - a_k)/N3) and (c_{k+1}, d_{k+1}) = (λ_I^k - (d_k - c_k)/N3, λ_I^k + (d_k - c_k)/N3), and in the new rectangle we pick the new maximum point, denoted (λ_R^{k+1}, λ_I^{k+1}). Continuing this process until √((λ_R^{k+1} - λ_R^k)² + (λ_I^{k+1} - λ_I^k)²) < ε, we obtain the complex eigenvalue with high accuracy. This algorithm is again abbreviated as an iterative detection method (IDM).
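A two-dimensional version of the zoom-in loop, sketched under the same assumptions (`response2d` stands for ‖X(λ_R, λ_I)‖ computed from Equation (44); the grid sizes are illustrative):

```python
import numpy as np

def idm2d(response2d, a, b, c, d, N1=40, N2=40, N3=10, eps=1e-15, max_iter=50):
    """2-D iterative detection: maximize the response surface over a
    shrinking rectangle centered at the previous peak, Eq. (45)."""
    prev = None
    for _ in range(max_iter):
        lr = np.linspace(a, b, N1 + 1)
        li = np.linspace(c, d, N2 + 1)
        vals = np.array([[response2d(r, s) for s in li] for r in lr])
        i, j = np.unravel_index(np.argmax(vals), vals.shape)
        peak = (lr[i], li[j])
        if prev is not None and np.hypot(peak[0] - prev[0],
                                         peak[1] - prev[1]) < eps:
            break
        ha, hc = (b - a) / N3, (d - c) / N3
        a, b = peak[0] - ha, peak[0] + ha
        c, d = peak[1] - hc, peak[1] + hc
        prev = peak
    return peak
```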
When the complex eigenvalue is computed, we can apply the techniques of Equations (32)-(35), with x replaced by X and A - λB replaced by the D of Equation (42), to compute the complex eigen-mode. The numerical procedures to detect the complex eigenvalue and to compute the complex eigenvector for the generalized eigenvalue problem (10), cf. Equations (14)-(18), are the same as those above.

7. Examples of Quadratic Eigenvalue Problems

7.1. Linearizing Method

For the quadratic eigenvalue problem (3) there are two major approaches: the linearization method, leading to a generalized eigenvalue problem as in Equation (10), and the decomposition method based on the generalized Bezout method as in Equations (4)-(6). In this section, we give two examples of the quadratic eigenvalue problem (3) solved by the iterative detection method of Section 4 applied to the resultant generalized eigenvalue problem, and we develop a direct detection method for Equation (3) applied to three further examples.
Example 4.
We consider the structural system (1) with [4]:
$$\mathbf{M} = \begin{bmatrix} 0&6&0\\ 0&6&0\\ 0&0&1 \end{bmatrix}, \quad \mathbf{C} = \begin{bmatrix} 1&-6&0\\ 2&-7&0\\ 0&0&0 \end{bmatrix}, \quad \mathbf{K} = \mathbf{I}_3. \tag{46}$$
There exist four real eigenvalues λ = {1/3, 1/2, 1, ∞} and two imaginary eigenvalues i and -i.
According to Equations (9) and (10), we can write
$$\mathbf{A} = \begin{bmatrix} 0&0&0&1&0&0\\ 0&0&0&0&1&0\\ 0&0&0&0&0&1\\ -1&0&0&-1&6&0\\ 0&-1&0&-2&7&0\\ 0&0&-1&0&0&0 \end{bmatrix}, \quad \mathbf{B} = \begin{bmatrix} 1&0&0&0&0&0\\ 0&1&0&0&0&0\\ 0&0&1&0&0&0\\ 0&0&0&0&6&0\\ 0&0&0&0&6&0\\ 0&0&0&0&0&1 \end{bmatrix}. \tag{47}$$
We take m = 6 and plot ‖x(λ)‖ vs. λ in Figure 6, where three peaks occur at λ = {1/3, 1/2, 1}. Starting from [a0, b0] = [0, 0.4] and under a convergence criterion of 10^{-15}, through six iterations λ = 0.3333333333333386 is obtained, with ‖Ax - λBx‖ = 1.82 × 10^{-15}, where x is solved from Equations (32)-(35) with j0 = 1. Starting from [a0, b0] = [0.4, 0.6], after one iteration we obtain λ = 0.5 with ‖Ax - λBx‖ = 1.01 × 10^{-15}. Starting from [a0, b0] = [0.7, 1.5], after two iterations we obtain λ = 1 with ‖Ax - λBx‖ = 1.85 × 10^{-17}, where x is solved from Equations (32)-(35) with j0 = 2.
By Equation (47), we cannot recover λ = ∞. To remedy this defect, we note that Equation (3) can also be expressed by Equation (10) with
$$\mathbf{A} := \begin{bmatrix} -\mathbf{K} & \mathbf{0}_q \\ \mathbf{0}_q & -\mathbf{I}_q \end{bmatrix}, \quad \mathbf{B} := \begin{bmatrix} \mathbf{C} & \mathbf{M} \\ -\mathbf{I}_q & \mathbf{0}_q \end{bmatrix}. \tag{48}$$
By Equations (10) and (48), we can take
$$\mathbf{A} = -\mathbf{I}_6, \quad \mathbf{B} = \begin{bmatrix} 1&-6&0&0&6&0\\ 2&-7&0&0&6&0\\ 0&0&0&0&0&1\\ -1&0&0&0&0&0\\ 0&-1&0&0&0&0\\ 0&0&-1&0&0&0 \end{bmatrix}. \tag{49}$$
Let μ = -1/λ. We can set up a new eigenvalue problem:
$$\mathbf{B}\mathbf{x} = \mu\mathbf{x}. \tag{50}$$
We take m = 6 and plot ‖x(μ)‖ vs. μ in Figure 7a, where we can observe four peaks located at μ = {-3, -2, -1, 0}, from which we recover λ = {1/3, 1/2, 1, ∞} by using λ = -1/μ.
Starting from [a0, b0] = [-3.5, -2.5] and under a convergence criterion of 10^{-15}, after one iteration μ = -3 is obtained, with ‖Bx - μx‖ = 1.11 × 10^{-15}, where x is solved from Equations (32)-(35) with j0 = 1. Starting from [a0, b0] = [-2.5, -1.5], after one iteration we obtain μ = -2 and ‖Bx - μx‖ = 2.04 × 10^{-15}. Starting from [a0, b0] = [-1.5, -0.5], after one iteration we obtain μ = -1 and ‖Bx - μx‖ = 1.07 × 10^{-14}, where x is solved from Equations (32)-(35) with j0 = 2. Starting from [a0, b0] = [-0.5, 0.5], after one iteration we obtain μ = 0.
We employ the IDM of Section 6 to locate the complex eigenvalue, as shown in Figure 7b. With the initial guess [a0, b0] × [c0, d0] = [-0.01, 0.01] × [0.5, 1.5], we find μ_R = 0 and μ_I = 1 with one iteration.
By the same token, when we locate the complex eigenvalue of Equation (21) with the initial guess [a0, b0] × [c0, d0] = [1, 2] × [0.5, 1], we find λ_R = 3/2 and λ_I = 0.8660254037844387 within ten iterations. Compared to the exact value in Equation (22), the error is 1.11 × 10^{-16}, which is very accurate, and the IDM converges very fast.
In structural dynamics the coefficient matrices M, C, and K are symmetric; moreover, M and K are positive definite. This example, with a non-positive definite matrix M and a non-symmetric matrix C, is borrowed from the literature [4]. This eigenvalue problem has an infinite eigenvalue.
Example 5.
We consider a free vibration problem of an MK structural system with [1]:
$$\mathbf{M} = m_0\mathbf{B} = m_0\begin{bmatrix} 1&0&0\\ 0&1&0\\ 0&0&2 \end{bmatrix}, \quad \mathbf{K} = k_0\mathbf{A} = k_0\begin{bmatrix} 2&-1&0\\ -1&3&-2\\ 0&-2&2 \end{bmatrix}. \tag{51}$$
Inserting $\mathbf{q}(t) = \mathbf{x}e^{i\omega t}$ into Equation (1) with C = 0, we can derive
$$\mathbf{K}\mathbf{x} = \omega^2\mathbf{M}\mathbf{x}. \tag{52}$$
Let λ = m0ω²/k0, so that Ax = λBx. We take m = 3 and plot ‖x(λ)‖ vs. λ in Figure 8, where we observe three peaks, whose precise locations are 0.1391941468883, 1.745898311614913, and 4.114907541476740, respectively. These values coincide with the roots of the characteristic equation Det(A - λB) = 0, that is,
$$\lambda^3 - 6\lambda^2 + 8\lambda - 1 = 0. \tag{53}$$
The corresponding natural frequencies are 0.3730873180480677√(k0/m0), 1.321324453574864√(k0/m0), and 2.028523488026880√(k0/m0), respectively.
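These numbers are easy to reproduce; a quick check (Python/NumPy, solving the cubic (53) directly under the stated nondimensionalization):

```python
import numpy as np

# Roots of lambda^3 - 6*lambda^2 + 8*lambda - 1 = 0, Eq. (53)
lams = np.sort(np.roots([1.0, -6.0, 8.0, -1.0]))
omegas = np.sqrt(lams)   # natural frequencies in units of sqrt(k0/m0)
# lams   ~ [0.13919, 1.74590, 4.11491]
# omegas ~ [0.37309, 1.32132, 2.02852]
```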
Starting from [a0, b0] = [0, 0.5] and under a convergence criterion of 10^{-10}, with four iterations ‖Ax - λBx‖ = 1.79 × 10^{-15} is obtained, where x is solved from Equations (32)-(35) with j0 = 1. Starting from [a0, b0] = [1, 2], with five iterations we obtain ‖Ax - λBx‖ = 7.63 × 10^{-11}. Starting from [a0, b0] = [4, 5], with four iterations we obtain ‖Ax - λBx‖ = 1.44 × 10^{-13}.
The corresponding eigen-modes are given as follows:
$$\mathbf{x}^{(1)} = \begin{bmatrix} 1 \\ 1.860805853111704 \\ 2.161702138043240 \end{bmatrix}, \quad \mathbf{x}^{(2)} = \begin{bmatrix} 1 \\ 0.2541016883850868 \\ -0.3406653217873807 \end{bmatrix}, \quad \mathbf{x}^{(3)} = \begin{bmatrix} 1 \\ -2.114907541476740 \\ 0.6789631837592252 \end{bmatrix}. \tag{54}$$
We extend the results to a ten-degree system with m1 = ... = m9 = m0, m10 = 2m0, k1 = ... = k9 = k0, and k10 = 2k0. The response curve of ‖x(λ)‖ vs. λ is plotted in Figure 9, where we observe ten peaks. The minimal frequency is 0.1371265436475375√(k0/m0) and the maximal frequency is 2.058147747545097√(k0/m0).
As a practical application, we consider a five-story shear building with [32]
$$\mathbf{M} = \begin{bmatrix} 140&0&0&0&0\\ 0&120&0&0&0\\ 0&0&120&0&0\\ 0&0&0&120&0\\ 0&0&0&0&100 \end{bmatrix} \text{kip/g}, \quad \mathbf{K} = \begin{bmatrix} 800&-400&0&0&0\\ -400&600&-200&0&0\\ 0&-200&400&-200&0\\ 0&0&-200&300&-100\\ 0&0&0&-100&100 \end{bmatrix} \text{kip/in}. \tag{55}$$
We take m = 5 and plot ‖x(λ)‖ vs. λ in Figure 10a, where we observe five peaks. Starting from [a0, b0] = [0, 1] and under a convergence criterion of 10^{-15}, with six iterations ω² = 0.2039991612696613 is obtained, with ‖Kx - ω²Mx‖ = 2.85 × 10^{-13}, where x is solved from Equations (32)-(35) with j0 = 1; the first mode is shown in the first column of Figure 10b.
Starting from [a0, b0] = [1, 2], with seven iterations ω² = 1.195924448669029 is obtained, with ‖Kx - ω²Mx‖ = 2.37 × 10^{-13}; the second mode is shown in the second column of Figure 10b.
Starting from [a0, b0] = [2, 3], with six iterations ω² = 2.55144529001161 is obtained, with ‖Kx - ω²Mx‖ = 1.43 × 10^{-13}; the third mode is shown in the third column of Figure 10b.
Starting from [a0, b0] = [4, 5], with six iterations ω² = 4.870842516791811 is obtained, with ‖Kx - ω²Mx‖ = 2.33 × 10^{-12}; the fourth mode is shown in the fourth column of Figure 10b.
Starting from [a0, b0] = [8, 9] and under a convergence criterion of 10^{-15}, with six iterations ω² = 8.725407630876937 is obtained, with ‖Kx - ω²Mx‖ = 1.78 × 10^{-11}, where x is solved from Equations (32)-(35) with j0 = 1; the fifth mode is shown in the fifth column of Figure 10b. We have compared this to the exact solution ω² = 8.725407630876935, solved from the characteristic equation Det(K - ω²M) = 0, with an error of 3.05 × 10^{-8}; the characteristic equation is given by
$$\begin{vmatrix} 800-140\omega^2 & -400 & 0 & 0 & 0 \\ -400 & 600-120\omega^2 & -200 & 0 & 0 \\ 0 & -200 & 400-120\omega^2 & -200 & 0 \\ 0 & 0 & -200 & 300-120\omega^2 & -100 \\ 0 & 0 & 0 & -100 & 100-100\omega^2 \end{vmatrix} = \big[(800-140\omega^2)(600-120\omega^2) - 160000\big]\big[(400-120\omega^2)(300-120\omega^2)(100-100\omega^2) - 10000(400-120\omega^2) - 40000(100-100\omega^2)\big] - 40000(800-140\omega^2)\big[(300-120\omega^2)(100-100\omega^2) - 10000\big] = 0. \tag{56}$$
The result ω² = 8.8746 presented in [32] has an error of 0.1492, which is much larger than 3.05 × 10^{-8}. The advantage of the presented method is that it converges fast and is very accurate. The errors of the characteristic equation from the first to the fifth eigenvalues are, respectively, Det(K - ω²M) = 1.83 × 10^{-4}, 3.66 × 10^{-4}, 1.22 × 10^{-4}, 3.78 × 10^{-3}, and 3.52 × 10^{-2}.
The fundamental modes and frequencies obtained here converge faster and are more accurate than those of the Stodola iteration method described in [32]. We have checked the accuracy in satisfying the characteristic equation Det(K - ω²M) = 0, which is on the order of 10^{-4} to 10^{-2} for the presented method. With the Stodola method the accuracy is poor, with a very large error on the order of 10^{12} for the fifth eigenvalue in satisfying the characteristic equation. One reason for this large error is that even a small error in ω² obtained by the Stodola method is amplified by the product of the large coefficients in M and K, by an amount of about 5.76 × 10^{12}, estimated as the product of the diagonal elements of K.

7.2. Direct Detection Method

Example 6.
Letting μ = 1/λ, Equation (3) can be written as
$$(\mathbf{M} + \mu\mathbf{C} + \mu^2\mathbf{K})\mathbf{u} = \mathbf{0}. \tag{57}$$
By letting
$$\mathbf{D} := \mathbf{M} + \mu\mathbf{C} + \mu^2\mathbf{K}, \quad \mathbf{u} = \mathbf{u}_0 + \mathbf{v}, \tag{58}$$
we arrive at a nonhomogeneous linear system:
$$\mathbf{D}\mathbf{v} = -\mathbf{D}\mathbf{u}_0, \tag{59}$$
where $\mathbf{u}_0$ is a nonzero constant exciting vector.
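For this direct treatment the response function is assembled straight from Equation (58) (a sketch; M, C, K and the exciting vector u0 are supplied by the user):

```python
import numpy as np

def response_mu(mu, M, C, K, u0):
    """||u(mu)|| for the direct detection of Eq. (57),
    with D = M + mu*C + mu^2*K as in Eqs. (58)-(59)."""
    D = M + mu * C + mu**2 * K
    v = np.linalg.solve(D, -D @ u0)
    return np.linalg.norm(u0 + v)
```

The one-dimensional IDM sketched in Section 4 then applies unchanged, with `response_mu` in place of `response`.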
We apply the IDM of Section 4 to directly detect the real eigenvalues of Example 4, and plot ‖u(μ)‖ vs. μ in Figure 11, where four peaks occur at μ = {0, 1, 2, 3}, from which we recover λ = {1/3, 1/2, 1, ∞} by using λ = 1/μ.
Starting from [a0, b0] = [-0.5, 0.5] and under a convergence criterion of 10^{-15}, with six iterations μ = 6.56 × 10^{-4} is obtained. Starting from [a0, b0] = [0.5, 1.5], with six iterations we obtain μ = 1.000656781471890 and ‖Du‖ = 4.63 × 10^{-4}. Starting from [a0, b0] = [1.5, 2.5], after one iteration we obtain μ = 2 and ‖Du‖ = 0. Starting from [a0, b0] = [2.5, 4.5], after two iterations we obtain μ = 3 and ‖Du‖ = 2.48 × 10^{-16}.
The eigen-modes corresponding to λ = {1/3, 1/2, 1} are given as follows:
$$\mathbf{u}^{(1)} = \begin{bmatrix} 1\\ 1\\ 0 \end{bmatrix}, \quad \mathbf{u}^{(2)} = \begin{bmatrix} 1\\ 1\\ 0 \end{bmatrix}, \quad \mathbf{u}^{(3)} = \begin{bmatrix} 1.805\times 10^{-3}\\ 1\\ 0 \end{bmatrix}. \tag{60}$$
Example 7.
We consider [12,33]:
$$\mathbf{M} = \begin{bmatrix} 17.6&1.28&2.89\\ 1.28&0.824&0.413\\ 2.89&0.413&0.725 \end{bmatrix}, \quad \mathbf{C} = \begin{bmatrix} 7.66&2.45&2.1\\ 0.23&1.04&0.223\\ 0.6&0.756&0.658 \end{bmatrix}, \quad \mathbf{K} = \begin{bmatrix} 121&18.9&15.9\\ 0&2.7&0.145\\ 11.9&3.64&15.5 \end{bmatrix}. \tag{61}$$
There exist three pairs of complex eigenvalues.
Beginning with [a0, b0] × [c0, d0] = [-1, -0.5] × [8.4, 8.5], [0, 0.2] × [2, 3], and [-1, -0.5] × [1, 2], respectively, and under a convergence criterion of 10^{-15}, the IDM with 11 iterations converges to the following eigenvalues:
$$\begin{aligned} \lambda_1 &= -0.8848302276193034 \pm i\,8.441512059499651,\\ \lambda_2 &= 0.09472173815159692 \pm i\,2.52287655639731,\\ \lambda_3 &= -0.9179981428161037 \pm i\,1.760584228706396, \end{aligned} \tag{62}$$
whose corresponding complex eigen-modes have the errors ‖DX‖ = 7.45 × 10^{-14}, 3.02 × 10^{-14}, and 7.43 × 10^{-15}, respectively. The accuracy of the eigenvalues and eigen-modes is on the order of 10^{-14} or 10^{-15}.
Example 8.
We extend Example 5 to a q-degree MCK system with m1 = ... = m_{q-1} = 1, m_q = 2, c1 = ... = c_{q-1} = 0.01, c_q = 0.02, k1 = ... = k_{q-1} = 1, and k_q = 2. With q = 3, we employ the IDM of Section 6 to locate the complex eigenvalue; over the plane there exists one peak, as shown in Figure 12. With the initial guess [a0, b0] × [c0, d0] = [-0.001, 0] × [0.36, 0.38] and j0 = 4, we find λ_R = -0.0006959802368 and λ_I = 0.3730861090816 with six iterations. The corresponding complex eigen-mode is given by
$$\mathbf{u} = \mathbf{v} + i\mathbf{w} = \begin{bmatrix} 0.01700169666735284\\ 0.03163685511252461\\ 0.03675260035588075 \end{bmatrix} + i\begin{bmatrix} 1\\ 1.860806270952949\\ 2.161703124485385 \end{bmatrix}, \tag{63}$$
whose error is ‖DX‖ = 2.67 × 10^{-6}.
For q = 5, with the initial guess [a0, b0] × [c0, d0] = [-0.1, 0] × [0.1, 0.5] and j0 = 5, we find λ_R = -0.0003059040256 and λ_I = 0.24710529040384 with seven iterations, and the error ‖DX‖ = 2.08 × 10^{-5} is obtained.
In Figure 13, with j0 = 3, we plot the maximal and minimal frequencies with respect to q; it can be seen that the maximal frequencies are insensitive to the dimension q of the MCK system.
Remark 1.
The presented IDM is easily extended to nonlinear eigenvalue problems [34]. For example, we change Equation (3) to
$$(\lambda^2\mathbf{M} + \lambda\mathbf{C} + \mathbf{K})\mathbf{u} = \mathbf{0}, \tag{64}$$
where M, C, and K are given by Equation (46). Equation (64) cannot be linearized to a linear problem as in Equation (10).
It is interesting that the original real eigenvalues μ = {0, 1, 2, 3} = 1/λ are changed to three, as shown in Figure 14, where we observe three peaks. Starting from [a0, b0] = [0.8, 1.1], with six iterations we obtain μ = 1.001315771923528 and ‖Du‖ = 4.62 × 10^{-4}. Starting from [a0, b0] = [1.1, 1.5], with six iterations we obtain μ = 1.346624934599962 and ‖Du‖ = 2.58 × 10^{-9}.

8. Conclusions

This paper was concerned with fast iterative solutions of generalized and quadratic eigenvalue problems. Since the vector eigen-equation is a homogeneous linear system, the eigenvector is either a zero vector, when the eigen-parameter is not an eigenvalue, or an undetermined vector, when the eigen-parameter is an eigenvalue. Based on the original eigen-equation, we cannot construct the response curve, i.e., the magnitude of the eigenvector with respect to the eigen-parameter. We therefore transformed the eigenvector to a new vector including a nonzero exciting vector, which, inserted into the original eigen-equation, yields a nonhomogeneous linear system for the new vector. By varying the eigen-parameter over a desired interval, we can construct the response curve. This is the new idea of the so-called excitation method (EM). Then, by maximizing the magnitude of the eigenvector solved from the nonhomogeneous linear system, we can quickly detect the location of an eigenvalue, which appears as a peak in the response curve. To obtain the eigenvalue precisely, we developed an iterative detection method (IDM) by sequentially reducing the size of the search interval. To reduce the computational cost of the generalized eigenvalue problem, we derived the nonhomogeneous linear system in a lower m-dimensional affine Krylov subspace. If m is large enough, with m < n, the presented method can find all eigenvalues very effectively. Then, we reduced the eigen-equation by one dimension to a nonhomogeneous linear system that determines the eigenvector with high accuracy.
In summary, the key outcomes are as follows.
  • We transformed the eigenvector to a new vector including a nonzero exciting vector, which by inserting into the original eigen-equation yields a nonhomogeneous linear system for the new vector.
  • By varying the eigen-parameter in a desired interval, we can construct the response curve.
  • By maximizing the magnitude of the eigenvector solved from the nonhomogeneous linear system, we can quickly detect the location of the eigenvalue, which is a peak in the response curve.
  • To precisely obtain the eigenvalue, we developed an iterative detection method (IDM) by sequentially zooming in.
  • From the peaks in the response curve one can roughly locate the eigenvalues and then determine their precise values by using the IDM.
  • By using the EM and IDM, we have solved the quadratic eigenvalue problem for the application to the free vibrations of multi-degree mechanical systems.
  • For complex eigenvalues we developed the IDM in the eigen-parametric plane of the real and imaginary parts of the complex eigenvalue. Very accurate eigenvalues and eigen-modes can be obtained through merely a few iterations.

Author Contributions

Conceptualization, C.-S.L.; Data curation, C.-S.L.; Formal analysis, C.-S.L.; Funding acquisition, C.-W.C.; Investigation, C.-W.C.; Methodology, C.-S.L.; Project administration, C.-W.C.; Resources, C.-S.L.; Software, C.-W.C.; Supervision, C.-S.L. and C.-W.C.; Validation, C.-W.C.; Visualization, C.-W.C. and C.-L.K.; Writing—original draft, C.-S.L.; Writing—review & editing, C.-W.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Meirovitch, L. Elements of Vibrational Analysis, 2nd ed.; McGraw-Hill: New York, NY, USA, 1986.
  2. Smith, H.A.; Singh, R.K.; Sorensen, D.C. Formulation and solution of the non-linear, damped eigenvalue problem for skeletal systems. Int. J. Numer. Meth. Eng. 1995, 38, 3071–3085.
  3. Osinski, Z. (Ed.) Damping of Vibrations; A.A. Balkema: Rotterdam, The Netherlands, 1998.
  4. Tisseur, F.; Meerbergen, K. The quadratic eigenvalue problem. SIAM Rev. 2001, 43, 235–286.
  5. Cakar, O. Mass and stiffness modifications without changing any specified natural frequency of a structure. J. Vib. Control 2011, 17, 769–776.
  6. Gao, W. Natural frequency and mode shape analysis of structures with uncertainty. Mech. Syst. Signal Process. 2007, 21, 24–39.
  7. Gantmacher, F.R. The Theory of Matrices 1; Chelsea: New York, NY, USA, 1959.
  8. Kublanovskaya, V.N. On an approach to the solution of the generalized latent value problem for λ-matrices. SIAM J. Numer. Anal. 1970, 7, 532–537.
  9. Ruhe, A. Algorithms for the nonlinear eigenvalue problem. SIAM J. Numer. Anal. 1973, 10, 674–689.
  10. Dennis, J.E., Jr.; Traub, J.F.; Weber, R.P. The algebraic theory of matrix polynomials. SIAM J. Numer. Anal. 1976, 13, 831–845.
  11. Davis, G.J. Numerical solution of a quadratic matrix equation. SIAM J. Sci. Stat. Comput. 1981, 2, 164–175.
  12. Higham, N.J.; Kim, H. Solving a quadratic matrix equation by Newton's method with exact line searches. SIAM J. Matrix Anal. Appl. 2001, 23, 303–316.
  13. Higham, N.J.; Kim, H. Numerical analysis of a quadratic matrix equation. IMA J. Numer. Anal. 2000, 20, 499–519.
  14. Bai, Z.; Su, Y. SOAR: A second-order Arnoldi method for the solution of the quadratic eigenvalue problem. SIAM J. Matrix Anal. Appl. 2005, 26, 640–659.
  15. Qian, J.; Lin, W.-W. A numerical method for quadratic eigenvalue problems of gyroscopic systems. J. Sound Vib. 2007, 306, 284–296.
  16. Meerbergen, K. The quadratic Arnoldi method for the solution of the quadratic eigenvalue problem. SIAM J. Matrix Anal. Appl. 2008, 30, 1463–1482.
  17. Hammarling, S.; Munro, C.J.; Tisseur, F. An algorithm for the complete solution of quadratic eigenvalue problems. ACM Trans. Math. Softw. 2013, 39, 18.
  18. Guo, C.-H. Numerical solution of a quadratic eigenvalue problem. Linear Algebra Appl. 2004, 385, 391–406.
  19. Chen, C.; Ma, C. An accelerated cyclic-reduction-based solvent method for solving quadratic eigenvalue problem of gyroscopic systems. Comput. Math. Appl. 2019, 77, 2585–2595.
  20. Golub, G.H.; Van Loan, C.F. Matrix Computations; The Johns Hopkins University Press: Baltimore, MD, USA, 2012.
  21. Sakurai, T.; Sugiura, H. A projection method for generalized eigenvalue problems using numerical integration. J. Comput. Appl. Math. 2003, 159, 119–128.
  22. Saad, Y. Variations on Arnoldi's method for computing eigenelements of large unsymmetric matrices. Linear Algebra Appl. 1980, 34, 269–295.
  23. Saad, Y. Chebyshev acceleration techniques for solving nonsymmetric eigenvalue problems. Math. Comput. 1984, 42, 567–588.
  24. Arnoldi, W.E. The principle of minimized iterations in the solution of the matrix eigenvalue problem. Quart. Appl. Math. 1951, 9, 17–29.
  25. Saad, Y. Numerical solution of large nonsymmetric eigenvalue problems. Comput. Phys. Commun. 1989, 53, 71–90.
  26. Morgan, R.B. On restarting the Arnoldi method for large nonsymmetric eigenvalue problems. Math. Comput. 1996, 65, 1213–1230.
  27. Parlett, B.N.; Taylor, D.R.; Liu, Z.A. A look-ahead Lanczos algorithm for unsymmetric matrices. Math. Comput. 1985, 44, 105–124.
  28. Nour-Omid, B.; Parlett, B.N.; Taylor, R.L. Lanczos versus subspace iteration for solution of eigenvalue problems. Int. J. Numer. Meth. Eng. 1983, 19, 859–871.
  29. Liu, C.-S. A doubly optimized solution of linear equations system expressed in an affine Krylov subspace. J. Comput. Appl. Math. 2014, 260, 375–394.
  30. Saad, Y. Iterative Methods for Sparse Linear Systems, 2nd ed.; SIAM: Philadelphia, PA, USA, 2003.
  31. Fettis, H.E.; Caslin, J.C. Eigenvalues and eigenvectors of Hilbert matrices of order 3 through 10. Math. Comput. 1967, 21, 431–441.
  32. Berg, G.V. Elements of Structural Dynamics; Prentice-Hall: Hoboken, NJ, USA, 1988.
  33. Lancaster, P. Lambda-Matrices and Vibrating Systems; Pergamon Press: Oxford, UK, 1966.
  34. Chiappinelli, R. What do you mean by "nonlinear eigenvalue problems"? Axioms 2018, 7, 39.
Figure 1. For the generalized eigenvalue problem (13): (a) zero response under a zero excitation; (b) five peaks in the response curve under a nonzero excitation.
Figure 2. Detection of a complex eigenvalue for a 2 × 2 matrix.
Figure 3. For Example 1, two peaks in the response curve obtained from the IDM, corresponding to the eigenvalues -0.619402940600584 and 1.627440079051887.
Figure 4. For Example 2, five peaks in the response curve obtained from the IDM, corresponding to five eigenvalues.
Figure 5. For Example 3: (a) five peaks of ‖x‖ with m = 50 in the interval [-0.01, 0.05]; (b) one peak of ‖x‖ enlarged in the interval [0.005, 0.015], a zoom-in process.
Figure 6. For Example 4, three peaks of ‖x‖ in the response curve in the interval [0, 1.6].
Figure 7. For Example 4: (a) four peaks of ‖x‖ in the response curve in the interval [-4, 1]; (b) one peak of ‖X‖ over the plane, enlarged in the rectangle [-0.01, 0.01] × [0.5, 1.5].
Figure 8. For Example 5, three peaks of ‖x‖ in the response curve in the interval [0, 5].
Figure 9. For Example 5 with ten degrees of freedom, ten peaks of ‖x‖ in the response curve in the interval [0, 5].
Figure 10. For a five-degree MK system: (a) five peaks of ‖x‖ in the response curve in the interval [0, 10]; (b) the five vibration modes.
Figure 11. For Example 4 solved as a quadratic eigenvalue problem, four peaks of ‖u‖ in the response curve in the interval [0, 3].
Figure 12. For Example 8, one peak of ‖u‖ in the response surface for complex eigenvalues.
Figure 13. For Example 8 with different dimensions, the maximal and minimal frequencies.
Figure 14. For a nonlinear perturbation of Example 4, three peaks of ‖u‖ in the response curve in the interval [0, 2].
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
