Article

A Class of Adaptive Exponentially Fitted Rosenbrock Methods with Variable Coefficients for Symmetric Systems

1 School of Mathematics and Statistics, Huazhong University of Science and Technology, Wuhan 430074, China
2 Hubei Key Laboratory of Engineering Modeling and Scientific Computing, Huazhong University of Science and Technology, Wuhan 430074, China
* Author to whom correspondence should be addressed.
Submission received: 30 June 2022 / Revised: 9 August 2022 / Accepted: 12 August 2022 / Published: 17 August 2022

Abstract
In several important scientific fields, the efficient numerical solution of symmetric systems of ordinary differential equations, whose solutions are usually characterized by oscillation and periodicity, is a problem of considerable interest. In this paper, we construct a class of embedded exponentially fitted Rosenbrock methods with variable coefficients and adaptive step size, which achieve third order convergence. The methods are developed by applying the exponentially fitted technique to two-stage Rosenbrock methods and combining them with embedded methods to estimate the frequency. Using Richardson extrapolation, we determine a step size control strategy that makes the step size adaptive. Numerical experiments are given to verify the validity and efficiency of our methods.

1. Introduction

In several important scientific fields such as quantum mechanics, elasticity, and electronics, many problems can be represented by mathematical models of symmetric systems, see, e.g., [1,2,3,4], which usually lead to ordinary differential equations characterized by oscillation and periodicity [5,6], i.e.,
$$y'(t) = f(t, y(t)), \quad t \in [0, T], \qquad y(0) = y_0 \in \mathbb{R}^d. \tag{1}$$
In view of the oscillation and periodicity of the equations in symmetric systems, the exponentially fitted methods, whose theoretical basis was first provided by Gautschi [7] and Lyche [8], have been considered to solve these equations in many studies. For instance, the authors of [9,10] investigated exponentially fitted two-step BDF methods and linear multistep methods. The idea of exponential fitting was first applied to the Runge–Kutta methods by Simos [11] in 1998. Since then, based on Simos’ research, a number of exponentially fitted Runge–Kutta methods have been constructed [12,13,14]. Some scholars have also tried to estimate the frequency of the methods by analyzing the local truncation error, and to control the step size to improve the accuracy and efficiency [15,16].
However, the exponentially fitted technique has mostly been applied to Runge–Kutta methods, which either have difficulty solving stiff problems in their explicit form or require a large amount of computation in their implicit form. For this reason, Rosenbrock methods, based on the idea introduced by Rosenbrock [17] that a single Newton iteration is enough to preserve the stability properties of the diagonally implicit Runge–Kutta methods, have been considered for ordinary differential equations in symmetric systems. The general form of Rosenbrock methods for first-order ordinary differential equations was given by Hairer and Wanner [18]; since then, many scholars have developed this form and analyzed its implementation, see, e.g., [19,20] and their references. Rosenbrock methods not only keep the stability of the corresponding diagonally implicit Runge–Kutta methods but also reduce the amount of computation compared with implicit methods, because only a linear system of equations needs to be solved per step. At present, there have been some studies on exponentially fitted Rosenbrock methods [21,22,23], but these methods, which use constant frequency and step size, have difficulty solving the equations efficiently and adaptively.
In this paper, we will combine the exponentially fitted Rosenbrock methods with the embedded Rosenbrock methods to estimate the frequency before each step and control the step size by using Richardson extrapolation. By the frequency estimation and step size control, our methods with variable coefficients can solve the equations with oscillation and periodicity efficiently, and the order of convergence can be improved by one compared with the methods with constant coefficients.
The outline of this paper is as follows. In Section 2, a class of exponentially fitted Rosenbrock methods is constructed, and we give the local truncation error, frequency estimation, and stability analysis of the methods. In Section 3, we combine the exponentially fitted Rosenbrock methods with the embedded Rosenbrock methods to construct a kind of embedded variable coefficient exponentially fitted Rosenbrock (3,2) methods, and perform the frequency estimation and step size control strategy. In Section 4, three numerical tests are presented to verify the validity of our methods by comparing the number of calculation steps, the error and the calculating time with other numerical methods. Section 5 gives some discussion and remarks.

2. A Class of Exponentially Fitted Rosenbrock Methods

In this section, a class of exponentially fitted Rosenbrock methods for the models of the ordinary differential equations is constructed, and we give the local truncation error, frequency estimation, and stability analysis of the methods.
Applying the s-stage Rosenbrock method to solve system (1) yields
$$\begin{aligned} k_i &= h f\Big(t_n + \alpha_i h,\; d_i y_n + \sum_{j=1}^{i-1} \alpha_{ij} k_j\Big) + \gamma_i h^2 f_t(t_n, y_n) + h J \sum_{j=1}^{i} \gamma_{ij} k_j, \quad i = 1, 2, \ldots, s, \\ y_{n+1} &= y_n + \sum_{j=1}^{s} b_j k_j, \end{aligned}$$
where $h$ is the step size, $y_n \approx y(t_n)$, $J = f_y(t_n, y_n)$, and $\alpha_{ij}, \gamma_{ij}, \alpha_i, \gamma_i, d_i$ are real coefficients which satisfy $\alpha_i = \sum_{j=1}^{i-1} \alpha_{ij}$ and $\gamma_i = \sum_{j=1}^{i} \gamma_{ij}$ for $i = 1, 2, \ldots, s$.
We can also change the nonautonomous system (1) to an autonomous system by the following transformation,
$$\begin{pmatrix} t \\ y(t) \end{pmatrix}' = \begin{pmatrix} 1 \\ f(t, y(t)) \end{pmatrix}, \qquad \begin{pmatrix} t_0 \\ y(t_0) \end{pmatrix} = \begin{pmatrix} 0 \\ y_0 \end{pmatrix}.$$
Thus, we will focus on the autonomous problems for simplicity of presentation, i.e.,
$$y'(t) = f(y(t)), \quad t \in [0, T], \qquad y(0) = y_0, \tag{2}$$
where $y(t): [0, T] \to \mathbb{R}^d$ is assumed to be thrice continuously differentiable and $f(y): \mathbb{R}^d \to \mathbb{R}^d$ is assumed to be twice continuously differentiable for the subsequent theoretical analysis. We consider the following s-stage Rosenbrock methods for the autonomous system (2),
$$\begin{aligned} k_i &= h f\Big(d_i y_n + \sum_{j=1}^{i-1} \alpha_{ij} k_j\Big) + h J \sum_{j=1}^{i} \gamma_{ij} k_j, \quad i = 1, 2, \ldots, s, \\ y_{n+1} &= y_n + \sum_{j=1}^{s} b_j k_j, \end{aligned} \tag{3}$$
or, in tableau form,
$$\begin{array}{c|c|ccccc}
0 & d_1 & \gamma & & & & \\
\alpha_2 & d_2 & \beta_{21} & \gamma & & & \\
\alpha_3 & d_3 & \beta_{31} & \beta_{32} & \gamma & & \\
\vdots & \vdots & \vdots & \vdots & & \ddots & \\
\alpha_s & d_s & \beta_{s1} & \beta_{s2} & \beta_{s3} & \cdots & \gamma \\
\hline
 & & b_1 & b_2 & b_3 & \cdots & b_s
\end{array}, \tag{4}$$
i.e.,
$$\begin{array}{c|c|c} \boldsymbol{\alpha} & \boldsymbol{d} & A + V \\ \hline & & \boldsymbol{b}^T \end{array},$$
where $\alpha_1 = 0$, $\alpha_{ij} + \gamma_{ij} = \beta_{ij}$, $\alpha_{ii} = 0$, $\beta_{ii} = \gamma$ for $i, j = 1, 2, \ldots, s$ and
$$A = \begin{pmatrix} 0 & & & \\ \alpha_{21} & 0 & & \\ \vdots & \ddots & \ddots & \\ \alpha_{s1} & \cdots & \alpha_{s,s-1} & 0 \end{pmatrix}, \qquad V = \begin{pmatrix} \gamma & & & \\ \gamma_{21} & \gamma & & \\ \vdots & \ddots & \ddots & \\ \gamma_{s1} & \cdots & \gamma_{s,s-1} & \gamma \end{pmatrix}.$$
Compared with the classic Rosenbrock methods, the methods (4), in which $\gamma_{ii} = \gamma$ for $i = 1, 2, \ldots, s$, need only one LU-decomposition per step and their order conditions can be simplified [18]. Moreover, this kind of method has more degrees of freedom in its coefficient selection due to the extra coefficients $d_i$ in each step.
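To make the structure of (4) concrete, the following minimal sketch (our own illustration, not code from the paper; all names are ours) implements one step of a 2-stage method of this family, reusing the single matrix factorization for both stages:

```python
import numpy as np

def rosenbrock2_step(f, jac, y, h, d, alpha21, gamma21, gamma, b):
    """One step of a 2-stage Rosenbrock method of the form (4).

    f       : right-hand side f(y)
    jac     : Jacobian J = f'(y)
    d       : (d_1, d_2); alpha21, gamma21, gamma, b = (b_1, b_2)
              are the method coefficients (gamma_ii = gamma).
    """
    n = y.size
    J = jac(y)
    M = np.eye(n) - gamma * h * J                  # factored once per step
    k1 = np.linalg.solve(M, h * f(d[0] * y))
    rhs2 = h * f(d[1] * y + alpha21 * k1) + gamma21 * h * (J @ k1)
    k2 = np.linalg.solve(M, rhs2)
    return y + b[0] * k1 + b[1] * k2

# Applied to y' = lam*y, one step must reproduce the stability
# function (17): y_1 = R(z) y_0 with z = lam*h.  Coefficients of
# method (15) with gamma = 1/4: d = (1, 1), alpha21 = 1/2,
# gamma21 = beta21 - alpha21 = -gamma, b = (0, 1).
g, h, lam = 0.25, 0.1, -1.0
y1 = rosenbrock2_step(lambda y: lam * y, lambda y: np.array([[lam]]),
                      np.array([1.0]), h, (1.0, 1.0), 0.5, -g, g, (0.0, 1.0))
z = lam * h
R = (1 + (1 - 2*g)*z + (g*g - 2*g + 0.5)*z*z) / (1 - g*z)**2
```

In one dimension the linear solves degenerate to scalar divisions, but the code is written for systems.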
Let this method exactly integrate the functions $y(t) = e^{\pm\lambda t}$; then we have
$$\begin{aligned} (1 \mp \lambda h \gamma)\big(\pm \lambda h\, e^{\pm\lambda(t_n + \alpha_i h)}\big) &= d_i \big(\pm \lambda h\, e^{\pm\lambda t_n}\big) + \lambda^2 h^2 \sum_{j=1}^{i-1} \alpha_{ij} e^{\pm\lambda(t_n + \alpha_j h)} + \lambda^2 h^2 \sum_{j=1}^{i-1} \gamma_{ij} e^{\pm\lambda(t_n + \alpha_j h)}, \quad i = 1, 2, \ldots, s, \\ e^{\pm\lambda(t_n + h)} &= e^{\pm\lambda t_n} \pm \lambda h \sum_{j=1}^{s} b_j e^{\pm\lambda(t_n + \alpha_j h)}. \end{aligned}$$
Letting $z = \lambda h$, it follows that
$$(1 \mp z\gamma) e^{\pm\alpha_i z} = d_i \pm z \sum_{j=1}^{i-1} \beta_{ij} e^{\pm\alpha_j z}, \quad i = 1, 2, \ldots, s, \qquad e^{\pm z} = 1 \pm z \sum_{j=1}^{s} b_j e^{\pm\alpha_j z}. \tag{5}$$
We now try to construct 1–2 stage Rosenbrock methods by (5). If s = 1 , together with α 1 = 0 , we have
$$1 - z\gamma = d_1, \qquad 1 + z\gamma = d_1, \qquad e^{z} = 1 + z b_1, \qquad e^{-z} = 1 - z b_1.$$
Solving the system of equations above with $\gamma \neq 0$ yields $d_1 = 1$, $z = 0$, $b_1 = 1$, which gives a class of 1-stage exponentially fitted Rosenbrock methods, i.e.,
$$\begin{array}{c|c|c} 0 & 1 & \gamma \\ \hline & & 1 \end{array}.$$
If s = 2 , together with α 1 = 0 and d 1 = 1 , we have
$$(1 - z\gamma)e^{\alpha_2 z} = d_2 + z\beta_{21}, \qquad (1 + z\gamma)e^{-\alpha_2 z} = d_2 - z\beta_{21}, \qquad e^{z} = 1 + z b_1 + z b_2 e^{\alpha_2 z}, \qquad e^{-z} = 1 - z b_1 - z b_2 e^{-\alpha_2 z}. \tag{6}$$
In order to obtain the second order methods, Hairer and Wanner provided the following order conditions in [18], i.e.,
$$b_1 + b_2 = 1, \qquad b_2 \beta_{21} = \tfrac{1}{2} - \gamma. \tag{7}$$
Combining (6) and (7), and requiring the coefficients of the methods to satisfy the order conditions as $z \to 0$, we obtain $\alpha_2 = \frac{1}{2}$. Therefore, we obtain the following coefficients of the 2-stage exponentially fitted Rosenbrock methods with order 2,
$$\alpha_1 = 0, \quad \alpha_2 = \tfrac{1}{2}, \quad d_2 = \cosh\tfrac{z}{2} - \gamma z \sinh\tfrac{z}{2}, \quad \beta_{21} = \frac{\sinh\frac{z}{2}}{z} - \gamma \cosh\tfrac{z}{2}, \quad b_1 = \frac{\sinh z}{z} - \frac{(\cosh z - 1)\cosh\frac{z}{2}}{z \sinh\frac{z}{2}} = 0, \quad b_2 = \frac{\cosh z - 1}{z \sinh\frac{z}{2}}, \tag{8}$$
where γ is a free coefficient or, in tableau form,
$$\begin{array}{c|c|cc}
0 & 1 & \gamma & \\
\tfrac{1}{2} & \cosh\tfrac{z}{2} - \gamma z \sinh\tfrac{z}{2} & \dfrac{\sinh\frac{z}{2}}{z} - \gamma\cosh\tfrac{z}{2} & \gamma \\
\hline
 & & 0 & \dfrac{\cosh z - 1}{z \sinh\frac{z}{2}}
\end{array}.$$
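As a quick sanity check, the coefficients (8) can be evaluated numerically and verified against the fitting conditions (6). The sketch below is our own (the function name is an assumption, not from the paper):

```python
import numpy as np

def efrb2_coeffs(z, gamma):
    """Coefficients (8) of the 2-stage exponentially fitted method, z = lambda*h.

    Returns (d2, beta21, b1, b2); alpha2 = 1/2 always.
    """
    if abs(z) < 1e-8:                       # z -> 0 limit: classic method (15)
        return 1.0, 0.5 - gamma, 0.0, 1.0
    d2 = np.cosh(z / 2) - gamma * z * np.sinh(z / 2)
    beta21 = np.sinh(z / 2) / z - gamma * np.cosh(z / 2)
    b1 = 0.0                                # vanishes identically, see (8)
    b2 = (np.cosh(z) - 1.0) / (z * np.sinh(z / 2))
    return d2, beta21, b1, b2
```

For small |z| the hyperbolic expressions suffer cancellation, so switching to the classic limit values is the natural implementation choice.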
We now consider the following 2-stage exponentially fitted Rosenbrock methods of order 2 and analyze the local truncation error, i.e.,
$$\begin{aligned} k_1 &= (I - \gamma h J)^{-1} h f(d_1 y_n), \\ k_2 &= (I - \gamma h J)^{-1} \big[ h f(d_2 y_n + \alpha_2 k_1) + \gamma_{21} h J k_1 \big], \\ y_{n+1} &= y_n + b_1 k_1 + b_2 k_2, \end{aligned} \tag{9}$$
where the coefficients are given by (8). Based on Bui’s idea in [24], we expand $(I - \gamma h J)^{-1}$ as a geometric series, i.e.,
$$(I - \gamma h J)^{-1} = I + \gamma h J + \gamma^2 h^2 J^2 + O(h^3). \tag{10}$$
If we assume that y n in (9) is the exact solution y ( t n ) at t n and expand the hyperbolic functions in the coefficients in Taylor series, we can obtain a one-step approximation y ~ n + 1 of the solution at t n + h by (9) and (10), i.e.,
$$\begin{aligned} \tilde{y}_{n+1} = {} & y(t_n) + h f(y(t_n)) + \tfrac{1}{2} J f(y(t_n)) h^2 + \tfrac{1}{8} f''(y(t_n)) f(y(t_n))^2 h^3 + (\gamma - \gamma^2) J^2 f(y(t_n)) h^3 \\ & - \tfrac{4\gamma - 1}{8} J \lambda^2 y(t_n) h^3 + \tfrac{\lambda^2}{24} f(y(t_n)) h^3 + O(h^4). \end{aligned} \tag{11}$$
Meanwhile, for the exact solution at t n + h , we have
$$y(t_n + h) = y(t_n) + h f(y(t_n)) + \tfrac{1}{2} J f(y(t_n)) h^2 + \tfrac{1}{6} \big[ f''(y(t_n)) f(y(t_n))^2 + J^2 f(y(t_n)) \big] h^3 + O(h^4). \tag{12}$$
Based on (11) and (12), the local truncation error L T E E F R B of exponentially fitted Rosenbrock methods (9) can be expressed as
$$\begin{aligned} LTE_{EFRB} &= y(t_n + h) - \tilde{y}_{n+1} = \frac{h^3}{24}\Big[ y''' - (24\gamma - 24\gamma^2 - 3) f' y'' + D\big( (12\gamma - 3) f' y - y' \big) \Big] + O(h^4) \\ &= h^3 \big( \psi_1(t, y, f) + \psi_2(t, y, f, \lambda) \big) + O(h^4), \end{aligned} \tag{13}$$
where
$$h^3 \psi_1(t, y, f) = \frac{h^3}{24}\big[ y''' - (24\gamma - 24\gamma^2 - 3) f' y'' \big], \qquad h^3 \psi_2(t, y, f, \lambda) = h^3 D\, \psi_3(t, y, f) + O(h^4), \qquad h^3 \psi_3(t, y, f) = \frac{h^3}{24}\big[ (12\gamma - 3) f' y - y' \big],$$
$\lambda = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_d) \in \mathbb{R}^{d \times d}$, $D = \lambda^2$, and all functions in (13) are evaluated at $t = t_n$ and $y = y(t_n)$.
If we let the principal local truncation error in (13) be zero, we can approximate and renew the frequency $\lambda$ in each step to make the coefficients variable by the following equation,
$$\begin{pmatrix} h^3\psi_{11} \\ h^3\psi_{12} \\ \vdots \\ h^3\psi_{1d} \end{pmatrix} + \begin{pmatrix} \lambda_1^2 & & \\ & \ddots & \\ & & \lambda_d^2 \end{pmatrix} \begin{pmatrix} h^3\psi_{31} \\ h^3\psi_{32} \\ \vdots \\ h^3\psi_{3d} \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix}, \tag{14}$$
where $h^3\psi_{1i}$ and $h^3\psi_{3i}$ for $i = 1, 2, \ldots, d$ are the $i$th components of $h^3\psi_1(t, y, f)$ and $h^3\psi_3(t, y, f)$, respectively; then the order of the methods can be improved by one.
On the other hand, we consider the stability of (9). Let z 0 , then we get a class of constant coefficient exponentially fitted Rosenbrock methods, i.e.,
$$\begin{array}{c|c|cc} 0 & 1 & \gamma & \\ \tfrac{1}{2} & 1 & \tfrac{1}{2} - \gamma & \gamma \\ \hline & & 0 & 1 \end{array}. \tag{15}$$
By analogy with the definition of the stability function of Runge–Kutta methods in [25], the definition of the stability function of Rosenbrock methods is as follows.
Definition 1.
Applying the Rosenbrock methods (3) to the following linear test equations
$$y' = \lambda y, \quad \lambda \in \mathbb{C}, \qquad y(0) = y_0$$
yields y n + 1 = R ( z ) y n with
$$R(z) = 1 + z\, b^T \big( I - z(A + V) \big)^{-1} d, \tag{16}$$
then R ( z ) is called the stability function of Rosenbrock methods (3).
It is obvious that
$$R(z) = \frac{\det\big( I - z(A + V) + z\, d\, b^T \big)}{\det\big( I - z(A + V) \big)},$$
where the numerator and denominator of $R(z)$ are polynomials of degree at most $s$. This means that $R(z)$ can be expressed as
$$R(z) = \frac{P(z)}{Q(z)},$$
where $P(z) = \sum_{k=0}^{l} a_k z^k$ and $Q(z) = \sum_{k=0}^{m} b_k z^k$ are two polynomials with real coefficients, $l, m \leq s$, $a_l, b_m \neq 0$, $a_0 = b_0 = 1$, and $P(z)$ and $Q(z)$ contain no common factors. Li pointed out in [25] that the A-stability of single-step methods is equivalent to the A-acceptability of the rational approximations to the function $e^z$. The following lemma gives the necessary and sufficient condition for the A-acceptability of $R(z)$.
Lemma 1
([25]). Assume that $p \geq 2m - 3$; then $R(z)$ is A-acceptable iff the following three inequalities hold:
(i)
$|R(\infty)| \leq 1$;
(ii)
$Q(z) > 0$ when $z \leq 0$;
(iii)
$$b_{m-1}^2 - a_{m-1}^2 + 2\big( a_m a_{m-2} - b_m b_{m-2} \big) \geq 0,$$
where $m \geq 2$, and we adopt the convention that $a_{l+1} = a_{l+2} = \cdots = a_m = 0$ if $l < m$.
Based on Lemma 1, we can give the following theorem for the A-stability of method (15).
Theorem 1.
If $\gamma \geq \frac{1}{4}$, the 2-stage Rosenbrock method (15) with single coefficient $\gamma$ is A-stable.
Proof. 
According to (16), the stability function of the method (15) is
$$R(z) = \frac{1 + (1 - 2\gamma) z + \big(\gamma^2 - 2\gamma + \tfrac{1}{2}\big) z^2}{1 - 2\gamma z + \gamma^2 z^2}. \tag{17}$$
Let $z \to \infty$; then $|R(\infty)| = \big|\gamma^2 - 2\gamma + \tfrac{1}{2}\big| / \gamma^2$. When $\gamma \geq \tfrac{1}{4}$, we have $|R(\infty)| \leq 1$, which means that condition (i) in Lemma 1 holds if $\gamma \geq \tfrac{1}{4}$.
According to (17), we have $Q(z) = (1 - \gamma z)^2$. For $z \leq 0$, we have $Q(z) > 0$, which means that condition (ii) in Lemma 1 holds.
Consider condition (iii) in Lemma 1 with $l = m = 2$. It is easy to find from (17) that $b_0 = 1$, $b_1 = -2\gamma$, $b_2 = \gamma^2$ and $a_0 = 1$, $a_1 = 1 - 2\gamma$, $a_2 = \gamma^2 - 2\gamma + \tfrac{1}{2}$, hence
$$b_{m-1}^2 - a_{m-1}^2 + 2\big( a_m a_{m-2} - b_m b_{m-2} \big) = (4\gamma - 1) + (1 - 4\gamma) = 0 \geq 0,$$
which means that condition (iii) in Lemma 1 holds for every $\gamma$.
To sum up, if γ 1 4 , R ( z ) is A-acceptable as the rational approximation to the function e z , which means that the 2-stage Rosenbrock method (15) with single coefficient γ is A-stable. □
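Theorem 1 can also be checked numerically: since the only pole of (17), $z = 1/\gamma$, lies in the right half-plane for $\gamma > 0$, A-stability reduces by the maximum principle to $|R(\mathrm{i}y)| \leq 1$ on the imaginary axis. A small sketch of such a check (our own illustration, not from the paper):

```python
import numpy as np

def R(z, gamma):
    """Stability function (17) of the 2-stage method (15)."""
    num = 1 + (1 - 2 * gamma) * z + (gamma**2 - 2 * gamma + 0.5) * z**2
    return num / (1 - gamma * z)**2

ys = np.linspace(-1e3, 1e3, 20001)
ok = np.all(np.abs(R(1j * ys, 0.25)) <= 1 + 1e-9)   # gamma = 1/4: A-stable
bad = np.any(np.abs(R(1j * ys, 0.10)) > 1)          # gamma < 1/4: |R| exceeds 1
```

For $\gamma = \tfrac{1}{4}$ one in fact finds $|R(\mathrm{i}y)| = 1$ identically on the imaginary axis, i.e., the boundary case of condition (i).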

3. Frequency Estimation and Step Size Control

This section will combine the exponentially fitted Rosenbrock methods with the embedded Rosenbrock methods to construct a kind of embedded variable coefficient exponentially fitted Rosenbrock (3,2) methods, and perform the frequency estimation and step size control strategy.
We consider the embedded Rosenbrock methods for system (2). This kind of method combines two Rosenbrock methods with different orders, which have the same coefficients of the lower stage part. The tableau form of the methods is as follows,
$$\begin{array}{c|c|ccccc}
0 & d_1 & \gamma & & & & \\
\alpha_2 & d_2 & \beta_{21} & \gamma & & & \\
\alpha_3 & d_3 & \beta_{31} & \beta_{32} & \gamma & & \\
\vdots & \vdots & \vdots & \vdots & & \ddots & \\
\alpha_s & d_s & \beta_{s1} & \beta_{s2} & \beta_{s3} & \cdots & \gamma \\
\hline
 & & b_1 & b_2 & b_3 & \cdots & b_s \\
 & & \hat{b}_1 & \hat{b}_2 & \hat{b}_3 & \cdots & \hat{b}_s
\end{array},$$
and we define $y_1 = y_0 + \sum_{j=1}^{s} b_j k_j$ and $\hat{y}_1 = y_0 + \sum_{j=1}^{s} \hat{b}_j k_j$. To estimate the local truncation error of the embedded methods, we give the following lemma from [26].
Lemma 2
([26]). Suppose a step size $h$ has been chosen, and two Rosenbrock methods of orders $p$ and $q$, with $p < q$, compute two approximations $y_1$ and $\hat{y}_1$ to the solution; then the error of $y_1$ can be estimated by $\hat{y}_1 - y_1$, i.e.,
$$y(t_0 + h) - y_1 = \hat{y}_1 - y_1 + O(h^{p+2}).$$
Now, we construct a class of embedded Rosenbrock methods by using the coefficients of the 2-stage Rosenbrock methods (8). It can be expressed in the following tableau form
$$\begin{array}{c|c|ccc}
0 & 1 & \gamma & & \\
\tfrac{1}{2} & \cosh\tfrac{z}{2} - \gamma z \sinh\tfrac{z}{2} & \dfrac{\sinh\frac{z}{2}}{z} - \gamma\cosh\tfrac{z}{2} & \gamma & \\
\alpha_3 & 1 & \beta_{31} & \beta_{32} & \gamma \\
\hline
 & & 0 & \dfrac{\cosh z - 1}{z \sinh\frac{z}{2}} & 0 \\
 & & \hat{b}_1 & \hat{b}_2 & \hat{b}_3
\end{array}, \tag{18}$$
where γ is a free coefficient. Together with the order conditions for order 3 in [18]
$$\hat{b}_1 + \hat{b}_2 + \hat{b}_3 = 1, \qquad \hat{b}_2\big(\tfrac{1}{2} - \gamma\big) + \hat{b}_3 \beta_{31} + \hat{b}_3 \beta_{32} = \tfrac{1}{2} - \gamma, \qquad \tfrac{1}{4}\hat{b}_2 + \hat{b}_3 \alpha_3^2 = \tfrac{1}{3}, \qquad \hat{b}_3 \beta_{32} = \frac{\tfrac{1}{3} - 2\gamma + 2\gamma^2}{1 - 2\gamma}, \tag{19}$$
when we let $\hat{b}_2 = 0$ and $\alpha_3 = \tfrac{2}{3}$, the coefficients of method (18) are determined by (19), i.e.,
$$\begin{gathered} \alpha_1 = 0, \quad \alpha_2 = \tfrac{1}{2}, \quad \alpha_3 = \tfrac{2}{3}, \quad d_2 = \cosh\tfrac{z}{2} - \gamma z \sinh\tfrac{z}{2}, \quad d_3 = 1, \\ \beta_{21} = \frac{\sinh\frac{z}{2}}{z} - \gamma\cosh\tfrac{z}{2}, \quad \beta_{31} = \frac{2}{9(1 - 2\gamma)}, \quad \beta_{32} = \frac{4(6\gamma^2 - 6\gamma + 1)}{9(1 - 2\gamma)}, \\ b_1 = 0, \quad b_2 = \frac{\cosh z - 1}{z \sinh\frac{z}{2}}, \quad b_3 = 0, \quad \hat{b}_1 = \tfrac{1}{4}, \quad \hat{b}_2 = 0, \quad \hat{b}_3 = \tfrac{3}{4}, \end{gathered} \tag{20}$$
where $\gamma \neq \tfrac{1}{2}$, or, in tableau form,
$$\begin{array}{c|c|ccc}
0 & 1 & \gamma & & \\
\tfrac{1}{2} & \cosh\tfrac{z}{2} - \gamma z \sinh\tfrac{z}{2} & \dfrac{\sinh\frac{z}{2}}{z} - \gamma\cosh\tfrac{z}{2} & \gamma & \\
\tfrac{2}{3} & 1 & \dfrac{2}{9(1 - 2\gamma)} & \dfrac{4(6\gamma^2 - 6\gamma + 1)}{9(1 - 2\gamma)} & \gamma \\
\hline
 & & 0 & \dfrac{\cosh z - 1}{z \sinh\frac{z}{2}} & 0 \\
 & & \tfrac{1}{4} & 0 & \tfrac{3}{4}
\end{array}.$$
We now choose one of the methods above to introduce the frequency estimation and step size control strategy. We record the exponentially fitted Rosenbrock (3,2) method (20) as EFRB(3,2) when γ = 1 4 , i.e.,
$$\begin{array}{c|c|ccc}
0 & 1 & \tfrac{1}{4} & & \\
\tfrac{1}{2} & \cosh\tfrac{z}{2} - \tfrac{z}{4}\sinh\tfrac{z}{2} & \dfrac{\sinh\frac{z}{2}}{z} - \tfrac{1}{4}\cosh\tfrac{z}{2} & \tfrac{1}{4} & \\
\tfrac{2}{3} & 1 & \tfrac{4}{9} & -\tfrac{1}{9} & \tfrac{1}{4} \\
\hline
 & & 0 & \dfrac{\cosh z - 1}{z \sinh\frac{z}{2}} & 0 \\
 & & \tfrac{1}{4} & 0 & \tfrac{3}{4}
\end{array}. \tag{21}$$
We also record the Rosenbrock (3,2) method (21) as RB(3,2) when z 0 , i.e.,
$$\begin{array}{c|c|ccc}
0 & 1 & \tfrac{1}{4} & & \\
\tfrac{1}{2} & 1 & \tfrac{1}{4} & \tfrac{1}{4} & \\
\tfrac{2}{3} & 1 & \tfrac{4}{9} & -\tfrac{1}{9} & \tfrac{1}{4} \\
\hline
 & & 0 & 1 & 0 \\
 & & \tfrac{1}{4} & 0 & \tfrac{3}{4}
\end{array}. \tag{22}$$
Suppose that $y_{n+1}^{class}$ and $LTE_{class}$ are the numerical solution and the local truncation error for the second-order component of RB(3,2), and $y_{n+1}^{EFRB}$ and $LTE_{EFRB}$ are the numerical solution and the local truncation error for the second-order component of EFRB(3,2); then we have
$$LTE_{class} = y(t_n + h) - y_{n+1}^{class} = h^3 \psi_1(t, y, f) + O(h^4), \tag{23}$$
$$LTE_{EFRB} = y(t_n + h) - y_{n+1}^{EFRB} = h^3 \big( \psi_1(t, y, f) + \psi_2(t, y, f, \lambda) \big) + O(h^4), \tag{24}$$
where $h^3\psi_1(t, y, f) = h^3\big( \tfrac{1}{24} y''' - \tfrac{1}{16} f' y'' \big)$, $h^3\psi_2(t, y, f, \lambda) = h^3 D\, \psi_3(t, y, f) + O(h^4)$, $h^3\psi_3(t, y, f) = -\tfrac{h^3}{24} y'$, and all functions in (23) and (24) are evaluated at $t = t_n$ and $y = y(t_n)$. Together with (23) and (24), we have
$$h^3 \psi_2(t, y, f, \lambda) \approx LTE_{EFRB} - LTE_{class} = y_{n+1}^{class} - y_{n+1}^{EFRB}.$$
For each integration step, we can estimate h 3 ψ 3 ( t , y , f ) by the following equation,
$$\begin{pmatrix} h^3\psi_{31} \\ h^3\psi_{32} \\ \vdots \\ h^3\psi_{3d} \end{pmatrix} \approx \begin{pmatrix} \lambda_1^2 & & \\ & \ddots & \\ & & \lambda_d^2 \end{pmatrix}^{-1} \begin{pmatrix} h^3\psi_{21} \\ h^3\psi_{22} \\ \vdots \\ h^3\psi_{2d} \end{pmatrix}, \tag{25}$$
where $h^3\psi_{2i}$ and $h^3\psi_{3i}$ for $i = 1, 2, \ldots, d$ are the $i$th components of $h^3\psi_2(t, y, f, \lambda)$ and $h^3\psi_3(t, y, f)$, respectively. For the first integration step, $\lambda$ is set to a suitable starting frequency $\lambda^{(0)}$.
On the other hand, if we suppose that the numerical solution for the third-order component of RB(3,2) is $\hat{y}_{n+1}^{class}$, we can estimate $h^3\psi_1(t, y, f)$ based on Lemma 2 by
$$LTE_{class} = h^3 \psi_1(t, y, f) + O(h^4) \approx \hat{y}_{n+1}^{class} - y_{n+1}^{class}. \tag{26}$$
After obtaining the approximations of $h^3\psi_1(t, y, f)$ and $h^3\psi_3(t, y, f)$ by (25) and (26), we substitute them into (14) to estimate and renew the frequency $\lambda$. In addition, if the estimate of $h^3\psi_{1i}$ is close to zero, the estimate of $\lambda_i$ is zero, which means that the coefficients of the EFRB(3,2) method (21) are equal to the coefficients of the RB(3,2) method (22). If the estimate of $h^3\psi_{3i}$ is close to zero, the principal local truncation error is not related to $\lambda_i$; in this case, we do not renew the frequency. If we estimate the frequency before each step of the method, we obtain a variable coefficient method whose order is increased by one.
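The frequency update just described can be sketched as follows (our own illustration; the variable names are ours, and we assume real fitted frequencies, so a negative ratio in (14) is clamped with an absolute value):

```python
import numpy as np

def renew_frequency(y_class, yhat_class, y_efrb, lam, tol=1e-14):
    """One componentwise frequency update following (25), (26), and (14).

    y_class, yhat_class : 2nd- and 3rd-order components of RB(3,2)
    y_efrb              : 2nd-order component of EFRB(3,2)
    lam                 : current frequencies lambda_i (diagonal of lambda)
    """
    psi1 = yhat_class - y_class          # h^3 psi_1, estimate (26)
    psi2 = y_class - y_efrb              # h^3 psi_2, the difference feeding (25)
    lam_new = lam.copy()
    for i in range(lam.size):
        if abs(psi1[i]) < tol:           # principal error already zero:
            lam_new[i] = 0.0             # EFRB(3,2) degenerates to RB(3,2)
            continue
        if lam[i] == 0.0:                # psi_3 not recoverable from (25)
            continue
        psi3 = psi2[i] / lam[i]**2       # componentwise inverse of D = lambda^2
        if abs(psi3) < tol:              # error unrelated to lambda_i: keep it
            continue
        # (14): lambda_i^2 = -psi1_i / psi3_i; abs() assumes a real frequency
        lam_new[i] = np.sqrt(abs(psi1[i] / psi3))
    return lam_new
```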
Now, we try to control the step size by Richardson extrapolation. We first give the following lemma from [26].
Lemma 3
([26]). Assume that $y_{n+1}$ is the numerical solution obtained from $(t_n, y_n)$ by one step of size $h$ of a Rosenbrock method of order $p$, and $\omega_{n+1}$ is the numerical solution obtained by two steps of size $\frac{h}{2}$; then the error of $\omega_{n+1}$ can be expressed as
$$y(t_{n+1}) - \omega_{n+1} = \frac{\omega_{n+1} - y_{n+1}}{2^p - 1} + O(h^{p+2}).$$
Based on Lemma 3, we give the following step size control strategy. Let $error = \frac{\omega_{n+1} - y_{n+1}}{2^p - 1}$ and compare $error$ with the tolerance $tol$ given by the user. If $|error| \leq tol$, we accept the step and continue with the value $\omega_{n+1}$; if $|error| > tol$, we reject the step and repeat the whole procedure with a new step size. In both cases, following [18], the new step size is given by
$$h_{new} = h_{old} \cdot \min\Big( facmax,\ \max\big( facmin,\ fac \cdot (tol / error)^{\frac{1}{p+1}} \big) \Big),$$
where $facmax$ and $facmin$ are the maximum and minimum acceptable factors, respectively, and $fac$ is the safety factor. In this paper, we let $facmax = 2$, $facmin = 0.5$, and $fac = 0.8$.
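The controller above is only a few lines of code; the following sketch (ours) mirrors the update formula with the factors just defined:

```python
def new_step_size(h_old, error, tol, p, facmax=2.0, facmin=0.5, fac=0.8):
    """Step size update: accept/reject is decided by |error| <= tol, and in
    both cases the next step is clipped between facmin and facmax times h."""
    factor = fac * (tol / abs(error)) ** (1.0 / (p + 1))
    return h_old * min(facmax, max(facmin, factor))
```

For order $p = 3$ and $error = tol$ this shrinks the step only by the safety factor 0.8; a very large error is clipped to halving the step, a very small one to doubling it.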

4. Numerical Experiments

In this section, we present three numerical experiments to test the performance of our methods and to compare the error and computational efficiency with other numerical methods. All numerical experiments were executed in MATLAB® on a Windows 11 PC with an Intel® Core™ i5-10210U CPU.
Example 1.
Consider the following ODE system in [27],
$$y''(t) = -\omega^2 y(t) + (\omega^2 - 1)\sin t, \quad t \geq 0, \qquad y(0) = 1, \quad y'(0) = \omega + 1, \tag{27}$$
with exact solution $y(t) = \cos\omega t + \sin\omega t + \sin t$. Let $y'(t) = x(t)$ and $Y(t) = (x(t), y(t))^T$; then we obtain the first-order ODE system
$$Y'(t) = \begin{pmatrix} 0 & -\omega^2 \\ 1 & 0 \end{pmatrix} Y(t) + \begin{pmatrix} (\omega^2 - 1)\sin t \\ 0 \end{pmatrix}, \quad t \geq 0, \qquad Y(0) = \begin{pmatrix} \omega + 1 \\ 1 \end{pmatrix}. \tag{28}$$
Problem (28) has been solved on the interval $0 \leq t \leq 10$ with $\lambda_i^{(0)} = 10$ for each component of the solution when $\omega = 10$.
Example 2.
Consider the following ODE system in [28],
$$y'(t) = \begin{pmatrix} -21 & 19 & -20 \\ 19 & -21 & 20 \\ 40 & -40 & -40 \end{pmatrix} y(t), \quad 0 \leq t \leq 100, \qquad y(0) = \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}, \tag{29}$$
with the following solution:
$$y(t) = \frac{1}{2} \begin{pmatrix} 1 & 1 & 1 \\ 1 & -1 & -1 \\ 0 & -2 & 2 \end{pmatrix} \begin{pmatrix} e^{-2t} \\ e^{-40t}\cos 40t \\ e^{-40t}\sin 40t \end{pmatrix}.$$
Problem (29) has been solved on the interval $0 \leq t \leq 100$ with $\lambda_i^{(0)} = 40$ for each component of the solution.
Example 3.
Consider the following PDE system in [29],
$$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2} - u + 2e^{-t}, \quad 0 < x < 1, \ t > 0, \qquad u(0, t) = u(1, t) = 0, \ t > 0, \qquad u(x, 0) = x(1 - x), \ 0 < x < 1, \tag{30}$$
with exact solution $u(x, t) = x(1 - x)e^{-t}$. The PDE system (30) can be transformed into an ODE system by spatial discretization with second-order central finite differences, which results in
$$u_i'(t) = \frac{1}{\Delta x^2}\big[ u_{i-1}(t) - 2u_i(t) + u_{i+1}(t) \big] - u_i(t) + 2e^{-t}, \quad i = 1, 2, \ldots, M-1, \ t > 0, \qquad u_i(0) = i\Delta x (1 - i\Delta x), \quad i = 1, 2, \ldots, M-1, \tag{31}$$
where $\Delta x = 1/M$, $x_i = i\Delta x$, $u_i(t)$ approximates the solution of (30) at the point $(t, x_i)$, and we define $u_0(t) = u_M(t) = 0$. Problem (31) has been solved on the interval $0 \leq t \leq 10$ with $\lambda_i^{(0)} = 2$ for each component of the solution when $M = 10$.
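For reference, the semidiscrete system (31) is only a few lines in Python (our own sketch; the function name is ours):

```python
import numpy as np

def heat_rhs(t, u, M=10):
    """Right-hand side of (31): second-order central differences in space for
    u_t = u_xx - u + 2exp(-t) with homogeneous Dirichlet boundaries."""
    dx = 1.0 / M
    up = np.concatenate(([0.0], u, [0.0]))        # u_0(t) = u_M(t) = 0
    uxx = (up[2:] - 2.0 * up[1:-1] + up[:-2]) / dx**2
    return uxx - u + 2.0 * np.exp(-t)

# At t = 0 the initial profile x(1-x) is quadratic, so the central
# difference is exact and the right-hand side equals u'(0) = -x(1-x).
M = 10
x = np.linspace(0.0, 1.0, M + 1)[1:-1]
r = heat_rhs(0.0, x * (1 - x), M)
```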
We first solve Example 1 by the EFRB(3,2) method with constant step size to test the order of our method. We use the following formula to estimate the order of our method,
$$p \approx \log_2\left[ \frac{error(h)}{error(h/2)} \right],$$
where $error(h) = \max_{1 \leq n \leq N} \|\varepsilon(h)\|$, with $\varepsilon(h) = y(t_n) - y_n$ the error of $y_n$ for step size $h$ and $t_N = 10$. Let $h = 1/2^k$, $k = 4, 5, \ldots, 9$; the error and the order of convergence of our method are shown in Table 1. The results in Table 1 imply that the EFRB(3,2) method achieves third order convergence.
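The order estimates in Table 1 can be reproduced directly from the tabulated errors; a small sketch of the formula above (the data below are copied from Table 1):

```python
import numpy as np

def observed_orders(errors):
    """p ~ log2(error(h) / error(h/2)) for successive step halvings."""
    e = np.asarray(errors, dtype=float)
    return np.log2(e[:-1] / e[1:])

# Errors of EFRB(3,2) for Example 1, h = 1/16, ..., 1/512 (Table 1)
errs = [3.9592e-1, 3.0439e-2, 2.8673e-3, 3.5307e-4, 4.3312e-5, 5.4082e-6]
p = observed_orders(errs)   # approaches 3 as h decreases
```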
We now compare the EFRB(3,2) method with the stiff ODE solvers in MATLAB® such as ode23s, ode23t, and ode23tb, which have the same number of stages as our method. For each stiff ODE solver in MATLAB®, the relative tolerance $RelTol$ is set to $tol$ and the absolute tolerance $AbsTol$ to $10^{-3} \cdot tol$. The error for each component of the solution was calculated as the maximum of the absolute value of the difference between the numerical and exact solutions, and we use the largest error across all components as the error of the problem. Figure 1, Figure 2 and Figure 3 show the relationship between the error and the average calculating time for each method for $tol = 10^{-k}$, $k = 2, 3, \ldots, 10$. Table 2, Table 3 and Table 4 show the calculating steps, the error, and the calculating time for each method for $tol = 10^{-5}, 10^{-7}$, and $10^{-9}$. From the figures and tables, we conclude that the EFRB(3,2) method outperforms all the stiff ODE solvers on the ODE Examples 1 and 2. For the PDE Example 3, the EFRB(3,2) method achieves performance similar to ode23tb and better than the other stiff ODE solvers; furthermore, our method performs better than ode23tb in the small-tolerance range. The performance of the EFRB(3,2) method in these three examples verifies its effectiveness and efficiency, and suggests that it can be applied to stiff ODE systems and to PDE systems.

5. Conclusions

In this paper, a class of variable coefficient exponentially fitted embedded Rosenbrock methods with adaptive step size has been developed. Through the frequency estimation and step size control strategy, the order of convergence is increased by one and the methods renew the step size adaptively. The numerical experiments show that, compared with other methods such as ode23s, ode23t, and ode23tb in MATLAB®, our methods achieve lower error with fewer calculating steps and shorter time, and these advantages become more significant as the tolerance decreases. We believe that this kind of method can be applied to more complex symmetric systems, and that Rosenbrock methods of higher order can be constructed with our frequency estimation and step size control strategy.

Author Contributions

Conceptualization, T.Q.; methodology, T.Q.; software, Y.H.; validation, Y.H.; formal analysis, T.Q. and M.Z.; investigation, T.Q.; resources, T.Q.; data curation, Y.H.; writing—original draft preparation, T.Q. and Y.H.; writing—review and editing, T.Q. and Y.H.; visualization, Y.H.; supervision, T.Q.; project administration, T.Q.; funding acquisition, T.Q. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by Hubei Provincial Natural Science Foundation of China (Grant No. 2019CFC844) and the Fundamental Research Funds for the Central Universities, HUST: 2019093.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hairer, E.; Lubich, C.; Wanner, G. Geometric Numerical Integration: Structure-Preserving Algorithms for Ordinary Differential Equations, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
  2. Li, D.; Zhang, C. Split Newton iterative algorithm and its application. Appl. Math. Comput. 2010, 217, 2260–2265. [Google Scholar] [CrossRef]
  3. Cao, W.; Li, D.; Zhang, Z. Optimal superconvergence of energy conserving local discontinuous Galerkin methods for wave equations. Commun. Comput. Phys. 2017, 21, 211–236. [Google Scholar] [CrossRef]
  4. Li, D.; Sun, W. Linearly implicit and high-order energy-conserving schemes for nonlinear wave equations. J. Sci. Comput. 2020, 83, 1–17. [Google Scholar] [CrossRef]
  5. Landau, L.D.; Lifshitz, E.M. Quantum Mechanics; Pergamon Press: New York, NY, USA, 1965. [Google Scholar]
  6. Liboff, R. Introductory Quantum Mechanics; Holden-Day: Oakland, CA, USA, 1980. [Google Scholar]
  7. Gautschi, W. Numerical integration of ordinary differential equations based on trigonometric polynomials. Numer. Math. 1961, 3, 381–397. [Google Scholar] [CrossRef]
  8. Lyche, T. Chebyshevian multistep methods for ordinary differential equations. Numer. Math. 1972, 19, 65–75. [Google Scholar] [CrossRef]
  9. Ixaru, L.G.; Berghe, G.V.; Meyer, H.D. Exponentially fitted variable two-step BDF algorithm for first order ODEs. Comput. Phys. Commun. 2003, 150, 116–128. [Google Scholar] [CrossRef]
  10. Simos, T.E. Exponentially-fitted and trigonometrically-fitted symmetric linear multistep methods for the numerical integration of orbital problems. Phys. Lett. A 2003, 315, 437–446. [Google Scholar] [CrossRef]
  11. Simos, T.E. An exponentially-fitted Runge-Kutta method for the numerical integration of initial-value problems with periodic or oscillating solutions. Comput. Phys. Commun. 1998, 115, 1–8. [Google Scholar] [CrossRef]
  12. Berghe, G.V.; De Meyer, H.; Van Daele, M.; Van Hecke, T. Exponentially fitted explicit Runge-Kutta methods. Comput. Phys. Commun. 1999, 123, 7–15. [Google Scholar] [CrossRef]
  13. Avdelas, G.; Simos, T.E.; Vigo-Aguiar, J. An embedded exponentially-fitted Runge-Kutta method for the numerical solution of the Schrödinger equation and related periodic initial-value problems. Comput. Phys. Commun. 2000, 131, 52–67. [Google Scholar] [CrossRef]
  14. Franco, J.M. An embedded pair of exponentially fitted explicit Runge–Kutta methods. J. Comput. Appl. Math. 2002, 149, 407–414. [Google Scholar] [CrossRef]
  15. Berghe, G.V.; De Meyer, H.; Van Daele, M.; Van Hecke, T. Exponentially fitted Runge-Kutta methods. J. Comput. Appl. Math. 2000, 125, 107–115. [Google Scholar] [CrossRef]
  16. Berghe, G.V.; Ixaru, L.G.; Meyer, H.D. Frequency determination and step-length control for exponentially-fitted Runge–Kutta methods. J. Comput. Appl. Math. 2001, 132, 95–105. [Google Scholar] [CrossRef]
  17. Rosenbrock, H. Some general implicit processes for the numerical solution of differential equations. Comput. J. 1963, 5, 329–330. [Google Scholar] [CrossRef]
  18. Hairer, E.; Wanner, G. Solving Ordinary Differential Equations II; Springer: Berlin/Heidelberg, Germany, 1996. [Google Scholar]
  19. Tranquilli, P.; Sandu, A. Rosenbrock-Krylov methods for large systems of differential equations. SIAM J. Sci. Comput. 2014, 36, 1313–1338. [Google Scholar] [CrossRef]
  20. Wang, L.; Yu, M. Comparison of ROW, ESDIRK, and BDF2 for unsteady flows with the high-order flux reconstruction formulation. J. Sci. Comput. 2020, 83, 1–27. [Google Scholar] [CrossRef]
  21. Bao, Z. Two Classes of Functionally Fitted Rosenbrock Methods for Solving Stiff Ordinary Differential Equations. Master's Thesis, Huazhong University of Science and Technology, Wuhan, China, 2018. [Google Scholar]
  22. Zhang, Y. Exponentially Fitted Rosenbrock Methods with Variable Coefficients for Solving First-Order Ordinary Differential Equations. Master's Thesis, Huazhong University of Science and Technology, Wuhan, China, 2019. [Google Scholar]
  23. Wei, T. Some Kinds of Exponentially Fitted Rosenbrock Methods for Solving Second-Order Ordinary Differential Equations. Master's Thesis, Huazhong University of Science and Technology, Wuhan, China, 2019. [Google Scholar]
  24. Bui, T.D.; Poon, S.W.H. On the computational aspects of rosenbrock procedures with built-in error estimates for stiff systems. BIT 1981, 21, 168–174. [Google Scholar] [CrossRef]
  25. Li, S. Numerical Analysis for Stiff Ordinary and Functional Differential Equations; Xiangtan University Press: Xiangtan, China, 2010. [Google Scholar]
  26. Hairer, E.; Wanner, G.; Nørsett, S.P. Solving Ordinary Differential Equations I; Springer: Berlin/Heidelberg, Germany, 1993. [Google Scholar]
  27. Paternoster, B. Runge-Kutta(-Nyström) methods for ODEs with periodic solutions based on trigonometric polynomials. Appl. Numer. Math. 1998, 28, 401–412. [Google Scholar] [CrossRef]
  28. Lambert, J.D. Computational Methods in Ordinary Differential Equations; John Wiley: London, UK, 1973. [Google Scholar]
  29. Wang, W.; Mao, M.; Wang, Z. Stability and error estimates for the variable step-size BDF2 method for linear and semilinear parabolic equations. Adv. Comput. Math. 2021, 47, 1–28. [Google Scholar] [CrossRef]
Figure 1. CPU time versus error for each method for Example 1.
Figure 2. CPU time versus error for each method for Example 2.
Figure 3. CPU time versus error for each method for Example 3.
Table 1. The error and the order of convergence of the EFRB(3,2) method for problem (27).
| h | 1/16 | 1/32 | 1/64 | 1/128 | 1/256 | 1/512 |
|---|---|---|---|---|---|---|
| error(h) | 3.9592 × 10^-1 | 3.0439 × 10^-2 | 2.8673 × 10^-3 | 3.5307 × 10^-4 | 4.3312 × 10^-5 | 5.4082 × 10^-6 |
| p | – | 3.7012 | 3.4082 | 3.0217 | 3.0271 | 3.0016 |
Table 2. The accepted steps, rejected steps, error, and calculating time of each method for Example 1.
| Method | Tol | Accepted Steps | Rejected Steps | Error | Time (s) |
|---|---|---|---|---|---|
| EFRB(3,2) | 10^-5 | 622 | 44 | 3.9582 × 10^-4 | 0.0387 |
| | 10^-7 | 1915 | 23 | 1.4846 × 10^-5 | 0.0965 |
| | 10^-9 | 6038 | 70 | 4.9237 × 10^-7 | 0.3006 |
| ode23s | 10^-5 | 3009 | 68 | 8.5624 × 10^-3 | 0.1937 |
| | 10^-7 | 14404 | 71 | 3.8814 × 10^-4 | 0.9061 |
| | 10^-9 | 67396 | 42 | 1.7927 × 10^-5 | 4.2465 |
| ode23t | 10^-5 | 4444 | 44 | 8.2055 × 10^-3 | 0.2010 |
| | 10^-7 | 21148 | 40 | 3.7150 × 10^-4 | 0.8442 |
| | 10^-9 | 98805 | 19 | 1.7141 × 10^-5 | 3.7756 |
| ode23tb | 10^-5 | 3477 | 55 | 6.4195 × 10^-3 | 0.0708 |
| | 10^-7 | 16583 | 38 | 2.9314 × 10^-4 | 0.2978 |
| | 10^-9 | 77627 | 26 | 1.3478 × 10^-5 | 1.3235 |
Table 3. The accepted steps, rejected steps, error, and calculating time of each method for Example 2.
| Method | Tol | Accepted Steps | Rejected Steps | Error | Time (s) |
|---|---|---|---|---|---|
| EFRB(3,2) | 10^-5 | 49 | 4 | 6.3863 × 10^-5 | 0.0176 |
| | 10^-7 | 150 | 25 | 1.0545 × 10^-6 | 0.0246 |
| | 10^-9 | 372 | 20 | 2.9298 × 10^-8 | 0.0423 |
| ode23s | 10^-5 | 5291 | 8 | 2.4912 × 10^-5 | 0.0446 |
| | 10^-7 | 24781 | 15 | 1.1568 × 10^-6 | 0.2064 |
| | 10^-9 | 115405 | 21 | 5.3705 × 10^-8 | 0.8153 |
| ode23t | 10^-5 | 783 | 5 | 1.9925 × 10^-5 | 0.0544 |
| | 10^-7 | 3546 | 3 | 9.7763 × 10^-7 | 0.1946 |
| | 10^-9 | 16431 | 3 | 4.5192 × 10^-8 | 0.6497 |
| ode23tb | 10^-5 | 608 | 5 | 1.6531 × 10^-5 | 0.0198 |
| | 10^-7 | 2788 | 3 | 7.7115 × 10^-7 | 0.6134 |
| | 10^-9 | 13251 | 3 | 3.5798 × 10^-8 | 0.2668 |
Table 4. The accepted steps, rejected steps, error, and calculating time of each method for Example 3.
| Method | Tol | Accepted Steps | Rejected Steps | Error | Time (s) |
|---|---|---|---|---|---|
| EFRB(3,2) | 10^-5 | 58 | 4 | 2.3242 × 10^-5 | 0.0079 |
| | 10^-7 | 139 | 4 | 9.9917 × 10^-8 | 0.0299 |
| | 10^-9 | 543 | 4 | 1.4384 × 10^-8 | 0.0703 |
| ode23s | 10^-5 | 471 | 1 | 3.6454 × 10^-6 | 0.0570 |
| | 10^-7 | 2293 | 1 | 1.4693 × 10^-7 | 0.2551 |
| | 10^-9 | 10792 | 1 | 6.5482 × 10^-9 | 1.1690 |
| ode23t | 10^-5 | 256 | 0 | 1.4514 × 10^-6 | 0.0164 |
| | 10^-7 | 1156 | 0 | 7.1862 × 10^-8 | 0.0567 |
| | 10^-9 | 5341 | 0 | 3.3804 × 10^-9 | 0.2405 |
| ode23tb | 10^-5 | 178 | 0 | 1.5574 × 10^-6 | 0.0083 |
| | 10^-7 | 846 | 0 | 6.8615 × 10^-8 | 0.0314 |
| | 10^-9 | 3948 | 0 | 3.1499 × 10^-9 | 0.0848 |

Qin, T.; Hua, Y.; Zhang, M. A Class of Adaptive Exponentially Fitted Rosenbrock Methods with Variable Coefficients for Symmetric Systems. Symmetry 2022, 14, 1708. https://0-doi-org.brum.beds.ac.uk/10.3390/sym14081708
