
A Class of Spline Functions for Solving 2-Order Linear Differential Equations with Boundary Conditions

1 School of Mathematics and Finance, Hunan University of Humanities, Science and Technology, Loudi 417000, China
2 School of Mathematics and Statistics, Central South University, Changsha 410083, China
* Author to whom correspondence should be addressed.
Submission received: 9 August 2020 / Revised: 8 September 2020 / Accepted: 11 September 2020 / Published: 15 September 2020

Abstract: In this paper, we develop a numerical method for solving second order linear differential equations with boundary conditions. Based on the theory of the analytic solution, a series of spline functions is presented for constructing approximate solutions, and the appropriate one is selected automatically on each subinterval. Compared with other methods, we only need to solve a tridiagonal system, which is much easier to implement. The method therefore combines high precision with low computational cost. The local truncation error is also analyzed. Finally, some numerical examples are given to illustrate the effectiveness of the proposed method.

1. Introduction

Ordinary differential equations (ODEs) of second order are used throughout engineering, mathematics and science. It is well known that only a few differential equations have analytic solutions; most of them, especially those with variable coefficients, are very difficult to solve, and analytic techniques are often impractical for complicated equations. In this paper, we consider numerical solutions of second order linear differential equations of the form
$$y''(x) + P(x)\,y = Q(x), \qquad x \in [a, b], \qquad (1)$$
with the boundary conditions
$$y(a) = A, \qquad y(b) = B, \qquad (2)$$
where $A, B$ are constants and $P(x), Q(x)$ are sufficiently smooth functions. We remark that Equation (1) does not contain the first derivative term $y'(x)$, because such a term can always be eliminated by a proper transformation.
How to solve differential equations effectively has gained great attention in recent years; researchers have been putting effort into obtaining stable algorithms with fewer floating-point operations and higher accuracy. Numerical methods for differential equations are well developed and widely used, such as Euler's method, linear multi-step methods, and Runge–Kutta methods. For a more detailed treatment of this topic, we refer the reader to [1].
Recently, solving differential equations by spline functions has been investigated by more and more researchers, and a large number of spline functions have been exploited to solve these equations. Usmani [2] derived a uniform quartic polynomial spline at mid-knots. Ramadan et al. [3,4] presented quadratic and cubic polynomial and nonpolynomial spline functions to find approximate solutions. They also showed that nonpolynomial splines can yield better approximations with lower computational cost than polynomial ones. By using nonpolynomial quintic splines, Srivastava et al. developed a numerical algorithm [5]. In [6], a method based on quadratic non-polynomial spline functions was proposed by Noor et al. Zahra and El Mhlawy studied singularly perturbed boundary value problems with exponential splines [7]. Lodhi and Mishra developed a quintic B-spline method for second order singularly perturbed two-point linear and nonlinear boundary value problems [8]. In [9], Zahra proposed exponential spline functions, consisting of a polynomial part of degree three and an exponential part, to approximate linear and nonlinear fourth order two-point boundary value problems. Al-Said and Noor proposed a quartic spline method for fourth order obstacle boundary value problems [10]. Ideon and Oja investigated a collocation method with linear/linear rational splines for the numerical solution of two-point boundary value problems [11]. By using B-splines, Rashidinia and Ghasemi developed a numerical method that can solve general nonlinear two-point boundary value problems of order up to 6 [12].
Despite the large number of spline functions available for approximating the solutions of such equations, most of them are not constructed from the differential equations to be solved, and hence may not achieve satisfactory results. To the best of our knowledge, there is still no research on constructing spline functions directly from the differential equations. It is well known that second order linear differential equations with constant coefficients can be solved analytically. This leads us to ask whether we can use these analytic solutions to solve equations with variable coefficients; a spline function based on analytic solutions can be expected to achieve better approximation performance. Therefore, we put our effort into constructing a spline function based on the analytic solutions of equations with constant coefficients.

2. Preliminary

We first review some conclusions on the theory of solutions to linear differential equations.
Equation (1) is said to be homogeneous if $Q(x) \equiv 0$, and nonhomogeneous otherwise. Moreover, the equation $y'' + P(x)y = 0$ is said to be the associated homogeneous equation of (1). Equation (1) is said to have constant coefficients when $P(x)$ is a constant, and variable coefficients otherwise. If Equation (1) has constant coefficients, it can be written as
$$y'' + p\,y = Q(x), \qquad (3)$$
where $p$ is a constant. The associated homogeneous equation of (3) is $y'' + py = 0$, and the characteristic equation of (3) is $\lambda^2 + p = 0$. Hence the characteristic roots of (3) are $\lambda = \pm\sqrt{-p}$.
Let y p denote any particular solution of Equation (3) and y h represent the general solution of its associated homogeneous equation, then the general solution of Equation (3) is y = y h + y p .
The following two lemmas are introduced for obtaining the general solution y h and particular solution y p of Equation (3), which will be used in the next section.
Lemma 1 ([13]). If $p < 0$, then $\lambda = \sqrt{-p}$ is real, and hence the general solution is $y_h = k_1 e^{\lambda x} + k_2 e^{-\lambda x}$. If $p = 0$, so is the value of $\lambda$, and the general solution is $y_h = k_1 e^{0x} + k_2 x e^{0x} = k_1 + k_2 x$. If $p > 0$, the characteristic roots are a pair of conjugate purely imaginary numbers $\pm i\sqrt{p}$; writing $\lambda = \sqrt{p}$, the general solution is $y_h = k_1 \sin(\lambda x) + k_2 \cos(\lambda x)$. Here $k_1, k_2$ are arbitrary constants.
Lemma 2 ([13]). Suppose that $Q(x)$ is a polynomial of degree $n$; then a particular solution of Equation (3) is
$$y_p = e^{\alpha x}\left(k_n x^n + k_{n-1} x^{n-1} + \cdots + k_1 x + k_0\right),$$
where $\alpha$ is a known constant and $k_i$ $(i = 0, 1, 2, \ldots, n)$ are constants to be determined.
In particular, when the degree of $Q(x)$ equals $0$, i.e., $Q(x) = Q$ is a constant, a particular solution is the constant $y_p = c$ with $c = Q/p$ if $\lambda \neq 0$, and $y_p = c x^2$ with $c = Q/2$ if $\lambda = 0$.
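The two degenerate particular solutions above can be checked numerically with a central second difference (a small sketch; the constants `p`, `Q` and the evaluation point are hypothetical choices of ours):

```python
def d2(f, x, h=1e-4):
    """Central second-difference approximation of f''(x)."""
    return (f(x - h) - 2.0 * f(x) + f(x + h)) / (h * h)

p, Q = 2.0, 3.0                      # hypothetical constants, p != 0
y_p = lambda x: Q / p                # constant particular solution
print(d2(y_p, 0.7) + p * y_p(0.7))   # residual of y'' + p y = Q, ~Q means none

y_p0 = lambda x: 0.5 * Q * x * x     # particular solution for p = 0
print(d2(y_p0, 0.7))                 # ~Q, so y'' = Q holds
```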

3. Derivation of The Method

3.1. Definition of the Piecewise Spline Functions

In this subsection, we use the theory of the analytic solution to construct the spline functions.
Firstly, the interval $[a, b]$ is divided equally into $n$ subintervals $[x_{i-1}, x_i]$ $(i = 1, 2, \ldots, n)$ such that $a = x_0 < x_1 < \cdots < x_n = b$ with $h = x_i - x_{i-1} = (b-a)/n$, and we denote $x_{i \pm 1/2} = x_i \pm \frac{1}{2}h$.
If $h$ is small, $x$ varies little over the interval $[x_{i-1}, x_i]$, so $P(x)$ and $Q(x)$ can be regarded as constants there, and Equation (1) can be regarded as a second order linear differential equation with constant coefficients on $[x_{i-1}, x_i]$. Replacing $x$ on the interval $[x_{i-1}, x_i]$ by the midpoint $x_{i-1/2}$, Equation (1) becomes
$$y'' + P_{i-1/2}\, y = Q_{i-1/2}, \qquad (4)$$
where $P_{i-1/2} = P(x_{i-1/2})$, $Q_{i-1/2} = Q(x_{i-1/2})$.
The roots of the characteristic equation of (4) are $\lambda = \pm\sqrt{-P_{i-1/2}}$. On each subinterval we write $\lambda_i$ for the associated positive parameter, namely $\lambda_i = \sqrt{-P_{i-1/2}}$ when $P_{i-1/2} < 0$ and $\lambda_i = \sqrt{P_{i-1/2}}$ when $P_{i-1/2} > 0$. The solution of (4) can then be obtained in the following three cases.
Case 1. If $P_{i-1/2} < 0$, the characteristic roots $\pm\lambda_i$ are nonzero real numbers, and it follows from Lemma 1 that $y_h = a_i e^{\lambda_i(x - x_{i-1})} + b_i e^{-\lambda_i(x - x_{i-1})}$. Additionally, since the right side of Equation (4) is the constant $Q_{i-1/2}$, we can take $y_p = c$ according to Lemma 2. Substituting $y_p$ into Equation (4) gives $c = Q_{i-1/2}/P_{i-1/2}$. Therefore, the approximate function on the interval $[x_{i-1}, x_i]$ can be defined as
$$\tilde y_i(x) = a_i e^{\lambda_i(x - x_{i-1})} + b_i e^{-\lambda_i(x - x_{i-1})} + \frac{Q_{i-1/2}}{P_{i-1/2}}.$$
Case 2. If $P_{i-1/2} > 0$, the characteristic roots $\pm i\lambda_i$ are purely imaginary, and it follows from Lemma 1 that $y_h = c_i \sin\lambda_i(x - x_{i-1}) + d_i \cos\lambda_i(x - x_{i-1})$. As in Case 1, the particular solution is $y_p = Q_{i-1/2}/P_{i-1/2}$. Therefore, the approximate function on the interval $[x_{i-1}, x_i]$ can be defined as
$$\tilde y_i(x) = c_i \sin\lambda_i(x - x_{i-1}) + d_i \cos\lambda_i(x - x_{i-1}) + \frac{Q_{i-1/2}}{P_{i-1/2}}.$$
Case 3. If $P_{i-1/2} = 0$, then $\lambda_i = 0$, and the general solution of the associated homogeneous equation is $y_h = e_i(x - x_{i-1}) + f_i$. There are many particular solutions; here we take the simplest form $y_p = g_i(x - x_{i-1})^2$. Substituting $y_p$ into Equation (4) yields $g_i = \frac{1}{2}Q_{i-1/2}$. Therefore, the approximate function on the interval $[x_{i-1}, x_i]$ can be defined as
$$\tilde y_i(x) = \frac{1}{2}Q_{i-1/2}(x - x_{i-1})^2 + e_i(x - x_{i-1}) + f_i.$$
To summarize, the approximate function on the interval $[x_{i-1}, x_i]$ is defined as
$$\tilde y_i(x) = \begin{cases} a_i e^{\lambda_i(x - x_{i-1})} + b_i e^{-\lambda_i(x - x_{i-1})} + \dfrac{Q_{i-1/2}}{P_{i-1/2}}, & \text{if } P_{i-1/2} < 0; \\[4pt] c_i \sin\lambda_i(x - x_{i-1}) + d_i \cos\lambda_i(x - x_{i-1}) + \dfrac{Q_{i-1/2}}{P_{i-1/2}}, & \text{if } P_{i-1/2} > 0; \\[4pt] \frac{1}{2}Q_{i-1/2}(x - x_{i-1})^2 + e_i(x - x_{i-1}) + f_i, & \text{if } P_{i-1/2} = 0, \end{cases} \qquad (5)$$
where $a_i, b_i, c_i, d_i, e_i$ and $f_i$ are coefficients to be determined.
Let the spline function (5) satisfy the interpolation conditions
$$\tilde y_i(x_{i-1}) = y_{i-1}, \qquad \tilde y_i(x_i) = y_i. \qquad (6)$$
From (5) and (6), by a tedious but straightforward calculation, we can obtain the coefficients
$$a_i = \frac{\left(y_i - \frac{Q_{i-1/2}}{P_{i-1/2}}\right) - \left(y_{i-1} - \frac{Q_{i-1/2}}{P_{i-1/2}}\right) e^{-\lambda_i h}}{A_i}; \qquad b_i = \frac{\left(y_{i-1} - \frac{Q_{i-1/2}}{P_{i-1/2}}\right) e^{\lambda_i h} - \left(y_i - \frac{Q_{i-1/2}}{P_{i-1/2}}\right)}{A_i};$$
$$c_i = \frac{y_i - \frac{Q_{i-1/2}}{P_{i-1/2}} - \left(y_{i-1} - \frac{Q_{i-1/2}}{P_{i-1/2}}\right)\cos(\lambda_i h)}{\sin(\lambda_i h)}; \qquad d_i = y_{i-1} - \frac{Q_{i-1/2}}{P_{i-1/2}};$$
$$e_i = \frac{y_i - y_{i-1} - \frac{1}{2}Q_{i-1/2}h^2}{h}; \qquad f_i = y_{i-1},$$
where $A_i = e^{\lambda_i h} - e^{-\lambda_i h}$, $B_i = e^{\lambda_i h} + e^{-\lambda_i h}$.
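The interpolation property of these coefficients is easy to check numerically. The snippet below builds one trigonometric-branch interval (Case 2) and confirms that the spline reproduces the prescribed nodal values; all numbers (`h`, `lam`, `qp`, the nodal values) are hypothetical:

```python
import numpy as np

# One subinterval [x_{i-1}, x_i] in the trigonometric branch (P_{i-1/2} > 0).
h, lam, qp = 0.1, 1.3, 0.7            # step h, lambda_i, Q_{i-1/2}/P_{i-1/2}
y_left, y_right = 0.25, 0.31          # prescribed nodal values y_{i-1}, y_i

d = y_left - qp
c = (y_right - qp - d * np.cos(lam * h)) / np.sin(lam * h)

def spline(t):                        # local coordinate t = x - x_{i-1}
    return c * np.sin(lam * t) + d * np.cos(lam * t) + qp

print(spline(0.0), spline(h))         # reproduces y_left and y_right
```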

3.2. Continuity Condition

Very often, the approximate function on adjacent subintervals is required to be $C^1$ continuous, i.e.,
$$\tilde y_i'(x_i^-) = \tilde y_{i+1}'(x_i^+), \qquad i = 1, 2, \ldots, n-1. \qquad (7)$$
From (5), we have
$$\tilde y_i'(x) = \begin{cases} a_i \lambda_i e^{\lambda_i(x - x_{i-1})} - b_i \lambda_i e^{-\lambda_i(x - x_{i-1})}, & \text{if } P_{i-1/2} < 0; \\ c_i \lambda_i \cos\lambda_i(x - x_{i-1}) - d_i \lambda_i \sin\lambda_i(x - x_{i-1}), & \text{if } P_{i-1/2} > 0; \\ Q_{i-1/2}(x - x_{i-1}) + e_i, & \text{if } P_{i-1/2} = 0. \end{cases} \qquad (8)$$
The derivative branches of $\tilde y'$ are selected according to the signs of $P_{i-1/2}$ and $P_{i+1/2}$, and hence the $C^1$ continuity condition (7) takes one of the following nine forms. For brevity, in what follows $\lambda_{i-1}$ and $\lambda_i$ denote the spline parameters associated with the intervals $[x_{i-1}, x_i]$ and $[x_i, x_{i+1}]$, respectively, and $A_j = e^{\lambda_j h} - e^{-\lambda_j h}$, $B_j = e^{\lambda_j h} + e^{-\lambda_j h}$:
  • If $P_{i-1/2} < 0$ and $P_{i+1/2} < 0$, then from (7) we have
$$\alpha_i^1 \tilde y_{i-1} + \beta_i^1 \tilde y_i + \gamma_i^1 \tilde y_{i+1} = \delta_i^1, \qquad (9)$$
    where $\alpha_i^1 = -2\lambda_{i-1}A_i$, $\beta_i^1 = \lambda_{i-1}A_iB_{i-1} + \lambda_iA_{i-1}B_i$, $\gamma_i^1 = -2\lambda_iA_{i-1}$, $\delta_i^1 = \lambda_{i-1}A_i\frac{Q_{i-1/2}}{P_{i-1/2}}(B_{i-1}-2) + \lambda_iA_{i-1}\frac{Q_{i+1/2}}{P_{i+1/2}}(B_i-2)$.
  • If $P_{i-1/2} > 0$ and $P_{i+1/2} > 0$, then from (7) we have
$$\alpha_i^2 \tilde y_{i-1} + \beta_i^2 \tilde y_i + \gamma_i^2 \tilde y_{i+1} = \delta_i^2, \qquad (10)$$
    where $\alpha_i^2 = -\lambda_{i-1}\sin(\lambda_i h)$, $\beta_i^2 = \lambda_{i-1}\sin(\lambda_i h)\cos(\lambda_{i-1} h) + \lambda_i\sin(\lambda_{i-1} h)\cos(\lambda_i h)$, $\gamma_i^2 = -\lambda_i\sin(\lambda_{i-1} h)$, $\delta_i^2 = \lambda_{i-1}\sin(\lambda_i h)[\cos(\lambda_{i-1} h) - 1]\frac{Q_{i-1/2}}{P_{i-1/2}} + \lambda_i\sin(\lambda_{i-1} h)[\cos(\lambda_i h) - 1]\frac{Q_{i+1/2}}{P_{i+1/2}}$.
  • If $P_{i-1/2} = 0$ and $P_{i+1/2} = 0$, then from (7) we have
$$\alpha_i^3 \tilde y_{i-1} + \beta_i^3 \tilde y_i + \gamma_i^3 \tilde y_{i+1} = \delta_i^3, \qquad (11)$$
    where $\alpha_i^3 = 1$, $\beta_i^3 = -2$, $\gamma_i^3 = 1$, $\delta_i^3 = \frac{1}{2}\left[Q_{i-1/2} + Q_{i+1/2}\right]h^2$.
  • If $P_{i-1/2} < 0$ and $P_{i+1/2} > 0$, then from (7) we have
$$\alpha_i^4 \tilde y_{i-1} + \beta_i^4 \tilde y_i + \gamma_i^4 \tilde y_{i+1} = \delta_i^4, \qquad (12)$$
    where $\alpha_i^4 = -2\lambda_{i-1}\sin(\lambda_i h)$, $\beta_i^4 = \lambda_{i-1}B_{i-1}\sin(\lambda_i h) + \lambda_iA_{i-1}\cos(\lambda_i h)$, $\gamma_i^4 = -\lambda_iA_{i-1}$, $\delta_i^4 = \lambda_{i-1}\sin(\lambda_i h)\frac{Q_{i-1/2}}{P_{i-1/2}}(B_{i-1}-2) + \lambda_iA_{i-1}[\cos(\lambda_i h) - 1]\frac{Q_{i+1/2}}{P_{i+1/2}}$.
  • If $P_{i-1/2} < 0$ and $P_{i+1/2} = 0$, then from (7) we have
$$\alpha_i^5 \tilde y_{i-1} + \beta_i^5 \tilde y_i + \gamma_i^5 \tilde y_{i+1} = \delta_i^5, \qquad (13)$$
    where $\alpha_i^5 = -2\lambda_{i-1}h$, $\beta_i^5 = A_{i-1} + \lambda_{i-1}B_{i-1}h$, $\gamma_i^5 = -A_{i-1}$, $\delta_i^5 = \lambda_{i-1}h\frac{Q_{i-1/2}}{P_{i-1/2}}(B_{i-1}-2) - \frac{1}{2}Q_{i+1/2}A_{i-1}h^2$.
  • If $P_{i-1/2} > 0$ and $P_{i+1/2} < 0$, then from (7) we have
$$\alpha_i^6 \tilde y_{i-1} + \beta_i^6 \tilde y_i + \gamma_i^6 \tilde y_{i+1} = \delta_i^6, \qquad (14)$$
    where $\alpha_i^6 = -\lambda_{i-1}A_i$, $\beta_i^6 = \lambda_{i-1}A_i\cos(\lambda_{i-1} h) + \lambda_i\sin(\lambda_{i-1} h)B_i$, $\gamma_i^6 = -2\lambda_i\sin(\lambda_{i-1} h)$, $\delta_i^6 = \lambda_i\sin(\lambda_{i-1} h)(B_i-2)\frac{Q_{i+1/2}}{P_{i+1/2}} + \lambda_{i-1}A_i[\cos(\lambda_{i-1} h) - 1]\frac{Q_{i-1/2}}{P_{i-1/2}}$.
  • If $P_{i-1/2} > 0$ and $P_{i+1/2} = 0$, then from (7) we have
$$\alpha_i^7 \tilde y_{i-1} + \beta_i^7 \tilde y_i + \gamma_i^7 \tilde y_{i+1} = \delta_i^7, \qquad (15)$$
    where $\alpha_i^7 = -\lambda_{i-1}h$, $\beta_i^7 = \lambda_{i-1}\cos(\lambda_{i-1} h)h + \sin(\lambda_{i-1} h)$, $\gamma_i^7 = -\sin(\lambda_{i-1} h)$, $\delta_i^7 = -\frac{1}{2}Q_{i+1/2}\sin(\lambda_{i-1} h)h^2 + \lambda_{i-1}h[\cos(\lambda_{i-1} h) - 1]\frac{Q_{i-1/2}}{P_{i-1/2}}$.
  • If $P_{i-1/2} = 0$ and $P_{i+1/2} < 0$, then from (7) we have
$$\alpha_i^8 \tilde y_{i-1} + \beta_i^8 \tilde y_i + \gamma_i^8 \tilde y_{i+1} = \delta_i^8, \qquad (16)$$
    where $\alpha_i^8 = -A_i$, $\beta_i^8 = A_i + \lambda_iB_ih$, $\gamma_i^8 = -2\lambda_ih$, $\delta_i^8 = -\frac{1}{2}A_iQ_{i-1/2}h^2 + \lambda_ih\frac{Q_{i+1/2}}{P_{i+1/2}}(B_i-2)$.
  • If $P_{i-1/2} = 0$ and $P_{i+1/2} > 0$, then from (7) we have
$$\alpha_i^9 \tilde y_{i-1} + \beta_i^9 \tilde y_i + \gamma_i^9 \tilde y_{i+1} = \delta_i^9, \qquad (17)$$
    where $\alpha_i^9 = -\sin(\lambda_i h)$, $\beta_i^9 = \sin(\lambda_i h) + \lambda_i\cos(\lambda_i h)h$, $\gamma_i^9 = -\lambda_ih$, $\delta_i^9 = -\frac{1}{2}Q_{i-1/2}\sin(\lambda_i h)h^2 + \lambda_ih[\cos(\lambda_i h) - 1]\frac{Q_{i+1/2}}{P_{i+1/2}}$.
We remark that these are $n-1$ linear equations in $n+1$ unknowns. In order to solve the system uniquely, we need two additional conditions, which are obtained from the boundary conditions (2). The continuity conditions (7), combined with the boundary conditions (2), form a system of linear equations with $n+1$ unknowns, which can be rewritten in matrix form as
$$\begin{pmatrix} 1 & 0 & & & & \\ \alpha_1^{j_1} & \beta_1^{j_1} & \gamma_1^{j_1} & & & \\ & \alpha_2^{j_2} & \beta_2^{j_2} & \gamma_2^{j_2} & & \\ & & \ddots & \ddots & \ddots & \\ & & & \alpha_{n-1}^{j_{n-1}} & \beta_{n-1}^{j_{n-1}} & \gamma_{n-1}^{j_{n-1}} \\ & & & & 0 & 1 \end{pmatrix} \begin{pmatrix} \tilde y_0 \\ \tilde y_1 \\ \vdots \\ \tilde y_{n-1} \\ \tilde y_n \end{pmatrix} = \begin{pmatrix} A \\ \delta_1^{j_1} \\ \vdots \\ \delta_{n-1}^{j_{n-1}} \\ B \end{pmatrix}. \qquad (18)$$
Here the superscript $j_k$ $(k = 1, 2, \ldots, n-1)$ takes one of the values 1–9, determined by the signs of $P_{k-1/2}$ and $P_{k+1/2}$.
The coefficient matrix of the linear system (18) is tridiagonal, so the system can be solved efficiently by the Thomas algorithm.
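For completeness, such a tridiagonal solve can be sketched as follows; this is a generic textbook Thomas-algorithm implementation for illustration, not code from the paper:

```python
import numpy as np

def thomas(sub, diag, sup, rhs):
    """Solve a tridiagonal system in O(n).

    sub/sup: lower/upper diagonals (length n-1); diag: main diagonal
    (length n); rhs: right-hand side (length n).
    """
    n = len(diag)
    c, d = np.empty(n - 1), np.empty(n)
    c[0] = sup[0] / diag[0]
    d[0] = rhs[0] / diag[0]
    for i in range(1, n):                       # forward elimination
        m = diag[i] - sub[i - 1] * c[i - 1]
        if i < n - 1:
            c[i] = sup[i] / m
        d[i] = (rhs[i] - sub[i - 1] * d[i - 1]) / m
    x = np.empty(n)
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = d[i] - c[i] * x[i + 1]
    return x
```

A quick way to validate it is to compare against a dense solve on a small diagonally dominant system.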

4. Truncation Error Estimation

Suppose $y(x)$ is sufficiently smooth on $[a, b]$. Substituting $x_{i-1/2}$ into (1), we have
$$M_{i-1/2} + P_{i-1/2}\, y_{i-1/2} = Q_{i-1/2}, \qquad (19)$$
where $M_{i-1/2}$ and $y_{i-1/2}$ denote $y''(x_{i-1/2})$ and $y(x_{i-1/2})$, respectively.
To estimate the truncation error, we would have to consider all nine cases of Section 3; due to space limitations, we only analyze some of them here. Without loss of generality, we discuss the truncation errors of Cases 1 and 2 in detail; the results for the other cases are listed afterwards. For convenience, the local truncation error is denoted by $t_i^j$ $(i = 1, 2, \ldots, n-1;\ j = 1, 2, \ldots, 9)$.
According to (9), the local truncation error can be written as
$$t_i^1 = \alpha_i^1 y_{i-1} + \beta_i^1 y_i + \gamma_i^1 y_{i+1} - \delta_i^1,$$
where $y_j = y(x_j)$ denotes the exact solution.
By using Taylor series, the terms $y_{i \pm 1}$, $y_{i \pm 1/2}$ and $M_{i \pm 1/2}$ can be expanded around the point $x_i$; hence we have
$$t_i^1 = \sum_{j=0}^{6} k_j^1\, y_i^{(j)} + O(h^7),$$
where
$$k_0^1 = 0; \qquad k_1^1 = h(\lambda_{i-1}A_i - \lambda_iA_{i-1}) + \frac{h}{2}(\lambda_{i-1}A_iB_{i-1} - \lambda_iA_{i-1}B_i);$$
$$k_2^1 = -\frac{3h^2}{4}(\lambda_{i-1}A_i + \lambda_iA_{i-1}) - \frac{h^2}{8}(\lambda_{i-1}A_iB_{i-1} + \lambda_iA_{i-1}B_i) + \frac{A_i(B_{i-1}-2)}{\lambda_{i-1}} + \frac{A_{i-1}(B_i-2)}{\lambda_i};$$
$$k_3^1 = \frac{7h^3}{24}(\lambda_{i-1}A_i - \lambda_iA_{i-1}) + \frac{h^3}{48}(\lambda_{i-1}A_iB_{i-1} - \lambda_iA_{i-1}B_i) - \frac{hA_i(B_{i-1}-2)}{2\lambda_{i-1}} + \frac{hA_{i-1}(B_i-2)}{2\lambda_i};$$
$$k_4^1 = -\frac{15h^4}{192}(\lambda_{i-1}A_i + \lambda_iA_{i-1}) - \frac{h^4}{384}(\lambda_{i-1}A_iB_{i-1} + \lambda_iA_{i-1}B_i) + \frac{h^2A_i(B_{i-1}-2)}{8\lambda_{i-1}} + \frac{h^2A_{i-1}(B_i-2)}{8\lambda_i};$$
$$k_5^1 = \frac{31h^5}{1920}(\lambda_{i-1}A_i - \lambda_iA_{i-1}) + \frac{h^5}{3840}(\lambda_{i-1}A_iB_{i-1} - \lambda_iA_{i-1}B_i) - \frac{h^3A_i(B_{i-1}-2)}{48\lambda_{i-1}} + \frac{h^3A_{i-1}(B_i-2)}{48\lambda_i};$$
$$k_6^1 = -\frac{63h^6}{23040}(\lambda_{i-1}A_i + \lambda_iA_{i-1}) - \frac{h^6}{46080}(\lambda_{i-1}A_iB_{i-1} + \lambda_iA_{i-1}B_i) + \frac{h^4A_i(B_{i-1}-2)}{384\lambda_{i-1}} + \frac{h^4A_{i-1}(B_i-2)}{384\lambda_i}.$$
Note that $A_i = 2\sinh(\lambda_i h)$ and $B_i = 2\cosh(\lambda_i h)$, and the hyperbolic functions can be expanded as
$$\sinh(x) = \sum_{n=0}^{\infty}\frac{x^{2n+1}}{(2n+1)!} = x + \frac{x^3}{3!} + \frac{x^5}{5!} + \cdots; \qquad \cosh(x) = \sum_{n=0}^{\infty}\frac{x^{2n}}{(2n)!} = 1 + \frac{x^2}{2!} + \frac{x^4}{4!} + \cdots.$$
By expanding the terms $A_{i-1}$, $A_i$, $B_{i-1}$ and $B_i$ in series, and by a tedious but straightforward calculation, $k_j^1$ $(j = 1, 2, \ldots, 6)$ can be written as
$$k_1^1 = \frac{h^4}{3}\lambda_{i-1}\lambda_i(\lambda_{i-1}^2 - \lambda_i^2) - \frac{h^6}{20}\lambda_{i-1}\lambda_i(\lambda_i^4 - \lambda_{i-1}^4) + O(h^7); \qquad k_2^1 = -\frac{h^5}{12}\lambda_{i-1}\lambda_i(\lambda_{i-1}^2 + \lambda_i^2) + O(h^7);$$
$$k_3^1 = -\frac{h^6}{72}\lambda_{i-1}\lambda_i(\lambda_i^2 - \lambda_{i-1}^2) + O(h^7); \qquad k_4^1 = \frac{h^5}{6}\lambda_{i-1}\lambda_i + O(h^7); \qquad k_5^1 = k_6^1 = O(h^7).$$
Hence the local truncation error of (9) is $O(h^4)$.
Similarly, the local truncation error of (10) can be written as
$$t_i^2 = \alpha_i^2 y_{i-1} + \beta_i^2 y_i + \gamma_i^2 y_{i+1} - \delta_i^2 = \sum_{j=0}^{6} k_j^2\, y_i^{(j)} + O(h^7),$$
where
$$k_0^2 = 0; \qquad k_1^2 = \frac{h}{2}\lambda_{i-1}\sin(\lambda_i h)\left[1 + \cos(\lambda_{i-1} h)\right] - \frac{h}{2}\lambda_i\sin(\lambda_{i-1} h)\left[1 + \cos(\lambda_i h)\right];$$
$$k_2^2 = -\frac{3h^2}{8}\left[\lambda_{i-1}\sin(\lambda_i h) + \lambda_i\sin(\lambda_{i-1} h)\right] - \frac{h^2}{8}\left[\lambda_{i-1}\sin(\lambda_i h)\cos(\lambda_{i-1} h) + \lambda_i\sin(\lambda_{i-1} h)\cos(\lambda_i h)\right] + \frac{\sin(\lambda_i h)\left[1 - \cos(\lambda_{i-1} h)\right]}{\lambda_{i-1}} + \frac{\sin(\lambda_{i-1} h)\left[1 - \cos(\lambda_i h)\right]}{\lambda_i};$$
$$k_3^2 = \frac{7h^3}{48}\left[\lambda_{i-1}\sin(\lambda_i h) - \lambda_i\sin(\lambda_{i-1} h)\right] + \frac{h^3}{48}\left[\lambda_{i-1}\sin(\lambda_i h)\cos(\lambda_{i-1} h) - \lambda_i\sin(\lambda_{i-1} h)\cos(\lambda_i h)\right] - \frac{h\sin(\lambda_i h)\left[1 - \cos(\lambda_{i-1} h)\right]}{2\lambda_{i-1}} + \frac{h\sin(\lambda_{i-1} h)\left[1 - \cos(\lambda_i h)\right]}{2\lambda_i};$$
$$k_4^2 = -\frac{15h^4}{384}\left[\lambda_{i-1}\sin(\lambda_i h) + \lambda_i\sin(\lambda_{i-1} h)\right] - \frac{h^4}{384}\left[\lambda_{i-1}\sin(\lambda_i h)\cos(\lambda_{i-1} h) + \lambda_i\sin(\lambda_{i-1} h)\cos(\lambda_i h)\right] + \frac{h^2\sin(\lambda_i h)\left[1 - \cos(\lambda_{i-1} h)\right]}{8\lambda_{i-1}} + \frac{h^2\sin(\lambda_{i-1} h)\left[1 - \cos(\lambda_i h)\right]}{8\lambda_i};$$
$$k_5^2 = \frac{31h^5}{3840}\left[\lambda_{i-1}\sin(\lambda_i h) - \lambda_i\sin(\lambda_{i-1} h)\right] + \frac{h^5}{3840}\left[\lambda_{i-1}\sin(\lambda_i h)\cos(\lambda_{i-1} h) - \lambda_i\sin(\lambda_{i-1} h)\cos(\lambda_i h)\right] - \frac{h^3\sin(\lambda_i h)\left[1 - \cos(\lambda_{i-1} h)\right]}{48\lambda_{i-1}} + \frac{h^3\sin(\lambda_{i-1} h)\left[1 - \cos(\lambda_i h)\right]}{48\lambda_i};$$
$$k_6^2 = -\frac{63h^6}{46080}\left[\lambda_{i-1}\sin(\lambda_i h) + \lambda_i\sin(\lambda_{i-1} h)\right] - \frac{h^6}{46080}\left[\lambda_{i-1}\sin(\lambda_i h)\cos(\lambda_{i-1} h) + \lambda_i\sin(\lambda_{i-1} h)\cos(\lambda_i h)\right] + \frac{h^4\sin(\lambda_i h)\left[1 - \cos(\lambda_{i-1} h)\right]}{384\lambda_{i-1}} + \frac{h^4\sin(\lambda_{i-1} h)\left[1 - \cos(\lambda_i h)\right]}{384\lambda_i}.$$
Since the trigonometric functions can be expanded as
$$\sin(x) = \sum_{n=0}^{\infty}\frac{(-1)^n x^{2n+1}}{(2n+1)!} = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots; \qquad \cos(x) = \sum_{n=0}^{\infty}\frac{(-1)^n}{(2n)!}x^{2n} = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \cdots,$$
expanding the trigonometric functions in $k_j^2$ $(j = 1, 2, \ldots, 6)$ in series shows that $k_j^2$ can be written as
$$k_1^2 = \frac{h^4}{12}\lambda_{i-1}\lambda_i(\lambda_i^2 - \lambda_{i-1}^2) - \frac{h^6}{80}\lambda_{i-1}\lambda_i(\lambda_i^4 - \lambda_{i-1}^4) + O(h^7); \qquad k_2^2 = \frac{h^5}{48}\lambda_{i-1}\lambda_i(\lambda_{i-1}^2 + \lambda_i^2) + O(h^7);$$
$$k_3^2 = \frac{h^6}{288}\lambda_{i-1}\lambda_i(\lambda_i^2 - \lambda_{i-1}^2) + \frac{h^6}{720}\lambda_{i-1}\lambda_i(\lambda_i^4 - \lambda_{i-1}^4) + O(h^7); \qquad k_4^2 = \frac{h^5}{24}\lambda_{i-1}\lambda_i + O(h^7); \qquad k_5^2 = k_6^2 = O(h^7).$$
Then the local truncation error of (10) is O ( h 4 ) .
Next, we only state the conclusions for the other cases. The truncation error of (11) is
$$t_i^3 = \sum_{j=0}^{6} k_j^3\, y_i^{(j)} + O(h^7),$$
where $k_0^3 = k_1^3 = k_2^3 = k_3^3 = 0$; $k_4^3 = -\frac{h^4}{24}$; $k_5^3 = 0$; $k_6^3 = \frac{h^6}{5760}$.
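The leading coefficient $k_4^3 = -h^4/24$ is easy to confirm numerically: for $P = 0$, relation (11) reads $y_{i-1} - 2y_i + y_{i+1} - \frac{h^2}{2}(Q_{i-1/2} + Q_{i+1/2})$ with $Q = y''$, so halving $h$ should shrink $t_i^3$ by about $2^4 = 16$. A small sketch (the test function $y = \sin$ and the point `x0` are arbitrary choices of ours):

```python
import numpy as np

y = np.sin                        # test function; then Q = y'' = -sin
x0 = 0.4                          # arbitrary interior node

def t3(h):
    # t_i^3 = y_{i-1} - 2 y_i + y_{i+1} - (h^2/2)(Q_{i-1/2} + Q_{i+1/2})
    return (y(x0 - h) - 2.0 * y(x0) + y(x0 + h)
            + 0.5 * h * h * (np.sin(x0 - h / 2) + np.sin(x0 + h / 2)))

for h in (0.1, 0.05):
    print(t3(h) / t3(h / 2))      # close to 16, confirming the h^4 leading term
```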
The truncation error of (12) is
$$t_i^4 = \sum_{j=0}^{6} k_j^4\, y_i^{(j)} + O(h^7),$$
where
$$k_0^4 = 0; \qquad k_1^4 = \frac{h^4}{6}\lambda_{i-1}\lambda_i(\lambda_{i-1}^2 + \lambda_i^2) - \frac{h^6}{40}\lambda_{i-1}\lambda_i(\lambda_i^4 - \lambda_{i-1}^4) + O(h^7); \qquad k_2^4 = \frac{h^5}{24}\lambda_{i-1}\lambda_i(\lambda_i^2 - \lambda_{i-1}^2) + O(h^7);$$
$$k_3^4 = \frac{h^6}{144}\lambda_{i-1}\lambda_i(\lambda_{i-1}^2 + \lambda_i^2) + O(h^7); \qquad k_4^4 = \frac{h^5}{12}\lambda_{i-1}\lambda_i + O(h^7); \qquad k_5^4 = k_6^4 = O(h^7).$$
The truncation error of (13) is
$$t_i^5 = \sum_{j=0}^{6} k_j^5\, y_i^{(j)} + O(h^7),$$
where $k_0^5 = 0$; $k_1^5 = \frac{h^4}{6}\lambda_{i-1}^3 + \frac{h^6}{40}\lambda_{i-1}^5 + O(h^7)$; $k_2^5 = -\frac{h^5}{24}\lambda_{i-1}^3 + O(h^7)$; $k_3^5 = \frac{h^6}{144}\lambda_{i-1}^3 + O(h^7)$; $k_4^5 = \frac{h^5}{12}\lambda_{i-1} + O(h^7)$; $k_5^5 = k_6^5 = O(h^7)$.
The truncation error of (14) is
$$t_i^6 = \sum_{j=0}^{6} k_j^6\, y_i^{(j)} + O(h^7),$$
where
$$k_0^6 = 0; \qquad k_1^6 = -\frac{h^4}{6}\lambda_{i-1}\lambda_i(\lambda_{i-1}^2 + \lambda_i^2) + \frac{h^6}{40}\lambda_{i-1}\lambda_i(\lambda_{i-1}^4 - \lambda_i^4) + O(h^7); \qquad k_2^6 = \frac{h^5}{24}\lambda_{i-1}\lambda_i(\lambda_{i-1}^2 - \lambda_i^2) + O(h^7);$$
$$k_3^6 = -\frac{h^6}{144}\lambda_{i-1}\lambda_i(\lambda_{i-1}^2 + \lambda_i^2) + O(h^7); \qquad k_4^6 = \frac{h^5}{12}\lambda_{i-1}\lambda_i + O(h^7); \qquad k_5^6 = k_6^6 = O(h^7).$$
The truncation error of (15) is
$$t_i^7 = \sum_{j=0}^{6} k_j^7\, y_i^{(j)} + O(h^7),$$
where $k_0^7 = 0$; $k_1^7 = -\frac{h^4}{12}\lambda_{i-1}^3 + \frac{h^6}{80}\lambda_{i-1}^5 + O(h^7)$; $k_2^7 = \frac{h^5}{48}\lambda_{i-1}^3 + O(h^7)$; $k_3^7 = -\frac{h^6}{288}\lambda_{i-1}^3 + O(h^7)$; $k_4^7 = \frac{h^5}{24}\lambda_{i-1} + O(h^7)$; $k_5^7 = k_6^7 = O(h^7)$.
The truncation error of (16) is
$$t_i^8 = \sum_{j=0}^{6} k_j^8\, y_i^{(j)} + O(h^7),$$
where $k_0^8 = 0$; $k_1^8 = -\frac{h^4}{6}\lambda_i^3 - \frac{h^6}{40}\lambda_i^5 + O(h^7)$; $k_2^8 = -\frac{h^5}{24}\lambda_i^3 + O(h^7)$; $k_3^8 = -\frac{h^6}{144}\lambda_i^3 + O(h^7)$; $k_4^8 = \frac{h^5}{12}\lambda_i + O(h^7)$; $k_5^8 = k_6^8 = O(h^7)$.
The truncation error of (17) is
$$t_i^9 = \sum_{j=0}^{6} k_j^9\, y_i^{(j)} + O(h^7),$$
where $k_0^9 = 0$; $k_1^9 = \frac{h^4}{12}\lambda_i^3 - \frac{h^6}{80}\lambda_i^5 + O(h^7)$; $k_2^9 = \frac{h^5}{48}\lambda_i^3 + O(h^7)$; $k_3^9 = \frac{h^6}{288}\lambda_i^3 + O(h^7)$; $k_4^9 = \frac{h^5}{24}\lambda_i + O(h^7)$; $k_5^9 = k_6^9 = O(h^7)$.
To summarize, the local truncation error of the proposed method is $O(h^4)$.

5. Numerical Examples

In this section, some numerical examples are given to illustrate the effectiveness of the proposed method. We begin with two definitions to measure the effectiveness of the proposed method.
The maximum absolute error [14] is computed by
$$e = \max_{0 \le i \le n}\,\left|\tilde y(x_i) - y(x_i)\right|,$$
where $\tilde y(x_i)$ and $y(x_i)$ represent the numerical and exact solutions at the node $x_i$, respectively.
The rate of convergence [14,15] is computed by
$$\rho = \frac{\ln(e_1/e_2)}{\ln(n_2/n_1)},$$
where $e_1$ and $e_2$ are the maximum absolute errors estimated on two different grids with $n_1 + 1$ and $n_2 + 1$ points.
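This formula is a one-liner in practice; applied to the first two rows of the Example 1 column of Table 2 below ($h = 2^{-2}$ and $h = 2^{-4}$, i.e., $n = 4$ and $n = 16$), it reproduces the tabulated rate:

```python
import math

def conv_rate(e1, e2, n1, n2):
    """Observed order of convergence from errors on two grids."""
    return math.log(e1 / e2) / math.log(n2 / n1)

print(conv_rate(4.327e-5, 2.673e-6, 4, 16))   # ~2.008, as in Table 2
```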
Example 1.
Consider the linear differential equation with variable coefficients:
$$y'' + x\,y = (x^2 + 1)e^x, \qquad -2 \le x \le -1, \qquad (20)$$
with boundary conditions $y(-2) = -3e^{-2}$, $y(-1) = -2e^{-1}$.
Example 2.
Consider the linear differential equation with variable coefficients:
$$y'' + (\sin x)\,y = 2\cos x - x\sin x + x\sin^2 x, \qquad -\frac{\pi}{6} \le x \le \frac{\pi}{6}, \qquad (21)$$
with boundary conditions $y(-\frac{\pi}{6}) = y(\frac{\pi}{6}) = \frac{\pi}{12}$.
Example 3.
Consider the linear differential equation with constant coefficients:
$$y'' + y = -1, \qquad 0 \le x \le 1, \qquad (22)$$
with boundary conditions $y(0) = 0$, $y(1) = \cos 1 - 1$.
The analytic solutions of (20)–(22) are $y = (x-1)e^x$, $y = x\sin x$ and $y = \cos x - 1$, respectively. In Table 1, we list the approximate and exact solutions of (20). In Table 2, we show the maximum errors for Examples 1–3 and the rate of convergence for Examples 1 and 2. From Table 2, we can observe that the order of convergence is 2. It is evident from Table 1 and Table 2 that the proposed method produces high-precision numerical solutions. Especially in Example 3, the errors of the numerical solutions are close to machine accuracy. This is because the approximate function (5) is constructed from the analytic solutions of the linear differential equation with constant coefficients, and Equation (22) happens to be such an equation. It is therefore not surprising that we obtain nearly exact solutions in Example 3.
In Figure 1a, we show the exact solution and the numerical results obtained by the proposed method; the corresponding error curve is shown in Figure 1b. These results coincide with those listed in Table 2.
Example 4.
Consider the linear differential equation with variable coefficients [3,5]:
$$y'' + x\,y = (3 - x - x^2 + x^3)\sin x + 4x\cos x, \qquad 0 \le x \le 1, \qquad (23)$$
with boundary conditions $y'(0) = -1$, $y'(1) = 2\sin 1$.
The analytic solution of (23) is $y = (x^2 - 1)\sin x$. In Table 3, we list the maximum errors and the elapsed CPU times. The numerical results obtained by implementing the methods of [3,5] are also listed for comparison. Note that the elapsed CPU time of the same routine in Matlab may differ from run to run; in order to measure it accurately, we took the average over 100 independent runs as the elapsed CPU time.
On the one hand, our method is more accurate than Ramadan's method [3] and comparable to Srivastava's method [5]. On the other hand, the runtime of our method is less than that of Srivastava's method and comparable to that of Ramadan's method. Thus Srivastava's method achieves the best precision in our experiment, while our method is superior in terms of elapsed CPU time. This is because we only need to solve a tridiagonal linear system instead of the penta-diagonal system of [5], which is much easier to implement.
Clearly, the proposed method is efficient, therefore it provides a new approach to deal with second order linear differential equations.
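To make the construction concrete, here is a minimal sketch of the scheme for the special case $P(x) > 0$ on the whole interval, so that only the trigonometric branch (Case 2, relation (10)) is needed. The function name is ours, and for brevity it uses a dense solve rather than the Thomas algorithm. Applied to Example 3 it reproduces the exact solution up to rounding, consistent with Table 2:

```python
import numpy as np

def solve_bvp_case2(P, Q, a, b, A, B, n):
    """Solve y'' + P(x) y = Q(x), y(a)=A, y(b)=B, assuming P > 0 on [a, b],
    via the midpoint spline scheme (trigonometric branch only)."""
    h = (b - a) / n
    x = a + h * np.arange(n + 1)
    xm = x[:-1] + h / 2                  # interval midpoints x_{i-1/2}
    lam = np.sqrt(P(xm))                 # one lambda per subinterval
    r = Q(xm) / P(xm)                    # particular-solution constants Q/P
    M = np.zeros((n + 1, n + 1))
    d = np.zeros(n + 1)
    M[0, 0] = 1.0; d[0] = A              # boundary conditions
    M[n, n] = 1.0; d[n] = B
    for i in range(1, n):                # C^1 continuity at interior nodes
        lL, lR = lam[i - 1], lam[i]
        sL, cL = np.sin(lL * h), np.cos(lL * h)
        sR, cR = np.sin(lR * h), np.cos(lR * h)
        M[i, i - 1] = -lL * sR
        M[i, i] = lL * sR * cL + lR * sL * cR
        M[i, i + 1] = -lR * sL
        d[i] = lL * sR * (cL - 1) * r[i - 1] + lR * sL * (cR - 1) * r[i]
    return x, np.linalg.solve(M, d)

# Example 3: y'' + y = -1, y(0) = 0, y(1) = cos(1) - 1; exact y = cos(x) - 1.
x, y = solve_bvp_case2(lambda t: np.ones_like(t), lambda t: -np.ones_like(t),
                       0.0, 1.0, 0.0, np.cos(1.0) - 1.0, n=32)
err = np.max(np.abs(y - (np.cos(x) - 1.0)))
print(f"max error: {err:.2e}")           # near machine precision
```

Because the exact solution of (22) is itself a combination of sine, cosine and a constant, it lies in the spline space exactly, which is why the error is at rounding level.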

6. Conclusions

Based on the analytic solutions of the linear differential equation with constant coefficients, we propose a numerical method for second order differential equations with boundary conditions. The analytic solutions are used to construct the approximation function, and the local truncation error is analyzed. Numerical examples have shown the effectiveness of the proposed method. Compared with other methods, we only need to solve a tridiagonal system, which is much easier to implement.

Author Contributions

Methodology, X.H.; Resources, C.L.; Writing—original draft, C.L.; Writing—review and editing, J.L. and C.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Natural Science Foundation of China, grant number 11771453, Natural Science Foundation of Hunan Province, grant number 2020JJ5267 and Scientific Research Funds of Hunan Provincial Education Department, grant number 18C877.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Griffiths, D.F.; Higham, D.J. Numerical Methods for Ordinary Differential Equations; Springer: London, UK, 2010. [Google Scholar]
  2. Usmani, R.A. The use of quartic splines in the numerical solution of a fourth-order boundary value problem. J. Comput. Appl. Math. 1992, 44, 187–200. [Google Scholar] [CrossRef] [Green Version]
  3. Ramadan, M.A.; Lashien, I.F.; Zahra, W.K. Polynomial and nonpolynomial spline approaches to the numerical solution of second order boundary value problems. Appl. Math. Comput. 2007, 184, 476–484. [Google Scholar] [CrossRef]
  4. Ramadan, M.A.; Lashien, I.F.; Zahra, W.K. High order accuracy nonpolynomial spline solutions for 2μth order two point boundary value problems. Appl. Math. Comput. 2008, 204, 920–927. [Google Scholar] [CrossRef]
  5. Srivastava, P.K.; Kumar, M.; Mohapatra, R.N. Quintic nonpolynomial spline method for the solution of a second order boundary-value problem with engineering applications. Comput. Math. Appl. 2011, 62, 1707–1714. [Google Scholar] [CrossRef] [Green Version]
  6. Noor, M.A.; Tirmizi, I.A.; Khan, M.A. Quadratic non-polynomial spline approach to the solution of a system of second order boundary-value problems. Appl. Math. Comput. 2007, 179, 153–160. [Google Scholar]
  7. Zahra, W.K.; Mhlawy, A.M.E. Numerical solution of two-parameter singularly perturbed boundary value problems via exponential spline. J. King Saud Univ. Sci. 2013, 25, 201–208. [Google Scholar] [CrossRef] [Green Version]
  8. Lodhi, R.K.; Mishra, H.K. Quintic B-spline method for solving second order linear and nonlinear singularly perturbed two-point boundary value problems. J. Comput. Appl. Math. 2017, 319, 170–187. [Google Scholar] [CrossRef]
  9. Zahra, W.K. A smooth approximation based on exponential spline solutions for nonlinear fourth order two point boundary value problems. Appl. Math. Comput. 2011, 217, 8447–8457. [Google Scholar] [CrossRef]
  10. Al-Said, E.A.; Noor, M.A. Quartic spline method for solving fourth order obstacle boundary value problems. J. Comput. Appl. Math. 2002, 143, 107–116. [Google Scholar] [CrossRef] [Green Version]
  11. Ideon, E.; Oja, P. Linear/linear rational spline collocation for linear boundary value problems. J. Comput. Appl. Math. 2014, 263, 32–44. [Google Scholar] [CrossRef]
  12. Rashidinia, J.; Ghasemi, M. B-spline collocation for solution of two-point boundary value problems. J. Comput. Appl. Math. 2011, 235, 2325–2342. [Google Scholar] [CrossRef] [Green Version]
  13. Hairer, E.; Wanner, G. Solving Ordinary Differential Equations II; Springer: Berlin/Heidelberg, Germany, 1991. [Google Scholar]
  14. Ahmad, G.I.; Shelly, A.; Kukreja, V.K. Cubic Hermite Collocation Method for Solving Boundary Value Problems with Dirichlet, Neumann, and Robin Conditions. Int. J. Eng. Math. 2014, 2014, 365209. [Google Scholar]
  15. Ge, Y.; Cao, F. Multigrid method based on the transformation-free HOC scheme on nonuniform grids for 2D convection diffusion problems. J. Comput. Phys. 2011, 230, 4051–4070. [Google Scholar] [CrossRef]
Figure 1. Numerical results in Example 2 ( n = 32 ).
Table 1. Comparison of the numerical solutions with the exact solutions in Example 1 ($h = 2^{-3}$).
| x | ỹ(x) | y(x) | x | ỹ(x) | y(x) | x | ỹ(x) | y(x) |
|---|---|---|---|---|---|---|---|---|
| −2.000 | −0.406006 | −0.406006 | −1.625 | −0.516883 | −0.516893 | −1.250 | −0.644628 | −0.644636 |
| −1.875 | −0.440891 | −0.440896 | −1.500 | −0.557815 | −0.557825 | −1.125 | −0.689882 | −0.689886 |
| −1.750 | −0.477870 | −0.477878 | −1.375 | −0.600484 | −0.600494 | −1.000 | −0.735759 | −0.735759 |
Table 2. Numerical results in Examples 1–3.
| Example 1 |  |  | Example 2 |  |  | Example 3 |  |
|---|---|---|---|---|---|---|---|
| h | e | ρ | n | e | ρ | n | e |
| 2^−2 | 4.327 × 10^−5 | — | 8 | 3.785 × 10^−4 | — | 4 | 2.012 × 10^−16 |
| 2^−4 | 2.673 × 10^−6 | 2.0084 | 32 | 2.390 × 10^−5 | 1.9926 | 16 | 2.317 × 10^−15 |
| 2^−6 | 1.672 × 10^−7 | 2.0039 | 128 | 1.494 × 10^−6 | 1.9962 | 64 | 2.185 × 10^−13 |
| 2^−8 | 1.045 × 10^−8 | 2.0026 | 512 | 9.343 × 10^−8 | 1.9974 | 256 | 4.247 × 10^−13 |
Table 3. Maximum errors and elapsed CPU times in Example 4.
|  | Ramadan et al. [3] |  | Srivastava et al. [5] |  | Our Method |  |
|---|---|---|---|---|---|---|
| n | e | Time (s) | e | Time (s) | e | Time (s) |
| 4 | 4.946 × 10^−2 | 1.78 × 10^−4 | 5.927 × 10^−3 | 5.33 × 10^−3 | 2.710 × 10^−3 | 2.48 × 10^−4 |
| 8 | 1.231 × 10^−2 | 1.89 × 10^−4 | 6.173 × 10^−4 | 5.34 × 10^−3 | 6.700 × 10^−4 | 2.62 × 10^−4 |
| 16 | 3.081 × 10^−3 | 2.61 × 10^−4 | 1.797 × 10^−4 | 5.54 × 10^−3 | 1.670 × 10^−4 | 3.68 × 10^−4 |
| 32 | 7.704 × 10^−4 | 3.43 × 10^−4 | 3.495 × 10^−5 | 5.72 × 10^−3 | 4.183 × 10^−5 | 5.94 × 10^−4 |
| 64 | 1.926 × 10^−4 | 5.75 × 10^−4 | 6.702 × 10^−6 | 6.28 × 10^−3 | 1.046 × 10^−5 | 1.10 × 10^−3 |
| 128 | 4.800 × 10^−5 | 1.62 × 10^−3 | 1.179 × 10^−6 | 7.36 × 10^−3 | 2.615 × 10^−6 | 2.25 × 10^−3 |

Liu, C.; Han, X.; Li, J. A Class of Spline Functions for Solving 2-Order Linear Differential Equations with Boundary Conditions. Algorithms 2020, 13, 231. https://0-doi-org.brum.beds.ac.uk/10.3390/a13090231
