
Constructing Frozen Jacobian Iterative Methods for Solving Systems of Nonlinear Equations, Associated with ODEs and PDEs Using the Homotopy Method

1 Department of Mathematics, Riphah International University, Main Satyana Road, Faisalabad 44000, Pakistan
2 Dipartimento di Scienza e Alta Tecnologia, Università dell’Insubria, Via Valleggio 11, Como 22100, Italy
3 Departament de Física i Enginyeria Nuclear, Universitat Politècnica de Catalunya, Comte d’Urgell 187, Barcelona 08036, Spain
4 Department of Information Technology, Uppsala University, Box 337, Uppsala SE-751 05, Sweden
5 Department of Mathematics, King Abdulaziz University, Jeddah 21589, Saudi Arabia
6 Department of Computer Science, Virtual University, Lahore 5400, Pakistan
* Author to whom correspondence should be addressed.
Submission received: 25 December 2015 / Revised: 3 March 2016 / Accepted: 4 March 2016 / Published: 11 March 2016
(This article belongs to the Special Issue Numerical Algorithms for Solving Nonlinear Equations and Systems)

Abstract

A homotopy method is presented for the construction of frozen Jacobian iterative methods. The frozen Jacobian iterative methods are attractive because the inversion of the Jacobian, in terms of an LU factorization, is performed only once for a single instance of the iterative method. We embed parameters in the iterative methods with the help of the homotopy method: the values of the parameters are determined in such a way that a better convergence rate is achieved. The proposed homotopy technique is general and has the ability to construct different families of iterative methods for solving weakly nonlinear systems of equations. Further iterative methods are also proposed for solving general systems of nonlinear equations.

1. Introduction

Iterative methods for solving nonlinear equations and systems of nonlinear equations are a perennially attractive research topic. Newton’s method [1,2,3] is a classical iterative method for solving systems of nonlinear equations and offers quadratic convergence. Many authors have developed further iterative methods for solving either nonlinear equations or systems of nonlinear equations [4,5,6,7]. Here, we focus on the idea of the frozen Jacobian: this idea is computationally attractive for designing iterative methods because the inversion of the Jacobian, in terms of an LU factorization, is performed a single time for a single instance of the iterative method. Many researchers have proposed frozen Jacobian iterations (see [8,9,10,11,12] and references therein). In our view, the use of perturbation techniques in connection with the homotopy method can be very effective for developing new frozen Jacobian iterative methods for solving systems of nonlinear equations. The homotopy method can be traced back to the work of Poincaré [13]. A careful review of the relevant literature shows that much work has been done on designing iterative methods for solving nonlinear scalar equations using the homotopy perturbation method, but not for systems of nonlinear equations. The reason is that the iterative methods constructed for solving nonlinear equations by means of different homotopy methods usually contain higher order derivatives. From a computational point of view, the evaluation of higher order Fréchet derivatives is expensive for a general system of nonlinear equations. However, when considering systems of weakly nonlinear equations, associated with wide classes of ordinary and partial differential equations, all higher order Fréchet derivatives possess a diagonal structure: as a consequence, the resulting computational cost is mild.
The core idea of homotopy (or homotopy continuation) was developed for solving hard problems numerically. The nonlinear problem is approximated around some initial guess to construct a linear model, or a model that can be solved easily. Once we have a linear model, we construct a homotopy function that maps the linear model to the nonlinear model with the help of auxiliary parameters. The embedding of auxiliary parameters in the homotopy function lets us solve stiff problems iteratively, in the sense that the solution of an easier problem is used as the initial guess for the next, harder problem. The gradual change of the parameters thus allows a hard problem to be approached through a sequence of easier ones. This is one view of homotopy methods or, more precisely, of homotopy continuation methods. The other view is to use the homotopy method to approximate the solution of nonlinear problems analytically [14,15,16,17,18,19,20]. Many authors [7,21] have used homotopy methods for constructing iterative methods for solving nonlinear equations. For the construction of iterative methods, they build a homotopy between a linear model of the nonlinear equation and the nonlinear equation itself, obtained by expanding it around some initial guess. The constructed homotopy contains auxiliary parameters, which can be used for different purposes. Some authors use the homotopy parameters to enhance the convergence by changing the dynamics of the iterative method and give estimates for the optimal parameters [22].
Now, we present in detail our proposals. Noor [21] proposed the following homotopy iterative method for solving nonlinear equations. Let f ( x ) = 0 be a scalar nonlinear equation, and let x 0 be an initial guess. We perform a linearization of f ( x ) around a point γ:
$$ f(x) = f(\gamma) + f'(\gamma)(x - \gamma) + g(x) = 0, \qquad g(x) = f(x) - f(\gamma) - f'(\gamma)(x - \gamma) \tag{1} $$
where f(γ) + f'(γ)(x − γ) is the linear approximation of f(x) around γ and g(x) is the nonlinear part of f(x). Equation (1) can be written as:
$$ x = c + N(x) \tag{2} $$
where c = γ − f(γ)/f'(γ) and N(x) = −g(x)/f'(γ). Assuming that L is a linear function, according to [21], a homotopy can be defined between L(v) − L(x_0) and L(v) + N(v) − c:
$$ H(v, q, T) = (1 - \alpha q)\big[L(v) - L(x_0)\big] + q\alpha\big[L(v) + N(v) - c\big] - \alpha q^2(1 - q)\,T = 0 \tag{3} $$
where H(v, q, T): ℝ × [0, 1] × ℝ → ℝ, q ∈ [0, 1] is an embedding parameter, α is a real free parameter and T is a suitable operator. In Equation (3), L(v) − L(x_0) and L(v) + N(v) − c are called homotopic, i.e., we have established a homotopy between the linear model and the nonlinear model. When q = 0, we have to solve the linear model H(v, 0, T) = L(v) − L(x_0), and for q = 1, we have as the target the nonlinear problem in Equation (2), that is, H(v, 1, T) = α N(v) − c. When we change q from zero to one, the linear problem continuously deforms into the original nonlinear problem. Further, we assume that the solution of Equation (3) can be expressed as a power series in q:
$$ v = x_0 + q\,x_1 + q^2 x_2 + \cdots \tag{4} $$
Notice that the solution of the target nonlinear problem can be approximated as:
$$ x = \lim_{q \to 1} v = x_0 + x_1 + x_2 + \cdots \tag{5} $$
In 2010, Wu and Cheung [22] proposed a two-parameter homotopy method for the construction of iterative methods for solving scalar nonlinear equations. The authors in [22] use the homotopy auxiliary parameters to change the dynamics of the proposed iterative methods and provide optimal values of the auxiliary parameters that enhance the convergence speed without altering the convergence order. It is worth noting that they used the auxiliary parameters to change the path of the converging sequence of successive approximations in order to obtain fast convergence. Inspired by their mathematical model for the homotopy, we introduce a generalization of the two-parameter homotopy for solving systems of weakly nonlinear equations. However, our primary motivation is to use the auxiliary parameters to increase the convergence order of the iterative methods. Secondly, our proposal concerns the construction of iterative methods for solving systems of nonlinear equations instead of a single nonlinear equation. In summary, we use homotopy techniques to develop mathematical models that we then use for designing higher order iterative methods with the help of the new parameters.

2. Parametric Homotopy

Denote a system of nonlinear equations in general form as:
$$ F(y) = 0 \tag{6} $$
where y = [y_1, y_2, …, y_n]^T and F(y) = [F_1(y), F_2(y), …, F_n(y)]^T. We are interested in finding a simple root y_* of Equation (6), i.e., F(y_*) = 0 and det F'(y_*) ≠ 0. Let y_ϑ be an initial guess. We construct the linear approximation of Equation (6) as:
$$ F(y) \approx L(y) = F(y_\vartheta) + F'(y_\vartheta)\,(y - y_\vartheta) \tag{7} $$
Now, we define a parametric homotopy between L ( y ) and F ( y ) , taking inspiration from the work in [22]. Therefore:
$$ \Big(1 + \sum_{i=1}^{m} \alpha_i\, q^i\Big)\,L(y) - \alpha_0\, q\, F\big(y(q)\big) = 0 \tag{8} $$
where the quantities α_i, i = 0, …, m, are parameters, α_0 ≠ 0, 1 + Σ_{i=1}^{m} α_i = 0, y(q) = x_0 + q x_1 + q^2 x_2 + ⋯ and y(1) = y_*.

3. Construction of Iterative Methods

We start from the case m = 3, in which Equation (8) can be written as:
$$ \big(1 + \alpha_1 q + \alpha_2 q^2 - (1 + \alpha_1 + \alpha_2)\,q^3\big)\,L(y) - \alpha_0\, q\, F\big(y(q)\big) = 0 \tag{9} $$
The Taylor series expansion of F ( · ) around x 0 gives:
$$
\begin{aligned}
F(y) &= F(x_0) + F'(x_0)(y - x_0) + \tfrac{1}{2}F''(x_0)(y - x_0)^2 + \tfrac{1}{6}F'''(x_0)(y - x_0)^3 + O\big((y - x_0)^4\big)\\
F(y) &= C_0 + C_1 z + \tfrac{1}{2}C_2 z^2 + \tfrac{1}{6}C_3 z^3 + O\big(z^4\big)
\end{aligned}\tag{10}
$$
where z = y − x_0 and C_j = F^{(j)}(x_0), i.e., C_j is the j-th order Fréchet derivative of F at x_0. Using y(q) = x_0 + Σ_{i ≥ 1} q^i x_i in Equation (10), we get:
$$
\begin{aligned}
F(y) &= C_0 + C_1 x_1\, q + \big(C_1 x_2 + \tfrac{1}{2}C_2 x_1^2\big)\, q^2 + \big(C_2 x_2 x_1 + C_1 x_3 + \tfrac{1}{6}C_3 x_1^3\big)\, q^3\\
&\quad + \big(C_1 x_4 + C_2 x_3 x_1 + \tfrac{1}{2}C_2 x_2^2 + \tfrac{1}{2}C_3 x_1^2 x_2\big)\, q^4
 + \big(C_2 x_4 x_1 + C_2 x_3 x_2 + \tfrac{1}{2}C_3 x_1^2 x_3 + \tfrac{1}{2}C_3 x_1 x_2^2\big)\, q^5 + O\big(q^6\big)
\end{aligned}\tag{11}
$$
The linear approximation L ( y ) can be written as:
$$
L(y) = F'(y_\vartheta)x_0 - F'(y_\vartheta)y_\vartheta + F(y_\vartheta) + F'(y_\vartheta)x_1\, q + F'(y_\vartheta)x_2\, q^2 + F'(y_\vartheta)x_3\, q^3 + F'(y_\vartheta)x_4\, q^4 + F'(y_\vartheta)x_5\, q^5 + O\big(q^6\big)\tag{12}
$$
Simplification of Equation (9) gives:
$$
\begin{aligned}
&\big[F(y_\vartheta) + F'(y_\vartheta)x_0 - F'(y_\vartheta)y_\vartheta\big]
+ \big[\alpha_1 F'(y_\vartheta)x_0 - \alpha_1 F'(y_\vartheta)y_\vartheta - \alpha_0 C_0 + \alpha_1 F(y_\vartheta) + F'(y_\vartheta)x_1\big]\,q\\
&\quad + \big[-\alpha_0 C_1 x_1 + \alpha_1 F'(y_\vartheta)x_1 + \alpha_2 F'(y_\vartheta)x_0 - \alpha_2 F'(y_\vartheta)y_\vartheta + \alpha_2 F(y_\vartheta) + F'(y_\vartheta)x_2\big]\,q^2\\
&\quad + \big[-\alpha_0 C_1 x_2 - \alpha_1 F'(y_\vartheta)x_0 + \alpha_1 F'(y_\vartheta)x_2 + \alpha_1 F'(y_\vartheta)y_\vartheta - \alpha_2 F'(y_\vartheta)x_0 + \alpha_2 F'(y_\vartheta)x_1 + \alpha_2 F'(y_\vartheta)y_\vartheta\\
&\qquad - \alpha_1 F(y_\vartheta) - \alpha_2 F(y_\vartheta) - F'(y_\vartheta)x_0 + F'(y_\vartheta)x_3 + F'(y_\vartheta)y_\vartheta - F(y_\vartheta) - \tfrac{1}{2}\alpha_0 C_2 x_1^2\big]\,q^3\\
&\quad + \big[-\alpha_0 C_2 x_2 x_1 - \alpha_0 C_1 x_3 - \alpha_1 F'(y_\vartheta)x_1 + \alpha_1 F'(y_\vartheta)x_3 - \alpha_2 F'(y_\vartheta)x_1 + \alpha_2 F'(y_\vartheta)x_2\\
&\qquad - F'(y_\vartheta)x_1 + F'(y_\vartheta)x_4 - \tfrac{1}{6}\alpha_0 C_3 x_1^3\big]\,q^4 + O\big(q^5\big) = 0
\end{aligned}\tag{13}
$$
By equating to zero the coefficients of powers of q, we find five equations:
$$ F'(y_\vartheta)x_0 - F'(y_\vartheta)y_\vartheta + F(y_\vartheta) = 0 \tag{14} $$
$$ \alpha_1 F'(y_\vartheta)x_0 - \alpha_1 F'(y_\vartheta)y_\vartheta - \alpha_0 C_0 + \alpha_1 F(y_\vartheta) + F'(y_\vartheta)x_1 = 0 \tag{15} $$
$$ -\alpha_0 C_1 x_1 + \alpha_1 F'(y_\vartheta)x_1 + \alpha_2 F'(y_\vartheta)x_0 - \alpha_2 F'(y_\vartheta)y_\vartheta + \alpha_2 F(y_\vartheta) + F'(y_\vartheta)x_2 = 0 \tag{16} $$
$$
\begin{aligned}
&F'(y_\vartheta)x_3 + \alpha_1 F'(y_\vartheta)x_2 + \alpha_2 F'(y_\vartheta)x_1 - F(y_\vartheta) - F'(y_\vartheta)x_0 + F'(y_\vartheta)y_\vartheta - \alpha_1 F(y_\vartheta) - \alpha_1 F'(y_\vartheta)x_0\\
&\quad + \alpha_1 F'(y_\vartheta)y_\vartheta - \alpha_2 F(y_\vartheta) - \alpha_2 F'(y_\vartheta)x_0 + \alpha_2 F'(y_\vartheta)y_\vartheta - \alpha_0 C_1 x_2 - \tfrac{1}{2}\alpha_0 C_2 x_1^2 = 0
\end{aligned}\tag{17}
$$
$$
\begin{aligned}
&F'(y_\vartheta)x_4 + \alpha_1 F'(y_\vartheta)x_3 + \alpha_2 F'(y_\vartheta)x_2 - F'(y_\vartheta)x_1 - \alpha_1 F'(y_\vartheta)x_1 - \alpha_2 F'(y_\vartheta)x_1\\
&\quad - \alpha_0 C_1 x_3 - \alpha_0 C_2 x_2 x_1 - \tfrac{1}{6}\alpha_0 C_3 x_1^3 = 0
\end{aligned}\tag{18}
$$
By solving Equations (14)–(18), we obtain:
$$
\begin{aligned}
x_0 &= y_\vartheta - F'(y_\vartheta)^{-1}F(y_\vartheta),\\
x_1 &= -\alpha_0\, F'(y_\vartheta)^{-1}F(x_0),\\
x_2 &= \alpha_0\alpha_1\, F'(y_\vartheta)^{-1}F(x_0) + \alpha_0^2\, F'(y_\vartheta)^{-1}F'(x_0)\,F'(y_\vartheta)^{-1}F(x_0),\\
x_3 &= -\alpha_1^2\alpha_0\, F'(y_\vartheta)^{-1}F(x_0) + \alpha_2\alpha_0\, F'(y_\vartheta)^{-1}F(x_0)
      + \alpha_0^3\, F'(y_\vartheta)^{-1}F'(x_0)\,F'(y_\vartheta)^{-1}F'(x_0)\,F'(y_\vartheta)^{-1}F(x_0)\\
    &\quad + \tfrac{1}{2}\alpha_0^3\, F'(y_\vartheta)^{-1}F''(x_0)\,\big(F'(y_\vartheta)^{-1}F(x_0)\big)\big(F'(y_\vartheta)^{-1}F(x_0)\big),\\
x_4 &= \alpha_1^3\alpha_0\, F'(y_\vartheta)^{-1}F(x_0) - 2\alpha_1\alpha_2\alpha_0\, F'(y_\vartheta)^{-1}F(x_0)
      - \alpha_1\alpha_0^3\, F'(y_\vartheta)^{-1}F'(x_0)\,F'(y_\vartheta)^{-1}F'(x_0)\,F'(y_\vartheta)^{-1}F(x_0)\\
    &\quad - \tfrac{3}{2}\alpha_1\alpha_0^3\, F'(y_\vartheta)^{-1}F''(x_0)\,\big(F'(y_\vartheta)^{-1}F(x_0)\big)\big(F'(y_\vartheta)^{-1}F(x_0)\big)
      - \alpha_0\, F'(y_\vartheta)^{-1}F(x_0) - \alpha_0\alpha_1\, F'(y_\vartheta)^{-1}F(x_0) - \alpha_2\alpha_0\, F'(y_\vartheta)^{-1}F(x_0)\\
    &\quad - \alpha_1^2\alpha_0^2\, F'(y_\vartheta)^{-1}F'(x_0)\,F'(y_\vartheta)^{-1}F(x_0)
      + \alpha_0^4\, F'(y_\vartheta)^{-1}F'(x_0)\,F'(y_\vartheta)^{-1}F'(x_0)\,F'(y_\vartheta)^{-1}F'(x_0)\,F'(y_\vartheta)^{-1}F(x_0)\\
    &\quad + \tfrac{1}{2}\alpha_0^4\, F'(y_\vartheta)^{-1}F'(x_0)\,F'(y_\vartheta)^{-1}F''(x_0)\,\big(F'(y_\vartheta)^{-1}F(x_0)\big)\big(F'(y_\vartheta)^{-1}F(x_0)\big)\\
    &\quad - \tfrac{1}{2}\alpha_0^4\, F'(y_\vartheta)^{-1}F''(x_0)\,\big(F'(y_\vartheta)^{-1}F'(x_0)\,F'(y_\vartheta)^{-1}F(x_0)\big)\big(F'(y_\vartheta)^{-1}F(x_0)\big)\\
    &\quad - \tfrac{1}{2}\alpha_0^4\, F'(y_\vartheta)^{-1}F''(x_0)\,\big(F'(y_\vartheta)^{-1}F(x_0)\big)\big(F'(y_\vartheta)^{-1}F'(x_0)\,F'(y_\vartheta)^{-1}F(x_0)\big)\\
    &\quad - \tfrac{1}{6}\alpha_0^4\, F'(y_\vartheta)^{-1}F'''(x_0)\,\big(F'(y_\vartheta)^{-1}F(x_0)\big)\big(F'(y_\vartheta)^{-1}F(x_0)\big)\big(F'(y_\vartheta)^{-1}F(x_0)\big).
\end{aligned}
$$
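The derivation above can be reproduced for a scalar test equation with a few lines of SymPy (the paper performs the analogous vector computation with Maple): the cubic homotopy (9) is expanded in powers of q and the coefficient equations are solved one at a time for x_0, x_1, x_2. The test function F(y) = y^3 − 2 and the symbol names are illustrative assumptions, and the signs of the resulting x_i depend on the sign convention chosen for the α_0 term.

```python
# Symbolic sketch (scalar case): expand the cubic homotopy (9) in powers of q and
# solve the coefficient equations sequentially for x0, x1, x2.
import sympy as sp

q, a0, a1, a2 = sp.symbols('q alpha0 alpha1 alpha2')
y_t = sp.Symbol('y_theta')                    # expansion point (initial guess)
x0, x1, x2 = sp.symbols('x0 x1 x2')

F = lambda y: y**3 - 2                        # illustrative nonlinear test function
s = sp.Symbol('s')
dF = lambda y: sp.diff(F(s), s).subs(s, y)    # F'

y_series = x0 + q*x1 + q**2*x2                # truncated series y(q)
L = F(y_t) + dF(y_t)*(y_series - y_t)         # linear model around y_theta

H = sp.expand((1 + a1*q + a2*q**2 - (1 + a1 + a2)*q**3)*L - a0*q*F(y_series))

sols = {}
for k, xk in enumerate([x0, x1, x2]):
    ck = H.coeff(q, k).subs(sols)             # coefficient of q**k, earlier x's inserted
    sols[xk] = sp.solve(sp.Eq(ck, 0), xk)[0]

print(sp.simplify(sols[x0]))                  # Newton step y_theta - F(y_theta)/F'(y_theta)
print(sp.simplify(sols[x1]))                  # alpha0-scaled frozen-Newton correction at x0
```

Summing x_0 + x_1 (and further terms) reproduces the structure of the iterative methods derived next.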

3.1. Iterative Method I

By adding x 0 and x 1 , we get the following iterative method:
$$
\begin{aligned}
&u_0 = \text{initial guess},\\
&F'(u_0)\,\phi_1 = F(u_0), \qquad u_1 = u_0 - \phi_1,\\
&F'(u_0)\,\phi_2 = F(u_1), \qquad u_2 = u_1 - \alpha_0\,\phi_2.
\end{aligned}\tag{19}
$$
If α 0 = 1 , then the order of convergence of Equation (19) is three.
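As an illustration of how Equation (19) is organized around a single LU factorization per outer iteration, here is a minimal NumPy/SciPy sketch. The 2 × 2 test system, the starting point and the fixed iteration count are assumptions made for the example only.

```python
# Minimal sketch of the frozen-Jacobian method (19): one LU factorization per outer
# iteration, with the factors reused for the second step.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def F(x):
    return np.array([x[0]**2 + x[1]**2 - 4.0, x[0]*x[1] - 1.0])

def J(x):  # Jacobian of F
    return np.array([[2*x[0], 2*x[1]], [x[1], x[0]]])

def method_19(u0, alpha0=1.0, iters=6):
    u0 = np.asarray(u0, float)
    for _ in range(iters):
        lu = lu_factor(J(u0))          # single LU factorization per iteration
        phi1 = lu_solve(lu, F(u0))     # F'(u0) phi1 = F(u0)
        u1 = u0 - phi1                 # Newton step
        phi2 = lu_solve(lu, F(u1))     # frozen Jacobian: F'(u0) phi2 = F(u1)
        u0 = u1 - alpha0 * phi2
        print(np.linalg.norm(F(u0)))   # residual history
    return u0

method_19([2.0, 0.3])
```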

3.2. Iterative Method II

Summing x_0, x_1 and x_2, we deduce the following iterative method:
$$
\begin{aligned}
&u_0 = \text{initial guess},\\
&F'(u_0)\,\phi_1 = F(u_0), \qquad u_1 = u_0 - \phi_1,\\
&F'(u_0)\,\phi_2 = F(u_1),\\
&F'(u_0)\,\phi_3 = F'(u_1)\,\phi_2,\\
&u_2 = u_1 - 2\phi_2 + \phi_3.
\end{aligned}\tag{20}
$$
The convergence order of Equation (20) is four. Notice that the information F'(u_1) is already available, and we want to use it in a computationally efficient way to enhance the convergence order. This means we can add more steps of the form F'(u_0)φ_3 = F'(u_1)φ_2 to the structure of the iterative scheme. Thus, we can enhance the convergence order of the iterative method Equation (20) by including more parameters, owing to the inclusion of additional steps. We construct a model of the iterative method Equation (20) as:
$$
\begin{aligned}
&u_0 = \text{initial guess},\\
&F'(u_0)\,\phi_1 = F(u_0), \qquad u_1 = u_0 - \phi_1,\\
&F'(u_0)\,\phi_2 = F(u_1),\\
&F'(u_0)\,\phi_3 = F'(u_1)\,\phi_2,\\
&F'(u_0)\,\phi_4 = F'(u_1)\,\phi_3,\\
&u_2 = u_1 - \tfrac{13}{4}\phi_2 + \tfrac{7}{2}\phi_3 - \tfrac{5}{4}\phi_4.
\end{aligned}\tag{21}
$$
The order of convergence of the iterative method Equation (21) is five.
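The fifth-order scheme (21) reuses the same LU factors four times per iteration, while F'(u_1) enters only through matrix-vector products. A sketch, written as a function so that it can be applied to the F and J of the previous example; the function interface and the iteration count are assumptions.

```python
# Sketch of the fifth-order scheme (21): the LU factors of F'(u0) are reused for
# four triangular solves, and F'(u1) is evaluated but never factorized.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def method_21(F, J, u0, iters=5):
    u0 = np.asarray(u0, float)
    for _ in range(iters):
        lu = lu_factor(J(u0))               # frozen Jacobian, factorized once
        u1 = u0 - lu_solve(lu, F(u0))
        J1 = J(u1)                          # only used in matrix-vector products
        phi2 = lu_solve(lu, F(u1))
        phi3 = lu_solve(lu, J1 @ phi2)      # F'(u0) phi3 = F'(u1) phi2
        phi4 = lu_solve(lu, J1 @ phi3)      # F'(u0) phi4 = F'(u1) phi3
        u0 = u1 - 13/4*phi2 + 7/2*phi3 - 5/4*phi4
        print(np.linalg.norm(F(u0)))
    return u0

# usage (with F and J from the previous sketch): method_21(F, J, np.array([2.0, 0.3]))
```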

3.3. Iterative Method III

The summation of x 0 , x 1 , x 2 and x 3 induces the following iterative method:
$$
\begin{aligned}
&u_0 = \text{initial guess},\\
&F'(u_0)\,\phi_1 = F(u_0), \qquad u_1 = u_0 - \phi_1,\\
&F'(u_0)\,\phi_2 = F(u_1),\\
&F'(u_0)\,\phi_3 = F'(u_1)\,\phi_2,\\
&F'(u_0)\,\phi_4 = F'(u_1)\,\phi_3,\\
&F'(u_0)\,\phi_5 = F''(u_1)\,\phi_2^2,\\
&u_2 = u_1 + (-2 + \alpha_0)\,\phi_2 + (1 - 2\alpha_0)\,\phi_3 + \alpha_0\,\phi_4 + \big({-2\alpha_0} - \tfrac{5}{2}\big)\,\phi_5.
\end{aligned}\tag{22}
$$
The order of convergence of the iterative method Equation (22) is five. We extend the model of the iterative method Equation (22) by including more free parameters, so that we can enhance the convergence order:
$$
\begin{aligned}
&u_0 = \text{initial guess},\\
&F'(u_0)\,\phi_1 = F(u_0), \qquad u_1 = u_0 - \phi_1,\\
&F'(u_0)\,\phi_2 = F(u_1),\\
&F'(u_0)\,\phi_3 = F'(u_1)\,\phi_2,\\
&F'(u_0)\,\phi_4 = F'(u_1)\,\phi_3,\\
&F'(u_0)\,\phi_5 = F''(u_1)\,\phi_2^2,\\
&F'(u_0)\,\phi_6 = F''(u_1)\,\phi_2\phi_3,\\
&u_2 = u_1 + \beta_1\phi_2 + \beta_2\phi_3 + \beta_3\phi_4 + \beta_4\phi_5 + \beta_5\phi_6.
\end{aligned}\tag{23}
$$
If β_1 = −3, β_2 = 3, β_3 = −1, β_4 = −4 and β_5 = 7/2, then the order of convergence of the iterative method Equation (23) is six. Further iterative methods can be constructed by adding more of the x_i and following essentially the same basic ideas.

4. Application to Systems of Nonlinear Equations Associated with Ordinary and Partial Differential Equations

According to our analysis, the parametric homotopy method has the potential to provide an effective model for the construction of higher order iterative methods for solving systems of nonlinear equations. The unknown parameters in the homotopy method and in the extended models (based on the models provided by the homotopy) can be used to enhance the convergence order of the iterative method. The iterative method Equation (21) can be employed effectively for solving a general system of nonlinear equations. By contrast, the iterative method Equation (23) is efficient only for weakly nonlinear systems associated with nonlinear ordinary differential equations (ODEs) and partial differential equations (PDEs). In fact, many ODEs and PDEs can be written as:
$$ L(y) + f(y) - w(x) = 0 \tag{24} $$
where L(·) is a linear differential operator, f(·) is a nonlinear function, y is the dependent variable and x is the independent variable. Let A be a discrete approximation of the linear differential operator L(·), f(y) = [f(y_1), f(y_2), …, f(y_n)]^T and w(x) = [w(x_1), w(x_2), …, w(x_n)]^T. The nonlinear problem Equation (24) can then be written in the form of a system of nonlinear equations:
$$ F(y) = A\,y + f(y) - w(x) = 0 \tag{25} $$
It should be noticed that there are discretizations of Equation (24) that do not correspond to Equation (25). Here, we explicitly assume that the numerical methods used for approximating Equation (24) lead to a representation as in Equation (25). The higher order Fréchet derivatives of Equation (25) can be computed as:
$$ F'(y) = A + \mathrm{diag}\big(f'(y)\big), \qquad F''(y) = \mathrm{diag}\big(f''(y)\big), \qquad F'''(y) = \mathrm{diag}\big(f'''(y)\big) \tag{26} $$
where diag(·) denotes the diagonal matrix whose main diagonal is the input vector. The iterative method Equation (23) for weakly nonlinear systems can be written as:
$$
\begin{aligned}
&u_0 = \text{initial guess}, \qquad B = A + \mathrm{diag}\big(f'(u_0)\big),\\
&B\,\phi_1 = A u_0 + f(u_0) - w(x), \qquad u_1 = u_0 - \phi_1,\\
&B\,\phi_2 = A u_1 + f(u_1) - w(x),\\
&B\,\phi_3 = A \phi_2 + f'(u_1)\odot\phi_2,\\
&B\,\phi_4 = A \phi_3 + f'(u_1)\odot\phi_3,\\
&B\,\phi_5 = f''(u_1)\odot\phi_2\odot\phi_2,\\
&B\,\phi_6 = f''(u_1)\odot\phi_2\odot\phi_3,\\
&u_2 = u_1 + \beta_1\phi_2 + \beta_2\phi_3 + \beta_3\phi_4 + \beta_4\phi_5 + \beta_5\phi_6,
\end{aligned}\tag{27}
$$
where ⊙ is element-wise multiplication between two vectors.
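The sketch below performs one iteration of Equation (27) on a small weakly nonlinear test system. As a stand-in for the Chebyshev collocation matrices used later in the paper, A is a standard second-order finite-difference discretization of u_xx on (0, 1) with homogeneous Dirichlet data, f(u) = u^2, and the source term w is manufactured from u(x) = sin(πx); all of these modelling choices are assumptions of the example, not taken from the paper.

```python
# One iteration of scheme (27) on a weakly nonlinear system A u + f(u) - w = 0.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

n = 50
x = np.linspace(0, 1, n + 2)[1:-1]             # interior grid points
h = x[1] - x[0]
A = (np.diag(-2*np.ones(n)) + np.diag(np.ones(n-1), 1) + np.diag(np.ones(n-1), -1)) / h**2

f   = lambda u: u**2                           # weak nonlinearity f(u) = u^2
fp  = lambda u: 2*u                            # f'
fpp = lambda u: 2*np.ones_like(u)              # f''

u_exact = np.sin(np.pi * x)
w = A @ u_exact + f(u_exact)                   # manufactured source: u_exact solves the discrete system

beta = [-3.0, 3.0, -1.0, -4.0, 3.5]            # beta_1..beta_5 from Section 3.3
u0 = np.zeros(n)

B_lu = lu_factor(A + np.diag(fp(u0)))          # LU factors of the frozen Jacobian B
phi1 = lu_solve(B_lu, A @ u0 + f(u0) - w)
u1 = u0 - phi1
phi2 = lu_solve(B_lu, A @ u1 + f(u1) - w)
phi3 = lu_solve(B_lu, A @ phi2 + fp(u1) * phi2)
phi4 = lu_solve(B_lu, A @ phi3 + fp(u1) * phi3)
phi5 = lu_solve(B_lu, fpp(u1) * phi2**2)
phi6 = lu_solve(B_lu, fpp(u1) * phi2 * phi3)
u2 = u1 + beta[0]*phi2 + beta[1]*phi3 + beta[2]*phi4 + beta[3]*phi5 + beta[4]*phi6

print(np.max(np.abs(u2 - u_exact)))            # absolute error after one iteration
```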

5. Computational Cost of Equation (27)

The iterative method Equation (27) uses five function evaluations, one LU factorization, the solution of six pairs of lower and upper triangular systems, four matrix-vector multiplications, six vector-vector multiplications and three scalar-vector multiplications. For weakly nonlinear systems Equation (25), the Fréchet derivatives are diagonal matrices, which can be stored as vectors so that their action reduces to element-wise vector-vector multiplications. In conclusion, the number of scalar function evaluations required for the nonlinear function in Equation (25) and for each of its Fréchet derivatives is the same.

6. Convergence Analysis

We provide the convergence proof only for the iterative method Equation (23); the convergence proofs of the other iterative methods are similar. To perform the convergence analysis, we use the symbolic algebraic tools of Maple® 2015.
Theorem 1. 
Let F: Γ ⊆ ℝ^n → ℝ^n be a sufficiently Fréchet differentiable function on an open convex neighborhood Γ of u_* ∈ ℝ^n with F(u_*) = 0 and det(F'(u_*)) ≠ 0, where F'(u) denotes the Fréchet derivative of F(u). Let D_1 = F'(u_*) and D_s = (1/s!) F'(u_*)^{-1} F^{(s)}(u_*), for s ≥ 2, where F^{(s)}(u) denotes the s-th order Fréchet derivative of F(u). Then, the iterative method Equation (23), with an initial guess in the neighborhood of u_*, converges to u_* with local order of convergence at least six and error:
$$ e_2 = L\, e_0^6 + O\big(e_0^7\big) $$
where e_0 = u_0 − u_*, e_0^p = (e_0, e_0, …, e_0) (p times) and L = 2D_2^2D_3D_2 − 9D_2D_3D_2^2 + 14D_3D_2^3 + 16D_2^6 − 88D_2^3 + 114D_2^5 is a six-linear function, i.e., L ∈ L(ℝ^n, ℝ^n, …, ℝ^n) (six times), with L e_0^6 ∈ ℝ^n.
Proof. 
Let F: Γ ⊆ ℝ^n → ℝ^n be a sufficiently Fréchet differentiable function in Γ. The p-th Fréchet derivative of F at v ∈ ℝ^n, p ≥ 1, is a p-linear function F^{(p)}(v): ℝ^n × ⋯ × ℝ^n → ℝ^n (p times) with F^{(p)}(v)(h_1, h_2, …, h_p) ∈ ℝ^n. The Taylor series expansion of F(u_0) around u_* is:
$$
\begin{aligned}
F(u_0) &= F\big(u_* + (u_0 - u_*)\big) = F(u_* + e_0)\\
&= F(u_*) + F'(u_*)\,e_0 + \tfrac{1}{2!}F''(u_*)\,e_0^2 + \tfrac{1}{3!}F^{(3)}(u_*)\,e_0^3 + \tfrac{1}{4!}F^{(4)}(u_*)\,e_0^4 + \cdots\\
&= F'(u_*)\Big(e_0 + \tfrac{1}{2!}F'(u_*)^{-1}F''(u_*)\,e_0^2 + \tfrac{1}{3!}F'(u_*)^{-1}F^{(3)}(u_*)\,e_0^3 + \tfrac{1}{4!}F'(u_*)^{-1}F^{(4)}(u_*)\,e_0^4 + \cdots\Big)\\
&= D_1\big(e_0 + D_2 e_0^2 + D_3 e_0^3 + D_4 e_0^4\big) + O\big(e_0^5\big)
\end{aligned}
$$
where e_k = u_k − u_*. Computing the Fréchet derivative with respect to e_0, we deduce:
$$ F'(u_0) = D_1\big(I + 2D_2 e_0 + 3D_3 e_0^2 + 4D_4 e_0^3\big) + O\big(e_0^4\big) $$
where I is the identity matrix. Computing its inverse using the symbolic mathematical package Maple, we obtain:
$$
\begin{aligned}
F'(u_0)^{-1} = \Big(&I - 2D_2 e_0 + \big(4D_2^2 - 3D_3\big)e_0^2 + \big(6D_3D_2 + 6D_2D_3 - 8D_2^3 - 4D_4\big)e_0^3\\
&+ \big(8D_4D_2 + 9D_3^2 + 8D_2D_4 - 5D_5 - 12D_3D_2^2 - 12D_2D_3D_2 - 12D_2^2D_3 + 16D_2^4\big)e_0^4 + O\big(e_0^5\big)\Big)\,D_1^{-1}
\end{aligned}
$$
By using Maple, we find the following expressions:
$$
\begin{aligned}
\phi_1 &= e_0 - D_2 e_0^2 + \big(2D_2^2 - 2D_3\big)e_0^3 + \big({-4D_2^3} - 3D_4 + 4D_2D_3 + 3D_3D_2\big)e_0^4\\
&\quad + \big({-6D_3D_2^2} - 6D_2D_3D_2 + 8D_2^4 - 8D_2^2D_3 - 4D_5 + 6D_2D_4 + 6D_3^2 + 4D_4D_2\big)e_0^5\\
&\quad + \big(12D_2^2D_3D_2 + 12D_2D_3D_2^2 + 12D_3D_2^3 - 8D_2D_4D_2 - 9D_3^2D_2 - 8D_4D_2^2 - 12D_2D_3^2 - 12D_3D_2D_3 - 12D_2^2D_4\\
&\qquad + 16D_2^3D_3 - 16D_2^5 + D_6 + 8D_2D_5 + 9D_3D_4 + 8D_4D_3 + 5D_5D_2\big)e_0^6 + O\big(e_0^7\big)
\end{aligned}
$$
$$
\begin{aligned}
e_1 &= D_2 e_0^2 + \big({-2D_2^2} + 2D_3\big)e_0^3 + \big(4D_2^3 + 3D_4 - 4D_2D_3 - 3D_3D_2\big)e_0^4\\
&\quad + \big(6D_3D_2^2 + 6D_2D_3D_2 - 8D_2^4 + 8D_2^2D_3 + 4D_5 - 6D_2D_4 - 6D_3^2 - 4D_4D_2\big)e_0^5\\
&\quad + \big({-12D_2^2D_3D_2} - 12D_2D_3D_2^2 - 12D_3D_2^3 + 8D_2D_4D_2 + 9D_3^2D_2 + 8D_4D_2^2 + 12D_2D_3^2 + 12D_3D_2D_3 + 12D_2^2D_4\\
&\qquad - 16D_2^3D_3 + 16D_2^5 - D_6 - 8D_2D_5 - 9D_3D_4 - 8D_4D_3 - 5D_5D_2\big)e_0^6 + O\big(e_0^7\big)
\end{aligned}
$$
$$
\begin{aligned}
F(u_1) &= D_1D_2 e_0^2 + \big(2D_1D_3 - 2D_1D_2^2\big)e_0^3 + \big(5D_1D_2^3 + 3D_1D_4 - 4D_1D_2D_3 - 3D_1D_3D_2\big)e_0^4\\
&\quad + \big(6D_1D_3D_2^2 + 8D_1D_2D_3D_2 - 12D_1D_2^4 + 10D_1D_2^2D_3 + 4D_1D_5 - 6D_1D_2D_4 - 6D_1D_3^2 - 4D_1D_4D_2\big)e_0^5\\
&\quad + \big(28D_1D_2^5 + 11D_1D_2D_4D_2 - 19D_1D_2^2D_3D_2 - 19D_1D_2D_3D_2^2 + 16D_1D_2D_3^2 - 24D_1D_2^3D_3 + 15D_1D_2^2D_4\\
&\qquad - 8D_1D_2D_5 - 9D_1D_3D_4 - 8D_1D_4D_3 - 5D_1D_5D_2 + 8D_1D_4D_2^2 + 9D_1D_3^2D_2 + 12D_1D_3D_2D_3 - 11D_1D_3D_2^3 - D_1D_6\big)e_0^6 + O\big(e_0^7\big)
\end{aligned}
$$
$$
\begin{aligned}
\phi_2 &= D_2 e_0^2 + \big({-4D_2^2} + 2D_3\big)e_0^3 + \big({-6D_3D_2} + 13D_2^3 - 8D_2D_3 + 3D_4\big)e_0^4\\
&\quad + \big({-38D_2^4} - 8D_4D_2 + 20D_2D_3D_2 + 18D_3D_2^2 - 12D_3^2 + 26D_2^2D_3 - 12D_2D_4 + 4D_5\big)e_0^5\\
&\quad + \big({-59D_2^2D_3D_2} - 55D_2D_3D_2^2 - 50D_3D_2^3 + 27D_2D_4D_2 + 27D_3^2D_2 + 24D_4D_2^2 + 40D_2D_3^2 + 36D_3D_2D_3 + 39D_2^2D_4\\
&\qquad - 76D_2^3D_3 + 88D_2^3 + 16D_2^5 - D_6 - 16D_2D_5 - 18D_3D_4 - 16D_4D_3 - 10D_5D_2\big)e_0^6 + O\big(e_0^7\big)
\end{aligned}
$$
$$
\begin{aligned}
\phi_3 &= D_2 e_0^2 + \big({-6D_2^2} + 2D_3\big)e_0^3 + \big({-9D_3D_2} + 27D_2^3 - 12D_2D_3 + 3D_4\big)e_0^4\\
&\quad + \big(42D_2D_3D_2 - 104D_2^4 - 18D_3^2 + 36D_3D_2^2 + 54D_2^2D_3 - 12D_4D_2 - 18D_2D_4 + 4D_5\big)e_0^5\\
&\quad + \big({-163D_2^2D_3D_2} - 149D_2D_3D_2^2 - 128D_3D_2^3 + 57D_2D_4D_2 + 54D_3^2D_2 + 48D_4D_2^2 + 84D_2D_3^2 + 72D_3D_2D_3 + 16D_2^6\\
&\qquad + 81D_2^2D_4 - 208D_2^3D_3 + 88D_2^3 + 258D_2^5 - D_6 - 24D_2D_5 - 27D_3D_4 - 24D_4D_3 - 15D_5D_2\big)e_0^6 + O\big(e_0^7\big)
\end{aligned}
$$
$$
\begin{aligned}
\phi_4 &= D_2 e_0^2 + \big({-8D_2^2} + 2D_3\big)e_0^3 + \big(3D_4 + 45D_2^3 - 16D_2D_3 - 12D_3D_2\big)e_0^4\\
&\quad + \big({-210D_2^4} + 90D_2^2D_3 - 24D_2D_4 - 24D_3^2 - 16D_4D_2 + 60D_3D_2^2 + 70D_2D_3D_2 + 4D_5\big)e_0^5\\
&\quad + \big({-D_6} - 329D_2^2D_3D_2 - 299D_2D_3D_2^2 - 260D_3D_2^3 + 95D_2D_4D_2 + 90D_3^2D_2 + 80D_4D_2^2 + 140D_2D_3^2 + 120D_3D_2D_3\\
&\qquad + 135D_2^2D_4 - 420D_2^3D_3 + 748D_2^5 - 32D_2D_5 - 36D_3D_4 - 32D_4D_3 - 20D_5D_2 + 88D_2^3 + 32D_2^6\big)e_0^6 + O\big(e_0^7\big)
\end{aligned}
$$
$$
\phi_5 = 2D_2^3 e_0^4 + \big(4D_2D_3D_2 - 20D_2^4 + 4D_2^2D_3\big)e_0^5 + \big(124D_2^5 + 6D_2D_4D_2 - 36D_2^2D_3D_2 - 28D_2D_3D_2^2 + 8D_2D_3^2 - 40D_2^3D_3 + 6D_2^2D_4\big)e_0^6 + O\big(e_0^7\big)
$$
$$
\phi_6 = 2D_2^3 e_0^4 + \big(4D_2D_3D_2 - 24D_2^4 + 4D_2^2D_3\big)e_0^5 + \big(8D_2D_3^2 - 36D_2D_3D_2^2 - 48D_2^3D_3 + 176D_2^5 - 42D_2^2D_3D_2 + 6D_2D_4D_2 + 6D_2^2D_4\big)e_0^6 + O\big(e_0^7\big)
$$
By using all of the values of the φ's, we get the following error equation:
$$ e_2 = \big(2D_2^2D_3D_2 - 9D_2D_3D_2^2 + 14D_3D_2^3 + 16D_2^6 - 88D_2^3 + 114D_2^5\big)\,e_0^6 + O\big(e_0^7\big) \tag{30} $$
The error Equation (30) directly implies that the local order of convergence of the iterative method Equation (23) is at least six. ☐
It is beneficial to construct the multi-step version of the iterative method Equation (27), because each further step increases the convergence order attained at the previous step by two, at the cost of a single function evaluation plus the solution of two lower and upper triangular systems. The multi-step version of Equation (27) can be written as:
$$
\begin{aligned}
&u_0 = \text{initial guess}, \qquad B = A + \mathrm{diag}\big(f'(u_0)\big),\\
&B\,\phi_1 = A u_0 + f(u_0) - w(x), \qquad u_1 = u_0 - \phi_1,\\
&B\,\phi_2 = A u_1 + f(u_1) - w(x),\\
&B\,\phi_3 = A \phi_2 + f'(u_1)\odot\phi_2,\\
&B\,\phi_4 = A \phi_3 + f'(u_1)\odot\phi_3,\\
&B\,\phi_5 = f''(u_1)\odot\phi_2\odot\phi_2,\\
&B\,\phi_6 = f''(u_1)\odot\phi_2\odot\phi_3,\\
&u_2 = u_1 + \beta_1\phi_2 + \beta_2\phi_3 + \beta_3\phi_4 + \beta_4\phi_5 + \beta_5\phi_6,\\
&\text{for } i = 3, \ldots, m:\\
&\qquad B\,\psi_1 = A u_{i-1} + f(u_{i-1}) - w(x),\\
&\qquad B\,\psi_2 = f''(u_1)\odot\phi_1\odot\psi_1,\\
&\qquad u_i = u_{i-1} - \psi_1 - \psi_2,\\
&\text{end}
\end{aligned}\tag{31}
$$
We claim that the convergence order of the m-step iterative method Equation (31) is 2 ( m + 1 ) .
Theorem 2. 
Let F: Γ ⊆ ℝ^n → ℝ^n be a sufficiently Fréchet differentiable function on an open convex neighborhood Γ of u_* ∈ ℝ^n with F(u_*) = 0 and det(F'(u_*)) ≠ 0, where F'(u) denotes the Fréchet derivative of F(u). Let D_1 = F'(u_*) and D_s = (1/s!) F'(u_*)^{-1} F^{(s)}(u_*), for s ≥ 2, where F^{(s)}(u) denotes the s-th order Fréchet derivative of F(u). Then, the iterative method Equation (31), with an initial guess in the neighborhood of u_*, converges to u_* with local order of convergence at least 2(m + 1) and error:
$$ e_m = L\, e_0^{2(m+1)} + O\big(e_0^{2(m+1)+1}\big) $$
where e_0 = u_0 − u_*, e_0^p = (e_0, e_0, …, e_0) (p times) and L = (3/2 D_2^2 + D_3)^{m−2}(42D_2^5 + 2D_2^2D_3D_2 − 9D_2D_3D_2^2 + 14D_3D_2^3) is a 2(m + 1)-linear function, i.e., L ∈ L(ℝ^n, ℝ^n, …, ℝ^n) (2(m + 1) times), with L e_0^{2(m+1)} ∈ ℝ^n.
Proof. 
We prove the convergence order via mathematical induction. The base case m = 2 corresponds to the iterative method Equation (27), i.e., Equation (23) written for weakly nonlinear systems, whose sixth-order convergence is established by Theorem 1, and 2(m + 1) = 6 for m = 2. Suppose now that the claimed order of convergence holds for m = s, i.e., e_s = O(e_0^{2(s+1)}). For the inductive step, we proceed as follows:
$$
\begin{aligned}
u_s - u_* &= e_s, \qquad F(u_s) \approx D_1 e_s, \qquad B^{-1} \approx (I - 2D_2 e_0)\,D_1^{-1},\\
\psi_1 &\approx (I - 2D_2 e_0)\,e_s = e_s - 2D_2 e_0 e_s,\\
B^{-1}F''(u_1)\,\phi_1 &\approx 2D_2 e_0 - 6D_2^2 e_0^2,\\
\psi_2 &\approx \big(2D_2 e_0 - 6D_2^2 e_0^2\big)\big(I - 2D_2 e_0\big)\,e_s \approx 2D_2 e_0 e_s - 10D_2^2 e_0^2 e_s,\\
u_{s+1} - u_* &= u_s - u_* - \psi_1 - \psi_2 \approx e_s - e_s + 2D_2 e_0 e_s - 2D_2 e_0 e_s + 10D_2^2 e_0^2 e_s,\\
u_{s+1} - u_* &= O\big(e_0^2\, e_s\big), \qquad e_{s+1} = O\big(e_0^{2(s+2)}\big),
\end{aligned}
$$
which completes the induction. ☐
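Before turning to the numerical experiments, here is a compact sketch of the multi-step scheme (31) for a system A u + f(u) − w = 0: the frozen Jacobian is factorized once, and its LU factors, together with φ_1, are reused in every additional step. The packaging as a Python function and the argument names are assumptions about usage; A, f, f', f'' and w can be supplied, for example, as in the earlier finite-difference sketch.

```python
# Sketch of the m-step scheme (31): one LU factorization, reused in all steps.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def multistep_31(A, f, fp, fpp, w, u0, m, beta=(-3.0, 3.0, -1.0, -4.0, 3.5)):
    B_lu = lu_factor(A + np.diag(fp(u0)))              # frozen Jacobian B = A + diag(f'(u0))
    phi1 = lu_solve(B_lu, A @ u0 + f(u0) - w)
    u1 = u0 - phi1
    phi2 = lu_solve(B_lu, A @ u1 + f(u1) - w)
    phi3 = lu_solve(B_lu, A @ phi2 + fp(u1) * phi2)
    phi4 = lu_solve(B_lu, A @ phi3 + fp(u1) * phi3)
    phi5 = lu_solve(B_lu, fpp(u1) * phi2**2)
    phi6 = lu_solve(B_lu, fpp(u1) * phi2 * phi3)
    u = u1 + beta[0]*phi2 + beta[1]*phi3 + beta[2]*phi4 + beta[3]*phi5 + beta[4]*phi6
    for _ in range(3, m + 1):                          # multi-step part, steps i = 3..m
        psi1 = lu_solve(B_lu, A @ u + f(u) - w)
        psi2 = lu_solve(B_lu, fpp(u1) * phi1 * psi1)
        u = u - psi1 - psi2
    return u
```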

7. Numerical Testing

To show the effectiveness of the proposed multi-step iterative method, we solve a carefully chosen set of ODEs and PDEs. It is also important to verify the claimed order of convergence numerically, so we adopt the following definition of the computational order of convergence (COC):
$$ \mathrm{COC} = \frac{\log\big(\lVert F(x_{k+1})\rVert / \lVert F(x_k)\rVert\big)}{\log\big(\lVert F(x_k)\rVert / \lVert F(x_{k-1})\rVert\big)} $$
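A small helper implementing this COC formula from three consecutive residual norms; the function name, interface and the illustrative numbers are assumptions.

```python
# Computational order of convergence from ||F(x_{k-1})||, ||F(x_k)||, ||F(x_{k+1})||.
import numpy as np

def coc(r_prev, r_curr, r_next):
    return np.log(r_next / r_curr) / np.log(r_curr / r_prev)

# Example with residuals from a cubically convergent run (illustrative numbers):
print(coc(1e-2, 1e-6, 1e-18))   # approximately 3
```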
Consider a system of four nonlinear equations:
$$
\begin{aligned}
F_1(x) &= x_2 x_3 + x_4(x_2 + x_3) = 0,\\
F_2(x) &= x_1 x_3 + x_4(x_1 + x_3) = 0,\\
F_3(x) &= x_1 x_2 + x_4(x_1 + x_2) = 0,\\
F_4(x) &= x_1 x_2 + x_3(x_1 + x_2) - 1 = 0.
\end{aligned}\tag{33}
$$
Let w = [w_1, w_2, w_3, w_4]^T be a constant vector; then the Jacobian F'(x) and the action of the second-order Fréchet derivative, F''(x)w, can be written as:
$$
F'(x) = \begin{bmatrix}
0 & x_3 + x_4 & x_2 + x_4 & x_2 + x_3\\
x_3 + x_4 & 0 & x_1 + x_4 & x_1 + x_3\\
x_2 + x_4 & x_1 + x_4 & 0 & x_1 + x_2\\
x_2 + x_3 & x_1 + x_3 & x_1 + x_2 & 0
\end{bmatrix}, \qquad
F''(x)\,w = \begin{bmatrix}
0 & w_3 + w_4 & w_2 + w_4 & w_2 + w_3\\
w_3 + w_4 & 0 & w_1 + w_4 & w_1 + w_3\\
w_2 + w_4 & w_1 + w_4 & 0 & w_1 + w_2\\
w_2 + w_3 & w_1 + w_3 & w_1 + w_2 & 0
\end{bmatrix}\tag{34}
$$
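The sketch below encodes this test problem: F from (33), the Jacobian from (34), and the action of the constant second Fréchet derivative, which follows the same pattern as the Jacobian with x replaced by w. For illustration it runs a few iterations of scheme (23) from the starting guess x_k = 1.5 used in Table 1; this particular driver loop is our own illustration, not the paper's exact experimental setup.

```python
# Test problem (33) with its Jacobian (34) and second-derivative action, driven by scheme (23).
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def F(x):
    x1, x2, x3, x4 = x
    return np.array([x2*x3 + x4*(x2 + x3),
                     x1*x3 + x4*(x1 + x3),
                     x1*x2 + x4*(x1 + x2),
                     x1*x2 + x3*(x1 + x2) - 1.0])

def J(x):
    x1, x2, x3, x4 = x
    return np.array([[0,       x3 + x4, x2 + x4, x2 + x3],
                     [x3 + x4, 0,       x1 + x4, x1 + x3],
                     [x2 + x4, x1 + x4, 0,       x1 + x2],
                     [x2 + x3, x1 + x3, x1 + x2, 0      ]])

def d2F(v, w):                 # F''(x)(v, w); constant because F is quadratic
    return J(w) @ v            # per (34), F''(x)w has the same pattern as F'(x) with x -> w

beta = [-3.0, 3.0, -1.0, -4.0, 3.5]
u0 = 1.5 * np.ones(4)
for _ in range(4):
    lu = lu_factor(J(u0))                      # frozen Jacobian
    u1 = u0 - lu_solve(lu, F(u0))
    J1 = J(u1)
    phi2 = lu_solve(lu, F(u1))
    phi3 = lu_solve(lu, J1 @ phi2)
    phi4 = lu_solve(lu, J1 @ phi3)
    phi5 = lu_solve(lu, d2F(phi2, phi2))
    phi6 = lu_solve(lu, d2F(phi2, phi3))
    u0 = u1 + beta[0]*phi2 + beta[1]*phi3 + beta[2]*phi4 + beta[3]*phi5 + beta[4]*phi6
    print(np.linalg.norm(F(u0)))               # residual history
```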
In Table 1, we report the COCs of the different iterative methods; the numerical results confirm the theoretically-claimed convergence orders. The last two columns of Table 1 coincide because the structure of the iterative methods Equations (21) and (22) is similar, and the iterative method Equation (22) has an extra parameter that does not enhance the convergence order. Table 2 shows the COC for different values of m for the iterative method Equation (31), and the COC satisfies the claimed formula 2(m + 1). The numerical computations in Table 1 and Table 2 are performed in Mathematica® 10 by setting $MinPrecision = 82,000 and $MinPrecision = 7000, respectively.

7.1. 3D Nonlinear Poisson Problem

We consider the following nonlinear Poisson–Dirichlet boundary value problem:
$$ u_{xx} + u_{yy} + u_{zz} + f(u) = p(x, y, z), \qquad (x, y, z) \in (0, 1)^3 \tag{35} $$
where f ( · ) is a nonlinear function of u ( x , y , z ) and p ( x , y , z ) is the source term. The discretization of Equation (35) is performed by using the Chebyshev pseudo-spectral collocation method [23,24,25,26,27]. The discretized form of Equation (35) is:
$$
\begin{aligned}
F(U) &= \big(T_{xx}\otimes I_y\otimes I_z + I_x\otimes T_{yy}\otimes I_z + I_x\otimes I_y\otimes T_{zz}\big)\,U + f(U) - p = 0,\\
F'(U) &= T_{xx}\otimes I_y\otimes I_z + I_x\otimes T_{yy}\otimes I_z + I_x\otimes I_y\otimes T_{zz} + \mathrm{diag}\big(f'(U)\big),\\
F''(U) &= \mathrm{diag}\big(f''(U)\big),
\end{aligned}\tag{36}
$$
where T_{··} is the Chebyshev pseudo-spectral collocation discretization of the second-order derivative, I is the identity matrix and ⊗ is the Kronecker product. We assume the solution of Equation (35) is u = sin(x + y + z) and take f(u) = u^2. In Table 3, we show the absolute error in the solution of Equation (36). In all cases, we perform a single iteration and obtain the largest reduction in the absolute error when m = 5 and the grid size is 12 × 12 × 12. In one iteration, we perform the LU factorization only once and then solve lower and upper triangular systems for the linear subproblems, which makes the procedure computationally inexpensive.
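The Kronecker-sum assembly in (36) can be sketched as follows. A uniform-grid finite-difference second derivative with homogeneous Dirichlet data is used here as a stand-in for the Chebyshev pseudo-spectral matrices T_xx, T_yy, T_zz of the paper, and the source term is manufactured so that U = sin(x + y + z) solves the discrete system exactly; the discretization choice and the helper names are assumptions of this sketch.

```python
# Assembly of the discrete operator of (36) via Kronecker sums.
import numpy as np

def second_derivative(n, a=0.0, b=1.0):
    x = np.linspace(a, b, n + 2)[1:-1]                 # interior grid points
    h = x[1] - x[0]
    T = (np.diag(-2*np.ones(n)) + np.diag(np.ones(n-1), 1) + np.diag(np.ones(n-1), -1)) / h**2
    return x, T

n = 8
x, T = second_derivative(n)
I = np.eye(n)
A = np.kron(np.kron(T, I), I) + np.kron(np.kron(I, T), I) + np.kron(np.kron(I, I), T)

X, Y, Z = np.meshgrid(x, x, x, indexing='ij')
U_exact = np.sin(X + Y + Z).ravel()
f  = lambda U: U**2
fp = lambda U: 2*U
p = A @ U_exact + f(U_exact)                           # manufactured source term

def F(U):  return A @ U + f(U) - p                     # F(U) as in (36)
def Fp(U): return A + np.diag(fp(U))                   # F'(U) as in (36)

U1 = np.zeros(n**3) - np.linalg.solve(Fp(np.zeros(n**3)), F(np.zeros(n**3)))  # one Newton step
print(np.max(np.abs(U1 - U_exact)))
```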

7.2. 2D Nonlinear Wave Equation

The following 2D nonlinear wave equation is studied.
$$ u_{tt} - c^2\big(u_{xx} + u_{yy}\big) + f(u) = p(x, y), \qquad (x, y, t) \in (-1, 1)^2 \times (0, 2] \tag{37} $$
where f(u) = u^s is a nonlinear function and c, s are constants. The source term p(x, y) is computed by assuming the solution u = exp(−t) sin(x + y). Dirichlet boundary conditions are imposed. We use the Chebyshev pseudo-spectral collocation method to discretize Equation (37) and obtain the following system of nonlinear equations:
$$
\begin{aligned}
F(U) &= \big(T_{tt}\otimes I_x\otimes I_y - c^2\,(I_t\otimes T_{xx}\otimes I_y + I_t\otimes I_x\otimes T_{yy})\big)\,U + f(U) - p = 0,\\
F'(U) &= T_{tt}\otimes I_x\otimes I_y - c^2\,(I_t\otimes T_{xx}\otimes I_y + I_t\otimes I_x\otimes T_{yy}) + \mathrm{diag}\big(f'(U)\big),\\
F''(U) &= \mathrm{diag}\big(f''(U)\big),
\end{aligned}\tag{38}
$$
Table 4 shows that the number of steps needed to obtain a given reduction in the absolute error in the solution of Equation (38) depends on the grid size used in the spectral approximation.
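The space-time operator of (38) has the same Kronecker structure, with a time block T_tt and the factor −c². A minimal assembly sketch, again with uniform-grid finite-difference blocks as a stand-in for the Chebyshev matrices (c = 1 and f(u) = u^3 as in Table 4; the discretization itself is an assumption of the sketch):

```python
# Assembly of the space-time operator of (38) via Kronecker products.
import numpy as np

def D2(n, a, b):                       # stand-in second-derivative block on (a, b)
    h = (b - a) / (n + 1)
    return (np.diag(-2*np.ones(n)) + np.diag(np.ones(n-1), 1) + np.diag(np.ones(n-1), -1)) / h**2

nt, nx, ny, c = 10, 10, 10, 1.0
Tt, Tx, Ty = D2(nt, 0.0, 2.0), D2(nx, -1.0, 1.0), D2(ny, -1.0, 1.0)
It, Ix, Iy = np.eye(nt), np.eye(nx), np.eye(ny)

# T_tt (x) I_x (x) I_y - c^2 (I_t (x) T_xx (x) I_y + I_t (x) I_x (x) T_yy)
M = np.kron(np.kron(Tt, Ix), Iy) - c**2 * (np.kron(np.kron(It, Tx), Iy) + np.kron(np.kron(It, Ix), Ty))
Fp = lambda U: M + np.diag(3*U**2)     # F'(U) = M + diag(f'(U)) with f(u) = u^3
print(M.shape)
```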

7.3. Verification of the Mesh-Independence Principle

In [28], the authors proved the mesh-independence principle for the classical Newton method applied to systems of nonlinear equations associated with a general class of nonlinear boundary value problems. They also stated conditions on the discretization of the nonlinear boundary value problems. In simple words, suppose we have two discretizations (one finer than the other) of a nonlinear boundary value problem, and suppose that we use the Newton method to solve the two associated systems of nonlinear equations, one for each discretization. Then the magnitudes of the absolute errors for the two discretizations, plotted against the number of Newton iterations, will be almost equal. Table 4 and Table 5 show that the multi-step iterative methods do not obey the mesh-independence principle when we perform a single evaluation of the Jacobian and a single factorization of it. Table 6 shows that the Newton method and the iterative method Equation (27) essentially verify the mesh-independence principle. Notice that the iterative method Equation (31) is the multi-step version of the iterative method Equation (27). Hence, the multi-step iterative methods do not satisfy the mesh-independence property, while the Newton method and the iterative method Equation (27) do have this very nice feature.
Remark. In a single iteration of the multi-step iterative methods, we use the frozen Jacobian at the initial guess, and we employ its LU factors repeatedly in the multi-step part to solve the related lower and upper triangular systems. The idea of the frozen Jacobian is fruitful because the computation of a new Jacobian and of its LU factorization are computationally expensive operations. One may ask whether the applicability of such methods is therefore limited. What we have observed in numerical simulations is the following: when one has to solve a stiff initial or boundary value problem, at the start of the simulation it is advisable to perform more than one iteration with a limited number of multi-steps. In fact, in the case of stiff problems, repeatedly recomputing the Jacobian is crucial for the convergence of the whole iterative process.

8. Conclusions

Homotopy methods provide a way to construct models for the development of higher order iterative methods for solving systems of nonlinear equations. We have shown that the inclusion of parameters in the homotopy can help to enhance the convergence order of the proposed techniques. Usually, the resulting schemes contain higher order Fréchet derivatives that are computationally unsuitable for a general system of nonlinear equations. However, for particular classes of ODEs and PDEs, we have shown that the computational cost of the higher order Fréchet derivatives is not high. The numerical results show the validity and effectiveness of the proposed multi-step iterative methods.

Acknowledgments

The work of Stefano Serra-Capizzano was partially supported by INdAM-GNCS (Gruppo Nazionale per il Calcolo Scientifico) and by Donation KAW 2013.0341 from the Knut & Alice Wallenberg Foundation, in collaboration with the Royal Swedish Academy of Sciences, supporting Swedish research in mathematics. The authors are thankful to the anonymous reviewers for their valuable comments, which improved the scientific content of the article.

Author Contributions

Uswah Qasim proposed the idea and constructed the iterative methods. Zulifqar Ali, Fayyaz Ahmad and Stefano Serra-Capizzano contributed in the convergence analysis and devised setup for numerical simulations. Malik Zaka Ullah and Mir Asma performed the numerical simulations and wrote the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press Limited: London, UK, 1970. [Google Scholar]
  2. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
  3. Burden, R.L.; Faires, J.D. Numerical Analysis; PWS Publishing Company: Boston, MA, USA, 2001. [Google Scholar]
  4. Abbasbandy, S. Improving Newton-Raphson method for nonlinear equations by modified Adomian decomposition method. Appl. Math. Comput. 2003, 145, 887–893. [Google Scholar] [CrossRef]
  5. Abbasbandy, S.; Tan, Y.; Liao, S.J. Newton-homotopy analysis method for nonlinear equations. Appl. Math. Comput. 2007, 188, 1794–1800. [Google Scholar] [CrossRef]
  6. Chun, C. Iterative methods improving Newton’s method by the decomposition method. Comput. Math. Appl. 2005, 50, 1559–1568. [Google Scholar] [CrossRef]
  7. Shah, F.A. Modified Homotopy Perturbation Technique for the Approximate Solution of Nonlinear Equations. Chin. J. Math. 2014, 2014, 787591. Available online: http://0-dx-doi-org.brum.beds.ac.uk/10.1155/2014/787591. [Google Scholar] [CrossRef]
  8. Amat, S.; Busquier, S.; Grau, A.; Sanchez, M.G. Maximum efficiency for a family of Newton-like methods with frozen derivatives and some applications. Appl. Math. Comput. 2013, 219, 7954–7963. [Google Scholar] [CrossRef]
  9. Ullah, M.Z.; Soleymani, F.; Al-Fhaid, A.S. Numerical solution of nonlinear systems by a general class of iterative methods with application to nonlinear PDEs. Numer. Algorithm. 2014, 67, 223–242. [Google Scholar] [CrossRef]
  10. Ahmad, F.; Tohidi, E.; Carrasco, J.A. A parameterized multi-step Newton method for solving systems of nonlinear equations. Numer. Algorithm. 2015, 71, 1017–1398. [Google Scholar] [CrossRef]
  11. Ullah, M.Z.; Serra-Capizzano, S.; Ahmad, F. An efficient multi-step iterative method for computing the numerical solution of systems of nonlinear equations associated with ODEs. Appl. Math. Comput. 2015, 250, 249–259. [Google Scholar] [CrossRef]
  12. Ahmad, F.; Tohidi, E.; Ullah, M.Z.; Carrasco, J.A. Higher order multi-step Jarratt-like method for solving systems of nonlinear equations: Application to PDEs and ODEs. Comput. Math. Appl. 2015, 70, 624–636. Available online: http://0-dx-doi-org.brum.beds.ac.uk/10.1016/j.camwa.2015.05.012. [Google Scholar] [CrossRef] [Green Version]
  13. Poincaré, H. Second complément à l’Analysis Situs. Proc. Lond. Math. Soc. 1900, 32, 277–308. [Google Scholar] [CrossRef]
  14. Leykin, A.; Verschelde, J.; Zhao, A. Newton’s method with deflation for isolated singularities of polynomial systems. Theor. Comp. Sci. 2006, 359, 111–122. [Google Scholar] [CrossRef]
  15. Liao, S.J. The Proposed Homotopy Analysis Technique for the Solution of Nonlinear Problems. Ph.D. Thesis, Shanghai Jiao Tong University, Shanghai, China, 1992. [Google Scholar]
  16. Liao, S.J. On the homotopy analysis method for nonlinear problems. Appl. Math. Comput. 2004, 147, 499–513. [Google Scholar] [CrossRef]
  17. Liao, S.J.; Campo, A. Analytic solutions of the temperature distribution in Blasius viscous flow problems. J. Fluid Mech. 2002, 453, 411–425. [Google Scholar] [CrossRef]
  18. Liao, S.J.; Tan, Y. A general approach to obtain series solutions of nonlinear differential equations. Stud. Appl. Math. 2007, 119, 297–354. [Google Scholar] [CrossRef]
  19. Morgan, A.P.; Sommese, A.J. A homotopy for solving general polynomial systems that respects m-homogeneous structures. Appl. Math. Comput. 1987, 24, 101–113. [Google Scholar] [CrossRef]
  20. Pakdemirli, M.; Boyac, H. Generation of root finding algorithms via perturbation theory and some formulas. Appl. Math. Comput. 2007, 184, 783–788. [Google Scholar] [CrossRef]
  21. Noor, M.A. Some iterative methods for solving nonlinear equations using homotopy perturbation method. Int. J. Comput. Math. 2010, 87, 141–149. [Google Scholar] [CrossRef]
  22. Wu, Y.; Cheung, K.F. Two-parameter homotopy method for nonlinear equations. Numer. Algorithm. 2010, 53, 555–572. [Google Scholar] [CrossRef]
  23. Bhrawy, A.H. An efficient Jacobi pseudospectral approximation for nonlinear complex generalized Zakharov system. Appl. Math. Comput. 2014, 247, 30–46. [Google Scholar] [CrossRef]
  24. Dehghan, M.; Izadi, F.F. The spectral collocation method with three different bases for solving a nonlinear partial differential equation arising in modeling of nonlinear waves. Math. Comput. Model. 2011, 53, 1865–1877. Available online: http://0-dx-doi-org.brum.beds.ac.uk/10.1016/j.mcm.2011.01.011. [Google Scholar] [CrossRef]
  25. Doha, E.H.; Bhrawy, A.H.; Ezz-Eldien, S.S. Efficient Chebyshev spectral methods for solving multi-term fractional orders differential equations. Appl. Math. Comput. 2011, 35, 5662–5672. [Google Scholar] [CrossRef]
  26. Doha, E.H.; Bhrawy, A.H.; Hafez, R.M. On shifted Jacobi spectral method for high-order multi-point boundary value problems. Commun. Nonlinear Sci. Numer. Simul. 2010, 17, 3802–3810. Available online: http://0-dx-doi-org.brum.beds.ac.uk/10.1016/j.cnsns.2012.02.027. [Google Scholar] [CrossRef]
  27. Tohidi, E.; Noghabi, S.L. An efficient Legendre Pseudospectral Method for Solving Nonlinear Quasi Bang-Bang Optimal Control Problems. J. Appl. Math. Stat. Inform. 2012, 8, 73–85. [Google Scholar] [CrossRef]
  28. Allgower, E.L.; Bohmer, K.; Potra, F.A.; Rheinboldt, W.C. A mesh-independence principle for operator equations and their discretizations. SIAM J. Numer. Anal. 1986, 23, 160–169. [Google Scholar] [CrossRef]
Table 1. Verification of the convergence order of different iterative methods for the problem Equation (33); the initial guess is x_k = 15/10. COC, computational order of convergence.
Entries are ‖F(x_k)‖.
Iterations \ Methods | (19) | (20) | (21) | (22)
1 | 8.88e-01 | 5.80e-01 | 4.12e-01 | 4.12e-01
2 | 3.57e-02 | 2.48e-03 | 9.94e-05 | 9.94e-05
3 | 1.33e-06 | 6.41e-14 | 5.51e-25 | 5.51e-25
4 | 7.985e-21 | 4.48e-58 | 4.63e-129 | 4.63e-129
5 | 1.91e-64 | 1.67e-236 | 3.09e-652 | 3.09e-652
6 | 2.90e-196 | 4.99e-952 | 6.59e-3271 | 6.59e-3271
7 | 1.13e-592 | 6.26e-3816 | 4.63e-16,367 | 4.63e-16,367
8 | 7.53e-1783 | 2.43e-15,273 | 1.27e-81,850 | 1.27e-81,850
COC | 3.00 | 4.00 | 5.00 | 5.00
Table 2. Multi-step iterative method Equation (31): verification of convergence order for the problem Equation (33).
Entries are ‖F(x_k)‖.
Iterations \ Steps | m = 2 | m = 3 | m = 4 | m = 5
1 | 6.65e-02 | 1.66e-02 | 4.69e-03 | 1.28e-03
2 | 6.10e-12 | 5.46e-21 | 1.24e-32 | 7.41e-47
3 | 1.42e-75 | 3.71e-175 | 2.15e-337 | 1.38e-577
4 | 2.85e-461 | 8.08e-1415 | 5.24e-3394 | 3.16e-6958
COC | 6.06 | 8.04 | 10.0 | 12.0
Table 3. Multi-step iterative method Equation (31): absolute error in the solution of Equation (36) versus different grid sizes. Number of iterations = 1; initial guess U = 0.
Entries are ‖F(U_k)‖.
m \ N | 8 × 8 × 8 | 10 × 10 × 10 | 12 × 12 × 12
2 | 6.70e-07 | 7.51e-07 | 9.07e-07
3 | 1.54e-09 | 1.42e-09 | 1.47e-09
4 | 5.62e-10 | 1.09e-11 | 1.12e-11
5 | 5.62e-10 | 3.20e-13 | 7.45e-14
Table 4. Multi-step iterative method Equation (31): absolute error in the solution of Equation (38) versus different grid sizes. Number of iterations = 1, s = 3, c = 1; initial guess U = 0.
Entries are ‖F(U_k)‖.
m \ N | 10 × 10 × 10 | 20 × 20 × 20 | 20 × 20 × 30
2 | 2.82e-04 | 8.34e-04 | 5.46e-05
3 | 2.81e-05 | 3.42e-05 | 2.99e-05
4 | 8.32e-06 | 5.99e-06 | 1.66e-05
5 | 8.23e-07 | 2.02e-06 | 9.22e-06
6 | - | 8.71e-07 | 5.12e-06
9 | - | 5.53e-08 | 8.77e-07
12 | - | 3.45e-09 | 1.50e-07
13 | - | 1.37e-09 | 8.35e-08
14 | - | 5.40e-10 | 4.64e-08
16 | - | 8.75e-11 | 1.43e-08
17 | - | - | 7.94e-09
21 | - | - | 7.55e-10
25 | - | - | 7.19e-11
Table 5. Multi-step Newton method: absolute error in the solution of Equation (38) versus different grid sizes. Number of iterations = 1, s = 3, c = 1; initial guess U = 0.
Entries are ‖F(U_k)‖.
m \ N | 10 × 10 × 10 | 20 × 20 × 20 | 20 × 20 × 30
2 | 6.14e-03 | 2.13e-02 | 3.69e-03
3 | 1.23e-03 | 1.79e-03 | 1.70e-04
4 | 2.11e-04 | 7.85e-04 | 6.29e-05
5 | 4.68e-05 | 4.41e-04 | 1.02e-05
6 | 2.09e-06 | 2.15e-04 | 5.12e-06
9 | 2.60e-07 | 2.93e-05 | 7.04e-07
12 | - | 3.90e-06 | 4.91e-08
13 | - | 1.99e-06 | 2.03e-08
14 | - | 1.02e-06 | 8.36e-09
16 | - | 2.66e-07 | 1.42e-09
17 | - | 1.36e-07 | 5.88e-10
21 | - | 9.28e-09 | 2.14e-11
25 | - | 6.36e-10 | 2.13e-11
28 | - | 8.77e-11 | 2.12e-11
Table 6. Newton method and iterative method Equation (27): absolute error in the solution of Equation (38) versus different grid sizes. Number of iterations = 4; number of steps = 0, s = 3, c = 1; initial guess U = 0.
Entries are ‖F(U_k)‖.
Newton method
Iterations \ N | 10 × 10 × 10 | 15 × 15 × 15 | 20 × 20 × 20
1 | 5.78e-02 | 4.52e-02 | 5.13e-02
2 | 1.66e-03 | 2.70e-04 | 2.24e-04
3 | 7.84e-07 | 2.43e-08 | 2.16e-09
4 | 1.94e-07 | 4.74e-12 | 5.39e-11
Iterations \ N | 30 × 20 × 20 | 20 × 30 × 20 | 20 × 20 × 30
1 | 5.11e-02 | 5.11e-02 | 5.19e-02
2 | 2.24e-04 | 2.24e-04 | 2.23e-04
3 | 2.55e-09 | 2.55e-09 | 2.65e-09
4 | 5.36e-11 | 5.37e-11 | 2.13e-11
Iterative method Equation (27)
Iterations \ N | 10 × 10 × 10 | 15 × 15 × 15 | 20 × 20 × 20
1 | 2.82e-04 | 8.42e-05 | 8.34e-04
2 | 1.94e-07 | 4.74e-12 | 5.39e-11
3 | 1.94e-07 | 4.74e-12 | 5.39e-11
4 | 1.94e-07 | 4.74e-12 | 5.39e-11
Iterations \ N | 30 × 20 × 20 | 20 × 30 × 20 | 20 × 20 × 30
1 | 5.23e-04 | 5.23e-04 | 5.46e-05
2 | 5.37e-11 | 5.37e-11 | 2.14e-11
3 | 5.37e-11 | 5.37e-11 | 2.14e-11
4 | 5.37e-11 | 5.37e-11 | 2.14e-11
