Article

New Iterative Methods for Solving Nonlinear Problems with One and Several Unknowns

by Ramandeep Behl 1, Alicia Cordero 2, Juan R. Torregrosa 2,* and Ali Saleh Alshomrani 1
1 Department of Mathematics, King Abdulaziz University, Jeddah 21589, Saudi Arabia
2 Multidisciplinary Institute of Mathematics, Universitat Politècnica de València, 46022 Valencia, Spain
* Author to whom correspondence should be addressed.
Submission received: 26 October 2018 / Revised: 23 November 2018 / Accepted: 25 November 2018 / Published: 1 December 2018
(This article belongs to the Special Issue Computational Methods in Analysis and Applications)

Abstract: In this manuscript, a new type of study regarding iterative methods for solving nonlinear models is presented. The goal of this work is to design a new optimal fourth-order family of two-step iterative schemes with flexibility, through weight functions or free parameters at both substeps, as well as small residual errors and asymptotic error constants. In addition, we generalize these schemes to nonlinear systems, preserving the order of convergence. Regarding the applicability of the proposed techniques, we choose some real-world problems: the fractional conversion of the nitrogen-hydrogen feed into ammonia, the trajectory of an electron in the air gap between two parallel plates (the multi-factor effect), the fractional conversion of species in a chemical reactor, a Hammerstein integral equation, and a boundary value problem. Moreover, we find that our proposed schemes perform better than, or at least as well as, the existing ones in the literature.

1. Introduction

The role of iterative methods in solving nonlinear problems in many branches of science and engineering has increased dramatically in recent years. One of the most important reasons is the applicability of iterative methods to real-life problems. For example, Shacham, Balaji, and Seader [1,2] described the fraction of the nitrogen-hydrogen feed that gets converted to ammonia (this quantity is called the fractional conversion) in the form of a nonlinear scalar equation. On the other hand, Shacham [3] expressed the fractional conversion of Species A in a chemical reactor also in the form of a scalar equation. In addition, Shacham and Kehat [4] gave several examples of real-life problems that can be modeled by means of real scalar equations whose roots play an important role in the cited problems. Some of them are: the chemical equilibrium calculation problem, the isothermal flash problem, the energy or material balance problem in a chemical reactor, the azeotropic point calculation problem, the adiabatic flame temperature problem, the calculation of gas volume by the Beattie-Bridgeman method, the liquid flow rate in a pipe problem, the pressure drop in the converging-diverging nozzle problem, etc.
On the other hand, Moré [5] proposed a collection of nonlinear model problems, most of them described in terms of nonlinear systems of equations. Further, Grosan and Abraham [6] also discussed the applicability of the iterative methods for solving nonlinear systems in different sciences such as neurophysiology, the kinematics synthesis problem, the chemical equilibrium problem, the combustion problem, and the economics modeling problem. Furthermore, the reactor and steering problems were solved in [7,8] by describing these problems in the form of F ( x ) = 0 . Moreover, Lin et al. [9] also discussed the applicability of the procedures for solving nonlinear systems in transport theory.
The construction of iterative methods for solving nonlinear equations, f ( x ) = 0 , or nonlinear systems, F ( x ) = 0 , with m equations and m unknowns, is an interesting task in the field of numerical analysis. In both cases, there are different ways to develop iterative schemes. Different tools such as quadrature formulae, Adomian polynomials, the divided differences approach, the composition of known methods, the weight function procedure, etc., have been used for designing iterative schemes to solve nonlinear problems. For a good overview on the procedures and techniques, as well as the different schemes developed in the last half century, we refer to some standard text books [10,11,12,13]. Some scalar schemes can be translated, in a natural way, to multivariate methods, whereas for other ones, this translation is not possible or requires special algebraic manipulations.
From the fourth-order methods for scalar equations and systems available in the literature [14,15,16,17,18,19,20,21,22,23,24], it is straightforward to see that they introduce free parameters or weight functions only at the second step in order to obtain new iterative methods. In this paper, we explore the idea of including free parameters or weight functions also in the first step. In this case, is it possible to obtain new optimal fourth-order methods for scalar equations with simple iterative expressions, smaller asymptotic error constants, and smaller residual errors? We then extend this idea to systems of nonlinear equations.
Motivated and inspired by these questions, our main objective in this paper is to highlight the advantages of the new approach over the traditional one in building new optimal iterative methods of fourth order. In addition, our proposed methods not only offer fast convergence, but also have smaller residual errors and asymptotic error constants.
Our proposed schemes only use three functional evaluations per iteration, so they are optimal in the sense of the Kung-Traub conjecture for scalar equations. Then, we extend this family for nonlinear systems, preserving the order of convergence. The efficiency of the proposed schemes is tested on several real-life problems, which allow us to conclude that the new methods perform better than or equal to many other known schemes with the same order.
We organize the rest of the manuscript as follows. Section 2 is devoted to developing the proposed family of optimal iterative schemes, establishing its fourth order of convergence and presenting some special cases that will be used in the numerical section. The extension of the proposed family to nonlinear systems is developed in Section 3, together with the study of its computational efficiency index and the comparison with known fourth-order schemes. The performance of the new methods is analyzed on real-life and academic problems, in one or several variables, in Section 4. We finish this work with some conclusions and the references used in it.

2. Development of Fourth-Order Optimal Schemes

This section is devoted to describing the new family of optimal fourth-order schemes to solve f(x) = 0, where f : D ⊆ ℝ → ℝ is defined in an open interval D. The iterative expression of this family is given by:
\[ y_n = x_n - \varphi(x_n)\,\frac{f(x_n)}{f'(x_n)}, \qquad x_{n+1} = z_n - \frac{f(z_n)}{f'(x_n)}\left(1 + 2\,\frac{f(z_n)}{f(x_n)} + \alpha\,\frac{f(x_n)}{f'(x_n)}\right), \quad n = 0, 1, \ldots, \tag{1} \]
where φ(x) is a real weight function, α is a free disposable parameter, and z_n is the midpoint of x_n and y_n, i.e., z_n = (x_n + y_n)/2.
Under some conditions on the function φ, the fourth-order convergence of the elements of (1) is presented in the following result. We can observe the role of φ(x_n) and α in the construction of the fourth-order convergent schemes.
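For illustration purposes, one step of the family (1) can be sketched in a few lines of code (a minimal Python sketch; the helper name step_family1 and the argument corr, which realizes the term αf(x_n)/f′(x_n) of the second substep, are ours and not part of the original formulation):

```python
def step_family1(f, df, x, phi, corr=lambda x, fx, dfx: 0.0):
    """One step of the two-step family (1).

    phi  -- weight function phi(x), with phi(x*) = 2 at the sought root
    corr -- realization of the term alpha*f(x_n)/f'(x_n) of the second
            substep (identically zero for Case-1 of Table 1)
    """
    fx, dfx = f(x), df(x)
    y = x - phi(x) * fx / dfx            # first substep
    z = 0.5 * (x + y)                    # z_n = (x_n + y_n)/2
    fz = f(z)
    return z - (fz / dfx) * (1.0 + 2.0 * fz / fx + corr(x, fx, dfx))
```

For instance, Case-1 of Table 1 corresponds to phi = lambda x: 2.0 with the default corr.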

2.1. Convergence Analysis

Theorem 1.
Let f : ℂ → ℂ be an analytic function in a neighborhood of the required simple zero x*, and let the initial guess x₀ be close enough to x*. Then, the members of the family (1) have fourth-order convergence if φ(x*) = 2 and α = φ′(x*).
Proof. 
Let us denote by e_n = x_n − x* the error at the nth step. By using the Taylor expansion around x*, we get:
\[ f(x_n) = f(x^*) + f'(x^*)(x_n - x^*) + \tfrac{1}{2}f''(x^*)(x_n - x^*)^2 + \tfrac{1}{3!}f'''(x^*)(x_n - x^*)^3 + \cdots = f'(x^*)\left(e_n + c_2 e_n^2 + c_3 e_n^3 + c_4 e_n^4\right) + O(e_n^5) \tag{2} \]
and:
\[ f'(x_n) = f'(x^*) + f''(x^*)(x_n - x^*) + \tfrac{1}{2}f'''(x^*)(x_n - x^*)^2 + \tfrac{1}{3!}f^{(iv)}(x^*)(x_n - x^*)^3 + \cdots = f'(x^*)\left(1 + 2c_2 e_n + 3c_3 e_n^2 + 4c_4 e_n^3\right) + O(e_n^4), \tag{3} \]
where c_j = f^{(j)}(x^*)/(j!\,f'(x^*)) for j ≥ 2.
Similarly, we can expand φ ( x n ) around x * by using Taylor’s series, which is given as follows:
\[ \varphi(x_n) = \varphi(x^*) + e_n\,\varphi'(x^*) + \frac{e_n^2}{2!}\,\varphi''(x^*) + \frac{e_n^3}{3!}\,\varphi'''(x^*) + O(e_n^4). \tag{4} \]
By using (2)–(4) in the first substep of (1), we have:
\[ y_n - x^* = (1 - \varphi(x^*))\,e_n + \left(c_2\varphi(x^*) - \varphi'(x^*)\right)e_n^2 + \left(2(c_3 - c_2^2)\varphi(x^*) + c_2\varphi'(x^*) - \frac{\varphi''(x^*)}{2}\right)e_n^3 + \left((4c_2^3 - 7c_3c_2 + 3c_4)\varphi(x^*) + 2(c_3 - c_2^2)\varphi'(x^*) + \frac{c_2}{2}\varphi''(x^*) - \frac{\varphi'''(x^*)}{6}\right)e_n^4 + O(e_n^5). \tag{5} \]
Once again, we expand f ( z n ) around point x * , which leads to:
\[ f(z_n) = f\!\left(\frac{x_n + y_n}{2}\right) = f'(x^*)\left[\left(1 - \frac{\varphi(x^*)}{2}\right)e_n + \frac{1}{4}\left(c_2\left(\varphi(x^*)^2 - 2\varphi(x^*) + 4\right) - 2\varphi'(x^*)\right)e_n^2 + \frac{1}{8}\left(4c_2(\varphi(x^*) - 1)\varphi'(x^*) - 4c_2^2\varphi(x^*)^2 - c_3\left(\varphi(x^*)^3 - 6\varphi(x^*)^2 + 4\varphi(x^*) - 8\right) - 2\varphi''(x^*)\right)e_n^3\right] + O(e_n^4). \tag{6} \]
Now, we use Equations (2)–(6) in the last substep of (1), obtaining:
\[ e_{n+1} = -\frac{(\varphi(x^*) - 2)^2}{2}\,e_n + \frac{\varphi(x^*) - 2}{4}\left(2\left(\alpha - 2\varphi'(x^*)\right) + c_2\left(2\varphi(x^*)^2 + \varphi(x^*) - 6\right)\right)e_n^2 + M_1 e_n^3 + M_2 e_n^4 + O(e_n^5), \tag{7} \]
where M₁ and M₂ are functions of c₂, c₃, c₄, φ(x*), φ′(x*), φ″(x*), and φ‴(x*).
It is clear from Equation (7) that if we choose the value φ(x*) = 2, then we obtain at least cubic convergence. Therefore, by using φ(x*) = 2 in Equation (7), we further have:
\[ e_{n+1} = \frac{1}{2}\left(\alpha - \varphi'(x^*)\right)\left(\varphi'(x^*) - 2c_2\right)e_n^3 + \bar{M}_2 e_n^4 + O(e_n^5), \tag{8} \]
where M̄₂ depends on c₂, c₃, c₄, φ′(x*), φ″(x*), and φ‴(x*).
It is straightforward to say that the coefficient of e n 3 should be zero in order to obtain fourth-order convergence. Then, we have:
\[ \alpha = \varphi'(x^*). \tag{9} \]
Finally, we substitute the value α = φ′(x*) in Equation (8), obtaining:
\[ e_{n+1} = \frac{1}{4}\left(\varphi'(x^*) - 2c_2\right)\left(c_2\varphi'(x^*) - 10c_2^2 + 2c_3 - \varphi''(x^*)\right)e_n^4 + O(e_n^5), \tag{10} \]
where φ′(x*), φ″(x*) ∈ ℝ are free disposable parameters. This completes the proof. □
Hence, the proposed family (1) reaches fourth-order convergence by using only three functional evaluations (viz. f(x_n), f′(x_n), and f(z_n)) per iteration. Therefore, it satisfies the optimality of the Kung-Traub conjecture [25] for multi-point iterative methods without memory. It is vital to note that the values of φ(x_n) and α contribute to the construction of the desired fourth-order convergence.

2.2. Some Particular Cases

In this section, some particular cases of the family (1) are presented, obtained by assigning different forms to the weight function φ(x). In Table 1, we show the chosen expressions of φ(x). Moreover, new and interesting iterative methods can be found from other functions φ(x) satisfying the conditions of Theorem 1.
Remark 1.
It is important to note, in Case-2, that if we consider any second-order iteration function ϕ(x) that only employs evaluations of f(x_n) and f′(x_n), then we obtain optimal fourth-order convergence. Otherwise, if we consider any other iteration function, then we also obtain fourth-order convergence, but not optimal in the sense of the Kung-Traub conjecture; e.g., choosing ϕ(x) as Steffensen's method or a Steffensen-type method produces a new fourth-order iterative method, but not an optimal one.
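The particular cases of Table 1 can then be encoded as (phi, corr) pairs for the sketch of Section 2 (again a hypothetical helper layer, with a and b the parameters of each case; Case-2 is omitted since it requires an auxiliary second-order iteration function):

```python
def case1():                                # Case-1: phi(x) = 2, alpha = 0
    return (lambda x: 2.0,
            lambda x, fx, dfx: 0.0)

def case3(f, a, b):                         # Case-3: phi = 2 + a f(x)^b, b > 2
    return (lambda x: 2.0 + a * f(x)**b,
            lambda x, fx, dfx: 0.0)         # alpha = phi'(x*) = 0

def case4(f, a, b):                         # Case-4: phi = 2a/(a + b f(x)), a != 0
    return (lambda x: 2.0 * a / (a + b * f(x)),
            lambda x, fx, dfx: -(2.0 * b / a) * fx)

def case5(f, a, b):                         # Case-5: phi = 1 + (1 + a f)/(1 + b f)
    return (lambda x: 1.0 + (1.0 + a * f(x)) / (1.0 + b * f(x)),
            lambda x, fx, dfx: (a - b) * fx)
```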

3. Multidimensional Extension

Let us now consider the nonlinear system F(x) = 0, defined by a multidimensional function F : D ⊆ ℝᵐ → ℝᵐ, x̄ being the zero we are searching for. Our aim is to generalize the family (1) to nonlinear systems, and the main drawback is the existence of the quotient f(z_n)/f(x_n). This usually makes a method non-extendable to several variables, but in the recent literature (see, for example, [26,27]), the authors solved it by means of the following strategy: the quotient can be written as:
\[ \frac{f(z_n)}{f(x_n)} = 1 + \frac{f(z_n) - f(x_n)}{f(x_n)}, \]
but y_n − x_n = 2(z_n − x_n) and, from the first step of (1), we have f(x_n) = (x_n − y_n) f′(x_n)/φ(x_n). Therefore,
\[ \frac{f(z_n)}{f(x_n)} = 1 + \varphi(x_n)\,\frac{f(z_n) - f(x_n)}{(x_n - y_n)\,f'(x_n)} = 1 - \frac{1}{2}\,\varphi(x_n)\,\frac{f[x_n, z_n]}{f'(x_n)}, \]
where f [ x n , z n ] is the first-order divided difference. Now, the class (1) can be written in the following way for nonlinear systems:
\[ y^{(n)} = x^{(n)} - H(x^{(n)})\,[F'(x^{(n)})]^{-1} F(x^{(n)}), \qquad x^{(n+1)} = z^{(n)} - \left(3I - H(x^{(n)})\,[F'(x^{(n)})]^{-1}[x^{(n)}, z^{(n)}; F]\right)[F'(x^{(n)})]^{-1} F(z^{(n)}), \tag{11} \]
where H(x) is a matrix-valued weight function and z⁽ⁿ⁾ is the midpoint of x⁽ⁿ⁾ and y⁽ⁿ⁾, i.e., z⁽ⁿ⁾ = (x⁽ⁿ⁾ + y⁽ⁿ⁾)/2.
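A direct transcription of (11) is straightforward once the first-order divided difference operator, recalled next, is available; the following numpy-based sketch (our own helper names, not from the paper) uses the usual componentwise definition of [x, y; F]:

```python
import numpy as np

def divided_difference(F, x, y):
    """Componentwise first-order divided difference [x, y; F], built so
    that [x, y; F](x - y) = F(x) - F(y)."""
    m = len(x)
    DD = np.empty((m, m))
    for j in range(m):
        u = np.concatenate((y[:j + 1], x[j + 1:]))
        v = np.concatenate((y[:j], x[j:]))
        DD[:, j] = (F(u) - F(v)) / (y[j] - x[j])
    return DD

def step_family11(F, JF, x, H):
    """One step of the multidimensional family (11); H(x) is the matrix
    weight function (H(x) = 2I recovers the basic member)."""
    Jx = JF(x)
    y = x - H(x) @ np.linalg.solve(Jx, F(x))
    z = 0.5 * (x + y)                        # midpoint z^(n)
    M = 3.0 * np.eye(len(x)) - H(x) @ np.linalg.solve(Jx, divided_difference(F, x, z))
    return z - M @ np.linalg.solve(Jx, F(z))
```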
The divided difference operator [·, ·; F] is the map [·, ·; F] : D × D ⊆ ℝᵐ × ℝᵐ → L(ℝᵐ) defined by Ortega and Rheinboldt in [10], such that [x, y; F](x − y) = F(x) − F(y), for all x, y ∈ D. In order to obtain the Taylor expansion of the divided difference operator around the solution x̄, we use the Genocchi-Hermite formula (see [28]):
\[ [x, x + h; F] = \int_0^1 F'(x + th)\,dt \tag{12} \]
and by developing F ( x + t h ) around x, we obtain:
\[ \int_0^1 F'(x + th)\,dt = F'(x) + \frac{1}{2}F''(x)h + \frac{1}{6}F'''(x)h^2 + O(h^3). \]
If F′(x̄) is nonsingular and denoting e = x − x̄, we have:
\[ \begin{aligned} F(x) &= F'(\bar{x})\left(e + C_2 e^2 + C_3 e^3 + C_4 e^4 + C_5 e^5\right) + O(e^6), \\ F'(x) &= F'(\bar{x})\left(I + 2C_2 e + 3C_3 e^2 + 4C_4 e^3 + 5C_5 e^4\right) + O(e^5), \\ F''(x) &= F'(\bar{x})\left(2C_2 + 6C_3 e + 12C_4 e^2\right) + O(e^3), \\ F'''(x) &= F'(\bar{x})\left(6C_3 + 24C_4 e\right) + O(e^2), \end{aligned} \]
where C_q = (1/q!)\,[F′(x̄)]⁻¹F⁽q⁾(x̄), q ≥ 2. Replacing these expressions in (12) and using y = x + h and e_y = y − x̄, we have:
\[ [x, y; F] = F'(\bar{x})\left[I + C_2(e_y + e) + C_3 e^2\right] + O(e^3). \]
In particular, if y is Newton's approximation, i.e., h = y − x = −[F′(x)]⁻¹F(x), we obtain:
\[ [x, y; F] = F'(\bar{x})\left[I + C_2 e + (C_2^2 + C_3)e^2\right] + O(e^3). \]
The following result establishes the sufficient conditions for the convergence of family (11) with order four. The notation used for multidimensional Taylor expansions can be found in [29].
Theorem 2.
Let F : Ω ⊆ ℝᵐ → ℝᵐ be three times differentiable in an open neighborhood Ω of x̄ ∈ ℝᵐ, which is a solution of F(x) = 0, and let x⁽⁰⁾ be an initial guess close enough to x̄. If F′(x) is continuous and nonsingular at x̄, then the sequence {x⁽ⁿ⁾}_{n≥0} obtained from (11) converges to x̄ with order four when H(x̄) = 2I, H′(x̄) = 0 (the null matrix), and H″(x̄) is bounded, the error equation being in this case:
\[ e^{(n+1)} = \left(-C_3C_2 + 5C_2^3 + \tfrac{1}{2}H''(\bar{x})C_2\right)e^4 + O(e^5), \]
where C_q = (1/q!)\,[F′(x̄)]⁻¹F⁽q⁾(x̄), q = 2, 3, …, and e⁽ⁿ⁾ = x⁽ⁿ⁾ − x̄.
Proof. 
By using the Taylor expansion of F(x⁽ⁿ⁾) and F′(x⁽ⁿ⁾) around x̄,
\[ F(x^{(n)}) = F'(\bar{x})\left(e + C_2 e^2 + C_3 e^3 + C_4 e^4 + C_5 e^5\right) + O(e^6), \]
\[ F'(x^{(n)}) = F'(\bar{x})\left(I + 2C_2 e + 3C_3 e^2 + 4C_4 e^3 + 5C_5 e^4\right) + O(e^5). \]
From the above expression and forcing [F′(x⁽ⁿ⁾)]⁻¹F′(x⁽ⁿ⁾) = F′(x⁽ⁿ⁾)[F′(x⁽ⁿ⁾)]⁻¹ = I, we have:
\[ [F'(x^{(n)})]^{-1} = \left(I + X_2 e + X_3 e^2 + X_4 e^3\right)[F'(\bar{x})]^{-1} + O(e^4), \]
where:
\[ X_2 = -2C_2, \qquad X_3 = -3C_3 + 4C_2^2, \qquad X_4 = -4C_4 + 6C_2C_3 + 6C_3C_2 - 8C_2^3. \]
Then,
\[ [F'(x^{(n)})]^{-1}F(x^{(n)}) = e - C_2 e^2 + 2(C_2^2 - C_3)e^3 + \left(4C_2C_3 + 3C_3C_2 - 4C_2^3 - 3C_4\right)e^4 + O(e^5). \]
As H ( x ( n ) ) can be developed in the following way:
\[ H(x^{(n)}) = H(\bar{x}) + H'(\bar{x})e + \frac{1}{2}H''(\bar{x})e^2 + O(e^3), \]
the error at the first step is:
\[ y^{(n)} - \bar{x} = (I - H(\bar{x}))e + \left(H(\bar{x})C_2 - H'(\bar{x})\right)e^2 + \left(2H(\bar{x})(C_3 - C_2^2) + H'(\bar{x})C_2 - \tfrac{1}{2}H''(\bar{x})\right)e^3 + \left(3H(\bar{x})C_4 - 4H(\bar{x})C_2C_3 - 3H(\bar{x})C_3C_2 + 4H(\bar{x})C_2^3 + 2H'(\bar{x})(C_3 - C_2^2) + \tfrac{1}{2}H''(\bar{x})C_2\right)e^4 + O(e^5) \]
and the error at midpoint z ( n ) is:
\[ z^{(n)} - \bar{x} = \left(I - \tfrac{1}{2}H(\bar{x})\right)e + \tfrac{1}{2}\left(H(\bar{x})C_2 - H'(\bar{x})\right)e^2 + \left(H(\bar{x})(C_3 - C_2^2) + \tfrac{1}{2}H'(\bar{x})C_2 - \tfrac{1}{4}H''(\bar{x})\right)e^3 + \left(\tfrac{3}{2}H(\bar{x})C_4 - 2H(\bar{x})C_2C_3 - \tfrac{3}{2}H(\bar{x})C_3C_2 + 2H(\bar{x})C_2^3 + H'(\bar{x})(C_3 - C_2^2) + \tfrac{1}{4}H''(\bar{x})C_2\right)e^4 + O(e^5). \]
In order to guarantee quadratic convergence of z⁽ⁿ⁾, we need to assure that H(x̄) = 2I. Therefore,
\[ F(z^{(n)}) = F'(\bar{x})\left[(z^{(n)} - \bar{x}) + C_2(z^{(n)} - \bar{x})^2\right] + O\left((z^{(n)} - \bar{x})^3\right) = F'(\bar{x})\Big[\left(C_2 - \tfrac{1}{2}H'(\bar{x})\right)e^2 + \left(2(C_3 - C_2^2) + \tfrac{1}{2}H'(\bar{x})C_2 - \tfrac{1}{4}H''(\bar{x})\right)e^3 + \left(3C_4 - 4C_2C_3 - 3C_3C_2 + 5C_2^3 + H'(\bar{x})(C_3 - C_2^2) + \tfrac{1}{4}H''(\bar{x})C_2 - \tfrac{1}{2}C_2^2H'(\bar{x}) - \tfrac{1}{2}C_2H'(\bar{x})C_2 + \tfrac{1}{4}C_2H'(\bar{x})^2\right)e^4\Big] + O(e^5). \]
In order to obtain the error equation, we calculate:
\[ [x^{(n)}, z^{(n)}; F] = F'(x^{(n)}) + \tfrac{1}{2}F''(x^{(n)})(z^{(n)} - x^{(n)}) + \tfrac{1}{6}F'''(x^{(n)})(z^{(n)} - x^{(n)})^2 + O\left((z^{(n)} - x^{(n)})^3\right) = F'(\bar{x})\left[I + C_2 e + \left(C_3 + C_2^2 - \tfrac{1}{2}C_2H'(\bar{x})\right)e^2\right] + O(e^3). \]
Therefore, the error at the final step is:
\[ \begin{aligned} e^{(n+1)} &= z^{(n)} - \bar{x} - \left(3I - H(x^{(n)})[F'(x^{(n)})]^{-1}[x^{(n)}, z^{(n)}; F]\right)[F'(x^{(n)})]^{-1}F(z^{(n)}) \\ &= \left(H'(\bar{x})C_2 - \tfrac{1}{2}H'(\bar{x})^2\right)e^3 + \Big(-C_3C_2 + 5C_2^3 + 2H'(\bar{x})C_3 - 5H'(\bar{x})C_2^2 + \tfrac{1}{2}H''(\bar{x})C_2 - \tfrac{5}{2}C_2^2H'(\bar{x}) - \tfrac{1}{2}C_2H'(\bar{x})C_2 + \tfrac{1}{4}C_2H'(\bar{x})^2 + \tfrac{1}{2}C_3H'(\bar{x}) + \tfrac{3}{2}H'(\bar{x})C_2H'(\bar{x}) + \tfrac{1}{2}H'(\bar{x})^2C_2 - \tfrac{1}{4}H'(\bar{x})H''(\bar{x}) - \tfrac{1}{4}H''(\bar{x})H'(\bar{x})\Big)e^4 + O(e^5). \end{aligned} \]
By assuming H′(x̄) = 0, the error equation of the method is obtained:
\[ e^{(n+1)} = \left(-C_3C_2 + 5C_2^3 + \tfrac{1}{2}H''(\bar{x})C_2\right)e^4 + O(e^5). \]
 □

Some Special Cases and Their Computational Indices

Now, we present some particular cases of the family (11), obtained by using different functions H(x). In the following, we show several selected functions. Let us remark that if H(x) ≡ 2I, the method is not new, as it appears in [30]. In the following, we focus our attention on the new schemes, describing in each case the computational effort that they involve, in terms of the number of functional evaluations d and the number of products and quotients op. By using this information, we employ the multidimensional extension of the efficiency index defined by Ostrowski in [31], I = p^{1/d}, and the computational efficiency index defined in [29], CI = p^{1/(d+op)}, where p is the order of convergence, d is the number of functional evaluations per iteration, and op is the number of products-quotients per iteration.
  • If H(x) = 2I + a (diag(F(x⁽ⁿ⁾)))ᵇ is replaced in (11), the resulting scheme is denoted as Scase-1, with a ≠ 0 and b ∈ ℝ.
  • We denote by Scase-2 the iterative scheme resulting from using H(x) = 2I + a [diag(F′(x⁽ⁿ⁾))]⁻¹ (diag(F(x⁽ⁿ⁾)))ᵇ in (11).
  • By substituting H(x) = 2I + a F′(x⁽ⁿ⁾) (diag(F(x⁽ⁿ⁾)))ᵇ in (11), procedure Scase-3 is obtained.
  • If we replace H(x) = 2I + a [F′(x⁽ⁿ⁾)]⁻¹ (diag(F(x⁽ⁿ⁾)))ᵇ in (11), the resulting method is called Scase-4.
In order to compare our proposed schemes with other similar ones (of the same order of convergence) existing in the literature, we introduce in what follows some of them, including their respective iterative expressions. This will allow us to calculate their corresponding efficiency indices.
In 2008, Nedzhibov [14] extended the original Jarratt’s method (see [15]) for the multi-dimensional case with the help of the Chebyshev-Halley family, whose iterative expression is:
\[ y^{(n)} = x^{(n)} - \tfrac{2}{3}[F'(x^{(n)})]^{-1}F(x^{(n)}), \qquad x^{(n+1)} = x^{(n)} - \tfrac{1}{2}\left[3F'(y^{(n)}) - F'(x^{(n)})\right]^{-1}\left[3F'(y^{(n)}) + F'(x^{(n)})\right][F'(x^{(n)})]^{-1}F(x^{(n)}), \tag{14} \]
denoted in the following as JM.
On the other hand, Hueso et al. in [17] (Equations (1)–(5)) designed the fourth-order scheme:
\[ y^{(n)} = x^{(n)} - \tfrac{2}{3}[F'(x^{(n)})]^{-1}F(x^{(n)}), \qquad x^{(n+1)} = x^{(n)} - \left[S_1 I + S_2 H(y^{(n)}, x^{(n)}) + S_3 H(x^{(n)}, y^{(n)}) + S_4 H(y^{(n)}, x^{(n)})^2\right][F'(x^{(n)})]^{-1}F(x^{(n)}), \tag{15} \]
where H(x, y) = [F′(x)]⁻¹F′(y), S₁ = (5 − 8S₂)/8, S₃ = S₂/3, S₄ = (9 − 8S₂)/24, and S₂ ∈ ℝ is a free disposable parameter. This scheme is denoted throughout the manuscript as HM, for S₂ = 9/8.
Moreover, Junjua et al. in [18] designed a Jarratt-type scheme of the fourth-order of convergence, denoted as JAM, whose iterative expression is:
\[ y^{(n)} = x^{(n)} - \tfrac{2}{3}[F'(x^{(n)})]^{-1}F(x^{(n)}), \qquad x^{(n+1)} = x^{(n)} - \left[I + \tfrac{1}{4}\left(\eta(x^{(n)}) - I\right) + \tfrac{3}{8}\left(\eta(x^{(n)}) - I\right)^2\right][F'(y^{(n)})]^{-1}F(x^{(n)}), \tag{16} \]
where η(x⁽ⁿ⁾) = [F′(x⁽ⁿ⁾)]⁻¹F′(y⁽ⁿ⁾).
In Table 2, the efficiency indices I of the new methods Scase-1, Scase-2, Scase-3, and Scase-4 are presented, together with those of the known methods JM, HM, and JAM. The number of functional evaluations of each kind in these schemes is different, but the order of convergence is the same. To calculate the efficiency index I, it must be taken into account that the number of functional evaluations of one F, one F′, and one first-order divided difference [·, ·; F] at certain iterates is m, m², and m(m − 1), respectively, m being the size of the system. Despite the differences in the structure of the new and existing methods, index I is the same for all of them.
On the other hand, to compute an inverse linear operator, we solve an m × m linear system where, as we know, the number of products-quotients needed to obtain the solution by means of the LU decomposition is (1/3)m³ + m² − (1/3)m. In addition, we need m² products for each matrix-vector multiplication and m² quotients for a divided difference.
Therefore, we calculate the CI of method Scase-1. For each iteration, we need to evaluate function F twice, the Jacobian F′ once, and the divided difference once, so 2m² + m functional evaluations are needed. In addition, we must solve three linear systems with F′(x⁽ⁿ⁾) as the coefficient matrix (that is, (1/3)m³ + 3m² − (1/3)m products-quotients), m² quotients for calculating the divided difference, one matrix-vector product (m² products-quotients), and three vector-vector products (3m products-quotients). Therefore, the value of index CI for method Scase-1 on a nonlinear system of size m × m is:
\[ CI_{\text{Scase-1}} = 4^{\,1/\left(\frac{1}{3}m^3 + 7m^2 + \frac{11}{3}m\right)}. \]
In Table 3, we show the index CI of schemes Scase-1, Scase-2, Scase-3, Scase-4, JM, HM, and JAM. In it, NFE denotes the number of functional evaluations, NLS1 is the number of linear systems with the matrix of coefficients F′(x⁽ⁿ⁾) to be solved, NLS2 is the number of linear systems with other matrices of coefficients that are solved, and M × V and V × V denote the number of matrix-vector and vector-vector products, respectively.
We observe that, although index I is the same in all these cases, this is not the case for index CI, since the number of inverse linear operators is different for each scheme. In Figure 1, index CI for those methods and systems of size from 2–20 is shown. We can observe that, for sizes of the system greater than eight, the best index corresponds to the proposed methods Scase-1 and Scase-2, due to the number of linear systems to be solved and the factor of the dominating term, that is (1/3)m³, in comparison with (2/3)m³ in the other schemes.
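The comparison behind Figure 1 is easy to reproduce from the cost polynomials of Table 3 (a short sketch):

```python
costs = {  # d + op per iteration, from Table 3
    "Scase-1": lambda m: m**3/3 + 7*m**2 + 11*m/3,
    "Scase-2": lambda m: m**3/3 + 7*m**2 + 14*m/3,
    "Scase-3": lambda m: m**3/3 + 11*m**2 + 14*m/3,
    "Scase-4": lambda m: m**3/3 + 11*m**2 + 14*m/3,
    "JM":      lambda m: 2*m**3/3 + 5*m**2 + m/3,
    "HM":      lambda m: 2*m**3/3 + 9*m**2 + m/3,
    "JAM":     lambda m: 2*m**3/3 + 7*m**2 + m/3,
}

for m in range(2, 21):
    CI = {name: 4.0 ** (1.0 / c(m)) for name, c in costs.items()}
    print(m, max(CI, key=CI.get))   # Scase-1 gives the largest CI for m >= 8
```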

4. Numerical Examples

In this section, we show the effectiveness and efficiency of some of our proposed methods, and we compare them with other existing optimal (in the scalar case) iterative schemes of the same order. In the one-dimensional case, for comparison purposes, we consider some cases from Table 1: Case-1, Case-3 (for a = 1/10, b = 3), Case-4 (for a = 10 and b = 2), and Case-5 (for a = 1 and b = 1), called OM1, OM2, OM3, and OM4, respectively. In addition, we consider three real-life problems, namely a chemical engineering problem, the movement of an electron in the air gap between two parallel plates, and the fractional conversion in a chemical reactor, which are displayed in Examples 1–3. The solution of each problem is listed in the corresponding example, correct up to 30 significant digits; the desired roots were actually computed to at least one thousand significant digits, but, due to space restrictions, only 30 significant digits are displayed.
Now, we compare them with the optimal fourth-order multi-point iteration function proposed by Khattri and Abbasbandy [32] and Soleymani et al. [33]; from them, we considered methods (6) and (19) denoted as ( K A ) and ( S K V ) , respectively. In addition, we also compare them with the optimal schemes of order four, which were presented by Chun [34] and King [35]; we have picked expressions (10) and (2) (for β = 1 ) from them, called ( C M ) and ( K M ) , respectively. Finally, we also compare them with another optimal family of fourth-order methods derived by Cordero et al. [36], from which we have chosen expression (9), called ( C H M T ) .
We compare our proposed methods with the existing ones in terms of the absolute residual error of the corresponding function |f(x_n)|, the error between two consecutive iterations |x_{n+1} − x_n|, the ratio |x_{n+1} − x_n|/|x_n − x_{n−1}|⁴, and the asymptotic error constant η = lim_{n→∞} |x_{n+1} − x_n|/|x_n − x_{n−1}|⁴ in Table 4, Table 5 and Table 6.
We calculate the asymptotic error constant and other constants up to several significant digits (a minimum of 1000 significant digits) to minimize the round-off error. We show the values of x n , the absolute residual error in the function | f ( x n ) | , the difference between the two consecutive iterations | x n + 1 x n | , and the values of x n + 1 x n ( x n x n 1 ) 4 and η .
In the context of nonlinear systems, we also consider two applied science problems to check further the validity of the theoretical results. We compare our schemes with methods (14)–(16), called JM, HM, and JAM, respectively. We have included the iteration index (n), the residual error of the corresponding function ‖F(x⁽ⁿ⁺¹⁾)‖, the error between iterations ‖x⁽ⁿ⁺¹⁾ − x⁽ⁿ⁾‖, and the approximated computational order of convergence
\[ \rho = \frac{\ln\left(\|x^{(n+1)} - x^{(n)}\|/\|x^{(n)} - x^{(n-1)}\|\right)}{\ln\left(\|x^{(n)} - x^{(n-1)}\|/\|x^{(n-1)} - x^{(n-2)}\|\right)} \]
(for details, see Cordero and Torregrosa [37]) in Table 7 and Table 8.
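In practice, ρ can be obtained directly from the stored iterates, e.g. (a sketch; the helper name acoc is ours):

```python
import numpy as np

def acoc(xs):
    """Approximated computational order of convergence from the last
    four iterates xs[-4:], following Cordero and Torregrosa [37]."""
    d = [np.linalg.norm(np.subtract(xs[k + 1], xs[k])) for k in range(len(xs) - 1)]
    return np.log(d[-1] / d[-2]) / np.log(d[-2] / d[-3])
```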
All computations have been performed using the software Mathematica 9 (Wolfram Research, Champaign, IL, USA) with multiple precision arithmetic and, in Table 4, Table 5, Table 6, Table 7 and Table 8, A(±B) denotes A × 10^{±B}.
Example 1.
We consider a quartic equation from Shacham, Balaji, and Seader [1,2], which describes the fractional conversion, that is, the fraction of the nitrogen-hydrogen feed that gets converted to ammonia. If we consider 250 atm and 500 °C, then the equation has the form:
\[ f_2(z) = z^4 - 7.79075z^3 + 14.7445z^2 + 2.511z - 1.674. \tag{17} \]
Function (17) has four zeros: two real zeros and two complex conjugate ones. We want to obtain the zero x* ≈ 3.9485424455620457727 + 0.3161235708970163733i with the initial approximation x₀ = 3.7 + 0.25i. Other initial approximations further away from the solution give similar numerical results, with a slower approach to the solution.
Example 2.
In the analysis of the movement of an electron in the air gap between two parallel plates, the multi-factor effect is given by:
\[ p(t) = p_0 + \left(v_0 + \frac{eE_0}{m\omega}\,\sin(\omega t_0 + \alpha)\right)(t - t_0) + \frac{eE_0}{m\omega^2}\left(\cos(\omega t + \alpha) - \cos(\omega t_0 + \alpha)\right), \tag{18} \]
e and m being, respectively, the charge and the mass of the electron, p₀ and v₀ being, respectively, the position and velocity of the electron at the instant t₀, and E₀ sin(ωt + α) being the RF electric field between the plates. By selecting particular values of these parameters in Equation (18), in order to simplify the expression, we obtain:
\[ f_3(x) = x - \frac{1}{2}\cos(x) + \frac{\pi}{4}. \tag{19} \]
This function has a simple zero at x* ≈ −0.309093271541794952741986808924, and we use x₀ = −1 as the initial estimate.
Example 3.
In the expression (see [3]):
\[ f_4(x) = \frac{x}{1 - x} - 5\ln\!\left(\frac{0.4(1 - x)}{0.4 - 0.5x}\right) + 4.45977, \tag{20} \]
x denotes the fractional conversion of Species A in a chemical reactor. We must take into account that this expression has no physical meaning if x < 0 or x > 1; thus, x is considered in the interval 0 ≤ x ≤ 1. The searched zero is x* ≈ 0.757396246253753879459641297929. Indeed, let us remark that expression (20) is undefined in 0.8 ≤ x ≤ 1, very near the zero; moreover, the derivative of expression (20) is close to zero in 0 ≤ x ≤ 0.5. Therefore, we consider the initial approximation x₀ = 0.76 for this problem.
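The three scalar test functions, and a run of one family member on Example 1, can be coded as follows (a sketch reusing step_family1 and case1 from Section 2; the derivative expressions are ours):

```python
import numpy as np

f2  = lambda z: z**4 - 7.79075*z**3 + 14.7445*z**2 + 2.511*z - 1.674
df2 = lambda z: 4*z**3 - 23.37225*z**2 + 29.489*z + 2.511
f3  = lambda x: x - 0.5*np.cos(x) + np.pi/4
df3 = lambda x: 1.0 + 0.5*np.sin(x)
f4  = lambda x: x/(1 - x) - 5*np.log(0.4*(1 - x)/(0.4 - 0.5*x)) + 4.45977
df4 = lambda x: 1/(1 - x)**2 - 5*(-1/(1 - x) + 0.5/(0.4 - 0.5*x))

phi, corr = case1()
x = 3.7 + 0.25j                       # initial guess of Example 1
for _ in range(3):
    x = step_family1(f2, df2, x, phi, corr)
print(x)                              # ~3.94854244556 + 0.31612357090i
```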
Example 4.
Considering the mixed Hammerstein integral equation from Ortega and Rheinboldt [10], given by:
\[ x(s) = 1 + \frac{1}{5}\int_0^1 G(s, t)\,x(t)^3\,dt, \tag{21} \]
where x ∈ C[0, 1], s, t ∈ [0, 1], and the kernel G is defined as G(s, t) = (1 − s)t if t ≤ s, and G(s, t) = s(1 − t) if s ≤ t.
We use the Gauss-Legendre quadrature formula with ten nodes t_j and weights w_j (see Table 9), j = 1, 2, …, 10, to transform the integral equation into a nonlinear system. By denoting the approximation of x(t_i) by x_i, i = 1, 2, …, 10, one gets the nonlinear system:
\[ 5x_i - 5 - \sum_{j=1}^{10} a_{ij}\,x_j^3 = 0, \qquad i = 1, 2, \ldots, 10, \]
where
\[ a_{ij} = \begin{cases} w_j\,t_j\,(1 - t_i), & j \le i, \\ w_j\,t_i\,(1 - t_j), & i < j. \end{cases} \]
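For reproducibility, the discretized Hammerstein system and its Jacobian can be assembled as follows (a sketch; we generate the nodes and weights of Table 9 with numpy instead of copying them, and the Jacobian expression is ours):

```python
import numpy as np

t, w = np.polynomial.legendre.leggauss(10)
t, w = 0.5*(t + 1.0), 0.5*w                  # map nodes/weights from [-1,1] to [0,1]

i, j = np.indices((10, 10))
A = np.where(j <= i, w*t*(1 - t[:, None]),   # a_ij = w_j t_j (1 - t_i), j <= i
             (w*(1 - t))*t[:, None])         # a_ij = w_j t_i (1 - t_j), i < j

def F(x):                                    # the system 5x_i - 5 - sum_j a_ij x_j^3 = 0
    return 5.0*x - 5.0 - A @ x**3

def JF(x):                                   # Jacobian: 5I - 3 A diag(x^2)
    return 5.0*np.eye(10) - 3.0*A*x**2
```

Starting from x0 = np.ones(10), a few steps of, e.g., step_family11 with H = lambda x: 2*np.eye(10) should converge to the root quoted below.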
In Table 7, we show the numerical results obtained by using x⁽⁰⁾ = (1, 1, …, 1)ᵀ to search for the root:
\[ \bar{x} \approx (1.0013,\ 1.0067,\ 1.0145,\ 1.0219,\ 1.0265,\ 1.0265,\ 1.0219,\ 1.0145,\ 1.0067,\ 1.0013)^T. \]
Example 5.
Let us consider the following boundary value problem described in [10]:
\[ y'' = \frac{1}{2}y^3 + 3y' - \frac{3}{2}x + \frac{1}{2}, \qquad y(0) = 0, \quad y(1) = 1. \tag{22} \]
We partition the interval [0, 1] as follows:
\[ x_0 = 0 < x_1 < x_2 < x_3 < \cdots < x_n = 1, \qquad x_{i+1} = x_i + h, \quad h = \frac{1}{n}, \]
and we also consider y₀ = y(x₀) = 0, y₁ = y(x₁), …, y_{n−1} = y(x_{n−1}), y_n = y(x_n) = 1. Now, we discretize problem (22) by using the following approximations of the derivatives:
\[ y'_j \approx \frac{y_{j+1} - y_{j-1}}{2h}, \qquad y''_j \approx \frac{y_{j-1} - 2y_j + y_{j+1}}{h^2}, \qquad j = 1, 2, \ldots, n - 1. \]
Therefore, a nonlinear system of size (n − 1) × (n − 1) is obtained:
\[ y_{j+1} - 2y_j + y_{j-1} - \frac{h^2}{2}y_j^3 - \frac{3h}{2}\left(y_{j+1} - y_{j-1}\right) + \frac{3}{2}x_j h^2 - \frac{1}{2}h^2 = 0, \qquad j = 1, 2, \ldots, n - 1. \]
If we use the initial guess y⁽⁰⁾ = (1/10, 1/10, 1/10, 1/10, 1/10, 1/10)ᵀ to solve the nonlinear system obtained for n = 7, we obtain the solution:
\[ \bar{x} \approx (0.07654393,\ 0.1658739,\ 0.2715210,\ 0.3984540,\ 0.5538864,\ 0.7486878)^T, \]
and the numerical results are shown in Table 8.
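Under the discretization above, the residual of the nonlinear system for n = 7 can be sketched as follows (the helper name is ours, and the code assumes the discretized equation exactly as written; y holds the six interior unknowns):

```python
import numpy as np

def F_bvp(y, n=7):
    """Residual of the discretized boundary value problem of Example 5;
    the boundary data y(0) = 0 and y(1) = 1 are appended explicitly."""
    h = 1.0 / n
    xj = np.arange(1, n) * h                          # interior grid points
    yf = np.concatenate(([0.0], y, [1.0]))
    ym, yc, yp = yf[:-2], yf[1:-1], yf[2:]            # y_{j-1}, y_j, y_{j+1}
    return (yp - 2.0*yc + ym - 0.5*h**2*yc**3
            - 1.5*h*(yp - ym) + 1.5*xj*h**2 - 0.5*h**2)
```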

Results and Discussion

From Table 4, Table 5 and Table 6, we can say that our methods have a smaller residual error, in each test function, than the known methods used for comparison, namely KA, SKV, CM, KM, and CHMT. In addition, our methods also have a smaller distance between two consecutive iterations; therefore, they converge faster towards the searched root than the existing ones. On the other hand, the proposed methods also have simple asymptotic error constants for each test function, which can be seen in Table 4, Table 5 and Table 6. A similar behavior of our methods is found in the case of the multidimensional extension, as reported in Table 7 and Table 8. Nevertheless, our methods can behave differently depending on the nonlinear equation; in fact, the behavior of an iterative method mainly depends on the complexity of the iterative expression, the test function, the initial guess, the programming of the scheme, etc.

5. Conclusions

In the past, several researchers have proposed optimal multi-point fourth-order iterative methods for simple roots of nonlinear equations by using weight functions or free parameters only in the second step. In this paper, we design a family of two-step optimal iterative methods with fourth-order convergence, including weight functions and parameters in both steps of the methods. The main strength of our proposed schemes is that they not only give researchers flexibility at both steps for constructing new optimal fourth-order methods, but also provide faster convergence, small residual errors in the involved function, and small asymptotic error constants in relation to other known schemes. Moreover, we extended the proposed family to systems of nonlinear equations, preserving the order of convergence and the simplicity of the scalar case. Numerical experiments were performed and compared with earlier methods; the results of the proposed schemes are consistent with, and often better than, those of the existing methods.

Author Contributions

The authors contributed equally to this paper.

Funding

This research was partially supported by Ministerio de Economía y Competitividad MTM2014-52016-C2-2-P (Spain) and by Generalitat Valenciana PROMETEO/2016/089 (Spain).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Shacham, M. An improved memory method for the solution of a nonlinear equation. Chem. Eng. Sci. 1989, 44, 1495–1501. [Google Scholar] [CrossRef]
  2. Balaji, G.V.; Seader, J.D. Application of interval Newton’s method to chemical engineering problems. Reliab. Comput. 1995, 1, 215–223. [Google Scholar] [CrossRef]
  3. Shacham, M. Numerical solution of constrained nonlinear algebraic equations. Int. J. Numer. Method Eng. 1986, 23, 1455–1481. [Google Scholar] [CrossRef]
  4. Shacham, M.; Kehat, E. Converging interval methods for the iterative solution of nonlinear equations. Chem. Eng. Sci. 1973, 28, 2187–2193. [Google Scholar] [CrossRef]
  5. Moré, J.J. A Collection of Nonlinear Model Problems; Lectures in Applied Mathematics; American Mathematical Society: Providence, RI, USA, 1990; Volume 26, pp. 723–762. [Google Scholar]
  6. Grosan, C.; Abraham, A. A new approach for solving nonlinear equations systems. IEEE Trans. Syst. Man Cyber. Part A Syst. Hum. 2008, 38, 698–714. [Google Scholar] [CrossRef]
  7. Awawdeh, F. On new iterative method for solving systems of nonlinear equations. Numer. Algorithms 2010, 54, 395–409. [Google Scholar] [CrossRef]
  8. Tsoulos, I.G.; Stavrakoudis, A. On locating all roots of systems of nonlinear equations inside bounded domain using global optimization methods. Nonlinear Anal. Real World Appl. 2010, 11, 2465–2471. [Google Scholar] [CrossRef]
  9. Lin, Y.; Bao, L.; Jia, X. Convergence analysis of a variant of the Newton method for solving nonlinear equations. Comput. Math. Appl. 2010, 59, 2121–2127. [Google Scholar] [CrossRef]
  10. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: Cambridge, MA, USA, 1970. [Google Scholar]
  11. Petković, M.S.; Neta, B.; Petković, L.D.; Dz̆unić, J. Multipoint Methods for Solving Nonlinear Equations; Academic Press: Cambridge, MA, USA, 2013. [Google Scholar]
  12. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
  13. Amat, S.; Busquier, S. Advances in Iterative Methods for Nonlinear Equations; SEMA SIMAI Springer Series; Springer: Berlin, Germany, 2016; Volume 10. [Google Scholar]
  14. Nedzhibov, G.H. A family of multi-point iterative methods for solving systems of nonlinear equations. J. Comput. Appl. Math. 2008, 222, 244–250. [Google Scholar] [CrossRef]
  15. Jarratt, P. Some fourth-order multipoint iterative methods for solving equations. Math. Comput. 1966, 20, 434–437. [Google Scholar] [CrossRef]
  16. Khan, W.A.; Noor, K.I.; Bhattia, K.; Ansari, F.A. A new fourth order Newton-type method for solution of system of nonlinear equations. Appl. Math. Comput. 2015, 270, 724–730. [Google Scholar] [CrossRef]
  17. Hueso, J.L.; Martínez, E.; Teruel, C. Convergence, Efficiency and Dynamics of new fourth and sixth order families of iterative methods for nonlinear systems. J. Comput. Appl. Math. 2015, 275, 412–420. [Google Scholar] [CrossRef]
  18. Junjua, M.; Akram, S.; Yasmin, N.; Zafar, F. A New Jarratt-Type Fourth-Order Method for Solving System of Nonlinear Equations and Applications. Appl. Math. 2015, 2015, 805278. [Google Scholar] [CrossRef]
  19. Gutiérrez, J.M.; Hernández, M.A. An acceleration of Newton’s method: Super-Halley method in a Banach space. Appl. Math. Comput. 2001, 117, 223–239. [Google Scholar]
  20. Chun, C. Some second-derivative-free variants of Chebyshev’s Halley methods. Appl. Math. Comput. 2007, 191, 410–414. [Google Scholar]
  21. Kou, J.; Li, Y.; Wang, X. Fourth-order iterative methods free from second derivative. Appl. Math. Comput. 2007, 184, 880–885. [Google Scholar] [CrossRef]
  22. Noor, M.A.; Ahmad, F. Fourth-order convergent iterative method for nonlinear equation. Appl. Math. Comput. 2006, 182, 1149–1153. [Google Scholar] [CrossRef]
  23. Sharma, J.R.; Guha, R.K.; Sharma, R. Some variants of Hansen-Patrick method with third and fourth order convergence. Appl. Math. Comput. 2009, 214, 171–177. [Google Scholar] [CrossRef]
  24. Chun, C.; Neta, B. Certain improvement of Newton’s method with fourth-order convergence. Appl. Math. Comput. 2009, 215, 821–828. [Google Scholar] [CrossRef]
  25. Kung, H.T.; Traub, J.F. Optimal order of one-point and multi-point iteration. J. ACM 1974, 21, 643–651. [Google Scholar] [CrossRef]
  26. Abad, M.F.; Cordero, A.; Torregrosa, J.R. A family of seventh-order schemes for solving nonlinear systems. Bull. Math. Soc. Sci. Math. Roum. 2014, 57, 133–145. [Google Scholar]
  27. Cordero, A.; Maimó, J.G.; Torregrosa, J.R.; Vassileva, M.P. Solving nonlinear problems by Ostrowski-Chun type parametric families. J. Math. Chem. 2014, 52, 430–449. [Google Scholar]
  28. Hermite, C. Sur la formule d'interpolation de Lagrange. J. Reine Angew. Math. 1878, 84, 70–79. [Google Scholar] [CrossRef]
  29. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. A modified Newton-Jarratt’s composition. Numer. Algorithms 2010, 55, 87–99. [Google Scholar] [CrossRef]
  30. Sharma, J.R.; Arora, H. On efficient weighted-Newton methods for solving systems of nonlinear equations. Appl. Math. Comput. 2013, 222, 497–506. [Google Scholar] [CrossRef]
  31. Ostrowski, A.M. Solution of Equations and Systems of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA; New York, NY, USA, 1964. [Google Scholar]
  32. Khattri, S.K.; Abbasbandy, S. Optimal fourth order family of iterative methods. Mat. Vesnik 2011, 63, 67–72. [Google Scholar]
  33. Soleymani, F.; Khattri, S.K.; Vanani, S.K. Two new classes of optimal Jarratt-type fourth-order methods. Appl. Math. Lett. 2012, 25, 847–853. [Google Scholar] [CrossRef]
  34. Chun, C. Some variants of King’s fourth-order family of methods for nonlinear equations. Appl. Math. Comput. 2007, 190, 57–62. [Google Scholar] [CrossRef]
  35. King, R.F. A family of fourth order methods for nonlinear equations. SIAM J. Numer. Anal. 1973, 10, 876–879. [Google Scholar] [CrossRef]
  36. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. New modifications of Potra-Pták's method with optimal fourth and eighth orders of convergence. J. Comput. Appl. Math. 2010, 234, 2969–2976. [Google Scholar] [CrossRef]
  37. Cordero, A.; Torregrosa, J.R. Variants of Newton’s method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698. [Google Scholar] [CrossRef]
Figure 1. Index CI for different sizes of the system.
Table 1. Some particular cases of (1).

φ(x) | Second step corresponding to φ(x)
Case-1: φ(x) = 2 | x_{n+1} = z_n − (f(z_n)/f′(x_n)) [1 + 2 f(z_n)/f(x_n)]
Case-2: φ(x) = 2 + ϕ(x) − x, where ϕ(x) is any second-order iterative method | x_{n+1} = z_n − (f(z_n)/f′(x_n)) [1 + 2 f(z_n)/f(x_n) − f(x_n)/f′(x_n)]
Case-3: φ(x) = 2 + a f(x)^b, where a ∈ ℝ and b > 2, otherwise φ″(x*) will be unbounded | x_{n+1} = z_n − (f(z_n)/f′(x_n)) [1 + 2 f(z_n)/f(x_n)]
Case-4: φ(x) = 2a/(a + b f(x)), where a ≠ 0 and b ∈ ℝ | x_{n+1} = z_n − (f(z_n)/f′(x_n)) [1 + 2 f(z_n)/f(x_n) − (2b/a) f(x_n)]
Case-5: φ(x) = 1 + (1 + a f(x))/(1 + b f(x)), where a, b ∈ ℝ | x_{n+1} = z_n − (f(z_n)/f′(x_n)) [1 + 2 f(z_n)/f(x_n) + (a − b) f(x_n)]
Table 2. Efficiency index for different schemes.

Method | No. F | No. F′ | No. [·,·;F] | Functional evaluations | I
Scase-1 | 2 | 1 | 1 | 2m² + m | 4^{1/(2m²+m)}
Scase-2 | 2 | 1 | 1 | 2m² + m | 4^{1/(2m²+m)}
Scase-3 | 2 | 1 | 1 | 2m² + m | 4^{1/(2m²+m)}
Scase-4 | 2 | 1 | 1 | 2m² + m | 4^{1/(2m²+m)}
JM | 1 | 2 | 0 | 2m² + m | 4^{1/(2m²+m)}
HM | 1 | 2 | 0 | 2m² + m | 4^{1/(2m²+m)}
JAM | 1 | 2 | 0 | 2m² + m | 4^{1/(2m²+m)}
Table 3. Functional evaluations and products-quotients of the methods. NFE, number of functional evaluations; NLS1 and NLS2, number of linear systems with coefficient matrix F′(x⁽ⁿ⁾) and with other coefficient matrices, respectively; M × V and V × V, number of matrix-vector and vector-vector products.

Method | NFE | NLS1 | NLS2 | [·,·;F] | M × V | V × V | CI
Scase-1 | 2m² + m | 3 | 0 | 1 | 1 | 3 | 4^{1/((1/3)m³ + 7m² + (11/3)m)}
Scase-2 | 2m² + m | 3 | 0 | 1 | 1 | 2 | 4^{1/((1/3)m³ + 7m² + (14/3)m)}
Scase-3 | 2m² + m | 3 | 0 | 1 | 5 | 4 | 4^{1/((1/3)m³ + 11m² + (14/3)m)}
Scase-4 | 2m² + m | 7 | 0 | 1 | 1 | 4 | 4^{1/((1/3)m³ + 11m² + (14/3)m)}
JM | 2m² + m | 1 | 1 | 0 | 1 | 0 | 4^{1/((2/3)m³ + 5m² + (1/3)m)}
HM | 2m² + m | 2 | 3 | 0 | 2 | 0 | 4^{1/((2/3)m³ + 9m² + (1/3)m)}
JAM | 2m² + m | 3 | 1 | 0 | 1 | 0 | 4^{1/((2/3)m³ + 7m² + (1/3)m)}
Table 4. Convergence behavior of the different fourth-order optimal methods for f₂(x).

Cases | n | x_n | |f(x_n)| | |x_{n+1} − x_n| | |x_{n+1} − x_n|/|x_n − x_{n−1}|⁴ | η
KA | 1 | 4.06831779454242 + 0.32664091245691i | 1.3(0) | 1.2(−1) | |
KA | 2 | 3.94566403244595 + 0.31195820042558i | 5.1(−2) | 5.1(−3) | 2.8685 | 3.7507
KA | 3 | 3.94854247724924 + 0.31612357071166i | 3.2(−7) | 3.7(−8) | 3.7507 |
SKV | 1 | 3.99435611429986 + 0.36507812608361i | 7.5(−1) | 6.7(−2) | |
SKV | 2 | 3.94874528304345 + 0.31606965860747i | 2.1(−3) | 2.1(−4) | 3.7147 | 3.9136
SKV | 3 | 3.94854244556205 + 0.31612357089705i | 3.4(−13) | 3.3(−14) | 3.9136 |
CM | 1 | 3.98466616205972 + 0.38743296774392i | 9.2(−1) | 8.0(−2) | |
CM | 2 | 3.94878185252247 + 0.31634115609934i | 3.3(−3) | 3.2(−4) | 3.9938 | 3.9005
CM | 3 | 3.94854244556214 + 0.31612357089689i | 1.5(−12) | 1.5(−13) | 3.9005 |
KM | 1 | 3.97985638724627 + 0.36440676727915i | 6.4(−1) | 5.7(−2) | |
KM | 2 | 3.94863621913203 + 0.31615261583746i | 9.9(−4) | 9.8(−5) | 3.8383 | 3.9314
KM | 3 | 3.94854244556204 + 0.31612357089702i | 1.3(−14) | 1.3(−15) | 3.9314 |
CHMT | 1 | 4.00173953283203 + 0.36617288922003i | 8.2(−1) | 7.3(−2) | |
CHMT | 2 | 3.94882489377247 + 0.31598753843337i | 3.2(−3) | 3.1(−4) | 3.6593 | 3.9027
CHMT | 3 | 3.94854244556219 + 0.31612357089712i | 1.8(−12) | 1.8(−13) | 3.9027 |
OM1 | 1 | 4.01399307003023 + 0.35890761473628i | 8.8(−1) | 7.8(−2) | |
OM1 | 2 | 3.94881862583921 + 0.31570149393965i | 5.1(−3) | 5.0(−4) | 3.4856 | 3.8875
OM1 | 3 | 3.94854244556209 + 0.31612357089548i | 1.6(−11) | 1.5(−12) | 3.8875 |
OM2 | 1 | 4.12288139593880 + 0.38749751766959i | 2.4(0) | 2.2(−1) | |
OM2 | 2 | 3.90767119668767 + 0.32432746535133i | 4.2(−1) | 4.2(−2) | 2.4576 | 3.8283
OM2 | 3 | 3.94858535174257 + 0.31617458259283i | 6.7(−4) | 6.7(−5) | 3.8283 |
OM3 | 1 | 3.96901080029588 + 0.32053604782876i | 2.2(−1) | 2.1(−2) | |
OM3 | 2 | 3.94854330300287 + 0.31612265035071i | 1.3(−5) | 1.3(−6) | 3.7580 | 4.0300
OM3 | 3 | 3.94854244556205 + 0.31612357089702i | 1.2(−22) | 1.2(−23) | 4.0300 |
OM4 | 1 | 4.01399307003023 + 0.35890761473628i | 8.8(−1) | 7.8(−2) | |
OM4 | 2 | 3.94881862583921 + 0.31570149393965i | 5.1(−3) | 5.0(−4) | 3.4856 | 3.8875
OM4 | 3 | 3.94854244556209 + 0.31612357089548i | 1.6(−11) | 1.5(−12) | 3.8875 |
Table 5. Convergence behavior of the different fourth-order optimal methods for f₃(x).

Cases | n | x_n | |f(x_n)| | |x_{n+1} − x_n| | |x_{n+1} − x_n|/|x_n − x_{n−1}|⁴ | η
KA | 1 | −0.25417804552034148536 | 4.7(−2) | 5.5(−2) | |
KA | 2 | −0.30909163590606450952 | 1.4(−6) | 1.6(−6) | 3.3312 | 3.9907
KA | 3 | −0.30909327154179495274 | 1.2(−24) | 1.4(−24) | 3.9907 |
SKV | 1 | −1.9228013411768680913 | 9.7(−1) | 1.9(+1) | |
SKV | 2 | 16.956846304184010120 | 1.8(+1) | 5.7(+7) | 7.9958 | 4.7137
SKV | 3 | 5.6776688150342258615(+7) | 5.7(+7) | 2.0(+38) | 4.7137 |
CM | 1 | −0.27247767593361877437 | 3.1(−2) | 3.1(−2) | |
CM | 2 | −0.30909316991236159145 | 8.6(−8) | 1.0(−7) | 3.6059 | 3.9979
CM | 3 | −0.30909327154179495274 | 5.3(−30) | 6.2(−30) | 3.9979 |
KM | 1 | −0.27395213337785257000 | 3.0(−2) | 3.5(−2) | |
KM | 2 | −0.30909318584556631521 | 7.3(−8) | 8.6(−8) | 3.5995 | 3.9975
KM | 3 | −0.30909327154179495274 | 2.7(−30) | 3.1(−30) | 3.9975 |
CHMT | 1 | −0.26958019693473408634 | 3.4(−2) | 4.0(−2) | |
CHMT | 2 | −0.30909308449385539179 | 1.6(−7) | 1.9(−7) | 3.5336 | 3.9964
CHMT | 3 | −0.30909327154179495274 | 8.3(−29) | 9.8(−29) | 3.9964 |
OM1 | 1 | −0.26631675279431609571 | 3.7(−2) | 4.3(−2) | |
OM1 | 2 | −0.30909294799418381577 | 2.7(−7) | 3.2(−7) | 3.4807 | 3.9951
OM1 | 3 | −0.30909327154179495274 | 9.5(−28) | 1.1(−27) | 3.9951 |
OM2 | 1 | −0.28888414806339401078 | 1.7(−2) | 2.0(−2) | |
OM2 | 2 | −0.30909325486518945619 | 1.4(−8) | 1.7(−8) | 3.3709 | 3.9983
OM2 | 3 | −0.30909327154179495274 | 6.7(−33) | 7.9(−33) | 3.9983 |
OM3 | 1 | −0.15994222854937208373 | 1.3(−1) | 1.5(−1) | |
OM3 | 2 | −0.30899020010081667265 | 8.7(−5) | 1.0(−4) | 3.5463 | 3.9729
OM3 | 3 | −0.30909327154179492402 | 2.4(−17) | 2.9(−17) | 3.9729 |
OM4 | 1 | −0.26631675279431609571 | 3.7(−2) | 4.3(−2) | |
OM4 | 2 | −0.30909294799418381577 | 2.7(−7) | 3.2(−7) | 3.4807 | 3.9951
OM4 | 3 | −0.30909327154179495274 | 9.5(−28) | 1.1(−27) | 3.9951 |
Table 6. Convergence behavior of the different fourth-order optimal methods for f₄(x).

Cases | n | x_n | |f(x_n)| | |x_{n+1} − x_n| | |x_{n+1} − x_n|/|x_n − x_{n−1}|⁴ | η
KA | 1 | 0.75739764681484808246 | 1.1(−4) | 1.4(−6) | |
KA | 2 | 0.75739624625375387959 | 1.0(−17) | 1.3(−19) | 3.9858 | 4.0000
KA | 3 | 0.75739624625375387946 | 7.9(−70) | 9.9(−72) | 4.0000 |
SKV | 1 | 0.75739672982865568692 | 3.9(−5) | 4.8(−7) | |
SKV | 2 | 0.75739624625375387946 | 4.8(−20) | 6.0(−22) | 3.9955 | 4.0000
SKV | 3 | 0.75739624625375387946 | 1.4(−79) | 1.4(−81) | 4.0000 |
CM | 1 | 0.75739660630557206625 | 2.9(−5) | 3.6(−7) | |
CM | 2 | 0.75739624625375387946 | 1.0(−20) | 1.3(−22) | 4.0011 | 4.0000
CM | 3 | 0.75739624625375387946 | 1.8(−82) | 2.2(−84) | 4.0000 |
KM | 1 | 0.75739659212374876149 | 2.8(−5) | 3.5(−7) | |
KM | 2 | 0.75739624625375387946 | 8.9(−21) | 1.1(−22) | 3.9966 | 4.0000
KM | 3 | 0.75739624625375387946 | 9.4(−83) | 1.2(−84) | 4.0000 |
CHMT | 1 | 0.75739676303951339051 | 4.1(−5) | 5.2(−7) | |
CHMT | 2 | 0.75739624625375387946 | 6.7(−20) | 8.4(−22) | 3.9950 | 4.0000
CHMT | 3 | 0.75739624625375387946 | 4.6(−79) | 5.8(−81) | 4.0000 |
OM1 | 1 | 0.75739692085844618425 | 5.4(−5) | 6.7(−7) | |
OM1 | 2 | 0.75739624625375387946 | 2.6(−19) | 3.3(−21) | 3.9917 | 4.0000
OM1 | 3 | 0.75739624625375387946 | 1.4(−76) | 1.8(−78) | 4.0000 |
OM2 | 1 | 0.75739683227987612611 | 4.7(−5) | 5.9(−7) | |
OM2 | 2 | 0.75739624625375387946 | 1.5(−19) | 1.9(−21) | 3.9751 | 4.0000
OM2 | 3 | 0.75739624625375387946 | 1.5(−77) | 1.9(−79) | 4.0000 |
OM3 | 1 | 0.75739622120709837787 | 2.0(−6) | 2.5(−8) | |
OM3 | 2 | 0.75739624625375387946 | 2.2(−27) | 2.7(−29) | 4.1779 | 4.0000
OM3 | 3 | 0.75739624625375387946 | 3.2(−111) | 4.0(−113) | 4.0000 |
OM4 | 1 | 0.75739692085844618425 | 5.4(−5) | 6.7(−7) | |
OM4 | 2 | 0.75739624625375387946 | 2.6(−19) | 3.3(−21) | 3.9917 | 4.0000
OM4 | 3 | 0.75739624625375387946 | 1.4(−76) | 1.8(−78) | 4.0000 |
Table 7. Convergence behavior of the different fourth-order methods for Example 4.

Cases | n | ‖F(x⁽ⁿ⁺¹⁾)‖ | ‖x⁽ⁿ⁺¹⁾ − x⁽ⁿ⁾‖ | ρ
Scase-1 (a = 1/200, b = 3) | 1 | 7.1(−9) | 1.5(−9) |
 | 2 | 5.5(−39) | 1.2(−39) |
 | 3 | 2.5(−159) | 5.3(−160) | 3.996
Scase-2 (a = 1/200, b = 3) | 1 | 9.1(−9) | 2.0(−9) |
 | 2 | 1.9(−38) | 4.0(−39) |
 | 3 | 3.6(−157) | 7.7(−158) | 3.999
Scase-3 (a = 1/200, b = 3) | 1 | 4.5(−7) | 9.6(−8) |
 | 2 | 1.6(−31) | 3.4(−32) |
 | 3 | 1.9(−129) | 4.1(−130) | 4.005
Scase-4 (a = 1/200, b = 3) | 1 | 9.7(−9) | 2.1(−9) |
 | 2 | 2.4(−38) | 5.2(−39) |
 | 3 | 1.0(−156) | 2.2(−157) | 3.999
JM | 1 | 5.9(−9) | 1.3(−9) |
 | 2 | 2.0(−39) | 4.3(−40) |
 | 3 | 2.9(−161) | 6.2(−162) | 3.999
HM | 1 | 7.1(−9) | 1.5(−9) |
 | 2 | 5.4(−39) | 1.2(−39) |
 | 3 | 1.8(−159) | 3.8(−160) | 3.999
JAM | 1 | 7.1(−9) | 1.5(−9) |
 | 2 | 5.4(−39) | 1.2(−39) |
 | 3 | 1.8(−159) | 3.8(−160) | 3.999
Table 8. Convergence behavior of the different fourth-order methods for Example 5.

Cases | n | ‖F(x⁽ⁿ⁺¹⁾)‖ | ‖x⁽ⁿ⁺¹⁾ − x⁽ⁿ⁾‖ | ρ
Scase-1 (a = 1/200, b = 3) | 1 | 1.8(−5) | 3.3(−5) |
 | 2 | 1.2(−22) | 2.4(−22) |
 | 3 | 2.9(−91) | 6.2(−91) | 4.000
Scase-2 (a = 1/200, b = 3) | 1 | 3.4(−5) | 4.0(−5) |
 | 2 | 2.5(−22) | 3.9(−22) |
 | 3 | 2.0(−90) | 3.8(−90) | 3.998
Scase-3 (a = 1/200, b = 3) | 1 | 1.2(−5) | 2.1(−5) |
 | 2 | 1.5(−23) | 3.0(−23) |
 | 3 | 7.0(−95) | 1.5(−94) | 3.998
Scase-4 (a = 1/200, b = 3) | 1 | 3.2(−5) | 4.0(−5) |
 | 2 | 2.5(−22) | 4.0(−22) |
 | 3 | 2.2(−90) | 4.3(−90) | 3.998
JM | 1 | 2.8(−5) | 4.0(−5) |
 | 2 | 3.5(−22) | 6.2(−22) |
 | 3 | 1.8(−89) | 3.7(−89) | 3.998
HM | 1 | 2.7(−5) | 3.9(−5) |
 | 2 | 2.8(−22) | 5.1(−22) |
 | 3 | 7.3(−90) | 1.5(−89) | 3.998
JAM | 1 | 2.7(−5) | 3.9(−5) |
 | 2 | 2.8(−22) | 5.1(−22) |
 | 3 | 7.3(−90) | 1.5(−89) | 3.998
Table 9. Values of the abscissas t_j and weights w_j.
j  t_j  w_j
1 0.01304673574141413996101799 0.03333567215434406879678440
2 0.06746831665550774463395165 0.07472567457529029657288816
3 0.16029521585048779688283632 0.10954318125799102199776746
4 0.28330230293537640460036703 0.13463335965499817754561346
5 0.42556283050918439455758700 0.14776211235737643508694649
6 0.57443716949081560544241300 0.14776211235737643508694649
7 0.71669769706462359539963297 0.13463335965499817754561346
8 0.83970478414951220311716368 0.10954318125799102199776746
9 0.93253168334449225536604834 0.07472567457529029657288816
10 0.98695326425858586003898201 0.03333567215434406879678440
