
# An Optimal Eighth-Order Derivative-Free Family of Potra-Pták’s Method

University Institute of Engineering and Technology, Panjab University, Chandigarh 160-014, India
\* Author to whom correspondence should be addressed.
Academic Editor: Alicia Cordero
Algorithms 2015, 8(2), 309-320; https://0-doi-org.brum.beds.ac.uk/10.3390/a8020309
Received: 25 April 2015 / Accepted: 8 June 2015 / Published: 15 June 2015
(This article belongs to the Special Issue Numerical Algorithms for Solving Nonlinear Equations and Systems)

## Abstract

In this paper, we present a new three-step derivative-free family based on Potra-Pták's method for solving nonlinear equations numerically. In terms of computational cost, each member of the proposed family requires only four functional evaluations per full iteration to achieve optimal eighth-order convergence. Furthermore, computational results demonstrate that the proposed methods are highly efficient compared with many well-known methods.

## 1. Introduction

One of the earliest and most basic problems of numerical analysis is to find, efficiently and accurately, the simple roots of a nonlinear equation of the form

$$f(x) = 0, \tag{1}$$

where $f : D \subseteq \mathbb{R} \to \mathbb{R}$ is a nonlinear continuous function. Analytical methods for solving such equations are almost non-existent, and therefore it is only possible to obtain approximate solutions by relying on numerical methods based on iterative procedures (see, e.g., [1]). Newton's method [5] is one of the most famous and basic methods for solving such equations; it is given by

$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad n = 0, 1, 2, \ldots \tag{2}$$
It converges quadratically for simple roots and linearly for multiple roots.
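As a quick illustration (not taken from the paper), the Newton iteration Equation (2) can be sketched in a few lines; the test function $f(x) = x^2 - 2$, starting guess and tolerance below are placeholders chosen for the example.

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:       # stop once the residual is small enough
            break
        x -= fx / fprime(x)
    return x

# Example: the simple root sqrt(2) of f(x) = x^2 - 2
root = newton(lambda x: x**2 - 2, lambda x: 2 * x, 1.5)
```

Each iteration costs one evaluation of $f$ and one of $f'$, which is the source of the derivative-related cost discussed below.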
Multipoint iterative methods for solving nonlinear equations are of great practical importance, since they overcome the limitations of one-point methods regarding convergence order and computational efficiency. According to the Kung-Traub conjecture [2], the order of convergence of any multipoint method without memory requiring $n$ function evaluations per iteration cannot exceed the bound $2^{n-1}$, called the optimal order. Thus, the optimal order for a method with three functional evaluations per step would be four.
As the order of an iterative method increases, so does the number of functional evaluations per step. Commonly, the efficiency of an iterative method is measured by the efficiency index, defined by Ostrowski [3] as $p^{1/d}$, where $p$ is the order of convergence and $d$ is the number of functional evaluations per step. To improve the order and efficiency of Newton's method Equation (2), Potra and Pták [4] proposed the following third-order method:

$$\begin{cases} y_n = x_n - \dfrac{f(x_n)}{f'(x_n)}, \\[6pt] x_{n+1} = x_n - \dfrac{f(x_n) + f(y_n)}{f'(x_n)}, \end{cases} \qquad n = 0, 1, 2, \ldots \tag{3}$$

It satisfies the following error equation:

$$e_{n+1} = 2c_2^2 e_n^3 + \left(-9c_2^3 + 7c_2 c_3\right) e_n^4 + O(e_n^5).$$
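For concreteness, the two-step Potra-Pták iteration Equation (3) can be sketched as follows; this is an illustrative implementation, and the test function is not from the paper. Note that the derivative $f'(x_n)$ is still required at this stage.

```python
def potra_ptak(f, fprime, x0, tol=1e-12, max_iter=50):
    """Third-order Potra-Ptak method: one derivative and two function
    evaluations per full iteration."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        y = x - fx / fprime(x)            # Newton half-step
        x = x - (fx + f(y)) / fprime(x)   # Potra-Ptak correction
    return x

root = potra_ptak(lambda x: x**2 - 2, lambda x: 2 * x, 1.5)
```

With three evaluations per step the efficiency index is $3^{1/3} \approx 1.442$, slightly above Newton's $2^{1/2} \approx 1.414$.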
However, there are many practical situations in which the calculation of derivatives is expensive or time-consuming. Therefore, the idea of removing derivatives from the iteration process is very significant.
In particular, when the first-order derivative $f'(x_n)$ in Newton's method is replaced by the forward-difference approximation $\frac{f(x_n + f(x_n)) - f(x_n)}{f(x_n)}$, we get the well-known Steffensen method [6]:

$$x_{n+1} = x_n - \frac{f(x_n)}{f[x_n, w_n]},$$

where $w_n = x_n + f(x_n)$ and $f[\cdot, \cdot]$ denotes the first-order divided difference. As a matter of fact, both methods maintain quadratic convergence using only two functional evaluations per full step, but Steffensen's method is derivative-free, which is very useful in optimization problems. Recently, many higher-order derivative-free methods have been built on Steffensen's method (cf. [7,8] and the references cited therein). Soleymani et al. [9] presented the following fourth-order optimal Steffensen-type family:
$$\begin{cases} y_n = x_n - \dfrac{f(x_n)}{f[x_n, w_n]}, \quad w_n = x_n + \beta f(x_n), \\[6pt] x_{n+1} = x_n - \dfrac{f(x_n) + f(y_n)}{f[x_n, w_n]} - \left( \dfrac{2f(x_n) + a f(y_n)}{f[x_n, w_n]} \left( \dfrac{f(y_n)}{f(x_n)} \right)^2 \right) \left( 1 - \dfrac{\beta f[x_n, w_n]}{2 + 2\beta f[x_n, w_n]} \right), \quad a \in \mathbb{R}, \end{cases}$$

where $\beta \in \mathbb{R}\setminus\{0\}$. The construction of this family is based on Potra-Pták's method. However, to date no higher-order derivative-free modification of Potra-Pták's method is available.
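The basic Steffensen update described above can be sketched as follows (an illustrative implementation; the test function and tolerance are placeholders, not from the paper):

```python
def steffensen(f, x0, tol=1e-12, max_iter=100):
    """Steffensen's method: Newton with f'(x_n) replaced by the divided
    difference f[x_n, w_n], where w_n = x_n + f(x_n)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        w = x + fx
        slope = (f(w) - fx) / (w - x)   # f[x_n, w_n]
        x -= fx / slope
    return x

root = steffensen(lambda x: x**2 - 2, 1.5)
```

Only $f(x_n)$ and $f(w_n)$ are evaluated per step, so no derivative code is ever needed.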
With this aim, we propose a new derivative-free modification of Potra-Pták's method having optimal eighth-order convergence. The construction of the proposed class is based on the weight-function approach. It is shown by way of illustrations that the proposed methods are very useful in high-precision computations.

## 2. Development of Derivative-Free Methods and Convergence Analysis

In this section, we intend to develop a new derivative-free class of three-point methods having optimal eighth-order convergence.
Thus, we consider the following iteration scheme:

$$\begin{cases} y_n = x_n - \dfrac{f(x_n)}{f'(x_n)}, \\[6pt] z_n = y_n - \dfrac{f(y_n)}{f'(x_n)}, \\[6pt] x_{n+1} = z_n - \dfrac{f(z_n)}{f'(z_n)}, \end{cases} \tag{5}$$

where the first two steps of the well-known Potra-Pták method are composed with a Newton step.
It satisfies the following error equation:

$$e_{n+1} = 4c_2^5 e_n^6 + \left(-36c_2^6 + 28c_2^4 c_3\right) e_n^7 + O(e_n^8), \tag{6}$$

where $e_n = x_n - \alpha$ and $c_k = \frac{1}{k!} \frac{f^{(k)}(\alpha)}{f'(\alpha)}$, $k \ge 2$.
According to the Kung-Traub conjecture, the above scheme Equation (5) is not optimal, because it has sixth-order convergence while requiring five functional evaluations per full iteration. Following the Cordero-Torregrosa conjecture, we replace the derivatives in all three steps by suitable approximations that use the available data. Therefore, we approximate

$$f'(x_n) \approx f[x_n, w_n], \qquad f'(z_n) \approx f[x_n, w_n], \tag{7}$$

where $w_n = x_n + \beta f(x_n)^3$, $\beta \in \mathbb{R}\setminus\{0\}$, and $f[x, y] = \frac{f(x) - f(y)}{x - y}$ denotes the first-order divided difference.
Substituting these approximations into Equation (5), we get a derivative-free three-point iterative method:

$$\begin{cases} y_n = x_n - \dfrac{f(x_n)}{f[x_n, w_n]}, \\[6pt] z_n = y_n - \dfrac{f(y_n)}{f[x_n, w_n]}, \\[6pt] x_{n+1} = z_n - \dfrac{f(z_n)}{f[x_n, w_n]}. \end{cases} \tag{8}$$

It satisfies the following error equation:

$$e_{n+1} = 4c_2^3 e_n^4 + \left(-26c_2^4 + 20c_2^2 c_3\right) e_n^5 + O(e_n^6).$$
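The scheme Equation (8) reuses a single divided difference in all three steps, which is easy to see in code; the sketch below is illustrative (test function, parameter value and tolerance are placeholders, not from the paper).

```python
def three_step_df(f, x0, beta=1.0, tol=1e-12, max_iter=50):
    """Non-optimal fourth-order scheme, Equation (8): all three steps
    share the divided difference f[x_n, w_n], w_n = x_n + beta*f(x_n)**3."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        w = x + beta * fx**3
        if w == x:                  # step below machine precision
            break
        d = (f(w) - fx) / (w - x)   # f[x_n, w_n], computed once
        y = x - fx / d
        z = y - f(y) / d
        x = z - f(z) / d
    return x

root = three_step_df(lambda x: x**2 - 2, 1.5)
```

Four evaluations ($f(x_n)$, $f(w_n)$, $f(y_n)$, $f(z_n)$) but only fourth order, which is what the weight functions below are designed to repair.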
Again, the family of methods Equation (8) is not optimal according to the Kung-Traub conjecture. To further improve its order of convergence, we now make use of the weight-function approach and consider

$$\begin{cases} y_n = x_n - \dfrac{f(x_n)}{f[x_n, w_n]}, \quad w_n = x_n + \beta f(x_n)^3, \\[6pt] z_n = x_n - \left( \dfrac{f(x_n) + f(y_n)}{f[x_n, w_n]} \right) G(\tau), \quad \tau = \dfrac{f(y_n)}{f(x_n)}, \\[6pt] x_{n+1} = z_n - \dfrac{f(z_n)}{f[x_n, w_n]} \, H(\tau, \phi), \quad \phi = \dfrac{f(z_n)}{f(y_n)}, \end{cases} \tag{9}$$

where $\beta \in \mathbb{R}\setminus\{0\}$ and $G$ and $H$ are weight functions of one and two variables, respectively. Theorem 1 establishes the conditions on the weight functions under which the convergence order of the family Equation (9) reaches the optimal level eight.

## 3. Convergence Analysis

Theorem 1. Assume that the function $f : D \subseteq \mathbb{R} \to \mathbb{R}$ is sufficiently differentiable and has a simple zero $\alpha \in D$. If an initial guess $x_0$ is sufficiently close to $\alpha$, then the iterative scheme defined by Equation (9) has optimal convergence of order eight when

$$\begin{cases} G(0) = 1, \ G'(0) = 0, \ G''(0) = 4 \ \text{and} \ |G^{(3)}(0)| < \infty, \quad \beta \in \mathbb{R}\setminus\{0\}, \\[6pt] H_{00} = 1, \ H_{10} = 2, \ H_{01} = 1, \ H_{20} = \dfrac{G^{(3)}(0)}{3} + 6, \ H_{11} = 4, \ H_{30} = 3G^{(3)}(0) + \dfrac{G^{(4)}(0)}{4}, \end{cases} \tag{10}$$

where $H_{ij} = \frac{1}{i! \, j!} \left. \frac{\partial^{i+j} H(u, v)}{\partial u^i \, \partial v^j} \right|_{(0,0)}$, $i = 0, 1, 2, 3$ and $j = 0, 1, 2, 3$.
It satisfies the following error equation:

$$\begin{aligned} e_{n+1} = {} & \frac{1}{432} c_2 \left( \left(-18 + G^{(3)}(0)\right) c_2^2 + 6c_3 \right) \Big[ \left( H_{02} \left(-18 + G^{(3)}(0)\right)^2 - 3 \left( 648 + 2H_{21} \left(-18 + G^{(3)}(0)\right) - 28G^{(3)}(0) + 3G^{(4)}(0) \right) \right) c_2^4 \\ & + 36 \left(-2 + G^{(2)}(0)\right) c_3^2 - 12c_2^2 \left( 6\beta f'(\alpha)^3 + \left(-102 + 18H_{02} + 3H_{21} + G^{(3)}(0) - G^{(2)}(0) G^{(3)}(0)\right) c_3 \right) - 72c_2 c_4 \Big] e_n^8 + O(e_n^9), \end{aligned}$$

where $e_n$ and $c_k$ are as defined in Equation (6).
Proof. Using Taylor series and symbolic computation, we can determine the asymptotic error constant of the three-step derivative-free class of methods Equation (9). Taking into account that $f(\alpha) = 0$, we expand $f(x_n)$ about $x_n = \alpha$ and get

$$f(x_n) = f'(\alpha) \left( e_n + c_2 e_n^2 + c_3 e_n^3 + c_4 e_n^4 + c_5 e_n^5 + c_6 e_n^6 + c_7 e_n^7 + c_8 e_n^8 \right) + O(e_n^9). \tag{11}$$

Using $w_n = x_n + \beta f(x_n)^3$, one gets

$$f[x_n, w_n] = f'(\alpha) + 2f'(\alpha) c_2 e_n + 3f'(\alpha) c_3 e_n^2 + f'(\alpha) \left( \beta f'(\alpha)^3 c_2 + 4c_4 \right) e_n^3 + f'(\alpha) \left( 3\beta f'(\alpha)^3 c_2^2 + 3\beta f'(\alpha)^3 c_3 + 5c_5 \right) e_n^4 + O(e_n^5). \tag{12}$$
From Equations (11) and (12), we have

$$y_n - \alpha = x_n - \alpha - \frac{f(x_n)}{f[x_n, w_n]} = c_2 e_n^2 + \left(-2c_2^2 + 2c_3\right) e_n^3 + \left( \beta f'(\alpha)^3 c_2 + 4c_2^3 - 7c_2 c_3 + 3c_4 \right) e_n^4 + O(e_n^5). \tag{13}$$

Expanding $f\!\left(x_n - \frac{f(x_n)}{f[x_n, w_n]}\right)$ about $x_n = \alpha$, we have

$$f(y_n) = f'(\alpha) c_2 e_n^2 + f'(\alpha) \left(-2c_2^2 + 2c_3\right) e_n^3 + f'(\alpha) \left( 5c_2^3 + c_2 \left( \beta f'(\alpha)^3 - 7c_3 \right) + 3c_4 \right) e_n^4 + O(e_n^5), \tag{14}$$

and

$$\tau = \frac{f(y_n)}{f(x_n)} = c_2 e_n + \left(-3c_2^2 + 2c_3\right) e_n^2 + \left( 3c_4 - 10c_2 c_3 + 8c_2^3 + \beta f'(\alpha)^3 c_2 \right) e_n^3 + O(e_n^4). \tag{15}$$
In the same vein, by considering $G(0) = 1$, $G'(0) = 0$, $G''(0) = 4$ and $|G^{(3)}(0)| < \infty$, we obtain

$$z_n - \alpha = x_n - \alpha - \left( \frac{f(x_n) + f(y_n)}{f[x_n, w_n]} \right) G(\tau) = \left( \left( 3 - \frac{G^{(3)}(0)}{6} \right) c_2^3 - c_2 c_3 \right) e_n^4 + O(e_n^5). \tag{16}$$

Moreover, we find

$$f(z_n) = f'(\alpha) \left[ \left( 3 - \frac{G^{(3)}(0)}{6} \right) c_2^3 - c_2 c_3 \right] e_n^4 + f'(\alpha) \left[ \left( -16 + \frac{3G^{(3)}(0)}{2} - \frac{G^{(4)}(0)}{24} \right) c_2^4 - 2c_3^2 - c_2^2 \left( \beta f'(\alpha)^3 + \left(-20 + G^{(3)}(0)\right) c_3 \right) - 2c_2 c_4 \right] e_n^5 + O(e_n^6), \tag{17}$$

and

$$\phi = \frac{f(z_n)}{f(y_n)} = \frac{1}{6} \left( 18c_2^2 - G^{(3)}(0) c_2^2 - 6c_3 \right) e_n^2 + \frac{1}{24} \left( -24\beta f'(\alpha)^3 c_2 - 240c_2^3 + 28G^{(3)}(0) c_2^3 - G^{(4)}(0) c_2^3 + 288c_2 c_3 - 16G^{(3)}(0) c_2 c_3 - 48c_4 \right) e_n^3 + O(e_n^4). \tag{18}$$
It is clear from Equations (15) and (18) that $\tau$ and $\phi$ are of orders $e_n$ and $e_n^2$, respectively. Therefore, we can expand the weight function $H(\tau, \phi)$ in a neighborhood of the origin by a Taylor series up to third-order terms as follows:

$$H(\tau, \phi) = H_{00} + H_{10}\tau + H_{01}\phi + \frac{1}{2!} \left( H_{20}\tau^2 + 2H_{11}\tau\phi + H_{02}\phi^2 \right) + \frac{1}{3!} \left( H_{30}\tau^3 + 3H_{21}\tau^2\phi + 3H_{12}\tau\phi^2 + H_{03}\phi^3 \right). \tag{19}$$
Using Equations (17) and (19) in the last step of Equation (9), we get

$$\begin{aligned} e_{n+1} = {} & \frac{1}{6} (-1 + H_{00}) c_2 \left( \left(-18 + G^{(3)}(0)\right) c_2^2 + 6c_3 \right) e_n^4 \\ & + \bigg( \frac{1}{24} \left( -384 + 4H_{10} \left(-18 + G^{(3)}(0)\right) + 36G^{(3)}(0) - G^{(4)}(0) + H_{00} \left( 528 - 44G^{(3)}(0) + G^{(4)}(0) \right) \right) c_2^4 + 2(-1 + H_{00}) c_3^2 \\ & + c_2^2 \left( \beta f'(\alpha)^3 (-1 + H_{00}) + \left( 20 + H_{10} + H_{00} \left(-22 + G^{(3)}(0)\right) - G^{(3)}(0) \right) c_3 \right) + 2(-1 + H_{00}) c_2 c_4 \bigg) e_n^5 + \cdots + O(e_n^9). \end{aligned} \tag{20}$$
This implies that the derivative-free class of methods Equation (9) arrives at optimal eighth-order convergence by choosing the weight functions as follows:

$$\begin{cases} G(0) = 1, \ G'(0) = 0, \ G''(0) = 4 \ \text{and} \ |G^{(3)}(0)| < \infty, \quad \beta \in \mathbb{R}\setminus\{0\}, \\[6pt] H_{00} = 1, \ H_{10} = 2, \ H_{01} = 1, \ H_{20} = \dfrac{G^{(3)}(0)}{3} + 6, \ H_{11} = 4, \ H_{30} = 3G^{(3)}(0) + \dfrac{G^{(4)}(0)}{4}. \end{cases} \tag{21}$$

Finally, using Equation (21) in Equation (20), we get the following error equation:

$$\begin{aligned} e_{n+1} = {} & \frac{1}{432} c_2 \left( \left(-18 + G^{(3)}(0)\right) c_2^2 + 6c_3 \right) \Big[ \left( H_{02} \left(-18 + G^{(3)}(0)\right)^2 - 3 \left( 648 + 2H_{21} \left(-18 + G^{(3)}(0)\right) - 28G^{(3)}(0) + 3G^{(4)}(0) \right) \right) c_2^4 \\ & + 36 \left(-2 + G^{(2)}(0)\right) c_3^2 - 12c_2^2 \left( 6\beta f'(\alpha)^3 + \left(-102 + 18H_{02} + 3H_{21} + G^{(3)}(0) - G^{(2)}(0) G^{(3)}(0)\right) c_3 \right) - 72c_2 c_4 \Big] e_n^8 + O(e_n^9). \end{aligned} \tag{22}$$

This concludes the proof. □
Remark 1. It is straightforward to see that all the methods of the proposed family Equation (9) require four functional evaluations, viz. $f(x_n)$, $f(w_n)$, $f(y_n)$, $f(z_n)$, per full iteration. Therefore, these methods are optimal in the sense of the Kung-Traub conjecture and have efficiency index $E = 8^{1/4} \approx 1.682$. Furthermore, by choosing appropriate weight functions in the family Equation (9), one can derive several new optimal derivative-free eighth-order families.
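The efficiency indices quoted in the text can be checked directly (an illustrative computation; the labels are ours):

```python
# Efficiency index p**(1/d): order p achieved with d function evaluations
methods = {
    "Newton (p=2, d=2)": 2 ** (1 / 2),
    "Potra-Ptak (p=3, d=3)": 3 ** (1 / 3),
    "optimal eighth order (p=8, d=4)": 8 ** (1 / 4),
}
for name, e in methods.items():
    print(f"{name}: {e:.3f}")
```

This prints 1.414, 1.442 and 1.682, confirming that the optimal eighth-order family has the highest index of the three.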
Remark 2. From the application point of view, when the given problem is complicated, it becomes very difficult to evaluate derivatives. For example, the nonlinear function $h(x) = (\cot x)\, e^{x^{1/(2x^2 \cosh x)}}$ (see Figure 1) has a very complicated first derivative. Such shortcomings lead us to investigate new optimal iterative methods which are totally free from derivatives.

## 4. Special Cases

In this section, we introduce some concrete methods based on the proposed class Equation (9).

Method 1. Consider the weight functions defined by

$$G(\tau) = \frac{\gamma}{6}\tau^3 + 2\tau^2 + 1 \quad \text{and} \quad H(\tau, \phi) = \frac{\gamma}{2}\tau^3 + \left( \frac{\gamma}{6} + 3 \right)\tau^2 + 4\tau\phi + 2\tau + \phi + 1,$$

where $\tau = \frac{f(y_n)}{f(x_n)}$, $\phi = \frac{f(z_n)}{f(y_n)}$ and $\gamma$ is a free disposable parameter.
It is easily seen that these weight functions $G(\tau)$ and $H(\tau, \phi)$ satisfy all the conditions of Theorem 1. Therefore, we get a new derivative-free optimal family of eighth-order methods:

$$\begin{cases} y_n = x_n - \dfrac{f(x_n)}{f[x_n, w_n]}, \quad w_n = x_n + \beta f(x_n)^3, \\[6pt] z_n = x_n - \left( \dfrac{f(x_n) + f(y_n)}{f[x_n, w_n]} \right) \left[ \dfrac{\gamma}{6} \left( \dfrac{f(y_n)}{f(x_n)} \right)^3 + 2 \left( \dfrac{f(y_n)}{f(x_n)} \right)^2 + 1 \right], \\[6pt] x_{n+1} = z_n - \dfrac{f(z_n)}{f[x_n, w_n]} \left[ \dfrac{\gamma}{2} \left( \dfrac{f(y_n)}{f(x_n)} \right)^3 + \left( \dfrac{\gamma}{6} + 3 \right) \left( \dfrac{f(y_n)}{f(x_n)} \right)^2 + 4\,\dfrac{f(y_n)}{f(x_n)}\,\dfrac{f(z_n)}{f(y_n)} + 2\,\dfrac{f(y_n)}{f(x_n)} + \dfrac{f(z_n)}{f(y_n)} + 1 \right]. \end{cases} \tag{23}$$
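As an illustration, the family Equation (23) translates almost line by line into code. The sketch below works in double precision (the paper's experiments use multiple-precision Mathematica, so accuracy here is limited to about 16 digits); the test function, starting guess and tolerance are illustrative choices, not from the paper.

```python
def mm8_1(f, x0, beta=1.0, gamma=12.0, tol=1e-12, max_iter=20):
    """Optimal eighth-order derivative-free method, Equation (23)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        w = x + beta * fx**3
        if w == x:                      # step below machine precision
            break
        d = (f(w) - fx) / (w - x)       # f[x_n, w_n]
        y = x - fx / d
        fy = f(y)
        if abs(fy) < tol:
            return y
        t = fy / fx                     # tau
        G = gamma / 6 * t**3 + 2 * t**2 + 1
        z = x - ((fx + fy) / d) * G
        fz = f(z)
        p = fz / fy                     # phi
        H = (gamma / 2 * t**3 + (gamma / 6 + 3) * t**2
             + 4 * t * p + 2 * t + p + 1)
        x = z - (fz / d) * H
    return x

root = mm8_1(lambda x: x**2 - 2, 1.4)
```

Exactly four evaluations of $f$ per iteration are used: $f(x_n)$, $f(w_n)$, $f(y_n)$ and $f(z_n)$.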
Method 2. Now, consider the following weight functions:

$$G(\tau) = \frac{\tau \left( 1 - 12(\mu + 2)\tau \right) - 12}{\tau \left( 1 - 12\mu\tau \right) - 12} \quad \text{and} \quad H(\tau, \phi) = \frac{-24 + \left( \frac{299}{3} + 48\mu \right) \tau^3}{4 \left( -6 + 6\phi + (12 - 5\tau)\tau \right)},$$

where $\tau = \frac{f(y_n)}{f(x_n)}$, $\phi = \frac{f(z_n)}{f(y_n)}$ and $\mu$ is a free disposable parameter.
These weight functions satisfy all the conditions of Theorem 1. Therefore, we obtain another derivative-free optimal family of eighth-order methods:

$$\begin{cases} y_n = x_n - \dfrac{f(x_n)}{f[x_n, w_n]}, \quad w_n = x_n + \beta f(x_n)^3, \\[6pt] z_n = x_n - \left( \dfrac{f(x_n) + f(y_n)}{f[x_n, w_n]} \right) \left[ \dfrac{\dfrac{f(y_n)}{f(x_n)} \left( 1 - 12(\mu + 2)\dfrac{f(y_n)}{f(x_n)} \right) - 12}{\dfrac{f(y_n)}{f(x_n)} \left( 1 - 12\mu\dfrac{f(y_n)}{f(x_n)} \right) - 12} \right], \\[6pt] x_{n+1} = z_n - \dfrac{f(z_n)}{f[x_n, w_n]} \left[ \dfrac{-24 + \left( \dfrac{299}{3} + 48\mu \right) \left( \dfrac{f(y_n)}{f(x_n)} \right)^3}{4 \left( -6 + 6\dfrac{f(z_n)}{f(y_n)} + \left( 12 - 5\dfrac{f(y_n)}{f(x_n)} \right) \dfrac{f(y_n)}{f(x_n)} \right)} \right]. \end{cases} \tag{24}$$
Method 3. Consider the weight functions defined by

$$G(\tau) = \frac{6\eta - \tau + 12\eta\tau^2 + (\eta - 2)\tau^3}{6\eta - \tau} \quad \text{and} \quad H(\tau, \phi) = \frac{\tau^2 - 6\eta \left( 12 + 25\tau^2 \right)}{\tau^2 + 6\eta \left( -12 + 12\phi + (24 - 35\tau)\tau \right)},$$

where $\tau = \frac{f(y_n)}{f(x_n)}$, $\phi = \frac{f(z_n)}{f(y_n)}$ and $\eta$ is a free disposable parameter.
These weight functions also satisfy all the conditions of Theorem 1. Therefore, we get another optimal family of eighth-order methods:

$$\begin{cases} y_n = x_n - \dfrac{f(x_n)}{f[x_n, w_n]}, \quad w_n = x_n + \beta f(x_n)^3, \\[6pt] z_n = x_n - \left( \dfrac{f(x_n) + f(y_n)}{f[x_n, w_n]} \right) \left[ \dfrac{6\eta - \dfrac{f(y_n)}{f(x_n)} + 12\eta \left( \dfrac{f(y_n)}{f(x_n)} \right)^2 + (\eta - 2) \left( \dfrac{f(y_n)}{f(x_n)} \right)^3}{6\eta - \dfrac{f(y_n)}{f(x_n)}} \right], \\[6pt] x_{n+1} = z_n - \dfrac{f(z_n)}{f[x_n, w_n]} \left[ \dfrac{\left( \dfrac{f(y_n)}{f(x_n)} \right)^2 - 6\eta \left( 12 + 25 \left( \dfrac{f(y_n)}{f(x_n)} \right)^2 \right)}{\left( \dfrac{f(y_n)}{f(x_n)} \right)^2 + 6\eta \left( -12 + 12\dfrac{f(z_n)}{f(y_n)} + \left( 24 - 35\dfrac{f(y_n)}{f(x_n)} \right) \dfrac{f(y_n)}{f(x_n)} \right)} \right]. \end{cases} \tag{25}$$
Method 4. The method by Kung and Traub [2], denoted by KTM8, is

$$\begin{cases} y_n = x_n + \beta f(x_n), \quad \beta \in \mathbb{R}\setminus\{0\}, \\[6pt] z_n = y_n - \dfrac{\beta f(x_n) f(y_n)}{f(y_n) - f(x_n)}, \\[6pt] w_n = z_n - \dfrac{f(x_n) f(y_n)}{f(z_n) - f(x_n)} \left[ \dfrac{1}{f[y_n, x_n]} - \dfrac{1}{f[z_n, y_n]} \right], \\[6pt] x_{n+1} = z_n - \dfrac{f(x_n) f(y_n) f(z_n)}{f(w_n) - f(x_n)} \left[ \dfrac{1}{f[y_n, x_n]} \left\{ \dfrac{1}{f[w_n, z_n]} - \dfrac{1}{f[z_n, y_n]} \right\} - \dfrac{1}{f[z_n, x_n]} \left\{ \dfrac{1}{f[z_n, y_n]} - \dfrac{1}{f[y_n, x_n]} \right\} \right]. \end{cases} \tag{26}$$
Method 5. The method by Soleymani [10], denoted by $SM_8^1$, is

$$\begin{cases} y_n = x_n - \dfrac{f(x_n)}{f[x_n, w_n]}, \quad w_n = x_n + f(x_n), \\[6pt] z_n = y_n - \dfrac{f(y_n)}{f[x_n, y_n]} \left[ 1 + \dfrac{f(y_n)}{f(w_n)} + \left( \dfrac{f(y_n)}{f(w_n)} \right)^2 \right], \\[6pt] x_{n+1} = z_n - \dfrac{f(z_n)}{f[z_n, y_n]} \left[ 1 + \dfrac{1}{1 + f[x_n, w_n]} \left( \dfrac{f(y_n)}{f(x_n)} \right)^2 + \left( 2 + f[x_n, w_n] \right) \dfrac{f(z_n)}{f(w_n)} \right]. \end{cases} \tag{27}$$
Method 6. The method by Zheng et al. [11], denoted by ZM8, is

$$\begin{cases} y_n = x_n - \dfrac{f(x_n)}{f[x_n, w_n]}, \quad w_n = x_n + \beta f(x_n), \\[6pt] z_n = y_n - \dfrac{f(y_n)}{f[x_n, y_n] + f[y_n, w_n] - f[x_n, w_n]}, \\[6pt] x_{n+1} = z_n - \dfrac{f(z_n)}{f[z_n, y_n] + f[z_n, y_n, x_n](z_n - y_n) + f[z_n, y_n, x_n, w_n](z_n - y_n)(z_n - x_n)}. \end{cases} \tag{28}$$
Method 7. The method by Soleymani [12], denoted by $SM_8^2$, is

$$\begin{cases} y_n = x_n - \dfrac{f(x_n)}{f[x_n, w_n]}, \quad w_n = x_n + \beta f(x_n), \\[6pt] z_n = y_n - \dfrac{f(y_n)}{f[x_n, y_n] + f[y_n, w_n] - f[x_n, w_n]}, \\[6pt] x_{n+1} = z_n - \dfrac{f(z_n) \left\{ 1 + \left( \dfrac{f(y_n)}{f(x_n)} \right)^4 - \left( 1 + \beta f[x_n, w_n] \right) \left( \dfrac{f(y_n)}{f(w_n)} \right)^3 - \left( \dfrac{f(z_n)}{f(y_n)} \right)^2 + \dfrac{f(z_n)}{f(w_n)} + \left( \dfrac{f(z_n)}{f(x_n)} \right)^2 \right\}}{f[x_n, z_n] + f[z_n, y_n] - f[x_n, y_n]}. \end{cases} \tag{29}$$

## 5. Numerical Experiments

In this section, we check the effectiveness of the newly proposed methods. We employ the present methods Equation (23) (for β = 1, γ = 12), Equation (24) (for β = 1, µ = 12) and Equation (25) (for β = 1, η = 12), denoted by $MM_8^1$, $MM_8^2$ and $MM_8^3$, respectively, to solve nonlinear equations. We compare them with the Kung-Traub method Equation (26) (KTM8), the Soleymani method Equation (27) ($SM_8^1$), the Zheng et al. method Equation (28) (ZM8) and the Soleymani method Equation (29) ($SM_8^2$). The test functions and their roots are displayed in Table 1. A comparison of the different eighth-order derivative-free iterative methods with respect to the same total number of functional evaluations (TNE = 12) is provided in Tables 2–4. All computations have been performed using the programming package Mathematica 9 with multiple-precision arithmetic. We use $\epsilon = 10^{-35}$ as the error tolerance. The following stopping criteria are used for the computer programs: (i) $|x_{n+1} - x_n| < \epsilon$; (ii) $|f(x_{n+1})| < \epsilon$. These methods are employed to solve nonlinear equations of two classes, smooth functions and non-smooth functions:

$$g_1(x) = |x^2 - 2|, \quad \alpha \approx 1.4142135623730950488016887242096981, \quad x_0 = 1.3,$$

$$g_2(x) = \begin{cases} x(x - 1), & \text{if } x \le 0, \\ -2x(x + 1), & \text{if } x > 0, \end{cases} \quad \alpha = 0, \quad x_0 = 0.5.$$
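The experimental loop with the two stopping criteria can be sketched as follows. Here a plain Steffensen update in double precision stands in for the paper's eighth-order methods and multiple-precision setting, the tolerance is loosened accordingly, and the smooth function $f(x) = x^2 - 2$ (with the same $x_0 = 1.3$ used for $g_1$) is an illustrative placeholder.

```python
def run(update, f, x0, eps=1e-12, max_iter=100):
    """Iterate until both |x_{n+1} - x_n| < eps and |f(x_{n+1})| < eps."""
    x = x0
    for _ in range(max_iter):
        x_new = update(f, x)
        if abs(x_new - x) < eps and abs(f(x_new)) < eps:
            return x_new
        x = x_new
    return x

def steffensen_update(f, x):
    fx = f(x)
    if fx == 0.0:               # already exactly at a root
        return x
    return x - fx / ((f(x + fx) - fx) / fx)

root = run(steffensen_update, lambda x: x**2 - 2, 1.3)
```

Replacing `steffensen_update` with any of the eighth-order updates (and using an arbitrary-precision number type) reproduces the structure of the paper's experiments.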

## 6. Conclusions

In this study, we contribute further to the development of the theory of iteration processes and propose a new derivative-free optimal family of eighth-order methods for solving nonlinear equations numerically. It is noteworthy that the given scheme can produce several new derivative-free optimal eighth-order methods by choosing different weight functions. The superiority of the proposed methods is corroborated by the numerical results displayed in Tables 2–4. The numerical experiments suggest that the new class is a valuable alternative for solving nonlinear equations.

## Acknowledgments

We would like to express our gratitude to the anonymous referees for their insightful and valuable comments and suggestions.

## Author Contributions

The contributions of all of the authors have been similar. All of them have worked together to develop the present manuscript.

## Conflicts of Interest

The authors declare no conflict of interest.

## References

1. Petković, M.S.; Neta, B.; Petković, L.D.; Džunić, J. Multipoint Methods for Solving Nonlinear Equations; Academic Press: Orlando, FL, USA, 2012.
2. Kung, H.T.; Traub, J.F. Optimal order of one-point and multipoint iteration. J. Assoc. Comput. Mach. 1974, 21, 643–651.
3. Ostrowski, A.M. Solution of Equations and Systems of Equations; Academic Press: New York, NY, USA, 1960.
4. Potra, F.A.; Pták, V. Nondiscrete Induction and Iterative Processes; Research Notes in Mathematics, Volume 103; Pitman: Boston, MA, USA, 1984.
5. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Upper Saddle River, NJ, USA, 1964.
6. Steffensen, J.F. Remarks on iteration. Skand. Aktuarietidskr. 1933, 16, 64–72.
7. Petković, M.S.; Ilić, S.; Džunić, J. Derivative free two-point methods with and without memory for solving nonlinear equations. Appl. Math. Comput. 2010, 217, 1887–1895.
8. Andreu, C.; Cambil, N.; Cordero, A.; Torregrosa, J.R. A class of optimal eighth-order derivative-free methods for solving the Danchick-Gauss problem. Appl. Math. Comput. 2014, 232, 237–246.
9. Soleymani, F.; Sharma, R.; Li, X.; Tohidi, E. An optimized derivative-free form of the Potra-Pták method. Math. Comput. Model. 2012, 56, 97–104.
10. Soleymani, F. Optimal Steffensen-type methods with eighth order of convergence. Comput. Math. Appl. 2011, 62, 4619–4626.
11. Zheng, Q.; Li, J.; Huang, F. An optimal Steffensen-type family for solving nonlinear equations. Appl. Math. Comput. 2011, 217, 9592–9597.
12. Soleymani, F. On a bi-parametric class of optimal eighth-order derivative-free methods. Int. J. Pure Appl. Math. 2011, 72, 27–37.
Figure 1. The graph of h(x) and its root.
Table 1. Test functions and their roots.
| Test function | Root | Initial guess |
| --- | --- | --- |
| $f_1(x) = (\sin x)^2 + x$ | $\alpha = 0$ | $x_0 = 0.5$ |
| $f_2(x) = x^2 - (1 - x)^{25}$ | $\alpha \approx 0.14373925929975369826697493201066691$ | $x_0 = 0.4$ |
| $f_3(x) = \sin^{-1}(x^2 - 1) - \frac{x}{2} + 1$ | $\alpha \approx 0.59481096839836917752265623515213618$ | $x_0 = 0.3$ |
| $f_4(x) = \tan(\log x) + \cos(x^3)\,\frac{1}{2x}$ | $\alpha \approx 0.44326078355676706795301995624689113$ | $x_0 = 0.41$ |
| $f_5(x) = 10x e^{-x^2} - 1$ | $\alpha \approx 1.6796306104284499406749203388379704$ | $x_0 = 1.5$ |
Table 2. Comparison of different methods for smooth functions.

| $f$ |  | KTM8 | $SM_8^1$ | ZM8 | $SM_8^2$ | $MM_8^1$ | $MM_8^2$ | $MM_8^3$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| $f_1$ | $\lvert f(x_1) \rvert$ | 1.21e−3 | 1.22e−3 | 4.33e−3 | 1.67e−3 | 0.9e−3 | 5.86e−4 | 7.81e−4 |
|  | $\lvert f(x_2) \rvert$ | 5.84e−22 | 3.06e−22 | 1.00e−13 | 4.85e−21 | 7.46e−24 | 1.44e−24 | 6.59e−25 |
|  | $\lvert f(x_3) \rvert$ | 7.16e−168 | 4.67e−171 | 1.63e−77 | 2.60e−161 | 1.31e−184 | 1.92e−189 | 3.35e−193 |
| $f_2$ | $\lvert f(x_1) \rvert$ | 4.37e−3 | 4.08e−3 | 1.18e−2 | 3.02e−3 | 2.08e−3 | 3.49e−5 | 3.49e−3 |
|  | $\lvert f(x_2) \rvert$ | 3.21e−12 | 1.06e−11 | 1.16e−6 | 1.18e−12 | 2.69e−16 | 8.72e−20 | 8.09e−15 |
|  | $\lvert f(x_3) \rvert$ | 1.01e−85 | 1.81e−80 | 5.09e−31 | 4.59e−88 | 1.06e−118 | 1.32e−144 | 1.26e−107 |
| $f_3$ | $\lvert f(x_1) \rvert$ | 1.06e−7 | 6.23e−8 | 6.50e−6 | 3.92e−7 | 1.94e−8 | 4.81e−8 | 1.55e−8 |
|  | $\lvert f(x_2) \rvert$ | 9.23e−59 | 1.07e−60 | 3.25e−33 | 2.65e−54 | 4.55e−66 | 1.73e−62 | 2.44e−66 |
|  | $\lvert f(x_3) \rvert$ | 3.15e−467 | 8.39e−483 | 5.05e−197 | 1.15e−431 | 0.1e−490 | 4.93e−498 | 0.1e−492 |
| $f_4$ | $\lvert f(x_1) \rvert$ | 9.31e−5 | 1.25e−4 | 1.46e−5 | 1.83e−5 | 1.00e−8 | 4.94e−7 | 1.21e−6 |
|  | $\lvert f(x_2) \rvert$ | 8.46e−30 | 2.55e−29 | 5.66e−29 | 5.52e−36 | 1.08e−65 | 8.35e−49 | 9.70e−47 |
|  | $\lvert f(x_3) \rvert$ | 3.95e−230 | 7.45e−227 | 1.92e−169 | 3.76e−280 | 0.1e−492 | 5.53e−383 | 1.68e−367 |
| $f_5$ | $\lvert f(x_1) \rvert$ | 1.00e−3 | 3.79e−4 | 2.80e−3 | 1.78e−4 | 2.61e−5 | 1.79e−6 | 1.84e−6 |
|  | $\lvert f(x_2) \rvert$ | 4.54e−26 | 9.35e−85 | 4.27e−49 | 7.58e−87 | 1.42e−39 | 1.06e−47 | 4.60e−48 |
|  | $\lvert f(x_3) \rvert$ | 7.83e−205 | 4.28e−234 | 1.00e−101 | 5.17e−257 | 1.09e−313 | 1.58e−377 | 7.04e−381 |
Table 3. Comparison of different methods for non-smooth function $g_1(x)$.

| Method | $\lvert g_1(x_1) \rvert$ | $\lvert g_1(x_2) \rvert$ | $\lvert g_1(x_3) \rvert$ |
| --- | --- | --- | --- |
| KTM8, Equation (26) | 9.81e−3 | 4.14e−6 | 8.18e−13 |
| $SM_8^1$, Equation (27) | 1.03e−2 | 3.21e−6 | 3.21e−13 |
| ZM8, Equation (28) | 3.28e−3 | 6.23e−11 | 8.36e−42 |
| $SM_8^2$, Equation (29) | 2.89e−2 | 6.28e−5 | 3.66e−10 |
| $MM_8^1$ (ours), Equation (23) | 2.97e−3 | 2.43e−22 | 4.69e−175 |
| $MM_8^2$ (ours), Equation (24) | 5.98e−4 | 3.56e−24 | 1.31e−188 |
| $MM_8^3$ (ours), Equation (25) | 1.33e−4 | 5.87e−33 | 6.67e−258 |
Table 4. Comparison of different methods for non-smooth function $g_2(x)$.

| Method | $\lvert g_2(x_1) \rvert$ | $\lvert g_2(x_2) \rvert$ | $\lvert g_2(x_3) \rvert$ |
| --- | --- | --- | --- |
| KTM8, Equation (26) | D | D | D |
| $SM_8^1$, Equation (27) | D | D | D |
| ZM8, Equation (28) | 2.63e−1 | 6.16e−7 | 1.87e−40 |
| $SM_8^2$, Equation (29) | 0.786e+3 | 0.12e+1 | 1.03e−4 |
| $MM_8^1$ (ours), Equation (23) | 1.09e−1 | 7.44e−7 | 5.53e−13 |
| $MM_8^2$ (ours), Equation (24) | 2.61e−3 | 4.05e−25 | 1.40e−199 |
| $MM_8^3$ (ours), Equation (25) | 2.52e−3 | 3.94e−25 | 1.43e−199 |

D: stands for divergent.