# An Optimal Eighth-Order Derivative-Free Family of Potra-Pták’s Method

University Institute of Engineering and Technology, Panjab University, Chandigarh 160-014, India

Author to whom correspondence should be addressed.

Academic Editor: Alicia Cordero

Received: 25 April 2015 / Accepted: 8 June 2015 / Published: 15 June 2015

(This article belongs to the Special Issue Numerical Algorithms for Solving Nonlinear Equations and Systems)

## Abstract

In this paper, we present a new three-step derivative-free family based on Potra-Pták’s method for solving nonlinear equations numerically. In terms of computational cost, each member of the proposed family requires only four functional evaluations per full iteration to achieve optimal eighth-order convergence. Further, computational results demonstrate that the proposed methods are highly efficient compared with many well-known methods.

## 1. Introduction

One of the most basic and earliest problems of numerical analysis is to find, efficiently and accurately, the simple roots of a nonlinear equation of the form

$$f(x)=0,$$

where f : D ⊆ ℝ → ℝ is a nonlinear continuous function. Analytical methods for solving such equations are almost non-existent; therefore, one can only obtain approximate solutions by relying on numerical methods based on iterative procedures (see, e.g., [1–7]). Newton’s method [5] is one of the most famous and basic methods for solving such equations; it is given by

$${x}_{n+1}={x}_{n}-\frac{f({x}_{n})}{{f}^{\prime}({x}_{n})},\phantom{\rule{1em}{0ex}}n=0,1,2,\dots .$$

It converges quadratically to simple roots and linearly to multiple roots.
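As a point of reference for the derivative-free variants developed below, here is a minimal Python sketch of the Newton iteration Equation (2); the test function, starting point, and tolerance are illustrative choices, not taken from the paper.

```python
# A minimal sketch of Newton's method, Equation (2).
def newton(f, df, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)      # Newton step
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: the simple root sqrt(2) of f(x) = x^2 - 2.
print(newton(lambda x: x**2 - 2, lambda x: 2 * x, 1.5))
```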

Multipoint iterative methods for solving nonlinear equations are of great practical importance, since they overcome the limitations of one-point methods regarding convergence order and computational efficiency. According to the Kung-Traub conjecture [2], the order of convergence of any multipoint method without memory requiring n function evaluations per iteration cannot exceed the bound 2^{n−1}, called the optimal order. Thus, the optimal order for a method with three functional evaluations per step is four.
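To make the bound concrete, the snippet below tabulates the optimal order 2^{n−1} together with the efficiency index p^{1/d} discussed in the next paragraph; the numbers follow directly from the two formulas.

```python
# Kung-Traub bound: a method with d functional evaluations per step has
# optimal order p = 2**(d-1); Ostrowski's efficiency index is p**(1/d).
for d in (2, 3, 4):
    p = 2 ** (d - 1)
    print(f"d = {d}: optimal order {p}, efficiency index {p ** (1 / d):.4f}")
# d = 2: order 2, index 1.4142  (e.g., Newton, Steffensen)
# d = 3: order 4, index 1.5874
# d = 4: order 8, index 1.6818  (the proposed family)
```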

As the order of an iterative method increases, so does the number of functional evaluations per step. The efficiency of an iterative method is commonly measured by the efficiency index, defined by Ostrowski [3] as p^{1/d}, where p is the order of convergence and d is the number of functional evaluations per step. To improve the order and efficiency of Newton’s method Equation (2), Potra and Pták [4] proposed the following third-order method:

$$\{\begin{array}{l}{y}_{n}={x}_{n}-\frac{f({x}_{n})}{{f}^{\prime}({x}_{n})},\hfill \\ {x}_{n+1}={x}_{n}-\frac{f({x}_{n})+f({y}_{n})}{{f}^{\prime}({x}_{n})},\phantom{\rule{0.2em}{0ex}}n=0,1,2,\dots .\hfill \end{array}$$

It satisfies the following error equation

$${e}_{n+1}=2{c}_{2}^{2}{e}_{n}^{3}+(-9{c}_{2}^{3}+7{c}_{2}{c}_{3}){e}_{n}^{4}+O({e}_{n}^{5}).$$
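For comparison with the derivative-free variants below, here is a direct Python transcription of the Potra-Pták iteration; the test function and starting point are again illustrative.

```python
# Sketch of the third-order Potra-Ptak method.
def potra_ptak(f, df, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        y = x - fx / df(x)                 # Newton predictor
        x_new = x - (fx + f(y)) / df(x)    # Potra-Ptak corrector
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

print(potra_ptak(lambda x: x**2 - 2, lambda x: 2 * x, 1.5))
```

Each step uses three evaluations (f(x_n), f(y_n) and f′(x_n)) for order three, so the scheme is not optimal in the Kung-Traub sense.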

However, there are many practical situations in which the calculation of derivatives is expensive or time-consuming. Therefore, the idea of removing derivatives from the iteration process is very significant.

In particular, when the first-order derivative f′(x_{n}) in Newton’s method is replaced by the forward-difference approximation
$\frac{f({x}_{n}+f({x}_{n}))-f({x}_{n})}{f({x}_{n})}$, we get the well-known Steffensen method [6]:

$${x}_{n+1}={x}_{n}-\frac{f({x}_{n})}{f[{x}_{n},{w}_{n}]},$$

where w_{n} = x_{n} + f(x_{n}) and f[·, ·] denotes the first-order divided difference. As a matter of fact, both methods maintain quadratic convergence using only two functional evaluations per full step, but Steffensen’s method is derivative-free, which is very useful in optimization problems. Recently, many higher-order derivative-free methods have been built on Steffensen’s method (cf. [7,8] and the references cited therein). Soleymani et al. [9] presented the following optimal fourth-order Steffensen-type family:

$$\{\begin{array}{l}{y}_{n}={x}_{n}-\frac{f({x}_{n})}{f[{x}_{n},{w}_{n}]},\phantom{\rule{1em}{0ex}}{w}_{n}={x}_{n}+\beta f({x}_{n}),\hfill \\ {x}_{n+1}={x}_{n}-\frac{f({x}_{n})+f({y}_{n})}{f[{x}_{n},{w}_{n}]}-\left(\frac{2f({x}_{n})+af({y}_{n})}{f[{x}_{n},{w}_{n}]}{\left(\frac{f({y}_{n})}{f({x}_{n})}\right)}^{2}\right)\left(1-\frac{\beta f[{x}_{n},{w}_{n}]}{2+2\beta f[{x}_{n},{w}_{n}]}\right),\phantom{\rule{1em}{0ex}}a\in \mathbb{R},\hfill \end{array}$$

where β ∈ ℝ \{0}. The construction of this family is based on Potra-Pták’s method. However, no higher-order derivative-free modification of Potra-Pták’s method is available to date.
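A minimal Python sketch of Steffensen’s derivative-free iteration; since the divided difference replaces the derivative, no derivative argument is needed (test function and starting point illustrative).

```python
# Sketch of Steffensen's method with w_n = x_n + f(x_n).
def steffensen(f, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        w = x + fx
        dd = (f(w) - fx) / (w - x)    # divided difference f[x_n, w_n]
        x_new = x - fx / dd
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

print(steffensen(lambda x: x**2 - 2, 1.5))
```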

With this aim, we propose a new derivative-free modification of Potra-Pták’s method having optimal eighth-order convergence. The construction of the proposed class is based on the weight function approach. It is shown by way of illustrations that the proposed methods are very useful in high-precision computations.

## 2. Construction of the Optimal Eighth-Order Family

In this section, we intend to develop a new derivative-free class of three-point methods having optimal eighth-order convergence.

Thus, we consider the following iteration scheme, in which the first two steps of the well-known Potra-Pták method are composed with a Newton step:

$$\{\begin{array}{c}\phantom{\rule{0.8em}{0ex}}{y}_{n}={x}_{n}-\frac{f({x}_{n})}{{f}^{\prime}({x}_{n})},\\ \phantom{\rule{0.8em}{0ex}}{z}_{n}={y}_{n}-\frac{f({y}_{n})}{{f}^{\prime}({x}_{n})},\\ {x}_{n+1}={z}_{n}-\frac{f({z}_{n})}{{f}^{\prime}({z}_{n})}.\end{array}$$

It satisfies the following error equation

$${e}_{n+1}=4{c}_{2}^{5}{e}_{n}^{6}+(-36{c}_{2}^{6}+28{c}_{2}^{4}{c}_{3}){e}_{n}^{7}+O({e}_{n}^{8}),$$

where e_{n} = x_{n} − α and
${c}_{k}=\frac{1}{k!}\frac{{f}^{(k)}(\alpha )}{{f}^{\prime}(\alpha )},\phantom{\rule{1em}{0ex}}k\ge 2.$

According to the Kung-Traub conjecture, the above scheme Equation (5) is not optimal, because it has sixth-order convergence and requires five functional evaluations per full iteration. Following the Cordero-Torregrosa conjecture [8], we replace the derivatives in all three steps by suitable approximations that use the available data. Therefore, we approximate

$$\begin{array}{l}{f}^{\prime}({x}_{n})\approx f[{x}_{n},{w}_{n}],\\ {f}^{\prime}({z}_{n})\approx f[{x}_{n},{w}_{n}],\end{array}$$

where w_{n} = x_{n} + βf(x_{n})^{3}, β ∈ ℝ\{0}, and
$f[x,y]=\frac{f(x)-f(y)}{x-y}$ denotes the first-order divided difference (written here without the iteration index n).
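The quality of this approximation improves rapidly as the iterates approach the root, because w_n − x_n = βf(x_n)³ shrinks like the cube of the residual. A small illustrative check (β = 1 and f(x) = x² − 2 are arbitrary choices):

```python
# For this quadratic, f[x, w] = x + w, so f[x, w] - f'(x) = w - x = beta*f(x)**3:
# the divided-difference error decays like the cube of the residual.
f = lambda x: x**2 - 2
df = lambda x: 2 * x
beta = 1.0
for x in (1.5, 1.45, 1.42):
    w = x + beta * f(x)**3
    dd = (f(w) - f(x)) / (w - x)
    print(f"x = {x}: |f[x,w] - f'(x)| = {abs(dd - df(x)):.3e}")
```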

Substituting these approximations in Equation (5), we get a derivative-free three-point iterative method given by

$$\{\begin{array}{c}\phantom{\rule{0.8em}{0ex}}{y}_{n}={x}_{n}-\frac{f({x}_{n})}{f[{x}_{n},{w}_{n}]},\\ \phantom{\rule{0.8em}{0ex}}{z}_{n}={y}_{n}-\frac{f({y}_{n})}{f[{x}_{n},{w}_{n}]},\\ {x}_{n+1}={z}_{n}-\frac{f({z}_{n})}{f[{x}_{n},{w}_{n}]}.\end{array}$$

It satisfies the following error equation

$${e}_{n+1}=4{c}_{2}^{3}{e}_{n}^{4}+(-26{c}_{2}^{4}+20{c}_{2}^{2}{c}_{3}){e}_{n}^{5}+O({e}_{n}^{6}).$$
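This error equation can be verified symbolically. The SymPy sketch below normalizes α = 0 and f′(α) = 1, models f by a truncated Taylor polynomial, applies one sweep of scheme Equation (8), and expands the new error in powers of e_n; the expected output is the leading term 4c₂³e_n⁴ (the check may take a few seconds).

```python
# Symbolic check of the leading error term of scheme (8).
# Normalization: alpha = 0, f'(alpha) = 1, so f(t) = t + c2*t^2 + c3*t^3 + c4*t^4;
# this truncation is enough to recover the coefficient of e**4.
import sympy as sp

e, c2, c3, c4, beta = sp.symbols('e c2 c3 c4 beta')
f = lambda t: t + c2*t**2 + c3*t**3 + c4*t**4

x = e                                      # e_n = x_n - alpha
w = x + beta * f(x)**3                     # w_n = x_n + beta*f(x_n)^3
dd = sp.cancel((f(x) - f(w)) / (x - w))    # divided difference f[x_n, w_n]
y = x - f(x) / dd
z = y - f(y) / dd
x_next = z - f(z) / dd

print(sp.expand(sp.series(x_next, e, 0, 5).removeO()))   # expect 4*c2**3*e**4
```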

Again, the family of methods Equation (8) is not optimal according to the Kung-Traub conjecture. Therefore, to improve its order of convergence further, we make use of the weight function approach and consider

$$\{\begin{array}{l}{y}_{n}={x}_{n}-\frac{f({x}_{n})}{f[{x}_{n},{w}_{n}]},\phantom{\rule{1em}{0ex}}{w}_{n}={x}_{n}+\beta f{({x}_{n})}^{3},\hfill \\ {z}_{n}={x}_{n}-\left(\frac{f({x}_{n})+f({y}_{n})}{f[{x}_{n},{w}_{n}]}\right)G(\tau ),\phantom{\rule{1em}{0ex}}\tau =\frac{f({y}_{n})}{f({x}_{n})},\hfill \\ {x}_{n+1}={z}_{n}-\frac{f({z}_{n})}{f[{x}_{n},{w}_{n}]}H(\tau ,\varphi ),\phantom{\rule{1em}{0ex}}\varphi =\frac{f({z}_{n})}{f({y}_{n})},\hfill \end{array}$$

where β ∈ ℝ \{0} and G and H are weight functions of one and two variables, respectively. Theorem 1 gives conditions on the weight functions under which the convergence order of family Equation (9) reaches the optimal level eight.

**Theorem 1.** Let α ∈ D be a simple root of a sufficiently differentiable function f : D ⊆ ℝ → ℝ. Then the family Equation (9) converges with optimal order eight if the weight functions satisfy

$$\{\begin{array}{l}G(0)=1,\phantom{\rule{0.5em}{0ex}}{G}^{\prime}(0)=0,\phantom{\rule{0.5em}{0ex}}{G}^{\u2033}(0)=4\phantom{\rule{0.5em}{0ex}}\text{and}\phantom{\rule{0.5em}{0ex}}|{G}^{(3)}(0)|\le \infty ,\phantom{\rule{0.5em}{0ex}}\beta \in \mathbb{R}\backslash \{0\},\hfill \\ {H}_{00}=1,\phantom{\rule{0.5em}{0ex}}{H}_{10}=2,\phantom{\rule{0.5em}{0ex}}{H}_{01}=1,\phantom{\rule{0.5em}{0ex}}{H}_{20}=\frac{{G}^{(3)}(0)}{3}+6,\phantom{\rule{0.5em}{0ex}}{H}_{11}=4,\phantom{\rule{0.5em}{0ex}}{H}_{30}=3{G}^{(3)}(0)+\frac{{G}^{(4)}(0)}{4},\hfill \end{array}$$

where ${H}_{ij}=\frac{{\partial}^{i+j}H}{\partial {\tau}^{i}\partial {\varphi}^{j}}(0,0)$.

It satisfies the following error equation

$$\begin{array}{l}{e}_{n+1}=\frac{1}{432}{c}_{2}((-18+{G}^{(3)}(0)){c}_{2}^{2}+6{c}_{3})[({H}_{02}{(-18+{G}^{(3)}(0))}^{2}-3(648+2{H}_{21}(-18+{G}^{(3)}(0))-28{G}^{(3)}(0)\\ +3{G}^{(4)}(0))){c}_{2}^{4}+36(-2+{G}^{(2)}(0)){c}_{3}^{2}-12{c}_{2}^{2}(6\beta {f}^{\prime}{(\alpha )}^{3}+(-102+18{H}_{02}+3{H}_{21}+{G}^{(3)}(0)-{G}^{(2)}(0){G}^{(3)}(0)){c}_{3})-72{c}_{2}{c}_{4}]{e}_{n}^{8}+O({e}_{n}^{9}),\end{array}$$

where e_{n} and c_{k} are already defined in Equation (6).

**Proof.** Using the Taylor series expansion of f(x_{n}) about the simple root α and taking f(α) = 0 into account, we have

$$f({x}_{n})={f}^{\prime}(\alpha )({e}_{n}+{c}_{2}{e}_{n}^{2}+{c}_{3}{e}_{n}^{3}+{c}_{4}{e}_{n}^{4}+{c}_{5}{e}_{n}^{5}+{c}_{6}{e}_{n}^{6}+{c}_{7}{e}_{n}^{7}+{c}_{8}{e}_{n}^{8})+O({e}_{n}^{9}).$$

Using that w_{n} = x_{n} + βf(x_{n})^{3}, one gets

$$\begin{array}{l}f[{x}_{n},{w}_{n}]={f}^{\prime}(\alpha )+2{f}^{\prime}(\alpha ){c}_{2}{e}_{n}+3{f}^{\prime}(\alpha ){c}_{3}{e}_{n}^{2}+{f}^{\prime}(\alpha )(\beta {f}^{\prime}{(\alpha )}^{3}{c}_{2}+4{c}_{4}){e}_{n}^{3}\\ \phantom{\rule{4em}{0ex}}+{f}^{\prime}(\alpha )(3\beta {f}^{\prime}{(\alpha )}^{3}{c}_{2}^{2}+3\beta {f}^{\prime}{(\alpha )}^{3}{c}_{3}+5{c}_{5}){e}_{n}^{4}+O({e}_{n}^{5}).\end{array}$$

From Equations (11) and (12), we have

$${y}_{n}-\alpha ={x}_{n}-\alpha -\frac{f({x}_{n})}{f[{x}_{n},{w}_{n}]}={c}_{2}{e}_{n}^{2}+(-2{c}_{2}^{2}+2{c}_{3}){e}_{n}^{3}+(\beta {f}^{\prime}{(\alpha )}^{3}{c}_{2}+4{c}_{2}^{3}-7{c}_{2}{c}_{3}+3{c}_{4}){e}_{n}^{4}+O({e}_{n}^{5}).$$

Expanding
$f\left({x}_{n}-\frac{f({x}_{n})}{f[{x}_{n},{w}_{n}]}\right)$ about x_{n} = α, we have

$$f({y}_{n})=f\left({x}_{n}-\frac{f({x}_{n})}{f[{x}_{n},{w}_{n}]}\right)={f}^{\prime}(\alpha ){c}_{2}{e}_{n}^{2}+{f}^{\prime}(\alpha )(-2{c}_{2}^{2}+2{c}_{3}){e}_{n}^{3}+{f}^{\prime}(\alpha )(5{c}_{2}^{3}+{c}_{2}(\beta {f}^{\prime}{(\alpha )}^{3}-7{c}_{3})+3{c}_{4}){e}_{n}^{4}+O({e}_{n}^{5}),$$

and

$$\tau =\frac{f({y}_{n})}{f({x}_{n})}={c}_{2}{e}_{n}+(-3{c}_{2}^{2}+2{c}_{3}){e}_{n}^{2}+(3{c}_{4}-10{c}_{2}{c}_{3}+8{c}_{2}^{3}+\beta {f}^{\prime}{(\alpha )}^{3}{c}_{2}){e}_{n}^{3}+O({e}_{n}^{4}).$$

In the same vein, by considering G(0) = 1, G′(0) = 0, G″(0) = 4 and |G^{(3)}(0)| ≤ ∞, we obtain

$${z}_{n}-\alpha ={x}_{n}-\alpha -\left(\frac{f({x}_{n})+f({y}_{n})}{f[{x}_{n},{w}_{n}]}\right)G(\tau )=\left(\left(3-\frac{{G}^{(3)}(0)}{6}\right){c}_{2}^{3}-{c}_{2}{c}_{3}\right){e}_{n}^{4}+O({e}_{n}^{5}).$$

Moreover, we find

$$\begin{array}{c}f({z}_{n})={f}^{\prime}(\alpha )\left[\left(3-\frac{{G}^{(3)}(0)}{6}\right){c}_{2}^{3}-{c}_{2}{c}_{3}\right]{e}_{n}^{4}+{f}^{\prime}(\alpha )[\left(-16+\frac{3{G}^{(3)}(0)}{2}-\frac{{G}^{(4)}(0)}{24}\right){c}_{2}^{4}\\ -2{c}_{3}^{2}-{c}_{2}^{2}(\beta {f}^{\prime}{(\alpha )}^{3}+(-20+{G}^{(3)}(0)){c}_{3})-2{c}_{2}{c}_{4}]{e}_{n}^{5}+O({e}_{n}^{6}),\end{array}$$

and

$$\begin{array}{c}\varphi =\frac{f({z}_{n})}{f({y}_{n})}=\frac{1}{6}(18{c}_{2}^{2}-{G}^{(3)}(0){c}_{2}^{2}-6{c}_{3}){e}_{n}^{2}+\frac{1}{24}(-24\beta {f}^{\prime}{(\alpha )}^{3}{c}_{2}-240{c}_{2}^{3}+28{G}^{(3)}(0){c}_{2}^{3}\\ -{G}^{(4)}(0){c}_{2}^{3}+288{c}_{2}{c}_{3}-16{G}^{(3)}(0){c}_{2}{c}_{3}-48{c}_{4}){e}_{n}^{3}+O({e}_{n}^{4}).\end{array}$$

It is clear from Equations (15) and (18) that τ and φ are of orders e_{n} and
${e}_{n}^{2}$, respectively. Therefore, we can expand the weight function H(τ, φ) in a neighborhood of the origin by a Taylor series up to third-order terms as follows:

$$H(\tau ,\varphi )={H}_{00}+{H}_{10}\tau +{H}_{01}\varphi +\frac{1}{2!}({H}_{20}{\tau}^{2}+2{H}_{11}\tau \varphi +{H}_{02}{\varphi}^{2})+\frac{1}{3!}({H}_{30}{\tau}^{3}+3{H}_{21}{\tau}^{2}\varphi +3{H}_{12}\tau {\varphi}^{2}+{H}_{03}{\varphi}^{3}).$$

Using Equations (17) and (19) in the last step of Equation (9), we get

$$\begin{array}{l}{e}_{n+1}=\frac{1}{6}(-1+{H}_{00}){c}_{2}\left((-18+{G}^{(3)}(0)){c}_{2}^{2}+6{c}_{3}\right){e}_{n}^{4}\\ \phantom{\rule{3em}{0ex}}+(\frac{1}{24}(-384+4{H}_{10}(-18+{G}^{(3)}(0))+36{G}^{(3)}(0)-{G}^{(4)}(0)+{H}_{00}(528-44{G}^{(3)}(0)+{G}^{(4)}(0))){c}_{2}^{4}\\ \phantom{\rule{3em}{0ex}}+2(-1+{H}_{00}){c}_{3}^{2}+{c}_{2}^{2}\left(\beta {f}^{\prime}{(\alpha )}^{3}(-1+{H}_{00})+(20+{H}_{10}+{H}_{00}(-22+{G}^{(3)}(0))-{G}^{(3)}(0)){c}_{3}\right)\\ \phantom{\rule{3em}{0ex}}+2(-1+{H}_{00}){c}_{2}{c}_{4}){e}_{n}^{5}+\dots +O({e}_{n}^{9}).\end{array}$$

This implies that the derivative-free class of methods Equation (9) arrives at the optimal eighth order of convergence when the weight functions are chosen as follows:

$$\{\begin{array}{l}G(0)=1,{G}^{\prime}(0)=0,{G}^{\u2033}(0)=4\phantom{\rule{0.2em}{0ex}}\text{and}\phantom{\rule{0.2em}{0ex}}|{G}^{(3)}(0)|\le \infty ,\beta \in \mathbb{R}\backslash \{0\},\hfill \\ {H}_{00}=1,{H}_{10}=2,{H}_{01}=1,{H}_{20}=\frac{{G}^{(3)}(0)}{3}+6,{H}_{11}=4,{H}_{30}=3{G}^{(3)}(0)+\frac{{G}^{(4)}(0)}{4}.\hfill \end{array}$$

Finally, using Equation (21) in Equation (20), we get the following error equation

$$\begin{array}{l}{e}_{n+1}=\frac{1}{432}{c}_{2}((-18+{G}^{(3)}(0)){c}_{2}^{2}+6{c}_{3})[({H}_{02}{(-18+{G}^{(3)}(0))}^{2}-3(648+2{H}_{21}(-18+{G}^{(3)}(0))-28{G}^{(3)}(0)\\ +3{G}^{(4)}(0))){c}_{2}^{4}+36(-2+{G}^{(2)}(0)){c}_{3}^{2}-12{c}_{2}^{2}(6\beta {f}^{\prime}{(\alpha )}^{3}+(-102+18{H}_{02}+3{H}_{21}+{G}^{(3)}(0)-{G}^{(2)}(0){G}^{(3)}(0)){c}_{3})\\ -72{c}_{2}{c}_{4}]{e}_{n}^{8}+O({e}_{n}^{9}).\end{array}$$

This concludes the proof. □
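To make the family concrete in code, here is a hedged Python sketch of one sweep of Equation (9) with user-supplied weight functions G and H; any pair satisfying the conditions of Equation (21) should give eighth-order convergence. The guard against an exact zero residual, the test function, and the simple admissible weights in the example are illustrative choices, not taken from the paper.

```python
# Sketch of the proposed three-step family, Equation (9): four functional
# evaluations per sweep, f(x_n), f(w_n), f(y_n), f(z_n).
def family9_step(f, x, beta, G, H):
    fx = f(x)
    w = x + beta * fx**3
    dd = (f(w) - fx) / (w - x)       # divided difference f[x_n, w_n]
    y = x - fx / dd
    fy = f(y)
    tau = fy / fx
    z = x - (fx + fy) / dd * G(tau)
    fz = f(z)
    phi = fz / fy
    return z - fz / dd * H(tau, phi)

def solve(f, x0, beta, G, H, tol=1e-12, max_iter=10):
    x = x0
    for _ in range(max_iter):
        if f(x) == 0.0:              # already at the root; avoids w == x
            return x
        x_new = family9_step(f, x, beta, G, H)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example with the simplest admissible weights: G(t) = 1 + 2t^2 and
# H(t, p) = 1 + 2t + p + 3t^2 + 4tp satisfy Equation (21) with
# G'''(0) = G''''(0) = 0 (and the free coefficients set to zero).
print(solve(lambda x: x**2 - 2, 1.3, 1.0,
            lambda t: 1 + 2 * t**2,
            lambda t, p: 1 + 2 * t + p + 3 * t**2 + 4 * t * p))
```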

**Remark 2.** From the application point of view, when the given problem is complicated, it becomes very difficult to evaluate derivatives. For example, the nonlinear function $h(x)=(\mathrm{cot}\phantom{\rule{0.2em}{0ex}}x)\,{e}^{x}\sqrt{1/(2{x}^{2}\mathrm{cosh}\phantom{\rule{0.2em}{0ex}}x)}$ (see Figure 1) has a very complicated first derivative. Such shortcomings lead us to investigate new optimal iterative methods that are totally free from derivatives.

## 3. Some Concrete Methods

In this section, we introduce some concrete methods based on the proposed class Equation (9).

**Method 1.** Let us consider the weight functions defined by

$$G(\tau )=\frac{\gamma}{6}{\tau}^{3}+2{\tau}^{2}+1\phantom{\rule{0.5em}{0ex}}\text{and}\phantom{\rule{0.5em}{0ex}}H(\tau ,\varphi )=\frac{\gamma}{2}{\tau}^{3}+\left(\frac{\gamma}{6}+3\right){\tau}^{2}+4\tau \varphi +2\tau +\varphi +1,$$

where
$\tau =\frac{f({y}_{n})}{f({x}_{n})}$,
$\varphi =\frac{f({z}_{n})}{f({y}_{n})}$, and γ is a free disposable parameter.

It can easily be seen that the above weight functions G(τ) and H(τ, φ) satisfy all the conditions of Theorem 1. Therefore, we get a new derivative-free optimal family of eighth-order methods, given by

$$\{\begin{array}{l}{y}_{n}={x}_{n}-\frac{f({x}_{n})}{f[{x}_{n},{w}_{n}]},\phantom{\rule{1em}{0ex}}{w}_{n}={x}_{n}+\beta f{({x}_{n})}^{3},\hfill \\ {z}_{n}={x}_{n}-\left(\frac{f({x}_{n})+f({y}_{n})}{f[{x}_{n},{w}_{n}]}\right)\left[\frac{\mathrm{\gamma}}{6}{\left(\frac{f({y}_{n})}{f({x}_{n})}\right)}^{3}+2{\left(\frac{f({y}_{n})}{f({x}_{n})}\right)}^{2}+1\right],\hfill \\ {x}_{n+1}={z}_{n}-\frac{f({z}_{n})}{f[{x}_{n},{w}_{n}]}\left[\frac{\mathrm{\gamma}}{2}{\left(\frac{f({y}_{n})}{f({x}_{n})}\right)}^{3}+\left(\frac{\mathrm{\gamma}}{6}+3\right){\left(\frac{f({y}_{n})}{f({x}_{n})}\right)}^{2}+4\frac{f({y}_{n})f({z}_{n})}{f({x}_{n})f({y}_{n})}+2\frac{f({y}_{n})}{f({x}_{n})}+\frac{f({z}_{n})}{f({y}_{n})}+1\right].\hfill \end{array}$$
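A self-contained sketch of this method in double precision; β = 1 and γ = 12 are the parameter values used in the numerical section, while the test function and starting point are illustrative.

```python
# Sketch of Method 1, Equation (23), with weights from Equation (22).
def mm8_1(f, x0, beta=1.0, gamma=12.0, tol=1e-12, max_iter=10):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0.0:                       # exact root; avoids w == x
            return x
        w = x + beta * fx**3
        dd = (f(w) - fx) / (w - x)          # divided difference f[x_n, w_n]
        y = x - fx / dd
        fy = f(y)
        t = fy / fx                         # tau
        z = x - (fx + fy) / dd * (gamma / 6 * t**3 + 2 * t**2 + 1)
        fz = f(z)
        p = fz / fy                         # phi
        H = (gamma / 2 * t**3 + (gamma / 6 + 3) * t**2
             + 4 * t * p + 2 * t + p + 1)
        x_new = z - fz / dd * H
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

print(mm8_1(lambda x: x**2 - 2, 1.3))   # converges to sqrt(2)
```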

**Method 2.** Next, let us consider the weight functions defined by

$$G(\tau )=\frac{\tau (1-12(\mu +2)\tau )-12}{\tau (1-12\mu \tau )-12}\phantom{\rule{0.5em}{0ex}}\text{and}\phantom{\rule{0.5em}{0ex}}H(\tau ,\varphi )=\frac{-24+\left(\frac{299}{3}+48\mu \right){\tau}^{3}}{4(-6+6\varphi +(12-5\tau )\tau )},$$

where μ is a free disposable parameter.

These weight functions satisfy all the conditions of Theorem 1. Therefore, we obtain another new derivative-free optimal family of eighth-order methods, given by

$$\{\begin{array}{l}{y}_{n}={x}_{n}-\frac{f({x}_{n})}{f[{x}_{n},{w}_{n}]},\phantom{\rule{1em}{0ex}}{w}_{n}={x}_{n}+\beta f{({x}_{n})}^{3},\hfill \\ {z}_{n}={x}_{n}-\left(\frac{f({x}_{n})+f({y}_{n})}{f[{x}_{n},{w}_{n}]}\right)\left[\frac{\frac{f({y}_{n})}{f({x}_{n})}(1-12(\mu +2)\frac{f({y}_{n})}{f({x}_{n})})-12}{\frac{f({y}_{n})}{f({x}_{n})}(1-12\mu \frac{f({y}_{n})}{f({x}_{n})})-12}\right],\hfill \\ {x}_{n+1}={z}_{n}-\frac{f({z}_{n})}{f[{x}_{n},{w}_{n}]}\left[\frac{-24+\left(\frac{299}{3}+48\mu \right)\frac{f{({y}_{n})}^{3}}{f{({x}_{n})}^{3}}}{4(-6+6\frac{f({z}_{n})}{f({y}_{n})}+(12-5\frac{f({y}_{n})}{f({x}_{n})})\frac{f({y}_{n})}{f({x}_{n})})}\right].\hfill \end{array}$$
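A candidate weight function can be sanity-checked against Theorem 1 numerically; the sketch below approximates G(0), G′(0) and G″(0) for the G of Method 2 by central differences (the parameter value μ = 12 and the step size are illustrative).

```python
# Numerical check of the Theorem 1 conditions G(0) = 1, G'(0) = 0, G''(0) = 4
# for the G of Method 2, Equation (24), via central differences.
mu, h = 12.0, 1e-4   # illustrative parameter value and step size

def G(t):
    return (t * (1 - 12 * (mu + 2) * t) - 12) / (t * (1 - 12 * mu * t) - 12)

g0 = G(0.0)
g1 = (G(h) - G(-h)) / (2 * h)              # ~ G'(0)
g2 = (G(h) - 2 * G(0.0) + G(-h)) / h**2    # ~ G''(0)
print(g0, g1, g2)                          # expect approximately 1, 0, 4
```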

**Method 3.** Finally, let us consider the weight functions defined by

$$G(\tau )=\frac{6\eta -\tau +12\eta {\tau}^{2}+(\eta -2){\tau}^{3}}{6\eta -\tau}\phantom{\rule{0.5em}{0ex}}\text{and}\phantom{\rule{0.5em}{0ex}}H(\tau ,\varphi )=\frac{{\tau}^{2}-6\eta (12+25{\tau}^{2})}{{\tau}^{2}+6\eta (-12+12\varphi +(24-35\tau )\tau )},$$

where η is a free disposable parameter.

These weight functions also satisfy all the conditions of Theorem 1. Therefore, we get another new optimal family of eighth-order methods, given by

$$\{\begin{array}{l}{y}_{n}={x}_{n}-\frac{f({x}_{n})}{f[{x}_{n},{w}_{n}]},\phantom{\rule{1em}{0ex}}{w}_{n}={x}_{n}+\beta f{({x}_{n})}^{3},\hfill \\ {z}_{n}={x}_{n}-\left(\frac{f({x}_{n})+f({y}_{n})}{f[{x}_{n},{w}_{n}]}\right)\left[\frac{6\eta -\frac{f({y}_{n})}{f({x}_{n})}+12\eta \frac{f{({y}_{n})}^{2}}{f{({x}_{n})}^{2}}+(\eta -2)\frac{f{({y}_{n})}^{3}}{f{({x}_{n})}^{3}}}{6\eta -\frac{f({y}_{n})}{f({x}_{n})}}\right],\hfill \\ {x}_{n+1}={z}_{n}-\frac{f({z}_{n})}{f[{x}_{n},{w}_{n}]}\left[\frac{\frac{f{({y}_{n})}^{2}}{f{({x}_{n})}^{2}}-6\eta \left(12+25\frac{f{({y}_{n})}^{2}}{f{({x}_{n})}^{2}}\right)}{\frac{f{({y}_{n})}^{2}}{f{({x}_{n})}^{2}}+6\eta (-12+12\frac{f({z}_{n})}{f({y}_{n})}+(24-35\frac{f({y}_{n})}{f({x}_{n})})\frac{f({y}_{n})}{f({x}_{n})})}\right].\hfill \end{array}$$

For comparison, we consider the following well-known optimal eighth-order derivative-free methods. The Kung-Traub method Equation (26) (KTM_{8}) [2] is given by

$$\{\begin{array}{l}{y}_{n}={x}_{n}+\beta f({x}_{n}),\phantom{\rule{1em}{0ex}}\beta \in \mathbb{R}\backslash \left\{0\right\},\hfill \\ {z}_{n}={y}_{n}-\beta \frac{f({x}_{n})f({y}_{n})}{f({y}_{n})-f({x}_{n})},\hfill \\ {w}_{n}={z}_{n}-\frac{f({x}_{n})f({y}_{n})}{f({z}_{n})-f({x}_{n})}\left[\frac{1}{f[{y}_{n},{x}_{n}]}-\frac{1}{f[{z}_{n},{y}_{n}]}\right],\hfill \\ {x}_{n+1}={z}_{n}-\frac{f({x}_{n})f({y}_{n})f({z}_{n})}{f({w}_{n})-f({x}_{n})}\left[\frac{1}{f[{y}_{n},{x}_{n}]}\left\{\frac{1}{f[{w}_{n},{z}_{n}]}-\frac{1}{f[{z}_{n},{y}_{n}]}\right\}-\frac{1}{f[{z}_{n},{x}_{n}]}\left\{\frac{1}{f[{z}_{n},{y}_{n}]}-\frac{1}{f[{y}_{n},{x}_{n}]}\right\}\right].\hfill \end{array}$$

The Soleymani method Equation (27) ($S{M}_{8}^{1}$) [10] is given by

$$\{\begin{array}{l}{y}_{n}={x}_{n}-\frac{f({x}_{n})}{f[{x}_{n},{w}_{n}]},\phantom{\rule{1em}{0ex}}{w}_{n}={x}_{n}+f({x}_{n}),\hfill \\ {z}_{n}={y}_{n}-\frac{f({y}_{n})}{f[{x}_{n},{y}_{n}]}\left[1+\frac{f({y}_{n})}{f({w}_{n})}+{\left(\frac{f({y}_{n})}{f({w}_{n})}\right)}^{2}\right],\hfill \\ {x}_{n+1}={z}_{n}-\frac{f({z}_{n})}{f[{z}_{n},{y}_{n}]}\left[1+\frac{1}{f[{x}_{n},{w}_{n}]+1}{\left(\frac{f({y}_{n})}{f({x}_{n})}\right)}^{2}+(2+f[{x}_{n},{w}_{n}])\frac{f({z}_{n})}{f({w}_{n})}\right].\hfill \end{array}$$

The Zheng et al. method Equation (28) (ZM_{8}) [11] is given by

$$\{\begin{array}{l}{y}_{n}={x}_{n}-\frac{f({x}_{n})}{f[{x}_{n},{w}_{n}]},\phantom{\rule{1em}{0ex}}{w}_{n}={x}_{n}+\beta f({x}_{n}),\hfill \\ {z}_{n}={y}_{n}-\frac{f({y}_{n})}{f[{x}_{n},{y}_{n}]+f[{y}_{n},{w}_{n}]-f[{x}_{n},{w}_{n}]},\hfill \\ {x}_{n+1}={z}_{n}-\frac{f({z}_{n})}{f[{z}_{n},{y}_{n}]+f[{z}_{n},{x}_{n},{x}_{n}]({z}_{n}-{y}_{n})+f[{z}_{n},{y}_{n},{x}_{n},{w}_{n}]({z}_{n}-{y}_{n})({z}_{n}-{x}_{n})}.\hfill \end{array}$$

The Soleymani et al. method Equation (29) ($S{M}_{8}^{2}$) [12] is given by

$$\{\begin{array}{l}{y}_{n}={x}_{n}-\frac{f({x}_{n})}{f[{x}_{n},{w}_{n}]},\phantom{\rule{1em}{0ex}}{w}_{n}={x}_{n}+\beta f({x}_{n}),\hfill \\ {z}_{n}={y}_{n}-\frac{f({y}_{n})}{f[{x}_{n},{y}_{n}]+f[{y}_{n},{w}_{n}]-f[{x}_{n},{w}_{n}]},\hfill \\ {x}_{n+1}={z}_{n}-\frac{f({z}_{n})\left\{1+{\left(\frac{f({y}_{n})}{f({x}_{n})}\right)}^{4}-(1+\beta f[{x}_{n},{w}_{n}]){\left(\frac{f({y}_{n})}{f({w}_{n})}\right)}^{3}-{\left(\frac{f({z}_{n})}{f({y}_{n})}\right)}^{2}+\frac{f({z}_{n})}{f({w}_{n})}+{\left(\frac{f({z}_{n})}{f({x}_{n})}\right)}^{2}\right\}}{f[{x}_{n},{z}_{n}]+f[{z}_{n},{y}_{n}]-f[{x}_{n},{y}_{n}]}.\hfill \end{array}$$

## 4. Numerical Results

In this section, we check the effectiveness of the newly proposed methods. We employ the present methods Equation (23) (for β = 1, γ = 12), Equation (24) (for β = 1, µ = 12) and Equation (25) (for β = 1, η = 12), denoted by
$M{M}_{8}^{1}$,
$M{M}_{8}^{2}$ and
$M{M}_{8}^{3}$, respectively, to solve nonlinear equations. We compare them with the Kung-Traub method Equation (26) (KTM_{8}), the Soleymani method Equation (27) ($S{M}_{8}^{1}$), the Zheng et al. method Equation (28) (ZM_{8}) and the Soleymani et al. method Equation (29) ($S{M}_{8}^{2}$). The test functions and their roots are displayed in Table 1. Comparisons of the eighth-order derivative-free iterative methods, with respect to the same total number of functional evaluations (TNE = 12), are provided in Tables 2–4. All computations have been performed in the programming package Mathematica 9 with multiple-precision arithmetic, using the tolerance ϵ = 10^{−35} and the stopping criteria (i) |x_{n+1} − x_{n}| < ϵ and (ii) |f(x_{n+1})| < ϵ. The methods are employed to solve nonlinear equations of two classes, **smooth functions and non-smooth functions**; the non-smooth test functions are

$$\begin{array}{l}{g}_{1}(x)=|{x}^{2}-2|,\phantom{\rule{0.2em}{0ex}}\alpha \approx 1.4142135623730950488016887242096981,\phantom{\rule{0.2em}{0ex}}{x}_{0}=\mathrm{1.3.}\\ {g}_{2}(x)=\{\begin{array}{lll}x(x-1)\hfill & \mathrm{if}\phantom{\rule{0.2em}{0ex}}x\le 0\hfill & \hfill \\ -2x(x+1)\hfill & \mathrm{if}\phantom{\rule{0.2em}{0ex}}x\ge 0,\hfill & \alpha =0,\phantom{\rule{0.2em}{0ex}}{x}_{0}=\mathrm{0.5.}\hfill \end{array}\end{array}$$
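The experiments can be reproduced in any multiple-precision environment. Below is a hedged sketch using Python’s mpmath in place of Mathematica 9, running $M{M}_{8}^{1}$ (β = 1, γ = 12) on the smooth test function f_{5}(x) = 10xe^{−x²} − 1 from Table 1 and printing |f_{5}(x_n)| after each of the first three iterations; the residuals should be comparable to the $M{M}_{8}^{1}$ column of Table 2.

```python
# Sketch of the Table 2 measurement for MM8^1 on f5, using mpmath for
# multiple-precision arithmetic (the paper used Mathematica 9).
from mpmath import mp, mpf, exp, nstr

mp.dps = 500                      # ~500 significant digits

def mm8_1_step(f, x, beta=mpf(1), gamma=mpf(12)):
    fx = f(x)
    w = x + beta * fx**3
    dd = (f(w) - fx) / (w - x)    # divided difference f[x_n, w_n]
    y = x - fx / dd
    fy = f(y)
    t = fy / fx
    z = x - (fx + fy) / dd * (gamma / 6 * t**3 + 2 * t**2 + 1)
    fz = f(z)
    p = fz / fy
    return z - fz / dd * (gamma / 2 * t**3 + (gamma / 6 + 3) * t**2
                          + 4 * t * p + 2 * t + p + 1)

f5 = lambda x: 10 * x * exp(-x**2) - 1
x = mpf('1.5')                    # initial guess from Table 1
for n in (1, 2, 3):
    x = mm8_1_step(f5, x)
    print(n, nstr(abs(f5(x)), 5))
```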

## 5. Conclusions

In this study, we contribute further to the development of the theory of iteration processes and propose a new derivative-free optimal family of eighth-order methods for solving nonlinear equations numerically. It is noteworthy that the given scheme can produce several new derivative-free optimal eighth-order methods by choosing different types of weight functions. The superiority of the proposed methods is corroborated by the numerical results displayed in Tables 2–4. The numerical experiments suggest that the new class is a valuable alternative for solving nonlinear equations.

## Acknowledgments

We would like to express our gratitude to the anonymous referees for their insightful and valuable comments and suggestions.

## Author Contributions

The contributions of all of the authors have been similar. All of them have worked together to develop the present manuscript.

## Conflicts of Interest

The authors declare no conflict of interest.

## References

1. Petković, M.S.; Neta, B.; Petković, L.D.; Džunić, J. Multipoint Methods for Solving Nonlinear Equations; Academic Press: Orlando, FL, USA, 2012.
2. Kung, H.T.; Traub, J.F. Optimal order of one-point and multi-point iteration. J. Assoc. Comput. Mach. **1974**, 21, 643–651.
3. Ostrowski, A.M. Solutions of Equations and System of Equations; Academic Press: New York, NY, USA, 1960.
4. Potra, F.A.; Pták, V. Nondiscrete Induction and Iterative Processes; Research Notes in Mathematics, Volume 103; Pitman: Boston, MA, USA, 1984.
5. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Upper Saddle River, NJ, USA, 1964.
6. Steffensen, J.F. Remarks on iteration. Skand. Aktuarietidskr. **1933**, 16, 64–72.
7. Petković, M.S.; Ilić, S.; Džunić, J. Derivative free two-point methods with and without memory for solving nonlinear equations. Appl. Math. Comput. **2010**, 217, 1887–1895.
8. Andreu, C.; Cambil, N.; Cordero, A.; Torregrosa, J.R. A class of optimal eighth-order derivative-free methods for solving the Danchick-Gauss problem. Appl. Math. Comput. **2014**, 232, 237–246.
9. Soleymani, F.; Sharma, R.; Li, X.; Tohidi, E. An optimized derivative-free form of the Potra-Pták method. Math. Comput. Model. **2012**, 56, 97–104.
10. Soleymani, F. Optimal Steffensen-type methods with eighth-order of convergence. Comput. Math. Appl. **2011**, 62, 4619–4626.
11. Zheng, Q.; Li, J.; Huang, F. An optimal Steffensen-type family for solving nonlinear equations. Appl. Math. Comput. **2011**, 217, 9592–9597.
12. Soleymani, F. On a bi-parametric class of optimal eighth-order derivative-free methods. Int. J. Pure Appl. Math. **2011**, 72, 27–37.

**Table 1.** Test functions, their roots and the initial guesses.

| Test function | Root | Initial guess |
|---|---|---|
| f_{1}(x) = (sin x)^{2} + x | α = 0 | x_{0} = 0.5 |
| f_{2}(x) = x^{2} − (1 − x)^{25} | α ≈ 0.14373925929975369826697493201066691… | x_{0} = 0.4 |
| ${f}_{3}(x)={\mathrm{sin}}^{-1}({x}^{2}-1)-\frac{x}{2}+1$ | α ≈ 0.59481096839836917752265623515213618… | x_{0} = 0.3 |
| ${f}_{4}(x)=\mathrm{tan}(\mathrm{log}x)+\mathrm{cos}({x}^{3})\frac{1}{\sqrt{2x}}$ | α ≈ 0.44326078355676706795301995624689113… | x_{0} = 0.41 |
| f_{5}(x) = 10xe^{−x²} − 1 | α ≈ 1.6796306104284499406749203388379704… | x_{0} = 1.5 |

**Table 2.** Comparison of the eighth-order derivative-free methods on the smooth test functions of Table 1 (TNE = 12).

| f | | KTM_{8} | $S{M}_{8}^{1}$ | ZM_{8} | $S{M}_{8}^{2}$ | $M{M}_{8}^{1}$ | $M{M}_{8}^{2}$ | $M{M}_{8}^{3}$ |
|---|---|---|---|---|---|---|---|---|
| f_{1} | \|f(x_{1})\| | 1.21e−3 | 1.22e−3 | 4.33e−3 | 1.67e−3 | 0.9e−3 | 5.86e−4 | 7.81e−4 |
| | \|f(x_{2})\| | 5.84e−22 | 3.06e−22 | 1.00e−13 | 4.85e−21 | 7.46e−24 | 1.44e−24 | 6.59e−25 |
| | \|f(x_{3})\| | 7.16e−168 | 4.67e−171 | 1.63e−77 | 2.60e−161 | 1.31e−184 | 1.92e−189 | 3.35e−193 |
| f_{2} | \|f(x_{1})\| | 4.37e−3 | 4.08e−3 | 1.18e−2 | 3.02e−3 | 2.08e−3 | 3.49e−5 | 3.49e−3 |
| | \|f(x_{2})\| | 3.21e−12 | 1.06e−11 | 1.16e−6 | 1.18e−12 | 2.69e−16 | 8.72e−20 | 8.09e−15 |
| | \|f(x_{3})\| | 1.01e−85 | 1.81e−80 | 5.09e−31 | 4.59e−88 | 1.06e−118 | 1.32e−144 | 1.26e−107 |
| f_{3} | \|f(x_{1})\| | 1.06e−7 | 6.23e−8 | 6.50e−6 | 3.92e−7 | 1.94e−8 | 4.81e−8 | 1.55e−8 |
| | \|f(x_{2})\| | 9.23e−59 | 1.07e−60 | 3.25e−33 | 2.65e−54 | 4.55e−66 | 1.73e−62 | 2.44e−66 |
| | \|f(x_{3})\| | 3.15e−467 | 8.39e−483 | 5.05e−197 | 1.15e−431 | 0.1e−490 | 4.93e−498 | 0.1e−492 |
| f_{4} | \|f(x_{1})\| | 9.31e−5 | 1.25e−4 | 1.46e−5 | 1.83e−5 | 1.00e−8 | 4.94e−7 | 1.21e−6 |
| | \|f(x_{2})\| | 8.46e−30 | 2.55e−29 | 5.66e−29 | 5.52e−36 | 1.08e−65 | 8.35e−49 | 9.70e−470 |
| | \|f(x_{3})\| | 3.95e−230 | 7.45e−227 | 1.92e−169 | 3.76e−280 | 0.1e−492 | 5.53e−383 | 1.68e−367 |
| f_{5} | \|f(x_{1})\| | 1.00e−3 | 3.79e−4 | 2.80e−3 | 1.78e−4 | 2.61e−5 | 1.79e−6 | 1.84e−6 |
| | \|f(x_{2})\| | 4.54e−26 | 9.35e−85 | 4.27e−49 | 7.58e−87 | 1.42e−39 | 1.06e−47 | 4.60e−48 |
| | \|f(x_{3})\| | 7.83e−205 | 4.28e−234 | 1.00e−101 | 5.17e−257 | 1.09e−313 | 1.58e−377 | 7.04e−381 |

**Table 3.** Comparison of the methods on the non-smooth function g_{1}(x).

| Method | \|g_{1}(x_{1})\| | \|g_{1}(x_{2})\| | \|g_{1}(x_{3})\| |
|---|---|---|---|
| Method KTM_{8} Equation (26) | 9.81e−3 | 4.14e−6 | 8.18e−13 |
| Method $S{M}_{8}^{1}$ Equation (27) | 1.03e−2 | 3.21e−6 | 3.21e−13 |
| Method ZM_{8} Equation (28) | 3.28e−3 | 6.23e−11 | 8.36e−42 |
| Method $S{M}_{8}^{2}$ Equation (29) | 2.89e−2 | 6.28e−5 | 3.66e−10 |
| Our Method $M{M}_{8}^{1}$ Equation (23) | 2.97e−3 | 2.43e−22 | 4.69e−175 |
| Our Method $M{M}_{8}^{2}$ Equation (24) | 5.98e−4 | 3.56e−24 | 1.31e−188 |
| Our Method $M{M}_{8}^{3}$ Equation (25) | 1.33e−4 | 5.87e−33 | 6.67e−258 |

**Table 4.** Comparison of the methods on the non-smooth function g_{2}(x).

| Method | \|g_{2}(x_{1})\| | \|g_{2}(x_{2})\| | \|g_{2}(x_{3})\| |
|---|---|---|---|
| Method KTM_{8} Equation (26) | D | D | D |
| Method $S{M}_{8}^{1}$ Equation (27) | D | D | D |
| Method ZM_{8} Equation (28) | 2.63e−1 | 6.16e−7 | 1.87e−40 |
| Method $S{M}_{8}^{2}$ Equation (29) | 0.786e+3 | 0.12e+1 | 1.03e−4 |
| Our Method $M{M}_{8}^{1}$ Equation (23) | 1.09e−1 | 7.44e−7 | 5.53e−13 |
| Our Method $M{M}_{8}^{2}$ Equation (24) | 2.61e−3 | 4.05e−25 | 1.40e−199 |
| Our Method $M{M}_{8}^{3}$ Equation (25) | 2.52e−3 | 3.94e−25 | 1.43e−199 |

D stands for divergent.

© 2015 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/4.0/).