# A Family of Newton Type Iterative Methods for Solving Nonlinear Equations

School of Mathematics and Physics, Bohai University, Jinzhou 121013, China

College of Engineering, Bohai University, Jinzhou 121013, China

Author to whom correspondence should be addressed.

Academic Editors: Alicia Cordero, Juan R. Torregrosa and Francisco I. Chicharro

Received: 9 July 2015 / Revised: 14 September 2015 / Accepted: 15 September 2015 / Published: 22 September 2015

(This article belongs to the Special Issue Numerical Algorithms for Solving Nonlinear Equations and Systems)

In this paper, a general family of n-point Newton type iterative methods for solving nonlinear equations is constructed by using direct Hermite interpolation. The order of convergence of the new n-point iterative methods without memory is ${2}^{n}$, requiring the evaluations of n functions and one first-order derivative per full iteration, which implies that this family is optimal according to Kung and Traub’s conjecture (1974). Its error equations and asymptotic convergence constants are obtained. The n-point iterative methods with memory are obtained by using a self-accelerating parameter, and they achieve much faster convergence than the corresponding n-point methods without memory. The increase of convergence order is attained without any additional calculations, so the n-point Newton type iterative methods with memory possess a very high computational efficiency. Numerical examples are given to confirm the theoretical results.

Solving nonlinear equations by iterative methods has been of great interest to numerical analysts. The most famous one-point iterative method is probably Newton’s method [1]: ${x}_{k+1}={x}_{k}-f\left({x}_{k}\right)/{f}^{\prime}\left({x}_{k}\right),$ which converges quadratically. However, the condition ${f}^{\prime}\left(x\right)\ne 0$ in a neighborhood of the required root is a severe requirement for the convergence of Newton’s method, which restricts its practical applications. To resolve this problem, Wu [2] proposed the following one-point iterative method
where $\lambda \in R,0<\left|\lambda \right|<+\infty $ and λ is chosen such that the corresponding values $\lambda f\left({x}_{k}\right)$ and ${f}^{\prime}\left({x}_{k}\right)$ have the same sign. This method converges quadratically under the condition $\lambda f\left({x}_{k}\right)+{f}^{\prime}\left({x}_{k}\right)\ne 0,$ while ${f}^{\prime}\left({x}_{k}\right)=0$ at some points is permitted. Wang and Zhang [3] obtained the error equation of Equation (1) as follows
where ${e}_{k}={x}_{k}-a,{c}_{k}=(1/k!){f}^{\left(k\right)}\left(a\right)/{f}^{\prime}\left(a\right),k=2,3,\cdots ,$ and a is the root of the nonlinear equation $f\left(x\right)=0.$

$${x}_{k+1}={x}_{k}-\frac{f\left({x}_{k}\right)}{\lambda f\left({x}_{k}\right)+{f}^{\prime}\left({x}_{k}\right)}$$

$${e}_{k+1}=({c}_{2}+\lambda ){e}_{k}^{2}+O({e}_{k}^{3})$$
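For concreteness, Equation (1) can be implemented in a few lines. The following Python sketch is ours (the paper’s experiments use Mathematica and MATLAB); the test function, starting point, and tolerance are illustrative choices, not taken from the paper.

```python
def wu_iteration(f, df, x0, lam, tol=1e-12, max_iter=100):
    """Iterate Equation (1): x_{k+1} = x_k - f(x_k) / (lam*f(x_k) + f'(x_k)).

    lam should be chosen so that lam*f(x_k) and f'(x_k) have the same sign.
    """
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        denom = lam * fx + df(x)
        if denom == 0.0:
            raise ZeroDivisionError("lam*f(x_k) + f'(x_k) vanished")
        x_new = x - fx / denom
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Illustrative example: the real root of x^3 - 2.
root = wu_iteration(lambda x: x**3 - 2, lambda x: 3 * x**2, x0=1.5, lam=0.1)
```

Setting `lam = 0` recovers Newton’s method; a small nonzero `lam` keeps the denominator away from zero when ${f}^{\prime}\left({x}_{k}\right)$ is small.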

The convergence order and computational efficiency of one-point iterative methods are lower than those of multipoint iterative methods. Multipoint iterative methods can overcome the theoretical limits of one-point methods concerning convergence order and computational efficiency. In recent years, many multipoint iterative methods have been proposed for solving nonlinear equations; see [4,5,6,7,8,9,10,11,12,13,14,15,16,17,18]. Wang and Liu [4] developed the following eighth-order iterative method without memory by Hermite interpolation
where $N({x}_{k},{y}_{k},{z}_{k})=f[{z}_{k},{y}_{k}]+2f[{z}_{k},{x}_{k}]-2f[{y}_{k},{x}_{k}]+f[{y}_{k},{x}_{k},{x}_{k}]({y}_{k}-{z}_{k}).$ Using the same strategy, Kou [5] presented a family of eighth-order iterative methods without memory; Equation (3) is a special case of Kou’s method. Petković [6] presented a general class of optimal n-point methods without memory based on Hermite interpolation, which have order of convergence ${2}^{n}$ and require evaluations of n functions and one first-order derivative. Equation (3) is a special case of Petković’s n-point method for $n=3.$ However, Petković’s n-point method gives no specific iterative scheme or error relation for $n\ge 4.$ In this paper, we construct a class of n-point iterative methods with and without memory by Hermite interpolation and give the specific iterative scheme and error relation for all $n\ge 2.$

$$\left\{\begin{array}{cc}\hfill {y}_{k}=& {x}_{k}-\frac{f\left({x}_{k}\right)}{{f}^{\prime}\left({x}_{k}\right)}\hfill \\ \hfill {z}_{k}=& {y}_{k}-\frac{f\left({y}_{k}\right)}{2f[{y}_{k},{x}_{k}]-{f}^{\prime}\left({x}_{k}\right)}\hfill \\ \hfill {x}_{k+1}=& {z}_{k}-\frac{f\left({z}_{k}\right)}{N({x}_{k},{y}_{k},{z}_{k})}\hfill \end{array}\right.$$
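The three-step scheme Equation (3) uses only divided differences of already computed values. Below is a minimal Python sketch of one full iteration; it is our illustration, and the cubic test equation and starting point are hypothetical choices.

```python
def wang_liu_step(f, df, x):
    """One iteration of the eighth-order scheme, Equation (3).

    Divided differences: f[u,v] = (f(u)-f(v))/(u-v),
    f[y,x,x] = (f[y,x] - f'(x))/(y - x).
    """
    fx, dfx = f(x), df(x)
    y = x - fx / dfx
    fy = f(y)
    fyx = (fy - fx) / (y - x)                  # f[y, x]
    z = y - fy / (2 * fyx - dfx)
    fz = f(z)
    fzy = (fz - fy) / (z - y)                  # f[z, y]
    fzx = (fz - fx) / (z - x)                  # f[z, x]
    fyxx = (fyx - dfx) / (y - x)               # f[y, x, x]
    N = fzy + 2 * fzx - 2 * fyx + fyxx * (y - z)
    return z - fz / N

# Illustrative example: the real root of x^3 - 2 from x0 = 1.5.
f = lambda x: x**3 - 2
df = lambda x: 3 * x**2
x1 = wang_liu_step(f, df, 1.5)
x2 = wang_liu_step(f, df, x1)
```

Each step evaluates f three times and f′ once, consistent with the 2^3 = 8 optimal order for four evaluations.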

This paper is organized as follows. In Section 2, based on Wu’s method [2] and Petković’s n-point methods [6], we derive a family of n-point iterative methods without memory for solving nonlinear equations. We prove that the order of convergence of the n-point methods without memory is ${2}^{n}$, requiring the evaluations of n functions and one first-order derivative per full iteration. Kung and Traub [7] conjectured that a multipoint iteration without memory based on n functional evaluations can achieve an optimal convergence order of ${2}^{n-1}$; the new methods without memory agree with this conjecture. Further accelerations of the convergence speed are attained in Section 3, where a family of n-point iterative methods with memory is obtained by using a self-accelerating parameter in each full iteration. This self-accelerating parameter is calculated using information available from the current and previous iterations. Numerical examples are given in Section 4 to confirm the theoretical results.

Based on Wu’s method [2] and Petković’s n-point methods [6], we derive a general optimal ${2}^{n}$th-order family and write it in the following form:
where $N({y}_{k,n-1},{y}_{k,n-2},\cdots ,{y}_{k,1},{y}_{k,0})=f[{y}_{k,n-1},{y}_{k,n-2}]+\cdots +f[{y}_{k,n-1},{y}_{k,n-2},\cdots ,{y}_{k,1},{y}_{k,0},{y}_{k,0}]({y}_{k,n-1}-{y}_{k,n-2})\cdots ({y}_{k,n-1}-{y}_{k,0}),$ ${y}_{k,0}={x}_{k},$ $\lambda \in R$ is a constant, and k is the iteration index. The entries ${y}_{k,0},\cdots ,{y}_{k,n}$ are approximations to the root, with associated errors ${e}_{k,j}={y}_{k,j}-a$ $(j=0,1,\cdots ,n).$

$$\left\{\begin{array}{cc}\hfill {y}_{k,1}=& {y}_{k,0}-\frac{f\left({y}_{k,0}\right)}{\lambda f\left({y}_{k,0}\right)+{f}^{\prime}\left({y}_{k,0}\right)}\hfill \\ \hfill {y}_{k,2}=& {y}_{k,1}-\frac{f\left({y}_{k,1}\right)}{f[{y}_{k,1},{y}_{k,0}]+f[{y}_{k,1},{y}_{k,0},{y}_{k,0}]({y}_{k,1}-{y}_{k,0})}\hfill \\ \hfill \cdots \\ \hfill {y}_{k,n}=& {y}_{k,n-1}-\frac{f\left({y}_{k,n-1}\right)}{N({y}_{k,n-1},{y}_{k,n-2}\cdots ,{y}_{k,1},{y}_{k,0})}\hfill \end{array}\right.$$
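Because the denominator of each substep is the derivative of a Hermite interpolating polynomial with a double node at ${y}_{k,0}$, the whole n-point scheme Equation (4) can be driven by one generic divided-difference table. The sketch below is our illustration (the function names and the cubic test equation are ours, not the paper’s).

```python
def hermite_denominator(f, df, ys):
    """N(y_{j-1}, ..., y_1, y_0): derivative at y_{j-1} of the Hermite
    interpolant through y_{j-1}, ..., y_1, y_0, y_0 (double node at y_0)."""
    nodes = list(reversed(ys)) + [ys[0]]       # [y_{j-1}, ..., y_1, y_0, y_0]
    m = len(nodes)
    dd = [[0.0] * m for _ in range(m)]
    for i in range(m):
        dd[i][0] = f(nodes[i])
    for j in range(1, m):
        for i in range(m - j):
            if j == 1 and nodes[i + 1] == nodes[i]:
                dd[i][1] = df(nodes[i])        # confluent entry uses f'(y_0)
            else:
                dd[i][j] = (dd[i + 1][j - 1] - dd[i][j - 1]) / (nodes[i + j] - nodes[i])
    # Newton form of H'(y_{j-1}): sum of dd[0][j] * prod_{i<j} (y_{j-1} - node_i)
    N, prod = 0.0, 1.0
    for j in range(1, m):
        N += dd[0][j] * prod
        prod *= nodes[0] - nodes[j]
    return N

def npoint_step(f, df, x, lam, n):
    """One full iteration of the n-point scheme, Equation (4)."""
    ys = [x, x - f(x) / (lam * f(x) + df(x))]
    for _ in range(2, n + 1):
        ys.append(ys[-1] - f(ys[-1]) / hermite_denominator(f, df, ys))
    return ys[-1]

# Illustrative example: the real root of x^3 - 2.
f = lambda x: x**3 - 2
df = lambda x: 3 * x**2
x1 = npoint_step(f, df, 1.5, lam=0.5, n=2)     # one fourth-order iteration
x2 = npoint_step(f, df, x1, lam=0.5, n=2)
```

For $n=2$ the table reproduces the denominator $f[{y}_{k,1},{y}_{k,0}]+f[{y}_{k,1},{y}_{k,0},{y}_{k,0}]({y}_{k,1}-{y}_{k,0})$ of the second substep exactly.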

Using the Taylor series and symbolic computation in the programming package Mathematica, we can find the order of convergence and the asymptotic error constant (AEC) of the n-point methods Equation (4) for $n=1,n=2$ and $n=3$, respectively. For simplicity, we sometimes omit the iteration index k and write e instead of ${e}_{k}$. The approximation ${x}_{k+1}$ to the root a will be denoted by $\widehat{x}$. Regarding Equation (4), let us define $x={y}_{k,0},y={y}_{k,1},z={y}_{k,2},e=x-a,d=y-a,p=z-a,e1=\widehat{x}-a.$

The following abbreviations are used in the program.

$$\mathrm{ck}={f}^{\left(k\right)}\left(a\right)/(k!{f}^{\prime}\left(a\right)),\mathrm{e}=x-a,\mathrm{d}=y-a,\mathrm{p}=z-a,\mathrm{e1}=\widehat{x}-a,\mathrm{fx}=f\left({y}_{k,0}\right),\mathrm{fy}=f\left({y}_{k,1}\right)$$

$$\mathrm{dfx}={f}^{\prime}\left({y}_{k,0}\right),\mathrm{fxxy}=f[{y}_{k,0},{y}_{k,0},{y}_{k,1}],\mathrm{fla}={f}^{\prime}\left(a\right),\mathrm{fyz}=f[{y}_{k,1},{y}_{k,2}],\mathrm{fxz}=f[{y}_{k,0},{y}_{k,2}],$$

$$\mathrm{fz}=f\left({y}_{k,2}\right),\mathrm{L}=\lambda ,\mathrm{fzxx}=f[{y}_{k,2},{y}_{k,0},{y}_{k,0}],\mathrm{fxy}=f[{y}_{k,0},{y}_{k,1}],\mathrm{fzxxy}=f[{y}_{k,2},{y}_{k,0},{y}_{k,0},{y}_{k,1}].$$

    fx = fla*(e + c2*e^2 + c3*e^3 + c4*e^4 + c5*e^5 + c6*e^6 + c7*e^7 + c8*e^8);
    dfx = D[fx, e];
    t = Series[fx/(L*fx + dfx), {e, 0, 8}];
    d = Series[e - t, {e, 0, 8}] // Simplify
    fy = Series[fla*(d + c2*d^2 + c3*d^3 + c4*d^4), {e, 0, 8}];
    fxy = Series[(fy - fx)/(d - e), {e, 0, 8}];
    z = Series[d - fy/(2*fxy - dfx), {e, 0, 8}] // Simplify
    fz = Series[fla*(z + c2*z^2), {e, 0, 8}];
    fyz = Series[(fy - fz)/(d - z), {e, 0, 8}];
    fxz = Series[(fx - fz)/(e - z), {e, 0, 8}];
    fxxy = Series[(dfx - fxy)/(e - d), {e, 0, 8}];
    fzxx = Series[(dfx - fxz)/(e - z), {e, 0, 8}];
    fzxxy = Series[(fzxx - fxxy)/(z - d), {e, 0, 8}];
    fzxy = Series[(fxz - fyz)/(e - d), {e, 0, 8}];
    e1 = Series[z - fz/(fyz + fzxy*(z - d) + fzxxy*(z - e)*(z - d)), {e, 0, 8}] // Simplify

$$\mathrm{Out}\left[\mathrm{d}\right]=({c}_{2}+L){e}^{2}+O{\left[e\right]}^{3}$$

$$\mathrm{Out}\left[\mathrm{z}\right]=({c}_{2}+L)({c}_{2}^{2}-{c}_{3}+{c}_{2}L){e}^{4}+O{\left[e\right]}^{5}$$

$$\mathrm{Out}\left[\mathrm{e1}\right]={({c}_{2}+L)}^{2}({c}_{2}^{2}-{c}_{3}+{c}_{2}L)({c}_{2}^{3}-{c}_{2}{c}_{3}+{c}_{4}+{c}_{2}^{2}L){e}^{8}+O{\left[e\right]}^{9}$$

We obtain the asymptotic error constants of n-point methods Equation (4) with $n=1,2,3$. Altogether, we can state the following theorem.

$${e}_{k+1}=({c}_{2}+\lambda ){d}_{2}{e}_{k}^{4}+O\left({e}_{k}^{5}\right)$$

$${e}_{k+1}={({c}_{2}+\lambda )}^{2}{d}_{3}{e}_{k}^{8}+O\left({e}_{k}^{9}\right)$$

The order of convergence of Equation (4) is analyzed in the following theorem.

$${e}_{k+1}={e}_{k,n}={y}_{k,n}-a={({c}_{2}+\lambda )}^{{2}^{n-2}}{d}_{n}{e}_{k}^{{2}^{n}}+O({e}_{k}^{{2}^{n}+1})$$

$${e}_{k,j}={y}_{k,j}-a={({c}_{2}+\lambda )}^{{2}^{j-2}}{d}_{j}{e}_{k}^{{2}^{j}}+O({e}_{k}^{{2}^{j}+1})$$

$$\begin{array}{cc}\hfill {e}_{k+1}=& {e}_{k,n-1}{({f}^{\prime}\left(a\right)+O\left({e}_{k}\right))}^{-1}(f[{y}_{k,n-1},{y}_{k,n-2},{y}_{k,n-3}]{e}_{k,n-1}\cdots +{(-1)}^{n-1}\hfill \\ & \times f[{y}_{k,n-1},{y}_{k,n-2},\cdots ,{y}_{k,1},{y}_{k,0},{y}_{k,0},a]{e}_{k,n-2}\cdots {e}_{k,0}{e}_{k,0}+O({e}_{k}^{{2}^{n-1}+1}))\hfill \\ \hfill =& {e}_{k,n-1}\left({c}_{2}{e}_{k,n-1}+{(-1)}^{n-1}{c}_{n+1}{e}_{k,n-2}\cdots {e}_{k,0}{e}_{k,0}+O({e}_{k}^{{2}^{n-1}+1})\right)\hfill \\ \hfill =& {({c}_{2}+\lambda )}^{{2}^{n-3}}{d}_{n-1}{e}_{k}^{{2}^{n-1}}({c}_{2}{({c}_{2}+\lambda )}^{{2}^{n-3}}{d}_{n-1}{e}_{k}^{{2}^{n-1}}+{(-1)}^{n-1}{c}_{n+1}\hfill \\ & \times {({c}_{2}+\lambda )}^{{2}^{n-4}}{d}_{n-2}{e}_{k}^{{2}^{n-2}}\cdots ({c}_{2}+\lambda ){d}_{2}{e}_{k}^{4}({c}_{2}+\lambda ){d}_{1}{e}_{k}^{2}{d}_{0}{e}_{k}^{2})\hfill \\ \hfill =& {({c}_{2}+\lambda )}^{{2}^{n-2}}{d}_{n-1}[{c}_{2}{d}_{n-1}+{(-1)}^{n-1}{c}_{n+1}{d}_{n-2}\cdots {d}_{2}{d}_{1}{d}_{0}]{e}_{k}^{{2}^{n}}\hfill \end{array}$$

Hence, by induction, we conclude that the error relations can be written in the following form

$${e}_{k+1}={e}_{k,n}={y}_{k,n}-a={({c}_{2}+\lambda )}^{{2}^{n-2}}{d}_{n}{e}_{k}^{{2}^{n}}+O\left({e}_{k}^{{2}^{n}+1}\right)$$

In this section, we improve the convergence order of the family Equation (4). We observe from Equation (13) that the order of convergence of the family Equation (4) is ${2}^{n}$ when $\lambda \ne -{c}_{2}.$ With the choice $\lambda =-{c}_{2}=-{f}^{\prime\prime}\left(a\right)/(2{f}^{\prime}\left(a\right))$, it can be proved that the order of the family Equation (4) would exceed ${2}^{n}$. However, the exact values of ${f}^{\prime}\left(a\right)$ and ${f}^{\prime\prime}\left(a\right)$ are not available in practice, so this acceleration of convergence cannot be realized directly. Instead, we approximate the parameter λ by ${\lambda}_{k}$, which is computed using information available from the current and previous iterations and satisfies ${lim}_{k\to \infty}{\lambda}_{k}=-{c}_{2}=-{f}^{\prime\prime}\left(a\right)/\left(2{f}^{\prime}\left(a\right)\right),$ so that the ${2}^{n}$th-order asymptotic error constant in Equation (13) vanishes. We consider the following three methods for ${\lambda}_{k}$:
where ${H}_{2}\left(x\right)=f\left({y}_{k,0}\right)+f[{y}_{k,0},{y}_{k,0}](x-{y}_{k,0})+f[{y}_{k,0},{y}_{k,0},{y}_{k-1,n-1}]{(x-{y}_{k,0})}^{2}$ and ${H}_{2}^{\prime\prime}\left({y}_{k,0}\right)=2f[{y}_{k,0},{y}_{k,0},{y}_{k-1,n-1}];$
where ${H}_{3}\left(x\right)={H}_{2}\left(x\right)+f[{y}_{k,0},{y}_{k,0},{y}_{k-1,n-1},{y}_{k-1,n-2}]{(x-{y}_{k,0})}^{2}(x-{y}_{k-1,n-1})$ and ${H}_{3}^{\prime\prime}\left({y}_{k,0}\right)={H}_{2}^{\prime \prime}\left({y}_{k,0}\right)+2f[{y}_{k,0},{y}_{k,0},{y}_{k-1,n-1},{y}_{k-1,n-2}]({y}_{k,0}-{y}_{k-1,n-1});$
where ${H}_{4}\left(x\right)={H}_{3}\left(x\right)+f[{y}_{k,0},{y}_{k,0},{y}_{k-1,n-1},{y}_{k-1,n-2},{y}_{k-1,n-3}]{(x-{y}_{k,0})}^{2}(x-{y}_{k-1,n-1})(x-{y}_{k-1,n-2})$ and ${H}_{4}^{\prime\prime}\left({y}_{k,0}\right)={H}_{3}^{\prime\prime}\left({y}_{k,0}\right)+2f[{y}_{k,0},{y}_{k,0},{y}_{k-1,n-1},{y}_{k-1,n-2},{y}_{k-1,n-3}]({y}_{k,0}-{y}_{k-1,n-1})({y}_{k,0}-{y}_{k-1,n-2}).$

$${\lambda}_{k}=-{H}_{2}^{\prime\prime}\left({y}_{k,0}\right)/(2{f}^{\prime}\left({y}_{k,0}\right))$$

$${\lambda}_{k}=-{H}_{3}^{\prime\prime}\left({y}_{k,0}\right)/(2{f}^{\prime}\left({y}_{k,0}\right))$$

$${\lambda}_{k}=-{H}_{4}^{\prime\prime}\left({y}_{k,0}\right)/(2{f}^{\prime}\left({y}_{k,0}\right))$$

The parameter ${\lambda}_{k}$ is recursively calculated as the iteration proceeds, using Equation (14)–Equation (16) in Equation (4). Substituting ${\lambda}_{k}$ for λ in Equation (4), we obtain the following iterative method with memory
where $N({y}_{k,n-1},{y}_{k,n-2},\cdots ,{y}_{k,1},{y}_{k,0})=f[{y}_{k,n-1},{y}_{k,n-2}]+\cdots +f[{y}_{k,n-1},{y}_{k,n-2},\cdots ,{y}_{k,1},{y}_{k,0},{y}_{k,0}]\times ({y}_{k,n-1}-{y}_{k,n-2})\cdots ({y}_{k,n-1}-{y}_{k,0}),$ and the parameter ${\lambda}_{k}$ is calculated by one of Equations (14)–(16), depending on the data available from the current and the previous iterations.

$$\left\{\begin{array}{cc}\hfill {y}_{k,1}=& {y}_{k,0}-\frac{f\left({y}_{k,0}\right)}{{\lambda}_{k}f\left({y}_{k,0}\right)+{f}^{\prime}\left({y}_{k,0}\right)}\hfill \\ \hfill {y}_{k,2}=& {y}_{k,1}-\frac{f\left({y}_{k,1}\right)}{f[{y}_{k,1},{y}_{k,0}]+f[{y}_{k,1},{y}_{k,0},{y}_{k,0}]({y}_{k,1}-{y}_{k,0})}\hfill \\ \hfill \cdots \\ \hfill {y}_{k,n}=& {y}_{k,n-1}-\frac{f\left({y}_{k,n-1}\right)}{N({y}_{k,n-1},{y}_{k,n-2},\cdots ,{y}_{k,1},{y}_{k,0})}\hfill \end{array}\right.$$
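To make the with-memory mechanism concrete, here is a Python sketch of the two-point case ($n=2$) of Equation (17) with ${\lambda}_{k}$ from Equation (14). It is our illustration, not the authors’ code; the cubic test equation, starting values, and iteration count are hypothetical choices.

```python
def with_memory_n2(f, df, x0, lam0, num_iters):
    """Two-point method with memory: Equation (17), n = 2, lambda_k from Equation (14)."""
    x, lam, y_prev = x0, lam0, None
    for _ in range(num_iters):
        if y_prev is not None:
            # Equation (14): lambda_k = -H2''(y_{k,0}) / (2 f'(y_{k,0})),
            # with H2''(y_{k,0}) = 2 f[y_{k,0}, y_{k,0}, y_{k-1,1}].
            slope = (f(y_prev) - f(x)) / (y_prev - x)    # f[y_{k,0}, y_{k-1,1}]
            fxxw = (slope - df(x)) / (y_prev - x)        # f[y_{k,0}, y_{k,0}, y_{k-1,1}]
            lam = -fxxw / df(x)
        y = x - f(x) / (lam * f(x) + df(x))
        fyx = (f(y) - f(x)) / (y - x)                    # f[y_{k,1}, y_{k,0}]
        fyxx = (fyx - df(x)) / (y - x)                   # f[y_{k,1}, y_{k,0}, y_{k,0}]
        x = y - f(y) / (fyx + fyxx * (y - x))
        y_prev = y
    return x

# Illustrative example: the real root of x^3 - 2.
root = with_memory_n2(lambda x: x**3 - 2, lambda x: 3 * x**2,
                      x0=1.5, lam0=0.5, num_iters=3)
```

The first iteration uses the initial guess ${\lambda}_{0}$; every later iteration reuses the stored node ${y}_{k-1,1}$, so the acceleration costs no extra function evaluations.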

- all nodes ${y}_{k,0},{y}_{k-1,n-1},\cdots ,{y}_{k-1,n-m+1}$ are sufficiently close to the zero a;
- the condition ${e}_{k}=O({e}_{k-1,n-1}\cdots {e}_{k-1,n-m+1})$ holds.

Then

$${H}_{m}^{\prime\prime}\left({y}_{k,0}\right)\sim 2{f}^{\prime}\left(a\right)\left({c}_{2}-{(-1)}^{m-1}{c}_{m+1}\prod _{j=1}^{m-1}{e}_{k-1,n-j}\right)$$

and

$$\frac{{H}_{m}^{\prime \prime}\left({y}_{k,0}\right)}{2{f}^{\prime}\left({y}_{k,0}\right)}\sim \left({c}_{2}-{(-1)}^{m-1}{c}_{m+1}\prod _{j=1}^{m-1}{e}_{k-1,n-j}\right)$$

$$f\left(x\right)-{H}_{m}\left(x\right)=\frac{{f}^{(m+1)}\left(\xi \right)}{(m+1)!}{(x-{y}_{k,0})}^{2}\prod _{j=1}^{m-1}(x-{y}_{k-1,n-j}),\quad \xi \in I$$

Differentiating Equation (20) twice and setting $x={y}_{k,0}$, we obtain

$${f}^{\prime\prime}\left({y}_{k,0}\right)-{H}_{m}^{\prime\prime}\left({y}_{k,0}\right)=2\frac{{f}^{(m+1)}\left(\xi \right)}{(m+1)!}\prod _{j=1}^{m-1}({y}_{k,0}-{y}_{k-1,n-j}),\quad \xi \in I$$

$${H}_{m}^{\prime\prime}\left({y}_{k,0}\right)={f}^{\prime\prime}\left({y}_{k,0}\right)-2\frac{{f}^{(m+1)}\left(\xi \right)}{(m+1)!}\prod _{j=1}^{m-1}({y}_{k,0}-{y}_{k-1,n-j}),\quad \xi \in I$$

Taylor’s series of the derivatives of f at the points ${y}_{k,0}\in I$ and $\xi \in I$ about the zero a of f give
where ${e}_{\xi}=\xi -a.$

$${f}^{\prime}\left({y}_{k,0}\right)={f}^{\prime}\left(a\right)(1+2{c}_{2}{e}_{k,0}+3{c}_{3}{e}_{k,0}^{2}+O({e}_{k,0}^{3}))$$

$${f}^{\prime\prime}\left({y}_{k,0}\right)={f}^{\prime}\left(a\right)(2{c}_{2}+6{c}_{3}{e}_{k,0}+O({e}_{k,0}^{2}))$$

$${f}^{(m+1)}\left(\xi \right)={f}^{\prime}\left(a\right)((m+1)!{c}_{m+1}+(m+2)!{c}_{m+2}{e}_{\xi}+O({e}_{\xi}^{2}))$$

Substituting Equation (24) and Equation (25) into Equation (22), we have

$${H}_{m}^{\prime\prime}\left({y}_{k,0}\right)=2{f}^{\prime}\left(a\right)({c}_{2}-{(-1)}^{m-1}{c}_{m+1}\prod _{j=1}^{m-1}{e}_{k-1,n-j})$$

and

$$\frac{{H}_{m}^{\prime \prime}\left({y}_{k,0}\right)}{2{f}^{\prime}\left({y}_{k,0}\right)}\sim \left({c}_{2}-{(-1)}^{m-1}{c}_{m+1}\prod _{j=1}^{m-1}{e}_{k-1,n-j}\right)$$

The concept of the R-order of convergence [1] and the following assertion (see [8]) will be applied to estimate the convergence order of the iterative method with memory, Equation (17). We can now state the following convergence theorem.

$${e}_{k+1}={e}_{k,n}\sim {D}_{k,r}{e}_{k}^{r}\sim {D}_{k,r}{\left({D}_{k-1,r}{e}_{k-1}^{r}\right)}^{r}={D}_{k,r}{D}_{k-1,r}^{r}{e}_{k-1}^{{r}^{2}}$$

$${e}_{k,n-1}\sim {D}_{k,q}{e}_{k}^{q}\sim {D}_{k,q}{\left({D}_{k-1,r}{e}_{k-1}^{r}\right)}^{q}={D}_{k,q}{D}_{k-1,r}^{q}{e}_{k-1}^{rq}$$

From Equation (13), we obtain the following error relations

$${e}_{k,n-1}={y}_{k,n-1}-a\sim {({c}_{2}+\lambda )}^{{2}^{n-3}}{d}_{n-1}{e}_{k}^{{2}^{n-1}}$$

$${e}_{k+1}={y}_{k,n}-a\sim {({c}_{2}+\lambda )}^{{2}^{n-2}}{d}_{n}{e}_{k}^{{2}^{n}}$$

Using Lemma 1 with $m=2$, we obtain

$${\lambda}_{k}\sim -\left({c}_{2}+{c}_{3}{e}_{k-1,n-1}\right)$$

Substituting Equation (32) for λ into Equation (30) and Equation (31), we have

$${e}_{k,n-1}={y}_{k,n-1}-a\sim {(-{c}_{3})}^{{2}^{n-3}}{D}_{k-1,q}^{{2}^{n-3}}{e}_{k-1}^{q{2}^{n-3}}{d}_{n-1}{\left({D}_{k-1,r}{e}_{k-1}^{r}\right)}^{{2}^{n-1}}$$

$$\sim {(-{c}_{3})}^{{2}^{n-3}}{D}_{k-1,q}^{{2}^{n-3}}{D}_{k-1,r}^{{2}^{n-1}}{d}_{n-1}{e}_{k-1}^{r{2}^{n-1}+q{2}^{n-3}}$$

$${e}_{k+1}={y}_{k,n}-a\sim {(-{c}_{3}{e}_{k-1,n-1})}^{{2}^{n-2}}{d}_{n}{e}_{k}^{{2}^{n}}$$

$$\sim {(-{c}_{3})}^{{2}^{n-2}}{D}_{k-1,q}^{{2}^{n-2}}{D}_{k-1,r}^{{2}^{n}}{d}_{n}{e}_{k-1}^{r{2}^{n}+q{2}^{n-2}}$$

By comparing exponents of ${e}_{k-1}$ appearing in two pairs of relations Equation (29)–Equation (33) and Equation (28)–Equation (34), we get the following system of equations

$$\left\{\begin{array}{c}r{2}^{n-1}+q{2}^{n-3}=rq\hfill \\ r{2}^{n}+q{2}^{n-2}={r}^{2}\hfill \end{array}\right.$$

The solution of the system Equation (35) is given by $r={2}^{n}+{2}^{n-3}$ and $q={2}^{n-1}+{2}^{n-4}$. Therefore, the R-order of the methods with memory Equation (17) is at least ${2}^{n}+{2}^{n-3}$ for $n\ge 3$. For example, the R-order of the three-point family Equation (17) is at least 9, and the four-point family has R-order at least 18, assuming that ${\lambda}_{k}$ is calculated by Equation (14).

The case $n=2$ differs from the previous analysis; Hermite’s interpolating polynomial is constructed at the nodes ${y}_{k,0},{y}_{k-1,1}$. Substituting Equation (32) for λ into Equation (2) and Equation (8), we have

$$\begin{array}{cc}\hfill {e}_{k,1}=& {y}_{k,1}-a\sim (-{c}_{3}{e}_{k-1,1}){d}_{1}{e}_{k}^{2}\sim -{c}_{3}{D}_{k-1,q}{e}_{k-1}^{q}{d}_{1}{\left({D}_{k-1,r}{e}_{k-1}^{r}\right)}^{2}\hfill \\ & \sim -{c}_{3}{D}_{k-1,q}{d}_{1}{D}_{k-1,r}^{2}{e}_{k-1}^{2r+q}\hfill \end{array}$$

$$\begin{array}{cc}\hfill {e}_{k+1}=& {e}_{k,2}={y}_{k,2}-a\sim (-{c}_{3}{e}_{k-1,1}){d}_{2}{e}_{k}^{4}\sim -{c}_{3}{D}_{k-1,q}{e}_{k-1}^{q}{d}_{2}{\left({D}_{k-1,r}{e}_{k-1}^{r}\right)}^{4}\hfill \\ & \sim -{c}_{3}{D}_{k-1,q}{d}_{2}{D}_{k-1,r}^{4}{e}_{k-1}^{4r+q}\hfill \end{array}$$

By comparing exponents of ${e}_{k-1}$ appearing in two pairs of relations Equation (29)–Equation (36) and Equation (28)–Equation (37), we get the following system of equations

$$\left\{\begin{array}{c}2r+q=rq\hfill \\ 4r+q={r}^{2}\hfill \end{array}\right.$$

The positive solution of the system Equation (38) is given by $r=(5+\sqrt{17})/2$ and $q=(1+\sqrt{17})/2$. Therefore, the R-order of the methods with memory Equation (17) with Equation (14) is at least $(5+\sqrt{17})/2\approx 4.5616$ for $n=2.$
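The stated solutions of the two exponent systems are easy to verify numerically. The following check is ours, not part of the paper:

```python
def solves_35(n):
    """True if r = 2^n + 2^(n-3), q = 2^(n-1) + 2^(n-4) satisfy Equation (35)."""
    r = 2 ** n + 2 ** (n - 3)
    q = 2 ** (n - 1) + 2 ** (n - 4)
    return (r * 2 ** (n - 1) + q * 2 ** (n - 3) == r * q
            and r * 2 ** n + q * 2 ** (n - 2) == r ** 2)

def solves_38():
    """True if r = (5 + sqrt(17))/2, q = (1 + sqrt(17))/2 satisfy Equation (38)."""
    s = 17 ** 0.5
    r, q = (5 + s) / 2, (1 + s) / 2
    return abs(2 * r + q - r * q) < 1e-12 and abs(4 * r + q - r ** 2) < 1e-12

print(all(solves_35(n) for n in (3, 4, 5)), solves_38())   # prints: True True
```

For $n=3$ this recovers $r=9$, and for $n=4$ it gives $r=18$, matching the R-orders quoted above.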

Now, the new family Equation (4) without memory and the corresponding family Equation (17) with memory are employed to solve some nonlinear equations and are compared with several known iterative methods. All algorithms are implemented using the Symbolic Math Toolbox of MATLAB 7.0. For comparison, we have selected the three methods displayed below.

King’s methods without memory (KM4, see [9]):
where $\beta \in R$.

$$\left\{\begin{array}{cc}\hfill {y}_{n}=& {x}_{n}-\frac{f\left({x}_{n}\right)}{{f}^{\prime}\left({x}_{n}\right)},\hfill \\ \hfill {x}_{n+1}=& {y}_{n}-\frac{f\left({x}_{n}\right)+\beta f\left({y}_{n}\right)}{f\left({x}_{n}\right)+(\beta -2)f\left({y}_{n}\right)}\frac{f\left({y}_{n}\right)}{{f}^{\prime}\left({x}_{n}\right)}\hfill \end{array}\right.$$
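One step of King’s family is also easy to sketch in Python (our illustration; the cubic test problem and parameter choice below are hypothetical, not the paper’s settings):

```python
def king_step(f, df, x, beta):
    """One step of King's fourth-order family, KM4."""
    fx, dfx = f(x), df(x)
    y = x - fx / dfx
    fy = f(y)
    return y - (fx + beta * fy) / (fx + (beta - 2) * fy) * fy / dfx

# Illustrative example: three steps on x^3 - 2 from x0 = 1.5 with beta = 2.
f = lambda x: x**3 - 2
df = lambda x: 3 * x**2
xk = 1.5
for _ in range(3):
    xk = king_step(f, df, xk, beta=2.0)
```

Each step costs two evaluations of f and one of f′, so the fourth order is optimal in the Kung–Traub sense.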

Bi-Wu-Ren method without memory (BRM8, see [10]):
where $h\left(t\right)$ is a real-valued function satisfying the conditions $h\left(0\right)=1,{h}^{\prime}\left(0\right)=2,{h}^{\prime\prime}\left(0\right)=10,\left|{h}^{\prime\prime\prime}\left(0\right)\right|<\infty $, and $\gamma \in R.$

$$\left\{\begin{array}{cc}\hfill {y}_{n}=& {x}_{n}-\frac{f\left({x}_{n}\right)}{{f}^{\prime}\left({x}_{n}\right)}\hfill \\ \hfill {z}_{n}=& {y}_{n}-h\left(\frac{f\left({y}_{n}\right)}{f\left({x}_{n}\right)}\right)\frac{f\left({y}_{n}\right)}{{f}^{\prime}\left({x}_{n}\right)}\hfill \\ \hfill {x}_{n+1}=& {z}_{n}-\frac{f\left({x}_{n}\right)+(\gamma +2)f\left({z}_{n}\right)}{f\left({x}_{n}\right)+\gamma f\left({z}_{n}\right)}\frac{f\left({z}_{n}\right)}{f[{z}_{n},{y}_{n}]+f[{x}_{n},{x}_{n},{z}_{n}]({z}_{n}-{y}_{n})}\hfill \end{array}\right.$$

Petković-Ilić-Džunić method with memory (PD, see [11]):
where ${z}_{n}={x}_{n}-{\gamma}_{n}f\left({x}_{n}\right).$ The parameter ${\gamma}_{n}$ can be calculated by the following two formulas:

$$\left\{\begin{array}{cc}\hfill {y}_{n}=& {x}_{n}-\frac{f\left({x}_{n}\right)}{f[{x}_{n},{z}_{n}]},\hfill \\ \hfill {x}_{n+1}=& {y}_{n}-\frac{f\left({y}_{n}\right)}{f[{x}_{n},{z}_{n}]}\left(1+\frac{f\left({y}_{n}\right)}{f\left({x}_{n}\right)}+\frac{f\left({y}_{n}\right)}{f\left({z}_{n}\right)}\right)\hfill \end{array}\right.$$

$${\gamma}_{n}=({x}_{n}-{y}_{n-1})/(f\left({x}_{n}\right)-f\left({y}_{n-1}\right))$$

$${\gamma}_{n}=1/(f[{x}_{n},{y}_{n-1}]+f[{x}_{n},{x}_{n-1}]-f[{x}_{n-1},{y}_{n-1}])$$

The absolute errors $\left|{x}_{k}-a\right|$ in the first three iterations are given in Table 1, Table 2, Table 3 and Table 4, where a is the exact root computed with 2400 significant digits. The computational order of convergence ρ is defined by [19]:

$$\rho \approx \frac{ln(\left|{x}_{n+1}-{x}_{n}\right|/\left|{x}_{n}-{x}_{n-1}\right|)}{ln(\left|{x}_{n}-{x}_{n-1}\right|/\left|{x}_{n-1}-{x}_{n-2}\right|)}$$
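In code, ρ is computed from four consecutive iterates. The helper below is our Python illustration (the paper’s computations use MATLAB), together with a sanity check on Newton’s method for a hypothetical test equation:

```python
import math

def computational_order(x3, x2, x1, x0):
    """rho ≈ ln(|x3 - x2| / |x2 - x1|) / ln(|x2 - x1| / |x1 - x0|)."""
    return (math.log(abs(x3 - x2) / abs(x2 - x1))
            / math.log(abs(x2 - x1) / abs(x1 - x0)))

# Sanity check: Newton's method on x^2 - 2 should give rho close to 2.
xs = [1.5]
for _ in range(3):
    xs.append(xs[-1] - (xs[-1] ** 2 - 2) / (2 * xs[-1]))
rho = computational_order(xs[3], xs[2], xs[1], xs[0])
```

Because the estimate uses differences of iterates rather than the unknown root, it can be evaluated directly from the data reported in the tables.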

The iterative processes of Equations (4) and (17) are shown in Figure 1, where Equation (4) with n=1 is a one-point method. The parameters of Equations (4) and (17) are $\lambda ={\lambda}_{0}=1.0$. The initial value is ${x}_{0}=-1.3$. The stopping criterion is $\left|f\left(x\right)\right|<{10}^{-150}$. We call $f\left(x\right)$ the nonlinear residual, or simply the residual. Figure 1 is a semilog plot of the residual history: the norm of the nonlinear residual against the iteration number.

The following test functions are used:

$${f}_{1}\left(x\right)=x{e}^{{x}^{2}}-{sin}^{2}\left(x\right)+3cos\left(x\right)+5,a\approx -1.2076478271309189,{x}_{0}=-1.3.$$

$${f}_{2}\left(x\right)={x}^{5}+{x}^{4}+4{x}^{2}-15,a\approx 1.3474280989683050,{x}_{0}=1.6.$$

| Methods | $|{x}_{1}-a|$ | $|{x}_{2}-a|$ | $|{x}_{3}-a|$ | ρ |
|---|---|---|---|---|
| Equation (4), $n=2,\lambda =0.5$ | 0.32719E−4 | 0.57076E−18 | 0.52848E−73 | 4.0000005 |
| Equation (4), $n=2,\lambda =1$ | 0.58111E−4 | 0.71445E−17 | 0.16328E−68 | 3.9999938 |
| KM4, $\beta =2$ | 0.24269E−3 | 0.13078E−13 | 0.11033E−54 | 3.9999864 |
| BRM8, $h\left(t\right)=1+2t+5{t}^{2},\gamma =1$ | 0.40513E−6 | 0.32351E−48 | 0.53484E−385 | 8.0000001 |
| Equation (4), $n=3,\lambda =1$ | 0.22673E−8 | 0.83510E−70 | 0.28282E−561 | 8.0000000 |
| Equation (4), $n=3,\lambda =1.5$ | 0.18012E−9 | 0.75259E−83 | 0.69916E−670 | 8.0000000 |

| Methods | $|{x}_{1}-a|$ | $|{x}_{2}-a|$ | $|{x}_{3}-a|$ | ρ |
|---|---|---|---|---|
| Equation (4), $n=2,\lambda =-1.5$ | 0.29673E−2 | 0.37452E−10 | 0.94752E−42 | 4.0001713 |
| Equation (4), $n=2,\lambda =-0.5$ | 0.27276E−4 | 0.11867E−19 | 0.42516E−81 | 4.0000025 |
| KM4, $\beta =0.5$ | 0.37189E−2 | 0.32631E−9 | 0.19533E−37 | 3.9993916 |
| BRM8, $h\left(t\right)=1+2t+5{t}^{2},\gamma =1$ | 0.84179E−4 | 0.62964E−31 | 0.61512E−248 | 8.0000456 |
| Equation (4), $n=3,\lambda =-1$ | 0.34838E−7 | 0.19030E−62 | 0.15080E−504 | 8.0000000 |
| Equation (4), $n=3,\lambda =-0.5$ | 0.11873E−7 | 0.80149E−66 | 0.34562E−531 | 8.0000000 |

| Methods | $|{x}_{1}-a|$ | $|{x}_{2}-a|$ | $|{x}_{3}-a|$ | ρ |
|---|---|---|---|---|
| Equation (42)-PD, ${\gamma}_{0}=-0.01$ | 0.10690E−2 | 0.10554E−12 | 0.24668E−58 | 4.5605896 |
| Equation (43)-PD, ${\gamma}_{0}=-0.01$ | 0.10690E−2 | 0.58225E−14 | 0.18875E−67 | 4.7487424 |
| Equation (14)-(17), $n=2,{\lambda}_{0}=0.5$ | 0.32719E−4 | 0.42649E−19 | 0.26035E−87 | 4.5827899 |
| Equation (15)-(17), $n=2,{\lambda}_{0}=0.5$ | 0.32719E−4 | 0.47493E−20 | 0.16676E−96 | 4.8272294 |
| Equation (14)-(17), $n=2,{\lambda}_{0}=1$ | 0.58111E−4 | 0.25364E−18 | 0.61743E−84 | 4.5691828 |
| Equation (15)-(17), $n=2,{\lambda}_{0}=1$ | 0.58111E−4 | 0.28197E−19 | 0.69228E−93 | 4.8066915 |
| Equation (14)-(17), $n=3,{\lambda}_{0}=1$ | 0.22673E−8 | 0.14247E−76 | 0.38886E−690 | 8.9963034 |
| Equation (15)-(17), $n=3,{\lambda}_{0}=1$ | 0.22673E−8 | 0.53419E−81 | 0.96778E−777 | 9.5795515 |
| Equation (16)-(17), $n=3,{\lambda}_{0}=1$ | 0.22673E−8 | 0.45910E−83 | 0.96092E−815 | 9.7957408 |
| Equation (14)-(17), $n=3,{\lambda}_{0}=1.5$ | 0.18012E−9 | 0.49194E−86 | 0.27126E−775 | 9.0024260 |
| Equation (15)-(17), $n=3,{\lambda}_{0}=1.5$ | 0.18012E−9 | 0.13193E−91 | 0.20518E−878 | 9.5794268 |
| Equation (16)-(17), $n=3,{\lambda}_{0}=1.5$ | 0.18012E−9 | 0.11706E−93 | 0.17692E−918 | 9.7974669 |

| Methods | $|{x}_{1}-a|$ | $|{x}_{2}-a|$ | $|{x}_{3}-a|$ | ρ |
|---|---|---|---|---|
| Equation (42)-PD, ${\gamma}_{0}=-0.01$ | 0.14930E−1 | 0.54292E−8 | 0.20342E−37 | 4.5697804 |
| Equation (43)-PD, ${\gamma}_{0}=-0.01$ | 0.14930E−1 | 0.32753E−9 | 0.16659E−45 | 4.7387964 |
| Equation (14)-(17), $n=2,{\lambda}_{0}=-1.5$ | 0.29673E−2 | 0.10381E−11 | 0.90169E−55 | 4.5538013 |
| Equation (15)-(17), $n=2,{\lambda}_{0}=-1.5$ | 0.29673E−2 | 0.13370E−13 | 0.29875E−67 | 4.7285160 |
| Equation (14)-(17), $n=2,{\lambda}_{0}=-0.5$ | 0.27276E−4 | 0.76276E−20 | 0.21310E−91 | 4.6005252 |
| Equation (15)-(17), $n=2,{\lambda}_{0}=-0.5$ | 0.27276E−4 | 0.62055E−21 | 0.70672E−102 | 4.8635157 |
| Equation (14)-(17), $n=3,{\lambda}_{0}=-1$ | 0.34838E−7 | 0.12841E−67 | 0.15487E−611 | 9.0002878 |
| Equation (15)-(17), $n=3,{\lambda}_{0}=-1$ | 0.34838E−7 | 0.34679E−73 | 0.10151E−705 | 9.5835521 |
| Equation (16)-(17), $n=3,{\lambda}_{0}=-1$ | 0.34838E−7 | 0.41211E−75 | 0.11560E−741 | 9.8127640 |
| Equation (14)-(17), $n=3,{\lambda}_{0}=-0.5$ | 0.11873E−7 | 0.35119E−73 | 0.13260E−661 | 8.9795793 |
| Equation (15)-(17), $n=3,{\lambda}_{0}=-0.5$ | 0.11873E−7 | 0.43166E−77 | 0.67183E−743 | 9.5883270 |
| Equation (16)-(17), $n=3,{\lambda}_{0}=-0.5$ | 0.11873E−7 | 0.45981E−83 | 0.29759E−820 | 9.7754885 |

Figure 1 shows that the convergence speed of the multipoint iterative methods is faster than that of the one-point iterative method. As shown in Table 1 and Table 2, the results obtained with our methods without memory are better than those of the other methods without memory. From the results displayed in Table 3 and Table 4, it can be concluded that the convergence of the tested multipoint Equation (17) with memory is remarkably fast. The R-order of convergence of the family Equation (17) with memory is increased by applying a self-accelerating parameter given by Equation (14)–Equation (16). Above all, the increase of convergence order is obtained without any additional function evaluations, which indicates the very high computational efficiency of our methods with memory.

This work was supported by the National Natural Science Foundation of China (Nos. 11371071 and 11201037), the PhD Start-up Fund of Liaoning Province of China (Nos. 20141137 and 20141139), the Liaoning BaiQianWan Talents Program (No. 2013921055), and the Educational Commission Foundation of Liaoning Province of China (Nos. L2014443, L2014444 and L2015012).

Xiaofeng Wang and Yuping Qin conceived and designed the experiments; Weiyi Qian and Sheng Zhang analyzed the data; Xiaofeng Wang and Xiaodong Fan wrote the paper.

The authors declare no conflict of interest.

1. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970.
2. Wu, X. A new continuation Newton-like method and its deformation. Appl. Math. Comput. **2000**, 112, 75–78.
3. Wang, X.; Zhang, T. A new family of Newton-type iterative methods with and without memory for solving nonlinear equations. Calcolo **2014**, 51, 1–15.
4. Wang, X.; Liu, L. Modified Ostrowski’s method with eighth-order convergence and high efficiency index. Appl. Math. Lett. **2010**, 23, 549–554.
5. Kou, J.; Wang, X.; Li, Y. Some eighth-order root-finding three-step methods. Commun. Nonlinear Sci. Numer. Simulat. **2010**, 15, 536–544.
6. Petković, M.S. Remarks on “On a general class of multipoint root-finding methods of high computational efficiency”. SIAM J. Numer. Anal. **2011**, 49, 1317–1319.
7. Kung, H.T.; Traub, J.F. Optimal order of one-point and multipoint iteration. J. ACM **1974**, 21, 643–651.
8. Alefeld, G.; Herzberger, J. Introduction to Interval Computations; Academic Press: New York, NY, USA, 1983.
9. King, R.F. A family of fourth order methods for nonlinear equations. SIAM J. Numer. Anal. **1973**, 10, 876–879.
10. Bi, W.; Wu, Q.; Ren, H. A new family of eighth-order iterative methods for solving nonlinear equations. Appl. Math. Comput. **2009**, 214, 236–245.
11. Petković, M.S.; Ilić, S.; Džunić, J. Derivative free two-point methods with and without memory for solving nonlinear equations. Appl. Math. Comput. **2010**, 217, 1887–1895.
12. Thukral, R.; Petković, M.S. A family of three-point methods of optimal order for solving nonlinear equations. J. Comput. Appl. Math. **2010**, 233, 2278–2284.
13. Wang, X.; Zhang, T. A family of Steffensen type methods with seventh-order convergence. Numer. Algor. **2013**, 62, 429–444.
14. Wang, X.; Džunić, J.; Zhang, T. On an efficient family of derivative free three-point methods for solving nonlinear equations. Appl. Math. Comput. **2012**, 219, 1749–1760.
15. Soleymani, F.; Sharifi, M.; Mousavi, B.S. An improvement of Ostrowski’s and King’s techniques with optimal convergence order eight. J. Optim. Theory Appl. **2012**, 153, 225–236.
16. Yun, B.I. A non-iterative method for solving non-linear equations. Appl. Math. Comput. **2008**, 198, 691–699.
17. Sharma, J.R.; Sharma, R. A new family of modified Ostrowski’s methods with accelerated eighth order convergence. Numer. Algor. **2010**, 54, 445–458.
18. Cordero, A.; Ezquerro, J.A.; Hernández-Verón, M.A.; Torregrosa, J.R. On the local convergence of a fifth-order iterative method in Banach spaces. Appl. Math. Comput. **2015**, 251, 396–403.
19. Grau-Sánchez, M.; Noguera, M.; Gutiérrez, J.M. On some computational orders of convergence. Appl. Math. Lett. **2010**, 23, 472–478.

© 2015 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/4.0/).