# On the Local Convergence of a Third Order Family of Iterative Processes


Department of Mathematics and Computation, University of La Rioja, Calle Luis de Ulloa s/n, Logroño, Spain

Author to whom correspondence should be addressed.

These authors contributed equally to this work.

Academic Editor: Alicia Cordero

Received: 7 September 2015 / Revised: 24 November 2015 / Accepted: 26 November 2015 / Published: 1 December 2015

(This article belongs to the Special Issue Numerical Algorithms for Solving Nonlinear Equations and Systems)

Efficiency is generally the most important aspect to take into account when choosing an iterative method to approximate a solution of an equation, but it is not the only aspect to consider. Another important aspect is the accessibility of the iterative process, which describes the domain of starting points from which the iterative process converges to a solution of the equation. So, we consider a family of iterative processes with a higher efficiency index than Newton’s method. However, this family of processes presents problems of accessibility to the solution ${x}^{*}$. From a local study of the convergence of this family, we perform an optimization study of the accessibility and obtain iterative processes with better accessibility than Newton’s method.

In general, the roots of a nonlinear equation
cannot be expressed in closed form, so this problem is commonly tackled by applying iterative methods. Thus, starting from one or several initial approximations of a solution ${x}^{*}$ of Equation (1), a sequence $\left\{{x}_{n}\right\}$ of approximations is constructed so that it converges to ${x}^{*}$. We can obtain the sequence $\left\{{x}_{n}\right\}$ in different ways, depending on the iterative method that is applied. The best-known iterative scheme is Newton’s method,

$$F\left(x\right)=0$$

$${x}_{0}\phantom{\rule{4.pt}{0ex}}\text{given}\phantom{\rule{4.pt}{0ex}}\text{in}\phantom{\rule{4.pt}{0ex}}\Omega ,\phantom{\rule{1.em}{0ex}}{x}_{n+1}={x}_{n}-{\left[{F}^{\prime}\left({x}_{n}\right)\right]}^{-1}F\left({x}_{n}\right),\phantom{\rule{4pt}{0ex}}n\ge 0$$

Observe that we need the operator F to be Fréchet differentiable in order to apply Newton’s method.
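As a concrete illustration (ours, not part of the original exposition), Newton’s method Equation (2) for a system can be sketched in Python; solving the linear system with the Jacobian avoids forming ${\left[{F}^{\prime}\left({x}_{n}\right)\right]}^{-1}$ explicitly.

```python
import numpy as np

def newton(F, J, x0, tol=1e-12, max_iter=50):
    """Newton's method: x_{n+1} = x_n - [F'(x_n)]^{-1} F(x_n).

    Solving the linear system F'(x_n) s = F(x_n) avoids forming the
    inverse of the Jacobian explicitly.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        s = np.linalg.solve(J(x), F(x))   # Newton correction
        x = x - s
        if np.linalg.norm(s, ord=np.inf) < tol:
            break
    return x
```

For instance, applied to the system $F(x,y,z)=(x,{y}^{2}+y,{e}^{z}-1)$ used later in the paper, the iteration converges rapidly to ${x}^{*}=(0,0,0)$.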

Efficiency is generally the most important aspect to take into account when choosing an iterative method to approximate a solution of Equation (1) (see [1,2,3,4]). The classic efficiency index, defined by Ostrowski in [5], provides a balance between the order of convergence ρ and the number of functional evaluations d: $I={\rho}^{\frac{1}{d}}$. However, it is interesting to note that if Equation (1) represents a system of n nonlinear equations, the computational costs of evaluating the operators F and ${F}^{\prime}$ are not similar, unlike what happens in the case of scalar equations. So, one evaluation of F requires n functional evaluations, while the evaluation of the associated Jacobian matrix ${F}^{\prime}$ requires ${n}^{2}$ functional evaluations, so that the evaluations of F and ${F}^{\prime}$ cannot be counted in the same way. Therefore, this efficiency index must be modified to account for the actual number of functional evaluations in order for it to be a good efficiency measurement for an iterative process in the multivariate case.
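To make this modified count concrete, here is a small sketch (ours; the evaluation counts assume a generic multipoint scheme of order 3 that evaluates F twice and ${F}^{\prime}$ once per step, as family Equation (3) below does) comparing Ostrowski’s index for Newton’s method and such a scheme in dimension n.

```python
def efficiency_index(rho, d):
    """Ostrowski's index I = rho**(1/d) for order rho and d evaluations."""
    return rho ** (1.0 / d)

def newton_evals(n):
    # one F (n scalar evaluations) plus one Jacobian F' (n**2) per step
    return n + n**2

def two_point_evals(n):
    # two F-evaluations (2n) plus one Jacobian (n**2) per step,
    # as in a multipoint scheme that reuses F'(x_n)
    return 2 * n + n**2
```

Under this count the third-order scheme beats Newton’s method ($I={2}^{1/(n+{n}^{2})}$ versus ${3}^{1/(2n+{n}^{2})}$) for every dimension $n\ge 1$.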

Taking into account this extension of the efficiency index, and starting from the Chebyshev method with cubical convergence, Ezquerro and Hernández define in [6] a family of multipoint iterative processes with cubical convergence under the usual conditions required of the operator F. This family of iterative processes is given by
and has a better efficiency index than Newton’s method Equation (2). In this paper, we focus our attention on the analysis of the local convergence of sequence Equation (3) in Banach spaces under weak convergence conditions. In fact, we only consider conditions on the first derivative of the operator F, as in the case of Newton’s method [7].

$$\left\{\begin{array}{c}{x}_{0}\in \Omega ,\hfill \\ {y}_{n}={x}_{n}-{\Gamma}_{n}F\left({x}_{n}\right),\phantom{\rule{1.em}{0ex}}{\Gamma}_{n}={\left[{F}^{\prime}\left({x}_{n}\right)\right]}^{-1}\hfill \\ {z}_{n}={x}_{n}+p\phantom{\rule{0.166667em}{0ex}}({y}_{n}-{x}_{n}),\phantom{\rule{1.em}{0ex}}p\in (0,1],\hfill \\ {\displaystyle {x}_{n+1}={z}_{n}-\frac{1}{{p}^{2}}{\Gamma}_{n}\left((1-{p}^{2})(p-1)F\left({x}_{n}\right)+F\left({z}_{n}\right)\right),\phantom{\rule{1.em}{0ex}}n\ge 0}\hfill \end{array}\right.$$
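A minimal sketch of one step of family Equation (3), assuming a NumPy-style interface where `F` returns the residual vector and `J` the Jacobian matrix (both names are ours):

```python
import numpy as np

def family_step(F, J, x, p):
    """One iteration of family (3) for a fixed p in (0, 1].

    The Jacobian F'(x_n) is evaluated (and could be factorized) only
    once per step and reused in both linear solves.
    """
    Jx = J(x)
    y = x - np.linalg.solve(Jx, F(x))               # Newton predictor y_n
    z = x + p * (y - x)                             # intermediate point z_n
    r = (1.0 - p**2) * (p - 1.0) * F(x) + F(z)      # combined residual
    return z - np.linalg.solve(Jx, r) / p**2        # corrector x_{n+1}
```

Note that for $p=1$ the scheme reduces to ${x}_{n+1}={y}_{n}-{\Gamma}_{n}F\left({y}_{n}\right)$, a two-step method that reuses ${\Gamma}_{n}$ in the corrector; this reuse is the source of the efficiency gain over Newton’s method.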

In any case, the efficiency index is not the only aspect to consider when comparing iterative processes. Another important aspect to take into account is the domain of accessibility associated with the iterative process, which is defined as the domain of starting points from which the iterative process converges to a solution of the equation. The location of starting approximations, from which the iterative methods converge to a solution of the equation, is a difficult problem to solve, and it follows from the convergence study carried out for the iterative process. The analysis of the accessibility of an iterative process from a local study of the convergence is based on imposing conditions at the solution ${x}^{*}$, together with certain conditions on the operator F, and provides the so-called ball of convergence of the iterative process, which guarantees the accessibility of ${x}^{*}$ from any initial approximation ${x}_{0}$ belonging to the ball. As we indicated previously, family Equation (3) has a higher efficiency index than Newton’s method, but it presents problems of accessibility to the solution ${x}^{*}$ of Equation (1) if we consider a semilocal study of the convergence (see [6]).

The paper is organized as follows. In Section 2, a local convergence result is provided. In Section 3, from the previous local convergence result, we observe that the size of the ball of convergence of family Equation (3) depends on the parameter $p\in (0,1]$, and we then carry out an optimization study. Moreover, we carry out a comparative study with Newton’s method from the point of view of the accessibility or, equivalently, of the balls of convergence. Finally, in the last section, a numerical test confirms the theoretical results obtained.

As we indicated above, local convergence results for iterative methods require conditions on the operator F and the solution ${x}^{*}$ of Equation (1). Note that a local result provides what we call the ball of convergence, denoted by $B({x}^{*},\tilde{R})$. Through the value $\tilde{R}$, the ball of convergence gives information about the accessibility of the solution ${x}^{*}$.

An interesting local result for Newton’s method Equation (2) is given in [7] by Dennis and Schnabel, where the following conditions are required:

- $\left({C}_{1}\right)$
- Let ${x}^{*}$ be a solution of Equation (1) and let there exist $r>0$ such that $B({x}^{*},r)\subset \Omega $ and the operator ${\left[{F}^{\prime}\left({x}^{*}\right)\right]}^{-1}$ exists with $\parallel {\left[{F}^{\prime}\left({x}^{*}\right)\right]}^{-1}\parallel \le \beta $,
- $\left({C}_{2}\right)$
- $\parallel {F}^{\prime}\left(x\right)-{F}^{\prime}\left(y\right)\parallel \le K\parallel x-y\parallel $, $K\ge 0$, for all $x,y\in \Omega $.

Now, our aim is to establish the local convergence study for family Equation (3), which has cubical convergence (see [6]), from conditions $\left({C}_{1}\right)$ and $\left({C}_{2}\right)$. Thus, we obtain a local convergence result under the same initial conditions as for Newton’s method, which is a second order method.

Firstly, we provide two technical lemmas, where we obtain some results about the operator F and the sequences $\left\{{x}_{n}\right\}$, $\left\{{y}_{n}\right\}$ and $\left\{{z}_{n}\right\}$, which define family Equation (3).

Let us suppose that conditions $\left({C}_{1}\right)$–$\left({C}_{2}\right)$ are satisfied and that there exists a positive real number S with $S\le r$. If $\beta KS<1$, then, for all $x\in B({x}^{*},S)$, the operator ${\left[{F}^{\prime}\left(x\right)\right]}^{-1}$ exists and

$$\parallel {\left[{F}^{\prime}\left(x\right)\right]}^{-1}\parallel \le \frac{\beta}{1-\beta KS}$$

Taking into account the Banach lemma on invertible operators and conditions $\left({C}_{1}\right)$–$\left({C}_{2}\right)$, the proof follows. □

For a fixed $n\in \mathbb{N}$, we assume that ${x}_{n},{z}_{n}\in B({x}^{*},r)$ and that ${\Gamma}_{n}$ exists. Then, we obtain the following decompositions

$$\begin{array}{cc}\hfill \left({i}_{n}\right)\phantom{\rule{4pt}{0ex}}\phantom{\rule{4pt}{0ex}}& {y}_{n}-{x}^{*}={\Gamma}_{n}{\int}_{0}^{1}\left({F}^{\prime}({x}_{n}+\tau ({x}^{*}-{x}_{n}))-{F}^{\prime}\left({x}_{n}\right)\right)\phantom{\rule{0.166667em}{0ex}}d\tau \phantom{\rule{0.166667em}{0ex}}({x}^{*}-{x}_{n}),\hfill \\ \hfill \left(i{i}_{n}\right)\phantom{\rule{4pt}{0ex}}\phantom{\rule{4pt}{0ex}}& {z}_{n}-{x}^{*}=(p-1)({x}^{*}-{x}_{n})+p{\Gamma}_{n}{\int}_{0}^{1}\left({F}^{\prime}({x}_{n}+\tau ({x}^{*}-{x}_{n}))-{F}^{\prime}\left({x}_{n}\right)\right)\phantom{\rule{0.166667em}{0ex}}d\tau \phantom{\rule{0.166667em}{0ex}}({x}^{*}-{x}_{n}),\hfill \\ \hfill \left(ii{i}_{n+1}\right)\phantom{\rule{4pt}{0ex}}\phantom{\rule{4pt}{0ex}}& {x}_{n+1}-{x}^{*}=\frac{1}{{p}^{2}}{\Gamma}_{n}{\int}_{0}^{1}\left({F}^{\prime}({z}_{n}+\tau ({x}^{*}-{z}_{n}))-{F}^{\prime}\left({x}_{n}\right)\right)\phantom{\rule{0.166667em}{0ex}}d\tau \phantom{\rule{0.166667em}{0ex}}({x}^{*}-{z}_{n})\hfill \\ \\ & +\frac{{p}^{2}-1}{{p}^{2}}{\Gamma}_{n}{\int}_{0}^{1}\left({F}^{\prime}({x}_{n}+\tau ({x}^{*}-{x}_{n}))-{F}^{\prime}\left({x}_{n}\right)\right)\phantom{\rule{0.166667em}{0ex}}d\tau \phantom{\rule{0.166667em}{0ex}}({x}^{*}-{x}_{n})\hfill \\ \hfill \end{array}$$

From the Taylor series of $F\left({x}^{*}\right)$, $\left({i}_{n}\right)$ and $\left(i{i}_{n}\right)$ are easily obtained. Now, by considering iterative scheme Equation (3), it follows $\left(ii{i}_{n+1}\right)$:
□

$$\begin{array}{ccc}\hfill {x}_{n+1}-{x}^{*}& =& {z}_{n}-\frac{1}{{p}^{2}}{\Gamma}_{n}\left((p-1)(1-{p}^{2})F\left({x}_{n}\right)+F\left({z}_{n}\right)\right)-{x}^{*}\hfill \\ & =& {\Gamma}_{n}\left(-{F}^{\prime}\left({x}_{n}\right)({x}^{*}-{z}_{n})-\frac{1}{{p}^{2}}F\left({z}_{n}\right)-\frac{(p-1)(1-{p}^{2})}{{p}^{2}}F\left({x}_{n}\right)\right)\hfill \\ & =& {\Gamma}_{n}\left(\frac{1}{{p}^{2}}({z}_{n}-{x}^{*})+\frac{{p}^{2}-1}{{p}^{2}}{F}^{\prime}\left({x}_{n}\right)({z}_{n}-{x}^{*})+\frac{1}{{p}^{2}}F\left({x}^{*}\right)-\frac{1}{{p}^{2}}F\left({z}_{n}\right)\right.\hfill \\ & & \left.-\frac{(p-1)(1-{p}^{2})}{{p}^{2}}F\left({x}_{n}\right)\right)\hfill \\ & =& {\Gamma}_{n}\left(\frac{1}{{p}^{2}}{\int}_{{z}_{n}}^{{x}^{*}}\left({F}^{\prime}\left(\xi \right)-{F}^{\prime}\left({x}_{n}\right)\right)d\xi +\frac{{p}^{2}-1}{{p}^{2}}{F}^{\prime}\left({x}_{n}\right)({x}_{n}-p{\Gamma}_{n}F\left({x}_{n}\right)-{x}^{*})\right.\hfill \\ & & \left.-\frac{(p-1)(1-{p}^{2})}{{p}^{2}}F\left({x}_{n}\right)\right)\hfill \\ & =& {\Gamma}_{n}\left(\frac{1}{{p}^{2}}{\int}_{{z}_{n}}^{{x}^{*}}\left({F}^{\prime}\left(\xi \right)-{F}^{\prime}\left({x}_{n}\right)\right)d\xi +\frac{{p}^{2}-1}{{p}^{2}}F\left({x}^{*}\right)-\frac{{p}^{2}-1}{{p}^{2}}F\left({x}_{n}\right)\right.\hfill \\ & & \left.-\frac{({p}^{2}-1)}{{p}^{2}}{F}^{\prime}\left({x}_{n}\right)({x}^{*}-{x}_{n})\right)\hfill \\ & =& {\Gamma}_{n}\left(\frac{1}{{p}^{2}}{\int}_{{z}_{n}}^{{x}^{*}}\left({F}^{\prime}\left(\xi \right)-{F}^{\prime}\left({x}_{n}\right)\right)d\xi +\frac{{p}^{2}-1}{{p}^{2}}{\int}_{{x}_{n}}^{{x}^{*}}\left({F}^{\prime}\left(\xi \right)-{F}^{\prime}\left({x}_{n}\right)\right)d\xi \right)\hfill \end{array}$$

Now, we are able to obtain a local convergence result for family Equation (3).

Let $F:\Omega \subseteq X\to Y$ be a nonlinear continuously differentiable operator on a non-empty open convex domain Ω of a Banach space X with values in a Banach space Y. Suppose that conditions $\left({C}_{1}\right)$ and $\left({C}_{2}\right)$ are satisfied. Then, for a fixed $p\in (0,1]$, there exists $\tilde{R}>0$, such that the sequence $\left\{{x}_{n}\right\}$, given by the iterative process of family Equation (3) corresponding to the prefixed value of p, is well-defined and converges to a solution ${x}^{*}$ of the equation $F\left(x\right)=0$ from every point ${x}_{0}\in B({x}^{*},\tilde{R})$.

Firstly, for a fixed $p\in (0,1]$, we define two auxiliary scalar functions:
with $t\in \mathbb{R}$ and $p\in (0,1]$, which are strictly increasing in ${\mathbb{R}}_{+}$.

$$f\left(t\right)=1-p+p\frac{t}{2}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}and\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}g\left(t\right)=\frac{t}{2{p}^{2}}\left(f{\left(t\right)}^{2}+2f\left(t\right)+1-{p}^{2}\right)$$

Secondly, let $\tilde{R}<min\{r,R\}$, where $R=\frac{{\zeta}_{p}}{(1+{\zeta}_{p})\beta K}$ and ${\zeta}_{p}$ is the positive real root of the equation $g\left(t\right)-1=0$. As ${x}_{0}\in B({x}^{*},\tilde{R})$, with $\tilde{R}<r$ and $\beta K\tilde{R}<\beta KR<1$, then by Lemma 1 the operator ${\Gamma}_{0}={\left[{F}^{\prime}\left({x}_{0}\right)\right]}^{-1}$ exists and is such that

$$\parallel {\Gamma}_{0}\parallel <\frac{\beta}{1-\beta K\tilde{R}}$$

So, ${y}_{0}$ is well-defined and, from Lemma 2 $\left({i}_{0}\right)$, we obtain
where $\delta =\frac{\beta K\tilde{R}}{1-\beta K\tilde{R}}$.

$$\parallel {y}_{0}-{x}^{*}\parallel <\frac{\delta}{2}\parallel {x}_{0}-{x}^{*}\parallel $$

As $\tilde{R}<R$, then $\delta <\frac{\beta KR}{1-\beta KR}={\zeta}_{p}$. On the other hand, taking into account that $g\left(2\right)=\frac{4-{p}^{2}}{{p}^{2}}>1$ and $g\left({\zeta}_{p}\right)=1$, it follows that $\frac{\delta}{2}<1$ and then $\parallel {y}_{0}-{x}^{*}\parallel <\frac{\delta}{2}\parallel {x}_{0}-{x}^{*}\parallel <\parallel {x}_{0}-{x}^{*}\parallel $. Therefore ${y}_{0}\in B({x}^{*},\tilde{R})$.

Moreover, ${z}_{0}$ is well-defined and ${z}_{0}\in B({x}^{*},\tilde{R})$, since, from item $\left(i{i}_{0}\right)$ of Lemma 2, it follows
as a consequence of $f\left(\delta \right)<1$.

$$\parallel {z}_{0}-{x}^{*}\parallel <\left(1-p+p\frac{\delta}{2}\right)\parallel {x}_{0}-{x}^{*}\parallel <\parallel {x}_{0}-{x}^{*}\parallel $$

Furthermore, ${x}_{1}$ is well-defined and, from item $\left(ii{i}_{1}\right)$ of Lemma 2, we obtain that ${x}_{1}\in B({x}^{*},\tilde{R})$, since

$$\begin{array}{ccc}\hfill \parallel {x}_{1}-{x}^{*}\parallel & \u2a7d& \frac{1}{{p}^{2}}\parallel {\Gamma}_{0}\parallel \left({\int}_{0}^{1}\parallel {F}^{\prime}({z}_{0}+\tau ({x}^{*}-{z}_{0}))-{F}^{\prime}\left({x}_{0}\right)\parallel d\tau \parallel {z}_{0}-{x}^{*}\parallel \right.\hfill \\ & & \left.+(1-{p}^{2}){\int}_{0}^{1}\parallel {F}^{\prime}({x}_{0}+\tau ({x}^{*}-{x}_{0}))-{F}^{\prime}\left({x}_{0}\right)\parallel d\tau \parallel {x}_{0}-{x}^{*}\parallel \right)\hfill \\ & <& \frac{1}{{p}^{2}}\left(\frac{\delta}{2}f{\left(\delta \right)}^{2}+\delta f\left(\delta \right)+(1-{p}^{2})\frac{\delta}{2}\right)\parallel {x}_{0}-{x}^{*}\parallel \hfill \\ & =& g\left(\delta \right)\parallel {x}_{0}-{x}^{*}\parallel \hfill \end{array}$$

Now, as $g\left({\zeta}_{p}\right)=1$ and $\delta <{\zeta}_{p}$, it follows that $g\left(\delta \right)<1$ and then $\parallel {x}_{1}-{x}^{*}\parallel <g\left(\delta \right)\parallel {x}_{0}-{x}^{*}\parallel <\parallel {x}_{0}-{x}^{*}\parallel $. Therefore ${x}_{1}\in B({x}^{*},\tilde{R})$.

Following now an inductive procedure on $n\in \mathbb{N}$, we have:

$$\parallel {y}_{n}-{x}^{*}\parallel <\frac{\delta}{2}\parallel {x}_{n}-{x}^{*}\parallel ,\phantom{\rule{2.em}{0ex}}\parallel {z}_{n}-{x}^{*}\parallel <f\left(\delta \right)\parallel {x}_{n}-{x}^{*}\parallel ,\phantom{\rule{2.em}{0ex}}\parallel {x}_{n+1}-{x}^{*}\parallel <g\left(\delta \right)\parallel {x}_{n}-{x}^{*}\parallel $$

Therefore, $\parallel {x}_{n}-{x}^{*}\parallel <g{\left(\delta \right)}^{n}\parallel {x}_{0}-{x}^{*}\parallel $, for all $n\in \mathbb{N}$, and consequently $\underset{n\to +\infty}{lim}{x}_{n}={x}^{*}$. □

As we have just seen in the previous result, the equation $g\left(t\right)-1=0$ determines the value of ${\zeta}_{p}$ which allows us to obtain the radius of the ball of convergence.
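The values ${\zeta}_{p}$ and R can be reproduced numerically. The following sketch (ours; the bisection bracket $[0,2]$ is justified because $g\left(0\right)=0<1$, $g\left(2\right)=\frac{4-{p}^{2}}{{p}^{2}}>1$ for $p\in (0,1]$, and g is strictly increasing on ${\mathbb{R}}_{+}$) returns R in units of $1/\beta K$:

```python
def f(t, p):
    """Auxiliary function f(t) = 1 - p + p*t/2 from Theorem 1."""
    return 1.0 - p + p * t / 2.0

def g(t, p):
    """Auxiliary function g(t) = t/(2 p^2) (f(t)^2 + 2 f(t) + 1 - p^2)."""
    return t / (2.0 * p**2) * (f(t, p)**2 + 2.0 * f(t, p) + 1.0 - p**2)

def zeta(p, iters=200):
    """Positive root of g(t) - 1 = 0, by bisection on [0, 2]."""
    lo, hi = 0.0, 2.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(mid, p) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def radius(p):
    """Radius R = zeta_p / ((1 + zeta_p) beta K), in units of 1/(beta K)."""
    z = zeta(p)
    return z / (1.0 + z)
```

For example, `zeta(1.0)` and `radius(1.0)` recover the values $1.23607\dots$ and $0.552786\dots /\beta K$ reported in Table 1.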

Concerning the uniqueness of the solution ${x}^{*}$, we have the following result.

Under conditions $\left({C}_{1}\right)$–$\left({C}_{2}\right)$, the limit point ${x}^{*}$ is the only solution of Equation (1) in $B({x}^{*},\frac{2}{\beta K})\cap \Omega $.

Let ${y}^{*}\in B({x}^{*},\frac{2}{\beta K})\cap \Omega $ be such that $F\left({y}^{*}\right)=0$. If $J={\int}_{0}^{1}{F}^{\prime}({y}^{*}+t({x}^{*}-{y}^{*}))dt$ is invertible, it follows that ${x}^{*}={y}^{*}$, since $J({y}^{*}-{x}^{*})=F\left({y}^{*}\right)-F\left({x}^{*}\right)$. By the Banach lemma, we have that J is invertible, since

$$\parallel I-{\left[{F}^{\prime}\left({x}^{*}\right)\right]}^{-1}J\parallel \le \parallel {\left[{F}^{\prime}\left({x}^{*}\right)\right]}^{-1}\parallel \phantom{\rule{0.166667em}{0ex}}\parallel {F}^{\prime}\left({x}^{*}\right)-J\parallel \le \frac{\beta K}{2}\parallel {x}^{*}-{y}^{*}\parallel <1$$

The proof is complete. □

A procedure to determine the accessibility of an iterative process to ${x}^{*}$ is to estimate the ball of convergence of the method from a local convergence result. In this way, from the local convergence result obtained in the previous section, we study the accessibility of family Equation (3). To start the comparative study of the accessibility of the different iterative processes of family Equation (3), we can see in Figure 1 that the function $g\left(t\right)-1$ is strictly increasing on ${\mathbb{R}}_{+}$, tends to infinity and satisfies $g\left(0\right)<1$. Therefore, for each fixed p, there is a positive real root ${\zeta}_{p}$ of the equation $g\left(t\right)-1=0$. In addition, as we can see in Table 1, the value of this root increases with the value of p. So, we can conclude that the accessibility of the iterative processes of family Equation (3) increases with the value of p.

On the other hand, we want to compare the accessibility of the iterative processes of family Equation (3) with that of the Newton method. For this, we consider the convergence result obtained by Dennis and Schnabel in [7]. They prove that, under conditions $\left({C}_{1}\right)$ and $\left({C}_{2}\right)$, the Newton method is convergent for any starting point belonging to $B({x}^{*},\tilde{{R}_{N}})$, where $\tilde{{R}_{N}}=min\{r,{R}_{N}\}$ and ${R}_{N}=\frac{1}{2\beta K}$. As the radii of the balls of convergence of family Equation (3) and the Newton method are given under conditions $\left({C}_{1}\right)$ and $\left({C}_{2}\right)$, it is clear that the radius of the ball of convergence of the Newton method is larger than that of family Equation (3) for $p\in (0,0.896037...)$, while for $p\in (0.896037...,1]$, the iterative processes of family Equation (3) have better accessibility than the Newton method, with both coinciding for $p=0.896037...$ (see Table 1). Therefore, we have obtained, for $p\in (0.896037...,1]$, iterative processes with better accessibility and efficiency than the Newton method.

| p | ${\zeta}_{p}$ | $R$ |
|---|---|---|
| $0.1$ | $0.00555393\dots $ | $0.005523\dots /\beta K$ |
| $0.2$ | $0.02493\dots $ | $0.024323\dots /\beta K$ |
| $0.3$ | $0.0635481\dots $ | $0.059751\dots /\beta K$ |
| $0.4$ | $0.128869\dots $ | $0.114158\dots /\beta K$ |
| $0.5$ | $0.229815\dots $ | $0.18687\dots /\beta K$ |
| $0.6$ | $0.373733\dots $ | $0.272057\dots /\beta K$ |
| $0.7$ | $0.560476\dots $ | $0.35917\dots /\beta K$ |
| $0.8$ | $0.778457\dots $ | $0.437715\dots /\beta K$ |
| $\mathbf{0.896037}\dots $ | $\mathbf{1}$ | $\mathbf{0.5}/\beta \mathbf{K}$ |
| $\mathbf{0.9}$ | $\mathbf{1.00916}\dots $ | $\mathbf{0.502280}\dots /\beta \mathbf{K}$ |
| $\mathbf{1}$ | $\mathbf{1.23607}\dots $ | $\mathbf{0.552786}\dots /\beta \mathbf{K}$ |

Next, we illustrate the previous result with the following example given in [7]. We choose the max-norm.

Let $F:{\mathbb{R}}^{3}\to {\mathbb{R}}^{3}$ be defined as $F(x,y,z)=(x,{y}^{2}+y,{e}^{z}-1)$. It is obvious that ${x}^{*}=(0,0,0)$ is a solution of the system.

From F, we have
and ${F}^{\prime}\left({x}^{*}\right)$ is the $3\times 3$ identity matrix. So, $\parallel {\left[{F}^{\prime}\left({x}^{*}\right)\right]}^{-1}\parallel =1$ and $\beta =1$. On the other hand, we can take $r=1$, so that $B(0,r)=\{w\in {\mathbb{R}}^{3}:\phantom{\rule{0.166667em}{0ex}}\parallel w\parallel <1\}\subset {\mathbb{R}}^{3}$, and it is easy to prove that
in $B(0,r)$, so that $K=3$. Therefore, ${R}_{N}=\frac{1}{2\beta K}=\frac{1}{6}$ and Newton’s method is convergent from any starting point belonging to $B({x}^{*},0.166667\dots )$. However, from Theorem 1, once $p=1$ is fixed in family Equation (3), this iterative process is convergent from any starting point belonging to $B({x}^{*},0.184262\dots )$.

$${F}^{\prime}(x,y,z)=\left(\begin{array}{ccc}1& 0& 0\\ 0& 2y+1& 0\\ 0& 0& {e}^{z}\end{array}\right)$$

$$\parallel {F}^{\prime}(x,y,z)-{F}^{\prime}(u,v,w)\parallel =max\left\{2|y-v|,|{e}^{z}-{e}^{w}|\right\}\le 3\parallel (x,y,z)-(u,v,w)\parallel $$
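These radii can be checked numerically. The sketch below (ours) recomputes both the Dennis–Schnabel radius ${R}_{N}=\frac{1}{2\beta K}$ and the Theorem 1 radius for $p=1$ with the example data $\beta =1$, $K=3$; for $p=1$ the auxiliary function reduces to $g\left(t\right)={t}^{3}/8+{t}^{2}/2$.

```python
def zeta1():
    """Positive root of g(t) = t**3/8 + t**2/2 = 1 (family (3) with p = 1),
    found by bisection on [0, 2]."""
    lo, hi = 0.0, 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid**3 / 8.0 + mid**2 / 2.0 < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# data of the example: beta = 1 and Lipschitz constant K = 3 on B(0, 1)
beta, K = 1.0, 3.0
R_newton = 1.0 / (2.0 * beta * K)        # Dennis-Schnabel radius, 1/6
z = zeta1()
R_family = z / ((1.0 + z) * beta * K)    # Theorem 1 radius for p = 1
```

This reproduces the radii $0.166667\dots$ and $0.184262\dots$ quoted above for Newton’s method and for family Equation (3) with $p=1$, respectively.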

This work has been partially supported by the project MTM2014-52016-C2-1-P of the Spanish Ministry of Economy and Competitiveness.

The contributions of the two authors have been similar. Both authors have worked together to develop the present manuscript.

The authors declare no conflict of interest.

- Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. Efficient high-order methods based on golden ratio for nonlinear systems. Appl. Math. Comput. **2011**, 217, 4548–4556.
- Ezquerro, J.A.; Hernández, M.A.; Romero, N. Improving the efficiency index of one-point iterative processes. J. Comput. Appl. Math. **2009**, 223, 879–892.
- Grau-Sánchez, M.; Díaz-Barrero, J.L. On computational efficiency for multi-precision zero-finding methods. Appl. Math. Comput. **2006**, 181, 402–412.
- Grau-Sánchez, M. Improvement of the efficiency of some three-step iterative like-Newton methods. Numer. Math. **2007**, 107, 131–146.
- Ostrowski, A.M. Solutions of Equations and Systems of Equations; Academic Press: New York, NY, USA, 1966.
- Ezquerro, J.A.; Hernández, M.A. An optimization of Chebyshev’s method. J. Complex. **2009**, 25, 343–361.
- Dennis, J.E.; Schnabel, R.B. Numerical Methods for Unconstrained Optimization and Nonlinear Equations; SIAM: Philadelphia, PA, USA, 1996.

© 2015 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/4.0/).