
Article

# Expanding the Applicability of a Third Order Newton-Type Method Free of Bilinear Operators

Escuela de Ingeniería, Universidad Internacional de La Rioja, C/Gran Vía 41, Logroño (La Rioja) 26005, Spain
* Author to whom correspondence should be addressed.
Algorithms 2015, 8(3), 669-679; https://0-doi-org.brum.beds.ac.uk/10.3390/a8030669
Received: 26 May 2015 / Revised: 9 August 2015 / Accepted: 14 August 2015 / Published: 21 August 2015
(This article belongs to the Special Issue Numerical Algorithms for Solving Nonlinear Equations and Systems)

## Abstract

This paper is devoted to the semilocal convergence, using centered hypotheses, of a third order Newton-type method in a Banach space setting. The method is free of bilinear operators and therefore attractive for solving systems of equations. A variant using only divided differences, which does not impose any Fréchet differentiability on the operator, is also analyzed.

## 1. Introduction

For the approximation of a solution of a nonlinear equation $F(x) = 0$, Newton-type methods are the first option. Under some regularity assumptions, these methods are at least second order convergent. The classical third order schemes, such as Halley's or Chebyshev's methods, evaluate second order Fréchet derivatives [1,2,3]. These evaluations are very time-consuming for systems of equations. Indeed, for a nonlinear system of m equations and m unknowns, the first Fréchet derivative is a matrix with $m^2$ entries, while the second Fréchet derivative has $m^3$ entries. This implies a huge number of operations per iteration. Consequently, these methods are rarely used in practice.
In this paper we study the following two-step method [4,5], which raises the order of Newton's method to three without evaluating any second Fréchet derivative:
$y_n = x_n + F'(x_n)^{-1} F(x_n), \qquad x_{n+1} = y_n - F'(x_n)^{-1} F(y_n)$ (1)
The basic advantage of this method is that, since the matrix appearing in both steps of each iteration is the same, only one $LU$ decomposition is computed per iteration. In most real problems, the computational cost of solving a linear system is higher than that of a few extra evaluations of the operator. Moreover, from a dynamical point of view [6] the method seems to behave better than the classical two-step Newton method, that is,
$y_n = x_n - F'(x_n)^{-1} F(x_n), \qquad x_{n+1} = y_n - F'(x_n)^{-1} F(y_n)$ (2)
since the regions of non-convergence are reduced.
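As an illustration, scheme (1) can be sketched in a few lines. Here `F` and `J` are assumed to be user-supplied callables returning the residual vector and the Jacobian $F'(x)$; all names are illustrative, not from the paper. Note that both linear solves of each iteration reuse a single LU factorization:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def third_order_step(F, J, x):
    """One iteration of scheme (1): both linear systems share the
    matrix F'(x_n), so a single LU factorization is reused."""
    lu_piv = lu_factor(J(x))           # factor F'(x_n) once
    y = x + lu_solve(lu_piv, F(x))     # first step (note the '+' sign)
    return y - lu_solve(lu_piv, F(y))  # second step, same factorization

def solve(F, J, x0, tol=1e-12, max_iter=50):
    """Iterate scheme (1) until the step size is below tol."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = third_order_step(F, J, x)
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x
```

For instance, for the scalar equation $x^2 - 1 = 0$ written as a one-dimensional system, the iteration started at $x_0 = 2$ converges rapidly to the root $x = 1$.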
The main assumption for obtaining convergence of third order iterative methods [2,7,8] is a Lipschitz condition on the second Fréchet derivative,
$\| F''(x) - F''(y) \| \leq c \| x - y \|$ (3)
With this hypothesis, and by choosing an initial guess $x_0$ such that $F'(x_0)^{-1}$ exists and $\| F(x_0) \|$ is sufficiently close to zero, cubic convergence is obtained. The Lipschitz condition (3) can be relaxed to a p-Hölder condition
$\| F''(x) - F''(y) \| \leq c \| x - y \|^p$ (4)
or even to some ω-condition
$\| F''(x) - F''(y) \| \leq \omega(\| x - y \|)$ (5)
where $\omega : \mathbb{R}^+ \to \mathbb{R}^+$ is a nondecreasing continuous function.
By means of conditions of this type we can ensure the convergence of the scheme (1).
Alternatively, since the scheme (1) only uses first derivatives, we could also try to obtain convergence by imposing the main condition on the first derivative instead of on the second one. There are many theories on the local and semilocal convergence of Newton-type methods; see for instance [9,10,11,12,13,14,15,16,17,18].
Following [19,20,21,22], the semilocal convergence of (1) under ω-conditioned divided differences was analyzed in [5]. Recall that a continuous bounded linear operator $[x,y;F]$ associated to a nonlinear operator $F : B \subset X \to Y$ is called a divided difference of first order for the operator F at the points x and y if $[x,y;F](x-y) = F(x) - F(y)$. If F is Fréchet differentiable, then $F'(x) = [x,x;F]$ for all $x \in B$. Moreover, a divided difference satisfies an ω-condition if
$\| [x,y;F] - [v,w;F] \| \leq \omega(\| x-v \|, \| y-w \|), \quad x,y,v,w \in B$ (6)
where $\omega : \mathbb{R}^+ \times \mathbb{R}^+ \to \mathbb{R}^+$ is a continuous function, nondecreasing in both variables.
In this paper, following [23], we expand the applicability of (1) by relaxing the hypotheses of convergence. We also analyze a modification of scheme (1) using divided differences:
$y_n = x_n + [x_n - \alpha_n F(x_n), x_n + \alpha_n F(x_n); F]^{-1} F(x_n), \qquad x_{n+1} = y_n - [x_n - \alpha_n F(x_n), x_n + \alpha_n F(x_n); F]^{-1} F(y_n)$ (7)
Here, the parameters $\alpha_n$ control how closely the divided difference approximates the first Fréchet derivative. For this method we are able to obtain convergence without assuming any Fréchet differentiability of the operator F.
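A sketch of the variant (7) follows, under the assumption that the divided difference is built componentwise, a standard construction that satisfies the secant equation; the paper does not fix a particular construction, and all names are illustrative:

```python
import numpy as np

def divided_difference(F, u, v):
    """Componentwise first order divided difference [u, v; F], a standard
    construction satisfying [u, v; F](u - v) = F(u) - F(v).
    Requires u_j != v_j in every coordinate."""
    m = len(u)
    D = np.empty((m, m))
    for j in range(m):
        w_hi = np.concatenate([v[:j], u[j:]])          # (v_1,...,v_{j-1}, u_j,...,u_m)
        w_lo = np.concatenate([v[:j + 1], u[j + 1:]])  # (v_1,...,v_j, u_{j+1},...,u_m)
        D[:, j] = (F(w_hi) - F(w_lo)) / (u[j] - v[j])
    return D

def derivative_free_step(F, x, alpha):
    """One iteration of variant (7): the Jacobian is replaced by the
    divided difference [x - alpha F(x), x + alpha F(x); F]."""
    Fx = F(x)
    D = divided_difference(F, x - alpha * Fx, x + alpha * Fx)
    y = x + np.linalg.solve(D, Fx)       # '+' sign, as in scheme (1)
    return y - np.linalg.solve(D, F(y))  # same matrix in both solves

def solve_derivative_free(F, x0, alpha=1e-4, tol=1e-8, max_iter=25):
    """Iterate (7) until the residual is small; stopping on ||F(x)||
    avoids dividing by a vanishing u_j - v_j = -2 alpha F_j(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        if np.linalg.norm(F(x), np.inf) <= tol:
            break
        x = derivative_free_step(F, x, alpha)
    return x
```

Stopping on the residual rather than on the step is a deliberate choice here: once $F(x_n)$ is at round-off level, the two points $x_n \pm \alpha_n F(x_n)$ coincide in floating point and the divided difference is no longer computable.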
Other higher order methods, in some cases with better behavior on real problems, have been proposed during the last few years [24,25,26,27]. The main advantage of the schemes studied in this paper is that no bilinear operator (second order Fréchet derivatives or their approximations by divided differences) needs to be evaluated.
The rest of the paper presents the semilocal convergence of both schemes under centered hypotheses, expanding their applicability.

## 2. Semilocal Convergence Using Centered Hypotheses

Convergence theorems for Newton-type methods establish sufficient conditions on the operator and the initial approximation to the solution that ensure that the sequence of iterates converges to a solution of the equation. In works such as [19,20,21,22,28], convergence is established by assuming as the main hypothesis that the divided difference satisfies an ω-condition (6). In [5] this theory was extended to the method (1). Here we expand the applicability using centered hypotheses.
As pointed out in [5], these strategies for deriving convergence do not apply to the scheme (1) directly. The problem is the '+' sign in the first step: in general, the iterate $y_n$ is not closer to the solution than $x_n$. In order to obtain convergence, we rewrite the scheme as a one-step Newton-secant type method instead of the original two-step version.
By using the definition of divided differences and the original form of the method
$y_n = x_n + F'(x_n)^{-1} F(x_n), \qquad x_{n+1} = y_n - F'(x_n)^{-1} F(y_n)$ (8)
we obtain the following Newton-secant formula:
$x_{n+1} = x_n + F'(x_n)^{-1}(F(x_n) - F(y_n)) = x_n + F'(x_n)^{-1}[x_n,y_n;F](x_n - y_n) = x_n + F'(x_n)^{-1}[x_n,y_n;F](-F'(x_n)^{-1}F(x_n)) = x_n - F'(x_n)^{-1}[x_n,y_n;F]\,F'(x_n)^{-1}F(x_n)$ (9)
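The equivalence of the two-step form and the one-step Newton-secant form can be checked numerically in one dimension, where the divided difference reduces to the usual difference quotient (the test function below is an arbitrary illustrative choice):

```python
import numpy as np

# Scalar test problem (an arbitrary illustrative choice)
F  = lambda x: x**3 - 2.0
dF = lambda x: 3.0 * x**2

x = 1.5
g = 1.0 / dF(x)                   # Gamma_n^{-1} in one dimension
y = x + g * F(x)                  # first step of scheme (1)
two_step = y - g * F(y)           # two-step form of x_{n+1}

dd = (F(x) - F(y)) / (x - y)      # scalar divided difference [x_n, y_n; F]
one_step = x - g * dd * g * F(x)  # Newton-secant form (9)

# Both forms produce the same iterate up to round-off
assert abs(two_step - one_step) < 1e-12
```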
We will use the following notations:
$\Gamma_n = F'(x_n), \qquad \Phi_n = \Gamma_n\,[x_n, x_n + \Gamma_n^{-1}F(x_n); F]^{-1}\,\Gamma_n$
Theorem 1. Let $X, Y$ be two Banach spaces, let B be a convex open subset of X, and suppose that there exists a first order divided difference of the Fréchet differentiable operator $F : B \subset X \to Y$ satisfying
$\| [x,y;F] - [v,w;F] \| \leq \omega(\| x-v \|, \| y-w \|), \quad x,y,v,w \in B$ (10)
and
$\| [x,y;F] - [x_0,x_0;F] \| \leq \omega_0(\| x-x_0 \|, \| y-x_0 \|), \quad x,y \in B$ (11)
where $\omega, \omega_0 : \mathbb{R}^+ \times \mathbb{R}^+ \to \mathbb{R}^+$ are continuous functions, nondecreasing in both variables, such that $\omega(0,x) = \omega(x,0) = \frac{1}{2}\omega(x,x)$ and $\omega_0(0,x) = \omega_0(x,0) = \frac{1}{2}\omega_0(x,x)$. By definition, $\omega_0(x,y) \leq \omega(x,y)$.
Let $x_0 \in B$. Assume that
(1) $\| \Gamma_0^{-1} \| \leq \beta$.
(2) $\max(\| \Gamma_0^{-1}F(x_0) \|, \| \Phi_0^{-1}F(x_0) \|) \leq \eta$.
(3) The equation
$t\left(1 - \frac{m}{1-\beta(\omega(t,t)+\omega_0(t,t))}\right) - \eta = 0$ (12)
has a smallest positive root R, where $m = \beta\omega_0(\eta,\eta)$.
If $\beta(\omega(R,R) + 2\omega_0(R,R)) < 1$ and $\overline{B(x_0,R)} \subset B$, then $M := \frac{m}{1-\beta(\omega(R,R)+\omega_0(R,R))} \in (0,1)$ and the method (9) is well defined, remains in $B(x_0,R)$ and converges to the unique solution of $F(x) = 0$ in $\overline{B(x_0,R)}$.
Proof.
From the initial hypotheses, it follows that $x_1$ is well defined and
$\| x_1 - x_0 \| \leq \eta < R$
Thus, $x_1 \in B(x_0,R)$.
Since $\omega_0$ is a nondecreasing function, we have
$\| I - \Gamma_0^{-1}\Gamma_1 \| \leq \| \Gamma_0^{-1} \| \cdot \| \Gamma_0 - \Gamma_1 \| \leq \| \Gamma_0^{-1} \|\,\omega_0(\| x_1-x_0 \|, \| x_1-x_0 \|) \leq \beta\omega_0(\eta,\eta) \leq \beta\omega_0(R,R) < 1$
Hence, $\Gamma_1^{-1}$ is well defined and
$\| \Gamma_1^{-1}\Gamma_0 \| \leq \frac{1}{1-\beta\omega_0(\eta,\eta)}, \qquad \| \Gamma_1^{-1} \| \leq \frac{\beta}{1-\beta\omega_0(\eta,\eta)}$
In particular, $\Phi_1^{-1}$ and $x_2$ are well defined.
Similarly,
$\| I - \Gamma_0^{-1}[x_0,y_0;F] \| \leq \beta\omega_0(0,\eta) \leq \beta\omega_0(R,R) < 1$
Hence, $[x_0,y_0;F]^{-1}$ is well defined and
$\| [x_0,y_0;F]^{-1} \| \leq \frac{\beta}{1-\beta\omega_0(0,\eta)}$
By the definition of the method (9) and of the divided differences, we get
$F(x_1) = F(x_1) - F(x_0) + F(x_0) = F(x_1) - F(x_0) - \Phi_0(x_1-x_0) = ([x_1,x_0;F] - \Phi_0)(x_1-x_0)$
Thus,
$\| x_2 - x_1 \| = \| \Phi_1^{-1}F(x_1) \| = \| \Phi_1^{-1}\Gamma_1\Gamma_1^{-1}F(x_1) \| \leq \| \Phi_1^{-1}\Gamma_1 \| \cdot \| \Gamma_1^{-1}F(x_1) \| \leq \| \Phi_1^{-1}\Gamma_1 \| \cdot \| \Gamma_1^{-1}([x_1,x_0;F] - \Phi_0) \| \cdot \| x_1 - x_0 \|$
Now, we need to bound the first two factors adequately.
• A bound for $\| \Gamma_1^{-1}([x_1,x_0;F] - \Phi_0) \|$
From
$\| I - [x_0,y_0;F]^{-1}\Gamma_0 \| = \| [x_0,y_0;F]^{-1}([x_0,y_0;F] - \Gamma_0) \| \leq \| [x_0,y_0;F]^{-1} \| \cdot \| [x_0,y_0;F] - \Gamma_0 \| \leq \frac{\beta\omega_0(0,\eta)}{1-\beta\omega_0(0,\eta)} < 1$
we obtain
$\| \Gamma_1^{-1}([x_1,x_0;F] - \Phi_0) \| = \| \Gamma_1^{-1}([x_1,x_0;F] - \Phi_0 + \Gamma_0 - \Gamma_0) \| \leq \| \Gamma_1^{-1} \| \cdot \| [x_1,x_0;F] - \Gamma_0 \| + \| \Gamma_1^{-1}\Gamma_0 \| \cdot \| [x_0,y_0;F]^{-1}\Gamma_0 - I \| \leq \frac{\beta\omega_0(\eta,0)}{1-\beta\omega_0(\eta,\eta)} + \frac{\beta\omega_0(0,\eta)}{(1-\beta\omega_0(\eta,\eta))(1-\beta\omega_0(0,\eta))} = \frac{\beta\omega_0(\eta,\eta)}{2-2\beta\omega_0(\eta,\eta)} \cdot \frac{4-\beta\omega_0(\eta,\eta)}{2-\beta\omega_0(\eta,\eta)} < 1$
• A bound for $\| \Phi_1^{-1}\Gamma_1 \|$
First, note that
$\| y_1 - x_1 \| = \| \Gamma_1^{-1}F(x_1) \| = \| \Gamma_1^{-1}([x_1,x_0;F] - \Phi_0)(x_1-x_0) \| \leq \| \Gamma_1^{-1}([x_1,x_0;F] - \Phi_0) \| \cdot \| x_1 - x_0 \| < \eta$
Besides, we have
$\| I - \Gamma_1^{-1}[x_1,y_1;F] \| = \| \Gamma_1^{-1}(\Gamma_1 - [x_1,y_1;F]) \| \leq \| \Gamma_1^{-1} \| \cdot \| \Gamma_1 - [x_1,y_1;F] \| \leq \frac{\beta\omega(0,\eta)}{1-\beta\omega_0(\eta,\eta)} < 1$
Thus, we get
$\| [x_1,y_1;F]^{-1}\Gamma_1 \| \leq \frac{1}{1 - \frac{\beta\omega(0,\eta)}{1-\beta\omega_0(\eta,\eta)}} = \frac{1}{1 - \frac{\frac{\beta}{2}\omega(\eta,\eta)}{1-\beta\omega_0(\eta,\eta)}} = \frac{2-2\beta\omega_0(\eta,\eta)}{2-\beta(\omega(\eta,\eta)+2\omega_0(\eta,\eta))}$
and
$\| [x_1,y_1;F]^{-1} \| \leq \frac{\beta}{1-\beta\omega_0(\eta,\eta)} \cdot \frac{2-2\beta\omega_0(\eta,\eta)}{2-\beta(\omega(\eta,\eta)+2\omega_0(\eta,\eta))} = \frac{2\beta}{2-\beta(\omega(\eta,\eta)+2\omega_0(\eta,\eta))}$
Finally
$\| I - \Gamma_1^{-1}\Phi_1 \| = \| I - [x_1,y_1;F]^{-1}\Gamma_1 \| \leq \| [x_1,y_1;F]^{-1} \| \cdot \| [x_1,y_1;F] - \Gamma_1 \| \leq \frac{\beta\omega(\eta,\eta)}{2-\beta(\omega(\eta,\eta)+2\omega_0(\eta,\eta))} < 1$
and therefore
$\| \Phi_1^{-1}\Gamma_1 \| \leq \frac{1}{1 - \frac{\beta\omega(\eta,\eta)}{2-\beta(\omega(\eta,\eta)+2\omega_0(\eta,\eta))}} = \frac{2-\beta(\omega(\eta,\eta)+2\omega_0(\eta,\eta))}{2-2\beta(\omega(\eta,\eta)+\omega_0(\eta,\eta))}$
On the other hand, the relation
$\frac{2-\beta(\omega(\eta,\eta)+2\omega_0(\eta,\eta))}{2-2\beta(\omega(\eta,\eta)+\omega_0(\eta,\eta))} \cdot \frac{\beta\omega_0(\eta,\eta)}{2-2\beta\omega_0(\eta,\eta)} \cdot \frac{4-\beta\omega_0(\eta,\eta)}{2-\beta\omega_0(\eta,\eta)} < \frac{\beta\omega_0(\eta,\eta)}{1-\beta(\omega(\eta,\eta)+\omega_0(\eta,\eta))}$
is equivalent to
$(2-\beta(\omega(\eta,\eta)+2\omega_0(\eta,\eta)))(4-\beta\omega_0(\eta,\eta)) < 4(1-\beta\omega_0(\eta,\eta))(2-\beta\omega_0(\eta,\eta))$
Since, by definition, $\omega_0(x,y) \leq \omega(x,y)$, we have
$(2-\beta(\omega(\eta,\eta)+2\omega_0(\eta,\eta)))(4-\beta\omega_0(\eta,\eta)) \leq (2-3\beta\omega_0(\eta,\eta))(4-\beta\omega_0(\eta,\eta))$
Moreover,
$(2-3\beta\omega_0(\eta,\eta))(4-\beta\omega_0(\eta,\eta)) < 4(1-\beta\omega_0(\eta,\eta))(2-\beta\omega_0(\eta,\eta))$
since
$-2\beta\omega_0(\eta,\eta) - (\beta\omega_0(\eta,\eta))^2 < 0$
Therefore,
$\| x_2 - x_1 \| = \| \Phi_1^{-1}F(x_1) \| = \| \Phi_1^{-1}\Gamma_1\Gamma_1^{-1}F(x_1) \| \leq \| \Phi_1^{-1}\Gamma_1 \| \cdot \| \Gamma_1^{-1}([x_1,x_0;F] - \Phi_0) \| \cdot \| x_1 - x_0 \| \leq \frac{2-\beta(\omega(\eta,\eta)+2\omega_0(\eta,\eta))}{2-2\beta(\omega(\eta,\eta)+\omega_0(\eta,\eta))} \cdot \frac{\beta\omega_0(\eta,\eta)}{2-2\beta\omega_0(\eta,\eta)} \cdot \frac{4-\beta\omega_0(\eta,\eta)}{2-\beta\omega_0(\eta,\eta)}\,\| x_1 - x_0 \| \leq \frac{\beta\omega_0(\eta,\eta)}{1-\beta(\omega(\eta,\eta)+\omega_0(\eta,\eta))}\,\| x_1 - x_0 \| \leq M\eta$
Then, using (12) and $M < 1$, we obtain
$\| x_2 - x_0 \| \leq (M+1)\eta < R$
Thus, $x_2 \in B(x_0,R)$.
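The algebraic inequalities used in the chain above can be verified symbolically; in the sketch below, a and b stand for $\beta\omega(\eta,\eta)$ and $\beta\omega_0(\eta,\eta)$ (illustrative names, not from the paper):

```python
import sympy as sp

# a = beta*omega(eta, eta), b = beta*omega_0(eta, eta), with b <= a
a, b = sp.symbols('a b', positive=True)

lhs = (2 - (a + 2*b)) * (4 - b)   # left side of the expanded inequality
rhs = 4 * (1 - b) * (2 - b)       # right side

# Using omega_0 <= omega:  (2 - 3b)(4 - b) - lhs = (4 - b)(a - b) >= 0,
# so lhs <= (2 - 3b)(4 - b).
assert sp.simplify((2 - 3*b) * (4 - b) - lhs - (4 - b) * (a - b)) == 0

# Finally, (2 - 3b)(4 - b) - rhs reduces to -2b - b**2 < 0.
diff = sp.expand((2 - 3*b) * (4 - b) - rhs)
assert diff == sp.expand(-2*b - b**2)
```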
By using the same arguments together with an induction strategy we can prove the following facts:
• $\| x_n - x_0 \| \leq \sum_{k=0}^{n-1} M^k \eta < R$, that is, $x_n \in B(x_0,R)$.
• From the estimate
$\| x_n - x_{n-1} \| \leq M^{n-1} \| x_1 - x_0 \|$
we conclude that $\{x_n\}$ is a Cauchy sequence, and hence it converges to some $x^* \in \overline{B(x_0,R)}$.
• Since
$\| F(x_n) \| \leq \| \Gamma_n \| \cdot \| x_n - x_{n-1} \|$
and $\| x_n - x_{n-1} \| \to 0$ as $n \to +\infty$, we obtain $F(x^*) = 0$. Let us remark that, by (10), $\| \Gamma_n \| \leq \| \Gamma_0 \| + \omega_0(R,R)$.
Moreover, if $y *$ is another solution of $F ( x ) = 0$ in $B ( x 0 , R ) ¯$, we have
$| | I − Γ 0 − 1 [ x * , y * ; F ] | | ≤ | | Γ 0 − 1 | | · | | Γ 0 − [ x * , y * ; F ] | | ≤ β ω 0 ( R , R ) < 1$
Therefore, the operator $[x^*,y^*;F]$ is invertible. Since $[x^*,y^*;F](x^*-y^*) = F(x^*) - F(y^*) = 0$, we conclude that $x^* = y^*$.
The main restriction in the theorem, $\beta(\omega(R,R) + 2\omega_0(R,R)) < 1$, replaces the restriction $3\beta\omega(R,R) < 1$ used in [5]. Since $\omega_0(x,y) \leq \omega(x,y)$, the new condition is weaker, which expands the applicability of the method.
We can find simple numerical examples where only the centered hypotheses are satisfied; see for instance [23].

## 3. A Variant Using Only Divided Differences

For applications involving operators that are not Fréchet differentiable, we can consider a modification of the proposed method using divided differences. Specifically, we consider
$y_n = x_n + [x_n - \alpha_n F(x_n), x_n + \alpha_n F(x_n); F]^{-1} F(x_n), \qquad x_{n+1} = y_n - [x_n - \alpha_n F(x_n), x_n + \alpha_n F(x_n); F]^{-1} F(y_n)$ (13)
where $\alpha_n \in [0,1]$ is computed in practice to satisfy
$tol_c \ll \| \alpha_n F(x_n) \| \leq tol_{user}$
Here, $tol_c$ is related to the computer precision and $tol_{user}$ is a free parameter chosen by the user; see [20,29].
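A minimal heuristic for selecting $\alpha_n$ along these lines might look as follows; the infinity norm and the default tolerances are illustrative assumptions, and the cited works discuss more careful strategies:

```python
import numpy as np

def choose_alpha(Fx, tol_user=1e-4, tol_c=1e-12):
    """Pick alpha_n in (0, 1] so that ||alpha_n F(x_n)|| stays below the
    user tolerance tol_user while remaining above the precision floor
    tol_c. A minimal sketch, not the strategy of the cited references."""
    norm = np.linalg.norm(Fx, np.inf)
    if norm <= tol_c:
        return 1.0  # residual already at the precision floor
    return min(1.0, tol_user / norm)
```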
Denoting
$\Upsilon_n = [x_n - \alpha_n F(x_n), x_n + \alpha_n F(x_n); F]$
and
$\Psi_n = \Upsilon_n\,[x_n, y_n; F]^{-1}\,\Upsilon_n$
the method (13) can be written alternatively as
$x_{n+1} = x_n - \Psi_n^{-1} F(x_n)$ (14)
By using a similar strategy to the one in Section 2, we can derive its semilocal convergence, but in this case without assuming any differentiability of the operator.
A possible theorem is the following:
Theorem 2. Let $X, Y$ be two Banach spaces, let B be a convex open subset of X, and suppose that there exists a first order divided difference of the operator $F : B \subset X \to Y$ satisfying
$\| [x,y;F] - [v,w;F] \| \leq \omega(\| x-v \|, \| y-w \|), \quad x,y,v,w \in B$ (15)
and
$\| [x,y;F] - [x_0,x_0;F] \| \leq \omega_0(\| x-x_0 \|, \| y-x_0 \|), \quad x,y \in B$ (16)
where $\omega, \omega_0 : \mathbb{R}^+ \times \mathbb{R}^+ \to \mathbb{R}^+$ are continuous functions, nondecreasing in both variables, such that $\omega(0,x) = \omega(x,0) = \frac{1}{2}\omega(x,x)$ and $\omega_0(0,x) = \omega_0(x,0) = \frac{1}{2}\omega_0(x,x)$. By definition, $\omega_0(x,y) \leq \omega(x,y)$.
Let $x_0 \in B$. Assume that
(1) $\| \Upsilon_0^{-1} \| \leq \beta$.
(2) $\max(\| \Upsilon_0^{-1}F(x_0) \|, \| \Psi_0^{-1}F(x_0) \|) \leq \eta$.
(3) The equation
$t\left(1 - \frac{m}{1-\beta(\omega(t+2\,tol_{user},\,t+2\,tol_{user})+\omega_0(t+2\,tol_{user},\,t+2\,tol_{user}))}\right) - \eta = 0$ (17)
has a smallest positive root R, where $m = \beta\omega_0(\eta + tol_{user}, \eta + tol_{user})$.
If $\beta(\omega(R+2\,tol_{user}, R+2\,tol_{user}) + 2\omega_0(R+2\,tol_{user}, R+2\,tol_{user})) < 1$ and $\overline{B(x_0,R)} \subset B$, then $M := \frac{m}{1-\beta(\omega(R+2\,tol_{user},\,R+2\,tol_{user})+\omega_0(R+2\,tol_{user},\,R+2\,tol_{user}))} \in (0,1)$ and the method (14) is well defined, remains in $B(x_0,R)$ and converges to the unique solution of $F(x) = 0$ in $\overline{B(x_0,R)}$.

## 4. Numerical Example

We consider
$x(s) = 1 + \int_0^1 G(s,t)\,x(t)^2\,dt, \qquad s \in [0,1]$ (18)
where $x \in C[0,1]$ and the kernel G is the Green function on $[0,1] \times [0,1]$.
We apply a discretization process to transform Equation (18) into a finite dimensional problem, obtaining the following system of nonlinear equations:
$F(x) \equiv x - \mathbf{1} - A\,v_x = 0, \qquad F : \mathbb{R}^8 \to \mathbb{R}^8$
where
$x = (x_1, x_2, \ldots, x_8)^T, \quad \mathbf{1} = (1, 1, \ldots, 1)^T, \quad A = (a_{ij})_{i,j=1}^{8}, \quad v_x = (x_1^2, x_2^2, \ldots, x_8^2)^T$
We use the divided difference of first order of F given by $[u,v;F] = I - B$, where $B = (b_{ij})_{i,j=1}^{8}$ with $b_{ij} = a_{ij}(u_j + v_j)$.
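This example can be sketched as follows, under the assumption that the eight nodes come from a Gauss-Legendre rule on $[0,1]$ (the paper does not specify the quadrature, so the matrix A below is an assumption, and the iteration uses scheme (1) with the exact derivative $F'(x) = [x,x;F]$):

```python
import numpy as np

m = 8
# Gauss-Legendre nodes/weights mapped from [-1, 1] to [0, 1]
# (an assumed discretization; the paper does not spell out the rule)
t, w = np.polynomial.legendre.leggauss(m)
t, w = 0.5 * (t + 1.0), 0.5 * w

# Green's function on [0,1] x [0,1]
G = lambda s, u: np.where(u <= s, u * (1.0 - s), s * (1.0 - u))
A = G(t[:, None], t[None, :]) * w[None, :]   # a_ij = w_j G(t_i, t_j)

F = lambda x: x - 1.0 - A @ x**2             # F(x) = x - 1 - A v_x

def dd(u, v):
    """Divided difference [u, v; F] = I - B with b_ij = a_ij (u_j + v_j)."""
    return np.eye(m) - A * (u + v)[None, :]

x = np.full(m, 1.8)                          # x_0 = (18/10, ..., 18/10)^T
for _ in range(20):
    J = dd(x, x)                             # F'(x) = [x, x; F]
    y = x + np.linalg.solve(J, F(x))         # scheme (1), '+' sign
    x = y - np.linalg.solve(J, F(y))
```

With this quadrature the iteration drives the residual to round-off level in a handful of steps, producing an approximation to the positive solution of the discretized equation (all components slightly above 1).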
If we choose the starting points $x_{-1} = (\frac{7}{10}, \frac{7}{10}, \ldots, \frac{7}{10})^T$ and $x_0 = (\frac{18}{10}, \frac{18}{10}, \ldots, \frac{18}{10})^T$ and apply method (9) with the max-norm, we obtain $\beta = 1.2938442\ldots$, $\eta = 0.474572\ldots$,
$\omega(s,t) = 0.04381\ldots s + 0.04381\ldots t, \qquad \omega_0(s,t) = 0.021905\ldots s + 0.021905\ldots t$
and
$m = 0.0269002\ldots$
The solutions of Equation (12) are
$r_1 = 0.488915\ldots \quad \text{and} \quad r_2 = 5.70809\ldots$
Then, taking $R = r_1 = 0.488915\ldots$, it is easy to check that the following condition is verified:
$\beta(\omega(R,R) + 2\omega_0(R,R)) = 0.110853\ldots < 1$
and
$M = 0.0293395\ldots \in (0,1)$
Hence, all the conditions of Theorem 1 are satisfied and, as a consequence, we can ensure the convergence of method (9).

## Acknowledgments

This activity has been partially supported by the grant SENECA 19374/PI/14, by the project MTM2014-52016-C2-1-P of the Spanish Ministry of Economy and Competitiveness and by the Universidad Internacional de La Rioja (UNIR), under the Plan Propio de Investigación, Desarrollo e Innovación (2013–2015). Research group: Matemática aplicada al mundo real (MAMUR).

## Author Contributions

The contributions of the four authors have been similar. All authors have worked together to develop the present manuscript.

## Conflicts of Interest

The authors declare no conflict of interest.

## References

1. Amat, S.; Busquier, S.; Gutiérrez, J.M. Geometric constructions of iterative functions to solve nonlinear equations. J. Comput. Appl. Math. 2003, 157, 197–205. [Google Scholar] [CrossRef]
2. Amat, S.; Busquier, S. Third-order iterative methods under Kantorovich conditions. J. Math. Anal. Appl. 2007, 336, 243–261. [Google Scholar] [CrossRef]
3. Ezquerro, J.A.; Hernández, M.A. Halley’s method for operators with unbounded second derivative. Appl. Numer. Math. 2007, 57, 354–360. [Google Scholar] [CrossRef]
4. Kou, J.; Li, Y.; Wang, X. A modification of Newton’s method with third-order convergence. Appl. Math. Comput. 2007, 181, 1106–1111. [Google Scholar] [CrossRef]
5. Amat, S.; Bermúdez, C.; Busquier, S.; Plaza, S. On a third-order Newton-type method free of bilinear operators. Numer. Linear Algebra Appl. 2010, 17, 639–653. [Google Scholar] [CrossRef]
6. Amat, S.; Bermúdez, C.; Busquier, S.; Plaza, S. On two families of high order Newton type methods Original Research Article. Appl. Math. Lett. 2012, 25, 2209–2217. [Google Scholar] [CrossRef]
7. Hernández, M.A.; Romero, N. On a characterization of some Newton-like methods of R-order at least three. J. Comput. Appl. Math. 2005, 183, 53–66. [Google Scholar] [CrossRef]
8. Hernández, M.A.; Romero, N. General study of iterative processes of R-order at least three under weak convergence conditions. J. Optim. Theor. Appl. 2007, 133, 163–177. [Google Scholar] [CrossRef]
9. Amat, S.; Busquier, S.; Gutiérrez, J.M. On the local convergence of secant-type methods. Int. J. Comput. Math. 2004, 81, 1153–1161. [Google Scholar] [CrossRef]
10. Argyros, I.K. Improved error bounds for Newton-like iterations under Chen-Yamamoto assumptions. Appl. Math. Lett. 1992, 10, 97–100. [Google Scholar] [CrossRef]
11. Argyros, I.K. A Newton-Kantorovich theorem for equations involving m-Fréchet differentiable operators and applications in radiative transfer. J. Comput. Appl. Math. 2001, 131, 149–159. [Google Scholar] [CrossRef]
12. Argyros, I.K.; Szidarovszky, F. Theory and Applications of Iterative Methods; CRC-Press Inc.: Boca Raton, FL, USA, 1993. [Google Scholar]
13. Brown, P.N. A local convergence theory for combined inexact-Newton finite-difference projection methods. SIAM J. Numer. Anal. 1987, 24, 402–434. [Google Scholar] [CrossRef]
14. Gutiérrez, J.M. A new semi-local convergence theorem for Newton’s method. J. Comput. Appl. Math. 1997, 79, 131–145. [Google Scholar] [CrossRef]
15. Kantorovich, L.V.; Akilov, G.P. Functional Analysis; Pergamon Press: Oxford, UK, 1982. [Google Scholar]
16. Magreñán, Á.A. Estudio de la Dinámica del método de Newton Amortiguado. Ph.D. Thesis, The Universidad de La Rioja, La Rioja, Spain, 2013. [Google Scholar]
17. Rheinboldt, W.C. An Adaptive Continuation Process for Solving Systems of Nonlinear Equations; Polish Academy of Sciences, Banach Center Publications: Warsaw, Poland, 1977; Volume 3, pp. 129–142. [Google Scholar]
18. Ypma, T.J. Local convergence of inexact Newton’s method. SIAM J. Numer. Anal. 1984, 21, 583–590. [Google Scholar] [CrossRef]
19. Amat, S.; Busquier, S. A modified secant method for semismooth equations. Appl. Math. Lett. 2003, 16, 877–881. [Google Scholar] [CrossRef]
20. Amat, S.; Busquier, S. Convergence and numerical analysis of a family of two-step Steffensen’s methods. Comput. Math. Appl. 2005, 49, 13–22. [Google Scholar] [CrossRef]
21. Hernández, M.A.; Rubio, M.J. A uniparametric family of iterative processes for solving nondifferentiable equations. J. Math. Anal. Appl. 2002, 275, 821–834. [Google Scholar] [CrossRef]
22. Hernández, M.A.; Rubio, M.J. Semilocal convergence of the secant method under mild convergence conditions of differentiability. Comput. Math. Appl. 2002, 44, 277–285. [Google Scholar] [CrossRef]
23. Argyros, I.K.; Hilout, S. Extending the Newton-Kantorovich hypothesis for solving equations. J. Comput. Appl. Math. 2010, 234, 2993–3006. [Google Scholar] [CrossRef]
24. Kou, J. Some new sixth-order methods for solving non-linear equations. Appl. Math. Comput. 2007, 189, 647–651. [Google Scholar] [CrossRef]
25. Kou, J.; Li, Y. A family of modified super-Halley methods with fourth-order convergence. Appl. Math. Comput. 2007, 189, 366–370. [Google Scholar]
26. Kou, J.; Wang, X. Some variants of Chebyshev-Halley methods for solving nonlinear equations. Appl. Math. Comput. 2007, 189, 1839–1843. [Google Scholar] [CrossRef]
27. Chun, C. Construction of third-order modifications of Newton’s method. Appl. Math. Comput. 2007, 189, 662–668. [Google Scholar] [CrossRef]
28. Argyros, I.K.; Hilout, S. Weaker conditions for the convergence of Newton’s method. J. Complexity 2012, 28, 364–387. [Google Scholar] [CrossRef]
29. Amat, S.; Busquier, S.; Candela, V.F. A class of quasi-Newton Generalized Steffensen’s methods on Banach spaces. J. Comput. Appl. Math. 2002, 149, 397–406. [Google Scholar] [CrossRef]