Article

On the Semi-Local Convergence of a Fifth-Order Convergent Method for Solving Equations

by Christopher I. Argyros 1, Ioannis K. Argyros 2,*, Stepan Shakhno 3 and Halyna Yarmola 4
1 Department of Computing and Technology, Cameron University, Lawton, OK 73505, USA
2 Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
3 Department of Theory of Optimal Processes, Ivan Franko National University of Lviv, Universytetska Str. 1, 79000 Lviv, Ukraine
4 Department of Computational Mathematics, Ivan Franko National University of Lviv, Universytetska Str. 1, 79000 Lviv, Ukraine
* Author to whom correspondence should be addressed.
Submission received: 6 January 2022 / Revised: 18 January 2022 / Accepted: 18 January 2022 / Published: 20 January 2022
(This article belongs to the Special Issue Iterative Methods with Applications in Mathematical Sciences)

Abstract:
We study the semi-local convergence of a three-step Newton-type method for solving nonlinear equations under the classical Lipschitz conditions for first-order derivatives. To develop a convergence analysis, we use the approach of restricted convergence regions in combination with majorizing scalar sequences and our technique of recurrent functions. Finally, a numerical example is given.

1. Introduction

Let us consider the equation
\[
G(x) = 0. \tag{1}
\]
Here, $G : \Omega \subseteq X \to Y$ is a nonlinear Fréchet-differentiable operator, $X$ and $Y$ are Banach spaces, and $\Omega$ is an open convex subset of $X$. To find an approximate solution $x_* \in \Omega$ of (1), iterative methods are used very often. The most popular is the quadratically convergent Newton method [1,2,3]. To increase the order of convergence, multi-step methods have been developed [4,5,6,7,8,9,10,11,12]. Multipoint iterative methods for solving nonlinear equations have advantages over one-point methods because of their higher orders of convergence and computational efficiency. Furthermore, some of these methods need to compute only one derivative or divided difference per iteration.
In this article, we consider the following fifth-order convergent method:
\[
y_k = x_k - G'(x_k)^{-1} G(x_k), \qquad
z_k = x_k - 2\,T_k^{-1} G(x_k), \qquad
x_{k+1} = z_k - G'(y_k)^{-1} G(z_k), \qquad k = 0, 1, \ldots, \tag{2}
\]
where $T_k = G'(x_k) + G'(y_k)$. It was proposed in [9]. However, the local convergence was shown there using Taylor expansions and required the existence of derivatives up to order six, which do not appear in (2). The semi-local convergence has not been studied; this is the purpose of this paper. Moreover, we use only the first derivative, which is the only derivative appearing in (2). To study multi-step methods, it is often required that the operator $G$ be sufficiently differentiable in a neighborhood of the solution. This restricts the applicability of such methods. Let us consider the function
\[
\varphi(t) =
\begin{cases}
t^3 \ln t^2 + t^5 - t^4, & t \neq 0,\\
0, & t = 0,
\end{cases}
\]
where $\varphi : \Omega \subset \mathbb{R} \to \mathbb{R}$ and $\Omega = [-0.5, 1.5]$. This function has the zero $t_* = 1$, and $\varphi'''(t) = 6 \ln t^2 + 60 t^2 - 24 t + 22$. Obviously, $\varphi'''(t)$ is not bounded on $\Omega$. Therefore, the convergence of Method (2) is not guaranteed by the analysis in the previous paper. That is why we develop a semi-local convergence analysis of Method (2) under classical Lipschitz conditions for first-order derivatives only. Hence, we extend the applicability of the method. There is a plethora of single-step, two-step, three-step, and multi-step methods whose convergence has been shown using second- or higher-order derivatives or divided differences [1,2,3,5,6,7,8,9,10,12].
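For concreteness, the following is a minimal computational sketch of Method (2) for a finite-dimensional system; the callables G and dG (the operator and its Jacobian), the starting point x0, and the tolerance are assumptions supplied by the user and are not part of the analysis below.

```python
import numpy as np

def method2(G, dG, x0, tol=1e-12, max_iter=50):
    """One possible implementation of Method (2): y_k, z_k, x_{k+1} as in the text."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        Gx = G(x)
        Jx = dG(x)
        y = x - np.linalg.solve(Jx, Gx)               # y_k = x_k - G'(x_k)^{-1} G(x_k)
        T = Jx + dG(y)                                # T_k = G'(x_k) + G'(y_k)
        z = x - 2.0 * np.linalg.solve(T, Gx)          # z_k = x_k - 2 T_k^{-1} G(x_k)
        x_new = z - np.linalg.solve(dG(y), G(z))      # x_{k+1} = z_k - G'(y_k)^{-1} G(z_k)
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter
```

In this sketch, each iteration requires two Jacobian evaluations, two evaluations of $G$, and three linear solves, which is the computational profile behind the efficiency claims for multipoint methods.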
The paper is organized as follows: Section 2 deals with the convergence of scalar majorizing sequences. Section 3 gives the semi-local convergence analysis of Method (2) and the uniqueness of solution. The numerical example is shown in Section 4.

2. Majorizing Sequence

Let $L_0$, $L$, and $\eta$ be positive parameters. Define the scalar sequences $\{\delta_k\}$, $\{\mu_k\}$, and $\{\sigma_k\}$ by
\[
\begin{aligned}
\delta_0 &= 0, \qquad \mu_0 = \eta,\\
\sigma_k &= \mu_k + \frac{L (1 + L_0 \delta_k)(\mu_k - \delta_k)^2}{2 (1 - L_0 \delta_k)(1 - q_k)},\\
\delta_{k+1} &= \sigma_k + \frac{L (\sigma_k - \delta_k)^2}{2 (1 - L_0 \mu_k)},\\
\mu_{k+1} &= \delta_{k+1} + \frac{L \big(\sigma_k - \mu_k + 0.5\,(\delta_{k+1} - \sigma_k)\big)(\delta_{k+1} - \sigma_k)}{1 - L_0 \delta_{k+1}}, \qquad \text{for each } k = 0, 1, 2, \ldots, \tag{3}
\end{aligned}
\]
where
\[
q_k = \frac{L_0}{2}\,(\mu_k + \delta_k).
\]
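Under the assumption that the parameters $L_0$, $L$, and $\eta$ are already known, the sequences (3) can be generated directly; the following short sketch (our own helper, not part of the original analysis) does exactly that and is reused in Section 4.

```python
def majorizing_sequences(L0, L, eta, steps=10):
    """Sketch: generate delta_k, mu_k, sigma_k from (3) for given L0, L, eta."""
    delta, mu = 0.0, eta
    deltas, mus, sigmas = [delta], [mu], []
    for _ in range(steps):
        q = 0.5 * L0 * (mu + delta)
        sigma = mu + L * (1 + L0 * delta) * (mu - delta) ** 2 / (2 * (1 - L0 * delta) * (1 - q))
        delta_next = sigma + L * (sigma - delta) ** 2 / (2 * (1 - L0 * mu))
        mu_next = delta_next + L * (sigma - mu + 0.5 * (delta_next - sigma)) * (delta_next - sigma) / (1 - L0 * delta_next)
        sigmas.append(sigma)
        deltas.append(delta_next)
        mus.append(mu_next)
        delta, mu = delta_next, mu_next
    return deltas, mus, sigmas
```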
Sequence (3) shall be shown to be majorizing for Method (2) in Section 3. However, first, we present some convergence results for this sequence.
Lemma 1.
Assume
\[
\delta_k \le \mu_k < \frac{1}{L_0} \quad \text{for each } k = 0, 1, 2, \ldots. \tag{4}
\]
Then, Sequence (3) is bounded from above by $\frac{1}{L_0}$, nondecreasing, and $\lim_{k\to\infty} \delta_k = \delta_*$, where $\delta_*$ is the unique least upper bound of the sequence $\{\delta_k\}$ satisfying $\delta_* \in \big[0, \frac{1}{L_0}\big]$.
Proof. 
It follows from the definition of the sequence $\{\delta_k\}$ and (4) that it is bounded from above by $\frac{1}{L_0}$ and nondecreasing, so it converges to $\delta_*$. □
Next, we present stronger convergence criteria than (4) that are easier to verify. Define polynomials on the interval [ 0 , 1 ) by
\[
p_1(t) = 3 L t - 3 L + 4 L_0 t^2,
\]
\[
p_2(t) = L (1 + t)^2 t - L (1 + t)^2 + 4 L_0 t^2,
\]
and
\[
p_3(t) = 3 L t - 3 L + 4 L_0 t.
\]
It follows that $p_1(0) = -3L$, $p_1(1) = p_2(1) = p_3(1) = 4 L_0$, $p_2(0) = -L$, and $p_3(0) = -3L$. Consequently, these polynomials have zeros in $(0, 1)$. Denote the minimal such zeros by $\alpha_1$, $\alpha_2$, and $\alpha_3$. Moreover, define
\[
a = \frac{L \eta}{2\,(1 - L_0 \eta / 2)}, \qquad
b = \frac{L \sigma_0^2}{2\,(1 - L_0 \mu_0)\,\eta}, \qquad
c = \frac{L \big(\sigma_0 - \mu_0 + 0.5\,(\delta_1 - \sigma_0)\big)(\delta_1 - \sigma_0)}{(1 - L_0 \delta_1)\,\eta},
\]
\[
d = \max\{a, b, c\}, \qquad \alpha_4 = \min\{\alpha_1, \alpha_2, \alpha_3\},
\]
and
\[
\alpha = \max\{\alpha_1, \alpha_2, \alpha_3\}.
\]
Lemma 2.
Assume
\[
d \le \alpha_4 \le \alpha \le 1 - 4 L_0 \eta. \tag{5}
\]
Then, the sequence $\{\delta_k\}$ is bounded from above by $\delta_{**} = \frac{2 \eta}{1 - \alpha}$, nondecreasing, and $\lim_{k\to\infty} \delta_k = \delta_*$, where $\delta_*$ is the least upper bound satisfying $\delta_* \in [0, \delta_{**}]$.
Proof. 
Items
($A_i^{(1)}$): $0 \le \dfrac{L (1 + L_0 \delta_i)(\mu_i - \delta_i)}{2 (1 - L_0 \delta_i)(1 - q_i)} \le \alpha$,
($A_i^{(2)}$): $0 \le \dfrac{L (\sigma_i - \delta_i)^2}{2 (1 - L_0 \mu_i)} \le \alpha (\mu_i - \delta_i)$,
($A_i^{(3)}$): $0 \le \dfrac{L \big(\sigma_i - \mu_i + 0.5\,(\delta_{i+1} - \sigma_i)\big)(\delta_{i+1} - \sigma_i)}{1 - L_0 \delta_{i+1}} \le \alpha (\mu_i - \delta_i)$
shall be shown using mathematical induction on $i$. These items are true for $i = 0$ by (5). It follows from Definition (3) and ($A_0^{(1)}$), ($A_0^{(2)}$), and ($A_0^{(3)}$) that
\[
\sigma_0 - \mu_0 \le \alpha (\mu_0 - \delta_0), \qquad \delta_1 - \sigma_0 \le \alpha (\mu_0 - \delta_0), \qquad \text{and} \qquad \mu_1 - \delta_1 \le \alpha (\mu_0 - \delta_0).
\]
Assume items ($A_i^{(1)}$), ($A_i^{(2)}$), and ($A_i^{(3)}$) are true for all values of $i$ smaller than or equal to $k - 1$. Then, we have
\[
0 \le \sigma_i - \mu_i \le \alpha (\mu_i - \delta_i) \le \alpha^{i+1} \eta,
\]
\[
0 \le \delta_{i+1} - \sigma_i \le \alpha (\mu_i - \delta_i) \le \alpha^{i+1} \eta,
\]
and
\[
0 \le \mu_{i+1} - \delta_{i+1} \le \alpha (\mu_i - \delta_i) \le \alpha^{i+1} \eta.
\]
It follows that
\[
\begin{aligned}
\mu_{i+1} &\le \delta_{i+1} + \alpha^{i+1} \eta \le \sigma_i + \alpha^{i+1} \eta + \alpha^{i+1} \eta \le \mu_i + \alpha^{i} \eta + \alpha^{i+1} \eta + \alpha^{i+1} \eta\\
&\le \delta_i + \alpha^{i} \eta + \alpha^{i} \eta + \alpha^{i+1} \eta + \alpha^{i+1} \eta \le \cdots \le \delta_0 + \alpha^{0} \eta + \cdots + \alpha^{i} \eta + \alpha^{i} \eta + \alpha^{i+1} \eta + \alpha^{i+1} \eta\\
&\le \frac{2 (1 - \alpha^{i+2}) \eta}{1 - \alpha} \le \frac{2 \eta}{1 - \alpha} = \delta_{**},
\end{aligned}
\]
since
\[
\mu_{i+1} \ge \delta_{i+1}, \qquad \delta_{i+1} \le \frac{2 (1 - \alpha^{i+2}) \eta}{1 - \alpha} \le \delta_{**}.
\]
Then, evidently, ($A_i^{(1)}$) certainly holds if
\[
3 L \alpha^{i} \eta + 4 L_0 \alpha (1 + \alpha + \cdots + \alpha^{i}) \eta - 2 \alpha \le 0,
\]
or if
\[
f_i^{(1)}(t) \le 0 \quad \text{at } t = \alpha_1, \tag{7}
\]
where the recurrent polynomials $f_i^{(1)}$ are defined on the interval $[0, 1)$ by
\[
f_i^{(1)}(t) = 3 L t^{i-1} \eta + 4 L_0 (1 + t + \cdots + t^{i}) \eta - 2.
\]
A connection between two consecutive polynomials is needed:
\[
\begin{aligned}
f_{i+1}^{(1)}(t) &= 3 L t^{i} \eta + 4 L_0 (1 + t + \cdots + t^{i+1}) \eta - 2 + f_i^{(1)}(t) - 3 L t^{i-1} \eta - 4 L_0 (1 + t + \cdots + t^{i}) \eta + 2\\
&= f_i^{(1)}(t) + p_1(t)\, t^{i-1} \eta. \tag{8}
\end{aligned}
\]
In particular, one obtains
\[
f_{i+1}^{(1)}(t) = f_i^{(1)}(t) \quad \text{at } t = \alpha_1.
\]
Define the function $f^{(1)}$ on the interval $[0, 1)$ by
\[
f^{(1)}(t) = \lim_{i\to\infty} f_i^{(1)}(t).
\]
Then, by (8), one has
\[
f^{(1)}(t) = \frac{4 L_0 \eta}{1 - t} - 2.
\]
Hence, (7) holds if
\[
f^{(1)}(t) \le 0 \quad \text{at } t = \alpha_1,
\]
which is true by (5).
Similarly, instead of ($A_i^{(2)}$), one can show
($A_i^{(2)}$)′: $0 \le \dfrac{L (1 + \alpha)^2 (\mu_i - \delta_i)}{2 (1 - L_0 \mu_i)} \le \alpha$, since $0 \le \sigma_i - \delta_i \le \sigma_i - \mu_i + \mu_i - \delta_i \le (1 + \alpha)(\mu_i - \delta_i)$.
Then, ($A_i^{(2)}$) holds if
\[
L (1 + \alpha)^2 \alpha^{i} \eta + 4 L_0 \alpha (1 + \alpha + \cdots + \alpha^{i}) \eta - 2 \alpha \le 0,
\]
or if
\[
f_i^{(2)}(t) \le 0 \quad \text{at } t = \alpha_2, \tag{15}
\]
where
\[
f_i^{(2)}(t) = L (1 + \alpha)^2 t^{i-1} \eta + 4 L_0 (1 + t + \cdots + t^{i}) \eta - 2. \tag{16}
\]
This time, one obtains
\[
f_{i+1}^{(2)}(t) = f_i^{(2)}(t) + p_2(t)\, t^{i-1} \eta.
\]
In particular, one has
\[
f_{i+1}^{(2)}(t) = f_i^{(2)}(t) \quad \text{at } t = \alpha_2.
\]
Define the function $f^{(2)}$ on the interval $[0, 1)$ by
\[
f^{(2)}(t) = \lim_{i\to\infty} f_i^{(2)}(t). \tag{18}
\]
It follows from (16) and (18) that
\[
f^{(2)}(t) = \frac{4 L_0 \eta}{1 - t} - 2.
\]
Therefore, (15) holds if
\[
f^{(2)}(t) \le 0 \quad \text{at } t = \alpha_2,
\]
which is true by (5).
Similarly, ($A_i^{(3)}$) holds if
\[
(A_i^{(3)})'\colon \quad 0 \le \frac{3 L \alpha^2 (\mu_i - \delta_i)}{2 (1 - L_0 \delta_{i+1})} \le \alpha, \tag{21}
\]
where we also used
\[
0 \le \sigma_i - \mu_i + \frac{1}{2}(\delta_{i+1} - \sigma_i) \le \Big(\alpha + \frac{1}{2}\alpha\Big)(\mu_i - \delta_i) = \frac{3}{2}\,\alpha\,(\mu_i - \delta_i).
\]
Then, (21) holds if
\[
3 L \alpha^2 \alpha^{i} \eta + 4 L_0 \alpha (1 + \alpha + \cdots + \alpha^{i+1}) \eta - 2 \alpha \le 0,
\]
or if
\[
f_i^{(3)}(t) \le 0 \quad \text{at } t = \alpha_3, \tag{23}
\]
where
\[
f_i^{(3)}(t) = 3 L t^{i+1} \eta + 4 L_0 (1 + t + \cdots + t^{i+1}) \eta - 2. \tag{24}
\]
As in (8), one obtains
\[
f_{i+1}^{(3)}(t) = f_i^{(3)}(t) + p_3(t)\, t^{i+1} \eta.
\]
In particular, one obtains that
\[
f_{i+1}^{(3)}(t) = f_i^{(3)}(t) \quad \text{at } t = \alpha_3.
\]
Define the function $f^{(3)}$ on the interval $[0, 1)$ by
\[
f^{(3)}(t) = \lim_{i\to\infty} f_i^{(3)}(t). \tag{27}
\]
In view of (24) and (27), we have
\[
f^{(3)}(t) = \frac{4 L_0 \eta}{1 - t} - 2.
\]
Hence, (23) holds if
\[
f^{(3)}(t) \le 0 \quad \text{at } t = \alpha_3,
\]
which is true by (5).
We also used
\[
\frac{1}{1 - \frac{L_0}{2}(\mu_i + \delta_i)} \le 2,
\]
which is true, since
\[
L_0 (\mu_i + \delta_i) \le \frac{4 L_0 \eta}{1 - \alpha} < 1,
\]
and $1 + L_0 \delta_i < \frac{3}{2}$ by (5). The induction for items ($A_i^{(1)}$)–($A_i^{(3)}$) is completed. It follows that the sequence $\{\delta_i\}$ is bounded from above by $\delta_{**}$ and is nondecreasing, and, as such, it converges to $\delta_*$. □
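Criterion (5) of Lemma 2 can also be checked numerically. The sketch below is illustrative only: the bisection helper and the parameter names are ours, the quantities $\sigma_0$ and $\delta_1$ are assumed to have been computed from (3), and the polynomials are exactly $p_1$, $p_2$, $p_3$ defined above.

```python
def smallest_zero(p, samples=10000, tol=1e-12):
    """Locate the smallest zero of p in (0, 1) by a sign-change scan followed by bisection."""
    lo, hi = 1e-9, 1.0 - 1e-9
    prev_t, prev_v = lo, p(lo)
    for i in range(1, samples + 1):
        t = lo + (hi - lo) * i / samples
        v = p(t)
        if prev_v <= 0.0 <= v or v <= 0.0 <= prev_v:     # bracket containing a sign change
            a, b = prev_t, t
            while b - a > tol:
                m = 0.5 * (a + b)
                if (p(a) <= 0.0) == (p(m) <= 0.0):
                    a = m
                else:
                    b = m
            return 0.5 * (a + b)
        prev_t, prev_v = t, v
    return None

def criterion_5_holds(L0, L, eta, sigma0, delta1):
    """Check d <= alpha_4 <= alpha <= 1 - 4*L0*eta, with sigma0 and delta1 taken from (3)."""
    p1 = lambda t: 3*L*t - 3*L + 4*L0*t**2
    p2 = lambda t: L*(1 + t)**2*t - L*(1 + t)**2 + 4*L0*t**2
    p3 = lambda t: 3*L*t - 3*L + 4*L0*t
    a1, a2, a3 = smallest_zero(p1), smallest_zero(p2), smallest_zero(p3)
    a = L*eta / (2*(1 - L0*eta/2))
    b = L*sigma0**2 / (2*(1 - L0*eta)*eta)
    c = L*(sigma0 - eta + 0.5*(delta1 - sigma0))*(delta1 - sigma0) / ((1 - L0*delta1)*eta)
    d, alpha4, alpha = max(a, b, c), min(a1, a2, a3), max(a1, a2, a3)
    return d <= alpha4 <= alpha <= 1 - 4*L0*eta
```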

3. Semi-Local Convergence

The hypotheses (H) are needed. Assume:
Hypothesis 1.
There exist $x_0 \in \Omega$ and $\eta > 0$ such that $G'(x_0)^{-1} \in \mathcal{L}(Y, X)$ and $\|G'(x_0)^{-1} G(x_0)\| \le \eta$.
Hypothesis 2.
The center Lipschitz condition $\|G'(x_0)^{-1}(G'(w) - G'(x_0))\| \le L_0 \|w - x_0\|$ holds for all $w \in \Omega$ and some $L_0 > 0$.
Let $\Omega_1 = \Omega \cap U\!\big(x_0, \frac{1}{L_0}\big)$.
Hypothesis 3.
The restricted Lipschitz condition $\|G'(x_0)^{-1}(G'(w) - G'(v))\| \le L \|w - v\|$ holds for all $v, w \in \Omega_1$ and some $L > 0$.
Hypothesis 4.
Hypotheses of Lemma 1 or Lemma 2 hold.
Hypothesis 5.
$U[x_0, \delta_*] \subseteq \Omega$ (or $U[x_0, \delta_{**}] \subseteq \Omega$).
The main semi-local convergence result for Method (2) is shown next using the hypotheses (H).
Theorem 1.
Assume the hypotheses (H) hold. Then, the sequence $\{x_k\}$ produced by Method (2) exists in $U(x_0, \delta_*)$, stays in $U(x_0, \delta_*)$, and $\lim_{k\to\infty} x_k = x_* \in U[x_0, \delta_*]$, so that $G(x_*) = 0$ and
\[
\|x_k - x_*\| \le \delta_* - \delta_k. \tag{30}
\]
Proof. 
Items
($B_k^{(1)}$): $\|z_k - y_k\| \le \sigma_k - \mu_k$,
($B_k^{(2)}$): $\|x_{k+1} - z_k\| \le \delta_{k+1} - \sigma_k$,
($B_k^{(3)}$): $\|y_k - x_k\| \le \mu_k - \delta_k$
shall be shown using induction on $k$.
By (H1), one obtains
\[
\|y_0 - x_0\| = \|G'(x_0)^{-1} G(x_0)\| \le \mu_0 - \delta_0 = \eta < \delta_*,
\]
so $y_0 \in U(x_0, \delta_*)$ and ($B_0^{(3)}$) holds. Let $w \in U(x_0, \delta_*)$. Then, it follows from (H1) and (H2) that
\[
\|G'(x_0)^{-1}(G'(w) - G'(x_0))\| \le L_0 \|w - x_0\| \le L_0 \delta_* < 1,
\]
so $G'(w)^{-1} \in \mathcal{L}(Y, X)$ and
\[
\|G'(w)^{-1} G'(x_0)\| \le \frac{1}{1 - L_0 \|w - x_0\|} \tag{31}
\]
follow by the Banach lemma on invertible linear operators [3,13]. Notice also that $x_1$ is well-defined by the third substep of Method (2) for $k = 0$, since $y_0 \in U(x_0, \delta_*)$.
Next, the linear operator $G'(x_k) + G'(y_k)$ is shown to be invertible. Indeed, one obtains by (H2):
\[
\begin{aligned}
\|(2 G'(x_0))^{-1}\big(G'(x_k) + G'(y_k) - 2 G'(x_0)\big)\| &\le \frac{1}{2}\Big( \|G'(x_0)^{-1}(G'(x_k) - G'(x_0))\| + \|G'(x_0)^{-1}(G'(y_k) - G'(x_0))\| \Big)\\
&\le \frac{L_0}{2}\big( \|x_k - x_0\| + \|y_k - x_0\| \big) \le \frac{L_0}{2}\,(\mu_k + \delta_k) = q_k \le L_0 \delta_* < 1,
\end{aligned}
\]
so
\[
\|(G'(x_k) + G'(y_k))^{-1} G'(x_0)\| \le \frac{1}{2\big(1 - \frac{L_0}{2}(\mu_k + \delta_k)\big)}.
\]
In particular, $z_0$ is well-defined by the second substep of Method (2) for $k = 0$. Moreover, we can write
\[
\begin{aligned}
z_k &= x_k - G'(x_k)^{-1} G(x_k) + G'(x_k)^{-1} G(x_k) - 2\,(G'(x_k) + G'(y_k))^{-1} G(x_k)\\
&= y_k + \big[G'(x_k)^{-1} - 2\,(G'(x_k) + G'(y_k))^{-1}\big] G(x_k)\\
&= y_k - G'(x_k)^{-1}\big[G'(y_k) - G'(x_k)\big](G'(x_k) + G'(y_k))^{-1} G'(x_k)\,(y_k - x_k).
\end{aligned}
\]
Hence, by (31) (for $w = x_0$), (32), (33), (H2), and (3), one obtains
\[
\|z_k - y_k\| \le \frac{L (1 + L_0 \|x_k - x_0\|)\,\|y_k - x_k\|^2}{2 (1 - L_0 \|x_k - x_0\|)(1 - q_k)} \le \frac{L (1 + L_0 \delta_k)(\mu_k - \delta_k)^2}{2 (1 - L_0 \delta_k)(1 - q_k)} = \sigma_k - \mu_k,
\]
where we also used
\[
\|G'(x_0)^{-1} G'(x_k)\| = \|G'(x_0)^{-1}[G'(x_0) + G'(x_k) - G'(x_0)]\| \le 1 + L_0 \|x_k - x_0\| \le 1 + L_0 (\delta_k - \delta_0) = 1 + L_0 \delta_k.
\]
This shows ($B_k^{(1)}$) for $k = 0$.
Moreover, one has
\[
\|z_k - x_0\| \le \|z_k - y_k\| + \|y_k - x_0\| \le \sigma_k - \mu_k + \mu_k - \delta_0 = \sigma_k < \delta_*,
\]
so $z_0 \in U(x_0, \delta_*)$.
One can write, by the second substep of Method (2):
\[
G(z_k) = G(z_k) - G(x_k) - \frac{1}{2}\big(G'(x_k) + G'(y_k)\big)(z_k - x_k) = \frac{1}{2}\int_0^1 \Big[2 G'\big(x_k + \theta (z_k - x_k)\big) - \big(G'(x_k) + G'(y_k)\big)\Big]\,d\theta\,(z_k - x_k),
\]
so, by (H3),
\[
\begin{aligned}
\|G'(x_0)^{-1} G(z_k)\| &\le \frac{L}{2}\int_0^1 \big[(1 - \theta)\|x_k - y_k\| + \theta \|z_k - y_k\| + \theta \|z_k - x_k\|\big]\,d\theta\,\|z_k - x_k\|\\
&= \frac{L \big(\|x_k - y_k\| + \|z_k - y_k\| + \|z_k - x_k\|\big)\,\|z_k - x_k\|}{4}\\
&\le \frac{L \big[(\mu_k - \delta_k) + (\sigma_k - \mu_k) + (\sigma_k - \delta_k)\big](\sigma_k - \delta_k)}{4} = \frac{L (\sigma_k - \delta_k)^2}{2}.
\end{aligned}
\]
Hence, by (3), (31) (for $w = y_k$), and (35),
\[
\|x_{k+1} - z_k\| \le \frac{L (\sigma_k - \delta_k)^2}{2 (1 - L_0 \mu_k)} = \delta_{k+1} - \sigma_k,
\]
which shows ($B_k^{(2)}$). Using the third substep of Method (2), one has
\[
G(x_{k+1}) = G(x_{k+1}) - G(z_k) - G'(y_k)(x_{k+1} - z_k) = \Big[\int_0^1 G'\big(z_k + \theta (x_{k+1} - z_k)\big)\,d\theta - G'(y_k)\Big](x_{k+1} - z_k),
\]
so, by (H3), (31) (for $w = x_{k+1}$), and the induction hypotheses,
\[
\|y_{k+1} - x_{k+1}\| \le \frac{L \big(\|z_k - y_k\| + \frac{1}{2}\|x_{k+1} - z_k\|\big)\,\|x_{k+1} - z_k\|}{1 - L_0 \|x_{k+1} - x_0\|} \le \frac{L \big(\sigma_k - \mu_k + \frac{1}{2}(\delta_{k+1} - \sigma_k)\big)(\delta_{k+1} - \sigma_k)}{1 - L_0 \delta_{k+1}} = \mu_{k+1} - \delta_{k+1}.
\]
The following estimates have also been used:
\[
\|x_{k+1} - x_0\| \le \|x_{k+1} - z_k\| + \|z_k - x_0\| \le \delta_{k+1} - \sigma_k + \sigma_k - \delta_0 = \delta_{k+1} < \delta_*
\]
and
\[
\|y_{k+1} - x_0\| \le \|y_{k+1} - x_{k+1}\| + \|x_{k+1} - x_0\| \le \mu_{k+1} - \delta_{k+1} + \delta_{k+1} - \delta_0 = \mu_{k+1} < \delta_*,
\]
so $x_{k+1}, y_{k+1} \in U(x_0, \delta_*)$.
Hence, the induction for items ($B_k^{(1)}$)–($B_k^{(3)}$) is completed. Moreover, since $x_k, y_k, z_k \in U(x_0, \delta_*)$ and $\|x_{k+1} - x_k\| \le \delta_{k+1} - \delta_k$ with $\{\delta_k\}$ convergent, the sequence $\{x_k\}$ is fundamental in the Banach space $X$. Therefore, there exists $x_* \in U[x_0, \delta_*]$ such that $\lim_{k\to\infty} x_k = x_*$. By (37), one obtains
\[
\|G'(x_0)^{-1} G(x_{k+1})\| \le L \big(\sigma_k - \mu_k + \tfrac{1}{2}(\delta_{k+1} - \sigma_k)\big)(\delta_{k+1} - \sigma_k) \to 0 \quad \text{as } k \to \infty.
\]
It follows that $G(x_*) = 0$, where the continuity of $G$ is also used.
Let $j \ge 0$. Then, from the estimate
\[
\|x_{k+j} - x_k\| \le \|x_{k+j} - x_{k+j-1}\| + \|x_{k+j-1} - x_{k+j-2}\| + \cdots + \|x_{k+1} - x_k\| \le \delta_{k+j} - \delta_k,
\]
one obtains (30) by letting $j \to \infty$. □
A result on the uniqueness of the solution follows next.
Theorem 2.
Assume:
(i) 
The point $x_*$ is a simple solution of the equation $G(x) = 0$ in $U(x_0, \delta_*)$.
(ii) 
There exists $\bar{\delta}_* \ge \delta_*$ such that
\[
L_0 (\bar{\delta}_* + \delta_*) < 2. \tag{41}
\]
Let $\Omega_2 = \Omega \cap U[x_0, \bar{\delta}_*]$. Then, the only solution of Equation (1) in the region $\Omega_2$ is $x_*$.
Proof. 
Let $\lambda \in \Omega_2$ with $G(\lambda) = 0$. Set $M = \int_0^1 G'\big(\lambda + \theta (x_* - \lambda)\big)\,d\theta$. Then, in view of (H2) and (41), one obtains
\[
\|G'(x_0)^{-1}(M - G'(x_0))\| \le \frac{L_0}{2}\big(\|x_* - x_0\| + \|\lambda - x_0\|\big) \le \frac{L_0}{2}\,(\delta_* + \bar{\delta}_*) < 1,
\]
so $\lambda = x_*$ follows from the invertibility of $M$ and the identity $0 = G(\lambda) - G(x_*) = M (\lambda - x_*)$. □

4. Numerical Example

Let us consider the following system of nonlinear equations. Let $X = Y = \mathbb{R}^n$, $\Omega = (0, 1.5)^n$, and
\[
G_i(x) =
\begin{cases}
2 x_i^3 + 2 x_{i+1} - 4, & i = 1,\\
2 x_i^3 + x_{i-1} + 2 x_{i+1} - 5, & 1 < i < n,\\
2 x_i^3 + x_{i-1} - 3, & i = n.
\end{cases}
\]
The solution of the system $G(x) = 0$ is $x_* = (1, \ldots, 1)^T$. Since, for each $x, w$,
\[
G'(x) - G'(w) = \mathrm{diag}\big\{6 (x_1^2 - w_1^2), \ldots, 6 (x_n^2 - w_n^2)\big\},
\]
we have
\[
L_0 = 6 \max_{1 \le i \le n}\Big\{ |m_{ii}| \max_{x \in \Omega} |x_i + \xi_i| \Big\}, \qquad
L = 6 \max_{1 \le i \le n}\Big\{ |m_{ii}| \max_{x, w \in \Omega_1} |x_i + w_i| \Big\}.
\]
Here, $x_0 = (\xi_i)_{i=1}^n$, and $m_{ii}$ denotes the $i$-th diagonal element of the matrix $[G'(x_0)]^{-1}$. Let us choose $x_0 = (1.18, \ldots, 1.18)^T$ and $n = 20$. Then, we obtain $\eta = 0.1620$, $L_0 = 2.0455$, $\rho = \frac{1}{L_0} = 0.4889$, $\Omega_1 = (0.6911, 1.5)^n$, and $L = 2.2898$. The majorizing sequences
\[
\{\delta_k\} = \{0,\ 0.2651,\ 0.2956,\ 0.2957,\ \ldots\}, \qquad
\{\mu_k\} = \{0.1620,\ 0.2885,\ 0.2957,\ \ldots\}, \qquad
\{\sigma_k\} = \{0.1980,\ 0.2934,\ 0.2957,\ \ldots\}
\]
converge to $\delta_* = 0.2957 < \rho$. Therefore, the conditions of Lemma 1 are satisfied.
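For this example, the test system, its tridiagonal Jacobian, and the majorizing sequences can be assembled as below. This is a sketch only: the helper names are ours, it reuses method2 and majorizing_sequences from the earlier sketches, and the choice of the infinity norm is our assumption about the norm used.

```python
import numpy as np

def G_example(x):
    """The system of this section: the three cases i = 1, 1 < i < n, i = n."""
    n = len(x)
    g = np.empty(n)
    g[0] = 2.0 * x[0]**3 + 2.0 * x[1] - 4.0
    for i in range(1, n - 1):
        g[i] = 2.0 * x[i]**3 + x[i - 1] + 2.0 * x[i + 1] - 5.0
    g[n - 1] = 2.0 * x[n - 1]**3 + x[n - 2] - 3.0
    return g

def dG_example(x):
    """Tridiagonal Jacobian: diagonal 6*x_i^2, subdiagonal 1, superdiagonal 2."""
    n = len(x)
    J = np.zeros((n, n))
    for i in range(n):
        J[i, i] = 6.0 * x[i]**2
        if i > 0:
            J[i, i - 1] = 1.0
        if i < n - 1:
            J[i, i + 1] = 2.0
    return J

n = 20
x0 = np.full(n, 1.18)
eta = np.linalg.norm(np.linalg.solve(dG_example(x0), G_example(x0)), np.inf)  # ||G'(x0)^{-1} G(x0)||
x_approx, iters = method2(G_example, dG_example, x0)             # iterates of Method (2)
deltas, mus, sigmas = majorizing_sequences(2.0455, 2.2898, eta)  # L0, L as reported above
```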
Table 1 gives the error estimates (30) and ($B_k^{(1)}$)–($B_k^{(3)}$). The solution $x_*$ is obtained after three iterations for $\varepsilon = 10^{-6}$. Therefore, the conditions of Theorem 1 are satisfied, and $\{x_k\}$ converges to $x_* \in U[x_0, \delta_*]$.
Let us estimate the order of convergence using the computational order of convergence (COC) and the approximate computational order of convergence (ACOC) [1,9], given, respectively, by
\[
p^* = \frac{\ln\dfrac{\|x_{k+1} - x_*\|}{\|x_k - x_*\|}}{\ln\dfrac{\|x_k - x_*\|}{\|x_{k-1} - x_*\|}}, \quad \text{for each } k = 1, 2, \ldots,
\qquad \text{and} \qquad
p = \frac{\ln\dfrac{\|x_{k+1} - x_k\|}{\|x_k - x_{k-1}\|}}{\ln\dfrac{\|x_k - x_{k-1}\|}{\|x_{k-1} - x_{k-2}\|}}, \quad \text{for each } k = 2, 3, \ldots.
\]
We use the stopping criterion $\|x_{k+1} - x_k\| < 10^{-100}$. If $x_0 = (2.3, \ldots, 2.3)^T$, then $p^* = 4.9080$ and $p = 4.9084$. If $x_0 = (1.7, \ldots, 1.7)^T$, then $p^* \approx p = 4.9818$. The method converges to a solution in seven iterations. Therefore, the computational order of convergence coincides with the theoretical one.
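As an illustration, COC and ACOC can be evaluated from a stored list of iterates as in the sketch below (our helper, not from the paper). Note that reaching the $10^{-100}$ stopping criterion requires multiprecision arithmetic; this double-precision sketch only mirrors the formulas.

```python
import numpy as np

def coc(iterates, x_star):
    """Computational order of convergence p*, for k = 1, 2, ..."""
    e = [np.linalg.norm(x - x_star) for x in iterates]
    return [np.log(e[k + 1] / e[k]) / np.log(e[k] / e[k - 1]) for k in range(1, len(e) - 1)]

def acoc(iterates):
    """Approximate computational order of convergence p, from successive differences only."""
    s = [np.linalg.norm(iterates[k + 1] - iterates[k]) for k in range(len(iterates) - 1)]
    return [np.log(s[k + 1] / s[k]) / np.log(s[k] / s[k - 1]) for k in range(1, len(s) - 1)]
```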

5. Conclusions

A semi-local convergence analysis of a fifth-order convergent Newton-type method is provided under classical Lipschitz conditions for first-order derivatives only. The regions of convergence and the uniqueness of the solution are established. The results of a numerical experiment are also given.

Author Contributions

Conceptualization, I.K.A.; methodology, I.K.A.; investigation, I.K.A., C.I.A., S.S. and H.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Argyros, I.K.; Magreñán, A.A. Iterative Methods and Their Dynamics with Applications: A Contemporary Study; CRC Press: Boca Raton, FL, USA, 2017.
  2. Dennis, J.E., Jr.; Schnabel, R.B. Numerical Methods for Unconstrained Optimization and Nonlinear Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1983.
  3. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970.
  4. Amat, S.; Argyros, I.K.; Busquier, S.; Hernández-Verón, M.A. On two high-order families of frozen Newton-type methods. Numer. Linear Algebra Appl. 2018, 25, e2126.
  5. Amat, S.; Bermudez, C.; Hernandez, M.A.; Martinez, E. On an efficient k-step iterative method for nonlinear equations. J. Comput. Appl. Math. 2016, 302, 258–271.
  6. Argyros, I.K.; George, S. On the complexity of extending the convergence region for Traub’s method. J. Complex. 2020, 56, 101423.
  7. Argyros, I.K.; George, S. On a two-step Kurchatov-type method in Banach space. Mediterr. J. Math. 2019, 16, 21.
  8. Argyros, I.K.; Shakhno, S. Extended Two-Step-Kurchatov Method for Solving Banach Space Valued Nondifferentiable Equations. Int. J. Appl. Comput. Math. 2020, 6, 32.
  9. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. Increasing the convergence order of an iterative method for nonlinear systems. Appl. Math. Lett. 2012, 25, 2369–2374.
  10. Cordero, A.; Martínez, E.; Torregrosa, J.R. Iterative methods of order four and five for systems of nonlinear equations. J. Comput. Appl. Math. 2009, 231, 541–551.
  11. Shakhno, S.M. On an iterative algorithm with superquadratic convergence for solving nonlinear operator equations. J. Comput. Appl. Math. 2009, 231, 222–235.
  12. Shakhno, S.M.; Iakymchuk, R.P.; Yarmola, H.P. Convergence of a two-step method for the nonlinear least squares problem with decomposition of operator. J. Numer. Appl. Math. 2018, 2, 82–95.
  13. Kantorovich, L.V.; Akilov, G.P. Functional Analysis; Pergamon Press: Oxford, UK, 1982.
Table 1. Error estimates.

k | $\|z_k - y_k\|$ | $\sigma_k - \mu_k$ | $\|x_{k+1} - z_k\|$ | $\delta_{k+1} - \sigma_k$ | $\|y_k - x_k\|$ | $\mu_k - \delta_k$ | $\|x_k - x_*\|$ | $\delta_* - \delta_k$
0 | 1.98 × 10^{-2} | 3.60 × 10^{-2} | 3.92 × 10^{-3} | 6.71 × 10^{-2} | 2.37 × 10^{-2} | 1.62 × 10^{-1} | 1.80 × 10^{-1} | 2.96 × 10^{-1}
1 | 3.23 × 10^{-8} | 4.86 × 10^{-3} | 6.93 × 10^{-12} | 2.22 × 10^{-3} | 3.23 × 10^{-8} | 2.34 × 10^{-2} | 1.74 × 10^{-4} | 3.05 × 10^{-2}
2 | 0 | 6.95 × 10^{-8} | 1.11 × 10^{-16} | 1.72 × 10^{-8} | 1.11 × 10^{-16} | 7.69 × 10^{-5} | 1.11 × 10^{-16} | 7.70 × 10^{-5}
3 | 0 | 0 | 0 | 7.77 × 10^{-15} | 1.11 × 10^{-16} | 7.77 × 10^{-15} |  |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

