Article

On the Semi-Local Convergence of a Traub-Type Method for Solving Equations

Samundra Regmi 1, Christopher I. Argyros 2, Ioannis K. Argyros 3,* and Santhosh George 4

1 Learning Commons, University of North Texas at Dallas, Dallas, TX 75201, USA
2 Department of Computing and Technology, Cameron University, Lawton, OK 73505, USA
3 Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
4 Department of Mathematical and Computational Sciences, National Institute of Technology Karnataka, Mangaluru 575025, India
* Author to whom correspondence should be addressed.
Submission received: 25 December 2021 / Revised: 9 January 2022 / Accepted: 12 January 2022 / Published: 14 January 2022
(This article belongs to the Section Mathematical Sciences)

Abstract

Traub's celebrated method, involving operators defined on Banach spaces, is extended. The main feature of this study is the determination of a subset of the original domain that also contains the Traub iterates. On the smaller domain, the Lipschitz constants are smaller too; hence, a finer analysis is developed without any additional conditions. This methodology applies to other methods as well. Examples justify the theoretical results.
MSC:
49M15; 47H17; 65J15; 65G99; 41A25

1. Introduction

The purpose of this article is to locate a solution $x^*$ of the equation
$$F(x) = 0, \qquad (1)$$
where $F: \Omega \subset E_1 \to E_2$ is Fréchet differentiable, $E_1$ and $E_2$ are Banach spaces, and $\Omega$ is a nonempty open set.
The famous quadratically convergent Newton–Kantorovich method, defined for all $j = 0, 1, 2, \ldots$ by
$$x_0 \in \Omega, \qquad x_{j+1} = x_j - F'(x_j)^{-1}F(x_j), \qquad (2)$$
has been used extensively to produce sequences $\{x_j\}$ such that $\lim_{j \to \infty} x_j = x^*$ [1,2,3,4,5,6,7,8]. Although there is a plethora of convergence results for (2), some problems remain. In particular, the convergence ball is in general small [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18]. Hence, it is important to enlarge this ball without imposing additional conditions. Other defects relate to the accuracy of the bounds on $\|x_{j+1} - x_j\|$ or $\|x_j - x^*\|$, as well as to the results on the location and uniqueness of $x^*$. The same defects appear in the study of methods of high convergence order [19,20,21,22]. We have developed a technique that helps determine a set $D \subset \Omega$ in which the iterates can also be found. In this way, using $D$ instead of $\Omega$, a finer analysis is possible with no additional conditions.
We demonstrate our technique on a method of a certain high convergence order, although it can similarly be applied to other methods [12,15,16,17].
We extend the two-step Traub method [21] (see also [18,22]) to the following three-step, fifth-order method:
$$y_j = x_j - F'(x_j)^{-1}F(x_j), \qquad z_j = y_j - F'(x_j)^{-1}F(y_j), \qquad x_{j+1} = z_j - F'(x_j)^{-1}F(z_j). \qquad (3)$$
Traub’s two-step method requires less computational effort than any third-order method utilizing the second derivative [2,4,5,14].
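For intuition, here is a minimal numerical sketch of the three-step method above, specialized to a scalar equation (in the Banach-space setting, division by $F'(x)$ becomes application of the inverse operator $F'(x)^{-1}$; the helper name and all constants are illustrative, not from the paper):

```python
# Minimal sketch of the three-step Traub-type method (3) for a scalar
# equation F(x) = 0; each cycle evaluates the derivative only once.

def traub_step(F, dF, x, steps=50, tol=1e-12):
    """Run method (3): y, z, and x_new all reuse the single value dF(x)."""
    for _ in range(steps):
        d = dF(x)                 # F'(x_j), "inverted" once per cycle
        y = x - F(x) / d
        z = y - F(y) / d
        x_new = z - F(z) / d
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: the cubic equation t**3 - p = 0 (cf. Example 2 in Section 3)
p = 0.45
x_star = traub_step(lambda t: t**3 - p, lambda t: 3 * t**2, 1.0)
```

Starting from $x_0 = 1$, the iterates converge rapidly to $p^{1/3}$.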
Let us provide the earlier results.
(i)
Convergence was shown by Potra and Pták in [14] using
$$\|F'(x_0)^{-1}F(x_0)\| \le \mu,$$
$$\|F'(x_0)^{-1}(F'(w) - F'(v))\| \le L_1\|w - v\| \quad \text{for all } w, v \in \Omega, \qquad (4)$$
$$T_1 = 2L_1\mu \le 1 \qquad (5)$$
and
$$U[x_0, \rho_0] \subset \Omega,$$
where
$$\rho_0 = \frac{1 - \sqrt{1 - T_1}}{L_1}$$
and $U[x_0, \rho_0]$ denotes the closed ball with center $x_0 \in \Omega$ and radius $\rho_0$. We introduce the center-Lipschitz condition
$$\|F'(x_0)^{-1}(F'(w) - F'(x_0))\| \le L_0\|w - x_0\| \quad \text{for all } w \in \Omega.$$
Define the set
$$D = U\!\left(x_0, \tfrac{1}{L_0}\right) \cap \Omega.$$
Moreover, we introduce the restricted Lipschitz condition
$$\|F'(x_0)^{-1}(F'(w) - F'(v))\| \le L\|w - v\| \quad \text{for all } w, v \in D. \qquad (9)$$
We then notice that
$$L_0 \le L_1 \qquad (10)$$
and
$$L \le L_1 \qquad (11)$$
hold, since
$$D \subset \Omega. \qquad (12)$$
Suppose
$$T = 2L\mu \le 1. \qquad (13)$$
It follows that (9), (13), and
$$\bar{\rho}_0 = \frac{1 - \sqrt{1 - T}}{L}$$
can be used in place of (4), (5), and $\rho_0$, respectively, as given in [14] (Theorem 5.2, p. 79). Hence,
$$T_1 \le 1 \implies T \le 1$$
and
$$\bar{\rho}_0 \le \rho_0$$
hold. Consequently, the applicability of Traub's method is extended. The parameters $L_0$ and $L$ are special cases of $L_1$, so no additional effort is required. It is also worth mentioning that $L_1 = L_1(\Omega)$ and $L_0 = L_0(\Omega)$, but $L = L(\Omega, L_0)$. The proof in [14] (Theorem 5.2) utilized
$$\|F'(x_0)^{-1}(F'(w) - F'(x_0))\| \le L_1\|w - x_0\| < 1,$$
leading (by the Banach lemma on invertible linear operators [12]) to
$$\|F'(w)^{-1}F'(x_0)\| \le \frac{1}{1 - L_1\|w - x_0\|}.$$
However, we use
$$\|F'(x_0)^{-1}(F'(w) - F'(x_0))\| \le L_0\|w - x_0\| < 1,$$
leading to the tighter bound
$$\|F'(w)^{-1}F'(x_0)\| \le \frac{1}{1 - L_0\|w - x_0\|}.$$
This modification in the proof brings the aforementioned advantages. In the numerical section one can find cases when (10)–(12) are strict.
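The effect of the smaller constants can be checked numerically. The following sketch evaluates both criteria and both radii for hypothetical constants $\mu$, $L_1$, and a restricted constant $L \le L_1$ (the numbers are placeholders, not values from the paper):

```python
import math

# Hypothetical constants for illustration only, with L <= L1.
mu, L1, L = 0.3, 1.6, 1.2

T1 = 2 * L1 * mu                      # Potra-Ptak criterion T1 = 2*L1*mu <= 1
T = 2 * L * mu                        # weaker criterion T = 2*L*mu <= 1 on D

rho0 = (1 - math.sqrt(1 - T1)) / L1   # radius from [14], needs T1 <= 1
rho_bar = (1 - math.sqrt(1 - T)) / L  # tighter radius, needs only T <= 1

# Here T < T1 and rho_bar < rho0, illustrating the claimed improvements.
```

With these placeholder values, $T = 0.72 < T_1 = 0.96$ and $\bar{\rho}_0 \approx 0.392 < \rho_0 = 0.5$.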
(ii)
In [12], the authors used
$$\|F'(x_0)^{-1}F(x_0)\| \le \mu,$$
$$\|F'(x_0)^{-1}F''(x_0)\| \le \tau,$$
$$\|F'(x_0)^{-1}(F''(w) - F''(v))\| \le K_1\|w - v\| \quad \text{for all } w, v \in \Omega,$$
$$\mu \le \frac{(\tau^2 + 2K_1)^{3/2} - \tau(\tau^2 + 3K_1)}{3K_1^2}$$
and
$$U[x_0, \rho_1] \subset \Omega,$$
with $\rho_1$ denoting the minimal positive solution of the equation
$$\frac{K_1}{6}t^3 + \frac{\tau}{2}t^2 - t + \mu = 0.$$
In our case, we use
$$\|F'(x_0)^{-1}(F''(w) - F''(v))\| \le K\|w - v\| \quad \text{for all } w, v \in D_0,$$
where
$$D_0 = U\!\left(x_0, \tfrac{1}{L_0}\right) \cap \Omega$$
or
$$D_0 = U(x_0, \rho_2) \cap \Omega$$
if
$$\|F'(x_0)^{-1}(F''(w) - F''(x_0))\| \le K_0\|w - x_0\| \quad \text{for all } w \in \Omega$$
is used instead, where $\rho_2$ is the minimal positive solution of the equation
$$\frac{K}{6}t^3 + \frac{\tau}{2}t^2 - t + \mu = 0.$$
Then, the condition
$$\mu \le \frac{(\tau^2 + 2K)^{3/2} - \tau(\tau^2 + 3K)}{3K^2}$$
is the corresponding, weaker sufficient convergence criterion. Notice again that
$$K \le K_1,$$
$$\rho_2 \le \rho_1,$$
$K_1 = K_1(\Omega)$, $K_0 = K_0(\Omega)$, and $K = K(\Omega, K_0)$. The old estimate in [12] for the bounds on $\|F'(w)^{-1}F'(x_0)\|$ is
$$\|F'(w)^{-1}F'(x_0)\| \le \frac{1}{1 - \left(\tau\|w - x_0\| + \frac{K_1}{2}\|w - x_0\|^2\right)},$$
whereas we use the more precise
$$\|F'(w)^{-1}F'(x_0)\| \le \frac{1}{1 - \left(\tau\|w - x_0\| + \frac{K}{2}\|w - x_0\|^2\right)}.$$
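The radii $\rho_1$ and $\rho_2$ are the minimal positive roots of the two cubic scalar equations above; a sketch with placeholder constants (chosen only so that positive roots exist, and with $K \le K_1$):

```python
import numpy as np

def min_positive_root(kappa, tau, mu):
    """Minimal positive root of (kappa/6)t^3 + (tau/2)t^2 - t + mu = 0."""
    roots = np.roots([kappa / 6, tau / 2, -1.0, mu])
    real = roots[np.isclose(roots.imag, 0)].real
    return real[real > 0].min()

mu, tau, K1, K = 0.2, 0.5, 2.0, 1.5   # placeholders, with K <= K1
rho1 = min_positive_root(K1, tau, mu)
rho2 = min_positive_root(K, tau, mu)
# rho2 <= rho1: the smaller constant K yields a tighter limit point
```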
The rest of the paper is organized as follows: Section 2 contains the convergence analysis, whereas the numerical examples and the conclusions can be found in Sections 3 and 4, respectively.

2. Majorizing Sequences

We introduce some auxiliary results on scalar majorizing sequences.
Definition 1.
Let $\{p_j\}$ be a sequence in a Banach space. Then, a nondecreasing scalar sequence $\{q_j\}$ is majorizing for $\{p_j\}$ if
$$\|p_{j+1} - p_j\| \le q_{j+1} - q_j \quad \text{for each } j = 0, 1, 2, \ldots.$$
Thus, the convergence of the sequence $\{p_j\}$ reduces to studying that of $\{q_j\}$ [14]. Set $H = [0, \infty)$ and $H_0 = [0, b)$ for some $b > 0$.
Let $\mu \ge 0$ be a parameter and let $\varphi_0: H \to H$, $\varphi: H_0 \to H$ be continuous and nondecreasing functions. We shall use the scalar sequences $\{t_j\}$, $\{s_j\}$, and $\{u_j\}$ defined for each $j = 0, 1, 2, \ldots$ by $t_0 = 0$, $s_0 = \mu$,
$$u_j = s_j + \frac{\int_0^1 \bar{\varphi}((1 - \theta)(s_j - t_j))\,d\theta\,(s_j - t_j)}{1 - \varphi_0(t_j)},$$
$$t_{j+1} = u_j + \frac{\int_0^1 \bar{\varphi}(s_j - t_j + \theta(u_j - s_j))\,d\theta\,(u_j - s_j)}{1 - \varphi_0(t_j)}, \qquad (17)$$
$$s_{j+1} = t_{j+1} + \frac{\int_0^1 \bar{\varphi}(u_j - t_j + \theta(t_{j+1} - u_j))\,d\theta\,(t_{j+1} - u_j)}{1 - \varphi_0(t_{j+1})},$$
where
$$\bar{\varphi} = \begin{cases} \varphi_0, & j = 0,\\ \varphi, & j = 1, 2, \ldots. \end{cases}$$
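For the linear choice $\varphi_0(t) = L_0 t$, $\varphi(t) = L t$ used later in Remark 2(b), the integrals in the definitions above evaluate in closed form (for instance, $\int_0^1 \varphi((1-\theta)a)\,d\theta \cdot a = \frac{L a^2}{2}$), so the sequences can be tabulated directly; the constants below are illustrative only:

```python
# Sketch: the majorizing sequences t_j <= s_j <= u_j <= t_{j+1} in the
# linear case phi0(t) = L0*t and phi(t) = L*t (closed-form integrals).
L0, L, mu = 1.0, 1.2, 0.2

t, s = 0.0, mu
history = [t]
for j in range(12):
    bar = L0 if j == 0 else L                    # phi_bar from the definition
    u = s + bar * (s - t) ** 2 / (2 * (1 - L0 * t))
    t_next = u + bar * ((s - t) + (u - s) / 2) * (u - s) / (1 - L0 * t)
    s_next = t_next + bar * ((u - t) + (t_next - u) / 2) * (t_next - u) / (1 - L0 * t_next)
    t, s = t_next, s_next
    history.append(t)
# history is nondecreasing and approaches the limit s* from below
```

The denominators stay positive here because the iterates remain below $1/L_0$.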
Next, we present a convergence result for the sequence $\{t_j\}$ under very general conditions.
Lemma 1.
Suppose:
(a)
for all $j = 0, 1, 2, \ldots$,
$$\varphi_0(t_j) < 1 \qquad (18)$$
and
$$t_j \le a \quad \text{for some } a > 0. \qquad (19)$$
(b)
the function $\varphi_0: H \to H$ is continuous, increasing, and
$$t_j \le \varphi_0^{-1}(1) \qquad (20)$$
for each $j = 0, 1, 2, \ldots$. Then, the sequences $\{t_j\}$, $\{s_j\}$, $\{u_j\}$ converge monotonically to $s^{**}$, their (unique) least upper bound.
Proof. 
(a) By (17)–(19), $0 \le t_j \le s_j \le u_j \le t_{j+1} \le a$; hence, the sequences are nondecreasing and bounded above by $a$, so they converge to some $s^* \in [\mu, a]$.
(b) By (17) and (20), one again has
$$\lim_{j \to \infty} t_j = \lim_{j \to \infty} s_j = \lim_{j \to \infty} u_j = s^*. \qquad \square$$
Remark 1.
Conditions (18)–(20) can be replaced by stronger ones that are, however, easier to verify. This is why we also give such alternative (albeit stronger) criteria.
We introduce sequences, functions, and sequences of functions as follows:
$$a_j = \frac{\int_0^1 \bar{\varphi}((1 - \theta)(s_j - t_j))\,d\theta}{1 - \varphi_0(t_j)}, \qquad b_j = \frac{\int_0^1 \bar{\varphi}(u_j - t_j + \theta(u_j - s_j))\,d\theta}{1 - \varphi_0(t_j)}, \qquad c_j = \frac{\int_0^1 \bar{\varphi}(u_j - t_j + \theta(t_{j+1} - u_j))\,d\theta}{1 - \varphi_0(t_{j+1})},$$
$$\bar{h}_j^{(1)}(t) = \int_0^1 \varphi((1 - \theta)(s_j - t_j))\,d\theta + t\,\varphi_0(t_j) - t,$$
$$h_j^{(1)}(t) = \int_0^1 \varphi((1 - \theta)t^{2j}\mu)\,d\theta + t\,\varphi_0\!\left(\frac{1 - t^{2j+1}}{1 - t}\mu\right) - t,$$
$$h^{(1)}(t) = \int_0^1 \varphi((1 - \theta)\mu)\,d\theta + t\,\varphi_0\!\left(\frac{\mu}{1 - t}\right) - t,$$
$$\bar{h}_j^{(2)}(t) = \int_0^1 \varphi(s_j - t_j + \theta(u_j - s_j))\,d\theta + t\,\varphi_0(t_j) - t,$$
$$h_j^{(2)}(t) = \int_0^1 \varphi((1 + \theta t)t^{2j}\mu)\,d\theta + t\,\varphi_0\!\left(\frac{1 - t^{2j+1}}{1 - t}\mu\right) - t,$$
$$h^{(2)}(t) = \int_0^1 \varphi((1 + \theta t)\mu)\,d\theta + t\,\varphi_0\!\left(\frac{\mu}{1 - t}\right) - t,$$
$$\bar{h}_j^{(3)}(t) = \int_0^1 \varphi(u_j - t_j + \theta(t_{j+1} - u_j))\,d\theta + t\,\varphi_0(t_{j+1}) - t,$$
$$h_j^{(3)}(t) = \int_0^1 \varphi((1 + t + \theta t^2)t^{2j}\mu)\,d\theta + t\,\varphi_0\!\left(\frac{1 - t^{2j+3}}{1 - t}\mu\right) - t$$
and
$$h^{(3)}(t) = \int_0^1 \varphi((1 + t + \theta t^2)\mu)\,d\theta + t\,\varphi_0\!\left(\frac{\mu}{1 - t}\right) - t.$$
Next, we present a second convergence result for $\{t_j\}$.
Lemma 2.
Suppose:
There exists a parameter $\gamma \in [0, 1)$ such that
$$0 \le a_0 \le \gamma, \qquad 0 \le b_0 \le \gamma, \qquad 0 \le c_0 \le \gamma, \qquad (21)$$
$$0 \le \varphi_0(t_1) < 1, \qquad (22)$$
$$h^{(1)}(\gamma) \le 0, \qquad h^{(2)}(\gamma) \le 0 \qquad \text{and} \qquad h^{(3)}(\gamma) \le 0. \qquad (23)$$
Then, the sequences $\{t_j\}$, $\{s_j\}$, $\{u_j\}$ converge to some $s^* \in [\mu, s^{**}]$, where $s^{**} = \frac{\mu}{1 - \gamma}$. Moreover, the following estimates hold:
$$0 \le s_j - t_j \le \gamma^j(t_j - s_{j-1}) \le \gamma^{2j}\mu, \qquad (24)$$
$$0 \le u_j - s_j \le \gamma(s_j - t_j) \le \gamma^{2j+1}\mu, \qquad (25)$$
$$0 \le t_{j+1} - u_j \le \gamma(u_j - s_j) \le \gamma^{2j+2}\mu, \qquad (26)$$
$$t_{j+1} \le \frac{1 - \gamma^{2j+3}}{1 - \gamma}\mu \qquad (27)$$
and
$$t_j \le s_j \le u_j \le t_{j+1} \le s^{**}. \qquad (28)$$
Furthermore, $\varphi_0(t_j) < 1$ for each $j = 0, 1, 2, \ldots$.
Proof. 
Estimates (24)–(28) hold if
$$0 \le a_m \le \gamma, \qquad (29)$$
$$0 \le b_m \le \gamma, \qquad (30)$$
$$0 \le c_m \le \gamma, \qquad (31)$$
$$\varphi_0(t_{m+1}) < 1 \qquad (32)$$
and
$$t_m \le s_m \le u_m \le t_{m+1} \qquad (33)$$
hold for each $m = 0, 1, 2, \ldots$. These are true for $m = 0$ by (21)–(23). Notice that, by the definition (17) and these conditions,
$$t_{m+1} \le u_m + \gamma^{2m+2}\mu \le s_m + \gamma^{2m+1}\mu + \gamma^{2m+2}\mu \le t_m + \gamma^{2m}\mu + \gamma^{2m+1}\mu + \gamma^{2m+2}\mu \le \cdots \le \mu + \gamma\mu + \cdots + \gamma^{2m+2}\mu = \frac{1 - \gamma^{2m+3}}{1 - \gamma}\mu \le \frac{\mu}{1 - \gamma} = s^{**}.$$
Suppose that estimates (24)–(28) hold for all integers up to and including $m$. Then, replacing $t_0, s_0, u_0, t_1$ by $t_m, s_m, u_m, t_{m+1}$ and using the induction hypotheses, we see that (29)–(33) hold provided that
$$\bar{h}_j^{(i)}(\gamma) \le 0$$
or
$$h_j^{(i)}(\gamma) \le 0$$
or
$$h^{(i)}(\gamma) \le 0,$$
for $i = 1, 2, 3$ and $j = 0, 1, 2, \ldots, m$, which holds true by (23). The induction for (24)–(28) is thus completed. The remainder of the proof is as in Lemma 1. □
Remark 2.
(a) The conditions of Lemma 2 imply those of Lemma 1, but not necessarily vice versa.
(b) Consider the interesting case where the functions $\varphi_0$ and $\varphi$ are given by $\varphi_0(t) = L_0 t$ and $\varphi(t) = L t$ for $L_0 > 0$ and $L > 0$.
Then, consider the functions $f_j^{(i)}$ on $[0, 1)$ given by
$$f_j^{(1)}(t) = \frac{L}{2}t^{2j-1}\mu + L_0(1 + t + \cdots + t^{2j})\mu - 1,$$
$$f_j^{(2)}(t) = L\left(1 + \frac{t}{2}\right)t^{2j-1}\mu + L_0(1 + t + \cdots + t^{2j})\mu - 1,$$
$$f_j^{(3)}(t) = L\left(1 + t + \frac{t^2}{2}\right)t^{2j-1}\mu + L_0(1 + t + \cdots + t^{2j+2})\mu - 1,$$
$$g_1(t) = L_0 t^3 + \left(L_0 + \frac{L}{2}\right)t^2 - \frac{L}{2},$$
$$g_2(t) = \left(L_0 + \frac{L}{2}\right)t^3 + (L_0 + L)t^2 - \frac{L}{2}t - L$$
and
$$g_3(t) = L_0 t^5 + \left(L_0 + \frac{L}{2}\right)t^4 + L t^3 + \frac{L}{2}t^2 - L t - L.$$
By these definitions, we have
$$g_1(0) = -\frac{L}{2} < 0, \quad g_1(1) = 2L_0 > 0, \quad g_2(0) = -L, \quad g_2(1) = 2L_0, \quad g_3(0) = -L \quad \text{and} \quad g_3(1) = 2L_0.$$
It follows from the intermediate value theorem (IVT) that the functions $g_i$ have zeros in $(0, 1)$. Denote the minimal such zeros by $\gamma_i$, respectively. Define the parameters
$$\lambda_0 = \max\{a_0, b_0, c_0\}, \qquad \lambda_1 = \min\{\gamma_1, \gamma_2, \gamma_3\}$$
and
$$\lambda_2 = \max\{\gamma_1, \gamma_2, \gamma_3\}.$$
Then, we can show a third result on the convergence of the sequence $\{t_j\}$.
Lemma 3.
Suppose:
There exists $\gamma \in [0, 1)$ satisfying
$$\lambda_0 \le \lambda_1 \quad \text{and} \quad \lambda_2 \le \gamma < 1 - L_0\mu. \qquad (35)$$
Then, the conclusions of Lemma 2 hold for the sequence $\{t_j\}$.
Proof. 
By Lemma 2, it suffices to show that
$$h_m^{(i)}(\gamma) \le 0. \qquad (36)$$
By the preceding definitions, we can show instead that
$$f_m^{(i)}(\gamma) \le 0. \qquad (37)$$
To this end, we relate $f_{m+1}^{(i)}(t)$ to $f_m^{(i)}(t)$. We can write
$$f_{m+1}^{(1)}(t) = \frac{L}{2}t^{2m+1}\mu + L_0(1 + t + \cdots + t^{2m+2})\mu - 1 - \frac{L}{2}t^{2m-1}\mu - L_0(1 + t + \cdots + t^{2m})\mu + 1 + f_m^{(1)}(t) = f_m^{(1)}(t) + \left(\frac{L}{2}t^2 + L_0(t^2 + t^3) - \frac{L}{2}\right)t^{2m-1}\mu,$$
so
$$f_{m+1}^{(1)}(t) = f_m^{(1)}(t) + g_1(t)\,t^{2m-1}\mu. \qquad (38)$$
In particular, by (38) and the definition of $\gamma$ (so that $g_1(\gamma) \ge 0$),
$$f_{m+1}^{(1)}(\gamma) \ge f_m^{(1)}(\gamma).$$
Define the function $f^{(1)}$ by
$$f^{(1)}(t) = \lim_{m \to \infty} f_m^{(1)}(t). \qquad (40)$$
Then, we have by (40)
$$f^{(1)}(t) = \frac{L_0\mu}{1 - t} - 1,$$
and $f_m^{(1)}(\gamma) \le f^{(1)}(\gamma)$. So, instead of (37) (for $i = 1$), we can show that
$$f^{(1)}(\gamma) \le 0,$$
which is true by (35). Similarly, we get
$$f_{m+1}^{(2)}(t) = L\left(1 + \frac{t}{2}\right)t^{2m+1}\mu + L_0(1 + t + \cdots + t^{2m+2})\mu - 1 - L\left(1 + \frac{t}{2}\right)t^{2m-1}\mu - L_0(1 + t + \cdots + t^{2m})\mu + 1 + f_m^{(2)}(t) = f_m^{(2)}(t) + \left[L\left(1 + \frac{t}{2}\right)t^2 - L\left(1 + \frac{t}{2}\right) + L_0(t^2 + t^3)\right]t^{2m-1}\mu,$$
so
$$f_{m+1}^{(2)}(t) = f_m^{(2)}(t) + g_2(t)\,t^{2m-1}\mu.$$
In particular, we have
$$f_{m+1}^{(2)}(\gamma) \ge f_m^{(2)}(\gamma)$$
and, again,
$$f^{(2)}(t) = \lim_{m \to \infty} f_m^{(2)}(t) = \frac{L_0\mu}{1 - t} - 1.$$
Therefore, (37) (for $i = 2$) reduces to showing that
$$f^{(2)}(\gamma) \le 0,$$
which is true by (35). Moreover, we have analogously
$$f_{m+1}^{(3)}(t) = L\left(1 + t + \frac{t^2}{2}\right)t^{2m+1}\mu + L_0(1 + t + \cdots + t^{2m+4})\mu - 1 - L\left(1 + t + \frac{t^2}{2}\right)t^{2m-1}\mu - L_0(1 + t + \cdots + t^{2m+2})\mu + 1 + f_m^{(3)}(t) = f_m^{(3)}(t) + \left[L\left(1 + t + \frac{t^2}{2}\right)t^2 + L_0(t^4 + t^5) - L\left(1 + t + \frac{t^2}{2}\right)\right]t^{2m-1}\mu = f_m^{(3)}(t) + g_3(t)\,t^{2m-1}\mu,$$
so
$$f_{m+1}^{(3)}(\gamma) \ge f_m^{(3)}(\gamma).$$
Hence, instead of (37) (for $i = 3$), we can show that
$$f^{(3)}(\gamma) \le 0,$$
which is true by (35), where
$$f^{(3)}(t) = \lim_{m \to \infty} f_m^{(3)}(t) = \frac{L_0\mu}{1 - t} - 1.$$
Therefore, the sequence $\{t_j\}$ is nondecreasing and bounded above by $s^{**} = \frac{\mu}{1 - \gamma}$, so it converges to some $s^*$. □
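In the linear case of Remark 2(b), the zeros $\gamma_i$ and the parameters $\lambda_1$, $\lambda_2$ can be located by bisection, since $g_i(0) < 0 < g_i(1)$; a sketch with illustrative constants:

```python
# Sketch: minimal zeros gamma_i in (0, 1) of g1, g2, g3 via bisection.
L0, L = 1.0, 1.2                       # illustrative constants only

g = [
    lambda t: L0*t**3 + (L0 + L/2)*t**2 - L/2,
    lambda t: (L0 + L/2)*t**3 + (L0 + L)*t**2 - (L/2)*t - L,
    lambda t: L0*t**5 + (L0 + L/2)*t**4 + L*t**3 + (L/2)*t**2 - L*t - L,
]

def zero_in_unit_interval(f, iters=80):
    lo, hi = 0.0, 1.0                  # f(0) < 0 < f(1) by the sign table
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

gammas = [zero_in_unit_interval(gi) for gi in g]
lam1, lam2 = min(gammas), max(gammas)  # lambda_1 and lambda_2 of Lemma 3
```

Since each $g_i$ starts negative at $0$ and is positive at $1$, the bisection converges to the unique zero in $(0, 1)$.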
Next, we connect Lemmas 1–3 to method (3). We first consider the conditions (A):
Suppose
(A1)
There exist $x_0 \in \Omega$ and $\mu \ge 0$ such that $F'(x_0)^{-1} \in \mathcal{L}(E_2, E_1)$ and
$$\|F'(x_0)^{-1}F(x_0)\| \le \mu.$$
(A2)
For all $w \in \Omega$,
$$\|F'(x_0)^{-1}(F'(w) - F'(x_0))\| \le \varphi_0(\|w - x_0\|).$$
(A3)
The equation $\varphi_0(t) - 1 = 0$ has a smallest positive solution $\rho$. Set
$$\Omega_0 = U(x_0, \rho) \cap \Omega.$$
(A4)
For each $w, v \in \Omega_0$,
$$\|F'(x_0)^{-1}(F'(w) - F'(v))\| \le \varphi(\|w - v\|).$$
(A5)
The hypotheses of Lemma 1, Lemma 2, or Lemma 3 hold, and
(A6)
$$U[x_0, s^*] \subset \Omega \quad (\text{or } U[x_0, s^{**}] \subset \Omega).$$
Next, we prove the first semi-local convergence theorem for the sequence $\{x_j\}$.
Theorem 1.
Suppose that the hypotheses (A) hold. Then, the sequence $\{x_j\}$ produced by method (3) is well defined in $U[x_0, s^*]$, remains in $U[x_0, s^*]$ for each $j = 0, 1, 2, \ldots$, and converges to a solution $x^* \in U[x_0, s^*]$ (or $x^* \in U[x_0, s^{**}]$) of the equation $F(x) = 0$.
Proof. 
Using condition (A1) and the first substep of method (3) for j = 0 , we see that y 0 is well defined and
$$\|y_0 - x_0\| = \|F'(x_0)^{-1}F(x_0)\| \le \mu = s_0 - t_0 = s_0 \le s^*,$$
so $y_0 \in U(x_0, s^*)$. The iterate $z_0$ exists by (A1) and (3) for $j = 0$. So, by (3) and (A3), one has
$$\|z_0 - y_0\| = \|F'(x_0)^{-1}F(y_0)\| = \|F'(x_0)^{-1}(F(y_0) - F(x_0) - F'(x_0)(y_0 - x_0))\| = \left\| \int_0^1 F'(x_0)^{-1}\big(F'(x_0 + \theta(y_0 - x_0)) - F'(x_0)\big)\,d\theta\,(y_0 - x_0) \right\| \le \int_0^1 \bar{\varphi}((1 - \theta)\|y_0 - x_0\|)\,d\theta\,\|y_0 - x_0\| \le \int_0^1 \bar{\varphi}((1 - \theta)(s_0 - t_0))\,d\theta\,(s_0 - t_0) \le u_0 - s_0.$$
We also have $\|z_0 - x_0\| \le \|z_0 - y_0\| + \|y_0 - x_0\| \le u_0 - s_0 + s_0 - t_0 = u_0 - t_0 < s^*$, so $z_0 \in U(x_0, s^*)$. By condition (A1) and (3) for $j = 0$, the iterate $x_1$ exists, and we can write
$$x_1 - y_0 = z_0 - x_0 - F'(x_0)^{-1}(F(z_0) - F(x_0)) = F'(x_0)^{-1}\int_0^1 \big[F'(x_0) - F'(x_0 + \theta(z_0 - x_0))\big]\,d\theta\,(z_0 - x_0).$$
Condition (A2) and (17) then give
$$\|x_1 - y_0\| \le \int_0^1 \varphi_0((1 - \theta)\|z_0 - x_0\|)\,d\theta\,\|z_0 - x_0\| \le \frac{\int_0^1 \varphi_0((1 - \theta)\|z_0 - x_0\|)\,d\theta\,\|z_0 - x_0\|}{1 - \varphi_0(\|x_0 - x_0\|)} \le \frac{\int_0^1 \varphi_0((1 - \theta)(u_0 - t_0))\,d\theta\,(u_0 - t_0)}{1 - \varphi_0(t_0)} = t_1 - s_0.$$
We also have
$$\|x_1 - x_0\| \le \|x_1 - y_0\| + \|y_0 - x_0\| \le t_1 - s_0 + s_0 - t_0 = t_1 \le s^*,$$
so $x_1 \in U(x_0, s^*)$. Let $w \in U(x_0, s^*)$. Using (A2), one obtains
$$\|F'(x_0)^{-1}(F'(w) - F'(x_0))\| \le \varphi_0(\|w - x_0\|) \le \varphi_0(s^*) < 1,$$
so the Banach lemma on invertible linear operators [5] assures the existence of $F'(w)^{-1}$ and
$$\|F'(w)^{-1}F'(x_0)\| \le \frac{1}{1 - \varphi_0(\|w - x_0\|)}.$$
In particular, for $w = x_1$, $F'(x_1)^{-1}$ exists, and so does the iterate $y_1$. Then, we can write, by the first substep of method (3) for $j = 1$,
$$\|y_1 - x_1\| = \|F'(x_1)^{-1}F(x_1)\| \le \|F'(x_1)^{-1}F'(x_0)\|\,\|F'(x_0)^{-1}F(x_1)\| \le \frac{\int_0^1 \|F'(x_0)^{-1}(F'(z_0 + \theta(x_1 - z_0)) - F'(x_0))\|\,d\theta\,\|x_1 - z_0\|}{1 - \varphi_0(\|x_1 - x_0\|)} \le \frac{\int_0^1 \varphi_0(\|z_0 - x_0\| + \theta\|x_1 - z_0\|)\,d\theta\,\|x_1 - z_0\|}{1 - \varphi_0(\|x_1 - x_0\|)} \le \frac{\int_0^1 \bar{\varphi}(u_0 - t_0 + \theta(t_1 - u_0))\,d\theta\,(t_1 - u_0)}{1 - \varphi_0(t_1)} = s_1 - t_1,$$
where we also used, by the definition of the method (so that $F(z_0) = -F'(x_0)(x_1 - z_0)$),
$$\|F'(x_0)^{-1}F(x_1)\| = \|F'(x_0)^{-1}(F(x_1) - F(z_0) + F(z_0))\| = \left\| \int_0^1 F'(x_0)^{-1}\big(F'(z_0 + \theta(x_1 - z_0)) - F'(x_0)\big)\,d\theta\,(x_1 - z_0) \right\| \le \int_0^1 \varphi_0(\|z_0 - x_0\| + \theta\|x_1 - z_0\|)\,d\theta\,\|x_1 - z_0\|$$
and
$$\|x_1 - x_0\| \le \|x_1 - y_0\| + \|y_0 - x_0\| \le t_1 - s_0 + s_0 - t_0 = t_1 - t_0.$$
Hence, we have shown so far that
$$\|y_m - x_m\| \le s_m - t_m, \quad m = 0, 1, \qquad (46)$$
$$\|z_m - y_m\| \le u_m - s_m, \quad m = 0, \qquad (47)$$
$$\|x_{m+1} - z_m\| \le t_{m+1} - u_m, \quad m = 0, \qquad (48)$$
and
$$x_m, y_m, z_m, x_{m+1} \in U(x_0, s^*) \qquad (49)$$
for $m = 0$. Suppose that these estimates hold for all $m \le j - 1$. Then, simply replace $x_0, y_0, z_0, x_1$ by $x_m, y_m, z_m, x_{m+1}$ and use the induction hypotheses to complete the induction for items (46)–(49). Consequently, $\{x_j\}$ is a fundamental (Cauchy) sequence in the Banach space $E_1$; hence, $\lim_{j \to \infty} x_j = x^* \in U[x_0, s^*]$. Then, by letting $j \to \infty$ in the estimate (see also (45))
$$\|F'(x_0)^{-1}F(x_{j+1})\| \le \int_0^1 \varphi(u_j - t_j + \theta(t_{j+1} - u_j))\,d\theta\,(t_{j+1} - u_j)$$
and using the continuity of $F$, we conclude that $F(x^*) = 0$. □
A uniqueness result for the solution follows.
Proposition 1.
Suppose:
(i)
There exists a simple solution $x^* \in \Omega$ of the equation $F(x) = 0$.
(ii)
There exists $\alpha \ge s^*$ such that
$$\int_0^1 \varphi_0((1 - \theta)\alpha + \theta s^*)\,d\theta < 1.$$
Set $\Omega_1 = U[x_0, \alpha] \cap \Omega$. Then, $x^*$ is the only solution of the equation $F(x) = 0$ in $\Omega_1$.
Proof. 
Let $w^* \in \Omega_1$ with $F(w^*) = 0$. Set $M = \int_0^1 F'(w^* + \theta(x^* - w^*))\,d\theta$. Then, in view of (A2), we obtain in turn that
$$\|F'(x_0)^{-1}(M - F'(x_0))\| \le \int_0^1 \varphi_0((1 - \theta)\|w^* - x_0\| + \theta\|x^* - x_0\|)\,d\theta \le \int_0^1 \varphi_0((1 - \theta)\alpha + \theta s^*)\,d\theta < 1,$$
so $w^* = x^*$ follows from the invertibility of $M$ and the identity $M(x^* - w^*) = F(x^*) - F(w^*) = 0 - 0 = 0$. □

3. Examples

We present examples to further justify the theoretical results.
Example 1.
Consider
$$\psi(t) = b_0 t + b_1 + b_2 \sin b_3 t, \qquad t_0 = 0,$$
where $b_j$, $j = 0, 1, 2, 3$, are parameters. Then, clearly, for $b_3$ large and $b_2$ small, the ratio $\frac{L_0}{L_1}$ can be made arbitrarily small. Notice that $\frac{T}{T_1} \to 0$ as $\frac{L_0}{L_1} \to 0$.
Example 2.
Let $E_1 = E_2 = \mathbb{R}$, $x_0 = 1$, and $\Omega = U[1, 1 - p]$ for $p \in (0, \tfrac{2}{3})$, and define the polynomial $\psi$ on $\Omega$ by
$$\psi(t) = t^3 - p.$$
If we consider case (i) for Newton's method, we obtain $L_0 = 3 - p$, $L_1 = 2(2 - p)$, and $\mu = \frac{1}{3}(1 - p)$. But then $T_1 > 1$ for all $p \in (0, \tfrac{1}{2})$, so Theorem 5.2 in [14] cannot assure convergence. However, we have $T \le 1$ for all $p \in I = [0.4271907643, \tfrac{1}{2})$. Hence, our result guarantees convergence to $x^* = \sqrt[3]{p}$ as long as $p \in I$.
Example 3.
Let $E_1 = E_2 = H([0, 1])$, the space of continuous functions on $[0, 1]$, equipped with the max-norm. Choose $\Omega = U(0, d)$, $d > 1$. Define $G$ on $\Omega$ by
$$G(x)(s) = x(s) - w(s) - \delta \int_0^1 N(s, t)\,x^3(t)\,dt, \qquad (50)$$
where $x \in E_1$, $s \in [0, 1]$, $w \in E_1$ is given, $\delta$ is a parameter, and $N$ is the Green's kernel given by
$$N(b_2, b_1) = \begin{cases} (1 - b_2)b_1, & b_1 \le b_2,\\ b_2(1 - b_1), & b_2 < b_1. \end{cases}$$
By (50), we have
$$(G'(x)(z))(s) = z(s) - 3\delta \int_0^1 N(s, t)\,x^2(t)\,z(t)\,dt, \qquad z \in E_1,\ s \in [0, 1].$$
Consider $x_0(s) = w(s) = 1$ and $|\delta| < \tfrac{8}{3}$. We get
$$\|I - G'(x_0)\| < \tfrac{3}{8}|\delta|, \qquad G'(x_0)^{-1} \in \mathcal{L}(E_2, E_1),$$
$$\|G'(x_0)^{-1}\| \le \frac{8}{8 - 3|\delta|}, \qquad \mu = \frac{|\delta|}{8 - 3|\delta|}, \qquad L_0 = \frac{12|\delta|}{8 - 3|\delta|}$$
and
$$L_1 = L = \frac{6\mu|\delta|}{8 - 3|\delta|}.$$
Table 1 lists the values of the convergence criteria for various choices of the parameter involved.
Example 4.
Let $E_1$, $E_2$, and $\Omega$ be as in Example 3. It is well known that the boundary value problem [16]
$$\psi(0) = 0, \qquad \psi(1) = 1,$$
$$\psi'' = -\psi^3 - \ell\psi^2$$
can be presented as a Hammerstein-like nonlinear integral equation [12]
$$\psi(s) = s + \int_0^1 K(s, t)\big(\psi^3(t) + \ell\psi^2(t)\big)\,dt,$$
where $\ell$ is a parameter. Consider $F: \Omega \to E_2$ given by
$$[F(x)](s) = x(s) - s - \int_0^1 K(s, t)\big(x^3(t) + \ell x^2(t)\big)\,dt.$$
Choose $\psi_0(s) = s$ and $\Omega = U(\psi_0, \rho_0)$. Then, clearly, $U(\psi_0, \rho_0) \subset U(0, \rho_0 + 1)$, since $\|\psi_0\| = 1$. Suppose $2\ell < 5$. Then, conditions (A) are satisfied for
$$L_0 = \frac{2\ell + 3\rho_0 + 6}{8}, \qquad L_1 = L = \frac{\ell + 6\rho_0 + 3}{4}$$
and $\mu = \frac{1 + \ell}{5 - 2\ell}$. Notice that $L_0 < L_1$.
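The inequality $L_0 < L_1$ can be confirmed directly, since $L_1 - L_0 = \frac{9\rho_0}{8} > 0$; a numerical sketch with illustrative values of $\ell$ and $\rho_0$ satisfying $2\ell < 5$:

```python
# Check L0 < L1 for the constants of Example 4; l and rho0 are illustrative.
l, rho0 = 1.0, 2.0                    # requires 2*l < 5 and rho0 > 0

L0 = (2*l + 3*rho0 + 6) / 8
L1 = (l + 6*rho0 + 3) / 4             # L = L1 here
mu = (1 + l) / (5 - 2*l)

# L1 - L0 = 9*rho0/8 > 0, so L0 < L1 as claimed in the text
assert abs((L1 - L0) - 9 * rho0 / 8) < 1e-12 and L0 < L1
```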

4. Conclusions

Two different techniques and a new domain $D$, included in the original one, were introduced. This change in the analysis yields finer convergence results with no additional conditions.

Author Contributions

Conceptualization, S.R., C.I.A., I.K.A. and S.G.; methodology, S.R., C.I.A., I.K.A. and S.G.; software, S.R., C.I.A., I.K.A. and S.G.; validation, S.R., C.I.A., I.K.A. and S.G.; formal analysis, S.R., C.I.A., I.K.A. and S.G.; investigation, S.R., C.I.A., I.K.A. and S.G.; resources, S.R., C.I.A., I.K.A. and S.G.; data curation, S.R., C.I.A., I.K.A. and S.G.; writing—original draft preparation, S.R., C.I.A., I.K.A. and S.G.; writing—review and editing, S.R., C.I.A., I.K.A. and S.G.; visualization, S.R., C.I.A., I.K.A. and S.G.; supervision, S.R., C.I.A., I.K.A. and S.G. project administration, S.R., C.I.A., I.K.A. and S.G.; funding acquisition, S.R., C.I.A., I.K.A. and S.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Argyros, I.K. On the Newton–Kantorovich hypothesis for solving equations. J. Comput. Appl. Math. 2004, 169, 315–332.
2. Argyros, I.K. Convergence and Applications of Newton-Type Iterations; Springer: Berlin/Heidelberg, Germany, 2008.
3. Argyros, I.K.; Hilout, S. Weaker conditions for the convergence of Newton's method. J. Complex. 2012, 28, 364–387.
4. Argyros, I.K.; Magréñan, A.A. A Contemporary Study of Iterative Methods; Elsevier (Academic Press): New York, NY, USA, 2018.
5. Kantorovich, L.V.; Akilov, G.P. Functional Analysis; Pergamon Press: Oxford, UK, 1982.
6. Nashed, M.Z.; Chen, X. Convergence of Newton-like methods for singular operator equations using outer inverses. Numer. Math. 1993, 66, 235–257.
7. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970.
8. Zabrejko, P.P.; Nguen, D.F. The majorant method in the theory of Newton–Kantorovich approximations and the Pták error estimates. Numer. Funct. Anal. Optim. 1987, 9, 671–684.
9. Argyros, I.K. Computational Theory of Iterative Methods; Chui, C.K., Wuytack, L., Eds.; Studies in Computational Mathematics 15; Elsevier: New York, NY, USA, 2007.
10. Argyros, I.K.; Hilout, S. On an improved convergence analysis of Newton's method. Appl. Math. Comput. 2013, 225, 372–386.
11. Behl, R.; Maroju, P.; Martinez, E.; Singh, S. A study of the local convergence of a fifth order iterative method. Indian J. Pure Appl. Math. 2020, 51, 439–455.
12. Ezquerro, J.A.; Hernandez, M.A. Newton's Method: An Updated Approach of Kantorovich's Theory; Birkhäuser: Cham, Switzerland, 2018.
13. Magréñan, A.A.; Gutiérrez, J.M. Real dynamics for damped Newton's method applied to cubic polynomials. J. Comput. Appl. Math. 2015, 275, 527–538.
14. Potra, F.A.; Pták, V. Nondiscrete Induction and Iterative Processes; Research Notes in Mathematics 103; Pitman Advanced Publishing Program: Boston, MA, USA, 1984.
15. Proinov, P.D. New general convergence theory for iterative processes and its applications to Newton–Kantorovich type theorems. J. Complex. 2010, 26, 3–42.
16. Shakhno, S.M.; Iakymchuk, R.P.; Yarmola, H.P. Convergence analysis of a two step method for the nonlinear squares problem with decomposition of operator. J. Numer. Appl. Math. 2018, 128, 82–95.
17. Sharma, J.R.; Guha, R.K.; Sharma, R. An efficient fourth order weighted-Newton method for systems of nonlinear equations. Numer. Algorithms 2013, 62, 307–323.
18. Verma, R. New Trends in Fractional Programming; Nova Science: New York, NY, USA, 2019.
19. Rheinboldt, W.C. An Adaptive Continuation Process of Solving Systems of Nonlinear Equations; Banach Center Publications 3; Polish Academy of Sciences: Warsaw, Poland, 1978; pp. 129–142.
20. Soleymani, F.; Lotfi, T.; Bakhtiari, P. A multi-step class of iterative methods for nonlinear systems. Optim. Lett. 2014, 8, 1001–1015.
21. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice Hall: Upper Saddle River, NJ, USA, 1964.
22. Traub, J.F.; Werschulz, A.G. Complexity and Information; Lincei Lectures; Cambridge University Press: Cambridge, UK, 1998; ISBN 0-521-48506-1.
Table 1. Comparison table of criteria.

| μ | δ* | T₁ | T |
|---|----|----|---|
| 2.09899 | 0.9976613778 | 1.007515200 | 0.9639223786 |
| 2.19897 | 0.9831766058 | 1.055505600 | 0.9678118280 |
| 2.29597 | 0.9698185659 | 1.102065600 | 0.9715205068 |
| 3.095467 | 0.8796311321 | 1.485824160 | 1.000082409 |
Regmi, S.; Argyros, C.I.; Argyros, I.K.; George, S. On the Semi-Local Convergence of a Traub-Type Method for Solving Equations. Foundations 2022, 2, 114-127. https://0-doi-org.brum.beds.ac.uk/10.3390/foundations2010006