Article

Unified Semi-Local Convergence for k-Step Iterative Methods with Flexible and Frozen Linear Operator

by Ioannis K. Argyros 1 and Santhosh George 2,*
1 Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
2 Department of Mathematical and Computational Sciences, National Institute of Technology Karnataka, Mangalore 575 025, India
* Author to whom correspondence should be addressed.
Submission received: 15 October 2018 / Revised: 24 October 2018 / Accepted: 25 October 2018 / Published: 30 October 2018
(This article belongs to the Special Issue Computational Methods in Analysis and Applications)

Abstract: The aim of this article is to present a unified semi-local convergence analysis for a k-step iterative method containing the inverse of a flexible and frozen linear operator for Banach space valued operators. Special choices of the linear operator reduce the method to Newton-type, Newton's, Stirling's, Steffensen's, or other methods. The analysis is based on center as well as Lipschitz conditions and on our idea of the restricted convergence region. This idea yields a region containing the iterates that is at least as small as before, and consequently a tighter convergence analysis.
MSC Subject Classification: 65G99; 65H10; 47H17; 49M15

1. Introduction

Let $X, Y$ be Banach spaces and $D \subseteq X$ be a nonempty open set. By $L(X, Y)$, we denote the space of bounded linear operators from $X$ into $Y$. Let also $U(w, d)$ stand for the open ball centered at $w \in X$ and of radius $d > 0$, and let $\bar{U}(w, d)$ stand for its closure.
There is a plethora of problems from diverse disciplines, such as mathematics [1,2,3,4,5,6,7,8,9,10,11,12,13], optimization [3,4,5,6,7,8], mathematical programming [7,8], chemistry [7], biology [1,2,12], physics [9,13], economics [8], statistics [13], and engineering [1,2,9,10,11,12,13], that can be reduced to finding a solution $x^*$ of the equation:
$$F(x) = 0, \tag{1}$$
where $F: D \to Y$ is a continuous operator. Ideally, the solution $x^*$ of Equation (1) would be unique in a neighborhood about it and available in closed form. However, the latter can be achieved only in special cases. This is what leads researchers to the construction of iterative methods that generate a sequence converging to $x^*$.
The most widely-used iterative method is Newton's, defined for each $n = 0, 1, 2, \ldots$ by:
$$x_0 \in D, \quad x_{n+1} = x_n - F'(x_n)^{-1}F(x_n). \tag{2}$$
Newton's method is a special case of the one-point iterative methods without memory, defined for each $n = 0, 1, 2, 3, \ldots$ by:
$$x_0 \in D, \quad x_{n+1} = R(x_n), \tag{3}$$
where $R: X \to X$ has some properties. The order of convergence $p \in \mathbb{N}$ depends explicitly on the first $p - 1$ derivatives of the functions appearing in the method. Moreover, the computational cost in general increases with the convergence order, since successive derivatives must be computed [1,2,3,4,5,6,7,8,9,10,11,12,13].
That is why researchers and practitioners have developed iterative methods that, on the one hand, avoid the computation of derivatives and, on the other hand, achieve a high order of convergence. In particular, we unify the study of such methods by considering k-step iterative methods with a frozen linear operator, defined for each $n = 0, 1, 2, \ldots$ by:
$$x_0 \in D, \quad \begin{aligned} x_n^{(1)} &= x_n^{(0)} - A_n^{-1}F(x_n^{(0)}), \\ x_n^{(2)} &= x_n^{(1)} - A_n^{-1}F(x_n^{(1)}), \\ &\;\;\vdots \\ x_n^{(k-1)} &= x_n^{(k-2)} - A_n^{-1}F(x_n^{(k-2)}), \\ x_n^{(k)} &= x_n^{(k-1)} - A_n^{-1}F(x_n^{(k-1)}), \end{aligned} \tag{4}$$
where $A_n = A(x_n)$, $A: D \to L(X, Y)$, $x_n = x_n^{(0)}$ and $x_{n+1} = x_n^{(k)}$ for each $n = 0, 1, 2, \ldots$. Special choices of the operator $A$ lead to well-known methods. If $k = 1$ and $A(x) = F'(x)$ for each $x \in D$, we obtain Newton's method (2), whereas, if $k = 1, 2, \ldots$ and $A(x) = F'(x)$ for each $x \in D$, we obtain a method whose semi-local convergence was given in [12]. If $A(x) = [g_1(x), g_2(x); F]$ for each $x \in D$, with $k = 1$ or $k = 1, 2, \ldots$, where $g_1: X \to X$ and $g_2: X \to X$, we obtain Steffensen-type methods. Stirling's and other one-point methods are also special cases of Method (4). Based on the above, it is important to study the semi-local convergence of Method (4). It is well known that, as the convergence order increases, the convergence region in general decreases. To avoid this problem as well, we introduce a center-Lipschitz-type condition that helps us determine a region containing the iterates $\{x_n\}$ that is at least as small as before. This way, the resulting Lipschitz constants are at least as small, and a tighter convergence analysis is obtained.
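To make the scheme concrete before the analysis, the following minimal sketch implements Method (4) for finite-dimensional systems. It is an illustration under our own naming (k_step_frozen, F, A) and assumes NumPy/SciPy; it is not a construction from the paper. The benefit of the frozen operator is visible in the code: $A_n = A(x_n)$ is factored once per outer step, and the factorization is reused for all $k$ inner substeps. Passing the Jacobian of $F$ as $A$ recovers the frozen Newton-type method, and $k = 1$ with that choice is Newton's method (2).

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def k_step_frozen(F, A, x0, k=3, max_outer=50, tol=1e-12):
    """One possible realization of Method (4): k substeps per frozen A_n."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_outer):
        lu_piv = lu_factor(A(x))            # freeze and factor A_n = A(x_n) once
        for _ in range(k):
            x = x - lu_solve(lu_piv, F(x))  # x^(i+1) = x^(i) - A_n^{-1} F(x^(i))
        if np.linalg.norm(F(x)) < tol:      # stop once the residual is small
            break
    return x

# Illustrative use: F(x, y) = (x^2 + y^2 - 4, xy - 1), A = Jacobian of F.
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
A = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [v[1], v[0]]])
root = k_step_frozen(F, A, x0=[2.0, 0.5], k=3)
```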
The rest of the article is organized as follows: Section 2 contains the conditions to be used in the semi-local convergence that follows in Section 3. Final remarks are given in the concluding Section 4.

2. Convergence Conditions

We shall assume that $U(x_0, r\eta) \subseteq D$ for some $r > 1$ and $\eta > 0$. The semi-local convergence analysis of Method (4) is based on Condition (A) (see also the concluding Section 4):
(a1) $F: D \to Y$ is a differentiable operator in the sense of Fréchet, $A(x) \in L(X, Y)$, and there exist $x_0 \in D$, $\beta > 0$, $\eta > 0$ such that $A(x_0)^{-1} \in L(Y, X)$,
$$\|A(x_0)^{-1}\| \le \beta \quad \text{and} \quad \|A(x_0)^{-1}F(x_0)\| \le \eta.$$
(a2) There exist $L > 0$ and $\ell \in \left(0, \frac{1}{\beta}\right)$ such that for each $x \in D$:
$$\|A(x) - A(x_0)\| \le L\|x - x_0\| + \ell.$$
Set $D_0 = D \cap U\!\left(x_0, \frac{1 - \beta\ell}{\beta L}\right)$.
(a3) There exist $K > 0$, $M > 0$, $\mu > 0$ such that for each $x, y \in D_0$:
$$\|F'(y) - F'(x)\| \le K\|x - y\|,$$
$$\|A(x) - F'(x)\| \le M\|x - x_0\| + \mu.$$
(a4) There exist $\bar{\beta} > 0$ and $K_0 > 0$ such that $F'(x_0)^{-1} \in L(Y, X)$, $\|F'(x_0)^{-1}\| \le \bar{\beta}$ and for each $x \in D_0$:
$$\|F'(x) - F'(x_0)\| \le K_0\|x - x_0\|.$$
(a5)
There exists r r such that r < 2 β ¯ K 0 η = r 0 . Set D 1 = D U ¯ ( x 0 , r η ) .
From now on, we assume Condition (A).

3. Semi-Local Convergence

We need some auxiliary results to show the semi-local convergence of Method (4).
Lemma 1.
Suppose that there exists $r > 1$ such that $x_n^{(i)} \in U(x_0, r\eta) \subseteq D$ for each $i = 1, 2, \ldots, k$, $k \ge 1$, $n \in \mathbb{N}$, and that:
$$\beta\ell < 1, \quad r < \frac{1 - \beta\ell}{\beta L\eta}. \tag{5}$$
Then, Method (4) is well defined.
Proof. 
We have that $F(x_n^{(i)})$ is well defined for each $i = 1, 2, \ldots, k$, $k \ge 1$. Using (5), (a1) and (a2), we have in turn that:
$$\|A_0^{-1}(A_n - A_0)\| \le \|A_0^{-1}\|\,\|A_n - A_0\| \le \beta(L\|x_n - x_0\| + \ell) \le \beta(Lr\eta + \ell) < 1. \tag{6}$$
By (6) and the Banach lemma on invertible operators [3,4,5,6,7,11], we deduce that $A_n^{-1} \in L(Y, X)$ and:
$$\|A_n^{-1}\| \le \frac{\beta}{1 - \beta(Lr\eta + \ell)}. \tag{7}$$
 □
Let $\mu_1 = \max\{\mu, \ell\}$, $K_1 = \max\{K, L\}$, $K_2 = K_1 + M$, $\rho_n(t) = K_2\eta_n t + \mu_1$ and $\rho_n = \rho_n(r)$. We assume from now on that the previous hypotheses are satisfied. Let $n = 0$ and $i = 1$. Then, we have:
$$\|x_0^{(1)} - x_0^{(0)}\| = \|A_0^{-1}F(x_0^{(0)})\| \le \eta.$$
Set:
$$\eta_0 = \eta, \quad \beta_0 = \beta, \quad \rho_0 = \rho_0(r), \quad h_0 = h_0(r) = \beta_0\rho_0.$$
By the first step in Method (4), we can write:
$$F(x_0^{(1)}) = F(x_0^{(1)}) - F(x_0^{(0)}) - F'(x_0^{(0)})(x_0^{(1)} - x_0^{(0)}) + (F'(x_0^{(0)}) - A(x_0^{(0)}))(x_0^{(1)} - x_0^{(0)}),$$
so:
$$\begin{aligned} \|F(x_0^{(1)})\| &\le \left\| \int_0^1 [F'(x_0^{(0)} + \theta(x_0^{(1)} - x_0^{(0)})) - F'(x_0^{(0)})]\,d\theta \right\| \|x_0^{(1)} - x_0^{(0)}\| + \|F'(x_0^{(0)}) - A(x_0^{(0)})\|\,\|x_0^{(1)} - x_0^{(0)}\| \\ &\le \frac{1}{2}K_0\eta_0\|x_0^{(1)} - x_0^{(0)}\| + (M\|x_0^{(1)} - x_0^{(0)}\| + \mu)\|x_0^{(1)} - x_0^{(0)}\| \\ &\le \left(\frac{1}{2}K_0\eta_0 + M\eta_0 + \mu\right)\|x_0^{(1)} - x_0^{(0)}\| \le \rho_0\|x_0^{(1)} - x_0^{(0)}\|, \end{aligned}$$
$$\|x_0^{(2)} - x_0^{(1)}\| = \|A_0^{-1}F(x_0^{(1)})\| \le \|A_0^{-1}\|\,\|F(x_0^{(1)})\| \le \beta_0\rho_0\|x_0^{(1)} - x_0^{(0)}\| = h_0\|x_0^{(1)} - x_0^{(0)}\|$$
and:
$$\|x_0^{(2)} - x_0^{(0)}\| \le \|x_0^{(2)} - x_0^{(1)}\| + \|x_0^{(1)} - x_0^{(0)}\| \le h_0\|x_0^{(1)} - x_0^{(0)}\| + \|x_0^{(1)} - x_0^{(0)}\| = (1 + h_0)\eta_0.$$
Similarly, we can write:
$$F(x_0^{(2)}) = F(x_0^{(2)}) - F(x_0^{(1)}) - F'(x_0^{(0)})(x_0^{(2)} - x_0^{(1)}) + (F'(x_0^{(0)}) - A(x_0^{(0)}))(x_0^{(2)} - x_0^{(1)}),$$
so:
$$\begin{aligned} \|F(x_0^{(2)})\| &\le \left\| \int_0^1 [F'(x_0^{(1)} + \theta(x_0^{(2)} - x_0^{(1)})) - F'(x_0^{(0)})]\,d\theta \right\| \|x_0^{(2)} - x_0^{(1)}\| + \|F'(x_0^{(0)}) - A(x_0^{(0)})\|\,\|x_0^{(2)} - x_0^{(1)}\| \\ &\le K_0\int_0^1 [(1 - \theta)\|x_0^{(1)} - x_0^{(0)}\| + \theta\|x_0^{(2)} - x_0^{(0)}\|]\,d\theta\,\|x_0^{(2)} - x_0^{(1)}\| + (M\|x_0^{(1)} - x_0^{(0)}\| + \mu)\|x_0^{(2)} - x_0^{(1)}\| \\ &\le (K_0 r\eta_0 + M\eta_0 + \mu)\|x_0^{(2)} - x_0^{(1)}\| \le \rho_0\|x_0^{(2)} - x_0^{(1)}\|, \end{aligned}$$
$$\|x_0^{(3)} - x_0^{(2)}\| = \|A_0^{-1}F(x_0^{(2)})\| \le \beta_0\rho_0\|x_0^{(2)} - x_0^{(1)}\| = h_0\|x_0^{(2)} - x_0^{(1)}\| \le h_0^2\|x_0^{(1)} - x_0^{(0)}\|$$
and:
$$\|x_0^{(3)} - x_0^{(0)}\| \le \|x_0^{(3)} - x_0^{(2)}\| + \|x_0^{(2)} - x_0^{(1)}\| + \|x_0^{(1)} - x_0^{(0)}\| \le (1 + h_0 + h_0^2)\eta_0.$$
Hence, we arrive at:
Lemma 2.
The following assertions hold for $n = 0$ and $i = 1, 2, 3, \ldots, k$:
$$\|F(x_0^{(i)})\| \le \rho_0\|x_0^{(i)} - x_0^{(i-1)}\|, \tag{8}$$
$$\|x_0^{(i)} - x_0^{(i-1)}\| \le h_0\|x_0^{(i-1)} - x_0^{(i-2)}\|, \tag{9}$$
$$\|x_0^{(i)} - x_0^{(0)}\| \le (1 + h_0 + \cdots + h_0^{i-1})\eta_0 \tag{10}$$
and:
$$\|x_0^{(k)} - x_0^{(0)}\| \le (1 + h_0 + \cdots + h_0^{k-1})\eta_0. \tag{11}$$
Proof. 
We have that, for each $\theta \in [0, 1]$, $x_0^{(i-1)} + \theta(x_0^{(i)} - x_0^{(i-1)}) \in U(x_0, r\eta)$, since $x_0^{(i)}, x_0^{(i-1)} \in U(x_0, r\eta)$ and:
$$\|x_0^{(i-1)} - x_0^{(0)} + \theta(x_0^{(i)} - x_0^{(i-1)})\| \le (1 - \theta)\|x_0^{(i-1)} - x_0^{(0)}\| + \theta\|x_0^{(i)} - x_0^{(0)}\| \le (1 - \theta)r\eta + \theta r\eta = r\eta.$$
Then, as previously, using Method (4), we can write:
$$F(x_0^{(i)}) = \int_0^1 [F'(x_0^{(i-1)} + \theta(x_0^{(i)} - x_0^{(i-1)})) - F'(x_0^{(0)})]\,d\theta\,(x_0^{(i)} - x_0^{(i-1)}) + (F'(x_0^{(0)}) - A(x_0^{(0)}))(x_0^{(i)} - x_0^{(i-1)}),$$
so:
$$\begin{aligned} \|F(x_0^{(i)})\| &\le K_0\int_0^1 [(1 - \theta)\|x_0^{(i-1)} - x_0^{(0)}\| + \theta\|x_0^{(i)} - x_0^{(0)}\|]\,d\theta\,\|x_0^{(i)} - x_0^{(i-1)}\| + \|F'(x_0^{(0)}) - A(x_0^{(0)})\|\,\|x_0^{(i)} - x_0^{(i-1)}\| \\ &\le (K_0 r\eta + \mu)\|x_0^{(i)} - x_0^{(i-1)}\| \le \rho_0\|x_0^{(i)} - x_0^{(i-1)}\|, \end{aligned}$$
$$\|x_0^{(i+1)} - x_0^{(i)}\| = \|A_0^{-1}F(x_0^{(i)})\| \le \beta\rho_0\|x_0^{(i)} - x_0^{(i-1)}\| = h_0\|x_0^{(i)} - x_0^{(i-1)}\|$$
and:
$$\|x_0^{(i+1)} - x_0^{(0)}\| \le \|x_0^{(i+1)} - x_0^{(i)}\| + \cdots + \|x_0^{(1)} - x_0^{(0)}\| \le (1 + h_0 + \cdots + h_0^i)\eta_0,$$
which show Estimates (8)–(10), respectively. Estimate (11) follows from (10) for $i = k$.  □
It follows that $x_0^{(i)}$, for $i = 1, 2, \ldots, k - 1$, and $x_0^{(k)} = x_1$ belong in $U(x_0, r\eta)$. Define:
$$T_0(t) = \begin{cases} 1, & \text{if } k = 1, \\ 1 + h_0 + \cdots + h_0^{k-1}, & \text{if } k = 2, 3, \ldots \end{cases} \tag{12}$$
Next, we study Method (4) for $n = 1$ in a way analogous to $n = 0$. It follows from Lemma 1 that $A(x_1)^{-1} \in L(Y, X)$ and:
$$\|A(x_1)^{-1}\| \le \frac{\beta}{1 - h_0} =: \beta_1. \tag{13}$$
Hence, $x_1^{(1)} = x_1^{(0)} - A(x_1)^{-1}F(x_1^{(0)})$, with $x_1^{(0)} = x_0^{(k)} = x_1$, is well defined,
$$\|F(x_1^{(0)})\| \le \rho_1\|x_0^{(k)} - x_0^{(k-1)}\|,$$
so:
$$\|x_1^{(1)} - x_1^{(0)}\| = \|A(x_1)^{-1}F(x_1^{(0)})\| \le \|A(x_1)^{-1}\|\,\|F(x_1^{(0)})\| \le \beta_1\rho_1\|x_0^{(k)} - x_0^{(k-1)}\| \le \frac{\beta\rho_1}{1 - h_0}h_0^{k-1}\eta_0 \le \frac{h_0^k}{1 - h_0}\eta_0 = h_1\eta_0 =: \eta_1, \tag{14}$$
where:
$$h_1 = \frac{h_0^k}{1 - h_0}. \tag{15}$$
Define as previously,
$$T_1(t) = \begin{cases} 1, & \text{if } k = 1, \\ 1 + h_1 + \cdots + h_1^{k-1}, & \text{if } k = 2, 3, \ldots \end{cases} \tag{16}$$
Then, we have again that:
$$\begin{gathered} \|F(x_1^{(i)})\| \le \rho_1\|x_1^{(i)} - x_1^{(i-1)}\|, \quad \|x_1^{(i+1)} - x_1^{(i)}\| = \|A(x_1)^{-1}F(x_1^{(i)})\| \le h_1\|x_1^{(i)} - x_1^{(i-1)}\|, \\ \|F(x_1^{(k)})\| \le \rho_1\|x_1^{(k)} - x_1^{(k-1)}\|, \quad \|x_1^{(k)} - x_1^{(k-1)}\| = \|x_2 - x_1^{(k-1)}\| \le h_1\|x_1^{(k-1)} - x_1^{(k-2)}\| \end{gathered} \tag{17}$$
and:
$$\|x_1^{(k)} - x_1^{(0)}\| = \|x_2 - x_1\| \le T_1(r)\eta_1. \tag{18}$$
Next, we continue for n = 2 . By Lemma 1, A ( x 2 ) 1 L ( Y , X ) and:
$$\|A(x_2)^{-1}\| \le \frac{\beta}{1 - h_0} =: \beta_2.$$
Notice that $\beta_2 = \beta_1$. Then, for $i = 1$ and since $x_2 = x_2^{(0)}$, we get, as in (14):
$$\|x_2^{(1)} - x_2^{(0)}\| = \|A(x_2)^{-1}F(x_2^{(0)})\| \le \|A(x_2)^{-1}\|\,\|F(x_2^{(0)})\| \le \beta_2\rho_2\|x_1^{(k)} - x_1^{(k-1)}\| \le h_2\eta_1 =: \eta_2,$$
where $h_2 = \frac{h_1^k}{1 - h_0}$. Then, as before, we can write:
$$\|A(x_2)^{-1}\| \le \beta_2, \quad \|A(x_2)^{-1}F(x_2)\| \le \eta_2, \quad T_2(t) = \begin{cases} 1, & \text{if } k = 1, \\ 1 + h_2 + \cdots + h_2^{k-1}, & \text{if } k = 2, 3, \ldots, \end{cases}$$
so $x_2^{(i)}, x_3 \in U(x_2, T_2(r)\eta_2)$ for $i = 1, 2, \ldots, k - 1$. We are motivated by the preceding items to define the recurrence relations:
$$\beta_n = \beta_1, \quad \eta_n = \frac{1}{1 - h_0}h_{n-1}^k\eta_{n-1}, \quad h_n = \beta_n\rho_n, \quad T_n(t) = \begin{cases} 1, & \text{if } k = 1, \\ 1 + h_n + \cdots + h_n^{k-1}, & \text{if } k = 2, 3, \ldots \end{cases} \tag{19}$$
Hence, we arrive at:
Lemma 3.
Suppose that the hypotheses of Lemma 1 hold. Then, $x_n^{(i)}, x_{n+1} \in U(x_n, T_n(r)\eta_n)$ for each $i = 1, 2, \ldots, k - 1$.
Proof. 
As in the cases $n = 1, 2$, we get, for each $n = 1, 2, 3, \ldots$:
$$\|F(x_n^{(i)})\| \le \rho_n(r)\|x_n^{(i)} - x_n^{(i-1)}\|, \quad i = 1, 2, \ldots, k,$$
and, for $i = 1, 2, \ldots, k - 1$,
$$\|x_n^{(i+1)} - x_n^{(i)}\| \le h_n\|x_n^{(i)} - x_n^{(i-1)}\| \le \cdots \le h_n^i\|x_n^{(1)} - x_n^{(0)}\|,$$
$$\|x_n^{(i+1)} - x_n^{(0)}\| \le (1 + h_n + h_n^2 + \cdots + h_n^i)\eta_n.$$
That is, we obtain:
$$\|F(x_n^{(k-1)})\| \le \rho_n(r)\|x_n^{(k-1)} - x_n^{(k-2)}\|,$$
$$\|x_n^{(k)} - x_n^{(k-1)}\| = \|x_{n+1} - x_n^{(k-1)}\| \le h_n\|x_n^{(k-1)} - x_n^{(k-2)}\| \le \cdots \le h_n^{k-1}\|x_n^{(1)} - x_n^{(0)}\|$$
and:
$$\|x_n^{(k)} - x_n^{(0)}\| = \|x_{n+1} - x_n^{(0)}\| \le (1 + h_n + h_n^2 + \cdots + h_n^{k-1})\eta_n.$$
 □
Define the function $\varphi$ on the interval $[0, 1]$ by:
$$\varphi(t) = t^{k-1} + t - 1, \quad k = 1, 2, \ldots.$$
We have that $\varphi(0) = -1$ and $\varphi(1) = 1 > 0$. It then follows from the intermediate value theorem that the equation $\varphi(t) = 0$ has at least one solution in $(0, 1)$. Denote by $s$ the smallest such solution. Notice that, for:
$$r < \frac{s - \mu_1}{K_2\eta}, \quad \eta \neq 0 \quad \text{and} \quad \mu_1 < s, \tag{20}$$
a simple inductive argument shows that:
$$h_{n+1} < h_0 \le s \quad \text{for each } n = 1, 2, \ldots \tag{21}$$
and:
$$T_n(r) < T_{n-1}(r). \tag{22}$$
Hence, we arrive at:
Lemma 4.
Suppose that (20) holds. Then, the sequences $\{h_n\}$ and $\{T_n(r)\}$ are decreasing.
Proof. 
It follows immediately from (19)–(22). □
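Lemma 4 is also easy to check numerically from the constants alone. The sketch below is our illustration (the function name and the trial constants are hypothetical, not from the paper): it computes the smallest root $s$ of $\varphi$ by bisection, verifies condition (20), and then iterates the recurrences (19) so that the monotonicity of $\{h_n\}$ and $\{T_n(r)\}$ can be inspected.

```python
def check_lemma4(beta, eta, K2, mu1, r, k, steps=8):
    """Bisection for s, then the recurrences (19); assumes k >= 2."""
    phi = lambda t: t**(k - 1) + t - 1.0
    lo, hi = 0.0, 1.0                            # phi(0) = -1 < 0 < 1 = phi(1)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if phi(mid) < 0.0 else (lo, mid)
    s = 0.5 * (lo + hi)
    assert mu1 < s and r < (s - mu1) / (K2 * eta), "condition (20) fails"
    rho = lambda eta_n: K2 * eta_n * r + mu1     # rho_n = rho_n(r)
    h0 = beta * rho(eta)                         # h_0 = beta_0 * rho_0
    beta_n, eta_n, hs = beta / (1.0 - h0), eta, [h0]
    for _ in range(steps):                       # recurrences (19)
        eta_n = hs[-1]**k * eta_n / (1.0 - h0)
        hs.append(beta_n * rho(eta_n))
    Ts = [sum(h**j for j in range(k)) for h in hs]
    return s, hs, Ts                             # hs and Ts should decrease

# Trial constants (hypothetical): both returned sequences decrease.
s, hs, Ts = check_lemma4(beta=1.0, eta=0.1, K2=1.0, mu1=0.05, r=1.5, k=3)
```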
Taking into account $x_n^{(i)} \in U(x_0, r\eta)$ and (20)–(22), we can obtain in turn the estimate:
$$\begin{aligned} \|x_n^{(i)} - x_0\| &\le \|x_n^{(i)} - x_n^{(0)}\| + \sum_{i=0}^{n-1}\|x_{n-i} - x_{n-i-1}\| \le \sum_{i=0}^{n}T_i(r)\eta_i \le T_0(r)\sum_{i=0}^{n}\eta_i \\ &\le T_0(r)[\eta + \eta_1 + \eta_2 + \cdots] \\ &\le T_0(r)\left[1 + \frac{1}{1 - h_0}h_0^k + \left(\frac{1}{1 - h_0}\right)^2 h_0^{2k} + \cdots + \left(\frac{1}{1 - h_0}\right)^n h_0^{nk} + \cdots\right]\eta \\ &= T_0(r)\left[1 + \frac{h_0^k}{1 - h_0}\left(1 + \frac{h_0^k}{1 - h_0} + \cdots\right)\right]\eta = T_0(r)\left[1 + \frac{\frac{h_0^k}{1 - h_0}}{1 - \frac{h_0^k}{1 - h_0}}\right]\eta = T_0(r)\frac{1 - h_0}{1 - h_0 - h_0^k}\eta, \end{aligned}$$
where we also used that:
$$\eta_n = \frac{1}{1 - h_0}h_{n-1}^k\eta_{n-1} \le \frac{h_0^k}{1 - h_0}\eta_{n-1} \le \left(\frac{h_0^k}{1 - h_0}\right)^2\eta_{n-2} \le \cdots \le \left(\frac{h_0^k}{1 - h_0}\right)^n\eta.$$
Then, we can show:
Theorem 1.
Suppose that Condition (A) is satisfied and that, for each fixed number of steps $k$, the equation:
$$T_0(t)\left[\frac{1 - h_0}{1 - h_0 - h_0^k}\right] = t$$
has at least one positive solution. Denote by $r$ the smallest such solution. Moreover, suppose that (20) is satisfied and $U(x_0, r\eta) \subseteq D$. Then, the sequence $\{x_n\}$ generated by Method (4) is well defined, remains in $\bar{U}(x_0, r\eta)$ for each $n = 0, 1, 2, \ldots$, $i = 1, 2, \ldots, k$, and converges to a solution $x^* \in \bar{U}(x_0, r\eta)$ of the equation $F(x) = 0$. The solution $x^*$ is unique in $D_1$.
Proof. 
It follows from the previous results that $x_n^{(i)}$ and $x_n^{(k)} = x_{n+1}$ belong in $U(x_0, r\eta)$. We must show that the sequence $\{x_n\}$ is Cauchy:
$$\begin{aligned} \|x_{n+j} - x_n\| &\le \sum_{i=1}^{j}\|x_{n+i} - x_{n+i-1}\| \le \sum_{i=1}^{j}T_{n+i-1}(r)\eta_{n+i-1} \le T_0(r)\sum_{i=1}^{j}\eta_{n+i-1} = T_0(r)\sum_{i=0}^{j-1}\eta_{n+i} \\ &\le T_0(r)\sum_{i=0}^{j-1}\left(\frac{h_0^k}{1 - h_0}\right)^{n+i}\eta \le T_0(r)\,\frac{\left(\frac{h_0^k}{1 - h_0}\right)^n - \left(\frac{h_0^k}{1 - h_0}\right)^{n+j}}{1 - \frac{h_0^k}{1 - h_0}}\,\eta \end{aligned}$$
(recall that $\frac{h_0^k}{1 - h_0} < 1$), so $\{x_n\}$ is a Cauchy sequence in the Banach space $X$, and as such it converges to some $x^* \in \bar{U}(x_0, r\eta)$, since $\bar{U}(x_0, r\eta)$ is a closed set. Moreover, we have:
$$\|F(x_n)\| = \|F(x_{n-1}^{(k)})\| \le \rho_{n-1}(r)\|x_{n-1}^{(k)} - x_{n-1}^{(k-1)}\| \le \rho_{n-1}(r)h_{n-1}^{k-1}\|x_{n-1}^{(1)} - x_{n-1}^{(0)}\| \le \rho_0(r)h_0^{k-1}\left(\frac{h_0^k}{1 - h_0}\right)^{n-1}\eta \to 0 \quad \text{as } n \to \infty,$$
so $F(x^*) = 0$ by the continuity of $F$. Furthermore, to show the uniqueness part, let $y \in D_1$ with $F(y) = 0$. Set $Q = \int_0^1 F'(x^* + \theta(y - x^*))\,d\theta$. By (a4) and (a5), we get in turn that:
$$\|F'(x_0)^{-1}\|\,\|Q - F'(x_0)\| \le \bar{\beta}\int_0^1 K_0\|x^* + \theta(y - x^*) - x_0\|\,d\theta \le \bar{\beta}K_0\left[\frac{\|x^* - x_0\| + \|y - x_0\|}{2}\right] \le \frac{\bar{\beta}K_0(r + r^*)\eta}{2} < 1,$$
so $Q^{-1} \in L(Y, X)$. Then, from the identity $0 = F(y) - F(x^*) = Q(y - x^*)$, we conclude that $x^* = y$. □
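The geometric tail in the Cauchy estimate also yields a practical a priori iteration count. Letting $j \to \infty$ above gives $\|x^* - x_n\| \le T_0(r)\,q^n\,\eta/(1 - q)$ with $q = h_0^k/(1 - h_0)$; the helper below (our illustration, not a construction from the paper) inverts this bound for a given tolerance.

```python
import math

def outer_steps_needed(h0, k, T0r, eta, tol):
    """Smallest n with T_0(r) * q**n * eta / (1 - q) <= tol."""
    q = h0**k / (1.0 - h0)
    assert 0.0 < q < 1.0, "needs h_0^k / (1 - h_0) < 1"
    n = math.log(tol * (1.0 - q) / (T0r * eta)) / math.log(q)
    return max(0, math.ceil(n))

# e.g. h0 = 0.2, k = 3, T_0(r) = 1.24, eta = 0.1: 6 outer steps for 1e-12.
steps = outer_steps_needed(h0=0.2, k=3, T0r=1.24, eta=0.1, tol=1e-12)
```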
Remark 1.
As noted in the Introduction, even if specialized to $A(x) = F'(x)$, Theorem 1 can give better results, since $K_0 \le K$. As an example, consider the uniqueness result in [12], where:
$$r + r^* < \frac{2}{\bar{\beta}K\eta} = r_1,$$
but $r_1 < r_0$ for $K_0 < K$, so the uniqueness ball obtained here is at least as large.

4. Conclusions

We presented a unified semi-local convergence analysis for a k-step iterative method with a flexible and frozen linear operator. The results obtained in this article reduce to the ones given in [1,2,12] if we choose $A(x) = F'(x)$ for each $x \in D$. On top of that, in this special case, our results have the following advantages over those works:
(1) a larger convergence region, leading to more available initial points;
(2) tighter upper bound estimates on $\|x_{n+1} - x_n\|$, as well as on $\|x_n - x^*\|$, which means that fewer iterations are needed to arrive at a desired error tolerance;
(3) information on the location of the solution that is at least as precise.
These advantages are obtained because we locate a ball inside the old ball that contains the iterates. The Lipschitz constants then depend on the smaller ball, which is why they are at least as small as the old ones. It is also worth noticing that these advantages are attained because the new constants are special cases of the old ones; that is, no additional effort is required to compute the new constants. A plethora of numerical examples where the new constants are strictly smaller than the old ones can be found in [3,4,5,6,7,8]. Finally, other choices of the operator $A$ lead to methods not studied before.

Author Contributions

Conceptualization, I.K.A.; Editing, S.G.; Data Curation, S.G.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Amat, S.; Busquier, S.; Plaza, S. On two families of high order Newton type methods. Appl. Math. Comput. 2012, 25, 2209–2217.
  2. Amat, S.; Argyros, I.K.; Busquier, S.; Hernandez, M.A. On two high-order families of frozen Newton-type methods. Numer. Linear Algebra Appl. 2018, 25, e2126.
  3. Argyros, I.K.; Ezquerro, J.A.; Gutierrez, J.M.; Hernandez, M.A.; Hilout, S. On the semi-local convergence of efficient Chebyshev–Secant-type methods. J. Comput. Appl. Math. 2011, 235, 3195–3206.
  4. Argyros, I.K.; Hilout, S. Weaker conditions for the convergence of Newton's method. J. Complex. 2012, 28, 364–387.
  5. Argyros, I.K.; Magreñán, A.A. A Contemporary Study of Iterative Methods; Elsevier (Academic Press): New York, NY, USA, 2018.
  6. Argyros, I.K.; Magreñán, A.A. Iterative Methods and Their Dynamics with Applications; CRC Press: New York, NY, USA, 2017.
  7. Argyros, I.K.; George, S.; Thapa, N. Mathematical Modeling for the Solution of Equations and Systems of Equations with Applications; Nova Publishers: New York, NY, USA, 2018; Volume I.
  8. Argyros, I.K.; George, S.; Thapa, N. Mathematical Modeling for the Solution of Equations and Systems of Equations with Applications; Nova Publishers: New York, NY, USA, 2018; Volume II.
  9. Behl, R.; Cordero, A.; Motsa, S.S.; Torregrosa, J.R. An eighth-order family of optimal multiple root finders and its dynamics. Numer. Algorithms 2018, 77, 1249–1272.
  10. Cordero, A.; Hueso, J.L.; Martinez, E.; Torregrosa, J.R. Generating optimal derivative free iterative methods for nonlinear equations by using polynomial interpolation. Math. Comput. Model. 2013, 57, 1950–1956.
  11. Kantorovich, L.V.; Akilov, G.P. Functional Analysis in Normed Spaces; Pergamon Press: New York, NY, USA, 1982.
  12. Hernandez, M.A.; Martinez, E.; Tervel, C. Semi-local convergence of a k-step iterative process and its application for solving a special kind of conservative problems. Numer. Algorithms 2017, 76, 309–331.
  13. Sharma, J.R.; Guha, R.K.; Sharma, R. An efficient fourth order weighted Newton method for systems of nonlinear equations. Numer. Algorithms 2013, 62, 307–323.
