Article

Ball Convergence of a Parametric Efficient Family of Iterative Methods for Solving Nonlinear Equations

1 Learning Commons, University of North Texas at Dallas, Dallas, TX 75038, USA
2 Department of Computer Science, Cameron University, Lawton, OK 73505, USA
3 Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
4 Department of Mathematical and Computational Sciences, NIT, Karnataka 575 025, India
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Academic Editor: Jay Jahangiri
Received: 30 April 2021 / Revised: 9 June 2021 / Accepted: 17 June 2021 / Published: 18 June 2021

Abstract

The goal is to extend the applicability of Newton-Traub-like methods to cases not covered in earlier articles, which require the usage of derivatives up to order seven that do not appear in the methods. The price we pay for using conditions only on the first derivative, which actually appears in the method, is that we show only linear convergence. Finding the convergence order is not our intention, however, since this is already known in the case where the spaces coincide with the multidimensional Euclidean space. Note that the order is rediscovered by using the ACOC or COC, which require only the first derivative. Moreover, in earlier studies using Taylor series, no computable error distances based on generalized Lipschitz conditions were available. Therefore, we do not know in advance, for example, how many iterates are needed to achieve a predetermined error tolerance. Furthermore, no uniqueness-of-solution results are available in the aforementioned studies, but we also provide such results. Our technique is general enough to be used to extend the applicability of other methods in an analogous way. Finally, note that local results of this type are important, since they demonstrate the difficulty of choosing initial points. Our approach also extends the applicability of this family of methods from the multidimensional Euclidean space to the more general Banach space setting. Numerical examples complement the theoretical results.
Keywords: Banach space valued mapping; parametric family of methods; ball convergence; Euclidean space

1. Introduction

Let $B_1$, $B_2$ denote Banach spaces and let $T \subseteq B_1$ be a nonempty, convex and open set. Set $LB(B_1, B_2) = \{ V : B_1 \to B_2 : V \text{ is a bounded linear operator} \}$. Many problems in mechanics, biomechanics, physics, mathematical chemistry, economics, radiative transfer, biology, ecology, medicine, engineering, and other areas [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25] are reduced to a nonlinear equation
$$F(x) = 0, \qquad (1)$$
with $F : T \to B_2$ continuously differentiable in the Fréchet sense. Therefore, solving Equation (1) is, in general, an extremely important and difficult problem. A solution $\xi$ is very difficult to find, especially in closed or analytical form. This fact forces practitioners and researchers to develop higher-order and efficient methods converging to $\xi$ when started from a point $x_0 \in T$ sufficiently close to it [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28]. Motivated by Traub-like and Newton's methods, the following methods were studied in [10]:
$$\begin{aligned}
y_k &= x_k - F'(x_k)^{-1} F(x_k),\\
z_k &= y_k - [w_k, y_k; F]^{-1} F(y_k),\\
x_{k+1} &= z_k - [w_k, y_k; F]^{-1} F(z_k),
\end{aligned} \qquad (2)$$
where $[\cdot, \cdot; F] : T \times T \to LB(B_1, B_2)$ is a divided difference of order one [18] and $w_n = w(x_n)$, with $w : T \to T$ a given iteration function. To be more precise, the special choice of $w$ given by
$$w_n = y_n + \alpha F(y_n) + \beta F(y_n)^2 \qquad (3)$$
was used in [10], for parameters $\alpha$ and $\beta$ that are not both zero at the same time, with $f_i(x)$ the coordinate functions of $F$ and
$$F(x)^2 = \big( f_1^2(x), f_2^2(x), \ldots, f_n^2(x) \big)^T.$$
Moreover, they used method (2) for $B_1 = B_2 = \mathbb{R}^k$ and compared it favorably to other sixth-order methods.
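As a concrete illustration of the finite-dimensional case $B_1 = B_2 = \mathbb{R}^k$, the three substeps of method (2) with the weight (3) can be sketched in Python. The componentwise divided difference below is one standard choice (not necessarily the one used in [10]), and the test function, parameters, and starting point are ours, chosen only for illustration:

```python
import numpy as np

def divided_difference(F, u, v):
    """Standard componentwise first-order divided difference [u, v; F] for
    F: R^n -> R^n; column j uses the quotient
    (F(u_1..u_j, v_{j+1}..v_n) - F(u_1..u_{j-1}, v_j..v_n)) / (u_j - v_j).
    Requires u[j] != v[j] for every j."""
    n = len(u)
    M = np.zeros((n, n))
    for j in range(n):
        a = np.concatenate([u[:j + 1], v[j + 1:]])
        b = np.concatenate([u[:j], v[j:]])
        M[:, j] = (F(a) - F(b)) / (u[j] - v[j])
    return M

def newton_traub_step(F, Jac, x, alpha, beta):
    """One full step of the three-substep method (2), with w from (3);
    F(y)**2 squares componentwise, as in the text."""
    y = x - np.linalg.solve(Jac(x), F(x))       # Newton substep
    Fy = F(y)
    w = y + alpha * Fy + beta * Fy**2           # iteration function (3)
    A = divided_difference(F, w, y)             # [w, y; F]
    z = y - np.linalg.solve(A, Fy)              # second substep
    return z - np.linalg.solve(A, F(z))         # third substep
```

For instance, for the (hypothetical) test system $F(x) = (e^{x_1} - 1,\; x_2^2 + x_2)^T$ with solution $\xi = 0$ and $x_0 = (0.3, 0.3)$, two full steps already drive the error well below $10^{-8}$.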
However, in this article we do not necessarily assume that $w$ is given by (3). The sixth order of convergence has been verified for any values of $\alpha$ and $\beta$ using Taylor series, but under hypotheses on the derivatives up to the seventh order [10]. That simply means that convergence is not guaranteed for mappings that are not seven times differentiable.
For example, consider $h$ on $T = \left[ -\frac{1}{2}, \frac{3}{2} \right)$ defined by
$$h(t) = \begin{cases} t^3 \ln t^2 + t^5 - t^4, & t \neq 0, \\ 0, & t = 0. \end{cases}$$
Then, we calculate
$$h'(t) = 3t^2 \ln t^2 + 5t^4 - 4t^3 + 2t^2,$$
$$h''(t) = 6t \ln t^2 + 20t^3 - 12t^2 + 10t$$
and
$$h'''(t) = 6 \ln t^2 + 60t^2 - 24t + 22.$$
Notice that $h'''(t)$ is unbounded on $T$.
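A quick numerical sketch (ours, not part of [10]) makes the unboundedness visible: because of the $6 \ln t^2$ term, $h'''(t)$ diverges as $t \to 0$ inside $T$:

```python
import math

def h3(t):
    """Third derivative of the example: h'''(t) = 6 ln t^2 + 60 t^2 - 24 t + 22."""
    return 6 * math.log(t ** 2) + 60 * t ** 2 - 24 * t + 22

# |h'''(t)| grows without bound as t -> 0 inside T = [-1/2, 3/2):
values = [abs(h3(10.0 ** -k)) for k in (2, 5, 8, 11)]
assert values == sorted(values)  # magnitudes strictly increase as t shrinks
```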
Another problem is that there are no error bounds on $\|x_n - \xi\|$, no results on the uniqueness of $\xi$, and no guidance on how close to $\xi$ we should start, so the selection of $x_0$ is really "a shot in the dark". To address all these concerns about this very efficient and useful method, we only use conditions on the first derivative. Moreover, the convergence radius, error estimates, and uniqueness results are computed based on these conditions. Furthermore, we rely on the computational order of convergence (COC) or the approximated computational order of convergence (ACOC) formulae to determine the order [8,25]. These formulae also use only the first derivative. That is how we extend the applicability of method (2). Our technique can also be used to study other methods in an analogous way.
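The ACOC can be estimated from consecutive differences of computed iterates alone, which is why it needs no derivatives beyond those already appearing in the method. A minimal sketch of the standard formula, checked on a synthetic sequence of our own choosing:

```python
import math

def acoc(xs):
    """Approximated computational order of convergence from the last four iterates:
    ACOC ~ ln(|x_{n+1}-x_n| / |x_n-x_{n-1}|) / ln(|x_n-x_{n-1}| / |x_{n-1}-x_{n-2}|)."""
    d1 = abs(xs[-1] - xs[-2])
    d2 = abs(xs[-2] - xs[-3])
    d3 = abs(xs[-3] - xs[-4])
    return math.log(d1 / d2) / math.log(d2 / d3)

# Sanity check on a synthetic quadratically convergent sequence (e_{n+1} = e_n^2):
xs = [2 + 1e-1, 2 + 1e-2, 2 + 1e-4, 2 + 1e-8]
assert abs(acoc(xs) - 2) < 0.2  # recovers order ~2 from iterates only
```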
It is worth noticing that if $\alpha = \beta = 0$, method (2) reduces to Newton-Secant-like methods, and if only $\beta = 0$, to Newton-Steffensen-like methods.
The layout for the rest of the article includes the convergence analysis of method (2) in Section 2 and the numerical examples in Section 3.

2. Convergence Analysis of Method (2)

Let $D = [0, \infty)$ and let $\varphi_0 : D \to \mathbb{R}$ be continuous and increasing with $\varphi_0(0) = 0$. Assume that the equation
$$\varphi_0(t) = 1 \qquad (6)$$
has at least one positive zero, and let $r_0$ be its minimal positive zero. Set $D_0 = [0, r_0)$. Assume there exists a continuous and increasing function $\varphi : D_0 \to \mathbb{R}$ with $\varphi(0) = 0$. Define functions $g_1$ and $\bar{g}_1$ on $D_0$ by
$$g_1(t) = \frac{\int_0^1 \varphi((1 - \tau)t)\, d\tau}{1 - \varphi_0(t)}, \qquad \bar{g}_1(t) = g_1(t) - 1.$$
By these definitions, $\bar{g}_1(0) = -1$ and $\bar{g}_1(t) \to \infty$ as $t \to r_0^{-}$. The intermediate value theorem applied to $\bar{g}_1$ then assures the existence of at least one zero in $(0, r_0)$. Denote by $\rho_1$ the minimal such zero.
Assume that the equations
$$\varphi_0(g_1(t)t) = 1, \qquad \varphi_2(\varphi_5(t), g_1(t)t) = 1 \qquad (7)$$
have at least one positive zero each, where $\varphi_5$ is as $\varphi$. Denote by $r_1$ the minimal such zero and let $D_1 = [0, \bar{r}_0]$, where $\bar{r}_0 = \min\{r_0, r_1\}$. Assume there exist continuous and increasing functions $\varphi_1 : D_1 \to \mathbb{R}$, $\varphi_2 : D_1 \times D_1 \to \mathbb{R}$ and $\varphi_3 : D_1 \to \mathbb{R}$. Define functions $g_2$ and $\bar{g}_2$ on $D_1$ by
$$g_2(t) = \left[ \frac{\int_0^1 \varphi((1 - \tau)g_1(t)t)\, d\tau}{1 - \varphi_0(g_1(t)t)} + \frac{\varphi_1(\varphi_5(t) + g_1(t)t) \int_0^1 \varphi_3(\tau g_1(t)t)\, d\tau}{(1 - \varphi_0(g_1(t)t))(1 - \varphi_2(\varphi_5(t), g_1(t)t))} \right] g_1(t)$$
and
$$\bar{g}_2(t) = g_2(t) - 1.$$
By these definitions, $\bar{g}_2(0) = -1$ and $\bar{g}_2(t) \to \infty$ as $t \to \bar{r}_0^{-}$. Denote by $\rho_2$ the minimal zero of $\bar{g}_2$ on $(0, \bar{r}_0)$.
Assume that the equation
$$\varphi_0(g_2(t)t) = 1 \qquad (8)$$
has at least one positive zero, and let $r_2$ be its minimal positive zero. Set $D_2 = [0, \bar{\bar{r}}_0)$, where $\bar{\bar{r}}_0 = \min\{\bar{r}_0, r_2\}$. With the abbreviation $\lambda = g_2(t)t$, define $g_3$ and $\bar{g}_3$ on $D_2$ by
$$g_3(t) = \left[ \frac{\int_0^1 \varphi((1 - \tau)g_2(t)t)\, d\tau}{1 - \varphi_0(\lambda)} + \frac{\varphi_4(\varphi_5(t) + \lambda,\, g_1(t)t + \lambda) \int_0^1 \varphi_3(\tau \lambda)\, d\tau}{(1 - \varphi_0(\lambda))(1 - \varphi_2(\varphi_5(t), g_1(t)t))} \right] g_2(t)$$
and
$$\bar{g}_3(t) = g_3(t) - 1.$$
By these definitions, $\bar{g}_3(0) = -1$ and $\bar{g}_3(t) \to \infty$ as $t \to \bar{\bar{r}}_0^{-}$. Denote by $\rho_3$ the minimal zero of $\bar{g}_3$ on $(0, \bar{\bar{r}}_0)$.
Define a radius of convergence $\rho$ by
$$\rho = \min\{\rho_j\}, \quad j = 1, 2, 3. \qquad (9)$$
By the preceding definitions, we have for all $t \in [0, \rho)$
$$0 \le \varphi_0(t) < 1, \qquad (11)$$
$$0 \le \varphi_2(\varphi_5(t), g_1(t)t) < 1, \qquad (12)$$
$$0 \le \varphi_0(g_1(t)t) < 1, \qquad 0 \le \varphi_0(g_2(t)t) < 1 \qquad (13)$$
and
$$0 \le g_j(t) < 1. \qquad (14)$$
Define $S(x, \mu) = \{ y \in B_1 : \|y - x\| < \mu \}$ and let $\bar{S}(x, \mu)$ be the closure of $S(x, \mu)$. Next, we list the conditions (A) to be used in the convergence analysis:
(a1) $F : T \to B_2$ is continuously differentiable, $[\cdot, \cdot; F] : T \times T \to LB(B_1, B_2)$ is a divided difference of order one, and there exists $\xi \in T$ such that $F(\xi) = 0$ and $F'(\xi)^{-1} \in LB(B_2, B_1)$.
(a2) $\varphi_0 : D \to \mathbb{R}$ is continuous and increasing with $\varphi_0(0) = 0$, and for all $x \in T$
$$\|F'(\xi)^{-1}(F'(x) - F'(\xi))\| \le \varphi_0(\|x - \xi\|).$$
Define $T_0 = T \cap S(\xi, r_0)$, where $r_0$ is given in (6).
(a3) $\varphi : D_0 \to \mathbb{R}$ is continuous and increasing with $\varphi(0) = 0$, and for all $x, y \in T_0$
$$\|F'(\xi)^{-1}(F'(y) - F'(x))\| \le \varphi(\|y - x\|).$$
(a4) $\varphi_1 : D_1 \to \mathbb{R}$, $\varphi_2 : D_1 \times D_1 \to \mathbb{R}$, $\varphi_3 : D_1 \to \mathbb{R}$ and $\varphi_5 : D_1 \to \mathbb{R}$ are continuous and increasing, $w : T \to T$ is continuous, and for all $x, y \in T_1 := T \cap S(\xi, \bar{r}_0)$, with $w = w(x)$,
$$\|F'(\xi)^{-1}([w, y; F] - F'(y))\| \le \varphi_1(\|w - y\|),$$
$$\|F'(\xi)^{-1}([w, y; F] - F'(\xi))\| \le \varphi_2(\|w - \xi\|, \|y - \xi\|),$$
$$\|F'(\xi)^{-1} F'(x)\| \le \varphi_3(\|x - \xi\|)$$
and
$$\|w - \xi\| \le \varphi_5(\|x - \xi\|),$$
where $\bar{r}_0$ is given in (7).
(a5) $\varphi_4 : D_2 \times D_2 \to \mathbb{R}$ is continuous and increasing, and for all $y, z \in T_2 := T \cap S(\xi, \bar{\bar{r}}_0)$
$$\|F'(\xi)^{-1}([w, y; F] - F'(z))\| \le \varphi_4(\|w - z\|, \|y - z\|).$$
(a6) $\bar{S}(\xi, \rho) \subseteq T$; the radii $r_0$, $\bar{r}_0$, $\bar{\bar{r}}_0$ given by (6)–(8) exist, and $\rho$ is defined in (9).
(a7) There exists $\bar{\rho} \ge \rho$ such that
$$\int_0^1 \varphi_0(\tau \bar{\rho})\, d\tau < 1.$$
Define $T_3 = T \cap \bar{S}(\xi, \bar{\rho})$.
Theorem 1.
Assume conditions (A) hold. Then, for any $x_0 \in S(\xi, \rho) \setminus \{\xi\}$, the sequence $\{x_n\}$ produced by method (2) remains in $S(\xi, \rho)$ and converges to $\xi$, i.e., $\lim_{n \to \infty} x_n = \xi$, so that
$$\|y_n - \xi\| \le g_1(\|x_n - \xi\|)\,\|x_n - \xi\| \le \|x_n - \xi\| < \rho, \qquad (15)$$
$$\|z_n - \xi\| \le g_2(\|x_n - \xi\|)\,\|x_n - \xi\| \le \|x_n - \xi\| \qquad (16)$$
and
$$\|x_{n+1} - \xi\| \le g_3(\|x_n - \xi\|)\,\|x_n - \xi\| \le \|x_n - \xi\|, \qquad (17)$$
where the functions $g_j$ were given previously. Moreover, $\xi$ is the only solution of Equation (1) in $T_3$, where $T_3$ is given in (a7).
Proof. 
If $v \in S(\xi, \rho)$, then (6), (9), (11) and (a2) give
$$\|F'(\xi)^{-1}(F'(\xi) - F'(v))\| \le \varphi_0(\|\xi - v\|) \le \varphi_0(\rho) < 1. \qquad (18)$$
This estimation, together with the Banach lemma on invertible operators [5,22], assures that $F'(v)^{-1} \in LB(B_2, B_1)$ with
$$\|F'(v)^{-1} F'(\xi)\| \le \frac{1}{1 - \varphi_0(\|v - \xi\|)}. \qquad (19)$$
It also follows from (19) and the first substep of method (2) that the iterate $y_0$ exists. Using the first substep of (2), (9), (11), (14) (for $j = 1$), (19) (for $v = x_0$) and (a3), we get
$$\begin{aligned}
\|y_0 - \xi\| &= \|x_0 - \xi - F'(x_0)^{-1} F(x_0)\| \\
&\le \|F'(x_0)^{-1} F'(\xi)\| \left\| \int_0^1 F'(\xi)^{-1}\big(F'(\xi + \tau(x_0 - \xi)) - F'(x_0)\big)\, d\tau\, (x_0 - \xi) \right\| \\
&\le \frac{\int_0^1 \varphi((1 - \tau)\|x_0 - \xi\|)\, d\tau}{1 - \varphi_0(\|x_0 - \xi\|)}\, \|x_0 - \xi\| = g_1(\|x_0 - \xi\|)\,\|x_0 - \xi\| \le \|x_0 - \xi\| < \rho,
\end{aligned} \qquad (20)$$
showing that $y_0 \in S(\xi, \rho)$ and that (15) holds for $n = 0$. By (a1) and (a4) we also have, for any $v \in T_1$,
$$\|F'(\xi)^{-1} F(v)\| = \|F'(\xi)^{-1}(F(v) - F(\xi))\| = \left\| \int_0^1 F'(\xi)^{-1} F'(\xi + \tau(v - \xi))\, d\tau\, (v - \xi) \right\| \le \int_0^1 \varphi_3(\tau \|v - \xi\|)\, d\tau\, \|v - \xi\|. \qquad (21)$$
We get the estimate, by (9), (12) and (a4),
$$\|F'(\xi)^{-1}([w_0, y_0; F] - F'(\xi))\| \le \varphi_2(\|w_0 - \xi\|, \|y_0 - \xi\|) < 1, \qquad (22)$$
leading, again by the Banach lemma, to
$$\|[w_0, y_0; F]^{-1} F'(\xi)\| \le \frac{1}{1 - \varphi_2(\|w_0 - \xi\|, \|y_0 - \xi\|)}, \qquad (23)$$
so $z_0$ and $x_1$ exist. Then, by (19) (for $v = y_0$), (9), (14) (for $j = 2$), (20), (21), (23), (a3) and the second substep of method (2), we obtain
$$\begin{aligned}
\|z_0 - \xi\| &= \|(y_0 - \xi - F'(y_0)^{-1} F(y_0)) + F'(y_0)^{-1}([w_0, y_0; F] - F'(y_0))\,[w_0, y_0; F]^{-1} F(y_0)\| \\
&\le \left[ \frac{\int_0^1 \varphi((1 - \tau)\|y_0 - \xi\|)\, d\tau}{1 - \varphi_0(\|y_0 - \xi\|)} + \frac{\varphi_1(\|w_0 - \xi\| + \|\xi - y_0\|) \int_0^1 \varphi_3(\tau \|y_0 - \xi\|)\, d\tau}{(1 - \varphi_0(\|y_0 - \xi\|))(1 - \varphi_2(\|w_0 - \xi\|, \|y_0 - \xi\|))} \right] \|y_0 - \xi\| \\
&\le g_2(\|x_0 - \xi\|)\,\|x_0 - \xi\| \le \|x_0 - \xi\| < \rho,
\end{aligned} \qquad (24)$$
showing that $z_0 \in S(\xi, \rho)$ and that (16) holds for $n = 0$. In view of (9), (14) (for $j = 3$), (19) (for $v = z_0$), (20)–(24) and the last substep of method (2), we obtain the estimations
$$\begin{aligned}
\|x_1 - \xi\| &= \|(z_0 - \xi - F'(z_0)^{-1} F(z_0)) + F'(z_0)^{-1}([w_0, y_0; F] - F'(z_0))\,[w_0, y_0; F]^{-1} F(z_0)\| \\
&\le \left[ \frac{\int_0^1 \varphi((1 - \tau)\|z_0 - \xi\|)\, d\tau}{1 - \varphi_0(\|z_0 - \xi\|)} + \frac{\varphi_4(\|w_0 - \xi\| + \|\xi - z_0\|,\, \|y_0 - \xi\| + \|\xi - z_0\|) \int_0^1 \varphi_3(\tau \|z_0 - \xi\|)\, d\tau}{(1 - \varphi_0(\|z_0 - \xi\|))(1 - \varphi_2(\|w_0 - \xi\|, \|y_0 - \xi\|))} \right] \|z_0 - \xi\| \\
&\le g_3(\|x_0 - \xi\|)\,\|x_0 - \xi\| \le \|x_0 - \xi\| < \rho,
\end{aligned} \qquad (25)$$
which completes the induction for estimations (15)–(17) when $n = 0$, and shows $x_1 \in S(\xi, \rho)$. By repeating the previous estimations with $x_m, y_m, z_m, x_{m+1}$ in place of $x_0, y_0, z_0, x_1$, respectively, the induction for items (15)–(17) is completed. Moreover, by the estimation
$$\|x_{m+1} - \xi\| \le \gamma \|x_m - \xi\| < \rho, \qquad \gamma = g_3(\|x_0 - \xi\|) \in [0, 1),$$
we arrive at $\lim_{m \to \infty} x_m = \xi$ and $x_{m+1} \in S(\xi, \rho)$. For the uniqueness part, let $G = \int_0^1 F'(\xi + \tau(\xi_0 - \xi))\, d\tau$ for some $\xi_0 \in T_3$ with $F(\xi_0) = 0$. By (a2) and (a7), we get
$$\|F'(\xi)^{-1}(G - F'(\xi))\| \le \int_0^1 \varphi_0(\tau \|\xi_0 - \xi\|)\, d\tau \le \int_0^1 \varphi_0(\tau \bar{\rho})\, d\tau < 1,$$
so $G^{-1} \in LB(B_2, B_1)$, leading to $\xi_0 = \xi$, where we also used the estimation $0 = F(\xi_0) - F(\xi) = G(\xi_0 - \xi)$. □
Remark 1. 
(a) 
Let $\varphi_0(t) = K_0 t$ and $\varphi(t) = K t$. The radius $r_1 = \frac{2}{2K_0 + K}$ was obtained by Argyros in [1] as the convergence radius for Newton's method under center-Lipschitz and restricted Lipschitz conditions of the type (a2) and (a3). Notice that the convergence radius for Newton's method, given independently by Rheinboldt [23] and Traub [25], is
$$\rho_{TR} = \frac{2}{3K_1} < r_1,$$
where $K_1$ is the Lipschitz constant on $T$, so $K_0 \le K_1$ and $K \le K_1$. As an example, define $f(x) = e^x - 1$ and $T = \bar{S}(0, 1)$. Then, we find $K_0 = e - 1 < K = e^{\frac{1}{e - 1}} < K_1 = e$, so $\rho_{TR} = 0.245253 < r_1 = 0.3827$.
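The two radii above are simple arithmetic and easy to verify; a quick check (ours) for $f(x) = e^x - 1$ on $\bar{S}(0, 1)$:

```python
import math

e = math.e
K0 = e - 1               # center-Lipschitz constant for f(x) = e^x - 1 on S(0, 1)
K = e ** (1 / (e - 1))   # restricted Lipschitz constant
K1 = e                   # classical Lipschitz constant on the whole ball

r1 = 2 / (2 * K0 + K)    # Argyros radius: 2 / (2 K0 + K)
rho_TR = 2 / (3 * K1)    # Rheinboldt/Traub radius: 2 / (3 K1)

assert rho_TR < r1       # the classical radius is strictly smaller
print(round(rho_TR, 6), round(r1, 4))
```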
Moreover, the new error bounds [1,2,3,4] are
$$\|x_{n+1} - \xi\| \le \frac{K}{1 - K_0 \|x_n - \xi\|}\, \|x_n - \xi\|^2,$$
whereas the old ones [12,14] are
$$\|x_{n+1} - \xi\| \le \frac{K_1}{1 - K_1 \|x_n - \xi\|}\, \|x_n - \xi\|^2.$$
Therefore, the new bounds are tighter whenever $K_0 < K_1$ or $K < K_1$. Clearly, we do not expect the radius of convergence $\rho$ of method (2) to be larger than $r_1$.
(b) 
By (a2) and the estimate
$$\|F'(\xi)^{-1} F'(x)\| = \|F'(\xi)^{-1}(F'(x) - F'(\xi)) + I\| \le 1 + \|F'(\xi)^{-1}(F'(x) - F'(\xi))\| \le 1 + \varphi_0(\|x - \xi\|),$$
the condition on $\varphi_3$ can be dropped, and we can simply set
$$\varphi_3(t) = 1 + \varphi_0(t).$$

3. Numerical Examples

We use $[x, y; F] = \int_0^1 F'(y + \tau(x - y))\, d\tau$ and $w$ as given in (3), with $\alpha = \frac{1}{10\|F'(\xi)\|}$ and $\beta = \frac{1}{10\|F'(\xi)\|^2}$. In view of the definition of the divided difference, conditions (A) and estimate (21), we have, for $x \in S(\xi, \rho)$ and $t = \|x - \xi\|$,
$$\begin{aligned}
\|w(x) - \xi\| &\le \|y(x) - \xi\| + |\alpha|\, \|F'(\xi)\|\, \|F'(\xi)^{-1} F(y(x))\| + |\beta|\, \|F'(\xi)\|^2\, \|F'(\xi)^{-1} F(y(x))\|^2 \\
&\le g_1(t)t + \tfrac{1}{10} \int_0^1 \varphi_3(\tau g_1(t)t)\, d\tau\, g_1(t)t + \tfrac{1}{10} \left( \int_0^1 \varphi_3(\tau g_1(t)t)\, d\tau\, g_1(t)t \right)^2 =: \varphi_5(t).
\end{aligned}$$
Then, we can choose the functions $\varphi_i$, $i = 1, 2, 4$, in terms of $\varphi_0$, $\varphi$ and $\varphi_5$ as follows:
$$\varphi_1(t) = \frac{\varphi(\varphi_5(t)) + \varphi(g_1(t)t)}{2}, \qquad \varphi_2(s, t) = \frac{\varphi_0(\varphi_5(s)) + \varphi_0(g_1(t)t)}{2}$$
and
$$\varphi_4(s, t) = \varphi_2(s, t) + \varphi_0(t)$$
in all examples.
Example 1.
Consider $B_1 = B_2 = C[0, 1]$, the space of continuous functions defined on the interval $[0, 1]$, and $T = \bar{S}(0, 1)$. Define $G$ on $T$ by
$$G(\psi)(z) = \psi(z) - 5 \int_0^1 z \tau\, \psi(\tau)^3\, d\tau.$$
Then, the Fréchet derivative is given by
$$G'(\psi)(\zeta)(z) = \zeta(z) - 15 \int_0^1 z \tau\, \psi(\tau)^2 \zeta(\tau)\, d\tau, \quad \text{for each } \zeta \in T.$$
Notice that we can take $\xi = 0$, giving $\varphi(t) = 15t$, $\varphi_0(t) = 7.5t$, $\varphi_3(t) = 15$, $\alpha = \beta = \frac{1}{10}$. This way, we have that
$$\rho_1 = 0.066667, \quad \rho_2 = 0.00097005, \quad \rho = \rho_3 = 0.000510334.$$
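The radius $\rho_1$ can be recovered numerically by locating the smallest positive zero of $\bar{g}_1$. A minimal bisection sketch (ours) using the Example 1 data $\varphi(t) = 15t$, $\varphi_0(t) = 7.5t$:

```python
def bisect(f, a, b, tol=1e-12):
    """Plain bisection; assumes f continuous with f(a) < 0 < f(b)."""
    while b - a > tol:
        m = 0.5 * (a + b)
        if f(m) < 0:
            a = m
        else:
            b = m
    return 0.5 * (a + b)

# Example 1 data: phi(t) = 15 t, phi0(t) = 7.5 t. For linear phi,
# int_0^1 phi((1 - tau) t) dtau = phi(t) / 2, so g1(t) = 7.5 t / (1 - 7.5 t),
# and rho_1 is the smallest positive root of g1(t) - 1 on (0, r0), r0 = 1 / 7.5.
g1 = lambda t: 7.5 * t / (1 - 7.5 * t)
r0 = 1 / 7.5
rho1 = bisect(lambda t: g1(t) - 1, 1e-12, r0 - 1e-9)
print(round(rho1, 6))  # 0.066667, matching the rho_1 reported in Example 1
```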
Example 2.
For the motivational function $h$ represented in Figure 1, we choose $\varphi_0(t) = \varphi(t) = 96.662907t$, $\varphi_3(t) = 1.0631$, $\alpha = \frac{3}{10}$ and $\beta = \frac{9}{10}$. Then, the parameters for method (2) at $\xi = 1$ are
$$\rho_1 = 0.00689682, \quad \rho_2 = 0.00000221412, \quad \rho = \rho_3 = 0.00000121412.$$
Example 3.
Let $B_1 = B_2 = \mathbb{R}^3$, $T = S(0, 1)$, and define $F$ on $T$ by
$$F(x) = F(x_1, x_2, x_3) = \left( e^{x_1} - 1,\; \frac{e - 1}{2} x_2^2 + x_2,\; x_3 \right)^T.$$
For a point $u = (u_1, u_2, u_3)^T$, we find
$$F'(u) = \begin{pmatrix} e^{u_1} & 0 & 0 \\ 0 & (e - 1)u_2 + 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$
Then, for $\xi = (0, 0, 0)^T$ we get $F'(\xi) = \mathrm{diag}(1, 1, 1)$, $\varphi(t) = e^{\frac{1}{e - 1}} t$, $\varphi_0(t) = (e - 1)t$, $\varphi_3(t) = e^{\frac{1}{e - 1}}$, $\alpha = \beta = \frac{1}{10}$.
Then, we obtain that
$$\rho_1 = 0.382692, \quad \rho_2 = 0.96949, \quad \rho = \rho_3 = 0.154419.$$

4. Conclusions

In this article, we extended the applicability of Newton-Traub-like methods to cases not covered before, which required derivatives up to order seven that do not appear in the methods. The price we pay for using conditions only on the first derivative, which actually appears in the method, is that we show only linear convergence. Finding the convergence order is not, however, our intention, since it is already known in the case where the spaces coincide with the multidimensional Euclidean space. Notice that the order is rediscovered by using the ACOC or COC, which require only the first derivative. Moreover, in earlier studies using Taylor series, no computable error distances based on generalized Lipschitz conditions were available. Therefore, we do not know in advance, for example, how many iterates are needed to achieve a predetermined error tolerance. Furthermore, no uniqueness-of-solution results are available in the aforementioned studies, but we also provide such results. Our technique is general enough to extend the applicability of other methods in an analogous way. Finally, notice that local results of this type are important, since they demonstrate the difficulty of choosing initial points.

Author Contributions

Conceptualization, S.R., C.I.A., I.K.A. and S.G.; methodology, S.R., C.I.A., I.K.A. and S.G.; software, S.R., C.I.A., I.K.A. and S.G.; validation, S.R., C.I.A., I.K.A. and S.G.; formal analysis, S.R., C.I.A., I.K.A. and S.G.; investigation, S.R., C.I.A., I.K.A. and S.G.; resources, S.R., C.I.A., I.K.A. and S.G.; data curation, S.R., C.I.A., I.K.A. and S.G.; writing—original draft preparation, S.R., C.I.A., I.K.A. and S.G.; writing—review and editing, S.R., C.I.A., I.K.A. and S.G.; visualization, S.R., C.I.A., I.K.A. and S.G.; supervision, S.R., C.I.A., I.K.A. and S.G.; project administration, S.R., C.I.A., I.K.A. and S.G.; funding acquisition, S.R., C.I.A., I.K.A. and S.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Argyros, I.K. Computational Theory of Iterative Methods; Elsevier: Amsterdam, The Netherlands, 2007; Volume 15. [Google Scholar]
  2. Argyros, I.K.; George, S.; Thapa, N. Mathematical Modeling for the Solution of Equations and Systems of Equations with Applications; Nova Science Publishers: New York, NY, USA, 2018; Volume I. [Google Scholar]
  3. Argyros, I.K.; George, S.; Thapa, N. Mathematical Modeling for the Solution of Equations and Systems of Equations with Applications; Nova Science Publishers: New York, NY, USA, 2018; Volume II. [Google Scholar]
  4. Argyros, I.K.; Cordero, A.; Magreñán, A.A.; Torregrosa, J.R. Third-degree anomalies of Traub’s method. J. Comput. Appl. Math. 2017, 309, 511–521. [Google Scholar] [CrossRef]
  5. Argyros, I.K.; Hilout, S. Weaker conditions for the convergence of Newton's method. J. Complex. 2012, 28, 364–387. [Google Scholar] [CrossRef]
  6. Argyros, I.K.; Hilout, S. Computational Methods in Nonlinear Analysis: Efficient Algorithms, Fixed Point Theory and Applications; World Scientific: Singapore, 2013. [Google Scholar]
  7. Argyros, I.K.; Magreñán, A.A.; Orcos, L.; Sicilia, J.A. Local convergence of a relaxed two-step Newton like method with applications. J. Math. Chem. 2017, 55, 1427–1442. [Google Scholar] [CrossRef]
  8. Argyros, I.K.; Magreñán, Á.A. A Contemporary Study of Iterative Methods; Elsevier (Academic Press): New York, NY, USA, 2018. [Google Scholar]
  9. Argyros, I.K.; Magreñán, Á.A. Iterative Methods and Their Dynamics with Applications; CRC Press: New York, NY, USA, 2017. [Google Scholar]
  10. Chicharro, F.I.; Cordero, A.; Garrido, N.; Torregrosa, J.R. A new efficient parametric family of iterative methods for solving nonlinear systems. J. Differ. Equ. Appl. 2019. [Google Scholar] [CrossRef]
  11. Cordero, A.; Torregrosa, J.R. Low-complexity root-finding iteration functions with no derivatives of any order of convergence. J. Comput. Appl. Math. 2015, 275, 502–515. [Google Scholar] [CrossRef]
  12. Cordero, A.; Torregrosa, J.R.; Vindel, P. Study of the dynamics of third-order iterative methods on quadratic polynomials. Int. J. Comput. Math. 2012, 89, 1826–1836. [Google Scholar] [CrossRef]
  13. Ezquerro, J.A.; Hernández-Verón, M.A. How to improve the domain of starting points for Steffensen's method. Stud. Appl. Math. 2014, 132, 354–380. [Google Scholar] [CrossRef]
  14. Ezquerro, J.A.; Hernández-Verón, M.A. Majorizing sequences for nonlinear Fredholm Hammerstein integral equations. Stud. Appl. Math. 2018, 140, 270–297. [Google Scholar] [CrossRef]
  15. Ezquerro, J.A.; Grau-Sánchez, M.; Hernández-Verón, M.A.; Noguera, M. A family of iterative methods that uses divided differences of first and second orders. Numer. Algorithms 2015, 70, 571–589. [Google Scholar] [CrossRef]
  16. Ezquerro, J.A.; Hernández-Verón, M.A.; Velasco, A.I. An analysis of the semilocal convergence for Secant-like methods. Appl. Math. Comput. 2015, 266, 883–892. [Google Scholar] [CrossRef]
  17. Hernández-Verón, M.A.; Martínez, E.; Teruel, C. Semilocal convergence of a k-step iterative process and its application for solving a special kind of conservative problems. Numer. Algorithms 2017, 76, 309–331. [Google Scholar] [CrossRef]
  18. Kantorovich, L.V.; Akilov, G.P. Functional Analysis, 2nd ed.; Pergamon Press: Oxford, UK, 1982. [Google Scholar]
  19. Magreñán, Á.A.; Cordero, A.; Gutiérrez, J.M.; Torregrosa, J.R. Real qualitative behavior of a fourth-order family of iterative methods by using the convergence plane. Math. Comput. Simul. 2014, 105, 49–61. [Google Scholar]
  20. Magreñán, A.A.; Argyros, I.K. Improved convergence analysis for Newton-like methods. Numer. Algorithms 2016, 71, 811–826. [Google Scholar] [CrossRef]
  21. Magreñán, A.A.; Argyros, I.K. Two-step Newton methods. J. Complex. 2014, 30, 533–553. [Google Scholar] [CrossRef]
  22. Potra, F.A.; Pták, V. Nondiscrete Induction and Iterative Processes; Pitman Advanced Publishing Program: Boston, MA, USA, 1984; Volume 103. [Google Scholar]
  23. Rheinboldt, W.C. An Adaptive Continuation Process for Solving Systems of Nonlinear Equations; Polish Academy of Science, Banach Ctr. Publ.: Warsaw, Poland, 1978; pp. 129–142. [Google Scholar]
  24. Ren, H.; Argyros, I.K. On the convergence of King-Werner-type methods of order free of derivatives. Appl. Math. Comput. 2015, 256, 148–159. [Google Scholar]
  25. Traub, J.F. Iterative Methods for the Solution of Equations; American Mathematical Soc.: Providence, RI, USA, 1982. [Google Scholar]
  26. Amat, S.; Argyros, I.K.; Busquier, S.; Hernández-Verón, M.A. On two high-order families of frozen Newton-type methods. Numer. Linear Algebra Appl. 2018, 25, e2126. [Google Scholar] [CrossRef]
  27. Amat, S.; Bermúdez, C.; Hernández-Verón, M.A.; Martínez, E. On an efficient k-step iterative method for nonlinear equations. J. Comput. Appl. Math. 2016, 302, 258–271. [Google Scholar] [CrossRef]
  28. Amat, S.; Busquier, S.; Bermúdez, C.; Plaza, S. On two families of high order Newton type methods. Appl. Math. Lett. 2012, 25, 2209–2217. [Google Scholar] [CrossRef]
Figure 1. Plot of motivational function.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.