Article

Ball Comparison between Three Sixth Order Methods for Banach Space Valued Operators

by Ramandeep Behl 1, Ioannis K. Argyros 2 and Jose Antonio Tenreiro Machado 3,*

1 Department of Mathematics, King Abdulaziz University, Jeddah 21589, Saudi Arabia
2 Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
3 Department of Electrical Engineering, ISEP-Institute of Engineering, Polytechnic of Porto, 431, 4249-015 Porto, Portugal
* Author to whom correspondence should be addressed.
Submission received: 8 January 2020 / Revised: 22 April 2020 / Accepted: 23 April 2020 / Published: 28 April 2020

Abstract: Three methods of sixth order convergence are tackled for approximating the solution of an equation defined on the finite-dimensional Euclidean space. The convergence analysis of these methods requires the existence of derivatives of order at least seven, although only derivatives of order one are involved in the methods. Moreover, no estimates on the error distances or conclusions about the uniqueness of the solution in any domain are given, and the convergence domain is not sufficiently large. Hence, these methods have limited usage. This paper introduces a new technique, on a general Banach space setting, based only on the first derivative and Lipschitz-type conditions, that allows the study of the convergence. In addition, we find usable error distances as well as uniqueness of the solution. A comparison between the convergence balls of the three methods, not possible to derive with the previous approaches, is also given. The technique can be applied to other methods available in the literature, improving, consequently, their applicability. Several numerical examples compare these methods and illustrate the convergence criteria.
MSC:
47J25; 49M15; 65G99; 65H10

1. Introduction

Let $F:\Omega\subseteq X\to Y$ be a Fréchet differentiable operator, where $X$, $Y$ are two Banach spaces and $\Omega\subseteq X$ is open, convex, and non-void. To solve $F(x)=0$, we study the local convergence of the following three-step methods, defined for $\sigma=0,1,2,\ldots$, as
$$\begin{aligned}
y_\sigma &= x_\sigma - \tfrac{2}{3}\,F'(x_\sigma)^{-1}F(x_\sigma), \\
z_\sigma &= x_\sigma - \tfrac{1}{2}\left[I + 2F'(x_\sigma)\big(3F'(y_\sigma)-F'(x_\sigma)\big)^{-1}\right]F'(x_\sigma)^{-1}F(x_\sigma), \\
x_{\sigma+1} &= z_\sigma - 2\big(3F'(y_\sigma)-F'(x_\sigma)\big)^{-1}F(z_\sigma),
\end{aligned} \tag{1}$$
$$\begin{aligned}
y_\sigma &= x_\sigma - \tfrac{2}{3}\,F'(x_\sigma)^{-1}F(x_\sigma), \\
z_\sigma &= x_\sigma - \tfrac{1}{2}\left[I + 2F'(x_\sigma)\big(3F'(y_\sigma)-F'(x_\sigma)\big)^{-1}\right]F'(x_\sigma)^{-1}F(x_\sigma), \\
x_{\sigma+1} &= z_\sigma - \tfrac{1}{4}\left[I + 2F'(x_\sigma)\big(3F'(y_\sigma)-F'(x_\sigma)\big)^{-1}\right]^{2}F'(x_\sigma)^{-1}F(z_\sigma),
\end{aligned} \tag{2}$$
and
$$\begin{aligned}
y_\sigma &= x_\sigma - F'(x_\sigma)^{-1}F(x_\sigma), \\
z_\sigma &= y_\sigma + \tfrac{1}{3}\left[F'(x_\sigma)^{-1} + 2\big(F'(x_\sigma)-3F'(y_\sigma)\big)^{-1}\right]F(x_\sigma), \\
x_{\sigma+1} &= z_\sigma + \tfrac{1}{3}\left[4\big(F'(x_\sigma)-3F'(y_\sigma)\big)^{-1} - F'(x_\sigma)^{-1}\right]F(z_\sigma).
\end{aligned} \tag{3}$$
The application of $F(x)=0$ is discussed in the standard books [1,2,3,4]. The definition of the Fréchet derivative can be found, for example, in [5]. These methods use, per iteration, two operator evaluations, two Fréchet derivative evaluations, and two inversions of linear operators. The sixth order of convergence of methods (1), (2), and (3) was established in Cordero et al. [6], Soleymani et al. [7], and Esmaeili and Ahmadi [8], respectively. The conclusions were obtained for the special case $X=Y=\mathbb{R}^i$, using Taylor series with hypotheses reaching up to the seventh derivative, even though such derivatives do not appear in the methods. These hypotheses thus restrict the applicability of the methods. Let us consider a motivational example. We define the following function $F$ on $X=Y=\mathbb{R}$ and $D=\left[-\tfrac{1}{2},\tfrac{3}{2}\right]$:
$$F(\kappa)=\begin{cases} \kappa^{3}\ln\kappa^{2}+\kappa^{5}-\kappa^{4}, & \kappa\neq 0,\\ 0, & \kappa=0, \end{cases}$$
which leads to
$$F'(\kappa)=3\kappa^{2}\ln\kappa^{2}+5\kappa^{4}-4\kappa^{3}+2\kappa^{2},$$
$$F''(\kappa)=6\kappa\ln\kappa^{2}+20\kappa^{3}-12\kappa^{2}+10\kappa,$$
$$F'''(\kappa)=6\ln\kappa^{2}+60\kappa^{2}-24\kappa+22.$$
We note that $F'''(\kappa)$ is not bounded on $D$. Therefore, results requiring the existence of $F'''(\kappa)$ or higher derivatives cannot be applied to study the convergence of methods (1)–(3). Moreover, no computable error bounds on $\|x_\sigma-x^*\|$, where $x^*$ solves the equation $F(x)=0$, nor any information regarding the uniqueness of the solution, are provided using Lipschitz-type functions. Similar problems can be found in [9,10,11,12,13,14,15]. Furthermore, the convergence criteria of these works cannot be compared, since they are based on different hypotheses. We address all these problems by using only the first derivative. Moreover, we rely on the computational order of convergence (COC) or the approximated computational order of convergence (ACOC) [16,17,18] to determine the order numerically, without requiring derivatives of order higher than one. The new technique uses the same set of conditions for all three methods. Furthermore, it can also be used to extend the applicability of other methods along the same lines.
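To make this concrete, the following minimal sketch (in Python, our choice here; the paper's own computations use Mathematica 9) applies method (1) to the motivational function, using only first derivatives. The starting point 1.3 and the iteration count are illustrative assumptions; in double precision the error collapses to machine level after one or two steps, so observing the sixth order numerically requires multiple precision arithmetic, as used in Section 3.

```python
import math

def F(k):
    # motivational example: F(k) = k^3 ln k^2 + k^5 - k^4, F(0) = 0, root k* = 1
    return k**3 * math.log(k**2) + k**5 - k**4 if k != 0.0 else 0.0

def dF(k):
    return 3*k**2 * math.log(k**2) + 5*k**4 - 4*k**3 + 2*k**2

def cm_step(x):
    """One iteration of method (1) in the scalar case."""
    y = x - (2.0/3.0) * F(x) / dF(x)
    A = 3.0*dF(y) - dF(x)                    # the operator 3F'(y) - F'(x)
    z = x - 0.5 * (1.0 + 2.0*dF(x)/A) * F(x) / dF(x)
    return z - 2.0 * F(z) / A

x = 1.3        # illustrative start; the guaranteed ball of Table 1 is far smaller
for s in range(4):
    x = cm_step(x)
    print(s + 1, abs(x - 1.0))
```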
Local convergence results are important because they demonstrate the degree of difficulty in choosing initial points within the so-called convergence ball, that is, the region from which we can pick initial points ensuring the convergence of the iterative method. In general, the convergence ball is small and, furthermore, decreases as the convergence order of the method increases. Therefore, it is very important to extend the radius of the convergence ball without imposing additional hypotheses that may limit the applicability of the method.
This is the main motivation for this paper, which accomplishes this objective under weaker hypotheses than previous works. It must be noted that the number of iterations required to achieve a certain error tolerance is a distinct issue. This information is also provided, as is the uniqueness of the solution, neither of which is clearly addressed in previous works. In fact, when applying the previous methods, we do not have sufficient information for making an educated guess about the convergence ball from which the initial point must be picked. Therefore, with those methods, an initial point may, or may not, result in convergence.
The rest of the paper includes the following sections. Section 2 analyzes the local convergence of the proposed technique. Section 3 discusses several numerical experiments. Section 4 presents the conclusions.

2. Local Convergence

Let us introduce some real functions and parameters to be used in the local convergence analysis.
Suppose that the equation
$$w_0(\zeta)=1$$
has a minimal positive solution $\rho_0$, where $w_0:I\to I$ is continuous and increasing with $w_0(0)=0$, and $I=[0,\infty)$. Consider functions $w:I_0\to I$ and $v:I_0\to I$, continuous and increasing with $w(0)=0$, where $I_0=[0,\rho_0)$.
Suppose that
$$\frac{v(0)}{3}-1<0. \tag{6}$$
Define functions $g_1$ and $h_1$ on $I_0$ as follows:
$$g_1(\zeta)=\frac{\int_0^1 w\big((1-\theta)\zeta\big)\,d\theta+\frac{1}{3}\int_0^1 v(\theta\zeta)\,d\theta}{1-w_0(\zeta)}, \qquad h_1(\zeta)=g_1(\zeta)-1.$$
By (6) and these definitions, we have $h_1(0)=\frac{v(0)}{3}-1<0$ and $h_1(\zeta)\to\infty$ as $\zeta\to\rho_0^-$. Denote by $r_1$ the minimal solution of the equation $h_1(\zeta)=0$ in the interval $(0,\rho_0)$, whose existence is assured by the intermediate value theorem.
Suppose that the equation
$$p(\zeta)=1$$
has a minimal positive solution $\rho_p$, where
$$p(\zeta)=\frac{1}{2}\left[3\,w_0\big(g_1(\zeta)\zeta\big)+w_0(\zeta)\right].$$
Set $I_1=[0,\rho_1)$, where $\rho_1:=\min\{\rho_0,\rho_p\}$. Define functions $g_2$ and $h_2$ on the interval $I_1$ by
$$g_2(\zeta)=\frac{\int_0^1 w\big((1-\theta)\zeta\big)\,d\theta}{1-w_0(\zeta)}+\frac{3\left[w_0\big(g_1(\zeta)\zeta\big)+w_0(\zeta)\right]\int_0^1 v(\theta\zeta)\,d\theta}{4\big(1-p(\zeta)\big)\big(1-w_0(\zeta)\big)}, \qquad h_2(\zeta)=g_2(\zeta)-1.$$
We get again $h_2(0)=-1$ and $h_2(\zeta)\to\infty$ as $\zeta\to\rho_1^-$. Denote by $r_2$ the smallest solution of the equation $h_2(\zeta)=0$ in the interval $(0,\rho_1)$.
Suppose that the equation
$$w_0\big(g_2(\zeta)\zeta\big)=1$$
has a minimal positive solution $\rho_2$.
Set $I_2:=[0,\rho)$, where $\rho=\min\{\rho_1,\rho_2\}$. Next, define functions $g_3$ and $h_3$ on the interval $I_2$ by
$$g_3(\zeta)=\left[\frac{\int_0^1 w\big((1-\theta)g_2(\zeta)\zeta\big)\,d\theta}{1-w_0\big(g_2(\zeta)\zeta\big)}+\frac{\left[3\,w_0\big(g_1(\zeta)\zeta\big)+2\,w_0\big(g_2(\zeta)\zeta\big)+w_0(\zeta)\right]\int_0^1 v\big(\theta g_2(\zeta)\zeta\big)\,d\theta}{2\big(1-w_0\big(g_2(\zeta)\zeta\big)\big)\big(1-p(\zeta)\big)}\right]g_2(\zeta), \qquad h_3(\zeta)=g_3(\zeta)-1.$$
We obtain $h_3(0)=-1$ and $h_3(\zeta)\to\infty$ as $\zeta\to\rho^-$. Denote by $r_3$ the minimal solution of the equation $h_3(\zeta)=0$ in the interval $(0,\rho)$. Define a radius of convergence $r$ by
$$r=\min\{r_j\},\quad j=1,2,3. \tag{9}$$
It follows that, for all $\zeta\in I_3:=[0,r)$,
$$0\le w_0(\zeta)<1, \tag{10}$$
$$0\le w_0\big(g_2(\zeta)\zeta\big)<1, \tag{11}$$
$$0\le p(\zeta)<1, \tag{12}$$
$$0\le g_j(\zeta)<1. \tag{13}$$
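Since each radius $r_j$ is simply the first positive root of the scalar equation $h_j(\zeta)=0$, the radii can be computed by one-dimensional root-finding once $w_0$, $w$, and $v$ are known. The following sketch (Python with SciPy, an assumption of ours; any quadrature and bracketing routine would do) assembles $g_1$, $p$, $g_2$, and $g_3$ exactly as defined above and scans for the first sign change of each $h_j$:

```python
from scipy.integrate import quad
from scipy.optimize import brentq

def radii(w0, w, v, rho0, n=2000):
    """Minimal positive roots r1, r2, r3 of h_j = g_j - 1 on (0, rho0),
    with g1, p, g2, g3 assembled exactly as defined above."""
    Iw = lambda t: quad(lambda s: w((1 - s) * t), 0, 1)[0]   # int_0^1 w((1-th)t) dth
    Iv = lambda t: quad(lambda s: v(s * t), 0, 1)[0]         # int_0^1 v(th*t) dth

    def g1(t):
        return (Iw(t) + Iv(t) / 3.0) / (1.0 - w0(t))

    def p(t):
        return 0.5 * (3.0 * w0(g1(t) * t) + w0(t))

    def g2(t):
        return (Iw(t) / (1.0 - w0(t))
                + 3.0 * (w0(g1(t) * t) + w0(t)) * Iv(t)
                / (4.0 * (1.0 - p(t)) * (1.0 - w0(t))))

    def g3(t):
        s = g2(t) * t
        return (Iw(s) / (1.0 - w0(s))
                + (3.0 * w0(g1(t) * t) + 2.0 * w0(s) + w0(t)) * Iv(s)
                / (2.0 * (1.0 - w0(s)) * (1.0 - p(t)))) * g2(t)

    out = []
    for g in (g1, g2, g3):
        h = lambda t, g=g: g(t) - 1.0
        root, prev = None, rho0 * 1e-9       # h(0+) < 0 is guaranteed by (6)
        for k in range(1, n):                # scan for the first sign change
            t = rho0 * k / n
            try:
                if h(prev) < 0.0 <= h(t):
                    root = brentq(h, prev, t)
                    break
            except ZeroDivisionError:
                break
            prev = t
        out.append(root)
    return out
```

The helper `radii` is reused in Example 1 of Section 3 to reproduce the corresponding row of Table 1.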
The hypotheses $(A_i)$, $i=1,\ldots,5$, used in the local convergence analysis of all three methods are:
(A1) $F:\Omega\subseteq X\to Y$ is Fréchet differentiable, and there exists $x^*\in\Omega$ with $F(x^*)=0$ and $F'(x^*)^{-1}\in\mathcal{L}(Y,X)$.
(A2) There exists a function $w_0:I\to I$, continuous and increasing with $w_0(0)=0$, such that for each $x\in\Omega$
$$\left\|F'(x^*)^{-1}\big(F'(x)-F'(x^*)\big)\right\|\le w_0(\|x-x^*\|).$$
Set $\Omega_0=\Omega\cap U(x^*,\rho_0)$.
(A3) There exist functions $w:I_0\to I$ and $v:I_0\to I$, continuous and increasing with $w(0)=0$, such that for each $x,y\in\Omega_0$
$$\left\|F'(x^*)^{-1}\big(F'(x)-F'(y)\big)\right\|\le w(\|x-y\|)$$
and
$$\left\|F'(x^*)^{-1}F'(x)\right\|\le v(\|x-x^*\|).$$
(A4) $\bar{U}(x^*,r)\subseteq\Omega$, where $\rho_0$, $\rho_p$, and $\rho_2$ are defined in the previous expressions and $r$ is given by (9).
(A5) There exists $r^*\ge r$ such that
$$\int_0^1 w_0(\theta r^*)\,d\theta<1.$$
Set $\Omega_1=\Omega\cap\bar{U}(x^*,r^*)$.
Next, we provide the local convergence analysis of method (1) using the hypotheses ( A ) and the aforementioned symbols.
Theorem 1.
Suppose that the hypotheses $(A)$ hold. Then, starting from any $x_0\in U(x^*,r)\setminus\{x^*\}$, the sequence $\{x_\sigma\}$ generated by method (1) is well defined, remains in $U(x^*,r)$ for each $\sigma=0,1,2,\ldots$, and $\lim_{\sigma\to\infty}x_\sigma=x^*$. Moreover, the following error estimates hold:
$$\|y_\sigma-x^*\|\le g_1(\|x_\sigma-x^*\|)\,\|x_\sigma-x^*\|\le\|x_\sigma-x^*\|<r, \tag{14}$$
$$\|z_\sigma-x^*\|\le g_2(\|x_\sigma-x^*\|)\,\|x_\sigma-x^*\|\le\|x_\sigma-x^*\|, \tag{15}$$
$$\|x_{\sigma+1}-x^*\|\le g_3(\|x_\sigma-x^*\|)\,\|x_\sigma-x^*\|\le\|x_\sigma-x^*\|, \tag{16}$$
where the functions $g_j$ are given previously and the radius $r$ is defined by (9). Furthermore, $x^*$ is the only solution of the equation $F(x)=0$ in the set $\Omega_1$ given in $(A_5)$.
Proof. 
Estimates (14)–(16) are shown by mathematical induction. Using (9) and (10), $(A_1)$, and $(A_2)$, we have, for all $x\in U(x^*,r)$,
$$\left\|F'(x^*)^{-1}\big(F'(x)-F'(x^*)\big)\right\|\le w_0(\|x-x^*\|)\le w_0(r)<1. \tag{17}$$
By the Banach lemma on invertible operators [5,19,20,21] and expression (17), $F'(x)^{-1}\in\mathcal{L}(Y,X)$ with
$$\left\|F'(x)^{-1}F'(x^*)\right\|\le\frac{1}{1-w_0(\|x-x^*\|)}. \tag{18}$$
Then, $y_0$ is well defined by the first substep of method (1). By $(A_1)$ and $(A_3)$, we can write
$$F(x_0)=F(x_0)-F(x^*)=\int_0^1 F'\big(x^*+\theta(x_0-x^*)\big)\,d\theta\,(x_0-x^*)$$
and so, by the second hypothesis in $(A_3)$, we have
$$\left\|F'(x^*)^{-1}F(x_0)\right\|=\left\|F'(x^*)^{-1}\int_0^1 F'\big(x^*+\theta(x_0-x^*)\big)\,d\theta\,(x_0-x^*)\right\|\le\int_0^1 v(\theta\|x_0-x^*\|)\,d\theta\,\|x_0-x^*\|. \tag{19}$$
In view of method (1) (for $\sigma=0$), expressions (9) and (13) (for $j=1$), hypothesis $(A_3)$, expression (18) (for $x=x_0$), and (19), we obtain
$$\begin{aligned}
\|y_0-x^*\| &= \left\|x_0-x^*-F'(x_0)^{-1}F(x_0)+\tfrac{1}{3}F'(x_0)^{-1}F(x_0)\right\| \\
&\le \left\|F'(x_0)^{-1}F'(x^*)\right\|\left\|\int_0^1 F'(x^*)^{-1}\Big[F'\big(x^*+\theta(x_0-x^*)\big)-F'(x_0)\Big]\,d\theta\,(x_0-x^*)\right\| + \tfrac{1}{3}\left\|F'(x_0)^{-1}F'(x^*)\right\|\left\|F'(x^*)^{-1}F(x_0)\right\| \\
&\le \frac{\int_0^1 w\big((1-\theta)\|x_0-x^*\|\big)\,d\theta+\frac{1}{3}\int_0^1 v(\theta\|x_0-x^*\|)\,d\theta}{1-w_0(\|x_0-x^*\|)}\,\|x_0-x^*\| = g_1(\|x_0-x^*\|)\,\|x_0-x^*\|\le\|x_0-x^*\|<r, \tag{20}
\end{aligned}$$
so that $y_0\in U(x^*,r)$ and (14) holds for $\sigma=0$.
By expressions (9), (12), and (20), we have
$$\begin{aligned}
\left\|(2F'(x^*))^{-1}\big[3F'(y_0)-F'(x_0)-2F'(x^*)\big]\right\| &\le \tfrac{1}{2}\Big[3\left\|F'(x^*)^{-1}\big(F'(y_0)-F'(x^*)\big)\right\|+\left\|F'(x^*)^{-1}\big(F'(x_0)-F'(x^*)\big)\right\|\Big] \\
&\le \tfrac{1}{2}\big[3\,w_0(\|y_0-x^*\|)+w_0(\|x_0-x^*\|)\big]\le p(\|x_0-x^*\|)\le p(r)<1,
\end{aligned}$$
so that
$$\big(3F'(y_0)-F'(x_0)\big)^{-1}\in\mathcal{L}(Y,X)$$
and
$$\left\|\big(3F'(y_0)-F'(x_0)\big)^{-1}F'(x^*)\right\|\le\frac{1}{2\big(1-p(\|x_0-x^*\|)\big)}. \tag{21}$$
Then, $z_0$ is well defined by the second substep of method (1) for $\sigma=0$, from which we can write
$$\begin{aligned}
z_0-x^* &= \big(x_0-x^*-F'(x_0)^{-1}F(x_0)\big)+\tfrac{1}{2}\left[I-2F'(x_0)\big(3F'(y_0)-F'(x_0)\big)^{-1}\right]F'(x_0)^{-1}F(x_0) \\
&= \big(x_0-x^*-F'(x_0)^{-1}F(x_0)\big)+\tfrac{3}{2}\big(F'(y_0)-F'(x_0)\big)\big(3F'(y_0)-F'(x_0)\big)^{-1}F'(x_0)^{-1}F(x_0).
\end{aligned}$$
Hence, by expressions (9), (13) (for $j=2$), and (19)–(21), we obtain
$$\begin{aligned}
\|z_0-x^*\| &\le \left[\frac{\int_0^1 w\big((1-\theta)\|x_0-x^*\|\big)\,d\theta}{1-w_0(\|x_0-x^*\|)}+\frac{3\big[w_0(\|y_0-x^*\|)+w_0(\|x_0-x^*\|)\big]\int_0^1 v(\theta\|x_0-x^*\|)\,d\theta}{4\big(1-w_0(\|x_0-x^*\|)\big)\big(1-p(\|x_0-x^*\|)\big)}\right]\|x_0-x^*\| \\
&\le g_2(\|x_0-x^*\|)\,\|x_0-x^*\|\le\|x_0-x^*\|<r.
\end{aligned}$$
Thus, $z_0\in U(x^*,r)$ and expression (15) holds for $\sigma=0$.
In view of method (1) for $\sigma=0$, $x_1$ is well defined ($F'(z_0)^{-1}\in\mathcal{L}(Y,X)$, by (18) for $x=z_0$). Then, we can write
$$x_1-x^*=\big(z_0-x^*-F'(z_0)^{-1}F(z_0)\big)+\left[F'(z_0)^{-1}-2\big(3F'(y_0)-F'(x_0)\big)^{-1}\right]F(z_0),$$
which further yields
$$\begin{aligned}
\|x_1-x^*\| &\le \left[\frac{\int_0^1 w\big((1-\theta)\|z_0-x^*\|\big)\,d\theta}{1-w_0(\|z_0-x^*\|)}+\frac{\big[3\,w_0(\|y_0-x^*\|)+2\,w_0(\|z_0-x^*\|)+w_0(\|x_0-x^*\|)\big]\int_0^1 v(\theta\|z_0-x^*\|)\,d\theta}{2\big(1-w_0(\|z_0-x^*\|)\big)\big(1-p(\|x_0-x^*\|)\big)}\right]\|z_0-x^*\| \\
&\le g_3(\|x_0-x^*\|)\,\|x_0-x^*\|\le\|x_0-x^*\|<r,
\end{aligned}$$
so that $x_1\in U(x^*,r)$ and expression (16) holds for $\sigma=0$. Thus far, we have shown that estimates (14)–(16) hold for $\sigma=0$. If we simply replace $x_0$, $y_0$, $z_0$, and $x_1$ by $x_m$, $y_m$, $z_m$, and $x_{m+1}$ ($m=1,2,\ldots$) in the preceding computations, then we obtain
$$\begin{aligned}
\|y_{m+1}-x^*\| &\le g_1(\|x_{m+1}-x^*\|)\,\|x_{m+1}-x^*\|\le\|x_{m+1}-x^*\|<r, \\
\|z_{m+1}-x^*\| &\le g_2(\|x_{m+1}-x^*\|)\,\|x_{m+1}-x^*\|\le\|x_{m+1}-x^*\|<r, \\
\|x_{m+2}-x^*\| &\le g_3(\|x_{m+1}-x^*\|)\,\|x_{m+1}-x^*\|<r.
\end{aligned}$$
By the estimate
$$\|x_{m+1}-x^*\|\le c\,\|x_m-x^*\|<r, \qquad c=g_3(\|x_0-x^*\|)\in[0,1),$$
we deduce that $\lim_{m\to\infty}x_m=x^*$, with $x_{m+1}\in U(x^*,r)$. For the uniqueness part, consider $y^*\in\Omega_1$ with $F(y^*)=0$ and set
$$S=\int_0^1 F'\big(x^*+\theta(y^*-x^*)\big)\,d\theta.$$
By $(A_2)$ and $(A_5)$, we obtain
$$\left\|F'(x^*)^{-1}\big(S-F'(x^*)\big)\right\|\le\int_0^1 w_0(\theta\|y^*-x^*\|)\,d\theta\le\int_0^1 w_0(\theta r^*)\,d\theta<1,$$
so that $S^{-1}\in\mathcal{L}(Y,X)$. Then, $x^*=y^*$ follows from the identity $0=F(y^*)-F(x^*)=S(y^*-x^*)$. □
Secondly, for method (2), the conclusion of Theorem 1 holds, but $r$ is replaced by
$$r^{(2)}=\min\{r_1,r_2,r_3^{(2)}\},$$
where $r_3^{(2)}$ is the minimal positive solution of the equation $h_3^{(2)}(\zeta)=0$, with $h_3^{(2)}(\zeta)=g_3^{(2)}(\zeta)-1$ and
$$g_3^{(2)}(\zeta)=\left[1+\frac{q(\zeta)\int_0^1 v\big(\theta g_2(\zeta)\zeta\big)\,d\theta}{4\big(1-w_0(\zeta)\big)}\right]g_2(\zeta),$$
where
$$q(\zeta)=\frac{3\,w_0\big(g_1(\zeta)\zeta\big)+w_0(\zeta)+4}{2\big(1-p(\zeta)\big)}.$$
Notice also that $g_1$, $h_1$, $g_2$, $h_2$, $r_1$, and $r_2$ are the same as in Theorem 1. The functions $g_3^{(2)}$, $h_3^{(2)}$, and $q$ appear due to the estimates
$$\begin{aligned}
\left\|I+2F'(x_\sigma)\big(3F'(y_\sigma)-F'(x_\sigma)\big)^{-1}\right\| &= \left\|\big[3F'(y_\sigma)-F'(x_\sigma)+2F'(x_\sigma)\big]\big(3F'(y_\sigma)-F'(x_\sigma)\big)^{-1}\right\| \\
&= \left\|\big[3\big(F'(y_\sigma)-F'(x^*)\big)+\big(F'(x_\sigma)-F'(x^*)\big)+4F'(x^*)\big]\big(3F'(y_\sigma)-F'(x_\sigma)\big)^{-1}\right\| \\
&\le \frac{3\,w_0(\|y_\sigma-x^*\|)+w_0(\|x_\sigma-x^*\|)+4}{2\big(1-p(\|x_\sigma-x^*\|)\big)}=q(\|x_\sigma-x^*\|)
\end{aligned}$$
and
$$\begin{aligned}
\|x_{\sigma+1}-x^*\| &\le \|z_\sigma-x^*\|+\tfrac{1}{4}\left\|I+2F'(x_\sigma)\big(3F'(y_\sigma)-F'(x_\sigma)\big)^{-1}\right\|\left\|F'(x_\sigma)^{-1}F(z_\sigma)\right\| \\
&\le \left[1+\frac{q(\|x_\sigma-x^*\|)\int_0^1 v(\theta\|z_\sigma-x^*\|)\,d\theta}{4\big(1-w_0(\|x_\sigma-x^*\|)\big)}\right]\|z_\sigma-x^*\| \le g_3^{(2)}(\|x_\sigma-x^*\|)\,\|x_\sigma-x^*\|\le\|x_\sigma-x^*\|<r^{(2)}.
\end{aligned}$$
Hence, we arrive at the following theorem.
Theorem 2.
Suppose that the conditions $(A)$ hold, but with $r^{(2)}$ and $g_3^{(2)}$ replacing $r$ and $g_3$, respectively. Then, the same conclusions hold for method (2), but with (16) replaced by
$$\|x_{\sigma+1}-x^*\|\le g_3^{(2)}(\|x_\sigma-x^*\|)\,\|x_\sigma-x^*\|\le\|x_\sigma-x^*\|.$$
Finally, for the local convergence of method (3), we introduce the functions
$$\begin{aligned}
g_2^{(3)}(\zeta) &= g_1(\zeta)+\frac{\big[w_0(\zeta)+w_0\big(g_1(\zeta)\zeta\big)\big]\int_0^1 v(\theta\zeta)\,d\theta}{2\big(1-w_0(\zeta)\big)\big(1-p(\zeta)\big)}, \qquad h_2^{(3)}(\zeta)=g_2^{(3)}(\zeta)-1, \\
g_3^{(3)}(\zeta) &= \left[1+\frac{3\big[w_0(\zeta)+w_0\big(g_1(\zeta)\zeta\big)\big]\int_0^1 v\big(\theta g_2^{(3)}(\zeta)\zeta\big)\,d\theta}{2\big(1-w_0(\zeta)\big)\big(1-p(\zeta)\big)}\right]g_2^{(3)}(\zeta), \qquad h_3^{(3)}(\zeta)=g_3^{(3)}(\zeta)-1.
\end{aligned}$$
Let us denote by $r_2^{(3)}$ and $r_3^{(3)}$ the minimal positive solutions of the equations $h_2^{(3)}(\zeta)=0$ and $h_3^{(3)}(\zeta)=0$, respectively. Set
$$r^{(3)}=\min\{r_1,r_2^{(3)},r_3^{(3)}\}.$$
These functions are defined due to the estimates
$$\begin{aligned}
z_\sigma-x^* &= (y_\sigma-x^*)+\tfrac{1}{3}F'(x_\sigma)^{-1}\big[F'(x_\sigma)-3F'(y_\sigma)+2F'(x_\sigma)\big]\big(F'(x_\sigma)-3F'(y_\sigma)\big)^{-1}F(x_\sigma) \\
&= (y_\sigma-x^*)+F'(x_\sigma)^{-1}\big[F'(x_\sigma)-F'(y_\sigma)\big]\big(F'(x_\sigma)-3F'(y_\sigma)\big)^{-1}F(x_\sigma),
\end{aligned}$$
so that
$$\|z_\sigma-x^*\|\le\left[g_1(\|x_\sigma-x^*\|)+\frac{\big[w_0(\|x_\sigma-x^*\|)+w_0(\|y_\sigma-x^*\|)\big]\int_0^1 v(\theta\|x_\sigma-x^*\|)\,d\theta}{2\big(1-w_0(\|x_\sigma-x^*\|)\big)\big(1-p(\|x_\sigma-x^*\|)\big)}\right]\|x_\sigma-x^*\|\le g_2^{(3)}(\|x_\sigma-x^*\|)\,\|x_\sigma-x^*\|\le\|x_\sigma-x^*\|,$$
and
$$\begin{aligned}
x_{\sigma+1}-x^* &= (z_\sigma-x^*)+\tfrac{1}{3}\big(F'(x_\sigma)-3F'(y_\sigma)\big)^{-1}\big[4F'(x_\sigma)-\big(F'(x_\sigma)-3F'(y_\sigma)\big)\big]F'(x_\sigma)^{-1}F(z_\sigma) \\
&= (z_\sigma-x^*)+3\big(F'(x_\sigma)-3F'(y_\sigma)\big)^{-1}\big[F'(x_\sigma)-F'(y_\sigma)\big]F'(x_\sigma)^{-1}F(z_\sigma),
\end{aligned}$$
whence
$$\|x_{\sigma+1}-x^*\|\le\left[1+\frac{3\big[w_0(\|x_\sigma-x^*\|)+w_0(\|y_\sigma-x^*\|)\big]\int_0^1 v(\theta\|z_\sigma-x^*\|)\,d\theta}{2\big(1-w_0(\|x_\sigma-x^*\|)\big)\big(1-p(\|x_\sigma-x^*\|)\big)}\right]\|z_\sigma-x^*\|\le g_3^{(3)}(\|x_\sigma-x^*\|)\,\|x_\sigma-x^*\|\le\|x_\sigma-x^*\|.$$
Theorem 3.
Let us consider hypotheses $(A)$, but with $g_2^{(3)}$, $g_3^{(3)}$, and $r^{(3)}$ replacing $g_2$, $g_3$, and $r$, respectively. Then, the conclusions of Theorem 1 hold for method (3), but with (15) and (16) replaced by
$$\|z_\sigma-x^*\|\le g_2^{(3)}(\|x_\sigma-x^*\|)\,\|x_\sigma-x^*\|\le\|x_\sigma-x^*\|$$
and
$$\|x_{\sigma+1}-x^*\|\le g_3^{(3)}(\|x_\sigma-x^*\|)\,\|x_\sigma-x^*\|\le\|x_\sigma-x^*\|,$$
respectively.

3. Numerical Examples

The theoretical results developed in the previous sections are illustrated numerically in this section. We denote the methods (1)–(3) by CM, SM, and EA, respectively. We consider two real life problems and two standard nonlinear problems, illustrated in Examples 1–4. The results are listed in Table 1, Table 2, Table 4, and Table 5, while Table 3 provides the values of $\psi_i$ and $\varphi_i$ (in radians) used in Example 3. Additionally, we compute the COC, approximated by means of
$$\xi=\frac{\ln\dfrac{\|x_{\sigma+1}-x^*\|}{\|x_\sigma-x^*\|}}{\ln\dfrac{\|x_\sigma-x^*\|}{\|x_{\sigma-1}-x^*\|}}, \quad \text{for } \sigma=1,2,\ldots,$$
or the ACOC [18] by
$$\xi^*=\frac{\ln\dfrac{\|x_{\sigma+1}-x_\sigma\|}{\|x_\sigma-x_{\sigma-1}\|}}{\ln\dfrac{\|x_\sigma-x_{\sigma-1}\|}{\|x_{\sigma-1}-x_{\sigma-2}\|}}, \quad \text{for } \sigma=2,3,\ldots.$$
We adopt $\epsilon=10^{-100}$ as the error tolerance, and the stopping criteria for solving nonlinear systems or scalar equations are (i) $\|x_{\sigma+1}-x_\sigma\|<\epsilon$ and (ii) $\|F(x_\sigma)\|<\epsilon$.
The computations are performed with the package Mathematica 9 using multiple precision arithmetic.
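As a concrete illustration of the ACOC formula, the following sketch (Python with the mpmath library, our assumption here; the paper itself uses Mathematica 9) evaluates $\xi^*$ from the last four iterates of a sequence. Newton's method on $f(x)=x^3-2$ serves as a neutral test, for which $\xi^*$ should approach 2:

```python
from mpmath import mp, mpf, log, fabs

mp.dps = 120  # work beyond the paper's tolerance eps = 10^(-100)

def acoc(x):
    """ACOC xi* computed from the last four iterates of a scalar sequence x."""
    d1 = fabs(x[-1] - x[-2])
    d2 = fabs(x[-2] - x[-3])
    d3 = fabs(x[-3] - x[-4])
    return log(d1 / d2) / log(d2 / d3)

# sanity check with Newton's method on f(x) = x^3 - 2 (quadratic convergence)
x = [mpf(1)]
for _ in range(8):
    t = x[-1]
    x.append(t - (t**3 - 2) / (3 * t**2))
print(acoc(x))  # ~2.0
```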
Example 1.
Following the example presented in the Introduction, for $x^*=1$ we can set
$$w_0(t)=w(t)=96.662907\,t \quad\text{and}\quad v(t)=2.$$
In Table 1, we present the radii for Example 1.
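As a cross-check, feeding these functions to the `radii` helper sketched in Section 2 reproduces the CM row of Table 1 (the constant 96.662907 and $\rho_0=1/96.662907$ come from the example itself; everything else is our illustrative setup):

```python
L = 96.662907
r1, r2, r3 = radii(w0=lambda t: L * t, w=lambda t: L * t,
                   v=lambda t: 2.0, rho0=1.0 / L)
print(r1, r2, r3)  # approx. 0.0022989, 0.0017993, 0.0015625, as in Table 1 (CM)
```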
Example 2.
Let $X=Y=\mathbb{R}^3$ and $\Omega=\bar{S}(0,1)$. Define $F$ on $\Omega$, for $u=(u_1,u_2,u_3)^T$, as
$$F(u)=F(u_1,u_2,u_3)=\left(e^{u_1}-1,\;\frac{e-1}{2}\,u_2^2+u_2,\;u_3\right)^T.$$
The Fréchet derivative is
$$F'(u)=\begin{bmatrix} e^{u_1} & 0 & 0 \\ 0 & (e-1)u_2+1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.$$
Then, for $x^*=(0,0,0)^T$ and $F'(x^*)=F'(x^*)^{-1}=\mathrm{diag}\{1,1,1\}$, we have
$$w_0(t)=(e-1)\,t, \quad w(t)=e^{\frac{1}{e-1}}\,t \quad\text{and}\quad v(t)=e^{\frac{1}{e-1}}.$$
We obtain the convergence radii depicted in Table 2.
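For systems, method (1) is conveniently implemented with linear solves instead of explicit inverses. A minimal sketch for this example follows (NumPy is our choice; the starting point is taken from the CM entry of Table 2, and the max norm is assumed when comparing against the radii):

```python
import numpy as np

def F(u):
    return np.array([np.exp(u[0]) - 1.0,
                     (np.e - 1.0) / 2.0 * u[1]**2 + u[1],
                     u[2]])

def dF(u):
    return np.diag([np.exp(u[0]), (np.e - 1.0) * u[1] + 1.0, 1.0])

def cm_step(x):
    """One pass of method (1), using solves rather than matrix inverses."""
    Fx, Jx = F(x), dF(x)
    u = np.linalg.solve(Jx, Fx)              # F'(x)^{-1} F(x)
    y = x - (2.0 / 3.0) * u
    A = 3.0 * dF(y) - Jx                     # 3F'(y) - F'(x)
    z = x - 0.5 * (u + 2.0 * Jx @ np.linalg.solve(A, u))
    return z - 2.0 * np.linalg.solve(A, F(z))

x = np.full(3, 0.094)                        # x0 from Table 2, row CM
for s in range(4):
    x = cm_step(x)
    print(s + 1, np.abs(x).max())            # x* = 0, so ||x||_inf is the error
```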
Example 3.
The kinematic synthesis problem for steering [22,23] is given as
$$\Big[E_i\big(x_2\sin\psi_i-x_3\big)-F_i\big(x_2\sin\varphi_i-x_3\big)\Big]^2+\Big[F_i\big(x_2\cos\varphi_i+1\big)-E_i\big(x_2\cos\psi_i-1\big)\Big]^2-\Big[x_1\big(x_2\sin\psi_i-x_3\big)\big(x_2\cos\varphi_i+1\big)-x_1\big(x_2\cos\psi_i-x_3\big)\big(x_2\sin\varphi_i-x_3\big)\Big]^2=0, \quad\text{for } i=1,2,3,$$
where
$$E_i=x_3\big(x_2\sin\varphi_i-\sin\varphi_0\big)-x_1\big(x_2\sin\varphi_i-x_3\big)+x_2\big(\cos\varphi_i-\cos\varphi_0\big), \quad i=1,2,3,$$
and
$$F_i=x_3\big(x_2\sin\psi_i+x_2\cos\psi_i+x_3-x_1\big)-\big(x_2\sin\psi_0+x_2\cos\psi_0+x_1-x_3\big), \quad i=1,2,3.$$
In Table 3, we present the values of ψ i and φ i (in radians).
The approximated solution, for $\Omega=\bar{S}(x^*,1)$, is
$$x^*=(0.9051567,\,0.6977417,\,0.6508335)^T.$$
Then, we get
$$w_0(t)=w(t)=3t \quad\text{and}\quad v(t)=2.$$
We provide the radii of convergence for Example 3 in Table 4.
Example 4.
Let $X=Y=C[0,1]$, the space of continuous maps on $[0,1]$ equipped with the max norm, and let $\Omega=\bar{S}(0,1)$. We consider the following operator $\Psi$ on $\Omega$:
$$\Psi(\phi)(x)=\phi(x)-\int_0^1 x\,\tau\,\phi(\tau)^3\,d\tau,$$
which further yields
$$\Psi'(\phi)(\mu)(x)=\mu(x)-3\int_0^1 x\,\tau\,\phi(\tau)^2\,\mu(\tau)\,d\tau, \quad\text{for } \mu\in\Omega.$$
We have $x^*=0$ and
$$w_0(t)=\frac{3}{2}\,t, \quad w(t)=3t \quad\text{and}\quad v(t)=2.$$
We list the radii of convergence for Example 4 in Table 5.
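Although the analysis is carried out directly in $C[0,1]$, a discretized version of $\Psi$ is easy to experiment with. The following sketch (our construction, not from the paper) collocates the operator at Gauss–Legendre nodes and runs plain Newton for brevity; methods (1)–(3) plug into the same discretized $\Psi$ and $\Psi'$ in the obvious way:

```python
import numpy as np

n = 20
tau, wq = np.polynomial.legendre.leggauss(n)     # nodes/weights on [-1, 1]
tau, wq = 0.5 * (tau + 1.0), 0.5 * wq            # shifted to [0, 1]

def Psi(phi):
    # (Psi phi)(x_i) = phi(x_i) - x_i * int tau phi(tau)^3 dtau, at x_i = tau_i
    return phi - tau * np.sum(wq * tau * phi**3)

def dPsi(phi):
    # collocated Frechet derivative: delta_ij - 3 x_i tau_j w_j phi(tau_j)^2
    return np.eye(n) - 3.0 * np.outer(tau, wq * tau * phi**2)

phi = 0.05 * np.ones(n)   # ||phi0||_inf = 0.05, inside the CM ball of Table 5
for s in range(5):
    phi = phi - np.linalg.solve(dPsi(phi), Psi(phi))
    print(s + 1, np.abs(phi).max())              # x* = 0
```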

4. Conclusions

We have introduced a new technique capable of proving convergence relying on hypotheses only on the first derivative (the only one used in these methods), in contrast to earlier studies using hypotheses up to the seventh derivative together with Taylor series. Moreover, the new technique provides usable error estimates for Banach space valued operators. In order to recover the convergence order without using Taylor series, we rely on the COC and ACOC, which require only the first derivative. Four numerical examples compare the radii of the convergence balls for these methods, showing that our results can be used in cases not possible before. The technique can also be used to extend the applicability of other iterative methods involving inverses in an analogous way.

Author Contributions

R.B. and I.K.A.: Conceptualization; Methodology; Validation; Writing—Original Draft Preparation; and Writing—Review and Editing. J.A.T.M.: Validation; Review and Editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, Saudi Arabia, under Grant No. D-540-130-1441.

Acknowledgments

This project was funded by the Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, Saudi Arabia, under Grant No. D-540-130-1441. The authors, therefore, acknowledge with thanks DSR for technical and financial support.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Iliev, A.; Kyurkchiev, N. Nontrivial Methods in Numerical Analysis: Selected Topics in Numerical Analysis; LAP LAMBERT Academic Publishing: Saarbrücken, Germany, 2010; ISBN 978-3-8433-6793-6.
  2. Ostrowski, A.M. Solution of Equations in Euclidean and Banach Spaces; Pure and Applied Mathematics; Academic Press: Cambridge, MA, USA, 1973; Volume 9.
  3. Petković, M.S.; Neta, B.; Petković, L.; Džunić, J. Multipoint Methods for Solving Nonlinear Equations; Elsevier: Amsterdam, The Netherlands, 2013.
  4. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice Hall Series in Automatic Computation; Prentice Hall: Englewood Cliffs, NJ, USA, 1964.
  5. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: Cambridge, MA, USA, 1970.
  6. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. A modified Newton–Jarratt's composition. Numer. Algor. 2010, 55, 87–99.
  7. Soleymani, F.; Lotfi, T.; Bakhtiari, P. A multi-step class of iterative methods for nonlinear systems. Optim. Lett. 2014, 8, 1001–1015.
  8. Esmaeili, H.; Ahmadi, M. An efficient three-step method to solve system of nonlinear equations. Appl. Math. Comput. 2015, 266, 1093–1101.
  9. Amat, S.; Argyros, I.K.; Busquier, S.; Hernández-Verón, M.A. On two high-order families of frozen Newton-type methods. Numer. Linear Algebra Appl. 2018, 25, 1–17.
  10. Amat, S.; Bermúdez, C.; Hernández-Verón, M.A.; Martínez, E. On an efficient k-step iterative method for nonlinear equations. J. Comput. Appl. Math. 2016, 302, 258–271.
  11. Amat, S.; Busquier, S.; Bermúdez, C.; Plaza, S. On two families of high order Newton type methods. Appl. Math. Lett. 2012, 25, 2209–2217.
  12. Artidiello, S.; Cordero, A.; Torregrosa, J.R.; Vassileva, M.P. Multidimensional generalization of iterative methods for solving nonlinear problems by means of weight-function procedure. Appl. Math. Comput. 2015, 268, 1064–1071.
  13. Magreñán, Á.A. Different anomalies in a Jarratt family of iterative root-finding methods. Appl. Math. Comput. 2014, 233, 29–38.
  14. Magreñán, Á.A. A new tool to study real dynamics: The convergence plane. Appl. Math. Comput. 2014, 248, 215–224.
  15. Rheinboldt, W.C. An adaptive continuation process for solving systems of nonlinear equations. Pol. Acad. Sci. 1978, 3, 129–142.
  16. Cordero, A.; Torregrosa, J.R. Variants of Newton's method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698.
  17. Ezquerro, J.A.; Hernández, M.A. New iterations of R-order four with reduced computational cost. BIT Numer. Math. 2009, 49, 325–342.
  18. Weerakoon, S.; Fernando, T.G.I. A variant of Newton's method with accelerated third-order convergence. Appl. Math. Lett. 2000, 13, 87–93.
  19. Argyros, I.K.; George, S. On the complexity of extending the convergence region for Traub's method. J. Complex. 2020, 56.
  20. Argyros, I.K.; Magreñán, Á.A. Iterative Methods and Their Dynamics with Applications; CRC Press: New York, NY, USA, 2017.
  21. Magreñán, Á.A.; Argyros, I.K. A Contemporary Study of Iterative Methods: Convergence, Dynamics and Applications; Academic Press: Cambridge, MA, USA, 2018.
  22. Awawdeh, F. On new iterative method for solving systems of nonlinear equations. Numer. Algor. 2010, 54, 395–409.
  23. Burden, R.L.; Faires, J.D. Numerical Analysis; PWS Publishing Company: Boston, MA, USA, 2001.
Table 1. Radii for Example 1.

| Cases | $r_1$ | $r_2$ | $r_3$ | $r_3^{(2)}$ | $r_2^{(3)}$ | $r_3^{(3)}$ | $r$ | $r^{(2)}$ | $r^{(3)}$ | $x_0$ | $\sigma$ | $\xi$ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| CM | 0.0022989 | 0.0017993 | 0.0015625 | - | - | - | 0.0015625 | - | - | 1.001 | 3 | 6.0000 |
| SM | 0.0022989 | 0.0017993 | - | 0.001022 | - | - | - | 0.001022 | - | 1.0009 | 3 | 6.0000 |
| EA | 0.0022989 | - | - | - | 0.0013240 | 0.00081403 | - | - | 0.00081403 | 1.0008 | 3 | 6.0000 |

(On the basis of the obtained results, we conclude that method CM has the largest radius of convergence.)
Table 2. Radii for Example 2.

| Cases | $r_1$ | $r_2$ | $r_3$ | $r_3^{(2)}$ | $r_2^{(3)}$ | $r_3^{(3)}$ | $r$ | $r^{(2)}$ | $r^{(3)}$ | $x_0$ | $\sigma$ | $\xi$ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| CM | 0.15441 | 0.11011 | 0.096467 | - | - | - | 0.096467 | - | - | (0.094, 0.094, 0.094) | 3 | 6.0000 |
| SM | 0.15441 | 0.11011 | - | 0.065471 | - | - | - | 0.065471 | - | (0.063, 0.063, 0.063) | 3 | 6.0000 |
| EA | 0.15441 | - | - | - | 0.092584 | 0.059581 | - | - | 0.059581 | (0.054, 0.054, 0.054) | 3 | 6.0000 |

(Among the three methods, the largest radius of convergence belongs to method CM.)
Table 3. Values of $\psi_i$ and $\varphi_i$ (in radians) for Example 3.

| $i$ | $\psi_i$ | $\varphi_i$ |
|---|---|---|
| 0 | 1.3954170041747090114 | 1.7461756494150842271 |
| 1 | 1.7444828545735749268 | 2.0364691127919609051 |
| 2 | 2.0656234369405315689 | 2.2390977868265978920 |
| 3 | 2.4600678478912500533 | 2.4600678409809344550 |
Table 4. Radii for Example 3.

| Cases | $r_1$ | $r_2$ | $r_3$ | $r_3^{(2)}$ | $r_2^{(3)}$ | $r_3^{(3)}$ | $r$ | $r^{(2)}$ | $r^{(3)}$ | $x_0$ | $\sigma$ | $\xi$ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| CM | 0.074074 | 0.057977 | 0.050345 | - | - | - | 0.050345 | - | - | (0.945, 0.737, 0.690) | 3 | 6.1328 |
| SM | 0.074074 | 0.057977 | - | 0.032936 | - | - | - | 0.032936 | - | (0.933, 0.726, 0.678) | 3 | 6.1377 |
| EA | 0.074074 | - | - | - | 0.042662 | 0.026229 | - | - | 0.026229 | (0.929, 0.722, 0.674) | 3 | 4.8142 |
Table 5. Radii of convergence for Example 4.

| Cases | $r_1$ | $r_2$ | $r_3$ | $r_3^{(2)}$ | $r_2^{(3)}$ | $r_3^{(3)}$ | $r$ | $r^{(2)}$ | $r^{(3)}$ |
|---|---|---|---|---|---|---|---|---|---|
| CM | 0.111111 | 0.105542 | 0.0922709 | - | - | - | 0.0922709 | - | - |
| SM | 0.111111 | 0.105542 | - | 0.0594758 | - | - | - | 0.0594758 | - |
| EA | 0.111111 | - | - | - | 0.0718454 | 0.0465723 | - | - | 0.0465723 |

(CM has a larger radius of convergence as compared to the other two methods.)
