
Newton-Type Methods on Generalized Banach Spaces and Applications in Fractional Calculus

1 Department of Mathematical Sciences, University of Memphis, Memphis, TN 38152, USA
2 Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
* Author to whom correspondence should be addressed.
Academic Editor: Alicia Cordero
Algorithms 2015, 8(4), 832-849; https://doi.org/10.3390/a8040832
Received: 23 June 2015 / Revised: 13 September 2015 / Accepted: 29 September 2015 / Published: 9 October 2015
(This article belongs to the Special Issue Numerical Algorithms for Solving Nonlinear Equations and Systems)

Abstract

We present a semilocal convergence study of Newton-type methods on a generalized Banach space setting to approximate a locally unique zero of an operator. Earlier studies require that the operator involved is Fréchet differentiable. In the present study we assume that the operator is only continuous. This way we extend the applicability of Newton-type methods to include fractional calculus and problems from other areas. Moreover, under the same or weaker conditions, we obtain weaker sufficient convergence criteria, tighter error bounds on the distances involved and at least as precise information on the location of the solution. Special cases are provided where the old convergence criteria cannot apply but the new criteria can apply to locate zeros of operators. Some applications include fractional calculus involving the Riemann-Liouville fractional integral and the Caputo fractional derivative. Fractional calculus is very important for its applications in many applied sciences.
Keywords: generalized Banach space; Newton-type method; semilocal convergence; Riemann-Liouville fractional integral; Caputo fractional derivative

1. Introduction

We present a semilocal convergence analysis for Newton-type methods on a generalized Banach space setting to approximate a zero of an operator. A generalized norm is defined to be an operator from a linear space into a partially ordered Banach space (as will be elaborated in Section 2). Earlier studies such as [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16] for Newton's method have shown that a more precise convergence analysis is obtained when compared with the real norm theory. However, the main assumption is that the operator involved is Fréchet differentiable. This hypothesis limits the applicability of Newton's method. In the present study we only assume the continuity of the operator, which expands the applicability of these methods. Our approach allows the extension of Newton-type methods to fractional calculus and other areas (see Section 4), which was not possible before (since the operator had to be Fréchet differentiable). Moreover, we obtain the following advantages over the earlier mentioned studies using Newton's method:
(i) Weaker sufficient semilocal convergence criteria.
(ii) Tighter error bounds on the distances involved.
(iii) An at least as precise information on the location of the zero.
Moreover, we show that the advantages (ii) are possible even if our Newton-type methods are reduced to Newton’s method.
Furthermore, the advantages (i)–(iii) are obtained under the same or less computational cost.
Notice that in the recent elegant work by Adly et al. [1], Newton's method has also been generalized in other important directions, for solving inclusions and set-valued approximations. In the classical Banach space setting, though, these results, which rely on nonsmooth analysis and metric regularity, do not provide sufficient convergence criteria, in the local or the semilocal convergence case, that are verifiable using Lipschitz-type constants, as we utilize in the present study. Moreover, computable error bounds on the distances involved are not given, nor are the uniqueness and the location of the solution discussed.
The rest of the paper is organized as follows. Section 2 contains the basic concepts on generalized Banach spaces and auxiliary results on inequalities and fixed points. In Section 3 we present the semilocal convergence analysis of Newton-type methods. Finally, in Section 4 and Section 5, we present special cases and favorable comparisons with earlier results and applications in some areas including fractional calculus.

2. Generalized Banach Spaces

We present some standard concepts that are needed in what follows to make the paper as self-contained as possible. More details on generalized Banach spaces can be found in [5,6,7,14], and the references therein.
Definition 2.1. A generalized Banach space is a triplet $(X, E, /\cdot/)$ such that
(i) $X$ is a linear space over $\mathbb{R}$ ($\mathbb{C}$).
(ii) $E = (E, K, \|\cdot\|)$ is a partially ordered Banach space, i.e.,
(ii$_1$) $(E, \|\cdot\|)$ is a real Banach space,
(ii$_2$) $E$ is partially ordered by a closed convex cone $K$,
(ii$_3$) the norm $\|\cdot\|$ is monotone on $K$.
(iii) The operator $/\cdot/ : X \to K$ satisfies
$$/x/ = 0 \iff x = 0, \qquad /\theta x/ = |\theta|\, /x/,$$
$$/x + y/ \le /x/ + /y/ \quad \text{for each } x, y \in X,\ \theta \in \mathbb{R}\ (\mathbb{C}).$$
(iv) $X$ is a Banach space with respect to the induced norm $\|\cdot\|_i := \|\cdot\| \circ /\cdot/$.
Remark 2.2. The operator $/\cdot/$ is called a generalized norm. In view of (iii) and (ii$_3$), $\|\cdot\|_i$ is a real norm. In the rest of this paper all topological concepts will be understood with respect to this norm.
Let $L(X^j, Y)$ stand for the space of $j$-linear symmetric and bounded operators from $X^j$ to $Y$, where $X$ and $Y$ are Banach spaces. For $X, Y$ partially ordered, $L_+(X^j, Y)$ stands for the subset of monotone operators $P$ such that
$$0 \le a_i \le b_i \implies P(a_1, \ldots, a_j) \le P(b_1, \ldots, b_j).$$
Definition 2.3. The set of bounds for an operator $Q \in L(X, X)$ on a generalized Banach space $(X, E, /\cdot/)$ is defined to be
$$B(Q) := \{ P \in L_+(E, E) : /Q x/ \le P /x/ \ \text{for each } x \in X \}.$$
Let $D \subseteq X$ and $T : D \to D$ be an operator. If $x_0 \in D$, the sequence $\{x_n\}$ given by
$$x_{n+1} := T(x_n) = T^{n+1}(x_0)$$
is well-defined. In case of convergence we write
$$T^\infty(x_0) := \lim T^n(x_0) = \lim_{n \to \infty} x_n.$$
We need some auxiliary results on inequalities.
Lemma 2.4. 
Let $(E, K, \|\cdot\|)$ be a partially ordered Banach space, $\xi \in K$ and $M, N \in L_+(E, E)$.
(i) Suppose there exists $r \in K$ such that
$$R(r) := (M + N) r + \xi \le r \quad (2.5)$$
and
$$(M + N)^k r \to 0 \ \text{as} \ k \to \infty. \quad (2.6)$$
Then, $b := R^\infty(0)$ is well-defined, satisfies the equation $t = R(t)$ and is smaller than any solution of the inequality $R(s) \le s$.
(ii) Suppose there exist $q \in K$ and $\theta \in (0, 1)$ such that $R(q) \le \theta q$. Then there exists $r \le q$ satisfying (i).
Proof. 
(i) Define the sequence $\{b_n\}$ by $b_n = R^n(0)$. Then, we have by Equation (2.5) that $b_1 = R(0) = \xi \le r$, so $b_1 \le r$. Suppose that $b_k \le r$ for each $k = 1, 2, \ldots, n$. Then, we have by Equation (2.5) and the inductive hypothesis that $b_{n+1} = R^{n+1}(0) = R(R^n(0)) = R(b_n) = (M + N) b_n + \xi \le (M + N) r + \xi \le r$, so $b_{n+1} \le r$. Hence, the sequence $\{b_n\}$ is bounded above by $r$. Set $P_n = b_{n+1} - b_n$. We shall show that
$$P_n \le (M + N)^n r \quad \text{for each } n = 1, 2, \ldots \quad (2.7)$$
We have by the definition of $P_n$ and Equation (2.6) that
$$P_1 = R^2(0) - R(0) = R(R(0)) - R(0) = R(\xi) - R(0) = \int_0^1 R'(t \xi)\, \xi \, dt \le \int_0^1 R'(\xi)\, \xi \, dt \le \int_0^1 R'(r)\, r \, dt \le (M + N) r,$$
which shows Equation (2.7) for $n = 1$. Suppose that Equation (2.7) is true for $k = 1, 2, \ldots, n$. Then, we have in turn by Equation (2.6) and the inductive hypothesis that
$$P_{k+1} = R^{k+2}(0) - R^{k+1}(0) = R^{k+1}(R(0)) - R^{k+1}(0) = R^{k+1}(\xi) - R^{k+1}(0) = R(R^k(\xi)) - R(R^k(0))$$
$$= \int_0^1 R'\big(R^k(0) + t\,(R^k(\xi) - R^k(0))\big)\,\big(R^k(\xi) - R^k(0)\big)\, dt \le R'(R^k(\xi))\,\big(R^k(\xi) - R^k(0)\big) = R'(R^k(\xi))\,\big(R^{k+1}(0) - R^k(0)\big)$$
$$\le R'(r)\,\big(R^{k+1}(0) - R^k(0)\big) \le (M + N)(M + N)^k r = (M + N)^{k+1} r,$$
which completes the induction for Equation (2.7). It follows that $\{b_n\}$ is a Cauchy sequence in a Banach space and as such it converges to some $b$. Notice that $R(b) = R(\lim_{n \to \infty} R^n(0)) = \lim_{n \to \infty} R^{n+1}(0) = b$, so $b$ solves the equation $R(t) = t$. We have that $b_n \le r$ implies $b \le r$, where $r$ is a solution of $R(r) \le r$. Hence, $b$ is smaller than any solution of $R(s) \le s$.
(ii) Define sequences $\{v_n\}$, $\{w_n\}$ by $v_0 = 0$, $v_{n+1} = R(v_n)$, $w_0 = q$, $w_{n+1} = R(w_n)$. Then, we have that
$$0 \le v_n \le v_{n+1} \le w_{n+1} \le w_n \le q, \qquad w_n - v_n \le \theta^n q, \quad (2.8)$$
and the sequence $\{v_n\}$ is bounded above by $q$. Hence, it converges to some $r$ with $r \le q$. We also get by Equation (2.8) that $w_n - v_n \to 0$ as $n \to \infty$, so $w_n \to r$ as $n \to \infty$. ☐
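In a concrete ordered space, part (i) of Lemma 2.4 can be checked numerically. The sketch below uses hypothetical data (not from the paper): $E = \mathbb{R}^2$, $K$ the nonnegative orthant, entrywise-nonnegative matrices $M$, $N$ and $\xi \in K$. It iterates $b_n = R^n(0)$ and verifies that the limit solves $t = R(t)$:

```python
import numpy as np

# Hypothetical data in E = R^2, K the nonnegative orthant:
# M, N entrywise nonnegative and xi in K, so R is monotone on K.
M = np.array([[0.2, 0.1], [0.0, 0.3]])
N = np.array([[0.1, 0.0], [0.1, 0.1]])
xi = np.array([0.5, 0.4])

def R(t):
    # R(t) = (M + N) t + xi
    return (M + N) @ t + xi

# b_n = R^n(0) increases and converges to the smallest fixed point b
b = np.zeros(2)
for _ in range(200):
    b = R(b)

# the limit solves t = R(t), equivalently (I - M - N) b = xi
exact = np.linalg.solve(np.eye(2) - M - N, xi)
assert np.allclose(b, exact)
```

Here the spectral radius of $M + N$ is below one, so $(M+N)^k r \to 0$ holds and the iteration converges geometrically.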
We also need the auxiliary result for computing solutions of fixed point problems.
Lemma 2.5. 
Let $(X, (E, K, \|\cdot\|), /\cdot/)$ be a generalized Banach space and let $P \in B(Q)$ be a bound for $Q \in L(X, X)$. Suppose there exist $y \in X$ and $q \in K$ such that
$$P q + /y/ \le q \quad \text{and} \quad P^k q \to 0 \ \text{as} \ k \to \infty.$$
Then, $z = T^\infty(0)$, $T(x) := Q x + y$, is well-defined and satisfies $z = Q z + y$ and $/z/ \le P /z/ + /y/ \le q$. Moreover, $z$ is the unique solution in the subspace $\{x \in X \mid \exists\, \theta \in \mathbb{R} : /x/ \le \theta q\}$.
The proof can be found in [14, Lemma 3.2].

3. Semilocal Convergence

Let $(X, (E, K, \|\cdot\|), /\cdot/)$ and $Y$ be generalized Banach spaces, $D \subseteq X$ an open subset, $G : D \to Y$ a continuous operator and $A(\cdot) : D \to L(X, Y)$. A zero of the operator $G$ is to be determined by a Newton-type method starting at a point $x_0 \in D$. The results are presented for an operator $F = J G$, where $J \in L(Y, X)$. The iterates are determined through a fixed point problem:
$$x_{n+1} = x_n + y_n, \quad A(x_n) y_n + F(x_n) = 0 \iff y_n = T(y_n) := (I - A(x_n)) y_n - F(x_n). \quad (3.1)$$
Let $U(x_0, r)$ stand for the ball defined by
$$U(x_0, r) = \{ x \in X : /x - x_0/ \le r \} \quad (3.2)$$
for some $r \in K$.
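To make the two-level structure of Equation (3.1) concrete, here is a minimal scalar sketch (all data hypothetical, chosen only for illustration): $X = E = \mathbb{R}$, a continuous but nondifferentiable $F$, and a fixed choice $A(x) \equiv 1$. The outer loop updates $x_{n+1} = x_n + y_n$, and the inner loop solves the fixed point problem $y = (I - A(x_n)) y - F(x_n)$:

```python
import math

# Hypothetical scalar example: F is continuous but not differentiable
# at multiples of pi, and A(x) = 1 is a fixed linear "approximation".
def F(x):
    return x + 0.25 * abs(math.sin(x)) - 1.0

def A(x):
    return 1.0

x = 0.0  # starting point x_0
for n in range(50):
    # inner fixed point iteration y = (I - A(x)) y - F(x) for y_n
    y = 0.0
    for k in range(20):
        y = (1.0 - A(x)) * y - F(x)
    x = x + y  # x_{n+1} = x_n + y_n

# here (H1) holds with M = 0 and (H2) with N = 1/4, so M + N < 1
# and the iterates converge to the zero of F
assert abs(F(x)) < 1e-10
```

No derivative of $F$ is ever used, which is exactly the point of the weakened hypotheses.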
Next, we present the semilocal convergence analysis of the Newton-type method Equation (3.1) using the preceding notation.
Theorem 3.1. 
Let $F : D \to X$, $A(\cdot) : D \to L(X, Y)$ and $x_0 \in D$ be as defined previously. Suppose:
(H$_1$) There exists an operator $M \in B(I - A(x))$ for each $x \in D$.
(H$_2$) There exists an operator $N \in L_+(E, E)$ satisfying for each $x, y \in D$
$$/F(y) - F(x) - A(x)(y - x)/ \le N /y - x/.$$
(H$_3$) There exists a solution $r \in K$ of
$$R_0(t) := (M + N) t + /F(x_0)/ \le t.$$
(H$_4$) $U(x_0, r) \subseteq D$.
(H$_5$) $(M + N)^k r \to 0$ as $k \to \infty$.
Then, the following hold:
(C$_1$) The sequence $\{x_n\}$ defined by
$$x_{n+1} = x_n + T_n^\infty(0), \quad T_n(y) := (I - A(x_n)) y - F(x_n)$$
is well-defined, remains in $U(x_0, r)$ for each $n = 0, 1, 2, \ldots$ and converges to the unique zero of the operator $F$ in $U(x_0, r)$.
(C$_2$) An a priori bound is given by the null-sequence $\{r_n\}$ defined by $r_0 := r$ and for each $n = 1, 2, \ldots$
$$r_n = P_n^\infty(0), \quad P_n(t) = M t + N r_{n-1}.$$
(C$_3$) An a posteriori bound is given by the sequence $\{s_n\}$ defined by
$$s_n = R_n^\infty(0), \quad R_n(t) = (M + N) t + N a_{n-1},$$
$$b_n := /x_n - x_0/ \le r - r_n \le r,$$
where
$$a_{n-1} = /x_n - x_{n-1}/ \quad \text{for each } n = 1, 2, \ldots$$
Proof. 
Let us define for each $n \in \mathbb{N}$ the statement:
(I$_n$) $x_n \in X$ and $r_n \in K$ are well-defined and satisfy
$$r_n + a_{n-1} \le r_{n-1}.$$
We use induction to show (I$_n$). The statement (I$_1$) is true: by Lemma 2.4, (H$_3$) and (H$_5$), there exists $q \le r$ such that
$$M q + /F(x_0)/ = q \quad \text{and} \quad M^k q \le M^k r \to 0 \ \text{as} \ k \to \infty.$$
Hence, by Lemma 2.5, $x_1$ is well-defined and we have $a_0 \le q$. Then, we get the estimate
$$P_1(r - q) = M(r - q) + N r_0 = M r - M q + N r = R_0(r) - q \le r - q.$$
It follows with Lemma 2.4 that $r_1$ is well-defined and
$$r_1 + a_0 \le r - q + q = r = r_0.$$
Suppose that (I$_j$) is true for each $j = 1, 2, \ldots, n$. We need to show the existence of $x_{n+1}$ and obtain a bound $q$ for $a_n$. To achieve this, notice that
$$M r_n + N(r_{n-1} - r_n) = M r_n + N r_{n-1} - N r_n = P_n(r_n) - N r_n \le r_n.$$
Then, it follows from Lemma 2.4 that there exists $q \le r_n$ such that
$$q = M q + N(r_{n-1} - r_n) \quad \text{and} \quad (M + N)^k q \to 0 \ \text{as} \ k \to \infty. \quad (3.3)$$
By (I$_j$) it follows that
$$b_n = /x_n - x_0/ \le \sum_{j=0}^{n-1} a_j \le \sum_{j=0}^{n-1} (r_j - r_{j+1}) = r - r_n \le r.$$
Hence, $x_n \in U(x_0, r) \subseteq D$ and by (H$_1$) $M$ is a bound for $I - A(x_n)$.
We can write by (H$_2$) that
$$/F(x_n)/ = /F(x_n) - F(x_{n-1}) - A(x_{n-1})(x_n - x_{n-1})/ \le N a_{n-1} \le N (r_{n-1} - r_n). \quad (3.4)$$
It follows from Equations (3.3) and (3.4) that
$$M q + /F(x_n)/ \le q.$$
By Lemma 2.5, $x_{n+1}$ is well-defined and $a_n \le q \le r_n$. In view of the definition of $r_{n+1}$ we have that
$$P_{n+1}(r_n - q) = P_n(r_n) - q = r_n - q,$$
so that by Lemma 2.4, $r_{n+1}$ is well-defined and
$$r_{n+1} + a_n \le r_n - q + q = r_n,$$
which proves (I$_{n+1}$). The induction for (I$_n$) is complete. Let $m \ge n$; then we obtain in turn that
$$/x_{m+1} - x_n/ \le \sum_{j=n}^m a_j \le \sum_{j=n}^m (r_j - r_{j+1}) = r_n - r_{m+1} \le r_n. \quad (3.5)$$
Moreover, we get inductively the estimate
$$r_{n+1} = P_{n+1}(r_{n+1}) \le P_{n+1}(r_n) \le (M + N) r_n \le \cdots \le (M + N)^{n+1} r.$$
It follows from (H$_5$) that $\{r_n\}$ is a null-sequence. Hence, $\{x_n\}$ is a Cauchy sequence in the Banach space $X$ by Equation (3.5), and as such it converges to some $x^* \in X$. By letting $m \to \infty$ in Equation (3.5) we deduce that $x^* \in U(x_n, r_n)$. Furthermore, Equation (3.4) shows that $x^*$ is a zero of $F$. Hence, (C$_1$) and (C$_2$) are proved.
In view of the estimate
$$R_n(r_n) \le P_n(r_n) \le r_n,$$
the a posteriori bound of (C$_3$) is well-defined by Lemma 2.4. That is, $s_n$ is in general smaller than $r_n$. The conditions of Theorem 3.1 are satisfied with $x_n$ replacing $x_0$. A solution of the inequality of (C$_2$) is given by $s_n$ (see Equation (3.4)). It follows from Equation (3.5) that the conditions of Theorem 3.1 are easily verified. Then, it follows from (C$_1$) that $x^* \in U(x_n, s_n)$, which proves (C$_3$). ☐
In general, the a posteriori estimate is of interest. Then, condition (H$_5$) can be avoided as follows:
Proposition 3.2. 
Suppose that condition (H$_1$) of Theorem 3.1 is true, and moreover:
(H$_3'$) There exist $s \in K$ and $\theta \in (0, 1)$ such that
$$R_0(s) = (M + N) s + /F(x_0)/ \le \theta s.$$
(H$_4'$) $U(x_0, s) \subseteq D$.
Then, there exists $r \le s$ satisfying the conditions of Theorem 3.1. Moreover, the zero $x^*$ of $F$ is unique in $U(x_0, s)$.
Remark 3.3. (i) Notice that by Lemma 2.4, $R_n^\infty(0)$ is the smallest solution of $R_n(s) \le s$. Hence, any solution of this inequality yields an upper estimate for $R_n^\infty(0)$. Similar inequalities appear in (H$_3$) and (H$_3'$).
(ii) The weak assumptions of Theorem 3.1 do not imply the existence of $A(x_n)^{-1}$. In practice, the computation of $T_n^\infty(0)$ as the solution of a linear equation poses no problem, while the inverse $A(x_n)^{-1}$, which is expensive or even impossible to compute in general, is not needed.
(iii) We can use the following result for the computation of the a posteriori estimates. The proof can be found in [14, Lemma 4.2] by simply exchanging the definitions of R.
Lemma 3.4. 
Suppose that the conditions of Theorem 3.1 are satisfied. If $s \in K$ is a solution of $R_n(s) \le s$, then $q := s - a_n \in K$ and solves $R_{n+1}(q) \le q$. This solution might be improved by $R_{n+1}^k(q) \le q$ for each $k = 1, 2, \ldots$

4. Special Cases and Applications

Application 4.1. The results obtained in earlier studies such as [5,6,7,14] require that the operator $F$ (i.e., $G$) is Fréchet differentiable. This assumption limits the applicability of the earlier results. In the present study we only require that $F$ is a continuous operator. Hence, we have extended the applicability of Newton-type methods to classes of operators that are only continuous. Moreover, as we will show next, by specializing $F$ to be a Fréchet differentiable operator (i.e., $A(x_n) = F'(x_n)$), our Theorem 3.1 improves earlier results. Indeed, first of all, notice that the Newton-type method defined by Equation (3.1) reduces to Newton's method:
$$x_{n+1} = x_n + y_n, \quad F'(x_n) y_n + F(x_n) = 0 \iff y_n = T_n(y_n) = (I - F'(x_n)) y_n - F(x_n). \quad (4.1)$$
Next, we present Theorem 2.1 from [14] (stated here as Theorem 4.2) and the specialization of our Theorem 3.1 (Theorem 4.3), so that we can compare them.
Theorem 4.2. 
Let $F : D \to X$ be a Fréchet differentiable operator and $x_0 \in D$. Suppose that the following conditions hold:
($\bar{H}_1$) There exists an operator $M_0 \in B(I - F'(x_0))$.
($\bar{H}_2$) There exists an operator $N_1 \in L_+(E^2, E)$ satisfying for $x, y \in D$, $z \in X$:
$$/(F'(x) - F'(y)) z/ \le 2 N_1(/x - y/, /z/).$$
(Here and below, $N_1 c^2$ abbreviates $N_1(c, c)$, and $N_1 c$ the linear operator $t \mapsto N_1(c, t)$.)
($\bar{H}_3$) There exists a solution $c \in K$ of the inequality
$$\bar{R}_0(c) := M_0 c + N_1 c^2 + /F(x_0)/ \le c.$$
($\bar{H}_4$) $U(x_0, c) \subseteq D$.
($\bar{H}_5$) $(M_0 + 2 N_1 c)^k c \to 0$ as $k \to \infty$.
Then, the following hold:
($\bar{C}_1$) The sequence $\{x_n\}$ generated by Equation (4.1) is well-defined and converges to a unique zero of $F$ in $U(x_0, c)$.
($\bar{C}_2$) An a priori bound is given by the null-sequence $\{c_n\}$ defined by
$$c_0 = c, \quad c_n := \bar{P}_n^\infty(0), \quad \bar{P}_n(t) := M_0 t + 2 N_1(c - c_{n-1}) t + N_1 c_{n-1}^2.$$
($\bar{C}_3$) An a posteriori bound is given by the sequence $\{d_n\}$ defined by
$$d_n = \bar{R}_n^\infty(0), \quad \bar{R}_n(t) := M_0 t + 2 N_1 b_n t + N_1 t^2 + N_1 a_{n-1}^2,$$
where the sequences $\{a_n\}$ and $\{b_n\}$ are as defined previously.
Theorem 4.3. 
Let $F : D \to X$ be a Fréchet differentiable operator and $x_0 \in D$. Suppose that the following conditions hold:
($\tilde{H}_1$) There exists an operator $M_1 \in B(I - F'(x))$ for each $x \in D$.
($\tilde{H}_2$) There exists an operator $N_2 \in L_+(E, E)$ satisfying for each $x, y \in D$
$$/F(y) - F(x) - F'(x)(y - x)/ \le N_2 /y - x/.$$
($\tilde{H}_3$) There exists a solution $\tilde{r} \in K$ of
$$\tilde{R}_0(t) := (M_1 + N_2) t + /F(x_0)/ \le t.$$
($\tilde{H}_4$) $U(x_0, \tilde{r}) \subseteq D$.
($\tilde{H}_5$) $(M_1 + N_2)^k \tilde{r} \to 0$ as $k \to \infty$.
Then, the following hold:
($\tilde{C}_1$) The sequence $\{x_n\}$ generated by Equation (4.1) is well-defined and converges to a unique zero of $F$ in $U(x_0, \tilde{r})$.
($\tilde{C}_2$) An a priori bound is given by $\tilde{r}_0 = \tilde{r}$, $\tilde{r}_n := \tilde{P}_n^\infty(0)$, $\tilde{P}_n(t) = M_1 t + N_2 \tilde{r}_{n-1}$.
($\tilde{C}_3$) An a posteriori bound is given by the sequence $\{\tilde{s}_n\}$ defined by $\tilde{s}_n := \tilde{R}_n^\infty(0)$, $\tilde{R}_n(t) = (M_1 + N_2) t + N_2 a_{n-1}$.
We can now compare the two preceding theorems. Notice that we can write
$$F(y) - F(x) - F'(x)(y - x) = \int_0^1 \big(F'(x + \theta(y - x)) - F'(x)\big)(y - x)\, d\theta.$$
Then, it follows from ($\bar{H}_2$), ($\tilde{H}_2$) and the preceding estimate that
$$N_2 \le N_1 p \quad (4.2)$$
holds in general, for each $p \in K$ with $/x - y/ \le p$ for all $x, y \in D$. In particular, we have that
$$N_2 \le N_1 c.$$
Moreover, we get in turn by ($\bar{H}_1$), ($\bar{H}_2$) and ($\bar{H}_5$) that
$$/(I - F'(x)) z/ \le /(I - F'(x_0)) z/ + /(F'(x_0) - F'(x)) z/ \le (M_0 + 2 N_1 /x - x_0/)\, /z/ \le (M_0 + 2 N_1 c)\, /z/. \quad (4.3)$$
Therefore, by ($\tilde{H}_1$) and Equation (4.3), we obtain that
$$M_1 \le M_0 + 2 N_1 c \quad (4.4)$$
holds in general.
Then, in view of Equations (4.2), (4.4) and the ($\bar{H}$), ($\tilde{H}$) hypotheses, we deduce that
$$\bar{R}_0(c) \le c \implies \tilde{R}_0(\tilde{r}) \le \tilde{r}, \quad (4.5)$$
$$(M_0 + 2 N_1 c)^k c \to 0 \implies (M_1 + N_2)^k \tilde{r} \to 0, \quad (4.6)$$
but not necessarily vice versa, unless equality holds in Equations (4.2) and (4.4);
$$\tilde{r} \le c, \quad (4.7)$$
$$\tilde{r}_n \le c_n \quad (4.8)$$
and
$$\tilde{s}_n \le d_n. \quad (4.9)$$
Notice also that strict inequality holds in Equation (4.8) or (4.9) if strict inequality holds in Equation (4.2) or (4.4).
Estimates (4.5)–(4.9) justify the advantages of our approach over the earlier studies, as already stated in the introduction of this study.
Next, we show that the results of Theorem 2.1 in [14], i.e., of Theorem 4.2, can be improved under the same hypotheses, by noticing that, in view of ($\bar{H}_2$), the following condition also holds:
($\bar{H}_2^0$) There exists an operator $N_0 \in L_+(E^2, E)$ satisfying for $x \in D$, $z \in X$,
$$/(F'(x) - F'(x_0)) z/ \le 2 N_0(/x - x_0/, /z/).$$
Moreover,
$$N_0 \le N_1 \quad (4.10)$$
holds in general, and $N_1 / N_0$ can be arbitrarily large [4,5,6,7].
It is worth noticing that ($\bar{H}_2^0$) is not an additional hypothesis to ($\bar{H}_2$), since in practice the computation of $N_1$ requires the computation of $N_0$ as a special case. Using now ($\bar{H}_2^0$) and ($\bar{H}_1$) we get that
$$/(I - F'(x)) z/ \le /(I - F'(x_0)) z/ + /(F'(x_0) - F'(x)) z/ \le (M_0 + 2 N_0 /x - x_0/)\, /z/.$$
Hence, $M_0 + 2 N_0 b_n$ and $M_0 + 2 N_0 (c - c_n)$ can be used as bounds for $I - F'(x_n)$ instead of $M_0 + 2 N_1 b_n$ and $M_0 + 2 N_1 (c - c_n)$, respectively.
Notice also that
$$M_0 + 2 N_0 b_n \le M_0 + 2 N_1 b_n \quad (4.11)$$
and
$$M_0 + 2 N_0 (c - c_n) \le M_0 + 2 N_1 (c - c_n). \quad (4.12)$$
Then, with the above changes and following the proof of Theorem 2.1 in [14], we arrive at the following improvement:
Theorem 4.4. 
Suppose that the conditions of Theorem 4.2 hold, but with $N_1$ replaced by the at most as large $N_0$ where possible. Then, the conclusions ($\bar{C}_1$)–($\bar{C}_3$) hold, together with
$$\bar{c}_n \le c_n \quad (4.13)$$
and
$$\bar{d}_n \le d_n, \quad (4.14)$$
where the sequences $\{\bar{c}_n\}$, $\{\bar{d}_n\}$ are defined by
$$\bar{c}_0 = c, \quad \bar{c}_n = \bar{\bar{P}}_n^\infty(0), \quad \bar{\bar{P}}_n(t) = M_0 t + 2 N_0 (c - c_{n-1}) t + N_1 c_{n-1}^2,$$
$$\bar{d}_n = \bar{\bar{R}}_n^\infty(0), \quad \bar{\bar{R}}_n(t) = M_0 t + 2 N_0 b_n t + N_1 t^2 + N_1 a_{n-1}^2.$$
Remark 4.5. Notice that the estimates in Equations (4.13) and (4.14) follow by a simple inductive argument using Equations (4.11) and (4.12). Moreover, strict inequality holds in Equation (4.13) (for $n \ge 1$) and in Equation (4.14) (for $n > 1$) if strict inequality holds in Equation (4.11) or (4.12). Hence, again we obtain better a priori and a posteriori bounds under the same hypotheses ($\bar{H}$).
Condition ($\bar{H}_5$) has been weakened, since $N_0 \le N_1$. It turns out that condition ($\bar{H}_3$) can also be weakened, and the sequences $\{c_n\}$ and $\{d_n\}$ can be replaced by more precise sequences, as follows. Define the operators $Q_0$, $Q_1$, $Q_2$, $H_1$, $H_2$ by:
($\bar{\bar{H}}_3$) $Q_0(t) = M_0 t + /F(x_0)/$.
Suppose that there exists a solution $\mu_0 \in K$ of the inequality
$$Q_0(\mu_0) \le \mu_0.$$
There exists a solution $\mu_1 \in K$ with $\mu_1 \le \mu_0$ of the inequality
$$Q_1(t) \le t,$$
where
$$Q_1(t) := M_0 t + 2 N_0 (\mu_0 - t) t + N_0 \mu_0^2.$$
There exists a solution $\mu_2 = \mu \in K$ with $\mu \le \mu_1$ such that
$$Q_2(t) \le t,$$
where
$$Q_2(t) = M_0 t + 2 N_0 (\mu - t) t + N_1 \mu_1^2.$$
Moreover, define the operators
$$H_1(t) = M_0 t, \quad H_2(t) = Q_1(t),$$
$$H_n(t) = M_0 t + 2 N_0 (\mu - \mu_{n-1}) t + N_1 \mu_{n-1}^2, \quad n = 3, 4, \ldots$$
and
$$Q_n(t) = M_0 t + 2 N_0 b_n t + N_1 t^2 + N_1 a_{n-1}^2, \quad n = 3, 4, \ldots$$
Furthermore, define the sequences $\{\bar{\bar{c}}_n\}$ and $\{\bar{\bar{d}}_n\}$ by
$$\bar{\bar{c}}_n = H_n^\infty(0) \quad \text{and} \quad \bar{\bar{d}}_n := Q_n^\infty(0).$$
Then, the proof of Theorem 4.2 goes through in this setting, and we arrive at:
Theorem 4.6. 
Suppose that the conditions of Theorem 4.2 are satisfied, but with $c$, ($\bar{H}_3$)–($\bar{H}_5$) replaced by $\mu$, ($\bar{\bar{H}}_3$),
($\bar{\bar{H}}_4$) $U(x_0, \mu) \subseteq D$,
($\bar{\bar{H}}_5$) $(M_0 + N_0 \mu)^k \mu \to 0$ as $k \to \infty$, respectively.
Then, the conclusions of Theorem 4.2 hold with the sequences $\{\bar{\bar{c}}_n\}$ and $\{\bar{\bar{d}}_n\}$ replacing $\{c_n\}$ and $\{d_n\}$, respectively. Moreover, we have that
$$\bar{\bar{c}}_n \le \bar{c}_n \le c_n,$$
$$\bar{\bar{d}}_n \le \bar{d}_n \le d_n$$
and
$$\mu \le c.$$
Clearly, the new error bounds are more precise, the information on the location of the solution $x^*$ is at least as precise, and the sufficient convergence criteria ($\bar{\bar{H}}_3$) and ($\bar{\bar{H}}_5$) are weaker than ($\bar{H}_3$) and ($\bar{H}_5$), respectively.
Example 4.7. The $j$-dimensional space $\mathbb{R}^j$ is a classical example of a generalized Banach space. The generalized norm is defined by componentwise absolute values. As the ordered Banach space we take $E = \mathbb{R}^j$ with componentwise ordering and, e.g., the maximum norm. A bound for a linear operator (a matrix) is given by the corresponding matrix of absolute values. Similarly, we can define the "N" operators.
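This example is easy to verify numerically. The sketch below (a hypothetical random matrix $Q$, not data from the paper) checks that the entrywise absolute value $|Q|$ is a bound for $Q$ in the sense of Definition 2.3, i.e., $/Qx/ \le |Q|\,/x/$ componentwise:

```python
import numpy as np

# Componentwise generalized norm on R^3: /x/ = (|x_1|, |x_2|, |x_3|).
# A bound (Definition 2.3) for a matrix Q is its entrywise absolute value.
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 3))
P = np.abs(Q)  # candidate bound for Q

for _ in range(100):
    x = rng.standard_normal(3)
    # triangle inequality row by row: |(Qx)_i| <= sum_j |Q_ij| |x_j|
    assert np.all(np.abs(Q @ x) <= P @ np.abs(x) + 1e-12)
```

The inequality is just the triangle inequality applied to each row of $Q$, so it holds for every $x$.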
Let $E = \mathbb{R}$. That is, we consider the case of a real normed space with norm denoted by $|\cdot|$. Let us see what the conditions of Theorem 3.1 and Theorem 4.4 look like in this case.
Theorem 4.8. 
(H$_1$) $|I - A(x)| \le M$ for each $x \in D$ and some $M \ge 0$.
(H$_2$) $|F(y) - F(x) - A(x)(y - x)| \le N |y - x|$ for each $x, y \in D$ and some $N \ge 0$.
(H$_3$) $M + N < 1$,
$$r = \frac{|F(x_0)|}{1 - (M + N)}. \quad (4.15)$$
(H$_4$) $U(x_0, r) \subseteq D$.
(H$_5$) $(M + N)^k r \to 0$ as $k \to \infty$, where $r$ is given by Equation (4.15).
Then, the conclusions of Theorem 3.1 hold.
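For instance, with hypothetical constants $M$, $N$ and $|F(x_0)|$ (chosen only for illustration, not from any particular problem), the scalar criteria can be checked directly:

```python
# Hypothetical scalar data for Theorem 4.8.
M, N = 0.1, 0.4           # constants from (H1) and (H2)
F_x0 = 0.25               # |F(x_0)|

assert M + N < 1          # (H3)
r = F_x0 / (1 - (M + N))  # Equation (4.15); here r = 0.5

# (H5) holds automatically: (M + N)^k r -> 0 since 0 <= M + N < 1.
# A priori bounds (C2): r_n solves t = M t + N r_{n-1}, i.e.
# r_n = N r_{n-1} / (1 - M), a null-sequence.
bounds = [r]
for n in range(5):
    bounds.append(N * bounds[-1] / (1 - M))
assert all(b2 < b1 for b1, b2 in zip(bounds, bounds[1:]))
```

The a priori bounds shrink by the factor $N/(1 - M) < 1$ at every step, which is what forces convergence of $\{x_n\}$.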
Theorem 4.9. 
($\bar{H}_1$) $|I - F'(x_0)| \le M_0$ for some $M_0 \in [0, 1)$.
($\bar{H}_2$) $|F'(x) - F'(x_0)| \le 2 N_0 |x - x_0|$,
$|F'(x) - F'(y)| \le 2 N_1 |x - y|$, for some $N_0 \ge 0$ and $N_1 > 0$.
($\bar{H}_3$)
$$4 N_1 |F(x_0)| \le (1 - M_0)^2, \quad (4.16)$$
$$c = \frac{1 - M_0 - \sqrt{(1 - M_0)^2 - 4 N_1 |F(x_0)|}}{2 N_1}. \quad (4.17)$$
($\bar{H}_4$) $U(x_0, c) \subseteq D$.
($\bar{H}_5$) $(M_0 + 2 N_0 c)^k c \to 0$ as $k \to \infty$, where $c$ is defined by Equation (4.17).
Then, the conclusions of Theorem 4.4 hold.
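A quick scalar sanity check of (4.16) and (4.17) with hypothetical constants: $c$ given by Equation (4.17) is the smaller root of the quadratic $N_1 t^2 + (M_0 - 1) t + |F(x_0)| = 0$, i.e., a fixed point of $\bar{R}_0(t) = M_0 t + N_1 t^2 + |F(x_0)|$:

```python
import math

# Hypothetical scalar constants for Theorem 4.9.
M0, N1, F_x0 = 0.1, 0.5, 0.2

assert 4 * N1 * F_x0 <= (1 - M0) ** 2            # criterion (4.16)
disc = math.sqrt((1 - M0) ** 2 - 4 * N1 * F_x0)
c = (1 - M0 - disc) / (2 * N1)                   # Equation (4.17)

# c solves M0*t + N1*t^2 + |F(x_0)| = t (smallest nonnegative root)
assert abs(M0 * c + N1 * c ** 2 + F_x0 - c) < 1e-12
```

When (4.16) fails, the discriminant is negative and no real solution $c$ of ($\bar{H}_3$) exists, so the criterion is sharp in the scalar case.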
Remark 4.10. Condition (4.16) is a Newton–Kantorovich-type hypothesis, appearing as a sufficient semilocal convergence hypothesis in connection with Newton-type methods. In particular, if $F'(x_0) = I$, then $M_0 = 0$ and Equation (4.16) reduces to the Newton–Kantorovich hypothesis, famous for its simplicity and clarity,
$$4 N_1 |F(x_0)| \le 1,$$
appearing in the study of Newton's method [1,2,5,6,7,9,10,11,12,13,14,15,16].

5. Application to Fractional Calculus

The general semilocal convergence results for Newton-type methods presented earlier, see Theorem 4.8, apply in the next two fractional settings, provided that the following inequalities are fulfilled:
$$|1 - A(x)| \le \gamma_0 \in (0, 1) \quad (5.1)$$
and
$$|F(y) - F(x) - A(x)(y - x)| \le \gamma_1 |y - x|, \quad (5.2)$$
where $\gamma_0, \gamma_1 \in (0, 1)$; furthermore,
$$\gamma = \gamma_0 + \gamma_1 \in (0, 1) \quad (5.3)$$
for all $x, y \in [a^*, b]$.
Here we consider $a < a^* < b$.
The specific functions $A(x)$, $F(x)$ will be described next.
(I) Let $\alpha > 0$ and $f \in L_\infty([a, b])$. The Riemann–Liouville integral ([8], p. 13) is given by
$$J_a^\alpha f(x) = \frac{1}{\Gamma(\alpha)} \int_a^x (x - t)^{\alpha - 1} f(t)\, dt, \quad x \in [a, b].$$
Then
$$|J_a^\alpha f(x)| \le \frac{1}{\Gamma(\alpha)} \int_a^x (x - t)^{\alpha - 1} |f(t)|\, dt \le \frac{1}{\Gamma(\alpha)} \int_a^x (x - t)^{\alpha - 1} dt\, \|f\|_\infty = \frac{(x - a)^\alpha}{\Gamma(\alpha + 1)} \|f\|_\infty =: \xi_1.$$
Clearly,
$$J_a^\alpha f(a) = 0 \quad \text{and} \quad \xi_1 \le \frac{(b - a)^\alpha}{\Gamma(\alpha + 1)} \|f\|_\infty.$$
That is,
$$\|J_a^\alpha f\|_{\infty, [a, b]} \le \frac{(b - a)^\alpha}{\Gamma(\alpha + 1)} \|f\|_\infty < \infty,$$
i.e., $J_a^\alpha$ is a bounded linear operator.
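The bound above is easy to test numerically. The sketch below (a hypothetical choice $f = \cos$ and a simple midpoint rule; purely illustrative, not the paper's method) approximates $J_a^\alpha f(x)$ and checks it against $(x - a)^\alpha \|f\|_\infty / \Gamma(\alpha + 1)$:

```python
import math

def riemann_liouville(f, a, alpha, x, n=20000):
    # Midpoint-rule approximation of J_a^alpha f(x); the midpoints avoid
    # the integrable singularity of (x - t)^(alpha - 1) at t = x.
    h = (x - a) / n
    total = 0.0
    for k in range(n):
        t = a + (k + 0.5) * h
        total += (x - t) ** (alpha - 1) * f(t) * h
    return total / math.gamma(alpha)

f = math.cos                                      # ||f||_inf = 1 on [0, 1]
a, alpha, x = 0.0, 0.5, 1.0
val = riemann_liouville(f, a, alpha, x)
bound = (x - a) ** alpha / math.gamma(alpha + 1)  # xi_1 with ||f||_inf = 1
assert 0.0 < val <= bound
```

For production use one would employ a quadrature rule adapted to the endpoint singularity, but the crude rule already respects the theoretical bound.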
By [3], p. 388, we get that $J_a^\alpha f$ is a continuous function over $[a, b]$, and in particular over $[a^*, b]$. Thus there exist $x_1, x_2 \in [a^*, b]$ such that
$$J_a^\alpha f(x_1) = \min J_a^\alpha f(x), \quad J_a^\alpha f(x_2) = \max J_a^\alpha f(x), \quad x \in [a^*, b].$$
We assume that
$$J_a^\alpha f(x_1) > 0.$$
Hence
$$\|J_a^\alpha f\|_{\infty, [a^*, b]} = J_a^\alpha f(x_2) > 0.$$
Here the operator $J$ of Section 3 is $J(x) = m x$, $m \ne 0$. Therefore the equation
$$J_a^\alpha f(x) = 0, \quad x \in [a^*, b],$$
has the same solutions as the equation
$$F(x) := \frac{J_a^\alpha f(x)}{2\, J_a^\alpha f(x_2)} = 0, \quad x \in [a^*, b].$$
Notice that
$$\frac{J_a^\alpha f(x)}{2\, J_a^\alpha f(x_2)} \le \frac{J_a^\alpha f(x_2)}{2\, J_a^\alpha f(x_2)} = \frac{1}{2} < 1, \quad x \in [a^*, b].$$
Call
$$A(x) := \frac{J_a^\alpha f(x)}{2\, J_a^\alpha f(x_2)}, \quad x \in [a^*, b].$$
We notice that
$$0 < \frac{J_a^\alpha f(x_1)}{2\, J_a^\alpha f(x_2)} \le A(x) \le \frac{1}{2}, \quad x \in [a^*, b].$$
Hence the first condition (5.1) is fulfilled:
$$|1 - A(x)| = 1 - A(x) \le 1 - \frac{J_a^\alpha f(x_1)}{2\, J_a^\alpha f(x_2)} =: \gamma_0, \quad x \in [a^*, b].$$
Clearly $\gamma_0 \in (0, 1)$.
Next we assume that $F(x)$ is a contraction, i.e.,
$$|F(x) - F(y)| \le \lambda |x - y| \quad \text{for all } x, y \in [a^*, b],$$
with $0 < \lambda < \frac{1}{2}$. Equivalently, we have
$$|J_a^\alpha f(x) - J_a^\alpha f(y)| \le 2 \lambda\, J_a^\alpha f(x_2)\, |x - y| \quad \text{for all } x, y \in [a^*, b].$$
We observe that
$$|F(y) - F(x) - A(x)(y - x)| \le |F(y) - F(x)| + A(x) |y - x| \le \lambda |y - x| + A(x) |y - x| = (\lambda + A(x)) |y - x| =: \psi_1, \quad x, y \in [a^*, b].$$
We have that
$$J_a^\alpha f(x) \le \frac{(b - a)^\alpha}{\Gamma(\alpha + 1)} \|f\|_\infty < \infty, \quad x \in [a^*, b].$$
Hence
$$A(x) = \frac{J_a^\alpha f(x)}{2\, J_a^\alpha f(x_2)} \le \frac{(b - a)^\alpha \|f\|_\infty}{2\, \Gamma(\alpha + 1)\, J_a^\alpha f(x_2)} < \infty, \quad x \in [a^*, b].$$
Therefore we get
$$\psi_1 \le \left( \lambda + \frac{(b - a)^\alpha \|f\|_\infty}{2\, \Gamma(\alpha + 1)\, J_a^\alpha f(x_2)} \right) |y - x|, \quad x, y \in [a^*, b].$$
Call
$$0 < \gamma_1 := \lambda + \frac{(b - a)^\alpha \|f\|_\infty}{2\, \Gamma(\alpha + 1)\, J_a^\alpha f(x_2)}.$$
Choosing $b - a$ small enough, we can make $\gamma_1 \in (0, 1)$, fulfilling Equation (5.2).
Finally, we need
$$0 < \gamma := \gamma_0 + \gamma_1 = 1 - \frac{J_a^\alpha f(x_1)}{2\, J_a^\alpha f(x_2)} + \lambda + \frac{(b - a)^\alpha \|f\|_\infty}{2\, \Gamma(\alpha + 1)\, J_a^\alpha f(x_2)} < 1,$$
equivalently,
$$\lambda + \frac{(b - a)^\alpha \|f\|_\infty}{2\, \Gamma(\alpha + 1)\, J_a^\alpha f(x_2)} < \frac{J_a^\alpha f(x_1)}{2\, J_a^\alpha f(x_2)},$$
equivalently,
$$2 \lambda\, J_a^\alpha f(x_2) + \frac{(b - a)^\alpha \|f\|_\infty}{\Gamma(\alpha + 1)} < J_a^\alpha f(x_1),$$
which is possible for small $\lambda$ and $b - a$. That is, $\gamma \in (0, 1)$, fulfilling Equation (5.3). So our numerical method converges and solves the equation $J_a^\alpha f(x) = 0$.
(II) Let again $a < a^* < b$, $\alpha > 0$, $m = \lceil \alpha \rceil$ ($\lceil \cdot \rceil$ the ceiling function), $\alpha \notin \mathbb{N}$, $G \in C^{m-1}([a, b])$, $0 \ne G^{(m)} \in L_\infty([a, b])$. Here we consider the Caputo fractional derivative (see [3], p. 270),
$$D_{*a}^\alpha G(x) = \frac{1}{\Gamma(m - \alpha)} \int_a^x (x - t)^{m - \alpha - 1} G^{(m)}(t)\, dt.$$
By [3], p. 388, $D_{*a}^\alpha G$ is a continuous function over $[a, b]$, and in particular continuous over $[a^*, b]$. Notice that, by [4], p. 358, we have $D_{*a}^\alpha G(a) = 0$.
Therefore there exist $x_1, x_2 \in [a^*, b]$ such that $D_{*a}^\alpha G(x_1) = \min D_{*a}^\alpha G(x)$ and $D_{*a}^\alpha G(x_2) = \max D_{*a}^\alpha G(x)$ for $x \in [a^*, b]$.
We assume that
$$D_{*a}^\alpha G(x_1) > 0$$
(i.e., $D_{*a}^\alpha G(x) > 0$ for all $x \in [a^*, b]$).
Furthermore,
$$\|D_{*a}^\alpha G\|_{\infty, [a^*, b]} = D_{*a}^\alpha G(x_2).$$
Here again $J(x) = m x$, $m \ne 0$. The equation
$$D_{*a}^\alpha G(x) = 0, \quad x \in [a^*, b],$$
has the same set of solutions as the equation
$$F(x) := \frac{D_{*a}^\alpha G(x)}{2\, D_{*a}^\alpha G(x_2)} = 0, \quad x \in [a^*, b].$$
Notice that
$$\frac{D_{*a}^\alpha G(x)}{2\, D_{*a}^\alpha G(x_2)} \le \frac{D_{*a}^\alpha G(x_2)}{2\, D_{*a}^\alpha G(x_2)} = \frac{1}{2} < 1, \quad x \in [a^*, b].$$
We call
$$A(x) := \frac{D_{*a}^\alpha G(x)}{2\, D_{*a}^\alpha G(x_2)}, \quad x \in [a^*, b].$$
We notice that
$$0 < \frac{D_{*a}^\alpha G(x_1)}{2\, D_{*a}^\alpha G(x_2)} \le A(x) \le \frac{1}{2}.$$
Hence the first condition (5.1) is fulfilled:
$$|1 - A(x)| = 1 - A(x) \le 1 - \frac{D_{*a}^\alpha G(x_1)}{2\, D_{*a}^\alpha G(x_2)} =: \gamma_0, \quad x \in [a^*, b].$$
Clearly $\gamma_0 \in (0, 1)$.
Next we assume that $F(x)$ is a contraction over $[a^*, b]$, i.e.,
$$|F(x) - F(y)| \le \lambda |x - y|, \quad x, y \in [a^*, b],$$
with $0 < \lambda < \frac{1}{2}$. Equivalently, we have
$$|D_{*a}^\alpha G(x) - D_{*a}^\alpha G(y)| \le 2 \lambda\, D_{*a}^\alpha G(x_2)\, |x - y|, \quad x, y \in [a^*, b].$$
We observe that
$$|F(y) - F(x) - A(x)(y - x)| \le |F(y) - F(x)| + A(x) |y - x| \le \lambda |y - x| + A(x) |y - x| = (\lambda + A(x)) |y - x| =: \xi_2, \quad x, y \in [a^*, b].$$
We observe that
$$|D_{*a}^\alpha G(x)| \le \frac{1}{\Gamma(m - \alpha)} \int_a^x (x - t)^{m - \alpha - 1} |G^{(m)}(t)|\, dt \le \frac{1}{\Gamma(m - \alpha)} \int_a^x (x - t)^{m - \alpha - 1} dt\, \|G^{(m)}\|_\infty = \frac{(x - a)^{m - \alpha}}{\Gamma(m - \alpha + 1)} \|G^{(m)}\|_\infty \le \frac{(b - a)^{m - \alpha}}{\Gamma(m - \alpha + 1)} \|G^{(m)}\|_\infty.$$
That is,
$$|D_{*a}^\alpha G(x)| \le \frac{(b - a)^{m - \alpha}}{\Gamma(m - \alpha + 1)} \|G^{(m)}\|_\infty < \infty, \quad x \in [a, b].$$
Hence, for all $x \in [a^*, b]$ we get that
$$A(x) = \frac{D_{*a}^\alpha G(x)}{2\, D_{*a}^\alpha G(x_2)} \le \frac{(b - a)^{m - \alpha} \|G^{(m)}\|_\infty}{2\, \Gamma(m - \alpha + 1)\, D_{*a}^\alpha G(x_2)} < \infty.$$
Consequently, we observe that
$$\xi_2 \le \left( \lambda + \frac{(b - a)^{m - \alpha} \|G^{(m)}\|_\infty}{2\, \Gamma(m - \alpha + 1)\, D_{*a}^\alpha G(x_2)} \right) |y - x|, \quad x, y \in [a^*, b].$$
Call
$$0 < \gamma_1 := \lambda + \frac{(b - a)^{m - \alpha} \|G^{(m)}\|_\infty}{2\, \Gamma(m - \alpha + 1)\, D_{*a}^\alpha G(x_2)}.$$
Choosing $b - a$ small enough, we can make $\gamma_1 \in (0, 1)$, so Equation (5.2) is fulfilled.
Finally, we need
$$0 < \gamma := \gamma_0 + \gamma_1 = 1 - \frac{D_{*a}^\alpha G(x_1)}{2\, D_{*a}^\alpha G(x_2)} + \lambda + \frac{(b - a)^{m - \alpha} \|G^{(m)}\|_\infty}{2\, \Gamma(m - \alpha + 1)\, D_{*a}^\alpha G(x_2)} < 1,$$
equivalently,
$$\lambda + \frac{(b - a)^{m - \alpha} \|G^{(m)}\|_\infty}{2\, \Gamma(m - \alpha + 1)\, D_{*a}^\alpha G(x_2)} < \frac{D_{*a}^\alpha G(x_1)}{2\, D_{*a}^\alpha G(x_2)},$$
equivalently,
$$2 \lambda\, D_{*a}^\alpha G(x_2) + \frac{(b - a)^{m - \alpha}}{\Gamma(m - \alpha + 1)} \|G^{(m)}\|_\infty < D_{*a}^\alpha G(x_1),$$
which is possible for small $\lambda$ and $b - a$. That is, $\gamma \in (0, 1)$, fulfilling Equation (5.3). Hence the equation $D_{*a}^\alpha G(x) = 0$ can be solved with our presented numerical methods.
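As a numerical sanity check of the Caputo derivative used above, the sketch below (hypothetical choices $G(x) = x^2$, $a = 0$ and $0 < \alpha < 1$, so $m = 1$; a simple midpoint quadrature) compares the discretized derivative against the known closed form $D_{*0}^\alpha x^2 = 2 x^{2 - \alpha} / \Gamma(3 - \alpha)$:

```python
import math

def caputo(dG, a, alpha, x, n=20000):
    # For 0 < alpha < 1 (so m = 1):
    # D_{*a}^alpha G(x) = (1/Gamma(1 - alpha)) int_a^x (x - t)^(-alpha) G'(t) dt
    h = (x - a) / n
    total = 0.0
    for k in range(n):
        t = a + (k + 0.5) * h   # midpoints avoid the singularity at t = x
        total += (x - t) ** (-alpha) * dG(t) * h
    return total / math.gamma(1 - alpha)

alpha, x = 0.5, 1.0
approx = caputo(lambda t: 2.0 * t, 0.0, alpha, x)   # G(t) = t^2, G'(t) = 2t
exact = 2.0 * x ** (2 - alpha) / math.gamma(3 - alpha)
assert abs(approx - exact) < 1e-2
```

The crude rule converges slowly near the endpoint singularity, so the tolerance is loose; a graded mesh would do much better.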

6. Conclusions

We presented a convergence analysis for Newton-type methods under weaker convergence criteria than in earlier studies with applications in fractional calculus.

Acknowledgments

We would like to express our gratitude to all the reviewers for their constructive criticism of this paper.

Author Contributions

The contributions of both authors have been similar. Authors have worked together to develop the present manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Amat, S.; Busquier, S. Third-order iterative methods under Kantorovich conditions. J. Math. Anal. Appl. 2007, 336, 243–261.
2. Amat, S.; Busquier, S.; Plaza, S. Chaotic dynamics of a third-order Newton-type method. J. Math. Anal. Appl. 2010, 366, 164–174.
3. Anastassiou, G. Fractional Differentiation Inequalities; Springer: New York, NY, USA, 2009.
4. Anastassiou, G. Intelligent Mathematics: Computational Analysis; Springer: Heidelberg, Germany, 2011.
5. Argyros, I.K. Newton-like methods in partially ordered linear spaces. J. Approx. Theory Appl. 1993, 9, 1–10.
6. Argyros, I.K. Results on controlling the residuals of perturbed Newton-like methods on Banach spaces with a convergence structure. Southwest J. Pure Appl. Math. 1995, 1, 32–38.
7. Argyros, I.K. Convergence and Applications of Newton-Type Iterations; Springer: New York, NY, USA, 2008.
8. Diethelm, K. The Analysis of Fractional Differential Equations; Lecture Notes in Mathematics, 1st ed.; Springer: New York, NY, USA, 2010.
9. Ezquerro, J.A.; Gutiérrez, J.M.; Hernández, M.Á.; Romero, N.; Rubio, M.J. The Newton method: From Newton to Kantorovich (Spanish). Gac. R. Soc. Mat. Esp. 2010, 13, 53–76.
10. Ezquerro, J.A.; Hernández, M.Á. Newton-type methods of high order and domains of semilocal and global convergence. Appl. Math. Comput. 2009, 214, 142–154.
11. Kantorovich, L.V.; Akilov, G.P. Functional Analysis in Normed Spaces; Pergamon Press: New York, NY, USA, 1964.
12. Magreñán, Á.A. Different anomalies in a Jarratt family of iterative root-finding methods. Appl. Math. Comput. 2014, 233, 29–38.
13. Magreñán, Á.A. A new tool to study real dynamics: The convergence plane. Appl. Math. Comput. 2014, 248, 215–224.
14. Meyer, P.W. Newton's method in generalized Banach spaces. Numer. Funct. Anal. Optim. 1987, 3/4, 244–259.
15. Potra, F.A.; Ptak, V. Nondiscrete Induction and Iterative Processes; Pitman Publishing: London, UK, 1984.
16. Proinov, P.D. New general convergence theory for iterative processes and its applications to Newton–Kantorovich type theorems. J. Complexity 2010, 26, 3–42.