Article

An Iterative Algorithm to Approximate Fixed Points of Non-Linear Operators with an Application

by Maryam Gharamah Alshehri 1, Faizan Ahmad Khan 1,* and Faeem Ali 2,*
1 Computational & Analytical Mathematics and Their Applications Research Group, Department of Mathematics, Faculty of Science, University of Tabuk, Tabuk 71491, Saudi Arabia
2 Department of Mathematics, Maulana Azad National Institute of Technology, Bhopal 462003, India
* Authors to whom correspondence should be addressed.
Submission received: 5 February 2022 / Revised: 3 March 2022 / Accepted: 29 March 2022 / Published: 1 April 2022
(This article belongs to the Section Computational and Applied Mathematics)

Abstract: In this article, we study the JF iterative algorithm to approximate the fixed points of a non-linear operator that satisfies condition (E) in uniformly convex Banach spaces. Further, some weak and strong convergence results are presented for the same operator using the JF iterative algorithm. We also demonstrate that the JF iterative algorithm is weakly $w^2$-stable with respect to almost contractions. In connection with our results, we provide some illustrative numerical examples to show that the JF iterative algorithm converges faster than some well-known iterative algorithms. Finally, we apply the JF iterative algorithm to estimate the solution of a functional non-linear integral equation. The results of the present manuscript generalize and extend the results in the existing literature and will draw the attention of researchers.

1. Introduction

Fixed point theory has an eminent position in pure and applied mathematics because it has a variety of applications in different fields within mathematics, such as differential and integral equations, variational inequalities, approximation theory, etc. The application of fixed point results is not merely confined to mathematics, but is also relevant in other fields, such as statistics, computer sciences, chemical sciences, physical sciences, economics, biological sciences, medical sciences, engineering, game theory, etc. (see, e.g., [1,2]). It is a domain that is of great interest in two research directions: the first is to find progressively wider classes of mappings and conditions under which the existence of fixed points can be proved; the second is to define iterative algorithms for the approximation of the fixed points of these mappings, as it is not always an easy task to approximate the fixed points using direct methods.
The fundamental result in metric fixed point theory is the Banach contraction principle, which was first introduced in the literature in 1922. This result guarantees the existence and uniqueness of the fixed point of a contraction mapping in a complete metric space. It not only demonstrates the existence and uniqueness of a fixed point, but also allows the Picard iterative algorithm to converge to that fixed point. Further, on account of its simplicity, utility and applicability, the Banach contraction principle has become an extremely well-known tool in solving existence problems in numerous branches of mathematical analysis. As such, several authors have improved, extended and generalized the Banach contraction principle. One of the most important generalizations of the Banach contraction principle was produced by Berinde [3] in 2003. He defined almost contraction mappings as follows.
A self-mapping $G$ defined on a non-empty subset $S$ of a Banach space $B$ is called an almost contraction when constants $\delta \in (0, 1)$ and $L \ge 0$ exist in such a way that:
$$\|Gx - Gy\| \le \delta \|x - y\| + L \|y - Gx\|, \quad \forall\, x, y \in S. \tag{1}$$
It is worth mentioning here that condition (1) only ensures the existence of a fixed point of an almost contraction (see, [3]). For the uniqueness of the fixed point of an almost contraction, he proved the following result.
Theorem 1
([3]). Let $(B, d)$ be a complete metric space and $G : B \to B$ be an almost contraction satisfying (1). When constants $\delta \in (0, 1)$ and $L \ge 0$ exist in such a way that:
$$d(Gx, Gy) \le \delta\, d(x, y) + L\, d(x, Gx), \quad \forall\, x, y \in B, \tag{2}$$
then $G$ has a unique fixed point $t \in B$.
Berinde has also shown that almost contractions include the classes of Kannan [4], Chatterjea [5] and Zamfirescu [6] mappings.
A widely studied extension of contraction mappings is the class of non-expansive mappings, which arises naturally from isometries and metric projections and is therefore vast. A self-mapping $G$ defined on a non-empty subset $S$ of a Banach space $B$ is said to be non-expansive when:
$$\|Gx - Gy\| \le \|x - y\|, \quad \forall\, x, y \in S.$$
The fixed point theory for non-expansive mappings has a variety of applications in convex feasibility problems, convex optimization problems, monotone inequality problems, image restoration, etc. Due to its applicability, a large number of eminent researchers have generalized and extended this theory to larger classes of non-linear mappings. One of the most important generalizations of non-expansive mappings was produced by Garcia-Falset et al. [7] in 2011, which is defined as follows.
Definition 1
([7]). Let $S$ be a non-empty subset of a Banach space $B$ and $\mu \ge 1$. An operator $G : S \to B$ is said to satisfy condition $(E_\mu)$ when:
$$\|x - Gy\| \le \mu \|x - Gx\| + \|x - y\|, \quad \forall\, x, y \in S.$$
Moreover, $G$ is said to satisfy condition $(E)$ when $G$ satisfies condition $(E_\mu)$ for some $\mu \ge 1$.
It can be easily seen that when $G : S \to B$ is a non-expansive mapping, it satisfies condition $(E_\mu)$ with $\mu = 1$. It is worth mentioning here that the class of operators that satisfy condition $(E)$ properly includes the classes of Hardy and Rogers mappings [8], mappings satisfying Suzuki's condition $(C)$ [9], generalized $\alpha$-non-expansive mappings [10] and generalized $\alpha$-Reich–Suzuki non-expansive mappings [11].
In many instances, it is not possible to find the exact solution of fixed point problems. Therefore, iterative algorithms are used to approximate the solutions of the fixed point problems. Thus, a large number of iterative algorithms have been introduced and studied for the approximation of solutions to fixed point problems (see, e.g., [12,13,14,15,16,17,18,19,20,21,22], etc).
Very recently, Ali et al. [23] introduced a new iterative algorithm called the JF iterative algorithm, which is defined as follows.
Let $S$ be a non-empty, closed and convex subset of a Banach space $B$ and let $G : S \to S$ be a mapping. Then, the sequence $\{\tau_n\}$ is generated by an initial point $\tau_0 \in S$ and defined by:
$$\begin{cases} \tau_{n+1} = G((1 - \mu_n)\sigma_n + \mu_n G\sigma_n), \\ \sigma_n = G\xi_n, \\ \xi_n = G((1 - \theta_n)\tau_n + \theta_n G\tau_n), \end{cases} \qquad n \in \mathbb{Z}^+, \tag{4}$$
where $\{\mu_n\}$ and $\{\theta_n\}$ are control sequences in $(0, 1)$ and $\mathbb{Z}^+$ denotes the set of non-negative integers. They pointed out that the JF iterative algorithm is independent of all other iterative algorithms that have previously been defined in the literature. They produced some weak and strong convergence results for Hardy and Rogers generalized non-expansive mappings using the JF iterative algorithm in uniformly convex Banach spaces. They also showed numerically that the JF iterative algorithm converges to the fixed point of Hardy and Rogers generalized non-expansive mappings faster than some other remarkable iterative algorithms.
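To make the scheme concrete, the following short Python sketch runs iteration (4) for a user-supplied mapping; the sample mapping, the constant control values and the stopping tolerance in the snippet are illustrative assumptions and are not taken from the paper.

```python
# A minimal sketch of the JF iteration (4). The mapping G, the constant
# control sequences mu, theta and the stopping tolerance are illustrative
# assumptions, not prescribed by the paper.
def jf_iteration(G, tau0, mu=0.5, theta=0.5, max_iter=100, tol=1e-10):
    """Return the list of JF iterates tau_0, tau_1, ... as in (4)."""
    tau = tau0
    history = [tau]
    for _ in range(max_iter):
        xi = G((1 - theta) * tau + theta * G(tau))        # xi_n
        sigma = G(xi)                                     # sigma_n
        tau_next = G((1 - mu) * sigma + mu * G(sigma))    # tau_{n+1}
        history.append(tau_next)
        if abs(tau_next - tau) < tol:
            break
        tau = tau_next
    return history

# Example usage with a toy contraction G(x) = x/2, whose fixed point is 0:
print(jf_iteration(lambda x: x / 2.0, tau0=1.0)[:5])
```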
On the other hand, many scientific and engineering problems are presented in the form of non-linear integral equations. Many initial and boundary value problems can be transformed into Fredholm or Volterra non-linear integral equations. The solutions of non-linear integral equations may exist only locally and can exhibit blow-up phenomena (see [24,25]). In Section 5, we apply the JF iterative method to approximate the solution of a non-linear integral equation in the setting of a Banach space.
Inspired by the above studies, in the current manuscript we aim to prove that the JF iterative algorithm is weakly $w^2$-stable with respect to almost contractions. Further, we present some weak and strong convergence results for operators that satisfy condition $(E)$ using the JF iterative algorithm in uniformly convex Banach spaces. We numerically show that the JF iterative algorithm converges to the fixed point of operators that satisfy condition $(E)$ faster than the Mann, Ishikawa, Noor, SP, S and Picard-S iterative algorithms. Finally, we approximate the solution of a mixed Volterra–Fredholm functional non-linear integral equation. The results of the present manuscript generalize and extend the results in the existing literature, particularly those of [20,23].

2. Preliminaries

The aim of this section is to set out some lemmas and definitions that are used in this paper.
Lemma 1
([26]). Let $\{u_n\}$ and $\{\epsilon_n\}$ be sequences in $\mathbb{R}^+$ that satisfy the following inequality:
$$u_{n+1} \le (1 - v_n) u_n + \epsilon_n,$$
where $v_n \in (0, 1)$ for all $n \in \mathbb{Z}^+$ with $\sum_{n=0}^{\infty} v_n = \infty$. When $\lim_{n \to \infty} \frac{\epsilon_n}{v_n} = 0$, then $\lim_{n \to \infty} u_n = 0$.
Definition 2
([27]). A Banach space $B$ is said to satisfy Opial's property when, for every sequence $\{\tau_n\}$ in $B$ that converges weakly to $x \in B$,
$$\liminf_{n \to \infty} \|\tau_n - x\| < \liminf_{n \to \infty} \|\tau_n - y\|$$
holds for all $y \in B$ with $y \ne x$.
Example 1.
All Hilbert spaces and $\ell^p$ $(1 < p < \infty)$ spaces satisfy Opial's property, while $L^p[0, 2\pi]$ $(1 < p \ne 2)$ spaces do not satisfy Opial's property.
Definition 3
([28]). An operator $G : S \to S$ satisfies condition $(I)$ when a non-decreasing function $\psi : [0, \infty) \to [0, \infty)$ exists with $\psi(0) = 0$ and $\psi(y) > 0$ for all $y > 0$, such that $\|y - Gy\| \ge \psi(d(y, F(G)))$ for all $y \in S$, where $F(G) = \{t \in S : Gt = t\}$ and $d(y, F(G)) = \inf\{\|y - t\| : t \in F(G)\}$.
Definition 4
([29]). Let $S$ be a non-empty subset of a Banach space $B$. Two sequences $\{\tau_n\}$ and $\{t_n\}$ in $S$ are said to be equivalent when:
$$\lim_{n \to \infty} \|\tau_n - t_n\| = 0.$$
Definition 5
([30]). Let $S$ be a non-empty subset of a Banach space $B$, let $G : S \to S$ be a mapping with at least one fixed point $t$ and let $\{\tau_n\}$ be a sequence defined by:
$$\tau_0 \in S, \quad \tau_{n+1} = h(G, \tau_n), \quad n \in \mathbb{Z}^+,$$
where $h$ is a function of $G$ and $\tau_n$. Assume that the sequence $\{\tau_n\}$ converges to a fixed point $t$ of $G$ and that $\{t_n\}$ is an equivalent sequence of $\{\tau_n\}$ in $S$. When:
$$\lim_{n \to \infty} \|t_{n+1} - h(G, t_n)\| = 0 \implies \lim_{n \to \infty} t_n = t,$$
then the iterative sequence $\{\tau_n\}$ is called weakly $w^2$-stable with respect to $G$.
Definition 6.
Let $S$ be a non-empty, closed and convex subset of $B$, let $\{\tau_n\}$ be a bounded sequence in $B$ and, for $x \in S$, let:
$$r(x, \{\tau_n\}) = \limsup_{n \to \infty} \|\tau_n - x\|.$$
The asymptotic radius and asymptotic center of $\{\tau_n\}$ relative to $S$ are defined, respectively, by:
$$r(S, \{\tau_n\}) = \inf\{r(x, \{\tau_n\}) : x \in S\},$$
$$A(S, \{\tau_n\}) = \{x \in S : r(x, \{\tau_n\}) = r(S, \{\tau_n\})\}.$$
When $B$ is a uniformly convex Banach space, the set $A(S, \{\tau_n\})$ is a singleton.
Lemma 2
([31]). Assume $B$ is a uniformly convex Banach space and $0 < a \le s_n \le b < 1$ for all $n \ge 1$. Let $\{\tau_n\}$ and $\{\sigma_n\}$ be two sequences in $B$ that satisfy $\limsup_{n \to \infty} \|\tau_n\| \le w$, $\limsup_{n \to \infty} \|\sigma_n\| \le w$ and $\lim_{n \to \infty} \|s_n \tau_n + (1 - s_n)\sigma_n\| = w$ for some $w \ge 0$. Then, $\lim_{n \to \infty} \|\tau_n - \sigma_n\| = 0$.
Lemma 3
([7]). Let $S$ be a non-empty, closed and convex subset of a uniformly convex Banach space $B$ that satisfies Opial's property. Let $G : S \to B$ be an operator that satisfies condition $(E)$. When the sequence $\{\tau_n\}$ converges weakly to $t$ and $\lim_{n \to \infty} \|\tau_n - G\tau_n\| = 0$, then $t \in F(G)$.

3. Weak $w^2$-Stability of the JF Iterative Algorithm

The purpose of this section is to prove the convergence and stability results for the JF iterative algorithm with respect to almost contractions in an arbitrary Banach space. The following theorem shows the convergence and stability of the iterative algorithm (4) for almost contractions.
Theorem 2.
Let $S$ be a non-empty, closed and convex subset of a Banach space $B$ and let $G : S \to S$ be an almost contraction that satisfies inequality (2). Then, the sequence $\{\tau_n\}$ defined by the iterative algorithm (4) converges to the unique fixed point of $G$. Moreover, the iterative sequence $\{\tau_n\}$ is weakly $w^2$-stable with respect to the almost contraction.
Proof. 
Since $G$ is an almost contraction that satisfies inequality (2), a constant $\beta \in [0, 1)$ exists in such a way that for all $x \in S$ and $t \in F(G) = \{t \in S : Gt = t\}$:
$$\|Gx - Gt\| = \|Gx - t\| \le \beta \|x - t\|.$$
Using iterative algorithm (4), we obtain:
$$\begin{aligned} \|\xi_n - t\| &= \|G((1 - \theta_n)\tau_n + \theta_n G\tau_n) - t\| \le \beta \|(1 - \theta_n)\tau_n + \theta_n G\tau_n - t\| \\ &\le \beta \big((1 - \theta_n)\|\tau_n - t\| + \theta_n \|G\tau_n - t\|\big) \le \beta \big((1 - \theta_n)\|\tau_n - t\| + \beta \theta_n \|\tau_n - t\|\big) \\ &= \beta (1 - (1 - \beta)\theta_n)\|\tau_n - t\|. \end{aligned}$$
Since $0 \le \beta < 1$ and $\theta_n \in (0, 1)$, and using the fact that $0 < (1 - (1 - \beta)\theta_n) \le 1$, we obtain:
$$\|\xi_n - t\| \le \beta \|\tau_n - t\|. \tag{5}$$
Using Equation (5), we obtain:
$$\|\sigma_n - t\| = \|G\xi_n - t\| \le \beta \|\xi_n - t\| \le \beta^2 \|\tau_n - t\|. \tag{6}$$
Using Equation (6), we obtain:
$$\begin{aligned} \|\tau_{n+1} - t\| &= \|G((1 - \mu_n)\sigma_n + \mu_n G\sigma_n) - t\| \le \beta \|(1 - \mu_n)\sigma_n + \mu_n G\sigma_n - t\| \\ &\le \beta \big((1 - \mu_n)\|\sigma_n - t\| + \mu_n \|G\sigma_n - t\|\big) \le \beta \big((1 - \mu_n)\|\sigma_n - t\| + \beta \mu_n \|\sigma_n - t\|\big) \\ &\le \beta (1 - (1 - \beta)\mu_n)\|\sigma_n - t\| \le \beta^3 \|\tau_n - t\|. \end{aligned}$$
Inductively, we then obtain:
$$\|\tau_{n+1} - t\| \le \beta^{3(n+1)} \|\tau_0 - t\|.$$
Since $0 \le \beta < 1$, it can be concluded that $\{\tau_n\}$ converges to $t$.
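For illustration (the numbers in this remark are chosen only as an example and are not taken from the text), if $\beta = \frac{1}{2}$ and $\|\tau_0 - t\| = 1$, the above estimate gives $\|\tau_n - t\| \le \beta^{3n} = 8^{-n}$, so six iterations already guarantee $\|\tau_6 - t\| \le 8^{-6} \approx 3.8 \times 10^{-6}$; the cubed contraction factor per step is what makes the scheme converge so quickly for almost contractions.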
Now, we aim to prove the stability of the iterative algorithm (4). Let $\{t_n\}$ be an equivalent sequence of $\{\tau_n\}$ in $S$, let the sequence that is defined by the iterative algorithm (4) be written as $\tau_{n+1} = h(G, \tau_n)$ and assume $\epsilon_n = \|t_{n+1} - h(G, t_n)\|$ for all $n \in \mathbb{Z}^+$. Now, we show that $\lim_{n \to \infty} \epsilon_n = 0 \implies \lim_{n \to \infty} t_n = t$.
Let $\lim_{n \to \infty} \epsilon_n = 0$. Then, using the iterative algorithm (4), we obtain:
$$\begin{aligned} \|t_{n+1} - t\| &\le \|t_{n+1} - h(G, t_n)\| + \|h(G, t_n) - t\| = \epsilon_n + \|h(G, t_n) - t\| \\ &\le \epsilon_n + \beta^3 (1 - (1 - \beta)\mu_n)\|t_n - t\|. \end{aligned}$$
By defining $u_n = \|t_n - t\|$ and $v_n = (1 - \beta)\mu_n \in (0, 1)$, we obtain:
$$u_{n+1} \le \beta^3 (1 - v_n) u_n + \epsilon_n \le (1 - v_n) u_n + \epsilon_n.$$
Since $\lim_{n \to \infty} \epsilon_n = 0$, then $\frac{\epsilon_n}{v_n} \to 0$ as $n \to \infty$. Thus, according to Lemma 1, $\lim_{n \to \infty} u_n = 0$, i.e., $\lim_{n \to \infty} t_n = t$. Thus, the iterative sequence that is defined by algorithm (4) is weakly $w^2$-stable with respect to the almost contraction. □

4. Convergence Results for Non-Linear Operators Satisfying Condition (E)

The purpose of this section is to prove convergence results for operators that satisfy condition $(E)$ in uniformly convex Banach spaces. First, we prove the following lemmas, which help us to obtain these results. Throughout this section, it is assumed that $S$ is a non-empty, closed and convex subset of a uniformly convex Banach space $B$, $G : S \to S$ is an operator that satisfies condition $(E)$ and $F(G) = \{t \in S : Gt = t\}$.
Lemma 4.
Assume that $F(G) \ne \emptyset$ and let $\{\tau_n\}$ be a sequence that is developed by the iterative algorithm (4). Then, $\lim_{n \to \infty} \|\tau_n - t\|$ exists for all $t \in F(G)$.
Proof. 
As the operator $G$ satisfies condition $(E)$ and $F(G) \ne \emptyset$, for $t \in F(G)$ we obtain:
$$\|G\tau_n - t\| \le \|\tau_n - t\|,$$
for all $\tau_n \in S$. Using the iterative algorithm (4), we obtain:
$$\begin{aligned} \|\xi_n - t\| &= \|G((1 - \theta_n)\tau_n + \theta_n G\tau_n) - t\| \le \|(1 - \theta_n)\tau_n + \theta_n G\tau_n - t\| \\ &\le (1 - \theta_n)\|\tau_n - t\| + \theta_n \|G\tau_n - t\| \le (1 - \theta_n)\|\tau_n - t\| + \theta_n \|\tau_n - t\| = \|\tau_n - t\|. \end{aligned} \tag{9}$$
Using Equation (9), we obtain:
$$\|\sigma_n - t\| = \|G\xi_n - t\| \le \|\xi_n - t\| \le \|\tau_n - t\|. \tag{10}$$
Using Equation (10), we obtain:
$$\begin{aligned} \|\tau_{n+1} - t\| &= \|G((1 - \mu_n)\sigma_n + \mu_n G\sigma_n) - t\| \le \|(1 - \mu_n)\sigma_n + \mu_n G\sigma_n - t\| \\ &\le (1 - \mu_n)\|\sigma_n - t\| + \mu_n \|G\sigma_n - t\| \le (1 - \mu_n)\|\sigma_n - t\| + \mu_n \|\sigma_n - t\| \\ &\le (1 - \mu_n)\|\tau_n - t\| + \mu_n \|\tau_n - t\| = \|\tau_n - t\|. \end{aligned}$$
This shows that the sequence $\{\|\tau_n - t\|\}$ is non-increasing and bounded below for all $t \in F(G)$. Thus, $\lim_{n \to \infty} \|\tau_n - t\|$ exists. □
Lemma 5.
Let $\{\tau_n\}$ be a sequence that is developed by the iterative algorithm (4). Then, $F(G) \ne \emptyset$ when, and only when, $\{\tau_n\}$ is bounded and $\lim_{n \to \infty} \|\tau_n - G\tau_n\| = 0$.
Proof. 
Presume that $F(G) \ne \emptyset$ and $t \in F(G)$. Then, $\lim_{n \to \infty} \|\tau_n - t\|$ exists according to Lemma 4 and $\{\tau_n\}$ is bounded. Presume that:
$$\lim_{n \to \infty} \|\tau_n - t\| = \alpha. \tag{12}$$
From Equations (9), (10) and (12), we obtain:
$$\limsup_{n \to \infty} \|\xi_n - t\| \le \limsup_{n \to \infty} \|\tau_n - t\| = \alpha, \tag{13}$$
$$\limsup_{n \to \infty} \|\sigma_n - t\| \le \limsup_{n \to \infty} \|\tau_n - t\| = \alpha. \tag{14}$$
Since G satisfies condition ( E ) , we obtain:
$$\|G\tau_n - t\| \le \|\tau_n - t\| \implies \limsup_{n \to \infty} \|G\tau_n - t\| \le \limsup_{n \to \infty} \|\tau_n - t\| = \alpha. \tag{15}$$
Now:
$$\begin{aligned} \|\tau_{n+1} - t\| &= \|G((1 - \mu_n)\sigma_n + \mu_n G\sigma_n) - t\| \le \|(1 - \mu_n)\sigma_n + \mu_n G\sigma_n - t\| \\ &\le (1 - \mu_n)\|\sigma_n - t\| + \mu_n \|G\sigma_n - t\| \le (1 - \mu_n)\|\sigma_n - t\| + \mu_n \|\sigma_n - t\| = \|\sigma_n - t\|. \end{aligned}$$
Taking the limit inferior as $n \to \infty$ on both sides, we obtain:
$$\alpha = \liminf_{n \to \infty} \|\tau_{n+1} - t\| \le \liminf_{n \to \infty} \|\sigma_n - t\|. \tag{16}$$
So, it follows from (14) and (16) that:
$$\alpha \le \liminf_{n \to \infty} \|\sigma_n - t\| \le \limsup_{n \to \infty} \|\sigma_n - t\| \le \alpha \implies \lim_{n \to \infty} \|\sigma_n - t\| = \alpha.$$
Additionally:
$$\|\sigma_n - t\| = \|G\xi_n - t\| \le \|\xi_n - t\|.$$
Taking the limit inferior as $n \to \infty$ on both sides, we obtain:
$$\alpha = \liminf_{n \to \infty} \|\sigma_n - t\| \le \liminf_{n \to \infty} \|\xi_n - t\|. \tag{18}$$
So, it follows from (13) and (18) that:
$$\alpha \le \liminf_{n \to \infty} \|\xi_n - t\| \le \limsup_{n \to \infty} \|\xi_n - t\| \le \alpha \implies \lim_{n \to \infty} \|\xi_n - t\| = \alpha.$$
Thus:
$$\begin{aligned} \alpha = \lim_{n \to \infty} \|\xi_n - t\| &= \lim_{n \to \infty} \|G((1 - \theta_n)\tau_n + \theta_n G\tau_n) - t\| \le \lim_{n \to \infty} \|(1 - \theta_n)\tau_n + \theta_n G\tau_n - t\| \\ &= \lim_{n \to \infty} \|(1 - \theta_n)(\tau_n - t) + \theta_n (G\tau_n - t)\| \\ &\le \lim_{n \to \infty} \big((1 - \theta_n)\|\tau_n - t\| + \theta_n \|G\tau_n - t\|\big) \\ &\le \lim_{n \to \infty} \big((1 - \theta_n)\|\tau_n - t\| + \theta_n \|\tau_n - t\|\big) = \alpha. \end{aligned}$$
Hence:
$$\lim_{n \to \infty} \|(1 - \theta_n)(\tau_n - t) + \theta_n (G\tau_n - t)\| = \alpha. \tag{20}$$
From (13), (15) and (20) and using Lemma 2, we obtain:
$$\lim_{n \to \infty} \|\tau_n - G\tau_n\| = 0.$$
Conversely, assume that $\{\tau_n\}$ is bounded and $\lim_{n \to \infty} \|\tau_n - G\tau_n\| = 0$. Let $t \in A(S, \{\tau_n\})$, then we obtain:
$$\begin{aligned} r(Gt, \{\tau_n\}) = \limsup_{n \to \infty} \|\tau_n - Gt\| &\le \limsup_{n \to \infty} \big(\|\tau_n - t\| + \mu \|G\tau_n - \tau_n\|\big) \\ &= \limsup_{n \to \infty} \|\tau_n - t\| = r(t, \{\tau_n\}) = r(S, \{\tau_n\}). \end{aligned}$$
This implies that $Gt \in A(S, \{\tau_n\})$. Since $B$ is uniformly convex, $A(S, \{\tau_n\})$ is a singleton, which implies that $Gt = t$. □
Now, we aim to prove the following weak convergence theorem for the operators that satisfy condition ( E ) using the iterative algorithm (4).
Theorem 3.
Presume that $F(G) \ne \emptyset$ and that $B$ satisfies Opial's property. Then, the sequence $\{\tau_n\}$ that is defined by the iterative algorithm (4) converges weakly to a fixed point of the operator $G$.
Proof. 
In Lemma 4, we demonstrated that $\lim_{n \to \infty} \|\tau_n - t\|$ exists. Now, we have to show that $\{\tau_n\}$ has a unique weak subsequential limit in $F(G)$. Let $t$ and $q$ be two weak limits of the subsequences $\{\tau_{n_j}\}$ and $\{\tau_{n_k}\}$ of $\{\tau_n\}$, respectively. According to Lemma 5, $\lim_{n \to \infty} \|\tau_n - G\tau_n\| = 0$ and therefore, using Lemma 3, $t \in F(G)$ and, similarly, $q \in F(G)$.
Now, our aim is to show that $t = q$. When $t \ne q$, then using Opial's property, we obtain:
$$\begin{aligned} \lim_{n \to \infty} \|\tau_n - t\| = \lim_{j \to \infty} \|\tau_{n_j} - t\| &< \lim_{j \to \infty} \|\tau_{n_j} - q\| = \lim_{n \to \infty} \|\tau_n - q\| \\ &= \lim_{k \to \infty} \|\tau_{n_k} - q\| < \lim_{k \to \infty} \|\tau_{n_k} - t\| = \lim_{n \to \infty} \|\tau_n - t\|, \end{aligned}$$
which is not possible and hence, $t = q$. It can be deduced that $\{\tau_n\}$ converges weakly to $t \in F(G)$. □
Theorem 4.
The sequence $\{\tau_n\}$ that is defined by the iterative algorithm (4) converges strongly to $t \in F(G)$ when, and only when, $\liminf_{n \to \infty} d(\tau_n, F(G)) = 0$, where $d(\tau_n, F(G)) = \inf\{\|\tau_n - t\| : t \in F(G)\}$.
Proof. 
The first part is trivial. Now, we aim to prove the converse part. Presume that $\liminf_{n \to \infty} d(\tau_n, F(G)) = 0$. According to Lemma 4, $\lim_{n \to \infty} \|\tau_n - t\|$ exists for all $t \in F(G)$; therefore, it follows that $\lim_{n \to \infty} d(\tau_n, F(G)) = 0$.
Now, our claim is that $\{\tau_n\}$ is a Cauchy sequence in $S$. Since $\lim_{n \to \infty} d(\tau_n, F(G)) = 0$, for a given $\eta > 0$ there exists $M \in \mathbb{N}$ in such a way that for all $n \ge M$:
$$d(\tau_n, F(G)) < \frac{\eta}{2} \implies \inf\{\|\tau_n - t\| : t \in F(G)\} < \frac{\eta}{2}.$$
In particular, $\inf\{\|\tau_M - t\| : t \in F(G)\} < \frac{\eta}{2}$. Therefore, $t \in F(G)$ exists in such a way that:
$$\|\tau_M - t\| < \frac{\eta}{2}.$$
Now, for $m, n \ge M$:
$$\|\tau_{n+m} - \tau_n\| \le \|\tau_{n+m} - t\| + \|\tau_n - t\| \le \|\tau_M - t\| + \|\tau_M - t\| = 2\|\tau_M - t\| < \eta.$$
This implies that $\{\tau_n\}$ is a Cauchy sequence in $S$, so there is an element $\ell \in S$ such that $\lim_{n \to \infty} \tau_n = \ell$. Since $\lim_{n \to \infty} d(\tau_n, F(G)) = 0$, it follows that $d(\ell, F(G)) = 0$ and thus, we obtain $\ell \in F(G)$. □
We now aim to prove the following strong convergence result by applying condition ( I ) .
Theorem 5.
Assume that $F(G) \ne \emptyset$ and the operator $G$ satisfies condition $(I)$. Then, the sequence $\{\tau_n\}$ that is defined by the iterative algorithm (4) converges strongly to a fixed point of $G$.
Proof. 
We demonstrated in Lemma 5 that:
$$\lim_{n \to \infty} \|\tau_n - G\tau_n\| = 0. \tag{21}$$
By applying condition ( I ) and Equation (21), we obtain:
$$0 \le \lim_{n \to \infty} \psi(d(\tau_n, F(G))) \le \lim_{n \to \infty} \|\tau_n - G\tau_n\| = 0 \implies \lim_{n \to \infty} \psi(d(\tau_n, F(G))) = 0.$$
It then follows that:
$$\lim_{n \to \infty} d(\tau_n, F(G)) = 0.$$
Hence, using Theorem 4, the sequence { τ n } converges strongly to a fixed point of G . □
Now, we present the following example to support Theorem 5.
Example 2.
Let $B = \mathbb{R}$ be a Banach space with respect to the usual norm and let $S = [-2, \infty)$ be a non-empty, closed and convex subset of $B$. Let $G : S \to S$ be an operator that is defined by:
$$G(x) = \begin{cases} \dfrac{x}{4}, & \text{if } x \in [-2, \frac{1}{2}], \\[6pt] \dfrac{x}{5}, & \text{if } x \in (\frac{1}{2}, \infty). \end{cases}$$
Since $G$ is discontinuous at $x = \frac{1}{2}$ and we know that every non-expansive mapping is continuous, it follows that $G$ is not a non-expansive mapping. Now, we verify that $G$ satisfies condition $(E)$. For this, the following cases arise:
Case-I. When $x, y \in [-2, \frac{1}{2}]$, then we obtain:
$$\|x - Gy\| = \left\|x - \frac{y}{4}\right\| = \left\|x - \frac{x}{4} + \frac{x}{4} - \frac{y}{4}\right\| \le \left\|x - \frac{x}{4}\right\| + \frac{1}{4}\|x - y\| \le \frac{16}{15}\|x - Gx\| + \|x - y\|.$$
Case-II. When $x, y \in (\frac{1}{2}, \infty)$, then we obtain:
$$\|x - Gy\| = \left\|x - \frac{y}{5}\right\| = \left\|x - \frac{x}{5} + \frac{x}{5} - \frac{y}{5}\right\| \le \left\|x - \frac{x}{5}\right\| + \frac{1}{5}\|x - y\| \le \frac{16}{15}\|x - Gx\| + \|x - y\|.$$
Case-III. When $x \in [-2, \frac{1}{2}]$ and $y \in (\frac{1}{2}, \infty)$, then we obtain:
$$\begin{aligned} \|x - Gy\| = \left\|x - \frac{y}{5}\right\| &= \left\|x - \frac{x}{5} + \frac{x}{5} - \frac{y}{5}\right\| \le \left\|x - \frac{x}{5}\right\| + \frac{1}{5}\|x - y\| \\ &\le \left\|x - \frac{x}{4}\right\| + \left\|\frac{x}{4} - \frac{x}{5}\right\| + \|x - y\| = \left\|x - \frac{x}{4}\right\| + \frac{1}{15}\left\|x - \frac{x}{4}\right\| + \|x - y\| \\ &= \frac{16}{15}\left\|x - \frac{x}{4}\right\| + \|x - y\| = \frac{16}{15}\|x - Gx\| + \|x - y\|. \end{aligned}$$
Case-IV. When $x \in (\frac{1}{2}, \infty)$ and $y \in [-2, \frac{1}{2}]$, then we obtain:
$$\begin{aligned} \|x - Gy\| = \left\|x - \frac{y}{4}\right\| &= \left\|x - \frac{x}{4} + \frac{x}{4} - \frac{y}{4}\right\| \le \left\|x - \frac{x}{4}\right\| + \frac{1}{4}\|x - y\| \\ &\le \left\|x - \frac{x}{5}\right\| - \left\|\frac{x}{5} - \frac{x}{4}\right\| + \|x - y\| = \left\|x - \frac{x}{5}\right\| - \frac{1}{16}\left\|x - \frac{x}{5}\right\| + \|x - y\| \\ &= \frac{15}{16}\left\|x - \frac{x}{5}\right\| + \|x - y\| \le \frac{16}{15}\|x - Gx\| + \|x - y\|. \end{aligned}$$
Hence, in all of the above cases, $G$ satisfies condition $(E)$ with $\mu = \frac{16}{15}$ and $G$ has the fixed point $t = 0$. Thus, $F(G) = \{0\}$. Now, we consider the function $\psi(x) = \frac{x}{3}$ for $x \in [0, \infty)$, which is non-decreasing and satisfies $\psi(0) = 0$ and $\psi(x) > 0$ for all $x \in (0, \infty)$. Now:
$$d(x, F(G)) = \inf_{t \in F(G)} \|x - t\| = \|x - 0\| = |x|, \quad\text{so that}\quad \psi(d(x, F(G))) = \frac{|x|}{3}.$$
Now, we have the following cases:
Case-I. When $x \in [-2, \frac{1}{2}]$, then we obtain:
$$\|x - Gx\| = \left\|x - \frac{x}{4}\right\| = \frac{3}{4}|x| \ge \frac{|x|}{3} = \psi(d(x, F(G))).$$
Case-II. When $x \in (\frac{1}{2}, \infty)$, then we obtain:
$$\|x - Gx\| = \left\|x - \frac{x}{5}\right\| = \frac{4}{5}|x| \ge \frac{|x|}{3} = \psi(d(x, F(G))).$$
Hence, from both cases, we obtain:
$$\|x - Gx\| \ge \psi(d(x, F(G))).$$
Thus, the operator G satisfies condition ( I ) . Now, all of the assumptions of Theorem 5 are satisfied. Hence, using Theorem 5, the sequence that is defined by the JF iterative algorithm converges strongly to the fixed point t = 0 of G .
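As a small numerical companion to Theorem 5, the following Python sketch runs the JF scheme (4) on the operator of this example; the control values $\mu_n = \theta_n = 0.5$ and the starting point $\tau_0 = 5$ are arbitrary illustrative choices, not taken from the text.

```python
# A numerical check of Example 2 (a sketch): the JF iterates for the operator
# G below approach the fixed point t = 0. The control values and the starting
# point are illustrative assumptions.
def G(x):
    return x / 4.0 if -2.0 <= x <= 0.5 else x / 5.0

def jf_step(tau, mu=0.5, theta=0.5):
    xi = G((1 - theta) * tau + theta * G(tau))
    sigma = G(xi)
    return G((1 - mu) * sigma + mu * G(sigma))

tau = 5.0
for n in range(1, 8):
    tau = jf_step(tau)
    print(n, tau)   # the iterates decrease rapidly towards t = 0
```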
Now, we present the following example to compare the rate of convergence of the JF iterative algorithm with that of some other well-known iterative algorithms for operators that satisfy condition $(E)$.
Example 3.
Let $B = \mathbb{R}$ be a Banach space with respect to the usual norm and let $S = [-1, 1]$ be a subset of $B$. Let $G : S \to S$ be defined by:
$$G(x) = \begin{cases} -x, & \text{if } x \in [0, \frac{3}{4}) \cup (\frac{3}{4}, 1], \\[4pt] \frac{1}{2}\sin x, & \text{if } x \in [-1, 0), \\[4pt] 0, & \text{if } x = \frac{3}{4}. \end{cases}$$
It can easily be seen that the operator G satisfies condition ( E ) with μ = 4 .
Now, we choose the control sequences $\mu_n = 0.22$, $\theta_n = 0.65$ and $\eta_n = 0.95$ for all $n \in \mathbb{Z}^+$, with the initial estimate $\tau_0 = 0.5$, to numerically compare the rates of convergence of these remarkable iterative algorithms.
Using MATLAB 2015a, we demonstrate that the proposed iterative algorithm (4) converges to the fixed point $t = 0$ of the operator $G$ faster than the Mann, Ishikawa, S, Picard-S, Noor and SP iterative algorithms, as can easily be seen in Table 1 and Figure 1.
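For readers who wish to experiment, the sketch below is an illustrative reimplementation in Python (not the authors' MATLAB code) that compares the Mann iteration with the JF iteration (4) on the operator of Example 3. The control values $\mu_n = 0.22$ and $\theta_n = 0.65$ follow the text, but the exact way each control sequence enters every scheme is not spelled out here, so the printed values need not reproduce Table 1 exactly; both sequences nevertheless converge to the fixed point $t = 0$.

```python
# Illustrative comparison of the Mann and JF iterations on Example 3's operator;
# an assumption-laden sketch, not the authors' MATLAB implementation.
import math

def G(x):
    if x == 0.75:
        return 0.0
    return -x if 0.0 <= x <= 1.0 else 0.5 * math.sin(x)

def mann_step(tau, mu=0.22):
    return (1 - mu) * tau + mu * G(tau)

def jf_step(tau, mu=0.22, theta=0.65):
    xi = G((1 - theta) * tau + theta * G(tau))
    sigma = G(xi)
    return G((1 - mu) * sigma + mu * G(sigma))

t_mann = t_jf = 0.5
for n in range(1, 11):
    t_mann, t_jf = mann_step(t_mann), jf_step(t_jf)
    print(f"{n:2d}  Mann: {t_mann: .6f}   JF: {t_jf: .6f}")
```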

5. Application

The purpose of this section is to estimate the solution of a mixed Volterra–Fredholm functional non-linear integral equation using the iterative algorithm (4).
We consider the following non-linear integral equation (see [32]):
$$x(z) = T\Big(z, x(z), \int_{c_1}^{z_1}\!\!\cdots\!\int_{c_n}^{z_n} K(z, s, x(s))\,ds, \int_{c_1}^{d_1}\!\!\cdots\!\int_{c_n}^{d_n} H(z, s, x(s))\,ds\Big), \tag{22}$$
where $[c_1, d_1] \times \cdots \times [c_n, d_n]$ is an interval in $\mathbb{R}^n$, $z = (z_1, z_2, \ldots, z_n)$, $s = (s_1, s_2, \ldots, s_n) \in [c_1, d_1] \times \cdots \times [c_n, d_n]$, $K, H : [c_1, d_1] \times \cdots \times [c_n, d_n] \times [c_1, d_1] \times \cdots \times [c_n, d_n] \times \mathbb{R} \to \mathbb{R}$ are continuous functions and $T : [c_1, d_1] \times \cdots \times [c_n, d_n] \times \mathbb{R}^3 \to \mathbb{R}$.
Assume that the following prerequisites are satisfied:
($D_1$) $K, H \in C([c_1, d_1] \times \cdots \times [c_n, d_n] \times [c_1, d_1] \times \cdots \times [c_n, d_n] \times \mathbb{R})$;
($D_2$) $T \in C([c_1, d_1] \times \cdots \times [c_n, d_n] \times \mathbb{R}^3)$;
($D_3$) constants $\alpha, \beta, \gamma \ge 0$ exist in such a way that:
$$|T(z, u_1, u_2, u_3) - T(z, v_1, v_2, v_3)| \le \alpha |u_1 - v_1| + \beta |u_2 - v_2| + \gamma |u_3 - v_3|,$$
for all $z \in [c_1, d_1] \times \cdots \times [c_n, d_n]$ and $u_i, v_i \in \mathbb{R}$, $i = 1, 2, 3$;
($D_4$) constants $L_K \ge 0$ and $L_H \ge 0$ exist in such a way that:
$$|K(z, s, u) - K(z, s, v)| \le L_K |u - v|,$$
$$|H(z, s, u) - H(z, s, v)| \le L_H |u - v|,$$
for all $z, s \in [c_1, d_1] \times \cdots \times [c_n, d_n]$ and $u, v \in \mathbb{R}$;
($D_5$) $\alpha + (\beta L_K + \gamma L_H)(d_1 - c_1) \cdots (d_n - c_n) < 1$.
By a solution of problem (22), we mean a function $x^* \in C([c_1, d_1] \times \cdots \times [c_n, d_n])$ that satisfies (22).
The following existence result for problem (22) was proved by Crăciun and Şerban [32].
Theorem 6.
Assume that prerequisites $(D_1)$–$(D_5)$ are satisfied. Then, problem (22) has a unique solution $x^* \in C([c_1, d_1] \times \cdots \times [c_n, d_n])$.
We now demonstrate the main result of this section using the iterative algorithm (4).
Theorem 7.
Let $B = (C([c_1, d_1] \times \cdots \times [c_n, d_n]), \|\cdot\|)$ be the Banach space equipped with the Chebyshev norm. Let $\{\tau_n\}$ be a sequence that is defined by the iterative algorithm (4) for the operator $G : B \to B$, which is defined as:
$$Gx(z) = T\Big(z, x(z), \int_{c_1}^{z_1}\!\!\cdots\!\int_{c_n}^{z_n} K(z, s, x(s))\,ds, \int_{c_1}^{d_1}\!\!\cdots\!\int_{c_n}^{d_n} H(z, s, x(s))\,ds\Big), \tag{23}$$
where $T$, $K$ and $H$ are defined as above. Assume that prerequisites $(D_1)$–$(D_5)$ are satisfied. Then, the iterative algorithm (4) converges to the unique solution $x^* \in C([c_1, d_1] \times \cdots \times [c_n, d_n])$ of problem (22).
Proof. 
In Theorem 6, we saw that problem (22) has a unique solution, so let us assume that x * is the fixed point of G . Now, we aim to show that the sequence { τ n } that is defined by the JF iterative algorithm (4) converges to the solution of problem (22), i.e., x * . First, we need to show that the operator G that is defined in (23) is an almost contraction.
Presume that prerequisites $(D_1)$–$(D_4)$ are satisfied. Since $Gx^* = x^*$, for every $z \in [c_1, d_1] \times \cdots \times [c_n, d_n]$ we obtain:
$$\begin{aligned} |Gx(z) - Gx^*(z)| &= \Big| T\Big(z, x(z), \int_{c_1}^{z_1}\!\!\cdots\!\int_{c_n}^{z_n} K(z, s, x(s))\,ds, \int_{c_1}^{d_1}\!\!\cdots\!\int_{c_n}^{d_n} H(z, s, x(s))\,ds\Big) \\ &\qquad - T\Big(z, x^*(z), \int_{c_1}^{z_1}\!\!\cdots\!\int_{c_n}^{z_n} K(z, s, x^*(s))\,ds, \int_{c_1}^{d_1}\!\!\cdots\!\int_{c_n}^{d_n} H(z, s, x^*(s))\,ds\Big)\Big| \\ &\le \alpha |x(z) - x^*(z)| + \beta \int_{c_1}^{z_1}\!\!\cdots\!\int_{c_n}^{z_n} |K(z, s, x(s)) - K(z, s, x^*(s))|\,ds \\ &\qquad + \gamma \int_{c_1}^{d_1}\!\!\cdots\!\int_{c_n}^{d_n} |H(z, s, x(s)) - H(z, s, x^*(s))|\,ds \\ &\le \alpha |x(z) - x^*(z)| + \beta L_K \int_{c_1}^{z_1}\!\!\cdots\!\int_{c_n}^{z_n} |x(s) - x^*(s)|\,ds + \gamma L_H \int_{c_1}^{d_1}\!\!\cdots\!\int_{c_n}^{d_n} |x(s) - x^*(s)|\,ds \\ &\le \alpha \|x - x^*\| + \beta L_K (z_1 - c_1)\cdots(z_n - c_n)\|x - x^*\| + \gamma L_H (d_1 - c_1)\cdots(d_n - c_n)\|x - x^*\|. \end{aligned}$$
Taking the maximum over $z$ and using $(z_i - c_i) \le (d_i - c_i)$ for $i = 1, \ldots, n$, we obtain:
$$\|Gx - Gx^*\| = \|Gx - x^*\| \le \big[\alpha + (\beta L_K + \gamma L_H)(d_1 - c_1)\cdots(d_n - c_n)\big]\,\|x - x^*\|. \tag{24}$$
By using condition $(D_5)$ and defining $\delta := \alpha + (\beta L_K + \gamma L_H)(d_1 - c_1)\cdots(d_n - c_n) < 1$, for any $L \ge 0$ Equation (24) becomes:
$$\|Gx - Gx^*\| \le \delta \|x - x^*\| + L \|x - Gx\|.$$
This shows that G is an almost contraction. Hence, using Theorem 2, the sequence { τ n } that is defined by the JF iterative algorithm (4) converges to the solution of problem (22). This completes the proof. □
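As an illustration of how Theorem 7 can be used in practice, the sketch below discretizes a simple one-dimensional instance of problem (22) on a uniform grid and applies the JF scheme (4) to the resulting operator. The particular choices $T(z, u_1, u_2, u_3) = \frac{z}{3} + \frac{u_1 + u_2 + u_3}{4}$, $K(z, s, u) = s\,u$ and $H(z, s, u) = z\,u$ (which satisfy $(D_1)$–$(D_5)$ with $\alpha = \beta = \gamma = \frac{1}{4}$ and $L_K = L_H = 1$), as well as the grid, the control values and the tolerance, are illustrative assumptions and are not taken from the paper.

```python
# Sketch: JF iteration (4) applied to a discretized 1-D instance of (22).
# The choices of T, K, H, the grid and the controls are illustrative assumptions.
import numpy as np

m = 201
z = np.linspace(0.0, 1.0, m)
dz = z[1] - z[0]

def G(x):
    # Gx(z) = z/3 + x(z)/4 + (1/4) * int_0^z s*x(s) ds + (1/4) * z * int_0^1 x(s) ds
    volterra = np.cumsum(z * x) * dz      # crude quadrature of the Volterra term
    fredholm = z * (np.sum(x) * dz)       # crude quadrature of the Fredholm term
    return z / 3.0 + x / 4.0 + volterra / 4.0 + fredholm / 4.0

def jf_step(x, mu=0.5, theta=0.5):
    xi = G((1 - theta) * x + theta * G(x))
    sigma = G(xi)
    return G((1 - mu) * sigma + mu * G(sigma))

x = np.zeros(m)                           # initial guess tau_0 = 0
for k in range(100):
    x_new = jf_step(x)
    if np.max(np.abs(x_new - x)) < 1e-12:
        break
    x = x_new
print(f"JF steps used: {k + 1}, final update size: {np.max(np.abs(x_new - x)):.2e}")
```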

6. Conclusions

The purpose of this manuscript was to study a well-known and effective iterative algorithm to approximate the fixed points of non-linear operators that satisfy condition $(E)$ within the context of Banach spaces. It is well known that the class of operators that satisfy condition $(E)$ includes the classes of mappings that satisfy Suzuki's condition $(C)$, Hardy and Rogers mappings, generalized $\alpha$-non-expansive mappings, Reich–Suzuki generalized non-expansive mappings, etc. Therefore, the results of the present manuscript generalize and extend the relevant results in the existing literature (see, for example, [8,9,18,20,23]). It is also shown here that the JF iterative algorithm is weakly $w^2$-stable with respect to almost contractions. The JF iterative algorithm can be successfully implemented to approximate the solutions of non-linear integral equations. Thus, the results of the current manuscript are very useful and interesting.

Author Contributions

All authors contributed equally and significantly in writing this paper. All authors have read and approved the final version of the manuscript.

Funding

This research work was supported by the Deanship of Scientific Research (Project Grant Number: S-1441-0089), University of Tabuk, Tabuk, KSA.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the anonymous referees for their valuable comments and suggestions that improved the paper. The first two authors were supported by the Deanship of Scientific Research (Project Grant Number: S-1441-0089), University of Tabuk, Tabuk, KSA.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Li, Z.; Shu, X.B.; Miao, T. The existence of solutions for Sturm-Liouville differential equation with random impulses and boundary value problems. Bound. Value Probl. 2021, 2021, 97. [Google Scholar] [CrossRef]
  2. Tian, X.; Guo, S. Traveling wave solutions for nonlocal dispersal Fisher–KPP model with age structure. Appl. Math. Lett. 2022, 123, 107593. [Google Scholar] [CrossRef]
  3. Berinde, V. On the approximation of fixed points of weak contractive mappings. Carpathian Math. 2003, 19, 7–22. [Google Scholar]
  4. Kannan, R. Some results on fixed points. Bull. Calcutta Math. Soc. 1968, 10, 71–76. [Google Scholar]
  5. Chatterjea, S.K. Fixed point theorems. Comptes Rendus Acad. Bulg. Sci. 1972, 25, 727–730. [Google Scholar] [CrossRef]
  6. Zamfirescu, T. Fix point theorems in metric spaces. Arch. Math. 1972, 23, 292–298. [Google Scholar] [CrossRef]
  7. Garcia-Falset, J.; Llorens-Fuster, E.; Suzuki, T. Fixed point theory for a class of generalized nonexpansive mappings. J. Math. Anal. Appl. 2011, 375, 185–195. [Google Scholar] [CrossRef] [Green Version]
  8. Hardy, G.F.; Rogers, T.D. A generalization of a fixed point theorem of Reich. Can. Math. Bull. 1973, 16, 201–206. [Google Scholar] [CrossRef]
  9. Suzuki, T. Fixed point theorems and convergence theorems for some generalized non-expansive mappings. J. Math. Anal. Appl. 2008, 340, 1088–1095. [Google Scholar] [CrossRef] [Green Version]
  10. Pant, R.; Shukla, R. Approximating fixed points of generalized α-nonexpansive mappings in Banach spaces. Numer. Funct. Anal. Optim. 2017, 38, 248–266. [Google Scholar] [CrossRef]
  11. Pandey, R.; Pant, R.; Rakočevič, V.; Shukla, R. Approximating fixed points of a general class of nonexpansive mappings in Banach spaces with application. Results Math. 2019, 74, 7. [Google Scholar] [CrossRef]
  12. Picard, E. Memoire sur la theorie des equations aux derivees partielles et la methode des approximations successives. J. Math. Pures Appl. 1890, 6, 145–210. [Google Scholar]
  13. Mann, W.R. Mean value methods in iteration. Proc. Amer. Math. Soc. 1953, 4, 506–510. [Google Scholar] [CrossRef]
  14. Ishikawa, S. Fixed points by a new iteration method. Proc. Am. Math. Soc. 1974, 44, 147–150. [Google Scholar] [CrossRef]
  15. Noor, M.A. New approximation schemes for general variational inequalities. J. Math. Anal. Appl. 2000, 251, 217–229. [Google Scholar] [CrossRef] [Green Version]
  16. Phuengrattana, W.; Suantai, S. On the rate of convergence of Mann, Ishikawa, Noor and SP-iterations for continuous functions on an arbitrary interval. J. Comput. Appl. Math. 2011, 235, 3006–3014. [Google Scholar] [CrossRef] [Green Version]
  17. Agrawal, R.P.; O’Regan, D.; Sahu, D.R. Iterative construction of fixed points of nearly asymptotically non-expansive mappings. J. Nonlinear Convex Anal. 2007, 8, 61–79. [Google Scholar]
  18. Gürsoy, F.; Karakaya, V. A Picard-S hybrid type iteration method for solving a differential equation with retarded argument. arXiv 2014, arXiv:1403.2546v2. [Google Scholar]
  19. Thakur, B.S.; Thakur, D.; Postolache, M. A new iterative scheme for numerical reckoning fixed points of Suzuki’s generalized non-expansive mappings. Appl. Math. Comp. 2016, 275, 147–155. [Google Scholar] [CrossRef]
  20. Ali, J.; Ali, F.; Kumar, P. Approximation of fixed points for Suzuki’s generalized non-expansive mappings. Mathematics 2019, 7, 522. [Google Scholar] [CrossRef] [Green Version]
  21. Ali, J.; Ali, F. A new iterative scheme for approximating fixed points with an application to delay differential equation. J. Nonlinear Convex Anal. 2020, 21, 2151–2163. [Google Scholar]
  22. Ali, F.; Ali, J. Convergence, stability, and data dependence of a new iterative algorithm with an application. Comp. Appl. Math. 2020, 39, 267. [Google Scholar] [CrossRef]
  23. Ali, F.; Ali, J.; Nieto, J.J. Some observations on generalized non-expansive mappings with an application. Comp. Appl. Math. 2020, 39, 74. [Google Scholar] [CrossRef]
  24. Sidorov, D.N. Existence and blow-up of Kantorovich principal continuous solutions of nonlinear integral equations. Diff. Equ. 2014, 50, 1217–1224. [Google Scholar] [CrossRef]
  25. Argun, R.; Gorbachev, A.; Lukyanenko, D.; Shishlenin, M. On some features of the numerical solving of coefficient inverse problems for an equation of the reaction-diffusion-advection-type with data on the position of a reaction front. Mathematics 2021, 9, 2894. [Google Scholar] [CrossRef]
  26. Weng, X. Fixed point iteration for local strictly pseudocontractive mapping. Proc. Am. Math. Soc. 1991, 113, 727–731. [Google Scholar] [CrossRef]
  27. Opial, Z. Weak convergence of the sequence of successive approximations for non-expansive mappings. Bull. Am. Math. Soc. 1967, 73, 595–597. [Google Scholar] [CrossRef] [Green Version]
  28. Senter, H.F.; Dotson, W.G. Approximating fixed points of non-expansive mappings. Proc. Am. Math. Soc. 1974, 44, 375–380. [Google Scholar] [CrossRef]
  29. Cardinali, T.; Rubbioni, T. A generalization of the Caristi fixed point theorem in metric spaces. Fixed Point Theory 2010, 11, 3–10. [Google Scholar]
  30. Timis, I. On the weak stability of Picard iteration for some contractive type mappings. Ann. Univ.-Craiova-Math. Comput. Sci. Ser. 2010, 37, 106–114. [Google Scholar]
  31. Schu, J. Weak and strong convergence to fixed points of asymptotically non-expansive mappings. Bull. Austral. Math. Soc. 1991, 43, 153–159. [Google Scholar] [CrossRef] [Green Version]
  32. Crăciun, C.; Şerban, M.A. A nonlinear integral equation via Picard operators. Fixed Point Theory 2011, 12, 57–70. [Google Scholar]
Figure 1. Graphical representation of the rate of convergence of well-known iterative algorithms.
Table 1. A comparison of the rate of convergence of well-known iterative algorithms.
Iter.   Mann       Ishikawa    S           Picard-S   Noor       SP         JF
1       0.500000   0.500000    0.500000    0.500000   0.500000   0.500000   0.500000
2       0.280000   0.423000   −0.357000    0.357000   0.287150   0.110880   0.084000
9       0.004836   0.131200    0.033772    0.033772   0.005917   0.000003   0.000000
10      0.002708   0.110995   −0.024113    0.024113   0.003398   0.000001   0.000000
11      0.001517   0.093902    0.017217    0.017217   0.001951   0.000000   0.000000
25      0.000000   0.009034    0.000154    0.000154   0.000001   0.000000   0.000000
26      0.000000   0.007642   −0.000110    0.000110   0.000000   0.000000   0.000000
41      0.000000   0.000622    0.000001    0.000001   0.000000   0.000000   0.000000
42      0.000000   0.000526   −0.000001    0.000001   0.000000   0.000000   0.000000
43      0.000000   0.000445    0.000000    0.000000   0.000000   0.000000   0.000000
84      0.000000   0.000000    0.000000    0.000000   0.000000   0.000000   0.000000