Article

A Modified Viscosity-Type Self-Adaptive Iterative Algorithm for Common Solution of Split Problems with Multiple Output Sets in Hilbert Spaces

1
School of Applied Sciences and Humanities, Maharshi University of Information and Technology, Noida 201304, India
2
Department of Mathematics, Faculty of Science, University of Tabuk, P.O. Box 741, Tabuk 71491, Saudi Arabia
3
Department of Mathematical Science, College of Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
4
Department of Mathematics, Faculty of Science, Islamic University of Madinah, P.O. Box 170, Madinah 42351, Saudi Arabia
*
Authors to whom correspondence should be addressed.
Submission received: 5 September 2023 / Revised: 29 September 2023 / Accepted: 2 October 2023 / Published: 5 October 2023

Abstract

A modified viscosity-type self-adaptive iterative algorithm is presented in this study, together with a strong convergence theorem for estimating the common solution of the split generalized equilibrium problem and the split common null point problem with multiple output sets, subject to some reasonable restrictions on the control sequences. The suggested algorithm and its immediate consequences are also discussed. The effectiveness of the proposed algorithm is finally demonstrated through analytical examples. The findings presented in this paper consolidate, extend, and improve upon a number of recent findings in the literature.

1. Introduction

Suppose $(H_1,\langle\cdot,\cdot\rangle)$ and $(H_2,\langle\cdot,\cdot\rangle)$ are real Hilbert spaces and $\|\cdot\|$ denotes the induced norm on $H_1$ and $H_2$. Let $K(\neq\emptyset)\subseteq H_1$ and $D(\neq\emptyset)\subseteq H_2$ be closed and convex sets. Let $A^*:H_2\to H_1$ be the adjoint of a bounded linear operator $A:H_1\to H_2$. In 1994, Censor and Elfving [1] came out with the following split convex feasibility problem (SCFP): find $z^*\in K$ such that
$$A z^*\in D. \qquad (1)$$
The SCFP (1) was developed for the purpose of simulating particular inverse problems. It has been discovered that the SCFP (1) is helpful in the investigation of a variety of problems, including signal processing, radiation therapy treatment planning, phase retrievals, reconstruction of medical images, and many others; see [2,3]. Since then, various successive approximation methods for solving the SCFP (1) have been established and studied; see [4,5,6,7,8,9,10,11,12,13,14,15,16]. Some commonly investigated generalizations of the SCFP (1) are multiple set split feasibility problems (MSSFPs) [9], split common fixed point problems (SCFPPs) [17], split variational inequality problems (SVIPs) [8], split monotone variational inclusion problems (SMVIPs)  [18,19], and split common null point problems (SCNPPs) [20,21,22].
In 2020, the following generalization of the split feasibility problem with multiple output sets (SFPMOS) was proposed and investigated in real Hilbert spaces by Reich and Tuyen [23]: they assumed that $H,H_i\ (i=1,2,\ldots,N)$ are $N+1$ real Hilbert spaces and that $A_i:H\to H_i\ (i=1,2,\ldots,N)$ are $N$ bounded linear operators. They also assumed that $K\subseteq H$ and $D_i\subseteq H_i\ (i=1,2,\ldots,N)$ are non-empty, closed, and convex sets. Assuming that $K\cap\bigcap_{i=1}^{N}A_i^{-1}(D_i)\neq\emptyset$, they considered the following problem: find
$$z^*\in K \quad\text{and}\quad A_i z^*\in D_i,\quad i=1,2,\ldots,N. \qquad (2)$$
Reich and Tuyen [23] came out with the following two iterative techniques to solve the SFPMOS (2): for any two elements $x_0,y_0\in K$, the sequences $\{x_k\}$ and $\{y_k\}$ are induced by
$$x_{k+1}=P_K\Big[x_k-\gamma_k\sum_{i=1}^{N}A_i^*(I-P_{D_i})A_i x_k\Big], \qquad (3)$$
$$y_{k+1}=\zeta_k h(y_k)+(1-\zeta_k)P_K\Big[y_k-\gamma_k\sum_{i=1}^{N}A_i^*(I-P_{D_i})A_i y_k\Big], \qquad (4)$$
where $h:K\to K$ is a strict contraction mapping. Weak and strong convergence results were established for Algorithms (3) and (4), respectively.
Further, Reich and Tuyen [24] investigated the following split common null point problem with multiple output sets (SCNPPMOS) in real Hilbert spaces: find
$$z^*\in M^{-1}0\cap\bigcap_{i=1}^{N}A_i^{-1}(M_i^{-1}0), \qquad (5)$$
where $M:H\to 2^{H}$ and $M_i:H_i\to 2^{H_i}$ $(i=1,2,\ldots,N)$ are $N+1$ multi-valued monotone operators and the $A_i$ are the same as in (2). The authors estimated the solution of (5) by employing the following scheme: for any $x_0\in K$, the sequence $\{x_k\}$ is induced by
$$y_k=\sum_{i=1}^{N}\beta_{i,k}\big[x_k-\tau_{i,k}A_i^*(I_{H_i}-R^{M_i}_{r_{i,k}})A_i x_k\big],\qquad x_{k+1}=\zeta_k h(x_k)+(1-\zeta_k)y_k. \qquad (6)$$
Under certain assumptions on the control parameters, they established strong convergence results. On the other hand, the theory of equilibrium problems has seen tremendous expansion in a variety of fields throughout the pure and practical sciences, and it has been the subject of extensive research in published works. It offers a structure that may be applied to a variety of problems pertaining to finance, economics, network analysis, optimization, and other areas; see, for example, [25,26,27,28,29].
The following split generalized equilibrium problem (SGEP) was developed and investigated by Kazmi and Rizvi [30] in response to a wide range of works in this area: find $z^*\in K$ such that
$$\psi_1(z^*,z)+\varphi_1(z^*,z)\geq 0,\quad \forall z\in K, \qquad (7)$$
and $t^*=Az^*\in D$ such that
$$\psi_2(t^*,t)+\varphi_2(t^*,t)\geq 0,\quad \forall t\in D, \qquad (8)$$
where $\psi_1,\varphi_1:K\times K\to\mathbb{R}$ and $\psi_2,\varphi_2:D\times D\to\mathbb{R}$ are real-valued nonlinear bi-functions. If $\psi_2=\varphi_2=0$, then the SGEP (7) and (8) reduces to the following generalized equilibrium problem (GEP), suggested and investigated by Cianciaruso and Marino [31]: find $z^*\in K$ such that
$$\psi(z^*,z)+\varphi(z^*,z)\geq 0,\quad \forall z\in K, \qquad (9)$$
where $\psi,\varphi:K\times K\to\mathbb{R}$ are real-valued nonlinear bi-functions. The GEP (9) is generic in the sense that it encompasses minimization problems, Nash equilibrium problems in non-cooperative games, variational inequality problems, fixed point problems, etc.; see [32]. When $\varphi=0$ in the GEP (9), it turns into the following classical equilibrium problem (EP): find $z^*\in K$ such that
$$\psi(z^*,z)\geq 0,\quad \forall z\in K. \qquad (10)$$
The EP (10) was initially suggested and investigated by Blum and Oettli [33] in 1994.
Recently, Mewomo et al. [34] introduced the split generalized equilibrium problem with multiple output sets (SGEPMOS) as follows: find $z^*\in K$ such that
$$z^*\in GEP(\psi,\varphi)\cap\bigcap_{i=1}^{N}A_i^{-1}(GEP(\psi_i,\varphi_i)), \qquad (11)$$
where $GEP(\psi,\varphi)$ denotes the solution set of the GEP (9). A large number of iterative techniques exist for examining null point problems and equilibrium problems independently; examples of these algorithms appear in numerous published works. Recently, many researchers have focused their efforts on developing common solutions to the aforementioned problems; see, for example, [3,32].
Motivated by the work of [24,34] and the continuing study in this area, the following problem is considered in this article: find $z^*$ such that
$$z^*\in\Omega:=M^{-1}0\cap\bigcap_{i=1}^{N}A_i^{-1}(M_i^{-1}0)\cap GEP(\psi,\varphi)\cap\bigcap_{j=1}^{M}B_j^{-1}(GEP(\psi_j,\varphi_j)), \qquad (12)$$
where $B_j:H\to H_j$, $j=1,2,\ldots,M$, are bounded linear operators. In other words, find $z^*$ that is a common solution of the SCNPPMOS (5) and the SGEPMOS (11). To solve problem (12), a modified viscosity-type self-adaptive algorithm is proposed and studied. The significance of the recommended approach is that it does not call for any prior knowledge of the norms of the bounded linear operators. This attribute is essential because computing $\|A\|$ is challenging for algorithms whose implementation relies on the operator norm. The results of this study are more general than previous ones, since they incorporate a number of additional optimization problems as special cases. The method proposed in this paper has the following characteristics, stated plainly and simply:
  • The present work extends the results of [24,34].
  • Our method employs a straightforward self-adaptive step size that is determined at each iteration by a simple calculation. As a result, our method does not require prior estimation of the norms of the bounded linear operators. This characteristic is crucial because computing the norm of a bounded linear operator is typically exceedingly difficult, yet such a computation is necessary for algorithms whose implementation relies on the operator norm.

2. Preliminaries

The following definitions and results are mentioned in this section, which are used in the convergence analysis of the suggested scheme.
Assume that → and ⇀ stand for strong and weak convergence, respectively; $\omega_w(x_k)$ denotes the set of all weak cluster points of $\{x_k\}$; and $\mathbb{N}$ is the set of natural numbers.
The mapping $P_K:H\to K$ is referred to as the metric projection if it assigns to each $z\in H$ the unique element $P_K z\in K$ satisfying
$$\|z-P_K z\|\leq\|z-t\|,\quad \forall t\in K. \qquad (13)$$
Evidently, $P_K$ is nonexpansive. Moreover, $P_K$ possesses the following property:
$$\langle z-P_K z,\, t-P_K z\rangle\leq 0,\quad \forall z\in H,\ t\in K.$$
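As a quick numerical illustration (ours, not part of the paper), the projection onto a closed interval $K=[a,b]\subseteq\mathbb{R}$ is simple clamping, and both the variational characterization above and nonexpansiveness can be checked directly:

```python
# Hypothetical illustration: the metric projection onto the closed convex
# set K = [a, b] in R is clamping; we verify the variational
# characterization <z - P_K z, t - P_K z> <= 0 at sample points of K.

def project_interval(z, a, b):
    """Metric projection of z onto the closed convex set K = [a, b]."""
    return min(max(z, a), b)

a, b = -1.0, 2.0
z = 5.0                        # a point outside K
p = project_interval(z, a, b)  # its projection, here the endpoint b

# Variational characterization: (z - p)*(t - p) <= 0 for every t in K.
checks = all((z - p) * (t - p) <= 1e-12 for t in [a, b, 0.0, 1.5])
print(p, checks)  # -> 2.0 True

# Nonexpansiveness: |P z1 - P z2| <= |z1 - z2|.
z1, z2 = 5.0, -3.0
assert abs(project_interval(z1, a, b) - project_interval(z2, a, b)) <= abs(z1 - z2)
```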
Definition 1.
A mapping $U:H\to H$ is referred to as:
(i)
a contraction, if there exists $L\in(0,1)$ satisfying
$$\|Uz-Ut\|\leq L\|z-t\|,\quad \forall z,t\in H; \qquad (14)$$
(ii)
nonexpansive, if inequality (14) holds with $L=1$;
(iii)
γ-cocoercive or γ-inverse strongly monotone (γ-ism), if there exists $\gamma>0$ such that
$$\langle z-t,\, Uz-Ut\rangle\geq\gamma\|Uz-Ut\|^2,\quad \forall z,t\in H;$$
(iv)
firmly nonexpansive, if for any $z,t\in H$,
$$\langle Uz-Ut,\, z-t\rangle\geq\|Uz-Ut\|^2.$$
Moreover, $Fix(U)$ represents the collection of all fixed points of $U$, i.e.,
$$Fix(U):=\{z\in H:Uz=z\}.$$
Lemma 1
([35]). Assume that $H$ is a real Hilbert space. A mapping $U:H\to H$ is firmly nonexpansive if and only if its complement $I-U$ is firmly nonexpansive.
The domain and the range of a multi-valued operator $M:H\to 2^H$ are defined as follows:
$$DOM(M):=\{z\in H:M(z)\neq\emptyset\},$$
$$IMG(M):=\bigcup\{M(z):z\in DOM(M)\}.$$
Definition 2
([36]). Suppose that $M:H\to 2^H$ is a multi-valued mapping. Then:
(i)
The graph of $M$, denoted as $G(M)$, can be defined by
$$G(M):=\{(z,t)\in H\times H:t\in M(z)\};$$
(ii)
$M$ is called monotone if
$$\langle t-a,\, z-b\rangle\geq 0,\quad \forall t\in M(z),\ a\in M(b),$$
and maximal monotone if, in addition, the graph of no other monotone operator properly contains $G(M)$. Evidently, a monotone mapping $M$ is maximal iff, for any pair $(z,t)\in H\times H$, $\langle t-a,\, z-b\rangle\geq 0$ for every pair $(b,a)\in G(M)$ implies that $t\in M(z)$.
Remark 1
([36]). Assume that $M:H\to 2^H$ is a multi-valued maximal monotone mapping. Then $R^M_r:H\to H$, defined as $R^M_r(z)=(I_H+rM)^{-1}(z)$ for all $z\in H$, is said to be the resolvent operator of $M$, where $r>0$ and $I_H$ is the identity operator. Note that $R^M_r$ is nonexpansive. It is trivial that $M^{-1}0=Fix(R^M_r)$ for all $r>0$.
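For intuition (an illustration of ours, not taken from the paper), consider the maximal monotone operator $M(z)=cz$ on $\mathbb{R}$ with $c>0$: its resolvent has the closed form $R^M_r(z)=z/(1+rc)$, which is nonexpansive and whose only fixed point is the null point $0$ of $M$:

```python
# Illustrative sketch (not from the paper): for the maximal monotone
# operator M(z) = c*z on R with c > 0, the resolvent has the closed form
# R_r(z) = (I + r*M)^{-1}(z) = z / (1 + r*c).

def resolvent(z, r, c):
    """Resolvent of M(z) = c*z: the unique x with x + r*c*x = z."""
    return z / (1.0 + r * c)

r, c = 0.5, 1.5
z, t = 4.0, -2.0

# R_r is (firmly) nonexpansive: |R z - R t| <= |z - t|.
assert abs(resolvent(z, r, c) - resolvent(t, r, c)) <= abs(z - t)

# Fix(R_r) = M^{-1}(0) = {0}: the only fixed point is the zero of M.
assert resolvent(0.0, r, c) == 0.0

print(abs(resolvent(4.0, 0.5, 1.5) - 16.0 / 7.0) < 1e-12)  # -> True
```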
To accomplish our main results, we set out the following significant lemmas.
Lemma 2
([37]). Assume that $M:D(M)\subseteq H\to 2^H$ is a multi-valued monotone mapping. Then the following assertions hold:
(i)
For each $z\in R(I_H+r_1M)\cap R(I_H+r_2M)$ with $r_1\geq r_2>0$,
$$\|z-R^M_{r_2}z\|\leq 2\|z-R^M_{r_1}z\|.$$
(ii)
For every number $r>0$ and for every pair of points $z,t\in R(I_H+rM)$, we have
$$\langle (I_H+rM)^{-1}z-(I_H+rM)^{-1}t,\, z-t\rangle\geq\|(I_H+rM)^{-1}z-(I_H+rM)^{-1}t\|^2.$$
(iii)
If $M^{-1}0\neq\emptyset$, then for each $z^*\in M^{-1}0$ and each $z\in R(I_H+rM)$,
$$\|R^M_r z-z^*\|^2\leq\|z-z^*\|^2-\|z-R^M_r z\|^2.$$
Lemma 3
([38] (Demiclosedness principle)). Let $K(\neq\emptyset)\subseteq H$ be a closed convex set and let $U:H\to H$ be a nonexpansive mapping with $Fix(U)\neq\emptyset$. Then $I_H-U$ is demiclosed; i.e., whenever $\{x_k\}$ is a sequence in $H$ such that $x_k\rightharpoonup z\in H$ and $(I_H-U)x_k\to t$, it follows that $(I_H-U)z=t$.
Lemma 4
([39]). For all $z,t\in H$ and $\zeta\in[0,1]$, the following hold:
(i)
$2\langle z,t\rangle=\|z\|^2+\|t\|^2-\|z-t\|^2=\|z+t\|^2-\|z\|^2-\|t\|^2$;
(ii)
$\|z+t\|^2\leq\|z\|^2+2\langle t,\, z+t\rangle$;
(iii)
$\|\zeta z+(1-\zeta)t\|^2=\zeta\|z\|^2+(1-\zeta)\|t\|^2-\zeta(1-\zeta)\|z-t\|^2$.
Lemma 5
([35]). Let $a,b,c\in H$ and $\zeta,\beta,\gamma\in[0,1]$ satisfy $\zeta+\beta+\gamma=1$. Then,
$$\|\zeta a+\beta b+\gamma c\|^2=\zeta\|a\|^2+\beta\|b\|^2+\gamma\|c\|^2-\zeta\beta\|a-b\|^2-\beta\gamma\|c-b\|^2-\zeta\gamma\|c-a\|^2.$$
Lemma 6
([40]). Consider sequences $\{s_k\}$, $\{\zeta_k\}$, and $\{c_k\}$ such that $s_k\geq 0$ and $s_k\in\mathbb{R}$ for all $k\in\mathbb{N}$, $\zeta_k\in(0,1)$ for all $k\in\mathbb{N}$ with $\sum_{k=1}^{\infty}\zeta_k=\infty$, and $c_k\in\mathbb{R}$ for all $k\in\mathbb{N}$. Assume that
$$s_{k+1}\leq(1-\zeta_k)s_k+\zeta_k c_k,\quad k\geq 0.$$
If $\limsup_{s\to\infty}c_{k_s}\leq 0$ for every subsequence $\{s_{k_s}\}$ of $\{s_k\}$ complying with the condition
$$\liminf_{s\to\infty}(s_{k_s+1}-s_{k_s})\geq 0,$$
then $\lim_{k\to\infty}s_k=0$.
To deal with the split generalized equilibrium problem, it is assumed that the real-valued bi-functions $\psi,\varphi:K\times K\to\mathbb{R}$ satisfy the following assumptions:
Assumption 1
([41]). Let $\psi:K\times K\to\mathbb{R}$ be a real-valued bi-function complying with the following presumptions:
(i)
$\psi(z,z)=0$ for all $z\in K$;
(ii)
For any pair $z,t\in K$,
$$\psi(z,t)+\psi(t,z)\leq 0;$$
(iii)
For any triplet $z,t,s\in K$,
$$\limsup_{\epsilon\downarrow 0}\psi(\epsilon s+(1-\epsilon)z,\, t)\leq\psi(z,t);$$
(iv)
For any fixed point $z\in K$, the map $t\mapsto\psi(z,t)$ is convex and lower semi-continuous.
Let $\varphi:K\times K\to\mathbb{R}$ be such that:
(a)
$\varphi(z,z)\geq 0$ for all $z\in K$;
(b)
For any fixed point $t\in K$, the map $z\mapsto\varphi(z,t)$ is upper semi-continuous;
(c)
For any fixed point $z\in K$, the map $t\mapsto\varphi(z,t)$ is convex and lower semi-continuous;
(d)
For any fixed $s>0$ and any $z\in K$, there exist a non-empty, closed, convex, and bounded subset $Q$ of $H_1$ and a point $z_0\in K\cap Q$ such that
$$\psi(t,z_0)+\varphi(t,z_0)+\frac{1}{s}\langle z_0-t,\, t-z\rangle<0,\quad \forall t\in K\setminus Q.$$
The subsequent assertions are true given these presumptions:
Lemma 7
([41]). Assume that the real-valued bi-functions $\psi_1,\varphi_1:K\times K\to\mathbb{R}$ satisfy the conditions of Assumption 1. Then, for any $s>0$ and any point $x\in H_1$, there exists $z\in K$ such that
$$\psi_1(z,t)+\varphi_1(z,t)+\frac{1}{s}\langle t-z,\, z-x\rangle\geq 0,\quad \forall t\in K.$$
Lemma 8
([1]). Assume that the real-valued bi-functions $\psi_1,\varphi_1:K\times K\to\mathbb{R}$ satisfy the conditions of Assumption 1. For any $s>0$ and any point $x\in H_1$, define $Q_s^{(\psi_1,\varphi_1)}:H_1\to K$ in the following manner:
$$Q_s^{(\psi_1,\varphi_1)}(x)=\Big\{z\in K:\psi_1(z,t)+\varphi_1(z,t)+\frac{1}{s}\langle t-z,\, z-x\rangle\geq 0,\ \forall t\in K\Big\}.$$
Then, the following assertions hold:
(i)
$Q_s^{(\psi_1,\varphi_1)}$ is non-empty as a set and single-valued as a map;
(ii)
$Q_s^{(\psi_1,\varphi_1)}$ is firmly nonexpansive, i.e.,
$$\|Q_s^{(\psi_1,\varphi_1)}(z)-Q_s^{(\psi_1,\varphi_1)}(t)\|^2\leq\langle Q_s^{(\psi_1,\varphi_1)}(z)-Q_s^{(\psi_1,\varphi_1)}(t),\, z-t\rangle,\quad \forall z,t\in H_1;$$
(iii)
$Fix(Q_s^{(\psi_1,\varphi_1)})=GEP(\psi_1,\varphi_1)$;
(iv)
$GEP(\psi_1,\varphi_1)$ is closed and convex.

3. Main Result

This section presents the suggested algorithm and provides an analysis of its convergence.
Let $K(\neq\emptyset)$ and $K_j(\neq\emptyset)$ be closed convex subsets of the real Hilbert spaces $H$ and $H_j$ $(j=1,2,\ldots,M)$, respectively. Suppose that the linear operators $B_j:H\to H_j$ are bounded. Let $\psi,\varphi:K\times K\to\mathbb{R}$ and $\psi_j,\varphi_j:K_j\times K_j\to\mathbb{R}$ be bi-functions complying with Assumption 1, and for $i=1,2,\ldots,N$, let the linear operators $A_i:H\to H_i$ be bounded. Let $M:H\to 2^H$ and $M_i:H_i\to 2^{H_i}$ be multi-valued maximal monotone operators and $h:H\to H$ an $L$-contraction mapping. Suppose the solution set $\Omega$ is non-empty. Let $\{\zeta_k\}$, $\{\delta_k\}$, $\{\mu_k\}$ be sequences in $(0,1)$ and, for $k\geq 0$, let $\{\theta_{i,k}\}$ and $\{\varphi_{j,k}\}$ be positive real sequences for each $i=1,2,\ldots,N$ and $j=1,2,\ldots,M$. Let $\{x_k\}$ be the sequence induced by Algorithm 1:
Algorithm 1: Modified viscosity-type self-adaptive iterative algorithm.
Step 0. Take any $x_0\in H$; assume $H_0=H$, $A_0=B_0=I_H$, $\psi_0=\psi$, $M_0=M$, $\varphi_0=\varphi$; let $k=0$.
Step 1. Compute
$$y_k=\sum_{i=0}^{N}\beta_{i,k}\big[x_k-\tau_{i,k}A_i^*(I_{H_i}-R^{M_i}_{r_{i,k}})A_i x_k\big].$$
Step 2. Compute
$$z_k=\sum_{j=0}^{M}\gamma_{j,k}\big[y_k-\lambda_{j,k}B_j^*(I_{H_j}-Q_{s_j}^{(\psi_j,\varphi_j)})B_j y_k\big].$$
Step 3. Compute
$$x_{k+1}=\zeta_k h(x_k)+\delta_k x_k+\mu_k z_k,\quad k\geq 1.$$
Update the step sizes $\tau_{i,k}$ and $\lambda_{j,k}$ as
$$\tau_{i,k}=\frac{\rho_{i,k}\|(I_{H_i}-R^{M_i}_{r_{i,k}})A_i x_k\|^2}{\|A_i^*(I_{H_i}-R^{M_i}_{r_{i,k}})A_i x_k\|^2+\theta_{i,k}},\qquad \lambda_{j,k}=\frac{\chi_{j,k}\|(I_{H_j}-Q_{s_j}^{(\psi_j,\varphi_j)})B_j y_k\|^2}{\|B_j^*(I_{H_j}-Q_{s_j}^{(\psi_j,\varphi_j)})B_j y_k\|^2+\varphi_{j,k}}.$$
Set $k=k+1$ and go to Step 1.
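As an illustration only (our own simplification, not from the paper), the sketch below runs the three steps of Algorithm 1 in the scalar case $H=\mathbb{R}$, replacing both operator families by resolvents of linear monotone operators so that every map has a closed form and the common solution is $0$; the step sizes $\tau_{i,k}$ and $\lambda_{j,k}$ are computed exactly as in the update rule above, with hypothetical parameter values:

```python
# Toy sketch of Algorithm 1 in H = R. Our own simplification: the
# resolvents R^{M_i} and the equilibrium resolvents Q_s are both taken
# to be resolvents of linear monotone operators M(z) = c*z, so every
# map has a closed form and the common solution set is {0}.

def resolvent(z, r, c):          # (I + r*M)^{-1} for M(z) = c*z
    return z / (1.0 + r * c)

def contraction(z):              # h(z) = z/3, an L-contraction, L = 1/3
    return z / 3.0

A = [1.0, 0.5]                   # bounded linear operators A_i(z) = a_i*z
c_ops = [8.0, 3.0]               # M_i(z) = c_i*z, so M_i^{-1}0 = {0}
B = [1.0, 1.0 / 3.0]             # B_j(z) = b_j*z
d_ops = [1.5, 1.5]               # stand-ins for the equilibrium resolvents

r, s, rho, chi, theta, phi = 0.5, 0.5, 1.25, 1.5, 1.0, 1.0
beta = gamma = [0.5, 0.5]        # convex weights, sum to 1

x = 10.0
for k in range(1, 2001):
    zeta = 1.0 / (140.0 * k + 1.0)       # zeta_k -> 0, sum = infinity
    delta = 1.0 / (3.0 * k + 14.0)
    mu = 1.0 - zeta - delta
    # Step 1: y_k with self-adaptive step sizes tau_{i,k}
    y = 0.0
    for i in range(2):
        res = A[i] * x - resolvent(A[i] * x, r, c_ops[i])  # (I - R)A_i x
        grad = A[i] * res                                  # A_i^*(I - R)A_i x
        tau = rho * res ** 2 / (grad ** 2 + theta)
        y += beta[i] * (x - tau * grad)
    # Step 2: z_k with self-adaptive step sizes lambda_{j,k}
    z = 0.0
    for j in range(2):
        res = B[j] * y - resolvent(B[j] * y, s, d_ops[j])
        grad = B[j] * res
        lam = chi * res ** 2 / (grad ** 2 + phi)
        z += gamma[j] * (y - lam * grad)
    # Step 3: viscosity update
    x = zeta * contraction(x) + delta * x + mu * z

print(abs(x) < 0.1)  # -> True: iterates approach the common solution 0
```

Note that no operator norm is ever estimated: $\tau_{i,k}$ and $\lambda_{j,k}$ are recomputed from the current residuals at each iteration, which is the self-adaptive feature emphasized in the introduction.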
The following hypotheses are necessary tools for analyzing the convergence.
Assumption 2.
(i)
$\lim_{k\to\infty}\zeta_k=0$, $\sum_{k=1}^{\infty}\zeta_k=\infty$, $\zeta_k+\delta_k+\mu_k=1$, and $\mu_k\in[a_1,a_2]\subset(0,1)$;
(ii)
$\min_{i=0,1,\ldots,N}\{\inf_k\{r_{i,k}\}\}=r>0$; $\max_{i=0,1,\ldots,N}\{\sup_k\{\theta_{i,k}\}\}=K_1<\infty$;
(iii)
$s_j>0$ for all $j=1,2,\ldots,M$; $\max_{j=0,1,\ldots,M}\{\sup_k\{\varphi_{j,k}\}\}=K_2<\infty$;
(iv)
$\{\beta_{i,k}\}\subset[a_3,a_4]\subset(0,1)$ such that $\sum_{i=0}^{N}\beta_{i,k}=1$ for each $k\geq 0$, and $\{\rho_{i,k}\}\subset[a_5,a_6]\subset(0,2)$;
(v)
$\{\gamma_{j,k}\}\subset[a_7,a_8]\subset(0,1)$ such that $\sum_{j=0}^{M}\gamma_{j,k}=1$ for each $k\geq 0$, and $\{\chi_{j,k}\}\subset[a_9,a_{10}]\subset(0,2)$.
Lemma 9.
The sequences induced by Algorithm 1 are bounded.
Proof. 
Let $z\in\Omega$; then $A_i z=R^{M_i}_{r_{i,k}}(A_i z)$ for all $i=0,1,\ldots,N$. The convexity of $\|\cdot\|^2$ yields
$$\|y_k-z\|^2=\Big\|\sum_{i=0}^{N}\beta_{i,k}\big[x_k-\tau_{i,k}A_i^*(I_{H_i}-R^{M_i}_{r_{i,k}})A_i x_k\big]-z\Big\|^2\leq\sum_{i=0}^{N}\beta_{i,k}\big\|x_k-\tau_{i,k}A_i^*(I_{H_i}-R^{M_i}_{r_{i,k}})A_i x_k-z\big\|^2. \qquad (18)$$
From Lemma 2 (ii), we obtain
\begin{align*}
\|x_k-\tau_{i,k}A_i^*(I_{H_i}-R^{M_i}_{r_{i,k}})A_i x_k-z\|^2
&=\|x_k-z\|^2+\tau_{i,k}^2\|A_i^*(I_{H_i}-R^{M_i}_{r_{i,k}})A_i x_k\|^2-2\tau_{i,k}\langle A_i^*(I_{H_i}-R^{M_i}_{r_{i,k}})A_i x_k,\, x_k-z\rangle\\
&=\|x_k-z\|^2+\tau_{i,k}^2\|A_i^*(I_{H_i}-R^{M_i}_{r_{i,k}})A_i x_k\|^2-2\tau_{i,k}\langle (I_{H_i}-R^{M_i}_{r_{i,k}})A_i x_k,\, A_i x_k-A_i z\rangle\\
&=\|x_k-z\|^2+\tau_{i,k}^2\|A_i^*(I_{H_i}-R^{M_i}_{r_{i,k}})A_i x_k\|^2\\
&\quad-2\tau_{i,k}\langle (I_{H_i}-R^{M_i}_{r_{i,k}})A_i x_k-(I_{H_i}-R^{M_i}_{r_{i,k}})A_i z,\, A_i x_k-A_i z\rangle\\
&\leq\|x_k-z\|^2+\tau_{i,k}^2\big[\|A_i^*(I_{H_i}-R^{M_i}_{r_{i,k}})A_i x_k\|^2+\theta_{i,k}\big]-2\tau_{i,k}\|(I_{H_i}-R^{M_i}_{r_{i,k}})A_i x_k\|^2\\
&=\|x_k-z\|^2-\rho_{i,k}(2-\rho_{i,k})\frac{\|(I_{H_i}-R^{M_i}_{r_{i,k}})A_i x_k\|^4}{\|A_i^*(I_{H_i}-R^{M_i}_{r_{i,k}})A_i x_k\|^2+\theta_{i,k}}. \qquad (19)
\end{align*}
From (18) and (19), we attain
$$\|y_k-z\|^2\leq\|x_k-z\|^2-\sum_{i=0}^{N}\beta_{i,k}\,\rho_{i,k}(2-\rho_{i,k})\frac{\|(I_{H_i}-R^{M_i}_{r_{i,k}})A_i x_k\|^4}{\|A_i^*(I_{H_i}-R^{M_i}_{r_{i,k}})A_i x_k\|^2+\theta_{i,k}}. \qquad (20)$$
Since $z\in\Omega$, we have $B_j z=Q_{s_j}^{(\psi_j,\varphi_j)}(B_j z)$ for each $j=0,1,\ldots,M$. Similarly,
$$\|z_k-z\|^2\leq\|y_k-z\|^2-\sum_{j=0}^{M}\gamma_{j,k}\,\chi_{j,k}(2-\chi_{j,k})\frac{\|(I_{H_j}-Q_{s_j}^{(\psi_j,\varphi_j)})B_j y_k\|^4}{\|B_j^*(I_{H_j}-Q_{s_j}^{(\psi_j,\varphi_j)})B_j y_k\|^2+\varphi_{j,k}}. \qquad (21)$$
Taking (20) into consideration, we acquire
$$\|z_k-z\|^2\leq\|x_k-z\|^2-\sum_{i=0}^{N}\beta_{i,k}\,\rho_{i,k}(2-\rho_{i,k})\frac{\|(I_{H_i}-R^{M_i}_{r_{i,k}})A_i x_k\|^4}{\|A_i^*(I_{H_i}-R^{M_i}_{r_{i,k}})A_i x_k\|^2+\theta_{i,k}}-\sum_{j=0}^{M}\gamma_{j,k}\,\chi_{j,k}(2-\chi_{j,k})\frac{\|(I_{H_j}-Q_{s_j}^{(\psi_j,\varphi_j)})B_j y_k\|^4}{\|B_j^*(I_{H_j}-Q_{s_j}^{(\psi_j,\varphi_j)})B_j y_k\|^2+\varphi_{j,k}}. \qquad (22)$$
It follows from Assumption 2 (ii)–(v) that
$$\|z_k-z\|^2\leq\|x_k-z\|^2. \qquad (23)$$
Further, by applying (23), we obtain
\begin{align*}
\|x_{k+1}-z\|&=\|\zeta_k h(x_k)+\delta_k x_k+\mu_k z_k-z\|\\
&\leq\zeta_k\|h(x_k)-z\|+\delta_k\|x_k-z\|+\mu_k\|z_k-z\|\\
&\leq\zeta_k\|h(x_k)-h(z)\|+\zeta_k\|h(z)-z\|+\delta_k\|x_k-z\|+\mu_k\|x_k-z\|\\
&\leq\zeta_k L\|x_k-z\|+(\delta_k+\mu_k)\|x_k-z\|+\zeta_k\|h(z)-z\|\\
&=(1-\zeta_k(1-L))\|x_k-z\|+\zeta_k\|h(z)-z\|\\
&\leq\max\Big\{\|x_k-z\|,\ \frac{\|h(z)-z\|}{1-L}\Big\}.
\end{align*}
Continuing the process, we acquire
$$\|x_{k+1}-z\|\leq\max\Big\{\|x_1-z\|,\ \frac{\|h(z)-z\|}{1-L}\Big\}.$$
As a result, the sequences $\{x_k\}$, $\{y_k\}$, and $\{z_k\}$ are bounded.    □
The operator $P_\Omega h$ is easily seen to be a contraction. Consequently, by the Banach contraction principle, there exists a unique point $z^*\in\Omega$ such that $z^*=P_\Omega h(z^*)$. The characterization of the metric projection implies
$$\langle h(z^*)-z^*,\, x-z^*\rangle\leq 0,\quad \forall x\in\Omega. \qquad (24)$$
Lemma 10.
Suppose that $\{x_k\}$ is the sequence induced by Algorithm 1, and let $z\in\Omega$. Then, under Assumption 1 and Assumption 2 (i)–(v), the following inequality holds for all $k\geq 1$:
$$\|x_{k+1}-z\|^2\leq\Big(1-\frac{2\zeta_k(1-L)}{1-\zeta_k L}\Big)\|x_k-z\|^2+\frac{2\zeta_k(1-L)}{1-\zeta_k L}\Big[\frac{\zeta_k M_1}{2(1-L)}+\frac{1}{1-L}\langle h(z)-z,\, x_{k+1}-z\rangle\Big]-\frac{\mu_k(1-\zeta_k)}{1-\zeta_k L}\Big[\sum_{i=0}^{N}\beta_{i,k}\,\rho_{i,k}(2-\rho_{i,k})\frac{\|(I_{H_i}-R^{M_i}_{r_{i,k}})A_i x_k\|^4}{\|A_i^*(I_{H_i}-R^{M_i}_{r_{i,k}})A_i x_k\|^2+\theta_{i,k}}+\sum_{j=0}^{M}\gamma_{j,k}\,\chi_{j,k}(2-\chi_{j,k})\frac{\|(I_{H_j}-Q_{s_j}^{(\psi_j,\varphi_j)})B_j y_k\|^4}{\|B_j^*(I_{H_j}-Q_{s_j}^{(\psi_j,\varphi_j)})B_j y_k\|^2+\varphi_{j,k}}\Big].$$
Proof. 
Let $z\in\Omega$. For brevity, write
$$\Lambda_k:=\sum_{i=0}^{N}\beta_{i,k}\,\rho_{i,k}(2-\rho_{i,k})\frac{\|(I_{H_i}-R^{M_i}_{r_{i,k}})A_i x_k\|^4}{\|A_i^*(I_{H_i}-R^{M_i}_{r_{i,k}})A_i x_k\|^2+\theta_{i,k}}+\sum_{j=0}^{M}\gamma_{j,k}\,\chi_{j,k}(2-\chi_{j,k})\frac{\|(I_{H_j}-Q_{s_j}^{(\psi_j,\varphi_j)})B_j y_k\|^4}{\|B_j^*(I_{H_j}-Q_{s_j}^{(\psi_j,\varphi_j)})B_j y_k\|^2+\varphi_{j,k}},$$
so that (22) reads $\|z_k-z\|^2\leq\|x_k-z\|^2-\Lambda_k$. Applying Lemma 4 (ii) and (22), we achieve
\begin{align*}
\|x_{k+1}-z\|^2&=\|\zeta_k h(x_k)+\delta_k x_k+\mu_k z_k-z\|^2\\
&\leq\|\delta_k(x_k-z)+\mu_k(z_k-z)\|^2+2\zeta_k\langle h(x_k)-z,\, x_{k+1}-z\rangle\\
&\leq\delta_k^2\|x_k-z\|^2+\mu_k^2\|z_k-z\|^2+2\delta_k\mu_k\|x_k-z\|\,\|z_k-z\|\\
&\quad+2\zeta_k\langle h(x_k)-h(z),\, x_{k+1}-z\rangle+2\zeta_k\langle h(z)-z,\, x_{k+1}-z\rangle\\
&\leq\delta_k^2\|x_k-z\|^2+\mu_k^2\|z_k-z\|^2+\delta_k\mu_k\big[\|x_k-z\|^2+\|z_k-z\|^2\big]\\
&\quad+2\zeta_k L\|x_k-z\|\,\|x_{k+1}-z\|+2\zeta_k\langle h(z)-z,\, x_{k+1}-z\rangle\\
&=\delta_k(\delta_k+\mu_k)\|x_k-z\|^2+\mu_k(\mu_k+\delta_k)\|z_k-z\|^2+2\zeta_k L\|x_k-z\|\,\|x_{k+1}-z\|+2\zeta_k\langle h(z)-z,\, x_{k+1}-z\rangle\\
&=\delta_k(1-\zeta_k)\|x_k-z\|^2+\mu_k(1-\zeta_k)\|z_k-z\|^2+2\zeta_k L\|x_k-z\|\,\|x_{k+1}-z\|+2\zeta_k\langle h(z)-z,\, x_{k+1}-z\rangle\\
&\leq\delta_k(1-\zeta_k)\|x_k-z\|^2+\mu_k(1-\zeta_k)\big[\|x_k-z\|^2-\Lambda_k\big]+\zeta_k L\big[\|x_k-z\|^2+\|x_{k+1}-z\|^2\big]\\
&\quad+2\zeta_k\langle h(z)-z,\, x_{k+1}-z\rangle\\
&=(\delta_k+\mu_k)(1-\zeta_k)\|x_k-z\|^2+\zeta_k L\|x_k-z\|^2-\mu_k(1-\zeta_k)\Lambda_k+\zeta_k L\|x_{k+1}-z\|^2+2\zeta_k\langle h(z)-z,\, x_{k+1}-z\rangle\\
&=\big[(1-\zeta_k)^2+\zeta_k L\big]\|x_k-z\|^2+\zeta_k L\|x_{k+1}-z\|^2+2\zeta_k\langle h(z)-z,\, x_{k+1}-z\rangle-\mu_k(1-\zeta_k)\Lambda_k.
\end{align*}
Consequently,
\begin{align*}
\|x_{k+1}-z\|^2&\leq\frac{1-2\zeta_k+\zeta_k^2+\zeta_k L}{1-\zeta_k L}\|x_k-z\|^2+\frac{2\zeta_k}{1-\zeta_k L}\langle h(z)-z,\, x_{k+1}-z\rangle-\frac{\mu_k(1-\zeta_k)}{1-\zeta_k L}\Lambda_k\\
&=\frac{1-2\zeta_k+\zeta_k L}{1-\zeta_k L}\|x_k-z\|^2+\frac{\zeta_k^2}{1-\zeta_k L}\|x_k-z\|^2+\frac{2\zeta_k}{1-\zeta_k L}\langle h(z)-z,\, x_{k+1}-z\rangle-\frac{\mu_k(1-\zeta_k)}{1-\zeta_k L}\Lambda_k\\
&\leq\Big(1-\frac{2\zeta_k(1-L)}{1-\zeta_k L}\Big)\|x_k-z\|^2+\frac{2\zeta_k(1-L)}{1-\zeta_k L}\Big[\frac{\zeta_k M_1}{2(1-L)}+\frac{1}{1-L}\langle h(z)-z,\, x_{k+1}-z\rangle\Big]-\frac{\mu_k(1-\zeta_k)}{1-\zeta_k L}\Lambda_k,
\end{align*}
where $M_1:=\sup\{\|x_k-z\|^2:k\geq 1\}$. Hence, the proof is complete.    □
The strong convergence for the suggested scheme is presented as follows:
Theorem 1.
Assume that Assumption 1 and Assumption 2 (i)–(v) hold and that the sequence $\{x_k\}$ is induced by Algorithm 1. Then $x_k\to\hat{x}\in\Omega$, where $\hat{x}=P_\Omega h(\hat{x})$.
Proof. 
Let $\hat{x}=P_\Omega h(\hat{x})$. Thanks to Lemma 10, we acquire
$$\|x_{k+1}-\hat{x}\|^2\leq\Big(1-\frac{2\zeta_k(1-L)}{1-\zeta_k L}\Big)\|x_k-\hat{x}\|^2+\frac{2\zeta_k(1-L)}{1-\zeta_k L}\Big[\frac{\zeta_k M_1}{2(1-L)}+\frac{1}{1-L}\langle h(\hat{x})-\hat{x},\, x_{k+1}-\hat{x}\rangle\Big]. \qquad (25)$$
Next, we prove that $\lim_{k\to\infty}\|x_k-\hat{x}\|=0$. By invoking Lemma 6, it remains to prove that $\limsup_{s\to\infty}\langle h(\hat{x})-\hat{x},\, x_{k_s+1}-\hat{x}\rangle\leq 0$ for every subsequence $\{\|x_{k_s}-\hat{x}\|\}$ of $\{\|x_k-\hat{x}\|\}$ complying with
$$\liminf_{s\to\infty}\big(\|x_{k_s+1}-\hat{x}\|-\|x_{k_s}-\hat{x}\|\big)\geq 0. \qquad (26)$$
Presume that the subsequence $\{\|x_{k_s}-\hat{x}\|\}$ of $\{\|x_k-\hat{x}\|\}$ satisfies (26). Then,
$$\liminf_{s\to\infty}\big(\|x_{k_s+1}-\hat{x}\|^2-\|x_{k_s}-\hat{x}\|^2\big)=\liminf_{s\to\infty}\Big[\big(\|x_{k_s+1}-\hat{x}\|-\|x_{k_s}-\hat{x}\|\big)\big(\|x_{k_s+1}-\hat{x}\|+\|x_{k_s}-\hat{x}\|\big)\Big]\geq 0. \qquad (27)$$
Again, from Lemma 10, we have
$$\frac{\mu_{k_s}(1-\zeta_{k_s})}{1-\zeta_{k_s}L}\sum_{i=0}^{N}\beta_{i,k_s}\rho_{i,k_s}(2-\rho_{i,k_s})\frac{\|(I_{H_i}-R^{M_i}_{r_{i,k_s}})A_i x_{k_s}\|^4}{\|A_i^*(I_{H_i}-R^{M_i}_{r_{i,k_s}})A_i x_{k_s}\|^2+\theta_{i,k_s}}\leq\Big(1-\frac{2\zeta_{k_s}(1-L)}{1-\zeta_{k_s}L}\Big)\|x_{k_s}-\hat{x}\|^2-\|x_{k_s+1}-\hat{x}\|^2+\frac{2\zeta_{k_s}(1-L)}{1-\zeta_{k_s}L}\Big[\frac{\zeta_{k_s}M_1}{2(1-L)}+\frac{1}{1-L}\langle h(\hat{x})-\hat{x},\, x_{k_s+1}-\hat{x}\rangle\Big].$$
By using (27) along with Assumption 2 (i), we have
$$\frac{\mu_{k_s}(1-\zeta_{k_s})}{1-\zeta_{k_s}L}\sum_{i=0}^{N}\beta_{i,k_s}\rho_{i,k_s}(2-\rho_{i,k_s})\frac{\|(I_{H_i}-R^{M_i}_{r_{i,k_s}})A_i x_{k_s}\|^4}{\|A_i^*(I_{H_i}-R^{M_i}_{r_{i,k_s}})A_i x_{k_s}\|^2+\theta_{i,k_s}}\to 0,\quad\text{as } s\to\infty.$$
By Assumption 2 (iv), we have
$$\frac{\|(I_{H_i}-R^{M_i}_{r_{i,k_s}})A_i x_{k_s}\|^4}{\|A_i^*(I_{H_i}-R^{M_i}_{r_{i,k_s}})A_i x_{k_s}\|^2+\theta_{i,k_s}}\to 0,\quad\text{as } s\to\infty, \qquad (28)$$
for each $i=0,1,\ldots,N$. Given that the operators $A_i$ and the sequence $\{x_{k_s}\}$ are bounded and the resolvent operators $R^{M_i}_{r_{i,k_s}}$ are nonexpansive, it follows that
$$L_1:=\max_{i=0,1,\ldots,N}\sup_{k}\|A_i^*(I_{H_i}-R^{M_i}_{r_{i,k_s}})A_i x_{k_s}\|^2<\infty.$$
Thus, from Assumption 2 (ii), it follows that
$$\frac{\|(I_{H_i}-R^{M_i}_{r_{i,k_s}})A_i x_{k_s}\|^4}{\|A_i^*(I_{H_i}-R^{M_i}_{r_{i,k_s}})A_i x_{k_s}\|^2+\theta_{i,k_s}}\geq\frac{\|(I_{H_i}-R^{M_i}_{r_{i,k_s}})A_i x_{k_s}\|^4}{L_1+K_1}. \qquad (29)$$
Combining (28) and (29), we deduce that
$$\lim_{s\to\infty}\|(I_{H_i}-R^{M_i}_{r_{i,k_s}})A_i x_{k_s}\|=0,\quad i=0,1,\ldots,N. \qquad (30)$$
By similar arguments, from Lemma 10, Assumption 2 (i), (iv), and (27), we obtain
$$\frac{\|(I_{H_j}-Q_{s_j}^{(\psi_j,\varphi_j)})B_j y_{k_s}\|^4}{\|B_j^*(I_{H_j}-Q_{s_j}^{(\psi_j,\varphi_j)})B_j y_{k_s}\|^2+\varphi_{j,k_s}}\to 0,\quad\text{as } s\to\infty, \qquad (31)$$
for all $j=0,1,\ldots,M$. As a result of the boundedness of the operators $B_j$, the nonexpansivity of the resolvent operators $Q_{s_j}^{(\psi_j,\varphi_j)}$, and the boundedness of the sequence $\{y_{k_s}\}$, it follows that
$$L_2:=\max_{j=0,1,\ldots,M}\sup_{k}\|B_j^*(I_{H_j}-Q_{s_j}^{(\psi_j,\varphi_j)})B_j y_{k_s}\|^2<\infty.$$
Thus, from Assumption 2 (iii), it follows that
$$\frac{\|(I_{H_j}-Q_{s_j}^{(\psi_j,\varphi_j)})B_j y_{k_s}\|^4}{\|B_j^*(I_{H_j}-Q_{s_j}^{(\psi_j,\varphi_j)})B_j y_{k_s}\|^2+\varphi_{j,k_s}}\geq\frac{\|(I_{H_j}-Q_{s_j}^{(\psi_j,\varphi_j)})B_j y_{k_s}\|^4}{L_2+K_2}. \qquad (32)$$
Combining (31) and (32), we deduce that
$$\lim_{s\to\infty}\|(I_{H_j}-Q_{s_j}^{(\psi_j,\varphi_j)})B_j y_{k_s}\|=0,\quad j=0,1,\ldots,M. \qquad (33)$$
Further, we obtain from the definition of the sequence $\{y_k\}$ that
$$\|y_{k_s}-x_{k_s}\|=\Big\|\sum_{i=0}^{N}\beta_{i,k_s}\tau_{i,k_s}A_i^*(I_{H_i}-R^{M_i}_{r_{i,k_s}})A_i x_{k_s}\Big\|\leq\sum_{i=0}^{N}\beta_{i,k_s}\tau_{i,k_s}\|A_i^*(I_{H_i}-R^{M_i}_{r_{i,k_s}})A_i x_{k_s}\|.$$
Applying (30) together with Assumption 2 (iv), it follows from the last inequality that
$$\lim_{s\to\infty}\|y_{k_s}-x_{k_s}\|=0. \qquad (34)$$
Furthermore, from the definition of the sequence $\{z_k\}$ and (33), together with Assumption 2 (v), we obtain
$$\lim_{s\to\infty}\|z_{k_s}-y_{k_s}\|=0. \qquad (35)$$
It follows from (34) and (35) that
$$\lim_{s\to\infty}\|x_{k_s}-z_{k_s}\|\leq\lim_{s\to\infty}\|x_{k_s}-y_{k_s}\|+\lim_{s\to\infty}\|y_{k_s}-z_{k_s}\|=0.$$
Consequently, by applying Assumption 2 (i), we have
$$\lim_{s\to\infty}\|x_{k_s+1}-x_{k_s}\|\leq\lim_{s\to\infty}\zeta_{k_s}\|h(x_{k_s})-x_{k_s}\|+\lim_{s\to\infty}\mu_{k_s}\|z_{k_s}-x_{k_s}\|=0.$$
To conclude the proof, we must demonstrate that $\omega_w(x_k)\subseteq\Omega$. Since the sequence $\{x_k\}$ is bounded, $\omega_w(x_k)$ is non-empty. Take an arbitrary element $\bar{x}\in\omega_w(x_k)$. Then there is a subsequence $\{x_{k_s}\}$ of $\{x_k\}$ satisfying $x_{k_s}\rightharpoonup\bar{x}$ as $s\to\infty$, and, from (34), $y_{k_s}\rightharpoonup\bar{x}$. Since the operators $A_i$, $i=0,1,2,\ldots,N$, are linear and bounded, it follows that $A_i x_{k_s}\rightharpoonup A_i\bar{x}$. Thus, with the help of Lemma 3 and (30), we can conclude that $A_i\bar{x}\in Fix(R^{M_i}_{r_{i,k}})=M_i^{-1}0$ for $i=0,1,\ldots,N$; that is, $\bar{x}\in M^{-1}0\cap\bigcap_{i=1}^{N}A_i^{-1}(M_i^{-1}0)$. Furthermore, since the $B_j$, $j=0,1,\ldots,M$, are bounded linear operators, $B_j y_{k_s}\rightharpoonup B_j\bar{x}$. Invoking Lemma 3 and (33), we acquire $B_j\bar{x}\in Fix(Q_{s_j}^{(\psi_j,\varphi_j)})=GEP(\psi_j,\varphi_j)$ for all $j=0,1,\ldots,M$; that is, $\bar{x}\in GEP(\psi,\varphi)\cap\bigcap_{j=1}^{M}B_j^{-1}(GEP(\psi_j,\varphi_j))$. In light of this, we obtain $\bar{x}\in\Omega$, which shows that $\omega_w(x_k)\subseteq\Omega$.
Because $\{x_{k_s}\}$ is bounded, there exists a subsequence $\{x_{k_{s_l}}\}$ of $\{x_{k_s}\}$ satisfying $x_{k_{s_l}}\rightharpoonup\bar{\bar{x}}$ and
$$\lim_{l\to\infty}\langle h(\hat{x})-\hat{x},\, x_{k_{s_l}}-\hat{x}\rangle=\limsup_{s\to\infty}\langle h(\hat{x})-\hat{x},\, x_{k_s}-\hat{x}\rangle. \qquad (36)$$
In light of $\hat{x}=P_\Omega h(\hat{x})$, inequalities (24) and (36) yield
$$\limsup_{s\to\infty}\langle h(\hat{x})-\hat{x},\, x_{k_s+1}-\hat{x}\rangle\leq\limsup_{s\to\infty}\langle h(\hat{x})-\hat{x},\, x_{k_s+1}-x_{k_s}\rangle+\limsup_{s\to\infty}\langle h(\hat{x})-\hat{x},\, x_{k_s}-\hat{x}\rangle=\lim_{l\to\infty}\langle h(\hat{x})-\hat{x},\, x_{k_{s_l}}-\hat{x}\rangle=\langle h(\hat{x})-\hat{x},\, \bar{\bar{x}}-\hat{x}\rangle\leq 0. \qquad (37)$$
Applying Lemma 6 to (25) and using (37), together with the fact that $\lim_{k\to\infty}\zeta_k=0$, we conclude that $\lim_{k\to\infty}\|x_k-\hat{x}\|=0$, as required.    □

4. Consequences

Herein, some direct consequences of the proposed algorithm are listed.
If we set $R^{M_i}_{r_{i,k}}=I_{H_i}$ for $i=0,1,\ldots,N$, then the following scheme is obtained.
The following corollary can be derived by implementing Algorithm 2.
Algorithm 2: Modified viscosity-type self-adaptive iterative algorithm for the SGEPMOS.
Step 0. Take any $x_0\in H$; assume $H_0=H$, $B_0=I_H$, $\psi_0=\psi$, $\varphi_0=\varphi$; let $k=0$.
Step 1. Compute
$$z_k=\sum_{j=0}^{M}\gamma_{j,k}\big[x_k-\lambda_{j,k}B_j^*(I_{H_j}-Q_{s_j}^{(\psi_j,\varphi_j)})B_j x_k\big].$$
Step 2. Compute
$$x_{k+1}=\zeta_k h(x_k)+\delta_k x_k+\mu_k z_k,\quad k\geq 1.$$
Update the step size $\lambda_{j,k}$ as
$$\lambda_{j,k}=\frac{\chi_{j,k}\|(I_{H_j}-Q_{s_j}^{(\psi_j,\varphi_j)})B_j x_k\|^2}{\|B_j^*(I_{H_j}-Q_{s_j}^{(\psi_j,\varphi_j)})B_j x_k\|^2+\varphi_{j,k}}.$$
Set $k=k+1$ and go to Step 1.
Corollary 1.
Suppose that Assumption 1 and Assumption 2 (i), (iii), and (v) hold. Then $x_k\to\hat{x}\in GEP(\psi,\varphi)\cap\bigcap_{i=1}^{N}A_i^{-1}(GEP(\psi_i,\varphi_i))$, where $\hat{x}=P_\Omega h(\hat{x})$.
If we set $Q_{s_j}^{(\psi_j,\varphi_j)}=I_{H_j}$ for $j=0,1,\ldots,M$, then we obtain the following algorithm.
Algorithm 3: Modified viscosity-type self-adaptive iterative algorithm for the SCNPPMOS.
Step 0. Take any $x_0\in H$; assume $H_0=H$, $A_0=I_H$, $M_0=M$; let $k=0$.
Step 1. Compute
$$y_k=\sum_{i=0}^{N}\beta_{i,k}\big[x_k-\tau_{i,k}A_i^*(I_{H_i}-R^{M_i}_{r_{i,k}})A_i x_k\big].$$
Step 2. Compute
$$x_{k+1}=\zeta_k h(x_k)+\delta_k x_k+\mu_k y_k,\quad k\geq 1.$$
Update the step size $\tau_{i,k}$ as
$$\tau_{i,k}=\frac{\rho_{i,k}\|(I_{H_i}-R^{M_i}_{r_{i,k}})A_i x_k\|^2}{\|A_i^*(I_{H_i}-R^{M_i}_{r_{i,k}})A_i x_k\|^2+\theta_{i,k}}.$$
Set $k=k+1$ and go to Step 1.
Corollary 2.
Suppose that Assumption 2 (i), (ii), and (iv) holds, and let the sequence $\{x_k\}$ be induced by Algorithm 3. Then $x_k\to\hat{x}\in M^{-1}0\cap\bigcap_{i=1}^{N}A_i^{-1}(M_i^{-1}0)$, where $\hat{x}=P_\Omega h(\hat{x})$.

5. Analytical Discussion

For better understanding of how our suggested approaches can be put into practice, we provide some examples in this section.
For Algorithm 1, we let $h(z)=\frac{z}{3}$, $\zeta_k=\frac{1}{140k+1}$, $\delta_k=\frac{1}{3k+14}$, and $\mu_k=1-\delta_k-\zeta_k$, for $i,j=0,1,2$, and let $s_j=s=0.5$, $r_{i,k}=r=0.5$, and $\beta_{i,k}=\gamma_{j,k}=\frac{1}{3}$ for all $i,j$ and $k\geq 0$. Moreover, we consider $\rho_{i,k}=1.25$, $\theta_{i,k}=1$, $\chi_{j,k}=1.5$, and $\varphi_{j,k}=\frac{5}{2+j}$, for all $i,j$ and $k\geq 0$. Matlab Version R2021a on an Asus Core i5 8th Gen laptop with an NVIDIA GeForce GTX 1650 graphics card was utilized for all numerical calculations. We plot the error versus iteration graphs using several initial points selected at random. The computation was terminated once $\|x_{k+1}-x_k\|\leq 10^{-6}$.
Example 1.
(Finite-dimensional) Let $H=H_i=H_j=\mathbb{R}^2$ for $i,j=0,1,2$, with $H=H_0$. Define $M=M_0:\mathbb{R}^2\to\mathbb{R}^2$, $M_1:\mathbb{R}^2\to\mathbb{R}^2$, and $M_2:\mathbb{R}^2\to\mathbb{R}^2$, respectively, by
$$M_0(z)=\begin{pmatrix}8&0\\0&2\end{pmatrix}z,\qquad M_1(z)=\begin{pmatrix}3&0\\0&6\end{pmatrix}z,\qquad M_2(z)=\begin{pmatrix}1&0\\0&1\end{pmatrix}z,\quad\text{for all } z\in\mathbb{R}^2.$$
Furthermore, we define the bi-functions $\psi=\psi_0$, $\psi_1$, and $\psi_2$ on $\mathbb{R}^2\times\mathbb{R}^2$, respectively, by $\psi(z,t)=-3z^2+zt+2t^2$, $\psi_1(z,t)=-4z^2+zt+3t^2$, and $\psi_2(z,t)=-5t^2+2t+5zt-5z-t^2$, for each $z,t\in\mathbb{R}^2$. Furthermore, let the bi-functions $\varphi=\varphi_0$, $\varphi_1$, and $\varphi_2$ on $\mathbb{R}^2\times\mathbb{R}^2$ be defined by $\varphi(z,t)=z^2-zt$, $\varphi_1(z,t)=2z(z-t)$, and $\varphi_2(z,t)=5t^2-2z$, for each $z,t\in\mathbb{R}^2$. Let $A_i,B_j:\mathbb{R}^2\to\mathbb{R}^2$ be defined by $A_i(z)=\frac{z}{i+1}$ and $B_j(z)=\frac{z}{j+1}$, respectively, for all $i,j$ and $z\in\mathbb{R}^2$. Evidently, we have $(0,0)\in\Omega$.
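Because the operators $M_i$ of this example are diagonal, their resolvents act componentwise: for $M(z)=\mathrm{diag}(d_1,d_2)\,z$ one has $(I+rM)^{-1}z=\big(\frac{z_1}{1+rd_1},\frac{z_2}{1+rd_2}\big)$. A small sketch of ours (the equilibrium bi-functions are omitted) checks this and the fact that $(0,0)$ is a common null point:

```python
# Sketch for the operators M_0, M_1, M_2 of Example 1 (our own check;
# the equilibrium bi-functions are omitted): for M(z) = diag(d1, d2) z,
# the resolvent (I + r*M)^{-1} is applied componentwise.

def resolvent_diag(z, r, diag):
    """(I + r*M)^{-1} z for M(z) with diagonal entries diag."""
    return [zi / (1.0 + r * di) for zi, di in zip(z, diag)]

r = 0.5
M0, M1, M2 = [8.0, 2.0], [3.0, 6.0], [1.0, 1.0]

z = [4.0, -2.0]
print(resolvent_diag(z, r, M0))  # -> [0.8, -1.0]

# (0, 0) is the common null point: it is fixed by every resolvent.
for diag in (M0, M1, M2):
    assert resolvent_diag([0.0, 0.0], r, diag) == [0.0, 0.0]
```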
We gather information such as the iterations and time of execution with the considered terminating scale and randomly selected initial points for Example 1 to manifest the efficiency of Algorithm 1 in Table 1.
In Figure 1, errors with regards to the number of iterations are plotted for randomly chosen different initial points for Example 1.
Example 2.
(Infinite-dimensional) Let $H=H_i=H_j=\ell_2$ for $i,j=0,1,2$, with $H=H_0$, where
$$\ell_2:=\Big\{z=(z_1,z_2,\ldots,z_m,\ldots),\ z_m\in\mathbb{R}:\sum_{m=1}^{\infty}|z_m|^2<\infty\Big\}.$$
Define $\langle\cdot,\cdot\rangle:\ell_2\times\ell_2\to\mathbb{R}$ by $\langle z,t\rangle=\sum_{m=1}^{\infty}z_m t_m$, where $z=\{z_m\}_{m=1}^{\infty},t=\{t_m\}_{m=1}^{\infty}\in\ell_2$, and the induced norm $\|\cdot\|_2:\ell_2\to\mathbb{R}$ by $\|z\|_2=\big(\sum_{m=1}^{\infty}|z_m|^2\big)^{1/2}$ for all $z=\{z_m\}_{m=1}^{\infty}\in\ell_2$. For $i=0,1,2$, define $M_i:\ell_2\to\ell_2$ by $M_i=M$ such that $M(z)=\frac{3}{2}z$ for all $z\in\ell_2$. Define the mappings $A_i:\ell_2\to\ell_2$ by $A_i(z)=\big(\frac{z_1}{4},\frac{z_2}{4},\frac{z_3}{4},\ldots,\frac{z_m}{4},\ldots\big)$ for all $z\in\ell_2$, and $A_i^*:\ell_2\to\ell_2$ by $A_i^*(t)=\big(\frac{t_1}{4},\frac{t_2}{4},\frac{t_3}{4},\ldots,\frac{t_m}{4},\ldots\big)$ for all $t\in\ell_2$. Furthermore, for $j=0,1,2$, define the mappings $B_j:\ell_2\to\ell_2$ by $B_j(z)=\big(\frac{z_1}{3},\frac{z_2}{3},\frac{z_3}{3},\ldots,\frac{z_m}{3},\ldots\big)$ for all $z\in\ell_2$, and $B_j^*:\ell_2\to\ell_2$ by $B_j^*(t)=\big(\frac{t_1}{3},\frac{t_2}{3},\frac{t_3}{3},\ldots,\frac{t_m}{3},\ldots\big)$ for all $t\in\ell_2$. We define the bi-functions $\psi_j:\ell_2\times\ell_2\to\mathbb{R}$ by $\psi_j=\psi$ such that $\psi(z,t)=-\|z\|^2+\|t\|^2$ for all $z,t\in\ell_2$, and $\varphi_j=0$ for each $j=0,1,2$. It is easy to see that the zero sequence belongs to $\Omega$.
Table 2 reports the number of iterations and the execution time of Algorithm 1 for Example 2, using randomly chosen initial points and the considered stopping tolerance.
In Figure 2, the error is plotted against the number of iterations for several randomly chosen initial points in Example 2.

6. Conclusions

This paper introduced a novel modified viscosity-type self-adaptive scheme to address SCNPPMOS and SGEPMOS. We rigorously proved strong convergence theorems, discussed the practical implications, and provided analytical examples that highlight the algorithm’s effectiveness. Our work not only contributes to the theoretical foundations of split problems, but also offers valuable tools for practitioners in fields such as optimization, signal processing, and machine learning. By consolidating and extending recent findings, our research advances the state-of-the-art in solving complex split problems. Future research may explore further enhancements and applications of this algorithm, pushing the boundaries of knowledge and practical problem-solving in this domain.

Author Contributions

M.A. (Mohd Asad), M.D., D.F. and M.A. (Mohammad Akram) contributed equally to this manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Acknowledgments

The authors are thankful to the Editor and reviewers for their kind and valuable suggestions, which improved the quality and contents of this paper. The second author wishes to extend his sincere gratitude to the Deanship of Scientific Research at the Islamic University of Madinah for the support provided to the Post-Publishing Program 2.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8, 221–239.
2. Godwin, E.; Izuchukwu, C.; Mewomo, O. An inertial extrapolation method for solving generalized split feasibility problems in real Hilbert spaces. Boll. Unione Mat. Ital. 2021, 14, 379–401.
3. Oyewole, O.; Abass, H.; Mewomo, O. A strong convergence algorithm for a fixed point constrained split null point problem. Rend. Circ. Mat. Palermo Ser. 2 2021, 70, 389–408.
4. Butnariu, D.; Resmerita, E. Bregman distances, totally convex functions, and a method for solving operator equations in Banach spaces. Abstr. Appl. Anal. 2006, 2006, 084919.
5. Byrne, C. Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18, 441.
6. Byrne, C. A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20, 103–120.
7. Censor, Y.; Elfving, T.; Kopf, N.; Bortfeld, T. The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 2005, 21, 2071.
8. Censor, Y.; Gibali, A.; Reich, S. Algorithms for the split variational inequality problem. Numer. Algorithms 2012, 59, 301–323.
9. Dadashi, V. Shrinking projection algorithms for the split common null point problem. Bull. Aust. Math. Soc. 2017, 96, 299–306.
10. Takahashi, S.; Takahashi, W. The split common null point problem and the shrinking projection method in Banach spaces. Optimization 2016, 65, 281–287.
11. Takahashi, W. The split common null point problem in Banach spaces. Arch. Math. 2015, 104, 357–365.
12. Takahashi, W. The split feasibility problem and the shrinking projection method in Banach spaces. J. Nonlinear Convex Anal. 2015, 16, 1449–1459.
13. Wang, F.; Xu, H.K. Cyclic algorithms for split feasibility problems in Hilbert spaces. Nonlinear Anal. Theory Methods Appl. 2011, 74, 4105–4111.
14. Xu, H.K. A variable Krasnosel'skii–Mann algorithm and the multiple-set split feasibility problem. Inverse Probl. 2006, 22, 2021.
15. Xu, H.K. Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010, 26, 105018.
16. Yang, Q. The relaxed CQ algorithm solving the split feasibility problem. Inverse Probl. 2004, 20, 1261.
17. Moudafi, A. The split common fixed-point problem for demicontractive mappings. Inverse Probl. 2010, 26, 055007.
18. Akram, M.; Dilshad, M.; Rajpoot, A.K.; Babu, F.; Ahmad, R.; Yao, J.C. Modified iterative schemes for a fixed point problem and a split variational inclusion problem. Mathematics 2022, 10, 2098.
19. Dilshad, M.; Aljohani, A.F.; Akram, M.; Khidir, A.A. Yosida approximation iterative methods for split monotone variational inclusion problems. J. Funct. Spaces 2022, 2022, 3665713.
20. Dilshad, M.; Siddiqi, A.H.; Ahmad, R.; Khan, F.A. An iterative algorithm for a common solution of a split variational inclusion problem and fixed point problem for non-expansive semigroup mappings. In Industrial Mathematics and Complex Systems; Industrial and Applied Mathematics; Manchanda, P., Lozi, R., Siddiqi, A., Eds.; Springer: Singapore, 2017.
21. Tuyen, T.M. A strong convergence theorem for the split common null point problem in Banach spaces. Appl. Math. Optim. 2019, 79, 207–227.
22. Tuyen, T.M.; Ha, N.S.; Thuy, N.T.T. A shrinking projection method for solving the split common null point problem in Banach spaces. Numer. Algorithms 2019, 81, 813–832.
23. Reich, S.; Truong, M.T.; Mai, T.N.H. The split feasibility problem with multiple output sets in Hilbert spaces. Optim. Lett. 2020, 14, 2335–2353.
24. Reich, S.; Tuyen, T.M. Two new self-adaptive algorithms for solving the split common null point problem with multiple output sets in Hilbert spaces. J. Fixed Point Theory Appl. 2021, 23, 16.
25. Bnouhachem, A. A hybrid iterative method for a combination of equilibria problem, a combination of variational inequality problems and a hierarchical fixed point problem. Fixed Point Theory Appl. 2014, 2014, 163.
26. Bnouhachem, A. An iterative algorithm for system of generalized equilibrium problems and fixed point problem. Fixed Point Theory Appl. 2014, 2014, 235.
27. Bnouhachem, A. Strong convergence algorithm for approximating the common solutions of a variational inequality, a mixed equilibrium problem and a hierarchical fixed-point problem. J. Inequal. Appl. 2014, 2014, 154.
28. Bnouhachem, A.; Al-Homidan, S.; Ansari, Q.H. An iterative method for common solutions of equilibrium problems and hierarchical fixed point problems. Fixed Point Theory Appl. 2014, 2014, 194.
29. Phuengrattana, W.; Lerkchaiyaphum, K. On solving the split generalized equilibrium problem and the fixed point problem for a countable family of nonexpansive multivalued mappings. Fixed Point Theory Appl. 2018, 2018, 6.
30. Kazmi, K.; Rizvi, S. Iterative approximation of a common solution of a split equilibrium problem, a variational inequality problem and a fixed point problem. J. Egypt. Math. Soc. 2013, 21, 44–51.
31. Cianciaruso, F.; Marino, G.; Muglia, L.; Yao, Y. A hybrid projection algorithm for finding solutions of mixed equilibrium problem and variational inequality problem. Fixed Point Theory Appl. 2010, 2010, 383740.
32. Olona, M.A.; Alakoya, T.O.; Abd-semii, O.E.O.; Mewomo, O.T. Inertial shrinking projection algorithm with self-adaptive step size for split generalized equilibrium and fixed point problems for a countable family of nonexpansive multivalued mappings. Demonstr. Math. 2021, 54, 47–67.
33. Blum, E.; Oettli, W. From optimization and variational inequalities to equilibrium problems. Math. Student 1994, 63, 123–145.
34. Godwin, E.C.; Mewomo, O.; Alakoya, T.A. On split generalized equilibrium problem with multiple output sets and common fixed point problem. Demonstr. Math. 2023.
35. Zegeye, H.; Shahzad, N. Convergence of Mann's type iteration method for generalized asymptotically nonexpansive mappings. Comput. Math. Appl. 2011, 62, 4007–4014.
36. Zhang, S.S.; Lee, J.H.; Chan, C.K. Algorithms of common solutions to quasi variational inclusion and fixed point problems. Appl. Math. Mech. 2008, 29, 571–581.
37. Reich, S.; Tuyen, T.M. Iterative methods for solving the generalized split common null point problem in Hilbert spaces. Optimization 2020, 69, 1013–1038.
38. Goebel, K.; Kirk, W.A. Topics in Metric Fixed Point Theory; Cambridge University Press: Cambridge, UK, 1990.
39. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: Berlin/Heidelberg, Germany, 2011.
40. Saejung, S.; Yotkaew, P. Approximation of zeros of inverse strongly monotone operators in Banach spaces. Nonlinear Anal. Theory Methods Appl. 2012, 75, 742–750.
41. Mahdioui, H.; Chadli, O. On a system of generalized mixed equilibrium problems involving variational-like inequalities in Banach spaces: Existence and algorithmic aspects. Adv. Oper. Res. 2012, 2012, 843486.
Figure 1. Error analysis of Algorithm 1 for Example 1.
Figure 2. Error analysis of Algorithm 1 for Example 2.
Table 1. Numerical results of Algorithm 1 for Example 1.
Iterations    Initial Points    Error Tolerance    CPU Time (s)
41            (0.78, 1.25)      1.0000 × 10⁻⁶      0.031250
36            (3.78, 1.25)      1.0000 × 10⁻⁶      0.015625
37            (4, 2)            1.0000 × 10⁻⁶      0.015625
72            (−1, −5)          1.0000 × 10⁻⁶      0.046875
Table 2. Numerical results of Algorithm 1 for Example 2.
Iterations    Initial Points       Error Tolerance    CPU Time (s)
67            (2, 1, 1/2, …)       1.0000 × 10⁻⁶      0.046875
69            (4, 2, 1, …)         1.0000 × 10⁻⁶      0.046875
68            (3, 3/5, 3/25, …)    1.0000 × 10⁻⁶      0.046875
71            (6, 1, 1/6, …)       1.0000 × 10⁻⁶      0.046875

Share and Cite

MDPI and ACS Style

Asad, M.; Dilshad, M.; Filali, D.; Akram, M. A Modified Viscosity-Type Self-Adaptive Iterative Algorithm for Common Solution of Split Problems with Multiple Output Sets in Hilbert Spaces. Mathematics 2023, 11, 4175. https://0-doi-org.brum.beds.ac.uk/10.3390/math11194175


