Article

Convergence Theorems for Common Solutions of Split Variational Inclusion and Systems of Equilibrium Problems

1 College of Mathematics and Statistics, Chongqing Technology and Business University, Chongqing 400067, China
2 Department of Mathematics Education, Gyeongsang National University, Jinju 52828, Korea
3 School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 611731, China
* Author to whom correspondence should be addressed.
Submission received: 11 February 2019 / Accepted: 28 February 2019 / Published: 12 March 2019
(This article belongs to the Special Issue Fixed Point Theory and Related Nonlinear Problems with Applications)

Abstract

In this paper, the split variational inclusion problem (SVIP) and the system of equilibrium problems (EP) are considered in Hilbert spaces. Inspired by the works of Byrne et al., López et al., Moudafi and Thukur, Sombut and Plubtieng, Sitthithakerngkiet et al., and Eslamian and Fakhri, a new self-adaptive step size algorithm is proposed to find a common element of the solution sets of the problems SVIP and EP. Convergence theorems for the algorithm are established under suitable conditions, and an application to the common solution of a fixed point problem and a split convex optimization problem is considered. Finally, computational experiments and a comparison with related algorithms are presented to illustrate the efficiency and applicability of our new algorithms.

1. Introduction

Let ϕ : C × C → ℝ be a bifunction, where C is a nonempty closed convex subset of a real Hilbert space H and ℝ is the set of real numbers. The equilibrium problem (EP) for ϕ is as follows:
Find a point x ∈ C such that ϕ(x, y) ≥ 0 for all y ∈ C. (1)
The problem (EP) (1) represents a very suitable and common format for the investigation and solution of various applied problems and involves many other general problems in nonlinear analysis, such as complementarity, fixed point, and variational inequality problems. A wide range of problems in finance, physics, network analysis, economics, and optimization can be reduced to finding a solution of the problem (1) (see, for instance, Blum and Oettli [1], Flam and Antipin [2], Moudafi [3], and Bnouhachem et al. [4]). Moreover, many authors have studied methods and algorithms to approximate a solution of the problem (1), for instance, descent algorithms in Konnov and Ali [5], Konnov and Pinyagina [6], Charitha [7], and Lorenzo et al. [8]. For more details, refer to Ceng [9], Yao et al. [10,11,12], Qin et al. [13,14], Hung and Muu [15], Quoc et al. [16], Santos et al. [17], Thuy et al. [18], Rockafellar [19], Moudafi [20,21], Muu and Oettli [22], and Dong et al. [23].
On the other hand, many real-world inverse problems can be cast into the framework of the split inverse problem (SIP), which is formulated as follows:
Find a point x ∈ X that solves the problem IP1
and such that the point
y = Ax ∈ Y solves the problem IP2,
where IP1 and IP2 are two inverse problems, X and Y are two vector spaces, and A : X → Y is a linear operator.
Realistic problems can be represented by making different choices of the spaces X and Y (including the case X = Y ) and by choosing appropriate inverse problems for the problems IP1 and IP2. In particular, the well-known split convex feasibility problem (SCFP) (Censor and Elfving [24]) is illustrated as follows:
Find a point x such that x ∈ C and Ax ∈ Q, (2)
where C and Q are nonempty closed and convex subsets of real Hilbert spaces H_1 and H_2, respectively, and A is a bounded linear operator from H_1 to H_2. The problem SCFP (2) has received considerable attention due to its applications in signal processing and image reconstruction, with particular progress in intensity modulated radiation therapy, compressed sensing, approximation theory, and control theory (see, for example, [25,26,27,28,29,30,31,32] and the references therein).
Initiated by the problem SCFP, several split type problems have been investigated and studied, for example, split variational inequality problems, split common fixed point problems, and split null point problems. Especially, Moudafi [33] introduced the split monotone variational inclusion problem (SMVIP) for two operators f_1 : H_1 → H_1 and f_2 : H_2 → H_2 and multi-valued maximal monotone mappings B_1, B_2 as follows:
Find a point x ∈ H_1 such that 0 ∈ f_1(x) + B_1(x) (3)
and
y = Ax ∈ H_2 such that 0 ∈ f_2(y) + B_2(y). (4)
If f_1 = f_2 = 0 in the problem SMVIPs (3) and (4), the problem SMVIP reduces to the following split variational inclusion problem (SVIP):
Find a point x ∈ H_1 such that 0 ∈ B_1(x) (5)
and
y = Ax ∈ H_2 such that 0 ∈ B_2(y). (6)
The problem SVIPs (5) and (6) constitutes a pair of variational inclusion problems which have to be solved so that the image y = Ax, under a given bounded linear operator A, of a solution x of the problem (5) in H_1 is a solution of the problem (6) in another Hilbert space H_2. Indeed, one can see that x ∈ B_1^{-1}(0) and y = Ax ∈ B_2^{-1}(0). The SVIPs (5) and (6) are at the core of modeling many inverse problems arising from phase retrieval and other real-world problems, for instance, in sensor networks, in computerized tomography, and in data compression (see, for example, [34,35,36]).
In the process of studying equilibrium problems and split inverse problems, not only have techniques and methods for solving the respective problems been proposed (see, for example, the CQ-algorithm in Byrne [37,38], the relaxed CQ-algorithm in Yang [39] and Gibali et al. [40], and the self-adaptive algorithms in López et al. [41], Moudafi and Thukur [42], and Gibali [43]), but common solutions of equilibrium problems, split inverse problems, and other problems have also been considered in many works. For example, Plubtieng and Sombut [44] considered the common solution of equilibrium problems and nonspreading mappings; Sombut and Plubtieng [45] studied a common solution of equilibrium problems and split feasibility problems in Hilbert spaces; Sitthithakerngkiet et al. [46] investigated a common solution of split monotone variational inclusion problems and the fixed point problem of nonlinear operators; Eslamian and Fakhri [47] considered split equality monotone variational inclusion problems and the fixed point problem of set-valued operators; Censor and Segal [48] and Plubtieng and Sriprad [49] explored split common fixed point problems for directed operators. In particular, there are applications to mathematical models in convex optimization and compressed sensing whose constraints can be presented as equilibrium problems and split variational inclusion problems, which stimulated our research on this kind of problem.
Motivated by the above works, we consider the following split variational inclusion problem and equilibrium problem:
Let H_1 and H_2 be real Hilbert spaces and A : H_1 → H_2 be a bounded linear operator. Let B_1 : H_1 → 2^{H_1} and B_2 : H_2 → 2^{H_2} be two set-valued mappings with nonempty values and ϕ : C × C → ℝ be a bifunction, where C is a nonempty closed convex subset of H_1.
The split monotone variational inclusion and equilibrium problem (SMVIEP) is as follows:
Find a point x ∈ EP(ϕ) ∩ B_1^{-1}(0) (7)
such that
y = Ax ∈ B_2^{-1}(0), (8)
where EP(ϕ) denotes the solution set of the problem EP.
Combining the techniques of Byrne et al., López et al., Moudafi and Thukur, and Sitthithakerngkiet et al. as well as those of Sombut and Plubtieng and Eslamian and Fakhri, the purpose of this paper is to introduce a new iterative method, called a new self-adaptive step size algorithm, for solving the problem SMVIEPs (7) and (8) in Hilbert spaces.
The outline of the paper is as follows: In Section 2, we collect definitions and results which are needed for our further analysis. In Section 3, our new self-adaptive step size algorithms are introduced and analyzed, and weak and strong convergence theorems for the proposed algorithms are obtained under suitable conditions. Moreover, as applications, the existence of a fixed point of a pseudo-contractive mapping and of a solution of the split convex optimization problem is considered in Section 4. Finally, numerical examples and a comparison with some related algorithms are presented to illustrate the performances of our new algorithms.

2. Preliminaries

Now, we recall some concepts and results which are needed in the sequel.
Let H be a Hilbert space with the inner product ⟨·,·⟩ and the induced norm ∥·∥ and let I be the identity operator on H. Let Fix(T) denote the fixed point set of an operator T if T has a fixed point. The symbols "→" and "⇀" represent strong and weak convergence, respectively. For any sequence {x_n} ⊂ H, ω_w(x_n) denotes the weak ω-limit set of {x_n}, that is,
ω_w(x_n) := {x ∈ H : x_{n_j} ⇀ x for some subsequence {n_j} of {n}}.
The following properties of the norm in a Hilbert space H are well known:
∥αx + βy + γz∥^2 = α∥x∥^2 + β∥y∥^2 + γ∥z∥^2 − αβ∥x − y∥^2 − βγ∥y − z∥^2 − γα∥x − z∥^2
for all x, y, z ∈ H and α, β, γ ≥ 0 with α + β + γ = 1. Moreover, the following inequality holds: for all x, y ∈ H,
∥x + y∥^2 ≤ ∥x∥^2 + 2⟨y, x + y⟩.
Let C be a closed convex subset of H. For all x ∈ H, there exists a unique nearest point in C, denoted by P_C x, such that
∥x − P_C x∥ = min{∥x − y∥ : y ∈ C}.
The operator P_C is called the metric projection of H onto C. Some properties of the operator P_C are as follows: for all x, y ∈ H,
⟨x − y, P_C x − P_C y⟩ ≥ ∥P_C x − P_C y∥^2
and, for all x ∈ H and y ∈ C,
⟨x − P_C x, y − P_C x⟩ ≤ 0.
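As a concrete illustration, when C is a closed ball C = {y ∈ H : ∥y − c∥ ≤ R}, the metric projection has the explicit form P_C x = c + R(x − c)/∥x − c∥ if ∥x − c∥ > R and P_C x = x otherwise. A minimal MATLAB sketch (projBall is a hypothetical helper name of ours, not from the paper):

% Metric projection onto the closed ball C = {y : norm(y - c) <= R}.
% Illustrative sketch only; projBall is our own helper name.
function p = projBall(x, c, R)
    d = norm(x - c);
    if d <= R
        p = x;                    % x already lies in C
    else
        p = c + R * (x - c) / d;  % radial projection onto the boundary sphere
    end
end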
Definition 1.
Let H be a real Hilbert space and D be a subset of H. For all x, y ∈ D, an operator h : D → H is said to be:
(1) 
firmly nonexpansive on D if
⟨h(x) − h(y), x − y⟩ ≥ ∥h(x) − h(y)∥^2.
(2) 
Lipschitz continuous with constant κ > 0 on D if
∥h(x) − h(y)∥ ≤ κ∥x − y∥.
(3) 
nonexpansive on D if
∥h(x) − h(y)∥ ≤ ∥x − y∥.
(4) 
hemicontinuous if it is continuous along each line segment in D.
(5) 
averaged if there exist a nonexpansive operator T : D H and a number c ( 0 , 1 ) such that
h = (1 − c)I + cT.
Remark 1.
The following can be easily obtained:
(1) 
An operator h is firmly nonexpansive if and only if I − h is firmly nonexpansive (see [50], Lemma 2.3); in this case, h is nonexpansive.
(2) 
If h_1 and h_2 are averaged, then their composition S = h_1 ∘ h_2 is averaged (see [50], Lemma 2.2).
Definition 2.
Let H be a real Hilbert space and λ > 0. The operator B : H → 2^H is said to be:
(1) 
monotone if, for all u B ( x ) and v B ( y ) ,
⟨u − v, x − y⟩ ≥ 0.
(2) 
maximal monotone if the graph G r a p h ( B ) of B,
Graph(B) := {(x, u) ∈ H × H : u ∈ B(x)},
is not properly contained in the graph of any other monotone mapping.
(3) 
The resolvent of B with parameter λ > 0 is denoted by
J_λ^B := (I + λB)^{-1},
where I is the identity operator.
Remark 2.
For any λ > 0 , the following hold:
(1) 
B is maximal monotone if and only if J_λ^B is single-valued, firmly nonexpansive, and dom(J_λ^B) = H, where dom(B) := {x ∈ H : B(x) ≠ ∅}.
(2) 
x ∈ B^{-1}(0) if and only if x = J_λ^B x.
(3) 
The solution set Ω of the problem SVIPs (5) and (6) can be characterized equivalently as follows:
Find a point x ∈ H_1 with x = J_λ^{B_1} x such that y = Ax ∈ H_2 and y = J_λ^{B_2} y.
(4) 
For some more results, refer to [33,36,51].
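For a linear positive semidefinite operator B on ℝ^n (which is maximal monotone), the resolvent reduces to a linear solve. A brief MATLAB sketch of this remark (an illustration of ours, using the diagonal operator B_1 of Example 2 in Section 5):

% Resolvent of a linear maximal monotone operator B on R^n:
% J_lambda^B x = (I + lambda*B)^{-1} x, computed as a linear solve.
n = 3; lambda = 0.5;
B = diag([6 4 3]);                  % positive definite, hence maximal monotone
J = @(x) (eye(n) + lambda*B) \ x;
x = [1; -2; 4];
z = J(x);                           % z = (I + lambda*B)^{-1} x
% Zero-point characterization of (2) above: x in B^{-1}(0) iff x = J(x);
% here B^{-1}(0) = {0}, and indeed J(0) = 0.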
Assume that an equilibrium bifunction ϕ : C × C → ℝ satisfies the following conditions:
(A1)
ϕ(x, x) = 0 for all x ∈ C;
(A2)
ϕ(x, y) + ϕ(y, x) ≤ 0 for all x, y ∈ C;
(A3)
For all x, y, z ∈ C, lim_{t→0^+} ϕ(tz + (1 − t)x, y) ≤ ϕ(x, y);
(A4)
For all x ∈ C, y ↦ ϕ(x, y) is convex and lower semi-continuous.
Lemma 1.
(see [2]) Let C be a nonempty closed convex subset of a Hilbert space H and suppose that ϕ : C × C → ℝ satisfies the conditions (A1)–(A4). For all r > 0 and x ∈ H, define a mapping T_r : H → C by
T_r(x) = {z ∈ C : ϕ(z, y) + (1/r)⟨y − z, z − x⟩ ≥ 0 for all y ∈ C}.
Then the following hold:
(1) 
T_r(x) is nonempty for each x ∈ H and T_r is single-valued.
(2) 
T_r is firmly nonexpansive, that is, for all x, y ∈ H,
⟨T_r x − T_r y, x − y⟩ ≥ ∥T_r x − T_r y∥^2
and, further, T_r is nonexpansive.
(3) 
EP(ϕ) = Fix(T_r) is closed and convex.
Lemma 2.
(see [26,52]) Assume that {a_n} is a sequence of nonnegative real numbers such that, for each n ≥ 0,
a_{n+1} ≤ (1 − θ_n)a_n + δ_n,
where {θ_n} is a sequence in (0, 1) and {δ_n} is a sequence such that
(a) 
lim_{n→∞} θ_n = 0 and Σ_{n=1}^∞ θ_n = ∞;
(b) 
lim sup_{n→∞} δ_n/θ_n ≤ 0 or Σ_{n=1}^∞ |δ_n| < ∞.
Then lim_{n→∞} a_n = 0.
Lemma 3.
(see [53]) Assume that {a_n} and {δ_n} are sequences of nonnegative numbers such that, for each n ≥ 0,
a_{n+1} ≤ a_n + δ_n.
If Σ_{n=1}^∞ δ_n < ∞, then lim_{n→∞} a_n exists; if, in addition, {a_n} has a subsequence converging to zero, then lim_{n→∞} a_n = 0.
Lemma 4.
(see [52]) Let {Γ_n} be a sequence of real numbers that does not decrease at infinity, in the sense that there exists a subsequence {Γ_{n_j}} of {Γ_n} such that Γ_{n_j} < Γ_{n_j+1} for each j ≥ 0. Consider the sequence {σ(n)}_{n≥n_0} of integers defined by
σ(n) = max{k ≤ n : Γ_k ≤ Γ_{k+1}}.
Then {σ(n)}_{n≥n_0} is a nondecreasing sequence satisfying lim_{n→∞} σ(n) = ∞ and, for each n ≥ n_0,
max{Γ_{σ(n)}, Γ_n} ≤ Γ_{σ(n)+1}.
Lemma 5.
(see [54]) Let C be a nonempty closed convex subset of a real Hilbert space H. If T : C → C is nonexpansive and Fix(T) ≠ ∅, then the mapping I − T is demiclosed at 0, that is, if {x_n} is a sequence in C that converges weakly to x and ∥x_n − T x_n∥ → 0, then x = T x.

3. The Main Results

In this section, we introduce our algorithms and state our main results.
Throughout this paper, we always assume that H_1 and H_2 are Hilbert spaces, C is a nonempty closed convex subset of H_1, the bifunction ϕ : C × C → ℝ satisfies the conditions (A1)–(A4), A : H_1 → H_2 is a bounded linear operator, and A* denotes the adjoint of A (in finite-dimensional spaces, A* = A^T). Let B_1 : H_1 → 2^{H_1} and B_2 : H_2 → 2^{H_2} be two maximal monotone operators.
Now, we define the functions by
f(x) = (1/2)∥(I − J_λ^{B_2})Ax∥^2, g(x) = (1/2)∥(I − J_λ^{B_1})x∥^2,
and
F(x) = A*(I − J_λ^{B_2})Ax, G(x) = (I − J_λ^{B_1})x,
where J_λ^B = (I + λB)^{-1} for any λ > 0.
From Aubin [55], one can see that f and g are weakly lower semi-continuous, convex, and differentiable. Moreover, the functions F and G are Lipschitz continuous according to Byrne et al. [36]. Denote the solution set of the problem SVIPs (5) and (6) by
Ω = {x ∈ H_1 : 0 ∈ B_1(x), 0 ∈ B_2(Ax)}.
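In the finite-dimensional linear setting used in the experiments of Section 5, all four quantities can be evaluated directly. The following MATLAB sketch illustrates this for the matrices of Example 2 in Section 5 (the helper names J1, J2, etc. are ours, not from the paper):

% f, g, F, G for linear maximal monotone operators B1, B2 and a matrix A.
% J1 and J2 are the resolvents J_lambda^{B1} and J_lambda^{B2}.
lambda = 1;
A  = [6 3 1; 8 7 5; 3 6 2];
B1 = diag([6 4 3]);  B2 = diag([7 5 2]);
J1 = @(x) (eye(3) + lambda*B1) \ x;
J2 = @(y) (eye(3) + lambda*B2) \ y;
f = @(x) 0.5*norm(A*x - J2(A*x))^2;   % f(x) = (1/2)||(I - J2)Ax||^2
g = @(x) 0.5*norm(x - J1(x))^2;       % g(x) = (1/2)||(I - J1)x||^2
F = @(x) A'*(A*x - J2(A*x));          % F(x) = A^*(I - J2)Ax
G = @(x) x - J1(x);                   % G(x) = (I - J1)x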

3.1. Iterative Algorithms

Now, we introduce the following three algorithms for our main results:
Algorithm 1. Choose a positive sequence {ρ_n} satisfying ε < ρ_n < 4 − ε for some ε > 0 small enough. Select an arbitrary starting point x_0, set n = 0 and let r > 0, λ > 0.
Iterative Step: For the iterate x_n, for each n ≥ 0, compute z_n ∈ C such that
ϕ(z_n, y) + (1/r)⟨y − z_n, z_n − x_n⟩ ≥ 0 for all y ∈ C,
y_n = β_n x_n + (1 − β_n) z_n,
γ_n = ρ_n (f(y_n) + g(y_n)) / (∥F(y_n)∥^2 + ∥G(y_n)∥^2) if ∥F(y_n)∥^2 + ∥G(y_n)∥^2 ≠ 0, and γ_n = 0 otherwise,
and calculate the next iterate as
x_{n+1} = α_n x_n + (1 − α_n) J_λ^{B_1}(I − γ_n A*(I − J_λ^{B_2})A) y_n, (31)
where {α_n} and {β_n} are sequences in (0, 1).
Stop Criterion: If x_{n+1} = x_n = y_n, then stop. Otherwise, set n := n + 1 and return to the Iterative Step.
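To make the scheme concrete, here is a MATLAB sketch of Algorithm 1 for the linear data of Example 2 in Section 5; for the bifunction used there, the T_r step has the closed form z_n = x_n/(1 + 5r) (derived in Example 1), so no inner solve is needed. The parameter choices follow Section 5.3; the sketch is our illustration, not the authors' original code:

% One run of Algorithm 1 on the data of Example 2 (sketch).
A  = [6 3 1; 8 7 5; 3 6 2];
B1 = diag([6 4 3]);  B2 = diag([7 5 2]);
lambda = 1; r = 0.5; x = [13; 12; 25];
J1 = @(v) (eye(3) + lambda*B1) \ v;
J2 = @(v) (eye(3) + lambda*B2) \ v;
f = @(v) 0.5*norm(A*v - J2(A*v))^2;  g = @(v) 0.5*norm(v - J1(v))^2;
F = @(v) A'*(A*v - J2(A*v));         G = @(v) v - J1(v);
for n = 1:200
    alpha = 1/(n+1); beta = 1/(10*n+2); rho = 3 - 1/(n+1);
    z = x/(1 + 5*r);                 % closed-form T_r step for this phi
    y = beta*x + (1 - beta)*z;
    den = norm(F(y))^2 + norm(G(y))^2;
    if den == 0, gam = 0; else, gam = rho*(f(y) + g(y))/den; end
    xnew = alpha*x + (1 - alpha)*J1(y - gam*F(y));
    if norm(xnew - y) <= 1e-6, break; end   % stopping rule used in Section 5
    x = xnew;
end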
Algorithm 2. Choose a positive sequence {ρ_n} satisfying ε < ρ_n < 4 − ε for some ε > 0 small enough. Select an arbitrary starting point x_0, set n = 0 and let r > 0, λ > 0.
Iterative Step: For the iterate x_n, for each n ≥ 0, compute z_n ∈ C such that
ϕ(z_n, y) + (1/r)⟨y − z_n, z_n − x_n⟩ ≥ 0 for all y ∈ C,
y_n = β_n x_n + (1 − β_n) z_n,
γ_n = ρ_n (f(y_n) + g(y_n)) / (∥F(y_n)∥^2 + ∥G(y_n)∥^2) if ∥F(y_n)∥^2 + ∥G(y_n)∥^2 ≠ 0, and γ_n = 0 otherwise,
and calculate the next iterate as
x_{n+1} = α_n x_0 + (1 − α_n) J_λ^{B_1}(I − γ_n A*(I − J_λ^{B_2})A) y_n,
where {α_n} and {β_n} are sequences in (0, 1).
Stop Criterion: If x_{n+1} = x_n = y_n, then stop. Otherwise, set n := n + 1 and return to the Iterative Step.
Algorithm 3. Choose a positive sequence {ρ_n} satisfying ε < ρ_n < 4 − ε for some ε > 0 small enough. Select an arbitrary starting point x_0, set n = 0 and let r > 0, λ > 0.
Iterative Step: For the iterate x_n, for each n ≥ 0, compute z_n ∈ C such that
ϕ(z_n, y) + (1/r)⟨y − z_n, z_n − x_n⟩ ≥ 0 for all y ∈ C,
y_n = β_n x_n + (1 − β_n) z_n,
γ_n = ρ_n (f(y_n) + g(y_n)) / (∥F(y_n)∥^2 + ∥G(y_n)∥^2) if ∥F(y_n)∥^2 + ∥G(y_n)∥^2 ≠ 0, and γ_n = 0 otherwise,
and calculate the next iterate as
x_{n+1} = (1 − α_n − τ_n) x_n + α_n J_λ^{B_1}(I − γ_n A*(I − J_λ^{B_2})A) y_n,
where {α_n}, {β_n} and {τ_n} are sequences in (0, 1) with α_n + τ_n ≤ 1.
Stop Criterion: If x_{n+1} = x_n = y_n, then stop. Otherwise, set n := n + 1 and return to the Iterative Step.
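The three algorithms share the z_n, y_n and γ_n steps and differ only in the final update. In MATLAB notation, with u = J_λ^{B_1}(I − γ_n A*(I − J_λ^{B_2})A) y_n already computed as in the sketch after Algorithm 1:

% Final updates of the three algorithms (u as computed above):
% Algorithm 1 (Mann-type averaging):       xnew = alpha*x  + (1 - alpha)*u;
% Algorithm 2 (anchored at x0):            xnew = alpha*x0 + (1 - alpha)*u;
% Algorithm 3 (toward the min-norm point): xnew = (1 - alpha - tau)*x + alpha*u;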

3.2. Weak Convergence Analysis for Algorithm 1

First, we give one lemma for our main result.
Lemma 6.
Suppose that EP(ϕ) ∩ Ω ≠ ∅. If x_{n+1} = y_n = x_n in Algorithm 1, then x_n ∈ EP(ϕ) ∩ Ω.
Proof. 
Denote T_r(x_n) = {z_n ∈ C : ϕ(z_n, y) + (1/r)⟨y − z_n, z_n − x_n⟩ ≥ 0 for all y ∈ C}. Then we can see that z_n = T_r x_n.
If x_n = y_n, then it follows from the construction of y_n that x_n = z_n and hence x_n ∈ Fix(T_r), that is, x_n ∈ EP(ϕ) from Lemma 1.
On the other hand, the operators J_λ^{B_1} and I − γ_n A*(I − J_λ^{B_2})A are averaged from Remark 1. Since Ω ≠ ∅, it follows from Lemma 2.1 of [51], applied to the averaged operators J_λ^{B_1} and I − γ_n A*(I − J_λ^{B_2})A, that
Fix(J_λ^{B_1}) ∩ Fix(I − γ_n A*(I − J_λ^{B_2})A) = Fix(J_λ^{B_1}(I − γ_n A*(I − J_λ^{B_2})A)) = Fix((I − γ_n A*(I − J_λ^{B_2})A) J_λ^{B_1}).
If x_{n+1} = y_n = x_n, then we can see that x_n = J_λ^{B_1}(I − γ_n A*(I − J_λ^{B_2})A) x_n from the recursion (31) and hence x_n ∈ Fix(J_λ^{B_1}) and x_n ∈ Fix(I − γ_n A*(I − J_λ^{B_2})A), that is, x_n ∈ B_1^{-1}(0) and
(I − γ_n A*(I − J_λ^{B_2})A) x_n = x_n, that is, A*(I − J_λ^{B_2})A x_n = 0.
This can be written as J_λ^{B_2} A x_n = A x_n + w, where A*w = 0. Without loss of generality, if we take z ∈ Ω, then J_λ^{B_2} A z = A z and J_λ^{B_2} A x_n − J_λ^{B_2} A z = A x_n + w − A z, and so
∥A x_n − A z∥^2 ≥ ∥J_λ^{B_2}(A x_n) − J_λ^{B_2}(A z)∥^2 = ∥A x_n + w − A z∥^2 = ∥A x_n − A z∥^2 + 2⟨A x_n − A z, w⟩ + ∥w∥^2 = ∥A x_n − A z∥^2 + 2⟨x_n − z, A*w⟩ + ∥w∥^2 = ∥A x_n − A z∥^2 + ∥w∥^2,
which means that w = 0 and so J_λ^{B_2} A x_n = A x_n, that is, A x_n ∈ B_2^{-1}(0). Hence x_n ∈ Ω and, furthermore, x_n ∈ EP(ϕ) ∩ Ω. This completes the proof. □
Theorem 1.
Let H_1, H_2 be two real Hilbert spaces and A : H_1 → H_2 be a bounded linear operator. Let ϕ be a bifunction satisfying the conditions (A1)–(A4). Assume that B_1 : H_1 → 2^{H_1} and B_2 : H_2 → 2^{H_2} are maximal monotone mappings with EP(ϕ) ∩ Ω ≠ ∅. Then the sequence {x_n} generated by Algorithm 1 converges weakly to an element of EP(ϕ) ∩ Ω, where the parameters {α_n}, {β_n} are in (0, 1) and satisfy the following conditions:
lim_{n→∞} α_n = 0, Σ_{n=1}^∞ |β_n − β_{n−1}| < ∞, Σ_{n=1}^∞ |α_n − α_{n−1}| < ∞.
Proof. 
Denote T_n = I − γ_n A*(I − J_λ^{B_2})A. Since J_λ^{B_2} is firmly nonexpansive according to Remark 2, T_n is averaged from (1), (2) of Remark 1, and J_λ^{B_1}(I − γ_n A*(I − J_λ^{B_2})A) is averaged and nonexpansive.
First, we show that the sequences {x_n} and {y_n} are bounded. Since EP(ϕ) ∩ Ω ≠ ∅, we take z ∈ EP(ϕ) ∩ Ω and then z = T_r(z), z = J_λ^{B_1} z and A z = J_λ^{B_2} A z. At the same time, it follows that z_n = T_r x_n and T_r is nonexpansive according to Lemma 1, and so
∥y_n − z∥ ≤ β_n ∥x_n − z∥ + (1 − β_n)∥z_n − z∥ ≤ ∥x_n − z∥
and
∥x_{n+1} − z∥ = ∥α_n x_n + (1 − α_n) J_λ^{B_1}(I − γ_n A*(I − J_λ^{B_2})A) y_n − z∥ ≤ α_n ∥x_n − z∥ + (1 − α_n)∥y_n − z∥ ≤ ∥x_n − z∥,
which means that the sequence {x_n} is bounded and so are {y_n}, {z_n} and {u_n}, where z_n = T_r x_n and u_n = T_n y_n.
Next, we show that ∥x_{n+1} − x_n∥ → 0. Since z_n = T_r x_n, we have z_{n−1} = T_r x_{n−1} and, for all y ∈ C,
ϕ(z_n, y) + (1/r)⟨y − z_n, z_n − x_n⟩ ≥ 0 (39)
and
ϕ(z_{n−1}, y) + (1/r)⟨y − z_{n−1}, z_{n−1} − x_{n−1}⟩ ≥ 0. (40)
Putting y = z_{n−1} in (39) and y = z_n in (40), we have
ϕ(z_n, z_{n−1}) + (1/r)⟨z_{n−1} − z_n, z_n − x_n⟩ ≥ 0 (41)
and
ϕ(z_{n−1}, z_n) + (1/r)⟨z_n − z_{n−1}, z_{n−1} − x_{n−1}⟩ ≥ 0. (42)
Adding the inequalities (41) and (42) and using ϕ(z_n, z_{n−1}) + ϕ(z_{n−1}, z_n) ≤ 0, which holds since ϕ is monotone from (A2), we have
⟨z_n − z_{n−1}, z_{n−1} − x_{n−1} − (z_n − x_n)⟩ ≥ 0,
which implies that
⟨z_n − z_{n−1}, z_{n−1} − z_n⟩ + ⟨z_n − z_{n−1}, x_n − x_{n−1}⟩ ≥ 0,
that is,
⟨z_n − z_{n−1}, x_n − x_{n−1}⟩ ≥ ∥z_n − z_{n−1}∥^2.
Furthermore, we have ∥z_n − z_{n−1}∥ ≤ ∥x_n − x_{n−1}∥. According to the definition of the iterative sequence {y_n}, we have
∥y_n − y_{n−1}∥ = ∥β_n x_n + (1 − β_n) z_n − β_{n−1} x_{n−1} − (1 − β_{n−1}) z_{n−1}∥ = ∥β_n(x_n − x_{n−1}) + (1 − β_n)(z_n − z_{n−1}) + (β_n − β_{n−1})(x_{n−1} − z_{n−1})∥ ≤ β_n ∥x_n − x_{n−1}∥ + (1 − β_n)∥z_n − z_{n−1}∥ + |β_n − β_{n−1}| {∥x_{n−1}∥ + ∥z_{n−1}∥} ≤ ∥x_n − x_{n−1}∥ + |β_n − β_{n−1}| {∥x_{n−1}∥ + ∥z_{n−1}∥}
and, since u_n = T_n y_n, we have
∥u_n − u_{n−1}∥ = ∥y_n − γ_n F(y_n) − y_{n−1} + γ_{n−1} F(y_{n−1})∥ ≤ ∥y_n − y_{n−1}∥ + γ_n ∥F(y_n)∥ + γ_{n−1} ∥F(y_{n−1})∥ ≤ ∥x_n − x_{n−1}∥ + |β_n − β_{n−1}| {∥x_{n−1}∥ + ∥z_{n−1}∥} + γ_n ∥F(y_n)∥ + γ_{n−1} ∥F(y_{n−1})∥.
On the other hand, we have
⟨F(y_n), y_n − z⟩ = ⟨A*(I − J_λ^{B_2})A y_n, y_n − z⟩ = ⟨(I − J_λ^{B_2})A y_n − (I − J_λ^{B_2})A z, A y_n − A z⟩ ≥ ∥(I − J_λ^{B_2})A y_n∥^2 = 2 f(y_n).
Since J_λ^{B_1} is nonexpansive, we have
∥x_{n+1} − z∥^2 = ∥α_n(x_n − z) + (1 − α_n)(J_λ^{B_1}(I − γ_n A*(I − J_λ^{B_2})A) y_n − z)∥^2 ≤ α_n ∥x_n − z∥^2 + (1 − α_n)[∥y_n − z∥^2 + γ_n^2 ∥F(y_n)∥^2 − 2 γ_n ⟨F(y_n), y_n − z⟩] ≤ α_n ∥x_n − z∥^2 + (1 − α_n)[∥y_n − z∥^2 + γ_n^2 ∥F(y_n)∥^2 − 4 γ_n f(y_n)] ≤ ∥x_n − z∥^2 − ρ_n (4 f(y_n)/(f(y_n) + g(y_n)) − ρ_n) (f(y_n) + g(y_n))^2 / (∥F(y_n)∥^2 + ∥G(y_n)∥^2). (48)
For any ε > 0 small enough, the sequence {x_n} is Fejér monotone with respect to EP(ϕ) ∩ Ω, which ensures the existence of the limit of {∥x_n − z∥}, and so we can denote l(z) = lim_{n→∞} ∥x_n − z∥. Thus it follows from (48) that
ρ_n (4 f(y_n)/(f(y_n) + g(y_n)) − ρ_n) (f(y_n) + g(y_n))^2 / (∥F(y_n)∥^2 + ∥G(y_n)∥^2) ≤ ∥x_n − z∥^2 − ∥x_{n+1} − z∥^2 → 0,
since F and G are Lipschitz continuous from Byrne et al. [36] and so F(y_n) and G(y_n) are bounded.
In addition, it also follows from (48) that
Σ_{n=1}^∞ (f(y_n) + g(y_n))^2 / (∥F(y_n)∥^2 + ∥G(y_n)∥^2) < ∞,
which means that
Σ_{n=1}^∞ γ_n < ∞,
and hence f(y_n) → 0 and g(y_n) → 0 as n → ∞, since f(y_n) + g(y_n) → 0 and both functions are nonnegative.
Thus it follows from the above inequalities that
∥x_{n+1} − x_n∥ ≤ α_n ∥x_n − x_{n−1}∥ + (1 − α_n)∥u_n − u_{n−1}∥ + |α_n − α_{n−1}| ∥x_{n−1} − u_{n−1}∥ ≤ α_n ∥x_n − x_{n−1}∥ + (1 − α_n)(∥x_n − x_{n−1}∥ + θ_n) + |α_n − α_{n−1}| ∥x_{n−1} − u_{n−1}∥ ≤ ∥x_n − x_{n−1}∥ + (1 − α_n) θ_n + |α_n − α_{n−1}| ∥x_{n−1} − u_{n−1}∥, (51)
where θ_n = |β_n − β_{n−1}| {∥x_{n−1}∥ + ∥z_{n−1}∥} + γ_n ∥F(y_n)∥ + γ_{n−1} ∥F(y_{n−1})∥ and Σ_{n=1}^∞ θ_n < ∞. It follows from the conditions on α_n and β_n, combined with the formula (51), that the limit of ∥x_{n+1} − x_n∥ exists from Lemma 3. Since the limit of ∥x_n − z∥ exists, there exists a converging subsequence {∥x_{n_k} − z∥} of {∥x_n − z∥} with ∥x_{n_k+1} − x_{n_k}∥ → 0. Thus ∥x_{n+1} − x_n∥ → 0 from Lemma 3. Further, ∥z_n − x_n∥ → 0 because x_{n+1} − x_n = (1 − α_n)(1 − β_n)(z_n − x_n), and ∥x_n − y_n∥ = (1 − β_n)∥x_n − z_n∥ → 0.
Next, we show that x* ∈ EP(ϕ) ∩ Ω, where x* is a weak cluster point of the sequence {x_n}. Note that {x_n} is bounded, so there exists a point x* ∈ C such that x_n ⇀ x*, and hence y_n ⇀ x* and z_n ⇀ x*. Also, since z_n = T_r x_n and ∥z_n − x_n∥ = ∥T_r x_n − x_n∥ → 0, we can see that x* ∈ Fix(T_r) from Lemma 5, which implies that x* ∈ EP(ϕ) according to Lemma 1.
On the other hand, it follows from the formula (48) and the weak lower semi-continuity of f and g that
0 ≤ f(x*) ≤ lim inf_{k→∞} f(y_{n_k}) = lim_{n→∞} f(y_n) = 0
and
0 ≤ g(x*) ≤ lim inf_{k→∞} g(y_{n_k}) = lim_{n→∞} g(y_n) = 0,
that is,
f(x*) = (1/2)∥(I − J_λ^{B_2})A x*∥^2 = 0, g(x*) = (1/2)∥(I − J_λ^{B_1}) x*∥^2 = 0,
and so we have A x* ∈ B_2^{-1}(0) and x* ∈ B_1^{-1}(0) from Remark 2. Therefore, x* ∈ EP(ϕ) ∩ Ω. This completes the proof. □
Remark 3.
If the operators B_1 and B_2 are set-valued, odd and maximal monotone mappings, then the operator J_λ^{B_1}(I − γ_n A*(I − J_λ^{B_2})A) is asymptotically regular (see Theorem 4.1 in [56] and Theorem 5 in [57]) and odd. Consequently, the strong convergence of Algorithm 1 is obtained (for a similar proof, see Theorem 1.1 in [58] and Theorem 4.3 in [36]).
Remark 4.
If we take γ_n ≡ γ, where γ ∈ (0, 2/L) is a constant which depends on the norm of the operator A, then the conclusion of Theorem 1 also holds.
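Here L can be taken as ∥A∥^2, since F(x) = A*(I − J_λ^{B_2})Ax is Lipschitz continuous with constant ∥A∥^2 (I − J_λ^{B_2} is nonexpansive). A short MATLAB sketch of this constant-step variant (our illustration, assuming L = ∥A∥^2):

% Constant step size gamma in (0, 2/L), with L = ||A||^2 the Lipschitz
% constant of F(x) = A^*(I - J_lambda^{B2})A x; norm(A) is the spectral norm.
A = [6 3 1; 8 7 5; 3 6 2];
L = norm(A)^2;       % requires prior knowledge of ||A||
gamma = 1.9 / L;     % any fixed value in (0, 2/L) works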

3.3. Strong Convergence Analysis for Algorithms 2 and 3

Theorem 2.
Let H_1, H_2 be two real Hilbert spaces and A : H_1 → H_2 be a bounded linear operator. Let ϕ be a bifunction satisfying the conditions (A1)–(A4). Assume that B_1 : H_1 → 2^{H_1} and B_2 : H_2 → 2^{H_2} are maximal monotone mappings with EP(ϕ) ∩ Ω ≠ ∅. If the sequence {α_n} in (0, 1) satisfies the following conditions:
lim_{n→∞} α_n = 0, Σ_{n=1}^∞ α_n = ∞,
then the sequence {x_n} generated by Algorithm 2 converges strongly to the point z = P_{EP(ϕ)∩Ω} x_0.
Proof. 
First, we show that the sequences {x_n} and {y_n} are bounded. Denote
u_n = J_λ^{B_1}(I − γ_n A*(I − J_λ^{B_2})A) y_n
and take p ∈ EP(ϕ) ∩ Ω. As in the proof of Theorem 1, we can see that ∥y_n − p∥ ≤ ∥x_n − p∥ and
∥x_{n+1} − p∥ = ∥α_n x_0 + (1 − α_n) u_n − p∥ ≤ α_n ∥x_0 − p∥ + (1 − α_n)∥x_n − p∥ ≤ max{∥x_0 − p∥, ∥x_n − p∥},
which implies that the sequence {x_n} is bounded and so is the sequence {y_n}.
Next, we show that lim sup_{n→∞} ⟨x_0 − z, x_n − z⟩ ≤ 0, where z = P_{EP(ϕ)∩Ω} x_0. Indeed, there exists a subsequence {x_{n_k}} of {x_n} such that
lim sup_{n→∞} ⟨x_0 − z, x_n − z⟩ = lim_{k→∞} ⟨x_0 − z, x_{n_k} − z⟩.
Since {x_n} is bounded, we may assume that {x_{n_k}} converges weakly to a point x*; then, according to (11), we can see that
lim sup_{n→∞} ⟨x_0 − z, x_n − z⟩ = lim_{k→∞} ⟨x_0 − z, x_{n_k} − z⟩ = ⟨x_0 − z, x* − z⟩ ≤ 0.
Now, we show that x_n → z. As in the proof of Theorem 1, the operator J_λ^{B_1}(I − γ_n A*(I − J_λ^{B_2})A) is averaged and hence nonexpansive. Thus it follows from (2) that
∥x_{n+1} − z∥^2 = ∥α_n(x_0 − z) + (1 − α_n)(u_n − z)∥^2 ≤ (1 − α_n)^2 ∥u_n − z∥^2 + 2 α_n ⟨x_0 − z, x_{n+1} − z⟩ ≤ (1 − 2α_n)∥u_n − z∥^2 + 2 α_n ⟨x_0 − z, x_{n+1} − z⟩ + α_n^2 ∥u_n − z∥^2 ≤ (1 − 2α_n)∥x_n − z∥^2 + θ_n, (59)
where θ_n = 2 α_n ⟨x_0 − z, x_{n+1} − z⟩ + α_n^2 ∥u_n − z∥^2. It is easy to see that lim sup_{n→∞} θ_n/α_n ≤ 0 and hence we have ∥x_n − z∥ → 0 from Lemma 2 and (59), which means that the sequence {x_n} converges strongly to z. This completes the proof. □
For the following strong convergence theorem for Algorithm 3, we recall the minimum-norm element of EP(ϕ) ∩ Ω, which is a solution of the following problem:
argmin{∥x∥ : x solves the problem EP (1) and the problem SVIPs (5) and (6)}.
Theorem 3.
Let H_1 and H_2 be two real Hilbert spaces and A : H_1 → H_2 be a bounded linear operator. Let ϕ be a bifunction satisfying the conditions (A1)–(A4). Assume that B_1 : H_1 → 2^{H_1} and B_2 : H_2 → 2^{H_2} are maximal monotone mappings with EP(ϕ) ∩ Ω ≠ ∅. If the sequences {α_n}, {β_n}, {τ_n} in (0, 1) with α_n + τ_n ≤ 1 satisfy the following conditions:
lim_{n→∞} τ_n = 0, inf_n (1 − α_n − τ_n) α_n > 0, Σ_{n=1}^∞ τ_n = ∞,
then the sequences {x_n} and {y_n} generated by Algorithm 3 converge strongly to the point z = P_{EP(ϕ)∩Ω}(0), the minimum-norm element of EP(ϕ) ∩ Ω.
Proof. 
We divide the proof into several steps.
Step 1. We show that the sequences {x_n} and {y_n} are bounded. Since EP(ϕ) ∩ Ω is not empty, take a point p ∈ EP(ϕ) ∩ Ω. Since the operator J_λ^{B_1}(I − γ_n A*(I − J_λ^{B_2})A) is nonexpansive and ∥y_n − p∥ ≤ ∥x_n − p∥, we have
∥x_{n+1} − p∥ = ∥(1 − α_n − τ_n) x_n + α_n J_λ^{B_1}(I − γ_n A*(I − J_λ^{B_2})A) y_n − p∥ ≤ (1 − α_n − τ_n)∥x_n − p∥ + α_n ∥y_n − p∥ + τ_n ∥p∥ ≤ (1 − τ_n)∥x_n − p∥ + τ_n ∥p∥ ≤ max{∥x_n − p∥, ∥p∥},
which implies that {x_n} is bounded and so is {y_n}.
Step 2. We show that ∥x_{n+1} − x_n∥ → 0 and x_n → z, where z = P_{EP(ϕ)∩Ω}(0) is the minimum-norm element of EP(ϕ) ∩ Ω. To this end, we denote
u_n = J_λ^{B_1}(I − γ_n A*(I − J_λ^{B_2})A) y_n.
For the point z = P_{EP(ϕ)∩Ω}(0), similarly as in the proof of Theorem 1, we have
∥u_n − z∥^2 ≤ ∥y_n − z∥^2 − ρ_n (4 f(y_n)/(f(y_n) + g(y_n)) − ρ_n) (f(y_n) + g(y_n))^2 / (∥F(y_n)∥^2 + ∥G(y_n)∥^2), (63)
which means, for any ε > 0 small enough, that
∥u_n − z∥ ≤ ∥y_n − z∥.
Setting v_n = (1 − α_n) x_n + α_n u_n, we have x_n − v_n = α_n(x_n − u_n) and so x_{n+1} = (1 − τ_n) v_n − τ_n α_n (x_n − u_n), (64)
∥v_n − z∥ = ∥(1 − α_n) x_n + α_n u_n − z∥ ≤ (1 − α_n)∥x_n − z∥ + α_n ∥u_n − z∥ ≤ ∥x_n − z∥
and
∥x_{n+1} − x_n∥ = ∥(1 − α_n − τ_n) x_n + α_n u_n − x_n∥ = ∥α_n(u_n − x_n) − τ_n x_n∥ ≤ α_n ∥u_n − x_n∥ + τ_n ∥x_n∥. (65)
Moreover, it follows from (64) that
∥x_{n+1} − z∥^2 = ∥(1 − τ_n) v_n − τ_n α_n (x_n − u_n) − z∥^2 = ∥(1 − τ_n)(v_n − z) − τ_n(α_n(x_n − u_n) + z)∥^2 ≤ (1 − τ_n)^2 ∥v_n − z∥^2 − 2 τ_n ⟨α_n(x_n − u_n) + z, x_{n+1} − z⟩ ≤ (1 − τ_n)^2 ∥x_n − z∥^2 + 2 τ_n α_n ⟨u_n − x_n, x_{n+1} − z⟩ + 2 τ_n ⟨−z, x_{n+1} − z⟩. (66)
On the other hand, since ∥y_n − z∥ ≤ ∥x_n − z∥, it follows from (10), (33), and (59) that
∥x_{n+1} − z∥^2 = ∥(1 − α_n − τ_n) x_n + α_n u_n − z∥^2 = ∥(1 − α_n − τ_n)(x_n − z) + α_n(u_n − z) + τ_n(−z)∥^2 ≤ (1 − α_n − τ_n)∥x_n − z∥^2 + α_n ∥u_n − z∥^2 + τ_n ∥z∥^2 − (1 − α_n − τ_n) α_n ∥u_n − x_n∥^2 ≤ (1 − α_n − τ_n)∥x_n − z∥^2 + α_n [∥y_n − z∥^2 − ρ_n (4 f(y_n)/(f(y_n) + g(y_n)) − ρ_n)(f(y_n) + g(y_n))^2 / (∥F(y_n)∥^2 + ∥G(y_n)∥^2)] + τ_n ∥z∥^2 − (1 − α_n − τ_n) α_n ∥u_n − x_n∥^2 ≤ (1 − τ_n)∥x_n − z∥^2 − ρ_n (4 f(y_n)/(f(y_n) + g(y_n)) − ρ_n) α_n (f(y_n) + g(y_n))^2 / (∥F(y_n)∥^2 + ∥G(y_n)∥^2) + τ_n ∥z∥^2 − (1 − α_n − τ_n) α_n ∥u_n − x_n∥^2 = ∥x_n − z∥^2 + τ_n (∥z∥^2 − ∥x_n − z∥^2) − ρ_n (4 f(y_n)/(f(y_n) + g(y_n)) − ρ_n) α_n (f(y_n) + g(y_n))^2 / (∥F(y_n)∥^2 + ∥G(y_n)∥^2) − (1 − α_n − τ_n) α_n ∥u_n − x_n∥^2,
which implies that
ρ_n (4 f(y_n)/(f(y_n) + g(y_n)) − ρ_n) α_n (f(y_n) + g(y_n))^2 / (∥F(y_n)∥^2 + ∥G(y_n)∥^2) + (1 − α_n − τ_n) α_n ∥u_n − x_n∥^2 ≤ ∥x_n − z∥^2 − ∥x_{n+1} − z∥^2 + τ_n (∥z∥^2 − ∥x_n − z∥^2). (68)
Now, we consider two possible cases for the convergence of the sequence {∥x_n − z∥^2}.
Case I. Assume that {∥x_n − z∥} is eventually nonincreasing, that is, there exists n_0 ≥ 0 such that, for each n ≥ n_0, ∥x_{n+1} − z∥ ≤ ∥x_n − z∥. Then lim_{n→∞} ∥x_n − z∥ exists and
lim_{n→∞} (∥x_{n+1} − z∥ − ∥x_n − z∥) = 0.
Since lim_{n→∞} τ_n = 0, it follows from (68) that
ρ_n (4 f(y_n)/(f(y_n) + g(y_n)) − ρ_n) α_n (f(y_n) + g(y_n))^2 / (∥F(y_n)∥^2 + ∥G(y_n)∥^2) → 0, (1 − α_n − τ_n) α_n ∥u_n − x_n∥^2 → 0.
Note that
lim inf_{n→∞} (1 − α_n − τ_n) α_n > 0
and F and G are Lipschitz continuous; thus, for any ε > 0 small enough, we obtain
lim_{n→∞} (f(y_n) + g(y_n))^2 = 0, lim_{n→∞} ∥u_n − x_n∥^2 = 0,
and so f(y_n) → 0, g(y_n) → 0 and ∥u_n − x_n∥ → 0 as n → ∞. Therefore, it follows from (65) that ∥x_{n+1} − x_n∥ → 0. From (11), (33) and (66), we have
∥x_{n+1} − z∥^2 = ∥(1 − α_n − τ_n) x_n + α_n u_n − z∥^2 = ∥(1 − τ_n)(x_n − z) − α_n(x_n − u_n) − τ_n z∥^2 ≤ (1 − τ_n)^2 ∥x_n − z∥^2 + 2 τ_n α_n ⟨u_n − x_n, x_{n+1} − z⟩ + 2 τ_n ⟨−z, x_{n+1} − z⟩ ≤ (1 − τ_n)∥x_n − z∥^2 + 2 τ_n [α_n ⟨u_n − x_n, x_{n+1} − z⟩ + ⟨−z, x_{n+1} − z⟩]. (70)
Since {x_n} is bounded, as in the proof of Theorem 1, the sequence {x_n} converges weakly to a point x* ∈ EP(ϕ) ∩ Ω, and the following inequality holds from the property (12):
lim sup_{n→∞} ⟨−z, x_{n+1} − z⟩ = max_{x̃ ∈ ω_w(x_n)} ⟨−z, x̃ − z⟩ ≤ 0.
Since τ_n → 0 and Σ_{n=1}^∞ τ_n = ∞, applying Lemma 2 to the formula (70) we can deduce that ∥x_n − z∥ → 0, that is, the sequence {x_n} converges strongly to z. Furthermore, it follows from the property of the metric projection that, for all p ∈ EP(ϕ) ∩ Ω,
⟨p − z, −z⟩ ≤ 0, that is, ∥z∥^2 ≤ ⟨z, p⟩ ≤ ∥z∥∥p∥,
which implies that z is the minimum-norm solution of the system of the problem EP (1) and the problem SVIPs (5) and (6).
Case II. Suppose that {∥x_n − z∥^2} is not eventually nonincreasing. In this case, it is easy to see that ∥u_n − x_n∥ → 0 from (68), because τ_n → 0, and so, from (65), we get ∥x_{n+1} − x_n∥ → 0.
Without loss of generality, we may assume that there exists a subsequence {∥x_{n_k} − z∥} of {∥x_n − z∥} such that ∥x_{n_k} − z∥ ≤ ∥x_{n_k+1} − z∥ for each k ≥ 1. In this case, if we define an indicator
σ(n) = max{m ≤ n : ∥x_m − z∥ ≤ ∥x_{m+1} − z∥},
then σ(n) → ∞ as n → ∞ and ∥x_{σ(n)} − z∥^2 ≤ ∥x_{σ(n)+1} − z∥^2, and so, from (68), it follows that
τ_{σ(n)} (∥z∥^2 − ∥x_{σ(n)} − z∥^2) ≥ ∥x_{σ(n)} − z∥^2 − ∥x_{σ(n)+1} − z∥^2 + τ_{σ(n)} (∥z∥^2 − ∥x_{σ(n)} − z∥^2) ≥ ρ_{σ(n)} (4 f(y_{σ(n)})/(f(y_{σ(n)}) + g(y_{σ(n)})) − ρ_{σ(n)}) α_{σ(n)} (f(y_{σ(n)}) + g(y_{σ(n)}))^2 / (∥F(y_{σ(n)})∥^2 + ∥G(y_{σ(n)})∥^2) + (1 − α_{σ(n)} − τ_{σ(n)}) α_{σ(n)} ∥u_{σ(n)} − x_{σ(n)}∥^2.
Since τ_{σ(n)} → 0 as n → ∞, similarly as in the proof of Case I, we get
lim_{n→∞} (f(y_{σ(n)}) + g(y_{σ(n)}))^2 = 0, lim_{n→∞} α_{σ(n)} ∥u_{σ(n)} − x_{σ(n)}∥^2 = 0,
lim sup_{n→∞} ⟨−z, x_{σ(n)+1} − z⟩ = max_{z̃ ∈ ω_w(x_{σ(n)})} ⟨−z, z̃ − z⟩ ≤ 0
and
∥x_{σ(n)+1} − z∥^2 ≤ (1 − τ_{σ(n)})∥x_{σ(n)} − z∥^2 + 2 τ_{σ(n)} [α_{σ(n)} ⟨u_{σ(n)} − x_{σ(n)}, x_{σ(n)+1} − z⟩ + ⟨−z, x_{σ(n)+1} − z⟩],
and so
∥x_{σ(n)} − z∥^2 ≤ 2 α_{σ(n)} ⟨u_{σ(n)} − x_{σ(n)}, x_{σ(n)+1} − z⟩ + 2⟨−z, x_{σ(n)+1} − z⟩.
Combining the last two displayed relations yields
lim sup_{n→∞} ∥x_{σ(n)} − z∥^2 ≤ 0
and hence
lim_{n→∞} ∥x_{σ(n)} − z∥^2 = 0.
From this and the preceding inequality, we can see that
lim sup_{n→∞} ∥x_{σ(n)+1} − z∥^2 = lim sup_{n→∞} ∥x_{σ(n)} − z∥^2 = 0.
Thus lim_{n→∞} ∥x_{σ(n)+1} − z∥^2 = 0. Therefore, according to Lemma 4, we have
0 ≤ ∥x_n − z∥^2 ≤ max{∥x_{σ(n)} − z∥^2, ∥x_n − z∥^2} ≤ ∥x_{σ(n)+1} − z∥^2 → 0,
which implies that the sequence {x_n} converges strongly to z, the minimum-norm element of EP(ϕ) ∩ Ω. This completes the proof. □
Corollary 1.
Let H_1, H_2 be two real Hilbert spaces and A : H_1 → H_2 be a bounded linear operator. Let ϕ be a bifunction satisfying the conditions (A1)–(A4). Assume that B_1 : H_1 → 2^{H_1} and B_2 : H_2 → 2^{H_2} are maximal monotone mappings with EP(ϕ) ∩ Ω ≠ ∅. If the sequence {α_n} in (0, 1) satisfies the following conditions:
lim_{n→∞} α_n = 0, Σ_{n=1}^∞ α_n = ∞
and, for any u ∈ H_1, the sequence {x_n} is generated by the following iterations:
ϕ(z_n, y) + (1/r)⟨y − z_n, z_n − x_n⟩ ≥ 0 for all y ∈ C, y_n = β_n x_n + (1 − β_n) z_n, x_{n+1} = α_n u + (1 − α_n) J_λ^{B_1}(I − γ_n A*(I − J_λ^{B_2})A) y_n,
where γ_n is defined as in the above algorithms, then the sequence {x_n} converges strongly to the point z = P_{EP(ϕ)∩Ω} u.
Remark 5.
If the bifunction is ϕ(x, y) = ⟨Bx, y − x⟩, then the problem EP (1) is equivalent to the following problem:
Find a point x ∈ C such that ⟨Bx, y − x⟩ ≥ 0 for all y ∈ C,
where B : C → H is a nonlinear operator. This is the well-known classical variational inequality problem and, clearly, we can obtain a common solution of the variational inequality problem and the split variational inclusion problem via the above algorithms.
Remark 6.
If the bifunction ϕ(x, y) = 0, B_1 = I − U, B_2 = I − T, and C and Q are N-dimensional and M-dimensional Euclidean spaces, respectively, then the problem proposed in this paper reduces to the split common fixed point problem of Censor and Segal [48], where U and T are directed operators.

4. Applications to Fixed Points and Split Convex Optimization Problems

In this section, we consider the fixed point problem and the split convex optimization problem.
A mapping T : D(T) ⊂ H → H is called pseudo-contractive if the following inequality holds: for any x, y ∈ D(T),
⟨Tx − Ty, x − y⟩ ≤ ∥x − y∥^2.
It is well known that the class of pseudo-contractive mappings includes nonexpansive mappings and that B = I − T is a monotone mapping. The following lemma is very useful for obtaining a fixed point of a pseudo-contractive mapping.
Lemma 7.
(see [2]) Let C be a nonempty closed convex subset of a real Hilbert space H. Let T : C → C be a continuous pseudo-contractive mapping and define a mapping T_r as follows: for any x ∈ H and r ∈ (0, ∞),
T_r(x) = {z ∈ C : ⟨y − z, Tz⟩ − (1/r)⟨y − z, (1 + r)z − x⟩ ≤ 0 for all y ∈ C}.
Then the following hold:
(1) 
T_r is a single-valued mapping.
(2) 
T_r is a nonexpansive mapping.
(3) 
Fix(T_r) = Fix(T) is closed and convex.
If h and l are two proper, convex and lower semi-continuous functions, define ∂h (the subdifferential mapping of h) as follows:
∂h(x) = {z : h(x) + ⟨y − x, z⟩ ≤ h(y) for all y ∈ C}.
From Rockafellar [59], ∂h is a maximal monotone mapping. The split convex optimization problem is formulated as follows:
Find a point x ∈ H_1 such that 0 ∈ ∂h(x) and 0 ∈ ∂l(Ax).
Denote the solution set of the split convex optimization problem by
Ω = {x ∈ H_1 : 0 ∈ ∂h(x), 0 ∈ ∂l(Ax)}.
Now, we show the existence of a common element of the fixed point set of a pseudo-contractive mapping and the solution set of the split convex minimization problem as follows:
Theorem 4.
Let H_1, H_2 be two real Hilbert spaces and A : H_1 → H_2 be a bounded linear operator. Let T be a pseudo-contractive mapping. Assume that h : H_1 → ℝ and l : H_2 → ℝ are two proper, convex and lower semi-continuous functions such that ∂h and ∂l are maximal monotone mappings with Fix(T) ∩ Ω ≠ ∅. If the sequences {α_n}, {β_n} in (0, 1) satisfy the following conditions:
lim_{n→∞} α_n = 0, Σ_{n=1}^∞ |β_n − β_{n−1}| < ∞, Σ_{n=1}^∞ |α_n − α_{n−1}| < ∞
and, for any λ > 0 and r > 0, the sequence {x_n} is generated by the following iterations:
⟨y − z_n, Tz_n⟩ − (1/r)⟨y − z_n, (1 + r)z_n − x_n⟩ ≤ 0 for all y ∈ C, y_n = β_n x_n + (1 − β_n) z_n, x_{n+1} = α_n x_n + (1 − α_n) J_λ^{∂h}(I − γ_n A*(I − J_λ^{∂l})A) y_n,
where γ_n is defined as in Algorithm 1, then the sequence {x_n} converges weakly to an element of Fix(T) ∩ Ω.
Proof. 
Take ϕ(x, y) = ⟨(I − T)x, y − x⟩. Then the relation ⟨y − z_n, Tz_n⟩ − (1/r)⟨y − z_n, (1 + r)z_n − x_n⟩ ≤ 0 is equivalent to ϕ(z_n, y) + (1/r)⟨y − z_n, z_n − x_n⟩ ≥ 0. Thus, according to Theorem 1, the conclusion follows. □
Theorem 5.
Let H_1, H_2 be two real Hilbert spaces and A : H_1 → H_2 be a bounded linear operator. Let T be a pseudo-contractive mapping. Assume that h : H_1 → ℝ and l : H_2 → ℝ are two proper, convex and lower semi-continuous functions such that ∂h and ∂l are maximal monotone mappings with Fix(T) ∩ Ω ≠ ∅. If the sequence {α_n} in (0, 1) satisfies the following conditions:
lim_{n→∞} α_n = 0, Σ_{n=1}^∞ α_n = ∞
and, for any x_0 ∈ H_1, the sequence {x_n} is generated by the following iterations:
⟨y − z_n, Tz_n⟩ − (1/r)⟨y − z_n, (1 + r)z_n − x_n⟩ ≤ 0 for all y ∈ C, y_n = β_n x_n + (1 − β_n) z_n, x_{n+1} = α_n x_0 + (1 − α_n) J_λ^{∂h}(I − γ_n A*(I − J_λ^{∂l})A) y_n,
where γ_n is defined as in Algorithm 2, then the sequence {x_n} converges strongly to the point z = P_{Fix(T)∩Ω} x_0.
Theorem 6.
Let H_1, H_2 be two real Hilbert spaces and A : H_1 → H_2 be a bounded linear operator. Let T be a pseudo-contractive mapping. Assume that h : H_1 → ℝ and l : H_2 → ℝ are two proper, convex and lower semi-continuous functions such that ∂h and ∂l are maximal monotone mappings with Fix(T) ∩ Ω ≠ ∅. If the sequences {α_n}, {τ_n} in (0, 1) satisfy the following conditions:
lim_{n→∞} τ_n = 0, inf_n (1 − α_n − τ_n) α_n > 0, Σ_{n=1}^∞ τ_n = ∞
and, for any λ > 0 and r > 0, the sequence {x_n} is generated by the following iterations:
⟨y − z_n, Tz_n⟩ − (1/r)⟨y − z_n, (1 + r)z_n − x_n⟩ ≤ 0 for all y ∈ C, y_n = β_n x_n + (1 − β_n) z_n, x_{n+1} = (1 − α_n − τ_n) x_n + α_n J_λ^{∂h}(I − γ_n A*(I − J_λ^{∂l})A) y_n,
where γ_n is defined as in Algorithm 3, then the sequence {x_n} converges strongly to the point z = P_{Fix(T)∩Ω}(0).

5. Numerical Examples

In this section, we present some examples to illustrate the applicability, efficiency and stability of our self-adaptive step size iterative algorithms. All the codes were written in Matlab R2016b (MathWorks, Natick, MA, USA) and run on an LG dual-core personal computer (LG Display, Seoul, Korea).

5.1. Numerical Behavior of Algorithm 1

Example 1.
Let H_1 = H_2 = ℝ and define the operators A, B_1, and B_2 on the real line ℝ by Ax = 3x, B_1x = 2x, B_2x = 4x for all x ∈ ℝ, and the bifunction ϕ by ϕ(x, y) = −3x^2 + xy + 2y^2. Set the parameters in Algorithm 1 as ρ_n = 3 − 1/(n+1), α_n = 1/(n+1), β_n = 1/(n+1)^2 for each n ≥ 1. It is clear that
Σ_{n=1}^∞ |α_n − α_{n−1}| < ∞, Σ_{n=1}^∞ |β_n − β_{n−1}| < ∞, EP(ϕ) ∩ Ω = {0}.
From the definition of ϕ, we have
0 ≤ ϕ(z_n, y) + (1/r)⟨y − z_n, z_n − x_n⟩ = −3z_n^2 + y z_n + 2y^2 + (1/r)(y − z_n)(z_n − x_n)
and then
0 ≤ r(−3z_n^2 + y z_n + 2y^2) + z_n y − z_n^2 − x_n y + x_n z_n.
Since this quadratic function of y must be nonnegative for all y ∈ ℝ, its discriminant satisfies Δ ≤ 0; here Δ = (z_n + 5 z_n r − x_n)^2 is a perfect square, so Δ = 0, that is, z_n = x_n/(1 + 5r). According to Algorithm 1, if ∥F(y_n)∥^2 + ∥G(y_n)∥^2 ≠ 0, then we compute the new iterate x_{n+1}, and the iterative process is written as
z_n = x_n/(1 + 5r), y_n = β_n x_n + (1 − β_n) z_n, x_{n+1} = α_n x_n + (1 − α_n) J_λ^{B_1}(I − γ_n A*(I − J_λ^{B_2})A) y_n.
In this way, the step size γ_n is self-adaptive and not given beforehand.
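A minimal MATLAB sketch of this scalar recursion follows (the resolvents of B_1x = 2x and B_2x = 4x are t/(1 + 2λ) and t/(1 + 4λ); the choice λ = 2, r = 0.5 appears to match the x_0 = 40 column of Table 1):

% Example 1: scalar split variational inclusion + equilibrium problem.
lambda = 2; r = 0.5; x = 40;
J1 = @(t) t/(1 + 2*lambda);             % resolvent of B1 x = 2x
J2 = @(t) t/(1 + 4*lambda);             % resolvent of B2 x = 4x
f = @(t) 0.5*(3*t - J2(3*t))^2;  g = @(t) 0.5*(t - J1(t))^2;
F = @(t) 3*(3*t - J2(3*t));      G = @(t) t - J1(t);
for n = 1:20
    alpha = 1/(n+1); beta = 1/(n+1)^2; rho = 3 - 1/(n+1);
    z = x/(1 + 5*r);                    % closed-form T_r step derived above
    y = beta*x + (1 - beta)*z;
    den = F(y)^2 + G(y)^2;
    if den == 0, break; end             % x is already a solution
    gam = rho*(f(y) + g(y))/den;        % self-adaptive step size
    x = alpha*x + (1 - alpha)*J1(y - gam*F(y));
end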
First, we test three cases λ = 0.01, 0.5, 2 and r = 0.01, 0.5, 2 for the initial point x_0 = 1, and then we test three initial points x_0 randomly generated by Matlab for fixed λ and r. The values of {y_n} and {x_n} are reported in Figure 1, Figure 2 and Table 1. In these figures, the x-axes represent the number of iterations while the y-axes represent the values of x_n and y_n; the stopping criterion in Figure 1 is ∥x_{n+1} − y_n∥ ≤ 10^{-6}. These figures show that the iterates x_n of Algorithm 1 converge to the same solution in all cases, namely 0 ∈ EP(ϕ) ∩ Ω, the solution of this example.
We can summarize the following observations from Figure 1, Figure 2 and Table 1:
(1)
The results presented in Figure 1, Figure 2 and Table 1 imply that Algorithm 1 converges to the same solution;
(2)
Algorithm 1 converges quickly and is efficient, stable and simple to implement. The number of iterations remains almost the same irrespective of the initial point x_0 and the parameters λ, r;
(3)
The error ∥x_{n+1} − y_n∥ becomes approximately equal to 10^{-15}, or even smaller, within 20 iterations.

5.2. Numerical Behavior of Algorithms 2 and 3

Example 2.
Let H_1 = H_2 = ℝ^3 and define the operators A, B_1, and B_2 as follows:
A = [6 3 1; 8 7 5; 3 6 2], B_1 = diag(6, 4, 3), B_2 = diag(7, 5, 2),
and the bifunction ϕ by ϕ(x, y) = −3∥x∥^2 + ⟨x, y⟩ + 2∥y∥^2. In this example, we set the parameters in Algorithms 2 and 3 by α_n = n/(n+1), ρ_n = 3 − 1/(n+1), τ_n = 1/(n+1)^2, and β_n = 1/(n+1) for each n ≥ 1.
First, we take the initial point x_0 = (13, 12, 25); the test results are reported in Figure 3.
Next, we present the test results for the initial point x_0 = (1, 1, 2) in Table 2.
From Table 2 and Figure 3, one can see that the convergence rate of Algorithm 3 is faster than that of Algorithm 2 and that (0, 0, 0) ∈ EP(ϕ) ∩ Ω is the minimum-norm solution in this experiment.

5.3. Comparisons with Other Algorithms

In this part, we present several experiments comparing our Algorithms 1 and 3 with other algorithms. The two methods used for comparison are the algorithm in Byrne et al. [36] and the algorithm in Sitthithakerngkiet et al. [46]. The step size is fixed at γ = 0.001 for the algorithms in Byrne et al. [36] and Sitthithakerngkiet et al. [46], and it depends on the norm of the operator A. In addition, let the mappings S_n : ℝ^3 → ℝ^3 be an infinite family of nonexpansive mappings S_n(x) = x/2^n, let ζ_n = 1 be a nonnegative real sequence, let W_n = 1/(2^n(n+1)^2) be the W-mapping generated by S_n and ζ_n, and let the bounded linear operator be D = I in Sitthithakerngkiet et al. [46]. We choose the stopping criterion ∥x_{n+1} − y_n∥ ≤ DOL for our algorithms, ∥x_{n+1} − x_n∥ ≤ DOL for Byrne et al. [36], and ∥x_{n+1} − y_n∥ ≤ DOL for Sitthithakerngkiet et al. [46]. For the three algorithms, the operators A, B_1, B_2 are defined as in Example 2; the parameters are α_n = 10^{-3}/n, β_n = 0.5 − 1/(10n+2) and u = (0, 0, 1) in Sitthithakerngkiet et al. [46]; ρ_n = 3 − 1/(n+1), α_n = 1/(n+1) and β_n = 1/(10n+2) in our Algorithm 1; and ρ_n = 3 − 1/(n+1), α_n = n/(n+1) and β_n = 1/(10n+2) in our Algorithm 3. We take λ = 1, x_0 = (13, 12, 25) for all these algorithms and compare the numbers of iterations and the computing times. The experimental results are reported in Table 3.
From all the above figures and tables, one can observe the behavior of the sequences {x_n} and {y_n}: both converge to a solution, and our algorithms are fast, efficient, stable and simple to implement (they take an average of 10 iterations to converge). In particular, one can see that our Algorithms 1 and 3 have a competitive advantage. However, as mentioned in the previous sections, the main advantage of our algorithms is that the step size is self-adaptive and requires no prior knowledge of the operator norms.

5.4. Compressed Sensing

In the last example, we consider a problem from the field of compressed sensing, namely the recovery of a sparse and noisy signal from a limited number of samples. Let x_0 ∈ ℝ^n be a K-sparse signal, K ≪ n. The sampling matrix A ∈ ℝ^{m×n}, m < n, is generated from the standard Gaussian distribution, and the vector b = Ax_0 + ϵ, where ϵ is additive noise; when ϵ = 0, there is no noise in the observed data. Our task is to recover the signal x_0 from the data b. For further details, one can consult Nguyen and Shin [34].
For solving the problem, we recall the LASSO (Least Absolute Shrink Select Operator) problem Tibshirani [60] as follows:
min_{x∈ℝ^n} (1/2)∥Ax − b∥_2^2 subject to ∥x∥_1 ≤ t,
where t > 0 is a given constant. In relation to the problem SVIPs (5) and (6), we consider B_1^{-1}(0) = {x : ∥x∥_1 ≤ t} and B_2^{-1}(0) = {b} and define
B_1(x) = {u : sup_{∥y∥_1 ≤ t} ⟨y − x, u⟩ ≤ 0} if x ∈ C, and B_1(x) = ∅ otherwise,
and define
B_2(y) = H_2 if y = b, and B_2(y) = ∅ otherwise.
In this example, we take h(x) = (1/2)∥Ax − b∥^2 and ϕ(x, y) = h(y) − h(x); then the problem EP (1) is equivalent to the following problem:
min_{x∈ℝ^n} (1/2)∥Ax − b∥_2^2.
For the experimental setting, we choose the following parameters in our Algorithm 3: α_i = (i − 1)/(i + 1), τ_i = 1/(i + 1), β_i = (2i − 1)/(2i + 1), ρ_i = 3 − 1/(i + 1); A ∈ ℝ^{m×n} is generated randomly with m = 2^{10}, n = 2^{12}; and x_0 ∈ ℝ^n has K spikes with amplitude ±1 distributed randomly over the whole domain. In addition, for simplicity, we take t = K and the stopping criterion ∥x_{i+1} − x_i∥ ≤ DOL with DOL = 10^{-6}. All the numerical results are presented in Figure 4 and Figure 5.
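In this setting, the resolvent of B_1 (the normal-cone operator of the ℓ1-ball C) is exactly the Euclidean projection onto C, for which a standard sort-and-threshold routine can be used. A MATLAB sketch (our own helper in the style of Duchi et al.'s projection algorithm, not the paper's code):

% Euclidean projection onto the l1-ball {x : norm(x,1) <= t}; this equals
% the resolvent J_lambda^{B1} for the normal-cone operator B1 above.
% Assumes x is a column vector.
function p = projL1Ball(x, t)
    if norm(x, 1) <= t
        p = x; return;                      % already feasible
    end
    u  = sort(abs(x), 'descend');
    cs = cumsum(u);
    k  = find(u > (cs - t) ./ (1:numel(u))', 1, 'last');
    theta = (cs(k) - t) / k;                % soft-threshold level
    p = sign(x) .* max(abs(x) - theta, 0);  % shrink toward the ball
end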

6. Conclusions

A wide range of problems in finance, physics, network analysis, signal processing, image reconstruction, economics, and optimization can be reduced to finding a common solution of split inverse and equilibrium problems, which implies numerous possible applications to mathematical models whose constraints can be presented as the problem EP (1) and a split inverse problem.
This motivated the study of a common solution set of split variational inclusion problems and equilibrium problems.
The main contribution of this paper is a new self-adaptive step size algorithm, requiring no prior knowledge of the operator norms, for solving the split variational inclusion and equilibrium problems in Hilbert spaces. Convergence theorems are obtained under suitable assumptions, and numerical examples and comparisons are presented to illustrate the efficiency and reliability of the algorithms. In one sense, the algorithms and theorems in this paper complement, extend, and unify some related results on the split variational inclusion and equilibrium problems.

Author Contributions

The authors contributed equally to this work. Both authors read and approved the final manuscript.

Funding

The first author was funded by the National Science Foundation of China (11471059) and Science and Technology Research Project of Chongqing Municipal Education Commission (KJ 1706154) and the Research Project of Chongqing Technology and Business University (KFJJ2017069).

Acknowledgments

The authors express their deep gratitude to the referee and the editor for their valuable comments and suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Blum, E.; Oettli, W. From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63, 123–145. [Google Scholar]
  2. Flam, D.S.; Antipin, A.S. Equilibrium programming using proximal-like algorithms. Math. Program. 1997, 78, 29–41. [Google Scholar] [CrossRef]
  3. Moudafi, A. Second-order differential proximal methods for equilibrium problems. J. Inequal. Pure Appl. Math. 2003, 4, 18. [Google Scholar]
  4. Bnouhachem, A.; Suliman, A.H.; Ansari, Q.H. An iterative method for common solution of equilibrium problems and hierarchical fixed point problems. Fixed Point Theory Appl. 2014, 2014, 194. [Google Scholar] [CrossRef]
  5. Konnov, I.V.; Ali, M.S.S. Descent methods for monotone equilibrium problems in Banach spaces. J. Comput. Appl. Math. 2006, 188, 165–179. [Google Scholar] [CrossRef]
  6. Konnov, I.V.; Pinyagina, O.V. D-gap functions and descent methods for a class of monotone equilibrium problems. Lobachevskii J. Math. 2003, 13, 57–65. [Google Scholar]
  7. Charitha, C. A Note on D-gap functions for equilibrium problems. Optimization 2013, 62, 211–226. [Google Scholar] [CrossRef]
  8. Lorenzo, D.D.; Passacantando, M.; Sciandrone, M. A convergent inexact solution method for equilibrium problems. Optim. Methods Softw. 2014, 29, 979–991. [Google Scholar] [CrossRef]
  9. Ceng, L.C.; Ansari, Q.H.; Yao, J.C. Some iterative methods for finding fixed point and for solving constrained convex minimization problems. Nonlinear Anal. 2011, 74, 5286–5302. [Google Scholar] [CrossRef]
  10. Yao, Y.; Cho, Y.J.; Liou, Y.C. Iterative algorithm for hierarchical fixed points problems and variational inequalities. Math. Comput. Model. 2010, 52, 1697–1705. [Google Scholar] [CrossRef]
  11. Yao, Y.; Cho, Y.J.; Liou, Y.C. Algorithms of common solutions for variational inclusions, mixed equilibrium problems and fixed point problems. Eur. J. Oper. Res. 2011, 212, 242–250. [Google Scholar] [CrossRef]
  12. Yao, Y.H.; Liou, Y.C.; Yao, J.C. New relaxed hybrid-extragradient method for fixed point problems, a general system of variational inequality problems and generalized mixed equilibrium problems. Optimization 2011, 60, 395–412. [Google Scholar] [CrossRef]
  13. Qin, X.; Chang, S.S.; Cho, Y.J. Iterative methods for generalized equilibrium problems and fixed point problems with applications. Nonlinear Anal. Real World Appl. 2010, 11, 2963–2972. [Google Scholar] [CrossRef]
  14. Qin, X.; Cho, Y.J.; Kang, S.M. Viscosity approximation methods for generalized equilibrium problems and fixed point problems with applications. Nonlinear Anal. 2010, 72, 99–112. [Google Scholar] [CrossRef]
  15. Hung, P.G.; Muu, L.D. For an inexact proximal point algorithm to equilibrium problem. Vietnam J. Math. 2012, 40, 255–274. [Google Scholar]
  16. Quoc, T.D.; Muu, L.D.; Nguyen, V.H. Extragradient algorithms extended to equilibrium problems. Optimization 2008, 57, 749–776. [Google Scholar] [CrossRef]
  17. Santos, P.; Scheimberg, S. An inexact subgradient algorithm for equilibrium problems. Comput. Appl. Math. 2011, 30, 91–107. [Google Scholar]
  18. Thuy, L.Q.; Anh, P.K.; Muu, L.D.; Hai, T.N. Novel hybrid methods for pseudomonotone equilibrium problems and common fixed point problems. Numer. Funct. Anal. Optim. 2017, 38, 443–465. [Google Scholar] [CrossRef]
  19. Rockafellar, R.T. Monotone operators and proximal point algorithm. SIAM J. Control Optim. 1976, 14, 877–898. [Google Scholar] [CrossRef]
  20. Moudafi, A. From alternating minimization algorithms and systems of variational inequalities to equilibrium problems. Commun. Appl. Nonlinear Anal. 2009, 16, 31–35. [Google Scholar]
  21. Moudafi, A. Proximal point algorithm extended to equilibrium problem. J. Nat. Geometry 1999, 15, 91–100. [Google Scholar]
  22. Muu, L.D.; Otelli, W. Convergence of an adaptive penalty scheme for finding constrained equilibria. Nonlinear Anal. 1992, 18, 1159–1166. [Google Scholar] [CrossRef]
  23. Dong, Q.L.; Tang, Y.C.; Cho, Y.J.; Rassias, T.M. “Optimal” choice of the step length of the projection and contraction methods for solving the split feasibility problem. J. Glob. Optim. 2018, 71, 341–360. [Google Scholar] [CrossRef]
  24. Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8, 221–239. [Google Scholar] [CrossRef]
  25. Ansari, Q.H.; Rehan, A. Split feasibility and fixed point problems. In Nonlinear Analysis, Approximation Theory, Optimization and Applications; Springer: Berlin/Heidelberg, Germany, 2014; pp. 281–322. [Google Scholar]
  26. Xu, H.K. Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66, 240–256. [Google Scholar] [CrossRef]
  27. Qin, X.; Shang, M.; Su, Y. A general iterative method for equilibrium problems and fixed point problems in Hilbert spaces. Nonlinear Anal. 2008, 69, 3897–3909. [Google Scholar] [CrossRef]
  28. Ceng, L.C.; Ansari, Q.H.; Yao, J.C. An extragradient method for solving split feasibility and fixed point problems. Comput. Math. Appl. 2012, 64, 633–642. [Google Scholar] [CrossRef]
  29. Censor, Y.; Bortfeld, T.; Martin, B.; Trofimov, A. A unified approach for inversion problems in intensity modulated radiation therapy. Phys. Med. Biol. 2003, 51, 2353–2365. [Google Scholar] [CrossRef]
  30. Kazmi, K.R.; Rizvi, S.H. An iterative method for split variational inclusion problem and fixed point problem for a nonexpansive mapping. Optim. Lett. 2014, 8, 1113–1124. [Google Scholar] [CrossRef]
  31. Censor, Y.; Gibali, A.; Reich, S. Algorithms for the split variational inequality problem. Numer. Algorithms 2012, 59, 301–323. [Google Scholar] [CrossRef]
  32. Moudafi, A. The split common fixed point problem for demicontractive mappings. Inverse Probl. 2010, 26, 587–600. [Google Scholar] [CrossRef]
  33. Moudafi, A. Split monotone variational inclusions. J. Optim. Theory Appl. 2011, 150, 275–283. [Google Scholar] [CrossRef]
  34. Nguyen, T.L.; Shin, Y. Deterministic sensing matrices in compressive sensing: A survey. Sci. World J. 2013, 2013, 1–6. [Google Scholar] [CrossRef]
  35. Dang, Y.; Gao, Y. The strong convergence of a KM-CQ-like algorithm for a split feasibility problem. Inverse Probl. 2011, 27, 015007. [Google Scholar] [CrossRef]
  36. Byrne, C.; Censor, Y.; Gibali, A.; Reich, S. Weak and strong convergence of algorithms for the split common null point problem. J. Nonlinear Convex Anal. 2012, 13, 759–775. [Google Scholar]
  37. Byrne, C. Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18, 441–453. [Google Scholar] [CrossRef]
  38. Byrne, C. A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20, 103–120. [Google Scholar] [CrossRef]
  39. Yang, Q. The relaxed CQ algorithm for solving the split feasibility problem. Inverse Probl. 2004, 20, 1261–1266. [Google Scholar] [CrossRef]
  40. Gibali, A.; Mai, D.T.; Nguyen, T.V. A new relaxed CQ algorithm for solving split feasibility problems in Hilbert spaces and its applications. J. Ind. Manag. Optim. 2018, 2018, 1–25. [Google Scholar]
  41. López, G.; Martin-Marquez, V.; Xu, H.K. Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 2012, 28, 085004. [Google Scholar] [CrossRef]
  42. Moudafi, A.; Thukur, B.S. Solving proximal split feasibility problem without prior knowledge of matrix norms. Optim. Lett. 2013, 8. [Google Scholar] [CrossRef]
  43. Gibali, A. A new split inverse problem and application to least intensity feasible solutions. Pure Appl. Funct. Anal. 2017, 2, 243–258. [Google Scholar]
  44. Plubtieng, S.; Sombut, K. Weak convergence theorems for a system of mixed equilibrium problems and nonspreading mappings in a Hilbert space. J. Inequal. Appl. 2010, 2010, 246237. [Google Scholar] [CrossRef]
  45. Sombut, K.; Plubtieng, S. Weak convergence theorem for finding fixed points and solution of split feasibility and systems of equilibrium problems. Abstr. Appl. Anal. 2013, 2013, 430409. [Google Scholar] [CrossRef]
  46. Sitthithakerngkiet, K.; Deepho, J.; Martinez-Moreno, J.; Kuman, P. Convergence analysis of a general iterative algorithm for finding a common solution of split variational inclusion and optimization problems. Numer. Algorithms 2018, 79, 801–824. [Google Scholar] [CrossRef]
  47. Eslamian, M.; Fakhri, A. Split equality monotone variational inclusions and fixed point problem of set-valued operator. Acta Univ. Sapientiae Math. 2017, 9, 94–121. [Google Scholar] [CrossRef]
  48. Censor, Y.; Segal, A. The split common fixed point problem for directed operators. J. Convex Anal. 2009, 16, 587–600. [Google Scholar]
  49. Plubtieng, S.; Sriprad, W. A viscosity approximation method for finding common solutions of variational inclusions, equilibrium problems, and fixed point problems in Hilbert spaces. Fixed Point Theory Appl. 2009, 2009, 567147. [Google Scholar] [CrossRef]
  50. Mainge, P.E. Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 2008, 16, 899–912. [Google Scholar] [CrossRef]
  51. Bruck, R.E.; Reich, S. Nonexpansive projections and resolvents of accretive operators in Banach spaces. Houst. J. Math. 1977, 3, 459–470. [Google Scholar]
  52. Mainge, P.E. Approximation methods for common fixed points of nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 2007, 325, 469–479. [Google Scholar] [CrossRef]
  53. Osilike, M.O.; Aniagbosor, S.C.; Akuchu, G.B. Fixed point of asymptotically demicontractive mappings in arbitrary Banach spaces. PanAm. Math. J. 2002, 12, 77–88. [Google Scholar]
  54. Goebel, K.; Kirk, W.A. Topics in Metric Fixed Point Theory; Cambridge University Press: Cambridge, UK, 1990. [Google Scholar]
  55. Aubin, J.P. Optima and Equilibria: An Introduction to Nonlinear Analysis; Springer: Berlin/Heidelberg, Germany, 1993. [Google Scholar]
  56. Ishikawa, S. Fixed points and iteration of a nonexpansive mapping in Banach space. Proc. Am. Math. Soc. 1976, 59, 65–71. [Google Scholar] [CrossRef]
  57. Browder, F.E.; Petryshyn, W.V. The solution by iteration of nonlinear functional equations in Banach spaces. Bull. Am. Math. Soc. 1966, 72, 571–575. [Google Scholar] [CrossRef]
  58. Baillon, J.B.; Bruck, R.E.; Reich, S. On the asymptotic behavior of nonexpansive mappings and semigroups in Banach spaces. Houst. J. Math. 1978, 4, 1–9. [Google Scholar]
  59. Rockafellar, R.T. On the maximal monotonicity of subdifferential mappings. Pac. J. Math. 1970, 33, 209–216. [Google Scholar] [CrossRef]
  60. Tibshirani, R. Regression shrinkage and selection via the Lasso. J. R. Stat. Soc. 1996, 58, 267–288. [Google Scholar] [CrossRef]
Figure 1. Values of x_n and y_n for different λ and r.
Figure 2. Values of ∥x_{n+1} − y_n∥ and x_n.
Figure 3. Values of x_n, y_n, z_n for Algorithms 2 and 3.
Figure 4. Numerical result for K = 50.
Figure 5. Numerical result for K = 40.
Table 1. The convergence of Algorithm 1.
n | z_n (x_0 = 40) | y_n | x_n | z_n (x_0 = 50) | y_n | x_n
0 | 11.4286 | 18.5714 | 40 | 14.2857 | 23.2143 | 50
1 | 5.6086 | 7.1666 | 19.6302 | 7.0108 | 8.9582 | 24.5378
2 | 1.7934 | 2.0736 | 6.2767 | 2.2417 | 2.5920 | 7.8459
3 | 0.4200 | 0.620 | 1.4700 | 0.5250 | 0.5775 | 1.8374
4 | 0.0821 | 0.0768 | 0.2686 | 0.0959 | 0.1026 | 0.3358
5 | 0.0114 | 0.0120 | 0.0399 | 0.0142 | 0.0150 | 0.0498
6 | 0.0015 | 0.0014 | 0.0049 | 0.0018 | 0.0018 | 0.0062
7 | 1.4847 × 10^{-4} | 1.5305 × 10^{-4} | 5.1964 × 10^{-4} | 1.8559 × 10^{-4} | 1.9131 × 10^{-4} | 6.4955 × 10^{-4}
8 | 1.3498 × 10^{-5} | 1.3836 × 10^{-5} | 4.7245 × 10^{-5} | 1.6873 × 10^{-5} | 1.7295 × 10^{-5} | 5.9056 × 10^{-5}
9 | 1.0716 × 10^{-6} | 1.0938 × 10^{-6} | 3.7507 × 10^{-6} | 1.3396 × 10^{-6} | 1.3672 × 10^{-6} | 4.6884 × 10^{-6}
Table 2. The convergence of Algorithm 3.
n | x_n | y_n | ∥x_n∥ | ∥y_n∥
0 | (1, 1, 2) | (0.6429, 0.6429, 1.2857) | 2.4490 | 1.5747
1 | (0.2561, 0.3074, 0.5742) | (0.1341, 0.1610, 0.3008) | 0.6998 | 0.3666
2 | (0.0582, 0.0866, 0.1509) | (0.0270, 0.0402, 0.0701) | 0.1835 | 0.0852
3 | (0.0111, 0.0211, 0.0345) | (0.0048, 0.0090, 0.0148) | 0.0419 | 0.0180
4 | (0.0018, 0.0045, 0.0069) | (0.0007, 0.0018, 0.0028) | 0.0084 | 0.0034
5 | (0.0002, 0.0008, 0.0012) | (9.17 × 10^{-5}, 3.274 × 10^{-4}, 4.805 × 10^{-4}) | 0.0015 | 5.8868 × 10^{-4}
6 | (1.93 × 10^{-5}, 1.443 × 10^{-4}, 1.936 × 10^{-4}) | (7.25 × 10^{-6}, 5.413 × 10^{-5}, 7.261 × 10^{-5}) | 2.4229 × 10^{-4} | 9.0860 × 10^{-5}
7 | (4.21 × 10^{-6}, 1.672 × 10^{-5}, 3.119 × 10^{-5}) | (1.54 × 10^{-6}, 6.10 × 10^{-6}, 1.139 × 10^{-5}) | 3.5640 × 10^{-5} | 1.3011 × 10^{-5}
8 | (2.98 × 10^{-7}, 2.483 × 10^{-6}, 4.284 × 10^{-6}) | (1.07 × 10^{-7}, 8.87 × 10^{-7}, 1.530 × 10^{-6}) | 4.9606 × 10^{-6} | 1.7716 × 10^{-6}
9 | (2.33 × 10^{-8}, 3.472 × 10^{-7}, 5.032 × 10^{-7}) | (8.2 × 10^{-9}, 1.217 × 10^{-7}, 1.764 × 10^{-7}) | 6.1179 × 10^{-7} | 2.1458 × 10^{-7}
Table 3. Comparison of Algorithms 1 and 3 with other algorithms.
DOL | Method | Step Size | Iterations (n) | CPU Time (s) | ∥z − x_{n+1}∥
10^{-4} | Algorithm 1 | γ_n = ρ_n(f(y_n) + g(y_n))/(∥F(y_n)∥^2 + ∥G(y_n)∥^2) | 9 | 0.10002 | 1.1393 × 10^{-4}
10^{-4} | Algorithm 3 | γ_n = ρ_n(f(y_n) + g(y_n))/(∥F(y_n)∥^2 + ∥G(y_n)∥^2) | 8 | 0.0898 | 1.2675 × 10^{-4}
10^{-4} | Sitthithakerngkiet et al. [46] | γ = 0.001 | 23 | 0.086643 | 5.4438 × 10^{-6}
10^{-4} | Byrne et al. [36] | γ = 0.001 | 10 | 0.087847 | 1.8355 × 10^{-6}
10^{-5} | Algorithm 1 | γ_n = ρ_n(f(y_n) + g(y_n))/(∥F(y_n)∥^2 + ∥G(y_n)∥^2) | 11 | 0.11003 | 2.9283 × 10^{-6}
10^{-5} | Algorithm 3 | γ_n = ρ_n(f(y_n) + g(y_n))/(∥F(y_n)∥^2 + ∥G(y_n)∥^2) | 10 | 0.109419 | 3.2668 × 10^{-6}
10^{-5} | Sitthithakerngkiet et al. [46] | γ = 0.001 | 218 | 0.104271 | 5.2318 × 10^{-7}
10^{-5} | Byrne et al. [36] | γ = 0.001 | 12 | 0.092779 | 1.0173 × 10^{-7}
10^{-6} | Algorithm 1 | γ_n = ρ_n(f(y_n) + g(y_n))/(∥F(y_n)∥^2 + ∥G(y_n)∥^2) | 12 | 0.116322 | 4.4459 × 10^{-7}
10^{-6} | Algorithm 3 | γ_n = ρ_n(f(y_n) + g(y_n))/(∥F(y_n)∥^2 + ∥G(y_n)∥^2) | 11 | 0.119499 | 4.4445 × 10^{-7}
10^{-6} | Sitthithakerngkiet et al. [46] | γ = 0.001 | 2171 | 0.768808 | 5.2142 × 10^{-8}
10^{-6} | Byrne et al. [36] | γ = 0.001 | 13 | 0.084488 | 2.3954 × 10^{-8}
