Article

On Strengthened Extragradient Methods Non-Convex Combination with Adaptive Step Sizes Rule for Equilibrium Problems

1 Department of Mathematics, College of Science & Arts, King Abdulaziz University, P.O. Box 344, Rabigh 21911, Saudi Arabia
2 Applied Mathematics for Science and Engineering Research Unit (AMSERU), Program in Applied Statistics, Department of Mathematics and Computer Science, Faculty of Science and Technology, Rajamangala University of Technology Thanyaburi (RMUTT), Pathum Thani 12110, Thailand
3 Department of Mathematics, King Mongkut’s University of Technology Thonburi (KMUTT), 126 Pracha-Uthit Road, Bang Mod, Thung Khru, Bangkok 10140, Thailand
4 Applied Mathematics for Science and Engineering Research Unit (AMSERU), Department of Mathematics and Computer Science, Faculty of Science and Technology, Rajamangala University of Technology Thanyaburi (RMUTT), Pathum Thani 12110, Thailand
* Author to whom correspondence should be addressed.
Submission received: 12 April 2022 / Revised: 30 April 2022 / Accepted: 16 May 2022 / Published: 19 May 2022
(This article belongs to the Section Mathematics)

Abstract: Symmetries play a vital role in the study of physical phenomena in diverse areas such as dynamic systems, optimization, physics, scientific computing, engineering, mathematical biology, chemistry, and medicine, to mention a few. Many of these phenomena reduce to solving equilibrium-like problems in abstract spaces. Motivated by these facts, this research provides two innovative modified extragradient strategies for solving pseudomonotone equilibrium problems in a real Hilbert space under a Lipschitz-type bifunction condition. Both strategies use step sizes that are updated at each iteration and depend only on previous iterates. A key strength of these strategies is that they require no prior knowledge of the Lipschitz-type parameters and no line-search procedure. Strong convergence theorems are proved for both strategies under mild assumptions. Various numerical tests are reported to demonstrate the numerical behavior of the techniques and to contrast them with existing ones.

1. Introduction

Let $\Sigma$ be a nonempty, closed, and convex subset of a real Hilbert space $\Pi$. The inner product and norm are denoted by $\langle \cdot, \cdot \rangle$ and $\|\cdot\|$, respectively. Furthermore, $\mathbb{R}$ and $\mathbb{N}$ denote the sets of real numbers and natural numbers, respectively. Assume that $R : \Pi \times \Pi \rightarrow \mathbb{R}$ is a bifunction with equilibrium problem solution set $EP(R, \Sigma)$, and let
$$s^* = P_{EP(R,\Sigma)}(\theta),$$
where $\theta$ denotes the zero element of $\Pi$. Here $\Sigma$ is a subset of the Hilbert space $\Pi$, and $R : \Pi \times \Pi \rightarrow \mathbb{R}$ is a bifunction with $R(r_1, r_1) = 0$ for all $r_1 \in \Sigma$. The equilibrium problem [1,2] for $R$ on $\Sigma$ is to:
$$\text{Find } s^* \in \Sigma \text{ such that } R(s^*, r_1) \geq 0, \quad \forall r_1 \in \Sigma. \tag{1}$$
The above framework is broad enough to incorporate a variety of problems, including scalar and vector minimization problems, saddle point problems, variational inequality problems, complementarity problems, Nash equilibrium problems in non-cooperative games, and inverse optimization problems [1,3,4]. Problem (1) is also known as the Ky Fan inequality, on account of his early contributions to the field [2]. When an exact solution does not exist or is difficult to compute, an approximate solution is of interest. Many algorithmic techniques, along with their theoretical properties, have been proposed and tested to solve problem (1) in both finite- and infinite-dimensional spaces.
The regularization technique is among the most significant methods for dealing with ill-posed problems in various subfields of applied and pure mathematics. In the context of monotone equilibrium problems, regularization converts the original problem into a family of strongly monotone equilibrium subproblems. Each subproblem is strongly monotone and therefore has a unique solution and is computationally tractable; the subproblems can often be solved more effectively than the initial problem, and their solutions converge to a solution of the original problem as the regularization parameters approach a suitable limit. The two most prevalent regularization methods are the proximal point method and Tikhonov's regularization; both were recently extended to equilibrium problems [5,6,7,8,9,10,11,12,13]. Techniques for non-monotone equilibrium problems can be found in [14,15,16,17,18,19,20,21,22,23,24,25,26].
The proximal method [27] solves equilibrium problems through a sequence of minimization problems. Combined with Korpelevich's technique [28] for the saddle point problem, it became known as the two-step extragradient method in [29]. Tran et al. [29] constructed the iterative sequence $\{s_k\}$ as follows:
$$s_1 \in \Sigma,\qquad m_k = \arg\min_{v \in \Sigma}\Big\{ \lambda R(s_k, v) + \frac{1}{2}\|s_k - v\|^2 \Big\},\qquad s_{k+1} = \arg\min_{v \in \Sigma}\Big\{ \lambda R(m_k, v) + \frac{1}{2}\|s_k - v\|^2 \Big\},$$
where $0 < \lambda < \min\big\{\frac{1}{2c_1}, \frac{1}{2c_2}\big\}$. The iterative sequence created by this approach converges only weakly, and prior knowledge of the Lipschitz-type constants is necessary in order to use it. Since Lipschitz-type parameters are frequently unknown or difficult to compute, Hieu et al. [30] introduced the following adaptation of the approach in [31] for equilibrium problems: let $[t]_+ = \max\{t, 0\}$ and select $s_1 \in \Sigma$, $\mu \in (0, 1)$ and $\lambda_0 > 0$, such that
$$m_k = \arg\min_{v \in \Sigma}\Big\{ \lambda_k R(s_k, v) + \frac{1}{2}\|s_k - v\|^2 \Big\},\qquad s_{k+1} = \arg\min_{v \in \Sigma}\Big\{ \lambda_k R(m_k, v) + \frac{1}{2}\|s_k - v\|^2 \Big\},$$
along with
$$\lambda_{k+1} = \min\Bigg\{ \lambda_k,\ \frac{\mu\big(\|s_k - m_k\|^2 + \|s_{k+1} - m_k\|^2\big)}{2\big[R(s_k, s_{k+1}) - R(s_k, m_k) - R(m_k, s_{k+1})\big]_+} \Bigg\}.$$
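For illustration, this update can be evaluated in a few lines of MATLAB; the following sketch is ours (the bifunction handle R and all variable names are assumptions, not notation fixed by [30]):

    % One step-size update in the spirit of [30]; R is a bifunction handle,
    % e.g. R = @(x,y) (A*x + B*y + c)'*(y - x) for a Nash-Cournot model.
    function lam_next = update_lambda(R, lam, mu, s, m, s_next)
        gap = R(s, s_next) - R(s, m) - R(m, s_next);   % Lipschitz-type gap
        if gap > 0
            lam_next = min(lam, mu*(norm(s - m)^2 + norm(s_next - m)^2)/(2*gap));
        else
            lam_next = lam;   % the [.]_+ convention keeps the previous step size
        end
    end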
To solve pseudomonotone equilibrium problems, a non-convex combination iterative technique was suggested in [32]. Its main contribution is a strongly convergent iterative sequence obtained without hybrid projection or viscosity techniques. The algorithm reads as follows: choose $0 < \lambda_k < \min\big\{\frac{1}{2c_1}, \frac{1}{2c_2}\big\}$, $\delta_k \subset [\delta, 1)$ with $\delta > 0$, and $\phi_k$ such that
$$\lim_{k \to +\infty} \phi_k = 0 \quad\text{and}\quad \sum_{k=1}^{+\infty} \phi_k = +\infty.$$
Then set
$$m_k = \arg\min_{v \in \Sigma}\Big\{ \lambda_k R(s_k, v) + \frac{1}{2}\|s_k - v\|^2 \Big\},\qquad r_k = \arg\min_{v \in \Sigma}\Big\{ \lambda_k R(m_k, v) + \frac{1}{2}\|s_k - v\|^2 \Big\},$$
and
$$s_{k+1} = P_\Sigma\big[ \phi_k s_k + (1 - \phi_k) r_k - \phi_k \delta_k s_k \big].$$
The main objective of this study is to employ well-known projection algorithms, which are generally easy to implement because of their efficient and inexpensive mathematical computations. Inspired by the works of [30,33], we design and adapt an explicit subgradient extragradient method to solve pseudomonotone equilibrium problems as well as particular classes of variational inequality and fixed-point problems. Our techniques are variants of the approaches described in [32]. Strong convergence of the sequences generated by the two methods is achieved under mild conditions. Applications to variational inequality and fixed-point problems are given. Moreover, experimental investigations show that the proposed strategies are more successful than the existing one [32].
The rest of the article is organized as follows: Section 2 includes basic definitions and lemmas. Section 3 proposes new methods and their convergence analysis theorems. Section 4 contains several applications of our findings to variational inequality and fixed-point problems. Section 5 contains numerical tests to demonstrate the computational effectiveness of our proposed methods.

2. Preliminaries

Let $\Im : \Sigma \rightarrow \mathbb{R}$ be a convex function. The subdifferential of $\Im$ at $r_1 \in \Sigma$ is defined as
$$\partial \Im(r_1) = \{ r_3 \in \Pi : \Im(r_2) - \Im(r_1) \geq \langle r_3, r_2 - r_1 \rangle, \ \forall r_2 \in \Sigma \}.$$
The normal cone of $\Sigma$ at $r_1 \in \Sigma$ is defined as
$$N_{\Sigma}(r_1) = \{ r_3 \in \Pi : \langle r_3, r_2 - r_1 \rangle \leq 0, \ \forall r_2 \in \Sigma \}.$$
Lemma 1.
([34]) Suppose that a convex function $\Im : \Sigma \rightarrow \mathbb{R}$ is subdifferentiable and lower semicontinuous on $\Sigma$. Then, $r_1 \in \Sigma$ is a minimizer of $\Im$ if and only if
$$0 \in \partial \Im(r_1) + N_{\Sigma}(r_1),$$
where $\partial \Im(r_1)$ and $N_{\Sigma}(r_1)$ denote the subdifferential of $\Im$ at $r_1 \in \Sigma$ and the normal cone of $\Sigma$ at $r_1$, respectively.
Definition 1.
([35]) The metric projection $P_\Sigma(r_1)$ of $r_1 \in \Pi$ onto the closed and convex subset $\Sigma$ of $\Pi$ is defined as
$$P_\Sigma(r_1) = \arg\min\{ \|r_2 - r_1\| : r_2 \in \Sigma \}.$$
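For simple feasible sets, the metric projection is available in closed form, which is what makes projection-type methods computationally cheap. Two standard instances, both of which appear in the experiments of Section 5, read as follows in MATLAB (a sketch with our naming):

    % Standard closed-form projections; x is a column vector.
    proj_box  = @(x, lo, hi) min(max(x, lo), hi);       % onto {v : lo <= v <= hi}
    proj_ball = @(x, r) x*min(1, r/max(norm(x), eps));  % onto {v : ||v|| <= r}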
Lemma 2.
([36]) Let $P_\Sigma : \Pi \rightarrow \Sigma$ be the metric projection. Then:
(i) For all $r_2 \in \Sigma$ and $r_1 \in \Pi$,
$$\|r_1 - P_\Sigma(r_1)\|^2 + \|P_\Sigma(r_1) - r_2\|^2 \leq \|r_1 - r_2\|^2;$$
(ii) $r_3 = P_\Sigma(r_1)$ if and only if
$$\langle r_1 - r_3, r_2 - r_3 \rangle \leq 0, \quad \forall r_2 \in \Sigma.$$
Lemma 3.
([37]) For all $r_1, r_2 \in \Pi$ and $\chi \in \mathbb{R}$:
(i) $\|\chi r_1 + (1 - \chi) r_2\|^2 = \chi \|r_1\|^2 + (1 - \chi)\|r_2\|^2 - \chi(1 - \chi)\|r_1 - r_2\|^2$;
(ii) $\|r_1 + r_2\|^2 \leq \|r_1\|^2 + 2\langle r_2, r_1 + r_2 \rangle$.
Lemma 4.
([38]) Consider a sequence of non-negative real numbers $\{\chi_k\}$ such that
$$\chi_{k+1} \leq (1 - \tau_k)\chi_k + \tau_k \delta_k, \quad \forall k \in \mathbb{N},$$
where $\{\tau_k\} \subset (0, 1)$ and $\{\delta_k\} \subset \mathbb{R}$ satisfy the following conditions:
$$\lim_{k \to +\infty} \tau_k = 0, \quad \sum_{k=1}^{+\infty} \tau_k = +\infty, \quad\text{and}\quad \limsup_{k \to +\infty} \delta_k \leq 0.$$
Then, $\lim_{k \to +\infty} \chi_k = 0$.
Lemma 5.
([39]) Assume that $\{\chi_k\}$ is a sequence of real numbers such that there exists a subsequence $\{k_i\}$ of $\{k\}$ with
$$\chi_{k_i} < \chi_{k_i + 1}, \quad \forall i \in \mathbb{N}.$$
Then, there exists a nondecreasing sequence $\{e_k\} \subset \mathbb{N}$ such that $e_k \to +\infty$ as $k \to +\infty$, and the following criteria are fulfilled by all (sufficiently large) integers $k \in \mathbb{N}$:
$$\chi_{e_k} \leq \chi_{e_k + 1} \quad\text{and}\quad \chi_k \leq \chi_{e_k + 1}.$$
In fact, $e_k = \max\{ j \leq k : \chi_j \leq \chi_{j+1} \}$.
Now, we consider the following bifunction monotonicity notions (for more information, see [1,40]). A bifunction $R : \Pi \times \Pi \rightarrow \mathbb{R}$ on $\Sigma$ is said to be (for some $\xi > 0$):
(1) strongly monotone if
$$R(r_1, r_2) + R(r_2, r_1) \leq -\xi\|r_1 - r_2\|^2, \quad \forall r_1, r_2 \in \Sigma;$$
(2) monotone if
$$R(r_1, r_2) + R(r_2, r_1) \leq 0, \quad \forall r_1, r_2 \in \Sigma;$$
(3) strongly pseudomonotone if
$$R(r_1, r_2) \geq 0 \implies R(r_2, r_1) \leq -\xi\|r_1 - r_2\|^2, \quad \forall r_1, r_2 \in \Sigma;$$
(4) pseudomonotone if
$$R(r_1, r_2) \geq 0 \implies R(r_2, r_1) \leq 0, \quad \forall r_1, r_2 \in \Sigma.$$
A bifunction $R : \Pi \times \Pi \rightarrow \mathbb{R}$ satisfies the Lipschitz-type condition [41] on $\Sigma$ if there exist $c_1, c_2 > 0$ such that
$$R(r_1, r_3) \leq R(r_1, r_2) + R(r_2, r_3) + c_1\|r_1 - r_2\|^2 + c_2\|r_2 - r_3\|^2, \quad \forall r_1, r_2, r_3 \in \Sigma.$$
We shall assume throughout that the bifunction $R$ satisfies the following conditions:
(R1) $R(r_2, r_2) = 0$ for all $r_2 \in \Sigma$, and $R$ is pseudomonotone on the feasible set $\Sigma$;
(R2) $R$ satisfies the Lipschitz-type condition on $\Pi$ with constants $c_1$ and $c_2$;
(R3) $R(r_1, r_2)$ is jointly weakly continuous on $\Pi \times \Pi$;
(R4) $R(r_1, \cdot)$ is convex and subdifferentiable on $\Pi$ for each $r_1 \in \Pi$.

3. Main Results

We now introduce our first method (Algorithm 1 below) and establish its strong convergence. The following lemma shows that the step-size sequence $\{\lambda_k\}$ generated by the update rule in Step 4 is monotonically decreasing and bounded below, as required for the convergence of the iterative sequence.
Lemma 6.
The sequence $\{\lambda_k\}$ is monotonically decreasing, bounded below by $\min\big\{\frac{\mu}{2\max\{c_1, c_2\}}, \lambda_0\big\}$, and converges to some $\lambda > 0$.
Proof. 
It is straightforward that $\{\lambda_k\}$ is monotonically decreasing. Let $R(s_k, r_k) - R(s_k, m_k) - R(m_k, r_k) > 0$. By the Lipschitz-type condition,
$$\frac{\mu(\|s_k - m_k\|^2 + \|r_k - m_k\|^2)}{2[R(s_k, r_k) - R(s_k, m_k) - R(m_k, r_k)]} \geq \frac{\mu(\|s_k - m_k\|^2 + \|r_k - m_k\|^2)}{2[c_1\|s_k - m_k\|^2 + c_2\|r_k - m_k\|^2]} \geq \frac{\mu}{2\max\{c_1, c_2\}}.$$
Thus, the sequence $\{\lambda_k\}$ is bounded below by $\min\big\{\frac{\mu}{2\max\{c_1, c_2\}}, \lambda_0\big\}$, and there exists $\lambda > 0$ such that $\lim_{k \to +\infty} \lambda_k = \lambda$.    □
The following lemma can be used to verify the boundedness of an iterative sequence.
Lemma 7.
Let $R : \Pi \times \Pi \rightarrow \mathbb{R}$ be a bifunction satisfying conditions (R1)–(R4). For any $s^* \in EP(R, \Sigma)$, we have
$$\|r_k - s^*\|^2 \leq \|s_k - s^*\|^2 - \Big(1 - \frac{\mu\lambda_k}{\lambda_{k+1}}\Big)\|s_k - m_k\|^2 - \Big(1 - \frac{\mu\lambda_k}{\lambda_{k+1}}\Big)\|r_k - m_k\|^2.$$
Proof. 
By the definition of $r_k$ and Lemma 1, we obtain
$$\lambda_k R(m_k, y) - \lambda_k R(m_k, r_k) \geq \langle s_k - r_k, y - r_k \rangle, \quad \forall y \in \Pi_k. \tag{2}$$
From the definition of $\Pi_k$, we have
$$\lambda_k R(s_k, r_k) - \lambda_k R(s_k, m_k) \geq \langle s_k - m_k, r_k - m_k \rangle. \tag{3}$$
Using the definition of $\lambda_{k+1}$, we can write
$$R(s_k, r_k) - R(s_k, m_k) - R(m_k, r_k) \leq \frac{\mu(\|s_k - m_k\|^2 + \|r_k - m_k\|^2)}{2\lambda_{k+1}}. \tag{4}$$
Expressions (2)–(4) imply (see Lemma 3.3 in [42]):
$$\|r_k - s^*\|^2 \leq \|s_k - s^*\|^2 - \Big(1 - \frac{\mu\lambda_k}{\lambda_{k+1}}\Big)\|s_k - m_k\|^2 - \Big(1 - \frac{\mu\lambda_k}{\lambda_{k+1}}\Big)\|r_k - m_k\|^2.$$
   □
The strong convergence of Algorithm 1 is established in the theorem that follows the algorithm.
Algorithm 1 Self-Adaptive Explicit Extragradient Method with Non-Convex Combination
  • Step 0: Let $s_1 \in \Pi$, $\lambda_0 > 0$, $\mu \in (0, 1)$, $\delta_k \subset [\delta, 1)$ with $\delta > 0$, and $\phi_k \subset (0, 1)$ such that
    $$\lim_{k \to +\infty} \phi_k = 0 \quad\text{and}\quad \sum_{k=1}^{+\infty} \phi_k = +\infty.$$
  • Step 1: Compute
    $$m_k = \arg\min_{v \in \Sigma}\Big\{ \lambda_k R(s_k, v) + \frac{1}{2}\|s_k - v\|^2 \Big\}.$$
    If $m_k = s_k$, stop: $s_k \in EP(R, \Sigma)$. Otherwise, go to the next step.
  • Step 2: First, choose $\omega_k \in \partial_2 R(s_k, m_k)$ satisfying $s_k - \lambda_k \omega_k - m_k \in N_\Sigma(m_k)$ and generate the half-space
    $$\Pi_k = \{ z \in \Pi : \langle s_k - \lambda_k \omega_k - m_k, z - m_k \rangle \leq 0 \}.$$
    Solve
    $$r_k = \arg\min_{v \in \Pi_k}\Big\{ \lambda_k R(m_k, v) + \frac{1}{2}\|s_k - v\|^2 \Big\}.$$
  • Step 3: Compute
    $$s_{k+1} = P_\Sigma\big[ \phi_k s_k + (1 - \phi_k) r_k - \phi_k \delta_k s_k \big].$$
  • Step 4: Revise the step size as follows and continue:
    $$\lambda_{k+1} = \begin{cases} \min\Big\{ \lambda_k,\ \dfrac{\mu(\|s_k - m_k\|^2 + \|r_k - m_k\|^2)}{2[R(s_k, r_k) - R(s_k, m_k) - R(m_k, r_k)]} \Big\} & \text{if } R(s_k, r_k) - R(s_k, m_k) - R(m_k, r_k) > 0, \\[2mm] \lambda_k & \text{otherwise}. \end{cases}$$
  • Set $k := k + 1$ and move back to Step 1.
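The two argmin subproblems in Steps 1 and 2 are proximal steps. In the experiments of Section 5 they are evaluated with MATLAB's built-in fmincon; a minimal sketch for a box-shaped $\Sigma = \{v : \texttt{lo} \leq v \leq \texttt{hi}\}$ follows, where the constraint encoding and all variable names (s, lam, w, lo, hi) are our assumptions, not the authors' exact experimental code:

    % Step 1: m_k = argmin_{v in Sigma} { lam*R(s,v) + 0.5*||s - v||^2 },
    % with the box set passed to fmincon as bound constraints.
    opts = optimoptions('fmincon', 'Display', 'off');
    m = fmincon(@(v) lam*R(s, v) + 0.5*norm(s - v)^2, s, ...
                [], [], [], [], lo, hi, [], opts);
    % Step 2: the same prox over the half-space Pi_k, which fmincon accepts
    % as one linear inequality a*v <= a*m with normal a = s - lam*w - m
    % (w is the chosen subgradient omega_k):
    a = (s - lam*w - m)';
    r = fmincon(@(v) lam*R(m, v) + 0.5*norm(s - v)^2, s, ...
                a, a*m, [], [], [], [], [], opts);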
Theorem 1.
Let $\{s_k\}$ be the sequence generated by Algorithm 1. Then, $\{s_k\}$ converges strongly to $s^* \in EP(R, \Sigma)$, where $s^* = P_{EP(R,\Sigma)}(\theta)$.
Proof. 
Since $\lambda_k \to \lambda$, there exists $\epsilon \in (0, 1 - \mu)$ such that
$$\lim_{k \to +\infty}\Big(1 - \frac{\mu\lambda_k}{\lambda_{k+1}}\Big) = 1 - \mu > \epsilon > 0.$$
As a result, there exists a finite number $k_1 \in \mathbb{N}$ such that
$$1 - \frac{\mu\lambda_k}{\lambda_{k+1}} > \epsilon > 0, \quad \forall k \geq k_1.$$
Using Lemma 7, we have
$$\|r_k - s^*\|^2 \leq \|s_k - s^*\|^2, \quad \forall k \geq k_1.$$
We derive using Lemma 3 (i) for any k k 1 , such that
s k + 1 s * 2   = P Σ ϕ k ( 1 δ k ) s k + ( 1 ϕ k ) r k P Σ ( s * ) 2 ϕ k ( 1 δ k ) s k + ( 1 ϕ k ) r k s * 2 = ϕ k ( 1 δ k ) s k s * + ( 1 ϕ k ) ( r k s * ) 2 ϕ k ( 1 δ k ) s k s * 2 + ( 1 ϕ k ) r k s * 2 ϕ k ( 1 δ k ) ( s k s * ) + δ k s * 2 + ( 1 ϕ k ) s k s * 2 ( 1 ϕ k ) 1 μ λ k λ k + 1 s k m k 2 + 1 μ λ k λ k + 1 r k m k 2 ϕ k ( 1 δ k ) s k s * 2 + δ k s * 2 + ( 1 ϕ k ) s k s * 2 ( 1 ϕ k ) ϵ s k m k 2 + ϵ r k m k 2 = ( 1 ϕ k δ k ) s k s * 2 + ϕ k δ k s * 2
ϵ ( 1 ϕ k ) s k m k 2 + r k m k 2 max { s k s * 2 , s * 2 }
max { s k 1 s * 2 , s * 2 } .
It is deduced that $\{s_k\}$ is a bounded sequence. Let $q_k = \phi_k s_k + (1 - \phi_k) r_k$ for all $k \in \mathbb{N}$. By Lemma 3 (i), we have
$$\|q_k - s^*\|^2 = \|\phi_k(s_k - s^*) + (1 - \phi_k)(r_k - s^*)\|^2 \leq \|s_k - s^*\|^2, \quad \forall k \geq k_1. \tag{9}$$
Notice that
$$s_{k+1} = P_\Sigma(q_k - \phi_k\delta_k s_k) = P_\Sigma\big[(1 - \phi_k\delta_k)q_k + \phi_k\delta_k(1 - \phi_k)(r_k - s_k)\big]. \tag{10}$$
By Lemma 3 (ii) and (9), expression (10) implies (see Equation (3.6) in [32]):
$$\begin{aligned}
\|s_{k+1} - s^*\|^2 &= \big\|P_\Sigma\big[(1 - \phi_k\delta_k)q_k + \phi_k\delta_k(1 - \phi_k)(r_k - s_k)\big] - P_\Sigma(s^*)\big\|^2 \\
&\leq (1 - \phi_k\delta_k)\|s_k - s^*\|^2 \\
&\quad + 2\phi_k\delta_k(1 - \phi_k)\big\langle r_k - s_k,\ (1 - \phi_k\delta_k)q_k + \phi_k\delta_k(1 - \phi_k)(r_k - s_k) - s^*\big\rangle \\
&\quad - 2\phi_k\delta_k(1 - \phi_k)\langle s^*, r_k - s_k\rangle - 2\phi_k\delta_k\langle s^*, s_k - s^*\rangle + 2\phi_k^2\delta_k^2\langle s^*, s_k\rangle. \tag{11}
\end{aligned}$$
The remainder of the proof is split into two cases:
Case 1: Suppose there exists $k_2 \in \mathbb{N}$ ($k_2 \geq k_1$) such that
$$\|s_{k+1} - s^*\| \leq \|s_k - s^*\|, \quad \forall k \geq k_2.$$
Then $\lim_{k \to +\infty}\|s_k - s^*\|$ exists; let $\lim_{k \to +\infty}\|s_k - s^*\| = l$. From relationship (7), we have
$$\epsilon\big[\|s_k - m_k\|^2 + \|r_k - m_k\|^2\big] \leq \|s_k - s^*\|^2 - \|s_{k+1} - s^*\|^2 + \phi_k\delta_k\|s^*\|^2 + \epsilon\phi_k\big[\|s_k - m_k\|^2 + \|r_k - m_k\|^2\big], \quad \forall k \geq k_2.$$
The existence of $\lim_{k \to +\infty}\|s_k - s^*\| = l$ provides that
$$\lim_{k \to +\infty}\|s_k - m_k\| = \lim_{k \to +\infty}\|r_k - m_k\| = 0, \tag{13}$$
and accordingly
$$\lim_{k \to +\infty}\|s_k - r_k\| \leq \lim_{k \to +\infty}\|s_k - m_k\| + \lim_{k \to +\infty}\|m_k - r_k\| = 0. \tag{14}$$
Since $\{s_k\}$ is bounded, we may select a subsequence $\{s_{k_j}\}$ of $\{s_k\}$ converging weakly to some $\bar{s} \in \Sigma$ such that
$$\limsup_{k \to +\infty}\langle -s^*, s_k - s^*\rangle = \limsup_{j \to +\infty}\langle -s^*, s_{k_j} - s^*\rangle = \langle -s^*, \bar{s} - s^*\rangle. \tag{15}$$
From (13), the subsequence $\{m_{k_j}\}$ also converges weakly to $\bar{s}$ as $j \to +\infty$. By expression (3), we obtain
$$\lambda_{k_j} R(s_{k_j}, y) - \lambda_{k_j} R(s_{k_j}, m_{k_j}) \geq \langle s_{k_j} - m_{k_j}, y - m_{k_j}\rangle, \quad \forall y \in \Sigma.$$
Letting $j \to +\infty$ entails that
$$R(\bar{s}, y) \geq 0, \quad \forall y \in \Sigma.$$
As a result, $\bar{s} \in EP(R, \Sigma)$. Finally, using (15) and Lemma 2 (ii), we derive
$$\limsup_{k \to +\infty}\langle -s^*, s_k - s^*\rangle = \langle -s^*, \bar{s} - s^*\rangle = \langle \theta - P_{EP(R,\Sigma)}(\theta), \bar{s} - P_{EP(R,\Sigma)}(\theta)\rangle \leq 0. \tag{18}$$
The desired result follows from the assumptions on $\phi_k$, $\delta_k$, together with (11), (13), (14), (18) and Lemma 4.
Case 2: Assume that there exists a subsequence $\{k_i\}$ of $\{k\}$ such that
$$\|s_{k_i} - s^*\| \leq \|s_{k_i + 1} - s^*\|, \quad \forall i \in \mathbb{N}.$$
Consequently, according to Lemma 5, there exists a nondecreasing sequence $\{n_j\} \subset \mathbb{N}$ with $n_j \to +\infty$ such that
$$\|s_{n_j} - s^*\| \leq \|s_{n_j + 1} - s^*\| \quad\text{and}\quad \|s_j - s^*\| \leq \|s_{n_j + 1} - s^*\|, \quad \forall j \in \mathbb{N}. \tag{19}$$
By expression (7), we have
$$\epsilon\big[\|s_{n_j} - m_{n_j}\|^2 + \|r_{n_j} - m_{n_j}\|^2\big] \leq \|s_{n_j} - s^*\|^2 - \|s_{n_j + 1} - s^*\|^2 + \phi_{n_j}\delta_{n_j}\|s^*\|^2 + \epsilon\phi_{n_j}\big[\|s_{n_j} - m_{n_j}\|^2 + \|r_{n_j} - m_{n_j}\|^2\big], \quad \forall n_j \geq k_1.$$
The above expression implies that
$$\lim_{j \to +\infty}\|s_{n_j} - m_{n_j}\| = \lim_{j \to +\infty}\|r_{n_j} - m_{n_j}\| = 0, \tag{21}$$
and thus
$$\lim_{j \to +\infty}\|s_{n_j} - r_{n_j}\| \leq \lim_{j \to +\infty}\|s_{n_j} - m_{n_j}\| + \lim_{j \to +\infty}\|m_{n_j} - r_{n_j}\| = 0. \tag{22}$$
By arguments identical to those leading to expression (18), we have
$$\limsup_{j \to +\infty}\langle -s^*, s_{n_j} - s^*\rangle \leq 0.$$
From expression (11), we obtain
$$\begin{aligned}
\|s_{n_j + 1} - s^*\|^2 &\leq (1 - \phi_{n_j}\delta_{n_j})\|s_{n_j} - s^*\|^2 \\
&\quad + 2\phi_{n_j}\delta_{n_j}(1 - \phi_{n_j})\big\langle r_{n_j} - s_{n_j},\ (1 - \phi_{n_j}\delta_{n_j})q_{n_j} + \phi_{n_j}\delta_{n_j}(1 - \phi_{n_j})(r_{n_j} - s_{n_j}) - s^*\big\rangle \\
&\quad - 2\phi_{n_j}\delta_{n_j}(1 - \phi_{n_j})\langle s^*, r_{n_j} - s_{n_j}\rangle - 2\phi_{n_j}\delta_{n_j}\langle s^*, s_{n_j} - s^*\rangle + 2\phi_{n_j}^2\delta_{n_j}^2\langle s^*, s_{n_j}\rangle.
\end{aligned}$$
Since $\|s_{n_j} - s^*\| \leq \|s_{n_j + 1} - s^*\|$, this implies that
$$\begin{aligned}
\|s_{n_j + 1} - s^*\|^2 &\leq (1 - \phi_{n_j}\delta_{n_j})\|s_{n_j + 1} - s^*\|^2 \\
&\quad + 2\phi_{n_j}\delta_{n_j}(1 - \phi_{n_j})\big\langle r_{n_j} - s_{n_j},\ (1 - \phi_{n_j}\delta_{n_j})q_{n_j} + \phi_{n_j}\delta_{n_j}(1 - \phi_{n_j})(r_{n_j} - s_{n_j}) - s^*\big\rangle \\
&\quad - 2\phi_{n_j}\delta_{n_j}(1 - \phi_{n_j})\langle s^*, r_{n_j} - s_{n_j}\rangle - 2\phi_{n_j}\delta_{n_j}\langle s^*, s_{n_j} - s^*\rangle + 2\phi_{n_j}^2\delta_{n_j}^2\langle s^*, s_{n_j}\rangle. \tag{25}
\end{aligned}$$
Expressions (19) and (25) imply that
$$\begin{aligned}
\|s_j - s^*\|^2 \leq \|s_{n_j + 1} - s^*\|^2 &\leq 2(1 - \phi_{n_j})\big\langle r_{n_j} - s_{n_j},\ (1 - \phi_{n_j}\delta_{n_j})q_{n_j} + \phi_{n_j}\delta_{n_j}(1 - \phi_{n_j})(r_{n_j} - s_{n_j}) - s^*\big\rangle \\
&\quad - 2(1 - \phi_{n_j})\langle s^*, r_{n_j} - s_{n_j}\rangle - 2\langle s^*, s_{n_j} - s^*\rangle + 2\phi_{n_j}\delta_{n_j}\langle s^*, s_{n_j}\rangle, \quad \forall n_j \geq k_1.
\end{aligned}$$
Because $\phi_{n_j} \to 0$, it follows from expressions (21), (22) and the limsup estimate above that
$$\lim_{j \to +\infty}\|s_j - s^*\|^2 \leq \lim_{j \to +\infty}\|s_{n_j + 1} - s^*\|^2 \leq 0.$$
Consequently, $s_k \to s^*$. This is the required result.    □
We now modify Algorithm 1 and prove a strong convergence theorem for the resulting method. For the purpose of simplicity, we adopt the notation $[t]_+ = \max\{0, t\}$ and the conventions $\frac{0}{0} = +\infty$ and $\frac{a}{0} = +\infty$ ($a \neq 0$). The details are given in Algorithm 2 below.
Lemma 8.
Let $R : \Pi \times \Pi \rightarrow \mathbb{R}$ be a bifunction satisfying conditions (R1)–(R4). For any $s^* \in EP(R, \Sigma)$, we have
$$\|r_k - s^*\|^2 \leq \|P_\Sigma(s_k) - s^*\|^2 - \Big(1 - \frac{\mu\lambda_k}{\lambda_{k+1}}\Big)\|P_\Sigma(s_k) - m_k\|^2 - \Big(1 - \frac{\mu\lambda_k}{\lambda_{k+1}}\Big)\|r_k - m_k\|^2.$$
The strong convergence of Algorithm 2 is established in the theorem that follows the algorithm.
Algorithm 2 Modified Self-Adaptive Explicit Extragradient Method with Non-Convex Combination
  • Step 0: Let $s_1 \in \Pi$, $\lambda_0 > 0$, $\mu \in (0, 1)$, $\delta_k \subset [\delta, 1)$ with $\delta > 0$, and $\phi_k, \varphi_k \subset (0, 1)$ such that
    $$\lim_{k \to +\infty}\phi_k = 0, \quad \sum_{k=1}^{+\infty}\phi_k = +\infty \quad\text{and}\quad \liminf_{k \to +\infty}\varphi_k(1 - \varphi_k) > 0.$$
  • Step 1: Compute
    $$m_k = \arg\min_{v \in \Sigma}\Big\{\lambda_k R(P_\Sigma(s_k), v) + \frac{1}{2}\|P_\Sigma(s_k) - v\|^2\Big\}.$$
    If $m_k = s_k$, then $s_k$ is a solution of problem (1). Otherwise, go to the next step.
  • Step 2: First, choose $\omega_k \in \partial_2 R(P_\Sigma(s_k), m_k)$ satisfying $P_\Sigma(s_k) - \lambda_k\omega_k - m_k \in N_\Sigma(m_k)$ and generate the half-space
    $$\Pi_k = \{z \in \Pi : \langle P_\Sigma(s_k) - \lambda_k\omega_k - m_k, z - m_k\rangle \leq 0\}.$$
    Solve
    $$r_k = \arg\min_{v \in \Pi_k}\Big\{\lambda_k R(m_k, v) + \frac{1}{2}\|P_\Sigma(s_k) - v\|^2\Big\}.$$
  • Step 3: Compute
    $$s_{k+1} = \phi_k(1 - \delta_k)s_k + (1 - \phi_k)\big[\varphi_k r_k + (1 - \varphi_k)s_k\big].$$
  • Step 4: Modify the step size as follows:
    $$\lambda_{k+1} = \min\Bigg\{\lambda_k,\ \frac{\mu(\|P_\Sigma(s_k) - m_k\|^2 + \|r_k - m_k\|^2)}{2\big[R(P_\Sigma(s_k), r_k) - R(P_\Sigma(s_k), m_k) - R(m_k, r_k)\big]_+}\Bigg\}.$$
    Set $k := k + 1$ and go back to Step 1.
Theorem 2.
Let $\{s_k\}$ be the sequence generated by Algorithm 2 under conditions (R1)–(R4). Then, $\{s_k\}$ converges strongly to an element $s^*$ of $EP(R, \Sigma)$.
Proof. 
Using Lemma 8, we have
$$\begin{aligned}
\|s_{k+1} - s^*\|^2 &= \big\|\phi_k(1 - \delta_k)s_k + (1 - \phi_k)[\varphi_k r_k + (1 - \varphi_k)s_k] - s^*\big\|^2 \\
&= \big\|\phi_k[(1 - \delta_k)(s_k - s^*) - \delta_k s^*] + (1 - \phi_k)[\varphi_k(r_k - s^*) + (1 - \varphi_k)(s_k - s^*)]\big\|^2 \\
&\leq \phi_k\big\|(1 - \delta_k)(s_k - s^*) - \delta_k s^*\big\|^2 + (1 - \phi_k)\big\|\varphi_k(r_k - s^*) + (1 - \varphi_k)(s_k - s^*)\big\|^2 \\
&\leq \phi_k(1 - \delta_k)\|s_k - s^*\|^2 + \phi_k\delta_k\|s^*\|^2 \\
&\quad + (1 - \phi_k)\big[\varphi_k\|r_k - s^*\|^2 + (1 - \varphi_k)\|s_k - s^*\|^2 - \varphi_k(1 - \varphi_k)\|r_k - s_k\|^2\big] \\
&\leq (1 - \phi_k\delta_k)\|s_k - s^*\|^2 + \phi_k\delta_k\|s^*\|^2 \\
&\quad - (1 - \phi_k)\Big[\varphi_k\Big(1 - \frac{\mu\lambda_k}{\lambda_{k+1}}\Big)\|P_\Sigma(s_k) - m_k\|^2 + \varphi_k\Big(1 - \frac{\mu\lambda_k}{\lambda_{k+1}}\Big)\|r_k - m_k\|^2 + \varphi_k(1 - \varphi_k)\|r_k - s_k\|^2\Big], \tag{28}
\end{aligned}$$
where the last step uses Lemma 8 and the nonexpansiveness of $P_\Sigma$.
Since $\lambda_k \to \lambda$, there exists a fixed number $\epsilon_0 \in (0, 1 - \mu)$ such that
$$\lim_{k \to +\infty}\Big(1 - \frac{\mu\lambda_k}{\lambda_{k+1}}\Big) = 1 - \mu > \epsilon_0 > 0.$$
Thus, there exists a fixed number $m_1 \in \mathbb{N}$ such that
$$1 - \frac{\mu\lambda_k}{\lambda_{k+1}} > \epsilon_0 > 0, \quad \forall k \geq m_1. \tag{29}$$
Combining expressions (28) and (29), we obtain
$$\|s_{k+1} - s^*\|^2 \leq \max\{\|s_k - s^*\|^2, \|s^*\|^2\} \leq \cdots \leq \max\{\|s_{m_1} - s^*\|^2, \|s^*\|^2\}.$$
The definition of $s_{k+1}$ together with Lemma 3 provides (see Equation (3.17) in [32])
$$\|s_{k+1} - s^*\|^2 \leq (1 - \phi_k\delta_k)\|s_k - s^*\|^2 + 2\phi_k\delta_k(1 - \phi_k)\varphi_k\langle r_k - s_k, s_{k+1} - s^*\rangle + 2\phi_k\delta_k\langle -s^*, s_{k+1} - s^*\rangle. \tag{31}$$
The rest of the discussion will be divided into two parts:
Case 1: Assume that there exists an integer $m_2 \in \mathbb{N}$ ($m_2 \geq m_1$) such that
$$\|s_{k+1} - s^*\| \leq \|s_k - s^*\|, \quad \forall k \geq m_2.$$
Thus, $\lim_{k \to +\infty}\|s_k - s^*\|$ exists. By expression (28), we have
$$\epsilon_0\varphi_k\big[\|P_\Sigma(s_k) - m_k\|^2 + \|r_k - m_k\|^2\big] + \varphi_k(1 - \varphi_k)\|r_k - s_k\|^2 \leq \|s_k - s^*\|^2 - \|s_{k+1} - s^*\|^2 + \phi_k\delta_k\|s^*\|^2 + \epsilon_0\phi_k\varphi_k\big[\|P_\Sigma(s_k) - m_k\|^2 + \|r_k - m_k\|^2\big] + \phi_k\varphi_k(1 - \varphi_k)\|r_k - s_k\|^2.$$
The above, together with the assumptions on $\lambda_k$, $\phi_k$ and $\varphi_k$, yields that
$$\lim_{k \to +\infty}\|P_\Sigma(s_k) - m_k\| = \lim_{k \to +\infty}\|r_k - m_k\| = \lim_{k \to +\infty}\|s_k - r_k\| = 0. \tag{34}$$
As a result, $\{s_k\}$ is bounded, and we may choose a subsequence $\{s_{k_j}\}$ of $\{s_k\}$ converging weakly to some $\bar{s} \in \Sigma$ such that
$$\limsup_{k \to +\infty}\langle -s^*, s_k - s^*\rangle = \limsup_{j \to +\infty}\langle -s^*, s_{k_j} - s^*\rangle = \langle -s^*, \bar{s} - s^*\rangle. \tag{35}$$
As with expression (3), using (34) we have
$$\lambda_{k_j} R(P_\Sigma(s_{k_j}), y) - \lambda_{k_j} R(P_\Sigma(s_{k_j}), m_{k_j}) \geq \langle P_\Sigma(s_{k_j}) - m_{k_j}, y - m_{k_j}\rangle, \quad \forall y \in \Sigma.$$
Letting $j \to +\infty$ indicates that $R(\bar{s}, y) \geq 0$ for all $y \in \Sigma$; hence $\bar{s} \in EP(R, \Sigma)$. In the end, by expression (35) and Lemma 2, we obtain
$$\limsup_{k \to +\infty}\langle -s^*, s_k - s^*\rangle = \langle -s^*, \bar{s} - s^*\rangle = \langle \theta - P_{EP(R,\Sigma)}(\theta), \bar{s} - P_{EP(R,\Sigma)}(\theta)\rangle \leq 0.$$
The needed result is obtained using Equation (31) and Lemma 4.
Case 2: Assume that there exists a subsequence $\{k_i\}$ of $\{k\}$ such that
$$\|s_{k_i} - s^*\| \leq \|s_{k_i + 1} - s^*\|, \quad \forall i \in \mathbb{N}.$$
Thus, by Lemma 5, there exists a nondecreasing sequence $\{n_j\} \subset \mathbb{N}$ with $n_j \to +\infty$ such that
$$\|s_{n_j} - s^*\| \leq \|s_{n_j + 1} - s^*\| \quad\text{and}\quad \|s_j - s^*\| \leq \|s_{n_j + 1} - s^*\|, \quad \forall j \in \mathbb{N}.$$
Using expression (31), we have
$$\|s_{n_j + 1} - s^*\|^2 \leq (1 - \phi_{n_j}\delta_{n_j})\|s_{n_j} - s^*\|^2 + 2\phi_{n_j}\delta_{n_j}(1 - \phi_{n_j})\varphi_{n_j}\langle r_{n_j} - s_{n_j}, s_{n_j + 1} - s^*\rangle + 2\phi_{n_j}\delta_{n_j}\langle -s^*, s_{n_j + 1} - s^*\rangle.$$
The remaining proof is analogous to Case 2 in Theorem 1. □

4. Applications

In this section, we apply our main results to fixed-point and variational inequality problems. An operator $T : \Sigma \subset \Pi \rightarrow \Sigma$ is said to be:
(i) a κ-strict pseudocontraction [43] on $\Sigma$ if
$$\|Tr_1 - Tr_2\|^2 \leq \|r_1 - r_2\|^2 + \kappa\|(r_1 - Tr_1) - (r_2 - Tr_2)\|^2, \quad \forall r_1, r_2 \in \Sigma,$$
which is equivalent to
$$\langle Tr_1 - Tr_2, r_1 - r_2\rangle \leq \|r_1 - r_2\|^2 - \frac{1 - \kappa}{2}\|(r_1 - Tr_1) - (r_2 - Tr_2)\|^2, \quad \forall r_1, r_2 \in \Sigma;$$
(ii) weakly sequentially continuous on $\Sigma$ if
$$T(s_k) \rightharpoonup T(s^*) \ \text{for each sequence} \ \{s_k\} \subset \Sigma \ \text{with} \ s_k \rightharpoonup s^*.$$
Note: if we take $R(x, y) = \langle x - Tx, y - x\rangle$ for all $x, y \in \Sigma$, the equilibrium problem reduces to the fixed-point problem with $2c_1 = 2c_2 = \frac{3 - 2\kappa}{1 - \kappa}$. The algorithm's $m_k$ and $r_k$ values become (for more information, see [32]):
$$m_k = \arg\min_{v \in \Sigma}\Big\{\lambda_k R(s_k, v) + \frac{1}{2}\|s_k - v\|^2\Big\} = (1 - \lambda_k)s_k + \lambda_k T(s_k),$$
$$r_k = \arg\min_{v \in \Pi_k}\Big\{\lambda_k R(m_k, v) + \frac{1}{2}\|s_k - v\|^2\Big\} = P_{\Pi_k}\big[s_k - \lambda_k(m_k - T(m_k))\big].$$
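The first identity can be checked by completing the square; a one-line verification (our computation) reads:
$$\lambda_k \langle s_k - T s_k,\, v - s_k\rangle + \tfrac{1}{2}\|s_k - v\|^2 = \tfrac{1}{2}\big\|v - \big(s_k - \lambda_k(s_k - T s_k)\big)\big\|^2 + \mathrm{const}(s_k),$$
so the minimizer over $\Sigma$ is $P_\Sigma\big[(1 - \lambda_k)s_k + \lambda_k T(s_k)\big]$; since $s_k, T(s_k) \in \Sigma$ and $\lambda_k \in (0, 1)$, this convex combination already lies in $\Sigma$, which gives $m_k = (1 - \lambda_k)s_k + \lambda_k T(s_k)$. The expression for $r_k$ follows in the same way over the half-space $\Pi_k$.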
The following fixed-point theorems are derived from the results in Section 3.
Corollary 1.
Suppose that $\Sigma$ is a nonempty, closed, and convex subset of a Hilbert space $\Pi$, and let $T : \Sigma \rightarrow \Sigma$ be a weakly sequentially continuous κ-strict pseudocontraction with $Fix(T) \neq \emptyset$. Let $s_1 \in \Sigma$, $\lambda_0 > 0$, $\mu \in (0, 1)$, $\delta_k \subset [\delta, 1)$ with $\delta > 0$, and $\phi_k \subset (0, 1)$ such that
$$\lim_{k \to +\infty}\phi_k = 0 \quad and \quad \sum_{k=1}^{+\infty}\phi_k = +\infty.$$
Additionally, the sequence $\{s_k\}$ is created as follows:
$$m_k = (1 - \lambda_k)s_k + \lambda_k T(s_k), \qquad r_k = P_{\Pi_k}\big[s_k - \lambda_k(m_k - T(m_k))\big], \qquad s_{k+1} = P_\Sigma\big[\phi_k s_k + (1 - \phi_k)r_k - \phi_k\delta_k s_k\big],$$
where
$$\Pi_k = \{z \in \Pi : \langle s_k - \lambda_k T(s_k) - m_k, z - m_k\rangle \leq 0\}.$$
The relevant step size $\lambda_{k+1}$ is obtained as follows:
$$\lambda_{k+1} = \min\Bigg\{\lambda_k,\ \frac{\mu(\|s_k - m_k\|^2 + \|r_k - m_k\|^2)}{2\big[\langle (s_k - m_k) - (T(s_k) - T(m_k)),\, r_k - m_k\rangle\big]_+}\Bigg\}.$$
Then, the sequence $\{s_k\}$ converges strongly to $s^* = P_{Fix(T)}(\theta)$.
Corollary 2.
Suppose that $\Sigma$ is a nonempty, closed, and convex subset of a Hilbert space $\Pi$, and let $T : \Sigma \rightarrow \Sigma$ be a weakly sequentially continuous κ-strict pseudocontraction with $Fix(T) \neq \emptyset$. Let $s_1 \in \Pi$, $\lambda_0 > 0$, $\mu \in (0, 1)$, $\delta_k \subset [\delta, 1)$ with $\delta > 0$, and $\phi_k, \varphi_k \subset (0, 1)$ such that
$$\lim_{k \to +\infty}\phi_k = 0, \quad \sum_{k=1}^{+\infty}\phi_k = +\infty \quad and \quad \liminf_{k \to +\infty}\varphi_k(1 - \varphi_k) > 0.$$
Additionally, the sequence $\{s_k\}$ is created as follows:
$$m_k = (1 - \lambda_k)P_\Sigma(s_k) + \lambda_k T(P_\Sigma(s_k)), \qquad r_k = P_{\Pi_k}\big[P_\Sigma(s_k) - \lambda_k(m_k - T(m_k))\big],$$
$$s_{k+1} = \phi_k(1 - \delta_k)s_k + (1 - \phi_k)\big[\varphi_k r_k + (1 - \varphi_k)s_k\big],$$
where
$$\Pi_k = \{z \in \Pi : \langle P_\Sigma(s_k) - \lambda_k T(P_\Sigma(s_k)) - m_k, z - m_k\rangle \leq 0\}.$$
The relevant step size $\lambda_{k+1}$ is obtained as follows:
$$\lambda_{k+1} = \min\Bigg\{\lambda_k,\ \frac{\mu(\|P_\Sigma(s_k) - m_k\|^2 + \|r_k - m_k\|^2)}{2\big[\langle (P_\Sigma(s_k) - m_k) - (T(P_\Sigma(s_k)) - T(m_k)),\, r_k - m_k\rangle\big]_+}\Bigg\}.$$
Then, the sequence $\{s_k\}$ converges strongly to $s^* = P_{Fix(T)}(\theta)$.
The variational inequality problem is stated as follows:
$$\text{Find } s^* \in \Sigma \text{ such that } \langle G(s^*), y - s^*\rangle \geq 0, \quad \forall y \in \Sigma.$$
An operator $G : \Pi \rightarrow \Pi$ is said to be:
(i) L-Lipschitz continuous on $\Sigma$ if
$$\|G(r_1) - G(r_2)\| \leq L\|r_1 - r_2\|, \quad \forall r_1, r_2 \in \Sigma;$$
(ii) pseudomonotone on $\Sigma$ if
$$\langle G(r_1), r_2 - r_1\rangle \geq 0 \implies \langle G(r_2), r_1 - r_2\rangle \leq 0, \quad \forall r_1, r_2 \in \Sigma.$$
Note: if $R(x, y) := \langle G(x), y - x\rangle$ for all $x, y \in \Sigma$, the equilibrium problem converts into a variational inequality problem with $L = 2c_1 = 2c_2$ (for more information, see [44]). From the values of $m_k$ and $r_k$ in Algorithm 1, we derive
$$m_k = \arg\min_{v \in \Sigma}\Big\{\lambda_k R(s_k, v) + \frac{1}{2}\|s_k - v\|^2\Big\} = P_\Sigma\big[s_k - \lambda_k G(s_k)\big],$$
$$r_k = \arg\min_{v \in \Pi_k}\Big\{\lambda_k R(m_k, v) + \frac{1}{2}\|s_k - v\|^2\Big\} = P_{\Pi_k}\big[s_k - \lambda_k G(m_k)\big].$$
Due to $\omega_k \in \partial_2 R(s_k, m_k)$, we obtain, for all $z \in \Pi$,
$$\langle \omega_k, z - m_k\rangle \leq R(s_k, z) - R(s_k, m_k) = \langle G(s_k), z - s_k\rangle - \langle G(s_k), m_k - s_k\rangle = \langle G(s_k), z - m_k\rangle,$$
and consequently $0 \leq \langle G(s_k) - \omega_k, z - m_k\rangle$ for all $z \in \Pi$. It implies that
$$\langle s_k - \lambda_k G(s_k) - m_k, z - m_k\rangle \leq \langle s_k - \lambda_k G(s_k) - m_k, z - m_k\rangle + \lambda_k\langle G(s_k) - \omega_k, z - m_k\rangle = \langle s_k - \lambda_k\omega_k - m_k, z - m_k\rangle.$$
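Both specializations benefit from the fact that the projection onto the half-space $\Pi_k$ is available in closed form (a standard identity, stated here for convenience): writing $a_k = s_k - \lambda_k G(s_k) - m_k$ for the normal vector,
$$P_{\Pi_k}(x) = x - \frac{\big[\langle a_k, x - m_k\rangle\big]_+}{\|a_k\|^2}\, a_k, \qquad x \in \Pi,$$
so that $r_k = P_{\Pi_k}\big[s_k - \lambda_k G(m_k)\big]$ costs only inner products per iteration rather than a full minimization.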
Assumption 1.
Assume that $G$ fulfills the following conditions:
(i) $G$ is pseudomonotone on $\Sigma$ and $VI(G, \Sigma)$ is nonempty;
(ii) $G$ is L-Lipschitz continuous on $\Sigma$ with $L > 0$;
(iii) $\limsup_{k \to +\infty}\langle G(s_k), y - s_k\rangle \leq \langle G(s^*), y - s^*\rangle$ for every $y \in \Sigma$ and every $\{s_k\} \subset \Sigma$ with $s_k \rightharpoonup s^*$.
Corollary 3.
Let $G : \Sigma \rightarrow \Pi$ be an operator satisfying Assumption 1. Let $s_1 \in \Pi$, $\lambda_0 > 0$, $\mu \in (0, 1)$, $\delta_k \subset [\delta, 1)$ with $\delta > 0$, and $\phi_k \subset (0, 1)$ such that
$$\lim_{k \to +\infty}\phi_k = 0 \quad and \quad \sum_{k=1}^{+\infty}\phi_k = +\infty.$$
Moreover, the sequence $\{s_k\}$ is generated as follows:
$$m_k = P_\Sigma\big[s_k - \lambda_k G(s_k)\big], \qquad r_k = P_{\Pi_k}\big[s_k - \lambda_k G(m_k)\big], \qquad s_{k+1} = P_\Sigma\big[\phi_k s_k + (1 - \phi_k)r_k - \phi_k\delta_k s_k\big],$$
where
$$\Pi_k = \{z \in \Pi : \langle s_k - \lambda_k G(s_k) - m_k, z - m_k\rangle \leq 0\}.$$
Next, the step size $\lambda_{k+1}$ is obtained as follows:
$$\lambda_{k+1} = \min\Bigg\{\lambda_k,\ \frac{\mu(\|s_k - m_k\|^2 + \|r_k - m_k\|^2)}{2\big[\langle G(s_k) - G(m_k),\, r_k - m_k\rangle\big]_+}\Bigg\}.$$
Then, the sequence $\{s_k\}$ converges strongly to the solution $s^* \in VI(G, \Sigma)$.
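A compact MATLAB sketch of the scheme in Corollary 3 for a box-shaped feasible set is given below. It is a sketch under our assumptions (the handle G, the grid of parameters lam0, s1, mu, lo, hi, maxit, eps_tol, and the stopping rule are placeholders), not the exact experimental code:

    % Corollary 3 for Sigma = {v : lo <= v <= hi}; G is a function handle.
    P_Sigma = @(x) min(max(x, lo), hi);
    lam = lam0;  s = s1;
    for k = 1:maxit
        phi = 1/(40*k);  delta = 1/10 + 1/(10*k);     % sample control parameters
        m = P_Sigma(s - lam*G(s));
        if norm(s - m) <= eps_tol, break; end          % stopping rule D_k
        a = s - lam*G(s) - m;                          % normal of Pi_k
        y = s - lam*G(m);
        r = y - a*max(0, a'*(y - m))/max(a'*a, eps);   % closed-form P_{Pi_k}(y)
        gap = (G(s) - G(m))'*(r - m);                  % step-size update [.]_+
        if gap > 0
            lam = min(lam, mu*(norm(s - m)^2 + norm(r - m)^2)/(2*gap));
        end
        s = P_Sigma(phi*s + (1 - phi)*r - phi*delta*s);
    end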
Corollary 4.
Let $G : \Sigma \rightarrow \Pi$ be an operator satisfying Assumption 1. Let $s_1 \in \Pi$, $\lambda_0 > 0$, $\mu \in (0, 1)$, $\delta_k \subset [\delta, 1)$ with $\delta > 0$, and $\phi_k, \varphi_k \subset (0, 1)$ such that
$$\lim_{k \to +\infty}\phi_k = 0, \quad \sum_{k=1}^{+\infty}\phi_k = +\infty \quad and \quad \liminf_{k \to +\infty}\varphi_k(1 - \varphi_k) > 0.$$
Moreover, the sequence $\{s_k\}$ is generated as follows:
$$m_k = P_\Sigma\big[P_\Sigma(s_k) - \lambda_k G(P_\Sigma(s_k))\big], \qquad r_k = P_{\Pi_k}\big[P_\Sigma(s_k) - \lambda_k G(m_k)\big],$$
$$s_{k+1} = \phi_k(1 - \delta_k)s_k + (1 - \phi_k)\big[\varphi_k r_k + (1 - \varphi_k)s_k\big],$$
where
$$\Pi_k = \{z \in \Pi : \langle P_\Sigma(s_k) - \lambda_k G(P_\Sigma(s_k)) - m_k, z - m_k\rangle \leq 0\}.$$
Next, the step size $\lambda_{k+1}$ is obtained as follows:
$$\lambda_{k+1} = \min\Bigg\{\lambda_k,\ \frac{\mu(\|P_\Sigma(s_k) - m_k\|^2 + \|r_k - m_k\|^2)}{2\big[\langle G(P_\Sigma(s_k)) - G(m_k),\, r_k - m_k\rangle\big]_+}\Bigg\}.$$
Then, the sequence $\{s_k\}$ converges strongly to the solution $s^* \in VI(G, \Sigma)$.

5. Numerical Illustration

The computational results in this section show that our proposed algorithms are more efficient than Algorithms 3.1 and 3.2 in [32]. All experiments were run in MATLAB R2018b (version 9.5) on a PC with an Intel(R) Core(TM) i3-4010U CPU @ 1.70 GHz and 4.00 GB of RAM. In all our algorithms, the minimization subproblems were solved with MATLAB's built-in fmincon function. (i) The control parameters for Algorithm 3.1 (Algo. 3.1) and Algorithm 3.2 (Algo. 3.2) in [32] are set as follows:
$$\phi_k = \frac{1}{40k}, \quad \delta_k = \frac{1}{10} + \frac{1}{10k}, \quad \lambda_k = \frac{k}{3 + 2c_1 k}, \quad \varphi_k = \frac{1}{4} + \frac{1}{4k} \quad\text{and}\quad D_k = \|s_k - m_k\| \leq \epsilon.$$
(ii) The control parameters for Algorithm 1 (Algo. 1) and Algorithm 2 (Algo. 2) are
$$\phi_k = \frac{1}{40k}, \quad \delta_k = \frac{1}{10} + \frac{1}{10k}, \quad \varphi_k = \frac{1}{4} + \frac{1}{4k}, \quad D_k = \|s_k - m_k\| \leq \epsilon,$$
with different values of $\lambda_0$.
Example 1.
Let us consider the bifunction $R : \Sigma \times \Sigma \rightarrow \mathbb{R}$ defined by
$$R(s, m) = \sum_{i=2}^{5}(m_i - s_i)\|s\|, \quad s, m \in \mathbb{R}^5,$$
with the convex set
$$\Sigma = \{(s_1, \ldots, s_5) : s_1 \geq -1,\ s_i \geq 1,\ i = 2, \ldots, 5\}.$$
Consequently, $R$ is Lipschitz-type continuous with $c_1 = c_2 = 2$ and meets conditions (R1)–(R4). The simulations obtained with $s_1 = (2, 3, 2, 5, 5)$ and $\epsilon = 10^{-4}$ are shown in Figures 1 and 2 and Tables 1 and 2.
Example 2.
Following [29], the bifunction $R$ is written as
$$R(s, m) = \langle As + Bm + c, m - s\rangle,$$
where $c \in \mathbb{R}^5$ and
$$A = \begin{pmatrix} 3.1 & 2 & 0 & 0 & 0 \\ 2 & 3.6 & 0 & 0 & 0 \\ 0 & 0 & 3.5 & 2 & 0 \\ 0 & 0 & 2 & 3.3 & 0 \\ 0 & 0 & 0 & 0 & 3 \end{pmatrix}, \quad B = \begin{pmatrix} 1.6 & 1 & 0 & 0 & 0 \\ 1 & 1.6 & 0 & 0 & 0 \\ 0 & 0 & 1.5 & 1 & 0 \\ 0 & 0 & 1 & 1.5 & 0 \\ 0 & 0 & 0 & 0 & 2 \end{pmatrix}, \quad c = \begin{pmatrix} 1 \\ -2 \\ -1 \\ 2 \\ -1 \end{pmatrix}.$$
The Lipschitz-type constants are $c_1 = c_2 = \frac{1}{2}\|A - B\|$ (see [29]). The feasible set $\Sigma \subset \mathbb{R}^5$ is given as
$$\Sigma := \{s \in \mathbb{R}^5 : -5 \leq s_i \leq 5\}.$$
Figures 3 and 4 and Tables 3 and 4 display the numerical results with $s_1 = (1, \ldots, 1)$ and $\epsilon = 10^{-6}$.
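For concreteness, the data of this example can be set up in MATLAB as follows. This is a sketch; the sign pattern of $c$ follows the original Nash–Cournot example in [29] and is our reconstruction from it:

    % Example 2 data (Nash-Cournot type equilibrium model from [29]).
    A = [3.1 2 0 0 0; 2 3.6 0 0 0; 0 0 3.5 2 0; 0 0 2 3.3 0; 0 0 0 0 3];
    B = [1.6 1 0 0 0; 1 1.6 0 0 0; 0 0 1.5 1 0; 0 0 1 1.5 0; 0 0 0 0 2];
    c = [1; -2; -1; 2; -1];
    R  = @(s, m) (A*s + B*m + c)'*(m - s);   % bifunction R(s,m)
    c1 = 0.5*norm(A - B);  c2 = c1;          % Lipschitz-type constants
    P_Sigma = @(x) min(max(x, -5), 5);       % projection onto [-5,5]^5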
Example 3.
Consider the Hilbert space $\Pi = L^2([0, 1])$ with norm
$$\|s\| = \Big(\int_0^1 |s(t)|^2\,dt\Big)^{1/2},$$
where the inner product is
$$\langle s, m\rangle = \int_0^1 s(t)m(t)\,dt, \quad \forall s, m \in \Pi.$$
Suppose that the unit ball is $\Sigma := \{s \in L^2([0, 1]) : \|s\| \leq 1\}$. Let us begin by defining the operator
$$G(s)(t) = \int_0^1 \big(s(t) - H(t, u)F(s(u))\big)\,du + g(t),$$
where (writing $F$ for the inner integrand to avoid confusion with the bifunction $R$)
$$H(t, u) = \frac{2tue^{t+u}}{e\sqrt{e^2 - 1}}, \quad F(v) = \cos v, \quad g(t) = \frac{2te^t}{e\sqrt{e^2 - 1}}.$$
As illustrated in [45], $G$ is monotone and L-Lipschitz continuous with $L = 2$. Figures 5 and 6 and Tables 5 and 6 illustrate the numerical results with $s_1 = t$ and $\epsilon = 10^{-6}$.
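Since the iterates live in $L^2([0, 1])$, a practical implementation discretizes the integral operator. A trapezoidal-rule sketch (ours, with an assumed grid size n) is:

    % Discretized Example 3: G(s)(t) = int_0^1 (s(t) - H(t,u)cos(s(u))) du + g(t)
    % on a uniform grid; note int_0^1 du = 1, so the first term is just s(t).
    n  = 200;  t = linspace(0, 1, n)';  h = 1/(n - 1);
    w  = h*ones(n, 1);  w([1 n]) = h/2;              % trapezoidal weights
    cE = exp(1)*sqrt(exp(2) - 1);
    H  = 2*(t.*exp(t))*(t.*exp(t))'/cE;              % H(t,u) = 2tu e^(t+u)/(e sqrt(e^2-1))
    g  = 2*t.*exp(t)/cE;
    G  = @(s) s - H*(w.*cos(s)) + g;                 % discretized operator
    L2norm  = @(s) sqrt(sum(w.*s.^2));               % discrete L2 norm
    P_Sigma = @(s) s*min(1, 1/max(L2norm(s), eps));  % projection onto the unit ball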
Discussion of the Numerical Experiments: The following conclusions may be drawn from the experiments outlined above. (i) Examples 1–3 report data for several methods in both finite- and infinite-dimensional settings; in practically all circumstances, the proposed algorithms outperform the existing techniques in terms of number of iterations and elapsed time. (ii) In most cases, the scale of the problem and the chosen tolerance influence the effectiveness of the algorithms. (iii) An occasionally unsuitable variable step size produces a hump in the convergence graphs in all examples; it has no impact on the overall effectiveness of the algorithms. (iv) For large-dimensional problems, all approaches typically took longer and showed significant variation in execution time, whereas the number of iterations varied considerably less.

6. Conclusions

This paper provides two explicit extragradient-like methods for solving equilibrium problems involving a pseudomonotone, Lipschitz-type bifunction in a real Hilbert space. A new step-size rule has been presented that does not rely on knowledge of the Lipschitz-type constants, and the strong convergence of both algorithms has been established. Several tests are presented to show the numerical behavior of our two algorithms and to compare them with well-known methods from the literature.

Author Contributions

Conceptualization, M.S., W.K. and H.u.R.; methodology, M.S., H.u.R. and W.K.; software, M.S., W.K. and K.S.; validation, H.u.R., K.S. and W.K.; formal analysis, W.K., H.u.R. and K.S.; investigation, H.u.R., K.S. and W.K.; writing—original draft preparation, M.S., W.K. and H.u.R.; writing—review and editing, W.K., H.u.R. and K.S.; visualization, M.S., W.K. and H.u.R.; supervision and funding, W.K. and K.S. All authors have read and agreed to the published version of the manuscript.

Funding

Thailand Science Research and Innovation (TSRI) and Rajamangala University of Technology Thanyaburi (RMUTT) under the National Science, Research and Innovation Fund (NSRF); Basic Research Fund: Fiscal year 2022 (Contract No. FRB650070/0168, project number FRB65E0632M.1).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors acknowledge the financial support provided by Thailand Science Research and Innovation (TSRI) and Rajamangala University of Technology Thanyaburi (RMUTT) under National Science, Research and Innovation Fund (NSRF); Basic Research Fund: Fiscal year 2022 (Contract No. FRB650070/0168 and under project number FRB65E0632M.1).

Conflicts of Interest

The authors declare that they have no competing interests.

References

  1. Blum, E. From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63, 123–145.
  2. Fan, K. A Minimax Inequality and Applications. In Inequalities III; Shisha, O., Ed.; Academic Press: New York, NY, USA, 1972.
  3. Bigi, G.; Castellani, M.; Pappalardo, M.; Passacantando, M. Existence and solution methods for equilibria. Eur. J. Oper. Res. 2013, 227, 1–11.
  4. Muu, L.; Oettli, W. Convergence of an adaptive penalty scheme for finding constrained equilibria. Nonlinear Anal. Theory Methods Appl. 1992, 18, 1159–1166.
  5. Hung, P.G.; Muu, L.D. The Tikhonov regularization extended to equilibrium problems involving pseudomonotone bifunctions. Nonlinear Anal. Theory Methods Appl. 2011, 74, 6121–6129.
  6. Konnov, I. Application of the Proximal Point Method to Nonmonotone Equilibrium Problems. J. Optim. Theory Appl. 2003, 119, 317–333.
  7. Moudafi, A. Proximal point algorithm extended to equilibrium problems. J. Nat. Geom. 1999, 15, 91–100.
  8. Oliveira, P.; Santos, P.; Silva, A. A Tikhonov-type regularization for equilibrium problems in Hilbert spaces. J. Math. Anal. Appl. 2013, 401, 336–342.
  9. Rehman, H.U.; Kumam, P.; Shutaywi, M.; Pakkaranang, N.; Wairojjana, N. An Inertial Extragradient Method for Iteratively Solving Equilibrium Problems in Real Hilbert Spaces. Int. J. Comput. Math. 2021, 99, 1081–1104.
  10. Rehman, H.u.; Pakkaranang, N.; Hussain, A.; Wairojjana, N. A modified extra-gradient method for a family of strongly pseudomonotone equilibrium problems in real Hilbert spaces. J. Math. Comput. Sci. 2021, 22, 38–48.
  11. Wairojjana, N.; Pakkaranang, N.; Pholasa, N. Strong convergence inertial projection algorithm with self-adaptive step size rule for pseudomonotone variational inequalities in Hilbert spaces. Demonstr. Math. 2021, 54, 110–128.
  12. Rehman, H.U.; Kumam, P.; Abubakar, A.B.; Cho, Y.J. The extragradient algorithm with inertial effects extended to equilibrium problems. Comput. Appl. Math. 2020, 39, 100.
  13. Rehman, H.U.; Kumam, P.; Kumam, W.; Shutaywi, M.; Jirakitpuwapat, W. The Inertial Sub-Gradient Extra-Gradient Method for a Class of Pseudo-Monotone Equilibrium Problems. Symmetry 2020, 12, 463.
  14. Konnov, I.V. Regularization method for nonmonotone equilibrium problems. J. Nonlinear Convex Anal. 2009, 10, 93–101.
  15. Konnov, I.V. Partial proximal point method for nonmonotone equilibrium problems. Optim. Methods Softw. 2006, 21, 373–384.
  16. Singh, A.; Shukla, A.; Vijayakumar, V.; Udhayakumar, R. Asymptotic stability of fractional order (1,2] stochastic delay differential equations in Banach spaces. Chaos Solitons Fractals 2021, 150, 111095.
  17. Patel, R.; Shukla, A.; Jadon, S.S. Existence and optimal control problem for semilinear fractional order (1,2] control system. Math. Methods Appl. Sci. 2020.
  18. Shukla, A.; Sukavanam, N.; Pandey, D. Controllability of Semilinear Stochastic System with Multiple Delays in Control. IFAC Proc. Vol. 2014, 47, 306–312.
  19. Shukla, A.; Patel, R. Existence and Optimal Control Results for Second-Order Semilinear System in Hilbert Spaces. Circuits Syst. Signal Process. 2021, 40, 4246–4258.
  20. Van Hieu, D.; Duong, H.N.; Thai, B. Convergence of relaxed inertial methods for equilibrium problems. J. Appl. Numer. Optim. 2021, 3, 215–229.
  21. Ogbuisi, F.U. The projection method with inertial extrapolation for solving split equilibrium problems in Hilbert spaces. Appl. Set-Valued Anal. Optim. 2021, 3, 239–255.
  22. Liu, L.; Cho, S.Y.; Yao, J.C. Convergence Analysis of an Inertial Tseng’s Extragradient Algorithm for Solving Pseudomonotone Variational Inequalities and Applications. J. Nonlinear Var. Anal. 2021, 5, 627–644.
  23. Rehman, H.u.; Kumam, P.; Argyros, I.K.; Shutaywi, M.; Shah, Z. Optimization Based Methods for Solving the Equilibrium Problems with Applications in Variational Inequality Problems and Solution of Nash Equilibrium Models. Mathematics 2020, 8, 822.
  24. Rehman, H.u.; Kumam, P.; Shutaywi, M.; Alreshidi, N.A.; Kumam, W. Inertial Optimization Based Two-Step Methods for Solving Equilibrium Problems with Applications in Variational Inequality Problems and Growth Control Equilibrium Models. Energies 2020, 13, 3292.
  25. Rehman, H.; Kumam, P.; Dong, Q.-L.; Peng, Y.; Deebani, W. A new Popov’s subgradient extragradient method for two classes of equilibrium programming in a real Hilbert space. Optimization 2020, 70, 2675–2710.
  26. Rehman, H.U.; Pakkaranang, N.; Kumam, P.; Cho, Y.J. Modified subgradient extragradient method for a family of pseudomonotone equilibrium problems in a real Hilbert space. J. Nonlinear Convex Anal. 2020, 21, 2011–2025.
  27. Flåm, S.D.; Antipin, A.S. Equilibrium programming using proximal-like algorithms. Math. Program. 1996, 78, 29–41.
  28. Korpelevich, G. The extragradient method for finding saddle points and other problems. Matecon 1976, 12, 747–756.
  29. Tran, D.Q.; Dung, M.L.; Nguyen, V.H. Extragradient algorithms extended to equilibrium problems. Optimization 2008, 57, 749–776.
  30. Hieu, D.V.; Quy, P.K.; Vy, L.V. Explicit iterative algorithms for solving equilibrium problems. Calcolo 2019, 56, 11.
  31. Yang, J.; Liu, H.; Liu, Z. Modified subgradient extragradient algorithms for solving monotone variational inequalities. Optimization 2018, 67, 2247–2258.
  32. Wang, S.; Zhang, Y.; Ping, P.; Cho, Y.; Guo, H. New extragradient methods with non-convex combination for pseudomonotone equilibrium problems with applications in Hilbert spaces. Filomat 2019, 33, 1677–1693.
  33. Censor, Y.; Gibali, A.; Reich, S. The Subgradient Extragradient Method for Solving Variational Inequalities in Hilbert Space. J. Optim. Theory Appl. 2010, 148, 318–335.
  34. Tiel, J.V. Convex Analysis: An Introductory Text, 1st ed.; Wiley: New York, NY, USA, 1984.
  35. Goebel, K.; Reich, S. Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings; Marcel Dekker: New York, NY, USA, 1984.
  36. Kreyszig, E. Introductory Functional Analysis with Applications, 1st ed.; Wiley Classics Library; Wiley: Hoboken, NJ, USA, 1989.
  37. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd ed.; CMS Books in Mathematics; Springer International Publishing: Berlin/Heidelberg, Germany, 2017.
  38. Xu, H.K. Another control condition in an iterative method for nonexpansive mappings. Bull. Aust. Math. Soc. 2002, 65, 109–113.
  39. Maingé, P.E. Strong Convergence of Projected Subgradient Methods for Nonsmooth and Nonstrictly Convex Minimization. Set-Valued Anal. 2008, 16, 899–912.
  40. Bianchi, M.; Schaible, S. Generalized monotone bifunctions and equilibrium problems. J. Optim. Theory Appl. 1996, 90, 31–43.
  41. Mastroeni, G. On Auxiliary Principle for Equilibrium Problems. In Nonconvex Optimization and Its Applications; Springer: Boston, MA, USA, 2003; pp. 289–298.
  42. Rehman, H.U.; Kumam, P.; Cho, Y.J.; Yordsorn, P. Weak convergence of explicit extragradient algorithms for solving equilibrium problems. J. Inequalities Appl. 2019, 2019, 282.
  43. Browder, F.; Petryshyn, W. Construction of fixed points of nonlinear mappings in Hilbert space. J. Math. Anal. Appl. 1967, 20, 197–228.
  44. Rehman, H.U.; Kumam, P.; Je Cho, Y.; Suleiman, Y.I.; Kumam, W. Modified Popov’s explicit iterative algorithms for solving pseudomonotone equilibrium problems. Optim. Methods Softw. 2020, 36, 82–113.
  45. Van Hieu, D.; Anh, P.K.; Muu, L.D. Modified hybrid projection methods for finding common solutions to variational inequality problems. Comput. Optim. Appl. 2017, 66, 75–96.
Figure 1. Algorithm 1 is compared to Algorithm 3.1 in [32].
Figure 2. Algorithm 2 is compared to Algorithm 3.2 in [32].
Figure 3. Algorithm 1 is compared to Algorithm 3.1 in [32].
Figure 4. Algorithm 2 is compared to Algorithm 3.2 in [32].
Figure 5. Algorithm 1 is compared to Algorithm 3.1 in [32].
Figure 6. Algorithm 2 is compared to Algorithm 3.2 in [32].
Table 1. Algorithm 1 is compared to Algorithm 3.1 in [32].

  λ0      Number of Iterations        CPU Time in Seconds
          Algo. 3.1    Algo. 1        Algo. 3.1    Algo. 1
  0.20    35           33             1.6799       1.6119
  0.50    -            30             -            1.4787
  0.70    -            25             -            1.1520
  1.00    -            22             -            1.0100

Table 2. Algorithm 2 is compared to Algorithm 3.2 in [32].

  λ0      Number of Iterations        CPU Time in Seconds
          Algo. 3.2    Algo. 2        Algo. 3.2    Algo. 2
  0.20    99           109            4.9391       5.3081
  0.50    -            84             -            4.0511
  0.70    -            72             -            3.2269
  1.00    -            64             -            2.9225

Table 3. Algorithm 1 is compared to Algorithm 3.1 in [32].

  λ0      Number of Iterations        CPU Time in Seconds
          Algo. 3.1    Algo. 1        Algo. 3.1    Algo. 1
  0.20    149          126            7.0363       5.7321
  0.50    -            114            -            5.0527
  0.70    -            107            -            4.8159
  1.00    -            101            -            4.5495

Table 4. Algorithm 2 is compared to Algorithm 3.2 in [32].

  λ0      Number of Iterations        CPU Time in Seconds
          Algo. 3.2    Algo. 2        Algo. 3.2    Algo. 2
  0.20    300          326            14.7791      14.2233
  0.50    -            273            -            13.7107
  0.70    -            236            -            11.7754
  1.00    -            211            -            11.2054

Table 5. Algorithm 1 is compared to Algorithm 3.1 in [32].

  λ0      Number of Iterations        CPU Time in Seconds
          Algo. 3.1    Algo. 1        Algo. 3.1    Algo. 1
  0.20    64           72             0.0174       0.0331
  0.50    -            61             -            0.0295
  0.70    -            53             -            0.0273
  1.00    -            47             -            0.0265

Table 6. Algorithm 2 is compared to Algorithm 3.2 in [32].

  λ0      Number of Iterations        CPU Time in Seconds
          Algo. 3.2    Algo. 2        Algo. 3.2    Algo. 2
  0.20    213          200            0.0313       0.0500
  0.50    -            181            -            0.0460
  0.70    -            177            -            0.0352
  1.00    -            155            -            0.0260
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

