 
 
Article

Strong Convergence of Extragradient-Type Method to Solve Pseudomonotone Variational Inequalities Problems

by
Nopparat Wairojjana
1,
Nuttapol Pakkaranang
2,
Habib ur Rehman
2,
Nattawut Pholasa
3,* and
Tiwabhorn Khanpanuk
4,*
1
Applied Mathematics Program, Faculty of Science and Technology, Valaya Alongkorn Rajabhat University under the Royal Patronage (VRU), 1 Moo 20 Phaholyothin Road, Klong Neung, Klong Luang, Pathumthani 13180, Thailand
2
Department of Mathematics, Faculty of Science, King Mongkut’s University of Technology Thonburi (KMUTT), Bangkok 10140, Thailand
3
School of Science, University of Phayao, Phayao 56000, Thailand
4
Department of Mathematics, Faculty of Science and Technology, Phetchabun Rajabhat University, Phetchabun 67000, Thailand
*
Authors to whom correspondence should be addressed.
Submission received: 23 August 2020 / Revised: 29 September 2020 / Accepted: 30 September 2020 / Published: 13 October 2020
(This article belongs to the Special Issue Fixed Point Theory and Its Related Topics II)

Abstract

A number of problems from mathematical programming, such as minimax problems, penalization methods and fixed-point problems, can be formulated as variational inequalities. Most of the techniques used to solve such problems are iterative, and in this paper we introduce a new extragradient-like method to solve variational inequality problems involving pseudomonotone operators in a real Hilbert space. The method uses a variable stepsize formula that is updated at each iteration based on the previous iterates. The key advantage of the method is that it works without prior knowledge of the Lipschitz constant. Strong convergence of the method is proved under mild conditions. Several numerical experiments are reported to illustrate the numerical behaviour of the method.

1. Introduction

In this article, we consider the classical variational inequality problem (VIP) [1,2] for an operator F : E → E, formulated in the following way:
Find u* ∈ K such that ⟨F(u*), y − u*⟩ ≥ 0, ∀ y ∈ K, (1)
where K is a nonempty, closed and convex subset of a real Hilbert space E. The inner product and induced norm on E are denoted by ⟨·,·⟩ and ‖·‖, respectively. Moreover, the sets of real and natural numbers are denoted by ℝ and ℕ, respectively. It is important to note that, for any ζ > 0, solving the problem (1) is equivalent to solving the following fixed-point problem:
Find an element u* ∈ K such that u* = P_K[u* − ζ F(u*)].
We assume that the following requirements have been fulfilled:
(B1)
The solution set of the problem (1), denoted by SVIP, is nonempty.
(B2)
The mapping F : E → E is pseudomonotone, i.e.,
⟨F(y₁), y₂ − y₁⟩ ≥ 0 ⟹ ⟨F(y₂), y₁ − y₂⟩ ≤ 0, ∀ y₁, y₂ ∈ K.
(B3)
The mapping F : E → E is Lipschitz continuous, i.e., there exists L > 0 such that
‖F(y₁) − F(y₂)‖ ≤ L‖y₁ − y₂‖, ∀ y₁, y₂ ∈ K.
(B4)
The mapping F : E → E is sequentially weakly continuous, i.e., {F(u_n)} converges weakly to F(u) whenever {u_n} converges weakly to u.
The concept of variational inequalities has been used as a powerful tool to study different subjects, e.g., physics, engineering, economics and optimization theory. The problem (1) was first introduced by Stampacchia [1] in 1964, who also showed that it is a crucial problem in nonlinear analysis. It is an efficient mathematical framework that unifies several key topics of applied mathematics, e.g., network equilibrium problems, necessary optimality conditions, complementarity problems and systems of non-linear equations (for more details, see [3,4,5,6,7,8,9]). On the other hand, projection methods are important for finding numerical solutions of variational inequalities. Many authors have proposed and studied different projection methods to solve variational inequality problems (see [10,11,12,13,14,15,16,17,18,19,20] for more details, and others in [21,22,23,24,25,26,27,28,29,30,31,32]). In particular, Korpelevich [10] and Antipin [33] introduced the following extragradient method:
u_0 ∈ K, v_n = P_K[u_n − ζ F(u_n)], u_{n+1} = P_K[u_n − ζ F(v_n)].
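In an implementation, each iteration is just two projections and two operator evaluations. The following Python sketch (our own illustration, not from the paper) applies the extragradient iteration to a toy monotone operator F(u) = Au over a box constraint, for which the solution of the corresponding variational inequality is u* = 0:

```python
import numpy as np

def project_box(u, lo, hi):
    # Euclidean projection onto the box {x : lo <= x_i <= hi}
    return np.clip(u, lo, hi)

def extragradient(F, u0, zeta, lo, hi, iters=200):
    # Korpelevich's two-projection iteration:
    #   v_n = P_K[u_n - zeta F(u_n)],  u_{n+1} = P_K[u_n - zeta F(v_n)]
    u = u0.astype(float)
    for _ in range(iters):
        v = project_box(u - zeta * F(u), lo, hi)
        u = project_box(u - zeta * F(v), lo, hi)
    return u

# Toy example: F(u) = A u with A positive definite (hence monotone); over the
# box [-1, 1]^2 the unique solution of the VIP is u* = 0.  The fixed stepsize
# must satisfy zeta < 1/L, where L = 3 is the Lipschitz constant of F here.
A = np.array([[2.0, 1.0], [1.0, 2.0]])
u_star = extragradient(lambda u: A @ u, np.array([0.9, -0.7]),
                       zeta=0.2, lo=-1.0, hi=1.0)
```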
Recently, the subgradient extragradient algorithm was established by Censor et al. [12] for solving problem (1) in a real Hilbert space. Their method has the form
u_0 ∈ K, v_n = P_K[u_n − ζ F(u_n)], u_{n+1} = P_{E_n}[u_n − ζ F(v_n)],
where E_n = {z ∈ E : ⟨u_n − ζ F(u_n) − v_n, z − v_n⟩ ≤ 0}. Migórski et al. [34] proposed a viscosity-type subgradient extragradient method to solve monotone variational inequality problems. Its main contribution is the presence of a viscosity scheme, which is used to improve the convergence rate of the iterative sequence and to obtain a strong convergence theorem. The iterative sequence {u_n} is generated in the following way: (i) Let u_0 ∈ K, μ ∈ (0, 1), ζ_0 > 0 and a sequence γ_n ⊂ (0, 1) with γ_n → 0 and Σ_n γ_n = +∞. (ii) Compute
v_n = P_K[u_n − ζ_n F(u_n)], w_n = P_{E_n}[u_n − ζ_n F(v_n)], u_{n+1} = γ_n f(u_n) + (1 − γ_n) w_n,
where
E_n = {z ∈ E : ⟨u_n − ζ_n F(u_n) − v_n, z − v_n⟩ ≤ 0}.
(iii) Update the stepsize in the following way:
ζ_{n+1} = min{ ζ_n, μ‖u_n − v_n‖ / ‖F(u_n) − F(v_n)‖ } if F(u_n) ≠ F(v_n), and ζ_{n+1} = ζ_n otherwise.
In this paper, inspired by the iterative methods in [12,16,35,36], a modified subgradient extragradient algorithm is proposed for solving variational inequality problems involving pseudomonotone mappings in a real Hilbert space. In contrast to the results of Migórski et al. [34], our algorithm can solve pseudomonotone variational inequalities. As in [34], strong convergence of the proposed algorithm is proved without knowing the Lipschitz constant of the operator F. The proposed algorithm can be seen as a modification of the methods that appear in [10,12,34,35,36]. Under mild conditions, a strong convergence theorem is proved. Numerical experiments show that the new approach tends to be more successful than the existing one [34].
The rest of this article is organized as follows: Section 2 contains some definitions and basic results that are used throughout the paper. Section 3 contains our main algorithm and a strong convergence theorem. Section 4 presents numerical results showing the efficacy of the proposed method.

2. Preliminaries

This section contains useful lemmas and basic identities that are used throughout the article. The metric projection P_K(u₁) of u₁ ∈ E onto a closed and convex subset K of E is defined by
P_K(u₁) = arg min{‖u₂ − u₁‖ : u₂ ∈ K}.
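As a concrete illustration (our own, not from the paper), here are two standard metric projections in Python, together with a numerical check of the variational characterization ⟨u₁ − P_K(u₁), u₂ − P_K(u₁)⟩ ≤ 0 for all u₂ ∈ K (Lemma 1 (ii) below):

```python
import numpy as np

def project_ball(u, center, radius):
    # Euclidean projection onto the closed ball B(center, radius)
    d = u - center
    dist = np.linalg.norm(d)
    return u.copy() if dist <= radius else center + (radius / dist) * d

def project_box(u, lo, hi):
    # Euclidean projection onto the box {x : lo <= x_i <= hi}
    return np.clip(u, lo, hi)

u1 = np.array([3.0, 4.0])
p = project_ball(u1, center=np.zeros(2), radius=1.0)  # -> (0.6, 0.8)

# Sample points u2 of the unit ball and verify <u1 - p, u2 - p> <= 0
rng = np.random.default_rng(1)
samples = [u2 for u2 in rng.uniform(-1, 1, (200, 2))
           if np.linalg.norm(u2) <= 1]
ok = all((u1 - p) @ (u2 - p) <= 1e-12 for u2 in samples)
```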
Lemma 1.
[37,38] Assume that K is a nonempty, closed and convex subset of a real Hilbert space E and P_K : E → K is the metric projection from E onto K.
(i) Let u₁ ∈ K and u₂ ∈ E; we have
‖u₁ − P_K(u₂)‖² + ‖P_K(u₂) − u₂‖² ≤ ‖u₁ − u₂‖².
(ii) u₃ = P_K(u₁) if and only if
⟨u₁ − u₃, u₂ − u₃⟩ ≤ 0, ∀ u₂ ∈ K.
(iii) For u₂ ∈ K and u₁ ∈ E,
‖u₁ − P_K(u₁)‖ ≤ ‖u₁ − u₂‖.
Lemma 2.
[37] For u, v ∈ E and ϖ ∈ ℝ, the following hold:
(i) ‖ϖu + (1 − ϖ)v‖² = ϖ‖u‖² + (1 − ϖ)‖v‖² − ϖ(1 − ϖ)‖u − v‖².
(ii) ‖u + v‖² ≤ ‖u‖² + 2⟨v, u + v⟩.
Lemma 3.
[39] Assume that {χ_n} is a sequence of non-negative real numbers satisfying
χ_{n+1} ≤ (1 − τ_n)χ_n + τ_n δ_n, ∀ n ∈ ℕ,
where {τ_n} ⊂ (0, 1) and {δ_n} ⊂ ℝ satisfy the following conditions:
lim_{n→∞} τ_n = 0, Σ_{n=1}^∞ τ_n = ∞, and lim sup_{n→∞} δ_n ≤ 0.
Then, lim_{n→∞} χ_n = 0.
Lemma 4.
[40] Assume that {χ_n} is a sequence of real numbers such that there exists a subsequence {n_i} of {n} with χ_{n_i} < χ_{n_i + 1} for all i ∈ ℕ. Then, there exists a nondecreasing sequence {m_k} ⊂ ℕ such that m_k → ∞ as k → ∞, and the following conditions are fulfilled for all (sufficiently large) k ∈ ℕ:
χ_{m_k} ≤ χ_{m_k + 1} and χ_k ≤ χ_{m_k + 1}.
In fact, m_k = max{j ≤ k : χ_j ≤ χ_{j+1}}.
Lemma 5.
[41] Assume that F : K → E is a pseudomonotone and continuous mapping. Then, u* is a solution of the problem (1) if and only if u* is a solution of the following problem:
Find x ∈ K such that ⟨F(y), y − x⟩ ≥ 0, ∀ y ∈ K.

3. Main Results

We provide a method consisting of two convex minimization problems combined with a viscosity scheme and an explicit stepsize formula, which are used to improve the convergence rate of the iterative sequence and to make the method independent of the Lipschitz constant. The detailed method is given in Algorithm 1.
Algorithm 1 (Explicit method for pseudomonotone variational inequalities problems).
Step 0: Let u_0 ∈ K, μ ∈ (0, 1), ζ_0 > 0 and let γ_n ⊂ (0, 1) be a sequence satisfying
lim_{n→∞} γ_n = 0 and Σ_n γ_n = +∞.
Step 1: Evaluate
v_n = P_K[u_n − ζ_n F(u_n)].
   If u_n = v_n, STOP. Otherwise, go to Step 2.
Step 2: Evaluate
w_n = P_{E_n}[u_n − ζ_n F(v_n)],
   where E_n = {z ∈ E : ⟨u_n − ζ_n F(u_n) − v_n, z − v_n⟩ ≤ 0}.
Step 3: Compute
u_{n+1} = γ_n f(u_n) + (1 − γ_n) w_n.
Step 4: Evaluate
ζ_{n+1} = min{ ζ_n, (μ‖u_n − v_n‖² + μ‖w_n − v_n‖²) / (2⟨F(u_n) − F(v_n), w_n − v_n⟩) } if ⟨F(u_n) − F(v_n), w_n − v_n⟩ > 0, and ζ_{n+1} = ζ_n otherwise.
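For readers who wish to experiment, the following Python sketch is our own illustrative translation of Algorithm 1 for a box feasible set (where P_K is a componentwise clip and P_{E_n} is a half-space projection); the toy operator and viscosity map are our assumptions, while μ, ζ_0 and γ_n mirror the values used in Section 4:

```python
import numpy as np

def project_box(u, lo, hi):
    # P_K for the box K = {x : lo <= x_i <= hi}
    return np.clip(u, lo, hi)

def project_halfspace(z, a, b):
    # Projection onto the half-space {z : <a, z> <= b}
    viol = a @ z - b
    return z if viol <= 0 else z - (viol / (a @ a)) * a

def algorithm1(F, f, u0, lo, hi, zeta0=0.33, mu=0.25, iters=500):
    u, zeta = u0.astype(float), zeta0
    for n in range(iters):
        gamma = 1.0 / (100 * (n + 2))              # gamma_n -> 0, sum = infinity
        v = project_box(u - zeta * F(u), lo, hi)   # Step 1
        if np.allclose(u, v):
            return u                               # u solves the VIP
        a = u - zeta * F(u) - v                    # E_n = {z : <a, z - v> <= 0}
        w = project_halfspace(u - zeta * F(v), a, a @ v)   # Step 2
        u_next = gamma * f(u) + (1 - gamma) * w    # Step 3 (viscosity step)
        s = (F(u) - F(v)) @ (w - v)                # Step 4: stepsize update
        if s > 0:
            zeta = min(zeta, mu * (np.linalg.norm(u - v) ** 2
                                   + np.linalg.norm(w - v) ** 2) / (2 * s))
        u = u_next
    return u

# Toy run: strongly monotone F(u) = A u over [-1, 1]^2 with viscosity map
# f(u) = u/2; the unique solution of the VIP is u* = 0.
A = np.array([[2.0, 1.0], [1.0, 2.0]])
sol = algorithm1(lambda u: A @ u, lambda u: 0.5 * u,
                 np.array([0.9, -0.7]), lo=-1.0, hi=1.0)
```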
Lemma 6.
The stepsize sequence {ζ_n} is monotonically decreasing with lower bound min{μ/L, ζ_0} and converges to some ζ > 0.
Proof. 
Let ⟨F(u_n) − F(v_n), w_n − v_n⟩ > 0. Then, using 2ab ≤ a² + b², the Cauchy–Schwarz inequality and the Lipschitz continuity of F, we have
(μ(‖u_n − v_n‖² + ‖w_n − v_n‖²)) / (2⟨F(u_n) − F(v_n), w_n − v_n⟩) ≥ (2μ‖u_n − v_n‖‖w_n − v_n‖) / (2‖F(u_n) − F(v_n)‖‖w_n − v_n‖) ≥ (2μ‖u_n − v_n‖‖w_n − v_n‖) / (2L‖u_n − v_n‖‖w_n − v_n‖) = μ/L.
Clearly, from the above we can conclude that {ζ_n} has the lower bound min{μ/L, ζ_0}. Moreover, since {ζ_n} is non-increasing and bounded below, there exists a real number ζ > 0 such that lim_{n→∞} ζ_n = ζ. □
Lemma 7.
Assume that F : E → E satisfies the conditions (B1)–(B4). For a given u* ∈ SVIP, we have
‖w_n − u*‖² ≤ ‖u_n − u*‖² − (1 − μ ζ_n/ζ_{n+1})‖u_n − v_n‖² − (1 − μ ζ_n/ζ_{n+1})‖w_n − v_n‖².
Proof. 
Consider that
‖w_n − u*‖² = ‖P_{E_n}[u_n − ζ_n F(v_n)] − u*‖²
= ‖P_{E_n}[u_n − ζ_n F(v_n)] + [u_n − ζ_n F(v_n)] − [u_n − ζ_n F(v_n)] − u*‖²
= ‖[u_n − ζ_n F(v_n)] − u*‖² + ‖P_{E_n}[u_n − ζ_n F(v_n)] − [u_n − ζ_n F(v_n)]‖² + 2⟨P_{E_n}[u_n − ζ_n F(v_n)] − [u_n − ζ_n F(v_n)], [u_n − ζ_n F(v_n)] − u*⟩.
Given that u* ∈ SVIP ⊂ K ⊂ E_n, we get
‖P_{E_n}[u_n − ζ_n F(v_n)] − [u_n − ζ_n F(v_n)]‖² + ⟨P_{E_n}[u_n − ζ_n F(v_n)] − [u_n − ζ_n F(v_n)], [u_n − ζ_n F(v_n)] − u*⟩ = ⟨[u_n − ζ_n F(v_n)] − P_{E_n}[u_n − ζ_n F(v_n)], u* − P_{E_n}[u_n − ζ_n F(v_n)]⟩ ≤ 0,
which implies that
⟨P_{E_n}[u_n − ζ_n F(v_n)] − [u_n − ζ_n F(v_n)], [u_n − ζ_n F(v_n)] − u*⟩ ≤ −‖P_{E_n}[u_n − ζ_n F(v_n)] − [u_n − ζ_n F(v_n)]‖².
Using expressions (6) and (8), we obtain
‖w_n − u*‖² ≤ ‖u_n − ζ_n F(v_n) − u*‖² − ‖P_{E_n}[u_n − ζ_n F(v_n)] − [u_n − ζ_n F(v_n)]‖² ≤ ‖u_n − u*‖² − ‖u_n − w_n‖² + 2ζ_n⟨F(v_n), u* − w_n⟩.
Since u* is a solution of problem (1), we have
⟨F(u*), y − u*⟩ ≥ 0, for all y ∈ K.
Due to the pseudomonotonicity of F on K, we get
⟨F(y), y − u*⟩ ≥ 0, for all y ∈ K.
By substituting y = v_n ∈ K, we get
⟨F(v_n), v_n − u*⟩ ≥ 0.
Thus, we have
⟨F(v_n), u* − w_n⟩ = ⟨F(v_n), u* − v_n⟩ + ⟨F(v_n), v_n − w_n⟩ ≤ ⟨F(v_n), v_n − w_n⟩.
Combining expressions (9) and (10), we obtain
‖w_n − u*‖² ≤ ‖u_n − u*‖² − ‖u_n − w_n‖² + 2ζ_n⟨F(v_n), v_n − w_n⟩
≤ ‖u_n − u*‖² − ‖(u_n − v_n) + (v_n − w_n)‖² + 2ζ_n⟨F(v_n), v_n − w_n⟩
= ‖u_n − u*‖² − ‖u_n − v_n‖² − ‖v_n − w_n‖² + 2⟨u_n − ζ_n F(v_n) − v_n, w_n − v_n⟩.
Since w_n = P_{E_n}[u_n − ζ_n F(v_n)] ∈ E_n and by the definition of ζ_{n+1}, we have
2⟨u_n − ζ_n F(v_n) − v_n, w_n − v_n⟩ = 2⟨u_n − ζ_n F(u_n) − v_n, w_n − v_n⟩ + 2ζ_n⟨F(u_n) − F(v_n), w_n − v_n⟩ ≤ 2(ζ_n/ζ_{n+1}) ζ_{n+1}⟨F(u_n) − F(v_n), w_n − v_n⟩ ≤ (ζ_n/ζ_{n+1})(μ‖u_n − v_n‖² + μ‖w_n − v_n‖²).
Combining expressions (11) and (12), we obtain
‖w_n − u*‖² ≤ ‖u_n − u*‖² − ‖u_n − v_n‖² − ‖v_n − w_n‖² + (ζ_n/ζ_{n+1})(μ‖u_n − v_n‖² + μ‖w_n − v_n‖²)
= ‖u_n − u*‖² − (1 − μ ζ_n/ζ_{n+1})‖u_n − v_n‖² − (1 − μ ζ_n/ζ_{n+1})‖w_n − v_n‖². □
Lemma 8.
Suppose that conditions (B1)–(B4) hold. Let {u_n} be the sequence generated by Algorithm 1. If there is a subsequence {u_{n_k}} which converges weakly to û ∈ E and lim_{n→∞}‖u_n − v_n‖ = 0, then û ∈ SVIP.
Proof. 
We have
v_{n_k} = P_K[u_{n_k} − ζ_{n_k} F(u_{n_k})],
which is equivalent to
⟨u_{n_k} − ζ_{n_k} F(u_{n_k}) − v_{n_k}, y − v_{n_k}⟩ ≤ 0, ∀ y ∈ K.
From expression (15), we can write
⟨u_{n_k} − v_{n_k}, y − v_{n_k}⟩ ≤ ζ_{n_k}⟨F(u_{n_k}), y − v_{n_k}⟩, ∀ y ∈ K.
Therefore, we get
(1/ζ_{n_k})⟨u_{n_k} − v_{n_k}, y − v_{n_k}⟩ + ⟨F(u_{n_k}), v_{n_k} − u_{n_k}⟩ ≤ ⟨F(u_{n_k}), y − u_{n_k}⟩, ∀ y ∈ K.
The boundedness of the sequence {u_{n_k}} implies that {F(u_{n_k})} is also bounded. Using the facts lim_{k→∞}‖u_{n_k} − v_{n_k}‖ = 0 and lim_{k→∞} ζ_{n_k} = ζ > 0 and taking the limit as k → ∞ in (17), we get
lim inf_{k→∞} ⟨F(u_{n_k}), y − u_{n_k}⟩ ≥ 0, ∀ y ∈ K.
Moreover, we have
⟨F(v_{n_k}), y − v_{n_k}⟩ = ⟨F(v_{n_k}) − F(u_{n_k}), y − u_{n_k}⟩ + ⟨F(u_{n_k}), y − u_{n_k}⟩ + ⟨F(v_{n_k}), u_{n_k} − v_{n_k}⟩.
Since lim_{k→∞}‖u_{n_k} − v_{n_k}‖ = 0 and F is L-Lipschitz continuous on E, we get
lim_{k→∞}‖F(u_{n_k}) − F(v_{n_k})‖ = 0.
From (19) and (20), we obtain
lim inf_{k→∞} ⟨F(v_{n_k}), y − v_{n_k}⟩ ≥ 0, ∀ y ∈ K.
Next, we show that û ∈ SVIP; for brevity, write u* := û for the weak limit of {u_{n_k}}. We choose a decreasing sequence {ϵ_k} of positive numbers tending to 0. For each k, we denote by m_k the smallest positive integer such that
⟨F(u_{n_i}), y − u_{n_i}⟩ + ϵ_k ≥ 0, ∀ i ≥ m_k.
Since {ϵ_k} is decreasing, the sequence {m_k} is increasing.
Case 1: If there is a subsequence {u_{n_{m_{k_j}}}} of {u_{n_{m_k}}} such that F(u_{n_{m_{k_j}}}) = 0 (∀ j), then, letting j → ∞, we obtain
⟨F(u*), y − u*⟩ = lim_{j→∞} ⟨F(u_{n_{m_{k_j}}}), y − u*⟩ = 0.
Hence u* ∈ K is a solution, and therefore u* ∈ SVIP.
Case 2: There exists N₀ ∈ ℕ such that F(u_{n_{m_k}}) ≠ 0 for all n_{m_k} ≥ N₀. Set
Θ_{n_{m_k}} = F(u_{n_{m_k}}) / ‖F(u_{n_{m_k}})‖², ∀ n_{m_k} ≥ N₀.
Due to the above definition, we obtain
⟨F(u_{n_{m_k}}), Θ_{n_{m_k}}⟩ = 1, ∀ n_{m_k} ≥ N₀.
From (18) and (25), for all n_{m_k} ≥ N₀, we have
⟨F(u_{n_{m_k}}), y + ϵ_k Θ_{n_{m_k}} − u_{n_{m_k}}⟩ ≥ 0.
Due to the pseudomonotonicity of F, for n_{m_k} ≥ N₀, we obtain
⟨F(y + ϵ_k Θ_{n_{m_k}}), y + ϵ_k Θ_{n_{m_k}} − u_{n_{m_k}}⟩ ≥ 0.
For all n_{m_k} ≥ N₀, we have
⟨F(y), y − u_{n_{m_k}}⟩ ≥ ⟨F(y) − F(y + ϵ_k Θ_{n_{m_k}}), y + ϵ_k Θ_{n_{m_k}} − u_{n_{m_k}}⟩ − ϵ_k⟨F(y), Θ_{n_{m_k}}⟩.
Since {u_{n_k}} converges weakly to u* ∈ K and F is sequentially weakly continuous on K, {F(u_{n_k})} converges weakly to F(u*). We may suppose that F(u*) ≠ 0 (otherwise, u* is a solution). Since the norm mapping is sequentially weakly lower semicontinuous, we have
‖F(u*)‖ ≤ lim inf_{k→∞} ‖F(u_{n_k})‖.
Since {u_{n_{m_k}}} ⊂ {u_{n_k}} and lim_{k→∞} ϵ_k = 0, we have
0 ≤ lim_{k→∞} ‖ϵ_k Θ_{n_{m_k}}‖ = lim_{k→∞} (ϵ_k / ‖F(u_{n_{m_k}})‖) ≤ 0 / ‖F(u*)‖ = 0.
Now, letting k → ∞ in (28), we obtain
⟨F(y), y − u*⟩ ≥ 0, ∀ y ∈ K.
Applying Lemma 5, we deduce that u* ∈ SVIP. □
Theorem 1.
Assume that F : K → E satisfies the conditions (B1)–(B4), and let u* ∈ SVIP be the unique point satisfying u* = P_{SVIP}(f(u*)). Then, the sequences {u_n}, {v_n} and {w_n} generated by Algorithm 1 converge strongly to u*.
Proof. 
By using Lemma 7, we have
‖w_n − u*‖² ≤ ‖u_n − u*‖² − (1 − μ ζ_n/ζ_{n+1})‖u_n − v_n‖² − (1 − μ ζ_n/ζ_{n+1})‖w_n − v_n‖².
Since ζ_n → ζ, there exists a fixed number ϵ ∈ (0, 1 − μ) such that
lim_{n→∞} (1 − μ ζ_n/ζ_{n+1}) = 1 − μ > ϵ > 0.
Then, there exists a finite number N₁ ∈ ℕ such that
1 − μ ζ_n/ζ_{n+1} > ϵ > 0, ∀ n ≥ N₁.
Hence, we obtain
‖w_n − u*‖² ≤ ‖u_n − u*‖², ∀ n ≥ N₁.
From the definition of the sequence {u_{n+1}} and the fact that f is a contraction with constant ρ ∈ [0, 1), for n ≥ N₁ we obtain
‖u_{n+1} − u*‖ = ‖γ_n f(u_n) + (1 − γ_n)w_n − u*‖
= ‖γ_n[f(u_n) − f(u*)] + γ_n[f(u*) − u*] + (1 − γ_n)[w_n − u*]‖
≤ γ_n‖f(u_n) − f(u*)‖ + γ_n‖f(u*) − u*‖ + (1 − γ_n)‖w_n − u*‖
≤ γ_n ρ‖u_n − u*‖ + γ_n‖f(u*) − u*‖ + (1 − γ_n)‖w_n − u*‖.
From expressions (34) and (35) and γ_n ⊂ (0, 1), we obtain
‖u_{n+1} − u*‖ ≤ γ_n ρ‖u_n − u*‖ + γ_n‖f(u*) − u*‖ + (1 − γ_n)‖u_n − u*‖
= [1 − γ_n(1 − ρ)]‖u_n − u*‖ + γ_n(1 − ρ) · ‖f(u*) − u*‖/(1 − ρ)
≤ max{ ‖u_n − u*‖, ‖f(u*) − u*‖/(1 − ρ) }
≤ … ≤ max{ ‖u_{N₁} − u*‖, ‖f(u*) − u*‖/(1 − ρ) }.
Hence, we conclude that the sequence {u_n} is bounded. Moreover, the reflexivity of E and the boundedness of {u_n} guarantee that {u_n} has weakly convergent subsequences. Next, we prove the strong convergence of the iterative sequence {u_n} generated by Algorithm 1. The continuity and pseudomonotonicity of the operator F imply that the solution set SVIP is closed and convex (for more details, see [42,43]). Since the mapping f is a contraction, P_{SVIP} ∘ f is also a contraction. The Banach contraction principle guarantees the existence of a unique fixed point u* ∈ SVIP such that
u* = P_{SVIP}(f(u*)).
By using Lemma 1 (ii), we have
⟨f(u*) − u*, y − u*⟩ ≤ 0, ∀ y ∈ SVIP.
Since u_{n+1} = γ_n f(u_n) + (1 − γ_n)w_n, using Lemma 2 (i) and Lemma 7 we have
‖u_{n+1} − u*‖² = ‖γ_n[f(u_n) − u*] + (1 − γ_n)[w_n − u*]‖²
= γ_n‖f(u_n) − u*‖² + (1 − γ_n)‖w_n − u*‖² − γ_n(1 − γ_n)‖f(u_n) − w_n‖²
≤ γ_n‖f(u_n) − u*‖² + (1 − γ_n)[‖u_n − u*‖² − (1 − μ ζ_n/ζ_{n+1})‖u_n − v_n‖² − (1 − μ ζ_n/ζ_{n+1})‖w_n − v_n‖²] − γ_n(1 − γ_n)‖f(u_n) − w_n‖²
≤ γ_n‖f(u_n) − u*‖² + ‖u_n − u*‖² − (1 − γ_n)(1 − μ ζ_n/ζ_{n+1})[‖w_n − v_n‖² + ‖u_n − v_n‖²].
The rest of the proof shall be divided into the following two parts:
Case 1: Assume that there exists a fixed number N₂ ∈ ℕ (N₂ ≥ N₁) such that
‖u_{n+1} − u*‖ ≤ ‖u_n − u*‖, ∀ n ≥ N₂.
Then, lim_{n→∞}‖u_n − u*‖ exists; let lim_{n→∞}‖u_n − u*‖ = l. From expression (38), we have
(1 − γ_n)(1 − μ ζ_n/ζ_{n+1})[‖w_n − v_n‖² + ‖u_n − v_n‖²] ≤ γ_n‖f(u_n) − u*‖² + ‖u_n − u*‖² − ‖u_{n+1} − u*‖².
Due to the existence of lim_{n→∞}‖u_n − u*‖ = l and γ_n → 0, we deduce that
lim_{n→∞}‖u_n − v_n‖ = lim_{n→∞}‖w_n − v_n‖ = 0.
From expression (41), we have
lim_{n→∞}‖u_n − w_n‖ ≤ lim_{n→∞}‖u_n − v_n‖ + lim_{n→∞}‖v_n − w_n‖ = 0.
It follows that
‖u_{n+1} − u_n‖ = ‖γ_n f(u_n) + (1 − γ_n)w_n − u_n‖ = ‖γ_n[f(u_n) − u_n] + (1 − γ_n)[w_n − u_n]‖ ≤ γ_n‖f(u_n) − u_n‖ + (1 − γ_n)‖w_n − u_n‖ → 0.
Thus, the sequences {u_n}, {v_n} and {w_n} are bounded, and we can take a subsequence {u_{n_k}} of {u_n} such that {u_{n_k}} converges weakly to some û ∈ E. Moreover, since ‖u_n − v_n‖ → 0, Lemma 8 gives û ∈ SVIP. Following expression (37), we consider
lim sup_{n→∞} ⟨f(u*) − u*, u_n − u*⟩ = lim sup_{k→∞} ⟨f(u*) − u*, u_{n_k} − u*⟩ = ⟨f(u*) − u*, û − u*⟩ ≤ 0.
We have lim_{n→∞}‖u_{n+1} − u_n‖ = 0. It follows from (44) that
lim sup_{n→∞} ⟨f(u*) − u*, u_{n+1} − u*⟩ ≤ lim sup_{n→∞} ⟨f(u*) − u*, u_{n+1} − u_n⟩ + lim sup_{n→∞} ⟨f(u*) − u*, u_n − u*⟩ ≤ 0.
From Lemma 2 (ii) and Lemma 7, for all n ≥ N₂, we get
‖u_{n+1} − u*‖² = ‖γ_n[f(u_n) − u*] + (1 − γ_n)[w_n − u*]‖²
≤ (1 − γ_n)²‖w_n − u*‖² + 2γ_n⟨f(u_n) − u*, u_{n+1} − u*⟩
= (1 − γ_n)²‖w_n − u*‖² + 2γ_n⟨f(u_n) − f(u*), u_{n+1} − u*⟩ + 2γ_n⟨f(u*) − u*, u_{n+1} − u*⟩
≤ (1 − γ_n)²‖w_n − u*‖² + 2γ_n ρ‖u_n − u*‖‖u_{n+1} − u*‖ + 2γ_n⟨f(u*) − u*, u_{n+1} − u*⟩
≤ (1 + γ_n² − 2γ_n)‖u_n − u*‖² + 2γ_n ρ‖u_n − u*‖² + 2γ_n⟨f(u*) − u*, u_{n+1} − u*⟩
= (1 − 2γ_n)‖u_n − u*‖² + γ_n²‖u_n − u*‖² + 2γ_n ρ‖u_n − u*‖² + 2γ_n⟨f(u*) − u*, u_{n+1} − u*⟩
= [1 − 2γ_n(1 − ρ)]‖u_n − u*‖² + 2γ_n(1 − ρ)[ γ_n‖u_n − u*‖²/(2(1 − ρ)) + ⟨f(u*) − u*, u_{n+1} − u*⟩/(1 − ρ) ].
It follows from expressions (45) and (46) that
lim sup_{n→∞} [ γ_n‖u_n − u*‖²/(2(1 − ρ)) + ⟨f(u*) − u*, u_{n+1} − u*⟩/(1 − ρ) ] ≤ 0.
Choose N₃ ∈ ℕ (N₃ ≥ N₂) large enough that 2γ_n(1 − ρ) < 1 for all n ≥ N₃. Then, using expressions (46) and (47) and applying Lemma 3, we conclude that ‖u_n − u*‖ → 0 as n → ∞.
Case 2: Suppose that there exists a subsequence {n_i} of {n} such that
‖u_{n_i} − u*‖ ≤ ‖u_{n_i + 1} − u*‖, ∀ i ∈ ℕ.
Then, by Lemma 4, there exists a sequence {m_k} ⊂ ℕ with m_k → ∞ such that
‖u_{m_k} − u*‖ ≤ ‖u_{m_k + 1} − u*‖ and ‖u_k − u*‖ ≤ ‖u_{m_k + 1} − u*‖, ∀ k ∈ ℕ.
As in Case 1, using (38), we have
(1 − γ_{m_k})(1 − μ ζ_{m_k}/ζ_{m_k + 1})[‖w_{m_k} − v_{m_k}‖² + ‖u_{m_k} − v_{m_k}‖²] ≤ γ_{m_k}‖f(u_{m_k}) − u*‖² + ‖u_{m_k} − u*‖² − ‖u_{m_k + 1} − u*‖².
Due to γ_{m_k} → 0 and 1 − μ ζ_{m_k}/ζ_{m_k + 1} → 1 − μ, we can deduce that
lim_{k→∞}‖u_{m_k} − v_{m_k}‖ = lim_{k→∞}‖w_{m_k} − v_{m_k}‖ = 0.
From expression (50), we have
lim_{k→∞}‖u_{m_k} − w_{m_k}‖ ≤ lim_{k→∞}‖u_{m_k} − v_{m_k}‖ + lim_{k→∞}‖v_{m_k} − w_{m_k}‖ = 0.
Hence, we obtain
‖u_{m_k + 1} − u_{m_k}‖ = ‖γ_{m_k}[f(u_{m_k}) − u_{m_k}] + (1 − γ_{m_k})[w_{m_k} − u_{m_k}]‖ ≤ γ_{m_k}‖f(u_{m_k}) − u_{m_k}‖ + (1 − γ_{m_k})‖w_{m_k} − u_{m_k}‖ → 0.
Using the same argument as in Case 1, we obtain
lim sup_{k→∞} ⟨f(u*) − u*, u_{m_k + 1} − u*⟩ ≤ 0.
Using (46) and (48), we have
‖u_{m_k + 1} − u*‖² ≤ [1 − 2γ_{m_k}(1 − ρ)]‖u_{m_k} − u*‖² + 2γ_{m_k}(1 − ρ)[ γ_{m_k}‖u_{m_k} − u*‖²/(2(1 − ρ)) + ⟨f(u*) − u*, u_{m_k + 1} − u*⟩/(1 − ρ) ]
≤ [1 − 2γ_{m_k}(1 − ρ)]‖u_{m_k + 1} − u*‖² + 2γ_{m_k}(1 − ρ)[ γ_{m_k}‖u_{m_k} − u*‖²/(2(1 − ρ)) + ⟨f(u*) − u*, u_{m_k + 1} − u*⟩/(1 − ρ) ].
It follows that
‖u_{m_k + 1} − u*‖² ≤ γ_{m_k}‖u_{m_k} − u*‖²/(2(1 − ρ)) + ⟨f(u*) − u*, u_{m_k + 1} − u*⟩/(1 − ρ).
Since γ_{m_k} → 0 and {‖u_{m_k} − u*‖} is bounded, expressions (53) and (55) imply that
‖u_{m_k + 1} − u*‖² → 0, as k → ∞.
From the inequality (48), we have
lim_{k→∞}‖u_k − u*‖² ≤ lim_{k→∞}‖u_{m_k + 1} − u*‖² ≤ 0.
Consequently, u_n → u*. This completes the proof of the theorem. □

4. Numerical Experiments

This section presents numerical investigations that demonstrate the efficiency of Algorithm 1 on four test problems, all of which are pseudomonotone. The MATLAB programs were run on a PC (Intel(R) Core(TM) i3-4010U CPU @ 1.70 GHz, 4.00 GB RAM) in MATLAB version 9.5 (R2018b). We use MATLAB's built-in quadratic programming solver to solve the minimization subproblems.
Example 1.
Consider the non-linear complementarity problem of Kojima–Shindo, where the feasible set K is defined by
K = {u ∈ ℝ⁴ : 1 ≤ u_i ≤ 5, i = 1, 2, 3, 4}.
The mapping F : ℝ⁴ → ℝ⁴ is defined by
F(u) = (u₁ + u₂ + u₃ + u₄ − 4u₂u₃u₄, u₁ + u₂ + u₃ + u₄ − 4u₁u₃u₄, u₁ + u₂ + u₃ + u₄ − 4u₁u₂u₄, u₁ + u₂ + u₃ + u₄ − 4u₁u₂u₃)ᵀ.
It is easy to see that F is not monotone on the set K. By using the Monte Carlo approach [44] — generating many pairs of points u and v uniformly in K satisfying ⟨F(u), v − u⟩ ≥ 0 and then checking whether ⟨F(v), v − u⟩ ≥ 0 — it can be shown that F is pseudomonotone on K. This problem has the solution u* = (5, 5, 5, 5)ᵀ. In this experiment, we take different initial points and D_n = ‖u_n − v_n‖. Moreover, the control parameters are ζ_0 = 0.33, μ = 0.25, γ_n = 1/(100(n + 2)) and the viscosity mapping f(u) = u/2 for Algorithm 1. The numerical results for the first example are shown in Table 1.
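The Monte Carlo pseudomonotonicity check described above can be sketched in Python as follows (our own illustration; the sample count and random seed are arbitrary, and the operator is as reconstructed above):

```python
import numpy as np

def F(u):
    # F_i(u) = u_1 + u_2 + u_3 + u_4 - 4 * prod_{j != i} u_j  (Example 1)
    return u.sum() - 4.0 * np.prod(u) / u   # valid since u_i >= 1 > 0 on K

rng = np.random.default_rng(0)
violations = 0
for _ in range(5000):
    u = rng.uniform(1.0, 5.0, 4)
    v = rng.uniform(1.0, 5.0, 4)
    # whenever <F(u), v - u> >= 0, pseudomonotonicity requires <F(v), v - u> >= 0
    if F(u) @ (v - u) >= 0 and F(v) @ (v - u) < 0:
        violations += 1

# At u* = (5, 5, 5, 5)^T every component of F(u*) equals 20 - 500 = -480,
# and y - u* <= 0 componentwise on K, so <F(u*), y - u*> >= 0 for all y in K.
u_star = np.full(4, 5.0)
```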
Example 2.
Consider the quadratic fractional programming problem in the following form [44]:
min f(u) = (uᵀQu + aᵀu + a₀)/(bᵀu + b₀), subject to u ∈ K = {u ∈ ℝ⁴ : bᵀu + b₀ > 0},
where
Q = [5 1 2 0; 1 5 1 3; 2 1 3 0; 0 3 0 5], a = (1, 2, 2, 1)ᵀ, b = (2, 1, 1, 0)ᵀ, a₀ = 2, and b₀ = 4.
It is easy to verify that Q is symmetric and positive definite on ℝ⁴ and consequently f is pseudo-convex on K; therefore, ∇f is pseudomonotone. Using the quotient rule, we obtain
∇f(u) = [(bᵀu + b₀)(2Qu + a) − (uᵀQu + aᵀu + a₀)b] / (bᵀu + b₀)².
From this point of view, we can set F = ∇f in Theorem 1. We minimize f over K = {u ∈ ℝ⁴ : 1 ≤ u_i ≤ 10, i = 1, 2, 3, 4}. This problem has a unique solution u* = (1, 1, 1, 1)ᵀ ∈ K. In this experiment, we take different initial points and D_n = ‖u_n − v_n‖. Moreover, the control parameters are ζ_0 = 0.33, μ = 0.25, γ_n = 1/(100(n + 2)) and the viscosity mapping f(u) = u/2 for Algorithm 1. The numerical results for the second example are shown in Table 2.
Example 3.
The third example was taken from [45], where F : ℝ² → ℝ² is defined by
F(u) = (0.5u₁u₂² − 2u₂ − 10⁷, −4u₁ + 0.1u₂² − 10⁷)ᵀ,
on K = {u ∈ ℝ² : (u₁ − 2)² + (u₂ − 2)² ≤ 1}. It is easy to see that F is Lipschitz continuous with L = 5 and that F is pseudomonotone but not monotone on K. The above problem has a unique solution u* = (2.707, 2.707)ᵀ. In this experiment, we take different initial points and D_n = ‖u_n − v_n‖. Moreover, the control parameters are ζ_0 = 0.33, μ = 0.25, γ_n = 1/(100(n + 2)) and the viscosity mapping f(u) = u/3 for Algorithm 1. The numerical results for the third example are shown in Table 3.
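The non-monotonicity of F on K can be witnessed numerically; in the following Python check (our own, with two hand-picked points of K, using the operator as reconstructed above), the monotonicity gap ⟨F(u) − F(v), u − v⟩ is negative:

```python
import numpy as np

def F(u):
    # Example 3 operator, as reconstructed above
    return np.array([0.5 * u[0] * u[1] ** 2 - 2.0 * u[1] - 1e7,
                     -4.0 * u[0] + 0.1 * u[1] ** 2 - 1e7])

# Two points inside K = {u : (u1 - 2)^2 + (u2 - 2)^2 <= 1}
u = np.array([2.1, 2.2])
v = np.array([1.9, 1.8])
mono_gap = (F(u) - F(v)) @ (u - v)   # negative, so F is not monotone on K
```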
Example 4.
The fourth example was taken from [45], where F : ℝ² → ℝ² is defined by
F(u) = ((u₁² + (u₂ − 1)²)(1 + u₂), −u₁³ − u₁(u₂ − 1)²)ᵀ,
where K = {u ∈ ℝ² : −10 ≤ u_i ≤ 10, i = 1, 2}. It is easy to see that F is Lipschitz continuous and pseudomonotone but not monotone on K. In this experiment, we take different initial points and D_n = ‖u_n − v_n‖. Moreover, the control parameters are ζ_0 = 0.33, μ = 0.25, γ_n = 1/(100(n + 2)) and the viscosity mapping f(u) = u/4 for Algorithm 1. The numerical results for the fourth example are shown in Table 4.

5. Conclusions

We have developed an extragradient-like method to solve pseudomonotone variational inequalities in a real Hilbert space. The method uses an explicit formula for an appropriate and effective stepsize at each step; the stepsize is updated at each iteration based on the previous iterations. Numerical investigations were presented to illustrate the numerical effectiveness of our algorithm relative to other methods. These numerical studies suggest that viscosity schemes of this kind generally improve the effectiveness of the iterative sequence.

Author Contributions

Data curation, N.W.; formal analysis, T.K.; funding acquisition, N.P. (Nuttapol Pakkaranang), N.P. (Nattawut Pholasa) and T.K.; investigation, N.W., N.P. (Nuttapol Pakkaranang) and T.K.; methodology, T.K.; project administration, H.u.R., N.P. (Nattawut Pholasa) and T.K.; resources, N.P. (Nattawut Pholasa) and T.K.; software, H.u.R.; supervision, H.u.R. and N.P. (Nattawut Pholasa); Writing—original draft, N.W. and H.u.R.; Writing—review and editing, N.P. (Nuttapol Pakkaranang). All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by University of Phayao and Phetchabun Rajabhat University.

Acknowledgments

We are very grateful to the editor and the anonymous referees for their valuable and useful comments, which helped improve the quality of this work. N. Wairojjana would like to thank Valaya Alongkorn Rajabhat University under the Royal Patronage (VRU). N. Pholasa would like to thank the University of Phayao. T. Khanpanuk would like to thank Phetchabun Rajabhat University.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Stampacchia, G. Formes bilinéaires coercitives sur les ensembles convexes. Comptes Rendus Hebd. Seances Acad. Sci. 1964, 258, 4413. [Google Scholar]
  2. Konnov, I.V. On systems of variational inequalities. Russ. Math. C/C Izv. Vyss. Uchebnye Zaved. Mat. 1997, 41, 77–86. [Google Scholar]
  3. Kassay, G.; Kolumbán, J.; Páles, Z. On Nash stationary points. Publ. Math. 1999, 54, 267–279. [Google Scholar]
  4. Kassay, G.; Kolumbán, J.; Páles, Z. Factorization of Minty and Stampacchia variational inequality systems. Eur. J. Oper. Res. 2002, 143, 377–389. [Google Scholar] [CrossRef]
  5. Kinderlehrer, D.; Stampacchia, G. An Introduction to Variational Inequalities and Their Applications; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2000. [Google Scholar] [CrossRef]
  6. Konnov, I. Equilibrium Models and Variational Inequalities; Elsevier: Amsterdam, The Netherlands, 2007; Volume 210. [Google Scholar]
  7. Elliott, C.M. Variational and Quasivariational Inequalities: Applications to Free-Boundary Problems (Claudio Baiocchi and António Capelo). SIAM Rev. 1987, 29, 314–315. [Google Scholar] [CrossRef]
  8. Nagurney, A. Network Economics: A Variational Inequality Approach; Springer: Dordrecht, The Netherlands, 1999. [Google Scholar]
  9. Takahashi, W. Introduction to Nonlinear and Convex Analysis; Yokohama Publishers: Yokohama, Japan, 2009. [Google Scholar]
  10. Korpelevich, G. The extragradient method for finding saddle points and other problems. Matecon 1976, 12, 747–756. [Google Scholar]
  11. Noor, M.A. Some iterative methods for nonconvex variational inequalities. Comput. Math. Model. 2010, 21, 97–108. [Google Scholar] [CrossRef]
  12. Censor, Y.; Gibali, A.; Reich, S. The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2010, 148, 318–335. [Google Scholar] [CrossRef] [Green Version]
  13. Censor, Y.; Gibali, A.; Reich, S. Extensions of Korpelevich’s extragradient method for the variational inequality problem in Euclidean space. Optimization 2012, 61, 1119–1132. [Google Scholar] [CrossRef]
  14. Malitsky, Y.V.; Semenov, V.V. An Extragradient Algorithm for Monotone Variational Inequalities. Cybern. Syst. Anal. 2014, 50, 271–277. [Google Scholar] [CrossRef]
  15. Tseng, P. A Modified Forward-Backward Splitting Method for Maximal Monotone Mappings. SIAM J. Control Optim. 2000, 38, 431–446. [Google Scholar] [CrossRef]
  16. Moudafi, A. Viscosity Approximation Methods for Fixed-Points Problems. J. Math. Anal. Appl. 2000, 241, 46–55. [Google Scholar] [CrossRef] [Green Version]
  17. Zhang, L.; Fang, C.; Chen, S. An inertial subgradient-type method for solving single-valued variational inequalities and fixed point problems. Numer. Algorithms 2018, 79, 941–956. [Google Scholar] [CrossRef]
  18. Iusem, A.N.; Svaiter, B.F. A variant of korpelevich’s method for variational inequalities with a new search strategy. Optimization 1997, 42, 309–321. [Google Scholar] [CrossRef]
  19. Thong, D.V.; Hieu, D.V. Modified subgradient extragradient method for variational inequality problems. Numer. Algorithms 2017, 79, 597–610. [Google Scholar] [CrossRef]
  20. Thong, D.V.; Hieu, D.V. Weak and strong convergence theorems for variational inequality problems. Numer. Algorithms 2017, 78, 1045–1060. [Google Scholar] [CrossRef]
Table 1. Numerical behaviour of Algorithm 1 using different starting points for Example 1.

TOL                10^-2   10^-3   10^-4   10^-5   10^-2      10^-3      10^-4      10^-5
u_0                Iter.   Iter.   Iter.   Iter.   Time (s)   Time (s)   Time (s)   Time (s)
[2, 2, 8, 10]^T    13      51      501     5001    0.079821   0.247776   3.251465   43.637834
[1, 1, 5, 6]^T     12      51      501     5001    0.083870   0.236924   2.684370   39.651178
[5, 2, 1, 2]^T     9       51      501     5001    0.065422   0.235173   3.034747   43.630625
[1, 2, 3, 4]^T     6       1004    1004    5001    0.040866   8.051234   6.686632   42.431705
Table 2. Numerical behaviour of Algorithm 1 using different starting points for Example 2.

TOL                   10^-2   10^-3   10^-4   10^-5   10^-2      10^-3      10^-4      10^-5
u_0                   Iter.   Iter.   Iter.   Iter.   Time (s)   Time (s)   Time (s)   Time (s)
[10, 10, 10, 10]^T    43      46      99      989     0.289149   0.249285   0.475520   8.480530
[10, 20, 30, 40]^T    41      46      99      989     0.211707   0.187559   0.445240   6.898924
[20, 20, 20, 20]^T    29      32      99      989     0.138575   0.169190   0.394654   7.168460
Table 3. Numerical behaviour of Algorithm 1 using different starting points for Example 3.

TOL           10^-2   10^-3   10^-4   10^-5   10^-2      10^-3      10^-4       10^-5
u_0           Iter.   Iter.   Iter.   Iter.   Time (s)   Time (s)   Time (s)    Time (s)
[0, 0]^T      8       27      265     2566    0.606917   1.907212   14.120655   107.506926
[10, 10]^T    7       27      265     2591    0.286659   1.057623   10.764532   116.258335
[5, 5]^T      8       26      258     2596    0.388227   1.190191   11.424257   107.584978
Table 4. Numerical behaviour of Algorithm 1 using different starting points for Example 4.

TOL           10^-2   10^-3   10^-4   10^-5   10^-2      10^-3     10^-4      10^-5
u_0           Iter.   Iter.   Iter.   Iter.   Time (s)   Time (s)  Time (s)   Time (s)
[0, 0]^T      16      220     2231    29253   0.21543    2.35401   29.86562   224.95083
[10, 10]^T    27      190     2072    25762   0.25322    2.64742   26.84528   198.26446
[5, 5]^T      43      411     3801    47891   0.78262    4.77116   42.41738   427.904781
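The iteration counts and runtimes above were produced by the authors' Algorithm 1, whose exact update rule is given earlier in the paper. As a rough illustration of the stepsize idea described in the abstract (an extragradient-type iteration whose stepsize is revised at each iteration from the previous iterates, so no Lipschitz constant is needed in advance), the following is a minimal sketch of a generic extragradient method with a self-adaptive stepsize. The operator F, the box constraint, the parameters, and the test problem are illustrative assumptions, not the paper's Algorithm 1 or its numerical examples.

```python
import numpy as np

def project_box(x, lo=-5.0, hi=5.0):
    """Projection onto the box C = [lo, hi]^n."""
    return np.clip(x, lo, hi)

def extragradient(F, proj, u0, lam0=0.5, mu=0.9, tol=1e-8, max_iter=10000):
    """Solve VI(F, C): find u* in C with <F(u*), v - u*> >= 0 for all v in C.

    The stepsize lam is shrunk adaptively from the last two operator
    evaluations, so no Lipschitz constant of F is required up front.
    """
    u, lam = np.asarray(u0, dtype=float), lam0
    for k in range(max_iter):
        Fu = F(u)
        y = proj(u - lam * Fu)        # prediction (extragradient) step
        r = np.linalg.norm(u - y)     # natural residual: zero iff u solves the VI
        if r <= tol:
            return u, k
        Fy = F(y)
        u = proj(u - lam * Fy)        # correction step
        d = np.linalg.norm(Fu - Fy)
        if d > 0:                     # self-adaptive stepsize update
            lam = min(lam, mu * r / d)
    return u, max_iter

# Illustrative affine monotone operator F(u) = M u + q on the box [-5, 5]^2;
# its unconstrained zero (-0.1, 0.3) lies inside the box, so it solves the VI.
M = np.array([[4.0, -2.0], [2.0, 4.0]])
q = np.array([1.0, -1.0])
u_star, iters = extragradient(lambda u: M @ u + q, project_box,
                              np.array([3.0, -3.0]))
```

Because F here is affine, the adaptive rule settles at lam = mu * r / d >= mu / ||M|| after one update, mimicking how a stepsize of this type stays bounded away from zero without ever computing the Lipschitz constant explicitly.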

Wairojjana, N.; Pakkaranang, N.; Rehman, H.u.; Pholasa, N.; Khanpanuk, T. Strong Convergence of Extragradient-Type Method to Solve Pseudomonotone Variational Inequalities Problems. Axioms 2020, 9, 115. https://0-doi-org.brum.beds.ac.uk/10.3390/axioms9040115