Article

Modified Viscosity Subgradient Extragradient-Like Algorithms for Solving Monotone Variational Inequalities Problems

Nopparat Wairojjana, Mudasir Younis, Habib ur Rehman, Nuttapol Pakkaranang and Nattawut Pholasa

1. Applied Mathematics Program, Faculty of Science and Technology, Valaya Alongkorn Rajabhat University under the Royal Patronage (VRU), 1 Moo 20 Phaholyothin Road, Klong Neung, Klong Luang, Pathumthani 13180, Thailand
2. Department of Applied Mathematics, UIT-Rajiv Gandhi Technological University (University of Technology of M.P.), Bhopal 462033, India
3. Department of Mathematics, Faculty of Science, King Mongkut’s University of Technology Thonburi (KMUTT), Bangkok 10140, Thailand
4. School of Science, University of Phayao, Phayao 56000, Thailand
* Author to whom correspondence should be addressed.
Submission received: 19 August 2020 / Revised: 30 September 2020 / Accepted: 7 October 2020 / Published: 15 October 2020
(This article belongs to the Special Issue Nonlinear Analysis and Optimization with Applications)

Abstract: Variational inequality theory is an effective tool in engineering, economics, transport and mathematical optimization. Most approaches for solving variational inequalities involve iterative techniques. In this article, we introduce a new modified viscosity-type extragradient method to solve monotone variational inequality problems in real Hilbert spaces. A strong convergence result for the method is established without knowledge of the operator's Lipschitz constant. A thorough numerical comparison of the newly designed method with the current state of the art is carried out on several practical test problems.

1. Introduction

Assume that $C$ is a nonempty, closed and convex subset of a real Hilbert space $H$, and let $\mathbb{R}$ and $\mathbb{N}$ denote the sets of real and natural numbers, respectively. In this paper, we consider the classical variational inequality problem [1,2] (in short, $VI(F,C)$), whose solution set is denoted by $SVI(F,C)$. Given an operator $F : H \to H$, the variational inequality problem is defined in the following way:
$$\text{Find } u^* \in C \text{ such that } \langle F(u^*), y - u^* \rangle \geq 0, \quad \forall y \in C. \tag{1}$$
Problem (1) is well defined and equivalent to the following fixed point problem:
$$\text{Find a point } u^* \in C \text{ such that } u^* = P_C[u^* - \zeta F(u^*)], \tag{2}$$
for some $0 < \zeta < \frac{1}{L}$, where $L$ is the Lipschitz constant of the operator $F$. We assume that the following conditions are satisfied (a small numerical check of (b2) and (b3) is sketched after this list):
(b1) The solution set, represented by $SVI(F,C)$, is nonempty;
(b2) The operator $F : H \to H$ is monotone, i.e.,
$$\langle F(u_1) - F(u_2), u_1 - u_2 \rangle \geq 0, \quad \forall u_1, u_2 \in C;$$
(b3) $F$ is Lipschitz continuous, i.e., there exists $L > 0$ such that
$$\| F(u_1) - F(u_2) \| \leq L \| u_1 - u_2 \|, \quad \forall u_1, u_2 \in C.$$
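As a quick, purely illustrative check of conditions (b2) and (b3) in finite dimensions, one can sample random pairs and test the monotonicity inequality while estimating a lower bound on $L$; all names below are hypothetical helpers, not part of the paper:

```python
import numpy as np

def check_b2_b3(F, dim, trials=1000, seed=0):
    """Empirically test monotonicity (b2) and estimate a lower bound
    on the Lipschitz constant L from (b3). Illustrative only."""
    rng = np.random.default_rng(seed)
    L_lower, monotone = 0.0, True
    for _ in range(trials):
        u1 = rng.standard_normal(dim)
        u2 = rng.standard_normal(dim)
        dF, du = F(u1) - F(u2), u1 - u2
        if dF @ du < -1e-12:        # violates (b2)
            monotone = False
        L_lower = max(L_lower, np.linalg.norm(dF) / np.linalg.norm(du))
    return monotone, L_lower

# Example: F(u) = M u is monotone whenever the symmetric part of M
# is positive semidefinite; the skew part does not affect (b2).
M = np.array([[2.0, 1.0], [-1.0, 2.0]])
print(check_b2_b3(lambda u: M @ u, dim=2))   # (True, approx ||M|| = sqrt(5))
```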
Variational inequality theory is a useful technique for investigating a large number of problems arising in physics, economics, engineering and optimization theory. It was first introduced by Stampacchia [1] in 1964, and it is well established that problem (1) is a central problem in nonlinear analysis. It is an advantageous mathematical model that unifies several topics of applied mathematics, such as network equilibrium problems, necessary optimality conditions, systems of nonlinear equations and complementarity problems [3,4,5,6,7].
The projection method and its modified versions are crucial for finding numerical solutions of variational inequality problems. Many studies have suggested and analyzed different types of projection methods to solve the variational inequality problem (see [8,9,10,11,12,13,14,15,16,17,18] for more details) and others, as in [19,20,21,22,23,24,25,26,27,28]. The simplest such method is the gradient method, for which only one projection onto the feasible set is required per iteration. Convergence of this method, however, requires strong monotonicity of $F$. To avoid the strong monotonicity hypothesis, Korpelevich [8] and Antipin [29] introduced the following extragradient method:
$$u_n \in C, \qquad v_n = P_C[u_n - \zeta F(u_n)], \qquad u_{n+1} = P_C[u_n - \zeta F(v_n)],$$
for some $0 < \zeta < \frac{1}{L}$.
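For intuition, one iteration of the extragradient scheme above takes only a few lines; `proj_C` is an assumed user-supplied projection onto the feasible set (a sketch, not reference code):

```python
def extragradient_step(u, F, proj_C, zeta):
    """One Korpelevich extragradient iteration (two projections onto C),
    with a fixed stepsize 0 < zeta < 1/L."""
    v = proj_C(u - zeta * F(u))       # predictor: projected gradient step
    return proj_C(u - zeta * F(v))    # corrector: re-evaluate F at v
```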
The subgradient extragradient algorithm was later developed by Censor et al. [10] to solve problem (1) in real Hilbert spaces. Their method has the form
$$u_n \in C, \qquad v_n = P_C[u_n - \zeta F(u_n)], \qquad u_{n+1} = P_{H_n}[u_n - \zeta F(v_n)],$$
where $0 < \zeta < \frac{1}{L}$ and $H_n = \{ z \in H : \langle u_n - \zeta F(u_n) - v_n, z - v_n \rangle \leq 0 \}$.
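The practical gain of the subgradient extragradient method is that the second projection is onto the half-space $H_n$, which admits the closed form below (a sketch with illustrative names):

```python
import numpy as np

def proj_halfspace(x, a, v):
    """Project x onto H = {z : <a, z - v> <= 0}; for H_n above,
    a = u_n - zeta*F(u_n) - v_n and v = v_n."""
    s = a @ (x - v)
    if s <= 0.0:
        return x                      # x already belongs to H
    return x - (s / (a @ a)) * a      # shift along the normal vector a
```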
In this article, motivated by the methods in [10,30,31] and the viscosity method [14], we introduce a new viscosity subgradient extragradient algorithm to solve variational inequality problems involving monotone operators in Hilbert spaces. It is important to note that our proposed algorithm operates more effectively than the existing ones; in particular, compared with the results of Yang et al. [30], our algorithm performs efficiently in most situations. As in Yang et al. [30], the convergence proof of Algorithm 1 does not require knowledge of the Lipschitz constant of the operator $F$. The proposed algorithm can be seen as a modification of the methods found in [8,10,30,31]. Under mild conditions, a strong convergence theorem is proven for the proposed method. Numerical experiments show that the new method is more effective than the current ones in [30].
The rest of the article is arranged in the following way: Section 2 provides a few definitions and basic results that are used throughout the paper. Section 3 contains the main algorithm and convergence theorem. Section 4 includes the numerical results that illustrate the algorithmic efficacy of the introduced method.
Algorithm 1 An Explicit Method for Monotone Variational Inequality Problems
  • Step 0: Let $u_0 \in C$, $\mu \in (0, 1)$, $\zeta_0 > 0$ and a sequence $\{\beta_n\} \subset (0, 1)$ with $\beta_n \to 0$ and $\sum_n \beta_n = +\infty$.
  • Step 1: Given $u_n$, compute
    $$v_n = P_C[u_n - \zeta_n F(u_n)].$$
    If $u_n = v_n$, STOP. Else, move to Step 2.
  • Step 2: Construct the half-space
    $$H_n = \{ z \in H : \langle u_n - \zeta_n F(u_n) - v_n, z - v_n \rangle \leq 0 \}.$$
  • Step 3: Compute
    $$u_{n+1} = \beta_n f(u_n) + (1 - \beta_n) z_n, \quad \text{where } z_n = P_{H_n}[u_n - \zeta_n F(v_n)].$$
  • Step 4: Update the stepsize
    $$\zeta_{n+1} = \begin{cases} \min\left\{ \zeta_n, \dfrac{\mu \| u_n - v_n \|^2 + \mu \| z_n - v_n \|^2}{2 \langle F(u_n) - F(v_n), z_n - v_n \rangle} \right\} & \text{if } \langle F(u_n) - F(v_n), z_n - v_n \rangle > 0, \\ \zeta_n & \text{otherwise.} \end{cases}$$
    Set $n := n + 1$ and return to Step 1.
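To make the steps of Algorithm 1 concrete, the following is a minimal finite-dimensional sketch, assuming a user-supplied projection `proj_C` and contraction `f` (for instance $f(u) = u/2$); it illustrates the scheme above and is not the authors' reference implementation:

```python
import numpy as np

def proj_halfspace(x, a, v):
    s = a @ (x - v)
    return x if s <= 0.0 else x - (s / (a @ a)) * a

def algorithm1(F, proj_C, f, u0, zeta0=1.0, mu=0.9,
               beta=lambda n: 1.0 / (n + 4), tol=1e-3, max_iter=1000):
    """Sketch of Algorithm 1 (viscosity subgradient extragradient method)."""
    u, zeta = np.asarray(u0, dtype=float), zeta0
    for n in range(max_iter):
        # Step 1: projection onto C
        v = proj_C(u - zeta * F(u))
        if np.linalg.norm(u - v) <= tol:          # stopping rule D_n <= TOL
            break
        # Step 2: normal vector of the half-space H_n
        a = u - zeta * F(u) - v
        # Step 3: half-space projection, then viscosity step
        z = proj_halfspace(u - zeta * F(v), a, v)
        u_next = beta(n) * f(u) + (1.0 - beta(n)) * z
        # Step 4: adaptive stepsize update (no Lipschitz constant required)
        denom = (F(u) - F(v)) @ (z - v)
        if denom > 0.0:
            num = mu * (np.linalg.norm(u - v)**2 + np.linalg.norm(z - v)**2)
            zeta = min(zeta, num / (2.0 * denom))
        u = u_next
    return u
```

Note that $L$ never enters the iteration: Step 4 adapts $\zeta_n$ from computed quantities only, which is the property formalized in Lemma 6 of Section 3.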

2. Background

The metric projection $P_C(u_1)$ of $u_1 \in H$ onto a closed and convex subset $C$ of $H$ is defined by
$$P_C(u_1) = \arg\min \{ \| u_2 - u_1 \| : u_2 \in C \}.$$
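The feasible sets used in Section 4 admit closed-form projections; two illustrative helpers (a sketch, not taken from the paper):

```python
import numpy as np

def proj_ball(u, radius=1.0):
    """Projection onto the ball {u : ||u|| <= radius} (cf. Example 2)."""
    norm = np.linalg.norm(u)
    return u if norm <= radius else (radius / norm) * u

def proj_box(u, lo=0.0, hi=10.0):
    """Componentwise projection onto {u : lo <= u_i <= hi} (cf. Example 3)."""
    return np.clip(u, lo, hi)
```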
Lemma 1 ([32], p. 31). For $u, v \in H$ and $a \in \mathbb{R}$, the following relations hold:
(i) $\| a u + (1 - a) v \|^2 = a \| u \|^2 + (1 - a) \| v \|^2 - a (1 - a) \| u - v \|^2$;
(ii) $\| u + v \|^2 \leq \| u \|^2 + 2 \langle v, u + v \rangle$.
Lemma 2 ([32,33]). Let $C$ be a nonempty, closed and convex subset of a real Hilbert space $H$ and let $P_C : H \to C$ be the metric projection from $H$ onto $C$. Then:
(i) For $u_1 \in C$ and $u_2 \in H$,
$$\| u_1 - P_C(u_2) \|^2 + \| P_C(u_2) - u_2 \|^2 \leq \| u_1 - u_2 \|^2;$$
(ii) $u_3 = P_C(u_1)$ if and only if
$$\langle u_1 - u_3, u_2 - u_3 \rangle \leq 0, \quad \forall u_2 \in C;$$
(iii) For $u_2 \in C$ and $u_1 \in H$,
$$\| u_1 - P_C(u_1) \| \leq \| u_1 - u_2 \|.$$
Lemma 3 ([34]). Assume that $\{\chi_n\}$ is a sequence of non-negative real numbers such that
$$\chi_{n+1} \leq (1 - \alpha_n) \chi_n + \alpha_n \delta_n, \quad \forall n \in \mathbb{N},$$
where $\{\alpha_n\} \subset (0, 1)$ and $\{\delta_n\} \subset \mathbb{R}$ satisfy
$$\lim_{n \to \infty} \alpha_n = 0, \qquad \sum_{n=1}^{\infty} \alpha_n = \infty \qquad \text{and} \qquad \limsup_{n \to \infty} \delta_n \leq 0.$$
Then $\lim_{n \to \infty} \chi_n = 0$.
Lemma 4 ([35]). Assume that $\{\chi_n\}$ is a sequence of real numbers such that there is a subsequence $\{n_i\}$ of $\{n\}$ with $\chi_{n_i} < \chi_{n_i + 1}$ for all $i \in \mathbb{N}$. Then there is a nondecreasing sequence $\{m_k\} \subset \mathbb{N}$ such that $m_k \to \infty$ as $k \to \infty$, and the following conditions are fulfilled for all (sufficiently large) numbers $k \in \mathbb{N}$:
$$\chi_{m_k} \leq \chi_{m_k + 1} \quad \text{and} \quad \chi_k \leq \chi_{m_k + 1}.$$
In fact, $m_k = \max \{ j \leq k : \chi_j \leq \chi_{j+1} \}$.
Lemma 5 ([36]). Assume that $C$ is a nonempty, closed and convex set in $H$ and that the operator $F : C \to H$ is monotone and continuous. Then $u^*$ is a solution of problem (1) if and only if $u^*$ is a solution of the following problem:
$$\text{Find } x \in C \text{ such that } \langle F(y), y - x \rangle \geq 0, \quad \forall y \in C.$$

3. Algorithm and Corresponding Strong Convergence Theorem

We provide a method built from two convex minimization problems, combined with a viscosity term and an explicit stepsize formula, which are used to enhance the rate of convergence of the iterative sequence and to make the method independent of the Lipschitz constant $L$. The detailed method is stated in Algorithm 1 above.
Remark 1. $H_n$ is a half-space, and hence $H_n$ is a closed and convex set in $H$.
Lemma 6. The sequence $\{\zeta_n\}$ is monotonically decreasing with lower bound $\min \left\{ \frac{\mu}{L}, \zeta_0 \right\}$ and converges to some $\zeta > 0$.
Proof. By construction, the sequence $\{\zeta_n\}$ is monotone and nonincreasing. It is given that $F$ is Lipschitz continuous with $L > 0$. In the case $\langle F(u_n) - F(v_n), z_n - v_n \rangle > 0$, using $a^2 + b^2 \geq 2ab$ in the numerator and the Cauchy–Schwarz and Lipschitz inequalities in the denominator, we have
$$\frac{\mu ( \| u_n - v_n \|^2 + \| z_n - v_n \|^2 )}{2 \langle F(u_n) - F(v_n), z_n - v_n \rangle} \geq \frac{2 \mu \| u_n - v_n \| \| z_n - v_n \|}{2 \| F(u_n) - F(v_n) \| \| z_n - v_n \|} \geq \frac{2 \mu \| u_n - v_n \| \| z_n - v_n \|}{2 L \| u_n - v_n \| \| z_n - v_n \|} = \frac{\mu}{L}.$$
The above discussion implies that the sequence $\{\zeta_n\}$ has lower bound $\min \left\{ \frac{\mu}{L}, \zeta_0 \right\}$. Moreover, there exists a number $\zeta > 0$ such that $\lim_{n \to \infty} \zeta_n = \zeta$.  □
Lemma 7. Assume that the operator $F : C \to H$ satisfies conditions (b1)–(b3). Then, for each $u^* \in SVI(F,C)$, we have
$$\| z_n - u^* \|^2 \leq \| u_n - u^* \|^2 - \left( 1 - \frac{\mu \zeta_n}{\zeta_{n+1}} \right) \| u_n - v_n \|^2 - \left( 1 - \frac{\mu \zeta_n}{\zeta_{n+1}} \right) \| z_n - v_n \|^2.$$
Proof. Consider the following:
$$\begin{aligned} \| z_n - u^* \|^2 &= \| P_{H_n}[u_n - \zeta_n F(v_n)] - u^* \|^2 \\ &= \| P_{H_n}[u_n - \zeta_n F(v_n)] + [u_n - \zeta_n F(v_n)] - [u_n - \zeta_n F(v_n)] - u^* \|^2 \\ &= \| [u_n - \zeta_n F(v_n)] - u^* \|^2 + \| P_{H_n}[u_n - \zeta_n F(v_n)] - [u_n - \zeta_n F(v_n)] \|^2 \\ &\quad + 2 \langle P_{H_n}[u_n - \zeta_n F(v_n)] - [u_n - \zeta_n F(v_n)], \, [u_n - \zeta_n F(v_n)] - u^* \rangle. \end{aligned} \tag{4}$$
From the assumption that $u^* \in SVI(F,C) \subset C \subset H_n$ and Lemma 2 (ii), we have
$$\begin{aligned} &\| P_{H_n}[u_n - \zeta_n F(v_n)] - [u_n - \zeta_n F(v_n)] \|^2 + \langle P_{H_n}[u_n - \zeta_n F(v_n)] - [u_n - \zeta_n F(v_n)], \, [u_n - \zeta_n F(v_n)] - u^* \rangle \\ &= \langle [u_n - \zeta_n F(v_n)] - P_{H_n}[u_n - \zeta_n F(v_n)], \, u^* - P_{H_n}[u_n - \zeta_n F(v_n)] \rangle \leq 0, \end{aligned}$$
which implies that
$$\langle P_{H_n}[u_n - \zeta_n F(v_n)] - [u_n - \zeta_n F(v_n)], \, [u_n - \zeta_n F(v_n)] - u^* \rangle \leq - \| P_{H_n}[u_n - \zeta_n F(v_n)] - [u_n - \zeta_n F(v_n)] \|^2.$$
Now, using Equation (4), this implies that
$$\begin{aligned} \| z_n - u^* \|^2 &\leq \| u_n - \zeta_n F(v_n) - u^* \|^2 - \| P_{H_n}[u_n - \zeta_n F(v_n)] - [u_n - \zeta_n F(v_n)] \|^2 \\ &\leq \| u_n - u^* \|^2 - \| u_n - z_n \|^2 + 2 \zeta_n \langle F(v_n), u^* - z_n \rangle. \end{aligned} \tag{7}$$
Given that $u^*$ is a solution of $VI(F,C)$, we get
$$\langle F(u^*), y - u^* \rangle \geq 0, \quad \forall y \in C.$$
Due to the monotonicity of $F$ on $C$, we can obtain
$$\langle F(v_n) - F(u^*), v_n - u^* \rangle \geq 0.$$
Since $v_n \in C$, it follows that
$$\langle F(v_n), v_n - u^* \rangle \geq 0.$$
Thus, we have
$$\langle F(v_n), u^* - z_n \rangle = \langle F(v_n), u^* - v_n \rangle + \langle F(v_n), v_n - z_n \rangle \leq \langle F(v_n), v_n - z_n \rangle. \tag{11}$$
From (7) and (11), we get
$$\begin{aligned} \| z_n - u^* \|^2 &\leq \| u_n - u^* \|^2 - \| u_n - z_n \|^2 + 2 \zeta_n \langle F(v_n), v_n - z_n \rangle \\ &= \| u_n - u^* \|^2 - \| u_n - v_n + v_n - z_n \|^2 + 2 \zeta_n \langle F(v_n), v_n - z_n \rangle \\ &= \| u_n - u^* \|^2 - \| u_n - v_n \|^2 - \| v_n - z_n \|^2 + 2 \langle u_n - \zeta_n F(v_n) - v_n, \, z_n - v_n \rangle. \end{aligned} \tag{12}$$
Note that $z_n = P_{H_n}[u_n - \zeta_n F(v_n)] \in H_n$, so that $\langle u_n - \zeta_n F(u_n) - v_n, z_n - v_n \rangle \leq 0$, and by the definition of $\zeta_{n+1}$ we have
$$\begin{aligned} 2 \langle u_n - \zeta_n F(v_n) - v_n, z_n - v_n \rangle &= 2 \langle u_n - \zeta_n F(u_n) - v_n, z_n - v_n \rangle + 2 \zeta_n \langle F(u_n) - F(v_n), z_n - v_n \rangle \\ &\leq \frac{2 \zeta_n}{\zeta_{n+1}} \zeta_{n+1} \langle F(u_n) - F(v_n), z_n - v_n \rangle \\ &\leq \frac{\zeta_n}{\zeta_{n+1}} \left( \mu \| u_n - v_n \|^2 + \mu \| z_n - v_n \|^2 \right). \end{aligned} \tag{13}$$
From expressions (12) and (13), we obtain
$$\begin{aligned} \| z_n - u^* \|^2 &\leq \| u_n - u^* \|^2 - \| u_n - v_n \|^2 - \| v_n - z_n \|^2 + \frac{\zeta_n}{\zeta_{n+1}} \left( \mu \| u_n - v_n \|^2 + \mu \| z_n - v_n \|^2 \right) \\ &\leq \| u_n - u^* \|^2 - \left( 1 - \frac{\mu \zeta_n}{\zeta_{n+1}} \right) \| u_n - v_n \|^2 - \left( 1 - \frac{\mu \zeta_n}{\zeta_{n+1}} \right) \| z_n - v_n \|^2. \end{aligned}$$
 □
Theorem 1. Assume that the operator $F : C \to H$ satisfies conditions (b1)–(b3) and let $u^*$ belong to the solution set $SVI(F,C)$. Then the sequences $\{u_n\}$, $\{v_n\}$ and $\{z_n\}$ generated by Algorithm 1 converge strongly to $u^*$.
Proof. Claim 1: The sequence $\{u_n\}$ is bounded in $H$.
From Lemma 7, we have
$$\| z_n - u^* \|^2 \leq \| u_n - u^* \|^2 - \left( 1 - \frac{\mu \zeta_n}{\zeta_{n+1}} \right) \| u_n - v_n \|^2 - \left( 1 - \frac{\mu \zeta_n}{\zeta_{n+1}} \right) \| z_n - v_n \|^2. \tag{15}$$
Since $\zeta_n \to \zeta$, there exists a fixed number $\epsilon \in (0, 1 - \mu)$ such that
$$\lim_{n \to \infty} \left( 1 - \frac{\mu \zeta_n}{\zeta_{n+1}} \right) = 1 - \mu > \epsilon > 0.$$
Thus, there is a finite number $N_1 \in \mathbb{N}$ such that
$$1 - \frac{\mu \zeta_n}{\zeta_{n+1}} > \epsilon > 0, \quad \forall n \geq N_1.$$
Thus, from (15), we obtain
$$\| z_n - u^* \|^2 \leq \| u_n - u^* \|^2, \quad \forall n \geq N_1. \tag{17}$$
Let $u^* \in SVI(F,C)$. By the definition of the sequence $\{u_{n+1}\}$ and since $f$ is a contraction with constant $\rho \in [0, 1)$, for $n \geq N_1$ we obtain
$$\begin{aligned} \| u_{n+1} - u^* \| &= \| \beta_n f(u_n) + (1 - \beta_n) z_n - u^* \| \\ &= \| \beta_n [f(u_n) - u^*] + (1 - \beta_n) [z_n - u^*] \| \\ &= \| \beta_n [f(u_n) - f(u^*) + f(u^*) - u^*] + (1 - \beta_n) [z_n - u^*] \| \\ &\leq \beta_n \| f(u_n) - f(u^*) \| + \beta_n \| f(u^*) - u^* \| + (1 - \beta_n) \| z_n - u^* \| \\ &\leq \beta_n \rho \| u_n - u^* \| + \beta_n \| f(u^*) - u^* \| + (1 - \beta_n) \| z_n - u^* \|. \end{aligned} \tag{18}$$
Combining expressions (17) and (18) with $\beta_n \in (0, 1)$, we have
$$\begin{aligned} \| u_{n+1} - u^* \| &\leq \beta_n \rho \| u_n - u^* \| + \beta_n \| f(u^*) - u^* \| + (1 - \beta_n) \| u_n - u^* \| \\ &= [1 - \beta_n (1 - \rho)] \| u_n - u^* \| + \beta_n (1 - \rho) \frac{\| f(u^*) - u^* \|}{1 - \rho} \\ &\leq \max \left\{ \| u_n - u^* \|, \frac{\| f(u^*) - u^* \|}{1 - \rho} \right\} \\ &\leq \max \left\{ \| u_{N_1} - u^* \|, \frac{\| f(u^*) - u^* \|}{1 - \rho} \right\}. \end{aligned}$$
Finally, we deduce that the sequence $\{u_n\}$ is bounded.
Claim 2: If $\lim_{n \to \infty} \| u_n - v_n \| = 0$, then there is a subsequence $\{u_{n_k}\}$ of $\{u_n\}$ such that $u_{n_k} \rightharpoonup u^* \in SVI(F,C)$ as $k \to \infty$.
The reflexivity of $H$ and the boundedness of $\{u_n\}$ imply that there exists a subsequence $\{u_{n_k}\}$ such that $u_{n_k} \rightharpoonup u^* \in H$ as $k \to \infty$. It is sufficient to prove that $u^* \in SVI(F,C)$. Due to $\lim_{n \to \infty} \| u_n - v_n \| = 0$, we also have $v_{n_k} \rightharpoonup u^*$ as $k \to \infty$. In addition,
$$v_{n_k} = P_C[u_{n_k} - \zeta_{n_k} F(u_{n_k})],$$
which is equivalent to
$$\langle u_{n_k} - \zeta_{n_k} F(u_{n_k}) - v_{n_k}, \, y - v_{n_k} \rangle \leq 0, \quad \forall y \in C.$$
That is,
$$\langle u_{n_k} - v_{n_k}, y - v_{n_k} \rangle \leq \zeta_{n_k} \langle F(u_{n_k}), y - v_{n_k} \rangle, \quad \forall y \in C. \tag{20}$$
From the monotonicity condition on $F$, we have
$$\langle F(u_{n_k}) - F(y), u_{n_k} - y \rangle \geq 0, \quad \forall y \in C,$$
that is,
$$\langle F(y), y - u_{n_k} \rangle \geq \langle F(u_{n_k}), y - u_{n_k} \rangle, \quad \forall y \in C. \tag{21}$$
Combining expressions (20) and (21), we obtain
$$\begin{aligned} 0 &\leq \langle v_{n_k} - u_{n_k}, y - v_{n_k} \rangle + \zeta_{n_k} \langle F(u_{n_k}), y - v_{n_k} \rangle \\ &= \langle v_{n_k} - u_{n_k}, y - v_{n_k} \rangle + \zeta_{n_k} \langle F(u_{n_k}), y - u_{n_k} \rangle + \zeta_{n_k} \langle F(u_{n_k}), u_{n_k} - v_{n_k} \rangle \\ &\leq \langle v_{n_k} - u_{n_k}, y - v_{n_k} \rangle + \zeta_{n_k} \langle F(y), y - u_{n_k} \rangle + \zeta_{n_k} \langle F(u_{n_k}), u_{n_k} - v_{n_k} \rangle \end{aligned} \tag{22}$$
for all $y \in C$. Since $\lim_{k \to \infty} \zeta_{n_k} = \zeta > 0$ (see Lemma 6) and the sequence $\{u_n\}$ is bounded in $H$, letting $k \to \infty$ in (22) and using $\lim_{n \to \infty} \| u_n - v_n \| = 0$, we obtain
$$\langle F(y), y - u^* \rangle \geq 0, \quad \forall y \in C.$$
Applying the well-known Minty lemma (Lemma 5), we infer that $u^* \in SVI(F,C)$.
Claim 3: The sequence $\{u_n\}$ converges strongly in $H$.
The strong convergence of the sequence $\{u_n\}$ is established as follows. The continuity and monotonicity of the operator $F$ together with the Minty lemma imply that $SVI(F,C)$ is a closed and convex set (see [37,38] for more details). Since the mapping $f$ is a contraction, so is $P_{SVI(F,C)} \circ f$. The Banach contraction principle then guarantees that there exists a unique element $u^* \in SVI(F,C)$ such that
$$u^* = P_{SVI(F,C)}(f(u^*)).$$
Hence, by Lemma 2 (ii), we have
$$\langle f(u^*) - u^*, y - u^* \rangle \leq 0, \quad \forall y \in SVI(F,C). \tag{24}$$
Now, considering $u_{n+1} = \beta_n f(u_n) + (1 - \beta_n) z_n$ and using Lemma 1 (i) and Lemma 7, we have
$$\begin{aligned} \| u_{n+1} - u^* \|^2 &= \| \beta_n f(u_n) + (1 - \beta_n) z_n - u^* \|^2 \\ &= \| \beta_n [f(u_n) - u^*] + (1 - \beta_n) [z_n - u^*] \|^2 \\ &= \beta_n \| f(u_n) - u^* \|^2 + (1 - \beta_n) \| z_n - u^* \|^2 - \beta_n (1 - \beta_n) \| f(u_n) - z_n \|^2 \\ &\leq \beta_n \| f(u_n) - u^* \|^2 + (1 - \beta_n) \left[ \| u_n - u^* \|^2 - \left( 1 - \frac{\mu \zeta_n}{\zeta_{n+1}} \right) \| u_n - v_n \|^2 - \left( 1 - \frac{\mu \zeta_n}{\zeta_{n+1}} \right) \| z_n - v_n \|^2 \right] - \beta_n (1 - \beta_n) \| f(u_n) - z_n \|^2 \\ &\leq \beta_n \| f(u_n) - u^* \|^2 + \| u_n - u^* \|^2 - (1 - \beta_n) \left( 1 - \frac{\mu \zeta_n}{\zeta_{n+1}} \right) \left[ \| z_n - v_n \|^2 + \| u_n - v_n \|^2 \right]. \end{aligned} \tag{25}$$
The remainder of the proof is divided into two cases.
Case 1: Assume that there is a fixed number $N_2 \in \mathbb{N}$ ($N_2 \geq N_1$) such that
$$\| u_{n+1} - u^* \| \leq \| u_n - u^* \|, \quad \forall n \geq N_2.$$
Thus, $\lim_{n \to \infty} \| u_n - u^* \|$ exists; let $\lim_{n \to \infty} \| u_n - u^* \| = l$. From expression (25), we have
$$(1 - \beta_n) \left( 1 - \frac{\mu \zeta_n}{\zeta_{n+1}} \right) \left[ \| z_n - v_n \|^2 + \| u_n - v_n \|^2 \right] \leq \beta_n \| f(u_n) - u^* \|^2 + \| u_n - u^* \|^2 - \| u_{n+1} - u^* \|^2.$$
Due to the existence of $\lim_{n \to \infty} \| u_n - u^* \| = l$ and $\beta_n \to 0$, we obtain
$$\lim_{n \to \infty} \| u_n - v_n \| = \lim_{n \to \infty} \| z_n - v_n \| = 0.$$
It follows that
$$\lim_{n \to \infty} \| u_n - z_n \| \leq \lim_{n \to \infty} \| u_n - v_n \| + \lim_{n \to \infty} \| v_n - z_n \| = 0.$$
Hence, we obtain
$$\begin{aligned} \| u_{n+1} - u_n \| &= \| \beta_n f(u_n) + (1 - \beta_n) z_n - u_n \| \\ &= \| \beta_n [f(u_n) - u_n] + (1 - \beta_n) [z_n - u_n] \| \\ &\leq \beta_n \| f(u_n) - u_n \| + (1 - \beta_n) \| z_n - u_n \| \to 0. \end{aligned}$$
The boundedness of $\{u_n\}$ implies that the sequences $\{v_n\}$ and $\{z_n\}$ are also bounded. Thus, we can take a subsequence $\{u_{n_k}\}$ of $\{u_n\}$ such that $u_{n_k} \rightharpoonup \hat{u} \in C$; by Claim 2, $\hat{u} \in SVI(F,C)$, so that (24) gives
$$\limsup_{n \to \infty} \langle f(u^*) - u^*, u_n - u^* \rangle = \limsup_{k \to \infty} \langle f(u^*) - u^*, u_{n_k} - u^* \rangle = \langle f(u^*) - u^*, \hat{u} - u^* \rangle \leq 0.$$
We have $\lim_{n \to \infty} \| u_{n+1} - u_n \| = 0$, and it follows that
$$\limsup_{n \to \infty} \langle f(u^*) - u^*, u_{n+1} - u^* \rangle \leq \limsup_{n \to \infty} \langle f(u^*) - u^*, u_{n+1} - u_n \rangle + \limsup_{n \to \infty} \langle f(u^*) - u^*, u_n - u^* \rangle \leq 0. \tag{32}$$
From Lemma 7 and Lemma 1 (ii), for all $n \geq N_2$ we obtain
$$\begin{aligned} \| u_{n+1} - u^* \|^2 &= \| \beta_n f(u_n) + (1 - \beta_n) z_n - u^* \|^2 \\ &= \| \beta_n [f(u_n) - u^*] + (1 - \beta_n) [z_n - u^*] \|^2 \\ &\leq (1 - \beta_n)^2 \| z_n - u^* \|^2 + 2 \beta_n \langle f(u_n) - u^*, \, (1 - \beta_n) [z_n - u^*] + \beta_n [f(u_n) - u^*] \rangle \\ &= (1 - \beta_n)^2 \| z_n - u^* \|^2 + 2 \beta_n \langle f(u_n) - f(u^*) + f(u^*) - u^*, \, u_{n+1} - u^* \rangle \\ &= (1 - \beta_n)^2 \| z_n - u^* \|^2 + 2 \beta_n \langle f(u_n) - f(u^*), u_{n+1} - u^* \rangle + 2 \beta_n \langle f(u^*) - u^*, u_{n+1} - u^* \rangle \\ &\leq (1 - \beta_n)^2 \| z_n - u^* \|^2 + 2 \beta_n \rho \| u_n - u^* \| \| u_{n+1} - u^* \| + 2 \beta_n \langle f(u^*) - u^*, u_{n+1} - u^* \rangle \\ &\leq (1 + \beta_n^2 - 2 \beta_n) \| u_n - u^* \|^2 + 2 \beta_n \rho \| u_n - u^* \|^2 + 2 \beta_n \langle f(u^*) - u^*, u_{n+1} - u^* \rangle \\ &= (1 - 2 \beta_n) \| u_n - u^* \|^2 + \beta_n^2 \| u_n - u^* \|^2 + 2 \beta_n \rho \| u_n - u^* \|^2 + 2 \beta_n \langle f(u^*) - u^*, u_{n+1} - u^* \rangle \\ &= [1 - 2 \beta_n (1 - \rho)] \| u_n - u^* \|^2 + 2 \beta_n (1 - \rho) \left[ \frac{\beta_n \| u_n - u^* \|^2}{2 (1 - \rho)} + \frac{\langle f(u^*) - u^*, u_{n+1} - u^* \rangle}{1 - \rho} \right]. \end{aligned} \tag{33}$$
It follows from (32) that
$$\limsup_{n \to \infty} \left[ \frac{\beta_n \| u_n - u^* \|^2}{2 (1 - \rho)} + \frac{\langle f(u^*) - u^*, u_{n+1} - u^* \rangle}{1 - \rho} \right] \leq 0. \tag{34}$$
Choose $N_3 \in \mathbb{N}$ ($N_3 \geq N_2$) large enough such that $2 \beta_n (1 - \rho) < 1$ for all $n \geq N_3$. Using expressions (33) and (34) and applying Lemma 3, we conclude that $\| u_n - u^* \| \to 0$ as $n \to \infty$.
Case 2: Assume that there is a subsequence $\{n_i\}$ of $\{n\}$ such that
$$\| u_{n_i} - u^* \| \leq \| u_{n_i + 1} - u^* \|, \quad \forall i \in \mathbb{N}.$$
Thus, by Lemma 4 there is a nondecreasing sequence $\{m_k\} \subset \mathbb{N}$ with $m_k \to \infty$ such that
$$\| u_{m_k} - u^* \| \leq \| u_{m_k + 1} - u^* \| \quad \text{and} \quad \| u_k - u^* \| \leq \| u_{m_k + 1} - u^* \|, \quad \forall k \in \mathbb{N}. \tag{35}$$
Similarly to Case 1, from (25) we obtain
$$(1 - \beta_{m_k}) \left( 1 - \frac{\mu \zeta_{m_k}}{\zeta_{m_k + 1}} \right) \left[ \| z_{m_k} - v_{m_k} \|^2 + \| u_{m_k} - v_{m_k} \|^2 \right] \leq \beta_{m_k} \| f(u_{m_k}) - u^* \|^2 + \| u_{m_k} - u^* \|^2 - \| u_{m_k + 1} - u^* \|^2.$$
Due to $\beta_{m_k} \to 0$ and $1 - \frac{\mu \zeta_{m_k}}{\zeta_{m_k + 1}} \to 1 - \mu$, we deduce that
$$\lim_{k \to \infty} \| u_{m_k} - v_{m_k} \| = \lim_{k \to \infty} \| z_{m_k} - v_{m_k} \| = 0.$$
It follows that
$$\lim_{k \to \infty} \| u_{m_k} - z_{m_k} \| \leq \lim_{k \to \infty} \| u_{m_k} - v_{m_k} \| + \lim_{k \to \infty} \| v_{m_k} - z_{m_k} \| = 0.$$
As in Case 1, we can easily obtain
$$\lim_{k \to \infty} \| u_{m_k + 1} - u_{m_k} \| = 0 \quad \text{and} \quad \limsup_{k \to \infty} \langle f(u^*) - u^*, u_{m_k + 1} - u^* \rangle \leq 0.$$
Using (35) and the same argument as in (33), we have
$$\begin{aligned} \| u_{m_k + 1} - u^* \|^2 &\leq [1 - 2 \beta_{m_k} (1 - \rho)] \| u_{m_k} - u^* \|^2 + 2 \beta_{m_k} (1 - \rho) \left[ \frac{\beta_{m_k} \| u_{m_k} - u^* \|^2}{2 (1 - \rho)} + \frac{\langle f(u^*) - u^*, u_{m_k + 1} - u^* \rangle}{1 - \rho} \right] \\ &\leq [1 - 2 \beta_{m_k} (1 - \rho)] \| u_{m_k + 1} - u^* \|^2 + 2 \beta_{m_k} (1 - \rho) \left[ \frac{\beta_{m_k} \| u_{m_k} - u^* \|^2}{2 (1 - \rho)} + \frac{\langle f(u^*) - u^*, u_{m_k + 1} - u^* \rangle}{1 - \rho} \right]. \end{aligned}$$
It follows that
$$\| u_{m_k + 1} - u^* \|^2 \leq \frac{\beta_{m_k} \| u_{m_k} - u^* \|^2}{2 (1 - \rho)} + \frac{\langle f(u^*) - u^*, u_{m_k + 1} - u^* \rangle}{1 - \rho}.$$
Due to $\beta_{m_k} \to 0$ as $k \to \infty$ and $\limsup_{k \to \infty} \langle f(u^*) - u^*, u_{m_k + 1} - u^* \rangle \leq 0$, we obtain
$$\| u_{m_k + 1} - u^* \|^2 \to 0, \quad \text{as } k \to \infty.$$
Finally, (35) gives
$$\lim_{k \to \infty} \| u_k - u^* \|^2 \leq \lim_{k \to \infty} \| u_{m_k + 1} - u^* \|^2 \leq 0.$$
Consequently, $u_n \to u^*$. This completes the proof of the theorem.  □

4. Numerical Illustrations

The experimental results discussed in this section illustrate the efficacy of our proposed Algorithm 1 (m-EgA3) compared with Algorithm 1 (m-EgA1) and Algorithm 2 (m-EgA2) in [30].
Example 1.
Consider the HpHard problem taken from [39] and considered by many authors for numerical tests (see [40,41,42]), where $F : \mathbb{R}^m \to \mathbb{R}^m$ is the operator defined by $F(u) = M u + q$ with $q \in \mathbb{R}^m$ and
$$M = N N^T + B + D,$$
where $N$ is an $m \times m$ matrix, $B$ is an $m \times m$ skew-symmetric matrix and $D$ is an $m \times m$ positive definite diagonal matrix. The feasible set is defined by
$$C = \{ u \in \mathbb{R}^m : Q u \leq b \},$$
where $Q$ is a $100 \times m$ matrix and $b$ is a nonnegative vector. It is clear that $F$ is monotone and Lipschitz continuous with $L = \| M \|$. For $q = 0$, the solution set of the corresponding variational inequality is $SVI(F,C) = \{0\}$. In this experiment, we take the initial point $u_0 = (1, 1, \ldots, 1)$ and the stopping criterion $D_n = \| u_n - v_n \| \leq TOL = 10^{-3}$. Moreover, the control parameters are $\zeta_0 = \frac{0.7}{L}$ and $\mu = 0.9$ for Algorithm 1 (m-EgA1) in [30]; $\zeta_0 = \frac{0.7}{L}$, $\mu = 0.9$ and $\beta_n = \frac{1}{30(n+2)}$ for Algorithm 2 (m-EgA2) in [30]; and $\zeta_0 = \frac{0.7}{L}$, $\mu = 0.9$, $\beta_n = \frac{1}{n+4}$ and $f(u) = \frac{u}{2}$ for Algorithm 1 (m-EgA3). The numerical results of all methods are reported in Figures 1–8 and Table 1.
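A sketch of how the test operator of Example 1 can be generated; the entry ranges for N, B and D are assumptions made for illustration, since the text only prescribes their structure (the projection onto C = {u : Qu ≤ b} is itself a quadratic program and is left to a solver):

```python
import numpy as np

def make_hphard_operator(m, seed=0):
    """Build F(u) = M u (the case q = 0) with M = N N^T + B + D."""
    rng = np.random.default_rng(seed)
    N = rng.uniform(-5.0, 5.0, (m, m))
    B0 = rng.uniform(-5.0, 5.0, (m, m))
    B = B0 - B0.T                          # skew-symmetric matrix
    D = np.diag(rng.uniform(0.1, 0.3, m))  # positive definite diagonal
    M = N @ N.T + B + D
    L = np.linalg.norm(M, 2)               # Lipschitz constant L = ||M||
    return (lambda u: M @ u), L
```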
Example 2.
Assume that $H = L^2([0, 1])$ is a Hilbert space with inner product
$$\langle u, v \rangle = \int_0^1 u(t) v(t) \, dt, \quad \forall u, v \in H,$$
and induced norm
$$\| u \| = \sqrt{\int_0^1 | u(t) |^2 \, dt}.$$
Let $C := \{ u \in L^2([0, 1]) : \| u \| \leq 1 \}$ be the unit ball and let $F : C \to H$ be defined by
$$F(u)(t) = \int_0^1 \big( u(t) - H(t, s) f(u(s)) \big) \, ds + g(t),$$
where
$$H(t, s) = \frac{2 t s e^{t+s}}{e \sqrt{e^2 - 1}}, \qquad f(u) = \cos(u), \qquad g(t) = \frac{2 t e^t}{e \sqrt{e^2 - 1}}.$$
As shown in [41], $F$ is Lipschitz continuous with Lipschitz constant $L = 2$ and monotone. Figures 9–11 and Table 2 show the numerical results obtained with different initial values $u_0$ and $\epsilon = 10^{-3}$. In this experiment, we take different initial points $u_0$ and the stopping criterion $D_n = \| u_n - v_n \| \leq TOL = 10^{-3}$. Moreover, the control parameters are $\zeta_0 = \frac{0.6}{L}$ and $\mu = 0.45$ for Algorithm 1 (m-EgA1) in [30]; $\zeta_0 = \frac{0.6}{L}$, $\mu = 0.45$ and $\beta_n = \frac{1}{100(n+2)}$ for Algorithm 2 (m-EgA2) in [30]; and $\zeta_0 = \frac{0.6}{L}$, $\mu = 0.45$, $\beta_n = \frac{1}{n+2}$ and $f(u) = \frac{u}{3}$ for Algorithm 1 (m-EgA3).
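For completeness, a possible discretization of the integral operator in Example 2 on a uniform grid; the grid size and the rectangle-rule quadrature are assumptions made for illustration:

```python
import numpy as np

def make_example2_operator(n=100):
    """Discretize F(u)(t) = int_0^1 (u(t) - H(t,s) cos(u(s))) ds + g(t)."""
    t = np.linspace(0.0, 1.0, n)
    w = 1.0 / n                                    # rectangle-rule weight
    c = np.e * np.sqrt(np.e**2 - 1.0)
    H = 2.0 * np.outer(t, t) * np.exp(t[:, None] + t[None, :]) / c
    g = 2.0 * t * np.exp(t) / c

    def F(u):
        # int_0^1 u(t) ds = u(t); the kernel term is a weighted matrix product
        return u - w * (H @ np.cos(u)) + g

    return F, t

def proj_unit_ball(u, w):
    """Projection onto C = {u : ||u|| <= 1} in the discretized L^2 norm."""
    norm = np.sqrt(w * np.sum(u**2))
    return u if norm <= 1.0 else u / norm
```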
Example 3.
Let $F : \mathbb{R}^2 \to \mathbb{R}^2$ be defined by
$$F \begin{pmatrix} u_1 \\ u_2 \end{pmatrix} = \begin{pmatrix} u_1 + u_2 + \sin(u_1) \\ -u_1 + u_2 + \sin(u_2) \end{pmatrix}, \quad \begin{pmatrix} u_1 \\ u_2 \end{pmatrix} \in \mathbb{R}^2,$$
and let $C$ be
$$C = \{ u = (u_1, u_2)^T \in \mathbb{R}^2 : 0 \leq u_i \leq 10, \; i = 1, 2 \}.$$
This problem was proposed in [43], where $F$ is $L$-Lipschitz continuous with Lipschitz constant $L = \sqrt{10}$ and monotone. In this experiment, we take different initial points $u_0$ and the stopping criterion $D_n = \| u_n - v_n \| \leq TOL$. Moreover, the control parameters are $\zeta_0 = \frac{0.7}{L}$ and $\mu = 0.50$ for Algorithm 1 (m-EgA1) in [30]; $\zeta_0 = \frac{0.7}{L}$, $\mu = 0.50$ and $\beta_n = \frac{1}{100(n+2)}$ for Algorithm 2 (m-EgA2) in [30]; and $\zeta_0 = \frac{0.7}{L}$, $\mu = 0.50$, $\beta_n = \frac{1}{100(n+2)}$ and $f(u) = \frac{u}{4}$ for Algorithm 1 (m-EgA3). Table 3 reports the numerical results for different tolerances and initial points.
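The data of Example 3 translate directly into code; a short sketch, where the commented driver line is a hypothetical use of the `algorithm1` sketch given after Algorithm 1:

```python
import numpy as np

def F_example3(u):
    """F(u) = (u1 + u2 + sin u1, -u1 + u2 + sin u2)^T."""
    u1, u2 = u
    return np.array([u1 + u2 + np.sin(u1), -u1 + u2 + np.sin(u2)])

def proj_C(u):
    """Projection onto the box C = {u : 0 <= u_i <= 10}."""
    return np.clip(u, 0.0, 10.0)

# Hypothetical call mirroring the m-EgA3 parameters of this example:
# u_star = algorithm1(F_example3, proj_C, f=lambda u: u / 4,
#                     u0=np.array([10.0, 20.0]), zeta0=0.7 / np.sqrt(10),
#                     mu=0.50, beta=lambda n: 1.0 / (100 * (n + 2)))
```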

Author Contributions

Data curation, N.W.; formal analysis, M.Y.; funding acquisition, N.P. (Nuttapol Pakkaranang) and N.P. (Nattawut Pholasa); investigation, N.W., N.P. (Nuttapol Pakkaranang) and H.u.R.; methodology, H.u.R.; project administration, H.u.R., N.P. (Nattawut Pholasa) and M.Y.; resources, N.P. (Nattawut Pholasa); software, H.u.R.; supervision, H.u.R. and N.P. (Nuttapol Pakkaranang); Writing—original draft, N.W. and H.u.R.; Writing—review and editing, N.P. (Nuttapol Pakkaranang). All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by School of Science, University of Phayao, Phayao, Thailand (Grant No. UoE 63002).

Acknowledgments

We are very grateful to the editor and the anonymous referees for their valuable and useful comments, which helped improve the quality of this work. N. Wairojjana would like to thank Valaya Alongkorn Rajabhat University under the Royal Patronage (VRU). N. Pholasa was partially supported by the University of Phayao.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Stampacchia, G. Formes bilinéaires coercitives sur les ensembles convexes. C. R. Acad. Sci. Paris 1964, 258, 4413.
  2. Konnov, I.V. On systems of variational inequalities. Russ. Math. (Izv. Vyssh. Uchebn. Zaved. Mat.) 1997, 41, 77–86.
  3. Kassay, G.; Kolumbán, J.; Páles, Z. On Nash stationary points. Publ. Math. 1999, 54, 267–279.
  4. Kassay, G.; Kolumbán, J.; Páles, Z. Factorization of Minty and Stampacchia variational inequality systems. Eur. J. Oper. Res. 2002, 143, 377–389.
  5. Kinderlehrer, D.; Stampacchia, G. An Introduction to Variational Inequalities and Their Applications; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2000.
  6. Konnov, I. Equilibrium Models and Variational Inequalities; Elsevier: Amsterdam, The Netherlands, 2007; Volume 210.
  7. Takahashi, W. Introduction to Nonlinear and Convex Analysis; Yokohama Publishers: Yokohama, Japan, 2009.
  8. Korpelevich, G. The extragradient method for finding saddle points and other problems. Matecon 1976, 12, 747–756.
  9. Noor, M.A. Some iterative methods for nonconvex variational inequalities. Comput. Math. Model. 2010, 21, 97–108.
  10. Censor, Y.; Gibali, A.; Reich, S. The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2010, 148, 318–335.
  11. Censor, Y.; Gibali, A.; Reich, S. Extensions of Korpelevich’s extragradient method for the variational inequality problem in Euclidean space. Optimization 2012, 61, 1119–1132.
  12. Malitsky, Y.V.; Semenov, V.V. An extragradient algorithm for monotone variational inequalities. Cybern. Syst. Anal. 2014, 50, 271–277.
  13. Tseng, P. A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 2000, 38, 431–446.
  14. Moudafi, A. Viscosity approximation methods for fixed-points problems. J. Math. Anal. Appl. 2000, 241, 46–55.
  15. Zhang, L.; Fang, C.; Chen, S. An inertial subgradient-type method for solving single-valued variational inequalities and fixed point problems. Numer. Algorithms 2018, 79, 941–956.
  16. Iusem, A.N.; Svaiter, B.F. A variant of Korpelevich’s method for variational inequalities with a new search strategy. Optimization 1997, 42, 309–321.
  17. Thong, D.V.; Hieu, D.V. Modified subgradient extragradient method for variational inequality problems. Numer. Algorithms 2017, 79, 597–610.
  18. Thong, D.V.; Hieu, D.V. Weak and strong convergence theorems for variational inequality problems. Numer. Algorithms 2017, 78, 1045–1060.
  19. Marino, G.; Scardamaglia, B.; Karapinar, E. Strong convergence theorem for strict pseudo-contractions in Hilbert spaces. J. Inequal. Appl. 2016, 2016.
  20. Ur Rehman, H.; Kumam, P.; Cho, Y.J.; Yordsorn, P. Weak convergence of explicit extragradient algorithms for solving equilibirum problems. J. Inequal. Appl. 2019, 2019.
  21. Ur Rehman, H.; Kumam, P.; Je Cho, Y.; Suleiman, Y.I.; Kumam, W. Modified Popov’s explicit iterative algorithms for solving pseudomonotone equilibrium problems. Optim. Methods Softw. 2020, 1–32.
  22. Ur Rehman, H.; Kumam, P.; Abubakar, A.B.; Cho, Y.J. The extragradient algorithm with inertial effects extended to equilibrium problems. Comput. Appl. Math. 2020, 39.
  23. Ur Rehman, H.; Kumam, P.; Kumam, W.; Shutaywi, M.; Jirakitpuwapat, W. The inertial sub-gradient extra-gradient method for a class of pseudo-monotone equilibrium problems. Symmetry 2020, 12, 463.
  24. Ur Rehman, H.; Kumam, P.; Argyros, I.K.; Deebani, W.; Kumam, W. Inertial extra-gradient method for solving a family of strongly pseudomonotone equilibrium problems in real Hilbert spaces with application in variational inequality problem. Symmetry 2020, 12, 503.
  25. Ur Rehman, H.; Kumam, P.; Argyros, I.K.; Alreshidi, N.A.; Kumam, W.; Jirakitpuwapat, W. A self-adaptive extra-gradient methods for a family of pseudomonotone equilibrium programming with application in different classes of variational inequality problems. Symmetry 2020, 12, 523.
  26. Ur Rehman, H.; Kumam, P.; Argyros, I.K.; Shutaywi, M.; Shah, Z. Optimization based methods for solving the equilibrium problems with applications in variational inequality problems and solution of Nash equilibrium models. Mathematics 2020, 8, 822.
  27. Ur Rehman, H.; Kumam, P.; Shutaywi, M.; Alreshidi, N.A.; Kumam, W. Inertial optimization based two-step methods for solving equilibrium problems with applications in variational inequality problems and growth control equilibrium models. Energies 2020, 13, 3292.
  28. Rehman, H.U.; Kumam, P.; Dong, Q.L.; Peng, Y.; Deebani, W. A new Popov’s subgradient extragradient method for two classes of equilibrium programming in a real Hilbert space. Optimization 2020, 1–36.
  29. Antipin, A.S. On a method for convex programs using a symmetrical modification of the Lagrange function. Ekon. I Mat. Metod. 1976, 12, 1164–1173.
  30. Yang, J.; Liu, H.; Liu, Z. Modified subgradient extragradient algorithms for solving monotone variational inequalities. Optimization 2018, 67, 2247–2258.
  31. Kraikaew, R.; Saejung, S. Strong convergence of the Halpern subgradient extragradient method for solving variational inequalities in Hilbert spaces. J. Optim. Theory Appl. 2013, 163, 399–412.
  32. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd ed.; CMS Books in Mathematics; Springer International Publishing: New York, NY, USA, 2017.
  33. Kreyszig, E. Introductory Functional Analysis with Applications, 1st ed.; Wiley Classics Library: Hoboken, NJ, USA, 1989.
  34. Xu, H.K. Another control condition in an iterative method for nonexpansive mappings. Bull. Aust. Math. Soc. 2002, 65, 109–113.
  35. Maingé, P.E. Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 2008, 16, 899–912.
  36. Takahashi, W. Nonlinear Functional Analysis; Yokohama Publisher: Yokohama, Japan, 2000.
  37. Liu, Z.; Zeng, S.; Motreanu, D. Evolutionary problems driven by variational inequalities. J. Differ. Equ. 2016, 260, 6787–6799.
  38. Liu, Z.; Migórski, S.; Zeng, S. Partial differential variational inequalities involving nonlocal boundary conditions in Banach spaces. J. Differ. Equ. 2017, 263, 3989–4006.
  39. Harker, P.T.; Pang, J.S. A damped-Newton method for the linear complementarity problem. In Computational Solution of Nonlinear Systems of Equations; Lectures in Applied Mathematics; AMS: Providence, RI, USA, 1990; Volume 26, pp. 265–284.
  40. Solodov, M.V.; Svaiter, B.F. A new projection method for variational inequality problems. SIAM J. Control Optim. 1999, 37, 765–776.
  41. Van Hieu, D.; Anh, P.K.; Muu, L.D. Modified hybrid projection methods for finding common solutions to variational inequality problems. Comput. Optim. Appl. 2016, 66, 75–96.
  42. Dong, Q.L.; Cho, Y.J.; Zhong, L.L.; Rassias, T.M. Inertial projection and contraction algorithms for variational inequalities. J. Glob. Optim. 2017, 70, 687–704.
  43. Dong, Q.L.; Lu, Y.Y.; Yang, J. The extragradient algorithm with inertial effects for solving the variational inequality. Optimization 2016, 65, 2217–2226.
Figure 1. Numerical behaviour of Algorithm 1 compared to Algorithm 1 in [30] and Algorithm 2 in [30] for Example 1, when m = 5.
Figure 2. Numerical behaviour of Algorithm 1 compared to Algorithm 1 in [30] and Algorithm 2 in [30] for Example 1, when m = 5.
Figure 3. Numerical behaviour of Algorithm 1 compared to Algorithm 1 in [30] and Algorithm 2 in [30] for Example 1, when m = 10.
Figure 4. Numerical behaviour of Algorithm 1 compared to Algorithm 1 in [30] and Algorithm 2 in [30] for Example 1, when m = 10.
Figure 5. Numerical behaviour of Algorithm 1 compared to Algorithm 1 in [30] and Algorithm 2 in [30] for Example 1, when m = 20.
Figure 6. Numerical behaviour of Algorithm 1 compared to Algorithm 1 in [30] and Algorithm 2 in [30] for Example 1, when m = 20.
Figure 7. Numerical behaviour of Algorithm 1 compared to Algorithm 1 in [30] and Algorithm 2 in [30] for Example 1, when m = 50.
Figure 8. Numerical behaviour of Algorithm 1 compared to Algorithm 1 in [30] and Algorithm 2 in [30] for Example 1, when m = 50.
Figure 9. Numerical behaviour of Algorithm 1 compared to Algorithm 1 in [30] and Algorithm 2 in [30] for Example 2, when u_0 = t.
Figure 10. Numerical behaviour of Algorithm 1 compared to Algorithm 1 in [30] and Algorithm 2 in [30] for Example 2, when u_0 = sin(t).
Figure 11. Numerical behaviour of Algorithm 1 compared to Algorithm 1 in [30] and Algorithm 2 in [30] for Example 2, when u_0 = cos(t).
Table 1. Numerical results for Figures 1–8.

        m-EgA1 [30]        m-EgA2 [30]        m-EgA3
m       Iter.    Time      Iter.    Time      Iter.    Time
5       59       1.0641    92       1.8107    34       0.8386
10      126      2.2007    137      1.9408    73       1.0267
20      204      3.2879    231      3.3654    83       11.9559
50      297      5.8990    344      5.6944    73       1.2942
Table 2. Numerical comparison values for Figures 9–11.

        m-EgA1 [30]        m-EgA2 [30]        m-EgA3
u_0     Iter.    Time      Iter.    Time      Iter.    Time
t       44       0.0342    72       0.0609    27       0.0390
sin(t)  44       0.0876    72       0.0569    40       0.0569
cos(t)  45       0.0366    72       0.0358    27       0.0358
Table 3. Numerical behaviour of Algorithm 1 compared to Algorithm 1 in [30] and Algorithm 2 in [30] for Example 3, using different initial points u_0 and tolerances.

TOL                  0.01    0.001   0.0001  0.00001   0.01     0.001    0.0001   0.00001
u_0                  Iter.   Iter.   Iter.   Iter.     Time     Time     Time     Time
Algorithm 1 in [30]
[10, 20]^T           29      41      83      277       0.4668   0.6234   1.5395   3.0415
[10, 10]^T           45      57      117     345       0.9234   1.1440   1.7387   3.4382
[10, 20]^T           59      71      143     389       1.0806   1.4264   1.8271   3.9269
Algorithm 2 in [30]
[10, 20]^T           31      42      87      290       0.4743   0.5981   1.4921   3.2051
[10, 10]^T           45      61      115     360       0.8976   1.2081   1.5891   3.7891
[10, 20]^T           69      73      151     407       1.2711   1.3910   2.0810   4.1981
Algorithm 1
[10, 20]^T           19      26      49      119       0.2391   0.3871   0.7716   1.6781
[10, 10]^T           25      39      64      123       0.2991   0.5192   0.9981   1.7021
[10, 20]^T           31      45      73      189       0.3018   0.7610   1.1012   2.4071
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
