Article

Inertial Iterative Schemes with Variable Step Sizes for Variational Inequality Problem Involving Pseudomonotone Operator

by Jamilu Abubakar 1,2,†, Poom Kumam 1,3,4,*, Habib ur Rehman 1 and Abdulkarim Hassan Ibrahim 1

1 Department of Mathematics, King Mongkut’s University of Technology Thonburi, Bangkok 10140, Thailand
2 Department of Mathematics, Usmanu Danfodiyo University, Sokoto 840001, Nigeria
3 Center of Excellence in Theoretical and Computational Science (TaCS-CoE), Science Laboratory Building, King Mongkut’s University of Technology Thonburi (KMUTT), 126 Pracha-Uthit Road, Bang Mod, Thrung Khru, Bangkok 10140, Thailand
4 Department of Medical Research, China Medical University Hospital, China Medical University, Taichung 40402, Taiwan
* Author to whom correspondence should be addressed.
† Current address: Department of Mathematics, King Mongkut’s University of Technology Thonburi, 126 Pracha-Uthit Road, Bang Mod, Thrung Khru, Bangkok 10140, Thailand.
Submission received: 18 March 2020 / Revised: 6 April 2020 / Accepted: 8 April 2020 / Published: 16 April 2020

Abstract

Two inertial subgradient extragradient algorithms for solving variational inequality problems involving a pseudomonotone operator are proposed in this article. The iterative schemes use self-adaptive step sizes which do not require prior knowledge of the Lipschitz constant of the underlying operator. Furthermore, under mild assumptions, we show the weak and strong convergence of the sequences generated by the proposed algorithms. The strong convergence of the second algorithm follows from the use of the viscosity method. Numerical experiments in both finite- and infinite-dimensional spaces are reported to illustrate the inertial effect and the computational performance of the proposed algorithms in comparison with existing state-of-the-art algorithms.

1. Introduction

This paper considers the problem of finding a point $\breve{u} \in E$ such that
$$\langle A\breve{u}, u - \breve{u} \rangle \ge 0, \quad \forall u \in E, \tag{1}$$
where $E$ is a nonempty closed convex subset of a real Hilbert space $H$ and $A$ is an operator on $H$. The variational inequality (VI) problem Equation (1) is a fundamental problem in optimization theory which is applied in many areas of study, such as transportation problems, equilibrium, economics and engineering (Refs. [1,2,3,4,5,6,7,8,9,10,11,12,13,14]).
There are many approaches to the VI problem, the basic ones being the regularization and the projection methods. Many studies have been carried out, and several algorithms have been considered and proposed (Refs. [5,15,16,17,18,19,20,21,22,23,24]). In this study, we are interested in the projection method.
The main iterative scheme in this study is given by
$$\begin{cases} t_n = u_n + \varrho_n (u_n - u_{n-1}), \\ s_n = P_E\left(t_n - \lambda_n A t_n\right), \\ T_n = \left\{ u \in H : \langle t_n - \lambda_n A t_n - s_n, u - s_n \rangle \le 0 \right\}, \\ u_{n+1} = P_{T_n}\left(t_n - \lambda_n A s_n\right), \quad n \ge 1, \end{cases} \tag{2}$$
where $\varrho_n$ is the inertial parameter, for which we consider two different updating rules. The step size $\lambda_n$ is self-adaptively updated according to a new step size rule.
We now discuss the relationship between our scheme and the existing algorithms. The iterative scheme Equation (2) with a fixed step size is exactly the scheme proposed in [25], and without the inertial step it reduces to the original subgradient extragradient scheme proposed by Censor et al. [26], where the underlying operator is monotone. The subgradient extragradient method has been studied, modified and improved by several researchers to produce variant methods. Most of these modifications use a fixed step size which depends on constants of the underlying operator, such as the strong (inverse strong) monotonicity modulus and the Lipschitz constant. Therefore, algorithms with a fixed step size require prior knowledge of such constants to be implemented. In a situation where such constants are difficult to compute or do not exist, such algorithms may be impossible to implement.
Recently, Yang et al. [27] proposed a modified subgradient extragradient algorithm with a variable step size which does not require knowledge of the Lipschitz constant for solving Equation (1). However, they considered the underlying operator to be monotone, and the generated sequence converges only weakly. Interested readers may refer to some recent articles that propose algorithms with variable step sizes which are independent of such constants of the underlying operator (Refs. [28,29,30,31]). The question now is: can we have an iterative scheme involving a more general class of operators, with strong convergence and a self-adaptive step size? We provide a positive answer to this question.
Inspired and motivated by [26,27], we propose a self-adaptive subgradient extragradient algorithm obtained by incorporating an inertial extrapolation step into the subgradient extragradient scheme. The aim of this modification is to obtain a self-adaptive scheme with fast convergence properties involving a more general class of operators. Moreover, we present a strongly convergent version of the proposed algorithm by incorporating a viscosity method. The proposed schemes do not require prior knowledge of the Lipschitz constant of the operator. Furthermore, we present numerical experiments in finite- and infinite-dimensional spaces to illustrate the performance and the effect of the inertial step in comparison with existing algorithms in the literature.
The outline of this work is as follows: in the next section, we give some definitions and lemmas which we will use in our convergence analysis. We present the convergence analysis of our proposed schemes in Section 3 and lastly, in Section 4, we illustrate the inertial effect and the computational performance of our algorithms through some examples.

2. Preliminaries

This section recalls some known facts and necessary tools that we need for the convergence analysis of our method. Throughout this article, $H$ is a real Hilbert space with inner product $\langle \cdot, \cdot \rangle$ and induced norm $\|\cdot\|$, and $E$ is a nonempty closed and convex subset of $H$. The notation $u_j \rightharpoonup u$ (resp. $u_j \to u$) indicates that the sequence $\{u_j\}$ converges weakly (resp. strongly) to $u$. The following identity holds in a Hilbert space:
$$\|t \pm s\|^2 = \|t\|^2 + \|s\|^2 \pm 2\langle t, s \rangle, \tag{3}$$
for every $t, s \in H$ [32].
Definition 1.
Let $A : H \to H$ be a mapping defined on a real Hilbert space $H$. $A$ is said to be:
(1) pseudomonotone if
$$\langle Au, v - u \rangle \ge 0 \implies \langle Av, u - v \rangle \le 0, \quad \forall u, v \in E;$$
(2) sequentially weakly continuous on $H$ if $u_j \rightharpoonup u \in H$ implies $A u_j \rightharpoonup A u$.
Lemma 1.
[32] Let $E \subset H$ be closed and convex and let $P_E$ denote the metric projection from $H$ onto $E$. For any $u \in H$, $v = P_E u$ if and only if $\langle u - v, \omega - v \rangle \le 0$ for all $\omega \in E$.
Lemma 2.
[33] Let $E \subset H$ be closed and convex. For every $u \in H$, the following hold:
i. $\|P_E u - P_E v\|^2 \le \langle P_E u - P_E v, u - v \rangle$ for all $v \in H$;
ii. $\|P_E u - v\|^2 \le \|u - v\|^2 - \|u - P_E u\|^2$ for all $v \in E$.
For more properties of the projection, the interested reader is referred to [33].
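Since the metric projection is the basic computational primitive of all the schemes below, the following is a minimal numerical illustration of the characterization in Lemma 1, assuming $E$ is a Euclidean ball in $\mathbb{R}^3$, a set whose projection has a simple closed form. The helper names, the radius, and the random sampling are illustrative assumptions, not part of the paper.

```python
import numpy as np

def project_ball(u, radius=1.0):
    """Metric projection onto E = {w : ||w|| <= radius}."""
    norm_u = np.linalg.norm(u)
    return u if norm_u <= radius else radius * u / norm_u

rng = np.random.default_rng(0)
u = 3.0 * rng.normal(size=3)          # a point (typically) outside the ball
v = project_ball(u)                   # v = P_E(u)

# Lemma 1: <u - v, w - v> <= 0 for every w in E; spot-check sampled w's.
for _ in range(5):
    w = project_ball(rng.normal(size=3))   # force w into E
    assert np.dot(u - v, w - v) <= 1e-12
```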
Lemma 3.
[34] Let $A : E \to H$ be a continuous and pseudomonotone mapping. Then $\breve{u} \in \Gamma$ if and only if $\breve{u}$ is a solution of the problem of finding $u \in E$ such that $\langle Av, v - u \rangle \ge 0$ for all $v \in E$.
Lemma 4.
[35] Suppose $\{\gamma_n\}$, $\{\phi_n\}$ and $\{\varrho_n\}$ are sequences in $[0, \infty)$ such that for all $n \ge 1$,
$$\gamma_{n+1} \le \gamma_n + \varrho_n(\gamma_n - \gamma_{n-1}) + \phi_n, \qquad \sum_{n=1}^{\infty} \phi_n < \infty,$$
and there exists $\varrho \in \mathbb{R}$ with $0 \le \varrho_n \le \varrho < 1$ for all $n \ge 1$. Then the following are satisfied:
(i) $\sum_{n=1}^{\infty} [\gamma_n - \gamma_{n-1}]_+ < \infty$, where $[a]_+ = \max\{a, 0\}$;
(ii) there exists $\gamma^* \in [0, \infty)$ with $\lim_{n \to \infty} \gamma_n = \gamma^*$.
Lemma 5.
[36] Let $E \subset H$ be a nonempty set and $\{u_j\}$ a sequence in $H$ such that the following are satisfied:
(a) for every $u \in E$, $\lim_{j \to \infty} \|u_j - u\|$ exists;
(b) every sequential weak cluster point of $\{u_j\}$ is in $E$.
Then $\{u_j\}$ converges weakly to a point in $E$.
Lemma 6.
Let $\{\varrho_n\}$ be a sequence of nonnegative real numbers, $\{\gamma_n\}$ a sequence of real numbers in $(0,1)$ with $\sum_{n=1}^{\infty} \gamma_n = \infty$, and $\{\delta_n\}$ a sequence of real numbers satisfying
$$\varrho_{n+1} \le (1 - \gamma_n)\varrho_n + \gamma_n \delta_n \quad \text{for all } n \ge 1.$$
If $\limsup_{j \to \infty} \delta_{n_j} \le 0$ for every subsequence $\{\varrho_{n_j}\}$ of $\{\varrho_n\}$ satisfying $\liminf_{j \to \infty} (\varrho_{n_j + 1} - \varrho_{n_j}) \ge 0$, then $\lim_{n \to \infty} \varrho_n = 0$.

3. A Self-Adaptive Subgradient Extragradient Scheme for Variational Inequality Problem

In this section, we give a detailed description of our proposed algorithms. First, we present a weak convergence analysis of the iterates generated by the algorithm toward a solution of the VI problem Equation (1) involving a pseudomonotone operator. We suppose the following assumptions for the analysis of our method.
Assumption 1.
A1 The feasible set $E$ of Equation (1) is a nonempty closed and convex subset of $H$.
A2 The solution set $\Gamma$ of Equation (1) is nonempty.
A3 $A : H \to H$ is pseudomonotone, $L$-Lipschitz continuous on $H$ and sequentially weakly continuous on $E$.
Remark 1.
For $0 \le \varrho_n \le \varrho$, it can be observed from Equation (5) that $\varrho_n \|u_n - u_{n-1}\|^2 \le \frac{1}{n^2}$ for all $n \in \mathbb{N}$. This implies that
$$\sum_{n=1}^{\infty} \varrho_n \|u_n - u_{n-1}\|^2 < \infty.$$
Algorithm 1 Adaptive Subgradient Extragradient Algorithm for Pseudomonotone Operator.
  • Initialization: Choose $u_0, u_1 \in H$, $\mu \in (0,1)$, $\varrho > 0$ and $\lambda_0 > 0$.
  • Iterative Steps: Given the current iterates $u_{n-1}$ and $u_n \in H$ ($n \ge 1$):
  •   Step 1. Set $t_n$ as
    $$t_n := u_n + \varrho_n (u_n - u_{n-1}), \tag{4}$$
      where
    $$\varrho_n := \begin{cases} \min\left\{ \varrho, \dfrac{1}{n^2 \|u_n - u_{n-1}\|^2} \right\}, & \text{if } u_n \ne u_{n-1}, \\ \varrho, & \text{otherwise}. \end{cases} \tag{5}$$
  •   Step 2. Compute
    $$s_n = P_E\left(t_n - \lambda_n A t_n\right).$$
      If $t_n = s_n$ or $A t_n = 0$, stop. Else, go to Step 3.
  •   Step 3. Construct
    $$T_n := \left\{ x \in H : \langle t_n - \lambda_n A t_n - s_n, x - s_n \rangle \le 0 \right\}$$
      and compute
    $$u_{n+1} = P_{T_n}\left(t_n - \lambda_n A s_n\right),$$
      where the step size sequence $\lambda_{n+1}$ is updated as follows:
    $$\lambda_{n+1} := \begin{cases} \min\left\{ \lambda_n, \Lambda_n \right\}, & \text{if } \langle A t_n - A s_n, u_{n+1} - s_n \rangle > 0, \\ \lambda_n, & \text{otherwise}, \end{cases} \tag{6}$$
      with
    $$\Lambda_n := \frac{\mu\left( \|t_n - s_n\|^2 + \|u_{n+1} - s_n\|^2 \right)}{2 \langle A t_n - A s_n, u_{n+1} - s_n \rangle}.$$
  • Set $n := n + 1$ and go back to Step 1.
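To make the scheme concrete, the following is a minimal Python sketch of Algorithm 1 (the experiments in Section 4 were run in MATLAB; this is not the authors' code). The operator A and the projection proj_E onto E are supplied by the caller, while the projection onto the half-space T_n is available in closed form; the default parameter values mirror the choices $\varrho = 0.5$, $\mu = 0.8$, $\lambda_0 = 0.6$ used in Section 4, and the tolerance-based stopping rule is an assumption.

```python
import numpy as np

def project_halfspace(x, a, b):
    """Projection onto the half-space {x : <a, x> <= b} (closed form for T_n)."""
    viol = np.dot(a, x) - b
    return x if viol <= 0 else x - viol * a / np.dot(a, a)

def algorithm1(A, proj_E, u0, u1, rho=0.5, mu=0.8, lam=0.6,
               max_iter=500, tol=1e-8):
    u_old, u = np.asarray(u0, dtype=float), np.asarray(u1, dtype=float)
    for n in range(1, max_iter + 1):
        # Step 1: inertial extrapolation with the parameter rule (5).
        diff2 = np.dot(u - u_old, u - u_old)
        rho_n = min(rho, 1.0 / (n ** 2 * diff2)) if diff2 > 0 else rho
        t = u + rho_n * (u - u_old)
        # Step 2: extragradient point; stop when t is (nearly) a fixed point.
        s = proj_E(t - lam * A(t))
        if np.linalg.norm(t - s) < tol:
            return s
        # Step 3: project onto the half-space T_n = {x : <a, x - s> <= 0}.
        a = t - lam * A(t) - s
        u_new = project_halfspace(t - lam * A(s), a, np.dot(a, s))
        # Step-size update (6): decrease lambda only when the rule demands it.
        denom = np.dot(A(t) - A(s), u_new - s)
        if denom > 0:
            num = mu * (np.dot(t - s, t - s) + np.dot(u_new - s, u_new - s))
            lam = min(lam, num / (2 * denom))
        u_old, u = u, u_new
    return u
```

Note that the Lipschitz constant $L$ never enters the iteration: the step size $\lambda_n$ is decreased only when the test in Equation (6) demands it, which is precisely what makes the scheme self-adaptive.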
Lemma 7.
The sequence $\{\lambda_n\}$ generated by Equation (6) is monotonically decreasing and bounded from below by $\min\left\{ \frac{\mu}{L}, \lambda_0 \right\}$.
Proof. 
It can be observed directly from Equation (6) that the sequence $\{\lambda_n\}$ is monotonically decreasing. Since $A$ is Lipschitz continuous with Lipschitz constant $L$, whenever $\langle A t_n - A s_n, u_{n+1} - s_n \rangle > 0$ we have
$$\begin{aligned} \frac{\mu\left( \|t_n - s_n\|^2 + \|u_{n+1} - s_n\|^2 \right)}{2 \langle A t_n - A s_n, u_{n+1} - s_n \rangle} &\ge \frac{\mu\left( \|t_n - s_n\|^2 + \|u_{n+1} - s_n\|^2 \right)}{2 \|A t_n - A s_n\| \, \|u_{n+1} - s_n\|} \\ &\ge \frac{\mu\left( \|t_n - s_n\|^2 + \|u_{n+1} - s_n\|^2 \right)}{2 L \|t_n - s_n\| \, \|u_{n+1} - s_n\|} \\ &\ge \frac{\mu\left( \|t_n - s_n\|^2 + \|u_{n+1} - s_n\|^2 \right)}{L \left( \|t_n - s_n\|^2 + \|u_{n+1} - s_n\|^2 \right)} = \frac{\mu}{L}. \end{aligned}$$
Hence $\lambda_n \ge \min\left\{ \frac{\mu}{L}, \lambda_0 \right\}$ for all $n$. □
Remark 2.
By Lemma 7, Equation (6) is well-defined, and
$$\lambda_{n+1} \langle A t_n - A s_n, u_{n+1} - s_n \rangle \le \frac{\mu}{2} \left( \|t_n - s_n\|^2 + \|u_{n+1} - s_n\|^2 \right).$$
Next, the following lemma and its proof are crucial for the convergence analysis of the sequence generated by Algorithm 1.
Lemma 8.
Let $A$ be an operator satisfying Assumption 1 (A3). Then, for all $\breve{u} \in \Gamma$, we have
$$\|u_{n+1} - \breve{u}\|^2 \le \|t_n - \breve{u}\|^2 - \left(1 - \frac{\mu \lambda_n}{\lambda_{n+1}}\right) \|t_n - s_n\|^2 - \left(1 - \frac{\mu \lambda_n}{\lambda_{n+1}}\right) \|u_{n+1} - s_n\|^2.$$
Proof. 
By Lemma 2 and the definition of $u_{n+1}$, we get
$$\begin{aligned} \|u_{n+1} - \breve{u}\|^2 &= \|P_{T_n}(t_n - \lambda_n A s_n) - \breve{u}\|^2 \\ &\le \|(t_n - \lambda_n A s_n) - \breve{u}\|^2 - \|(t_n - \lambda_n A s_n) - u_{n+1}\|^2 \\ &= \|t_n - \breve{u}\|^2 - \|t_n - u_{n+1}\|^2 + 2\lambda_n \langle \breve{u} - u_{n+1}, A s_n \rangle \\ &= \|t_n - \breve{u}\|^2 - \|t_n - u_{n+1}\|^2 + 2\lambda_n \langle \breve{u} - s_n, A s_n - A\breve{u} \rangle + 2\lambda_n \langle \breve{u} - s_n, A\breve{u} \rangle + 2\lambda_n \langle s_n - u_{n+1}, A s_n \rangle \\ &\le \|t_n - \breve{u}\|^2 - \|t_n - u_{n+1}\|^2 + 2\lambda_n \langle s_n - u_{n+1}, A s_n \rangle \\ &= \|t_n - \breve{u}\|^2 - \|t_n - s_n\|^2 - \|u_{n+1} - s_n\|^2 + 2\lambda_n \langle s_n - u_{n+1}, A s_n \rangle - 2\langle t_n - s_n, s_n - u_{n+1} \rangle \\ &= \|t_n - \breve{u}\|^2 - \|t_n - s_n\|^2 - \|u_{n+1} - s_n\|^2 + 2\langle t_n - \lambda_n A s_n - s_n, u_{n+1} - s_n \rangle. \end{aligned} \tag{7}$$
Using the definition of $s_n$ and the fact that $u_{n+1} \in T_n$, we have
$$\langle t_n - \lambda_n A s_n - s_n, u_{n+1} - s_n \rangle = \langle t_n - \lambda_n A t_n - s_n, u_{n+1} - s_n \rangle + \lambda_n \langle A t_n - A s_n, u_{n+1} - s_n \rangle \le \lambda_n \langle A t_n - A s_n, u_{n+1} - s_n \rangle. \tag{8}$$
Now, from the definition of $\lambda_{n+1}$, we have
$$\langle A t_n - A s_n, u_{n+1} - s_n \rangle \le \frac{\mu}{2\lambda_{n+1}} \left( \|t_n - s_n\|^2 + \|u_{n+1} - s_n\|^2 \right). \tag{9}$$
Combining Equations (7)–(9), we obtain the required result. □
Lemma 9.
Let $\{t_n\}$ be a sequence generated by Algorithm 1 and let Assumptions (A1)–(A3) be satisfied. If there exists a subsequence $\{t_{n_i}\}$ weakly convergent to $q \in H$ with $\lim_{i \to \infty} \|t_{n_i} - s_{n_i}\| = 0$, then $q \in \Gamma$.
Proof. 
Since $s_{n_i} = P_E(t_{n_i} - \lambda_{n_i} A t_{n_i})$, Lemma 1 gives
$$\langle t_{n_i} - \lambda_{n_i} A t_{n_i} - s_{n_i}, u - s_{n_i} \rangle \le 0 \quad \text{for all } u \in E.$$
This implies that
$$\frac{1}{\lambda_{n_i}} \langle t_{n_i} - s_{n_i}, u - s_{n_i} \rangle \le \langle A t_{n_i}, u - s_{n_i} \rangle \quad \text{for all } u \in E.$$
Furthermore,
$$\frac{1}{\lambda_{n_i}} \langle t_{n_i} - s_{n_i}, u - s_{n_i} \rangle + \langle A t_{n_i}, s_{n_i} - t_{n_i} \rangle \le \langle A t_{n_i}, u - t_{n_i} \rangle \quad \text{for all } u \in E. \tag{10}$$
By the hypothesis, $\{t_{n_i}\}$ is bounded; consequently, $\{s_{n_i}\}$ is also bounded, and from the Lipschitz continuity of $A$, $\{A t_{n_i}\}$ is bounded as well. Passing to the limit as $i \to \infty$ in Equation (10), we obtain
$$\liminf_{i \to \infty} \langle A t_{n_i}, u - t_{n_i} \rangle \ge 0 \quad \text{for all } u \in E. \tag{11}$$
Furthermore, we have
$$\langle A s_{n_i}, u - s_{n_i} \rangle = \langle A s_{n_i} - A t_{n_i}, u - t_{n_i} \rangle + \langle A t_{n_i}, u - t_{n_i} \rangle + \langle A s_{n_i}, t_{n_i} - s_{n_i} \rangle \quad \text{for all } u \in E, \tag{12}$$
and hence
$$\liminf_{i \to \infty} \langle A s_{n_i}, u - s_{n_i} \rangle \ge 0.$$
The last inequality follows from Equations (11) and (12) together with $\lim_{i \to \infty} \|A t_{n_i} - A s_{n_i}\| = 0$ (which in turn follows from the Lipschitz continuity of $A$ on $H$ and $\lim_{i \to \infty} \|t_{n_i} - s_{n_i}\| = 0$). Let $\{\eta_i\}$ be a decreasing sequence of positive numbers tending to 0 and, for each $i$, let $N_i$ be the smallest positive integer such that
$$\langle A s_{n_j}, u - s_{n_j} \rangle + \eta_i \ge 0 \quad \text{for all } j \ge N_i. \tag{13}$$
It can be observed that the sequence $\{N_i\}$ is increasing. Moreover, assume $A s_{N_i} \ne 0$ (otherwise $s_{N_i}$ is a solution) and set
$$\rho_{N_i} = \frac{A s_{N_i}}{\|A s_{N_i}\|^2};$$
then, for each $i$, we get $\langle A s_{N_i}, \rho_{N_i} \rangle = 1$. It can be inferred from Equation (13) that for any $i$,
$$\langle A s_{N_i}, u + \eta_i \rho_{N_i} - s_{N_i} \rangle \ge 0.$$
By the pseudomonotonicity of $A$, we have
$$\langle A(u + \eta_i \rho_{N_i}), u + \eta_i \rho_{N_i} - s_{N_i} \rangle \ge 0.$$
This implies that
$$\langle Au, u - s_{N_i} \rangle \ge \langle Au - A(u + \eta_i \rho_{N_i}), u + \eta_i \rho_{N_i} - s_{N_i} \rangle - \eta_i \langle Au, \rho_{N_i} \rangle. \tag{14}$$
To show that $q \in \Gamma$, we first show that $\lim_{i \to \infty} \eta_i \rho_{N_i} = 0$. Since $t_{n_i} \rightharpoonup q$ and $\lim_{i \to \infty} \|t_{n_i} - s_{n_i}\| = 0$, we have $s_{n_i} \rightharpoonup q$, and by the sequential weak lower semicontinuity of the norm mapping we get
$$0 < \|Aq\| \le \liminf_{i \to \infty} \|A s_{n_i}\|$$
(if $Aq = 0$, then $q$ is trivially a solution). Since $\{s_{N_i}\} \subset \{s_{n_i}\}$ and $\eta_i \to 0$ as $i \to \infty$, we get
$$0 \le \limsup_{i \to \infty} \|\eta_i \rho_{N_i}\| = \limsup_{i \to \infty} \frac{\eta_i}{\|A s_{N_i}\|} \le \frac{\limsup_{i \to \infty} \eta_i}{\liminf_{i \to \infty} \|A s_{n_i}\|} = 0.$$
Therefore, $\lim_{i \to \infty} \eta_i \rho_{N_i} = 0$. Now, since $A$ is uniformly continuous on bounded subsets of $H$, $\{s_{N_i}\}$ and $\{\rho_{N_i}\}$ are bounded and $\lim_{i \to \infty} \eta_i \rho_{N_i} = 0$, passing to the limit in Equation (14) we obtain
$$\liminf_{i \to \infty} \langle Au, u - s_{N_i} \rangle \ge 0.$$
Therefore, for all $u \in E$, we have
$$\langle Au, u - q \rangle = \lim_{i \to \infty} \langle Au, u - s_{N_i} \rangle = \liminf_{i \to \infty} \langle Au, u - s_{N_i} \rangle \ge 0.$$
By Lemma 3, $q \in \Gamma$, and hence the proof is complete. □
Remark 3.
As noted in [37,38], when the operator $A$ is monotone, the sequential weak-to-weak continuity assumption on $A$ is not needed in Lemma 9.
Theorem 1.
Let $A$ be an operator satisfying Assumption 1 (A1)–(A3). Then the sequence $\{u_n\}$ generated by Algorithm 1 converges weakly to a solution $\breve{u} \in \Gamma$ of the VI problem Equation (1).
Proof. 
By Lemma 8, we have
$$\|u_{n+1} - \breve{u}\|^2 \le \|t_n - \breve{u}\|^2 - \left(1 - \frac{\mu \lambda_n}{\lambda_{n+1}}\right)\|t_n - s_n\|^2 - \left(1 - \frac{\mu \lambda_n}{\lambda_{n+1}}\right)\|u_{n+1} - s_n\|^2. \tag{15}$$
Moreover, from Equation (4), we have
$$\begin{aligned} \|t_n - \breve{u}\|^2 &= \|u_n + \varrho_n(u_n - u_{n-1}) - \breve{u}\|^2 \\ &= \|(1 + \varrho_n)(u_n - \breve{u}) - \varrho_n(u_{n-1} - \breve{u})\|^2 \\ &= (1 + \varrho_n)\|u_n - \breve{u}\|^2 - \varrho_n \|u_{n-1} - \breve{u}\|^2 + \varrho_n(1 + \varrho_n)\|u_n - u_{n-1}\|^2, \end{aligned} \tag{16}$$
where the last equality follows from the identity $\|(1+\alpha)a - \alpha b\|^2 = (1+\alpha)\|a\|^2 - \alpha\|b\|^2 + \alpha(1+\alpha)\|a - b\|^2$ for $a, b \in H$ and $\alpha \in \mathbb{R}$.
Substituting Equation (16) into Equation (15), we have
$$\begin{aligned} \|u_{n+1} - \breve{u}\|^2 &\le (1 + \varrho_n)\|u_n - \breve{u}\|^2 - \varrho_n \|u_{n-1} - \breve{u}\|^2 + \varrho_n(1 + \varrho_n)\|u_n - u_{n-1}\|^2 \\ &\quad - \left(1 - \frac{\mu \lambda_n}{\lambda_{n+1}}\right)\|t_n - s_n\|^2 - \left(1 - \frac{\mu \lambda_n}{\lambda_{n+1}}\right)\|u_{n+1} - s_n\|^2 \\ &= \|u_n - \breve{u}\|^2 + \varrho_n\left(\|u_n - \breve{u}\|^2 - \|u_{n-1} - \breve{u}\|^2\right) + \varrho_n(1 + \varrho_n)\|u_n - u_{n-1}\|^2 \\ &\quad - \left(1 - \frac{\mu \lambda_n}{\lambda_{n+1}}\right)\|t_n - s_n\|^2 - \left(1 - \frac{\mu \lambda_n}{\lambda_{n+1}}\right)\|u_{n+1} - s_n\|^2. \end{aligned} \tag{17}$$
Observe that $\lim_{n \to \infty} \frac{\mu \lambda_n}{\lambda_{n+1}} = \mu$ with $0 < \mu < 1$; thus, there exists $n_0 \in \mathbb{N}$ such that $0 < \frac{\mu \lambda_n}{\lambda_{n+1}} < 1$ for all $n \ge n_0$. Consequently, for all $n \ge n_0$ we have
$$\|u_{n+1} - \breve{u}\|^2 \le \|u_n - \breve{u}\|^2 + \varrho_n\left(\|u_n - \breve{u}\|^2 - \|u_{n-1} - \breve{u}\|^2\right) + \varrho_n(1 + \varrho_n)\|u_n - u_{n-1}\|^2.$$
Now, applying Lemma 4 to the last inequality with $\gamma_n = \|u_n - \breve{u}\|^2$ and $\phi_n = \varrho_n(1 + \varrho_n)\|u_n - u_{n-1}\|^2$ (which is summable by Remark 1), we get
$$\sum_{n=1}^{\infty} \left[\|u_n - \breve{u}\|^2 - \|u_{n-1} - \breve{u}\|^2\right]_+ < \infty.$$
Consequently,
$$\lim_{n \to \infty} \left[\|u_n - \breve{u}\|^2 - \|u_{n-1} - \breve{u}\|^2\right]_+ = 0,$$
and $\lim_{n \to \infty} \|u_n - \breve{u}\|^2$ exists. On the other hand, from Equation (17) we have, for all $n \ge n_0$,
$$\begin{aligned} \left(1 - \frac{\mu \lambda_n}{\lambda_{n+1}}\right)\|t_n - s_n\|^2 + \left(1 - \frac{\mu \lambda_n}{\lambda_{n+1}}\right)\|u_{n+1} - s_n\|^2 &\le \|u_n - \breve{u}\|^2 - \|u_{n+1} - \breve{u}\|^2 + \varrho_n\left(\|u_n - \breve{u}\|^2 - \|u_{n-1} - \breve{u}\|^2\right) \\ &\quad + \varrho_n(1 + \varrho_n)\|u_n - u_{n-1}\|^2 \\ &\le \|u_n - \breve{u}\|^2 - \|u_{n+1} - \breve{u}\|^2 + \varrho_n\left[\|u_n - \breve{u}\|^2 - \|u_{n-1} - \breve{u}\|^2\right]_+ \\ &\quad + \varrho_n(1 + \varrho_n)\|u_n - u_{n-1}\|^2. \end{aligned}$$
This implies that
$$\lim_{n \to \infty} \|t_n - s_n\| = 0.$$
Moreover, we obtain from Equation (4) that
$$\|t_n - u_n\|^2 = \varrho_n^2 \|u_n - u_{n-1}\|^2 \le \varrho \cdot \varrho_n \|u_n - u_{n-1}\|^2 \to 0 \quad \text{as } n \to \infty;$$
hence,
$$\lim_{n \to \infty} \|t_n - u_n\| = 0. \tag{18}$$
Since $\lim_{n \to \infty} \|u_n - \breve{u}\|^2$ exists, the sequence $\{u_n\}$ is bounded. Let $\{u_{n_i}\}$ be a subsequence of $\{u_n\}$ such that $u_{n_i} \rightharpoonup z$; then from Equation (18) we have $t_{n_i} \rightharpoonup z$. Now, since $\lim_{n \to \infty} \|t_n - s_n\| = 0$, by Lemma 9 we get $z \in \Gamma$. Consequently, by Lemma 5, the sequence $\{u_n\}$ converges weakly to a solution of the VI problem. □
Notice that when $\varrho_n = 0$ in Algorithm 1, we obtain the following corollary.
Corollary 1.
Suppose that $A : E \to H$ satisfies (A3) in Assumption 1. Choose $u_0 \in H$, $\mu \in (0,1)$ and $\lambda_0 > 0$, and let $\{u_n\}$ and $\{s_n\}$ be the sequences generated by
$$s_n = P_E(u_n - \lambda_n A u_n), \qquad u_{n+1} = P_{T_n}(u_n - \lambda_n A s_n),$$
where $T_n = \{z \in H : \langle u_n - \lambda_n A u_n - s_n, z - s_n \rangle \le 0\}$ and the step size sequence $\lambda_{n+1}$ is updated as follows:
$$\lambda_{n+1} = \begin{cases} \min\left\{ \dfrac{\mu\left(\|u_n - s_n\|^2 + \|u_{n+1} - s_n\|^2\right)}{2\langle A u_n - A s_n, u_{n+1} - s_n \rangle}, \lambda_n \right\}, & \text{if } \langle A u_n - A s_n, u_{n+1} - s_n \rangle > 0, \\ \lambda_n, & \text{otherwise}. \end{cases}$$
Then, the sequences $\{u_n\}$ and $\{s_n\}$ converge weakly to a solution $\breve{u} \in \Gamma$ of the VI problem.

Strong Convergence Analysis

In this subsection, we present a modified version of Algorithm 1 that uses the viscosity method to establish strong convergence. Like Algorithm 1, the modified algorithm has an adaptive step size, that is, the step size does not depend on the Lipschitz constant. Moreover, the Lipschitz constant of the operator is not required for the convergence analysis of the modified algorithm.
For the convergence analysis, we suppose $g : H \to H$ to be a contraction mapping with contraction parameter $\tau \in [0, 1)$. Let $\{\zeta_n\}$ be a positive sequence such that $\zeta_n = o(\gamma_n)$, where $\{\gamma_n\} \subset (0,1)$ satisfies
$$\lim_{n \to \infty} \gamma_n = 0 \quad \text{and} \quad \sum_{n=1}^{\infty} \gamma_n = \infty.$$
The modified algorithm is as follows:
Algorithm 2 Adaptive Subgradient Extragradient Algorithm for Pseudomonotone Operator.
  • Initialization: Choose $u_0, u_1 \in H$, $\mu \in (0,1)$, $\varrho > 0$ and $\lambda_0 > 0$.
  • Iterative Steps: Given the current iterates $u_{n-1}$ and $u_n \in H$ ($n \ge 1$), choose $\varrho_n$ such that $0 \le \varrho_n \le \hat{\varrho}_n$, where
    $$\hat{\varrho}_n := \begin{cases} \min\left\{ \dfrac{\zeta_n}{\|u_n - u_{n-1}\|}, \varrho \right\}, & \text{if } u_n \ne u_{n-1}, \\ \varrho, & \text{otherwise}. \end{cases} \tag{20}$$
  •   Step 1. Set $t_n$ as
    $$t_n := u_n + \varrho_n (u_n - u_{n-1}). \tag{21}$$
  •   Step 2. Compute
    $$s_n = P_E\left(t_n - \lambda_n A t_n\right).$$
      If $t_n = s_n$ or $A t_n = 0$, stop. Else, go to Step 3.
  •   Step 3. Construct
    $$T_n := \left\{ x \in H : \langle t_n - \lambda_n A t_n - s_n, x - s_n \rangle \le 0 \right\}$$
      and compute
    $$u_{n+1} = \gamma_n g(z_n) + (1 - \gamma_n) z_n,$$
      where
    $$z_n = P_{T_n}\left(t_n - \lambda_n A s_n\right)$$
      and the step size sequence $\lambda_{n+1}$ is updated as follows:
    $$\lambda_{n+1} := \begin{cases} \min\left\{ \lambda_n, \Lambda_n \right\}, & \text{if } \langle A t_n - A s_n, z_n - s_n \rangle > 0, \\ \lambda_n, & \text{otherwise}, \end{cases}$$
      with
    $$\Lambda_n := \frac{\mu\left( \|t_n - s_n\|^2 + \|z_n - s_n\|^2 \right)}{2 \langle A t_n - A s_n, z_n - s_n \rangle}.$$
  • Set $n := n + 1$ and go back to Step 1.
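The following is a matching Python sketch of Algorithm 2, again as an illustration rather than the authors' code. The contraction $g(x) = \frac{1}{2}x$, $\gamma_n = \frac{1}{13n+2}$ and $\zeta_n = \frac{1}{(n+1)^2}$ are the choices used in Section 4; the helper names, defaults, and stopping rule are assumptions.

```python
import numpy as np

def project_halfspace(x, a, b):
    """Projection onto {x : <a, x> <= b}; closed form for T_n."""
    viol = np.dot(a, x) - b
    return x if viol <= 0 else x - viol * a / np.dot(a, a)

def algorithm2(A, proj_E, u0, u1, rho=0.8, mu=0.9, lam=0.5,
               max_iter=500, tol=1e-8):
    g = lambda x: 0.5 * x                        # contraction with tau = 1/2
    u_old, u = np.asarray(u0, dtype=float), np.asarray(u1, dtype=float)
    for n in range(1, max_iter + 1):
        gamma = 1.0 / (13 * n + 2)               # gamma_n -> 0, sum = infinity
        zeta = 1.0 / (n + 1) ** 2                # zeta_n = o(gamma_n)
        diff = np.linalg.norm(u - u_old)
        rho_n = min(zeta / diff, rho) if diff > 0 else rho   # rule (20)
        t = u + rho_n * (u - u_old)              # inertial step (21)
        s = proj_E(t - lam * A(t))
        if np.linalg.norm(t - s) < tol:
            return s
        a = t - lam * A(t) - s                   # half-space T_n
        z = project_halfspace(t - lam * A(s), a, np.dot(a, s))
        u_new = gamma * g(z) + (1 - gamma) * z   # viscosity step
        denom = np.dot(A(t) - A(s), z - s)
        if denom > 0:
            num = mu * (np.dot(t - s, t - s) + np.dot(z - s, z - s))
            lam = min(lam, num / (2 * denom))
        u_old, u = u, u_new
    return u
```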
Remark 4.
It can be observed from Equation (20) and $\zeta_n = o(\gamma_n)$ (that is, $\lim_{n \to \infty} \frac{\zeta_n}{\gamma_n} = 0$) that
$$\lim_{n \to \infty} \frac{\varrho_n}{\gamma_n} \|u_n - u_{n-1}\| \le \lim_{n \to \infty} \frac{\zeta_n}{\gamma_n} = 0.$$
Theorem 2.
Let $A$ be an operator satisfying Assumption 1 (A1)–(A3). Then the sequence $\{u_n\}$ generated by Algorithm 2 converges strongly to $\breve{u} \in \Gamma$, where $\breve{u} = P_{\Gamma} g(\breve{u})$.
Proof. 
To show that the iterates generated by Algorithm 2 converge strongly to $\breve{u} \in \Gamma$, we establish four claims.
Claim I: The sequence { u n } generated by Algorithm 2 is bounded.
In fact, arguing as in Lemma 8 with $z_n$ in place of $u_{n+1}$, we obtain
$$\|z_n - \breve{u}\|^2 \le \|t_n - \breve{u}\|^2 - \left(1 - \frac{\mu \lambda_n}{\lambda_{n+1}}\right)\|t_n - s_n\|^2 - \left(1 - \frac{\mu \lambda_n}{\lambda_{n+1}}\right)\|z_n - s_n\|^2. \tag{22}$$
Recall that there exists $n_0 \in \mathbb{N}$ such that $0 < \frac{\mu \lambda_n}{\lambda_{n+1}} < 1$ for all $n \ge n_0$. This implies that $1 - \frac{\mu \lambda_n}{\lambda_{n+1}} > 0$ for all $n \ge n_0$. Thus, from Equation (22) we get
$$\|z_n - \breve{u}\| \le \|t_n - \breve{u}\| \quad \text{for all } n \ge n_0. \tag{23}$$
On the other hand,
$$\begin{aligned} \|t_n - \breve{u}\| &= \|u_n + \varrho_n(u_n - u_{n-1}) - \breve{u}\| \\ &\le \|u_n - \breve{u}\| + \varrho_n \|u_n - u_{n-1}\| \\ &= \|u_n - \breve{u}\| + \gamma_n \cdot \frac{\varrho_n}{\gamma_n}\|u_n - u_{n-1}\|. \end{aligned} \tag{24}$$
It follows from Remark 4 that there exists a constant $C_1 > 0$ such that
$$\frac{\varrho_n}{\gamma_n}\|u_n - u_{n-1}\| \le C_1 \quad \text{for all } n.$$
Putting Equations (23) and (24) and the last inequality together, we obtain
$$\|z_n - \breve{u}\| \le \|t_n - \breve{u}\| \le \|u_n - \breve{u}\| + \gamma_n C_1 \quad \text{for all } n \ge n_0. \tag{25}$$
On the other hand, we have
$$\begin{aligned} \|u_{n+1} - \breve{u}\| &= \|\gamma_n g(z_n) + (1 - \gamma_n) z_n - \breve{u}\| \\ &= \|\gamma_n (g(z_n) - \breve{u}) + (1 - \gamma_n)(z_n - \breve{u})\| \\ &\le \gamma_n \|g(z_n) - \breve{u}\| + (1 - \gamma_n)\|z_n - \breve{u}\| \\ &\le \gamma_n \|g(z_n) - g(\breve{u})\| + \gamma_n \|g(\breve{u}) - \breve{u}\| + (1 - \gamma_n)\|z_n - \breve{u}\| \\ &\le \gamma_n \tau \|z_n - \breve{u}\| + \gamma_n \|g(\breve{u}) - \breve{u}\| + (1 - \gamma_n)\|z_n - \breve{u}\| \\ &= (1 - (1 - \tau)\gamma_n)\|z_n - \breve{u}\| + \gamma_n \|g(\breve{u}) - \breve{u}\|. \end{aligned} \tag{26}$$
Putting Equation (25) into Equation (26), we get, for all $n \ge n_0$,
$$\begin{aligned} \|u_{n+1} - \breve{u}\| &\le (1 - (1 - \tau)\gamma_n)\|u_n - \breve{u}\| + \gamma_n C_1 + \gamma_n \|g(\breve{u}) - \breve{u}\| \\ &= (1 - (1 - \tau)\gamma_n)\|u_n - \breve{u}\| + (1 - \tau)\gamma_n \frac{C_1 + \|g(\breve{u}) - \breve{u}\|}{1 - \tau} \\ &\le \max\left\{ \|u_n - \breve{u}\|, \frac{C_1 + \|g(\breve{u}) - \breve{u}\|}{1 - \tau} \right\} \\ &\le \cdots \le \max\left\{ \|u_{n_0} - \breve{u}\|, \frac{C_1 + \|g(\breve{u}) - \breve{u}\|}{1 - \tau} \right\}. \end{aligned}$$
Therefore, $\{u_n\}$ is bounded, and thus $\{t_n\}$, $\{z_n\}$ and $\{g(z_n)\}$ are bounded as well.
Claim II: There exists $C_4 > 0$ such that, for all $n \ge n_0$,
$$\left(1 - \frac{\mu \lambda_n}{\lambda_{n+1}}\right)\|t_n - s_n\|^2 + \left(1 - \frac{\mu \lambda_n}{\lambda_{n+1}}\right)\|z_n - s_n\|^2 \le \|u_n - \breve{u}\|^2 - \|u_{n+1} - \breve{u}\|^2 + \gamma_n C_4.$$
In fact, from the definition of $u_{n+1}$ and the convexity of $\|\cdot\|^2$, we have
$$\begin{aligned} \|u_{n+1} - \breve{u}\|^2 &\le \gamma_n \|g(z_n) - \breve{u}\|^2 + (1 - \gamma_n)\|z_n - \breve{u}\|^2 \\ &\le \gamma_n \left(\|g(z_n) - g(\breve{u})\| + \|g(\breve{u}) - \breve{u}\|\right)^2 + (1 - \gamma_n)\|z_n - \breve{u}\|^2 \\ &\le \gamma_n \left(\tau \|z_n - \breve{u}\| + \|g(\breve{u}) - \breve{u}\|\right)^2 + (1 - \gamma_n)\|z_n - \breve{u}\|^2 \\ &\le \gamma_n \left(\|z_n - \breve{u}\|^2 + 2\|z_n - \breve{u}\| \cdot \|g(\breve{u}) - \breve{u}\| + \|g(\breve{u}) - \breve{u}\|^2\right) + (1 - \gamma_n)\|z_n - \breve{u}\|^2 \\ &\le \gamma_n \|z_n - \breve{u}\|^2 + (1 - \gamma_n)\|z_n - \breve{u}\|^2 + \gamma_n C_2, \quad \text{for some } C_2 > 0, \\ &= \|z_n - \breve{u}\|^2 + \gamma_n C_2, \end{aligned} \tag{28}$$
where the existence of $C_2$ follows from the boundedness of $\{z_n\}$.
Substituting Equation (22) into Equation (28), we obtain
$$\|u_{n+1} - \breve{u}\|^2 \le \|t_n - \breve{u}\|^2 - \left(1 - \frac{\mu \lambda_n}{\lambda_{n+1}}\right)\|t_n - s_n\|^2 - \left(1 - \frac{\mu \lambda_n}{\lambda_{n+1}}\right)\|z_n - s_n\|^2 + \gamma_n C_2. \tag{29}$$
From Equation (25), we get
$$\begin{aligned} \|t_n - \breve{u}\|^2 &\le \left(\|u_n - \breve{u}\| + \gamma_n C_1\right)^2 \\ &= \|u_n - \breve{u}\|^2 + \gamma_n \left(2 C_1 \|u_n - \breve{u}\| + \gamma_n C_1^2\right) \\ &\le \|u_n - \breve{u}\|^2 + \gamma_n C_3, \quad \text{for some } C_3 > 0. \end{aligned} \tag{30}$$
Putting Equations (29) and (30) together, we have
$$\|u_{n+1} - \breve{u}\|^2 \le \|u_n - \breve{u}\|^2 + \gamma_n C_3 - \left(1 - \frac{\mu \lambda_n}{\lambda_{n+1}}\right)\|t_n - s_n\|^2 - \left(1 - \frac{\mu \lambda_n}{\lambda_{n+1}}\right)\|z_n - s_n\|^2 + \gamma_n C_2.$$
Thus,
$$\left(1 - \frac{\mu \lambda_n}{\lambda_{n+1}}\right)\|t_n - s_n\|^2 + \left(1 - \frac{\mu \lambda_n}{\lambda_{n+1}}\right)\|z_n - s_n\|^2 \le \|u_n - \breve{u}\|^2 - \|u_{n+1} - \breve{u}\|^2 + \gamma_n C_4,$$
where $C_4 = C_2 + C_3$.
Claim III: There exists $C > 0$ such that, for all $n \ge n_0$,
$$\|u_{n+1} - \breve{u}\|^2 \le (1 - (1 - \tau)\gamma_n)\|u_n - \breve{u}\|^2 + (1 - \tau)\gamma_n \left[\frac{2}{1 - \tau}\langle g(\breve{u}) - \breve{u}, u_{n+1} - \breve{u} \rangle + \frac{3C}{1 - \tau} \cdot \frac{\varrho_n}{\gamma_n}\|u_n - u_{n-1}\|\right].$$
In fact, we have
$$\begin{aligned} \|t_n - \breve{u}\|^2 &= \|u_n + \varrho_n(u_n - u_{n-1}) - \breve{u}\|^2 \\ &= \|u_n - \breve{u}\|^2 + \varrho_n^2\|u_n - u_{n-1}\|^2 + 2\varrho_n \langle u_n - \breve{u}, u_n - u_{n-1} \rangle \\ &\le \|u_n - \breve{u}\|^2 + \varrho_n^2\|u_n - u_{n-1}\|^2 + 2\varrho_n \|u_n - \breve{u}\| \cdot \|u_n - u_{n-1}\|. \end{aligned} \tag{32}$$
Using the inequality $\|t + s\|^2 \le \|t\|^2 + 2\langle s, t + s \rangle$, we get
$$\begin{aligned} \|u_{n+1} - \breve{u}\|^2 &= \|\gamma_n g(z_n) + (1 - \gamma_n) z_n - \breve{u}\|^2 \\ &= \|\gamma_n (g(z_n) - g(\breve{u})) + (1 - \gamma_n)(z_n - \breve{u}) + \gamma_n (g(\breve{u}) - \breve{u})\|^2 \\ &\le \|\gamma_n (g(z_n) - g(\breve{u})) + (1 - \gamma_n)(z_n - \breve{u})\|^2 + 2\gamma_n \langle g(\breve{u}) - \breve{u}, u_{n+1} - \breve{u} \rangle \\ &\le \gamma_n \|g(z_n) - g(\breve{u})\|^2 + (1 - \gamma_n)\|z_n - \breve{u}\|^2 + 2\gamma_n \langle g(\breve{u}) - \breve{u}, u_{n+1} - \breve{u} \rangle \\ &\le \gamma_n \tau^2 \|z_n - \breve{u}\|^2 + (1 - \gamma_n)\|z_n - \breve{u}\|^2 + 2\gamma_n \langle g(\breve{u}) - \breve{u}, u_{n+1} - \breve{u} \rangle \\ &\le (1 - (1 - \tau)\gamma_n)\|z_n - \breve{u}\|^2 + 2\gamma_n \langle g(\breve{u}) - \breve{u}, u_{n+1} - \breve{u} \rangle \\ &\le (1 - (1 - \tau)\gamma_n)\|t_n - \breve{u}\|^2 + 2\gamma_n \langle g(\breve{u}) - \breve{u}, u_{n+1} - \breve{u} \rangle, \end{aligned} \tag{33}$$
where the last inequality follows from Equation (23). Substituting Equation (32) into Equation (33), we get, for all $n \ge n_0$,
$$\begin{aligned} \|u_{n+1} - \breve{u}\|^2 &\le (1 - (1 - \tau)\gamma_n)\left(\|u_n - \breve{u}\|^2 + \varrho_n^2\|u_n - u_{n-1}\|^2 + 2\varrho_n\|u_n - \breve{u}\| \cdot \|u_n - u_{n-1}\|\right) + 2\gamma_n \langle g(\breve{u}) - \breve{u}, u_{n+1} - \breve{u} \rangle \\ &\le (1 - (1 - \tau)\gamma_n)\|u_n - \breve{u}\|^2 + \varrho_n \|u_n - u_{n-1}\|\left(2\|u_n - \breve{u}\| + \varrho_n\|u_n - u_{n-1}\|\right) + 2\gamma_n \langle g(\breve{u}) - \breve{u}, u_{n+1} - \breve{u} \rangle \\ &\le (1 - (1 - \tau)\gamma_n)\|u_n - \breve{u}\|^2 + 3C \varrho_n \|u_n - u_{n-1}\| + 2\gamma_n \langle g(\breve{u}) - \breve{u}, u_{n+1} - \breve{u} \rangle \\ &= (1 - (1 - \tau)\gamma_n)\|u_n - \breve{u}\|^2 + (1 - \tau)\gamma_n\left[\frac{2}{1 - \tau}\langle g(\breve{u}) - \breve{u}, u_{n+1} - \breve{u} \rangle + \frac{3C}{1 - \tau} \cdot \frac{\varrho_n}{\gamma_n}\|u_n - u_{n-1}\|\right], \end{aligned}$$
where $C = \sup_{n \in \mathbb{N}} \left\{ \|u_n - \breve{u}\|, \varrho_n \|u_n - u_{n-1}\| \right\} > 0$.
Claim IV: The sequence $\{\|u_n - \breve{u}\|\}$ converges to zero. By Lemma 6, it suffices to show that $\limsup_{k \to \infty} \langle g(\breve{u}) - \breve{u}, u_{n_k + 1} - \breve{u} \rangle \le 0$ for every subsequence $\{\|u_{n_k} - \breve{u}\|\}$ of $\{\|u_n - \breve{u}\|\}$ such that
$$\liminf_{k \to \infty} \left( \|u_{n_k + 1} - \breve{u}\| - \|u_{n_k} - \breve{u}\| \right) \ge 0. \tag{34}$$
To this end, let $\{\|u_{n_k} - \breve{u}\|\}$ be a subsequence of $\{\|u_n - \breve{u}\|\}$ such that Equation (34) is satisfied. Then, by Claim II, we get
$$\begin{aligned} \limsup_{k \to \infty} &\left[\left(1 - \frac{\mu \lambda_{n_k}}{\lambda_{n_k + 1}}\right)\|t_{n_k} - s_{n_k}\|^2 + \left(1 - \frac{\mu \lambda_{n_k}}{\lambda_{n_k + 1}}\right)\|z_{n_k} - s_{n_k}\|^2\right] \\ &\le \limsup_{k \to \infty} \left[\|u_{n_k} - \breve{u}\|^2 - \|u_{n_k + 1} - \breve{u}\|^2 + \gamma_{n_k} C_4\right] \\ &\le \limsup_{k \to \infty} \left[\|u_{n_k} - \breve{u}\|^2 - \|u_{n_k + 1} - \breve{u}\|^2\right] + \limsup_{k \to \infty} \gamma_{n_k} C_4 \\ &= -\liminf_{k \to \infty} \left[\|u_{n_k + 1} - \breve{u}\|^2 - \|u_{n_k} - \breve{u}\|^2\right] \\ &\le 0. \end{aligned}$$
Therefore,
$$\lim_{k \to \infty} \|t_{n_k} - s_{n_k}\| = 0 \tag{35}$$
and
$$\lim_{k \to \infty} \|z_{n_k} - s_{n_k}\| = 0. \tag{36}$$
Next, we show that
$$\|u_{n_k + 1} - u_{n_k}\| \to 0 \quad \text{as } k \to \infty.$$
In fact, we have
$$\|u_{n_k + 1} - z_{n_k}\| = \gamma_{n_k} \|z_{n_k} - g(z_{n_k})\| \to 0 \quad \text{as } k \to \infty,$$
and
$$\|u_{n_k} - t_{n_k}\| = \varrho_{n_k} \|u_{n_k} - u_{n_k - 1}\| = \gamma_{n_k} \cdot \frac{\varrho_{n_k}}{\gamma_{n_k}} \|u_{n_k} - u_{n_k - 1}\| \to 0 \quad \text{as } k \to \infty, \tag{37}$$
so that, by the triangle inequality together with Equations (35) and (36),
$$\|u_{n_k + 1} - u_{n_k}\| \le \|u_{n_k + 1} - z_{n_k}\| + \|z_{n_k} - s_{n_k}\| + \|s_{n_k} - t_{n_k}\| + \|t_{n_k} - u_{n_k}\| \to 0 \quad \text{as } k \to \infty.$$
It now follows from the boundedness of $\{u_{n_k}\}$ that there exists a subsequence $\{u_{n_{k_j}}\}$ of $\{u_{n_k}\}$ such that, for some $z \in H$, $u_{n_{k_j}} \rightharpoonup z$ and
$$\limsup_{k \to \infty} \langle g(\breve{u}) - \breve{u}, u_{n_k} - \breve{u} \rangle = \lim_{j \to \infty} \langle g(\breve{u}) - \breve{u}, u_{n_{k_j}} - \breve{u} \rangle = \langle g(\breve{u}) - \breve{u}, z - \breve{u} \rangle. \tag{38}$$
From Equation (37), we have $t_{n_{k_j}} \rightharpoonup z$ as $j \to \infty$, from which it follows, using Equation (35) and Lemma 9, that $z \in \Gamma$. From Equation (38) and the fact that $\breve{u} = P_{\Gamma} g(\breve{u})$, we have
$$\limsup_{k \to \infty} \langle g(\breve{u}) - \breve{u}, u_{n_k} - \breve{u} \rangle = \langle g(\breve{u}) - \breve{u}, z - \breve{u} \rangle \le 0. \tag{39}$$
Since $\|u_{n_k + 1} - u_{n_k}\| \to 0$, putting this and Equation (39) together, we get
$$\limsup_{k \to \infty} \langle g(\breve{u}) - \breve{u}, u_{n_k + 1} - \breve{u} \rangle \le \limsup_{k \to \infty} \langle g(\breve{u}) - \breve{u}, u_{n_k} - \breve{u} \rangle = \langle g(\breve{u}) - \breve{u}, z - \breve{u} \rangle \le 0. \tag{40}$$
Thus, the desired result follows from $\lim_{k \to \infty} \frac{\varrho_{n_k}}{\gamma_{n_k}} \|u_{n_k} - u_{n_k - 1}\| = 0$, Equation (40), Claim III and Lemma 6. □

4. Computational Experiments

Numerical experiments are presented in this section to demonstrate the performance of our proposed methods. The codes were run on a PC with an Intel(R) Core(TM) i5-6200 CPU @ 2.30 GHz 2.40 GHz and 8.00 GB RAM, using MATLAB version 9.5 (R2018b). Throughout these examples, the y-axis represents $D_n = \|u_{n+1} - u_n\|$ and the x-axis represents the number of iterations or the execution time (in seconds). The following examples were considered for the numerical experiments on the two proposed algorithms.
Example 1.
Let the operator $A$ be defined on $E \subset \mathbb{R}^n$ as follows:
$$A u = (B B^T + S + D) u + q,$$
where $q \in \mathbb{R}^n$, $B$ is an $n \times n$ matrix, $S$ is an $n \times n$ skew-symmetric matrix, and $D$ is an $n \times n$ diagonal matrix whose diagonal entries are nonnegative. These matrices and the vector $q$ are randomly generated with entries in $(0,1)$. The set $E \subset \mathbb{R}^n$ is closed and convex and defined as
$$E = \{u \in \mathbb{R}^n : C u \le d\},$$
where $C$ is a $100 \times n$ matrix and $d$ is a nonnegative vector. It is clear that $A$ is monotone and $L$-Lipschitz continuous with $L = \max(\mathrm{eig}(B B^T + S + D))$.
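The following is a sketch of how the random test data of Example 1 can be generated; the sizes and the uniform-(0,1) sampling follow the text, while the seed and the use of the spectral norm as the Lipschitz constant are assumptions. Projecting onto $E = \{u : Cu \le d\}$ has no closed form and would be delegated to a small quadratic-programming solver in a full implementation.

```python
import numpy as np

rng = np.random.default_rng(2020)
n = 50
B = rng.uniform(0.0, 1.0, (n, n))
S0 = rng.uniform(0.0, 1.0, (n, n))
S = S0 - S0.T                            # skew-symmetric matrix
D = np.diag(rng.uniform(0.0, 1.0, n))    # nonnegative diagonal matrix
q = rng.uniform(0.0, 1.0, n)
M = B @ B.T + S + D

A = lambda u: M @ u + q                  # monotone affine operator
C = rng.uniform(0.0, 1.0, (100, n))      # feasible set E = {u : C u <= d}
d = rng.uniform(0.0, 1.0, 100)           # nonnegative right-hand side
L = np.linalg.norm(M, 2)                 # Lipschitz constant of A
```

Note that $L$ is only needed by the fixed-step comparison algorithms; the proposed Algorithms 1 and 2 never use it.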
Example 2.
The following problem was considered by Sun [39] (page 9, Example 5), with an operator defined as
$$A u = G u + H u,$$
where
$$G u = \left(g_1(u), g_2(u), \ldots, g_n(u)\right), \qquad H u = E u + d,$$
$$g_i(u) = u_{i-1}^2 + u_i^2 + u_{i-1} u_i + u_i u_{i+1}, \quad i = 1, 2, \ldots, n, \quad u_0 = u_{n+1} = 0.$$
Here, $E$ is an $n \times n$ square matrix with entries defined as
$$e_{l,m} = \begin{cases} 4, & m = l, \\ 1, & m - l = 1, \\ -2, & l - m = 1, \\ 0, & \text{otherwise}, \end{cases}$$
while $d = (-1, -1, \ldots, -1)$.
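The following is a sketch of this operator in Python, assuming 1-based components $u_1, \ldots, u_n$ with the boundary convention $u_0 = u_{n+1} = 0$ from the text; the sign and placement of the off-diagonal entries follow the reconstruction of $e_{l,m}$ above, and the function name is illustrative.

```python
import numpy as np

def make_example2_operator(n):
    # Tridiagonal matrix E: 4 on the diagonal, 1 above, -2 below.
    E = 4 * np.eye(n) + np.eye(n, k=1) - 2 * np.eye(n, k=-1)
    d = -np.ones(n)
    def A(u):
        up = np.concatenate(([0.0], u, [0.0]))   # pad with u_0 = u_{n+1} = 0
        # g_i(u) = u_{i-1}^2 + u_i^2 + u_{i-1} u_i + u_i u_{i+1}
        g = up[:-2]**2 + up[1:-1]**2 + up[:-2]*up[1:-1] + up[1:-1]*up[2:]
        return g + E @ u + d
    return A
```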
Example 3.
Let $H = \ell_2$ and $E = \{u \in H : \|u\| \le 5\}$. Define an operator $A$ as follows:
$$A u = (7 - \|u\|) u, \quad u \in H,$$
where $\|u\| = \sqrt{\sum_{i=1}^{\infty} |u_i|^2}$. Observe that the solution set $\Gamma$ is nonempty and that the operator is pseudomonotone but not monotone, with Lipschitz constant $L = 11$. The projection onto $E$ is explicitly computed as
$$P_E u = \begin{cases} u, & \text{if } \|u\| \le 5, \\ \dfrac{5u}{\|u\|}, & \text{otherwise}. \end{cases}$$
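A runnable sketch of Example 3, truncated to $\mathbb{R}^m$ as a finite-dimensional stand-in for $\ell_2$, follows; the operator, radius, and projection mirror the text, while the truncation dimension and starting points are assumptions.

```python
import numpy as np

def A(u):
    return (7.0 - np.linalg.norm(u)) * u

def proj_E(u, radius=5.0):
    norm_u = np.linalg.norm(u)
    return u if norm_u <= radius else radius * u / norm_u

# e.g., running the Algorithm 1 sketch from Section 3 on this problem:
# m = 10
# rng = np.random.default_rng(0)
# u0, u1 = rng.normal(size=m), rng.normal(size=m)
# sol = algorithm1(A, proj_E, u0, u1)
```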

4.1. Comparative Analysis for Algorithm 1

Here, we give the numerical experiments for Algorithm 1 in comparison with some existing algorithms in the literature. We use the following algorithms with the corresponding control parameter choices:
  • Dong et al. [19] on page 3 (shortly, iEgA), with $\alpha_n = 0.40$, $\lambda_n = 0.80$ and $\tau = \frac{1}{1.5 L}$.
  • Thong et al. [25] on page 5 (shortly, ISEGM), with $\alpha = 0.20$ and $\tau = \frac{0.5 - 2\alpha - 0.5\alpha^2}{2 L (0.5\alpha + 0.5\alpha^2)}$.
  • Dong et al. [17] Algorithm 3.1 (shortly, iPCA), with $\alpha_n = 0.40$, $\gamma = 1.5$ and $\tau = \frac{1}{2L}$.
  • Yang et al. [40] Algorithm 3.1 (shortly, Alg3.1), with $\alpha_n = 0.15$, $\mu = 0.5$ and $\lambda_1 = \frac{0.7}{26}$.
  • Thong et al. [41] Algorithm 3.1 (shortly, WISEGM), with $\alpha = 0.80$, $\mu = 0.9$ and $\tau_1 = 1$.
For Algorithm 1 (shortly, Algo1) we chose ϱ = 0.50 , μ = 0.80 and λ 0 = 0.60 . The results are shown in Figure 1, Figure 2, Figure 3, Figure 4, Figure 5, Figure 6, Figure 7 and Figure 8.
It can be clearly seen from Figures 1–4 (Example 1), Figures 5 and 6 (Example 2) and Figures 7 and 8 (Example 3) that the proposed algorithm requires fewer iterations and less CPU time than the compared algorithms.

4.2. Comparative Results of Algorithm 2

For the numerical results on Algorithm 2, we used the following strongly convergent algorithms for comparison:
  • Kraikaew et al. [42] on page 6 (shortly, hEgA), with $\alpha_k = \frac{1}{100(k+2)}$ and $\tau = \frac{1}{1.5L}$.
  • Yang et al. [27] Algorithm 2 (shortly, Alg.2), with $\alpha_k = \frac{1}{100(k+2)}$, $\mu = 0.4$ and $\lambda_0 = \frac{0.7}{L}$.
  • Shehu et al. [29] Algorithm 4.3 (shortly, Algorithm4.3), with $\alpha_n = \frac{1}{13n+2}$, $\gamma = 1.99$ and $\lambda_0 = \frac{0.9}{L}$.
  • Thong et al. [41] Algorithm 3.2 (shortly, SISEGM), with $\alpha = 1$, $\mu = 0.9$, $f(x) = \frac{1}{2}x$, $\beta_n = \frac{1}{n+1}$, $\epsilon_n = \frac{1}{(n+1)^2}$ and $\tau_1 = 1$.
For Algorithm 2 (shortly, Algo2), we take $\varrho = 0.8$, $\mu = 0.9$, $g(x) = \frac{1}{2}x$, $\gamma_n = \frac{1}{13n+2}$, $\zeta_n = \frac{1}{(n+1)^2}$ and $\lambda_0 = 0.5$. The results are shown in Figure 9, Figure 10, Figure 11, Figure 12, Figure 13, Figure 14, Figure 15 and Figure 16.
For the strongly convergent algorithms, it can be observed from Figures 9–12 (Example 1), Figures 13 and 14 (Example 2) and Figures 15 and 16 (Example 3) that the proposed algorithm is faster, more efficient and more robust than the compared existing algorithms.

5. Conclusions

Self-adaptive subgradient extragradient algorithms with an inertial extrapolation step were presented in this work. The proposed algorithms involve a more general class of operators, and the iterates generated by the first scheme converge weakly to a solution of the variational inequality problem. Furthermore, a modified version of the proposed algorithm using the viscosity method is given to obtain a strongly convergent algorithm. The main advantages of the proposed algorithms are that they do not require prior knowledge of the Lipschitz constant of the cost operator and that, owing to the inertial extrapolation step, the generated iterates converge faster to a solution of the problem. A numerical comparison of the proposed algorithms with some existing state-of-the-art algorithms shows that the proposed algorithms are fast, robust and efficient.

Author Contributions

Conceptualization, J.A. and A.H.I.; methodology, J.A.; software, H.u.R.; validation, J.A., P.K. and A.H.I.; formal analysis, J.A.; investigation, P.K.; resources, P.K.; data curation, H.u.R.; writing—original draft preparation, J.A.; writing—review and editing, J.A. and A.H.I.; visualization, H.u.R.; supervision, P.K.; project administration, P.K.; funding acquisition, P.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Center of Excellence in Theoretical and Computational Science (TaCS-CoE), Faculty of Science, KMUTT. The first, third and fourth authors were supported by the Petchra Pra Jom Klao Ph.D. Research Scholarship from King Mongkut’s University of Technology Thonburi (Grant No. 38/2018, 35/2017 and 16/2018, respectively).

Acknowledgments

The authors acknowledge the financial support provided by the Center of Excellence in Theoretical and Computational Science (TaCS-CoE), Faculty of Science, KMUTT.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Noor, M.A. Some developments in general variational inequalities. Appl. Math. Comput. 2004, 152, 199–277.
  2. Aubin, J.; Ekeland, I. Applied Nonlinear Analysis; John Wiley and Sons: New York, NY, USA, 1984.
  3. Marcotte, P. Application of Khobotov’s algorithm to variational inequalities and network equilibrium problems. INFOR Inf. Syst. Oper. Res. 1991, 29, 258–270.
  4. Khobotov, E.N. Modification of the extra-gradient method for solving variational inequalities and certain optimization problems. USSR Comput. Math. Math. Phys. 1987, 27, 120–127.
  5. Abbas, M.; Ibrahim, Y.; Khan, A.R.; de la Sen, M. Strong Convergence of a System of Generalized Mixed Equilibrium Problem, Split Variational Inclusion Problem and Fixed Point Problem in Banach Spaces. Symmetry 2019, 11, 722.
  6. Kinderlehrer, D.; Stampacchia, G. An Introduction to Variational Inequalities and Their Applications; SIAM: Philadelphia, PA, USA, 1980; Volume 31.
  7. Baiocchi, C. Variational and quasivariational inequalities. Appl. Free Bound. Probl. 1984.
  8. Konnov, I. Combined Relaxation Methods for Variational Inequalities; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2001; Volume 495.
  9. Jaiboon, C.; Kumam, P. An extragradient approximation method for system of equilibrium problems and variational inequality problems. Thai J. Math. 2012, 7, 77–104.
  10. Kumam, W.; Piri, H.; Kumam, P. Solutions of system of equilibrium and variational inequality problems on fixed points of infinite family of nonexpansive mappings. Appl. Math. Comput. 2014, 248, 441–455.
  11. Chamnarnpan, T.; Phiangsungnoen, S.; Kumam, P. A new hybrid extragradient algorithm for solving the equilibrium and variational inequality problems. Afr. Mat. 2015, 26, 87–98.
  12. Deepho, J.; Kumam, W.; Kumam, P. A new hybrid projection algorithm for solving the split generalized equilibrium problems and the system of variational inequality problems. J. Math. Model. Algorithms Oper. Res. 2014, 13, 405–423.
  13. ur Rehman, H.; Kumam, P.; Cho, Y.J.; Yordsorn, P. Weak convergence of explicit extragradient algorithms for solving equilibrium problems. J. Inequalities Appl. 2019, 2019, 1–25.
  14. Kazmi, K.R.; Rizvi, S. An iterative method for split variational inclusion problem and fixed point problem for a nonexpansive mapping. Optim. Lett. 2014, 8, 1113–1124.
  15. Facchinei, F.; Pang, J.S. Finite-Dimensional Variational Inequalities and Complementarity Problems; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2007.
  16. Solodov, M.V.; Svaiter, B.F. A new projection method for variational inequality problems. SIAM J. Control Optim. 1999, 37, 765–776.
  17. Dong, Q.; Cho, Y.; Zhong, L.; Rassias, T.M. Inertial projection and contraction algorithms for variational inequalities. J. Glob. Optim. 2018, 70, 687–704.
  18. Denisov, S.; Semenov, V.; Chabak, L. Convergence of the modified extragradient method for variational inequalities with non-Lipschitz operators. Cybern. Syst. Anal. 2015, 51, 757–765.
  19. Dong, Q.L.; Lu, Y.Y.; Yang, J. The extragradient algorithm with inertial effects for solving the variational inequality. Optimization 2016, 65, 2217–2226.
  20. Malitsky, Y.V. Projected reflected gradient methods for variational inequalities. SIAM J. Optim. 2015, 25, 502–520.
  21. Abubakar, J.; Sombut, K.; Ibrahim, A.H.; Rehman, H.R. An Accelerated Subgradient Extragradient Algorithm for Strongly Pseudomonotone Variational Inequality Problems. Thai J. Math. 2019, 18, 166–187.
  22. Sombut, K.; Kitkuan, D.; Padcharoen, A.; Kumam, P. Weak Convergence Theorems for a Class of Split Variational Inequality Problems. In Proceedings of the 2018 International Conference on Control, Artificial Intelligence, Robotics & Optimization (ICCAIRO), Prague, Czech Republic, 19–21 May 2018; pp. 277–282.
  23. Chamnarnpan, T.; Wairojjana, N.; Kumam, P. Hierarchical fixed points of strictly pseudo contractive mappings for variational inequality problems. SpringerPlus 2013, 2, 540.
  24. Crombez, G. A geometrical look at iterative methods for operators with fixed points. Numer. Funct. Anal. Optim. 2005, 26, 157–175.
  25. Thong, D.V.; Van Hieu, D. Modified subgradient extragradient method for variational inequality problems. Numer. Algorithms 2018, 79, 597–610.
  26. Censor, Y.; Gibali, A.; Reich, S. The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2011, 148, 318–335.
  27. Yang, J.; Liu, H.; Liu, Z. Modified subgradient extragradient algorithms for solving monotone variational inequalities. Optimization 2018, 67, 2247–2258.
  28. Van Hieu, D.; Thong, D.V. New extragradient-like algorithms for strongly pseudomonotone variational inequalities. J. Glob. Optim. 2018, 70, 385–399.
  29. Shehu, Y.; Dong, Q.L.; Jiang, D. Single projection method for pseudo-monotone variational inequality in Hilbert spaces. Optimization 2019, 68, 385–409.
  30. Khanh, P.D.; Vuong, P.T. Modified projection method for strongly pseudomonotone variational inequalities. J. Glob. Optim. 2014, 58, 341–350.
  31. Thong, D.V.; Van Hieu, D. Inertial extragradient algorithms for strongly pseudomonotone variational inequalities. J. Comput. Appl. Math. 2018, 341, 80–98.
  32. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: Berlin/Heidelberg, Germany, 2011; Volume 408.
  33. Goebel, K.; Reich, S. Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings; Marcel Dekker Inc.: New York, NY, USA, 1984.
  34. Alvarez, F. Weak convergence of a relaxed and inertial hybrid projection-proximal point algorithm for maximal monotone operators in Hilbert space. SIAM J. Optim. 2004, 14, 773–782.
  35. Ofoedu, E. Strong convergence theorem for uniformly L-Lipschitzian asymptotically pseudocontractive mapping in real Banach space. J. Math. Anal. Appl. 2006, 321, 722–728.
  36. Opial, Z. Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 1967, 73, 591–597.
  37. Thong, D.V.; Vuong, P.T. Modified Tseng’s extragradient methods for solving pseudo-monotone variational inequalities. Optimization 2019, 68, 2207–2226.
  38. Vuong, P.T. On the weak convergence of the extragradient method for solving pseudo-monotone variational inequalities. J. Optim. Theory Appl. 2018, 176, 399–409.
  39. Sun, D. A projection and contraction method for the nonlinear complementarity problem and its extensions. Math. Numer. Sin. 1994, 16, 183–194.
  40. Yang, J. Self-adaptive inertial subgradient extragradient algorithm for solving pseudomonotone variational inequalities. Appl. Anal. 2019.
  41. Thong, D.V.; Van Hieu, D.; Rassias, T.M. Self adaptive inertial subgradient extragradient algorithms for solving pseudomonotone variational inequality problems. Optim. Lett. 2020, 14, 115–144.
  42. Kraikaew, R.; Saejung, S. Strong convergence of the Halpern subgradient extragradient method for solving variational inequalities in Hilbert spaces. J. Optim. Theory Appl. 2014, 163, 399–412.
Figure 1. Example 1 in $\mathbb{R}^5$: Numbers of iterations are 85, 254, 73, 102, 63, 40 respectively.
Figure 2. Example 1 in $\mathbb{R}^5$: Elapsed time 1.83, 2.78, 0.82, 1.25, 0.69, 0.48 respectively.
Figure 3. Example 1 in $\mathbb{R}^{50}$: Numbers of iterations 421, 985, 320, 430, 759, 292 respectively.
Figure 4. Example 1 in $\mathbb{R}^{50}$: Elapsed time 9.95, 11.79, 3.87, 5.19, 9.70, 3.41 respectively.
Figure 5. Example 2: Numbers of iterations are 45, 169, 155, 52, 60, 38 respectively.
Figure 6. Example 2: Elapsed time are 1.17, 2.35, 0.76, 0.78, 0.82, 0.56 respectively.
Figure 7. Example 3: Numbers of iterations are 58, 190, 52, 35, 47, 31 respectively.
Figure 8. Example 3: Elapsed time are 2.94, 19.59, 2.73, 3.83, 4.76, 3.04 respectively.
Figure 9. Example 1 in $\mathbb{R}^5$: Numbers of iterations are 135, 105, 168, 97, 77 respectively.
Figure 10. Example 1 in $\mathbb{R}^5$: Elapsed time 1.44, 1.23, 1.97, 1.03, 0.84 respectively.
Figure 11. Example 1 in $\mathbb{R}^5$: Numbers of iterations are 482, 382, 513, 749, 216 respectively.
Figure 12. Example 1 in $\mathbb{R}^5$: Elapsed time 5.76, 4.38, 6.37, 9.47, 2.71 respectively.
Figure 13. Example 2: Numbers of iterations are 181, 172, 145, 78, 47 respectively.
Figure 14. Example 2: Elapsed time 2.33, 2.29, 2.01, 1.08, 0.64 respectively.
Figure 15. Example 3: Numbers of iterations are 406, 372, 280, 88, 54 respectively.
Figure 16. Example 3: Elapsed time 2.28, 2.40, 1.35, 0.58, 0.37 respectively.
