Article

A Parallel-Viscosity-Type Subgradient Extragradient-Line Method for Finding the Common Solution of Variational Inequality Problems Applied to Image Restoration Problems

by Suthep Suantai 1, Pronpat Peeyada 2, Damrongsak Yambangwai 2,* and Watcharaporn Cholamjiak 2

1 Research Center in Mathematics and Applied Mathematics, Department of Mathematics, Faculty of Science, Chiang Mai University, Chiang Mai 50200, Thailand
2 School of Science, University of Phayao, Phayao 56000, Thailand
* Author to whom correspondence should be addressed.
Submission received: 8 January 2020 / Revised: 7 February 2020 / Accepted: 9 February 2020 / Published: 14 February 2020

Abstract

In this paper, we study a modified viscosity-type subgradient extragradient-line method with a parallel monotone hybrid algorithm for approximating a common solution of variational inequality problems. Under suitable conditions in Hilbert spaces, we prove strong convergence of the proposed algorithm to such a common solution. We then give numerical examples in both finite- and infinite-dimensional spaces to support our main theorem. Finally, we show that our proposed algorithm is flexible and performs well on common types of blur effects in image recovery.

1. Introduction

Let H be a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and induced norm $\|\cdot\|$. Let C be a nonempty closed and convex subset of H. A mapping $f : C \to C$ is said to be a strict contraction if there exists $k \in [0,1)$ such that
$$\|fx - fy\| \le k\|x - y\|, \quad \forall x, y \in C. \quad (1)$$
A mapping $A : C \to H$ is said to be
  • monotone if
    $$\langle Ax - Ay, x - y\rangle \ge 0 \quad \text{for all } x, y \in C; \quad (2)$$
  • pseudo-monotone if
    $$\langle Ay, x - y\rangle \ge 0 \implies \langle Ax, x - y\rangle \ge 0 \quad \text{for all } x, y \in C; \quad (3)$$
  • L-Lipschitz continuous if there exists a positive constant L such that
    $$\|Ax - Ay\| \le L\|x - y\| \quad \text{for all } x, y \in C. \quad (4)$$
In this paper, we study the following variational inequality problem (VIP) for the operator A: find $x^* \in C$ such that
$$\langle Ax^*, y - x^*\rangle \ge 0, \quad \forall y \in C. \quad (5)$$
The set of solutions of VIP (5) is denoted by VI(C, A). The VIP was introduced and studied by Hartman and Stampacchia in 1966 [1]. Variational inequality theory is an important tool for studying a wide class of unilateral and equilibrium problems arising in structural analysis, economics, optimization, operations research, and the engineering sciences (see [2,3,4,5,6,7] and the references therein). Several algorithms have been developed for solving variational inequalities and related optimization problems (see [6,8,9,10,11,12,13,14,15] and the references therein). It is well known that x is a solution of VIP (5) if and only if x is a fixed point of the mapping $P_C(I - rA)$, $r > 0$ (see [4] for details):
$$x = P_C(x - \gamma Ax), \quad \gamma > 0, \qquad \text{and} \qquad r_\gamma(x) := x - P_C(x - \gamma Ax) = 0. \quad (6)$$
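In computational terms, this characterization yields a cheap test of whether a point (approximately) solves the VIP; a minimal sketch, where the operator A and the projector proj_C onto C are assumed to be supplied by the user:

```python
import numpy as np

def vip_residual(x, A, proj_C, gamma=1.0):
    """r_gamma(x) = x - P_C(x - gamma * A(x)); its norm vanishes
    exactly when x solves the VIP, for any fixed gamma > 0."""
    return np.linalg.norm(x - proj_C(x - gamma * A(x)))
```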
Therefore, finding a fixed point of the mapping $P_C(I - rA)$ replaces finding a solution of VIP (5) (see [7,9]). For solving VIP (5), projections onto closed and convex sets have been used. The gradient method is the simplest projection method, in which only one projection onto the feasible set needs to be computed; however, strong monotonicity or inverse strong monotonicity of the operator is required to obtain a convergence result. In 1976, Korpelevich [16] proposed another projection method, called the extragradient method, for finding a saddle point; it was later extended to finding a solution of the VIP for Lipschitz continuous and monotone (even pseudomonotone) mappings A. The extragradient method is designed as follows:
$$y_n = P_C(x_n - \lambda A(x_n)), \qquad x_{n+1} = P_C(x_n - \lambda A(y_n)),$$
where $P_C$ is the metric projection onto C and λ is a suitable parameter. When the structure of C is simple, the extragradient method is computable and very useful because the projection onto C can be found easily. However, the computation of a projection onto a general closed convex subset is difficult, and two distance optimization problems must be solved in the extragradient method to obtain the next approximation $x_{n+1}$ at each iteration. This can be costly and seriously affect the efficiency of the method. In 2011, the subgradient extragradient method was proposed in [17] for solving VIPs in Hilbert spaces: the projection onto the closed convex subset is reduced to one step, and a special half-space is constructed for the projection in the second step. The method is generated as follows:
$$y_n = P_C(x_n - \lambda A(x_n)), \qquad x_{n+1} = P_{T_n}(x_n - \lambda A(y_n)), \quad (7)$$
where $T_n$ is a half-space whose bounding hyperplane supports C at $y_n$, that is,
$$T_n = \{v \in H : \langle (x_n - \lambda A(x_n)) - y_n, v - y_n\rangle \le 0\}.$$
The authors in [17] proved that the two sequences $\{x_n\}$, $\{y_n\}$ generated by (7) converge weakly to a solution of the VIP.
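The practical gain is that the second projection in (7) is onto a half-space, which has a closed form. A minimal NumPy sketch of one iteration, with C taken to be a Euclidean ball purely for illustration (the operator, radius, and step size are our assumptions, not data from the paper):

```python
import numpy as np

def proj_ball(x, radius=4.0):
    """Metric projection onto C = {x : ||x|| <= radius}."""
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else (radius / nrm) * x

def proj_halfspace(x, a, y):
    """Closed-form projection onto T = {v : <a, v - y> <= 0}.
    When a = 0, T is the whole space and x is returned unchanged."""
    viol = a @ (x - y)
    return x if viol <= 0 else x - (viol / (a @ a)) * a

def subgradient_extragradient_step(A, x, lam, proj_C):
    """One iteration of (7): one projection onto C, one onto T_n."""
    y = proj_C(x - lam * A(x))
    a = (x - lam * A(x)) - y          # normal of the supporting half-space
    return proj_halfspace(x - lam * A(y), a, y)

# Illustrative run for a monotone 4-Lipschitz operator; lam < 1/L.
A = lambda v: 4.0 * v
x = np.array([3.0, 5.0, 8.0])
for _ in range(200):
    x = subgradient_extragradient_step(A, x, lam=0.2, proj_C=proj_ball)
```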
Recently, Gibali [18] proposed a new subgradient extragradient method adopting Armijo-like searches, called the self-adaptive subgradient extragradient method. Under the assumptions of pseudomonotonicity and continuity of the operator, convergence of this method for VIP (5) in $\mathbb{R}^n$ was proven. Gibali [18] also remarked that the Armijo-like search can be viewed as a local approximation of the Lipschitz constant of A. The method reads:
$$x_0 \in \mathbb{R}^n, \qquad y_n = P_C(x_n - \alpha_n A(x_n)), \quad \alpha_n \in \{\alpha_{n-1}, \alpha_{n-1}\beta, \alpha_{n-1}\beta^2, \ldots\},$$
where $\alpha_n$ satisfies $\alpha_n\langle x_n - y_n, A(x_n) - A(y_n)\rangle \le (1 - \varepsilon)\|x_n - y_n\|^2$, and
$$x_{n+1} = P_{T_n}(x_n - \alpha_n A(y_n)), \quad n \ge 1, \quad (8)$$
where $T_n = \{w \in \mathbb{R}^n : \langle x_n - \alpha_n A(x_n) - y_n, w - y_n\rangle \le 0\}$, $\alpha_0 \in (0, \infty)$, and $\varepsilon, \beta \in (0,1)$.
Very recently, Shehu and Iyiola [19] proposed the following viscosity-type subgradient extragradient-like method for solving VIP (5) in Hilbert spaces when A is a Lipschitz continuous monotone mapping whose Lipschitz constant is unknown:
$$y_n = P_C(x_n - \lambda_n Ax_n), \quad \lambda_n = \rho^{l_n},$$
where $l_n$ is the smallest non-negative integer l such that $\lambda_n\|Ax_n - Ay_n\| \le \mu\|r_{\rho^{l_n}}(x_n)\|$, and
$$z_n = P_{T_n}(x_n - \lambda_n Ay_n), \qquad x_{n+1} = \alpha_nf(x_n) + (1 - \alpha_n)z_n, \quad n \ge 1, \quad (9)$$
where $T_n = \{z \in H : \langle x_n - \lambda_n Ax_n - y_n, z - y_n\rangle \le 0\}$ with $\rho, \mu \in (0,1)$ and $\{\alpha_n\} \subset (0,1)$. It was proved that the sequence $\{x_n\}$ generated by (9) converges strongly to $x^* \in VI(C,A)$, where $x^* = P_{VI(C,A)}f(x^*)$ is the unique solution of the variational inequality
$$\langle (I - f)x^*, x - x^*\rangle \ge 0, \quad \forall x \in VI(C,A), \quad (10)$$
where $f : H \to H$ is a strict contraction mapping with constant $k \in [0,1)$, under the following conditions:
$$(C1)\ \lim_{n\to\infty}\alpha_n = 0 \qquad \text{and} \qquad (C2)\ \sum_{n=1}^{\infty}\alpha_n = \infty.$$
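The step size $\lambda_n = \rho^{l_n}$ in (9) is found by backtracking: l is increased from 0 until the test holds, and since $y = P_C(x_n - \rho^l Ax_n)$ for the trial step, one has $r_{\rho^l}(x_n) = x_n - y$. A hedged sketch of this search, with proj_C assumed given and the iteration cap our own safeguard (not part of (9)):

```python
import numpy as np

def linesearch_step(A, x, proj_C, rho=0.5, mu=0.9, l_max=60):
    """Find the smallest l >= 0 with
       rho**l * ||A x - A y|| <= mu * ||x - y||,
       where y = P_C(x - rho**l * A x).  Returns (lam, y)."""
    Ax = A(x)
    for l in range(l_max):                  # l_max is a safety cap (our assumption)
        lam = rho ** l
        y = proj_C(x - lam * Ax)
        if lam * np.linalg.norm(Ax - A(y)) <= mu * np.linalg.norm(x - y):
            return lam, y
    return lam, y
```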
Our interest in this paper is the problem of finding common solutions of variational inequality problems (CSVIP). The CSVIP is stated as follows: let C be a nonempty closed and convex subset of H, and let $A_i : H \to H$, $i = 1, 2, \ldots, N$, be mappings. The CSVIP is to find $x^* \in C$ such that
$$\langle A_ix^*, x - x^*\rangle \ge 0, \quad \forall x \in C, \quad i = 1, 2, \ldots, N. \quad (11)$$
If $N = 1$, CSVIP (11) becomes VIP (5).
The CSVIP has received a great deal of attention due to its applications in a large variety of problems arising in structural analysis, convex feasibility problems, common fixed point problems, common minimizer problems, common saddle-point problems, and common variational inequality problems [20]. These problems have practical applicability in signal processing, network resource allocation, image processing, and many other fields [21,22]. Recently, many mathematicians have studied this problem widely, both theoretically and algorithmically; see [23,24,25,26,27] and the references therein.
Very recently, Anh and Hieu [28,29] proposed an important method, which they called the parallel monotone hybrid algorithm, for finding a common fixed point of a finite family of quasi-ϕ-nonexpansive mappings $\{S_i\}_{i=1}^N$ in Banach spaces. In Hilbert spaces, this algorithm takes the following form:
$$x_0 \in C, \qquad y_n^i = \alpha_nx_n + (1 - \alpha_n)S_ix_n, \quad i = 1, \ldots, N,$$
$$i_n = \operatorname{argmax}\{\|y_n^i - x_n\| : i = 1, \ldots, N\}, \qquad \bar{y}_n := y_n^{i_n},$$
$$C_{n+1} = \{v \in C_n : \|v - \bar{y}_n\| \le \|v - x_n\|\}, \qquad x_{n+1} = P_{C_{n+1}}x_0,$$
where $0 < \alpha_n < 1$ and $\limsup_{n\to\infty}\alpha_n < 1$. A parallel algorithm is one that can execute several computations simultaneously on different processing devices and then combine the individual outputs.
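As a toy illustration of this dispatch-and-combine pattern, the N component mappings can be evaluated concurrently; the mappings and worker pool below are arbitrary stand-ins, not data from [28,29]:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def parallel_apply(mappings, x):
    """Evaluate S_i(x) for all i on separate workers and collect the outputs."""
    with ThreadPoolExecutor(max_workers=len(mappings)) as pool:
        return list(pool.map(lambda S: S(x), mappings))

# Three illustrative mappings on R^3.
mappings = [lambda v: 0.5 * v, np.tanh, lambda v: v / (1.0 + np.abs(v))]
outputs = parallel_apply(mappings, np.array([1.0, -2.0, 3.0]))
```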
Inspired and encouraged by the previous results, in this paper we introduce a modified parallel method with a viscosity-type subgradient extragradient-like method for finding a common solution of variational inequality problems. Numerical experiments are conducted to illustrate the efficiency of the proposed algorithm. Moreover, the problem of multiple blur effects in an image is solved by applying our proposed algorithm.

2. Preliminaries

In order to prove our main result, we recall some basic definitions and lemmas needed for further investigation. In a Hilbert space H, let C be a nonempty closed and convex subset of H. For every point $x \in H$, there exists a unique nearest point of C, denoted by $P_Cx$, such that $\|x - P_Cx\| \le \|x - y\|$ for all $y \in C$. Such a $P_C$ is called the metric projection from H onto C.
Lemma 1
([30]). Let C be a nonempty closed and convex subset of a real Hilbert space H and let $P_C$ be the metric projection of H onto C. Let $x \in H$ and $z \in C$. Then, $z = P_Cx$ if and only if
$$\langle x - z, y - z\rangle \le 0, \quad \forall y \in C.$$
Lemma 2
([30]). The following statements hold in any real Hilbert space H:
(i) $\|x + y\|^2 = \|x\|^2 + 2\langle x, y\rangle + \|y\|^2$ for all $x, y \in H$;
(ii) $\|x + y\|^2 \le \|x\|^2 + 2\langle y, x + y\rangle$ for all $x, y \in H$.
Lemma 3
(Xu, [31]). Let $\{a_n\}_{n=0}^{\infty}$ be a sequence of non-negative real numbers satisfying the following relation:
$$a_{n+1} \le (1 - \alpha_n)a_n + \alpha_n\sigma_n + \gamma_n, \quad n \ge 1,$$
where
(i) $\{\alpha_n\}_{n=0}^{\infty} \subset [0,1]$, $\sum_{n=1}^{\infty}\alpha_n = \infty$;
(ii) $\limsup_{n\to\infty}\sigma_n \le 0$;
(iii) $\gamma_n \ge 0$ ($n \ge 1$), $\sum_{n=1}^{\infty}\gamma_n < \infty$.
Then, $a_n \to 0$ as $n \to \infty$.
Lemma 4
([8]). Let C be a nonempty closed and convex subset of a real Hilbert space H and let $P_C : H \to C$ be the metric projection from H onto C. Then, the following inequality holds:
$$\|x - y\|^2 \ge \|x - P_Cx\|^2 + \|y - P_Cx\|^2, \quad \forall x \in H,\ \forall y \in C.$$
Lemma 5
([19]). There exists a non-negative integer $l_n$ satisfying the line search rule in (9).
Lemma 6
([32]). For each $x_1, x_2, \ldots, x_m \in H$ and $\alpha_1, \alpha_2, \ldots, \alpha_m \in [0,1]$ with $\sum_{i=1}^m\alpha_i = 1$, we have
$$\Big\|\sum_{i=1}^m\alpha_ix_i\Big\|^2 = \sum_{i=1}^m\alpha_i\|x_i\|^2 - \sum_{1\le i<j\le m}\alpha_i\alpha_j\|x_i - x_j\|^2.$$

3. Main Results

In this section, we propose the parallel method with the modified viscosity-type subgradient extragradient-line method for solving common variational inequality problems. Let C be a nonempty closed and convex subset of a real Hilbert space H. For each $i = 1, 2, \ldots, N$, let $A_i : C \to H$ be a monotone mapping that is $L_i$-Lipschitz continuous on H with $L_i$ unknown, and assume that $F := \bigcap_{i=1}^N VI(C, A_i) \ne \emptyset$. Let $f : C \to C$ be a strict contraction mapping with constant $k \in [0,1)$. Suppose $\{x_n\}_{n=1}^{\infty}$ is generated by the following Algorithm 1:
Algorithm 1. Given $\rho \in (0,1)$ and $\mu \in (0,1)$, let $\{\alpha_n^i\}_{n=1}^{\infty} \subset (0,1)$, $i = 0, 1, \ldots, N$, be real sequences with $\sum_{i=0}^N\alpha_n^i = 1$, and let $x_1 \in H$ be arbitrary.
Step 1: Compute $y_n^i$ for all $i = 1, 2, \ldots, N$ by
$$y_n^i = P_C(x_n - \lambda_n^iA_ix_n), \quad n \ge 1,$$
where $\lambda_n^i = \rho^{l_n^i}$ and $l_n^i$ is the smallest non-negative integer $l_i$ such that
$$\lambda_n^i\|A_ix_n - A_iy_n^i\| \le \mu\|r_{\rho^{l_i}}(x_n)\|.$$
Step 2: Compute
$$z_n^i = P_{T_n^i}(x_n - \lambda_n^iA_iy_n^i),$$
where $T_n^i := \{z \in H : \langle x_n - \lambda_n^iA_ix_n - y_n^i, z - y_n^i\rangle \le 0\}$.
Step 3: Compute
$$x_{n+1} = \alpha_n^0f(x_n) + \sum_{i=1}^N\alpha_n^iz_n^i, \quad n \ge 1. \quad (15)$$
Set $n := n + 1$ and go to Step 1.
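Putting Steps 1–3 together, the following is a minimal NumPy sketch of Algorithm 1. Here C is taken to be a Euclidean ball so that both projections have closed forms, and the operators, contraction, and weight schedule are illustrative stand-ins satisfying the stated hypotheses, not the paper's test data:

```python
import numpy as np

def proj_ball(x, radius=4.0):
    """Metric projection onto C = {x : ||x|| <= radius}."""
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else (radius / nrm) * x

def proj_halfspace(x, a, y):
    """Projection onto T = {z : <a, z - y> <= 0}; T is the whole space if a = 0."""
    viol = a @ (x - y)
    return x if viol <= 0 else x - (viol / (a @ a)) * a

def algorithm1(ops, f, x1, proj_C, alphas, rho=0.5, mu=0.9,
               tol=1e-12, max_iter=10_000):
    """ops: monotone Lipschitz operators A_i (constants unknown);
    alphas(n): weights (a_0, ..., a_N) in (0,1) summing to 1."""
    x = x1
    for n in range(1, max_iter + 1):
        a = alphas(n)
        zs = []
        for A in ops:                        # the N branches are independent
            Ax, lam = A(x), 1.0
            # Step 1: smallest l with lam*||A x - A y|| <= mu*||x - y||,
            # lam = rho**l (the cap of 60 trials is our own safeguard).
            for _ in range(60):
                y = proj_C(x - lam * Ax)
                if lam * np.linalg.norm(Ax - A(y)) <= mu * np.linalg.norm(x - y):
                    break
                lam *= rho
            # Step 2: project onto the half-space T_n^i in closed form.
            zs.append(proj_halfspace(x - lam * A(y), (x - lam * Ax) - y, y))
        # Step 3: viscosity convex combination of f(x_n) and the z_n^i.
        x_next = a[0] * f(x) + sum(ai * zi for ai, zi in zip(a[1:], zs))
        if np.linalg.norm(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Illustrative data: two monotone operators with the common solution 0 in C.
ops = [lambda v: 4.0 * v, lambda v: 7.0 * v]
f = lambda v: 0.5 * v                        # strict contraction, k = 1/2
alphas = lambda n: np.array([1 / (n + 1) ** 0.3,
                             (1 - 1 / (n + 1) ** 0.3) / 2,
                             (1 - 1 / (n + 1) ** 0.3) / 2])
x_star = algorithm1(ops, f, np.array([3.0, 5.0, 8.0]), proj_ball, alphas)
```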
Theorem 1.
Assume that
(a) $\lim_{n\to\infty}\alpha_n^0 = 0$ and $\sum_{n=1}^{\infty}\alpha_n^0 = \infty$;
(b) $\liminf_{n\to\infty}\alpha_n^i > 0$ for $i = 1, 2, \ldots, N$.
Then the sequence $\{x_n\}_{n=1}^{\infty}$ generated by Algorithm 1 converges strongly to $x^* \in F$, where $x^* = P_Ff(x^*)$ is the unique solution of the variational inequality:
$$\langle (I - f)x^*, x - x^*\rangle \ge 0, \quad \forall x \in F.$$
Proof. 
Let $x^* \in F$ and $u_n^i = x_n - \lambda_n^iA_iy_n^i$ for $n \ge 1$, $i = 1, 2, \ldots, N$. Then
$$\begin{aligned}\|z_n^i - x^*\|^2 &= \langle P_{T_n^i}(u_n^i) - x^*, P_{T_n^i}(u_n^i) - x^*\rangle\\ &= \|u_n^i - x^*\|^2 + \|u_n^i - P_{T_n^i}(u_n^i)\|^2 + 2\langle P_{T_n^i}(u_n^i) - u_n^i, u_n^i - x^*\rangle.\end{aligned}$$
By the characterization of the metric projection $P_{T_n^i}$ and $x^* \in F \subset C \subset T_n^i$, we get
$$2\|u_n^i - P_{T_n^i}(u_n^i)\|^2 + 2\langle P_{T_n^i}(u_n^i) - u_n^i, u_n^i - x^*\rangle = 2\langle u_n^i - P_{T_n^i}(u_n^i), x^* - P_{T_n^i}(u_n^i)\rangle \le 0.$$
This implies that
$$\|u_n^i - P_{T_n^i}(u_n^i)\|^2 + 2\langle P_{T_n^i}(u_n^i) - u_n^i, u_n^i - x^*\rangle \le -\|u_n^i - P_{T_n^i}(u_n^i)\|^2.$$
We then obtain by the definition of Algorithm 1 that
$$\begin{aligned}\|z_n^i - x^*\|^2 &\le \|u_n^i - x^*\|^2 - \|u_n^i - z_n^i\|^2\\ &= \|(x_n - \lambda_n^iA_iy_n^i) - x^*\|^2 - \|(x_n - \lambda_n^iA_iy_n^i) - z_n^i\|^2\\ &= \|x_n - x^*\|^2 - \|x_n - z_n^i\|^2 + 2\lambda_n^i\langle x^* - x_n, A_iy_n^i\rangle + 2\lambda_n^i\langle x_n - z_n^i, A_iy_n^i\rangle\\ &= \|x_n - x^*\|^2 - \|x_n - z_n^i\|^2 + 2\lambda_n^i\langle x^* - z_n^i, A_iy_n^i\rangle. \quad (20)\end{aligned}$$
By the monotonicity of the operator $A_i$, we have
$$0 \le \langle A_iy_n^i - A_ix^*, y_n^i - x^*\rangle = \langle A_iy_n^i, y_n^i - x^*\rangle - \langle A_ix^*, y_n^i - x^*\rangle \le \langle A_iy_n^i, y_n^i - x^*\rangle = \langle A_iy_n^i, y_n^i - z_n^i\rangle + \langle A_iy_n^i, z_n^i - x^*\rangle,$$
since $\langle A_ix^*, y_n^i - x^*\rangle \ge 0$ because $x^* \in VI(C, A_i)$ and $y_n^i \in C$. Thus
$$\langle x^* - z_n^i, A_iy_n^i\rangle \le \langle A_iy_n^i, y_n^i - z_n^i\rangle. \quad (21)$$
Using (21) in (20), we obtain
$$\begin{aligned}\|z_n^i - x^*\|^2 &\le \|x_n - x^*\|^2 - \|x_n - z_n^i\|^2 + 2\lambda_n^i\langle A_iy_n^i, y_n^i - z_n^i\rangle\\ &= \|x_n - x^*\|^2 + 2\lambda_n^i\langle A_iy_n^i, y_n^i - z_n^i\rangle - 2\langle x_n - y_n^i, y_n^i - z_n^i\rangle - \|x_n - y_n^i\|^2 - \|y_n^i - z_n^i\|^2\\ &= \|x_n - x^*\|^2 + 2\langle x_n - \lambda_n^iA_iy_n^i - y_n^i, z_n^i - y_n^i\rangle - \|x_n - y_n^i\|^2 - \|y_n^i - z_n^i\|^2. \quad (22)\end{aligned}$$
Observe that, since $z_n^i \in T_n^i$,
$$\begin{aligned}\langle x_n - \lambda_n^iA_iy_n^i - y_n^i, z_n^i - y_n^i\rangle &= \langle x_n - \lambda_n^iA_ix_n - y_n^i, z_n^i - y_n^i\rangle + \lambda_n^i\langle A_ix_n - A_iy_n^i, z_n^i - y_n^i\rangle\\ &\le \lambda_n^i\langle A_ix_n - A_iy_n^i, z_n^i - y_n^i\rangle.\end{aligned}$$
Using the last inequality in (22) together with Lemma 5, we have
$$\begin{aligned}\|z_n^i - x^*\|^2 &\le \|x_n - x^*\|^2 + 2\lambda_n^i\langle A_ix_n - A_iy_n^i, z_n^i - y_n^i\rangle - \|x_n - y_n^i\|^2 - \|y_n^i - z_n^i\|^2\\ &\le \|x_n - x^*\|^2 + 2\lambda_n^i\|A_ix_n - A_iy_n^i\|\,\|z_n^i - y_n^i\| - \|x_n - y_n^i\|^2 - \|y_n^i - z_n^i\|^2\\ &\le \|x_n - x^*\|^2 + 2\mu\|x_n - y_n^i\|\,\|z_n^i - y_n^i\| - \|x_n - y_n^i\|^2 - \|y_n^i - z_n^i\|^2\\ &\le \|x_n - x^*\|^2 + \mu(\|x_n - y_n^i\|^2 + \|z_n^i - y_n^i\|^2) - \|x_n - y_n^i\|^2 - \|y_n^i - z_n^i\|^2\\ &= \|x_n - x^*\|^2 - (1-\mu)\|x_n - y_n^i\|^2 - (1-\mu)\|y_n^i - z_n^i\|^2. \quad (23)\end{aligned}$$
It follows from (15) and (23) that
$$\begin{aligned}\|x_{n+1} - x^*\| &= \Big\|\alpha_n^0f(x_n) + \sum_{i=1}^N\alpha_n^iz_n^i - x^*\Big\|\\ &\le \alpha_n^0\|f(x_n) - x^*\| + \sum_{i=1}^N\alpha_n^i\|z_n^i - x^*\|\\ &\le \alpha_n^0\|f(x_n) - f(x^*)\| + \alpha_n^0\|f(x^*) - x^*\| + \sum_{i=1}^N\alpha_n^i\|z_n^i - x^*\|\\ &\le \alpha_n^0k\|x_n - x^*\| + \sum_{i=1}^N\alpha_n^i\|x_n - x^*\| + \alpha_n^0\|f(x^*) - x^*\|\\ &= (1 - \alpha_n^0(1-k))\|x_n - x^*\| + \alpha_n^0(1-k)\frac{\|f(x^*) - x^*\|}{1-k}\\ &\le \max\Big\{\|x_n - x^*\|, \frac{\|f(x^*) - x^*\|}{1-k}\Big\}\\ &\;\;\vdots\\ &\le \max\Big\{\|x_1 - x^*\|, \frac{\|f(x^*) - x^*\|}{1-k}\Big\}.\end{aligned}$$
This implies that $\{x_n\}$ is bounded. Consequently, $\{f(x_n)\}$, $\{y_n^i\}$, and $\{z_n^i\}$ are also bounded.
Let $z = P_Ff(z)$. From (15), (23), and Lemma 6, we have
$$\begin{aligned}\|x_{n+1} - z\|^2 &= \Big\|\alpha_n^0(f(x_n) - z) + \sum_{i=1}^N\alpha_n^i(z_n^i - z)\Big\|^2\\ &\le \alpha_n^0\|f(x_n) - z\|^2 + \sum_{i=1}^N\alpha_n^i\|z_n^i - z\|^2\\ &\le \alpha_n^0\|f(x_n) - z\|^2 + \sum_{i=1}^N\alpha_n^i\big(\|x_n - z\|^2 - (1-\mu)\|x_n - y_n^i\|^2 - (1-\mu)\|y_n^i - z_n^i\|^2\big)\\ &\le \|x_n - z\|^2 + \alpha_n^0\|f(x_n) - z\|^2 - \sum_{i=1}^N\alpha_n^i(1-\mu)\|x_n - y_n^i\|^2 - \sum_{i=1}^N\alpha_n^i(1-\mu)\|y_n^i - z_n^i\|^2. \quad (24)\end{aligned}$$
Furthermore, using Lemma 2 (ii) in (15), we obtain
$$\begin{aligned}\|x_{n+1} - z\|^2 &= \Big\|\sum_{i=1}^N\alpha_n^i(z_n^i - z) + \alpha_n^0(f(x_n) - z)\Big\|^2\\ &\le \Big\|\sum_{i=1}^N\alpha_n^i(z_n^i - z)\Big\|^2 + 2\alpha_n^0\langle f(x_n) - z, x_{n+1} - z\rangle\\ &\le (1 - \alpha_n^0)^2\|x_n - z\|^2 + 2\alpha_n^0\langle f(x_n) - f(z), x_{n+1} - z\rangle + 2\alpha_n^0\langle f(z) - z, x_{n+1} - z\rangle\\ &\le (1 - \alpha_n^0)^2\|x_n - z\|^2 + 2\alpha_n^0k\|x_n - z\|\,\|x_{n+1} - z\| + 2\alpha_n^0\langle f(z) - z, x_{n+1} - z\rangle\\ &\le (1 - \alpha_n^0)^2\|x_n - z\|^2 + \alpha_n^0k(\|x_n - z\|^2 + \|x_{n+1} - z\|^2) + 2\alpha_n^0\langle f(z) - z, x_{n+1} - z\rangle, \quad (25)\end{aligned}$$
which implies that, for some $M > 0$ with $\|x_n - z\|^2 \le M$ for all n,
$$\begin{aligned}\|x_{n+1} - z\|^2 &\le \frac{(1 - \alpha_n^0)^2 + \alpha_n^0k}{1 - \alpha_n^0k}\|x_n - z\|^2 + \frac{2\alpha_n^0}{1 - \alpha_n^0k}\langle f(z) - z, x_{n+1} - z\rangle\\ &= \Big(1 - \frac{2(1-k)\alpha_n^0}{1 - \alpha_n^0k}\Big)\|x_n - z\|^2 + \frac{(\alpha_n^0)^2}{1 - \alpha_n^0k}\|x_n - z\|^2 + \frac{2\alpha_n^0}{1 - \alpha_n^0k}\langle f(z) - z, x_{n+1} - z\rangle\\ &\le \Big(1 - \frac{2(1-k)\alpha_n^0}{1 - \alpha_n^0k}\Big)\|x_n - z\|^2 + \frac{2(1-k)\alpha_n^0}{1 - \alpha_n^0k}\Big[\frac{\alpha_n^0}{2(1-k)}M + \frac{1}{1-k}\langle f(z) - z, x_{n+1} - z\rangle\Big]. \quad (26)\end{aligned}$$
We divide the rest of the proof into two cases.
Case 1. Suppose that there exists $n_0 \in \mathbb{N}$ such that $\{\|x_n - z\|\}_{n=n_0}^{\infty}$ is non-increasing. Then $\{\|x_n - z\|\}_{n=1}^{\infty}$ converges and $\|x_n - z\|^2 - \|x_{n+1} - z\|^2 \to 0$ as $n \to \infty$. From (24), we have
$$\sum_{i=1}^N\alpha_n^i(1-\mu)\|x_n - y_n^i\|^2 \le \|x_n - z\|^2 - \|x_{n+1} - z\|^2 + \alpha_n^0\|f(x_n) - z\|^2. \quad (27)$$
It follows from assumptions (a) and (b) that
$$\lim_{n\to\infty}\|x_n - y_n^i\| = 0, \quad i = 1, 2, \ldots, N. \quad (28)$$
Similarly, from (24), we obtain that
$$\lim_{n\to\infty}\|y_n^i - z_n^i\| = 0, \quad i = 1, 2, \ldots, N. \quad (29)$$
Set $t_n = \alpha_n^0x_n + \sum_{i=1}^N\alpha_n^iz_n^i$ and $s_n = \alpha_n^0x_n + \sum_{i=1}^N\alpha_n^iy_n^i$. It follows from assumption (a) and (29) that
$$\lim_{n\to\infty}\|x_{n+1} - t_n\| = \lim_{n\to\infty}\alpha_n^0\|f(x_n) - x_n\| = 0 \quad (30)$$
and
$$\lim_{n\to\infty}\|t_n - s_n\| \le \lim_{n\to\infty}\sum_{i=1}^N\alpha_n^i\|z_n^i - y_n^i\| = 0. \quad (31)$$
It follows from (28) that
$$\lim_{n\to\infty}\|s_n - x_n\| \le \lim_{n\to\infty}\sum_{i=1}^N\alpha_n^i\|y_n^i - x_n\| = 0. \quad (32)$$
It follows from (30)–(32) that
$$\|x_{n+1} - x_n\| \le \|x_{n+1} - t_n\| + \|t_n - s_n\| + \|s_n - x_n\| \to 0.$$
Since $\{x_n\}$ is bounded, it has a subsequence $\{x_{n_j}\}$ that converges weakly to some $\omega \in H$ and such that $\limsup_{n\to\infty}\langle f(z) - z, x_n - z\rangle = \lim_{j\to\infty}\langle f(z) - z, x_{n_j} - z\rangle$. We show that $\omega \in F$. Now, $\|x_n - y_n^i\| \to 0$ implies that $y_{n_j}^i \rightharpoonup \omega$, and since $y_n^i \in C$, we obtain $\omega \in C$. For all $x \in C$, using the property of the projection $P_C$ and the monotonicity of $A_i$, we have
$$\begin{aligned}0 &\le \langle y_{n_j}^i - (x_{n_j} - \lambda_{n_j}^iA_ix_{n_j}), x - y_{n_j}^i\rangle\\ &= \langle y_{n_j}^i - x_{n_j}, x - y_{n_j}^i\rangle + \lambda_{n_j}^i\langle A_ix_{n_j}, x_{n_j} - y_{n_j}^i\rangle + \lambda_{n_j}^i\langle A_ix_{n_j}, x - x_{n_j}\rangle\\ &\le \langle y_{n_j}^i - x_{n_j}, x - y_{n_j}^i\rangle + \lambda_{n_j}^i\langle A_ix_{n_j}, x_{n_j} - y_{n_j}^i\rangle + \lambda_{n_j}^i\langle A_ix, x - x_{n_j}\rangle.\end{aligned}$$
Taking $j \to \infty$ (recall that $\inf_{n\ge 1}\lambda_n^i > 0$ by Remark 3.2 in [19]), the first two terms tend to zero, so $\langle A_ix, x - \omega\rangle \ge 0$ for all $x \in C$; since $A_i$ is monotone and continuous, the Minty lemma yields
$$\langle A_i\omega, x - \omega\rangle \ge 0, \quad \forall x \in C.$$
This implies that $\omega \in F$. Since $z = P_Ff(z)$, we have
$$\limsup_{n\to\infty}\langle f(z) - z, x_n - z\rangle = \lim_{j\to\infty}\langle f(z) - z, x_{n_j} - z\rangle = \langle f(z) - z, \omega - z\rangle \le 0.$$
Since $\lim_{n\to\infty}\|x_{n+1} - x_n\| = 0$, we have
$$\limsup_{n\to\infty}\langle f(z) - z, x_{n+1} - z\rangle \le 0.$$
In (26), let $a_n := \|x_n - z\|^2$, $\beta_n := \frac{2(1-k)\alpha_n^0}{1 - \alpha_n^0k}$, and $\sigma_n := \frac{\alpha_n^0}{2(1-k)}M + \frac{1}{1-k}\langle f(z) - z, x_{n+1} - z\rangle$. Then, we can write (26) as
$$a_{n+1} \le (1 - \beta_n)a_n + \beta_n\sigma_n. \quad (33)$$
It is easy to see that $\lim_{n\to\infty}\beta_n = 0$, $\sum_{n=1}^{\infty}\beta_n = \infty$, and $\limsup_{n\to\infty}\sigma_n \le 0$. Using Lemma 3 in (33), we obtain $\lim_{n\to\infty}\|x_n - z\| = 0$; thus, $x_n \to z$ as $n \to \infty$.
Case 2. Assume that $\{\|x_n - z\|\}$ is not a monotone decreasing sequence. Set $F_n = \|x_n - z\|^2$ and let $\tau : \mathbb{N} \to \mathbb{N}$ be the mapping defined for all $n \ge n_0$ (for some $n_0$ large enough) by
$$\tau(n) := \max\{k \in \mathbb{N} : k \le n,\ F_k \le F_{k+1}\}.$$
It is clear that $\tau$ is non-decreasing, $\tau(n) \to \infty$ as $n \to \infty$, and
$$0 \le F_{\tau(n)} \le F_{\tau(n)+1}, \quad \forall n \ge n_0.$$
This implies that $\|x_{\tau(n)} - z\| \le \|x_{\tau(n)+1} - z\|$ for all $n \ge n_0$. By (27) and assumption (a), we obtain
$$\sum_{i=1}^N\alpha_{\tau(n)}^i(1-\mu)\|x_{\tau(n)} - y_{\tau(n)}^i\|^2 \le \|x_{\tau(n)} - z\|^2 - \|x_{\tau(n)+1} - z\|^2 + \alpha_{\tau(n)}^0\|f(x_{\tau(n)}) - z\|^2 \to 0,$$
as $n \to \infty$. Thus,
$$\|x_{\tau(n)} - y_{\tau(n)}^i\| \to 0$$
as $n \to \infty$. As in Case 1, we can prove that
$$\lim_{n\to\infty}\|y_{\tau(n)}^i - z_{\tau(n)}^i\| = \lim_{n\to\infty}\|x_{\tau(n)+1} - z_{\tau(n)}^i\| = \lim_{n\to\infty}\|x_{\tau(n)+1} - x_{\tau(n)}\| = 0.$$
Since $\{x_{\tau(n)}\}$ is bounded, there exists a subsequence of $\{x_{\tau(n)}\}$ that converges weakly to some $\omega$. Without loss of generality, we assume that $x_{\tau(n)} \rightharpoonup \omega$. Since $\lim_{n\to\infty}\|x_{\tau(n)} - y_{\tau(n)}^i\| = 0$, we also have $y_{\tau(n)}^i \rightharpoonup \omega$. By a similar argument to Case 1, we can show that $\omega \in F$ and
$$\limsup_{n\to\infty}\langle f(z) - z, x_{\tau(n)} - z\rangle \le 0.$$
Since $\lim_{n\to\infty}\|x_{\tau(n)+1} - x_{\tau(n)}\| = 0$, this implies that
$$\limsup_{n\to\infty}\langle f(z) - z, x_{\tau(n)+1} - z\rangle \le 0.$$
By (26), we obtain that
$$\begin{aligned}\|x_{\tau(n)+1} - z\|^2 &\le \Big(1 - \frac{2(1-k)\alpha_{\tau(n)}^0}{1 - \alpha_{\tau(n)}^0k}\Big)\|x_{\tau(n)} - z\|^2 + \frac{2(1-k)\alpha_{\tau(n)}^0}{1 - \alpha_{\tau(n)}^0k}\Big[\frac{\alpha_{\tau(n)}^0}{2(1-k)}M + \frac{1}{1-k}\langle f(z) - z, x_{\tau(n)+1} - z\rangle\Big]\\ &= (1 - \beta_{\tau(n)})\|x_{\tau(n)} - z\|^2 + \beta_{\tau(n)}\Big(\frac{\alpha_{\tau(n)}^0}{2(1-k)}M + \frac{1}{1-k}\langle f(z) - z, x_{\tau(n)+1} - z\rangle\Big),\end{aligned}$$
where $\beta_{\tau(n)} := \frac{2(1-k)\alpha_{\tau(n)}^0}{1 - \alpha_{\tau(n)}^0k}$. Hence, since $F_{\tau(n)} \le F_{\tau(n)+1}$, we have
$$\begin{aligned}\beta_{\tau(n)}\|x_{\tau(n)} - z\|^2 &\le \|x_{\tau(n)} - z\|^2 - \|x_{\tau(n)+1} - z\|^2 + \beta_{\tau(n)}\Big(\frac{\alpha_{\tau(n)}^0}{2(1-k)}M + \frac{1}{1-k}\langle f(z) - z, x_{\tau(n)+1} - z\rangle\Big)\\ &\le \beta_{\tau(n)}\Big(\frac{\alpha_{\tau(n)}^0}{2(1-k)}M + \frac{1}{1-k}\langle f(z) - z, x_{\tau(n)+1} - z\rangle\Big).\end{aligned}$$
Since $\alpha_{\tau(n)}^0 > 0$ and $k \in [0,1)$, we have $\beta_{\tau(n)} > 0$. So we get
$$\|x_{\tau(n)} - z\|^2 \le \frac{\alpha_{\tau(n)}^0}{2(1-k)}M + \frac{1}{1-k}\langle f(z) - z, x_{\tau(n)+1} - z\rangle,$$
and this implies that
$$\limsup_{n\to\infty}\|x_{\tau(n)} - z\|^2 \le \limsup_{n\to\infty}\Big(\frac{\alpha_{\tau(n)}^0}{2(1-k)}M + \frac{1}{1-k}\langle f(z) - z, x_{\tau(n)+1} - z\rangle\Big) \le 0.$$
Therefore,
$$\lim_{n\to\infty}\|x_{\tau(n)} - z\| = 0 \qquad \text{and} \qquad \lim_{n\to\infty}\|x_{\tau(n)+1} - z\| = 0.$$
Hence,
$$\lim_{n\to\infty}F_{\tau(n)} = \lim_{n\to\infty}F_{\tau(n)+1} = 0.$$
Furthermore, for $n \ge n_0$, it is easy to see that $F_n \le F_{\tau(n)+1}$ if $n \ne \tau(n)$ (that is, $\tau(n) < n$), because $F_j > F_{j+1}$ for $\tau(n) + 1 \le j \le n$. As a consequence, we obtain for all $n \ge n_0$,
$$0 \le F_n \le \max\{F_{\tau(n)}, F_{\tau(n)+1}\} = F_{\tau(n)+1}.$$
Hence, $\lim_{n\to\infty}F_n = 0$, that is, $\lim_{n\to\infty}\|x_n - z\| = 0$. Therefore, $\{x_n\}$ converges strongly to z.
This completes the proof. □
We now give an example in the Euclidean space $\mathbb{R}^3$, where $\|\cdot\|$ is the $\ell_2$-norm defined by $\|x\| = \sqrt{x_1^2 + x_2^2 + x_3^2}$ for $x = (x_1, x_2, x_3)$, to support the main theorem.
Example 1.
Let $A_1, A_2, A_3 : \mathbb{R}^3 \to \mathbb{R}^3$ be defined by $A_1x = 4x$, $A_2x = 7x + (5, 2, 1)$, and
$$A_3x = \begin{pmatrix} 10 & 5 & 5\\ 5 & 10 & 5\\ 5 & 5 & 10 \end{pmatrix}x + (4, 2, 1)$$
for all $x = (x_1, x_2, x_3)$. Let $f : \mathbb{R}^3 \to \mathbb{R}^3$ be defined by $f(x) = \frac{x}{2}$ for all $x \in \mathbb{R}^3$. Let $C = \{x \in \mathbb{R}^3 : \|x\|_2 \le 4\}$. We choose $\alpha_n^0 = \frac{1}{(n+1)^{0.3}}$, $\alpha_n^1 = \frac{1}{2n}$, $\alpha_n^2 = \frac{n}{n+1}$, and $\alpha_n^3 = 1 - \alpha_n^0 - \alpha_n^1 - \alpha_n^2$. The stopping criterion is $\|x_n - x_{n-1}\| < 10^{-15}$ (see Figure 1, Figure 2 and Figure 3). The different choices of $x_1$ are given in Table 1.
For an infinite-dimensional space, we give an example in the function space $L^2[0,1]$, where $\|\cdot\|$ is the $L^2$-norm defined by $\|x\| = \big(\int_0^1 |x(t)|^2\,dt\big)^{1/2}$ for $x \in L^2[0,1]$.
Example 2.
Let $A_1, A_2, A_3 : L^2[0,1] \to L^2[0,1]$ be defined by $A_1x(t) = \int_0^t 4x(s)\,ds$, $A_2x(t) = \int_0^t tx(s)\,ds$, and $A_3x(t) = \int_0^t (t^2 - 1)x(s)\,ds$ for $x \in L^2[0,1]$. Let $f : L^2[0,1] \to L^2[0,1]$ be defined by $f(x(t)) = \frac{x(t)}{2}$. Let $C = \{x \in L^2[0,1] : \int_0^1 (t^2 + 1)x(t)\,dt \le 1\}$. We choose $\alpha_n^0 = \frac{1}{(n+1)^{0.3}}$, $\alpha_n^1 = \frac{1}{2n}$, $\alpha_n^2 = \frac{n}{n+1}$, and $\alpha_n^3 = 1 - \alpha_n^0 - \alpha_n^1 - \alpha_n^2$. The stopping criterion is $\|x_n(t) - x_{n-1}(t)\| < 10^{-5}$ (see Figure 4, Figure 5 and Figure 6). A discretization sketch of these operators follows the list of initial data below.
The different choices of $x_1(t)$ are given in Table 2 as follows:
Choice 1 Bernstein initial data: $x_1(t) = 120t^7(t-1)^3$;
Choice 2 Chebyshev initial data: $x_1(t) = 64t^7 - 112t^5 + 56t^3 - 7t$;
Choice 3 Legendre initial data: $x_1(t) = \frac{315}{128}t - \frac{1155}{32}t^3 + \frac{9009}{64}t^5 - \frac{6435}{32}t^7 + \frac{12155}{128}t^9$.
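For readers who wish to reproduce Example 2, the Volterra-type operators can be discretized on a uniform grid, with the inner integrals approximated by cumulative trapezoidal sums and the $L^2$-norm by a simple quadrature; a rough sketch under these discretization choices (the grid size is arbitrary):

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

t = np.linspace(0.0, 1.0, 1001)                 # uniform grid on [0, 1]
dt = t[1] - t[0]

def cumint(x):
    """Approximate t -> integral_0^t x(s) ds on the grid."""
    return cumulative_trapezoid(x, t, initial=0.0)

A1 = lambda x: 4.0 * cumint(x)                  # A_1 x(t) = int_0^t 4 x(s) ds
A2 = lambda x: t * cumint(x)                    # A_2 x(t) = int_0^t t x(s) ds
A3 = lambda x: (t**2 - 1.0) * cumint(x)         # A_3 x(t) = int_0^t (t^2 - 1) x(s) ds
f = lambda x: x / 2.0                           # viscosity contraction, k = 1/2
l2_norm = lambda x: np.sqrt(dt * np.sum(x**2))  # discrete L^2[0,1] norm

x1 = 120 * t**7 * (t - 1)**3                    # Bernstein initial data (choice 1)
```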
From Table 1 and Table 2, we see the advantage of the parallel-viscosity-type subgradient extragradient-line method (Algorithm 1): when the common solution for two or more input operators $A_i$ is computed, the number of iterations is smaller than when a single operator is input.

4. Application to Image Restoration Problems

The image restoration problem can be modeled, with images represented as one-dimensional vectors, by the following linear system:
$$b = Ax + \upsilon, \quad (34)$$
where $x \in \mathbb{R}^{n\times 1}$ is the original image, $b \in \mathbb{R}^{m\times 1}$ is the observed image, $\upsilon$ is additive noise, and $A \in \mathbb{R}^{m\times n}$ is the blurring matrix. To solve problem (34), we aim to approximate the original image x by minimizing the additive noise, which leads to the following least squares (LS) problem:
$$\min_x \frac{1}{2}\|b - Ax\|^2, \quad (35)$$
where $\|\cdot\|$ is the $\ell_2$-norm defined by $\|x\| = \sqrt{\sum_{i=1}^n|x_i|^2}$. The solution of problem (35) can be approximated by many well-known iterative methods.
The Richardson iteration, often called the Landweber method [33,34,35,36], is generally used as an iterative regularization method to solve (35). The basic iteration takes the form:
$$x_{n+1} = x_n + \tau A^T(b - Ax_n). \quad (36)$$
Here the step size τ remains constant over the iterations. Convergence can be proved for any step size τ with $0 < \tau < \frac{2}{\sigma_{\max}^2}$, where $\sigma_{\max}$ is the largest singular value of A.
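A small NumPy sketch of iteration (36); the admissible step size is computed from the largest singular value, and the random test matrix is only there to make the snippet runnable:

```python
import numpy as np

def landweber(A, b, n_iter=500, x0=None):
    """Richardson/Landweber iteration x_{n+1} = x_n + tau * A^T (b - A x_n)."""
    sigma_max = np.linalg.norm(A, 2)        # largest singular value of A
    tau = 1.0 / sigma_max ** 2              # any tau in (0, 2/sigma_max^2) works
    x = np.zeros(A.shape[1]) if x0 is None else x0
    for _ in range(n_iter):
        x = x + tau * A.T @ (b - A @ x)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 40))
b = A @ rng.standard_normal(40)
x_hat = landweber(A, b)
```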
The goal in image restoration here is to deblur an image without knowing which blurring operator degraded it. Thus, we focus on the following family of problems:
$$\min_{x\in\mathbb{R}^n}\frac{1}{2}\|A_1x - b_1\|^2, \quad \min_{x\in\mathbb{R}^n}\frac{1}{2}\|A_2x - b_2\|^2, \quad \ldots, \quad \min_{x\in\mathbb{R}^n}\frac{1}{2}\|A_Nx - b_N\|^2, \quad (37)$$
where x is the original true image, $A_i$ is a blurring matrix, and $b_i$ is the image blurred by $A_i$, for $i = 1, 2, \ldots, N$. For solving this problem, we designed the flowchart shown in Figure 7.
Here, $\tilde{X}$ is the deblurred image, i.e., the common solution of problem (37), as seen in Figure 7. We can apply the algorithm in Theorem 1 to solve problem (37) because each $A_i^T(A_ix - b_i)$ is monotone and Lipschitz continuous for $i = 1, 2, \ldots, N$. Let $f : \mathbb{R}^n \to \mathbb{R}^n$ be a strict contraction mapping with constant $k \in [0,1)$. Suppose $\{x_n\}_{n=1}^{\infty}$ is generated by the following Algorithm 2:
Algorithm 2. Given $\rho \in (0,1)$ and $\mu \in (0,1)$, let $\{\alpha_n^i\}_{n=1}^{\infty} \subset (0,1)$, $i = 0, 1, \ldots, N$, be real sequences with $\sum_{i=0}^N\alpha_n^i = 1$, and let $x_1 \in H$ be arbitrary.
Step 1: Compute $y_n^i$ for all $i = 1, 2, \ldots, N$ by
$$y_n^i = P_C(x_n - \lambda_n^iA_i^T(A_ix_n - b_i)), \quad n \ge 1,$$
where $\lambda_n^i = \rho^{l_n^i}$ and $l_n^i$ is the smallest non-negative integer $l_i$ such that
$$\lambda_n^i\|A_i^T(A_ix_n - b_i) - A_i^T(A_iy_n^i - b_i)\| \le \mu\|r_{\rho^{l_i}}(x_n)\|.$$
Step 2: Compute
$$z_n^i = P_{T_n^i}(x_n - \lambda_n^iA_i^T(A_iy_n^i - b_i)),$$
where $T_n^i := \{z \in H : \langle x_n - \lambda_n^iA_i^T(A_ix_n - b_i) - y_n^i, z - y_n^i\rangle \le 0\}$.
Step 3: Compute
$$x_{n+1} = \alpha_n^0f(x_n) + \sum_{i=1}^N\alpha_n^iz_n^i, \quad n \ge 1.$$
Set $n := n + 1$ and go to Step 1.
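In Algorithm 2, the VIP operator for each blur is the least-squares gradient $F_i(x) = A_i^T(A_ix - b_i)$. When $A_i$ acts by convolution with a point-spread function (PSF), both $A_i$ and $A_i^T$ can be applied matrix-free; a rough sketch of one such gradient evaluation, in which the box constraint and the placeholder PSF are our illustrative assumptions:

```python
import numpy as np
from scipy.signal import fftconvolve

def blur(x, psf):
    """Apply A: 2-D convolution of the image with the PSF."""
    return fftconvolve(x, psf, mode="same")

def blur_T(x, psf):
    """Apply A^T: correlation, i.e., convolution with the flipped PSF."""
    return fftconvolve(x, psf[::-1, ::-1], mode="same")

def grad_i(x, psf, b):
    """F_i(x) = A_i^T (A_i x - b_i): monotone and Lipschitz continuous."""
    return blur_T(blur(x, psf) - b, psf)

# P_C for pixel intensities constrained to the box [0, 255].
proj_C = lambda x: np.clip(x, 0.0, 255.0)

# Placeholder PSF (3x3 mean filter); any of the blur kernels considered
# below can be substituted here.
psf = np.ones((3, 3)) / 9.0
```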
We present the restoration of images corrupted by the following blur types (approximate kernel constructions are sketched after this list):
(1) Gaussian blur of filter size 9 × 9 with standard deviation σ = 4 (the original image was degraded by the blurring matrix $A_1$);
(2) Out-of-focus blur (disk) with radius r = 6 (the original image was degraded by the blurring matrix $A_2$);
(3) Motion blur with a motion length of 21 pixels (len = 21) and a motion orientation of 11° (θ = 11°) (the original image was degraded by the blurring matrix $A_3$).
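As rough NumPy analogues of MATLAB's fspecial kernels for these three blur types (the motion kernel, built by rotating a horizontal line, only approximates fspecial('motion')):

```python
import numpy as np
from scipy.ndimage import rotate

def gaussian_psf(size=9, sigma=4.0):
    """Normalized size x size Gaussian kernel with the given sigma."""
    g = np.arange(size) - (size - 1) / 2
    k = np.exp(-(g**2) / (2 * sigma**2))
    psf = np.outer(k, k)
    return psf / psf.sum()

def disk_psf(radius=6):
    """Normalized out-of-focus (disk) kernel of the given radius."""
    g = np.arange(-radius, radius + 1)
    X, Y = np.meshgrid(g, g)
    psf = (X**2 + Y**2 <= radius**2).astype(float)
    return psf / psf.sum()

def motion_psf(length=21, angle=11.0):
    """Approximate linear-motion kernel: a rotated horizontal line."""
    psf = np.zeros((length, length))
    psf[length // 2, :] = 1.0
    psf = rotate(psf, angle, reshape=False, order=1)
    psf[psf < 0] = 0.0
    return psf / psf.sum()
```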
The performance of the proposed Algorithm 2 was tested on the original grey and RGB images shown in Figure 8 and Figure 9. The parameter $\alpha_n^i$ in the implemented algorithm for solving the problem (VIP) was set as
$$\alpha_n^i = \frac{n}{n+1}, \quad i = 1, 2, 3.$$
Three different types of blurred grey and RGB images degraded by the blurring matrices $A_1$, $A_2$, and $A_3$ are shown in Figure 10, Figure 11, Figure 12, Figure 13, Figure 14 and Figure 15.
We applied the proposed algorithm to obtain the solution of the deblurring problem (VIP) with N = 1 by inputting $A_1$, $A_2$, and $A_3$. The results of the proposed algorithm after 10,000 iterations for the following three cases:
Case I: Inputting $A_1$ in the proposed algorithm;
Case II: Inputting $A_2$ in the proposed algorithm; and
Case III: Inputting $A_3$ in the proposed algorithm
are shown in Figure 16, Figure 17, Figure 18, Figure 19, Figure 20 and Figure 21, which are composed of the restored images and their peak signal-to-noise ratios (PSNRs).
Next, we found the common solutions of the deblurring problem (VIP) with N = 2 by using the proposed algorithm. We consider the results of the proposed algorithm after 10,000 iterations in the following three cases:
Case I: Inputting $A_1$ and $A_2$ in the proposed algorithm;
Case II: Inputting $A_1$ and $A_3$ in the proposed algorithm; and
Case III: Inputting $A_2$ and $A_3$ in the proposed algorithm.
It can be seen from Figure 22, Figure 23, Figure 24, Figure 25, Figure 26 and Figure 27 that the quality of the images restored by using the proposed algorithm to find the common solutions of the deblurring problem (VIP) with N = 2 was improved compared with the previous results in Figure 16, Figure 17, Figure 18, Figure 19, Figure 20 and Figure 21.
Finally, the common solution of the deblurring problem (VIP) with N = 3 using the proposed algorithm was also tested (inputting $A_1$, $A_2$, and $A_3$ in the proposed algorithm). Figure 28 and Figure 29 show the reconstructed grey and RGB images after 10,000 iterations. The quality of the recovered grey and RGB images obtained in this case was the highest compared to the previous two cases.
Moreover, the Cauchy error, the figure error, and the peak signal-to-noise ratio (PSNR) for recovering the degraded grey and RGB images by the proposed method within the first 10,000 iterations are shown in Figure 30, Figure 31, Figure 32, Figure 33, Figure 34 and Figure 35.
The Cauchy error is defined as $\|x_n - x_{n-1}\| < 10^{-8}$. The figure error is defined as $\|x_n - x\|$, where x is the solution of the problem (VIP). The performance of the proposed algorithm at $x_n$ in the image restoration process was measured quantitatively by means of the peak signal-to-noise ratio (PSNR), which is defined by
$$\mathrm{PSNR}(x_n) = 20\log_{10}\Big(\frac{255^2}{\mathrm{MSE}}\Big),$$
where $\mathrm{MSE} = \|x_n - x\|^2$, $\|x_n - x\|$ is the $\ell_2$-norm of $\mathrm{vec}(x_n - x)$, and $\mathrm{vec}(x_n - x)$ reshapes the matrix $x_n - x$ into a vector.
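For reference, a small helper computing PSNR for 8-bit images; note that this sketch uses the conventional normalization by the mean squared error per pixel, which is one common convention and is stated here as our assumption:

```python
import numpy as np

def psnr(x_n, x_true, peak=255.0):
    """Peak signal-to-noise ratio of reconstruction x_n against the true image."""
    diff = np.asarray(x_n, dtype=float) - np.asarray(x_true, dtype=float)
    mse = np.mean(diff ** 2)                   # mean squared error per pixel
    return 10.0 * np.log10(peak ** 2 / mse)    # = 20*log10(peak / sqrt(MSE))
```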
The Cauchy error plot shows the validity of the proposed method, the figure error plot confirms its convergence, and the PSNR plot quantifies the image quality. From Figure 30, Figure 31, Figure 32, Figure 33, Figure 34 and Figure 35, it is clearly seen that finding the common solution of the deblurring problem (VIP) with N ≥ 2 yields quality improvements in the reconstructed grey and RGB images. Another advantage of the proposed method when the common solution of two or more image deblurring problems is used to restore the image is that the received image is more consistent than usual (see Figure 36, Figure 37, Figure 38, Figure 39, Figure 40, Figure 41, Figure 42, Figure 43, Figure 44, Figure 45, Figure 46, Figure 47, Figure 48 and Figure 49). These figures show the reconstructed grey and RGB images obtained using the proposed algorithm for the common solution of the following problems at the same PSNR:
(1) Deblurring problem (VIP) with N = 1, by inputting $A_1$, $A_2$, and $A_3$ in the proposed algorithm;
(2) Deblurring problem (VIP) with N = 2, by inputting $A_1$ and $A_2$, $A_1$ and $A_3$, and $A_2$ and $A_3$ in the proposed algorithm, respectively;
(3) Deblurring problem (VIP) with N = 3, by inputting $A_1$, $A_2$, and $A_3$ in the proposed algorithm.

5. Conclusions

In this work, we considered the problem of finding a common solution of variational inequalities with monotone and Lipschitz continuous operators in a Hilbert space. Under suitable conditions imposed on the parameters, we proved the strong convergence of the proposed algorithm. Several numerical examples in both finite- and infinite-dimensional spaces were performed to illustrate the performance of the proposed algorithm (see Table 1 and Table 2 and Figure 1, Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6). We applied our proposed algorithm to image recovery (Algorithm 2) in a situation where the type of the blurring matrix is unknown, to demonstrate its computational performance (see Figure 10, Figure 11, Figure 12, Figure 13, Figure 14, Figure 15, Figure 16, Figure 17, Figure 18, Figure 19, Figure 20, Figure 21, Figure 22, Figure 23, Figure 24, Figure 25, Figure 26, Figure 27, Figure 28, Figure 29, Figure 30, Figure 31, Figure 32, Figure 33, Figure 34 and Figure 35). We found that an advantage of our proposed algorithm is its ability to restore an image degraded by two or more types of blur, giving a restoration performance better than with a single one (see Figure 36, Figure 37, Figure 38, Figure 39, Figure 40, Figure 41, Figure 42, Figure 43, Figure 44, Figure 45, Figure 46, Figure 47, Figure 48 and Figure 49).

Author Contributions

Supervision, S.S.; formal analysis and writing, P.P. and W.C.; editing and software, D.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Chiang Mai University.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Hartman, P.; Stampacchia, G. On some non-linear elliptic differential-functional equations. Acta Math. 1966, 115, 271–310.
  2. Aubin, J.-P.; Ekeland, I. Applied Nonlinear Analysis; Wiley: New York, NY, USA, 1984.
  3. Baiocchi, C.; Capelo, A. Variational and Quasivariational Inequalities: Applications to Free Boundary Problems; Wiley: New York, NY, USA, 1984.
  4. Glowinski, R.; Lions, J.-L.; Tremolieres, R. Numerical Analysis of Variational Inequalities; North-Holland: Amsterdam, The Netherlands, 1981.
  5. Kinderlehrer, D.; Stampacchia, G. An Introduction to Variational Inequalities and Their Applications; Academic Press: New York, NY, USA, 1980.
  6. Konnov, I.V. Combined Relaxation Methods for Variational Inequalities; Springer: Berlin, Germany, 2001.
  7. Nagurney, A. Network Economics: A Variational Inequality Approach; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1999.
  8. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; CMS Books in Mathematics; Springer: New York, NY, USA, 2011.
  9. Facchinei, F.; Pang, J.-S. Finite-Dimensional Variational Inequalities and Complementarity Problems; Springer Series in Operations Research; Springer: New York, NY, USA, 2003; Volume II.
  10. Cholamjiak, P.; Suantai, S. Viscosity approximation methods for a nonexpansive semigroup in Banach spaces with gauge functions. J. Glob. Optim. 2012, 54, 185–197.
  11. Shehu, Y.; Cholamjiak, P. Iterative method with inertial for variational inequalities in Hilbert spaces. Calcolo 2019.
  12. Kesornprom, S.; Cholamjiak, P. Proximal type algorithms involving linesearch and inertial technique for split variational inclusion problem in Hilbert spaces with applications. Optimization 2019, 68, 2365–2391.
  13. Cholamjiak, P.; Thong, D.V.; Cho, Y.J. A novel inertial projection and contraction method for solving pseudomonotone variational inequality problems. Acta Appl. Math. 2019.
  14. Anh, P.N.; Hien, N.D.; Phuong, N.X. A parallel subgradient method extended to variational inequalities involving nonexpansive mappings. Appl. Anal. 2018.
  15. Ceng, L.C.; Coroian, I.; Qin, X.; Yao, J.C. A general viscosity implicit iterative algorithm for split variational inclusions with hierarchical variational inequality constraints. Fixed Point Theory 2019, 20, 469–482.
  16. Korpelevich, G.M. The extragradient method for finding saddle points and other problems. Ekonomika i Matematicheskie Metody 1976, 12, 747–756.
  17. Censor, Y.; Gibali, A.; Reich, S. The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2011, 148, 318–335.
  18. Gibali, A. A new non-Lipschitzian projection method for solving variational inequalities in Euclidean spaces. J. Nonlinear Anal. Optim. 2015, 6, 41–51.
  19. Shehu, Y.; Iyiola, O.S. Strong convergence result for monotone variational inequalities. Numer. Algorithms 2016.
  20. Censor, Y.; Gibali, A.; Reich, S.; Sabach, S. Common solutions to variational inequalities. Set-Valued Var. Anal. 2012, 20, 229–247.
  21. Bauschke, H.H.; Borwein, J.M. On projection algorithms for solving convex feasibility problems. SIAM Rev. 1996, 38, 367–426.
  22. Stark, H. (Ed.) Image Recovery: Theory and Applications; Academic Press: Orlando, FL, USA, 1987.
  23. Censor, Y.; Chen, W.; Combettes, P.L.; Davidi, R.; Herman, G.T. On the effectiveness of projection methods for convex feasibility problems with linear inequality constraints. Comput. Optim. Appl. 2011.
  24. Hieu, D.V.; Anh, P.K.; Muu, L.D. Modified hybrid projection methods for finding common solutions to variational inequality problems. Comput. Optim. Appl. 2016.
  25. Hieu, D.V. Parallel hybrid methods for generalized equilibrium problems and asymptotically strictly pseudocontractive mappings. J. Appl. Math. Comput. 2016.
  26. Yamada, I. The hybrid steepest descent method for the variational inequality problem over the intersection of fixed point sets of nonexpansive mappings. In Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications; Butnariu, D., Censor, Y., Reich, S., Eds.; Elsevier: Amsterdam, The Netherlands, 2001; pp. 473–504.
  27. Yao, Y.; Liou, Y.C. Weak and strong convergence of Krasnoselski–Mann iteration for hierarchical fixed point problems. Inverse Probl. 2008, 24, 015015.
  28. Anh, P.K.; Hieu, D.V. Parallel and sequential hybrid methods for a finite family of asymptotically quasi ϕ-nonexpansive mappings. J. Appl. Math. Comput. 2015, 48, 241–263.
  29. Anh, P.K.; Hieu, D.V. Parallel hybrid methods for variational inequalities, equilibrium problems and common fixed point problems. Vietnam J. Math. 2015.
  30. Takahashi, W. Nonlinear Functional Analysis; Yokohama Publishers: Yokohama, Japan, 2000.
  31. Xu, H.-K. Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66, 240–256.
  32. Takahashi, S.; Takahashi, W. Strong convergence theorem for a generalized equilibrium problem and a nonexpansive mapping in a Hilbert space. Nonlinear Anal. 2008, 69, 1025–1033.
  33. Engl, H.W.; Hanke, M.; Neubauer, A. Regularization of Inverse Problems; Kluwer Academic Publishers: Dordrecht, The Netherlands, 2000.
  34. Hansen, P.C. Rank-Deficient and Discrete Ill-Posed Problems; SIAM: Philadelphia, PA, USA, 1997.
  35. Hansen, P.C. Discrete Inverse Problems: Insight and Algorithms; SIAM: Philadelphia, PA, USA, 2010.
  36. Vogel, C.R. Computational Methods for Inverse Problems; SIAM: Philadelphia, PA, USA, 2002.
Figure 1. The error plot of $\|x_n - x_{n-1}\|$ for choice 1 in Example 1.
Figure 2. The error plot of $\|x_n - x_{n-1}\|$ for choice 2 in Example 1.
Figure 3. The error plot of $\|x_n - x_{n-1}\|$ for choice 3 in Example 1.
Figure 4. The error plot of $\|x_n(t) - x_{n-1}(t)\|$ for choice 1 in Example 2.
Figure 5. The error plot of $\|x_n(t) - x_{n-1}(t)\|$ for choice 2 in Example 2.
Figure 6. The error plot of $\|x_n(t) - x_{n-1}(t)\|$ for choice 3 in Example 2.
Figure 7. The flowchart of the image restoration process.
Figure 8. The matrix size of the grey image is 276 × 490.
Figure 9. The matrix size of the RGB image is 280 × 497 × 3.
Figure 10. Three degraded grey images by the blurring matrices $A_1$, $A_2$, and $A_3$, respectively.
Figure 11. Three degraded grey images by the blurring matrices $A_1$, $A_2$, and $A_3$, respectively.
Figure 12. Three degraded grey images by the blurring matrices $A_1$, $A_2$, and $A_3$, respectively.
Figure 13. Three degraded RGB images by the blurring matrices $A_1$, $A_2$, and $A_3$, respectively.
Figure 14. Three degraded RGB images by the blurring matrices $A_1$, $A_2$, and $A_3$, respectively.
Figure 15. Three degraded RGB images by the blurring matrices $A_1$, $A_2$, and $A_3$, respectively.
Figure 16. The reconstructed grey images with their PSNRs for the three different cases using the proposed algorithm after 10,000 iterations.
Figure 17. The reconstructed grey images with their PSNRs for the three different cases using the proposed algorithm after 10,000 iterations.
Figure 18. The reconstructed grey images with their PSNRs for the three different cases using the proposed algorithm after 10,000 iterations.
Figure 19. The reconstructed RGB images with their PSNRs for the three different cases using the proposed algorithm after 10,000 iterations.
Figure 20. The reconstructed RGB images with their PSNRs for the three different cases using the proposed algorithm after 10,000 iterations.
Figure 21. The reconstructed RGB images with their PSNRs for the three different cases using the proposed algorithm after 10,000 iterations.
Figure 22. The reconstructed grey images with their PSNRs for the three different cases using the proposed algorithm after 10,000 iterations.
Figure 23. The reconstructed grey images with their PSNRs for the three different cases using the proposed algorithm after 10,000 iterations.
Figure 24. The reconstructed grey images with their PSNRs for the three different cases using the proposed algorithm after 10,000 iterations.
Figure 25. The reconstructed RGB images with their PSNRs for the three different cases using the proposed algorithm after 10,000 iterations.
Figure 26. The reconstructed RGB images with their PSNRs for the three different cases using the proposed algorithm after 10,000 iterations.
Figure 27. The reconstructed RGB images with their PSNRs for the three different cases using the proposed algorithm after 10,000 iterations.
Figure 28. The reconstructed grey image from the blurring operators $A_1$, $A_2$, and $A_3$ using the proposed algorithm after 10,000 iterations.
Figure 29. The reconstructed RGB image from the blurring operators $A_1$, $A_2$, and $A_3$ using the proposed algorithm after 10,000 iterations.
Figure 30. Cauchy error plots of the proposed algorithm in all cases of grey images.
Figure 31. Figure error plots of the proposed algorithm in all cases of grey images.
Figure 32. PSNR quality plots of the proposed algorithm in all cases of grey images.
Figure 33. Cauchy error plots of the proposed algorithm in all cases of RGB images.
Figure 34. Figure error plots of the proposed algorithm in all cases of RGB images.
Figure 35. PSNR quality plots of the proposed algorithm in all cases of RGB images.
Figure 36. The reconstructed grey image of all cases using the proposed method (Algorithm 2) with PSNR = 24.
Figure 37. The reconstructed grey image of all cases using the proposed method (Algorithm 2) with PSNR = 24.
Figure 38. The reconstructed grey image of all cases using the proposed method (Algorithm 2) with PSNR = 24.
Figure 39. The reconstructed grey image of all cases using the proposed method (Algorithm 2) with PSNR = 24.
Figure 40. The reconstructed grey image of all cases using the proposed method (Algorithm 2) with PSNR = 24.
Figure 41. The reconstructed grey image of all cases using the proposed method (Algorithm 2) with PSNR = 24.
Figure 42. The reconstructed grey image of all cases using the proposed method (Algorithm 2) with PSNR = 24.
Figure 43. The reconstructed RGB image of all cases using the proposed method (Algorithm 2) with PSNR = 23.
Figure 44. The reconstructed RGB image of all cases using the proposed method (Algorithm 2) with PSNR = 23.
Figure 45. The reconstructed RGB image of all cases using the proposed method (Algorithm 2) with PSNR = 23.
Figure 46. The reconstructed RGB image of all cases using the proposed method (Algorithm 2) with PSNR = 23.
Figure 47. The reconstructed RGB image of all cases using the proposed method (Algorithm 2) with PSNR = 23.
Figure 48. The reconstructed RGB image of all cases using the proposed method (Algorithm 2) with PSNR = 23.
Figure 49. The reconstructed RGB image of all cases using the proposed method (Algorithm 2) with PSNR = 23.
Table 1. Comparison of the number of iterations in Example 1.

Inputting    | x1 = (3, 5, 8)        | x1 = (1, 7, 6)        | x1 = (6.13, 5.24, 1.19)
             | CPU Time  | Iter No.  | CPU Time  | Iter No.  | CPU Time  | Iter No.
A1           | 0.0000068 | 592       | 0.0000056 | 591       | 0.00001   | 589
A2           | 0.0003795 | 230       | 0.0002848 | 230       | 0.0002887 | 229
A3           | 0.0004619 | 230       | 0.0007852 | 230       | 0.000766  | 229
A1, A2       | 0.0002942 | 231       | 0.0002965 | 231       | 0.0002945 | 231
A1, A3       | 0.0008444 | 231       | 0.0009953 | 231       | 0.0009992 | 231
A2, A3       | 0.0011516 | 230       | 0.0009781 | 230       | 0.0007956 | 229
A1, A2, A3   | 0.0007429 | 231       | 0.0007586 | 231       | 0.0007621 | 231
Table 2. Comparison of the number of iterations in Example 2.

Inputting    | Bernstein Initial Data | Chebyshev Initial Data | Legendre Initial Data
             | CPU Time  | Iter No.   | CPU Time  | Iter No.   | CPU Time  | Iter No.
A1           | 2.20542   | 40         | 4.53568   | 40         | 2.66656   | 33
A2           | 2.93440   | 35         | 1.53655   | 39         | 1.46195   | 33
A3           | 2.699356  | 28         | 2.13809   | 38         | 1.38359   | 32
A1, A2       | 20.5162   | 36         | 33.9656   | 39         | 20.3007   | 33
A1, A3       | 11.0109   | 29         | 77.1907   | 38         | 44.6389   | 32
A2, A3       | 7.47927   | 28         | 52.5607   | 38         | 30.7733   | 32
A1, A2, A3   | 6.20955   | 28         | 82.3549   | 38         | 45.8789   | 32
