Article

Bounded Perturbation Resilience of Two Modified Relaxed CQ Algorithms for the Multiple-Sets Split Feasibility Problem

College of Science, Civil Aviation University of China, Tianjin 300300, China
* Author to whom correspondence should be addressed.
Submission received: 16 July 2021 / Revised: 19 August 2021 / Accepted: 20 August 2021 / Published: 23 August 2021
(This article belongs to the Special Issue Special Issue in Honor of the 60th Birthday of Professor Hong-Kun Xu)

Abstract

In this paper, we present some modified relaxed CQ algorithms with different kinds of step size and perturbation to solve the Multiple-sets Split Feasibility Problem (MSSFP). Under mild assumptions, we establish weak convergence and prove the bounded perturbation resilience of the proposed algorithms in Hilbert spaces. Treating appropriate inertial terms as bounded perturbations, we construct the inertial acceleration versions of the corresponding algorithms. Finally, for the LASSO problem and three experimental examples, numerical computations are given to demonstrate the efficiency of the proposed algorithms and the validity of the inertial perturbation.

1. Introduction

In this paper, we focus on the Multiple-sets Split Feasibility Problem (MSSFP), which is formulated as follows.
Find a point $x \in C = \bigcap_{i=1}^{t} C_i$ such that $Ax \in Q = \bigcap_{j=1}^{r} Q_j$, (1)
where $A : H_1 \to H_2$ is a bounded linear operator, $C_i \subseteq H_1$, $i = 1, \ldots, t$, and $Q_j \subseteq H_2$, $j = 1, \ldots, r$, are nonempty closed convex sets, and $H_1$ and $H_2$ are Hilbert spaces. When $t = 1$ and $r = 1$, the MSSFP reduces to the Split Feasibility Problem (SFP). Byrne [1,2] introduced the following CQ algorithm to solve the SFP:
$x^{k+1} = P_C\big(x^k - \alpha_k A^*(I - P_Q)Ax^k\big)$, (2)
where $\alpha_k \in (0, 2/\|A\|^2)$. It is proven that the iterates $\{x^k\}$ converge to a solution of the SFP. When $P_C$ and $P_Q$ have explicit expressions, the CQ algorithm is easy to carry out. In general, however, $P_C$ and $P_Q$ have no explicit formulas, so computing $P_C$ and $P_Q$ is itself an optimization problem.
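To fix ideas, here is a minimal sketch of the CQ iteration (2) in Python (our own illustration — the paper's experiments are in Matlab — with assumed helper names `proj_C` and `proj_Q`); it covers exactly the easy case just mentioned, where both projections are explicit:

```python
import numpy as np

def cq_algorithm(A, x0, proj_C, proj_Q, n_iter=500):
    # CQ iteration: x_{k+1} = P_C(x_k - alpha * A^T (I - P_Q) A x_k)
    # with a constant step alpha in (0, 2/||A||^2).
    alpha = 1.0 / np.linalg.norm(A, 2) ** 2      # safely inside (0, 2/||A||^2)
    x = x0.copy()
    for _ in range(n_iter):
        Ax = A @ x
        grad = A.T @ (Ax - proj_Q(Ax))           # gradient of (1/2)||(I - P_Q)Ax||^2
        x = proj_C(x - alpha * grad)
    return x

# Example with box constraints, where both projections are explicit:
A = np.array([[1.0, 2.0], [3.0, 4.0]])
proj_C = lambda x: np.clip(x, -1.0, 1.0)         # P_C for C = [-1, 1]^2
proj_Q = lambda y: np.clip(y, 0.0, 5.0)          # P_Q for Q = [0, 5]^2
x_sol = cq_algorithm(A, np.zeros(2), proj_C, proj_Q)
```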
To avoid the computation of $P_C$ and $P_Q$, Yang [3] proposed the relaxed CQ algorithm in finite-dimensional spaces:
$x^{k+1} = P_{C_k}\big(x^k - \alpha_k A^*(I - P_{Q_k})Ax^k\big)$, (3)
where $\alpha_k \in (0, 2/\|A\|^2)$, and $\{C_k\}$ and $\{Q_k\}$ are sequences of closed half-spaces containing $C$ and $Q$, respectively.
As for the MSSFP (1), Censor et al. [4] proposed the following algorithm:
$x^{k+1} = P_{\Omega}\big(x^k - \alpha \nabla p(x^k)\big)$, (4)
where $\Omega$ is an auxiliary closed convex set and $p(x)$ is a proximity function measuring the distance from a point to all the sets $C_i$ and $Q_j$:
$p(x) = \frac{1}{2}\sum_{i=1}^{t}\lambda_i\|x - P_{C_i}(x)\|^2 + \frac{1}{2}\sum_{j=1}^{r}\beta_j\|Ax - P_{Q_j}(Ax)\|^2$, (5)
where $\lambda_i > 0$, $\beta_j > 0$ for every $i$ and $j$, $\sum_{i=1}^{t}\lambda_i + \sum_{j=1}^{r}\beta_j = 1$, $0 < \alpha < 2/L$, and $L = \sum_{i=1}^{t}\lambda_i + \|A\|^2\sum_{j=1}^{r}\beta_j$. The convergence of the algorithm (4) is proved in finite-dimensional spaces.
Later, He et al. [5] introduced a relaxed self-adaptive CQ algorithm:
$x^{k+1} = \tau_k \mu + (1 - \tau_k)\big(x^k - \alpha_k \nabla p_k(x^k)\big)$, (6)
where $\{\tau_k\} \subseteq (0, 1)$, $\mu \in H$, $p_k(x) = \frac{1}{2}\sum_{i=1}^{t}\lambda_i\|x - P_{C_i^k}(x)\|^2 + \frac{1}{2}\sum_{j=1}^{r}\beta_j\|Ax - P_{Q_j^k}(Ax)\|^2$, the closed convex sets $C_i^k$ and $Q_j^k$ are level sets of certain convex functions containing $C_i$ and $Q_j$, and the self-adaptive step size is $\alpha_k = \rho_k \frac{p_k(x^k)}{\|\nabla p_k(x^k)\|^2}$ with $0 < \rho_k < 4$. They proved that the sequence $\{x^k\}$ generated by the algorithm (6) converges in norm to $P_S(\mu)$, where $S$ is the solution set of the MSSFP.
In order to improve the rate of convergence, many scholars have investigated the choice of the step size. Based on the CQ algorithm (2), Yang [6] proposed the step size
$\alpha_k = \frac{\rho_k}{\|\nabla f(x^k)\|}$,
where $\{\rho_k\}$ is a sequence of positive real numbers satisfying $\sum_{k=0}^{\infty}\rho_k = \infty$ and $\sum_{k=0}^{\infty}\rho_k^2 < +\infty$, and $f(x) = \frac{1}{2}\|(I - P_Q)Ax\|^2$. Assuming that $Q$ is bounded and $A$ is a matrix with full column rank, Yang proved the convergence of the underlying algorithm in finite-dimensional spaces. In 2012, López et al. [7] introduced another choice of the step size $\{\alpha_k\}$ in the algorithm (3):
$\alpha_k = \frac{\rho_k f_k(x^k)}{\|\nabla f_k(x^k)\|^2}$,
where $0 < \rho_k < 4$ and $f_k(x) = \frac{1}{2}\|(I - P_{Q_k})Ax\|^2$, and they proved the weak convergence of the iteration sequence in Hilbert spaces. The advantage of this choice lies in the fact that neither prior information about the norm $\|A\|$ nor any other conditions on $Q$ and $A$ are required. Recently, Gibali et al. [8] and Chen et al. [9] used step sizes determined by an Armijo-type line search and proved the convergence of the corresponding algorithms. For more information on the relaxed CQ algorithm and the selection of step size, we refer the reader to [10,11,12].
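The two step-size rules can be contrasted in a few lines. The sketch below (our illustration; `proj_Q` is an assumed closed-form projection) computes Yang's step $\rho_k/\|\nabla f(x^k)\|$ and the López et al. step $\rho_k f_k(x^k)/\|\nabla f_k(x^k)\|^2$ from the same residual; note that neither requires knowledge of $\|A\|$:

```python
import numpy as np

def step_sizes(A, x, proj_Q, rho_yang=0.1, rho_lopez=2.0):
    # Residual r = (I - P_Q) A x, f(x) = (1/2)||r||^2, grad f(x) = A^T r.
    r = A @ x - proj_Q(A @ x)
    grad = A.T @ r
    g = np.linalg.norm(grad)
    if g == 0:                                        # x already solves the (relaxed) SFP
        return 0.0, 0.0
    alpha_yang = rho_yang / g                         # Yang's rule [6]
    alpha_lopez = rho_lopez * (0.5 * (r @ r)) / g**2  # Lopez et al.'s rule [7]
    return alpha_yang, alpha_lopez
```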
On the other hand, in order to make the algorithms converge faster, specific perturbations have been introduced into the iterative format, since suitable perturbations can guide the iteration toward lower objective function values without destroying the overall convergence. So far, bounded perturbation resilience has been exploited in many problems.
Consider the use of bounded perturbations for the nonsmooth optimization problem $\min_{x \in H}\phi(x) = f(x) + g(x)$, where $f$ and $g$ are proper lower semicontinuous convex functions on a real Hilbert space, $f$ is differentiable, $g$ is not necessarily differentiable, and $\nabla f$ is $L$-Lipschitz continuous. One of the classic methods is the proximal gradient (PG) algorithm, based on which Guo et al. [13] proposed the following PG algorithm with perturbations:
$x^{k+1} = \mathrm{prox}_{\lambda_k g}\big(I - \lambda_k D\nabla f + e\big)(x^k).$ (7)
Assume that (i) $D$ is a bounded linear operator; (ii) $0 < \inf_k\lambda_k \le \sup_k\lambda_k < \frac{2}{L}$; (iii) $e(x^k)$ satisfies $\sum_{k=0}^{\infty}\|e(x^k)\| < +\infty$; and (iv) $\theta_k = \|\nabla f(x^k) - D(x^k)\nabla f(x^k)\|$ satisfies $\sum_{k=0}^{\infty}\theta_k < +\infty$. They asserted that the generated sequence $\{x^k\}$ converges weakly to a solution. Later, Guo and Cui [14] proposed the following modified PG algorithm for this problem:
$x^{k+1} = \tau_k h(x^k) + (1 - \tau_k)\big[\mathrm{prox}_{\lambda_k g}(I - \lambda_k\nabla f)(x^k) + e(x^k)\big],$ (8)
where $\tau_k \in [0, 1]$ and $h$ is a $\rho$-contractive operator with $\rho \in (0, 1)$. They proved that the sequence $\{x^k\}$ generated by the algorithm (8) converges strongly to a solution $x^*$. In 2020, Pakkaranang et al. [15] considered a PG algorithm combined with the inertial technique,
$y^k = x^k + \theta_k(x^k - x^{k-1}), \quad x^{k+1} = \tau_k h(y^k) + (1 - \tau_k)\big[\mathrm{prox}_{\lambda_k g}(I - \lambda_k\nabla f)(y^k) + e(y^k)\big],$
and they proved its strong convergence under suitable conditions.
For the convex minimization problem $\min_{x \in \Omega} f(x)$, where $\Omega$ is a nonempty closed convex subset of a finite-dimensional space and the objective function $f$ is convex, Jin et al. [16] presented the following projected scaled gradient (PSG) algorithm with errors:
$x^{k+1} = P_{\Omega}\big(x^k - \lambda_k D(x^k)\nabla f(x^k) + e(x^k)\big).$
Assume that (i) $\{D(x^k)\}_{k=0}^{\infty}$ is a sequence of diagonal scaling matrices and that (ii)–(iv) are the same as the conditions in algorithm (7); then the generated sequence $\{x^k\}$ converges weakly to a solution.
In 2017, Xu [17] applied superiorization techniques to the relaxed PSG. The iterative form is
$x^{k+1} = (1 - \tau_k)x^k + \tau_k P_{\Omega}\big(x^k - \lambda_k D(x^k)\nabla f(x^k) + e(x^k)\big),$
where $\{\tau_k\}$ is a sequence in $[0, 1]$ and $D(x^k)$ is a diagonal scaling matrix. He established weak convergence of the above algorithm under appropriate conditions imposed on $\{\tau_k\}$ and $\{\lambda_k\}$.
For the variational inequality problem (VIP for short), find $x^* \in C$ such that $\langle F(x^*), x - x^* \rangle \ge 0$ for all $x \in C$, where $F$ is a nonlinear operator, Dong et al. [18] considered the extragradient algorithm with perturbations
$\bar{x}^k = P_C\big(x^k - \alpha_k F(x^k) + e_1(x^k)\big), \quad x^{k+1} = P_C\big(x^k - \alpha_k F(\bar{x}^k) + e_2(x^k)\big),$
where $\alpha_k = \gamma l^{m_k}$ with $m_k$ the smallest non-negative integer such that
$\alpha_k\|F(x^k) - F(\bar{x}^k)\| \le \mu\|x^k - \bar{x}^k\|.$
Assuming that $F$ is monotone and $L$-Lipschitz continuous and that the error sequences are summable, the sequence $\{x^k\}$ generated by the algorithm converges weakly to a solution of the VIP.
For the split variational inclusion problem, Duan and Zheng [19] in 2020 proposed the following algorithm:
$x^{k+1} = \tau_k h(x^k) + (1 - \tau_k)\big[J_{\gamma}^{B_1}\big(I - \lambda_k A^*(I - J_{\gamma}^{B_2})A\big)(x^k) + e(x^k)\big],$
where $A$ is a bounded linear operator and $B_1$ and $B_2$ are maximal monotone operators. Assuming that $\lim_{k\to\infty}\tau_k = 0$, $\sum_{k=0}^{\infty}\tau_k = \infty$, $0 < \inf_k\lambda_k \le \sup_k\lambda_k < \frac{2}{L}$ with $L = \|A\|^2$, and $\sum_{k=0}^{\infty}\|e(x^k)\| < +\infty$, they proved that the sequence $\{x^k\}$ converges strongly to a solution of the split variational inclusion problem, which is also the unique solution of a certain variational inequality problem.
For the convex feasibility problem, Censor and Zaslavski [20] considered the perturbation resilience and convergence of dynamic string-averaging projection methods.
Adding an inertial term, which can itself be viewed as a perturbation, may also improve the convergence rate. Recently, for a common solution of the split minimization problem and the fixed point problem, Kaewyong and Sitthithakerngkiet [21] combined the proximal algorithm and a modified Mann iterative method with inertial extrapolation and improved related results. Shehu et al. [22] and Li et al. [23] added alternated inertial perturbations to algorithms for solving the SFP and improved the convergence rate.
At present, the (multiple-sets) split feasibility problem is widely used in applications such as CT tomography, image restoration, and image reconstruction. There is a large literature on iterative algorithms for solving the (multiple-sets) split feasibility problem; however, relatively few works study algorithms for the (multiple-sets) split feasibility problem with perturbations, especially with self-adaptive step sizes. In fact, such algorithms also enjoy bounded perturbation resilience. Motivated by [9,18], we focus on modified relaxed CQ algorithms for solving the MSSFP (1) in real Hilbert spaces and show that the proposed algorithms are bounded-perturbation-resilient.
The rest of the paper is arranged as follows. In Section 2, definitions and notions that will be useful for our analysis are presented. In Section 3, we present our algorithms and prove their weak convergence. In Section 4, we prove that the proposed algorithms are bounded-perturbation-resilient and construct inertial modifications of the algorithms. Finally, in Section 5, we present some numerical simulations to show the validity of the proposed algorithms.

2. Preliminaries

In this section, we first define some symbols and then review some definitions and basic results that will be used in this paper.
Throughout this paper, $H$ denotes a real Hilbert space endowed with an inner product $\langle\cdot,\cdot\rangle$ and its induced norm $\|\cdot\|$, and $I$ is the identity operator on $H$. We denote by $S$ the solution set of the MSSFP (1). Moreover, $x^k \to x$ ($x^k \rightharpoonup x$) denotes that the sequence $\{x^k\}$ converges strongly (weakly) to $x$. Finally, we denote by $\omega_w(x^k)$ the set of all weak cluster points of $\{x^k\}$.
An operator $T : H \to H$ is said to be nonexpansive if for all $x, y \in H$,
$\|Tx - Ty\| \le \|x - y\|;$
$T : H \to H$ is said to be firmly nonexpansive if for all $x, y \in H$,
$\|Tx - Ty\|^2 \le \|x - y\|^2 - \|(I - T)x - (I - T)y\|^2,$
or equivalently,
$\|Tx - Ty\|^2 \le \langle Tx - Ty, x - y \rangle.$
It is well known that $T$ is firmly nonexpansive if and only if $I - T$ is firmly nonexpansive.
Let $C$ be a nonempty closed convex subset of $H$. The metric projection $P_C$ from $H$ onto $C$ is defined as
$P_C(x) = \operatorname{argmin}_{y \in C}\|x - y\|^2, \quad x \in H.$
The metric projection $P_C$ is a firmly nonexpansive operator.
Definition 1 ([24]). A function $f : H \to \mathbb{R}$ is said to be weakly lower semicontinuous at $\hat{x}$ if $x^k$ converging weakly to $\hat{x}$ implies
$f(\hat{x}) \le \liminf_{k\to\infty} f(x^k).$
Definition 2. If $\varphi : H \to \mathbb{R}$ is a convex function, the subdifferential of $\varphi$ at $x$ is defined as
$\partial\varphi(x) = \{\xi \in H \mid \varphi(y) \ge \varphi(x) + \langle \xi, y - x \rangle, \ \forall y \in H\}.$
Lemma 1 ([24]). Let $C$ be a nonempty closed convex subset of $H$; then for any $x, y \in H$ and $z \in C$, the following assertions hold:
(i) $\langle x - P_C x, z - P_C x \rangle \le 0$;
(ii) $\|P_C x - z\|^2 \le \|x - z\|^2 - \|P_C x - x\|^2$;
(iii) $2\langle x, y \rangle \le \|x\| + \|x\|\,\|y\|^2$;
(iv) $2\langle x, y \rangle \le \|x\|^2 + \|y\|^2$.
Lemma 2 ([25]). Assume that $\{a_k\}_{k=0}^{\infty}$ is a sequence of nonnegative real numbers such that
$a_{k+1} \le (1 + \sigma_k)a_k + \delta_k, \quad k \ge 0,$
where the nonnegative sequences $\{\sigma_k\}_{k=0}^{\infty}$ and $\{\delta_k\}_{k=0}^{\infty}$ satisfy $\sum_{k=0}^{\infty}\sigma_k < +\infty$ and $\sum_{k=0}^{\infty}\delta_k < +\infty$, respectively. Then $\lim_{k\to\infty} a_k$ exists.
Lemma 3 ([25]). Let $S$ be a nonempty closed convex subset of $H$ and let $\{x^k\}$ be a sequence in $H$ that satisfies the following properties:
(i) $\lim_{k\to\infty}\|x^k - x\|$ exists for each $x \in S$;
(ii) $\omega_w(x^k) \subseteq S$.
Then $\{x^k\}$ converges weakly to a point in $S$.
Definition 3. An algorithmic operator $P$ is said to be bounded perturbation resilient if the iterations $x^{k+1} = P(x^k)$ and $x^{k+1} = P(x^k + \lambda_k\nu^k)$ both converge, where $\{\lambda_k\}$ is a sequence of nonnegative real numbers, $\{\nu^k\}$ is a sequence in $H$, and there exists $M \in \mathbb{R}$ such that
$\sum_{k=0}^{\infty}\lambda_k < +\infty, \quad \|\nu^k\| \le M.$

3. Algorithms and Their Convergence

In this section, we introduce two algorithms for the MSSFP (1) and prove their weak convergence. We first assume that the following four assumptions hold.
(A1) The solution set $S$ of the MSSFP (1) is nonempty.
(A2) The sets $C_i$ and $Q_j$ are given as level sets of convex functions:
$C_i = \{x \in H_1 \mid c_i(x) \le 0\}$ and $Q_j = \{y \in H_2 \mid q_j(y) \le 0\},$
where $c_i : H_1 \to \mathbb{R}$ ($i = 1, \ldots, t$) and $q_j : H_2 \to \mathbb{R}$ ($j = 1, \ldots, r$) are weakly lower semicontinuous convex functions.
(A3) For any $x \in H_1$ and $y \in H_2$, at least one subgradient $\xi_i \in \partial c_i(x)$ and $\eta_j \in \partial q_j(y)$ can be calculated. The subdifferentials $\partial c_i$ and $\partial q_j$ are bounded on bounded sets.
(A4) The sequences of perturbations $\{e_i(x^k)\}_{k=0}^{\infty}$ ($i = 1, 2, 3$) are summable, i.e.,
$\sum_{k=0}^{\infty}\|e_i(x^k)\| < +\infty.$
Define two sets at the point $x^k$ by
$C_i^k = \{x \in H_1 \mid c_i(x^k) + \langle \xi_i^k, x - x^k \rangle \le 0\},$
and
$Q_j^k = \{y \in H_2 \mid q_j(Ax^k) + \langle \eta_j^k, y - Ax^k \rangle \le 0\},$
where $\xi_i^k \in \partial c_i(x^k)$ and $\eta_j^k \in \partial q_j(Ax^k)$. Define the function $f_k$ by
$f_k(x) = \frac{1}{2}\sum_{j=1}^{r}\beta_j\|(I - P_{Q_j^k})Ax\|^2,$
where $\beta_j > 0$. It is then easy to verify that the function $f_k(x)$ is convex and differentiable with gradient
$\nabla f_k(x) = \sum_{j=1}^{r}\beta_j A^*(I - P_{Q_j^k})Ax,$
and the Lipschitz constant of $\nabla f_k(x)$ is $L = \|A\|^2\sum_{j=1}^{r}\beta_j$.
We see that $C_i^k$ ($i = 1, \ldots, t$) and $Q_j^k$ ($j = 1, \ldots, r$) are half-spaces such that $C_i \subseteq C_i^k$ and $Q_j \subseteq Q_j^k$ for all $k \ge 1$. We now present Algorithm 1 with Armijo-line search step size.
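Before stating it, note that because $C_i^k$ and $Q_j^k$ are half-spaces, the projections onto them are available in closed form; this is what makes the relaxed algorithms implementable. As a standard computation (spelled out here for the reader's convenience, not stated in this form in the text), whenever $\xi_i^k \ne 0$,

$P_{C_i^k}(x) = x - \frac{\max\{0,\; c_i(x^k) + \langle \xi_i^k, x - x^k \rangle\}}{\|\xi_i^k\|^2}\,\xi_i^k,$

and $P_{C_i^k}(x) = x$ when $\xi_i^k = 0$ (in that case $x^k$ minimizes $c_i$, so $C_i^k = H_1$). The projection onto $Q_j^k$ is computed analogously with $q_j$, $\eta_j^k$, and $Ax^k$.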
Algorithm 1 (The relaxed CQ algorithm with Armijo-line search and perturbation)
  Given constants $\gamma > 0$, $l \in (0, 1)$, $\mu \in (0, 1)$. Let $x^0$ be arbitrarily chosen; for $k = 0, 1, \ldots$, compute
$\bar{x}^k = P_{C_{[k]}^k}\big(x^k - \alpha_k\nabla f_k(x^k) + e_1(x^k)\big),$ (16)
where $[k] = k \bmod t$ and $\alpha_k = \gamma l^{m_k}$ with $m_k$ the smallest non-negative integer such that
$\alpha_k\|\nabla f_k(x^k) - \nabla f_k(\bar{x}^k)\| \le \mu\|x^k - \bar{x}^k\|.$ (17)
  Construct the next iterate $x^{k+1}$ by
$x^{k+1} = P_{C_{[k]}^k}\big(x^k - \alpha_k\nabla f_k(\bar{x}^k) + e_2(x^k)\big).$ (18)
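The search for $\alpha_k$ in (17) can be realized by plain backtracking. The following sketch (our illustration; `grad_f` and `proj_C` stand for $\nabla f_k$ and $P_{C_{[k]}^k}$ and are assumed given) returns both the accepted step and the corresponding trial point $\bar{x}^k$, since each trial requires a fresh projection:

```python
import numpy as np

def armijo_step(grad_f, proj_C, x, gamma=1.0, ell=0.5, mu=0.5, max_backtracks=50):
    # Find alpha = gamma * ell^m with the smallest m >= 0 such that
    #   alpha * ||grad_f(x) - grad_f(x_bar)|| <= mu * ||x - x_bar||,
    # where x_bar = proj_C(x - alpha * grad_f(x))  (perturbation omitted).
    g = grad_f(x)
    alpha = gamma
    for _ in range(max_backtracks):
        x_bar = proj_C(x - alpha * g)
        if alpha * np.linalg.norm(g - grad_f(x_bar)) <= mu * np.linalg.norm(x - x_bar):
            break
        alpha *= ell
    return alpha, x_bar
```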
Lemma 4 ([6]). The Armijo-line search terminates after a finite number of steps. In addition,
$\frac{\mu l}{L} < \alpha_k \le \gamma \quad \text{for all } k \ge 0,$ (19)
where $L = \|A\|^2\sum_{j=1}^{r}\beta_j$.
The weak convergence of Algorithm 1 is established below.
Theorem 1. Let $\{x^k\}$ be the sequence generated by Algorithm 1, and let assumptions (A1)–(A4) hold. Then $\{x^k\}$ converges weakly to a solution of the MSSFP (1).
Proof. 
Let $x^* \in S$ be a solution of the MSSFP. Note that $C \subseteq C_i \subseteq C_i^k$ and $Q \subseteq Q_j \subseteq Q_j^k$ for $i = 1, \ldots, t$, $j = 1, \ldots, r$, $k = 0, 1, \ldots$; hence $x^* = P_C(x^*) = P_{C_i}(x^*) = P_{C_i^k}(x^*)$ and $Ax^* = P_Q(Ax^*) = P_{Q_j}(Ax^*) = P_{Q_j^k}(Ax^*)$, and thus $f_k(x^*) = 0$ and $\nabla f_k(x^*) = 0$.
First, we prove that $\{x^k\}$ is bounded. By Lemma 1 (ii), we have
$\|x^{k+1} - x^*\|^2 = \|P_{C_{[k]}^k}(x^k - \alpha_k\nabla f_k(\bar{x}^k) + e_2(x^k)) - x^*\|^2$
$\le \|x^k - \alpha_k\nabla f_k(\bar{x}^k) + e_2(x^k) - x^*\|^2 - \|x^{k+1} - x^k + \alpha_k\nabla f_k(\bar{x}^k) - e_2(x^k)\|^2$
$= \|x^k - x^*\|^2 - \|x^{k+1} - x^k\|^2 - 2\langle \alpha_k\nabla f_k(\bar{x}^k) - e_2(x^k), x^{k+1} - x^* \rangle$
$= \|x^k - x^*\|^2 - \|x^{k+1} - \bar{x}^k\|^2 - \|\bar{x}^k - x^k\|^2 - 2\langle x^{k+1} - \bar{x}^k, \bar{x}^k - x^k \rangle - 2\alpha_k\langle \nabla f_k(\bar{x}^k), x^{k+1} - x^* \rangle + 2\langle e_2(x^k), x^{k+1} - x^* \rangle$
$= \|x^k - x^*\|^2 - \|x^{k+1} - \bar{x}^k\|^2 - \|\bar{x}^k - x^k\|^2 - 2\alpha_k\langle \nabla f_k(\bar{x}^k), \bar{x}^k - x^* \rangle + 2\langle x^k - \bar{x}^k - \alpha_k\nabla f_k(\bar{x}^k), x^{k+1} - \bar{x}^k \rangle + 2\langle e_2(x^k), x^{k+1} - x^* \rangle.$ (20)
From Lemma 1 (iii), we have
$2\langle e_2(x^k), x^{k+1} - x^* \rangle \le \|e_2(x^k)\| + \|e_2(x^k)\|\,\|x^{k+1} - x^*\|^2.$ (21)
Since $I - P_{Q_j^k}$ is firmly nonexpansive and $\nabla f_k(x^*) = 0$, Lemma 4 gives
$2\alpha_k\langle \nabla f_k(\bar{x}^k), \bar{x}^k - x^* \rangle = 2\alpha_k\sum_{j=1}^{r}\beta_j\langle (I - P_{Q_j^k})A\bar{x}^k - (I - P_{Q_j^k})Ax^*, A\bar{x}^k - Ax^* \rangle \ge \frac{2\mu l}{L}\sum_{j=1}^{r}\beta_j\|(I - P_{Q_j^k})A\bar{x}^k\|^2.$ (22)
By the definition of $\bar{x}^k$ and Lemma 1 (i), we know that
$\langle \bar{x}^k - x^k + \alpha_k\nabla f_k(x^k) - e_1(x^k), x^{k+1} - \bar{x}^k \rangle \ge 0.$ (23)
Then (17), (23), and Lemma 1 (iii) yield
$2\langle x^k - \bar{x}^k - \alpha_k\nabla f_k(\bar{x}^k), x^{k+1} - \bar{x}^k \rangle \le 2\langle e_1(x^k) + \alpha_k\nabla f_k(x^k) - \alpha_k\nabla f_k(\bar{x}^k), x^{k+1} - \bar{x}^k \rangle$
$= 2\alpha_k\langle \nabla f_k(x^k) - \nabla f_k(\bar{x}^k), x^{k+1} - \bar{x}^k \rangle + 2\langle e_1(x^k), x^{k+1} - \bar{x}^k \rangle$
$\le 2\mu\|x^k - \bar{x}^k\|\,\|x^{k+1} - \bar{x}^k\| + \|e_1(x^k)\| + \|e_1(x^k)\|\,\|x^{k+1} - \bar{x}^k\|^2$
$\le \mu\|x^k - \bar{x}^k\|^2 + (\mu + \|e_1(x^k)\|)\|x^{k+1} - \bar{x}^k\|^2 + \|e_1(x^k)\|.$ (24)
From assumption (A4), $\lim_{k\to\infty}\|e_i(x^k)\| = 0$ for $i = 1, 2$; thus for every $\varepsilon > 0$ there exists $K$ such that $\|e_i(x^k)\| < \varepsilon$ for all $k > K$. We may therefore assume that $\|e_1(x^k)\| \in [0, 1 - \mu - \tau)$ and $\|e_2(x^k)\| \in [0, 1/2)$ for $k \ge K$, where $\tau \in (0, 1 - \mu)$. Hence, from (24), we get
$2\langle x^k - \bar{x}^k - \alpha_k\nabla f_k(\bar{x}^k), x^{k+1} - \bar{x}^k \rangle \le \mu\|x^k - \bar{x}^k\|^2 + (1 - \tau)\|x^{k+1} - \bar{x}^k\|^2 + \|e_1(x^k)\|.$ (25)
Substituting (21), (22), and (25) into (20) yields
$\|x^{k+1} - x^*\|^2 \le \|x^k - x^*\|^2 - (1 - \mu)\|x^k - \bar{x}^k\|^2 - \tau\|x^{k+1} - \bar{x}^k\|^2 + \|e_1(x^k)\| + \|e_2(x^k)\| + \|e_2(x^k)\|\,\|x^{k+1} - x^*\|^2 - \frac{2\mu l}{L}\sum_{j=1}^{r}\beta_j\|(I - P_{Q_j^k})A\bar{x}^k\|^2.$ (26)
Rearranging the above inequality, we obtain
$\|x^{k+1} - x^*\|^2 \le \frac{1}{1 - \|e_2(x^k)\|}\|x^k - x^*\|^2 - \frac{1 - \mu}{1 - \|e_2(x^k)\|}\|x^k - \bar{x}^k\|^2 - \frac{\tau}{1 - \|e_2(x^k)\|}\|x^{k+1} - \bar{x}^k\|^2 + \frac{\|e_1(x^k)\| + \|e_2(x^k)\|}{1 - \|e_2(x^k)\|} - \frac{2\mu l}{(1 - \|e_2(x^k)\|)L}\sum_{j=1}^{r}\beta_j\|(I - P_{Q_j^k})A\bar{x}^k\|^2.$ (27)
Since $\|e_2(x^k)\| \in [0, 1/2)$ for $k \ge K$, we get
$1 \le \frac{1}{1 - \|e_2(x^k)\|} \le 1 + 2\|e_2(x^k)\| < 2.$ (28)
This together with (27) shows that
$\|x^{k+1} - x^*\|^2 \le (1 + 2\|e_2(x^k)\|)\|x^k - x^*\|^2 - (1 - \mu)\|x^k - \bar{x}^k\|^2 - \tau\|x^{k+1} - \bar{x}^k\|^2 - \frac{2\mu l}{L}\sum_{j=1}^{r}\beta_j\|(I - P_{Q_j^k})A\bar{x}^k\|^2 + 2\|e_1(x^k)\| + 2\|e_2(x^k)\|$
$\le (1 + 2\|e_2(x^k)\|)\|x^k - x^*\|^2 + 2\|e_1(x^k)\| + 2\|e_2(x^k)\|.$ (29)
Using Lemma 2 and assumption (A4), we deduce that $\lim_{k\to\infty}\|x^k - x^*\|^2$ exists and that $\{x^k\}_{k=0}^{\infty}$ is bounded.
From (29), it follows that
$(1 - \mu)\|x^k - \bar{x}^k\|^2 + \tau\|x^{k+1} - \bar{x}^k\|^2 + \frac{2\mu l}{L}\sum_{j=1}^{r}\beta_j\|(I - P_{Q_j^k})A\bar{x}^k\|^2 \le (1 + 2\|e_2(x^k)\|)\|x^k - x^*\|^2 - \|x^{k+1} - x^*\|^2 + 2\|e_1(x^k)\| + 2\|e_2(x^k)\|,$ (30)
which implies that
$\sum_{k=0}^{\infty}\|x^k - \bar{x}^k\|^2 < +\infty, \quad \sum_{k=0}^{\infty}\|x^{k+1} - \bar{x}^k\|^2 < +\infty.$ (31)
We therefore have
$\lim_{k\to\infty}\|x^k - \bar{x}^k\| = 0, \quad \lim_{k\to\infty}\|x^{k+1} - \bar{x}^k\| = 0.$ (32)
Thus, letting $k \to \infty$ in the inequality $\|x^{k+1} - x^k\| \le \|x^{k+1} - \bar{x}^k\| + \|\bar{x}^k - x^k\|$, we have
$\lim_{k\to\infty}\|x^{k+1} - x^k\| = 0.$ (33)
From (30), we also know that
$\lim_{k\to\infty}\sum_{j=1}^{r}\beta_j\|(I - P_{Q_j^k})A\bar{x}^k\|^2 = 0.$ (34)
Hence for every $j = 1, 2, \ldots, r$, we have
$\lim_{k\to\infty}\|(I - P_{Q_j^k})A\bar{x}^k\| = 0.$ (35)
Since $\{x^k\}$ is bounded, the set $\omega_w(x^k)$ is nonempty. Let $\hat{x} \in \omega_w(x^k)$; then there exists a subsequence $\{x^{k_n}\}$ of $\{x^k\}$ such that $x^{k_n} \rightharpoonup \hat{x}$. Next, we show that $\hat{x}$ is a solution of the MSSFP (1), which will establish $\omega_w(x^k) \subseteq S$. Indeed, since $x^{k_n+1} \in C_{[k_n]}^{k_n}$, the definition of $C_{[k_n]}^{k_n}$ gives
$c_{[k_n]}(x^{k_n}) + \langle \xi_{[k_n]}^{k_n}, x^{k_n+1} - x^{k_n} \rangle \le 0,$ (36)
where $\xi_{[k_n]}^{k_n} \in \partial c_{[k_n]}(x^{k_n})$. For every $i = 1, 2, \ldots, t$, choose a subsequence $\{k_{n_s}\} \subseteq \{k_n\}$ such that $[k_{n_s}] = i$; then
$c_i(x^{k_{n_s}}) + \langle \xi_i^{k_{n_s}}, x^{k_{n_s}+1} - x^{k_{n_s}} \rangle \le 0.$ (37)
By assumption (A3) on the boundedness of $\partial c_i$ and by (32), there exists $M_1$ such that
$c_i(x^{k_{n_s}}) \le \langle \xi_i^{k_{n_s}}, x^{k_{n_s}} - x^{k_{n_s}+1} \rangle \le \|\xi_i^{k_{n_s}}\|\,\|x^{k_{n_s}} - x^{k_{n_s}+1}\| \le M_1\|x^{k_{n_s}} - x^{k_{n_s}+1}\| \to 0, \quad s \to \infty.$
From the weak lower semicontinuity of the convex function $c_i$, we deduce from (37) that $c_i(\hat{x}) \le \liminf_{s\to\infty} c_i(x^{k_{n_s}}) \le 0$, i.e., $\hat{x} \in C = \bigcap_{i=1}^{t} C_i$.
Noting that $I - P_{Q_j^{k_n}}$ is nonexpansive, together with (31), (34), and the boundedness of the linear operator $A$, we get
$\|(I - P_{Q_j^{k_n}})Ax^{k_n}\| \le \|(I - P_{Q_j^{k_n}})Ax^{k_n} - (I - P_{Q_j^{k_n}})A\bar{x}^{k_n}\| + \|(I - P_{Q_j^{k_n}})A\bar{x}^{k_n}\| \le \|A\|\,\|x^{k_n} - \bar{x}^{k_n}\| + \|(I - P_{Q_j^{k_n}})A\bar{x}^{k_n}\| \to 0, \quad n \to \infty.$ (38)
Since $P_{Q_j^{k_n}}(Ax^{k_n}) \in Q_j^{k_n}$, we have
$q_j(Ax^{k_n}) + \langle \eta_j^{k_n}, P_{Q_j^{k_n}}(Ax^{k_n}) - Ax^{k_n} \rangle \le 0,$ (39)
where $\eta_j^{k_n} \in \partial q_j(Ax^{k_n})$. From the boundedness assumption (A3), (38), and (39), there exists $M_2$ such that
$q_j(Ax^{k_n}) \le \|\eta_j^{k_n}\|\,\|Ax^{k_n} - P_{Q_j^{k_n}}(Ax^{k_n})\| \le M_2\|(I - P_{Q_j^{k_n}})Ax^{k_n}\| \to 0, \quad n \to \infty.$ (40)
Then $q_j(A\hat{x}) \le \liminf_{n\to\infty} q_j(Ax^{k_n}) \le 0$; thus $A\hat{x} \in Q = \bigcap_{j=1}^{r} Q_j$, and therefore $\hat{x} \in S$. Using Lemma 3, we conclude that the sequence $\{x^k\}$ converges weakly to a solution of the MSSFP (1). □
Now we present Algorithm 2, in which the step size is chosen by a self-adaptive rule, and prove its weak convergence.
Algorithm 2 (The relaxed CQ algorithm with self-adaptive step size and perturbation)
  Take an arbitrary initial guess $x^0$, and calculate
$x^{k+1} = P_{C_{[k]}^k}\big(x^k - \alpha_k\nabla f_k(x^k) + e_3(x^k)\big),$ (41)
where $\alpha_k = \frac{\rho_k f_k(x^k)}{\|\nabla f_k(x^k)\|^2}$, $0 < \rho_k < 4$, and $C_i$, $Q_j$, $C_i^k$, $Q_j^k$, and $f_k(x)$ are defined at the beginning of this section.
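For concreteness, one iteration of Algorithm 2 might look as follows (our sketch, with $r = 1$ and $\beta_1 = 1$ for readability; `projs_C` is an assumed list of the half-space projections $P_{C_i^k}$ and `proj_Qk` stands for $P_{Q^k}$, both available in closed form as noted above):

```python
import numpy as np

def algorithm2_step(k, x, A, projs_C, proj_Qk, rho=2.0, e_k=None):
    # x_{k+1} = P_{C_{[k]}^k}(x_k - alpha_k * grad f_k(x_k) + e_3(x_k)),
    # with [k] = k mod t and alpha_k = rho_k * f_k(x_k) / ||grad f_k(x_k)||^2.
    t = len(projs_C)
    Ax = A @ x
    r = Ax - proj_Qk(Ax)                  # (I - P_{Q^k}) A x
    grad = A.T @ r                        # grad f_k(x) for r = 1, beta_1 = 1
    g2 = grad @ grad
    if g2 == 0:                           # gradient vanishes: keep the iterate
        return projs_C[k % t](x)
    alpha = rho * (0.5 * (r @ r)) / g2    # self-adaptive step, 0 < rho < 4
    y = x - alpha * grad
    if e_k is not None:                   # optional bounded perturbation e_3(x_k)
        y = y + e_k
    return projs_C[k % t](y)
```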
The convergence result of Algorithm 2 is stated in the next theorem.
Theorem 2. Let $\{x^k\}$ be the sequence generated by Algorithm 2, let assumptions (A1)–(A4) hold, and let $\rho_k$ satisfy $\inf_k \rho_k(4 - \rho_k) > 0$. Then $\{x^k\}$ converges weakly to a solution of the MSSFP (1).
Proof. 
First, we prove that $\{x^k\}$ is bounded. Let $x^* \in S$. By Lemma 1 (ii), we have
$\|x^{k+1} - x^*\|^2 = \|P_{C_{[k]}^k}(x^k - \alpha_k\nabla f_k(x^k) + e_3(x^k)) - x^*\|^2$
$\le \|x^k - \alpha_k\nabla f_k(x^k) + e_3(x^k) - x^*\|^2 - \|x^{k+1} - x^k + \alpha_k\nabla f_k(x^k) - e_3(x^k)\|^2$
$= \|x^k - x^*\|^2 - \|x^{k+1} - x^k\|^2 - 2\alpha_k\langle \nabla f_k(x^k), x^k - x^* \rangle - 2\alpha_k\langle \nabla f_k(x^k), x^{k+1} - x^k \rangle + 2\langle e_3(x^k), x^{k+1} - x^* \rangle.$ (42)
From Lemma 1 (iii), it follows that
$2\langle e_3(x^k), x^{k+1} - x^* \rangle \le \|e_3(x^k)\| + \|e_3(x^k)\|\,\|x^{k+1} - x^*\|^2.$ (43)
Similarly to (22), it holds that
$2\alpha_k\langle \nabla f_k(x^k), x^k - x^* \rangle \ge 2\alpha_k\sum_{j=1}^{r}\beta_j\|(I - P_{Q_j^k})Ax^k\|^2 = 4\alpha_k f_k(x^k).$ (44)
From Lemma 1 (iv), one has
$-2\alpha_k\langle \nabla f_k(x^k), x^{k+1} - x^k \rangle \le \alpha_k^2\|\nabla f_k(x^k)\|^2 + \|x^{k+1} - x^k\|^2.$ (45)
Substituting (43)–(45) into (42), we get
$\|x^{k+1} - x^*\|^2 \le \|x^k - x^*\|^2 + \alpha_k^2\|\nabla f_k(x^k)\|^2 - 4\alpha_k f_k(x^k) + \|e_3(x^k)\| + \|e_3(x^k)\|\,\|x^{k+1} - x^*\|^2$
$= \|x^k - x^*\|^2 - \rho_k(4 - \rho_k)\frac{f_k^2(x^k)}{\|\nabla f_k(x^k)\|^2} + \|e_3(x^k)\|\,\|x^{k+1} - x^*\|^2 + \|e_3(x^k)\|.$ (46)
Rearranging the above inequality, we obtain
$\|x^{k+1} - x^*\|^2 \le \frac{1}{1 - \|e_3(x^k)\|}\|x^k - x^*\|^2 - \frac{\rho_k(4 - \rho_k)}{1 - \|e_3(x^k)\|}\cdot\frac{f_k^2(x^k)}{\|\nabla f_k(x^k)\|^2} + \frac{\|e_3(x^k)\|}{1 - \|e_3(x^k)\|}.$ (47)
From assumption (A4), $\lim_{k\to\infty}\|e_3(x^k)\| = 0$, so we may assume without loss of generality that $\|e_3(x^k)\| \in [0, 1/2)$ for all $k \ge 0$; then
$1 \le \frac{1}{1 - \|e_3(x^k)\|} \le 1 + 2\|e_3(x^k)\| < 2.$ (48)
Thus (47) reduces to
$\|x^{k+1} - x^*\|^2 \le (1 + 2\|e_3(x^k)\|)\|x^k - x^*\|^2 + 2\|e_3(x^k)\|.$ (49)
Using Lemma 2, we deduce that $\lim_{k\to\infty}\|x^k - x^*\|^2$ exists and that $\{x^k\}_{k=0}^{\infty}$ is bounded.
From (47), we know that
$\frac{\rho_k(4 - \rho_k)}{1 - \|e_3(x^k)\|}\cdot\frac{f_k^2(x^k)}{\|\nabla f_k(x^k)\|^2} \le \frac{1}{1 - \|e_3(x^k)\|}\|x^k - x^*\|^2 - \|x^{k+1} - x^*\|^2 + \frac{\|e_3(x^k)\|}{1 - \|e_3(x^k)\|} \to 0,$ (50)
and then the condition $\inf_k \rho_k(4 - \rho_k) > 0$ ensures that $\frac{f_k^2(x^k)}{\|\nabla f_k(x^k)\|^2} \to 0$. Since $\nabla f_k$ is Lipschitz continuous and $\nabla f_k(x^*) = 0$, we get
$\|\nabla f_k(x^k)\|^2 = \|\nabla f_k(x^k) - \nabla f_k(x^*)\|^2 \le L^2\|x^k - x^*\|^2.$ (51)
This implies that $\{\|\nabla f_k(x^k)\|\}$ is bounded, and thus (50) yields $f_k(x^k) \to 0$. Hence for every $j = 1, 2, \ldots, r$, we have
$\|(I - P_{Q_j^k})Ax^k\| \to 0, \quad k \to \infty.$ (52)
Let $\{x^{k_n}\}$ be a subsequence of $\{x^k\}$ such that $x^{k_n} \rightharpoonup \hat{x} \in \omega_w(x^k)$, and let $\{k_{n_s}\}$ be a subsequence of $\{k_n\}$ such that $[k_{n_s}] = i$. As in the proof of Theorem 1, we obtain $c_i(\hat{x}) \le \liminf_{s\to\infty} c_i(x^{k_{n_s}}) \le 0$, i.e., $\hat{x} \in C = \bigcap_{i=1}^{t} C_i$, and (52) implies $q_j(A\hat{x}) \le \liminf_{n\to\infty} q_j(Ax^{k_n}) \le 0$, so $A\hat{x} \in Q = \bigcap_{j=1}^{r} Q_j$. Therefore $\hat{x} \in S$. Using Lemma 3, we conclude that the sequence $\{x^k\}$ converges weakly to a solution of the MSSFP (1). □

4. The Bounded Perturbation Resilience

4.1. Bounded Perturbation Resilience of the Algorithms

In this subsection, we consider the bounded perturbation versions of Algorithms 1 and 2. Based on Definition 3, in Algorithm 1 let $e_i(x^k) = 0$, $i = 1, 2$. The original algorithm is
$\bar{x}^k = P_{C_{[k]}^k}\big(x^k - \alpha_k\nabla f_k(x^k)\big), \quad x^{k+1} = P_{C_{[k]}^k}\big(x^k - \alpha_k\nabla f_k(\bar{x}^k)\big),$ (53)
where $\alpha_k$ is obtained by the Armijo-line search such that $\alpha_k\|\nabla f_k(x^k) - \nabla f_k(\bar{x}^k)\| \le \mu\|x^k - \bar{x}^k\|$ with $\mu \in (0, 1)$. The generated iteration sequence is weakly convergent, as proved as a special case in Section 3. The algorithm (53) with bounded perturbations is
$\bar{x}^k = P_{C_{[k]}^k}\big(x^k + \lambda_k\nu^k - \alpha_k\nabla f_k(x^k + \lambda_k\nu^k)\big), \quad x^{k+1} = P_{C_{[k]}^k}\big(x^k + \lambda_k\nu^k - \alpha_k\nabla f_k(\bar{x}^k)\big),$ (54)
where $[k] = k \bmod t$ and $\alpha_k = \gamma l^{m_k}$ with $m_k$ the smallest non-negative integer such that
$\alpha_k\|\nabla f_k(x^k + \lambda_k\nu^k) - \nabla f_k(\bar{x}^k)\| \le \mu\|x^k + \lambda_k\nu^k - \bar{x}^k\| \le \mu\big(\|x^k - \bar{x}^k\| + \lambda_k\|\nu^k\|\big).$ (55)
The following theorem shows that the algorithm (53) is bounded perturbation-resilient.
Theorem 3. Assume that (A1)–(A3) hold, the sequence $\{\nu^k\}_{k=0}^{\infty}$ is bounded, and the scalar sequence $\{\lambda_k\}_{k=0}^{\infty}$ satisfies $\lambda_k \ge 0$ and $\sum_{k=0}^{\infty}\lambda_k < +\infty$. Then the sequence $\{x^k\}_{k=0}^{\infty}$ generated by the iterative scheme (54) converges weakly to a solution of the MSSFP (1). Thus, the algorithm (53) is bounded-perturbation-resilient.
Proof. 
Let $x^* \in S$. Since $\sum_{k=0}^{\infty}\lambda_k < +\infty$ and the sequence $\{\nu^k\}_{k=0}^{\infty}$ is bounded, we have
$\sum_{k=0}^{\infty}\lambda_k\|\nu^k\| < +\infty,$ (56)
and thus
$\lim_{k\to\infty}\lambda_k\|\nu^k\| = 0.$ (57)
So we may assume without loss of generality that $\lambda_k\|\nu^k\| \in [0, (1 - \mu - \tau)/2)$, where $\tau \in (0, 1 - \mu)$. Replacing $e_2(x^k)$ by $\lambda_k\nu^k$ in (20) and using Lemma 1 (iii), we obtain
$\|x^{k+1} - x^*\|^2 \le \|x^k - x^*\|^2 - \|\bar{x}^k - x^k\|^2 - \|x^{k+1} - \bar{x}^k\|^2 - 2\alpha_k\langle \nabla f_k(\bar{x}^k), \bar{x}^k - x^* \rangle + 2\langle x^k - \bar{x}^k - \alpha_k\nabla f_k(\bar{x}^k), x^{k+1} - \bar{x}^k \rangle + 2\langle \lambda_k\nu^k, x^{k+1} - x^* \rangle$
$\le \|x^k - x^*\|^2 - \|\bar{x}^k - x^k\|^2 - \|x^{k+1} - \bar{x}^k\|^2 - 2\alpha_k\langle \nabla f_k(\bar{x}^k), \bar{x}^k - x^* \rangle + 2\langle x^k - \bar{x}^k - \alpha_k\nabla f_k(\bar{x}^k), x^{k+1} - \bar{x}^k \rangle + \lambda_k\|\nu^k\| + \lambda_k\|\nu^k\|\,\|x^{k+1} - x^*\|^2.$ (58)
Since $I - P_{Q_j^k}$ is firmly nonexpansive and $\nabla f_k(x^*) = 0$, Lemma 4 gives
$2\alpha_k\langle \nabla f_k(\bar{x}^k), \bar{x}^k - x^* \rangle \ge \frac{2\mu l}{L}\sum_{j=1}^{r}\beta_j\|(I - P_{Q_j^k})A\bar{x}^k\|^2.$ (59)
By the definition of $\bar{x}^k$ and Lemma 1 (i), we know that
$\langle \bar{x}^k - x^k + \alpha_k\nabla f_k(x^k + \lambda_k\nu^k) - \lambda_k\nu^k, x^{k+1} - \bar{x}^k \rangle \ge 0.$ (60)
Based on (55), the following holds:
$2\alpha_k\langle \nabla f_k(x^k + \lambda_k\nu^k) - \nabla f_k(\bar{x}^k), x^{k+1} - \bar{x}^k \rangle \le 2\alpha_k\|\nabla f_k(x^k + \lambda_k\nu^k) - \nabla f_k(\bar{x}^k)\|\,\|x^{k+1} - \bar{x}^k\|$
$\le 2\mu\|x^k - \bar{x}^k\|\,\|x^{k+1} - \bar{x}^k\| + 2\mu\lambda_k\|\nu^k\|\,\|x^{k+1} - \bar{x}^k\|$
$\le \mu\|x^k - \bar{x}^k\|^2 + (\mu + \lambda_k\|\nu^k\|)\|x^{k+1} - \bar{x}^k\|^2 + \mu^2\lambda_k\|\nu^k\|.$ (61)
Lemma 1 (iii) gives
$-2\langle \lambda_k\nu^k, x^{k+1} - \bar{x}^k \rangle \le \lambda_k\|\nu^k\| + \lambda_k\|\nu^k\|\,\|x^{k+1} - \bar{x}^k\|^2.$ (62)
Combining (60)–(62), the fifth term of (58) satisfies
$2\langle x^k - \bar{x}^k - \alpha_k\nabla f_k(\bar{x}^k), x^{k+1} - \bar{x}^k \rangle \le 2\langle x^k - \bar{x}^k - \alpha_k\nabla f_k(\bar{x}^k), x^{k+1} - \bar{x}^k \rangle + 2\langle \bar{x}^k - x^k + \alpha_k\nabla f_k(x^k + \lambda_k\nu^k) - \lambda_k\nu^k, x^{k+1} - \bar{x}^k \rangle$
$= 2\alpha_k\langle \nabla f_k(x^k + \lambda_k\nu^k) - \nabla f_k(\bar{x}^k), x^{k+1} - \bar{x}^k \rangle - 2\langle \lambda_k\nu^k, x^{k+1} - \bar{x}^k \rangle$
$\le \mu\|x^k - \bar{x}^k\|^2 + (\mu + 2\lambda_k\|\nu^k\|)\|x^{k+1} - \bar{x}^k\|^2 + (1 + \mu^2)\lambda_k\|\nu^k\|$
$\le \mu\|x^k - \bar{x}^k\|^2 + (1 - \tau)\|x^{k+1} - \bar{x}^k\|^2 + 2\lambda_k\|\nu^k\|.$ (63)
Substituting (59) and (63) into (58), we get
$\|x^{k+1} - x^*\|^2 \le \frac{1}{1 - \lambda_k\|\nu^k\|}\Big[\|x^k - x^*\|^2 + 3\lambda_k\|\nu^k\| - (1 - \mu)\|\bar{x}^k - x^k\|^2 - \tau\|x^{k+1} - \bar{x}^k\|^2 - \frac{2\mu l}{L}\sum_{j=1}^{r}\beta_j\|(I - P_{Q_j^k})A\bar{x}^k\|^2\Big].$ (64)
Since $\lambda_k\|\nu^k\| \in [0, (1 - \mu - \tau)/2) \subseteq [0, 1/2)$, we get
$1 \le \frac{1}{1 - \lambda_k\|\nu^k\|} \le 1 + 2\lambda_k\|\nu^k\| < 2.$ (65)
This, together with (64), shows that
$\|x^{k+1} - x^*\|^2 \le (1 + 2\lambda_k\|\nu^k\|)\Big[\|x^k - x^*\|^2 - (1 - \mu)\|x^k - \bar{x}^k\|^2 - \tau\|x^{k+1} - \bar{x}^k\|^2 - \frac{2\mu l}{L}\sum_{j=1}^{r}\beta_j\|(I - P_{Q_j^k})A\bar{x}^k\|^2\Big] + 6\lambda_k\|\nu^k\|$
$\le (1 + 2\lambda_k\|\nu^k\|)\|x^k - x^*\|^2 + 6\lambda_k\|\nu^k\|.$ (66)
Using Lemma 2, we deduce that $\lim_{k\to\infty}\|x^k - x^*\|^2$ exists and that $\{x^k\}_{k=0}^{\infty}$ is bounded.
From (64), it follows that
$(1 - \mu)\|\bar{x}^k - x^k\|^2 + \tau\|x^{k+1} - \bar{x}^k\|^2 + \frac{2\mu l}{L}\sum_{j=1}^{r}\beta_j\|(I - P_{Q_j^k})A\bar{x}^k\|^2 \le \|x^k - x^*\|^2 - (1 - \lambda_k\|\nu^k\|)\|x^{k+1} - x^*\|^2 + 3\lambda_k\|\nu^k\|.$ (67)
Thus $\lim_{k\to\infty}\|x^k - \bar{x}^k\| = 0$, $\lim_{k\to\infty}\|x^{k+1} - \bar{x}^k\| = 0$, and $\lim_{k\to\infty}\sum_{j=1}^{r}\beta_j\|(I - P_{Q_j^k})A\bar{x}^k\|^2 = 0$. Hence,
$\lim_{k\to\infty}\|x^{k+1} - x^k\| = 0,$ (68)
and for every $j = 1, 2, \ldots, r$,
$\lim_{k\to\infty}\|(I - P_{Q_j^k})A\bar{x}^k\| = 0.$ (69)
Arguing as in the proof of Theorem 1, we conclude that the sequence $\{x^k\}$ converges weakly to a solution of the MSSFP (1). □
Remark 1. When $t = 1$ and $r = 1$, the MSSFP reduces to the SFP; thus Theorems 1 and 3 guarantee that the algorithm (53) with Armijo-line search step size is bounded-perturbation-resilient for the SFP.
Remark 2. Replace $f_k(x)$ in the algorithm (53) by $g_k(x)$ and $\nabla f_k(x)$ by $\nabla g_k(x)$, where $g_k(x) = \frac{1}{2}\|(I - P_{Q_{[k]}^k})Ax\|^2$, $\nabla g_k(x) = A^*(I - P_{Q_{[k]}^k})Ax$, and $[k] = k \bmod r$. The corresponding algorithm is also bounded-perturbation-resilient.
Next, we prove that Algorithm 2 with self-adaptive step size is bounded-perturbation-resilient. Based on Definition 3, let $e_3(x^k) = 0$ in Algorithm 2. The original algorithm is
$x^{k+1} = P_{C_{[k]}^k}\big(x^k - \alpha_k\nabla f_k(x^k)\big),$ (70)
where $\alpha_k = \frac{\rho_k f_k(x^k)}{\|\nabla f_k(x^k)\|^2}$, $0 < \rho_k < 4$. The iterative sequence converges weakly to a solution of the MSSFP (1); see [26]. Consider the algorithm with bounded perturbations,
$x^{k+1} = P_{C_{[k]}^k}\big(x^k + \lambda_k\nu^k - \tilde{\alpha}_k\nabla f_k(x^k + \lambda_k\nu^k)\big),$ (71)
where $\tilde{\alpha}_k = \frac{\rho_k f_k(x^k + \lambda_k\nu^k)}{\|\nabla f_k(x^k + \lambda_k\nu^k)\|^2}$, $0 < \rho_k < 4$. The following theorem shows that the algorithm (70) is bounded-perturbation-resilient.
Theorem 4. Suppose that (A1)–(A3) hold, the sequence $\{\nu^k\}_{k=0}^{\infty}$ is bounded, the scalar sequence $\{\lambda_k\}_{k=0}^{\infty}$ satisfies $\lambda_k \ge 0$ and $\sum_{k=0}^{\infty}\lambda_k < +\infty$, and $\rho_k$ satisfies $\inf_k \rho_k(4 - \rho_k) > 0$. Then the sequence $\{x^k\}_{k=0}^{\infty}$ generated by the iterative scheme (71) converges weakly to a solution of the MSSFP (1). Thus, the algorithm (70) is bounded-perturbation-resilient.
Proof. 
Set $e_3(x^k) = \lambda_k\nu^k + \alpha_k\nabla f_k(x^k) - \tilde{\alpha}_k\nabla f_k(x^k + \lambda_k\nu^k)$; then (71) can be rewritten as $x^{k+1} = P_{C_{[k]}^k}(x^k - \alpha_k\nabla f_k(x^k) + e_3(x^k))$, which has the form of Algorithm 2. According to Theorem 2, it suffices to prove that $\sum_{k=0}^{\infty}\|e_3(x^k)\| < +\infty$. Since the mapping $x \mapsto \frac{\rho_k f_k(x)}{\|\nabla f_k(x)\|^2}\nabla f_k(x)$ is continuous, we write
$\frac{\rho_k f_k(x^k + \lambda_k\nu^k)}{\|\nabla f_k(x^k + \lambda_k\nu^k)\|^2}\nabla f_k(x^k + \lambda_k\nu^k) = \frac{\rho_k f_k(x^k)}{\|\nabla f_k(x^k)\|^2}\nabla f_k(x^k) + O(\lambda_k\|\nu^k\|),$ (72)
where $O(\lambda_k\|\nu^k\|)$ denotes an infinitesimal of the same order as $\lambda_k\|\nu^k\|$. From the expression of $e_3(x^k)$, we obtain
$\|e_3(x^k)\| \le \lambda_k\|\nu^k\| + \big\|\alpha_k\nabla f_k(x^k) - \tilde{\alpha}_k\nabla f_k(x^k + \lambda_k\nu^k)\big\| = \lambda_k\|\nu^k\| + O(\lambda_k\|\nu^k\|).$ (73)
Since $\{\lambda_k\|\nu^k\|\}$ is summable, $\{\|e_3(x^k)\|\}$ is also summable, i.e., $\sum_{k=0}^{\infty}\|e_3(x^k)\| < +\infty$. We therefore conclude that the sequence $\{x^k\}$ converges weakly to a solution of the MSSFP (1); i.e., the algorithm (70) is bounded-perturbation-resilient. □
Remark 3. When $t = 1$ and $r = 1$, the MSSFP reduces to the SFP; thus Theorems 2 and 4 guarantee that the algorithm (70) with self-adaptive step size is bounded-perturbation-resilient for the SFP.

4.2. Construction of the Inertial Algorithms by Bounded Perturbation Resilience

In this subsection, we consider algorithms with inertial terms as special cases of Algorithms 1 and 2. In Algorithm 1, letting $e_i(x^k) = \theta_k^{(i)}(x^k - x^{k-1})$, $i = 1, 2$, we obtain
$\bar{x}^k = P_{C_{[k]}^k}\big(x^k - \alpha_k\nabla f_k(x^k) + \theta_k^{(1)}(x^k - x^{k-1})\big), \quad x^{k+1} = P_{C_{[k]}^k}\big(x^k - \alpha_k\nabla f_k(\bar{x}^k) + \theta_k^{(2)}(x^k - x^{k-1})\big),$ (74)
where the step size $\alpha_k$ is obtained by the Armijo-line search and
$\theta_k^{(i)} = \begin{cases} \dfrac{\lambda_k^{(i)}}{\|x^k - x^{k-1}\|}, & \|x^k - x^{k-1}\| > 1, \\ \lambda_k^{(i)}, & \|x^k - x^{k-1}\| \le 1, \end{cases} \quad i = 1, 2.$ (75)
Theorem 5. Assume that (A1)–(A3) hold and that the sequences $\{\lambda_k^{(i)}\}_{k=0}^{\infty}$ satisfy $\lambda_k^{(i)} \ge 0$ and $\sum_{k=0}^{\infty}\lambda_k^{(i)} < +\infty$, $i = 1, 2$. Then the sequence $\{x^k\}_{k=0}^{\infty}$ generated by the iterative scheme (74) converges weakly to a solution of the MSSFP (1).
Proof. 
Let $e_i(x^k) = \lambda_k^{(i)}\nu^k$, $i = 1, 2$, where
$\nu^k = \begin{cases} \dfrac{x^k - x^{k-1}}{\|x^k - x^{k-1}\|}, & \|x^k - x^{k-1}\| > 1, \\ x^k - x^{k-1}, & \|x^k - x^{k-1}\| \le 1. \end{cases}$ (76)
Then $\|\nu^k\| \le 1$ and $\{e_i(x^k)\}_{k=0}^{\infty}$ satisfies assumption (A4). According to Theorem 1, we conclude that the sequence $\{x^k\}$ converges weakly to a solution of the MSSFP (1). □
Consider also the algorithm with inertial bounded perturbations,
$\bar{x}^k = P_{C_{[k]}^k}\big(x^k + \theta_k(x^k - x^{k-1}) - \alpha_k\nabla f_k(x^k + \theta_k(x^k - x^{k-1}))\big), \quad x^{k+1} = P_{C_{[k]}^k}\big(x^k + \theta_k(x^k - x^{k-1}) - \alpha_k\nabla f_k(\bar{x}^k)\big),$ (77)
where
$\theta_k = \begin{cases} \dfrac{\lambda_k}{\|x^k - x^{k-1}\|}, & \|x^k - x^{k-1}\| > 1, \\ \lambda_k, & \|x^k - x^{k-1}\| \le 1. \end{cases}$ (78)
According to Theorem 3, the sequence $\{x^k\}$ converges weakly to a solution of the MSSFP (1). More relevant results can be found in [27].
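In code, the inertial weight (78) is a one-liner. The sketch below (our illustration) builds the inertial point $x^k + \theta_k(x^k - x^{k-1})$ while keeping the perturbation bounded, so that summability of $\{\lambda_k\}$ carries over to the perturbation sequence:

```python
import numpy as np

def inertial_point(x, x_prev, lam):
    # theta_k = lam_k / ||x_k - x_{k-1}|| if ||x_k - x_{k-1}|| > 1, else lam_k,
    # so that ||theta_k (x_k - x_{k-1})|| <= lam_k, a summable perturbation.
    d = x - x_prev
    nd = np.linalg.norm(d)
    theta = lam / nd if nd > 1 else lam
    return x + theta * d, theta
```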
Similarly, we obtain Theorem 6, which asserts that Algorithm 2 with inertial perturbation is weakly convergent.
Theorem 6. Assume that (A1)–(A3) hold, the scalar sequence $\{\lambda_k\}_{k=0}^{\infty}$ satisfies $\lambda_k \ge 0$ and $\sum_{k=0}^{\infty}\lambda_k < +\infty$, and $\rho_k$ satisfies $\inf_k \rho_k(4 - \rho_k) > 0$. Then the sequence $\{x^k\}_{k=0}^{\infty}$ generated by either of the following iterative schemes,
$x^{k+1} = P_{C_{[k]}^k}\big(x^k - \alpha_k\nabla f_k(x^k) + \theta_k(x^k - x^{k-1})\big),$ (79)
$x^{k+1} = P_{C_{[k]}^k}\big(x^k - \alpha_k\nabla f_k(x^k + \theta_k(x^k - x^{k-1})) + \theta_k(x^k - x^{k-1})\big),$ (80)
where $\theta_k$ is as in (78) and $\alpha_k$ is the self-adaptive step size of Algorithm 2, converges weakly to a solution of the MSSFP (1).
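Putting the pieces together, a full iteration of the inertial scheme (80) could be sketched as follows (our reading: the self-adaptive step is evaluated at the inertial point; again $r = 1$ and closed-form projections are assumed):

```python
import numpy as np

def hp2_step(k, x, x_prev, lam, A, projs_C, proj_Qk, rho=2.0):
    # Scheme (80): x_{k+1} = P_{C_{[k]}^k}(x_k - alpha_k grad f_k(y_k) + theta_k(x_k - x_{k-1})),
    # where y_k = x_k + theta_k (x_k - x_{k-1}) is the inertial point.
    d = x - x_prev
    nd = np.linalg.norm(d)
    theta = lam / nd if nd > 1 else lam   # inertial weight (78)
    y = x + theta * d
    Ay = A @ y
    r = Ay - proj_Qk(Ay)
    grad = A.T @ r                        # grad f_k(y_k) for r = 1, beta_1 = 1
    g2 = grad @ grad
    alpha = rho * (0.5 * (r @ r)) / g2 if g2 > 0 else 0.0
    return projs_C[k % len(projs_C)](x - alpha * grad + theta * d)
```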

5. Numerical Experiments

In this section, we compare the asymptotic behavior of the algorithms (53) (Chen et al. [9]), (77) (Algorithm 1), (70) (Wen et al. [26]), and (80) (Algorithm 2), denoted by NP1, HP1, NP2, and HP2, respectively. For convenience, we denote $e_0 = (0, 0, \ldots, 0)^T$ and $e_1 = (1, 1, \ldots, 1)^T$. The codes are written in Matlab 2016a and run on an Intel(R) Core(TM) i7-8550U CPU @ 1.80 GHz 2.00 GHz with 8.00 GB RAM. We present two kinds of experiments: one is a real-life problem, the LASSO problem; the other consists of numerical simulations on three examples of the MSSFP.

5.1. LASSO Problem

Let us consider the following LASSO problem [28]:
$\min_{x \in \mathbb{R}^n} \frac{1}{2}\|Ax - b\|_2^2 \quad \text{subject to} \quad \|x\|_1 \le \varepsilon,$
where $A \in \mathbb{R}^{m \times n}$, $m < n$, $b \in \mathbb{R}^m$, and $\varepsilon > 0$. The matrix $A$ is generated from a standard normal distribution with mean zero and unit variance. The true sparse signal $x^*$ is generated from the uniform distribution on the interval $[-2, 2]$ with $p$ nonzero entries at random positions, while the rest are kept zero. The sample data are $b = Ax^*$. For the corresponding MSSFP, let $r = t = 1$, $C = \{x \mid \|x\|_1 \le \varepsilon\}$, and $Q = \{b\}$. The objective function is $f(x) = \frac{1}{2}\|Ax - b\|_2^2$.
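In this experiment the set $C$ is the $\ell_1$-ball, whose Euclidean projection admits an efficient exact routine. A standard sort-based sketch (our illustration, following the well-known $\ell_1$-ball projection scheme of Duchi et al.; not code from the paper) is:

```python
import numpy as np

def project_l1_ball(x, eps):
    # Euclidean projection of x onto {z : ||z||_1 <= eps} via soft-thresholding
    # with the exact threshold found by sorting |x|.
    if np.abs(x).sum() <= eps:
        return x.copy()
    u = np.sort(np.abs(x))[::-1]                  # |x| sorted in decreasing order
    css = np.cumsum(u)
    idx = np.arange(1, x.size + 1)
    k = np.nonzero(u * idx > css - eps)[0][-1]    # largest index kept in the support
    tau = (css[k] - eps) / (k + 1)                # soft-threshold level
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)
```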
We report the final error between the reconstructed signal and the true signal. We take $\|x^k - x^*\| < 10^{-4}$ as the stopping criterion, where $x^*$ is the true signal. We compare the algorithms NP1, HP1, NP2, and HP2 with Yang's algorithm [3]. We let $\alpha_k = \gamma l^{m_k}$ for all $k \ge 1$ with $\gamma = 1$, $l = \frac{1}{2}$, $\mu = \frac{1}{2}$, $\theta_k = \frac{1}{4}$, $\rho_k = 0.1$, and $\alpha_k = 0.1\frac{1}{\|A\|^2}$ in Yang's algorithm [3].
The results are reported in Table 1. Figure 1 shows the objective function value versus the iteration number when $m = 240$, $n = 1024$, $p = 30$.
From Table 1 and Figure 1, we know that the inertial perturbation can improve the convergence of the algorithms and that the algorithms with Armijo-line search or self-adaptive step size perform better than Yang’s algorithm [3].
We also measure the restoration accuracy by the mean squared error, $\mathrm{MSE} = \frac{1}{n}\|x^k - x^*\|^2$, where $x^k$ is the estimated signal of $x^*$. Figure 2 shows a comparison of the accuracy of the recovered signals when $m = 1440$, $n = 6144$, $p = 180$. Given the same number of iterations, the signals recovered by the algorithms in this paper outperform the one produced by Yang's algorithm; NP1 needs more CPU time and attains lower accuracy; the algorithms with self-adaptive step size outperform those with Armijo-line search step size in CPU time; and imposing the inertial perturbation accelerates both the convergence rate and the accuracy of signal recovery.

5.2. Three MSSFP Problems

Example 1 ([5]). Take $H_1 = H_2 = \mathbb{R}^3$, $r = t = 2$, $\beta_1 = \beta_2 = \frac{1}{2}$, and $\alpha_k = \gamma l^{m_k}$ for all $k \ge 1$ with $\gamma = 1$, $l = \frac{1}{2}$, $\mu = \frac{1}{2}$, $\theta_k = \frac{1}{4}$, $\rho_k = 0.1$. Define
$C_1 = \{x = (x_1, x_2, x_3)^T \in \mathbb{R}^3 \mid x_1 + x_2^2 + 2x_3 \le 0\},$
$C_2 = \{x = (x_1, x_2, x_3)^T \in \mathbb{R}^3 \mid \tfrac{x_1^2}{16} + \tfrac{x_2^2}{9} + \tfrac{x_3^2}{4} - 1 \le 0\},$
$Q_1 = \{x = (x_1, x_2, x_3)^T \in \mathbb{R}^3 \mid x_1^2 + x_2 - x_3 \le 0\},$
$Q_2 = \{x = (x_1, x_2, x_3)^T \in \mathbb{R}^3 \mid \tfrac{x_1^2}{4} + \tfrac{x_2^2}{4} + \tfrac{x_3^2}{9} - 1 \le 0\},$
and
$A = \begin{pmatrix} 2 & 1 & 3 \\ 4 & 2 & 5 \\ 2 & 0 & 2 \end{pmatrix}.$
The underlying MSSFP is to find $x \in C_1 \cap C_2$ such that $Ax \in Q_1 \cap Q_2$.
We use the inertial perturbation to accelerate the convergence of the algorithms. For ease of comparison, the initial values of the two inertial algorithms are set to be the same, and we let $x^0 = x^1$. We use $E_k = \|x^{k+1} - x^k\|/\|x^k\|$ to measure the error of the $k$-th iterate; if $E_k < 10^{-5}$, the iteration stops. We compare our proposed methods HP1 and HP2 with NP1, NP2, and Liu and Tang's Algorithm 2 in [29], which has the form $x^{k+1} = U_{[k]}\big(x^k - \alpha_k\sum_{j=1}^{r}\beta_j A^*(I - T_j)Ax^k\big)$ with $\alpha_k \in (0, \frac{2}{\|A\|^2})$. We take $U_{[k]} = P_{C_{[k]}^k}$, $T_j = P_{Q_j^k}$, and $\alpha_k = 0.2\frac{1}{\|A\|^2}$; this algorithm is referred to as LT alg.
The convergence results and the CPU time of the five algorithms are shown in Table 2 and Figure 3. The errors are shown in Figure 4.
The results show that (80) (HP2) outperforms (77) (HP1) for certain initial values. The main reason may be that the self-adaptive step size is more efficient than the one determined by the Armijo-line search. The comparison of the five algorithms and their convergence behavior show that, in most cases, the convergence rate can be improved by adding an appropriate perturbation.
Example 2. Take $H_1 = \mathbb{R}^n$, $H_2 = \mathbb{R}^m$, and $A = (a_{ij})_{m \times n}$ with $a_{ij} \in (0, 1)$ generated randomly. Let $C_i = \{x \in \mathbb{R}^n \mid \|x - d_i\|_2^2 \le r_i^2\}$, $i = 1, 2, \ldots, t$, and $Q_j = \{y \in \mathbb{R}^m \mid \|y - l_j\|_1^2 \le h_j^2\}$, $j = 1, 2, \ldots, r$, where $d_i \in [e_0, 10e_1]$, $r_i \in [40, 60]$, $l_j \in [e_0, e_1]$, $h_j \in [10, 20]$ are all generated randomly. Set $\beta_1 = \beta_2 = \cdots = \beta_r = \frac{1}{r}$ and $\alpha_k = \gamma l^{m_k}$ for all $k \ge 1$ with $\gamma = 1$, $l = \frac{1}{2}$, $\mu = \frac{1}{2}$, $\theta_k = \frac{1}{4}$, $\rho_k = 0.001$.
We use the inertial perturbation to accelerate the convergence of the algorithms. If $E_k = \|x^{k+1} - x^k\|/\|x^k\| < 10^{-4}$, the iteration stops. Let $x^0 = x^1$. We choose arbitrarily three different initial points and record the iteration counts of the four algorithms for different values of $m$, $n$, $r$, $t$. See Table 3 for details.
In this example, we found that the algorithm with Armijo-line search needs fewer iteration steps in relatively low-dimensional spaces, while in high-dimensional spaces the algorithm with self-adaptive step size performs better in CPU time. In general, the convergence of both algorithms in this paper is improved by the inertial perturbations.
Example 3 ([30]). Take $H_1 = \mathbb{R}^n$, $H_2 = \mathbb{R}^m$, and $A = (a_{ij})_{m \times n}$ with $a_{ij} \in (0, 1)$ generated randomly. Let $C_i = \{x \in \mathbb{R}^n \mid \|x - d_i\|_2^2 \le r_i^2\}$, $i = 1, 2, \ldots, t$, and $Q_j = \{y \in \mathbb{R}^m \mid y^T B_j y + b_j^T y + c_j \le 0\}$, $j = 1, 2, \ldots, r$, where $d_i \in (6e_0, 16e_1)$, $r_i \in (100, 120)$, $b_j \in (-30e_1, 20e_1)$, $c_j \in (50, 60)$, and all elements of the matrix $B_j$ are generated randomly in the interval $(2, 10)$. Set $\beta_1 = \beta_2 = \cdots = \beta_r = \frac{1}{r}$ and $\alpha_k = \gamma l^{m_k}$ for all $k \ge 1$ with $\gamma = 1$, $l = \frac{1}{2}$, $\mu = \frac{1}{2}$, $\theta_k = \frac{1}{4}$, $\rho_k = 0.1$.
We again use the inertial perturbation to accelerate the convergence. The stopping criterion is $E_k = \frac{1}{2}\sum_{i=1}^{t}\|x^k - P_{C_i^k}x^k\|^2 + \frac{1}{2}\sum_{j=1}^{r}\|Ax^k - P_{Q_j^k}Ax^k\|^2 < 10^{-4}$. Let $x^0 = x^1$. The details are shown in Table 4.
We can see from Table 4 that the convergence rate of both algorithms is improved by the inertial perturbations. In most cases, the algorithm with Armijo-line search step size needs fewer iterations, whereas the algorithm with self-adaptive step size needs less CPU time.

6. Conclusions

In this paper, we present two modified relaxed CQ algorithms with different kinds of step size for the MSSFP and discuss their bounded perturbation resilience. Treating appropriate inertial terms as bounded perturbations, we construct inertial acceleration versions of the corresponding algorithms. For the real-life LASSO problem and three experimental examples, we numerically compare the performance of the algorithms with and without inertial perturbation, and we also compare the proposed algorithms with Yang's algorithm [3] and Liu and Tang's algorithm [29]. The results show the efficiency of the proposed algorithms and the validity of the inertial perturbation.

Author Contributions

Both the authors contributed equally to this work. Both authors have read and agreed to the published version of the manuscript.

Funding

The authors were supported by the National Natural Science Foundation of China under Grants No. 61503385 and No. 11705279, and by the Fundamental Research Funds for the Central Universities under Grant No. 3122018L004.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would also like to thank the editors and the reviewers for their constructive suggestions and comments, which greatly improved the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Byrne, C. Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18, 441–453.
2. Byrne, C. A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20, 103–120.
3. Yang, Q.Z. The relaxed CQ algorithm solving the split feasibility problem. Inverse Probl. 2004, 20, 1261–1266.
4. Censor, Y.; Elfving, T.; Kopf, N.; Bortfeld, T. The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 2005, 21, 2071–2084.
5. He, S.N.; Zhao, Z.Y.; Luo, B. A relaxed self-adaptive CQ algorithm for the multiple-sets split feasibility problem. Optimization 2015, 64, 1907–1918.
6. Yang, Q.Z. On variable-step relaxed projection algorithm for variational inequalities. J. Math. Anal. Appl. 2005, 302, 166–179.
7. López, G.; Martín-Márquez, V.; Wang, F.H.; Xu, H.K. Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 2012, 28, 374–389.
8. Gibali, A.; Liu, L.W.; Tang, Y.C. Note on the modified relaxation CQ algorithm for the split feasibility problem. Optim. Lett. 2018, 12, 817–830.
9. Chen, Y.; Guo, Y.; Yu, Y.; Chen, R. Self-adaptive and relaxed self-adaptive projection methods for solving the multiple-set split feasibility problem. Abstr. Appl. Anal. 2012, 2012, 958040.
10. Xu, H.K. Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010, 26, 105018.
11. Xu, H.K. A variable Krasnosel'skii-Mann algorithm and the multiple-set split feasibility problem. Inverse Probl. 2006, 22, 2021–2034.
12. Yao, Y.H.; Postolache, M.; Liou, Y.C. Strong convergence of a self-adaptive method for the split feasibility problem. Fixed Point Theory Appl. 2013, 2013, 201.
13. Guo, Y.N.; Cui, W.; Guo, Y.S. Perturbation resilience of proximal gradient algorithm for composite objectives. J. Nonlinear Sci. Appl. 2017, 10, 5566–5575.
14. Guo, Y.N.; Cui, W. Strong convergence and bounded perturbation resilience of a modified proximal gradient algorithm. J. Inequalities Appl. 2018, 2018, 103.
15. Pakkaranang, N.; Kumam, P.; Berinde, V.; Suleiman, Y.I. Superiorization methodology and perturbation resilience of inertial proximal gradient algorithm with application to signal recovery. J. Supercomput. 2020, 76, 9456–9477.
16. Jin, W.M.; Censor, Y.; Jiang, M. Bounded perturbation resilience of projected scaled gradient methods. Comput. Optim. Appl. 2016, 63, 365–392.
17. Xu, H.K. Bound perturbation resilience and superiorization techniques for the projected scaled gradient method. Inverse Probl. 2017, 33, 044008.
18. Dong, Q.L.; Gibali, A.; Jiang, D.; Tang, Y. Bounded perturbation resilience of extragradient-type methods and their applications. J. Inequalities Appl. 2017, 2017, 280.
19. Duan, P.C.; Zheng, X.B. Bounded perturbation resilience of generalized viscosity iterative algorithm for split variational inclusion problem. Appl. Set-Valued Anal. Optim. 2020, 2, 49–61.
20. Censor, Y.; Zaslavski, A.J. Convergence and perturbation resilience of dynamic string-averaging projection methods. Comput. Optim. Appl. 2013, 54, 65–76.
21. Kaewyong, N.; Sitthithakerngkiet, K. A self-adaptive algorithm for the common solution of the split minimization problem and the fixed point problem. Axioms 2021, 10, 109.
22. Shehu, Y.; Dong, Q.L.; Liu, L.L. Global and linear convergence of alternated inertial methods for split feasibility problems. Rev. R. Acad. Cienc. Exactas Fís. Nat. Ser. A Mat. 2021, 115, 53.
23. Li, H.Y.; Wu, Y.L.; Wang, F.H. New inertial relaxed CQ algorithms for solving split feasibility problems in Hilbert spaces. J. Math. 2021, 2021, 6624509.
24. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Space; Springer: London, UK, 2011.
25. Bauschke, H.H.; Combettes, P.L. A weak-to-strong convergence principle for Fejér-monotone methods in Hilbert spaces. Math. Oper. Res. 2001, 26, 248–264.
26. Wen, M.; Peng, J.G.; Tang, Y.C. A cyclic and simultaneous iterative method for solving the multiple-sets split feasibility problem. J. Optim. Theory Appl. 2015, 166, 844–860.
27. Sun, J.; Dang, Y.Z.; Xu, H.L. Inertial accelerated algorithms for solving a split feasibility problem. J. Ind. Manag. Optim. 2017, 13, 1383–1394.
28. Tang, Y.C.; Zhu, C.X.; Yu, H. Iterative methods for solving the multiple-sets split feasibility problem with splitting self-adaptive step size. Fixed Point Theory Appl. 2015, 2015, 178.
29. Liu, L.W.; Tang, Y.C. Several iterative algorithms for solving the split common fixed point problem of directed operators with applications. Optimization 2016, 65, 53–65.
30. Zhao, J.L.; Yang, Q.Z. Self-adaptive projection methods for the multiple-sets split feasibility problem. Inverse Probl. 2011, 27, 035009.
Figure 1. The objective function value versus the iteration number.
Figure 2. Comparison of signal processing.
Figure 3. Comparison of CPU times of the algorithms in Example 1: (a) Comparison for choice 1. (b) Comparison for choice 2. (c) Comparison for choice 3. (d) Comparison for choice 4.
Figure 4. Comparison of iterations of the algorithms in Example 1: (a) Comparison for choice 1. (b) Comparison for choice 2. (c) Comparison for choice 3. (d) Comparison for choice 4.
Table 1. Comparison of algorithms with different step size.

| m | n | p | | NP1 | HP1 | NP2 | HP2 | Yang's Alg. |
|-----|------|----|-------------|----------|---------|---------|---------|----------|
| 120 | 512 | 15 | No. of Iter | 1588 | 1119 | 10,004 | 7426 | 10,944 |
| | | | cpu (time) | 0.8560 | 0.6906 | 0.6675 | 0.4991 | 0.7011 |
| 240 | 1024 | 30 | No. of Iter | 1909 | 1354 | 10,726 | 7969 | 13,443 |
| | | | cpu (time) | 2.1224 | 1.4836 | 1.6236 | 1.2011 | 1.9789 |
| 480 | 2048 | 60 | No. of Iter | 2972 | 2117 | 17,338 | 12,897 | 22,118 |
| | | | cpu (time) | 22.5140 | 14.8782 | 15.4729 | 11.1033 | 19.3376 |
| 720 | 3072 | 90 | No. of Iter | 3955 | 2872 | 21,853 | 16,244 | 28,004 |
| | | | cpu (time) | 134.9243 | 82.6705 | 79.1640 | 57.1230 | 110.0482 |
Table 2. Numerical results of five algorithms for Example 1.

| Choice | | NP1 | HP1 | NP2 | HP2 | LT Alg. |
|---------------------------------------|-------------|--------|--------|--------|--------|--------|
| 1. $x^0 = (0.1, 0.1, 0.1)^T$ | No. of Iter | 60 | 43 | 219 | 162 | 420 |
| | cpu (time) | 0.0511 | 0.0450 | 0.0362 | 0.0347 | 0.0879 |
| 2. $x^0 = (0.4, 0.555, 0.888)^T$ | No. of Iter | 139 | 85 | 195 | 143 | 178 |
| | cpu (time) | 0.0669 | 0.0509 | 0.0342 | 0.0318 | 0.0552 |
| 3. $x^0 = (1, 2, 3)^T$ | No. of Iter | 142 | 89 | 195 | 141 | 178 |
| | cpu (time) | 0.0694 | 0.0490 | 0.0352 | 0.0339 | 0.0551 |
| 4. $x^0 = (0.123, 0.745, 0.789)^T$ | No. of Iter | 149 | 85 | 108 | 77 | 526 |
| | cpu (time) | 0.0590 | 0.0448 | 0.0295 | 0.0268 | 0.1018 |
Table 3. Numerical results of the algorithms with and without perturbation for Example 2.

| Initial Point | | NP1 | HP1 | NP2 | HP2 |
|------------------------------|-------------|---------|---------|--------|--------|
| $r = t = 10$, $m = 15$, $n = 20$ | | | | | |
| $x^0 = x^1 = 2e_1$ | No. of Iter | 49 | 36 | 1281 | 999 |
| | cpu (time) | 0.1031 | 0.0669 | 0.1480 | 0.1494 |
| $x^0 = x^1 = 50e_1$ | No. of Iter | 187 | 121 | 2297 | 1669 |
| | cpu (time) | 0.2485 | 0.1536 | 0.2887 | 0.1868 |
| $x^0 = x^1 = 100\,\mathrm{rand}(n,1)$ | No. of Iter | 312 | 225 | 2357 | 1811 |
| | cpu (time) | 0.4202 | 0.2908 | 0.2830 | 0.2159 |
| $r = t = 10$, $m = n = 40$ | | | | | |
| $x^0 = x^1 = 2e_1$ | No. of Iter | 89 | 66 | 956 | 732 |
| | cpu (time) | 0.3140 | 0.1777 | 0.1534 | 0.1318 |
| $x^0 = x^1 = 50e_1$ | No. of Iter | 1710 | 1583 | 1301 | 1061 |
| | cpu (time) | 4.0390 | 4.0357 | 0.1860 | 0.1555 |
| $x^0 = x^1 = 100\,\mathrm{rand}(n,1)$ | No. of Iter | 1674 | 1658 | 1487 | 1219 |
| | cpu (time) | 4.6581 | 3.7752 | 0.2065 | 0.1762 |
| $r = t = 30$, $m = n = 40$ | | | | | |
| $x^0 = x^1 = 2e_1$ | No. of Iter | 136 | 103 | 985 | 753 |
| | cpu (time) | 0.6912 | 0.5174 | 0.3312 | 0.2515 |
| $x^0 = x^1 = 50e_1$ | No. of Iter | 1612 | 1411 | 1258 | 968 |
| | cpu (time) | 12.3437 | 11.7164 | 0.3991 | 0.3127 |
| $x^0 = x^1 = 100\,\mathrm{rand}(n,1)$ | No. of Iter | 1541 | 1133 | 1643 | 1012 |
| | cpu (time) | 11.8273 | 7.4646 | 1.0363 | 0.2965 |
Table 4. Results of Armijo-line search and self-adaptive algorithms for Example 3.

| Initial Point | | NP1 | HP1 | NP2 | HP2 |
|------------------------------|-------------|---------|---------|---------|---------|
| $r = t = 10$, $m = n = 20$ | | | | | |
| $x^0 = x^1 = e_1$ | No. of Iter | 477 | 357 | 2268 | 1700 |
| | cpu (time) | 1.2453 | 0.9267 | 1.0516 | 0.8038 |
| $x^0 = x^1 = 50e_1$ | No. of Iter | 757 | 564 | 3291 | 2470 |
| | cpu (time) | 1.6205 | 1.2805 | 1.5623 | 1.1023 |
| $x^0 = x^1 = 100\,\mathrm{rand}(n,1)$ | No. of Iter | 996 | 737 | 4323 | 3231 |
| | cpu (time) | 1.9087 | 1.4396 | 1.9696 | 1.4493 |
| $r = t = 20$, $m = 40$, $n = 50$ | | | | | |
| $x^0 = x^1 = e_1$ | No. of Iter | 1256 | 941 | 5336 | 4001 |
| | cpu (time) | 12.1310 | 4.0061 | 5.9165 | 4.0919 |
| $x^0 = x^1 = 50e_1$ | No. of Iter | 1492 | 1105 | 6917 | 5221 |
| | cpu (time) | 12.6430 | 8.2382 | 12.9631 | 9.4880 |
| $x^0 = x^1 = 100\,\mathrm{rand}(n,1)$ | No. of Iter | 2101 | 1835 | 9936 | 9226 |
| | cpu (time) | 16.4070 | 13.2868 | 14.9611 | 12.8079 |
| $r = t = 40$, $m = n = 60$ | | | | | |
| $x^0 = x^1 = e_1$ | No. of Iter | 1758 | 1317 | 8328 | 6245 |
| | cpu (time) | 48.2570 | 38.0668 | 30.6759 | 23.4267 |
| $x^0 = x^1 = 50e_1$ | No. of Iter | 2503 | 1777 | 12,905 | 8677 |
| | cpu (time) | 59.2127 | 44.7915 | 49.5823 | 32.6868 |
| $x^0 = x^1 = 100\,\mathrm{rand}(n,1)$ | No. of Iter | 2274 | 1474 | 18,781 | 13,952 |
| | cpu (time) | 58.2569 | 38.1917 | 72.6622 | 54.9814 |