Article

A Dynamically Adjusted Subspace Gradient Method and Its Application in Image Restoration

1 Guangxi (ASEAN) Financial Research Center, Guangxi University of Finance and Economics, Nanning 530003, China
2 College of Mathematics and Information Science, Guangxi University, Nanning 530005, China
3 School of Business Administration, Guangxi University of Finance and Economics, Nanning 530003, China
4 Guangxi Key Laboratory Cultivation Base of Cross-Border E-Commerce Intelligent Information Processing, Guangxi University of Finance and Economics, Nanning 530003, China
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Submission received: 8 December 2021 / Revised: 14 December 2021 / Accepted: 15 December 2021 / Published: 20 December 2021

Abstract

In this paper, a new subspace gradient method is proposed in which the search direction is determined by solving an approximate quadratic model, where a simple symmetric matrix is used to estimate the Hessian in a three-dimensional subspace. The resulting algorithm automatically adjusts the search direction according to conditions checked at each iteration. Under some mild assumptions and a generalized nonmonotone line search, we establish the global convergence of the algorithm for general functions and, furthermore, prove R-linear convergence for uniformly convex functions. Numerical results on both standard test functions and image restoration problems show that the proposed algorithm is efficient.

1. Introduction

The Conjugate Gradient (CG) method is dedicated to solving the unconstrained optimization problem
$$\min_{x\in\mathbb{R}^n} f(x), \tag{1}$$
where $f:\mathbb{R}^n\to\mathbb{R}$ is smooth and the gradient of $f(x)$ at $x_k$ is denoted by $g_k$. Its simple form and low storage requirements make the CG method a powerful tool for dealing with problem (1). It starts at an initial point $x_0$ and generates an iterative sequence $\{x_k\}$ of the form
$$x_{k+1} = x_k + \alpha_k d_k, \quad k \ge 0, \tag{2}$$
that is, $x_k$ moves one step of length $\alpha_k$ along the search direction $d_k$ to reach the $(k+1)$-th iterate $x_{k+1}$.
The direction $d_k$ is usually defined as
$$d_k = \begin{cases} -g_k, & \text{if } k = 0,\\ -g_k + \beta_k d_{k-1}, & \text{if } k \ge 1, \end{cases} \tag{3}$$
where $\beta_k$ is the CG parameter. Different choices of $\beta_k$ correspond to different CG methods, such as Polak-Ribiere-Polyak (PRP) [1], Hestenes and Stiefel (HS) [2], Liu and Storey (LS) [3], Fletcher and Reeves (FR) [4], Dai and Yuan (DY) [5], and the conjugate descent (CD) method [6]. More related research on and progress of the CG method can be found in the literature [7,8,9,10].
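To make the framework concrete, the following is a minimal Python sketch of the classical CG iteration (2)-(3). It illustrates only the general scheme, not the method proposed in this paper; the gradient routine `grad_f` and the `line_search` routine are user-supplied placeholders.

```python
import numpy as np

def beta_PRP(g_new, g_old, d_old):
    # Polak-Ribiere-Polyak parameter
    return g_new @ (g_new - g_old) / (g_old @ g_old)

def beta_FR(g_new, g_old, d_old):
    # Fletcher-Reeves parameter
    return (g_new @ g_new) / (g_old @ g_old)

def cg(x0, grad_f, line_search, beta_rule=beta_PRP, tol=1e-6, max_iter=1000):
    x, g = x0.copy(), grad_f(x0)
    d = -g                                        # d_0 = -g_0
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        alpha = line_search(x, d)                 # any rule producing a step size
        x_new = x + alpha * d                     # iteration (2)
        g_new = grad_f(x_new)
        d = -g_new + beta_rule(g_new, g, d) * d   # direction update (3)
        x, g = x_new, g_new
    return x
```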
The step size $\alpha_k$ can be obtained by different rules. In this paper, we focus on the following generalized line search, which has been shown to be very efficient for CG methods in [11]:
$$f(x_k + \alpha_k d_k) \le C_k + \delta \alpha_k g_k^T d_k, \qquad g_{k+1}^T d_k \ge \sigma g_k^T d_k, \tag{4}$$
where $C_k$ is defined as follows:
$$C_0 = f(x_0), \quad Q_0 = 1, \quad Q_{k+1} = Q_k + 1, \quad C_{k+1} = \frac{Q_k C_k + f_{k+1}}{Q_{k+1}}. \tag{5}$$
From Equation (5), we can see that $C_k$ is a convex combination of the function values $f(x_0),\dots,f(x_k)$. The generalized line search is nonmonotone, which facilitates establishing the global convergence of the algorithm under milder conditions.
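As an illustration, the reference value $C_k$ of (5) and the acceptance test (4) can be coded in a few lines. This is only a sketch (the bracketing logic that actually locates an acceptable step is omitted); the default values $\delta = 0.1$ and $\sigma = 0.8$ are the ones used in the experiments of Section 4.

```python
def update_reference(C_k, Q_k, f_next):
    # Nonmonotone reference value update (5)
    Q_next = Q_k + 1.0
    C_next = (Q_k * C_k + f_next) / Q_next   # convex combination of f_0, ..., f_{k+1}
    return C_next, Q_next

def satisfies_generalized_wolfe(f_next, gTd, gnextTd, C_k, alpha, delta=0.1, sigma=0.8):
    # Acceptance test (4): nonmonotone decrease plus curvature condition
    armijo_like = f_next <= C_k + delta * alpha * gTd
    curvature   = gnextTd >= sigma * gTd
    return armijo_like and curvature
```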
Subspace techniques play an extraordinary role in solving large-scale unconstrained optimization problems, and as the scale of the problems to be dealt with continues to expand, they have attracted increasing attention from researchers. Combining subspace minimization with the CG method, Yuan and Stoer [12] creatively proposed the SMCG method, in which an approximation of $f(x)$ is minimized on the subspace $\Omega_{k+1} = \mathrm{Span}\{g_{k+1}, s_k\}$, yielding the search direction
$$d_{k+1} = \mu_k g_{k+1} + \nu_k s_k, \tag{6}$$
where $\mu_k$ and $\nu_k$ are parameters and $s_k = x_{k+1} - x_k$. The SMCG method is thus a further development of the CG method and has had a profound influence on the subsequent growth of subspace techniques. Based on Yuan's ideas, Andrei [13] developed a new SMCG method that expands the search direction to a three-dimensional subspace and employs an acceleration strategy. Inspired by Andrei, Yang et al. [14] applied the subspace minimization technique to another special three-term subspace and obtained a new SMCG method. On the same subspace, Li et al. [10] conducted a more in-depth study of Yang's results, analyzed the more complex three-parameter situation, and set different conditions to dynamically select the search direction for different dimensions of the subspace. Subspace techniques have further applications: Dai and Kou [15] proposed a new method called BBCG by fusing them with the Barzilai–Borwein method [16] and compared, through numerical experiments, several BBCG variants proposed in that article, finding that the BBCG3 method performs best. Many scholars have also tried to integrate the idea of subspace minimization into trust region methods; for related research, readers can refer to [17]. More research on the use of subspace techniques to construct different methods is still in progress [18,19,20,21,22].
The outline of this article is as follows. In Section 2, we give preliminary information. In Section 3, the search direction is discussed first, and then the resulting algorithm is proposed; under some mild assumptions, the global convergence of the algorithm for general functions is proved and, more importantly, R-linear convergence for uniformly convex functions is also established. Some numerical results for unconstrained optimization problems and image restoration problems are shown in Section 4. The conclusion and discussion are presented in Section 5.

2. Preliminary

The main work of this section is as follows: on the subspace $\Omega_{k+1} = \mathrm{Span}\{g_{k+1}, s_k, g_k\}$, the discussion is divided into three cases according to the dimension of $\Omega_{k+1}$; then, combined with the subspace minimization technique, four forms of the direction are determined, and the conditions for dynamically selecting each direction are given.
In this paper, the direction at $x_{k+1}$ is expected to minimize the quadratic approximation of the objective function,
$$\min_{d\in\Omega_{k+1}} \phi_{k+1}(d) = g_{k+1}^T d + \frac{1}{2} d^T B_{k+1} d, \tag{7}$$
on the subspace $\Omega_{k+1}$, where $B_{k+1}$ is regarded as a positive definite approximation of the Hessian matrix. We assume that $B_{k+1}$ satisfies the modified secant equation [23]
$$B_{k+1} s_k = y_k^* = y_k + \frac{\max\{z_k, 0\}}{s_k^T s_k}\, s_k, \tag{8}$$
where $z_k = 2(f_k - f_{k+1}) + (g_{k+1} + g_k)^T s_k$. Combined with (4), it is obvious that $s_k^T y_k^* \ge s_k^T y_k > 0$.
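For illustration, the modified secant vector $y_k^*$ of (8) can be computed as follows; the function and argument names are ours, and the vector inputs are assumed to be NumPy arrays.

```python
import numpy as np

def modified_y(s_k, y_k, g_k, g_next, f_k, f_next):
    # z_k = 2(f_k - f_{k+1}) + (g_{k+1} + g_k)^T s_k
    z_k = 2.0 * (f_k - f_next) + np.dot(g_next + g_k, s_k)
    # y_k^* = y_k + max{z_k, 0} / (s_k^T s_k) * s_k
    return y_k + (max(z_k, 0.0) / np.dot(s_k, s_k)) * s_k
```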

3. Proposed Method

3.1. Direction Selection

According to the above discussion, the subspace may have three different dimensions; based on that, we analyze the selection of the search direction case by case.
Case I: dim($\Omega_{k+1}$) = 3.
In this case, the direction can be expressed as
$$d_{k+1} = a_k g_{k+1} + b_k s_k + c_k g_k, \tag{9}$$
where $a_k, b_k, c_k$ are parameters to be determined. Substituting (9) into (7), we get
$$\{a_k, b_k, c_k\} = \arg\min_{(a,b,c)} \begin{pmatrix} \|g_{k+1}\|^2 & g_{k+1}^T s_k & g_{k+1}^T g_k \end{pmatrix} \begin{pmatrix} a \\ b \\ c \end{pmatrix} + \frac{1}{2} \begin{pmatrix} a \\ b \\ c \end{pmatrix}^{\!T} \begin{pmatrix} \rho_{k+1} & g_{k+1}^T y_k^* & w_k \\ g_{k+1}^T y_k^* & s_k^T y_k^* & g_k^T y_k^* \\ w_k & g_k^T y_k^* & \rho_k \end{pmatrix} \begin{pmatrix} a \\ b \\ c \end{pmatrix}, \tag{10}$$
where $\rho_k = g_k^T B_{k+1} g_k$, $\rho_{k+1} = g_{k+1}^T B_{k+1} g_{k+1}$, and $w_k = g_{k+1}^T B_{k+1} g_k$. Inspired by the BBCG method [11], we set
$$w_k = \xi_k\, g_{k+1}^T g_k\, \frac{\|y_k\|^2}{s_k^T y_k}, \qquad \xi_k = \begin{cases} \max\{0.9\,\xi_{k-1},\, 1.2\}, & \text{if } \alpha_k > 1, \\ \min\{1.1\,\xi_{k-1},\, 1.75\}, & \text{otherwise}, \end{cases} \tag{11}$$
where $\xi_k$ is an adaptive parameter whose update rule is kept fixed throughout the whole paper. Setting $\xi_0 = 1.5$, we not only have $1.2 \le \xi_k \le 1.75$, but the numerical performance also turns out to be better than that of a constant value. The matrix in (10) is denoted by $D_k$.
The positive definiteness of $D_k$ is established in Lemma 1. Assuming for now that $D_k$ is positive definite, the unique solution of (10) can be written as
$$\begin{pmatrix} a_k \\ b_k \\ c_k \end{pmatrix} = -\frac{1}{\Delta_k} \begin{pmatrix} \chi & \theta_1 & \theta_2 \\ \theta_1 & \theta & \theta_3 \\ \theta_2 & \theta_3 & \nu \end{pmatrix} \begin{pmatrix} \|g_{k+1}\|^2 \\ g_{k+1}^T s_k \\ g_{k+1}^T g_k \end{pmatrix}, \tag{12}$$
where
  • $\Delta_k = |D_k| = \rho_{k+1}\chi + w_k\theta_2 + (g_{k+1}^T y_k^*)\theta_1$,
  • $\chi = \rho_k (s_k^T y_k^*) - (g_k^T y_k^*)^2$,
  • $\nu = \rho_{k+1}(s_k^T y_k^*) - (g_{k+1}^T y_k^*)^2$,
  • $\theta_3 = w_k (g_{k+1}^T y_k^*) - \rho_{k+1}(g_k^T y_k^*)$,
  • $\theta_1 = w_k (g_k^T y_k^*) - \rho_k (g_{k+1}^T y_k^*)$,
  • $\theta = \rho_k \rho_{k+1} - w_k^2$,
  • $\theta_2 = (g_k^T y_k^*)(g_{k+1}^T y_k^*) - w_k (s_k^T y_k^*)$.
If $D_k$ is positive definite, then $\Delta_k > 0$, so
$$\rho_{k+1} > \frac{-w_k\theta_2 - (g_{k+1}^T y_k^*)\theta_1}{\chi} =: h_k. \tag{13}$$
Setting $n_k = 1 - \frac{(g_k^T y_k^*)^2}{\rho_k (s_k^T y_k^*)}$ and substituting the quantities appearing in Equation (12), we get
$$\chi = n_k \rho_k (s_k^T y_k^*), \qquad h_k = \frac{1}{n_k}\left(\frac{w_k^2}{\rho_k} + \frac{(g_{k+1}^T y_k^*)^2}{s_k^T y_k^*} - \frac{2 w_k (g_{k+1}^T y_k^*)(g_k^T y_k^*)}{\rho_k (s_k^T y_k^*)}\right). \tag{14}$$
Considering the inequality in Equation (13), we compute $\rho_{k+1}$ as
$$\rho_{k+1} = \xi_k \max\{h_k, N\}, \tag{15}$$
where $N = N_1\|g_{k+1}\|^2$ and $N_1 = \max\left\{\dfrac{\|y_k^*\|^2}{s_k^T y_k^*},\ \dfrac{4\|y_k^*\|^4\|g_k\|^2}{\rho_k (s_k^T y_k^*)^2}\right\}$.
In order to make the algorithm perform better, in a manner similar to [7,24], we impose the following conditions:
$$\rho_0 \le n_k, \tag{16}$$
$$\zeta_1 \le \frac{s_k^T y_k}{\|s_k\|^2}, \qquad \frac{\|y_k^*\|^2}{s_k^T y_k^*} \le \zeta_2, \tag{17}$$
$$\zeta_1 \le \frac{\rho_k}{\|g_k\|^2}, \qquad \frac{4\|y_k^*\|^4\|g_k\|^2}{\rho_k (s_k^T y_k^*)^2} \le \zeta_2, \tag{18}$$
where $\zeta_1, \zeta_2$ are positive constants and $\rho_0 \in (0,1)$.
Now, we prove that $D_k$ is positive definite.
Lemma 1.
If $\rho_{k+1}$ is calculated by Equation (15), then the matrix $D_k$ is positive definite.
Proof. 
Using mathematical induction and $\rho_0 \in (0,1)$, we can get $\rho_{k+1} \ge \xi_k N > 0$. Because $\rho_{k+1} > \frac{\|g_{k+1}\|^2\|y_k^*\|^2}{s_k^T y_k^*}$, we have $\nu = \rho_{k+1}(s_k^T y_k^*) - (g_{k+1}^T y_k^*)^2 > 0$; since $\rho_{k+1} > h_k = \frac{-w_k\theta_2 - (g_{k+1}^T y_k^*)\theta_1}{\chi}$, it follows that $\Delta_k = |D_k| = \rho_{k+1}\chi + w_k\theta_2 + (g_{k+1}^T y_k^*)\theta_1 > 0$. □
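Given Lemma 1, the three-dimensional direction can also be obtained by solving the 3×3 system numerically instead of using the closed form (12). The following Python sketch does exactly that; it assumes that $\rho_{k+1}$, $\rho_k$, and $w_k$ have already been computed from (11) and (15), and the function name is ours.

```python
import numpy as np

def subspace3_direction(g_next, s_k, g_k, y_star, rho_next, rho_k, w_k):
    p = g_next @ y_star            # g_{k+1}^T y_k^*
    q = g_k @ y_star               # g_k^T y_k^*
    s = s_k @ y_star               # s_k^T y_k^*
    # Matrix D_k from (10)
    D = np.array([[rho_next, p,  w_k ],
                  [p,        s,  q   ],
                  [w_k,      q,  rho_k]])
    r = np.array([g_next @ g_next, g_next @ s_k, g_next @ g_k])
    a, b, c = -np.linalg.solve(D, r)       # minimizer of (10), equivalent to (12)
    return a * g_next + b * s_k + c * g_k
```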
Case II: dim($\Omega_{k+1}$) = 2.
We define $d_{k+1}$ as
$$d_{k+1} = a_k g_{k+1} + b_k s_k, \tag{19}$$
where $a_k, b_k$ are parameters. Similarly, substituting Equation (19) into the approximation model (7), we find
$$\{a_k, b_k\} = \arg\min_{(a,b)} \begin{pmatrix} \|g_{k+1}\|^2 & g_{k+1}^T s_k \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix} + \frac{1}{2} \begin{pmatrix} a \\ b \end{pmatrix}^{\!T} \begin{pmatrix} \rho_{k+1} & g_{k+1}^T y_k^* \\ g_{k+1}^T y_k^* & s_k^T y_k^* \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix}. \tag{20}$$
If $\Delta_k = \rho_{k+1}(s_k^T y_k^*) - (g_{k+1}^T y_k^*)^2 > 0$, then Equation (20) has the unique solution
$$\begin{pmatrix} a_k \\ b_k \end{pmatrix} = \frac{1}{\Delta_k} \begin{pmatrix} (g_{k+1}^T y_k^*)(g_{k+1}^T s_k) - (s_k^T y_k^*)\|g_{k+1}\|^2 \\ (g_{k+1}^T y_k^*)\|g_{k+1}\|^2 - \rho_{k+1}(g_{k+1}^T s_k) \end{pmatrix}. \tag{21}$$
Similar to the way $w_k$ is evaluated in (11), we set $\rho_{k+1} = \xi_k \frac{\|g_{k+1}\|^2\|y_k^*\|^2}{s_k^T y_k^*}$; apparently, $\Delta_k > 0$. Furthermore, for better performance of the algorithm, we require the relevant quantities to satisfy condition (17).
As is well known, the DY and HS methods have some good properties; for example, the finite termination property of the HS method helps to improve the convergence rate. In view of these considerations, when the conditions
$$\zeta_1 \le \frac{s_k^T y_k}{\|s_k\|^2}, \qquad \frac{\|g_{k+1}\|\,\|d_k\|}{d_k^T y_k} \le \zeta_3, \tag{22}$$
$$\frac{|g_{k+1}^T y_k|\,|g_{k+1}^T d_k|}{d_k^T y_k\,\|g_{k+1}\|^2} \le \zeta_3, \tag{23}$$
are met, we take
$$d_{k+1} = -g_{k+1} + \beta_k d_k, \qquad \beta_k = \max\{\beta_k^{HS}, \beta_k^{DY}\}, \tag{24}$$
where $\zeta_3 \in [0,1)$.
In summary, in the case of a two-dimensional subspace, when condition (17) holds, $d_{k+1}$ is given by Equations (19) and (21); when the inequalities in Equations (22) and (23) hold, $d_{k+1}$ is computed by Equation (24).
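A small Python sketch of the two-dimensional choice (19) with the closed-form coefficients (21) is given below; the function name is ours, and $\rho_{k+1}$ is set as described in the text.

```python
import numpy as np

def subspace2_direction(g_next, s_k, y_star, xi_k):
    p = g_next @ y_star                          # g_{k+1}^T y_k^*
    s = s_k @ y_star                             # s_k^T y_k^*
    rho_next = xi_k * (g_next @ g_next) * (y_star @ y_star) / s
    delta = rho_next * s - p * p                 # determinant, positive since xi_k >= 1.2
    a = (p * (g_next @ s_k) - s * (g_next @ g_next)) / delta
    b = (p * (g_next @ g_next) - rho_next * (g_next @ s_k)) / delta
    return a * g_next + b * s_k                  # direction (19) with coefficients (21)
```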
Case III: dim($\Omega_{k+1}$) = 1.
When dim($\Omega_{k+1}$) = 1, we adopt the steepest descent direction, namely $d_{k+1} = -g_{k+1}$.

3.2. Description of DSCG Algorithm

In this section, we first recall an acceleration strategy (Algorithm 1) [25] which has been shown to be quite efficient for CG methods. Then, we present our dynamically adjusted subspace conjugate gradient algorithm (DSCG, Algorithm 2) and prove that its direction satisfies the sufficient descent condition.
Algorithm 1: Acceleration Strategy.
Step 1: Compute $z = x_k + \alpha_k d_k$, $g_z = \nabla f(z)$, and $y_z = g_k - g_z$;
Step 2: Compute $\bar{a}_k = \alpha_k g_k^T d_k$ and $\bar{b}_k = -\alpha_k y_z^T d_k$;
Step 3: If $\bar{b}_k > 0$, compute $\eta_k = -\bar{a}_k/\bar{b}_k$ and update the iterate as $x_{k+1} = x_k + \eta_k \alpha_k d_k$; otherwise, update it as $x_{k+1} = x_k + \alpha_k d_k$.
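The following Python sketch implements Algorithm 1 as written above; in particular, the signs of $\bar{b}_k$ and $\eta_k$ follow our reading of the acceleration scheme in [25], and `grad_f` is a user-supplied gradient routine.

```python
def accelerate(x_k, g_k, d_k, alpha_k, grad_f):
    # Step 1: trial point and gradient difference
    z = x_k + alpha_k * d_k
    g_z = grad_f(z)
    y_z = g_k - g_z
    # Step 2: first-order term and curvature estimate along d_k
    a_bar = alpha_k * (g_k @ d_k)       # negative for a descent direction
    b_bar = -alpha_k * (y_z @ d_k)
    # Step 3: rescale the step if the curvature estimate is positive
    if b_bar > 0:
        eta = -a_bar / b_bar
        return x_k + eta * alpha_k * d_k
    return x_k + alpha_k * d_k
```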
Algorithm 2: DSCG.
Step 1: Given $x_0 \in \mathbb{R}^n$, $\alpha_0^{(0)}$, $\epsilon > 0$, $\zeta_1, \zeta_2 > 0$, $\zeta_3 \in [0,1)$, and $\rho_0 \in (0,1)$. Let $d_0 := -g_0$ and $k := 0$.
Step 2: If $\|g_k\| \le \epsilon$, stop; otherwise, go to Step 3.
Step 3: Compute a stepsize $\alpha_k$ that satisfies conditions (4) and (5), then apply Algorithm 1 (acceleration strategy).
Step 4: Compute the direction $d_{k+1}$.
  • Step 4.1: If condition (17) holds, compute $d_{k+1}$ by Equations (19) and (21), and go to Step 5;
  • Step 4.2: If conditions (22) and (23) hold, compute $d_{k+1}$ by Equation (24), and go to Step 5;
  • Step 4.3: If conditions (16)–(18) hold, compute $d_{k+1}$ by Equations (9) and (12), and go to Step 5;
  • Step 4.4: Otherwise, set $d_{k+1} = -g_{k+1}$.
Step 5: Set $k := k + 1$, and go to Step 2.

3.3. Convergence Analysis

In this subsection, we focus on the convergence properties of the proposed algorithm (DSCG). The sufficient descent condition is crucial for a gradient-type algorithm. In order to establish it for the DSCG method, we first introduce the following lemma.
Lemma 2.
If $d_{k+1}$ is generated by Equations (9) and (12) or by Equations (19) and (20), then
$$g_{k+1}^T d_{k+1} \le -\frac{\|g_{k+1}\|^4}{\rho_{k+1}}$$
holds.
Proof. 
If $d_{k+1}$ is generated by Equations (19) and (20), we get
$$g_{k+1}^T d_{k+1} = -\frac{\|g_{k+1}\|^4}{\Delta_k}\left[s_k^T y_k^* - \frac{2(g_{k+1}^T y_k^*)(g_{k+1}^T s_k)}{\|g_{k+1}\|^2} + \rho_{k+1}\left(\frac{g_{k+1}^T s_k}{\|g_{k+1}\|^2}\right)^2\right] \le -\frac{\|g_{k+1}\|^4}{\Delta_k}\, H\!\left(\frac{g_{k+1}^T y_k^*}{\rho_{k+1}}\right) = -\frac{\|g_{k+1}\|^4}{\Delta_k}\cdot\frac{\Delta_k}{\rho_{k+1}} = -\frac{\|g_{k+1}\|^4}{\rho_{k+1}},$$
where $H(t) = s_k^T y_k^* - 2(g_{k+1}^T y_k^*)\,t + \rho_{k+1} t^2$, with $t = \frac{g_{k+1}^T s_k}{\|g_{k+1}\|^2}$ as the variable, attains its minimum value $\Delta_k/\rho_{k+1}$ at $t = \frac{g_{k+1}^T y_k^*}{\rho_{k+1}}$.
When $d_{k+1}$ is given by Equations (9) and (12),
$$g_{k+1}^T d_{k+1} = \begin{pmatrix} \|g_{k+1}\|^2 & g_{k+1}^T s_k & g_{k+1}^T g_k \end{pmatrix} \begin{pmatrix} a_k \\ b_k \\ c_k \end{pmatrix} = -\frac{1}{\Delta_k} \begin{pmatrix} \|g_{k+1}\|^2 & g_{k+1}^T s_k & g_{k+1}^T g_k \end{pmatrix} \begin{pmatrix} \chi & \theta_1 & \theta_2 \\ \theta_1 & \theta & \theta_3 \\ \theta_2 & \theta_3 & \nu \end{pmatrix} \begin{pmatrix} \|g_{k+1}\|^2 \\ g_{k+1}^T s_k \\ g_{k+1}^T g_k \end{pmatrix} = -\frac{\|g_{k+1}\|^4}{\Delta_k}\,\varphi(x, y),$$
taking $x = \frac{g_{k+1}^T g_k}{\|g_{k+1}\|^2}$ and $y = \frac{g_{k+1}^T s_k}{\|g_{k+1}\|^2}$ as independent variables, where $\varphi(x,y) = \nu x^2 + 2\theta_3 x y + \theta y^2 + 2\theta_2 x + 2\theta_1 y + \chi$.
From Lemma 1, we find that $\nu > 0$ and $\nu\theta - \theta_3^2 = \Delta_k \rho_{k+1} > 0$; that is, $\varphi(x,y)$ has a minimum value, which is calculated to be $\varphi_{\min} = \Delta_k/\rho_{k+1}$.
Therefore, we again get $g_{k+1}^T d_{k+1} \le -\frac{\|g_{k+1}\|^4}{\rho_{k+1}}$. □
Lemma 3.
Suppose $d_{k+1}$ is generated by the DSCG algorithm. Then, there is a constant $c_3 > 0$ such that
$$g_{k+1}^T d_{k+1} \le -c_3 \|g_{k+1}\|^2.$$
Proof. 
Based on the form of the direction, we analyze three cases.
Case I: If $d_{k+1} = -g_{k+1}$, let $c_3 = \frac{1}{2}$, and the claim holds.
Case II: When the direction has two terms, the following must be considered.
We first discuss the case where $d_{k+1}$ is given by Equations (19) and (21); here, $\rho_{k+1} = \xi_k \frac{\|g_{k+1}\|^2\|y_k^*\|^2}{s_k^T y_k^*}$, and combined with (11), condition (17), and Lemma 2, the following results can be obtained:
$$\rho_{k+1} \le 2\zeta_2 \|g_{k+1}\|^2, \qquad g_{k+1}^T d_{k+1} \le -\frac{1}{2\zeta_2}\|g_{k+1}\|^2.$$
When $d_{k+1}$ is determined by Equation (24), for $\beta_k = \beta_k^{DY}$ we have
$$g_{k+1}^T d_{k+1} = -\|g_{k+1}\|^2 + \beta_k g_{k+1}^T d_k \le -\|g_{k+1}\|^2 + \|g_{k+1}\|^2\,\frac{\|g_{k+1}\|\,\|d_k\|}{d_k^T y_k} \le -(1 - \zeta_3)\|g_{k+1}\|^2;$$
for $\beta_k = \beta_k^{HS}$, using (23), the same result can be obtained similarly.
Case III: When the direction is computed by Equations (9) and (12), considering Lemma 2, we first prove that $\rho_{k+1}$ has an upper bound:
$$|h_k| = \left|\frac{1}{n_k}\left(\frac{w_k^2}{\rho_k} + \frac{(g_{k+1}^T y_k^*)^2}{s_k^T y_k^*} - \frac{2 w_k (g_{k+1}^T y_k^*)(g_k^T y_k^*)}{\rho_k (s_k^T y_k^*)}\right)\right| \le \left[\left(\frac{4\|y_k^*\|^4\|g_k\|^2}{\rho_k (s_k^T y_k^*)^2} + \frac{\|y_k^*\|^2}{s_k^T y_k^*}\right)\|g_{k+1}\|^2 + 2\,\frac{|w_k|}{\sqrt{\rho_k}}\,\frac{|g_{k+1}^T y_k^*|}{\sqrt{s_k^T y_k^*}}\,\frac{|g_k^T y_k^*|}{\sqrt{\rho_k (s_k^T y_k^*)}}\right]\Big/ n_k \le \left(2 N_1\|g_{k+1}\|^2 + 2\,\frac{|w_k|}{\sqrt{\rho_k}}\,\frac{|g_{k+1}^T y_k^*|}{\sqrt{s_k^T y_k^*}}\right)\Big/ n_k \le \frac{2N + 2\sqrt{N}\,\sqrt{N}}{n_k} \le \frac{4N}{\rho_0}.$$
The above chain follows from Equations (11), (14), and (15) together with $\chi > 0$. By using conditions (16)–(18), we have
$$\rho_{k+1} = \xi_k \max\{h_k, N\} \le 2\cdot\frac{4N}{\rho_0} = \frac{8 N_1 \|g_{k+1}\|^2}{\rho_0} \le \frac{8\zeta_2\|g_{k+1}\|^2}{\rho_0}.$$
Finally, using the conclusion of Lemma 2, it follows that
$$g_{k+1}^T d_{k+1} \le -\frac{\|g_{k+1}\|^4}{\rho_{k+1}} \le -\frac{\rho_0}{8\zeta_2}\|g_{k+1}\|^2.$$
Summarizing all the above cases, we take
$$c_3 = \min\left\{\frac{1}{2},\ \frac{1}{2\zeta_2},\ 1 - \zeta_3,\ \frac{\rho_0}{8\zeta_2}\right\},$$
and thus complete the proof. □
In the remainder of this subsection, the global convergence of the algorithm for general functions is proved; more importantly, R-linear convergence for uniformly convex functions is also established. We first introduce two necessary assumptions.
Assumption 1.
The function $f:\mathbb{R}^n\to\mathbb{R}$ is continuously differentiable and bounded from below on $\mathbb{R}^n$.
Assumption 2.
The gradient $g(x)$ is Lipschitz continuous with a constant $L > 0$; i.e.,
$$\|g(x) - g(y)\| \le L\|x - y\|, \quad \forall x, y \in \mathbb{R}^n,$$
which implies that $\|y_k\| \le L\|s_k\|$. Remarkably, Assumption 1 is milder than the usual assumption that the level set $D = \{x\in\mathbb{R}^n : f(x) \le f(x_0)\}$ is bounded.
Lemma 4.
Suppose $\alpha_k$ is generated by the generalized line search (4) and Assumption 2 holds. Then
$$\alpha_k \ge \frac{(1-\sigma)\,|g_k^T d_k|}{L\|d_k\|^2}. \tag{36}$$
Proof. 
By the line search condition (4), we get
$$(\sigma - 1)\, g_k^T d_k \le (g_{k+1} - g_k)^T d_k = y_k^T d_k \le \|y_k\|\,\|d_k\| \le \alpha_k L \|d_k\|^2.$$
Since $\sigma < 1$ and $g_k^T d_k < 0$, (36) holds immediately. □
Lemma 5.
If $\alpha_k$ fulfills the generalized line search conditions (4) and (5) and Assumption 1 holds, then
$$f_k \le C_k, \quad \forall k.$$
Proof. 
From (4) and $g_k^T d_k < 0$, we have $f_{k+1} \le C_k$. Condition (5) shows that
$$Q_{k+1} C_{k+1} = Q_k C_k + f_{k+1} \le Q_k C_k + C_k = (Q_k + 1)\, C_k,$$
namely, $C_{k+1} \le C_k$. Then,
$$Q_k C_k + f_{k+1} = Q_k C_{k+1} + C_{k+1} \le Q_k C_k + C_{k+1},$$
which means $f_{k+1} \le C_{k+1}$. □
Lemma 6.
Let $d_{k+1}$ be generated by the DSCG algorithm. Then, there is a constant $c_4 > 0$ such that
$$\|d_{k+1}\| \le c_4 \|g_{k+1}\|. \tag{40}$$
Proof. 
Similarly, we analyze three cases.
Case I: If $d_{k+1} = -g_{k+1}$, let $c_4 = 1$, and the claim holds.
Case II: If $d_{k+1}$ is given by Equation (24), then from Assumption 2 and condition (22),
$$\|d_{k+1}\| = \|-g_{k+1} + \beta_k d_k\| \le \|g_{k+1}\| + \frac{\|g_{k+1}\|\,\|y_k\|\,\|d_k\|}{d_k^T y_k} \le \left(1 + \frac{L}{\zeta_1}\right)\|g_{k+1}\| \qquad (\beta_k = \beta_k^{HS}). \tag{41}$$
When $\beta_k = \beta_k^{DY}$, the same bound (41) can be obtained in the same way.
Now, if $d_{k+1}$ is calculated by Equations (19) and (21), where $\rho_{k+1} = \xi_k \frac{\|g_{k+1}\|^2\|y_k^*\|^2}{s_k^T y_k^*}$ and conditions (11) and (17) hold, then
$$\Delta_k = \rho_{k+1}(s_k^T y_k^*) - (g_{k+1}^T y_k^*)^2 = s_k^T y_k^* \left(\rho_{k+1} - \frac{(g_{k+1}^T y_k^*)^2}{s_k^T y_k^*}\right) \ge s_k^T y_k \cdot \frac{1}{5}\,\frac{\|g_{k+1}\|^2\|y_k^*\|^2}{s_k^T y_k^*} \ge \frac{1}{5}\,\zeta_1\|s_k\|^2\,\frac{\|g_{k+1}\|^2\|y_k^*\|^2}{s_k^T y_k^*}.$$
Applying the above result, combined with the Cauchy and triangle inequalities, we have
$$\|d_{k+1}\| = \|a_k g_{k+1} + b_k s_k\| \le \frac{1}{\Delta_k}\Big(|g_{k+1}^T y_k^*|\,|g_{k+1}^T s_k|\,\|g_{k+1}\| + \big|(g_{k+1}^T y_k^*)\|g_{k+1}\|^2 - \rho_{k+1}(g_{k+1}^T s_k)\big|\,\|s_k\|\Big) \le \frac{1}{\Delta_k}\left(2\|s_k\|\,\|y_k^*\| + \frac{\rho_{k+1}\|s_k\|^2}{\|g_{k+1}\|^2}\right)\|g_{k+1}\|^3 \le \frac{5\, s_k^T y_k^*}{\zeta_1\|s_k\|^2\|y_k^*\|^2}\left(2\|s_k\|\,\|y_k^*\| + \frac{\rho_{k+1}\|s_k\|^2}{\|g_{k+1}\|^2}\right)\|g_{k+1}\| = \left(\frac{10\, s_k^T y_k^*}{\zeta_1\|s_k\|\,\|y_k^*\|} + \frac{5\xi_k}{\zeta_1}\right)\|g_{k+1}\| \le \left(\frac{10\, s_k^T y_k^*}{\zeta_1\|s_k\|\,\|y_k^*\|} + \frac{10}{\zeta_1}\right)\|g_{k+1}\| \le \frac{20}{\zeta_1}\|g_{k+1}\|.$$
Case III: When $d_{k+1}$ is computed by Equations (9) and (12), let us similarly first establish a lower bound for $\Delta_k$. Based on Equations (11), (13), and (15), we have
$$\Delta_k = \rho_{k+1}\chi + w_k\theta_2 + (g_{k+1}^T y_k^*)\theta_1 = \chi\left(\rho_{k+1} - \frac{-w_k\theta_2 - (g_{k+1}^T y_k^*)\theta_1}{\chi}\right) = \chi(\rho_{k+1} - h_k) \ge \chi\big(1.2\max\{h_k, N\} - h_k\big) \ge \frac{1}{5}\chi N.$$
Defining $\chi_1 = \rho_k(s_k^T y_k^*)$, from Equations (14) and (16) we have $\chi = \chi_1 n_k \ge \chi_1 \rho_0 > 0$; therefore,
$$\Delta_k \ge \frac{1}{5}\chi_1 \rho_0 N.$$
Then,
$$\|d_{k+1}\| = \|a_k g_{k+1} + b_k s_k + c_k g_k\| \le \frac{1}{\Delta_k}\begin{pmatrix} \|g_{k+1}\|^2 & |g_{k+1}^T s_k| & |g_{k+1}^T g_k| \end{pmatrix} \begin{pmatrix} |\chi| & |\theta_1| & |\theta_2| \\ |\theta_1| & |\theta| & |\theta_3| \\ |\theta_2| & |\theta_3| & |\nu| \end{pmatrix} \begin{pmatrix} \|g_{k+1}\| \\ \|s_k\| \\ \|g_k\| \end{pmatrix} \le \frac{\|g_{k+1}\|}{\Delta_k}\Big(\chi_1\|g_{k+1}\|^2 + 4 e_k N_1\|g_{k+1}\|^2 + 2 i_k (N + \rho_{k+1}) + j_k \rho_{k+1}\Big) \le \frac{5\|g_{k+1}\|}{\chi_1\rho_0 N}\Big(\chi_1\|g_{k+1}\|^2 + 4 e_k N_1\|g_{k+1}\|^2 + 2 i_k (N + \rho_{k+1}) + j_k \rho_{k+1}\Big) \le \|g_{k+1}\|\left(\frac{5}{\rho_0 N_1} + \frac{20}{\rho_0}\,\frac{e_k}{\chi_1} + \frac{10(\rho_0 + 8)}{\rho_0^2}\,\frac{i_k}{\chi_1} + \frac{40}{\rho_0^2}\,\frac{j_k}{\chi_1}\right),$$
where $e_k = \rho_k\|s_k\| + (s_k^T y_k^*)\|g_k\|$, $i_k = \|s_k\|\,\|g_k\|$, and $j_k = \rho_k\|s_k\|^2 + (s_k^T y_k^*)\|g_k\|^2$. From Equations (15), (17), and (18), we obtain
$$\zeta_1 \le \frac{\|y_k^*\|^2}{s_k^T y_k^*} \le N_1,$$
$$\frac{e_k}{\chi_1} = \frac{\|s_k\|}{s_k^T y_k^*} + \frac{\|g_k\|}{\rho_k} \le \frac{2}{\zeta_1},$$
$$\frac{i_k}{\chi_1} = \frac{\|s_k\|\,\|g_k\|}{\rho_k (s_k^T y_k^*)} \le \frac{1}{\zeta_1},$$
$$\frac{j_k}{\chi_1} = \frac{\|s_k\|^2}{s_k^T y_k^*} + \frac{\|g_k\|^2}{\rho_k} \le \frac{1}{\zeta_1} + \frac{1}{\zeta_1} = \frac{2}{\zeta_1}.$$
Based on the above results, it can be deduced further that
$$\|d_{k+1}\| \le \left(\frac{5}{\rho_0\zeta_1} + \frac{40}{\rho_0\zeta_1} + \frac{10(\rho_0 + 8)}{\rho_0^2}\cdot\frac{1}{\zeta_1} + \frac{40}{\rho_0^2}\cdot\frac{2}{\zeta_1}\right)\|g_{k+1}\| \le \left(\frac{55}{\rho_0} + \frac{160}{\rho_0^2}\right)\frac{1}{\zeta_1}\,\|g_{k+1}\|.$$
In conclusion, taking
$$c_4 = \max\left\{1 + \frac{L}{\zeta_1},\ \frac{20}{\zeta_1},\ \left(\frac{55}{\rho_0} + \frac{160}{\rho_0^2}\right)\frac{1}{\zeta_1}\right\},$$
inequality (40) holds. □
Theorem 1.
Assume that Assumptions 1 and 2 hold and that the sequence $\{x_k\}$ is generated by the DSCG algorithm. Then
$$\liminf_{k\to\infty} \|g_k\| = 0. \tag{49}$$
Proof. 
According to the generalized line search conditions (4) and (5), we know that
$$f_{k+1} \le C_k + \delta\alpha_k g_k^T d_k, \qquad Q_{k+1} = Q_k + 1, \qquad C_{k+1} = \frac{Q_k C_k + f_{k+1}}{Q_{k+1}}.$$
Combined with Lemmas 3, 4, and 6, it follows that
$$f_{k+1} \le C_k - \frac{(1-\sigma)\delta}{L}\left(\frac{g_k^T d_k}{\|d_k\|}\right)^2 \le C_k - \frac{(1-\sigma)\delta c_3^2}{L c_4^2}\,\|g_k\|^2 = C_k - \beta\|g_k\|^2 \qquad \left(\beta = \frac{(1-\sigma)\delta c_3^2}{L c_4^2}\right). \tag{51}$$
Since
$$Q_{k+1} = Q_k + 1 = k + 2,$$
we therefore have
$$C_{k+1} \le \frac{Q_k C_k + C_k - \beta\|g_k\|^2}{Q_{k+1}} = C_k - \frac{\beta\|g_k\|^2}{Q_{k+1}}.$$
According to Assumption 1 and Lemma 5, $C_{k+1}$ is bounded from below, and so
$$\sum_{k=0}^{\infty} \frac{\|g_k\|^2}{Q_{k+1}} < \infty.$$
Since $Q_{k+1} = k + 2$ and $\sum_k \frac{1}{k+2}$ diverges, this forces $\liminf_{k\to\infty}\|g_k\| = 0$; thus, Equation (49) holds. □
Theorem 2.
Suppose that Assumptions 1 and 2 hold, $f$ is a uniformly convex function with unique minimizer $x^*$, and the sequence $\{x_k\}$ is generated by the DSCG algorithm. Assume further that there exists $\hat{a} > 0$ such that $\alpha_k \le \hat{a}$ for all $k$, and that $\tau_{\max} < 1$. Then, there is a constant $\theta \in (0,1)$ such that
$$f_k - f(x^*) \le \theta^k\big(f_0 - f(x^*)\big). \tag{54}$$
Proof. 
From the proof of Lemma 5, we know that $C_k > C_{k+1} > f(x^*)$, which implies that
$$0 < \frac{C_{k+1} - f(x^*)}{C_k - f(x^*)} < 1, \quad \forall k \ge 0. \tag{55}$$
Denote $r = \limsup_{k\to\infty} \frac{C_{k+1} - f(x^*)}{C_k - f(x^*)}$; obviously, $r \in [0, 1]$.
Let us first analyze the case $r = 1$. Then there is a subsequence $\{x_{k_j}\}$ such that
$$\lim_{j\to\infty} \frac{C_{k_j+1} - f(x^*)}{C_{k_j} - f(x^*)} = 1.$$
From Equation (2.15) of [26], we know that $0 < 1 - \tau_{\max} \le \frac{1}{Q_k} \le 1$. Thus, there exists a convergent subsequence $\{\frac{1}{Q_{k_j+1}}\}$. We assume that
$$\lim_{j\to\infty} \frac{1}{Q_{k_j+1}} = r_1,$$
and so $0 < r_1 \le 1$.
Through the expression of $C_{k+1}$ in (5), we have
$$\frac{C_{k_j+1} - f(x^*)}{C_{k_j} - f(x^*)} = \left(1 - \frac{1}{Q_{k_j+1}}\right) + \frac{1}{Q_{k_j+1}}\cdot\frac{f_{k_j+1} - f(x^*)}{C_{k_j} - f(x^*)}.$$
Combining the above three formulas, it is obvious that
$$\lim_{j\to\infty} \frac{f_{k_j+1} - f(x^*)}{C_{k_j} - f(x^*)} = 1. \tag{59}$$
Based on Equation (3.4) of [26], we know that the uniformly convex function $f$ has the following property:
$$f_{k_j+1} - f(x^*) \le \gamma\|g_{k_j+1}\|^2.$$
Since $\alpha_k \le \hat{a}$ and $g$ is Lipschitz continuous, it follows that
$$\|g_{k_j+1}\| \le \|g_{k_j+1} - g_{k_j}\| + \|g_{k_j}\| \le L\|x_{k_j+1} - x_{k_j}\| + \|g_{k_j}\| = L\alpha_{k_j}\|d_{k_j}\| + \|g_{k_j}\| \le (1 + L\hat{a}c_4)\|g_{k_j}\|.$$
Therefore, $f_{k_j+1} - f(x^*) \le \gamma(1 + L\hat{a}c_4)^2\|g_{k_j}\|^2$, and
$$0 < \frac{f_{k_j+1} - f(x^*)}{C_{k_j} - f(x^*)} \le \frac{\gamma(1 + L\hat{a}c_4)^2\,\|g_{k_j}\|^2}{C_{k_j} - f(x^*)}. \tag{62}$$
Condition (51) means that
$$f_{k_j+1} - f(x^*) \le C_{k_j} - f(x^*) - \beta\|g_{k_j}\|^2,$$
and hence
$$\frac{f_{k_j+1} - f(x^*)}{C_{k_j} - f(x^*)} \le 1 - \frac{\beta\|g_{k_j}\|^2}{C_{k_j} - f(x^*)}.$$
From Equation (59), we can see that
$$\lim_{j\to\infty} \frac{\|g_{k_j}\|^2}{C_{k_j} - f(x^*)} = 0.$$
Combined with condition (62),
$$\lim_{j\to\infty} \frac{f_{k_j+1} - f(x^*)}{C_{k_j} - f(x^*)} = 0.$$
Obviously, this conflicts with Equation (59), so $r \ne 1$; i.e.,
$$\limsup_{k\to\infty} \frac{C_{k+1} - f(x^*)}{C_k - f(x^*)} = r < 1.$$
Thus, there is an integer $k_0 > 0$ such that
$$\frac{C_{k+1} - f(x^*)}{C_k - f(x^*)} \le r + \frac{1 - r}{2} = \frac{1 + r}{2} < 1, \quad \forall k \ge k_0.$$
It follows that $0 < \max_{0 \le k \le k_0}\left\{\frac{C_{k+1} - f(x^*)}{C_k - f(x^*)}\right\} = \hat{r} < 1$. Define $\theta = \max\left\{\frac{1+r}{2},\ \hat{r}\right\}$; from condition (55), obviously,
$$C_{k+1} - f(x^*) \le \theta\big(C_k - f(x^*)\big), \quad 0 < \theta < 1. \tag{69}$$
It follows from condition (69) that
$$C_{k+1} - f(x^*) \le \theta\big(C_k - f(x^*)\big) \le \cdots \le \theta^{k+1}\big(C_0 - f(x^*)\big).$$
Lemma 5 ($f_k \le C_k$) and $C_0 = f_0$ imply that (54) holds. □

4. Numerical Results

In this section, we report the numerical performance of the DSCG algorithm from two aspects. Firstly, the algorithm is compared with the TTS [13] and CG_DESCENT [27] algorithms on standard unconstrained problems; secondly, the algorithm is applied to the image restoration problem and the numerical results are examined. The running environment for all codes is a PC with a 2.20 GHz CPU, 4.00 GB of RAM, and the Windows 10 operating system.

4.1. Unconstrained Problem

The experiment uses the 73 test functions listed in Table 1.
The dimension of each function was set to n = 3000, n = 6000, and n = 9000.
The iteration stopping criterion is as follows: the algorithm terminates if $\|g(x)\| \le 10^{-6}$, or if the number of iterations exceeds 1000 and $stop1 < 10^{-5}$, where $stop1 = \frac{|f_{k+1} - f_k|}{|f_k|}$ when $|f_k| > 10^{-5}$ and $stop1 = |f_{k+1} - f_k|$ otherwise.
The parameters used by the algorithm are $\delta = 0.1$, $\sigma = 0.8$, $\zeta_1 = 10^{-7}$, $\zeta_2 = 10^{5}$, $\zeta_3 = 10^{-5}$, and $\rho_0 = 0.8$.
The initial stepsize selection strategy [27] is
$$\alpha_0^{(0)} = \begin{cases} 1.0, & \text{if } \|x_0\|_\infty < 10^{-30} \text{ and } |f_0| < 10^{-30}, \\[2pt] \dfrac{2|f_0|}{\|g_0\|_\infty}, & \text{if } \|x_0\|_\infty < 10^{-30} \text{ and } |f_0| \ge 10^{-30}, \\[2pt] \min\left\{1.0,\ \dfrac{\|x_0\|_\infty}{\|g_0\|_\infty}\right\}, & \text{if } \|x_0\|_\infty \ge 10^{-30} \text{ and } \|g_0\|_\infty < 10^{-7}, \\[2pt] \min\left\{1.0,\ \max\left\{\dfrac{\|x_0\|_\infty}{\|g_0\|_\infty},\ \dfrac{1}{\|g_0\|_\infty}\right\}\right\}, & \text{if } \|x_0\|_\infty \ge 10^{-30} \text{ and } \|g_0\|_\infty \ge 10^{-7}, \end{cases}$$
where $\|\cdot\|_\infty$ denotes the infinity norm.
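A Python sketch of this initial step-size rule, following the formula as printed above (the thresholds and the placement of the infinity norms are our reading of it), is:

```python
import numpy as np

def initial_stepsize(x0, f0, g0):
    xn = np.linalg.norm(x0, np.inf)
    gn = np.linalg.norm(g0, np.inf)
    if xn < 1e-30:
        return 1.0 if abs(f0) < 1e-30 else 2.0 * abs(f0) / gn
    if gn < 1e-7:
        return min(1.0, xn / gn)
    return min(1.0, max(xn / gn, 1.0 / gn))
```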
TTS and CG_DESCENT use the default parameters given in their codes. We apply the performance profiles of Dolan and Moré [28] to evaluate the effectiveness of the three algorithms and discuss the profiles with respect to CPU, NFG, and NI in detail.
The meanings of some symbols in the text are as follows:
No.: the serial number of the test problem;
CPU: the running time of the algorithm (in seconds);
NFG: the total number of function and gradient evaluations;
NI: the number of iterations.
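For reference, the Dolan-Moré performance profile used in Figures 1-3 can be computed as in the sketch below; `costs` holds one row per problem and one column per solver (CPU time, NFG, or NI), with failures marked by np.inf, and it is assumed that at least one solver succeeds on each problem.

```python
import numpy as np

def performance_profile(costs, taus):
    # rho_s(tau) = fraction of problems where solver s is within a factor tau of the best
    best = np.min(costs, axis=1, keepdims=True)     # best cost on each problem
    ratios = costs / best                           # performance ratios r_{p,s}
    return np.array([[np.mean(ratios[:, s] <= tau) for s in range(costs.shape[1])]
                     for tau in taus])
```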
When the problem dimension is 9000, the CG_DESCENT method solves only 64 of the problems, while the other two methods successfully complete all of them. In a performance profile, the method whose curve is on top solves the largest fraction of problems within any given factor of the best performance.
As shown in Figure 1, the DSCG method is clearly superior to the other algorithms in terms of CPU time: it corresponds to the top curve and solves 47.49% of the test problems in the shortest time. In contrast, TTS is the fastest on 40.64% of the test problems, and CG_DESCENT is the fastest on only 9.59%.
Now, let us focus on Figure 2. By comparison, DSCG requires fewer function and gradient evaluations than the other algorithms, which helps to reduce computation and improve efficiency: it solves 58.90% of the test problems with the fewest function and gradient evaluations, while TTS does so on 28.77% of the problems and CG_DESCENT on 13.24%.
In addition, Figure 3 compares the algorithms in terms of the number of iterations. The performance of the DSCG algorithm is again outstanding, as it solves 64.38% of the problems with the minimum number of iterations, while TTS and CG_DESCENT require the fewest iterations on 52.05% and 7.76% of the problems, respectively.
The three figures for CPU, NFG, and NI convey a consistent message: on the given test set, DSCG performs very well, with numerical results superior to those of TTS and CG_DESCENT.

4.2. Image Restoration Problem

The proposed DSCG method is also applied to the image restoration problem in this subsection. For more specialized work in the field of image processing, please see [29,30]. In two scenarios with different noise levels, a degraded image is restored so that the picture becomes clear and recognizable. This task has a wide range of applications in many fields of production and life, has important practical significance, and is also a difficult subject in the field of optimization. Its basic model is $b = Ax + \varsigma$, where $x \in \mathbb{R}^n$ is the original image, $A \in \mathbb{R}^{m\times n}$ is the blur matrix, $\varsigma \in \mathbb{R}^m$ represents noise, and $b \in \mathbb{R}^m$ is the observed degraded image. The unknown image $x$ is usually recovered by solving
$$\min_{x\in\mathbb{R}^n} \|Ax - b\|^2,$$
but because the imaging system is susceptible to noise and loss of information, it is difficult to obtain a satisfactory solution in this way. In order to overcome this shortcoming, a regularized least squares model is usually introduced,
$$\min_{x\in\mathbb{R}^n} \|Ax - b\|^2 + \lambda\|Yx\|_1,$$
where $Y$ is a linear operator, $\|\cdot\|_1$ denotes the $\ell_1$ norm, and $\lambda$ is the regularization parameter that balances the data-fidelity term and the regularization term.
Stopping condition: $\frac{|f_{k+1} - f_k|}{|f_k|} < 10^{-3}$ or $\frac{\|x_{k+1} - x_k\|}{\|x_k\|} < 10^{-3}$;
Test images: Barbara ($512 \times 512$), Baboon ($512 \times 512$).
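As an illustration of the model only, the sketch below evaluates the regularized objective and its gradient. The paper does not state how the nonsmooth $\ell_1$ term is handled, so a pseudo-Huber smoothing with parameter `mu` is assumed here purely for illustration; `A` and `Y` may be dense arrays or scipy.sparse matrices, and `lam` is the regularization weight.

```python
import numpy as np

def restoration_objective(x, A, b, Y, lam, mu=1e-3):
    r = A @ x - b
    w = Y @ x
    smooth_l1 = np.sum(np.sqrt(w * w + mu * mu) - mu)   # smoothed ||Yx||_1 (assumption)
    return r @ r + lam * smooth_l1

def restoration_gradient(x, A, b, Y, lam, mu=1e-3):
    r = A @ x - b
    w = Y @ x
    return 2.0 * (A.T @ r) + lam * (Y.T @ (w / np.sqrt(w * w + mu * mu)))
```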
The image restoration results of the algorithms under the two noise levels are shown in Figure 4 and Figure 5; we display the original degraded image and the restored results of DSCG, TTS, and CG_DESCENT from left to right. Obviously, for the given set of pictures, all of the algorithms successfully complete the restoration. Table 2, which reports the CPU time spent by each algorithm in the restoration process, allows a closer comparison. Many other works have focused on the image restoration problem, and more detailed references can be found in [31,32,33,34].
Summarizing the information contained in the figures and tables of this section, we draw two conclusions: (i) all of the algorithms are capable of restoring the pictures within a reasonable time; (ii) under noise levels of 20% and 50%, DSCG is shown to be a promising algorithm.

5. Conclusions and Discussion

In this paper, an algorithm that dynamically adjusts its search direction was proposed; the direction takes one of four computational forms, selected according to which conditions are satisfied. We discussed the selection of directions in a special three-term subspace using a modified secant equation, subspace minimization techniques, and an acceleration strategy. The algorithm has a desirable property: every search direction satisfies the sufficient descent condition. Using the nonmonotone generalized line search and some mild assumptions, we proved not only the global convergence of the algorithm for general functions but also the R-linear convergence rate for uniformly convex functions. Finally, we applied the algorithm to image restoration; it shows good numerical performance on both the unconstrained test problems and the image restoration problems, which demonstrates its efficiency.

Author Contributions

Conceptualization, J.H. and S.Y.; methodology, S.Y.; software and validation, Y.W.; visualization and formal analysis, Y.W.; writing—original draft preparation, Y.W.; supervision, G.X. and S.Y.; funding acquisition, G.X. and S.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Natural Science Foundation of China No. 71862003, Natural Science Foundation of Guangxi Province (CN) No. 2020GXNSFAA159014, Program for Innovative Team of Guangxi University of Finance and Economics, and the Special Funds for Local Science and Technology Development Guided by the Central Government grant number ZY20198003.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Polyak, B.T. The conjugate gradient method in extreme problems. USSR Comput. Math. Math. Phys. 1969, 9, 94–112.
  2. Hestenes, M.R.; Stiefel, E.L. Methods of conjugate gradients for solving linear systems. J. Res. Natl. Bur. Stand. 1952, 49, 409–436.
  3. Liu, Y.L.; Storey, C.S. Efficient generalized conjugate gradient algorithms, Part 1: Theory. J. Optim. Theory Appl. 1991, 69, 129–137.
  4. Fletcher, R.; Reeves, C.M. Function minimization by conjugate gradients. Comput. J. 1964, 7, 149–154.
  5. Dai, Y.H.; Yuan, Y.X. A nonlinear conjugate gradient method with a strong global convergence property. SIAM J. Optim. 2000, 10, 177–182.
  6. Fletcher, R. Practical Methods of Optimization, Volume I: Unconstrained Optimization; Wiley: New York, NY, USA, 1988.
  7. Li, Y.F.; Liu, Z.X.; Liu, H.W. A subspace minimization conjugate gradient method based on conic model for unconstrained optimization. Comput. Appl. Math. 2019, 38, 16.
  8. Dai, Y.H.; Kou, C.X. A nonlinear conjugate gradient algorithm with an optimal property and an improved Wolfe line search. SIAM J. Optim. 2013, 23, 296–320.
  9. Wang, T.; Liu, Z.X.; Liu, H.W. A new subspace minimization conjugate gradient method based on tensor model for unconstrained optimization. Int. J. Comput. Math. 2019, 96, 1924–1942.
  10. Li, M.; Liu, H.W.; Liu, Z.X. A new subspace minimization conjugate gradient method with nonmonotone line search for unconstrained optimization. Numer. Algorithms 2018, 79, 195–219.
  11. Liu, H.; Liu, Z. An efficient Barzilai–Borwein conjugate gradient method for unconstrained optimization. J. Optim. Theory Appl. 2019, 180, 879–906.
  12. Yuan, Y.; Stoer, J. A subspace study on conjugate gradient algorithms. Z. Angew. Math. Mech. 1995, 75, 69–77.
  13. Andrei, N. An accelerated subspace minimization three-term conjugate gradient algorithm for unconstrained optimization. Numer. Algorithms 2014, 65, 859–874.
  14. Yang, Y.T.; Chen, Y.T.; Lu, Y.L. A subspace conjugate gradient algorithm for large-scale unconstrained optimization. Numer. Algorithms 2017, 76, 813–828.
  15. Dai, Y.H.; Kou, C.X. A Barzilai–Borwein conjugate gradient method. Sci. China Math. 2016, 59, 1511–1524.
  16. Barzilai, J.; Borwein, J.M. Two-point step size gradient methods. IMA J. Numer. Anal. 1988, 8, 141–148.
  17. Wang, Z.H.; Yuan, Y. A subspace implementation of quasi-Newton trust region methods for unconstrained optimization. Numer. Math. 2006, 104, 241–269.
  18. Yuan, Y.X. A review on subspace methods for nonlinear optimization. In Proceedings of the International Congress of Mathematicians, Seoul, Korea, 13–21 August 2014; pp. 807–827.
  19. Fialko, S.; Karpilovskyi, V. Block subspace projection preconditioned conjugate gradient method in modal structural analysis. Comput. Math. Appl. 2020, 79, 3410–3428.
  20. Hanzely, F.; Doikov, N.; Nesterov, Y.; Richtarik, P. Stochastic subspace cubic Newton method. In Proceedings of the International Conference on Machine Learning, PMLR, Montreal, QC, Canada, 6–8 July 2020; pp. 4027–4038.
  21. Moufawad, S.M. s-Step enlarged Krylov subspace conjugate gradient methods. SIAM J. Sci. Comput. 2020, 42, A187–A219.
  22. Soodhalter, K.M.; de Sturler, E.; Kilmer, M.E. A survey of subspace recycling iterative methods. GAMM-Mitteilungen 2020, 43, e202000016.
  23. Babaie-Kafaki, S. Two modified scaled nonlinear conjugate gradient methods. J. Comput. Appl. Math. 2014, 261, 172–182.
  24. Yao, S.; Wu, Y.; Yang, J.; Xu, J. A three-term gradient descent method with subspace techniques. Math. Probl. Eng. 2021, 2021, 8867309.
  25. Andrei, N. An acceleration of gradient descent algorithm with backtracking for unconstrained optimization. Numer. Algorithms 2006, 42, 63–73.
  26. Zhang, H.C.; Hager, W.W. A nonmonotone line search technique and its application to unconstrained optimization. SIAM J. Optim. 2004, 14, 1043–1056.
  27. Hager, W.W.; Zhang, H. A new conjugate gradient method with guaranteed descent and an efficient line search. SIAM J. Optim. 2005, 16, 170–192.
  28. Dolan, E.D.; Moré, J.J. Benchmarking optimization software with performance profiles. Math. Program. 2002, 91, 201–213.
  29. Versaci, M.; Calcagno, S.; Morabito, F.C. Fuzzy geometrical approach based on unit hyper-cubes for image contrast enhancement. In Proceedings of the 2015 IEEE International Conference on Signal and Image Processing Applications (ICSIPA), Kuala Lumpur, Malaysia, 19–21 October 2015; pp. 488–493.
  30. Rahim, S.S.; Palade, V.; Shuttleworth, J.; Jayne, C. Automatic screening and classification of diabetic retinopathy and maculopathy using fuzzy image processing. Brain Inform. 2016, 3, 249–267.
  31. Hanjing, A.; Suantai, S. A fast image restoration algorithm based on a fixed point and optimization method. Mathematics 2020, 8, 378.
  32. Padcharoen, A.; Kitkuan, D. Iterative methods for optimization problems and image restoration. Carpathian J. Math. 2021, 37, 497–512.
  33. Ibrahim, A.H.; Kumam, P.; Kumam, W. A family of derivative-free conjugate gradient methods for constrained nonlinear equations and image restoration. IEEE Access 2020, 8, 162714–162729.
  34. Fessler, J.A. Optimization methods for magnetic resonance image reconstruction: Key models and optimization algorithms. IEEE Signal Process. Mag. 2020, 37, 33–40.
Figure 1. Performance profiles for the CPU time.
Figure 2. Performance profiles for the NFG.
Figure 3. Performance profiles for the NI.
Figure 4. 20% noise. (a) Original image; (b) DSCG; (c) TTS; (d) CG_DESCENT.
Figure 5. 50% noise. (a) Original image; (b) DSCG; (c) TTS; (d) CG_DESCENT.
Table 1. The test problems.

| Test Problems | No. | Test Problems | No. |
|---|---|---|---|
| Extended Freudenstein and Roth Function | 1 | ARWHEAD Function (CUTE) | 38 |
| Extended Trigonometric Function | 2 | ARWHEAD Function (CUTE) | 39 |
| Extended Rosenbrock Function | 3 | NONDQUAR Function (CUTE) | 40 |
| Extended White and Holst Function | 4 | DQDRTIC Function (CUTE) | 41 |
| Extended Beale Function | 5 | EG2 Function (CUTE) | 42 |
| Extended Penalty Function | 6 | DIXMAANA Function (CUTE) | 43 |
| Perturbed Quadratic Function | 7 | DIXMAANB Function (CUTE) | 44 |
| Raydan 1 Function | 8 | DIXMAANC Function (CUTE) | 45 |
| Raydan 2 Function | 9 | DIXMAANE Function (CUTE) | 46 |
| Diagonal 1 Function | 10 | Partial Perturbed Quadratic Function | 47 |
| Diagonal 2 Function | 11 | Broyden Tridiagonal Function | 48 |
| Diagonal 3 Function | 12 | Almost Perturbed Quadratic Function | 49 |
| Hager Function | 13 | Tridiagonal Perturbed Quadratic Function | 50 |
| Generalized Tridiagonal 1 Function | 14 | EDENSCH Function (CUTE) | 51 |
| Extended Tridiagonal 1 Function | 15 | VARDIM Function (CUTE) | 52 |
| Extended Three Exponential Terms Function | 16 | STAIRCASE S1 Function | 53 |
| Generalized Tridiagonal 2 Function | 17 | LIARWHD Function (CUTE) | 54 |
| Diagonal 4 Function | 18 | DIAGONAL 6 Function | 55 |
| Diagonal 5 Function | 19 | DIXON3DQ Function (CUTE) | 56 |
| Extended Himmelblau Function | 20 | DIXMAANF Function (CUTE) | 57 |
| Generalized PSC1 Function | 21 | DIXMAANG Function (CUTE) | 58 |
| Extended PSC1 Function | 22 | DIXMAANH Function (CUTE) | 59 |
| Extended Powell Function | 23 | DIXMAANI Function (CUTE) | 60 |
| Extended Block Diagonal BD1 Function | 24 | DIXMAANJ Function (CUTE) | 61 |
| Extended Maratos Function | 25 | DIXMAANK Function (CUTE) | 62 |
| Extended Cliff Function | 26 | DIXMAANL Function (CUTE) | 63 |
| Quadratic Diagonal Perturbed Function | 27 | DIXMAAND Function (CUTE) | 64 |
| Extended Wood Function | 28 | ENGVAL1 Function (CUTE) | 65 |
| Extended Hiebert Function | 29 | FLETCHCR Function (CUTE) | 66 |
| Quadratic Function QF1 Function | 30 | COSINE Function (CUTE) | 67 |
| Extended Quadratic Penalty QP1 Function | 31 | Extended DENSCHNB Function (CUTE) | 68 |
| Extended Quadratic Penalty QP2 Function | 32 | DENSCHNF Function (CUTE) | 69 |
| A Quadratic Function QF2 Function | 33 | SINQUAD Function (CUTE) | 70 |
| Extended EP1 Function | 34 | BIGGSB1 Function (CUTE) | 71 |
| Extended Tridiagonal-2 Function | 35 | Partial Perturbed Quadratic PPQ2 Function | 72 |
| BDQRTIC Function (CUTE) | 36 | Scaled Quadratic SQ1 Function | 73 |
| TRIDIA Function (CUTE) | 37 | | |
Table 2. CPU time spent by the algorithms (seconds).

| 20% Noise | Barbara | Baboon | Total |
|---|---|---|---|
| DSCG | 1.2656 | 1.2813 | 2.5469 |
| TTS | 1.2813 | 1.2656 | 2.5469 |
| CG_DESCENT | 1.4219 | 1.2969 | 2.7188 |

| 50% Noise | Barbara | Baboon | Total |
|---|---|---|---|
| DSCG | 2.0625 | 1.9688 | 4.0313 |
| TTS | 2.125 | 2.1719 | 4.2969 |
| CG_DESCENT | 2.0313 | 2.1875 | 4.2188 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
