
# A New Smoothing Conjugate Gradient Method for Solving Nonlinear Nonsmooth Complementarity Problems

College of Mathematics, Qingdao University, 308 Qingdao Ningxia Road, Qingdao 266071, China
* Author to whom correspondence should be addressed.
Academic Editor: Louxin Zhang
Algorithms 2015, 8(4), 1195-1209; https://doi.org/10.3390/a8041195
Received: 13 October 2015 / Revised: 27 November 2015 / Accepted: 11 December 2015 / Published: 17 December 2015

## Abstract

In this paper, by using the smoothing Fischer-Burmeister function, we present a new smoothing conjugate gradient method for solving nonlinear nonsmooth complementarity problems. The line search used guarantees that the search directions are descent directions. Under suitable conditions, the new smoothing conjugate gradient method is proved to be globally convergent. Finally, preliminary numerical experiments show that the new method is efficient.

## 1. Introduction

We consider the nonlinear nonsmooth complementarity problem, which is to find a vector $x \in R^n$ satisfying the conditions
$x \geq 0, \quad F(x) \geq 0, \quad x^T F(x) = 0$
where $F: R^n \to R^n$ is a locally Lipschitz continuous function. If F is continuously differentiable, then Problem (1) is called the nonlinear complementarity problem NCP(F). As is well known, Equation (1) is a very useful general mathematical model, which is closely related to mathematical programming, variational inequalities, fixed point problems and mixed strategy problems (see [1,2,3,4,5,6,7,8,9,10,11,12,13]). The methods for solving NCP(F) are classified into three categories: nonsmooth Newton methods, Jacobian smoothing methods and smoothing methods (see [14,15,16,17,18,19]). Conjugate gradient methods are widely and increasingly used for solving unconstrained optimization problems, especially in large-scale cases. However, few scholars have investigated how to use conjugate gradient methods to solve NCP(F) (see [10,20]). Moreover, in these papers, F is required to be a continuously differentiable $P_0 + R_0$ function. In this paper, we present a new smoothing conjugate gradient method for solving Equation (1), where F is only required to be locally Lipschitz continuous.
In this paper, we also use the generalized gradient of F at x, defined as
$\partial F(x) = \mathrm{conv} \{ \lim_{x^k \to x,\, x^k \in D_F} \nabla F(x^k) \}$
where "conv" denotes the convex hull of a set and $D_F$ denotes the set of points at which F is differentiable (see [21]). In the following, we introduce the definition of the smoothing function.
Definition 1
(see [22]) Let $F: R^n \to R^n$ be a locally Lipschitz continuous function. We call $\tilde{F}: R^n \times R_+ \to R^n$ a smoothing function of F if $\tilde{F}(x, \mu)$ is continuously differentiable in $R^n$ for any fixed $\mu > 0$, and
$\lim_{\mu \to 0} \tilde{F}(x, \mu) = F(x)$
for any fixed $x \in R^n$. If
$\lim_{x^k \to x,\, \mu \downarrow 0} \nabla \tilde{F}(x^k, \mu) \in \partial F(x)$
for any $x^k \in R^n$, we say $\tilde{F}$ satisfies the gradient consistency property.
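As a concrete one-dimensional illustration of Definition 1 (a hypothetical sketch, not an example from the paper), the nonsmooth function $F(x) = |x|$ admits the smoothing function $\tilde{F}(x, \mu) = \sqrt{x^2 + \mu}$:

```python
import math

# A hypothetical one-dimensional illustration of Definition 1: the
# nonsmooth F(x) = |x| admits the smoothing function
# F_tilde(x, mu) = sqrt(x^2 + mu), which is smooth for every mu > 0.
def F(x):
    return abs(x)

def F_smooth(x, mu):
    return math.sqrt(x * x + mu)

def F_smooth_grad(x, mu):
    # d/dx sqrt(x^2 + mu) = x / sqrt(x^2 + mu)
    return x / math.sqrt(x * x + mu)

# F_smooth(x, mu) -> F(x) as mu -> 0
for mu in (1e-1, 1e-4, 1e-8):
    print(mu, F_smooth(2.0, mu) - F(2.0))

# gradient consistency: for x != 0 the gradient tends to sign(x),
# which lies in the generalized gradient of |x|
print(F_smooth_grad(2.0, 1e-10))  # close to 1.0
```

For $x \neq 0$ the smoothed gradient converges to $\mathrm{sign}(x) \in \partial |x|$, and at $x = 0$ it stays inside $[-1, 1] = \partial |0|$, which is the gradient consistency property.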
In the following sections of our paper, we also use the Fischer-Burmeister function (see [23]) and the smoothing Fischer-Burmeister function. (1) The Fischer-Burmeister function
$\varphi(a, b) = \sqrt{a^2 + b^2} - a - b, \quad (a, b)^T \in R^2$
where $\varphi: R^2 \to R$. From the definition of $\varphi$, we know that it is twice continuously differentiable everywhere except at $(0, 0)^T$. Moreover, it is a complementarity function, that is, it satisfies
$\varphi(a, b) = 0 \iff a \geq 0,\ b \geq 0,\ ab = 0$
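The complementarity property can be checked numerically (a minimal sketch):

```python
import math

# Numerical check that the Fischer-Burmeister function vanishes exactly
# on complementary pairs (a >= 0, b >= 0, ab = 0).
def phi(a, b):
    return math.sqrt(a * a + b * b) - a - b

print(phi(0.0, 3.0))   # 0.0: complementary pair
print(phi(3.0, 0.0))   # 0.0: complementary pair
print(phi(1.0, 1.0))   # negative: a*b != 0
print(phi(-1.0, 0.0))  # positive: violates a >= 0
```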
Denote
$H(x) = \begin{pmatrix} \varphi(x_1, F_1(x)) \\ \vdots \\ \varphi(x_n, F_n(x)) \end{pmatrix}$
It is obvious that $H(x)$ is zero at a point x if and only if x is a solution of Equation (1). Then Equation (1) can be transformed into the following unconstrained optimization problem
$\min \Psi(x) = \frac{1}{2} \| H(x) \|^2$
We know that the optimal value of $\Psi$ is zero, and $\Psi$ is called the value function of Equation (1).
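For intuition, the value function can be evaluated on a hypothetical scalar problem with $F(x) = 2x - 1$ (an illustrative assumption, not one of the paper's test problems); its unique solution is $x = 0.5$, where $\Psi$ vanishes:

```python
import math

# Value function Psi(x) = 0.5 * ||H(x)||^2 for a hypothetical scalar
# problem with F(x) = 2x - 1, whose unique solution is x = 0.5.
def phi(a, b):
    # Fischer-Burmeister function
    return math.sqrt(a * a + b * b) - a - b

def F(x):
    return 2.0 * x - 1.0

def Psi(x):
    h = phi(x, F(x))       # H(x) has a single component here
    return 0.5 * h * h

print(Psi(0.5))   # 0.0 at the solution
print(Psi(2.0))   # positive away from the solution
```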
(2) The smoothing Fischer-Burmeister function
$\varphi_{\mu}(a, b) = \sqrt{a^2 + b^2 + \mu} - a - b$
where $\varphi_{\mu}: R^2 \to R$ for each fixed $\mu > 0$.
Let
$H_{\mu}(x) = \begin{pmatrix} \varphi_{\mu}(x_1, \tilde{F}_1(x, \mu)) \\ \vdots \\ \varphi_{\mu}(x_n, \tilde{F}_n(x, \mu)) \end{pmatrix}$
$\Psi_{\mu}(x) = \frac{1}{2} \| H_{\mu}(x) \|^2$
where $\tilde{F}_i(x, \mu)$ is the smoothing function of $F_i(x)$, $i = 1, \cdots, n$.
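The smoothed function $\varphi_{\mu}$ converges to $\varphi$ as $\mu \downarrow 0$, including at the kink $(0, 0)^T$; a minimal numerical sketch:

```python
import math

# The smoothing Fischer-Burmeister function phi_mu converges to phi as
# mu decreases to 0, including at the kink (0, 0) where phi itself is
# not differentiable.
def phi(a, b):
    return math.sqrt(a * a + b * b) - a - b

def phi_mu(a, b, mu):
    # smooth for every mu > 0, since a^2 + b^2 + mu > 0 everywhere
    return math.sqrt(a * a + b * b + mu) - a - b

for mu in (1e-2, 1e-4, 1e-8):
    # the gap at the kink equals sqrt(mu), so it vanishes with mu
    print(mu, phi_mu(0.0, 0.0, mu) - phi(0.0, 0.0))
```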
The rest of this work is organized as follows. In Section 2, we describe the new smoothing conjugate gradient method for the solution of Problem (1). It is shown that this method has global convergence properties under fairly mild assumptions. In Section 3, preliminary numerical results and some discussions for this method are presented.

## 2. The New Smoothing Conjugate Gradient Method and its Global Convergence

The new smoothing conjugate gradient direction is defined as
$d_k = \begin{cases} -\nabla \Psi_{\mu_0}(x_0), & k = 0 \\ -\nabla \Psi_{\mu_{k-1}}(x_k) + \beta_k d_{k-1}, & k \geq 1 \end{cases}$
where $\beta_k$ is a scalar. Here, we use the $\beta_k$ (see [24]) defined by
$\beta_k^{DY} = \frac{\| \nabla \Psi_{\mu_{k-1}}(x_k) \|^2}{d_{k-1}^T y_{k-1}}$
where $y_{k-1} = \nabla \Psi_{\mu_{k-1}}(x_k) - \nabla \Psi_{\mu_{k-2}}(x_{k-1})$. When $k = 1$, we set $\mu_{k-2} = \mu_0$. The line search is an Armijo-type line search (see [25]), where $\alpha_k = \eta^{m_k}$, $0 < \eta < 1$, and $m_k$ is the smallest nonnegative integer satisfying
$\Psi_{\mu_k}(x_k + \alpha_k d_k) \leq \Psi_{\mu_k}(x_k) + \delta \alpha_k (\nabla \Psi_{\mu_{k-1}}(x_k))^T d_k$
$(\nabla \Psi_{\mu_k}(x_k + \alpha_k d_k))^T d_{k+1} \leq -\sigma \| \nabla \Psi_{\mu_k}(x_k + \alpha_k d_k) \|^2, \quad 0 < \sigma \leq 1,\ 0 < \delta < 1$
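The backtracking search for $\alpha_k = \eta^{m_k}$ in Equation (4) can be sketched as follows (a hypothetical helper: `psi` and the directional derivative are supplied by the caller, and the curvature-type condition (5) on $d_{k+1}$ would be enforced separately):

```python
# Backtracking Armijo-type step-size search: alpha_k = eta**m_k for the
# smallest nonnegative integer m_k meeting the sufficient-decrease test.
# psi is the merit function, grad_dot_d = (grad Psi(x))^T d, which must
# be negative for a descent direction d; eta, delta follow the paper.
def armijo_step(psi, grad_dot_d, x, d, eta=0.4, delta=1e-3, max_m=50):
    alpha = 1.0                                  # eta**0
    for _ in range(max_m):
        trial = [xi + alpha * di for xi, di in zip(x, d)]
        if psi(trial) <= psi(x) + delta * alpha * grad_dot_d:
            return alpha
        alpha *= eta
    return alpha

# example: psi(x) = 0.5 * x^2 from x = 1 along d = -1 accepts alpha = 1
print(armijo_step(lambda v: 0.5 * v[0] ** 2, -1.0, [1.0], [-1.0]))
```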
Then, we give the new smoothing conjugate gradient method for solving Equation (1).
Algorithm 1: Smoothing Conjugate Gradient Method
(S.0) Choose $x_0 \in R^n$, $\varepsilon > 0$, $\mu_0 > 0$, $m > 0$, $\sigma, \delta, m_1 \in (0, 1)$, and $d_0 = -\nabla \Psi_{\mu_0}(x_0)$. Set $k = 0$.
(S.1) If $\Psi(x_k) \leq \varepsilon$, then stop; otherwise go to Step 2.
(S.2) Compute $\alpha_k$ by Equations (4) and (5), where $d_{k+1} = -\nabla \Psi_{\mu_k}(x_k + \alpha_k d_k) + \beta_{k+1} d_k$ and $\beta_{k+1}$ is given by Equation (3). Let $x_{k+1} = x_k + \alpha_k d_k$.
(S.3) If $\| \nabla \Psi_{\mu_k}(x_{k+1}) \| \geq m \mu_k$, then set $\mu_{k+1} = \mu_k$; otherwise set $\mu_{k+1} = m_1 \mu_k$.
(S.4) Let $k = k + 1$ and go back to Step 1.
Algorithm 2: Algorithm Framework of Algorithm 1

    PROGRAM ALGORITHM
      INITIALIZE x_0 in R^n, epsilon > 0, mu_0 > 0, m > 0, m_1 in (0, 1);
      Set k = 0 and d_0 = -grad Psi_{mu_0}(x_0).
      WHILE the termination condition Psi(x_k) <= epsilon is not met
        Find step size alpha_k;
        Set x_{k+1} = x_k + alpha_k * d_k;
        Evaluate grad Psi_{mu_k}(x_{k+1}) and d_{k+1};
        IF ||grad Psi_{mu_k}(x_{k+1})|| >= m * mu_k THEN
          Set mu_{k+1} = mu_k;
        ELSE
          Set mu_{k+1} = m_1 * mu_k;
        END IF
        Set k = k + 1;
      END WHILE
      RETURN final solution x_k;
    END ALGORITHM
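A compact, hypothetical Python sketch of the whole scheme on the scalar problem with $F(x) = 2x - 1$ (unique solution $x^* = 0.5$) may clarify how the pieces interact. This is an illustrative toy: the gradient of $\Psi_{\mu}$ is approximated by central differences for brevity, whereas the paper works with the analytic gradient of the smoothed merit function.

```python
import math

# Toy sketch of the smoothing conjugate gradient scheme on the scalar
# problem F(x) = 2x - 1 (unique solution x* = 0.5). The gradient of
# Psi_mu is approximated numerically; this is not the paper's code.
def phi_mu(a, b, mu):
    # smoothing Fischer-Burmeister function (mu = 0 gives phi itself)
    return math.sqrt(a * a + b * b + mu) - a - b

def F(x):
    return 2.0 * x - 1.0

def psi(x, mu):
    h = phi_mu(x, F(x), mu)
    return 0.5 * h * h

def grad(x, mu, h=1e-7):
    # central-difference approximation of d/dx Psi_mu(x)
    return (psi(x + h, mu) - psi(x - h, mu)) / (2.0 * h)

def smoothing_cg(x, mu=0.2, eps=1e-6, m=1.5, m1=0.5, eta=0.4,
                 delta=1e-3, max_iter=500):
    g = grad(x, mu)
    d = -g
    for _ in range(max_iter):
        if psi(x, 0.0) <= eps:              # (S.1) termination on Psi(x_k)
            break
        alpha = 1.0                         # (S.2) Armijo-type backtracking
        for _ in range(60):
            if psi(x + alpha * d, mu) <= psi(x, mu) + delta * alpha * g * d:
                break
            alpha *= eta
        x_new = x + alpha * d
        g_new = grad(x_new, mu)
        denom = d * (g_new - g)             # scalar analogue of d^T y_{k-1}
        beta = (g_new * g_new / denom) if denom != 0.0 else 0.0  # DY formula
        d = -g_new + beta * d
        if abs(g_new) < m * mu:             # (S.3) shrink smoothing parameter
            mu *= m1
        x, g = x_new, g_new
    return x

print(smoothing_cg(3.0))  # converges near the solution 0.5
```

Note how the smoothing parameter is only reduced once the smoothed gradient becomes small relative to $\mu_k$, which mirrors Step (S.3) of Algorithm 1.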
In the following, we analyze the global convergence of Algorithm 1 (Algorithm 2 is the algorithm framework of Algorithm 1). Before doing this, we need the following basic assumptions.
Assumption 1.
(i) For any $0 < \mu \leq \mu_0$ and $c > 0$, the level set $L_{\mu}(c) = \{ x \in R^n \mid \Psi_{\mu}(x) \leq c \}$ is bounded.
(ii) $\nabla \Psi_{\mu}$ is Lipschitz continuous, that is, there exists a constant $L > 0$ such that
$\| \nabla \Psi_{\mu_k}(x_{k+1}) - \nabla \Psi_{\mu_{k-1}}(x_k) \| \leq L \| x_{k+1} - x_k \|, \quad \forall x_{k+1}, x_k \in L_{\mu}(c)$
Lemma 1.
Suppose that $\{d_k\}$ is an infinite sequence of directions generated by Algorithm 1. Then
$-(\nabla \Psi_{\mu_{k-1}}(x_k))^T d_k \geq \bar{c} \| \nabla \Psi_{\mu_{k-1}}(x_k) \|^2, \quad \forall k \geq 0,\ \bar{c} < \sigma$
Proof
If $k = 0$, then by Equation (2) and $\bar{c} < 1$, Equation (6) holds. If $k > 0$, then by Equation (5) and $\bar{c} < \sigma$, Equation (6) holds.
Lemma 2.
Suppose that Assumption 1 holds. Then, there exists $\alpha_k > 0$ for every k, and
$\alpha_k \geq \omega \frac{| (\nabla \Psi_{\mu_{k-1}}(x_k))^T d_k |}{\| d_k \|^2}$
where ω is a positive constant.
Proof
From Step 0 of Algorithm 1, we know that $d_0 = -\nabla \Psi_{\mu_0}(x_0)$, i.e., $d_0$ is a descent direction. Suppose that $d_k$ satisfies
$(\nabla \Psi_{\mu_{k-1}}(x_k))^T d_k \leq -\sigma \| \nabla \Psi_{\mu_{k-1}}(x_k) \|^2 \leq 0$
For any $\tilde{\alpha}_k > 0$, we denote
$\tilde{x}_{k+1} = x_k + \tilde{\alpha}_k d_k$
$\tilde{d}_{k+1} = -\nabla \Psi_{\mu_k}(\tilde{x}_{k+1}) + \tilde{\beta}_{k+1} d_k$
We have
$(\nabla \Psi_{\mu_k}(\tilde{x}_{k+1}))^T \tilde{d}_{k+1} = -\| \nabla \Psi_{\mu_k}(\tilde{x}_{k+1}) \|^2 + \tilde{\beta}_{k+1} [ (\nabla \Psi_{\mu_{k-1}}(x_k))^T d_k + (\nabla \Psi_{\mu_k}(\tilde{x}_{k+1}) - \nabla \Psi_{\mu_{k-1}}(x_k))^T d_k ]$
We know that $\beta_k$ in Equation (3) is equivalent to (see [24])
$\beta_k = \frac{(\nabla \Psi_{\mu_{k-1}}(x_k))^T d_k}{(\nabla \Psi_{\mu_{k-2}}(x_{k-1}))^T d_{k-1}} > 0$
By Assumption 1, Equations (10) and (11) yield
$(\nabla \Psi_{\mu_k}(\tilde{x}_{k+1}))^T \tilde{d}_{k+1} \leq -\sigma \| \nabla \Psi_{\mu_k}(\tilde{x}_{k+1}) \|^2, \quad \tilde{\alpha}_k \in \left( 0, \frac{| (\nabla \Psi_{\mu_{k-1}}(x_k))^T d_k |}{L \| d_k \|^2} \right)$
By the Mean Value Theorem, we have
$\Psi_{\mu_k}(\tilde{x}_{k+1}) - \Psi_{\mu_k}(x_k) = \int_0^1 \tilde{\alpha}_k (\nabla \Psi_{\mu_k}(x_k + t \tilde{\alpha}_k d_k))^T d_k \, dt = \tilde{\alpha}_k (\nabla \Psi_{\mu_{k-1}}(x_k))^T d_k + \int_0^1 \tilde{\alpha}_k [ \nabla \Psi_{\mu_k}(x_k + t \tilde{\alpha}_k d_k) - \nabla \Psi_{\mu_{k-1}}(x_k) ]^T d_k \, dt \leq \tilde{\alpha}_k (\nabla \Psi_{\mu_{k-1}}(x_k))^T d_k + \int_0^1 L \tilde{\alpha}_k^2 \| d_k \|^2 t \, dt \leq \tilde{\alpha}_k (\nabla \Psi_{\mu_{k-1}}(x_k))^T d_k + \frac{1}{2} L \tilde{\alpha}_k^2 \| d_k \|^2$
Then, we obtain
$\Psi_{\mu_k}(\tilde{x}_{k+1}) - \Psi_{\mu_k}(x_k) \leq \delta \tilde{\alpha}_k (\nabla \Psi_{\mu_{k-1}}(x_k))^T d_k, \quad \forall \tilde{\alpha}_k \in \left( 0, \frac{2(1-\delta)}{L} \frac{| (\nabla \Psi_{\mu_{k-1}}(x_k))^T d_k |}{\| d_k \|^2} \right)$
By Equations (12) and (13), we know that Equations (4) and (5) determine a positive stepsize $\alpha_k$, and there must exist a constant $\xi \in (0, 1)$ such that
$\xi \cdot \frac{| (\nabla \Psi_{\mu_{k-1}}(x_k))^T d_k |}{L \| d_k \|^2} < 1$
Denote $\omega = \min \{ \frac{\xi}{L}, \frac{2 \xi (1-\delta)}{L} \}$; then Equation (7) holds, and Equation (5) implies that Equation (8) holds for $k + 1$. Hence, the proof is completed.
Theorem 1.
Suppose that for any fixed $\mu > 0$, $\Psi_{\mu}$ satisfies Assumption 1. Then the infinite sequence $\{x_k\}$ generated by Algorithm 1 satisfies
$\lim_{k \to \infty} \mu_k = 0, \quad \liminf_{k \to \infty} \| \nabla \Psi_{\mu_{k-1}}(x_k) \| = 0$
Proof
Denote $K = \{ k \mid \mu_{k+1} = m_1 \mu_k \}$. We first show that K is an infinite set.
If K is a finite set, there exists an integer $\bar{k}$ such that
$\| \nabla \Psi_{\mu_{k-1}}(x_k) \| \geq m \mu_{k-1}$
for all $k > \bar{k}$. We also have $\mu_k = \mu_{\bar{k}} =: \bar{\mu}$ for all $k > \bar{k}$ and
$\liminf_{k \to \infty} \| \nabla \Psi_{\bar{\mu}}(x_k) \| > 0$
In the following, we will prove that
$\liminf_{k \to \infty} \| \nabla \Psi_{\bar{\mu}}(x_k) \| = 0$
By Lemma 1 and Assumption 1, we know that $\{ \Psi_{\bar{\mu}}(x_k) \}$ is a monotonically decreasing sequence and its limit exists. Summing Equation (7), we get
$\sum_{k \geq \bar{k}+1} \frac{( (\nabla \Psi_{\bar{\mu}}(x_k))^T d_k )^2}{\| d_k \|^2} < \infty$
Due to Equation (2), we also have
$d_k + \nabla \Psi_{\bar{\mu}}(x_k) = \beta_k d_{k-1}, \quad \forall k \geq \bar{k}+1$
Squaring both sides of Equation (18), we get
$\| d_k \|^2 = \beta_k^2 \| d_{k-1} \|^2 - 2 (\nabla \Psi_{\bar{\mu}}(x_k))^T d_k - \| \nabla \Psi_{\bar{\mu}}(x_k) \|^2$
Dividing both sides of Equation (19) by $( (\nabla \Psi_{\bar{\mu}}(x_k))^T d_k )^2$, we have
$\frac{\| d_k \|^2}{( (\nabla \Psi_{\bar{\mu}}(x_k))^T d_k )^2} = \frac{\beta_k^2 \| d_{k-1} \|^2}{( (\nabla \Psi_{\bar{\mu}}(x_k))^T d_k )^2} - \frac{2 (\nabla \Psi_{\bar{\mu}}(x_k))^T d_k}{( (\nabla \Psi_{\bar{\mu}}(x_k))^T d_k )^2} - \frac{\| \nabla \Psi_{\bar{\mu}}(x_k) \|^2}{( (\nabla \Psi_{\bar{\mu}}(x_k))^T d_k )^2} = \frac{\beta_k^2 \| d_{k-1} \|^2}{( (\nabla \Psi_{\bar{\mu}}(x_k))^T d_k )^2} - \left( \frac{1}{\| \nabla \Psi_{\bar{\mu}}(x_k) \|} + \frac{\| \nabla \Psi_{\bar{\mu}}(x_k) \|}{(\nabla \Psi_{\bar{\mu}}(x_k))^T d_k} \right)^2 + \frac{1}{\| \nabla \Psi_{\bar{\mu}}(x_k) \|^2} \leq \frac{\beta_k^2 \| d_{k-1} \|^2}{( (\nabla \Psi_{\bar{\mu}}(x_k))^T d_k )^2} + \frac{1}{\| \nabla \Psi_{\bar{\mu}}(x_k) \|^2} = \frac{\| d_{k-1} \|^2}{( (\nabla \Psi_{\bar{\mu}}(x_{k-1}))^T d_{k-1} )^2} + \frac{1}{\| \nabla \Psi_{\bar{\mu}}(x_k) \|^2} \leq \frac{\| d_{k-2} \|^2}{( (\nabla \Psi_{\bar{\mu}}(x_{k-2}))^T d_{k-2} )^2} + \frac{1}{\| \nabla \Psi_{\bar{\mu}}(x_{k-1}) \|^2} + \frac{1}{\| \nabla \Psi_{\bar{\mu}}(x_k) \|^2} \leq \cdots \leq \frac{\| d_{\bar{k}+1} \|^2}{( (\nabla \Psi_{\bar{\mu}}(x_{\bar{k}+1}))^T d_{\bar{k}+1} )^2} + \frac{1}{\| \nabla \Psi_{\bar{\mu}}(x_{\bar{k}+2}) \|^2} + \cdots + \frac{1}{\| \nabla \Psi_{\bar{\mu}}(x_k) \|^2}$
Denote
$\frac{\| d_{\bar{k}+1} \|^2}{( (\nabla \Psi_{\bar{\mu}}(x_{\bar{k}+1}))^T d_{\bar{k}+1} )^2} = \lambda > 0$
Then
$\frac{\| d_k \|^2}{( (\nabla \Psi_{\bar{\mu}}(x_k))^T d_k )^2} \leq \lambda + \sum_{i=\bar{k}+2}^{k} \frac{1}{\| \nabla \Psi_{\bar{\mu}}(x_i) \|^2}$
If Equation (16) does not hold, there exists $\gamma > 0$ such that
$\| \nabla \Psi_{\bar{\mu}}(x_k) \| \geq \gamma, \quad \forall k > \bar{k}+1$
We obtain from Equations (20) and (21) that
$\frac{\| d_k \|^2}{( (\nabla \Psi_{\bar{\mu}}(x_k))^T d_k )^2} \leq \frac{\lambda \gamma^2 + k - \bar{k} - 1}{\gamma^2}$
Consequently,
$\frac{( (\nabla \Psi_{\bar{\mu}}(x_k))^T d_k )^2}{\| d_k \|^2} \geq \frac{\gamma^2}{\lambda \gamma^2 + k - \bar{k} - 1}$
which implies
$\sum_{k \geq \bar{k}+1} \frac{( (\nabla \Psi_{\bar{\mu}}(x_k))^T d_k )^2}{\| d_k \|^2} = +\infty$
This contradicts Equation (17), so Equation (16) holds. Since Equations (16) and (15) conflict, K must be an infinite set and
$\lim_{k \to \infty} \mu_k = 0$
Then, we can assume that $K = \{ k_0, k_1, \ldots \}$ with $k_0 < k_1 < \cdots$. Hence, we get
$\lim_{i \to \infty} \| \nabla \Psi_{\mu_{k_i}}(x_{k_i + 1}) \| \leq m \lim_{i \to \infty} \mu_{k_i} = 0$
which completes the proof.

## 3. Numerical Tests

In this section, we test the efficiency of Algorithm 1 by numerical experiments. We use Algorithm 1 to solve eleven examples, some of which are proposed here for the first time, while others are modified from examples in the references (such as [26,27]).
The smoothing function $\tilde{F}_i(x, \mu) = \sqrt{F_i(x)^2 + \mu}$ is used in solving Examples 1–4. From Example 5 to Example 11, the smoothing function of F is defined by (see [26]) $\tilde{F}(x, \mu) = \mu \ln \sum_{i=1}^{m} \exp ( f_i(x) / \mu )$, where $F(x) = \max \{ f_1(x), \cdots, f_m(x) \}$.
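The max-type smoothing used from Example 5 onward can be checked numerically (a minimal sketch; the maximum is subtracted before exponentiating, a standard stabilization not spelled out in the text):

```python
import math

# Log-sum-exp smoothing of F(x) = max{f_1(x), ..., f_m(x)}:
# F_tilde(x, mu) = mu * ln( sum_i exp(f_i(x) / mu) ).
def smooth_max(values, mu):
    top = max(values)   # subtract the max first for numerical stability
    return top + mu * math.log(sum(math.exp((v - top) / mu) for v in values))

vals = [1.0, 3.0, 2.0]
for mu in (1.0, 0.1, 0.001):
    print(mu, smooth_max(vals, mu))   # decreases toward max(vals) = 3.0
```

The smoothed value always overestimates the true maximum by at most $\mu \ln m$, so the approximation error vanishes linearly in $\mu$.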
Throughout the experiments, we set $\sigma = 10^{-2}$, $m = 1.5$, $m_1 = 0.5$. In Examples 1–3 and Examples 5–8, we set $\varepsilon = 10^{-4}$, $\delta = 10^{-3}$, $\eta = 0.4$, $\mu_0 = 0.2$. In Example 4, we set $\varepsilon = 10^{-3}$, $\delta = 10^{-2}$, $\eta = 0.1$, $\mu_0 = 0.02$. In Examples 9–11, we set $\varepsilon = 10^{-2}$, $\delta = 10^{-3}$, $\eta = 0.4$, $\mu_0 = 0.2$. We choose $\Psi(x) \leq \varepsilon$ as the termination criterion. Our numerical results are summarized in Table 1, Table 2, Table 3, Table 4, Table 5, Table 6, Table 7, Table 8, Table 9, Table 10 and Table 11, where all components of $x_0$ are randomly selected from 0 to 10. We randomly generate 10 initial points and then run Algorithm 1 from each initial point. The numerical results of Examples 10–11 show that Algorithm 1 is suitable for solving large-scale problems.
Example 1.
We consider Equation (1), where F is defined by $F ( x ) = | 2 x - 1 |$. The exact solutions of this problem are 0 and $0.5$.
Table 1. Number of iterations and the final value of $Ψ ( x * ) .$
| $x_0$ | $x^*$ | $\Psi(x^*)$ | $k$ |
|---|---|---|---|
| 0.9713 | 5.053658e−1 | 5.636783e−5 | 1 |
| 1.7119 | 4.977618e−1 | 9.929266e−6 | 11 |
| 2.7850 | −1.295343e−2 | 8.495830e−5 | 8 |
| 3.1710 | 5.422178e−3 | 1.461954e−5 | 8 |
| 4.0014 | 5.562478e−3 | 1.538368e−5 | 8 |
| 5.4688 | −7.521520e−3 | 2.849662e−5 | 7 |
| 6.5574 | 5.926470e−3 | 1.745635e−5 | 10 |
| 7.9221 | 1.276205e−2 | 8.037197e−5 | 7 |
| 8.4913 | −1.994188e−3 | 1.992344e−6 | 7 |
| 9.3399 | 1.723553e−3 | 1.482749e−6 | 7 |
Example 2.
We consider Equation (1), where $F(x) = \begin{pmatrix} |2x_1 - 1| \\ |4x_2 + x_1 - \frac{1}{2}| \end{pmatrix}$. There are three exact solutions: $(\frac{1}{2}, 0)^T$, $(0, \frac{1}{8})^T$ and $(0, 0)^T$.
Table 2. Number of iterations and the final value of $Ψ ( x * ) .$
| $x_0$ | $x^*$ | $\Psi(x^*)$ | $k$ |
|---|---|---|---|
| $(4.6939, 0.1190)^T$ | $(-0.0058, 0.0031)^T$ | 2.162427e−5 | 7 |
| $(5.2853, 1.6565)^T$ | $(0.0077, 0.1233)^T$ | 2.974655e−5 | 13 |
| $(9.9613, 0.7818)^T$ | $(0.0030, 0.1248)^T$ | 7.106107e−6 | 5 |
| $(4.9836, 9.5974)^T$ | $(0.4979, -0.0097)^T$ | 6.744680e−5 | 12 |
| $(1.4495, 8.5303)^T$ | $(0.4978, 0.0080)^T$ | 3.327241e−5 | 13 |
| $(0.4965, 9.0272)^T$ | $(0.0045, 0.1221)^T$ | 3.348105e−5 | 15 |
| $(9.1065, 1.8185)^T$ | $(-0.0045, 0.1296)^T$ | 9.857281e−5 | 6 |
| $(4.0391, 0.9645)^T$ | $(0.0087, 0.1247)^T$ | 6.417282e−5 | 10 |
| $(7.7571, 4.8679)^T$ | $(0.0045, -0.0043)^T$ | 1.946526e−5 | 13 |
| $(7.0605, 0.3183)^T$ | $(0.0086, 0.122)^T$ | 4.037476e−5 | 8 |
Example 3.
We consider Equation (1), where $F(x) = \begin{pmatrix} |5x_1 + x_2 - x_3| \\ x_1^2 + 4x_2 - x_3 - 2 \\ 5x_2^2 - 6x_1 - 2x_3 \end{pmatrix}$. $(0, \frac{1}{2}, 0)^T$ is one of the exact solutions of this problem.
Table 3. Number of iterations and the final value of $Ψ ( x * ) .$
| $x_0$ | $x^*$ | $\Psi(x^*)$ | $k$ |
|---|---|---|---|
| $(1.9175, 7.3843, 2.4285)^T$ | $(0.0087, 0.5021, -0.0026)^T$ | 9.785244e−5 | 21 |
| $(1.1921, 9.3983, 6.4555)^T$ | $(0.0047, 0.5019, 0.0102)^T$ | 6.577107e−5 | 25 |
| $(1.8687, 4.8976, 4.4559)^T$ | $(-0.0055, 0.4974, -0.0109)^T$ | 7.552798e−5 | 19 |
| $(2.7029, 2.0846, 5.6498)^T$ | $(0.0099, 0.4998, 0.0028)^T$ | 5.759549e−5 | 26 |
| $(7.2866, 7.3784, 0.6340)^T$ | $(0.0099, 0.5003, -0.0063)^T$ | 9.612693e−5 | 36 |
| $(1.2991, 5.6882, 4.6939)^T$ | $(-0.0115, 0.5009, 0.0065)^T$ | 9.216436e−5 | 31 |
| $(5.3834, 9.9613, 0.7818)^T$ | $(0.0022, 0.4952, -0.0083)^T$ | 9.798255e−5 | 26 |
| $(9.5613, 5.7521, 0.5978)^T$ | $(0.0089, 0.5029, 0.0074)^T$ | 7.570081e−5 | 28 |
| $(7.7571, 4.8679, 4.3586)^T$ | $(0.0048, 0.5025, -0.0008)^T$ | 6.774521e−5 | 24 |
| $(3.8827, 5.5178, 2.2895)^T$ | $(-0.0047, 0.4981, -0.0111)^T$ | 7.903206e−5 | 25 |
Example 4.
We consider Equation (1), where $F(x) = \begin{pmatrix} |2x_1 - x_2 + 3x_3 + 2x_4 - 6| \\ 3x_1 - 3x_2 + 3x_3 + 2x_4 - 5 \\ 3x_1 - x_2 - x_3 + 2x_4 - 3 \\ 3x_1 - x_2 + 3x_3 - x_4 - 4 \end{pmatrix}$. $(\frac{31}{13}, \frac{22}{13}, 0, \frac{19}{13})^T$, $(\frac{7}{4}, 0, 0, \frac{5}{4})^T$, $(0, 0, \frac{11}{5}, \frac{13}{5})^T$ and $(3, 0, 0, 0)^T$ are four of the exact solutions of this problem.
Table 4. Number of iterations and the final value of $Ψ ( x * ) .$
| $x_0$ | $x^*$ | $\Psi(x^*)$ | $k$ |
|---|---|---|---|
| $(5.6743, 9.6878, 8.2450, 9.5961)^T$ | $(0.0062, -0.0249, 2.1692, 2.5652)^T$ | 4.436641e−4 | 21 |
| $(0.1485, 1.5669, 4.7157, 5.4299)^T$ | $(0.0009, 0.0320, 2.2173, 2.6291)^T$ | 5.987181e−4 | 37 |
| $(0.5969, 6.5803, 8.8964, 1.0963)^T$ | $(3.0522, 0.0031, -0.0202, -0.0151)^T$ | 3.790585e−4 | 23 |
| $(8.7494, 1.2100, 8.5635, 8.9978)^T$ | $(0.0296, -0.0290, 2.1300, 2.5000)^T$ | 9.638295e−4 | 17 |
| $(7.7836, 0.6937, 2.7878, 3.7937)^T$ | $(3.0003, 0.0096, 0.0154, -0.0158)^T$ | 3.052065e−4 | 13 |
| $(0.6837, 0.8497, 0.6834, 4.0982)^T$ | $(3.0055, 0.0333, -0.0050, 0.0131)^T$ | 7.098848e−4 | 21 |
| $(7.6034, 5.8410, 4.0295, 5.1004)^T$ | $(2.4048, 1.7076, -0.0096, 1.4753)^T$ | 4.242339e−4 | 25 |
| $(9.8754, 9.2271, 5.6426, 4.3146)^T$ | $(3.0166, 0.0148, -0.0271, 0.0280)^T$ | 8.913504e−4 | 20 |
| $(8.5061, 1.4453, 3.7049, 6.2239)^T$ | $(2.9635, 0.0040, 0.0176, 0.0220)^T$ | 5.981987e−4 | 26 |
| $(2.7744, 0.0611, 3.7471, 4.3693)^T$ | $(0.0132, -0.0399, 2.1529, 2.5310)^T$ | 9.794091e−4 | 21 |
Example 5.
We consider Equation (1), where $F(x) = \max \{ x - 2,\ 2x - 5 \}$. The exact solution of this problem is 2, since $F(2) = 0$ while $F(0) = -2 < 0$ rules out $x = 0$.
Table 5. Number of iterations and the final value of $Ψ ( x * ) .$
| $x_0$ | $x^*$ | $\Psi(x^*)$ | $k$ |
|---|---|---|---|
| 0.2922 | 2.0024 | 2.787626e−6 | 5 |
| 1.7071 | 1.9894 | 5.621467e−5 | 3 |
| 2.2766 | 2.0075 | 2.836408e−5 | 3 |
| 3.1110 | 2.0001 | 2.938429e−9 | 1 |
| 4.3570 | 2.0101 | 5.061011e−5 | 4 |
| 5.7853 | 2.0109 | 5.937701e−5 | 5 |
| 6.2406 | 1.9871 | 8.325445e−5 | 6 |
| 7.1122 | 2.0116 | 6.635145e−5 | 3 |
| 8.8517 | 1.9970 | 4.557770e−6 | 6 |
| 9.7975 | 1.9928 | 2.607803e−5 | 4 |
Example 6.
We consider Equation (1), where $F(x) = \begin{pmatrix} f_1(x) \\ f_2(x) \\ f_3(x) \\ f_4(x) \end{pmatrix}$ with $f_i(x) = \max \{ x_1^2, x_2^2, x_3^2, x_4^2 \}$, $i = 1, 2, 3, 4$. The exact solution of this problem is $(0, 0, 0, 0)^T$.
Table 6. Number of iterations and the final value of $Ψ ( x * ) .$
| $x_0$ | $x^*$ | $\Psi(x^*)$ | $k$ |
|---|---|---|---|
| $(7.4003, 2.3483, 7.3496, 9.7060)^T$ | $(0.0825, 0.0842, 0.0825, 0.0830)^T$ | 9.228501e−5 | 29 |
| $(1.3393, 0.3089, 9.3914, 3.0131)^T$ | $(0.0858, 0.0629, 0.0484, 0.0414)^T$ | 9.446089e−5 | 22 |
| $(7.3434, 0.5133, 0.7289, 0.8853)^T$ | $(0.0852, 0.0817, 0.0801, 0.0607)^T$ | 9.546345e−5 | 36 |
| $(6.7865, 4.9518, 1.8971, 4.9501)^T$ | $(0.0431, 0.0800, 0.0733, 0.0801)^T$ | 7.428734e−5 | 34 |
| $(1.4761, 0.5497, 8.5071, 5.6056)^T$ | $(0.0774, 0.0691, 0.0717, 0.0655)^T$ | 6.591548e−5 | 39 |
| $(0.5670, 5.2189, 3.3585, 1.7567)^T$ | $(0.0244, 0.0421, 0.0575, 0.0604)^T$ | 2.433055e−5 | 25 |
| $(7.6903, 5.8145, 9.2831, 5.8009)^T$ | $(0.0477, 0.0739, 0.0745, 0.0766)^T$ | 6.280631e−5 | 32 |
| $(6.9475, 7.5810, 4.3264, 6.5550)^T$ | $(0.0846, 0.0819, 0.0566, 0.0850)^T$ | 9.476538e−5 | 21 |
| $(2.8785, 4.1452, 4.6484, 7.6396)^T$ | $(0.0637, 0.0802, 0.0742, 0.0012)^T$ | 5.739573e−5 | 33 |
| $(2.9735, 0.6205, 2.9824, 0.4635)^T$ | $(0.0854, 0.0727, 0.0831, 0.0542)^T$ | 9.575717e−5 | 22 |
Example 7.
We consider Equation (1), where $F(x) = \begin{pmatrix} f_1(x) \\ \vdots \\ f_{10}(x) \end{pmatrix}$ with $f_i(x) = \max \{ x_1^2, \cdots, x_{10}^2 \}$, $i = 1, \cdots, 10$. The exact solution of this problem is $(0, 0, 0, 0, 0, 0, 0, 0, 0, 0)^T$.
Table 7. Number of iterations and the final value of $Ψ ( x * ) .$
| $x_0$ | $\Psi(x^*)$ | $k$ |
|---|---|---|
| $(8.2408, 8.2798, 2.9337, 3.0937, 5.2303, 3.2530, 8.3184, 8.1029, 5.5700, 2.6296)^T$ | 9.719070e−5 | 27 |
| $(9.5089, 4.4396, 0.6002, 8.6675, 6.3119, 3.5507, 9.9700, 2.2417, 6.5245, 6.0499)^T$ | 9.957464e−5 | 45 |
| $(4.1705, 9.7179, 9.8797, 8.6415, 3.8888, 4.5474, 2.4669, 7.8442, 8.8284, 9.1371)^T$ | 8.965459e−5 | 39 |
| $(8.3975, 3.7172, 8.2822, 1.7652, 1.2952, 8.7988, 0.4408, 6.8672, 7.3377, 4.3717)^T$ | 9.644608e−5 | 47 |
| $(9.7209, 0.3146, 8.3540, 8.3571, 0.4986, 5.4589, 9.4317, 3.2147, 8.0647, 6.0140)^T$ | 9.737485e−5 | 37 |
| $(8.3336, 4.0363, 3.9018, 3.6045, 1.4026, 2.6013, 0.8682, 4.2940, 2.5728, 2.9756)^T$ | 9.240212e−5 | 47 |
| $(4.8267, 3.7601, 5.2378, 2.6487, 0.6836, 4.3633, 1.7385, 0.2611, 9.5468, 4.3060)^T$ | 8.801643e−5 | 44 |
| $(0.5398, 0.2062, 6.8148, 5.9863, 1.1403, 7.9625, 6.1785, 0.7021, 0.6928, 1.3601)^T$ | 8.946151e−5 | 44 |
| $(5.7099, 1.6977, 1.4766, 4.7608, 9.0810, 5.5218, 0.3294, 0.5386, 8.0506, 4.5137)^T$ | 8.815143e−5 | 45 |
| $(2.1647, 7.8620, 7.2309, 2.7884, 5.8243, 4.2101, 0.9207, 0.2403, 4.9115, 2.7827)^T$ | 6.697806e−5 | 39 |
Example 8.
We consider Equation (1), where $F(x) = \begin{pmatrix} f_1(x) \\ \vdots \\ f_4(x) \end{pmatrix}$ with $f_i(x) = \sum_{j=1}^{4} \max \{ -x_j - x_{j+1},\ -x_j - x_{j+1} + (x_j^2 + x_{j+1}^2 + 1) \}$, $i = 1, 2, 3, 4$. The exact solution of this problem is $(0, 0, 0, 0)^T$.
Table 8. Number of iterations and the final value of $Ψ ( x * ) .$
| $x_0$ | $\Psi(x^*)$ | $k$ |
|---|---|---|
| $(4.1131, 8.2898, 9.3511, 3.9907)^T$ | 3.670149e−5 | 4 |
| $(0.5221, 5.7119, 7.4767, 3.2024)^T$ | 4.216994e−5 | 4 |
| $(5.4000, 2.2106, 0.9595, 0.6017)^T$ | 6.167554e−5 | 7 |
| $(6.6015, 0.5231, 5.5683, 7.1203)^T$ | 3.838925e−5 | 4 |
| $(1.6924, 2.5845, 1.9791, 6.0569)^T$ | 6.272257e−5 | 6 |
| $(3.3969, 1.9786, 5.0683, 9.5076)^T$ | 7.097729e−5 | 5 |
| $(4.2175, 4.1131, 9.5914, 7.5025)^T$ | 2.693701e−5 | 4 |
| $(8.8728, 0.5585, 1.3822, 8.6306)^T$ | 9.021922e−5 | 7 |
| $(9.8100, 2.3352, 0.9623, 3.8458)^T$ | 4.687797e−5 | 5 |
| $(9.6426, 6.7115, 2.9917, 5.3113)^T$ | 8.657057e−5 | 6 |
Example 9.
We consider Equation (1), where $F(x) = \begin{pmatrix} f_1(x) \\ \vdots \\ f_4(x) \end{pmatrix}$ with $f_i(x) = \sum_{j=1}^{4} \max \{ -x_j - x_{j+1},\ -x_j - x_{j+1} + (x_j^2 + x_{j+1}^2 - 1) \}$, $i = 1, 2, 3, 4$. The exact solution of this problem is $(0, 0, 0, 0)^T$.
Table 9. Number of iterations and the final value of $Ψ ( x * ) .$
| $x_0$ | $\Psi(x^*)$ | $k$ |
|---|---|---|
| $(1.5290, 1.5254, 1.5555, 0.8957)^T$ | 9.777886e−3 | 8 |
| $(4.5442, 6.6890, 8.3130, 7.9024)^T$ | 3.912481e−3 | 5 |
| $(9.0150, 3.1834, 5.9708, 2.9780)^T$ | 9.081688e−3 | 3 |
| $(3.1781, 9.8445, 5.4825, 7.4925)^T$ | 6.868711e−3 | 7 |
| $(8.4185, 1.6689, 9.0310, 1.0512)^T$ | 5.318627e−3 | 4 |
| $(7.4509, 7.2937, 7.1747, 1.3343)^T$ | 7.203761e−3 | 9 |
| $(4.4579, 5.0879, 5.3049, 8.5972)^T$ | 9.500345e−3 | 4 |
| $(6.7772, 8.0584, 5.3124, 9.5590)^T$ | 9.421194e−3 | 4 |
| $(0.6668, 5.4152, 2.8166, 4.8090)^T$ | 6.718722e−3 | 7 |
| $(6.8486, 2.0826, 6.0816, 3.2618)^T$ | 3.494877e−3 | 4 |
Example 10.
We consider Equation (1), where $F(x) = \begin{pmatrix} f_1(x) \\ \vdots \\ f_n(x) \end{pmatrix}$ with $f_i(x) = \max \{ x_1^2 - 6x_1, \cdots, x_n^2 - 6x_n \}$, $i = 1, \cdots, n$, where n denotes the problem dimension. The solutions are of the form $x^* = (\lambda, \cdots, \lambda)^T$ with λ no more than 6. In this problem, we check the efficiency of Algorithm 1 with problem dimensions 50, 100 and 200. We randomly select ten initial points for each of $n = 50$, $n = 100$ and $n = 200$.
Table 10. Number of iterations, the final value of $Ψ ( x * )$ and dimension of the test problem.
| $n = 50$: $\Psi(x^*)$ | $k$ | $n = 100$: $\Psi(x^*)$ | $k$ | $n = 200$: $\Psi(x^*)$ | $k$ |
|---|---|---|---|---|---|
| 1.625691e−3 | 9 | 9.444914e−3 | 11 | 9.897292e−3 | 15 |
| 4.082584e−3 | 7 | 5.358975e−5 | 9 | 3.937758e−4 | 5 |
| 6.082289e−3 | 7 | 4.734809e−3 | 9 | 5.800944e−3 | 16 |
| 2.042082e−3 | 9 | 3.249863e−3 | 6 | 3.289200e−3 | 11 |
| 3.765484e−3 | 9 | 6.587880e−3 | 10 | 4.674659e−3 | 10 |
| 7.553578e−3 | 13 | 2.632872e−3 | 10 | 1.450852e−3 | 13 |
| 4.208302e−4 | 14 | 4.177174e−3 | 3 | 9.461359e−3 | 16 |
| 4.250316e−3 | 9 | 9.744427e−3 | 7 | 3.778464e−3 | 15 |
| 2.634965e−5 | 10 | 5.854241e−6 | 10 | 1.501579e−3 | 8 |
| 3.445498e−3 | 11 | 4.209193e−3 | 6 | 1.984871e−3 | 25 |
Example 11.
We consider Equation (1), where $F(x) = \begin{pmatrix} f_1(x) \\ \vdots \\ f_n(x) \end{pmatrix}$ with $f_i(x) = \max \{ x_1^2, \cdots, x_n^2 \}$, $i = 1, \cdots, n$. The problem has the unique solution $x^* = (0, \cdots, 0)^T$. We randomly select ten initial points for each of $n = 100$, $n = 200$ and $n = 500$.
Table 11. Number of iterations, the final value of $Ψ ( x * )$ and dimension of the test problem.
| $n = 100$: $\Psi(x^*)$ | $k$ | $n = 200$: $\Psi(x^*)$ | $k$ | $n = 500$: $\Psi(x^*)$ | $k$ |
|---|---|---|---|---|---|
| 9.152621e−3 | 17 | 9.040255e−3 | 9 | 7.682471e−3 | 14 |
| 4.383679e−3 | 15 | 6.976857e−3 | 9 | 8.861191e−3 | 15 |
| 5.172738e−3 | 15 | 6.902897e−3 | 10 | 8.892858e−3 | 12 |
| 5.796109e−3 | 12 | 7.686345e−3 | 12 | 9.210427e−3 | 14 |
| 7.613768e−3 | 16 | 8.400876e−3 | 10 | 9.843579e−3 | 10 |
| 5.398565e−3 | 12 | 8.066523e−3 | 10 | 9.717126e−3 | 13 |
| 3.403516e−3 | 15 | 9.097423e−3 | 12 | 8.999900e−3 | 15 |
| 8.701785e−3 | 13 | 7.208014e−3 | 11 | 9.970099e−3 | 12 |
| 8.302172e−3 | 11 | 7.822304e−3 | 13 | 9.391355e−3 | 15 |
| 6.610621e−3 | 13 | 7.278306e−3 | 9 | 9.624919e−3 | 10 |

## 4. Conclusions

In this paper, we have presented a new smoothing conjugate gradient method for nonlinear nonsmooth complementarity problems. The method is based on a smoothing Fischer-Burmeister function and an Armijo-type line search. With careful analysis, we are able to show that our method is globally convergent. Numerical tests illustrate that the method can efficiently solve the given test problems; therefore, the new method is promising. More effective ways of choosing smoothing functions and line search methods for our method remain under investigation.

## Acknowledgments

The authors wish to thank the anonymous referees for their helpful comments and suggestions, which led to great improvement of the paper. This work is also supported by the National Natural Science Foundation of China (Nos. 11101231, 11401331), the Natural Science Foundation of Shandong Province (No. ZR2015AQ013) and Key Issues of Statistical Research of Shandong Province (KT15173).

## Author Contributions

Ajie Chu prepared the manuscript. Yixiao Su assisted in the work. Shouqiang Du was in charge of the overall research of the paper.

## Conflicts of Interest

The authors declare no conflict of interest.

## References

1. Facchinei, F.; Pang, J.S. Finite-Dimensional Variational Inequalities and Complementarity Problems; Springer-Verlag: New York, NY, USA, 2003. [Google Scholar]
2. Luca, T.D.; Facchinei, F.; Kanzow, C. A semismooth equation approach to the solution of nonlinear complementarity problems. Math. Programm. 1996, 75, 407–439. [Google Scholar] [CrossRef]
3. Ferris, M.C.; Pang, J.S. Engineering and economic applications of complementarity problems. SIAM Rev. 1997, 39, 669–713. [Google Scholar] [CrossRef]
4. Zhao, Y.B.; Li, D. A new path-following algorithm for nonlinear complementarity problems. Comp. Optim. Appl. 2005, 34, 183–214. [Google Scholar] [CrossRef]
5. Yu, Q.; Huang, C.C.; Wang, X.J. A combined homotopy interior point method for the linear complementarity problem. Appl. Math. Comp. 2006, 179, 696–701. [Google Scholar] [CrossRef]
6. Wang, Y.; Zhao, J.X. An algorithm for a class of nonlinear complementarity problems with non-Lipschitzian functions. Appl. Numer. Math. 2014, 82, 68–79. [Google Scholar] [CrossRef]
7. Fischer, A.; Jiang, H. Merit functions for complementarity and related problems: A survey. Comp. Optim. Appl. 2000, 17, 159–182. [Google Scholar] [CrossRef]
8. Chen, J.S.; Pan, S.H. A family of NCP functions and a descent method for the nonlinear complementarity problem. Comp. Optim. Appl. 2008, 40, 389–404. [Google Scholar] [CrossRef]
9. Luca, T.D.; Facchinei, F.; Kanzow, C. A theoretical and numerical comparison of some semismooth algorithm for complementarity problems. Comp. Optim. Appl. 2000, 16, 173–205. [Google Scholar] [CrossRef]
10. Wu, C.Y. The Conjugate Gradient Method for Solving Nonlinear Complementarity Problems; Inner Mongolia University: Hohhot, China, 2012. [Google Scholar]
11. Qi, L.; Sun, D. Nonsmooth and smoothing methods for nonlinear complementarity problems and variational inequalities. Encycl. Optim. 2009, 1, 2671–2675. [Google Scholar]
12. Facchinei, F.; Kanzow, C. A nonsmooth inexact Newton method for the solution of large-scale nonlinear complementarity problems. Math. Programm. 1997, 76, 493–512. [Google Scholar] [CrossRef]
13. Yang, Y.F.; Qi, L. Smoothing trust region methods for nonlinear complementarity problems with P0 -functions. Ann. Op. Res. 2005, 133, 99–117. [Google Scholar] [CrossRef]
14. Chen, B.; Xiu, N. A global linear and local quadratic non-interior continuation method for nonlinear complementarity problems based on Chen-Mangasarian smoothing functions. SIAM J. Optim. 1999, 9, 605–623. [Google Scholar] [CrossRef]
15. Chen, B.; Chen, X.; Kanzow, C. A penalized Fischer-Burmeister NCP-function: Theoretical investigation and numerical results. Math. Programm. 2000, 88, 211–216. [Google Scholar] [CrossRef]
16. Kanzow, C.; Kleinmichel, H. A new class of semismooth Newton-type methods for nonlinear complementarity problems. Comp. Optim. Appl. 1998, 11, 227–251. [Google Scholar] [CrossRef]
17. Chen, X.; Qi, L.; Sun, D. Global and superlinear convergence of the smoothing Newton method and its application to general box constrained variational inequalities. Math. Comp. 1998, 67, 519–540. [Google Scholar] [CrossRef]
18. Kanzow, C.; Pieper, H. Jacobian smoothing methods for general nonlinear complementarity problems. SIAM J. Optim. 1999, 9, 342–372. [Google Scholar] [CrossRef]
19. Chen, B.; Harker, P.T. Smoothing approximations to nonlinear complementarity problems. SIAM J. Optim. 1997, 7, 403–420. [Google Scholar] [CrossRef]
20. Wu, C.Y.; Chen, G.Q. A smoothing conjugate gradient algorithm for nonlinear complementarity problems. J. Sys. Sci. Sys. Engin. 2008, 17, 460–472. [Google Scholar] [CrossRef]
21. Clarke, F.H. Optimization and Nonsmooth Analysis; John Wiley and Sons, Inc.: New York, NY, USA, 1983. [Google Scholar]
22. Chen, X.J. Smoothing methods for nonsmooth, nonconvex minimization. Math. Programm. 2012, 134, 71–99. [Google Scholar] [CrossRef]
23. Fischer, A. A special Newton-type optimization method. Optimization 1992, 24, 269–284. [Google Scholar] [CrossRef]
24. Dai, Y.H.; Yuan, Y. An efficient hybrid conjugate gradient method for unconstrained optimization. Ann. Oper. Res. 2001, 103, 33–47. [Google Scholar] [CrossRef]
25. Dai, Y.H. Conjugate gradient methods with Armijo-type line searches. Acta Math. Appl. Sin. 2002, 18, 123–130. [Google Scholar] [CrossRef]
26. Xu, S. Smoothing method for minimax problem. Comp. Optim. Appl. 2001, 20, 267–279. [Google Scholar] [CrossRef]
27. Haarala, M. Large-Scale Nonsmooth Optimization: Variable Metric Bundle Method with Limited Memory. Ph.D. Thesis, University of Jyväskylä, Jyväskylä, Finland, 13 November 2004. [Google Scholar]