
On Nonsmooth Gauss–Newton Algorithms for Solving Nonlinear Complementarity Problems

by
Marek J. Śmietański
Faculty of Mathematics and Computer Science, University of Lodz, Banacha 22, 90-238 Łódź, Poland
Submission received: 25 June 2020 / Revised: 30 July 2020 / Accepted: 31 July 2020 / Published: 4 August 2020

Abstract
In this paper, we propose a new version of the generalized damped Gauss–Newton method for solving nonlinear complementarity problems, based on a transformation into a nonsmooth equation that is equivalent to an unconstrained optimization problem. The B-differential plays the role of the derivative. We present two types of algorithms (usual and inexact) that are superlinearly and globally convergent in the semismooth case. These results can be applied to efficiently find all solutions of nonlinear complementarity problems under some mild assumptions. The results of numerical tests are attached as a complement to the theoretical considerations.

1. Introduction

Let $F : \mathbb{R}^n \to \mathbb{R}^n$ and let $F_i$, $i = 1, \ldots, n$, denote the components of $F$. The nonlinear complementarity problem (NCP) is to find $x \in \mathbb{R}^n$ such that
$$x \ge 0, \quad F(x) \ge 0 \quad \text{and} \quad x^T F(x) = 0. \tag{1}$$
The $i$th component of a vector $x$ is denoted by $x_i$. Solving (1) is equivalent to solving a nonlinear equation $G(x) = 0$, where the operator $G : \mathbb{R}^n \to \mathbb{R}^n$ is defined by
$$G(x) = \begin{pmatrix} \varphi(x_1, F_1(x)) \\ \vdots \\ \varphi(x_n, F_n(x)) \end{pmatrix}$$
with some special function $\varphi$. The function $\varphi$ may have one of the following forms:
$$\varphi_1(a, b) = \min\{a, b\}; \qquad \varphi_2(a, b) = \sqrt{a^2 + b^2} - a - b; \qquad \varphi_3(a, b) = \theta(|a - b|) - \theta(a) - \theta(b),$$
where $\theta : \mathbb{R} \to \mathbb{R}$ is any strictly increasing function with $\theta(0) = 0$; see [1].
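To make the reformulation concrete, the following is a minimal Python/NumPy sketch (illustrative names, not code from the paper) of the three NCP functions and of the operator $G$ built componentwise from one of them:

```python
import numpy as np

def phi1(a, b):
    # phi_1(a, b) = min{a, b}
    return np.minimum(a, b)

def phi2(a, b):
    # phi_2(a, b) = sqrt(a^2 + b^2) - a - b (the Fischer-Burmeister function)
    return np.sqrt(a**2 + b**2) - a - b

def phi3(a, b, theta=lambda t: t * np.abs(t)):
    # phi_3(a, b) = theta(|a - b|) - theta(a) - theta(b); the default
    # theta(xi) = xi*|xi| is the choice made later in Section 2.2
    return theta(np.abs(a - b)) - theta(a) - theta(b)

def G_op(x, F, phi=phi3):
    # G(x) has components phi(x_i, F_i(x)); any root of G solves the NCP
    return phi(x, F(x))
```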
The (NCP) problem is one of the fundamental problems of mathematical programming, operations research, economic equilibrium modeling, and the engineering sciences. Many interesting and important applications can be found in the papers of Harker and Pang [2] and Ferris and Pang [3]. The most essential applications arise in:
  • engineering—optimal control problems, contact or structural mechanics problems, structural design problems, or traffic equilibrium problems,
  • equilibrium modeling—general equilibrium (in production or consumption), invariant capital stock, or game-theoretic models.
We borrow a technique used in solving some smooth problems. If $g$ is a merit function of $G$, i.e., $g(x) = \frac{1}{2} G(x)^T G(x)$, then any stationary point of $g$ is a least-squares solution of the equation $G(x) = 0$. Algorithms for minimization are then equivalent to algorithms for solving equations. The usual Gauss–Newton method (also known as the differential corrections method), presented by Ortega and Rheinboldt [4] in the smooth case, has the form
$$x^{(k+1)} = x^{(k)} - \left[ G'(x^{(k)})^T G'(x^{(k)}) \right]^{-1} G'(x^{(k)})^T G(x^{(k)}). \tag{2}$$
Local convergence properties of the Gauss–Newton method were discussed by Chen and Li [5], but only for the smooth case. The Levenberg–Marquardt method, a modified Gauss–Newton method, has also been considered in some papers, e.g., [6] or [7]. Moreover, a comparison of semismooth algorithms for solving (NCP) problems has been made in [8].
In practice, we may also consider the damped Gauss–Newton method
$$x^{(k+1)} = x^{(k)} - \omega_k \left[ G'(x^{(k)})^T G'(x^{(k)}) + \lambda_k I \right]^{-1} G'(x^{(k)})^T G(x^{(k)}) \tag{3}$$
with parameters $\omega_k$ and $\lambda_k$. The parameter $\omega_k$ may be chosen to ensure a suitable decrease of $g$. If $\lambda_k$ is positive for all $k$, then the inverse matrix in (3) always exists because $G'(x^{(k)})^T G'(x^{(k)})$ is symmetric and positive semidefinite. Method (3) has an important advantage: the search direction always exists, even if $G'(x)$ is singular. Naturally, in the case of nonsmooth equations, some additional assumptions are needed to allow the use of line search strategies and to ensure global convergence. Because the function $G$ may be nondifferentiable, the equation $G(x) = 0$ may be nonsmooth, whereby method (3) may be useless. A version of the Gauss–Newton method for solving complementarity problems was also introduced by Xiu and Zhang [9] for generalized problems, but only for linear ones. Thus, for solving nonsmooth and nonlinear problems, we propose two new versions of a damped Gauss–Newton algorithm based on the B-differential; a sketch of the smooth damped step appears below. The usual generalized method is a relevant extension of the work by Subramanian and Xiu [10] to the nonsmooth case. In turn, the inexact version is related to the traditional approach, which was widely studied, e.g., in [11]. In recent years, various versions of the Gauss–Newton method have been discussed, most frequently for solving nonlinear least-squares problems, e.g., in [12,13].
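As an illustration of iteration (3) in the smooth case, a single damped step can be written as follows (a minimal sketch continuing the snippet above, assuming the Jacobian `Jx` of $G$ at `x` is available):

```python
def damped_gn_step(x, Gx, Jx, omega, lam):
    # One step of (3): for lam > 0 the matrix Jx^T Jx + lam*I is symmetric
    # positive definite, so the linear system is always solvable.
    A = Jx.T @ Jx + lam * np.eye(x.size)
    d = np.linalg.solve(A, Jx.T @ Gx)
    return x - omega * d
```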
The paper is organized as follows: in the next section, we review some necessary notions, such as the B-differential, BD-regularity, and semismoothness (Section 2.1). Next, we propose a new optimization-based method for the NCP, transforming the NCP into an unconstrained minimization problem by employing the function $\varphi_3$ (Section 2.2). We state its global convergence and superlinear convergence rate under appropriate conditions. In Section 3, we present the results of numerical tests.

2. Materials and Methods

2.1. Preliminaries

If $F$ is Lipschitz continuous, Rademacher's theorem [14] implies that $F$ is differentiable almost everywhere. Let the set of points where $F$ is differentiable be denoted by $D_F$. Then, the B-differential (the Bouligand differential) of $F$ at $x$ (introduced in [15]) is
$$\partial_B F(x) = \left\{ \lim_{x^{(n)} \to x} F'(x^{(n)}) : x^{(n)} \in D_F \right\},$$
where $F'(x)$ denotes the usual Jacobian of $F$ at $x$. The generalized Jacobian of $F$ at $x$ in the sense of Clarke [14] is
$$\partial F(x) = \operatorname{conv} \partial_B F(x).$$
We say that $F$ is BD-regular at $x$ if $F$ is locally Lipschitz at $x$ and all $V \in \partial_B F(x)$ are nonsingular (regularity in the sense of the B-differential). Qi proved (Lemma 2.6, [15]) that if $F$ is BD-regular at $x$, then there exist a neighborhood $N$ of $x$ and a constant $C > 0$ such that, for any $y \in N$ and $V \in \partial_B F(y)$, $V$ is nonsingular and
$$\|V^{-1}\| \le C.$$
Throughout this paper, $\|\cdot\|$ denotes the 2-norm.
The notion of semismoothness was originally introduced for functionals by Mifflin [16]. The following definition is taken from Qi and Sun [17]. A function $F$ is semismooth at a point $x$ if $F$ is locally Lipschitzian at $x$ and
$$\lim_{\substack{V \in \partial F(x + t h'),\ h' \to h,\ t \downarrow 0}} V h'$$
exists for any $h \in \mathbb{R}^n$. Equivalently, $F$ is semismooth at $x$ if it is directionally differentiable at $x$ and, for any $V \in \partial F(x + h)$ as $h \to 0$,
$$V h - F'(x; h) = o(\|h\|).$$
Scalar products and sums of semismooth functions are again semismooth. Piecewise smooth functions and maxima of a finite number of smooth functions are also semismooth. Semismoothness is the most frequently encountered assumption on $F$ in papers dealing with nonsmooth equations because it implies several properties that are important for the convergence analysis of methods in nonsmooth optimization.
If, for any $V \in \partial F(x + h)$, as $h \to 0$,
$$V h - F'(x; h) = O(\|h\|^{1+p}),$$
where $0 < p \le 1$, then we say $F$ is $p$-order semismooth at $x$. Clearly, $p$-order semismoothness implies semismoothness. If $p = 1$, then the function $F$ is called strongly semismooth. Piecewise $C^2$ functions are examples of strongly semismooth functions.
Qi and Sun [17] remarked that, if $F$ is semismooth at $x$, then, for any $h \to 0$,
$$F(x + h) - F(x) - F'(x; h) = o(\|h\|),$$
and, if $F$ is $p$-order semismooth at $x$, then, for any $h \to 0$,
$$F(x + h) - F(x) - F'(x; h) = O(\|h\|^{1+p}).$$
Remark 1.
Strong semismoothness of the appropriate function usually yields quadratic convergence of a method, instead of the superlinear convergence obtained for a semismooth function.
In turn, Pang and Qi [18] proved that semismoothness of $F$ at $x$ implies
$$\sup_{V \in \partial F(x + h)} \|F(x + h) - F(x) - V h\| = o(\|h\|).$$
Moreover, if $F$ is $p$-order semismooth at $x$, then
$$\sup_{V \in \partial F(x + h)} \|F(x + h) - F(x) - V h\| = O(\|h\|^{1+p}).$$

2.2. The Algorithm and Its Convergence

Consider the nonlinear equation $G(x) = 0$ defined by $\varphi_3$. The equivalence between solving this equation and solving problem (NCP) is described by the following theorem:
Theorem 1
(Mangasarian [1]). Let $\theta$ be any strictly increasing function from $\mathbb{R}$ into $\mathbb{R}$, that is, $a > b \Rightarrow \theta(a) > \theta(b)$, and let $\theta(0) = 0$. Then, $x$ solves the complementarity problem (1) if and only if
$$\theta(|F_i(x) - x_i|) - \theta(F_i(x)) - \theta(x_i) = 0, \quad i = 1, 2, \ldots, n.$$
For convenience, denote
$$G_i(x) := \theta(|F_i(x) - x_i|) - \theta(F_i(x)) - \theta(x_i)$$
for $i = 1, 2, \ldots, n$.
We assume that the function $\theta$ in Theorem 1 has the form
$$\theta(\xi) = \xi |\xi|.$$
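With this choice, each component of $G$ takes a closed form (a direct computation, added here for clarity): since $\theta(t) = t|t|$ and $|F_i(x) - x_i| \ge 0$, we have $\theta(|F_i(x) - x_i|) = (F_i(x) - x_i)^2$, and hence
$$G_i(x) = (F_i(x) - x_i)^2 - F_i(x)\,|F_i(x)| - x_i\,|x_i|.$$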
Let $G(x)$ be the associated function. We define the function $g$ in the following way:
$$g(x) = \tfrac{1}{2} \|G(x)\|^2,$$
which allows for solving the system $G(x) = 0$ by solving the nonlinear least-squares problem
$$\min_x g(x). \tag{6}$$
Let us note that $x^*$ solves $G(x) = 0$ if and only if it is a stationary point of $g$. Thus, from Theorem 1, $x^*$ solves (1).
Remark 2. 
On the other hand, the first-order optimality conditions for problem (6) are equivalent to the nonlinear system
$$\nabla g(x) = G'(x)^T G(x) = 0,$$
where $\nabla g$ is the gradient of $g$, provided $G$ is differentiable and $G'$ is the Jacobian matrix of $G$.
The continuous differentiability of the merit function $g$ for certain kinds of nonsmooth functions was established by Ulbrich in the following lemma:
Lemma 1
(Ulbrich [19]). Assume that the function $G : D \subseteq \mathbb{R}^n \to \mathbb{R}^n$ is semismooth or, stronger, $p$-order semismooth, $0 < p \le 1$. Then, the merit function $\frac{1}{2}\|G(x)\|^2$ is continuously differentiable on $D$ with gradient $\nabla g(x) = V^T G(x)$, where $V \in \partial G(x)$ is arbitrary.
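Lemma 1 is what makes the algorithms below implementable: the gradient of the merit function can be assembled from any element $V$ of the generalized Jacobian. A minimal sketch, continuing the Python snippets above (names are ours, not the paper's):

```python
def merit(x, F):
    # g(x) = 0.5 * ||G(x)||^2
    Gx = G_op(x, F)
    return 0.5 * Gx @ Gx

def merit_grad(x, F, V):
    # Lemma 1: grad g(x) = V^T G(x) for an arbitrary V in dG(x)
    return V.T @ G_op(x, F)
```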
Lemma 2.
For any $x \in \mathbb{R}^n$, let $A_x = V_x^T V_x$, where $V_x \in \partial_B G(x)$. Suppose that $\nabla g(x) \ne 0$. Then, given $\lambda > 0$, the direction $d$ given by
$$(A_x + \lambda I) d = \nabla g(x)$$
is an ascent direction for $g$. In particular, there is a positive $\omega$ such that $g(x - \omega d) < g(x)$.
Proof. 
There exist constants $\beta \ge 0$ and $\gamma > 0$ such that
$$\beta \|h\|^2 \le h^T A_x h \le \gamma \|h\|^2 \quad \text{for all } h \in \mathbb{R}^n,$$
because $A_x$, defined as $V_x^T V_x$, is symmetric and positive semidefinite. It follows that
$$(\beta + \lambda) \|h\|^2 \le h^T (A_x + \lambda I) h \le (\gamma + \lambda) \|h\|^2 \quad \text{for all } h \in \mathbb{R}^n.$$
Since $\nabla g(x) \ne 0$, $d \ne 0$. If we take $h = d$, we obtain
$$d^T \nabla g(x) \ge (\beta + \lambda) \|d\|^2 > 0.$$
It follows that $\nabla g(x)^T d > 0$ and that $d$ is an ascent direction for $g$ (Section 8.2.1 in [4]).  □
Now, we present the generalized version of the damped Gauss–Newton method for solving the nonlinear complementarity problem.
Algorithm 1: The damped Gauss–Newton method for solving NCP
Let $\beta, \delta \in (0,1)$ be given. Let $x^{(0)}$ be a starting point. Given $x^{(k)}$, the steps for obtaining $x^{(k+1)}$ are:
Step 1: If $\nabla g(x^{(k)}) = 0$, then stop. Otherwise, choose any matrix $V_k \in \partial_B G(x^{(k)})$ and let $A_k = V_k^T V_k$.
Step 2: Let $\lambda_k = \|\nabla g(x^{(k)})\|$.
Step 3: Find $d^{(k)}$ that is a solution of the linear system
$$(A_k + \lambda_k I) d^{(k)} = -\nabla g(x^{(k)}).$$
Step 4: Compute the smallest nonnegative integer $m_k$ such that
$$g(x^{(k)} + \beta^m d^{(k)}) - g(x^{(k)}) \le \delta \beta^m \nabla g(x^{(k)})^T d^{(k)}$$
  and set
$$x^{(k+1)} = x^{(k)} + \beta^{m_k} d^{(k)}.$$
Remark 3. 
(i) In Step 2, letting $\lambda_k = \|\nabla g(x^{(k)})\|$ is one of the simplest strategies, because then $\lambda_k$ converges to 0.
(ii) The line search step (Step 4) of the algorithm follows the Armijo rule. A sketch of the whole algorithm is given after this remark.
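The following is a compact NumPy sketch of Algorithm 1 (an assumed translation for illustration, not the author's Code::Blocks implementation; `Gfun` returns $G(x)$ and `Vfun` returns some $V \in \partial_B G(x)$):

```python
def damped_gauss_newton(Gfun, Vfun, x0, beta=0.5, delta=0.25,
                        tol=1e-7, max_iter=200):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Gx, V = Gfun(x), Vfun(x)
        grad = V.T @ Gx                       # Lemma 1: grad g = V^T G
        if np.linalg.norm(grad) < 1e-14:      # Step 1 (numerical stationarity)
            return x
        A = V.T @ V                           # Step 1: A_k = V_k^T V_k
        lam = np.linalg.norm(grad)            # Step 2: lambda_k = ||grad g||
        d = np.linalg.solve(A + lam * np.eye(x.size), -grad)   # Step 3
        g0, m = 0.5 * Gx @ Gx, 0
        # Step 4: Armijo line search (note grad @ d < 0 by Lemma 2)
        while (0.5 * np.linalg.norm(Gfun(x + beta**m * d))**2 - g0
               > delta * beta**m * (grad @ d)) and m < 60:
            m += 1
        x_new = x + beta**m * d
        if np.linalg.norm(x_new - x) < tol:   # stopping rule used in Section 3
            return x_new
        x = x_new
    return x
```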
Theorem 2.
Let $x^{(0)}$ be a starting point and $\{x^{(k)}\}$ be a sequence generated by Algorithm 1. Assume that:
(a) $\sup_k \|V_k\| < \infty$ for all $V_k \in \partial_B G(x^{(k)})$;
(b) $\nabla g(x)$ is Lipschitzian with a constant $L_g > 0$ on the level set $\mathcal{L} = \{x : g(x) \le g(x^{(0)})\}$.
Then, the generalized damped Gauss–Newton method described by Algorithm 1 is well defined, and either $\{x^{(k)}\}$ terminates at a stationary point of $g$, or else every accumulation point of $\{x^{(k)}\}$, if it exists, is a stationary point of $g$.
Proof. 
The proof is almost the same as that of Theorem 2.1 in [10], with appropriately modified assumptions.  □
For the nonsmooth case, an alternative condition may be considered instead of the Lipschitz continuity of $\nabla g(x)$ (similarly as in [10]). Thus, we have the following convergence theorem:
Theorem 3.
Let $x^{(0)}$ be a starting point and $\{x^{(k)}\}$ be a sequence generated by Algorithm 1. Assume that:
(a) the level set $\mathcal{L} = \{x : g(x) \le g(x^{(0)})\}$ is bounded;
(b) $G$ is semismooth on $\mathcal{L}$.
Then, the generalized damped Gauss–Newton method described by Algorithm 1 is well defined, and either $\{x^{(k)}\}$ terminates at a stationary point of $g$, or else every accumulation point of $\{x^{(k)}\}$, if it exists, is a stationary point of $g$.
Now, we take up the rate of convergence of the considered algorithm. The following theorem gives suitable conditions for the various cases.
Theorem 4.
Suppose that $x^*$ is a solution of problem (1), $G$ is semismooth, and $G$ is BD-regular at $x^*$. Then, there exists a neighborhood $N^*$ of $x^*$ such that, if $x^{(0)} \in N^*$ and the sequence $\{x^{(k)}\}$ is generated by Algorithm 1, we have:
(i) $x^{(k)} \in N^*$ for all $k$, and the sequence $\{x^{(k)}\}$ converges linearly to $x^*$;
(ii) if $\delta < 0.5$, then the convergence is at least superlinear;
(iii) if $G$ is strongly semismooth, then the convergence is quadratic.
Proof. 
The proof of the similar theorem given by Subramanian and Xiu [10] is based on three lemmas, which have the same assumptions as the theorem. Below, we present these lemmas in versions adapted to our nonsmooth case.  □
Lemma 3.
Assume that $d_x$ is a solution of the equation
$$(A_x + \lambda_x I) d_x = \nabla g(x),$$
where
$$\lambda_x = \|\nabla g(x)\| \quad \text{and} \quad A_x = V_x^T V_x$$
for some matrix $V_x$ taken from $\partial_B G(x)$. Then, there is a neighborhood $D_1$ of $x^*$ such that, for all $x \in D_1$,
$$\|x - d_x - x^*\| = o(\|x - x^*\|).$$
Lemma 4.
There is a neighborhood $D_2$ of $x^*$ such that, for all $x \in D_2$,
(a) $g(x) = \frac{1}{2}(x - x^*)^T A_{*}(x - x^*) + o(\|x - x^*\|^2)$,
(b) $g(x) = \frac{1}{2}(x - x^*)^T A_x(x - x^*) + o(\|x - x^*\|^2)$.
Lemma 5.
Suppose that the conditions of Lemma 3 hold. Then, there is a neighborhood $D_3$ of $x^*$ such that, for all $x \in D_3$,
$$g(x - d_x) - g(x) + \tfrac{1}{2} \nabla g(x)^T d_x \le o(\|x - x^*\|^2).$$
The proofs of Lemmas 3 and 5 are almost the same as in [10]; however, in the proof of Lemma 4, we have to take the semismoothness into account and use Lemma 1 to obtain the desired result.
In a similar way, we may also establish the corresponding rate of convergence.
Now, we consider the inexact version of the method, which computes an approximate step using a nonnegative sequence of forcing terms to control the level of accuracy.
For the inexact version of the algorithm given below, we can state analogous theorems, which are the equivalents of Theorems 2–4. Based on our previous results, the proofs can be carried out almost in the same way as those of the theorems for the 'exact' version of the method. However, the condition (7), implied by the inexactness in Step 3 of Algorithm 2, has to be taken into account. Thus, we omit both the theorems and their proofs here.
Algorithm 2: The inexact version of the damped Gauss–Newton method for solving NCP
Let $\beta, \delta, \theta \in (0,1)$ and $\eta_k \in [0,1)$ for all $k$ be given. Let $x^{(0)} \in \mathbb{R}^n$ be a starting point. Given $x^{(k)}$, the steps for obtaining $x^{(k+1)}$ are:
Step 1: If $\nabla g(x^{(k)}) = 0$, then stop. Otherwise, choose any matrix $V_k \in \partial_B G(x^{(k)})$ and let $A_k = V_k^T V_k$.
Step 2: Let $\lambda_k = \|\nabla g(x^{(k)})\|$.
Step 3: Find $d^{(k)}$ that satisfies
$$\|(A_k + \lambda_k I) d^{(k)} + \nabla g(x^{(k)})\| \le \eta_k \|\nabla g(x^{(k)})\|. \tag{7}$$
Step 4: If
$$g(x^{(k)} + d^{(k)}) \le \theta\, g(x^{(k)}),$$
  then let
$$x^{(k+1)} = x^{(k)} + d^{(k)}$$
  and go to Step 1.
Step 5: Otherwise, compute the smallest nonnegative integer $m_k$ such that
$$g(x^{(k)} + \beta^m d^{(k)}) - g(x^{(k)}) \le \delta \beta^m \nabla g(x^{(k)})^T d^{(k)},$$
  set
$$x^{(k+1)} = x^{(k)} + \beta^{m_k} d^{(k)},$$
  and go to Step 1.
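Any $d^{(k)}$ meeting condition (7) is acceptable in Step 3. One natural way to produce such a step is to apply the conjugate gradient method to the symmetric positive definite system of Algorithm 1 and stop as soon as the residual is small enough; the sketch below is illustrative, not the paper's implementation (here `A` stands for $A_k + \lambda_k I$):

```python
def inexact_direction(A, grad, eta):
    # CG on A d = -grad, stopped once ||A d + grad|| <= eta * ||grad||,
    # i.e., once condition (7) holds.
    d = np.zeros_like(grad)
    r = -grad.copy()                      # residual of A d = -grad at d = 0
    p = r.copy()
    target = eta * np.linalg.norm(grad)
    for _ in range(2 * grad.size):        # exact CG needs at most n steps
        if np.linalg.norm(r) <= target:
            break
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        d = d + alpha * p
        r_new = r - alpha * Ap
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return d
```

The full step is then tried first (Step 4) and, if it does not give a sufficient decrease of $g$, the Armijo search of Step 5 is applied.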

3. Numerical Results

In this section, we present the results of our numerical experiments, obtained by coding both algorithms in Code::Blocks. We used double precision on an Intel Core i7 3.2 GHz machine running the Windows Server 2016 operating system. We applied the generalized damped Gauss–Newton method to solve three nonlinear complementarity problems. In the following examples, $N_1$ and $N_2$ denote the numbers of iterations performed to satisfy the stopping criterion $\|x^{(k+1)} - x^{(k)}\| < 10^{-7}$ using Algorithms 1 and 2, respectively. The forcing terms in Algorithm 2 were chosen as follows: $\eta_k = (10k)^{-1}$ for all $k$.
Example 1
(from Kojima and Shindo [20]). Let the function $F : \mathbb{R}^4 \to \mathbb{R}^4$ have the form
$$\begin{aligned}
F_1(x) &= 3x_1^2 + 2x_1 x_2 + 2x_2^2 + x_3 + 3x_4 - 6, \\
F_2(x) &= 2x_1^2 + x_1 + x_2^2 + 10x_3 + 2x_4 - 2, \\
F_3(x) &= 3x_1^2 + x_1 x_2 + 2x_2^2 + 2x_3 + 9x_4 - 9, \\
F_4(x) &= x_1^2 + 3x_2^2 + 2x_3 + 3x_4 - 3.
\end{aligned}$$
Problem (NCP) with the above function $F$ has two solutions:
$$x^* = (1, 0, 3, 0)^T \quad \text{and} \quad x^{**} = (\sqrt{6}/2,\, 0,\, 0,\, 0.5)^T,$$
for which
$$F(x^*) = (0, 31, 0, 4)^T \quad \text{and} \quad F(x^{**}) = \left(0,\, 2 + \sqrt{6}/2,\, 0,\, 0\right)^T.$$
Thus, $x^*$ is a non-degenerate solution of (NCP) because
$$L := \{ i : x_i^* = 0,\ F_i(x^*) = 0 \} = \emptyset,$$
but $x^{**}$ is a degenerate solution.
Depending upon the starting point, the iteration process converged to either of the two solutions (see Table 1 or Figure 1); a reproducible sketch follows.
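Under the $\varphi_3$ reformulation with $\theta(\xi) = \xi|\xi|$, the closed form of $G_i$ from Section 2.2 makes an element of the B-differential available analytically. The following sketch (helper names are ours, continuing the snippets above) sets up Example 1:

```python
def V_phi3(x, Fx, J):
    # Jacobian of G for phi3 with theta(t) = t|t|: by the closed form
    # G_i = (F_i - x_i)^2 - F_i|F_i| - x_i|x_i|, G is C^1 here, so the
    # B-differential reduces to the singleton {G'(x)} computed below.
    I = np.eye(x.size)
    return (2 * (Fx - x)[:, None] * (J - I)
            - 2 * np.abs(Fx)[:, None] * J
            - 2 * np.abs(x)[:, None] * I)

def F_ks(x):
    # the Kojima-Shindo mapping of Example 1
    x1, x2, x3, x4 = x
    return np.array([3*x1**2 + 2*x1*x2 + 2*x2**2 + x3 + 3*x4 - 6,
                     2*x1**2 + x1 + x2**2 + 10*x3 + 2*x4 - 2,
                     3*x1**2 + x1*x2 + 2*x2**2 + 2*x3 + 9*x4 - 9,
                     x1**2 + 3*x2**2 + 2*x3 + 3*x4 - 3])

def JF_ks(x):
    # analytic Jacobian of F_ks
    x1, x2, x3, x4 = x
    return np.array([[6*x1 + 2*x2, 2*x1 + 4*x2,  1.0, 3.0],
                     [4*x1 + 1.0,  2*x2,        10.0, 2.0],
                     [6*x1 + x2,   x1 + 4*x2,    2.0, 9.0],
                     [2*x1,        6*x2,         2.0, 3.0]])

x_sol = damped_gauss_newton(lambda x: G_op(x, F_ks),
                            lambda x: V_phi3(x, F_ks(x), JF_ks(x)),
                            x0=np.array([1.0, 0.0, 1.0, 0.0]))
# Table 1 reports convergence to x* = (1, 0, 3, 0) from this starting point
```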
Example 2.
Let the function $F : \mathbb{R}^2 \to \mathbb{R}^2$ be defined as follows:
$$F(x) = \begin{pmatrix} 2x_1 + x_2^2 - 6 \\ -x_1^2 + 4x_1 + \tfrac{1}{2} x_2 - 3 \end{pmatrix}.$$
Then, problem (NCP) has two solutions:
- the non-degenerate solution
$$x^* = (0, 6)^T, \ \text{for which} \ F(x^*) = (30, 0)^T;$$
- the degenerate solution
$$x^{**} = (3, 0)^T, \ \text{for which} \ F(x^{**}) = (0, 0)^T.$$
Similar to Example 1, the iteration process converged to both solutions, depending on the starting point (see Table 2 or Figure 2); a sketch of the setup follows.
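The same driver applies to Example 2 (again an illustrative sketch, reusing the helpers above):

```python
def F_ex2(x):
    x1, x2 = x
    return np.array([2*x1 + x2**2 - 6,
                     -x1**2 + 4*x1 + 0.5*x2 - 3])

def JF_ex2(x):
    x1, x2 = x
    return np.array([[2.0,        2*x2],
                     [4.0 - 2*x1, 0.5]])

x_sol = damped_gauss_newton(lambda x: G_op(x, F_ex2),
                            lambda x: V_phi3(x, F_ex2(x), JF_ex2(x)),
                            x0=np.array([0.0, 0.0]))
# Table 2 reports convergence to x* = (0, 6) from this starting point
```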
Example 3
(from Jiang and Qi [21]). Let the function $F : \mathbb{R}^n \to \mathbb{R}^n$ have the form $F(x) = Mx + q$, where
$$M = \begin{pmatrix}
4 & -1 & 0 & \cdots & 0 & 0 \\
-1 & 4 & -1 & \cdots & 0 & 0 \\
0 & -1 & 4 & \cdots & 0 & 0 \\
\vdots & & & \ddots & & \vdots \\
0 & 0 & 0 & \cdots & 4 & -1 \\
0 & 0 & 0 & \cdots & -1 & 4
\end{pmatrix}, \qquad q = (-1, \ldots, -1)^T.$$
Because $M$ is positive definite, $F$ is strongly monotone, and thus the corresponding problem (NCP) has exactly one solution.
Calculations were made for various $n$ with the single starting point $x^{(0)} = (0, \ldots, 0)^T$. For all tests, we obtained the same numbers of iterations: $N_1 = 3$ and $N_2 = 4$. The setup is sketched below.
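A sketch of Example 3 for arbitrary $n$, with $M$ and $q$ as above (illustrative helper names):

```python
def linear_ncp(n):
    # F(x) = M x + q with M = tridiag(-1, 4, -1) and q = (-1, ..., -1)^T
    M = 4 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    q = -np.ones(n)
    return (lambda x: M @ x + q), (lambda x: M)

F_lin, JF_lin = linear_ncp(8)
x_sol = damped_gauss_newton(lambda x: G_op(x, F_lin),
                            lambda x: V_phi3(x, F_lin(x), JF_lin(x)),
                            x0=np.zeros(8))
```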

4. Conclusions

We have given a nonsmooth version of the damped generalized Gauss–Newton method presented by Subramanian and Xiu [10]. Generalized Newton algorithms related to the Gauss–Newton method are well-known and important tools for solving nonsmooth equations, which arise from various nonlinear problems such as nonlinear complementarity or variational inequality problems. These algorithms are especially useful when the problem has many variables. We have proved that the sequences generated by the methods are superlinearly convergent under mild assumptions. Clearly, semismoothness and BD-regularity are sufficient to obtain only superlinear convergence of the methods, while strong semismoothness gives even quadratic convergence. However, if the function $G$ is not semismooth or BD-regular, or the gradient of $g$ is not Lipschitzian, the Gauss–Newton methods may be useless.
The performance of both methods was evaluated in terms of the number of iterations required. The analysis of the numerical results indicates that the methods are usually reliable for solving semismooth problems. The results show that the inexact approach can produce a noticeable slowdown as measured by the number of iterations (compare $N_1$ and $N_2$ in Figure 1 and Figure 2). In turn, an important advantage is that the algorithms allow us to find various solutions of a problem (this can be observed in the first and second examples). However, if the problem has many solutions, then the relationship between the starting point and the obtained solution may be unpredictable.
Clearly, traditional numerical algorithms are not the only methods for solving nonlinear complementarity problems, regardless of the degree of nonsmoothness. Apart from the methods presented in this paper and mentioned in the Introduction, some computational intelligence algorithms can be used to solve (NCP) problems, among others monarch butterfly optimization (see [22,23]), the earthworm optimization algorithm (see [24]), elephant herding optimization (see [25,26]), and the moth search algorithm (see [27,28]). All of these approaches are bio-inspired metaheuristic algorithms.

Funding

This research received no external funding.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Mangasarian, O.L. Equivalence of the complementarity problem to a system of nonlinear equations. SIAM J. Appl. Math. 1976, 31, 89–92.
  2. Harker, P.T.; Pang, J.S. Finite-dimensional variational inequality and nonlinear complementarity problems: A survey of theory, algorithms and applications. Math. Program. 1990, 48, 161–220.
  3. Ferris, M.C.; Pang, J.S. Engineering and economic applications of complementarity problems. SIAM Rev. 1997, 39, 669–713.
  4. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970.
  5. Chen, J.; Li, W. Local convergence of Gauss–Newton's like method in weak conditions. J. Math. Anal. Appl. 2006, 324, 381–394.
  6. Fan, J.; Pan, J. On the convergence rate of the inexact Levenberg–Marquardt method. J. Ind. Manag. Optim. 2011, 7, 199–210.
  7. Yamashita, N.; Fukushima, M. On the rate of convergence of the Levenberg–Marquardt method. Computing Suppl. 2001, 15, 227–238.
  8. De Luca, T.; Facchinei, F.; Kanzow, C. A theoretical and numerical comparison of some semismooth algorithms for complementarity problems. Comput. Optim. Appl. 2000, 16, 173–205.
  9. Xiu, N.; Zhang, J. A smoothing Gauss–Newton method for the generalized HLCP. J. Comput. Appl. Math. 2001, 129, 195–208.
  10. Subramanian, P.K.; Xiu, N.H. Convergence analysis of Gauss–Newton method for the complementarity problem. J. Optim. Theory Appl. 1997, 94, 727–738.
  11. Martínez, J.M.; Qi, L. Inexact Newton method for solving nonsmooth equations. J. Comput. Appl. Math. 1995, 60, 127–145.
  12. Bao, J.F.; Li, C.; Shen, W.P.; Yao, J.C.; Guu, S.M. Approximate Gauss–Newton methods for solving underdetermined nonlinear least squares problems. Appl. Numer. Math. 2017, 111, 92–110.
  13. Cartis, C.; Roberts, L. A derivative-free Gauss–Newton method. Math. Program. Comput. 2019, 11, 631–674.
  14. Clarke, F.H. Optimization and Nonsmooth Analysis; John Wiley & Sons: New York, NY, USA, 1983.
  15. Qi, L. Convergence analysis of some algorithms for solving nonsmooth equations. Math. Oper. Res. 1993, 18, 227–244.
  16. Mifflin, R. Semismooth and semiconvex functions in constrained optimization. SIAM J. Control Optim. 1977, 15, 142–149.
  17. Qi, L.; Sun, J. A nonsmooth version of Newton's method. Math. Program. 1993, 58, 353–367.
  18. Pang, J.S.; Qi, L. Nonsmooth equations: Motivation and algorithms. SIAM J. Optim. 1993, 3, 443–465.
  19. Ulbrich, M. Nonmonotone trust-region methods for bound-constrained semismooth systems of equations with applications to nonlinear mixed complementarity problems. SIAM J. Optim. 2001, 11, 889–917.
  20. Kojima, M.; Shindo, S. Extensions of Newton and quasi-Newton methods to systems of PC1-equations. J. Oper. Res. Soc. Jpn. 1986, 29, 352–374.
  21. Jiang, H.; Qi, L. A new nonsmooth equations approach to nonlinear complementarity problems. SIAM J. Control Optim. 1997, 35, 178–193.
  22. Feng, Y.; Wang, G.; Li, W.; Li, N. Multi-strategy monarch butterfly optimization algorithm for discounted 0–1 knapsack problem. Neural Comput. Appl. 2018, 30, 3019–3036.
  23. Feng, Y.; Yu, X.; Wang, G. A novel monarch butterfly optimization with global position updating operator for large-scale 0–1 knapsack problems. Mathematics 2019, 7, 1056.
  24. Wang, G.; Deb, S.; Coelho, L.D. Earthworm optimisation algorithm: A bio-inspired metaheuristic algorithm for global optimisation problems. Int. J. Bio-Inspired Comput. 2018, 12, 1–22.
  25. Wang, G.; Deb, S.; Coelho, L.D. Elephant herding optimization. In Proceedings of the 2015 3rd International Symposium on Computational and Business Intelligence (ISCBI), Bali, Indonesia, 7–9 December 2015; pp. 1–5.
  26. Wang, G.; Deb, S.; Gao, X.; Coelho, L.D. A new metaheuristic optimisation algorithm motivated by elephant herding behaviour. Int. J. Bio-Inspired Comput. 2016, 8, 394–409.
  27. Feng, Y.; Wang, G. Binary moth search algorithm for discounted 0–1 knapsack problem. IEEE Access 2018, 6, 10708–10719.
  28. Wang, G. Moth search algorithm: A bio-inspired metaheuristic algorithm for global optimization problems. Memet. Comput. 2018, 10, 151–164.
Figure 1. Number of iterations for various starting points (for Example 1).
Figure 2. Number of iterations for various starting points (for Example 2).
Table 1. Results for Example 1.

x^{(0)}              N_1      N_2      Solution
(1, 0, 0, 0)^T       9        11       x**
(0, 0, 1, 0)^T       failed   18       x**
(0, 0, 0, 1)^T       failed   failed   -
(1, 0, 1, 0)^T       7        10       x*
(1, 0, 0, 1)^T       7        9        x**
(1, 0, 1, 5)^T       6        8        x**
Table 2. Results for Example 2.

x^{(0)}              N_1      N_2      Solution
(0, 0)^T             5        7        x*
(1, 0)^T             1        2        x**
(0, 1)^T             4        7        x*
(1, 1)^T             2        4        x**
(1, 1)^T             4        7        x**
(5, 5)^T             3        5        x*
(100, 100)^T         3        6        x*
