Nonsmooth Optimization in Honor of the 60th Birthday of Adil M. Bagirov

A special issue of Algorithms (ISSN 1999-4893). This special issue belongs to the section "Analysis of Algorithms and Complexity Theory".

Deadline for manuscript submissions: closed (30 June 2020)

Printed Edition Available!
A printed edition of this Special Issue is available.

Special Issue Editors


Guest Editor
Dr. Napsu Karmitsa
Department of Computing, University of Turku, 20014 Turku, Finland
Interests: nonsmooth optimization; large-scale optimization; machine learning

Special Issue Information

Dear Colleagues,

Nonsmooth optimization (NSO) refers to the general problem of minimizing (or maximizing) functions that have discontinuous gradients. These types of functions arise in many applied fields, for instance, in image denoising, optimal shape design, computational chemistry and physics, machine learning, and data mining, including cluster analysis, classification, and regression. In most of these applications, NSO approaches lead to a significant reduction in the number of decision variables compared with other approaches, and thus facilitate the design of efficient algorithms for their solution. In addition, various real-world problems can be modeled more realistically as NSO problems; the robust formulation of a system may lead to solving an NSO problem; and even solving a difficult smooth (continuously differentiable) problem sometimes requires the use of NSO techniques in order to either reduce the problem's size or simplify its structure. These are some of the main reasons for the increased interest in nonsmooth analysis and optimization in recent years. Despite the considerable developments in NSO, numerically effective methods are still scarce, and their application to real-world problems remains somewhat limited. The aim of this Special Issue is to collect the most recent techniques and applications in the area of NSO.
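To make the notion concrete, here is a minimal, self-contained sketch (illustrative only, not drawn from any paper in this issue) of the classical subgradient method, the simplest of the NSO algorithms listed among the keywords below, applied to a toy nonsmooth function:

```python
import numpy as np

# Toy nonsmooth convex function: f(x) = |x1 - 1| + |x2 + 2|.
# The gradient jumps at the kinks x1 = 1 and x2 = -2; at a kink, any value
# in [-1, 1] is a valid subgradient component (np.sign's 0 is one such choice).

def f(x):
    return abs(x[0] - 1.0) + abs(x[1] + 2.0)

def subgrad(x):
    return np.array([np.sign(x[0] - 1.0), np.sign(x[1] + 2.0)])

x = np.zeros(2)
for k in range(1, 501):
    x = x - (1.0 / k) * subgrad(x)   # diminishing steps: the standard convergence recipe

print("approx. minimizer:", np.round(x, 3), " f(x) =", round(f(x), 4))  # near (1, -2)
```

At the kinks the gradient does not exist, but any subgradient combined with diminishing step sizes still yields convergence for convex problems; this is the basic pattern the more sophisticated bundle and DC methods in this issue refine.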

We invite you to submit your original and unpublished research papers to this Special Issue on nonsmooth optimization. We have a special interest in works on new NSO algorithms, including those exploiting the special structure of nonsmooth problems (DC, partial smoothness, sparsity, etc.), and on applications of NSO, including (but not limited to) image denoising, machine learning, and data mining.

Dr. Napsu Karmitsa
Dr. Sona Taheri
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Nonsmooth optimization
  • Non-differentiable programming
  • Subgradient methods
  • Bundle methods
  • Applications of nonsmooth optimization

Published Papers (7 papers)


Editorial


4 pages, 1122 KiB  
Editorial
Special Issue “Nonsmooth Optimization in Honor of the 60th Birthday of Adil M. Bagirov”: Foreword by Guest Editors
by Napsu Karmitsa and Sona Taheri
Algorithms 2020, 13(11), 282; https://doi.org/10.3390/a13110282 - 07 Nov 2020
Abstract
Nonsmooth optimization refers to the general problem of minimizing (or maximizing) functions that have discontinuous gradients. This Special Issue contains six research articles that collect together the most recent techniques and applications in the area of nonsmooth optimization. These include novel techniques utilizing decomposable structures in nonsmooth problems (for instance, the difference-of-convex (DC) structure) and important practical problems, such as multiple instance learning, the hydrothermal unit-commitment problem, and scheduling the disposal of nuclear waste.

Research


16 pages, 1542 KiB  
Article
A Mixed-Integer and Asynchronous Level Decomposition with Application to the Stochastic Hydrothermal Unit-Commitment Problem
by Bruno Colonetti, Erlon Cristian Finardi and Welington de Oliveira
Algorithms 2020, 13(9), 235; https://doi.org/10.3390/a13090235 - 18 Sep 2020
Cited by 3
Abstract
Independent System Operators (ISOs) worldwide face the ever-increasing challenge of coping with uncertainties, which requires sophisticated algorithms for solving unit-commitment (UC) problems of increasing complexity in less-and-less time. Hence, decomposition methods are appealing options to produce easier-to-handle problems that can hopefully return good solutions in reasonable times. When applied to two-stage stochastic models, decomposition often yields subproblems that are embarrassingly parallel. Synchronous parallel-computing techniques are applied to the decomposable subproblem and frequently result in considerable time savings. However, due to the inherent run-time differences amongst the subproblems' optimization models, unequal hardware, and communication overheads, synchronous approaches may underuse the computing resources. Consequently, asynchronous computing constitutes a natural enhancement to existing methods. In this work, we propose a novel extension of the asynchronous level decomposition to solve stochastic hydrothermal UC problems with mixed-integer variables in the first stage. In addition, we combine this novel method with an efficient task allocation to yield an innovative algorithm that far outperforms the current state of the art. We provide a convergence analysis of our proposal and assess its computational performance on a testbed consisting of 54 problems from a 46-bus system. Results show that our asynchronous algorithm outperforms its synchronous counterpart in terms of wall-clock computing time in 40% of the problems, providing time savings averaging about 45%, while also reducing the standard deviation of running times over the testbed on the order of 25%.
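As a hedged illustration of the asynchronous principle described above (a generic master-worker pattern, not the authors' actual algorithm or code), the fragment below collects scenario cuts as soon as each worker finishes instead of waiting for the slowest one; a toy quadratic stands in for a real scenario MIP:

```python
import concurrent.futures as cf

# Scenario subproblems of a two-stage stochastic program are solved in
# parallel; the master updates its cutting-plane model as soon as ANY worker
# returns, avoiding the synchronization barrier of waiting for the slowest.

def solve_subproblem(scenario, x_master):
    # Placeholder for a scenario solve; returns a cut (value, slope).
    value = sum((x_master[i] - scenario[i]) ** 2 for i in range(len(x_master)))
    slope = [2 * (x_master[i] - scenario[i]) for i in range(len(x_master))]
    return value, slope

scenarios = [[0.0, 1.0], [2.0, -1.0], [1.0, 1.0]]
x = [0.0, 0.0]
cuts = []

with cf.ThreadPoolExecutor(max_workers=3) as pool:
    pending = {pool.submit(solve_subproblem, s, x) for s in scenarios}
    for done in cf.as_completed(pending):   # asynchronous: handle results as they arrive
        value, slope = done.result()
        cuts.append((value, slope))         # master model grows without a sync barrier
        # ... in a real level method, re-solve the master and respawn work here

print(f"collected {len(cuts)} cuts asynchronously")
```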

11 pages, 362 KiB  
Article
On a Nonsmooth Gauss–Newton Algorithms for Solving Nonlinear Complementarity Problems
by Marek J. Śmietański
Algorithms 2020, 13(8), 190; https://doi.org/10.3390/a13080190 - 04 Aug 2020
Cited by 3
Abstract
In this paper, we propose a new version of the generalized damped Gauss–Newton method for solving nonlinear complementarity problems, based on a transformation into a nonsmooth equation that is equivalent to some unconstrained optimization problem. The B-differential plays the role of the derivative. We present two types of algorithms (usual and inexact), which have superlinear and global convergence for semismooth cases. These results can be applied to efficiently find all solutions of the nonlinear complementarity problems under some mild assumptions. The results of the numerical tests are attached as a complement to the theoretical considerations.
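For orientation, a common way to obtain the nonsmooth equation mentioned in the abstract is the componentwise-minimum reformulation of the nonlinear complementarity problem; the sketch below uses that standard device with a toy affine map (the paper's exact operator, damping rule, and test problems may differ):

```python
import numpy as np

# NCP: x >= 0, F(x) >= 0, x . F(x) = 0  <=>  Phi(x) = min(x, F(x)) = 0,
# tackled via the merit function 0.5 * ||Phi(x)||^2 with a damped
# Gauss-Newton step built from an element of the B-differential.

def F(x):                       # toy affine map F(x) = M x + q (assumption)
    M = np.array([[3.0, 1.0], [1.0, 2.0]])
    q = np.array([-3.0, -1.0])
    return M @ x + q, M

def phi_and_B(x):
    Fx, JF = F(x)
    phi = np.minimum(x, Fx)
    # One element of the B-differential: row i is e_i where x_i <= F_i(x),
    # otherwise row i of the Jacobian of F.
    B = np.where((x <= Fx)[:, None], np.eye(len(x)), JF)
    return phi, B

x = np.ones(2)
for _ in range(50):
    phi, B = phi_and_B(x)
    if np.linalg.norm(phi) < 1e-10:
        break
    d = np.linalg.lstsq(B, -phi, rcond=None)[0]    # Gauss-Newton direction
    t = 1.0
    while np.linalg.norm(phi_and_B(x + t * d)[0]) > (1 - 1e-4 * t) * np.linalg.norm(phi):
        t *= 0.5                                    # simple damping (backtracking)
    x = x + t * d

print("solution:", x)   # for this toy data: (1, 0)
```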

25 pages, 1523 KiB  
Article
Polyhedral DC Decomposition and DCA Optimization of Piecewise Linear Functions
by Andreas Griewank and Andrea Walther
Algorithms 2020, 13(7), 166; https://doi.org/10.3390/a13070166 - 11 Jul 2020
Cited by 2
Abstract
For piecewise linear functions $f:\mathbb{R}^n \to \mathbb{R}$ we show how their abs-linear representation can be extended to yield simultaneously their decomposition into a convex part $\check{f}$ and a concave part $\hat{f}$, including a pair of generalized gradients $\check{g} \in \mathbb{R}^n \ni \hat{g}$. The latter satisfy strict chain rules and can be computed in the reverse mode of algorithmic differentiation, at a small multiple of the cost of evaluating $f$ itself. It is shown how $\check{f}$ and $\hat{f}$ can be expressed as a single maximum and a single minimum of affine functions, respectively. The two subgradients $\check{g}$ and $\hat{g}$ are then used to drive DCA algorithms, where the (convex) inner problem can be solved in finitely many steps, e.g., by a Simplex variant or the true steepest descent method. Using a reflection technique to update the gradients of the concave part, one can ensure finite convergence to a local minimizer of $f$, provided the Linear Independence Kink Qualification holds. For piecewise smooth objectives the approach can be used as an inner method for successive piecewise linearization.
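The DCA driver the abstract refers to can be pictured as follows: linearize the concave part at the current point and solve the remaining convex piecewise linear problem exactly, e.g., as an LP. A hedged toy sketch (illustrative data, not the paper's method or code):

```python
import numpy as np
from scipy.optimize import linprog

# DC function f = g - h with both convex parts given as maxima of affine
# functions (the "single maximum / single minimum" form). Toy data:
#   g(x) = max(x, -x) = |x|,   h(x) = max(0.5x - 1, -0.5x - 1) = 0.5|x| - 1,
# so f(x) = 0.5|x| + 1 with minimizer x = 0.
Ag, cg = np.array([1.0, -1.0]), np.array([0.0, 0.0])
Ah, ch = np.array([0.5, -0.5]), np.array([-1.0, -1.0])

x = 3.0
for _ in range(10):
    y = Ah[np.argmax(Ah * x + ch)]          # a subgradient of h at x
    # Convex inner problem: minimize g(x) - y*x as an LP in (x, t), t >= pieces of g.
    res = linprog(c=[-y, 1.0],
                  A_ub=np.column_stack([Ag, -np.ones(2)]),
                  b_ub=-cg,
                  bounds=[(None, None), (None, None)])
    x_new = res.x[0]
    if abs(x_new - x) < 1e-12:
        break
    x = x_new

print("DCA stationary point:", x)   # expected: 0.0
```

Each inner LP is solved exactly in finitely many simplex steps, which is the finite-termination property the abstract exploits.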

14 pages, 363 KiB  
Article
On the Use of Biased-Randomized Algorithms for Solving Non-Smooth Optimization Problems
by Angel Alejandro Juan, Canan Gunes Corlu, Rafael David Tordecilla, Rocio de la Torre and Albert Ferrer
Algorithms 2020, 13(1), 8; https://doi.org/10.3390/a13010008 - 25 Dec 2019
Cited by 13
Abstract
Soft constraints are quite common in real-life applications. For example, in freight transportation, the fleet size can be enlarged by outsourcing part of the distribution service, and some deliveries to customers can be postponed as well; in inventory management, it is possible to consider stock-outs generated by unexpected demands; and in manufacturing processes and project management, it is frequent that some deadlines cannot be met due to delays in critical steps of the supply chain. However, capacity-, size-, and time-related limitations are included in many optimization problems as hard constraints, while it would usually be more realistic to consider them as soft ones, i.e., they can be violated to some extent by incurring a penalty cost. Most of the time, this penalty cost is nonlinear and even discontinuous, which might transform the objective function into a non-smooth one. Despite their many practical applications, non-smooth optimization problems are quite challenging, especially when the underlying optimization problem is NP-hard in nature. In this paper, we propose the use of biased-randomized algorithms as an effective methodology to cope with NP-hard and non-smooth optimization problems in many practical applications. Biased-randomized algorithms extend constructive heuristics by introducing a nonuniform randomization pattern into them. Hence, they can be used to explore promising areas of the solution space without the limitations of gradient-based approaches, which assume the existence of smooth objective functions. Moreover, biased-randomized algorithms can be easily parallelized, thus employing short computing times while exploring a large number of promising regions. This paper discusses these concepts in detail, reviews existing work in different application areas, and highlights current trends and open research lines.
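A hedged sketch of the core mechanism (a generic nearest-neighbour heuristic with geometric bias; the parameter beta and the TSP setting are illustrative assumptions, not from the paper):

```python
import math
import random

# A constructive heuristic normally picks the best candidate at each step.
# Biased randomization instead samples a skewed (here geometric) distribution
# over the SORTED candidate list: usually near-greedy, occasionally exploratory,
# and no gradient information is ever needed.

def biased_index(n, beta=0.25):
    """Geometric-biased position in [0, n): position 0 (greedy) is most likely."""
    k = int(math.log(random.random()) / math.log(1.0 - beta))
    return k % n

def biased_greedy_tour(dist, start=0):
    """Nearest-neighbour TSP construction with biased randomization."""
    n = len(dist)
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        ranked = sorted(unvisited, key=lambda j: dist[tour[-1]][j])  # greedy order
        tour.append(ranked[biased_index(len(ranked))])               # biased pick
        unvisited.discard(tour[-1])
    return tour

# Tiny symmetric instance (illustrative data); keep the best of 100 restarts:
D = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 8], [10, 4, 8, 0]]
best = min((biased_greedy_tour(D) for _ in range(100)),
           key=lambda t: sum(D[t[i]][t[(i + 1) % len(t)]] for i in range(len(t))))
print("best tour found:", best)
```

The independent restarts are what make the approach trivially parallelizable, as the abstract notes.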

20 pages, 1079 KiB  
Article
Planning the Schedule for the Disposal of the Spent Nuclear Fuel with Interactive Multiobjective Optimization
by Outi Montonen, Timo Ranta and Marko M. Mäkelä
Algorithms 2019, 12(12), 252; https://doi.org/10.3390/a12120252 - 25 Nov 2019
Cited by 4
Abstract
Several countries utilize nuclear power and face the problem of what to do with the spent nuclear fuel. One possibility, the one considered in this paper, is to dispose of the fuel assemblies in a disposal facility. Before the assemblies can be disposed of, they must cool down in interim storage until their decay heat power is sufficiently low. Next, they are loaded into canisters in the encapsulation facility, and finally, the canisters are placed in the disposal facility. In this paper, we model this process as a nonsmooth multiobjective mixed-integer nonlinear optimization problem with the minimization of nine objectives: the maximum number of assemblies in storage, maximum storage time, average storage time, total number of canisters, end time of the encapsulation, operation time of the encapsulation facility, the lengths of the disposal and central tunnels, and total costs. As a result, we obtain the disposal schedule, i.e., the number of canisters disposed of in each period. We introduce an interactive multiobjective optimization method using two-slope parameterized achievement scalarizing functions, which enables us to obtain systematically several different Pareto optimal solutions from the same preference information. Finally, a case study modeling the disposal in Finland is given. The results obtained are analyzed in terms of the objective values and disposal schedules.
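For context, a generic achievement scalarizing function (ASF) can be sketched as follows; the paper's two-slope parameterized variant assigns different slopes to over- and under-achieved objectives, which the illustrative weights below only gesture at (the exact parameterization in the article may differ):

```python
import numpy as np

# Given a reference point q from the decision maker, minimizing s(f(x); q)
# over the feasible set yields a Pareto optimal solution. Different slopes
# for over-achievement (f_i worse than q_i) and under-achievement let the
# same preference information generate several distinct Pareto optima.

def two_slope_asf(f, q, lam_under, lam_over, rho=1e-6):
    f, q = np.asarray(f), np.asarray(q)
    d = f - q                                   # positive => worse than reference (minimization)
    terms = np.where(d >= 0, lam_over * d, lam_under * d)
    return terms.max() + rho * terms.sum()      # augmentation term enforces Pareto optimality

# Example: two candidate objective vectors against reference q = (1.0, 2.0).
q = [1.0, 2.0]
print(two_slope_asf([1.5, 1.8], q, lam_under=1.0, lam_over=2.0))
print(two_slope_asf([0.9, 2.5], q, lam_under=1.0, lam_over=2.0))
```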

12 pages, 432 KiB  
Article
SVM-Based Multiple Instance Classification via DC Optimization
by Annabella Astorino, Antonio Fuduli, Giovanni Giallombardo and Giovanna Miglionico
Algorithms 2019, 12(12), 249; https://doi.org/10.3390/a12120249 - 23 Nov 2019
Cited by 8
Abstract
A multiple instance learning problem consists of categorizing objects, each represented as a set (bag) of points. Unlike the supervised classification paradigm, where each point of the training set is labeled, the labels are only associated with bags, while the labels of the points inside the bags are unknown. We focus on the binary classification case, where the objective is to discriminate between positive and negative bags using a separating surface. Adopting a support vector machine setting at the training level, the problem of minimizing the classification-error function can be formulated as a nonconvex nonsmooth unconstrained program. We propose a difference-of-convex (DC) decomposition of the nonconvex function, which we tackle using an appropriate nonsmooth DC algorithm. Numerical results on benchmark data sets are reported.
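A hedged sketch of the bag-level scoring convention underlying this kind of formulation (illustrative data and parameters, not the authors' model): with a linear scoring function, a bag is classified by its best-scoring instance, and that max over instances is precisely what makes the training objective nonsmooth and, with hinge-type losses, naturally DC.

```python
import numpy as np

def bag_score(bag, w, b):
    # Max over instances: the source of nonsmoothness in (w, b).
    return max(float(x @ w + b) for x in bag)

def bag_hinge_loss(bags, labels, w, b):
    # Hinge loss on bag-level margins; nonconvex overall, but expressible as a
    # difference of convex functions since hinge/max compositions are DC.
    return sum(max(0.0, 1.0 - y * bag_score(B, w, b)) for B, y in zip(bags, labels))

bags = [np.array([[0.0, 1.0], [2.0, 2.0]]),      # positive bag: one strong instance
        np.array([[-1.0, -1.0], [-2.0, 0.0]])]   # negative bag
labels = [+1, -1]
w, b = np.array([1.0, 1.0]), 0.0
print("empirical bag-level hinge loss:", bag_hinge_loss(bags, labels, w, b))
```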
