
Algorithms, Volume 14, Issue 1 (January 2021) – 28 articles

Cover Story (view full-size image): Our method, called locus mutation, is a gene-dependent local mutation operator in which every gene has a different mutation rate induced from its partial fitness value. A certain set of problems is partially solvable, allowing us to grade segments of a solution individually, which enables local and individual tuning of the mutation parameters of genes. In these cases, our approach can speed up the convergence of the algorithm and yield a more accurate final solution, as we demonstrate on the N-Queens and traveling salesman problems. Our approach extends the traditional mutation operator with an extra parameter that reduces to the traditional approach when set to zero and balances the exploitation–exploration ratio for non-zero values. View this paper.
Open Access Article
Capturing Protein Domain Structure and Function Using Self-Supervision on Domain Architectures
Algorithms 2021, 14(1), 28; https://0-doi-org.brum.beds.ac.uk/10.3390/a14010028 - 19 Jan 2021
Viewed by 887
Abstract
The prediction of biological properties of unseen proteins has been shown to improve with the use of protein sequence embeddings. However, these sequence embeddings have the caveat that biological metadata do not exist for each amino acid, so the quality of each unique learned embedding vector cannot be measured separately. Therefore, current sequence embeddings cannot be intrinsically evaluated in a quantitative manner for the degree of biological information they capture. We address this drawback with our approach, dom2vec, by learning vector representations for protein domains rather than for each amino acid, as biological metadata do exist for each domain separately. To perform a reliable quantitative intrinsic evaluation in terms of biological knowledge, we selected the metadata related to the most distinctive biological characteristics of a domain: its structure and its enzymatic and molecular function. Notably, dom2vec obtains an adequate level of performance in the intrinsic assessment; therefore, we can draw an analogy between local linguistic features in natural languages and the domain structure and function information in domain architectures. Moreover, we demonstrate the applicability of dom2vec to protein prediction tasks by comparing it with state-of-the-art sequence embeddings on three downstream tasks. We show that dom2vec outperforms sequence embeddings for toxin and enzymatic function prediction and is comparable with sequence embeddings in cellular location prediction. Full article
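The idea of treating a protein's domain architecture as a "sentence" of domain identifiers can be sketched as the pair-generation step a word2vec-style model would train on. The window size, function name, and the Pfam-style identifiers below are illustrative assumptions, not details from the paper:

```python
def skipgram_pairs(architecture, window=2):
    """Generate (target, context) pairs from one domain architecture."""
    pairs = []
    for i, target in enumerate(architecture):
        for j in range(max(0, i - window), min(len(architecture), i + window + 1)):
            if j != i:
                pairs.append((target, architecture[j]))
    return pairs

# A toy multi-domain architecture: SH3, SH2, kinase (Pfam-style IDs, illustrative).
arch = ["PF00018", "PF00017", "PF07714"]
print(skipgram_pairs(arch, window=1))   # each domain paired with its neighbours
```

An embedding model trained on many such pairs places domains that co-occur in architectures close together, which is what makes the intrinsic evaluation against per-domain metadata possible.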
Open Access Article
A Memetic Algorithm for an External Depot Production Routing Problem
Algorithms 2021, 14(1), 27; https://0-doi-org.brum.beds.ac.uk/10.3390/a14010027 - 19 Jan 2021
Viewed by 608
Abstract
This study compares the results of a memetic algorithm with those of a two-phase decomposition heuristic on the external depot production routing problem in a supply chain. We modified the classical scheme of a genetic algorithm by replacing the mutation operator with three local search algorithms. The first local search exchanges two customers visited on the same day. The second attempts an exchange between two customers visited in consecutive periods, and the third removes a customer from its current tour for a better insertion into any tour of the same period. Tests carried out on 128 instances from the literature highlight the effectiveness of the memetic algorithm developed in this work compared with the two-phase decomposition heuristic: the results obtained by the memetic algorithm reduce the overall average cost of production, inventory, and transport by 3.65% to 16.73%, with an overall rate of 11.07%, with regard to the results obtained with the two-phase decomposition heuristic. The outcomes will be beneficial to researchers and supply chain managers in the choice and development of heuristics and metaheuristics for solving the production routing problem. Full article
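The first local search described above (exchanging two customers visited on the same day) can be sketched as a pairwise-swap improvement pass over one day's tour. The distance matrix and cost model below are illustrative, not the paper's instances:

```python
def tour_cost(tour, dist):
    """Cost of visiting customers in order, starting and ending at depot 0."""
    path = [0] + tour + [0]
    return sum(dist[a][b] for a, b in zip(path, path[1:]))

def swap_improve(tour, dist):
    """One pass of pairwise customer swaps; keep a swap if it shortens the tour."""
    best = tour[:]
    best_cost = tour_cost(best, dist)
    for i in range(len(tour)):
        for j in range(i + 1, len(tour)):
            cand = best[:]
            cand[i], cand[j] = cand[j], cand[i]
            c = tour_cost(cand, dist)
            if c < best_cost:
                best, best_cost = cand, c
    return best

dist = [[0, 1, 4, 3],
        [1, 0, 1, 4],
        [4, 1, 0, 1],
        [3, 4, 1, 0]]          # symmetric toy distances, node 0 is the depot
tour = swap_improve([2, 1, 3], dist)
print(tour, tour_cost(tour, dist))   # -> [1, 2, 3] 6
```

In a memetic algorithm, a pass like this replaces the mutation step: each offspring is locally improved before re-entering the population.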
Open Access Article
Crowd Evacuation Guidance Based on Combined Action Reinforcement Learning
Algorithms 2021, 14(1), 26; https://0-doi-org.brum.beds.ac.uk/10.3390/a14010026 - 18 Jan 2021
Viewed by 662
Abstract
Existing crowd evacuation guidance systems require the manual design of models and input parameters, incurring a significant workload and a potential for errors. This paper proposes an end-to-end intelligent evacuation guidance method based on deep reinforcement learning and designs an interactive simulation environment based on the social force model. The agent can automatically learn a scene model and path-planning strategy with only scene images as input and directly output dynamic signage information. To address the “dimension disaster” phenomenon of the deep Q network (DQN) algorithm in crowd evacuation, this paper proposes a combined action-space DQN (CA-DQN) algorithm that groups Q network output layer nodes according to action dimensions, which significantly reduces the network complexity and improves system practicality in complex scenes. In this paper, the evacuation guidance system is defined as a reinforcement learning agent and implemented by the CA-DQN method, which provides a novel approach to the evacuation guidance problem. The experiments demonstrate that the proposed method is superior to the static guidance method and on par with the manually designed model method. Full article
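The grouping idea behind CA-DQN can be illustrated by counting output nodes: a joint-action output layer grows multiplicatively with the number of action dimensions, while grouping nodes by dimension grows only additively, with the joint action read off as a per-dimension argmax. The device and direction counts below are illustrative assumptions, not the paper's configuration:

```python
def joint_action(grouped_q):
    """grouped_q: one list of Q-values per action dimension -> per-dimension argmax."""
    return [max(range(len(q)), key=q.__getitem__) for q in grouped_q]

devices, directions = 8, 4
print(directions ** devices)   # joint-action output layer: 65536 nodes
print(devices * directions)    # grouped output layer: 32 nodes

q = [[0.1, 0.9, 0.2, 0.0], [0.5, 0.4, 0.3, 0.2]]   # two devices, four directions
print(joint_action(q))         # -> [1, 0]
```

The trade-off is that per-dimension argmax assumes the dimensions' values can be estimated somewhat independently; the paper's experiments suggest this works well for signage control.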
Open Access Article
Advanced Construction of the Dynamic Matrix in Numerically Efficient Fuzzy MPC Algorithms
Algorithms 2021, 14(1), 25; https://0-doi-org.brum.beds.ac.uk/10.3390/a14010025 - 17 Jan 2021
Viewed by 571
Abstract
A method for the advanced construction of the dynamic matrix for Model Predictive Control (MPC) algorithms with linearization is proposed in this paper. It extends numerically efficient fuzzy algorithms that utilize skillful linearization. These algorithms combine the control performance offered by MPC algorithms with nonlinear optimization (NMPC algorithms) with the numerical efficiency of MPC algorithms based on linear models, in which the optimization problem is a standard, easy-to-solve quadratic programming problem with linear constraints. In the researched algorithms, the free response obtained using a nonlinear process model, together with the future trajectory of the control signals, is used to construct an advanced dynamic matrix utilizing the easy-to-obtain fuzzy model. This leads to very good prediction and control quality, very close to those offered by NMPC algorithms. The proposed approach is tested in the control system of a nonlinear chemical plant: a CSTR reactor with the van de Vusse reaction. Full article
(This article belongs to the Special Issue Model Predictive Control: Algorithms and Applications)
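For context, the construction this method generalizes can be sketched as the standard step-response dynamic matrix of DMC-type MPC; in the fuzzy algorithms above, the coefficients would come from the currently active local linear model. The coefficients, horizons, and function name below are illustrative:

```python
def dynamic_matrix(s, N, Nu):
    """Build the lower-triangular, Toeplitz-like dynamic matrix of DMC.
    s[k] is the step-response coefficient s_{k+1}; N and Nu are the
    prediction and control horizons. Entry (i, j) is s_{i-j+1} for i >= j."""
    return [[s[i - j] if i >= j else 0.0 for j in range(Nu)] for i in range(N)]

s = [0.2, 0.5, 0.8, 1.0]                 # illustrative step-response coefficients
for row in dynamic_matrix(s, N=3, Nu=2):
    print(row)
# prints [0.2, 0.0], [0.5, 0.2], [0.8, 0.5]
```

With a linear model this matrix is constant; the point of the paper's method is to rebuild it at each sampling instant from the fuzzy model so that the quadratic program tracks the nonlinear plant.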
Open Access Article
Analysis of a Traditional and a Fuzzy Logic Enhanced Perturb and Observe Algorithm for the MPPT of a Photovoltaic System
Algorithms 2021, 14(1), 24; https://0-doi-org.brum.beds.ac.uk/10.3390/a14010024 - 14 Jan 2021
Cited by 1 | Viewed by 611
Abstract
This paper presents the results obtained for the maximum power point tracking (MPPT) technique applied to a photovoltaic (PV) system composed of five solar panels in series, using two different methodologies. First, we considered a traditional Perturb and Observe (P&O) algorithm, and in a second stage we applied a Fuzzy Logic Controller (FLC) that uses fuzzy logic concepts to improve the traditional P&O; both were implemented in a boost converter. The main aim of this paper is to study whether an artificial intelligence (AI) based MPPT method can be more efficient, stable, and adaptable than a traditional MPPT method under varying environmental conditions, namely solar irradiation and/or ambient temperature, and also to analyze their behaviour under steady-state conditions. The proposed FLC, with a rule base of 25 rules, outperformed the controller using the traditional P&O algorithm due to its adaptive step size, enabling the FLC to adapt the PV system to changing environmental conditions faster, reaching the correct maximum power point (MPP) faster, and achieving lower oscillations in steady-state conditions, leading to higher generated energy due to lower losses in both steady-state and dynamic environmental conditions. The simulations in this study were performed using MATLAB (Version 2018)/Simulink. Full article
(This article belongs to the Special Issue Algorithms for PID Controller 2021)
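The traditional fixed-step P&O rule that the fuzzy controller improves upon can be sketched as follows: perturb the operating voltage, observe the power change, and keep moving in the direction that increased power. The step size, toy P-V curve, and names are illustrative assumptions (a real controller would perturb a converter duty cycle), and the fuzzy variant would adapt `step` instead of keeping it fixed:

```python
def p_and_o_step(v, p, v_prev, p_prev, step=0.5):
    """Next voltage reference under the classic fixed-step P&O rule."""
    if p >= p_prev:                                  # power improved: keep going
        return v + step if v >= v_prev else v - step
    return v - step if v >= v_prev else v + step     # power dropped: turn around

def pv_power(v):
    return max(0.0, 100 - (v - 17) ** 2)             # toy P-V curve, MPP at 17 V

v_prev, v = 10.0, 10.5
for _ in range(40):
    v_prev, v = v, p_and_o_step(v, pv_power(v), v_prev, pv_power(v_prev))
print(round(v, 1))   # settles into an oscillation within one step of the 17 V MPP
```

The persistent oscillation around the MPP visible here is exactly the steady-state loss that the adaptive-step FLC is designed to reduce.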
Open Access Article
Simheuristics Approaches for Efficient Decision-Making Support in Materials Trading Networks
Algorithms 2021, 14(1), 23; https://0-doi-org.brum.beds.ac.uk/10.3390/a14010023 - 14 Jan 2021
Viewed by 608
Abstract
The distribution process in business-to-business materials trading is among the most complex and least transparent in logistics. The highly volatile environment requires continuous adaptations by the responsible decision-makers, who face a substantial number of potential improvement actions with conflicting goals, such as simultaneously maintaining a high service level and low costs. Simulation-optimisation approaches, for example based on evolutionary algorithms, have been proposed in this context, but at real-world system dimensions they face impractically long computation times. This paper addresses this challenge in two principal streams. On the one hand, reinforcement learning is investigated to reduce the response time of the system in a concrete decision situation. On the other hand, domain-specific information and the definition of equivalent solutions are exploited to support a metaheuristic algorithm. For these approaches, we developed suitable implementations and evaluated them with subsets of real-world data. The results demonstrate that reinforcement learning exploits the idle time between decision situations to learn which decisions might be most promising, thus adding computation time but significantly reducing the response time. Using domain-specific information reduces the number of required simulation runs and guides the search for promising actions. In our experiments, defining equivalent solutions decreased the number of required simulation runs by up to 15%. Full article
(This article belongs to the Special Issue Simulation-Optimization in Logistics, Transportation, and SCM)
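The equivalent-solutions idea can be sketched as memoizing simulation results over canonical representatives of equivalence classes, so one costly run serves every member of a class. The equivalence used below (permutation-invariance of the action vector) is an illustrative assumption, not the paper's actual definition:

```python
sim_runs = 0

def simulate(actions):
    """Stand-in for one costly simulation run."""
    global sim_runs
    sim_runs += 1
    return sum(actions)

cache = {}

def evaluate(actions):
    key = tuple(sorted(actions))       # canonical member of the equivalence class
    if key not in cache:
        cache[key] = simulate(actions)
    return cache[key]

for a in [(3, 1, 2), (1, 2, 3), (2, 3, 1), (3, 2, 2)]:
    evaluate(a)
print(sim_runs)                        # 2 runs served 4 evaluations
```

Inside a metaheuristic, every cache hit is a simulation run the search did not have to pay for, which is where the reported reduction of up to 15% comes from.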
Open Access Article
Algorithmic Aspects of Some Variations of Clique Transversal and Clique Independent Sets on Graphs
Algorithms 2021, 14(1), 22; https://0-doi-org.brum.beds.ac.uk/10.3390/a14010022 - 13 Jan 2021
Viewed by 507
Abstract
This paper studies the maximum-clique independence problem and some variations of the clique transversal problem, such as the {k}-clique, maximum-clique, minus clique, signed clique, and k-fold clique transversal problems, from an algorithmic perspective for k-trees, suns, planar graphs, doubly chordal graphs, clique-perfect graphs, total graphs, split graphs, line graphs, and dually chordal graphs. We give equations to compute the {k}-clique, minus clique, signed clique, and k-fold clique transversal numbers for suns, and we show that the {k}-clique transversal problem is polynomial-time solvable for graphs whose clique transversal numbers equal their clique independence numbers. We also show the relationship between the signed and generalized clique transversal problems and present NP-completeness results for the considered problems on k-trees with unbounded k, planar graphs, doubly chordal graphs, total graphs, split graphs, line graphs, and dually chordal graphs. Full article
(This article belongs to the Special Issue Graph Algorithms and Applications)
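As a point of reference, the basic (unweighted) clique transversal number — the minimum number of vertices meeting every maximal clique — can be computed by brute force on tiny graphs. This sketch is purely illustrative and exponential in general, which is precisely why efficient algorithms for the special graph classes above matter:

```python
from itertools import combinations

def clique_transversal_number(vertices, maximal_cliques):
    """Smallest k such that some k-vertex set intersects every maximal clique."""
    for k in range(len(vertices) + 1):
        for s in combinations(vertices, k):
            if all(set(s) & set(c) for c in maximal_cliques):
                return k
    return len(vertices)

# C4 (cycle on 4 vertices): its maximal cliques are its 4 edges.
print(clique_transversal_number([1, 2, 3, 4],
                                [(1, 2), (2, 3), (3, 4), (4, 1)]))  # -> 2
```

For C4 the answer coincides with a minimum vertex cover, since every maximal clique is an edge; the variants studied in the paper replace the 0/1 vertex choice with {k}-valued, minus, or signed labelings.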
Open Access Article
Dynamic Shortest Paths Methods for the Time-Dependent TSP
Algorithms 2021, 14(1), 21; https://0-doi-org.brum.beds.ac.uk/10.3390/a14010021 - 12 Jan 2021
Viewed by 615
Abstract
The time-dependent traveling salesman problem (TDTSP) asks for a shortest Hamiltonian tour in a directed graph where (asymmetric) arc costs depend on the time the arc is entered. With traffic data abundantly available, methods to optimize routes with respect to time-dependent travel times are widely desired. This holds in particular for the traveling salesman problem, which is a cornerstone of logistic planning. In this paper, we devise column-generation-based IP methods to solve the TDTSP in full generality, for both arc- and path-based formulations. The algorithmic key is a time-dependent shortest path problem, which arises from the pricing problem of the column generation and is of independent interest: finding paths in a time-expanded graph that are acyclic in the underlying (non-expanded) graph. As this problem is computationally too costly, we price over the set of paths that contain no cycles of length k. In addition, we devise, tailored for the TDTSP, several families of valid inequalities, primal heuristics, a propagation method, and a branching rule. Combining these with the time-dependent shortest path pricing, we provide, to our knowledge, the first elaborate method to solve the TDTSP in general and with fully general time-dependence. We also provide results on the complexity and approximability of the TDTSP. In computational experiments on randomly generated instances, we are able to solve the large majority of small instances (20 nodes) to optimality, while closing about two thirds of the remaining gap of the large instances (40 nodes) after one hour of computation. Full article
(This article belongs to the Special Issue Algorithms for Shortest Paths in Dynamic and Evolving Networks)
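The time-dependent shortest path at the core of the pricing step can be sketched as a Dijkstra-style search in which an arc's travel time depends on the time it is entered (FIFO travel times assumed). The cycle restrictions used in the paper's pricing are omitted, and the instance is illustrative:

```python
import heapq

def td_arrival(arcs, source, target, t0=0):
    """Earliest arrival time at `target` when leaving `source` at time t0.
    arcs: dict (u, v) -> function mapping entry time to travel time."""
    best = {source: t0}
    pq = [(t0, source)]
    while pq:
        t, u = heapq.heappop(pq)
        if u == target:
            return t
        if t > best.get(u, float("inf")):
            continue                      # stale queue entry
        for (a, b), travel in arcs.items():
            if a == u:
                nt = t + travel(t)
                if nt < best.get(b, float("inf")):
                    best[b] = nt
                    heapq.heappush(pq, (nt, b))
    return float("inf")

# Arc (0, 1) becomes slow once a "rush hour" starts at time 5.
arcs = {(0, 1): lambda t: 10 if t >= 5 else 2, (1, 2): lambda t: 3}
print(td_arrival(arcs, 0, 2))         # leave at t=0 -> arrive at t=5
print(td_arrival(arcs, 0, 2, t0=5))   # leave at t=5 -> arrive at t=18
```

The pricing problem layers an extra constraint on top of this: the returned path must also be acyclic in the underlying graph, which is what makes it computationally hard.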
Open Access Article
Urban e-Grocery Distribution Design in Pamplona (Spain) Applying an Agent-Based Simulation Model with Horizontal Cooperation Scenarios
Algorithms 2021, 14(1), 20; https://0-doi-org.brum.beds.ac.uk/10.3390/a14010020 - 12 Jan 2021
Viewed by 782
Abstract
E-commerce has boomed in recent decades because of the achievements of information and telecommunications technology, along with changes in lifestyle. More recently, online grocery purchasing (or e-grocery) has also prevailed as a way of doing the weekly shopping, particularly when it includes fresh vegetables and fruit. Furthermore, this type of virtual supermarket shopping is gaining importance as the most efficient delivery system in cost and time. Thus, in this study we evaluated the influence of cooperation-based policies on costs and service quality among different supermarkets in Pamplona, Spain. Concerning methodology, we first carried out a survey in Pamplona with the purpose of modelling demand patterns for e-grocery. Second, we developed an agent-based simulation model for generating scenarios in non-cooperative, limited cooperation, and full cooperation settings, considering the real data obtained from the survey analysis. In this manner, Vehicle Routing Problems (VRPs) and Multi-Depot VRPs (MDVRPs) are dynamically generated and solved within the simulation framework using a biased-randomization algorithm. Finally, the results show significant reductions in distance driven and lead times when employing horizontal cooperation in e-grocery distribution. Full article
(This article belongs to the Special Issue Simulation-Optimization in Logistics, Transportation, and SCM)
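A biased-randomization step of the kind used in such solving frameworks can be sketched as drawing from a ranked candidate list (e.g., savings-ranked route merges) with a geometric bias toward the top, so repeated runs differ while staying close to greedy. The bias parameter and candidate list below are illustrative assumptions:

```python
import random

def biased_index(n, beta=0.3, rng=random):
    """Geometrically biased index into a ranked list of n candidates:
    0 with probability ~beta, 1 with ~beta*(1-beta), and so on
    (truncated at n-1)."""
    i = 0
    while rng.random() > beta and i < n - 1:
        i += 1
    return i

savings = ["merge A-B", "merge B-C", "merge A-C"]   # best savings first
random.seed(42)
print(savings[biased_index(len(savings), beta=0.5)])
```

Embedding this selection rule in a constructive heuristic turns one deterministic solution into a cheap stream of diverse, good-quality solutions, which suits the repeated VRP/MDVRP solves inside the simulation.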
Open Access Article
Sampling Effects on Algorithm Selection for Continuous Black-Box Optimization
Algorithms 2021, 14(1), 19; https://0-doi-org.brum.beds.ac.uk/10.3390/a14010019 - 11 Jan 2021
Viewed by 609
Abstract
In this paper, we investigate how systemic errors due to random sampling impact automated algorithm selection for bound-constrained, single-objective, continuous black-box optimization. We construct a machine learning-based algorithm selector, which uses exploratory landscape analysis features as inputs. We test the accuracy of the recommendations experimentally using resampling techniques and the hold-one-instance-out and hold-one-problem-out validation methods. The results demonstrate that the selector remains accurate even with sampling noise, although not without trade-offs. Full article
Open Access Article
Quantitative Spectral Data Analysis Using Extreme Learning Machines Algorithm Incorporated with PCA
Algorithms 2021, 14(1), 18; https://0-doi-org.brum.beds.ac.uk/10.3390/a14010018 - 11 Jan 2021
Viewed by 511
Abstract
Extreme learning machine (ELM) is a popular randomization-based learning algorithm that provides a fast solution for many regression and classification problems. In this article, we present a method based on ELM for solving the spectral data analysis problem, which is essentially a class of inverse problems: it requires determining the structural parameters of a physical sample from given spectroscopic curves. We propose that the unknown target inverse function be approximated by an ELM, with a linear neuron added to correct the localized effect caused by the Gaussian basis functions. Unlike conventional methods involving intensive numerical computations, under the new conceptual framework the task of performing spectral data analysis becomes a learning task from data. As spectral data are typically high-dimensional, the dimensionality reduction technique of principal component analysis (PCA) is applied to reduce the dimension of the dataset to ensure convergence. The proposed framework is illustrated using a set of simulated Rutherford backscattering spectra. The results show that the proposed method can achieve prediction errors of less than 1%, outperforming the predictions of multi-layer perceptron and numerical-based techniques. The presented method could be implemented as application software for real-time spectral data analysis by integrating it into a spectroscopic data collection system. Full article
(This article belongs to the Special Issue Algorithms in Hyperspectral Data Analysis)
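A minimal ELM can be sketched as a random, untrained hidden layer followed by an output layer solved in closed form by least squares. The Gaussian-basis correction neuron and the PCA step from the paper are omitted here, and the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, hidden=50):
    """Random fixed hidden layer; only the output weights are learned."""
    W = rng.normal(size=(X.shape[1], hidden))   # input weights stay random
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W + b)                      # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = np.sin(3 * X[:, 0])                         # synthetic smooth target
W, b, beta = elm_fit(X, y)
err = np.max(np.abs(elm_predict(X, W, b, beta) - y))
print(round(float(err), 4))                     # small residual on the training set
```

Because only `beta` is trained, fitting reduces to one pseudoinverse, which is what makes ELM fast enough for the real-time use case the paper targets.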
Open Access Article
Mobile-Aware Deep Learning Algorithms for Malaria Parasites and White Blood Cells Localization in Thick Blood Smears
Algorithms 2021, 14(1), 17; https://0-doi-org.brum.beds.ac.uk/10.3390/a14010017 - 11 Jan 2021
Viewed by 651
Abstract
Effective determination of malaria parasitemia is paramount in aiding clinicians to accurately estimate the severity of malaria and guide the response for quality treatment. Microscopy of thick blood smear films is the conventional method for malaria parasitemia determination. Despite its edge over other existing methods, it has been critiqued for being laborious and time consuming, and it equally requires expert knowledge for an efficient manual quantification of the parasitemia. This poses a big challenge to most developing countries, as they are not only highly endemic but equally low-resourced in terms of technical personnel in medical laboratories. This study presents an end-to-end deep learning approach to automate the localization and counting of P. falciparum parasites and white blood cells (WBCs) for effective parasitemia determination. The method involved building computer vision models on a dataset of annotated thick blood smear images. These models were built on pre-trained deep learning models, including the Faster Regional Convolutional Neural Network (Faster R-CNN) and Single Shot Multibox Detector (SSD) models, that help process the obtained digital images. To improve model performance given the limited dataset, data augmentation was applied. Evaluation results showed that our approach reliably detected and returned counts of parasites and WBCs with good precision and recall. A strong correlation was observed between our model-generated counts and the manual counts done by microscopy experts (a Spearman correlation of ρ = 0.998 for parasites and ρ = 0.987 for WBCs). Additionally, our proposed SSD model was quantized and deployed on a mobile smartphone-based inference app to detect malaria parasites and WBCs in situ. Our proposed method can be applied to support malaria diagnostics in settings with few trained microscopy experts yet a large volume of patients to diagnose. Full article
(This article belongs to the Special Issue Machine Learning in Healthcare and Biomedical Application)
Open Access Article
Adaptive Gene Level Mutation
Algorithms 2021, 14(1), 16; https://0-doi-org.brum.beds.ac.uk/10.3390/a14010016 - 09 Jan 2021
Viewed by 634
Abstract
Genetic Algorithms are stochastic optimization methods in which solution candidates, complying with a specific problem representation, are evaluated according to a predefined fitness function. These approaches can provide solutions in various tasks, even where analytic solutions cannot be computed or are too complex to compute. In this paper we show how a certain set of problems is partially solvable, allowing us to grade segments of a solution individually, which results in local and individual tuning of mutation parameters for genes. We demonstrate the efficiency of our method on the N-Queens and travelling salesman problems, where our approach always results in faster convergence and, in most cases, a lower error than the traditional approach. Full article
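The gene-level idea can be sketched on N-Queens, where a gene's partial fitness is naturally the number of conflicts its queen is involved in. The mapping from conflict count to mutation rate below is an illustrative assumption, not the paper's exact operator:

```python
import random

def conflicts(board, i):
    """Conflicts of queen i (board[i] is the row of the queen in column i)."""
    return sum(1 for j in range(len(board)) if j != i and
               (board[j] == board[i] or abs(board[j] - board[i]) == abs(j - i)))

def locus_mutate(board, base_rate=0.05, rng=random):
    """Per-gene mutation: conflicted queens mutate often, settled ones rarely."""
    n = len(board)
    child = board[:]
    for i in range(n):
        rate = base_rate + conflicts(board, i) / n   # per-gene mutation rate
        if rng.random() < rate:
            child[i] = rng.randrange(n)
    return child

board = [0, 1, 2, 3]            # all queens on one diagonal: maximal conflict
print([conflicts(board, i) for i in range(4)])   # -> [3, 3, 3, 3]
```

Because well-placed genes are rarely disturbed, search effort concentrates on the broken parts of a solution, which is the mechanism behind the faster convergence reported above.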
Open Access Article
On Nash Equilibria in Non-Cooperative All-Optical Networks
Algorithms 2021, 14(1), 15; https://0-doi-org.brum.beds.ac.uk/10.3390/a14010015 - 09 Jan 2021
Viewed by 528
Abstract
We consider the problem of determining a routing in all-optical networks in which some pairs of nodes want to communicate. In particular, we study this problem from the point of view of a network provider that has to design suitable payment functions for non-cooperative agents, corresponding to the pairs of nodes wishing to communicate. The network provider aims at inducing stable routings (i.e., routings corresponding to Nash equilibria) using a low number of wavelengths. We consider three different kinds of local knowledge that agents may exploit to compute their payments, leading to three corresponding information levels. Under complete information, the network provider can design a payment function inducing the agents to reach a Nash equilibrium mirroring any desired routing. If the price to an agent is computed only as a function of the wavelengths used along connecting paths (minimal level) or edges (intermediate level), the most reasonable functions either do not admit Nash equilibria or admit very inefficient ones, i.e., with the largest possible price of anarchy. However, by suitably restricting the network topology, a constant price of anarchy for chains and rings and a logarithmic one for trees can be obtained under the minimal and intermediate levels, respectively. Full article
(This article belongs to the Special Issue Algorithmic Game Theory 2020)
Open Access Article
Subpath Queries on Compressed Graphs: A Survey
Algorithms 2021, 14(1), 14; https://0-doi-org.brum.beds.ac.uk/10.3390/a14010014 - 05 Jan 2021
Viewed by 681
Abstract
Text indexing is a classical algorithmic problem that has been studied for over four decades: given a text T, pre-process it off-line so that, later, we can quickly count and locate the occurrences of any string (the query pattern) in T in time proportional to the query’s length. The earliest optimal-time solution to the problem, the suffix tree, dates back to 1973 and requires up to two orders of magnitude more space than the plain text just to be stored. In the year 2000, two breakthrough works showed that efficient queries can be achieved without this space overhead: a fast index can be stored in space proportional to the text’s entropy. These contributions had an enormous impact in bioinformatics: today, virtually every DNA aligner employs compressed indexes. Recent trends have considered more powerful compression schemes (dictionary compressors) and generalizations of the problem to labeled graphs: after all, texts can be viewed as labeled directed paths. In turn, since finite state automata can be considered a particular case of labeled graphs, these findings created a bridge between the fields of compressed indexing and regular language theory, ultimately allowing regular languages to be indexed and promising to shed new light on problems such as regular expression matching. This survey is a gentle introduction to the main landmarks of the fascinating journey that took us from suffix trees to today’s compressed indexes for labeled graphs and regular languages. Full article
(This article belongs to the Special Issue Combinatorial Methods for String Processing)
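The classical counting query the survey starts from can be sketched with a plain suffix array: counting a pattern's occurrences is two binary searches over the lexicographically sorted suffixes. The compressed indexes surveyed answer the same query without storing the suffixes; this naive quadratic-space construction is for illustration only:

```python
from bisect import bisect_left, bisect_right

def suffix_array(t):
    """Indices of t's suffixes in lexicographic order (naive construction)."""
    return sorted(range(len(t)), key=lambda i: t[i:])

def count_occurrences(t, sa, p):
    """Number of occurrences of pattern p in t via two binary searches."""
    suffixes = [t[i:] for i in sa]           # kept implicit by a real index
    lo = bisect_left(suffixes, p)
    hi = bisect_right(suffixes, p + "\xff")  # assumes text chars < '\xff'
    return hi - lo

sa = suffix_array("banana")
print(count_occurrences("banana", sa, "ana"))   # -> 2
```

Every occurrence of `p` is a prefix of some suffix, so the matches form one contiguous range in the sorted order; compressed indexes (and their graph generalizations) compute that range's endpoints directly.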
Open Access Editorial
Special Issue on Process Mining and Emerging Applications
Algorithms 2021, 14(1), 13; https://0-doi-org.brum.beds.ac.uk/10.3390/a14010013 - 05 Jan 2021
Viewed by 535
Abstract
This article is the editorial of the “Process Mining and Emerging Applications” (https://www [...] Full article
(This article belongs to the Special Issue Process Mining and Emerging Applications)
Open Access Article
A Dynamic Clause Specific Initial Weight Assignment for Solving Satisfiability Problems Using Local Search
Algorithms 2021, 14(1), 12; https://0-doi-org.brum.beds.ac.uk/10.3390/a14010012 - 05 Jan 2021
Cited by 1 | Viewed by 474
Abstract
For decades, the use of weights has proven its superior ability to improve the overall performance of dynamic local search weighting algorithms. This paper proposes a new mechanism in which the initial clause weights are dynamically allocated based on the problem’s structure. The new mechanism starts by examining each clause in terms of its size, the extent of its links, and its proximity to other clauses. Based on this examination, we categorized the clauses into four categories: (1) clauses small in size and linked with a small neighborhood, (2) clauses small in size and linked with a large neighborhood, (3) clauses large in size and linked with a small neighborhood, and (4) clauses large in size and linked with a large neighborhood. Then, the initial weights are dynamically allocated according to each clause category. To examine the efficacy of the dynamic initial weight assignment, we conducted an extensive study of our new technique on many problems. The study concluded that the dynamic allocation of initial weights contributes significantly to improving the performance and quality of the search process. To further investigate the new mechanism’s effect, we compared it with the state-of-the-art algorithms belonging to the same family in terms of using weights, and it was clear that the new mechanism outperformed the state-of-the-art clause weighting algorithms. We also show that the new mechanism could be generalized, with minor changes, to be utilized within general-purpose state-of-the-art stochastic local search weighting algorithms. Full article
(This article belongs to the Section Randomized, Online, and Approximation Algorithms)

Open AccessArticle
A Bayesian Nonparametric Learning Approach to Ensemble Models Using the Proper Bayesian Bootstrap
Algorithms 2021, 14(1), 11; https://0-doi-org.brum.beds.ac.uk/10.3390/a14010011 - 03 Jan 2021
Cited by 1 | Viewed by 793
Abstract
Bootstrap resampling techniques, introduced by Efron and Rubin, can be presented in a general Bayesian framework, approximating the distribution of a statistical functional ϕ(F), where F is a random distribution function. Efron’s and Rubin’s bootstrap procedures can be extended by introducing an informative prior through the Proper Bayesian bootstrap. In this paper, different bootstrap techniques are used and compared in predictive classification and regression models based on ensemble approaches, i.e., bagging models involving decision trees. The Proper Bayesian bootstrap, proposed by Muliere and Secchi, is used to sample the posterior distribution over trees, introducing prior distributions on the covariates and the target variable. The results obtained are compared with those of other competitive procedures employing different bootstrap techniques. The empirical analysis reports results on simulated and real data. Full article
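As a minimal sketch of the Bayesian side of this framework, Rubin's Bayesian bootstrap replaces Efron's multinomial resampling with Dirichlet(1, …, 1) weights over the observations; shown here for the mean functional only (the Proper Bayesian bootstrap of Muliere and Secchi would additionally place prior mass on a prior distribution, which is omitted).

```python
import numpy as np

# Rubin's Bayesian bootstrap for the mean functional phi(F) = E_F[X]:
# each replicate draws Dirichlet(1,...,1) weights over the observations
# instead of resampling them with a multinomial, as Efron's bootstrap does.

def bayesian_bootstrap_mean(data, n_rep=1000, rng=None):
    rng = np.random.default_rng(rng)
    data = np.asarray(data, dtype=float)
    w = rng.dirichlet(np.ones(len(data)), size=n_rep)  # (n_rep, n) weights
    return w @ data  # posterior draws of the weighted mean
```

In the bagging setting of the paper, each tree in the ensemble would be fit on one such weighted replicate rather than on a multinomial resample.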

Open AccessArticle
Tuning of Multivariable Model Predictive Control for Industrial Tasks
Algorithms 2021, 14(1), 10; https://0-doi-org.brum.beds.ac.uk/10.3390/a14010010 - 03 Jan 2021
Cited by 1 | Viewed by 851
Abstract
This work is concerned with the tuning of the parameters of Model Predictive Control (MPC) algorithms used for industrial tasks, i.e., compensation of disturbances that affect the process (uncontrolled process inputs and measurement noises). The discussed simulation optimisation tuning procedure is computationally simple since the consecutive parameters are optimised separately, and it requires only a very limited number of simulations. It also makes it possible to perform a multicriteria control assessment, as several control quality measures may be taken into account. The effectiveness of the tuning method is demonstrated for a multivariable distillation column. Two cases are considered: a perfect model case and a more practical case in which the model is characterised by some error. It is shown that the discussed tuning approach makes it possible to obtain very good control quality, much better than in the most common case in which all tuning parameters are constant. Full article
(This article belongs to the Special Issue Model Predictive Control: Algorithms and Applications)

Open AccessArticle
Constructing Minimally 3-Connected Graphs
Algorithms 2021, 14(1), 9; https://0-doi-org.brum.beds.ac.uk/10.3390/a14010009 - 01 Jan 2021
Viewed by 680
Abstract
A 3-connected graph is minimally 3-connected if removal of any edge destroys 3-connectivity. We present an algorithm for constructing minimally 3-connected graphs based on the results in (Dawes, JCTB 40, 159–168, 1986) using two operations: adding an edge between non-adjacent vertices and splitting a vertex. To test sets of vertices and edges for 3-compatibility, which depends on the cycles of the graph, we develop a method for obtaining the cycles of G′ from the cycles of G, where G′ is obtained from G by one of the two operations above. We eliminate isomorphic duplicates using certificates generated by McKay’s isomorphism checker nauty. The algorithm consecutively constructs the non-isomorphic minimally 3-connected graphs with n vertices and m edges from the non-isomorphic minimally 3-connected graphs with n−1 vertices and m−2 edges, n−1 vertices and m−3 edges, and n−2 vertices and m−3 edges. Full article
(This article belongs to the Special Issue Selected Algorithmic Papers From CSR 2020)

Open AccessArticle
Hardness of an Asymmetric 2-Player Stackelberg Network Pricing Game
Algorithms 2021, 14(1), 8; https://0-doi-org.brum.beds.ac.uk/10.3390/a14010008 - 31 Dec 2020
Viewed by 557
Abstract
Consider a communication network represented by a directed graph G=(V,E) of n nodes and m edges. Assume that edges in E are partitioned into two sets: a set C of edges with a fixed non-negative real cost, and a set P of edges whose costs are instead priced by a leader. This is done with the final intent of maximizing a revenue that will be returned for their use by a follower, whose goal in turn is to select for his communication purposes a subnetwork of G minimizing a given objective function of the edge costs. In this paper, we study the natural setting in which the follower computes a single-source shortest paths tree of G, and then returns to the leader a payment equal to the sum of the selected priceable edges. Thus, the problem can be modeled as a one-round two-player Stackelberg Network Pricing Game, but with the novelty that the objective functions of the two players are asymmetric, in that the revenue returned to the leader for any of her selected edges is not equal to the cost of such an edge in the follower’s solution. As is shown, for any ϵ>0 and unless P=NP, the leader’s problem of finding an optimal pricing is not approximable within n^(1/2−ϵ), while, if G is unweighted and the leader can only decide which of her edges enter in the solution, then the problem is not approximable within n^(1/3−ϵ). On the positive side, we devise a strongly polynomial-time O(n)-approximation algorithm, which favorably compares against the classic approach based on a single-price algorithm. Finally, motivated by practical applications, we consider the special cases in which edges in C are unweighted and happen to form two popular network topologies, namely stars and chains, and we provide a comprehensive characterization of their computational tractability. Full article
(This article belongs to the Special Issue Graph Algorithms and Network Dynamics)

Open AccessArticle
A Modified Network-Wide Road Capacity Reliability Analysis Model for Improving Transportation Sustainability
Algorithms 2021, 14(1), 7; https://0-doi-org.brum.beds.ac.uk/10.3390/a14010007 - 29 Dec 2020
Viewed by 637
Abstract
Reserve capacity is the core of reliability analysis based on road network capacity. This article optimizes its bi-level programming model: the upper-level model adds a service level coefficient and a travel time limit, which unifies capacity and travel time reliability analysis to a certain extent. A stochastic user equilibrium (SUE) model based on origin is selected as the lower-level model; it adds capacity constraints to optimize the distribution results and allows the evaluation of reliability to be continued. Through the SUE model, the iterative step size of the method of successive averages can be optimized to improve the efficiency of the algorithm. After that, the article designs the algorithm flow of reliability analysis based on road network capacity and verifies the feasibility of the designed algorithm with an example. Finally, combined with the conclusions of the reliability analysis, this article puts forward some effective methods to improve the reliability of the urban road network. Full article

Open AccessArticle
Improving Scalable K-Means++
Algorithms 2021, 14(1), 6; https://0-doi-org.brum.beds.ac.uk/10.3390/a14010006 - 27 Dec 2020
Cited by 1 | Viewed by 949
Abstract
Two new initialization methods for K-means clustering are proposed. Both proposals are based on applying a divide-and-conquer approach to the K-means‖ type of initialization strategy. The second proposal also uses multiple lower-dimensional subspaces produced by the random projection method for the initialization. The proposed methods are scalable and can be run in parallel, which makes them suitable for initializing large-scale problems. In the experiments, the proposed methods are compared to the K-means++ and K-means‖ methods using an extensive set of reference and synthetic large-scale datasets. Concerning the latter, a novel high-dimensional clustering data generation algorithm is given. The experiments show that the proposed methods compare favorably to the state-of-the-art by improving clustering accuracy and the speed of convergence. We also observe that the currently most popular K-means++ initialization behaves like the random one in very high-dimensional cases. Full article
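For context, the K-means++ baseline that both proposals are compared against can be sketched in a few lines: each new center is drawn with probability proportional to the squared distance to the nearest center chosen so far (the paper's divide-and-conquer and random-projection variants are not reproduced here).

```python
import numpy as np

# Standard K-means++ seeding: D^2-weighted sampling of k initial centers.

def kmeans_pp_init(X, k, rng=None):
    rng = np.random.default_rng(rng)
    centers = [X[rng.integers(len(X))]]          # first center uniformly
    for _ in range(k - 1):
        # squared distance of every point to its nearest chosen center
        d2 = np.min([np.square(X - c).sum(axis=1) for c in centers], axis=0)
        probs = d2 / d2.sum()
        centers.append(X[rng.choice(len(X), p=probs)])
    return np.array(centers)
```

K-means‖ replaces this inherently sequential loop with a few rounds that each sample many candidates in parallel, which is what makes the divide-and-conquer scaling in the paper possible.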

Open AccessArticle
Re-Pair in Small Space
Algorithms 2021, 14(1), 5; https://0-doi-org.brum.beds.ac.uk/10.3390/a14010005 - 25 Dec 2020
Viewed by 796
Abstract
Re-Pair is a grammar compression scheme with favorably good compression rates. The computation of Re-Pair comes with the cost of maintaining large frequency tables, which makes it hard to compute Re-Pair on large-scale data sets. As a solution for this problem, we present, given a text of length n whose characters are drawn from an integer alphabet of size σ = n^O(1), an O(min(n², n² lg log_τ n lg lg lg n / log_τ n)) time algorithm computing Re-Pair with max((n/c) lg n, n lg τ) + O(lg n) bits of working space including the text space, where c ≥ 1 is a fixed user-defined constant and τ is the sum of σ and the number of non-terminals. We give variants of our solution working in parallel or in the external memory model. Unfortunately, the algorithm seems not practical since a preliminary version already needs roughly one hour for computing Re-Pair on one megabyte of text. Full article
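For readers unfamiliar with the scheme itself, a naive (large-space) Re-Pair can be sketched as follows: repeatedly replace the most frequent adjacent pair of symbols with a fresh non-terminal until no pair occurs twice. This sketch keeps the full frequency table in memory; the paper's contribution is precisely avoiding that, working in max((n/c) lg n, n lg τ) + O(lg n) bits.

```python
from collections import Counter

# Naive Re-Pair grammar compression: returns the compressed sequence and
# the grammar rules mapping each non-terminal to the pair it replaced.

def repair(text):
    seq = list(text)
    rules = {}
    next_sym = 0
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, freq = pairs.most_common(1)[0]
        if freq < 2:
            break
        nt = ('N', next_sym)          # fresh non-terminal
        next_sym += 1
        rules[nt] = pair
        out, i = [], 0                # replace non-overlapping occurrences
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(nt)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, rules
```

Decompression simply expands non-terminals via `rules` until only terminals remain.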
(This article belongs to the Special Issue Combinatorial Methods for String Processing)

Open AccessArticle
A Discrete-Continuous Algorithm for Free Flight Planning
Algorithms 2021, 14(1), 4; https://0-doi-org.brum.beds.ac.uk/10.3390/a14010004 - 25 Dec 2020
Viewed by 614
Abstract
We propose a hybrid discrete-continuous algorithm for flight planning in free flight airspaces. In a first step, our discrete-continuous optimization for enhanced resolution (DisCOptER) method computes a globally optimal approximate flight path on a discretization of the problem using the A* method. This route initializes a Newton method that converges rapidly to the smooth optimum in a second step. The correctness, accuracy, and complexity of the method are governed by the choice of the crossover point that determines the coarseness of the discretization. We analyze the optimal choice of the crossover point and demonstrate the asymptotic superiority of DisCOptER over a purely discrete approach. Full article
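The two-stage idea can be illustrated on a toy one-dimensional problem: a coarse discrete search locates the basin of the global optimum (the A* analogue), after which Newton's method refines it to high accuracy. The objective below is an illustrative stand-in, not the paper's flight-cost functional.

```python
# Toy discrete-continuous minimization: grid search for a warm start,
# then Newton iterations for fast local convergence. n_grid plays the
# role of the crossover point: it must be fine enough that the grid
# minimizer lies in the basin of the true global minimum.

def two_stage_minimize(f, df, ddf, lo, hi, n_grid=50, iters=20):
    # Stage 1: globally optimal point on the discretization.
    xs = [lo + (hi - lo) * i / n_grid for i in range(n_grid + 1)]
    x = min(xs, key=f)
    # Stage 2: Newton's method from the discrete warm start.
    for _ in range(iters):
        x -= df(x) / ddf(x)
    return x
```

Too coarse a grid risks converging to a local optimum; too fine a grid wastes the work that Newton's quadratic convergence would do far more cheaply, which is exactly the trade-off the crossover-point analysis addresses.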
(This article belongs to the Special Issue Algorithms for Shortest Paths in Dynamic and Evolving Networks)

Open AccessArticle
Perpetual American Cancellable Standard Options in Models with Last Passage Times
Algorithms 2021, 14(1), 3; https://0-doi-org.brum.beds.ac.uk/10.3390/a14010003 - 24 Dec 2020
Viewed by 549
Abstract
We derive explicit solutions to the perpetual American cancellable standard put and call options in an extension of the Black–Merton–Scholes model. It is assumed that the contracts are cancelled at the last hitting times for the underlying asset price process of some constant upper or lower levels which are not stopping times with respect to the observable filtration. We show that the optimal exercise times are the first times at which the asset price reaches some lower or upper constant levels. The proof is based on the reduction of the original optimal stopping problems to the associated free-boundary problems and the solution of the latter problems by means of the smooth-fit conditions. Full article
(This article belongs to the Special Issue Algorithms for Sequential Analysis)

Open AccessArticle
Sliding Mode Control Algorithms for Anti-Lock Braking Systems with Performance Comparisons
Algorithms 2021, 14(1), 2; https://0-doi-org.brum.beds.ac.uk/10.3390/a14010002 - 23 Dec 2020
Cited by 1 | Viewed by 617
Abstract
This paper presents the performance of two sliding mode control algorithms, the Lyapunov-based sliding mode controller (LSMC) and the reaching-law-based sliding mode controller (RSMC), with novel variants designed and applied to the anti-lock braking system (ABS), which is known to be a strongly nonlinear system. The goal is to prove their superior performance over existing control approaches, in the sense that the LSMC and RSMC do not bring additional computational complexity, as they rely on a reduced number of tuning parameters. The LSMC and RSMC handle the uncertainty in the process model, which comes from unmodeled dynamics and a simplification of the actuator dynamics leading to a reduced second-order process. Complete design details and a stability analysis are provided. Additionally, performance comparisons with several adaptive, neural-network-based, and model-free sliding mode control algorithms reveal the robustness of the proposed LSMC and RSMC controllers, in spite of the reduced number of tuning parameters. The robustness and reduced computational burden of the controllers, validated on the real-world complex ABS, make them an attractive solution for practical industrial implementations. Full article
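A generic reaching-law sliding mode controller for a second-order error dynamics can be sketched as below: the sliding surface is s = ė + λe and the control is u = −λė − k·sign(s). The plant (a double integrator with a bounded matched disturbance) and the gains are illustrative assumptions, not the paper's ABS model or tuning.

```python
import math

# Reaching-law sliding mode control on x_ddot = u + d(t):
# surface s = de + lam*e, control u = -lam*de - k*sign(s).
# With k larger than the disturbance bound, s is driven to zero in
# finite time and the error then decays along the surface.

def simulate(lam=2.0, k=5.0, dt=1e-3, T=5.0):
    x, dx = 1.0, 0.0                     # initial tracking error and rate
    for i in range(int(T / dt)):
        t = i * dt
        s = dx + lam * x                 # sliding variable
        sign_s = 1 if s > 0 else -1 if s < 0 else 0
        u = -lam * dx - k * sign_s       # reaching-law control
        d = 0.5 * math.sin(3 * t)        # bounded matched disturbance
        ddx = u + d
        dx += ddx * dt                   # semi-implicit Euler step
        x += dx * dt
    return x, dx
```

Note the two tuning parameters only (λ and k), mirroring the reduced parameter count the paper emphasizes; the discontinuous sign term causes the well-known chattering that practical variants smooth out.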
(This article belongs to the Special Issue 2020 Selected Papers from Algorithms Editorial Board Members)

Open AccessArticle
New Approach for Radial Basis Function Based on Partition of Unity of Taylor Series Expansion with Respect to Shape Parameter
Algorithms 2021, 14(1), 1; https://0-doi-org.brum.beds.ac.uk/10.3390/a14010001 - 22 Dec 2020
Cited by 1 | Viewed by 583
Abstract
The radial basis function (RBF) method is gaining popularity in function interpolation as well as in solving partial differential equations thanks to its accuracy and simplicity. RBF methods have almost spectral accuracy, and their implementation is easy and does not depend on the location of the points or the dimensionality of the problem. However, the stability and accuracy of RBF methods depend significantly on the shape parameter, which is primarily impacted by the basis function and the node distribution. At a small value of the shape parameter, the RBF becomes more accurate, but unstable. Several approaches have been followed in the open literature to overcome the instability issue. One approach optimizes the solver in order to improve the stability of ill-conditioned matrices. Another searches for the optimal value of the shape parameter. Alternatively, modified bases are used to overcome instability. In the open literature, radial basis function using QR factorization (RBF-QR), stabilized expansion of Gaussian radial basis function (RBF-GA), rational radial basis function (RBF-RA), and Hermite-based RBFs are among the approaches used to change the basis. In this paper, the Taylor series is used to expand the RBF with respect to the shape parameter. Our analyses showed that the Taylor series alone is not sufficient to resolve the stability issue, especially away from the reference point of the expansion. Consequently, a new approach is proposed based on the partition of unity (PU) of the RBF with respect to the shape parameter. The method ensures that the RBF has a weak dependency on the shape parameter, thereby providing a consistent accuracy for interpolation and derivative approximation. Several benchmarks are performed to assess the accuracy of the proposed approach. The novelty of the present approach is in providing a means to achieve a reasonable accuracy for RBF interpolation without the need to pinpoint a specific value for the shape parameter, which is the case for the original RBF interpolation. Full article
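The baseline whose shape-parameter sensitivity motivates the paper is plain Gaussian RBF interpolation, sketched below in one dimension: solve a dense linear system for the basis coefficients, then evaluate the expansion. The node set and the value eps are illustrative; the paper's partition-of-unity expansion in eps is not reproduced here.

```python
import numpy as np

# Plain 1D Gaussian RBF interpolation: phi(r) = exp(-(eps*r)^2).
# The interpolation matrix A becomes severely ill-conditioned as the
# shape parameter eps shrinks, which is the instability the paper's
# partition-of-unity approach is designed to remove.

def rbf_interpolate(x_nodes, f_nodes, x_eval, eps=1.0):
    phi = lambda r: np.exp(-(eps * r) ** 2)
    A = phi(np.abs(x_nodes[:, None] - x_nodes[None, :]))
    lam = np.linalg.solve(A, f_nodes)          # basis coefficients
    B = phi(np.abs(x_eval[:, None] - x_nodes[None, :]))
    return B @ lam
```

Rerunning this with progressively smaller eps shows the accuracy/stability trade-off directly: the condition number of A grows rapidly as eps decreases, even as the flat-limit accuracy improves.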
