
Algorithms, Volume 14, Issue 2 (February 2021) – 39 articles

Cover Story: The introduction of automated parcel locker (APL) systems is one possible approach to improving urban logistics activities. Using the city of Dortmund as a case study, we propose a simulation-optimization approach that integrates a system dynamics simulation model (SDSM) with a multiperiod capacitated facility location problem (CFLP) as a decision support tool for future APL implementations. We first build an SDSM to estimate the number of APLs and then a CFLP model to determine their locations within the city's districts. Finally, we use Monte Carlo simulation to estimate the costs and reliability level under random demands. We evaluate three e-shopper rate scenarios with the SDSM and then analyze ten detailed demand configurations for the mid-size scenario with our CFLP model.
Open Access Article
Optimal Cooking Procedure Presentation System for Multiple Recipes and Investigating Its Effect
Algorithms 2021, 14(2), 67; https://0-doi-org.brum.beds.ac.uk/10.3390/a14020067 - 23 Feb 2021
Viewed by 560
Abstract
As the number of users who cook their own food increases, there is growing demand for optimal cooking procedures covering multiple dishes, but the optimal procedure varies from user to user owing to differences in cooking skill and environment. In this paper, we propose a system that presents optimal cooking procedures enabling parallel cooking of multiple recipes. We formulate the choice of an optimal cooking procedure as a task scheduling problem by creating a task graph for each recipe. To reduce execution time, we propose two extensions to the preprocessing and bounding operation of PDF/IHS, a sequential optimization algorithm for the task scheduling problem, each taking the characteristics of cooking into account. We confirmed that the proposed algorithm can reduce execution time by up to 44% compared to the base PDF/IHS, and that execution time grows by only about 900 times even when the number of required searches increases 10,000-fold. In addition, in an experiment with three recipes and 10 participants each, it was confirmed that following the optimal cooking procedure for a certain menu reduced the actual cooking time by up to 13 min (14.8%) compared with users cooking freely. Full article
(This article belongs to the Special Issue Feature Papers in Algorithms for Multidisciplinary Applications)
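The task-graph formulation in the abstract can be illustrated with a minimal greedy list scheduler; the dish names and durations below are invented, and the greedy ready-list policy is only a stand-in for the PDF/IHS optimization studied in the paper:

```python
from collections import deque

def schedule(tasks, deps):
    """Greedy list scheduling of recipe tasks for a single cook.
    tasks maps task name -> duration (minutes); deps maps task -> prerequisites.
    Returns (execution order, total time)."""
    indeg = {t: 0 for t in tasks}
    succ = {t: [] for t in tasks}
    for t, pre in deps.items():
        for p in pre:
            indeg[t] += 1
            succ[p].append(t)
    ready = deque(sorted(t for t in tasks if indeg[t] == 0))
    order, total = [], 0
    while ready:
        t = ready.popleft()          # pick any ready task (greedy choice)
        order.append(t)
        total += tasks[t]            # one cook, so tasks run back to back
        for s in succ[t]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return order, total

# Invented two-dish example: pasta plus a simple sauce.
tasks = {"boil water": 5, "cook pasta": 9, "chop onion": 3, "simmer sauce": 8}
deps = {"cook pasta": ["boil water"], "simmer sauce": ["chop onion"]}
order, makespan = schedule(tasks, deps)
```

A real scheduler would additionally model unattended steps (boiling, simmering) overlapping with active work, which is where the optimization in the paper pays off.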

Open Access Article
Detection of Representative Variables in Complex Systems with Interpretable Rules Using Core-Clusters
Algorithms 2021, 14(2), 66; https://0-doi-org.brum.beds.ac.uk/10.3390/a14020066 - 22 Feb 2021
Viewed by 473
Abstract
In this paper, we present a new framework dedicated to the robust detection of representative variables in high-dimensional spaces with a potentially limited number of observations. Representative variables are selected by using an original regularization strategy: they are the centers of specific variable clusters, denoted CORE-clusters, which respect fully interpretable constraints. Each CORE-cluster contains more than a predefined number of variables, and each pair of its variables has a coherent behavior in the observed data. The key advantage of our regularization strategy is therefore that it only requires tuning two intuitive parameters: the minimal dimension of the CORE-clusters and the minimum level of similarity which gathers their variables. Interpreting the role played by a selected representative variable is additionally straightforward, as it has a similar observed behavior to a controlled number of other variables. After introducing and justifying this variable selection formalism, we propose two algorithmic strategies to detect the CORE-clusters, one of which scales particularly well to high-dimensional data. Results obtained on synthetic as well as real data are finally presented. Full article
(This article belongs to the Special Issue Interpretability, Accountability and Robustness in Machine Learning)
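A rough sketch of the CORE-cluster idea (not the paper's actual algorithms): greedily grow variable clusters whose pairwise similarities all reach a threshold `tau` and whose size reaches `min_size`, then report one representative per cluster. The similarity matrix below is invented:

```python
def core_clusters(sim, min_size, tau):
    """Greedy CORE-cluster sketch. sim is a symmetric dict-of-dicts of
    pairwise similarities in [0, 1]. A cluster qualifies if it has at least
    min_size variables whose pairwise similarities all reach tau; its
    representative maximizes the minimum similarity to the rest."""
    variables = sorted(sim)
    unused, clusters = set(variables), []
    for v in variables:
        if v not in unused:
            continue
        cluster = [v]
        for u in variables:  # grow while keeping pairwise coherence
            if u in unused and u != v and all(sim[u][w] >= tau for w in cluster):
                cluster.append(u)
        if len(cluster) >= min_size:
            rep = max(cluster, key=lambda c: min(
                (sim[c][w] for w in cluster if w != c), default=1.0))
            clusters.append((rep, sorted(cluster)))
            unused -= set(cluster)
    return clusters

# Toy similarity matrix: {a, b, c} behave coherently, d is isolated.
names = "abcd"
S = {x: {y: 0.0 for y in names} for x in names}
for x in names:
    S[x][x] = 1.0
for x, y in [("a", "b"), ("a", "c"), ("b", "c")]:
    S[x][y] = S[y][x] = 0.9
found = core_clusters(S, min_size=3, tau=0.8)
```

The two tunable arguments mirror the two parameters the abstract highlights: the minimal cluster size and the minimum similarity level.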

Open Access Article
Approximation Ratios of RePair, LongestMatch and Greedy on Unary Strings
Algorithms 2021, 14(2), 65; https://0-doi-org.brum.beds.ac.uk/10.3390/a14020065 - 20 Feb 2021
Viewed by 428
Abstract
A grammar-based compressor is an algorithm that receives a word and outputs a context-free grammar that only produces this word. The approximation ratio for a single input word is the size of the grammar produced for this word divided by the size of a smallest grammar for this word. The worst-case approximation ratio of a grammar-based compressor for a given word length is the largest approximation ratio over all input words of that length. In this work, we study the worst-case approximation ratio of the algorithms Greedy, RePair and LongestMatch on unary strings, i.e., strings over a single symbol. Our main contribution is an improved upper bound of O((log n)^8 · (log log n)^3) for the worst-case approximation ratio of Greedy. In addition, we show a lower bound of 1.34847194 for the worst-case approximation ratio of Greedy, and that RePair and LongestMatch have a worst-case approximation ratio of log_2(3). Full article
(This article belongs to the Special Issue Algorithms and Data-Structures for Compressed Computation)
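RePair itself is easy to sketch: repeatedly replace the most frequent adjacent pair of symbols with a fresh nonterminal until no pair occurs twice. This toy version counts pairs with overlaps, which is good enough to illustrate the unary case the paper studies:

```python
from collections import Counter

def repair(word):
    """Minimal RePair sketch. Returns (final right-hand side, rules).
    Grammar size here = len(final sequence) + 2 * number of rules."""
    seq = list(word)
    rules = {}
    next_id = 0
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs or pairs.most_common(1)[0][1] < 2:
            break                       # no pair repeats: stop
        pair = pairs.most_common(1)[0][0]
        nt = f"R{next_id}"
        next_id += 1
        rules[nt] = pair
        out, i = [], 0                  # replace non-overlapping occurrences
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(nt)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, rules

seq, rules = repair("a" * 8)            # unary input, as in the paper
size = len(seq) + sum(2 for _ in rules)
```

On a^8 the grammar is S → R1 R1, R1 → R0 R0, R0 → a a, of size 6, which is the binary-decomposition behavior underlying the log_2(3) analysis.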

Open Access Article
Number of Financial Indicators as a Factor of Multi-Criteria Analysis via the TOPSIS Technique: A Municipal Case Study
Algorithms 2021, 14(2), 64; https://0-doi-org.brum.beds.ac.uk/10.3390/a14020064 - 19 Feb 2021
Cited by 1 | Viewed by 657
Abstract
Multi-criteria analysis is a decision-making and efficiency assessment tool for application in both the private and public sectors. Its application is preceded by the selection of suitable indicators and a homogeneous set of variants, as well as suitable methods based on the nature of the input data. The goal of the submitted research is to highlight the importance of selecting suitable indicators using a case study assessment of the financial health of a municipality—more precisely, the efficiency of management of this municipality. Four key indicators, thirty-two homogeneous subjects, and one multi-criteria analysis method were identified in this study based on the theoretical foundations of the specific issue. These elements were processed into a total of 14 variants depending on the number of assessed indicators. Then, these results were subjected to statistical verification alongside verification using the Jaccard index. Based on the acquired results, we highlight the need for correct and expert identification of the relevant sets of alternatives (the criteria matrix) and expert discussion, which should precede the selection of the assessed indicators and objectify this selection process as much as possible. Assessment based on a low number of indicators was shown to be insufficient, highly variable, and diverse, and these differences were partially eliminated as the number of assessed indicators increased. Full article
(This article belongs to the Special Issue Algorithms and Models for Dynamic Multiple Criteria Decision Making)
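The TOPSIS technique applied in the study follows a standard recipe: vector-normalize the decision matrix, weight it, and rank alternatives by relative closeness to the ideal solution. The indicator values and weights below are illustrative, not the paper's municipal data:

```python
import math

def topsis(matrix, weights, benefit):
    """Standard TOPSIS. matrix rows are alternatives, columns are criteria;
    benefit[j] is True when criterion j should be maximized.
    Returns closeness coefficients in [0, 1]; higher is better."""
    m, n = len(matrix), len(matrix[0])
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n)]
    V = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)]
         for i in range(m)]
    cols = list(zip(*V))
    ideal = [max(c) if benefit[j] else min(c) for j, c in enumerate(cols)]
    anti = [min(c) if benefit[j] else max(c) for j, c in enumerate(cols)]
    return [math.dist(r, anti) / (math.dist(r, ideal) + math.dist(r, anti))
            for r in V]

# Three hypothetical municipalities scored on two benefit-type indicators.
scores = topsis([[7, 9], [8, 7], [6, 6]], weights=[0.5, 0.5],
                benefit=[True, True])
best = max(range(len(scores)), key=scores.__getitem__)
```

The study's point is that rankings of this kind are sensitive to which and how many indicator columns enter the matrix.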

Open Access Article
Molecular Subtyping and Outlier Detection in Human Disease Using the Paraclique Algorithm
Algorithms 2021, 14(2), 63; https://0-doi-org.brum.beds.ac.uk/10.3390/a14020063 - 19 Feb 2021
Viewed by 564
Abstract
Recent discoveries of distinct molecular subtypes have led to remarkable advances in treatment for a variety of diseases. While subtyping via unsupervised clustering has received a great deal of interest, most methods rely on basic statistical or machine learning methods. At the same time, techniques based on graph clustering, particularly clique-based strategies, have been successfully used to identify disease biomarkers and gene networks. A graph theoretical approach based on the paraclique algorithm is described that can easily be employed to identify putative disease subtypes and serve as an aid in outlier detection as well. The feasibility and potential effectiveness of this method are demonstrated on publicly available gene co-expression data derived from patient samples covering twelve different disease families. Full article
(This article belongs to the Special Issue Biological Knowledge Discovery from Big Data)
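The paraclique idea can be sketched in a few lines: take a maximum clique (brute force here, so only viable for tiny graphs) and absorb, or "glom", vertices that miss at most `glom` edges to it. The co-expression graph below is invented:

```python
from itertools import combinations

def paraclique(adj, glom=1):
    """Paraclique sketch: maximum clique by brute force, then absorb
    vertices missing at most `glom` edges to it. adj maps vertex -> set
    of neighbors; not the optimized implementation used in the paper."""
    nodes = sorted(adj)
    best = []
    for k in range(len(nodes), 0, -1):      # largest clique first
        for cand in combinations(nodes, k):
            if all(v in adj[u] for u, v in combinations(cand, 2)):
                best = list(cand)
                break
        if best:
            break
    extra = [v for v in nodes if v not in best
             and sum(1 for u in best if v not in adj[u]) <= glom]
    return sorted(best + extra)

# Toy graph: {1, 2, 3, 4} is a clique; vertex 5 misses one edge to it.
edges = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4),
         (5, 1), (5, 2), (5, 3)]
adj = {v: set() for v in range(1, 6)}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)
group = paraclique(adj, glom=1)
```

Raising `glom` loosens the clique requirement, which is what lets paracliques tolerate the noise in real co-expression data.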

Open Access Article
k-Circle Formation and k-epf by Asynchronous Robots
Algorithms 2021, 14(2), 62; https://0-doi-org.brum.beds.ac.uk/10.3390/a14020062 - 18 Feb 2021
Viewed by 479
Abstract
For a given positive integer k, the k-circle formation problem asks a set of autonomous, asynchronous robots to form disjoint circles having k robots each at distinct locations, centered at a set of fixed points in the Euclidean plane. The robots are identical, anonymous, oblivious, and they operate in Look–Compute–Move cycles. This paper studies the k-circle formation problem and its relationship with the k-epf problem, a generalized version of the embedded pattern formation problem, which asks exactly k robots to reach and remain at each fixed point. First, the k-circle formation problem is studied in a setting where the robots have an agreement on the common direction and orientation of one of the axes. We have characterized all the configurations and the values of k, for which the k-circle formation problem is deterministically unsolvable in this setting. For the remaining configurations and the values of k, a deterministic distributed algorithm has been proposed, in order to solve the problem. It has been proved that for the initial configurations with distinct robot positions, if the k-circle formation problem is deterministically solvable then the k-epf problem is also deterministically solvable. It has been shown that by modifying the proposed algorithm, the k-epf problem can be solved deterministically. Full article

Open Access Article
Network Inference from Gene Expression Data with Distance Correlation and Network Topology Centrality
Algorithms 2021, 14(2), 61; https://0-doi-org.brum.beds.ac.uk/10.3390/a14020061 - 15 Feb 2021
Viewed by 616
Abstract
The reconstruction of gene regulatory networks from gene expression data can effectively uncover regulatory relationships between genes and provide a deeper understanding of biological control processes. Non-linear dependence is a common problem in the regulatory mechanisms of gene regulatory networks. Various methods based on information theory have been developed to infer networks, but they introduce many redundant regulatory relationships during network inference. A recently introduced measure called distance correlation has, in many cases, shown strong and computationally efficient detection of non-linear correlations. In this paper, we propose a novel regulatory network inference method called the distance-correlation and network topology centrality network (DCNTC) method. The method is based on and extends the Local Density Measurement of Network Node Centrality (LDCNET) algorithm: it uses the same network centrality ranking as LDCNET but a simpler and more efficient distance correlation measure of association between genes. In this work, we integrate distance correlation and network topological centrality into the inference of the structure of gene regulatory networks. We select optimal thresholds based on the characteristics of the distribution of each gene pair with respect to distance correlation. Experiments were carried out on four network datasets and their performance was compared. Full article
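The distance correlation measure itself (Székely's sample statistic) is straightforward to compute from double-centered distance matrices; a pure-Python O(n^2) version, independent of the paper's pipeline:

```python
import math

def distance_correlation(x, y):
    """Sample distance correlation of two equal-length sequences.
    Returns 1 for exact linear dependence and 0 for a degenerate
    (constant) argument."""
    n = len(x)

    def centered(v):
        d = [[abs(a - b) for b in v] for a in v]
        row = [sum(r) / n for r in d]          # d is symmetric, so row
        grand = sum(row) / n                   # means equal column means
        return [[d[i][j] - row[i] - row[j] + grand for j in range(n)]
                for i in range(n)]

    A, B = centered(x), centered(y)
    dcov2 = sum(A[i][j] * B[i][j] for i in range(n) for j in range(n)) / n**2
    dvarx = sum(a * a for r in A for a in r) / n**2
    dvary = sum(b * b for r in B for b in r) / n**2
    if dvarx == 0 or dvary == 0:
        return 0.0
    return math.sqrt(max(dcov2, 0.0) / math.sqrt(dvarx * dvary))

x = [1.0, 2.0, 3.0, 4.0, 5.0]
r_linear = distance_correlation(x, [2 * v for v in x])
r_const = distance_correlation(x, [3.0] * 5)
```

Unlike Pearson correlation, a distance correlation of zero characterizes independence, which is why it suits the non-linear dependencies in regulatory networks.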

Open Access Article
A Primal-Dual Interior-Point Method for Facility Layout Problem with Relative-Positioning Constraints
Algorithms 2021, 14(2), 60; https://0-doi-org.brum.beds.ac.uk/10.3390/a14020060 - 13 Feb 2021
Viewed by 727
Abstract
We consider the facility layout problem (FLP), in which we find the arrangement of departments with the smallest material handling cost, expressed as the product of distances and flows between departments. It is known that FLP can be formulated as a linear programming problem if the relative positioning of departments is specified, and thus can be solved to optimality. In this paper, we describe a custom interior-point algorithm for solving FLP with relative positioning constraints (FLPRC) that is much faster than the standard methods used in general-purpose solvers. We build a compact formulation of FLPRC and its dual, which enables us to establish the optimality conditions very quickly. We use these optimality conditions to implement a primal-dual interior-point method with an efficient Newton step computation that exploits the special structure of the Hessian. We confirm the effectiveness of our proposed model through applications to several well-known benchmark data sets. Our algorithm finds the optimal solution much faster than the standard approaches. Full article
(This article belongs to the Special Issue Evolutionary Algorithms and Applications)

Open Access Article
An Investigation of Alternatives to Transform Protein Sequence Databases to a Columnar Index Schema
Algorithms 2021, 14(2), 59; https://0-doi-org.brum.beds.ac.uk/10.3390/a14020059 - 11 Feb 2021
Viewed by 595
Abstract
Mass spectrometers enable identifying proteins in biological samples, leading to biomarkers for biological process parameters and diseases. However, bioinformatic evaluation of mass spectrometer data needs a standardized workflow and a system that stores the protein sequences. Due to their standardization and maturity, relational systems are a good fit for storing protein sequences. Hence, in this work, we present a schema for distributed column-based database management systems using a column-oriented index to store sequence data. In order to achieve high storage performance, it was necessary to choose a well-performing strategy for transforming the protein sequence data from the FASTA format to the new schema. Therefore, we applied an in-memory map, the HDDmap database engine, and an extended radix tree, and evaluated their performance. The results show that our proposed extended radix tree performs best regarding memory consumption and runtime. Hence, the radix tree is a suitable data structure for transforming protein sequences into the indexed schema. Full article
(This article belongs to the Special Issue Biological Knowledge Discovery from Big Data)
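The paper's extended radix tree is not reproduced here, but a minimal radix tree (compressed trie) shows the core mechanic, splitting an edge label at the point where a new key diverges; the protein fragments below are invented:

```python
def _common_prefix(a, b):
    i = 0
    while i < min(len(a), len(b)) and a[i] == b[i]:
        i += 1
    return i

class RadixNode:
    """Minimal radix tree over sequence strings; a sketch of the kind of
    structure the paper extends, not its implementation."""

    def __init__(self):
        self.edges = {}        # first character -> [edge label, child]
        self.terminal = False  # True if a whole sequence ends here

    def insert(self, key):
        node = self
        while key:
            first = key[0]
            if first not in node.edges:
                leaf = RadixNode()
                leaf.terminal = True
                node.edges[first] = [key, leaf]
                return
            label, child = node.edges[first]
            k = _common_prefix(label, key)
            if k < len(label):                 # split the edge at the fork
                mid = RadixNode()
                mid.edges[label[k]] = [label[k:], child]
                node.edges[first] = [label[:k], mid]
                child = mid
            node, key = child, key[k:]
        node.terminal = True

    def contains(self, key):
        node = self
        while key:
            entry = node.edges.get(key[0])
            if entry is None or not key.startswith(entry[0]):
                return False
            node, key = entry[1], key[len(entry[0]):]
        return node.terminal

tree = RadixNode()
for seq in ["MKTAYIAK", "MKTA", "MSSHH"]:      # toy protein fragments
    tree.insert(seq)
```

Because shared prefixes are stored once per edge, the structure compresses the heavy prefix redundancy typical of large FASTA collections.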

Open Access Article
Adaptive Quick Reduct for Feature Drift Detection
Algorithms 2021, 14(2), 58; https://0-doi-org.brum.beds.ac.uk/10.3390/a14020058 - 11 Feb 2021
Viewed by 539
Abstract
Data streams are ubiquitous and related to the proliferation of low-cost mobile devices, sensors, wireless networks and the Internet of Things. While it is well known that complex phenomena are not stationary and exhibit a concept drift when observed for a sufficiently long time, relatively few studies have addressed the related problem of feature drift. In this paper, a variation of the QuickReduct algorithm suitable to process data streams is proposed and tested: it builds an evolving reduct that dynamically selects the relevant features in the stream, removing the redundant ones and adding the newly relevant ones as soon as they become such. Tests on five publicly available datasets with an artificially injected drift have confirmed the effectiveness of the proposed method. Full article
(This article belongs to the Special Issue Granular Computing: From Foundations to Applications)
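The static QuickReduct algorithm that the proposed adaptive variant builds on can be sketched with the rough-set dependency degree; the toy decision table below is invented:

```python
def dependency(rows, attrs, decision):
    """Rough-set dependency degree: the fraction of rows whose values on
    `attrs` determine the decision attribute unambiguously."""
    attrs = list(attrs)
    blocks = {}
    for row in rows:
        sig = tuple(row[a] for a in attrs)
        blocks.setdefault(sig, set()).add(row[decision])
    consistent = {sig for sig, outs in blocks.items() if len(outs) == 1}
    return sum(tuple(row[a] for a in attrs) in consistent
               for row in rows) / len(rows)

def quick_reduct(rows, conditionals, decision):
    """Classic (static) QuickReduct: greedily add the attribute that raises
    the dependency degree most, until it matches the full attribute set.
    The paper's contribution is an adaptive variant that maintains such a
    reduct while the stream, and its relevant features, drift."""
    target = dependency(rows, conditionals, decision)
    reduct = []
    while dependency(rows, reduct, decision) < target:
        best = max((a for a in conditionals if a not in reduct),
                   key=lambda a: dependency(rows, reduct + [a], decision))
        reduct.append(best)
    return reduct

# Toy table: decision d copies attribute a, so {a} alone is a reduct.
table = [
    {"a": 0, "b": 0, "c": 1, "d": 0},
    {"a": 0, "b": 1, "c": 0, "d": 0},
    {"a": 1, "b": 0, "c": 0, "d": 1},
    {"a": 1, "b": 1, "c": 1, "d": 1},
]
reduct = quick_reduct(table, ["a", "b", "c"], "d")
```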

Open Access Article
Smart Black Box 2.0: Efficient High-Bandwidth Driving Data Collection Based on Video Anomalies
Algorithms 2021, 14(2), 57; https://0-doi-org.brum.beds.ac.uk/10.3390/a14020057 - 09 Feb 2021
Viewed by 728
Abstract
Autonomous vehicles require fleet-wide data collection for continuous algorithm development and validation. The smart black box (SBB) intelligent event data recorder has been proposed as a system for prioritized high-bandwidth data capture. This paper extends the SBB by applying anomaly detection and action detection methods for generalized event-of-interest (EOI) detection. An updated SBB pipeline is proposed for the real-time capture of driving video data. A video dataset is constructed to evaluate the SBB on real-world data for the first time. SBB performance is assessed by comparing the compression of normal and anomalous data and by comparing our prioritized data recording with a FIFO strategy. The results show that SBB data compression can increase the anomalous-to-normal memory ratio by ∼25%, while the prioritized recording strategy increases the anomalous-to-normal count ratio when compared to a FIFO strategy. We compare the real-world dataset SBB results to a baseline SBB given ground-truth anomaly labels and conclude that improved general EOI detection methods will greatly improve SBB performance. Full article
(This article belongs to the Section Databases and Data Structures)
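The prioritized-versus-FIFO comparison can be illustrated with two toy recording buffers; the anomaly scores are invented and the heap-based eviction is only a sketch of the SBB's prioritization, not its actual pipeline:

```python
import heapq
from collections import deque

def fifo_record(events, capacity):
    """Baseline: keep only the most recent `capacity` events."""
    buf = deque(maxlen=capacity)
    buf.extend(events)
    return list(buf)

def prioritized_record(events, capacity):
    """Toy prioritized recorder: keep the `capacity` highest-scored events,
    evicting the least anomalous one when the buffer is full.
    events are (anomaly_score, event_id) pairs."""
    heap = []                                  # min-heap: worst kept on top
    for event in events:
        if len(heap) < capacity:
            heapq.heappush(heap, event)
        elif event > heap[0]:                  # better than the worst kept
            heapq.heapreplace(heap, event)
    return sorted(heap, reverse=True)

# A mostly-normal drive (score 0.1) containing two anomalous clips.
stream = [(0.1, i) for i in range(8)]
stream.insert(2, (0.9, 100))
stream.insert(5, (0.8, 101))
kept_fifo = fifo_record(stream, 3)
kept_prio = prioritized_record(stream, 3)
```

The FIFO buffer ends up holding only routine footage, while the prioritized buffer retains both anomalies, the count-ratio effect the abstract reports.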

Open Access Article
Constant-Time Complete Visibility for Robots with Lights: The Asynchronous Case
Algorithms 2021, 14(2), 56; https://0-doi-org.brum.beds.ac.uk/10.3390/a14020056 - 09 Feb 2021
Viewed by 717
Abstract
We consider the distributed setting of N autonomous mobile robots that operate in Look-Compute-Move (LCM) cycles and use colored lights (the robots with lights model). We assume obstructed visibility where a robot cannot see another robot if a third robot is positioned between them on the straight line segment connecting them. In this paper, we consider the problem of positioning N autonomous robots on a plane so that every robot is visible to all others (this is called the Complete Visibility problem). This problem is fundamental, as it provides a basis to solve many other problems under obstructed visibility. In this paper, we provide the first, asymptotically optimal, O(1) time, O(1) color algorithm for Complete Visibility in the asynchronous setting. This significantly improves on an O(N)-time translation of the existing O(1) time, O(1) color semi-synchronous algorithm to the asynchronous setting. The proposed algorithm is collision-free, i.e., robots do not share positions, and their paths do not cross. We also introduce a new technique for moving robots in an asynchronous setting that may be of independent interest, called Beacon-Directed Curve Positioning. Full article

Open Access Article
Optimal Clustering in Stable Instances Using Combinations of Exact and Noisy Ordinal Queries
Algorithms 2021, 14(2), 55; https://0-doi-org.brum.beds.ac.uk/10.3390/a14020055 - 08 Feb 2021
Viewed by 610
Abstract
This work studies clustering algorithms that operate with ordinal or comparison-based queries (operations), a situation that arises in many active-learning applications where “dissimilarities” between data points are evaluated by humans. Typically, exact answers are costly (or difficult to obtain in large amounts) while possibly erroneous answers have low cost. Motivated by these considerations, we study algorithms with non-trivial trade-offs between the number of exact (high-cost) operations and noisy (low-cost) operations with provable performance guarantees. Specifically, we study a class of polynomial-time graph-based clustering algorithms (termed Single-Linkage) which are widely used in practice and that guarantee exact solutions for stable instances in several clustering problems (these problems are NP-hard in the worst case). We provide several variants of these algorithms using ordinal operations and, in particular, non-trivial trade-offs between the number of high-cost and low-cost operations that are used. Our algorithms still guarantee exact solutions for stable instances of k-medoids clustering, and they use a rather small number of high-cost operations, without increasing the low-cost operations too much. Full article
(This article belongs to the Special Issue Graph Algorithms and Network Dynamics)
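In its exact-query form, the Single-Linkage family the paper starts from is just standard single-linkage agglomeration; the paper's variants replace the exact distance comparisons below with a mix of exact (costly) and noisy (cheap) ordinal queries. The point set is invented:

```python
def single_linkage(dist, k):
    """Plain single-linkage agglomeration down to k clusters.
    dist is a symmetric distance matrix (list of lists)."""
    clusters = [{i} for i in range(len(dist))]
    while len(clusters) > k:
        # merge the two clusters with the smallest single-link distance
        a, b = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: min(dist[p][q]
                                      for p in clusters[ij[0]]
                                      for q in clusters[ij[1]]))
        clusters[a] |= clusters[b]
        del clusters[b]
    return [sorted(c) for c in clusters]

# Two well-separated groups on a line: a "stable" instance.
pts = [0.0, 0.5, 1.0, 10.0, 10.5]
D = [[abs(p - q) for q in pts] for p in pts]
groups = single_linkage(D, 2)
```

On stable instances such as this one, the merge order is robust, which is what lets noisy comparisons answer most of the queries safely.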
Open Access Article
Granular Classification for Imbalanced Datasets: A Minkowski Distance-Based Method
Algorithms 2021, 14(2), 54; https://0-doi-org.brum.beds.ac.uk/10.3390/a14020054 - 07 Feb 2021
Viewed by 520
Abstract
The problem of classification for imbalanced datasets is frequently encountered in practical applications. The data to be classified in this problem are skewed, i.e., the samples of one class (the minority class) are much fewer than those of the other classes (the majority classes). When dealing with imbalanced datasets, most classifiers encounter a common limitation: they often obtain better classification performance on the majority classes than on the minority class. To alleviate this limitation, in this study, a fuzzy rule-based modeling approach using information granules is proposed. Information granules, as entities derived and abstracted from data, can be used to describe and capture the characteristics (distribution and structure) of data from both majority and minority classes. Since the geometric characteristics of information granules depend on the distance measures used in the granulation process, the main idea of this study is to construct information granules on each class of imbalanced data using Minkowski distance measures and then to establish the classification models by using “If-Then” rules. The experimental results involving synthetic and publicly available datasets reflect that the proposed Minkowski distance-based method can produce information granules with a series of geometric shapes and construct granular models with satisfying classification performance for imbalanced datasets. Full article
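The Minkowski distance at the heart of the method is a one-liner; varying the exponent p is what changes the geometry of the resulting granules (diamonds for p = 1, spheres for p = 2, boxes as p grows):

```python
def minkowski(u, v, p):
    """Minkowski distance with exponent p: p = 1 is Manhattan, p = 2 is
    Euclidean, and large p approaches the Chebyshev (max-coordinate)
    distance."""
    return sum(abs(a - b) ** p for a, b in zip(u, v)) ** (1.0 / p)

d1 = minkowski((0, 0), (3, 4), 1)     # 7.0 (Manhattan)
d2 = minkowski((0, 0), (3, 4), 2)     # 5.0 (Euclidean)
d64 = minkowski((0, 0), (3, 4), 64)   # close to max(3, 4) = 4
```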

Open Access Article
K-Means Clustering Algorithm Based on Chaotic Adaptive Artificial Bee Colony
Algorithms 2021, 14(2), 53; https://0-doi-org.brum.beds.ac.uk/10.3390/a14020053 - 07 Feb 2021
Viewed by 557
Abstract
K-means clustering is a popular technique in data analysis and data mining. To remedy the defects of relying on the initialization and converging towards the local minimum in the K-means clustering (KMC) algorithm, a chaotic adaptive artificial bee colony (CAABC) clustering algorithm is presented to optimally partition objects into K clusters in this study. The algorithm adopts the max–min distance product method for initialization, and a new fitness function is adapted to the KMC algorithm. The iteration follows an adaptive search strategy, and Fuch chaotic disturbance is added to avoid converging on a local optimum, with the step length decreasing linearly during the iteration. In order to overcome the shortcomings of the classic ABC algorithm, the simulated annealing criterion is introduced to the CAABC. Finally, the resulting algorithm is compared with other stochastic heuristic algorithms on 20 standard test functions and 11 datasets. The results demonstrate that CAABC-K-means has an advantage in convergence speed and accuracy over some conventional algorithms for solving clustering problems. Full article
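The deterministic max–min initialization the pipeline starts from, combined with plain K-means iterations, can be sketched as follows (the chaotic and bee-colony components of CAABC are omitted; the points are invented):

```python
import math

def maxmin_init(points, k):
    """Max-min distance seeding: repeatedly pick the point farthest from
    its nearest already-chosen center (deterministic, unlike random
    seeding, which is the initialization defect CAABC targets)."""
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points, key=lambda p: min(math.dist(p, c)
                                                     for c in centers)))
    return centers

def kmeans(points, k, iters=20):
    centers = maxmin_init(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                      # assign to nearest center
            nearest = min(range(k), key=lambda i: math.dist(p, centers[i]))
            clusters[nearest].append(p)
        centers = [tuple(sum(x) / len(x) for x in zip(*cl)) if cl
                   else centers[i] for i, cl in enumerate(clusters)]
    return centers, clusters

pts = [(0.0, 0.0), (0.2, 0.1), (5.0, 5.0), (5.1, 4.9)]
centers, clusters = kmeans(pts, 2)
```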

Open Access Article
Optimization Method of Customized Shuttle Bus Lines under Random Condition
Algorithms 2021, 14(2), 52; https://0-doi-org.brum.beds.ac.uk/10.3390/a14020052 - 05 Feb 2021
Viewed by 714
Abstract
Transit network optimization can effectively improve transit efficiency, improve traffic conditions, and reduce environmental pollution. In order to better meet passengers' travel demands, the factors influencing passengers' satisfaction with a customized bus are fully analyzed. Taking the minimum operating cost of the enterprise as the objective and considering passengers' random travel-time constraints, the customized bus routes are optimized. K-means clustering is used to classify passengers' needs based on an analysis of the travel demand for the customized shuttle bus, and the stochastic uncertainty of travel times in the line's operating environment is fully considered. On the basis of meeting passengers' travel-time requirements and minimizing the cost of service operation, an optimization model that maximizes the overall satisfaction of passengers and public transit enterprises is constructed. The smaller the value of the objective function, the lower the operating cost; a negative value indicates a profit. The random constraints of the model are handled by a deterministic processing method, and a hybrid intelligent algorithm is then used to solve the model: a stochastic simulation technique trains the stochastic constraints to approximate uncertain functions, and an improved immune clonal algorithm solves the vehicle routing problem. Finally, a case study shows that the method can reasonably and efficiently optimize the customized shuttle bus lines in the region. Full article

Open Access Article
Effects of Nonlinearity and Network Architecture on the Performance of Supervised Neural Networks
Algorithms 2021, 14(2), 51; https://0-doi-org.brum.beds.ac.uk/10.3390/a14020051 - 05 Feb 2021
Viewed by 471
Abstract
The nonlinearity of activation functions used in deep learning models is crucial for the success of predictive models. Several simple nonlinear functions, including the Rectified Linear Unit (ReLU) and Leaky-ReLU (L-ReLU), are commonly used in neural networks to impose nonlinearity. In practice, these functions remarkably enhance model accuracy. However, there is limited insight into the effects of nonlinearity in neural networks on their performance. Here, we investigate the performance of neural network models as a function of nonlinearity using ReLU and L-ReLU activation functions in the context of different model architectures and data domains. We use entropy as a measurement of randomness, to quantify the effects of nonlinearity in different architecture shapes on the performance of neural networks. We show that the ReLU nonlinearity is a better choice for activation function mostly when the network has a sufficient number of parameters. However, we found that image classification models with transfer learning seem to perform well with L-ReLU in fully connected layers. We show that the entropy of hidden layer outputs in neural networks can fairly represent the fluctuations in information loss as a function of nonlinearity. Furthermore, we investigate the entropy profile of shallow neural networks as a way of representing their hidden layer dynamics. Full article
(This article belongs to the Special Issue Supervised and Unsupervised Classification Algorithms)
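The entropy measurement used in this paper can be sketched as follows: histogram the outputs of an activation function and compute the Shannon entropy of the histogram. The toy activations, bin count, and the exaggerated leaky slope below are invented for illustration and do not reproduce the paper’s experiments.

```python
import math

def relu(x):
    return max(0.0, x)

def leaky_relu(x, a=0.01):
    return x if x > 0 else a * x

def entropy(values, bins=10):
    """Shannon entropy (base 2) of a histogram of activation values."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return 0.0
    counts = [0] * bins
    for v in values:
        counts[min(int((v - lo) / (hi - lo) * bins), bins - 1)] += 1
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

# Toy pre-activations with both signs; the leaky slope 0.3 is exaggerated
# so the negative branch is visible in a coarse histogram.
pre = [(-1) ** i * (i % 7) / 7 for i in range(200)]
h_relu = entropy([relu(v) for v in pre])
h_leaky = entropy([leaky_relu(v, a=0.3) for v in pre])
# ReLU maps every negative input to 0, concentrating the output histogram,
# so its entropy is lower here.
```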
Open AccessArticle
The Common Warehouse Model and Profit Distribution of the Express Industry
Algorithms 2021, 14(2), 50; https://0-doi-org.brum.beds.ac.uk/10.3390/a14020050 - 04 Feb 2021
Viewed by 667
Abstract
The sharing mode of the logistics industry can effectively solve new problems arising from the rapid development of the express industry. However, the sharing mode can only be sustained in the long term if the resulting profits are distributed reasonably. This paper discusses the concept of unified warehouse and distribution, designs an operation mode for a unified warehouse and distribution system, and solves the profit distribution problem of a unified warehouse and distribution alliance based on an improved Shapley value method. Firstly, the traditional Shapley value method is improved by using a comprehensive correction factor that combines the proportions of investment, risk, and innovative research contributions. Secondly, each factor’s weight is determined by the analytic hierarchy process (AHP), and the profits are distributed according to the contribution of each express enterprise to the alliance. Finally, an example is given to verify the validity of the modified algorithm, showing that the modified Shapley value method can effectively solve the problem of profit distribution. Full article
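The improved-Shapley idea can be sketched in a few lines. The correction scheme below (shifting each classical share by the deviation of a combined weight from 1/n) is one common variant and is only assumed to resemble the paper’s; the coalition values and weights are invented.

```python
import math
from itertools import permutations

def shapley(players, v):
    """Classical Shapley value: average marginal contribution over all orderings."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            phi[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    n_fact = math.factorial(len(players))
    return {p: phi[p] / n_fact for p in players}

# Invented coalition profits for three express firms sharing one warehouse.
vals = {frozenset(): 0, frozenset('A'): 10, frozenset('B'): 12,
        frozenset('C'): 8, frozenset('AB'): 30, frozenset('AC'): 24,
        frozenset('BC'): 26, frozenset('ABC'): 48}
v = lambda s: vals[frozenset(s)]
phi = shapley('ABC', v)

# One common correction scheme: shift each share by (w_i - 1/n) * v(N),
# where w_i aggregates AHP weights for investment, risk and innovation
# (weights invented here). The total payout is preserved because the
# shifts sum to zero.
w = {'A': 0.40, 'B': 0.35, 'C': 0.25}
corrected = {p: phi[p] + (w[p] - 1 / 3) * v('ABC') for p in 'ABC'}
```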
Open AccessArticle
A Novel Approach for Cognitive Clustering of Parkinsonisms through Affinity Propagation
Algorithms 2021, 14(2), 49; https://0-doi-org.brum.beds.ac.uk/10.3390/a14020049 - 04 Feb 2021
Viewed by 753
Abstract
Cluster analysis is widely applied in the neuropsychological field for exploring patterns in cognitive profiles, but traditional hierarchical and non-hierarchical approaches can often be poorly effective, or even inapplicable, on certain types of data. Moreover, these traditional approaches require the number of clusters to be specified in advance, based on a priori knowledge that is not always available. For this reason, we propose a novel method for cognitive clustering through the affinity propagation (AP) algorithm. In particular, we applied AP clustering to the regression residuals of the Mini Mental State Examination scores (a commonly used screening tool for cognitive impairment) of a cohort of 49 Parkinson’s disease, 48 Progressive Supranuclear Palsy, and 44 healthy control participants. We found four clusters: two clusters (68 and 30 participants) showed almost intact cognitive performance, one had a moderate cognitive impairment (34 participants), and the last had a more extensive cognitive deficit (8 participants). The findings showed, for the first time, an intra- and inter-diagnostic heterogeneity in the cognitive profile of patients with Parkinsonisms. Our novel method of unsupervised learning could represent a reliable tool for supporting neuropsychologists in understanding the natural structure of cognitive performance in neurodegenerative diseases. Full article
(This article belongs to the Special Issue Machine Learning in Healthcare and Biomedical Application)
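For readers unfamiliar with affinity propagation, a bare-bones sketch on 1-D scores is given below. It follows the standard responsibility/availability updates with damping and a median preference; the residual values are invented, and the paper’s analysis uses the full MMSE regression pipeline rather than this toy.

```python
def affinity_propagation(x, damping=0.7, iters=200):
    """Bare-bones affinity propagation on 1-D scores (e.g. regression
    residuals): negative squared distance as similarity, median preference."""
    n = len(x)
    s = [[-(x[i] - x[k]) ** 2 for k in range(n)] for i in range(n)]
    pref = sorted(v for row in s for v in row)[n * n // 2]
    for i in range(n):
        s[i][i] = pref                      # self-similarity = preference
    r = [[0.0] * n for _ in range(n)]       # responsibilities
    a = [[0.0] * n for _ in range(n)]       # availabilities
    for _ in range(iters):
        for i in range(n):
            for k in range(n):
                rival = max(a[i][kk] + s[i][kk] for kk in range(n) if kk != k)
                r[i][k] = damping * r[i][k] + (1 - damping) * (s[i][k] - rival)
        for i in range(n):
            for k in range(n):
                pos = sum(max(0.0, r[j][k]) for j in range(n) if j not in (i, k))
                new = pos if i == k else min(0.0, r[k][k] + pos)
                a[i][k] = damping * a[i][k] + (1 - damping) * new
    # each point's exemplar maximizes responsibility + availability
    return [max(range(n), key=lambda k: r[i][k] + a[i][k]) for i in range(n)]

# Toy residuals: two clearly separated cognitive profiles.
residuals = [-1.2, -1.0, -1.1, -0.9, 2.0, 2.1, 1.9, 2.2]
labels = affinity_propagation(residuals)
```

Unlike K-means, the number of clusters is not fixed in advance; it emerges from the preference value.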
Open AccessArticle
Safe Approximation—An Efficient Solution for a Hard Routing Problem
Algorithms 2021, 14(2), 48; https://0-doi-org.brum.beds.ac.uk/10.3390/a14020048 - 02 Feb 2021
Viewed by 543
Abstract
The Disjoint Connecting Paths problem and its capacitated generalization, the Unsplittable Flow problem, play an important role in practical applications such as communication network design and routing. These tasks are NP-hard in general, but various polynomial-time approximations are known. Nevertheless, the approximations tend to be either too loose (allowing large deviation from the optimum) or too complicated, often rendering them impractical in large, complex networks. Our goal is therefore a relatively simple, efficient algorithm for the unsplittable flow problem in large directed graphs, where the task is NP-hard and is known to remain NP-hard even to approximate up to a large factor. The efficiency of our algorithm is achieved by sacrificing a small part of the solution space. This also represents a novel paradigm for approximation: rather than giving up the search for an exact solution, we restrict the solution space to the subset that is most important for applications, excluding only a small part that is marginal in some well-defined sense. Specifically, the sacrificed part only contains scenarios in which some edges are very close to saturation. Since nearly saturated links are undesirable in practical applications, excluding near saturation is quite reasonable from the practical point of view. We refer to solutions that contain no nearly saturated edges as safe solutions, and call the approach safe approximation. We prove that this safe approximation can be carried out efficiently: once we restrict ourselves to safe solutions, we can find the exact optimum by a randomized polynomial-time algorithm. Full article
(This article belongs to the Special Issue Networks, Communication, and Computing Vol. 2)
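The notion of a safe solution admits a very direct check: compute per-edge loads and reject any routing with a nearly saturated edge. The graph, capacities, and threshold below are invented for illustration; the paper’s contribution is the randomized algorithm that optimizes over such solutions, not this filter.

```python
def is_safe(routing, capacities, eps=0.1):
    """A routing is 'safe' if no edge carries more than (1 - eps) of its
    capacity, i.e. no edge is nearly saturated."""
    load = {}
    for path, demand in routing:
        for edge in zip(path, path[1:]):
            load[edge] = load.get(edge, 0) + demand
    return all(load[e] <= (1 - eps) * capacities[e] for e in load)

capacities = {('s', 'a'): 10, ('a', 't'): 10, ('s', 'b'): 10, ('b', 't'): 10}
ok_routing = [(('s', 'a', 't'), 5)]
hot_routing = [(('s', 'a', 't'), 5), (('s', 'b', 't'), 9.5)]
# the second routing pushes ('s', 'b') to 9.5 > 0.9 * 10, so it is unsafe
```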
Open AccessArticle
Modeling and Optimization in Resource Sharing Systems: Application to Bike-Sharing with Unequal Demands
Algorithms 2021, 14(2), 47; https://0-doi-org.brum.beds.ac.uk/10.3390/a14020047 - 30 Jan 2021
Viewed by 738
Abstract
The imbalanced distribution of shared bikes in a dockless bike-sharing system (a typical example of a resource-sharing system), which may lead to customer churn and lost profit, has gradually become a vital problem for bike-sharing firms and their users. To resolve the problem, we first formulate the bike-sharing system as a Markovian queueing network with higher-demand and lower-demand nodes, which provides the steady-state probabilities of having a certain number of bikes at a node. A model reduction method is then designed to reduce the complexity of the proposed model. Subsequently, we adopt an operator-based relocation strategy to optimize the reduced network. The optimization model maximizes the total profit and acts as a decision-making tool for operators to determine the optimal relocation frequency. The results reveal that, under the influence of the various arrival rates at different nodes, most of the shared bikes may eventually gather at one low-demand node in the long run. Moreover, the decrease in the number of bikes at the high-demand nodes is more sensitive to the unequal demands, especially when the size of the network and the number of bikes in the system are large. This can cause a significant loss that operators should pay attention to. Meanwhile, different estimated values of the parameters related to revenue and cost affect the optimization results differently. Full article
(This article belongs to the Special Issue Simulation-Optimization in Logistics, Transportation, and SCM)
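The long-run gathering effect can be illustrated with the steady-state distribution of a single node, modelled as a birth-death chain. This is a drastic simplification of the paper’s full queueing network, and the rates below are invented.

```python
def steady_state(return_rate, rental_rate, capacity):
    """Steady-state distribution of the number of bikes at one node, modelled
    as a birth-death chain: a return adds a bike, a rental removes one.
    pi_j is proportional to rho**j with rho = return_rate / rental_rate."""
    rho = return_rate / rental_rate
    weights = [rho ** j for j in range(capacity + 1)]
    total = sum(weights)
    return [wgt / total for wgt in weights]

# A low-demand node: bikes are returned four times as often as they are
# rented. The probability mass piles up at full capacity, i.e. bikes
# gather at the low-demand node in the long run.
pi = steady_state(return_rate=4.0, rental_rate=1.0, capacity=20)
```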
Open AccessArticle
Determining Optimal Parameters of Regular Microrelief Formed on the End Surfaces of Rotary Bodies
Algorithms 2021, 14(2), 46; https://0-doi-org.brum.beds.ac.uk/10.3390/a14020046 - 30 Jan 2021
Viewed by 587
Abstract
The analytical dependences for determining the overlap area of V-shaped grooves of partially regular microrelief shifted by an angular pitch of 0.5° are established. The V-shaped grooves are formed on the end surface of the rotary body by vibration. In addition, the intersection between groove elements can be of different types. The relationship between the geometric parameters of V-shaped grooves and their location is determined. The influence of geometrical parameters of grooves on the overlap area is established depending on their location. Measures are proposed to ensure that the burnishing area is the same at different distances from the center of rotation of the rotary body end surface on which the partially regular microrelief is formed. A graph showing the dependence of the overlap area of two grooves on the axial pitch between them is constructed, and a block diagram of the algorithm for determining the optimal value of the axial pitch is developed. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
Open AccessArticle
Combining Heuristics with Simulation and Fuzzy Logic to Solve a Flexible-Size Location Routing Problem under Uncertainty
Algorithms 2021, 14(2), 45; https://0-doi-org.brum.beds.ac.uk/10.3390/a14020045 - 30 Jan 2021
Cited by 1 | Viewed by 903
Abstract
The location routing problem integrates both a facility location and a vehicle routing problem. Each of these problems is NP-hard in nature, which justifies the use of heuristic-based algorithms when dealing with large-scale instances that need to be solved in reasonable computing times. This paper discusses a realistic variant of the problem that considers facilities of different sizes and two types of uncertainty conditions. In particular, we assume that some customers’ demands are stochastic, while others follow a fuzzy pattern. An iterated local search metaheuristic is integrated with simulation and fuzzy logic to solve the aforementioned problem, and a series of computational experiments are run to illustrate the potential of the proposed algorithm. Full article
(This article belongs to the Special Issue Simulation-Optimization in Logistics, Transportation, and SCM)
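The simheuristic structure (a metaheuristic whose objective is evaluated by stochastic simulation) can be sketched as follows. The acceptance rule, the toy one-dimensional facility problem, and all numbers are invented; the paper’s algorithm additionally handles fuzzy demands.

```python
import random

def iterated_local_search(init, local_search, perturb, simulate,
                          iters=100, seed=1):
    """Simheuristic skeleton: deterministic search moves, but each candidate
    is scored by simulation (its estimated expected cost)."""
    rng = random.Random(seed)
    current = local_search(init)
    best, best_cost = current, simulate(current, rng)
    for _ in range(iters):
        candidate = local_search(perturb(current, rng))
        cost = simulate(candidate, rng)
        if cost < best_cost:            # accept only improving candidates
            best, best_cost = candidate, cost
            current = candidate
    return best, best_cost

# Toy use: place one facility on a line serving two customer zones whose
# demand points are noisy (all names and numbers invented).
def simulate(x, rng, reps=30):
    return sum(abs(x - rng.gauss(3, 0.5)) + abs(x - rng.gauss(7, 0.5))
               for _ in range(reps)) / reps

best, best_cost = iterated_local_search(
    0.0, lambda x: round(x, 1), lambda x, r: x + r.uniform(-1, 1), simulate)
```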
Open AccessArticle
Non-Overlapping LZ77 Factorization and LZ78 Substring Compression Queries with Suffix Trees
Algorithms 2021, 14(2), 44; https://0-doi-org.brum.beds.ac.uk/10.3390/a14020044 - 29 Jan 2021
Viewed by 682
Abstract
We present algorithms computing the non-overlapping Lempel–Ziv-77 factorization and the longest previous non-overlapping factor table within small space, in linear or near-linear time, with the help of modern suffix tree representations fitting into limited space. With similar techniques, we show how to answer substring compression queries for the Lempel–Ziv-78 factorization with a possible logarithmic multiplicative slowdown, depending on the suffix tree representation used. Full article
(This article belongs to the Special Issue Algorithms and Data-Structures for Compressed Computation)
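For reference, the non-overlapping LZ77 factorization itself can be computed naively in quadratic time; the paper’s point is computing it in small space with suffix trees, which the sketch below does not attempt.

```python
def lz77_non_overlapping(text):
    """Naive non-overlapping LZ77: each factor is the longest prefix of the
    remaining text that occurs entirely inside the already-parsed prefix
    (so the source cannot overlap the factor), or a single new character."""
    factors, i, n = [], 0, len(text)
    while i < n:
        length = 0
        # grow the factor while it still occurs somewhere in text[:i];
        # str.find(sub, 0, i) only matches inside the slice text[:i]
        while length < n - i and text.find(text[i:i + length + 1], 0, i) != -1:
            length += 1
        factors.append(text[i:i + length] if length else text[i])
        i += length or 1
    return factors

factors = lz77_non_overlapping("abababab")
```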
Open AccessArticle
An FPTAS for Dynamic Multiobjective Shortest Path Problems
Algorithms 2021, 14(2), 43; https://0-doi-org.brum.beds.ac.uk/10.3390/a14020043 - 29 Jan 2021
Viewed by 702
Abstract
The Dynamic Multiobjective Shortest Path problem features multidimensional costs that can depend on several variables, not only on time; this setting is motivated by flight planning applications and the routing of electric vehicles. We give an exact algorithm for the FIFO case and derive from it an FPTAS both for static Multiobjective Shortest Path (MOSP) problems and, under mild assumptions, for the dynamic problem variant. The resulting FPTAS is computationally efficient and beats the known complexity bounds of other FPTASs for MOSP problems. Full article
(This article belongs to the Special Issue Algorithms for Shortest Paths in Dynamic and Evolving Networks)
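As background, the exact multiobjective shortest path computation that an FPTAS approximates can be sketched as Pareto label setting: expand labels in lexicographic order and discard dominated ones. An FPTAS additionally rounds costs to bound the number of labels; the static two-objective graph below is invented.

```python
import heapq

def dominates(a, b):
    """a dominates b if it is no worse in every objective and differs."""
    return all(x <= y for x, y in zip(a, b)) and a != b

def mosp(graph, source, target):
    """Exact Pareto label setting for a two-objective shortest path."""
    labels = {v: [] for v in graph}
    frontier = [((0, 0), source)]
    while frontier:
        cost, v = heapq.heappop(frontier)
        if any(dominates(c, cost) or c == cost for c in labels[v]):
            continue
        labels[v].append(cost)          # permanently non-dominated at v
        for w, (c1, c2) in graph[v]:
            new = (cost[0] + c1, cost[1] + c2)
            if not any(dominates(c, new) or c == new for c in labels[w]):
                heapq.heappush(frontier, (new, w))
    return labels[target]

# Edges carry (time, energy): two efficient s-t routes and one dominated one.
graph = {'s': [('a', (1, 5)), ('b', (4, 1)), ('t', (9, 9))],
         'a': [('t', (1, 5))], 'b': [('t', (4, 1))], 't': []}
pareto = mosp(graph, 's', 't')
```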
Open AccessArticle
Integrated Simulation-Based Optimization of Operational Decisions at Container Terminals
Algorithms 2021, 14(2), 42; https://0-doi-org.brum.beds.ac.uk/10.3390/a14020042 - 28 Jan 2021
Cited by 1 | Viewed by 606
Abstract
At container terminals, many cargo handling processes are interconnected and occur in parallel. Within short time windows, many operational decisions need to be made and should consider both time efficiency and equipment utilization. During operation, many sources of disturbance and, thus, uncertainty exist. For these reasons, perfectly coordinated processes can potentially unravel. This study analyzes simulation-based optimization, an approach that considers uncertainty by means of simulation while optimizing a given objective. The developed procedure simultaneously scales the amount of utilized equipment and adjusts the selection and tuning of operational policies. Thus, the benefits of a simulation study and an integrated optimization framework are combined in a new way. Four meta-heuristics—Tree-structured Parzen Estimator, Bayesian Optimization, Simulated Annealing, and Random Search—guide the simulation-based optimization process. Thus, this study aims to determine a favorable configuration of equipment quantity and operational policies for container terminals using a small number of experiments and, simultaneously, to empirically compare the chosen meta-heuristics including the reproducibility of the optimization runs. The results show that simulation-based optimization is suitable for identifying the amount of required equipment and well-performing policies. Among the presented scenarios, no clear ranking between meta-heuristics regarding the solution quality exists. The approximated optima suggest that pooling yard trucks and a yard block assignment that is close to the quay crane are preferable. Full article
(This article belongs to the Special Issue Simulation-Optimization in Logistics, Transportation, and SCM)
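Random Search, the simplest of the four meta-heuristics compared, can be sketched as follows. The cost model standing in for the terminal simulation is entirely invented; only the optimization loop reflects the method.

```python
import random

def simulate_terminal(trucks, policy, rng, reps=20):
    """Stand-in for a terminal simulation: a made-up mean-makespan model in
    which more trucks reduce waiting but add congestion, and pooling helps."""
    base = {'pooled': 100.0, 'dedicated': 115.0}[policy]
    return sum(base / trucks ** 0.5 + 1.5 * trucks + rng.gauss(0, 2)
               for _ in range(reps)) / reps

def random_search(budget=60, seed=0):
    """Sample configurations uniformly at random, keep the best simulated one."""
    rng = random.Random(seed)
    best_cfg, best_cost = None, float('inf')
    for _ in range(budget):
        cfg = (rng.randint(2, 30), rng.choice(['pooled', 'dedicated']))
        cost = simulate_terminal(*cfg, rng)
        if cost < best_cost:
            best_cfg, best_cost = cfg, cost
    return best_cfg, best_cost

best_cfg, best_cost = random_search()
```

The other meta-heuristics in the paper differ only in how the next configuration is proposed; the simulate-and-compare loop is the same.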
Open AccessArticle
Simulation-Optimization Approach for Multi-Period Facility Location Problems with Forecasted and Random Demands in a Last-Mile Logistics Application
Algorithms 2021, 14(2), 41; https://0-doi-org.brum.beds.ac.uk/10.3390/a14020041 - 28 Jan 2021
Viewed by 683
Abstract
The introduction of automated parcel locker (APL) systems is one possible approach to improve urban logistics (UL) activities. Based on the city of Dortmund as a case study, we propose a simulation-optimization approach integrating a system dynamics simulation model (SDSM) with a multi-period capacitated facility location problem (CFLP). We propose this integrated model as a decision support tool for future APL implementations as a last-mile distribution scheme. First, we built a causal-loop and stock-flow diagram to show the main components and interdependencies of the APL system. Then, we formulated a multi-period CFLP model to determine the optimal number of APLs for each period. Finally, we used a Monte Carlo simulation to estimate the costs and reliability level under random demands. We evaluate three e-shopper rate scenarios with the SDSM, and then analyze ten detailed demand configurations, based on the results for the middle-size scenario, with our CFLP model. After 36 months, the number of APLs increases from 99 to 165 with the growing demand, and stabilizes in all configurations from month 24. A middle-demand configuration, with total costs of about 750,000€, already locates a suitable number of APLs. If the budget is lower, our approach offers alternatives for decision-makers. Full article
(This article belongs to the Special Issue Simulation-Optimization in Logistics, Transportation, and SCM)
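The Monte Carlo step can be sketched as repeated sampling of random demand against installed APL capacity, yielding a reliability level and a mean overflow. All figures below (locker capacity, APL count, demand distribution) are invented, not the Dortmund data.

```python
import random

def monte_carlo_apl(lockers_per_apl, num_apls, mean_demand, runs=2000, seed=7):
    """Estimate the reliability level (share of runs in which installed APL
    capacity covers the realized random demand) and the mean overflow per run."""
    rng = random.Random(seed)
    capacity = lockers_per_apl * num_apls
    covered, overflow = 0, 0.0
    for _ in range(runs):
        demand = rng.gauss(mean_demand, 0.15 * mean_demand)
        if demand <= capacity:
            covered += 1
        else:
            overflow += demand - capacity
    return covered / runs, overflow / runs

reliability, mean_overflow = monte_carlo_apl(
    lockers_per_apl=76, num_apls=120, mean_demand=8000)
```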
Open AccessArticle
A Survey of Advances in Landscape Analysis for Optimisation
Algorithms 2021, 14(2), 40; https://0-doi-org.brum.beds.ac.uk/10.3390/a14020040 - 28 Jan 2021
Viewed by 628
Abstract
Fitness landscapes were proposed in 1932 as an abstract notion for understanding biological evolution and were later used to explain evolutionary algorithm behaviour. The last ten years have seen the field of fitness landscape analysis develop from a largely theoretical idea in evolutionary computation to a practical tool applied to optimisation in general and, more recently, to machine learning. With this widened scope, new types of landscapes have emerged, such as multiobjective landscapes, violation landscapes, dynamic and coupled landscapes, and error landscapes. This survey is a follow-up to a 2013 survey on fitness landscapes and includes an additional 11 landscape analysis techniques. The paper also surveys the applications of landscape analysis for understanding complex problems and explaining algorithm behaviour, as well as for algorithm performance prediction and automated algorithm configuration and selection. The extensive use of landscape analysis in a broad range of areas highlights the wide applicability of the techniques, and the paper discusses some opportunities for further research in this growing field. Full article
Open AccessArticle
Representing Deep Neural Networks Latent Space Geometries with Graphs
Algorithms 2021, 14(2), 39; https://0-doi-org.brum.beds.ac.uk/10.3390/a14020039 - 27 Jan 2021
Viewed by 609
Abstract
Deep Learning (DL) has attracted a lot of attention for its ability to reach state-of-the-art performance in many machine learning tasks. The core principle of DL methods consists of training composite architectures in an end-to-end fashion, where inputs are associated with outputs trained to optimize an objective function. Because of their compositional nature, DL architectures naturally exhibit several intermediate representations of the inputs, which belong to so-called latent spaces. When treated individually, these intermediate representations are most of the time unconstrained during the learning process, as it is unclear which properties should be favored. However, when processing a batch of inputs concurrently, the corresponding set of intermediate representations exhibits relations (what we call a geometry) on which desired properties can be sought. In this work, we show that it is possible to introduce constraints on these latent geometries to address various problems. In more detail, we propose to represent geometries by constructing similarity graphs from the intermediate representations obtained when processing a batch of inputs. By constraining these Latent Geometry Graphs (LGGs), we address the three following problems: (i) reproducing the behavior of a teacher architecture is achieved by mimicking its geometry, (ii) designing efficient embeddings for classification is achieved by targeting specific geometries, and (iii) robustness to deviations on inputs is achieved via enforcing smooth variation of geometry between consecutive latent spaces. Using standard vision benchmarks, we demonstrate the ability of the proposed geometry-based methods to solve the considered problems. Full article
(This article belongs to the Special Issue Efficient Graph Algorithms in Machine Learning)
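The basic LGG construction (a similarity graph over a batch of latent vectors) can be sketched as follows; the vectors and the k-nearest-neighbour rule below are invented for illustration, and the paper’s training-time constraints on these graphs are not shown.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def latent_geometry_graph(batch, k=2):
    """Connect each sample in a batch of intermediate representations to its
    k most cosine-similar peers; edges are stored as sorted index pairs."""
    n = len(batch)
    edges = set()
    for i in range(n):
        sims = sorted((cosine(batch[i], batch[j]), j)
                      for j in range(n) if j != i)
        for _, j in sims[-k:]:
            edges.add((min(i, j), max(i, j)))
    return edges

# Toy batch of four latent vectors: two near-duplicate pairs.
batch = [(1.0, 0.1), (0.9, 0.2), (0.1, 1.0), (0.2, 0.9)]
graph = latent_geometry_graph(batch, k=1)
```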
Open AccessArticle
A Multi-Objective Optimization Method for Hospital Admission Problem—A Case Study on Covid-19 Patients
Algorithms 2021, 14(2), 38; https://0-doi-org.brum.beds.ac.uk/10.3390/a14020038 - 27 Jan 2021
Viewed by 683
Abstract
The wide spread of Covid-19 has infected a huge number of patients simultaneously, resulting in a massive number of concurrent requests for medical care. During the first wave of Covid-19, many people could not get admitted to appropriate hospitals because of the immense number of patients. Admitting patients to suitable hospitals can decrease their in-bed time, which can save many lives, and optimizing the admission process can minimize the waiting time for medical care, which can save the lives of severe cases. The admission process needs to consider two main criteria: the admission time and the readiness of the hospital that will accept the patients. These two objectives turn admission into a Multi-Objective Problem (MOP). Pareto Optimization (PO) is a common multi-objective optimization method that has been applied to different MOPs and has shown its ability to solve them. In this paper, a PO-based algorithm is proposed for admitting Covid-19 patients to hospitals. The method uses PO to compare hospitals and choose, for each patient, the most suitable hospital with the least admission time. The method also handles severe cases by admitting them to the hospital with the least admission time regardless of its readiness. The method was tested on a real-life dataset of 254 patients obtained from King Faisal Specialist Hospital in Saudi Arabia and compared with the lexicographic multi-objective optimization method regarding admission time and accuracy. The proposed method showed its superiority over the lexicographic method on both criteria, which makes it a good candidate for real-life admission systems. Full article
(This article belongs to the Special Issue Algorithms in Multi-Objective Optimization)
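The Pareto selection among hospitals can be sketched directly: treat each hospital as a point in (admission time, 1 − readiness) space and keep the non-dominated ones. The hospitals and scores below are invented; the paper additionally routes severe cases by admission time alone.

```python
def pareto_front(hospitals):
    """Keep the hospitals that are non-dominated under the two minimized
    criteria (admission_time, 1 - readiness)."""
    def key(h):
        name, time, readiness = h
        return (time, 1 - readiness)

    front = []
    for h in hospitals:
        dominated = any(
            all(a <= b for a, b in zip(key(g), key(h))) and key(g) != key(h)
            for g in hospitals if g is not h)
        if not dominated:
            front.append(h[0])
    return front

# (name, admission time in minutes, readiness score in [0, 1]) -- invented.
hospitals = [("H1", 30, 0.9), ("H2", 10, 0.6), ("H3", 40, 0.7), ("H4", 15, 0.8)]
front = pareto_front(hospitals)
# H3 is dominated by H1 (slower and less ready); the others trade off
# admission time against readiness.
```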