
Algorithms, Volume 14, Issue 8 (August 2021) – 35 articles

Cover Story: This article presents a cooperative optimization approach (COA) for distributing service points for mobility applications, which generalizes and refines a previously proposed method. COA is an iterative framework for optimizing service point locations, combining an optimization component with large-scale user interaction and a machine learning component that learns user needs and provides the objective function for the optimization. The previously proposed COA was designed for mobility applications in which single service points are sufficient to satisfy individual user demand. The framework is generalized here to applications in which satisfying demand relies on the existence of two or more suitably located service stations, such as bike/car sharing systems.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
Article
Constrained Dynamic Mean-Variance Portfolio Selection in Continuous-Time
Algorithms 2021, 14(8), 252; https://0-doi-org.brum.beds.ac.uk/10.3390/a14080252 - 23 Aug 2021
Abstract
This paper revisits the dynamic MV portfolio selection problem with cone constraints in continuous time. We first reformulate our constrained MV portfolio selection model as a special constrained LQ optimal control model and derive the optimal portfolio policy of our model. In addition, we provide an alternative method for solving this dynamic MV portfolio selection problem with cone constraints. More specifically, instead of solving the corresponding HJB equation directly, we develop the optimal solution by exploiting special properties of the value function induced by the model structure, such as its monotonicity and convexity. Finally, we provide an example to illustrate how to use our solution in a real application. The illustrative example demonstrates that our dynamic MV portfolio policy dominates the static MV portfolio policy.

Article
Comparative Analysis of Recurrent Neural Networks in Stock Price Prediction for Different Frequency Domains
Algorithms 2021, 14(8), 251; https://0-doi-org.brum.beds.ac.uk/10.3390/a14080251 - 22 Aug 2021
Abstract
Investors in the stock market have always searched for novel and unique techniques to successfully predict stock price movement and make a large profit. Investors continue to look for improved and new techniques to beat the market instead of old and traditional ones, so researchers are continuously working on novel techniques to meet this demand. Different types of recurrent neural networks (RNNs) are used in time series analyses, especially in stock price prediction. However, since not all stock prices follow the same trend, a single model cannot be used to predict the movement of all types of stock prices. Therefore, in this research we conducted a comparative analysis of three commonly used RNNs: the simple RNN, Long Short-Term Memory (LSTM), and the Gated Recurrent Unit (GRU). We analyzed their efficiency for stocks with different price trends and price ranges and for different time frequencies. We considered three companies' datasets from 30 June 2000 to 21 July 2020. The stocks follow different trends of price movement, with price ranges of $30, $50, and $290 during this period. We also analyzed performance for one-day, three-day, and five-day time intervals. We compared the performance of the RNN, LSTM, and GRU in terms of the R² value, MAE, MAPE, and RMSE metrics. The results show that the simple RNN is outperformed by LSTM and GRU because the RNN is susceptible to vanishing gradients, while the other two models are not. Moreover, GRU produces smaller errors than LSTM. It is also evident from the results that as the time intervals get smaller, the models produce lower errors and higher reliability.
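The four comparison metrics used above are standard and straightforward to compute; as an illustration (not the authors' code), a dependency-free Python sketch:

```python
import math

def regression_metrics(y_true, y_pred):
    """Compute R^2, MAE, MAPE (%), and RMSE for two equal-length series."""
    n = len(y_true)
    mean_y = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return {
        "R2": 1 - ss_res / ss_tot,
        "MAE": sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n,
        "MAPE": 100 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / n,
        "RMSE": math.sqrt(ss_res / n),
    }
```

Lower MAE, MAPE, and RMSE and higher R² indicate a better fit, which is the sense in which GRU "produces smaller errors" above.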

Article
A Real-Time Network Traffic Classifier for Online Applications Using Machine Learning
Algorithms 2021, 14(8), 250; https://0-doi-org.brum.beds.ac.uk/10.3390/a14080250 - 21 Aug 2021
Abstract
The increasing ubiquity of network traffic and the deployment of new online applications have increased the complexity of traffic analysis. Traditionally, network administrators rely on recognizing well-known static ports to classify the traffic flowing through their networks. However, modern network traffic uses dynamic ports and is transported over secure application-layer protocols (e.g., HTTPS, SSL, and SSH). This makes it challenging for network administrators to identify online applications using traditional port-based approaches. One way to classify modern network traffic is to use machine learning (ML) to distinguish between traffic attributes such as packet count and size, packet inter-arrival time, packet send–receive ratio, etc. This paper presents the design and implementation of NetScrapper, a flow-based network traffic classifier for online applications. NetScrapper uses three ML models, namely K-Nearest Neighbors (KNN), Random Forest (RF), and an Artificial Neural Network (ANN), to classify the 53 most popular online applications, including Amazon, YouTube, Google, Twitter, and many others. We collected a network traffic dataset containing 3,577,296 packet flows with 87 different features for training, validating, and testing the ML models. A user-friendly web-based interface was developed to let users either upload a snapshot of their network traffic to NetScrapper or sniff the network traffic directly from the network interface card in real time. Additionally, we created a middleware pipeline for interfacing the three models with the Flask GUI. Finally, we evaluated NetScrapper using various performance metrics such as classification accuracy and prediction time. Most notably, we found that our ANN model achieves an overall classification accuracy of 99.86% in recognizing the online applications in our dataset.
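As an illustration of the flow-classification idea (not NetScrapper's actual implementation; the feature vectors and labels below are hypothetical), a minimal K-Nearest Neighbors classifier over flow-feature vectors might look like:

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """train: list of (feature_vector, label) pairs; query: a feature vector.
    Returns the majority label among the k nearest neighbors (Euclidean)."""
    neighbors = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    return Counter(label for _, label in neighbors).most_common(1)[0][0]
```

In practice, flow features such as packet count and inter-arrival time would first be normalized so that no single feature dominates the distance.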

Article
Myocardial Infarction Quantification from Late Gadolinium Enhancement MRI Using Top-Hat Transforms and Neural Networks
Algorithms 2021, 14(8), 249; https://0-doi-org.brum.beds.ac.uk/10.3390/a14080249 - 20 Aug 2021
Abstract
Late gadolinium enhancement (LGE) MRI is the gold standard technique for myocardial viability assessment. Although the technique accurately reflects the damaged tissue, there is no clinical standard to quantify myocardial infarction (MI). Moreover, commercial software used in clinical practice is mostly semi-automatic and hence requires the direct intervention of experts. In this work, a new automatic method for MI quantification from LGE-MRI is proposed. Our novel segmentation approach is devised to accurately detect not only hyper-enhanced lesions but also microvascular obstruction areas. Moreover, it includes a myocardial disease detection step that extends the algorithm to work on healthy scans. The method is based on a cascade approach: first, diseased slices are identified by a convolutional neural network (CNN); second, a fast coarse scar segmentation is obtained by means of morphological operations; third, the segmentation is refined by a boundary-voxel reclassification strategy using an ensemble of very light CNNs. We tested the method on an LGE-MRI database with healthy (n = 20) and diseased (n = 80) cases following a 5-fold cross-validation scheme. Our approach segmented myocardial scars with an average Dice coefficient of 77.22 ± 14.3% and a volumetric error of 1.0 ± 6.9 cm³. In a comparison against nine reference algorithms, the proposed method achieved the highest agreement in volumetric scar quantification with the expert delineations (p < 0.001 when compared to the other approaches). Moreover, it was able to reproduce the intra- and inter-rater variability of the scar segmentation. Our approach is a good first step towards automatic and accurate myocardial scar segmentation, although validation on larger LGE-MRI databases is needed.
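The top-hat transform named in the title is a morphological operation: the signal minus its opening (erosion followed by dilation), which isolates bright structures narrower than the structuring element. A one-dimensional toy sketch (not the paper's 3-D pipeline):

```python
def erode(sig, w):
    """Sliding-window minimum with a flat structuring element of width w."""
    r = w // 2
    return [min(sig[max(0, i - r):i + r + 1]) for i in range(len(sig))]

def dilate(sig, w):
    """Sliding-window maximum with a flat structuring element of width w."""
    r = w // 2
    return [max(sig[max(0, i - r):i + r + 1]) for i in range(len(sig))]

def white_top_hat(sig, w):
    """Signal minus its morphological opening: keeps bright peaks narrower than w."""
    opening = dilate(erode(sig, w), w)
    return [s - o for s, o in zip(sig, opening)]
```

A narrow bright spike survives the transform, while a wide plateau is suppressed, which is why top-hat transforms are useful for picking out hyper-enhanced lesions against the myocardium.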

Article
Experimental Validation of a Guaranteed Nonlinear Model Predictive Control
Algorithms 2021, 14(8), 248; https://0-doi-org.brum.beds.ac.uk/10.3390/a14080248 - 20 Aug 2021
Abstract
This paper combines interval analysis tools with nonlinear model predictive control (NMPC). The NMPC strategy is formulated based on an uncertain dynamic model expressed as nonlinear ordinary differential equations (ODEs). All the dynamic parameters are identified in a guaranteed way, accounting for the various uncertainties in the embedded sensors and the system's design. The NMPC problem is solved at each time step using validated simulation and interval analysis methods to compute the optimal and safe control inputs over a finite prediction horizon. This approach considers several constraints that are crucial for the system's safety and stability, namely the state and control limits. The proposed controller consists of two steps: filtering and branching procedures that find the input intervals fulfilling the state constraints and ensuring convergence to the reference set. An optimization procedure then computes the optimal, punctual control input that is sent to the system's actuators to stabilize the pendulum. The capabilities of the validated NMPC are illustrated through several simulations with the DynIbex library and experiments on an inverted pendulum.
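The guaranteed computations rest on interval arithmetic, which DynIbex provides in validated form. A toy sketch of the core idea (not the DynIbex API): every quantity is an interval guaranteed to contain the true value, and arithmetic propagates those enclosures.

```python
class Interval:
    """Closed interval [lo, hi]; arithmetic returns enclosing intervals."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # The product's bounds are among the four endpoint products.
        prods = [self.lo * other.lo, self.lo * other.hi,
                 self.hi * other.lo, self.hi * other.hi]
        return Interval(min(prods), max(prods))

    def contains(self, x):
        return self.lo <= x <= self.hi
```

A validated tool would additionally round the bounds outward in floating point; this sketch omits that detail.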
(This article belongs to the Special Issue Algorithms for Reliable Estimation, Identification and Control II)

Article
Numerical Algorithm for Dynamic Impedance of Bridge Pile-Group Foundation and Its Validation
Algorithms 2021, 14(8), 247; https://0-doi-org.brum.beds.ac.uk/10.3390/a14080247 - 20 Aug 2021
Abstract
The characteristics of a bridge pile-group foundation have a significant influence on the dynamic performance of the superstructure. Most existing analysis methods for pile-group foundation impedance are highly specialized and cannot be generalized to practical projects. Therefore, a project-oriented numerical algorithm is proposed to compute the dynamic impedance of bridge pile-group foundations. Based on the theory of viscous-spring artificial boundaries, the derivation and solution of the impedance function are transferred to numerical modeling and harmonic analysis, which can be carried out with the finite element method. Taking a typical pile-group foundation as a case study, the results of the algorithm are compared with those from the existing literature. Moreover, an impact experiment on a real pile-group foundation was carried out, and its results are also compared with those of the proposed numerical algorithm. Both comparisons show that the proposed numerical algorithm meets engineering precision requirements and is therefore effective in application.

Article
Scheduling Multiprocessor Tasks with Equal Processing Times as a Mixed Graph Coloring Problem
Algorithms 2021, 14(8), 246; https://0-doi-org.brum.beds.ac.uk/10.3390/a14080246 - 19 Aug 2021
Abstract
This article extends the scheduling problem with dedicated processors, unit-time tasks, and minimization of the maximal lateness Lmax for integer due dates to a scheduling problem in which, along with precedence constraints given on the set V = {v1, v2, …, vn} of multiprocessor tasks, a subset of tasks must be processed simultaneously. Contrary to a classical shop-scheduling problem, several processors must fulfill a multiprocessor task. Furthermore, two types of precedence constraints may be given on the task set V. We prove that the extended scheduling problem with integer release times r_i ≥ 0 of the jobs in V to minimize the schedule length Cmax may be solved as an optimal mixed graph coloring problem, which consists of assigning a minimal number of colors (positive integers) {1, 2, …, t} to the vertices {v1, v2, …, vn} = V of the mixed graph G = (V, A, E) such that, if two vertices vp and vq are joined by the edge [vp, vq] ∈ E, their colors must be different, and if two vertices vi and vj are joined by the arc (vi, vj) ∈ A, the color of vertex vi must be no greater than the color of vertex vj. We prove two theorems, which imply that most analytical results proved so far for optimal colorings of mixed graphs G = (V, A, E) have analogous results valid for the extended scheduling problems of minimizing the schedule length or the maximal lateness, and vice versa.
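A greedy sketch of such a mixed graph coloring (producing a feasible, not necessarily minimal, coloring; it assumes the arcs form a DAG, as any schedulable precedence relation must):

```python
from collections import defaultdict

def mixed_graph_coloring(n, arcs, edges):
    """Feasible coloring of a mixed graph G = (V, A, E) on vertices 0..n-1:
    an arc (i, j) forces color[i] <= color[j];
    an edge [i, j] forces color[i] != color[j]."""
    preds, succs, indeg = defaultdict(list), defaultdict(list), [0] * n
    for i, j in arcs:
        succs[i].append(j)
        preds[j].append(i)
        indeg[j] += 1
    nbrs = defaultdict(set)
    for i, j in edges:
        nbrs[i].add(j)
        nbrs[j].add(i)
    color = [0] * n                    # 0 = not yet colored
    stack = [v for v in range(n) if indeg[v] == 0]
    order = []
    while stack:                       # topological order over the arcs
        v = stack.pop()
        order.append(v)
        for w in succs[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                stack.append(w)
    for v in order:
        # Smallest color respecting all arc-predecessors, then bump past
        # colors already taken by edge-neighbors.
        c = max((color[p] for p in preds[v]), default=1)
        taken = {color[u] for u in nbrs[v] if color[u]}
        while c in taken:
            c += 1
        color[v] = c
    return color
```

In the scheduling reading, a vertex's color is the time slot of its unit-time task; minimizing the number of colors (which this greedy sketch does not guarantee) corresponds to minimizing Cmax.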
(This article belongs to the Special Issue 2021 Selected Papers from Algorithms Editorial Board Members)

Article
SVSL: A Human Activity Recognition Method Using Soft-Voting and Self-Learning
Algorithms 2021, 14(8), 245; https://0-doi-org.brum.beds.ac.uk/10.3390/a14080245 - 19 Aug 2021
Abstract
Many smart city and society applications such as smart health (elderly care, medical applications), smart surveillance, sports, and robotics require the recognition of user activities, an important class of problems known as human activity recognition (HAR). Several issues have hindered progress in HAR research, particularly the emergence of fog and edge computing, which brings many new opportunities (low latency, dynamic and real-time decision making, etc.) but comes with its own challenges. This paper focuses on addressing two important research gaps in HAR research: (i) improving the HAR prediction accuracy and (ii) managing the frequent changes in the environment and data related to user activities. To address this, we propose an HAR method based on Soft-Voting and Self-Learning (SVSL). SVSL uses two strategies. First, to enhance accuracy, it combines the capabilities of Deep Learning (DL), Generalized Linear Model (GLM), Random Forest (RF), and AdaBoost classifiers using soft voting. Second, to classify the most challenging data instances, the SVSL method is equipped with a self-training mechanism that generates training data and retrains itself. We investigate the performance of our proposed SVSL method using two publicly available datasets on six human activities related to lying, sitting, and walking positions. The first dataset consists of 562 features and the second of five features. The data are collected using the accelerometer and gyroscope smartphone sensors. The results show that the proposed method provides 6.26%, 1.75%, 1.51%, and 4.40% better prediction accuracy (averaged over the two datasets) than GLM, DL, RF, and AdaBoost, respectively. We also analyze and compare the class-wise performance of the SVSL method with that of DL, GLM, RF, and AdaBoost.
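The soft-voting step itself is simple: average the classifiers' class-probability vectors (optionally weighted) and pick the class with the highest mean probability. A minimal sketch with hypothetical probabilities:

```python
def soft_vote(prob_lists, weights=None):
    """prob_lists: one class-probability vector per classifier.
    Returns the index of the class with the highest weighted mean probability."""
    n = len(prob_lists)
    weights = weights or [1.0 / n] * n
    n_classes = len(prob_lists[0])
    avg = [sum(w * probs[c] for w, probs in zip(weights, prob_lists))
           for c in range(n_classes)]
    return max(range(n_classes), key=lambda c: avg[c])
```

Unlike hard (majority) voting, a single very confident classifier can outvote two lukewarm ones, which is the usual argument for soft voting.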

Article
An Efficient Geometric Search Algorithm of Pandemic Boundary Detection
Algorithms 2021, 14(8), 244; https://0-doi-org.brum.beds.ac.uk/10.3390/a14080244 - 18 Aug 2021
Abstract
We consider a scenario where the pandemic infection rate is inversely proportional to a power of the distance between the infected region and the non-infected region. In our study, we analyze the case where the exponent of the distance is 2, in accordance with Reilly's law of retail gravitation. One can test for infection, but such tests are costly, so one seeks to determine the region of infection while performing few tests. Our goal is to find a boundary region of minimal size that contains all infected areas. We discuss efficient algorithms and provide asymptotic bounds on the testing cost, together with simulation results, for this problem.
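To illustrate the test-sparingly idea in one dimension (a simplification of the paper's geometric setting; the infection predicate below is hypothetical), a binary search along one ray locates the boundary with O(log(range/tol)) tests:

```python
def find_boundary(is_infected, lo, hi, tol=1e-6):
    """Binary search for the infection boundary along one ray, assuming the
    infected region is an interval [lo, b). Returns (boundary, tests used)."""
    tests = 0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        tests += 1
        if is_infected(mid):
            lo = mid          # boundary lies further out
        else:
            hi = mid          # boundary lies closer in
    return (lo + hi) / 2, tests
```

Repeating such a search along many rays sketches out a boundary region; the paper's algorithms are concerned with doing this efficiently in the full geometric setting.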
(This article belongs to the Section Algorithms for Multidisciplinary Applications)

Article
Tourism Demand Forecasting Based on an LSTM Network and Its Variants
Algorithms 2021, 14(8), 243; https://0-doi-org.brum.beds.ac.uk/10.3390/a14080243 - 18 Aug 2021
Abstract
The need for accurate tourism demand forecasting is widely recognized, and the unreliability of traditional methods keeps the task challenging. Using deep learning approaches, this study adapts Long Short-Term Memory (LSTM), Bidirectional LSTM (Bi-LSTM), and Gated Recurrent Unit (GRU) networks, which are straightforward and efficient, to improve Taiwan's tourism demand forecasting. These networks are able to capture the dependence structure of visitor arrival time series data. The Adam optimization algorithm with an adaptive learning rate is used to optimize the basic setup of the models. The results show that the proposed models outperform previous studies undertaken during the Severe Acute Respiratory Syndrome (SARS) events of 2002–2003. This article also examines the effects of the current COVID-19 outbreak on tourist arrivals to Taiwan. The results show that the LSTM network and its variants perform satisfactorily for tourism demand forecasting.

Review
Data Mining Algorithms for Smart Cities: A Bibliometric Analysis
Algorithms 2021, 14(8), 242; https://0-doi-org.brum.beds.ac.uk/10.3390/a14080242 - 17 Aug 2021
Abstract
Smart cities connect people and places using innovative technologies such as Data Mining (DM), Machine Learning (ML), big data, and the Internet of Things (IoT). This paper presents a bibliometric analysis to provide a comprehensive overview of studies associated with DM technologies used in smart city applications. The study aims to identify the main DM techniques used in the context of smart cities and how the research field of DM for smart cities has evolved over time. We adopted both qualitative and quantitative methods to explore the topic. We used the Scopus database to find relevant articles published in scientific journals. This study covers 197 articles published over the period from 2013 to 2021. For the bibliometric analysis, we used the Bibliometrix library, developed in R. Our findings show that a wide range of DM technologies is used in every layer of a smart city project. Several ML algorithms, supervised or unsupervised, are adopted for operating the instrumentation, middleware, and application layers. The bibliometric analysis shows that DM for smart cities is a fast-growing scientific field. Scientists from all over the world show great interest in researching and collaborating on this interdisciplinary field.
(This article belongs to the Special Issue New Algorithms for Visual Data Mining)

Article
Property-Based Semantic Similarity Criteria to Evaluate the Overlaps of Schemas
Algorithms 2021, 14(8), 241; https://0-doi-org.brum.beds.ac.uk/10.3390/a14080241 - 17 Aug 2021
Abstract
Knowledge graph-based data integration is a practical methodology for constructing services that integrate heterogeneous legacy databases. However, it is neither efficient nor economical to build a new cross-domain knowledge graph on top of the schemas of each legacy database for a specific integration application rather than reusing existing high-quality knowledge graphs. Consequently, a question arises as to whether an existing knowledge graph is compatible with cross-domain queries and with the heterogeneous schemas of the legacy systems. An effective criterion is urgently needed to evaluate such compatibility, as it limits the upper bound on the quality of the integration. This research studies the semantic similarity of schemas from the perspective of properties. It provides a set of in-depth criteria, namely coverage and flexibility, to evaluate the pairwise compatibility between schemas. It takes advantage of the properties of knowledge graphs to evaluate the overlaps between schemas and defines weights for entity types in order to perform precise compatibility computation. The effectiveness of the criteria in evaluating the compatibility between knowledge graphs and cross-domain queries is demonstrated in a case study.
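The abstract does not give the exact formulas, but a property-based coverage criterion could plausibly be sketched as the weighted fraction of a schema's properties that the knowledge graph also carries (the property names and weights below are purely illustrative):

```python
def coverage(kg_props, schema_props, weights=None):
    """Weighted fraction of schema_props present in kg_props.
    weights: optional per-property importance; defaults to 1 each."""
    weights = weights or {p: 1.0 for p in schema_props}
    total = sum(weights[p] for p in schema_props)
    covered = sum(weights[p] for p in schema_props if p in kg_props)
    return covered / total if total else 0.0
```

A coverage near 1 would suggest the knowledge graph can answer queries phrased against that legacy schema; the paper's actual criteria additionally account for flexibility and entity-type weights.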
(This article belongs to the Special Issue Ontologies, Ontology Development and Evaluation)

Article
Adaptive Supply Chain: Demand–Supply Synchronization Using Deep Reinforcement Learning
Algorithms 2021, 14(8), 240; https://0-doi-org.brum.beds.ac.uk/10.3390/a14080240 - 15 Aug 2021
Abstract
Adaptive and highly synchronized supply chains can avoid cascading rise-and-fall inventory dynamics and mitigate ripple effects caused by operational failures. This paper aims to demonstrate how a deep reinforcement learning agent based on the Proximal Policy Optimization (PPO) algorithm can synchronize inbound and outbound flows and support business continuity in a stochastic and nonstationary environment, provided that end-to-end visibility is available. PPO requires neither a hardcoded action space nor exhaustive hyperparameter tuning. These features, complemented by a straightforward supply chain environment, give rise to a general, task-unspecific approach to adaptive control in multi-echelon supply chains. The proposed approach is compared with the base-stock policy, a well-known method in classic operations research and inventory control theory that is prevalent in continuous-review inventory systems. The paper concludes that the proposed solution can perform adaptive control in complex supply chains, and it postulates fully fledged supply chain digital twins as a necessary infrastructural condition for scalable real-world applications.
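The base-stock benchmark is easy to state: whenever the inventory position (on hand plus on order minus backorders) falls below the base level, order the difference. A minimal sketch:

```python
def base_stock_order(base_level, on_hand, on_order, backorders=0):
    """Continuous-review base-stock policy: order whatever is needed to raise
    the inventory position back to base_level (never a negative order)."""
    position = on_hand + on_order - backorders
    return max(0, base_level - position)
```

The reinforcement learning agent in the paper is compared against exactly this kind of fixed rule, the point being that a learned policy can adapt when demand is nonstationary while the base level here stays constant.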
(This article belongs to the Section Algorithms for Multidisciplinary Applications)

Article
Adaptive Self-Scaling Brain-Storm Optimization via a Chaotic Search Mechanism
Algorithms 2021, 14(8), 239; https://0-doi-org.brum.beds.ac.uk/10.3390/a14080239 - 13 Aug 2021
Abstract
Brain-storm optimization (BSO), a population-based optimization algorithm, exhibits poor search performance, premature convergence, and a high probability of falling into local optima. To address these problems, we developed the adaptive mechanism-based BSO (ABSO) algorithm, which builds on chaotic local search. Adjusting the search space with a local search method based on an adaptive self-scaling mechanism balances the global search and local development performance of ABSO, effectively preventing the algorithm from falling into local optima and improving its convergence accuracy. To verify the stability and effectiveness of the proposed ABSO algorithm, its performance was tested on 29 benchmark test functions, and the mean and standard deviation were compared with those of five other optimization algorithms. The results show that ABSO outperforms the other algorithms in terms of stability and convergence accuracy. In addition, the performance of ABSO was further verified through a nonparametric statistical test.
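Chaotic local search typically draws candidate points from a chaotic map such as the logistic map, whose iterates wander densely over (0, 1). A minimal one-dimensional sketch (an illustration of the general technique, not the paper's ABSO operator):

```python
def chaotic_search(f, best, radius, n_iters=200, z0=0.7):
    """Minimize f near `best` by sampling candidates from the logistic map
    z <- 4z(1-z), rescaled to [best - radius, best + radius]."""
    z, best_val = z0, f(best)
    for _ in range(n_iters):
        z = 4.0 * z * (1.0 - z)                 # chaotic sequence in (0, 1)
        cand = best + radius * (2.0 * z - 1.0)  # rescale to the search window
        val = f(cand)
        if val < best_val:
            best, best_val = cand, val
    return best, best_val
```

The ergodic, non-repeating character of the chaotic sequence is what helps such a step escape local optima that a purely greedy neighborhood search would get stuck in.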
(This article belongs to the Special Issue Evolutionary Algorithms and Applications)

Article
Efficient Construction of the Equation Automaton
Algorithms 2021, 14(8), 238; https://0-doi-org.brum.beds.ac.uk/10.3390/a14080238 - 11 Aug 2021
Abstract
This paper describes a fast algorithm for constructing the equation automaton directly from the well-known Thompson automaton associated with a regular expression. Allauzen and Mohri presented a unified construction of small automata and gave a construction of the equation automaton with time and space complexity O(m log m + m²), where m denotes the number of transitions of the Thompson automaton. It is based on two classical automata operations, namely epsilon-removal and Hopcroft's algorithm for minimizing deterministic finite automata (DFAs). Using the notion of c-continuation, Ziadi et al. presented a fast computation of the equation automaton with O(m²) time complexity. In this paper, we design an output-sensitive algorithm combining the advantages of the previous algorithms and show that its computational complexity can be reduced to O(m × |Q_e|), where |Q_e| denotes the number of states of the equation automaton, by using epsilon-removal and Bubenzer's minimization algorithm for acyclic deterministic finite automata (ADFAs).

Article
Detect Overlapping Community Based on the Combination of Local Expansion and Label Propagation
Algorithms 2021, 14(8), 237; https://0-doi-org.brum.beds.ac.uk/10.3390/a14080237 - 11 Aug 2021
Abstract
It is a common phenomenon in real life that individuals have diverse memberships in different social clusters, which is known as overlap in network science. Detecting the overlapping components of the community structure of a network has extensive value in real-life applications. Mainstream algorithms for community detection generally focus on optimizing a global or local static metric; they often perform poorly when community characteristics are diverse, and they involve a lot of randomness. We propose an algorithm combining local expansion and label propagation. In the local expansion stage, the seed is the node pair with the largest closeness, and the expansion rule also depends on closeness. Local expansion only obtains the centers of the expected communities rather than the final communities, and these immature communities leave only dense regions after pruning according to certain rules. Taking the dense regions as the source makes label propagation stabilize rapidly in the early rounds, so that the final communities are detected more accurately. Experiments on synthetic and real-world networks show that our algorithm is more effective not only overall but also at the node level. In addition, it is stable across different network structures and maintains high accuracy.
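The label propagation stage can be illustrated with a minimal synchronous variant on a toy graph (the paper's closeness-seeded, overlap-aware version is more elaborate): seeded labels spread outward as every node repeatedly adopts the most frequent label among its neighbors.

```python
from collections import Counter

def label_propagation(adj, labels, max_rounds=20):
    """adj: node -> list of neighbors; labels: node -> label or None (unseeded).
    Each round, every node takes the most common label among its labeled
    neighbors; stops early once labels are stable."""
    for _ in range(max_rounds):
        new = {}
        for v, nbrs in adj.items():
            counts = Counter(labels[u] for u in nbrs if labels.get(u) is not None)
            new[v] = counts.most_common(1)[0][0] if counts else labels.get(v)
        if new == labels:
            break
        labels = new
    return labels
```

Starting propagation from dense seed regions, as the paper does, reduces the randomness this plain variant suffers from when many labels are tied.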
(This article belongs to the Special Issue Optimization Algorithms for Graphs and Complex Networks)
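The second stage described above, label propagation started from pre-labelled dense regions, can be sketched roughly as follows. The toy graph and seeding are illustrative assumptions; the paper's closeness-based expansion and pruning are not reproduced:

```python
import random
from collections import Counter

def label_propagation(adj, seed_labels, rounds=20, seed=0):
    """Asynchronous label propagation started from pre-labelled seeds.

    adj:         dict node -> list of neighbours
    seed_labels: dict node -> community label for the seed nodes
    Each node repeatedly adopts the most frequent label among its
    labelled neighbours until the assignment stabilises.
    """
    rng = random.Random(seed)
    labels = dict(seed_labels)
    nodes = list(adj)
    for _ in range(rounds):
        changed = False
        rng.shuffle(nodes)  # random visiting order, as in classic LPA
        for v in nodes:
            votes = Counter(labels[u] for u in adj[v] if u in labels)
            if votes:
                best = votes.most_common(1)[0][0]
                if labels.get(v) != best:
                    labels[v] = best
                    changed = True
        if not changed:
            break
    return labels

# Two triangles joined by a single bridge edge; one seed per triangle.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(label_propagation(adj, {0: "A", 5: "B"}))
```

Starting propagation from confident seeds, rather than from random labels, is what removes most of the randomness the abstract criticises in plain label propagation.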
Article
SR-Inpaint: A General Deep Learning Framework for High Resolution Image Inpainting
Algorithms 2021, 14(8), 236; https://0-doi-org.brum.beds.ac.uk/10.3390/a14080236 - 10 Aug 2021
Abstract
Recently, deep learning has enabled a huge leap forward in image inpainting. However, due to memory and computational limitations, most existing methods can handle only low-resolution inputs, typically smaller than 1K. With the improvement of Internet transmission capacity and mobile device cameras, the resolution of image and video sources available to users via the cloud or locally is increasing. For high-resolution images, common inpainting methods simply upsample the inpainted result of the shrunken image, yielding a blurry result. There is therefore an urgent need to reconstruct the missing high-frequency information in high-resolution images and generate sharp texture details. Hence, we propose a general deep learning framework for high-resolution image inpainting, which first hallucinates a semantically continuous blurred result using low-resolution inpainting, keeping computational overhead low, and then reconstructs the sharp high-frequency details at the original resolution using super-resolution refinement. Experimentally, our method achieves inspiring inpainting quality on 2K and 4K resolution images, ahead of the state-of-the-art high-resolution inpainting techniques. We expect this framework to be adopted for high-resolution image editing tasks on personal computers and mobile devices. Full article
(This article belongs to the Special Issue Algorithmic Aspects of Neural Networks)
Article
Computational Complexity and ILP Models for Pattern Problems in the Logical Analysis of Data
Algorithms 2021, 14(8), 235; https://0-doi-org.brum.beds.ac.uk/10.3390/a14080235 - 09 Aug 2021
Abstract
Logical Analysis of Data is a procedure aimed at identifying relevant features in data sets with both positive and negative samples. The goal is to build Boolean formulas, represented by strings over {0,1,-} called patterns, which can be used to classify new samples as positive or negative. Since a data set can be explained in alternative ways, many computational problems arise related to the choice of a particular set of patterns. In this paper, we study the computational complexity of several of these pattern problems (showing that they are, in general, computationally hard) and propose some integer programming models that appear to be effective. We describe an ILP model for finding the minimum-size set of patterns explaining a given set of samples and another for the problem of determining whether two sets of patterns are equivalent, i.e., explain exactly the same samples. Our first model is based on a polynomial procedure that computes all patterns compatible with a given set of samples. Computational experiments substantiate the effectiveness of our models on fairly large instances. Finally, we conjecture that an effective ILP model for finding a minimum-size set of patterns equivalent to a given set of patterns is unlikely to exist, since the problem is both NP-hard and co-NP-hard. Full article
(This article belongs to the Special Issue 2021 Selected Papers from Algorithms Editorial Board Members)
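As a minimal illustration of the pattern formalism used above, the following sketch checks whether a set of patterns over {0,1,-} explains a data set, i.e., covers every positive sample and no negative one. The samples are invented for illustration:

```python
def covers(pattern, sample):
    """A pattern over {0,1,-} covers a 0/1 sample if every fixed
    position ('0' or '1') agrees with the sample; '-' is a wildcard."""
    return all(p == "-" or p == s for p, s in zip(pattern, sample))

def explains(patterns, positives, negatives):
    """A pattern set explains the data if every positive sample is
    covered by at least one pattern and no negative sample is covered."""
    return (all(any(covers(p, s) for p in patterns) for s in positives)
            and not any(covers(p, s) for p in patterns for s in negatives))

positives = ["101", "111"]
negatives = ["000", "010"]
print(explains(["1-1"], positives, negatives))  # True: '1-1' separates the samples
print(explains(["---"], positives, negatives))  # False: the wildcard pattern also covers negatives
```

The ILP models in the paper optimize over exactly this kind of coverage relation, choosing a minimum-size pattern set subject to these constraints.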
Article
Maritime Supply Chain Optimization by Using Fuzzy Goal Programming
Algorithms 2021, 14(8), 234; https://0-doi-org.brum.beds.ac.uk/10.3390/a14080234 - 09 Aug 2021
Abstract
Fuzzy goal programming has important applications in many areas of the supply chain, logistics, transportation and shipping business. Business management is complicated, with many interactions among the factors of its components. Maritime transport is the locomotive of world trade: approximately 90% of the world's products are transported by sea. Optimizing maritime operations is a challenge that promises technical, operational and financial benefits. Fuzzy goal programming models have attracted the interest of many scholars; the objective of this paper is therefore to investigate the problem of minimizing the total cost and minimizing the loss or damage of containers returned from the destination port. There are various types of fuzzy goal programming problems, distinguished by their models and solution methods. This paper employs fuzzy goal programming with triangular fuzzy numbers, membership functions, constraints and assumptions, as well as the variables and parameters for optimizing the solution of the model problem. The proposed model presents the mathematical algorithm and reveals the optimal solution according to a satisfaction rank from 0 to 1. Providing a theoretical background, this study offers novel ideas to researchers, decision makers and authorities. Full article
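A minimal sketch of the triangular fuzzy membership functions such models rely on, which map a goal value to a satisfaction degree between 0 and 1. The goal and tolerance values below are invented for illustration, not taken from the paper:

```python
def triangular_membership(x, a, b, c):
    """Membership degree of x in a triangular fuzzy number (a, b, c):
    0 outside (a, c), rising linearly to 1 at the peak b, then falling."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical fuzzy goal: "total cost about 100, tolerance 20 on each side".
for cost in (80, 90, 100, 115, 130):
    print(cost, round(triangular_membership(cost, 80, 100, 120), 2))
```

In a fuzzy goal program, each objective gets such a membership function, and the solver maximizes the overall satisfaction rank, which is how the 0-to-1 rank mentioned in the abstract arises.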
Editorial
Special Issue on Algorithms and Models for Dynamic Multiple Criteria Decision Making
Algorithms 2021, 14(8), 233; https://0-doi-org.brum.beds.ac.uk/10.3390/a14080233 - 08 Aug 2021
Abstract
The current Special Issue contains six papers focused on Multiple Criteria Decision Making (MCDM) problems and the formal techniques applied to derive consistent rankings of them [...] Full article
(This article belongs to the Special Issue Algorithms and Models for Dynamic Multiple Criteria Decision Making)
Article
A General Cooperative Optimization Approach for Distributing Service Points in Mobility Applications
Algorithms 2021, 14(8), 232; https://0-doi-org.brum.beds.ac.uk/10.3390/a14080232 - 06 Aug 2021
Abstract
This article presents a cooperative optimization approach (COA) for distributing service points for mobility applications, which generalizes and refines a previously proposed method. COA is an iterative framework for optimizing service point locations, combining an optimization component with user interaction on a large scale and a machine learning component that learns user needs and provides the objective function for the optimization. The previously proposed COA was designed for mobility applications in which single service points are sufficient for satisfying individual user demand. This framework is generalized here for applications in which the satisfaction of demand relies on the existence of two or more suitably located service stations, such as in the case of bike/car sharing systems. A new matrix factorization model is used as a surrogate objective function for the optimization, allowing us to learn and exploit similar preferences among users w.r.t. service point locations. Based on this surrogate objective function, a mixed integer linear program is solved to generate an optimized solution to the problem w.r.t. the currently known user information. User interaction, refinement of the matrix factorization, and optimization are iterated. An experimental evaluation analyzes the performance of COA with special consideration of the number of user interactions required to find near optimal solutions. The algorithm is tested on artificial instances, as well as instances derived from real-world taxi data from Manhattan. Results show that the approach can effectively solve instances with hundreds of potential service point locations and thousands of users, while keeping the number of user interactions reasonably low. A bound on the number of user interactions required to obtain full knowledge of user preferences is derived, and results show that after only 50% of these user interactions, the solutions generated by COA feature optimality gaps of only 1.45% on average.
Full article
(This article belongs to the Special Issue 2021 Selected Papers from Algorithms Editorial Board Members)
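The matrix factorization surrogate can be sketched in a generic form as follows: a plain gradient-descent factorization of a partially observed user-location preference matrix on toy data. The paper's actual model, its constraints and its coupling to the MILP are not reproduced:

```python
import numpy as np

def factorize(R, mask, rank=2, steps=2000, lr=0.05, reg=0.01, seed=0):
    """Factor a partially observed user x location preference matrix R
    (mask marks the observed entries) as R ~ U @ V.T via gradient steps,
    so unobserved preferences can be predicted from similar users."""
    rng = np.random.default_rng(seed)
    n_users, n_locs = R.shape
    U = 0.1 * rng.standard_normal((n_users, rank))
    V = 0.1 * rng.standard_normal((n_locs, rank))
    for _ in range(steps):
        E = mask * (R - U @ V.T)          # error on observed entries only
        U += lr * (E @ V - reg * U)
        V += lr * (E.T @ U - reg * V)
    return U, V

# Toy data: 4 users, 3 candidate locations, one preference unobserved.
R = np.array([[5.0, 1.0, 0.0], [4.0, 1.0, 5.0], [1.0, 5.0, 2.0], [1.0, 4.0, 2.0]])
mask = np.ones_like(R)
mask[0, 2] = 0.0                          # user 0's preference for location 2 is unknown
U, V = factorize(R, mask)
pred = (U @ V.T)[0, 2]                    # predicted preference of user 0 for location 2
```

The point of the surrogate is exactly this completion step: the factorization generalizes observed user feedback to locations the user was never asked about, which the MILP then optimizes over.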
Article
Improved Duplication-Transfer-Loss Reconciliation with Extinct and Unsampled Lineages
Algorithms 2021, 14(8), 231; https://0-doi-org.brum.beds.ac.uk/10.3390/a14080231 - 05 Aug 2021
Abstract
Duplication-Transfer-Loss (DTL) reconciliation is a widely used computational technique for understanding gene family evolution and inferring horizontal gene transfer (transfer for short) in microbes. However, most existing models and implementations of DTL reconciliation cannot account for the effect of unsampled or extinct species lineages on the evolution of gene families, likely affecting their accuracy. Accounting for the presence and possible impact of any unsampled species lineages, including those that are extinct, is especially important for inferring and studying horizontal transfer since many genes in the species lineages represented in the reconciliation analysis are likely to have been acquired through horizontal transfer from unsampled lineages. While models of DTL reconciliation that account for transfer from unsampled lineages have already been proposed, they use a relatively simple framework for transfer from unsampled lineages and cannot explicitly infer the location on the species tree of each unsampled or extinct lineage associated with an identified transfer event. Furthermore, no systematic study yet exists that assesses the impact of accounting for unsampled lineages on the accuracy of DTL reconciliation.
In this work, we address these deficiencies by (i) introducing an extended DTL reconciliation model, called the DTLx reconciliation model, that accounts for unsampled and extinct species lineages in a new, more functional manner compared to existing models, (ii) showing that optimal reconciliations under the new DTLx reconciliation model can be computed just as efficiently as under the fastest DTL reconciliation model, (iii) providing an efficient algorithm for sampling optimal DTLx reconciliations uniformly at random, (iv) performing the first systematic simulation study to assess the impact of accounting for unsampled lineages on the accuracy of DTL reconciliation, and (v) comparing the accuracies of inferring transfers from unsampled lineages under our new model and the only other previously proposed parsimony-based model for this problem. Full article
(This article belongs to the Special Issue Algorithms in Computational Biology)
Article
Behavior Selection Metaheuristic Search Algorithm for the Pollination Optimization: A Simulation Case of Cocoa Flowers
Algorithms 2021, 14(8), 230; https://0-doi-org.brum.beds.ac.uk/10.3390/a14080230 - 31 Jul 2021
Abstract
Since nature is an excellent source of inspiration for optimization methods, many nature-inspired optimization algorithms have been proposed and modified to solve various optimization problems. This paper applies metaheuristics to a new field inspired by nature; more precisely, we study pollination optimization in cocoa plants. The cocoa plant was chosen because its flower differs from other kinds of flowers, for example, in relying on cross-pollination. This complex relationship between plants and pollinators also makes pollination a real-world problem for chocolate production. Therefore, this study first identifies the underlying optimization problem as a deferred-fitness problem, in which the quality of a potential solution cannot be determined immediately. The study then investigates how metaheuristic algorithms derived from three well-known techniques perform when applied to the flower pollination problem: Swarm Intelligence Algorithms, Individual Random Search, and Multi-Agent Systems search. We compare the behavior of these search methods based on the results of pollination simulations, using as criteria the number of pollinated flowers on the trees and the amount and fairness of nectar pickup by the pollinators. Our results show that the Multi-Agent System performs notably better than the other methods. The results of this study are insights into the co-evolution of behaviors for the collaborative pollination task. We also foresee that this investigation can help farmers increase chocolate production by developing methods to attract and promote pollinators. Full article
(This article belongs to the Special Issue Bio-Inspired Algorithms)
Article
An Efficient Time-Variant Reliability Analysis Method with Mixed Uncertainties
Algorithms 2021, 14(8), 229; https://0-doi-org.brum.beds.ac.uk/10.3390/a14080229 - 31 Jul 2021
Abstract
In practical engineering, a lack of information makes it impossible to accurately determine the distributions of all variables, so time-variant reliability problems with both random and interval variables may be encountered. This kind of problem usually involves a complex multilevel nested optimization, which imposes a substantial computational burden and makes it difficult to meet the requirements of complex engineering analyses. This study proposes a decoupling strategy for efficiently analyzing time-variant reliability based on the mixed uncertainty model. The interval variables are treated as independent random variables uniformly distributed over their respective intervals. A time-variant reliability-equivalent model containing only random variables is then established, avoiding multi-layer nested optimization. The stochastic process is first discretized to obtain several static limit state functions at different times, turning the time-variant reliability problem into a conventional time-invariant system reliability problem. The first-order reliability method (FORM) is used to analyze the reliability at each time. An efficient hybrid time-variant reliability algorithm with robust convergence is thus obtained from the equivalent model. Finally, numerical examples show the effectiveness of the proposed method. Full article
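The decoupling idea, replacing the interval variable by a uniform random variable and treating the discretized instants as a series system, can be illustrated with a crude Monte Carlo check. The limit state function and all parameter values below are hypothetical, and the paper uses FORM rather than sampling:

```python
import numpy as np

def time_variant_failure_prob(g, n_samples=100_000, t_grid=None, seed=0):
    """Monte Carlo illustration of the decoupled formulation: the interval
    variable y in [0.8, 1.2] is replaced by a uniform random variable, the
    time axis is discretised, and failure means g < 0 at ANY grid time
    (a series system over the discretised instants)."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, 1.0, 11) if t_grid is None else t_grid
    x = rng.normal(2.0, 0.3, n_samples)       # random strength variable (hypothetical)
    y = rng.uniform(0.8, 1.2, n_samples)      # interval load factor -> uniform variable
    fails = np.zeros(n_samples, dtype=bool)
    for ti in t:
        fails |= g(x, y, ti) < 0              # limit state violated at some instant
    return fails.mean()

# Hypothetical limit state: strength x minus a load y*(1 + 0.5 t) growing in time.
g = lambda x, y, t: x - y * (1.0 + 0.5 * t)
print(time_variant_failure_prob(g))
```

The discretization turns the time-variant question "does the process ever cross zero?" into the static series-system question evaluated above, which is the step that removes the nested optimization.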
Article
Image Representation Using Stacked Colour Histogram
Algorithms 2021, 14(8), 228; https://0-doi-org.brum.beds.ac.uk/10.3390/a14080228 - 30 Jul 2021
Abstract
Image representation plays a vital role in the realisation of a Content-Based Image Retrieval (CBIR) system. Representation is needed because pixel-by-pixel matching for image retrieval is impracticable as a result of the rigid nature of such an approach. In CBIR, therefore, colour, shape, texture and other visual features are used to represent images for effective retrieval. Among these visual features, colour and texture are particularly effective in defining the content of an image. However, combining these features does not necessarily guarantee better retrieval accuracy, due to image transformations such as rotation, scaling and translation that an image may have undergone. Moreover, feature vector representations that take ample memory space affect the running time of the retrieval task. To address these problems, we propose a new colour scheme called the Stacked Colour Histogram (SCH), which extracts colour and neighbourhood information into a descriptor for indexing images. SCH performs recurrent mean filtering of the image to be indexed: the image is repeatedly filtered (transformed), the output of each transformation serving as the input of the next, and a histogram is generated in each case. The histograms are summed bin by bin, and the resulting vector is used to index the image. Because the blurring process uses each pixel's neighbourhood information, the proposed SCH captures the inherent textural information of the indexed image. The SCH was extensively tested on the Coil100, Outext, Batik and Corel10K datasets. Coil100, Outext and Batik are generally used to assess image texture descriptors, while Corel10K is used for heterogeneous descriptors. The experimental results show that our proposed descriptor significantly improves retrieval and classification rates compared with the state-of-the-art descriptors for images with textural features (CMTH, MTH, TCM, CTM and NRFUCTM). Full article
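The recurrent filter-and-histogram loop can be sketched as follows. This is a greyscale, 3x3 box-filter simplification of the colour scheme (a colour version would repeat it per channel), and the bin and round counts are illustrative assumptions:

```python
import numpy as np

def stacked_histogram(img, bins=8, rounds=3):
    """Sketch of the SCH idea: repeatedly mean-filter an image (each
    output feeds the next round), take a histogram after every round,
    and sum the histograms bin by bin into one descriptor."""
    def mean_filter(a):
        # 3x3 box blur with edge padding
        p = np.pad(a, 1, mode="edge")
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0
    descriptor = np.zeros(bins)
    current = img.astype(float)
    for _ in range(rounds):
        current = mean_filter(current)
        hist, _ = np.histogram(current, bins=bins, range=(0, 256))
        descriptor += hist          # bin-by-bin accumulation
    return descriptor

img = np.arange(64, dtype=float).reshape(8, 8) * 4   # toy 8x8 gradient image
print(stacked_histogram(img))
```

Each blurring round mixes in more neighbourhood information, so the accumulated histogram encodes texture as well as colour, which is the property the abstract attributes to SCH.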
Article
A Modified Liu and Storey Conjugate Gradient Method for Large Scale Unconstrained Optimization Problems
Algorithms 2021, 14(8), 227; https://0-doi-org.brum.beds.ac.uk/10.3390/a14080227 - 28 Jul 2021
Abstract
The conjugate gradient method is one of the most popular methods for solving large-scale unconstrained optimization problems, since, unlike Newton's method or its approximations, it does not require second derivatives. Moreover, the conjugate gradient method can be applied in many fields, such as neural networks, image restoration, etc. Many complicated two- and three-term methods have been proposed to solve such optimization problems. In this paper, we propose a simple, efficient, and robust conjugate gradient method. The new method is constructed based on the Liu and Storey method to overcome problems with convergence and the descent property. The new modified method satisfies the convergence properties and the sufficient descent condition under some assumptions. Numerical results, comprising the number of iterations and CPU time, show that the new method outperforms well-known CG methods such as CG-Descent 5.3, Liu and Storey, and Dai and Liao. Full article
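For context, the baseline Liu-Storey direction update that the modified method builds on can be sketched as follows: a generic nonlinear CG with backtracking line search and a steepest-descent safeguard, not the authors' modification:

```python
import numpy as np

def ls_conjugate_gradient(f, grad, x0, tol=1e-6, max_iter=500):
    """Nonlinear CG with the classical Liu-Storey beta,
    beta_k = g_{k+1}^T (g_{k+1} - g_k) / (-d_k^T g_k),
    plus a simple backtracking Armijo line search."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:                 # safeguard: restart with steepest descent
            d = -g
        alpha, fx, slope = 1.0, f(x), g @ d
        while f(x + alpha * d) > fx + 1e-4 * alpha * slope:
            alpha *= 0.5               # backtrack until Armijo decrease holds
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = g_new @ (g_new - g) / (-(d @ g))   # Liu-Storey coefficient
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Convex quadratic with minimiser (1, 2)
f = lambda x: (x[0] - 1) ** 2 + 2 * (x[1] - 2) ** 2
grad = lambda x: np.array([2 * (x[0] - 1), 4 * (x[1] - 2)])
print(ls_conjugate_gradient(f, grad, [0.0, 0.0]))  # approximately [1. 2.]
```

The modified method in the paper replaces this beta so that the sufficient descent condition holds by construction, removing the need for the ad hoc restart safeguard used here.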
Article
Synthetic Experiences for Accelerating DQN Performance in Discrete Non-Deterministic Environments
Algorithms 2021, 14(8), 226; https://0-doi-org.brum.beds.ac.uk/10.3390/a14080226 - 27 Jul 2021
Abstract
State-of-the-art deep reinforcement learning algorithms such as DQN and DDPG use a replay buffer concept called Experience Replay. By default, the buffer contains only the experiences gathered at runtime. We propose a method called Interpolated Experience Replay that uses the stored (real) transitions to create synthetic ones that assist the learner. In this first approach to the field, we limit ourselves to discrete, non-deterministic environments and use a simple, equally weighted average of the rewards in combination with the observed follow-up states. We demonstrate a significantly improved overall mean performance compared to a DQN network with vanilla Experience Replay on the discrete, non-deterministic FrozenLake8x8-v0 environment. Full article
(This article belongs to the Special Issue Algorithmic Aspects of Neural Networks)
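The interpolation rule described above, an equally weighted reward average combined with the observed follow-up states, can be sketched as follows. The buffer API shown is hypothetical, not the authors' implementation:

```python
from collections import defaultdict

class InterpolatedReplayBuffer:
    """Sketch for a discrete non-deterministic environment: group stored
    transitions by (state, action), average their rewards equally, and
    emit synthetic experiences pairing that average reward with each
    follow-up state observed so far."""

    def __init__(self):
        self.real = defaultdict(list)   # (s, a) -> list of (reward, next_state)

    def store(self, s, a, r, s_next):
        self.real[(s, a)].append((r, s_next))

    def synthetic(self, s, a):
        outcomes = self.real[(s, a)]
        if not outcomes:
            return []
        avg_r = sum(r for r, _ in outcomes) / len(outcomes)
        return [(s, a, avg_r, s_next) for s_next in {sn for _, sn in outcomes}]

buf = InterpolatedReplayBuffer()
buf.store(0, 1, 0.0, 3)
buf.store(0, 1, 1.0, 3)
buf.store(0, 1, 0.5, 4)
print(buf.synthetic(0, 1))  # average reward 0.5 paired with follow-up states 3 and 4
```

In a stochastic environment like FrozenLake, a single stored transition is a noisy sample of the dynamics; averaging over all observed outcomes of the same (state, action) pair is what makes the synthetic experiences less noisy than the real ones.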
Article
Similar Supergraph Search Based on Graph Edit Distance
Algorithms 2021, 14(8), 225; https://0-doi-org.brum.beds.ac.uk/10.3390/a14080225 - 27 Jul 2021
Abstract
Subgraph and supergraph search methods are promising techniques for the development of new drugs. For example, the chemical structure of favipiravir—an antiviral treatment for influenza—resembles the structure of some components of RNA. Represented as graphs, such compounds are similar to a subgraph of favipiravir. However, the existing supergraph search methods can only discover compounds that match exactly. We propose a novel problem, called similar supergraph search, and design an efficient algorithm to solve it. The problem is to identify all graphs in a database that are similar to any subgraph of a query graph, where similarity is defined as edit distance. Our algorithm represents the set of candidate subgraphs by a code tree, which it uses to efficiently compute edit distance. With a distance threshold of zero, our algorithm is equivalent to an existing efficient algorithm for exact supergraph search. Our experiments show that the computation time increased exponentially as the distance threshold increased, but increased sublinearly with the number of graphs in the database. Full article
(This article belongs to the Special Issue Scalable Graph Algorithms and Applications)
Article
Intelligent Network Intrusion Prevention Feature Collection and Classification Algorithms
Algorithms 2021, 14(8), 224; https://0-doi-org.brum.beds.ac.uk/10.3390/a14080224 - 26 Jul 2021
Abstract
Rapid growth in Internet use and its diverse applications, including military ones, has led researchers to develop smart systems that help applications and users obtain facilities through the provision of the required service quality in networks. Such smart technologies protect interactions in dispersed settings such as e-commerce, mobile networking, telecommunications, and network management. This article proposes intelligent feature selection methods and an intrusion detection (ISTID) organization in networks based on neuro-genetic algorithms, intelligent software agents, genetic algorithms, particle swarm intelligence, neural networks, and rough sets. These techniques are useful for identifying and preventing network intrusion, providing Internet safety, and improving service value as well as accuracy, performance, and efficiency. Furthermore, new algorithms, an intelligent rule-based attribute collection algorithm for efficient operation and a rule-based improved support vector machine, are proposed in this article, along with a survey of current smart techniques for intrusion detection systems. Full article
Article
Fixed Point Results on Multi-Valued Generalized (α,β)-Nonexpansive Mappings in Banach Spaces
Algorithms 2021, 14(8), 223; https://0-doi-org.brum.beds.ac.uk/10.3390/a14080223 - 25 Jul 2021
Abstract
In this paper, we introduce and study the concept of multi-valued generalized (α,β)-nonexpansive mappings, the multi-valued version of the recently developed generalized (α,β)-nonexpansive mappings. We establish some elementary properties and fixed point existence results for these mappings. Moreover, a multi-valued version of the M-iterative scheme is proposed for approximating fixed points of these mappings in the weak and strong senses. Using an example, we also show that the M-iterative scheme converges faster than many other schemes for this class of mappings. Full article
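For intuition, here is a single-valued sketch of the M-iterative scheme as commonly given in the literature (z_n = (1-α)x_n + αTx_n, y_n = Tz_n, x_{n+1} = Ty_n), applied to an ordinary contraction for illustration; the multi-valued version and the mapping class studied in the paper are not reproduced:

```python
def m_iteration(T, x0, alpha=0.7, steps=50):
    """Single-valued sketch of the M-iterative scheme:
    z_n = (1 - alpha) * x_n + alpha * T(x_n),  y_n = T(z_n),  x_{n+1} = T(y_n)."""
    x = x0
    for _ in range(steps):
        z = (1 - alpha) * x + alpha * T(x)
        y = T(z)
        x = T(y)
    return x

T = lambda x: (x + 6) / 3          # contraction with fixed point x* = 3
print(m_iteration(T, 0.0))         # converges to 3.0
```

Applying T twice more per step after the averaged point z is what typically makes the M-scheme converge in fewer iterations than Mann- or Ishikawa-type schemes, which is the speed comparison the abstract refers to.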