Algorithms, Volume 14, Issue 7 (July 2021) – 29 articles

Cover Story: In this project, we established a proof of concept on a Raspberry Pi 4, testing the integration capacity of quantum computing within mobile robotics on one of the most demanding problems in this industrial sector: picking and batching. The results across the different technologies have been promising. Moreover, when extra computational capacity is needed, the robot parallelizes part of the operations in hybrid (quantum + classical) computing, accessing CPUs and QPUs distributed in a public or private cloud. Furthermore, we developed a stable environment (ARM64) inside the robot (Raspberry Pi) to run gradient operations and other quantum algorithms on IBMQ, Amazon Braket (D-Wave), and PennyLane, both locally and remotely.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Article
Development of Multi-Actor Multi-Criteria Analysis Based on the Weight of Stakeholder Involvement in the Assessment of Natural–Cultural Tourism Area Transportation Policies
Algorithms 2021, 14(7), 217; https://0-doi-org.brum.beds.ac.uk/10.3390/a14070217 - 20 Jul 2021
Cited by 1 | Viewed by 605
Abstract
Multi-actor multi-criteria analysis (MAMCA) was developed with a process involving the participation of various stakeholders. Stakeholders express various criteria as measures for the achievement of their respective goals. In general, the assessment of each stakeholder is considered to have the same weight. In reality, the weight of each stakeholder’s involvement in policy decision making is not the same. For example, the government’s assessment weight will be different from those of local business actors. In this study, the authors developed a multi-actor multi-criteria analysis method by adding the weight of stakeholder involvement when making decisions about transportation policies that support sustainable mobility in protected natural–cultural tourism areas. The weight of involvement was developed through stakeholder participation. Stakeholders were asked to provide weights for all stakeholders other than themselves using the AHP method. The results of this weighting were then averaged and considered as the stakeholder assessment weights. Adding stakeholder weighting can also improve the quality of decisions by avoiding bias and following the principle of fairness in the assessment. Full article
(This article belongs to the Special Issue Algorithms and Models for Dynamic Multiple Criteria Decision Making)
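The weighting procedure described above — each stakeholder rates the other stakeholders pairwise with AHP, and the resulting weight vectors are averaged — can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the matrices and function names are assumptions.

```python
import numpy as np

def ahp_weights(pairwise, iters=100):
    """Priority weights from an AHP pairwise-comparison matrix via
    power iteration on the principal eigenvector."""
    A = np.asarray(pairwise, dtype=float)
    w = np.ones(A.shape[0]) / A.shape[0]
    for _ in range(iters):
        w = A @ w
        w /= w.sum()          # normalize so the weights sum to 1
    return w

def involvement_weights(pairwise_per_stakeholder):
    """Average the AHP weight vectors produced by the individual
    stakeholders into the final involvement weights."""
    return np.mean([ahp_weights(P) for P in pairwise_per_stakeholder], axis=0)
```

For a perfectly consistent 2×2 matrix [[1, 2], [1/2, 1]] this yields weights (2/3, 1/3).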

Article
ArCAR: A Novel Deep Learning Computer-Aided Recognition for Character-Level Arabic Text Representation and Recognition
Algorithms 2021, 14(7), 216; https://0-doi-org.brum.beds.ac.uk/10.3390/a14070216 - 16 Jul 2021
Viewed by 605
Abstract
Arabic text classification is the process of categorizing diverse Arabic content into the proper category. In this paper, a novel deep learning Arabic text computer-aided recognition (ArCAR) system is proposed to represent and recognize Arabic text at the character level. The input Arabic text is quantized in the form of 1D vectors, one per Arabic character, which are stacked into a 2D array for the ArCAR system. The ArCAR system is validated with 5-fold cross-validation tests for two applications: Arabic text document classification and Arabic sentiment analysis. For document classification, the ArCAR system achieves its best performance on the Alarabiya-balance dataset, with an overall accuracy, recall, precision, and F1-score of 97.76%, 94.08%, 94.16%, and 94.09%, respectively. Meanwhile, ArCAR performs well for Arabic sentiment analysis, achieving its best performance on the balanced hotel Arabic reviews dataset (HARD), with an overall accuracy and F1-score of 93.58% and 93.23%, respectively. The proposed ArCAR seems to provide a practical solution for accurate Arabic text representation, understanding, and classification. Full article
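A minimal sketch of the character-level quantization the abstract describes — one one-hot 1D vector per character, stacked into a 2D array. The simplified 28-letter alphabet and fixed maximum length are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

# Hypothetical alphabet (28 base Arabic letters); the paper's actual
# character set may differ.
ALPHABET = list("ابتثجحخدذرزسشصضطظعغفقكلمنهوي")
CHAR_IDX = {c: i for i, c in enumerate(ALPHABET)}

def quantize(text, max_len=64):
    """One-hot encode each character; rows past the end of the text,
    and characters outside the alphabet, stay all-zero."""
    arr = np.zeros((max_len, len(ALPHABET)), dtype=np.float32)
    for pos, ch in enumerate(text[:max_len]):
        idx = CHAR_IDX.get(ch)
        if idx is not None:
            arr[pos, idx] = 1.0
    return arr
```

The resulting 2D array can then be fed to a convolutional classifier the way an image would be.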

Article
Design of an FPGA Hardware Optimizing the Performance and Power Consumption of a Plenoptic Camera Depth Estimation Algorithm
Algorithms 2021, 14(7), 215; https://0-doi-org.brum.beds.ac.uk/10.3390/a14070215 - 15 Jul 2021
Viewed by 545
Abstract
A plenoptic camera captures the light field, which can be exploited to estimate the 3D depth of the scene. This process generally consists of a large number of recurrent operations and thus requires high computational power. A general-purpose processor, owing to its sequential architecture, therefore suffers from long execution times. A desktop graphics processing unit (GPU) can be employed to resolve this problem; however, it is an expensive solution with respect to power consumption and therefore cannot be used in mobile applications with low energy budgets. In this paper, we propose a modified plenoptic depth estimation algorithm that works on a single frame recorded by the camera, together with a corresponding FPGA-based hardware design. For this purpose, the algorithm is restructured for parallelization and pipelining. In combination with efficient memory access, the results show good performance and lower power consumption compared to other systems. Full article
(This article belongs to the Special Issue Algorithms in Reconfigurable Computing)

Article
Hybrid Artificial Intelligence HFS-RF-PSO Model for Construction Labor Productivity Prediction and Optimization
Algorithms 2021, 14(7), 214; https://0-doi-org.brum.beds.ac.uk/10.3390/a14070214 - 15 Jul 2021
Cited by 1 | Viewed by 928
Abstract
This paper presents a novel approach, using hybrid feature selection (HFS), machine learning (ML), and particle swarm optimization (PSO) to predict and optimize construction labor productivity (CLP). HFS selects factors that are most predictive of CLP to reduce the complexity of CLP data. Selected factors are used as inputs for four ML models for CLP prediction. The study results showed that random forest (RF) obtains better performance in mapping the relationship between CLP and selected factors affecting CLP, compared with the other three models. Finally, the integration of RF and PSO is developed to identify the maximum CLP value and the optimum value of each selected factor. This paper introduces a new hybrid model named HFS-RF-PSO that addresses the main limitation of existing CLP prediction studies, which is the lack of capacity to optimize CLP and its most predictive factors with respect to a construction company’s preferences, such as a targeted CLP. The major contribution of this paper is the development of the hybrid HFS-RF-PSO model as a novel approach for optimizing factors that influence CLP and identifying the maximum CLP value. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
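The final stage — searching the factor space of the fitted model for the values that maximize predicted CLP — can be sketched with a generic particle swarm optimizer. The toy quadratic surrogate below stands in for the trained random forest; all names and parameter values are illustrative, not the authors' implementation.

```python
import numpy as np

def pso_maximize(f, bounds, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain global-best PSO maximizing f over box constraints."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    d = lo.size
    x = rng.uniform(lo, hi, (n_particles, d))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[pval.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, d))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)           # respect factor bounds
        val = np.array([f(p) for p in x])
        better = val > pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmax()].copy()
    return g, pval.max()

# Toy surrogate standing in for the trained RF: predicted CLP peaks
# at factor values (3, 7).
surrogate = lambda z: -((z[0] - 3) ** 2 + (z[1] - 7) ** 2)
```

Calling `pso_maximize(surrogate, [(0, 10), (0, 10)])` returns the factor vector near (3, 7) that maximizes the surrogate's predicted CLP.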

Article
An Integrated Deep Learning and Belief Rule-Based Expert System for Visual Sentiment Analysis under Uncertainty
Algorithms 2021, 14(7), 213; https://0-doi-org.brum.beds.ac.uk/10.3390/a14070213 - 15 Jul 2021
Viewed by 830
Abstract
Visual sentiment analysis has become more popular than its textual counterpart in various domains for decision-making purposes. On this account, we develop a visual sentiment analysis system that can classify image expression. The system classifies images according to six expressions: anger, joy, love, surprise, fear, and sadness. In our study, we propose an expert system that integrates a Deep Learning method with a Belief Rule Base (the BRB-DL approach) to assess an image’s overall sentiment under uncertainty. The BRB-DL approach combines data-driven and knowledge-driven techniques to determine the overall sentiment. Our integrated expert system outperforms state-of-the-art methods of visual sentiment analysis with promising results, classifying images with 86% accuracy. The system can be beneficial for understanding the emotional tendency and psychological state of an individual. Full article
(This article belongs to the Special Issue New Algorithms for Visual Data Mining)

Article
Deep Learning Based Cardiac MRI Segmentation: Do We Need Experts?
Algorithms 2021, 14(7), 212; https://0-doi-org.brum.beds.ac.uk/10.3390/a14070212 - 14 Jul 2021
Viewed by 512
Abstract
Deep learning methods are the de facto solutions to a multitude of medical image analysis tasks. Cardiac MRI segmentation is one such application which, like many others, requires a large amount of annotated data so that a trained network can generalize well. Unfortunately, having a large number of images manually curated by medical experts is both slow and extremely expensive. In this paper, we set out to explore whether expert knowledge is a strict requirement for the creation of annotated data sets on which machine learning can successfully be trained. To do so, we gauged the performance of three segmentation models, namely U-Net, Attention U-Net, and ENet, trained with different loss functions on expert and non-expert ground truth for cardiac cine-MRI segmentation. Evaluation was done with classic segmentation metrics (Dice index and Hausdorff distance) as well as clinical measurements, such as the ventricular ejection fractions and the myocardial mass. The results reveal that the generalization performance of a segmentation neural network trained on non-expert ground truth is, for all practical purposes, as good as that of one trained on expert ground truth, particularly when the non-expert receives a decent level of training; this highlights an opportunity for the efficient and cost-effective creation of annotations for cardiac data sets. Full article
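The two classic segmentation metrics named above can be computed directly; a minimal NumPy sketch (brute-force Hausdorff over point sets, adequate for small contours):

```python
import numpy as np

def dice(a, b):
    """Dice index between two boolean masks (1.0 for two empty masks)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two point sets."""
    A = np.atleast_2d(np.asarray(A, float))
    B = np.atleast_2d(np.asarray(B, float))
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return max(D.min(axis=1).max(), D.min(axis=0).max())
```

Dice rewards overlap in area, while Hausdorff penalizes the worst boundary disagreement, which is why the two are usually reported together.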

Article
Efficient Dynamic Cost Scheduling Algorithm for Financial Data Supply Chain
Algorithms 2021, 14(7), 211; https://0-doi-org.brum.beds.ac.uk/10.3390/a14070211 - 14 Jul 2021
Viewed by 508
Abstract
The financial data supply chain is vital to the economy, especially for banks, as it affects their customer service level. It is therefore crucial to manage the scheduling of the financial data supply chain to improve the efficiency of banking performance. The primary tool used in the data supply chain is data batch processing, which requires efficient scheduling. This work investigates the problem of scheduling the processing of tasks with non-identical sizes and different priorities on a set of parallel processors. An iterative dynamic scheduling algorithm (DCSDBP) was developed to address the data batching process. The objective is to minimize different cost types while satisfying constraints such as resource availability, customer service level, and task dependency relations. The algorithm proved its effectiveness, as demonstrated on an illustrative network, by allocating tasks with higher priority and weight while taking into consideration customers’ Service Level Agreements, time, and different types of costs, leading to a lower total cost of the batching process. In addition, a sensitivity analysis was conducted by varying the model parameters for networks of different sizes and complexities to study their impact on the total cost and the problem under study. Full article
(This article belongs to the Special Issue Scheduling: Algorithms and Applications)

Article
A Multicriteria Simheuristic Approach for Solving a Stochastic Permutation Flow Shop Scheduling Problem
Algorithms 2021, 14(7), 210; https://0-doi-org.brum.beds.ac.uk/10.3390/a14070210 - 14 Jul 2021
Viewed by 531
Abstract
This paper proposes a hybridized simheuristic approach that couples a greedy randomized adaptive search procedure (GRASP), a Monte Carlo simulation, a Pareto archived evolution strategy (PAES), and an analytic hierarchy process (AHP), in order to solve a multicriteria stochastic permutation flow shop problem with stochastic processing times and stochastic sequence-dependent setup times. For the decisional criteria, the proposed approach considers four objective functions, including two quantitative and two qualitative criteria. While the expected value and the standard deviation of the earliness/tardiness of jobs are included in the quantitative criteria to address a robust solution in a just-in-time environment, this approach also includes a qualitative assessment of the product and customer importance in order to appraise a weighted priority for each job. An experimental design was carried out in several study instances of the flow shop problem to test the effects of the processing times and sequence-dependent setup times, obtained through lognormal and uniform probability distributions with three levels of coefficients of variation, settled as 0.3, 0.4, and 0.5. The results show that both probability distributions and coefficients of variation have a significant effect on the four decision criteria selected. In addition, the analytic hierarchy process makes it possible to choose the best sequence exhibited by the Pareto frontier that adjusts most adequately to the decision-makers’ objectives. Full article
(This article belongs to the Special Issue Scheduling: Algorithms and Applications)
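The core evaluation inside such a simheuristic — scoring a candidate permutation under stochastic processing times by Monte Carlo sampling — can be sketched as follows. The lognormal parameterization from a target mean and coefficient of variation matches the abstract's setup; the function names and the omission of setup times are simplifying assumptions.

```python
import numpy as np

def makespan(p, order):
    """Makespan of a permutation flow shop: p[j, m] is the processing
    time of job j on machine m; `order` is the job permutation."""
    p = np.asarray(p, dtype=float)[list(order)]
    C = np.zeros_like(p)
    for j in range(p.shape[0]):
        for m in range(p.shape[1]):
            C[j, m] = p[j, m] + max(C[j - 1, m] if j else 0.0,
                                    C[j, m - 1] if m else 0.0)
    return C[-1, -1]

def mc_makespan(mean_p, cv, order, n=500, seed=0):
    """Mean/std of the makespan when processing times are lognormal
    with the given means and coefficient of variation."""
    rng = np.random.default_rng(seed)
    mean_p = np.asarray(mean_p, dtype=float)
    s2 = np.log(1.0 + cv ** 2)
    mu = np.log(mean_p) - s2 / 2.0        # so E[sampled time] == mean_p
    draws = [makespan(rng.lognormal(mu, np.sqrt(s2)), order)
             for _ in range(n)]
    return float(np.mean(draws)), float(np.std(draws))
```

The mean and standard deviation returned correspond to the two quantitative criteria the abstract mentions (expected value and dispersion of the schedule's performance).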

Article
Containment Control of First-Order Multi-Agent Systems under PI Coordination Protocol
Algorithms 2021, 14(7), 209; https://0-doi-org.brum.beds.ac.uk/10.3390/a14070209 - 14 Jul 2021
Viewed by 610
Abstract
This paper investigates the containment control problem for a discrete-time first-order multi-agent system composed of multiple leaders and followers, for which we propose a proportional-integral (PI) coordination control protocol. Assuming that each follower has a directed path to one leader, we consider several cases according to the different topologies formed by the followers. Under a general directed topology that has a spanning tree, the frequency-domain analysis method is used to obtain a sufficient convergence condition under which the followers achieve containment-rendezvous, that is, all the followers reach an agreement value in the convex hull formed by the leaders. In particular, a less conservative sufficient condition is obtained for followers under a symmetric and connected topology. Furthermore, it is proved that the proposed protocol drives followers with an unconnected topology to converge to the convex hull of the leaders. Numerical examples confirm the correctness of the theoretical results. Full article
(This article belongs to the Special Issue Algorithms for PID Controller 2021)
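To illustrate the setting (not the paper's exact protocol or gains), a scalar discrete-time simulation with two static leaders and a chain of three followers under a PI-type update: the integral action drives the consensus errors to zero, so the followers settle inside the leaders' convex hull. Topology and gains are assumptions chosen for stability of this toy example.

```python
import numpy as np

def simulate(steps=800, kp=0.2, ki=0.02):
    """Two static leaders at 0 and 1; three followers in a chain, the
    end followers each linked to one leader (so every follower has a
    directed path to a leader)."""
    leaders = np.array([0.0, 1.0])
    x = np.array([5.0, -3.0, 2.0])   # follower states (start outside hull)
    z = np.zeros(3)                  # integral of the consensus error
    for _ in range(steps):
        e = np.array([
            (x[0] - leaders[0]) + (x[0] - x[1]),
            (x[1] - x[0]) + (x[1] - x[2]),
            (x[2] - x[1]) + (x[2] - leaders[1]),
        ])
        x = x - kp * e - ki * z      # PI update
        z = z + e
    return x
```

For this chain the zero-error equilibrium is the linear interpolation (0.25, 0.5, 0.75) between the two leaders, i.e., strictly inside their convex hull [0, 1].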

Article
PM2.5 Concentration Prediction Based on CNN-BiLSTM and Attention Mechanism
Algorithms 2021, 14(7), 208; https://0-doi-org.brum.beds.ac.uk/10.3390/a14070208 - 13 Jul 2021
Viewed by 448
Abstract
The concentration of PM2.5 is an important index for measuring the degree of air pollution. When it exceeds the standard value, it is considered to cause pollution and lower air quality, which is harmful to human health and can cause a variety of diseases, e.g., asthma and chronic bronchitis. Therefore, predicting the PM2.5 concentration helps reduce its harm. In this paper, a hybrid model called CNN-BiLSTM-Attention is proposed to predict the PM2.5 concentration over the next two days. First, we select the hourly PM2.5 concentration data from January 2013 to February 2017 for Shunyi District, Beijing; the auxiliary data include air quality data and meteorological data. We use the sliding window method for preprocessing and divide the corresponding data into a training set, a validation set, and a test set. Second, CNN-BiLSTM-Attention is composed of a convolutional neural network, a bidirectional long short-term memory neural network, and an attention mechanism. The parameters of this network structure are determined by the minimum error during training, including the size of the convolution kernel, the activation function, the batch size, the dropout rate, the learning rate, etc. We determine the input and output feature sizes by evaluating the performance of the model, finding that the best output covers the next 48 h. Third, in the experimental part, we use the test set to check the performance of the proposed CNN-BiLSTM-Attention on PM2.5 prediction against several comparison models, i.e., lasso regression, ridge regression, XGBoost, SVR, CNN-LSTM, and CNN-BiLSTM. We conduct short-term prediction (48 h) and long-term prediction (72 h, 96 h, 120 h, and 144 h).
The results demonstrate that even the 144 h predictions of CNN-BiLSTM-Attention are better than the 48 h predictions of the comparison models in terms of mean absolute error (MAE), root mean square error (RMSE), and coefficient of determination (R2). Full article
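The sliding-window preprocessing step can be sketched as follows, pairing `in_len` past hours with the next `out_len` hours as the prediction target (e.g., 48 for the short-term setting); the function name is illustrative.

```python
import numpy as np

def sliding_windows(series, in_len, out_len):
    """Split a time series into (input, target) pairs: in_len past
    steps predict the next out_len steps."""
    X, Y = [], []
    for i in range(len(series) - in_len - out_len + 1):
        X.append(series[i:i + in_len])
        Y.append(series[i + in_len:i + in_len + out_len])
    return np.array(X), np.array(Y)
```

The resulting pairs are then split chronologically into training, validation, and test sets so the model never sees future data during training.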

Article
Extended High Order Algorithms for Equations under the Same Set of Conditions
Algorithms 2021, 14(7), 207; https://0-doi-org.brum.beds.ac.uk/10.3390/a14070207 - 12 Jul 2021
Viewed by 540
Abstract
A variety of strategies are used to construct algorithms for solving equations. However, higher order derivatives are usually assumed to calculate the convergence order. More importantly, bounds on error and uniqueness regions for the solution are also not derived. Therefore, the benefits of these algorithms are limited. We simply use the first derivative to tackle all these issues and study the ball analysis for two sixth order algorithms under the same set of conditions. In addition, we present a calculable ball comparison between these algorithms. In this manner, we enhance the utility of these algorithms. Our idea is very general. That is why it can also be used to extend other algorithms as well in the same way. Full article

Article
Energy Management of a Multi-Source Power System
Algorithms 2021, 14(7), 206; https://0-doi-org.brum.beds.ac.uk/10.3390/a14070206 - 07 Jul 2021
Viewed by 556
Abstract
This work focuses on energy management for a system operated by multiple energy sources, including batteries, supercapacitors, a hydrogen fuel cell, and a photovoltaic cell. The overall objective is to minimize the power consumption from all sources needed to satisfy the system’s power demand by optimizing the switching between the different energy sources. A dynamic mathematical model representing the energy sources is developed, taking into account the constraints on the system, primarily the state of charge of the battery and the supercapacitors. In addition to the model, a heuristic approach is developed and compared with the mathematical model. Both approaches were tested on a multi-energy-source ground robot as a prototype. The novelty of this work is that the scheduling of an energy system consisting of four different types of sources is compared by performing analysis via dynamic programming and a heuristic approach. The results generated by both methods are analyzed and compared to a standard mode of operation. The comparison confirms that the proposed approaches minimize the average power consumption across all sources. The dynamic programming approach performs well in terms of optimization and provides a superior switching sequence, meeting the power demand in all simulations performed, while the heuristic approach offers definite advantages in terms of ease of implementation and simple computational requirements. Full article
(This article belongs to the Special Issue Scheduling: Algorithms and Applications)
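A toy dynamic program for the switching decision — pick one source per time step, trading per-step supply cost against a switching penalty — under an illustrative cost model; the paper's actual model also tracks state of charge and other constraints, which this sketch omits.

```python
def optimal_switching(step_cost, switch_cost):
    """step_cost[t][s]: cost of serving time step t from source s.
    Returns the cheapest source sequence and its total cost."""
    T, n_s = len(step_cost), len(step_cost[0])
    best = [0.0] * n_s          # best[s]: min cost of a plan ending at source s
    back = []                   # predecessor choices for backtracking
    for t in range(T):
        new, arg = [], []
        for s in range(n_s):
            p = min(range(n_s),
                    key=lambda k: best[k] + (0 if k == s else switch_cost))
            new.append(best[p] + (0 if p == s else switch_cost)
                       + step_cost[t][s])
            arg.append(p)
        back.append(arg)
        best = new
    s = min(range(n_s), key=lambda k: best[k])
    total = best[s]
    seq = [s]                   # backtrack the optimal source sequence
    for t in range(T - 1, 0, -1):
        s = back[t][s]
        seq.append(s)
    return seq[::-1], total
```

With costs [[1, 5], [5, 1]] and a switching penalty of 1, switching mid-horizon is optimal; raise the penalty to 10 and staying on one source becomes cheaper.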

Article
Iterative Solution of Linear Matrix Inequalities for the Combined Control and Observer Design of Systems with Polytopic Parameter Uncertainty and Stochastic Noise
Algorithms 2021, 14(7), 205; https://0-doi-org.brum.beds.ac.uk/10.3390/a14070205 - 07 Jul 2021
Viewed by 703
Abstract
Most research activities that utilize linear matrix inequality (LMI) techniques are based on the assumption that the separation principle of control and observer synthesis holds. This principle states that the combination of separately designed linear state feedback controllers and linear state observers, which are independently proven to be stable, results in overall stable system dynamics. However, even for linear systems, this property does not necessarily hold if polytopic parameter uncertainty and stochastic noise influence the system’s state and output equations. In this case, the control and observer design needs to be performed simultaneously to guarantee stabilization. However, the loss of the validity of the separation principle leads to nonlinear matrix inequalities instead of LMIs. For those nonlinear inequalities, the current paper proposes an iterative LMI solution procedure. If this algorithm produces a feasible solution, the resulting controller and observer gains ensure robust stability of the closed-loop control system for all possible parameter values. In addition, the proposed optimization criterion leads to a minimization of the sensitivity to stochastic noise so that the actual state trajectories converge as closely as possible to the desired operating point. The efficiency of the proposed solution approach is demonstrated by stabilizing the Zeeman catastrophe machine along the unstable branch of its bifurcation diagram. Additionally, an observer-based tracking control task is embedded into an iterative learning-type control framework. Full article
(This article belongs to the Special Issue Algorithms for Reliable Estimation, Identification and Control II)

Article
A Comparative Study of Block Incomplete Sparse Approximate Inverses Preconditioning on Tesla K20 and V100 GPUs
Algorithms 2021, 14(7), 204; https://0-doi-org.brum.beds.ac.uk/10.3390/a14070204 - 30 Jun 2021
Viewed by 674
Abstract
Incomplete Sparse Approximate Inverses (ISAI) have shown advantages over sparse triangular solves on GPUs when used in incomplete-LU-based preconditioners. In this paper, we extend the single-GPU Block-ISAI method to a multi-GPU algorithm by coupling it with a Block-Jacobi preconditioner, and we describe the detailed implementation in the open-source numerical package PETSc. In the experiments, two representative cases are examined and a comparative study of Block-ISAI on up to four GPUs is conducted on two major generations of NVIDIA GPUs (Tesla K20 and Tesla V100). Block-Jacobi preconditioning with Block-ISAI (BJPB-ISAI) shows an advantage over the level-scheduling-based triangular solves from the cuSPARSE library for these cases, and the overhead of setting up Block-ISAI and the total wall clock time of GMRES are greatly reduced on Tesla V100 GPUs compared to Tesla K20 GPUs. Full article
(This article belongs to the Section Parallel and Distributed Algorithms)
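A dense, illustrative stand-in for the Block-Jacobi part of the preconditioner: invert each diagonal block of A so that applying the result to a residual is a cheap block-wise product. Real Block-ISAI instead builds sparse approximate inverses of the incomplete LU factors; this sketch only shows the block-diagonal structure that makes the method embarrassingly parallel across GPUs.

```python
import numpy as np

def block_jacobi_inverse(A, block_size):
    """Explicitly invert the diagonal blocks of A; off-block entries
    of the preconditioner are zero, so each block is independent."""
    n = A.shape[0]
    Minv = np.zeros_like(A, dtype=float)
    for i in range(0, n, block_size):
        j = min(i + block_size, n)
        Minv[i:j, i:j] = np.linalg.inv(A[i:j, i:j])
    return Minv
```

Inside a Krylov solver such as GMRES, the preconditioned residual is then simply `Minv @ r`, with each block's product computable on a different device.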

Article
Non-Traditional Layout Design for Robotic Mobile Fulfillment System with Multiple Workstations
Algorithms 2021, 14(7), 203; https://0-doi-org.brum.beds.ac.uk/10.3390/a14070203 - 30 Jun 2021
Viewed by 729
Abstract
This paper studies the layout design of a robotic mobile fulfillment system with multiple workstations. This is a parts-to-picker storage system where robots hoist pods and bring them directly to the workstations for stationary pickers to retrieve required items. As few research efforts have focused on determining the optimal locations of workstations in such systems, we develop an integer programming model to determine the location of workstations to minimize the total traveling distance of robots. In addition, we investigate the near-optimal workstation location patterns (i.e., some general workstation configuration rules) in the context of both traditional and flying-V layouts. A series of experiments led to the following findings: (1) the flying-V layout can save 8∼26% of travel distance compared with the traditional layout, and the sacrifice of space use is only 2∼3% for medium or large warehouses; (2) instead of solving the optimization model, the proposed 2n rule and n+1 rule are simple and easily implemented ways to locate workstations, with travel distance gaps of less than 1.5% and 5% for traditional and flying-V layouts, respectively; and (3) the “optimal” cross-aisle angle (i.e., θ) in flying-V layout can be set as large as possible as long as the cross-aisle intersects the left or right edge of the warehouse. Full article

Article
A Simplification Method for Point Cloud of T-Profile Steel Plate for Shipbuilding
Algorithms 2021, 14(7), 202; https://0-doi-org.brum.beds.ac.uk/10.3390/a14070202 - 30 Jun 2021
Viewed by 451
Abstract
According to the requirements of point cloud simplification for T-profile steel plate welding in shipbuilding, the disadvantages of the existing simplification algorithms are analyzed. In this paper, a point cloud simplification method is proposed based on octree coding and the threshold of the surface curvature feature. In this method, the original point cloud data are divided into multiple sub-cubes with specified side lengths by octree coding, and the points that are closest to the gravity center of the sub-cube are kept. The k-neighborhood method and the curvature calculation are performed in order to obtain the curvature features of the point cloud. Additionally, the point cloud data are divided into several regions based on the given adjustable curvature threshold. Finally, combining the random sampling method with the simplification method based on the regional gravity center, the T-profile point cloud data can be simplified. In this study, after obtaining the point cloud data of a T-profile plate, the proposed simplification method is compared with some other simplification methods. It is found that the proposed simplification method for the point cloud of the T-profile steel plate for shipbuilding is faster than the three existing simplification methods, while retaining more feature points and having approximately the same reduction rates. Full article
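The first stage of the method — partition the cloud into cubes of a specified side length and keep, per cube, the point closest to the cube's gravity center — can be sketched with a flat voxel grid in place of a full octree (an assumption made here for brevity; the curvature-threshold stage is omitted).

```python
import numpy as np

def simplify(points, cell):
    """Bucket points into cubes of side `cell`; keep, per occupied
    cube, the point nearest the cube's gravity center."""
    pts = np.asarray(points, dtype=float)
    keys = np.floor(pts / cell).astype(int)
    buckets = {}
    for p, k in zip(pts, map(tuple, keys)):
        buckets.setdefault(k, []).append(p)
    kept = []
    for cell_pts in buckets.values():
        cell_pts = np.array(cell_pts)
        g = cell_pts.mean(axis=0)            # gravity center of the cube
        kept.append(cell_pts[np.linalg.norm(cell_pts - g, axis=1).argmin()])
    return np.array(kept)
```

Choosing the surviving point by distance to the gravity center (rather than keeping the center itself) preserves only points that actually exist on the scanned surface.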

Article
COVID-19 Prediction Applying Supervised Machine Learning Algorithms with Comparative Analysis Using WEKA
Algorithms 2021, 14(7), 201; https://0-doi-org.brum.beds.ac.uk/10.3390/a14070201 - 30 Jun 2021
Viewed by 820
Abstract
Early diagnosis is crucial to prevent the development of a disease that may endanger human lives. COVID-19, a contagious disease that has mutated into several variants, has become a global pandemic that demands diagnosis as early as possible. With the use of technology, the available information concerning COVID-19 increases each day, and useful information can be extracted from massive data through data mining. In this study, the authors utilized several supervised machine learning algorithms to build a model that analyzes and predicts the presence of COVID-19 using the COVID-19 Symptoms and Presence dataset from Kaggle. The J48 Decision Tree, Random Forest, Support Vector Machine, K-Nearest Neighbors, and Naïve Bayes algorithms were applied through the WEKA machine learning software. Each model's performance was evaluated using 10-fold cross validation and compared according to major accuracy measures: correctly or incorrectly classified instances, kappa, mean absolute error, and time taken to build the model. The results show that the Support Vector Machine with the Pearson VII universal kernel outperforms the other algorithms, attaining 98.81% accuracy and a mean absolute error of 0.012. Full article
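The study used WEKA; purely as an illustration of the evaluation protocol, the following sketch runs 10-fold cross validation by hand on synthetic data with a simple 1-nearest-neighbour classifier. The dataset, features, and classifier here are stand-ins, not those of the paper.

```python
import numpy as np

def k_fold_accuracy(X, y, k=10):
    """Mean accuracy of a 1-nearest-neighbour classifier under k-fold cross validation."""
    idx = np.arange(len(X))
    accs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        # label each held-out point with the label of its closest training point
        d = np.linalg.norm(X[fold][:, None, :] - X[train][None, :, :], axis=2)
        pred = y[train][d.argmin(axis=1)]
        accs.append(float(np.mean(pred == y[fold])))
    return float(np.mean(accs))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))        # synthetic stand-in for symptom features
y = (X[:, 0] > 0).astype(int)        # synthetic "presence" label
acc = k_fold_accuracy(X, y, k=10)
```

Each of the k folds is held out once, so every sample is tested exactly once, which is the property that makes the comparison across classifiers in the paper fair.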
Article
An Enhanced Discrete Symbiotic Organism Search Algorithm for Optimal Task Scheduling in the Cloud
Algorithms 2021, 14(7), 200; https://0-doi-org.brum.beds.ac.uk/10.3390/a14070200 - 30 Jun 2021
Viewed by 584
Abstract
Recently, cloud computing has begun to experience tremendous growth because government agencies and private organisations are migrating to the cloud environment. Hence, an efficient task scheduling strategy is paramount for effectively improving the prospects of cloud computing. Typically, a certain number of tasks are scheduled to use diverse resources (virtual machines) to minimise the makespan and achieve optimum system utilisation by reducing the response time within the cloud environment. The task scheduling problem is NP-complete; as such, obtaining a precise solution is difficult, particularly for large-scale tasks. Therefore, in this paper, we propose a metaheuristic enhanced discrete symbiotic organism search (eDSOS) algorithm for optimal task scheduling in the cloud computing setting. Our proposed algorithm is an extension of the standard symbiotic organism search (SOS), a nature-inspired algorithm that has been applied to various numerical optimisation problems. This algorithm imitates the symbiotic associations (the mutualism, commensalism, and parasitism stages) displayed by organisms in an ecosystem. Despite the improvements made with the discrete symbiotic organism search (DSOS) algorithm, it still becomes trapped in local optima when the makespan and response-time values are large. The eDSOS diversifies the local search space of the DSOS by substituting the best value with any candidate in the population during the mutualism phase, which makes it well suited to task scheduling problems in the cloud. Thus, the eDSOS strategy converges faster when the search space is large, owing to this diversification. The CloudSim simulator was used to conduct the experiments, and the simulation results show that the proposed eDSOS produced solutions of better quality than those of the DSOS. Lastly, we analysed the proposed strategy using a two-sample t-test, which revealed that eDSOS performed significantly better than the benchmark strategy (DSOS), particularly for large search spaces. The percentage improvements were 26.23% for the makespan and 63.34% for the response time. Full article
(This article belongs to the Special Issue Distributed Algorithms and Applications)
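The mutualism-phase diversification described in the abstract can be sketched roughly as follows. This is a continuous-valued toy, not the authors' discrete scheduler: the update equations follow the standard SOS mutualism form, and substituting an arbitrary population member for the best organism is the eDSOS-style change.

```python
import numpy as np

def mutualism_step(pop, fitness, rng, diversify=True):
    """One SOS mutualism interaction (minimisation assumed).

    Standard SOS pulls the pair i, j toward the best organism; the eDSOS-style
    variant sketched here substitutes an arbitrary population member for the
    best one, diversifying the local search."""
    n, d = pop.shape
    i, j = rng.choice(n, size=2, replace=False)
    ref = pop[rng.integers(n)] if diversify else pop[np.argmin(fitness)]
    mutual = (pop[i] + pop[j]) / 2.0
    bf1, bf2 = rng.integers(1, 3, size=2)   # benefit factors in {1, 2}
    new_i = pop[i] + rng.random(d) * (ref - mutual * bf1)
    new_j = pop[j] + rng.random(d) * (ref - mutual * bf2)
    return new_i, new_j

rng = np.random.default_rng(1)
pop = rng.random((10, 4))     # 10 candidate schedules, 4 decision variables
fitness = pop.sum(axis=1)     # toy makespan-like objective
a, b = mutualism_step(pop, fitness, rng)
```

When the reference is the population best, the pair is repeatedly attracted to the same region; a random reference keeps the search spread out, which is the behaviour the paper credits for faster convergence in large search spaces.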
Article
CARA: A Congestion-Aware Routing Algorithm for Wireless Sensor Networks
Algorithms 2021, 14(7), 199; https://0-doi-org.brum.beds.ac.uk/10.3390/a14070199 - 30 Jun 2021
Viewed by 546
Abstract
Congestion control is one of the key research topics in relation to the routing algorithms of wireless sensor networks (WSNs). In this paper, we propose a congestion-aware routing algorithm (CARA) for unlimited-lifetime wireless sensor networks that integrates the geographic distance and traffic load of sensor nodes. The algorithm takes congestion alleviation as its primary objective and considers both the traffic of the node itself and the local network traffic. Based on the geographic distance between nodes, CARA defines four decision parameters (node load factor, forward rate, cache remaining rate, and forward average cache remaining rate) and selects the best node as the next hop through a multi-attribute decision-making method. Compared with two existing algorithms for congestion control, our simulation results suggest that CARA alleviates network congestion while meeting reasonable network delay and energy consumption requirements. Full article
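The multi-attribute next-hop selection can be illustrated with a hypothetical neighbour table. The attribute weights and the treatment of load as a cost attribute below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical neighbour table: one row per candidate next-hop, columns are the
# four decision parameters named in the abstract: node load factor, forward
# rate, cache remaining rate, forward average cache remaining rate.
candidates = np.array([
    [0.7, 0.9, 0.4, 0.5],
    [0.3, 0.8, 0.8, 0.7],
    [0.5, 0.6, 0.6, 0.6],
])
weights = np.array([0.4, 0.2, 0.2, 0.2])  # illustrative attribute weights

scores = candidates.copy()
scores[:, 0] = 1.0 - scores[:, 0]         # load is a cost attribute: lower is better
overall = scores @ weights                # simple weighted-sum MADM scoring
best_hop = int(np.argmax(overall))        # index of the selected next-hop
```

Here the second candidate wins: its low load and high cache-remaining rates outweigh its slightly lower forward rate.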
Review
Decimal Multiplication in FPGA with a Novel Decimal Adder/Subtractor
Algorithms 2021, 14(7), 198; https://0-doi-org.brum.beds.ac.uk/10.3390/a14070198 - 29 Jun 2021
Viewed by 592
Abstract
Financial and commercial data are mostly represented in decimal format. To avoid the errors introduced when converting some decimal fractions to binary, these data are processed with decimal arithmetic. Most processors only have hardwired binary arithmetic units, so decimal operations are executed with slow software-based decimal arithmetic functions. For the fast execution of decimal operations, dedicated hardware units have been proposed and designed in FPGA. Decimal multiplication is found in most decimal-based applications, so its optimized design is very important for fast execution. In this paper, two new parallel decimal multipliers in FPGA are proposed. These are based on a new decimal adder/subtractor, also proposed in this paper. The new decimal multipliers improve on state-of-the-art parallel decimal multipliers. Compared to previous architectures, implementation results show that the proposed multipliers achieve 26% better area and 12% better performance. The new decimal multipliers also reduce the area and performance gap to binary multipliers and are smaller for 32-digit operands. Full article
(This article belongs to the Special Issue Algorithms in Reconfigurable Computing)
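At the behavioural level, the digit-wise carry propagation that a decimal adder performs can be sketched as follows. This is a software model only: real hardware designs, such as the one proposed, operate on binary-coded digits with correction logic, and equal-length operands are assumed here.

```python
def decimal_add(a_digits, b_digits):
    """Digit-serial decimal addition, least-significant digit first, mimicking
    the per-digit carry propagation of a hardware decimal adder."""
    out, carry = [], 0
    for a, b in zip(a_digits, b_digits):
        s = a + b + carry
        out.append(s % 10)   # decimal digit of the sum
        carry = s // 10      # decimal carry into the next digit position
    out.append(carry)
    return out

# 95 + 87 = 182, digits stored least-significant first
digits = decimal_add([5, 9], [7, 8])
```

The hardware cost the paper attacks comes precisely from this carry chain and from generating the per-digit sums without a slow base-10 division.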
Article
An Optimal and Stable Algorithm for Clustering Numerical Data
Algorithms 2021, 14(7), 197; https://0-doi-org.brum.beds.ac.uk/10.3390/a14070197 - 29 Jun 2021
Viewed by 536
Abstract
In the conventional k-means framework, seeding is the first step toward optimization before the objects are clustered. With random seeding, two main issues arise: the clustering results may be less than optimal, and a different clustering result may be obtained on every run. In real-world applications, optimal and stable clustering is highly desirable. This report introduces a new clustering algorithm called the zero k-approximate modal haplotype (Zk-AMH) algorithm, which uses a simple and novel seeding mechanism known as the zero-point multidimensional space. The Zk-AMH provides cluster optimality and stability, thereby resolving the aforementioned issues. Notably, across 100 runs the Zk-AMH algorithm yielded identical mean, maximum, and minimum scores, with zero standard deviation, demonstrating its stability. Additionally, when the Zk-AMH algorithm was applied to eight datasets, it achieved the highest mean scores for four datasets, an approximately equal score for one dataset, and marginally lower scores for the other three. With its optimality and stability, the Zk-AMH algorithm could be a suitable alternative for developing future clustering tools. Full article
Article
Self-Adaptive Path Tracking Control for Mobile Robots under Slippage Conditions Based on an RBF Neural Network
Algorithms 2021, 14(7), 196; https://0-doi-org.brum.beds.ac.uk/10.3390/a14070196 - 28 Jun 2021
Viewed by 504
Abstract
Wheeled mobile robots are widely deployed in field environments where slipping and skidding may often occur. This paper presents a self-adaptive path tracking control framework based on a radial basis function (RBF) neural network to overcome slippage disturbances. Both kinematic and dynamic models of a wheeled robot with skid-steer characteristics are established, with position, orientation, and equivalent tracking error definitions. A dual-loop control framework is proposed, with the kinematic and dynamic models integrated in the inner and outer loops, respectively. An RBF neural network is employed for yaw rate control to realize adaptability to longitudinal slippage. Simulations employing the proposed control framework are performed to track snaking and DLC reference paths with slip ratio variations. The results suggest that the proposed control framework yields much lower position and orientation errors than a PID controller and a single neuron network (SNN) controller. It also exhibits superior anti-disturbance performance and adaptability to longitudinal slippage. The proposed control framework could thus be employed for autonomous mobile robots working on complex terrain. Full article
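For reference, the forward pass of a generic RBF network, of the kind used here for yaw-rate control, is a weighted sum of Gaussian activations over input-to-center distances. The centers, widths, and weights below are illustrative, not the paper's trained values.

```python
import numpy as np

def rbf_forward(x, centers, widths, weights):
    """RBF network output: weighted sum of Gaussian activations over
    the distances from the input to each hidden-node center."""
    phi = np.exp(-np.linalg.norm(x - centers, axis=1) ** 2 / (2.0 * widths ** 2))
    return float(phi @ weights)

centers = np.array([[0.0, 0.0], [1.0, 1.0]])   # illustrative hidden-node centers
widths = np.array([0.5, 0.5])
weights = np.array([1.0, -1.0])
y = rbf_forward(np.array([0.0, 0.0]), centers, widths, weights)
```

In an adaptive controller of this kind, the output weights are typically updated online from the tracking error, which is what lets the controller absorb unmodeled slip dynamics.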
Article
Knowledge-Driven Network for Object Detection
Algorithms 2021, 14(7), 195; https://0-doi-org.brum.beds.ac.uk/10.3390/a14070195 - 28 Jun 2021
Viewed by 610
Abstract
Object detection is a challenging computer vision task with numerous real-world applications. In recent years, the concept of the object relationship model has proven helpful for object detection and has been verified and realized in deep learning. Nonetheless, most approaches to modeling object relations are limited to anchor-based algorithms; they cannot be directly migrated to anchor-free frameworks. The reason is that anchor-free algorithms eliminate the complex design of anchors and predict heatmaps representing the locations of keypoints of different object categories, without considering the relationships between keypoints. Therefore, to better fuse the information between the heatmap channels, it is important to model the visual relationships between keypoints. In this paper, we present a knowledge-driven network (KDNet), a new architecture that can aggregate and model keypoint relations to augment object features for detection. Specifically, it processes a set of keypoints simultaneously through interactions between their local and geometric features, thereby allowing their relationships to be modeled. Finally, the updated heatmaps are used to obtain the corners of the objects and determine their positions. Experiments conducted on the RIDER dataset confirm the effectiveness of the proposed KDNet, which significantly outperformed other state-of-the-art object detection methods. Full article
(This article belongs to the Special Issue Algorithms for Machine Learning and Pattern Recognition Tasks)
Article
qRobot: A Quantum Computing Approach in Mobile Robot Order Picking and Batching Problem Solver Optimization
Algorithms 2021, 14(7), 194; https://0-doi-org.brum.beds.ac.uk/10.3390/a14070194 - 26 Jun 2021
Viewed by 849
Abstract
This article aims to bring quantum computing to robotics. A quantum algorithm is developed to minimize the distance traveled in warehouses and distribution centers where order picking is applied. For this, a proof of concept is proposed through a Raspberry Pi 4, generating a quantum combinatorial optimization algorithm that reduces the distance travelled and determines the batches of orders to be made. In case of computational need, the robot is able to parallelize part of the operations in hybrid computing (quantum + classical), accessing CPUs and QPUs distributed in a public or private cloud. We developed a stable environment (ARM64) inside the robot (Raspberry) to run gradient operations and other quantum algorithms on IBMQ, Amazon Braket (D-Wave), and Pennylane locally or remotely. The proof of concept, when run in the above-stated quantum environments, showed the execution time of our algorithm with different publicly accessible simulators on the market and the computational results of our picking and batching algorithm, and allowed us to analyze the real-time quantum execution. Our findings are that Amazon Braket's D-Wave performs better than gate-based quantum computing beyond 20 qubits, and that AWS Braket has better time performance than Qiskit or Pennylane. Full article
(This article belongs to the Special Issue Quantum Optimization and Machine Learning)
Article
Optimal Coronavirus Optimization Algorithm Based PID Controller for High Performance Brushless DC Motor
Algorithms 2021, 14(7), 193; https://0-doi-org.brum.beds.ac.uk/10.3390/a14070193 - 25 Jun 2021
Viewed by 551
Abstract
This paper presents an efficient coronavirus optimization algorithm (CVOA) to find the optimal values of a PID controller to track a preselected reference speed of a brushless DC (BLDC) motor under several types of disturbances. The algorithm simulates how the coronavirus (COVID-19) spreads and infects healthy people. The initial values of the PID controller parameters serve as patient zero, who infects new patients (other values of the PID controller parameters). The model aims to simulate the coronavirus activity as accurately as possible. The CVOA has two major advantages compared to similar strategies. First, the CVOA parameters are already adjusted according to disease statistics, sparing designers from initializing them with arbitrary values. Second, the approach is able to finish after several iterations in which the infected population initially grows at an exponential rate. The proposed CVOA was compared with well-known optimization techniques, namely the genetic algorithm (GA) and harmony search (HS) optimization. A multi-objective function was used to allow the designer to select the desired rise time, settling time, overshoot, and steady-state error. Several tests were performed to investigate the obtained values of the PID controller parameters. In the first test, the BLDC motor was exposed to a sudden load at a steady speed. In the second test, a continuous sinusoidal load was applied to the rotor of the BLDC motor. In the third test, different operating points of the reference speed were selected for the rotor of the BLDC motor. The results proved that the CVOA-based PID controller has the best performance among the compared techniques. In the first test, the CVOA-based PID controller achieved the minimum rise time (0.0042 s), the minimum settling time (0.0079 s), and an acceptable overshoot (0.0511%). In the second test, it showed the minimum deviation around the reference speed (±4 RPM). In the third test, it tracked the reference speed more accurately than the other techniques. Full article
(This article belongs to the Special Issue Algorithms for PID Controller 2021)
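The controller being tuned is a standard PID. A minimal discrete PID loop on a first-order plant, a stand-in for the BLDC speed dynamics with gains that are illustrative rather than CVOA-optimized values, looks like:

```python
class PID:
    """Discrete PID: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=2.0, ki=0.5, kd=0.05, dt=0.001)
speed, target = 0.0, 100.0
for _ in range(20000):                      # 20 s of simulated time
    u = pid.update(target, speed)
    speed += (u - 0.1 * speed) * 0.001      # toy first-order speed dynamics
```

A metaheuristic tuner such as the CVOA would evaluate each candidate (Kp, Ki, Kd) triple by running a loop like this and scoring the resulting rise time, settling time, overshoot, and steady-state error.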
Article
Convolutional Neural Network with an Elastic Matching Mechanism for Time Series Classification
Algorithms 2021, 14(7), 192; https://0-doi-org.brum.beds.ac.uk/10.3390/a14070192 - 25 Jun 2021
Viewed by 488
Abstract
Recently, some researchers have adopted the convolutional neural network (CNN) for time series classification (TSC) and achieved better performance than most hand-crafted methods on the University of California, Riverside (UCR) archive. The secret to the success of the CNN is weight sharing, which is robust to the global translation of the time series. However, global translation invariance is not the only case to consider for TSC. Temporal distortion is another common phenomenon in time series besides global translation. The scale and phase changes due to temporal distortion bring significant challenges to TSC and are out of the scope of conventional CNNs. In this paper, a CNN architecture with an elastic matching mechanism, named the Elastic Matching CNN (EM-CNN), is proposed to address this challenge. Compared with the conventional CNN, the EM-CNN allows local time shifting between the time series and the convolutional kernels, and a matching matrix is exploited to learn the nonlinear alignment between them. Several EM-CNN models based on diverse CNN architectures are proposed in this paper. The results on 85 UCR datasets demonstrate that the elastic matching mechanism effectively improves CNN performance. Full article
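The idea of letting kernel taps align with locally shifted samples can be caricatured as follows. This is a crude max-over-shifts stand-in: the actual EM-CNN learns a matching matrix for the alignment rather than taking a hard maximum.

```python
import numpy as np

def elastic_conv1d(x, w, max_shift=1):
    """1-D correlation in which each kernel tap may match a sample shifted by
    up to max_shift positions, taking the best local alignment per tap."""
    k, n = len(w), len(x)
    pad = np.pad(x, max_shift)
    out = np.empty(n - k + 1)
    for t in range(n - k + 1):
        s = 0.0
        for j in range(k):
            window = pad[t + j : t + j + 2 * max_shift + 1]
            s += np.max(w[j] * window)   # best local alignment for this tap
        out[t] = s
    return out

x = np.sin(np.linspace(0.0, 6.28, 50))
w = np.array([0.5, 1.0, 0.5])            # an illustrative positive kernel
y = elastic_conv1d(x, w)
```

Because each tap may pick its best-aligned sample, the response is never smaller than the rigid correlation, which is exactly the tolerance to small temporal distortions the paper is after.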
Article
Optimization of the Weighted Multi-Facility Location Problem Using MS Excel
Algorithms 2021, 14(7), 191; https://0-doi-org.brum.beds.ac.uk/10.3390/a14070191 - 25 Jun 2021
Viewed by 526
Abstract
This article presents the possibilities of solving the Weighted Multi-Facility Location Problem and its related optimization tasks using widely available office software: MS Excel with the Solver add-in. To verify the proposed technique, a set of benchmark instances with various point topologies (regular, a combination of regular and random, and random) was designed. The optimization results are compared with results achieved by a metaheuristic algorithm based on simulated annealing principles. The influence of the hardware configuration on the performance achieved by MS Excel Solver is also examined and discussed from both the execution time and accuracy perspectives. The experiments showed that this widely available office software is practical for solving even relatively complex optimization tasks (a Weighted Multi-Facility Location Problem with 100 points and 20 centers, which consists of 40 continuous optimization variables in two-dimensional space) with sufficient quality for many real-world applications. The method is described in detail, step by step, using an example. Full article
(This article belongs to the Special Issue Evolutionary Algorithms and Applications)
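Independently of the spreadsheet tooling, the objective being minimized is the weight-scaled sum of each point's distance to its nearest center. Below is a sketch of that objective plus one crude improvement step; the weighted-mean update is a heuristic of my own for illustration, since the exact per-cluster minimizer of summed distances would be the weighted geometric median.

```python
import numpy as np

def weighted_location_cost(points, weights, centers):
    """WMFLP objective: each point is served by its nearest center, and the
    cost is the weight-scaled sum of those Euclidean distances."""
    d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
    return float(np.sum(weights * d.min(axis=1)))

rng = np.random.default_rng(0)
pts = rng.random((100, 2))                # 100 demand points in the unit square
w = rng.random(100)                       # point weights
centers = rng.random((20, 2))             # 20 initial facility locations

# One crude improvement step: move each center to the weighted mean of the
# points currently assigned to it.
assign = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2).argmin(axis=1)
improved = centers.copy()
for k in range(20):
    mask = assign == k
    if mask.any():
        improved[k] = np.average(pts[mask], axis=0, weights=w[mask])

before = weighted_location_cost(pts, w, centers)
after = weighted_location_cost(pts, w, improved)
```

In the paper's setup, the 40 center coordinates are the decision variables Solver adjusts while this cost is the target cell.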
Article
Fact-Checking Reasoning System for Fake Review Detection Using Answer Set Programming
Algorithms 2021, 14(7), 190; https://0-doi-org.brum.beds.ac.uk/10.3390/a14070190 - 24 Jun 2021
Viewed by 558
Abstract
A rising number of people use online reviews to decide whether to use or buy a service or product. Therefore, approaches for identifying fake reviews are in high demand. This paper proposes a hybrid rule-based fact-checking framework based on Answer Set Programming (ASP) and natural language processing. The framework incorporates the behavioral patterns of reviewers combined with the qualitative and quantitative properties/features extracted from the content of their reviews. As a case study, we evaluated the framework on a movie review dataset consisting of user accounts and their associated reviews, including the review title, content, and star rating of the movie, to identify reviews that are not trustworthy and label them accordingly in the output. This output is then used in the front end of a movie review platform to tag reviews as fake and show their sentiment. The evaluation of the proposed approach showed promising results and high flexibility. Full article
(This article belongs to the Special Issue Logic-Based Artificial Intelligence)
Article
Optimal Transport in Multilayer Networks for Traffic Flow Optimization
Algorithms 2021, 14(7), 189; https://0-doi-org.brum.beds.ac.uk/10.3390/a14070189 - 23 Jun 2021
Viewed by 667
Abstract
Modeling traffic distribution and extracting optimal flows in multilayer networks is of the utmost importance for designing efficient, multi-modal network infrastructures. Recent results based on optimal transport theory provide powerful and computationally efficient methods to address this problem, but they are mainly focused on modeling single-layer networks. Here, we adapt these results to study how optimal flows distribute on multilayer networks. We propose a model in which optimal flows on different layers contribute differently to the total cost to be minimized. This is done by means of a parameter that varies with the layer, which allows the sensitivity to traffic congestion of the various layers to be tuned flexibly. As an application, we consider transportation networks, where each layer is associated with a different transportation system, and show how the traffic distribution varies as we tune this parameter across layers. We illustrate this result on the real 2-layer network of the city of Bordeaux, with a bus and a tram, where we find that in certain regimes the presence of the tram network significantly unburdens the traffic on the road network. Our model paves the way for further analysis of optimal flows and navigability strategies in real multilayer networks. Full article
(This article belongs to the Special Issue Network Science: Algorithms and Applications)
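The layer-dependent cost can be sketched abstractly: each layer contributes a length-weighted power of its edge flows, with a per-layer exponent tuning how strongly large (congested) flows are penalized. The symbols and the exact functional form here are illustrative, following the general shape of optimal-transport-based routing costs, not the paper's precise model.

```python
import numpy as np

def multilayer_cost(flows, lengths, gamma):
    """Total cost: sum over layers a of sum_e lengths[a][e] * |flows[a][e]|**gamma[a].

    A larger exponent on a layer penalizes concentrated flow on that layer
    more heavily, steering traffic toward the other layers."""
    return sum(float(np.sum(l * np.abs(f) ** g))
               for f, l, g in zip(flows, lengths, gamma))

# two layers (e.g. road and tram), one edge each, unit lengths
flows = [np.array([2.0]), np.array([3.0])]
lengths = [np.array([1.0]), np.array([1.0])]
cost = multilayer_cost(flows, lengths, gamma=[1.0, 2.0])   # 2**1 + 3**2 = 11
```

Minimizing such a cost over flows satisfying the demand constraints is what the optimal transport machinery in the paper does; varying `gamma` per layer reproduces the qualitative effect described for the Bordeaux bus-tram network.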