Algorithms, Volume 13, Issue 8 (August 2020) – 27 articles

Cover Story: The novelty in application of the Reed–Solomon algorithm for remote sensing lies in the different interpretation of the algorithm itself at the preparatory stage―data fusion. The rationale is to include all possible information from all acquired spectral bands, assuming that complete composite information in the form of one compound image will improve both the quality of visualization and some aspects of further quantitative and qualitative analyses. The concept arose from an empirical, heuristic combination of geographic information systems (GIS), map algebra, and two-dimensional cellular automata. The challenges are related to handling big quantitative datasets and the awareness that these numbers are, in fact, descriptors of a real-world multidimensional view.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
27 pages, 5839 KiB  
Article
Methodology for Analyzing the Traditional Algorithms Performance of User Reviews Using Machine Learning Techniques
by Abdul Karim, Azhari Azhari, Samir Brahim Belhaouri, Ali Adil Qureshi and Maqsood Ahmad
Algorithms 2020, 13(8), 202; https://doi.org/10.3390/a13080202 - 18 Aug 2020
Cited by 3 | Viewed by 4619
Abstract
Android-based applications are widely used by almost everyone around the globe. Due to the availability of the Internet almost everywhere at no charge, almost half of the globe is engaged with social networking, social media surfing, messaging, browsing and plugins. In the Google Play Store, which is one of the most popular Internet application stores, users are encouraged to download thousands of applications and various types of software. In this research study, we scraped thousands of user reviews and the ratings of different applications. We scraped reviews of 148 applications from 14 different categories. A total of 506,259 reviews were accumulated and assessed. Based on the semantics of the reviews, each review was classified as negative, positive or neutral. In this research, different machine-learning algorithms, such as logistic regression, random forest and naïve Bayes, were tuned and tested. We also evaluated the outcome of term frequency (TF) and inverse document frequency (IDF), measured different parameters such as accuracy, precision, recall and F1 score (F1), and presented the results in the form of bar graphs. In conclusion, we compared the outcome of each algorithm and found that logistic regression is one of the best algorithms for review analysis of the Google Play Store from an accuracy perspective. Furthermore, we demonstrated that logistic regression is also better in terms of speed, recall and F1 score. These conclusions were reached after preprocessing a number of data values from these data sets. Full article
(This article belongs to the Special Issue Advanced Data Mining: Algorithms and Applications)
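As a concrete illustration of the kind of pipeline this abstract describes, the following minimal sketch (Python, scikit-learn) classifies review texts with TF-IDF features and logistic regression; the reviews and labels are hypothetical stand-ins, not the authors' scraped Google Play data.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical stand-ins for the scraped reviews and their sentiment labels.
reviews = ["great app, works perfectly", "love the new design",
           "crashes all the time", "ads everywhere, uninstalled",
           "it is ok i guess", "average, nothing special"]
labels = ["positive", "positive", "negative", "negative", "neutral", "neutral"]

X_tr, X_te, y_tr, y_te = train_test_split(reviews, labels,
                                          test_size=0.33, random_state=42)

vec = TfidfVectorizer(ngram_range=(1, 2))      # TF weighting with IDF rescaling
clf = LogisticRegression(max_iter=1000).fit(vec.fit_transform(X_tr), y_tr)

# Accuracy, precision, recall and F1, the metrics compared in the paper.
print(classification_report(y_te, clf.predict(vec.transform(X_te)),
                            zero_division=0))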
27 pages, 1770 KiB  
Article
A NARX Model Reference Adaptive Control Scheme: Improved Disturbance Rejection Fractional-Order PID Control of an Experimental Magnetic Levitation System
by Hossein Alimohammadi, Baris Baykant Alagoz, Aleksei Tepljakov, Kristina Vassiljeva and Eduard Petlenkov
Algorithms 2020, 13(8), 201; https://doi.org/10.3390/a13080201 - 18 Aug 2020
Cited by 17 | Viewed by 3897
Abstract
Real control systems require robust control performance to deal with unpredictable and altering operating conditions of real-world systems. Improvement of disturbance rejection control performance should be considered as one of the essential control objectives in practical control system design tasks. This study presents a multi-loop Model Reference Adaptive Control (MRAC) scheme that leverages a nonlinear autoregressive neural network with external inputs (NARX) model as the reference model. The authors observed that the performance of multi-loop MRAC-fractional-order proportional integral derivative (FOPID) control with the MIT rule largely depends on the capability of the reference model to represent the leading closed-loop dynamics of the experimental magnetic levitation (ML) system. As such, the NARX model is used to represent the disturbance-free dynamical behavior of the PID control loop. It is remarkable that the obtained reference model is independent of the tuning of other control loops in the control system. The multi-loop MRAC-FOPID control structure detects the impact of disturbance incidents on the control performance of the closed-loop FOPID control system and adapts its response to reduce the negative effects of the additive input disturbance. This multi-loop control structure deploys two specialized control loops: an inner loop, which is the closed-loop FOPID control system for stability and set-point control, and an outer loop, which involves a NARX reference model and the MIT rule to increase the adaptation ability of the system. Thus, the two-loop MRAC structure allows improvement of disturbance rejection performance without deteriorating the precise set-point control and stability characteristics of the FOPID control loop. This is an important benefit of this control structure. To demonstrate the disturbance rejection performance improvements of the proposed multi-loop MRAC-FOPID control with the NARX model, an experimental study is conducted for disturbance rejection control of a magnetic levitation test setup in the laboratory. Simulation and experimental results indicate an improvement in disturbance rejection performance. Full article
(This article belongs to the Special Issue Algorithms for PID Controller 2019)
15 pages, 315 KiB  
Article
Adaptive Metrics for Adaptive Samples
by Nicholas J. Cavanna and Donald R. Sheehy
Algorithms 2020, 13(8), 200; https://doi.org/10.3390/a13080200 - 18 Aug 2020
Viewed by 2430
Abstract
We generalize the local-feature size definition of adaptive sampling used in surface reconstruction to relate it to an alternative metric on Euclidean space. In the new metric, adaptive samples become uniform samples, making it simpler both to give adaptive sampling versions of homological inference results and to prove topological guarantees using the critical points theory of distance functions. This ultimately leads to an algorithm for homology inference from samples whose spacing depends on their distance to a discrete representation of the complement space. Full article
(This article belongs to the Special Issue Topological Data Analysis)
28 pages, 3005 KiB  
Article
Scalable Block Preconditioners for Linearized Navier-Stokes Equations at High Reynolds Number
by Filippo Zanetti and Luca Bergamaschi
Algorithms 2020, 13(8), 199; https://doi.org/10.3390/a13080199 - 16 Aug 2020
Cited by 4 | Viewed by 3306
Abstract
We review a number of preconditioners for the advection-diffusion operator and for the Schur complement matrix, which, in turn, constitute the building blocks for Constraint and Triangular Preconditioners to accelerate the iterative solution of the discretized and linearized Navier-Stokes equations. Intensive numerical testing is performed on the driven cavity problem with low values of the viscosity coefficient. We devise an efficient multigrid preconditioner for the advection-diffusion matrix, which, combined with the commuted BFBt Schur complement approximation and inserted in a 2×2 block preconditioner, provides convergence of the Generalized Minimal Residual (GMRES) method in a number of iterations independent of the mesh size for the lowest values of the viscosity parameter. The low-rank acceleration of such a preconditioner is also investigated, showing its great potential. Full article
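The following is a minimal sketch of the kind of 2×2 block triangular preconditioning described above, applied to a synthetic saddle-point system with SciPy's GMRES; direct solves stand in for the paper's multigrid and BFBt components, and the matrices are random placeholders.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres, splu, LinearOperator

n, m = 100, 40
A = (2.0 * sp.eye(n) + 0.1 * sp.random(n, n, density=0.02, random_state=0)).tocsc()
B = sp.random(m, n, density=0.1, random_state=1).tocsr()
K = sp.bmat([[A, B.T], [B, None]]).tocsc()       # linearized saddle-point matrix
rhs = np.ones(n + m)

A_lu = splu(A)                                    # stand-in for a multigrid solve
S = B @ A_lu.solve(B.T.toarray()) + 1e-10 * np.eye(m)  # Schur complement B A^-1 B^T
S_inv = np.linalg.inv(S)                          # stand-in for a BFBt-type approximation

def apply_prec(v):
    # Solve the upper block-triangular system [[A, B^T], [0, -S]] z = v.
    z2 = -S_inv @ v[n:]
    z1 = A_lu.solve(v[:n] - B.T @ z2)
    return np.concatenate([z1, z2])

M = LinearOperator((n + m, n + m), matvec=apply_prec)
x, info = gmres(K, rhs, M=M, restart=50, maxiter=200)
print("GMRES info:", info)                        # 0 means converged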
27 pages, 14281 KiB  
Article
Pavement Defect Segmentation in Orthoframes with a Pipeline of Three Convolutional Neural Networks
by Roland Lõuk, Andri Riid, René Pihlak and Aleksei Tepljakov
Algorithms 2020, 13(8), 198; https://doi.org/10.3390/a13080198 - 14 Aug 2020
Cited by 7 | Viewed by 3451
Abstract
In the manuscript, the issue of detecting and segmenting out pavement defects on highway roads is addressed. Specifically, computer vision (CV) methods based on deep learning of convolutional neural networks (ConvNets) are developed and applied to the problem. A novel neural network structure is considered, based on a pipeline of three ConvNets and endowed with the capacity for context awareness, which improves grid-based search for defects on orthoframes by considering the surrounding image content—an approach which essentially draws inspiration from how humans tend to solve the task of image segmentation. Methods for assessing the quality of segmentation are also discussed. The contribution also describes the complete procedure of working with pavement defects in an industrial setting, involving the work cycle of defect annotation, ConvNet training and validation. The results of the ConvNet evaluation provided in the paper hint at a successful implementation of the proposed technique. Full article
(This article belongs to the Special Issue Bio-Inspired Algorithms for Image Processing)
16 pages, 297 KiB  
Article
A Brief Survey of Fixed-Parameter Parallelism
by Faisal N. Abu-Khzam and Karam Al Kontar
Algorithms 2020, 13(8), 197; https://doi.org/10.3390/a13080197 - 14 Aug 2020
Cited by 3 | Viewed by 2373
Abstract
This paper provides an overview of the field of parameterized parallel complexity by surveying previous work in addition to presenting a few new observations and exploring potential new directions. In particular, we present a general view of how known FPT techniques, such as bounded search trees, color coding, kernelization, and iterative compression, can be modified to produce fixed-parameter parallel algorithms. Full article
(This article belongs to the Special Issue 2020 Selected Papers from Algorithms Editorial Board Members)
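As an illustration of one surveyed technique, here is a minimal sketch of a bounded search tree for k-Vertex Cover; each of the two branches is an independent subproblem, which is what makes this style of FPT algorithm a natural candidate for parallel execution (the parallel dispatch itself is omitted).

def vertex_cover(edges, k):
    """Return a vertex cover of size <= k, or None if none exists."""
    if not edges:
        return set()
    if k == 0:
        return None
    u, v = edges[0]                    # branch on an uncovered edge
    for w in (u, v):                   # two independent subtrees of depth <= k
        rest = [e for e in edges if w not in e]
        sub = vertex_cover(rest, k - 1)
        if sub is not None:
            return sub | {w}
    return None

print(vertex_cover([(1, 2), (2, 3), (3, 4)], 2))   # e.g. {1, 3}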
21 pages, 1475 KiB  
Article
Adaptive Reconstruction of Imperfectly Observed Monotone Functions, with Applications to Uncertainty Quantification
by Luc Bonnet, Jean-Luc Akian, Éric Savin and T. J. Sullivan
Algorithms 2020, 13(8), 196; https://doi.org/10.3390/a13080196 - 13 Aug 2020
Viewed by 3180
Abstract
Motivated by the desire to numerically calculate rigorous upper and lower bounds on deviation probabilities over large classes of probability distributions, we present an adaptive algorithm for the reconstruction of increasing real-valued functions. While this problem is similar to the classical statistical problem of isotonic regression, the optimisation setting alters several characteristics of the problem and opens natural algorithmic possibilities. We present our algorithm, establish sufficient conditions for convergence of the reconstruction to the ground truth, and apply the method to synthetic test cases and a real-world example of uncertainty quantification for aerodynamic design. Full article
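For contrast with the adaptive method, the following minimal sketch fits the classical isotonic-regression baseline mentioned in the abstract to noisy samples of an increasing function; the data are synthetic, and the paper's adaptive, optimisation-driven reconstruction is not reproduced.

import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 50)
y = np.sqrt(x) + rng.normal(scale=0.05, size=x.size)   # noisy increasing function

fit = IsotonicRegression(increasing=True).fit(x, y)
print(fit.predict([0.25, 0.5, 0.75]))                  # monotone reconstruction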
15 pages, 1632 KiB  
Article
Deep Learning-Enabled Semantic Inference of Individual Building Damage Magnitude from Satellite Images
by Bradley J. Wheeler and Hassan A. Karimi
Algorithms 2020, 13(8), 195; https://doi.org/10.3390/a13080195 - 13 Aug 2020
Cited by 23 | Viewed by 3598
Abstract
Natural disasters are phenomena that can occur in any part of the world. They can cause massive amounts of destruction and leave entire cities in great need of assistance. The ability to quickly and accurately deliver aid to impacted areas is crucial toward not only saving time and money but, most importantly, lives. We present a deep learning-based computer vision model to semantically infer the magnitude of damage to individual buildings after natural disasters using pre- and post-disaster satellite images. This model helps alleviate a major bottleneck in disaster management decision support by automating the analysis of the magnitude of damage to buildings post-disaster. In this paper, we show how our methods achieved better performance than existing models, especially for moderate to significant magnitudes of damage, along with ablation studies showing the importance and impact of different training parameters in deep learning for satellite imagery. We obtained an overall F1 score of 0.868 with our methods. Full article
10 pages, 401 KiB  
Article
Graph Planarity by Replacing Cliques with Paths
by Patrizio Angelini, Peter Eades, Seok-Hee Hong, Karsten Klein, Stephen Kobourov, Giuseppe Liotta, Alfredo Navarra and Alessandra Tappini
Algorithms 2020, 13(8), 194; https://doi.org/10.3390/a13080194 - 13 Aug 2020
Cited by 6 | Viewed by 3431
Abstract
This paper introduces and studies the following beyond-planarity problem, which we call h-Clique2Path Planarity. Let G be a simple topological graph whose vertices are partitioned into subsets of size at most h, each inducing a clique. h-Clique2Path Planarity asks whether it is possible to obtain a planar subgraph of G by removing edges from each clique so that the subgraph induced by each subset is a path. We investigate the complexity of this problem in relation to k-planarity. In particular, we prove that h-Clique2Path Planarity is NP-complete even when h=4 and G is a simple 3-plane graph, while it can be solved in linear time when G is a simple 1-plane graph, for any value of h. Our results contribute to the growing fields of hybrid planarity and of graph drawing beyond planarity. Full article
(This article belongs to the Special Issue Graph Algorithms and Applications)
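The following minimal brute-force sketch illustrates the decision question on a toy instance: try every way of turning each clique into a spanning path and test planarity of the result (exponential in h and in the number of cliques, for illustration only; abstract graph planarity via networkx stands in for the paper's topological, k-plane setting).

from itertools import permutations, product
import networkx as nx

def clique2path_planar(G, cliques):
    inter = [e for e in G.edges
             if not any(set(e) <= set(c) for c in cliques)]   # non-clique edges
    # Every vertex ordering of a clique is a candidate spanning path.
    for choice in product(*(permutations(c) for c in cliques)):
        H = nx.Graph(inter)
        for order in choice:
            H.add_edges_from(zip(order, order[1:]))
        if nx.check_planarity(H)[0]:
            return True
    return False

G = nx.complete_graph(4)                        # one clique of size h = 4
print(clique2path_planar(G, [list(range(4))]))  # True: a spanning path is planar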
12 pages, 1660 KiB  
Article
Cross-Camera Erased Feature Learning for Unsupervised Person Re-Identification
by Shaojun Wu and Ling Gao
Algorithms 2020, 13(8), 193; https://doi.org/10.3390/a13080193 - 10 Aug 2020
Cited by 1 | Viewed by 2331
Abstract
Most supervised person re-identification methods show excellent performance, but using labeled datasets is very expensive, which limits their application in practical scenarios. To solve this scalability problem, we propose a Cross-camera Erased Feature Learning (CEFL) framework for unsupervised person re-identification that learns discriminative features from image appearances without manual annotations, exploring both the cross-camera global image appearance and the local details. Specifically, for the global appearance, in order to bridge the gap between images with the same identities under different cameras, we generate style-transferred images. The network is trained to classify the original images, the style-transferred images and the negative samples. To learn the partial details of the images, we generate erased images and train the network to pull similar erased images together and push dissimilar ones apart. In addition, we jointly learn the discriminative global and local information to obtain a more robust model; the global and erased features are used together in feature learning, in conjunction with BFENet. A large number of experiments show the superiority of CEFL in unsupervised pedestrian re-identification. Full article
12 pages, 3728 KiB  
Article
Local-Topology-Based Scaling for Distance Preserving Dimension Reduction Method to Improve Classification of Biomedical Data-Sets
by Karaj Khosla, Indra Prakash Jha, Ajit Kumar and Vibhor Kumar
Algorithms 2020, 13(8), 192; https://doi.org/10.3390/a13080192 - 10 Aug 2020
Cited by 2 | Viewed by 3025
Abstract
Dimension reduction is often used for several procedures of analysis of high-dimensional biomedical data-sets, such as classification or outlier detection. To improve the performance of such data-mining steps, preserving both distance information and local topology among data-points could be more useful than giving priority to visualization in low dimension. Therefore, we introduce topology-preserving distance scaling (TPDS) to augment a dimension reduction method meant to reproduce distance information in a higher dimension. Our approach involves distance inflation to preserve local topology and avoid collapse during distance-preservation-based optimization. Applying TPDS to diverse biomedical data-sets revealed that, besides providing better visualization than typical distance-preserving methods, TPDS leads to better classification of data points in the reduced dimension. For data-sets with outliers, the TPDS approach also proves to be useful, even for a purely distance-preserving method, in achieving better convergence. Full article
(This article belongs to the Special Issue Advanced Data Mining: Algorithms and Applications)
30 pages, 743 KiB  
Article
Faster Algorithms for Mining Shortest-Path Distances from Massive Time-Evolving Graphs
by Mattia D’Emidio
Algorithms 2020, 13(8), 191; https://doi.org/10.3390/a13080191 - 04 Aug 2020
Cited by 4 | Viewed by 3162
Abstract
Computing shortest-path distances is a fundamental primitive in the context of graph data mining, since this kind of information is essential in a broad range of prominent applications, which include social network analysis, data routing, web search optimization, database design and route planning. Standard algorithms for shortest paths (e.g., Dijkstra’s) do not scale well with the graph size, as they can take more than a second, or incur huge memory overheads, to answer a single distance query on large-scale graph datasets. Hence, they are not suited to mine distances from big graphs, which are becoming the norm in most modern application contexts. Therefore, to achieve faster query answering, smarter and more scalable methods have been designed, the most effective of them based on precomputing and querying a compact representation of the transitive closure of the input graph, called the 2-hop-cover labeling. To use such approaches in realistic time-evolving scenarios, when the managed graph undergoes topological modifications over time, specific dynamic algorithms, carefully updating the labeling as the graph evolves, have been introduced. In fact, recomputing from scratch the 2-hop-cover structure every time the graph changes is not an option, as it induces unsustainable time overheads. While the state-of-the-art dynamic algorithm to update a 2-hop-cover labeling against incremental modifications (insertions of arcs/vertices, arc weight decreases) offers very fast update times, the only known solution for decremental modifications (deletions of arcs/vertices, arc weight increases) is still far from being considered practical, as it requires up to tens of seconds of processing per update in several prominent classes of real-world inputs, as experimentation shows. In this paper, we introduce a new dynamic algorithm to update 2-hop-cover labelings against decremental changes. We prove its correctness, formally analyze its worst-case performance, and assess its effectiveness through an experimental evaluation employing both real-world and synthetic inputs. Our results show that it improves, by up to several orders of magnitude, upon the average update times of the only existing decremental algorithm, thus representing a step forward towards real-time distance mining in general, massive time-evolving graphs. Full article
(This article belongs to the Special Issue Algorithmic Aspects of Networks)
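For readers unfamiliar with the underlying data structure, here is a minimal sketch of how a 2-hop-cover labeling answers a distance query: each vertex stores (hub, distance) pairs, and a query minimizes over shared hubs. The labeling below is a hand-built toy for a three-vertex path; the paper's decremental update machinery is not shown.

def query(labels, u, v):
    """Distance from u to v as the best distance through a shared hub."""
    Lu, Lv = labels[u], labels[v]
    return min((Lu[h] + Lv[h] for h in Lu.keys() & Lv.keys()),
               default=float("inf"))

# Hand-built labeling for the path a - b - c, as {vertex: {hub: distance}}.
labels = {"a": {"a": 0, "b": 1},
          "b": {"b": 0},
          "c": {"b": 1, "c": 0}}
print(query(labels, "a", "c"))   # 2, via the shared hub "b"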
11 pages, 362 KiB  
Article
On a Nonsmooth Gauss–Newton Algorithms for Solving Nonlinear Complementarity Problems
by Marek J. Śmietański
Algorithms 2020, 13(8), 190; https://doi.org/10.3390/a13080190 - 04 Aug 2020
Cited by 3 | Viewed by 2963
Abstract
In this paper, we propose a new version of the generalized damped Gauss–Newton method for solving nonlinear complementarity problems based on the transformation to the nonsmooth equation, which is equivalent to some unconstrained optimization problem. The B-differential plays the role of the derivative. We present two types of algorithms (usual and inexact), which have superlinear and global convergence for semismooth cases. These results can be applied to efficiently find all solutions of the nonlinear complementarity problems under some mild assumptions. The results of the numerical tests are attached as a complement of the theoretical considerations. Full article
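A minimal numerical sketch of the overall idea follows: the NCP x ≥ 0, F(x) ≥ 0, xᵀF(x) = 0 is recast as a nonsmooth system Φ(x) = 0 via the standard Fischer–Burmeister function, and a damped Gauss–Newton iteration is applied. A finite-difference Jacobian stands in for the paper's B-differential, and the test problem is a small hypothetical linear complementarity problem.

import numpy as np

M = np.array([[2.0, 1.0], [1.0, 2.0]])     # hypothetical LCP data
q = np.array([-1.0, -1.0])
F = lambda x: M @ x + q

def Phi(x):                                 # componentwise Fischer-Burmeister
    a, b = x, F(x)
    return np.sqrt(a ** 2 + b ** 2) - a - b

def jac(x, h=1e-7):                         # finite differences, in lieu of the B-differential
    cols = []
    for i in range(x.size):
        e = np.zeros(x.size); e[i] = h
        cols.append((Phi(x + e) - Phi(x)) / h)
    return np.array(cols).T

x = np.ones(2)
for _ in range(50):
    r, J = Phi(x), jac(x)
    d = np.linalg.solve(J.T @ J + 1e-10 * np.eye(2), -J.T @ r)   # damped GN step
    t = 1.0
    while t > 1e-8 and np.linalg.norm(Phi(x + t * d)) > (1 - 1e-4 * t) * np.linalg.norm(r):
        t *= 0.5                            # backtracking damping
    x = x + t * d
    if np.linalg.norm(Phi(x)) < 1e-10:
        break
print(x)   # approx [1/3, 1/3], which solves this LCP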
20 pages, 1167 KiB  
Article
Node Placement Optimization of Wireless Sensor Networks Using Multi-Objective Adaptive Degressive Ary Number Encoded Genetic Algorithm
by Yijie Zhang and Mandan Liu
Algorithms 2020, 13(8), 189; https://doi.org/10.3390/a13080189 - 03 Aug 2020
Cited by 9 | Viewed by 3735
Abstract
The wireless sensor network (WSN) has the advantages of low cost, high monitoring accuracy, good fault tolerance, remote monitoring and convenient maintenance. It has been widely used in various fields. In the WSN, the placement of node sensors has a great impact on its coverage, energy consumption and some other factors. In order to improve the convergence speed of a node placement optimization algorithm, the encoding method is improved in this paper. The degressive ary number encoding is further extended to a multi-objective optimization problem. Furthermore, the adaptive changing rule of ary number is proposed by analyzing the experimental results of the N-ary number encoded algorithm. Then a multi-objective optimization algorithm adopting the adaptive degressive ary number encoding method has been used in optimizing the node placement in WSN. The experiments show that the proposed adaptive degressive ary number encoded algorithm can improve both the optimization effect and search efficiency when solving the node placement problem. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
18 pages, 5832 KiB  
Article
Application of the Reed-Solomon Algorithm as a Remote Sensing Data Fusion Tool for Land Use Studies
by Piotr A. Werner
Algorithms 2020, 13(8), 188; https://doi.org/10.3390/a13080188 - 03 Aug 2020
Cited by 1 | Viewed by 3216
Abstract
The Reed-Solomon algorithm is well known in different fields of computer science. The novelty of this study lies in the different interpretation of the algorithm itself and its scope of application for remote sensing, especially at the preparatory stage, i.e., data fusion. A short review of the attempts to use different data fusion approaches in geospatial technologies explains the possible usage of the algorithm. The rationale behind its application for data fusion is to include all possible information from all acquired spectral bands, assuming that complete composite information in the form of one compound image will improve both the quality of visualization and some aspects of further quantitative and qualitative analyses. The concept arose from an empirical, heuristic combination of geographic information systems (GIS), map algebra, and two-dimensional cellular automata. The challenges are related to handling big quantitative data sets and the awareness that these numbers are in fact descriptors of a real-world multidimensional view. An empirical case study makes it easier to understand the operationalization of the Reed-Solomon algorithm for land use studies. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
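As background, the following minimal sketch shows plain Reed–Solomon encoding over interleaved spectral-band values using the third-party reedsolo package; the bands are random placeholders, and the paper's reinterpretation of the algorithm as a data-fusion operator is not reproduced here.

import numpy as np
from reedsolo import RSCodec

bands = [np.random.default_rng(i).integers(0, 256, size=16, dtype=np.uint8)
         for i in range(4)]                       # four hypothetical 8-bit bands
pixelwise = np.stack(bands, axis=1).reshape(-1)   # interleave band values per pixel

rsc = RSCodec(8)                                  # 8 parity symbols over GF(2^8)
codeword = rsc.encode(bytearray(pixelwise.tobytes()))
print(len(pixelwise), "->", len(codeword))        # 64 message bytes -> 72 code bytes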
20 pages, 851 KiB  
Article
Constructing Reliable Computing Environments on Top of Amazon EC2 Spot Instances
by Altino M. Sampaio and Jorge G. Barbosa
Algorithms 2020, 13(8), 187; https://doi.org/10.3390/a13080187 - 31 Jul 2020
Cited by 3 | Viewed by 3251
Abstract
Cloud provider Amazon Elastic Compute Cloud (EC2) gives access to resources in the form of virtual servers, also known as instances. EC2 spot instances (SIs) offer spare computational capacity at steep discounts compared to reliable and fixed-price on-demand instances. The drawback, however, is that the delay in acquiring spots can be incredibly high. Moreover, SIs may not always be available, as they can be reclaimed by EC2 at any given time, with a two-minute interruption notice. In this paper, we propose a multi-workflow scheduling algorithm, allied with a container migration-based mechanism, to dynamically construct and readjust virtual clusters on top of non-reserved EC2 pricing model instances. Our solution leverages recent findings on performance and behavior characteristics of EC2 spots. We conducted simulations by submitting real-life workflow applications, constrained by user-defined deadline and budget quality of service (QoS) parameters. The results indicate that our solution improves the rate of completed tasks by almost 20%, and the rate of completed workflows by at least 30%, compared with other state-of-the-art algorithms, for a worst-case scenario. Full article
23 pages, 451 KiB  
Article
A Review on Recent Advancements in FOREX Currency Prediction
by Md. Saiful Islam, Emam Hossain, Abdur Rahman, Mohammad Shahadat Hossain and Karl Andersson
Algorithms 2020, 13(8), 186; https://doi.org/10.3390/a13080186 - 30 Jul 2020
Cited by 18 | Viewed by 12551
Abstract
In recent years, the foreign exchange (FOREX) market has attracted quite a lot of scrutiny from researchers all over the world. Due to its volatile characteristics, different types of research have been conducted to accomplish the task of predicting future FOREX currency prices accurately. In this research, we present a comprehensive review of the recent advancements in FOREX currency prediction approaches. Besides, we provide some information about the FOREX market and the cryptocurrency market. We wanted to analyze the most recent works in this field and therefore considered only those papers which were published from 2017 to 2019. We used a keyword-based searching technique to filter out popular and relevant research. Moreover, we applied a selection algorithm to determine which papers to include in this review. Based on our selection criteria, we reviewed 39 research articles published in "Elsevier", "Springer", and "IEEE Xplore" that predicted future FOREX prices within the stipulated time. Our research shows that in recent years, researchers have been interested mostly in neural network models, pattern-based approaches, and optimization techniques. Our review also shows that many deep learning algorithms, such as the gated recurrent unit (GRU) and long short-term memory (LSTM), have been fully explored and show huge potential in time series prediction. Full article
29 pages, 692 KiB  
Article
Machine Learning-Guided Dual Heuristics and New Lower Bounds for the Refueling and Maintenance Planning Problem of Nuclear Power Plants
by Nicolas Dupin and El-Ghazali Talbi
Algorithms 2020, 13(8), 185; https://doi.org/10.3390/a13080185 - 30 Jul 2020
Cited by 5 | Viewed by 3800
Abstract
This paper studies the hybridization of Mixed Integer Programming (MIP) with dual heuristics and machine learning techniques to provide dual bounds for a large-scale optimization problem from an industrial application. The case study is the EURO/ROADEF Challenge 2010, to optimize the refueling and maintenance planning of nuclear power plants. Several MIP relaxations are presented to provide dual bounds by computing MIPs smaller than the original problem. It is proven how to obtain dual bounds with scenario decomposition in the different 2-stage programming MILP formulations, with a selection of scenarios guided by machine learning techniques. Several sets of dual bounds are computable, significantly improving the former best dual bounds from the literature and justifying the quality of the best known primal solution. Full article
(This article belongs to the Special Issue Optimization Algorithms and Applications)
21 pages, 10437 KiB  
Article
Iterative Algorithm for Solving Scalar Fractional Differential Equations with Riemann–Liouville Derivative and Supremum
by Ravi Agarwal, Snezhana Hristova, Donal O’Regan and Kremena Stefanova
Algorithms 2020, 13(8), 184; https://doi.org/10.3390/a13080184 - 30 Jul 2020
Cited by 4 | Viewed by 2773
Abstract
The initial value problem for a special type of scalar nonlinear fractional differential equation with a Riemann–Liouville fractional derivative is studied. The main characteristic of the equation is the presence of the supremum of the unknown function over a previous time interval. This type of equation is difficult to solve explicitly, and approximate methods are needed. In this paper, mild lower and mild upper solutions are first defined. Then, based on these definitions and the application of the monotone-iterative technique, we present an algorithm for constructing two types of successive approximations. Both sequences converge monotonically from above and from below, respectively, to the mild solutions of the given problem. The suggested iterative scheme is applied to particular problems to illustrate its application. Full article
23 pages, 675 KiB  
Article
Influence Maximization with Priority in Online Social Networks
by Canh V. Pham, Dung K. T. Ha, Quang C. Vu, Anh N. Su and Huan X. Hoang
Algorithms 2020, 13(8), 183; https://doi.org/10.3390/a13080183 - 29 Jul 2020
Cited by 4 | Viewed by 3130
Abstract
The Influence Maximization (IM) problem, which finds a set of k nodes (called a seed set) in a social network to initiate the influence spread so that the number of influenced nodes after the propagation process is maximized, is an important problem in information propagation and social network analysis. However, previous studies ignored the constraint of priority, which led to inefficient seed collections. In some real situations, companies or organizations often prioritize influencing potential users during their influence diffusion campaigns. Taking a new approach to these existing works, we propose a new problem, called Influence Maximization with Priority (IMP), which finds a seed set of k nodes in a social network that influences the largest number of nodes, subject to the influence spread to a specific set of nodes U (called the priority set) being at least a given threshold T. We show that the problem is NP-hard under the well-known Independent Cascade (IC) model. To find solutions, we propose two efficient algorithms, called Integrated Greedy (IG) and Integrated Greedy Sampling (IGS), with provable theoretical guarantees. IG provides a (1-(1-1/k)^t)-approximation solution, where t is an outcome of the algorithm and t ≥ 1; the worst-case approximation ratio is obtained when t = 1 and is equal to 1/k. In addition, IGS is an efficient randomized approximation algorithm based on a sampling method that provides a (1-(1-1/k)^t-ε)-approximation solution with probability at least 1-δ, where ε > 0 and δ ∈ (0, 1) are input parameters of the problem. We conduct extensive experiments on various real networks to compare our IGS algorithm to the state-of-the-art algorithms for the IM problem. The results indicate that our algorithms provide better solutions in terms of influence on the priority sets, approximately two to ten times above the threshold T, while running time, memory usage and influence spread are also comparable to the others. Full article
(This article belongs to the Section Combinatorial Optimization, Graph, and Network Algorithms)
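For orientation, here is a minimal sketch of the classic greedy influence-maximization loop under the IC model with Monte Carlo spread estimation; the graph is a toy adjacency list, and the priority-set constraint that distinguishes the paper's IG/IGS algorithms is omitted.

import random

def ic_spread(graph, seeds, p=0.1, runs=200, rng=random.Random(0)):
    """Monte Carlo estimate of expected spread under Independent Cascade."""
    total = 0
    for _ in range(runs):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            u = frontier.pop()
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    frontier.append(v)
        total += len(active)
    return total / runs

def greedy_im(graph, k):
    seeds = set()
    for _ in range(k):                      # add the best marginal-gain node
        best = max((v for v in graph if v not in seeds),
                   key=lambda v: ic_spread(graph, seeds | {v}))
        seeds.add(best)
    return seeds

toy = {0: [1, 2], 1: [2, 3], 2: [3], 3: [0], 4: [0]}
print(greedy_im(toy, 2))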
23 pages, 620 KiB  
Article
Trajectory Clustering and k-NN for Robust Privacy Preserving k-NN Query Processing in GeoSpark
by Elias Dritsas, Andreas Kanavos, Maria Trigka, Gerasimos Vonitsanos, Spyros Sioutas and Athanasios Tsakalidis
Algorithms 2020, 13(8), 182; https://doi.org/10.3390/a13080182 - 28 Jul 2020
Cited by 4 | Viewed by 3634
Abstract
Privacy preserving and anonymity have gained significant concern from the big data perspective. We take the view that forthcoming frameworks and theories will establish several solutions for privacy protection. The k-anonymity is considered a key solution that has been widely employed to prevent data re-identification and concerns us in the context of this work. Data modeling has also gained significant attention from the big data perspective. It is believed that advancing distributed environments will provide users with several solutions for efficient spatio-temporal data management. GeoSpark will be utilized in the current work as it is a key solution that has been widely employed for spatial data. Specifically, it works on top of Apache Spark, the main framework leveraged by the research community and organizations for big data transformation, processing and visualization. To this end, we focused on trajectory data representation so as to be applicable to the GeoSpark environment, and a GeoSpark-based approach is designed for the efficient management of real spatio-temporal data. The next step is to gain a deeper understanding of the data through the application of k nearest neighbor (k-NN) queries, either using indexing methods or otherwise. The k-anonymity set computation, which is the main component for privacy preservation evaluation and the main issue of our previous works, is evaluated in the GeoSpark environment. More to the point, the focus here is on the time cost of k-anonymity set computation along with vulnerability measurement. The extracted results are presented in tables and figures for visual inspection. Full article
(This article belongs to the Special Issue Algorithmic Data Management)
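A minimal single-machine sketch of the k-anonymity-set idea follows: a query point is hidden among its k nearest neighbours, computed here with scikit-learn on synthetic coordinates rather than with k-NN queries inside GeoSpark.

import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(7)
points = rng.uniform(0, 100, size=(1000, 2))   # synthetic trajectory coordinates
query = np.array([[50.0, 50.0]])               # the location to be anonymized

k = 10
nn = NearestNeighbors(n_neighbors=k).fit(points)
dist, idx = nn.kneighbors(query)
anonymity_set = points[idx[0]]                 # the k candidates hiding the query
print(anonymity_set.shape, float(dist[0].max()))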
26 pages, 3481 KiB  
Article
On the Optimal Calculation of the Rice Coding Parameter
by Fernando Solano Donado
Algorithms 2020, 13(8), 181; https://doi.org/10.3390/a13080181 - 27 Jul 2020
Cited by 4 | Viewed by 4482
Abstract
In this article, we design and evaluate several algorithms for the computation of the optimal Rice coding parameter. We conjecture that the optimal Rice coding parameter can be bounded and verify this conjecture through numerical experiments using real data. We also describe algorithms that partition the input sequence of data into sub-sequences, such that if each sub-sequence is coded with a different Rice parameter, the overall code length is minimised. An algorithm for finding the optimal partitioning solution for Rice codes is proposed, as well as fast heuristics, based on the understanding of the problem trade-offs. Full article
(This article belongs to the Special Issue Lossless Data Compression)
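To make the object of study concrete: with parameter k, Rice coding writes a non-negative value n as a unary quotient (n >> k), a stop bit, and k remainder bits, so its code length is (n >> k) + 1 + k. The sketch below finds the optimal parameter for a block by exhaustive search, the baseline that the paper's bounded and partitioned methods compete with; the data values are made up.

def rice_len(n, k):
    """Bits to Rice-code non-negative n: unary quotient, stop bit, k remainder bits."""
    return (n >> k) + 1 + k

def best_parameter(values, kmax=16):
    return min(range(kmax + 1),
               key=lambda k: sum(rice_len(n, k) for n in values))

data = [3, 8, 12, 2, 25, 7, 9, 4]              # made-up residuals
k = best_parameter(data)
print(k, sum(rice_len(n, k) for n in data), "bits in total")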
15 pages, 1357 KiB  
Article
A Predictive Analysis on Emerging Technology Utilization in Industrialized Construction in the United States and China
by Bing Qi, Shuyu Qian and Aaron Costin
Algorithms 2020, 13(8), 180; https://doi.org/10.3390/a13080180 - 24 Jul 2020
Cited by 6 | Viewed by 3644
Abstract
Considering the increasing use of emerging technologies in industrialized construction in recent years, the primary objective of this article is to develop and validate models to predict the emerging-technology utilization level of industrialized construction industry practitioners. Our preliminary research results indicate that company background and personal career profiles can significantly affect practitioners’ technology utilization level. Thus, our prediction model is based on four variables: company size, company type, working experience, and working position. The United States and China are selected as case studies to validate the prediction model. First, a well-designed questionnaire survey was distributed to industrialized construction industry practitioners from the two countries, which led to 81 and 99 valid responses, respectively. Then, ordinal logistic regression was used to develop a set of models to predict the practitioners’ utilization level of the four main technology types. Finally, an external test dataset consisting of 16 cases indicates that the prediction models have high accuracy. The results also reflect some differences in the technology utilization status of the industrialized construction industry between the United States and China. The major contribution of this research is offering an efficient and accurate method to predict practitioners’ technology utilization level in industrialized construction; the models are believed to have wide application in promoting emerging technologies in actual industrialized construction. Full article
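A minimal sketch of the ordinal-logistic-regression step follows, using statsmodels' OrderedModel with the four predictors named in the abstract; all values are synthetic stand-ins, not the survey responses.

import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(3)
X = pd.DataFrame({                      # the four predictors, synthetically coded
    "company_size": rng.integers(1, 4, 200),
    "company_type": rng.integers(0, 2, 200),
    "experience": rng.integers(1, 5, 200),
    "position": rng.integers(1, 4, 200),
})
utilization = pd.Series(rng.integers(0, 3, 200),
                        name="utilization").astype("category").cat.as_ordered()

res = OrderedModel(utilization, X, distr="logit").fit(method="bfgs", disp=False)
print(res.params)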
16 pages, 451 KiB  
Article
Two-Component Bayesian Hierarchical Models for Cost-Benefit Analysis of Traffic Barrier Crash Count
by Mahdi Rezapour and Khaled Ksaibati
Algorithms 2020, 13(8), 179; https://doi.org/10.3390/a13080179 - 23 Jul 2020
Cited by 5 | Viewed by 2728
Abstract
Road departure crashes tend to be hazardous, especially in rural areas like Wyoming. Traffic barriers can be installed to mitigate the severity of those crashes. However, the severity of traffic barrier crashes still persists. Besides various driver and environmental characteristics, roadway and barrier geometric characteristics play a critical role in the severity of barrier crashes. The Wyoming Department of Transportation (WYDOT) has initiated a project to identify and optimize the heights of those barriers that are below the design standard, while prioritizing them based on monetary benefit. The aim is to optimize first the barriers that need immediate attention, given the limited budget, and then all other barriers under design. In order to account for both the frequency and severity of crashes, equivalent property damage only (EPDO) counts were considered. Data of this type, besides being over-dispersed, exhibit an excess of zeroes. Thus, a two-component model was employed as a flexible way of addressing this problem; a one-component hierarchical modeling approach was also considered for comparison. This paper presents an empirical cost-benefit analysis based on Bayesian hierarchical machine learning techniques. After identifying the best model in terms of performance, by the deviance information criterion (DIC), the results were converted into an equation, and the equation was deployed as a machine learning technique: an automated method generated costs based on the barriers’ current conditions, and then based on optimized barrier heights. The empirical analysis shows that cost-sensitive modeling and machine learning deployment can serve as an effective way to conduct cost-benefit analysis, by measuring the costs of barrier enhancements and the benefits added over the years, and consequently prioritizing barriers under a limited budget. A comprehensive discussion of the two-component models, zero-inflated and hurdle, is included in the manuscript. Full article
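The following minimal sketch fits one of the two-component models discussed, a zero-inflated Poisson, to synthetic crash-count-like data with statsmodels; the paper's Bayesian hierarchical fitting, hurdle variant, and EPDO weighting are not reproduced.

import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(5)
n = 500
x = rng.normal(size=n)                         # e.g. a barrier-geometry covariate
counts = rng.poisson(np.exp(0.3 + 0.5 * x))
counts[rng.uniform(size=n) < 0.4] = 0          # inject structural (excess) zeroes

X = sm.add_constant(x)
res = ZeroInflatedPoisson(counts, X, exog_infl=np.ones((n, 1))).fit(disp=False)
print(res.params)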
16 pages, 2816 KiB  
Article
The Model Order Reduction Method as an Effective Way to Implement GPC Controller for Multidimensional Objects
by Sebastian Plamowski and Richard W Kephart
Algorithms 2020, 13(8), 178; https://doi.org/10.3390/a13080178 - 23 Jul 2020
Cited by 4 | Viewed by 2938
Abstract
The paper addresses issues associated with implementing GPC controllers in systems with multiple input signals. Depending on the method of identification, the resulting models may be of a high order and when applied to a control/regulation law, may result in numerical errors due to the limitations of representing values in double-precision floating point numbers. This phenomenon is to be avoided, because even if the model is correct, the resulting numerical errors will lead to poor control performance. An effective way to identify, and at the same time eliminate, this unfavorable feature is to reduce the model order. A method of model order reduction is presented in this paper that effectively mitigates these issues. In this paper, the Generalized Predictive Control (GPC) algorithm is presented, followed by a discussion of the conditions that result in high order models. Examples are included where the discussed problem is demonstrated along with the subsequent results after the reduction. The obtained results and formulated conclusions are valuable for industry practitioners who implement a predictive control in industry. Full article
(This article belongs to the Special Issue Model Predictive Control: Algorithms and Applications)
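A minimal sketch of the model-order-reduction step follows, using balanced truncation from the python-control package on a random stable high-order model (depending on the installed version, balred may require the slycot backend); the GPC law itself is not shown, and any reduction method with an error bound could stand in here.

import control as ct

full = ct.rss(20, outputs=1, inputs=2)    # random stable 20-state, 2-input model
reduced = ct.balred(full, 5)              # keep the 5 dominant balanced states
print(full.nstates, "->", reduced.nstates)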
19 pages, 3050 KiB  
Article
Sphere Fitting with Applications to Machine Tracking
by Dror Epstein and Dan Feldman
Algorithms 2020, 13(8), 177; https://doi.org/10.3390/a13080177 - 22 Jul 2020
Cited by 7 | Viewed by 3513
Abstract
We suggest a provable and practical approximation algorithm for fitting a set P of n points in R^d to a sphere. Here, a sphere is represented by its center x ∈ R^d and radius r > 0. The goal is to minimize the sum ∑_{p∈P} | ‖p − x‖ − r | of distances to the points, up to a multiplicative factor of 1 ± ε, for a given constant ε > 0, over every such r and x. Our main technical result is a data summarization of the input set, called a coreset, that approximates the above sum of distances on the original (big) set P for every sphere. Then, an accurate sphere can be extracted quickly via an inefficient exhaustive search from the small coreset. Most articles focus mainly on sphere identification (e.g., circles in a 2D image) rather than finding the exact match (in the sense of extent measures), and do not provide approximation guarantees. We implement our algorithm and provide extensive experimental results on both synthetic and real-world data. We then combine our algorithm in a mechanical pressure control system whose main bottleneck is tracking a falling ball. Full open source is also provided. Full article
(This article belongs to the Section Randomized, Online, and Approximation Algorithms)
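The objective being approximated can be written down directly; the sketch below minimizes the sum of absolute point-to-sphere distances with a generic local optimizer on synthetic noisy sphere samples, rather than with the paper's coreset plus exhaustive-search scheme.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
center, radius = np.array([1.0, -2.0, 0.5]), 3.0
dirs = rng.normal(size=(200, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
P = center + radius * dirs + rng.normal(scale=0.05, size=(200, 3))  # noisy samples

def cost(theta):                       # sum over p of | ||p - x|| - r |
    x, r = theta[:3], theta[3]
    return np.abs(np.linalg.norm(P - x, axis=1) - r).sum()

x0 = np.r_[P.mean(axis=0), np.linalg.norm(P - P.mean(axis=0), axis=1).mean()]
res = minimize(cost, x0, method="Nelder-Mead")
print(res.x)                           # approx center [1, -2, 0.5] and r approx 3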
27 pages, 2513 KiB  
Article
Towards Cognitive Recommender Systems
by Amin Beheshti, Shahpar Yakhchi, Salman Mousaeirad, Seyed Mohssen Ghafari, Srinivasa Reddy Goluguri and Mohammad Amin Edrisi
Algorithms 2020, 13(8), 176; https://doi.org/10.3390/a13080176 - 22 Jul 2020
Cited by 52 | Viewed by 9415
Abstract
Intelligence is the ability to learn from experience and use domain experts’ knowledge to adapt to new situations. In this context, an intelligent Recommender System should be able to learn from domain experts’ knowledge and experience, as it is vital to know the domain in which the items will be recommended. Traditionally, Recommender Systems have been recognized as playlist generators for video/music services (e.g., Netflix and Spotify), e-commerce product recommenders (e.g., Amazon and eBay), or social content recommenders (e.g., Facebook and Twitter). However, Recommender Systems in modern enterprises are highly data-/knowledge-driven and may rely on users’ cognitive aspects such as personality, behavior, and attitude. In this paper, we survey and summarize previously published studies on Recommender Systems to help readers understand our method’s contributions to the field in this context. We discuss the current limitations of state-of-the-art approaches in Recommender Systems and the need for our new approach: a vision and a general framework for a new type of data-driven, knowledge-driven, and cognition-driven Recommender Systems, namely, Cognitive Recommender Systems. Cognitive Recommender Systems will be a new type of intelligent Recommender Systems that understand the user’s preferences, detect changes in user preferences over time, predict the user’s unknown favorites, and explore adaptive mechanisms to enable intelligent actions within compound and changing environments. We present a motivating scenario in banking and argue that existing Recommender Systems: (i) do not use domain experts’ knowledge to adapt to new situations; (ii) may not be able to predict the ratings or preferences a customer would give to a product (e.g., loan, deposit, or trust service); and (iii) do not support data capture and analytics around customers’ cognitive activities and use it to provide intelligent and time-aware recommendations. Full article
(This article belongs to the Special Issue Algorithms for Personalization Techniques and Recommender Systems)