Algorithms, Volume 14, Issue 10 (October 2021) – 26 articles

Cover Story: READ (Rough Estimator based Asynchronous Distributed algorithm) is an efficient distributed super points detection algorithm. It uses a lightweight estimator to generate candidate super points and the well-known Linear Estimator to accurately estimate the cardinality of each candidate, so as to detect super points with the smallest communication burden.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; the PDF is the official version. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
20 pages, 9766 KiB  
Article
Research on Building Target Detection Based on High-Resolution Optical Remote Sensing Imagery
by Yong Mei, Hao Chen and Shuting Yang
Algorithms 2021, 14(10), 300; https://0-doi-org.brum.beds.ac.uk/10.3390/a14100300 - 19 Oct 2021
Viewed by 1678
Abstract
High-resolution remote sensing image building target detection has wide application value in land planning, geographic monitoring, smart cities, and other fields. However, due to the complex background of remote sensing imagery, some detailed features of building targets are hard to distinguish from the background, and detection is prone to problems such as distortion and missing segments of the building outline. To address this challenge, we developed a novel building target detection method. First, a building detection method based on rectangular approximation and region growing was proposed, and a saliency detection model based on foreground compactness and the local contrast of manifold ranking is used to obtain the saliency map of the building region. Then, a boundary-prior saliency detection method based on an improved manifold ranking algorithm was proposed for building target areas with low contrast against the background. Finally, by fusing the results of the rectangular-approximation-based and saliency-based detection, the proposed fusion method improved the Recall and F1 value of building detection, showing that this paper provides an effective and efficient building target detection method for high-resolution remote sensing imagery. Full article
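The abstract's fusion step can be illustrated with a toy sketch: fusing two detection masks by pixelwise union (an assumed fusion rule, since the abstract does not spell one out). Union can only add detected pixels, so recall never decreases, which matches the reported Recall/F1 improvement:

```python
# Toy illustration of fusing two building-detection masks by pixelwise
# union. The masks, image size, and the union rule itself are illustrative
# assumptions, not the paper's exact procedure.

def recall(pred, truth):
    """Fraction of ground-truth building pixels that the prediction covers."""
    hits = sum(1 for p, t in zip(pred, truth) if t == 1 and p == 1)
    return hits / sum(truth)

# Flattened 1-D "masks" for a tiny image; 1 = building pixel.
truth     = [1, 1, 1, 1, 0, 0, 0, 0]
rect_mask = [1, 1, 0, 0, 0, 0, 0, 0]  # rectangular-approximation result
sal_mask  = [0, 0, 1, 1, 0, 1, 0, 0]  # saliency-based result (one false positive)

fused = [max(a, b) for a, b in zip(rect_mask, sal_mask)]

print(recall(rect_mask, truth))  # 0.5
print(recall(sal_mask, truth))   # 0.5
print(recall(fused, truth))      # 1.0
```

The union trades a possible drop in precision (the false positive survives) for higher recall, which is why the paper reports Recall and F1 rather than precision alone.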
(This article belongs to the Special Issue Machine Learning in Image and Video Processing)
31 pages, 4725 KiB  
Article
The Stock Index Prediction Based on SVR Model with Bat Optimization Algorithm
by Jianguo Zheng, Yilin Wang, Shihan Li and Hancong Chen
Algorithms 2021, 14(10), 299; https://0-doi-org.brum.beds.ac.uk/10.3390/a14100299 - 15 Oct 2021
Cited by 14 | Viewed by 2641
Abstract
Accurate stock market prediction models can provide investors with convenient tools to make better data-based decisions and judgments. Moreover, retail and institutional investors could reduce their investment risk by selecting the optimal stock index with the help of these models. Predicting stock index prices is one of the most effective tools for risk management and portfolio diversification. Continuous improvement in the accuracy of stock index price forecasts can promote the improvement and maturity of China’s capital market supervision and investment; it is also an important guarantee for China to further accelerate structural reforms and manufacturing transformation and upgrading. To this end, this paper introduces the bat algorithm to optimize the three free parameters of the SVR machine learning model, constructs the BA-SVR hybrid model, and forecasts the closing prices of 18 stock indexes in the Chinese stock market. The sample spans 15 January 2016 (the 10th trading day in 2016) to 31 December 2020. We select the last 20, 60, and 250 days of the whole sample as test sets for short-term, mid-term, and long-term forecasts, respectively. The empirical results show that the BA-SVR model outperforms the polynomial-kernel and sigmoid-kernel SVR models without optimized initial parameters. In the robustness test, we re-predict using the stationary time series obtained after first-order differencing of six selected characteristics. Compared with the random forest and ANN models, the prediction performance of the BA-SVR model remains significant. This paper also provides a new perspective on stock index forecasting methods and the application of bat algorithms in the financial field. Full article
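A minimal sketch of the bat algorithm driving the BA-SVR idea. To stay self-contained, a smooth bowl stands in for the cross-validation error surface over the three SVR free parameters (C, gamma, epsilon); the parameter ranges, bat-algorithm constants, and the objective itself are illustrative assumptions, not the paper's setup:

```python
import math, random

random.seed(42)

# Stand-in objective: minimum at (C, gamma, epsilon) = (10, 0.1, 0.5),
# playing the role of an SVR validation loss (an assumption for this sketch).
def objective(x):
    c, g, e = x
    return (c - 10.0) ** 2 + (g - 0.1) ** 2 + (e - 0.5) ** 2

LOW, HIGH = [0.0, 0.0, 0.0], [100.0, 1.0, 1.0]   # assumed search ranges

def bat_optimize(n_bats=20, n_iter=200, f_min=0.0, f_max=2.0,
                 alpha=0.9, gamma=0.9):
    dim = len(LOW)
    pos = [[random.uniform(LOW[d], HIGH[d]) for d in range(dim)]
           for _ in range(n_bats)]
    vel = [[0.0] * dim for _ in range(n_bats)]
    loud = [1.0] * n_bats          # loudness A_i
    rate = [0.5] * n_bats          # pulse emission rate r_i
    best = min(pos, key=objective)
    for t in range(n_iter):
        for i in range(n_bats):
            f = f_min + (f_max - f_min) * random.random()
            cand = []
            for d in range(dim):   # frequency-tuned velocity update
                vel[i][d] += (pos[i][d] - best[d]) * f
                cand.append(min(max(pos[i][d] + vel[i][d], LOW[d]), HIGH[d]))
            if random.random() > rate[i]:   # local random walk around the best bat
                cand = [min(max(best[d] + 0.01 * loud[i] * random.gauss(0, 1)
                                * (HIGH[d] - LOW[d]), LOW[d]), HIGH[d])
                        for d in range(dim)]
            if objective(cand) < objective(pos[i]) and random.random() < loud[i]:
                pos[i] = cand
                loud[i] *= alpha                      # quieter as it converges
                rate[i] = 0.5 * (1 - math.exp(-gamma * t))
            if objective(pos[i]) < objective(best):
                best = list(pos[i])
    return best

best = bat_optimize()
print(best, objective(best))
```

In the paper's setting, `objective` would be replaced by the SVR cross-validation error on the stock index training window.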
(This article belongs to the Special Issue Metaheuristics)
19 pages, 2780 KiB  
Article
SENSE: A Flow-Down Semantics-Based Requirements Engineering Framework
by Kalliopi Kravari, Christina Antoniou and Nick Bassiliades
Algorithms 2021, 14(10), 298; https://0-doi-org.brum.beds.ac.uk/10.3390/a14100298 - 15 Oct 2021
Cited by 1 | Viewed by 1970
Abstract
The processes involved in requirements engineering are some of the most, if not the most, important steps in systems development. The need for well-defined requirements remains a critical issue for the development of any system, as a vague description of the structure and behavior of a system can lead to uncertainties, restrictions, or improper functioning that would be hard to fix later. In this context, this article proposes SENSE, a framework based on standardized natural-language expressions with well-defined semantics, called boilerplates, that supports a flow-down procedure for requirement management. The framework integrates sets of boilerplates and proposes the most appropriate of them, depending, among other considerations, on the type of requirement and the system under development, while providing validity and completeness verification checks using a minimum consistent set of formalisms and languages. SENSE is a consistent and easily understood framework that allows engineers to use formal languages and semantics rather than traditional natural language and machine learning techniques, optimizing requirement development. Its main aim is to provide a complete process for producing and standardizing requirements by using semantics, ontologies, and appropriate NLP techniques. Furthermore, SENSE performs the necessary verifications by using SPARQL (SPIN) queries to support requirement management. Full article
(This article belongs to the Special Issue Ontologies, Ontology Development and Evaluation)
29 pages, 134025 KiB  
Article
Improving the Robustness of AI-Based Malware Detection Using Adversarial Machine Learning
by Shruti Patil, Vijayakumar Varadarajan, Devika Walimbe, Siddharth Gulechha, Sushant Shenoy, Aditya Raina and Ketan Kotecha
Algorithms 2021, 14(10), 297; https://doi.org/10.3390/a14100297 - 15 Oct 2021
Cited by 12 | Viewed by 7616
Abstract
Cyber security is used to protect computers and networks from ill-intended digital threats and attacks. The task is becoming more difficult in the information age due to the explosion of data and technology, and there is a drastic rise in new types of attacks with which conventional signature-based systems cannot keep up. Machine learning has proven to be a very useful tool in the evolution of malware detection systems. However, the security of AI-based malware detection models is fragile: with advancements in machine learning, attackers have found ways to work around such detection systems using adversarial attack techniques. Such attacks target the data level and the classifier models, including during the testing phase, and tend to cause the classifier to misclassify the given input, which can be very harmful in real-time AI-based malware detection. This paper proposes a framework for generating adversarial malware images and retraining the classification models to improve malware detection robustness. Different classification models were implemented for malware detection, and attacks were mounted using adversarial images to analyze the models’ behavior. The robustness of the models was improved by means of adversarial training, and better attack resistance is observed. Full article
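The generate-adversarial-examples-then-retrain loop can be sketched in a few lines. A tiny logistic-regression "detector" on synthetic two-class data stands in for the CNN malware-image models of the paper, and an FGSM-style perturbation stands in for the attack; all data, model, and epsilon choices below are illustrative assumptions, and with a linear stand-in the robustness gain is modest — the point is the pipeline structure:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, epochs=200, lr=0.5):
    """Plain gradient-descent logistic regression (stand-in classifier)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        grad = X.T @ (sigmoid(X @ w) - y) / len(y)
        w -= lr * grad
    return w

def fgsm(X, y, w, eps=1.0):
    # Gradient of the loss w.r.t. the *input* is (p - y) * w per sample;
    # stepping in its sign direction maximally increases the loss (FGSM).
    p = sigmoid(X @ w)
    grad_x = (p - y)[:, None] * w[None, :]
    return X + eps * np.sign(grad_x)

# Synthetic "benign" vs "malware" feature clusters (assumed data).
X = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])
y = np.concatenate([np.zeros(200), np.ones(200)])

w = train(X, y)
X_adv = fgsm(X, y, w)

acc = lambda Xs, w_: np.mean((sigmoid(Xs @ w_) > 0.5) == y)
print("clean acc:", acc(X, w))
print("adversarial acc:", acc(X_adv, w))

# Adversarial training: augment with the crafted examples and retrain.
w_robust = train(np.vstack([X, X_adv]), np.concatenate([y, y]))
print("robust clean acc:", acc(X, w_robust))
```

In the paper's setting, `train` would be a CNN on malware images and `fgsm` an image-space adversarial generator, but the retraining step plays the same role.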
12 pages, 308 KiB  
Article
Genz and Mendell-Elston Estimation of the High-Dimensional Multivariate Normal Distribution
by Lucy Blondell, Mark Z. Kos, John Blangero and Harald H. H. Göring
Algorithms 2021, 14(10), 296; https://0-doi-org.brum.beds.ac.uk/10.3390/a14100296 - 14 Oct 2021
Cited by 2 | Viewed by 1436
Abstract
Statistical analysis of multinomial data in complex datasets often requires estimation of the multivariate normal (mvn) distribution for models in which the dimensionality can easily reach 10–1000 and higher. Few algorithms for estimating the mvn distribution can offer robust and efficient performance over such a range of dimensions. We report a simulation-based comparison of two algorithms for the mvn that are widely used in statistical genetic applications. The venerable Mendell-Elston approximation is fast but execution time increases rapidly with the number of dimensions, estimates are generally biased, and an error bound is lacking. The correlation between variables significantly affects absolute error but not overall execution time. The Monte Carlo-based approach described by Genz returns unbiased and error-bounded estimates, but execution time is more sensitive to the correlation between variables. For ultra-high-dimensional problems, however, the Genz algorithm exhibits better scale characteristics and greater time-weighted efficiency of estimation. Full article
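The Monte Carlo idea behind the Genz approach can be conveyed in a few lines: sample the mvn via its Cholesky factor and count how often the sample falls in the target region, which yields an unbiased estimate with a binomial error bound. (The actual Genz algorithm uses a cleverer sequential transformation of the integrand; the plain sampler below only illustrates the unbiased, error-bounded character the abstract highlights.)

```python
import numpy as np

rng = np.random.default_rng(1)

def mvn_prob(b, corr, n_samples=200_000):
    """Monte Carlo estimate of P(X <= b) for X ~ N(0, corr)."""
    L = np.linalg.cholesky(corr)           # corr = L @ L.T
    z = rng.standard_normal((n_samples, len(b)))
    x = z @ L.T                            # samples with covariance `corr`
    p_hat = np.all(x <= b, axis=1).mean()
    # Binomial standard error supplies a probabilistic error bound.
    se = np.sqrt(p_hat * (1 - p_hat) / n_samples)
    return p_hat, se

# Independent 2-D check: P(X1 <= 0, X2 <= 0) = 0.5 * 0.5 = 0.25 exactly.
p, se = mvn_prob(b=np.zeros(2), corr=np.eye(2))
print(p, "+/-", se)
```

The execution-time sensitivity to correlation mentioned in the abstract shows up here as a variance effect: strongly correlated `corr` matrices make the indicator function harder to estimate at a fixed sample budget.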
(This article belongs to the Section Analysis of Algorithms and Complexity Theory)
13 pages, 363 KiB  
Article
Ant Colony Optimization with Warm-Up
by Mattia Neroni
Algorithms 2021, 14(10), 295; https://0-doi-org.brum.beds.ac.uk/10.3390/a14100295 - 12 Oct 2021
Cited by 9 | Viewed by 2136
Abstract
Ant Colony Optimization (ACO) is a probabilistic technique, inspired by the behavior of ants, for solving computational problems that may be reduced to finding the best path through a graph. Some species of ants deposit pheromone on the ground to mark favorable paths that should be followed by other members of the colony, and ACO implements a similar mechanism for solving optimization problems. In this paper, a warm-up procedure for ACO is proposed. During the warm-up, the pheromone matrix is initialized to provide an efficient new starting point for the algorithm, so that it can obtain the same (or better) results with fewer iterations. The warm-up is based exclusively on the graph, which, in most applications, is given and does not need to be recalculated before each execution of the algorithm. In this way, the warm-up can be performed only once, and it speeds up the algorithm every time it is used from then on. The proposed solution is validated on a set of traveling salesman problem instances and in the simulation of a real industrial application for the routing of pickers in a manual warehouse. During the validation, it is compared with other ACO variants that adopt a pheromone initialization technique, and the results show that, in most cases, the proposed warm-up allows the ACO to obtain the same or better results with fewer iterations. Full article
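The warm-up concept above can be sketched on a tiny TSP: instead of a uniform pheromone matrix, the initial pheromone is derived from the graph alone (here, proportional to inverse edge length and computed once). This is an illustration of the idea, not the paper's exact warm-up rule, and the ACO parameters are assumed:

```python
import math, random

random.seed(3)

pts = [(random.random(), random.random()) for _ in range(12)]
n = len(pts)
dist = [[math.dist(pts[i], pts[j]) or 1e-9 for j in range(n)] for i in range(n)]

def warm_up_pheromone():
    """Graph-only initialization: trail proportional to 1/length, in (0, 1]."""
    tau = [[1.0 / dist[i][j] if i != j else 0.0 for j in range(n)]
           for i in range(n)]
    m = max(max(row) for row in tau)
    return [[t / m for t in row] for row in tau]

def build_tour(tau, alpha=1.0, beta=2.0):
    tour, seen = [0], {0}
    while len(tour) < n:
        i = tour[-1]
        cand = [j for j in range(n) if j not in seen]
        wts = [(tau[i][j] ** alpha) * ((1.0 / dist[i][j]) ** beta) for j in cand]
        j = random.choices(cand, weights=wts)[0]
        tour.append(j); seen.add(j)
    return tour

def length(tour):
    return sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))

def aco(tau, ants=10, iters=30, rho=0.1):
    best = min((build_tour(tau) for _ in range(ants)), key=length)
    for _ in range(iters):
        tours = [build_tour(tau) for _ in range(ants)]
        best = min(tours + [best], key=length)
        for i in range(n):                       # evaporation
            for j in range(n):
                tau[i][j] *= (1 - rho)
        for k in range(n):                       # reinforce the best tour
            a, b = best[k], best[(k + 1) % n]
            tau[a][b] += 1.0 / length(best)
            tau[b][a] += 1.0 / length(best)
    return best

best = aco(warm_up_pheromone())
print(length(best))
```

Because `warm_up_pheromone` depends only on `dist`, it can be computed once per graph and reused across runs, which is exactly the amortization argument made in the abstract.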
(This article belongs to the Special Issue Metaheuristics)
21 pages, 494 KiB  
Article
Globally Optimizing QAOA Circuit Depth for Constrained Optimization Problems
by Rebekah Herrman, Lorna Treffert, James Ostrowski, Phillip C. Lotshaw, Travis S. Humble and George Siopsis
Algorithms 2021, 14(10), 294; https://0-doi-org.brum.beds.ac.uk/10.3390/a14100294 - 11 Oct 2021
Cited by 9 | Viewed by 2029
Abstract
We develop a global variable substitution method that reduces n-variable monomials in combinatorial optimization problems to equivalent instances with monomials in fewer variables. We apply this technique to 3-SAT and analyze the optimal quantum unitary circuit depth needed to solve the reduced problem using the quantum approximate optimization algorithm. For benchmark 3-SAT problems, we find that the upper bound of the unitary circuit depth is smaller when the problem is formulated as a product and uses the substitution method to decompose gates than when the problem is written in the linear formulation, which requires no decomposition. Full article
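The substitution principle can be checked exhaustively on the smallest case: replacing the product x1*x2 by a new binary variable y turns the cubic monomial x1*x2*x3 into the quadratic y*x3, provided a penalty forces y = x1*x2. The standard Rosenberg penalty P = x1*x2 - 2*y*(x1 + x2) + 3*y is zero exactly when y = x1*x2 and positive otherwise. (This shows the local substitution device only; the paper's global method chooses substitutions shared across all monomials of the instance.)

```python
from itertools import product

# Rosenberg penalty for enforcing y = x1 * x2 on binary variables.
def penalty(x1, x2, y):
    return x1 * x2 - 2 * y * (x1 + x2) + 3 * y

# Verify on all 8 binary assignments: P == 0 iff y equals x1*x2, else P > 0,
# so minimizing the substituted objective recovers the original optimum.
for x1, x2, y in product((0, 1), repeat=3):
    p = penalty(x1, x2, y)
    assert (p == 0) == (y == x1 * x2), (x1, x2, y, p)
    assert p >= 0

print("Rosenberg penalty verified on all 8 assignments")
```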
(This article belongs to the Special Issue Quantum Optimization and Machine Learning)
20 pages, 400 KiB  
Article
A Unified Formulation of Analytical and Numerical Methods for Solving Linear Fredholm Integral Equations
by Efthimios Providas
Algorithms 2021, 14(10), 293; https://0-doi-org.brum.beds.ac.uk/10.3390/a14100293 - 10 Oct 2021
Cited by 1 | Viewed by 2033
Abstract
This article is concerned with the construction of approximate analytic solutions to linear Fredholm integral equations of the second kind with general continuous kernels. A unified treatment of some classes of analytical and numerical classical methods, such as the Direct Computational Method (DCM), the Degenerate Kernel Methods (DKM), the Quadrature Methods (QM) and the Projection Methods (PM), is proposed. The problem is formulated as an abstract equation in a Banach space and a solution formula is derived. Then, several approximating schemes are discussed. In all cases, the method yields an explicit, albeit approximate, solution. Several examples are solved to illustrate the performance of the technique. Full article
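The quadrature route mentioned in the abstract (the Nystrom form of the QM) makes a compact worked example: discretize u(x) - lam * \int_0^1 k(x,t) u(t) dt = f(x) on quadrature nodes and solve the linear system (I - lam*K*W) u = f. The test problem below is an assumed one chosen because its kernel k(x,t) = x*t is degenerate, so the exact solution is known in closed form: with lam = 1 and f(x) = x, u(x) = (3/2) x.

```python
import numpy as np

lam = 1.0
n = 200
x, h = np.linspace(0.0, 1.0, n, retstep=True)

w = np.full(n, h)                 # trapezoid-rule weights
w[0] = w[-1] = h / 2

K = np.outer(x, x)                # k(x_i, t_j) = x_i * t_j
A = np.eye(n) - lam * K * w       # broadcasting scales column j by w_j
u = np.linalg.solve(A, x)         # right-hand side f(x) = x

err = np.max(np.abs(u - 1.5 * x)) # discretization error of the trapezoid rule
print("max error vs exact solution:", err)
```

The O(h^2) trapezoid error is the only error source here; the DCM/DKM routes in the paper instead exploit the degenerate kernel directly and return an explicit analytic expression.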
19 pages, 11469 KiB  
Article
Utilizing the Particle Swarm Optimization Algorithm for Determining Control Parameters for Civil Structures Subject to Seismic Excitation
by Courtney A. Peckens, Andrea Alsgaard, Camille Fogg, Mary C. Ngoma and Clara Voskuil
Algorithms 2021, 14(10), 292; https://0-doi-org.brum.beds.ac.uk/10.3390/a14100292 - 08 Oct 2021
Cited by 1 | Viewed by 1493
Abstract
Structural control of civil infrastructure in response to large external loads, such as earthquakes or wind, is not widely employed due to challenges regarding information exchange and the inherent latencies caused by complex computations in the control algorithm. This study employs front-end signal processing at the sensing node to alleviate computations at the control node, reducing the control law to a simple sum of weighted inputs: U = WP, where U is the control force, W is a pre-determined weight matrix, and P is a deconstructed representation of the response of the structure to the applied excitation. Determining the optimal weight matrix for this calculation is non-trivial, and this study uses the particle swarm optimization (PSO) algorithm with a modified homing feature to converge on a possible solution. To further streamline the control algorithm, various pruning techniques are combined with the PSO algorithm in order to optimize the number of entries in the weight matrix. These optimization techniques are applied in simulation to a five-story structure, and the success of the resulting control parameters is quantified by their ability to minimize information exchange while maintaining control effectiveness. It is found that a magnitude-based pruning method, when paired with the PSO algorithm, offers the most effective control for a structure subject to seismic base excitation. Full article
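The PSO-plus-magnitude-pruning pairing can be sketched as follows. PSO searches for a weight vector (a stand-in for the control weight matrix W in U = WP), and the smallest-magnitude entries are then zeroed to cut run-time information exchange. The quadratic objective, target weights, and pruning threshold are illustrative assumptions, not the paper's structural model:

```python
import random

random.seed(7)

TARGET = [3.0, 0.0, -2.0, 0.0, 1.0]    # assumed ideal weights; zeros are "prunable"

def cost(w):
    """Stand-in for the control-performance cost of a candidate W."""
    return sum((wi - ti) ** 2 for wi, ti in zip(w, TARGET))

def pso(n_particles=30, iters=200, inertia=0.7, c1=1.5, c2=1.5):
    dim = len(TARGET)
    xs = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(x) for x in xs]
    gbest = min(pbest, key=cost)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):   # standard velocity/position update
                vs[i][d] = (inertia * vs[i][d]
                            + c1 * random.random() * (pbest[i][d] - xs[i][d])
                            + c2 * random.random() * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            if cost(xs[i]) < cost(pbest[i]):
                pbest[i] = list(xs[i])
                if cost(xs[i]) < cost(gbest):
                    gbest = list(xs[i])
    return gbest

def prune(w, threshold=0.1):
    """Magnitude-based pruning: zero entries below the threshold."""
    return [0.0 if abs(wi) < threshold else wi for wi in w]

w = pso()
w_pruned = prune(w)
print(w_pruned, cost(w_pruned))
```

Each zeroed entry of W removes one sensor-to-controller exchange in U = WP, which is the information-reduction metric the study optimizes against control effectiveness.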
19 pages, 508 KiB  
Article
An Algorithm for Making Regime-Changing Markov Decisions
by Juri Hinz
Algorithms 2021, 14(10), 291; https://0-doi-org.brum.beds.ac.uk/10.3390/a14100291 - 04 Oct 2021
Viewed by 1541
Abstract
In industrial applications, optimal sequential decision making is naturally formulated and optimized within the standard setting of Markov decision theory. In practice, however, decisions must be made under incomplete and uncertain information about parameters and transition probabilities. This situation occurs when a system may suffer a regime switch that changes not only the transition probabilities but also the control costs. After such an event, the effect of the actions may turn into the opposite, meaning that all strategies must be revised. Due to the practical importance of this problem, a variety of methods has been suggested, ranging from incorporating regime switches into Markov dynamics to numerous concepts addressing model uncertainty. In this work, we suggest a pragmatic and practical approach: by using a natural re-formulation of this problem as a so-called convex switching system, we make efficient numerical algorithms applicable. Full article
(This article belongs to the Special Issue Machine Learning Applications in High Dimensional Stochastic Control)
17 pages, 25326 KiB  
Article
Fine-Grained Pests Recognition Based on Truncated Probability Fusion Network via Internet of Things in Forestry and Agricultural Scenes
by Kai Ma, Ming-Jun Nie, Sen Lin, Jianlei Kong, Cheng-Cai Yang and Jinhao Liu
Algorithms 2021, 14(10), 290; https://0-doi-org.brum.beds.ac.uk/10.3390/a14100290 - 30 Sep 2021
Cited by 1 | Viewed by 2216
Abstract
Accurate identification of insect pests is the key to improving crop yield and ensuring quality and safety. However, under the influence of environmental conditions, pests of the same kind show obvious differences in intraclass representation, while pests of different kinds show slight similarities. Traditional methods struggle with such fine-grained identification of pests and are difficult to deploy in practice. To solve this problem, this paper uses a variety of equipment terminals in the agricultural Internet of Things to obtain a large number of pest images and proposes a fine-grained pest identification model based on a probability fusion network, FPNT. This model designs a fine-grained feature extractor based on an optimized CSPNet backbone network, mining local feature expressions at different levels that can distinguish subtle differences. After the integration of the NetVLAD aggregation layer, the gated probability fusion layer fully exploits the complementary information and confidence coupling of multi-model fusion. Comparison tests show that the FPNT model achieves an average recognition accuracy of 93.18% across all kinds of pests, outperforming other deep-learning methods, with the average processing time dropping to 61 ms. It can therefore meet the needs of fine-grained pest image recognition in the agricultural and forestry Internet of Things and provide a technical reference for intelligent early warning and prevention of pests. Full article
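The confidence-coupled fusion idea can be sketched with a simple gate: each model's class-probability vector is weighted by its own confidence (its maximum probability) before averaging. This is an assumed gating function in the spirit of the gated probability fusion layer, not FPNT's exact formulation:

```python
import numpy as np

def gated_fusion(prob_list):
    """Fuse per-model class probabilities, weighting each model by confidence."""
    probs = np.asarray(prob_list)               # (n_models, n_classes)
    conf = probs.max(axis=1)                    # per-model confidence
    gates = conf / conf.sum()                   # normalized gate weights
    fused = gates @ probs                       # confidence-weighted average
    return fused / fused.sum()

# Two assumed backbone outputs for a 3-class pest image:
cnn_a = [0.70, 0.20, 0.10]    # confident about class 0
cnn_b = [0.40, 0.35, 0.25]    # uncertain
fused = gated_fusion([cnn_a, cnn_b])
print(fused, fused.argmax())
```

The confident model dominates the fused distribution, which is the "confidence coupling" advantage the abstract attributes to multi-model fusion.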
(This article belongs to the Special Issue Algorithms for Machine Learning and Pattern Recognition Tasks)
20 pages, 3909 KiB  
Article
Comparing Commit Messages and Source Code Metrics for the Prediction Refactoring Activities
by Priyadarshni Suresh Sagar, Eman Abdulah AlOmar, Mohamed Wiem Mkaouer, Ali Ouni and Christian D. Newman
Algorithms 2021, 14(10), 289; https://0-doi-org.brum.beds.ac.uk/10.3390/a14100289 - 30 Sep 2021
Cited by 10 | Viewed by 2618
Abstract
Understanding how developers refactor their code is critical to supporting the design improvement process of software. This paper investigates to what extent code metrics are good indicators for predicting refactoring activity in source code. To do so, we formulated the prediction of refactoring operation types as a multi-class classification problem. Our solution relies on measuring metrics extracted from committed code changes in order to extract the corresponding features (i.e., metric variations) that best represent each class (i.e., refactoring type), in order to automatically predict, for a given commit, the method-level type of refactoring being applied, namely Move Method, Rename Method, Extract Method, Inline Method, Pull-up Method, and Push-down Method. We compared various classifiers, in terms of their prediction performance, using a dataset of 5004 commits extracted from 800 Java projects. Our main findings show that the random forest model trained with code metrics achieved the best average accuracy, at 75%. However, we detected a variation in the results per class, which means that some refactoring types are harder to detect than others. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
27 pages, 9272 KiB  
Article
Machine Learning-Based Prediction of the Seismic Bearing Capacity of a Shallow Strip Footing over a Void in Heterogeneous Soils
by Mohammad Sadegh Es-haghi, Mohsen Abbaspour, Hamidreza Abbasianjahromi and Stefano Mariani
Algorithms 2021, 14(10), 288; https://0-doi-org.brum.beds.ac.uk/10.3390/a14100288 - 30 Sep 2021
Cited by 5 | Viewed by 2331
Abstract
The seismic bearing capacity of a shallow strip footing above a void displays a complex dependence on several characteristics linked to the geometry of the problem and to the soil properties, so setting analytical models to estimate such bearing capacity is extremely challenging. In this work, machine learning (ML) techniques have been employed to predict the seismic bearing capacity of a shallow strip footing located over a single unsupported rectangular void in heterogeneous soil. A dataset consisting of 38,000 finite element limit analysis simulations was created, and the mean value between the upper and lower bounds of the bearing capacity was computed at varying undrained shear strength and internal friction angle of the soil, horizontal earthquake acceleration, and position, shape, and size of the void. Three machine learning techniques were adopted to learn the link between the aforementioned parameters and the bearing capacity: multilayer perceptron neural networks; a group method of data handling; and a combined adaptive-network-based fuzzy inference system and particle swarm optimization. The performances of these ML techniques were compared in terms of the following statistical indices: coefficient of determination (R2), root mean square error (RMSE), mean absolute percentage error, scatter index, and standard bias. Results show that all the ML techniques perform well, though the multilayer perceptron has slightly superior accuracy, featuring noteworthy results (R2 = 0.9955 and RMSE = 0.0158). Full article
(This article belongs to the Special Issue 2021 Selected Papers from Algorithms Editorial Board Members)
22 pages, 3491 KiB  
Article
Closed-Loop Cognitive-Driven Gain Control of Competing Sounds Using Auditory Attention Decoding
by Ali Aroudi, Eghart Fischer, Maja Serman, Henning Puder and Simon Doclo
Algorithms 2021, 14(10), 287; https://0-doi-org.brum.beds.ac.uk/10.3390/a14100287 - 30 Sep 2021
Cited by 4 | Viewed by 2371
Abstract
Recent advances have shown that it is possible to identify the target speaker a listener is attending to using single-trial EEG-based auditory attention decoding (AAD). Most AAD methods have been investigated in an open-loop scenario, where AAD is performed offline without presenting online feedback to the listener. In this work, we aim at developing a closed-loop AAD system that can enhance a target speaker, suppress an interfering speaker, and switch attention between both speakers. To this end, we propose a cognitive-driven adaptive gain controller (AGC) based on real-time AAD. Using the EEG responses of the listener and the speech signals of both speakers, the real-time AAD generates probabilistic attention measures, based on which the attended and the unattended speaker are identified. The AGC then amplifies the identified attended speaker and attenuates the identified unattended speaker, both of which are presented to the listener via loudspeakers. We investigate the performance of the proposed system in terms of decoding performance and signal-to-interference ratio (SIR) improvement. The experimental results show that, although there is a significant delay in detecting attention switches, the proposed system is able to improve the SIR between the attended and the unattended speaker. In addition, no significant difference in decoding performance is observed between closed-loop and open-loop AAD. The subjective evaluation results show that the proposed closed-loop cognitive-driven system demands a level of cognitive effort similar to open-loop AAD for following the attended speaker, ignoring the unattended speaker, and switching attention between both speakers. Closed-loop AAD in an online fashion is thus feasible and enables the listener to interact with the AGC. Full article
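A minimal sketch of such an adaptive gain controller: smooth the per-window attention probability from the AAD stage and map it to gains for the two speakers. The smoothing factor and gain limits are illustrative assumptions, not the parameters used in the paper; the smoothing also reproduces, in miniature, the switch-detection delay the abstract reports:

```python
def agc(attention_probs, alpha=0.2, g_min=0.2, g_max=1.0):
    """attention_probs: sequence of P(attended = speaker 1), one per AAD window."""
    p_smooth, gains = 0.5, []
    for p in attention_probs:
        p_smooth = (1 - alpha) * p_smooth + alpha * p    # exponential smoothing
        g1 = g_min + (g_max - g_min) * p_smooth          # amplify attended
        g2 = g_min + (g_max - g_min) * (1 - p_smooth)    # attenuate unattended
        gains.append((round(g1, 3), round(g2, 3)))
    return gains

# Listener attends speaker 1, then switches to speaker 2 halfway through.
probs = [0.9] * 10 + [0.1] * 10
gains = agc(probs)
print(gains[9], gains[-1])
```

The gains cross over only several windows after the attention switch at index 10: lower `alpha` means steadier gains but a longer delay in tracking switches, which is the trade-off the closed-loop evaluation probes.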
(This article belongs to the Special Issue Machine Learning in Medical Signal and Image Processing)
19 pages, 2890 KiB  
Article
Enhanced Hyper-Cube Framework Ant Colony Optimization for Combinatorial Optimization Problems
by Ali Ahmid, Thien-My Dao and Ngan Van Le
Algorithms 2021, 14(10), 286; https://0-doi-org.brum.beds.ac.uk/10.3390/a14100286 - 29 Sep 2021
Cited by 2 | Viewed by 1652
Abstract
Solving combinatorial optimization problems is common practice in real-life engineering applications. Trusses, cranes, and composite laminated structures are good examples that fall under this category of optimization problems. These examples share a discrete design domain that turns them into NP-hard optimization problems, and determining the right optimization algorithm for them is a crucial decision that tends to impact the overall cost of the design process. Furthermore, reinforcing the performance of a prospective optimization algorithm reduces the design cost. In the current study, a comprehensive assessment criterion was developed to assess the performance of meta-heuristic (MH) solutions in the domain of structural design. Thereafter, the proposed criterion was employed to compare five different variants of Ant Colony Optimization (ACO) on a well-known structural optimization problem, laminate Stacking Sequence Design (SSD). The initial results of the comparison reveal that the Hyper-Cube Framework (HCF) ACO variant outperforms the others. Consequently, an investigation of further improvement led to an enhanced version of HCFACO (EHCFACO). Eventually, the performance assessment of the EHCFACO variant showed that the average practical reliability became more than twice that of the standard ACO, and the normalized price decreased from 51.17 to 28.92. Full article
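What distinguishes the Hyper-Cube Framework singled out above is its pheromone update: the trail is moved by a convex combination tau <- (1 - rho)*tau + rho*delta, with the reinforcement delta normalized over the solutions used in the update, so every trail value stays in [0, 1] regardless of the objective scale. A minimal sketch (the edge set and solution qualities are assumed toy values):

```python
def hcf_update(tau, solutions, rho=0.1):
    """HCF-style update. tau: dict edge -> trail; solutions: (edges, quality) pairs."""
    total = sum(q for _, q in solutions)
    delta = {}
    for edges, q in solutions:
        for e in edges:
            delta[e] = delta.get(e, 0.0) + q / total   # normalized, so delta in [0, 1]
    # Convex combination keeps every trail inside the unit hyper-cube.
    return {e: (1 - rho) * t + rho * delta.get(e, 0.0) for e, t in tau.items()}

tau = {(0, 1): 0.5, (1, 2): 0.5, (0, 2): 0.5}
sols = [([(0, 1), (1, 2)], 3.0), ([(0, 2)], 1.0)]
for _ in range(50):
    tau = hcf_update(tau, sols)
print(tau)
```

Repeated updates drive each trail toward its normalized reinforcement share, never leaving [0, 1]; standard ACO, by contrast, lets trail magnitudes drift with the raw objective values, which complicates parameter tuning across problems.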
(This article belongs to the Special Issue Metaheuristic Algorithms in Optimization and Applications 2021)
Show Figures

Figure 1
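For readers unfamiliar with how ACO searches a discrete design domain such as a ply stacking sequence, the two core operations are pheromone-weighted selection of a discrete option and an evaporate-then-deposit trail update. A minimal generic sketch (plain textbook ACO, not the paper's HCF or EHCFACO variants; all names are illustrative):

```python
import random

def aco_select(pheromone, heuristic, alpha=1.0, beta=1.0, rng=random):
    """Pick one discrete option (e.g., a ply angle) with probability
    proportional to pheromone^alpha * heuristic^beta."""
    weights = [(t ** alpha) * (h ** beta) for t, h in zip(pheromone, heuristic)]
    total = sum(weights)
    r = rng.random() * total
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1

def evaporate_and_deposit(pheromone, best_choice, rho=0.1, deposit=1.0):
    """Standard ACO trail update: evaporate all trails, reinforce the best choice."""
    for i in range(len(pheromone)):
        pheromone[i] *= (1.0 - rho)
    pheromone[best_choice] += deposit
    return pheromone
```

In an SSD setting, each ply angle would be chosen by one `aco_select` call over the allowed angle set, and `evaporate_and_deposit` would reinforce the angles of the best laminate found so far.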

25 pages, 1821 KiB  
Article
Efficient and Portable Distribution Modeling for Large-Scale Scientific Data Processing with Data-Parallel Primitives
by Hao-Yi Yang, Zhi-Rong Lin and Ko-Chih Wang
Algorithms 2021, 14(10), 285; https://0-doi-org.brum.beds.ac.uk/10.3390/a14100285 - 29 Sep 2021
Cited by 1 | Viewed by 1829
Abstract
The use of distribution-based data representations to handle large-scale scientific datasets is a promising approach. Distribution-based approaches often transform a scientific dataset into many distributions, each of which is calculated from a small number of samples. Most of the proposed parallel algorithms focus on modeling a single distribution from many input samples efficiently, but these may not fit the large-scale scientific data processing scenario because they cannot utilize computing resources effectively. Histograms and the Gaussian Mixture Model (GMM) are the most popular distribution representations used to model scientific datasets. Therefore, we propose multi-set histogram and GMM modeling algorithms for the scenario of large-scale scientific data processing. Our algorithms are developed with data-parallel primitives to achieve portability across different hardware architectures. We evaluate the performance of the proposed algorithms in detail and demonstrate use cases for scientific data processing. Full article
(This article belongs to the Special Issue New Advances in Securing Data and Big Data)
Show Figures

Figure 1
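The core idea above, representing a large field as many small per-block distributions, can be sketched serially as follows (illustrative names; the paper's algorithms express this with data-parallel primitives and also cover GMMs, which this sketch does not):

```python
def block_histograms(values, block_size, nbins, vmin, vmax):
    """Partition a flat scalar field into fixed-size blocks and model each
    block with its own small histogram (one distribution per block)."""
    width = (vmax - vmin) / nbins
    hists = []
    for start in range(0, len(values), block_size):
        counts = [0] * nbins
        for v in values[start:start + block_size]:
            b = min(int((v - vmin) / width), nbins - 1)  # clamp top edge
            counts[b] += 1
        hists.append(counts)
    return hists
```

Each block's histogram is independent of the others, which is exactly what makes the per-block modeling step amenable to data-parallel execution.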

16 pages, 42761 KiB  
Article
FPGA-Based Linear Detection Algorithm of an Underground Inspection Robot
by Chuanwei Zhang, Shirui Chen, Lu Zhao, Xianghe Li and Xiaowen Ma
Algorithms 2021, 14(10), 284; https://0-doi-org.brum.beds.ac.uk/10.3390/a14100284 - 29 Sep 2021
Cited by 3 | Viewed by 1638
Abstract
Conveyor belts are key pieces of equipment for bulk material transport, and ensuring their safe operation is of great significance. As belt conveyors develop toward long distances, large volumes, high speeds, and high reliability, the use of inspection robots to perform full inspections of belt conveyors has not only improved the efficiency and scope of inspections but has also eliminated the traditional method's dependence on a dense sensor arrangement. In this paper, relying on the wireless-power-supply orbital inspection robot independently developed by the laboratory, and aiming at the problem of conveyor belt deviation, we study methods for diagnosing belt deviation together with FPGA (field-programmable gate array) parallel computing technology. Based on the traditional LSD (line segment detection) algorithm, a straight-line extraction IP core suitable for an FPGA computing platform was constructed. This new hardware linear detection algorithm improves the real-time performance and flexibility of the belt conveyor diagnosis mechanism. Full article
Show Figures

Figure 1
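The first stage of the classic LSD algorithm computes a level-line field from 2x2 pixel neighbourhoods; a plain-Python sketch of that stage is given below for orientation. The paper's contribution is mapping such computations onto an FPGA IP core, which this sketch does not attempt:

```python
import math

def level_line_field(img):
    """Per-pixel gradient magnitude and level-line angle over a 2x2
    neighbourhood, the first stage of the LSD line-segment detector."""
    h, w = len(img), len(img[0])
    mag = [[0.0] * (w - 1) for _ in range(h - 1)]
    ang = [[0.0] * (w - 1) for _ in range(h - 1)]
    for y in range(h - 1):
        for x in range(w - 1):
            gx = (img[y][x + 1] + img[y + 1][x + 1] - img[y][x] - img[y + 1][x]) / 2.0
            gy = (img[y + 1][x] + img[y + 1][x + 1] - img[y][x] - img[y][x + 1]) / 2.0
            mag[y][x] = math.hypot(gx, gy)
            ang[y][x] = math.atan2(gx, -gy)  # level-line angle, orthogonal to the gradient
    return mag, ang
```

LSD then greedily grows regions of pixels sharing a similar level-line angle and fits a rectangle to each region; the per-pixel independence of this first stage is what makes it a natural fit for FPGA parallelism.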

19 pages, 5163 KiB  
Article
XGB4mcPred: Identification of DNA N4-Methylcytosine Sites in Multiple Species Based on an eXtreme Gradient Boosting Algorithm and DNA Sequence Information
by Xiao Wang, Xi Lin, Rong Wang, Kai-Qi Fan, Li-Jun Han and Zhao-Yuan Ding
Algorithms 2021, 14(10), 283; https://0-doi-org.brum.beds.ac.uk/10.3390/a14100283 - 29 Sep 2021
Cited by 1 | Viewed by 1702
Abstract
DNA N4-methylcytosine (4mC) plays an important role in numerous biological functions and is a mechanism of particular epigenetic importance. Therefore, accurate identification of 4mC sites in DNA sequences is necessary to understand their functional mechanism. Although some effective calculation tools have been proposed to identify DNA 4mC sites, it is still challenging to improve identification accuracy and generalization ability. Hence, this study proposes XGB4mcPred, a novel predictor for the identification of 4mC sites trained using an eXtreme Gradient Boosting algorithm (XGBoost) and DNA sequence information. Firstly, we applied One-Hot encoding to adjacent and spaced nucleotides, dinucleotides, and trinucleotides of the original 4mC site sequences to obtain feature vectors. Then, the importance values of the feature vectors pre-trained by the XGBoost algorithm were used as a threshold to filter redundant features, resulting in a significant improvement in the identification accuracy of the constructed XGB4mcPred predictor. The analysis shows a clear nucleotide-sequence preference between 4mC site and non-4mC site sequences in six datasets from multiple species, and the optimized features better distinguish 4mC sites from non-4mC sites. Experimental results of cross-validation and independent tests on six different species show that XGB4mcPred significantly outperformed other state-of-the-art predictors. Additionally, the user-friendly webserver developed for the XGB4mcPred predictor was made freely accessible. Full article
(This article belongs to the Special Issue Machine Learning in Bioinformatics)
Show Figures

Figure 1
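The One-Hot encoding step described above can be sketched as follows; `one_hot_kmers` is an illustrative name, and the sketch covers mono-, di-, and tri-nucleotides, adjacent or spaced, via the `k` and `gap` parameters (the actual XGB4mcPred feature pipeline may differ in detail):

```python
def one_hot_kmers(seq, k=1, gap=0):
    """One-hot encode adjacent (gap=0) or spaced (gap>0) k-mers of a DNA
    sequence into a flat 0/1 feature vector."""
    bases = "ACGT"
    alphabet = [""]
    for _ in range(k):
        alphabet = [p + b for p in alphabet for b in bases]
    index = {kmer: i for i, kmer in enumerate(alphabet)}
    step = gap + 1
    vec = []
    for i in range(0, len(seq) - (k - 1) * step):
        kmer = "".join(seq[i + j * step] for j in range(k))
        slot = [0] * len(alphabet)
        slot[index[kmer]] = 1
        vec.extend(slot)
    return vec
```

Each position contributes a block of 4^k indicator bits; concatenating the encodings for several (k, gap) settings yields the kind of feature vector that XGBoost's importance scores can then prune.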

18 pages, 7827 KiB  
Article
Simultaneous Feature Selection and Support Vector Machine Optimization Using an Enhanced Chimp Optimization Algorithm
by Di Wu, Wanying Zhang, Heming Jia and Xin Leng
Algorithms 2021, 14(10), 282; https://0-doi-org.brum.beds.ac.uk/10.3390/a14100282 - 28 Sep 2021
Cited by 13 | Viewed by 2147
Abstract
Chimp Optimization Algorithm (ChOA), a novel meta-heuristic algorithm, has been proposed in recent years. It divides the population into four different levels for the purpose of hunting. However, some defects still cause the algorithm to fall into local optima. To overcome these defects, an Enhanced Chimp Optimization Algorithm (EChOA) is developed in this paper. Highly Disruptive Polynomial Mutation (HDPM) is introduced to further explore the population space and increase population diversity. Then, Spearman's rank correlation coefficient between the chimps with the highest and lowest fitness is calculated. To avoid local optima, the chimps with low fitness values are given visual ability via the Beetle Antennae Search algorithm (BAS). Through these three strategies, the population's exploration and exploitation abilities are enhanced. On this basis, this paper proposes an EChOA-SVM model, which optimizes SVM parameters while selecting features, so that the maximum classification accuracy can be achieved with as few features as possible. To verify its effectiveness, the proposed method is compared with seven common methods, including the original algorithm, on seventeen benchmark datasets from the UCI machine learning repository, evaluated by accuracy, number of features, and fitness. Experimental results show that the classification accuracy of the proposed method is better than that of the other methods on most datasets, and the number of features it requires is also smaller. Full article
Show Figures

Figure 1
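Spearman's rank correlation coefficient, used above to compare the best- and worst-fitness chimps, is the Pearson correlation of ranks; in the tie-free case it reduces to a closed form. A minimal sketch:

```python
def spearman(x, y):
    """Spearman's rank correlation. Assumes no ties, so the classic
    1 - 6*sum(d^2) / (n*(n^2 - 1)) formula applies."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank + 1
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))
```

A value near -1 (opposite rankings of the two chimp groups) is the kind of signal that would trigger the corrective BAS step described above.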

17 pages, 713 KiB  
Article
Information Fusion-Based Deep Neural Attentive Matrix Factorization Recommendation
by Zhen Tian, Lamei Pan, Pu Yin and Rui Wang
Algorithms 2021, 14(10), 281; https://0-doi-org.brum.beds.ac.uk/10.3390/a14100281 - 28 Sep 2021
Cited by 4 | Viewed by 2174
Abstract
The emergence of recommendation systems has effectively alleviated the information overload problem. However, traditional recommendation systems either ignore the rich attribute information of users and items, such as users' socio-demographic features and items' content features, and thus face the sparsity problem, or adopt a fully connected network to concatenate the attribute information, ignoring the interactions between attributes. In this paper, we propose the information fusion-based deep neural attentive matrix factorization (IFDNAMF) recommendation model, which introduces attribute information and adopts the element-wise product between different information domains to learn cross-features when conducting information fusion. An attention mechanism is then utilized to distinguish the importance of different cross-features for the prediction results. Moreover, IFDNAMF adopts a deep neural network to learn the high-order interaction between users and items. We conduct extensive experiments on two datasets, MovieLens and Book-Crossing, and demonstrate the feasibility and effectiveness of the model. Full article
Show Figures

Graphical abstract
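The element-wise-product cross-features and attention weighting can be sketched as below; scoring each cross-feature by a simple sum is a stand-in for the learned attention network, and all names are illustrative:

```python
import math

def attentive_fusion(domains):
    """Build cross-features as element-wise products between attribute
    domains, then weight them with softmax attention scores."""
    crosses = []
    for i in range(len(domains)):
        for j in range(i + 1, len(domains)):
            crosses.append([a * b for a, b in zip(domains[i], domains[j])])
    # Score each cross-feature; a learned attention network would be used in practice.
    scores = [sum(c) for c in crosses]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    fused = [sum(w * c[k] for w, c in zip(weights, crosses))
             for k in range(len(crosses[0]))]
    return weights, fused
```

The fused vector would then feed the deep layers that model the high-order user-item interaction.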

11 pages, 257 KiB  
Article
A Brief Roadmap into Uncertain Knowledge Representation via Probabilistic Description Logics
by Rafael Peñaloza
Algorithms 2021, 14(10), 280; https://0-doi-org.brum.beds.ac.uk/10.3390/a14100280 - 28 Sep 2021
Cited by 2 | Viewed by 2074
Abstract
Logic-based knowledge representation is one of the main building blocks of (logic-based) artificial intelligence. While most successful knowledge representation languages are based on classical logic, realistic intelligent applications need to handle uncertainty in an adequate manner. Over the years, many different languages for representing uncertain knowledge—often extensions of classical knowledge representation languages—have been proposed. We briefly present some of the defining properties of these languages as they pertain to the family of probabilistic description logics. This limited view is intended to help pave the way for the interested researcher to find the most adequate language for their needs, and potentially identify the remaining gaps. Full article
(This article belongs to the Special Issue Logic-Based Artificial Intelligence)
7 pages, 276 KiB  
Communication
Short Communication: Optimally Solving the Unit-Demand Envy-Free Pricing Problem with Metric Substitutability in Cubic Time
by Marcos M. Salvatierra, Mario Salvatierra, Jr. and Juan G. Colonna
Algorithms 2021, 14(10), 279; https://0-doi-org.brum.beds.ac.uk/10.3390/a14100279 - 26 Sep 2021
Viewed by 1511
Abstract
In general, the unit-demand envy-free pricing problem has proven to be APX-hard, but some special cases can be optimally solved in polynomial time. When substitution costs that form a metric space are included, the problem can be solved in O(n⁴) time, and when the number of consumers equals the number of items (each with a single copy, so that every consumer buys one item), an O(n³) time method is available. This work shows that the first case has similarities with the second and, by exploiting the structural properties of the cost set, presents an O(n²) time algorithm for solving it when a competitive equilibrium is considered, or an O(n³) time algorithm for more general scenarios. The methods are based on a dynamic programming strategy that simplifies the calculation of shortest paths in a network; this simplification is usually adopted in the second case. The theoretical results provide efficiency in the search for optimal solutions to specific revenue management problems. Full article
(This article belongs to the Special Issue Algorithmic Game Theory 2021)
Show Figures

Figure 1
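The dynamic-programming shortcut the abstract alludes to, computing shortest paths with a single sweep when the network's node order is known, can be illustrated generically (this is a textbook DAG sketch, not the pricing-specific network of the paper):

```python
def dag_shortest_path(n, edges, source=0):
    """Shortest paths from `source` in a DAG whose nodes 0..n-1 are already
    in topological order: one dynamic-programming sweep suffices."""
    INF = float("inf")
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w))
    dist = [INF] * n
    dist[source] = 0.0
    for u in range(n):          # nodes visited in topological order
        if dist[u] == INF:
            continue
        for v, w in adj[u]:     # relax outgoing edges once
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist
```

Exploiting such structure is what drops the running time below that of a general shortest-path solver, mirroring how the paper's cost-set structure yields the O(n²) bound.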

22 pages, 6303 KiB  
Article
Ensembling EfficientNets for the Classification and Interpretation of Histopathology Images
by Athanasios Kallipolitis, Kyriakos Revelos and Ilias Maglogiannis
Algorithms 2021, 14(10), 278; https://0-doi-org.brum.beds.ac.uk/10.3390/a14100278 - 26 Sep 2021
Cited by 20 | Viewed by 2766
Abstract
The extended utilization of digitized Whole Slide Images is transforming the workflow of traditional clinical histopathology into the digital era. This ongoing transformation has demonstrated major potential for exploiting Machine Learning and Deep Learning techniques as assistive tools for specialized medical personnel. While the performance of the implemented algorithms is continually boosted by the mass production of Whole Slide Images and the development of state-of-the-art deep convolutional architectures, ensemble models provide an additional methodology for improving prediction accuracy. Despite the earlier belief that deep convolutional networks are black boxes, important steps toward the interpretation of such predictive models have also been proposed recently. However, this trend has not been fully explored for ensemble models. This paper investigates the application of an explanation scheme for ensemble classifiers while providing satisfactory classification results on histopathology breast and colon cancer images in terms of accuracy. The results can be interpreted through the hidden layers' activations of the included subnetworks and are more accurate than those of single-network implementations. Full article
(This article belongs to the Special Issue Ensemble Algorithms and/or Explainability)
Show Figures

Figure 1
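A minimal sketch of the soft-voting step that combines several subnetworks' class probabilities (the paper's EfficientNet ensemble and its explanation scheme are more involved; names here are illustrative):

```python
def soft_vote(prob_lists):
    """Average per-class probabilities from several subnetworks and return
    (predicted class index, averaged probabilities): simple soft voting."""
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    avg = [sum(p[c] for p in prob_lists) / n_models for c in range(n_classes)]
    return max(range(n_classes), key=lambda c: avg[c]), avg
```

Because the ensemble decision is a plain average, attribution methods applied to each subnetwork's hidden-layer activations can be combined with the same weights, which is the opening the paper's explanation scheme exploits.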

24 pages, 1776 KiB  
Article
Rough Estimator Based Asynchronous Distributed Super Points Detection on High Speed Network Edge
by Jie Xu and Wei Ding
Algorithms 2021, 14(10), 277; https://0-doi-org.brum.beds.ac.uk/10.3390/a14100277 - 25 Sep 2021
Viewed by 1577
Abstract
Super points detection plays an important role in network research and applications. With the increase of network scale, distributed super points detection has become a hot research topic. The key issue in detecting super points in a multi-node distributed environment is how to reduce communication overhead. Therefore, this paper proposes a three-stage communication algorithm to detect super points in a distributed environment: the Rough Estimator based Asynchronous Distributed super points detection algorithm (READ). READ uses a lightweight estimator, the Rough Estimator (RE), which is fast to compute and requires little memory, to generate candidate super points. Meanwhile, the well-known Linear Estimator (LE) is applied to accurately estimate the cardinality of each candidate super point, so as to detect super points correctly. In READ, each node scans IP address pairs asynchronously; on reaching the time window boundary, READ starts the three-stage communication to detect super points. This paper proves that the accuracy of READ in a distributed environment is no lower than in a single-node environment. Four groups of 10 Gb/s and 40 Gb/s real-world high-speed network traffic are used to test READ. The experimental results show that READ not only achieves high accuracy in a distributed environment, but also incurs less than 5% of the communication burden of existing algorithms. Full article
(This article belongs to the Collection Parallel and Distributed Computing: Algorithms and Applications)
Show Figures

Figure 1
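The Linear Estimator mentioned above is the classic linear-counting estimator: hash each distinct item to one bit of an m-bit map and estimate cardinality from the fraction of bits still zero. A generic single-node sketch (not READ's distributed three-stage protocol; `m=1024` is an arbitrary illustrative size):

```python
import math

def linear_estimate(bitmap):
    """Linear counting: with m bits and z still zero after hashing each
    distinct item to one bit, cardinality ~= -m * ln(z / m)."""
    m = len(bitmap)
    z = bitmap.count(0)
    if z == 0:
        return float("inf")  # bitmap saturated; estimator out of range
    return -m * math.log(z / m)

def count_distinct(items, m=1024):
    bitmap = [0] * m
    for x in items:
        bitmap[hash(x) % m] = 1
    return linear_estimate(bitmap)
```

The lightweight RE plays the role of a cheap pre-filter so that a full LE bitmap only has to be maintained, and communicated, for the candidate super points.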

22 pages, 471 KiB  
Article
Algorithms for Optimal Power Flow Extended to Controllable Renewable Systems and Loads
by Elkin D. Reyes and Sergio Rivera
Algorithms 2021, 14(10), 276; https://0-doi-org.brum.beds.ac.uk/10.3390/a14100276 - 25 Sep 2021
Cited by 1 | Viewed by 1731
Abstract
In an effort to quantify and manage uncertainty inside power systems with renewable energy penetration, uncertainty costs have been defined and uncertainty cost functions have been calculated for different types of generators and electric vehicles. This article uses the uncertainty cost formulation to propose algorithms for the problem of optimal power flow extended to controllable renewable systems and controllable loads. In a previous study, the first and second derivatives of the uncertainty cost functions were calculated; here, analytical and heuristic optimal power flow algorithms are applied. To corroborate the analytical solution, the optimal power flow was also solved by means of metaheuristic algorithms. Finally, it was found that analytical algorithms perform much better than metaheuristic methods, especially as the number of decision variables in the optimization problem grows. Full article
(This article belongs to the Special Issue Algorithms in Planning and Operation of Power Systems)
Show Figures

Figure 1
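As one illustration of why analytical methods scale well, the textbook equal-incremental-cost dispatch for quadratic generator costs reduces the whole optimization to a one-dimensional search on the marginal cost. This sketch ignores network constraints, generator limits, and the paper's uncertainty cost terms, so it is a simplified stand-in for the analytical approach, not the paper's formulation:

```python
def dispatch_equal_lambda(costs, demand, tol=1e-9):
    """Economic dispatch for units with cost a + b*P + c*P^2: bisect on the
    incremental cost lam until the optimal outputs P_i = (lam - b_i)/(2*c_i)
    sum to the demand (optimality: equal marginal costs across units)."""
    lo, hi = 0.0, 1e6
    while hi - lo > tol:
        lam = (lo + hi) / 2.0
        if sum((lam - b) / (2.0 * c) for _a, b, c in costs) < demand:
            lo = lam
        else:
            hi = lam
    lam = (lo + hi) / 2.0
    return [(lam - b) / (2.0 * c) for _a, b, c in costs]
```

Each evaluation is O(number of units), so the cost of the analytical search grows mildly with problem size, in contrast to population-based metaheuristics.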

22 pages, 838 KiB  
Review
A Review of Parallel Heterogeneous Computing Algorithms in Power Systems
by Diego Rodriguez, Diego Gomez, David Alvarez and Sergio Rivera
Algorithms 2021, 14(10), 275; https://0-doi-org.brum.beds.ac.uk/10.3390/a14100275 - 23 Sep 2021
Cited by 7 | Viewed by 3186
Abstract
The power system expansion and the integration of technologies such as renewable generation, distributed generation, high-voltage direct current, and energy storage have made power system simulation challenging in multiple applications. The current computing platforms employed for the planning, operation, study, visualization, and analysis of power systems are reaching their operational limits, since the complexity and size of modern power systems result in long simulation times and high computational demand. Time reductions in simulation and analysis lead to better and further optimized performance of power systems. Heterogeneous computing, where different processing units interact, has shown that power system applications can take advantage of the unique strengths of each type of processing unit, such as central processing units, graphics processing units, and field-programmable gate arrays, interacting in on-premise or cloud environments. Parallel Heterogeneous Computing appears as an alternative for reducing simulation times by optimizing multitask execution in parallel computing architectures with different processing units working together. This paper presents a review of Parallel Heterogeneous Computing techniques, how they have been applied in a wide variety of power system applications, how they help reduce the computational time of modern power system simulation and analysis, and the current tendency in each application. We present a wide variety of approaches classified by technique and application. Full article
(This article belongs to the Special Issue Algorithms in Planning and Operation of Power Systems)
Show Figures

Figure 1
