Algorithms, Volume 14, Issue 11 (November 2021) – 38 articles

Cover Story: The numerical solution of advection–diffusion–reaction equations whose solutions exhibit abrupt changes or strong gradients is handled here by combining classical error estimator techniques with an unsupervised anomaly detection algorithm. The work presents a numerical study that highlights the promising results obtained by bridging standard techniques with approaches typical of machine learning and artificial intelligence.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF formats, with PDF as the official version. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
20 pages, 491 KiB  
Article
An Empirical Study of Cluster-Based MOEA/D Bare Bones PSO for Data Clustering
by Daphne Teck Ching Lai and Yuji Sato
Algorithms 2021, 14(11), 338; https://0-doi-org.brum.beds.ac.uk/10.3390/a14110338 - 22 Nov 2021
Cited by 3 | Viewed by 2762
Abstract
Previously, cluster-based multi- or many-objective techniques were proposed to reduce the Pareto set. Recently, researchers proposed such techniques to find better solutions in the objective space to solve engineering problems. In this work, we applied a cluster-based approach for solution selection in a multiobjective evolutionary algorithm based on decomposition with bare bones particle swarm optimization (MOEA/D with BBPSO) for data clustering and investigated its clustering performance. In our previous work, we found that MOEA/D with BBPSO performed the best on 10 datasets. Here, we extend that work by applying a cluster-based approach, tested on 13 UCI datasets. We compared it with six multiobjective evolutionary clustering algorithms from the existing literature and ten from our previous work. The proposed technique was found to perform well on datasets with highly overlapping clusters, such as CMC and Sonar. So far, we have found only one work that used a cluster-based MOEA for clustering data, the hierarchical topology multiobjective clustering algorithm; all other cluster-based MOEAs found were used to solve problems other than data clustering. By clustering Pareto solutions and evaluating new candidates against the found cluster representatives, local search is introduced in the solution selection process within the objective space, which can be effective on datasets with highly overlapping clusters. This is an added layer of search control in the objective space. The results are found to be promising, prompting different areas of future research which are discussed, including the study of its effects with an increasing number of clusters as well as with other objective functions. Full article
(This article belongs to the Special Issue Nature-Inspired Algorithms in Machine Learning)

34 pages, 3117 KiB  
Article
An Interaction-Based Convolutional Neural Network (ICNN) Toward a Better Understanding of COVID-19 X-ray Images
by Shaw-Hwa Lo and Yiqiao Yin
Algorithms 2021, 14(11), 337; https://0-doi-org.brum.beds.ac.uk/10.3390/a14110337 - 19 Nov 2021
Cited by 4 | Viewed by 2853
Abstract
The field of explainable artificial intelligence (XAI) aims to build explainable and interpretable machine learning (or deep learning) methods without sacrificing prediction performance. Convolutional neural networks (CNNs) have been successful in making predictions, especially in image classification. These popular and well-documented successes use extremely deep CNNs such as VGG16, DenseNet121, and Xception. However, these well-known deep learning models use tens of millions of parameters based on a large number of pretrained filters that have been repurposed from previous data sets. Among these identified filters, a large portion contains no information yet remains as input features. Thus far, there is no effective method to omit these noisy features from a data set, and their existence negatively impacts prediction performance. In this paper, a novel interaction-based convolutional neural network (ICNN) is introduced that does not make assumptions about the relevance of local information. Instead, a model-free influence score (I-score) is proposed to directly extract the influential information from images to form important variable modules. This innovative technique replaces all pretrained filters found by trial-and-error with explainable, influential, and predictive variable sets (modules) determined by the I-score. In other words, future researchers need not rely on pretrained filters; the suggested algorithm identifies only the variables or pixels with high I-score values that are extremely predictive and important. The proposed method and algorithm were tested on a real-world data set, and a state-of-the-art prediction performance of 99.8% was achieved without sacrificing the explanatory power of the model. This proposed design can efficiently screen patients infected by COVID-19 before human diagnosis and can be a benchmark for addressing future XAI problems in large-scale data sets. Full article
(This article belongs to the Special Issue Interpretability, Accountability and Robustness in Machine Learning)

18 pages, 814 KiB  
Article
Decomposition of Random Sequences into Mixtures of Simpler Ones and Its Application in Network Analysis
by András Faragó
Algorithms 2021, 14(11), 336; https://0-doi-org.brum.beds.ac.uk/10.3390/a14110336 - 19 Nov 2021
Cited by 1 | Viewed by 2033
Abstract
A classic and fundamental result about the decomposition of random sequences into a mixture of simpler ones is de Finetti’s Theorem. In its original form, it applies to infinite 0–1 valued sequences with the special property that the distribution is invariant to permutations (called an exchangeable sequence). Later it was extended and generalized in numerous directions. After reviewing this line of development, we present our new decomposition theorem, covering cases that have not been previously considered. We also introduce a novel way of applying these types of results in the analysis of random networks. For self-containment, we provide the introductory exposition in more detail than usual, with the intent of making it also accessible to readers who may not be closely familiar with the subject. Full article
(This article belongs to the Special Issue Algorithms for Communication Networks)
20 pages, 1968 KiB  
Article
A Context-Aware Neural Embedding for Function-Level Vulnerability Detection
by Hongwei Wei, Guanjun Lin, Lin Li and Heming Jia
Algorithms 2021, 14(11), 335; https://0-doi-org.brum.beds.ac.uk/10.3390/a14110335 - 17 Nov 2021
Cited by 8 | Viewed by 3817
Abstract
Exploitable vulnerabilities in software systems are major security concerns. To date, machine learning (ML) based solutions have been proposed to automate and accelerate the detection of vulnerabilities. Most ML techniques aim to isolate a unit of source code, be it a line or a function, as being vulnerable. We argue that a code segment is vulnerable if it exists in certain semantic contexts, such as the control flow and data flow; therefore, it is important for the detection to be context aware. In this paper, we evaluate the performance of mainstream word embedding techniques in the scenario of software vulnerability detection. Based on the evaluation, we propose a supervised framework leveraging pre-trained context-aware embeddings from language models (ELMo) to capture deep contextual representations, further summarized by a bidirectional long short-term memory (Bi-LSTM) layer for learning long-range code dependency. The framework directly takes a source code function as input and produces the corresponding function embeddings, which can be treated as feature sets for conventional ML classifiers. Experimental results showed that the proposed framework yielded the best performance in its downstream detection tasks. Using the feature representations generated by our framework, random forest and support vector machine outperformed four baseline systems on our data sets, demonstrating that the framework incorporated with ELMo can effectively capture the vulnerable data flow patterns and facilitate the vulnerability detection task. Full article
(This article belongs to the Special Issue Interpretability, Accountability and Robustness in Machine Learning)

14 pages, 1186 KiB  
Article
Is One Teacher Model Enough to Transfer Knowledge to a Student Model?
by Nicola Landro, Ignazio Gallo and Riccardo La Grassa
Algorithms 2021, 14(11), 334; https://0-doi-org.brum.beds.ac.uk/10.3390/a14110334 - 15 Nov 2021
Cited by 1 | Viewed by 1515
Abstract
Nowadays, the transfer learning technique can be successfully applied in the deep learning field through approaches that start from a CNN pretrained on a huge dataset, such as ImageNet, and continue learning on a specific dataset to achieve better performance. In this paper, we designed a transfer learning methodology that transfers the learned features of different teachers to a student network in an end-to-end model, improving the performance of the student network in classification tasks over different datasets. In addition, we tried to answer the following questions, which are directly related to the transfer learning problem addressed here. Is it possible to improve the performance of a small neural network by using the knowledge gained from a more powerful neural network? Can a deep neural network outperform its teacher by using transfer learning? Experimental results suggest that neural networks can transfer their learning to student networks using our proposed architecture, designed to bring to light a new and interesting approach to transfer learning techniques. Finally, we provide details of the code and the experimental settings. Full article
(This article belongs to the Special Issue Machine Learning in Image and Video Processing)
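To make the multi-teacher idea above concrete, here is a minimal, hypothetical sketch (not the authors' end-to-end architecture): the softened predictions of several teacher networks are averaged into a single distillation target for the student, and blended with the usual hard-label loss. The temperature, weighting and random logits are all assumptions.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax along the last axis."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def multi_teacher_distillation_loss(student_logits, teacher_logits_list,
                                    labels, T=4.0, alpha=0.5):
    """Cross-entropy on hard labels blended with KL divergence to the
    averaged soft targets of several teachers (a common KD formulation)."""
    soft_targets = np.mean([softmax(t, T) for t in teacher_logits_list], axis=0)
    student_soft = softmax(student_logits, T)
    student_hard = softmax(student_logits)
    n = len(labels)
    ce_hard = -np.log(student_hard[np.arange(n), labels] + 1e-12).mean()
    kl_soft = np.sum(soft_targets * (np.log(soft_targets + 1e-12)
                                     - np.log(student_soft + 1e-12)), axis=-1).mean()
    return alpha * ce_hard + (1.0 - alpha) * (T ** 2) * kl_soft

# toy usage with random logits for two teachers and a student
rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=32)
loss = multi_teacher_distillation_loss(rng.normal(size=(32, 10)),
                                       [rng.normal(size=(32, 10)) for _ in range(2)],
                                       labels)
print(f"toy distillation loss: {loss:.3f}")
```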

21 pages, 4019 KiB  
Article
Parallel Implementation of the Algorithm to Compute Forest Fire Impact on Infrastructure Facilities of JSC Russian Railways
by Nikolay Viktorovich Baranovskiy, Aleksey Podorovskiy and Aleksey Malinin
Algorithms 2021, 14(11), 333; https://0-doi-org.brum.beds.ac.uk/10.3390/a14110333 - 15 Nov 2021
Cited by 3 | Viewed by 2153
Abstract
Forest fires have a negative impact on the economy in a number of regions, especially in Wildland Urban Interface (WUI) areas. An important link in the fight against fires in WUI areas is the development of information and computer systems for predicting the fire safety of infrastructural facilities of Russian Railways. In this work, a numerical study of heat transfer processes in the enclosing structure of a wooden building near the forest fire front was carried out using the technology of parallel computing. The novelty of the development lies in its original program code, which is planned to be put into operation either in the Information System for Remote Monitoring of Forest Fires (ISDM-Rosleskhoz) or in the information and computing system of JSC Russian Railways. In the Russian Federation, it is forbidden to use foreign systems in the security services of industrial facilities. The implementation of the deterministic model of heat transfer in the enclosing structure with an algorithmic complexity of O(2N² + 2K) is presented. The program is implemented in Python 3.x using the NumPy and Concurrent libraries. Calculations were carried out on a multiprocessor cluster at the Sirius University of Science and Technology. The results of the calculations and the acceleration coefficient for operating modes with 1, 2, 4, 8, 16, 32, 48 and 64 processes are presented. The developed algorithm can be applied to assess the fire safety of infrastructure facilities of Russian Railways. The main merit of the new development is its ability to handle large computational domains with a large number of computational grid nodes in space and time. The use of caching intermediate data in files made it possible to distribute a large number of computational nodes among the processors of a computing multiprocessor system. However, one should also note a drawback: a decrease in the acceleration of computational operations with a large number of involved nodes of a multiprocessor computing system, which is explained by the write and read cycles of the cache files. Full article
(This article belongs to the Collection Parallel and Distributed Computing: Algorithms and Applications)
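The authors' code is not public here, but the general idea of the abstract (Python, NumPy, process-based concurrency, spatial domain split across workers) can be sketched as below: one explicit finite-difference step of 1D heat conduction, chunked over a ProcessPoolExecutor. Grid sizes, material parameters and the splitting strategy are assumptions, not the paper's values.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

ALPHA, DX, DT = 1e-4, 1e-3, 1e-3   # assumed thermal diffusivity and grid steps

def step_chunk(args):
    """Explicit update u_i + alpha*dt/dx^2 * (u_{i-1} - 2*u_i + u_{i+1})
    for one interior chunk, passed together with its two ghost values."""
    left_ghost, chunk, right_ghost = args
    padded = np.concatenate(([left_ghost], chunk, [right_ghost]))
    lap = padded[:-2] - 2.0 * padded[1:-1] + padded[2:]
    return chunk + ALPHA * DT / DX**2 * lap

def parallel_step(u, pool, n_workers=4):
    """One time step of the explicit scheme with the interior split into chunks."""
    interior = np.array_split(np.arange(1, u.size - 1), n_workers)
    tasks = [(u[idx[0] - 1], u[idx], u[idx[-1] + 1]) for idx in interior]
    new_chunks = list(pool.map(step_chunk, tasks))
    u_new = u.copy()                      # boundary values kept fixed (Dirichlet)
    u_new[1:-1] = np.concatenate(new_chunks)
    return u_new

if __name__ == "__main__":
    u = np.full(10_000, 300.0)            # initial temperature field, K
    u[0], u[-1] = 300.0, 1200.0           # fire-side boundary value (assumed)
    with ProcessPoolExecutor(max_workers=4) as pool:
        for _ in range(100):
            u = parallel_step(u, pool)
    print(u[-5:])
```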

13 pages, 3573 KiB  
Article
Application of Mini-Batch Metaheuristic Algorithms in Problems of Optimization of Deterministic Systems with Incomplete Information about the State Vector
by Andrei V. Panteleev and Aleksandr V. Lobanov
Algorithms 2021, 14(11), 332; https://0-doi-org.brum.beds.ac.uk/10.3390/a14110332 - 14 Nov 2021
Cited by 5 | Viewed by 1534
Abstract
In this paper, we consider the application of the zero-order mini-batch optimization method to the problem of finding the optimal control of a pencil of trajectories of nonlinear deterministic systems in the case of incomplete information about the state vector. The pencil of trajectories originates from a given set of initial states. To solve the problem, the structure of a feedback system is proposed, which contains models of the plant, the measuring system, a nonlinear state observer and a control law of fixed structure with unknown coefficients. The proposed objective function considers the quality of control of the pencil of trajectories, estimated by the average value of the Bolza functional over the given set of initial states. The unknown control laws of the plant and the observer are found in the form of expansions in terms of orthonormal systems of basis functions, which are specified on the set of possible states of the dynamical system. The original pencil-of-trajectories control problem is reduced to a global optimization problem, which is solved using a well-proven zero-order method that uses a modified mini-batch approach in a random search procedure with adaptation. An algorithm for solving the problem is proposed. The satellite stabilization problem with incomplete information is solved. Full article
(This article belongs to the Special Issue Metaheuristic Algorithms and Applications)
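As a highly simplified, hypothetical sketch of the kind of zero-order random search with adaptation and mini-batches of initial states described above (not the authors' algorithm or problem), the snippet below tunes a single feedback gain for a toy scalar system by averaging a Bolza-type cost over a random subset of initial conditions at each iteration.

```python
import numpy as np

rng = np.random.default_rng(1)
x0_set = rng.uniform(-2.0, 2.0, size=100)       # given set of initial states

def cost(theta, x0_batch, T=50, dt=0.05):
    """Average quadratic (Bolza-type) cost of the control u = -theta*x
    over a mini-batch of initial states of the toy system x' = x + u."""
    total = 0.0
    for x0 in x0_batch:
        x, J = x0, 0.0
        for _ in range(T):
            u = -theta * x
            J += (x**2 + 0.1 * u**2) * dt        # running cost
            x += (x + u) * dt                    # Euler step of x' = x + u
        total += J + x**2                        # terminal cost
    return total / len(x0_batch)

theta, sigma = 0.0, 1.0                          # initial gain and search radius
for it in range(200):
    batch = rng.choice(x0_set, size=16, replace=False)
    cand = theta + sigma * rng.normal()          # zero-order (derivative-free) trial
    if cost(cand, batch) < cost(theta, batch):   # compare on the same mini-batch
        theta, sigma = cand, sigma * 1.1         # success: expand the search radius
    else:
        sigma *= 0.95                            # failure: shrink it (adaptation)
print(f"tuned gain: {theta:.2f}, full-set cost: {cost(theta, x0_set):.3f}")
```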

13 pages, 843 KiB  
Article
A Mathematical Model of Universal Basic Income and Its Numerical Simulations
by Maria Letizia Bertotti
Algorithms 2021, 14(11), 331; https://0-doi-org.brum.beds.ac.uk/10.3390/a14110331 - 11 Nov 2021
Viewed by 2133
Abstract
In this paper, an elementary mathematical model describing the introduction of a universal basic income in a closed market society is constructed. The model is formulated in terms of a system of nonlinear ordinary differential equations, each of which describes how the number of individuals in a certain income class changes over time. Societies ruled by different fiscal systems (with no taxes, with taxation and redistribution, with a welfare system) are considered, and the effect of the presence of a basic income in the various cases is analysed by means of numerical simulations. The main finding is that basic income effectively acts as a tool of poverty alleviation: indeed, in its presence the portion of individuals in the poorest classes and economic inequality diminish. Of course, the issue of a universal basic income in the real world is more complex and involves a variety of aspects. The goal here is simply to show how mathematical models can help in forecasting scenarios resulting from one policy or another. Full article
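The paper's model is not reproduced here, but a toy sketch of the same kind of construction may help: a small ODE system in which each state variable is the population of an income class, with invented mobility rates and a uniform basic income funded by a flat tax. All coefficients below are illustrative assumptions, not the paper's.

```python
import numpy as np
from scipy.integrate import solve_ivp

UP, DOWN, TAX = 0.05, 0.08, 0.10              # assumed mobility rates and flat tax
INCOME = np.array([1.0, 2.0, 4.0, 8.0])       # representative income of each class

def rhs(t, n):
    """Toy income-class dynamics: upward/downward mobility between adjacent
    classes, with a per-capita basic income (funded by a flat tax on all
    classes) boosting upward mobility."""
    revenue = TAX * np.dot(INCOME, n)          # total tax collected
    basic_income = revenue / n.sum()           # equal per-capita transfer
    up = UP * (1.0 + basic_income) * n         # transfer boosts upward mobility
    down = DOWN * n
    dn = np.zeros_like(n)
    dn[:-1] += down[1:] - up[:-1]              # flows leaving/entering class i
    dn[1:] += up[:-1] - down[1:]               # flows leaving/entering class i+1
    return dn

n0 = np.array([0.4, 0.3, 0.2, 0.1])            # initial class shares
sol = solve_ivp(rhs, (0.0, 200.0), n0)
print("final class shares:", np.round(sol.y[:, -1] / sol.y[:, -1].sum(), 3))
```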

13 pages, 3046 KiB  
Article
Autoencoder-Based Reduced Order Observer Design for a Class of Diffusion-Convection-Reaction Systems
by Alexander Schaum
Algorithms 2021, 14(11), 330; https://0-doi-org.brum.beds.ac.uk/10.3390/a14110330 - 11 Nov 2021
Cited by 1 | Viewed by 1515
Abstract
The application of autoencoders in combination with Dynamic Mode Decomposition for control (DMDc) and reduced order observer design, as well as Kalman Filter design, is discussed for low order state reconstruction of a class of scalar linear diffusion-convection-reaction systems. The general idea and conceptual approaches are developed following recent results on machine-learning based identification of the Koopman operator using autoencoders and DMDc for finite-dimensional discrete-time system identification. The resulting linear reduced order model is combined with a classical Kalman Filter for state reconstruction with minimum error covariance, as well as with a reduced order observer with very low computational and memory demands. The performance of the two schemes is evaluated and compared in terms of the approximated L2 error norm in a numerical simulation study. It turns out that, for the evaluated case study, the reduced-order scheme achieves comparable performance with a significantly lower computational load. Full article
(This article belongs to the Special Issue Computer Science and Intelligent Control)

19 pages, 1085 KiB  
Article
Zero-Crossing Point Detection of Sinusoidal Signal in Presence of Noise and Harmonics Using Deep Neural Networks
by Venkataramana Veeramsetty, Bhavana Reddy Edudodla and Surender Reddy Salkuti
Algorithms 2021, 14(11), 329; https://0-doi-org.brum.beds.ac.uk/10.3390/a14110329 - 08 Nov 2021
Cited by 7 | Viewed by 3391
Abstract
Zero-crossing point detection is necessary to establish a consistent performance in various power system applications, such as grid synchronization, power conversion and switch-gear protection. In this paper, zero-crossing points of a sinusoidal signal are detected using deep neural networks. In order to train and evaluate the deep neural network model, new datasets for sinusoidal signals with noise levels from 5% to 50% and harmonic distortion from 10% to 50% are developed. This complete study is implemented in Google Colab using the deep learning framework Keras. Results show that the proposed deep learning model is able to detect zero-crossing points in a distorted sinusoidal signal with good accuracy. Full article
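A minimal sketch in the spirit of this study (Keras on synthetic noisy sinusoids), with all dataset-generation and architecture details assumed rather than taken from the paper: short signal windows are labelled by whether the underlying clean signal crosses zero inside the window, and a small dense network is trained to detect this.

```python
import numpy as np
from tensorflow import keras

def make_dataset(n_samples=4000, window=32, noise=0.2, rng=None):
    """Windows of a noisy 50 Hz sinusoid, labelled 1 if the clean signal
    crosses zero inside the window (assumed 10 kHz sampling)."""
    rng = rng or np.random.default_rng(0)
    t = np.arange(window) / 10_000.0
    X, y = [], []
    for _ in range(n_samples):
        phase = rng.uniform(0, 2 * np.pi)
        clean = np.sin(2 * np.pi * 50 * t + phase)
        X.append(clean + noise * rng.normal(size=window))
        y.append(int(np.any(np.diff(np.sign(clean)) != 0)))
    return np.array(X), np.array(y)

X, y = make_dataset()
model = keras.Sequential([
    keras.layers.Input(shape=(X.shape[1],)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),        # crossing / no crossing
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
hist = model.fit(X, y, epochs=5, batch_size=64, validation_split=0.2, verbose=0)
print("validation accuracy:", round(hist.history["val_accuracy"][-1], 3))
```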

20 pages, 2934 KiB  
Article
Adaptive Refinement in Advection–Diffusion Problems by Anomaly Detection: A Numerical Study
by Antonella Falini and Maria Lucia Sampoli
Algorithms 2021, 14(11), 328; https://0-doi-org.brum.beds.ac.uk/10.3390/a14110328 - 07 Nov 2021
Viewed by 1864
Abstract
We consider advection–diffusion–reaction problems, where the advective or the reactive term is dominating with respect to the diffusive term. The solutions of these problems are characterized by so-called layers, which represent localized regions where the gradients of the solutions are rather large or are subjected to abrupt changes. In order to improve the accuracy of the computed solution, it is fundamental to locally increase the number of degrees of freedom while limiting the computational costs. Thus, adaptive refinement, driven by a posteriori error estimators, is employed. The error estimators are then processed by an anomaly detection algorithm in order to identify those regions of the computational domain that should be marked and, hence, refined. The anomaly detection task is performed in an unsupervised fashion and the proposed strategy is tested on typical benchmarks. The present work shows a numerical study that highlights promising results obtained by bridging together standard techniques, i.e., the error estimators, and approaches typical of machine learning and artificial intelligence, such as the anomaly detection task. Full article
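The marking step described above can be pictured with the schematic sketch below, under assumptions not in the paper: the per-element error indicators here are synthetic, and scikit-learn's IsolationForest stands in for whatever unsupervised anomaly detector is used. Elements whose indicator is flagged as anomalous are the ones marked for refinement.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# stand-in a posteriori error indicators, one per element: a smooth background
# plus a few elements near a sharp layer carrying much larger estimated error
eta = np.abs(rng.normal(0.01, 0.005, size=500))
eta[240:260] += rng.uniform(0.2, 0.5, size=20)

detector = IsolationForest(contamination="auto", random_state=0)
flags = detector.fit_predict(eta.reshape(-1, 1))    # -1 marks anomalies

marked = np.flatnonzero(flags == -1)                # elements to refine
print(f"marked {marked.size} of {eta.size} elements for refinement")
print("first marked indices:", marked[:10])
```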

23 pages, 1198 KiB  
Article
Hybrid Multiagent Collaboration for Time-Critical Tasks: A Mathematical Model and Heuristic Approach
by Yifeng Zhou, Kai Di and Haokun Xing
Algorithms 2021, 14(11), 327; https://0-doi-org.brum.beds.ac.uk/10.3390/a14110327 - 05 Nov 2021
Cited by 1 | Viewed by 1477
Abstract
Principal–assistant agent teams are often employed to solve tasks in multiagent collaboration systems. Assistant agents attached to the principal agents are more flexible for task execution and can assist them to complete tasks with complex constraints. However, how to employ principal–assistant agent teams to execute time-critical tasks, considering the dependency between agents and the constraints among tasks, is still a challenge. In this paper, we investigate the principal–assistant collaboration problem with deadlines, which is to allocate tasks to suitable principal–assistant teams and construct routes satisfying the temporal constraints. Two cases are considered in this paper: single principal–assistant teams and multiple principal–assistant teams. The former is formally formulated as an arc-based integer linear programming model. We develop a hybrid combination algorithm for adapting to larger scales, the idea of which is to find an optimal combination of partial routes generated by heuristic methods. The latter is defined in a path-based integer linear programming model, and a branch-and-price-based (BP-based) algorithm is proposed that introduces the number of assistant-accessible tasks surrounding a task to guide the route construction. Experimental results validate that the hybrid combination algorithm and the BP-based algorithm are superior to the benchmarks in terms of the number of served tasks and the running time. Full article

22 pages, 35884 KiB  
Article
Optimized Dissolved Oxygen Fuzzy Control for Recombinant Escherichia coli Cultivations
by Rafael Akira Akisue, Matheus Lopes Harth, Antonio Carlos Luperni Horta and Ruy de Sousa Junior
Algorithms 2021, 14(11), 326; https://0-doi-org.brum.beds.ac.uk/10.3390/a14110326 - 05 Nov 2021
Cited by 2 | Viewed by 1919
Abstract
Due to the low oxygen solubility and the mechanical stirring limitations of a bioreactor, ensuring an adequate oxygen supply during a recombinant Escherichia coli cultivation is a major challenge in process control. In light of this fact, a fuzzy dissolved oxygen controller was developed, based on a decision tree algorithm presented in the literature, and implemented in the supervision software SUPERSYS_HCDC. The algorithm was coded in MATLAB, with its membership function parameters determined using an Adaptive Network-Based Fuzzy Inference System tool. The controller was composed of three independent fuzzy inference systems: Princ1 and Princ2 assessed whether there would be an increment or a reduction in the air and oxygen flow rates (respectively), whilst Delta estimated the size of these variations. To test the controller, simulations with a neural network model and E. coli cultivations were conducted. The fuzzification of the decision tree was successful, resulting in a smoothing of the air and oxygen flow rates and, hence, in an attenuation of dissolved oxygen oscillations. Statistically, the average standard deviation of the fuzzy controller was 2.45 times lower than that of the decision tree (9.48%). The results point toward an increase in the flow meter lifespan and a possible reduction of the metabolic stress suffered by E. coli during cultivation. Full article

18 pages, 2909 KiB  
Article
Travel Time Reliability-Based Rescue Resource Scheduling for Accidents Concerning Transport of Dangerous Goods by Rail
by Lanfen Liu and Xinfeng Yang
Algorithms 2021, 14(11), 325; https://0-doi-org.brum.beds.ac.uk/10.3390/a14110325 - 05 Nov 2021
Cited by 3 | Viewed by 1660
Abstract
The characteristics of railway dangerous goods accidents are very complex. The rescue of railway dangerous goods accidents should consider the timeliness of rescue, the uncertainty of the traffic environment and the diversity of rescue resources. Thus, the purpose of this paper is to address the rescue resource scheduling problem for railway dangerous goods accidents by considering factors such as rescue capacity, rescue demand and response time. Based on the analysis of travel time and reliability for rescue routes, a multi-objective scheduling model of rescue resources based on travel time reliability is constructed in order to minimize the total arrival time of rescue resources and to maximize the total reliability. The proposed model is more reliable than the traditional model due to the consideration of the travel time reliability of rescue routes. Moreover, a two-stage algorithm is designed to solve this problem. A multi-path algorithm with bound constraints is used to obtain the set of feasible rescue routes in the first stage, and the NSGA-II algorithm is used to determine the scheduling of rescue resources for each rescue center. Finally, the two-stage algorithm is tested on a regional road network, and the results show that the designed two-stage algorithm is valid for solving the rescue resource scheduling problem of dangerous goods accidents and is able to obtain a rescue resource scheduling scheme in a short period of time. Full article
(This article belongs to the Special Issue Metaheuristic Algorithms in Optimization and Applications 2021)

19 pages, 2620 KiB  
Article
Feature Selection for High-Dimensional Datasets through a Novel Artificial Bee Colony Framework
by Yuanzi Zhang, Jing Wang, Xiaolin Li, Shiguo Huang and Xiuli Wang
Algorithms 2021, 14(11), 324; https://0-doi-org.brum.beds.ac.uk/10.3390/a14110324 - 04 Nov 2021
Cited by 2 | Viewed by 1899
Abstract
There are generally many redundant and irrelevant features in high-dimensional datasets, which leads to a decline in classification performance and an extension of the execution time. To tackle this problem, feature selection techniques are used to screen out redundant and irrelevant features. The artificial bee colony (ABC) algorithm is a popular meta-heuristic algorithm with high exploration and low exploitation capacities. To balance these two capacities of the ABC algorithm, a novel ABC framework is proposed in this paper. Specifically, the solutions are first updated by the employed bee phase to retain the original exploration ability, so that the algorithm can explore the solution space extensively. Then, the solutions are modified by the updating mechanism of an algorithm with strong exploitation ability in the onlooker bee phase. Finally, we remove the scout bee phase from the framework, which not only reduces the exploration ability but also speeds up the algorithm. In order to verify our idea, the operators of the grey wolf optimization (GWO) algorithm and the whale optimization algorithm (WOA) are introduced into the framework to enhance the exploitation capability of the onlooker bees, yielding two algorithms named BABCGWO and BABCWOA, respectively. It has been found that these two algorithms are superior to four state-of-the-art feature selection algorithms on 12 high-dimensional datasets, in terms of the classification error rate, the size of the feature subset and the execution speed. Full article
(This article belongs to the Special Issue Metaheuristic Algorithms in Optimization and Applications 2021)

18 pages, 2183 KiB  
Article
Metaheuristics for a Flow Shop Scheduling Problem with Urgent Jobs and Limited Waiting Times
by BongJoo Jeong, Jun-Hee Han and Ju-Yong Lee
Algorithms 2021, 14(11), 323; https://0-doi-org.brum.beds.ac.uk/10.3390/a14110323 - 03 Nov 2021
Cited by 8 | Viewed by 1934
Abstract
This study considers a scheduling problem for a flow shop with urgent jobs and limited waiting times. Urgent jobs and limited waiting times are major considerations for scheduling in semiconductor manufacturing systems. The objective function is to minimize a weighted sum of the total tardiness of urgent jobs and the makespan of normal jobs. The problem is formulated as a mixed integer program (MIP), which can be solved to optimality using a commercial optimization solver. However, because the problem is proved to be NP-hard, solving it to optimality requires a significantly long computation time for practical-size problems. Therefore, this study adopts metaheuristic algorithms to obtain a good solution quickly. To this end, two metaheuristic algorithms (an iterated greedy algorithm and a simulated annealing algorithm) are proposed, and a series of computational experiments was performed to examine the effectiveness and efficiency of the proposed algorithms. Full article
(This article belongs to the Special Issue Metaheuristic Algorithms in Optimization and Applications 2021)
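Below is a generic sketch, not the authors' formulation: a simulated annealing search over job permutations for a tiny permutation flow shop, with a weighted objective loosely mirroring the one described above. The urgent-job set, due dates, weights, processing times and the swap neighbourhood are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.integers(1, 10, size=(8, 3))          # processing times: 8 jobs x 3 machines
URGENT = {0, 3}                               # assumed urgent jobs
DUE = {0: 15, 3: 20}                          # assumed due dates for urgent jobs
W = 5.0                                       # weight on urgent-job tardiness

def schedule_cost(perm):
    """Completion times of a permutation flow shop, then the weighted sum of
    total tardiness of urgent jobs and the makespan of the remaining jobs."""
    n, m = P.shape
    C = np.zeros((n, m))
    for i, job in enumerate(perm):
        for k in range(m):
            C[i, k] = max(C[i - 1, k] if i else 0.0,
                          C[i, k - 1] if k else 0.0) + P[job, k]
    tardiness = sum(max(0.0, C[i, -1] - DUE[j])
                    for i, j in enumerate(perm) if j in URGENT)
    makespan_normal = max(C[i, -1] for i, j in enumerate(perm) if j not in URGENT)
    return W * tardiness + makespan_normal

perm = list(range(P.shape[0]))
best, best_cost = perm[:], schedule_cost(perm)
T = 10.0
for _ in range(5000):                          # simulated annealing loop
    i, j = rng.choice(len(perm), size=2, replace=False)
    cand = perm[:]
    cand[i], cand[j] = cand[j], cand[i]        # swap move
    delta = schedule_cost(cand) - schedule_cost(perm)
    if delta < 0 or rng.random() < np.exp(-delta / T):
        perm = cand
        if schedule_cost(perm) < best_cost:
            best, best_cost = perm[:], schedule_cost(perm)
    T *= 0.999                                 # geometric cooling
print("best order:", best, "objective:", round(best_cost, 1))
```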

20 pages, 837 KiB  
Article
Robust Bilinear Probabilistic Principal Component Analysis
by Yaohang Lu and Zhongming Teng
Algorithms 2021, 14(11), 322; https://0-doi-org.brum.beds.ac.uk/10.3390/a14110322 - 01 Nov 2021
Viewed by 1698
Abstract
Principal component analysis (PCA) is one of the most popular tools in multivariate exploratory data analysis. Its probabilistic version (PPCA) based on the maximum likelihood procedure provides a probabilistic manner to implement dimension reduction. Recently, the bilinear PPCA (BPPCA) model, which assumes that the noise terms follow matrix variate Gaussian distributions, has been introduced to directly deal with two-dimensional (2-D) data for preserving the matrix structure of 2-D data, such as images, and avoiding the curse of dimensionality. However, Gaussian distributions are not always available in real-life applications which may contain outliers within data sets. In order to make BPPCA robust for outliers, in this paper, we propose a robust BPPCA model under the assumption of matrix variate t distributions for the noise terms. The alternating expectation conditional maximization (AECM) algorithm is used to estimate the model parameters. Numerical examples on several synthetic and publicly available data sets are presented to demonstrate the superiority of our proposed model in feature extraction, classification and outlier detection. Full article

21 pages, 4611 KiB  
Article
Path Planning of a Mechanical Arm Based on an Improved Artificial Potential Field and a Rapid Expansion Random Tree Hybrid Algorithm
by Qingni Yuan, Junhui Yi, Ruitong Sun and Huan Bai
Algorithms 2021, 14(11), 321; https://0-doi-org.brum.beds.ac.uk/10.3390/a14110321 - 01 Nov 2021
Cited by 9 | Viewed by 2934
Abstract
To improve the path planning efficiency of a robotic arm in three-dimensional space and to improve its obstacle avoidance ability, this paper proposes an improved artificial potential field and rapid expansion random tree (APF-RRT) hybrid algorithm for mechanical arm path planning. The improved APF algorithm (I-APF) introduces a heuristic method based on the number of adjacent obstacles to escape from local minima, which solves the local minimum problem of the APF method and improves the search speed. The improved RRT algorithm (I-RRT) changes the selection method of the nearest neighbor node by introducing a triangular nearest neighbor node selection method, adopts an adaptive step size and a virtual new node generation strategy to explore the path, and removes redundant path nodes generated by the RRT algorithm, which effectively improves the obstacle avoidance ability and efficiency of the algorithm. Bezier curves are used to fit the final generated path. Finally, an experimental analysis based on Python shows that the search time of the hybrid algorithm in a multi-obstacle environment is reduced to 2.8 s from 37.8 s (classic RRT algorithm), 10.1 s (RRT* algorithm), and 7.4 s (P_RRT* algorithm), and the success rate and efficiency of the search are both significantly improved. Furthermore, the hybrid algorithm is simulated in a robot operating system (ROS) using the UR5 mechanical arm, and the results prove the effectiveness and reliability of the hybrid algorithm. Full article
(This article belongs to the Topic Soft Computing)

24 pages, 1026 KiB  
Article
Load Balancing Strategies for Slice-Based Parallel Versions of JEM Video Encoder
by Héctor Migallón, Otoniel López-Granado, Miguel O. Martínez-Rach, Vicente Galiano and Manuel P. Malumbres
Algorithms 2021, 14(11), 320; https://0-doi-org.brum.beds.ac.uk/10.3390/a14110320 - 01 Nov 2021
Viewed by 1601
Abstract
The proportion of video traffic on the internet is expected to reach 82% by 2022, mainly due to the increasing number of consumers and the emergence of new video formats with more demanding features (depth, resolution, multiview, 360, etc.). Efforts are therefore being made to constantly improve video compression standards to minimize the necessary bandwidth while retaining high video quality levels. In this context, the Joint Collaborative Team on Video Coding has been analyzing new video coding technologies to improve the compression efficiency with respect to the HEVC video coding standard. A software package known as the Joint Exploration Test Model has been proposed to implement and evaluate new video coding tools. In this work, we present parallel versions of the JEM encoder that are particularly suited for shared memory platforms, and can significantly reduce its huge computational complexity. The proposed parallel algorithms are shown to achieve high levels of parallel efficiency. In particular, in the All Intra coding mode, the best of our proposed parallel versions achieves an average efficiency value of 93.4%. They also exhibit high levels of scalability, supported by the inclusion of an automatic load balancing mechanism. Full article
(This article belongs to the Collection Parallel and Distributed Computing: Algorithms and Applications)

13 pages, 4846 KiB  
Article
Risk Assessment Algorithm for Power Transformer Fleets Based on Condition and Strategic Importance
by Diego A. Zaldivar, Andres A. Romero and Sergio R. Rivera
Algorithms 2021, 14(11), 319; https://0-doi-org.brum.beds.ac.uk/10.3390/a14110319 - 31 Oct 2021
Cited by 11 | Viewed by 1847
Abstract
In every electric power system, power transformers (PT) play a critical role. Under ideal circumstances, PT should receive the utmost care to maintain the highest operative condition during their lifetime. Through the years, different approaches have been developed to assess the condition and the inherent risk during the operation of PT. However, most proposed methodologies tend to analyze PT as individuals and not as a fleet. A fleet assessment helps the asset manager make sound decisions regarding the maintenance scheduling for groups of PT with similar conditions. This paper proposes a new methodology to assess the risk of PT fleets, considering the technical condition and the strategic importance of the units. First, the state of the units was evaluated using a health index (HI) with a fuzzy logic algorithm. Then, the strategic importance of each unit was assessed using a weighting technique to obtain the importance index (II). Finally, the analyzed units with similar HI and II were arranged into a set of clusters using the k-means clustering technique. A fleet of 19 PTs was used to validate the proposed method. The obtained results are also provided to demonstrate the viability and feasibility of the assessment model. Full article
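The clustering step described above can be pictured with the toy sketch below (all HI/II values are synthetic and k = 4 is an arbitrary choice, not the paper's): units with similar health and importance indices end up in the same cluster, which is the grouping an asset manager would act on.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# synthetic fleet: health index (0 = poor, 100 = excellent) and importance index
hi = rng.uniform(20, 100, size=19)
ii = rng.uniform(0, 1, size=19)
fleet = np.column_stack([hi / 100.0, ii])     # scale HI to [0, 1] before clustering

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(fleet)
for c in range(4):
    members = np.flatnonzero(kmeans.labels_ == c)
    print(f"cluster {c}: units {members.tolist()}, "
          f"mean HI {hi[members].mean():.0f}, mean II {ii[members].mean():.2f}")
```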

17 pages, 583 KiB  
Article
Using Decision Trees and Random Forest Algorithms to Predict and Determine Factors Contributing to First-Year University Students’ Learning Performance
by Thao-Trang Huynh-Cam, Long-Sheng Chen and Huynh Le
Algorithms 2021, 14(11), 318; https://0-doi-org.brum.beds.ac.uk/10.3390/a14110318 - 30 Oct 2021
Cited by 19 | Viewed by 4911
Abstract
First-year students’ learning performance has received much attention in educational practice and theory. Previous works built prediction models using variables that can only be obtained during the course or as the semester progresses, through questionnaire surveys and interviews. Such models cannot provide timely support for students whose poor performance is caused by economic factors. Therefore, other variables are needed that allow prediction results to be reached earlier. This study attempts to use family background variables that can be obtained prior to the start of the semester to build learning performance prediction models for freshmen using random forest (RF), C5.0, CART, and multilayer perceptron (MLP) algorithms. A real sample of 2407 freshmen who enrolled in 12 departments of a Taiwanese vocational university was employed. The experimental results showed that CART outperforms the C5.0, RF, and MLP algorithms. The most important features were mother’s occupation, department, father’s occupation, main source of living expenses, and admission status. The extracted knowledge rules are expected to serve as indicators for early prediction of students’ performance, so that strategic interventions can be planned before students begin the semester. Full article
(This article belongs to the Special Issue Discrete Optimization Theory, Algorithms, and Applications)
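A hedged sketch of the modelling setup (synthetic records, invented category codes, and scikit-learn's CART-style DecisionTreeClassifier standing in for the algorithms compared in the paper): family-background features available before the semester are one-hot encoded and used to predict a binary performance label.

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500                                     # synthetic stand-in for the real sample
data = pd.DataFrame({
    "mother_occupation": rng.choice(["office", "factory", "farming", "none"], n),
    "father_occupation": rng.choice(["office", "factory", "farming", "none"], n),
    "department": rng.choice([f"D{i}" for i in range(12)], n),
    "living_expense_source": rng.choice(["family", "loan", "part_time"], n),
    "admission_status": rng.choice(["regular", "recommendation"], n),
})
# synthetic label loosely tied to one feature so the tree has something to learn
y = (data["living_expense_source"] != "part_time").astype(int) ^ (rng.random(n) < 0.2).astype(int)

model = Pipeline([
    ("encode", ColumnTransformer([("onehot", OneHotEncoder(), data.columns.tolist())])),
    ("tree", DecisionTreeClassifier(max_depth=4, random_state=0)),
])
X_tr, X_te, y_tr, y_te = train_test_split(data, y, test_size=0.3, random_state=0)
model.fit(X_tr, y_tr)
print("holdout accuracy:", round(model.score(X_te, y_te), 3))
```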

16 pages, 6649 KiB  
Article
A Real-Time Car Towing Management System Using ML-Powered Automatic Number Plate Recognition
by Ahmed Abdelmoamen Ahmed and Sheikh Ahmed
Algorithms 2021, 14(11), 317; https://0-doi-org.brum.beds.ac.uk/10.3390/a14110317 - 30 Oct 2021
Cited by 7 | Viewed by 3524
Abstract
Automatic Number Plate Recognition (ANPR) has been widely used in different domains, such as car park management, traffic management, tolling, and intelligent transport systems. Despite this technology’s importance, existing ANPR approaches struggle to identify number plates accurately because of their differing sizes, orientations, and shapes across regions worldwide. In this paper, we study these challenges by implementing a case study for smart car towing management using Machine Learning (ML) models. The developed mobile-based system uses different approaches and techniques to enhance the accuracy of recognizing number plates in real time. First, we developed an algorithm to accurately detect the number plate’s location on the car body. Then, the bounding box of the plate is extracted and converted into a grayscale image. Second, we applied a series of filters to detect the alphanumeric characters’ contours within the grayscale image. Third, the detected character contours are fed into a K-Nearest Neighbors (KNN) model to recognize the actual number plate. Our model achieves an overall classification accuracy of 95% in recognizing number plates across different regions worldwide. The user interface is developed as an Android mobile app, allowing law-enforcement personnel to capture a photo of the towed car, which is then recorded in the car towing management system automatically in real time. The app also allows owners to search for their cars, check the case status, and pay fines. Finally, we evaluated our system using various performance metrics such as classification accuracy and processing time. We found that our model outperforms some state-of-the-art ANPR approaches in terms of overall processing time. Full article

12 pages, 2172 KiB  
Article
Variation Trends of Fractal Dimension in Epileptic EEG Signals
by Zhiwei Li, Jun Li, Yousheng Xia, Pingfa Feng and Feng Feng
Algorithms 2021, 14(11), 316; https://0-doi-org.brum.beds.ac.uk/10.3390/a14110316 - 29 Oct 2021
Cited by 1 | Viewed by 1891
Abstract
Electroencephalography (EEG) is an important basis for the clinical judgment of epileptic diseases, and fractal algorithms are often used to analyze EEG signals. However, the reported variation trends of the fractal dimension (D) are opposite in the literature, i.e., both decreasing and increasing D during seizure status relative to normal status have been reported in previous studies, undermining the feasibility of fractal algorithms for EEG analysis to detect epileptic seizures. In this study, two algorithms with high accuracy in the D calculation, Higuchi and roughness scaling extraction (RSE), were used to study the D variation of EEG signals with seizures. It was found that the denoising operation had an important influence on the D variation trend. Moreover, the D variation obtained by the RSE algorithm was larger than that obtained by the Higuchi algorithm, because the non-fractal nature of EEG signals during normal status could be detected and quantified by the RSE algorithm. These findings could be promising for a better understanding of the nonlinear nature and scaling behaviors of EEG signals. Full article
(This article belongs to the Special Issue Machine Learning in Medical Signal and Image Processing)
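For readers unfamiliar with one of the two estimators mentioned, here is a compact, textbook-style implementation of the Higuchi fractal dimension (not the authors' code); the test signals and kmax are assumptions.

```python
import numpy as np

def higuchi_fd(x, kmax=10):
    """Standard Higuchi estimator: average normalized curve length L(k) over
    k = 1..kmax, then D is the slope of log L(k) versus log(1/k)."""
    x = np.asarray(x, dtype=float)
    N = x.size
    L = []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, N, k)
            if idx.size < 2:
                continue
            dist = np.abs(np.diff(x[idx])).sum()
            norm = (N - 1) / ((idx.size - 1) * k)     # Higuchi normalization
            lengths.append(dist * norm / k)
        L.append(np.mean(lengths))
    slope, _ = np.polyfit(np.log(1.0 / np.arange(1, kmax + 1)), np.log(L), 1)
    return slope

# sanity check on synthetic signals: white noise should give D close to 2,
# a smooth sine should give D close to 1
rng = np.random.default_rng(0)
print("white noise D ~", round(higuchi_fd(rng.normal(size=2000)), 2))
print("sine wave   D ~", round(higuchi_fd(np.sin(np.linspace(0, 20 * np.pi, 2000))), 2))
```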

18 pages, 2786 KiB  
Article
Nonsingular Terminal Sliding Mode Based Finite-Time Dynamic Surface Control for a Quadrotor UAV
by Yuxiao Niu, Hanyu Ban, Haichao Zhang, Wenquan Gong and Fang Yu
Algorithms 2021, 14(11), 315; https://0-doi-org.brum.beds.ac.uk/10.3390/a14110315 - 29 Oct 2021
Cited by 5 | Viewed by 1779
Abstract
In this work, a tracking control strategy is developed to achieve finite-time stability of quadrotor Unmanned Aerial Vehicles (UAVs) subject to external disturbances and parameter uncertainties. Firstly, a finite-time extended state observer (ESO) is proposed based on the nonsingular terminal sliding mode variable to estimate external disturbances to the position subsystem. Then, utilizing the information provided by the ESO and the nonsingular terminal sliding mode control (NTSMC) technique, a dynamic surface controller is proposed to achieve finite-time stability of the position subsystem. By conducting a similar step for the attitude subsystem, a finite-time ESO-based dynamic surface controller is proposed to carry out attitude tracking control of the quadrotor UAV. Finally, the performance of the control algorithm is demonstrated via a numerical simulation. Full article
(This article belongs to the Special Issue Unmanned Aero—Vehicle Guidance and Control Algorithms & Application)

27 pages, 9325 KiB  
Article
DMFO-CD: A Discrete Moth-Flame Optimization Algorithm for Community Detection
by Mohammad H. Nadimi-Shahraki, Ebrahim Moeini, Shokooh Taghian and Seyedali Mirjalili
Algorithms 2021, 14(11), 314; https://0-doi-org.brum.beds.ac.uk/10.3390/a14110314 - 28 Oct 2021
Cited by 31 | Viewed by 3806
Abstract
In this paper, a discrete moth–flame optimization algorithm for community detection (DMFO-CD) is proposed. The representation of solution vectors, initialization, and movement strategy of the continuous moth–flame optimization are purposely adapted in DMFO-CD so that it can solve discrete community detection. In this adaptation, locus-based adjacency representation is used to represent the position of moths and flames, and the initialization process is performed by considering the community structure and the relation between nodes without the need for any knowledge about the number of communities. Solution vectors are updated by the adapted movement strategy, using a single-point crossover for distance imitation, a two-point crossover to calculate the movement, and a single-point neighbor-based mutation that can enhance exploration and balance exploration and exploitation. The fitness function is defined based on modularity. The performance of DMFO-CD was evaluated on eleven real-world networks, and the obtained results were compared with five well-known community detection algorithms, including GA-Net, DPSO-PDM, GACD, EGACD, and DECS, in terms of modularity, NMI, and the number of detected communities. Additionally, the obtained results were statistically analyzed by the Wilcoxon signed-rank and Friedman tests. In the comparison with the other algorithms, the results show that the proposed DMFO-CD is competitive in detecting the correct number of communities with high modularity. Full article
(This article belongs to the Special Issue Network Science: Algorithms and Applications)
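Since modularity is the fitness function used above, a tiny illustration of computing it with NetworkX on a well-known benchmark graph may help; the partition here comes from NetworkX's greedy heuristic, not from DMFO-CD.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

G = nx.karate_club_graph()                     # classic 34-node benchmark network

# any partition of the nodes can be scored; here we take a greedy baseline
communities = greedy_modularity_communities(G)
Q = modularity(G, communities)

print(f"{len(communities)} communities found, modularity Q = {Q:.3f}")
for i, c in enumerate(communities):
    print(f"community {i}: {sorted(c)}")
```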

39 pages, 670 KiB  
Article
Matheuristics and Column Generation for a Basic Technician Routing Problem
by Nicolas Dupin, Rémi Parize and El-Ghazali Talbi
Algorithms 2021, 14(11), 313; https://0-doi-org.brum.beds.ac.uk/10.3390/a14110313 - 27 Oct 2021
Cited by 7 | Viewed by 2811
Abstract
This paper considers a variant of the Vehicle Routing Problem with Time Windows, with site dependencies, multiple depots and outsourcing costs. This problem is the basis for many technician routing problems. Having both site-dependency and time window constraints results in difficulties in finding feasible solutions and induces highly constrained instances. Matheuristics based on Mixed Integer Linear Programming compact formulations are firstly designed. Column Generation matheuristics are then described by using the previous matheuristics and machine learning techniques to stabilize and speed up the convergence of the Column Generation algorithm. The computational experiments are analyzed on public instances with graduated difficulties in order to analyze the accuracy of the algorithms in ensuring feasibility and the quality of solutions for weakly to highly constrained instances. The results emphasize the interest of the multiple types of hybridization between mathematical programming, machine learning and heuristics inside the Column Generation framework. This work offers perspectives for many extensions of technician routing problems. Full article
(This article belongs to the Special Issue 2021 Selected Papers from Algorithms Editorial Board Members)

14 pages, 674 KiB  
Article
A Linearly Involved Generalized Moreau Enhancement of ℓ2,1-Norm with Application to Weighted Group Sparse Classification
by Yang Chen, Masao Yamagishi and Isao Yamada
Algorithms 2021, 14(11), 312; https://0-doi-org.brum.beds.ac.uk/10.3390/a14110312 - 27 Oct 2021
Cited by 2 | Viewed by 1810
Abstract
This paper proposes a new group-sparsity-inducing regularizer to approximate the ℓ2,0 pseudo-norm. The regularizer is nonconvex, and can be seen as a linearly involved generalized Moreau enhancement of the ℓ2,1-norm. Moreover, the overall convexity of the corresponding group-sparsity-regularized least squares problem can be achieved. The model can handle general group configurations such as weighted group sparse problems, and can be solved through a proximal splitting algorithm. Among the applications, considering that the bias of a convex regularizer may lead to incorrect classification results, especially for unbalanced training sets, we apply the proposed model to the (weighted) group sparse classification problem. The proposed classifier can use the label, similarity and locality information of samples. It also suppresses the bias of convex-regularizer-based classifiers. Experimental results demonstrate that the proposed classifier improves the performance of convex ℓ2,1 regularizer-based methods, especially when the training data set is unbalanced. This paper enhances the potential applicability and effectiveness of using nonconvex regularizers in the frame of convex optimization. Full article
(This article belongs to the Special Issue Algorithms for Convex Optimization)
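As background for the convex baseline mentioned in the abstract (not the proposed nonconvex enhancement), the proximity operator of the ℓ2,1-norm is simple group soft-thresholding; a short sketch with an assumed group structure follows.

```python
import numpy as np

def prox_l21(x, groups, gamma):
    """Proximity operator of gamma * sum_g ||x_g||_2 (the l2,1-norm):
    each group is shrunk toward zero by gamma and zeroed if its norm is smaller."""
    out = np.zeros_like(x)
    for g in groups:
        norm = np.linalg.norm(x[g])
        if norm > gamma:
            out[g] = (1.0 - gamma / norm) * x[g]   # block soft-thresholding
    return out

# toy example: three groups of coefficients, the middle one is weak
x = np.array([3.0, 4.0,   0.2, -0.1,   -2.0, 2.0])
groups = [np.arange(0, 2), np.arange(2, 4), np.arange(4, 6)]
print(prox_l21(x, groups, gamma=1.0))
# -> first and last groups shrunk, weak middle group set exactly to zero
```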

20 pages, 1572 KiB  
Article
Evaluation of Features Generated by a High-End Low-Cost Electrical Smart Meter
by Christina Koutroumpina, Spyros Sioutas, Stelios Koutroubinas and Kostas Tsichlas
Algorithms 2021, 14(11), 311; https://0-doi-org.brum.beds.ac.uk/10.3390/a14110311 - 25 Oct 2021
Cited by 3 | Viewed by 1644
Abstract
The problem of energy disaggregation is the separation of an aggregate energy signal into the consumption of individual appliances in a household. This is useful, since the goal of energy efficiency at the household level can be achieved through energy-saving policies aimed at changing the behavior of consumers. A prerequisite for this is the ability to measure energy consumption at the appliance level. The purpose of this study is to present some initial results towards this goal by making heavy use of the characteristics of a particular DIN-rail meter, which is provided by Meazon S.A. Our thinking is that meter-specific energy disaggregation solutions may yield better results than general-purpose methods, especially for sophisticated meters. This meter has a 50 Hz sampling rate over 3 different lines and provides a rather rich set of measurements with respect to the extracted features. In this paper we aim at evaluating the set of features generated by the smart meter. To this end, we use well-known supervised machine learning models and test their effectiveness on certain appliances when selecting specific subsets of features. Three algorithms are used for this purpose: the Decision Tree Classifier, the Random Forest Classifier, and the Multilayer Perceptron Classifier. Our experimental study shows that by using a specific set of features one can enhance the classification performance of these algorithms. Full article
(This article belongs to the Special Issue Algorithmic Data Management)
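A hedged sketch of the evaluation loop described above, with synthetic measurements standing in for the meter's data and arbitrary feature subsets: each subset is scored with cross-validated Random Forest accuracy (the other classifiers can be swapped in the same way). Feature names and subset choices are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
FEATURES = ["active_power", "reactive_power", "current", "voltage", "power_factor"]

# synthetic per-window measurements for a binary appliance-on/off labelling task
n = 600
X = rng.normal(size=(n, len(FEATURES)))
y = (X[:, 0] + 0.5 * X[:, 2] + 0.3 * rng.normal(size=n) > 0).astype(int)

SUBSETS = {
    "power only": [0, 1],
    "current + voltage": [2, 3],
    "all features": list(range(len(FEATURES))),
}
for name, cols in SUBSETS.items():
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    score = cross_val_score(clf, X[:, cols], y, cv=5).mean()
    print(f"{name:<18} {[FEATURES[c] for c in cols]}: CV accuracy {score:.3f}")
```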

20 pages, 4144 KiB  
Article
A Non-Dominated Genetic Algorithm Based on Decoding Rule of Heat Treatment Equipment Volume and Job Delivery Date
by Yan Liang and Qingdong Zhang
Algorithms 2021, 14(11), 310; https://0-doi-org.brum.beds.ac.uk/10.3390/a14110310 - 25 Oct 2021
Viewed by 1386
Abstract
This paper investigates the flexible job-shop scheduling problem with a heat treatment process. To solve this problem, we built a unified mathematical model of the heat treatment process and the machining process. Up to now, this problem has not been investigated much. Based on the features of this problem, we aim to minimize the makespan Cmax, maximize the space utilization rate of the heat treatment equipment, and minimize the total delay penalty to optimize the scheduling. By taking dynamic process arrival into consideration, this paper proposes a set of decoding rules based on the heat treatment equipment volume and the job delivery date to achieve a hybrid dynamic scheduling solution within one scheduling procedure. When the utilization rate of the heat treatment equipment volume is maximized and the job delivery date is taken into consideration, it is preferable to minimize the number of workpiece batches in the same job and to reduce the waiting time of pending jobs. The solution is obtained in combination with an improved adaptive non-dominated genetic algorithm. Furthermore, we verified the effectiveness of the proposed decoding rules and the improved algorithm through algorithm comparison and calculation results. Finally, a software system for algorithm verification and comparison was developed to verify the validity of the proposed algorithm. Full article

10 pages, 284 KiB  
Article
A Parallel Algorithm for Dividing Octonions
by Aleksandr Cariow and Janusz P. Paplinski
Algorithms 2021, 14(11), 309; https://0-doi-org.brum.beds.ac.uk/10.3390/a14110309 - 24 Oct 2021
Viewed by 1553
Abstract
The article presents a parallel hardware-oriented algorithm designed to speed up the division of two octonions. The advantage of the proposed algorithm is that the number of real multiplications is halved as compared to the naive method for implementing this operation. In the synthesis of the discussed algorithm, the matrix representation of this operation was used, which allows us to present the division of octonions by means of a vector–matrix product. Taking into account a specific structure of the matrix multiplicand allows for reducing the number of real multiplications necessary for the execution of the octonion division procedure. Full article
(This article belongs to the Collection Parallel and Distributed Computing: Algorithms and Applications)
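For context, the naive reference method that the paper improves on can be written compactly via the Cayley–Dickson construction: right division is x·y⁻¹ with y⁻¹ = conj(y)/‖y‖². The sketch below is that baseline only, not the proposed vector–matrix algorithm, and the recursion convention used is one of several equivalent choices.

```python
import numpy as np

def cd_conj(x):
    """Cayley–Dickson conjugation of a length-2^n real vector."""
    if x.size == 1:
        return x.copy()
    h = x.size // 2
    return np.concatenate([cd_conj(x[:h]), -x[h:]])

def cd_mul(x, y):
    """Cayley–Dickson product: (a, b)(c, d) = (ac - conj(d)b, da + b conj(c)).
    For length-8 inputs this is octonion multiplication."""
    if x.size == 1:
        return x * y
    h = x.size // 2
    a, b = x[:h], x[h:]
    c, d = y[:h], y[h:]
    return np.concatenate([
        cd_mul(a, c) - cd_mul(cd_conj(d), b),
        cd_mul(d, a) + cd_mul(b, cd_conj(c)),
    ])

def octonion_div(x, y):
    """Naive right division x / y = x * y^{-1}, with y^{-1} = conj(y) / |y|^2."""
    return cd_mul(x, cd_conj(y)) / np.dot(y, y)

# sanity check: (x * y) / y should recover x (octonions are alternative)
rng = np.random.default_rng(0)
x, y = rng.normal(size=8), rng.normal(size=8)
assert np.allclose(octonion_div(cd_mul(x, y), y), x)
print(octonion_div(cd_mul(x, y), y))
```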
