Algorithms, Volume 15, Issue 2 (February 2022) – 45 articles

Cover Story: Laborious human assistance is often required when algorithms attempt to track densely packed cells in video sequences. Common sense—often lacking in software—dictates that cells can move, rotate, and grow a bit between video frames. They can also periodically split, but they cannot instantaneously appear, disappear, or jump great distances. Our code—Cell Universe—maintains an internal model of the scene, incorporating “common sense” rules that the modelled cells must follow. Synthetic images of the internal model are compared to the real image, allowing a precise fit of the model to reality. The rules tightly constrain possible interpretations of the scene, greatly increasing Cell Universe's robustness to noise and nonphysical behaviour compared to existing algorithms.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
55 pages, 7055 KiB  
Review
A Review of Deep Learning Algorithms and Their Applications in Healthcare
by Hussein Abdel-Jaber, Disha Devassy, Azhar Al Salam, Lamya Hidaytallah and Malak EL-Amir
Algorithms 2022, 15(2), 71; https://0-doi-org.brum.beds.ac.uk/10.3390/a15020071 - 21 Feb 2022
Cited by 31 | Viewed by 11158
Abstract
Deep learning is a type of machine learning that uses artificial neural networks, loosely modelled on the human brain, to recognize patterns and learn from them to make decisions. It employs supervised, semi-supervised, or unsupervised learning strategies to learn automatically in deep architectures and has gained much popularity due to its superior ability to learn from huge amounts of data. Deep learning approaches have been applied successfully to big data analysis. Applications include virtual assistants such as Alexa and Siri, facial recognition, personalization, natural language processing, autonomous cars, automatic handwriting generation, news aggregation, the colorization of black-and-white images, the addition of sound to silent films, pixel restoration, and deep dreaming. As a review, this paper aims to categorically cover several widely used deep learning algorithms along with their architectures and their practical applications: backpropagation, autoencoders, variational autoencoders, restricted Boltzmann machines, deep belief networks, convolutional neural networks, recurrent neural networks, generative adversarial networks, capsule networks (CapsNets), transformers, embeddings from language models (ELMo), bidirectional encoder representations from transformers (BERT), and attention in natural language processing. In addition, frontier topics and challenges of deep learning are also presented in this paper, such as AutoML-Zero, neural architecture search, evolutionary deep learning, and others. The pros and cons of these algorithms and their applications in healthcare are explored, alongside the future direction of this domain. This paper presents a review and a checkpoint to systemize the popular algorithms and to encourage further innovation regarding their applications. 
For new researchers in the field of deep learning, this review can help them to obtain many details about the advantages, disadvantages, applications, and working mechanisms of a number of deep learning algorithms. In addition, we introduce detailed information on how to apply several deep learning algorithms in healthcare, such as in relation to the COVID-19 pandemic. By presenting many challenges of deep learning in one section, we hope to increase awareness of these challenges, and how they can be dealt with. This could also motivate researchers to find solutions for these challenges. Full article
19 pages, 1638 KiB  
Article
Classification and Merging Techniques to Reduce Brokerage Using Multi-Objective Optimization
by Dhanalakshmi Bettahalli Kengegowda, Srikantaiah Kamidoddi Chowdaiah, Gururaj Harinahalli Lokesh and Francesco Flammini
Algorithms 2022, 15(2), 70; https://0-doi-org.brum.beds.ac.uk/10.3390/a15020070 - 21 Feb 2022
Cited by 3 | Viewed by 2139
Abstract
Cloud computing is concerned with effective resource utilization and cost optimization. In existing systems, the cost of resources is much higher than necessary. To overcome this problem, a new model called Classification and Merging Techniques for Reducing Brokerage Cost (CMRBC) is designed for effective resource utilization and cost optimization in the cloud. CMRBC has two benefits. Firstly, it is a cost-effective solution for service providers and customers. Secondly, it avoids creating a virtual machine (VM) for every job, which reduces brokerage. The allocation, creation, and selection of VM resources are carried out by a broker. The main objective is to maximize resource utilization and minimize brokerage in cloud computing by using Multi-Objective Optimization (MOO); a multi-attribute approach is adopted, since there are more than two objectives. Likewise, CMRBC implements efficient resource allocation to reduce the usage cost of resources. The experimental results show that CMRBC achieves a 60 percent reduction in brokerage and a 10 percent reduction in response time. Full article
(This article belongs to the Special Issue Algorithms in Multi-Objective Optimization)
14 pages, 1123 KiB  
Article
Approximation of the Riesz–Caputo Derivative by Cubic Splines
by Francesca Pitolli, Chiara Sorgentone and Enza Pellegrino
Algorithms 2022, 15(2), 69; https://0-doi-org.brum.beds.ac.uk/10.3390/a15020069 - 21 Feb 2022
Cited by 16 | Viewed by 4341
Abstract
Differential problems with the Riesz derivative in space are widely used to model anomalous diffusion. Although the Riesz–Caputo derivative is more suitable for modeling real phenomena, there are few examples in the literature where numerical methods are used to solve such differential problems. In this paper, we propose to approximate the Riesz–Caputo derivative of a given function with a cubic spline. As far as we are aware, this is the first time that cubic splines have been used in the context of the Riesz–Caputo derivative. To show the effectiveness of the proposed numerical method, we present numerical tests in which we compare the analytical solution of several boundary differential problems that have the Riesz–Caputo derivative in space with the numerical solution we obtain by a spline collocation method. The numerical results show that the proposed method is efficient and accurate. Full article
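For context, the Riesz–Caputo derivative combines the left- and right-sided Caputo derivatives symmetrically over the whole interval. A standard formulation for 0 < α < 1 on [a, b] (stated here for the reader's convenience, not quoted from the paper) is:

```latex
% Left and right Caputo derivatives of order \alpha, 0 < \alpha < 1:
{}^{C}_{a}D^{\alpha}_{x} f(x) = \frac{1}{\Gamma(1-\alpha)} \int_{a}^{x} \frac{f'(t)}{(x-t)^{\alpha}} \, dt ,
\qquad
{}^{C}_{x}D^{\alpha}_{b} f(x) = \frac{-1}{\Gamma(1-\alpha)} \int_{x}^{b} \frac{f'(t)}{(t-x)^{\alpha}} \, dt .

% Riesz--Caputo derivative: the symmetric average of the two,
% which weights both sides of the domain:
{}^{RC}_{\;a}D^{\alpha}_{b} f(x)
  = \frac{1}{2}\left( {}^{C}_{a}D^{\alpha}_{x} f(x) + {}^{C}_{x}D^{\alpha}_{b} f(x) \right)
  = \frac{1}{2\,\Gamma(1-\alpha)} \int_{a}^{b} \frac{\operatorname{sgn}(x-t)\, f'(t)}{|x-t|^{\alpha}} \, dt .
```

Unlike the one-sided Caputo derivative, this operator sees the function on both sides of x, which is why it is considered better suited to modeling real (spatially symmetric) anomalous diffusion.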
12 pages, 2286 KiB  
Article
Whispered Speech Conversion Based on the Inversion of Mel Frequency Cepstral Coefficient Features
by Qiang Zhu, Zhong Wang, Yunfeng Dou and Jian Zhou
Algorithms 2022, 15(2), 68; https://0-doi-org.brum.beds.ac.uk/10.3390/a15020068 - 20 Feb 2022
Cited by 3 | Viewed by 2396
Abstract
A conversion method based on the inversion of Mel frequency cepstral coefficient (MFCC) features was proposed to convert whispered speech into normal speech. First, the MFCC features of whispered speech and normal speech were extracted and a matching relation between the MFCC feature parameters of whispered speech and normal speech was developed through the Gaussian mixture model (GMM). Then, the MFCC feature parameters of normal speech corresponding to whispered speech were obtained based on the GMM and, finally, whispered speech was converted into normal speech through the inversion of MFCC features. The experimental results showed that the cepstral distortion (CD) of the normal speech converted by the proposed method was 21% less than that of the normal speech converted by the linear predictive coefficient (LPC) features, the mean opinion score (MOS) was 3.56, and a satisfactory outcome in both intelligibility and sound quality was achieved. Full article
10 pages, 5458 KiB  
Article
Adjacency Maps and Efficient Graph Algorithms
by Gabriel Valiente
Algorithms 2022, 15(2), 67; https://0-doi-org.brum.beds.ac.uk/10.3390/a15020067 - 20 Feb 2022
Cited by 1 | Viewed by 6479
Abstract
Graph algorithms that test adjacencies are usually implemented with an adjacency-matrix representation because the adjacency test takes constant time with adjacency matrices, but it takes linear time in the degree of the vertices with adjacency lists. In this article, we review the adjacency-map representation, which supports adjacency tests in constant expected time, and we show that graph algorithms run faster with adjacency maps than with adjacency lists by a small constant factor if they do not test adjacencies and by one or two orders of magnitude if they perform adjacency tests. Full article
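The representation is easy to sketch in ordinary Python: a dictionary of dictionaries gives the constant-expected-time adjacency test the abstract describes, while still allowing neighbour iteration like an adjacency list. This is an illustrative sketch, not the authors' implementation (their class names and experiments differ):

```python
class AdjacencyMapGraph:
    """Directed graph stored as a dict of dicts (an adjacency map)."""

    def __init__(self):
        self.adj = {}  # vertex -> {neighbour: edge attribute}

    def add_vertex(self, v):
        self.adj.setdefault(v, {})

    def add_edge(self, u, v, attr=None):
        self.add_vertex(u)
        self.add_vertex(v)
        self.adj[u][v] = attr  # directed edge u -> v

    def has_edge(self, u, v):
        # constant expected time: one hash lookup per level,
        # versus a linear scan of u's list in the adjacency-list representation
        return v in self.adj.get(u, {})

    def neighbours(self, u):
        # iteration over neighbours is as cheap as with adjacency lists
        return self.adj.get(u, {}).keys()
```

The adjacency-matrix representation also gives O(1) adjacency tests, but costs O(n²) space; the map keeps space proportional to the number of edges.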
40 pages, 8026 KiB  
Article
A Temporal Case-Based Reasoning Platform Relying on a Fuzzy Vector Spaces Object-Oriented Model and a Method to Design Knowledge Bases and Decision Support Systems in Multiple Domains
by Joël Colloc, Relwendé Aristide Yameogo, Peter Summons, Lilian Loubet, Jean-Bernard Cavelier and Paul Bridier
Algorithms 2022, 15(2), 66; https://0-doi-org.brum.beds.ac.uk/10.3390/a15020066 - 19 Feb 2022
Cited by 5 | Viewed by 2918
Abstract
Knowledge bases in complex domains must take into account many attributes describing numerous objects that are themselves components of complex objects. Temporal case-based reasoning (TCBR) requires comparing the structural evolution of component objects and their states (attribute values) at different levels of granularity. This paper provides some significant contributions to computer science. It extends a fuzzy vector space object-oriented model and method (FVSOOMM) to present a new platform and a method guideline capable of designing objects and attributes that represent timepoint knowledge objects. It shows how temporal case-based reasoning can use distances between temporal fuzzy vector functions to compare these knowledge objects’ evolution. It describes examples of interfaces that have been implemented on this new platform. These include an expert’s interface that describes a knowledge class diagram; a practitioner’s interface that instantiates domain objects and their attribute constraints; and an end-user interface to input attribute values of the real cases stored in a domain case database. This paper illustrates resultant knowledge bases in different domains, with examples of pulmonary embolism diagnosis in medicine and decision making in French municipal territorial recomposition. The paper concludes with the current limitations of the proposed model, its future perspectives and possible platform enhancements. Full article
(This article belongs to the Special Issue Algorithms in Decision Support Systems Vol. 2)
21 pages, 751 KiB  
Article
A New Algorithm Based on Colouring Arguments for Identifying Impossible Polyomino Tiling Problems
by Marcus R. Garvie and John Burkardt
Algorithms 2022, 15(2), 65; https://0-doi-org.brum.beds.ac.uk/10.3390/a15020065 - 17 Feb 2022
Cited by 2 | Viewed by 3283
Abstract
Checkerboard colouring arguments for proving that a given collection of polyominoes cannot tile a finite target region of the plane are well-known and typically applied on a case-by-case basis. In this article, we give a systematic mathematical treatment of such colouring arguments, based on the concept of a parity violation, which arises from the mismatch between the colouring of the tiles and the colouring of the target region. Identifying parity violations is a combinatorial problem related to the subset sum problem. We convert the combinatorial problem into linear Diophantine equations and give necessary and sufficient conditions for a parity violation. The linear Diophantine equation approach leads to an algorithm implemented in MATLAB for finding all possible parity violations of large tiling problems, and is the main contribution of this article. Numerical examples illustrate the effectiveness of our algorithm. The collection of MATLAB programs, POLYOMINO_PARITY (v2.0.0) is freely available for download. Full article
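The classical checkerboard argument that the paper systematizes can be illustrated in a few lines: a 2×1 domino always covers one black and one white cell, so a mismatch in colour counts (a parity violation) proves a tiling impossible. The sketch below is a hand-rolled illustration of that necessary condition on the classic mutilated chessboard, not the authors' POLYOMINO_PARITY MATLAB code:

```python
def colour_counts(cells):
    # checkerboard colouring: cell (r, c) is black iff (r + c) is even
    black = sum(1 for r, c in cells if (r + c) % 2 == 0)
    return black, len(cells) - black

def domino_tiling_possible(cells):
    # necessary condition only: every 2x1 domino covers exactly one
    # black and one white cell, so the two counts must be equal
    black, white = colour_counts(cells)
    return black == white

# full 8x8 board versus the board with two same-coloured corners removed
board = {(r, c) for r in range(8) for c in range(8)}
mutilated = board - {(0, 0), (7, 7)}
```

The paper's contribution is to generalize this kind of argument to arbitrary polyomino collections by encoding the colour-count constraints as linear Diophantine equations and searching for violations systematically.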
19 pages, 1861 KiB  
Article
Accelerate Incremental TSP Algorithms on Time Evolving Graphs with Partitioning Methods
by Shalini Sharma and Jerry Chou
Algorithms 2022, 15(2), 64; https://0-doi-org.brum.beds.ac.uk/10.3390/a15020064 - 14 Feb 2022
Cited by 1 | Viewed by 2368
Abstract
In time-evolving graphs, the graph changes at each time interval, and the previously computed results become invalid. We addressed this issue for the traveling salesman problem (TSP) in our previous work and proposed an incremental algorithm where the TSP tour is computed from the previous result instead of the whole graph. In our current work, we have mapped the TSP problem to three partitioning methods named vertex size attribute, edge attribute, and k-means; then, we compared the TSP tour results. We have also examined the effect of increasing the number of partitions on the total computation time. Through our experiments, we have observed that the vertex size attribute performs the best because of a balanced number of vertices in each partition. Full article
(This article belongs to the Special Issue Performance Optimization and Performance Evaluation)
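The vertex-count-balanced partitioning idea can be sketched as follows; this is a simplified, hypothetical illustration (round-robin balancing plus a nearest-neighbour subtour per partition), not the paper's incremental algorithm or its exact partitioners:

```python
import math

def vertex_balanced_partitions(points, k):
    # "vertex size attribute"-style split: round-robin assignment keeps
    # the number of vertices per partition balanced (sizes differ by <= 1)
    parts = [[] for _ in range(k)]
    for i, p in enumerate(points):
        parts[i % k].append(p)
    return parts

def nearest_neighbour_subtour(points):
    # cheap TSP heuristic run independently inside one partition
    tour = [points[0]]
    remaining = set(points[1:])
    while remaining:
        last = tour[-1]
        nxt = min(remaining, key=lambda p: math.dist(last, p))
        tour.append(nxt)
        remaining.remove(nxt)
    return tour

def partitioned_tour(points, k):
    # concatenate the per-partition subtours into one overall tour
    tour = []
    for part in vertex_balanced_partitions(points, k):
        if part:
            tour.extend(nearest_neighbour_subtour(part))
    return tour
```

Balanced partition sizes keep the per-partition work even, which matches the abstract's observation that the vertex-size-attribute method performed best.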
9 pages, 324 KiB  
Article
Pruning Adapters with Lottery Ticket
by Jiarun Wu and Qingliang Chen
Algorithms 2022, 15(2), 63; https://0-doi-org.brum.beds.ac.uk/10.3390/a15020063 - 14 Feb 2022
Cited by 1 | Viewed by 1795
Abstract
Massively pre-trained transformer models such as BERT have gained great success in many downstream NLP tasks. However, they are computationally expensive to fine-tune, slow for inference, and have large storage requirements. So, transfer learning with adapter modules has been introduced and has become a remarkable solution for those problems. Nevertheless, recent studies reveal that the parameters in adapters are actually still quite redundant, which could slow down inference speed when fusing multiple adapters for a specific downstream task, and thus, they can be further reduced. To address this issue, we propose three novel ways to prune the adapter modules iteratively based on the prestigious Lottery Ticket Hypothesis. Extensive experiments on the GLUE datasets show that the pruned adapters can achieve state-of-the-art results, with sizes reduced significantly while performance remains unchanged, and some pruned adapters even outperform the ones with the same size that are fine-tuned alone without pruning. Full article
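Iterative magnitude pruning under the Lottery Ticket Hypothesis can be sketched generically: repeatedly zero out the smallest-magnitude surviving weights, keeping a mask of survivors. The snippet below illustrates that loop on a plain weight list; the paper's three adapter-specific pruning strategies are more elaborate than this:

```python
def iterative_magnitude_prune(weights, keep_fraction, rounds):
    # lottery-ticket-style pruning sketch: each round keeps roughly
    # keep_fraction of the currently surviving weights, by magnitude
    mask = [True] * len(weights)
    for _ in range(rounds):
        alive = sorted(abs(w) for w, m in zip(weights, mask) if m)
        n_keep = max(1, int(len(alive) * keep_fraction))
        threshold = alive[-n_keep]  # magnitude of the n_keep-th largest
        mask = [m and abs(w) >= threshold for w, m in zip(weights, mask)]
    return [w if m else 0.0 for w, m in zip(weights, mask)]
```

In the full lottery-ticket procedure the surviving weights are rewound to their initial values and retrained between rounds; that training step is omitted here.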
30 pages, 1049 KiB  
Article
A Two-Party Quantum Parliament
by Theodore Andronikos and Michail Stefanidakis
Algorithms 2022, 15(2), 62; https://0-doi-org.brum.beds.ac.uk/10.3390/a15020062 - 14 Feb 2022
Cited by 8 | Viewed by 1818
Abstract
This paper introduces the first functional model of a quantum parliament that is dominated by two parties or coalitions, and may or may not contain independent legislators. We identify a single crucial parameter, aptly named free will radius, which can be used as a practical measure of the quantumness of the parties and the parliament as a whole. The free will radius used by the two parties determines the degree of independence that is afforded to the representatives of the parties. Setting the free will radius to zero degrades the quantum parliament to a classical one. On the other hand, setting the free will radius to its maximum value 1 makes the representatives totally independent. Moreover, we present a quantum circuit in Qiskit with which we simulate the operation of the quantum parliament under various scenarios. The experimental results allow us to arrive at some novel and fundamental conclusions that, we believe, provide new insights into the operation and the traits of a possible future quantum parliament. Finally, we propose the game “Passing the Bill,” which captures the operation of the quantum parliament and basic options available to the leadership of the two parties. Full article
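A classical (non-quantum) analogue of the free will radius can be simulated in a few lines: with radius 0 every member follows the party line, and with radius 1 members vote independently at random. This toy sketch is not the paper's Qiskit circuit; the function and parameter names are invented for illustration:

```python
import random

def party_votes(n_members, party_vote, free_will_radius, rng):
    # free_will_radius == 0 -> members always follow the party line;
    # free_will_radius == 1 -> members vote uniformly at random
    votes = []
    for _ in range(n_members):
        if rng.random() < free_will_radius:
            votes.append(rng.choice(["yes", "no"]))
        else:
            votes.append(party_vote)
    return votes
```

The quantum version differs in that member states can be in superposition and entangled; this classical sketch only mirrors the two limiting cases the abstract describes.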
23 pages, 1219 KiB  
Article
Finding Hamiltonian and Longest (s,t)-Paths of C-Shaped Supergrid Graphs in Linear Time
by Fatemeh Keshavarz-Kohjerdi and Ruo-Wei Hung
Algorithms 2022, 15(2), 61; https://0-doi-org.brum.beds.ac.uk/10.3390/a15020061 - 13 Feb 2022
Cited by 7 | Viewed by 1784
Abstract
A graph is called Hamiltonian connected if it contains a Hamiltonian path between any two distinct vertices. In the past, we proved the Hamiltonian path and cycle problems for general supergrid graphs to be NP-complete. However, they are still open for solid supergrid graphs. In this paper, we first verify the Hamiltonian cycle property of C-shaped supergrid graphs, which are a special case of solid supergrid graphs. Next, we show that C-shaped supergrid graphs are Hamiltonian connected except under a few conditions. For these exceptional cases, we compute their longest paths. Then, we design a linear-time algorithm to solve the longest path problem in these graphs. The Hamiltonian connectivity of C-shaped supergrid graphs can be applied to compute the optimal stitching trace of computer embroidery machines, and to construct the minimum printing trace of 3D printers with a C-like component being printed. Full article
(This article belongs to the Special Issue Discrete Optimization Theory, Algorithms, and Applications)
28 pages, 582 KiB  
Article
Allocating Small Transporters to Large Jobs
by Neil Jami, Neele Leithäuser and Christian Weiß
Algorithms 2022, 15(2), 60; https://0-doi-org.brum.beds.ac.uk/10.3390/a15020060 - 12 Feb 2022
Cited by 1 | Viewed by 1550
Abstract
We optimize the assignment of transporters to several jobs. Each job consists of processing a large, decomposable volume. A fleet of transporters is given, each of which can only process a limited volume at a time. After processing its share, a transporter must rest for a short time before being able to process another part. This time is only dependent on the assigned job, not on the transporter. Other transporters can take over the processing while a transporter rests. Transporters assigned to the same job wait for their turn in a queue. A transporter can only be assigned to one job. Our goal is to simultaneously minimize the maximum job completion time and the number of assigned transporters by computing the frontier of Pareto optimal solutions. In general, we show that it is NP-hard in the strong sense to compute even a single point on the Pareto frontier. We provide exact methods and heuristics to compute the Pareto frontier for the general problem and compare them computationally. Full article
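For two minimized objectives, a Pareto frontier can be extracted from a set of candidate solutions with a simple sort-and-sweep. The sketch below is generic (not the authors' exact methods, which must also generate the candidate schedules): sort by the first objective, then keep a point only while the second objective strictly improves:

```python
def pareto_frontier(points):
    # points are (makespan, n_transporters) pairs; both objectives minimized.
    # sort by the first objective (ties broken by the second), then sweep,
    # keeping a point only if it strictly improves the second objective
    frontier = []
    best_second = float("inf")
    for first, second in sorted(points):
        if second < best_second:
            frontier.append((first, second))
            best_second = second
    return frontier
```

Every kept point is non-dominated: anything later in the sorted order has an equal-or-worse first objective, so it survives only by being strictly better on the second.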
17 pages, 326 KiB  
Article
Regularization Algorithms for Linear Copositive Programming Problems: An Approach Based on the Concept of Immobile Indices
by Olga Kostyukova and Tatiana Tchemisova
Algorithms 2022, 15(2), 59; https://0-doi-org.brum.beds.ac.uk/10.3390/a15020059 - 11 Feb 2022
Cited by 1 | Viewed by 1556
Abstract
In this paper, we continue an earlier study of the regularization procedures of linear copositive problems and present new algorithms that can be considered as modifications of the algorithm described in our previous publication, which is based on the concept of immobile indices. The main steps of the regularization algorithms proposed in this paper are explicitly described and interpreted from the point of view of the facial geometry of the cone of copositive matrices. The results of the paper provide a deeper understanding of the structure of feasible sets of copositive problems and can be useful for developing a duality theory for these problems. Full article
(This article belongs to the Special Issue 1st Online Conference on Algorithms (IOCA2021))
18 pages, 6561 KiB  
Article
Robust Registration of Medical Images in the Presence of Spatially-Varying Noise
by Reza Abbasi-Asl, Aboozar Ghaffari and Emad Fatemizadeh
Algorithms 2022, 15(2), 58; https://0-doi-org.brum.beds.ac.uk/10.3390/a15020058 - 10 Feb 2022
Viewed by 2220
Abstract
Spatially-varying intensity noise is a common source of distortion in medical images and is often associated with reduced accuracy in medical image registration. In this paper, we propose two multi-resolution image registration algorithms based on Empirical Mode Decomposition (EMD) that are robust against additive spatially-varying noise. EMD is a multi-resolution tool that decomposes a signal into several principal patterns and residual components. Our first proposed algorithm (LR-EMD) is based on the registration of EMD feature maps from both floating and reference images in various resolutions. In the second algorithm (AFR-EMD), we first extract a single average feature map based on EMD and then use a simple hierarchical multi-resolution algorithm to register the average feature maps. We then showcase the superior performance of both algorithms in the registration of brain MRIs as well as retina images. For the registration of brain MR images, using mutual information as the similarity measure, both AFR-EMD and LR-EMD achieve a lower error rate in intensity (42% and 32%, respectively) and a lower error rate in transformation (52% and 41%, respectively) compared to intensity-based hierarchical registration. Our results suggest that the two proposed algorithms offer robust registration solutions in the presence of spatially-varying noise. Full article
16 pages, 2845 KiB  
Article
Data Fitting with Rational Functions: Scaled Null Space Method with Applications of Fitting Large Scale Shocks on Economic Variables and S-Parameters
by Indrit Hoxha and Taoufik Meklachi
Algorithms 2022, 15(2), 57; https://0-doi-org.brum.beds.ac.uk/10.3390/a15020057 - 09 Feb 2022
Cited by 2 | Viewed by 2638
Abstract
Curve fitting discrete data (xs, ys) with a smooth function is a complex problem when faced with sharply oscillating data or when the data are very large in size. We propose a straightforward method, one that is often overlooked, to fit discrete data (xs, ys) with rational functions. This method serves as a solid data fitting choice that proves to be fast and highly accurate. Its novelty lies in scaling positive explanatory data to the interval [0, 1] before solving the associated homogeneous linear problem Ax = 0. A rescaling is performed once the fitting function is derived. Each solution in the null space of A provides a rational fitting function. Amongst them, the best is chosen based on a pointwise error check. This avoids solving an overdetermined nonhomogeneous linear system Ax = b with a badly conditioned and scaled matrix A; with large data, the latter can lack accuracy and be computationally expensive. Furthermore, any linear combination of at least one solution in the basis of the null space produces a new fitting function, which gives the flexibility to choose the best rational function that fits the constraints of specific problems. We tested our method on many economic variables that experienced sharp oscillations owing to the effects of COVID-19-related shocks to the economy. Such data are intrinsically difficult to fit with a smooth function. Deriving such continuous model functions over a desired period is important in the analysis and prediction algorithms of such economic variables. The method can be extended to model behaviors of interest in other applied sciences, such as electrical engineering, where it successfully fitted network scattering-parameter measurements with high accuracy. Full article
17 pages, 446 KiB  
Article
Improved Scheduling Algorithm for Synchronous Data Flow Graphs on Homogeneous Multi-Core Systems
by Lei Wang, Chenguang Wang and Huabing Wang
Algorithms 2022, 15(2), 56; https://0-doi-org.brum.beds.ac.uk/10.3390/a15020056 - 09 Feb 2022
Cited by 1 | Viewed by 2195
Abstract
In order to accelerate the execution of streaming applications on multi-core systems, this article studies the scheduling problem of synchronous data flow graphs (SDFG) on homogeneous multi-core systems. To describe the data flow computation process, we propose the SDAG (Super DAG) computation model, which combines the DAG model with data-driven ideas. Further, we analyze current common SDFG scheduling algorithms and propose an improved SDFG scheduling algorithm, LSEFT (level-shortest-first-earliest-finish time). The LSEFT algorithm uses an inverse traversal algorithm to calculate task priorities in the task-selection phase; in the processor-selection phase, a shortest-job-priority earliest-finish-time policy replaces the original long-job-priority policy. In the experimental part, we designed an SDFG random generator and used it to generate 958 SDFGs as test cases to verify the scheduling algorithm. The experimental results show that our improved algorithm performs well for different numbers of processor cores, especially for 8 cores, where its speedup improves by 10.17% on average. Full article
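Stripped of the SDFG precedence constraints, the processor-selection policy reduces to a classic list-scheduling step: take jobs shortest first and always assign the next job to the core with the earliest finish time. A minimal sketch of that step (illustrative only, ignoring data dependencies between actors):

```python
import heapq

def sjf_earliest_finish(jobs, n_cores):
    # shortest-job-first: process runtimes in ascending order, assigning
    # each job to the core that currently finishes earliest (min-heap of
    # per-core finish times); returns the resulting makespan
    cores = [0.0] * n_cores
    heapq.heapify(cores)
    for runtime in sorted(jobs):
        earliest = heapq.heappop(cores)
        heapq.heappush(cores, earliest + runtime)
    return max(cores)
```

In the full LSEFT algorithm the job order is further constrained by the level (priority) values computed from the inverse traversal, so ready tasks, not all tasks, compete at each step.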
16 pages, 2467 KiB  
Article
Precision-Based Weighted Blending Distributed Ensemble Model for Emotion Classification
by Gayathri Soman, M. V. Vivek, M. V. Judy, Elpiniki Papageorgiou and Vassilis C. Gerogiannis
Algorithms 2022, 15(2), 55; https://0-doi-org.brum.beds.ac.uk/10.3390/a15020055 - 06 Feb 2022
Cited by 5 | Viewed by 2304
Abstract
Focusing on emotion recognition, this paper addresses the task of emotion classification and its performance with respect to accuracy, by investigating the capabilities of a distributed ensemble model using precision-based weighted blending. Research on emotion recognition and classification refers to the detection of an individual’s emotional state by considering various types of data as input features, such as textual data, facial expressions, vocal, gesture and physiological signal recognition, electrocardiogram (ECG) and electrodermography (EDG)/galvanic skin response (GSR). The extraction of effective emotional features from different types of input data, as well as the analysis of large volumes of real-time data, have become increasingly important tasks in order to perform accurate classification. Taking into consideration the volume and variety of the examined problem, a machine learning model that works in a distributed manner is essential. In this direction, we propose a precision-based weighted blending distributed ensemble model for emotion classification. The suggested ensemble model can work well in a distributed manner using the concepts of Spark’s resilient distributed datasets, which provide quick in-memory processing capabilities and also perform iterative computations effectively. On the model validation set, weights are assigned to the different classifiers in the ensemble model based on their precision values. Each weight determines the importance of the respective classifier in terms of its performing prediction, while a new model is built upon the derived weights. The produced model performs the task of final prediction on the test dataset. The results disclose that the proposed ensemble model is sufficiently accurate in differentiating between primary emotions (such as sadness, fear, and anger) and secondary emotions. 
The suggested ensemble model achieved accuracies of 76.2%, 99.4%, and 99.6% on the FER-2013, CK+, and FERG-DB datasets, respectively. Full article
(This article belongs to the Special Issue Ensemble Algorithms and/or Explainability)
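The precision-based weighted blending described in the abstract can be sketched in a few lines of Python (not the authors' Spark implementation): each base classifier's probability output is weighted by the precision it achieved on a validation set. All classifier outputs and precision values below are invented for the example.

```python
# Illustrative sketch of precision-based weighted blending.
def precision_weights(precisions):
    """Normalize per-classifier validation precisions into blend weights."""
    total = sum(precisions)
    return [p / total for p in precisions]

def blended_predict(class_probs, weights):
    """Weighted average of per-classifier probability vectors, then argmax."""
    n_classes = len(class_probs[0])
    blended = [
        sum(w * probs[c] for w, probs in zip(weights, class_probs))
        for c in range(n_classes)
    ]
    return max(range(n_classes), key=blended.__getitem__)

# Three hypothetical classifiers with validation precisions 0.9, 0.6, 0.5:
weights = precision_weights([0.9, 0.6, 0.5])
# Their class-probability outputs for one sample (3 emotion classes):
probs = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1], [0.2, 0.2, 0.6]]
print(blended_predict(probs, weights))  # 0: the class favoured by the most precise model
```

In a distributed setting, each classifier's predictions would be computed on separate partitions and only the small weight vector shared, which is what makes the blending step cheap.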

14 pages, 295 KiB  
Article
A Biased-Randomized Discrete-Event Algorithm for the Hybrid Flow Shop Problem with Time Dependencies and Priority Constraints
by Christoph Laroque, Madlene Leißau, Pedro Copado, Christin Schumacher, Javier Panadero and Angel A. Juan
Algorithms 2022, 15(2), 54; https://0-doi-org.brum.beds.ac.uk/10.3390/a15020054 - 02 Feb 2022
Cited by 2 | Viewed by 1800
Abstract
Based on a real-world application in the semiconductor industry, this article models and discusses a hybrid flow shop problem with time dependencies and priority constraints. The analyzed problem considers a production where a large number of heterogeneous jobs are processed by a number of machines. The route that each job has to follow depends upon its type, and, in addition, some machines require that a number of jobs are combined in batches before starting their processing. The hybrid flow model is also subject to a global priority rule and a “same setup” rule. The primary goal of this study was to find a solution set (permutation of jobs) that minimizes the production makespan. While simulation models are frequently employed to model these time-dependent flow shop systems, an optimization component is needed in order to generate high-quality solution sets. In this study, a novel algorithm is proposed to deal with the complexity of the underlying system. Our algorithm combines biased-randomization techniques with a discrete-event heuristic, which allows us to model dependencies caused by batching and different paths of jobs efficiently in a near-natural way. As shown in a series of numerical experiments, the proposed simulation-optimization algorithm can find solutions that significantly outperform those provided by employing state-of-the-art simulation software. Full article
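The biased-randomization idea can be sketched compactly: candidates are kept sorted by a greedy criterion, but selection follows a skewed (here geometric) distribution, so repeated runs explore many near-greedy job orderings. The Python toy below uses a plain two-machine permutation flow shop, without the paper's batching, priority, or setup rules; the instance data are invented.

```python
import math
import random

def makespan(order, proc):
    """Makespan of a permutation flow shop; proc[job][machine] are times."""
    m = len(proc[0])
    finish = [0.0] * m
    for j in order:
        for k in range(m):
            # a job starts on machine k when both the machine and the
            # job's previous operation are done
            start = max(finish[k], finish[k - 1] if k else 0)
            finish[k] = start + proc[j][k]
    return finish[-1]

def biased_randomized_order(jobs, key, beta=0.3, rng=random):
    """Repeatedly pick from the greedy-sorted list with a geometric bias."""
    remaining = sorted(jobs, key=key)
    order = []
    while remaining:
        idx = int(math.log(1 - rng.random()) / math.log(1 - beta))
        order.append(remaining.pop(idx % len(remaining)))
    return order

proc = [[3, 2], [1, 4], [2, 2], [4, 1]]   # 4 jobs on 2 machines
jobs = list(range(4))
rng = random.Random(0)
best = min(
    (biased_randomized_order(jobs, key=lambda j: proc[j][0], rng=rng)
     for _ in range(200)),
    key=lambda o: makespan(o, proc),
)
print(makespan(best, proc))
```

Because every permutation keeps a positive probability, the multi-start loop rarely misses the greedy order yet still escapes its local optimum.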

15 pages, 1451 KiB  
Article
A Preliminary Study on the Resolution of Electro-Thermal Multi-Physics Coupling Problem Using Physics-Informed Neural Network (PINN)
by Yaoyao Ma, Xiaoyu Xu, Shuai Yan and Zhuoxiang Ren
Algorithms 2022, 15(2), 53; https://0-doi-org.brum.beds.ac.uk/10.3390/a15020053 - 01 Feb 2022
Cited by 8 | Viewed by 3322
Abstract
The problem of electro-thermal coupling is widely present in integrated circuits (ICs). The accuracy and efficiency of traditional solution methods, such as the finite element method (FEM), are tightly related to the quality and density of the mesh. Recently, the physics-informed neural network (PINN) was proposed as a method for solving differential equations. This method is mesh-free and generalizes the process of solving PDEs regardless of the equations’ structure. Therefore, an experiment is conducted to explore the feasibility of PINN in solving electro-thermal coupling problems, which involve the electrokinetic field and the steady-state thermal field. We utilize two neural networks, trained sequentially, to approximate the electric field and the thermal field, respectively. The experimental results show that PINN provides good accuracy in solving electro-thermal coupling problems. Full article

26 pages, 482 KiB  
Article
k-Center Clustering with Outliers in Sliding Windows
by Paolo Pellizzoni, Andrea Pietracaprina and Geppino Pucci
Algorithms 2022, 15(2), 52; https://0-doi-org.brum.beds.ac.uk/10.3390/a15020052 - 31 Jan 2022
Cited by 4 | Viewed by 2680
Abstract
Metric k-center clustering is a fundamental unsupervised learning primitive. Although widely used, this primitive is heavily affected by noise in the data, so a more sensible variant seeks the best solution that disregards a given number z of points of the dataset, which are called outliers. We provide efficient algorithms for this important variant in the streaming model under the sliding window setting, where, at each time step, the dataset to be clustered is the window W of the most recent data items. For general metric spaces, our algorithms achieve an O(1) approximation and, remarkably, require a working memory linear in k+z and only logarithmic in |W|. For spaces of bounded doubling dimension, the approximation can be made arbitrarily close to 3. For these latter spaces, we show, as a by-product, how to estimate the effective diameter of the window W, which is a measure of the spread of the window points that disregards a given fraction of noisy distances. We also provide experimental evidence of the practical viability of the improved clustering and diameter estimation algorithms. Full article
(This article belongs to the Special Issue Discrete Optimization Theory, Algorithms, and Applications)
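As background for the clustering variant, a classic offline building block, in the spirit of the greedy 3-approximation of Charikar et al. (not the authors' sliding-window algorithm), can be sketched in a few lines of Python; the point set is invented.

```python
import math

def covers(p, q, r):
    return math.dist(p, q) <= r

def kcenter_outliers_feasible(points, k, z, r):
    """Greedy check: can k balls of radius 3r cover all but at most z
    points, assuming the optimum achieves radius r? Repeatedly pick the
    point whose r-ball covers the most uncovered points, then discard
    everything inside the expanded 3r-ball around it."""
    uncovered = list(points)
    for _ in range(k):
        if not uncovered:
            break
        best = max(uncovered,
                   key=lambda p: sum(covers(p, q, r) for q in uncovered))
        uncovered = [q for q in uncovered if not covers(best, q, 3 * r)]
    return len(uncovered) <= z

pts = [(0, 0), (0.1, 0), (0, 0.1), (5, 5), (5.1, 5), (100, 100)]  # one outlier
print(kcenter_outliers_feasible(pts, k=2, z=1, r=0.2))  # True
```

Running this check over a geometric sequence of candidate radii yields the usual 3-approximation; the sliding-window algorithms of the paper additionally keep the memory independent of |W|.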

14 pages, 776 KiB  
Article
No Cell Left behind: Automated, Stochastic, Physics-Based Tracking of Every Cell in a Dense, Growing Colony
by Huy Pham, Emile R. Shehada, Shawna Stahlheber, Kushagra Pandey and Wayne B. Hayes
Algorithms 2022, 15(2), 51; https://0-doi-org.brum.beds.ac.uk/10.3390/a15020051 - 30 Jan 2022
Cited by 1 | Viewed by 2544
Abstract
Motivation: Precise tracking of individual cells—especially tracking the family lineage, for example in a developing embryo—has widespread applications in biology and medicine. Due to significant noise in microscope images, existing methods have difficulty precisely tracking cell activities. These difficulties often require human intervention to resolve. Humans are helpful because our brain naturally and automatically builds a simulation “model” of any scene that we observe. Because we understand simple truths about the world—for example, cells can move and divide, but they cannot instantaneously move vast distances—this model “in our heads” helps us to severely constrain the possible interpretations of what we see, allowing us to easily distinguish signal from noise and track the motion of cells even in the presence of extreme levels of noise that would completely confound existing automated methods. Results: Here, we mimic the ability of the human brain by building an explicit computer simulation model of the scene. Our simulated cells are programmed to allow movement and cell division consistent with reality. At each video frame, we stochastically generate millions of nearby “Universes” and evolve them stochastically to the next frame. We then find and fit the best universes to reality by minimizing the residual between the real image frame and a synthetic image of the simulation. The rule-based simulation puts extremely stringent constraints on possible interpretations of the data, allowing our system to perform far better than existing methods even in the presence of extreme levels of image noise. We demonstrate the viability of this method by accurately tracking every cell in a colony that grows from 4 to over 300 individuals, doing about as well as a human can in the difficult task of tracking cell lineages. Full article
(This article belongs to the Special Issue Stochastic Algorithms and Their Applications)
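The generate-and-score loop can be illustrated with a deliberately tiny 1-D toy (the real Cell Universe operates on 2-D microscope frames with rotation, growth, and division); all names and parameters here are invented for the sketch.

```python
import random

def render(cells, width=50):
    """Synthetic 'image': each cell lights up a 5-pixel neighbourhood."""
    img = [0.0] * width
    for c in cells:
        for x in range(max(0, c - 2), min(width, c + 3)):
            img[x] = 1.0
    return img

def residual(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def track_step(prev_cells, observed, n_universes=500, max_move=3, seed=1):
    """Generate candidate universes obeying a 'no teleporting' rule and
    keep the one whose rendering best matches the observed frame."""
    rng = random.Random(seed)
    best, best_err = prev_cells, float("inf")
    for _ in range(n_universes):
        cand = [c + rng.randint(-max_move, max_move) for c in prev_cells]
        err = residual(render(cand), observed)
        if err < best_err:
            best, best_err = cand, err
    return best

true_cells = [10, 30]
observed = render(true_cells)
print(track_step([9, 32], observed))  # close to [10, 30]
```

The physics rule (bounded movement per frame) is what shrinks the search space enough for the residual fit to be reliable even under noise.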

15 pages, 1352 KiB  
Article
Adaptive Authentication Protocol Based on Zero-Knowledge Proof
by Nikita Konstantinovich Chistousov, Igor Anatolyevich Kalmykov, Daniil Vyacheslavovich Dukhovnyj, Maksim Igorevich Kalmykov and Aleksandr Anatolyevich Olenev
Algorithms 2022, 15(2), 50; https://0-doi-org.brum.beds.ac.uk/10.3390/a15020050 - 30 Jan 2022
Cited by 6 | Viewed by 3421
Abstract
Authentication protocols are expanding their application scope in wireless information systems, among which are low-orbit satellite communication systems (LOSCS) for the OneWeb space Internet, automatic object identification systems using RFID, the Internet of Things, intelligent transportation systems (ITS), and Vehicular Ad Hoc Networks (VANET). This is due to the fact that authentication protocols effectively resist a number of attacks on the wireless data transmission channels in these systems. The main disadvantage of most authentication protocols is the use of symmetric and asymmetric encryption systems to ensure high cryptographic strength, which creates the problem of delivering keys to the prover and the verifier. At the same time, a compromise of the keys leads to a decrease in the level of protection of the transmitted data. Zero-knowledge authentication protocols (ZKAP) are able to eliminate this disadvantage. However, most of these protocols use multiple rounds to authenticate the prover. Therefore, a ZKAP with minimal time costs is developed in this article. A scheme for adapting the protocol parameters has been developed to increase its efficiency. Reducing the level of confidentiality reduces the time spent executing the authentication protocol, which increases the volume of information traffic. Conversely, an increase in the confidentiality of the protocol entails an increase in the time needed for authentication of the prover, which reduces the volume of information traffic. The FPGA Artix-7 xc7a12ticsg325-1L was used to estimate the time spent implementing the adaptive ZKAP. Testing was performed for 32- and 64-bit adaptive authentication protocols. Full article
(This article belongs to the Special Issue Algorithms for Communication Networks)
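The paper's adaptive protocol is not reproduced here, but the core prover/verifier exchange of zero-knowledge identification can be illustrated with a classic Fiat–Shamir round (toy parameters; real deployments use large primes and many rounds).

```python
import secrets

# Public: n = p*q and v = s^2 mod n; secret: s.
# The prover convinces the verifier it knows s without revealing it.
p, q = 10007, 10009           # toy primes for illustration only
n = p * q
s = 1234567                   # prover's secret
v = pow(s, 2, n)              # public key

def prover_commit():
    r = secrets.randbelow(n - 1) + 1
    return r, pow(r, 2, n)    # commitment x = r^2 mod n

def prover_respond(r, challenge_bit):
    return (r * pow(s, challenge_bit, n)) % n   # y = r * s^b mod n

def verifier_check(x, challenge_bit, y):
    # y^2 = r^2 * s^(2b) = x * v^b (mod n) holds iff the prover knows s
    return pow(y, 2, n) == (x * pow(v, challenge_bit, n)) % n

# One round; repeating t rounds drops a cheater's success odds to 2^-t.
r, x = prover_commit()
b = secrets.randbelow(2)
assert verifier_check(x, b, prover_respond(r, b))
```

The trade-off the paper adapts is visible here: fewer rounds means less traffic and time but a higher cheating probability, and vice versa.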

14 pages, 1182 KiB  
Article
Using Explainable Machine Learning to Explore the Impact of Synoptic Reporting on Prostate Cancer
by Femke M. Janssen, Katja K. H. Aben, Berdine L. Heesterman, Quirinus J. M. Voorham, Paul A. Seegers and Arturo Moncada-Torres
Algorithms 2022, 15(2), 49; https://0-doi-org.brum.beds.ac.uk/10.3390/a15020049 - 29 Jan 2022
Cited by 5 | Viewed by 3817
Abstract
Machine learning (ML) models have proven to be an attractive alternative to traditional statistical methods in oncology. However, they are often regarded as black boxes, hindering their adoption for answering real-life clinical questions. In this paper, we show a practical application of explainable machine learning (XML). Specifically, we explored the effect that synoptic reporting (SR; i.e., reports where data elements are presented as discrete data items) in Pathology has on the survival of a population of 14,878 Dutch prostate cancer patients. We compared the performance of a Cox Proportional Hazards model (CPH) against that of an eXtreme Gradient Boosting model (XGB) in predicting patient ranked survival. We found that the XGB model (c-index = 0.67) performed significantly better than the CPH (c-index = 0.58). Moreover, we used Shapley Additive Explanations (SHAP) values to generate a quantitative mathematical representation of how features—including usage of SR—contributed to the models’ output. The XGB model in combination with SHAP visualizations revealed interesting interaction effects between SR and the rest of the most important features. These results hint that SR has a moderate positive impact on predicted patient survival. Moreover, adding an explainability layer to predictive ML models can open their black box, making them more accessible and easier to understand by the user. This can make XML-based techniques appealing alternatives to the classical methods used in oncological research and in health care in general. Full article
(This article belongs to the Special Issue Interpretability, Accountability and Robustness in Machine Learning)
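The c-index reported above measures how well predicted risks rank actual survival times. A minimal Python sketch, which ignores censoring (a real survival analysis must handle it), makes the definition concrete:

```python
from itertools import combinations

def c_index(times, risk_scores):
    """Fraction of comparable patient pairs in which the higher predicted
    risk corresponds to the shorter survival time; ties in risk count as
    half. Censoring is ignored in this minimal sketch."""
    concordant, comparable = 0.0, 0
    for i, j in combinations(range(len(times)), 2):
        if times[i] == times[j]:
            continue            # pair is not comparable
        comparable += 1
        short, long_ = (i, j) if times[i] < times[j] else (j, i)
        if risk_scores[short] > risk_scores[long_]:
            concordant += 1
        elif risk_scores[short] == risk_scores[long_]:
            concordant += 0.5
    return concordant / comparable

# Perfectly ranked predictions give a c-index of 1.0:
print(c_index([2, 5, 9], [0.9, 0.5, 0.1]))  # 1.0
```

A value of 0.5 corresponds to random ranking, which puts the reported 0.58 (CPH) vs. 0.67 (XGB) gap in context.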

17 pages, 869 KiB  
Article
Two Taylor Algorithms for Computing the Action of the Matrix Exponential on a Vector
by Javier Ibáñez, José M. Alonso, Pedro Alonso-Jordá, Emilio Defez and Jorge Sastre
Algorithms 2022, 15(2), 48; https://0-doi-org.brum.beds.ac.uk/10.3390/a15020048 - 28 Jan 2022
Cited by 3 | Viewed by 2362
Abstract
The action of the matrix exponential on a vector, e^{At}v, with A ∈ ℂ^{n×n} and v ∈ ℂ^n, appears in problems that arise in mathematics, physics, and engineering, such as the solution of systems of linear ordinary differential equations with constant coefficients. Nowadays, several state-of-the-art approximations are available for estimating this type of action. In this work, two Taylor algorithms are proposed for computing e^{A}v, which make use of the scaling and recovering technique based on a backward or forward error analysis. A battery of highly heterogeneous test matrices has been used in the different experiments performed to compare the numerical and computational properties of these algorithms, implemented in the MATLAB language. In general, both of them improve on those already existing in the literature in terms of accuracy and response time. Moreover, a high-performance computing version that is able to take advantage of the computational power of a GPU platform has been developed, making it possible to tackle high-dimension problems with significantly reduced execution times. Full article
(This article belongs to the Collection Feature Paper in Algorithms and Complexity Theory)
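The scaling-and-recovering idea behind such algorithms can be sketched in a few lines: split e^{A}v = (e^{A/2^s})^{2^s} v and approximate each factor's action with a truncated Taylor series, using only matrix–vector products. This is illustrative only; the authors' implementations choose the scaling s and Taylor degree m from a backward or forward error analysis.

```python
def mat_vec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def exp_action(A, v, s=4, m=10):
    """Approximate e^{A} v: apply an m-term Taylor series of e^{A/2^s}
    to the vector, 2^s times (the 'recovering' phase)."""
    As = [[a / 2**s for a in row] for row in A]   # scaled matrix A / 2^s
    for _ in range(2**s):
        term, acc = list(v), list(v)
        for k in range(1, m + 1):                  # truncated Taylor series
            term = [t / k for t in mat_vec(As, term)]
            acc = [a + t for a, t in zip(acc, term)]
        v = acc
    return v

# For nilpotent A = [[0, 1], [0, 0]], e^{A} v = [v0 + v1, v1]:
print(exp_action([[0.0, 1.0], [0.0, 0.0]], [1.0, 2.0]))  # ≈ [3.0, 2.0]
```

Note that A itself is never exponentiated: only matrix–vector products appear, which is what makes the approach attractive for large, sparse A and for GPU execution.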

20 pages, 4176 KiB  
Article
A Novel MCDA-Based Methodology Dealing with Dynamics and Ambiguities Resulting from Citizen Participation in the Context of the Energy Transition
by Sadeeb Simon Ottenburger, Stella Möhrle, Tim Oliver Müller and Wolfgang Raskob
Algorithms 2022, 15(2), 47; https://0-doi-org.brum.beds.ac.uk/10.3390/a15020047 - 28 Jan 2022
Cited by 1 | Viewed by 2256
Abstract
In the context of the energy transition, sound decision making regarding the development of renewable energy systems faces various technical and societal challenges. In addition to climate-related uncertainties affecting technical issues of reliable grid planning, there are also subtle aspects and uncertainties related to the integration of energy technologies into built environments. Citizens’ opinions on grid development may be ambiguous or divergent in terms of broad acceptance of the energy transition in general, and they may have negative attitudes towards concrete planning in their local environment. First, this article identifies the issue of discrepancies between preferences of a fixed stakeholder group with respect to the question of the integration of renewable energy technology, posed from different perspectives and at different points in time, and considers it as a fundamental problem in the context of robust decision making in sustainable energy system planning. Second, for dealing with that issue, a novel dynamic decision support methodology is presented that includes multiple surveys, statistical analysis of the discrepancies that may arise, and multicriteria decision analysis that specifically incorporates the opinions of citizens. Citizens are considered as stakeholders and participants in smart decision-making processes. A case study applying agent-based simulations underlines the relevance of the methodology proposed for decision making in the context of renewable energies. Full article

22 pages, 24821 KiB  
Article
Test and Validation of the Surrogate-Based, Multi-Objective GOMORS Algorithm against the NSGA-II Algorithm in Structural Shape Optimization
by Yannis Werner, Tim van Hout, Vijey Subramani Raja Gopalan and Thomas Vietor
Algorithms 2022, 15(2), 46; https://0-doi-org.brum.beds.ac.uk/10.3390/a15020046 - 28 Jan 2022
Viewed by 2951
Abstract
Nowadays, product development times are constantly decreasing, while the requirements for the products themselves have increased significantly in the last decade. Hence, manufacturers use Computer-Aided Design (CAD) and Finite-Element (FE) methods to develop better products in shorter times. Shape optimization offers great potential to improve many high-fidelity numerical problems, such as the crash performance of cars, and the proper selection of optimization algorithms can substantially reduce optimization time. This article reviews the optimization performance of two different algorithms and frameworks for the structural behavior of a b-pillar. A b-pillar is the structural component between a car’s front and rear doors, loaded under static and crash requirements. Furthermore, the validation of the algorithm includes a feasibility constraint. Recently, an optimization routine was implemented and validated for a Non-dominated Sorting Genetic Algorithm (NSGA-II) implementation. Different multi-objective optimization algorithms are reviewed and methodically ranked in a comparative study by given criteria. In this case, the Gap Optimized Multi-Objective Optimization using Response Surfaces (GOMORS) framework is chosen and implemented into the existing Institut für Konstruktionstechnik Optimizes Shapes (IKOS) framework. Specifically, the article compares NSGA-II and GOMORS directly for linear, non-linear, and feasibility optimization scenarios. The results show that GOMORS vastly outperforms NSGA-II with respect to the number of function calls and Pareto-efficient results without the feasibility constraint. The problem is reformulated as an unconstrained, three-objective optimization problem to analyze the influence of the constraint. The constrained and unconstrained approaches show equal performance for the given scenarios. 
Accordingly, the authors provide a clear recommendation towards the surrogate-based GOMORS for costly and multi-objective evaluations. Furthermore, the algorithm can handle the feasibility constraint properly when formulated as an objective function and as a constraint. Full article
(This article belongs to the Special Issue Nature-Inspired Algorithms in Machine Learning)

16 pages, 619 KiB  
Article
Calibration of an Adaptive Genetic Algorithm for Modeling Opinion Diffusion
by Kara Layne Johnson and Nicole Bohme Carnegie 
Algorithms 2022, 15(2), 45; https://0-doi-org.brum.beds.ac.uk/10.3390/a15020045 - 28 Jan 2022
Cited by 1 | Viewed by 2465
Abstract
Genetic algorithms mimic the process of natural selection in order to solve optimization problems with minimal assumptions and perform well when the objective function has local optima on the search space. These algorithms treat potential solutions to the optimization problem as chromosomes, consisting of genes which undergo biologically-inspired operators to identify a better solution. Hyperparameters or control parameters determine the way these operators are implemented. We created a genetic algorithm in order to fit a DeGroot opinion diffusion model using limited data, making use of selection, blending, crossover, mutation, and survival operators. We adapted the algorithm from a genetic algorithm for design of mixture experiments, but the new algorithm required substantial changes due to model assumptions and the large parameter space relative to the design space. In addition to introducing new hyperparameters, these changes mean the hyperparameter values suggested for the original algorithm cannot be expected to result in optimal performance. To make the algorithm for modeling opinion diffusion more accessible to researchers, we conduct a simulation study investigating hyperparameter values. We find the algorithm is robust to the values selected for most hyperparameters and provide suggestions for initial, if not default, values and recommendations for adjustments based on algorithm output. Full article
(This article belongs to the Special Issue Bio-Inspired Algorithms)
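For readers unfamiliar with the operators mentioned above, a generic real-coded genetic algorithm with tournament selection, blend crossover, Gaussian mutation, and elitist survival can be sketched as follows. This is not the authors' DeGroot-fitting algorithm; all hyperparameter values and the toy objective are illustrative.

```python
import random

def evolve(fitness, n_genes, pop_size=30, generations=60,
           cx_rate=0.9, mut_rate=0.1, mut_scale=0.3, seed=0):
    """Maximize `fitness` over real-valued chromosomes of length n_genes."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(n_genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def pick():                              # tournament selection
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        children = []
        while len(children) < pop_size:
            p1, p2 = pick(), pick()
            if rng.random() < cx_rate:           # blend crossover
                w = rng.random()
                child = [w * g1 + (1 - w) * g2 for g1, g2 in zip(p1, p2)]
            else:
                child = list(p1)
            for i in range(n_genes):             # Gaussian mutation
                if rng.random() < mut_rate:
                    child[i] += rng.gauss(0, mut_scale)
            children.append(child)
        children[0] = max(pop, key=fitness)      # elitist survival
        pop = children
    return max(pop, key=fitness)

# Maximize -(x - 0.5)^2 - (y + 0.2)^2; the optimum is near (0.5, -0.2):
best = evolve(lambda g: -(g[0] - 0.5)**2 - (g[1] + 0.2)**2, n_genes=2)
print(best)
```

The hyperparameters in the signature (population size, crossover and mutation rates, mutation scale) are exactly the kind of control parameters whose calibration the paper studies.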

6 pages, 269 KiB  
Editorial
Acknowledgment to Reviewers of Algorithms in 2021
by Algorithms Editorial Office
Algorithms 2022, 15(2), 44; https://0-doi-org.brum.beds.ac.uk/10.3390/a15020044 - 28 Jan 2022
Viewed by 1643
Abstract
Rigorous peer-reviews are the basis of high-quality academic publishing [...] Full article
10 pages, 317 KiB  
Article
Clustering with Nature-Inspired Algorithm Based on Territorial Behavior of Predatory Animals
by Maciej Trzciński, Piotr A. Kowalski and Szymon Łukasik
Algorithms 2022, 15(2), 43; https://0-doi-org.brum.beds.ac.uk/10.3390/a15020043 - 28 Jan 2022
Cited by 1 | Viewed by 2426
Abstract
Clustering constitutes the well-known problem of dividing an unlabelled dataset into disjoint groups of data elements. It can be tackled with standard statistical methods but also with metaheuristics, which offer more flexibility and decent performance. This paper studies the application of a clustering algorithm inspired by the territorial behavior of predatory animals, named the Predatory Animals Algorithm (PAA). Besides the description of the PAA, the results of its experimental evaluation against the classic k-means algorithm are provided. It is concluded that the application of this newly created nature-inspired technique brings very promising outcomes. The discussion of the obtained results is followed by areas of possible improvement and plans for further research. Full article
(This article belongs to the Special Issue Nature-Inspired Algorithms in Machine Learning)

33 pages, 2350 KiB  
Article
Recent Advances in Positive-Instance Driven Graph Searching
by Max Bannach and Sebastian Berndt
Algorithms 2022, 15(2), 42; https://0-doi-org.brum.beds.ac.uk/10.3390/a15020042 - 27 Jan 2022
Cited by 1 | Viewed by 2252
Abstract
Research on the similarity of a graph to being a tree—called the treewidth of the graph—has seen an enormous rise within the last decade, but a practically fast algorithm for this task has been discovered only recently by Tamaki (ESA 2017). It is based on dynamic programming and makes use of the fact that the number of positive subinstances is typically substantially smaller than the number of all subinstances. Algorithms producing only such subinstances are called positive-instance driven (PID). The parameter treedepth has a similar story. It was popularized through the graph sparsity project and is theoretically well understood—but the first practical algorithm was discovered only recently by Trimble (IPEC 2020) and is based on the same paradigm. We give an alternative and unifying view on such algorithms from the perspective of the corresponding configuration graphs in certain two-player games. This results in a single algorithm that can compute a wide range of important graph parameters such as treewidth, pathwidth, and treedepth. We complement this algorithm with a novel randomized data structure that accelerates the enumeration of subproblems in positive-instance driven algorithms. Full article
