Algorithms, Volume 12, Issue 8 (August 2019) – 29 articles

Cover Story: The convolutional neural network is a type of deep neural network used for many visual recognition tasks. These tasks increasingly run on edge devices, whose recognition accuracy can be improved by adopting such networks. It is therefore important that edge computing platforms support the inference of convolutional neural networks within their limited computational and energy budgets. Reconfigurable computing devices are being used for inference at the edge due to their hardware flexibility, which allows the target computing platform to be adapted to the convolutional neural network model, achieving high performance and energy efficiency. This paper surveys the reconfigurable computing architectures available for running convolutional neural networks on edge devices.
14 pages, 1410 KiB  
Article
A Distributed Hybrid Community Detection Methodology for Social Networks
by Konstantinos Georgiou, Christos Makris and Georgios Pispirigos
Algorithms 2019, 12(8), 175; https://0-doi-org.brum.beds.ac.uk/10.3390/a12080175 - 17 Aug 2019
Cited by 2 | Viewed by 5697
Abstract
Nowadays, the amount of digitally available information has grown tremendously, with real-world data graphs reaching millions or even billions of vertices. Hence, community detection, in which groups of vertices are formed according to a well-defined similarity measure, has never been more essential, affecting a vast range of scientific fields such as bioinformatics, sociology, discrete mathematics, nonlinear dynamics, digital marketing, and computer science. Although an impressive amount of research has already been published on this NP-hard class of problem, the existing methods and algorithms have proven inefficient and severely unscalable in practice. In this regard, the purpose of this manuscript is to combine the network topology properties expressed by the loose similarity and the local edge betweenness, a newly proposed alternative to the Girvan–Newman edge betweenness measure, with the intrinsic user content information, in order to introduce a novel and highly distributed hybrid community detection methodology. The proposed approach has been thoroughly tested on various real social graphs, compared against classic divisive community detection algorithms that serve as baselines, and proven exceptionally scalable, highly efficient, and adequately accurate in revealing the underlying network hierarchy.

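For readers unfamiliar with the divisive baseline this methodology is compared against, here is a minimal sketch of the classic Girvan–Newman step, whose global edge betweenness the paper's local measure replaces (using networkx; the graph and target community count are illustrative):

```python
import networkx as nx

def divisive_communities(G, target_parts=2):
    """Girvan-Newman divisive step: repeatedly remove the edge with the
    highest (global) edge betweenness until the graph splits."""
    H = G.copy()
    while nx.number_connected_components(H) < target_parts:
        eb = nx.edge_betweenness_centrality(H)
        u, v = max(eb, key=eb.get)
        H.remove_edge(u, v)
    return list(nx.connected_components(H))

G = nx.karate_club_graph()
print(divisive_communities(G, 2))
```

Recomputing betweenness over the whole graph after every removal is exactly the scalability bottleneck that motivates local alternatives.
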
15 pages, 2618 KiB  
Article
A Novel Blind Restoration and Reconstruction Approach for CT Images Based on Sparse Representation and Hierarchical Bayesian-MAP
by Yunshan Sun, Liyi Zhang, Yanqin Li and Juan Meng
Algorithms 2019, 12(8), 174; https://0-doi-org.brum.beds.ac.uk/10.3390/a12080174 - 16 Aug 2019
Cited by 1 | Viewed by 3002
Abstract
Computed tomography (CT) image reconstruction and restoration are very important in medical image processing and are jointly posed as an inverse problem. Iterative image reconstruction is a key tool for increasing the applicability of CT imaging and reducing radiation dose. Nevertheless, traditional iterative reconstruction methods are limited by the sampling theorem, and blurring of the projection data propagates unhampered artifacts into the reconstructed image. To overcome these problems, image restoration techniques should be developed to accurately correct a wide variety of image-degrading effects and thereby effectively improve reconstruction. In this paper, a blind image restoration technique is embedded in compressive sensing CT image reconstruction, which yields a high-quality reconstructed image from fewer projection data. Because a small amount of data can be acquired with less radiation in a shorter time, high-quality reconstruction from less data is equivalent to reducing radiation dose. Technically, both the blurring process and the sparse representation of the sharp CT image are first modeled as a series of parameters; the sharp CT image is obtained from the estimated sparse representation. Then, the model parameters are estimated by a hierarchical Bayesian maximum a posteriori formulation. Finally, the estimated model parameters are optimized to obtain the final reconstruction. We demonstrate the effectiveness of the proposed method with simulation experiments in terms of the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM).
(This article belongs to the Special Issue The Second Symposium on Machine Intelligence and Data Analytics)

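The reported metrics are standard and easy to reproduce; a small sketch using scikit-image, where the arrays stand in for a ground-truth phantom and a reconstruction:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

ref = np.random.rand(128, 128)                 # stand-in for the ground-truth phantom
rec = ref + 0.05 * np.random.randn(128, 128)   # stand-in for a reconstruction

print("PSNR:", peak_signal_noise_ratio(ref, rec, data_range=1.0))
print("SSIM:", structural_similarity(ref, rec, data_range=1.0))
```
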
21 pages, 1093 KiB  
Article
Long Short-Term Memory Neural Network Applied to Train Dynamic Model and Speed Prediction
by Zhen Li, Tao Tang and Chunhai Gao
Algorithms 2019, 12(8), 173; https://0-doi-org.brum.beds.ac.uk/10.3390/a12080173 - 16 Aug 2019
Cited by 17 | Viewed by 3420
Abstract
The automatic train operation system is a significant component of intelligent railway transportation. As a fundamental problem, the construction of the train dynamic model has been extensively researched using parametric approaches. Parametric models may perform poorly due to unrealistic assumptions and changeable environments. In this paper, a long short-term memory network is carefully developed to build the train dynamic model in a nonparametric way. By optimizing the hyperparameters of the proposed model, more accurate outputs can be obtained with the same inputs as the parametric approaches. The proposed model was compared with two parametric methods using actual data. Experimental results suggest that its performance is better than that of the traditional models, owing to its strong learning ability. Through a detailed feature engineering process, the proposed long short-term memory network based algorithm was extended to predict train speed multiple steps ahead.

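A minimal sketch of the kind of nonparametric model described, in PyTorch; the feature count, hidden size, and prediction horizon are illustrative assumptions, not the paper's settings:

```python
import torch
import torch.nn as nn

class SpeedLSTM(nn.Module):
    def __init__(self, n_feat=3, hidden=64, horizon=5):
        super().__init__()
        self.lstm = nn.LSTM(n_feat, hidden, batch_first=True)
        self.head = nn.Linear(hidden, horizon)    # multi-step speed prediction
    def forward(self, x):                         # x: (batch, time, n_feat)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])              # next `horizon` speed values

model = SpeedLSTM()
x = torch.randn(8, 20, 3)    # e.g., speed, traction/brake command, track gradient
print(model(x).shape)        # torch.Size([8, 5])
```
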
17 pages, 960 KiB  
Article
Practical Access to Dynamic Programming on Tree Decompositions
by Max Bannach and Sebastian Berndt
Algorithms 2019, 12(8), 172; https://0-doi-org.brum.beds.ac.uk/10.3390/a12080172 - 16 Aug 2019
Cited by 10 | Viewed by 4048
Abstract
Parameterized complexity theory has led to a wide range of algorithmic breakthroughs within the last few decades, but the practicability of these methods for real-world problems is still not well understood. We investigate the practicability of one of the fundamental approaches of this field: dynamic programming on tree decompositions. Indisputably, this is a key technique in parameterized algorithms and modern algorithm design. Despite the enormous impact of this approach in theory, it still has very little influence on practical implementations. The reasons for this phenomenon are manifold. One of them is the simple fact that such an implementation requires a long chain of non-trivial tasks (such as computing the decomposition, preparing it, …). We provide an easy way to implement such dynamic programs that only requires the definition of the update rules. With this interface, dynamic programs for various problems, such as 3-coloring, can be implemented easily in about 100 lines of structured Java code. The theoretical foundation of the success of dynamic programming on tree decompositions is well understood due to Courcelle's celebrated theorem, which states that every MSO-definable problem can be solved efficiently if a tree decomposition of small width is given. We seek to provide practical access to this theorem as well, by presenting a lightweight model checker for a small fragment of MSO1 (that is, we do not consider "edge-set-based" problems). This fragment is powerful enough to describe many natural problems, and our model checker turns out to be very competitive against similar state-of-the-art tools.
(This article belongs to the Special Issue New Frontiers in Parameterized Complexity and Algorithms)

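A toy Python version of the underlying technique: a dynamic program over a rooted tree decomposition for 3-coloring, keeping per-bag colorings that are proper inside the bag and extendable into each child subtree. The decomposition is assumed given; the paper's Java interface asks the user only for such update rules:

```python
from itertools import product

def three_colorable(edges, bags, children, root):
    """Keep, for each bag, the colorings that are proper inside the bag and
    agree with at least one kept coloring of every child on shared vertices."""
    E = {frozenset(e) for e in edges}
    memo = {}
    def table(b):
        if b not in memo:
            X = sorted(bags[b])
            kids = [table(c) for c in children.get(b, [])]
            memo[b] = [c for c in (dict(zip(X, cs)) for cs in product(range(3), repeat=len(X)))
                       if not any(c[u] == c[v] for u in X for v in X
                                  if u < v and frozenset((u, v)) in E)
                       and all(any(all(cc[x] == c[x] for x in bags[b] & set(cc))
                                   for cc in kid) for kid in kids)]
        return memo[b]
    return bool(table(root))

# triangle 0-1-2 plus triangle 1-2-3, covered by two bags sharing {1, 2}
bags, children = {0: {0, 1, 2}, 1: {1, 2, 3}}, {0: [1]}
print(three_colorable([(0,1), (1,2), (0,2), (1,3), (2,3)], bags, children, 0))  # True
```

The running time is exponential only in the bag size, which is the parameterized-complexity point.
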
21 pages, 1336 KiB  
Article
An Application of Manifold Learning in Global Shape Descriptors
by Fereshteh S. Bashiri, Reihaneh Rostami, Peggy Peissig, Roshan M. D’Souza and Zeyun Yu
Algorithms 2019, 12(8), 171; https://0-doi-org.brum.beds.ac.uk/10.3390/a12080171 - 16 Aug 2019
Cited by 3 | Viewed by 3523
Abstract
With the rapid expansion of applied 3D computational vision, shape descriptors have become increasingly important for a wide variety of applications and objects, from molecules to planets. Appropriate shape descriptors are critical for accurate (and efficient) shape retrieval and 3D model classification. Several spectral-based shape descriptors have been introduced by solving various physical equations over a 3D surface model. In this paper, for the first time, we incorporate a specific manifold learning technique, introduced in statistics and machine learning, to develop a global, spectral-based shape descriptor in the computer graphics domain. The proposed descriptor utilizes the Laplacian Eigenmap technique, in which the Laplacian eigenvalue problem is discretized using an exponential weighting scheme. As a result, our descriptor eliminates the limitations tied to the existing spectral descriptors, namely dependency on triangular mesh representation and high intra-class quality of 3D models. We also present a straightforward normalization method to obtain a scale-invariant and noise-resistant descriptor. The extensive experiments performed in this study using two standard 3D shape benchmarks, the high-resolution TOSCA and McGill datasets, demonstrate that the present contribution provides a highly discriminative and robust shape descriptor in the presence of a high level of noise, random scale variations, and a low sampling rate, in addition to the known isometric-invariance property of the Laplace–Beltrami operator. The proposed method significantly outperforms state-of-the-art spectral descriptors in shape retrieval and classification. The proposed descriptor is limited to closed manifolds due to its inherited inability to accurately handle manifolds with boundaries.
(This article belongs to the Special Issue Algorithms for Manifold Learning and Its Applications)

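A compact sketch of the descriptor's core step: a graph Laplacian with exponential (heat-kernel) weights built from a sampled surface, whose smallest nonzero eigenvalues form a global signature. The parameters t and k and the dense construction are illustrative simplifications:

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.linalg import eigh

def spectral_descriptor(points, t=0.1, k=10):
    """Graph Laplacian with exponential (heat-kernel) weights; the smallest
    nonzero eigenvalues serve as a global shape signature."""
    W = np.exp(-cdist(points, points, "sqeuclidean") / t)
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W
    vals = eigh(L, eigvals_only=True)
    return vals[1:k + 1]        # drop the trivial zero eigenvalue;
                                # dividing by vals[1] gives a scale-normalized variant

pts = np.random.rand(200, 3)    # stand-in for mesh or point-cloud vertices
print(spectral_descriptor(pts).round(3))
```

Because only point positions are used, no triangular mesh is required, which is one of the limitations the descriptor avoids.
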
23 pages, 903 KiB  
Article
Protograph LDPC Code Design for Asynchronous Random Access
by Federico Clazzer, Balázs Matuz, Sachini Jayasooriya, Mahyar Shirvanimoghaddam and Sarah J. Johnson
Algorithms 2019, 12(8), 170; https://0-doi-org.brum.beds.ac.uk/10.3390/a12080170 - 15 Aug 2019
Cited by 5 | Viewed by 4055
Abstract
This work addresses the physical layer channel code design for an uncoordinated, frame- and slot-asynchronous random access protocol. Starting from the observation that collisions between two users yield very specific interference patterns, we define a surrogate channel model and propose different protograph low-density parity-check code designs. The proposed codes are tested both in a setup where the physical layer is abstracted and on a more realistic channel model, where finite-length physical layer simulations of the entire asynchronous random access scheme, including decoding, are carried out. We find that the abstracted physical layer model overestimates the performance when short blocks are considered. Additionally, the optimized codes show gains in supported channel traffic (a measure of the number of terminals that can be concurrently accommodated on the channel) of around 17% at a packet loss rate of 10⁻² with respect to off-the-shelf codes.
(This article belongs to the Special Issue Coding Theory and Its Application)

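For orientation, protograph codes are usually turned into usable parity-check matrices by circulant lifting. A generic sketch of that step (not the paper's optimized designs; parallel edges that happen to draw the same shift would cancel, which real designs avoid by choosing distinct shifts):

```python
import numpy as np

def lift_protograph(B, Z, seed=0):
    """Expand a small base matrix B into a QC-LDPC parity-check matrix by
    replacing each protograph edge with a random Z x Z circulant permutation."""
    rng = np.random.default_rng(seed)
    m, n = B.shape
    H = np.zeros((m * Z, n * Z), dtype=np.uint8)
    I = np.eye(Z, dtype=np.uint8)
    for i in range(m):
        for j in range(n):
            for _ in range(int(B[i, j])):    # B[i, j] parallel edges in the protograph
                H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] ^= np.roll(I, rng.integers(Z), axis=1)
    return H

B = np.array([[1, 1, 1, 0],
              [0, 1, 1, 1]])
print(lift_protograph(B, Z=4).shape)   # (8, 16)
```
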
12 pages, 2263 KiB  
Article
Structural Analysis and Application of Non-Standard Components Based on Genetic Algorithm
by Zhao Lei, Hu Lai, Zhang Hua and Chen Hua
Algorithms 2019, 12(8), 169; https://0-doi-org.brum.beds.ac.uk/10.3390/a12080169 - 15 Aug 2019
Viewed by 2565
Abstract
To address the low efficiency, excessive weight, and high cost of traditional components, a design and analysis method for non-standard components is needed. Taking a non-standard turret loading and unloading device as the carrier, the key parts of the non-standard components are extracted for structural design, and a multi-objective mathematical model and a modal theory model are established. The key parts are then optimized using a genetic algorithm. Finally, the optimization results are compared and simulated in ANSYS Workbench. The results show that, in this case, the design optimized by the genetic algorithm differs from the other designs by 4.1% in overall mass, and the first six modal values of the optimized result lie in the range of 68 Hz to 130 Hz, which provides a basis for similar research in the future.

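A toy real-coded genetic algorithm of the kind described, minimizing a plate-like part's mass under a crude stiffness constraint. The objective, bounds, and penalty are hypothetical stand-ins for the paper's multi-objective model:

```python
import numpy as np

rng = np.random.default_rng(1)
lo = np.array([0.5, 0.10, 0.010])   # hypothetical bounds: length, width, thickness (m)
hi = np.array([2.0, 0.50, 0.050])

def fitness(x):
    m = 7850 * x[:, 0] * x[:, 1] * x[:, 2]        # steel mass of a plate-like part
    stiff = x[:, 2] ** 3 * x[:, 1] / x[:, 0]      # crude bending-stiffness proxy
    return m + np.where(stiff < 2e-6, 1e4, 0.0)   # penalize too-flexible designs

pop = rng.uniform(lo, hi, (60, 3))
for _ in range(300):
    elite = pop[np.argsort(fitness(pop))[:30]]             # truncation selection
    kids = 0.5 * (elite + elite[rng.permutation(30)])      # arithmetic crossover
    kids += rng.normal(0, 0.02, kids.shape) * (hi - lo)    # Gaussian mutation
    pop = np.clip(np.vstack([elite, kids]), lo, hi)

best = pop[np.argmin(fitness(pop))]
print("best (l, w, t):", best.round(4), "mass (kg):", round(7850 * np.prod(best), 2))
```
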
10 pages, 249 KiB  
Article
Cyclotomic Trace Codes
by Dean Crnković, Andrea Švob and Vladimir D. Tonchev
Algorithms 2019, 12(8), 168; https://doi.org/10.3390/a12080168 - 13 Aug 2019
Cited by 2 | Viewed by 3386
Abstract
A generalization of Ding's construction is proposed that employs as a defining set the collection of the s-th powers (s ≥ 2) of all nonzero elements in GF(p^m), where p ≥ 2 is prime. Some of the resulting codes are optimal or near-optimal and include projective codes over GF(4) that give rise to optimal or near-optimal quantum codes. In addition, the codes yield interesting combinatorial structures, such as strongly regular graphs and block designs.
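
The defining set itself is simple to compute; a sketch restricted to prime fields GF(p), whereas the paper works over general GF(p^m):

```python
def sth_powers(p, s):
    """Defining set: the s-th powers of all nonzero elements of GF(p)."""
    return sorted({pow(x, s, p) for x in range(1, p)})

print(sth_powers(11, 2))   # quadratic residues mod 11: [1, 3, 4, 5, 9]
```
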
14 pages, 2921 KiB  
Article
LMI Pole Regions for a Robust Discrete-Time Pole Placement Controller Design
by Danica Rosinová and Mária Hypiusová
Algorithms 2019, 12(8), 167; https://0-doi-org.brum.beds.ac.uk/10.3390/a12080167 - 13 Aug 2019
Cited by 13 | Viewed by 4066
Abstract
Herein, robust pole placement controller design for linear uncertain discrete-time dynamic systems is addressed. The adopted approach uses so-called "D-regions", in which the closed-loop system poles are required to lie. The discrete-time pole regions corresponding to the prescribed damping of the resulting closed-loop system are studied. The key issue is to determine an appropriate convex approximation to the originally non-convex discrete-time system pole region, so that numerically efficient robust controller design algorithms based on Linear Matrix Inequalities (LMI) can be used. Several alternatives for relatively simple inner approximations and their corresponding LMI descriptions are presented. The developed LMI region for the prescribed damping can be arbitrarily combined with other LMI pole limitations (e.g., stability degree). Simple algorithms to calculate the matrices for the LMI representation of the proposed convex pole regions are provided in a concise way. The results and their use in robust controller design are illustrated on a case study of a laboratory magnetic levitation system.

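As a flavor of the LMI machinery involved, a feasibility sketch in cvxpy for the simplest convex D-region, a disk of radius r centered at the origin; the damping-related regions studied in the paper require the inner approximations it develops:

```python
import numpy as np
import cvxpy as cp

A = np.array([[0.6, 0.5],
              [0.0, 0.7]])    # hypothetical closed-loop matrix
r = 0.8                       # prescribed pole-region radius

P = cp.Variable((2, 2), symmetric=True)
constraints = [P >> np.eye(2),                    # P positive definite
               A.T @ P @ A - r**2 * P << 0]       # all poles inside the disk |z| < r
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print(prob.status)    # feasible ("optimal") iff the spectral radius of A is below r
```
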
14 pages, 356 KiB  
Article
MapReduce Algorithm for Variants of Skyline Queries: Skyband and Dominating Queries
by Md. Anisuzzaman Siddique, Hao Tian, Mahboob Qaosar and Yasuhiko Morimoto
Algorithms 2019, 12(8), 166; https://0-doi-org.brum.beds.ac.uk/10.3390/a12080166 - 13 Aug 2019
Cited by 2 | Viewed by 3404
Abstract
The skyline query and its variants are useful functions in the early stages of a knowledge-discovery process. They select a set of important objects that are better than the other objects in the dataset. In order to handle big data, such knowledge-discovery queries must be computed in parallel distributed environments. In this paper, we consider an efficient parallel algorithm for the "K-skyband query" and the "top-k dominating query", two popular variants of the skyline query. We propose a method for computing both queries simultaneously in MapReduce, a popular parallel distributed framework for processing "big data" problems. Our extensive evaluation validates the effectiveness and efficiency of the proposed algorithm on both real and synthetic datasets.

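Both queries are easy to state sequentially, which clarifies what the MapReduce algorithm parallelizes; a plain-Python sketch (minimization in every dimension):

```python
def dominates(a, b):
    """a dominates b: no worse in every dimension, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def k_skyband(points, K):
    """Objects dominated by fewer than K others; K = 1 is the classic skyline."""
    return [p for p in points if sum(dominates(q, p) for q in points) < K]

pts = [(1, 9), (2, 8), (3, 3), (4, 4), (9, 1), (8, 8)]
print(k_skyband(pts, 1))    # [(1, 9), (2, 8), (3, 3), (9, 1)]
print(k_skyband(pts, 2))    # adds (4, 4), which only (3, 3) dominates
print(max(pts, key=lambda p: sum(dominates(p, q) for q in pts)))  # top-1 dominating: (3, 3)
```
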
17 pages, 662 KiB  
Article
Algorithmic Matching Attacks on Optimally Suppressed Tabular Data
by Kazuhiro Minami and Yutaka Abe
Algorithms 2019, 12(8), 165; https://0-doi-org.brum.beds.ac.uk/10.3390/a12080165 - 11 Aug 2019
Cited by 2 | Viewed by 3694
Abstract
The objective of the cell suppression problem (CSP) is to protect sensitive cell values in tabular data under the presence of linear relations concerning marginal sums. Previous algorithms for solving CSPs ensure that every sensitive cell has enough uncertainty on its values based on the interval width of all possible values. However, we find that every deterministic CSP algorithm is vulnerable to an adversary who possesses the knowledge of that algorithm. We devise a matching attack scheme that narrows down the ranges of sensitive cell values by matching the suppression pattern of an original table with that of each candidate table. Our experiments show that actual ranges of sensitive cell values are significantly narrower than those assumed by the previous CSP algorithms.
(This article belongs to the Special Issue Statistical Disclosure Control for Microdata)

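The notion of interval width is easy to make concrete: brute-forcing the feasible values of a suppressed cell in a 2x2 table with published marginal sums (a toy instance, not the paper's attack):

```python
from itertools import product

def feasible_range(row_sums, col_sums, cell, hi=20):
    """Enumerate all nonnegative 2x2 tables matching the marginals and
    report the interval of values the suppressed `cell` can take."""
    vals = set()
    for t in product(range(hi + 1), repeat=4):
        a, b, c, d = t
        if (a + b, c + d) == row_sums and (a + c, b + d) == col_sums:
            vals.add(t[cell])
    return min(vals), max(vals)

print(feasible_range((20, 15), (18, 17), cell=0))   # (3, 18)
```

The matching attack described above shrinks such intervals further by exploiting knowledge of which suppression patterns the deterministic CSP algorithm would have produced.
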
13 pages, 260 KiB  
Article
Equisum Partitions of Sets of Positive Integers
by Roger B. Eggleton
Algorithms 2019, 12(8), 164; https://0-doi-org.brum.beds.ac.uk/10.3390/a12080164 - 11 Aug 2019
Viewed by 2827
Abstract
Let V be a finite set of positive integers with sum equal to a multiple of the integer b. When does V have a partition into b parts so that all parts have equal sums? We develop algorithmic constructions which yield positive, albeit incomplete, answers for the following classes of set V, where n is a given positive integer: (1) an initial interval {a ∈ ℤ⁺ : a ≤ n}; (2) an initial interval of primes {p ∈ ℙ : p ≤ n}, where ℙ is the set of primes; (3) a divisor set {d ∈ ℤ⁺ : d | n}; (4) an aliquot set {d ∈ ℤ⁺ : d | n, d < n}. Open general questions and conjectures are included for each of these classes.
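
For small sets, a backtracking search answers the question directly and illustrates the objects being constructed (the paper's constructions are analytic, not search-based):

```python
def equisum_partition(V, b):
    """Backtracking search for a partition of V into b parts of equal sum."""
    total = sum(V)
    if total % b:
        return None
    target, items = total // b, sorted(V, reverse=True)
    parts, sums = [[] for _ in range(b)], [0] * b
    def place(i):
        if i == len(items):
            return True
        tried = set()
        for k in range(b):
            if sums[k] + items[i] <= target and sums[k] not in tried:
                tried.add(sums[k])                 # skip symmetric states
                parts[k].append(items[i]); sums[k] += items[i]
                if place(i + 1):
                    return True
                parts[k].pop(); sums[k] -= items[i]
        return False
    return parts if place(0) else None

print(equisum_partition(range(1, 9), 4))   # [[8, 1], [7, 2], [6, 3], [5, 4]]
```
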
11 pages, 360 KiB  
Article
Idea of Using Blockchain Technique for Choosing the Best Configuration of Weights in Neural Networks
by Alicja Winnicka and Karolina Kęsik
Algorithms 2019, 12(8), 163; https://0-doi-org.brum.beds.ac.uk/10.3390/a12080163 - 10 Aug 2019
Cited by 9 | Viewed by 4299
Abstract
The blockchain technique is becoming more and more popular due to advantages such as stability and its dispersed nature, and the idea proposed here is based on blockchain activity paradigms. Another important field is machine learning, which is increasingly used in practice. Unfortunately, training (or overtraining) artificial neural networks is very time-consuming and requires high computing power. In this paper, we propose using a blockchain technique to train neural networks. This type of activity is important because it enables a search for initial weights in the network that lead to faster training through quicker gradient descent. We performed tests with much heavier calculations to show that such an approach is feasible. However, this type of solution can also be used for less demanding calculations, i.e., only a few iterations of training to find a better configuration of initial weights.
(This article belongs to the Special Issue Algorithms for Pattern Recognition)

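A deliberately toy reading of the idea, not the paper's protocol: "miners" propose weight-initialization seeds, and seeds that lower the initial loss extend a hash-linked chain recording the best configuration found:

```python
import hashlib, random
import numpy as np

def initial_loss(seed, X, y):
    """The 'work': evaluate the starting loss of a tiny network drawn from `seed`."""
    W = np.random.default_rng(seed).normal(0, 0.5, (X.shape[1], 1))
    return float(np.mean((np.tanh(X @ W) - y) ** 2))

X, y = np.random.rand(64, 4), np.random.rand(64, 1)
chain = [{"prev": "0" * 64, "seed": None, "loss": float("inf")}]
for _ in range(200):                        # each "miner" proposes a seed
    seed = random.randrange(2**32)
    loss = initial_loss(seed, X, y)
    if loss < chain[-1]["loss"]:            # better configurations extend the chain
        chain.append({"prev": hashlib.sha256(str(chain[-1]).encode()).hexdigest(),
                      "seed": seed, "loss": loss})
print("best seed:", chain[-1]["seed"], "loss:", round(chain[-1]["loss"], 4))
```
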
27 pages, 591 KiB  
Article
Distributed Balanced Partitioning via Linear Embedding
by Kevin Aydin, MohammadHossein Bateni and Vahab Mirrokni
Algorithms 2019, 12(8), 162; https://0-doi-org.brum.beds.ac.uk/10.3390/a12080162 - 10 Aug 2019
Cited by 14 | Viewed by 5712
Abstract
Balanced partitioning is often a crucial first step in solving large-scale graph optimization problems. For example, in some cases, a big graph can be chopped into pieces that fit on one machine to be processed independently before stitching the results together, leading to certain suboptimality from the interaction among different pieces. In other cases, links between different parts may show up in the running time and/or network communication cost, hence the desire for a small cut size. We study a distributed balanced-partitioning problem where the goal is to partition the vertices of a given graph into k pieces so as to minimize the total cut size. Our algorithm is composed of a few steps that are easily implementable in distributed computation frameworks such as MapReduce. The algorithm first embeds nodes of the graph onto a line, and then processes nodes in a distributed manner guided by the linear embedding order. We examine various ways to find the first embedding, for example, via hierarchical clustering or Hilbert curves. Then we apply four different techniques, including local swaps, minimum cuts on the boundaries of partitions, contraction, and dynamic programming. In our empirical study, we compare these techniques with each other and with previous work in distributed graph algorithms, for example, a label-propagation method, FENNEL, and Spinner. We report results both on a private map graph and on several public social networks, and show that our results beat previous distributed algorithms: for instance, compared to the label-propagation algorithm, we report an improvement of 15–25% in cut value. We also observe that our algorithms admit scalable distributed implementations for any number of partitions. Finally, we explain three applications of this work at Google: (1) Balanced partitioning is used to route multi-term queries to different replicas in the Google Search backend in a way that reduces the cache miss rate by ≈0.5%, which leads to a double-digit gain in the throughput of production clusters. (2) Applied to Google Maps Driving Directions, balanced partitioning minimizes the number of cross-shard queries with the goal of saving CPU usage; this system achieves load balancing by dividing the world graph into several "shards", and live experiments demonstrate an ≈40% drop in the number of cross-shard queries compared to a standard geography-based method. (3) In a job scheduling problem for our data centers, we use balanced partitioning to evenly distribute the work while minimizing the amount of communication across geographically distant servers. In fact, the hierarchical nature of our solution goes well with the layering of data center servers, where certain machines are closer to each other and have faster links to one another.
(This article belongs to the Special Issue Graph Partitioning: Theory, Engineering, and Applications)

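A toy version of the embed-then-split pipeline, with BFS order as a cheap stand-in for the embedding step (the paper uses hierarchical clustering or Hilbert curves), followed by cutting the line into k equal chunks:

```python
from collections import deque

def bfs_order(adj, start):
    """Cheap stand-in for the embedding step: place nodes on a line."""
    seen, order, q = {start}, [], deque([start])
    while q:
        u = q.popleft(); order.append(u)
        for v in adj[u]:
            if v not in seen:
                seen.add(v); q.append(v)
    return order

def balanced_parts(adj, start, k):
    order = bfs_order(adj, start)
    size = -(-len(order) // k)              # ceiling division
    return [order[i:i + size] for i in range(0, len(order), size)]

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(balanced_parts(adj, 0, 2))   # [[0, 1, 2], [3, 4, 5]]: only one edge is cut
```

The paper's post-processing (swaps, boundary min-cuts, contraction, dynamic programming) then improves such an initial cut.
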
11 pages, 1231 KiB  
Article
Distributed Centrality Analysis of Social Network Data Using MapReduce
by Ranjan Kumar Behera, Santanu Kumar Rath, Sanjay Misra, Robertas Damaševičius and Rytis Maskeliūnas
Algorithms 2019, 12(8), 161; https://0-doi-org.brum.beds.ac.uk/10.3390/a12080161 - 09 Aug 2019
Cited by 31 | Viewed by 4428
Abstract
Analyzing the structure of a social network helps in gaining insights into interactions and relationships among users while revealing the patterns of their online behavior. Network centrality is a measure of the importance of a node in a network, which allows revealing the structural patterns and morphology of networks. We propose a distributed computing approach for calculating the network centrality value of each user using MapReduce on the Hadoop platform, which allows faster and more efficient computation than a conventional implementation. A distributed approach is scalable and helps in the efficient computation of large-scale datasets, such as social network data. On a Twitter dataset, the proposed approach improves the calculation performance of degree centrality by 39.8%, closeness centrality by 40.7%, and eigenvalue centrality by 41.1%.
(This article belongs to the Special Issue Algorithms for Pattern Recognition)

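The MapReduce structure for the simplest of the three measures, degree centrality, fits in a few lines of plain Python: the map phase emits (node, 1) pairs per edge endpoint, and the reduce phase sums per key:

```python
from collections import Counter

edges = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "d")]

mapped = [(n, 1) for u, v in edges for n in (u, v)]   # map: emit (node, 1) per endpoint

degrees = Counter()                                   # reduce: sum values per key
for node, one in mapped:
    degrees[node] += one

n = len(degrees)
print({v: round(d / (n - 1), 3) for v, d in degrees.items()})
# {'a': 0.667, 'b': 0.667, 'c': 1.0, 'd': 0.333}
```
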
25 pages, 4158 KiB  
Article
A Novel Virtual Sample Generation Method to Overcome the Small Sample Size Problem in Computer Aided Medical Diagnosing
by Mohammad Wedyan, Alessandro Crippa and Adel Al-Jumaily
Algorithms 2019, 12(8), 160; https://0-doi-org.brum.beds.ac.uk/10.3390/a12080160 - 09 Aug 2019
Cited by 12 | Viewed by 4710
Abstract
Deep neural networks are successful learning tools for building nonlinear models. However, a robust deep learning-based classification model needs a large dataset. Indeed, these models are often unstable when they use small datasets. To solve this issue, which is particularly critical in light of the possible clinical applications of these predictive models, researchers have developed approaches such as virtual sample generation. Virtual sample generation significantly improves learning and classification performance when working with small samples. The main objective of this study is to evaluate the ability of the proposed virtual sample generation to overcome the small sample size problem, which is a feature of the automated detection of a neurodevelopmental disorder, namely autism spectrum disorder. Results show that our method enhances diagnostic accuracy from 84% to 95% using virtual samples generated on the basis of five actual clinical samples. The present findings show the feasibility of using the proposed technique to improve classification performance even in cases of clinical samples of limited size. Accounting for concerns regarding small sample sizes, our technique represents a meaningful step forward in terms of pattern recognition methodology, particularly when applied to diagnostic classification of neurodevelopmental disorders. In addition, the proposed technique has been tested on other available benchmark datasets. The experimental outcomes show that the accuracy of classification using virtual samples was superior to that using the original training data without virtual samples.
(This article belongs to the Special Issue Evolutionary Algorithms in Health Technologies)

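A generic virtual-sample generator in the spirit described, based on convex interpolation between real sample pairs plus small jitter; this is an illustrative scheme, not the authors' exact generator:

```python
import numpy as np

def virtual_samples(X, n_new, noise=0.05, seed=0):
    """Interpolate between random pairs of real samples and add small
    Gaussian jitter; a generic scheme, not the authors' exact generator."""
    rng = np.random.default_rng(seed)
    i, j = rng.integers(len(X), size=(2, n_new))
    lam = rng.random((n_new, 1))
    Xv = lam * X[i] + (1 - lam) * X[j]
    return Xv + rng.normal(0, noise * X.std(axis=0), Xv.shape)

X_real = np.random.rand(5, 10)     # five clinical samples, ten features
X_train = np.vstack([X_real, virtual_samples(X_real, 45)])
print(X_train.shape)               # (50, 10) augmented training set
```
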
16 pages, 4444 KiB  
Article
Compaction of Church Numerals
by Isamu Furuya and Takuya Kida
Algorithms 2019, 12(8), 159; https://0-doi-org.brum.beds.ac.uk/10.3390/a12080159 - 08 Aug 2019
Cited by 1 | Viewed by 3096
Abstract
In this study, we address the problem of compaction of Church numerals. Church numerals are unary representations of natural numbers on the scheme of lambda terms. We propose a novel decomposition scheme from a given natural number into an arithmetic expression using tetration, which enables us to obtain a compact representation of lambda terms that leads to the Church numeral of the natural number. For natural number n, we prove that the size of the lambda term obtained by the proposed method is O((slog₂ n)(log n / log log n)), where slog denotes the super-logarithm. Moreover, we experimentally confirmed that the proposed method outperforms the binary representation of Church numerals on average when n is less than approximately 10,000.
(This article belongs to the Special Issue Data Compression Algorithms and their Applications)

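Church numerals and the arithmetic combinators involved are compact enough to show directly in Python lambdas; exponentiation is the building block that, when iterated, yields the tetration used in the decomposition:

```python
zero  = lambda f: lambda x: x
succ  = lambda n: lambda f: lambda x: f(n(f)(x))
plus  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
mult  = lambda m: lambda n: lambda f: m(n(f))
power = lambda m: lambda n: n(m)          # m ** n; iterating it gives tetration

to_int = lambda n: n(lambda k: k + 1)(0)

three = succ(succ(succ(zero)))
print(to_int(plus(three)(three)))     # 6
print(to_int(mult(three)(three)))     # 9
print(to_int(power(three)(three)))    # 27
```
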
12 pages, 528 KiB  
Article
A Fast Randomized Algorithm for the Heterogeneous Vehicle Routing Problem with Simultaneous Pickup and Delivery
by Napoleão Nepomuceno, Ricardo Barboza Saboia and Plácido Rogério Pinheiro
Algorithms 2019, 12(8), 158; https://0-doi-org.brum.beds.ac.uk/10.3390/a12080158 - 03 Aug 2019
Cited by 6 | Viewed by 3411
Abstract
In the vehicle routing problem with simultaneous pickup and delivery (VRPSPD), customers demanding both delivery and pickup operations have to be visited exactly once by a single vehicle. In this work, we propose a fast randomized algorithm using a nearest-neighbor strategy to tackle an extension of the VRPSPD in which the fleet of vehicles is heterogeneous. This variant is an NP-hard problem, which in practice makes it impossible to solve to proven optimality for large instances. To evaluate the proposal, we use benchmark instances from the literature and compare our results to those obtained by a state-of-the-art algorithm. Our approach presents very competitive results, not only improving several of the known solutions, but also running in a shorter time.

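A stripped-down sketch of a randomized nearest-neighbor constructor: a single vehicle, no capacity or pickup/delivery bookkeeping, and illustrative coordinates; restarting with different seeds and keeping the best route is the randomized element:

```python
import math, random

def route_length(route, pts):
    return sum(math.dist(pts[a], pts[b]) for a, b in zip(route, route[1:]))

def randomized_nn(pts, depot=0, k=3, seed=None):
    """Jump repeatedly to one of the k nearest unvisited customers,
    chosen at random."""
    rng = random.Random(seed)
    todo, route = set(range(len(pts))) - {depot}, [depot]
    while todo:
        near = sorted(todo, key=lambda c: math.dist(pts[route[-1]], pts[c]))
        nxt = rng.choice(near[:k])
        route.append(nxt); todo.discard(nxt)
    return route + [depot]

pts = [(0, 0), (2, 1), (5, 4), (1, 6), (7, 2), (3, 3)]
best = min((randomized_nn(pts, seed=s) for s in range(100)),
           key=lambda r: route_length(r, pts))
print(best, round(route_length(best, pts), 2))
```
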
18 pages, 270 KiB  
Article
In Search of the Densest Subgraph
by András Faragó and Zohre R. Mojaveri
Algorithms 2019, 12(8), 157; https://0-doi-org.brum.beds.ac.uk/10.3390/a12080157 - 02 Aug 2019
Cited by 7 | Viewed by 3569
Abstract
In this survey paper, we review various concepts of graph density, as well as associated theorems and algorithms. Our goal is motivated by the fact that, in many applications, it is a key algorithmic task to extract a densest subgraph from an input graph, according to some appropriate definition of graph density. While this problem has been the subject of active research for over half a century, with many proposed variants and solutions, new results still continuously emerge in the literature. This shows both the importance and the richness of the subject. We also identify some interesting open problems in the field.
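
One classic algorithm from this literature, Charikar's greedy peeling 2-approximation for average-degree density |E|/|V|, fits in a few lines:

```python
def densest_subgraph(adj):
    """Repeatedly delete a minimum-degree vertex and remember the
    intermediate vertex set maximizing density |E|/|V|."""
    adj = {v: set(ns) for v, ns in adj.items()}
    m = sum(len(ns) for ns in adj.values()) // 2
    best, best_density = set(adj), m / len(adj)
    while len(adj) > 1:
        v = min(adj, key=lambda u: len(adj[u]))
        nbrs = adj.pop(v)
        m -= len(nbrs)
        for u in nbrs:
            adj[u].discard(v)
        if m / len(adj) > best_density:
            best, best_density = set(adj), m / len(adj)
    return best, best_density

g = {1: {2, 3, 4}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {1, 2, 3, 5}, 5: {4}}
print(densest_subgraph(g))   # ({1, 2, 3, 4}, 1.5): the K4 core beats the whole graph
```
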
13 pages, 410 KiB  
Article
A Collocation Method for the Numerical Solution of Nonlinear Fractional Dynamical Systems
by Francesca Pitolli
Algorithms 2019, 12(8), 156; https://0-doi-org.brum.beds.ac.uk/10.3390/a12080156 - 31 Jul 2019
Cited by 6 | Viewed by 3079
Abstract
Fractional differential problems are widely used in applied sciences. For this reason, there is great interest in the construction of efficient numerical methods to approximate their solutions. The aim of this paper is to describe in detail a collocation method suitable for approximating the solution of dynamical systems with time derivatives of fractional order. We highlight all the steps necessary to implement the corresponding algorithm and use it to solve some test problems. Two Mathematica notebooks that can be used to solve these test problems are provided.

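For reference, the Caputo fractional derivative typically used for such dynamical systems is shown below (m is the smallest integer not less than α); the collocation idea is to expand y in a basis and enforce D^α y(t_i) = f(t_i, y(t_i)) at chosen nodes t_i:

```latex
D^{\alpha} y(t) = \frac{1}{\Gamma(m-\alpha)} \int_{0}^{t}
\frac{y^{(m)}(\tau)}{(t-\tau)^{\alpha+1-m}}\, d\tau ,
\qquad m-1 < \alpha \le m ,\; m \in \mathbb{N}.
```
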
12 pages, 2306 KiB  
Article
A Rigid Motion Artifact Reduction Method for CT Based on Blind Deconvolution
by Yuan Zhang and Liyi Zhang
Algorithms 2019, 12(8), 155; https://0-doi-org.brum.beds.ac.uk/10.3390/a12080155 - 31 Jul 2019
Cited by 2 | Viewed by 3618
Abstract
In computed tomography (CT), artifacts due to rigid patient motion often significantly degrade image quality. This paper suggests a method based on iterative blind deconvolution to eliminate motion artifacts. The proposed method alternately reconstructs the image and reduces motion artifacts in an iterative scheme until the difference measure between two successive iterations is smaller than a threshold. In this iterative process, Richardson–Lucy (RL) deconvolution with spatially adaptive total variation (SATV) regularization is inserted into the iterative process of the ordered subsets expectation maximization (OSEM) reconstruction algorithm. The proposed method is evaluated on a numerical phantom, a head phantom, and a patient scan. The reconstructed images indicate that the proposed method can reduce motion artifacts and provide high-quality images. Quantitative evaluations also show that the proposed method yields an appreciable improvement on all metrics, reducing the root-mean-square error (RMSE) by about 30% and increasing the Pearson correlation coefficient (CC) and mean structural similarity (MSSIM) by about 15% and 20%, respectively, compared to the RL-OSEM method. Furthermore, the proposed method needs only the measured raw data; no additional measurements are required. Compared with previous work, it can be applied to any scanning mode and can realize motion artifact reduction in six degrees of freedom, so the artifact reduction effect is better in clinical experiments.
(This article belongs to the Special Issue The Second Symposium on Machine Intelligence and Data Analytics)

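The RL building block is standard; a plain numpy sketch of unregularized Richardson–Lucy on a toy blur (the paper adds SATV regularization and embeds this multiplicative update inside the OSEM loop):

```python
import numpy as np
from scipy.signal import convolve2d

def richardson_lucy(observed, psf, n_iter=30):
    """Plain RL deconvolution: multiplicative update with the flipped PSF."""
    est = np.full_like(observed, observed.mean())
    psf_T = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = convolve2d(est, psf, mode="same") + 1e-12
        est = est * convolve2d(observed / blurred, psf_T, mode="same")
    return est

psf = np.ones((5, 5)) / 25.0                       # toy motion-like blur kernel
truth = np.zeros((64, 64)); truth[24:40, 24:40] = 1.0
observed = convolve2d(truth, psf, mode="same")
print(round(np.abs(observed - truth).mean(), 4),
      round(np.abs(richardson_lucy(observed, psf) - truth).mean(), 4))
```
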
24 pages, 1109 KiB  
Review
A Survey of Convolutional Neural Networks on Edge with Reconfigurable Computing
by Mário P. Véstias
Algorithms 2019, 12(8), 154; https://0-doi-org.brum.beds.ac.uk/10.3390/a12080154 - 31 Jul 2019
Cited by 80 | Viewed by 9234
Abstract
The convolutional neural network (CNN) is one of the most used deep learning models for image detection and classification, due to its high accuracy compared to other machine learning algorithms. CNNs achieve better results at the cost of higher computing and memory requirements. Inference of convolutional neural networks is therefore usually done on centralized high-performance platforms. However, many applications based on CNNs are migrating to edge devices near the source of data due to the unreliability of a transmission channel in exchanging data with a central server, the uncertainty about channel latency not tolerated by many applications, security, data privacy, etc. While advantageous, deep learning on the edge is quite challenging because edge devices are usually limited in terms of performance, cost, and energy. Reconfigurable computing is being considered for inference on the edge due to its high performance and energy efficiency, while keeping a high hardware flexibility that allows for easy adaptation of the target computing platform to the CNN model. In this paper, we describe the features of the most common CNNs, the capabilities of reconfigurable computing for running CNNs, the state of the art of reconfigurable computing implementations proposed to run CNN models, as well as the trends and challenges for future edge reconfigurable platforms.
(This article belongs to the Special Issue High Performance Reconfigurable Computing)

18 pages, 320 KiB  
Article
γ-Graphs of Trees
by Stephen Finbow and Christopher M. van Bommel
Algorithms 2019, 12(8), 153; https://0-doi-org.brum.beds.ac.uk/10.3390/a12080153 - 30 Jul 2019
Cited by 1 | Viewed by 3244
Abstract
For a graph G = (V, E), the γ-graph of G, denoted G(γ) = (V(γ), E(γ)), is the graph whose vertex set is the collection of minimum dominating sets, or γ-sets, of G, and two γ-sets are adjacent in G(γ) if they differ by a single vertex and the two differing vertices are adjacent in G. In this paper, we consider γ-graphs of trees. We develop an algorithm for determining the γ-graph of a tree, characterize which trees are γ-graphs of trees, and further comment on the structure of γ-graphs of trees and its connections with Cartesian product graphs, the set of graphs which can be obtained from the Cartesian product of graphs of order at least two.

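Brute force suffices to build γ-graphs of small examples and makes the definition concrete (the paper's algorithm for trees is far more efficient):

```python
from itertools import combinations

def gamma_graph(V, E):
    adj = {v: set() for v in V}
    for u, v in E:
        adj[u].add(v); adj[v].add(u)
    dominating = lambda S: all(v in S or adj[v] & S for v in V)
    for k in range(1, len(V) + 1):        # smallest k admitting a dominating set
        gsets = [frozenset(S) for S in combinations(V, k) if dominating(set(S))]
        if gsets:
            break
    def swap_adjacent(A, B):              # differ in one vertex, swapped pair adjacent in G
        u, v = A ^ B
        return u in adj[v]
    edges = [(A, B) for A, B in combinations(gsets, 2)
             if len(A ^ B) == 2 and swap_adjacent(A, B)]
    return gsets, edges

nodes, edges = gamma_graph([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4)])
print(len(nodes), len(edges))   # the path P4 has four gamma-sets; its gamma-graph is a 4-cycle
```
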
25 pages, 1057 KiB  
Article
Bicriteria Vehicle Routing Problem with Preferences and Timing Constraints in Home Health Care Services
by Syrine Roufaida Ait Haddadene, Nacima Labadie and Caroline Prodhon
Algorithms 2019, 12(8), 152; https://0-doi-org.brum.beds.ac.uk/10.3390/a12080152 - 30 Jul 2019
Cited by 17 | Viewed by 4484
Abstract
Home Healthcare (HHC) is an emerging and fast-expanding service sector that gives rise to challenging vehicle routing and scheduling problems. Each day, HHC structures must schedule the visits of caregivers to patients requiring specific medical and paramedical services at home. These operations can be inefficient if the visits are not planned correctly, leading to high logistics costs and/or a deteriorated service level. In this article, this issue is modeled as a vehicle routing problem in which a set of routes has to be built to visit patients, each asking for one or more specific services within a given time window and during a fixed service time. Each patient has a preference value associated with each available caregiver. The problem addressed in this paper considers two objectives to optimize simultaneously: minimizing the caregivers' travel costs and maximizing the patients' preferences. Different methods based on the bi-objective non-dominated sorting algorithm are proposed to solve the vehicle routing problem with time windows, preferences, and timing constraints. Numerical results are presented for instances with up to 73 clients. Metrics such as the distance measure, hyper-volume, and the number of non-dominated solutions in the Pareto front are used to assess the quality of the proposed approaches.
(This article belongs to the Special Issue Evolutionary Algorithms in Health Technologies)

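The bi-objective machinery rests on Pareto dominance over (travel cost, preference) pairs; a minimal non-dominated filter (values illustrative):

```python
def non_dominated(plans):
    """Keep plans not dominated on (travel cost, preference):
    lower cost and higher preference are both better."""
    def dominates(a, b):
        return a[0] <= b[0] and a[1] >= b[1] and (a[0] < b[0] or a[1] > b[1])
    return [s for s in plans if not any(dominates(t, s) for t in plans if t is not s)]

# (caregivers' travel cost, total patient preference) of candidate schedules
plans = [(120, 0.80), (100, 0.75), (100, 0.90), (140, 0.95), (90, 0.60)]
print(non_dominated(plans))   # [(100, 0.9), (140, 0.95), (90, 0.6)]
```

Metrics such as hyper-volume and the number of non-dominated solutions are then computed over exactly such Pareto fronts.
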
15 pages, 1527 KiB  
Article
Soft Iterative Decoding Algorithms for Rateless Codes in Satellite Systems
by Meixiang Zhang, Satya Chan and Sooyoung Kim
Algorithms 2019, 12(8), 151; https://0-doi-org.brum.beds.ac.uk/10.3390/a12080151 - 29 Jul 2019
Cited by 2 | Viewed by 3922
Abstract
The satellite system is one of the most efficient means for broadcasting due to its wide service coverage and its ability to provide high-data-rate services by using high frequency bands. However, there are a number of problems in satellite systems, such as a long round trip delay (RTD) and heterogeneity of the channel conditions of the earth stations. Even though utilizing adaptive coding and modulation (ACM) is almost mandatory for satellite systems using high frequency bands, due to serious rain fading, the long RTD makes it difficult to respond quickly to channel quality information, reducing the efficiency of ACM. The high heterogeneity of earth stations caused by the wide service coverage also makes it difficult to apply a uniform transmission mode, and thus satellite systems require receiver-dependent transmission modes. A rateless code can be an effective means to compensate for these disadvantages of satellite systems compared to terrestrial wireless systems. This paper presents soft iterative decoding algorithms for the efficient application of rateless codes in satellite systems and demonstrates that rateless codes can be effectively used for hybrid automatic repeat request schemes.
(This article belongs to the Special Issue Coding Theory and Its Application)

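For contrast with the soft decoders proposed, the hard-decision peeling baseline for a fountain/rateless code takes only a few lines; received symbols are XORs of source blocks, handcrafted here for a deterministic example:

```python
def peel_decode(symbols, k):
    """Hard-decision peeling: any received symbol with exactly one unknown
    neighbor releases that source block; repeat until no progress."""
    known, progress = {}, True
    while progress and len(known) < k:
        progress = False
        for idx, val in symbols:            # idx: indices of source blocks XORed together
            unknown = idx - known.keys()
            if len(unknown) == 1:
                i = unknown.pop()
                for j in idx - {i}:
                    val ^= known[j]
                known[i] = val
                progress = True
    return known

rx = [({0}, 5), ({0, 1}, 9), ({1, 2}, 11), ({2, 3}, 14), ({3}, 9)]
print(peel_decode(rx, 4))   # {0: 5, 1: 12, 2: 7, 3: 9}
```

The paper's algorithms instead propagate soft information (likelihoods) through the same graph, which is what makes them robust at low SNR.
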
27 pages, 1136 KiB  
Article
Defacement Detection with Passive Adversaries
by Francesco Bergadano, Fabio Carretto, Fabio Cogno and Dario Ragno
Algorithms 2019, 12(8), 150; https://0-doi-org.brum.beds.ac.uk/10.3390/a12080150 - 29 Jul 2019
Cited by 5 | Viewed by 4706
Abstract
A novel approach to defacement detection is proposed in this paper, explicitly addressing the possible presence of a passive adversary. Defacement detection is an important security measure for websites and applications, aimed at avoiding unwanted modifications that would result in significant reputational damage. As in many other anomaly detection contexts, the algorithm used to identify possible defacements is obtained via an Adversarial Machine Learning process. We consider an exploratory setting, where the adversary can observe the detector's alarm-generating behaviour with the purpose of devising and injecting defacements that will pass undetected. It is then necessary to make the learning process unpredictable, so that the adversary will be unable to replicate it and predict the classifier's behaviour. We achieve this goal by introducing a secret key that our adversary does not know. The key influences the learning process in a number of precisely defined ways, including the subset of examples and features that are actually used, the time of learning and testing, and the learning algorithm's hyper-parameters. This learning methodology was successfully applied by using the system with both real and artificially modified websites. A year-long experiment is also described, concerning the monitoring of the new website of a major manufacturing company.

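One concrete way to realize a keyed, adversary-unpredictable choice is sketched below: the feature subset is derived deterministically from the secret key via HMAC. The feature names and the specific scheme are hypothetical, not the paper's exact mechanism:

```python
import hashlib, hmac, random

def keyed_feature_subset(features, secret_key, fraction=0.7):
    """Derive a deterministic but adversary-unpredictable feature subset
    from the secret key."""
    digest = hmac.new(secret_key, b"feature-selection", hashlib.sha256).digest()
    rng = random.Random(int.from_bytes(digest, "big"))
    n = max(1, int(len(features) * fraction))
    return sorted(rng.sample(features, n))

features = ["dom_depth", "num_scripts", "text_entropy", "img_count",
            "css_delta", "outlink_ratio", "title_len"]    # hypothetical features
print(keyed_feature_subset(features, secret_key=b"s3cr3t"))
```

Without the key, an adversary cannot tell which features drive the alarms, so probing the detector reveals little about its decision boundary.
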
24 pages, 694 KiB  
Article
Mapping a Guided Image Filter on the HARP Reconfigurable Architecture Using OpenCL
by Thomas Faict, Erik H. D’Hollander and Bart Goossens
Algorithms 2019, 12(8), 149; https://0-doi-org.brum.beds.ac.uk/10.3390/a12080149 - 27 Jul 2019
Cited by 3 | Viewed by 3731
Abstract
Intel recently introduced the Heterogeneous Architecture Research Platform (HARP). In this platform, the Central Processing Unit and a Field-Programmable Gate Array are connected through a high-bandwidth, low-latency interconnect, and both share DRAM memory. For this platform, Open Computing Language (OpenCL), a High-Level Synthesis (HLS) language, is made available. By making use of HLS, a faster design cycle can be achieved compared to programming in a traditional hardware description language. This, however, comes at the cost of having less control over the hardware implementation. We investigate how OpenCL can be applied to implement a real-time guided image filter on the HARP platform. In the first phase, the performance-critical parameters of the OpenCL programming model are defined using several specialized benchmarks. In the second phase, the guided image filter algorithm is implemented using the insights gained in the first phase. Both a floating-point and a fixed-point implementation were developed for this algorithm, based on a sliding-window implementation. This resulted in a maximum floating-point performance of 135 GFLOPS, a maximum fixed-point performance of 430 GOPS, and a throughput of HD color images at 74 frames per second.
(This article belongs to the Special Issue High Performance Reconfigurable Computing)

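The guided filter itself is a short sequence of box filters; a numpy reference version of He et al.'s local linear model q = a*I + b (radius and eps are illustrative), which is the structure mapped onto the FPGA:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=4, eps=1e-3):
    """Guided filter: per-window linear model q = a*I + b, computed
    entirely with box filters of radius r."""
    box = lambda x: uniform_filter(x, size=2 * r + 1)
    mI, mp = box(I), box(p)
    a = (box(I * p) - mI * mp) / (box(I * I) - mI * mI + eps)
    b = mp - a * mI
    return box(a) * I + box(b)

I = np.random.rand(128, 128)              # guidance image
p = I + 0.1 * np.random.randn(128, 128)   # noisy input to be filtered
print(guided_filter(I, p).shape)
```

Because every stage is a fixed-size box filter, the algorithm maps naturally onto the sliding-window hardware implementation described.
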
15 pages, 359 KiB  
Article
Variational Calculus Approach to Optimal Interception Task of a Ballistic Missile in 1D and 2D Cases
by Dariusz Horla
Algorithms 2019, 12(8), 148; https://0-doi-org.brum.beds.ac.uk/10.3390/a12080148 - 26 Jul 2019
Viewed by 5435
Abstract
The paper presents the application of variational calculus to the optimal design of an open-loop control law for the anti-ballistic missile interception task. It presents analytical results in the form of appropriate Euler–Lagrange equations for three different performance indices, with a simple model of the rocket and the missile based on the conservation of momentum principle. It also presents a software program enabling rapid simulation of the interception process with selected parameters, parametric analysis, and easy modification by other researchers, as it is written in open code as an m-function of Matlab.

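For reference, each performance index J leads to a stationarity condition of the standard Euler–Lagrange form:

```latex
J[x] = \int_{t_0}^{t_f} L\bigl(t, x(t), \dot{x}(t)\bigr)\, dt ,
\qquad
\frac{\partial L}{\partial x} - \frac{d}{dt}\,\frac{\partial L}{\partial \dot{x}} = 0 .
```
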
15 pages, 1908 KiB  
Article
An Optimized Differential Step-Size LMS Algorithm
by Alexandru-George Rusu, Silviu Ciochină, Constantin Paleologu and Jacob Benesty
Algorithms 2019, 12(8), 147; https://0-doi-org.brum.beds.ac.uk/10.3390/a12080147 - 24 Jul 2019
Cited by 5 | Viewed by 3902
Abstract
Adaptive algorithms with differential step-sizes (related to the filter coefficients) are well known in the literature, most frequently as "proportionate" algorithms. Usually, they are derived on a heuristic basis. In this paper, we introduce an algorithm resulting from an optimization criterion. Thereby, we obtain a benchmark algorithm and also another version with lower computational complexity, which is rigorously valid for less correlated input signals. Simulation results confirm the theory and outline the performance of the algorithms. Unfortunately, the good performance comes at the cost of a significant increase in computational complexity. Nevertheless, the proposed algorithms could represent useful benchmarks in the field.

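A generic proportionate-type update with per-coefficient step-sizes, shown for system identification; this is the heuristic family the paper's optimization-based rule improves upon, not the proposed algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(0)
h = rng.normal(size=16)          # unknown system to identify (hypothetical)
w = np.zeros(16)                 # adaptive filter
x_buf = np.zeros(16)
mu, delta = 0.5, 1e-3

for _ in range(5000):
    x_buf = np.roll(x_buf, 1); x_buf[0] = rng.normal()
    d = h @ x_buf + 0.01 * rng.normal()              # desired signal
    e = d - w @ x_buf
    g = np.abs(w) + delta                            # per-coefficient gains
    g /= g.sum()                                     # (proportionate heuristic)
    w += mu * e * g * x_buf / (x_buf @ (g * x_buf) + delta)

print("misalignment (dB):",
      round(10 * np.log10(np.sum((h - w) ** 2) / np.sum(h ** 2)), 1))
```
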