Algorithms, Volume 11, Issue 12 (December 2018) – 24 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and open them with the free Adobe Reader.
14 pages, 1116 KiB  
Article
Parallel Reservoir Simulation with OpenACC and Domain Decomposition
by Zhijiang Kang, Ze Deng, Wei Han and Dongmei Zhang
Algorithms 2018, 11(12), 213; https://0-doi-org.brum.beds.ac.uk/10.3390/a11120213 - 18 Dec 2018
Cited by 2 | Viewed by 3721
Abstract
Parallel reservoir simulation is an important approach to solving real-time reservoir management problems. Recently, there has been a trend of using a graphics processing unit (GPU) to parallelize reservoir simulations. Current GPU-aided reservoir simulations focus on the compute unified device architecture (CUDA). Nevertheless, CUDA is not functionally portable across devices and requires a large amount of code. Meanwhile, domain decomposition is not well used in GPU-based reservoir simulations. To address these problems, we propose a parallel method with OpenACC that accelerates serial code and reduces the time and effort of porting an application to the GPU. Furthermore, GPU-aided domain decomposition is developed to improve the efficiency of reservoir simulation. The experimental results indicate that (1) the proposed GPU-aided approach can outperform the CPU-based one by up to about two times, and, with the help of OpenACC, the porting workload was reduced significantly, to about 22 percent of the source code; (2) the domain decomposition method can further improve the execution efficiency by up to 1.7×. The proposed parallel reservoir simulation method is an efficient tool to accelerate reservoir simulation. Full article
21 pages, 1197 KiB  
Article
Evaluating Algorithm Efficiency for Optimizing Experimental Designs with Correlated Data
by Lazarus K. Mramba and Salvador A. Gezan
Algorithms 2018, 11(12), 212; https://0-doi-org.brum.beds.ac.uk/10.3390/a11120212 - 18 Dec 2018
Viewed by 3455
Abstract
The search for efficient methods and procedures to optimize experimental designs is a vital process in field trials that is often challenged by computational bottlenecks. Most existing methods ignore the presence of some form of correlation in the data to simplify the optimization process at the design stage. This study explores several algorithms for improving field experimental designs using a linear mixed models statistical framework that adjusts for both spatial and genetic correlations, based on A- and D-optimality criteria. Relative design efficiencies are estimated for an array of algorithms, including pairwise swap, genetic neighborhood, and simulated annealing, and evaluated at varying levels of heritability and spatial and genetic correlation. Initial randomized complete block designs were generated using a stochastic procedure and can also be imported directly from other design software. Results showed that, at a spatial correlation of 0.6 and a heritability of 0.3 under the A-optimality criterion, both the simulated annealing and simple pairwise algorithms achieved the highest design efficiencies of 7.4% among genetically unrelated individuals, implying a reduction in the average variance of the random treatment effects by 7.4% when the algorithm was iterated 5000 times. In contrast, results under the D-optimality criterion indicated that simulated annealing had the lowest design efficiency. The simple pairwise algorithm consistently maintained the highest design efficiencies in all evaluated conditions. Design efficiencies for experiments with full-sib families decreased with increasing heritability. The number of successful swaps appeared to decrease with increasing heritability and was highest for the simulated annealing and simple pairwise algorithms and lowest for the genetic neighborhood algorithm. Full article
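The pairwise-swap and simulated annealing moves compared in this abstract can be sketched in a few lines of Python. Everything below is a toy illustration: the surrogate objective (count of same-treatment horizontal neighbours in the grid), the cooling schedule, and all parameters are invented for the sketch and stand in for the paper's A- and D-optimality criteria under a linear mixed model.

```python
import math
import random

def cost(layout, ncol):
    # Surrogate objective: count of horizontally adjacent plots with the
    # same treatment (lower is better); NOT the paper's optimality criteria.
    return sum(1 for i in range(len(layout) - 1)
               if (i + 1) % ncol != 0 and layout[i] == layout[i + 1])

def simulated_annealing(layout, ncol, iters=5000, t0=1.0, seed=1):
    rng = random.Random(seed)
    cur, cur_c = layout[:], cost(layout, ncol)
    best, best_c = cur[:], cur_c
    for it in range(iters):
        i, j = rng.sample(range(len(cur)), 2)       # pairwise swap move
        cur[i], cur[j] = cur[j], cur[i]
        c = cost(cur, ncol)
        temp = t0 * (1 - it / iters) + 1e-9         # linear cooling schedule
        if c <= cur_c or rng.random() < math.exp(-(c - cur_c) / temp):
            cur_c = c
            if c < best_c:
                best, best_c = cur[:], c
        else:
            cur[i], cur[j] = cur[j], cur[i]         # undo rejected swap
    return best, best_c

# Four treatments replicated over a 4 x 8 grid of plots.
ncol = 8
layout = [t for t in range(4) for _ in range(8)]
random.Random(0).shuffle(layout)
improved, c = simulated_annealing(layout, ncol)
```

With the temperature forced to zero, the same loop reduces to the simple pairwise-swap algorithm the abstract also evaluates.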
26 pages, 2049 KiB  
Article
A Connection Between the Kalman Filter and an Optimized LMS Algorithm for Bilinear Forms
by Laura-Maria Dogariu, Silviu Ciochină, Constantin Paleologu and Jacob Benesty
Algorithms 2018, 11(12), 211; https://0-doi-org.brum.beds.ac.uk/10.3390/a11120211 - 17 Dec 2018
Cited by 7 | Viewed by 3788
Abstract
The system identification problem becomes more challenging when the parameter space increases. Recently, several works have focused on the identification of bilinear forms, which are related to the impulse responses of a spatiotemporal model, in the context of a multiple-input/single-output system. In this framework, the problem was addressed in terms of the Wiener filter and different basic adaptive algorithms. This paper studies two types of algorithms tailored for the identification of such bilinear forms, i.e., the Kalman filter (along with its simplified version) and an optimized least-mean-square (LMS) algorithm. Also, a comparison between them is performed, which shows interesting similarities. In addition to the mathematical derivation of the algorithms, we also provide extensive experimental results, which support the theoretical findings and indicate the good performance of the proposed solutions. Full article
(This article belongs to the Special Issue Adaptive Filtering Algorithms)
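The bilinear-form derivations are beyond the scope of a contents page, but the classical LMS update that the optimized algorithm builds on can be sketched briefly. The filter length, step size, and test system below are illustrative choices, not the paper's setup, and the sketch identifies an ordinary FIR system rather than a bilinear form.

```python
import numpy as np

rng = np.random.default_rng(0)
h_true = np.array([0.5, -0.3, 0.2])   # unknown FIR system to identify
n_samples, mu = 5000, 0.05            # mu: LMS step size (illustrative)

x = rng.standard_normal(n_samples)    # white input signal
w = np.zeros(3)                       # adaptive filter weights
for n in range(3, n_samples):
    u = x[n:n - 3:-1]                 # most recent 3 input samples
    d = h_true @ u                    # desired (system) output, noiseless here
    e = d - w @ u                     # a-priori error
    w = w + mu * e * u                # LMS weight update

print(w)  # converges toward h_true
```

The Kalman filter treated in the paper can be viewed as replacing the fixed step size `mu` with a state-dependent gain, which is where the similarities the authors report come from.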
26 pages, 6029 KiB  
Article
Multi-Objective Bi-Level Programming for the Energy-Aware Integration of Flexible Job Shop Scheduling and Multi-Row Layout
by Hongliang Zhang, Haijiang Ge, Ruilin Pan and Yujuan Wu
Algorithms 2018, 11(12), 210; https://0-doi-org.brum.beds.ac.uk/10.3390/a11120210 - 17 Dec 2018
Cited by 14 | Viewed by 4404
Abstract
The flexible job shop scheduling problem (FJSSP) and multi-row workshop layout problem (MRWLP) are two major focuses in sustainable manufacturing processes. There is a close interaction between them since the FJSSP provides the material handling information to guide the optimization of the MRWLP, and the layout scheme affects the effect of the scheduling scheme by the transportation time of jobs. However, in traditional methods, they are regarded as separate tasks performed sequentially, which ignores the interaction. Therefore, developing effective methods to deal with the multi-objective energy-aware integration of the FJSSP and MRWLP (MEIFM) problem in a sustainable manufacturing system is becoming more and more important. Based on the interaction between FJSSP and MRWLP, the MEIFM problem can be formulated as a multi-objective bi-level programming (MOBLP) model. The upper-level model for FJSSP is employed to minimize the makespan and total energy consumption, while the lower-level model for MRWLP is used to minimize the material handling quantity. Because the MEIFM problem is formulated as a mixed-integer non-linear programming model, it is difficult to solve using traditional methods. Thus, this paper proposes an improved multi-objective hierarchical genetic algorithm (IMHGA) to solve this model. Finally, the effectiveness of the method is verified through comparative experiments. Full article
(This article belongs to the Special Issue Algorithms for Decision Making)
34 pages, 1005 KiB  
Article
Hadoop vs. Spark: Impact on Performance of the Hammer Query Engine for Open Data Corpora
by Mauro Pelucchi, Giuseppe Psaila and Maurizio Toccu
Algorithms 2018, 11(12), 209; https://0-doi-org.brum.beds.ac.uk/10.3390/a11120209 - 17 Dec 2018
Cited by 6 | Viewed by 3940
Abstract
The Hammer prototype is a query engine for corpora of Open Data that provides users with the concept of blind querying. Since data sets published on Open Data portals are heterogeneous, users wishing to find out interesting data sets are blind: queries cannot be fully specified, as in the case of databases. Consequently, the query engine is responsible for rewriting and adapting the blind query to the actual data sets, by exploiting lexical and semantic similarity. The effectiveness of this approach was discussed in our previous works. In this paper, we report our experience in developing the query engine. In fact, in the very first version of the prototype, we realized that the implementation of the retrieval technique was too slow, even though corpora contained only a few thousand data sets. We decided to adopt the Map-Reduce paradigm, in order to parallelize the query engine and improve performance. We passed through several versions of the query engine, either based on the Hadoop framework or on the Spark framework. Hadoop and Spark are two very popular frameworks for writing and executing parallel algorithms based on the Map-Reduce paradigm. In this paper, we present our study about the impact of adopting the Map-Reduce approach and its two most popular frameworks to parallelize the Hammer query engine; we discuss various implementations of the query engine, either obtained without significantly rewriting the algorithm or obtained by completely rewriting the algorithm by exploiting high level abstractions provided by Spark. The experimental campaign we performed shows the benefits provided by each studied solution, with the perspective of moving toward Big Data in the future. The lessons we learned are collected and synthesized into behavioral guidelines for developers approaching the problem of parallelizing algorithms by means of Map-Reduce frameworks. Full article
(This article belongs to the Special Issue MapReduce for Big Data)
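The Map-Reduce paradigm underlying both Hadoop and Spark can be illustrated in plain Python, independent of either framework. The keyword-counting task below is our own invented example, not the Hammer retrieval algorithm; the three functions mirror the mapper, shuffle, and reducer phases the frameworks execute in parallel.

```python
from collections import defaultdict
from itertools import chain

def map_phase(doc):
    # Emit (key, 1) pairs, as a Hadoop/Spark mapper would.
    return [(word.lower(), 1) for word in doc.split()]

def shuffle_phase(pairs):
    # Group values by key (done by the framework between map and reduce).
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reducer: aggregate each key's values.
    return {key: sum(values) for key, values in groups.items()}

docs = ["open data portals", "open data corpora", "query engine"]
pairs = chain.from_iterable(map_phase(d) for d in docs)
counts = reduce_phase(shuffle_phase(pairs))
print(counts["open"])  # 2
```

In Spark the same pipeline would be a one-liner over an RDD (`flatMap` then `reduceByKey`), which is precisely the kind of high-level abstraction the paper exploits in its rewritten versions.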
14 pages, 404 KiB  
Article
On the Use of Learnheuristics in Vehicle Routing Optimization Problems with Dynamic Inputs
by Quim Arnau, Angel A. Juan and Isabel Serra
Algorithms 2018, 11(12), 208; https://0-doi-org.brum.beds.ac.uk/10.3390/a11120208 - 15 Dec 2018
Cited by 19 | Viewed by 5267
Abstract
Freight transportation is becoming an increasingly critical activity for enterprises in a global world. Moreover, the distribution activities have a non-negligible impact on the environment, as well as on the citizens’ welfare. The classical vehicle routing problem (VRP) aims at designing routes that minimize the cost of serving customers using a given set of capacitated vehicles. Some VRP variants consider traveling times, either in the objective function (e.g., including the goal of minimizing total traveling time or designing balanced routes) or as constraints (e.g., the setting of time windows or a maximum time per route). Typically, the traveling time between two customers or between one customer and the depot is assumed to be both known in advance and static. However, in real life, there are plenty of factors (predictable or not) that may affect these traveling times, e.g., traffic jams, accidents, road works, or even the weather. In this work, we analyze the VRP with dynamic traveling times. Our work assumes not only that these inputs are dynamic in nature, but also that they are a function of the structure of the emerging routing plan. In other words, these traveling times need to be dynamically re-evaluated as the solution is being constructed. In order to solve this dynamic optimization problem, a learnheuristic-based approach is proposed. Our approach integrates statistical learning techniques within a metaheuristic framework. A number of computational experiments are carried out in order to illustrate our approach and discuss its effectiveness. Full article
(This article belongs to the Special Issue Algorithms for Decision Making)
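The key idea of the abstract — traveling times that must be re-evaluated as the routing plan emerges — can be sketched with a greedy constructive heuristic. The dynamic-time model below (travel time inflated by the load already on the route) is a hypothetical stand-in for the paper's statistical-learning component, and all coordinates and coefficients are invented.

```python
import math
import random

def base_time(a, b):
    return math.dist(a, b)  # static Euclidean travel time

def dynamic_time(a, b, load):
    # Hypothetical dynamic model: travel time grows with the load already
    # assigned to the emerging route (a congestion proxy standing in for
    # the learnheuristic's learned model).
    return base_time(a, b) * (1.0 + 0.05 * load)

def greedy_route(depot, customers):
    route, pos, load, total = [depot], depot, 0, 0.0
    todo = customers[:]
    while todo:
        # Re-evaluate dynamic travel times at every construction step.
        nxt = min(todo, key=lambda c: dynamic_time(pos, c, load))
        total += dynamic_time(pos, nxt, load)
        route.append(nxt)
        todo.remove(nxt)
        pos, load = nxt, load + 1
    total += dynamic_time(pos, depot, load)  # return to depot
    return route + [depot], total

random.seed(2)
customers = [(random.random(), random.random()) for _ in range(8)]
route, t = greedy_route((0.0, 0.0), customers)
```

A learnheuristic would wrap a metaheuristic (e.g., biased randomization of the `min` choice) around this loop and refit the dynamic-time model from observed data as the search proceeds.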
19 pages, 541 KiB  
Article
Trajectory Clustering and k-NN for Robust Privacy Preserving Spatiotemporal Databases
by Elias Dritsas, Maria Trigka, Panagiotis Gerolymatos and Spyros Sioutas
Algorithms 2018, 11(12), 207; https://doi.org/10.3390/a11120207 - 14 Dec 2018
Cited by 10 | Viewed by 5249
Abstract
In the context of this research work, we studied the problem of privacy preserving on spatiotemporal databases. In particular, we investigated the k-anonymity of mobile users based on real trajectory data. The k-anonymity set consists of the k nearest neighbors. We constructed a motion vector of the form (x, y, g, v), where x and y are the spatial coordinates, g is the angle direction, and v is the velocity of mobile users, and studied the problem in four-dimensional space. We followed two approaches. The former applied only the k-Nearest Neighbor (k-NN) algorithm on the whole dataset, while the latter combined trajectory clustering, based on K-means, with k-NN. Specifically, it applied k-NN inside a cluster of mobile users with a similar motion pattern (g, v). We defined a metric, called vulnerability, that measures the rate at which the k-NNs are varying. This metric varies from 1/k (high robustness) to 1 (low robustness) and represents the probability of a mobile user’s real identity being discovered by a potential attacker. The aim of this work was to prove that, with high probability, the above rate tends to a number very close to 1/k in the clustering method, which means that the k-anonymity is highly preserved. Through experiments on real spatial datasets, we evaluated the anonymity robustness, the so-called vulnerability, of the proposed method. Full article
(This article belongs to the Special Issue Humanistic Data Mining: Tools and Applications)
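One plausible, simplified reading of the vulnerability metric (ranging from 1/k when the k-NN set is fully stable to 1 when it fully changes) can be sketched as follows. The definition below — one over the number of neighbours that persist across all snapshots — is our illustrative assumption, not the paper's exact formula.

```python
def vulnerability(knn_snapshots, k):
    # Hypothetical reading of the metric: neighbours that persist across
    # all snapshots form the effective anonymity set; vulnerability is
    # 1 / (persistent set size), ranging from 1/k (all k neighbours
    # stable) to 1 (no stable neighbour, so the user stands out).
    persistent = set(knn_snapshots[0])
    for snap in knn_snapshots[1:]:
        persistent &= set(snap)
    return 1.0 / max(1, len(persistent))

# k = 4 nearest neighbours of one mobile user over three time steps.
stable   = [[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]
unstable = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]
print(vulnerability(stable, 4))    # 0.25 -> high robustness
print(vulnerability(unstable, 4))  # 1.0  -> low robustness
```

Clustering users by similar (g, v) before running k-NN, as in the paper's second approach, makes neighbour sets more stable over time and thus pushes this ratio toward 1/k.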
35 pages, 12141 KiB  
Article
Optimal Design of Interval Type-2 Fuzzy Heart Rate Level Classification Systems Using the Bird Swarm Algorithm
by Ivette Miramontes, Juan Carlos Guzman, Patricia Melin and German Prado-Arechiga
Algorithms 2018, 11(12), 206; https://0-doi-org.brum.beds.ac.uk/10.3390/a11120206 - 14 Dec 2018
Cited by 35 | Viewed by 4281
Abstract
In this paper, the optimal designs of type-1 and interval type-2 fuzzy systems for the classification of the heart rate level are presented. The contribution of this work is a proposed approach for achieving the optimal design of interval type-2 fuzzy systems for the classification of the heart rate in patients. The fuzzy rule base was designed based on the knowledge of experts. Optimization of the membership functions of the fuzzy systems is done in order to improve the classification rate and provide a more accurate diagnosis, and for this goal the Bird Swarm Algorithm was used. Two different type-1 fuzzy systems are designed and optimized, the first one with trapezoidal membership functions and the second with Gaussian membership functions. Once the best type-1 fuzzy systems have been obtained, these are considered as a basis for designing the interval type-2 fuzzy systems, where the footprint of uncertainty was optimized to find the optimal representation of uncertainty. After performing different tests with patients and comparing the classification rate of each fuzzy system, it is concluded that fuzzy systems with Gaussian membership functions provide a better classification than those designed with trapezoidal membership functions. Additionally, tests were performed with the Crow Search Algorithm to carry out a performance comparison, with the Bird Swarm Algorithm achieving the best results. Full article
(This article belongs to the Special Issue Evolutionary Algorithms in Health Technologies)
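A bare-bones type-1 classifier with Gaussian membership functions, as mentioned in the abstract, can be sketched like this. The level names, centres, and widths are invented for the sketch; the paper's systems use an expert-designed rule base and metaheuristically optimized parameters, and its interval type-2 extension additionally carries a footprint of uncertainty around each membership function.

```python
import math

def gaussian(x, c, sigma):
    # Type-1 Gaussian membership function with centre c and width sigma.
    return math.exp(-((x - c) ** 2) / (2 * sigma ** 2))

# Illustrative membership functions for heart-rate levels (bpm);
# centres/widths are our assumptions, not the paper's optimized values.
levels = {
    "bradycardia": (45, 10),
    "normal": (75, 12),
    "tachycardia": (115, 15),
}

def classify(bpm):
    # Assign the level with the highest membership degree.
    return max(levels, key=lambda lv: gaussian(bpm, *levels[lv]))

print(classify(72))   # normal
print(classify(130))  # tachycardia
```

The Bird Swarm Algorithm's role in the paper is to tune parameters like the centres and widths above so that the classification rate over patient data is maximized.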
16 pages, 5739 KiB  
Article
Optimal Sliding Mode Control for an Active Suspension System Based on a Genetic Algorithm
by Chen Zhou, Xinhui Liu, Wei Chen, Feixiang Xu and Bingwei Cao
Algorithms 2018, 11(12), 205; https://0-doi-org.brum.beds.ac.uk/10.3390/a11120205 - 14 Dec 2018
Cited by 40 | Viewed by 7455
Abstract
In order to improve the dynamic quality of traditional sliding mode control for an active suspension system, an optimal sliding mode control (OSMC) based on a genetic algorithm (GA) is proposed. First, the overall structure and control principle of the active suspension system are introduced. Second, the mathematical model of the quarter car active suspension system is established. Third, a sliding mode control (SMC) controller is designed to manipulate the active force to control the active suspension system. Fourth, GA is applied to optimize the weight coefficients of an SMC switching function and the parameters of the control law. Finally, the simulation model is built based on MATLAB/Simulink (version 2014a), and the simulations are performed and analyzed with the proposed control strategy to verify its performance. The simulation results show that the OSMC controller tuned using a GA has better control performance than the traditional SMC controller. Full article
13 pages, 1513 KiB  
Article
A Novel Method for Risk Assessment and Simulation of Collision Avoidance for Vessels based on AIS
by ManhCuong Nguyen, Shufang Zhang and Xiaoye Wang
Algorithms 2018, 11(12), 204; https://0-doi-org.brum.beds.ac.uk/10.3390/a11120204 - 14 Dec 2018
Cited by 23 | Viewed by 5438
Abstract
The identification of risks associated with collision for vessels is an important element in maritime safety and management. A vessel collision avoidance system is a topic that has been deeply studied, and it is a specialization in navigation technology. The automatic identification system (AIS) has been used to support navigation, route estimation, collision prediction, and abnormal traffic detection. This article examined the main elements of ship collision, developed a mathematical model for the risk assessment, and simulated a collision assessment based on AIS information, thereby providing meaningful recommendations for crew training and a warning system, in conjunction with the AIS on board. Full article
14 pages, 6055 KiB  
Article
Finite Difference Algorithm on Non-Uniform Meshes for Modeling 2D Magnetotelluric Responses
by Xiaozhong Tong, Yujun Guo and Wei Xie
Algorithms 2018, 11(12), 203; https://0-doi-org.brum.beds.ac.uk/10.3390/a11120203 - 14 Dec 2018
Cited by 4 | Viewed by 4324
Abstract
A finite-difference approach with non-uniform meshes was presented for simulating magnetotelluric responses in 2D structures. We presented the calculation formula of this scheme from the boundary value problem of the electric and magnetic fields, and compared finite-difference solutions with finite-element numerical results and analytical solutions of a 1D model. First, a homogeneous half-space model was tested, showing that the finite-difference approach can provide very good accuracy for 2D magnetotelluric modeling. Then we compared the results to the analytical solutions for a two-layered geo-electric model; the relative errors of the apparent resistivity and the impedance phase both increased as the frequency increased. Finally, we compared our finite-difference simulation results for the COMMEMI 2D-0 model with the finite-element solutions. The two sets of results are in close agreement with each other. These comparisons confirm the validity and reliability of our finite-difference algorithm. Moreover, a future project will extend the 2D structures to 3D, where non-uniform meshes should perform especially well. Full article
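The three-point stencil at the heart of non-uniform-mesh finite differences can be illustrated on a 1D Poisson problem — a deliberate simplification of the 2D magnetotelluric equations, with a problem and mesh of our own choosing. The stencil weights below are the standard second-derivative approximation for unequal spacings, which is exact for quadratic solutions.

```python
import numpy as np

# Solve u''(x) = 2 on (0,1) with u(0)=0, u(1)=1; exact solution u = x^2.
# Non-uniform mesh: endpoints plus 19 random interior points.
x = np.sort(np.concatenate([[0.0, 1.0], np.random.default_rng(0).random(19)]))
n = len(x)
A = np.zeros((n, n))
b = np.full(n, 2.0)                  # right-hand side u'' = 2
A[0, 0] = A[-1, -1] = 1.0
b[0], b[-1] = 0.0, 1.0               # Dirichlet boundary values
for i in range(1, n - 1):
    hm, hp = x[i] - x[i - 1], x[i + 1] - x[i]   # left/right spacings
    A[i, i - 1] = 2.0 / (hm * (hm + hp))
    A[i, i]     = -2.0 / (hm * hp)
    A[i, i + 1] = 2.0 / (hp * (hm + hp))
u = np.linalg.solve(A, b)
print(np.max(np.abs(u - x ** 2)))    # ~0: scheme is exact for quadratics
```

The same unequal-spacing weights, applied in both horizontal and vertical directions, give the 2D stencil used for magnetotelluric modeling, where the mesh is refined near conductivity contrasts and coarsened elsewhere.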
14 pages, 833 KiB  
Article
Classification of Normal and Abnormal Regimes in Financial Markets
by Jun Chen and Edward P. K. Tsang
Algorithms 2018, 11(12), 202; https://0-doi-org.brum.beds.ac.uk/10.3390/a11120202 - 12 Dec 2018
Cited by 7 | Viewed by 5442
Abstract
When financial market conditions change, traders adopt different strategies. The traders’ collective behaviour may cause significant changes in the statistical properties of price movements. When this happens, the market is said to have gone through “regime changes”. The purpose of this paper is to characterise what is a “normal market regime” as well as what is an “abnormal market regime”, under observations in Directional Changes (DC). Our study starts with historical data from 10 financial markets. For each market, we focus on a period of time in which significant events could have triggered regime changes. The observations of regime changes in these markets are then positioned in a designed two-dimensional indicator space based on DC. Our results suggest that the normal regimes from different markets share similar statistical characteristics. In other words, with our observations, it is possible to distinguish normal regimes from abnormal regimes. This is significant, because, for the first time, we can tell whether a market is in a normal regime by observing the DC indicators in the market. This opens the door for future work to be able to dynamically monitor the market for regime change. Full article
(This article belongs to the Special Issue Algorithms in Computational Finance)
16 pages, 4761 KiB  
Article
A Fast Approach to Texture-Less Object Detection Based on Orientation Compressing Map and Discriminative Regional Weight
by Hancheng Yu, Haibao Qin and Maoting Peng
Algorithms 2018, 11(12), 201; https://0-doi-org.brum.beds.ac.uk/10.3390/a11120201 - 12 Dec 2018
Cited by 4 | Viewed by 3641
Abstract
This paper presents a fast algorithm for texture-less object recognition, which is designed to be robust to cluttered backgrounds and small transformations. At its core, the proposed method uses a two-stage template-based procedure with an orientation compressing map and discriminative regional weight (OCM-DRW) to effectively detect texture-less objects. In the first stage, the proposed method quantizes and compresses all the orientations in a neighborhood to obtain the orientation compressing map, which is then used to generate a set of possible object locations. To recognize the object at these possible object locations, the second stage computes the similarity of each possible object location with the learned template using the discriminative regional weight, which can effectively distinguish different categories of objects with similar parts. Experiments on publicly available texture-less object datasets indicate that, apart from yielding efficient computational performance, the proposed method also attains remarkable recognition rates, surpassing recent state-of-the-art texture-less object detectors in the presence of high clutter, occlusion, and scale-rotation changes. It improves the accuracy and speed by 8% and 370%, respectively, relative to the previous best result on the D-Textureless dataset. Full article
3 pages, 159 KiB  
Editorial
Special Issue on Algorithms for the Resource Management of Large Scale Infrastructures
by Danilo Ardagna, Claudia Canali and Riccardo Lancellotti
Algorithms 2018, 11(12), 200; https://0-doi-org.brum.beds.ac.uk/10.3390/a11120200 - 10 Dec 2018
Cited by 2 | Viewed by 3102
Abstract
Modern distributed systems are becoming increasingly complex as virtualization is being applied at both the levels of computing and networking. Consequently, the resource management of this infrastructure requires innovative and efficient solutions. This issue is further exacerbated by the unpredictable workload of modern applications and the need to limit the global energy consumption. The purpose of this special issue is to present recent advances and emerging solutions to address the challenge of resource management in the context of modern large-scale infrastructures. We believe that the four papers that we selected present an up-to-date view of the emerging trends, and the papers propose innovative solutions to support efficient and self-managing systems that are able to adapt, manage, and cope with changes derived from continually changing workload and application deployment settings, without the need for human supervision. Full article
(This article belongs to the Special Issue Algorithms for the Resource Management of Large Scale Infrastructures)
12 pages, 1046 KiB  
Article
Decision Support Software for Forecasting Patient’s Length of Stay
by Ioannis E. Livieris, Theodore Kotsilieris, Ioannis Dimopoulos and Panagiotis Pintelas
Algorithms 2018, 11(12), 199; https://0-doi-org.brum.beds.ac.uk/10.3390/a11120199 - 06 Dec 2018
Cited by 12 | Viewed by 3838
Abstract
Length of stay of hospitalized patients is generally considered to be a significant and critical factor for healthcare policy planning which consequently affects the hospital management plan and resources. Its reliable prediction in the preadmission stage could further assist in identifying abnormality or potential medical risks to trigger additional attention for individual cases. Recently, data mining and machine learning constitute significant tools in the healthcare domain. In this work, we introduce a new decision support software for the accurate prediction of hospitalized patients’ length of stay which incorporates a novel two-level classification algorithm. Our numerical experiments indicate that the proposed algorithm exhibits better classification performance than any examined single learning algorithm. The proposed software was developed to provide assistance to the hospital management and strengthen the service system by offering customized assistance according to patients’ predicted hospitalization time. Full article
(This article belongs to the Special Issue Humanistic Data Mining: Tools and Applications)
24 pages, 3309 KiB  
Article
Damage Identification Algorithm of Hinged Joints for Simply Supported Slab Bridges Based on Modified Hinge Plate Method and Artificial Bee Colony Algorithms
by Hanbing Liu, Xin He and Yubo Jiao
Algorithms 2018, 11(12), 198; https://0-doi-org.brum.beds.ac.uk/10.3390/a11120198 - 04 Dec 2018
Cited by 16 | Viewed by 3654
Abstract
Hinge joint damage is a typical form of damage occurring in simply supported slab bridges, which can present adverse effects on the overall force distribution of the structure. However, damage identification methods for hinge joint damage are still limited. In this study, a damage identification algorithm for simply supported hinged-slab bridges based on the modified hinge plate method (MHPM) and artificial bee colony (ABC) algorithms was proposed by considering the effect of hinge damage conditions on the lateral load distribution (LLD) of structures. Firstly, MHPM was proposed and demonstrated, which is based on the traditional hinge plate method, introducing relative displacement as a damage factor to simulate hinge joint damage. The effectiveness of MHPM was verified through comparison with the finite element method (FEM). Secondly, damage identification was treated as the inverse problem of calculating the LLD in damage conditions of simply supported slab bridges. Four ABC algorithms were chosen to solve the problem due to their simple structure, ease of implementation, and robustness. Comparisons of convergence speed and identification accuracy with the genetic algorithm and particle swarm optimization were also conducted. Finally, hinged bridges composed of four and seven slabs were studied as numerical examples to demonstrate the feasibility and correctness of the proposed method. The simulation results revealed that the proposed algorithm could identify the location and degree of damaged joints efficiently and precisely. Full article
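The ABC family of algorithms used here can be sketched in minimal generic form. The version below merges the employed and onlooker phases into one loop, minimizes a simple sphere function rather than the paper's LLD inverse problem, and uses invented population sizes and limits; it is a sketch of the technique, not the authors' implementation.

```python
import random

def abc_minimize(f, dim, bounds, n_food=10, limit=20, iters=200, seed=3):
    # Minimal artificial-bee-colony sketch: each "food source" is a
    # candidate solution; bees perturb one coordinate toward a random
    # neighbour, and scouts reset sources that stop improving.
    rng = random.Random(seed)
    lo, hi = bounds
    foods = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_food)]
    fit = [f(s) for s in foods]
    trials = [0] * n_food
    best_x, best_f = min(zip(foods, fit), key=lambda p: p[1])
    best_x = best_x[:]
    for _ in range(iters):
        for i in range(n_food):
            k = rng.choice([p for p in range(n_food) if p != i])
            j = rng.randrange(dim)
            cand = foods[i][:]
            cand[j] += rng.uniform(-1, 1) * (foods[i][j] - foods[k][j])
            cf = f(cand)
            if cf < fit[i]:                      # greedy selection
                foods[i], fit[i], trials[i] = cand, cf, 0
                if cf < best_f:
                    best_x, best_f = cand[:], cf
            else:
                trials[i] += 1
                if trials[i] > limit:            # scout phase: abandon source
                    foods[i] = [rng.uniform(lo, hi) for _ in range(dim)]
                    fit[i], trials[i] = f(foods[i]), 0
    return best_x, best_f

sphere = lambda v: sum(x * x for x in v)
sol, val = abc_minimize(sphere, dim=3, bounds=(-5, 5))
```

In the paper's setting, `f` would measure the misfit between the LLD computed by MHPM for a candidate set of hinge damage factors and the observed LLD.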

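The abstract above hands damage identification, posed as an inverse problem, to the ABC optimizer. As a rough, self-contained sketch of the ABC metaheuristic itself (not the paper's MHPM model or its exact ABC variants; the function names, the toy objective, and all parameter values below are illustrative):

```python
import random

def abc_minimize(f, bounds, n_food=10, limit=20, iters=200, seed=0):
    """Minimal artificial bee colony: employed, onlooker, and scout phases."""
    rng = random.Random(seed)
    dim = len(bounds)
    def rand_point():
        return [rng.uniform(lo, hi) for lo, hi in bounds]
    foods = [rand_point() for _ in range(n_food)]
    fits = [f(x) for x in foods]
    trials = [0] * n_food
    best = min(zip(fits, foods))
    for _ in range(iters):
        # Employed phase visits every source; as a simplified onlooker phase,
        # the better-ranked half of the sources is visited a second time.
        ranked = sorted(range(n_food), key=lambda i: fits[i])
        for i in list(range(n_food)) + ranked[: n_food // 2]:
            k = rng.choice([j for j in range(n_food) if j != i])
            d = rng.randrange(dim)
            cand = foods[i][:]
            # Standard ABC move: perturb one dimension toward a random neighbour.
            cand[d] += rng.uniform(-1, 1) * (foods[i][d] - foods[k][d])
            lo, hi = bounds[d]
            cand[d] = min(max(cand[d], lo), hi)
            fc = f(cand)
            if fc < fits[i]:
                foods[i], fits[i], trials[i] = cand, fc, 0
            else:
                trials[i] += 1
        # Scout phase: abandon sources that stopped improving.
        for i in range(n_food):
            if trials[i] > limit:
                foods[i], trials[i] = rand_point(), 0
                fits[i] = f(foods[i])
        best = min(best, min(zip(fits, foods)))
    return best  # (best objective value, best position)

# Toy quadratic objective standing in for the LLD-mismatch residual.
val, pos = abc_minimize(lambda x: sum(v * v for v in x), [(-5, 5)] * 3)
```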
17 pages, 828 KiB  
Article
Parametric Estimation in the Vasicek-Type Model Driven by Sub-Fractional Brownian Motion
by Shengfeng Li and Yi Dong
Algorithms 2018, 11(12), 197; https://0-doi-org.brum.beds.ac.uk/10.3390/a11120197 - 04 Dec 2018
Cited by 5 | Viewed by 2995
Abstract
In this paper, we tackle the least squares estimators of the Vasicek-type model driven by sub-fractional Brownian motion: dX_t = (μ + θX_t) dt + dS_t^H, t ≥ 0, with [...] Read more.
In this paper, we tackle the least squares estimators of the Vasicek-type model driven by sub-fractional Brownian motion: dX_t = (μ + θX_t) dt + dS_t^H, t ≥ 0, with X_0 = 0, where S^H is a sub-fractional Brownian motion whose Hurst index H is greater than 1/2, and μ ∈ ℝ and θ ∈ ℝ₊ are two unknown parameters. Based on continuous observations, we propose least squares estimators of μ and θ and discuss the consistency and asymptotic distributions of the two estimators. Full article
(This article belongs to the Special Issue Parameter Estimation Algorithms and Its Applications)

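The least squares idea in the abstract can be illustrated on a discretized path. The sketch below simulates the mean-reverting case driven by standard Brownian motion (a stand-in for the sub-fractional S^H with H > 1/2, which is harder to simulate) and recovers (μ, θ) by regressing increments on the state. All names and parameter values are illustrative; these are not the paper's continuous-observation estimators:

```python
import math
import random

def simulate_vasicek(mu, theta, T=100.0, n=100_000, seed=1):
    """Euler scheme for dX_t = (mu + theta * X_t) dt + dB_t, X_0 = 0.
    Standard Brownian motion replaces the sub-fractional S^H (illustration only)."""
    rng = random.Random(seed)
    dt = T / n
    xs = [0.0]
    for _ in range(n):
        dB = rng.gauss(0.0, math.sqrt(dt))
        xs.append(xs[-1] + (mu + theta * xs[-1]) * dt + dB)
    return xs, dt

def lse(xs, dt):
    """Least squares regression of increments dX on (1, X): recovers (mu, theta)."""
    n = len(xs) - 1
    sx = sum(xs[:-1])
    sxx = sum(x * x for x in xs[:-1])
    dys = [xs[i + 1] - xs[i] for i in range(n)]
    sy = sum(dys)
    sxy = sum(x * dy for x, dy in zip(xs[:-1], dys))
    det = n * sxx - sx * sx
    theta_hat = (n * sxy - sx * sy) / det / dt   # slope of dX on X, scaled by dt
    mu_hat = (sy / dt - theta_hat * sx) / n      # intercept, scaled by dt
    return mu_hat, theta_hat

# Mean-reverting parameters chosen for numerical stability of the sketch.
mu_hat, theta_hat = lse(*simulate_vasicek(mu=1.0, theta=-0.5))
```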
22 pages, 1404 KiB  
Article
Solon: A Holistic Approach for Modelling, Managing and Mining Legal Sources
by Marios Koniaris, George Papastefanatos and Ioannis Anagnostopoulos
Algorithms 2018, 11(12), 196; https://0-doi-org.brum.beds.ac.uk/10.3390/a11120196 - 03 Dec 2018
Cited by 6 | Viewed by 4378
Abstract
Recently there has been an exponential growth of the number of publicly available legal resources. Portals allowing users to search legal documents, through keyword queries, are now widespread. However, legal documents are mainly stored and offered in different sources and formats that do [...] Read more.
Recently, there has been exponential growth in the number of publicly available legal resources. Portals that allow users to search legal documents through keyword queries are now widespread. However, legal documents are mainly stored and offered across different sources and formats that do not facilitate semantic, machine-readable techniques, thus making it difficult for legal stakeholders to acquire, modify or interlink legal knowledge. In this paper, we describe Solon, a legal document management platform. It offers advanced modelling, managing and mining functions over legal sources, so as to facilitate access to legal knowledge. It utilizes a novel method for extracting semantic representations of legal sources from unstructured formats, such as PDF and HTML text files, and interlinking and enhancing them with classification features. At the same time, by exploiting the structure and specific features of legal sources, it provides refined search results. Finally, it allows users to connect and explore legal resources according to their individual needs. To demonstrate the applicability and usefulness of our approach, Solon has been successfully deployed in a public-sector production environment, making Greek tax legislation easily accessible to the public. Opening up legislation in this way will help increase transparency and make governments more accountable to citizens. Full article
(This article belongs to the Special Issue Humanistic Data Mining: Tools and Applications)

27 pages, 494 KiB  
Article
Convex-Hull Algorithms: Implementation, Testing, and Experimentation
by Ask Neve Gamby and Jyrki Katajainen
Algorithms 2018, 11(12), 195; https://0-doi-org.brum.beds.ac.uk/10.3390/a11120195 - 28 Nov 2018
Cited by 18 | Viewed by 9346
Abstract
From a broad perspective, we study issues related to implementation, testing, and experimentation in the context of geometric algorithms. Our focus is on the effect of quality of implementation on experimental results. More concisely, we study algorithms that compute convex hulls for a [...] Read more.
From a broad perspective, we study issues related to implementation, testing, and experimentation in the context of geometric algorithms. Our focus is on the effect of the quality of implementation on experimental results. More specifically, we study algorithms that compute convex hulls for a multiset of points in the plane. We introduce several improvements to the implementations of the studied algorithms: plane-sweep, torch, quickhull, and throw-away. With a new set of space-efficient implementations, the experimental results, in the integer-arithmetic setting, are different from those of earlier studies. From this, we conclude that utmost care is needed when performing experiments and when trying to draw solid conclusions from them. Full article

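For readers unfamiliar with the algorithm family named above, a minimal plane-sweep-style hull (Andrew's monotone chain, shown here in Python purely for illustration; the paper's space-efficient implementations are not reproduced) looks like:

```python
def cross(o, a, b):
    """z-component of (a - o) x (b - o); > 0 means a left turn at o."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain: sort once, then build lower and upper chains."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def half(seq):
        chain = []
        for p in seq:
            # Drop points that make a non-left turn (collinear points removed too).
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
        return chain
    lower, upper = half(pts), half(reversed(pts))
    return lower[:-1] + upper[:-1]  # endpoints shared; counter-clockwise order

hull = convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)])
# hull contains the four corners; the interior point (1, 1) is removed
```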
32 pages, 7507 KiB  
Article
New and Efficient Algorithms for Producing Frequent Itemsets with the Map-Reduce Framework
by Yaron Gonen, Ehud Gudes and Kirill Kandalov
Algorithms 2018, 11(12), 194; https://0-doi-org.brum.beds.ac.uk/10.3390/a11120194 - 28 Nov 2018
Cited by 1 | Viewed by 3997
Abstract
The Map-Reduce (MR) framework has become a popular framework for developing new parallel algorithms for Big Data. Efficient algorithms for data mining of big data and distributed databases has become an important problem. In this paper we focus on algorithms producing association rules [...] Read more.
The Map-Reduce (MR) framework has become a popular framework for developing new parallel algorithms for Big Data. Efficient algorithms for mining big data and distributed databases have become an important problem. In this paper we focus on algorithms that produce association rules and frequent itemsets. After reviewing the most recent algorithms that perform this task within the MR framework, we present two new algorithms: one for producing closed frequent itemsets, and one for producing frequent itemsets when the database is updated and new data is added to the old database. Both algorithms include novel optimizations suited to the MR framework, as well as to other parallel architectures. A detailed experimental evaluation shows the effectiveness and advantages of the algorithms over existing methods when it comes to large distributed databases. Full article
(This article belongs to the Special Issue MapReduce for Big Data)

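The basic map/shuffle/reduce pattern behind frequent-itemset counting can be sketched in a few lines. The single-round, single-process simulation below illustrates the MR structure only, not the paper's closed-itemset or incremental algorithms; all names and the toy database are invented:

```python
from collections import defaultdict
from itertools import combinations

def mapper(transaction, k):
    """Map step: emit (itemset, 1) for every k-itemset in one transaction."""
    for itemset in combinations(sorted(set(transaction)), k):
        yield itemset, 1

def frequent_itemsets(transactions, k, min_support):
    """Simulate one MR round: map each transaction, group by key, reduce by sum."""
    counts = defaultdict(int)
    for t in transactions:            # map + shuffle
        for key, one in mapper(t, k):
            counts[key] += one        # reduce: sum the 1s per itemset
    return {s: c for s, c in counts.items() if c >= min_support}

db = [["a", "b", "c"], ["a", "b"], ["a", "c"], ["b", "c"], ["a", "b", "c"]]
pairs = frequent_itemsets(db, k=2, min_support=3)
# ("a", "b") appears in 3 of the 5 transactions
```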
15 pages, 1674 KiB  
Article
A Forecast Model of the Number of Containers for Containership Voyage
by Yuchuang Wang, Guoyou Shi and Xiaotong Sun
Algorithms 2018, 11(12), 193; https://0-doi-org.brum.beds.ac.uk/10.3390/a11120193 - 28 Nov 2018
Cited by 4 | Viewed by 3313
Abstract
Container ships must pass through multiple ports of call during a voyage. Therefore, forecasting container volume information at the port of origin followed by sending such information to subsequent ports is crucial for container terminal management and container stowage personnel. Numerous factors influence [...] Read more.
Container ships must pass through multiple ports of call during a voyage. Therefore, forecasting container volume information at the port of origin and sending it to subsequent ports is crucial for container terminal management and container stowage personnel. Numerous factors influence container allocation to container ships for a voyage, and the degree of influence varies, engendering a complex nonlinearity. Therefore, this paper proposes a model based on gray relational analysis (GRA) and a mixed-kernel support vector machine (SVM) for predicting container allocation to a container ship for a voyage. First, in this model, the weights of the influencing factors are determined through GRA. Then, the weighted factors serve as the input of the SVM model, and the SVM model parameters are optimized through a genetic algorithm. Numerical simulations revealed that the proposed model could effectively predict the number of containers for a container ship voyage and that it exhibited strong generalization ability and high accuracy. Accordingly, this model provides a new method for predicting container volume for a voyage. Full article
(This article belongs to the Special Issue Modeling Computing and Data Handling for Marine Transportation)

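As an illustration of the GRA weighting step described above (a textbook grey-relational-grade computation, not necessarily the paper's exact preprocessing; the example series and their names are invented):

```python
def gra_weights(reference, factors, rho=0.5):
    """Grey relational grades of candidate factor series against a reference
    series; a higher grade indicates a stronger influence. All series are
    min-max normalised first so their units do not matter."""
    def norm(s):
        lo, hi = min(s), max(s)
        return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in s]
    ref = norm(reference)
    deltas_all = [[abs(r - v) for r, v in zip(ref, norm(f))] for f in factors]
    d_min = min(min(d) for d in deltas_all)
    d_max = max(max(d) for d in deltas_all)
    grades = []
    for deltas in deltas_all:
        # Grey relational coefficient with distinguishing coefficient rho.
        coeffs = [(d_min + rho * d_max) / (d + rho * d_max) for d in deltas]
        grades.append(sum(coeffs) / len(coeffs))
    total = sum(grades)
    return [g / total for g in grades]  # normalised, usable as input weights

# Hypothetical example: container volume vs. two candidate influencing factors.
volume = [120, 135, 150, 160, 180]
cargo_demand = [60, 68, 75, 80, 90]   # moves with volume -> higher weight
random_factor = [5, 1, 9, 2, 7]       # unrelated series -> lower weight
w = gra_weights(volume, [cargo_demand, random_factor])
```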
15 pages, 11769 KiB  
Article
A Study on Faster R-CNN-Based Subway Pedestrian Detection with ACE Enhancement
by Hongquan Qu, Meihan Wang, Changnian Zhang and Yun Wei
Algorithms 2018, 11(12), 192; https://0-doi-org.brum.beds.ac.uk/10.3390/a11120192 - 26 Nov 2018
Cited by 6 | Viewed by 4563
Abstract
At present, the problem of pedestrian detection has attracted increasing attention in the field of computer vision. The faster regions with convolutional neural network features (Faster R-CNN) are regarded as one of the most important techniques for studying this problem. However, the detection [...] Read more.
At present, the problem of pedestrian detection has attracted increasing attention in the field of computer vision. Faster regions with convolutional neural network features (Faster R-CNN) is regarded as one of the most important techniques for studying this problem. However, the detection capability of a model trained with Faster R-CNN is susceptible to the diversity of pedestrians' appearance and the light intensity in specific scenarios, such as in a subway, which can lead to a decline in the recognition rate and an offset in target selection for pedestrians. In this paper, we propose a modified Faster R-CNN method with automatic color enhancement (ACE), which can improve sample contrast by calculating the relative light and dark relationships to correct the final pixel values. In addition, a calibration method based on sample-category reduction is presented to accurately locate the target for detection. Then, we apply the Faster R-CNN target detection framework to the experimental dataset. Finally, the effectiveness of this method is verified with actual data samples collected from subway passenger monitoring video. Full article
(This article belongs to the Special Issue Deep Learning for Image and Video Understanding)

15 pages, 718 KiB  
Article
MapReduce Algorithm for Location Recommendation by Using Area Skyline Query
by Chen Li, Annisa Annisa, Asif Zaman, Mahboob Qaosar, Saleh Ahmed and Yasuhiko Morimoto
Algorithms 2018, 11(12), 191; https://0-doi-org.brum.beds.ac.uk/10.3390/a11120191 - 25 Nov 2018
Cited by 5 | Viewed by 4029
Abstract
Location recommendation is essential for various map-based mobile applications. However, it is not easy to generate location-based recommendations with the changing contexts and locations of mobile users. Skyline operation is one of the most well-established techniques for location-based services. Our previous work proposed [...] Read more.
Location recommendation is essential for various map-based mobile applications. However, it is not easy to generate location-based recommendations with the changing contexts and locations of mobile users. Skyline operation is one of the most well-established techniques for location-based services. Our previous work proposed a new query method, called “area skyline query”, to select areas in a map. However, it is not efficient for large-scale data. In this paper, we propose a parallel algorithm for processing the area skyline using MapReduce. Intensive experiments on both synthetic and real data confirm that our proposed algorithm is sufficiently efficient for large-scale data. Full article
(This article belongs to the Special Issue Algorithms for Decision Making)

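A skyline query keeps exactly the points not dominated by any other point. The serial skyline operator that the paper parallelizes with MapReduce can be sketched as follows (naive nested-loop version for illustration only; the attribute names and data are invented):

```python
def dominates(a, b):
    """a dominates b if a is no worse in every attribute and strictly better
    in at least one (lower is better, e.g. distances to facilities)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def skyline(points):
    """Naive block nested-loop skyline: keep points dominated by no other."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical candidate areas as (distance_to_station, distance_to_school).
areas = [(1, 9), (3, 3), (9, 1), (4, 4), (8, 8)]
best = skyline(areas)
# (4, 4) and (8, 8) are both dominated by (3, 3) and drop out
```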
26 pages, 1953 KiB  
Article
Best Trade-Off Point Method for Efficient Resource Provisioning in Spark
by Peter P. Nghiem
Algorithms 2018, 11(12), 190; https://0-doi-org.brum.beds.ac.uk/10.3390/a11120190 - 22 Nov 2018
Cited by 1 | Viewed by 4578
Abstract
Considering the recent exponential growth in the amount of information processed in Big Data, the high energy consumed by data processing engines in datacenters has become a major issue, underlining the need for efficient resource allocation for more energy-efficient computing. We previously proposed [...] Read more.
Considering the recent exponential growth in the amount of information processed in Big Data, the high energy consumed by data processing engines in datacenters has become a major issue, underlining the need for efficient resource allocation for more energy-efficient computing. We previously proposed the Best Trade-off Point (BToP) method, which provides a general approach and techniques based on an algorithm with mathematical formulas to find the best trade-off point on an elbow curve of performance vs. resources for efficient resource provisioning in Hadoop MapReduce. The BToP method is expected to work for any application or system which relies on a trade-off elbow curve, non-inverted or inverted, for making good decisions. In this paper, we apply the BToP method to the emerging cluster computing framework, Apache Spark, and show that its performance and energy consumption are better than Spark with its built-in dynamic resource allocation enabled. Our Spark-Bench tests confirm the effectiveness of using the BToP method with Spark to determine the optimal number of executors for any workload in production environments where job profiling for behavioral replication will lead to the most efficient resource provisioning. Full article
(This article belongs to the Special Issue MapReduce for Big Data)

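The BToP method uses its own formulas to locate the best trade-off point on a performance-vs-resources elbow curve. A common, simpler heuristic for the same notion, maximum perpendicular distance from the chord joining the curve's endpoints, can be sketched as follows (example data invented; this is not the paper's algorithm):

```python
import math

def elbow_point(xs, ys):
    """Return the index of the point farthest from the line joining the curve's
    endpoints: a simple stand-in for locating the knee of an elbow curve."""
    x0, y0, x1, y1 = xs[0], ys[0], xs[-1], ys[-1]
    dx, dy = x1 - x0, y1 - y0
    length = math.hypot(dx, dy)
    best_i, best_d = 0, -1.0
    for i, (x, y) in enumerate(zip(xs, ys)):
        # Perpendicular distance from (x, y) to the chord.
        d = abs(dy * (x - x0) - dx * (y - y0)) / length
        if d > best_d:
            best_i, best_d = i, d
    return best_i

# Hypothetical runtime vs. number of executors: large gains early, flat later.
executors = [1, 2, 4, 8, 16, 32]
runtime = [100, 52, 30, 22, 20, 19]
knee = executors[elbow_point(executors, runtime)]
# the knee lands at 4 executors, past which extra resources barely help
```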