Algorithms, Volume 12, Issue 7 (July 2019) – 21 articles

Cover Story: In this study, we present Optimus, which implements a self-adaptive differential evolution algorithm with an ensemble of mutation strategies (jEDE), for Grasshopper algorithmic modeling in the Rhinoceros CAD software. We conducted an experiment using standard test problems, some of the test problems proposed in IEEE CEC 2005, and one design optimization problem. Experimental results show that Optimus (jEDE) outperforms Galapagos (genetic algorithm), SilverEye (particle swarm optimization), and Opossum (RbfOpt), finding better results for 20 out of 21 problems. The main conclusion is that near-optimal solutions of architectural design problems can be improved by testing different types of algorithms, in line with the no free lunch theorem. The target audience of this paper is frequent users of parametric design modeling, e.g., architects, engineers, and designers.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
18 pages, 1541 KiB  
Article
Hybrid MU-MIMO Precoding Based on K-Means User Clustering
by Razvan-Florentin Trifan, Andrei-Alexandru Enescu and Constantin Paleologu
Algorithms 2019, 12(7), 146; https://0-doi-org.brum.beds.ac.uk/10.3390/a12070146 - 23 Jul 2019
Cited by 3 | Viewed by 3648
Abstract
Multi-User (MU) Multiple-Input-Multiple-Output (MIMO) systems have been extensively investigated over the last few years from both theoretical and practical perspectives. Low-complexity Linear Precoding (LP) schemes for MU-MIMO are already deployed in Long-Term Evolution (LTE) networks; however, they do not work well for users with strongly correlated channels. Alternatives to these schemes, such as Non-Linear Precoding (NLP) and hybrid precoding schemes, were proposed in the standardization phase of the Third-Generation Partnership Project (3GPP) 5G New Radio (NR). NLP schemes have better performance, but their complexity is prohibitively high. Hybrid schemes, which combine LP schemes to serve users with separable channels and NLP schemes for users with strongly correlated channels, can reduce the computational burden while limiting the performance degradation. Finding the optimum set of users that can be co-scheduled through LP schemes could require an exhaustive search and, thus, may not be affordable for practical systems. The purpose of this paper is to present a new semi-orthogonal user selection algorithm based on statistical K-means clustering and to assess its performance in MU-MIMO systems employing hybrid precoding schemes. Full article
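As a companion to the abstract above, the following Python sketch shows how K-means can group users by channel direction, the general idea behind K-means-based user grouping. It is a minimal illustration under invented data (random channels, four clusters, real/imaginary feature stacking), not the authors' semi-orthogonal selection algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_users, n_tx = 24, 8
# Mock Rayleigh channels: one row per user, one column per transmit antenna
H = rng.standard_normal((n_users, n_tx)) + 1j * rng.standard_normal((n_users, n_tx))

# Cluster users by channel direction: normalise each channel vector and stack
# real/imaginary parts so Euclidean distance reflects angular similarity.
directions = H / np.linalg.norm(H, axis=1, keepdims=True)
features = np.hstack([directions.real, directions.imag])
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)

# Users in different clusters have roughly separable channels (candidates for
# co-scheduling under linear precoding); users sharing a cluster are strongly
# correlated and would be handled by the non-linear branch of a hybrid scheme.
for k in range(4):
    print(f"cluster {k}: users {np.where(labels == k)[0].tolist()}")
```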
18 pages, 8683 KiB  
Article
A Study on Sensitive Bands of EEG Data under Different Mental Workloads
by Hongquan Qu, Zhanli Fan, Shuqin Cao, Liping Pang, Hao Wang and Jie Zhang
Algorithms 2019, 12(7), 145; https://0-doi-org.brum.beds.ac.uk/10.3390/a12070145 - 22 Jul 2019
Cited by 5 | Viewed by 3937
Abstract
Electroencephalogram (EEG) signals contain a great deal of information about human performance. With the development of brain–computer interface (BCI) technology, many researchers have applied feature extraction and classification algorithms from various fields to EEG signals. In this paper, the sensitive bands of EEG data under different mental workloads are studied: by selecting features of the EEG signals, the band with the highest sensitivity to mental workload is identified. EEG signals were measured in flight experiments under different workload levels. First, the EEG signals are preprocessed by independent component analysis (ICA) to remove the interference of electrooculogram (EOG) signals; then, the power spectral density and energy are calculated for feature extraction. Finally, feature importance is ranked based on Gini impurity. The classification accuracy of a support vector machine (SVM) classifier is verified by comparing the features of the full band with those of the β band. The results show that the features of the β band are the most sensitive in EEG data under different mental workloads. Full article
(This article belongs to the Special Issue The Second Symposium on Machine Intelligence and Data Analytics)
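For orientation, here is a minimal Python sketch of the last two stages described above: β-band power extraction via Welch's power spectral density, followed by SVM classification. The epochs, labels, sampling rate, and band edges are mock values; the study's ICA/EOG preprocessing and Gini-impurity feature ranking are omitted.

```python
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
fs = 256                                     # assumed sampling rate (Hz)
epochs = rng.standard_normal((120, fs * 4))  # 120 mock 4-second EEG epochs
y = rng.integers(0, 2, size=120)             # mock low/high workload labels

def band_power(epoch, fs, band=(13.0, 30.0)):
    """Mean power spectral density inside a frequency band (default: beta)."""
    f, pxx = welch(epoch, fs=fs, nperseg=fs)
    mask = (f >= band[0]) & (f <= band[1])
    return pxx[mask].mean()

X = np.array([[band_power(e, fs)] for e in epochs])
print("CV accuracy:", cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean())
```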
30 pages, 7033 KiB  
Article
Simulation Tool for Tuning and Performance Analysis of Robust, Tracking, Disturbance Rejection and Aggressiveness Controller
by Veeramani Bagyaveereswaran, Subramaniam Umashankar and Pachiyappan Arulmozhivarman
Algorithms 2019, 12(7), 144; https://0-doi-org.brum.beds.ac.uk/10.3390/a12070144 - 20 Jul 2019
Cited by 1 | Viewed by 4099
Abstract
The RTD-A (robust, tracking, disturbance rejection and aggressiveness) controller is a novel control scheme that substitutes the classical proportional integral derivative (PID) controller. This controller's performance depends on four tuning parameters (θR, θT, θD and θA). The tuning of the RTD-A controller is more transparent than that of classic PID controllers, and its tuning parameter values lie between zero and one. A tool for designing optimal parameters for this controller and evaluating its performance on a given system is therefore valuable for researchers. In this paper, a new simulation tool for the RTD-A control scheme is presented. It comprises four graphical user interface tools, and the working of each is explained in detail. To demonstrate the proposed tool, two examples, a liquid level control application and an air pressure control application, are presented. The performance of the RTD-A controller is compared with that of a PID controller. The RTD-A controllers are tuned using optimization algorithms, and their performance is observed and analyzed in both cases under deterministic and uncertain conditions. Full article
18 pages, 676 KiB  
Article
Bi-Level Multi-Objective Production Planning Problem with Multi-Choice Parameters: A Fuzzy Goal Programming Algorithm
by Murshid Kamal, Srikant Gupta, Prasenjit Chatterjee, Dragan Pamucar and Zeljko Stevic
Algorithms 2019, 12(7), 143; https://0-doi-org.brum.beds.ac.uk/10.3390/a12070143 - 19 Jul 2019
Cited by 14 | Viewed by 3790
Abstract
This paper deals with the modeling and optimization of a bi-level multi-objective production planning problem in which some of the coefficients of the objective functions and parameters of the constraints are multi-choice. A general transformation technique based on binary variables is used to transform the multi-choice parameters of the problem into their equivalent deterministic form. Two different scalarization techniques are then used to achieve the maximum degree of each membership goal by minimizing the deviational variables, yielding the most satisfactory solution of the formulated problem. An illustrative real case study of production planning is discussed and compared to validate the efficiency and usefulness of the proposed work. Full article
(This article belongs to the Special Issue Algorithms for Multi-Criteria Decision-Making)
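For readers unfamiliar with the transformation mentioned in the abstract, one standard binary-variable formulation of a multi-choice parameter is sketched below; this is a generic textbook variant, not necessarily the exact technique used in the paper.

```latex
% A multi-choice coefficient c may take one of the values c^{(1)}, c^{(2)}, c^{(3)}.
% Binary variables y_k select exactly one of them:
c \;=\; \sum_{k=1}^{3} c^{(k)} y_k,
\qquad \sum_{k=1}^{3} y_k = 1,
\qquad y_k \in \{0,1\}, \; k = 1, 2, 3.
```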
15 pages, 2232 KiB  
Article
New Bipartite Graph Techniques for Irregular Data Redistribution Scheduling
by Qinghai Li and Chang Wu Yu
Algorithms 2019, 12(7), 142; https://0-doi-org.brum.beds.ac.uk/10.3390/a12070142 - 16 Jul 2019
Viewed by 3750
Abstract
For many parallel and distributed systems, automatic data redistribution improves data locality and increases system performance for various computer problems and applications. In general, an array can be distributed to multiple processing systems by using regular or irregular distributions. Regular distributions adopt BLOCK, CYCLIC, or BLOCK-CYCLIC patterns to specify data array decomposition and distribution. On the other hand, irregular distributions specify a different-size data array distribution according to user-defined commands or procedures. In this work, we propose three bipartite graph problems, the "maximum edge coloring problem", the "maximum degree edge coloring problem", and the "cost-sharing maximum edge coloring problem", to formulate these kinds of distribution problems. We then propose an approximation algorithm with a ratio bound of two for the maximum edge coloring problem when the input graph is biplanar. Moreover, we prove that the "cost-sharing maximum edge coloring problem" is NP-complete even when the input graph is biplanar. Full article
27 pages, 10891 KiB  
Article
OPTIMUS: Self-Adaptive Differential Evolution with Ensemble of Mutation Strategies for Grasshopper Algorithmic Modeling
by Cemre Cubukcuoglu, Berk Ekici, Mehmet Fatih Tasgetiren and Sevil Sariyildiz
Algorithms 2019, 12(7), 141; https://0-doi-org.brum.beds.ac.uk/10.3390/a12070141 - 12 Jul 2019
Cited by 30 | Viewed by 6845
Abstract
Most architectural design problems are basically real-parameter optimization problems, so any type of evolutionary or swarm algorithm can be used in this field. However, little attention has been paid to using optimization methods within computer-aided design (CAD) programs. In this paper, we present Optimus, a new optimization tool for Grasshopper algorithmic modeling in the Rhinoceros CAD software. Optimus implements a self-adaptive differential evolution algorithm with an ensemble of mutation strategies (jEDE). We conducted an experiment using standard test problems from the literature and some of the test problems proposed in IEEE CEC 2005, reporting the minimum, maximum, average, standard deviation, and number of function evaluations over five replications for each function. Experimental results on the benchmark suite show that Optimus (jEDE) outperforms the other optimization tools, namely Galapagos (genetic algorithm), SilverEye (particle swarm optimization), and Opossum (RbfOpt), by finding better results for 19 out of 20 problems; for only one function, Galapagos presented a slightly better result than Optimus. Finally, we present an architectural design problem and compare the tools to test Optimus in the design domain, reporting the minimum, maximum, average, and number of function evaluations of one replication for each tool. Galapagos and SilverEye produced infeasible results, whereas Optimus and Opossum found feasible solutions; however, Optimus discovered a much better fitness result than Opossum. In conclusion, we discuss the advantages and limitations of Optimus in comparison to the other tools. The target audience of this paper is frequent users of parametric design modeling, e.g., architects, engineers, and designers. The main contributions are as follows: Optimus showed that near-optimal solutions of architectural design problems can be improved by testing different types of algorithms, in line with the no free lunch theorem; moreover, Optimus facilitates implementing different types of algorithms thanks to its modular system. Full article
(This article belongs to the Special Issue Metaheuristic Algorithms in Optimization and Applications (volume 2))
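The sketch below illustrates, in Python, the two ingredients named in the abstract: jDE-style self-adaptation of F and CR, and an ensemble of mutation strategies chosen at random per trial vector. It is a toy version under invented settings (two strategies, sphere function), not the Optimus/jEDE implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

def sphere(x):
    return float(np.sum(x * x))

def de_ensemble(f, dim=10, pop=30, iters=300, lo=-5.0, hi=5.0):
    """Toy self-adaptive DE with an ensemble of two mutation strategies."""
    X = rng.uniform(lo, hi, (pop, dim))
    fit = np.array([f(x) for x in X])
    F, CR = np.full(pop, 0.5), np.full(pop, 0.9)
    for _ in range(iters):
        for i in range(pop):
            # jDE-style parameter self-adaptation (reset with probability 0.1)
            Fi = rng.uniform(0.1, 1.0) if rng.random() < 0.1 else F[i]
            CRi = rng.random() if rng.random() < 0.1 else CR[i]
            idx = [j for j in range(pop) if j != i]
            a, b, c = rng.choice(idx, 3, replace=False)
            best = X[np.argmin(fit)]
            # Ensemble: pick one of two classic mutation strategies at random
            if rng.random() < 0.5:
                v = X[a] + Fi * (X[b] - X[c])                       # DE/rand/1
            else:
                v = X[i] + Fi * (best - X[i]) + Fi * (X[a] - X[b])  # DE/current-to-best/1
            mask = rng.random(dim) < CRi
            mask[rng.integers(dim)] = True                          # binomial crossover
            u = np.clip(np.where(mask, v, X[i]), lo, hi)
            fu = f(u)
            if fu <= fit[i]:              # greedy selection keeps F/CR on success
                X[i], fit[i], F[i], CR[i] = u, fu, Fi, CRi
    return X[np.argmin(fit)], float(fit.min())

print(de_ensemble(sphere)[1])
```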
14 pages, 7745 KiB  
Article
Projected Augmented Reality Intelligent Model of a City Area with Path Optimization
by Mateus Mendes, Jorge Almeida, Hajji Mohamed and Rudi Giot
Algorithms 2019, 12(7), 140; https://0-doi-org.brum.beds.ac.uk/10.3390/a12070140 - 12 Jul 2019
Cited by 8 | Viewed by 6718
Abstract
Augmented Reality is increasingly used for enhancing user experiences in different tasks. The present paper describes a model combining augmented reality and artificial intelligence algorithms in a 3D model of an area of the city of Coimbra, based on information extracted from OpenStreetMap. The augmented reality effect is achieved using a video projection over a 3D-printed map. Users can interact with the model using a smartphone or similar device and simulate itineraries, which are optimized using a genetic algorithm and A*. Among other applications, the model can be used by tourists or travelers to simulate trips realistically, as well as for virtual reconstructions of historical places or remote areas. Full article
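As a pointer to the path-optimization component mentioned above, here is a textbook A* in Python over a toy street graph; node names, edge costs, and the zero heuristic are invented for illustration, and the genetic-algorithm layer is not shown.

```python
import heapq

def a_star(graph, start, goal, h):
    """A* over a dict graph {node: [(neighbor, cost), ...]} with heuristic h."""
    frontier = [(h(start), 0.0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0.0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nxt, w in graph.get(node, []):
            g2 = g + w
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float("inf")

# Tiny mock street graph (names and costs are made up)
graph = {"A": [("B", 1.0), ("C", 4.0)],
         "B": [("C", 1.5), ("D", 5.0)],
         "C": [("D", 1.0)],
         "D": []}
print(a_star(graph, "A", "D", h=lambda n: 0.0))    # zero heuristic = Dijkstra
```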
15 pages, 372 KiB  
Article
A Credit Rating Model in a Fuzzy Inference System Environment
by Amir Karbassi Yazdi, Thomas Hanne, Yong J. Wang and Hui-Ming Wee
Algorithms 2019, 12(7), 139; https://0-doi-org.brum.beds.ac.uk/10.3390/a12070139 - 09 Jul 2019
Cited by 5 | Viewed by 3126
Abstract
One of the most important functions of an export credit agency (ECA) is to act as an intermediary between national governments and exporters. These organizations provide financing to reduce the political and commercial risks in international trade. The agents assess buyers based on financial and non-financial indicators to determine whether it is advisable to grant them credit. Because many of these indicators are qualitative and inherently linguistically ambiguous, the agents must make decisions in uncertain environments. Therefore, to make the most accurate decision possible, they often utilize fuzzy inference systems. The purpose of this research was to design a credit rating model for such an uncertain environment using a fuzzy inference system (FIS). We drew suitable rating variables from previous studies and screened them via the Delphi method. Finally, we created a credit rating model from these variables and an FIS with the related IF-THEN rules, which can be applied in a practical setting. Full article
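To make the FIS idea concrete, the Python sketch below hand-rolls a two-rule Mamdani-style inference with triangular memberships and centroid defuzzification. The input variables, membership functions, and IF-THEN rules are invented placeholders, not the paper's Delphi-screened indicators.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with corners a, b, c."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

def credit_rating(financial, history):
    """Two inputs in [0, 10] -> crisp rating in [0, 100]."""
    z = np.linspace(0.0, 100.0, 501)
    # Rule 1: IF financial is strong AND history is good THEN rating is high
    w1 = min(tri(financial, 5, 10, 15), tri(history, 5, 10, 15))
    # Rule 2: IF financial is weak OR history is poor THEN rating is low
    w2 = max(tri(financial, -5, 0, 5), tri(history, -5, 0, 5))
    # Aggregate the clipped output sets and defuzzify by centroid
    agg = np.maximum(np.minimum(w1, tri(z, 50, 100, 150)),
                     np.minimum(w2, tri(z, -50, 0, 50)))
    return float((z * agg).sum() / (agg.sum() + 1e-12))

print(credit_rating(financial=8.0, history=7.0))   # strong applicant -> high rating
```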
22 pages, 1355 KiB  
Article
A Quantum-Behaved Neurodynamic Approach for Nonconvex Optimization with Constraints
by Zheng Ji, Xu Cai and Xuyang Lou
Algorithms 2019, 12(7), 138; https://0-doi-org.brum.beds.ac.uk/10.3390/a12070138 - 05 Jul 2019
Viewed by 3145
Abstract
This paper presents a quantum-behaved neurodynamic swarm optimization approach for solving nonconvex optimization problems with inequality constraints. Firstly, the general constrained optimization problem is addressed, and a high-performance feedback neural network for solving convex nonlinear programming problems is introduced; the convergence of the proposed neural network is also proved. Then, combined with the quantum-behaved particle swarm method, a quantum-behaved neurodynamic swarm optimization (QNSO) approach is presented. Finally, the performance of the proposed QNSO algorithm is evaluated through two function tests and three applications involving a hollow transmission shaft, heat exchangers, and a crank–rocker mechanism. Numerical simulations are also provided to verify the advantages of our method. Full article
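For reference, the standard quantum-behaved particle swarm update that QNSO builds on is sketched below in Python (local attractor, mean-best position, logarithmic sampling). The feedback neural network component of the paper's approach is not included, and all settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def qpso(f, dim=5, pop=20, iters=200, lo=-10.0, hi=10.0, beta=0.75):
    """Standard quantum-behaved PSO (QPSO) with mean-best attraction."""
    X = rng.uniform(lo, hi, (pop, dim))
    P = X.copy()                              # personal best positions
    pf = np.array([f(x) for x in X])          # personal best fitness
    for _ in range(iters):
        g = P[np.argmin(pf)]                  # global best
        mbest = P.mean(axis=0)                # mean of personal bests
        phi = rng.random((pop, dim))
        attractor = phi * P + (1 - phi) * g   # per-particle local attractor
        u = rng.random((pop, dim))
        sign = np.where(rng.random((pop, dim)) < 0.5, -1.0, 1.0)
        X = np.clip(attractor + sign * beta * np.abs(mbest - X) * np.log(1.0 / u),
                    lo, hi)
        fx = np.array([f(x) for x in X])
        better = fx < pf
        P[better], pf[better] = X[better], fx[better]
    return P[np.argmin(pf)], float(pf.min())

print(qpso(lambda x: float(np.sum(x ** 2)))[1])
```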
11 pages, 1370 KiB  
Article
Money Neutrality, Monetary Aggregates and Machine Learning
by Periklis Gogas, Theophilos Papadimitriou and Emmanouil Sofianos
Algorithms 2019, 12(7), 137; https://0-doi-org.brum.beds.ac.uk/10.3390/a12070137 - 05 Jul 2019
Cited by 7 | Viewed by 4291
Abstract
The issue of whether or not money affects real economic activity (money neutrality) has attracted significant empirical attention over the last five decades. If money is neutral even in the short run, then monetary policy is ineffective and its role limited. If money matters, it will be able to forecast real economic activity. In this study, we test the traditional simple-sum monetary aggregates that are commonly used by central banks all over the world, as well as the theoretically correct Divisia monetary aggregates proposed by the Barnett critique (Chrystal and MacDonald, 1994; Belongia and Ireland, 2014), both at three levels of aggregation: M1, M2, and M3. We use them to directionally forecast the Eurocoin index, a monthly index that measures the growth rate of euro area GDP. The data span from January 2001 to June 2018. The forecasting methodology we employ is support vector machines (SVM) from the area of machine learning. The empirical results show that (a) the Divisia monetary aggregates outperform the simple-sum ones and (b) both monetary aggregates can directionally forecast the Eurocoin index, reaching a highest accuracy of 82.05% and providing evidence against money neutrality even in the short term. Full article
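A minimal Python sketch of directional forecasting with an SVM is given below: lagged growth rates as features, the sign of the next period's activity index as the target, and a chronological train/test split. The series here are synthetic stand-ins, not the monetary aggregates or the Eurocoin index.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(4)
T, lags = 210, 3
money = rng.standard_normal(T)                         # mock aggregate growth
activity = 0.4 * money + 0.9 * rng.standard_normal(T)  # mock activity index

# Features: the three most recent money-growth values; target: the direction
# (sign) of the activity index in the following period.
X = np.column_stack([money[i:T - lags + i] for i in range(lags)])
y = (activity[lags:] > 0).astype(int)

split = int(0.8 * len(y))                              # chronological split
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X[:split], y[:split])
print("directional accuracy:", clf.score(X[split:], y[split:]))
```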
2 pages, 154 KiB  
Editorial
Editorial: Special Issue on Efficient Data Structures
by Jesper Jansson
Algorithms 2019, 12(7), 136; https://0-doi-org.brum.beds.ac.uk/10.3390/a12070136 - 05 Jul 2019
Viewed by 2762
Abstract
This Special Issue of Algorithms is focused on the design, formal analysis, implementation, and experimental evaluation of efficient data structures for various computational problems. Full article
(This article belongs to the Special Issue Efficient Data Structures)
12 pages, 1934 KiB  
Article
Breast Microcalcification Detection Algorithm Based on Contourlet and ASVM
by Sheng Cai, Pei-Zhong Liu, Yan-Min Luo, Yong-Zhao Du and Jia-Neng Tang
Algorithms 2019, 12(7), 135; https://0-doi-org.brum.beds.ac.uk/10.3390/a12070135 - 30 Jun 2019
Cited by 3 | Viewed by 6169
Abstract
Microcalcification is the most important landmark information for early breast cancer. At present, morphological observation by human experts is the main method for clinical diagnosis of such diseases, but it easily leads to misdiagnosis and missed diagnosis. The present study proposes an algorithm for detecting microcalcifications on mammograms for early breast cancer. Firstly, the contrast characteristics of mammograms are enhanced by Contourlet transformation and morphology (CTM). Secondly, the region of interest (ROI) is segmented by an improved K-means algorithm. Thirdly, grayscale features, shape features, and the Histogram of Oriented Gradients (HOG) are calculated for the ROI. An adaptive support vector machine (ASVM) is then used to separate true calcification points from false ones. Under the guidance of a professional doctor, 280 normal images and 120 calcification images were selected for experimentation, of which 210 normal images and 90 calcification images were used for training and the remaining 100 were used to test the algorithm. The accuracy of the automatic classification by the ASVM algorithm reaches 94%, and the experimental results are superior to those of similar algorithms. The algorithm overcomes various difficulties in microcalcification detection and has great clinical application value. Full article
(This article belongs to the Special Issue Algorithms for Computer-Aided Design)
21 pages, 1606 KiB  
Article
An Enhanced Lightning Attachment Procedure Optimization Algorithm
by Yanjiao Wang and Xintian Jiang
Algorithms 2019, 12(7), 134; https://0-doi-org.brum.beds.ac.uk/10.3390/a12070134 - 29 Jun 2019
Cited by 7 | Viewed by 3115
Abstract
To overcome the shortcomings of the lightning attachment procedure optimization (LAPO) algorithm, such as premature convergence and slow convergence speed, an enhanced lightning attachment procedure optimization (ELAPO) algorithm is proposed in this paper. In the downward leader movement, the idea of differential evolution is introduced to speed up population convergence; in the upward leader movement, by superimposing vectors pointing to the average individual, the individual updating mode is modified to change the direction of individual evolution, avoid falling into local optima, and carry out a finer local information search; in the performance enhancement stage, opposition-based learning (OBL) is used to replace the worst individuals, improve the convergence rate of the population, and increase the global exploration capability. Finally, 16 typical benchmark functions from CEC2005 are used to carry out simulation experiments with the LAPO algorithm, four improved algorithms, and ELAPO. Experimental results show that ELAPO obtains better convergence speed and optimization accuracy. Full article
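The opposition-based learning step named in the abstract is simple enough to show directly; the Python sketch below reflects each point across the centre of the search box and keeps the better of each point/opposite pair, under an invented objective.

```python
import numpy as np

rng = np.random.default_rng(5)

def opposite(X, lo, hi):
    """Opposition-based learning: x_opp = lo + hi - x (per coordinate)."""
    return lo + hi - X

def f(X):
    return np.sum(X ** 2, axis=1)          # mock objective (minimisation)

lo, hi = -5.0, 5.0
X = rng.uniform(lo, hi, (8, 3))            # mock population
X_opp = opposite(X, lo, hi)

# Elite replacement: keep whichever of each point/opposite pair is better
keep = f(X) <= f(X_opp)
X_new = np.where(keep[:, None], X, X_opp)
print("best fitness before/after:", f(X).min(), f(X_new).min())
```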
11 pages, 2073 KiB  
Article
A New Method for Markovian Adaptation of the Non-Markovian Queueing System Using the Hidden Markov Model
by Ilija Tanackov, Olegas Prentkovskis, Žarko Jevtić, Gordan Stojić and Pamela Ercegovac
Algorithms 2019, 12(7), 133; https://0-doi-org.brum.beds.ac.uk/10.3390/a12070133 - 28 Jun 2019
Cited by 5 | Viewed by 4046
Abstract
This manuscript starts with a detailed analysis of the current solution for the queueing system M/Er/1/∞. In the existing solution, Erlang's service is caused by a Poisson arrival process of groups, rather than of individual clients; the service of individual clients is still exponentially distributed, contrary to the declaration in Kendall's notation. From the related theory of the Hidden Markov Model (HMM), the idea of "hidden Markov states" (HMS) was adopted for the advancement of queueing theory. In this paper, the basic principles of the application of HMS are first established. The abstract HMS states have a catalytic role in the standard procedure of solving non-Markovian queueing systems. The proposed solution based on HMS overcomes the problem of accessing identical client groups in the current solution of the M/Er/r queueing system. A detailed procedure for the new solution of the queueing system M/Er/1/∞ is implemented. Additionally, a new solution to the queueing system M/N/1/∞ with a normal service time N(μ, σ) based on HMS is also implemented. Full article
(This article belongs to the Special Issue Algorithms for Multi-Criteria Decision-Making)
13 pages, 1242 KiB  
Article
Drum Water Level Control Based on Improved ADRC
by Cuiping Pu, Yicheng Zhu and Jianbo Su
Algorithms 2019, 12(7), 132; https://0-doi-org.brum.beds.ac.uk/10.3390/a12070132 - 28 Jun 2019
Cited by 8 | Viewed by 4051
Abstract
Drum water level systems exhibit strong disturbances, large inertia, large time delays, and nonlinear characteristics. In order to improve the anti-disturbance performance and robustness of the traditional active disturbance rejection controller (ADRC), an improved linear active disturbance rejection controller (ILADRC) for the drum water level is designed. On the basis of the linear active disturbance rejection controller (LADRC) structure, an identical linear extended state observer (ESO) is added with the same parameters as the original one. The estimation error of the total disturbance is thereby obtained and compensated, which improves the control system's ability to suppress unknown disturbances and thus its anti-disturbance performance and robustness. The anti-disturbance performance and robustness of LADRC and ILADRC for the drum water level are simulated and analyzed under the influence of external disturbances and model parameter variations. Results show that the proposed ILADRC control system has a shorter settling time, smaller overshoot, and strong anti-interference ability and robustness. It performs better than LADRC and has practical value in engineering applications. Full article
(This article belongs to the Special Issue The Second Symposium on Machine Intelligence and Data Analytics)
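For background, here is a minimal discrete-time linear extended state observer of the kind LADRC builds on, using the standard bandwidth parameterization. The plant, gains, and step size are invented, and the ILADRC's second identical ESO with estimation-error compensation is not reproduced.

```python
import numpy as np

def leso_step(z, y, u, b0, omega_o, dt):
    """One Euler step of a linear ESO for a second-order plant.
    z1 ~ output, z2 ~ its derivative, z3 ~ total disturbance estimate.
    All observer poles placed at -omega_o (bandwidth parameterization)."""
    l1, l2, l3 = 3 * omega_o, 3 * omega_o ** 2, omega_o ** 3
    e = y - z[0]
    dz = np.array([z[1] + l1 * e,
                   z[2] + b0 * u + l2 * e,
                   l3 * e])
    return z + dt * dz

# Mock plant y'' = -y + b0*u + d with an unknown step disturbance d
dt, b0, omega_o = 0.001, 1.0, 60.0
x, z, u = np.zeros(2), np.zeros(3), 0.0
for k in range(5000):
    d = 1.0 if k > 2500 else 0.0            # disturbance steps up at t = 2.5 s
    x = x + dt * np.array([x[1], -x[0] + b0 * u + d])
    z = leso_step(z, x[0], u, b0, omega_o, dt)
print("estimated vs true total disturbance:", z[2], -x[0] + d)
```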
18 pages, 751 KiB  
Article
Aiding Dictionary Learning Through Multi-Parametric Sparse Representation
by Florin Stoican and Paul Irofti
Algorithms 2019, 12(7), 131; https://0-doi-org.brum.beds.ac.uk/10.3390/a12070131 - 28 Jun 2019
Cited by 4 | Viewed by 3600
Abstract
The ℓ1 relaxations of the sparse and cosparse representation problems which appear in the dictionary learning procedure are usually solved repeatedly (varying only the parameter vector), thus making them well-suited to a multi-parametric interpretation. The associated constrained optimization problems differ only through an affine term from one iteration to the next (i.e., the problem's structure remains the same while only the current vector, which is to be (co)sparsely represented, changes). We exploit this fact by providing an explicit piecewise affine representation of the solution, with polyhedral support. Consequently, at runtime, the optimal solution (the (co)sparse representation) is obtained through a simple enumeration over the non-overlapping regions of the polyhedral partition and the application of an affine law. We show that, for a suitably large number of parameter instances, the explicit approach outperforms the classical implementation. Full article
(This article belongs to the Special Issue Dictionary Learning Algorithms and Applications)
17 pages, 5674 KiB  
Article
A Novel Consistent Quality Driven for JEM Based Distributed Video Coding
by Dinh Trieu Duong, Huy Phi Cong and Xiem Hoang Van
Algorithms 2019, 12(7), 130; https://0-doi-org.brum.beds.ac.uk/10.3390/a12070130 - 28 Jun 2019
Cited by 6 | Viewed by 3205
Abstract
Distributed video coding (DVC) is an attractive and promising solution for low-complexity constrained video applications, such as wireless sensor networks or wireless surveillance systems. In DVC, visual quality consistency is one of the most important issues in evaluating the performance of a DVC codec; however, the quality of the decoded frames achieved by most recent DVC codecs is not consistent and varies with high fluctuation. In this paper, we propose a novel DVC solution named Joint exploration model based DVC (JEM-DVC), which provides not only higher performance than traditional DVC solutions, but also an effective scheme for quality consistency control. We first employ several advanced techniques provided in the Joint exploration model (JEM) of the future video coding standard (FVC) to effectively improve the performance of the JEM-DVC codec. Subsequently, for consistent quality control, we propose two novel methods, named key frame quantization (KF-Q) and Wyner-Ziv frame quantization (WZF-Q), which determine the optimal values of the quantization parameter (QP) and quantization matrix (QM) applied to key frame and WZ frame coding, respectively. The optimal values of QP and QM are adaptively controlled and updated for every key and WZ frame to guarantee consistent video quality, unlike the conventional approaches. Our proposed JEM-DVC is the first DVC codec in the literature that employs the JEM coding technique, so all of the results presented in this paper are new. The experimental results show that the proposed JEM-DVC significantly outperforms the relevant DVC benchmarks, notably the DISCOVER DVC and the recent H.265/HEVC-based DVC, in terms of both Peak signal-to-noise ratio (PSNR) performance and consistent visual quality. Full article
19 pages, 1682 KiB  
Article
A Hyper Heuristic Algorithm to Solve the Low-Carbon Location Routing Problem
by Chunmiao Zhang, Yanwei Zhao and Longlong Leng
Algorithms 2019, 12(7), 129; https://0-doi-org.brum.beds.ac.uk/10.3390/a12070129 - 27 Jun 2019
Cited by 9 | Viewed by 3302
Abstract
This paper proposes a low-carbon location routing problem (LCLRP) model with simultaneous delivery and pickup, time windows, and heterogeneous fleets to reduce logistics costs and carbon emissions and improve customer satisfaction. The correctness of the model is tested on a simple example using CPLEX (optimization software for mathematical programming). To solve the problem, a hyper-heuristic algorithm is designed based on a secondary exponential smoothing strategy and an adaptive acceptance mechanism; the algorithm achieves fast convergence and is highly robust. A case study analyzes the impact of depot distribution and cost, heterogeneous fleets (HF), and customer distribution and time windows on logistics costs, carbon emissions, and customer satisfaction. The experimental results show that the proposed model can reduce logistics costs by 1.72%, carbon emissions by 11.23%, and vehicle travel distance by 9.69%, demonstrating that the proposed model has guiding significance for reducing logistics costs. Full article
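As a loose illustration of the selection strategy named above, the Python sketch below scores low-level heuristics with Brown's double (second-order) exponential smoothing and picks one ε-greedily. Heuristic names, rewards, and constants are mock values, and the paper's adaptive acceptance mechanism is not modeled.

```python
import random

random.seed(6)
heuristics = ("shift", "swap", "2opt")      # mock low-level heuristic names
alpha = 0.3                                 # smoothing constant (illustrative)
S1 = {h: 0.0 for h in heuristics}           # first-order smoother per heuristic
S2 = {h: 0.0 for h in heuristics}           # second-order smoother per heuristic

for step in range(200):
    # Trend-adjusted score a_h = 2*S1_h - S2_h (Brown's double smoothing)
    scores = {h: 2 * S1[h] - S2[h] for h in heuristics}
    if random.random() < 0.2:               # epsilon-greedy exploration
        h = random.choice(heuristics)
    else:
        h = max(scores, key=scores.get)
    reward = random.random()                # mock improvement produced by h
    S1[h] = alpha * reward + (1 - alpha) * S1[h]
    S2[h] = alpha * S1[h] + (1 - alpha) * S2[h]

print({h: round(2 * S1[h] - S2[h], 3) for h in heuristics})
```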
13 pages, 4317 KiB  
Article
Refinement of Background-Subtraction Methods Based on Convolutional Neural Network Features for Dynamic Background
by Tianming Yu, Jianhua Yang and Wei Lu
Algorithms 2019, 12(7), 128; https://0-doi-org.brum.beds.ac.uk/10.3390/a12070128 - 27 Jun 2019
Cited by 5 | Viewed by 3607
Abstract
Advancing background-subtraction methods for dynamic scenes is an ongoing goal for many researchers. Recently, background subtraction methods have been developed with deep convolutional features, which have improved their performance. However, most of these deep methods are supervised, are only applicable to a certain scene, and have high computational costs. In contrast, traditional background subtraction methods have low computational costs and can be applied to general scenes. Therefore, in this paper, we propose an unsupervised and concise method based on features learned from a deep convolutional neural network to refine traditional background subtraction methods. In the proposed method, the low-level features of an input image are extracted from the lower layers of a pretrained convolutional neural network, and the main features are retained to further establish the dynamic background model. The evaluation of the experiments on dynamic scenes demonstrates that the proposed method significantly improves the performance of traditional background subtraction methods. Full article
(This article belongs to the Special Issue Deep Learning for Image and Video Understanding)
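The general mechanism described above can be sketched briefly in Python/PyTorch: take low-level features from the lower layers of a pretrained CNN and maintain a running-average background model in feature space. The choice of VGG-16's first two convolutional layers, the update rate, and the threshold are assumptions for illustration, not the paper's exact configuration.

```python
import torch
from torchvision import models

# Lower layers of a pretrained VGG-16 as a fixed low-level feature extractor
# (layer choice is an assumption; weights download on first use, torchvision >= 0.13).
vgg = models.vgg16(weights="DEFAULT").features[:4].eval()

@torch.no_grad()
def feats(frame):                    # frame: (3, H, W) float tensor in [0, 1]
    return vgg(frame.unsqueeze(0)).squeeze(0)

frames = [torch.rand(3, 64, 64) for _ in range(10)]   # mock video frames

# Running-average background model maintained in feature space
bg = feats(frames[0])
for f in frames[1:]:
    bg = 0.95 * bg + 0.05 * feats(f)

# Foreground mask: pixels whose features deviate strongly from the model
cur = feats(torch.rand(3, 64, 64))
dist = (cur - bg).norm(dim=0)        # per-pixel distance across feature channels
mask = dist > dist.mean() + 2 * dist.std()
print("foreground fraction:", mask.float().mean().item())
```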
37 pages, 711 KiB  
Article
Guidelines for Experimental Algorithmics: A Case Study in Network Analysis
by Eugenio Angriman, Alexander van der Grinten, Moritz von Looz, Henning Meyerhenke, Martin Nöllenburg, Maria Predari and Charilaos Tzovas
Algorithms 2019, 12(7), 127; https://0-doi-org.brum.beds.ac.uk/10.3390/a12070127 - 26 Jun 2019
Cited by 18 | Viewed by 6093
Abstract
The field of network science is a highly interdisciplinary area; for the empirical analysis of network data, it draws algorithmic methodologies from several research fields. Hence, research procedures and descriptions of the technical results often differ, sometimes widely. In this paper we focus on methodologies for the experimental part of algorithm engineering for network analysis—an important ingredient for a research area with empirical focus. More precisely, we unify and adapt existing recommendations from different fields and propose universal guidelines—including statistical analyses—for the systematic evaluation of network analysis algorithms. This way, the behavior of newly proposed algorithms can be properly assessed and comparisons to existing solutions become meaningful. Moreover, as the main technical contribution, we provide SimexPal, a highly automated tool to perform and analyze experiments following our guidelines. To illustrate the merits of SimexPal and our guidelines, we apply them in a case study: we design, perform, visualize and evaluate experiments of a recent algorithm for approximating betweenness centrality, an important problem in network analysis. In summary, both our guidelines and SimexPal shall modernize and complement previous efforts in experimental algorithmics; they are not only useful for network analysis, but also in related contexts. Full article
11 pages, 1681 KiB  
Article
A New Regularized Reconstruction Algorithm Based on Compressed Sensing for the Sparse Underdetermined Problem and Applications of One-Dimensional and Two-Dimensional Signal Recovery
by Bin Wang, Li Wang, Hao Yu and Fengming Xin
Algorithms 2019, 12(7), 126; https://0-doi-org.brum.beds.ac.uk/10.3390/a12070126 - 26 Jun 2019
Cited by 1 | Viewed by 3198
Abstract
Compressed sensing theory has been widely used for solving underdetermined equations in various fields and has made remarkable achievements. The regularized smooth L0 (ReSL0) reconstruction algorithm adds an error regularization term to the smooth L0 (SL0) algorithm, achieving good signal reconstruction in the presence of noise. However, the ReSL0 reconstruction algorithm still has some flaws: it retains the original optimization method of SL0 and the Gaussian approximation function, but this method suffers from a sawtooth effect in the later optimization stage, and the convergence behavior is not ideal. Therefore, we make two adjustments on the basis of the ReSL0 reconstruction algorithm: firstly, we introduce the CIPF function, which has a better approximation effect than the Gaussian function; secondly, we combine the steepest descent method and Newton's method for the algorithm optimization. A novel regularized recovery algorithm named combined regularized smooth L0 (CReSL0) is thereby proposed. Under the same experimental conditions, the CReSL0 algorithm is compared with other popular reconstruction algorithms. Overall, the CReSL0 algorithm achieves excellent reconstruction performance in terms of peak signal-to-noise ratio (PSNR) and run time for both one-dimensional Gaussian signal and two-dimensional image reconstruction tasks. Full article
(This article belongs to the Special Issue Data Compression Algorithms and their Applications)
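For orientation, the baseline smoothed-L0 (SL0) algorithm that ReSL0 and CReSL0 build on is sketched below in Python: a Gaussian approximation of the L0 "norm", small gradient steps, and projection back onto Ax = y. The proposed CIPF approximation, regularization term, and combined steepest-descent/Newton optimization are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)

def sl0(A, y, sigma_min=1e-3, decay=0.7, inner=3, mu=2.0):
    """Baseline SL0: maximise sum(exp(-x^2 / 2 sigma^2)) subject to Ax = y,
    lowering sigma gradually so the smooth objective approaches the L0 norm."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                            # minimum-norm starting point
    sigma = 2.0 * np.abs(x).max()
    while sigma > sigma_min:
        for _ in range(inner):
            delta = x * np.exp(-x ** 2 / (2 * sigma ** 2))
            x = x - mu * delta                # gradient step on the smooth objective
            x = x - A_pinv @ (A @ x - y)      # project back onto Ax = y
        sigma *= decay
    return x

n, m, k = 100, 40, 5                          # signal length, measurements, sparsity
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_hat = sl0(A, A @ x_true)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```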