Evolutionary Computation

A special issue of Mathematics (ISSN 2227-7390).

Deadline for manuscript submissions: closed (31 March 2019) | Viewed by 97917

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editors


Dr. Gai-Ge Wang
Guest Editor
Department of Computer Science and Technology, Ocean University of China, 266100 Qingdao, China
Interests: evolutionary computation; swarm intelligence; metaheuristics; fuzzy scheduling; big data optimization; multi-objective and many-objective optimization

Dr. Amir H. Alavi
Guest Editor
Department of Civil and Environmental Engineering, University of Pittsburgh, Pittsburgh, PA 15261, USA
Interests: structural health monitoring; smart civil infrastructure systems; deployment of advanced sensors; energy harvesting; civil engineering system informatics

Special Issue Information

Dear Colleagues,

Evolutionary computation (EC) is a family of algorithms for global optimization inspired by biological evolution. It includes various population-based trial-and-error problem solvers with a metaheuristic or stochastic optimization character. In EC, each individual has a simple structure and function; an EC system composed of many such individuals can address difficult real-world problems that cannot be solved by single individuals. In recent decades, EC methods have been successfully applied to complex and time-consuming problems, and EC is a topic of interest among researchers in many fields of science and engineering. Some of the most popular EC paradigms are the genetic algorithm, genetic programming, and evolution strategies. Many theoretical and experimental studies have demonstrated significant properties of EC, such as reasoning with vague and/or ambiguous data, adaptation to dynamic and uncertain environments, and learning from noisy and/or incomplete information.
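To make the population-based, trial-and-error character described above concrete, here is a minimal sketch of a generic (mu + lambda) evolutionary loop on a toy objective; the scheme and all parameter values are illustrative and not tied to any particular EC paradigm.

```python
import numpy as np

def sphere(x):
    """Toy objective: minimize the sum of squares."""
    return float(np.sum(x ** 2))

def evolve(fitness, dim=10, mu=20, lam=80, sigma=0.3, generations=200, seed=0):
    """Generic population-based trial-and-error loop: mutate, evaluate, select survivors."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5.0, 5.0, size=(mu, dim))                        # initial parent population
    for _ in range(generations):
        parents = pop[rng.integers(0, mu, size=lam)]                    # pick parents at random
        offspring = parents + sigma * rng.standard_normal((lam, dim))   # Gaussian mutation
        union = np.vstack([pop, offspring])                             # parents compete with offspring
        scores = np.array([fitness(ind) for ind in union])
        pop = union[np.argsort(scores)[:mu]]                            # keep the mu best individuals
    return pop[0], fitness(pop[0])

best_x, best_f = evolve(sphere)
print(best_f)
```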

The aim of this Special Issue is to compile the latest theory and applications in the field of EC. Submissions should be original and unpublished, and should present novel, in-depth, fundamental research contributions from either a methodological perspective or an application point of view. In general, we are soliciting contributions on (but not limited to) the following topics:
  • Improvements of traditional EC methods (e.g., genetic algorithm, differential evolution, ant colony optimization and particle swarm optimization)
  • Recent development of EC methods (e.g., biogeography-based optimization, krill herd (KH) algorithm, monarch butterfly optimization (MBO), earthworm optimization algorithm (EWA), elephant herding optimization (EHO), moth search (MS) algorithm, rhino herd (RH) algorithm)
  • Theoretical study on EC algorithms using various techniques (e.g., Markov chain, dynamic system, complex system/networks, and Martingale)
  • Application of EC methods (e.g., scheduling, data mining, machine learning, reliability, planning, task assignment problem, IIR filter design, traveling salesman problem, optimization under dynamic and uncertain environments)

Dr. Gai-Ge Wang
Dr. Amir H. Alavi
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (21 papers)


Research

31 pages, 8087 KiB  
Article
A Novel Monarch Butterfly Optimization with Global Position Updating Operator for Large-Scale 0-1 Knapsack Problems
by Yanhong Feng, Xu Yu and Gai-Ge Wang
Mathematics 2019, 7(11), 1056; https://0-doi-org.brum.beds.ac.uk/10.3390/math7111056 - 04 Nov 2019
Cited by 31 | Viewed by 2824
Abstract
As a significant subset of the family of discrete optimization problems, the 0-1 knapsack problem (0-1 KP) has received considerable attention from researchers. Monarch butterfly optimization (MBO) is a recent metaheuristic algorithm inspired by the migration behavior of monarch butterflies. The original MBO was proposed for continuous optimization problems. This paper presents a novel monarch butterfly optimization with a global position updating operator (GMBO), which can address the 0-1 KP, a known NP-complete problem. The global position updating operator is incorporated to help all the monarch butterflies move rapidly towards the global best position. Moreover, a dichotomy encoding scheme is adopted to represent monarch butterflies for solving the 0-1 KP. In addition, a specific two-stage repair operator is used to repair infeasible solutions and further optimize feasible solutions. Finally, orthogonal design (OD) is employed to find the most suitable parameters. Two sets of low-dimensional 0-1 KP instances and three kinds of 15 high-dimensional 0-1 KP instances are used to verify the ability of the proposed GMBO. An extensive comparative study of GMBO with five classical and two state-of-the-art algorithms is carried out. The experimental results clearly indicate that GMBO achieves better solutions on almost all the 0-1 KP instances and significantly outperforms the rest.
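A two-stage greedy repair for 0-1 knapsack bit strings can be sketched as follows: first drop low value-density items until the solution is feasible, then pad with high-density items while capacity remains. This is an editorial illustration under an assumed value-density ordering, not the paper's exact operator.

```python
import numpy as np

def repair_and_improve(bits, values, weights, capacity):
    """Hypothetical two-stage greedy repair for a 0-1 knapsack bit string."""
    bits = bits.copy()
    density_order = np.argsort(values / weights)            # ascending value/weight ratio
    # Stage 1: drop the least profitable packed items until the knapsack is feasible.
    for i in density_order:
        if weights[bits == 1].sum() <= capacity:
            break
        bits[i] = 0
    # Stage 2: greedily add unpacked items (best ratio first) while capacity allows.
    for i in density_order[::-1]:
        if bits[i] == 0 and weights[bits == 1].sum() + weights[i] <= capacity:
            bits[i] = 1
    return bits

rng = np.random.default_rng(0)
values, weights = rng.integers(1, 100, 20), rng.integers(1, 100, 20)
print(repair_and_improve(rng.integers(0, 2, 20), values, weights, capacity=300))
```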

16 pages, 5399 KiB  
Article
Rock Classification from Field Image Patches Analyzed Using a Deep Convolutional Neural Network
by Xiangjin Ran, Linfu Xue, Yanyan Zhang, Zeyu Liu, Xuejia Sang and Jinxin He
Mathematics 2019, 7(8), 755; https://0-doi-org.brum.beds.ac.uk/10.3390/math7080755 - 18 Aug 2019
Cited by 69 | Viewed by 10232
Abstract
The automatic identification of rock type in the field would aid geological surveying, education, and automatic mapping. Deep learning is receiving significant research attention for pattern recognition and machine learning, and its application here effectively identifies rock types from images captured in the field. This paper proposes an accurate approach for identifying rock types in the field based on image analysis using deep convolutional neural networks. The proposed approach can identify six common rock types with an overall classification accuracy of 97.96%, thus outperforming other established deep-learning models and a linear model. The results show that the proposed approach represents an improvement in intelligent rock-type identification and solves several difficulties facing the automated identification of rock types in the field.

13 pages, 1952 KiB  
Article
An Adaptive Multi-Swarm Competition Particle Swarm Optimizer for Large-Scale Optimization
by Fanrong Kong, Jianhui Jiang and Yan Huang
Mathematics 2019, 7(6), 521; https://0-doi-org.brum.beds.ac.uk/10.3390/math7060521 - 06 Jun 2019
Cited by 14 | Viewed by 2528
Abstract
As a powerful optimization tool, particle swarm optimizers have been widely applied to many different optimization areas and have drawn much attention. However, for large-scale optimization problems, the algorithms struggle to reach satisfactory results because of their poor diversity maintenance. In this paper, an adaptive multi-swarm particle swarm optimizer is proposed, which adaptively divides a swarm into several sub-swarms and employs a competition mechanism to select exemplars. In this way, on the one hand, the diversity of exemplars increases, which helps the swarm preserve its exploitation ability. On the other hand, the number of sub-swarms adaptively decreases from a large value to a small one, which helps the algorithm strike a suitable balance between exploitation and exploration. Comparisons against several peer algorithms were conducted to validate the proposed algorithm on the CEC 2013 large-scale optimization benchmark suite. The experimental results demonstrate that the proposed algorithm is effective and competitive for large-scale optimization problems.
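A rough sketch of the sub-swarm division and competition-based exemplar selection described above is given below; the specific update rule and all names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def multiswarm_competition_step(positions, velocities, fitness, n_subswarms, rng):
    """One illustrative iteration: split the swarm, let each sub-swarm's winner act as exemplar."""
    idx = rng.permutation(len(positions))
    for sub in np.array_split(idx, n_subswarms):              # adaptive sub-swarm division
        scores = np.array([fitness(positions[i]) for i in sub])
        winner = sub[np.argmin(scores)]                       # competition selects the exemplar
        for i in sub:
            if i == winner:
                continue
            r1, r2 = rng.random(2)
            velocities[i] = r1 * velocities[i] + r2 * (positions[winner] - positions[i])
            positions[i] = positions[i] + velocities[i]
    return positions, velocities
```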

20 pages, 1519 KiB  
Article
An Efficient Memetic Algorithm for the Minimum Load Coloring Problem
by Zhiqiang Zhang, Zhongwen Li, Xiaobing Qiao and Weijun Wang
Mathematics 2019, 7(5), 475; https://0-doi-org.brum.beds.ac.uk/10.3390/math7050475 - 25 May 2019
Cited by 4 | Viewed by 2556
Abstract
Given a graph G with n vertices and l edges, the load distribution of a coloring q: V → {red, blue} is defined as dq = (rq, bq), in which rq is the number of edges with at least one end-vertex colored red and bq is the number of edges with at least one end-vertex colored blue. The minimum load coloring problem (MLCP) is to find a coloring q such that the maximum load, lq = (1/l) × max{rq, bq}, is minimized. This problem has been proved to be NP-complete. This paper proposes a memetic algorithm for the MLCP based on an improved K-OPT local search and an evolutionary operation. Furthermore, a data splitting operation is executed to expand the data available to the global search, and a disturbance operation is employed to improve the search ability of the algorithm. Experiments are carried out on the DIMACS benchmark to compare the results of the proposed memetic algorithm with those of existing algorithms. The experimental results show that the memetic algorithm finds a greater number of best results for the graphs and can improve the best known results for the MLCP.
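Working directly from the definition above, the maximum load of a given coloring can be computed as follows; the small graph and coloring are made up purely for illustration.

```python
def max_load(edges, coloring):
    """Maximum load l_q = max{r_q, b_q} / l for a red/blue vertex coloring."""
    r = sum(1 for u, v in edges if coloring[u] == "red" or coloring[v] == "red")
    b = sum(1 for u, v in edges if coloring[u] == "blue" or coloring[v] == "blue")
    return max(r, b) / len(edges)

edges = [(0, 1), (1, 2), (2, 3)]
coloring = {0: "red", 1: "red", 2: "blue", 3: "blue"}
print(max_load(edges, coloring))   # edge (1, 2) counts toward both loads -> 2/3
```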

12 pages, 2995 KiB  
Article
An Entropy-Assisted Particle Swarm Optimizer for Large-Scale Optimization Problem
by Weian Guo, Lei Zhu, Lei Wang, Qidi Wu and Fanrong Kong
Mathematics 2019, 7(5), 414; https://0-doi-org.brum.beds.ac.uk/10.3390/math7050414 - 09 May 2019
Cited by 7 | Viewed by 2137
Abstract
Diversity maintenance is crucial for the performance of a particle swarm optimizer (PSO). However, the particle update mechanism of the conventional PSO performs poorly at maintaining diversity, which usually results in premature convergence or stagnation of exploration in the search space. To help particle swarm optimization enhance its diversity maintenance, many works have proposed adjusting the distances among particles. However, such operators lead to a situation where diversity maintenance and fitness evaluation are conducted in the same distance-based space, which brings a new challenge in trading off convergence speed against diversity preservation. In this paper, a novel PSO is proposed that employs a competitive strategy and an entropy measure to manage the convergence operator and diversity maintenance, respectively. The proposed algorithm was applied to the CEC 2013 large-scale optimization benchmark suite, and the results demonstrate that it is feasible and competitive for large-scale optimization problems.
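One common way to quantify swarm diversity with an entropy measure is to histogram the particles along each dimension and average the Shannon entropy; a hedged sketch is shown below (the binning scheme and averaging are assumptions, not the paper's exact measure).

```python
import numpy as np

def swarm_entropy(positions, n_bins=10):
    """Average per-dimension Shannon entropy of the particle distribution."""
    total = 0.0
    for d in range(positions.shape[1]):
        counts, _ = np.histogram(positions[:, d], bins=n_bins)
        p = counts / counts.sum()
        p = p[p > 0]                      # ignore empty bins
        total += -(p * np.log(p)).sum()
    return total / positions.shape[1]

rng = np.random.default_rng(1)
print(swarm_entropy(rng.normal(size=(50, 5))))
```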

35 pages, 3547 KiB  
Article
Enhancing Elephant Herding Optimization with Novel Individual Updating Strategies for Large-Scale Optimization Problems
by Jiang Li, Lihong Guo, Yan Li and Chang Liu
Mathematics 2019, 7(5), 395; https://0-doi-org.brum.beds.ac.uk/10.3390/math7050395 - 30 Apr 2019
Cited by 37 | Viewed by 3945
Abstract
Inspired by the behavior of elephants in nature, elephant herding optimization (EHO) was recently proposed for global optimization. Like most other metaheuristic algorithms, EHO does not use the previous individuals in the later updating process. If the useful information in the previous individuals were fully exploited and used in the later optimization process, the quality of solutions might be improved significantly. In this paper, we propose several new updating strategies for EHO, in which one, two, or three individuals are selected from the previous iterations and their useful information is incorporated into the updating process. Accordingly, the final individual at the current iteration is generated from the elephant produced by the basic EHO and the selected previous elephants through a weighted sum. The weights are determined by a random number and the fitness of the elephant individuals at the previous iteration. We incorporated each of the six individual updating strategies into the basic EHO, creating six improved variants of EHO. We benchmarked these proposed methods on sixteen test functions. Our experimental results demonstrate that the proposed improved methods significantly outperform the basic EHO.
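One plausible reading of the weighted-sum update is sketched below; the specific weighting scheme (a random share for the new elephant and fitness-proportional shares for the historical ones) is an illustrative assumption, not the paper's exact formula.

```python
import numpy as np

def combine_with_history(x_new, prev_individuals, prev_fitness, rng):
    """Blend the basic-EHO elephant with k historical elephants via a weighted sum."""
    r = rng.random()                                   # random share for the new elephant
    inv = 1.0 / (np.asarray(prev_fitness) + 1e-12)     # better (smaller) fitness -> larger weight
    w_prev = (1.0 - r) * inv / inv.sum()
    x_hist = sum(w * x for w, x in zip(w_prev, prev_individuals))
    return r * x_new + x_hist
```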

14 pages, 1226 KiB  
Article
Improved Whale Algorithm for Solving the Flexible Job Shop Scheduling Problem
by Fei Luan, Zongyan Cai, Shuqiang Wu, Tianhua Jiang, Fukang Li and Jia Yang
Mathematics 2019, 7(5), 384; https://0-doi-org.brum.beds.ac.uk/10.3390/math7050384 - 28 Apr 2019
Cited by 34 | Viewed by 4452
Abstract
In this paper, a novel improved whale optimization algorithm (IWOA), based on an integrated approach, is presented for solving the flexible job shop scheduling problem (FJSP) with the objective of minimizing makespan. First, to make the whale optimization algorithm (WOA) applicable to the FJSP, a conversion method between the whale individual position vector and the scheduling solution is proposed. Second, an effective initialization scheme of a certain quality is obtained using chaotic reverse learning (CRL) strategies. Third, a nonlinear convergence factor (NCF) and an adaptive weight (AW) are introduced to balance the abilities of exploitation and exploration of the algorithm. Furthermore, a variable neighborhood search (VNS) operation is performed on the current optimal individual to enhance the accuracy and effectiveness of the local exploration. Experimental results on various benchmark instances show that the proposed IWOA can obtain competitive results compared to existing algorithms within a short time.

19 pages, 337 KiB  
Article
Topology Structure Implied in β-Hilbert Space, Heisenberg Uncertainty Quantum Characteristics and Numerical Simulation of the DE Algorithm
by Kaiguang Wang and Yuelin Gao
Mathematics 2019, 7(4), 330; https://0-doi-org.brum.beds.ac.uk/10.3390/math7040330 - 04 Apr 2019
Cited by 3 | Viewed by 2130
Abstract
The differential evolution (DE) algorithm is a global optimization algorithm. To explore the convergence implied in the Hilbert space with the parameter β of the DE algorithm and the quantum properties of the optimal point in that space, we establish a control-convergent iterative form of a higher-order differential equation under the condition Pε and analyze the control-convergence properties of its iterative sequence; we analyze the three topological structures implied in the Hilbert space, namely the single-point topological structure, the branch topological structure, and the discrete topological structure; and we establish and analyze the association between the Heisenberg uncertainty quantum characteristics from quantum physics and the topological structure implied in the β-Hilbert space of the DE algorithm, as follows: the speed resolution Δv2 of the convergence speed of the iterative sequence and the position resolution Δxβε of the swinging range of the global optimal point are a pair of conjugate variables of the quantum states in the β-Hilbert space with respect to the eigenvalues λiR, corresponding to the uncertainty characteristics of quantum states, and they cannot simultaneously achieve bidirectional efficiency between convergence speed and best-point precision through any procedural improvement, where λiR is a constant in the β-Hilbert space. Finally, the conclusion is verified by quantum numerical simulation on high-dimensional data. The simulations yield the following quantitative conclusions: except for several dead points and invalid points, under a given spatial dimension, the population size, mutation operator, crossover operator, and selection operator generally decrease or increase with a variance deviation rate of +0.50 and an error of less than ±0.5; correspondingly, the speed changing rate of the individual iterative points and the position changing rate of the global optimal point β exhibit an inverse correlation in the β-Hilbert space from a statistical perspective, which illustrates the association between the Heisenberg uncertainty quantum characteristics and the topological structure implied in the β-Hilbert space of the DE algorithm.
17 pages, 898 KiB  
Article
An Improved Artificial Bee Colony Algorithm Based on Elite Strategy and Dimension Learning
by Songyi Xiao, Wenjun Wang, Hui Wang, Dekun Tan, Yun Wang, Xiang Yu and Runxiu Wu
Mathematics 2019, 7(3), 289; https://0-doi-org.brum.beds.ac.uk/10.3390/math7030289 - 21 Mar 2019
Cited by 16 | Viewed by 3525
Abstract
The artificial bee colony (ABC) algorithm is a powerful optimization method with strong search abilities for many optimization problems. However, some studies have shown that ABC has poor exploitation ability on complex optimization problems. To overcome this issue, an improved ABC variant based on an elite strategy and dimension learning (called ABC-ESDL) is proposed in this paper. The elite strategy selects better solutions to accelerate the search of ABC. Dimension learning uses the difference between two random dimensions to generate a large jump. In the experiments, a classical benchmark set and the 2013 IEEE Congress on Evolutionary Computation (CEC 2013) benchmark set are tested. Computational results show that the proposed ABC-ESDL achieves more accurate solutions than ABC and five other improved ABC variants.
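The dimension-learning jump can be sketched as below: one dimension of a candidate is perturbed by the difference between two randomly chosen dimensions of the same solution. The exact form (scaling factor, which dimension is overwritten) is an assumption for illustration.

```python
import numpy as np

def dimension_learning(x, rng):
    """Perturb one dimension using the difference between two random dimensions
    of the same solution (illustrative form of the dimension-learning jump)."""
    v = x.copy()
    j, k = rng.choice(len(x), size=2, replace=False)
    phi = rng.uniform(-1.0, 1.0)
    v[j] = x[j] + phi * (x[j] - x[k])
    return v

rng = np.random.default_rng(4)
print(dimension_learning(rng.normal(size=6), rng))
```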

26 pages, 450 KiB  
Article
SRIFA: Stochastic Ranking with Improved-Firefly-Algorithm for Constrained Optimization Engineering Design Problems
by Umesh Balande and Deepti Shrimankar
Mathematics 2019, 7(3), 250; https://0-doi-org.brum.beds.ac.uk/10.3390/math7030250 - 11 Mar 2019
Cited by 18 | Viewed by 3621
Abstract
The firefly algorithm (FA) is a prominent nature-inspired, swarm-based technique for solving numerous real-world global optimization problems. This paper presents an overview of constraint handling techniques and a hybrid algorithm, namely the stochastic ranking with improved firefly algorithm (SRIFA), for solving constrained real-world engineering optimization problems. The stochastic ranking approach is broadly used to maintain a balance between the penalty and fitness functions. FA is extensively used because it converges faster than other metaheuristic algorithms. The basic FA is modified by incorporating opposition-based learning and a random scale factor to improve diversity and performance. Furthermore, SRIFA uses feasibility-based rules to maintain a balance between the penalty and objective functions. SRIFA is evaluated on 24 CEC 2006 standard functions and five well-known constrained engineering design problems from the literature to analyze its effectiveness. The overall computational results of SRIFA are better than those of the basic FA, and its statistical outcomes are significantly superior to those of the other compared evolutionary algorithms in terms of performance, quality, and efficiency.
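Stochastic ranking, which SRIFA builds on, is essentially a bubble-sort-like pass in which adjacent solutions are compared by objective value with some probability even when constraints are violated, and by constraint violation otherwise. A compact sketch of the standard procedure follows; the comparison probability of 0.45 is an illustrative choice.

```python
import numpy as np

def stochastic_ranking(f, phi, p_f=0.45, rng=None):
    """Rank solutions from best to worst given objectives f and constraint violations phi."""
    rng = rng or np.random.default_rng()
    n = len(f)
    idx = list(range(n))
    for _ in range(n):                                # at most n bubble-sort sweeps
        swapped = False
        for i in range(n - 1):
            a, b = idx[i], idx[i + 1]
            both_feasible = phi[a] == 0 and phi[b] == 0
            key = f if (both_feasible or rng.random() < p_f) else phi
            if key[a] > key[b]:                       # smaller is better for both keys
                idx[i], idx[i + 1] = b, a
                swapped = True
        if not swapped:
            break
    return idx

print(stochastic_ranking(f=[3.0, 1.0, 2.0], phi=[0.0, 5.0, 0.0]))
```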

11 pages, 751 KiB  
Article
A Novel Hybrid Algorithm for Minimum Total Dominating Set Problem
by Fuyu Yuan, Chenxi Li, Xin Gao, Minghao Yin and Yiyuan Wang
Mathematics 2019, 7(3), 222; https://0-doi-org.brum.beds.ac.uk/10.3390/math7030222 - 27 Feb 2019
Cited by 13 | Viewed by 2376
Abstract
The minimum total dominating set (MTDS) problem is a variant of the classical dominating set problem. In this paper, we propose a hybrid evolutionary algorithm, which combines local search and a genetic algorithm, to solve the MTDS problem. First, a novel scoring heuristic is implemented to increase the search effectiveness and thus obtain better solutions. Specifically, a population of several initial solutions is created first to make the algorithm search more regions, and the local search phase then further improves the initial solutions by swapping vertices effectively. Second, a repair-based crossover operation creates new solutions to make the algorithm search more feasible regions. Experiments on the classical DIMACS benchmark are carried out to test the performance of the proposed algorithm, and the experimental results show that our algorithm performs much better than its competitor on all instances.

13 pages, 6344 KiB  
Article
First-Arrival Travel Times Picking through Sliding Windows and Fuzzy C-Means
by Lei Gao, Zhen-yun Jiang and Fan Min
Mathematics 2019, 7(3), 221; https://0-doi-org.brum.beds.ac.uk/10.3390/math7030221 - 27 Feb 2019
Cited by 14 | Viewed by 4877
Abstract
First-arrival picking is a critical step in seismic data processing. This paper proposes the first-arrival picking through sliding windows and fuzzy c-means (FPSF) algorithm, which has two stages. The first stage detects a range using sliding windows in the vertical and horizontal directions. The second stage obtains the first-arrival travel times from that range using fuzzy c-means coupled with particle swarm optimization. Results on both noisy and preprocessed field data show that the FPSF algorithm is more accurate than classical methods.

20 pages, 3675 KiB  
Article
A Multi-Objective DV-Hop Localization Algorithm Based on NSGA-II in Internet of Things
by Penghong Wang, Fei Xue, Hangjuan Li, Zhihua Cui, Liping Xie and Jinjun Chen
Mathematics 2019, 7(2), 184; https://0-doi-org.brum.beds.ac.uk/10.3390/math7020184 - 15 Feb 2019
Cited by 113 | Viewed by 5747
Abstract
Node localization, as the most fundamental component of wireless sensor networks (WSNs) and the Internet of Things (IoT), is a pivotal problem. The distance vector-hop (DV-Hop) technique is frequently used for node location estimation in WSNs, but it has poor estimation precision. In this paper, a multi-objective DV-Hop localization algorithm based on NSGA-II, called NSGA-II-DV-Hop, is designed. In NSGA-II-DV-Hop, a new multi-objective model is constructed, an enhanced constraint strategy based on all beacon nodes is adopted to improve the DV-Hop positioning estimation precision, and four new complex network topologies are tested. Simulation results demonstrate that the precision of NSGA-II-DV-Hop significantly outperforms that of other algorithms, such as CS-DV-Hop, OCS-LC-DV-Hop, and MODE-DV-Hop.
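For context, the classical DV-Hop range estimate that such algorithms refine works as sketched below: each beacon converts its known inter-beacon distances and hop counts into an average hop size, and an unknown node multiplies that hop size by its hop count to each beacon. This is a simplified illustration, not the paper's enhanced constraint strategy.

```python
import numpy as np

def dv_hop_ranges(beacon_xy, hops_between_beacons, hops_to_unknown):
    """Classical DV-Hop range estimates from one unknown node to every beacon."""
    d = np.linalg.norm(beacon_xy[:, None, :] - beacon_xy[None, :, :], axis=-1)
    hop_size = d.sum(axis=1) / hops_between_beacons.sum(axis=1)   # average hop size per beacon
    return hop_size * hops_to_unknown                             # estimated distance to each beacon

beacons = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
hops = np.array([[0, 4, 4], [4, 0, 5], [4, 5, 0]])
print(dv_hop_ranges(beacons, hops, hops_to_unknown=np.array([2, 3, 3])))
```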

21 pages, 3615 KiB  
Article
Monarch Butterfly Optimization for Facility Layout Design Based on a Single Loop Material Handling Path
by Minhee Kim and Junjae Chae
Mathematics 2019, 7(2), 154; https://0-doi-org.brum.beds.ac.uk/10.3390/math7020154 - 06 Feb 2019
Cited by 21 | Viewed by 4054
Abstract
Facility layout problems (FLPs) are concerned with the non-overlapping arrangement of facilities. The objective of many FLP-based studies is to minimize the total material handling cost between facilities, which are treated as rectangular blocks within a given space. However, it is important to integrate the layout design with continual material flow when the system uses circulating material handling equipment. The present study proposes approaches to solve the layout design and the shortest single-loop material handling path. Monarch butterfly optimization (MBO), a recently proposed metaheuristic algorithm, is applied to determine the layout configuration. A loop construction method is proposed to construct a single-loop material handling path for the given layout in every MBO iteration. A slicing tree structure (STS) is used to represent the layout configuration in solution form. A total of 11 instances are tested to evaluate the algorithm's performance. The proposed approach generates solutions as intended within a reasonable amount of time.

17 pages, 2881 KiB  
Article
A Novel Bat Algorithm with Multiple Strategies Coupling for Numerical Optimization
by Yechuang Wang, Penghong Wang, Jiangjiang Zhang, Zhihua Cui, Xingjuan Cai, Wensheng Zhang and Jinjun Chen
Mathematics 2019, 7(2), 135; https://0-doi-org.brum.beds.ac.uk/10.3390/math7020135 - 01 Feb 2019
Cited by 105 | Viewed by 11525
Abstract
The bat algorithm (BA) is a heuristic algorithm that performs global optimization by imitating the echolocation behavior of bats. The BA is widely used in various optimization problems because of its excellent performance. In the bat algorithm, the global search capability is determined by the loudness and frequency parameters. However, experiments show that each operator in the algorithm can only improve the performance of the algorithm at certain times. In this paper, a novel bat algorithm with multiple strategies coupling (mixBA) is proposed to solve this problem. To prove the effectiveness of the algorithm, we compared it with other algorithms on the CEC 2013 benchmark test suite. Furthermore, the Wilcoxon and Friedman tests were conducted to distinguish the differences between it and the other algorithms. The results prove that the proposed algorithm is significantly superior to the others on the majority of benchmark functions.
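For reference, the canonical BA update that mixBA builds on moves each bat toward the current global best with a randomly drawn frequency; in the full algorithm, loudness and pulse rate then gate the acceptance of local random-walk solutions. Below is a standard textbook sketch with illustrative parameter values.

```python
import numpy as np

def bat_step(x, v, x_best, f_min=0.0, f_max=2.0, rng=None):
    """Canonical bat algorithm velocity/position update for one bat."""
    rng = rng or np.random.default_rng()
    freq = f_min + (f_max - f_min) * rng.random()   # frequency drawn per bat
    v_new = v + (x - x_best) * freq                 # pull toward the global best
    return x + v_new, v_new

rng = np.random.default_rng(2)
x, v, x_best = rng.normal(size=3), np.zeros(3), np.zeros(3)
print(bat_step(x, v, x_best, rng=rng))
```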

18 pages, 530 KiB  
Article
Search Acceleration of Evolutionary Multi-Objective Optimization Using an Estimated Convergence Point
by Yan Pei, Jun Yu and Hideyuki Takagi
Mathematics 2019, 7(2), 129; https://0-doi-org.brum.beds.ac.uk/10.3390/math7020129 - 28 Jan 2019
Cited by 11 | Viewed by 2975
Abstract
We propose a method to accelerate evolutionary multi-objective optimization (EMO) search using an estimated convergence point. Pareto improvement from the last generation to the current generation provides information about promising Pareto solution areas in both the objective space and the parameter space. We use this information to construct a set of moving vectors and estimate a non-dominated Pareto point from these moving vectors. In this work, we try different methods for constructing the moving vectors and use the convergence point estimated from them to accelerate EMO search. From our evaluation results, we found that the landscape of Pareto improvement has a uni-modal distribution characteristic in the objective space and a multi-modal distribution characteristic in the parameter space. Our proposed method enhances EMO search when the landscape of Pareto improvement has a uni-modal distribution characteristic in the parameter space, and by chance also does so when the landscape has a multi-modal distribution characteristic in the parameter space. The proposed methods not only obtain more Pareto solutions than the conventional non-dominated sorting genetic algorithm II (NSGA-II), but also increase the diversity of the Pareto solutions. This indicates that our proposed method can enhance the search capability of EMO in terms of both Pareto dominance and solution diversity. We also found that the method of constructing the moving vectors is a primary factor in the success of our proposed method. We analyze and discuss this method with several evaluation metrics and statistical tests. The proposed method has the potential to enhance EMO by embedding deterministic learning methods in stochastic optimization algorithms.
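A common way to estimate a single convergence point from a set of moving vectors is to take the least-squares point closest to the lines they define; the sketch below shows that construction as an editorial illustration, not necessarily the exact estimator used in the paper.

```python
import numpy as np

def estimate_convergence_point(starts, moves):
    """Least-squares point closest to the lines x = p_i + t * d_i defined by moving vectors."""
    dim = starts.shape[1]
    A, b = np.zeros((dim, dim)), np.zeros(dim)
    for p, d in zip(starts, moves):
        d = d / np.linalg.norm(d)
        P = np.eye(dim) - np.outer(d, d)      # projector orthogonal to the line direction
        A += P
        b += P @ p
    return np.linalg.solve(A, b)

starts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
moves = np.array([[1.0, 1.0], [0.0, 1.0], [1.0, -0.5]])
print(estimate_convergence_point(starts, moves))
```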

25 pages, 4527 KiB  
Article
The Importance of Transfer Function in Solving Set-Union Knapsack Problem Based on Discrete Moth Search Algorithm
by Yanhong Feng, Haizhong An and Xiangyun Gao
Mathematics 2019, 7(1), 17; https://0-doi-org.brum.beds.ac.uk/10.3390/math7010017 - 24 Dec 2018
Cited by 25 | Viewed by 4269
Abstract
The moth search (MS) algorithm, originally proposed to solve continuous optimization problems, is a novel bio-inspired metaheuristic. At present, there has been little work on using MS to solve discrete optimization problems. One of the most common and efficient ways to discretize MS is to use a transfer function, which is in charge of mapping a continuous search space to a discrete one. In this paper, twelve transfer functions divided into three families, S-shaped (named S1, S2, S3, and S4), V-shaped (named V1, V2, V3, and V4), and other shapes (named O1, O2, O3, and O4), are combined with MS, and twelve discrete versions of the MS algorithm are proposed for solving the set-union knapsack problem (SUKP). Three groups of fifteen SUKP instances are employed to evaluate the importance of these transfer functions. The results show that O4 is the best transfer function when combined with MS to solve the SUKP. Meanwhile, the importance of the transfer function in improving solution quality and convergence rate is demonstrated as well.
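Two widely used representatives of the S-shaped and V-shaped families are sketched below to show how a transfer function turns a continuous position into bit decisions; the exact S1–S4, V1–V4, and O1–O4 definitions are given in the paper itself.

```python
import numpy as np

def s_shaped(x):
    """Sigmoid (S-shaped) transfer: continuous value -> probability that the bit is 1."""
    return 1.0 / (1.0 + np.exp(-x))

def v_shaped(x):
    """Common V-shaped transfer: large |x| -> high probability of flipping the current bit."""
    return np.abs(np.tanh(x))

def binarize_s(x_continuous, rng):
    """S-shaped usage: sample each bit directly from the transfer probability."""
    return (rng.random(x_continuous.shape) < s_shaped(x_continuous)).astype(int)

def binarize_v(x_continuous, current_bits, rng):
    """V-shaped usage: flip the current bit with the transfer probability."""
    flip = rng.random(x_continuous.shape) < v_shaped(x_continuous)
    return np.where(flip, 1 - current_bits, current_bits)

rng = np.random.default_rng(3)
x = rng.normal(scale=2.0, size=10)
print(binarize_s(x, rng), binarize_v(x, rng.integers(0, 2, 10), rng))
```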

34 pages, 2870 KiB  
Article
A Novel Simple Particle Swarm Optimization Algorithm for Global Optimization
by Xin Zhang, Dexuan Zou and Xin Shen
Mathematics 2018, 6(12), 287; https://0-doi-org.brum.beds.ac.uk/10.3390/math6120287 - 27 Nov 2018
Cited by 27 | Viewed by 7901
Abstract
In order to overcome several shortcomings of particle swarm optimization (PSO), e.g., premature convergence, low accuracy, and poor global search ability, a novel simple particle swarm optimization based on a random weight and a confidence term (SPSORC) is proposed in this paper. The first two improvements of the algorithm are called simple particle swarm optimization (SPSO) and simple particle swarm optimization with a confidence term (SPSOC), respectively. The former has a simpler structure and faster convergence speed, and the latter increases particle diversity. SPSORC takes advantage of both and enhances the exploitation capability of the algorithm. Twenty-two benchmark functions and four state-of-the-art improvement strategies are introduced to enable a fairer comparison. In addition, a t-test is used to analyze the differences in the large amounts of data. The stability and search efficiency of the algorithms are evaluated by comparing the success rates and average iteration counts obtained on 50-dimensional benchmark functions. The results show that SPSO and its improved variants perform well compared with several other improved PSO algorithms in terms of both search time and computing accuracy. SPSORC, in particular, is more competent for the optimization of complex problems. Overall, it has more desirable convergence, stronger stability, and higher accuracy.

16 pages, 3075 KiB  
Article
Energy-Efficient Scheduling for a Job Shop Using an Improved Whale Optimization Algorithm
by Tianhua Jiang, Chao Zhang, Huiqi Zhu, Jiuchun Gu and Guanlong Deng
Mathematics 2018, 6(11), 220; https://0-doi-org.brum.beds.ac.uk/10.3390/math6110220 - 28 Oct 2018
Cited by 59 | Viewed by 3936
Abstract
Under current environmental pressure, many manufacturing enterprises are urged or forced to adopt effective energy-saving measures. However, environmental metrics, such as energy consumption and CO2 emissions, are seldom considered in traditional production scheduling problems. Recently, energy-related scheduling problems have received increasing attention from researchers. In this paper, an energy-efficient job shop scheduling problem (EJSP) is investigated with the objective of minimizing the sum of the energy consumption cost and the completion-time cost. As the classical JSP is well known to be a non-deterministic polynomial-time hard (NP-hard) problem, an improved whale optimization algorithm (IWOA) is presented to solve the energy-efficient scheduling problem. The improvement is performed using dispatching rules (DR), a nonlinear convergence factor (NCF), and a mutation operation (MO). The DR is used to enhance the initial solution quality and overcome the drawbacks of a random population. The NCF is adopted to balance the abilities of exploration and exploitation of the algorithm. The MO is employed to reduce the possibility of falling into local optima and to avoid premature convergence. To validate the effectiveness of the proposed algorithm, extensive simulations were performed, and the computational data demonstrate the promising advantages of the proposed IWOA for the energy-efficient job shop scheduling problem.

16 pages, 5988 KiB  
Article
Urban-Tissue Optimization through Evolutionary Computation
by Diego Navarro-Mateu, Mohammed Makki and Ana Cocho-Bermejo
Mathematics 2018, 6(10), 189; https://0-doi-org.brum.beds.ac.uk/10.3390/math6100189 - 02 Oct 2018
Cited by 12 | Viewed by 5784
Abstract
The experiments analyzed in this paper focus on the use of evolutionary computation (EC) applied to a parametrized urban tissue. Through the application of EC, it is possible to develop a design under a single model that addresses multiple conflicting objectives. The experiments presented are based on Cerdà's master plan in Barcelona, specifically on the iconic Eixample block, grouped into a 4 × 4 urban superblock. The proposal aims to reach the existing high density of the city while reclaiming the block relations proposed by Cerdà's original plan. Generating and ranking multiple individuals in a population through several generations ensures a flexible solution rather than a single "optimal" one. The final results in the Pareto front show a successful and diverse set of solutions that approximate Cerdà's plan and the existing state of Barcelona's Eixample. Further analysis proposes different methodologies and considerations for choosing appropriate individuals within the front depending on design requirements.

18 pages, 846 KiB  
Article
A Developed Artificial Bee Colony Algorithm Based on Cloud Model
by Ye Jin, Yuehong Sun and Hongjiao Ma
Mathematics 2018, 6(4), 61; https://0-doi-org.brum.beds.ac.uk/10.3390/math6040061 - 18 Apr 2018
Cited by 10 | Viewed by 4441
Abstract
The artificial bee colony (ABC) algorithm is a bionic intelligent optimization method. The cloud model is a kind of uncertainty conversion model between a qualitative concept T̃, expressed in natural language, and its quantitative expression; it integrates probability theory and fuzzy mathematics. A developed ABC algorithm based on the cloud model is proposed to enhance the accuracy of the basic ABC algorithm and avoid getting trapped in local optima by introducing a new selection mechanism, replacing the onlooker bees' search formula, and changing the scout bees' updating formula. Experiments on CEC15 show that the new algorithm has a faster convergence speed and higher accuracy than the basic ABC and some cloud-model-based ABC variants.
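The cloud model referenced here is usually realized with the forward normal cloud generator, which turns a qualitative concept described by (Ex, En, He) into concrete drops with certainty degrees; a standard sketch follows, with illustrative parameter values.

```python
import numpy as np

def normal_cloud(ex, en, he, n_drops=1000, rng=None):
    """Forward normal cloud generator: concept (Ex, En, He) -> cloud drops and certainty degrees."""
    rng = rng or np.random.default_rng()
    en_prime = rng.normal(en, he, n_drops)                     # randomized entropy per drop
    x = rng.normal(ex, np.abs(en_prime))                       # drop positions
    mu = np.exp(-(x - ex) ** 2 / (2 * en_prime ** 2 + 1e-12))  # certainty degree of each drop
    return x, mu

drops, certainty = normal_cloud(ex=0.0, en=1.0, he=0.1)
print(drops[:5], certainty[:5])
```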
