Review

Nature-Inspired Algorithms from Oceans to Space: A Comprehensive Review of Heuristic and Meta-Heuristic Optimization Algorithms and Their Potential Applications in Drones

by Shahin Darvishpoor 1, Amirsalar Darvishpour 2, Mario Escarcega 3 and Mostafa Hassanalian 3,*

1 Department of Aerospace Engineering, K.N. Toosi University of Technology, Tehran 16569-83911, Iran
2 Department of Computer Engineering, University of Tehran, Tehran 14179-35840, Iran
3 Department of Mechanical Engineering, New Mexico Tech, Socorro, NM 87801, USA
* Author to whom correspondence should be addressed.
Submission received: 15 May 2023 / Revised: 17 June 2023 / Accepted: 25 June 2023 / Published: 27 June 2023

Abstract: This paper reviews a majority of the nature-inspired algorithms, including heuristic and meta-heuristic, bio-inspired and non-bio-inspired algorithms, focusing on their sources of inspiration and studying their potential applications in drones. About 350 algorithms have been studied, and a comprehensive classification is introduced based on the sources of inspiration, including bio-based, ecosystem-based, social-based, physics-based, chemistry-based, mathematics-based, music-based, sport-based, and hybrid algorithms. The performance of 21 selected algorithms, considering calculation time, max iterations, error, and the cost function, is compared by solving 10 benchmark functions of different types. A review of the applications of nature-inspired algorithms in aerospace engineering is provided, which gives a general view of the optimization problems in drones that are currently addressed and the potential algorithms to solve them.

1. Introduction

Optimization is a practical and essential part of engineering and science with a growing number of applications [1]. Many researchers around the world are working on the development of optimization methods. Among these, nature-inspired or bio-inspired algorithms are prevalent due to their excellent performance and simplicity [2]. Since their inception, nature-inspired algorithms have experienced exponential growth. Hundreds of animals, insects, and natural phenomena have served as sources of inspiration for optimization algorithms [3]. Researchers have developed algorithms based on underwater, terrestrial, and flying animals. The list extends to natural phenomena such as rain, the water cycle, hurricanes, and even stars and galaxies. In addition, many behaviors of humans and animals, along with other topics such as music and sports, have been used to develop optimization algorithms. There are also hybrid algorithms that combine other nature-inspired algorithms. About 100 different species of animals, insects, plants, and micro- and nano-organisms have been used so far to develop optimization algorithms. Based on the current paper, there are at least 350 different nature-inspired algorithms in various categories. Figure 1 symbolically illustrates a small portion of the sources of inspiration for nature-inspired algorithms, from oceans to space.
Some similar research can be found in the literature on the latest progress of nature-inspired optimization algorithms. Molina et al. have proposed comprehensive taxonomies of nature-inspired optimization algorithms. They focused on the sources of inspiration of these algorithms and showed that a large group of algorithms is similar to classic approaches in terms of their core computation process. However, their research lacks a performance analysis for these algorithms [4]. Discussions on the novelty and importance of nature-inspired optimization algorithms are still ongoing: some researchers see no value in these algorithms, while others believe that the production of new methods should be stopped and that the majority of effort should be dedicated to more promising research directions in the meta-heuristic literature. While they acknowledge the power of nature-inspired optimization, they believe only a few algorithms can really be used for solving problems with high accuracy in a short time [5]. Evaluating the performance of these algorithms on different problems is therefore a necessary line of research that has received little attention. The next section reviews similar review papers on nature-inspired optimization and their efforts to organize and classify these algorithms for better understanding and study.
In this paper, we study the latest developments in nature-inspired optimization and the challenges in this field. First, we classify all of these algorithms based on their source of inspiration and provide a comprehensive classification. Considering the classifications in similar papers, we have tried to provide a more comprehensive classification with detailed sub-categories. In each category, the most popular algorithms are studied in detail to give a good view of their challenges and benefits. Next, in Section 3, the performance of a group of selected algorithms is evaluated by solving 10 different problems. Critical parameters such as mean iterations, average computation time, and mean error are calculated by solving each problem 500 times. A database of sample codes for nature-inspired algorithms is also provided, and all of the project's code is available in the GitHub repositories. Most algorithms in this paper have been extracted from a recent work by Tzanetos et al., in addition to classical algorithms and newly developed ones [6]. In the last section, we study the different applications of nature-inspired algorithms in drones and aerospace systems. After providing a classification of the different applications, we survey the papers published in this area to find out which algorithms are used most. This study reveals: (1) the areas in drones in which nature-inspired algorithms are most applicable, (2) the algorithms that are widely used, and (3) the algorithms with high potential that are neglected in the literature. Finally, we provide a brief overview of the future of nature-inspired algorithms, specifically in drones and aerospace systems.

2. Classifications of Nature-Inspired Algorithms

Classifying nature-inspired algorithms is a challenging problem because it is difficult to introduce a classification that contains all of the developed algorithms, and there have been many attempts at it. Yang has introduced a simple classification of nature-inspired optimization algorithms in order to review their challenges and applications. Although Yang did not intend to review all nature-inspired optimization methods, he divided them into two categories: procedure-based and equation-based. Based on Yang's classification, algorithms such as evolutionary strategy (ES), the genetic algorithm (GA), and ant colony optimization (ACO) belong to the procedure-based category, while others, such as particle swarm optimization (PSO), the firefly algorithm (FA), the bat algorithm (BA), cuckoo search (CS), and the flower pollination algorithm (FPA), belong to the equation-based category [7]. Figure 2 illustrates Yang's classification.
Muller has also worked on stochastic optimization methods and their applications. His work consists of a survey on bio-inspired optimization algorithms with a focus on evolutionary algorithms and their applications in aeronautics and micro- and nanotechnology. Based on Muller, optimization algorithms can be divided into gradient and non-gradient (direct or zero-order) methods. A non-gradient method only requires information from the cost function (objective function), while a gradient method, in addition to the information from the objective function, uses the gradient or higher derivative information of the cost function. Based on Muller, direct methods can be divided into stochastic and deterministic methods. Stochastic methods use random numbers in the optimization process, unlike deterministic algorithms. Indirect or gradient methods can also be classified into two groups based on the order of the cost function derivatives they use; while first-order methods use the objective function and its gradient, the second-order methods use the Hessian matrix as well. Among these methods, Muller has studied evolutionary algorithms from the stochastic category, including evolutionary programming, genetic algorithm, evolutionary strategies, and differential evolution. Figure 3 illustrates the classification used by Muller [8].
Nature-inspired algorithms have much in common with the well-known field of meta-heuristic algorithms. Although some researchers consider nature-inspired algorithms to be a sub-category of meta-heuristic algorithms, the two groups differ in a few algorithms, such as math-inspired algorithms, which some consider non-nature-inspired. Much research has been conducted on meta-heuristic algorithms, including by Osman [9], Gendreau et al. [10], Fister [2], and others [11]. Abdel-basset et al. have studied different reviews of meta-heuristic algorithms. Based on their work, there are several popular classifications of meta-heuristic algorithms. One of them distinguishes trajectory-based from population-based algorithms: a trajectory-based algorithm starts from a single solution and, in each iteration, replaces the current best solution with a new, better one, while a population-based algorithm starts with a random population of solutions and refines this population through each search iteration. Some researchers have also classified meta-heuristic algorithms based on their usage of memory.
Another popular classification is based on being nature-inspired or not. Nature-inspired algorithms are divided into swarm-intelligence-based, non-swarm, and physics/chemistry-based algorithms in these classifications. Ruiz-Vanoye et al. have also classified meta-heuristic algorithms based on animal groups. Based on them, meta-heuristic algorithms can be divided into swarm, school, flock, and herd algorithms [11]. Abdel-basset et al. have also introduced a new classification of meta-heuristic algorithms based on their inspiration type into metaphor-based and non-metaphor-based algorithms. Unlike non-metaphor-based algorithms, metaphor-based algorithms simulate a natural phenomenon, human or animal behavior, and even mathematics. Abdel-basset et al. have divided metaphor-based algorithms into biology-based (evolutionary, swarm intelligence, and artificial immune systems), physics-based, swarm-based, social-based, music-based, chemistry-based, sport-based, and math-based. They did not introduce any sub-category for the non-metaphor-based category, but it includes algorithms such as Iterated Local Search (ILS), Variable Neighborhood Search (VNS), Greedy Randomized Adaptive Search Procedure (GRASP), and Partial Optimization Meta-Heuristic Under Special Intensification Condition (POPMUSIC). A similar classification is used by Espinosa. He classified nature-inspired methods as biological, chemical, or physical [12]. Figure 4 illustrates the classification of meta-heuristic algorithms studied by Abdel-basset et al.
This paper introduces a classification based on the inspiration of algorithms to study nature-based algorithms. This classification is an extended version of the classification of Abdel-basset et al. There are nine main categories: bio-based, ecosystem-based, social-based, physics-based, chemistry-based, music-based, sport-based, hybrid, and math-based. Math-based algorithms are considered nature-inspired algorithms here, although they are not necessarily nature-based. The bio-based category is divided into 10 sub-categories: evolution-based, organ-based, behavior-based, disease-based, microorganism-based, insect-based, avian-based, aquatic-based, terrestrial-animal-based, and plant-based. The microorganism-based category includes algorithms inspired by microorganisms such as bacteria and protists; although viruses are not microorganisms, algorithms inspired by viruses are studied in this group, so this category includes nano-organisms as well. Algorithms based on evolution theories, such as evolutionary algorithms and genetic algorithms, are studied in the evolution-based category. Some algorithms are based on the function of internal or external organs, such as the heart, immune system, and coronary circulation system; we have classified these as organ-based algorithms. A significant number of algorithms are inspired by the behaviors of animals, humans, swarms, and herds, and these are studied in the behavior-based category. Some algorithms are developed based on diseases or vaccines, which are classified as disease-based algorithms. Figure 5 illustrates the classification of nature-inspired algorithms.
The most popular category is bio-based, which contains 66% of the total algorithms. Second place belongs to the physics-based algorithms at 15.6%. Figure 6 illustrates the share of each category out of the total number (about 360) of algorithms studied in this paper.
Figure 7 illustrates the divisions of each category from the total number of nature-inspired algorithms, including subcategories of the bio-based category.
Another factor that indicates the popularity of an algorithm is the citation count of the papers in each category. Although the number of citations is not an exact measure, and more citations do not necessarily mean more applications, it does indicate the amount of development and research based on each algorithm or category, and thus approximately reflects the applicability of the algorithms. For each category, we measured the number of citations of the first publication that introduced each algorithm, based on Google Scholar metrics. Figure 8 illustrates a comparison of the total citations for each category.
Based on Figure 8, bio-based and physics-based algorithms are also the most popular and applicable algorithms considering the total citations, but music-based algorithms are more popular than math-based or ecosystem-based algorithms, while there are fewer music-based algorithms.
The following sections study different nature-inspired algorithms based on the above classification. There are about 360 different algorithms classified into the above-mentioned categories. In each category, a number of the most popular algorithms are studied in detail, and the rest are just briefly mentioned. The Git repository of this research contains the sample codes of a group of algorithms in MATLAB, Python, or C/C#/C++. In the absence of sample codes, the pseudocode or flowchart of the algorithms are included (https://github.com/shahind/Nature-Inspired-Algorithms, accessed on 20 June 2023).

2.1. Bio-Based

Bio-based algorithms are generally inspired by living species such as animals, humans, insects, etc. While the majority of these algorithms are inspired by animals and insects, some are developed based on processes, organs, or behaviors of animals in general. About 80% of the bio-based algorithms studied in this research (about 230 algorithms) are based on living species. Among these, the most popular sub-category is terrestrial-animal-based algorithms, with about 19.7% of all algorithms; second place belongs to insect-based algorithms, with about 18% of bio-based algorithms; and the next most popular sub-categories are the aquatic-based, avian-based, and plant-based algorithms. Figure 9 illustrates the distribution of each sub-category within the total number of bio-based algorithms.
A comparison of the citations of each bio-based sub-category reveals further results. Based on Figure 10, evolutionary algorithms are 3 times more popular than insect-based algorithms and 13 times more popular than terrestrial-animal-based algorithms, while they are about 9 times fewer in number. Of course, the citation count is not an accurate measure of an algorithm's popularity, because some algorithms have been introduced only recently. Nevertheless, almost all sub-categories contain algorithms from the 1900s to recent times.
Almost 100 different species of animals, insects, plants, and micro- and nano-organisms can be found among the bio-based algorithms. Figure 11 illustrates the different species and organisms by which the algorithms studied in this paper are inspired.

2.1.1. Evolution-Based

There are fewer than 10 main evolution-based algorithms among the bio-based algorithms, but as mentioned before, they are the most popular and most used algorithms compared to other categories. Among the evolutionary algorithms, the genetic algorithm is the most applied based on citation count; the next places belong to differential evolution and evolutionary programming. Much like other algorithms, there are different versions of each evolution-based algorithm (such as improved and hybrid versions), but in this section, we focus on the original versions. Figure 12 illustrates the most popular evolutionary algorithms.

Genetic Algorithm (GA)

The genetic algorithm is an evolutionary algorithm, introduced by Holland, based on natural selection [13]. The algorithm starts from a set of random solutions that form an initial population. The population size can be chosen arbitrarily based on the type of problem. In each successive iteration, a part of the existing population is selected to breed a new generation. These individual solutions are selected based on the cost function or fitness function, so fitter solutions are more likely to be selected. The selection method can vary; some selection functions, for example, are stochastic. The next generation of solutions is generated from the selected individuals. For each new solution, a pair of parent solutions is selected to produce a child. This child, or new solution, shares many of the characteristics of its parents through the Crossover and Mutation operators [14]. In crossover, some parts of the selected chromosomes, or parents, are exchanged through different mechanisms, such as one-point, two-point, and uniform crossover. In mutation, some parts of the chromosomes are changed randomly in order to escape from local optima [11]. This process results in a next generation of chromosomes that differ from their parents and previous generations. In general, the fitness will increase over generations because only the best parents are selected to breed the next generation. This process is repeated until the fitness function satisfies the problem's requirements or a termination criterion is reached [14]. Figure 13 illustrates the flowchart of the genetic algorithm.
Multiple types of crossover, mutation, and even selection operators have been used by researchers. The most used crossover functions are single-point, two-point (or k-point), and uniform crossover; among mutation functions, we can mention uniform, Gaussian, bit, and flip-bit mutation. It is also possible to use various selection functions in GA; common ones are tournament, uniform, and rank selection [16,17,18]. Figure 14 illustrates some common genetic operators.
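These operators can be combined into a working optimizer. The following is a minimal real-valued GA sketch in Python with tournament selection, one-point crossover, and Gaussian mutation; the function names, parameter defaults, and the sphere-function example are illustrative, not taken from the reviewed implementations.

```python
import random

def genetic_algorithm(cost, dim, bounds, pop_size=30, generations=100,
                      crossover_rate=0.9, mutation_rate=0.1):
    """Minimal real-valued GA: tournament selection, one-point crossover,
    Gaussian mutation. All defaults are illustrative."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]

    def tournament(k=3):
        # pick k random individuals, keep the fittest (lowest cost)
        return min(random.sample(pop, k), key=cost)

    for _ in range(generations):
        new_pop = []
        while len(new_pop) < pop_size:
            p1, p2 = tournament(), tournament()
            if random.random() < crossover_rate and dim > 1:
                point = random.randint(1, dim - 1)       # one-point crossover
                child = p1[:point] + p2[point:]
            else:
                child = p1[:]
            for i in range(dim):                          # Gaussian mutation
                if random.random() < mutation_rate:
                    child[i] = min(hi, max(lo,
                        child[i] + random.gauss(0, 0.1 * (hi - lo))))
            new_pop.append(child)
        pop = new_pop
    return min(pop, key=cost)

# Example: minimize the sphere function on [-5, 5]^2
best = genetic_algorithm(lambda x: sum(v * v for v in x), dim=2, bounds=(-5, 5))
```

Swapping in the other operator variants mentioned above (k-point or uniform crossover, rank or uniform selection) only changes the marked lines.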
The genetic algorithm is one of the most popular nature-inspired algorithms; about 70% of the total citations of the evolution-based algorithms belong to it. Numerous improved versions of the genetic algorithm have been introduced by researchers. Genetic algorithms have been used in a wide range of applications, including path planning, image processing, and optimal control, and they are effective at finding the global optimum for many other problems [14]. GA has been applied in numerous fields, including engineering, artificial intelligence and computer science, finance, the social sciences, multimedia, and networking [18]. Numerous academics from a variety of disciplines are concentrating on the development of feasible strategies based on GA, and various businesses are also trying to develop commercial products with the aid of GA [19].

Differential Evolution (DE)

Differential evolution (DE) is a parallel, direct, population-based search method based on an evolutionary process. DE was introduced by Storn in 1996 [20]. DE is a simple, direct, and stochastic algorithm that uses few control variables [21,22]. Since DE is a stochastic algorithm, the initial population is chosen randomly; it should be chosen in a way that uniformly covers the entire parameter space. Put simply, DE starts with random solutions and updates each solution based on a weighted difference of two other solutions. For each solution, DE uses an n-dimensional parameter vector, such as x, where n is the dimensionality of the optimization problem. In each iteration, a donor vector is generated for each member of the population based on a weighted difference of two other solutions. This vector is then used to generate a candidate solution through a crossover function. If the fitness of the generated candidate is better than that of the original member, it replaces the original. This continues until the best solution is found [21,22]. The process of generating a donor vector based on a weighted difference of two other solutions is usually known as mutation. Some of the most widely used mutation operators are: DE/rand/1, DE/best/1, DE/rand/2, DE/best/2, DE/current-to-best/1, and DE/current-to-rand/1. Table 1 shows the different mutation operators, in which v is the donor vector and x refers to candidate solutions. The crossover function randomly combines the donor vector and the original candidate. The crossover probability CR, the scale factor F, and the population size are the only control parameters of DE. These few control parameters, along with its simplicity, are the main advantages of DE [21,22].
Population size determines the ability of the algorithm to explore the search space. In problems with a large number of dimensions, the population size should be large enough to provide the capability of searching the multi-dimensional design space. Small values of F lead to small mutation step sizes and result in longer convergence times, while large values of F reduce the exploration time but can lead to overshooting good optima. The crossover probability CR controls the number of changed elements; larger values of CR result in more variation in the new population [21].
Notably, although DE is an evolutionary algorithm, it lacks a real natural paradigm and is not an exact replica of natural evolution, unlike other evolutionary algorithms. DE has demonstrated outstanding performance in a wide range of optimization problems from diverse scientific domains, including constrained and multi-objective optimization problems [23]. It belongs to the stochastic population-based evolutionary group and, like other evolutionary algorithms, uses a population of candidate solutions and stochastic mutation, crossover, and selection operators to move the population toward superior solutions in the design space. The key advantage of standard DE is that it requires the adjustment of only one control parameter, although it has two other control parameters. The performance of DE in a given optimization problem is highly dependent on both the trial vector generation scheme and the chosen control parameters [21]. As Table 1 shows, although the main version of DE uses three candidate solutions, different mutation functions may use a larger population, and the main process can also be carried out on a larger population by randomly dividing it into groups of three (or more, depending on the mutation function) individuals.
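To make the loop above concrete, the following is a minimal Python sketch of DE using the DE/rand/1 mutation operator and binomial crossover; the parameter defaults and the sphere-function usage are illustrative, not taken from this paper's benchmarks.

```python
import random

def differential_evolution(cost, dim, bounds, pop_size=20, F=0.8, CR=0.9,
                           max_iter=200):
    """Minimal DE sketch with DE/rand/1 mutation and binomial crossover.
    F is the scale factor and CR the crossover probability, following the
    standard DE notation; the defaults are illustrative."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fitness = [cost(x) for x in pop]

    for _ in range(max_iter):
        for i in range(pop_size):
            # DE/rand/1 donor: v = x_r1 + F * (x_r2 - x_r3), r1, r2, r3 != i
            r1, r2, r3 = random.sample([j for j in range(pop_size) if j != i], 3)
            donor = [pop[r1][d] + F * (pop[r2][d] - pop[r3][d])
                     for d in range(dim)]
            # binomial crossover: each component comes from the donor with prob CR
            j_rand = random.randrange(dim)   # guarantee one donor component
            trial = [donor[d] if (random.random() < CR or d == j_rand)
                     else pop[i][d] for d in range(dim)]
            trial = [min(hi, max(lo, v)) for v in trial]
            f = cost(trial)
            if f < fitness[i]:               # greedy one-to-one selection
                pop[i], fitness[i] = trial, f
    best = min(range(pop_size), key=lambda i: fitness[i])
    return pop[best], fitness[best]
```

Replacing the donor line with another row of Table 1 (e.g., DE/best/1) gives the corresponding variant without touching the rest of the loop.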

Evolutionary Programming (EP)

Evolutionary programming (EP) is an evolution-based algorithm developed by Fogel et al. in the 1960s [24]. Like other popular evolution-based algorithms, evolutionary programming is useful for many types of problems, specifically when other algorithms are not applicable [25]. It has been shown that EP is applicable to pattern discovery, system identification, and control [26]. In recent decades, the applications of EP in combinatorial and parameter optimization problems have been increasing [27]. EP has a diverse field of applications, including path planning, automatic control, general function optimization, design and training of neural networks, fuzzy systems, hierarchical systems, game playing and game strategies, and the evolution of art [27]. EP takes a similar approach to GA and other evolution-based algorithms, but its main emphasis is on the mutation process [25,28]. In the mutation process of the original version of EP, each individual's standard deviation is computed as the square root of a linear transformation of its fitness value; for example, the mutation m(x_i) = x_i′ of x_i is calculated as follows:
x_i′ = x_i + σ_i · z,   σ_i = √(β_i · F(x_i) + γ_i)

where z is a standard normal random number, and the fitness value F(x_i) is the objective function scaled to positive values using a function G.
Selecting appropriate values for the parameters β_i and γ_i can be challenging for high-dimensional objective functions. Some research has addressed this problem, including meta-EP, which, like evolution strategies, self-adapts the required variables [29].
Unlike GA, EP does not use crossover; however, it may combine candidate solutions through other methods. It is also based on a continuous representation of candidates instead of GA's binary representation. Figure 15 compares the main features of the standard forms of GA and EP.
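The mutation rule above can be turned into a working loop. The sketch below is a simplified EP implementation in Python; the fitness scaling (simple clamping to non-negative values in place of a function G), the elitist truncation selection (classic EP uses a stochastic tournament), and the β, γ defaults are all illustrative assumptions.

```python
import math
import random

def evolutionary_programming(cost, dim, bounds, pop_size=30, generations=200,
                             beta=0.05, gamma=0.01):
    """Simplified EP: each parent produces one offspring by Gaussian
    mutation whose standard deviation follows
        sigma_i = sqrt(beta * F(x_i) + gamma)
    so mutations shrink as solutions improve. No crossover is used."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        offspring = []
        for x in pop:
            f = max(cost(x), 0.0)              # fitness scaled to non-negative
            sigma = math.sqrt(beta * f + gamma)
            child = [min(hi, max(lo, xi + sigma * random.gauss(0, 1)))
                     for xi in x]
            offspring.append(child)
        # keep the best pop_size individuals of parents and offspring
        pop = sorted(pop + offspring, key=cost)[:pop_size]
    return pop[0]
```

Note how the absence of a crossover step, and the continuous (real-vector) representation, directly reflect the GA/EP differences listed above.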

Other Algorithms

There are other evolution-based algorithms, such as evolutionary strategies (ES), which is a popular algorithm developed by Rechenberg in 1973 that uses mutation, recombination, and selection applied to a population of individuals [31]. Gene expression programming (GEP) is another well-known genotype/phenotype genetic algorithm introduced by Ferreira in 2001 which employs character linear chromosomes made of genes structurally organized in a head and tail [32]. The Memetic algorithm (MA), which is considered an extension of GA, was first introduced by Moscato in 1989 [33]. Grammatical evolution (GE) is also a genetic algorithm, introduced by Ryan et al., which uses a variable-length linear genome to determine how a Backus–Naur form grammar definition is mapped to an expression or program of arbitrary complexity [34].

2.1.2. Organ-Based

Some nature-inspired algorithms are based on the internal organs of humans or the bodies of other animals. There are different algorithms inspired by the immune system, kidney, heart, neural system, coronary circulation system, and so on. Artificial neural networks (ANNs) are probably the most popular organ-based algorithm. ANNs are a class of machine-learning algorithms trained to learn the relation between input and output data. This class of learning algorithms has been so popular in recent decades that many versions have been developed, including conventional neural networks, deep neural networks, and Bayesian neural networks. The study of ANNs and machine-learning algorithms is the subject of other work; learning algorithms are excluded from this paper. Figure 16 presents some of the popular organ-based algorithms.

Artificial Immune Systems (AIS)

The immune system is a highly evolved biological system whose purpose is to recognize and remove invading pollutants. To accomplish this, it must be able to differentiate between foreign molecules (or antigens) and body molecules. A strong capacity for learning, memory, and pattern recognition is necessary for the successful completion of this task. To achieve this, the immune system employs genetic mechanisms for change that are similar to those employed in biological evolution. However, in the immune system, these mechanisms can operate on a timescale as short as a few days, making it a perfect candidate for the study and modeling of adaptive processes. It is inevitable that foreign organisms will attempt to infect a highly evolved organism in order to use its rich chemical environment. To combat this, the immune system of vertebrates has evolved to recognize and eliminate foreign material. This is accomplished in part by antibody molecules, which label foreign material for removal by lymphocytes, phagocytic cells, and the complement system. The sequence of amino acids forming the paratope controls its form, and consequently, the set of molecules with which it can react. If the shape of a foreign antigen molecule matches that of the paratope, the antibody will connect to the antigen, resulting in its eventual destruction. Epitopes are defined as areas on any molecule (antigen or antibody) to which paratopes can connect (see Figure 17) [35].
B-lymphocytes are the cells responsible for the production of antibodies. On the surface of each cell are around 10^5 antibodies with identical paratopes, which serve as sensors to detect the presence of an epitope to which this antibody type can respond. When the correct epitope is identified, the lymphocyte is driven to create more lymphocytes (clones) and to secrete free antibodies.
Clonal selection is the process of amplifying only those cells that produce a desirable antibody type. The diversity of the immune system is maintained by the daily replacement of around five percent of the B-lymphocytes with newly produced lymphocytes in the bone marrow. As cells are produced in the bone marrow, they will produce various antibodies. In addition to the generation of new cells in the bone marrow, the reproduction of B-lymphocytes stimulated by the recognition of an epitope generates additional diversity. During this process, it is believed that the mutation rate for antibody genes is substantially greater than that of non-antibody genes [35].
AIS implements genetic operators (such as inversion, point mutation, and crossover) to the epitope and paratope strings to mimic the reproduction of real lymphocytes. Inversion is simulated by inverting a segment of the string randomly. Point mutation is simulated by changing a bit in a string randomly. Crossover is simulated by interchanging two randomly selected pieces of two antibody types to create two entirely new antibodies [35]. In this terminology, an antibody cell represents the candidate solutions and the ability of the cell to recognize the input pattern, or alternatively, the affinity function represents the cost function.
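The three string operators described here can be sketched in Python as follows, assuming a binary antibody representation; the function names are illustrative.

```python
import random

def point_mutation(antibody):
    """Flip one randomly chosen bit of a binary antibody string."""
    i = random.randrange(len(antibody))
    flipped = '1' if antibody[i] == '0' else '0'
    return antibody[:i] + flipped + antibody[i + 1:]

def inversion(antibody):
    """Invert (reverse) a randomly chosen segment of the string."""
    i, j = sorted(random.sample(range(len(antibody) + 1), 2))
    return antibody[:i] + antibody[i:j][::-1] + antibody[j:]

def crossover(ab1, ab2):
    """Interchange randomly selected pieces of two antibody strings,
    creating two entirely new antibodies."""
    point = random.randrange(1, len(ab1))
    return ab1[:point] + ab2[point:], ab2[:point] + ab1[point:]
```

In the AIS terminology above, each such string is a candidate solution, and the affinity function scoring how well it matches an epitope plays the role of the cost function.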
AIS has a variety of applications in different fields of engineering and science. Research shows that it has been used in computer security, antivirus software, anomaly detection, fault diagnosis, pattern recognition, and data analysis. It is one of the widely used algorithms in optimization [36]. It can also be parallelized for faster computation [37].

Clonal Selection Algorithm

The immune system finds the response features to an antigen stimulus event via the clonal selection algorithm (CSA). Only cells that recognize the antigens are allowed to multiply. The cells are matured based on their affinity for selective antigens. Complicated learning algorithms, such as multimodal optimizations and pattern recognition, were solved using a clonal-selection-based computational system developed by Castro et al. [38].
Antibodies (Ab) are produced in the B lymphocytes in the bone marrow when an animal is exposed to antigens. The cell creates a semi-specific antibody to a certain antigen. Terminal plasma cells are secreted as a result of the proliferation and maturation of B cells after an antigen binds to the antibodies. Clones of the cells are generated through mitosis. Both plasma cells and large B lymphocytes secrete Ab, but plasma cells do so at a higher rate. T cells are critical in immune responses and regulate B cells. Lymphocytes can develop into older B memory cells in addition to differentiating into plasma cells. In the event of a second antigen exposure, memory cells, which circulate through the body, transform into large lymphocytes. These newly differentiated lymphocytes produce pre-selected, high-affinity antibodies for the initial antigen responsible for the primary response [38]. Figure 18 depicts the clonal selection principle.
There are three main features of clonal selection theory that CSA is based upon. The first is a diverse antibody repertoire formed via accelerated somatic mutations resulting from randomly generated genetic changes. The second is the retention and restriction of a pattern to a cloned cell. The third is the multiplication and maturation of cells upon contact with antigens [38]. The aspects of immunity modeled in CSA are: selection and cloning of the most stimulated cells; removal of non-stimulated cells; maintenance of the memory cells that have left the repertoire; generation of genetic diversity; hypermutation based on the affinity of a cell; and the affinity maturation and re-selection of high-affinity clones.
CSA runs for a fixed number of generations, the maximum of which is determined by the user. In each generation, a pool of candidate solutions (P) is formed, which is the sum of the remaining existing population (Pr) and a group of memory cells (M) (P = Pr + M). From P, the n highest-affinity individuals are selected to form Pn. This population is then reproduced, creating a temporary group of clones (C); the number of clones of each individual is a function of its antigen affinity. Next, the clones are hypermutated at a rate inversely proportional to their antibody–antigen affinity, from which a new, matured antibody group (C*) emerges. A new memory set M is then composed of cells from C*. Improved cells in C* can also replace cells of P. Lastly, d novel antibodies replace existing antibodies as a form of diversity introduction; the low-affinity cells are more likely to be replaced [38]. Figure 19 illustrates the abstract flowchart of CSA.
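The generational loop described above can be sketched in a few lines of Python. This is an illustrative simplification under assumed choices (a real-valued encoding and Gaussian hypermutation whose rate grows with rank, i.e., falls with affinity); all names and parameter values are ours, not from [38]:

```python
import random

def clonalg(cost, dim, bounds, pop_size=20, n_select=10, n_clones=5,
            n_replace=4, max_gen=100, seed=0):
    """Sketch of a CLONALG-style loop: select, clone, hypermutate, replace."""
    rng = random.Random(seed)
    lo, hi = bounds
    new = lambda: [rng.uniform(lo, hi) for _ in range(dim)]
    pop = [new() for _ in range(pop_size)]
    for _ in range(max_gen):
        pop.sort(key=cost)                       # high affinity = low cost
        clones = []
        for rank, ab in enumerate(pop[:n_select]):
            # hypermutation rate grows with rank, i.e., falls with affinity
            rate = 0.1 * (rank + 1) / n_select
            for _ in range(n_clones):
                clones.append([min(hi, max(lo, x + rng.gauss(0, rate * (hi - lo))))
                               for x in ab])
        pool = sorted(pop + clones, key=cost)
        # keep the best cells, then inject d random antibodies for diversity
        pop = pool[:pop_size - n_replace] + [new() for _ in range(n_replace)]
    return min(pop, key=cost)

best = clonalg(lambda x: sum(v * v for v in x), dim=2, bounds=(-5.0, 5.0))
```

On this sphere function, the elitist pool plus rank-dependent mutation drives the best antibody close to the origin within the given budget.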
CSA is capable of solving multimodal and combinatorial optimization problems, as well as maintaining effective memory and learning. While GA oftentimes converges to a single best candidate solution, CSA obtains a diverse set of optimal solutions. CSA and GA differ in the sequence, vocabulary, and inspiration of their evolutionary search process, yet they still have similar evaluation and coding processes and exhibit comparable tractability with respect to computational cost [38].

Other Algorithms

Other popular organ-based algorithms exist, such as artificial immune network (AIN) algorithms, which are similar to AIS and CSA. AIN is capable of finding and maintaining diverse optimal solutions, combining local and global search, automatically evaluating the population size, and offering a defined convergence criterion [39]. Jaddi et al. have developed a human kidney-inspired algorithm (KA) for optimization problems. KA is a population-based algorithm based on the glomerular filtration process in the human kidney. The performance of KA has been evaluated on eight test functions against well-known algorithms such as GA and PSO. KA outperformed the other algorithms on seven of the eight tests and found the global optimum with fewer function evaluations on six of the eight test functions. It has been statistically shown that KA is able to produce good-quality results [40].
Hatamlou has also come up with the idea of developing an optimization algorithm based on the heart. The heart algorithm (HA) is a stochastic optimization algorithm based on the function of the human heart and circulatory system. HA begins with an objective function which is computed for a random group of candidate solutions. The heart is chosen as the optimal candidate solution, and the other solutions form the blood molecules. The blood molecules are then motivated by the heart to search for the optimal solution. It has been shown that HA has great potential in data clustering [41]. Another similar algorithm was developed by Kaveh et al., based on the coronary circulation system. The artificial coronary circulation system (ACCS) is a stochastic swarm-intelligence optimization technique which simulates the expansion of veins in the heart. The coronary growth factor (CGF) evaluates a population of candidate solutions composed of capillaries. The main artery is chosen as the best candidate solution, while the other solutions are designated as capillaries and act as the searchers of the search space. The heart uses heart memory to decide how candidates (if any) move relative to the main vein to find the optimal solution. The ACCS has been used to solve some benchmark functions, and the results show the potential and capability of ACCS [42].
Gharebaghi et al. have also developed an optimization method based on neuronal communication (NC). NC is based on the data exchanged between neurons in the brain. NC is a desirable method since the cost of calculations is low due to the few neurons that are required to execute functions compared to other meta-heuristic methods. Additionally, NC does not require a continuous domain and gradient calculation to function. The results of the examination of NC using various benchmark functions show that, in comparison to a majority of methods, the average number of iterations for 50 independent runs of functions has been decreased by using NC [43].
Another method for global optimization, called the sperm motility algorithm (SMA), is proposed by Raouf et al. based on the human fertilization process. The search for the ovum is initiated by the random diffusion of sperm inside the female reproductive tract. The Stokes equations were selected as the basis of the mathematical model, considering the typical flow-dominated movement of sperm. The ovum secretes a chemoattractant that acts as a guidance mechanism to ensure the sperm approaches the ovum. By mimicking this fertilization process, Raouf et al. arrived at a search method for optimization. The SMA has been tested on several standard benchmark functions and engineering problems, and the results verify its efficiency [44].
Enciso et al. have also developed an algorithm based on allostasis. Allostasis explains the process that internal organs follow to reach a steady state when presented with an unbalanced condition based on the internal state of the organs. Each individual is enhanced using the biological foundation of the allostasis mechanism. Numerical operations in allostatic optimization (AO) mimic the internal states of the organs. The results indicate the satisfactory performance of AO in the search for an optimum when compared to other well-documented optimization algorithms [45].

2.1.3. Behavior-Based

We have classified algorithms whose main function is related to the behavior of animals or insects in the behavior-based category. Behavior-based algorithms focus on animals’ and insects’ tactics to survive and communicate with each other or other species. These algorithms may be inspired by migration, hunting, competitions, and social behaviors in animals. The most popular algorithms in this category are biogeography-based optimization (BBO), symbiotic organisms search (SOS), and group search optimizer (GSO). In the coming sections, we will study two popular behavior-based algorithms, BBO and SOS, in detail. Figure 20 illustrates the most popular behavior-based algorithms.

Biogeography-Based Optimization

The study of the geographical spread of biological organisms is called biogeography. In the 1960s, the governing mathematical equations for organism distribution were developed. The idea of using biogeography in optimization was first introduced by Simon. BBO has some features in common with other bio-based optimization methods, such as GA and PSO [46].
The rise of new species, species extinction, and the migration of species from one island to another are described with biogeography mathematical models. The term “island” denotes a habitat geographically isolated from other habitats. The number of species in a habitat directly influences the emigration rate, μ, and the immigration rate, λ. The emigration rate grows as the habitat fills and species expand; as such, species are likely to seek another suitable habitat. When the number of species reaches the limit a habitat can sustain, the maximum emigration rate is achieved. It becomes more difficult for a species to survive as the habitat becomes more crowded, and the immigration rate slows [46].
The biogeography of a species can be modeled in a simple way. The probability of the habitat containing exactly S species, Ps, changes in each time step. The process of finding the optimal solution starts with generating a set of habitats, each habitat corresponding to a potential solution. Then, the fitness function, called the habitat suitability index (HSI), is calculated for each solution. A high value of the HSI represents a habitat with more species. Afterward, the habitats are modified based on the immigration and emigration rates. A habitat’s HSI can change suddenly due to apparently random events, modeled as mutation of suitability index variables (SIVs). Population diversity tends to increase under this mutation scheme, as the rate of mutation is inversely proportional to the probability of a solution; without mutation-mediated diversity, solutions with high probability would dominate the population. Solutions with a low HSI are improbable and thus more likely to mutate, giving them a greater chance of improving. Very high-HSI solutions are likewise improbable, so they too receive a higher mutation rate and a chance to improve further. Figure 21 illustrates the flowchart of the biogeography-based algorithm.
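The migration and mutation steps can be sketched in Python using the common linear rate model, in which immigration λ and emigration μ vary linearly with a habitat's fitness rank. The encoding, parameter values, and elitism step below are our illustrative assumptions, not Simon's original formulation:

```python
import random

def bbo(cost, dim, bounds, pop_size=20, max_gen=100, p_mut=0.05, seed=1):
    """Sketch of BBO migration/mutation with linear rank-based rates."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(max_gen):
        pop.sort(key=cost)                          # high-HSI habitats first
        n = pop_size
        mu = [(n - i) / n for i in range(n)]        # emigration: best emigrates most
        lam = [1.0 - m for m in mu]                 # immigration: worst immigrates most
        new_pop = [list(h) for h in pop]
        for i in range(n):
            for d in range(dim):
                if rng.random() < lam[i]:
                    # immigrate this SIV from a habitat chosen by emigration rate
                    j = rng.choices(range(n), weights=mu)[0]
                    new_pop[i][d] = pop[j][d]
                if rng.random() < p_mut:
                    # sudden random event: mutate the SIV
                    new_pop[i][d] = rng.uniform(lo, hi)
        new_pop[0] = pop[0]                         # simple elitism
        pop = new_pop
    return min(pop, key=cost)

best = bbo(lambda x: sum(v * v for v in x), dim=2, bounds=(-5.0, 5.0))
```

Note that migration only recombines SIVs already present in the population; the mutation operator is what injects genuinely new values into the search.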
BBO can be applied to high-dimensional problems with numerous local optima, much in the same way GAs and PSO are applied. BBO has been applied to benchmark functions and to a sensor selection problem, and the results showed that it performs similarly to other population-based methods. Due to the no-free-lunch theorem, a general superiority of BBO over other methods cannot be established [46]. Like other algorithms, it is applicable to nonlinear, discontinuous, and logical problems, and it has been shown to provide equal or better performance than more complex algorithms while retaining a simpler structure and code.

Symbiotic Organisms Search

Symbiotic organism search (SOS) is a robust stochastic optimization algorithm developed by Cheng et al., which simulates the symbiotic interaction strategies adopted by organisms to survive and propagate in the ecosystem [47]. Symbiosis is derived from the Greek word for living together and describes the relationship between two different species. Symbiotic relationships can be classified as either obligate or facultative. Obligate symbiotic relations describe the necessary relationship between two species for survival, while facultative relationships describe non-necessary but mutually beneficial cohabitation. The symbiotic relationships—called commensalism, parasitism, and mutualism—are the most common types found in nature. Commensalism describes a symbiotic relationship where one species directly benefits from another, but the other is neutrally affected. Parasitism describes a symbiotic relationship where one party benefits and another is actively and negatively affected. A mutually beneficial symbiotic relationship where two species mutually benefit is called mutualism. Figure 22 illustrates various symbiotic relations in an ecosystem [47].
The SOS aims to find a globally optimal solution via iterative processes involving populations of candidate solutions in promising search spaces, similar to other well-documented population-based algorithms. In SOS, the ecosystem is designated as the initial population. To populate the search space, random organisms are generated from the initial ecosystem. Each organism has an associated fitness value that indicates how well adapted it is to the desired objective. In SOS, biological interaction is mimicked to generate new solutions, in contrast to most other meta-heuristic algorithms, where new solutions are found by iteratively applying operators to existing solutions. The three aforementioned symbiotic relationships (mutualism, commensalism, and parasitism) serve as the phases of the model. Each organism interacts with another randomly chosen organism through all phases [47].
The relationship between bees and flowers is an example of mutualism in nature. Bees collect nectar to convert into honey while simultaneously pollinating the flowers, benefiting both bees and flowers alike. In SOS, the mathematical models used to describe new solutions for Xi and Xj based on their mutualistic symbiotic pairing are shown below:
Xi,new = Xi + rand(0,1) × (Xbest − Mutual_vector × BF1)
Xj,new = Xj + rand(0,1) × (Xbest − Mutual_vector × BF2)
Mutual_vector = (Xi + Xj)/2
Some mutualism relationships may benefit one organism more than another. Such an unequal beneficial relationship is modeled using benefit factors (BF1 and BF2), randomly set to 1 or 2, which represent how beneficial a mutualistic relationship is to each organism. The Mutual_vector is a representation of the relationship between the organisms Xi and Xj. The highest value of adaptation is represented by Xbest. If an organism’s fitness is greater after the interaction, the organism is updated [47].
Remora fish and sharks are an example of commensalism in nature. While the shark receives very little benefit from the remora, a remora fish can consume leftover scraps produced by the shark. The commensal symbiosis model is used to mathematically calculate the candidate solution of Xi between organism Xi and Xj, the equation for which is shown below:
Xi,new = Xi + rand(−1,1) × (Xbest − Xj)
The term (Xbest − Xj) describes the survival advantage provided by Xj to Xi. Xbest represents the organism with the highest degree of survivability [47].
The plasmodium parasite, which mosquitoes pass between human hosts, is an example of parasitism in nature. The parasites use human hosts to survive, but the human may develop malaria and die. In the SOS, an organism Xi is duplicated, modified randomly, and tagged the Parasite_vector in the search space. An organism Xj is randomly chosen and designated as a host for the parasite. The Parasite_vector will replace organism Xj in the ecosystem if the parasite has a better fitness value. The Parasite_vector will no longer exist in the ecosystem if Xj has a higher fitness value [47]. Figure 23 illustrates the flowchart of the SOS.
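The three phases described above can be combined into a compact Python sketch. The clipping to bounds, greedy acceptance, and parameter choices below are simplifying assumptions rather than the exact procedure of [47]:

```python
import random

def sos(cost, dim, bounds, eco_size=20, max_iter=100, seed=2):
    """Sketch of SOS: mutualism, commensalism, and parasitism phases."""
    rng = random.Random(seed)
    lo, hi = bounds
    clip = lambda x: min(hi, max(lo, x))
    eco = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(eco_size)]
    fit = [cost(o) for o in eco]
    other = lambda i: rng.choice([k for k in range(eco_size) if k != i])
    for _ in range(max_iter):
        for i in range(eco_size):
            best = eco[fit.index(min(fit))]
            # mutualism: Xi and Xj both move toward Xbest via the mutual vector
            j = other(i)
            bf1, bf2 = rng.choice([1, 2]), rng.choice([1, 2])
            mutual = [(a + b) / 2 for a, b in zip(eco[i], eco[j])]
            xi = [clip(eco[i][d] + rng.random() * (best[d] - mutual[d] * bf1))
                  for d in range(dim)]
            xj = [clip(eco[j][d] + rng.random() * (best[d] - mutual[d] * bf2))
                  for d in range(dim)]
            if cost(xi) < fit[i]: eco[i], fit[i] = xi, cost(xi)
            if cost(xj) < fit[j]: eco[j], fit[j] = xj, cost(xj)
            # commensalism: Xi benefits, Xj is unaffected
            j = other(i)
            xi = [clip(eco[i][d] + rng.uniform(-1, 1) * (best[d] - eco[j][d]))
                  for d in range(dim)]
            if cost(xi) < fit[i]: eco[i], fit[i] = xi, cost(xi)
            # parasitism: a mutated copy of Xi tries to replace a random host Xj
            j = other(i)
            parasite = list(eco[i])
            for d in rng.sample(range(dim), rng.randint(1, dim)):
                parasite[d] = rng.uniform(lo, hi)
            if cost(parasite) < fit[j]: eco[j], fit[j] = parasite, cost(parasite)
    return eco[fit.index(min(fit))]

best = sos(lambda x: sum(v * v for v in x), dim=2, bounds=(-5.0, 5.0))
```

Only the population size and iteration budget appear as control parameters here, reflecting the parameter-light character attributed to SOS.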
SOS is capable of generating higher-quality solutions than existing meta-heuristic algorithms based on the performance of sample problems. SOS outperformed GA, DE, BA, PSO, and PBA by identifying 22 out of 26 mathematical function solutions in the benchmarking phase. SOS outperformed other algorithms when benchmarked against structural design problems by achieving optimal results with fewer iterations. SOS does not require a tuning phase for stable performance, unlike other algorithms, and the three biological models for symbiotic relationships are expressed in simple mathematical terms [47]. Despite its seemingly complex model, SOS is relatively easy to implement, and the number of algorithm-specific parameters that directly determine its performance is low compared to other algorithms.

Other Algorithms

The searching behavior of animals has inspired a popular behavior-based algorithm called the group search optimizer (GSO). GSO mimics the producer–scrounger model, where organisms find scrounger or producer opportunities in an ecosystem. The GSO algorithm has exhibited increased performance in convergence speed and accuracy in high-dimensional multimodal problems compared to other EAs, with the added benefit of being applicable in neural networks [48].
Some algorithms are developed based on the behavior of migrating animals and insects. Differential search algorithm (DSA) is one of the migration-based algorithms which simulates Brownian-like random-walk movement used by an organism to migrate. DSA can solve different optimization problems at a very high level of accuracy [49]. Another migration-based algorithm is animal migration optimization (AMO), which was developed by Li et al. based on general migration behavior in animals. All major groups of animals (fish, mammals, birds, etc.) display a form of migration. The quality of solutions obtained from benchmarking results shows that AMO performs equally or better than similar algorithms [50]. Zhang et al. have developed an algorithm based on the biology migration phenomenon called the biology migration algorithm (BMA). BMA consists of only the migration and updating phases. The migration phase emulates a species’ relocation to a new habitat. During this phase, each agent should obey two main rules depicted by two random operators. The updating phase copies species’ behavior during migration when leaving or joining a group. During the updating phase, an individual will be evaluated to stay or be replaced via an iterative process. BMA has been shown to be as effective as other optimization methods based on benchmarking tests [51].
Hunting behavior of animals is another source of inspiration for behavior-based algorithms. Hunting search (HuS), developed by Oftadeh et al., is inspired by the communal hunting exhibited by wolves, dolphins, and lions. What these collective hunters have in common is the search for prey, although their actual hunting methods differ. Typically, the hunters catch prey by circling it and closing in, with each individual updating its position relative to the other members; the hunters reorganize themselves around the prey if it escapes. HuS is a powerful search and optimization technique which yields better solutions than existing algorithms when applied to continuous problems [52]. Some other algorithms are based on the behavior of prey and predators in hunting processes. For example, Fausto et al. have developed the selfish herd optimizer (SHO), based on the behavior of herd animals subject to predatory risk, called selfish herd behavior. A group of predators and a pack of prey are used to mimic the predatory interactions between the two groups. SHO has been compared to PSO, ABC, FA, DE, GA, and CSA, and the results show its remarkable performance relative to these methods; SHO has also been shown to be an excellent alternative for solving global optimization problems [53]. The prey-predator algorithm (PPA), developed by Tilahun et al., mimics prey–predator interactions: prey and predators are assigned randomly generated solutions whose performance on the objective function is expressed as a survival value. A predator seeks prey with relatively diminished survival values. Individual prey tend away from predators and toward prey solutions with higher survival values; however, the best prey, that with the best survival value, performs a local search.
The best prey solution seeks the optimum while the rest of the prey search the solution space exploitatively. The PPA has been tested on well-known test problems and has been compared to genetic algorithms and particle swarm optimization. Simulation results show that on the selected test problems, PPA performs better in achieving the optimal value [54].
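A minimal Python sketch of these prey–predator rules might look as follows. Treating the worst solution as the predator and using a Gaussian local search for the best prey are our simplifying assumptions, not the exact formulation of [54]:

```python
import random

def ppa(cost, dim, bounds, n=20, max_iter=100, step=0.5, seed=3):
    """Sketch of prey-predator search: ordinary prey flee the predator and
    follow better prey; the best prey performs a local search."""
    rng = random.Random(seed)
    lo, hi = bounds
    clip = lambda x: min(hi, max(lo, x))
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    for _ in range(max_iter):
        pop.sort(key=cost)                    # highest survival value first
        best, predator = pop[0], pop[-1]
        for i in range(1, n - 1):
            better = pop[rng.randrange(i)]    # a prey with a higher survival value
            pop[i] = [clip(x + step * rng.random() * (b - x)      # follow better prey
                             + step * rng.random() * (x - p))     # flee the predator
                      for x, b, p in zip(pop[i], better, predator)]
        trial = [clip(x + rng.gauss(0, 0.1)) for x in best]       # local search
        if cost(trial) < cost(best):
            pop[0] = trial
        pop[-1] = [clip(p + rng.uniform(-step, step)) for p in predator]
    return min(pop, key=cost)

best = ppa(lambda x: sum(v * v for v in x), dim=2, bounds=(-5.0, 5.0))
```

The division of labor is visible in the code: the best prey exploits its neighborhood, while the remaining prey explore under the combined pull of better solutions and the push of the predator.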
Seeker optimization algorithm (SOA) is a swarm intelligence algorithm inspired by how humans apply experience, memory, and uncertainty reasoning when searching. An individual is labeled a searcher or seeker. The searcher applies cognitive learning, social learning, and uncertainty reasoning to move from its initial position to a new one. The SOA exhibited more robustness, efficiency, and optimization quality than the GA and PSO when benchmarked on typically complex functions [55].
Cuevas et al. have also developed an optimization algorithm based on the collective behaviors of animal swarms. Food source swarming, idling by a location, and migrating are behaviors exhibited by animal groups such as flocks of birds, herds of mammals, and schools of fish. Some benefits that animals derive from collective behaviors are avoiding predators, improved aerodynamics, more efficient migratory routes, and increased harvesting efficiency. In the collective animal behavior (CAB) algorithm, the searcher agents emulate a group of animals that interact with one another based on the biological laws of collective motion. Benchmarking tests show that the CAB algorithm performs well in global optimum searches [56].
Asexual reproduction requires neither the halving of the chromosome number nor the combination of gametes. A uni- or multicellular parent organism endows its offspring with a copy of its genetic makeup through asexual reproduction. Single-celled organisms such as bacteria and archaea reproduce primarily asexually, as do many plants and fungi and some animals. Based on this biological phenomenon, Farasat et al. have developed an optimization algorithm called asexual reproduction optimization (ARO). In ARO, offspring are produced by an individual through reproduction, and the fitter individual is determined via a performance index based on the objective function. The performance of ARO is tested with several benchmark functions frequently used in the area of optimization and is compared with that of PSO. Results of the simulation illustrate that ARO remarkably outperforms PSO [57]. Another similar algorithm, developed by Kaveh et al., is the cyclical parthenogenesis algorithm (CPA). CPA mimics the social behavior and propagation of species that can reproduce both sexually and asexually, such as aphids. Displacement and reproduction mechanisms are used to iteratively improve the quality of a solution, which represents an organism of a species. The benchmarking results indicate that the performance of CPA is comparable to similar meta-heuristic algorithms [58].
The hierarchical system that is responsible for the creation of complex, problem-solving intelligence was used by Chen et al. to develop the hierarchical swarm model (HSM) [59]. Parpinelli et al. have also developed the ecology-inspired optimization algorithm (ECO), which is an optimization algorithm based on the ecological concepts of habitats, ecological relationships, and ecological successions. ECO performs significantly better than ABC, especially as the dimensionality of the functions increases [60].
Other algorithms focus on the methods that animals use to survive. Competition over resources (COR) is an optimization algorithm developed by Mohseni et al., which mimics the competitive behavior found within animal groups. In the COR algorithm, less efficient searching individuals are eliminated from the animal group, while the best searching agent in a group spreads its children within the group. Convergence is reached quickly, since highly competitive search agent populations compete against each other. Based on benchmarking tests, COR converges faster and more accurately than other optimization algorithms, such as PSO [61]. Nguyen et al. have also developed a similar algorithm based on the foraging behavior of zombies. In zombie survival optimization (ZSO), the fitness of the exploration agents, the zombies, is based on their ability to exploit the search space to find an airborne antidote that transforms a zombie back into a human. ZSO is productive as a search tool, for example, for locating a target in an image. Benchmarking using the CAVIAR dataset indicates that ZSO is more efficient than BFO and PSO [62].

2.1.4. Disease-Based

Some research is based on the behavior and models of diseases and treatments. Infectious and viral disease transmission is usually modeled using established ordinary differential equations (ODE) or partial differential equations (PDE) based on epidemiological models such as SIR, MSIR, MSEIRS, etc. Some researchers have focused on using the spreading models of diseases to solve optimization problems. Some work is based on tumor growth and chemotherapy. Figure 24 illustrates the most popular disease-based algorithms based on their citations.
Swine influenza models-based optimization (SIMBO) is a disease-based optimization algorithm, developed by Pattnaik et al., that leverages the well-known ODE-based susceptible–infectious–recovered (SIR) models of swine flu. The development of SIMBO follows a probability-based treatment phase (SIMBO-T), vaccination phase (SIMBO-V), and quarantine phase (SIMBO-Q). The SIMBO variants can be used to optimize complex multimodal functions with improved convergence and accuracy. First, a test based on a dynamic threshold identifies a confirmed case of swine flu. Susceptible parties are advised to inoculate themselves against swine flu after a confirmed case in the community. The swine flu-infected individuals are quarantined from the population, and the suspected cases are treated with antiviral medication. The amount of antiviral medication given to an individual depends on the patient's health and susceptibility or existing complications. In SIMBO-V and SIMBO-Q, treatment and quarantine/vaccination status are used to update the individual’s state. An individual’s health cannot be queried every day due to the restrictions created by the nonlinear momentum factors. SIMBO variants can easily be implemented on parallel computer architectures without overburdening or modification. SIMBO-T, SIMBO-V, and SIMBO-Q have been tested with 13 standard benchmark functions, and the results have been compared with other optimization techniques.
The results show that SIMBO variants outperform other optimization techniques. The performance of SIMBO variants has been evaluated in terms of the quality of optima, the number of overheating criteria, convergence, fitness evaluations (FEs), t-test, statistical parameters, and analysis of variance test (ANOVA). A real-time application in video motion estimation is also considered by authors to test the efficiency of the SIMBO variants. The results of motion estimation using proposed variants seem to be faster than the published methods while maintaining a similar peak signal-to-noise ratio [63]. Another similar algorithm was developed by Huang based on the SEIQRA epidemic model. The SEIQRA artificial infectious disease optimization (AIDO) supposes that human individuals in an ecosystem are differentiated by the number of features they have. Some of the individuals’ features are attacked by a quickly spreading and infectious disease (SARS).
There are five stages that individuals in an ecosystem can pass through: susceptibility (S), exposure (E), infection (I), quarantine (Q), and recovery (R). Individual diversity in the SEIQRA model is achieved by separating infected individuals into the S, E, I, Q, and R states. Individuals can thus travel between the five states via the information exchange achieved by the infectious disease. These state transitions serve as the operators of SEIQRA; currently, there are 13 state transitions, and as such, there are 13 operators. The SEIQRA model controls the state transitions through a synergistic logic control scheme, which allows for the implementation of many operations (reflection, crossover, differential, etc.). The algorithm is stabilized by giving the 13 operators an equal opportunity to occur. The use of the part variables iteration strategy (PVI) gives the algorithm the capability to solve high-dimensional optimization problems. Benchmarking results reveal that SEIQRA has a high convergence speed when searching for global optima [64].
Invasive tumor growth optimization (ITGO) is another disease-based algorithm, developed by Tang et al. Tumor growth is mediated by the drive of tumor cells to grow and proliferate by targeting nutrients in their immediate environment. The three categories of tumor cells in the ITGO algorithm are quiescent cells, proliferative cells, and dying cells. Tumor cells rely on intercellular interaction and random motion to travel, and the interactions among all three cell types are simulated. The invasive behavior of quiescent and proliferative cells is mimicked by Levy flight. The results of testing ITGO on benchmark suites such as CEC2005, CEC2008, and CEC2010 reveal that ITGO is better at solving global optimization problems than other meta-heuristic algorithms such as ABC, DE, and PSO [65].
Salmani et al. have proposed a chemotherapy-based meta-heuristic population algorithm for searching purposes. The chemotherapy science algorithm (ChSA) eliminates unwanted cells (solutions) at the risk of destroying normal cells (possible solutions) [66].

2.1.5. Microorganism and Nano-Organism Based

Some nature-inspired algorithms are based on micro- and nano-organisms such as bacteria, viruses, cells, and similar organisms. We have classified and reviewed such algorithms in this section. The most popular organisms in this category are bacteria, followed by viruses. Figure 25 illustrates the most popular microorganism-based algorithms.
The most popular microorganism-based algorithm is bacterial chemotaxis (BC), developed by Muller et al., which is based on an analysis of the chemotaxis of the bacterium Escherichia coli toward amino acids [67]. The BC algorithm features good computation time due to its n-dimensional model developed from a two-dimensional chemotaxis model. The BC algorithm is stochastic and relatively simple; it performs comparably to Rosenbrock's method on quadratic functions but worse than evolution strategies (ESs), while in inverse airfoil design it performs better than ESs and exhibits comparable convergence [67]. Another bacteria-based algorithm, developed by Passino, is bacterial foraging optimization (BFO), which simulates parallel non-gradient optimization in bacterial nutrient foraging in an ecosystem [68]. Niu et al. have also developed bacterial colony optimization (BCO), which mimics communication, chemotaxis, migration, elimination, and reproduction in E. coli bacteria during a lifecycle. In this algorithm, optimization efficiency is improved via group and individual communication exchange [69]. Other bacteria-based algorithms are the bacterial swarming algorithm (BSA) [70], bacterial evolutionary algorithm (BEA) [71], magnetotactic bacteria optimization algorithm (MBOA) [72], and superbug algorithm (SA) [73].
There are algorithms that are based on viruses, such as the virus colony search (VCS) developed by Li et al. VCS allows an individual in the algorithm to efficiently explore the search space by mimicking virus survival and propagation via infection and diffusion strategies. Benchmark results on constrained and unconstrained optimization problems show that VCS is effective for global numerical and engineering optimization in terms of both convergence and accuracy [74]. The viral systems (VS) algorithm was developed by Cortes et al. based on the performance of viruses. Results are obtained from the meta-heuristic using the host’s infection response and viral replication mechanisms. VS outperforms genetic algorithms and some tabu approaches on medium-to-large instances of the Steiner problem, and it produces results similar to those obtained by the best-performing tabu approaches [75]. Another example of a virus-based algorithm is the virulence optimization algorithm (VOA) [76].
Some algorithms are inspired by cell behavior, e.g., the B-cell algorithm (BCA), which is inspired by the B lymphocytes of the mammalian immune system [77]. Taherdangkoo et al. have also developed the stem cells optimization algorithm (SCOA), which simulates stem cell self-reproduction. SCOA exhibits a low-effort implementation, low complexity, and high-speed convergence, as well as intelligent avoidance of local minima. SCOA outperforms GA, PSO, ACO, and ABC on comparative benchmark functions [78]. The amoeboid organism algorithm (AOA) [79] and synergistic fibroblast optimization (SFO) [80] are similar cell-based algorithms.

2.1.6. Insect-Based

Insects are a popular source of inspiration for optimization algorithms. More than 40 algorithms have been developed based on the behavior and life of insects, ignoring different variants of similar algorithms. Some algorithms in this group, such as the artificial bee colony and ant colony optimization, are widely used by engineers and researchers. Many species, such as ants, bees, flies, spiders, and cockroaches, have been considered in developing optimization algorithms. Figure 26 illustrates the most popular insect-based algorithms and a simple categorization of them.

Ant Colony Optimization (ACO)

Ant species foraging behaviors are the basis for ant colony optimization (ACO). Ants mark areas of interest related to foraging with pheromones to guide other colony members, which is simulated to solve optimization problems using ACO. In ACO, artificial ants build solutions to the given optimization problem and mimic the pheromone foraging behavior in ants to share information related to the generated solutions. The ant system (AS), originally proposed in the 1990s, predates ACO algorithms such as the ant colony system (ACS) and MAX–MIN ant system (MMAS) [81].
In the case where V is a set of vertices and E is a set of edges, GC(V, E) is a connected construction graph obtained from the solution set C, whose components are represented by edges or vertices. A partial solution is generated as agents (ants) travel along the edges of the graph from vertex to vertex. Ants deposit an amount of pheromone on a vertex or edge with a magnitude that is directly proportional to the quality of the solution. Other search agents then use the pheromone concentration to explore favorable solution regions. Three phases of the meta-heuristic are iterated: the ants construct solutions on the graph, optionally apply local search methods, and update the pheromone concentration [81]. The optional and problem-specific local search is typically applied before updating the pheromone to improve the solution quality. The last phase is the pheromone update, which decreases the pheromone concentration of poor solutions and increases the pheromone levels of desirable solutions. Pheromone updating is typically achieved by increasing the pheromone levels of a set of optimal solutions and by using pheromone evaporation to lower the impact of suboptimal solutions [81]. Figure 27 illustrates the flowchart of the ACO.
ACO has been used to solve a range of well-known optimization problems, including the traveling salesman, vehicle routing, and sequence ordering problems. It has been shown that ACO can be used to solve stochastic, dynamic, multi-objective, and continuous problems [81]. Recent studies show that, beyond academic applications, some companies are applying ACO to real-world industrial problems in which multiple objectives, stochastic information, and time-varying data are present [82]. ACO is reported to be fast in solving complex problems, but it is sensitive to parameter settings such as the pheromone evaporation rate used during the pheromone update.
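The construct/evaporate/deposit loop described above can be sketched on a small symmetric TSP. This is a minimal illustration, not the implementation from [81]; parameter names such as `evaporation` and the 1/L deposit rule are common choices of ours.

```python
import random

# Minimal ACO sketch for a small symmetric TSP (illustrative parameters).
def aco_tsp(dist, n_ants=10, n_iters=50, alpha=1.0, beta=2.0,
            evaporation=0.5, seed=0):
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]          # pheromone on each edge

    def tour_length(tour):
        return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            start = rng.randrange(n)
            tour, unvisited = [start], set(range(n)) - {start}
            while unvisited:
                i = tour[-1]
                cand = list(unvisited)
                # next city chosen with probability ~ tau^alpha * (1/d)^beta
                weights = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                           for j in cand]
                j = rng.choices(cand, weights=weights)[0]
                tour.append(j)
                unvisited.remove(j)
            tours.append(tour)
        # evaporation lowers the influence of older, often poorer, solutions
        for i in range(n):
            for j in range(n):
                tau[i][j] *= (1.0 - evaporation)
        # deposit pheromone proportional to solution quality (1 / length)
        for tour in tours:
            L = tour_length(tour)
            if L < best_len:
                best_tour, best_len = tour, L
            for i in range(n):
                a, b = tour[i], tour[(i + 1) % n]
                tau[a][b] += 1.0 / L
                tau[b][a] += 1.0 / L
    return best_tour, best_len
```

On a four-city unit square, the sketch reliably recovers the perimeter tour of length 4.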

Artificial Bee Colony (ABC)

ABC works based on the behavior of honeybees. In ABC, three groups of artificial bees (employed, onlooker, and scout bees) comprise the entire colony. The colony is evenly divided between employed and onlooker bees, and the number of employed bees is equal to the number of food sources around the colony. Scouts are bees whose food source has been depleted. The three steps in a cycle are: employed and onlooker bees are directed to the food sources, the nectar amount is calculated, and scout bees are designated to locate new food sources. Solutions are thus possible locations of food, and the nectar amounts assign the quality of the food source. A probability-based process is used to select which food sources the onlookers visit; onlookers show preferential attention to food sources that have a relatively high amount of nectar. Scouts are characterized as having low food source quality and low search costs, since they search for any food source without prior knowledge of known food source locations; scouts can sometimes discover abundant food sources that were previously undiscovered. Employed bees are selected to become scouts via the "limit" control parameter: an employed bee turns into a scout if its food source is not improved after a number of solution trials, and "limit" sets the number of trials before a food source is abandoned [83,84]. Figure 28 illustrates the flowchart of ABC.
The performance of ABC is better than similar algorithms such as PSO, GA, and DE when solving nonlinear unimodal and multimodal benchmark functions [84,85]. ABC uses a simple operation that perturbs a parent solution toward a random solution from the bee population to find new candidate solutions, as opposed to DE and GA, which use crossover operations to generate new solutions; convergence toward local minima is thus faster. GA and DE use the best solution in the population to generate new solutions, much as PSO uses it to compute new velocities. In ABC, the best solution is simply chosen from the pool of existing solutions, including those found by scout bees, and need not itself create new trial solutions. A greedy selection process between parent and candidate solutions is used in both DE and ABC. In ABC, as in DE, trial solutions are produced for all population solutions regardless of solution quality, while the trial solutions produced by onlooker bees favor solutions with higher fitness, so favorable regions are searched more thoroughly and in less time. These attributes are similar to the selection criteria employed in GA, namely seeded and natural selection. Apart from colony size and maximum cycle number, the ABC algorithm has only one control parameter (limit), whose value is determined by the colony size and the dimension of the problem; therefore, ABC effectively has only two common control parameters [86].
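The employed/onlooker/scout cycle and the "limit" counter described above can be sketched on a toy sphere function. The single-dimension perturbation toward a random other food source is the standard ABC move; the bounds, colony size, and other parameter values below are illustrative choices of ours.

```python
import random

# ABC sketch minimizing the sphere function (illustrative parameters).
def abc_sphere(dim=5, n_food=10, limit=20, n_iters=200, seed=1):
    rng = random.Random(seed)
    f = lambda x: sum(v * v for v in x)              # objective (minimize)
    foods = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_food)]
    trials = [0] * n_food                            # per-source "limit" counters

    def try_neighbor(i):
        # v_ij = x_ij + phi * (x_ij - x_kj): perturb one dimension relative
        # to a randomly chosen other food source, keep the result greedily
        k = rng.choice([m for m in range(n_food) if m != i])
        j = rng.randrange(dim)
        cand = foods[i][:]
        cand[j] += rng.uniform(-1, 1) * (foods[i][j] - foods[k][j])
        if f(cand) < f(foods[i]):
            foods[i], trials[i] = cand, 0
        else:
            trials[i] += 1

    for _ in range(n_iters):
        for i in range(n_food):                      # employed bees
            try_neighbor(i)
        fits = [1.0 / (1.0 + f(x)) for x in foods]
        for _ in range(n_food):                      # onlookers, fitness-biased
            i = rng.choices(range(n_food), weights=fits)[0]
            try_neighbor(i)
        for i in range(n_food):                      # scouts replace exhausted sources
            if trials[i] > limit:
                foods[i] = [rng.uniform(-5, 5) for _ in range(dim)]
                trials[i] = 0
    return min(foods, key=f)
```

Even this bare-bones version drives the five-dimensional sphere well below its random-initialization value within a few thousand trials.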

Moth Flame Optimization Algorithm

Moths employ specialized navigation methods during the nighttime hours based on the transverse orientation mechanism, which uses moonlight as a reference. Moths are able to traverse large straight paths by keeping a fixed reference angle to the moon. The angle is maintained, and thus, the straight path of travel, because the moon is far from the Earth, which is illustrated in Figure 29a. Transverse orientation is effective, but moths are often confused by artificial lighting sources, which they encircle at a fixed angle. Figure 29b shows the encirclement traps that moths fall into when using transverse orientation on artificial light sources.
The moth-flame optimization (MFO) algorithm is inspired by the transverse orientation of moths in the presence of artificial light, as shown in Figure 29b. The moth eventually converges toward the light, which is used as the mathematical basis for the MFO algorithm [87].
In the MFO algorithm, moths serve as the candidate solutions, and the moth positions serve as the problem's variables. The moths may therefore traverse one-, two-, three-, or hyper-dimensional space by changing their position vectors. The MFO algorithm is population-based, so the moths and flames are represented in matrix form. Both flames and moths are solutions, but they are updated differently. The flames, which can be considered flags marking promising regions, are the best solutions found so far by the moths, which are the search-space agents. Each moth searches around a flame and updates that flag when it finds a fitter solution, so a moth never loses its best solution. Moths' positions are updated based on a mathematical model of transverse orientation with the following equation:
Mi = S(Mi, Fj)
where Mi represents the ith agent/moth, Fj represents the jth flag/flame, and S represents the spiral function. A logarithmic spiral is used here, but any spiral can be utilized for this purpose. The logarithmic spiral is shown below:
S(Mi, Fj) = Di · e^(bt) · cos(2πt) + Fj
where Di is the distance between moth i and flame j, b is a constant that defines the spiral shape, and t is a value chosen randomly in the range [−1, 1] [87].
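The spiral update above can be evaluated directly for one moth; this is a minimal sketch in which the shape constant b = 1 and the random generator are illustrative choices.

```python
import math
import random

# One moth position update from the logarithmic-spiral equation
# S(Mi, Fj) = Di * e^(b*t) * cos(2*pi*t) + Fj, applied per dimension.
def spiral_step(moth, flame, b=1.0, rng=random.Random(2)):
    t = rng.uniform(-1.0, 1.0)                   # random t in [-1, 1]
    new = []
    for m, fl in zip(moth, flame):
        d = abs(fl - m)                          # distance Di to the flame
        new.append(d * math.exp(b * t) * math.cos(2.0 * math.pi * t) + fl)
    return new
```

Note that a moth sitting exactly on its flame stays there (Di = 0), which is why flames act as anchors for the best solutions found so far.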
MFO has been used to solve 19 different unimodal, multimodal, and composite benchmarks, and its performance has been compared to PSO, GSA, BA, GA, FA, SMS, and FPA. MFO provides the best results on four of the test functions, and its superior efficiency is statistically significant; on some problems, it competes with GSA. The selection of flames for positioning the moths is the reason why MFO does not provide better results on three of the multimodal tests. Since the moths mainly explore the search space, local solutions are avoided; on unimodal test functions, which feature no local solutions, this emphasis on exploration keeps MFO from approximating the global optimum very closely, although this is not a major concern since MFO still achieved superior results on two unimodal tests. The updating of the moths' position vectors promotes accuracy and convergence speed, and the avoidance of local solutions is one of the algorithm's main strengths [87].

Other Algorithms

Some other algorithms have been developed based on bees and their behaviors, such as the bee colony optimization (BCO) and fuzzy bee systems (FBS) algorithms developed by Teodorovic et al. BCO and FBS can solve deterministic combinatorial problems and uncertain combinatorial problems [88]. Honeybees mating optimization (HBMO) is another swarm-based algorithm that was developed by Haddad et al. based on the process of honeybee mating. The performance of HBMO is comparable to the results of the well-developed genetic algorithm in problems such as highly non-linear constrained and unconstrained real-valued mathematical models [89]. Another similar algorithm is marriage in honeybees (MHB), which mimics the development of honeybees that begins with a solitary colony, categorized as a solitary queen with no family, to a eusocial colony, characterized as multiple queens with families. MHB displays good performance in solving satisfiability problems (SAT) [90]. Queen-bee evolution (QBE) is another example of a bee-inspired algorithm, which enhances the capability of GA, where the queen bee has a direct influence over the reproduction process. QBE enhances genetic algorithm performance in converging to a global optimum by improving the exploration and exploitation of the search environment [91]. Bee swarm optimization (BSO) is based on the foraging behavior of honeybees by using population-based optimization, where bees adjust their flight trajectories. Experimental results have shown that BSO is comparable to ABC and bee and foraging algorithm (BFA) in solving nonlinear unimodal and multimodal multivariable benchmark functions [92].
Bee collecting pollen algorithm (BCPA) is also a global convergence searching algorithm that mimics honeybees’ swarm intelligence and pollen collection behaviors. BCPA has comparable performance with ACO in benchmarking function performance [93]. Another bee-based algorithm is OptBees, which is inspired by the processes of collective decision-making by bee colonies. It has been shown that OptBees exploits the multimodality of problems. Additionally, it creates and maintains diversity and achieves desired results in global optimization [94]. Fitness dependent optimizer (FDO) is another bee swarm algorithm inspired by the bee-swarming reproductive process and collective decision-making. FDO is a PSO-based algorithm that uses velocity to update the position of the search agents. Velocity is calculated via the fitness function that processes weights, which provide guidance to the search agents in the exploration and exploitation phase. Experimental results have shown that FDO performs better than PSO, GA, DA, WOA, and SpSO in some nonlinear unimodal and multimodal benchmark functions, including CEC-C06 [95]. Bumblebees (BB) is a multiagent optimization algorithm inspired by the collective behavior of social insects. Experimental results have shown that BB is faster and outperforms other algorithms, such as GA, in solving the k-coloring of a graph problem for a range of random graphs with different orders and densities [96]. Another algorithm in this group is bumble bees mating optimization (BBMO), which simulates the mating behavior of the bumble bees for solving global unconstrained optimization problems. It has been shown that the BBMO has high performance in solving some nonlinear multivariable benchmark functions in comparison to algorithms such as GA, DE, PSO, and HBMO [97].
Some algorithms in this category are inspired by the antlion (order Neuroptera) and the dragonfly (order Odonata). The antlion optimizer (AlO) mimics the hunting mechanism of antlions in nature. It has been shown that AlO competes with algorithms such as PSO, GA, CS, and FPA on benchmark functions in terms of local optima avoidance, exploration, exploitation, and convergence [98]. The dragonfly algorithm (DA) is another swarm intelligence optimization technique that originates from the static and dynamic swarming behaviors of dragonflies in nature. In DA, dragonfly behaviors such as foraging, navigation, and predator avoidance were used to develop the two distinct phases of exploration and exploitation. DA can effectively solve highly constrained CFD problems [99].
Some other algorithms are inspired based on flies; for example, fruit fly optimization (FFO) is based on the food-seeking behavior of fruit flies. Fruit flies can detect food sources from over 40 km away using their olfactory organs. Fruit flies also employ their sensitive vision to detect food and swarm flocking locations. The stability of the search route for the fruit flies is related to the population of the flies. FFO can normally find correct solutions to optimization problems [100]. The dispersive flies optimization (DFO) algorithm is based on the swarming behavior of flies over food sources in nature [101].
Another group of insect-based algorithms is based on Orthoptera, an order of insects including grasshoppers, locusts, and crickets. Grasshopper optimization algorithm (GOA) mimics the behavior of grasshopper swarms in nature for solving optimization problems. The simulation results have shown that GOA is able to provide superior results compared to well-known algorithms such as GA, PSO, CS, SM, and FPA and verify the merits of GOA in solving real problems with unknown search spaces [102]. Locust swarms (LS) is a multi-optima search technique explicitly designed for non-globally convex search spaces, which uses PSO as part of its coarse-mesh search to find starting points for a paired greedy search technique [103]. Another algorithm is the cricket algorithm (CrA) which is based on the behavior of crickets in nature. The CrA is a population-based algorithm similar to PSO, in which the candidate that converges to the optimum result tries to provide solutions. CrA applies some aspects of BA and FA [104].
There are some algorithms based on fireflies, such as the firefly algorithm (FA). The FA is a swarm-intelligence algorithm based on the flashing pattern of tropical fireflies. It has been confirmed that FA can provide a good balance of exploitation and exploration and requires far fewer function evaluations [105]. Another algorithm is the glowworm swarm-based optimization algorithm (GSOA), which is a swarm intelligence-based algorithm for optimizing multimodal functions. The GSOA aims to capture all of a function's local maxima. The glowworms, or agents, of GSOA move in a direction based on the signal strength picked up from neighbors selected using a decision domain. The GSOA is based on the glowworm behavior used to attract mates and prey, namely the luciferin glow intrinsic to the worms [106]. The jumper firefly algorithm (JFA) is another algorithm based on FA. The JFA finds efficient solutions by evaluating the efficiency of the agents; inefficient agents are then relocated using the jump option to maximize the likelihood of finding the best solutions [107].
Another class of algorithms is inspired by spiders. Social spider optimization (SSO) is a swarm algorithm based on the simulation of the cooperative behavior of social spiders. SSO mimics colony cooperation by featuring computational mechanisms and considering two genders to provide a better exploration and exploitation behavior and avoid premature convergence, which are issues that plague PSO and ABC algorithms [108]. Another spider-based algorithm is black widow optimization (BWO) which draws inspiration from black widow mating behaviors, such as cannibalism. The cannibalism stage allows for early convergence due to the rejection of inefficient search agents [109]. Another example is the water strider optimization algorithm (WSOA), a population-based optimizer that mimics the water strider bug life cycles. The WSOA includes water strider behaviors such as mating style, feeding mechanisms, succession, territorial behavior, and intelligent ripple communication. WSOA has been applied to classical constrained, unconstrained, continuous, and discrete engineering design problems, two structural optimizations of double-layer barrel vaults, and a challenging bridge damage detection problem [110].
In the group of Lepidoptera, there are some algorithms inspired by butterflies. The monarch butterfly optimization method (MBOA) was developed based on the migration of monarch butterflies. In MBOA, the locations of the butterflies, which live in two different environments, are updated by two operators: the migration operator generates offspring, and then the butterfly adjusting operator updates the positions of the agents. The number of agents remains unchanged by these two operators so as to avoid extra fitness evaluations [111]. Another example is the butterfly optimization algorithm (BOA), which uses the concept of fragrances emitted by flowers and the ability of butterflies to sense those fragrances from long distances. The butterflies are agents used to search for optimal solutions in the search space. It has been shown that BOA performs efficiently compared to PSO, GA, ABC, and FA [112].
Another popular algorithm is the artificial butterfly optimization algorithm (ABOA) which is based on the mate-finding strategy of some butterfly species. In the ABOA algorithm, all virtual butterflies are divided into two groups, one called the sunspot butterfly group, and the other called the canopy butterfly group. The fitness of sunspot butterflies is better than those of canopy butterflies. In an optimization process, exploration is achieved by canopy butterflies, while exploitation is achieved by sunspot butterflies. It has been shown that ABOA has comparable performance to PSO, GA, and ABC [113].
Numerous algorithms have also been developed based on roaches. Roach infestation optimization (RIO) is an algorithm similar to PSO, which is inspired by the social behavior of cockroaches. It has been shown that RIO has a better performance compared to PSO [114]. Another similar algorithm is the cockroach swarm optimization method (CSOM) which is also based on the social behavior of roaches [115]. Another algorithm based on the social behavior of roaches is artificial social cockroaches (ASC) [116]. Cockroach colony optimization (CCO) is another algorithm that is developed based on CSOM, and uses an improved grid map for environment modeling [117]. Another example is cockroach swarm evolution (CSE) which is inspired by competitive event-based behavior in roaches [118].
Some algorithms are inspired by beetles, such as the pity beetle algorithm (PBA), which is inspired by aggregation behavior, and nest and food localization of the six-toothed spruce bark beetle or Pityogenes chalcographus beetle. PBA exploits weakened trees in a forest until the population is large and healthy enough that the beetles can infest healthy trees. It has been shown that PBA has considerable performance in solving NP-hard optimization problems and some common benchmark functions [119]. Another example is the beetle swarm optimization algorithm (BSOA) developed by Wang et al., based on beetle foraging principles. BSOA has shown good performance in solving benchmark functions as well as some engineering problems such as pressure vessel design problems and Himmelblau optimization problems compared to similar algorithms such as PSO and GA [120]. Jiang et al. have also developed the beetle antennae search algorithm (BASA), based on the searching behavior of Longhorn beetles. This algorithm mimics the function of the antennae and the random walking mechanism of these beetles. Jiang et al. have shown the efficiency of the BASA in solving benchmark functions [121].
Some algorithms have been developed based on mosquitoes. One example is mosquito flying optimization (MFO), an algorithm based on the nest-seeking behavior of mosquitoes that solves optimization problems by modeling the flight and sliding motion of the insects [122]. Another example is mosquitos oviposition (MO), developed by Minhas et al. for solving multidimensional optimization problems, based on the highly selective behavior of female mosquitoes in choosing a habitat to lay their eggs and the inhibition of those eggs from hatching into the next stage [123].
There are other insect-based algorithms such as termite colony optimization (TCO), which is developed based on a decision-making model inspired by the intelligent behaviors of termites in adjusting their motion trajectory [124]. Zhu et al. have also developed seven-spot ladybird optimization (SLO) based on the foraging behavior of a seven-spot ladybird. It has been shown that SLO is capable of solving optimization problems using a smaller population size compared to GA, PSO, and ABC, which makes it suitable for low-dimensional problems [125]. Another example is Eurygaster Algorithm (EA), which is developed for solving NP-hard continuous and discrete problems. EA is based on the attack strategy of Eurygaster groups on farms [126].

2.1.7. Avian Animal-Based

A major part of bio-based algorithms belongs to winged animals, such as birds and bats. Different kinds of birds have been used as a source of inspiration for developing optimization algorithms, from hunting birds like eagles and hawks, to chickens. Some of the most popular algorithms in this group are cuckoo search (CS), the green herons optimization algorithm (GHOA), and the bat algorithm (BA). Figure 30 illustrates different winged-animals-based algorithms based on their citations.

Cuckoo Search

Cuckoo search (CS) mimics the Lévy flight of some insects and birds, as well as the parasitic behavior of cuckoo birds [127]. A Lévy flight describes a random flight pattern of straight paths with intermittent 90° turns. Cuckoo birds also employ aggressive reproduction tactics, such as the use of other species' nests, where they ensure the hatching of their own eggs by removing the host's eggs. Some cuckoo species can copy the coloration patterns of host species, so that cuckoo offspring are mothered by another host species. Cuckoos take advantage of nests where eggs were recently laid. When a cuckoo chick hatches, its first instinct is to remove host eggs from the nest to increase its share of food from the mother bird; the chick can also increase feeding opportunities by mimicking host bird calls [127]. Drosophila, or fruit flies, use Lévy flight to explore their environment, a behavior that has been applied to optimization and optimal search [127]. Figure 31 illustrates an example of a Lévy flight.
The CS algorithm is governed by three main principles. First, a cuckoo bird can lay only one egg at a time, which is randomly deposited in an existing nest. Second, the highest-quality nests propagate to the next generation. Third, the number of host nests is fixed, and the probability of the parasitic egg being discovered is pa ∈ [0, 1]. If discovered, the host species will abandon the nest or evict the cuckoo egg, which is approximated by replacing a fraction pa of the n nests. Fitness is proportional to the objective function in a maximization problem, and it can also be defined as in existing genetic algorithms. Essentially, a new cuckoo egg generates a potentially better solution that can replace a less optimal solution [127]. A Lévy flight is performed for a new solution x(t + 1) for a cuckoo i as follows:
xi(t + 1) = xi(t) + α ⊕ Lévy(λ)
where α > 0 controls the step size, which depends on the scale of the problem (in most cases α = 1), and the product ⊕ represents entry-wise multiplication. A random walk is thus generated by the above stochastic equation. The Lévy distribution gives the random length of the random walk as follows:
Lévy ∼ u = t^(−λ), (1 < λ ≤ 3)
which features an infinite variance and mean. A random-walk process with a power-law, heavy-tailed step-length distribution is thus formed. The local search can be sped up by generating Lévy walks in the proximity of the best solutions. To ensure that the system does not converge on a local optimum, far-field randomization should be employed to scatter new solutions away from the current best. CS is most similar to hill climbing due to the randomization process, yet distinct in many ways: CS finds the best solution similarly to harmony search while maintaining a population-based structure like GA and PSO, and due to the heavy-tailed distribution of the step size, its randomization is more efficient. Lastly, CS is likely easier to implement than GA and PSO due to its smaller number of parameters. CS can be used as a meta-population algorithm if each nest is treated as a set of solutions [127].
The performance of CS in optimizing multimodal objective functions has been shown to be superior to other algorithms in part due to the diminished number of parameters (n and pa). Parameter pa does not affect the convergence rate, so the parameter does not need to be tuned on a case-by-case basis. CS is more robust than other nature-inspired algorithms as a result. CS can be extended to study multi-objective optimization applications with various constraints, including NP-hard problems [127].
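A common way to draw the heavy-tailed step in the Lévy-flight update above is Mantegna's method. The sketch below is illustrative: biasing the step by the distance to the current best nest follows widely used CS implementations, and the values of α and β are our own choices.

```python
import math
import random

# One Lévy-flight step for a cuckoo/nest, using Mantegna's method to
# sample the heavy-tailed step length.
def levy_step(x, best, beta=1.5, alpha=0.01, rng=random.Random(3)):
    # Mantegna: step = u / |v|^(1/beta), u ~ N(0, sigma^2), v ~ N(0, 1)
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    new = []
    for xi, bi in zip(x, best):
        u, v = rng.gauss(0, sigma), rng.gauss(0, 1)
        step = u / abs(v) ** (1 / beta)
        # bias the walk relative to the current best nest, as in common
        # CS implementations; the best nest itself is left unchanged
        new.append(xi + alpha * step * (xi - bi))
    return new
```

Most steps are small (intensifying the local search near good nests), while the heavy tail occasionally produces very long jumps that realize the far-field randomization described above.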

Green Herons Optimization Algorithm

The green heron optimization algorithm (GHOA) is based on the prey-baiting and tool-using mechanisms employed by the green heron. These birds perch horizontally and dip bait, such as worms, feathers, foliage, berries, or insects, into a body of water with their bills. When water-dwelling prey senses the presence of food and investigates, the heron catches the prey. The meta-heuristic foundations of the heron's baiting behavior are used for optimization and complex problem-solving [129]. GHOA is well suited to graph-based problems with discrete representations; the algorithm features enhanced convergence speed and efficient solution-finding behavior when producing good solution-set combinations [129]. Figure 32 illustrates the heron's baiting behavior.
GHOA features three operations that create the path and the solutions: baiting, attracting prey swarms, and change of position. The baiting stage mimics the behavior of herons, which drop bait in the hope of attracting prey. The bait is a randomly generated solution that the bird drops upon finding an optimal position through a local search; thus, both bait and prey serve as solutions in the algorithm. The bait-prey interaction takes one of three modes to improve or generate a new solution heuristically: the missed catch, the catch, and the false catch. In the missed catch mode, bait is placed in a suitable location, but the bird fails to catch prey, so the number of solutions likely increases (illustrated in Figure 33a). In the catch mode, bait is deposited and the bird catches prey; a more competitive solution is added to the system, a suboptimal solution is removed, and the number of solutions remains constant (also illustrated in Figure 33a). An event where a bird catches prey without bait is called the false catch mode, where an inappropriate solution is removed and the number of solutions in the system decreases. If a solution is removed in a false catch, a new mode must be introduced, or certain constraints of the problem may be compromised (illustrated in Figure 33a) [129].
The attracting prey swarms phase is essentially a local search that ensures the algorithm converges quickly when solving precedence criteria in constrained discrete problems, which distinguishes it from the change of position phase. In the attraction phase, the position of the bird remains static while swarms of prey are attracted to the bait, a behavior useful in solving VRP, TSP, scheduling, and other problems with a constrained number of unknowns, and selectively useful in path planning, routing, and similar problems (see Figure 33b). The change of position phase represents the bird's behavior in choosing a location where it can easily attract prey, or catch prey that happens to move near its feet, which ensures that time is not wasted in a local region of the search space that does not contain solutions (see Figure 33c) [129]. Figure 34 illustrates the flowchart of the GHOA.
GHOA is seemingly robust and scalable and displays adequate convergence criteria. The algorithm has produced promising solutions for high dimensional and combinatorial problems, as well as graph-based and discrete optimization problems [129].

Bat Algorithm

Bats are a diverse group of mammals that use advanced echolocation and are the only mammals capable of sustained flight. Insectivorous microbats use echolocation to find prey, avoid obstacles, and locate their roosts in the dark. Bats achieve echolocation by emitting a sound pulse and using the time of return to estimate the distance to surrounding objects. The properties of the echolocation signal vary with species and hunting strategy: bats can use either constant-frequency signals or short frequency-modulated sweeps, and while the output signal frequency depends on the species, bats use more harmonics to increase the bandwidth. Bats generate a three-dimensional map of their environment using the loudness variations, the travel time of the emissions, and the time difference between their two ears. Bat sonar is precise enough to determine the orientation, position, type, and speed of prey, and bats also use the Doppler shift induced by insect wings to discriminate between prey [130]. Figure 35 illustrates a schematic of echolocation.
The bat algorithm (BA) is inspired by the above-mentioned behaviors of bats. In BA, all bats use echolocation to sense distance, the type of food/prey, and background barriers. The bats fly randomly with loudness A0, position xi, and velocity vi, with a varying wavelength λ and fixed frequency fmin. The proximity of prey adjusts the rate of pulse emission (in the range [0, 1]) and the frequency of the pulses. Loudness varies from a large positive A0 down to a minimum value Amin. New solutions for the velocities and positions vi(t) and xi(t) are generated using the best global solution, x*. After the global best is found, a random walk is used to generate a new local solution for each bat, scaled by the average loudness of all bats at the given time. Velocity and position are updated similarly to PSO, since the frequency fi controls the range and speed of movement; BA can be seen as a combination of the standard PSO and a local search mediated by loudness and pulse rate. Additionally, once prey has been detected, loudness is decreased and the pulse emission rate is increased. Yang et al. showed that BA outperforms existing algorithms when applied to seven constrained and nonlinear design task benchmark problems. BA is potentially more powerful than GA, PSO, and HS, which can be viewed as simplified versions of BA; BA incorporates the strengths of these algorithms while including a robust local search [130].
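For a single bat, the frequency, velocity, and position updates described above reduce to a few lines. This is a hedged sketch: the frequency bounds f_min and f_max are illustrative, and the loudness/pulse-rate bookkeeping of the full algorithm is omitted for brevity.

```python
import random

# One BA update for a single bat: draw a random frequency, pull the
# velocity toward the global best x*, then move the bat.
def bat_step(x, v, best, f_min=0.0, f_max=2.0, rng=random.Random(4)):
    f = f_min + (f_max - f_min) * rng.random()       # random frequency fi
    v_new = [vi + (xi - bi) * f for vi, xi, bi in zip(v, x, best)]
    x_new = [xi + vi for xi, vi in zip(x, v_new)]
    return x_new, v_new
```

As in PSO, the frequency fi scales how strongly the bat is drawn toward the current best solution; a bat already at the best position with zero velocity stays put.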

Other Algorithms

The crow search algorithm (CrSA) is another bird-inspired algorithm, developed by Askarzadeh; it is a population-based technique that mimics how crows store and retrieve food. Simulation results reveal that CrSA can produce promising results compared to other algorithms [132].
Other algorithms are based on hawks, eagles, and other birds of prey. Heidari et al. developed Harris hawks optimization (HHO) based on the surprise pounce, a cooperative chasing behavior in which several Harris hawks dive at a target from different directions at the same time in an attempt to surprise it. Harris hawks have developed several variants of the surprise pounce and can reveal a variety of chasing patterns depending on the dynamic nature of the scenario and the escaping patterns of the prey [133]. Eagle strategy (ES) iteratively combines the firefly algorithm with a Lévy walk for random search, and studies suggest that ES is an effective stochastic optimizer [134].
Segundo et al. have also developed the falcon’s hunt optimization algorithm (FHOA) based on the hunting behavior of falcons; it is a robust and powerful stochastic population-based algorithm that requires the adjustment of only a few parameters for its three-stage movement decision [135]. Khan et al. have developed the eagle perching optimizer algorithm (EPOA), which mimics the perching behavior of eagles and balances exploration and exploitation [136]. Bald eagle search (BES) is another eagle-based algorithm, built on the hunting strategy and intelligent social behavior of bald eagles as they search for fish. Simulation results confirm that BES performs comparably to conventional methods and advanced meta-heuristic algorithms [137].
Another group of algorithms was developed based on penguins. The penguins search optimization algorithm (PeSOA) is one such algorithm and is based on the collaborative hunting strategy of penguins [138]. Emperor penguin colony (EPeC) is another algorithm, developed by Harifi et al. based on the body heat radiation and spiral-like movement of emperor penguins in their colony [139]. Dhiman et al. have also developed the emperor penguin optimizer (EPeO), which mimics the huddling behavior of emperor penguins [140].
Other algorithms are based on swarms or the social behavior of birds, such as migration. Among them is chicken swarm optimization (CSO), which mimics the intelligent hierarchical swarm behaviors of roosters, hens, and chicks. Studies show that CSO achieves good robustness and accuracy in optimization problems compared to popular meta-heuristic algorithms [141]. The bird swarm algorithm (BSA) builds on the swarm intelligence that emerges from bird interactions; its three predominant social behaviors are foraging, vigilance, and flight, through which birds increase their survival rate, for example by foraging together and escaping predators [142]. Duman et al. have developed migrating birds optimization (MBO) based on the energy-saving V-shaped flight formation common in bird migration [143]. The swallow swarm optimization algorithm (SWOA) is a similar algorithm that models the movement of swallow swarms; it has proven efficient in flat regions, in avoiding stagnation at local extrema, in convergence speed, and in intelligent particle participation [144]. Bird mating optimizer (BMO) is inspired by the mating strategies of birds: it builds optimum-searching techniques by breeding agents with better genes, and it competes with other EAs on unimodal and multimodal benchmark functions [145].
Some algorithms are based on other bird behaviors, such as feeding and egg-laying. The laying chicken algorithm (LCA) was developed by Hosseini based on the behavior of chickens laying eggs [146]. Lamy has also developed artificial feeding birds (AFB) inspired by food-searching behaviors in birds, which can be applied to many optimization problems [147].
Numerous algorithms are based on different kinds of birds, such as pigeon-inspired optimization (PIO), which is based on the homing behavior of pigeons [148]. Dhiman et al. have also developed the seagull optimization algorithm (SOA), which is inspired by the migration and attacking behaviors of seagulls [149]. The satin bowerbird optimizer (SBO) was developed by Moosavi et al. based on the mating behavior of satin bowerbirds [150]. Jain et al. have also developed the owl search algorithm (OSA), a population-based algorithm inspired by the hunting mechanism of owls in the dark [151]. Another example is the Egyptian vulture optimization algorithm (EVOA), which was developed based on the food acquisition behaviors of Egyptian vultures [152]. The kestrel optimization algorithm (KOA) was developed from the feeding behavior of the kestrel, a small falcon [153].
Dhiman et al. have also developed the sooty tern optimization algorithm (STOA) based on the migration and attacking behaviors of the sooty tern, a seabird [154]. The raven roosting optimization algorithm (RROA), developed by Brabazon et al. based on the social roosting and foraging behavior of the common raven, is another example [155]. The Andean condor algorithm (ACA) is inspired by the movement pattern of the Andean condor as it searches for food [156]. Omidvar et al. have also developed a PSO-like algorithm termed see-see partridge chicks optimization (SSPCO) by modeling the behavior of see-see partridge chicks [157]. Hoopoe heuristic optimization (HHO) is another bird-inspired algorithm, developed by El-Dosuky et al. [158]. The urban pigeon-inspired optimizer (UPIO) was developed based on the foraging behavior of groups of urban pigeons [159]. Finally, the bat sonar optimization algorithm (BSOA) is another algorithm based on bats’ echolocation capabilities [160].

2.1.8. Aquatic Animals-Based Algorithms

More than 35 aquatic animal-based algorithms are studied in this section, which includes different kinds of aquatic animals and plants. This category has one of the most diverse groups of species. Fish, dolphins, and whales are the most popular species in this category. The most popular algorithms based on their citations are the whale optimization algorithm (WOA), krill herd (KH), and salp swarm optimization (SlpSO). Figure 36 illustrates the most cited aquatic-based algorithms.

Whale Optimization Algorithm

Whales are considered the largest mammals in the world. There are seven main species of whales: the minke, sei, right, finback, blue, killer, and humpback. Whales are classified as predators. Humpback whales exhibit a particular hunting method called bubble-net feeding, in which they feed on entire schools of fish or krill close to the ocean’s surface. It has been observed that humpback whales forage by creating distinctive bubbles along a circular or “9”-shaped path, as shown in Figure 37. Two distinct bubble maneuvers are used in feeding, termed upward-spirals and double-loops. The upward-spiral maneuver is executed when a whale dives around 12 m below the surface, creates spiral-shaped bubble formations underneath the prey, and swims toward the surface. The double-loop maneuver includes the coral loop, lobtail, and capture loop stages. Of note, bubble-net foraging is specific to humpback whales [161].
WOA is based on the bubble-net feeding, prey encirclement, and prey search behaviors of humpback whales. The whales encircle prey after the prey has been located. Since the location of the optimum is not known a priori, WOA takes the best candidate solution found so far as the location of the prey, and search agents update their positions relative to it. Bubble-net foraging is modeled by two approaches: the shrinking encircling mechanism, which is achieved by decreasing the encircling radius over the course of iterations, and the spiral updating position approach, which first calculates the distance between the whale located at (X, Y) and the prey located at (X*, Y*) and then creates a spiral path between them. Because humpbacks simultaneously swim along a helical path and encircle the prey in a shrinking circle, the model updates the whale’s position as follows:
$$\vec{X}(t+1)=\begin{cases}\vec{X}^{*}(t)-\vec{A}\cdot\vec{D}, & \text{if } p<0.5\\ \vec{D}'\cdot e^{bl}\cdot\cos(2\pi l)+\vec{X}^{*}(t), & \text{if } p\geq 0.5\end{cases}$$
where p is a number randomly chosen from the span [0, 1], $\vec{D}'=|\vec{X}^{*}(t)-\vec{X}(t)|$ indicates the distance between the ith whale and the best solution (prey) found so far, the constant b defines the shape of the logarithmic spiral, and l is a number randomly generated from the span [−1, 1]. WOA begins with a random solution set, and the position of each agent is updated based on the position of either a randomly chosen agent or the best solution found so far. The algorithm selects between the encircling and spiral movements based on the variable p. The optimization process finishes when the termination criteria are met.
WOA competes with popular meta-heuristic methods based on its performance on several benchmarking functions to test the local optima avoidance, convergence, exploration, and exploitation behaviors [161].
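As an illustration, the piecewise update above translates almost directly into code. The following is a hedged, one-dimensional sketch of the WOA position update only (not a full optimizer); the spiral constant b = 1 is an assumed default, and the random draws follow the definitions of p, l, A, and C given in the text.

```python
import math
import random

def woa_position_update(x, x_best, a, b=1.0, rng=random):
    """One WOA position update for a single (scalar) whale position.

    With probability 0.5 the whale performs shrinking encircling of the
    best solution; otherwise it follows the logarithmic spiral
    D'*e^(b*l)*cos(2*pi*l) + X*. `a` should decay from 2 to 0 over a run.
    """
    p = rng.random()                      # p drawn from [0, 1]
    if p < 0.5:
        A = 2 * a * rng.random() - a      # A in [-a, a]
        C = 2 * rng.random()              # C in [0, 2]
        D = abs(C * x_best - x)
        return x_best - A * D             # shrinking encircling
    l = rng.uniform(-1, 1)                # l drawn from [-1, 1]
    D_prime = abs(x_best - x)             # distance to the best solution
    return D_prime * math.exp(b * l) * math.cos(2 * math.pi * l) + x_best
```

Repeatedly applying this update to every whale while decaying a from 2 to 0 reproduces the exploration-to-exploitation transition described above.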

Krill Herd

Antarctic krill are among the most extensively studied marine animals. Krill herds exist on spatial scales of tens to hundreds of meters and on time scales of hours to days. The herds are dense aggregations of krill with no parallel orientation, and swarming is a main characteristic of the species. Herd density diminishes after predatory attacks, which remove individual krill from the swarm, and many parameters dictate the formation of a new herd afterwards. The two main goals of herding krill are reaching food and increasing swarm density. Krill herd (KH) is a meta-heuristic algorithm based on this herding behavior, designed to solve global optimization problems; it emulates the behavior of krill swarms in response to specific environmental and biological events. The parameters of the KH algorithm were determined from a review of real-world studies in the literature. The distance of each krill from food and from the region of highest swarm density serves as its fitness function. The time-dependent position of a krill is determined by motion induced by other krill, foraging motion, and random diffusion [163]. Because the objectives of increasing herd density and finding food drive the krill toward the best solution, krill herds gather near global minima; the objective function is therefore minimized when krill are close to food or in regions dense with other krill [163].
The position of a krill in the KH algorithm, which represents a candidate solution, is updated based on the maximum induction speed, the inertia weight, the target direction given by the best krill agent, and the attractive and repulsive effects between neighboring krill. Neighbors are determined using a sensing distance parameter, as illustrated in Figure 38. The maximum induction speed limits the distance a krill can move in each iteration, the inertia weight determines the degree to which a krill maintains its current direction, and the target direction from the best krill agent guides the swarm toward the global optimum. The attractive and repulsive effects between neighbors depend on their fitness and relative positions: krill with higher fitness attract others, while those with lower fitness repel them, and these effects influence the direction and speed of each agent. In this respect, KH shares concepts with PSO [163]. Figure 39 shows the flowchart of KH.
As with any meta-heuristic algorithm, the related parameters must be tuned. Because KH uses real-world coefficients to simulate krill behavior, only the time interval, Ct, has to be tuned; having a single tunable parameter is one of KH’s greatest advantages over other algorithms. Other advantages of KH are as follows: (1) the fitness of agents dictates the motion of other agents, (2) a local search is conducted in neighborhoods due to the attractive/repulsive krill movements in the algorithm, and (3) the global best is estimated through the determination of the center of food based on agent fitness. KH is a more complex algorithm than PSO due to intrinsic motions such as foraging. KH has been shown to outperform various popular algorithms, such as PSO, APSO, GA, ES, BBO, ACO, DE, and HDE [163].
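The three motion components described above can be illustrated with a heavily simplified sketch. This is not the paper's full Lagrangian model: the coefficients `v_max`, `w`, `d_max`, and the sensing-distance fraction below are illustrative assumptions rather than the real-world values used by KH.

```python
import numpy as np

def krill_herd_step(X, V, fitness, food, v_max=0.05, w=0.4,
                    d_max=0.01, sense_frac=0.5, rng=None):
    """One simplified KH step: induced motion + foraging + diffusion."""
    rng = rng or np.random.default_rng(0)
    n, dim = X.shape
    f = np.asarray(fitness, dtype=float)
    K = (f.max() - f) / (f.max() - f.min() + 1e-12)  # 1 = best, 0 = worst
    # sensing distance: a fraction of the mean spread of the swarm
    ds = sense_frac * np.mean(np.linalg.norm(X - X.mean(axis=0), axis=1))
    V_new = np.zeros_like(X)
    for i in range(n):
        induced = np.zeros(dim)
        for j in range(n):
            d = np.linalg.norm(X[j] - X[i])
            if i != j and d < ds:
                # fitter neighbours attract, worse neighbours repel
                induced += (K[j] - K[i]) * (X[j] - X[i]) / (d + 1e-12)
        V_new[i] = (w * V[i]                            # inertia
                    + v_max * induced                   # neighbour-induced motion
                    + v_max * (food - X[i])             # foraging toward food
                    + d_max * rng.uniform(-1, 1, dim))  # random diffusion
    return X + V_new, V_new
```

Iterating this step while taking the best krill's position as a food proxy contracts the swarm toward dense, high-fitness regions, mirroring the two herding goals described in the text.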

Fish-Swarm Algorithm

Li et al. have proposed an evolutionary computation technique called artificial fish-swarm optimization (AFSO) [164]. AFSO aims to find the global optimum through local searches by individual fish, mimicking the swarming, following, and preying behaviors of fish. The states of the environment and of the fish dictate each fish’s behavior; in turn, a fish’s behavior influences the environment and the activities of other fish. Similar to GA, AFSO can solve complex, high-dimensional, nonlinear problems without gradient information, but with faster convergence and less parameter tuning. Because AFSO does not incorporate mutation or crossover, it performs faster. AFSO is also a population-based optimizer: the optimum is searched for iteratively from an initial set of randomly generated solutions [165].
Fish realize their external perception mainly through vision, illustrated in Figure 40. X denotes the current state of a fish, visual denotes the visual distance, and Xv denotes an instantaneous position within view. A fish moves to Xnext when the visual position is better than its current position; otherwise, it continues to inspect the environment. Fish gather more information about the solution space with every visual comparison they make. The fish model includes several variables and functions. The variables are the position of the fish, the step length, the distance the fish can “see”, the number of attempts, and the crowd factor, which lies between 0 and 1. The functions, denoted AF, mimic the behaviors of real fish and include AF_Swarm, AF_Follow, AF_Prey, AF_Move, AF_Evaluate, and AF_Leap [165]. Despite being one of the best swarm intelligence algorithms, AFSO suffers from lower convergence speed, higher time complexity, lack of agent memory, and difficulty differentiating between global and local searches. Its advantages include global search ability, robustness, insensitivity to initial values, and tolerance of parameter settings [166]. A comprehensive review of the fish swarm algorithm, including its challenges and applications, was conducted by Neshat et al. [165].
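The AF_Prey behavior described above, i.e., inspecting random points within the visual distance and moving toward a better one, can be sketched as follows. The `visual`, `step`, and `try_number` values are illustrative assumptions, and the fallback random move stands in for AF_Move.

```python
import numpy as np

def af_prey(cost, x, visual=1.0, step=0.3, try_number=5, rng=None):
    """One AF_Prey move: inspect points within view, move toward a better one.

    If no inspected point X_v improves on the current state after
    `try_number` attempts, the fish takes a small random move (AF_Move).
    """
    rng = rng or np.random.default_rng(0)
    for _ in range(try_number):
        xv = x + visual * rng.uniform(-1, 1, x.shape)   # point within view
        if cost(xv) < cost(x):
            direction = (xv - x) / (np.linalg.norm(xv - x) + 1e-12)
            return x + step * rng.random() * direction  # move toward X_v
    return x + step * rng.uniform(-1, 1, x.shape)       # fallback AF_Move
```

In the full algorithm this preying step competes with AF_Swarm and AF_Follow, and the crowd factor prevents fish from over-concentrating in one region.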

Other Algorithms

Another popular aquatic animal-based algorithm is salp swarm optimization (SlpSO), which has two main versions: the single-objective salp swarm algorithm (SSlpSA) and the multi-objective salp swarm algorithm (MSlpSA). SlpSO was inspired by the swarming behavior of salps when navigating and foraging in oceans. SlpSO is effective at finding optimal solutions, as evidenced by its performance on various optimization test functions, which shows that it improves the randomly chosen initial solution set and converges toward the optimum. In fact, SlpSO approximates the Pareto optimal solutions with good coverage and high convergence [167].
Most aquatic-based algorithms are inspired by fish, such as the artificial fish school algorithm (AFSA) [168]. Filho et al. have also developed fish school search (FSS), which draws on the collective behavior that increases the mutual survivability of fish [169]. The sailfish optimizer (SO) is a similar fish-based algorithm that mimics the behavior of hunting sailfish: the sailfish population intensifies the search, while the sardine population diversifies the search space. SO performs well on scalable, non-separable, and non-convex tests and is useful for solving problems with unknown and unconstrained search spaces, as evidenced by its performance on five real-world optimization problems [170].
The great salmon run (GSR) was developed, based on salmon, as a powerful tool for optimizing complex multi-dimensional and multimodal problems [171]. The mouthbrooding fish algorithm (MBFA) has also been developed; it uses the behaviors of mouthbrooding fish, namely protection, dispersion, and movement, as the underlying pattern for optimizing a problem. MBFA constructed desirable results when evaluated on the CEC2013 and CEC2014 benchmark functions, showing its promise in solving optimization problems [172]. Another fish-based algorithm is the yellow saddle goatfish algorithm (YSGA), which is based on the goatfish hunting strategy. Goatfish cover an entire hunting region by dividing the population into various subpopulations. All fish in a subpopulation participate in the hunt as either chasers or blockers: chaser fish explore the search space to find prey, while blockers move in formation to prevent the prey from escaping. YSGA performs accurately, efficiently, and robustly when compared to other evolutionary techniques [173].
Manta ray foraging optimization (MRFO) was developed by Zhao et al. to mimic the unique foraging behaviors of manta rays, namely somersault, chain, and cyclone foraging. MRFO features highly accurate results at low computational cost and outperforms competing optimization algorithms [174]. Yilmaz et al. have developed the electric fish optimization algorithm (EFOA) based on the communication and prey-location behaviors of electric fish. As nocturnal animals with poor eyesight, electric fish rely on electrolocation to sense their surroundings, and their passive and active electrolocation abilities can be used to balance local and global searches. Simulation results indicate that EFOA competes with other optimizers such as SA, GA, DE, PSO, and ABC [175]. Fish electrolocation optimization (FEO) is another meta-heuristic technique, inspired by the electrolocation behaviors observed in fish. The elephantnose fish builds a geometrical map from electrical images generated by pulses discharged from an organ in its tail and then seeks food based on the capacitance values in the electrical image. Sharks are capable of electrolocation for prey-seeking as well: they can sense the electrical waves caused by the contraction of a prey’s muscles [176].
Some algorithms were developed based on the behaviors of dolphins. The dolphin echolocation algorithm (DEA) is inspired by the biological sonar, or echolocation, that dolphins use in hunting and navigation. DEA performs better than other optimization methods and requires little fine-tuning [177]. Dolphin partner optimization (DPO) is based on the behavior of dolphins in a team, such as role recognition, communication, and leadership [178]. Other noteworthy capabilities of dolphins are information exchange, cooperation, echolocation, and division of labor. The dolphin swarm algorithm (DSA) combines these biological characteristics and living habits with swarm intelligence to solve optimization problems. Studies show that DSA performs better than similar swarm-based algorithms: it is free from local optima, exhibits periodic and first-slow-then-fast convergence, and makes few demands in benchmarking. DSA is particularly useful in optimization problems with fewer search agents and more costly fitness evaluations [179]. Another similar stochastic algorithm, developed by Yong et al. and based on dolphin swarms, is the dolphin swarm optimization algorithm (DSOA) [180]. Serani et al. have also developed the dolphin pod optimization algorithm (DPOA) for unconstrained single-objective minimization, based on a simplified social model of a dolphin pod in search of food [181].
Other major groups of aquatic-animal-based algorithms are inspired by aquatic predators such as sharks and whales. The shark smell optimization method (SSOM) mimics the olfactory prey-location behavior of sharks; its capabilities have been validated by solving load frequency control problems in electrical power systems [182]. Ebrahimi et al. have also developed the sperm whale algorithm (SWA), a population-based optimization technique that mimics the lifestyle of sperm whales and uses both the worst and best answers to reach the optimum [183]. The marine predators algorithm (MPA) is a similar algorithm inspired by the Lévy and Brownian foraging movements of marine predators and the maximum encounter rate between predator and prey [184]. Biyanto et al. have developed the killer whale algorithm (KWA) based on the life of the killer whale; studies show that KWA outperformed algorithms such as GA, the imperialist competitive algorithm (ICA), and SA in black-box optimization benchmarking [185]. The whale swarm algorithm (WSA) was developed by Zeng et al. based on whales’ ultrasonic communication and hunting behavior. WSA has been compared with several popular meta-heuristic algorithms on comprehensive performance metrics, and the results show that it is competitive [186]. Masadeh et al. have also developed the vocalization of humpback whale optimization algorithm (VHWOA), which mimics the vocalization behavior of humpback whales and is used to improve task scheduling in cloud computing environments. The VHWOA scheduler is based on a multi-objective model that maximizes resource usage while reducing makespan, energy, and cost [187].
There are numerous other algorithms inspired by aquatic organisms, such as the artificial algae algorithm (AAA), which was developed by modeling the living behaviors of microalgae, a photosynthetic species [188]. The coral reefs optimization algorithm (CROA) was developed based on the growth, reproduction, and fighting behaviors of corals [189]. Eesa et al. have also developed the cuttlefish algorithm (CA) based on the color-changing behavior of cuttlefish [190]. Another aquatic-based algorithm is mussels wandering optimization (MWO), developed by An et al.; MWO simulates the leisurely locomotion with which mussels form bed patterns in their habitat [191]. The tunicate swarm algorithm (TSA) was developed by Kaur et al. based on the jet propulsion and swarm behaviors of tunicates during navigation and foraging [192]. Masadeh et al. have also developed the sea lion optimization algorithm (SLOA) based on the hunting behavior of sea lions, whose whiskers are used to detect prey [193]. Another example of a marine-based algorithm is the barnacles mating optimizer algorithm (BMOA), which was developed by Sulaiman et al. based on the mating behavior of barnacles [194]. Another fish-inspired algorithm is the anglerfish algorithm (AA), which was developed based on the mating behavior of anglerfish [195]. Circular structures of puffer fish (CSPF), developed by Catalbas et al., is inspired by the courting behavior of male puffer fish toward females [196]. Pontogammarus maeoticus swarm optimization (PMSO) is another example, inspired by the foraging behavior of Pontogammarus maeoticus [197]. The last example in this group is the water-tank fish algorithm (WTFA), developed by Sukoon et al. [198].

2.1.9. Terrestrial Animals-Based Algorithms

Terrestrial animals are among the most popular sources of inspiration for nature-inspired algorithms, accounting for around 20 percent of them. The most popular algorithms in this category, based on citations, are the grey wolf optimizer (GWO), cat swarm optimization (CSO), and the lion optimization algorithm (LOA). Figure 41 illustrates the most cited terrestrial animal-based algorithms.

Grey Wolf Optimizer

Grey wolves (Canis lupus) are apex predators at the top of the food chain. They live in hierarchical packs of about 5–12 wolves led by an alpha pair, a male and a female. The alphas dictate where the pack hunts, sleeps, and so on. The beta wolves are subordinate to the alphas and aid in pack decision-making. The lowest-ranking wolves are the omegas, who often serve as scapegoats, eat last, and must submit to the higher-ranking wolves. The delta, or subordinate, wolves represent the rest of the pack; they submit to the alphas and betas but dominate the omegas [199]. Figure 42 illustrates the hierarchy of grey wolves.
Grey wolves exhibit other social behaviors, such as group hunting. Wolves hunt in the following phases: track-chase-approach, pursue-encircle-harass, and attack, which are illustrated in Figure 43 [199].
The mathematical model describing the encircling behavior of grey wolves is introduced as follows:
$$\vec{D}=\left|\vec{C}\cdot\vec{X}_{p}(t)-\vec{X}(t)\right|$$
$$\vec{X}(t+1)=\vec{X}_{p}(t)-\vec{A}\cdot\vec{D}$$
where t denotes the current iteration, A and C are coefficient vectors, Xp is the position vector of the prey, and X is the position vector of a grey wolf. The vectors A and C are calculated as follows:
$$\vec{A}=2\vec{a}\cdot\vec{r}_{1}-\vec{a}$$
$$\vec{C}=2\vec{r}_{2}$$
where the components of a are decreased linearly from 2 to 0 over the course of iterations, and r1 and r2 are random vectors in [0, 1]. Grey wolves can find and encircle prey; the alpha normally leads the hunt, while the beta occasionally participates. In an abstract search space, however, the location of the optimum solution (prey) is not known, so it is assumed that the alpha, beta, and delta wolves have better knowledge of it than the omegas. The positions of all wolves are therefore updated based on the positions of the alpha, beta, and delta. This behavior is mathematically simulated as follows:
$$\vec{D}_{\alpha}=\left|\vec{C}_{1}\cdot\vec{X}_{\alpha}-\vec{X}\right|,\quad \vec{D}_{\beta}=\left|\vec{C}_{2}\cdot\vec{X}_{\beta}-\vec{X}\right|,\quad \vec{D}_{\delta}=\left|\vec{C}_{3}\cdot\vec{X}_{\delta}-\vec{X}\right|$$
$$\vec{X}_{1}=\vec{X}_{\alpha}-\vec{A}_{1}\cdot\vec{D}_{\alpha},\quad \vec{X}_{2}=\vec{X}_{\beta}-\vec{A}_{2}\cdot\vec{D}_{\beta},\quad \vec{X}_{3}=\vec{X}_{\delta}-\vec{A}_{3}\cdot\vec{D}_{\delta}$$
$$\vec{X}(t+1)=\frac{\vec{X}_{1}+\vec{X}_{2}+\vec{X}_{3}}{3}$$
Grey wolf hunts end when the pack attacks the prey, which is mathematically simulated by decreasing the values of a and, with them, A. A is a random value in the interval [−2a, 2a], where a decreases from 2 to 0 over the course of iterations. When A lies in [−1, 1], a search agent’s next position can be anywhere between its current position and the prey. Thus, GWO compels the search agents to advance toward the prey, using the alpha, beta, and delta solutions as a guide, at the cost of possible stagnation in local solutions. The stagnation can be remedied by including more operators that mediate exploration [199].
GWO has been tested on 25 benchmark functions, and the results showed that it competes well with popular heuristics such as GSA, DE, PSO, ES, and EP. GWO exhibited superior exploitation on the unimodal test functions and strong exploratory ability on the multimodal test functions, which also demonstrated its capacity to avoid local optima; its convergence behavior was likewise confirmed. The GWO algorithm has displayed good performance in unknown and challenging search spaces [199].
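The equations above translate almost directly into code. The following is an illustrative Python sketch of GWO (not the authors' implementation); the population size, iteration count, and bounds are assumed defaults chosen for the example.

```python
import numpy as np

def gwo_minimize(cost, dim, n_wolves=20, n_iter=300, lb=-10.0, ub=10.0, seed=0):
    """Sketch of GWO: each wolf averages the X1, X2, X3 guidance of the
    alpha, beta, and delta while `a` decays linearly from 2 to 0."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_wolves, dim))
    for t in range(n_iter):
        fit = np.apply_along_axis(cost, 1, X)
        order = np.argsort(fit)
        leaders = [X[order[k]].copy() for k in range(3)]  # alpha, beta, delta
        a = 2.0 - 2.0 * t / n_iter                        # linear decay 2 -> 0
        for i in range(n_wolves):
            Xnew = np.zeros(dim)
            for leader in leaders:
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2 * a * r1 - a                        # A = 2a*r1 - a
                C = 2 * r2                                # C = 2*r2
                D = np.abs(C * leader - X[i])             # D = |C*X_lead - X|
                Xnew += leader - A * D                    # accumulate X1 + X2 + X3
            X[i] = np.clip(Xnew / 3.0, lb, ub)            # average of the three
    fit = np.apply_along_axis(cost, 1, X)
    return X[fit.argmin()], float(fit.min())

best, val = gwo_minimize(lambda p: float(np.sum(p**2)), dim=3)
```

As a decays past 1, the |A| < 1 regime dominates and the pack collapses onto the leaders, reproducing the exploitation phase described above.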

Shuffled Frog-Leaping Algorithm

The shuffled frog-leaping algorithm (SFLA) mimics the global exchange of information and the development of memes through individual interactions, finding solutions to combinatorial optimization problems via heuristic search. Memes and genes move to new hosts differently: genes are passed on through sexual reproduction, while memes spread through communication between hosts (modeled as frogs in the optimizer). A population is composed of individuals, each with an associated fitness, and is described as a set of meme hosts, where each host carries a meme composed of one or more memotypes; memes and memotypes are thus analogous to chromosomes and genes. Time proceeds in discrete loops, which serve as the time step in SFLA [201].
In the frog analogy, frogs seek the stones in a river that lie closest to the largest food source. Frogs improve their individual memes by communicating with one another, and when a frog’s meme is improved, its position moves closer to the optimum. Improving a meme corresponds to changing the frog’s leaping step size, i.e., changing the memotype, and the change of the memotype takes discrete values [201].
SFLA combines stochastic and deterministic methods. As in PSO, the heuristic search is guided by response-surface information, while random elements keep the search pattern robust and flexible. SFLA begins with a population Ω of frogs inhabiting a swamp. The randomly generated population evolves and searches the space separately within partitioned memeplexes. Memetic evolution proceeds through the transfer of ideas between frogs, which improves their ability to achieve the goal. To produce a competitive population of search agents, frogs with higher-quality memes influence the growth of other frogs more than frogs with poor memes; frogs with better memes are chosen using a triangular probability distribution. Frogs change their memes according to the best frog in the local memeplex or the global best. A frog’s new position is determined by small changes in its memotype(s), i.e., its leaping step size, and a frog that improves its position returns to the memeplex to build on the new information it has gathered. SFLA and GA differ in how the population is updated when new insights are discovered: in GA the entire population must first be modified, whereas in SFLA new insight is instantaneously available to all search agents. After a certain number of memetic evolutions, the memeplexes are shuffled and re-formed; this shuffling improves the memes because ideas from different regions are introduced into the population. The migration-mediated exchange of designs and ideas removes regional bias from the population’s evolution and shortens the search process. Figure 44 illustrates the flowchart of the SFLA [201].
SFLA and GA were compared in a series of tests on 2 applications and 11 theoretical test functions; SFLA outperformed or matched GA on nearly all of them and demonstrated higher robustness in locating the global optimum. On four engineering problems, the results were compared with published results from other optimization algorithms, and SFLA proved robust and fast-converging. SFLA thus shows encouraging results as a robust meta-heuristic process; it also seems to perform well on mixed-integer problems despite having been developed for combinatorial problems. Like other GAs, SFLA may be a promising candidate for parallelization [201]. Due to this promising performance, improved versions of the algorithm have been developed [202,203,204].
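The core local step, in which the worst frog leaps toward the memeplex best, falls back to the global best, and is finally replaced by a random frog, can be sketched as follows. The maximum step size and the replacement bounds are illustrative assumptions.

```python
import numpy as np

def sfla_memeplex_step(frogs, cost, global_best, step_max=2.0, rng=None):
    """One memetic step inside a single memeplex (sketch).

    The worst frog leaps toward the memeplex best; if that fails, toward
    the global best; if that also fails, it is replaced by a random frog.
    """
    rng = rng or np.random.default_rng(0)
    fit = np.apply_along_axis(cost, 1, frogs)
    w, b = int(fit.argmax()), int(fit.argmin())    # worst and best frogs
    for target in (frogs[b], global_best):
        step = np.clip(rng.random() * (target - frogs[w]),
                       -step_max, step_max)        # bounded leap
        cand = frogs[w] + step
        if cost(cand) < fit[w]:                    # leap accepted
            frogs[w] = cand
            return frogs
    # both leaps failed: replace the worst frog with a random one
    frogs[w] = rng.uniform(frogs.min(), frogs.max(), frogs.shape[1])
    return frogs
```

In the full algorithm, several memeplexes run this step independently for a fixed number of evolutions and are then shuffled together, which is what introduces ideas from other regions into each memeplex.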

Cat Swarm Optimization

Cats maintain a high level of alertness even when they are at rest. Cat swarm optimization (CSO) mimics the seeking (resting) and tracing (chasing) behaviors of cats. In the algorithm, each agent cat is described by its position in an M-dimensional space, a velocity for every dimension, a fitness value, and a flag indicating whether it is in seeking or tracing mode. The best solution found by any cat is kept until all iterations are completed. Four parameters define the seeking mode: the seeking memory pool (SMP), the seeking range of the selected dimension (SRD), the counts of dimension to change (CDC), and self-position consideration (SPC) [205].
SMP defines the memory size (the number of seeking points) for each cat; from this memory pool, the cat picks one point to move to. SRD defines the mutative ratio for the selected dimensions and dictates that, when a dimension mutates, its new value will not fall outside a predetermined range. CDC dictates how many dimensions are varied. SPC is a Boolean variable that determines whether the point currently occupied by the cat can be kept as a candidate solution; SPC does not influence the value of SMP [205].
Targets are traced by the cats in the tracing mode. When tracing, cats move based on the velocities attached to each dimension of their positions. Real cats spend most of their time resting yet remain alert, observing their surroundings from the same position or moving cautiously and deliberately, and chase targets only with short bursts of running. The mixture ratio (MR) dictates what fraction of cats are placed in tracing mode; MR is thus set to a small value to ensure cats mostly seek [205]. Figure 45 illustrates the flowchart of the CSO.
CSO, PSO, and PSO with a weighting factor have been applied to six test functions to compare their performance. The experiments demonstrate that CSO outperforms PSO with a weighting factor [205].
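The seeking/tracing split can be sketched as follows. The parameter names (MR, SMP, SRD, CDC) follow the text, but the numeric values, the multiplicative mutation rule, and the sphere objective are illustrative assumptions rather than settings from [205].

```python
import numpy as np

# Toy CSO sketch: a fraction MR of cats trace (velocity-driven chase of the
# best cat); the rest seek (evaluate SMP mutated copies of their position
# and keep the best copy). All numeric values are illustrative only.
def cso(obj, dim, n_cats=20, iters=100, mr=0.2, smp=5, srd=0.2, cdc=0.8,
        lo=-5.0, hi=5.0, rng=np.random.default_rng(1)):
    pos = rng.uniform(lo, hi, (n_cats, dim))
    vel = np.zeros((n_cats, dim))
    best = min(pos, key=obj).copy()
    for _ in range(iters):
        trace = rng.random(n_cats) < mr          # MR picks the tracing cats
        for i in range(n_cats):
            if trace[i]:
                # Tracing mode: accelerate toward the best cat seen so far.
                vel[i] += rng.uniform(0, 2) * (best - pos[i])
                pos[i] = np.clip(pos[i] + vel[i], lo, hi)
            else:
                # Seeking mode: make SMP copies, mutate a CDC fraction of
                # dimensions by up to +/- SRD of their value, keep the best.
                copies = np.repeat(pos[i][None, :], smp, axis=0)
                mask = rng.random((smp, dim)) < cdc
                copies += mask * copies * rng.uniform(-srd, srd, (smp, dim))
                pos[i] = min(np.clip(copies, lo, hi), key=obj)
            if obj(pos[i]) < obj(best):
                best = pos[i].copy()
    return best

best = cso(lambda x: float(np.sum(x**2)), dim=2)
```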

Other Algorithms

Some algorithms are inspired by monkeys. Bansal et al. have developed the spider monkey optimization algorithm (SMOA) based on spider monkey foraging behaviors. Spider monkeys exhibit a fission–fusion social structure: animals in such societies split into smaller parties or merge into larger groups based on food availability. SMOA is thus a fission–fusion-social-structure-based swarm intelligence algorithm [206]. Monkey search (MS) is another similar algorithm developed by Mucherino et al., which mimics tree-climbing and food-locating behaviors in monkeys. Neighboring tree branches are conceptualized as perturbations between two possible solutions in the search space. Desirable solutions are found when monkeys mark and update the branches as they climb. MS shows competitive performance in simulating protein molecule folding and optimizing Lennard-Jones and Morse clusters when compared to other popular meta-heuristic methods [207].
Monkey king evolution (MKE) was developed by Meng et al., which outperforms PSO variants in convergence speed, optimization accuracy, and robustness. MKE outperformed other meta-heuristic methods in the CEC2008 benchmark functions for large-scale optimization problems [208]. Mahmood et al. have also developed blue monkey (BM) inspired by blue monkey swarms. The BM algorithm determines the number of males in the group. In nature, there is usually only one adult male outside of the breeding season. The BM algorithm has been verified with 43 benchmark test functions. The performance of the BM algorithm was compared to other meta-heuristic strategies such as BBO, GSA, PSO, and ABC. The BM algorithm displayed a competitive performance against the other algorithms. BM also shows promising results in optimization problems with unidentified or restricted search spaces [209].
Lions have inspired various optimization algorithms, such as the lion optimization algorithm (LOA) developed by Yazdani et al. LOA is based on the lifestyle and cooperation characteristics of lions. Benchmark simulation results show that LOA outperforms similar algorithms [210]. The lion’s algorithm (TLA) mimics the social behaviors of lions. Optimal solutions are found in the search space through the interpretation of lion social orders. TLA uses binary and integer structured agents to solve single and multivariable cost functions. TLA can perform across different sizes of the search space [211]. Wang et al. have developed the lion pride optimizer (LPO) based on lion group theory and pride evolution. The state of each lion contributes to the overall pride’s health, which coexists with competition within and between the male lions of a pride. LPO incorporates the dominant behavior of lion breeding to solve optimization problems. The alpha lion has access to most of the breeding resources. When an alpha lion is replaced, the new group of lions eliminates the cubs bred by the last alpha, which aids in finding the optimum solution. Studies show that LPO is not sensitive to parameter tuning, displaying LPO’s robustness as an optimization algorithm [212]. Kaveh et al. have also developed a similar algorithm called the lion pride optimization algorithm (LPOA) [213].
Predatory animals and their hunting behaviors are another source of inspiration for developing nature-inspired algorithms. One such popular hunter is the wolf, which was studied in the GWO section. Wolf search (WS) [214], wolf pack algorithm (WPA) [215], and dominion optimization algorithm (DOA) [216] are three examples of wolf-based algorithms, which have various improved versions [217]. Another terrestrial-hunter-based algorithm is the spotted hyena optimizer (SHO), which is based on the social relationship between spotted hyenas as well as their collaborative behavior [218]. Another example is coyote optimization (CO), which is inspired by the Canis latrans species [219]. Polap et al. have developed the polar bear optimization algorithm (PBOA) based on the hunting techniques of polar bears in harsh arctic conditions [220]. Another hunting-animal-based algorithm is the cheetah-based optimization algorithm (CbOA), which was developed based on the social behaviors of cheetahs [221]. Another similar algorithm is the cheetah chase algorithm (CCA) [222]. The jaguar algorithm (JA) developed by Chen et al. can also be mentioned in this section. JA mimics the hunting behaviors of jaguars in the exploitation and exploration phases of the optimization process [223]. The African wild dog algorithm (AWDA) is another example, inspired by the communal hunting behavior of African wild dogs [224]. Finally, the military dog optimizer (MDO) can be mentioned in this group. MDO models the searching capability of trained military dogs to solve optimization problems [225].
Humans also fall into this category of algorithms. Zhang et al. have developed the human-inspired algorithm (HIA), which mimics the use of binoculars and cell phones by mountain climbers to find the highest mountain in the range [226].
There are some algorithms based on elephants, such as elephant herding optimization (EHO), which was developed based on the herding behavior of elephant groups [227]. The elephant search algorithm (ESA) is based on similar concepts [228].
Some algorithms are inspired by squirrels. One example is the squirrel search algorithm (SSA) developed by Jain et al. based on the dynamic foraging behavior of flying squirrels and their gliding motion [229]. Another example is flying squirrel optimizer (FSO) which uses a similar approach and considers social connections between squirrels [230].
Meerkats have also been considered in designing optimization algorithms. The meerkat-inspired algorithm (MIA) was developed based on the behaviors of meerkats [231]. The meerkat clan algorithm (MCA) uses a similar approach to the MIA, as it models the personal and social behaviors of meerkats to solve optimization problems [232].
There are many algorithms in this group based on different species. The sheep flocks heredity model algorithm (SFHMA) models the heredity of sheep flocks in a prairie [233]. Another similar algorithm is the shuffled shepherd optimization algorithm (SSOA) which imitates the behavior of a shepherd [234]. Another example is the camel optimization algorithm (COA), which is inspired by camels’ behaviors while traveling through a harsh desert environment [235]. Motevali et al. have developed wildebeests herd optimization (WHO) based on the path-planning behavior of African wildebeests during migration [236]. The side-blotched lizard algorithm (SBLA) is also another example developed by Maciel et al. based on mating strategies and population balance in side-blotched lizards [237]. Another terrestrial-animal-based algorithm is the raccoon optimization algorithm (ROA) which is developed based on the rummaging behaviors of real raccoons for food [238]. Tian et al. have developed an algorithm based on the habitual characteristics of the rhinoceros called the rhinoceros search algorithm (RSA) [239]. Xerus optimization algorithm (XOA) was developed based on cape ground squirrels’ lifestyle in groups [240]. Even earthworms have been used by Wang et al. to solve optimization problems. Earthworm optimization algorithm (EOA) models the reproduction of earthworms to generate a population during the optimization process [241].
The red deer algorithm (RDA) is another example, developed by Fathollahi et al. based on the mating behavior of the Scottish red deer [242]. Taherdangkoo et al. have also developed the blind naked mole-rats algorithm (BNMRA) based on the social behavior of blind naked mole-rat colonies in finding food and protecting themselves against enemies [243]. Another example is rhino herd (RH), which is a swarm-based meta-heuristic algorithm based on the herding behavior of rhinos [244]. The donkey and smuggler optimization algorithm (DSOA) is another example, based on the searching behavior of donkeys [245]. The African buffalo optimization (ABO) algorithm can also be mentioned, which was developed by Odili et al. based on the African buffalo’s strategy in searching for pastures [246]. The last algorithm in this list is the jumping frogs optimization (JFO), a PSO-based algorithm in which agents make random frog-like jumps, which speeds up finding an optimal solution [247].

2.1.9. Plant-Based

In this category, plant-based algorithms based on agriculture are reviewed. A majority of these plant-based algorithms are based on trees. There are also algorithms based on flowers, fruits, and other kinds of plants. The most popular plant-based algorithm based on citations is the flower pollination algorithm. Figure 46 illustrates the most popular plant-based algorithms based on their citations.

Flower Pollination Algorithm

Nearly 80% of all plant species in nature, around 250,000 species, are flowering plants. The main purpose of flowers is pollination-based reproduction. Pollen transfer is the driving force in flower pollination and is often mediated by pollinators such as birds, insects, bats, and other animals. There exist insect–plant symbiotic pairs that have coevolved into partnerships. Pollination can be classified as abiotic or biotic. Biotic pollination is carried out by biological species such as insects and animals and constitutes close to 90% of all pollination. The other 10% of pollination happens without the intervention of any pollinator, e.g., through wind and water diffusion. Honeybees are typical pollinators that exhibit flower constancy, tending to visit certain flower species exclusively. The flower constancy displayed by honeybees seems to have evolutionary advantages since the maximum amount of pollen is transferred within the same flower species. Flower constancy also benefits the pollinators, who can be confident that certain flower species will fill their nectar quotas, lowering their memory and exploration costs [248].
Self-pollination and cross-pollination are the two reproduction mechanisms employed by flowering plants. Cross-pollination (allogamy) occurs when a flower is fertilized by pollen from a different plant. Self-pollination occurs when a flower is fertilized by pollen from the same flower or from another flower of the same plant, as in the case of peach flowers, and typically takes place when reliable pollinators are absent. Biotic cross-pollination can span long distances when pollinators such as bats, birds, and bees fly far; these long-distance pollinators can be considered global pollinators [248]. Figure 47 illustrates different pollination types.
In the flower pollination algorithm (FPA), it is assumed that each plant only has one flower that only produces one pollen gamete for simplicity. Therefore, a solution xi is equivalent to a flower and/or a pollen gamete. Global and local pollination are the two main steps in the algorithm. In the global pollination step, pollinators carry pollen over a long distance, which ensures the fertilization and reproduction of the best solution, which is represented as g∗. The first rule and flower constancy can be mathematically represented as follows:
X_i^{t+1} = X_i^t + L \left( X_i^t - g^* \right)
where Xi(t) represents the ith pollen or Xi, the solution vector at the t-th iteration, and g∗ represents the best solution found so far among the current population. The parameter L represents the pollination strength. A Levy flight is simulated to mimic the efficiency of a pollinator’s long-distance flight. From the Levy distribution, L > 0 is drawn:
L \sim \frac{\lambda \, \Gamma(\lambda) \, \sin(\pi \lambda / 2)}{\pi} \, \frac{1}{s^{1+\lambda}}, \quad (s \gg s_0 > 0)
where Γ(λ) represents the gamma function, and the distribution is valid for large steps s > 0. Flower constancy and local pollination are represented as follows:
X_i^{t+1} = X_i^t + \epsilon \left( X_j^t - X_k^t \right)
where Xj(t) and Xk(t) represent pollen particles from different flowers of the same plant species, which emulates flower constancy in a limited neighborhood. With ϵ drawn uniformly from [0, 1], the pollen particles Xj(t) and Xk(t) essentially perform a local random walk since they come from the same population. Pollination occurs both globally and locally. Local pollen is more likely to pollinate neighboring flowers than those that are far away. To emulate this proximity effect, a switch probability p is used to toggle between intensive local pollination and global pollination [248].
The simulation results have shown that the FPA can outperform GA and PSO, and it features an exponential convergence rate. FPA is efficient because of flower constancy and long-distance pollination. Pollinators can leave the local search space to explore the overall search space. FPA converges quickly because the same flowers/solutions are frequently chosen due to flower constancy. The efficiency of the algorithm is ensured through the relationship between the components and g∗ (the best solution) [248]. The FPA has only a few parameters that must be tuned. Therefore, FPA is applicable in many optimization areas. FPA still needs some improvement to reduce its time cost and avoid premature convergence [249].
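A minimal FPA sketch following the two pollination rules above: global pollination via Levy flights relative to g∗, and local pollination as a random walk between two flowers of the population. The Levy exponent λ = 1.5, the switch probability p = 0.8, and Mantegna’s method for drawing Levy steps are common illustrative choices rather than values prescribed by the text.

```python
import numpy as np
from math import gamma, sin, pi

# Mantegna's algorithm for drawing signed Levy-distributed step lengths
# (an assumption here; the text only specifies the Levy distribution).
def levy(dim, lam=1.5, rng=np.random.default_rng(2)):
    sigma = (gamma(1 + lam) * sin(pi * lam / 2) /
             (gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / lam)

def fpa(obj, dim, n_flowers=25, iters=200, p=0.8, lo=-5.0, hi=5.0,
        rng=np.random.default_rng(2)):
    x = rng.uniform(lo, hi, (n_flowers, dim))
    g = min(x, key=obj).copy()                     # best solution g*
    for _ in range(iters):
        for i in range(n_flowers):
            if rng.random() < p:                   # global pollination
                cand = x[i] + levy(dim, rng=rng) * (x[i] - g)
            else:                                  # local pollination
                j, k = rng.choice(n_flowers, 2, replace=False)
                cand = x[i] + rng.uniform() * (x[j] - x[k])
            cand = np.clip(cand, lo, hi)
            if obj(cand) < obj(x[i]):              # greedy acceptance
                x[i] = cand
                if obj(cand) < obj(g):
                    g = cand.copy()
    return g

g = fpa(lambda x: float(np.sum(x**2)), dim=2)
```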

Invasive Weed Colonization

A plant is classified as a weed if it mainly grows in a human-occupied territory without being cultivated directly. Weeds cause trouble in the agricultural sector since they adapt to their environment and change in order to increase their own fitness [250]. Figure 48 illustrates examples of weeds.
The invasive weed colonization (IWC) algorithm has four main stages: initializing a population, reproduction, spatial dispersal, and competitive exclusion (where the best individuals are chosen). Initial solutions are disseminated in random positions over a problem space with d dimensions. Seed production is based on the fitness of a plant and a colony. For example, the least fit plant in a colony produces the minimum number of seeds, while the fittest plant produces the most seeds. A unique feature of IWC is the fact that while the fittest plants are allowed to reproduce, less fit plants may still reproduce in case their offspring is a useful solution. The seeds are randomly disseminated over the dimensions of the search space via normally distributed random numbers with changing variance and a mean of zero. In every generation, the standard deviation (SD), σ, is reduced from σinitial toward σfinal. A nonlinear schedule provided adequate performance and is shown as follows:
\sigma_i = \frac{(N - i)^n}{N^n} \left( \sigma_{\text{initial}} - \sigma_{\text{final}} \right) + \sigma_{\text{final}}
where N describes the total iteration count, i is the current iteration, σi represents the current SD, and n represents the nonlinear modulation index; the best value for n is 3 based on simulation results. This schedule ensures that the probability of seeds dropping far away decreases nonlinearly, clustering the fitter plants and phasing out the less fit plants [250]. The population of plants must be regulated so that one species does not invade the entire search space; thus, competition between plants is necessary. When pmax, the maximum plant limit in a colony, is reached, less fit plants start to be eliminated. When a colony reaches the maximum number of plants, the current population disseminates its seeds according to the aforementioned mechanisms. When the seeds find their positions, all weeds (parents and children alike) are ranked, and the less fit plants are eliminated until the maximum number of plants is reached. Thus, even low-fitness plants have the opportunity to reproduce [250]. Figure 49 illustrates the flowchart of the IWC algorithm.
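The standard-deviation schedule above can be written directly as a small function; the σ values and iteration counts below are illustrative:

```python
# Standard-deviation schedule for seed dispersal: sigma decreases
# nonlinearly from sigma_initial to sigma_final over N generations
# (n = 3 is the modulation index the text reports as best).
def iwc_sigma(i, N, sigma_initial, sigma_final, n=3):
    return ((N - i) ** n / N ** n) * (sigma_initial - sigma_final) + sigma_final

# At generation 0 the spread is widest; at generation N it equals sigma_final.
assert abs(iwc_sigma(0, 100, 3.0, 0.01) - 3.0) < 1e-9
assert abs(iwc_sigma(100, 100, 3.0, 0.01) - 0.01) < 1e-12
```

Because n = 3, the spread stays wide for early generations (favoring exploration) and collapses rapidly near the end (favoring exploitation around the fitter plants).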
The feasibility and efficiency of IWC for the optimization of two examples have been compared to four recent evolutionary algorithms: GA, MA, PSO, and SFLA. It was shown that IWC locates minima rapidly compared to the other methods. IWC also escapes from local optima and can solve non-differentiable complex objective functions. IWC performed at a satisfactory level on the test functions and competed with the other evolutionary algorithms. When the number of plants in a set increases, the mean of the solutions increases, but the percentage of success stays the same. The behavior of IWC seems to be optimal when the minimum and maximum seed numbers are set to 0 and 2, respectively [250].

Other Algorithms

A large group of plant-based algorithms is based on trees and forests. The tree seed optimization algorithm (TSOA) is a popular optimizer based on tree and seed relationships. The locations of feasible solutions in the n-dimensional search space are represented by the positions of seeds and trees. Each tree creates one or more seeds, and seeds with better fitness ratings replace trees with low fitness values. New locations for seeds are generated around either the parent tree’s own location or the best tree’s location, a choice controlled by the search tendency (ST) parameter at each iteration. The exploitation and exploration abilities of the TSOA are balanced. When tested on numeric function optimization, TSOA outperformed similar meta-heuristic algorithms such as ABC, PSO, HS, FA, and BA and can be utilized for multilevel thresholding functions [254].
Ghaemi et al. have developed the forest optimization algorithm (FOA), which mimics trees that live for decades in the forest. FOA was developed to solve continuous nonlinear optimization functions. FOA simulates the spread of seeds by trees, whether they are deposited directly underneath a canopy, spread across a search space, or eaten by animals. The results of the experiments have shown the acceptable performance of FOA compared to GA and PSO [255]. The tree growth algorithm (TGA) is another algorithm that is inspired by trees competing for light and food. Convergence analysis and significance tests via nonparametric techniques have confirmed the efficiency and robustness of TGA. According to the experimental tests, TGA can be considered a successful meta-heuristic method and is suitable for optimization problems [256]. Li et al. have also developed the artificial tree algorithm (ATA), which is inspired by the growth law of trees. In ATA, the design variable is the branch position. The branch thickness is a solution indicator, and the branch itself is the solution. Updating the branches and organic material transport models is the main computing process of ATA. Based on simulation results, ATA is effective in dealing with various problems [257]. Natural forest regeneration (NFR) was developed by Moez et al. based on the natural behavior of the forests against the rapidly changing environment. This phenomenon is combined with the natural regeneration behaviors of forests [258].
Other algorithms are inspired by fruit such as strawberries. The plant propagation optimization algorithm (PPOA) is an example of a fruit-inspired algorithm that mimics the way strawberries and other plants propagate [259]. Merrikh-Bayat developed the strawberry algorithm (SA). Plants—such as strawberry plants—grow roots and runners to find minerals and water and for propagation purposes. Runners and roots can be used as tools for global and local searches. SA displays three main differences from other nature-inspired optimization methods: information exchange isolation between agents, duplication-elimination of agents during all iterations, and forcing all agents to move in small or large magnitudes. SA has the advantage of only having one parameter that needs to be fine-tuned. Simulations have shown that SA can very effectively solve complicated optimization problems [260]. Another similar algorithm is the mushroom reproduction optimization (MRO) algorithm, which was created by Bidar et al. to mimic the growth and reproduction behaviors of mushrooms in nature. Spores explore the search space to find rich areas to develop a colony. The experimental results have confirmed the ability of MRO to deal with complex optimization problems by discovering solutions with better quality [261].
Some algorithms are developed based on agriculture, such as farmlands and related materials. For example, the farmland fertility algorithm (FFA) can be mentioned here. FFA uses external and internal memory as well as partitioning farmland into different zones to find an optimal solution. Simulations have shown that the FFA often performs better than other meta-heuristic algorithms such as ABC, FA, HS, PSO, DE, BA, and the improved PSO [262]. The paddy field algorithm (PFA) [263], fertile field optimization algorithm (FFOA) [264], and targeted showering optimization (TSO) [265] are other examples of agriculture-inspired algorithms.
Other plant-based algorithms are based on root growth, such as the runner-root algorithm (RRA) [266], root tree optimization algorithm (RTOA) [267], root growth algorithm (RGA) [268], and root mass optimization (RMO) algorithm [269]. These algorithms work based on a model of the root’s growth in plants and trees, which strives to find rich soil and mineral sources. RMO is inspired by the Roger Newson growth model. RGA uses the L-system model, and other algorithms employ different approaches [268,269].
Other algorithms are developed based on the growth and reproduction of different plants. Plant growth optimization (PGO) uses a model of plant growth, which includes leaf growth, branching, spatial occupation, and phototropism [270]. Physarum optimization (PO) is a high-parallelism algorithm developed for minimum path finding, which uses a computation model based on the slime mold Physarum polycephalum [271]. An algorithm similar to PO is the Physarum-energy optimization algorithm (PEOA), which uses Physarum’s energy and biological model [272]. Another example is the saplings growing up algorithm (SGA), which was developed based on the sowing and development of saplings [273]. Sulaiman et al. have developed the seed-based plant propagation algorithm (SBPA) based on seed dispersion caused by birds and animals [274]. Another plant-based algorithm is the artificial plant optimization algorithm (APOA), which was developed for solving constrained optimization problems [275]. The waterweeds algorithm (WA) is another algorithm, based on the reproduction principle of waterweeds searching for water sources [276]. Gowri et al. have developed the bladderworts suction algorithm (BSA), a plant-intelligence-inspired algorithm based on the foraging and suction mechanism of bladderworts [277]. The photosynthetic algorithm (PA) was also developed by Okayama et al. based on photosynthesis in plants [278].

2.2. Ecosystem-Based

In this section, ecosystem and environment-inspired algorithms will be explored. The algorithms are inspired by natural phenomena such as the water cycle, sun, wind, rain, and general mechanics found in the ecosystem. Figure 50 illustrates the most popular ecosystem-based algorithms based on their citations.

2.2.1. Water Cycle Algorithm

The hydrologic (or water) cycle begins with rivers, which often form in high locations such as mountains. Glaciers or snow melt, and the runoff turns into rivers that eventually merge with the sea. Water evaporates from rivers, lakes, and oceans, and through plant transpiration. The evaporated water collects in the atmosphere and condenses into rain, thus continuing the water cycle. Figure 51 illustrates the water cycle process. In nature, most water returns to aquifers, which are subterranean water reserves. Water from an aquifer is sometimes called groundwater. Water in an aquifer flows downward and may discharge into a body of water. The cycle begins again as water evaporates from bodies of water or from biological beings (such as plants). First-order streams are the smallest branches, from which rivers originate. When two first-order streams join, the resulting river is called a second-order stream. A third-order stream is formed when two second-order streams meet. This process continues until a river reaches the sea, which is the lowest point in the environment.
Similar to other meta-heuristic algorithms, the water cycle algorithm (WCA) begins with random raindrops as an initial population. It is assumed that precipitation of rain occurs. The best raindrop is termed the sea. Good raindrops are selected to be rivers, and the others become streams that flow into the rivers, which later flow into the sea. The rivers assimilate some water from the streams, depending on the flow magnitude; the water volume in streams varies. The rivers flow to the lowest point, which is the sea. The algorithm generates streams, rivers, and a sea randomly. In an optimization problem with Nvar dimensions, an array of 1 × Nvar describes a raindrop. The raindrops are organized into a candidate matrix of size Npop × Nvar, where Npop represents the raindrop population size, and Nvar represents the number of design variables. The intensity of the flow used in assigning raindrops is calculated with the following equation:
NS_n = \mathrm{round} \left( \left| \frac{C_n}{\sum_{i=1}^{N_{sr}} C_i} \right| \times N_{\text{raindrops}} \right), \quad n = 1, 2, \ldots, N_{sr}
where NSn represents the number of streams that flow to the nth river or the sea, and Nsr represents the number of best raindrops chosen as rivers plus the sea; the raindrop with the lowest cost among the population is designated as the sea. Streams join together to form new rivers, or a stream may flow directly into the sea. The positions of a river and a stream are swapped if the stream holds a better solution than the river; a river can replace the sea in a similar fashion. The evaporation process prevents the algorithm from premature convergence. Figure 52 illustrates the flowchart of the WCA.
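The stream-assignment rule can be sketched as follows. This implements the equation as written; in the full WCA the raw costs are first normalized relative to the worst candidate, a detail omitted here, and the cost values below are illustrative.

```python
import numpy as np

# Stream-assignment rule: each of the Nsr best raindrops (the sea plus the
# rivers) receives a share of the remaining raindrops proportional to the
# magnitude of its cost C_n.
def assign_streams(costs_sr, n_raindrops):
    c = np.abs(np.asarray(costs_sr, dtype=float))
    share = c / c.sum()                      # |C_n / sum(C_i)|
    return np.round(share * n_raindrops).astype(int)

# Four rivers/sea with illustrative costs sharing 50 streams.
ns = assign_streams([4.0, 3.0, 2.0, 1.0], n_raindrops=50)
```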
Based on comparisons with other popular optimization methods, WCA can handle many constraints efficiently. WCA has a lower computational cost and generally obtains better solutions than popular optimization methods such as DE, PSO, and GA. The complexity of the problem determines the quality of the solution and the computational efficiency. WCA can achieve acceptably accurate and relatively efficient solutions in real-world problems that require a significant computational effort [279].

2.2.2. Other Algorithms

Other algorithms similar to WCA have been developed by researchers based on water cycles, rivers, clouds, and rain. An example is river formation dynamics (RFD), which was developed by Rabanal et al. based on river formation dynamics for solving NP-hard problems [280]. Water evaporation optimization (WEO) is another similar algorithm developed by Kaveh et al., which emulates the evaporation of water on surfaces of different wettability, as studied via molecular dynamics simulations. Studies indicate that WEO is highly competitive with other well-known meta-heuristics [281]. Kaboli et al. have developed a rainfall optimization algorithm (RFOA) based on the mechanics of raindrops, which is used to solve real-valued optimization problems. RFOA effectively searches a large domain to find an optimum solution. Studies show RFOA performs well enough to solve both constrained and unconstrained engineering problems effectively. Furthermore, it has been shown that RFOA is robust [282].
Hydrological cycle algorithm (HCA) was developed by Wedyan et al., which mimics the continuous water motion in the environment. Various raindrops pass through the different phases of the hydrological cycle. Every stage of the algorithm prevents premature convergence and solution generation. The solution quality is improved through communication between droplets. The simulation results have shown that HCA is effective and competes with other popular algorithms. HCA was shown to converge to a global solution while escaping local optima [283]. Gao-Wei et al. have developed the atmosphere clouds model optimization (ACMO), which is inspired by the behavior of clouds in nature, and it has been shown to have a certain advantage in solving multimodal functions compared to GA and PSO [284]. Artificial raindrop algorithm (ARA) is another algorithm that emulates the changes that a raindrop goes through, such as condensation, precipitation, and collisions. The algorithm features five corresponding operations. ARA performs better than DE, ABC, PSO, GSO, and CS in finding the optimum of stable linear systems [285].
Other algorithms have been developed based on natural phenomena such as lightning or wind; e.g., the lightning search algorithm (LSA) was developed by Shareef et al. based on lightning and the step leader propagation mechanism, using the concept of projectiles to solve constrained optimization problems [286]. Nematollahi et al. have developed lightning attachment procedure optimization (LAPO), which is based on the lightning attachment procedure, the unpredictable trajectory of lightning in leaders, branch fading features, and upward leader propagation. The striking point of the lightning is the final optimum result. LAPO does not require parameter tuning and usually does not get stuck in local optima. LAPO displayed good performance in the optimization of five classical engineering problems [287].
Wind driven optimization (WDO) is another example of a population-based optimizing algorithm; it is based on the position and velocity of air parcels and uses the equations of atmospheric motion to update their positions [288]. Another similar algorithm is the hurricane-based optimization algorithm (HOA), which is based on the observation of hurricanes, radial wind, and pressure profiles. In HOA, the central zone, or eye, is a low-pressure zone from which air parcels move out in a spiral fashion. The optimal solution is found by the air parcels aiming to find a new low-pressure zone. HOA performed very well on various benchmark functions used to test its optimization capabilities [289].
Other algorithms are inspired by the ecosystem, such as the artificial ecosystem-based optimization (EBO) [290], artificial ecosystem algorithm (AEA) [291], and the sunshine algorithm (SshA), which is inspired by sunlight shining through space [292]. Hosseini et al. have developed the volcano eruption algorithm (VEA) based on simulations of the volcanic eruption process for solving discrete and continuous problems, which has shown remarkable performance in solving NP-hard problems [293].

2.3. Social-Based and Others

Many algorithms have been developed based on human-related social behaviors such as thinking, collaborating, and cultural habits. Specifications of swarm systems have been a topic of interest. In this section, algorithms with such themes have been studied. Figure 53 illustrates the most popular social-based algorithms.

2.3.1. Particle Swarm Optimization

Particle swarm optimization (PSO) was developed by Kennedy and Eberhart based on collective behaviors such as cooperation for mutual benefit. PSO is computationally inexpensive in speed and memory requirements and uses basic mathematical operations [294,295]. In PSO, a D-dimensional search space is established, where D represents the number of parameters to be optimized. Particles represent candidate solutions in the search space. The ith particle position is represented by vector xi:
x_i = \left[ x_{i1} \;\; x_{i2} \;\; \cdots \;\; x_{iD} \right]
where the swarm is the collection of the N candidate solutions:
X = \left[ x_1 \;\; x_2 \;\; \cdots \;\; x_N \right]
Trajectories in the search space are represented by particles using the equation of motion as follows:
$$x_i(t+1) = x_i(t) + v_i(t+1)$$
where t and t + 1 represent successive algorithm iterations, and vi is the velocity of a particle in vector form. The velocity determines how a particle travels the search space and comprises three terms: the first is inertia, which limits sudden changes in particle movement; the second is the cognitive component, which draws particles back toward the best solutions they have found; and the third is the social component, which draws particles toward the best solution found by the swarm. The particle velocity is defined as follows:
$$v_i(t+1) = v_i(t) + c_1 R_1 \left(p_i - x_i(t)\right) + c_2 R_2 \left(g - x_i(t)\right)$$
where pi represents the best position the particle has found, and g is the best position found globally by all of the particles. The magnitudes of the steps taken by the particles in search of local or global optima are controlled by c1 and c2, called the acceleration coefficients, which lie in the range 0 ≤ c1, c2 ≤ 4. The acceleration coefficients are also called the cognitive and social coefficients. The velocity update rule is influenced by the acceleration coefficients stochastically via R1 and R2, which are diagonal matrices composed of random numbers drawn from the range [0, 1]. Since the trajectories are influenced by the stochastic weighting of the social and cognitive terms as well as the attraction to the local and global optima, they are considered semi-random [296]. Figure 54 illustrates the flowchart of the PSO.
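As a concrete illustration, the update rules above can be sketched in a few lines of Python. This is a minimal sketch, not a reference implementation: the sphere objective, the search box [-5, 5], the population size, and the parameter values (an inertia weight w = 0.7 damping the inertia term, c1 = c2 = 1.5) are all illustrative assumptions.

```python
import numpy as np

def pso(f, dim=2, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal PSO sketch; parameter values are illustrative, not tuned."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))       # particle positions x_i
    v = np.zeros((n_particles, dim))                 # particle velocities v_i
    p = x.copy()                                     # personal bests p_i
    p_val = np.array([f(xi) for xi in x])
    g = p[p_val.argmin()].copy()                     # global best g
    for _ in range(iters):
        R1 = rng.random((n_particles, dim))          # stochastic weights R1, R2
        R2 = rng.random((n_particles, dim))
        # velocity update: inertia + cognitive + social terms
        v = w * v + c1 * R1 * (p - x) + c2 * R2 * (g - x)
        x = x + v                                    # position update
        vals = np.array([f(xi) for xi in x])
        better = vals < p_val                        # refresh personal bests
        p[better], p_val[better] = x[better], vals[better]
        g = p[p_val.argmin()].copy()                 # refresh global best
    return g, p_val.min()

best, best_val = pso(lambda z: float(np.sum(z**2)))  # minimize the sphere function
```

The diagonal matrices R1 and R2 from the text appear here as element-wise random vectors, which is equivalent; on the sphere function the swarm typically collapses onto the origin within a few hundred iterations.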
PSO has gained attention from researchers because of the relative simplicity of the algorithm and the fact that PSO does not assume that the optimization function is continuous or differentiable. Initial suggestions to improve PSO were to use different topologies, but the best topology tends to be idiosyncratic to the problem. As PSO came into wide use, researchers found that it could struggle to converge, so PSO was combined with other algorithms, and other parameters were added to ameliorate the convergence issues. The most relevant applications solved using PSO were multimodal and constrained multi-objective optimization problems [295].

2.3.2. Teaching–Learning-Based Optimization

Teaching–learning-based optimization (TLBO) mimics the influence of a teacher on learners in a class. TLBO considers grades as an output. Learners usually absorb information from teachers, who are vastly knowledgeable. The outcomes of the learners depend on the quality of the teacher. A better teacher produces learners that receive better grades and perform well [297].
In TLBO, two teachers, T1 and T2, teach the same material to different classes in which the learners have the same merit. Figure 55 shows the grade distributions of the learners evaluated by the teachers of the different classes. Curve-1 and Curve-2 are the distributions of the learners’ grades in the classes taught by T1 and T2. The grades are assumed to have a normal distribution, but in real life, the results may be skewed [297].
Since curve-2 shows better results, it is concluded that teacher T2 is of higher quality than T1. From the results, it can be concluded that a better teacher produces a better mean for the results of the learners. The learners’ results are also improved by interactions among themselves [297].
The best learner emulates the teachers since they are seen as knowledgeable people. Teachers distribute new knowledge to the class of learners, which increases the level of knowledge in the class. Therefore, a teacher aims to move the mean knowledge level of a class closer to their own. Although a teacher dedicates their entire will to teaching a class, students retain information based on the quality of the instruction of the teacher and the quality of the learner. The population determines the quality of the students. When a teacher increases the mean of a class close to the mean level of the teacher, the class requires a new instructor with higher mean knowledge to continue improving [297].
TLBO is a population-based algorithm that finds a global solution using its population, which is a class of learners. Design variables are analogous to materials taught to pupils in TLBO, and the learners’ performance is the associated fitness. The best solution for the population is the teacher [297].
TLBO consists of two sections. The first is termed the ‘Teacher Phase’, and the second is termed the ‘Learner Phase’. Learners learn from the teacher during the ‘Teacher Phase’, and the students learn from each other during the ‘Learner Phase’. A good teacher brings the mean knowledge level of the class up to their own. However, the teacher can only move the mean of the class to an extent that depends on the capabilities of the students. The two sections are random processes. Let Mi represent the mean knowledge level, and Ti represent the teacher at an iteration, i. Ti aims to move the mean Mi closer to its own level, such that the new mean, Mnew, is set to be Ti. The difference between the two means updates the solution. The learners gain knowledge through the teacher’s lecturing and through interactions between learners: learners learn from one another through random interactions when one learner knows more than another. Figure 56 illustrates the flowchart of TLBO, where TF denotes the teaching factor, which determines the magnitude of the mean to be changed, and r is a number chosen randomly in the range [0, 1]. TF can be either 1 or 2 and is decided randomly with equal probability [297].
TLBO is a population-based optimization technique that uses a population of solutions to search for an optimum solution. The performance of an algorithm is affected by the parameters it requires. TLBO does not require parameter tuning, and thus it does not lose performance compared to other popular algorithms. The convergence rate in TLBO is increased by taking the best solution found in an iteration and applying it to the population. TLBO does not partition the population, but greediness is implemented to ensure a good solution. TLBO has shown better performance than other nature-inspired algorithms such as DE, PSO, and ABC with respect to mean solution, success rate, convergence rate, and average number of evaluations required on the benchmark functions tested.
Additionally, TLBO performed better on high-dimensional problems at a lower computational cost. TLBO can be used to optimize engineering design problems [297].
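The teacher and learner phases described above can be sketched as follows. This is a minimal sketch under assumed settings (sphere objective, search box [-5, 5], population and iteration counts); the teaching factor TF and the greedy acceptance follow the description above.

```python
import numpy as np

def tlbo(f, dim=2, n_learners=20, iters=100, seed=0):
    """Minimal TLBO sketch; bounds and population size are illustrative."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, (n_learners, dim))        # the class of learners
    vals = np.array([f(x) for x in X])
    for _ in range(iters):
        # Teacher phase: the best learner acts as the teacher
        teacher = X[vals.argmin()]
        mean = X.mean(axis=0)
        TF = int(rng.integers(1, 3))                 # teaching factor: 1 or 2
        r = rng.random((n_learners, dim))
        X_new = X + r * (teacher - TF * mean)        # shift the mean toward the teacher
        new_vals = np.array([f(x) for x in X_new])
        better = new_vals < vals                     # greedy acceptance
        X[better], vals[better] = X_new[better], new_vals[better]
        # Learner phase: learn from a randomly chosen partner
        for i in range(n_learners):
            j = int(rng.integers(n_learners))
            if j == i:
                continue
            r_i = rng.random(dim)
            if vals[i] < vals[j]:                    # i knows more: move away from j
                cand = X[i] + r_i * (X[i] - X[j])
            else:                                    # j knows more: move toward j
                cand = X[i] + r_i * (X[j] - X[i])
            c_val = f(cand)
            if c_val < vals[i]:                      # keep only improvements
                X[i], vals[i] = cand, c_val
    return X[vals.argmin()], vals.min()

best, best_val = tlbo(lambda z: float(np.sum(z**2)))
```

Note that, consistent with the text, no algorithm-specific parameters need tuning beyond the usual population size and iteration count.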

2.3.3. Other Algorithms

In addition to PSO, there are other algorithms based on social behaviors. Social cognitive optimization (SCO) was developed to solve nonlinear programming (NLP) problems and is based on social cognitive theory (SCT) and human intelligence. Experiments that compare SCO with GA on benchmark functions (including linear, nonlinear, quadratic, cubic, and polynomial functions) show that SCO can produce high-quality solutions efficiently, even with only one learning agent [298]. Another example is the social emotional optimization algorithm (SEOA), which is a swarm intelligence technique developed by simulating human behaviors guided by emotion to solve nonlinear programming problems. Simulation results show that the SEOA is effective and efficient at solving nonlinearly constrained programming problems [299]. Other algorithms are based on the learning and thinking behaviors of humans, such as brainstorming. One example of a cognitive-behavior-based algorithm is TLBO, which was studied in the previous section. Normally, a complicated problem is solved more effectively by many people than by one person. Shi has developed brain storm optimization (BSO) based on the human brainstorming process [300]. The BSO has been used to solve linear and nonlinear single-objective one-dimensional problems as well as multi-objective and multimodal optimization problems [301]. Another example is the human mental search (HMS), a population-based meta-heuristic algorithm developed by Mousavirad et al. based on the exploration strategies of the bid space in online auctions [302]. The simple human learning optimization algorithm (SHLOA) is another example that mimics human learning mechanisms [303]. Feng et al. have developed the creativity-oriented optimization model (COOM), which simulates the creative thinking process in humans [304].
Atashpaz-Gargari and Lucas have developed the imperialist competitive algorithm (ICA) based on imperialistic competition. Much like other evolutionary algorithms, ICA is initiated with a starting population of countries, which are either colonies or imperialists and are grouped into empires. ICA is based on the imperialistic competition among empires, in which strong empires prevail over weak ones. ICA converges when only one empire remains, whose colonies share the same cost magnitude as the reigning country. ICA has been tested on four two-dimensional nonlinear single-objective benchmark functions. The results show its ability to deal with different types of optimization problems in the same number of iterations as GA and with faster convergence than PSO [305].
Cultural algorithm (CA) was developed based on a conceptual framework of the cultural evolution process, which operates at two levels. At the level of micro-evolution, there is a human population described by behavioral traits; socially motivated operators mediate the transfer of traits from generation to generation. At the macro-evolutionary level, individuals create “mappa” that condense their experience. Mappa can be combined to form a group mappa, which can be generalized. CA exerts pressure on the population by constraining performance, and it maintains a memory of individual performance to converge faster [306].
Another similar algorithm is the interior search algorithm (ISA), which is based on interior design and decoration. This algorithm differs from existing meta-heuristic algorithms and gives insight into the search for a global optimum. A designer commonly begins by designing the composition of the elements of a wall, which limits the design of other aspects. In this process, the designer changes the composition of elements to find a more appealing view of the environment: the placement of an element is changed only if it achieves better fitness (a more decorative view) and also satisfies the constraints of the client(s), a process that can be employed for optimization. Derivative information is not necessary for ISA, since it uses a stochastic random search instead of a gradient search and has only one parameter to tune, making it applicable to a wider class of optimization problems. Based on the simulation results, ISA is very effective for constrained engineering problems and has excellent convergence in comparison to other algorithms such as DE and PSO [307].
The exchange market algorithm (EMA) was developed for continuous nonlinear optimization problems based on the procedure of trading shares on the stock market. EMA has two different modes: there is no market oscillation in the first mode, while there is oscillation in the second mode. Each individual’s fitness value is evaluated after each mode. The first mode recruits search agents toward successful agents, and the second mode deals with searching for the optima. Global optima are extracted via the use of two searching and two absorbent operators to create and organize randomly generated numbers. EMA was tested with high-dimensional benchmark functions and compared with eight new and efficient algorithms. The results show that EMA outperformed the other algorithms considering mean error and run speed [308].
Yin-yang-pair optimization (YYPO) is based on keeping exploration and exploitation balanced in the search space. YYPO is a stochastic algorithm that generates points from two initial points based on the dimensionality of the problem and has three tuning parameters. The performance of the YYPO algorithm has been evaluated on a set of problems, including unimodal, basic multimodal and composition (multimodal, non-separable and asymmetrical combination of multiple basic functions) functions. The results show that YYPO has comparable performance and lower time complexity than ABC, ALO, DE, GWO, multidirectional search, pattern search, and PSO. In addition, YYPO is applicable to classical constrained engineering problems [309].
Anarchic society optimization (AScO) was developed based on a social grouping in which members behave anarchically to improve their situation for optimal robust control PID systems [310]. Another algorithm is the wisdom of artificial crowds (WoAC) which was developed by Yampolskiy et al. based on collective human intelligence to solve NP-hard problems [311]. Kulkarni et al. have also introduced a social-inspired algorithm called the cohort intelligence algorithm (CIA) [312]. Borji has developed the parliamentary optimization algorithm (POA), which simulates the intra- and inter-group competitions in parliamentary government [313]. Artificial tribe algorithm (ATA) is another social-based algorithm that models existing skills, propagation behaviors, and migration behaviors of natural tribes [314]. Kashan et al. have also developed the find-fix-finish-exploit-analyze meta-heuristic (FFFEAM) algorithm based on the targeting process for selecting objects or installations to be destroyed in warfare [315]. Another example is the open-source development model algorithm (OSDMA) which was introduced by Khormouji et al. based on the open-source software development mechanism and community behaviors [316]. The last algorithm in this group is the jigsaw inspired meta-heuristic (JIM) which works based on a jigsaw cooperative learning strategy developed by Chifu et al. [317].

2.4. Physics-Based

Physics-based algorithms are one of the most popular types of nature-inspired algorithms. There are more than 50 physics-based algorithms, which currently amount to 15.6% of all nature-inspired algorithms. The most popular physics-based algorithms based on citations are the simulated annealing algorithm (SAA), gravitational search algorithm (GSA), and the big bang–big crunch (BBBC) algorithm. Physics-based algorithms are inspired by different physical laws and phenomena, including gravity, space, stars, galaxies, atoms, nuclear reactions, electromagnetism, gas dynamics, combustion, explosions, water, sounds, motion, vibrations, optics, and energy, to name a few. Figure 57 illustrates the classification of the most popular physics-based algorithms.

2.4.1. Simulated Annealing Algorithm (SAA)

SAA was conceptualized in the 1970s [318], but Kirkpatrick et al. introduced the SAA in 1983 [319]. SAA mimics the metal annealing process, which describes how a heated metal settles into a crystalline structure with minimal energy. SAA is a stochastic method that searches for minima [320]. Statistical mechanics dictates that as the temperature dissipates, a substance solidifies into a glass or solid, or the substance remains a liquid; the phase of the substance depends on the cooling rate. The ground state of a substance is its lowest energy state, which is achieved at the substance’s limiting temperature. The ground state is reached by heating a substance until it melts and then cooling it slowly to the freezing point such that the crystalline structure forms from a single nucleation point, which is the physical process of annealing. An energy state higher than the ground state is usually reached if the annealing is not done slowly enough, which introduces defects in the crystal structure. Glass is formed if the substance is cooled rapidly, which is termed quenching. Quenching produces a substance with a higher energy level than the ground state, which forms locally optimal structures. SAA is effective in solving combinatorial or discrete problems [321,322]. SAA requires only function values at candidate points, not derivative information, which makes SAA an ideal candidate for optimization [322]. The algorithm chooses a point Xc randomly in the search space and evaluates it; the result of the function evaluation is assigned to Vc. A point Xa that satisfies the criterion ||Xc − Xa|| = 1 is selected at random and evaluated, and the result is assigned to Va. Point Xa replaces Xc with a probability that depends on the annealing temperature. The temperature dictates, in large part, the behavior of the algorithm.
At high temperatures, both downhill moves and large uphill moves are accepted readily, since the likelihood of accepting an adjacent point is near 0.5. As the temperature decreases, the likelihood of accepting uphill moves decreases, and the search increasingly favors downhill moves.
The annealing schedule is defined by a sequence of slowly decreasing temperatures to ensure convergence. The objective function is evaluated many times at many different points as a result of decrementing the temperature slowly, which is computationally expensive. Some researchers have applied faster annealing schedule heuristics to ameliorate the computational cost [322]. Figure 58 illustrates the flowchart of the SAA.
SAA can handle multiple constraints, noisy data, and nonlinear models, making it a general and robust technique [320]. SAA is ideal for finding optimal solutions to combinatorial problems with various local minima [321]. SAA is more flexible and converges well toward global optima compared to other local search methods. Since SAA does not rely on restrictive problem properties, it is a versatile method. SAA can be tuned easily, which is an important feature, since tuning an algorithm to a specific problem is complicated and time-consuming [320].
The quality of the solution produced by SAA depends on the precision of the implemented variables [320]. A drawback of SAA is that the initial annealing schedule and temperature may require extensive tuning [321].
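The acceptance rule and cooling loop described above can be sketched in a few lines. This is a minimal sketch: the geometric cooling schedule, initial temperature, step size, and the one-dimensional test objective (minimum at x = 3) are illustrative assumptions, not values from the original papers.

```python
import math
import random

def simulated_annealing(f, x0, step=1.0, T0=10.0, alpha=0.95, iters=2000, seed=0):
    """Minimal SAA sketch; the geometric cooling schedule is an assumed choice."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    T = T0
    for _ in range(iters):
        xa = x + rng.uniform(-step, step)            # candidate adjacent point Xa
        fa = f(xa)
        # Downhill moves are always accepted; uphill moves are accepted
        # with probability exp(-dE/T), which shrinks as T cools.
        if fa < fx or rng.random() < math.exp(-(fa - fx) / T):
            x, fx = xa, fa
            if fx < fbest:
                best, fbest = x, fx
        T *= alpha                                   # slowly decrement the temperature
    return best, fbest

# Hypothetical one-dimensional objective with its minimum at x = 3
best, fbest = simulated_annealing(lambda z: (z - 3.0) ** 2, x0=-8.0)
```

Early on, the high temperature lets the search walk over hills; once the temperature is small, only improving moves survive, which mirrors the quenching-versus-annealing discussion above.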

2.4.2. Gravitational Search Algorithm (GSA)

The GSA is a population-based heuristic algorithm inspired by Newton’s law of gravity. Generally, it is similar to PSO as in both algorithms, a solution is found by the movement of agents in the search space. GSA is similar to central force optimization (CFO), which is a deterministic multi-dimensional algorithm inspired by particle motion in a gravitational field [323]. GSA is a stochastic method despite its source of inspiration [324]. In the GSA, search agents are objects and their mass represents their performance. The objects have a gravity-based interaction where all objects exert a gravitational pull on each other, and the heaviest objects attract others. The gravitational force is thus a method of communication between objects. The exploitation step of the algorithm is ensured since the bigger masses, which are the best solutions, move slower. Each mass is described by four parameters: passive gravitational mass, active gravitational mass, position, and inertial mass. The inertial and gravitational masses are found via the fitness function, and the position corresponds to the solution [324].
Since every mass is a solution, the inertial and gravitational masses are adjusted to navigate the algorithm. As time goes on, the heaviest masses attract the other masses, which represents an optimum solution; GSA is, therefore, a Newtonian system of masses [324]. In order to increase the search accuracy, the gravitational constant is reduced with each time step. The fitness evaluation produces the inertial and gravitational masses, where an efficient agent is classified by a heavier mass. Agents that move slowly and have high gravitational attraction are better. The fitness map is used in GSA to calculate the values of the masses with the following equations:
$$m_i(t) = \frac{fit_i(t) - worst(t)}{best(t) - worst(t)}$$
$$M_i(t) = \frac{m_i(t)}{\sum_{j=1}^{N} m_j(t)}$$
where fiti(t) is the ith agent’s fitness at time t, and best(t) and worst(t) are the best and worst fitness values among all agents. Figure 59 illustrates the flowchart of the GSA.
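The two mass equations can be sketched directly. In this minimal sketch, the three fitness values are hypothetical, and minimization is assumed, so the best fitness is the smallest value and the worst is the largest (the agents are assumed not to share a single fitness value, which would make the normalization degenerate).

```python
import numpy as np

def gsa_masses(fitness):
    """Map raw fitness values to normalized masses M_i (minimization assumed,
    so best(t) is the smallest value and worst(t) the largest)."""
    fitness = np.asarray(fitness, dtype=float)
    best, worst = fitness.min(), fitness.max()
    m = (fitness - worst) / (best - worst)           # m_i in [0, 1]; best agent -> 1
    return m / m.sum()                               # M_i, normalized to sum to 1

# Hypothetical fitness values for three agents
M = gsa_masses([2.0, 5.0, 11.0])
```

The best agent (fitness 2.0) receives the largest normalized mass, the worst agent receives zero, and the masses sum to one, matching the equations above.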
GSA has been tested on different nonlinear multivariable single objective benchmark functions (unimodal and multimodal), and the results have been compared to meta-heuristic algorithms. GSA tended to produce superior or comparable results to CFO, RGA, and PSO [324].

2.4.3. Big Bang—Big Crunch

During the Big Bang phase, randomness and disorder are produced by energy dissipation; during the Big Crunch phase, randomly distributed particles are drawn into a cohesive mass. The Big Bang–Big Crunch (BB-BC) method is inspired by this theory. In the Big Bang phase, random points are generated, and in the Big Crunch phase, the points are shrunk, via a minimal-cost approach, into a center of mass that forms a single representative point. The optimization thus contains two parts: the Big Bang, which generates an initial population of points, and the Big Crunch, which calculates the center of mass (xc) according to the following formula:
$$x_c = \frac{\sum_{i=1}^{N} \frac{1}{f_i} x_i}{\sum_{i=1}^{N} \frac{1}{f_i}}$$
where xi represents a point in the n-dimensional search space, fi represents the fitness value of xi, and N is the number of points created during the Big Bang phase. The Big Bang and Big Crunch phases repeat iteratively, with new agents created to be used again [325]. The BB-BC algorithm is shown in Figure 60.
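The Big Crunch center-of-mass formula can be sketched as follows. The two sample points and their fitness values are hypothetical, and strictly positive fitness values are assumed (minimization) so that the 1/fi weights are well defined.

```python
import numpy as np

def big_crunch(points, fitness):
    """Center of mass x_c for the Big Crunch phase, weighting each point by 1/f_i.
    Assumes strictly positive fitness values (minimization)."""
    points = np.asarray(points, dtype=float)
    w = 1.0 / np.asarray(fitness, dtype=float)       # weights 1/f_i
    return (w[:, None] * points).sum(axis=0) / w.sum()

# Two hypothetical points: the fitter point (f = 1) pulls x_c toward itself
xc = big_crunch([[0.0, 0.0], [4.0, 4.0]], [1.0, 3.0])
```

Because the weights are inverse fitness values, low-cost points dominate the average, so the representative point lands closer to the better solutions.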
According to Eksin et al., BB-BC is more effective than C-GA where an optimization problem has many local optimum points. The performance of the BB–BC method demonstrates superiority over an improved and enhanced version of GSA and outperforms the classical genetic algorithm (GA) in many benchmark test functions [326].

2.4.4. Other Algorithms

Some algorithms are based on galaxies, stars, and the physics of gravitation, such as the black hole (BH) algorithm, which is designed based on the swallowing of stars by a black hole. The galaxy-based search algorithm (GbSA) is another algorithm that mimics the spiral arms of spiral galaxies to search its surroundings [327]. Muthiah-Nakarajan et al. have developed the galactic swarm optimization (GSO) based on the motion of stars, galaxies, and superclusters of galaxies under the influence of gravity [328]. Another space-based algorithm is the space gravitational algorithm (SGA), which was developed by Hsiao et al. based on Einstein’s general theory of relativity and uses the concept of a gravitational field to search for the optimum solution [329]. Gravitational interactions optimization (GIO) can also be mentioned, which works on a similar basis to GSA [330]. Another similar algorithm is the general relativity search algorithm (GRSA), which utilizes general relativity principles to solve optimization problems [331]. Bendato et al. have also developed attraction force optimization (AFO) based on Newton’s law of universal gravitation [332]. Space gravity optimization (SGO) is another similar example inspired by the gravitational force between asteroids [333]. Supernova optimizer (SO) is also another example that imitates supernovae in exploration, exploitation, and local minima avoidance [334]. Formato has also developed central force optimization (CFO) based on the metaphor of gravitational kinematics [323]. Another algorithm is the multi-verse optimizer (MVO), which is based on white holes, black holes, and wormholes [335].
Other algorithms are inspired by atoms, electrons, and nuclear physics, such as the atom search optimization (ASO) which is based on molecular dynamics [336]. Another example is the electron radar search algorithm (ERSA) which was developed based on electric flow in the form of electron discharge via a gas, liquid, or solid medium [337]. Nuclear reaction optimization (NRO) is another example that models the nuclear reaction process consisting of nuclear fission and nuclear fusion [338].
Another algorithm in this group is nuclear fission–nuclear fusion (NFNF), developed by Yalcin et al. based on nuclear fusion and fission, which is similar to BB-BC [339].
Another group of physics-based algorithms is based on electromagnetism; for example, electromagnetism-like optimization (EM) uses an attraction-repulsion mechanism to find an optimum solution [340]. Another algorithm is the electromagnetic field optimization (EFO) algorithm, which was developed based on the behavior of electromagnets with different polarities [341]. Anita et al. have also developed the artificial electric field algorithm (AEFA) based on Coulomb’s Law of electrostatic force [342]. Electrostatic discharge algorithm (EDA) is another similar algorithm developed by Bouchekara based on electrostatic discharge [343]. Ohm’s Law has also been used in the Ohm’s Law optimization (OLO) algorithm [344]. Kaveh et al. have also used Coulomb’s Law from electrostatics and the Newtonian Laws of Mechanics to develop the charged system search (CSS) [345]. Another physics-based algorithm is the Coulomb’s and Franklin’s laws algorithm (CFLA), which is based on the use of electrical attraction and repulsive forces in electrically charged particle impacts [346]. Zarand et al. have also developed hysteretic optimization (HO) based on the demagnetization procedure, which is well-known in magnetism [347].
Other physics-based algorithms are inspired by gas dynamics, combustion, explosions, and heat transfer physics. For example, the mine blast optimization (MBO) algorithm is inspired by the concept of mine bomb explosions [348]. The thermal exchange optimization (TEO) algorithm is another example that was developed based on Newton’s Law of Cooling [349]. Another example is the grenade explosion method, which is based on the explosion of a grenade and the destruction of nearby objects by the shrapnel [350]. Patel et al. have also developed the heat transfer search (HTS) based on the laws of thermodynamics and heat transfer [351]. Another example is the Henry gas solubility optimization (HGSO), which imitates Henry’s Law relating the amount of a gas dissolved in a liquid to its partial pressure at a given temperature [352]. The gases Brownian motion optimization (GBMO) algorithm is inspired by the Brownian and turbulent rotational motion of gases [353]. Another similar algorithm is the kinetic gas molecule optimization (KGMO), which was developed by Moein et al. based on the kinetic energy of gas molecules [354]. Varaee et al. have also developed a similar algorithm called the ideal gas molecular movement algorithm (IGMMA) based on the motion of an ideal gas [355].
Another group of physics-based algorithms is inspired by fluid dynamics. The water wave optimization (WWO), which was developed based on shallow water wave theory, is one such algorithm [356]. The vortex search algorithm (VSA) is another example that was developed based on the vortex pattern created by the vortical flow of stirred fluids [357]. Shah-Hosseini has also developed the intelligent water drop (IWD) algorithm based on the formation of rivers by water drops [358]. Equilibrium optimizer (EO) is another algorithm, developed by Faramarzi et al. based on control volume mass balance models used to estimate dynamic and equilibrium states [359]. Another example is the artificial showering algorithm (ASA), which works based on the characteristics of artificial showering and irrigation phenomena [360]. Another water-based algorithm is the circular water waves (CWW) algorithm, which was developed based on the circular wave created when a water drop falls on still water [361]. Cortés-Toro et al. have also developed the vapor-liquid equilibrium-based algorithm (VLEA) based on the thermodynamic conditions of vapor-liquid equilibrium [362]. In this group, the flow regime algorithm (FRA) can also be noted, which was inspired by classical fluid mechanics and flow regimes [363]. The last example in this group is the whirlpool algorithm (WA), which was developed based on the fact that a whirlpool has a descent direction and a vortex [364].
Other algorithms were developed based on motion and collision physics, such as colliding bodies optimization (CBO) which is based on one-dimensional collision between bodies [365]. Another example is the ions motion algorithm (IMA) which is based on ions’ motion and models the attraction and repulsion of anions and cations [366]. Kaveh et al. have also developed the vibrating particles system algorithm (VPSA) based on the free vibration of single-degree-of-freedom systems with viscous damping [367]. Another collision-based algorithm is the particle collision algorithm (PCA) which was inspired by nuclear collision reactions called absorption [368]. Mejía-de-Dios has also developed the evolutionary centers algorithm (ECA) based on principles in mechanics, including motion equations and the center of mass [369]. Artificial physics optimization (APO) is another similar algorithm that is based on the motion equation of particles [370].
The states of matter (SM) algorithm developed by Cuevas et al. can be mentioned in this section [371]. Another example is the ray optimization (RO) method, which works based on Snell’s Light Refraction Law [372]. Similarly, Kashan has also developed the optics inspired optimization (OIO) algorithm based on surface or mirror reflection [373]. Weighted superposition attraction (WSA) is another algorithm that is based on superposition and the attracted movement of agents [374]. Tzanetos et al. have also developed the sonar inspired optimization (SIO) algorithm, which is based on the methods that warships use for target and obstacle detection, namely underwater acoustics [375]. Another algorithm is the crystal energy optimization algorithm (CEOA), which was inspired by the dynamics of crystals observed in freezing rivers [376]. The last algorithm in this group is the spring search algorithm (SSA), which was developed based on the spring force law [377].

2.5. Chemistry-Based

Some algorithms have been developed based on the concepts of chemistry. One of these is the fireworks algorithm for optimization (FAO), which is the most popular chemistry-based algorithm based on citations. In the FAO, the search process is inspired by an explosion, and each explosion creates a population of sparks, or candidate solutions. Once the sparks are evaluated for fitness, new explosions occur at the best n sparks, which are chosen as the best solutions. It has been shown that FAO has comparable performance in solving some nonlinear single-objective multivariable benchmark functions [378]. Figure 61 illustrates the most popular chemistry-based algorithms.
Some algorithms were developed based on chemical reactions. For example, chemical-reaction-inspired optimization (CRIO) emulates molecular interactions in reactions that reach a stable low-energy state. The performance of CRIO was tested using three combinatorial nondeterministic polynomial-time (NP)-hard problems; one was a real-world problem, and the other two were traditional benchmark problems [379]. Another example is the artificial chemical reaction optimization algorithm (ACROA), which is more robust and has fewer parameters than similar algorithms [380]. Chemical reaction algorithm (CRA) is another example that uses chemical reaction principles to create a meta-heuristic population-based optimization algorithm; in chemical reactions, reactants are transformed into molecules through a series of reactions [381].

2.6. Math-Based

There are algorithms that were developed based on mathematical concepts such as basic arithmetic operators, sinusoidal functions, fractals, and so on. Figure 62 illustrates the most cited math-based algorithms.
The most popular math-based algorithm is the population-based sine cosine algorithm (SCA), which uses a sinusoidal mathematical model to create and disseminate candidate solutions that move toward the best solution. The sine cosine model is as follows:
$$X_i(t+1) = \begin{cases} X_i(t) + r_1 \sin(r_2)\,\left|r_3 P_i(t) - X_i(t)\right|, & r_4 < 0.5 \\ X_i(t) + r_1 \cos(r_2)\,\left|r_3 P_i(t) - X_i(t)\right|, & r_4 \ge 0.5 \end{cases}$$
where P_i^t is the position of the destination point at the ith iteration and r_1 through r_4 are random numbers. SCA has been implemented on multiple single-objective nonlinear multivariable unimodal and multimodal benchmark functions, and the results have confirmed the fast convergence and local-optimum avoidance of SCA [382].
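The update rule above can be sketched in a few lines of Python. This is a minimal, hypothetical implementation that assumes the common linearly decreasing schedule for r_1 (as in the original SCA paper); the bounds, parameter values, and cost function in the usage line are illustrative only.

```python
import math
import random

def sca(cost, dim, bounds, n_agents=25, max_iters=200, a=2.0, seed=0):
    """Minimal sine cosine algorithm sketch using the update rule above."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_agents)]
    best = min(X, key=cost)[:]                  # elitist best-so-far solution
    for t in range(max_iters):
        r1 = a - t * a / max_iters              # shrinks exploration over time
        for x in X:
            for j in range(dim):
                r2 = rng.uniform(0, 2 * math.pi)
                r3 = rng.uniform(0, 2)
                step = r1 * abs(r3 * best[j] - x[j])
                x[j] += step * (math.sin(r2) if rng.random() < 0.5
                                else math.cos(r2))
                x[j] = min(max(x[j], lo), hi)   # clamp to the search bounds
        cand = min(X, key=cost)
        if cost(cand) < cost(best):
            best = cand[:]
    return best, cost(best)

# Usage: minimize the sphere function in 5 dimensions
best, val = sca(lambda x: sum(v * v for v in x), dim=5, bounds=(-10, 10))
```

Because r_1 decays to zero, the agents' step sizes vanish near the end of the run, which is how this sketch trades exploration for exploitation.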
Stochastic fractal search (SFS) is another math-based algorithm that was developed based on the fractal concept in geometry. A fractal can be simply defined as a whole that is formed from particles similar to itself. In SFS, each particle diffuses and causes other random particles to be created in order to shape a fractal. SFS has been shown to perform well on some constrained and unconstrained unimodal and multimodal benchmark functions [383].
Another example is the simulated Kalman filter algorithm (SKFA), in which all agents behave as Kalman filters in a state estimation problem that represents the optimization problem. The agents use a best-so-far reference solution and a measurement process, as in a Kalman filter, to find the optimum of a given problem. The SKFA has been applied to the 30 benchmark functions of CEC 2014 for real-parameter single-objective optimization, and the results have shown that SKFA is a promising approach that is able to outperform some well-known meta-heuristic algorithms such as GA, PSO, BHA, and GWO [384].
The base optimization algorithm (BOA) is inspired by basic arithmetic operators, using a combination of operators to drive the solutions toward the optimum point. It has been shown that BOA displays acceptable performance in solving some nonlinear multivariable single-objective unimodal and multimodal benchmark functions [385].
The golden sine algorithm (GSA) is another math-inspired population-based algorithm that uses the sine function to solve optimization problems. Individuals are created randomly, with their dimensions distributed uniformly, and the current solution moves closer to the target with each iteration. The search is narrowed using the golden section so that only the most promising regions are scanned instead of the entire solution space. GSA has fewer algorithm-dependent parameters and operators than other meta-heuristic methods, and it has been shown to converge faster than similar algorithms [386].
The spherical search optimizer (SSO) uses a spherical search style inspired by the hypercube search style and the basic reduced hypercube search style. Implementation of SSO on the CEC2013, CEC2014, CEC2015, and CEC2017 benchmark functions has demonstrated its simplicity and efficiency [387].

2.7. Music-Based

Some algorithms are based on music theories and concepts, such as the harmony search (HS) algorithm. HS imitates the activity of musicians improvising a musical piece. A musical performance strives toward an optimal state (harmony) judged by aesthetic estimation, just as an optimization process searches for an optimum judged by the objective function; the sounds of a symphony improve through practice, just as the solutions of a problem improve through iterations. HS creates a new candidate solution by considering the entire set of candidates, rather than just parent solutions as in the genetic algorithm. HS is flexible and does not require initial values for the decision variables. It has been shown that HS can solve continuous-variable problems as well as combinatorial problems [388]. HS has been an active topic of research, such that an improved version of HS, called melody search (MS), has been developed and implemented on different problems [389]. However, some researchers consider HS redundant and similar to DE [390].
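The improvisation process described above can be sketched as follows. This is a minimal, illustrative harmony search with the standard harmony memory considering rate (HMCR) and pitch adjusting rate (PAR); the parameter values are typical defaults, not those of any specific study.

```python
import random

def harmony_search(cost, dim, bounds, hms=20, hmcr=0.9, par=0.3,
                   bw=0.1, max_iters=2000, seed=0):
    """Minimal harmony search sketch (standard HMCR/PAR improvisation)."""
    rng = random.Random(seed)
    lo, hi = bounds
    # Harmony memory: hms randomly initialized candidate solutions
    hm = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    for _ in range(max_iters):
        new = []
        for j in range(dim):
            if rng.random() < hmcr:                  # memory consideration
                v = rng.choice(hm)[j]
                if rng.random() < par:               # pitch adjustment
                    v += bw * rng.uniform(-1, 1)
            else:                                    # random improvisation
                v = rng.uniform(lo, hi)
            new.append(min(max(v, lo), hi))
        worst = max(hm, key=cost)
        if cost(new) < cost(worst):                  # replace worst harmony
            hm[hm.index(worst)] = new
    best = min(hm, key=cost)
    return best, cost(best)

# Usage: minimize the sphere function in 3 dimensions
best, val = harmony_search(lambda x: sum(v * v for v in x),
                           dim=3, bounds=(-5, 5))
```

Note that each new harmony draws every coordinate from the whole memory, which is the "entire set of candidates" property mentioned above.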
The method of musical composition (MMC) algorithm is another example of a music-based algorithm. MMC is a population-based meta-heuristic method that is inspired by an artificial society that employs a creative dynamic system to produce music. MMC has been shown to have a higher exploration capability of the solution space throughout the entire iteration due to the utilization of interaction among agents and is an attractive option to solve a set of rotated multimodal problems [391]. Figure 63 illustrates the most popular music-based algorithms.

2.8. Sport-Based

Some algorithms are inspired by sport-related subjects. Football, cricket, and tug of war have been sources of inspiration for some of these algorithms. The most popular algorithm in this group is the league championship optimization algorithm (LCOA), which mimics the contest between teams in sports leagues. In LCOA, sports teams serve as search agents that compete in a league over several seasons of play, which serve as iterations. Teams are paired up, and the resulting winner is given a higher fitness value. The teams use the recovery period to address weaknesses in their strategies before the next match, which corresponds to the creation of new solutions. The total number of matches in a season is thus the stopping criterion for the algorithm. LCOA performs better than or comparably with PSO at finding the global minimum when applied to single-objective nonlinear benchmark functions [392]. Figure 64 illustrates the most popular sport-based algorithms.
Another similar algorithm is the golden ball (GB), a multiple-population-based meta-heuristic algorithm inspired by soccer and developed to solve combinatorial problems. GB initiates by generating a random population and partitioning the agents into teams. Next, the first season, or iteration, begins. League competition is emulated by dividing the season into weeks, during which teams address flaws in their strategies and compete against other teams. Upon the completion of the season, players and coaches switch to different teams during the transfer phase [393]. Another similar algorithm is the soccer league competition algorithm (SLCA). SLCA generates a team of agents with two designations: fixed and substitute players. The algorithm converges on the global optimum as the league progresses and the best team climbs the bracket. The results of applying the proposed algorithm to solving nonlinear systems of equations demonstrate that SLCA can rapidly converge to the answer [394]. Another similar algorithm, proposed by Fadakar and Ebrahimi, is the football game algorithm (FGA), which mimics a football team’s capability to find the position that will allow them to score under the coach’s supervision. Applying FGA to some nonlinear multivariable single-objective functions has revealed that it achieves better results than PSO and BA on the majority of functions and also has more robust performance [395].
The multi-agent meta-heuristic algorithm called tug of war optimization (TWO) is based on the tug of war. In TWO, candidate solutions are teams that compete against each other in rope-pulling games. The pulling force of a team depends on the quality of its solution, and the Newtonian laws of mechanics dictate the positions of the teams during a match. Unlike many meta-heuristic methods, TWO considers the qualities of both interacting solutions. TWO is suited to solving non-smooth, discontinuous, multimodal, and non-convex global optimization functions [396].

2.9. Hybrid Algorithms

Hybrid algorithms are combinations of algorithms. Research on hybrid metaheuristics is considered to still be in its early stages, but hybrid algorithms are expected to account for the majority of publications on meta-heuristic optimization in the near future. Hybridization can be the key to achieving higher performance, but it requires careful analysis and a deep understanding of each algorithm’s process, pros, and cons. Nature-inspired algorithms can be hybridized with one another or with tree search techniques, constraint programming, dynamic programming, and problem relaxation [397]. One such algorithm was introduced by Nabil, who hybridized the standard flower pollination algorithm with the clonal selection algorithm and tested it on 23 nonlinear multivariable single-objective unimodal and multimodal benchmark functions. The proposed algorithm was compared to five well-known optimization algorithms: SAA, GA, FPA, BA, and FA. The results showed that the hybrid algorithm was able to find more accurate solutions than the standard FPA and the other four techniques [398].
Another example is the ANGEL algorithm, which was developed for solving quadratic assignment problems (QAP). ANGEL combines a local search method (LS), ACO, and GA, and operates in alternating ACO and GA phases. ACO first constructs an initial population that is better than a random set of chromosomes, giving GA a better starting position. The two phases exchange information through pheromones as a feedback system: when GA reaches its termination criterion, control passes back to ACO, which uses the feedback to explore and generate a promising population for the next iteration. The solutions obtained by both phases are further improved by the local search method. Tseng and Liang also proposed a new concept called the eugenics strategy, which is intended to guide the GA to evolve in a better direction. The results of implementing ANGEL on about a hundred QAP benchmark instances have shown that ANGEL is able to obtain the optimal solution with a success rate of 90 percent [399]. Applying an exact large neighborhood search along with ACO allowed D’Andreagiovanni et al. to solve an NP-hard problem with high performance and accuracy, considerably better than software solutions such as CPLEX [400].
Fontes et al. have also combined PSO and SA for solving the optimal scheduling problem in shop environments that have transport tasks and vehicles. Using this technique, they were able to reduce computation time and increase robustness compared to other algorithms [401].
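The two-stage pattern common to these hybrids, a global phase that produces a promising starting point followed by a local refinement phase, can be sketched with deliberately simple stand-ins (random sampling plus coordinate-wise hill climbing). This is not ANGEL or the PSO-SA hybrid itself, only an illustration of the structure.

```python
import random

def coarse_random_search(cost, dim, bounds, samples=500, seed=0):
    """Global phase: cheap random sampling to find a promising region."""
    rng = random.Random(seed)
    lo, hi = bounds
    pts = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(samples)]
    return min(pts, key=cost)

def local_refine(cost, x, step=1.0, shrink=0.5, iters=100):
    """Local phase: simple hill-climbing refinement of one solution."""
    x = x[:]
    for _ in range(iters):
        improved = False
        for j in range(len(x)):
            for d in (step, -step):
                cand = x[:]
                cand[j] += d
                if cost(cand) < cost(x):
                    x = cand
                    improved = True
        if not improved:
            step *= shrink   # shrink the step when stuck, to refine locally
    return x

def hybrid(cost, dim, bounds):
    """Hybrid: global phase hands its best point to the local phase."""
    return local_refine(cost, coarse_random_search(cost, dim, bounds))

# Usage: minimize the sphere function in 3 dimensions
x = hybrid(lambda v: sum(t * t for t in v), 3, (-10, 10))
```

The design choice mirrors the hybrids above: the cheap global phase supplies diversity, while the local phase supplies precision that the global phase alone would reach only slowly.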
Another example of a hybrid algorithm is the MKF-Cuckoo algorithm (MKFCA), which hybridizes multiple-kernel-based fuzzy C-means (MKFCM) and the CS algorithm. The MKFCM objective is solved through CS, which has proven effective in many optimization problems. The performance of the algorithm has been compared to that of other algorithms using the Rand coefficient, clustering accuracy, computational time, and the Jaccard coefficient on the iris and wine datasets; MKFCA achieved a 96% accuracy on the iris dataset and a 67% accuracy on the wine dataset [402]. Other hybrid algorithms combine GWO and FAO [403], SOA and feature selection [404], CS and DE [405], as well as GWO, SCA, and CrSA [406].
Kottah et al. have also evaluated the hybridization of different algorithms such as CSA, GWO, HHO, and WOA by solving multiple benchmark functions. Based on their research, a combination of these algorithms provides better solutions compared to the base algorithms. Among all hybrid forms they studied, CSA-GWO and HHO-WOA have provided superior results for a majority of the benchmark functions [407].

2.10. Constraint Handling Techniques (CHT)

Generally, the described methods are designed for solving unconstrained problems, but there are techniques for handling constraints with nature-inspired methods. Although this is out of the scope of this paper, a brief review of these methods is provided. While some methods are proposed for specific algorithms, the majority of constraint-handling techniques treat constraints as part of the cost function, so they can be used with almost any optimization algorithm. It should be noted that some methods are instead based on the idea of separating the cost function and the constraints. A classical classification suggests four categories of CHTs: penalty functions, searching for feasible solutions, preserving the feasibility of solutions, and separating the objective function from the constraints [408]. Mezura-Montes and Coello also suggest five additional categories alongside novel variants of the classic ones [409]. Figure 65 illustrates the classification of CHTs.
Penalty methods are the most common approach to constraint handling. A penalty term, determined by the amount of constraint violation of the solution vector, is added to the cost function. There are different penalty methods, including the static penalty, dynamic penalty, death penalty, co-evolutionary penalty, annealing penalty, adaptive penalty, and more [410]. Penalty functions can be classified along different dimensions, including variable/constant penalty, problem-dependent/problem-independent, and with/without parameters, and so forth [411]. Classic penalty methods and their pros and cons have been studied by Yeniay [408].
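A static penalty can be sketched as a simple wrapper that converts a constrained cost function into an unconstrained one, which any of the algorithms above can then minimize. The weight value and the example constraint are arbitrary illustrative choices.

```python
def penalized(cost, inequality_constraints, weight=1e6):
    """Static penalty sketch: add weight * (total constraint violation).

    Each constraint is a function g with the convention g(x) <= 0 when
    feasible, so max(0, g(x)) measures its violation.
    """
    def wrapped(x):
        violation = sum(max(0.0, g(x)) for g in inequality_constraints)
        return cost(x) + weight * violation
    return wrapped

# Usage: minimize x^2 + y^2 subject to x + y >= 1, i.e. 1 - x - y <= 0
f = penalized(lambda x: x[0] ** 2 + x[1] ** 2,
              [lambda x: 1 - x[0] - x[1]])
```

A feasible point pays no penalty, while an infeasible one is pushed back toward the feasible region; dynamic and adaptive penalties differ mainly in how the weight evolves during the run.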
Some approaches are based on separating the cost function and the constraints [410], and some approaches try to limit the solution space to feasible solutions only, so that the problem can be solved as an unconstrained problem. These approaches are considered some of the most competitive constraint-handling techniques [409]. In some methods, a special operator is created to preserve the feasibility of the solution or to move within part of the region of interest; GENOCOP is an example of such an approach, developed for linear constraints [409].
Research has also been conducted on techniques based on feasibility rules. Mezura-Montes and Coello have studied some of these techniques, such as stochastic ranking (SR). SR was created to compensate for the under- or over-penalization introduced by penalty functions; instead of defining a penalty function, it uses a probability parameter to decide whether solutions are compared by objective value or by constraint violation [409].
The ε-constrained method is a recently reported constraint handling technique that converts a constrained numerical problem to an unconstrained numerical problem. There are also some highly competitive constraint-handling techniques based on multi-objective concepts. In these methods, a constraint violation measure is added as another objective. Some techniques have also been suggested based on a hybrid use of the above-mentioned methods [409,412].
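The core of the ε-constrained method is a comparison rule over (cost, violation) pairs: solutions whose violation is within ε are compared by cost alone, and otherwise the smaller violation wins. A minimal, illustrative sketch of that rule:

```python
def eps_compare(a, b, eps):
    """ε-constrained comparison sketch.

    a and b are (cost, violation) pairs; returns True if a is at least
    as good as b under the ε-constrained ordering.
    """
    (fa, va), (fb, vb) = a, b
    if va <= eps and vb <= eps:
        return fa <= fb        # both "ε-feasible": compare by cost alone
    if va == vb:
        return fa <= fb        # equal violation: fall back to cost
    return va < vb             # otherwise prefer the smaller violation

# Usage: a feasible but costly solution beats a cheaper infeasible one
preferred = eps_compare((1.0, 0.0), (0.5, 0.2), eps=0.1)
```

Plugging such a comparison into an algorithm's selection step (in place of a plain cost comparison) is what converts the constrained problem into an effectively unconstrained one.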

3. Comparison between Algorithms

In order to form a general view of nature-inspired algorithms, it is useful to compare different algorithms on different problems. There are several benchmark functions for evaluating the performance of optimization algorithms. A benchmark function is a single- or multi-objective function with a deterministic optimum; benchmark functions make it possible to evaluate optimization algorithms in terms of accuracy and speed. In the following sections, 10 different problems are solved by 27 nature-inspired algorithms. Each problem is solved 500 times, and the mean results are calculated. In addition, the average solving time is calculated to measure the speed of the algorithms. To provide equal conditions, the maximum number of iterations and the maximum number of agents (or any equivalent concept, depending on the algorithm) are the same for all algorithms. Computations are done in Python using a modified version of the NiaPy package [413] and the NIA package [414]. All codes related to this section are available on the project’s GitHub repository [415].
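The averaging protocol described above (repeated runs, mean cost, and mean wall-clock time) can be sketched with a small harness. The random-search optimizer below is only a stand-in so the harness is runnable; it is not one of the compared algorithms, and the run counts here are reduced for illustration.

```python
import random
import statistics
import time

def benchmark(algorithm, problem, runs=500):
    """Run a stochastic optimizer repeatedly; report mean cost and time."""
    costs, times = [], []
    for seed in range(runs):
        t0 = time.perf_counter()
        best, cost = algorithm(problem, seed=seed)
        times.append(time.perf_counter() - t0)
        costs.append(cost)
    return statistics.mean(costs), statistics.mean(times)

def random_search(problem, seed, samples=1000, dim=2, bounds=(-5, 5)):
    """Stand-in optimizer: best of `samples` uniform random points."""
    rng = random.Random(seed)
    best = None
    for _ in range(samples):
        x = [rng.uniform(*bounds) for _ in range(dim)]
        c = problem(x)
        if best is None or c < best[1]:
            best = (x, c)
    return best

# Usage: average 50 runs of random search on the 2-D sphere function
mean_cost, mean_time = benchmark(random_search,
                                 lambda x: sum(v * v for v in x), runs=50)
```

Seeding each run differently, as above, is what makes the mean meaningful for stochastic algorithms rather than just repeating one trajectory.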

3.1. Benchmark Functions

There are many factors in designing a benchmark function to challenge an algorithm as much as possible. Some functions have numerous peaks designed to trap the algorithm and impede the search process; this property is known as the modality of a function. Another characteristic is the basin, defined as a large area surrounded by a relatively steep decline; optimization algorithms can easily be attracted into basins. Valleys are designed to misguide the algorithm as the search progresses: a valley occurs when a narrow area of low cost is surrounded by regions of steep descent. Separability also determines the difficulty of a benchmark function: a multivariable function is separable if it can be rewritten as a sum of n single-variable functions, and a separable function is generally easier to solve. The last property is dimensionality, which determines the number of parameters. As the number of parameters increases, the search space expands exponentially, making the benchmark function harder to solve. A function that can be defined for any number of parameters is called scalable; otherwise, it is non-scalable [416].
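Separability can be demonstrated numerically: minimizing each coordinate independently recovers the joint optimum of a separable function such as the sphere, but generally fails for a non-separable one. The coupled function below is an arbitrary illustrative example, not one of the benchmark functions used in this section.

```python
def coordinatewise_argmin(f, dim, start, grid):
    """Minimize each coordinate independently, holding the others at `start`.

    For a separable function this recovers the joint optimum regardless of
    `start`; for a non-separable one it generally does not.
    """
    out = list(start)
    for j in range(dim):
        x = list(start)
        best = None
        for v in grid:
            x[j] = v
            if best is None or f(x) < best[1]:
                best = (v, f(x))
        out[j] = best[0]
    return out

grid = [i / 100 - 5 for i in range(1001)]           # [-5, 5] in 0.01 steps
sphere = lambda x: sum(v * v for v in x)            # separable
coupled = lambda x: (x[0] - x[1]) ** 2 + x[1] ** 2  # non-separable

sep = coordinatewise_argmin(sphere, 2, [3.0, 3.0], grid)
non = coordinatewise_argmin(coupled, 2, [3.0, 3.0], grid)
```

Here `sep` lands on the true optimum (the origin), while `non` gets stuck at a point far from the origin because the coupling term links the two coordinates.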
In order to evaluate nature-inspired algorithms, we need to consider multiple benchmark functions with different levels of difficulty, modality, and dimensionality. A selection of 10 functions with different specifications is considered; their specifications are described in Table 2 [416].
The equations of the benchmark functions and their plots are shown in Table 3. Note that only a two-dimensional schematic of the multimodal functions is plotted to show the general shape of each function.
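As an example of the kind of function listed in Table 3, the widely used Ackley function (multimodal, non-separable, and scalable, with a global minimum of 0 at the origin) can be written as follows; the constants a, b, and c are the standard recommended values.

```python
import math

def ackley(x, a=20, b=0.2, c=2 * math.pi):
    """Ackley benchmark function; global minimum f(0, ..., 0) = 0."""
    n = len(x)
    s1 = sum(v * v for v in x) / n                  # mean squared magnitude
    s2 = sum(math.cos(c * v) for v in x) / n        # mean cosine term
    return -a * math.exp(-b * math.sqrt(s1)) - math.exp(s2) + a + math.e
```

Because it accepts a vector of any length, the same definition serves every dimensionality, which is exactly the scalability property discussed above.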

3.2. Results

Due to the stochastic terms in the algorithms, each problem is solved 500 times, and the average of the results is used for comparison. Table 4 summarizes the results of the computations. The best result for each problem regarding cost, iterations, and time is marked with a green highlight, and the worst result with a red highlight. In some problems, more than one algorithm is highlighted in red, which indicates that those algorithms were unable to find a good solution in less than 1000 iterations (one of the termination criteria); in such cases, the best solution found within 1000 iterations was taken as the algorithm's best solution. A good solution was defined as one with a cost function value less than 10−4. The term error stands for the mean square error between the found solution and the optimum solution (which is at the origin of all problems); this value indicates how far the calculated solution was from the origin. This is especially important for problems such as Qing, Powell, or Chung-Reynolds, in which a wide area around the origin has a low cost value and there are few local minima; in these problems, a low final cost does not necessarily represent a good solution, so it is important to check whether the final solution is near the origin. Based on the results, in most cases the mean error is low enough, which means most algorithms converged close enough to the origin (the optimum solution).
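The error metric described above is simply the mean square error between the found solution and the known optimum (the origin in these problems); a minimal helper:

```python
def mean_square_error(solution, optimum=None):
    """MSE between a found solution and the known optimum (origin here).

    Used to check that a low cost really corresponds to a point near the
    optimum, which matters for flat-bottomed problems like Qing or Powell.
    """
    if optimum is None:
        optimum = [0.0] * len(solution)
    return sum((s - o) ** 2 for s, o in zip(solution, optimum)) / len(solution)
```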
It is necessary to remember that each algorithm may find better or significantly better solutions with suitable parameter settings. Of course, finding appropriate parameters for every combination of algorithm and problem is not easy, and even then, human error aside, the results would not be suitable for comparison. Instead, we tried to keep the conditions the same for each algorithm. Each algorithm’s parameters were adjusted to provide the best possible results for the Ackley benchmark function, and the maximum population for each algorithm was set to 25. Thus, each algorithm solved the problems with the same population size (or number of agents) and fixed parameters in a maximum of 1000 iterations. The Python codes and parameters of both the problems and the algorithms are available in the project’s GitHub repository [419].
Considering the results, most of the algorithms provide an acceptable result for the majority of problems. For each problem, the best and worst results in each category (cost, iterations, time, and error) are highlighted in green and red. Counting the number of problems in which an algorithm had the best cost value, the sine cosine algorithm was the best, with the lowest cost in five problems. In contrast, clonal selection was the worst algorithm, providing the least accurate results in three problems. It should be noted that, at the same time, the sine cosine algorithm had the least accurate result on the Qing problem. Some other algorithms, such as artificial bee colony, cat swarm optimization, the firefly algorithm, and the fireworks algorithm, also had the best results in some problems.
A significant result is the low performance of the genetic algorithm compared to the other algorithms, which aligns well with the results of other research [420]; it also had the highest error and iteration counts in the majority of problems. BFO shows low performance in most problems but had good results in solving Chung-Reynolds and Dixon price; other research confirms this and indicates its acceptable performance on other benchmark functions such as the quadratic (sphere), Rosenbrock, and Rastrigin functions. Some adaptive forms of BFO have shown significantly better performance [421]. Figure 66 compares the algorithms with the best and worst results in solving the problems.
One interesting result is that, in six problems, the Harris hawk algorithm had the lowest number of iterations. It was also the fastest algorithm in most problems. In most of those problems, it was able to find the optimal solution in fewer than 30 iterations, with an average solve time on the order of 10−2 s. In the group of algorithms with a low number of iterations, the artificial bee colony, bees algorithm, fireworks algorithm, and grey wolf optimizer must also be mentioned; however, the last two algorithms had the highest iteration count in two problems. Harmony search, bacterial foraging, forest optimization, clonal selection, and some other algorithms were unable to find accurate enough solutions in less than 1000 iterations. Figure 67 illustrates the algorithms with the lowest and the highest number of iterations.
In contrast to their poor performance regarding iteration count, the harmony search and grey wolf algorithms had the fastest performance in three problems. The bacterial foraging and firefly algorithms were the slowest in four problems. Other information regarding the average calculation time of the algorithms is shown in Figure 68.
One other important factor, especially for benchmark functions with high cost values such as Powell and Dixon price, is the mean square error of the final result. While the sine cosine, artificial bee colony, and four other algorithms had the most accurate results, the genetic algorithm had the worst. Notably, some final costs were on the order of 10−4 and 10−5 yet still had the highest errors compared to the other algorithms. Figure 69 illustrates the comparison of the most and least accurate algorithms.

4. Nature-Inspired Algorithms in Drones and Aerospace Engineering

This section reviews the potential applications of nature-inspired algorithms in different fields of aerospace engineering and drones. A comprehensive review of this field would require extensive work; as a result, this section only provides a brief overview of potential applications, and a deep study of the different applications, their specifications, and their requirements, together with a comprehensive review of the literature, is left for future work. To provide such an overview, it is useful to review similar research in this field, as many other researchers have studied optimization problems in aerospace and drones. For example, Liu et al. have reviewed the applications of convex optimization in aerospace engineering. Based on their research, optimization methods have applications in optimal trajectory design, collision avoidance, and formation control of UAVs. Regarding spacecraft, there are many applications in designing optimal trajectories, optimal control policies, optimal rendezvous guidance, optimal guidance for a swarm of spacecraft, and satellite station keeping. Optimization algorithms are also useful in high-speed aircraft, and there is much room for developing optimization approaches for re-entry vehicles, hypersonic aerial vehicles, rockets, and gliders [422]. Padula et al. have also studied optimization applications in aerospace while accounting for uncertainty. According to them, optimization methods are key tools in aircraft impact dynamics, weight optimization, safety improvement, airfoil shape optimization, aerodynamic wing design, and structural wing design [423]. Mieloszyk has also reviewed the applications of numerical optimization in aerospace. According to him, specifications such as speed and accuracy are necessary for aerospace applications; he has reviewed airfoil geometry optimization considering the maximum-lift angle of attack [424]. Lian et al. have also researched the applications of evolutionary algorithms in aerodynamic problems [425].
Considering the available papers in the literature, the potential applications of nature-inspired algorithms can be studied in eight main categories: design and manufacturing of aerospace systems, structure optimization, aerodynamics-related optimization, optimal guidance and control, engine design, communication, energy management, and infrastructure and operations. Figure 70 illustrates the proposed classification in detail. The following sections study each of these groups.

4.1. Design and Manufacturing

Nature-inspired algorithms, including heuristic and meta-heuristic algorithms, have been used in the design of aerospace and drone systems. Both the conceptual and multidisciplinary design of drones are complex, nonlinear, and highly coupled optimization problems. Due to the stochastic nature of bio-inspired optimization algorithms, there is no need for prior information about the system or for a convex search domain. Research shows the applicability and good performance of nature-inspired algorithms in solving optimal design problems for drone systems.

4.1.1. Conceptual Design Optimization

The conceptual design of drones is a challenging problem due to the interdependent nature of their sub-systems. Nature-inspired algorithms are suitable candidates for solving such problems due to their ability to handle complex and nonlinear problems without requiring prior knowledge of the system. This section provides an overview of the optimization approaches used in drone conceptual design, with a focus on the application of nature-inspired algorithms and their performance in solving these complex problems [426]. In recent years, a number of nature-inspired optimization algorithms, such as GA, have been employed for the conceptual design of aerial vehicles, including helicopters [427]. Champasak et al. have compared the performance of several meta-heuristic algorithms in solving the conceptual design problem of a fixed-wing UAV. Based on their research, among the equilibrium optimizer, evolution strategies, the moth-flame algorithm, the marine predators algorithm, the slime mold algorithm, and the salp swarm algorithm, the equilibrium optimizer and the moth-flame algorithm have the best performance [428]. Optimal conceptual design is also a popular approach in satellite design, where it is usually formulated as an MDO problem [429,430]. In the conceptual design of novel and unconventional drone configurations, such as tilt-rotor, tilt-wing, and helicopter, optimization serves as a key tool.
The design of such drones involves multiple technical aspects, including the need to balance performance requirements such as range, speed, and payload capacity while also ensuring safety and stability. Optimization algorithms enable the exploration of large design spaces and the identification of optimal solutions that satisfy multiple design objectives and constraints [431,432]. PSO has also been used to solve the optimal conceptual design of aircraft to find the best possible configuration [433].

4.1.2. Multidisciplinary Design Optimization (MDO)

Multidisciplinary design optimization (MDO) is a design methodology for systems in which there is coupling and strong interaction between different sub-systems. MDO tries to balance the optimality of all sub-systems and of the system as a whole. There are many types of MDO problems: in some, multiple disciplines interact with each other and the goal is to design while accounting for their intersection; others try to improve the design from the conceptual stage [434]; another group deals with computational and organizational issues. MDO problems can also be categorized based on the disciplines involved, for example, structure and control optimization, control and aerodynamic optimization, aerodynamic and thermodynamic optimization, etc. [435]. An MDO problem may involve CFD computations for evaluating aerodynamic characteristics, weight estimations, heat transfer calculations, and structural analysis, and may be formulated as a single-objective or multi-objective problem [436]. However, MDO problems are challenging and computationally expensive due to their complex structure [435]. Many researchers are working on fast, computationally cheap optimization algorithms for MDO problems, specifically in drone systems, and nature-inspired algorithms appear to offer comparable performance in solving complex MDO problems.
Many nature-inspired algorithms such as GA [437], PSO [438], and ABC [439] have been used to solve MDO problems in aerospace systems like aircraft design, UAV design, and turbomachinery. Moreover, there is a high potential for other algorithms to be applicable to MDO problems.

4.2. Engine Modeling and Propulsion

In modeling a gas turbine engine, many parameters must be designed and regulated simultaneously; this complex 14-dimensional problem must be solved to achieve optimized turbine performance. Optimization algorithms have been used for the modeling, structural design, and tuning of gas turbine engines. Based on recent research, many nature-inspired algorithms have been applied to gas turbine engine optimization, including GA and its variants, PSO, ACO, ABC, IWO, and other custom GA-based methods [440].
Gur and Rosen have used GA along with a detailed mathematical model of the electric system and propeller to design an electric mini UAV [441]. However, in propulsion system optimization, despite its benefits, such as solving highly nonlinear and non-convex design problems, easy implementation, parallelization capability, and global optimum finding, GA is believed to be less effective when using high-fidelity models due to the large number of evaluations required [442]. A summary of the studied publications in optimal design and their algorithm(s) is provided in Table 5.

4.3. Structure

Structural efficiency in aerospace systems and drones is a critical topic. For many important reasons, including safety, energy consumption, and cost reduction, an efficient structural design is necessary. There are many problems in this field, including structural weight optimization, stress reduction, elastic and aeroelastic characterization, and thermal resilience, which require appropriate optimization methods. In complex, computation-heavy structural problems where deterministic algorithms are not applicable, stochastic nature-inspired algorithms can provide acceptable and relatively fast results [443]. These algorithms can also be used for optimal finite element model updating in structural analysis. For example, Boulkabeit et al. have used the fish school search algorithm to update the FEM model of a GARTEUR SM-AG19 aircraft structure and have shown that it provides more accurate results than GA and PSO [444]. This section provides an overview of the optimization tools used in aircraft design, with a focus on their applications in optimizing structural features and computational analysis.

Structure Design

The use of optimization methods in aircraft design is crucial for ensuring safety in crash accidents or reducing weight. The optimization goal can be the geometry of the aircraft or specific structural features such as the rib in wings to optimize stress and manufacturability. Nature-inspired algorithms, including GA, have been widely used to optimize these features. Additionally, bio-inspired algorithms can optimize process parameters of welding in aircraft wing structures, aeroelasticity characteristics, and elastic deformation in classic aluminum-based structures or composite materials. Nature-inspired algorithms can also be applied to computational structural analysis, such as optimizing the fiber orientation of a composite wing to maximize flutter speed.
Drones often use lightweight structures made of carbon fiber reinforced plastics (CFRP), particularly continuous unidirectional (UD) carbon fibers, due to their superior mechanical properties. The layered nature of UD plies offers design flexibility, allowing the mechanical properties of the resulting laminate to be tailored to the applied loading and stiffness requirements. The challenge in optimizing the stacking sequence of large aerospace structures lies in the mixed discrete and continuous nature of the problem: structural constraints such as strength and maximum displacements are formulated using continuous quantities, while design and manufacturing rules concern discrete plies. Nature-inspired algorithms require numerous iterations, which makes the calculations expensive, whereas gradient-based algorithms perform better in handling physical constraints. A hybrid approach is therefore suggested as the best solution: heuristic algorithms handle the discrete variables, while gradient-based algorithms are responsible for the physical constraints. To bridge the gap between these stages, multiple iterations of the two-stage process are needed, resulting in a significant penalization in terms of the performance metrics of the aircraft [434].
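To make the hybrid two-stage idea concrete, the discrete stage can be sketched as a small genetic algorithm that searches over ply angles under a contiguity manufacturing rule. The stiffness proxy, ply set, penalty weight, and GA settings below are illustrative assumptions, not the formulation used in [434]:

```python
import random

# Toy stacking-sequence GA (illustrative values, not a real laminate model).
PLY_SET = [0, 45, -45, 90]
STIFF = {0: 1.0, 45: 0.6, -45: 0.6, 90: 0.3}  # assumed axial stiffness proxy per angle

def fitness(stack):
    n = len(stack)
    # Weight outer plies more, mimicking their larger bending contribution.
    stiffness = sum(STIFF[a] * (abs(i - (n - 1) / 2) + 0.5) for i, a in enumerate(stack))
    # Manufacturing rule: penalize runs of 3+ identical contiguous plies.
    penalty = sum(1 for i in range(n - 2) if stack[i] == stack[i + 1] == stack[i + 2])
    return stiffness - 5.0 * penalty

def ga(n_plies=8, pop_size=30, gens=100, seed=0):
    rng = random.Random(seed)
    pop = [[rng.choice(PLY_SET) for _ in range(n_plies)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]               # keep the best half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_plies)
            child = a[:cut] + b[cut:]              # one-point crossover
            if rng.random() < 0.2:                 # point mutation
                child[rng.randrange(n_plies)] = rng.choice(PLY_SET)
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = ga()
```

In a full workflow, the fitness evaluation would be replaced by a classical laminate theory or finite element model, and the continuous gradient-based sizing stage would run between the discrete optimization passes.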
Optimization methods are useful tools in designing the structure of the aircraft in terms of safety in crash accidents or weight reduction. The geometry of the plane or structural features of the aircraft can be considered an optimization goal [423]. For example, details such as the ribs in wings can be optimized for stress and manufacturability using nature-inspired algorithms, including GA [445]. Nature-inspired algorithms also have applications in designing pressure bulkheads. Viana et al. have applied CO, GA, and PSO to design a pressure bulkhead of an aircraft with arrays of beams. Based on their research, these algorithms can provide inexpensive and accurate results [446]. Bio-inspired algorithms such as GA also have applications in optimizing the process parameters of welding in aircraft wing structures [447]. Other detailed structural specifications can also be optimized, including aeroelasticity characteristics and elastic deformation in classic aluminum-based structures or even composite materials [448]. Some research suggests using algorithms such as harmony search for the optimization of aircraft stiffened panels [449].
It is also possible to apply nature-inspired algorithms in computational structural analysis in order to achieve an optimum design. Like CFD-based aerodynamic optimization, this process is computationally expensive, but it can reduce manufacturing costs, risk, and the need for expensive physical experiments. For example, BCO has been used to optimize the fiber orientation of a composite wing in order to maximize the flutter speed [450]. A similar problem has been solved using a hybrid variant of bacterial foraging optimization [451].
A summary of the studied publications on structure optimization and their algorithm(s) is provided in Table 6.

4.4. Aerodynamics

Aerodynamic shape optimization (ASO) is a popular field that focuses on optimizing the airfoil, wing, fuselage, control surfaces [452], tail, and other aerodynamic parts of an aircraft. Especially when combined with computational fluid dynamics (CFD), ASO becomes an important approach in modern aircraft design; it considerably reduces the aircraft development cycle time and improves performance [453]. Lian et al. have studied the applications of evolutionary algorithms in aerodynamics. Based on their survey, EAs and hybrid algorithms can be used in the design of turbopumps, compressors, and micro air vehicles [425]. Giannakoglou has also studied the design of aerodynamic shapes using stochastic algorithms such as evolutionary algorithms, genetic algorithms, evolution strategies, and their variants [454], demonstrating the performance of such algorithms in designing uniform and multi-element airfoils.

4.4.1. Airfoil Shape Design

Airfoil shape design optimization is one of the important optimization problems in aerodynamic optimization. Airfoil geometry optimization can reduce drag, increase lift, and optimize weight, while considering different parameters such as the angle of attack for maximum lift. Additionally, wing shape optimization, considering structural loads, deformations, and aerodynamics, is also an essential application in this field. Computational optimization tools have been developed to address these design challenges and improve the performance of airfoil and wing designs.
One of the optimization applications in aerodynamics is airfoil design. Optimization tools provide robust methods for reducing drag, increasing the lift-over-drag ratio, and optimizing weight. It is possible to design the two-dimensional geometry of the airfoil or shape specifications such as the mean chord or thickness. Much research has been conducted on subsonic, supersonic, and transonic airfoil design, and some profile optimization methods (POM) have been introduced so far [423]. Airfoil geometry optimization considering the angle of attack (AOA) at which maximum lift occurs is another important application, as it is one of the critical parameters in aircraft design [424]. Both uniform and multi-element airfoil design (see Figure 71) are common in this field.
Another common problem is the three-dimensional shape design of the wing, considering the airfoil, wing span, sweep angle, wing sections, etc. Wing shape optimization considering structural loads, deformations, or even both aerodynamics and structural loads and their interaction at the same time is another application [423]. A common optimization method in this group is CFD-based optimization, which systematically computes different variants of the two- or three-dimensional model using computational fluid dynamics methods. Generally, a piece of code is responsible for receiving the model, generating an appropriate mesh, and performing the computations, while an optimization process drives this code [436].
It has been shown that in a lift-maximization airfoil design problem, GA obtains better results than SA and gradient-based optimization, although at a higher computational cost [455]. The application of evolution strategy algorithms has also been studied in wing and blade airfoil design [456]. Tian and Li have used an improved version of FFO to solve the airfoil shape design problem; according to their results, this algorithm outperforms some of the evolutionary algorithms [457]. Predictably, PSO has also been used for airfoil shape design, with results showing better performance and convergence speed compared to GA [458]. Naumann et al. have used a modified version of CS to optimize airfoil shape in order to maximize the lift/drag ratio [459]. ABC has been used to optimize wind turbine blade shape using CFD and BEM methods [460]. Hoseynipoor et al. have used GSA for two-dimensional airfoil shape design to achieve the maximum lift-over-drag ratio [461]. HS has also been used for this problem [462,463]. Jalili has applied the SCA and obtained a smooth shape and low drag [382].
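As a minimal illustration of how swarm methods such as PSO are set up for airfoil design, the sketch below optimizes three NACA-style parameters (maximum camber, camber position, thickness) against a hypothetical analytic lift-over-drag surrogate; in practice the surrogate function would be replaced by a CFD or panel-method evaluation:

```python
import random

def lift_over_drag(m, p, t):
    # Hypothetical smooth surrogate standing in for a CFD evaluation:
    # lift grows with camber (up to a point); drag grows with thickness
    # and with extreme camber positions. Not a real aerodynamic model.
    lift = 0.4 + 6.0 * m - 20.0 * m ** 2
    drag = 0.01 + 0.05 * t + 0.02 * (p - 0.4) ** 2
    return lift / drag

# Assumed design variable bounds: camber, camber position, thickness.
BOUNDS = [(0.0, 0.09), (0.1, 0.9), (0.06, 0.18)]

def pso(n_particles=20, iters=100, seed=1):
    rng = random.Random(seed)
    dim = len(BOUNDS)
    pos = [[rng.uniform(*BOUNDS[d]) for d in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [lift_over_drag(*p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Inertia + cognitive + social terms (standard PSO update).
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = BOUNDS[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = lift_over_drag(*pos[i])
            if val > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val > gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

The particle update, bounds handling, and termination criterion are the parts that carry over unchanged when the surrogate is swapped for an expensive solver; only the evaluation budget changes.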

4.4.2. Wing and Tail Design

In addition to airfoils, the overall configuration of the wings can also be optimized using bio-inspired algorithms. Specifically, for unconventional wings, including multi-part, blended, and morphing wings, and even in control surface design, balancing efficiency, aerodynamics, weight, and other factors can be a challenging problem. In fixed and morphing wing aircraft, there are many configurations to choose from. These include, but are not limited to, tapered, elliptical, delta, joined, ogival, swept, forward, and nature-inspired wings. One can find many configurations for tails as well; T-tail, V-tail, X-tail, and twin tail are just some examples. Figure 72 illustrates some of the common wing and tail configurations. The goal of the optimization problem in designing the wing or tail can be to find optimal values for the wing's specifications (such as chord variation, sweep, and dihedral angle) or to obtain a primary design of the wing or tail on which to base the main design process. A good reason for the latter is that, compared to gradient-based approaches, bio-inspired algorithms such as GA have been shown to have higher computational costs; considering their easier implementation, they therefore seem more suitable for low-detail problems and preliminary design [455]. Nature-inspired algorithms have shown promise in solving complex multidisciplinary design challenges. Researchers have used algorithms such as PSO, ACO, BFO, DE, and ABC in hybrid forms or in combination with machine learning to improve aerodynamic performance and optimize wing size, topology, aeroelastic design, and morphing wing tip design.
Nature-inspired algorithms can be used to optimize the characteristics and shape of the wing and tail. Some research is based on machine-learning algorithms and focuses on the aerodynamic characteristics of the wing or tail [453]. The complete problem is a time- and energy-consuming effort, as it may combine CFD with optimization algorithms [465]. While some researchers focus on gradient-based optimization, others offer hybrid algorithms by combining GA and gradient-based approaches [466]. Gradient-based approaches are more cost-efficient than stochastic and non-gradient approaches, but they are not applicable to discrete or partially discrete problems. In such cases, for example, in the MDO design of aircraft wings, the potential and performance of bio-inspired algorithms such as PSO have been shown [467]. Wang et al. have shown the effectiveness of ACO in solving the optimization problem of wing size and topology [468]. BFO has been used to solve the optimization problem of the aeroelastic design of a rectangular wing [451]. The applicability of the DE algorithm has been shown for multimodal problems such as wing and airfoil design [469]. Li et al. have used FSO to design a variable camber morphing wing [470]. Other research focuses on the application of ABC in the morphing wing tip design of aircraft considering aerodynamic specifications [471].

4.4.3. Body Design

Designing the overall body shape of an aircraft in two- or three-dimensional approaches is another application of nature-inspired algorithms. This can be done by considering goals similar to those of airfoil design or other requirements. There are many different aerial vehicles, and most of them employ an aerodynamic body. Even in multirotor configurations or space vehicles such as satellites, in which aerodynamics is not a great concern, other specifications such as size and equipment placement pose challenging optimization problems. There are many different aerospace systems, including fixed-wing aircraft/drones, helicopters or rotorcraft, jet fighters, airships, launchers, missiles, satellites, space planes, hovercraft, unmanned aerial systems, and so on. They use different mechanisms to fly, such as lift-producing surfaces, rotors and blades, moving mass and CoG control, rocket propulsion, and gliding. Each of their mechanisms and body parts has its own characteristics that may affect the flight. For example, in a moving-mass-controlled aircraft, the weight of the moving parts has a direct effect on the controllability and maneuverability of the aircraft [472]. Moreover, the body shape of a re-entry vehicle plays a dominant role in the performance of the spacecraft [473]. Therefore, designing these specifications is indeed the subject of an optimization problem. Figure 73 illustrates a classification of aerospace/drone systems.
Some research focuses on optimal equipment placement inside the fuselage using GA [474]. Li et al. have used the bat algorithm to locate the flapping hinge in a coaxial helicopter to reduce vibration and minimize hub loads [475]. Viviani et al. have used a variant of GA to solve the problem of the optimal body shape of a re-entry vehicle [476]. Similar research was conducted by Arora and Kumar on the aerodynamic shape optimization of a re-entry vehicle [477]. PSO has also been used to find the optimal geometric body shape of a flying wing glider [478]. Generally, the geometric shape design of a flying wing considering aerodynamic specifications contains non-convex functions, which makes algorithms such as DE suitable candidates [479]. Chen et al. have applied DE to satellite layout optimization, or equipment placement. They considered three-dimensional layout optimization, which is an NP-hard problem, and showed the robustness and efficiency of this algorithm and its hybrid variants in the optimal layout design of satellites with up to 40 pieces of equipment to be placed [480].
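A much-reduced two-dimensional version of the satellite layout problem can illustrate how DE handles it: circular equipment items are placed on a circular plate so as to center the CoG while penalizing overlaps and boundary violations. The radii, plate size, penalty weight, and DE settings below are assumed toy values, not those of [480]:

```python
import random, math

RADII = [1.0, 0.8, 0.8, 0.6, 0.6]   # hypothetical equipment radii
PLATE_R = 3.0                        # hypothetical mounting plate radius

def cost(x):
    pts = [(x[2 * i], x[2 * i + 1]) for i in range(len(RADII))]
    penalty = 0.0
    for i, (xi, yi) in enumerate(pts):
        # Keep each item fully inside the plate.
        penalty += max(0.0, math.hypot(xi, yi) + RADII[i] - PLATE_R)
        for j in range(i + 1, len(pts)):
            xj, yj = pts[j]
            # Penalize pairwise overlap between circular items.
            overlap = RADII[i] + RADII[j] - math.hypot(xi - xj, yi - yj)
            penalty += max(0.0, overlap)
    # Centre-of-gravity offset from the plate centre (equal masses assumed).
    cgx = sum(p[0] for p in pts) / len(pts)
    cgy = sum(p[1] for p in pts) / len(pts)
    return math.hypot(cgx, cgy) + 100.0 * penalty

def de(np_=40, gens=300, F=0.7, CR=0.9, seed=2):
    rng = random.Random(seed)
    dim = 2 * len(RADII)
    pop = [[rng.uniform(-PLATE_R, PLATE_R) for _ in range(dim)] for _ in range(np_)]
    costs = [cost(x) for x in pop]
    for _ in range(gens):
        for i in range(np_):
            # DE/rand/1/bin: mutate three distinct others, then crossover.
            a, b, c = rng.sample([x for k, x in enumerate(pop) if k != i], 3)
            trial = [a[d] + F * (b[d] - c[d]) if rng.random() < CR else pop[i][d]
                     for d in range(dim)]
            tc = cost(trial)
            if tc < costs[i]:        # greedy selection
                pop[i], costs[i] = trial, tc
    best = min(range(np_), key=lambda i: costs[i])
    return pop[best], costs[best]
```

The real problem adds a third dimension, unequal masses, moments of inertia, and thermal or cabling constraints, but the penalty-based cost structure stays the same.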
Table 7 summarizes the studied publications on aerodynamic optimization and the algorithms they have used. The most popular algorithm and application in this section are GA and airfoil shape design, respectively.

4.5. Guidance, Navigation, and Control (GNC)

GNC is a wide field of study in aerospace and drone systems. Sometimes each of these three problems is considered separately, but in most cases, a combination of them is addressed at the same time. Each field consists of very different methods and approaches, most of which rely on optimization algorithms. In control, every classic, modern, linear, or nonlinear approach has multiple control parameters. A direct and, in most cases, easy approach to finding or tuning these parameters is using an optimization algorithm. Guidance has always been a problem of finding the optimal solution: problems such as path planning, motion planning, and formation control are tightly coupled with optimization. Other related fields, such as system identification and state estimation, are also highly dependent on optimization steps. In the following section, the current optimization-based problems in GNC and the nature-inspired solutions for solving them are covered.

4.5.1. Optimal Guidance and Control

In contrast to conventional and classic control systems, empowered control systems using nature-inspired algorithms and machine-learning algorithms usually have high autonomy and robustness. Unconventional control systems provide more flexibility, and they can handle difficult and unpredictable conditions and challenges. They can also eliminate the need for the model of the system, or at least the exact model of the system using online intelligent identification systems. However, using these controllers usually increases computational costs, time, and memory.
Small UAVs such as multirotors are fast and agile systems and rely on fast and optimal control and trajectory planning methods to perform different missions. Collision avoidance is another necessary feature in the path planning of these vehicles, which requires optimization processes in addition to reliable sensors and vision-based navigation approaches [422].
Optimum path planning, specifically for minimum fuel consumption, and other objectives such as minimum time and shortest trajectory, are the main issues in space missions such as satellites, moon landers, Mars missions, and space rockets. Another space application is spacecraft rendezvous, which is a highly nonlinear optimal control problem [422]. Satellite station keeping and orbit control is a nonlinear and constrained optimization problem that needs optimization algorithms [422]. Swarm control of aerospace systems such as drones, spacecraft, and satellites, considering hundreds of potential agents, is a complex path planning and collision avoidance optimization problem [422].
Hypersonic aircraft are known for their highly nonlinear dynamics and complex aerodynamics. There are many constraints in the navigation and path planning of these aircraft, which are usually nonconvex. Hence, nature-inspired optimization algorithms are potentially suitable options for solving such guidance problems [422]. In hypersonic airframes exhibiting elevated lift-to-drag ratios, optimization algorithms are beneficial in designing flight paths characterized by soft gradients and curvature. Such algorithmic optimization permits hypersonic gliders to attain favorable trajectories that reduce stress on structures while minimizing aerodynamic losses, resulting in extended range and endurance [422].
In path planning or optimal trajectory design, the problem varies based on the objective or goal. Common objectives include optimal control or lower energy consumption, optimal or minimum time, an optimal trajectory, or the shortest path; in most cases, a combination of these goals is considered. Another common problem in path planning is obstacle avoidance, which is especially critical in harsh environments such as cities, forests, and indoor settings. Optimal obstacle avoidance ensures the safety of the system, fuel efficiency, and mission success at the same time. Optimal guidance is also common in swarm systems, where missions such as formation control are always formulated as optimization problems.
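A minimal sketch of such a formulation, assuming a toy two-dimensional scenario with disc obstacles: the cost combines path length with an obstacle penalty, and simulated annealing searches over waypoint positions (for brevity, the penalty is checked only at waypoints; a real planner would also sample along each segment):

```python
import math, random

OBSTACLES = [((4.0, 3.0), 1.5), ((7.0, 6.0), 1.0)]   # assumed (centre, radius) discs
START, GOAL = (0.0, 0.0), (10.0, 8.0)

def path_cost(waypoints):
    pts = [START] + waypoints + [GOAL]
    # Objective 1: total path length.
    length = sum(math.dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1))
    # Objective 2: penetration depth of points into obstacle discs.
    penalty = 0.0
    for (cx, cy), r in OBSTACLES:
        for x, y in pts:
            penalty += max(0.0, r - math.hypot(x - cx, y - cy))
    return length + 50.0 * penalty

def anneal(n_wp=4, iters=5000, seed=3):
    rng = random.Random(seed)
    wp = [(rng.uniform(0, 10), rng.uniform(0, 8)) for _ in range(n_wp)]
    best, best_c = wp, path_cost(wp)
    temp = 2.0
    for _ in range(iters):
        # Gaussian perturbation of all waypoints, Metropolis acceptance.
        cand = [(x + rng.gauss(0, 0.3), y + rng.gauss(0, 0.3)) for x, y in wp]
        dc = path_cost(cand) - path_cost(wp)
        if dc < 0 or rng.random() < math.exp(-dc / max(temp, 1e-9)):
            wp = cand
            if path_cost(wp) < best_c:
                best, best_c = wp, path_cost(wp)
        temp *= 0.999          # geometric cooling schedule
    return best, best_c
```

Any of the population-based algorithms discussed above (PSO, DE, GA, GWO) can be dropped in for the annealing loop; only the search operator changes, while the cost formulation stays the same.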
Path planning for drones has numerous challenges, such as difficulties in navigation and guidance, obstacle detection and avoidance, accounting for the shape and size of the drone, and formation control in swarms. However, recent developments in high-performance navigation with data fusion and intelligent navigation can help in overcoming these problems. Nature-inspired algorithms play a dominant role in drone path planning; PSO, ACO, DE, GA, and GWO are examples of bio-inspired algorithms applied to it. Several studies have proposed algorithms for multi-UAV path planning. Collision-free and obstacle-free path planning in four-dimensional space has been studied using PSO. ACO has been applied together with collision avoidance protocols, the inscribed-circle method for smoothness based on the Metropolis criterion, and three predicted trajectory correction schemes. A dynamic discrete pigeon-inspired optimization technique has been used for search-attack missions with both distributed path generation and centralized mission tasking. Modified versions of PSO have also been studied for UAV path planning, showing a faster convergence rate and better solutions [481]. Konatowski et al. have used ACO for autonomous optimal route construction of a UAV. The results of the simulations show the dependency of the UAV trajectory on the selected weighting factors, which determine the priority of avoiding detected hazards or choosing the shortest path [482]. Yu et al. have proposed using drones for disaster situational awareness by optimizing their path planning through an adaptive selection mutation constrained DE algorithm. The algorithm selects individuals based on their fitness values and constraint violations, improving exploitation and maintaining exploration. Experimental results show that the proposed algorithm is competitive with state-of-the-art algorithms, making it suitable for disaster scenarios [483]. Qu et al.
have designed a novel reinforcement learning-based GWO (RLGWO) to address the challenge of high-quality path planning for drones in complex three-dimensional flight environments. The proposed algorithm incorporates reinforcement learning to enable adaptive switching of operations based on accumulated performance. Four operations, namely exploration, exploitation, geometric adjustment, and optimal adjustment, are introduced for each individual to serve UAV path planning. The generated flight route is smoothed using a cubic B-spline curve, making it suitable for UAVs. Simulation results demonstrate the RLGWO algorithm's feasibility and effectiveness in generating a suitable path for UAVs in complex environments [484]. Another study by Shen et al. focuses on a multi-objective optimization approach to path planning in a three-dimensional terrain scenario with constraints, using an evolutionary algorithm based on multi-level constraint processing (ANSGA-III-PPS) to plan the shortest collision-free flight path of a gliding UAV. The proposed algorithm employs an adaptive constraint processing mechanism to improve path constraints in a three-dimensional environment and an improved adaptive non-dominated sorting GA to enhance path planning ability in a complex environment. Experimental results demonstrate that ANSGA-III-PPS outperforms four other algorithms in terms of solution performance, validating the effectiveness of the proposed algorithm and enriching research results in UAV path planning [485].
One of the popular algorithms in path planning is the bat algorithm. Lin et al. have used an improved version of the bat algorithm in combination with an artificial potential field and a chaos strategy for UAV path planning; using this approach, they achieved robust performance and global optimality [486]. Wang et al. have used an improved version of the bat algorithm for a UAV's optimal dynamic target tracking problem [487]. The bat algorithm has also been used for optimal pitch control [488] and aircraft landing [489]. Li et al. have used the clonal selection algorithm for optimal UAV route evaluation [490]. The trajectory-tracking problem of a quadrotor has been studied and solved with cuckoo search and PSO [491]. Trajectory planning of MAVs in urban environments using cuckoo search has been studied by Hu et al. [492]. Zhang et al. have applied an improved version of DE to the online path planning of a quadrotor; according to their results, this algorithm produces feasible paths and better performance compared to PSO and GA [493]. Nikolos and Brintaki have considered coordinated path planning of UAVs using DE [494]. Alihodzic has used the fireworks algorithm for solving the NP-hard problem of UAV path planning and reports that it outperforms other nature-inspired algorithms such as DE, PSO, and CS [495]. This is well aligned with the results of the comparison in Section 3, where the fireworks algorithm stands at the top of the list for most of the benchmark functions. Zhang et al. have also studied the path planning problem in UAVs using a hybrid of DE and the fireworks algorithm [496]. Roberge and Tarbouchi have used the flower pollination algorithm for real-time trajectory planning of a UAV [497]. Li and Duan have studied the problem of optimal path planning for a UAV using the gravitational search algorithm [498]. Qu et al. have used GWO for optimal path planning of a UAV [499]. Lou et al.
have developed an improved butterfly optimization (BOA-TSAR) algorithm for autonomous three-dimensional pathfinding of drones in complex spaces. The algorithm improves the randomness strategy of initial population generation using the tent chaotic mapping method, adaptive nonlinear inertia weights, a simulated annealing strategy, and stochasticity mutation with global adaptive features. Simulation experiments verify the superior performance of BOA-TSAR, achieving optimal path length and smoothness measures. The algorithm is competitive among swarm intelligence algorithms of the same type [500].
UAVs are increasingly used for a variety of civilian applications such as delivery, logistics, surveillance, and entertainment. Path and trajectory selection for UAVs can be formalized as a constrained TSP path optimization problem, which shares similarities with problems that have been studied in the context of urban vehicles. A recent study by Khoufi et al. reviews the applications of GA, PSO, ACO, and SA in solving this problem [501]. The drone-truck problem is one of the famous optimization problems studied by numerous researchers. This problem is usually modeled as a traveling salesman problem (TSP) in which a truck and a drone are used for package delivery. The goal is to deliver packages while passing each city or node only once. The drone leaves the truck in one city and returns in another city for package loading and battery recharging (or swapping). Based on recent research, the majority of efforts in this field focus on applying heuristic algorithms [502]. Cooperation of drones with other vehicles, such as underwater and ground vehicles, is another problem that can be solved by nature-inspired algorithms. Drones can be used to support the operation of other vehicles and drones or may perform independent missions [503]. Weng et al. propose a cooperative truck and drone delivery path optimization problem to minimize delivery task completion time. The truck carries cargo along the outer boundary of a restricted traffic zone and sends/receives the drone responsible for delivering the cargo to customers. To solve this problem, a hybrid meta-heuristic optimization algorithm based on WWO is applied to optimize the paths of the truck and drone. Experimental results show that the proposed algorithm performs competitively compared to other popular optimization algorithms such as basic WWO, GA, PSO, DE, BBO, and EBO [504].
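The routing core of these delivery problems can be sketched as a plain TSP solved by a nearest-neighbor construction followed by 2-opt improvement; the coordinates are toy values, and the truck-drone synchronization constraints of the full problem are omitted:

```python
import math

# Toy depot (index 0) and customer locations.
CITIES = [(0, 0), (2, 5), (5, 2), (6, 6), (8, 3), (1, 7), (7, 8), (3, 1)]

def tour_length(tour):
    # Closed tour: return to the starting city at the end.
    return sum(math.dist(CITIES[tour[i]], CITIES[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def nearest_neighbor(start=0):
    # Greedy construction: always visit the closest unvisited city next.
    unvisited = set(range(len(CITIES))) - {start}
    tour = [start]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda c: math.dist(CITIES[last], CITIES[c]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def two_opt(tour):
    # Local improvement: reverse segments while any reversal shortens the tour.
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
                if tour_length(cand) < tour_length(tour):
                    tour, improved = cand, True
    return tour

tour = two_opt(nearest_neighbor())
```

Meta-heuristics such as GA or ACO replace this deterministic improvement step when the instance is large or when drone launch/recovery decisions are layered on top of the tour.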
Coverage path planning is another field in drone guidance and control that relies on optimization algorithms. In this problem, a path should be found such that all points of some area are covered at least once. Factors such as the range of the drone's camera and the avoidance of duplicate coverage make coverage path planning a challenging problem. It may be combined with other objectives, such as energy reduction or coverage in minimum time, and further challenges such as obstacle avoidance make the final optimization problem even harder. This problem is common in agriculture, environmental protection, disaster management, and search and rescue applications. Otto et al. have shown that the majority of the research in this area applies heuristic and meta-heuristic algorithms [503].
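For a rectangular area and an idealized camera footprint, a basic boustrophedon (back-and-forth) sweep achieving full coverage can be generated directly; this is a simplified sketch that ignores obstacles and irregular region shapes:

```python
import math

def boustrophedon(width, height, swath):
    """Back-and-forth sweep waypoints covering a width x height rectangle.

    swath is the camera footprint width; lanes are centred so that every
    point lies within swath/2 of some pass (no gaps, and duplicate
    coverage is limited to the overlap between adjacent lanes).
    """
    n_lanes = math.ceil(width / swath)       # minimum number of passes
    lane_step = width / n_lanes              # actual spacing <= swath
    waypoints = []
    for k in range(n_lanes):
        x = (k + 0.5) * lane_step            # lane centreline
        # Alternate sweep direction on each lane.
        ys = (0.0, height) if k % 2 == 0 else (height, 0.0)
        waypoints += [(x, ys[0]), (x, ys[1])]
    return waypoints
```

Optimization algorithms enter when this simple geometry no longer applies, e.g., choosing the sweep direction, decomposing an obstacle-filled region into cells, or ordering the cells to minimize total flight time.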
The complete automation of fixed-wing UAV operations involves autonomous execution of take-off, cruising, and landing. The landing stage is particularly crucial, requiring the UAV to maintain a constant speed and glide slope to ensure stability and a successful touchdown on the runway, while also estimating the landing point accurately in minimal time. Incorporating bio-inspired algorithms into UAV control systems can improve the accuracy and speed of landing point estimation. A study by Ilango and R. utilized the bats optimization algorithm, moth flame optimization algorithm, and artificial bee colony algorithm to determine the computed path coordinates and optimal landing point within the operational limits of the UAV. The objective was to identify the optimal landing point in minimal time based on the computed points. The error rate between the actual path and estimated path computed points was used to measure performance. Empirical results indicate that the moth flame optimization algorithm performs the best, taking the least amount of time to compute the optimal point with minimal error, compared to the other two optimization algorithms examined [505]. A dual swarm optimization algorithm that combines the dragonfly optimization method and the DE method is designed by Liang et al. to address the obstacle avoidance trajectory planning problem in the landing process of micro drones. An orthogonal learning mechanism is implemented to facilitate adaptive switching between the two algorithms. In the landing route planning process, the planning plane is obtained by making the gliding plane tangent to the obstacle. The obstacle projection is transformed into multiple unreachable line segments in the planning plane. An optimization model is designed to transform the three-dimensional landing route planning problem into a two-dimensional obstacle avoidance route optimization problem. 
The shortest route is chosen as the optimization objective, and a penalty factor is introduced into the cost function to prevent the intersection of the landing route and obstacles. During the optimization process, the hybrid algorithm adaptively selects the next iterative algorithm through orthogonal learning of intermediate iterative results, allowing for the full utilization of the respective advantages of the two algorithms. The optimization results demonstrate that the proposed hybrid optimization algorithm is more effective in solving the landing route planning problem for micro-small UAVs compared to a single optimization algorithm [506]. The problem of landing scheduling has also been considered in applications of nature-inspired algorithms. Some research suggests applying the flower pollination algorithm to this problem [507,508], while others focus on GWO [509] and the harmony search algorithm [510]. Abdul-Razaq and Ali have used the bees algorithm to provide nature-inspired landing scheduling for aircraft [511]. Jia et al. have solved a similar problem using the clonal selection algorithm [512].
Trajectory optimization in space missions is also an optimization problem; an optimized trajectory is a key factor in vehicle stability and mission success. Chai et al. have reviewed the optimization techniques used in spacecraft trajectory design. According to their research, in complex trajectory design problems where gradient-based approaches are not applicable, stochastic or evolution-based algorithms such as GA, PSO, DE, and ACO have been used alone or in combination with other approaches, such as gradient-based algorithms, to solve the optimal trajectory design of spacecraft. However, they indicate that when using NIAs, the validation of solution optimality becomes difficult, and the computational complexity of the heuristic optimization process tends to be very high, making it challenging to treat heuristic-based methods as a standard optimization algorithm that can solve general spacecraft trajectory planning problems [513]. Shuang et al. have conducted similar research, studying the optimization approaches used in international and China's national trajectory optimization competitions. Based on their research, in missions and problems such as multi-spacecraft exploration and space debris removal, which contain trajectory design and rendezvous problems, algorithms such as GA, PSO, ACO, and some hybrid algorithms have been used by researchers [514]. Shirazi et al. have conducted similar research and concluded that common objectives in spacecraft trajectory optimization are the Mayer term (state or input at the end of the trajectory), time, velocity, the Lagrange term (integral of input or state along the trajectory), acceleration, fuel mass, or smoothness of the trajectory. Based on their findings, in addition to the previous algorithms, DE and SA have also been used in spacecraft trajectory optimization [515]. Su and Wang have studied the trajectory optimization of a reusable launch vehicle using GSA [516].
GNC can be considered the most popular application area for nature-inspired optimization algorithms. These algorithms have been widely used in optimal control, path planning, task allocation in swarms, mission planning, obstacle avoidance, formation control, and autonomous flight. Zhou et al. have studied UAV swarm systems; based on their research, algorithms such as ACO, wolf swarm, ABC, and the firefly algorithm have shown potential applications in solving UAV swarm distributed control problems, as have GA and PSO. Nature-inspired algorithms such as GA, ACO, GWO, glowworm optimization, wolf pack, simulated annealing, and the krill herd algorithm also have considerable applications in task allocation in swarm systems. Regarding path planning specifically, algorithms such as PSO, GWO, fruit fly optimization, and the pigeon-inspired algorithm have been used for three-dimensional path planning, dynamic path planning, area coverage path planning, and other optimal planning applications. Algorithms such as the fireworks algorithm have also been used for the optimization of satellite control laws [517]. By modeling the particulars of the aircraft and the constraints of the flight scenario within an optimization framework solvable via the fireworks algorithm, Xue et al. demonstrate the generation of optimal trajectories [518]. The complex scheduling and routing issues inherent to air traffic control systems have also been effectively addressed through the application of the gravitational search algorithm [519]. Trajectory tracking, along with trajectory planning, is another popular problem [520]. Aircraft engine control is another probable application; algorithms such as GWO have been used for such problems [521]. Katal et al. have used the bat algorithm for robust flight control of a UAV [522].
Control parameter tuning using bio-inspired algorithms is one common application for these algorithms; Lin et al. have used such an approach for PID UAV flight control tuning with ABC [523]. Bian et al. have conducted similar research using the bacterial foraging algorithm [524]. Another example is research by Oyekan and Hu, who developed PID control gain tuning for a UAV [525]. Bencharef and Boubertakh have used the bat algorithm for parameter tuning of a quadrotor's PD controller [526]. Zeri et al. have used the bees algorithm for optimal tuning of an aircraft's fuzzy controller, which consists of a set of linguistic rules with adjustable membership functions and scaling factors that determine its performance [527]. Huang and Fei have used clonal selection for parameter tuning of an active disturbance rejection controller of a UAV [528]. Zatout et al. have used PSO, the bat algorithm, and cuckoo search for optimizing the fuzzy attitude controller of a quadrotor; according to them, the bat algorithm provided better computing time and performance compared to CS and PSO [529]. Glida et al. have used cuckoo search for parameter optimization of a quadrotor's backstepping controller [530]. Pedro et al. utilized differential evolution for proportional-integral-derivative (PID) gain tuning of a quadrotor UAV operating in hovering flight conditions. The authors achieved significantly improved hovering performance compared to the untuned initial PID gains. The results demonstrate the utility of evolutionary optimization techniques such as differential evolution for automating the complex process of PID controller design for nonlinear dynamical systems such as quadrotor vehicles [531]. Wang et al. have used a similar approach for quadrotor trajectory tracking using PID [532].
Keskins have conducted similar research on tuning the PD parameters of a quadrotor position controller using the firefly algorithm [533]. Kaba has also used the firefly algorithm for PID tuning of a quadrotor's controller [534]. Nonlinear controllers such as the sliding mode controller and the backstepping controller have also been tuned by the firefly algorithm for optimal quadrotor flight control [535]. Prabaningtyas and Mardlijah have used the firefly algorithm for parameter tuning of a linear quadratic Gaussian tracking controller applied to a quadrotor's trajectory tracking problem [536]. Yin et al. have used the fireworks algorithm for parameter tuning in hypersonic vehicle sliding mode control [537]. Glida et al. have used the flower pollination algorithm to optimize a fuzzy adaptive backstepping controller for quadrotor attitude control [538]. A similar approach has been used by Basri and Noordin using gravitational search optimization [539]. Abbas and Sami have also used this algorithm for tuning the PID gains of a quadrotor's controller [540]. Cai et al. have worked on the application of grey wolf optimization to active disturbance rejection control parameter tuning for a quadrotor's trajectory tracking [520]. Hartawan has applied the harmony search algorithm for PID gain tuning in a quadrotor's controller [541]. Altan has studied the performance of Harris hawk optimization in PID gain tuning of the attitude and altitude controllers of a UAV in a path-following problem [542]. Yuan et al. have developed a robust close-formation control system for UAV flights using dynamic estimation and compensation to address wake vortex effects and advance UAV close-formation flight to an engineering-implementation level. The control system is divided into three control subsystems for the longitudinal, altitude, and lateral channels, using linear active disturbance rejection control (LADRC) with two cascaded first-order LADRC controllers.
Sine-powered pigeon-inspired optimization is proposed to optimize the control parameters for each channel. Simulation results show that the designed control system achieves stable and robust dynamic performance within the expected error range, maximizing the aerodynamic benefits for a trailing UAV [543]. Jing et al. have proposed a disturbance-observer-based nonlinear sliding mode surface controller (SMC) for a simulated PX4-controlled quadcopter and optimized its parameters using PSO. The quadcopter's tracking performance is evaluated and compared under various noise and disturbance conditions against PID control strategies. Results show that the PSO-powered SMC controller with a disturbance observer enables accurate and rapid adaptation of the quadcopter in uncertain dynamic environments, outperforming the PID control strategies under the same conditions [544].
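The gain-tuning studies above share a common pattern: the controller gains are treated as a search vector, and a closed-loop performance cost is minimized by the metaheuristic. As a hedged illustration (not a reconstruction of any cited implementation), the sketch below tunes PID gains with a minimal PSO on a toy first-order plant standing in for vehicle dynamics; the plant model, the ISE cost, the search bounds, and all PSO parameters are illustrative assumptions.

```python
import random

def simulate_pid(kp, ki, kd, setpoint=1.0, dt=0.01, steps=500):
    """Integral-of-squared-error for a PID loop on a toy first-order
    plant x' = -x + u (a stand-in for real vehicle dynamics)."""
    x, integ, prev_err, cost = 0.0, 0.0, setpoint, 0.0
    for _ in range(steps):
        err = setpoint - x
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        prev_err = err
        x += (-x + u) * dt              # Euler step of the plant
        if abs(x) > 1e6:                # diverged: return a large penalty
            return 1e6
        cost += err * err * dt          # ISE cost
    return cost

def pso_tune(n_particles=20, iters=60, seed=1):
    """Minimal PSO over the gain vector (kp, ki, kd)."""
    rng = random.Random(seed)
    # Assumed search ranges; kd is kept small so the discrete
    # derivative term cannot destabilize the simulated loop.
    bounds = [(0.0, 20.0), (0.0, 10.0), (0.0, 0.5)]
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(n_particles)]
    vel = [[0.0] * 3 for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [simulate_pid(*p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(3):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            c = simulate_pid(*pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost
```

In this toy setting, `pso_tune()` returns a gain triple whose ISE is far below that of an untuned proportional-only loop such as `simulate_pid(1.0, 0.0, 0.0)`; the same loop structure carries over when the simulated plant is replaced by a full vehicle model.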
Swarm motion and swarm control of drones and aircraft are newer applications in which bio-inspired algorithms have been widely used. Shafieenejad et al. have used the bees algorithm for swarm guidance of aerial robots; in combination with fuzzy logic, they achieved faster results [545]. Zhang et al. have studied the formation control of UAV swarms using DE, considering their reconfiguration [546]. Bian et al. have also used DE for UAV swarm formation and trajectory tracking [547]. Wang et al. have applied the grey wolf optimization algorithm to the coordination control of a UAV swarm [548]. Moth flame optimization has also been used by Ma et al. for optimal path planning of a UAV swarm [549].
Xiong et al. have proposed a method for multi-drone mission assignment and path planning in a three-dimensional disaster rescue environment using an adaptive genetic algorithm (AGA) and sine cosine particle swarm optimization (SCPSO). The method considers factors such as drone performance, mission points, elevation cost, and threat sources to formulate a cost-revenue function and employs the AGA to assign missions to multiple drones. SCPSO is used for optimal flight path planning. Simulation experiments have validated the effectiveness of the proposed method [550]. Qiu et al. have designed a UAV flocking distributed optimization control framework to convert the many-objective optimization problem into a multi-objective optimization problem solved by a single UAV. To account for onboard computing resource limitations, a modified multi-objective pigeon-inspired optimization (MPIO) algorithm is proposed based on the hierarchical learning behavior in pigeon flocks. Comparison experiments with basic MPIO and a modified non-dominated sorting genetic algorithm (NSGA-II) demonstrate the feasibility, validity, and superiority of the proposed algorithm [551]. A study by Ali et al. investigates the path planning of multiple unmanned aerial vehicles in a dynamic environment using a hybrid algorithm that combines maximum-minimum ACO (MMACO) and DE. The proposed algorithm addresses the limitations of existing classical ACO and MMACO, which face challenges in balancing excessive information and global optimization. MMACO is used to identify the best ant of each colony to construct the path, and DE is used to optimize the path that escapes MMACO. This ensures the identification of the best global colony that provides optimal solutions for the entire colony. The proposed approach also enhances robustness while preserving global convergence speed.
The simulation experiments are conducted on common benchmark functions to test the effectiveness of the proposed algorithm [552]. The problem of optimal cooperative path planning in a UAV swarm has also been studied by Wu et al. using HS [553].
Task allocation in swarms is another application for nature-inspired algorithms. It is a complex, multi-objective problem that has been introduced recently. Yu et al. have considered the fireworks algorithm for task allocation in a swarm of UAVs in an uncertain environment; according to their results, this algorithm provides better results compared to PSO [554]. Zhang et al. have also studied the task assignment problem in UAV swarms using the fireworks algorithm [555]. Cui et al. have used harmony search for task assignment in a UAV swarm [556]. Xiang et al. have proposed a multi-UAV mission planning model that considers mission execution rates, flight energy consumption costs, and impact costs. A lightning search algorithm based on multi-layer nesting and random walk strategies (MNRW-LSA) is proposed to address three-dimensional UAV kinematic constraints and the poor uniformity of traditional optimization algorithms. The algorithm's convergence performance is demonstrated and compared to other algorithms using optimization test functions and Friedman and Nemenyi tests. They have also applied a greedy strategy to the RRT algorithm to initialize trajectories for simulation experiments using a three-dimensional city model. The proposed algorithm is shown to improve global convergence, robustness, and UAV execution coverage and to reduce energy consumption. The proposed method has greater advantages than other algorithms, such as PSO, SA, and LSA, in addressing multi-UAV trajectory planning problems [557].
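To make the task-allocation formulation concrete, the hedged sketch below uses a simple integer-coded GA in which each gene assigns one task to one UAV and fitness is the makespan (the largest per-UAV workload). This is a single-objective toy illustration, not a reconstruction of any cited method; the cost matrix, operators, and parameters are assumptions.

```python
import random

def ga_assign(costs, pop=30, gens=150, pm=0.2, seed=5):
    """Integer-coded GA: gene g[task] = index of the UAV assigned to it.
    costs[task][uav] is the workload of the task on that UAV.
    Fitness = makespan (max per-UAV workload), to be minimized."""
    rng = random.Random(seed)
    n_tasks, n_uavs = len(costs), len(costs[0])

    def makespan(g):
        load = [0.0] * n_uavs
        for task, uav in enumerate(g):
            load[uav] += costs[task][uav]
        return max(load)

    P = [[rng.randrange(n_uavs) for _ in range(n_tasks)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=makespan)
        nxt = P[:2]                                # elitism: keep the two best
        while len(nxt) < pop:
            a = min(rng.sample(P, 3), key=makespan)  # tournament selection
            b = min(rng.sample(P, 3), key=makespan)
            cut = rng.randrange(1, n_tasks)
            child = a[:cut] + b[cut:]                # one-point crossover
            if rng.random() < pm:                    # mutation: move one task
                child[rng.randrange(n_tasks)] = rng.randrange(n_uavs)
            nxt.append(child)
        P = nxt
    return min(P, key=makespan)
```

For instance, with six unit-cost tasks and three UAVs, the GA recovers a balanced 2-2-2 assignment; multi-objective variants such as those in the cited works replace the scalar makespan with a vector of objectives and a dominance-based ranking.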
Another application for nature-inspired algorithms is active vibration reduction in aircraft. Zarchi and Attaran have developed a method for improving an aircraft's vibration absorber using the bees algorithm [558]. A similar technique has been applied by Toloei et al. [559].
Table 8 summarizes the studied publications in this section. As can be seen, although this section includes a wide range of algorithms, the majority of papers have focused on a limited number of algorithms such as DE, PSO, BA, GWO, and GA. Figure 74 provides the share of different nature-inspired algorithms in the reviewed publications in this section. It can be said that the most popular applications of bio-inspired algorithms in this group are controller parameter tuning, path and trajectory planning, and optimal swarm motion.

4.5.2. System Identification

System identification is popular because of its low cost and efficiency. Combining a mathematical model with real flight data can yield sufficiently accurate estimates of the system's parameters and model. Nature-inspired algorithms are useful tools for system identification and parameter estimation in both linear and nonlinear models. Algorithms such as GA, ABC, and PSO have been used for the identification of aerospace systems such as aircraft and helicopters [560,561]. Researchers have also applied various other meta-heuristic algorithms, such as DE, CS, and HS, to parameter identification in chaotic systems and infinite impulse response identification. These algorithms have been used in combination to provide better results and have shown promise for the system identification of quadrotors, multi-rotor UAVs, unmanned helicopters, and small fixed-wing drones. The effectiveness of algorithms such as HS has also been demonstrated in aircraft and small-helicopter parameter estimation.
Gotmare et al. have studied the applications of evolutionary algorithms in system identification. Based on their research, algorithms such as GA, ACO, the artificial immune system, PSO, HS, DE, BFO, the fish swarm algorithm, gravitational search, cuckoo search, evolutionary programming, and ant swarm optimization have been used for parameter identification in chaotic systems, Hammerstein identification, and infinite impulse response identification [562]. El gmili et al. have used PSO and cuckoo search for parameter estimation of a quadrotor; based on their research, a combination of CS and PSO provides better results [563]. Yang et al. have conducted similar research based on GA [564]. Wang et al. have used DE for system parameter estimation of a multi-rotor UAV [565]. Tijani et al. have applied DE to the system identification of an unmanned helicopter [566]. Nonut et al. have used 13 different meta-heuristic algorithms, including the ant lion algorithm, dragonfly algorithm, grasshopper algorithm, grey wolf optimizer, moth flame algorithm, salp swarm algorithm, whale optimization, sine cosine algorithm, water cycle algorithm, and evolution strategy, for the identification of a small fixed-wing drone. Based on their research, the water cycle algorithm, ant lion optimization, and moth flame optimization are the top three algorithms on that list [567]. Li and Duan have applied harmony search to aircraft parameter estimation [568]. Similar research has been conducted by Yang et al. on the identification of a small helicopter in the frequency domain using harmony search [569].
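The common thread in these identification studies is recasting estimation as optimization: a candidate parameter vector is simulated and scored against measured responses, and the metaheuristic searches the parameter space. The hedged sketch below illustrates this with a DE/rand/1/bin loop fitting the two parameters of a toy first-order model x' = a·x + b·u to synthetic data; the model, cost, and DE settings are illustrative assumptions, not any cited setup.

```python
import random

def simulate(a, b, u_seq, dt=0.05, x0=0.0):
    """Euler-integrate the toy model x' = a*x + b*u over an input sequence."""
    xs, x = [], x0
    for u in u_seq:
        x += (a * x + b * u) * dt
        xs.append(x)
    return xs

def de_identify(u_seq, y_meas, pop=20, gens=100, lo=-5.0, hi=5.0, seed=3):
    """DE/rand/1/bin over theta = (a, b); cost = sum of squared residuals."""
    rng = random.Random(seed)

    def cost(th):
        return sum((yh - y) ** 2
                   for yh, y in zip(simulate(*th, u_seq), y_meas))

    P = [[rng.uniform(lo, hi) for _ in range(2)] for _ in range(pop)]
    C = [cost(th) for th in P]
    F, CR = 0.8, 0.9                        # mutation factor, crossover rate
    for _ in range(gens):
        for i in range(pop):
            r1, r2, r3 = rng.sample([j for j in range(pop) if j != i], 3)
            jrand = rng.randrange(2)        # force at least one mutated gene
            trial = []
            for d in range(2):
                if rng.random() < CR or d == jrand:
                    v = P[r1][d] + F * (P[r2][d] - P[r3][d])
                else:
                    v = P[i][d]
                trial.append(min(hi, max(lo, v)))
            ct = cost(trial)
            if ct <= C[i]:                  # greedy one-to-one selection
                P[i], C[i] = trial, ct
    best = min(range(pop), key=lambda i: C[i])
    return P[best], C[best]
```

On noiseless step-response data generated with a = -1.5 and b = 2.0, this loop recovers both parameters closely; with real flight data, the simulated model and residual cost would be replaced by the vehicle's dynamic model and logged responses.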
Table 9 summarizes the studied publications in optimal system identification. Multirotors are, in general, the most attractive systems being studied for NIA-based optimal identification. Among the algorithms, GA, PSO, DE, ABC, and HS seem to have equal potential in solving optimal system identification problems.

4.5.3. Navigation

Nature-inspired algorithms have shown great potential in the navigation of robots, particularly in challenging environments such as urban and crowded areas. Optimization algorithms such as bat algorithm, MFO, PSO, CS, and GWO have been applied to automatic robot navigation, image processing, and localization problems in swarm systems. These algorithms have been used in various applications such as integrated navigation, target recognition, and source localization in UAV-based search and rescue missions. Additionally, researchers have used CS and DE algorithms for automatic guided vehicles’ navigation and autonomous UAV swarm coordination, respectively. Furthermore, hybrid versions of GWO and SCA have been applied to energy-efficient localization of UAVs and visual tracking techniques, respectively, resulting in better performance. These algorithms provide safe and collision-free trajectories in the presence of uncertainty, error, and external disturbances.
Recent work has examined applications of algorithms such as the bat algorithm, moth flame optimizer, PSO, CS, and GWO in automatic robot navigation [570]. Some navigation methods rely on optimization processes for calculations or for processing images in novel vision-based navigation approaches. Optimization algorithms also have applications in localization problems in swarm systems. Zhangs have applied PSO for optimal localization in a UAV swarm in order to reduce the localization error [571]. Shanshan et al. have used ABC to improve the performance of integrated navigation, resulting in a reduction in velocity and position error [572]. Duan has also studied the application of ABC to target recognition and of PSO to image matching for a low-altitude UAV [573]. Clonal selection has been used for an automatic guided vehicle's navigation in a warehouse [574]. Banerjee et al. have applied the cuckoo search algorithm for source localization in UAV-based search and rescue missions to determine the location of the victim [575]. Alfeo et al. have used DE to provide autonomous UAVs in a swarm with self-coordination and robustness [576]. Sun et al. have also used DE to increase the geosynchronous synthetic aperture radar imaging performance of a UAV, used in the navigation and path planning of the UAV [577]. Li et al. have developed a three-dimensional localization approach for multiple UAVs using a flipping ambiguity avoidance optimization algorithm. Beacon UAVs collect data and utilize a semidefinite programming-based approach to estimate the global position of GPS-denied UAVs. They have applied an improved GWO algorithm to improve positioning accuracy in noisy environments. Simulation results show the superiority of the proposed approach over similar methods [484]. Arafat and Moh have developed a similar energy-efficient localization method for UAVs in swarms based on a hybrid version of GWO [578].
The navigation of drones in urban and crowded places is a challenging issue. The first problem is safety, and the second is uncertainty and inaccurate measurements, such as those from GPS. To overcome this, Radmanesh and Kumar have designed an optimization method based on GWO which, by using automatic dependent surveillance-broadcast (ADS-B), determines the accurate distance to obstacles and provides optimal, safe, collision-free trajectories [579]. Nenavath et al. have applied the sine cosine algorithm to a visual tracking technique called the trigonometric particle filter (TPF) to achieve better performance. Based on their research, this algorithm provides better results compared to spider monkey optimization, the firefly algorithm, and PSO [580].
Hao et al. have proposed a passive location and tracking algorithm for moving targets using a UAV swarm, based on an improved particle swarm optimization (PSO) algorithm. A cluster-based cooperative passive localization method is employed, and the problem of improving passive location accuracy is transformed into the problem of obtaining more target information. The A criterion is used as the optimization target, and a recursive neural network (RNN) is used to predict the probability distribution of the target's location at the next moment, making the localization method suitable for moving targets. The particle swarm algorithm is improved using grouping and time-period strategies, and the algorithm flow for moving-target location is constructed. Simulation verification and algorithm comparison demonstrate the advantages of the proposed algorithm [581].
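Swarm localization problems of this kind typically reduce to minimizing range residuals between a candidate target position and anchor measurements. As a hedged illustration of how a metaheuristic such as GWO (used in several of the works cited above) can solve such a problem, the sketch below estimates a 2-D target position from noiseless ranges to four anchor UAVs; the scenario, search bounds, and parameters are assumptions, not a cited configuration.

```python
import math
import random

def gwo_localize(anchors, ranges, iters=200, wolves=25, span=100.0, seed=7):
    """Grey wolf optimizer minimizing the sum of squared range residuals
    between a candidate 2-D position and the anchor measurements."""
    rng = random.Random(seed)

    def cost(p):
        return sum((math.dist(p, a) - r) ** 2
                   for a, r in zip(anchors, ranges))

    X = [[rng.uniform(0.0, span) for _ in range(2)] for _ in range(wolves)]
    best = min(X, key=cost)                  # explicit elitism
    for t in range(iters):
        X.sort(key=cost)
        alpha, beta, delta = X[0], X[1], X[2]
        if cost(alpha) < cost(best):
            best = alpha[:]
        a = 2.0 * (1.0 - t / iters)          # linearly decreasing coefficient
        for i in range(wolves):
            new = []
            for d in range(2):
                acc = 0.0
                for leader in (alpha, beta, delta):
                    A = 2.0 * a * rng.random() - a
                    C = 2.0 * rng.random()
                    D = abs(C * leader[d] - X[i][d])
                    acc += leader[d] - A * D
                new.append(min(span, max(0.0, acc / 3.0)))
            X[i] = new
    return min(X + [best], key=cost)
```

With noisy ranges, the same residual cost simply stops reaching zero, and the minimizer becomes a least-squares position estimate; the cited works add refinements such as flipping-ambiguity avoidance and hybridization on top of this basic machinery.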
Li et al. have developed a method to improve the navigation accuracy of inertial navigation systems in drones by identifying errors in horizontal gyroscopes and accelerometers using the improved pigeon-inspired optimization (PIO) method. This approach has the potential to reduce the need for sending the inertial navigation system back to the manufacturer for calibration, saving time and resources [582].
Table 10 summarizes the studied publications in optimal navigation using nature-inspired algorithms and the algorithms that have been used in them. It can be seen that a wide range of algorithms have been applied to a wide range of applications. However, as in the other categories, the list of algorithms in this section is limited to a small number of algorithms.

4.6. Communication

A UAV-based communication system can potentially pair heuristic/meta-heuristic solutions with UAVs to enhance the performance of wireless communication networks, particularly the spectral efficiency and coverage of these networks. A UAV-based communication system can be readily applied in an emergency or offloading scenario; for example, researchers have proposed methods to predict, assess, and preserve a response in emergency situations [583]. The combination of heuristics and UAV platforms can potentially mitigate the inherent weaknesses of a pure UAV-based communication system, such as channel modeling, resource management, positioning, and security [584]. UAVs are currently utilized for data delivery and collection from dangerous or inaccessible locations. However, trajectory planning remains a major issue for UAVs. Khoufi et al. have conducted research on determining optimized routes for data pickup and delivery by drones within a time window and an intermittently connected network, while allowing for battery recharge en route to destinations. The problem is formulated as a multi-objective optimization problem and solved using Non-dominated Sorting GA II (NSGA-II). Various experiments validated the proposed algorithm in different scenarios [585]. Optimizing device-to-device communication, the deployment process, and the limited power supply of the devices and the hardware they carry are practical issues to be addressed in applying drones to disaster response scenarios. In this field, the bio-inspired self-organizing network (BISON) achieved promising results using Voronoi tessellations. However, in this approach, the wireless sensor network nodes used knowledge about their coverage areas' centers of gravity, which a drone would not automatically know. To address this, Eledlebi et al. have augmented BISON with a GA to further improve network deployment time and overall coverage, though their evaluations show an increase in energy cost [586].
The development of the edge computing paradigm, IoT-based devices, and 5G technology has led to increased data traffic that requires efficient processing. UAVs can replace the edge servers used in mobile edge computing (MEC). Subburaj et al. have proposed a self-adaptive trajectory optimization (STO) algorithm for a UAV-assisted MEC system using DE. STO is a multi-objective optimization algorithm that aims to minimize the energy consumed by MEC and the process emergency indicator. The proposed self-adaptive multi-objective differential evolution-based algorithm improves population diversity by self-adapting the strategies and crossover rate using fuzzy systems. The algorithm's performance is evaluated on a single-UAV-assisted MEC system with hundreds of fixed IoT device instances at ground level [587].

4.6.1. Positioning and Placement

Within a drone-enabled network, drones may be allocated to fixed locations, operating as intermediaries that interconnect mobile devices with macrocell base stations. Alternatively, a drone may traverse a cyclical trajectory to facilitate this connection. Furthermore, a drone-to-drone communication paradigm can be implemented, allowing mobile devices connected to one drone to establish connections with those linked to other drones. Incorporating communication constraints and modeling into the problem makes it a challenging optimization problem. A recent study reveals that more than two-thirds of the papers in this area are based on heuristic and meta-heuristic algorithms [503].

4.6.2. Managing Resources

Resource management in UAV-based networks is essential since many of the requirements are contradictory, for example, low latency but the ability to host many devices. Reliable wireless transmission can be achieved via the ability of UAVs to connect to other nodes and via optimized deployment and clustering. Researchers have improved communication coverage rates with high-altitude platform stations (HAPS) using RL and swarm intelligence algorithms [588]. The global optimum of a complex radio coverage problem was found using network-based heterogeneous particle swarm optimization (NHPSO) to optimize multi-UAV air-to-ground downlink communication [589]. Other work has developed a method to optimize drone charging: a game theory-based auction model and a PSO algorithm are used to prevent collisions and provide optimal paths to UAV charging stations, with blockchain technology used to verify the charging transactions for each UAV [590]. One work has created a mathematical framework for video inspection drones in which the UAVs ride on top of buses to a chosen point of interest [591]. Sensor energy consumption for sensor nodes in wireless charging and communication systems has been predicted using an echo state network (ESN)-based scheme and mean field game (MFG)-based power control; the scheme improved the efficiency of wireless charging and provides outage resiliency [592]. Xie et al. have proposed a multi-objective ant colony optimization framework based on the adaptive coordinate method (MOACO-ACM), which optimizes the visiting order of nodes for each UAV in a UAV-enabled wireless sensor network (WSN) for large-area data collection. Considering a practical speed-related flight energy model and the optimal energy-delay tradeoff for K UAVs, they achieved different Pareto-optimal tradeoffs between the maximum single-UAV energy consumption among all UAVs and the task completion time.
Extensive simulations validate the effectiveness of the proposed algorithm and highlight the importance of UAVs' flight speeds in achieving both energy-efficient and time-efficient data collection [593]. Zhang et al. have proposed a GA that uses an integer coding scheme to encode the sequence of drone deployment in a drone-swarm deployment for wireless coverage to improve performance. They study the tradeoffs between energy consumption, number of drones, and coverage rate and present a drone swarm deployment algorithm to find the best tradeoff between these objectives. Extensive simulations were conducted to evaluate the performance of the proposed algorithms [594].
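Optimizing the visiting order of sensor nodes, as in the MOACO-ACM work above, is at heart a tour-construction problem. The hedged, single-objective sketch below shows the basic ACO machinery (probabilistic tour construction, pheromone evaporation, and deposit) on a small set of 2-D node positions; it is a generic illustration under assumed parameters, not the cited multi-objective framework.

```python
import math
import random

def aco_tour(nodes, ants=20, iters=100, alpha=1.0, beta=3.0, rho=0.5, seed=11):
    """Minimal ant colony optimization for a node-visiting-order problem:
    minimize the length of a closed tour over all 2-D node positions."""
    rng = random.Random(seed)
    n = len(nodes)
    dist = [[math.dist(a, b) for b in nodes] for a in nodes]
    tau = [[1.0] * n for _ in range(n)]      # pheromone matrix

    def tour_len(t):
        return sum(dist[t[i]][t[(i + 1) % n]] for i in range(n))

    best_tour, best_len = None, float("inf")
    for _ in range(iters):
        tours = []
        for _ in range(ants):
            start = rng.randrange(n)
            tour, unvisited = [start], set(range(n)) - {start}
            while unvisited:                 # probabilistic construction
                cur = tour[-1]
                cand = list(unvisited)
                w = [tau[cur][j] ** alpha * (1.0 / dist[cur][j]) ** beta
                     for j in cand]
                nxt = rng.choices(cand, weights=w)[0]
                tour.append(nxt)
                unvisited.discard(nxt)
            tours.append(tour)
            L = tour_len(tour)
            if L < best_len:
                best_tour, best_len = tour, L
        for i in range(n):                   # evaporation
            for j in range(n):
                tau[i][j] *= (1.0 - rho)
        for t in tours:                      # deposit, inversely to length
            L = tour_len(t)
            for i in range(n):
                a_, b_ = t[i], t[(i + 1) % n]
                tau[a_][b_] += 1.0 / L
                tau[b_][a_] += 1.0 / L
    return best_tour, best_len
```

On six nodes placed evenly on a circle, the colony recovers the perimeter-order tour; multi-objective extensions such as MOACO-ACM maintain an archive of non-dominated tours instead of a single best length.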

4.6.3. Network Security and Routing

UAV networks must be robust against cyber-attacks to maintain the integrity of the network. One effective approach to safeguarding UAV networks is physical layer security (PLS), which protects against jamming and eavesdropping [595,596]. PLS uses information-theoretic methods and encryption solutions to enhance the secrecy of transmission [597]. In a review paper, bio-inspired nature algorithms were examined for their ability to route multiple UAVs in flying ad hoc networks (FANETs). Based on this research, several optimization techniques, such as KH, GWO, BAT, red deer optimization, PSO, FSA, WOA, ACO, BCO, GSO, MFO, FFA, and BFA, are used for this aim. In addition to the basic algorithms, hybrid forms of NIAs, in combination with each other or with other methods such as fuzzy logic, are also studied in this specific application [598]. A recent study by Otto et al. reveals the applications of heuristic and meta-heuristic algorithms in FANET operations [503]. A summary of the studied publications in this section and their applications is provided in Table 11.

4.7. Energy Management

Battery power supply is a widely used energy source for UAVs, particularly smaller ones. However, the energy storage capacity of batteries is limited, which poses a significant challenge to their commercial and industrial application. To increase the endurance of UAVs, batteries must be frequently charged. Several battery charging techniques have been developed, including battery swapping, which involves recharging or replacing UAV batteries either conventionally or via hot swapping. In conventional swapping, the depleted UAV leaves its service location for the charging station and is replaced by an already charged UAV, while in hot swapping, new batteries are quickly inserted into UAVs as soon as they reach the charging station. Automated battery swapping mechanisms have been developed to replace depleted batteries with new ones using robotic actuators. However, effective swapping requires a battery recharging station, multiple UAVs, and a management system to coordinate the battery recharging and replacement cycle of the UAV swarm. Fuel cell-powered UAVs have been shown to be more efficient than battery-powered UAVs and can increase endurance by up to six times. However, fuel cells have lower power density and require special fuel tanks; to address this, compressed hydrogen gas, liquid hydrogen, or chemical hydrogen can be used. Renewable energy sources such as wind and solar power can also be used to power drones, but they depend on environmental conditions and have limitations such as reduced efficiency during rainy conditions and at night. Hybrid power supply methods, combining batteries, fuel cells, and renewable energy sources, can provide a blended power supply for UAVs. In all cases, an optimal energy management system is a necessary requirement for industrial applications [599].
UAV-based cellular networks face challenges due to the energy consumption of UAVs. The use of UAV base stations (UAV-BSs) can hinder the improvement of network energy efficiency (EE) if energy consumption is not carefully considered. Energy optimization is important because of the limitations of UAV power supply and charging techniques. Four major aspects of energy optimization are identified in UAV-based cellular networks: optimization of propulsion energy, optimization of communication energy, joint optimization of communication and propulsion energy, and optimization of energy consumption in UAV-assisted cellular networks. Joint optimization of communication and propulsion energy results in the most energy conservation. Strategies have been developed to reduce the energy consumption of both UAV-BSs and fixed BSs, including a UAV-assisted BS sleeping strategy. Conventional optimization methods for energy optimization in UAV-based cellular networks can be classified into three categories: exact methods, heuristic methods, and meta-heuristic methods. Meta-heuristic methods are problem-agnostic and can treat functions as black boxes, which makes them suitable for this purpose. Algorithms such as PSO have been applied to minimize the transmission power of UAVs serving as relays in IoT communications while considering the outage probability of IoT devices. Other algorithms, such as GA, have been used to design an energy-efficient trajectory for UAV-BSs during backhaul connection to terrestrial BSs in post-disaster scenarios. A UAV-BS path planning framework can be empowered with a GA to determine the optimal path with minimal turns and energy consumption [599].

4.8. Infrastructure and Operation

The concepts advocated to facilitate the incorporation of drone operations within civilian airspace include airspace that is solely accessible to drones or air corridors as well as shared airspace with manned aircraft. Under any of these circumstances, the integration of drones into civilian airspace would necessitate the development of relevant air traffic regulations and management schemes along with collision avoidance and automatic flight path replanning capabilities for individual drones. These approaches could, for instance, delineate a set of conflict resolution rules (e.g., priority rules), types of surveillance (e.g., aircraft locations are determined by centralized radar or on the basis of information broadcasted by the aircraft themselves), and forms of coordination (i.e., whether aircraft can communicate with each other in case of conflict) [503].
Some studies focus on locating distribution centers and transfer points to supply emergency items via drones after disasters. The centers supply damaged areas where roads are intact by truck, while drones deliver to areas with damaged infrastructure. Limited drone range poses a challenge; in simulations, off-road vehicles proved more beneficial than drones due to longer drone loading and unloading times. Other research focuses on recharging stations and automatic service centers for drones. These studies aim to minimize costs by considering fixed facility costs and inventory costs, while disaster-related objectives such as delivery lead times and travel times also matter [503].
Many important civilian drone applications require a human operator to examine the sensory information sent by the drone in real time. The drone's operations must account for potential idle time of the human operator and give the operator enough time to examine information at each point of interest. A heuristic algorithm has been proposed for path planning that takes this factor into consideration, assuming the sequence of point-of-interest visits for each drone is given. In addition, cognitive underload and overload of human operators should be avoided by appropriately alternating demanding and less demanding tasks and providing enough rest breaks. Researchers have scheduled as many monitoring tasks as possible to multiple operators within a given time horizon, considering drone routes and speeds, while penalizing situations of cognitive underload and overload [503].
The main trade-off in scheduling drone maintenance operations such as refueling, launching, and repair is balancing the drone's priority to return to its tasks quickly against not wasting fuel by making it wait too long. The refueling sequence for drones at an automatic refueling tank is optimized to achieve this balance. In addition, experience-based planning rules used at aircraft carriers have been compared to optimal integer programming solutions for scheduling drone maintenance. Similar planning problems may arise at mobile warehouses servicing drone fleets, such as Amazon's floating warehouses or Ford's auto deliveries [503]. In all cases, heuristic and meta-heuristic algorithms play a dominant role due to their ability to solve complex, nonlinear, non-convex, multivariable problems. Direct methods can hardly be applied to such problems, which makes nature-inspired algorithms suitable alternatives [503].

5. Summary

Considering the selected list of the most popular nature-inspired algorithms, the current state of nature-inspired algorithms in aerospace and drone applications can be assessed. Table 12 summarizes the algorithms reviewed in Section 4. An interesting result from this table is the inconsistency between the algorithms used and the problem requirements. For example, based on the results in Section 3, GA has the worst results and performance on many benchmark functions and metrics such as error and iteration count. Surprisingly, however, it has been widely used in aerospace problems. One reason for this trend is likely simplicity, but the main reason is probably its reputation. Another popular algorithm is PSO, which performed better than GA in Section 3 but was still not the best performer on the benchmark functions. This inconsistency exists with other algorithms as well. For example, in wing and tail design, two algorithms used besides GA and PSO are BFO and DE. According to Section 3, DE performs similarly to PSO: it was a midrange algorithm when solving the benchmark functions. Although DE received a better score than PSO, it is not the best possible choice for such problems. BFO fares even worse than DE; in most of the problems, it sits at the bottom of the list, being comparatively slow and less accurate. In contrast, algorithms such as Harris hawks, sine cosine, and artificial bee colony, which are faster and more accurate, have not been used or studied in such problems.
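The kind of benchmark evaluation discussed above can be sketched with a minimal particle swarm optimizer on the sphere function. This is an illustrative re-implementation with conventional parameter values (inertia `w`, acceleration coefficients `c1`, `c2`), not the benchmarking code published with this paper:

```python
import random

def sphere(x):  # classic benchmark: global minimum 0 at the origin
    return sum(v * v for v in x)

def pso(f, dim=2, n=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]          # personal best positions
    pcost = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]  # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = f(pos[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], c
                if c < gcost:
                    gbest, gcost = pos[i][:], c
    return gbest, gcost

best, cost = pso(sphere)
print(cost)  # near zero after 200 iterations
```

Running such a loop over many benchmark functions, seeds, and algorithms, and recording cost, iterations, time, and error, is what produces comparison tables like the one in Section 3.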
The most popular field for nature-inspired algorithms is control systems. Nature-inspired algorithms have been widely used in control systems, path planning, trajectory design, trajectory tracking, and swarm and formation control. The most popular algorithms in aerospace/drone systems are GA, PSO, ABC, and DE, which are applied across many fields, from aerodynamics to control systems. Forest optimization and cat swarm are not as popular in aerospace/drone systems as they are in other fields. According to Section 3, forest optimization obtained middling-to-low results, but cat swarm is usually above the mean and near the top of the rankings; it obtained the best performance and accuracy on the expanded Schaffer benchmark function. There therefore seems to be room for research and development of this algorithm in aerospace applications. Apart from control systems, other applications are dominated by only a few algorithms. It is strongly suggested to apply high-performance algorithms, such as sine cosine, Harris hawks, the firefly algorithm, the fireworks algorithm, grey wolf optimization, and cat swarm, to a wider range of applications, since they are expected to provide superior results.
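As an example of the recommended high-performance algorithms, the sketch below shows the core position update of the sine cosine algorithm, again minimizing the sphere function. The population size, bounds, and parameter `a` are illustrative assumptions, not values prescribed by this paper:

```python
import math
import random

def sca(f, dim=2, n=20, iters=200, a=2.0, lo=-5.0, hi=5.0, seed=1):
    """Minimal sine cosine algorithm: agents oscillate around the best
    solution, with the step amplitude r1 shrinking linearly over time."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    best = min(pos, key=f)[:]
    best_cost = f(best)
    for t in range(iters):
        r1 = a - t * a / iters            # exploration fades to exploitation
        for x in pos:
            for d in range(dim):
                r2 = 2 * math.pi * rng.random()
                r3 = 2 * rng.random()
                step = abs(r3 * best[d] - x[d])
                if rng.random() < 0.5:    # sine or cosine branch
                    x[d] += r1 * math.sin(r2) * step
                else:
                    x[d] += r1 * math.cos(r2) * step
                x[d] = max(lo, min(hi, x[d]))  # clamp to search bounds
            c = f(x)
            if c < best_cost:
                best, best_cost = x[:], c
    return best, best_cost

best, cost = sca(lambda v: sum(u * u for u in v))
print(cost)
```

The algorithm's appeal for drone problems lies in this simplicity: a single destination (the best solution so far) and one decaying amplitude parameter, with no velocity memory to tune.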

6. Conclusions

This paper reviewed the majority of nature-inspired algorithms (about 350 algorithms) based on their source of inspiration. A comprehensive classification of nature-inspired algorithms was provided based on the sources of inspiration, including bio-based, ecosystem-based, social-based, physics-based, mathematics-based, chemistry-based, music-based, sport-based, and hybrid algorithms. In each category, a group of the most popular algorithms was reviewed in detail, while the others were covered briefly by introducing their source of inspiration. To evaluate these algorithms, a comparison was provided in the final section of the paper by solving 10 different benchmark functions with a selection of algorithms. The simulation results report several metrics, such as cost value, number of iterations, average time, and error, for each algorithm–problem pair. Based on these results, in addition to the generally high performance of nature-inspired algorithms, the advantages of some algorithms in terms of accuracy and speed are demonstrated.
This study revealed the massive body of research on nature-inspired algorithms mimicking different aspects of nature. From oceans to space, algorithms can be found that focus on a specific phenomenon or creature. Setting aside the open question of the novelty of these algorithms and their similarity to the most famous ones, the sheer number of publications in this area is considerable. Most publications in this field do not provide sufficient information on the performance of the algorithms, their exact differences, or their contributions. The majority of papers merely provide a simple comparison with one or a few well-known algorithms, such as GA (shown here to have one of the lowest performances among nature-inspired algorithms), by solving a single problem. This cannot, of course, provide a good understanding of the real value of these algorithms. Performance analysis of nature-inspired algorithms is what is most needed now: the main focus of research in this field should be on the applications of these algorithms and on their performance evaluation and improvement, since there appear to be more than enough algorithms available with different perspectives. The current literature also reveals a focus on developing hybrid algorithms to increase performance, which can broaden their range of applications. The performance of hybrid algorithms is reported to be significantly better than that of the basic algorithms, which makes it reasonable to expect more research on this topic in the future.
The current review illustrated the latest developments in the field of nature-inspired optimization, the popularity of these algorithms, and related challenges such as constraint handling and performance on different problems. A compact review of the applications of nature-inspired algorithms in aerospace systems was provided. Based on this review, the most used algorithms in aerospace systems are GA, PSO, and ABC, and control systems are the field in which nature-inspired algorithms have been used most widely. Based on the evaluations in this paper, it is recommended to apply high-performance algorithms such as the sine cosine algorithm, Harris hawks optimization, the firefly algorithm, the fireworks algorithm, grey wolf optimization, and cat swarm optimization to a wider range of applications, from conceptual design, MDO, aerodynamics, and shape design to navigation and identification. Considering the progress of hybrid algorithms and their considerably improved performance, their application in aerospace systems is also recommended. All results of this paper, including data and code in Python and MATLAB, are published in a public GitHub repository.

Author Contributions

Conceptualization, methodology, and validation, S.D.; software, A.D.; writing—original draft preparation, S.D., M.E. and A.D.; writing—review and editing, S.D.; supervision, M.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

All of the information, data, and codes of this paper are publicly available in its GitHub repository: https://github.com/shahind/Nature-Inspired-Algorithms (accessed on 20 June 2023).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lange, K. Optimization, 2nd ed.; Springer Science & Business Media: New York, NY, USA, 2013. [Google Scholar]
  2. Fister, I.; Yang, X.S.; Brest, J.; Fister, D. A brief review of nature-inspired algorithms for optimization. Elektroteh. Vestnik/Electrotech. Rev. 2013, 80, 116–122. [Google Scholar]
  3. Yang, X.-S. (Ed.) Nature-Inspired Algorithms and Applied Optimization; Springer: Berlin/Heidelberg, Germany, 2017. [Google Scholar]
  4. Molina, D.; Poyatos, J.; Del Ser, J.; García, S.; Hussain, A.; Herrera, F. Comprehensive Taxonomies of Nature- and Bio-inspired Optimization: Inspiration Versus Algorithmic Behavior, Critical Analysis Recommendations. Cognit. Comput. 2020, 12, 897–939. [Google Scholar] [CrossRef]
  5. Sörensen, K. Metaheuristics-the metaphor exposed. Int. Trans. Oper. Res. 2015, 22, 3–18. [Google Scholar] [CrossRef]
  6. Tzanetos, A.; Fister, I.; Dounias, G. A comprehensive database of Nature-Inspired Algorithms. Data Brief 2020, 31, 105792. [Google Scholar] [CrossRef]
  7. Yang, X.-S. Nature-inspired optimization algorithms: Challenges and open problems. J. Comput. Sci. 2020, 46, 101104. [Google Scholar] [CrossRef] [Green Version]
  8. Muller, S.D. Bio-Inspired Optimization Algorithms for Engineering Applications; Swiss Federal Institute of Technology Zurich: Zürich, Switzerland, 2002. [Google Scholar]
  9. Osman, I.H. Focused issue on applied meta-heuristics. Comput. Ind. Eng. 2003, 44, 205–207. [Google Scholar] [CrossRef]
  10. Gendreau, M.; Potvin, J.Y. Metaheuristics in combinatorial optimization. Ann. Oper. Res. 2005, 140, 189–213. [Google Scholar] [CrossRef]
  11. Abdel-Basset, M.; Abdel-Fatah, L.; Sangaiah, A.K. Metaheuristic Algorithms: A Comprehensive Review. In Computational Intelligence for Multimedia Big Data on the Cloud with Engineering Applications; Elsevier: Amsterdam, The Netherlands, 2018; pp. 185–231. [Google Scholar] [CrossRef]
  12. Espinosa, H. Nature-Inspired Computing for Control Systems; Springer: Berlin/Heidelberg, Germany, 2016. [Google Scholar]
  13. Holland, J.H. Adaptation in Natural and Artificial Systems; MIT Press: Cambridge, MA, USA, 2019. [Google Scholar]
  14. Kumar, M.; Husain, M.; Upreti, N.; Gupta, D. Genetic Algorithm: Review and Application. Int. J. Inf. Technol. Knowl. Manag. 2010, 2, 451–454. [Google Scholar] [CrossRef]
  15. Dastanpour, A.; Mahmood, R.A.R. Feature selection based on genetic algorithm and SupportVector machine for intrusion detection system. In Proceedings of the Second International Conference on Informatics Engineering & Information Science, Kuala Lumpur, Malaysia, 12–14 November 2013; pp. 169–181. [Google Scholar]
  16. Umbarkar, A.J.; Sheth, P.D. Crossover Operators in Genetic Algorithms: A Review. ICTACT J. Soft Comput. 2015, 06, 1083–1092. [Google Scholar] [CrossRef]
  17. Deb, K.; Deb, A. Analysing mutation schemes for real-parameter genetic algorithms. Int. J. Artif. Intell. Soft Comput. 2014, 4, 1. [Google Scholar] [CrossRef]
  18. Katoch, S.; Chauhan, S.S.; Kumar, V. A review on genetic algorithm: Past, present, and future. Multimed. Tools Appl. 2021, 80, 8091–8126. [Google Scholar] [CrossRef] [PubMed]
  19. Mukhopadhyay, D.; Balitanas, M. Genetic algorithm: A tutorial review. Int. J. Grid Distrib. Comput. 2009, 2, 25–32. Available online: http://0-www-sersc-org.brum.beds.ac.uk/journals/IJGDC/vol2_no3/3.pdf (accessed on 20 June 2023).
  20. Storn, R. On the usage of differential evolution for function optimization. In Proceedings of the North American Fuzzy Information Processing, Berkeley, CA, USA, 19–22 June 1996; pp. 519–523. [Google Scholar] [CrossRef]
  21. Georgioudakis, M.; Plevris, V. A Comparative Study of Differential Evolution Variants in Constrained Structural Optimization. Front. Built Environ. 2020, 6, 102. [Google Scholar] [CrossRef]
  22. Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  23. Ayaz, M.; Panwar, A.; Pant, M. A Brief Review on Multi-objective Differential Evolution. In Soft Computing: Theories and Applications; Springer: Berlin/Heidelberg, Germany, 2020; pp. 1027–1040. [Google Scholar] [CrossRef]
  24. Fogel, L.J.; Owens, A.J.; Walsh, M.J. Artificial Intelligence Through Simulated Evolution; Wiley-IEEE Press: Piscataway, NJ, USA, 1966. [Google Scholar]
  25. Asthana, R.G.S. Evolutionary Algorithms and Neural Networks. Soft Comput. Intell. Syst. 2000, 111–136. [Google Scholar] [CrossRef]
  26. Fogel, D. Evolutionary programming: An introduction and some current directions. Stat. Comput. 1994, 4, 113–129. [Google Scholar] [CrossRef]
  27. Jacob, C. Evolutionary Programming. In Illustrating Evolutionary Computation with Mathematica; Elsevier: Amsterdam, The Netherlands, 2001; pp. 297–344. [Google Scholar] [CrossRef]
  28. Dagdia, Z.C.; Mirchev, M. When Evolutionary Computing Meets Astro- and Geoinformatics. In Knowledge Discovery in Big Data from Astronomy and Earth Observation; Elsevier: Amsterdam, The Netherlands, 2020; pp. 283–306. [Google Scholar] [CrossRef]
  29. Hoorfar, A. Evolutionary Programming in Electromagnetic Optimization: A Review. IEEE Trans. Antennas Propag. 2007, 55, 523–537. [Google Scholar] [CrossRef]
  30. Bäck, T.; Rudolph, G.; Schwefel, H.-P. Evolutionary Programming and Evolution Strategies: Similarities and Differences. In Proceedings of the Second Annual Conference on Evolutionary Programming, La Jolla, CA, USA, 25–26 February 1993; pp. 11–22. [Google Scholar]
  31. Rechenberg, I. Evolution Strategy: Optimization of Technical Systems by Means of Biological Evolution; Frommann-Holzboog: Stuttgart, Germany, 1973; Volume 104, p. 15. [Google Scholar]
  32. Ferreira, C. Gene Expression Programming: A New Adaptive Algorithm for Solving Problems. arXiv 2001, arXiv:cs/0102027v3. [Google Scholar]
  33. Moscato, P. On Evolution, Search, Optimization, Genetic Algorithms and Martial Arts: Towards Memetic Algorithms; Caltech concurrent computation program, C3P Report; California Institute of Technology: Pasadena, CA, USA, 1989. [Google Scholar]
  34. Ryan, C.; Collins, J.; Neill, M.O. Grammatical evolution: Evolving programs for an arbitrary language. In Proceedings of the Genetic Programming: First European Workshop, EuroGP’98, Paris, France, 14–15 April 1998; pp. 83–96. [Google Scholar] [CrossRef]
  35. Farmer, J.D.; Packard, N.H.; Perelson, A.S. The immune system, adaptation, and machine learning. Phys. D Nonlinear Phenom. 1986, 22, 187–204. [Google Scholar] [CrossRef]
  36. Dasgupta, D. Artificial Immune Systems and Their Applications; Springer Science & Business Media: Berlin/Heidelberg, Germany, 1999. [Google Scholar] [CrossRef]
  37. Beluch, W.; Burczyński, T.; Kuś, W. Parallel Artificial Immune System in Optimization and Identification of Composite Structures. In Parallel Problem Solving from Nature, PPSN XI; Springer: Berlin/Heidelberg, Germany, 2010; pp. 171–180. [Google Scholar] [CrossRef]
  38. De Castro, L.N.; von Zuben, F.J. The Clonal Selection Algorithm with Engineering Applications. In Proceedings of the GECCO, Cancún, Mexico, 8–12 July 2020; pp. 36–37. [Google Scholar]
  39. de Castro, L.N.; Timmis, J. An artificial immune network for multimodal function optimization. In Proceedings of the 2002 Congress on Evolutionary Computation. CEC’02 (Cat. No.02TH8600), Honolulu, HI, USA, 12–17 May 2002; pp. 699–704. [Google Scholar] [CrossRef] [Green Version]
  40. Jaddi, N.S.; Alvankarian, J.; Abdullah, S. Kidney-inspired algorithm for optimization problems. Commun. Nonlinear Sci. Numer. Simul. 2017, 42, 358–369. [Google Scholar] [CrossRef]
  41. Hatamlou, A. Heart: A novel optimization algorithm for cluster analysis. Prog. Artif. Intell. 2014, 2, 167–173. [Google Scholar] [CrossRef]
  42. Kaveh, A.; Kooshkebaghi, M. Artificial Coronary Circulation System; A new bio-inspired metaheuristic algorithm. Sci. Iran. 2019, 26, 2731–2747. [Google Scholar] [CrossRef] [Green Version]
  43. Asil Gharebaghi, S.; Ardalan Asl, M. New Meta-Heuristic Optimization Algorithm Using Neuronal Communication. Int. J. Optim. Civ. Eng. 2017, 7, 413–431. [Google Scholar]
  44. Raouf, O.A.; Hezam, I.M. Sperm motility algorithm: A novel metaheuristic approach for global optimisation. Int. J. Oper. Res. 2017, 28, 143. [Google Scholar] [CrossRef]
  45. Enciso, V.O.; Cuevas, E.; Oliva, D.; Sossa, H.; Cisneros, M.P. A bio-inspired evolutionary algorithm: Allostatic optimisation. Int. J. Bio-Inspired Comput. 2016, 8, 154. [Google Scholar] [CrossRef]
  46. Simon, D. Biogeography-Based Optimization. IEEE Trans. Evol. Comput. 2008, 12, 702–713. [Google Scholar] [CrossRef] [Green Version]
  47. Cheng, M.-Y.; Prayogo, D. Symbiotic Organisms Search: A new metaheuristic optimization algorithm. Comput. Struct. 2014, 139, 98–112. [Google Scholar] [CrossRef]
  48. He, S.; Wu, Q.H.; Saunders, J.R. Group Search Optimizer: An Optimization Algorithm Inspired by Animal Searching Behavior. IEEE Trans. Evol. Comput. 2009, 13, 973–990. [Google Scholar] [CrossRef]
  49. Civicioglu, P. Transforming geocentric cartesian coordinates to geodetic coordinates by using differential search algorithm. Comput. Geosci. 2012, 46, 229–247. [Google Scholar] [CrossRef]
  50. Li, X.; Zhang, J.; Yin, M. Animal migration optimization: An optimization algorithm inspired by animal migration behavior. Neural Comput. Appl. 2014, 24, 1867–1877. [Google Scholar] [CrossRef]
  51. Zhang, Q.; Wang, R.; Yang, J.; Lewis, A.; Chiclana, F.; Yang, S. Biology migration algorithm: A new nature-inspired heuristic methodology for global optimization. Soft Comput. 2019, 23, 7333–7358. [Google Scholar] [CrossRef]
  52. Oftadeh, R.; Mahjoob, M.J.; Shariatpanahi, M. A novel meta-heuristic optimization algorithm inspired by group hunting of animals: Hunting search. Comput. Math. Appl. 2010, 60, 2087–2098. [Google Scholar] [CrossRef] [Green Version]
  53. Fausto, F.; Cuevas, E.; Valdivia, A.; González, A. A global optimization algorithm inspired in the behavior of selfish herds. Biosystems 2017, 160, 39–55. [Google Scholar] [CrossRef] [PubMed]
  54. Tilahun, S.L.; Ong, H.C. Prey-Predator Algorithm: A New Metaheuristic Algorithm for Optimization Problems. Int. J. Inf. Technol. Decis. Mak. 2015, 14, 1331–1352. [Google Scholar] [CrossRef]
  55. Dai, C.; Zhu, Y.; Chen, W. Seeker Optimization Algorithm. In Proceedings of the Computational Intelligence and Security: International Conference, CIS 2006, Guangzhou, China, 3–6 November 2006; pp. 167–176. [Google Scholar] [CrossRef]
  56. Cuevas, E.; González, M.; Zaldivar, D.; Pérez-Cisneros, M.; García, G. An Algorithm for Global Optimization Inspired by Collective Animal Behavior. Discret. Dyn. Nat. Soc. 2012, 2012, 638275. [Google Scholar] [CrossRef] [Green Version]
  57. Farasat, A.; Menhaj, M.B.; Mansouri, T.; Moghadam, M.R.S. ARO: A new model-free optimization algorithm inspired from asexual reproduction. Appl. Soft Comput. 2010, 10, 1284–1292. [Google Scholar] [CrossRef]
  58. Kaveh, A.; Zolghadr, A. Cyclical Parthenogenesis Algorithm: A new meta-heuristic algorithm. Asian J. Civ. Eng. 2017, 18, 673–701. [Google Scholar]
  59. Chen, H.; Zhu, Y.; Hu, K.; He, X. Hierarchical Swarm Model: A New Approach to Optimization. Discret. Dyn. Nat. Soc. 2010, 2010, 379649. [Google Scholar] [CrossRef] [Green Version]
  60. Parpinelli, R.S.; Lopes, H.S. An eco-inspired evolutionary algorithm applied to numerical optimization. In Proceedings of the 2011 Third World Congress on Nature and Biologically Inspired Computing, Salamanca, Spain, 19–21 October 2011; pp. 466–471. [Google Scholar] [CrossRef]
  61. Mohseni, S.; Gholami, R.; Zarei, N.; Zadeh, A.R. Competition over Resources: A New Optimization Algorithm Based on Animals Behavioral Ecology. In Proceedings of the 2014 International Conference on Intelligent Networking and Collaborative Systems, Salerno, Italy, 10–12 September 2014; pp. 311–315. [Google Scholar] [CrossRef]
  62. Nguyen, H.T.; Bhanu, B. Zombie Survival Optimization: A swarm intelligence algorithm inspired by zombie foraging. In Proceedings of the 21st International Conference on Pattern Recognition (ICPR2012), Tsukuba, Japan, 11–15 November 2012; pp. 987–990. [Google Scholar]
  63. Pattnaik, S.S.; Bakwad, K.M.; Sohi, B.S.; Ratho, R.K.; Devi, S. Swine Influenza Models Based Optimization (SIMBO). Appl. Soft Comput. 2013, 13, 628–653. [Google Scholar] [CrossRef]
  64. Huang, G. Artificial infectious disease optimization: A SEIQR epidemic dynamic model-based function optimization algorithm. Swarm Evol. Comput. 2016, 27, 31–67. [Google Scholar] [CrossRef] [PubMed]
  65. Tang, D.; Dong, S.; Jiang, Y.; Li, H.; Huang, Y. ITGO: Invasive tumor growth optimization algorithm. Appl. Soft Comput. 2015, 36, 670–698. [Google Scholar] [CrossRef]
  66. Salmani, M.H.; Eshghi, K. A Metaheuristic Algorithm Based on Chemotherapy Science: CSA. J. Optim. 2017, 2017, 3082024. [Google Scholar] [CrossRef]
  67. Muller, S.D.; Marchetto, J.; Airaghi, S.; Kournoutsakos, P. Optimization based on bacterial chemotaxis. IEEE Trans. Evol. Comput. 2002, 6, 16–29. [Google Scholar] [CrossRef]
  68. Passino, K.M. Bacterial Foraging Optimization. In Innovations and Developments of Swarm Intelligence Applications; IGI Global: Hershey, PA, USA, 2012; pp. 219–234. [Google Scholar] [CrossRef]
  69. Niu, B.; Wang, H. Bacterial colony optimization. Discret. Dyn. Nat. Soc. 2012, 2012, 698057. [Google Scholar] [CrossRef] [Green Version]
  70. Tang, W.J.; Wu, Q.H.; Saunders, J.R. A bacterial swarming algorithm for global optimization. In Proceedings of the 2007 IEEE Congress on Evolutionary Computation, Singapore, 25–28 September 2007; pp. 1207–1212. [Google Scholar] [CrossRef]
  71. Nawa, N.E.; Furuhashi, T. Bacterial evolutionary algorithm for fuzzy system design. In Proceedings of the SMC’98 Conference Proceedings 1998 IEEE International Conference on Systems, Man, and Cybernetics (Cat. No.98CH36218), San Diego, CA, USA, 14 October 1998; pp. 2424–2429. [Google Scholar] [CrossRef] [Green Version]
  72. Mo, H.; Xu, L. Magnetotactic bacteria optimization algorithm for multimodal optimization. In Proceedings of the 2013 IEEE Symposium on Swarm Intelligence (SIS), Singapore, 16–19 April 2013; pp. 240–247. [Google Scholar] [CrossRef]
  73. Anandaraman, C.; Madurai Sankar, A.V.; Natarajan, R. A New Evolutionary Algorithm Based on Bacterial Evolution and Its Application for Scheduling a Flexible Manufacturing System. J. Tek. Ind. 2012, 14, 1–12. [Google Scholar]
  74. Li, M.D.; Zhao, H.; Weng, X.W.; Han, T. A novel nature-inspired algorithm for optimization: Virus colony search. Adv. Eng. Softw. 2016, 92, 65–88. [Google Scholar] [CrossRef]
  75. Cortés, P.; García, J.M.; Muñuzuri, J.; Onieva, L. Viral systems: A new bio-inspired optimisation approach. Comput. Oper. Res. 2008, 35, 2840–2860. [Google Scholar] [CrossRef]
  76. Jaderyan, M.; Khotanlou, H. Virulence Optimization Algorithm. Appl. Soft Comput. 2016, 43, 596–618. [Google Scholar] [CrossRef]
  77. Kelsey, J.; Timmis, J. Immune Inspired Somatic Contiguous Hypermutation for Function Optimisation. In Genetic and Evolutionary Computation Conference—GECCO 2003: Genetic and Evolutionary Computation—GECCO 2003; Springer: Berlin/Heidelberg, Germany, 2003; pp. 207–218. [Google Scholar] [CrossRef]
  78. Taherdangkoo, M.; Yazdi, M.; Bagheri, M.H. Stem Cells Optimization Algorithm. In International Conference on Intelligent Computing—ICIC 2011: Bio-Inspired Computing and Applications; Springer: Berlin/Heidelberg, Germany, 2012; pp. 394–403. [Google Scholar] [CrossRef]
  79. Zhang, X.; Huang, S.; Hu, Y.; Zhang, Y.; Mahadevan, S.; Deng, Y. Solving 0-1 knapsack problems based on amoeboid organism algorithm. Appl. Math. Comput. 2013, 219, 9959–9970. [Google Scholar] [CrossRef]
  80. Krishnaveni, M.; Subashini, P.; Dhivyaprabha, T.T. A new optimization approach—SFO for denoising digital images. In Proceedings of the 2016 International Conference on Computation System and Information Technology for Sustainable Solutions (CSITSS), Bengaluru, India, 6–8 October 2016; pp. 34–39. [Google Scholar] [CrossRef]
  81. Dorigo, M.; Birattari, M.; Stutzle, T. Ant colony optimization. IEEE Comput. Intell. Mag. 2006, 1, 28–39. [Google Scholar] [CrossRef]
  82. Dorigo, M.; Stützle, T. Ant Colony Optimization: Overview and Recent Advances. In Handbook of Metaheuristics; Springer: Berlin/Heidelberg, Germany, 2019; pp. 311–351. [Google Scholar] [CrossRef] [Green Version]
  83. Karaboga, D. An Idea based on Honey Bee Swarm for Numerical Optimization; Technical report-tr06; Computer Engineering Department, Engineering Faculty, Erciyes University: Kayseri, Turkey, 2005; Volume 200, pp. 1–10. Available online: http://mf.erciyes.edu.tr/abc/pub/tr06_2005.pdf (accessed on 20 June 2023).
  84. Karaboga, D.; Basturk, B. A powerful and efficient algorithm for numerical function optimization: Artificial bee colony (ABC) algorithm. J. Glob. Optim. 2007, 39, 459–471. [Google Scholar] [CrossRef]
  85. Karaboga, D.; Basturk, B. On the performance of artificial bee colony (ABC) algorithm. Appl. Soft Comput. J. 2008, 8, 687–697. [Google Scholar] [CrossRef]
  86. Karaboga, D.; Akay, B. A comparative study of Artificial Bee Colony algorithm. Appl. Math. Comput. 2009, 214, 108–132. [Google Scholar]
  87. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl-Based Syst. 2015, 89, 228–249. [Google Scholar] [CrossRef]
  88. Teodorović, D. Bee Colony Optimization (BCO). In Innovations in Swarm Intelligence; Springer: Berlin/Heidelberg, Germany, 2009; pp. 39–60. [Google Scholar] [CrossRef]
  89. Haddad, O.B.; Afshar, A.; Mariño, M.A. Honey-Bees Mating Optimization (HBMO) Algorithm: A New Heuristic Approach for Water Resources Optimization. Water Resour. Manag. 2006, 20, 661–680. [Google Scholar] [CrossRef]
  90. Abbass, H.A. MBO: Marriage in honey bees optimization a haplometrosis polygynous swarming approach. In Proceedings of the 2001 Congress on Evolutionary Computation (IEEE Cat. No.01TH8546), Seoul, Republic of Korea, 27–30 May 2001; pp. 207–214. [Google Scholar] [CrossRef]
  91. Jung, S.H. Queen-bee evolution for genetic algorithms. Electron. Lett. 2003, 39, 575. [Google Scholar] [CrossRef]
  92. Akbari, R.; Mohammadi, A.; Ziarati, K. A novel bee swarm optimization algorithm for numerical function optimization. Commun. Nonlinear Sci. Numer. Simul. 2010, 15, 3142–3155. [Google Scholar] [CrossRef]
  93. Lu, X.; Zhou, Y. A Novel Global Convergence Algorithm: Bee Collecting Pollen Algorithm. In Advanced Intelligent Computing Theories and Applications. With Aspects of Artificial Intelligence: Proceedings of the 4th International Conference on Intelligent Computing, ICIC 2008, Shanghai, China, 15–18 September 2008; Springer: Berlin/Heidelberg, Germany, 2008; pp. 518–525. [Google Scholar] [CrossRef]
  94. Maia, R.D.; de Castro, L.N.; Caminhas, W.M. Bee colonies as model for multimodal continuous optimization: The OptBees algorithm. In Proceedings of the 2012 IEEE Congress on Evolutionary Computation, Brisbane, Australia, 10–15 June 2012; pp. 1–8. [Google Scholar] [CrossRef]
  95. Abdullah, J.M.; Ahmed, T. Fitness Dependent Optimizer: Inspired by the Bee Swarming Reproductive Process. IEEE Access 2019, 7, 43473–43486. [Google Scholar] [CrossRef]
  96. Comellas, F.; Martinez-Navarro, J. Bumblebees. In Proceedings of the first ACM/SIGEVO Summit on Genetic and Evolutionary Computation—GEC ’09, Shanghai, China, 12–14 June 2009; ACM Press: New York, NY, USA, 2009; p. 811. [Google Scholar] [CrossRef]
  97. Marinakis, Y.; Marinaki, M.; Matsatsinis, N. A Bumble Bees Mating Optimization Algorithm for Global Unconstrained Optimization Problems. NICSO 2010, 284, 305–318. [Google Scholar] [CrossRef]
  98. Mirjalili, S. The Ant Lion Optimizer. Adv. Eng. Softw. 2015, 83, 80–98. [Google Scholar] [CrossRef]
  99. Mirjalili, S. Dragonfly algorithm: A new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems. Neural Comput. Appl. 2016, 27, 1053–1073. [Google Scholar] [CrossRef]
  100. Pan, W.-T. A new Fruit Fly Optimization Algorithm: Taking the financial distress model as an example. Knowl-Based Syst. 2012, 26, 69–74. [Google Scholar] [CrossRef]
  101. Al-Rifaie, M.M. Dispersive Flies Optimisation. In Proceedings of the 2014 Federated Conference on Computer Science and Information Systems, Warsaw, Poland, 7–10 September 2014; pp. 529–538. [Google Scholar] [CrossRef] [Green Version]
  102. Saremi, S.; Mirjalili, S.; Lewis, A. Grasshopper Optimisation Algorithm: Theory and application. Adv. Eng. Softw. 2017, 105, 30–47. [Google Scholar] [CrossRef] [Green Version]
  103. Chen, S. Locust Swarms—A new multi-optima search technique. In Proceedings of the 2009 IEEE Congress on Evolutionary Computation, Trondheim, Norway, 18–21 May 2009; pp. 1745–1752. [Google Scholar] [CrossRef]
  104. Canayaz, M.; Karci, A. Cricket behaviour-based evolutionary computation technique in solving engineering optimization problems. Appl. Intell. 2016, 44, 362–376. [Google Scholar] [CrossRef]
  105. Yang, X.S.; He, X. Firefly algorithm: Recent advances and applications. Int. J. Swarm Intell. 2013, 1, 36. [Google Scholar] [CrossRef] [Green Version]
  106. Krishnanand, K.N.; Ghose, D. Glowworm swarm optimisation: A new method for optimising multi-modal functions. Int. J. Comput. Intell. Stud. 2009, 1, 93. [Google Scholar] [CrossRef]
  107. Bidar, M.; Rashidy Kanan, H. Jumper firefly algorithm. In Proceedings of the ICCKE 2013, Mashhad, Iran, 31 October–1 November 2013; pp. 267–271. [Google Scholar] [CrossRef]
  108. Cuevas, E.; Cienfuegos, M.; Zaldívar, D.; Pérez-Cisneros, M. A swarm optimization algorithm inspired in the behavior of the social-spider. Expert Syst. Appl. 2013, 40, 6374–6384. [Google Scholar] [CrossRef] [Green Version]
  109. Hayyolalam, V.; Pourhaji Kazem, A.A. Black Widow Optimization Algorithm: A novel meta-heuristic approach for solving engineering optimization problems. Eng. Appl. Artif. Intell. 2020, 87, 103249. [Google Scholar] [CrossRef]
  110. Kaveh, A.; Dadras Eslamlou, A. Water strider algorithm: A new metaheuristic and applications. Structures 2020, 25, 520–541. [Google Scholar] [CrossRef]
  111. Wang, G.-G.; Deb, S.; Cui, Z. Monarch butterfly optimization. Neural Comput. Appl. 2019, 31, 1995–2014. [Google Scholar] [CrossRef] [Green Version]
  112. Arora, S.; Singh, S. Butterfly algorithm with Lèvy Flights for global optimization. In Proceedings of the 2015 International Conference on Signal Processing, Computing and Control (ISPCC), Waknaghat, India, 24–26 September 2015; pp. 220–224. [Google Scholar] [CrossRef]
  113. Qi, X.; Zhu, Y.; Zhang, H. A new meta-heuristic butterfly-inspired algorithm. J. Comput. Sci. 2017, 23, 226–239. [Google Scholar] [CrossRef]
  114. Havens, T.C.; Spain, C.J.; Salmon, N.G.; Keller, J.M. Roach Infestation Optimization. In Proceedings of the 2008 IEEE Swarm Intelligence Symposium, St. Louis, MO, USA, 21–23 September 2008; pp. 1–7. [Google Scholar] [CrossRef] [Green Version]
  115. Chen, Z.; Tang, H. Notice of Retraction: Cockroach Swarm Optimization. In Proceedings of the 2010 2nd International Conference on Computer Engineering and Technology, Chengdu, China, 16–18 April 2010; pp. V6-652–V6-655. [Google Scholar] [CrossRef]
  116. Bouarara, H.A.; Hamou, R.M.; Amine, A. Novel Bio-Inspired Technique of Artificial Social Cockroaches (ASC). Int. J. Organ. Collect. Intell. 2015, 5, 47–79. [Google Scholar] [CrossRef]
  117. Cheng, L.; Han, L.; Zeng, X.; Bian, Y.; Yan, H. Adaptive Cockroach Colony Optimization for Rod-Like Robot Navigation. J. Bionic Eng. 2015, 12, 324–337. [Google Scholar] [CrossRef]
  118. Wu, S.-J.; Wu, C.-T. A bio-inspired optimization for inferring interactive networks: Cockroach swarm evolution. Expert Syst. Appl. 2015, 42, 3253–3267. [Google Scholar] [CrossRef]
  119. Kallioras, N.A.; Lagaros, N.D.; Avtzis, D.N. Pity beetle algorithm—A new metaheuristic inspired by the behavior of bark beetles. Adv. Eng. Softw. 2018, 121, 147–166. [Google Scholar] [CrossRef]
  120. Wang, T.; Yang, L. Beetle Swarm Optimization Algorithm:Theory and Application. arXiv 2018. [Google Scholar] [CrossRef]
  121. Jiang, X.; Li, S. BAS: Beetle Antennae Search Algorithm for Optimization Problems. arXiv 2017. [Google Scholar] [CrossRef]
  122. Alauddin, M. Mosquito flying optimization (MFO). In Proceedings of the 2016 International Conference on Electrical, Electronics, and Optimization Techniques (ICEEOT), Chennai, India, 3–5 March 2016; pp. 79–84. [Google Scholar] [CrossRef]
  123. Minhas, F.U.A.A.; Arif, M. MOX: A novel global optimization algorithm inspired from Oviposition site selection and egg hatching inhibition in mosquitoes. Appl. Soft Comput. 2011, 11, 4614–4625. [Google Scholar] [CrossRef]
  124. Hedayatzadeh, R.; Akhavan Salmassi, F.; Keshtgari, M.; Akbari, R.; Ziarati, K. Termite colony optimization: A novel approach for optimizing continuous problems. In Proceedings of the 2010 18th Iranian Conference on Electrical Engineering, Isfahan, Iran, 11–13 May 2010; pp. 553–558. [Google Scholar] [CrossRef]
  125. Wang, P.; Zhu, Z.; Huang, S. Seven-Spot Ladybird Optimization: A Novel and Efficient Metaheuristic Algorithm for Numerical Optimization. Sci. World J. 2013, 2013, 378515. [Google Scholar] [CrossRef] [Green Version]
  126. Ahmadi, F.; Salehi, H.; Karimi, K. Eurygaster Algorithm: A New Approach to Optimization. Int. J. Comput. Appl. 2012, 57, 8887. [Google Scholar]
  127. Yang, X.-S.; Deb, S. Cuckoo Search via Lévy flights. In Proceedings of the 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC), Coimbatore, India, 9–11 December 2009; pp. 210–214. [Google Scholar] [CrossRef]
  128. Ladhari, T.; Khoja, I.; Msahli, F.; Sakly, A. Parameter identification of a reduced nonlinear model for an activated sludge process based on cuckoo search algorithm. Trans. Inst. Meas. Control 2019, 41, 3352–3363. [Google Scholar] [CrossRef]
  129. Sur, C.; Shukla, A. New Bio-inspired Meta-Heuristics—Green Herons Optimization Algorithm—For Optimization of Travelling Salesman Problem and Road Network. In Swarm, Evolutionary, and Memetic Computing: Proceedings of the 4th International Conference, SEMCCO 2013, Chennai, India, 19–21 December 2013; Springer International Publishing: Berlin/Heidelberg, Germany, 2013; pp. 168–179. [Google Scholar] [CrossRef]
130. Yang, X.-S.; Gandomi, A.H. Bat algorithm: A novel approach for global engineering optimization. Eng. Comput. 2012, 29, 464–483. [Google Scholar] [CrossRef] [Green Version]
  131. Song, S. Auditory Device Design Inspired by Nature; Brunel University: London, UK, 2014. [Google Scholar]
  132. Askarzadeh, A. A novel metaheuristic method for solving constrained engineering optimization problems: Crow search algorithm. Comput. Struct. 2016, 169, 1–12. [Google Scholar] [CrossRef]
  133. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Futur. Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  134. Yang, X.S.; Deb, S. Eagle strategy using Lévy walk and firefly algorithms for stochastic optimization. Stud. Comput. Intell. 2010, 284, 101–111. [Google Scholar] [CrossRef] [Green Version]
135. De Vasconcelos Segundo, E.H.; Mariani, V.C.; Coelho, L.D.S. Design of heat exchangers using Falcon Optimization Algorithm. Appl. Therm. Eng. 2019, 156, 119–144. [Google Scholar] [CrossRef]
  136. Khan, A.T.; Li, S.; Stanimirovic, P.S.; Zhang, Y. Model-free optimization using eagle perching optimizer. arXiv 2018, arXiv:1807.02754. [Google Scholar]
  137. Alsattar, H.A.; Zaidan, A.A.; Zaidan, B.B. Novel meta-heuristic bald eagle search optimisation algorithm. Artif. Intell. Rev. 2020, 53, 2237–2264. [Google Scholar] [CrossRef]
  138. Gheraibia, Y.; Moussaoui, A. Penguins Search Optimization Algorithm (PeSOA). In Recent Trends in Applied Artificial Intelligence: Proceedings of the 26th International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems, IEA/AIE 2013, Amsterdam, The Netherlands, 17–21 June 2013; Springer: Berlin/Heidelberg, Germany, 2013; pp. 222–231. [Google Scholar] [CrossRef]
  139. Harifi, S.; Khalilian, M.; Mohammadzadeh, J.; Ebrahimnejad, S. Emperor Penguins Colony: A new metaheuristic algorithm for optimization. Evol. Intell. 2019, 12, 211–226. [Google Scholar] [CrossRef]
140. Dhiman, G.; Kumar, V. Emperor penguin optimizer: A bio-inspired algorithm for engineering problems. Knowl.-Based Syst. 2018, 159, 20–50. [Google Scholar] [CrossRef]
  141. Meng, X.; Liu, Y.; Gao, X.; Zhang, H. A New Bio-inspired Algorithm: Chicken Swarm Optimization. In Advances in Swarm Intelligence: Proceedings of the 5th International Conference, ICSI 2014, Hefei, China, 17–20 October 2014, Part I; Springer: Berlin/Heidelberg, Germany, 2014; pp. 86–94. [Google Scholar] [CrossRef]
  142. Meng, X.-B.; Gao, X.Z.; Lu, L.; Liu, Y.; Zhang, H. A new bio-inspired optimisation algorithm: Bird Swarm Algorithm. J. Exp. Theor. Artif. Intell. 2016, 28, 673–687. [Google Scholar] [CrossRef]
  143. Duman, E.; Uysal, M.; Alkaya, A.F. Migrating Birds Optimization: A new metaheuristic approach and its performance on quadratic assignment problem. Inf. Sci. 2012, 217, 65–77. [Google Scholar] [CrossRef]
  144. Neshat, M.; Sepidnam, G.; Sargolzaei, M. Swallow swarm optimization algorithm: A new method to optimization. Neural Comput. Appl. 2013, 23, 429–454. [Google Scholar] [CrossRef]
  145. Askarzadeh, A. Bird mating optimizer: An optimization algorithm inspired by bird mating strategies. Commun. Nonlinear Sci. Numer. Simul. 2014, 19, 1213–1228. [Google Scholar] [CrossRef]
  146. Hosseini, E. Laying Chicken Algorithm: A New Meta-Heuristic Approach to Solve Continuous Programming Problems. J. Appl. Comput. Math. 2017, 6, 1–8. [Google Scholar] [CrossRef] [Green Version]
  147. Lamy, J.B. Artificial feeding birds (afb): A new metaheuristic inspired by the behavior of pigeons. In Advances in Nature-Inspired Computing and Applications; Springer: Berlin/Heidelberg, Germany, 2019; pp. 43–60. [Google Scholar] [CrossRef] [Green Version]
  148. Duan, H.; Qiao, P. Pigeon-inspired optimization: A new swarm intelligence optimizer for air robot path planning. Int. J. Intell. Comput. Cybern. 2014, 7, 24–37. [Google Scholar] [CrossRef]
149. Dhiman, G.; Kumar, V. Seagull optimization algorithm: Theory and its applications for large-scale industrial engineering problems. Knowl.-Based Syst. 2019, 165, 169–196. [Google Scholar] [CrossRef]
  150. Samareh Moosavi, S.H.; Khatibi Bardsiri, V. Satin bowerbird optimizer: A new optimization algorithm to optimize ANFIS for software development effort estimation. Eng. Appl. Artif. Intell. 2017, 60, 1–15. [Google Scholar] [CrossRef]
  151. Jain, M.; Maurya, S.; Rani, A.; Singh, V. Owl search algorithm: A novel nature-inspired heuristic paradigm for global optimization. J. Intell. Fuzzy Syst. 2018, 34, 1573–1582. [Google Scholar] [CrossRef]
  152. Sur, C.; Sharma, S.; Shukla, A. Egyptian vulture optimization algorithm—A new nature inspired meta-heuristics for knapsack problem. Adv. Intell. Syst. Comput. 2013, 209 AISC, 227–237. [Google Scholar] [CrossRef]
153. Hajiaghaei-Keshteli, M.; Aminnayeri, M. Keshtel Algorithm (KA); A New Optimization Algorithm Inspired by Keshtels’ Feeding. Proc. IEEE Conf. Ind. Eng. Manag. Syst. 2012, 1, 2249–2253. [Google Scholar]
  154. Dhiman, G.; Kaur, A. STOA: A bio-inspired based optimization algorithm for industrial engineering problems. Eng. Appl. Artif. Intell. 2019, 82, 148–174. [Google Scholar] [CrossRef]
  155. Brabazon, A.; Cui, W.; O’Neill, M. The raven roosting optimisation algorithm. Soft Comput. 2016, 20, 525–545. [Google Scholar] [CrossRef]
  156. Almonacid, B.; Soto, R. Andean Condor Algorithm for cell formation problems. Nat. Comput. 2019, 18, 351–381. [Google Scholar] [CrossRef]
  157. Omidvar, R.; Parvin, H.; Rad, F. SSPCO optimization algorithm (See-See Partridge Chicks Optimization). In Proceedings of the 2015 Fourteenth Mexican International Conference on Artificial Intelligence (MICAI), Cuernavaca, Mexico, 25–31 October 2015; pp. 101–106. [Google Scholar] [CrossRef]
158. El-Dosuky, M.; El-Bassiouny, A.; Hamza, T.; Rashad, M. New Hoopoe Heuristic Optimization. arXiv 2012, arXiv:1211.6410. [Google Scholar]
  159. Blanco, A.L.; Chaparro, N.; Rojas-Galeano, S. An urban pigeon-inspired optimiser for unconstrained continuous domains. In Proceedings of the 2019 8th Brazilian Conference on Intelligent Systems (BRACIS), Salvador, Brazil, 15–18 October 2019; pp. 521–526. [Google Scholar] [CrossRef]
  160. Tawfeeq, M.A. Intelligent Algorithm for Optimum Solutions Based on the Principles of Bat Sonar. arXiv 2012, arXiv:1211.0730. [Google Scholar]
  161. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  162. Hofman, J. Bubble-Net Feeding, Instagram. 2021. Available online: https://www.instagram.com/p/B4H160do6u (accessed on 26 June 2021).
  163. Gandomi, A.H.; Alavi, A.H. Krill herd: A new bio-inspired optimization algorithm. Commun. Nonlinear Sci. Numer. Simul. 2012, 17, 4831–4845. [Google Scholar] [CrossRef]
164. Li, X.-L. An optimizing method based on autonomous animals: Fish-swarm algorithm. Syst. Eng. Theory Pract. 2002, 22, 32–38. [Google Scholar]
  165. Neshat, M.; Sepidnam, G.; Sargolzaei, M.; Toosi, A.N. Artificial fish swarm algorithm: A survey of the state-of-the-art, hybridization, combinatorial and indicative applications. Artif. Intell. Rev. 2014, 42, 965–997. [Google Scholar] [CrossRef]
  166. Li, G.; Yang, Y.; Zhao, T.; Peng, P.; Zhou, Y.; Hu, Y.; Guo, C. An improved artificial fish swarm algorithm and its application to packing and layout problems. In Proceedings of the 2017 36th Chinese Control Conference (CCC), Dalian, China, 26–28 July 2017; pp. 9824–9828. [Google Scholar] [CrossRef]
  167. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
168. Li, X.-L.; Lu, F.; Tian, G.-H.; Qian, J.-X. Applications of artificial fish school algorithm in combinatorial optimization problems. J. Shandong Univ. Eng. Sci. 2005, 34, 64–67. [Google Scholar]
  169. Filho, C.J.A.B.; de Lima Neto, F.B.; Lins, A.J.C.C.; Nascimento, A.I.S.; Lima, M.P. Fish School Search. Nat-Inspired Algorithms Optim. 2009, 193, 261–277. [Google Scholar] [CrossRef]
  170. Shadravan, S.; Naji, H.R.; Bardsiri, V.K. The Sailfish Optimizer: A novel nature-inspired metaheuristic algorithm for solving constrained engineering optimization problems. Eng. Appl. Artif. Intell. 2019, 80, 20–34. [Google Scholar] [CrossRef]
  171. Mozaffari, A.; Fathi, A.; Behzadipour, S. The great salmon run: A novel bio-inspired algorithm for artificial system design and optimisation. Int. J. Bio-Inspired Comput. 2012, 4, 286–301. [Google Scholar] [CrossRef]
  172. Jahani, E.; Chizari, M. Tackling global optimization problems with a novel algorithm—Mouth Brooding Fish algorithm. Appl. Soft Comput. 2018, 62, 987–1002. [Google Scholar] [CrossRef]
  173. Zaldívar, D.; Morales, B.; Rodríguez, A.; Valdivia-G, A.; Cuevas, E.; Pérez-Cisneros, M. A novel bio-inspired optimization model based on Yellow Saddle Goatfish behavior. Biosystems 2018, 174, 1–21. [Google Scholar] [CrossRef]
  174. Zhao, W.; Zhang, Z.; Wang, L. Manta ray foraging optimization: An effective bio-inspired optimizer for engineering applications. Eng. Appl. Artif. Intell. 2020, 87, 103300. [Google Scholar] [CrossRef]
  175. Yilmaz, S.; Sen, S. Electric fish optimization: A new heuristic algorithm inspired by electrolocation. Neural Comput. Appl. 2020, 32, 11543–11578. [Google Scholar] [CrossRef]
  176. Haldar, V.; Chakraborty, N. A novel evolutionary technique based on electrolocation principle of elephant nose fish and shark: Fish electrolocation optimization. Soft Comput. 2017, 21, 3827–3848. [Google Scholar] [CrossRef]
  177. Kaveh, A.; Farhoudi, N. A new optimization method: Dolphin echolocation. Adv. Eng. Softw. 2013, 59, 53–70. [Google Scholar] [CrossRef]
  178. Shiqin, Y.; Jianjun, J.; Guangxing, Y. A Dolphin Partner Optimization. In Proceedings of the 2009 WRI Global Congress on Intelligent Systems, Xiamen, China, 19–21 May 2009; pp. 124–128. [Google Scholar] [CrossRef]
  179. Wu, T.; Yao, M.; Yang, J. Dolphin swarm algorithm. Front. Inf. Technol. Electron. Eng. 2016, 17, 717–729. [Google Scholar] [CrossRef] [Green Version]
  180. Yong, W.; Tao, W.; Cheng-Zhi, Z.; Hua-Juan, H. A New Stochastic Optimization Approach—Dolphin Swarm Optimization Algorithm. Int. J. Comput. Intell. Appl. 2016, 15, 1650011. [Google Scholar] [CrossRef]
  181. Serani, A.; Diez, M. Dolphin Pod Optimization. In Advances in Swarm Intelligence: Proceedings of the 8th International Conference, ICSI 2017, Fukuoka, Japan, 27 July–1 August 2017, Part I; Springer: Berlin/Heidelberg, Germany, 2017; pp. 50–62. [Google Scholar] [CrossRef]
  182. Abedinia, O.; Amjady, N.; Ghasemi, A. A new metaheuristic algorithm based on shark smell optimization. Complexity 2016, 21, 97–116. [Google Scholar] [CrossRef]
  183. Ebrahimi, A.; Khamehchi, E. Sperm whale algorithm: An effective metaheuristic algorithm for production optimization problems. J. Nat. Gas Sci. Eng. 2016, 29, 211–222. [Google Scholar] [CrossRef]
  184. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine Predators Algorithm: A nature-inspired metaheuristic. Expert Syst. Appl. 2020, 152, 113377. [Google Scholar] [CrossRef]
  185. Biyanto, T.R.; Matradji; Irawan, S.; Febrianto, H.Y.; Afdanny, N.; Rahman, A.H.; Gunawan, K.S.; Pratama, J.A.D.; Bethiana, T.N. Killer Whale Algorithm: An Algorithm Inspired by the Life of Killer Whale. Procedia Comput. Sci. 2017, 124, 151–157. [Google Scholar] [CrossRef]
  186. Zeng, B.; Gao, L.; Li, X. Whale Swarm Algorithm for Function Optimization. In Proceedings of the Intelligent Computing Theories and Application: 13th International Conference, ICIC 2017, Liverpool, UK, 7–10 August 2017; pp. 624–639. [Google Scholar] [CrossRef] [Green Version]
187. Masadeh, R.; Sharieh, A.; Mahafzah, B.A. Humpback Whale Optimization Algorithm Based on Vocal Behavior for Task Scheduling in Cloud Computing. Int. J. Adv. Sci. Technol. 2019, 13, 121–140. [Google Scholar]
  188. Uymaz, S.A.; Tezel, G.; Yel, E. Artificial algae algorithm (AAA) for nonlinear global optimization. Appl. Soft Comput. 2015, 31, 153–171. [Google Scholar] [CrossRef]
  189. Salcedo-Sanz, S.; Del Ser, J.; Landa-Torres, I.; Gil-López, S.; Portilla-Figueras, J.A. The Coral Reefs Optimization Algorithm: A Novel Metaheuristic for Efficiently Solving Optimization Problems. Sci. World J. 2014, 2014, 739768. [Google Scholar] [CrossRef] [PubMed] [Green Version]
190. Eesa, A.S.; Brifcani, A.M.A.; Orman, Z. Cuttlefish algorithm—A novel bio-inspired optimization algorithm. Int. J. Sci. Eng. Res. 2013, 4, 1978–1987. [Google Scholar]
  191. An, J.; Kang, Q.; Wang, L.; Wu, Q. Mussels Wandering Optimization: An Ecologically Inspired Algorithm for Global Optimization. Cognit. Comput. 2013, 5, 188–199. [Google Scholar] [CrossRef]
  192. Kaur, S.; Awasthi, L.K.; Sangal, A.L.; Dhiman, G. Tunicate Swarm Algorithm: A new bio-inspired based metaheuristic paradigm for global optimization. Eng. Appl. Artif. Intell. 2020, 90, 103541. [Google Scholar] [CrossRef]
  193. Masadeh, R.; Mahafzah, B.A.; Sharieh, A. Sea Lion Optimization algorithm. Int. J. Adv. Comput. Sci. Appl. 2019, 10, 388–395. [Google Scholar] [CrossRef] [Green Version]
  194. Sulaiman, M.H.; Mustaffa, Z.; Saari, M.M.; Daniyal, H.; Musirin, I.; Daud, M.R. Barnacles mating optimizer: An evolutionary algorithm for solving optimization. In Proceedings of the 2018 IEEE International Conference on Automatic Control and Intelligent Systems (I2CACIS), Shah Alam, Malaysia, 20 October 2018; pp. 99–104. [Google Scholar] [CrossRef]
  195. Pook, M.F.; Ramlan, E.I. The Anglerfish algorithm: A derivation of randomized incremental construction technique for solving the traveling salesman problem. Evol. Intell. 2019, 12, 11–20. [Google Scholar] [CrossRef]
  196. Catalbas, M.C.; Gulten, A. Circular structures of puffer fish: A new metaheuristic optimization algorithm. In Proceedings of the 2018 Third International Conference on Electrical and Biomedical Engineering, Clean Energy and Green Computing (EBECEGC), Beirut, Lebanon, 25–27 April 2018; pp. 1–5. [Google Scholar] [CrossRef]
  197. Ghojogh, B.; Sharifian, S. Pontogammarus maeoticus swarm optimization: A metaheuristic optimization algorithm. arXiv 2018, arXiv:1807.01844. [Google Scholar]
  198. Sukoon, M.; Banka, H. Water-Tank Fish Algorithm: A New Metaheuristic for Optimization. Int. J. Comput. Appl. 2018, 182, 1–5. [Google Scholar] [CrossRef]
  199. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef] [Green Version]
  200. Saber, M.; El-kenawy, E.-S.M. Design and implementation of accurate frequency estimator depend on deep learning. Int. J. Eng. Technol. 2020, 9, 367–377. [Google Scholar] [CrossRef]
  201. Eusuff, M.; Lansey, K.; Pasha, F. Shuffled frog-leaping algorithm: A memetic meta-heuristic for discrete optimization. Eng. Optim. 2006, 38, 129–154. [Google Scholar] [CrossRef]
  202. Elbeltagi, E.; Hegazy, T.; Grierson, D. A modified shuffled frog-leaping optimization algorithm: Applications to project management. Struct. Infrastruct. Eng. 2007, 3, 53–60. [Google Scholar] [CrossRef]
  203. Li, X.; Luo, J.; Chen, M.-R.; Wang, N. An improved shuffled frog-leaping algorithm with extremal optimisation for continuous optimisation. Inf. Sci. 2012, 192, 143–151. [Google Scholar] [CrossRef]
  204. Zhang, X.; Hu, X.; Cui, G.; Wang, Y.; Niu, Y. An improved shuffled frog leaping algorithm with cognitive behavior. In Proceedings of the 2008 7th World Congress on Intelligent Control and Automation, Chongqing, China, 25–27 June 2008; pp. 6197–6202. [Google Scholar] [CrossRef]
  205. Chu, S.-C.; Tsai, P.; Pan, J.-S. Cat Swarm Optimization. In PRICAI 2006: Trends in Artificial Intelligence, 9th Pacific Rim International Conference on Artificial Intelligence, Guilin, China, 7–11 August 2006, Proceedings; Springer: Berlin/Heidelberg, Germany, 2006; pp. 854–858. [Google Scholar] [CrossRef]
  206. Bansal, J.C.; Sharma, H.; Jadon, S.S.; Clerc, M. Spider Monkey Optimization algorithm for numerical optimization. Memetic Comput. 2014, 6, 31–47. [Google Scholar] [CrossRef]
207. Mucherino, A.; Seref, O.; Kundakcioglu, O.E.; Pardalos, P. Monkey search: A novel metaheuristic search for global optimization. In AIP Conference Proceedings; AIP: College Park, MD, USA, 2007; Volume 953, pp. 162–173. [Google Scholar] [CrossRef]
  208. Meng, Z.; Pan, J.-S. Monkey King Evolution: A new memetic evolutionary algorithm and its application in vehicle fuel consumption optimization. Knowl.-Based Syst. 2016, 97, 144–157. [Google Scholar] [CrossRef]
  209. Mahmood, M.; Al-Khateeb, B. The blue monkey: A new nature inspired metaheuristic optimization algorithm. Period. Eng. Nat. Sci. 2019, 7, 1054. [Google Scholar] [CrossRef] [Green Version]
  210. Yazdani, M.; Jolai, F. Lion Optimization Algorithm (LOA): A nature-inspired metaheuristic algorithm. J. Comput. Des. Eng. 2016, 3, 24–36. [Google Scholar] [CrossRef] [Green Version]
  211. Rajakumar, B.R. The Lion’s Algorithm: A New Nature-Inspired Search Algorithm. Procedia Technol. 2012, 6, 126–135. [Google Scholar] [CrossRef] [Green Version]
  212. Wang, B.; Jin, X.; Cheng, B. Lion pride optimizer: An optimization algorithm inspired by lion pride behavior. Sci. China Inf. Sci. 2012, 55, 2369–2389. [Google Scholar] [CrossRef]
  213. Kaveh, A.; Mahjoubi, S. Lion Pride Optimization Algorithm: A meta-heuristic method for global optimization problems. Sci. Iran. 2018, 25, 3113–3132. [Google Scholar] [CrossRef]
  214. Tang, R.; Fong, S.; Yang, X.S.; Deb, S. Wolf search algorithm with ephemeral memory. In Proceedings of the Seventh International Conference on Digital Information Management (ICDIM 2012), Macau, China, 22–24 August 2012; pp. 165–172. [Google Scholar] [CrossRef]
  215. Wu, H.S.; Zhang, F.M. Wolf pack algorithm for unconstrained global optimization. Math. Probl. Eng. 2014, 2014, 465082. [Google Scholar] [CrossRef] [Green Version]
216. Alhijawi, B. Dominion algorithm—A novel metaheuristic optimization method. Int. J. Adv. Intell. Paradig. 2021, 20, 221–242. [Google Scholar]
  217. Chi, M. An improved Wolf pack algorithm. In Proceedings of the International Conference on Artificial Intelligence, Information Processing and Cloud Computing (AIIPCC’19), Sanya, China, 19–21 December 2019; ACM: Guildford, UK, 2019. [Google Scholar] [CrossRef]
  218. Dhiman, G.; Kumar, V. Spotted hyena optimizer: A novel bio-inspired based metaheuristic technique for engineering applications. Adv. Eng. Softw. 2017, 114, 48–70. [Google Scholar] [CrossRef]
219. Pierezan, J.; Coelho, L.D.S. Coyote Optimization Algorithm: A New Metaheuristic for Global Optimization Problems. In Proceedings of the 2018 IEEE Congress on Evolutionary Computation (CEC), Rio de Janeiro, Brazil, 8–13 July 2018. [Google Scholar] [CrossRef]
  220. Polap, D.; Woźniak, M. Polar bear optimization algorithm: Meta-heuristic with fast population movement and dynamic birth and death mechanism. Symmetry 2017, 9, 203. [Google Scholar] [CrossRef] [Green Version]
  221. Klein, C.E.; Mariani, V.C.; Coelho, L.D.S. Cheetah based optimization algorithm: A novel swarm intelligence paradigm. In Proceedings of the ESANN 2018 Proceedings, European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, Bruges, Belgium, 25–27 April 2018; pp. 685–690. [Google Scholar]
  222. Goudhaman, M. Cheetah chase algorithm (CCA): A nature-inspired metaheuristic algorithm. Int. J. Eng. Technol. 2018, 7, 1804. [Google Scholar] [CrossRef]
  223. Chen, C.C.; Tsai, Y.C.; Liu, I.I.; Lai, C.C.; Yeh, Y.T.; Kuo, S.Y.; Chou, Y.H. A Novel Metaheuristic: Jaguar Algorithm with Learning Behavior. In Proceedings of the 2015 IEEE International Conference on Systems, Man, and Cybernetics, Hong Kong, China, 9–12 October 2015; pp. 1595–1600. [Google Scholar] [CrossRef]
224. Subramanian, C. African Wild Dog Algorithm: A New Meta Heuristic Approach for Optimal Design of Steel Structures. Ph.D. Thesis, Anna University, Tamil Nadu, India, 2015. [Google Scholar]
  225. Tripathi, A.K.; Sharma, K.; Bala, M. Military dog based optimizer and its application to fake review detection. arXiv 2019, arXiv:1909.11890. [Google Scholar]
  226. Zhang, L.M.; Dahlmann, C.; Zhang, Y. Human-Inspired Algorithms for continuous function optimization. In Proceedings of the 2009 IEEE International Conference on Intelligent Computing and Intelligent Systems, Shanghai, China, 20–22 November 2009; pp. 318–321. [Google Scholar] [CrossRef]
  227. Wang, G.G.; Deb, S.; Coelho, L.D.S. Elephant Herding Optimization. In Proceedings of the 2015 3rd International Symposium on Computational and Business Intelligence (ISCBI), Bali, Indonesia, 7–9 December 2015; pp. 1–5. [Google Scholar] [CrossRef]
  228. Deb, S.; Fong, S.; Tian, Z. Elephant Search Algorithm for optimization problems. In Proceedings of the 2015 Tenth International Conference on Digital Information Management (ICDIM), Jeju, Republic of Korea, 21–23 October 2015; pp. 249–255. [Google Scholar] [CrossRef]
  229. Jain, M.; Singh, V.; Rani, A. A novel nature-inspired algorithm for optimization: Squirrel search algorithm. Swarm Evol. Comput. 2019, 44, 148–175. [Google Scholar] [CrossRef]
  230. Azizyan, G.; Miarnaeimi, F.; Rashki, M.; Shabakhty, N. Flying Squirrel Optimizer (FSO): A novel SI-based optimization algorithm for engineering problems. Iran. J. Optim. 2019, 11, 177–205. [Google Scholar]
  231. Klein, C.E.; Coelho, L.D.S. Meerkats-inspired algorithm for global optimization problems. In Proceedings of the ESANN 2018 Proceedings, European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, Bruges, Belgium, 25–27 April 2018; pp. 679–684. [Google Scholar]
  232. Al-Obaidi, A.T.S.; Abdullah, H.S.; Ahmed, Z.O. Meerkat clan algorithm: A new swarm intelligence algorithm. Indones. J. Electr. Eng. Comput. Sci. 2018, 10, 354–360. [Google Scholar] [CrossRef]
  233. Kim, H.; Ahn, B. A new evolutionary algorithm based on sheep flocks heredity model. In Proceedings of the 2001 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing (IEEE Cat. No.01CH37233), Victoria, BC, Canada, 26–28 August 2001; pp. 514–517. [Google Scholar] [CrossRef]
  234. Kaveh, A.; Zaerreza, A. Shuffled shepherd optimization method: A new Meta-heuristic algorithm. Eng. Comput. 2020, 37, 2357–2389. [Google Scholar] [CrossRef]
  235. Khalid Ibrahim, M.; Salim Ali, R. Novel Optimization Algorithm Inspired by Camel Traveling Behavior. Iraqi J. Electr. Electron. Eng. 2016, 12, 167–177. [Google Scholar] [CrossRef]
  236. Motevali, M.M.; Shanghooshabad, A.M.; Aram, R.Z.; Keshavarz, H. WHO: A New Evolutionary Algorithm Bio-Inspired by Wildebeests with a Case Study on Bank Customer Segmentation. Int. J. Pattern Recognit. Artif. Intell. 2019, 33, 1959017. [Google Scholar] [CrossRef]
  237. Maciel, C.O.; Cuevas, E.; Navarro, M.A.; Zaldívar, D.; Hinojosa, S. Side-Blotched Lizard Algorithm: A polymorphic population approach. Appl. Soft Comput. J. 2020, 88, 106039. [Google Scholar] [CrossRef]
  238. Zangbari Koohi, S.; Abdul Hamid, N.A.W.; Othman, M.; Ibragimov, G. Raccoon Optimization Algorithm. IEEE Access 2019, 7, 5383–5399. [Google Scholar] [CrossRef]
  239. Tian, Z.; Fong, S.; Tang, R.; Deb, S.; Wong, R. Rhinoceros Search Algorithm. In Proceedings of the 2016 3rd International Conference on Soft Computing & Machine Intelligence (ISCMI), Dubai, United Arab Emirates, 23–25 November 2016; pp. 18–22. [Google Scholar] [CrossRef]
  240. Yousefi, F.S.; Karimian, N.; Ghodousian, A. Xerus Optimization Algorithm (XOA): A novel nature-inspired metaheuristic algorithm for solving global optimization problems. J. Algorithms Comput. 2019, 51, 111–126. [Google Scholar]
241. Wang, G.G.; Deb, S.; Coelho, L.D.S. Earthworm optimisation algorithm: A bio-inspired metaheuristic algorithm for global optimisation problems. Int. J. Bio-Inspired Comput. 2018, 12, 1–22. [Google Scholar] [CrossRef]
242. Fathollahi Fard, A.M.; Hajiaghaei-Keshteli, M.; Tavakkoli-Moghaddam, R. Red Deer Algorithm (RDA); A New Optimization Algorithm Inspired by Red Deers’ Mating. In Proceedings of the 12th International Conference on Industrial Engineering (ICIE 2016), Tehran, Iran, 25–26 January 2016; pp. 1–10. [Google Scholar]
  243. Mohammad, T.M.H.; Mohammad, H.B. A novel meta-heuristic algorithm for numerical function optimization: Blind, naked mole-rats (BNMR) algorithm. Sci. Res. Essays 2012, 7, 3566–3583. [Google Scholar] [CrossRef]
244. Wang, G.-G.; Gao, X.-Z.; Zenger, K.; Coelho, L.D.S. A Novel Metaheuristic Algorithm inspired by Rhino Herd Behavior. In Proceedings of the 9th EUROSIM Congress on Modelling and Simulation, EUROSIM 2016, the 57th SIMS Conference on Simulation and Modelling SIMS 2016; Linköping University Electronic Press: Jönköping, Sweden, 2018; Volume 142, pp. 1026–1033. [Google Scholar] [CrossRef] [Green Version]
  245. Shamsaldin, A.S.; Rashid, T.A.; Al-Rashid Agha, R.A.; Al-Salihi, N.K.; Mohammadi, M. Donkey and smuggler optimization algorithm: A collaborative working approach to path finding. J. Comput. Des. Eng. 2019, 6, 562–583. [Google Scholar] [CrossRef]
  246. Odili, J.B.; Kahar, M.N.M.; Anwar, S. African Buffalo Optimization: A Swarm-Intelligence Technique. Procedia Comput. Sci. 2015, 76, 443–448. [Google Scholar] [CrossRef] [Green Version]
  247. Garcia, F.; Perez, J. Jumping frogs optimization: A new swarm method for discrete optimization. Doc. Trab. DEIOC 2008, 3, 10. [Google Scholar]
248. Yang, X.-S. Flower Pollination Algorithm for Global Optimization. In Unconventional Computation and Natural Computation: Proceedings of the 11th International Conference, UCNC 2012, Orléans, France, 3–7 September 2012; Springer: Berlin/Heidelberg, Germany, 2012; pp. 240–249. [Google Scholar] [CrossRef] [Green Version]
  249. Abdel-Basset, M.; Shawky, L.A. Flower pollination algorithm: A comprehensive review. Artif. Intell. Rev. 2019, 52, 2533–2557. [Google Scholar] [CrossRef]
  250. Mehrabian, A.R.; Lucas, C. A novel numerical optimization algorithm inspired from weed colonization. Ecol. Inform. 2006, 1, 355–366. [Google Scholar] [CrossRef]
  251. Hume, G. Dandelion (Taraxacum Officinale); Wikipedia. 2006. Available online: https://en.wikipedia.org/wiki/Taraxacum#/media/File:DandelionFlower.jpg (accessed on 20 June 2023).
252. Epukas. Burdock (Arctium tomentosum); Wikipedia. 2008. Available online: https://en.wikipedia.org/wiki/Arctium#/media/File:Villtakjas_2008.jpg (accessed on 20 June 2023).
  253. Stüber, K. Species: Amaranthus Tricolor Family: Amaranthaceae. Wikipedia. 2004. Available online: https://en.wikipedia.org/wiki/Amaranth#/media/File:Amaranthus_tricolor0.jpg (accessed on 20 June 2023).
  254. Kiran, M.S. TSA: Tree-seed algorithm for continuous optimization. Expert Syst. Appl. 2015, 42, 6686–6698. [Google Scholar] [CrossRef]
  255. Ghaemi, M.; Feizi-Derakhshi, M.-R. Forest Optimization Algorithm. Expert Syst. Appl. 2014, 41, 6676–6687. [Google Scholar] [CrossRef]
  256. Cheraghalipour, A.; Hajiaghaei-Keshteli, M.; Paydar, M.M. Tree Growth Algorithm (TGA): A novel approach for solving optimization problems. Eng. Appl. Artif. Intell. 2018, 72, 393–414. [Google Scholar] [CrossRef]
  257. Li, Q.Q.; Song, K.; He, Z.C.; Li, E.; Cheng, A.G.; Chen, T. The artificial tree (AT) algorithm. Eng. Appl. Artif. Intell. 2017, 65, 99–110. [Google Scholar] [CrossRef] [Green Version]
  258. Moez, H.; Kaveh, A.; Taghizadieh, N. Natural Forest Regeneration Algorithm: A New Meta-Heuristic. Iran. J. Sci. Technol. Trans. Civ. Eng. 2016, 40, 311–326. [Google Scholar] [CrossRef]
  259. Salhi, A.; Fraga, E.S. Nature-inspired optimisation approaches and the new plant propagation algorithm. Int. Conf. Numer. Anal. Optim. 2011, K2. [Google Scholar] [CrossRef]
  260. Merrikh-Bayat, F. A Numerical Optimization Algorithm Inspired by the Strawberry. arXiv 2014, arXiv:1407.7399. [Google Scholar]
  261. Bidar, M.; Kanan, H.R.; Mouhoub, M.; Sadaoui, S. Mushroom Reproduction Optimization (MRO): A Novel Nature-Inspired Evolutionary Algorithm. In Proceedings of the 2018 IEEE Congress on Evolutionary Computation (CEC), Rio de Janeiro, Brazil, 8–13 July 2018; pp. 1–10. [Google Scholar] [CrossRef]
  262. Shayanfar, H.; Gharehchopogh, F.S. Farmland fertility: A new metaheuristic algorithm for solving continuous optimization problems. Appl. Soft Comput. 2018, 71, 728–746. [Google Scholar] [CrossRef]
  263. Premaratne, U.; Samarabandu, J.; Sidhu, T. A new biologically inspired optimization algorithm. In Proceedings of the 2009 International Conference on Industrial and Information Systems (ICIIS), Peradeniya, Sri Lanka, 28–31 December 2009; pp. 279–284. [Google Scholar] [CrossRef]
  264. Mohammadi, M.; Khodaygan, S. An algorithm for numerical nonlinear optimization: Fertile Field Algorithm (FFA). J. Ambient Intell. Humaniz. Comput. 2020, 11, 865–878. [Google Scholar] [CrossRef]
  265. Luqman, M.; Saeed, M.; Ali, J.; Tabassam, M.F.; Mahmood, T. Targeted showering optimization: Training irrigation tools to solve crop planning problems. Pakistan J. Agric. Sci. 2019, 56, 225–235. [Google Scholar]
  266. Merrikh-Bayat, F. The runner-root algorithm: A metaheuristic for solving unimodal and multimodal optimization problems inspired by runners and roots of plants in nature. Appl. Soft Comput. J. 2015, 33, 292–303. [Google Scholar] [CrossRef]
  267. Labbi, Y.; Ben Attous, D.; Gabbar, H.A.; Mahdad, B.; Zidan, A. A new rooted tree optimization algorithm for economic dispatch with valve-point effect. Int. J. Electr. Power Energy Syst. 2016, 79, 298–311. [Google Scholar] [CrossRef]
  268. Zhang, H.; Zhu, Y.; Chen, H. Root growth model: A novel approach to numerical function optimization and simulation of plant root system. Soft Comput. 2014, 18, 521–537. [Google Scholar] [CrossRef]
  269. Qi, X.; Zhu, Y.; Chen, H.; Zhang, D.; Niu, B. An Idea Based on Plant Root Growth for Numerical Optimization. In Intelligent Computing Theories and Technology: Proceedings of the 9th International Conference, ICIC 2013, Nanning, China, 28–31 July 2013; Springer: Berlin/Heidelberg, Germany, 2013; pp. 571–578. [Google Scholar] [CrossRef] [Green Version]
  270. Cai, W.; Yang, W.; Chen, X. A global optimization algorithm based on plant growth theory: Plant growth optimization. In Proceedings of the 2008 International Conference on Intelligent Computation Technology and Automation (ICICTA), Changsha, China, 20–22 October 2008; pp. 1194–1199. [Google Scholar] [CrossRef]
  271. Liu, L.; Song, Y.; Ma, H.; Zhang, X. Physarum optimization: A biology-inspired algorithm for minimal exposure path problem in wireless sensor networks. In Proceedings of the 2012 Proceedings IEEE INFOCOM, Orlando, FL, USA, 25–30 March 2012; pp. 1296–1304. [Google Scholar] [CrossRef]
  272. Feng, X.; Liu, Y.; Yu, H.; Luo, F. Physarum-energy optimization algorithm. Soft Comput. 2019, 23, 871–888. [Google Scholar] [CrossRef]
  273. Karci, A.; Alatas, B. Thinking capability of saplings growing up algorithm. In International Conference on Intelligent Data Engineering and Automated Learning: Proceedings of the 7th International Conference, Burgos, Spain, 20–23 September 2006; Springer: Berlin/Heidelberg, Germany, 2006; pp. 386–393. [Google Scholar] [CrossRef]
  274. Sulaiman, M.; Salhi, A. A seed-based plant propagation algorithm: The feeding station model. Sci. World J. 2015, 2015, 904364. [Google Scholar] [CrossRef] [Green Version]
  275. Zhao, Z.; Cui, Z.; Zeng, J.; Yue, X. Artificial plant optimization algorithm for constrained optimization problems. In Proceedings of the 2011 Second International Conference on Innovations in Bio-inspired Computing and Applications, Shenzhen, China, 16–18 December 2011; pp. 120–123. [Google Scholar] [CrossRef]
  276. Cheng, L.; Zhang, Q.; Tao, F.; Ni, K.; Cheng, Y. A novel search algorithm based on waterweeds reproduction principle for job shop scheduling problem. Int. J. Adv. Manuf. Technol. 2016, 84, 405–424. [Google Scholar] [CrossRef]
  277. Gowri, R.; Rathipriya, R. Non-Swarm Plant Intelligence Algorithm: BladderWorts Suction (BWS) Algorithm. In Proceedings of the 2018 International Conference on Circuits and Systems in Digital Enterprise Technology (ICCSDET), Kottayam, India, 21–22 December 2018. [Google Scholar] [CrossRef]
  278. Murase, H. Finite element inverse analysis using a photosynthetic algorithm. Comput. Electron. Agric. 2000, 29, 115–123. [Google Scholar] [CrossRef]
  279. Eskandar, H.; Sadollah, A.; Bahreininejad, A.; Hamdi, M. Water cycle algorithm—A novel metaheuristic optimization method for solving constrained engineering optimization problems. Comput. Struct. 2012, 110–111, 151–166. [Google Scholar] [CrossRef]
280. Rabanal, P.; Rodríguez, I.; Rubio, F. Using river formation dynamics to design heuristic algorithms. In Proceedings of the 6th International Conference, UC 2007, Kingston, ON, Canada, 13–17 August 2007; pp. 163–177. [Google Scholar] [CrossRef]
  281. Kaveh, A.; Bakhshpoori, T. Water Evaporation Optimization: A novel physically inspired optimization algorithm. Comput. Struct. 2016, 167, 69–85. [Google Scholar] [CrossRef]
  282. Aghay Kaboli, S.H.; Selvaraj, J.; Rahim, N.A. Rain-fall optimization algorithm: A population based algorithm for solving constrained optimization problems. J. Comput. Sci. 2017, 19, 31–42. [Google Scholar] [CrossRef]
  283. Wedyan, A.; Whalley, J.; Narayanan, A. Hydrological Cycle Algorithm for Continuous Optimization Problems. J. Optim. 2017, 2017, 3828420. [Google Scholar] [CrossRef] [Green Version]
284. Gao-Wei, Y.; Zhanju, H. A Novel Atmosphere Clouds Model Optimization Algorithm. In Proceedings of the 2012 International Conference on Computing, Measurement, Control and Sensor Network, Taiyuan, China, 7–9 July 2012; pp. 217–220. [Google Scholar] [CrossRef]
  285. Jiang, Q.; Wang, L.; Hei, X.; Fei, R.; Yang, D.; Zou, F.; Li, H.; Cao, Z.; Lin, Y. Optimal approximation of stable linear systems with a novel and efficient optimization algorithm. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC), Beijing, China, 6–11 July 2014; pp. 840–844. [Google Scholar] [CrossRef]
  286. Shareef, H.; Ibrahim, A.A.; Mutlag, A.H. Lightning search algorithm. Appl. Soft Comput. 2015, 36, 315–333. [Google Scholar] [CrossRef]
  287. Nematollahi, A.F.; Rahiminejad, A.; Vahidi, B. A novel physical based meta-heuristic optimization method known as Lightning Attachment Procedure Optimization. Appl. Soft Comput. 2017, 59, 596–621. [Google Scholar] [CrossRef]
  288. Bayraktar, Z.; Komurcu, M.; Werner, D.H. Wind Driven Optimization (WDO): A novel nature-inspired optimization algorithm and its application to electromagnetics. In Proceedings of the 2010 IEEE Antennas and Propagation Society International Symposium, Toronto, ON, Canada, 11–17 July 2010; pp. 1–4. [Google Scholar] [CrossRef]
289. Rbouh, I.; El Imrani, A.A. Hurricane-based Optimization Algorithm. AASRI Procedia 2014, 6, 26–33. [Google Scholar] [CrossRef]
  290. Zhao, W.; Wang, L.; Zhang, Z. Artificial ecosystem-based optimization: A novel nature-inspired meta-heuristic algorithm. Neural Comput. Appl. 2020, 32, 9383–9425. [Google Scholar] [CrossRef]
  291. Adham, M.T.; Bentley, P.J. An Artificial Ecosystem Algorithm applied to static and Dynamic Travelling Salesman Problems. In Proceedings of the 2014 IEEE International Conference on Evolvable Systems, Orlando, FL, USA, 9–12 December 2014; pp. 149–156. [Google Scholar] [CrossRef]
  292. Jahedbozorgan, M.; Amjadifard, R. Sunshine: A novel random search for continuous global optimization. In Proceedings of the 2016 1st Conference on Swarm Intelligence and Evolutionary Computation (CSIEC), Bam, Iran, 9–11 March 2016; pp. 12–17. [Google Scholar] [CrossRef]
  293. Hosseini, E.; Sadiq, A.S.; Ghafoor, K.Z.; Rawat, D.B.; Saif, M.; Yang, X. Volcano eruption algorithm for solving optimization problems. Neural Comput. Appl. 2021, 33, 2321–2337. [Google Scholar] [CrossRef]
  294. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar] [CrossRef]
  295. Freitas, D.; Lopes, L.G.; Morgado-Dias, F. Particle Swarm Optimisation: A Historical Review up to the Current Developments. Entropy 2020, 22, 362. [Google Scholar] [CrossRef] [Green Version]
  296. Marini, F.; Walczak, B. Particle swarm optimization (PSO). A tutorial. Chemom. Intell. Lab. Syst. 2015, 149, 153–165. [Google Scholar] [CrossRef]
  297. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput. Des. 2011, 43, 303–315. [Google Scholar] [CrossRef]
  298. Xie, X.-F.; Zhang, W.-J.; Yang, Z.-L. Social cognitive optimization for nonlinear programming problems. In Proceedings of the International Conference on Machine Learning and Cybernetics, Beijing, China, 4–5 November 2002; Volume 2, pp. 779–783. [Google Scholar] [CrossRef]
  299. Xu, Y.; Cui, Z.; Zeng, J. Social Emotional Optimization Algorithm for Nonlinear Constrained Optimization Problems. In Swarm, Evolutionary, and Memetic Computing: Proceedings of the First International Conference on Swarm, Evolutionary, and Memetic Computing, SEMCCO 2010, Chennai, India, 16–18 December 2010; Springer: Berlin/Heidelberg, Germany, 2010; pp. 583–590. [Google Scholar] [CrossRef]
  300. Shi, Y. Brain Storm Optimization Algorithm. In Advances in Swarm Intelligence, Part I: Proceedings of the Second International Conference, ICSI 2011, Chongqing, China, 12–15 June 2011; Springer: Berlin/Heidelberg, Germany, 2011; pp. 303–309. [Google Scholar] [CrossRef]
  301. Cheng, S.; Qin, Q.; Chen, J.; Shi, Y. Brain storm optimization algorithm: A review. Artif. Intell. Rev. 2016, 46, 445–458. [Google Scholar] [CrossRef]
  302. Mousavirad, S.J.; Ebrahimpour-Komleh, H. Human mental search: A new population-based metaheuristic optimization algorithm. Appl. Intell. 2017, 47, 850–887. [Google Scholar] [CrossRef]
  303. Wang, L.; Ni, H.; Yang, R.; Fei, M.; Ye, W. A Simple Human Learning Optimization Algorithm. In Computational Intelligence, Networked Systems and Their Applications: Proceedings of the International Conference of Life System Modeling and Simulation, LSMS 2014 and International Conference on Intelligent Computing for Sustainable Energy and Environment, ICSEE 2014, Shanghai, China, 20–23 September 2014; Springer: Berlin/Heidelberg, Germany, 2014; pp. 56–65. [Google Scholar] [CrossRef]
  304. Feng, X.; Zou, R.; Yu, H. A novel optimization algorithm inspired by the creative thinking process. Soft Comput. 2015, 19, 2955–2972. [Google Scholar] [CrossRef]
  305. Atashpaz-Gargari, E.; Lucas, C. Imperialist competitive algorithm: An algorithm for optimization inspired by imperialistic competition. In Proceedings of the 2007 IEEE Congress on Evolutionary Computation, Singapore, 25–28 September 2007; pp. 4661–4667. [Google Scholar] [CrossRef]
  306. Reynolds, R.G. An Introduction to Cultural Algorithms. In Proceedings of the 3rd Annual Conference on Evolutionary Programming; World Scientific Publishing: Singapore, 1994; pp. 131–139. Available online: https://www.researchgate.net/publication/201976967 (accessed on 20 June 2023).
  307. Gandomi, A.H. Interior search algorithm (ISA): A novel approach for global optimization. ISA Trans. 2014, 53, 1168–1183. [Google Scholar] [CrossRef] [PubMed]
  308. Ghorbani, N.; Babaei, E. Exchange market algorithm. Appl. Soft Comput. 2014, 19, 177–187. [Google Scholar] [CrossRef]
  309. Punnathanam, V.; Kotecha, P. Yin-Yang-pair Optimization: A novel lightweight optimization algorithm. Eng. Appl. Artif. Intell. 2016, 54, 62–79. [Google Scholar] [CrossRef]
  310. Shayeghi, H.; Dadashpour, J. Anarchic Society Optimization Based PID Control of an Automatic Voltage Regulator (AVR) System. Electr. Electron. Eng. 2012, 2, 199–207. [Google Scholar] [CrossRef] [Green Version]
  311. Yampolskiy, R.V.; Ashby, L.; Hassan, L. Wisdom of Artificial Crowds—A Metaheuristic Algorithm for Optimization. J. Intell. Learn. Syst. Appl. 2012, 4, 98–107. [Google Scholar] [CrossRef] [Green Version]
  312. Kulkarni, A.J.; Krishnasamy, G.; Abraham, A. Cohort Intelligence: A Socio-Inspired Optimization Method; Springer International Publishing: Cham, Switzerland, 2017. [Google Scholar] [CrossRef]
  313. Borji, A. A New Global Optimization Algorithm Inspired by Parliamentary Political Competitions. In MICAI 2007: Advances in Artificial Intelligence; Springer: Berlin/Heidelberg, Germany, 2007; pp. 61–71. [Google Scholar] [CrossRef]
  314. Chen, T. A Novel Bionic Intelligent Optimization Algorithm: Artificial Tribe Algorithm and its Performance Analysis. In Proceedings of the 2010 International Conference on Measuring Technology and Mechatronics Automation, Changsha, China, 13–14 March 2010; pp. 222–225. [Google Scholar] [CrossRef]
  315. Kashan, A.H.; Tavakkoli-Moghaddam, R.; Gen, M. A Warfare Inspired Optimization Algorithm: The Find-Fix-Finish-Exploit-Analyze (F3EA) Metaheuristic Algorithm. In Proceedings of the Tenth International Conference on Management Science and Engineering Management; Springer: Singapore, 2017; pp. 393–408. [Google Scholar] [CrossRef]
316. Khormouji, H.B.; Hajipour, H.; Rostami, H. BODMA: A novel metaheuristic algorithm for binary optimization problems based on open source Development Model Algorithm. In Proceedings of the 7th International Symposium on Telecommunications (IST'2014), Tehran, Iran, 9–11 September 2014; pp. 49–54. [Google Scholar] [CrossRef]
  317. Chifu, V.R.; Salomie, I.; Chifu, E.Ş.; Pop, C.B.; Poruţiu, P.; Antal, M. Jigsaw inspired metaheuristic for selecting the optimal solution in web service composition. Adv. Intell. Syst. Comput. 2016, 356, 573–584. [Google Scholar] [CrossRef]
  318. Pincus, M. Letter to the Editor—A Monte Carlo Method for the Approximate Solution of Certain Types of Constrained Optimization Problems. Oper. Res. 1970, 18, 1225–1228. [Google Scholar] [CrossRef] [Green Version]
  319. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by Simulated Annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef] [PubMed]
320. Busetti, F. Simulated Annealing Overview. 2003; pp. 1–10. Available online: https://www.aiinfinance.com/saweb.pdf (accessed on 20 August 2021).
  321. Varty, Z. Simulated Annealing Overview. 2017. Available online: http://lancs.ac.uk/~varty/RTOne.pdf (accessed on 20 August 2021).
  322. Haddock, J.; Mittenthal, J. Simulation optimization using simulated annealing. Comput. Ind. Eng. 1992, 22, 387–395. [Google Scholar] [CrossRef]
  323. Formato, R.A. Central force optimization: A new metaheuristic with applications in applied electromagnetics. Prog. Electromagn. Res. 2007, 77, 425–491. [Google Scholar] [CrossRef] [Green Version]
  324. Rashedi, E.; Nezamabadi-pour, H.; Saryazdi, S. GSA: A Gravitational Search Algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  325. Hatamlou, A. Black hole: A new heuristic optimization approach for data clustering. Inf. Sci. 2013, 222, 175–184. [Google Scholar] [CrossRef]
  326. Erol, O.K.; Eksin, I. A new optimization method: Big Bang-Big Crunch. Adv. Eng. Softw. 2006, 37, 106–111. [Google Scholar] [CrossRef]
  327. Hosseini, H.S. Principal components analysis by the galaxy-based search algorithm: A novel metaheuristic for continuous optimisation. Int. J. Comput. Sci. Eng. 2011, 6, 132. [Google Scholar] [CrossRef]
  328. Muthiah-Nakarajan, V.; Noel, M.M. Galactic Swarm Optimization: A new global optimization metaheuristic inspired by galactic motion. Appl. Soft Comput. J. 2016, 38, 771–787. [Google Scholar] [CrossRef]
  329. Hsiao, Y.T.; Chuang, C.L.; Jiang, J.A.; Chien, C.C. A novel optimization algorithm: Space gravitational optimization. In Proceedings of the 2005 IEEE International Conference on Systems, Man and Cybernetics, Waikoloa, HI, USA, 12 October 2005; pp. 2323–2328. [Google Scholar] [CrossRef]
  330. Flores, J.J.; Lopez, R.; Barrera, J. Gravitational interactions optimization. In Learning and Intelligent Optimization: Proceedings of the 5th International Conference, LION 5, Rome, Italy, 17–21 January 2011; Springer: Berlin/Heidelberg, Germany, 2011; pp. 226–237. [Google Scholar] [CrossRef]
  331. Beiranvand, H.; Rokrok, E. General Relativity Search Algorithm: A Global Optimization Approach. Int. J. Comput. Intell. Appl. 2015, 14, 1550017. [Google Scholar] [CrossRef]
  332. Bendato, I.; Cassettari, L.; Giribone, P.G.; Fioribello, S. Attraction Force Optimization (AFO): A deterministic nature-inspired heuristic for solving optimization problems in stochastic simulation. Appl. Math. Sci. 2016, 10, 989–1011. [Google Scholar] [CrossRef]
  333. Kilinç, N.; Mahouti, P.; Güneş, F. Space gravity optimization applied to the feasible design target space required for a wide-band front-end amplifier. Prog. Electromagn. Res. Symp. 2013, 2013, 1495–1499. [Google Scholar]
  334. Hudaib, A.A.; Fakhouri, H.N. Supernova Optimizer: A Novel Natural Inspired Meta-Heuristic. Mod. Appl. Sci. 2017, 12, 32. [Google Scholar] [CrossRef] [Green Version]
  335. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-verse optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2016, 27, 495–513. [Google Scholar] [CrossRef]
  336. Zhao, W.; Wang, L.; Zhang, Z. Atom search optimization and its application to solve a hydrogeologic parameter estimation problem. Knowledge-Based Syst. 2019, 163, 283–304. [Google Scholar] [CrossRef]
  337. Rahmanzadeh, S.; Pishvaee, M.S. Electron radar search algorithm: A novel developed meta-heuristic algorithm. Soft Comput. 2020, 24, 8443–8465. [Google Scholar] [CrossRef]
  338. Wei, Z.; Huang, C.; Wang, X.; Han, T.; Li, Y. Nuclear Reaction Optimization: A Novel and Powerful Physics-Based Algorithm for Global Optimization. IEEE Access 2019, 7, 1–9. [Google Scholar] [CrossRef]
  339. Yalcin, Y.; Pekcan, O. Nuclear Fission–Nuclear Fusion algorithm for global optimization: A modified Big Bang–Big Crunch algorithm. Neural Comput. Appl. 2020, 32, 2751–2783. [Google Scholar] [CrossRef]
  340. Birbil, Ş.I.; Fang, S.C. An electromagnetism-like mechanism for global optimization. J. Glob. Optim. 2003, 25, 263–282. [Google Scholar] [CrossRef]
  341. Abedinpourshotorban, H.; Mariyam Shamsuddin, S.; Beheshti, Z.; Jawawi, D.N.A. Electromagnetic field optimization: A physics-inspired metaheuristic optimization algorithm. Swarm Evol. Comput. 2016, 26, 8–22. [Google Scholar] [CrossRef]
  342. Yadav, A. AEFA: Artificial electric field algorithm for global optimization. Swarm Evol. Comput. 2019, 48, 93–108. [Google Scholar] [CrossRef]
  343. Bouchekara, H.R.E.H. Electrostatic discharge algorithm: A novel nature-inspired optimisation algorithm and its application to worst-case tolerance analysis of an EMC filter. IET Sci. Meas. Technol. 2019, 13, 518–522. [Google Scholar] [CrossRef]
  344. Fadafen, M.K.; Mehrshad, N.; Zahiri, S.H.; Razavi, S.M. A New Algorithm for Optimization Based on Ohm’s Law. CIVILICA 2017, 1, 16–22. [Google Scholar]
  345. Kaveh, A.; Talatahari, S. A novel heuristic optimization method: Charged system search. Acta Mech. 2010, 213, 267–289. [Google Scholar] [CrossRef]
  346. Ghasemi, M.; Ghavidel, S.; Aghaei, J.; Akbari, E.; Li, L. CFA optimizer: A new and powerful algorithm inspired by Franklin’s and Coulomb’s laws theory for solving the economic load dispatch problems. Int. Trans. Electr. Energy Syst. 2018, 28, e2536. [Google Scholar] [CrossRef] [Green Version]
  347. Zaránd, G.; Pázmándi, F.; Pál, K.F.; Zimányi, G.T. Using hysteresis for optimization. Phys. Rev. Lett. 2002, 89, 150201. [Google Scholar] [CrossRef] [Green Version]
  348. Sadollah, A.; Bahreininejad, A.; Eskandar, H.; Hamdi, M. Mine blast algorithm: A new population based algorithm for solving constrained engineering optimization problems. Appl. Soft Comput. J. 2013, 13, 2592–2612. [Google Scholar] [CrossRef]
  349. Kaveh, A.; Dadras, A. A novel meta-heuristic optimization algorithm: Thermal exchange optimization. Adv. Eng. Softw. 2017, 110, 69–84. [Google Scholar] [CrossRef]
  350. Ahrari, A.; Atai, A.A. Grenade Explosion Method—A novel tool for optimization of multimodal functions. Appl. Soft Comput. J. 2010, 10, 1132–1140. [Google Scholar] [CrossRef]
  351. Patel, V.K.; Savsani, V.J. Heat transfer search (HTS): A novel optimization algorithm. Inf. Sci. 2015, 324, 217–246. [Google Scholar] [CrossRef]
  352. Hashim, F.A.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W.; Mirjalili, S. Henry gas solubility optimization: A novel physics-based algorithm. Futur. Gener. Comput. Syst. 2019, 101, 646–667. [Google Scholar] [CrossRef]
  353. Abdechiri, M.; Meybodi, M.R.; Bahrami, H. Gases brownian motion optimization: An algorithm for optimization (GBMO). Appl. Soft Comput. J. 2013, 13, 2932–2946. [Google Scholar] [CrossRef]
  354. Moein, S.; Logeswaran, R. KGMO: A swarm optimization algorithm based on the kinetic energy of gas molecules. Inf. Sci. 2014, 275, 127–144. [Google Scholar] [CrossRef]
  355. Varaee, H.; Ghasemi, M.R. Engineering optimization based on ideal gas molecular movement algorithm. Eng. Comput. 2017, 33, 71–93. [Google Scholar] [CrossRef]
  356. Zheng, Y.J. Water wave optimization: A new nature-inspired metaheuristic. Comput. Oper. Res. 2015, 55, 1–11. [Google Scholar] [CrossRef] [Green Version]
357. Doğan, B.; Ölmez, T. A new metaheuristic for numerical function optimization: Vortex Search algorithm. Inf. Sci. 2015, 293, 125–145. [Google Scholar] [CrossRef]
  358. Shah-Hosseini, H. Intelligent water drops algorithm. Int. J. Intell. Comput. Cybern. 2008, 1, 193–212. [Google Scholar] [CrossRef]
  359. Faramarzi, A.; Heidarinejad, M.; Stephens, B.; Mirjalili, S. Equilibrium optimizer: A novel optimization algorithm. Knowl-Based Syst. 2020, 191, 105190. [Google Scholar] [CrossRef]
  360. Ali, J.; Saeed, M.; Luqman, M.; Tabassum, M.F. Artificial Showering Algorithm: A New Meta-Heuristic for Unconstrained Optimization. Sci. Int. 2015, 27, 4939–4942. [Google Scholar]
  361. Colak, M.E.; Varol, A. A Novel Intelligent Optimization Algorithm Inspired from Circular Water Waves. Elektron. Elektrotechnika 2015, 21, 3–6. [Google Scholar] [CrossRef] [Green Version]
  362. Cortés-Toro, E.M.; Crawford, B.; Gómez-Pulido, J.A.; Soto, R.; Lanza-Gutiérrez, J.M. A new metaheuristic inspired by the vapour-liquid equilibrium for continuous optimization. Appl. Sci. 2018, 8, 2080. [Google Scholar] [CrossRef] [Green Version]
  363. Tahani, M.; Babayan, N. Flow Regime Algorithm (FRA): A physics-based meta-heuristics algorithm. Knowl. Inf. Syst. 2019, 60, 1001–1038. [Google Scholar] [CrossRef]
  364. Zou, Y. The whirlpool algorithm based on physical phenomenon for solving optimization problems. Eng. Comput. 2019, 36, 664–690. [Google Scholar] [CrossRef]
  365. Kaveh, A.; Mahdavi, V.R. Colliding bodies optimization: A novel meta-heuristic method. Comput. Struct. 2014, 139, 18–27. [Google Scholar] [CrossRef]
  366. Javidy, B.; Hatamlou, A.; Mirjalili, S. Ions motion algorithm for solving optimization problems. Appl. Soft Comput. J. 2015, 32, 72–79. [Google Scholar] [CrossRef]
  367. Kaveh, A.; Ilchi Ghazaan, M. A new meta-heuristic algorithm: Vibrating particles system. Sci. Iran. 2017, 24, 551–566. [Google Scholar] [CrossRef] [Green Version]
368. Sacco, W.F.; de Oliveira, C.R.E. A new stochastic optimization algorithm based on a particle collision metaheuristic. In Proceedings of the 6th World Congress of Structural and Multidisciplinary Optimization, Rio de Janeiro, Brazil, 30 May–3 June 2005. [Google Scholar]
  369. Mejía-de-Dios, J.-A.; Mezura-Montes, E. A New Evolutionary Optimization Method Based on Center of Mass. In Decision Science in Action: Theory and Applications of Modern Decision Analytic Optimisation; Springer: Berlin/Heidelberg, Germany, 2019; pp. 65–74. [Google Scholar] [CrossRef]
  370. Xie, L.; Zeng, J.; Cui, Z. General framework of artificial physics optimization algorithm. In Proceedings of the 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC), Coimbatore, India, 9–11 December 2009; pp. 1321–1326. [Google Scholar] [CrossRef]
  371. Cuevas, E.; Echavarría, A.; Ramírez-Ortegón, M.A. An optimization algorithm inspired by the States of Matter that improves the balance between exploration and exploitation. Appl. Intell. 2014, 40, 256–272. [Google Scholar] [CrossRef] [Green Version]
  372. Kaveh, A.; Khayatazad, M. A new meta-heuristic method: Ray Optimization. Comput. Struct. 2012, 112–113, 283–294. [Google Scholar] [CrossRef]
  373. Husseinzadeh Kashan, A. A new metaheuristic for optimization: Optics inspired optimization (OIO). Comput. Oper. Res. 2015, 55, 99–125. [Google Scholar] [CrossRef]
  374. Baykasoğlu, A.; Akpinar, Ş. Weighted Superposition Attraction (WSA): A swarm intelligence algorithm for optimization problems—Part 1: Unconstrained optimization. Appl. Soft Comput. J. 2017, 56, 520–540. [Google Scholar] [CrossRef]
  375. Tzanetos, A.; Dounias, G. A new metaheuristic method for optimization: Sonar inspired optimization. Commun. Comput. Inf. Sci. 2017, 744, 417–428. [Google Scholar] [CrossRef]
  376. Feng, X.; Ma, M.; Yu, H. Crystal energy optimization algorithm. Comput. Intell. 2016, 32, 284–322. [Google Scholar] [CrossRef]
377. Dehghani, M.; Montazeri, Z.; Dehghani, A.; Nouri, N.; Seifi, A. BSSA: Binary spring search algorithm. In Proceedings of the 2017 IEEE 4th International Conference on Knowledge-Based Engineering and Innovation (KBEI), Tehran, Iran, 22 December 2017; pp. 0220–0224. [Google Scholar] [CrossRef]
  378. Tan, Y.; Zhu, Y. Fireworks algorithm for optimization. In International Conference in Swarm Intelligence: Proceedings of the First International Conference, ICSI 2010, Beijing, China, 12–15 June 2010; Springer: Berlin/Heidelberg, Germany, 2010; pp. 355–364. [Google Scholar] [CrossRef]
  379. Lam, A.Y.S.; Li, V.O.K. Chemical-Reaction-Inspired Metaheuristic for Optimization. IEEE Trans. Evol. Comput. 2010, 14, 381–399. [Google Scholar] [CrossRef] [Green Version]
  380. Alatas, B. ACROA: Artificial Chemical Reaction Optimization Algorithm for global optimization. Expert Syst. Appl. 2011, 38, 13170–13180. [Google Scholar] [CrossRef]
  381. Siddique, N.; Adeli, H. Nature-Inspired Chemical Reaction Optimisation Algorithms. Cognit. Comput. 2017, 9, 411–422. [Google Scholar] [CrossRef] [Green Version]
  382. Mirjalili, S. SCA: A Sine Cosine Algorithm for solving optimization problems. Knowledge-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  383. Salimi, H. Stochastic Fractal Search: A powerful metaheuristic algorithm. Knowledge-Based Syst. 2015, 75, 1–18. [Google Scholar] [CrossRef]
  384. Ibrahim, Z.; Aziz, N.H.A.; Aziz, N.A.A.; Razali, S.; Mohamad, M.S. Simulated Kalman Filter: A Novel Estimation-Based Metaheuristic Optimization Algorithm. Adv. Sci. Lett. 2016, 22, 2941–2946. [Google Scholar] [CrossRef]
  385. Salem, S.A. BOA: A novel optimization algorithm. In Proceedings of the 2012 International Conference on Engineering and Technology (ICET), Cairo, Egypt, 10–11 October 2012; pp. 1–5. [Google Scholar] [CrossRef]
386. Tanyildizi, E.; Demir, G. Golden Sine Algorithm: A Novel Math-Inspired Algorithm. Adv. Electr. Comput. Eng. 2017, 17, 71–78. [Google Scholar] [CrossRef]
  387. Zhao, J.; Tang, D.; Liu, Z.; Cai, Y.; Dong, S. Spherical search optimizer: A simple yet efficient meta-heuristic approach. Neural Comput. Appl. 2020, 32, 9777–9808. [Google Scholar] [CrossRef]
  388. Geem, Z.W.; Kim, J.H.; Loganathan, G.V. A new heuristic optimization algorithm: Harmony search. Simulation 2001, 76, 60–68. [Google Scholar] [CrossRef]
389. Ashrafi, S.M.; Dariane, A.B. A novel and effective algorithm for numerical optimization: Melody Search (MS). In Proceedings of the 2011 11th International Conference on Hybrid Intelligent Systems (HIS), Malacca, Malaysia, 5–8 December 2011; pp. 109–114. [Google Scholar] [CrossRef]
  390. Weyland, D. A Rigorous Analysis of the Harmony Search Algorithm. Int. J. Appl. Metaheuristic Comput. 2010, 1, 50–60. [Google Scholar] [CrossRef] [Green Version]
  391. Mora-Gutiérrez, R.A.; Ramírez-Rodríguez, J.; Rincón-García, E.A. An optimization algorithm inspired by musical composition. Artif. Intell. Rev. 2014, 41, 301–315. [Google Scholar] [CrossRef]
  392. Kashan, A.H. League Championship Algorithm: A New Algorithm for Numerical Function Optimization. In Proceedings of the 2009 International Conference of Soft Computing and Pattern Recognition, Malacca, Malaysia, 4–7 December 2009; pp. 43–48. [Google Scholar] [CrossRef]
  393. Osaba, E.; Diaz, F.; Onieva, E. Golden ball: A novel meta-heuristic to solve combinatorial optimization problems based on soccer concepts. Appl. Intell. 2014, 41, 145–166. [Google Scholar] [CrossRef]
  394. Moosavian, N.; Roodsari, B.K. Soccer League Competition Algorithm, a New Method for Solving Systems of Nonlinear Equations. Int. J. Intell. Sci. 2014, 4, 7–16. [Google Scholar] [CrossRef]
  395. Fadakar, E.; Ebrahimi, M. A new metaheuristic football game inspired algorithm. In Proceedings of the 2016 1st Conference on Swarm Intelligence and Evolutionary Computation (CSIEC), Bam, Iran, 9–11 March 2016; pp. 6–11. [Google Scholar] [CrossRef]
396. Kaveh, A.; Zolghadr, A. A Novel Meta-Heuristic Algorithm: Tug of War Optimization. Int. J. Optim. Civ. Eng. 2016, 6, 469–492. [Google Scholar]
  397. Blum, C.; Puchinger, J.; Raidl, G.R.; Roli, A. Hybrid metaheuristics in combinatorial optimization: A survey. Appl. Soft Comput. 2011, 11, 4135–4151. [Google Scholar] [CrossRef] [Green Version]
  398. Nabil, E. A Modified Flower Pollination Algorithm for Global Optimization. Expert Syst. Appl. 2016, 57, 192–203. [Google Scholar] [CrossRef]
  399. Tseng, L.-Y.; Liang, S.-C. A Hybrid Metaheuristic for the Quadratic Assignment Problem. Comput. Optim. Appl. 2006, 34, 85–113. [Google Scholar] [CrossRef]
  400. D’Andreagiovanni, F.; Krolikowski, J.; Pulaj, J. A fast hybrid primal heuristic for multiband robust capacitated network design with multiple time periods. Appl. Soft Comput. 2015, 26, 497–507. [Google Scholar] [CrossRef] [Green Version]
  401. Fontes, D.B.M.M.; Homayouni, S.M.; Gonçalves, J.F. A hybrid particle swarm optimization and simulated annealing algorithm for the job shop scheduling problem with transport resources. Eur. J. Oper. Res. 2023, 306, 1140–1157. [Google Scholar] [CrossRef]
  402. Binu, D.; Selvi, M.; George, A. MKF-Cuckoo: Hybridization of Cuckoo Search and Multiple Kernel-based Fuzzy C-means Algorithm. AASRI Procedia 2013, 4, 243–249. [Google Scholar] [CrossRef]
  403. Yue, Z.; Zhang, S.; Xiao, W. A Novel Hybrid Algorithm Based on Grey Wolf Optimizer and Fireworks Algorithm. Sensors 2020, 20, 2147. [Google Scholar] [CrossRef] [Green Version]
  404. Jia, H.; Xing, Z.; Song, W. A New Hybrid Seagull Optimization Algorithm for Feature Selection. IEEE Access 2019, 7, 49614–49631. [Google Scholar] [CrossRef]
  405. Zhang, Z.; Ding, S.; Jia, W. A hybrid optimization algorithm based on cuckoo search and differential evolution for solving constrained engineering problems. Eng. Appl. Artif. Intell. 2019, 85, 254–268. [Google Scholar] [CrossRef]
  406. Dey, B.; Raj, S.; Mahapatra, S.; Márquez, F.P.G. Optimal scheduling of distributed energy resources in microgrid systems based on electricity market pricing strategies by a novel hybrid optimization technique. Int. J. Electr. Power Energy Syst. 2022, 134, 107419. [Google Scholar] [CrossRef]
  407. Kottath, R.; Singh, P.; Bhowmick, A. Swarm-based hybrid optimization algorithms: An exhaustive analysis and its applications to electricity load and price forecasting. Soft Comput. 2023, 1–32. [Google Scholar] [CrossRef] [PubMed]
  408. Yeniay, Ö. Penalty Function Methods for Constrained Optimization with Genetic Algorithms. Math. Comput. Appl. 2005, 10, 45–56. [Google Scholar] [CrossRef] [Green Version]
  409. Mezura-Montes, E.; Coello Coello, C.A. Constraint-handling in nature-inspired numerical optimization: Past, present and future. Swarm Evol. Comput. 2011, 1, 173–194. [Google Scholar] [CrossRef]
  410. Chehouri, A.; Younes, R.; Perron, J.; Ilinca, A. A constraint-handling technique for genetic algorithms using a violation factor. J. Comput. Sci. 2016, 12, 350–362. [Google Scholar] [CrossRef] [Green Version]
  411. Gen, M.; Cheng, R. A survey of penalty techniques in genetic algorithms. In Proceedings of the IEEE International Conference on Evolutionary Computation, Nagoya, Japan, 20–22 May 1996; pp. 804–809. [Google Scholar] [CrossRef]
  412. Jordehi, A.R. A review on constraint handling strategies in particle swarm optimisation. Neural Comput. Appl. 2015, 26, 1265–1275. [Google Scholar] [CrossRef]
  413. Vrbančič, G.; Brezočnik, L.; Mlakar, U.; Fister, D.; Fister, I., Jr. NiaPy: Python microframework for building nature-inspired algorithms. J. Open Source Softw. 2018, 3, 613. [Google Scholar] [CrossRef]
  414. Darvishpoor, S.; Darvishpour, A. NIA, PYPI. 2021. Available online: https://pypi.org/project/nia/ (accessed on 9 May 2022).
  415. Darvishpoor, S. Nature Inspired Algorithms Review, GitHub. 2022. Available online: https://github.com/shahind/Nature-Inspired-Algorithms-Review (accessed on 4 March 2022).
  416. Jamil, M.; Yang, X.S. A literature survey of benchmark functions for global optimisation problems. Int. J. Math. Model. Numer. Optim. 2013, 4, 150–194. [Google Scholar] [CrossRef] [Green Version]
  417. Bingham, D. Optimization Test Problems, Simon Fraser Univ. 2013. Available online: https://www.sfu.ca/~ssurjano/optimization.html (accessed on 4 March 2022).
  418. Al-Roomi, A.R. Unconstrained Multi-Objective Benchmark Functions Repository. 2016. Available online: https://www.al-roomi.org/benchmarks/multi-objective/unconstrained-list (accessed on 20 June 2023).
  419. Darvishpoor, S.; Darvishpour, A. Modified NiaPy, GitHub. 2022. Available online: https://github.com/salar-shdk/NiaPy (accessed on 21 June 2022).
  420. Digalakis, J.G.; Margaritis, K.G. On benchmarking functions for genetic algorithms. Int. J. Comput. Math. 2001, 77, 481–506. [Google Scholar] [CrossRef]
  421. Chen, H.; Zhu, Y.; Hu, K. Adaptive Bacterial Foraging Optimization. Abstr. Appl. Anal. 2011, 2011, 108269. [Google Scholar] [CrossRef] [Green Version]
  422. Liu, X.; Lu, P.; Pan, B. Survey of convex optimization for aerospace applications. Astrodynamics 2017, 1, 23–40. [Google Scholar] [CrossRef]
  423. Padula, S.L.; Gumbert, C.R.; Li, W. Aerospace applications of optimization under uncertainty. Optim. Eng. 2006, 7, 317–328. [Google Scholar] [CrossRef] [Green Version]
  424. Mieloszyk, J. Practical problems of numerical optimization in aerospace sciences. Aircr. Eng. Aerosp. Technol. 2017, 89, 570–578. [Google Scholar] [CrossRef] [Green Version]
  425. Lian, Y.; Oyama, A.; Liou, M.-S. Progress in design optimization using evolutionary algorithms for aerodynamic problems. Prog. Aerosp. Sci. 2010, 46, 199–223. [Google Scholar] [CrossRef]
  426. Gage, P.J. New Approaches to Optimisation in Aerospace Conceptual Design; Stanford University: Stanford, CA, USA, 1994. [Google Scholar]
  427. Crossley, W.A.; Laananen, D.H. Conceptual design of helicopters via genetic algorithm. J. Aircr. 1996, 33, 1062–1070. [Google Scholar] [CrossRef]
  428. Champasak, P.; Panagant, N.; Bureerat, S.; Pholdee, N. Investigation on the performance of meta-heuristics for solving single objective conceptual design of a conventional fixed wing unmanned aerial vehicle. J. Res. Appl. Mech. Eng. 2022, 10, 1. [Google Scholar]
  429. Jafarsalehi, A.; Fazeley, H.R.; Mirshams, M. Conceptual Remote Sensing Satellite Design Optimization under uncertainty. Aerosp. Sci. Technol. 2016, 55, 377–391. [Google Scholar] [CrossRef]
  430. Jilla, C.; Miller, D. A Multiobjective, Multidisciplinary Design Optimization Methodology for the Conceptual Design of Distributed Satellite Systems. In Proceedings of the 9th AIAA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, Panama City Beach, FL, USA, 4–6 September 2002; American Institute of Aeronautics and Astronautics: Reston, VA, USA, 2002. [Google Scholar] [CrossRef] [Green Version]
  431. Abedini, A.; Bataleblu, A.A.; Roshanian, J. Co-design Optimization of a Novel Multi-identity Drone Helicopter (MICOPTER). J. Intell. Robot. Syst. 2022, 106, 56. [Google Scholar] [CrossRef] [PubMed]
  432. Hassanalian, M.; Salazar, R.; Abdelkefi, A. Conceptual design and optimization of a tilt-rotor micro air vehicle. Chin. J. Aeronaut. 2019, 32, 369–381. [Google Scholar] [CrossRef]
  433. Blasi, L.; Del Core, G. Particle Swarm Approach in Finding Optimum Aircraft Configuration. J. Aircr. 2007, 44, 679–683. [Google Scholar] [CrossRef]
  434. Corrado, G.; Ntourmas, G.; Sferza, M.; Traiforos, N.; Arteiro, A.; Brown, L.; Chronopoulos, D.; Daoud, F.; Glock, F.; Ninic, J.; et al. Recent progress, challenges and outlook for multidisciplinary structural optimization of aircraft and aerial vehicles. Prog. Aerosp. Sci. 2022, 135, 100861. [Google Scholar] [CrossRef]
  435. Sobieszczanski-Sobieski, J.; Haftka, R.T. Multidisciplinary aerospace design optimization: Survey of recent developments. Struct. Optim. 1997, 14, 1–23. [Google Scholar] [CrossRef] [Green Version]
  436. Keane, A.; Scanlan, J. Design search and optimization in aerospace engineering. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 2007, 365, 2501–2529. [Google Scholar] [CrossRef]
  437. Neufeld, D.; Chung, J.; Behdinan, K. Development of a flexible MDO architecture for aircraft conceptual design. In Proceedings of the 2008 EngOpt conference (International Conference on Engineering Optimization), Rio de Janeiro, Brazil, 1–5 June 2008; pp. 1–8. [Google Scholar]
  438. Ganguli, R.; Rajagopal, S. Multidisciplinary Design Optimization of an UAV Wing Using Kriging Based Multi-Objective Genetic Algorithm. In Proceedings of the 50th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference; American Institute of Aeronautics and Astronautics: Reston, VA, USA, 2009. [Google Scholar] [CrossRef] [Green Version]
  439. Ampellio, E.; Bertini, F.; Ferrero, A.; Larocca, F.; Vassio, L. Turbomachinery design by a swarm-based optimization method coupled with a CFD solver. Adv. Aircr. Spacecr. Sci. 2016, 3, 149–170. [Google Scholar] [CrossRef] [Green Version]
  440. Jafari, S.; Nikolaidis, T. Meta-heuristic global optimization algorithms for aircraft engines modelling and controller design; A review, research challenges, and exploring the future. Prog. Aerosp. Sci. 2019, 104, 40–53. [Google Scholar] [CrossRef]
  441. Gur, O.; Rosen, A. Optimizing Electric Propulsion Systems for Unmanned Aerial Vehicles. J. Aircr. 2009, 46, 1340–1353. [Google Scholar] [CrossRef] [Green Version]
  442. Pelz, P.F.; Leise, P.; Meck, M. Sustainable aircraft design—A review on optimization methods for electric propulsion with derived optimal number of propulsors. Prog. Aerosp. Sci. 2021, 123, 100714. [Google Scholar] [CrossRef]
  443. Wang, X.; Damodaran, M. Comparison of Deterministic and Stochastic Optimization Algorithms for Generic Wing Design Problems. J. Aircr. 2000, 37, 929–932. [Google Scholar] [CrossRef]
  444. Boulkabeit, I.; Mthembu, L.; Marwala, T.; de Neto, F.B.L. Finite Element Model Updating Using Fish School Search Optimization Method. In Proceedings of the 2013 BRICS Congress on Computational Intelligence and 11th Brazilian Congress on Computational Intelligence, Ipojuca, Brazil, 8–11 September 2013; pp. 447–452. [Google Scholar] [CrossRef] [Green Version]
  445. Toropov, V.V.; Jones, R.; Willment, T.; Funnell, M. Weight and Manufacturability Optimization of Composite Aircraft Components Based on a Genetic Algorithm. In Proceedings of the 6th World Congresses of Structural and Multidisciplinary Optimization, Rio de Janeiro, Brazil, 30 May–3 June 2005. [Google Scholar]
  446. Viana, F.A.C.; Steffen, V.; Butkewitsch, S.; de Freitas Leal, M. Optimization of aircraft structural components by using nature-inspired algorithms and multi-fidelity approximations. J. Glob. Optim. 2009, 45, 427–449. [Google Scholar] [CrossRef]
  447. Sandeep, R.; Jeevanantham, A.K.; Manikandan, M.; Arivazhagan, N.; Tofil, S. Multi-Performance Optimization in Friction Stir Welding of AA6082/B4C Using Genetic Algorithm and Desirability Function Approach for Aircraft Wing Structures. J. Mater. Eng. Perform. 2021, 30, 5845–5857. [Google Scholar] [CrossRef]
  448. Weis, L.; Koke, H.; Huhne, C. Structural optimisation of a composite aircraft frame applying a particle swarm algorithm. In Proceedings of the 2015 IEEE Congress on Evolutionary Computation (CEC), Sendai, Japan, 25–28 May 2015; pp. 582–588. [Google Scholar] [CrossRef]
  449. Keshtegar, B.; Hao, P.; Wang, Y.; Hu, Q. An adaptive response surface method and Gaussian global-best harmony search algorithm for optimization of aircraft stiffened panels. Appl. Soft Comput. 2018, 66, 196–207. [Google Scholar] [CrossRef]
  450. Varatharajoo, R.; Romli, F.I.; Ahmad, K.A.; Majid, D.L.; Mustapha, F. Aeroelastic Tailoring of Composite Wing Design Using Bee Colony Optimisation. Appl. Mech. Mater. 2014, 629, 182–188. [Google Scholar] [CrossRef]
  451. Georgiou, G.; Vio, G.A.; Cooper, J.E. Aeroelastic tailoring and scaling using Bacterial Foraging Optimisation. Struct. Multidiscip. Optim. 2014, 50, 81–99. [Google Scholar] [CrossRef]
  452. de Wit, A.J.; Lammen, W.F.; Vankan, W.J.; Timmermans, H.; van der Laan, T.; Ciampa, P.D. Aircraft rudder optimization—A multi-level and knowledge-enabled approach. Prog. Aerosp. Sci. 2020, 119, 100650. [Google Scholar] [CrossRef]
  453. Li, J.; Du, X.; Martins, J.R.R.A. Machine learning in aerodynamic shape optimization. Prog. Aerosp. Sci. 2022, 134, 100849. [Google Scholar] [CrossRef]
  454. Giannakoglou, K.C. Design of optimal aerodynamic shapes using stochastic optimization methods and computational intelligence. Prog. Aerosp. Sci. 2002, 38, 43–76. [Google Scholar] [CrossRef]
  455. Yu, Y.; Lyu, Z.; Xu, Z.; Martins, J.R.R.A. On the influence of optimization algorithm and initial design on wing aerodynamic shape optimization. Aerosp. Sci. Technol. 2018, 75, 183–199. [Google Scholar] [CrossRef]
  456. Olhofer, M.; Jin, Y.; Sendhoff, B. Adaptive encoding for aerodynamic shape optimization using evolution strategies. In Proceedings of the 2001 Congress on Evolutionary Computation (IEEE Cat. No.01TH8546), Seoul, Republic of Korea, 27–30 May 2001; Volume 1, pp. 576–583. [Google Scholar] [CrossRef]
  457. Tian, X.; Li, J. A novel improved fruit fly optimization algorithm for aerodynamic shape design optimization. Knowl-Based Syst. 2019, 179, 77–91. [Google Scholar] [CrossRef]
  458. Hoyos, J.; Jímenez, J.H.; Echavarría, C.; Alvarado, J.P. Airfoil Shape Optimization: Comparative Study of Meta-heuristic Algorithms, Airfoil Parameterization Methods and Reynolds Number Impact. IOP Conf. Ser. Mater. Sci. Eng. 2021, 1154, 012016. [Google Scholar] [CrossRef]
  459. Naumann, D.S.; Evans, B.; Walton, S.; Hassan, O. A novel implementation of computational aerodynamic shape optimisation using Modified Cuckoo Search. Appl. Math. Model. 2016, 40, 4543–4559. [Google Scholar] [CrossRef]
  460. Derakhshan, S.; Tavaziani, A.; Kasaeian, N. Numerical Shape Optimization of a Wind Turbine Blades Using Artificial Bee Colony Algorithm. J. Energy Resour. Technol. 2015, 137, 051210. [Google Scholar] [CrossRef]
  461. Hoseynipoor, M.; Malek Jafarian, M.; Safavinejad, A. Two-objective optimization of aerodynamic shapes using gravitational search algorithm. Modares Mech. Eng. 2017, 17, 211–220. [Google Scholar]
  462. Jalili, F.; MalekJafarian, S.M.; Safavinejad, A.; Masoumi, H. A New Modified Harmony Search Optimization Algorithm for Evaluating Airfoil Shape Parameterization Methods and Aerodynamic Optimization. Iran. J. Mech. Eng. Trans. ISME 2022, 23, 80–104. [Google Scholar]
  463. Jalili, F.; Malek-Jafarian, M.; Safavinejad, A. Introduction of Harmony Search Algorithm for Aerodynamic Shape Optimization Using. J. Appl. Comput. Sci. Mech. 2013, 24, 81–96. [Google Scholar]
  464. Darvishpoor, S.; Roshanian, J.; Raissi, A.; Hassanalian, M. Configurations, flight mechanisms, and applications of unmanned aerial systems: A review. Prog. Aerosp. Sci. 2020, 121, 100694. [Google Scholar] [CrossRef]
  465. Keane, A.J. Wing Optimization Using Design of Experiment, Response Surface, and Data Fusion Methods. J. Aircr. 2003, 40, 741–750. [Google Scholar] [CrossRef]
  466. Vicini, A.; Quagliarella, D. Airfoil and Wing Design Through Hybrid Optimization Strategies. AIAA J. 1999, 37, 634–641. [Google Scholar] [CrossRef]
  467. Venter, G.; Sobieszczanski-Sobieski, J. Multidisciplinary optimization of a transport aircraft wing using particle swarm optimization. Struct. Multidiscip. Optim. 2004, 26, 121–131. [Google Scholar] [CrossRef] [Green Version]
  468. Wang, W.; Guo, S.; Yang, W. Simultaneous partial topology and size optimization of a wing structure using ant colony and gradient based methods. Eng. Optim. 2011, 43, 433–446. [Google Scholar] [CrossRef] [Green Version]
  469. Martinez, A.D.; Osaba, E.; Oregi, I.; Fister, I.; Fister, I.; Del Ser, J. Hybridizing differential evolution and novelty search for multimodal optimization problems. In Proceedings of the Genetic and Evolutionary Computation Conference Companion, Prague, Czech Republic, 13–17 July 2019; ACM: New York, NY, USA, 2019; pp. 1980–1989. [Google Scholar]
  470. Li, Y.; Ge, W.; Zhou, J.; Zhang, Y.; Zhao, D.; Wang, Z.; Dong, D. Design and experiment of concentrated flexibility-based variable camber morphing wing. Chin. J. Aeronaut. 2019, 35, 455–469. [Google Scholar] [CrossRef]
  471. Koreanschi, A.; Sugar Gabor, O.; Acotto, J.; Brianchon, G.; Portier, G.; Botez, R.M.; Mamou, M.; Mebarki, Y. Optimization and design of an aircraft’s morphing wing-tip demonstrator for drag reduction at low speed, Part I—Aerodynamic optimization using genetic, bee colony and gradient descent algorithms. Chin. J. Aeronaut. 2017, 30, 149–163. [Google Scholar] [CrossRef]
  472. Darvishpoor, S.; Roshanian, J.; Tayefi, M. A novel concept of VTOL bi-rotor UAV based on moving mass control. Aerosp. Sci. Technol. 2020, 107, 106238. [Google Scholar] [CrossRef]
  473. Sudmeijer, K.; Mooij, E. Shape Optimization for a Small Experimental Re-entry Module. In Proceedings of the AIAA/AAAF 11th International Space Planes and Hypersonic Systems and Technologies Conference, Orleans, France, 29 September–4 October 2002; American Institute of Aeronautics and Astronautics: Reston, VA, USA, 2002. [Google Scholar]
  474. Suzdaltsev, I.V.; Chermoshencev, S.F.; Bogula, N.Y. Genetic algorithm for onboard equipment placement inside the unmanned aerial vehicle fuselage. In Proceedings of the 2016 XIX IEEE International Conference on Soft Computing and Measurements (SCM), St. Petersburg, Russia, 25–27 May 2016; pp. 262–264. [Google Scholar] [CrossRef]
  475. Li, L.; Chen, M.; Cao, F.; Ma, Y. Coaxial helicopter optimum dynamics design based on multi-objective bat algorithm and experimental validation. In Proceedings of the 2017 8th International Conference on Mechanical and Aerospace Engineering (ICMAE), Prague, Czech Republic, 22–25 July 2017; pp. 411–415. [Google Scholar] [CrossRef]
  476. Viviani, A.; Iuspa, L.; Aprovitola, A. An optimization-based procedure for self-generation of Re-entry Vehicles shape. Aerosp. Sci. Technol. 2017, 68, 123–134. [Google Scholar] [CrossRef]
  477. Arora, R.; Kumar, P. Aerodynamic Shape Optimization of a Re-entry Capsule. In AIAA Atmospheric Flight Mechanics Conference and Exhibit; American Institute of Aeronautics and Astronautics: Reston, VA, USA, 2003. [Google Scholar] [CrossRef]
  478. Wang, Z.; Yu, J.; Zhang, A.; Wang, Y.; Zhao, W. Parametric geometric model and hydrodynamic shape optimization of a flying-wing structure underwater glider. China Ocean Eng. 2017, 31, 709–715. [Google Scholar] [CrossRef]
  479. Rodríguez-Cortés, H.; Arias-Montaño, A. Robust geometric sizing of a small flying wing planform based on evolutionary algorithms. Aeronaut. J. 2012, 116, 175–188. [Google Scholar] [CrossRef]
  480. Chen, X.; Yao, W.; Zhao, Y.; Chen, X.; Zhang, J.; Luo, Y. The Hybrid Algorithms Based on Differential Evolution for Satellite Layout Optimization Design. In Proceedings of the 2018 IEEE Congress on Evolutionary Computation (CEC), Rio de Janeiro, Brazil, 8–13 July 2018; pp. 1–8. [Google Scholar] [CrossRef]
  481. Israr, A.; Ali, Z.A.; Alkhammash, E.H.; Jussila, J.J. Optimization Methods Applied to Motion Planning of Unmanned Aerial Vehicles: A Review. Drones 2022, 6, 126. [Google Scholar] [CrossRef]
  482. Konatowski, S.; Pawłowski, P. Application of the ACO algorithm for UAV path planning. Prz. Elektrotechniczny 2019, 95, 115–119. [Google Scholar] [CrossRef]
  483. Yu, X.; Li, C.; Zhou, J. A constrained differential evolution algorithm to solve UAV path planning in disaster scenarios. Knowl-Based Syst. 2020, 204, 106209. [Google Scholar] [CrossRef]
  484. Li, Z.; Xia, X.; Yan, Y. A Novel Semidefinite Programming-based UAV 3D Localization Algorithm with Gray Wolf Optimization. Drones 2023, 7, 113. [Google Scholar] [CrossRef]
  485. Shen, Y.; Zhu, Y.; Kang, H.; Sun, X.; Chen, Q.; Wang, D. UAV Path Planning Based on Multi-Stage Constraint Optimization. Drones 2021, 5, 144. [Google Scholar] [CrossRef]
  486. Lin, N.; Tang, J.; Li, X.; Zhao, L. A novel improved bat algorithm in UAV path planning. Comput. Mater. Contin. 2019, 61, 323–344. [Google Scholar] [CrossRef]
  487. Wang, Y.; Li, K.; Han, Y.; Ge, F.; Xu, W.; Liu, L. Tracking a dynamic invading target by UAV in oilfield inspection via an improved bat algorithm. Appl. Soft Comput. 2020, 90, 106150. [Google Scholar] [CrossRef]
  488. Kumar, P.; Narayan, S. Multi-objective bat algorithm tuned optimal FOPID controller for robust aircraft pitch control. Int. J. Syst. Control Commun. 2017, 8, 348. [Google Scholar] [CrossRef]
  489. Xie, J.; Zhou, Y.; Zheng, H. A Hybrid Metaheuristic for Multiple Runways Aircraft Landing Problem Based on Bat Algorithm. J. Appl. Math. 2013, 2013, 742653. [Google Scholar] [CrossRef] [Green Version]
  490. Li, X.; Zhou, D.; Yang, Z.; Huang, J.; Zhang, K.; Pan, Q. UAV route evaluation algorithm based on CSA-AHP and TOPSIS. In Proceedings of the 2017 IEEE International Conference on Information and Automation (ICIA), Macau, China, 18–20 July 2017; pp. 914–915. [Google Scholar] [CrossRef]
  491. El Gmili, N.; Mjahed, M.; El Kari, A.; Ayad, H. Particle Swarm Optimization and Cuckoo Search-Based Approaches for Quadrotor Control and Trajectory Tracking. Appl. Sci. 2019, 9, 1719. [Google Scholar] [CrossRef] [Green Version]
  492. Hu, H.; Wu, Y.; Xu, J.; Sun, Q. Cuckoo search-based method for trajectory planning of quadrotor in an urban environment. Proc. Inst. Mech. Eng. Part G J. Aerosp. Eng. 2019, 233, 4571–4582. [Google Scholar] [CrossRef]
  493. Zhang, X.; Chen, J.; Xin, B.; Fang, H. Online Path Planning for UAV Using an Improved Differential Evolution Algorithm. IFAC Proc. Vol. 2011, 44, 6349–6354. [Google Scholar] [CrossRef]
  494. Nikolos, I.K.; Brintaki, A.N. Coordinated UAV Path Planning Using Differential Evolution. In Proceedings of the 2005 IEEE International Symposium on Intelligent Control and the Mediterranean Conference on Control and Automation, Limassol, Cyprus, 27–29 June 2005; pp. 549–556. [Google Scholar] [CrossRef]
  495. Alihodzic, A. Fireworks Algorithm with New Feasibility-Rules in Solving UAV Path Planning. In Proceedings of the 2016 3rd International Conference on Soft Computing & Machine Intelligence (ISCMI), Dubai, United Arab Emirates, 23–25 November 2016; pp. 53–57. [Google Scholar] [CrossRef]
  496. Zhang, X.; Zhang, X. UAV Path Planning Based on Hybrid Differential Evolution with Fireworks Algorithm. In International Conference on Swarm Intelligence: ICSI 2022: Advances in Swarm Intelligence; Springer: Berlin/Heidelberg, Germany, 2022; pp. 354–364. [Google Scholar] [CrossRef]
  497. Roberge, V.; Tarbouchi, M. Parallel Hybrid 2-Opt Flower Pollination Algorithm for Real-Time UAV Trajectory Planning on GPU. ITM Web Conf. 2022, 48, 03007. [Google Scholar] [CrossRef]
  498. Li, P.; Duan, H. Path planning of unmanned aerial vehicle based on improved gravitational search algorithm. Sci. China Technol. Sci. 2012, 55, 2712–2719. [Google Scholar] [CrossRef]
  499. Qu, C.; Gai, W.; Zhang, J.; Zhong, M. A novel hybrid grey wolf optimizer algorithm for unmanned aerial vehicle (UAV) path planning. Knowl-Based Syst. 2020, 194, 105530. [Google Scholar] [CrossRef]
  500. Luo, Y.; Lu, J.; Zhang, Y.; Zheng, K.; Qin, Q.; He, L.; Liu, Y. Near-Ground Delivery Drones Path Planning Design Based on BOA-TSAR Algorithm. Drones 2022, 6, 393. [Google Scholar] [CrossRef]
  501. Khoufi, I.; Laouiti, A.; Adjih, C. A Survey of Recent Extended Variants of the Traveling Salesman and Vehicle Routing Problems for Unmanned Aerial Vehicles. Drones 2019, 3, 66. [Google Scholar] [CrossRef] [Green Version]
  502. Chung, S.H.; Sah, B.; Lee, J. Optimization for drone and drone-truck combined operations: A review of the state of the art and future directions. Comput. Oper. Res. 2020, 123, 105004. [Google Scholar] [CrossRef]
  503. Otto, A.; Agatz, N.; Campbell, J.; Golden, B.; Pesch, E. Optimization approaches for civil applications of unmanned aerial vehicles (UAVs) or aerial drones: A survey. Networks 2018, 72, 411–458. [Google Scholar] [CrossRef]
  504. Weng, Y.-Y.; Wu, R.-Y.; Zheng, Y.-J. Cooperative Truck–Drone Delivery Path Optimization under Urban Traffic Restriction. Drones 2023, 7, 59. [Google Scholar] [CrossRef]
  505. Ilango, H.S.; Ramanathan, R. A Performance Study of Bio-Inspired Algorithms in Autonomous Landing of Unmanned Aerial Vehicle. Procedia Comput. Sci. 2020, 171, 1449–1458. [Google Scholar] [CrossRef]
  506. Liang, S.; Song, B.; Xue, D. Landing route planning method for micro drones based on hybrid optimization algorithm. Biomim. Intell. Robot. 2021, 1, 100003. [Google Scholar] [CrossRef]
  507. Mahmud, A.A.A.; Satakshi; Jeberson, W. Aircraft Landing Scheduling Using Embedded Flower Pollination Algorithm. Int. J. Parallel Program. 2020, 48, 771–785. [Google Scholar] [CrossRef]
  508. Zhou, G.; Wang, R.; Zhou, Y. Flower pollination algorithm with runway balance strategy for the aircraft landing scheduling problem. Cluster Comput. 2018, 21, 1543–1560. [Google Scholar] [CrossRef]
  509. Teimoori, M.; Taghizadeh, H.; Pourmahmoud, J.; Honarmand Azimi, M. A multi-objective grey wolf optimization algorithm for aircraft landing problem. J. Appl. Res. Ind. Eng. 2021, 8, 386–398. [Google Scholar] [CrossRef]
  510. Abdullah, O.S.; Abdullah, S.; Sarim, H.M. Harmony search algorithm for the multiple runways aircraft landing scheduling problem. J. Telecommun. Electron. Comput. Eng. 2017, 9, 59–65. [Google Scholar]
  511. Abdul-Razaq, T.S.; Ali, F.H. Hybrid Bees Algorithm to Solve Aircraft Landing Problem. J. Zankoy Sulaimani—Part A 2014, 17, 71–90. [Google Scholar] [CrossRef]
  512. Jia, X.; Cao, X.; Guo, Y.; Qiao, H.; Zhang, J. Scheduling Aircraft Landing Based on Clonal Selection Algorithm and Receding Horizon Control. In Proceedings of the 2008 11th International IEEE Conference on Intelligent Transportation Systems, Beijing, China, 12–15 October 2008; pp. 357–362. [Google Scholar] [CrossRef]
  513. Chai, R.; Savvaris, A.; Tsourdos, A.; Chai, S.; Xia, Y. A review of optimization techniques in spacecraft flight trajectory design. Prog. Aerosp. Sci. 2019, 109, 100543. [Google Scholar] [CrossRef]
  514. Li, S.; Huang, X.; Yang, B. Review of optimization methodologies in global and China trajectory optimization competitions. Prog. Aerosp. Sci. 2018, 102, 60–75. [Google Scholar] [CrossRef]
  515. Shirazi, A.; Ceberio, J.; Lozano, J.A. Spacecraft trajectory optimization: A review of models, objectives, approaches and solutions. Prog. Aerosp. Sci. 2018, 102, 76–98. [Google Scholar] [CrossRef] [Green Version]
  516. Su, Z.; Wang, H. A novel robust hybrid gravitational search algorithm for reusable launch vehicle approach and landing trajectory optimization. Neurocomputing 2015, 162, 116–127. [Google Scholar] [CrossRef]
  517. Panteleev, A.V.; Kryuchkov, A.Y. Application of Modified Fireworks Algorithm for Multiobjective Optimization of Satellite Control Law. In Advances in Theory and Practice of Computational Mechanics; Springer: Berlin/Heidelberg, Germany, 2020; pp. 333–349. [Google Scholar] [CrossRef]
  518. Xue, J.-J.; Wang, Y.; Li, H.; Xiao, J. Discrete Fireworks Algorithm for Aircraft Mission Planning. In International Conference on Swarm Intelligence: ICSI 2016: Advances in Swarm Intelligence; Springer: Berlin/Heidelberg, Germany, 2016; pp. 544–551. [Google Scholar] [CrossRef]
  519. Dastgerdi, K.; Mehrshad, N.; Farshad, M. A new intelligent approach for air traffic control using gravitational search algorithm. Sadhana 2016, 41, 183–191. [Google Scholar] [CrossRef]
  520. Cai, Z.; Lou, J.; Zhao, J.; Wu, K.; Liu, N.; Wang, Y.X. Quadrotor trajectory tracking and obstacle avoidance by chaotic grey wolf optimization-based active disturbance rejection control. Mech. Syst. Signal Process. 2019, 128, 636–654. [Google Scholar] [CrossRef]
  521. Xiao, L.; Xu, M.; Chen, Y.; Chen, Y. Hybrid Grey Wolf Optimization Nonlinear Model Predictive Control for Aircraft Engines Based on an Elastic BP Neural Network. Appl. Sci. 2019, 9, 1254. [Google Scholar] [CrossRef] [Green Version]
  522. Katal, N.; Kumar, P.; Narayan, S. Design of PIλDμ controller for robust flight control of a UAV using multi-objective bat algorithm. In Proceedings of the 2015 2nd International Conference on Recent Advances in Engineering & Computational Sciences (RAECS), Chandigarh, India, 21–22 December 2015; pp. 1–5. [Google Scholar] [CrossRef]
  523. Lin, F.; Wang, X.; Qu, X. PID parameters tuning of UAV flight control system based on artificial bee colony algorithm. In 2015 2nd International Conference on Electrical, Computer Engineering and Electronics; Atlantis Press: Amsterdam, The Netherlands, 2015. [Google Scholar] [CrossRef] [Green Version]
  524. Bian, Q.; Nener, B.; Wang, X. A modified bacterial-foraging tuning algorithm for multimodal optimization of the flight control system. Aerosp. Sci. Technol. 2019, 93, 105274. [Google Scholar] [CrossRef]
  525. Oyekan, J.; Hu, H. A novel bacterial foraging algorithm for automated tuning of PID controllers of UAVs. In Proceedings of the The 2010 IEEE International Conference on Information and Automation, Harbin, China, 20–23 June 2010; pp. 693–698. [Google Scholar] [CrossRef]
  526. Bencharef, S.; Boubertakh, H. Optimal tuning of a PD control by bat algorithm to stabilize a quadrotor. In Proceedings of the 8th International Conference on Modelling, Identification and Control (ICMIC), Algiers, Algeria, 15–17 November 2016. [Google Scholar] [CrossRef]
  527. Zaeri, R.; Ghanbarzadeh, A.; Attaran, B.; Zaeri, Z. Fuzzy Logic Controller based pitch control of aircraft tuned with Bees Algorithm. In Proceedings of the The 2nd International Conference on Control, Instrumentation and Automation, Shiraz, Iran, 27–29 December 2011; pp. 705–710. [Google Scholar] [CrossRef]
  528. Huang, Y.; Fei, Q. Clonal selection algorithm based optimization of the ADRC parameters designed to control UAV longitudinal channel. In Proceedings of the 2015 IEEE International Conference on Control System, Computing and Engineering (ICCSCE), Penang, Malaysia, 27–29 November 2015; pp. 448–452. [Google Scholar] [CrossRef]
  529. Zatout, M.S.; Rezoug, A.; Rezoug, A.; Baizid, K.; Iqbal, J. Optimisation of fuzzy logic quadrotor attitude controller—Particle swarm, cuckoo search and BAT algorithms. Int. J. Syst. Sci. 2022, 53, 883–908. [Google Scholar] [CrossRef]
  530. Glida, H.E.; Abdou, L.; Chelihi, A.; Sentouh, C.; Hasseni, S.-E.-I. Optimal model-free backstepping control for a quadrotor helicopter. Nonlinear Dyn. 2020, 100, 3449–3468. [Google Scholar] [CrossRef]
  531. Pedro, J.O.; Dangor, M.; Kala, P.J. Differential evolution-based PID control of a quadrotor system for hovering application. In Proceedings of the 2016 IEEE Congress on Evolutionary Computation (CEC), Vancouver, BC, Canada, 24–29 July 2016; pp. 2791–2798. [Google Scholar] [CrossRef]
  532. Wang, W.; Yuan, X.; Zhu, J. Automatic PID tuning via differential evolution for quadrotor UAVs trajectory tracking. In Proceedings of the 2016 IEEE Symposium Series on Computational Intelligence (SSCI), Athens, Greece, 6–9 December 2016; pp. 1–8. [Google Scholar]
  533. Keskin, B.; Keskin, K. Position Control of Quadrotor using Firefly Algorithm. El-Cezeri 2021, 9, 554–566. [Google Scholar] [CrossRef]
  534. Kaba, A. Improved PID rate control of a quadrotor with a convexity-based surrogated model. Aircr. Eng. Aerosp. Technol. 2021, 93, 1287–1301. [Google Scholar] [CrossRef]
  535. Ebrahimkhani, E.; Dehghani, H.; Asadollahi, M.; Ghiasi, A.R. Controlling a Micro Quadrotor Using Nonlinear Techniques Tuned by Firefly Algorithm (FA). In Int. Conf. New Res. Electr. Eng. Comput. Sci. 2015; pp. 1–11. [Google Scholar] [CrossRef]
  536. Prabaningtyas, S.; Mardlijah. LQGT Control Design Based on Firefly Algorithm Optimization for Trajectory Tracking on Quadcopter. In Proceedings of the 2022 International Seminar on Intelligent Technology and Its Applications (ISITIA), Surabaya, Indonesia, 20–21 July 2022; pp. 261–266. [Google Scholar] [CrossRef]
  537. Yin, X.; Wei, X.; Liu, L.; Wang, Y. Improved Hybrid Fireworks Algorithm-Based Parameter Optimization in High-Order Sliding Mode Control of Hypersonic Vehicles. Complexity 2018, 2018, 9098151. [Google Scholar] [CrossRef] [Green Version]
  538. Glida, H.-E.; Abdou, L.; Chelihi, A. Optimal Fuzzy Adaptive Backstepping Controller for Attitude Control of a Quadrotor Helicopter. In Proceedings of the 2019 International Conference on Control, Automation and Diagnosis (ICCAD), Grenoble, France, 2–4 July 2019; pp. 1–6. [Google Scholar] [CrossRef]
  539. Basri, M.A.; Noordin, A. Optimal backstepping control of quadrotor UAV using gravitational search optimization algorithm. Bull. Electr. Eng. Inform. 2020, 9, 1819–1826. [Google Scholar] [CrossRef]
  540. Abbas, N.H.; Sami, A.R. Tuning of PID Controllers for Quadcopter System using Hybrid Memory based Gravitational Search Algorithm-Particle Swarm Optimization. Int. J. Comput. Appl. 2017, 172, 975–8887. [Google Scholar]
  541. Hartawan, W. Otomasi PID Tuning untuk Optimasi Kontrol Quadcopter Menggunakan Metode Harmony Search [Automated PID tuning for quadcopter control optimization using the harmony search method]. J. Inov. Tek. Inform. 2021, 4, 21–28. Available online: http://journal.universitaspahlawan.ac.id/index.php/jiti/article/view/2012 (accessed on 20 June 2023).
  542. Altan, A. Performance of Metaheuristic Optimization Algorithms based on Swarm Intelligence in Attitude and Altitude Control of Unmanned Aerial Vehicle for Path Following. In Proceedings of the 2020 4th International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT), Istanbul, Turkey, 22–24 October 2020; pp. 1–6. [Google Scholar] [CrossRef]
  543. Yuan, G.; Duan, H. Robust Control for UAV Close Formation Using LADRC via Sine-Powered Pigeon-Inspired Optimization. Drones 2023, 7, 238. [Google Scholar] [CrossRef]
  544. Jing, Y.; Wang, X.; Heredia-Juesas, J.; Fortner, C.; Giacomo, C.; Sipahi, R.; Martinez-Lorenzo, J. PX4 Simulation Results of a Quadcopter with a Disturbance-Observer-Based and PSO-Optimized Sliding Mode Surface Controller. Drones 2022, 6, 261. [Google Scholar] [CrossRef]
  545. Shafieenejad, I.; Rouzi, E.D.; Sardari, J.; Araghi, M.S.; Esmaeili, A.; Zahedi, S. Fuzzy logic, neural-fuzzy network and honey bees algorithm to develop the swarm motion of aerial robots. Evol. Syst. 2022, 13, 319–330. [Google Scholar] [CrossRef]
  546. Zhang, B.; Sun, X.; Liu, S.; Deng, X. Adaptive Differential Evolution-based Receding Horizon Control Design for Multi-UAV Formation Reconfiguration. Int. J. Control. Autom. Syst. 2019, 17, 3009–3020. [Google Scholar] [CrossRef]
  547. Bian, L.; Sun, W.; Sun, T. Trajectory Following and Improved Differential Evolution Solution for Rapid Forming of UAV Formation. IEEE Access 2019, 7, 169599–169613. [Google Scholar] [CrossRef]
  548. Wang, Y.; Zhang, T.; Cai, Z.; Zhao, J.; Wu, K. Multi-UAV coordination control by chaotic grey wolf optimization based distributed MPC with event-triggered strategy. Chin. J. Aeronaut. 2020, 33, 2877–2897. [Google Scholar] [CrossRef]
  549. Ma, M.; Wu, J.; Shi, Y.; Yue, L.; Yang, C.; Chen, X. Chaotic Random Opposition-Based Learning and Cauchy Mutation Improved Moth-Flame Optimization Algorithm for Intelligent Route Planning of Multiple UAVs. IEEE Access 2022, 10, 49385–49397. [Google Scholar] [CrossRef]
  550. Xiong, T.; Liu, F.; Liu, H.; Ge, J.; Li, H.; Ding, K.; Li, Q. Multi-Drone Optimal Mission Assignment and 3D Path Planning for Disaster Rescue. Drones 2023, 7, 394. [Google Scholar] [CrossRef]
  551. Qiu, H.; Duan, H. A multi-objective pigeon-inspired optimization approach to UAV distributed flocking among obstacles. Inf. Sci. 2020, 509, 515–529. [Google Scholar] [CrossRef]
  552. Ali, Z.A.; Zhangang, H.; Zhengru, D. Path planning of multiple UAVs using MMACO and DE algorithm in dynamic environment. Meas. Control 2023, 56, 459–469. [Google Scholar] [CrossRef]
  553. Wu, J.; Yi, J.; Gao, L.; Li, X. Cooperative path planning of multiple UAVs based on PH curves and harmony search algorithm. In Proceedings of the 2017 IEEE 21st International Conference on Computer Supported Cooperative Work in Design (CSCWD), Wellington, New Zealand, 26–28 April 2017; pp. 540–544. [Google Scholar] [CrossRef]
  554. Yu, J.; Guo, J.; Zhang, X.; Zhou, C.; Xie, T.; Han, X. A Novel Tent-Levy Fireworks Algorithm for the UAV Task Allocation Problem Under Uncertain Environment. IEEE Access 2022, 10, 102373–102385. [Google Scholar] [CrossRef]
  555. Zhang, Y.; Wang, X. Research on UAV Task Assignment Based on Fireworks Algorithm. Acad. J. Comput. Inf. Sci. 2022, 5, 103–107. [Google Scholar] [CrossRef]
  556. Cui, Y.; Dong, W.; Hu, D.; Liu, H. The Application of Improved Harmony Search Algorithm to Multi-UAV Task Assignment. Electronics 2022, 11, 1171. [Google Scholar] [CrossRef]
  557. Xiang, H.; Han, Y.; Pan, N.; Zhang, M.; Wang, Z. Study on Multi-UAV Cooperative Path Planning for Complex Patrol Tasks in Large Cities. Drones 2023, 7, 367. [Google Scholar] [CrossRef]
  558. Zarchi, M.; Attaran, B. Performance improvement of an active vibration absorber subsystem for an aircraft model using a bees algorithm based on multi-objective intelligent optimization. Eng. Optim. 2017, 49, 1905–1921. [Google Scholar] [CrossRef]
  559. Toloei, A.; Zarchi, M.; Attaran, B. Application of Active Suspension System to Reduce Aircraft Vibration using PID Technique and Bees Algorithm. Int. J. Comput. Appl. 2014, 98, 17–24. [Google Scholar] [CrossRef]
  560. Ding, L.; Wu, H.; Yao, Y. Chaotic Artificial Bee Colony Algorithm for System Identification of a Small-Scale Unmanned Helicopter. Int. J. Aerosp. Eng. 2015, 2015, 801874. [Google Scholar] [CrossRef] [Green Version]
  561. Ghosh Roy, A.; Peyada, N.K. Aircraft parameter estimation using Hybrid Neuro Fuzzy and Artificial Bee Colony optimization (HNFABC) algorithm. Aerosp. Sci. Technol. 2017, 71, 772–782. [Google Scholar] [CrossRef]
  562. Gotmare, A.; Bhattacharjee, S.S.; Patidar, R.; George, N.V. Swarm and evolutionary computing algorithms for system identification and filter design: A comprehensive review. Swarm Evol. Comput. 2017, 32, 68–84. [Google Scholar] [CrossRef]
  563. El Gmili, N.; Mjahed, M.; El Kari, A.; Ayad, H. Quadrotor Identification through the Cooperative Particle Swarm Optimization-Cuckoo Search Approach. Comput. Intell. Neurosci. 2019, 2019, 8925165. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  564. Yang, J.; Cai, Z.; Lin, Q.; Zhang, D.; Wang, Y. System identification of quadrotor UAV based on genetic algorithm. In Proceedings of the 2014 IEEE Chinese Guidance, Navigation and Control Conference, Yantai, China, 8–10 August 2014; pp. 2336–2340. [Google Scholar] [CrossRef]
  565. Wang, S.; Guo, H.; Li, W.; Dong, F.; Bu, L. Differential evolution parameter identification of multi-rotor unmanned aerial vehicle (UAV) based on gradient prey acceleration strategy. Int. J. Simul. Syst. Sci. Technol. 2016, 17, 5.1–5.6. [Google Scholar] [CrossRef]
  566. Tijani, I.B.; Akmeliawati, R.; Legowo, A.; Budiyono, A. Nonlinear identification of a small scale unmanned helicopter using optimized NARX network with multiobjective differential evolution. Eng. Appl. Artif. Intell. 2014, 33, 99–115. [Google Scholar] [CrossRef]
  567. Nonut, A.; Kanokmedhakul, Y.; Bureerat, S.; Kumar, S.; Tejani, G.G.; Artrit, P.; Yıldız, A.R.; Pholdee, N. A small fixed-wing UAV system identification using metaheuristics. Cogent Eng. 2022, 9, 2114196. [Google Scholar] [CrossRef]
  568. Li, J.; Duan, H. Boid-Inspired Harmony Search approach to aircraft parameter estimation. In Proceedings of the 11th World Congress on Intelligent Control and Automation, Shenyang, China, 29 June–4 July 2014; pp. 3556–3561. [Google Scholar] [CrossRef] [Green Version]
  569. Yang, J.; Wang, G.; Zhu, J. Frequency-domain identification of a small-scale unmanned helicopter with harmony search algorithm. Int. J. Comput. Appl. Technol. 2014, 49, 141. [Google Scholar] [CrossRef]
  570. Samarakoon, S.M.B.P.; Muthugala, M.A.V.J.; Elara, M.R. Metaheuristic based navigation of a reconfigurable robot through narrow spaces with shape changing ability. Expert Syst. Appl. 2022, 201, 117060. [Google Scholar] [CrossRef]
  571. Zhang, W.; Zhang, W. Efficient UAV Localization Based on Modified Particle Swarm Optimization. In Proceedings of the 2022 IEEE International Conference on Communications Workshops (ICC Workshops), Seoul, Republic of Korea, 16–20 May 2022; pp. 1089–1094. [Google Scholar] [CrossRef]
  572. Shanshan, G.; Zhong, Y.; Weina, C.; Yizhi, W. Artificial Bee Colony Particle Filtering Algorithm for Integrated Navigation. In Advances in Guidance, Navigation and Control: Proceedings of the 2020 International Conference on Guidance, Navigation and Control, ICGNC 2020, Tianjin, China, 23–25 October 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 3797–3806. [Google Scholar] [CrossRef]
  573. Duan, H. Biological Vision-Based Surveillance and Navigation. In Bio-Inspired Computation in Unmanned Aerial Vehicles; Springer: Berlin/Heidelberg, Germany, 2014; pp. 215–246. [Google Scholar] [CrossRef]
  574. Shrivastava, A. AGV Using Clonal Selection in Warehouse; Galgotias College of Engineering and Technology: Uttar Pradesh, India, 2021. [Google Scholar]
  575. Banerjee, A.; Nilhani, A.; Dhabal, S.; Venkateswaran, P. A novel sound source localization method using a global-best guided cuckoo search algorithm for drone-based search and rescue operations. In Unmanned Aerial Systems; Elsevier: Amsterdam, The Netherlands, 2021; pp. 375–415. [Google Scholar] [CrossRef]
  576. Alfeo, A.L.; Cimino, M.G.C.A.; De Francesco, N.; Lega, M.; Vaglini, G. Design and simulation of the emergent behavior of small drones swarming for distributed target localization. J. Comput. Sci. 2018, 29, 19–33. [Google Scholar] [CrossRef]
  577. Sun, Z.; Wu, J.; Yang, J.; Huang, Y.; Li, C.; Li, D. Path Planning for GEO-UAV Bistatic SAR Using Constrained Adaptive Multiobjective Differential Evolution. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6444–6457. [Google Scholar] [CrossRef]
  578. Arafat, M.Y.; Moh, S. Bio-Inspired Approaches for Energy-Efficient Localization and Clustering in UAV Networks for Monitoring Wildfires in Remote Areas. IEEE Access 2021, 9, 18649–18669. [Google Scholar] [CrossRef]
  579. Radmanesh, M.; Kumar, M. Grey wolf optimization based sense and avoid algorithm for UAV path planning in uncertain environment using a Bayesian framework. In Proceedings of the 2016 International Conference on Unmanned Aircraft Systems (ICUAS), Arlington, VA, USA, 7–10 June 2016; pp. 68–76. [Google Scholar] [CrossRef]
  580. Nenavath, H.; Ashwini, K.; Jatoth, R.K.; Mirjalili, S. Intelligent Trigonometric Particle Filter for visual tracking. ISA Trans. 2022, 128, 460–476. [Google Scholar] [CrossRef]
  581. Hao, L.; Xiangyu, F.; Manhong, S. Research on the Cooperative Passive Location of Moving Targets Based on Improved Particle Swarm Optimization. Drones 2023, 7, 264. [Google Scholar] [CrossRef]
  582. Li, Z.; Deng, Y.; Liu, W. Identification of INS Sensor Errors from Navigation Data Based on Improved Pigeon-Inspired Optimization. Drones 2022, 6, 287. [Google Scholar] [CrossRef]
  583. Egi, Y.; Otero, C.E. Machine-Learning and 3D Point-Cloud Based Signal Power Path Loss Model for the Deployment of Wireless Communication Systems. IEEE Access 2019, 7, 42507–42517. [Google Scholar] [CrossRef]
  584. Bithas, P.S.; Michailidis, E.T.; Nomikos, N.; Vouyioukas, D.; Kanatas, A.G. A Survey on Machine-Learning Techniques for UAV-Based Communications. Sensors 2019, 19, 5170. [Google Scholar] [CrossRef] [Green Version]
  585. Khoufi, I.; Laouiti, A.; Adjih, C.; Hadded, M. UAVs Trajectory Optimization for Data Pick Up and Delivery with Time Window. Drones 2021, 5, 27. [Google Scholar] [CrossRef]
  586. Eledlebi, K.; Hildmann, H.; Ruta, D.; Isakovic, A.F. A Hybrid Voronoi Tessellation/Genetic Algorithm Approach for the Deployment of Drone-Based Nodes of a Self-Organizing Wireless Sensor Network (WSN) in Unknown and GPS Denied Environments. Drones 2020, 4, 33. [Google Scholar] [CrossRef]
  587. Subburaj, B.; Jayachandran, U.M.; Arumugham, V.; Suthanthira Amalraj, M.J.A. A Self-Adaptive Trajectory Optimization Algorithm Using Fuzzy Logic for Mobile Edge Computing System Assisted by Unmanned Aerial Vehicle. Drones 2023, 7, 266. [Google Scholar] [CrossRef]
  588. Anicho, O.; Charlesworth, P.B.; Baicher, G.S.; Nagar, A.; Buckley, N. Comparative study for coordinating multiple unmanned HAPS for communications area coverage. In Proceedings of the 2019 International Conference on Unmanned Aircraft Systems, ICUAS, Atlanta, GA, USA, 11–14 June 2019; pp. 467–474. [Google Scholar] [CrossRef]
  589. Du, W.; Ying, W.; Yang, P.; Cao, X.; Yan, G.; Tang, K.; Wu, D. Network-Based Heterogeneous Particle Swarm Optimization and Its Application in UAV Communication Coverage. In IEEE Transactions on Emerging Topics in Computational Intelligence; IEEE: New York, NY, USA, 2020; Volume 4, pp. 312–323. [Google Scholar] [CrossRef]
  590. Torky, M.; El-Dosuky, M.; Goda, E.; Snášel, V.; Hassanien, A.E. Scheduling and Securing Drone Charging System Using Particle Swarm Optimization and Blockchain Technology. Drones 2022, 6, 237. [Google Scholar] [CrossRef]
  591. Trotta, A.; Andreagiovanni, F.D.; Di Felice, M.; Natalizio, E.; Chowdhury, K.R. When UAVs Ride A Bus: Towards Energy-efficient City-scale Video Surveillance. In Proceedings of the IEEE INFOCOM 2018—IEEE Conference on Computer Communications, Honolulu, HI, USA, 16–19 April 2018; Volume 2018-April, pp. 1043–1051. [Google Scholar] [CrossRef]
  592. Li, L.; Xu, Y.; Zhang, Z.; Yin, J.; Chen, W.; Han, Z. A prediction-based charging policy and interference mitigation approach in the wireless powered internet of things. IEEE J. Sel. Areas Commun. 2018, 37, 439–451. [Google Scholar] [CrossRef]
  593. Xie, J.; Fu, Q.; Jia, R.; Lin, F.; Li, M.; Zheng, Z. Optimal Energy and Delay Tradeoff in UAV-Enabled Wireless Sensor Networks. Drones 2023, 7, 368. [Google Scholar] [CrossRef]
  594. Zhang, X.; Xiang, X.; Lu, S.; Zhou, Y.; Sun, S. Evolutionary Optimization of Drone-Swarm Deployment for Wireless Coverage. Drones 2022, 7, 8. [Google Scholar] [CrossRef]
  595. Mukherjee, A.; Fakoorian, S.A.A.; Huang, J.; Swindlehurst, A.L. Principles of physical layer security in multiuser wireless networks: A survey. IEEE Commun. Surv. Tutorials 2014, 16, 1550–1573. [Google Scholar] [CrossRef] [Green Version]
  596. Li, B.; Fei, Z.; Zhang, Y.; Guizani, M. Secure UAV communication networks over 5G. IEEE Wirel. Commun. 2019, 26, 114–120. [Google Scholar] [CrossRef]
  597. Bassily, R.; Ekrem, E.; He, X.; Tekin, E.; Xie, J.; Bloch, M.R.; Ulukus, S.; Yener, A. Cooperative security at the physical layer: A summary of recent advances. IEEE Signal Process. Mag. 2013, 30, 16–28. [Google Scholar] [CrossRef]
  598. Beegum, T.R.; Idris, M.Y.I.; Bin Ayub, M.N.; Shehadeh, H.A. Optimized Routing of UAVs Using Bio-Inspired Algorithm in FANET: A Systematic Review. IEEE Access 2023, 11, 15588–15622. [Google Scholar] [CrossRef]
  599. Abubakar, A.I.; Ahmad, I.; Omeke, K.G.; Ozturk, M.; Ozturk, C.; Abdel-Salam, A.M.; Mollel, M.S.; Abbasi, Q.H.; Hussain, S.; Imran, M.A. A Survey on Energy Optimization Techniques in UAV-Based Cellular Networks: From Conventional to Machine Learning Approaches. Drones 2023, 7, 214. [Google Scholar] [CrossRef]
Figure 1. The sources of inspiration in nature-inspired algorithms vary from underwater to space.
Figure 2. Classification of nature-inspired optimization algorithms based on Yang’s method [7].
Figure 3. Classification of optimization methods based on Muller [8].
Figure 4. Classification of meta-heuristic algorithms introduced by Abdel-basset et al. [11].
Figure 5. Inspiration-based classification of nature-inspired algorithms.
Figure 6. The share of each category from the total number of nature-inspired algorithms.
Figure 7. The share of each category from the total number of nature-inspired algorithms.
Figure 8. Number of citations for each category.
Figure 9. The share of each sub-category from the total number of bio-based algorithms.
Figure 10. Comparison of total citations of bio-based sub-categories.
Figure 11. Different species in bio-based algorithms and their share of the total number of algorithms.
Figure 12. Most popular evolutionary algorithms.
Figure 13. Flowchart of the GA [15].
Figure 14. Different genetic operators [16,17,18].
Figure 15. Comparison between the main features of the standard forms of GA and EP [30].
Figure 16. Most popular organ-based algorithms.
Figure 17. An antibody and a B-lymphocyte with antibodies on its surface [35].
Figure 18. Clonal selection principle [38].
Figure 19. Flowchart of the CSA.
Figure 20. Most popular behavior-based algorithms.
Figure 21. Flowchart of the biogeography-based algorithm.
Figure 22. Different symbiotic relations in nature [47].
Figure 23. Flowchart of the SOS.
Figure 24. Most popular disease-based algorithms.
Figure 25. Most popular microorganism-based algorithms.
Figure 26. Most popular insect-based algorithms.
Figure 27. Flowchart of the ACO [81].
Figure 28. Flowchart of the ABC [84].
Figure 29. Transverse orientation in moths: (a) using moonlight as a reference; (b) encirclement traps in the presence of artificial lights [87].
Figure 30. Most popular winged-animal-based algorithms.
Figure 31. An example of Lévy flight in 2D space [128].
Figure 32. Preying habit with bait in the green heron bird [129].
Figure 33. Main operations of the GHOA: (a) baiting operations, (b) attracting the prey swarm, (c) change of position [129].
Figure 34. Flowchart of the GHOA.
Figure 35. Bat echolocation [131].
Figure 36. Most popular aquatic-based algorithms.
Figure 37. Bubble-net feeding method in humpback whales [161,162].
Figure 38. Schematic of the sensing ambit around a krill individual [163].
Figure 39. Flowchart of the KH [163].
Figure 40. Vision concept of the artificial fish [165].
Figure 41. Most popular terrestrial animal-based algorithms.
Figure 42. Hierarchy of grey wolves [200].
Figure 43. Main phases of grey wolf hunting: (A) chasing, approaching, and tracking prey; (B–D) pursuing, harassing, and encircling; (E) stationary situation and attack [199].
Figure 44. Flowchart of the SFLA.
Figure 45. Flowchart of the CSO [205].
Figure 46. Most popular plant-based algorithms.
Figure 47. The pollinators and pollination types [249].
Figure 48. Some species that are commonly considered weeds: (a) dandelion [251], (b) burdock [252], (c) amaranth or pigweed [253].
Figure 49. Flowchart of the IWC algorithm [250].
Figure 50. Most popular ecosystem-based algorithms.
Figure 51. Schematic of the water cycle.
Figure 52. Flowchart of the WCA [279].
Figure 53. Most popular social-based algorithms.
Figure 54. Flowchart of the PSO [296].
Figure 55. Distribution of marks obtained by learners taught by two different teachers [297].
Figure 56. Flowchart of the TLBO [297].
Figure 57. Most popular physics-based algorithms.
Figure 58. Flowchart of the SAA [320].
Figure 59. Flowchart of the GSA [324].
Figure 60. Algorithm of the BB-BC [326].
Figure 61. Most cited chemistry-based algorithms.
Figure 62. Most cited math-based algorithms.
Figure 63. Most cited music-based algorithms.
Figure 64. Most cited sport-based algorithms.
Figure 65. Constraint-handling techniques [408,409,410,411,412].
Figure 66. Comparison of algorithms; number of problems with the best and worst cost functions.
Figure 67. Comparison of algorithms; number of problems with the least and most iterations.
Figure 68. Comparison of algorithms; number of problems with the shortest and longest calculation times.
Figure 69. Comparison of algorithms; number of problems with the lowest and highest errors.
Figure 70. Classification of optimization algorithms in aerospace systems and drones.
Figure 71. Design of optimized uniform and multi-element airfoils [454].
Figure 72. Some forms of wings and tails [464].
Figure 73. Classification of aerospace/drone systems.
Figure 74. Share of nature-inspired algorithms and applications from guidance and control publications.
Table 1. Different mutation operators in DE [19].

Function | Formula
DE/rand/1 | $v_i = x_i + F(x_j - x_k)$
DE/best/1 | $v_i = x_{best} + F(x_i - x_j)$
DE/rand/2 | $v_i = x_i + F(x_j - x_k) + F(x_l - x_m)$
DE/best/2 | $v_i = x_{best} + F(x_i - x_j) + F(x_k - x_l)$
DE/current-to-best/1 | $v_i = x_i + F(x_{best} - x_i) + F(x_j - x_k)$
DE/current-to-rand/1 | $v_i = x_i + rand \cdot (x_j - x_i) + F(x_k - x_l)$
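The mutation schemes in Table 1 can be sketched in code. The following is a minimal illustration assuming a population stored as lists of floats; the function name `mutate` and the way the distinct random indices are drawn are our own choices, following common DE notation rather than any specific implementation from the paper:

```python
import random

def mutate(pop, i, best, F=0.5, scheme="rand/1"):
    """Return a DE mutant vector for individual i under the given scheme."""
    # Draw five distinct random indices, all different from i.
    candidates = [j for j in range(len(pop)) if j != i]
    r = random.sample(candidates, 5)
    x = pop[i]
    a, b, c, d, e = (pop[j] for j in r)
    n = len(x)
    if scheme == "rand/1":
        return [a[k] + F * (b[k] - c[k]) for k in range(n)]
    if scheme == "best/1":
        return [best[k] + F * (a[k] - b[k]) for k in range(n)]
    if scheme == "rand/2":
        return [a[k] + F * (b[k] - c[k]) + F * (d[k] - e[k]) for k in range(n)]
    if scheme == "best/2":
        return [best[k] + F * (a[k] - b[k]) + F * (c[k] - d[k]) for k in range(n)]
    if scheme == "current-to-best/1":
        return [x[k] + F * (best[k] - x[k]) + F * (a[k] - b[k]) for k in range(n)]
    if scheme == "current-to-rand/1":
        K = random.random()  # per-mutation random weight
        return [x[k] + K * (a[k] - x[k]) + F * (b[k] - c[k]) for k in range(n)]
    raise ValueError(f"unknown scheme: {scheme}")
```

In a full DE loop, the mutant would then be crossed over with the parent and accepted only if it improves the cost function.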
Table 2. Specifications of the selected benchmark functions [416].

Function | Continuity | Differentiability | Separability | Scalability | Modality
Ackley | Continuous | Differentiable | Non-separable | Scalable | Multimodal
Alpine | Continuous | Non-differentiable | Separable | Scalable | Multimodal
Chung Reynolds | Continuous | Differentiable | Partially separable | Scalable | Unimodal
Cosine Mixture | Discontinuous | Non-differentiable | Separable | Scalable | Multimodal
Dixon & Price | Continuous | Differentiable | Non-separable | Scalable | Unimodal
Griewank | Continuous | Differentiable | Non-separable | Scalable | Multimodal
Pintér | Continuous | Differentiable | Non-separable | Scalable | Multimodal
Powell | Continuous | Differentiable | Non-separable | Scalable | Unimodal
Qing | Continuous | Differentiable | Separable | Scalable | Multimodal
Schaffer | Continuous | Differentiable | Non-separable | Non-scalable | Unimodal
Table 3. Equations of the selected benchmark functions [417,418] (surface plots omitted from this text version).

Function | Equation
Ackley | $f(x) = -a\exp\left(-b\sqrt{\tfrac{1}{d}\sum_{i=1}^{d} x_i^2}\right) - \exp\left(\tfrac{1}{d}\sum_{i=1}^{d}\cos(c x_i)\right) + a + e$, with $a = 20$, $b = 0.2$, $c = 2\pi$, $d = 4$
Alpine | $f(x) = \sum_{i=1}^{d} \left| x_i \sin x_i + 0.1 x_i \right|$, $d = 4$
Chung Reynolds | $f(x) = \left(\sum_{i=1}^{d} x_i^2\right)^2$, $d = 4$
Cosine Mixture | $f(x) = -0.1\sum_{i=1}^{d} \cos(5\pi x_i) + \sum_{i=1}^{d} x_i^2 + 0.1 d$, $d = 4$
Dixon and Price | $f(x) = (x_1 - 1)^2 + \sum_{i=2}^{d} i\,(2 x_i^2 - x_{i-1})^2$, $d = 4$
Griewank | $f(x) = \sum_{i=1}^{d} \frac{x_i^2}{4000} - \prod_{i=1}^{d} \cos\left(\frac{x_i}{\sqrt{i}}\right) + 1$, $d = 4$
Pintér | $f(x) = \sum_{i=1}^{d} i x_i^2 + \sum_{i=1}^{d} 20 i \sin^2 A + \sum_{i=1}^{d} i \log_{10}\left(1 + i B^2\right)$, $d = 4$, with $A = x_{i-1}\sin x_i + \sin x_{i+1}$ and $B = x_{i-1}^2 - 2 x_i + 3 x_{i+1} - \cos x_i + 1$ (indices taken cyclically)
Powell | $f(x) = \sum_{i=1}^{d/4} \left[(x_{4i-3} + 10 x_{4i-2})^2 + 5 (x_{4i-1} - x_{4i})^2 + (x_{4i-2} - 2 x_{4i-1})^4 + 10 (x_{4i-3} - x_{4i})^4\right]$, $d = 4$
Qing | $f(x) = \sum_{i=1}^{d} (x_i^2 - i)^2$, $d = 4$
Schaffer | $f(x) = 0.5 + \dfrac{\sin^2\sqrt{x_1^2 + x_2^2} - 0.5}{\left(1 + 0.001 (x_1^2 + x_2^2)\right)^2}$, $d = 2$
Table 4. Calculation results.
Table 4. Calculation results.
AlgorithmProblemMean CostMean IterationsMean Time (s)Mean Error
Sine Cosine AlgorithmAckley5.34 × 10−5339.4825.50 × 10−11.07 × 10−7
Harris Hawks Optimization6.14 × 105 × 10−535.0861.02 × 10−11.23 × 10−7
Fireworks Algorithm6.27 × 10−588.3145.48 × 10−11.25 × 10−7
Artificial Bee Colony Algorithm7.28 × 10−591.1941.52 × 10−11.46 × 10−7
Bees Algorithm7.75 × 10−5140.837.29 × 10−11.55 × 10−7
Gravitational Search Algorithm7.92 × 10−5422.27.17 × 10−11.58 × 10−7
Firefly Algorithm7.93 × 10−5341.845.711.59 × 10−7
Differential Evolution7.96 × 10−5217.2349.90 × 10−11.59 × 10−7
Bat Algorithm7.99 × 10−5294.7082.70 × 10−11.60 × 10−7
Grey Wolf Optimizer8.00 × 10−528.9028.20 × 10−21.60 × 10−7
Flower Pollination Algorithm8.13 × 10−5841.3061.321.63 × 10−7
Cuckoo Search8.19 × 10−5312.44.94 × 10−11.64 × 10−7
Particle Swarm Algorithm8.27 × 10−5227.0565.03 × 10−11.65 × 10−7
Cat Swarm Optimization9.30 × 10−548.0641.91 × 10−11.86 × 10−7
Clonal Selection Algorithm6.12 × 10−410001.671.22 × 10−6
Fish School Search3.78 × 10−310002.467.56 × 10−6
Moth Flame Optimizer4.06 × 10−3531.4625.47 × 10−18.12 × 10−6
Forest Optimization Algorithm3.54 × 10−210001.007.09 × 10−5
Bacterial Foraging Optimization6.83 × 10−210005.611.37 × 10−4
Genetic Algorithm2.46 × 10−110004.77 × 10−12.46 × 10−2
Harmony Search3.49 × 10−110001.36 × 10−16.98 × 10−4
Fireworks AlgorithmAlpine6.64 × 10−5133.9065.94 × 10−11.33 × 10−7
Harris Hawks Optimization7.00 × 10−527.0426.11 × 10−21.40 × 10−7
Bees Algorithm7.13 × 10−5135.8485.09 × 10−11.43 × 10−7
Artificial Bee Colony Algorithm7.50 × 10−584.6481.25 × 10−11.50 × 10−7
Gravitational Search Algorithm8.00 × 10−5310.837.94 × 10−11.60 × 10−7
Particle Swarm Algorithm8.06 × 10−5250.6223.37 × 10−11.61 × 10−7
Firefly Algorithm8.06 × 10−5260.9283.981.61 × 10−7
Grey Wolf Optimizer8.47 × 10−5145.8783.23 × 10−11.69 × 10−7
Cat Swarm Optimization9.32 × 10−591.692.67 × 10−11.86 × 10−7
Differential Evolution1.68 × 10−4315.5369.93 × 10−13.37 × 10−7
Cuckoo Search4.82 × 10−4932.2647.28 × 10−19.64 × 10−7
Fish School Search1.13 × 10−3999.9942.212.26 × 10−6
Moth Flame Optimizer2.32 × 10−3548.0766.47 × 10−14.64 × 10−6
Sine Cosine Algorithm2.56 × 10−3340.8922.62 × 10−15.12 × 10−6
Bat Algorithm4.91 × 10−3768.3727.83 × 10−19.81 × 10−6
Harmony Search1.37 × 10−210001.17 × 10−12.74 × 10−5
Flower Pollination Algorithm1.62 × 10−210001.313.24 × 10−5
Forest Optimization Algorithm4.01 × 10−210006.14 × 10−18.02 × 10−5
Bacterial Foraging Optimization4.16 × 10−210004.078.33 × 10−5
Genetic Algorithm5.24 × 10−210002.21 × 10−15.24 × 10−3
Clonal Selection Algorithm7.96 × 10−110001.051.59 × 10−3
Sine Cosine AlgorithmChung-Reynolds2.39 × 10−5146.7241.20 × 10−14.78 × 10−8
Fireworks Algorithm3.05 × 10−543.3761.95 × 10−16.09 × 10−8
Artificial Bee Colony Algorithm4.16 × 10−514.2542.88 × 10−28.33 × 10−8
Forest Optimization Algorithm4.65 × 10−5171.5686.75 × 10−29.30 × 10−8
Grey Wolf Optimizer4.68 × 10−59.8341.59 × 10−29.36 × 10−8
Gravitational Search Algorithm4.76 × 10−5114.3524.45 × 10−19.51 × 10−8
Bees Algorithm4.82 × 10−516.9168.23 × 10−29.65 × 10−8
Bat Algorithm4.83 × 10−545.575.10 × 10−29.67 × 10−8
Differential Evolution4.88 × 10−571.8662.31 × 10−19.77 × 10−8
Firefly Algorithm4.94 × 10−591.871.119.87 × 10−8
Particle Swarm Algorithm5.07 × 10−547.7665.38 × 10−21.01 × 10−7
Clonal Selection Algorithm5.08 × 10−544.3667.50 × 10−21.02 × 10−7
Fish School Search5.14 × 10−5314.8325.99 × 10−11.03 × 10−7
Flower Pollination Algorithm5.15 × 10−5203.6862.58 × 10−11.03 × 10−7
Bacterial Foraging Optimization5.16 × 10−553.932.54 × 10−11.03 × 10−7
Cuckoo Search5.21 × 10−567.724.55 × 10−21.04 × 10−7
Moth Flame Optimizer7.05 × 10−5228.2322.14 × 10−11.41 × 10−7
Cat Swarm Optimization7.50 × 10−515.9224.91 × 10−21.50 × 10−7
Genetic Algorithm1.82 × 10−48432.57 × 10−11.82 × 10−5
Harris Hawks Optimization2.96 × 10−46.149.07 × 10−35.92 × 10−7
Harmony Search4.63 × 10−4789.024.77 × 10−29.26 × 10−7
Sine Cosine AlgorithmCosine Mixture3.73 × 10−5201.7041.89 × 10−17.47 × 10−8
Fireworks Algorithm4.87 × 10−545.9822.30 × 10−19.75 × 10−8
Clonal Selection Algorithm5.69 × 10−581.1241.17 × 10−11.14 × 10−7
Artificial Bee Colony Algorithm5.84 × 10−528.9783.28 × 10−21.17 × 10−7
Grey Wolf Optimizer6.37 × 10−513.6443.38 × 10−21.27 × 10−7
Firefly Algorithm6.62 × 10−5135.3142.021.32 × 10−7
Bees Algorithm6.64 × 10−562.5482.68 × 10−11.33 × 10−7
Differential Evolution6.70 × 10−5104.83.73 × 10−11.34 × 10−7
Particle Swarm Algorithm6.80 × 10−577.2041.10 × 10−11.36 × 10−7
Flower Pollination Algorithm6.83 × 10−5464.0145.65 × 10−11.37 × 10−7
Cuckoo Search6.83 × 10−5130.3441.24 × 10−11.37 × 10−7
Fish School Search7.71 × 10−5993.2582.661.54 × 10−7
Forest Optimization Algorithm8.33 × 10−5622.2662.97 × 10−11.67 × 10−7
Cat Swarm Optimization8.68 × 10−541.1741.22 × 10−11.74 × 10−7
Harris Hawks Optimization1.09 × 10−48.7362.20 × 10−22.19 × 10−7
Gravitational Search Algorithm2.56 × 10−4240.7785.91 × 10−15.13 × 10−7
Moth Flame Optimizer3.85 × 10−4331.8222.82 × 10−17.70 × 10−7
Genetic Algorithm1.49 × 10−310002.65 × 10−11.49 × 10−4
Bat Algorithm3.91 × 10−3185.3661.48 × 10−17.82 × 10−6
Harmony Search9.96 × 10−310008.87 × 10−21.99 × 10−5
Bacterial Foraging Optimization1.61 × 10−210004.043.23 × 10−5
Firefly AlgorithmDixon-Price6.48 × 10−5205.2423.171.30 × 10−7
Bat Algorithm6.60 × 10−5158.6981.25 × 10−11.32 × 10−7
Gravitational Search Algorithm6.66 × 10−5229.6489.72 × 10−11.33 × 10−7
Cuckoo Search6.88 × 10−5224.772.32 × 10−11.38 × 10−7
Particle Swarm Algorithm7.12 × 10−5171.62.15 × 10−11.42 × 10−7
Flower Pollination Algorithm7.19 × 10−5743.129.05 × 10−11.44 × 10−7
Artificial Bee Colony Algorithm8.33 × 10−5332.5485.04 × 10−11.67 × 10−7
Fish School Search8.53 × 10−5992.3083.041.71 × 10−7
Harris Hawks Optimization1.06 × 10−4208.4485.15 × 10−12.12 × 10−7
Bees Algorithm9.22 × 10−4120.126.29 × 10−11.84 × 10−6
Differential Evolution2.12 × 10−3204.8585.65 × 10−14.24 × 10−6
Cat Swarm Optimization1.08 × 10−210004.262.15 × 10−5
Bacterial Foraging Optimization1.54 × 10−210004.063.07 × 10−5
Forest Optimization Algorithm9.88 × 10−210008.36 × 10−11.98 × 10−4
Grey Wolf Optimizer1.25 × 10−110001.402.49 × 10−4
Sine Cosine Algorithm2.49 × 10−110009.62 × 10−14.98 × 10−4
Fireworks Algorithm4.95 × 10−110004.649.90 × 10−4
Harmony Search8.09 × 10−110001.22 × 10−11.62 × 10−3
Genetic Algorithm3.0310002.80 × 10−13.03 × 10−1
Moth Flame Optimizer4.36706.777.07 × 10−18.72 × 10−3
Clonal Selection Algorithm10.310002.252.05 × 10−2
Cat Swarm OptimizationExpanded Schaffer9.04 × 10−5130.8365.16 × 10−11.81 × 10−7
Artificial Bee Colony Algorithm1.45 × 10−4367.6185.78 × 10−12.90 × 10−7
Fireworks Algorithm1.49 × 10−4209.6741.052.97 × 10−7
Harris Hawks Optimization1.50 × 10−427.3645.30 × 10−23.00 × 10−7
Grey Wolf Optimizer2.21 × 10−4262.4783.97 × 10−14.43 × 10−7
Cuckoo Search3.86 × 10−4509.3745.12 × 10−17.72 × 10−7
Flower Pollination Algorithm6.49 × 10−4894.7161.541.30 × 10−6
Sine Cosine Algorithm1.78 × 10−3510.8844.51 × 10−13.56 × 10−6
Firefly Algorithm2.33 × 10−3945.7821.39 × 1014.67 × 10−6
Particle Swarm Algorithm3.46 × 10−3542.1828.18 × 10−16.92 × 10−6
Bacterial Foraging Optimization4.09 × 10−3974.9465.858.19 × 10−6
Bees Algorithm4.29 × 10−3775.363.538.58 × 10−6
Fish School Search5.77 × 10−3697.2761.591.15 × 10−5
Differential Evolution6.79 × 10−3792.1882.451.36 × 10−5
Clonal Selection Algorithm6.91 × 10−3789.4021.211.38 × 10−5
Moth Flame Optimizer8.75 × 10−3982.446.91 × 10−11.75 × 10−5
Genetic Algorithm8.78 × 10−310003.88 × 10−18.78 × 10−4
Gravitational Search Algorithm8.89 × 10−3998.0881.371.78 × 10−5
Forest Optimization Algorithm9.06 × 10−3960.0186.72 × 10−11.81 × 10−5
Harmony Search9.58 × 10−3978.928.91 × 10−21.92 × 10−5
Bat Algorithm2.40 × 10−2988.6048.10 × 10−14.79 × 10−5
Griewank:
Artificial Bee Colony Algorithm | 4.42 × 10⁻⁵ | 127.17 | 9.13 × 10⁻¹ | 8.84 × 10⁻⁸
Fireworks Algorithm | 1.63 × 10⁻⁴ | 18.25 | 4.96 × 10⁻² | 3.27 × 10⁻⁷
Harris Hawks Optimization | 3.23 × 10⁻⁴ | 773.19 | 9.98 × 10⁻¹ | 6.45 × 10⁻⁷
Cuckoo Search | 6.61 × 10⁻⁴ | 384.45 | 6.14 × 10⁻¹ | 1.32 × 10⁻⁶
Fish School Search | 1.22 × 10⁻³ | 887.88 | 1.95 | 2.45 × 10⁻⁶
Cat Swarm Optimization | 1.95 × 10⁻³ | 180.84 | 7.96 × 10⁻¹ | 3.90 × 10⁻⁶
Grey Wolf Optimizer | 3.40 × 10⁻³ | 439.69 | 6.40 × 10⁻¹ | 6.80 × 10⁻⁶
Differential Evolution | 8.61 × 10⁻³ | 502.86 | 1.36 | 1.72 × 10⁻⁵
Particle Swarm Algorithm | 9.41 × 10⁻³ | 866.57 | 1.36 | 1.88 × 10⁻⁵
Flower Pollination Algorithm | 1.09 × 10⁻² | 1000.00 | 1.89 | 2.18 × 10⁻⁵
Sine Cosine Algorithm | 1.22 × 10⁻² | 417.41 | 5.25 × 10⁻¹ | 2.45 × 10⁻⁵
Clonal Selection Algorithm | 1.38 × 10⁻² | 951.05 | 1.35 | 2.76 × 10⁻⁵
Bees Algorithm | 1.46 × 10⁻² | 946.30 | 4.75 | 2.93 × 10⁻⁵
Harmony Search | 1.69 × 10⁻² | 1000.00 | 1.35 × 10⁻¹ | 3.38 × 10⁻⁵
Genetic Algorithm | 2.18 × 10⁻² | 1000.00 | 3.24 × 10⁻¹ | 2.18 × 10⁻³
Firefly Algorithm | 2.29 × 10⁻² | 971.10 | 1.67 × 10¹ | 4.58 × 10⁻⁵
Bacterial Foraging Optimization | 2.95 × 10⁻² | 998.50 | 5.87 | 5.90 × 10⁻⁵
Gravitational Search Algorithm | 3.15 × 10⁻² | 972.95 | 1.52 | 6.31 × 10⁻⁵
Forest Optimization Algorithm | 3.90 × 10⁻² | 1000.00 | 5.16 × 10⁻¹ | 7.79 × 10⁻⁵
Bat Algorithm | 8.00 × 10⁻² | 994.59 | 9.75 × 10⁻¹ | 1.60 × 10⁻⁴
Moth Flame Optimizer | 1.38 × 10⁻¹ | 998.66 | 9.12 × 10⁻¹ | 2.76 × 10⁻⁴
Pinter:
Sine Cosine Algorithm | 3.69 × 10⁻⁵ | 269.614 | 5.62 × 10⁻¹ | 7.38 × 10⁻⁸
Fireworks Algorithm | 4.79 × 10⁻⁵ | 80.12 | 1.21 | 9.58 × 10⁻⁸
Differential Evolution | 6.54 × 10⁻⁵ | 205.1 | 1.13 | 1.31 × 10⁻⁷
Firefly Algorithm | 6.58 × 10⁻⁵ | 240.378 | 8.05 | 1.32 × 10⁻⁷
Cuckoo Search | 6.72 × 10⁻⁵ | 366.782 | 1.35 | 1.34 × 10⁻⁷
Cat Swarm Optimization | 8.66 × 10⁻⁵ | 42.148 | 2.49 × 10⁻¹ | 1.73 × 10⁻⁷
Harris Hawks Optimization | 9.47 × 10⁻⁵ | 17.048 | 9.11 × 10⁻² | 1.89 × 10⁻⁷
Flower Pollination Algorithm | 1.61 × 10⁻⁴ | 889.718 | 2.46 | 3.22 × 10⁻⁷
Artificial Bee Colony Algorithm | 1.92 × 10⁻⁴ | 353.71 | 1.04 | 3.85 × 10⁻⁷
Grey Wolf Optimizer | 3.74 × 10⁻² | 39.032 | 1.83 × 10⁻¹ | 7.48 × 10⁻⁵
Gravitational Search Algorithm | 1.01 | 357.948 | 1.28 | 2.01 × 10⁻³
Moth Flame Optimizer | 2.37 | 632.918 | 1.57 | 4.75 × 10⁻³
Particle Swarm Algorithm | 2.59 | 320.354 | 7.60 × 10⁻¹ | 5.19 × 10⁻³
Forest Optimization Algorithm | 2.65 | 1000 | 1.33 | 5.30 × 10⁻³
Fish School Search | 3.96 | 999.66 | 6.25 | 7.93 × 10⁻³
Harmony Search | 9.54 | 1000 | 1.25 × 10⁻¹ | 1.91 × 10⁻²
Bacterial Foraging Optimization | 10.4 | 1000 | 8.44 | 2.09 × 10⁻²
Bees Algorithm | 13.2 | 597.326 | 7.65 | 2.64 × 10⁻²
Genetic Algorithm | 17.8 | 1000 | 1.03 | 1.78
Bat Algorithm | 32.2 | 761.046 | 1.84 | 6.45 × 10⁻²
Clonal Selection Algorithm | 56.0 | 985.222 | 3.64 | 1.12 × 10⁻¹
Powell:
Sine Cosine Algorithm | 3.79 × 10⁻⁵ | 270.806 | 3.28 × 10⁻¹ | 7.58 × 10⁻⁸
Fireworks Algorithm | 4.36 × 10⁻⁵ | 112.154 | 6.03 × 10⁻¹ | 8.72 × 10⁻⁸
Grey Wolf Optimizer | 6.16 × 10⁻⁵ | 31.632 | 1.03 × 10⁻¹ | 1.23 × 10⁻⁷
Flower Pollination Algorithm | 6.25 × 10⁻⁵ | 339.416 | 7.87 × 10⁻¹ | 1.25 × 10⁻⁷
Firefly Algorithm | 6.38 × 10⁻⁵ | 178.004 | 4.77 | 1.28 × 10⁻⁷
Cuckoo Search | 6.39 × 10⁻⁵ | 138.046 | 3.20 × 10⁻¹ | 1.28 × 10⁻⁷
Gravitational Search Algorithm | 6.54 × 10⁻⁵ | 194.274 | 6.73 × 10⁻¹ | 1.31 × 10⁻⁷
Bat Algorithm | 6.68 × 10⁻⁵ | 141.476 | 1.89 × 10⁻¹ | 1.34 × 10⁻⁷
Fish School Search | 7.18 × 10⁻⁵ | 952.01 | 2.81 | 1.44 × 10⁻⁷
Particle Swarm Algorithm | 7.35 × 10⁻⁵ | 171.714 | 3.70 × 10⁻¹ | 1.47 × 10⁻⁷
Cat Swarm Optimization | 8.20 × 10⁻⁵ | 29.168 | 1.32 × 10⁻¹ | 1.64 × 10⁻⁷
Harris Hawks Optimization | 1.72 × 10⁻⁴ | 14.4 | 5.09 × 10⁻² | 3.44 × 10⁻⁷
Clonal Selection Algorithm | 1.80 × 10⁻⁴ | 308.52 | 8.33 × 10⁻¹ | 3.59 × 10⁻⁷
Artificial Bee Colony Algorithm | 2.90 × 10⁻⁴ | 958.778 | 1.95 | 5.80 × 10⁻⁷
Bees Algorithm | 5.67 × 10⁻⁴ | 808.19 | 6.47 | 1.13 × 10⁻⁶
Forest Optimization Algorithm | 2.51 × 10⁻² | 997.9 | 8.70 × 10⁻¹ | 5.03 × 10⁻⁵
Bacterial Foraging Optimization | 3.43 × 10⁻² | 997.456 | 5.08 | 6.87 × 10⁻⁵
Differential Evolution | 4.52 | 176.416 | 5.83 × 10⁻¹ | 9.05 × 10⁻³
Genetic Algorithm | 7.90 | 1000 | 5.65 × 10⁻¹ | 7.90 × 10⁻¹
Harmony Search | 11.7 | 1000 | 1.01 × 10⁻¹ | 2.33 × 10⁻²
Moth Flame Optimizer | 98.2 | 892.974 | 1.11 | 1.96 × 10⁻¹
Qing:
Artificial Bee Colony Algorithm | 5.73 × 10⁻⁵ | 57.068 | 1.23 × 10⁻¹ | 1.15 × 10⁻⁷
Differential Evolution | 6.46 × 10⁻⁵ | 227.682 | 7.35 × 10⁻¹ | 1.29 × 10⁻⁷
Bees Algorithm | 6.54 × 10⁻⁵ | 60.376 | 4.47 × 10⁻¹ | 1.31 × 10⁻⁷
Gravitational Search Algorithm | 6.55 × 10⁻⁵ | 226.204 | 7.34 × 10⁻¹ | 1.31 × 10⁻⁷
Bat Algorithm | 6.58 × 10⁻⁵ | 157.976 | 1.48 × 10⁻¹ | 1.32 × 10⁻⁷
Firefly Algorithm | 6.63 × 10⁻⁵ | 204.654 | 3.69 | 1.33 × 10⁻⁷
Moth Flame Optimizer | 6.92 × 10⁻⁵ | 431.196 | 6.93 × 10⁻¹ | 1.38 × 10⁻⁷
Particle Swarm Algorithm | 6.93 × 10⁻⁵ | 130.818 | 2.41 × 10⁻¹ | 1.39 × 10⁻⁷
Cuckoo Search | 6.94 × 10⁻⁵ | 172.876 | 2.09 × 10⁻¹ | 1.39 × 10⁻⁷
Fish School Search | 7.02 × 10⁻⁵ | 989.846 | 2.07 | 1.40 × 10⁻⁷
Harris Hawks Optimization | 1.17 × 10⁻⁴ | 737.712 | 1.53 | 2.34 × 10⁻⁷
Flower Pollination Algorithm | 3.91 × 10⁻⁴ | 978.582 | 1.29 | 7.81 × 10⁻⁷
Forest Optimization Algorithm | 3.16 × 10⁻³ | 996.474 | 4.68 × 10⁻¹ | 6.32 × 10⁻⁶
Bacterial Foraging Optimization | 7.55 × 10⁻³ | 1000 | 3.94 | 1.51 × 10⁻⁵
Fireworks Algorithm | 1.32 × 10⁻² | 1000 | 3.85 | 2.64 × 10⁻⁵
Harmony Search | 1.77 × 10⁻² | 1000 | 6.30 × 10⁻² | 3.54 × 10⁻⁵
Clonal Selection Algorithm | 2.53 × 10⁻² | 911.428 | 9.80 × 10⁻¹ | 5.07 × 10⁻⁵
Cat Swarm Optimization | 3.69 × 10⁻² | 1000 | 3.08 | 7.38 × 10⁻⁵
Grey Wolf Optimizer | 7.70 × 10⁻² | 1000 | 1.43 | 1.54 × 10⁻⁴
Genetic Algorithm | 1.85 × 10⁻¹ | 1000 | 3.63 × 10⁻¹ | 1.85 × 10⁻²
Sine Cosine Algorithm | 2.81 | 1000 | 6.38 × 10⁻¹ | 5.62 × 10⁻³
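For reference, the benchmark families named above have standard closed forms. The sketch below gives minimal NumPy implementations of three of them using the common textbook definitions (not code from this paper); each has a known global minimum of zero.

```python
import numpy as np

def griewank(x):
    # f(x) = 1 + sum(x_i^2)/4000 - prod(cos(x_i / sqrt(i))); minimum 0 at the origin
    x = np.asarray(x, dtype=float)
    i = np.arange(1, x.size + 1)
    return 1.0 + np.sum(x**2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i)))

def qing(x):
    # f(x) = sum((x_i^2 - i)^2); minimum 0 at x_i = ±sqrt(i)
    x = np.asarray(x, dtype=float)
    i = np.arange(1, x.size + 1)
    return np.sum((x**2 - i)**2)

def expanded_schaffer(x):
    # Chains Schaffer's F6 over consecutive (wrapped) coordinate pairs; minimum 0 at the origin
    x = np.asarray(x, dtype=float)
    y = np.roll(x, -1)
    s2 = x**2 + y**2
    return np.sum(0.5 + (np.sin(np.sqrt(s2))**2 - 0.5) / (1 + 0.001 * s2)**2)
```

Because the optima are known, the error and cost columns in the table can be read as distances from a perfect score of zero.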
Table 5. Different applications of NIAs in design optimization.
Reference | Publication Year | Application | Algorithm
[427] | 1996 | Conceptual design | GA
[428] | 2022 | Conceptual design | MFA, MPA, SMA, SSA
[433] | 2002 | Conceptual design | PSO
[437] | 2004 | Multidisciplinary design optimization | GA
[438] | 2009 | Multidisciplinary design optimization | PSO
[439] | 2016 | Multidisciplinary design optimization | ABC
[440] | 2019 | Engine modeling and design | GA, PSO, ACO, ABC, IWO
[441] | 2009 | Electric motor optimization | GA
[442] | 2021 | Propulsion system optimization | GA
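The Genetic Algorithm recurs throughout this table, so a minimal real-coded GA is a useful mental model for how these design problems are searched. The sketch below is a generic textbook variant (tournament selection, blend crossover, Gaussian mutation, elitism), not the specific implementations used in the cited works; all names and parameters are illustrative.

```python
import numpy as np

def genetic_algorithm(cost, bounds, pop_size=40, generations=200,
                      crossover_rate=0.9, mutation_rate=0.1, rng=None):
    """Minimize `cost` over the box `bounds` with a simple real-coded GA."""
    rng = np.random.default_rng(rng)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = lo.size
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.apply_along_axis(cost, 1, pop)
    for _ in range(generations):
        # Tournament selection: each parent is the better of two random picks
        a, b = rng.integers(pop_size, size=(2, pop_size))
        parents = pop[np.where(fit[a] < fit[b], a, b)]
        # Arithmetic (blend) crossover between consecutive parent pairs
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            if rng.random() < crossover_rate:
                w = rng.random(dim)
                children[i] = w * parents[i] + (1 - w) * parents[i + 1]
                children[i + 1] = w * parents[i + 1] + (1 - w) * parents[i]
        # Gaussian mutation, clipped back into the feasible box
        mask = rng.random(children.shape) < mutation_rate
        steps = rng.normal(0.0, 0.1 * (hi - lo), children.shape)
        children = np.clip(children + mask * steps, lo, hi)
        # Elitism: carry the best individual found so far into the next generation
        child_fit = np.apply_along_axis(cost, 1, children)
        best = np.argmin(fit)
        children[0], child_fit[0] = pop[best], fit[best]
        pop, fit = children, child_fit
    best = np.argmin(fit)
    return pop[best], fit[best]
```

In a conceptual-design setting, `cost` would wrap a sizing or performance model and the bounds would delimit the design variables (span, chord, motor constants, and so on).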
Table 6. Different applications of NIAs in structure optimization.
Reference | Publication Year | Application | Algorithm
[445] | 2005 | Component (rib, wing, etc.) design | GA
[446] | 2009 | Pressure bulkhead design | GA, PSO, CO
[447] | 2021 | Welding process optimization | GA
[448] | 2015 | Elastic optimization | PSO
[449] | 2018 | Stiffened panels optimization | HS
[450] | 2014 | Aeroelastic composite wing design | BCO
[451] | 2014 | Aeroelastic tailoring and scaling | BFO
Table 7. Different applications of NIAs in aerodynamic optimization.
Reference | Publication Year | Application | Algorithm
[455] | 2018 | Airfoil design | GA, SA
[456] | 2001 | Wing and blade airfoil design | ES
[457] | 2019 | Airfoil design | FFO
[458] | 2021 | Airfoil design | PSO, GA
[459] | 2016 | Airfoil design | CS
[460] | 2015 | Blade design | ABC
[461] | 2017 | Airfoil design | GSA
[462] | 2022 | Airfoil design | HS
[463] | 2013 | Aerodynamic shape optimization | HS
[382] | 2016 | Aerodynamic shape optimization | SCA
[466] | 1999 | Wing design | GA
[467] | 2004 | Wing design | PSO
[468] | 2011 | Wing design | ACO
[469] | 2019 | Wing design | DE
[470] | 2019 | Wing design | FSO
[471] | 2017 | Wing tip design | ABC
[474] | 2016 | Equipment placement in body | GA
[475] | 2017 | Equipment placement in body | BA
[476] | 2017 | Body shape design | GA
[478] | 2017 | Body shape design | PSO
[479] | 2012 | Body sizing | DE
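A common pattern behind the airfoil-design entries above is to expose a small geometric parameter vector to the optimizer. As an illustration (an assumed setup, not taken from the cited papers), the standard NACA 4-digit equations turn three parameters into a candidate shape that any of the listed algorithms can search over:

```python
import numpy as np

def naca4(m, p, t, n=101):
    """Camber line and half-thickness of a NACA 4-digit airfoil.

    m: maximum camber, p: camber position, t: thickness, all as chord
    fractions with 0 < p < 1. Returns x stations, camber yc, half-thickness yt.
    """
    x = np.linspace(0.0, 1.0, n)
    # Standard thickness distribution (open trailing edge, -0.1015 term)
    yt = 5.0 * t * (0.2969 * np.sqrt(x) - 0.1260 * x - 0.3516 * x**2
                    + 0.2843 * x**3 - 0.1015 * x**4)
    # Piecewise-parabolic mean camber line, split at the camber position p
    yc = np.where(x < p,
                  m / p**2 * (2 * p * x - x**2),
                  m / (1 - p)**2 * ((1 - 2 * p) + 2 * p * x - x**2))
    return x, yc, yt
```

An optimizer such as the GA or PSO entries above would then score each candidate (m, p, t) with an aerodynamic cost, for example drag predicted by a panel method or CFD solver.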
Table 8. Different applications of NIAs in guidance and control optimization.
Reference | Publication Year | Application | Algorithm
[481] | 2022 | Path and motion planning | PSO
[482] | 2019 | Path and motion planning | ACO
[483] | 2020 | Path and motion planning | DE
[484] | 2023 | Path and motion planning | GWO
[485] | 2021 | Path and motion planning | GA
[486] | 2019 | Path and motion planning | BA
[487] | 2020 | Target tracking | BA
[488] | 2017 | Optimal control | BA
[489] | 2013 | Optimal landing | BA
[490] | 2017 | Route evaluation | CSA
[491] | 2019 | Trajectory tracking | CS
[492] | 2019 | Trajectory planning | CS
[493] | 2011 | Path and motion planning | DE, PSO, GA
[494] | 2005 | Path and motion planning | DE
[495] | 2016 | Path and motion planning | FAO, DE, PSO, GA
[496] | 2022 | Path and motion planning | FAO, DE
[497] | 2022 | Trajectory planning | FPA
[498] | 2012 | Path and motion planning | GSA
[499] | 2020 | Path and motion planning | GWO
[500] | 2022 | Path and motion planning | BOA
[504] | 2023 | Drone-truck path planning | WWO, GA, PSO, DE, BBO, EBO
[505] | 2020 | Optimal landing | MFO, BOA, ABC
[506] | 2021 | Optimal landing | DFO, DE
[507] | 2020 | Optimal landing | FPA
[508] | 2018 | Optimal landing | FPA
[509] | 2021 | Optimal landing | GWO
[510] | 2017 | Optimal landing | HS
[511] | 2014 | Optimal landing | BA
[512] | 2008 | Optimal landing | CSA
[513,514] | 2019 | Optimal space trajectory | GA, PSO, ACO
[516] | 2015 | Optimal space trajectory | GSA
[517] | 2020 | Optimal space control | FAO
[518] | 2016 | Optimal trajectory | FAO
[519] | 2016 | Air traffic control | GSA
[520] | 2019 | Trajectory tracking | GWO
[521] | 2019 | Engine control | GWO
[522] | 2015 | Robust control | BA
[523] | 2015 | Control parameter tuning | ABC
[524] | 2019 | Control parameter tuning | BFA
[525] | 2010 | Control parameter tuning | BFA
[526] | 2016 | Control parameter tuning | BA
[527] | 2015 | Control parameter tuning | BeeA
[528] | 2015 | Control parameter tuning | CS
[529] | 2022 | Control parameter tuning | BA, PSO, CS
[530] | 2020 | Control parameter tuning | CS
[531] | 2016 | Control parameter tuning | DE
[532] | 2016 | Control parameter tuning | DE
[533] | 2021 | Control parameter tuning | FA
[534] | 2021 | Control parameter tuning | FA
[535] | 2015 | Control parameter tuning | FA
[536] | 2022 | Control parameter tuning | FA
[537] | 2018 | Control parameter tuning | FAO
[538] | 2019 | Control parameter tuning | FPA
[539] | 2020 | Control parameter tuning | GSO
[540] | 2017 | Control parameter tuning | GSO
[541] | 2021 | Control parameter tuning | HS
[542] | 2020 | Control parameter tuning | HHO
[543] | 2023 | Control parameter tuning | PIO
[544] | 2022 | Control parameter tuning | PSO
[545] | 2022 | Swarm motion and formation | BeeA
[546] | 2019 | Swarm motion and formation | DE
[547] | 2019 | Swarm motion and formation | DE
[548] | 2020 | Swarm motion and formation | GWO
[549] | 2022 | Swarm motion and formation | MFO
[550] | 2023 | Swarm motion and formation | PSO
[551] | 2020 | Swarm motion and formation | GA
[552] | 2023 | Swarm motion and formation | ACO, DE
[553] | 2017 | Swarm motion and formation | HS
[554] | 2022 | Swarm mission planning and task allocation | FAO
[555] | 2022 | Swarm mission planning and task allocation | FAO
[556] | 2022 | Swarm mission planning and task allocation | HS
[557] | 2023 | Swarm mission planning and task allocation | LSA, PSO, SA
[558] | 2017 | Vibration reduction | BeeA
[559] | 2014 | Vibration reduction | BeeA
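Control parameter tuning dominates this table, and most of the cited swarm methods follow the same loop as canonical PSO: each candidate gain vector is scored by a closed-loop cost (for instance, integrated tracking error), and particles are pulled toward their personal best and the swarm's global best. A generic sketch with illustrative names and parameters, not any of the cited implementations:

```python
import numpy as np

def pso(cost, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, rng=None):
    """Minimize `cost` over the box `bounds` with canonical inertia-weight PSO."""
    rng = np.random.default_rng(rng)
    lo, hi = np.asarray(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.apply_along_axis(cost, 1, x)
    g = np.argmin(pbest_f)
    gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # Inertia + cognitive pull (personal best) + social pull (global best)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(cost, 1, x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = np.argmin(pbest_f)
        if pbest_f[g] < gbest_f:
            gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    return gbest, gbest_f
```

For PID tuning, `cost` would simulate the closed loop for a candidate (Kp, Ki, Kd) and return a tracking-error metric such as the ITAE.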
Table 9. Different applications of NIAs in system identification optimization.
Reference | Publication Year | Application | Algorithm
[560] | 2015 | Helicopter UAV identification | ABC, GA
[561] | 2017 | Helicopter UAV identification | ABC, PSO
[563] | 2019 | Quadrotor identification | PSO, CS
[564] | 2014 | Quadrotor identification | GA
[565] | 2016 | Multirotor UAV identification | DE
[566] | 2014 | Helicopter UAV identification | DE
[567] | 2022 | Fixed-wing drone identification | AlO, DA, GOA, GWO, SlpSO, WOA, SCA, WCA, ES, MFO
[568] | 2014 | Aircraft identification | HS
[569] | 2014 | Helicopter UAV | HS
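The identification entries share a common formulation: choose model parameters that minimize the mismatch between predicted and measured responses. The sketch below pairs a standard DE/rand/1/bin optimizer with an illustrative first-order decay model (an assumed toy task, not a real flight-data set) to show the pattern:

```python
import numpy as np

def differential_evolution(cost, bounds, pop=30, iters=300, F=0.8, CR=0.9, rng=None):
    """Minimize `cost` with the classic DE/rand/1/bin scheme."""
    rng = np.random.default_rng(rng)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = lo.size
    x = rng.uniform(lo, hi, (pop, dim))
    f = np.apply_along_axis(cost, 1, x)
    for _ in range(iters):
        for i in range(pop):
            # Mutation: perturb one random member by a scaled difference of two others
            r1, r2, r3 = rng.choice([j for j in range(pop) if j != i], 3, replace=False)
            mutant = np.clip(x[r1] + F * (x[r2] - x[r3]), lo, hi)
            # Binomial crossover, guaranteeing at least one mutated gene
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, x[i])
            ft = cost(trial)
            if ft <= f[i]:        # greedy selection
                x[i], f[i] = trial, ft
    b = np.argmin(f)
    return x[b], f[b]

# Illustrative identification task: recover (a, b) of y(t) = a * exp(-b t)
t = np.linspace(0.0, 5.0, 50)
y_meas = 2.0 * np.exp(-0.5 * t)
theta, err = differential_evolution(
    lambda p: float(np.sum((p[0] * np.exp(-p[1] * t) - y_meas)**2)),
    [(0.0, 5.0), (0.0, 5.0)], rng=2)
```

In the cited works, the residual would instead compare a UAV dynamics model (e.g., a state-space helicopter model) against recorded flight responses.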
Table 10. Different applications of NIAs in navigation optimization.
Reference | Publication Year | Application | Algorithm
[570] | 2022 | Automatic drone navigation | BA, MFO, PSO, CS, GWO
[571] | 2022 | Localization in swarm | PSO
[572] | 2022 | INS error reduction | ABC
[573] | 2014 | Target recognition | ABC
[574] | 2021 | Indoor navigation | CSA
[575] | 2021 | Target recognition | CS
[576] | 2018 | Localization in swarm | DE
[577] | 2016 | Radar imaging | DE
[578] | 2021 | Localization in swarm | GWO
[484] | 2023 | GPS-denied navigation | GWO
[579] | 2016 | Obstacle avoidance | GWO
[580] | 2022 | Target tracking | SCA, PSO, FAO, SMOA
[581] | 2023 | Target tracking | PSO
[582] | 2022 | INS error reduction | PIO
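Several navigation entries rely on the Grey Wolf Optimizer, whose core update moves every candidate toward the three current best solutions (alpha, beta, delta) under a coefficient that decays from 2 to 0, shifting the search from exploration to exploitation. A minimal sketch of the canonical algorithm (illustrative names; a localization residual would play the role of `cost`):

```python
import numpy as np

def grey_wolf_optimizer(cost, bounds, n_wolves=30, iters=200, rng=None):
    """Minimize `cost` over the box `bounds` with the canonical GWO update."""
    rng = np.random.default_rng(rng)
    lo, hi = np.asarray(bounds, dtype=float).T
    X = rng.uniform(lo, hi, (n_wolves, lo.size))
    best_x, best_f = None, np.inf
    for k in range(iters):
        f = np.apply_along_axis(cost, 1, X)
        order = np.argsort(f)
        if f[order[0]] < best_f:                      # track the best wolf ever seen
            best_x, best_f = X[order[0]].copy(), f[order[0]]
        alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
        a = 2.0 * (1.0 - k / iters)                   # decays linearly from 2 to 0
        new_X = np.zeros_like(X)
        for leader in (alpha, beta, delta):
            A = 2.0 * a * rng.random(X.shape) - a     # |A| > 1 explores, |A| < 1 exploits
            C = 2.0 * rng.random(X.shape)
            D = np.abs(C * leader - X)                # randomized distance to the leader
            new_X += leader - A * D
        X = np.clip(new_X / 3.0, lo, hi)              # average of the three pulls
    return best_x, best_f
```

For GPS-denied localization, the decision vector would be the estimated position and the cost the disagreement with range or bearing measurements.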
Table 11. Different applications of NIAs in communication optimization.
Reference | Publication Year | Application | Algorithm
[585] | 2021 | Optimized routing | GA
[586] | 2020 | Network deployment and coverage | GA
[587] | 2023 | Mobile edge computing | DE
[588] | 2019 | Coverage optimization | BeeA
[589] | 2020 | Coverage optimization | PSO
[590] | 2022 | Optimal charging (by path planning and obstacle avoidance) | PSO
[593] | 2023 | Wireless sensing | ACO
[594] | 2022 | Coverage optimization | GA
Table 12. Currently studied algorithms in aerospace applications.
Algorithm | Conceptual Design | Multidisciplinary Design | Engine Design | Structure Design | Airfoil Design | Wing & Tail Design | Body Design | Control | System Identification | Navigation | Drone Communication
Artificial Bee Colony
Bacterial Foraging Optimization
Bat Algorithm
Bees Algorithm
Cat Swarm Optimization
Clonal Selection Algorithm
Cuckoo Search
Differential Evolution
Firefly Algorithm
Fireworks Algorithm
Fish School Search
Flower Pollination Algorithm
Forest Optimization Algorithm
Genetic Algorithm
Gravitational Search Algorithm
Grey Wolf Optimizer
Harmony Search
Harris Hawks Optimization
Moth Flame Optimizer
Particle Swarm Algorithm
Sine Cosine Algorithm

Share and Cite

Darvishpoor, S.; Darvishpour, A.; Escarcega, M.; Hassanalian, M. Nature-Inspired Algorithms from Oceans to Space: A Comprehensive Review of Heuristic and Meta-Heuristic Optimization Algorithms and Their Potential Applications in Drones. Drones 2023, 7, 427. https://0-doi-org.brum.beds.ac.uk/10.3390/drones7070427