Article

Botox Optimization Algorithm: A New Human-Based Metaheuristic Algorithm for Solving Optimization Problems

by Marie Hubálovská, Štěpán Hubálovský and Pavel Trojovský *
Department of Technics, Faculty of Education, University of Hradec Kralove, 50003 Hradec Králové, Czech Republic
* Author to whom correspondence should be addressed.
Submission received: 4 February 2024 / Revised: 21 February 2024 / Accepted: 22 February 2024 / Published: 23 February 2024

Abstract:
This paper introduces the Botox Optimization Algorithm (BOA), a novel metaheuristic inspired by the Botox operation mechanism. The algorithm is designed to address optimization problems, utilizing a human-based approach. Taking cues from Botox procedures, where defects are targeted and treated to enhance beauty, the BOA is formulated and mathematically modeled. Evaluation on the CEC 2017 test suite showcases the BOA’s ability to balance exploration and exploitation, delivering competitive solutions. Comparative analysis against twelve well-known metaheuristic algorithms demonstrates the BOA’s superior performance across various benchmark functions, with statistically significant advantages. Moreover, application to constrained optimization problems from the CEC 2011 test suite highlights the BOA’s effectiveness in real-world optimization tasks.

1. Introduction

Optimization problems, characterized by multiple feasible solutions, involve finding the best solution among them. Mathematically, these problems consist of decision variables, constraints, and an objective function. The optimization process aims to determine optimal values for decision variables, adhering to constraints while optimizing the objective function. Numerous real-world applications in science, engineering, industry, and technology necessitate effective optimization techniques. Two main approaches, deterministic and stochastic, address these challenges. Deterministic approaches, including gradient-based and non-gradient-based methods, excel in handling simpler problems but face limitations in complexity and local optima traps. To address complex, nonlinear, and high-dimensional challenges, researchers have developed stochastic approaches, acknowledging the limitations of deterministic methods in practical optimization scenarios [1,2,3,4,5,6].
Metaheuristic algorithms represent a widely employed stochastic approach for effective optimization problem-solving. Leveraging random search, random operators, and trial-and-error processes, these algorithms yield suitable solutions. The optimization process initiates with the random generation of candidate solutions, progressively enhancing them through iterations. The final output is the best-improved candidate solution. While the inherent randomness poses challenges in guaranteeing a global optimal solution, solutions obtained from metaheuristic algorithms are considered to be quasi-optimal due to their proximity to the global optimum. The pursuit of more effective quasi-optimal solutions, closely aligning with the global optimum, drives the development of various metaheuristic algorithms [7,8].
For metaheuristic algorithms to effectively address optimization problems, they must conduct thorough searches at both the global and local levels within the problem-solving space. Global search, aligned with exploration, denotes the algorithm’s proficiency in extensively exploring the problem-solving space to identify the region containing the primary optimum and avoid local optima. Local search, associated with exploitation, illustrates the algorithm’s ability to closely investigate promising solutions, aiming for convergence to the global optimal solution. The success of a metaheuristic algorithm is contingent on striking a balance between exploration and exploitation throughout the search process [9].
The central research inquiry revolves around whether, given the multitude of existing metaheuristic algorithms, there remains a necessity to develop novel ones. In addressing this query, the No Free Lunch (NFL) principle [10] asserts that a metaheuristic algorithm’s success in optimizing a specific set of problems does not guarantee comparable performance across all optimization tasks. The NFL theorem posits that no single metaheuristic algorithm can be deemed the optimal solution for all optimization challenges. It highlights the unpredictability of an algorithm’s success or failure in addressing different optimization problems, emphasizing that a method that is successful in converging to the global optimum for one problem may encounter difficulties, such as local optima entrapment, when applied to another problem. Consequently, the NFL theorem discourages assumptions about the universal effectiveness of a metaheuristic algorithm and encourages ongoing exploration and introduction of new algorithms to enhance solutions for diverse optimization problems.
This paper brings innovation and novelty to the forefront by introducing the Botox Optimization Algorithm (BOA), a novel metaheuristic approach for solving optimization problems. The key contributions of this paper encompass the following:
  • The BOA is introduced by emulating the Botox injection process, drawing inspiration from the enhancement of facial beauty through the treatment of defects in specific facial areas.
  • BOA theory is described and then mathematically modeled.
  • The BOA’s performance is rigorously assessed using the CEC 2017 test suite, showcasing its efficacy in solving optimization problems.
  • The algorithm’s robustness is further tested in handling real-world applications, particularly in optimizing twenty-two constrained problems from the CEC 2011 test suite.
  • The BOA’s performance is objectively compared with twelve established metaheuristic algorithms, establishing its competitive edge and effectiveness.
This paper follows a structured outline: Section 2 encompasses a comprehensive literature review. Section 3 introduces and models the Botox Optimization Algorithm. Section 4 presents simulation studies and results. The efficacy of the BOA in real-world applications is explored in Section 5. The paper concludes with Section 6, offering conclusions and suggestions for future research.

2. Literature Review

Metaheuristic algorithms draw inspiration from diverse sources, such as natural phenomena, living organisms’ lifestyles, the laws of physics, biology, human interactions, and game rules. Classified into five groups based on their design principles, these are swarm-based, evolutionary-based, physics-based, human-based, and game-based approaches.
Swarm-based algorithms, like Particle Swarm Optimization (PSO) [11], Ant Colony Optimization (ACO) [12], Artificial Bee Colony (ABC) [13], and the Firefly Algorithm (FA) [14], emulate the behaviors of animals, insects, plants, birds, and aquatic life. PSO models the group movement of birds or fish searching for food, ACO is inspired by ants finding the shortest communication path, ABC mimics honey bees’ activities in locating food, and the FA replicates fireflies’ optical communication. Noteworthy wildlife activities, such as foraging, hunting, chasing, migration, and digging, serve as the foundation for swarm-based metaheuristic algorithms like the Pufferfish Optimization Algorithm (POA) [15], Golden Jackal Optimization (GJO) [16], Tunicate Swarm Algorithm (TSA) [17], Coati Optimization Algorithm (COA) [18], Chameleon Swarm Algorithm (CSA) [19], Wild Geese Algorithm (WGA) [20], White Shark Optimizer (WSO) [21], Grey Wolf Optimizer (GWO) [22], African Vultures Optimization Algorithm (AVOA) [23], Mantis Search Algorithm (MSA) [24], Marine Predator Algorithm (MPA) [25], Whale Optimization Algorithm (WOA) [26], Orca Predation Algorithm (OPA) [27], Reptile Search Algorithm (RSA) [28], Honey Badger Algorithm (HBA) [29], and Kookaburra Optimization Algorithm (KOA) [30].
Evolutionary-based metaheuristic algorithms derive inspiration from the biological sciences, genetics, survival of the fittest, natural selection, and random operators. Prominent algorithms in this group include the Genetic Algorithm (GA) [31] and Differential Evolution (DE) [32], designed to emulate reproduction and Darwin’s theory of evolution, and to incorporate random operators like mutation, crossover, and selection. Artificial Immune Systems (AISs) are modeled after the human body’s defense system [33]. Other algorithms in this category encompass Genetic Programming (GP) [34], Cultural Algorithm (CA) [35], and Evolution Strategy (ES) [36].
Physics-based metaheuristic algorithms are developed by simulating laws, forces, transformations, and other concepts from physics. Simulated Annealing (SA) [37], a widely used algorithm in this category, emulates the metal annealing process, where metals are melted and slowly cooled to achieve optimal crystal formation. Various algorithms, including the Momentum Search Algorithm (MSA) [38], Spring Search Algorithm (SSA) [39], and Gravitational Search Algorithm (GSA) [40], are based on physical forces and Newton’s laws of motion. The Black Hole Algorithm (BHA) [41] and Multi-Verse Optimizer (MVO) [42] draw inspiration from cosmological concepts. Other physics-based metaheuristic algorithms include the Equilibrium Optimizer (EO) [43], Archimedes Optimization Algorithm (AOA) [44], Henry Gas Optimization (HGO) [45], Electro-Magnetism Optimization (EMO) [46], Lichtenberg Algorithm (LA) [47], Nuclear Reaction Optimization (NRO) [48], Thermal Exchange Optimization (TEO) [49], and Water Cycle Algorithm (WCA) [50].
Human-based metaheuristic algorithms are designed to emulate human behaviors, interactions, thoughts, and social activities. Notably, Teaching–Learning-Based Optimization (TLBO) draws inspiration from educational interactions in classrooms, simulating knowledge exchange among teachers and students [51]. The Special Forces Algorithm (SFA) mirrors real-life special forces missions, incorporating mechanisms to simulate UAV-assisted searches and contact loss due to force majeure [52]. The Political algorithm (PO) [53] replicates democratic parliamentary politics, offering a unique optimization approach inspired by political decision-making dynamics. The Chef-Based Optimization Algorithm (CHBO) [54] takes cues from individuals learning cooking skills in classes. Other human-based metaheuristic algorithms include the Coronavirus Herd Immunity Optimizer (CHIO) [55], Doctor and Patient Optimization (DPO) [56], War Strategy Optimization (WSO) [57], Election-Based Optimization Algorithm (EBOA) [58], Gaining Sharing Knowledge-Based Algorithm (GSK) [59], Following Optimization Algorithm (FOA) [60], Driving Training-Based Optimization (DTBO) [5], Sewing Training-Based Optimization (STBO) [61], and Ali Baba and the Forty Thieves (AFT) [62].
Game-based metaheuristic algorithms are formulated by simulating player behavior, influential figures, and the rules of various individual and team games. Algorithms like Football Game-Based Optimization (FGBO) [63] and Volleyball Premier League (VPL) [64] are inspired by modeling league matches. The Hide Object Game Optimizer (HOGO) [65] is designed based on players’ attempts to locate hidden objects on the playing field. The Darts Game Optimizer (DGO) [66] incorporates the skill of players throwing darts to earn more points. The Orientation Search Algorithm (OSA) [67] emulates players’ movements directed by referees. Other game-based metaheuristic algorithms include the Dice Game Optimizer (DGO) [68], Golf Optimization Algorithm (GOA), League Championship Algorithm (LCA) [6], Ring Toss Game-Based Optimization (RTGBO) [69], and Puzzle Optimization Algorithm (POA) [70].
In addition to the original versions of metaheuristic algorithms, many researchers have tried to improve the performance of existing algorithms by developing their improved versions, such as the Enhanced Snake Optimizer (ESO) [71], Improved Sparrow Search Algorithm (ISSA) [72], and multi-strategy-based Adaptive Sine–Cosine Algorithm (ASCA) [73].
To the best of our knowledge, as gleaned from the literature review, no metaheuristic algorithm inspired by the human activity of Botox injections has been introduced thus far. The process of enhancing facial beauty by injecting substances to eliminate facial defects presents an intelligent methodology that could serve as the foundation for a novel metaheuristic algorithm. To bridge this research gap in metaheuristic algorithm studies, this paper introduces a new human-based metaheuristic algorithm, grounded in the mathematical modeling of Botox injections in specific facial areas, as elaborated in the subsequent section.

3. Botox Optimization Algorithm

Within this section, the Botox Optimization Algorithm (BOA) is elucidated, beginning with an exploration of its theory and source of inspiration. Following this, the mathematical modeling of the implementation steps for the proposed BOA approach is detailed.

3.1. Inspiration of BOA

Enhancing facial beauty is a significant and intricate concern for many individuals, with the emergence of facial wrinkles often causing distress. Wrinkles result from the repetitive contraction of underlying facial muscles and dermal atrophy. To address this issue, small doses of botulinum toxin are strategically injected into specific overactive muscles. This injection induces localized muscle relaxation, subsequently leading to the smoothing of the skin in these hyperactive muscle areas [74]. Botulinum toxin, a potent neurotoxin protein derived from the bacterium Clostridium botulinum, is employed for this purpose. The administration of this toxin results in the targeted muscles being temporarily paralyzed, preventing the formation of wrinkles in the treated area [75]. Botox, the cosmetic use of botulinum toxin, gained approval from the U.S. Food and Drug Administration (FDA) in 2002 for treating glabellar complex muscles responsible for frown lines, and in 2013 for addressing lateral orbicularis oculi muscles associated with crow’s feet [76].
Botox exerts a significant impact on diminishing facial wrinkles and enhancing facial aesthetics. The strategic injection of Botox into specific facial areas to eliminate wrinkles serves as an intelligent process, forming the foundational concept behind the design of the approach proposed by the BOA.

3.2. Algorithm Initialization

The proposed BOA methodology operates as a population-based optimizer, leveraging the collective search capabilities of its participants in an iterative process to generate viable solutions for optimization problems. In this context, individuals seeking Botox injections constitute the BOA population. Each member contributes to decision variable values based on their position in the problem-solving space, mathematically represented as a vector. This vector, encapsulating decision variables, forms the population matrix outlined in Equation (1); initialization of each BOA member’s position is achieved through random assignment using Equation (2):
$$X = \begin{bmatrix} X_1 \\ \vdots \\ X_i \\ \vdots \\ X_N \end{bmatrix}_{N \times m} = \begin{bmatrix} x_{1,1} & \cdots & x_{1,d} & \cdots & x_{1,m} \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ x_{i,1} & \cdots & x_{i,d} & \cdots & x_{i,m} \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ x_{N,1} & \cdots & x_{N,d} & \cdots & x_{N,m} \end{bmatrix}_{N \times m}, \quad (1)$$
$$x_{i,d} = lb_d + r_{i,d} \cdot (ub_d - lb_d), \quad i = 1, \dots, N, \quad d = 1, \dots, m, \quad (2)$$
where $X$ is the BOA population matrix, $X_i$ is the $i$th BOA member (candidate solution), $x_{i,d}$ is its $d$th dimension in the search space (decision variable), $N$ is the number of population members, $m$ is the number of decision variables, $r_{i,d}$ are random numbers from the interval $[0, 1]$, and $lb_d$ and $ub_d$ are the lower and upper bounds of the $d$th decision variable, respectively.
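As a concrete illustration of Equations (1) and (2), the random initialization step can be sketched in a few lines of NumPy (a minimal sketch; the population size, dimension, and bound values in the example are illustrative, not taken from the paper):

```python
import numpy as np

def initialize_population(N, m, lb, ub, rng=None):
    """Build the N x m BOA population matrix of Equation (1), drawing
    each entry as in Equation (2): x[i,d] = lb_d + r[i,d] * (ub_d - lb_d),
    with r[i,d] uniform on [0, 1). Scalar or per-dimension bounds work."""
    rng = np.random.default_rng() if rng is None else rng
    lb = np.broadcast_to(np.asarray(lb, dtype=float), (m,))
    ub = np.broadcast_to(np.asarray(ub, dtype=float), (m,))
    r = rng.random((N, m))        # r[i,d] ~ U(0, 1)
    return lb + r * (ub - lb)     # population matrix X

# Example: 20 candidate solutions in a 5-dimensional search space.
X = initialize_population(20, 5, lb=-100.0, ub=100.0)
print(X.shape)  # (20, 5)
```

Each row of `X` is then one candidate solution $X_i$, whose objective value can be evaluated as in Equation (3).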
Given that each member in the BOA population represents a candidate solution for the problem, the associated objective function of the problem can be assessed for each individual. Consequently, the array of objective function values can be depicted as a vector, as per Equation (3):
$$F = \begin{bmatrix} F_1 \\ \vdots \\ F_i \\ \vdots \\ F_N \end{bmatrix}_{N \times 1} = \begin{bmatrix} F(X_1) \\ \vdots \\ F(X_i) \\ \vdots \\ F(X_N) \end{bmatrix}_{N \times 1}, \quad (3)$$
where $F$ is the vector of evaluated objective function values and $F_i$ is the objective function value obtained for the $i$th BOA member.
The assessed objective function values serve as reliable criteria for appraising the quality of candidate solutions. Consequently, the optimal member of the BOA corresponds to the best value achieved for the objective function, while the suboptimal member aligns with the worst value. Given that the position of BOA population members and their objective function values are updated in each iteration, the best candidate solution undergoes regular updates.

3.3. Mathematical Modeling of BOA

The BOA approach, a population-based optimizer, adeptly furnishes viable solutions for optimization problems through an iterative process. In the BOA’s design, inspiration is drawn from the Botox injection mechanism to update the position of population members within the search space. The schematic of Botox injection and its simulation to design the proposed BOA approach is shown in Figure 1.
Each individual seeking Botox injections represents a member of the BOA population. The BOA design mirrors the process of a doctor injecting Botox into specific facial muscles to diminish wrinkles and enhance beauty. Similarly, in the BOA approach, improvement to a candidate solution involves adding a designated value, akin to Botox, to select decision variables.
In the design of the BOA, it is considered that the number of facial muscles that need to be injected with Botox decreases during the iterations of the algorithm. Therefore, the number of selected muscles (i.e., decision variables) for Botox injection is determined by using Equation (4):
$$N_b = \left\lceil 1 + \frac{m}{t} \right\rceil, \quad (4)$$
where $N_b$ is the number of muscles requiring Botox injection, $\lceil \cdot \rceil$ is the ceiling function, and $t$ is the current value of the iteration counter.
When the applicant visits the doctor, the doctor decides which muscles to inject Botox into, based on the person’s face and wrinkles. Inspired by this fact, in BOA design, the variables to be injected are selected for each population member using Equation (5). It should be noted that the muscles that are chosen for Botox injection should not be repeated, which is considered in Equation (5):
$$CBS_i = \{ d_1, d_2, \dots, d_j, \dots, d_{N_b} \}, \quad d_j \in \{1, 2, \dots, m\} \ \text{and} \ \forall h, k \in \{1, 2, \dots, N_b\},\ h \neq k : d_h \neq d_k. \quad (5)$$
Thus, $CBS_i$ is the set of candidate decision variables of the $i$th population member that are selected for Botox injection, and $d_j$ is the index of the $j$th decision variable selected for Botox injection.
In the BOA design, akin to the doctor’s discretion in determining the drug quantity for Botox injection based on expertise and patient needs, the amount of Botox injection for each population member is computed using Equation (6):
$$B_i = \begin{cases} X_{mean} - X_i, & t < \frac{T}{2}; \\ X_{best} - X_i, & \text{else}, \end{cases} \quad (6)$$
where $B_i = (b_{i,1}, \dots, b_{i,j}, \dots, b_{i,m})$ is the amount of Botox injection considered for the $i$th member, $X_{mean}$ is the mean population position (i.e., $X_{mean} = \frac{1}{N} \sum_{i=1}^{N} X_i$), $T$ is the total number of iterations, and $X_{best}$ is the best population member.
After Botox injection into the facial muscles, the appearance of the face changes, with the disappearance of wrinkles. In the BOA design, based on the simulation of Botox injection to the facial muscles, first, a new position is calculated for each BOA member based on Botox injection using Equation (7); then, if the value of the objective function is improved, this new position replaces the previous position of the corresponding member according to Equation (8):
$$X_i^{new}: \ x_{i,d_j}^{new} = x_{i,d_j} + r_{i,d_j} \cdot b_{i,d_j}, \quad (7)$$
$$X_i = \begin{cases} X_i^{new}, & F_i^{new} < F_i; \\ X_i, & \text{else}, \end{cases} \quad (8)$$
where $X_i^{new}$ is the new position of the $i$th BOA member after Botox injection, $x_{i,d_j}^{new}$ is its $d_j$th dimension, $F_i^{new}$ is its objective function value, $r_{i,d_j}$ is a random number with a uniform distribution on the interval $[0, 1]$, and $b_{i,d_j}$ is the $d_j$th dimension of the Botox injection amount $B_i$ for the $i$th BOA member.
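Taken together, Equations (4)–(8) define one complete position update. The following sketch is our hedged reading of that update (the ceiling in Equation (4) and the cap of $N_b$ at $m$ are our interpretation; the function names are ours):

```python
import numpy as np

def boa_iteration(X, F, f, t, T, rng):
    """One BOA iteration over population X (N x m) with objective values F."""
    N, m = X.shape
    # Equation (4): number of variables to "inject", capped to at most m.
    Nb = min(m, int(np.ceil(1 + m / t)))
    x_best = X[np.argmin(F)]
    x_mean = X.mean(axis=0)
    for i in range(N):
        # Equation (5): Nb distinct variable indices for member i.
        cbs = rng.choice(m, size=Nb, replace=False)
        # Equation (6): injection amount, switching from mean to best at t = T/2.
        B = (x_mean - X[i]) if t < T / 2 else (x_best - X[i])
        # Equation (7): move only the selected dimensions.
        x_new = X[i].copy()
        x_new[cbs] = X[i][cbs] + rng.random(Nb) * B[cbs]
        # Equation (8): greedy replacement only when the objective improves.
        F_new = f(x_new)
        if F_new < F[i]:
            X[i], F[i] = x_new, F_new
    return X, F
```

Because of the greedy rule of Equation (8), calling `boa_iteration` repeatedly for $t = 1, \dots, T$ never worsens the best objective value found.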

3.4. Repetition Process, Pseudocode, and Flowchart of the BOA

After updating the position of all BOA members in the search space, the first iteration of the algorithm is completed. Then, based on the updated values, the algorithm enters the next iteration, and the process of updating the BOA population members continues until the last iteration, based on Equations (4)–(8). In each iteration, the best obtained candidate solution X b e s t is also updated and saved. After the full implementation of the proposed BOA approach, the best candidate solution X b e s t stored during the iterations of the algorithm is introduced as the solution to the given problem. The steps of BOA implementation are presented in the form of a flowchart in Figure 2, and its pseudocode is shown in Algorithm 1.
Algorithm 1. Pseudocode of the BOA.
Start the BOA.
1.  Input problem information: variables, objective function, and constraints.
2.  Set the BOA population size N and the total number of iterations T.
3.  Generate the initial population matrix at random using Equation (2).
4.  Evaluate the objective function.
5.  Determine the best candidate solution X_best.
6.  For t = 1 to T
7.    Update the number of decision variables for Botox injection using Equation (4).
8.    For i = 1 to N
9.      Determine the variables considered for Botox injection using Equation (5).
10.     Calculate the amount of Botox injection using Equation (6).
11.     For j = 1 to N_b
12.       Calculate the new position of the i-th BOA member using Equation (7).
13.     End
14.     Evaluate the objective function based on X_i^new.
15.     Update the i-th BOA member using Equation (8).
16.   End
17.   Save the best candidate solution obtained so far.
18. End
19. Output the best quasi-optimal solution obtained with the BOA.
End the BOA.
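The steps of Algorithm 1 can also be condensed into a self-contained, standard-library sketch (same caveat as before: the ceiling-and-cap form of Equation (4) is our interpretation, and the sphere function stands in for a real objective):

```python
import math
import random

def boa(f, lb, ub, m, N=30, T=200, seed=0):
    """Minimal BOA driver following Algorithm 1 (steps 1-19)."""
    rnd = random.Random(seed)
    # Steps 3-5: random initialization and initial best.
    X = [[lb + rnd.random() * (ub - lb) for _ in range(m)] for _ in range(N)]
    F = [f(x) for x in X]
    best_x, best_f = min(zip(X, F), key=lambda p: p[1])
    for t in range(1, T + 1):                      # step 6
        Nb = min(m, math.ceil(1 + m / t))          # step 7, Equation (4)
        mean = [sum(x[d] for x in X) / N for d in range(m)]
        for i in range(N):                         # step 8
            cbs = rnd.sample(range(m), Nb)         # step 9, Equation (5)
            ref = mean if t < T / 2 else best_x    # step 10, Equation (6)
            x_new = X[i][:]
            for d in cbs:                          # steps 11-13, Equation (7)
                x_new[d] += rnd.random() * (ref[d] - X[i][d])
            f_new = f(x_new)                       # step 14
            if f_new < F[i]:                       # step 15, Equation (8)
                X[i], F[i] = x_new, f_new
        if min(F) < best_f:                        # step 17
            best_x, best_f = min(zip(X, F), key=lambda p: p[1])
    return best_x, best_f                          # step 19

# Stand-in objective: sphere function, whose global optimum is 0 at the origin.
x, fx = boa(lambda x: sum(v * v for v in x), lb=-100.0, ub=100.0, m=5)
print(fx)  # best objective value found
```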

3.5. Computational Complexity of the BOA

In this subsection, the computational complexity of the BOA is evaluated. The preparation and initialization steps of the BOA for an optimization problem have a computational complexity of $O(Nm)$, where $N$ is the number of population members and $m$ is the number of decision variables of the problem. In each iteration, the position of every population member is updated and the corresponding objective function is evaluated; hence, the BOA update process has a computational complexity of $O(NmT)$, where $T$ is the maximum number of iterations of the algorithm. Accordingly, the total computational complexity of the proposed BOA approach is $O(Nm(1 + T))$.

3.6. Population Diversity, Exploration, and Exploitation Analysis

The population diversity of the BOA refers to the distribution of population members within the problem space, which plays a critical role in monitoring the search processes of the algorithm. Essentially, this metric indicates whether the population members are focused on exploration or exploitation. By measuring the diversity of the BOA population, it becomes possible to gauge and adapt the algorithm’s capacity to explore and exploit a collective group effectively. Various definitions of diversity have been put forth by researchers. Pant [77] defined diversity according to Equations (9) and (10):
$$Diversity = \frac{1}{N} \sum_{i=1}^{N} \sqrt{\sum_{d=1}^{m} (x_{i,d} - \bar{x}_d)^2}, \quad (9)$$
$$\bar{x}_d = \frac{1}{N} \sum_{i=1}^{N} x_{i,d}, \quad (10)$$
where $N$ is the number of population members, $m$ is the number of problem dimensions, and $\bar{x}_d$ is the mean of the entire population in the $d$th dimension. Hence, the percentage of exploration and exploitation of the population in each iteration can be defined by Equations (11) and (12), respectively:
$$Exploration = \frac{Diversity}{Diversity_{max}}, \quad (11)$$
$$Exploitation = 1 - Exploration. \quad (12)$$
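Equations (9)–(12) are straightforward to compute from per-iteration population snapshots; a minimal sketch (the shrinking Gaussian populations here are synthetic stand-ins for real iteration snapshots):

```python
import numpy as np

def diversity(X):
    """Equation (9): mean Euclidean distance of the members of the
    N x m population X to the population centroid of Equation (10)."""
    centroid = X.mean(axis=0)                       # mean per dimension d
    return np.sqrt(((X - centroid) ** 2).sum(axis=1)).mean()

def exploration_exploitation(div_history):
    """Equations (11)-(12): per-iteration exploration/exploitation
    fractions relative to the maximum observed diversity."""
    div = np.asarray(div_history, dtype=float)
    exploration = div / div.max()
    return exploration, 1.0 - exploration

# Illustrative history: a population that contracts over the iterations.
rng = np.random.default_rng(1)
history = [diversity(rng.normal(scale=s, size=(30, 10)))
           for s in (10.0, 5.0, 1.0, 0.1)]
expl, expt = exploration_exploitation(history)
print(expl[0], expt[-1])  # exploration starts at 1.0; exploitation ends near 1.0
```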
In this subsection, the analysis of population diversity, exploration, and exploitation is evaluated on twenty-three standard benchmark functions, consisting of 7 unimodal functions (F1 to F7) and 16 multimodal functions (F8 to F23). A full description of these benchmark functions is available in [78].
Figure 3 illustrates the exploration–exploitation ratio of the BOA method throughout the iteration process, offering visual support for analyzing how the algorithm balances global and local search strategies. The results of the analysis of population diversity, exploration, and exploitation are also reported in Table 1. The simulation results show that the BOA maintains favorable population diversity: diversity is high in the first iterations and low in the final iterations. Moreover, in most cases the exploration–exploitation ratio of the BOA approaches 0.00%:100% by the final iterations. The findings of this analysis confirm that, by creating appropriate population diversity during its iterations, the proposed BOA approach performs favorably in managing exploration and exploitation and in balancing them throughout the search process.

4. Simulation Studies and Results

In this section, the performance of the proposed BOA approach in handling optimization tasks is evaluated.

4.1. Performance Comparison

To assess the BOA's effectiveness in addressing optimization problems, its results were compared with those of twelve prominent metaheuristic algorithms: the GA [31], PSO [11], GSA [40], TLBO [51], MVO [42], GWO [22], WOA [26], MPA [25], TSA [17], RSA [28], AVOA [23], and WSO [21]. These twelve algorithms were selected from the many available in the literature for the following reasons: the GA and PSO are among the first and most famous metaheuristic algorithms; the GSA, TLBO, MVO, GWO, and WOA are among the most cited metaheuristic algorithms and have been applied across a wide range of optimization problems; and the MPA, TSA, RSA, AVOA, and WSO are recently published, successful metaheuristic algorithms that have attracted considerable attention in a short period of time. Comparing the proposed BOA approach against these twelve algorithms therefore provides a thorough test of its efficiency. Table 2 outlines the control parameter values for the competing algorithms. The evaluation of the simulation results incorporates six statistical metrics: mean, best, worst, standard deviation (std), median, and rank. The mean index values were used to rank the metaheuristic algorithms on each benchmark function.

4.2. Evaluation of the CEC 2017 Test Suite

In this section, the performance of the BOA and competing algorithms is evaluated using the CEC 2017 test suite, considering problem dimensions (number of decision variables) equal to 10, 30, 50, and 100. The CEC 2017 test suite comprises thirty benchmark functions, including three unimodal functions (C17-F1 to C17-F3), seven multimodal functions (C17-F4 to C17-F10), ten hybrid functions (C17-F11 to C17-F20), and ten composition functions (C17-F21 to C17-F30). The C17-F2 function is excluded due to its unstable behavior, as described in [79].
The results of employing the BOA approach and competing algorithms on the CEC 2017 test suite are presented in Table 3. Boxplot diagrams depicting the performance of the BOA and competing algorithms in optimizing the CEC 2017 test suite are illustrated in Figure 4. The outcomes indicate that the BOA outperformed other optimizers, ranking as the top performer for functions C17-F1, C17-F3 to C17-F21, C17-F23, C17-F24, and C17-F27 to C17-F30.
Overall, the BOA demonstrated its efficacy in providing effective solutions for the CEC 2017 test suite, showcasing a commendable ability to explore, exploit, and maintain balance throughout the search process. The simulation results establish the BOA’s superior performance over competing algorithms, securing the top rank as the best optimizer for handling the CEC 2017 test suite.

4.3. Statistical Analysis

In this section, a statistical analysis was performed on the performances of the BOA and rival algorithms to assess the significance of the BOA’s superiority from a statistical perspective. The Wilcoxon signed-rank test [80], a non-parametric test for matched or paired data, was employed for this purpose. This test helps determine whether there is a significant difference between the averages of two data samples. The results of the Wilcoxon signed-rank test, presented in Table 4, indicate instances where the BOA exhibits statistically significant superiority over the respective competing algorithms, with a p-value criterion of less than 0.05.
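For readers reproducing this kind of analysis, the paired Wilcoxon signed-rank test can be sketched with the large-sample normal approximation (in practice a library routine such as scipy.stats.wilcoxon would typically be used; this standard-library version omits the continuity correction, and the paired sample data are illustrative):

```python
import math

def wilcoxon_signed_rank(a, b):
    """Two-sided Wilcoxon signed-rank test (normal approximation) for
    paired samples a and b, e.g. per-function mean results of two
    algorithms. Zero differences are discarded; tied absolute
    differences receive average ranks."""
    d = [x - y for x, y in zip(a, b) if x != y]
    n = len(d)
    order = sorted(range(n), key=lambda i: abs(d[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:                      # assign average ranks to ties
        j = i
        while j + 1 < n and abs(d[order[j + 1]]) == abs(d[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1         # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for r, di in zip(ranks, d) if di > 0)
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2))   # two-sided p-value

# Illustrative data: algorithm A's means are consistently lower than B's.
a = [1.2, 0.8, 3.1, 2.0, 0.5, 1.9, 2.7, 0.9, 1.4, 2.2]
b = [2.0, 1.5, 4.0, 2.9, 1.1, 2.5, 3.6, 1.7, 2.3, 3.0]
p = wilcoxon_signed_rank(a, b)
print(p < 0.05)  # True: the paired difference is statistically significant
```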

4.4. Discussion

In this subsection, the performance of the BOA compared to competing algorithms is discussed. The CEC 2017 test suite has different types of objective functions.
Unimodal functions C17-F1 and C17-F3 have only one main optimum (i.e., global optimum), and for that reason they are suitable criteria for measuring the exploitation ability of metaheuristic algorithms. Analysis of the simulation results shows that the proposed BOA approach, with a strong performance in local search, has superior performance against all twelve competing algorithms for handling unimodal functions. Therefore, as the first strength, the superiority of the BOA in exploitation is confirmed against competing algorithms.
Multimodal functions C17-F4 to C17-F10, in addition to the main optimum (i.e., the global optimum), also have a number of local optima, which challenge the exploration ability of metaheuristic algorithms. The findings obtained from the simulation results show that the BOA, with global search management, was able to achieve the rank of the best optimizer in the competition with the compared algorithms to handle the functions C17-F4 to C17-F10. The simulation results confirm that, as the second strength, the BOA has a better exploration ability to manage global search compared to competing algorithms.
Hybrid functions C17-F11 to C17-F20 and composition functions C17-F21 to C17-F30 are complex optimization problems that challenge the performance of metaheuristic algorithms in establishing a balance between exploration and exploitation. The simulation results of these functions show that the BOA was able to achieve the rank of the best optimizer in most of these benchmark functions, except for C17-F22, C17-F25, and C17-F26. The simulation results confirm that the BOA is highly capable of balancing exploration and exploitation when facing complex optimization problems. Therefore, as a third strength, the superiority of the BOA in balancing exploration and exploitation is confirmed compared to competing algorithms.
In addition, the statistical analysis of the Wilcoxon signed-rank test and the values obtained for the p -value index, as the fourth strength, confirm that the BOA has a significant statistical superiority compared to all twelve competing algorithms.

5. BOA for Real-World Applications

In this section, the effectiveness of the proposed BOA approach in addressing real-world optimization tasks is evaluated. To this end, twenty-two constrained optimization problems from the CEC 2011 test suite, along with four engineering design problems, are utilized.

5.1. Evaluation of CEC 2011 Test Suite

In this subsection, the performance of the BOA in optimizing the CEC 2011 test suite, which comprises twenty-two constrained optimization problems from real-world applications, is assessed. Detailed descriptions and information about the CEC 2011 test suite can be found in [81]. The results of employing the BOA and competing algorithms on the CEC 2011 test suite are presented in Table 5, and the boxplot diagrams illustrating the performance of the BOA and competing algorithms are depicted in Figure 5. The optimization outcomes highlight that the BOA effectively generated suitable solutions for this test suite, showcasing a balanced exploration and exploitation throughout the search process. Notably, the BOA emerges as the top optimizer for solving functions C11-F1 to C11-F22, demonstrating superior performance in comparison to competing algorithms. Statistical analysis, specifically the Wilcoxon signed-rank test, further validates the significant statistical superiority of the BOA in these evaluations.

5.2. Pressure Vessel Design Problem

The design of the pressure vessel in engineering aims primarily to minimize construction costs, as illustrated in Figure 6. The mathematical representation of pressure vessel design is defined as follows [82]:
Consider: X = [x1, x2, x3, x4] = [Ts, Th, R, L].
Minimize: f(x) = 0.6224 x1 x3 x4 + 1.7781 x2 x3² + 3.1661 x1² x4 + 19.84 x1² x3.
Subject to
g1(x) = −x1 + 0.0193 x3 ≤ 0,  g2(x) = −x2 + 0.00954 x3 ≤ 0,
g3(x) = −π x3² x4 − (4/3) π x3³ + 1,296,000 ≤ 0,  g4(x) = x4 − 240 ≤ 0,
with
0 ≤ x1, x2 ≤ 100 and 10 ≤ x3, x4 ≤ 200.
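As a sanity check, the model above can be evaluated at the solution reported for the BOA. The sketch below is illustrative rather than the authors' implementation; it uses the 1.7781 coefficient of the standard Kannan–Kramer formulation [82], with which the reported objective value 5885.3263 is consistent.

```python
import math

def pressure_vessel(x):
    """Objective and constraints of the pressure vessel problem (g_i <= 0 is feasible)."""
    x1, x2, x3, x4 = x  # Ts, Th, R, L
    f = (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3**2
         + 3.1661 * x1**2 * x4 + 19.84 * x1**2 * x3)
    g = [
        -x1 + 0.0193 * x3,
        -x2 + 0.00954 * x3,
        -math.pi * x3**2 * x4 - (4.0 / 3.0) * math.pi * x3**3 + 1_296_000,
        x4 - 240.0,
    ]
    return f, g

# Solution reported for the BOA (digits rounded as published).
f, g = pressure_vessel([0.7781685, 0.3846492, 40.319615, 200.0])
print(round(f, 4))  # close to the reported 5885.3263
# g1 and g3 are active at the optimum, so with rounded inputs they sit at ~0
# rather than being strictly non-positive; allow a small tolerance.
print(all(gi <= 2.0 for gi in g))
```

The two near-zero constraint values show g1 and g3 are active, which is typical of a cost-minimal vessel: the wall thicknesses and volume sit exactly on their limits.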
The outcomes derived from applying the BOA and rival algorithms to pressure vessel design are documented in Table 6 and Table 7. According to the results, the BOA yielded the optimal solution for this design, with design variable values of (0.7781685, 0.3846492, 40.319615, 200) and an objective function value of 5885.3263. The convergence curve of the BOA during its search for the optimal solution to the pressure vessel design is depicted in Figure 7. Examination of the optimization results indicates that the BOA exhibits superior performance on the pressure vessel design problem, outperforming the competing algorithms.

5.3. Speed Reducer Design Problem

The design of a speed reducer is a practical engineering application focused on minimizing the weight of the speed reducer, as illustrated in Figure 8. The mathematical model for the design of the speed reducer is outlined in [83,84]:
Consider: X = [x1, x2, x3, x4, x5, x6, x7] = [b, m, p, l1, l2, d1, d2].
Minimize: f(x) = 0.7854 x1 x2² (3.3333 x3² + 14.9334 x3 − 43.0934) − 1.508 x1 (x6² + x7²) + 7.4777 (x6³ + x7³) + 0.7854 (x4 x6² + x5 x7²).
Subject to
g1(x) = 27/(x1 x2² x3) − 1 ≤ 0,  g2(x) = 397.5/(x1 x2² x3²) − 1 ≤ 0,
g3(x) = 1.93 x4³/(x2 x3 x6⁴) − 1 ≤ 0,  g4(x) = 1.93 x5³/(x2 x3 x7⁴) − 1 ≤ 0,
g5(x) = (1/(110 x6³)) √((745 x4/(x2 x3))² + 16.9 × 10⁶) − 1 ≤ 0,
g6(x) = (1/(85 x7³)) √((745 x5/(x2 x3))² + 157.5 × 10⁶) − 1 ≤ 0,
g7(x) = x2 x3/40 − 1 ≤ 0,  g8(x) = 5 x2/x1 − 1 ≤ 0,
g9(x) = x1/(12 x2) − 1 ≤ 0,  g10(x) = (1.5 x6 + 1.9)/x4 − 1 ≤ 0,
g11(x) = (1.1 x7 + 1.9)/x5 − 1 ≤ 0,
with
2.6 ≤ x1 ≤ 3.6, 0.7 ≤ x2 ≤ 0.8, 17 ≤ x3 ≤ 28, 7.3 ≤ x4 ≤ 8.3, 7.8 ≤ x5 ≤ 8.3, 2.9 ≤ x6 ≤ 3.9, and 5 ≤ x7 ≤ 5.5.
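The same kind of numerical check can be applied to the speed reducer model at the reported BOA solution. This is an illustrative sketch, not the authors' code; constraints g5 and g6 come out active at this point, as expected at a constrained optimum.

```python
import math

def speed_reducer(x):
    """Objective and constraints of the speed reducer problem (g_i <= 0 is feasible)."""
    x1, x2, x3, x4, x5, x6, x7 = x  # b, m, p, l1, l2, d1, d2
    f = (0.7854 * x1 * x2**2 * (3.3333 * x3**2 + 14.9334 * x3 - 43.0934)
         - 1.508 * x1 * (x6**2 + x7**2)
         + 7.4777 * (x6**3 + x7**3)
         + 0.7854 * (x4 * x6**2 + x5 * x7**2))
    g = [
        27.0 / (x1 * x2**2 * x3) - 1.0,
        397.5 / (x1 * x2**2 * x3**2) - 1.0,
        1.93 * x4**3 / (x2 * x3 * x6**4) - 1.0,
        1.93 * x5**3 / (x2 * x3 * x7**4) - 1.0,
        math.sqrt((745.0 * x4 / (x2 * x3))**2 + 16.9e6) / (110.0 * x6**3) - 1.0,
        math.sqrt((745.0 * x5 / (x2 * x3))**2 + 157.5e6) / (85.0 * x7**3) - 1.0,
        x2 * x3 / 40.0 - 1.0,
        5.0 * x2 / x1 - 1.0,
        x1 / (12.0 * x2) - 1.0,
        (1.5 * x6 + 1.9) / x4 - 1.0,
        (1.1 * x7 + 1.9) / x5 - 1.0,
    ]
    return f, g

# Solution reported for the BOA (digits rounded as published).
f, g = speed_reducer([3.5, 0.7, 17.0, 7.3, 7.8, 3.3502147, 5.2866832])
print(round(f, 4))  # close to the reported 2996.3482
# g5 and g6 are active (stress limits on the two shafts), so with rounded
# inputs they sit at ~0; allow a small tolerance in the feasibility check.
print(all(gi <= 1e-4 for gi in g))
```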
The outcomes of implementing the BOA and competing optimizers to address the speed reducer design challenges are documented in Table 8 and Table 9. The BOA yielded the optimal solution for this design, characterized by design variable values ( 3.5 ,   0.7 ,   17 ,   7.3 ,   7.8 , 3.3502147 ,   5.2866832 ) and an objective function value of 2996.3482 . The convergence curve, depicting the BOA’s performance in optimizing the speed reducer design, is illustrated in Figure 9. The analysis of the simulation results confirms that the BOA demonstrated more effective performance in tackling the speed reducer design compared to its competitors.

5.4. Welded Beam Design

The design of a welded beam poses a real-world engineering challenge, intending to minimize the fabrication cost of the beam, as depicted in Figure 10. The mathematical model governing the welded beam design is outlined as follows [26]:
Consider: X = [x1, x2, x3, x4] = [h, l, t, b].
Minimize: f(x) = 1.10471 x1² x2 + 0.04811 x3 x4 (14.0 + x2).
Subject to
g1(x) = τ(x) − 13,600 ≤ 0,  g2(x) = σ(x) − 30,000 ≤ 0,
g3(x) = x1 − x4 ≤ 0,  g4(x) = 0.10471 x1² + 0.04811 x3 x4 (14 + x2) − 5.0 ≤ 0,
g5(x) = 0.125 − x1 ≤ 0,  g6(x) = δ(x) − 0.25 ≤ 0,
g7(x) = 6000 − pc(x) ≤ 0,
where
τ(x) = √(τ′² + 2 τ′ τ″ x2/(2R) + τ″²),  τ′ = 6000/(√2 x1 x2),  τ″ = MR/J,
M = 6000 (14 + x2/2),  R = √(x2²/4 + ((x1 + x3)/2)²),
J = 2 √2 x1 x2 (x2²/12 + ((x1 + x3)/2)²),  σ(x) = 504,000/(x4 x3²),  δ(x) = 2.1952/(x4 x3³),
pc(x) = 17062.0748 x3 x4³ (1 − (x3/28) √(5/8)),
with 0.1 ≤ x1, x4 ≤ 2 and 0.1 ≤ x2, x3 ≤ 10.
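The model above can again be checked numerically at the reported solution. The sketch below is illustrative, not the authors' code; it writes the buckling load pc in the standard symbolic form of this benchmark (with E = 30×10⁶ psi, G = 12×10⁶ psi, and L = 14 in), under which the reported solution is feasible with g1, g2, and g7 active.

```python
import math

def welded_beam(x):
    """Objective and constraints of the welded beam problem (g_i <= 0 is feasible)."""
    x1, x2, x3, x4 = x  # h, l, t, b
    E, G, L = 30e6, 12e6, 14.0  # standard material/geometry constants of this benchmark
    f = 1.10471 * x1**2 * x2 + 0.04811 * x3 * x4 * (14.0 + x2)
    tau_p = 6000.0 / (math.sqrt(2.0) * x1 * x2)                      # tau'
    M = 6000.0 * (14.0 + x2 / 2.0)
    R = math.sqrt(x2**2 / 4.0 + ((x1 + x3) / 2.0)**2)
    J = 2.0 * math.sqrt(2.0) * x1 * x2 * (x2**2 / 12.0 + ((x1 + x3) / 2.0)**2)
    tau_pp = M * R / J                                               # tau''
    tau = math.sqrt(tau_p**2 + 2.0 * tau_p * tau_pp * x2 / (2.0 * R) + tau_pp**2)
    sigma = 504000.0 / (x4 * x3**2)
    delta = 2.1952 / (x4 * x3**3)
    p_c = (4.013 * E * math.sqrt(x3**2 * x4**6 / 36.0) / L**2
           * (1.0 - x3 / (2.0 * L) * math.sqrt(E / (4.0 * G))))
    g = [
        tau - 13600.0,
        sigma - 30000.0,
        x1 - x4,
        0.10471 * x1**2 + 0.04811 * x3 * x4 * (14.0 + x2) - 5.0,
        0.125 - x1,
        delta - 0.25,
        6000.0 - p_c,
    ]
    return f, g

# Solution reported for the BOA (digits rounded as published).
f, g = welded_beam([0.2057296, 3.4704887, 9.0366239, 0.2057296])
print(round(f, 6))  # close to the reported cost
# g1 (shear), g2 (bending) and g7 (buckling) are active; rounded inputs
# leave them at ~0, so allow a little slack in the feasibility check.
print(all(gi <= 5.0 for gi in g))
```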
The optimization outcomes for the welded beam design, utilizing the BOA and competing algorithms, are outlined in Table 10 and Table 11. The BOA yielded the optimal solution for this design, with design variable values set at ( 0.2057296 ,   3.4704887 ,   9.0366239 , 0.2057296 ), resulting in an objective function value of 1.7246798 . The convergence process of the BOA towards the optimal solution for the welded beam design is illustrated in Figure 11. The simulation results underscore the effectiveness of the BOA in addressing the welded beam design problem, showcasing superior performance compared to competing algorithms.

5.5. Tension/Compression Spring Design Problem

The engineering challenge in tension/compression spring design is to minimize the weight of the spring, as depicted in Figure 12. The mathematical model for tension/compression spring design is outlined as follows [26]:
Consider: X = [x1, x2, x3] = [d, D, P].
Minimize: f(x) = (x3 + 2) x2 x1².
Subject to
g1(x) = 1 − x2³ x3/(71785 x1⁴) ≤ 0,  g2(x) = (4 x2² − x1 x2)/(12566 (x2 x1³ − x1⁴)) + 1/(5108 x1²) − 1 ≤ 0,
g3(x) = 1 − 140.45 x1/(x2² x3) ≤ 0,  g4(x) = (x1 + x2)/1.5 − 1 ≤ 0,
with 0.05 ≤ x1 ≤ 2, 0.25 ≤ x2 ≤ 1.3, and 2 ≤ x3 ≤ 15.
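A final numerical check for the spring model (an illustrative sketch, not the authors' code). Evaluating the objective at the reported design variables gives approximately 0.012665, the widely quoted best-known value for this benchmark.

```python
def spring(x):
    """Objective and constraints of the tension/compression spring problem."""
    x1, x2, x3 = x  # wire diameter d, mean coil diameter D, active coils P
    f = (x3 + 2.0) * x2 * x1**2
    g = [
        1.0 - x2**3 * x3 / (71785.0 * x1**4),
        (4.0 * x2**2 - x1 * x2) / (12566.0 * (x2 * x1**3 - x1**4))
        + 1.0 / (5108.0 * x1**2) - 1.0,
        1.0 - 140.45 * x1 / (x2**2 * x3),
        (x1 + x2) / 1.5 - 1.0,
    ]
    return f, g

# Solution reported for the BOA (digits rounded as published).
f, g = spring([0.0516891, 0.3567177, 11.288966])
print(round(f, 6))
# g1 (deflection) and g2 (shear stress) are active at the optimum;
# rounded inputs leave them at ~0, so allow a small tolerance.
print(all(gi <= 1e-3 for gi in g))
```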
The optimization outcomes for tension/compression spring design using the BOA and competing algorithms are outlined in Table 12 and Table 13. The BOA yielded the optimal solution for this design, with design variable values of ( 0.0516891 ,   0.3567177 ,   11.288966 ) and an objective function value of 0.0126019 . The convergence curve depicting the BOA’s performance in optimizing the tension/compression spring design is illustrated in Figure 13. The simulation results demonstrate that the BOA exhibited superior performance compared to competing algorithms by delivering improved outcomes for tension/compression spring design.

6. Conclusions and Future Works

In this paper, motivated by the No Free Lunch (NFL) theorem, a new human-based metaheuristic algorithm called the Botox Optimization Algorithm (BOA) was introduced, mimicking the human practice of Botox injection. To the best of the authors' knowledge from the literature review, no metaheuristic algorithm based on modeling Botox injection had been designed before, which confirms the originality of the proposed BOA. The fundamental inspiration of the BOA is the injection of Botox into areas of the face in order to remove defects and increase facial beauty. The theory of the BOA was stated, and the stages of its implementation were mathematically modeled based on a simulation of Botox injection. The performance of the BOA was evaluated on the CEC 2017 test suite. The optimization results showed that the BOA has a high ability to balance exploration and exploitation during the search process. To measure its quality, the obtained results were compared with the performance of twelve well-known metaheuristic algorithms. The simulation results showed that the BOA outperformed the competing algorithms, providing better results on most benchmark functions. Statistical analysis further showed that the BOA has a significant statistical superiority over the competing algorithms. Also, the implementation of the BOA on twenty-two constrained optimization problems from the CEC 2011 test suite demonstrated the ability of the proposed approach to handle real-world applications.
After introducing the proposed BOA approach, several research paths can be considered for further studies:
  • Binary BOA: The real-valued version of the BOA is detailed thoroughly in this paper. Nonetheless, many scientific optimization problems, such as feature selection, require binary versions of metaheuristic algorithms for efficient optimization. Consequently, developing a binary version of the BOA (BBOA) is a notable direction for future research.
  • Multi-objective BOA: Optimization problems are classified by the number of objective functions as either single-objective or multi-objective. Many problems require the simultaneous consideration of multiple objective functions to find an optimal solution. Hence, developing a multi-objective version of the BOA (MOBOA) to address multi-objective optimization problems is another research direction highlighted in this paper.
  • Hybrid BOA: Researchers have always been intrigued by the idea of merging multiple metaheuristic algorithms to leverage the strengths of each and establish a more efficient hybrid strategy. Hence, a potential future research endeavor includes crafting hybrid versions of the BOA.
  • Tackle new domains: Exploring opportunities for employing the BOA in tackling practical applications and optimizing problems within various scientific fields, like robotics, renewable energy, chemical engineering, and image processing, is a focus for future research proposals.

Author Contributions

Conceptualization, P.T. and Š.H.; data curation, M.H. and Š.H.; formal analysis, M.H.; investigation, M.H. and Š.H.; methodology, P.T. and Š.H.; software, Š.H.; validation, P.T. and M.H.; visualization, M.H. and Š.H.; writing—original draft preparation, P.T. and M.H.; writing—review and editing, M.H. and Š.H. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by the specific research project FacEdu 2024 No. 2126 of the Faculty of Education, University of Hradec Králové.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Acknowledgments

The authors thank the University of Hradec Králové for its support.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. El-kenawy, E.-S.M.; Khodadadi, N.; Mirjalili, S.; Abdelhamid, A.A.; Eid, M.M.; Ibrahim, A. Greylag Goose Optimization: Nature-inspired optimization algorithm. Expert Syst. Appl. 2024, 238, 122147. [Google Scholar] [CrossRef]
  2. Singh, N.; Cao, X.; Diggavi, S.; Başar, T. Decentralized multi-task stochastic optimization with compressed communications. Automatica 2024, 159, 111363. [Google Scholar] [CrossRef]
  3. Liberti, L.; Kucherenko, S. Comparison of deterministic and stochastic approaches to global optimization. Int. Trans. Oper. Res. 2005, 12, 263–285. [Google Scholar] [CrossRef]
  4. Koc, I.; Atay, Y.; Babaoglu, I. Discrete tree seed algorithm for urban land readjustment. Eng. Appl. Artif. Intell. 2022, 112, 104783. [Google Scholar] [CrossRef]
  5. Trojovský, P.; Dehghani, M. Subtraction-Average-Based Optimizer: A New Swarm-Inspired Metaheuristic Algorithm for Solving Optimization Problems. Biomimetics 2023, 8, 149. [Google Scholar] [CrossRef]
  6. Kashan, A.H. League Championship Algorithm (LCA): An algorithm for global optimization inspired by sport championships. Appl. Soft Comput. 2014, 16, 171–200. [Google Scholar] [CrossRef]
  7. De Armas, J.; Lalla-Ruiz, E.; Tilahun, S.L.; Voß, S. Similarity in metaheuristics: A gentle step towards a comparison methodology. Nat. Comput. 2022, 21, 265–287. [Google Scholar] [CrossRef]
  8. Dehghani, M.; Montazeri, Z.; Dehghani, A.; Malik, O.P.; Morales-Menendez, R.; Dhiman, G.; Nouri, N.; Ehsanifar, A.; Guerrero, J.M.; Ramirez-Mendoza, R.A. Binary spring search algorithm for solving various optimization problems. Appl. Sci. 2021, 11, 1286. [Google Scholar] [CrossRef]
  9. Yang, X.-S.; Koziel, S.; Leifsson, L. Computational Optimization, Modelling and Simulation: Smart Algorithms and Better Models. Procedia Comput. Sci. 2012, 9, 852–856. [Google Scholar] [CrossRef]
  10. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef]
  11. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; IEEE: Perth, WA, Australia, 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  12. Dorigo, M.; Maniezzo, V.; Colorni, A. Ant system: Optimization by a colony of cooperating agents. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 1996, 26, 29–41. [Google Scholar] [CrossRef] [PubMed]
  13. Karaboga, D.; Basturk, B. Artificial bee colony (ABC) optimization algorithm for solving constrained optimization problems. In International Fuzzy Systems Association World Congress; Springer: Berlin/Heidelberg, Germany, 2007; pp. 789–798. [Google Scholar]
  14. Yang, X.-S. Firefly algorithm, stochastic test functions and design optimisation. Int. J. Bio-Inspired Comput. 2010, 2, 78–84. [Google Scholar] [CrossRef]
  15. Al-Baik, O.; Alomari, S.; Alssayed, O.; Gochhait, S.; Leonova, I.; Dutta, U.; Malik, O.P.; Montazeri, Z.; Dehghani, M. Pufferfish Optimization Algorithm: A New Bio-Inspired Metaheuristic Algorithm for Solving Optimization Problems. Biomimetics 2024, 9, 65. [Google Scholar] [CrossRef]
  16. Chopra, N.; Ansari, M.M. Golden Jackal Optimization: A Novel Nature-Inspired Optimizer for Engineering Applications. Expert Syst. Appl. 2022, 198, 116924. [Google Scholar] [CrossRef]
  17. Kaur, S.; Awasthi, L.K.; Sangal, A.L.; Dhiman, G. Tunicate Swarm Algorithm: A new bio-inspired based metaheuristic paradigm for global optimization. Eng. Appl. Artif. Intell. 2020, 90, 103541. [Google Scholar] [CrossRef]
  18. Dehghani, M.; Montazeri, Z.; Trojovská, E.; Trojovský, P. Coati Optimization Algorithm: A new bio-inspired metaheuristic algorithm for solving optimization problems. Knowl.-Based Syst. 2023, 259, 110011. [Google Scholar] [CrossRef]
  19. Braik, M.S. Chameleon Swarm Algorithm: A bio-inspired optimizer for solving engineering design problems. Expert Syst. Appl. 2021, 174, 114685. [Google Scholar] [CrossRef]
  20. Ghasemi, M.; Rahimnejad, A.; Hemmati, R.; Akbari, E.; Gadsden, S.A. Wild Geese Algorithm: A novel algorithm for large scale optimization based on the natural life and death of wild geese. Array 2021, 11, 100074. [Google Scholar] [CrossRef]
  21. Braik, M.; Hammouri, A.; Atwan, J.; Al-Betar, M.A.; Awadallah, M.A. White Shark Optimizer: A novel bio-inspired meta-heuristic algorithm for global optimization problems. Knowl.-Based Syst. 2022, 243, 108457. [Google Scholar] [CrossRef]
  22. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  23. Abdollahzadeh, B.; Gharehchopogh, F.S.; Mirjalili, S. African vultures optimization algorithm: A new nature-inspired metaheuristic algorithm for global optimization problems. Comput. Ind. Eng. 2021, 158, 107408. [Google Scholar] [CrossRef]
  24. Abdel-Basset, M.; Mohamed, R.; Zidan, M.; Jameel, M.; Abouhawwash, M. Mantis Search Algorithm: A novel bio-inspired algorithm for global optimization and engineering design problems. Comput. Methods Appl. Mech. Eng. 2023, 415, 116200. [Google Scholar] [CrossRef]
  25. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine Predators Algorithm: A nature-inspired metaheuristic. Expert Syst. Appl. 2020, 152, 113377. [Google Scholar] [CrossRef]
  26. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  27. Jiang, Y.; Wu, Q.; Zhu, S.; Zhang, L. Orca predation algorithm: A novel bio-inspired algorithm for global optimization problems. Expert Syst. Appl. 2022, 188, 116026. [Google Scholar] [CrossRef]
  28. Abualigah, L.; Abd Elaziz, M.; Sumari, P.; Geem, Z.W.; Gandomi, A.H. Reptile Search Algorithm (RSA): A nature-inspired meta-heuristic optimizer. Expert Syst. Appl. 2022, 191, 116158. [Google Scholar] [CrossRef]
  29. Hashim, F.A.; Houssein, E.H.; Hussain, K.; Mabrouk, M.S.; Al-Atabany, W. Honey Badger Algorithm: New metaheuristic algorithm for solving optimization problems. Math. Comput. Simul. 2022, 192, 84–110. [Google Scholar] [CrossRef]
  30. Dehghani, M.; Montazeri, Z.; Bektemyssova, G.; Malik, O.P.; Dhiman, G.; Ahmed, A.E.M. Kookaburra Optimization Algorithm: A New Bio-Inspired Metaheuristic Algorithm for Solving Optimization Problems. Biomimetics 2023, 8, 470. [Google Scholar] [CrossRef]
  31. Goldberg, D.E.; Holland, J.H. Genetic Algorithms and Machine Learning. Mach. Learn. 1988, 3, 95–99. [Google Scholar] [CrossRef]
  32. Storn, R.; Price, K. Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  33. De Castro, L.N.; Timmis, J.I. Artificial immune systems as a novel soft computing paradigm. Soft Comput. 2003, 7, 526–544. [Google Scholar] [CrossRef]
  34. Koza, J.R. Genetic Programming: On the Programming of Computers by Means of Natural Selection; MIT Press: Cambridge, MA, USA, 1992; Volume 1. [Google Scholar]
  35. Reynolds, R.G. An introduction to cultural algorithms. In Proceedings of the Third Annual Conference on Evolutionary Programming, San Diego, CA, USA, 24–26 February 1994; World Scientific Publishing: Singapore, 1994; pp. 131–139. [Google Scholar]
  36. Beyer, H.-G.; Schwefel, H.-P. Evolution strategies—A comprehensive introduction. Nat. Comput. 2002, 1, 3–52. [Google Scholar] [CrossRef]
  37. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by simulated annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef] [PubMed]
  38. Dehghani, M.; Samet, H. Momentum search algorithm: A new meta-heuristic optimization algorithm inspired by momentum conservation law. SN Appl. Sci. 2020, 2, 1720. [Google Scholar] [CrossRef]
  39. Dehghani, M.; Montazeri, Z.; Dhiman, G.; Malik, O.; Morales-Menendez, R.; Ramirez-Mendoza, R.A.; Dehghani, A.; Guerrero, J.M.; Parra-Arroyo, L. A spring search algorithm applied to engineering optimization problems. Appl. Sci. 2020, 10, 6173. [Google Scholar] [CrossRef]
  40. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  41. Hatamlou, A. Black hole: A new heuristic optimization approach for data clustering. Inf. Sci. 2013, 222, 175–184. [Google Scholar] [CrossRef]
  42. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-verse optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2016, 27, 495–513. [Google Scholar] [CrossRef]
  43. Faramarzi, A.; Heidarinejad, M.; Stephens, B.; Mirjalili, S. Equilibrium optimizer: A novel optimization algorithm. Knowl.-Based Syst. 2020, 191, 105190. [Google Scholar] [CrossRef]
  44. Hashim, F.A.; Hussain, K.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W. Archimedes optimization algorithm: A new metaheuristic algorithm for solving optimization problems. Appl. Intell. 2021, 51, 1531–1551. [Google Scholar] [CrossRef]
  45. Hashim, F.A.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W.; Mirjalili, S. Henry gas solubility optimization: A novel physics-based algorithm. Future Gener. Comput. Syst. 2019, 101, 646–667. [Google Scholar] [CrossRef]
  46. Cuevas, E.; Oliva, D.; Zaldivar, D.; Pérez-Cisneros, M.; Sossa, H. Circle detection using electro-magnetism optimization. Inf. Sci. 2012, 182, 40–55. [Google Scholar] [CrossRef]
  47. Pereira, J.L.J.; Francisco, M.B.; Diniz, C.A.; Oliver, G.A.; Cunha, S.S., Jr.; Gomes, G.F. Lichtenberg algorithm: A novel hybrid physics-based meta-heuristic for global optimization. Expert Syst. Appl. 2021, 170, 114522. [Google Scholar] [CrossRef]
  48. Wei, Z.; Huang, C.; Wang, X.; Han, T.; Li, Y. Nuclear reaction optimization: A novel and powerful physics-based algorithm for global optimization. IEEE Access 2019, 7, 66084–66109. [Google Scholar] [CrossRef]
  49. Kaveh, A.; Dadras, A. A novel meta-heuristic optimization algorithm: Thermal exchange optimization. Adv. Eng. Softw. 2017, 110, 69–84. [Google Scholar] [CrossRef]
  50. Eskandar, H.; Sadollah, A.; Bahreininejad, A.; Hamdi, M. Water cycle algorithm–A novel metaheuristic optimization method for solving constrained engineering optimization problems. Comput. Struct. 2012, 110, 151–166. [Google Scholar] [CrossRef]
  51. Rao, R.V.; Savsani, V.J.; Vakharia, D. Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput.-Aided Des. 2011, 43, 303–315. [Google Scholar] [CrossRef]
  52. Wei, Z.; Ke, P.; Shigang, L.; Yagang, W. Special Forces Algorithm: A novel meta-heuristic method for global optimization. Math. Comput. Simul. 2023, 213, 394–417. [Google Scholar]
  53. Askari, Q.; Younas, I.; Saeed, M. Political Optimizer: A novel socio-inspired meta-heuristic for global optimization. Knowl.-Based Syst. 2020, 195, 105709. [Google Scholar] [CrossRef]
  54. Trojovská, E.; Dehghani, M. A new human-based metahurestic optimization method based on mimicking cooking training. Sci. Rep. 2022, 12, 14861. [Google Scholar] [CrossRef]
  55. Al-Betar, M.A.; Alyasseri, Z.A.A.; Awadallah, M.A.; Abu Doush, I. Coronavirus herd immunity optimizer (CHIO). Neural Comput. Appl. 2021, 33, 5011–5042. [Google Scholar] [CrossRef]
  56. Dehghani, M.; Mardaneh, M.; Guerrero, J.M.; Malik, O.P.; Ramirez-Mendoza, R.A.; Matas, J.; Vasquez, J.C.; Parra-Arroyo, L. A new “Doctor and Patient” optimization algorithm: An application to energy commitment problem. Appl. Sci. 2020, 10, 5791. [Google Scholar] [CrossRef]
  57. Ayyarao, T.L.; RamaKrishna, N.; Elavarasam, R.M.; Polumahanthi, N.; Rambabu, M.; Saini, G.; Khan, B.; Alatas, B. War Strategy Optimization Algorithm: A New Effective Metaheuristic Algorithm for Global Optimization. IEEE Access 2022, 10, 25073–25105. [Google Scholar] [CrossRef]
  58. Trojovský, P.; Dehghani, M. A new optimization algorithm based on mimicking the voting process for leader selection. PeerJ Comput. Sci. 2022, 8, e976. [Google Scholar] [CrossRef]
  59. Mohamed, A.W.; Hadi, A.A.; Mohamed, A.K. Gaining-sharing knowledge based algorithm for solving optimization problems: A novel nature-inspired algorithm. Int. J. Mach. Learn. Cybern. 2020, 11, 1501–1529. [Google Scholar] [CrossRef]
  60. Dehghani, M.; Mardaneh, M.; Malik, O. FOA: ‘Following’ Optimization Algorithm for solving Power engineering optimization problems. J. Oper. Autom. Power Eng. 2020, 8, 57–64. [Google Scholar]
  61. Dehghani, M.; Trojovská, E.; Zuščák, T. A new human-inspired metaheuristic algorithm for solving optimization problems based on mimicking sewing training. Sci. Rep. 2022, 12, 17387. [Google Scholar] [CrossRef] [PubMed]
  62. Braik, M.; Ryalat, M.H.; Al-Zoubi, H. A novel meta-heuristic algorithm for solving numerical optimization problems: Ali Baba and the forty thieves. Neural Comput. Appl. 2022, 34, 409–455. [Google Scholar] [CrossRef]
  63. Dehghani, M.; Mardaneh, M.; Guerrero, J.M.; Malik, O.; Kumar, V. Football game based optimization: An application to solve energy commitment problem. Int. J. Intell. Eng. Syst. 2020, 13, 514–523. [Google Scholar] [CrossRef]
  64. Moghdani, R.; Salimifard, K. Volleyball premier league algorithm. Appl. Soft Comput. 2018, 64, 161–185. [Google Scholar] [CrossRef]
  65. Dehghani, M.; Montazeri, Z.; Saremi, S.; Dehghani, A.; Malik, O.P.; Al-Haddad, K.; Guerrero, J.M. HOGO: Hide objects game optimization. Int. J. Intell. Eng. Syst. 2020, 13, 216–225. [Google Scholar] [CrossRef]
  66. Dehghani, M.; Montazeri, Z.; Givi, H.; Guerrero, J.M.; Dhiman, G. Darts game optimizer: A new optimization technique based on darts game. Int. J. Intell. Eng. Syst. 2020, 13, 286–294. [Google Scholar] [CrossRef]
  67. Dehghani, M.; Montazeri, Z.; Malik, O.P.; Ehsanifar, A.; Dehghani, A. OSA: Orientation search algorithm. Int. J. Ind. Electron. Control Optim. 2019, 2, 99–112. [Google Scholar]
  68. Dehghani, M.; Montazeri, Z.; Malik, O.P. DGO: Dice game optimizer. Gazi Univ. J. Sci. 2019, 32, 871–882. [Google Scholar] [CrossRef]
  69. Doumari, S.A.; Givi, H.; Dehghani, M.; Malik, O.P. Ring Toss Game-Based Optimization Algorithm for Solving Various Optimization Problems. Int. J. Intell. Eng. Syst. 2021, 14, 545–554. [Google Scholar] [CrossRef]
  70. Zeidabadi, F.A.; Dehghani, M. POA: Puzzle Optimization Algorithm. Int. J. Intell. Eng. Syst. 2022, 15, 273–281. [Google Scholar]
  71. Yao, L.; Yuan, P.; Tsai, C.-Y.; Zhang, T.; Lu, Y.; Ding, S. ESO: An enhanced snake optimizer for real-world engineering problems. Expert Syst. Appl. 2023, 230, 120594. [Google Scholar] [CrossRef]
  72. Hong, J.; Shen, B.; Xue, J.; Pan, A. A vector-encirclement-model-based sparrow search algorithm for engineering optimization and numerical optimization problems. Appl. Soft Comput. 2022, 131, 109777. [Google Scholar] [CrossRef]
  73. Wei, F.; Zhang, Y.; Li, J. Multi-strategy-based adaptive sine cosine algorithm for engineering optimization problems. Expert Syst. Appl. 2024, 248, 123444. [Google Scholar] [CrossRef]
  74. Dressler, D.; Benecke, R. Pharmacology of therapeutic botulinum toxin preparations. Disabil. Rehabil. 2007, 29, 1761–1768. [Google Scholar] [CrossRef]
  75. Blasi, J.; Chapman, E.R.; Link, E.; Binz, T.; Yamasaki, S.; Camilli, P.D.; Südhof, T.C.; Niemann, H.; Jahn, R. Botulinum neurotoxin A selectively cleaves the synaptic protein SNAP-25. Nature 1993, 365, 160–163. [Google Scholar] [CrossRef] [PubMed]
  76. Small, R. Botulinum toxin injection for facial wrinkles. Am. Fam. Physician 2014, 90, 168–175. [Google Scholar] [PubMed]
  77. Pant, M.; Radha, T.; Singh, V.P. A simple diversity guided particle swarm optimization. In Proceedings of the 2007 IEEE Congress on Evolutionary Computation, Singapore, 25–28 September 2007; IEEE: Piscataway, NJ, USA, 2007; pp. 3294–3299. [Google Scholar]
  78. Yao, X.; Liu, Y.; Lin, G. Evolutionary programming made faster. IEEE Trans. Evol. Comput. 1999, 3, 82–102. [Google Scholar]
  79. Awad, N.; Ali, M.; Liang, J.; Qu, B.; Suganthan, P. Problem Definitions and Evaluation Criteria for the CEC 2017 Special Session and Competition on Single Objective Real-Parameter Numerical Optimization; Technical Report; Nanyang Technological University: Singapore, 2016. [Google Scholar]
  80. Wilcoxon, F. Individual comparisons by ranking methods. In Breakthroughs in Statistics; Springer: Berlin/Heidelberg, Germany, 1992; pp. 196–202. [Google Scholar]
  81. Das, S.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for CEC 2011 Competition on Testing Evolutionary Algorithms on Real World Optimization Problems; Jadavpur University: Kolkata, India; Nanyang Technological University: Singapore, 2010; pp. 341–359. [Google Scholar]
  82. Kannan, B.; Kramer, S.N. An augmented Lagrange multiplier based method for mixed integer discrete continuous optimization and its applications to mechanical design. J. Mech. Des. 1994, 116, 405–411. [Google Scholar] [CrossRef]
  83. Gandomi, A.H.; Yang, X.-S. Benchmark problems in structural optimization. In Computational Optimization, Methods and Algorithms; Springer: Berlin/Heidelberg, Germany, 2011; pp. 259–281. [Google Scholar]
  84. Mezura-Montes, E.; Coello, C.A.C. Useful infeasible solutions in engineering optimization with evolutionary algorithms. In Proceedings of the Mexican International Conference on Artificial Intelligence, Monterrey, Mexico, 14–18 November 2005; Springer: Berlin/Heidelberg, Germany, 2005; pp. 652–662. [Google Scholar]
Figure 1. Schematic diagram of the Botox injection and the proposed BOA.
Figure 2. Flowchart of the BOA.
Figure 3. Exploration and exploitation of the BOA.
Figure 4. Boxplot representations illustrating the performances of the BOA and rival algorithms on the CEC 2017 test suite.
Figure 5. Boxplot diagrams of the BOA and competing algorithms’ performances on the CEC 2011 test suite.
Figure 6. Schematic of pressure vessel design. The shell thickness is Ts, the head thickness is Th, the length of the cylindrical shell is L, and the inner radius is R.
Figure 7. The BOA’s performance convergence curve on pressure vessel design.
Figure 8. Schematic of speed reducer design. The face width is b, the number of teeth on the pinion is z, the module of the teeth is m, the length of the second shaft between bearings is l2, the length of the first shaft between bearings is l1, the second shaft's diameter is d2, and the first shaft's diameter is d1.
Figure 9. The BOA’s performance convergence curve on speed reducer design.
Figure 10. Schematic of welded beam design. The bar height is t, the weld thickness is h, the bar thickness is b, and the length of the clamped bar is l.
Figure 11. The BOA’s performance convergence curve on welded beam design.
Figure 12. Schematic of tension/compression spring design. The wire diameter is d, the number of active coils is P, and the mean coil diameter is D.
Figure 13. The BOA’s performance convergence curve on tension/compression spring design.
Table 1. Population diversity, exploration, and exploitation percentage results.
| Function Name | Exploration | Exploitation | Diversity (First Iteration) | Diversity (Last Iteration) |
|---|---|---|---|---|
| F1 | 0 | 1 | 140.5903 | 0 |
| F2 | 0 | 1 | 10.94242 | 0 |
| F3 | 0 | 1 | 252.3905 | 0 |
| F4 | 0 | 1 | 128.8175 | 0 |
| F5 | 0 | 1 | 44.92006 | 0 |
| F6 | 0.012744 | 0.987256 | 114.0171 | 1.453073 |
| F7 | 0.049372 | 0.950628 | 1.518403 | 0.074966 |
| F8 | 5.84E-10 | 1 | 1230.61 | 1.19E-06 |
| F9 | 4.76E-10 | 1 | 9.555822 | 4.55E-09 |
| F10 | 1.8E-17 | 1 | 50.32814 | 9.04E-16 |
| F11 | 3.7E-11 | 1 | 885.0467 | 3.27E-08 |
| F12 | 0 | 1 | 61.45876 | 0 |
| F13 | 0 | 1 | 77.58755 | 0 |
| F14 | 2.02E-08 | 1 | 23.65722 | 4.77E-07 |
| F15 | 5.93E-11 | 1 | 4.048837 | 2.4E-10 |
| F16 | 0.082221 | 0.917779 | 1.61018 | 0.13239 |
| F17 | 6.68E-10 | 1 | 4.961695 | 3.31E-09 |
| F18 | 0.068118 | 0.931882 | 0.757968 | 0.051631 |
| F19 | 0.245007 | 0.754993 | 0.378584 | 0.118559 |
| F20 | 0.054777 | 0.945223 | 0.441119 | 0.024163 |
| F21 | 1.95E-10 | 1 | 3.149125 | 7.25E-10 |
| F22 | 1.36E-10 | 1 | 3.505294 | 6.63E-10 |
| F23 | 9.32E-11 | 1 | 4.347473 | 4.27E-10 |
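Table 1's exploration/exploitation fractions and diversity columns can be computed from a stored population history using a diversity-based measure. The paper does not print its exact formula, so the sketch below is illustrative only: it uses one common definition (mean absolute deviation from the dimension-wise median, with exploration taken as the ratio of current to maximum diversity).

```python
from statistics import median

def diversity(population):
    """Mean absolute deviation of individuals from the dimension-wise median."""
    n, dims = len(population), len(population[0])
    return sum(
        sum(abs(median(ind[j] for ind in population) - ind[j]) for ind in population) / n
        for j in range(dims)
    ) / dims

def exploration_exploitation(history):
    """history: one population (a list of position vectors) per iteration."""
    divs = [diversity(pop) for pop in history]
    div_max = max(divs)
    xpl = [d / div_max for d in divs]                 # exploration fraction
    xpt = [abs(d - div_max) / div_max for d in divs]  # exploitation fraction
    return divs, xpl, xpt

# Toy history: a spread-out first population that collapses onto one point,
# mirroring the first-iteration/last-iteration diversity columns of Table 1.
history = [
    [[0.0, 0.0], [10.0, 10.0]],   # first iteration
    [[5.0, 5.0], [5.0, 5.0]],     # last iteration
]
divs, xpl, xpt = exploration_exploitation(history)
print(divs, xpl, xpt)  # [5.0, 0.0] [1.0, 0.0] [0.0, 1.0]
```

In the toy history the population collapses onto a single point, so the diversity drops from 5.0 to 0.0 and the exploitation fraction rises to 1, mirroring the fully converged rows of Table 1.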
Table 2. Control parameters’ values.
| Algorithm | Parameter | Value |
|---|---|---|
| GA | Type | Real coded |
| | Selection | Roulette wheel (proportionate) |
| | Crossover | Whole arithmetic (probability = 0.8, α ∈ [−0.5, 1.5]) |
| | Mutation | Gaussian (probability = 0.05) |
| PSO | Topology | Fully connected |
| | Cognitive and social constants | (C1, C2) = (2, 2) |
| | Inertia weight | Linear reduction from 0.9 to 0.1 |
| | Velocity limit | 10% of dimension range |
| GSA | α, G0, Rnorm, Rpower | 20, 100, 2, 1 |
| TLBO | TF: teaching factor | TF = round(1 + rand) |
| | Random number | rand is a random number in [0, 1] |
| GWO | Convergence parameter (a) | a: linear reduction from 2 to 0 |
| MVO | Wormhole existence probability (WEP) | Min(WEP) = 0.2 and Max(WEP) = 1 |
| | Exploitation accuracy over the iterations (p) | p = 6 |
| WOA | Convergence parameter (a) | a: linear reduction from 2 to 0 |
| | r | Random vector in [0, 1] |
| | l | Random number in [−1, 1] |
| TSA | Pmin, Pmax | 1, 4 |
| | c1, c2, c3 | Random numbers in the interval [0, 1] |
| MPA | Constant number | P = 0.5 |
| | Random vector | R is a vector of uniform random numbers in [0, 1] |
| | Fish aggregating devices (FADs) | FADs = 0.2 |
| | Binary vector | U = 0 or 1 |
| RSA | Sensitive parameter | β = 0.01 |
| | Sensitive parameter | α = 0.1 |
| | Evolutionary sense (ES) | ES: randomly decreasing values between 2 and −2 |
| AVOA | L1, L2 | 0.8, 0.2 |
| | w | 2.5 |
| | P1, P2, P3 | 0.6, 0.4, 0.6 |
| WSO | Fmin, Fmax | 0.07, 0.75 |
| | τ, a0, a1, a2 | 4.125, 6.25, 100, 0.0005 |
Table 3. Optimization results of the CEC 2017 test suite.
Columns (left to right): BOA, WSO, AVOA, RSA, MPA, TSA, WOA, MVO, GWO, TLBO, GSA, PSO, GA.
C17-F1Mean1005.46E+093848.6251.02E+1035,331,8261.74E+096,458,5317530.83188,328,6531.47E+08747.43453148.60411,867,816
Best1004.59E+09115.63918.84E+0911,218.073.73E+084,702,7524790.09927,833.6865,653,192100.0193345.99356,145,607
Worst1006.85E+0911,928.771.22E+101.28E+083.8E+098,503,45111,096.783.21E+083.56E+081792.3819323.40217,037,274
Std01.07E+096005.3451.64E+0967,796,1791.66E+091,750,5293215.3531.69E+081.52E+08796.87814524.0354,954,471
Median1005.21E+091675.0469.94E+096,476,1051.4E+096,313,9607118.22516,188,75484,181,901548.66911462.5112,144,191
Rank11241381165910237
C17-F3Mean3007591.42301.89579658.2261408.74611,214.421731.412300.05473071.866726.734310,268.8730014,789.2
Best3004113.7833005207.518791.84594270.307619.6359300.01271529.615471.42136461.8123004354.021
Worst30010,174.7304.054812,922.22537.37815,855.23333.866300.12455893.445893.514713,957.4730023,376.32
Std02896.1212.3936623838.968876.45735353.911391.6670.053452191.143201.37423364.5715.05E-1410,814.33
Median3008038.595301.76410,251.591152.88112,366.081486.072300.04072432.202771.000510,328.0930015,713.23
Rank19410612738511213
C17-F4Mean400919.2957404.76051352.77406.7394576.7582425.1975403.341411.7605409.1884404.5619420.3519414.7475
Best400672.5461401.2436845.7611402.4511477.9916406.4543401.5971406.1014408.4021403.5684400.1059411.7011
Worst4001142.743406.53931849.389411.4014692.0754473.6998404.9048428.4155409.6849406.0879470.5109418.4747
Std0229.29262.715361466.22964.804772114.203835.297561.87172612.084140.5986161.25748136.767293.226255
Median400930.9465405.62961357.964406.5525568.4828410.3179403.431406.2626409.3334404.2956405.3954414.407
Rank11241351110276398
C17-F5Mean501.2464562.2888544.5598573.6638513.037565.1128541.4479523.9769513.1801534.4525554.4872528.2287528.3418
Best500.9951547.4308527.1499558.8475508.4371543.7341523.7238510.3406508.6157528.9014549.5681511.27523.5807
Worst501.9917572.2487563.5795588.8477518.2122597.5197577.7367538.4222520.5555538.0275566.3814552.3704534.175
Std0.54077612.1116520.8102218.125895.59141725.9682527.534612.7615.6047034.3647668.74610520.643165.206467
Median500.9993564.7379543.7549573.48512.7494559.5987532.1655523.5724511.7746535.4404550.9996524.6372527.8058
Rank11191321284371056
C17-F6Mean600632.7824617.595641.3535601.2128625.225623.5356602.184601.1449606.9718617.4791607.548610.4234
Best600628.194616.5747638.0899600.7222615.3143607.646600.4796600.6055604.8344602.9627601.3762607.0149
Worst600637.5379620.1871645.6746602.4363641.0634645.9187604.3818601.7463610.3045636.7176619.5657614.7356
Std04.4130031.886133.7094450.88997512.0888317.53911.9084750.5139522.71409716.994878.9791753.72455
Median600632.699616.8091640.8248600.8464622.2611620.2889601.9373601.1139606.3741615.1181604.6251609.9715
Rank11291331110425867
C17-F7Mean711.1267797.8688766.4303805.8543724.8556830.358762.9002731.196726.2532752.71717.2159733.0903737.2869
Best710.6726780.6589744.4304792.4154720.5783789.6269751.7383717.2818717.5639748.1129714.8806725.8012726.7599
Worst711.7995809.4588794.6393818.7295729.351872.5061792.8006750.7791744.0606760.9719721.0193744.8461741.9507
Std0.55738413.28425.1369513.444224.02278739.1858221.7571515.3382813.24736.26272.8920389.4514877.758608
Median711.0174800.6788763.3257806.1362724.7466829.6494753.531728.3615721.6942750.8777716.4818730.8569740.2185
Rank11110123139548267
C17-F8Mean801.4928848.1769831.617854.5622812.8571849.0636836.9437812.0059816.0909838.313820.1744823.1267817.0493
Best800.995842.8959820.611843.1502808.9812832.594818.8766807.536810.6855831.2975812.2041815.9405813.0058
Worst801.9912854.7381847.6997859.9005815.0312868.6915849.3401816.8821821.1703846.4472828.0838829.679824.9831
Std0.6256366.70100912.450988.4093743.04100917.4760614.238854.1807924.7737228.4327837.3542047.4071725.86208
Median801.4926847.5368829.0787857.599813.708847.4845839.779811.8029816.2538837.7536820.2048823.4438815.1042
Rank11181331292410675
C17-F9Mean9001430.9611192.8491476.617905.28371388.6081383.102900.8146912.1313912.0221900904.3122905.1958
Best9001276.126954.61091379.243900.33291172.431076.987900.0011900.5827907.3517900900.9142902.8443
Worst9001573.8521673.6881616.361913.56281681.6011668.813903.166933.6785920.3352900912.5228909.2282
Std0140.1941362.6238109.87876.480782239.779271.14431.70656916.897766.20990806.032933.14228
Median9001436.9321071.5491455.432903.61961350.21393.304900.0457907.1319910.2007900901.9059904.3554
Rank1118125109276134
C17-F10Mean1006.1792311.8341782.7062588.8991519.7822039.3612031.7081785.6951729.9382179.7362286.5651951.9681720.13
Best1000.2842010.8151486.5122416.7571393.5921762.2371452.0751458.7621542.1971786.2432004.9511563.921417.123
Worst1012.6682456.6232423.1482951.2321595.4022293.2712559.742290.9111998.5562469.9642393.2642360.8732118.219
Std7.244311225.7849478.7803270.7973103.4757304.9995582.7437438.7957211.0887316.3582204.7244356.2546327.1839
Median1005.8822389.9491610.5832493.8041545.0662050.9672057.5081696.5541689.4982231.3692374.0231941.5391672.588
Rank11251329864101173
C17-F11Mean11003442.371148.7754000.3831127.1985484.0311151.2431127.6611155.5821151.1971139.4131143.7722389.968
Best11002188.7621117.1451460.6211113.2755334.9711113.0321105.5771121.7431138.0441119.7551132.4321115.129
Worst11004665.8131202.3576508.911159.1115565.8161173.5131149.1881229.0941172.71311691165.3886006.61
Std01210.18940.818422469.21623.5556111.660430.3948323.7135554.4626816.2868922.8737916.151732624.636
Median11003457.4541137.840161118.2035517.6681159.2131127.941135.7461147.0161134.4481138.6341219.067
Rank11161221383974510
C17-F12Mean1352.9593.57E+081,109,7047.11E+08572,257.81,048,2882,373,5471,037,6331,427,0555,094,5221,028,8348145.64610,018.9
Best1318.64680,192,749358,948.91.58E+0820,015.65543,606.5173,159.88892.71145,801.151,363,349478,453.62528.033176,684.6
Worst1438.1766.23E+082,012,4461.24E+09895,571.31,286,9903,937,6413,259,3642,233,7649,018,8481,739,93114,021.191,076,854
Std62.358012.98E+08841,688.25.98E+08419,783.3381,500.61,904,4751,634,0931,049,5124,413,129581,137.55698.792402,260.9
Median1327.5063.62E+081,033,7107.22E+08686,722.11,181,2772,691,694441,138.11,714,3264,997,944948,475.78016.671593,268.4
Rank11281337106911524
C17-F13Mean1305.32417,335,62818,471.9434,661,5455468.24312,831.357630.3286772.9410,372.7816,852.5410,143.186664.75454,887.32
Best1303.1141,445,2542734.6762,877,6823740.6277639.263297.2061386.7266550.02815,912.775078.0072387.7518603.051
Worst1308.50857,542,23831,658.651.15E+086689.90720,335.5715,267.0112,470.2814,494.8119,148.2814,291.0316,841.17181,516.7
Std2.47346229,234,42316,273.3458,465,6861530.8795963.5825938.4036247.9843543.4191681.5014237.9767465.24191,927.81
Median1304.8375,177,51019,747.2110,349,2145721.21811,675.295978.5496617.37610,223.1516,174.5510,601.853715.04814,714.78
Rank11210132854796311
C17-F14Mean1400.7463828.9152027.6455383.6941945.1673405.1091520.1541573.5192355.2521592.6315604.3063010.58113,069.42
Best14003170.61681.6634711.0861435.3631488.7511482.5961423.3221462.9021517.2244631.8031432.8313748.476
Worst1400.9955066.9282841.4486948.682917.4895622.4961560.2951999.1334996.7061623.1987611.0746894.73726,056.87
Std0.541408945.4187594.84461143.911756.53972393.08643.20222308.87121916.554.973351519.182840.85510,284.68
Median1400.9953539.0661793.7354937.5051713.9083254.5941518.8621435.811480.7011615.055087.1741857.37711,236.18
Rank11061159237412813
C17-F15Mean1500.33110,283.955333.3113,989.153998.8417053.6626262.1461542.2345854.3551711.14824,088.279066.7444578.338
Best1500.0013259.432077.8012745.5593239.2812327.1912019.2651526.1573589.3831584.90111,316.552884.3321894.197
Worst1500.517,559.4812,730.2130,6274923.52612,648.2913,558.051554.3856950.0161801.65236,159.6114,917.698074.57
Std0.2562136701.9445408.36213,248.87760.39694827.045473.80113.41961680.534115.764512,916.315473.3123344.03
Median1500.41310,158.453262.61211,292.033916.2796619.5854735.6341544.1976439.011729.0224,438.469232.4774172.293
Rank11161249827313105
C17-F16Mean1600.762006.4721811.4712020.2011684.9892051.361953.6641818.2041729.6581677.5922077.4661926.5791804.186
Best1600.3561936.6771642.6331821.3251642.1191864.7821766.6031727.6661615.961651.3891950.2511824.4771719.79
Worst1601.122125.7661929.192296.531715.872237.6022083.0471880.4471827.5041732.3652274.0782087.7861835.532
Std0.34380791.7687131.3759218.415634.5268184.0946163.693170.3236794.9922641.07563160.2418132.739661.28572
Median1600.7811981.7221837.0311981.4761690.9842051.5281982.5031832.3521737.5851663.3062042.7681897.0261830.712
Rank11061131297421385
C17-F17Mean1700.0991823.3521751.4661819.4951736.0791803.1611843.241844.131769.1811758.9411848.1591752.8641756.518
Best1700.021806.0661734.731802.4241722.1211787.8261774.3091779.251724.6911748.7081748.3961746.151753.36
Worst1700.3321830.6721795.9081828.8061775.6291814.1581891.1421952.8461873.2271768.9861975.6891759.6171758.979
Std0.16886412.6130332.3287612.7645328.7089812.3098155.2311989.4531375.8746110.92649126.14156.2679082.763929
Median1700.0221828.3351737.6131823.3751723.2821805.331853.7551822.2111739.4041759.0361834.2761752.8441756.867
Rank11039281112761345
C17-F18Mean1805.362,877,25711,923.855,735,59211,111.6712,127.723,449.8921,073.6220,025.7129,687.979765.66422,009.0612,887.9
Best1800.003148,282.34864.976283,9244174.7477503.8886480.2718747.6736354.33624,138.126424.122888.0713447.217
Worst1820.4518,337,68415,688.3616,650,12316,616.7616,384.4336,839.4833,922.9533,800.3437,134.8311,923.3440,997.7918,593.6
Std10.951974,127,8215281.268,252,6436158.2614019.38115,920.6812,898.7715,147.086506.0532554.34221,410.877199.978
Median1800.4921,511,53013,571.043,004,16111,827.612,311.2425,239.920,811.9219,974.0728,739.4710,357.622,075.1914,755.39
Rank11241335108711296
C17-F19Mean1900.445390,051.56740.448708,553.55623.033126,33835,023.841914.8515407.0814715.3440,679.0525,099.056211.208
Best1900.03925,762.012178.66546,105.642320.6591949.5567697.991909.4841944.9552044.2111,164.512629.4292215.362
Worst1901.559821,920.813,319.271,522,1249466.148252,369.264,127.871924.47113,887.8512,559.6959,008.8377,380.539935.635
Std0.810364378,831.55895.722724,659.83963.615156,290.325,211.417.703586217.7335691.55323,319.3538,358.73466.525
Median1900.09356,261.65731.927632,992.25352.663125,516.634,134.741912.7242897.7592128.7346,271.4210,193.126346.918
Rank11271351192431086
C17-F20Mean2000.3122216.3832171.6832224.4942092.9312208.7452207.9562140.4772171.0332072.4752255.3722170.092050.542
Best2000.3122165.242031.5362165.6082073.2282107.4052098.9762047.2472131.7762061.3752189.0142145.8062036.046
Worst2000.3122283.8272296.4412280.2832123.6412323.0212289.7812249.0672247.6252082.9642349.0882202.1472058.359
Std053.77105129.697261.4178123.5095299.4045399.26490.1859256.835939.85057684.7551730.4750711.19601
Median2000.3122208.2322179.3772226.0432087.4282202.2782221.5332132.7972152.3652072.782241.6922166.2042053.882
Rank11181241095731362
C17-F21Mean22002292.2242213.9082267.6152257.6172326.0872310.6632253.5342314.1242300.4192369.5472319.6692298.888
Best22002245.7862204.1582224.1312255.1092221.3862218.5282200.0082309.8852203.7462351.9412311.5492226.752
Worst22002321.3232239.2992292.3392260.1722373.392355.1792308.42319.132339.3912387.0012327.2772333.787
Std037.442418.4787232.829832.3316877.2749967.6915467.274464.13778570.6427615.945738.41884153.00527
Median22002300.8932206.0882276.9952257.5932354.7862334.4722252.8632313.742329.272369.6222319.9252317.506
Rank16254129310813117
C17-F22Mean2300.0732701.0272309.0542920.7862305.0442717.1822323.992285.6622308.6692319.7292300.0042313.3762318.072
Best23002581.0712304.3992710.2092300.9512450.3382319.2882228.8732301.2772313.41623002300.6432315.149
Worst2300.292820.2822311.2363075.3282309.4372926.7162331.6972305.3322322.592331.5622300.0182345.8332322.574
Std0.157893114.93183.421932167.32443.892406231.38356.03352441.2150910.668419.0361570.00965823.599133.452639
Median23002701.3772310.2922948.8042304.8942745.8372322.4882304.2212305.4052316.9723002303.5142317.282
Rank31161341210159278
C17-F23Mean2600.9192690.4992642.5212701.5982614.4572724.6522649.2312620.4532613.86826432793.7042644.762656.735
Best2600.0032655.3772630.8512672.3142611.982634.7242631.182607.242607.8532632.0222728.0262637.4752636.591
Worst2602.872710.5952660.4612742.5932617.1872769.5262669.6022632.1432620.6512652.3822933.472656.8112665.174
Std1.43692228.3828915.181735.767582.68238966.2842822.569611.806447.1732019.830174105.08069.52117114.8412
Median2600.4032698.0122639.3852695.7432614.3312747.182648.0712621.2152613.4832643.7992756.662642.3762662.587
Rank11051131284261379
C17-F24Mean2630.4882788.3172768.952852.0382630.6542668.6592761.8862683.8342749.9372757.0612748.6122766.8852724.025
Best2516.6772745.6882736.9412825.6542617.6192523.6692736.4452501.192726.5332745.9232502.6552755.882536.059
Worst2732.322856.9872792.8462913.4622636.8072812.8272792.3012759.8422766.3032767.0282899.2112786.9362811.901
Std126.788357.9551327.8963744.821219.562201168.313125.02914133.123719.1988610.60604186.152515.09998137.7502
Median2636.4772775.2962773.0062834.5182634.0952669.0692759.3992737.1512753.4572757.6472796.2922762.3632774.07
Rank11211132394786105
C17-F25Mean2932.6393139.2152913.3173279.8732917.7653135.4212907.3152922.0072938.7512933.5372922.1792923.2522952.419
Best2898.0473067.7562899.1043210.5272913.4282905.5132763.1082900.5722921.0742915.9072902.2612898.6732938.562
Worst2945.7933299.2442949.0923356.8832923.183664.4882959.6822943.7222945.9152952.3882943.3942946.562962.947
Std25.12878118.495126.0164765.885014.450199388.1053104.726726.8086812.8628621.9649224.9658428.6654211.32013
Median2943.3593094.9312902.5373276.0412917.2252985.8422953.2352921.8672944.0072932.9272921.5312923.8882954.084
Rank71221331114985610
C17-F26Mean29003563.4752980.6123764.4653012.7213627.6513185.712900.1493268.7943209.5483870.8122904.0982897.19
Best29003234.3922806.1173437.5182892.043146.5392927.4732900.1142969.8852912.1692806.1172806.1172705.581
Worst29003783.2533159.2574104.733297.3184282.6613600.7312900.1953916.7673885.0464362.8623010.2763111.603
Std4.04E-13264.6671219.3124313.0895207.4462604.6037320.34710.039308474.4582493.3205785.028890.85344223.8032
Median29003618.1282978.5373757.8052930.7633540.7013107.3182900.1443094.2623020.4884157.13629002885.789
Rank21051261173981341
C17-F27Mean3089.5183211.0933120.2943232.4743104.8363180.3873195.9293091.6483116.3643115.3363227.333136.5193160.678
Best3089.5183162.7443095.3623127.5743092.273102.5523179.8843089.7123094.4843095.4413215.0523097.1683119.628
Worst3089.5183293.23181.7963426.3823134.2333223.0463207.8043095.0163177.6133172.0453249.0893184.293220.171
Std2.86E-1361.7637844.75181144.071521.4836459.4107312.681622.71515444.4825141.154616.4841239.8728946.26169
Median3089.5183194.2143102.0093187.9713096.4213197.9743198.0143090.9323096.683096.9293222.5893132.3093151.456
Rank11161339102541278
C17-F28Mean31003597.0023237.243784.8623219.5293590.3193288.3583239.8813346.9783326.9583453.6643307.3743247.568
Best31003551.34431003701.8693167.4883415.0983153.133100.1253195.4793214.923440.2923177.7543145.253
Worst31003629.1663392.7483844.773244.6273801.2613393.2633392.7493414.6273392.9923472.2463392.9653516.826
Std037.84994140.924472.1844438.85546217.9689134.3096175.9315110.780592.5007316.11354106.1885196.1013
Median31003603.753228.1063796.4053232.9993572.4593303.5183233.3253388.9023349.9613451.0593329.3893164.097
Rank11231321164981075
C17-F29Mean3132.2413341.5433286.0293377.5733203.8133237.2713351.0073203.3913266.4973213.4433347.9693267.3773238.278
Best3130.0763320.373211.1963305.5333166.3253166.4353236.7433142.6313190.4583166.0153234.6773168.2733189.013
Worst3134.8413357.5833367.5883445.4273245.6573308.0273499.3643288.0153381.8023236.3483639.8533351.0133287.876
Std2.70154416.9305987.6517678.469238.0102463.06594119.87367.002799.0514835.89995212.650790.3055445.26891
Median3132.0233344.1093282.6663379.6663201.6353237.313333.9623191.4593246.8643225.7053258.6723275.1113238.111
Rank11091335122741186
C17-F30Mean3418.7342,270,202296,246.23,694,956416,890.8617,692.3997,284.6304,439.1940,663.960,930.4786,751.4389,254.41,535,217
Best3394.6821,673,245105,205.9831,886.715,996.07112,875.54471.2477460.75933,746.8129,426.72604,858.76409.895528,520.6
Worst3442.9073,240,778771,775.45,836,135615,301.11,306,0893,764,9001,160,7731,361,385102,270.81,004,710771,812.23,497,487
Std30.22288740,116345,952.72,280,306296,209551,800.62,010,505621,503.5678,871.638,719.09180,833.5480,108.61,523,052
Median3418.6732,083,393154,001.94,055,902518,132.9525,902.3109,883.724,761.281,183,76256,012.05768,718.5389,397.81,057,430
Rank11231367104928511
 | BOA | WSO | AVOA | RSA | MPA | TSA | WOA | MVO | GWO | TLBO | GSA | PSO | GA
Sum rank | 38 | 318 | 177 | 350 | 106 | 286 | 239 | 116 | 188 | 191 | 238 | 183 | 197
Mean rank | 1.310345 | 10.96552 | 6.103448 | 12.06897 | 3.655172 | 9.862069 | 8.241379 | 4 | 6.482759 | 6.586207 | 8.206897 | 6.310345 | 6.793103
Total rank | 1 | 12 | 4 | 13 | 2 | 11 | 10 | 3 | 6 | 7 | 9 | 5 | 8
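The Sum rank, Mean rank, and Total rank rows of Table 3 are simple aggregates of the per-function ranks: sum the ranks over all benchmark functions, divide by the number of functions, and then re-rank the algorithms by their mean rank. A minimal sketch of this aggregation (the 3-algorithm rank matrix is illustrative):

```python
import numpy as np

def aggregate_ranks(rank_matrix):
    """Aggregate per-function ranks (rows = functions, columns = algorithms)."""
    ranks = np.asarray(rank_matrix, dtype=float)
    sum_rank = ranks.sum(axis=0)
    mean_rank = sum_rank / ranks.shape[0]
    # 1 = best overall; ties broken by column order via the double argsort
    total_rank = mean_rank.argsort().argsort() + 1
    return sum_rank, mean_rank, total_rank
```

Note that the total rank orders algorithms by mean rank, so an algorithm can have the same sum rank as another yet a different total rank only through tie-breaking.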
Table 4. Wilcoxon signed-rank test results.
Compared Algorithm | CEC 2017 (p-value)
BOA vs. WSO | 1.97E-21
BOA vs. AVOA | 3.77E-19
BOA vs. RSA | 1.97E-21
BOA vs. MPA | 2.00E-18
BOA vs. TSA | 9.50E-21
BOA vs. WOA | 9.50E-21
BOA vs. MVO | 9.03E-19
BOA vs. GWO | 5.23E-21
BOA vs. TLBO | 3.69E-21
BOA vs. GSA | 1.60E-18
BOA vs. PSO | 1.54E-19
BOA vs. GA | 2.71E-19
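p-values such as those in Table 4 come from the Wilcoxon signed-rank test applied to the paired per-run results of the BOA and a competitor. A minimal sketch of the test using the normal approximation (ties and zero differences are handled only crudely here; in practice, `scipy.stats.wilcoxon` provides exact p-values and proper tie correction):

```python
import math
import numpy as np

def wilcoxon_signed_rank_p(x, y):
    """Two-sided Wilcoxon signed-rank p-value via the normal approximation."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    d = d[d != 0]                                  # drop zero differences
    n = d.size
    order = np.abs(d).argsort()
    ranks = np.empty(n)
    ranks[order] = np.arange(1, n + 1)             # rank 1 = smallest |difference|
    w_plus = ranks[d > 0].sum()                    # sum of ranks of positive diffs
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2))        # two-sided tail probability
```

When one algorithm's errors are consistently lower than the other's, almost all signed ranks fall on one side and the p-value becomes very small, as in the table above.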
Table 5. Optimization results of the CEC 2011 test suite.
Columns (left to right): BOA, WSO, AVOA, RSA, MPA, TSA, WOA, MVO, GWO, TLBO, GSA, PSO, GA.
C11-F1Mean5.92010318.365813.3652222.890037.66264519.1391913.6737214.4702611.1477319.1729722.5950318.6514224.38586
Best2E-1016.194089.34585421.256190.39209618.182948.67855911.689251.17628817.3325320.7026911.0375723.48431
Worst12.3060621.0588617.220625.2427612.7067420.3639917.6942316.9236718.3536921.0605823.9582624.9743626.34835
Std7.1963792.4750394.559741.9976245.9121311.0460454.3195522.531897.6060471.6291711.4715016.9595451.418008
Median5.68717618.1051413.4472222.530598.77587419.0049114.1610414.6340612.5304719.1493922.8595719.2968823.85539
Rank17412295631011813
C11-F2Mean−26.3179−13.8385−20.8022−10.8912−25.0347−10.5975−18.2689−7.99707−22.4684−10.1862−15.0459−22.5196−12.3132
Best−27.0676−15.2725−21.3417−11.3546−25.6818−14.5354−21.8709−10.1042−24.6938−11.4086−20.3083−23.9159−14.7797
Worst−25.4328−12.5692−20.0389−10.403−23.6635−8.29419−14.0672−6.404−18.7253−9.11933−10.7963−20.0088−10.5245
Std0.7389351.4506720.6134060.516550.9878223.1044554.2214611.6823682.7610060.9988484.5576541.8082432.093573
Median−26.3856−13.7562−20.914−10.9036−25.3968−9.78013−18.5688−7.74007−23.2272−10.1085−14.5394−23.0768−11.9743
Rank18510211613412739
C11-F3Mean1.15E-051.15E-051.15E-051.15E-051.15E-051.15E-051.15E-051.15E-051.15E-051.15E-051.15E-051.15E-051.15E-05
Best1.15E-051.15E-051.15E-051.15E-051.15E-051.15E-051.15E-051.15E-051.15E-051.15E-051.15E-051.15E-051.15E-05
Worst1.15E-051.15E-051.15E-051.15E-051.15E-051.15E-051.15E-051.15E-051.15E-051.15E-051.15E-051.15E-051.15E-05
Std2E-192.29E-112.63E-095.16E-111.28E-152.46E-146.39E-191.03E-123.85E-158.1E-142.07E-196.03E-202.85E-18
Median1.15E-051.15E-051.15E-051.15E-051.15E-051.15E-051.15E-051.15E-051.15E-051.15E-051.15E-051.15E-051.15E-05
Rank11113126841079325
C11-F4Mean0000000000000
Best0000000000000
Worst0000000000000
Std0000000000000
Median0000000000000
Rank1111111111111
C11-F5Mean−34.1274−24.4632−27.8918−19.4252−33.246−26.8779−27.396−26.7327−31.4832−9.86988−27.1031−7.61915−8.51327
Best−34.7494−25.646−28.9861−21.6476−33.8296−31.4861−27.5536−31.6474−34.2023−12.1106−31.4408−11.318−10.0071
Worst−33.3862−23.4985−27.4102−16.9763−31.872−21.3206−27.0038−24.211−27.3029−8.15331−23.8495−5.81752−6.75594
Std0.5899890.9894570.7798292.5890140.967474.4010750.276043.6592523.0964951.7702063.4963422.7230631.505232
Median−34.1871−24.3541−27.5855−19.5384−33.6412−27.3525−27.5132−25.5362−32.2137−9.60779−26.561−6.67054−8.64499
Rank19410275831161312
C11-F6Mean−24.1119−13.6555−18.847−12.6221−22.5646−6.92131−19.8051−8.96998−19.4699−1.47492−21.8112−2.37518−3.31781
Best−27.4298−14.2972−20.1867−13.3562−25.6947−16.2979−22.9894−17.2225−22.2246−1.67787−26.7438−5.2789−8.77883
Worst−23.0059−13.3329−17.0321−11.6073−21.2709−3.56713−12.5777−1.40727−17.8008−1.40727−17.4414−1.40727−1.40727
Std2.3249510.4578161.5494870.8681252.2247296.5792785.1819269.0432092.2297710.1422174.2018882.0347393.829068
Median−23.0059−13.496−19.0847−12.7624−21.6463−3.9101−21.8266−8.62506−18.9272−1.40727−21.5297−1.40727−1.54257
Rank17682104951331211
C11-F7Mean0.8606991.6303361.2978391.9543750.9318821.3162631.7723730.8817311.0742491.7466261.0866921.1320471.769103
Best0.5822661.5763121.1519171.7003340.762991.1465391.6526650.8119950.8077421.5448260.8942210.8270511.374402
Worst1.0250271.7388671.4398482.1408341.0122131.6885431.9462210.9589621.3081181.8882651.2944071.3825851.976253
Std0.2115030.0789730.1612470.1940660.1213850.2633060.1314570.0786020.2175780.1575660.1914750.304780.286366
Median0.917751.6030821.2997961.9881650.9761631.2149851.7453020.8779841.0905691.7767071.0790711.1592751.862878
Rank19713381224105611
C11-F8Mean220287.3329241.2176329.2094222.5348258.6563267.7395224.2247227.6045224.2247247.3067478.5826222.5818
Best220259.8815223.7553286.7508220220246.1934220220220220249.1037220
Worst220323.3464258.6798375.4703225.0697359.4163315.479236.8989235.209236.8989296.0452582.7551230.3271
Std029.2066115.797138.250393.07655271.006933.713648.8812429.2296578.88124237.90414165.9845.427425
Median220283.0518241.2176327.3083222.5348227.6045254.6428220227.6045220236.5908541.2357220
Rank1106112894547123
C11-F9Mean8789.286577,676.6392,136.41,101,22120,62568,367.45388,333.6138,018.744,296.44423,556.8853,670.61,122,4472,014,720
Best5457.674385,879.1346,854.4718,85011,156.3149,109.4214,821.278,052.7318,844.04350,374.3730,315.2900,965.21,930,602
Worst14,042.29663,904.2422,189.51,292,11929,564.9686,903.2658,126.8208,994.377,779.54543,358.7919,076.41,374,7332,132,738
Std3889.181137,784.334,745.74273,210.98526.78616,900.77212,512.556,771.1926,179.2189,242.5588,329.7266,242.6104,535.7
Median7828.591630,461.6399,750.81,196,95820,889.3768,728.59340,193.1132,513.840,281.09400,247.2882,645.41,107,0451,997,771
Rank19711246538101213
C11-F10Mean−21.4889−13.701−16.7579−11.948−18.9391−14.1356−12.564−14.4587−13.8394−10.9206−12.8545−11.024−10.7178
Best−21.8299−14.9479−16.9487−12.3376−19.3304−18.7599−13.2827−21.164−14.3313−11.0122−13.3834−11.0692−10.7552
Worst−20.7878−13.0631−16.3898−11.6609−18.554−11.6783−12.0574−11.0939−12.6316−10.8456−12.0435−11.0025−10.6622
Std0.4986160.9004020.269460.3037760.422593.3485810.5421854.7731990.854290.0760440.6822860.0327540.042033
Median−21.669−13.3964−16.8466−11.8967−18.936−13.0521−12.4579−12.7884−14.1973−10.9123−12.9956−11.0121−10.7269
Rank17310259461281113
C11-F11Mean571,712.35,990,5421,005,9379,157,6151,697,9246,138,0511,238,3621,334,5883,950,3475,376,6381,441,1565,388,1146,322,408
Best260,837.95,713,697790,621.88,861,8971,582,2235,108,6531,126,726609,4923,753,1385,357,6221,292,3035,372,0546,288,702
Worst828,560.96,367,1191,183,7699,345,0991,828,4537,420,2631,396,9112,812,9104,332,2125,392,3501,619,2165,407,4806,399,460
Std260,922.1315,719.1180,244.7217,528.6122,969.11,003,762119,8651,050,363273,78115,960.55141,558.116,144.3655,013.89
Median598,725.25,940,6761,024,6789,211,7331,690,5096,011,6441,214,905957,974.73,858,0185,378,2911,426,5515,386,4626,300,734
Rank11021361134785912
C11-F12Mean1,199,8058,648,4653,453,39413,667,0351,277,2295,166,6885,980,1201,332,0091,432,07614,799,1815,954,0532,353,72714,965,863
Best1,155,9378,290,4453,348,64412,692,5411,199,9384,885,7545,545,8131,174,1601,263,20913,925,7285,653,5642,175,72914,835,162
Worst1,249,3538,965,6613,521,97214,523,2801,357,5015,316,5456,197,3921,482,7831,573,79515,476,1826,170,4502,571,33315,100,976
Std47,157.58294,829.279,624.52789,278.672,305.18210,271.9315,606132,531.8135,346.5683,800.8234,270.3171,464.7114,159.5
Median1,196,9658,668,8783,471,48113,726,1591,275,7385,232,2276,088,6371,335,5471,445,64914,897,4075,996,0982,333,92214,963,658
Rank11061127934128513
C11-F13Mean15,444.215,872.7215,448.1316,341.9315,463.8615,491.8715,538.815,510.2215,503.1815,951.03132,457.915,492.5330,533.99
Best15,444.1915,680.4615,447.115,910.5915,461.4715,481.5915,493.5115,489.2515,49615,634.295,600.2715,474.6715,461.05
Worst15,444.2116,338.5115,449.2817,413.5515,46815,504.9415,599.8515,550.1315,515.616,534.68182,449.715,530.3875,388.3
Std0.009091329.56790.965292757.10363.0420112.1349652.0007629.665989.128434428.397241,099.6726.8111431,431.06
Median15,444.215,735.9515,448.0816,021.7815,462.9915,490.4715,530.9215,500.7515,500.5615,817.61125,890.815,482.5315,643.31
Rank19211348761013512
C11-F14Mean18,295.35115,938.618,527.59236,83818,619.2519,576.6619,259.8619,460.6219,267.06321,549.619,121.9619,155.7219,142.49
Best18,241.5888,038.8618,409.22174,242.118,533.3619,310.2719,102.0219,357.4119,116.9130,687.5218,822.0818,985.6618,847.36
Worst18,388.08162,435.618,626.34341,619.318,694.9620,146.7919,385.6919,546.2419,460.5621,276.219,341.4419,304.2819,454.19
Std71.5993834,977.03107.580778,804.5773.40623403.5767136.487383.23058159.3053298,019.5235.5233137.423260.6657
Median18,275.87106,639.918,537.4215,745.318,624.3519,424.7919,275.8719,469.4119,245.42317,117.319,162.1719,166.4719,134.21
Rank11121231079813465
C11-F15Mean32,883.58940,841.1110,645.31,985,23132,950.7655,363.33225,557.833,108.0833,085.1215,996,211309,756.933,303.158,231,969
Best32,782.17387,070.343,553.37829,509.732,873.1333,051.1133,017.8633,024.1933,048.033,350,790274,007.733,293.783,746,244
Worst32,956.462,367,653185,709.15,184,17033,020.99121,996.9323,378.933,169.0133,150.0423,854,606334,233.833,312.4314,108,888
Std76.946961,003,43380,304.622,245,09163.6450446,692.58137,782.267.0447148.923919,799,50029,448.298.0714544,994,206
Median32,897.86504,320.5106,659.4963,621.332,954.4733,202.65272,917.233,119.5733,071.2118,389,724315,393.133,303.197,536,372
Rank11071126843139512
C11-F16Mean133,550975,518.1135,237.72,017,200137,810.6145,558142,484.5142,115.8146,331.892,220,90319,417,71882,541,54379,253,422
Best131,374.2294,692.9133,737.1486,424.7135,730.3142,830.3136,399.6133,165.5143,68489,866,7449,860,01568,276,86664,052,715
Worst136,310.82,311,282135,911.95,020,540141,530.1147,640.7147,906.6151,316.2151,968.794,876,22035,135,21598,635,6511.01E+08
Std2392.2953,315.11067.5262,143,3642722.9742482.8675054.2848022.2524005.4582,206,78111,487,54713,754,29216,662,950
Median133,257.5648,048.9135,650.91,280,917136,991145,880.4142,816141,990.7144,837.292,070,32516,337,82281,626,82875,794,234
Rank18293654713101211
C11-F17Mean1,926,6159.3E+092.4E+091.61E+102,304,5701.33E+091.01E+103,156,4323,060,4802.31E+101.16E+102.16E+102.27E+10
Best1,916,9537.92E+092.18E+091.16E+101,958,8631.1E+097.18E+092,310,0262,042,6832.23E+101.02E+101.91E+102.12E+10
Worst1,942,6851.03E+102.63E+091.97E+102,944,0651.52E+091.34E+103,810,7034,991,5552.42E+101.23E+102.5E+102.56E+10
Std12,003.531.11E+092.07E+083.66E+09464,536.32.29E+082.74E+09728,055.41,396,4448.21E+089.97E+082.8E+092.11E+09
Median1,923,4129.48E+092.4E+091.66E+102,157,6761.35E+099.84E+093,252,4982,603,8402.31E+101.2E+102.12E+102.2E+10
Rank17610258431391112
C11-F18Mean942,057.557,009,3396,765,5971.23E+08972,857.52,091,6469,903,113989,837.61,034,45832,127,44711,509,5371.4E+081.19E+08
Best938,416.239,198,1194,054,02684,796,562950,200.11,824,8974,240,455964,629.7967,922.625,456,9598,580,0561.17E+081.14E+08
Worst944,706.964,852,36211,629,1431.4E+081,033,1812,447,36717,411,9621,001,5981,210,12934,756,12014,528,3171.55E+081.23E+08
Std2774.13912,627,7203,707,86727,267,02542,401.09315,442.75,845,99917,904.23123,353.44,693,4862,793,34417,827,0263,754,354
Median942,553.561,993,4385,689,6101.33E+08954,024.62,047,1618,980,017996,561.4979,889.434,148,35511,464,8881.43E+081.19E+08
Rank11061225734981311
C11-F19Mean1,025,34156,112,1906,867,4911.2E+081,142,0372,517,27610,562,7011,493,5681,375,43036,886,7956,463,7211.79E+081.19E+08
Best967,927.747,875,0676,260,7671.04E+081,070,9552,270,1542,100,5471,134,1761,241,43725,821,5002,432,8361.63E+081.16E+08
Worst1,167,14271,354,8608,329,1001.51E+081,297,6052,980,67019,167,0841,996,2381,558,53146,022,7788,501,5962.07E+081.23E+08
Std99,675.0411,137,1931,031,44723,175,948110,088.2333,115.48,443,038380,000.6139,4249,198,2392,895,91320,333,9792,808,423
Median983,146.652,609,4166,440,0481.13E+081,099,7942,409,14010,491,5851,421,9281,350,87737,851,4507,460,2261.73E+081.19E+08
Rank11071225843961311
C11-F20Mean941,250.459,668,6116,078,0191.3E+08961,061.91,859,7137,518,662973,995.91,000,68535,832,14414,770,4471.65E+081.2E+08
Best936,143.252,493,1385,354,9051.14E+08957,468.71,668,8637,081,933963,584.3978,672.935,045,3789,799,3441.51E+081.14E+08
Worst946,866.670,665,7696,851,4451.55E+08963,379.72,176,7638,101,614985,820.91,017,85336,682,87322,883,4591.79E+081.24E+08
Std5013.5528,139,325652,894.418,267,7762670.851253,461.5458,251.810,320.5617,752.81715,894.66,010,18216,641,8324,510,598
Median940,995.957,757,7686,052,8621.26E+08961,699.61,796,6127,445,551973,289.11,003,10735,800,16313,199,4931.65E+081.2E+08
Rank11061225734981311
C11-F21Mean12.7144351.6647721.9808878.8730516.057230.4719239.7773328.0982922.74275104.016441.7608109.2417105.994
Best9.97420642.3474420.6845658.4762413.9011127.0184836.2667724.8414820.9157249.6302536.6114294.2966460.45563
Worst14.9749961.6578123.7930499.2655118.347932.1929844.1380531.1801125.09372153.482744.73996121.6858129.6288
Std2.4126678.7509691.39652418.935582.1731822.4821723.6583713.7330971.92441744.714053.83754214.1127833.81205
Median12.9542551.3269221.7229578.8752215.9898931.3381139.3522428.1857922.48077106.476442.84591110.4922116.9458
Rank19310267541181312
C11-F22Mean16.1251347.9827.8755165.2835819.1934932.7459847.4813832.9080325.33021105.894647.85216110.080995.48389
Best11.5013341.4627522.579446.9981916.3654628.7142841.1040225.1731724.0978868.3800339.7433992.3591394.65695
Worst19.5528653.6368433.2457775.1515521.3256635.2626652.3067938.0828426.25798125.422257.19442121.50997.01353
Std4.1977975.4832585.29041913.161132.4821152.9953295.3010126.1068071.08097726.964167.53747713.850531.132853
Median16.7231748.4102127.8384269.492319.5414233.5034948.2573534.1880625.48249114.888147.23541113.227795.13255
Rank19410257631281311
 | BOA | WSO | AVOA | RSA | MPA | TSA | WOA | MVO | GWO | TLBO | GSA | PSO | GA
Sum rank | 22 | 191 | 109 | 231 | 55 | 146 | 145 | 118 | 97 | 222 | 157 | 198 | 224
Mean rank | 1 | 8.681818 | 4.954545 | 10.5 | 2.5 | 6.636364 | 6.590909 | 5.363636 | 4.409091 | 10.09091 | 7.136364 | 9 | 10.18182
Total rank | 1 | 9 | 4 | 13 | 2 | 7 | 6 | 5 | 3 | 11 | 8 | 10 | 12
Wilcoxon p-value (vs. BOA) | n/a | 4.38E-12 | 7.75E-15 | 1.56E-15 | 0.001746142 | 4.89E-15 | 5.25E-15 | 1.60E-11 | 1.92E-12 | 3.34E-15 | 8.03E-15 | 1.56E-15 | 2.28E-15
Table 6. Performance of optimization algorithms on the pressure vessel design problem.
Algorithm | Ts | Th | R | L | Optimal Cost
BOA | 0.7781685 | 0.3846492 | 40.319615 | 200 | 5885.3263
WSO | 0.7781685 | 0.3846492 | 40.319615 | 200 | 5885.3322
AVOA | 0.7781902 | 0.3846599 | 40.320737 | 199.98436 | 5885.3693
RSA | 0.8538832 | 0.4168324 | 40.384824 | 200 | 6547.2433
MPA | 0.7781685 | 0.3846492 | 40.319615 | 200 | 5885.3322
TSA | 0.7797576 | 0.3858656 | 40.396539 | 200 | 5913.0266
WOA | 0.8128457 | 0.5410128 | 40.396424 | 198.93351 | 6581.148
MVO | 0.8182022 | 0.4061992 | 42.352706 | 173.53515 | 5968.7271
GWO | 0.7784539 | 0.3856252 | 40.32716 | 199.94288 | 5890.2366
TLBO | 1.1978845 | 1.2639942 | 61.056149 | 91.741579 | 14,709.571
GSA | 0.957018 | 0.4737273 | 49.581732 | 144.99985 | 7674.4943
PSO | 1.276768 | 2.3221525 | 50.647017 | 110.15343 | 17,231.342
GA | 1.1434315 | 0.7799385 | 54.784767 | 96.514991 | 9745.9413
Table 7. Statistical results of optimization algorithms on the pressure vessel design problem.
AlgorithmMeanBestWorstStdMedianRank
BOA5885.32635885.32635885.32632.32E-085885.32631
WSO5907.0115885.33226094.60653.1047135885.33223
AVOA6417.95425885.36937301.8987485.208276249.92065
RSA12,102.4586547.243320,969.9823923.607611,268.4359
MPA5885.33225885.33225885.33223.91E-065885.33222
TSA6259.45685913.02667323.2568391.184186101.12376
WOA7978.52796581.14812,433.2421390.33957795.47748
MVO6576.38195968.72717273.5044448.128076572.64597
GWO5945.52435890.23666636.6942163.663975901.75734
TLBO39,032.93414,709.57169,674.57415,506.90338,454.33812
GSA24,592.0497674.494339,531.9578743.482926,413.07510
PSO41,176.99717,231.34289,983.87518,842.41738,677.47213
GA29,575.4519745.941360,485.67214,026.2726,621.05711
Table 8. Performance of optimization algorithms on the speed reducer design problem.

| Algorithm | b | m | p | l1 | l2 | d1 | d2 | Optimal Cost |
|---|---|---|---|---|---|---|---|---|
| BOA | 3.5 | 0.7 | 17 | 7.3 | 7.8 | 3.3502147 | 5.2866832 | 2996.3482 |
| WSO | 3.5000005 | 0.7 | 17 | 7.3000099 | 7.8000004 | 3.3502148 | 5.2866833 | 2996.3483 |
| AVOA | 3.5 | 0.7 | 17 | 7.3000007 | 7.8 | 3.3502147 | 5.2866832 | 2996.3482 |
| RSA | 3.5922092 | 0.7 | 17 | 8.222092 | 8.261046 | 3.3556658 | 5.4833809 | 3182.9113 |
| MPA | 3.5 | 0.7 | 17 | 7.3 | 7.8 | 3.3502147 | 5.2866832 | 2996.3482 |
| TSA | 3.5129039 | 0.7 | 17 | 7.3 | 8.261046 | 3.3505407 | 5.2902177 | 3013.8833 |
| WOA | 3.587509 | 0.7 | 17 | 7.3 | 8.0094193 | 3.3616163 | 5.2867558 | 3038.2679 |
| MVO | 3.5022528 | 0.7 | 17 | 7.3 | 8.069157 | 3.3696027 | 5.2868819 | 3008.2394 |
| GWO | 3.5006415 | 0.7 | 17 | 7.3051454 | 7.8 | 3.3639533 | 5.2888109 | 3001.5161 |
| TLBO | 3.556121 | 0.7039992 | 26.327655 | 8.1017162 | 8.1453492 | 3.6635667 | 5.3393802 | 5271.2441 |
| GSA | 3.5229197 | 0.7027544 | 17.369301 | 7.8207543 | 7.8896487 | 3.4088005 | 5.3859782 | 3169.7986 |
| PSO | 3.5081873 | 0.700072 | 18.096129 | 7.3990809 | 7.8680597 | 3.5955569 | 5.3440486 | 3302.6701 |
| GA | 3.5780478 | 0.7055678 | 17.814174 | 7.742773 | 7.8558683 | 3.7017132 | 5.3463595 | 3.35E+03 |
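The speed reducer costs in Table 8 can be verified the same way. The sketch below uses the standard gear-train weight objective from the benchmark literature (an assumption; b is the face width, m the module, p the number of pinion teeth, l1/l2 the shaft lengths between bearings, d1/d2 the shaft diameters). Evaluated at the BOA/MPA variables it reproduces the tabulated 2996.3482.

```python
def speed_reducer_cost(b, m, p, l1, l2, d1, d2):
    # Standard benchmark objective: weight of gears, shafts, and bearings.
    return (0.7854 * b * m**2 * (3.3333 * p**2 + 14.9334 * p - 43.0934)
            - 1.508 * b * (d1**2 + d2**2)
            + 7.4777 * (d1**3 + d2**3)
            + 0.7854 * (l1 * d1**2 + l2 * d2**2))

# Design variables reported for BOA and MPA in Table 8.
print(speed_reducer_cost(3.5, 0.7, 17, 7.3, 7.8, 3.3502147, 5.2866832))
```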
Table 9. Statistical results of optimization algorithms on the speed reducer design problem.

| Algorithm | Mean | Best | Worst | Std | Median | Rank |
|---|---|---|---|---|---|---|
| BOA | 2996.3482 | 2996.3482 | 2996.3482 | 9.33E-13 | 2996.3482 | 1 |
| WSO | 2996.6318 | 2996.3483 | 2998.8003 | 0.5851051 | 2996.3644 | 3 |
| AVOA | 3000.8579 | 2996.3482 | 3011.0816 | 3.9697349 | 3000.7583 | 4 |
| RSA | 3276.9058 | 3182.9113 | 3335.2402 | 57.540902 | 3291.7902 | 9 |
| MPA | 2996.3482 | 2996.3482 | 2996.3482 | 3.19E-06 | 2996.3482 | 2 |
| TSA | 3032.1482 | 3013.8833 | 3045.8852 | 10.144159 | 3033.9369 | 7 |
| WOA | 3150.1207 | 3038.2679 | 3445.3098 | 106.34463 | 3116.768 | 8 |
| MVO | 3029.8375 | 3008.2394 | 3070.2104 | 13.263239 | 3030.2775 | 6 |
| GWO | 3004.6252 | 3001.5161 | 3010.5926 | 2.5083807 | 3004.107 | 5 |
| TLBO | 6.958E+13 | 5271.2441 | 5.037E+14 | 1.158E+14 | 2.725E+13 | 12 |
| GSA | 3454.8489 | 3169.7986 | 4076.1493 | 262.31973 | 3325.1431 | 10 |
| PSO | 1.027E+14 | 3302.6701 | 5.202E+14 | 1.24E+14 | 7.345E+13 | 13 |
| GA | 4.944E+13 | 3347.0081 | 3.191E+14 | 7.789E+13 | 1.981E+13 | 11 |
Table 10. Performance of optimization algorithms on the welded beam design problem.

| Algorithm | h | l | t | b | Optimal Cost |
|---|---|---|---|---|---|
| BOA | 0.2057296 | 3.4704887 | 9.0366239 | 0.2057296 | 1.7246798 |
| WSO | 0.2057296 | 3.4704887 | 9.0366239 | 0.2057296 | 1.7248523 |
| AVOA | 0.2049647 | 3.4870781 | 9.0365172 | 0.2057345 | 1.7259197 |
| RSA | 0.1966937 | 3.534683 | 9.9249453 | 0.2177987 | 1.9754653 |
| MPA | 0.2057296 | 3.4704887 | 9.0366239 | 0.2057296 | 1.7248523 |
| TSA | 0.2041956 | 3.4953797 | 9.0641911 | 0.2061564 | 1.7338449 |
| WOA | 0.2137287 | 3.3297286 | 8.9738153 | 0.2209982 | 1.8213232 |
| MVO | 0.2059931 | 3.46481 | 9.044686 | 0.2060556 | 1.7283648 |
| GWO | 0.205592 | 3.4736454 | 9.0362401 | 0.2057988 | 1.7255236 |
| TLBO | 0.315253 | 4.4215666 | 6.7977001 | 0.4250886 | 3.0235653 |
| GSA | 0.2938352 | 2.7217307 | 7.4212433 | 0.3079408 | 2.0844573 |
| PSO | 0.3725304 | 3.4246823 | 7.3446854 | 0.5739303 | 4.0226813 |
| GA | 0.224308 | 6.9143503 | 7.7634846 | 0.304362 | 2.7608802 |
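For the welded beam problem, the fabrication-cost objective commonly used in the benchmark literature is 1.10471 h²l + 0.04811 t b (14 + l), where h is the weld thickness, l the weld length, t the bar height, and b the bar thickness (an assumed formulation; the paper's own statement of the problem is not in this excerpt). Evaluated at the WSO/MPA variables of Table 10 it reproduces the tabulated 1.7248523:

```python
def welded_beam_cost(h, l, t, b):
    # Standard benchmark objective: weld cost plus bar material cost.
    return 1.10471 * h**2 * l + 0.04811 * t * b * (14.0 + l)

# Design variables reported for WSO and MPA in Table 10.
print(welded_beam_cost(0.2057296, 3.4704887, 9.0366239, 0.2057296))
```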
Table 11. Statistical results of optimization algorithms on the welded beam design problem.

| Algorithm | Mean | Best | Worst | Std | Median | Rank |
|---|---|---|---|---|---|---|
| BOA | 1.7246798 | 1.7246798 | 1.7246798 | 2.28E-16 | 1.7246798 | 1 |
| WSO | 1.7248526 | 1.7248523 | 1.7248578 | 1.25E-06 | 1.7248523 | 3 |
| AVOA | 1.7612095 | 1.7259197 | 1.8426669 | 0.0364639 | 1.7473196 | 7 |
| RSA | 2.1815628 | 1.9754653 | 2.5291486 | 0.1441236 | 2.1565285 | 8 |
| MPA | 1.7248523 | 1.7248523 | 1.7248523 | 3.35E-09 | 1.7248523 | 2 |
| TSA | 1.7431468 | 1.7338449 | 1.7523381 | 0.005605 | 1.743243 | 6 |
| WOA | 2.3106389 | 1.8213232 | 4.0458397 | 0.6416317 | 2.0857253 | 9 |
| MVO | 1.7412206 | 1.7283648 | 1.775042 | 0.0137552 | 1.7371509 | 5 |
| GWO | 1.7272522 | 1.7255236 | 1.7312956 | 0.0013626 | 1.727007 | 4 |
| TLBO | 3.326E+13 | 3.0235653 | 3.209E+14 | 8.111E+13 | 5.6909484 | 12 |
| GSA | 2.4436073 | 2.0844573 | 2.7521417 | 0.1914907 | 2.4731468 | 10 |
| PSO | 4.586E+13 | 4.0226813 | 2.776E+14 | 8.759E+13 | 6.7293081 | 13 |
| GA | 1.126E+13 | 2.7608802 | 1.218E+14 | 3.456E+13 | 5.6575496 | 11 |
Table 12. Performance of optimization algorithms on the tension/compression spring design problem.

| Algorithm | d | D | P | Optimal Cost |
|---|---|---|---|---|
| BOA | 0.0516891 | 0.3567177 | 11.288966 | 0.0126019 |
| WSO | 0.0516871 | 0.3566701 | 11.291759 | 0.0126652 |
| AVOA | 0.0511918 | 0.3448817 | 12.021301 | 0.0126702 |
| RSA | 0.0501316 | 0.314172 | 14.710881 | 0.0131579 |
| MPA | 0.0516907 | 0.3567583 | 11.28659 | 0.0126652 |
| TSA | 0.0509889 | 0.3401015 | 12.347641 | 0.012682 |
| WOA | 0.0511663 | 0.3442823 | 12.06054 | 0.0126707 |
| MVO | 0.0501316 | 0.3199667 | 13.884692 | 0.0127497 |
| GWO | 0.0519561 | 0.3631596 | 10.925567 | 0.0126707 |
| TLBO | 0.0677281 | 0.8916127 | 2.7236846 | 0.0174771 |
| GSA | 0.0551098 | 0.4411042 | 7.8215101 | 0.0130734 |
| PSO | 0.0676458 | 0.8885012 | 2.7236846 | 0.0173752 |
| GA | 0.0681952 | 0.8994084 | 2.7236846 | 0.0178708 |
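The spring costs in Table 12 can likewise be recomputed. The objective commonly used in the benchmark literature is the spring weight (P + 2) D d², where d is the wire diameter, D the mean coil diameter, and P the number of active coils (an assumed formulation, not restated in this excerpt). Evaluated at the MPA variables it reproduces the tabulated 0.0126652:

```python
def spring_weight(d, D, P):
    # Standard benchmark objective: weight of the tension/compression spring,
    # proportional to total coil length times wire cross-section.
    return (P + 2.0) * D * d**2

# Design variables reported for MPA in Table 12.
print(spring_weight(0.0516907, 0.3567583, 11.28659))
```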
Table 13. Statistical results of optimization algorithms on the tension/compression spring design problem.

| Algorithm | Mean | Best | Worst | Std | Median | Rank |
|---|---|---|---|---|---|---|
| BOA | 0.0126019 | 0.0126019 | 0.0126019 | 6.88E-18 | 0.0126019 | 1 |
| WSO | 0.0126763 | 0.0126652 | 0.0128239 | 3.537E-05 | 0.0126656 | 3 |
| AVOA | 0.0133339 | 0.0126702 | 0.0141329 | 0.00055 | 0.0132665 | 8 |
| RSA | 0.0132385 | 0.0131579 | 0.0133806 | 6.845E-05 | 0.0132178 | 6 |
| MPA | 0.0126652 | 0.0126652 | 0.0126652 | 2.81E-09 | 0.0126652 | 2 |
| TSA | 0.0129585 | 0.012682 | 0.0135147 | 0.0002383 | 0.0128858 | 5 |
| WOA | 0.0132643 | 0.0126707 | 0.0144745 | 0.0005961 | 0.0130687 | 7 |
| MVO | 0.0164236 | 0.0127497 | 0.0178419 | 0.0016251 | 0.0173272 | 9 |
| GWO | 0.0127222 | 0.0126707 | 0.0129425 | 5.456E-05 | 0.0127197 | 4 |
| TLBO | 0.0180015 | 0.0174771 | 0.0186004 | 0.0003532 | 0.0179579 | 10 |
| GSA | 0.0193335 | 0.0130734 | 0.031807 | 0.0042027 | 0.0189131 | 11 |
| PSO | 2.064E+13 | 0.0173752 | 3.663E+14 | 8.195E+13 | 0.0173752 | 13 |
| GA | 1.612E+12 | 0.0178708 | 1.668E+13 | 4.815E+12 | 0.025383 | 12 |

Share and Cite

MDPI and ACS Style

Hubálovská, M.; Hubálovský, Š.; Trojovský, P. Botox Optimization Algorithm: A New Human-Based Metaheuristic Algorithm for Solving Optimization Problems. Biomimetics 2024, 9, 137. https://0-doi-org.brum.beds.ac.uk/10.3390/biomimetics9030137
