Article

A New Hybrid Particle Swarm Optimization–Teaching–Learning-Based Optimization for Solving Optimization Problems

by Štěpán Hubálovský 1,*, Marie Hubálovská 2 and Ivana Matoušová 3

1 Department of Applied Cybernetics, Faculty of Science, University of Hradec Králové, 50003 Hradec Králové, Czech Republic
2 Department of Technics, Faculty of Education, University of Hradec Králové, 50003 Hradec Králové, Czech Republic
3 Department of Mathematics, Faculty of Science, University of Hradec Králové, 50003 Hradec Králové, Czech Republic
* Author to whom correspondence should be addressed.
Submission received: 13 November 2023 / Revised: 9 December 2023 / Accepted: 22 December 2023 / Published: 25 December 2023
(This article belongs to the Special Issue Bioinspired Algorithms)

Abstract
This research paper develops a novel hybrid approach, called hybrid particle swarm optimization–teaching–learning-based optimization (hPSO-TLBO), by combining two metaheuristic algorithms to solve optimization problems. The main idea in the hPSO-TLBO design is to integrate the exploitation ability of PSO with the exploration ability of TLBO. The exploitation capability of PSO refers to its ability to manage the local search, with the aim of obtaining possibly better solutions near the solutions already found in promising areas of the problem-solving space. The exploration ability of TLBO refers to its ability to manage the global search, with the aim of preventing the algorithm from getting stuck in inappropriate local optima. The hPSO-TLBO design methodology is such that, in the first step, the teacher phase of TLBO is merged into the velocity equation of PSO. In the second step, the learner phase of TLBO is improved so that each student learns from a selected better student, that is, one with a better objective function value than the corresponding student. The algorithm is presented in detail, accompanied by a comprehensive mathematical model. A group of benchmarks is used to evaluate the effectiveness of hPSO-TLBO, covering various types such as unimodal, high-dimensional multimodal, and fixed-dimensional multimodal functions. In addition, the CEC 2017 benchmark problems are utilized for evaluation purposes. The optimization results clearly demonstrate that hPSO-TLBO performs remarkably well in addressing the benchmark functions. It exhibits a remarkable ability to explore and exploit the search space while maintaining a balanced approach throughout the optimization process. Furthermore, a comparative analysis is conducted to evaluate the performance of hPSO-TLBO against twelve widely recognized metaheuristic algorithms. The evaluation of the experimental findings illustrates that hPSO-TLBO consistently outperforms the competing algorithms across various benchmark functions. The successful deployment of hPSO-TLBO in addressing four engineering challenges highlights its effectiveness in tackling real-world applications.

1. Introduction

Optimization is the process of finding the best solution among all available solutions for an optimization problem [1]. From a mathematical point of view, every optimization problem consists of three main parts: decision variables, constraints, and objective function. Therefore, the goal in optimization is to determine the appropriate values for the decision variables so that the objective function is optimized by respecting the constraints of the problem [2]. There are countless optimization problems in science, engineering, industry, and real-world applications that must be solved using appropriate techniques [3].
Metaheuristic algorithms are among the most effective approaches for handling optimization tasks. They are able to provide suitable solutions for optimization problems without the need for gradient information, relying only on random search in the problem-solving space through random operators and trial-and-error processes [4]. Advantages such as simple concepts, easy implementation, efficiency in nonlinear, nonconvex, discontinuous, nonderivative, and NP-hard optimization problems, and efficiency in discrete and unknown search spaces have made metaheuristic algorithms popular among researchers [5]. The optimization process in metaheuristic algorithms starts with the random generation of a number of feasible solutions to the problem. Then, during an iteration-based process, these initial solutions are improved according to the algorithm's update steps. At the end, the best solution found is presented as the solution to the problem [6]. Because of the random nature of the search in metaheuristic algorithms, there is no guarantee of reaching the global optimum with these approaches. However, since the solutions they provide lie close to the global optimum, they are accepted as quasi-optimal solutions [7].
To perform the search process well, metaheuristic algorithms must be able to scan the problem-solving space at both the global and local levels. Global search, corresponding to the concept of exploration, is the ability of the algorithm to survey the entire search space, preventing it from getting stuck in locally optimal areas and allowing it to accurately identify the region containing the main optimum. Local search, corresponding to the concept of exploitation, is the ability of the algorithm to search accurately and meticulously around the discovered solutions and promising areas, with the aim of reaching solutions close to the global optimum. In addition to being capable in exploration and exploitation, a successful metaheuristic algorithm must be able to balance exploration and exploitation during the search process [8]. The desire of researchers to obtain better solutions for optimization problems has led to the design of numerous metaheuristic algorithms.
The main question of this research is whether, considering the many metaheuristic algorithms introduced so far, there is still a need to design new algorithms or to develop hybrid approaches that combine several metaheuristic algorithms. In response to this question, the no free lunch (NFL) theorem [9] states that no single metaheuristic algorithm is the best optimizer for all optimization applications. According to the NFL theorem, the good performance of a metaheuristic algorithm in solving one set of optimization problems provides no guarantee of the same performance in handling other optimization applications. The NFL theorem therefore keeps the research field active and motivates researchers to provide more effective solutions for optimization problems by introducing new algorithms as well as by developing hybrid versions that combine several algorithms.
Numerous metaheuristic algorithms have been designed by researchers. Among these, particle swarm optimization (PSO) [10] and teaching–learning-based optimization (TLBO) [11] are successful and popular algorithms that have been widely employed to deal with optimization problems in various sciences.
The design of PSO is inspired by the movement of flocks of birds and schools of fish in search of food. In the PSO design, the position of the best member is used to update the positions of the population members. This dependence of the update process on the best member prevents the algorithm from scanning the entire problem-solving space and, as a result, can lead to rapid convergence toward inappropriate local optima. Therefore, improving the exploration ability of PSO, in order to manage the global search, plays a significant role in making this algorithm perform more successfully.
TLBO is adapted from the exchange of knowledge between a teacher and students, and among the students themselves, in the educational environment of a classroom. The teacher phase in the design of TLBO gives this algorithm a high capability in exploration and global search.
The innovation and novelty of this article lie in developing a new hybrid metaheuristic algorithm, called hybrid particle swarm optimization–teaching–learning-based optimization (hPSO-TLBO), for handling optimization tasks. The main motivation in designing hybrid algorithms is to benefit from the advantages of two or more algorithms at the same time by combining them. PSO has good quality in exploitation but suffers from weak exploration; TLBO, on the other hand, has high quality in exploration. Therefore, the main goal in designing hPSO-TLBO is to build a powerful hybrid metaheuristic approach that combines the exploitation power of PSO with the exploration power of TLBO.
The main contributions of this paper are as follows:
  • hPSO-TLBO is developed based on the combination of particle swarm optimization and teaching–learning-based optimization.
  • The performance of hPSO-TLBO is tested on fifty-two standard benchmark functions of the unimodal, high-dimensional multimodal, and fixed-dimensional multimodal types, as well as on the CEC 2017 test suite.
  • The performance of hPSO-TLBO in handling real-world applications is evaluated on four engineering design problems.
  • The results of hPSO-TLBO are compared with the performance of twelve well-known metaheuristic algorithms.
This paper is organized as follows: the literature review is presented in Section 2. The proposed hPSO-TLBO approach is introduced and modeled in Section 3. Simulation studies and results are presented in Section 4. The effectiveness of hPSO-TLBO in handling real-world applications is challenged in Section 5. Finally, conclusions and suggestions for future research are provided in Section 6.

2. Literature Review

Metaheuristic algorithms have been inspired by various natural phenomena: the behavior of living organisms in nature, concepts from genetics and biology, laws and concepts of physics, rules of games, human behavior, and other evolutionary phenomena. Based on the source of inspiration in their design, metaheuristic algorithms can be placed into five groups: swarm-based, evolutionary-based, physics-based, game-based, and human-based.
Swarm-based metaheuristic algorithms have been proposed based on modeling swarm behaviors among birds, animals, insects, aquatic animals, plants, and other living organisms in nature. The most famous algorithms of this group are particle swarm optimization (PSO) [10], artificial bee colony (ABC) [12], ant colony optimization (ACO) [13], and the firefly algorithm (FA) [14]. The PSO algorithm was developed using inspiration from the movement of flocks of birds and schools of fish searching for food. ABC was proposed based on the activities of honey bees in a colony, aiming to access food resources. ACO was introduced based on modeling the ability of ants to discover the shortest path between the colony and the food source. FA was developed using inspiration from optical communication between fireflies. Foraging, hunting, migration, and digging are among the most common natural behaviors among living organisms, and they have been a source of inspiration in the design of swarm-based metaheuristic algorithms such as the coati optimization algorithm (COA) [15], whale optimization algorithm (WOA) [16], white shark optimizer (WSO) [17], reptile search algorithm (RSA) [18], pelican optimization algorithm (POA) [19], kookaburra optimization algorithm (KOA) [20], grey wolf optimizer (GWO) [21], walruses optimization algorithm (WaOA) [22], golden jackal optimization (GJO) [23], honey badger algorithm (HBA) [24], lyrebird optimization algorithm (LOA) [25], marine predator algorithm (MPA) [26], African vultures optimization algorithm (AVOA) [27], and tunicate swarm algorithm (TSA) [28].
Evolutionary-based metaheuristic algorithms have been proposed based on modeling concepts from biology and genetics, such as survival of the fittest and natural selection. The genetic algorithm (GA) [29] and differential evolution (DE) [30] are among the most well-known and widely used metaheuristic algorithms of this group, developed based on the modeling of reproduction, Darwin's theory of evolution, and the random evolutionary operators of mutation, crossover, and selection. Artificial immune system (AIS) [31] algorithms are designed with inspiration from the human body's defense mechanism against diseases and microbes.
Physics-based metaheuristic algorithms have been proposed based on modeling concepts, transformations, forces, and laws of physics. Simulated annealing (SA) [32] is one of the most famous metaheuristic algorithms of this group; it was developed based on the modeling of the annealing process of metals, during which metals are melted under heat and then slowly cooled until they reach an ideal crystalline state. Physical forces have inspired the design of several algorithms, including the gravitational search algorithm (GSA) [33], based on the simulation of gravitational force; the spring search algorithm (SSA) [34], based on the simulation of spring potential force; and the momentum search algorithm (MSA) [35], based on the simulation of impulse force. Some of the most popular physics-based methods are the water cycle algorithm (WCA) [36], electromagnetism optimization (EMO) [37], the Archimedes optimization algorithm (AOA) [38], the Lichtenberg algorithm (LA) [39], the equilibrium optimizer (EO) [40], the black hole algorithm (BHA) [41], the multi-verse optimizer (MVO) [42], and thermal exchange optimization (TEO) [43].
Game-based metaheuristic algorithms have been proposed, inspired by governing rules, strategies of players, referees, coaches, and other influential factors in individual and group games. The modeling of league matches was a source of inspiration in designing algorithms such as football game-based optimization (FGBO) [44], based on a football game, and the volleyball premier league (VPL) algorithm [45], based on a volleyball league. The effort of players in a tug-of-war competition was the main idea in the design of tug of war optimization (TWO) [46]. Some other game-based algorithms are the golf optimization algorithm (GOA) [47], hide object game optimizer (HOGO) [48], darts game optimizer (DGO) [49], archery algorithm (AA) [5], and puzzle optimization algorithm (POA) [50].
Human-based metaheuristic algorithms have been proposed, inspired by strategies, choices, decisions, thoughts, and other human behaviors in individual and social life. Teaching–learning-based optimization (TLBO) [11] is one of the most famous human-based algorithms, which is designed based on modeling the classroom learning environment and the interactions between students and teachers. The interactions between doctors and patients in order to treat patients are the main idea in the design of doctor and patient optimization (DPO) [51]. Cooperation among the people of a team in order to achieve the team's goals is employed in the design of the teamwork optimization algorithm (TOA) [52]. The efforts of both the poor and the rich sections of society to improve their economic situation were a source of inspiration in the design of poor and rich optimization (PRO) [53]. Some of the other human-based metaheuristic algorithms are the mother optimization algorithm (MOA) [54], coronavirus herd immunity optimizer (CHIO) [55], driving training-based optimization (DTBO) [56], Ali Baba and the Forty Thieves (AFT) [57], election-based optimization algorithm (EBOA) [58], chef-based optimization algorithm (ChBOA) [59], sewing training-based optimization (STBO) [60], language education optimization (LEO) [61], gaining–sharing knowledge-based algorithm (GSK) [62], and war strategy optimization (WSO) [63].
In addition to the groupings stated above, researchers have developed hybrid metaheuristic algorithms by combining two or more metaheuristic algorithms. The main goal and motivation in the construction of hybrid metaheuristic algorithms is to take advantage of several algorithms at the same time in order to improve the performance of the optimization process compared to the single versions of each of the combined algorithms. The combination of TLBO and HS was used to design the hTLBO-HS hybrid approach [64]. hPSO-YUKI was proposed based on the combination of PSO and the YUKI algorithm to address the challenge of double crack identification in CFRP cantilever beams [65]. The hGWO-PSO hybrid approach was designed by integrating GWO and PSO for static and dynamic crack identification [66].
PSO and TLBO are successful metaheuristic approaches that have always attracted the attention of researchers and have been employed to solve many optimization applications. In addition to using the single versions of PSO and TLBO, researchers have tried to develop hybrid approaches that integrate these two algorithms to benefit from the advantages of both at the same time. One hybrid version of hPSO-TLBO was proposed based on merging the better half of the PSO population with the better half obtained from the TLBO teacher phase; the merged population then enters the learner phase of TLBO. In this hybrid approach, the equations themselves are neither changed nor integrated [67]. A hybrid version of hPSO-TLBO based on population merging was proposed for trajectory optimization [68]. The idea of dividing and merging the population has also been used to solve optimization problems [69]. A hybrid version of PSO and TLBO was proposed for distribution network reconfiguration [70]. A hybrid version of TLBO and SA, together with a support vector machine, was developed for gene expression data [71]. From the combination of the sine–cosine algorithm and TLBO, the hSCA-TLBO hybrid approach was proposed for visual tracking [72]. Sunflower optimization and TLBO were combined to develop hSFO-TLBO for biodegradable classification [73]. A hybrid version called hTLBO-SSA was proposed from the combination of the salp swarm algorithm and TLBO for reliability redundancy allocation problems [74]. A hybrid version consisting of PSO and SA was developed under the title hPSO-SA for mobile robot path planning in warehouses [75]. Harris hawks optimization and PSO were integrated to design hPSO-HHO for renewable energy applications [76]. A hybrid version called hPSO-GSA was proposed from the combination of PSO and GSA for feature selection [77]. A hybrid version made from PSO and GWO, called hPSO-GWO, was developed to deal with reliability optimization and redundancy allocation for fire extinguisher drones [78]. A hybrid PSO-GA approach was proposed for flexible flow shop scheduling with transportation [79].
In addition to the development of hybrid metaheuristic algorithms, researchers have tried to improve existing versions of algorithms by making modifications. Therefore, numerous improved versions of metaheuristic algorithms have been proposed by scientists to improve the performance of the original versions of existing algorithms. An improved version of PSO was proposed for efficient maximum power point tracking under partial shading conditions [80]. An improved version of PSO was developed based on hummingbird flight patterns to enhance search quality and population diversity [81]. In order to deal with the planar graph coloring problem, an improved version of PSO was designed [82]. The application of an improved version of PSO was evaluated for the optimization of reactive power [83]. An improved version of TLBO for optimal placement and sizing of electric vehicle charging infrastructure in a grid-tied DC microgrid was proposed [84]. An improved version of TLBO was developed for solving time–cost optimization in generalized construction projects [85]. Two improved TLBO approaches were developed for the solution of inverse boundary design problems [86]. In order to address the challenge of selective harmonic elimination in multilevel inverters, an improved version of TLBO was designed [87].
To the best of our knowledge from the literature review, although several attempts have been made to improve the performance of the PSO and TLBO algorithms and to design hybrid versions of these two algorithms, it is still possible to develop an effective hybrid approach for solving optimization problems by integrating the equations of these two algorithms and modifying their design. To address this research gap in the study of metaheuristic algorithms, a new hybrid metaheuristic approach combining PSO and TLBO is developed in this paper and discussed in detail in the next section.

3. Hybrid Particle Swarm Optimization–Teaching–Learning-Based Optimization

In this section, PSO and TLBO are discussed first, and their mathematical equations are presented. Then, the proposed hybrid particle swarm optimization–teaching–learning-based optimization (hPSO-TLBO) approach is presented based on the combination of PSO and TLBO.

3.1. Particle Swarm Optimization (PSO)

PSO is a prominent swarm-based metaheuristic algorithm, widely known for its ability to emulate the foraging behavior observed in fish schools and bird flocks, enabling an effective search for optimal solutions. All PSO members are candidate solutions, representing values of the decision variables through their positions in the search space. The personal best experience $P_{best,i}$ and the collective best experience $g_{best}$ are used in the PSO population updating process. $P_{best,i}$ is the best candidate solution that the $i$th PSO member has achieved up to the current iteration, and $g_{best}$ is the best candidate solution discovered by the entire population up to the current iteration. The population update equations in PSO are as follows:
$$ X_i(t+1) = X_i(t) + V_i(t), \tag{1} $$

$$ V_i(t+1) = \omega(t) \cdot V_i(t) + r_1 \cdot c_1 \cdot \left( P_{best,i} - X_i(t) \right) + r_2 \cdot c_2 \cdot \left( g_{best} - X_i(t) \right), \tag{2} $$

$$ \omega(t) = 0.9 - 0.8 \cdot \frac{t-1}{T-1}, \tag{3} $$
where $X_i(t)$ is the $i$th PSO member, $V_i(t)$ is its velocity, $P_{best,i}$ is the best solution obtained so far by the $i$th PSO member, $g_{best}$ is the best solution obtained so far by the overall PSO population, $\omega(t)$ is the inertia weight factor, which decreases linearly from 0.9 to 0.1 over the iterations, $T$ is the maximum number of iterations, $t$ is the iteration counter, $r_1$ and $r_2$ are real numbers with a uniform probability distribution between 0 and 1 (i.e., $r_1, r_2 \sim U[0,1]$), and $c_1$ and $c_2$ (fulfilling the condition $c_1 + c_2 \le 4$) are acceleration constants, where $c_1$ represents the confidence of a PSO member in itself and $c_2$ represents its confidence in the population.
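To make these update rules concrete, the following minimal Python sketch implements Equations (1)-(3); the sphere test function, the bounds, the random seed, and the setting $c_1 = c_2 = 2$ are illustrative assumptions of the sketch, not settings taken from this paper.

```python
import numpy as np

# Minimal PSO sketch following Equations (1)-(3); parameter values are
# illustrative assumptions (c1 = c2 = 2 satisfies c1 + c2 <= 4).
def pso(f, lb, ub, N=30, T=200, c1=2.0, c2=2.0, seed=0):
    rng = np.random.default_rng(seed)
    dim = len(lb)
    X = rng.uniform(lb, ub, (N, dim))            # positions
    V = np.zeros((N, dim))                       # velocities
    F = np.array([f(x) for x in X])
    pbest, f_pbest = X.copy(), F.copy()          # personal bests, Pbest_i
    gbest = X[F.argmin()].copy()                 # global best, gbest
    for t in range(1, T + 1):
        w = 0.9 - 0.8 * (t - 1) / (T - 1)        # Equation (3)
        r1 = rng.random((N, dim))
        r2 = rng.random((N, dim))
        V = w * V + r1 * c1 * (pbest - X) + r2 * c2 * (gbest - X)  # Eq. (2)
        X = np.clip(X + V, lb, ub)               # Equation (1), with clipping
        F = np.array([f(x) for x in X])
        improved = F < f_pbest
        pbest[improved], f_pbest[improved] = X[improved], F[improved]
        gbest = pbest[f_pbest.argmin()].copy()
    return gbest, f_pbest.min()

# Example: minimize the sphere function in five dimensions.
best_x, best_f = pso(lambda x: float(np.sum(x**2)),
                     np.full(5, -10.0), np.full(5, 10.0))
```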

3.2. Teaching–Learning-Based Optimization (TLBO)

TLBO has established itself as a leading and extensively employed human-based metaheuristic algorithm, effectively simulating the dynamics of educational interactions within a classroom setting. Like PSO, each TLBO member is also a candidate solution to the problem based on its position in the search space. In the design of TLBO, the best member of the population with the most knowledge is considered a teacher, and the other population members are considered class students. In TLBO, the position of population members is updated under two phases (the teacher and learner phases).
In the teacher phase, the best member of the population with the highest level of knowledge, denoted as the teacher, tries to raise the academic level of the class by teaching and transferring knowledge to students. The population update equations in TLBO based on the teacher phase are as follows:
$$ X_i^{new} = X_i + r_3 \cdot (T - I \cdot M), \tag{4} $$

$$ M = \frac{1}{N} \sum_{i=1}^{N} X_i, \tag{5} $$
where $X_i$ is the $i$th TLBO member, $X_i^{new}$ is its newly generated position, $T$ is the teacher, $M$ is the mean position of the class, $r_3 \sim U[0,1]$, $I$ is a random integer drawn uniformly from the set $\{1, 2\}$, and $N$ represents the number of population members.
In the learner phase, the students of the class try to improve their own knowledge level, and thus that of the class, by helping each other. In TLBO, it is assumed that each student randomly chooses another student and exchanges knowledge with them. The population update equations in TLBO for the learner phase are as follows:
$$ X_i^{new} = \begin{cases} X_i + r_4 \cdot (X_k - X_i), & F_k < F_i; \\ X_i + r_4 \cdot (X_i - X_k), & \text{else}, \end{cases} \tag{6} $$
where $X_k$ is the $k$th student ($k \in \{1, 2, \dots, N\}$ and $k \ne i$), $F_k$ is its objective function value, and $r_4 \sim U[0,1]$.
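As an illustration, the following Python sketch implements one TLBO iteration built from Equations (4)-(6); the greedy acceptance of improved positions and the clipping of positions to the bounds `lb` and `ub` are conventions assumed in this sketch, following the usual TLBO formulation.

```python
import numpy as np

# One TLBO iteration over the population X with objective values F,
# following Equations (4)-(6); greedy acceptance is assumed.
def tlbo_iteration(f, X, F, lb, ub, rng):
    N, dim = X.shape
    teacher = X[F.argmin()].copy()            # best member acts as teacher T
    M = X.mean(axis=0)                        # Equation (5): class mean
    for i in range(N):
        # Teacher phase, Equation (4); I is drawn uniformly from {1, 2}
        I = rng.integers(1, 3)
        X_new = np.clip(X[i] + rng.random(dim) * (teacher - I * M), lb, ub)
        F_new = f(X_new)
        if F_new < F[i]:
            X[i], F[i] = X_new, F_new
        # Learner phase, Equation (6): exchange with a random peer k != i
        k = rng.choice([j for j in range(N) if j != i])
        if F[k] < F[i]:
            step = X[k] - X[i]
        else:
            step = X[i] - X[k]
        X_new = np.clip(X[i] + rng.random(dim) * step, lb, ub)
        F_new = f(X_new)
        if F_new < F[i]:
            X[i], F[i] = X_new, F_new
    return X, F
```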

3.3. Proposed Hybrid Particle Swarm Optimization–Teaching–Learning-Based Optimization (hPSO-TLBO)

This subsection presents the introduction and modeling of the proposed hPSO-TLBO approach, which combines the features of PSO and TLBO. In this design, an attempt was made to use the advantages of each of the mentioned algorithms so as to develop a hybrid metaheuristic algorithm that performs better than PSO or TLBO.
PSO has a high exploitation ability thanks to the term $r_1 \cdot c_1 \cdot (P_{best,i} - X_i)$ in its update equations; however, due to the dependence of the update process on the best population member $g_{best}$, PSO is weak in global search and exploration. In fact, the term $r_2 \cdot c_2 \cdot (g_{best} - X_i)$ in PSO can drive the algorithm into a local optimum and a stationary state (the premature gathering of all population members around a single solution).
The teacher phase in TLBO incorporates large and sudden changes in the population's positions, based on the term $r_3 \cdot (T - I \cdot M)$, resulting in global search and exploration capabilities. Enhancing exploration in metaheuristic algorithms improves the search process, preventing it from getting trapped in local optima and allowing it to accurately identify the region of the main optimum. Hence, the primary concept behind the design of the proposed hPSO-TLBO approach is to strengthen the exploration phase of PSO by leveraging the exceptional global search and exploration capabilities of TLBO. Accordingly, in hPSO-TLBO, a new hybrid metaheuristic algorithm is designed by integrating the exploration ability of TLBO with the exploitation ability of PSO.
To make the combination of PSO and TLBO possible and effective, the term $r_2 \cdot c_2 \cdot (g_{best} - X_i)$ was removed from Equation (2) (i.e., the velocity equation) and, to improve the exploration ability, the term $r_3 \cdot (T - I \cdot M)$ from the teacher phase of TLBO was added to this equation. Therefore, the new form of the velocity equation in hPSO-TLBO is as follows:
$$ V_i = \omega(t) \cdot V_i + r_1 \cdot c_1 \cdot (P_{best,i} - X_i) + r_3 \cdot (T - I \cdot M). \tag{7} $$
Then, based on the velocity calculated from Equation (7) and on the position update rule of Equation (1), a new position for each hPSO-TLBO member is calculated by Equation (8). If the value of the objective function improves at the new position, it supersedes the previous position of the corresponding member, according to Equation (9).
$$ X_i^{new} = X_i + V_i, \tag{8} $$

$$ X_i = \begin{cases} X_i^{new}, & F_i^{new} \le F_i; \\ X_i, & \text{else}, \end{cases} \tag{9} $$
where $X_i^{new}$ is the newly proposed position of the $i$th population member in the search space and $F_i^{new}$ is the objective function value of $X_i^{new}$.
During the student phase of TLBO, every student chooses another student at random for the purpose of exchanging knowledge. A randomly selected student may have a better or worse knowledge status compared to the student who is the selector. In hPSO-TLBO design, an enhancement is introduced in the student phase, assuming that each student selects a superior student to elevate their knowledge level and enhance overall performance. In this case, if the objective function value of a member represents the scientific level of that member, the set of better students for each hPSO-TLBO member is determined using Equation (10):
$$ CS_i = \left\{ X_l : F_l < F_i,\ l \in \{1, 2, \dots, N\} \right\} \cup \{T\}, \tag{10} $$
where $CS_i$ is the set of suitable students for guiding the $i$th member $X_i$, and $X_l$ is a population member whose objective function value $F_l$ is better than that of member $X_i$.
In the implementation of hPSO-TLBO, each student chooses, uniformly at random, one of the higher-performing students from this set and exchanges knowledge with them. Based on this exchange of knowledge in the student phase, a new position for each member is calculated by Equation (11). If the new position leads to an improvement in the objective function value, it replaces the previous position of the corresponding member, as specified by Equation (12).
$$ X_i^{new} = X_i + r_4 \cdot (SS_i - X_i), \tag{11} $$

$$ X_i = \begin{cases} X_i^{new}, & F_i^{new} \le F_i; \\ X_i, & \text{else}, \end{cases} \tag{12} $$
where $SS_i$ is the student selected to guide the $i$th population member.
Figure 1 presents a flowchart illustrating the implementation steps of the hPSO-TLBO approach, while Algorithm 1 provides the corresponding pseudocode.
Algorithm 1. Pseudocode of hPSO-TLBO
Start hPSO-TLBO.
1. Input problem information: variables, objective function, and constraints.
2. Set the population size N and the maximum number of iterations T.
3. Generate the initial population matrix at random.
4. Evaluate the objective function.
5. For t = 1 to T
6.   Update the value of ω(t) by Equation (3) and the value of the teacher T.
7.   Calculate the class mean using Equation (5): M ← (1/N) · Σ_{i=1}^{N} X_i.
8.   For i = 1 to N
9.     Update Pbest_i based on a comparison of X_i with Pbest_i.
10.    Set the best population member as the teacher T.
11.    Calculate the hybrid velocity of the ith member using Equation (7): V_i ← ω(t)·V_i + r_1·c_1·(Pbest_i − X_i) + r_3·(T − I·M).
12.    Calculate the new position of the ith member using Equation (8): X_i^new ← X_i + V_i.
13.    Update the ith member using Equation (9): X_i ← X_i^new if F_i^new ≤ F_i; otherwise keep X_i.
14.    Determine the candidate student set for the ith member using Equation (10): CS_i ← {X_k | F_k < F_i, k ∈ {1, 2, …, N}} ∪ {T}.
15.    Calculate the new position of the ith member based on the modified student phase using Equation (11): X_i^new ← X_i + r_4·(SS_i − X_i).
16.    Update the ith member using Equation (12): X_i ← X_i^new if F_i^new ≤ F_i; otherwise keep X_i.
17.  End
18.  Save the best candidate solution so far.
19. End
20. Output the best quasi-optimal solution obtained with hPSO-TLBO.
End hPSO-TLBO.
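As a complement to Algorithm 1, the Python sketch below traces one hPSO-TLBO iteration through Equations (7)-(12). The value $c_1 = 2$, the clipping of positions to the bounds, and the handling of the current best member (whose candidate set contains no better student, so the student phase is simply skipped) are assumptions of this sketch.

```python
import numpy as np

# One hPSO-TLBO iteration following Algorithm 1 and Equations (7)-(12);
# c1 = 2 and the bound handling are illustrative assumptions.
def hpso_tlbo_iteration(f, X, V, F, pbest, f_pbest, lb, ub, t, T, rng, c1=2.0):
    N, dim = X.shape
    w = 0.9 - 0.8 * (t - 1) / (T - 1)           # Equation (3)
    teacher = X[F.argmin()].copy()              # best member is the teacher T
    M = X.mean(axis=0)                          # Equation (5)
    for i in range(N):
        # Update the personal best Pbest_i
        if F[i] < f_pbest[i]:
            pbest[i], f_pbest[i] = X[i].copy(), F[i]
        # Hybrid velocity, Equation (7)
        I = rng.integers(1, 3)
        V[i] = (w * V[i]
                + rng.random(dim) * c1 * (pbest[i] - X[i])
                + rng.random(dim) * (teacher - I * M))
        X_new = np.clip(X[i] + V[i], lb, ub)    # Equation (8)
        F_new = f(X_new)
        if F_new <= F[i]:                       # Equation (9)
            X[i], F[i] = X_new, F_new
        # Modified student phase, Equations (10)-(12)
        candidates = np.flatnonzero(F < F[i])   # candidate set CS_i
        if candidates.size > 0:
            s = rng.choice(candidates)          # selected student SS_i
            X_new = np.clip(X[i] + rng.random(dim) * (X[s] - X[i]), lb, ub)
            F_new = f(X_new)
            if F_new <= F[i]:                   # Equation (12)
                X[i], F[i] = X_new, F_new
    return X, V, F, pbest, f_pbest
```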

3.4. Computational Complexity of hPSO-TLBO

This subsection evaluates the computational complexity of the hPSO-TLBO algorithm. The initialization of hPSO-TLBO for an optimization problem with m decision variables has a computational complexity of O(N·m), where N represents the number of population members. In each iteration, the positions of the population members in the search space are updated in two steps; as a result, the value of the objective function of each population member is computed twice per iteration. Hence, the computational complexity of the population update process in hPSO-TLBO is O(2·N·m·T), where T represents the total number of iterations of the algorithm. Accordingly, the overall computational complexity of the proposed hPSO-TLBO approach is O(N·m·(2T + 1)).
Similarly, the computational complexities of the PSO and TLBO algorithms can be evaluated: PSO has a computational complexity of O(N·m·(T + 1)), and TLBO has a computational complexity of O(N·m·(2T + 1)). Therefore, from the point of view of computational complexity, the proposed hPSO-TLBO approach is in a similar situation to TLBO, but has twice the computational complexity of PSO. In fact, the number of function evaluations per iteration is 2N in hPSO-TLBO and TLBO, and N in PSO; for example, with N = 30 and T = 500, hPSO-TLBO and TLBO each require 30·(2·500 + 1) = 30,030 function evaluations, while PSO requires 30·(500 + 1) = 15,030.

4. Simulation Studies and Results

In this section, the performance of the proposed hPSO-TLBO approach in solving optimization problems is evaluated. For this purpose, a set of fifty-two standard benchmark functions of the unimodal, high-dimensional multimodal, and fixed-dimensional multimodal types [88], as well as the CEC 2017 test suite [89], were employed.

4.1. Performance Comparison and Experimental Settings

To assess the quality of hPSO-TLBO, the obtained results were compared with the performance of twelve well-known metaheuristic algorithms: PSO, TLBO, improved PSO (IPSO) [81], improved TLBO (ITLBO) [87], the hybrid PSO-TLBO (hPT1) developed in [67], the hybrid PSO-TLBO (hPT2) developed in [90], GWO, MPA, TSA, RSA, AVOA, and WSO. The experiments were carried out on a Windows 10 computer with a 2.2 GHz Core i7 processor and 16 GB of RAM, using MATLAB 2018a as the software environment. The optimization results are reported using six statistical indicators: mean, best, worst, standard deviation (std), median, and rank. The value of the mean index was used to rank the metaheuristic algorithms on each of the benchmark functions.
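As an illustration of how these indicators can be obtained, the Python sketch below aggregates repeated runs of each algorithm on one benchmark and ranks the algorithms by the mean index; the dictionary layout and the run data are hypothetical conventions for this sketch, not the format used in the actual experiments.

```python
import numpy as np

# Aggregate repeated runs into the six reported indicators and rank the
# algorithms by the mean index (rank 1 = best).
def summarize(results):
    rows = {}
    for name, runs in results.items():
        r = np.asarray(runs, dtype=float)
        rows[name] = {"mean": r.mean(), "best": r.min(), "worst": r.max(),
                      "std": r.std(ddof=1), "median": float(np.median(r))}
    for rank, name in enumerate(sorted(rows, key=lambda n: rows[n]["mean"]), 1):
        rows[name]["rank"] = rank
    return rows

# Example with placeholder run data for two algorithms on one benchmark:
table = summarize({"hPSO-TLBO": [0.010, 0.020, 0.015],
                   "PSO":       [0.120, 0.300, 0.200]})
```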

4.2. Evaluation of Unimodal Test Functions F1 to F7

Unimodal functions are valuable for evaluating the exploitation and local search capabilities of metaheuristic algorithms since they lack local optima. Table 1 presents the optimization results of unimodal functions F1 to F7, obtained using hPSO-TLBO and other competing algorithms. The optimization results demonstrate that hPSO-TLBO excels in local search and exploitation, consistently achieving the global optimum for functions F1 to F6. Furthermore, hPSO-TLBO emerged as the top-performing optimizer for solving function F7. The analysis of simulation outcomes confirms that hPSO-TLBO, with its exceptional exploitation capability and superior results, outperforms competing algorithms in tackling functions F1 to F7 of unimodal type.

4.3. Evaluation of High-Dimensional Multimodal Test Functions F8 to F13

Because they have multiple local optima, high-dimensional multimodal functions are suitable for evaluating the exploration and global search capabilities of metaheuristic algorithms. The results of applying hPSO-TLBO and the competing algorithms to the high-dimensional multimodal benchmarks F8 to F13 are presented in Table 2. Based on the results, hPSO-TLBO, with its strong exploration ability, handled functions F9 and F11 by identifying the main optimal region and converging to the global optimum. hPSO-TLBO also demonstrates exceptional performance as the top optimizer for benchmarks F8, F10, F12, and F13. The simulation results clearly indicate that hPSO-TLBO, with its remarkable exploration capability, outperforms the competing algorithms in effectively handling the high-dimensional multimodal benchmarks F8 to F13.

4.4. Evaluation of Fixed-Dimensional Multimodal Test Functions F14 to F23

Multimodal functions with a fixed number of dimensions are suitable criteria for simultaneous measurement of exploration and exploitation in metaheuristic algorithms. Table 3 presents the outcomes achieved by applying hPSO-TLBO and other competing optimizers to fixed-dimension multimodal benchmarks F14 to F23. The proposed hPSO-TLBO emerged as the top-performing optimizer for functions F14 to F23, showcasing its effectiveness. In cases where hPSO-TLBO shares the same mean index values with certain competing algorithms, its superior performance is evident through better std index values. The simulation results highlight hPSO-TLBO’s exceptional balance between exploration and exploitation, surpassing competing algorithms in handling fixed-dimension multimodal functions F14 to F23. The performance comparison by convergence curves is illustrated in Figure 2.

4.5. Evaluation of the CEC 2017 Test Suite

In this subsection, the performance of hPSO-TLBO is evaluated in handling the CEC 2017 test suite. The test suite employed in this study comprises thirty standard benchmarks, including three unimodal functions (C17-F1 to C17-F3), seven multimodal functions (C17-F4 to C17-F10), ten hybrid functions (C17-F11 to C17-F20), and ten composition functions (C17-F21 to C17-F30). However, the C17-F2 function was excluded from the simulations due to its unstable behavior. Detailed CEC 2017 test suite information can be found in [89]. The implementation results of hPSO-TLBO and other competing algorithms on the CEC 2017 test suite are presented in Table 4. Boxplots of the performance of metaheuristic methods in handling benchmarks from the CEC 2017 set are shown in Figure 3. The optimization results demonstrate that hPSO-TLBO emerged as the top-performing optimizer for functions C17-F1, C17-F3 to C17-F24, and C17-F26 to C17-F30. Overall, evaluating the benchmark functions in the CEC 2017 test set revealed that the proposed hPSO-TLBO approach outperforms competing algorithms in achieving superior results.

4.6. Statistical Analysis

To assess the statistical significance of the superiority of the proposed hPSO-TLBO approach over the competing algorithms, a nonparametric statistical test, namely the Wilcoxon signed-rank test [91], was conducted. The test examines the paired differences between two data samples and determines whether they differ significantly. The p-values obtained from the test were used to evaluate the significance of the differences between hPSO-TLBO and the competing algorithms. The results of the Wilcoxon signed-rank test, indicating the significance of the performance differences among the metaheuristic algorithms, are presented in Table 5. In this analysis, the proposed hPSO-TLBO approach exhibits a statistically significant advantage over a competing algorithm when the p-value is less than 0.05. The Wilcoxon signed-rank test confirms that hPSO-TLBO outperforms all twelve competing metaheuristic algorithms with a statistically significant advantage.
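For reference, a minimal sketch of such a test using `scipy.stats.wilcoxon` is shown below; the two paired samples are placeholder values, not results from Table 5.

```python
from scipy.stats import wilcoxon

# Paired, nonparametric comparison of two algorithms across benchmarks;
# the sample values below are placeholders, not data from Table 5.
hpso_tlbo = [0.012, 0.034, 0.008, 0.051, 0.020, 0.015, 0.027, 0.009]
competitor = [0.020, 0.047, 0.013, 0.066, 0.031, 0.022, 0.041, 0.012]
stat, p_value = wilcoxon(hpso_tlbo, competitor)
if p_value < 0.05:
    print(f"statistically significant difference (p = {p_value:.4f})")
```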

5. hPSO-TLBO for Real-World Applications

In this section, we examine the effectiveness of the proposed hPSO-TLBO approach in addressing four engineering design problems, highlighting one of the key applications of metaheuristic algorithms. These algorithms play a crucial role in solving optimization problems in real-world scenarios.

5.1. Pressure Vessel Design Problem

The design of a pressure vessel poses a significant engineering challenge, requiring careful consideration and analysis. The primary objective of this design is to achieve the minimum construction cost while meeting all necessary specifications and requirements. To provide a visual representation, Figure 4 depicts the schematic of the pressure vessel design, aiding in understanding its structural elements and overall layout. The mathematical model governing the pressure vessel design is presented below. This model encapsulates the equations and parameters that define the behavior and characteristics of the pressure vessel [92]:
Consider:
$$ X = [x_1, x_2, x_3, x_4] = [T_s, T_h, R, L]. $$
Minimize:
$$ f(x) = 0.6224 x_1 x_3 x_4 + 1.778 x_2 x_3^2 + 3.1661 x_1^2 x_4 + 19.84 x_1^2 x_3. $$
Subject to:
$$ g_1(x) = -x_1 + 0.0193 x_3 \le 0, $$
$$ g_2(x) = -x_2 + 0.00954 x_3 \le 0, $$
$$ g_3(x) = -\pi x_3^2 x_4 - \frac{4}{3} \pi x_3^3 + 1296000 \le 0, $$
$$ g_4(x) = x_4 - 240 \le 0, $$
with
$$ 0 \le x_1, x_2 \le 100 \quad \text{and} \quad 10 \le x_3, x_4 \le 200. $$
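A direct transcription of this model into Python is sketched below; the quadratic penalty used to fold the constraints into a single objective is an assumption of the sketch, not part of the original formulation.

```python
import math

# Pressure vessel model: objective f and inequality constraints g_i <= 0.
def pressure_vessel(x):
    x1, x2, x3, x4 = x                            # Ts, Th, R, L
    f = (0.6224 * x1 * x3 * x4 + 1.778 * x2 * x3**2
         + 3.1661 * x1**2 * x4 + 19.84 * x1**2 * x3)
    g = [-x1 + 0.0193 * x3,                                            # g1
         -x2 + 0.00954 * x3,                                           # g2
         -math.pi * x3**2 * x4 - (4/3) * math.pi * x3**3 + 1_296_000,  # g3
         x4 - 240]                                                     # g4
    return f, g

# Assumed quadratic penalty so the model can be handed to an optimizer.
def penalized(x, rho=1e6):
    f, g = pressure_vessel(x)
    return f + rho * sum(max(0.0, gi)**2 for gi in g)

# Objective at the design variables reported in Table 6 (f ~ 5882.9):
f, g = pressure_vessel([0.7780271, 0.3845792, 40.312284, 200])
```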
The results of employing hPSO-TLBO and the competing algorithms to optimize the pressure vessel design are presented in Table 6 and Table 7. The results obtained from the analysis indicate that the hPSO-TLBO algorithm successfully achieved the optimal design solution for the pressure vessel. The design variables were determined as (0.7780271, 0.3845792, 40.312284, 200), with an objective function value of 5882.9013. Furthermore, a comprehensive evaluation of the simulation results reveals that the hPSO-TLBO algorithm outperforms the other competing algorithms in terms of statistical indicators for the pressure vessel design problem. This superiority is demonstrated by the ability of hPSO-TLBO to deliver more favorable results. To visualize the convergence of the hPSO-TLBO algorithm towards the optimal design, Figure 5 illustrates the convergence curve associated with achieving the optimal solution for the pressure vessel.

5.2. Speed Reducer Design Problem

The speed reducer design is a real-world application in engineering to minimize speed reducer weight. The speed reducer design schematic is shown in Figure 6. As expressed in [93,94], the mathematical model for the design of the speed reducer is given by the following equation and constraints:
Consider:
$$ X = [x_1, x_2, x_3, x_4, x_5, x_6, x_7] = [b, m, p, l_1, l_2, d_1, d_2]. $$
Minimize:
$$ f(x) = 0.7854 x_1 x_2^2 \left( 3.3333 x_3^2 + 14.9334 x_3 - 43.0934 \right) - 1.508 x_1 \left( x_6^2 + x_7^2 \right) + 7.4777 \left( x_6^3 + x_7^3 \right) + 0.7854 \left( x_4 x_6^2 + x_5 x_7^2 \right). $$
Subject to:
$$ g_1(x) = \frac{27}{x_1 x_2^2 x_3} - 1 \le 0, \quad g_2(x) = \frac{397.5}{x_1 x_2^2 x_3^2} - 1 \le 0, $$
$$ g_3(x) = \frac{1.93 x_4^3}{x_2 x_3 x_6^4} - 1 \le 0, \quad g_4(x) = \frac{1.93 x_5^3}{x_2 x_3 x_7^4} - 1 \le 0, $$
$$ g_5(x) = \frac{1}{110 x_6^3} \sqrt{ \left( \frac{745 x_4}{x_2 x_3} \right)^2 + 16.9 \cdot 10^6 } - 1 \le 0, $$
$$ g_6(x) = \frac{1}{85 x_7^3} \sqrt{ \left( \frac{745 x_5}{x_2 x_3} \right)^2 + 157.5 \cdot 10^6 } - 1 \le 0, $$
$$ g_7(x) = \frac{x_2 x_3}{40} - 1 \le 0, \quad g_8(x) = \frac{5 x_2}{x_1} - 1 \le 0, $$
$$ g_9(x) = \frac{x_1}{12 x_2} - 1 \le 0, $$
$$ g_{10}(x) = \frac{1.5 x_6 + 1.9}{x_4} - 1 \le 0, $$
$$ g_{11}(x) = \frac{1.1 x_7 + 1.9}{x_5} - 1 \le 0, $$
with
$$ 2.6 \le x_1 \le 3.6, \quad 0.7 \le x_2 \le 0.8, \quad 17 \le x_3 \le 28, \quad 7.3 \le x_4 \le 8.3, \quad 7.8 \le x_5 \le 8.3, \quad 2.9 \le x_6 \le 3.9, \quad \text{and} \quad 5 \le x_7 \le 5.5. $$
Table 8 and Table 9 display the outcomes obtained by applying the hPSO-TLBO algorithm and the other competing algorithms to optimize the design of the speed reducer. The obtained results demonstrate that the hPSO-TLBO algorithm successfully generated the optimal design solution for the speed reducer. The model variables were determined as (3.5, 0.7, 17, 7.3, 7.8, 3.3502147, 5.2866832), resulting in an objective function value of 2996.3482. The simulation results clearly indicate that hPSO-TLBO performs better than the other competing methods in tackling the speed reducer design problem. Furthermore, it consistently produces better outcomes and achieves improved results. Figure 7 portrays the convergence curve of the hPSO-TLBO algorithm as it progresses toward attaining the optimal design for the speed reducer, providing a visual representation of its successful performance.

5.3. Welded Beam Design

The design of welded beams holds significant importance in real-world engineering applications. Its primary objective is to minimize the fabrication cost associated with welded beam design. To aid in visualizing the design, Figure 8 presents the schematic of a welded beam, illustrating its structural configuration and critical elements. The mathematical model to analyze and optimize the welded beam design is as follows [16]:
Consider:
$$ X = [x_1, x_2, x_3, x_4] = [h, l, t, b]. $$
Minimize:
$$ f(x) = 1.10471 x_1^2 x_2 + 0.04811 x_3 x_4 (14.0 + x_2). $$
Subject to:
$$ g_1(x) = \tau(x) - 13600 \le 0, $$
$$ g_2(x) = \sigma(x) - 30000 \le 0, $$
$$ g_3(x) = x_1 - x_4 \le 0, $$
$$ g_4(x) = 0.10471 x_1^2 + 0.04811 x_3 x_4 (14 + x_2) - 5.0 \le 0, $$
$$ g_5(x) = 0.125 - x_1 \le 0, $$
$$ g_6(x) = \delta(x) - 0.25 \le 0, $$
$$ g_7(x) = 6000 - p_c(x) \le 0, $$
where
$$ \tau(x) = \sqrt{ (\tau')^2 + 2 \tau' \tau'' \frac{x_2}{2R} + (\tau'')^2 }, \quad \tau' = \frac{6000}{\sqrt{2} x_1 x_2}, \quad \tau'' = \frac{M R}{J}, $$
$$ M = 6000 \left( 14 + \frac{x_2}{2} \right), \quad R = \sqrt{ \frac{x_2^2}{4} + \left( \frac{x_1 + x_3}{2} \right)^2 }, $$
$$ J = 2 \sqrt{2} x_1 x_2 \left[ \frac{x_2^2}{12} + \left( \frac{x_1 + x_3}{2} \right)^2 \right], \quad \sigma(x) = \frac{504000}{x_4 x_3^2}, $$
$$ \delta(x) = \frac{65856000}{30 \cdot 10^6 x_4 x_3^3}, $$
$$ p_c(x) = \frac{4.013 (30 \cdot 10^6) x_3 x_4^3}{1176} \left( 1 - \frac{x_3}{28} \sqrt{ \frac{30 \cdot 10^6}{4 (12 \cdot 10^6)} } \right), $$
with
$$ 0.1 \le x_1, x_4 \le 2 \quad \text{and} \quad 0.1 \le x_2, x_3 \le 10. $$
The optimization results for the welded beam design, achieved by employing the proposed hPSO-TLBO algorithm and the other competing optimizers, are presented in Table 10 and Table 11. The proposed hPSO-TLBO algorithm yielded the optimal design for the welded beam, as indicated by the obtained results. The design variables were determined to have values of (0.2057296, 3.4704887, 9.0366239, 0.2057296), and the corresponding objective function value was found to be 1.7248523. The simulation outcomes demonstrate that hPSO-TLBO outperforms the competing algorithms in terms of statistical indicators and overall effectiveness in optimizing the welded beam design. The process of achieving the optimal design using hPSO-TLBO for the welded beam is depicted in Figure 9.

5.4. Tension/Compression Spring Design

The tension/compression spring design is an optimization problem in real-world applications to minimize the weight of a tension/compression spring. The tension/compression spring design schematic is shown in Figure 10. The following mathematical model represents a tension/compression spring, as outlined in [16]:
Consider:
$$ X = [x_1, x_2, x_3] = [d, D, P]. $$
Minimize:
$$ f(x) = (x_3 + 2) x_2 x_1^2. $$
Subject to:
$$ g_1(x) = 1 - \frac{x_2^3 x_3}{71785 x_1^4} \le 0, $$
$$ g_2(x) = \frac{4 x_2^2 - x_1 x_2}{12566 (x_2 x_1^3 - x_1^4)} + \frac{1}{5108 x_1^2} - 1 \le 0, $$
$$ g_3(x) = 1 - \frac{140.45 x_1}{x_2^2 x_3} \le 0, \quad g_4(x) = \frac{x_1 + x_2}{1.5} - 1 \le 0, $$
with
$$ 0.05 \le x_1 \le 2, \quad 0.25 \le x_2 \le 1.3, \quad \text{and} \quad 2 \le x_3 \le 15. $$
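The spring model transcribes into Python in the same penalized form used for the pressure vessel sketch above, so it can be handed to a population-based optimizer; the quadratic penalty is again an assumption of the sketch, not part of the original formulation.

```python
# Tension/compression spring model: objective f and constraints g_i <= 0.
def spring(x):
    d, D, P = x                          # wire diameter, coil diameter, active coils
    f = (P + 2) * D * d**2
    g = [1 - (D**3 * P) / (71785 * d**4),                    # g1
         (4 * D**2 - d * D) / (12566 * (D * d**3 - d**4))
             + 1 / (5108 * d**2) - 1,                        # g2
         1 - 140.45 * d / (D**2 * P),                        # g3
         (d + D) / 1.5 - 1]                                  # g4
    return f, g

# Assumed quadratic penalty wrapper for use with an optimizer.
def penalized(x, rho=1e6):
    f, g = spring(x)
    return f + rho * sum(max(0.0, gi)**2 for gi in g)

# Objective at the design variables reported in Table 12 (f ~ 0.0126652):
f, g = spring([0.0516891, 0.3567177, 11.288966])
```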
Table 12 and Table 13 showcase the results obtained when employing the hPSO-TLBO algorithm and the other competing algorithms for the optimization of the tension/compression spring design. The proposed hPSO-TLBO approach yielded the optimal design for the tension/compression spring, as evidenced by the obtained results. The design variables were determined to have values of (0.0516891, 0.3567177, 11.288966), and the corresponding value of the objective function was found to be 0.0126652. The simulation outcomes demonstrate that hPSO-TLBO outperforms the competing algorithms, delivering superior outcomes in addressing the tension/compression spring problem. The convergence curve of hPSO-TLBO, illustrating its ability to achieve the optimal design for the tension/compression spring, is depicted in Figure 11.

6. Conclusions and Future Works

This paper presented a novel hybrid metaheuristic algorithm called hPSO-TLBO, which combines the strengths of particle swarm optimization (PSO) and teaching–learning-based optimization (TLBO). The integration of PSO's exploitation capability with TLBO's exploration ability forms the foundation of hPSO-TLBO. The performance of hPSO-TLBO was evaluated on a diverse set of optimization tasks, including fifty-two standard benchmark functions and the CEC 2017 test suite. The results showcase the favorable performance of hPSO-TLBO across a range of benchmark functions, highlighting its capability to balance exploration and exploitation strategies effectively. A comparative analysis with twelve established metaheuristic algorithms further confirms the superior performance of hPSO-TLBO, which is statistically significant according to the Wilcoxon analysis. Additionally, the successful application of hPSO-TLBO to four engineering design problems demonstrated its efficacy in real-world scenarios.
The introduction of hPSO-TLBO opens up several avenues for future research. One promising direction involves developing discrete or multi-objective versions of hPSO-TLBO. Exploring the application of hPSO-TLBO in diverse real-world problem domains is another promising research direction.

Author Contributions

Conceptualization, I.M. and M.H.; methodology, M.H.; software, Š.H.; validation, M.H.; formal analysis, Š.H.; investigation, M.H.; resources, M.H.; data curation, Š.H.; writing—original draft preparation, M.H. and I.M.; writing—review and editing, Š.H.; visualization, M.H.; supervision, Š.H.; project administration, I.M.; funding acquisition, Š.H. and I.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Project of Specific Research, Faculty of Science, University of Hradec Králové, 2024.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Acknowledgments

The authors thank the University of Hradec Králové for support.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhao, S.; Zhang, T.; Ma, S.; Chen, M. Dandelion Optimizer: A nature-inspired metaheuristic algorithm for engineering applications. Eng. Appl. Artif. Intell. 2022, 114, 105075. [Google Scholar] [CrossRef]
  2. Sergeyev, Y.D.; Kvasov, D.; Mukhametzhanov, M. On the efficiency of nature-inspired metaheuristics in expensive global optimization with limited budget. Sci. Rep. 2018, 8, 453. [Google Scholar] [CrossRef] [PubMed]
  3. Jahani, E.; Chizari, M. Tackling global optimization problems with a novel algorithm—Mouth Brooding Fish algorithm. Appl. Soft Comput. 2018, 62, 987–1002. [Google Scholar] [CrossRef]
  4. Liberti, L.; Kucherenko, S. Comparison of deterministic and stochastic approaches to global optimization. Int. Trans. Oper. Res. 2005, 12, 263–285. [Google Scholar] [CrossRef]
  5. Zeidabadi, F.-A.; Dehghani, M.; Trojovský, P.; Hubálovský, Š.; Leiva, V.; Dhiman, G. Archery Algorithm: A Novel Stochastic Optimization Algorithm for Solving Optimization Problems. Comput. Mater. Contin. 2022, 72, 399–416. [Google Scholar] [CrossRef]
  6. De Armas, J.; Lalla-Ruiz, E.; Tilahun, S.L.; Voß, S. Similarity in metaheuristics: A gentle step towards a comparison methodology. Nat. Comput. 2022, 21, 265–287. [Google Scholar] [CrossRef]
  7. Dehghani, M.; Montazeri, Z.; Dehghani, A.; Malik, O.P.; Morales-Menendez, R.; Dhiman, G.; Nouri, N.; Ehsanifar, A.; Guerrero, J.M.; Ramirez-Mendoza, R.A. Binary spring search algorithm for solving various optimization problems. Appl. Sci. 2021, 11, 1286. [Google Scholar] [CrossRef]
  8. Trojovská, E.; Dehghani, M.; Trojovský, P. Zebra Optimization Algorithm: A New Bio-Inspired Optimization Algorithm for Solving Optimization Algorithm. IEEE Access 2022, 10, 49445–49473. [Google Scholar] [CrossRef]
  9. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef]
  10. Kennedy, J.; Eberhart, R. Particle Swarm Optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  11. Rao, R.V.; Savsani, V.J.; Vakharia, D. Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput.-Aided Des. 2011, 43, 303–315. [Google Scholar] [CrossRef]
  12. Karaboga, D.; Basturk, B. Artificial Bee Colony (ABC) Optimization Algorithm for Solving Constrained Optimization Problems. In International Fuzzy Systems Association World Congress; Springer: Berlin/Heidelberg, Germany, 2007; pp. 789–798. [Google Scholar]
  13. Dorigo, M.; Maniezzo, V.; Colorni, A. Ant system: Optimization by a colony of cooperating agents. IEEE Trans. Syst. Man Cybern. Part B 1996, 26, 29–41. [Google Scholar] [CrossRef] [PubMed]
  14. Yang, X.-S. Firefly Algorithms for Multimodal Optimization. In Proceedings of the International Symposium on Stochastic Algorithms, Sapporo, Japan, 26–28 October 2009; Springer: Berlin/Heidelberg, Germany, 2009; pp. 169–178. [Google Scholar]
  15. Dehghani, M.; Montazeri, Z.; Trojovská, E.; Trojovský, P. Coati Optimization Algorithm: A new bio-inspired metaheuristic algorithm for solving optimization problems. Knowl.-Based Syst. 2023, 259, 110011. [Google Scholar] [CrossRef]
  16. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  17. Braik, M.; Hammouri, A.; Atwan, J.; Al-Betar, M.A.; Awadallah, M.A. White Shark Optimizer: A novel bio-inspired meta-heuristic algorithm for global optimization problems. Knowl.-Based Syst. 2022, 243, 108457. [Google Scholar] [CrossRef]
  18. Abualigah, L.; Abd Elaziz, M.; Sumari, P.; Geem, Z.W.; Gandomi, A.H. Reptile Search Algorithm (RSA): A nature-inspired meta-heuristic optimizer. Expert Syst. Appl. 2022, 191, 116158. [Google Scholar] [CrossRef]
  19. Trojovský, P.; Dehghani, M. Pelican Optimization Algorithm: A Novel Nature-Inspired Algorithm for Engineering Applications. Sensors 2022, 22, 855. [Google Scholar] [CrossRef]
  20. Dehghani, M.; Montazeri, Z.; Bektemyssova, G.; Malik, O.P.; Dhiman, G.; Ahmed, A.E. Kookaburra Optimization Algorithm: A New Bio-Inspired Metaheuristic Algorithm for Solving Optimization Problems. Biomimetics 2023, 8, 470. [Google Scholar] [CrossRef]
  21. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  22. Trojovský, P.; Dehghani, M. A new bio-inspired metaheuristic algorithm for solving optimization problems based on walruses behavior. Sci. Rep. 2023, 13, 8775. [Google Scholar] [CrossRef]
  23. Chopra, N.; Ansari, M.M. Golden Jackal Optimization: A Novel Nature-Inspired Optimizer for Engineering Applications. Expert Syst. Appl. 2022, 198, 116924. [Google Scholar] [CrossRef]
  24. Hashim, F.A.; Houssein, E.H.; Hussain, K.; Mabrouk, M.S.; Al-Atabany, W. Honey Badger Algorithm: New metaheuristic algorithm for solving optimization problems. Math. Comput. Simul. 2022, 192, 84–110. [Google Scholar] [CrossRef]
  25. Dehghani, M.; Bektemyssova, G.; Montazeri, Z.; Shaikemelev, G.; Malik, O.P.; Dhiman, G. Lyrebird Optimization Algorithm: A New Bio-Inspired Metaheuristic Algorithm for Solving Optimization Problems. Biomimetics 2023, 8, 507. [Google Scholar] [CrossRef] [PubMed]
  26. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine Predators Algorithm: A nature-inspired metaheuristic. Expert Syst. Appl. 2020, 152, 113377. [Google Scholar] [CrossRef]
  27. Abdollahzadeh, B.; Gharehchopogh, F.S.; Mirjalili, S. African vultures optimization algorithm: A new nature-inspired metaheuristic algorithm for global optimization problems. Comput. Ind. Eng. 2021, 158, 107408. [Google Scholar] [CrossRef]
  28. Kaur, S.; Awasthi, L.K.; Sangal, A.L.; Dhiman, G. Tunicate Swarm Algorithm: A new bio-inspired based metaheuristic paradigm for global optimization. Eng. Appl. Artif. Intell. 2020, 90, 103541. [Google Scholar] [CrossRef]
  29. Goldberg, D.E.; Holland, J.H. Genetic Algorithms and Machine Learning. Mach. Learn. 1988, 3, 95–99. [Google Scholar] [CrossRef]
  30. Storn, R.; Price, K. Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  31. De Castro, L.N.; Timmis, J.I. Artificial immune systems as a novel soft computing paradigm. Soft Comput. 2003, 7, 526–544. [Google Scholar] [CrossRef]
  32. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by simulated annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef]
  33. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  34. Dehghani, M.; Montazeri, Z.; Dhiman, G.; Malik, O.; Morales-Menendez, R.; Ramirez-Mendoza, R.A.; Dehghani, A.; Guerrero, J.M.; Parra-Arroyo, L. A spring search algorithm applied to engineering optimization problems. Appl. Sci. 2020, 10, 6173. [Google Scholar] [CrossRef]
  35. Dehghani, M.; Samet, H. Momentum search algorithm: A new meta-heuristic optimization algorithm inspired by momentum conservation law. SN Appl. Sci. 2020, 2, 1720. [Google Scholar] [CrossRef]
  36. Eskandar, H.; Sadollah, A.; Bahreininejad, A.; Hamdi, M. Water cycle algorithm–A novel metaheuristic optimization method for solving constrained engineering optimization problems. Comput. Struct. 2012, 110, 151–166. [Google Scholar] [CrossRef]
  37. Cuevas, E.; Oliva, D.; Zaldivar, D.; Pérez-Cisneros, M.; Sossa, H. Circle detection using electro-magnetism optimization. Inf. Sci. 2012, 182, 40–55. [Google Scholar] [CrossRef]
  38. Hashim, F.A.; Hussain, K.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W. Archimedes optimization algorithm: A new metaheuristic algorithm for solving optimization problems. Appl. Intell. 2021, 51, 1531–1551. [Google Scholar] [CrossRef]
  39. Pereira, J.L.J.; Francisco, M.B.; Diniz, C.A.; Oliver, G.A.; Cunha, S.S., Jr; Gomes, G.F. Lichtenberg algorithm: A novel hybrid physics-based meta-heuristic for global optimization. Expert Syst. Appl. 2021, 170, 114522. [Google Scholar] [CrossRef]
  40. Faramarzi, A.; Heidarinejad, M.; Stephens, B.; Mirjalili, S. Equilibrium optimizer: A novel optimization algorithm. Knowl.-Based Syst. 2020, 191, 105190. [Google Scholar] [CrossRef]
  41. Hatamlou, A. Black hole: A new heuristic optimization approach for data clustering. Inf. Sci. 2013, 222, 175–184. [Google Scholar] [CrossRef]
  42. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-verse optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2016, 27, 495–513. [Google Scholar] [CrossRef]
  43. Kaveh, A.; Dadras, A. A novel meta-heuristic optimization algorithm: Thermal exchange optimization. Adv. Eng. Softw. 2017, 110, 69–84. [Google Scholar] [CrossRef]
  44. Dehghani, M.; Mardaneh, M.; Guerrero, J.M.; Malik, O.; Kumar, V. Football game based optimization: An application to solve energy commitment problem. Int. J. Intell. Eng. Syst. 2020, 13, 514–523. [Google Scholar] [CrossRef]
  45. Moghdani, R.; Salimifard, K. Volleyball premier league algorithm. Appl. Soft Comput. 2018, 64, 161–185. [Google Scholar] [CrossRef]
  46. Kaveh, A.; Zolghadr, A. A Novel Meta-Heuristic Algorithm: Tug of War Optimization. Int. J. Optim. Civ. Eng. 2016, 6, 469–492. [Google Scholar]
  47. Montazeri, Z.; Niknam, T.; Aghaei, J.; Malik, O.P.; Dehghani, M.; Dhiman, G. Golf Optimization Algorithm: A New Game-Based Metaheuristic Algorithm and Its Application to Energy Commitment Problem Considering Resilience. Biomimetics 2023, 8, 386. [Google Scholar] [CrossRef] [PubMed]
  48. Dehghani, M.; Montazeri, Z.; Saremi, S.; Dehghani, A.; Malik, O.P.; Al-Haddad, K.; Guerrero, J.M. HOGO: Hide objects game optimization. Int. J. Intell. Eng. Syst. 2020, 13, 216–225. [Google Scholar] [CrossRef]
  49. Dehghani, M.; Montazeri, Z.; Givi, H.; Guerrero, J.M.; Dhiman, G. Darts game optimizer: A new optimization technique based on darts game. Int. J. Intell. Eng. Syst. 2020, 13, 286–294. [Google Scholar] [CrossRef]
  50. Zeidabadi, F.A.; Dehghani, M. POA: Puzzle Optimization Algorithm. Int. J. Intell. Eng. Syst. 2022, 15, 273–281. [Google Scholar]
  51. Dehghani, M.; Mardaneh, M.; Guerrero, J.M.; Malik, O.P.; Ramirez-Mendoza, R.A.; Matas, J.; Vasquez, J.C.; Parra-Arroyo, L. A new “Doctor and Patient” optimization algorithm: An application to energy commitment problem. Appl. Sci. 2020, 10, 5791. [Google Scholar] [CrossRef]
  52. Dehghani, M.; Trojovský, P. Teamwork Optimization Algorithm: A New Optimization Approach for Function Minimization/Maximization. Sensors 2021, 21, 4567. [Google Scholar] [CrossRef]
  53. Moosavi, S.H.S.; Bardsiri, V.K. Poor and rich optimization algorithm: A new human-based and multi populations algorithm. Eng. Appl. Artif. Intell. 2019, 86, 165–181. [Google Scholar] [CrossRef]
  54. Matoušová, I.; Trojovský, P.; Dehghani, M.; Trojovská, E.; Kostra, J. Mother optimization algorithm: A new human-based metaheuristic approach for solving engineering optimization. Sci. Rep. 2023, 13, 10312. [Google Scholar] [CrossRef] [PubMed]
  55. Al-Betar, M.A.; Alyasseri, Z.A.A.; Awadallah, M.A.; Abu Doush, I. Coronavirus herd immunity optimizer (CHIO). Neural Comput. Appl. 2021, 33, 5011–5042. [Google Scholar] [CrossRef] [PubMed]
  56. Dehghani, M.; Trojovská, E.; Trojovský, P. A new human-based metaheuristic algorithm for solving optimization problems on the base of simulation of driving training process. Sci. Rep. 2022, 12, 9924. [Google Scholar] [CrossRef] [PubMed]
  57. Braik, M.; Ryalat, M.H.; Al-Zoubi, H. A novel meta-heuristic algorithm for solving numerical optimization problems: Ali Baba and the forty thieves. Neural Comput. Appl. 2022, 34, 409–455. [Google Scholar] [CrossRef]
  58. Trojovský, P.; Dehghani, M. A new optimization algorithm based on mimicking the voting process for leader selection. PeerJ Comput. Sci. 2022, 8, e976. [Google Scholar] [CrossRef] [PubMed]
59. Trojovská, E.; Dehghani, M. A new human-based metaheuristic optimization method based on mimicking cooking training. Sci. Rep. 2022, 12, 14861. [Google Scholar] [CrossRef] [PubMed]
  60. Dehghani, M.; Trojovská, E.; Zuščák, T. A new human-inspired metaheuristic algorithm for solving optimization problems based on mimicking sewing training. Sci. Rep. 2022, 12, 17387. [Google Scholar] [CrossRef]
61. Trojovský, P.; Dehghani, M.; Trojovská, E.; Milkova, E. The Language Education Optimization: A New Human-Based Metaheuristic Algorithm for Solving Optimization Problems. Comput. Model. Eng. Sci. 2022, 136, 1527–1573. [Google Scholar]
  62. Mohamed, A.W.; Hadi, A.A.; Mohamed, A.K. Gaining-sharing knowledge based algorithm for solving optimization problems: A novel nature-inspired algorithm. Int. J. Mach. Learn. Cybern. 2020, 11, 1501–1529. [Google Scholar] [CrossRef]
  63. Ayyarao, T.L.; RamaKrishna, N.; Elavarasam, R.M.; Polumahanthi, N.; Rambabu, M.; Saini, G.; Khan, B.; Alatas, B. War Strategy Optimization Algorithm: A New Effective Metaheuristic Algorithm for Global Optimization. IEEE Access 2022, 10, 25073–25105. [Google Scholar] [CrossRef]
  64. Talatahari, S.; Goodarzimehr, V.; Taghizadieh, N. Hybrid teaching-learning-based optimization and harmony search for optimum design of space trusses. J. Optim. Ind. Eng. 2020, 13, 177–194. [Google Scholar]
  65. Khatir, A.; Capozucca, R.; Khatir, S.; Magagnini, E.; Benaissa, B.; Le Thanh, C.; Wahab, M.A. A new hybrid PSO-YUKI for double cracks identification in CFRP cantilever beam. Compos. Struct. 2023, 311, 116803. [Google Scholar] [CrossRef]
  66. Al Thobiani, F.; Khatir, S.; Benaissa, B.; Ghandourah, E.; Mirjalili, S.; Wahab, M.A. A hybrid PSO and Grey Wolf Optimization algorithm for static and dynamic crack identification. Theor. Appl. Fract. Mech. 2022, 118, 103213. [Google Scholar] [CrossRef]
  67. Singh, R.; Chaudhary, H.; Singh, A.K. A new hybrid teaching–learning particle swarm optimization algorithm for synthesis of linkages to generate path. Sādhanā 2017, 42, 1851–1870. [Google Scholar] [CrossRef]
  68. Wang, H.; Li, Y. Hybrid teaching-learning-based PSO for trajectory optimisation. Electron. Lett. 2017, 53, 777–779. [Google Scholar] [CrossRef]
  69. Yun, Y.; Gen, M.; Erdene, T.N. Applying GA-PSO-TLBO approach to engineering optimization problems. Math. Biosci. Eng. 2023, 20, 552–571. [Google Scholar] [CrossRef] [PubMed]
  70. Azad-Farsani, E.; Zare, M.; Azizipanah-Abarghooee, R.; Askarian-Abyaneh, H. A new hybrid CPSO-TLBO optimization algorithm for distribution network reconfiguration. J. Intell. Fuzzy Syst. 2014, 26, 2175–2184. [Google Scholar] [CrossRef]
  71. Shukla, A.K.; Singh, P.; Vardhan, M. A new hybrid wrapper TLBO and SA with SVM approach for gene expression data. Inf. Sci. 2019, 503, 238–254. [Google Scholar] [CrossRef]
  72. Nenavath, H.; Jatoth, R.K. Hybrid SCA–TLBO: A novel optimization algorithm for global optimization and visual tracking. Neural Comput. Appl. 2019, 31, 5497–5526. [Google Scholar] [CrossRef]
  73. Sharma, S.R.; Singh, B.; Kaur, M. Hybrid SFO and TLBO optimization for biodegradable classification. Soft Comput. 2021, 25, 15417–15443. [Google Scholar] [CrossRef]
  74. Kundu, T.; Deepmala; Jain, P. A hybrid salp swarm algorithm based on TLBO for reliability redundancy allocation problems. Appl. Intell. 2022, 52, 12630–12667. [Google Scholar] [CrossRef] [PubMed]
  75. Lin, S.; Liu, A.; Wang, J.; Kong, X. An intelligence-based hybrid PSO-SA for mobile robot path planning in warehouse. J. Comput. Sci. 2023, 67, 101938. [Google Scholar] [CrossRef]
  76. Murugesan, S.; Suganyadevi, M.V. Performance Analysis of Simplified Seven-Level Inverter using Hybrid HHO-PSO Algorithm for Renewable Energy Applications. Iran. J. Sci. Technol. Trans. Electr. Eng. 2023. [Google Scholar] [CrossRef]
  77. Hosseini, M.; Navabi, M.S. Hybrid PSO-GSA based approach for feature selection. J. Ind. Eng. Manag. Stud. 2023, 10, 1–15. [Google Scholar]
  78. Bhandari, A.S.; Kumar, A.; Ram, M. Reliability optimization and redundancy allocation for fire extinguisher drone using hybrid PSO–GWO. Soft Comput. 2023, 27, 14819–14833. [Google Scholar] [CrossRef]
  79. Amirteimoori, A.; Mahdavi, I.; Solimanpur, M.; Ali, S.S.; Tirkolaee, E.B. A parallel hybrid PSO-GA algorithm for the flexible flow-shop scheduling with transportation. Comput. Ind. Eng. 2022, 173, 108672. [Google Scholar] [CrossRef]
  80. Koh, J.S.; Tan, R.H.; Lim, W.H.; Tan, N.M. A Modified Particle Swarm Optimization for Efficient Maximum Power Point Tracking under Partial Shading Condition. IEEE Trans. Sustain. Energy 2023, 14, 1822–1834. [Google Scholar] [CrossRef]
  81. Zare, M.; Akbari, M.-A.; Azizipanah-Abarghooee, R.; Malekpour, M.; Mirjalili, S.; Abualigah, L. A modified Particle Swarm Optimization algorithm with enhanced search quality and population using Hummingbird Flight patterns. Decis. Anal. J. 2023, 7, 100251. [Google Scholar] [CrossRef]
  82. Cui, G.; Qin, L.; Liu, S.; Wang, Y.; Zhang, X.; Cao, X. Modified PSO algorithm for solving planar graph coloring problem. Prog. Nat. Sci. 2008, 18, 353–357. [Google Scholar] [CrossRef]
  83. Lihong, H.; Nan, Y.; Jianhua, W.; Ying, S.; Jingjing, D.; Ying, X. Application of Modified PSO in the Optimization of Reactive Power. In Proceedings of the 2009 Chinese Control and Decision Conference, Guilin, China, 17–19 June 2009; pp. 3493–3496. [Google Scholar]
  84. Krishnamurthy, N.K.; Sabhahit, J.N.; Jadoun, V.K.; Gaonkar, D.N.; Shrivastava, A.; Rao, V.S.; Kudva, G. Optimal Placement and Sizing of Electric Vehicle Charging Infrastructure in a Grid-Tied DC Microgrid Using Modified TLBO Method. Energies 2023, 16, 1781. [Google Scholar] [CrossRef]
  85. Eirgash, M.A.; Toğan, V.; Dede, T.; Başağa, H.B. Modified Dynamic Opposite Learning Assisted TLBO for Solving Time-Cost Optimization in Generalized Construction Projects. In Structures; Elsevier: Amsterdam, The Netherlands, 2023; pp. 806–821. [Google Scholar]
86. Amiri, H.; Radfar, N.; Arab Solghar, A.; Mashayekhi, M. Two improved teaching–learning-based optimization algorithms for the solution of inverse boundary design problems. Soft Comput. 2023, 1–22. [Google Scholar] [CrossRef]
  87. Yaqoob, M.T.; Rahmat, M.K.; Maharum, S.M.M. Modified teaching learning based optimization for selective harmonic elimination in multilevel inverters. Ain Shams Eng. J. 2022, 13, 101714. [Google Scholar] [CrossRef]
  88. Yao, X.; Liu, Y.; Lin, G. Evolutionary programming made faster. IEEE Trans. Evol. Comput. 1999, 3, 82–102. [Google Scholar]
89. Awad, N.; Ali, M.; Liang, J.; Qu, B.; Suganthan, P. Problem Definitions and Evaluation Criteria for the CEC 2017 Special Session and Competition on Single Objective Real-Parameter Numerical Optimization; Technical Report; 2016. [Google Scholar]
  90. Bashir, M.U.; Paul, W.U.H.; Ahmad, M.; Ali, D.; Ali, M.S. An Efficient Hybrid TLBO-PSO Approach for Congestion Management Employing Real Power Generation Rescheduling. Smart Grid Renew. Energy 2021, 12, 113–135. [Google Scholar] [CrossRef]
  91. Wilcoxon, F. Individual comparisons by ranking methods. In Breakthroughs in Statistics; Springer: Berlin/Heidelberg, Germany, 1992; pp. 196–202. [Google Scholar]
  92. Kannan, B.; Kramer, S.N. An augmented Lagrange multiplier based method for mixed integer discrete continuous optimization and its applications to mechanical design. J. Mech. Des. 1994, 116, 405–411. [Google Scholar] [CrossRef]
  93. Gandomi, A.H.; Yang, X.-S. Benchmark problems in structural optimization. In Computational Optimization, Methods and Algorithms; Springer: Berlin/Heidelberg, Germany, 2011; pp. 259–281. [Google Scholar]
  94. Mezura-Montes, E.; Coello, C.A.C. Useful infeasible solutions in engineering optimization with evolutionary algorithms. In Proceedings of the Mexican International Conference on Artificial Intelligence, Monterrey, Mexico, 14–18 November 2005; Springer: Berlin/Heidelberg, Germany, 2005; pp. 652–662. [Google Scholar]
Figure 1. Flowchart of hPSO-TLBO.
Figure 2. Convergence curves of hPSO-TLBO and the twelve competitor optimizers on functions F1 to F23.
Figure 3. Boxplot diagrams of the performances of hPSO-TLBO and the competitor optimizers on the CEC 2017 test set.
Figure 4. Schematic of the pressure vessel design problem.
Figure 5. Convergence curve of hPSO-TLBO on the pressure vessel design problem.
Figure 6. Schematic of the speed reducer design problem.
Figure 7. Convergence curve of hPSO-TLBO on the speed reducer design problem.
Figure 8. Schematic of the welded beam design problem.
Figure 9. Convergence curve of hPSO-TLBO on the welded beam design problem.
Figure 10. Schematic of the tension/compression spring design problem.
Figure 11. Convergence curve of hPSO-TLBO on the tension/compression spring design problem.
Table 1. Optimization results of unimodal functions.
F | hPSO-TLBO | WSO | AVOA | RSA | MPA | TSA | GWO | hPT2 | hPT1 | ITLBO | IPSO | TLBO | PSO
F1Mean058.071597.09E-617.09E-611.69E-494.1E-477.09E-610.1318441.63E-597.09E-611.17E-160.08895326.87535
Best04.6655685.99E-635.99E-633.36E-521.27E-505.99E-630.0929651.38E-615.99E-634.72E-170.00042915.79547
Worst0210.50423.09E-603.09E-601.46E-482.91E-463.09E-600.1773637.11E-593.09E-603.29E-161.23155450.15932
Std045.405858.38E-618.38E-613.38E-498.61E-478.38E-610.0238841.92E-598.38E-616.16E-170.2674059.002594
Median040.019594.31E-614.31E-613.67E-503.77E-484.31E-610.132639.91E-604.31E-619.97E-170.00856424.84615
Rank111225629437810
F2Mean01.8854165.43E-365.43E-366.14E-281.86E-285.43E-360.2283581.25E-345.44E-364.83E-080.7890312.456858
Best00.5837091.95E-371.95E-371.63E-291.78E-301.95E-370.1410424.49E-361.97E-373.07E-080.0398981.537836
Worst06.5602373.17E-353.17E-354.15E-271.61E-273.17E-350.321177.29E-343.17E-351.09E-072.1968633.353961
Std01.5266487.67E-367.67E-369.41E-284.55E-287.67E-360.05421.76E-347.67E-361.61E-080.6218550.468754
Median01.3484912.61E-362.61E-363.1E-281.74E-292.61E-360.2364425.99E-352.63E-364.52E-080.5147082.415588
Rank110226528437911
F3Mean01573.928.72E-168.72E-162.21E-121.04E-1017586.0914.074122E-148.72E-16418.9635341.98321911.093
Best0916.73929.45E-219.45E-211.62E-162.18E-181819.3695.2639412.17E-199.48E-21216.71919.180041254.853
Worst03121.8421.62E-141.62E-141.27E-111.72E-0930564.0243.120893.73E-131.62E-141045.265903.47523047.672
Std0540.1743.53E-153.53E-153.77E-123.75E-107362.8579.2624458.11E-143.53E-15189.5394248.1753550.4119
Median01373.0111.87E-171.87E-171.61E-139.48E-1417907.7310.466834.3E-161.87E-17352.7354258.20181850.929
Rank1102256127439811
F4Mean015.239524.92E-164.92E-164.92E-160.00389745.659840.4820661.13E-144.92E-161.0889365.5332122.492984
Best010.498162.63E-172.63E-172.67E-178.51E-050.7970190.2343076.04E-162.63E-178.72E-092.0179581.952933
Worst021.001692.3E-152.3E-152.3E-150.03156880.805560.8485425.29E-142.3E-154.34179811.771723.518006
Std02.4844335.71E-165.71E-165.71E-160.00683625.481470.1653821.31E-145.71E-161.1935472.1531310.401768
Median015.659542.55E-162.55E-162.55E-160.00129548.834550.4679045.85E-152.55E-160.7991135.1830522.452526
Rank1112246127538109
F5Mean09516.4311.06623212.5193221.6170126.1576425.1288385.8471524.4872924.669139.878644064.647525.661
Best01188.1691.0255181.02550521.164823.7064824.5957325.3937923.5522323.5936623.8543924.20836202.6519
Worst081,693.131.08926726.6322922.2407326.5445726.40645334.02425.0164926.42328148.447179368.291989.749
Std017,267.510.02061812.687630.3335730.6748370.50043787.303410.4734880.80660238.1430217309.01365.6698
Median04943.7591.0521911.08876621.5975426.4475424.9312427.5371224.1649524.264324.2840476.94997420.079
Rank11323487105691211
F6Mean088.935650.0265075.7165560.0265073.270640.0983820.1595560.6087821.1379330.0265070.0824130.11387
Best014.957310.0098973.2576870.0098972.2794370.0234980.0898170.227290.2338090.0098970.01014513.77592
Worst0337.06630.050236.4281490.050234.2385750.3080960.2503581.1536141.9173940.050230.49732455.34426
Std082.154010.012010.8856570.012010.5964290.0847440.0412510.2758270.4238870.012010.12592611.65958
Median061.323980.0291746.1115340.0291743.3821670.0605210.163070.6700121.0993590.0291740.03078427.94543
Rank11341131067892512
F7Mean2.54E-050.0001159.04E-056.18E-050.0005170.0038620.0011610.0102690.0007670.0013830.0465650.1622820.009365
Best2.35E-062.3E-051.5E-051.43E-050.0001330.0013518.11E-050.003540.0001688.83E-050.0124740.0608520.002701
Worst6.89E-050.0003170.0002610.0001590.0008010.0088160.0047980.0199270.0018030.0026040.084220.3624730.019393
Std1.93E-057.92E-056.36E-053.04E-050.0001840.0020090.0012410.0043360.000420.0007550.0214710.0679860.004146
Median1.83E-058.37E-057.21E-055.91E-050.0005020.0033190.0007520.0100030.000780.0013510.0457020.1566420.009006
Rank14325971168121310
Sum rank7721724325048593635546574
Mean rank110.285712.4285713.4285714.5714297.1428576.8571438.4285715.14285757.7142869.28571410.57143
Total ranking11223487106591113
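The Sum rank, Mean rank, and Total ranking rows in Tables 1–4 aggregate the per-function ranks: each algorithm is ranked on every function by its mean result, the ranks are summed and averaged over the functions, and the final ordering follows the mean rank. The short Python sketch below illustrates this aggregation under that assumption; the algorithm names and mean values in it are hypothetical placeholders, not the paper's data.

```python
# Hedged sketch of the rank aggregation used in Tables 1-4: each algorithm is
# ranked per function by its mean objective value (rank 1 = best), the ranks
# are summed and averaged across functions, and the final ordering follows the
# mean rank. Names and mean values below are hypothetical placeholders.
import numpy as np

algorithms = ["hPSO-TLBO", "GWO", "PSO"]
means = np.array([          # rows: benchmark functions, columns: algorithms
    [0.0, 7.1e-61, 26.9],   # placeholder means, e.g., for F1
    [0.0, 5.4e-36, 2.46],   # placeholder means, e.g., for F2
])

ranks = means.argsort(axis=1).argsort(axis=1) + 1  # 1-based rank per function
sum_rank = ranks.sum(axis=0)
mean_rank = ranks.mean(axis=0)
total_ranking = mean_rank.argsort().argsort() + 1  # final ordering by mean rank

for name, s, m, t in zip(algorithms, sum_rank, mean_rank, total_ranking):
    print(f"{name}: sum rank = {s}, mean rank = {m:.2f}, total ranking = {t}")
```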
Table 2. Optimization results of high-dimensional multimodal functions.
F | hPSO-TLBO | WSO | AVOA | RSA | MPA | TSA | GWO | hPT2 | hPT1 | ITLBO | IPSO | TLBO | PSO
F8Mean−12,498.6−7441.49−12,216.6−6018.45−9764.22−6637.83−10,978.1−8130.21−6585.37−6161.34−3679.16−6997.53−8648.79
Best−12,622.8−9148.22−12,328.4−6214.34−10450.1−7689.05−12314−9292.31−7292.22−7424.16−4727.93−8481.16−9748.05
Worst−11,936.3−6604.3−11715−5570.82−9263.23−5103.76−8038.63−7294.3−5644.31−5239.76−3087.21−5627.46−7433.78
Std185.933632.4496166.3366191.7651309.4905620.84021488.781623.0539425.9794530.8499431.4128643.493548.4352
Median−12,577.8−7384.82−12,293.4−6058.43−9804.06−6608.26−11,853.8−8022.38−6586.24−6192.06−3598.29−7140.36−8616.57
Rank17212493610111385
F9Mean021.701666.84E-166.84E-166.84E-16152.53996.84E-1686.197881.57E-146.84E-1625.1162859.6632448.1797
Best012.8813900079.07429046.510550012.2732335.0663820.47009
Worst040.487134.56E-154.56E-154.56E-15253.91964.56E-15131.53131.05E-134.56E-1542.95628100.940867.75744
Std07.4153341.27E-151.27E-151.27E-1543.888331.27E-1521.680332.92E-141.27E-157.88681616.2115311.8805
Median019.99106000146.858085.539920023.2314757.3319946.35864
Rank1422292832576
F10Mean8.88E-164.6622441.52E-151.52E-154.5E-151.0947624.34E-150.5091881.55E-144.65E-157.24E-092.4029693.150025
Best8.88E-162.9807121.17E-151.17E-151.74E-158E-151.46E-150.0886397.43E-154.3E-154.11E-091.49212.5393
Worst8.88E-167.223891.74E-151.74E-154.87E-152.9723548E-152.2161362.05E-144.87E-151.27E-084.4557924.090043
Std01.0509921.39E-161.39E-166.46E-161.3504551.96E-150.5826733.19E-151.39E-162.01E-090.7380780.341285
Median8.88E-164.5636451.46E-151.46E-154.59E-152.03E-144.59E-150.1712111.4E-144.59E-156.81E-092.4088613.198028
Rank1122249386571011
F11Mean01.5121625.37E-055.37E-055.37E-050.0078455.37E-050.3522080.0012345.37E-056.3510440.1632921.298331
Best00.972628000000.22393002.6397840.0028411.134942
Worst02.8941790.0007550.0007550.0007550.0181040.0007550.4722580.0173410.00075511.135160.7717111.520656
Std00.4669010.0001760.0001760.0001760.0054250.0001760.0704240.0040330.0001762.3411210.1965540.106538
Median01.4106290000.00808400.367154006.4422230.1078081.275578
Rank1822242632957
F12Mean1.57E-322.8825390.00161.1625520.00165.1056350.0193070.8074920.0367370.0644480.1866641.3241840.243809
Best1.57E-320.8419560.0005040.6787860.0005040.9152410.0028180.0016890.0115730.0220230.0012280.0022420.055508
Worst1.57E-326.5133070.0034811.4509860.00348112.456020.1211283.3923120.0799460.1201290.8217314.6001760.576584
Std2.74E-481.5742160.0008360.261550.0008363.3385110.034091.0298150.0191910.0180420.2642231.1060870.119679
Median1.57E-322.5514670.0015211.2254060.0015213.7946020.0064010.3719980.0349240.0630140.0724011.1337760.234452
Rank11231021349567118
F13Mean1.35E-323171.7040.020610.020610.0228112.4144660.2096980.0494880.4733380.9915810.0705343.1992892.406486
Best1.35E-3212.169541.88E-061.88E-060.0079091.7903940.0474470.0197294.32E-050.5394530.0079090.0284721.150151
Worst1.35E-3254770.430.038110.038110.038113.3086160.6249830.1052290.8752621.3779950.84442711.106433.48383
Std2.74E-4811920.440.0100990.0100990.0093780.4813650.1594730.0234580.2319450.2005680.1791632.6053420.652146
Median1.35E-3238.994280.0207440.0207440.0233812.2519150.1666360.041620.4764061.0003830.0288032.9308932.536779
Rank11332411758961210
Sum Rank6561430185521423535475347
Mean rank19.3333332.333333539.1666673.575.8333335.8333337.8333338.8333337.833333
Total ranking111253104766898
Table 3. Optimization results of fixed-dimensional multimodal functions.
F | hPSO-TLBO | WSO | AVOA | RSA | MPA | TSA | GWO | hPT2 | hPT1 | ITLBO | IPSO | TLBO | PSO
F14Mean0.3978870.3979280.3979280.4091230.3983810.397960.3979280.3979280.3979290.3979920.3979280.7034490.457962
Best0.3978870.3978870.3978870.398670.3978870.3978930.3978870.3978870.3978880.3978990.3978870.3978870.397887
Worst0.3978870.3981450.3981450.4748770.4010230.3981680.3981460.3981450.3981450.3981640.3981452.5066281.591156
Std07.35E-057.36E-050.0167250.0008968.73E-057.35E-057.36E-057.35E-058.56E-057.36E-050.610290.260471
Median0.3978870.3978940.3978940.4030920.3979710.3979180.3978950.3978940.3978940.3979730.3978940.3979180.397966
Rank1421097536821211
F15Mean33.24913.2491015.6939716.03484310.740033.2491233.2491013.2491123.2491013.24913.24917.040393
Best33.0010983.0010983.0021073.0133753.0011043.0010983.0010983.0011013.0010993.0010983.0010983.003001
Worst35.1273665.12736627.9556328.9182281.456875.1273675.1273675.1273775.1273685.1273665.12736631.20499
Std1.14E-150.4892790.4892797.3211845.96100422.467570.4892720.4892790.4892770.4892790.4892790.4892798.990184
Median33.044413.044413.140023.5410473.1400143.0444173.044413.044433.0444113.044413.044413.179816
Rank12610111395874312
F16Mean−3.86278−3.85185−3.85185−3.82907−3.7303−3.8515−3.84977−3.85185−3.85051−3.85088−3.85185−3.85185−3.85171
Best−3.86278−3.86278−3.86278−3.85382−3.86278−3.86268−3.86277−3.86278−3.86278−3.86253−3.86278−3.86278−3.86276
Worst−3.86278−3.81789−3.81789−3.77776−3.31594−3.81781−3.8175−3.81789−3.81778−3.81766−3.81789−3.81789−3.81759
Std2.22E-150.0105640.0105640.0211130.128810.0104110.0103210.0105640.0106140.010130.0105640.0105640.010686
Median−3.86278−3.85184−3.85184−3.83257−3.73109−3.8518−3.85033−3.85184−3.85132−3.85144−3.85184−3.85184−3.85177
Rank1241112710598336
F17Mean−3.322−3.24156−3.21013−2.76672−2.56172−3.19829−3.19374−3.21528−3.20179−3.18745−3.25727−3.20672−3.17472
Best−3.322−3.31434−3.28525−3.038−3.22873−3.31233−3.30934−3.31434−3.31434−3.29852−3.31434−3.31434−3.2439
Worst−3.322−3.15104−3.10447−1.75378−1.84535−3.07187−3.0508−3.096−3.00597−2.93767−3.20079−3.04679−2.9906
Std4.34E-160.0449550.0593420.2769870.316730.0603950.0755840.0644260.0797720.0828640.0269230.0774080.06172
Median−3.322−3.2565−3.21667−2.84296−2.61407−3.18964−3.20415−3.24865−3.21227−3.19193−3.2616−3.23928−3.18554
Rank13512138947102611
F18Mean−10.1532−8.37918−9.91819−5.42633−7.63223−6.1929−9.24171−8.80122−9.24604−7.01014−7.31095−5.92734−6.48809
Best−10.1532−10.1447−10.1531−5.6612−10.1516−10.1238−10.1524−10.153−10.1529−9.25331−10.1531−10.0716−9.60167
Worst−10.1532−3.1694−9.54887−5.05701−5.05701−3.10699−5.28384−5.25966−5.09691−3.88379−3.1694−3.14741−2.90764
Std2.03E-152.7271040.1691790.1691791.9201282.7966471.6027281.930721.671251.7626042.9998272.4513672.42999
Median−10.1532−9.84425−9.95037−5.45851−7.99154−5.27858−9.84367−9.80279−9.91216−7.29137−9.75152−5.33906−7.07368
Rank16213711453981210
F19Mean−10.4029−9.8836−10.2207−5.53737−8.18247−7.12047−8.19905−8.48644−10.2202−8.05921−9.97953−6.67863−7.54998
Best−10.4029−10.4027−10.4027−5.71945−10.4006−10.3165−10.3774−10.3792−10.4025−9.81922−10.4027−10.383−10.0062
Worst−10.4029−3.63785−9.98411−5.30082−5.30082−2.43296−2.47727−3.53837−9.98285−4.54312−5.44475−3.25513−3.17664
Std3.42E-151.4441530.1610040.1610041.9615343.1173322.6083682.3563170.1610511.4600861.0546473.0338171.711875
Median−10.4029−10.2047−10.296−5.61271−9.10019−7.78563−9.98165−10.0327−10.2957−8.36284−10.2334−5.41806−7.93751
Rank15213811763941210
F20Mean−10.5364−10.4274−10.4274−5.66249−9.20887−7.67717−8.70667−9.48063−10.427−8.26849−10.208−6.80118−6.74773
Best−10.5364−10.5295−10.5295−5.76459−10.4527−10.4346−10.5286−10.5295−10.5293−9.76719−10.5295−10.5216−9.80024
Worst−10.5364−10.1103−10.1103−5.34538−5.34538−3.35452−2.60974−5.38693−10.11−4.87011−6.04545−3.2291−3.28855
Std2.7E-150.1134060.1134060.1134071.3816632.9481842.8435991.923620.1133911.4225540.9634793.3103052.211936
Median−10.5364−10.4585−10.4585−5.69352−9.5868−10.0178−10.413−10.4331−10.4582−8.77756−10.4585−4.53964−7.24575
Rank12313710864951112
F21Mean0.3978870.3979280.3979280.4091230.3983810.397960.3979280.3979280.3979290.3979920.3979280.7034490.457962
Best0.3978870.3978870.3978870.398670.3978870.3978930.3978870.3978870.3978880.3978990.3978870.3978870.397887
Worst0.3978870.3981450.3981450.4748770.4010230.3981680.3981460.3981450.3981450.3981640.3981452.5066281.591156
Std07.35E-057.36E-050.0167250.0008968.73E-057.35E-057.36E-057.35E-058.56E-057.36E-050.610290.260471
Median0.3978870.3978940.3978940.4030920.3979710.3979180.3978950.3978940.3978940.3979730.3978940.3979180.397966
Rank1421097536821211
F22Mean33.24913.2491015.6939716.03484310.740033.2491233.2491013.2491123.2491013.24913.24917.040393
Best33.0010983.0010983.0021073.0133753.0011043.0010983.0010983.0011013.0010993.0010983.0010983.003001
Worst35.1273665.12736627.9556328.9182281.456875.1273675.1273675.1273775.1273685.1273665.12736631.20499
Std1.14E-150.4892790.4892797.3211845.96100422.467570.4892720.4892790.4892770.4892790.4892790.4892798.990184
Median33.044413.044413.140023.5410473.1400143.0444173.044413.044433.0444113.044413.044413.179816
Rank12610111395874312
F23Mean−3.86278−3.85185−3.85185−3.82907−3.7303−3.8515−3.84977−3.85185−3.85051−3.85088−3.85185−3.85185−3.85171
Best−3.86278−3.86278−3.86278−3.85382−3.86278−3.86268−3.86277−3.86278−3.86278−3.86253−3.86278−3.86278−3.86276
Worst−3.86278−3.81789−3.81789−3.77776−3.31594−3.81781−3.8175−3.81789−3.81778−3.81766−3.81789−3.81789−3.81759
Std2.22E-150.0105640.0105640.0211130.128810.0104110.0103210.0105640.0106140.010130.0105640.0105640.010686
Median−3.86278−3.85184−3.85184−3.83257−3.73109−3.8518−3.85033−3.85184−3.85132−3.85144−3.85184−3.85184−3.85177
Rank1241112710598336
Sum rank1044341068810267516774488196
Mean rank14.43.410.68.810.26.75.16.77.44.88.19.6
Total ranking1321291165674810
Table 4. Optimization results of the CEC 2017 test suite.
hPSO-TLBO | WSO | AVOA | RSA | MPA | TSA | GWO | hPT2 | hPT1 | ITLBO | IPSO | TLBO | PSO
C17-F1Mean1005.29E+093748.3689.6E+0933,159,3611.64E+0982,897,34142,368,20850,218,72946,455,4302.19E+091.38E+083091.392
Best1004.38E+09508.74378.29E+0910,544.923.5E+0826,138.8110,715,87314,220,69814,179,2621.92E+0961,616,141341.377
Worst1006.79E+0911272.161.14E+101.2E+083.56E+093.01E+0878,853,36582,349,95575,758,1002.6E+093.34E+089150.309
Std01.1E+095382.9891.49E+0961,500,1841.51E+091.54E+0832,783,47435,674,00631,933,4613.27E+081.38E+084290.734
Median1005E+091606.2819.33E+096,078,1191.31E+0915,193,52439,951,79752,152,13247,942,1802.12E+0979,005,6801436.94
Rank11231341085761192
C17-F3Mean3008033.049301.77919082.7761340.56810543.272901.42958.6736981.4005863.50392647.886700.4938300
Best3004074.8323004905.748761.60174026.1681454.004589.3344598.4639546.07472060.398460.8804300
Worst30010,742.63303.805512,146.042399.79914,898.695549.4971575.431603.3811365.4973170.95857.0187300
Std03069.1112.1713733482.46795.06464856.7171987.661471.1406478.0596388.2045532.1373182.67340
Median3008657.364301.65559639.6581100.43611624.122301.09834.9652861.8787771.22162680.099742.0381300
Rank11131281310675942
C17-F4Mean400902.4571405.33731295.052407.1945566.7586411.9069406.1412407.7145405.3359611.6262409.4929419.97
Best400677.5934401.6131818.3657402.3049473.6418405.7307402.8584403.3799403.6201498.7167407.9005400.1039
Worst4001104.07409.14971760.71411.9996674.1198426.6833410.3496414.603407.151713.2278411.7909469.1877
Std0204.00813.31691422.74675.124688102.673310.466543.596825.2165291.98113893.993661.72524234.87123
Median400914.0825405.29321300.566407.2368559.6364407.6068405.6784406.4375405.2862617.2802409.1402405.2941
Rank11231351084621179
C17-F5Mean501.2464561.909543.0492570.3636513.4648562.3384513.5991514.2683517.4738517.9574529.0633533.5634527.7224
Best500.9951547.5217526.9607555.91508.5995543.3011508.5861510.1615512.195514.5008519.2719527.6244511.0772
Worst501.9917570.3438561.9262585.6405518.5725592.2042520.7717517.8512522.4134521.9358541.1436537.1694551.4064
Std0.52269811.0433919.1430717.231845.81588123.143915.4924174.3362865.5497723.40025311.511214.62772719.5741
Median500.9993564.8853541.655569.9518513.3437556.9241512.5192514.5302517.6433517.6965527.9188534.7298524.2029
Rank11110132123456897
C17-F6Mean600631.2476616.8356639.1331601.4607623.9964601.397602.1618602.9351603.8761611.6091606.8656607.4064
Best600627.3127615.7521636.0102600.8105614.5053600.8308601.3224601.7229603.1373609.0681604.6699601.3504
Worst600634.131619.2083642.9987603.1224638.5972601.9612603.9859605.5767604.9986616.0634610.5068619.1985
Std03.3052171.6869463.4329361.17140810.832770.5495041.3166261.9128930.9059843.2313942.8193758.516125
Median600631.7734616.1909638.7618600.955621.4415601.398601.6695602.2204603.6842610.6524606.1428604.5383
Rank11210133112456978
C17-F7Mean711.1267799.8043763.968800.9678724.9498823.9647726.2615726.252729.6364731.7253742.712751.0914732.6781
Best710.6726780.3107743.0682788.1026720.9932785.4855718.1642723.984727.3951729.0775738.5639746.5242725.5384
Worst711.7995816.1562790.5001813.545728.9161864.0148742.7212728.6154733.1142734.773749.4638759.339744.2048
Std0.53875115.8989223.0521212.54713.61951135.9038911.90492.2876872.8246382.6972385.235726.0556548.957916
Median711.0174801.3751761.1518801.1118724.9449823.1792722.0802726.2044729.0182731.5253741.4101749.2512730.4845
Rank11110122134356897
C17-F8Mean801.4928847.368830.689852.2232813.0826847.0627816.1175814.5802817.6934817.3872823.3427836.9731822.7208
Best800.995839.9264820.163841.7367809.2484831.8296810.8479813.4555816.3749815.8313821.7669830.0727815.66
Worst801.9912855.1817845.4663857.0366815.4123865.287820.5682817.0288820.7097819.2567826.6415844.4105829.1593
Std0.6047217.47644311.168947.4383012.86781815.791394.311191.7328912.1281681.7407192.3406147.6880017.036328
Median801.4926847.182828.5632855.0598813.8348845.5672816.5269813.9183816.8444817.2305822.4812836.7047823.0318
Rank11291321143658107
C17-F9Mean9001399.0121175.0261441.344905.14311358.747911.5695904.9645905.8344936.00521025.904911.4671904.2313
Best9001262.075951.29541350.309900.35511155.798900.5895901.8209902.4042908.07931006.621906.9387900.897
Worst9001533.6581626.1521572.35912.76791633.582931.6466908.4465908.8868989.75441057.233919.6198912.2878
Std0128.106328.835499.496035.777535217.399315.206392.8527612.82869539.2288723.057235.8784495.721816
Median9001400.1571061.3281421.359903.72461322.804907.021904.7952906.0233923.09361019.88909.6549901.8702
Rank11210134117358962
C17-F10Mean1006.1792272.6741775.3652531.9841528.6082016.2371725.8411517.5851630.4571557.7211809.692147.9811934.218
Best1000.2842023.1951480.8992368.3241393.6921773.521533.161368.1131438.831389.8511638.2121762.1981553.546
Worst1012.6682447.5362374.3222873.261616.9442238.0491995.3081624.5871775.2871643.5541931.072437.7282335.345
Std7.002135197.3598436.2218244.329107.0706266.5065205.4292117.2316151.6732120.6617148.1346301.1232337.7875
Median1005.8822309.9831623.122443.1761551.8982026.691687.4491538.8211653.8551598.7411834.7392195.9991923.991
Rank11271331062548119
C17-F11Mean11003706.8411147.6463823.9041127.3955216.3211154.0341124.9961130.0541127.691740.5491149.9191142.951
Best11002532.4041118.8841439.9911114.2145075.9431122.1621116.9391121.2081120.791195.4621137.251131.823
Worst11004840.8941197.8196177.6951158.2695292.9631223.951142.3621148.3111136.8762267.181169.9981164.161
Std01091.70736.774752240.03922.00052101.515650.0312712.3035613.038658.484187511.19914.7656915.3188
Median11003727.0331136.9413838.9661118.5495248.191135.0131120.3411125.3481126.5461749.7771146.2151137.909
Rank11171231392541086
C17-F12Mean1352.9593.34E+081,041,8406.67E+08537,442.3984,200.61,339,6761,119,5171,391,1291,447,6521.52E+084,781,6268018.164
Best1318.64674,974,042337,122.51.48E+0819,273.83510,668.343,473.74545,184.3617,787.2614,496.833,669,0301,279,6482505.361
Worst1438.1765.84E+081,889,1871.17E+09841,127.5120,79842,097,0331,919,0432,399,9602,461,2242.65E+088,464,89313,785.05
Std60.273392.71E+08763,715.25.42E+08380,785.4345,933.8952,033.6661,001.8879,737.4953,977.11.23E+084,003,5315405.839
Median1327.5063.39E+08970,525.56.77E+08644,6841,109,0751,609,0991,006,9201,273,3851,357,4441.54E+084,690,9827891.125
Rank11251334768911102
C17-F13Mean1305.32416,270,43417,645.32325304695441.23912351.5810,044.196291.8577405.2978219.5297,388,04816,125.56564.175
Best1303.1141,357,7832693.0227009543735.4967913.5316372.1435115.9496074.4686835.592615,92215,108.722367.428
Worst1308.50854,003,82729,936.81.08E+086879.061931013,7307769.7059547.43510,480.9124,516,11018,714.8416,549.6
Std2.39077426,518,93714,969.87530361271535.2335258.83196.3761212.3911609.9721827.78612,037,7131826.0127080.253
Median1304.8374,860,06318,975.739,713,2625575.211,091.410,037.316140.8876999.6427780.8092,210,08015,339.213669.834
Rank11210132873561194
C17-F14Mean1400.7463925.2882057.8765207.5571980.4693350.6372365.3381807.0861903.0421740.3552937.6211649.6112980.369
Best14003067.4791697.6194645.471434.5911489.1371470.0951453.8041467.2021498.3862195.6071515.8591432.215
Worst1400.9955224.0872758.6546608.8872857.4535364.2524808.8162285.5512338.7892124.3474045.4131833.0386791.638
Std0.5233091056.743519.5787986.0415713.91922220.7771716.878441.5361528.3317288.3858834.9858140.36342694.358
Median1400.9953704.7931887.6144787.9351814.9173274.5811591.221744.4951903.0881669.3432754.7321624.7731848.812
Rank11271361184539210
C17-F15Mean1500.33110,141.395420.88713544.474168.4777035.4525909.8923267.6333681.8313018.0337377.0442021.4578924.747
Best1500.0013199.9722428.0922943.8973518.1512551.2553846.7252857.0782946.8172337.0274624.8081842.2572858.362
Worst1500.517,211.2812098.8128,895.15286.27412348.586956.2174051.4524770.5693748.6029747.7372152.94614665.88
Std0.2476486166.6534733.2711904.67828.51314366.5291487.545574.9909815.2069631.95772486.011136.71225191.053
Median1500.41310077.163578.32411169.443934.7416620.9856418.3133081.0013504.9692993.2517567.8152045.3129087.371
Rank11271369845310211
C17-F16Mean1600.762004.6131812.4352008.331693.732037.5731735.6531675.2471696.6911675.4581821.1431686.7881920.464
Best1600.3561942.8091650.2861817.9891654.671863.6421630.121653.191675.7941668.7941761.021660.1381820.271
Worst1601.122147.2371924.0892263.2971719.0192207.9921823.7881695.8331716.1891682.2921862.5471734.4992078.647
Std0.332314100.9265121.8302196.5231.24997162.915783.9240120.8588523.369186.77312947.0852534.94773125.8979
Median1600.7811964.2041837.6831976.0171700.6162039.3281744.3511675.9821697.391675.3741830.5031676.2571891.469
Rank11181251372639410
C17-F17Mean1700.0991814.2661750.5621814.4081736.1211799.0781767.1881731.4721737.4161733.4351753.8921757.5771751.874
Best1700.021805.4521734.5721798.4731723.3071784.4041725.1511722.7441728.5111725.1211749.0791748.261745.29
Worst1700.3321819.5141792.5581823.4341773.3261809.3061864.9231753.7531760.1921750.0761763.5471766.9121758.499
Std0.1632196.46760729.5298111.5653126.0986211.3154168.9060815.6799516.0621211.867237.1047349.8693265.942316
Median1700.0221816.0481737.5581817.8621723.9251801.3011739.3391724.6961730.4811729.2721751.4711757.5691751.853
Rank11261341110253897
C17-F18Mean1805.362,700,24212,164.855,383,87711,402.6112,356.1619,768.5112,292.5814,853.5512,788.061,232,70928,836.6321,629.9
Best1800.003138,517.94723.003266,622.34264.8987199.6456310.4616928.1728412.2538741.0466,373.9523,000.72867.661
Worst1820.4517,824,44416,330.6815,627,89117,201.9915,723.7931,879.116,355.1319,819.8715,990.63,564,33236,637.1440,262.56
Std10.585843,744,6555510.4277,486,8985844.8733889.49913,653.754139.175194.7423184.8021,705,2096740.65520,306.86
Median1800.4921,419,00313,802.862,820,49712,071.7813,250.620,442.2412,943.5215,591.0413,210.29650,064.127,854.3421,694.7
Rank11231325847611109
C17-F19Mean1900.445375,744.67433.981666,091.66385.277119,677.36182.6045568.2596953.6784601.69116,1041.95533.39924,663.69
Best1900.03923,791.012314.74743,418.542448.012024.0572198.7532410.4862462.7933059.65111,808.982168.6112615.743
Worst1901.559791,564.413,172.511,428,79612,225.8240,192.413,706.129972.91514,000.355757.04632,7034.912,057.4375,964.03
Std0.783273347,977.34854.412656805.84951.454142,642.65427.9183428.4235249.2741234.592145,239.84793.2236,380.59
Median1900.09343,811.57124.333596075.75433.649118,246.34412.774944.8185675.7864795.034152,661.83953.77910,037.49
Rank11281361054721139
C17-F20Mean2000.3122210.4242168.3982217.9632094.492203.1822167.7892070.2112083.0962073.42131.9182075.2912166.904
Best2000.3122157.4252035.9012161.662077.3672108.7092132.3152059.6512073.6522068.5362110.4952066.2442143.076
Worst2000.3122278.1362286.8552269.3532122.3432311.8012238.7032085.8052097.6332076.9842146.3752084.0982198.36
Std052.6401118.843555.9668220.4043190.5767850.687311.674810.756554.04963717.751587.78077428.90349
Median2000.3122203.0682175.4192220.4192089.1242196.112150.0682067.6932080.5492074.042135.4012075.4112163.091
Rank11210136119253748
C17-F21Mean22002293.1132218.1662268.572259.1862323.4472312.2192253.1032264.8242247.4222271.0552299.3572317.423
Best22002248.1372209.2522228.0852256.4862224.9692307.8942238.4172245.3952229.8162264.4362208.9542309.456
Worst22002317.4852242.322291.4272261.8212368.1662317.1542258.2562271.6442253.952276.7212335.5862324.888
Std034.1159416.9751729.547262.4229570.304354.00901410.3039313.6287212.382285.44120863.825417.984688
Median22002303.4162210.5462277.3842259.2192350.3262311.9152257.8692271.1292252.9612271.5322326.4442317.675
Rank19275131146381012
C17-F22Mean2300.0732713.8412309.0712883.1862305.3072692.1022308.7092306.5692308.3252307.3352438.1432319.0892313.126
Best23002594.7062304.2992685.1192300.922441.1212301.2262302.8652303.6742304.7012389.6172312.7252300.631
Worst2300.292845.0022311.8833027.8232309.0282888.3492321.3722310.2822312.1882311.2132469.4662329.7932344.973
Std0.152615121.80813.496248152.01553.871741210.33589.3563753.6814744.7348893.04280437.561478.27026322.38123
Median23002707.8292310.052909.9012305.6392719.4682306.1192306.5652308.7182306.7132446.7462316.9192303.451
Rank11271321163541098
C17-F23Mean2600.9192693.9422641.8352697.282615.4982718.9172614.9452617.1872621.7632620.0832640.7022642.2862643.937
Best2600.0032654.2272630.6092669.5222612.8982634.5452609.0252616.182620.1882618.8712631.6332631.7332636.826
Worst2602.872716.5992658.4232735.5052617.812761.5292621.0622618.6752623.5952620.6452648.4362650.8162655.745
Std1.38888630.8424813.7900232.481612.34983560.17726.3812541.1993371.4968770.8652768.4659688.8712369.013826
Median2600.4032702.4712639.1552692.0472615.6412739.7972614.8462616.9472621.6352620.4082641.3692643.2972641.588
Rank11181231324657910
C17-F24Mean2630.4882775.692766.2632844.2422636.4722672.1392748.422658.0642672.0692672.2712721.3232755.1062764.326
Best2516.6772723.6532734.2852820.9222622.022537.5062724.2362620.2872645.7192637.9372695.6252742.4342755.438
Worst2732.322853.5292786.4722904.6052643.6872810.1582761.8412687.92692.9532703.0012756.652767.1752785.859
Std122.549866.2580426.0106142.4232710.52639153.191418.11336.7397824.5965736.4110928.9825312.1571615.28211
Median2636.4772762.7892772.1482825.7212640.092670.4462753.8012662.0342674.8022674.0732716.5092755.4072758.003
Rank11211132583467910
C17-F25Mean2932.6393147.522914.1053258.122918.2783122.5512937.9742924.3612923.9092924.4352998.8582933.0812923.428
Best2898.0473060.6822899.0663194.1892915.2762907.9312922.6262910.7532911.7572909.852994.5022915.1732898.661
Worst2945.7933340.7472948.833328.6962924.5123617.3852945.8482933.7132934.1622936.6923003.7842950.0782946.546
Std24.28873136.74524.5099758.419044.545926350.925510.9633910.44749.94208312.251314.89170520.2037627.48149
Median2943.3593094.3252904.2613254.7982916.6632982.4442941.7112926.4892924.8582925.5992998.5732933.5362924.253
Rank71211321195461083
C17-F26Mean29003564.3292975.833711.4833005.9653583.0823246.2913009.7083026.4123022.5563121.4873190.6882904.021
Best29003234.1712811.893400.4542897.2413136.092970.32917.2622917.912904.8463091.3672911.4212807.879
Worst29003796.6193140.2374030.653268.8764197.643850.2333263.9873311.4483306.7423165.8353820.4633008.206
Std3.91E-13283.394199.1308284.9489185.0471548.1546427.1786178.4936200.1559200.284933.22064444.767486.1682
Median29003613.2632975.5973707.4132928.8713499.2983082.3162928.7932938.1462939.3173114.3733015.4342900
Rank11131341210576892
C17-F27Mean3089.5183204.1673120.413225.6923105.9023176.8073116.7213104.3043108.1823104.8233139.8863115.7563135.637
Best3089.5183156.2483097.2143125.563092.4273103.9623094.5063092.6673093.4993093.733100.9493097.2883097.024
Worst3089.5183273.8983180.1713407.8793135.5333216.283176.2453119.4723124.8833119.8553180.5493168.4153182.511
Std2.76E-1352.236441.98196131.114321.0153354.0590841.8193512.4057714.5856713.2901937.0212836.9522437.81668
Median3089.5183193.263102.1273184.6643097.8253193.4933098.0683102.5393107.1743102.8533139.0233098.6613131.506
Rank11281341172531069
C17-F28Mean31003603.643237.6613751.6093221.0393569.0293340.6513210.9643234.0963213.8943350.3573321.8623303.482
Best31003559.7553103.3223668.1813175.8553407.3013202.1253198.3563221.5073193.3923293.0563215.8753176.295
Worst31003638.2393387.2633810.5523243.763761.4613403.3023222.5063247.9613224.0923390.2653387.4823387.467
Std034.63407131.971769.7818533.67228193.591497.9153411.6419211.8685414.8670944.2673383.67492100.7125
Median31003608.2843230.0293763.8523232.2693553.6763378.5883211.4973233.4583219.0453359.0543342.0463325.084
Rank11261341192531087
C17-F29Mean3132.2413323.7723282.3463368.2613205.1853236.5863264.0153190.5293202.0513196.4143250.5573214.2243264.841
Best3130.0763306.053207.8413296.3773165.7293173.7113194.3773165.163171.763172.483191.2553171.8443167.558
Worst3134.8413340.3133362.4943434.0753242.9933298.7183370.7663215.2513225.833225.813285.273238.4743346.938
Std2.61123218.6715881.3035672.169735.3092653.9750189.6208821.5515123.5720824.371243.0838532.0739685.65981
Median3132.0233324.3623279.5243371.2953206.0093236.9583245.4583190.8533205.3073193.6843262.8523223.2883272.434
Rank11211135792438610
C17-F30Mean3418.7342,111,674294,724.63,484,443407,950.5596,404.5899,516.1253,362276,973.7223,025.71,045,24673,878.61382,013.5
Best3394.6821,277,34099,075.07781,071.315,417.93138,930.332,077.1514,510.3516,141.9830,921.79354,534.728,022.716354.214
Worst3442.9073,145,244757,357.95,477,66961,0461.1122,61811,291,826374,902.6417,356.2313,368.71,361,416129,022757,392.4
Std29.21253813,970326,040.12,072,605280,538.3484,692.4624,972.1170,483.8188,716.3136,522.3488,851.643,961.79455,349.8
Median3418.6732,012,056161,232.73,839,517502,961.4510,253.11,137,081312,017.5337,198.3273,906.21,232,51769,234.87382,153.8
Rank11261389104531127
Sum rank35338199366115299211101160132267209207
Mean rank1.20689711.655176.86206912.620693.96551710.310347.2758623.4827595.5172414.5517249.2068977.2068977.137931
Total rank11261331192541087
Table 5. Wilcoxon rank-sum test results.
Compared Algorithms | Unimodal | High-Multimodal | Fixed-Multimodal | CEC 2017 Test Suite
hPSO-TLBO vs. WSO | 1.85E-24 | 1.97E-21 | 2.09E-34 | 2.02E-21
hPSO-TLBO vs. AVOA | 3.02E-11 | 4.99E-05 | 1.44E-34 | 3.77E-19
hPSO-TLBO vs. RSA | 4.25E-07 | 1.63E-11 | 1.44E-34 | 1.97E-21
hPSO-TLBO vs. MPA | 1.01E-24 | 1.04E-14 | 2.09E-34 | 2.00E-18
hPSO-TLBO vs. TSA | 1.01E-24 | 1.31E-20 | 1.44E-34 | 9.50E-21
hPSO-TLBO vs. GWO | 1.01E-24 | 5.34E-16 | 1.44E-34 | 5.23E-21
hPSO-TLBO vs. hPT2 | 1.01E-24 | 1.51E-22 | 1.44E-34 | 5.88E-20
hPSO-TLBO vs. hPT1 | 1.01E-24 | 4.09E-17 | 1.44E-34 | 3.41E-22
hPSO-TLBO vs. ITLBO | 1.01E-24 | 5.34E-16 | 1.44E-34 | 2.40E-22
hPSO-TLBO vs. IPSO | 1.01E-24 | 2.46E-24 | 1.44E-34 | 1.04E-19
hPSO-TLBO vs. TLBO | 1.01E-24 | 1.97E-21 | 1.44E-34 | 1.60E-18
hPSO-TLBO vs. PSO | 1.01E-24 | 1.97E-21 | 1.44E-34 | 1.54E-19
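Each entry in Table 5 is a p-value from a Wilcoxon rank-sum test [91] comparing the results of hPSO-TLBO with those of one competitor over repeated independent runs; values below 0.05 indicate a statistically significant difference. A minimal sketch of how such a p-value can be computed with SciPy follows; the two samples here are hypothetical placeholders rather than the runs actually used in the paper.

```python
# Hedged sketch: Wilcoxon rank-sum test on two hypothetical samples of best
# objective values from 20 independent runs of two optimizers.
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(seed=1)
hpso_tlbo_runs = rng.normal(loc=0.0, scale=1e-6, size=20)  # placeholder runs
pso_runs = rng.normal(loc=25.0, scale=9.0, size=20)        # placeholder runs

statistic, p_value = ranksums(hpso_tlbo_runs, pso_runs)
print(f"p-value = {p_value:.3e}")  # p < 0.05: significant performance difference
```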
Table 6. Performance of optimization algorithms on the pressure vessel design problem.
Algorithm | Ts | Th | R | L | Optimum Cost
hPSO-TLBO | 0.778027 | 0.384579 | 40.31228 | 200 | 5882.901
WSO | 0.778027 | 0.384579 | 40.31228 | 200 | 5882.901
AVOA | 0.778031 | 0.384581 | 40.31251 | 199.9969 | 5882.909
RSA | 1.266864 | 0.684455 | 64.03621 | 21.84755 | 8083.221
MPA | 0.778027 | 0.384579 | 40.31228 | 200 | 5882.901
TSA | 0.779753 | 0.386033 | 40.39931 | 200 | 5913.936
GWO | 0.778534 | 0.386025 | 40.32206 | 199.9583 | 5891.47
hPT1 | 0.863331 | 0.551663 | 43.82355 | 178.1357 | 7423.859
hPT2 | 0.909754 | 0.612768 | 45.4607 | 170.1978 | 8203.294
ITLBO | 1.007644 | 0.429869 | 44.41372 | 164.2482 | 7173.881
IPSO | 0.971381 | 0.574936 | 45.31477 | 185.8739 | 8924.884
TLBO | 1.697384 | 0.497968 | 48.96822 | 111.6649 | 11,655.86
PSO | 1.683083 | 0.664227 | 67.07266 | 23.90255 | 10,707.79
Table 7. Statistical results of optimization algorithms on the pressure vessel design problem.
Algorithm | Mean | Best | Worst | Std | Median | Rank
hPSO-TLBO | 5882.895451 | 5882.895451 | 5882.895451 | 2.06E-12 | 5882.895451 | 1
WSO | 5892.660121 | 5882.901051 | 5979.188336 | 28.7049213 | 5882.901464 | 3
AVOA | 6277.54171 | 5882.908511 | 7246.78008 | 455.2164111 | 6076.08962 | 5
RSA | 13,534.14797 | 8083.221035 | 22,422.75871 | 4039.895167 | 12,354.52124 | 9
MPA | 5882.901057 | 5882.901052 | 5882.901064 | 4.76E-06 | 5882.901055 | 2
TSA | 6338.024708 | 5913.936056 | 7131.963127 | 430.4115812 | 6188.536588 | 6
GWO | 6034.674549 | 5891.469631 | 6806.784466 | 309.2651669 | 5901.245264 | 4
hPT1 | 11,215.46634 | 7423.857014 | 16,642.53656 | 2954.126869 | 11,038.72283 | 8
hPT2 | 13,923.19832 | 8203.292711 | 21,021.48088 | 4224.527187 | 13,968.72996 | 10
ITLBO | 11,172.03452 | 7173.87948 | 18,660.95934 | 3548.614934 | 10,397.25346 | 7
IPSO | 15,785.03122 | 8924.882242 | 22,541.58427 | 5160.561356 | 16,389.87266 | 11
TLBO | 32,131.25646 | 11,655.86208 | 69,689.83545 | 17,822.77646 | 28,265.18798 | 12
PSO | 33,789.17406 | 10,707.79023 | 58,436.51582 | 16,685.46389 | 37,331.59553 | 13
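For context, the pressure vessel problem asks for the cheapest vessel over shell thickness Ts, head thickness Th, inner radius R, and cylinder length L [92]. The sketch below restates the formulation commonly used for this benchmark and re-evaluates the hPSO-TLBO solution from Table 6; it reflects the standard literature formulation, not necessarily the authors' exact implementation.

```python
# Hedged sketch of the standard pressure vessel design problem [92]: minimize
# fabrication cost subject to four constraints (feasible when every g <= 0).
import math

def pressure_vessel_cost(Ts, Th, R, L):
    return (0.6224 * Ts * R * L + 1.7781 * Th * R**2
            + 3.1661 * Ts**2 * L + 19.84 * Ts**2 * R)

def pressure_vessel_constraints(Ts, Th, R, L):
    return [
        -Ts + 0.0193 * R,                                          # shell thickness
        -Th + 0.00954 * R,                                         # head thickness
        -math.pi * R**2 * L - (4 / 3) * math.pi * R**3 + 1296000,  # minimum volume
        L - 240,                                                   # length limit
    ]

# Re-evaluating the hPSO-TLBO solution of Table 6 gives a cost of about 5882.9.
x = (0.778027, 0.384579, 40.31228, 200.0)
print(f"cost = {pressure_vessel_cost(*x):.3f}")
print("constraint values:", [round(g, 4) for g in pressure_vessel_constraints(*x)])
```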
Table 8. Performance of optimization algorithms on the speed reducer design problem.
Algorithm | b | M | p | l1 | l2 | d1 | d2 | Optimum Cost
hPSO-TLBO | 3.5 | 0.7 | 17 | 7.3 | 7.8 | 3.350215 | 5.286683 | 2996.348
WSO | 3.5 | 0.7 | 17 | 7.30001 | 7.8 | 3.350215 | 5.286683 | 2996.348
AVOA | 3.5 | 0.7 | 17 | 7.300001 | 7.8 | 3.350215 | 5.286683 | 2996.348
RSA | 3.595192 | 0.7 | 17 | 8.25192 | 8.27596 | 3.355842 | 5.489744 | 3188.946
MPA | 3.5 | 0.7 | 17 | 7.3 | 7.8 | 3.350215 | 5.286683 | 2996.348
TSA | 3.513321 | 0.7 | 17 | 7.3 | 8.27596 | 3.350551 | 5.290332 | 3014.45
GWO | 3.500662 | 0.7 | 17 | 7.305312 | 7.8 | 3.364398 | 5.28888 | 3001.683
hPT1 | 3.501176 | 0.700321 | 17.46705 | 7.397422 | 7.849971 | 3.382102 | 5.297636 | 2.33E+10
hPT2 | 3.50256 | 0.700562 | 17.62741 | 7.461131 | 7.866856 | 3.397713 | 5.307414 | 3.86E+10
ITLBO | 3.511587 | 0.700826 | 18.92588 | 7.46553 | 7.871304 | 3.414912 | 5.297564 | 3466.045
IPSO | 3.521574 | 0.700022 | 17.33948 | 7.52107 | 7.91627 | 3.530869 | 5.345062 | 3161.188
TLBO | 3.557936 | 0.704128 | 26.62939 | 8.12765 | 8.156521 | 3.673703 | 5.341085 | 5344.833
PSO | 3.508452 | 0.700074 | 18.13159 | 7.402286 | 7.870261 | 3.603493 | 5.345904 | 3312.579
Table 9. Statistical results of optimization algorithms on the speed reducer design problem.
Algorithm | Mean | Best | Worst | Std | Median | Rank
hPSO-TLBO | 2996.348165 | 2996.348165 | 2996.348165 | 1.03E-12 | 2996.348165 | 1
WSO | 2996.640981 | 2996.348305 | 2998.87965 | 0.665661946 | 2996.364895 | 3
AVOA | 3001.003783 | 2996.348187 | 3011.558199 | 4.516285408 | 3000.900984 | 4
RSA | 3285.981388 | 3188.946352 | 3346.202854 | 65.46309514 | 3301.347252 | 7
MPA | 2996.348168 | 2996.348165 | 2996.348178 | 3.62E-06 | 2996.348166 | 2
TSA | 3033.306292 | 3014.450491 | 3047.487651 | 11.54079978 | 3035.152884 | 6
GWO | 3004.8929 | 3001.683252 | 3011.053403 | 2.85373292 | 3004.357996 | 5
hPT1 | 1.60763E+13 | 23,282,704,326 | 7.95377E+13 | 2.14493E+13 | 8.15597E+12 | 9
hPT2 | 2.4969E+13 | 38,611,102,157 | 1.07078E+14 | 3.07571E+13 | 1.42599E+13 | 10
ITLBO | 1.44E+13 | 3466.045209 | 1.04E+14 | 2.64E+13 | 5.63E+12 | 8
IPSO | 3.18058E+13 | 3161.188406 | 1.6112E+14 | 4.23337E+13 | 2.2749E+13 | 11
TLBO | 7.18E+13 | 5344.833366 | 5.20E+14 | 1.32E+14 | 2.81E+13 | 12
PSO | 1.06E+14 | 3312.579176 | 5.37E+14 | 1.41E+14 | 7.58E+13 | 13
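The speed reducer problem minimizes the weight of a gear reducer over the seven variables listed in Table 8. The sketch below gives the objective in the form widely used for this benchmark (its eleven feasibility constraints are omitted) and re-evaluates the hPSO-TLBO solution from Table 8; it is an illustration of the standard formulation, not necessarily the authors' code.

```python
# Hedged sketch of the widely used speed reducer objective: weight as a
# function of face width b, module m, pinion teeth p, shaft lengths l1 and l2,
# and shaft diameters d1 and d2. The eleven constraints are omitted here.
def speed_reducer_cost(b, m, p, l1, l2, d1, d2):
    return (0.7854 * b * m**2 * (3.3333 * p**2 + 14.9334 * p - 43.0934)
            - 1.508 * b * (d1**2 + d2**2)
            + 7.4777 * (d1**3 + d2**3)
            + 0.7854 * (l1 * d1**2 + l2 * d2**2))

# Re-evaluating the hPSO-TLBO solution of Table 8 gives a cost of about 2996.35.
print(f"{speed_reducer_cost(3.5, 0.7, 17, 7.3, 7.8, 3.350215, 5.286683):.3f}")
```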
Table 10. Performance of optimization algorithms on the welded beam design problem.
Algorithm | h | l | t | b | Optimum Cost
hPSO-TLBO | 0.20573 | 3.470489 | 9.036624 | 0.20573 | 1.724852
WSO | 0.20573 | 3.470489 | 9.036624 | 0.20573 | 1.724852
AVOA | 0.20494 | 3.487615 | 9.036514 | 0.205735 | 1.725954
RSA | 0.196401 | 3.53676 | 9.953681 | 0.218189 | 1.983572
MPA | 0.20573 | 3.470489 | 9.036624 | 0.20573 | 1.724852
TSA | 0.204146 | 3.496185 | 9.065083 | 0.20617 | 1.734136
GWO | 0.205588 | 3.473748 | 9.036228 | 0.205801 | 1.725545
hPT1 | 0.237138 | 3.829949 | 8.522555 | 0.262167 | 2.139874
hPT2 | 0.243247 | 3.783904 | 9.178428 | 0.263847 | 2.384718
ITLBO | 0.227451 | 3.687204 | 8.574407 | 0.25102 | 1.994317
IPSO | 0.268698 | 3.523407 | 8.892821 | 0.293392 | 2.53362
TLBO | 0.318796 | 4.452332 | 6.725274 | 0.432185 | 3.065577
PSO | 0.377926 | 3.423201 | 7.289954 | 0.585841 | 4.097012
Table 11. Statistical results of optimization algorithms on the welded beam design problem.
Algorithm | Mean | Best | Worst | Std | Median | Rank
hPSO-TLBO | 1.724679823 | 1.724679823 | 1.724679823 | 2.51E-16 | 1.724679823 | 1
WSO | 1.724844362 | 1.724844016 | 1.724849731 | 1.42E-06 | 1.724844016 | 3
AVOA | 1.762377344 | 1.725945958 | 1.846469707 | 0.041484186 | 1.748038057 | 6
RSA | 2.19632836 | 1.983563906 | 2.555158029 | 0.163966432 | 2.170484224 | 7
MPA | 1.724844021 | 1.724844017 | 1.724844028 | 3.81E-09 | 1.724844021 | 2
TSA | 1.743730267 | 1.734127479 | 1.753218931 | 0.006376703 | 1.743829634 | 5
GWO | 1.727321573 | 1.725537072 | 1.731495767 | 0.001550229 | 1.727068458 | 4
hPT1 | 7.51754E+12 | 2.139816676 | 4.96972E+13 | 1.39052E+13 | 1.47507E+11 | 9
hPT2 | 1.16016E+13 | 2.384677086 | 6.62629E+13 | 1.94524E+13 | 2.95014E+11 | 10
ITLBO | 6.87E+12 | 1.994259593 | 6.63E+13 | 1.85E+13 | 2.548744746 | 8
IPSO | 1.4204E+13 | 2.533578588 | 8.59849E+13 | 2.98947E+13 | 3.397180524 | 11
TLBO | 3.43E+13 | 3.065568295 | 3.31E+14 | 9.23E+13 | 5.819237012 | 12
PSO | 4.73E+13 | 4.097004136 | 2.87E+14 | 9.96E+13 | 6.891186011 | 13
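The welded beam problem minimizes fabrication cost over weld thickness h, weld length l, bar height t, and bar thickness b; the full problem also constrains shear stress, bending stress, buckling load, and end deflection. The sketch below restates the commonly used cost function and re-evaluates the hPSO-TLBO solution from Table 10, as an illustration of the standard benchmark rather than the authors' exact implementation.

```python
# Hedged sketch of the commonly used welded beam cost function: fabrication
# cost over weld thickness h, weld length l, bar height t, and bar thickness b.
def welded_beam_cost(h, l, t, b):
    return 1.10471 * h**2 * l + 0.04811 * t * b * (14.0 + l)

# Re-evaluating the hPSO-TLBO solution of Table 10 gives a cost of about 1.7249.
print(f"{welded_beam_cost(0.20573, 3.470489, 9.036624, 0.20573):.6f}")
```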
Table 12. Performance of optimization algorithms on the tension/compression spring design problem.
Algorithm | d | D | P | Optimum Cost
hPSO-TLBO | 0.051689 | 0.356718 | 11.28897 | 0.012665
WSO | 0.051687 | 0.356669 | 11.29185 | 0.012665
AVOA | 0.051176 | 0.344499 | 12.04499 | 0.01267
RSA | 0.050081 | 0.312796 | 14.82157 | 0.013174
MPA | 0.051691 | 0.35676 | 11.28651 | 0.012665
TSA | 0.050966 | 0.339564 | 12.38189 | 0.012682
GWO | 0.051965 | 0.363368 | 10.91381 | 0.012671
hPT1 | 0.055007 | 0.46737 | 9.513398 | 0.013657
hPT2 | 0.056665 | 0.522698 | 8.628156 | 0.014153
ITLBO | 0.054843 | 0.4635 | 9.776336 | 0.013664
IPSO | 0.056312 | 0.512716 | 9.339243 | 0.014227
TLBO | 0.068247 | 0.908916 | 2.446611 | 0.017633
PSO | 0.068162 | 0.905704 | 2.446611 | 0.017528
Table 13. Statistical results of optimization algorithms on the tension/compression spring design problem.
Algorithm | Mean | Best | Worst | Std | Median | Rank
hPSO-TLBO | 0.012601907 | 0.012601907 | 0.012601907 | 7.58E-18 | 0.012601907 | 1
WSO | 0.012673576 | 0.012662188 | 0.012826009 | 4.02E-05 | 0.012662617 | 3
AVOA | 0.013352445 | 0.012667288 | 0.014177381 | 0.000625752 | 0.013282895 | 7
RSA | 0.013254044 | 0.013170803 | 0.013400678 | 7.79E-05 | 0.013232604 | 6
MPA | 0.012662191 | 0.012662188 | 0.0126622 | 3.20E-09 | 0.01266219 | 2
TSA | 0.012964934 | 0.012679454 | 0.013539129 | 0.000271138 | 0.012889919 | 5
GWO | 0.012720992 | 0.012667804 | 0.012948444 | 6.20725E-05 | 0.012718442 | 4
hPT1 | 1.06544E+12 | 0.013636157 | 1.89059E+13 | 4.66181E+12 | 0.013724578 | 10
hPT2 | 2.13088E+12 | 0.014137489 | 3.78117E+13 | 9.32E+12 | 0.014252841 | 11
ITLBO | 0.013814906 | 0.013642726 | 0.013994358 | 0.000119693 | 0.013805438 | 8
IPSO | 6.39263E+12 | 0.014211676 | 1.13435E+14 | 2.79708E+13 | 0.014234198 | 12
TLBO | 1.82E-02 | 0.017629673 | 1.88E-02 | 4.02E-04 | 0.018126014 | 9
PSO | 2.13E+13 | 0.017524526 | 3.78E+14 | 9.32E+13 | 0.017524526 | 13
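The tension/compression spring problem minimizes spring weight over wire diameter d, mean coil diameter D, and the number of active coils P. The sketch below restates the commonly used objective together with the minimum-deflection constraint and re-evaluates the hPSO-TLBO solution from Table 12; it is an illustration of the standard benchmark formulation, not necessarily the authors' code.

```python
# Hedged sketch of the standard tension/compression spring problem: minimize
# the weight (P + 2) * D * d^2; the minimum-deflection constraint g1 <= 0 is
# shown as an example of the problem's four constraints.
def spring_cost(d, D, P):
    return (P + 2) * D * d**2

def spring_deflection_constraint(d, D, P):
    return 1 - (D**3 * P) / (71785 * d**4)

# Re-evaluating the hPSO-TLBO solution of Table 12 gives a cost of about 0.012665.
x = (0.051689, 0.356718, 11.28897)
print(f"cost = {spring_cost(*x):.6f}, g1 = {spring_deflection_constraint(*x):.2e}")
```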