Article

Teaching–Learning Optimization Algorithm Based on the Cadre–Mass Relationship with Tutor Mechanism for Solving Complex Optimization Problems

1 School of Mechanical Engineering, Guizhou University, Guiyang 550025, China
2 State Key Laboratory of Public Big Data, Guizhou University, Guiyang 550025, China
* Author to whom correspondence should be addressed.
Submission received: 25 June 2023 / Revised: 10 September 2023 / Accepted: 21 September 2023 / Published: 1 October 2023
(This article belongs to the Special Issue Biomimicry for Optimization, Control, and Automation)

Abstract:
The teaching–learning-based optimization (TLBO) algorithm, which has gained popularity among scholars for addressing practical issues, suffers from several drawbacks including slow convergence speed, susceptibility to local optima, and suboptimal performance. To overcome these limitations, this paper presents a novel algorithm called the teaching–learning optimization algorithm, based on the cadre–mass relationship with the tutor mechanism (TLOCTO). Building upon the original teaching foundation, this algorithm incorporates the characteristics of class cadre settings and extracurricular learning institutions. It proposes a new learner strategy, cadre–mass relationship strategy, and tutor mechanism. The experimental results on 23 test functions and CEC-2020 benchmark functions demonstrate that the enhanced algorithm exhibits strong competitiveness in terms of convergence speed, solution accuracy, and robustness. Additionally, the superiority of the proposed algorithm over other popular optimizers is confirmed through the Wilcoxon signed rank-sum test. Furthermore, the algorithm’s practical applicability is demonstrated by successfully applying it to three complex engineering design problems.

1. Introduction

Optimization algorithms are a class of mathematical techniques employed to seek the optimal solution for a problem, with the primary focus on either maximizing or minimizing an objective function [1]. Traditional optimization methods encounter various challenges as the scale and complexity increase, such as high costs, low efficiency, long execution times, and a tendency to become trapped in local optima [2]. However, metaheuristic optimization algorithms draw inspiration from natural phenomena and the fundamental characteristics of biological systems, endowing them with the capability to solve a wide range of real-world problems [3]. These metaheuristic algorithms possess numerous advantages, including efficient operation, adaptable flexibility, robust stability, exceptional self-organization capabilities, straightforward implementation, potent parallelism, and seamless integration with other algorithms [4]. A myriad of metaheuristic algorithms have been developed to address diverse optimization problems. These algorithms leverage two essential attributes—exploration and exploitation—to effectively navigate the problem spaces and unveil optimal solutions [5].
Nature-inspired algorithms, also known as methods for simulating biological or physical phenomena to tackle optimization problems, play a crucial role in the field. These approaches can be broadly classified into three main types: evolutionary-based, physical-based, and population-based [6]. Evolutionary algorithms, such as genetic algorithms (GA) [7] and differential evolution (DE) [8], draw inspiration from the principles of evolution in biology. GA emulates natural selection, crossover, mutation, and other biological processes to generate novel solutions, retain superior individuals, and progressively explore the optimal solution. Similarly, DE treats each individual as a vector within an n-dimensional space, utilizing operations like mutation, crossover, and selection to iteratively search for the optimal solution. In practical applications, evolutionary algorithms can be used for various optimization problems, such as combinatorial optimization, nonlinear programming, and function optimization.
Physics-based algorithms are a class of optimization algorithms that leverage simulations of physical phenomena to address problems. These algorithms incorporate various physics models, including partial differential equations, kinetic equations, and probability distributions. An illustrative example is simulated annealing (SA) [9], which emulates the annealing process of materials by gradually reducing the temperature and making decisions regarding accepting or rejecting new states based on changes in energy. This approach enables a comprehensive exploration of optimal solutions. Another notable algorithm is particle swarm optimization (PSO) [10], which views the problem as a group of particles, with each particle representing a potential solution. Through iterative updates utilizing velocity and position mechanisms, PSO aims to discover the global optimum. The efficacy of these algorithms has been demonstrated across diverse domains, such as combinatorial optimization, image processing, and machine learning.
Population-based algorithms belong to a category of bionic algorithms that address optimization problems by simulating the cooperative behaviors observed in natural groups. These algorithms facilitate interactions and collaboration among multiple agents. For instance, the green anaconda optimization (GAO) algorithm [11] is derived from the natural mating and hunting behaviors of male anacondas, specifically their ability to locate female anacondas. Similarly, the egret swarm optimization algorithm (ESOA) [12] takes inspiration from the hunting strategies employed by two egret species—the great egret and the snowy egret. The ESOA encompasses three crucial components: a sit-and-wait strategy, an aggressive strategy, and discriminant conditions. Notably, this population-based algorithm has extensive applications across various fields, such as engineering design, bioinformatics, finance, etc.
These algorithms share similar advantages: they can address multiple objective functions and nonlinear constraints without requiring derivatives or function continuity, which makes them applicable to a diverse range of optimization problems.
Teaching–learning-based optimization (TLBO) [13] is a technique that draws inspiration from teaching methods employed in the education process, and simulates the influence of teachers on students. This algorithm, despite having fewer parameters, demonstrates excellent performance across various optimization problems. TLBO, along with its enhanced versions, has shown effectiveness in addressing continuous optimization problems [14], combinatorial problems [15], and real-world engineering problems [16]. However, through our rigorous literature survey, we have identified areas where the results presented in earlier studies can be improved in terms of accuracy, robustness, and convergence of the solutions. For example, the LNTLBO algorithm [17], an improved version of TLBO, integrates a logarithmic spiral strategy and a triangular mutation rule to enhance the learning process. By incorporating the logarithmic spiral strategy during the teacher stage, students can actively seek guidance from their teachers, thereby accelerating convergence speed. Moreover, the adoption of a new triangular mutation learning mechanism further improves the learners’ exploration and exploitation abilities. Another approach, the artificial bee colony and teaching–learning-based optimization (ABC-TLBO) algorithm [18], revamps the search strategy of both employed and onlooker bees. It builds upon the basic ABC framework and incorporates TLBO in the observer bee stage to enhance the algorithm’s exploitation capabilities. To improve the quality of the solutions, a chaotic teaching–learning-based optimization (chaotic TLBO) algorithm [19] is proposed, which adopts different chaos mechanisms and introduces a local search method. These additions aim to improve the overall quality of the solution. In conclusion, there is a significant demand for the improvement of the TLBO algorithm, as it holds tremendous potential to enhance performance and provide more satisfactory solutions for complex problems.
In this paper, a new optimization method, TLOCTO (teaching–learning optimization algorithm based on the cadre–mass relationship with tutor mechanism), is proposed. Its innovations are a cadre–mass relationship strategy and a tutor mechanism, which improve the teacher phase and learner phase of the TLBO algorithm. Among them, the teacher phase and the new learner phase are mainly designed for global exploration. To maximize the global search, the cadre–mass relationship strategy and the tutor mechanism are mainly applied in the algorithm's exploration phase to address the TLBO algorithm's premature convergence and tendency to fall into local optima. The two mechanisms perform exploration and exploitation on top of the global optimization carried out in the teacher phase and the new learner phase, helping to achieve a proper balance between exploration and exploitation and ultimately to find the solution of optimal quality.
The remainder of this paper is organized as follows. Section 2 provides a brief overview of related work. The mathematical model of the proposed algorithm is presented in Section 3. Section 4 presents simulations, experiments, and an analysis of the results. Mechanical engineering design problems are described in Section 5. Finally, Section 6 concludes the paper and outlines future research directions.

2. Related Work

The basic TLBO algorithm mainly consists of two roles: teachers and students. Teachers are professionals who engage in teaching and imparting knowledge and skills. They work in schools or other educational institutions, using various teaching methods to convey knowledge and skills and to guide students' growth and development. Teachers need not only solid subject knowledge but also good pedagogical and communication skills to teach students effectively. Students are individuals who receive education in schools or other educational institutions. They acquire new knowledge and improve their comprehensive abilities by learning from teachers, from high-achieving peers, and from each other in various ways.

2.1. Teacher Phase

In the teacher phase, the teaching process is simulated to find the solution with the best objective function value in the class. Using Equation (1), a new potential solution is generated.
T_j^{new} = X_j^i + \mathrm{rand} \times \left( X_j^{best} - T_F \times X_j^{avg} \right)   (1)
where X_j^i and T_j^{new} represent the positions of the individual before and after learning; X_j^{best} denotes the teacher's position, which corresponds to the best individual in the population; X_j^{avg} signifies the average level of the search agents in the population; T_F is a teaching factor that determines the change in the value of X_j^{avg}; and rand represents a random number between 0 and 1. The value of T_F can be either 1 or 2, determined randomly as T_F = \mathrm{round}(1 + \mathrm{rand}_j(0,1)\{2-1\}).
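As a concrete illustration, the teacher-phase update of Equation (1) can be sketched in Python (the paper's experiments used MATLAB; the function name and array layout here are our own, and minimization is assumed):

```python
import numpy as np

def teacher_phase(X, fitness, TF=None, rng=np.random.default_rng()):
    """Teacher-phase update of Equation (1); a minimal sketch.

    X       : (N, D) array of learner positions.
    fitness : (N,) objective values (minimization assumed).
    """
    best = X[np.argmin(fitness)]         # teacher = best individual, X^best
    mean = X.mean(axis=0)                # class average level, X^avg
    if TF is None:
        TF = np.round(1 + rng.random()) # teaching factor, randomly 1 or 2
    r = rng.random(X.shape)              # rand in [0, 1)
    return X + r * (best - TF * mean)
```

With T_F = 1 the update pulls every learner toward the gap between the teacher and the class mean; with T_F = 2 the mean is weighted more heavily, encouraging larger corrective steps.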

2.2. Learner Phase

Besides gaining knowledge from the teacher, learners can also enhance their understanding through interaction. During mutual learning, a learner can acquire knowledge from a randomly chosen peer with a higher grade. The learner strategy can be expressed as follows:
S_j^i = \begin{cases} X_j^i + \mathrm{rand} \times (X_j^{rand} - X_j^i), & f(X_j^{rand}) < f(X_j^i) \\ X_j^i + \mathrm{rand} \times (X_j^i - X_j^{rand}), & \text{otherwise} \end{cases}   (2)
where S_j^i is the updated position of the student and X_j^{rand} is the position of a learner randomly selected from the class.
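The learner phase can likewise be sketched as follows (a Python sketch with our own naming; `f` is the objective function and minimization is assumed):

```python
import numpy as np

def learner_phase(X, f, rng=np.random.default_rng()):
    """Learner phase of Equation (2): each student interacts with one
    randomly chosen peer and moves toward it if the peer is better,
    away from it otherwise. A sketch assuming minimization."""
    N, D = X.shape
    S = X.copy()
    for i in range(N):
        j = rng.integers(N - 1)          # random peer index, j != i
        j = j + 1 if j >= i else j
        r = rng.random(D)
        if f(X[j]) < f(X[i]):            # peer is better: learn from it
            S[i] = X[i] + r * (X[j] - X[i])
        else:                            # peer is worse: move away
            S[i] = X[i] + r * (X[i] - X[j])
    return S
```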

3. The Proposed TLOCTO

In this section, the inspiration for the proposed method is first discussed. Then, the mathematical model is provided.

3.1. Inspiration

The development concept of the teaching–learning algorithm stems from the teaching procedure. This article introduces an improved teaching–learning algorithm, the teaching–learning optimization algorithm based on the cadre–mass relationship with the tutor mechanism (TLOCTO), inspired mainly by conventional teaching practices such as teacher instruction, learning from excellent students, appointing class cadres, and extracurricular tutoring. In the remainder of Section 3, the proposed teaching and learning behaviors are mathematically modeled to develop an optimizer with satisfactory search performance.

3.2. New Learner Strategy

During the learner phase, a member learns from random solutions in the class to generate potential new members. Students have various preferences for learning modes, such as formal communication, group discussions, or presentations, and can learn from both teachers and classmates. They also have the flexibility to adjust their learning mode according to their specific situation. Therefore, this paper introduces a new learning mode, which is described in Equation (3), to enhance the diversity of students’ learning methods.
S_j^{new} = \begin{cases} X_j^i + \mathrm{rand} \times \left[ \left(1 - \frac{t}{T}\right) \times X_j^{rand} + \frac{t}{T} \times T_j^{new} - X_j^i \right], & f(X_j^{rand}) < f(X_j^i) \\ X_j^i + \mathrm{rand} \times \left[ \left(1 - \frac{t}{T}\right) \times X_j^{rand} + \frac{t}{T} \times T_j^{new} - X_j^{rand} \right], & \text{otherwise} \end{cases}   (3)
where S_j^{new} is the position of the student and X_j^{rand} is the position of a learner randomly selected from the class; t and T are the current and maximum numbers of iterations, respectively.
In this scenario, the learning mode initially emphasizes random learning to achieve population diversity and global search. As time goes on, students increasingly rely on communication with teachers to accomplish local exploration.
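Under the same assumptions (minimization, our own helper names), the time-weighted blend of Equation (3) might be sketched as follows; note how the weight t/T shifts the learning target from the random peer toward the teacher position as iterations proceed:

```python
import numpy as np

def new_learner_phase(X, T_new, f, t, T, rng=np.random.default_rng()):
    """New learner strategy of Equation (3): the learning target blends a
    random peer, weighted by (1 - t/T), with the teacher position T_new,
    weighted by t/T. A sketch assuming minimization."""
    N, D = X.shape
    S = np.empty_like(X)
    w = t / T                            # grows toward 1 over the run
    for i in range(N):
        j = rng.integers(N - 1)          # random peer index, j != i
        j = j + 1 if j >= i else j
        r = rng.random(D)
        target = (1 - w) * X[j] + w * T_new
        if f(X[j]) < f(X[i]):
            S[i] = X[i] + r * (target - X[i])
        else:
            S[i] = X[i] + r * (target - X[j])
    return S
```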

3.3. Assistance Phase

This stage is bifurcated into two distinct strategies: the cadre–mass relationship strategy and the tutor mechanism, both of which have been primarily devised to facilitate regional exploration. Nevertheless, an overabundance of mechanisms can potentially undermine the efficacy of selective development. Hence, in this scheme, both mechanisms are applied to refine the solution obtained previously, and the strategy yielding the smaller objective value is finally selected as the solution for the optimal position.

3.3.1. Cadre–Mass Relationship Strategy

If learners learn from everyone around them, an inclusive approach to learning, it will inevitably affect their learning efficiency, and this impact can be either positive or negative depending on the quality and relevance of the information they receive. Therefore, in a class, teachers generally appoint students with good academic performance, strong learning ability, and high learning efficiency as class cadres to play exemplary roles. Class cadres serve as a bridge between students and teachers, and their cooperation with teachers helps the teaching achieve good results. Student cadres are the core of the student group: they are charismatic, influential, and cohesive, and can unite students to become outstanding. The process can be described by Equation (4).
S_j^{Cadres} = \varpi \times T_j^{new} - \mathrm{rand} \times H_1 \times S_j^{new} - H_2 \times \mathrm{Levy}(D) + \mathrm{rand} \times H_1   (4)
where S_j^{Cadres} represents the student cadres, the current solution at iteration j; \varpi represents the quality function used to balance the search strategy, which can be calculated by Equation (5); H_1 represents the influencing factor in the search for class cadres, defined by Equation (6); and H_2 is a value decreasing from \varepsilon to 0, indicating that knowledge acquisition increases through multiple channels, such as teaching by teachers, learning led by class cadres, and discussion among students. This efficiency is defined by Equation (7).
\varpi = H_1^{\frac{t}{(1-T)^2}}   (5)
H_1 = \mu \times \mathrm{rand} - 1   (6)
H_2 = \varepsilon \times \left( 1 - \frac{t}{T} \right)   (7)
Levy ( D ) is the Levy selection distribution function [20], defined by Equation (8).
\mathrm{Levy}(D) = s \times \frac{\tau \times \omega}{|\nu|^{1/\varphi}}   (8)
where \mu and \varepsilon are numbers randomly selected from [1, 10], s is 0.01, and \tau and \nu are random numbers in the range [0, 1]. \omega is defined by Equation (9).
\omega = \frac{\Gamma(1+\kappa) \times \sin\left(\frac{\pi \kappa}{2}\right)}{\Gamma\left(\frac{1+\kappa}{2}\right) \times \kappa \times 2^{\frac{\kappa-1}{2}}}   (9)
where the κ value is 1.5.
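The Levy step of Equations (8) and (9) can be sketched as below. We assume here that the exponent φ in Equation (8) equals κ = 1.5, which the surrounding text implies but does not state explicitly; the function name is our own.

```python
import math
import numpy as np

def levy(D, s=0.01, kappa=1.5, rng=np.random.default_rng()):
    """Levy(D) of Equation (8) with the weight omega of Equation (9)."""
    omega = (math.gamma(1 + kappa) * math.sin(math.pi * kappa / 2)
             / (math.gamma((1 + kappa) / 2) * kappa * 2 ** ((kappa - 1) / 2)))
    tau = rng.random(D)                  # tau ~ U[0, 1]
    nu = rng.random(D)                   # nu  ~ U[0, 1]
    return s * tau * omega / np.abs(nu) ** (1 / kappa)
```

The heavy-tailed steps produced this way occasionally take large jumps, which is what lets the cadre–mass update of Equation (4) escape local optima.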

3.3.2. Tutor Mechanism

The tutor mechanism is a new mechanism by which students seek out teachers in other teaching institutions in order to improve their knowledge. Applying this principle to the algorithm expands the original search space and discovers agents with better performance outside the original population. This greatly increases the likelihood of finding an optimal solution, enriching population diversity and enhancing the algorithm's search capability. In each generation, let S_j^i \in [L_G^l, L_G^u] be a point in a D-dimensional space, where the bound vectors L_G^l = [L_1^l, L_2^l, \ldots, L_D^l]^T and L_G^u = [L_1^u, L_2^u, L_3^u, \ldots, L_D^u]^T are updated as:
\begin{cases} L_j^l = \mathrm{Min}\left( \left[ S_{1j}^{new}, S_{2j}^{new}, S_{3j}^{new}, \ldots, S_{nj}^{new} \right] \right) \\ L_j^u = \mathrm{Max}\left( \left[ S_{1j}^{new}, S_{2j}^{new}, S_{3j}^{new}, \ldots, S_{nj}^{new} \right] \right) \end{cases}   (10)
where j = 1, 2, 3, \ldots, D. Defining S^{Tu} = [S_1^{Tu}, S_2^{Tu}, S_3^{Tu}, \ldots, S_N^{Tu}]^T as the tutor-mechanism individual at the current generation, its jth component is given by Equation (11).
S_j^{Tu} = S_j^{new} + y_{n+1}^{\alpha} \times \left( y_{n+1}^{\beta} \times \left( L_j^u + L_j^l - S_j^{new} \right) - S_j^{new} \right)   (11)
where y_{n+1}^{\alpha} and y_{n+1}^{\beta} are defined by Equation (12).
y_{n+1} = \mathrm{mod}\left( \delta \times y_n \left( 1 - y_n \times \cos(\arccos(y_n)) \right) \times 10^4, 1 \right)   (12)
where y_n \in [0, 1] and \delta is the control parameter, \delta \in [0, +\infty). From the above formula, it follows that y_{n+1} \in (0, 1).
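The chaotic sequence of Equation (12) is easy to iterate directly; a small sketch follows (function name ours). Since cos(arccos(y)) = y for y in [0, 1], the update is equivalent to y_{n+1} = mod(δ · y_n · (1 − y_n²) · 10⁴, 1):

```python
import math

def chaotic_sequence(y0, delta, n):
    """Iterate the map of Equation (12) n times, returning [y0, y1, ..., yn].
    For y in [0, 1], cos(acos(y)) == y, so every iterate stays in [0, 1)."""
    seq = [y0]
    y = y0
    for _ in range(n):
        y = math.fmod(delta * y * (1 - y * math.cos(math.acos(y))) * 1e4, 1.0)
        seq.append(y)
    return seq
```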
This article presents the TLOCTO algorithm, outlined in Algorithm 1, and Figure 1 illustrates the flow chart of TLOCTO. The algorithm comprises six steps, which are summarized as follows:
1. Initialization: the maximum number of iterations (Tmax) is set to 500 and the population size (N) to 30, then all agents are randomly initialized.
2. Fitness evaluation: fitness values are calculated and evaluated for each agent based on the objective function.
3. Position update: in the teacher phase, learner phase, cadre–mass mechanism, and tutor mechanism, the solution is continuously optimized by updating the position of each agent.
4. Boundary check: it must be ensured that each agent's position remains within the boundaries of the search space.
5. Global best update: the current best solution and its fitness value are updated in each iteration.
6. Termination: the above steps are repeated until the termination condition is met, and the global best solution and its fitness value are output.
Algorithm 1: The framework of the TLOCTO algorithm
1: Initialize the solution’s positions of population N randomly;
2: Set the maximum number of iterations (Tmax) and other parameters;
3: For t = 1 to Tmax do;
4: Calculate the average of the population;
5: Select the teacher;
6: Calculate the fitness function for the given solutions using Equation (1);
7: Find the best solution position and fitness value so far;
8: For i = 1 to N do;
9: Update the individual position using Equation (2);
10: Update the individual position using Equation (3);
11: Compare and select the one that generates the smaller value as the update position;
12: For i = 1 to N do;
13: Update the individual position using Equation (4);
14: Update the individual position using Equation (11);
15: Calculate the fitness values Fitness(S_j^{Cadres}) and Fitness(S_j^{Tu});
16: If Fitness(S_j^{Cadres}) < Fitness(S_j^{Tu}), then
17: Obtain the best position and the best fitness value of the current iteration using Equation (4);
18: else;
19: Obtain the best position and the best fitness value of the current iteration using Equation (11);
20: end if;
21: end for;
22: end for;
23: Return the best solution.
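The greedy choice in lines 15-20 of Algorithm 1, keeping for each agent whichever of the cadre–mass candidate and the tutor candidate scores better, can be sketched as follows (the function name is our own; `obj` is the objective function and minimization is assumed):

```python
import numpy as np

def assistance_select(S_cadres, S_tu, obj):
    """For each agent, keep the candidate (cadre-mass vs. tutor) with the
    smaller fitness, as in lines 15-20 of Algorithm 1."""
    f_c = np.apply_along_axis(obj, 1, S_cadres)  # Fitness(S^Cadres)
    f_t = np.apply_along_axis(obj, 1, S_tu)      # Fitness(S^Tu)
    keep_cadres = f_c < f_t
    return np.where(keep_cadres[:, None], S_cadres, S_tu)
```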

3.4. Computational Complexity Analysis

The computational complexity of the TLOCTO algorithm primarily depends on three factors: the initialization process, the evaluation of the fitness function, and the updating of the solutions. The complexity of the initialization process is O( N ), where N represents the size of the population. The fitness function depends on the problem, so we will not discuss it here. Finally, the complexity of updating the position is indicated by O ( T × N ) + O ( T × N × D ) , where T represents the number of iterations and D represents the number of parameters (dimensions) in the problem. Therefore, the computational complexity of the proposed TLOCTO is O ( N × ( T × D + 1 ) ) .

4. Experimental Results and Detailed Analyses

In this section, we use two types of benchmark functions to investigate the effectiveness of the TLOCTO algorithm. After the qualitative evaluation of the TLOCTO algorithm through standard benchmark functions (the details of these functions can be found in Appendix A) [21], the algorithm was then subjected to testing to assess its efficacy in terms of solving numerical problems. Moreover, the performance of the TLOCTO algorithm in tackling intricate numerical problems was evaluated using the CEC 2020 test functions [22]. The TLOCTO was compared to several renowned optimizers, including the artificial bee colony (ABC) [23], genetic algorithm (GA) [7], particle swarm optimization (PSO) [10], grey wolf optimizer (GWO) [24], coati optimization algorithm (COA) [25], and dung beetle optimizer (DBO) [26]. Moreover, TLOCTO was also compared to teaching–learning-based optimization (TLBO) [13]. It is worth noting that these algorithms not only cover recently proposed technologies such as the GWO, DBO, and COA algorithms, but also include classical optimization methods such as the ABC, GA, PSO, and TLBO algorithms. To ensure a fair experimental comparison, the comparison algorithms were executed under identical test conditions. The numerical experiments were conducted using MATLAB 2021b on a computer equipped with an AMD Ryzen 5 3550H CPU @ 2.10 GHz and 16 GB RAM, running a 64-bit Windows 10 operating system. The Wilcoxon rank-sum test [27] was designed with reference to the settings of PlatEMO [28]. The population size was set to N = 30, the maximum number of iterations to T = 500, and 30 independent runs were performed. Additionally, the parameter settings of the other counterparts followed their own original settings. It is important to note that the tabular data in this paper are presented in scientific notation.

4.1. Qualitative Evaluation

In this section, a qualitative analysis of the TLOCTO algorithm is described, focusing on its convergence behavior, exploration, exploitation, and population diversity. This paper aims to evaluate the performance and characteristics of TLOCTO from a qualitative perspective.

4.1.1. Convergence Behavior Analysis

The benchmark test functions were used to verify TLOCTO's convergence behavior and to analyze the experimental results, as shown in Figure 2. Six functions were chosen for analysis, forming a five-column image. In the first column, the two-dimensional shape of the benchmark function was displayed, helping us to understand the complexity of the problem. The second column showed black points as search agents and a red dot as the global optimum. These agents concentrated near the optimal solution, but were distributed across the search space, demonstrating TLOCTO's effective exploration ability. The third column presented the average change in fitness values among search agents, starting high and decreasing rapidly, indicating the algorithm's potential to discover the best value. The fourth column showed the search agent's trajectory, transitioning from fluctuation to stability. This signified the shift from global exploration to local exploitation and facilitated the process of reaching the global optimal value. Lastly, the fifth column illustrated the convergence curve of the TLOCTO algorithm. In unimodal functions, the curve is smooth and continuously declining, indicating the algorithm's ability to find the optimal solution. For multimodal functions, the convergence curve descends in steps, indicating the algorithm's capability to consistently escape local optima and reach the global optimum.

4.1.2. Population Diversity Analysis

Population diversity is significant for the performance of metaheuristic algorithms. It was analyzed by conducting experiments on the suite of classical benchmark functions to compare the population diversity of TLBO and TLOCTO. Population diversity was computed using the moment of inertia I_C given in Equation (13), while c^d, given in Equation (14), indicates the mass centre from which the dispersion of the population is measured in each iteration; the parameter x_i^d denotes the value of the dth dimension of the ith search agent at iteration t [29].
I_C(t) = \sum_{i=1}^{N} \sum_{d=1}^{D} \left( x_i^d(t) - c^d(t) \right)^2   (13)
c^d(t) = \frac{1}{N} \sum_{i=1}^{N} x_i^d(t)   (14)
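For reference, Equations (13) and (14) amount to the following computation (a sketch with our own naming):

```python
import numpy as np

def inertia_diversity(X):
    """Moment-of-inertia diversity I_C of Equation (13): total squared
    distance of the N agents from the population mass centre of
    Equation (14). X is an (N, D) array of positions."""
    c = X.mean(axis=0)                  # mass centre c^d, Equation (14)
    return float(((X - c) ** 2).sum())  # I_C, Equation (13)
```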
The experimental results are presented in Figure 3, indicating that TLOCTO exhibited a higher level of population diversity compared to TLBO throughout all iterations. This significant discovery suggests that TLOCTO can comprehensively explore the search space and effectively avoid premature convergence and stagnation in local solutions. As a result, it can be inferred that TLOCTO possesses a higher potential to attain the global optimal solution.

4.1.3. Exploration and Exploitation Analysis

By dividing the search process into two stages, namely, exploration and exploitation [30], the metaheuristic algorithm displays its essential characteristic. Balancing these two stages can effectively enhance the algorithm’s efficiency. To achieve this objective, we utilized Equation (15) and Equation (16) to determine the percentages of exploration and exploitation, respectively. Additionally, the dimension-wise diversity measurement was calculated using Equation (17), denoted as Div(t). It should be noted that D i v m a x , representing the maximum diversity in the entire iteration process, was also taken into consideration [29].
\mathrm{Exploration}(\%) = \frac{\mathrm{Div}(t)}{\mathrm{Div}_{max}} \times 100   (15)
\mathrm{Exploitation}(\%) = \frac{\left| \mathrm{Div}(t) - \mathrm{Div}_{max} \right|}{\mathrm{Div}_{max}} \times 100   (16)
\mathrm{Div}(t) = \frac{1}{D} \sum_{d=1}^{D} \frac{1}{N} \sum_{i=1}^{N} \left| \mathrm{median}(x^d(t)) - x_i^d(t) \right|   (17)
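Equations (15)-(17) translate directly into code (function names are ours):

```python
import numpy as np

def div_measure(X):
    """Dimension-wise diversity Div(t) of Equation (17): mean absolute
    deviation of the population from its per-dimension median."""
    med = np.median(X, axis=0)
    return float(np.abs(med - X).mean())

def xpl_xpt(div_t, div_max):
    """Exploration and exploitation percentages, Equations (15)-(16);
    div_max is the maximum Div(t) observed over the whole run."""
    xpl = div_t / div_max * 100.0
    xpt = abs(div_t - div_max) / div_max * 100.0
    return xpl, xpt
```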
Dimensional diversity measurement was used in [30] to evaluate the balance of each scheme, and it was concluded that the optimal balance for most functions was over 90% exploitation and less than 10% exploration out of 42 function tests. By observing Figure 4, it becomes apparent that the TLOCTO algorithm displayed exceptional outcomes, surpassing 90% exploitation in all of these assessment functions. Such an observation suggests that the TLOCTO algorithm has effectively attained a desirable equilibrium between the processes of exploration and exploitation within the search domain, thereby resulting in an optimal performance. Specifically, the methodology employed in the TLOCTO algorithm incorporates a dynamic balance between the exploratory and exploitative aspects, which subsequently yields remarkable benefits in terms of circumventing local optima and precluding premature convergence.

4.2. Performance Indicators

This paper utilizes two statistical tools, specifically the mean value and the standard deviation [31]. The mathematical formulations for these tools are presented as follows:
\mathrm{AVG} = \frac{1}{P} \sum_{i=1}^{P} f_i, \qquad \mathrm{STD} = \sqrt{ \frac{1}{P-1} \sum_{i=1}^{P} \left( f_i - \mathrm{AVG} \right)^2 }
where P is the number of optimization experiments, AVG stands for average, and f i represents the optimal value in each independent run.
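In code, these two statistics correspond to the sample mean and the sample standard deviation (the P − 1 denominator, i.e. ddof = 1); a brief sketch:

```python
import numpy as np

def avg_std(best_per_run):
    """AVG and STD over the P best values from P independent runs; the
    STD uses the P - 1 (sample) denominator, i.e. ddof=1."""
    f = np.asarray(best_per_run, dtype=float)
    return f.mean(), f.std(ddof=1)
```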
Furthermore, the Wilcoxon signed rank-sum test [27] was used, with a significance level of α = 0.05, to assess the disparity between TLOCTO and its rivals in this study. Specifically, the minimum fitness function value was recorded for each of the 30 independent runs. Subsequently, the p-value associated with the comparison between the TLOCTO algorithm and each competitor was computed separately using MATLAB. Ultimately, the decision regarding significant distinctions between algorithms relied on comparing the p-value against the significance level α. The symbols "+", "−", and "=" indicate that TLOCTO performed significantly better than, significantly worse than, or not significantly differently from the respective comparison algorithm.

4.3. TLOCTO’s Performance on the Benchmark Test Functions

This section shows and analyzes the test results of the TLOCTO algorithm and its comparison algorithms on the benchmark test functions.

4.3.1. Comparison Using the Benchmark Test Functions

Functions F1-F7 exhibited a unimodal nature, featuring a solitary global optimum, which facilitated the evaluation of exploitation capability in the meta-heuristic algorithms under examination. The comprehensive analysis presented in Table 1 indicates that, except for F5 and F7, the TLOCTO algorithm consistently surpassed all other compared algorithms when considering the standard values associated with unimodal functions. This consistent superiority establishes the TLOCTO algorithm as the most potent and proficient optimizer among the seven tested unimodal functions, thereby providing compelling evidence of its exceptional exploitation ability.
Multimodal functions, as opposed to unimodal functions, possess multiple local optima that increase exponentially with the problem size, which is determined by the number of design variables. Consequently, these test problems hold great value in assessing the exploration capability of an optimization algorithm. According to the data presented in Table 1, TLOCTO surpassed other optimizers in terms of both average values and standard deviations across 13 out of 16 test functions, specifically multimodal and fixed-dimension multimodal functions F8-F23. Furthermore, the TLOCTO algorithm closely approximated the specified standard values in nearly all functions, with the exception of F12-F14, showcasing its exceptional accuracy and stability. The outstanding performance of TLOCTO on these multimodal functions unequivocally validates its remarkable ability to navigate through and avoid local optima. This ability can be attributed to the utilization of a cadre–mass relationship strategy and tutor mechanism within the algorithm, effectively guiding it towards the global optimum.
Based on the Wilcoxon signed rank-sum test results shown in Table 1 (last line), TLOCTO outperformed GA, GWO, PSO, and TLBO with more than 20 significantly better results ("+"). Additionally, it surpassed COA, DBO, and ABC with 12, 16, and 18 superior results, respectively. In essence, the average goodness percentage of TLOCTO across the 23 benchmark functions was 81.99% ( \left( \sum_{i=1}^{7} (+)_i \right) / (23 \times 7) \times 100\% ). Overall, the results indicate that the cadre–mass relationship strategy and the tutor mechanism effectively enhance TLBO's optimization capability.

4.3.2. Analysis of Convergence Behavior

TLOCTO’s search agents were observed to extensively explore promising regions of the design space and exploit the most optimal solution. In the initial stages of optimization, the search agents underwent abrupt changes before gradually converging. This ensured the convergence of a population-based algorithm to a point in the search space. Figure 5 presents the convergence curves for TLOCTO and comparative algorithms on some of the 23 standard benchmark functions. These curves reflect the convergence rate, which intuitively measures the improvement in exploration and exploitation. The results imply that TLOCTO competes well with other state-of-the-art meta-heuristic algorithms and exhibits superior convergence accuracy, as is consistent with Table 1.

4.4. TLOCTO’s Performance on CEC 2020 Test Functions

The benchmark experiments above demonstrated the TLOCTO algorithm's superior performance on relatively simple optimization problems. For the next evaluation, we adopted the CEC 2020 test suite [22], a challenging benchmark designed to assess performance on complex optimization problems. This suite includes a variety of hybrid and composition functions that enable a more demanding evaluation of the TLOCTO algorithm. The benchmark functions, as displayed in Table 2, are categorized into four groups: a unimodal function (F1), multimodal shifted and rotated functions (F2–F4), hybrid functions (F5–F7), and composition functions (F8–F10). To assess the TLOCTO algorithm and the comparison algorithms, we used the AVG, STD, and Wilcoxon signed rank-sum test according to the experimental setup rules outlined in Section 4. The test results on CEC2020 are presented in Table 3 and Table 4 for problem dimensions D = 5 and D = 10, respectively. For each function, the smallest average value is highlighted in bold font.

4.4.1. Analysis of CEC 2020 Test Function

According to the data presented in Table 3, the TLOCTO algorithm exhibited remarkable performance on the five-dimensional test problems. Notably, among the 10 CEC 2020 test functions, TLOCTO yielded the minimum fitness values for seven of them, encompassing the unimodal function (F1), the multimodal shifted and rotated functions (F2–F4), and the hybrid functions (F5–F7). This implies that TLOCTO holds a considerable advantage over the other algorithms in solving non-composition functions. Furthermore, based on the last row of Table 3, TLOCTO surpassed ABC, GWO, PSO, GA, COA, DBO, and TLBO in 7, 10, 10, 8, 10, 7, and 7 cases, respectively, out of 10 functions. Its average goodness percentage amounted to 84.29%, computed as (∑_{i=1}^{7} (+)_i)/(10 × 7) × 100%. Consequently, the TLOCTO algorithm attained superior outcomes compared to the other algorithms.
TLOCTO also outperformed the other methods on the 10-dimensional problems, as shown by its high success rates in Table 4. Wilcoxon rank-sum tests revealed TLOCTO's superiority over ABC, GWO, PSO, COA, DBO, and TLBO on more than 7 of the 10 functions each, with an average goodness percentage of 88.57%. Additionally, Table 4 indicates that TLOCTO achieved the highest ranking on nine test functions, making it a highly promising solver for the CEC 2020 10-dimensional problems. Overall, a comprehensive analysis of these results confirms TLOCTO's exceptional performance compared to the other algorithms.

4.4.2. Analysis of Convergence Behavior

Figure 6 and Figure 7 display the convergence plots of TLOCTO and the comparison algorithms on the CEC 2020 (5D and 10D) test functions, respectively. The vertical axis of these plots indicates the best fitness value of each function, while the horizontal axis represents the number of function evaluations. These plots show that TLOCTO achieved a faster descent rate and superior optimization ability across all test functions. This can be attributed to TLOCTO's incorporation of the teaching and learning stage strategy, the cadre–mass relationship strategy, and the tutor mechanism, which together strike a better balance between global exploration and local exploitation. By combining the curriculum teaching strategy with the teacher and learner phases, the TLOCTO algorithm attains powerful search capabilities; these are further enhanced by the cadre–mass mechanism and tutor mechanism, which facilitate local exploitation and ensure the algorithm's stability, robustness, and high convergence accuracy. These mechanisms are evidenced in the plots by the consistent descent and rapid convergence of the red line segments. The results demonstrate that the proposed TLOCTO algorithm excels in convergence and global optimization capability, further confirming its superiority and robustness.

4.4.3. Analysis of Scalability

The experimental results above demonstrate that the TLOCTO algorithm is highly competitive. To further illustrate its superiority, this section compares it with two other types of algorithms on the CEC2020 test function suite (Dim = 20). The autonomous teaching–learning-based optimization algorithm (ATLBO) [32], the improved teaching–learning-based optimization algorithm (ITLBO) [33], the teaching–learning-studying-based optimization algorithm (TLSBO) [34], and the improved TLBO with a logarithmic spiral and triangular mutation (LNTLBO) [17] are all recent variants of the TLBO algorithm, while the self-adaptive spherical search algorithm (SASS) [35] is the champion algorithm of the CEC2020 test function suite competition [36]. Based on the results presented in Table 5, TLOCTO demonstrated outstanding performance and achieved most of the optimal results, which highlights its significant research value. Furthermore, as observed in Figure 8, TLOCTO consistently maintained the best convergence speed and exhibited excellent stability. Therefore, across the range of tested functions, TLOCTO can be regarded as a reliable choice.

5. Mechanical Engineering Application Problems

Most real-world engineering optimization problems are non-linear, with complex constraints [37]. Hence, this section tests the optimization performance of the TLOCTO algorithm, developed for practical applications, by using three renowned mechanical engineering problems. These problems feature multiple equality and inequality constraints, which assess the capability of TLOCTO in terms of optimizing real-world and constrained problems from a constraint handling perspective.
When solving engineering design constraint optimization problems of varying complexity, a penalty function [38] can be used to handle solutions that do not satisfy the constraint conditions, with the following formula:
F′(x) = F(x) + z,  z = ∑_{i=1}^{m} λ (g(i))² H(i)
where z is the penalty term, m is the number of constraints in the problem, λ is the penalty constant, and H(i) indicates whether the i-th constraint condition is violated.
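As a concrete illustration, the penalty scheme above can be sketched as follows. This is a generic sketch, not the authors' released code; the toy objective, constraint, and the penalty constant λ = 10¹⁰ are illustrative choices:

```python
def penalized_fitness(f, constraints, x, lam=1e10):
    """Static penalty: F'(x) = f(x) + sum(lam * g_i(x)^2 * H_i),
    where H_i = 1 if the inequality g_i(x) <= 0 is violated, else 0."""
    z = 0.0
    for g in constraints:
        gi = g(x)
        h = 1.0 if gi > 0 else 0.0  # indicator of constraint violation
        z += lam * gi**2 * h
    return f(x) + z

# Toy example: minimize x^2 subject to g(x) = 1 - x <= 0 (i.e. x >= 1).
f = lambda x: x**2
g = lambda x: 1.0 - x
print(penalized_fitness(f, [g], 2.0))        # feasible point: no penalty → 4.0
print(penalized_fitness(f, [g], 0.0) > 1e9)  # infeasible point: heavy penalty → True
```

Because infeasible solutions receive fitness values many orders of magnitude worse than any feasible one, the optimizer is steered back into the feasible region without modifying the search operators themselves.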
Furthermore, all of the algorithm parameters were set to the same values as in the above experiments, and the population size and maximum iteration numbers for all problems were 30 and 500, respectively. The following sections provide detailed descriptions of the three engineering problems and present all comparative results of these algorithms.

5.1. Planetary Gear Train Design Optimization Problem

The main goal of this problem is to minimize the maximum error in the gear ratios [39] used in automobiles. To this end, the total number of gear teeth is computed for an automatic planetary transmission system; the formulation is shown in Table A4. The problem includes nine decision variables in total: the first six are the numbers of teeth of the gears (N1, N2, N3, N4, N5, and N6), marked 1–6 in the figure, which can only take integer values, and the remaining three are discrete variables, namely the gear modules (m1 and m2) and the number of planet gears (P).
The implementation results of TLOCTO and competitor algorithms in terms of achieving the optimal solution for the planetary gear train design optimization problem are reported in Table 6. In addition, Table 7 provides the corresponding constraint values of these algorithms for this problem.
According to the analysis in reference [40], the planetary gear train design optimization problem is one of the most difficult problems in mechanical engineering. As can be seen from Table 6, when solving this problem, the TLOCTO algorithm not only performed the best out of all the comparison algorithms, but also obtained the solution closest to the known optimal value of the planetary gear design problem. We can therefore say that the TLOCTO algorithm solves complex mechanical engineering problems with high efficiency and accuracy.

5.2. Robot Gripper Problem

For this problem [41], we utilized the difference between the minimum and maximum force generated by the robot gripper as the objective function. The problem comprises seven design variables and a set of nonlinear design constraints associated with the robot. Mathematically, the robot gripper is a single-degree-of-freedom planar closed-loop mechanism, and the schematic diagram of this problem is simplified to a mechanism composed of three connecting rods and four joints, as shown in Table A5, where Y_min represents the minimal gripping object dimension (50 mm), Y_G signifies the maximum displacement range of the gripper ends (150 mm), Y_max denotes the maximal gripping object dimension (100 mm), Z_max indicates the maximum gripper actuator displacement (100 mm), and P represents the gripper's actuating force (100 N).
The implementation results of TLOCTO and the competitor algorithms in achieving the optimal solution for the robot gripper problem are reported in Table 8. In addition, Table 9 provides the corresponding constraint values of these algorithms for this problem.
As can be seen from Table 8, when solving the robot gripper problem, the best value obtained by the TLOCTO algorithm was the closest to the best value reported in the literature, which indicates that TLOCTO can solve complex mechanical engineering problems. It is worth noting that neither the recently proposed COA and DBO algorithms nor the well-established PSO and ABC algorithms solved the robot gripper problem well. This intuitively demonstrates the superiority of the TLOCTO algorithm and its ability to handle the complexity of the robot gripper problem.

5.3. Speed Reducer Design Problem

In this case, the purpose is to minimize the weight of the speed reducer [42]. Seven variables are considered: the face width (x1), the module of the teeth (x2), a discrete design variable representing the number of teeth in the pinion (x3), the length of the first shaft between bearings (x4), the length of the second shaft between bearings (x5), the diameter of the first shaft (x6), and the diameter of the second shaft (x7). The resulting optimization problem is shown in Table A6. The implementation results of TLOCTO and its competitor algorithms in achieving the optimal solution for the speed reducer design problem are reported in Table 10. In addition, Table 11 gives the corresponding constraint values of these algorithms for this problem.
As shown in Table 10, the TLOCTO algorithm achieved good results on the speed reducer design. It is worth noting that the optimal value of the TLBO algorithm also reached the standard value, but its average value and standard deviation were still worse than those of the TLOCTO algorithm. The reason is that, compared with the problems in the preceding two sections, the speed reducer design problem is comparatively uncomplicated, which allows the TLBO algorithm to show some of its strengths. Nonetheless, this further illustrates that the TLOCTO algorithm exhibits stronger robustness and accuracy.

6. Conclusions and Future Works

This study proposes a teaching–learning optimization algorithm based on the cadre–mass relationship strategy with the tutor mechanism (TLOCTO), an efficient optimizer for complex optimization problems. It significantly enhances the algorithm's exploration and exploitation capabilities by combining innovative strategies, namely the new learner strategy, the cadre–mass relationship strategy, and the tutor mechanism. Among these, the cadre–mass strategy plays a crucial role in the TLOCTO algorithm by effectively improving its global exploration capability, while the tutor mechanism effectively addresses the tendency of the original algorithm to fall into local optima. Through the coordination of these mechanisms, the TLOCTO algorithm demonstrated outstanding performance, providing high-quality solutions on 53 different test functions and showcasing its adaptability and robustness on complex optimization problems. Specifically, a comparative analysis was conducted between the TLOCTO algorithm and seven other optimization algorithms on 23 benchmark test functions and the CEC2020 test functions (Dim = 5, 10), demonstrating its remarkable search performance in terms of convergence speed, solution accuracy, and stability. Even when compared with new variants of TLBO and the champion algorithm of the CEC2020 test suite, TLOCTO still demonstrated strong competitiveness and superior performance on the CEC2020 (D = 20) test functions. Furthermore, the TLOCTO algorithm successfully solved three mechanical engineering design problems, confirming its superiority over other optimizers.
The implementation of TLOCTO opens up numerous possibilities for future research. One avenue is to develop a variant of TLOCTO specifically tailored for multi-objective optimization problems and to execute it accordingly. Additionally, we plan to utilize TLOCTO in order to address various practical issues, such as bionic robotics, task assignment for multiple agents, data clustering, and feature selection, among others.

Author Contributions

Conceptualization, X.W. and F.W.; methodology, X.W. and X.J.; software, F.W. and X.J.; validation, X.J. and F.W.; formal analysis, F.W.; investigation, X.W. and X.J.; resources, S.L.; writing—original draft preparation, X.W.; writing—review and editing, X.W.; supervision, F.W.; project administration, S.L.; funding acquisition, S.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported in part by the National Natural Science Foundation of China (52275480), National Key Research and Development Plan Project (2020YFB1713300), Project support of Guizhou Provincial Science and Technology Department (Guizhou Science and Technology Center [2023]02), and National Key Research and Development Plan Project (2020YFB1713304).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The source codes of the TLOCTO are publicly available at https://ww2.mathworks.cn/matlabcentral/fileexchange/133382-tlocto.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

The set of benchmark test functions implemented in the experiments is described in Table A1, Table A2 and Table A3; the functions are classified as unimodal (Table A1), multimodal (Table A2), and fixed-dimension multimodal (Table A3).
Table A1. Unimodal benchmark functions.

| Function | Dimensions | Range | fmin |
|---|---|---|---|
| $f_1(x)=\sum_{i=1}^{n} x_i^2$ | 30/50/100 | [−100, 100] | 0 |
| $f_2(x)=\sum_{i=1}^{n}\lvert x_i\rvert+\prod_{i=1}^{n}\lvert x_i\rvert$ | 30/50/100 | [−10, 10] | 0 |
| $f_3(x)=\sum_{i=1}^{n}\left(\sum_{j=1}^{i} x_j\right)^2$ | 30/50/100 | [−100, 100] | 0 |
| $f_4(x)=\max_i\{\lvert x_i\rvert,\ 1\le i\le n\}$ | 30/50/100 | [−100, 100] | 0 |
| $f_5(x)=\sum_{i=1}^{n-1}\left[100\left(x_{i+1}-x_i^2\right)^2+\left(x_i-1\right)^2\right]$ | 30/50/100 | [−30, 30] | 0 |
| $f_6(x)=\sum_{i=1}^{n}\left(\lfloor x_i+0.5\rfloor\right)^2$ | 30/50/100 | [−100, 100] | 0 |
| $f_7(x)=\sum_{i=1}^{n} i x_i^4+\mathrm{random}[0,1)$ | 30/50/100 | [−1.28, 1.28] | 0 |
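For reference, the first and fifth of these standard benchmarks (Sphere and Rosenbrock) can be implemented directly. This is a generic sketch of the well-known function definitions, not code from the authors' released implementation:

```python
def sphere(x):
    """f1: Sphere function, global minimum 0 at x = (0, ..., 0)."""
    return sum(xi**2 for xi in x)

def rosenbrock(x):
    """f5: Rosenbrock function, global minimum 0 at x = (1, ..., 1)."""
    return sum(100.0 * (x[i + 1] - x[i]**2)**2 + (x[i] - 1.0)**2
               for i in range(len(x) - 1))

print(sphere([0.0] * 30))      # → 0.0
print(rosenbrock([1.0] * 30))  # → 0.0
```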
Table A2. Multimodal benchmark functions.

| Function | Dimensions | Range | fmin |
|---|---|---|---|
| $F_8(x)=\sum_{i=1}^{n}-x_i\sin\left(\sqrt{\lvert x_i\rvert}\right)$ | 30/50/100 | [−500, 500] | −418.9829 × d |
| $F_9(x)=\sum_{i=1}^{n}\left[x_i^2-10\cos\left(2\pi x_i\right)+10\right]$ | 30/50/100 | [−5.12, 5.12] | 0 |
| $F_{10}(x)=-20\exp\left(-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n}x_i^2}\right)-\exp\left(\frac{1}{n}\sum_{i=1}^{n}\cos\left(2\pi x_i\right)\right)+20+e$ | 30/50/100 | [−32, 32] | 0 |
| $F_{11}(x)=\frac{1}{4000}\sum_{i=1}^{n}x_i^2-\prod_{i=1}^{n}\cos\left(\frac{x_i}{\sqrt{i}}\right)+1$ | 30/50/100 | [−600, 600] | 0 |
| $F_{12}(x)=\frac{\pi}{n}\left\{10\sin^2(\pi y_1)+\sum_{i=1}^{n-1}(y_i-1)^2\left[1+10\sin^2(\pi y_{i+1})\right]+(y_n-1)^2\right\}+\sum_{i=1}^{n}u(x_i,10,100,4)$ | 30/50/100 | [−50, 50] | 0 |
| $F_{13}(x)=0.1\left\{\sin^2(3\pi x_1)+\sum_{i=1}^{n}(x_i-1)^2\left[1+\sin^2(3\pi x_i+1)\right]+(x_n-1)^2\left[1+\sin^2(2\pi x_n)\right]\right\}+\sum_{i=1}^{n}u(x_i,5,100,4)$ | 30/50/100 | [−50, 50] | 0 |
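Two of these multimodal benchmarks, Rastrigin (F9) and Ackley (F10), are simple to implement from their standard definitions; this generic sketch is provided for orientation only:

```python
import math

def rastrigin(x):
    """F9: Rastrigin function, global minimum 0 at the origin."""
    return sum(xi**2 - 10.0 * math.cos(2.0 * math.pi * xi) + 10.0 for xi in x)

def ackley(x):
    """F10: Ackley function, global minimum 0 at the origin."""
    n = len(x)
    return (-20.0 * math.exp(-0.2 * math.sqrt(sum(xi**2 for xi in x) / n))
            - math.exp(sum(math.cos(2.0 * math.pi * xi) for xi in x) / n)
            + 20.0 + math.e)

print(rastrigin([0.0] * 30))            # → 0.0
print(abs(ackley([0.0] * 30)) < 1e-12)  # → True
```

Both functions have an exponentially growing number of local optima away from the origin, which is exactly the trap-avoidance behavior these tests probe.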
Table A3. Fixed-dimension multimodal benchmark functions.

| Function | Dimensions | Range | fmin |
|---|---|---|---|
| $F_{14}(x)=\left(\frac{1}{500}+\sum_{j=1}^{25}\frac{1}{j+\sum_{i=1}^{2}(x_i-a_{ij})^6}\right)^{-1}$ | 2 | [−65.536, 65.536] | 0.998 |
| $F_{15}(x)=\sum_{i=1}^{11}\left[a_i-\frac{x_1(b_i^2+b_i x_2)}{b_i^2+b_i x_3+x_4}\right]^2$ | 4 | [−5, 5] | 0.0003 |
| $F_{16}(x)=4x_1^2-2.1x_1^4+\frac{1}{3}x_1^6+x_1x_2-4x_2^2+4x_2^4$ | 2 | [−5, 5] | −1.0316 |
| $F_{17}(x)=\left(x_2-\frac{5.1}{4\pi^2}x_1^2+\frac{5}{\pi}x_1-6\right)^2+10\left(1-\frac{1}{8\pi}\right)\cos x_1+10$ | 2 | [−5, 5] | 0.398 |
| $F_{18}(x)=\left[1+(x_1+x_2+1)^2\left(19-14x_1+3x_1^2-14x_2+6x_1x_2+3x_2^2\right)\right]\times\left[30+(2x_1-3x_2)^2\left(18-32x_1+12x_1^2+48x_2-36x_1x_2+27x_2^2\right)\right]$ | 2 | [−2, 2] | 3 |
| $F_{19}(x)=-\sum_{i=1}^{4}c_i\exp\left(-\sum_{j=1}^{3}a_{ij}(x_j-p_{ij})^2\right)$ | 3 | [0, 1] | −3.86 |
| $F_{20}(x)=-\sum_{i=1}^{4}c_i\exp\left(-\sum_{j=1}^{6}a_{ij}(x_j-p_{ij})^2\right)$ | 6 | [0, 1] | −3.32 |
| $F_{21}(x)=-\sum_{i=1}^{5}\left[(X-a_i)(X-a_i)^T+c_i\right]^{-1}$ | 4 | [0, 10] | −10.1532 |
| $F_{22}(x)=-\sum_{i=1}^{7}\left[(X-a_i)(X-a_i)^T+c_i\right]^{-1}$ | 4 | [0, 10] | −10.4028 |
| $F_{23}(x)=-\sum_{i=1}^{10}\left[(X-a_i)(X-a_i)^T+c_i\right]^{-1}$ | 4 | [0, 10] | −10.5364 |
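As an example of a fixed-dimension function, the six-hump camel-back function (F16) can be written out directly from its standard definition; this sketch checks it against one of its two known global minimizers, (0.0898, −0.7126):

```python
def six_hump_camel(x1, x2):
    """F16: six-hump camel-back function; two global minima with value ≈ -1.0316."""
    return (4.0 * x1**2 - 2.1 * x1**4 + x1**6 / 3.0
            + x1 * x2 - 4.0 * x2**2 + 4.0 * x2**4)

# One of the two known global minimizers:
print(abs(six_hump_camel(0.0898, -0.7126) + 1.0316) < 1e-3)  # → True
```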
Table A4. Planetary gear train design optimization problem.

Minimize:
$f(\bar{x})=\max_k \lvert i_k - i_{0k}\rvert,\ k=\{1,2,\dots,R\}$
where $i_1=\frac{N_6}{N_4}$, $i_{01}=3.11$; $i_2=\frac{N_6(N_1N_3+N_2N_4)}{N_1N_3(N_6-N_4)}$, $i_{02}=1.84$; $i_R=-\frac{N_2N_6}{N_1N_3}$, $i_{0R}=-3.11$;
$\bar{x}=(N_1,N_2,N_3,N_4,N_5,N_6,p,m_1,m_2)$.

With bounds: $p\in\{3,4,5\}$, $m_1,m_2\in\{1.75,2.0,2.25,2.5,2.75,3.0\}$, $17\le N_1\le 96$, $14\le N_2\le 54$, $14\le N_3\le 51$, $17\le N_4\le 46$, $14\le N_5\le 51$, $48\le N_6\le 124$, $N_i=$ integer.

Subject to:
$g_1(\bar{x})=m_3(N_6+2.5)-D_{\max}\le 0$,
$g_2(\bar{x})=m_1(N_1+N_2)+m_1(N_2+2)-D_{\max}\le 0$,
$g_3(\bar{x})=m_3(N_4+N_5)+m_3(N_5+2)-D_{\max}\le 0$,
$g_4(\bar{x})=\lvert m_1(N_1+N_2)-m_3(N_6-N_3)\rvert-m_1-m_3\le 0$,
$g_5(\bar{x})=-(N_1+N_2)\sin(\pi/p)+N_2+2+\delta_{22}\le 0$,
$g_6(\bar{x})=-(N_6-N_3)\sin(\pi/p)+N_3+2+\delta_{33}\le 0$,
$g_7(\bar{x})=-(N_4+N_5)\sin(\pi/p)+N_5+2+\delta_{55}\le 0$,
$g_8(\bar{x})=(N_3+N_5+2+\delta_{35})^2-(N_6-N_3)^2-(N_4+N_5)^2+2(N_6-N_3)(N_4+N_5)\cos(2\pi/p-\beta)\le 0$,
$g_9(\bar{x})=N_4-N_6+2N_5+2\delta_{56}+4\le 0$,
$g_{10}(\bar{x})=2N_3-N_6+N_4+2\delta_{34}+4\le 0$,
$h_1(\bar{x})=\frac{N_6-N_4}{p}=$ integer,
where $\beta=\cos^{-1}\frac{(N_4+N_5)^2+(N_6-N_3)^2-(N_3+N_5)^2}{2(N_6-N_3)(N_4+N_5)}$, $\delta_{22}=\delta_{33}=\delta_{55}=\delta_{35}=\delta_{56}=0.5$, $D_{\max}=220$.

Diagram: Structure of the planetary gear train design [43].
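The gear-ratio error objective of Table A4 can be sketched as follows. The candidate tooth counts used here are illustrative values within the stated bounds, not the optimal design reported in Table 6:

```python
def gear_ratio_error(N1, N2, N3, N4, N5, N6):
    """Objective of Table A4: worst-case deviation of the three
    transmission ratios from their target values (3.11, 1.84, -3.11)."""
    i1 = N6 / N4
    i2 = N6 * (N1 * N3 + N2 * N4) / (N1 * N3 * (N6 - N4))
    iR = -N2 * N6 / (N1 * N3)
    targets = (3.11, 1.84, -3.11)
    return max(abs(i - i0) for i, i0 in zip((i1, i2, iR), targets))

# Illustrative (not optimized) tooth counts within the stated bounds:
err = gear_ratio_error(40, 21, 14, 19, 16, 69)
print(err >= 0.0)  # the objective is a non-negative error to be minimized
```

The integrality of the tooth counts and the discrete modules are what make this problem a hard mixed-integer design task; the constraints g1–g10 and h1 would be handled via the penalty scheme of Section 5.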
Table A5. Robot gripper problem.

Minimize:
$f(\bar{x})=\max_z F_k(\bar{x},z)-\min_z F_k(\bar{x},z)$

With bounds:
$0\le e\le 50$, $100\le c\le 200$, $10\le f,a,b\le 150$, $1\le\delta\le 3.14$, $100\le l\le 300$.

Subject to:
$g_1(\bar{x})=-Y_{\min}+y(\bar{x},Z_{\max})\le 0$,
$g_2(\bar{x})=-y(\bar{x},Z_{\max})\le 0$,
$g_3(\bar{x})=Y_{\max}-y(\bar{x},0)\le 0$,
$g_4(\bar{x})=y(\bar{x},0)-Y_G\le 0$,
$g_5(\bar{x})=l^2+e^2-(a+b)^2\le 0$,
$g_6(\bar{x})=b^2-(a-e)^2-(l-Z_{\max})^2\le 0$,
$g_7(\bar{x})=Z_{\max}-l\le 0$,
where $\alpha=\cos^{-1}\frac{a^2+g^2-b^2}{2ag}+\phi$, $g=\sqrt{e^2+(z-l)^2}$, $\phi=\tan^{-1}\frac{e}{l-z}$, $\beta=\cos^{-1}\frac{b^2+g^2-a^2}{2bg}-\phi$, $y(\bar{x},z)=2\left(f+e+c\sin(\beta+\delta)\right)$, $F_k=\frac{Pb\sin(\alpha+\beta)}{2c\cos\alpha}$.

Diagram: Force distribution and geometrical variables of the gripper mechanism [44].
Table A6. Speed reducer design problem.

Minimize:
$f(\bar{x})=0.7854x_1x_2^2\left(3.3333x_3^2+14.9334x_3-43.0934\right)-1.508x_1\left(x_6^2+x_7^2\right)+7.477\left(x_6^3+x_7^3\right)+0.7854\left(x_4x_6^2+x_5x_7^2\right)$

With bounds: $2.6\le x_1\le 3.6$, $0.7\le x_2\le 0.8$, $17\le x_3\le 28$, $7.3\le x_4\le 8.3$, $7.3\le x_5\le 8.3$, $2.9\le x_6\le 3.9$, $5\le x_7\le 5.5$.

Subject to:
$g_1(\bar{x})=-x_1x_2^2x_3+27\le 0$,
$g_2(\bar{x})=-x_1x_2^2x_3^2+397.5\le 0$,
$g_3(\bar{x})=-x_2x_3x_6^4x_4^{-3}+1.93\le 0$,
$g_4(\bar{x})=-x_2x_3x_7^4x_5^{-3}+1.93\le 0$,
$g_5(\bar{x})=\frac{\sqrt{16.91\times 10^6+\left(745x_4x_2^{-1}x_3^{-1}\right)^2}}{0.1x_6^3}-1100\le 0$,
$g_6(\bar{x})=\frac{\sqrt{157.5\times 10^6+\left(745x_5x_2^{-1}x_3^{-1}\right)^2}}{0.1x_7^3}-850\le 0$,
$g_7(\bar{x})=x_2x_3-40\le 0$,
$g_8(\bar{x})=-x_1x_2^{-1}+5\le 0$,
$g_9(\bar{x})=x_1x_2^{-1}-12\le 0$,
$g_{10}(\bar{x})=1.5x_6-x_4+1.9\le 0$,
$g_{11}(\bar{x})=1.1x_7-x_5+1.9\le 0$.

Diagram: Speed reducer design problem [45].
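The weight objective of Table A6 is straightforward to evaluate. The sketch below checks it against a commonly reported near-optimal design for this benchmark, x ≈ (3.5, 0.7, 17, 7.3, 7.7153, 3.3505, 5.2867) with weight ≈ 2994.4; these reference values come from the wider literature on the problem, not from Table 10:

```python
def reducer_weight(x1, x2, x3, x4, x5, x6, x7):
    """Speed reducer objective of Table A6: total weight of the gearbox."""
    return (0.7854 * x1 * x2**2 * (3.3333 * x3**2 + 14.9334 * x3 - 43.0934)
            - 1.508 * x1 * (x6**2 + x7**2)
            + 7.477 * (x6**3 + x7**3)
            + 0.7854 * (x4 * x6**2 + x5 * x7**2))

# Commonly reported near-optimal design for this benchmark:
w = reducer_weight(3.5, 0.7, 17, 7.3, 7.715320, 3.350541, 5.286654)
print(2993.0 < w < 2996.0)  # → True
```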

References

  1. Abualigah, L.; Diabat, A.; Mirjalili, S.; Abd Elaziz, M.; Gandomi, A.H. The Arithmetic Optimization Algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609. [Google Scholar] [CrossRef]
  2. Dhiman, G. SSC: A Hybrid Nature-Inspired Meta-Heuristic Optimization Algorithm for Engineering Applications. Knowl.-Based Syst. 2021, 222, 106926. [Google Scholar] [CrossRef]
  3. Yuan, Y.; Ren, J.; Wang, S.; Wang, Z.; Mu, X.; Zhao, W. Alpine Skiing Optimization: A New Bio-Inspired Optimization Algorithm. Adv. Eng. Softw. 2022, 170, 103158. [Google Scholar] [CrossRef]
  4. Meidani, K.; Mirjalili, S.; Barati Farimani, A. Online Metaheuristic Algorithm Selection. Expert Syst. Appl. 2022, 201, 117058. [Google Scholar] [CrossRef]
  5. Shen, Y.X.; Zeng, C.H.; Wang, X.Y. A Novel Sine Cosine Algorithm for Global Optimization. In Proceedings of the 2021 5th International Conference on Computer Science and Artificial Intelligence, Beijing, China, 4–6 December 2021; pp. 202–208. [Google Scholar] [CrossRef]
  6. Abd Elaziz, M.; Ewees, A.A.; Neggaz, N.; Ibrahim, R.A.; Al-Qaness, M.A.A.; Lu, S. Cooperative Meta-Heuristic Algorithms for Global Optimization Problems. Expert Syst. Appl. 2021, 176, 114788. [Google Scholar] [CrossRef]
  7. Audet, C.; Hare, W. Genetic Algorithms. In Springer Series in Operations Research and Financial Engineering; Springer: Berlin/Heidelberg, Germany, 2017; pp. 57–73. [Google Scholar] [CrossRef]
  8. Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  9. Xiang, Y.; Gong, X.G. Efficiency of Generalized Simulated Annealing. Phys. Rev. E 2000, 62, 4473–4476. [Google Scholar] [CrossRef] [PubMed]
  10. Eberhart, R.; Kennedy, J. A New Optimizer Using Particle Swarm Theory. In Proceedings of the MHS’95, Sixth International Symposium on Micro Machine and Human Science, Nagoya, Japan, 4–6 October 1995; IEEE: Piscataway, NJ, USA, 1995; pp. 39–43. [Google Scholar]
  11. Trojovský, P.; Dehghani, M.; Trojovská, E.; Milkova, E. Green Anaconda Optimization: A New Bio-Inspired Metaheuristic Algorithm for Solving Optimization Problems. Comput. Model. Eng. Sci. 2023, 136, 1527–1573. [Google Scholar] [CrossRef]
  12. Chen, Z.; Francis, A.; Li, S.; Liao, B.; Xiao, D.; Ha, T.T.; Li, J.; Ding, L.; Cao, X. Egret Swarm Optimization Algorithm: An Evolutionary Computation Approach for Model Free Optimization. Biomimetics 2022, 7, 144. [Google Scholar] [CrossRef]
  13. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching-Learning-Based Optimization: A Novel Method for Constrained Mechanical Design Optimization Problems. Comput. Des. 2011, 43, 303–315. [Google Scholar] [CrossRef]
  14. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching-Learning-Based Optimization: An Optimization Method for Continuous Non-Linear Large Scale Problems. Inf. Sci. 2012, 183, 1–15. [Google Scholar] [CrossRef]
  15. Rao, R.V.; Savsani, V.J.; Balic, J. Teaching-Learning-Based Optimization Algorithm for Unconstrained and Constrained Real-Parameter Optimization Problems. Eng. Optim. 2012, 44, 1447–1462. [Google Scholar] [CrossRef]
  16. Toĝan, V. Design of Planar Steel Frames Using Teaching-Learning Based Optimization. Eng. Struct. 2012, 34, 225–232. [Google Scholar] [CrossRef]
  17. Zhang, Z.; Huang, H.; Huang, C.; Han, B. An Improved TLBO with Logarithmic Spiral and Triangular Mutation for Global Optimization. Neural Comput. Appl. 2019, 31, 4435–4450. [Google Scholar] [CrossRef]
  18. Zhang, M.; Pan, Y.; Zhu, J.; Chen, G. ABC-TLBO: A Hybrid Algorithm Based on Artificial Bee Colony and Teaching-Learning-Based Optimization. In Proceedings of the 2018 37th Chinese Control Conference (CCC), Wuhan, China, 25–27 July 2018; pp. 2410–2417. [Google Scholar] [CrossRef]
  19. Kumar, Y.; Singh, P.K. A Chaotic Teaching Learning Based Optimization Algorithm for Clustering Problems. Appl. Intell. 2019, 49, 1036–1062. [Google Scholar] [CrossRef]
  20. Houssein, E.H.; Saad, M.R.; Hashim, F.A.; Shaban, H.; Hassaballah, M. Lévy Flight Distribution: A New Metaheuristic Algorithm for Solving Engineering Optimization Problems. Eng. Appl. Artif. Intell. 2020, 94, 103731. [Google Scholar] [CrossRef]
  21. Yao, X.; Liu, Y.; Lin, G. Evolutionary Programming Made Faster. IEEE Trans. Evol. Comput. 1999, 3, 82–102. [Google Scholar] [CrossRef]
  22. Bolufe-Rohler, A.; Chen, S. A Multi-Population Exploration-Only Exploitation-Only Hybrid on CEC-2020 Single Objective Bound Constrained Problems. In Proceedings of the 2020 IEEE Congress on Evolutionary Computation (CEC), Glasgow, UK, 19–24 July 2020. [Google Scholar] [CrossRef]
  23. Yu, X.; Chen, W.; Zhang, X. An Artificial Bee Colony Algorithm for Solving Constrained Optimization Problems. In Proceedings of the 2018 2nd IEEE Advanced Information Management, Communicates, Electronic and Automation Control Conference (IMCEC), Xi’an, China, 25–27 May 2018; pp. 2663–2666. [Google Scholar] [CrossRef]
  24. Agarwal, A.; Chandra, A.; Shalivahan, S.; Singh, R.K. Grey Wolf Optimizer: A New Strategy to Invert Geophysical Data Sets. Geophys. Prospect. 2018, 66, 1215–1226. [Google Scholar] [CrossRef]
  25. Dehghani, M.; Montazeri, Z.; Trojovská, E.; Trojovský, P. Coati Optimization Algorithm: A New Bio-Inspired Metaheuristic Algorithm for Solving Optimization Problems. Knowl. Based Syst. 2023, 259, 110011. [Google Scholar] [CrossRef]
  26. Xue, J.; Shen, B. Dung Beetle Optimizer: A New Meta-Heuristic Algorithm for Global Optimization; Springer: New York, NY, USA, 2022; ISBN 0123456789. [Google Scholar]
  27. Mann, H.B.; Whitney, D.R. On a Test of Whether One of Two Random Variables Is Stochastically Larger than the Other. Ann. Math. Stat. 1947, 18, 50–60. [Google Scholar] [CrossRef]
  28. Tian, Y.; Cheng, R.; Zhang, X.; Jin, Y. PlatEMO: A MATLAB Platform for Evolutionary Multi-Objective Optimization. Comput. Res. Repos. 2017, 12, 73–87. [Google Scholar]
  29. Nadimi-Shahraki, M.H.; Zamani, H. DMDE: Diversity-Maintained Multi-Trial Vector Differential Evolution Algorithm for Non-Decomposition Large-Scale Global Optimization. Expert Syst. Appl. 2022, 198, 116895. [Google Scholar] [CrossRef]
  30. Morales-Castañeda, B.; Zaldívar, D.; Cuevas, E.; Fausto, F.; Rodríguez, A. A Better Balance in Metaheuristic Algorithms: Does It Exist? Swarm Evol. Comput. 2020, 54, 100671. [Google Scholar] [CrossRef]
  31. Lee, D.K.; In, J.; Lee, S. Standard Deviation and Standard Error of the Mean. Korean J. Anesthesiol. 2015, 68, 220–223. [Google Scholar] [CrossRef]
  32. Ge, F.; Hong, L.; Shi, L. An Autonomous Teaching-Learning Based Optimization Algorithm for Single Objective Global Optimization. Int. J. Comput. Intell. Syst. 2016, 9, 506–524. [Google Scholar] [CrossRef]
  33. Ji, X.; Ye, H.; Zhou, J.; Yin, Y.; Shen, X. An Improved Teaching-Learning-Based Optimization Algorithm and Its Application to a Combinatorial Optimization Problem in Foundry Industry. Appl. Soft Comput. J. 2017, 57, 504–516. [Google Scholar] [CrossRef]
  34. Akbari, E.; Ghasemi, M.; Gil, M.; Rahimnejad, A.; Andrew Gadsden, S. Optimal Power Flow via Teaching-Learning-Studying-Based Optimization Algorithm. Electr. Power Compon. Syst. 2022, 49, 584–601. [Google Scholar] [CrossRef]
  35. Kumar, A.; Das, S.; Zelinka, I. A Self-Adaptive Spherical Search Algorithm for Real-World Constrained Optimization Problems. In Proceedings of the GECCO’20: Genetic and Evolutionary Computation Conference, Cancún, Mexico, 8–12 July 2020; pp. 13–14. [Google Scholar] [CrossRef]
  36. Suganthan, P.N.; Ali, M.Z.; Wu, G.; Liang, J.J.; Qu, B.Y. Special Session & Competitions on Real World Single Objective Constrained Optimization. In Proceedings of the CEC-2020, Glasgow, UK, 19–24 July 2020; Volume 2017. [Google Scholar]
  37. Gandomi, A.H.; Yang, X.S.; Alavi, A.H.; Talatahari, S. Bat Algorithm for Constrained Optimization Tasks. Neural Comput. Appl. 2013, 22, 1239–1255. [Google Scholar] [CrossRef]
  38. Li, Y.; Yu, X.; Liu, J. An Opposition-Based Butterfly Optimization Algorithm with Adaptive Elite Mutation in Solving Complex High-Dimensional Optimization Problems. Math. Comput. Simul. 2023, 204, 498–528. [Google Scholar] [CrossRef]
  39. Wan, W.; Zhang, S.; Meng, X. Study on Reliability-Based Optimal Design of Multi-Stage Planetary Gear Train in Wind Power Yaw Reducer. Appl. Mech. Mater. 2012, 215–216, 867–872. [Google Scholar] [CrossRef]
  40. Kumar, A.; Wu, G.; Ali, M.Z.; Mallipeddi, R.; Suganthan, P.N.; Das, S. A Test-Suite of Non-Convex Constrained Optimization Problems from the Real-World and Some Baseline Results. Swarm Evol. Comput. 2020, 56, 100693. [Google Scholar] [CrossRef]
  41. Dörterler, M.; Atila, U.; Durgut, R.; Sahin, I. Analyzing the Performances of Evolutionary Multi-Objective Optimizers on Design Optimization of Robot Gripper Configurations. Turkish J. Electr. Eng. Comput. Sci. 2021, 29, 349–369. [Google Scholar] [CrossRef]
  42. Bayzidi, H.; Talatahari, S.; Saraee, M.; Lamarche, C.P. Social Network Search for Solving Engineering Optimization Problems. Comput. Intell. Neurosci. 2021, 2021, 8548639. [Google Scholar] [CrossRef] [PubMed]
  43. Singh, N.; Kaur, J. Hybridizing Sine–cosine Algorithm with Harmony Search Strategy for Optimization Design Problems. Soft Comput. 2021, 25, 11053–11075. [Google Scholar] [CrossRef]
  44. Hassan, A.; Abomoharam, M. Modeling and Design Optimization of a Robot Gripper Mechanism. Robot. Comput. Integr. Manuf. 2017, 46, 94–103. [Google Scholar] [CrossRef]
  45. Wu, F.; Zhang, J.; Li, S.; Lv, D.; Li, M. An Enhanced Differential Evolution Algorithm with Bernstein Operator and Refracted Oppositional-Mutual Learning Strategy. Entropy 2022, 24, 1205. [Google Scholar] [CrossRef]
Figure 1. Flowchart of TLOCTO.
Figure 2. Convergence behaviors of TLOCTO in the search process.
Figure 3. The population diversity of TLBO and TLOCTO.
Figure 4. The exploration and exploitation of TLOCTO.
Figure 5. Convergence curves of some benchmark functions.
Figure 6. Convergence curves of CEC2020 (5D).
Figure 7. Convergence curves of CEC2020 (10D).
Figure 8. Convergence curves of CEC2020 (20D).
Table 1. Experimental results of 8 algorithms on the benchmark test functions.

| Problem | Metric | TLOCTO | ABC | GWO | PSO | GA | COA | DBO | TLBO |
|---|---|---|---|---|---|---|---|---|---|
| F1 | AVG | 0.0000 × 10^0 | 8.3930 × 10^0 | 8.6133 × 10^−38 | 2.6801 × 10^0 | 1.1609 × 10^0 | 0.0000 × 10^0 | 6.0286 × 10^4 | 3.3163 × 10^−78 |
| F1 | STD | 0.00 × 10^0 | 6.86 × 10^0 | 1.40 × 10^−37 | 1.0384 × 10^0 | 4.16 × 10^−2 | 0.00 × 10^0 | 6.84 × 10^3 | 1.10 × 10^−77 |
| F2 | AVG | 0.0000 × 10^0 | 8.9866 × 10^−1 | 1.5131 × 10^−22 | 4.0965 × 10^0 | 5.8101 × 10^−1 | 1.2046 × 10^−184 | 4.3274 × 10^6 | 6.1081 × 10^−40 |
| F2 | STD | 0.00 × 10^0 | 4.11 × 10^−1 | 1.53 × 10^−22 | 1.0268 × 10^0 | 1.24 × 10^−2 | 0.00 × 10^0 | 1.97 × 10^7 | 5.10 × 10^−40 |
| F3 | AVG | 0.0000 × 10^0 | 2.8314 × 10^4 | 1.6118 × 10^−1 | 1.8046 × 10^2 | 1.6834 × 10^0 | 0.0000 × 10^0 | 1.5840 × 10^0 | 3.5098 × 10^−17 |
| F3 | STD | 0.00 × 10^0 | 8.30 × 10^3 | 6.14 × 10^−1 | 5.52 × 10^1 | 6.02 × 10^−1 | 0.00 × 10^0 | 6.36 × 10^0 | 1.06 × 10^−16 |
| F4 | AVG | 0.0000 × 10^0 | 8.6368 × 10^1 | 2.4439 × 10^−9 | 2.0328 × 10^0 | 2.0040 × 10^−1 | 1.2709 × 10^−181 | 8.5609 × 10^1 | 3.5091 × 10^−33 |
| F4 | STD | 0.00 × 10^0 | 4.76 × 10^0 | 4.91 × 10^−9 | 2.45 × 10^−1 | 0.00 × 10^0 | 0.00 × 10^0 | 4.36 × 10^0 | 3.01 × 10^−33 |
| F5 | AVG | 1.8890 × 10^−4 | 4.0777 × 10^3 | 2.8415 × 10^1 | 9.3311 × 10^2 | 4.2570 × 10^2 | 0.0000 × 10^0 | 2.5716 × 10^1 | 2.6539 × 10^1 |
| F5 | STD | 4.30 × 10^−4 | 5.00 × 10^3 | 7.62 × 10^−1 | 5.27 × 10^2 | 7.91 × 10^2 | 0.00 × 10^0 | 1.78 × 10^−1 | 4.3 × 10^−1 |
| F6 | AVG | 0.0000 × 10^0 | 1.4700 × 10^1 | 6.6667 × 10^−2 | 2.2603 × 10^0 | 3.3333 × 10^−2 | 0.0000 × 10^0 | 0.0000 × 10^0 | 9.7384 × 10^−3 |
| F6 | STD | 0.00 × 10^0 | 1.12 × 10^1 | 2.54 × 10^−1 | 1.03 × 10^0 | 1.83 × 10^−1 | 0.00 × 10^0 | 0.00 × 10^0 | 4.1 × 10^−1 |
| F7 | AVG | 3.6988 × 10^−2 | 2.9306 × 10^−1 | 9.1929 × 10^−2 | 1.7808 × 10^1 | 4.8093 × 10^−2 | 4.8764 × 10^−5 | 6.0628 × 10^−2 | 1.1936 × 10^−3 |
| F7 | STD | 3.12 × 10^−2 | 9.01 × 10^−2 | 5.26 × 10^−2 | 1.78 × 10^1 | 1.50 × 10^−2 | 3.76 × 10^−5 | 5.11 × 10^−2 | 4.26 × 10^−4 |
| F8 | AVG | −1.1790 × 10^4 | −9.1626 × 10^3 | −1.5991 × 10^3 | −6.1198 × 10^3 | −1.1152 × 10^4 | −1.2569 × 10^3 | −8.5886 × 10^3 | −7.4077 × 10^3 |
| F8 | STD | 9.13 × 10^2 | 6.99 × 10^2 | 3.64 × 10^2 | 1.44 × 10^3 | 3.29 × 10^2 | 5.85 × 10^−2 | 2.12 × 10^3 | 1.02 × 10^3 |
| F9 | AVG | 0.0000 × 10^0 | 1.6257 × 10^2 | 0.0000 × 10^0 | 1.7389 × 10^2 | 2.0327 × 10^0 | 0.0000 × 10^0 | 2.9850 × 10^−1 | 1.6721 × 10^1 |
| F9 | STD | 0.00 × 10^0 | 6.13 × 10^1 | 0.00 × 10^0 | 3.71 × 10^1 | 1.21 × 10^0 | 0.00 × 10^0 | 1.63 × 10^0 | 6.23 × 10^0 |
| F10 | AVG | 8.8818 × 10^−16 | 2.6192 × 10^0 | 7.9936 × 10^−15 | 2.6744 × 10^0 | 1.7871 × 10^−1 | 8.8818 × 10^−16 | 8.8818 × 10^−16 | 6.9278 × 10^−15 |
| F10 | STD | 0.00 × 10^0 | 5.65 × 10^−1 | 1.32 × 10^−15 | 5.02 × 10^−1 | 4.16 × 10^−2 | 0.00 × 10^0 | 0.00 × 10^0 | 1.66 × 10^−5 |
F11AVG
STD
0.0000 × 100
0.00 × 100
1.0966 × 100
9.41 × 10−2
0.0000 × 100
0.00 × 100
1.1093 × 10−1
3.73 × 10−2
4.5412 × 10−1
1.20 × 10−1
0.0000 × 100
0.00 × 100
0.0000 × 100
0.00 × 100
3.0152 × 10−6
1.65 × 10−5
F12AVG
STD
7.2295 × 10−7
1.13 × 10−6
4.0123 × 102
3.29 × 102
3.1208 × 100
1.51 × 10−1
4.4793 × 10−2
5.51 × 10−2
4.1303 × 10−2
3.04 × 10−2
1.5705 × 10−32
5.57 × 10−48
2.4235 × 100
6.86 × 10−1
8.6770 × 10−5
3.01 × 10−4
F13AVG
STD
1.8164 × 10−7
2.62 × 10−7
1.6327 × 103
3.62 × 103
2.1180 × 100
2.40 × 10−1
6.3050 × 10−1
2.37 × 10−1
2.3183 × 10−2
8.76 × 10−3
1.3498 × 10−32
5.57 × 10−48
6.0204 × 10−1
4.36 × 10−1
1.9071 × 10−1
1.48 × 10−1
F14AVG
STD
2.0458 × 100
2.50 × 100
1.6202 × 100
1.42 × 100
1.2198 × 101
1.86 × 100
3.0027 × 100
2.51 × 100
9.9800 × 10−1
2.15 × 10−11
9.9800 × 10−1
8.67 × 10−11
1.5218 × 100
1.87 × 100
9.9800 × 10−1
0.00 × 100
F15AVG
STD
3.0979 × 10−4
8.72 × 10−6
1.4522 × 10−3
3.58 × 10−3
7.3193 × 10−3
8.39 × 10−3
9.2132 × 10−4
2.69 × 10−4
4.5409 × 10−3
7.62 × 10−3
4.4093 × 10−4
1.20 × 10−4
7.7156 × 10−4
2.74 × 10−4
3.5310 × 10−4
0.00 × 100
F16AVG
STD
−1.0316 × 100
6.65 × 10−16
−1.0316 × 100
5.53 × 10−16
−1.0235 × 100
7.84 × 10−3
−1.0316 × 100
4.88 × 10−16
−1.0316 × 100
4.94 × 10−7
−1.0316 × 100
1.23 × 10−4
−1.0316 × 100
4.44 × 10−16
−1.0316 × 100
6.71 × 10−15
F17AVG
STD
3.9789 × 10−1
0.00 × 100
3.9789 × 10−1
0.00 × 100
8.1189 × 10−1
4.91 × 10−9
3.9789 × 10−1
0.00 × 100
3.9789 × 10−1
8.38 × 10−7
3.9831 × 10−1
8.62 × 10−4
3.9789 × 10−1
3.24 × 10−16
3.9789 × 10−1
0.00 × 100
F18AVG
STD
3.0000 × 100
1.28 × 10−15
3.0000 × 100
4.24 × 10−15
3.2919 × 100
4.89 × 10−1
3.0000 × 100
6.24 × 10−14
3.0000 × 100
4.84 × 10−6
3.0459 × 100
6.34 × 10−2
3.0000 × 100
1.85 × 10−14
3.0000 × 100
1.39 × 10−15
F19AVG
STD
−3.8628 × 100
2.71 × 10−15
−3.8628 × 100
2.46 × 10−15
−3.5047 × 100
3.57 × 10−1
−3.8628 × 100
1.92 × 10−15
−3.8628 × 100
1.74 × 10−7
−3.8002 × 100
7.97 × 10−2
−3.8615 × 100
2.99 × 10−3
−3.8628 × 100
2.71 × 10−15
F20AVG
STD
−3.3146 × 100
2.79 × 10−2
−3.2744 × 100
5.92 × 10−2
−2.4044 × 100
2.82 × 10−1
−3.2586 × 100
6.03 × 10−3
−3.2744 × 100
5.92 × 10−2
−2.6194 × 100
3.88 × 10−1
−3.2998 × 100
5.17 × 10−2
−3.3100 × 100
3.62 × 10−2
F21AVG
STD
−1.0153 × 101
7.01 × 10−15
−6.8147 × 100
3.68 × 100
−2.4730 × 100
1.12 × 100
−7.056 × 100
3.269 × 100
−6.1443 × 100
3.45 × 100
−1.0153 × 101
7.07 × 10−5
−6.2541 × 100
2.19 × 100
−1.0153 × 101
2.92 × 10−14
F22AVG
STD
−1.0403 × 101
1.32 × 10−15
−7.6146 × 100
3.54 × 100
−1.3810 × 100
1.01 × 100
−8.3590 × 100
3.01 × 100
−7.5984 × 100
3.31 × 100
−1.0403 × 101
4.24 × 10−4
−7.5244 × 100
2.77 × 100
−1.0183 × 101
1.22 × 100
F23AVG
STD
−1.0536 × 101
1.89 × 10−15
−8.5397 × 100
3.38 × 100
−1.3328 × 100
9.82 × 10−1
−9.7550 × 100
2.17 × 100
−6.0831 × 100
3.76 × 100
−1.0536 × 101
8.51 × 10−5
−8.0278 × 100
2.73 × 100
−1.0536 × 101
3.75 × 10−3
(+/−/=)~~0/18/50/20/30/23/01/21/14/12/70/16/71/22/0
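The final (+/−/=) row counts, for each competitor, the functions on which it performs better than, worse than, or equivalently to TLOCTO; the paper derives these counts from the Wilcoxon signed rank-sum test on run-level results. Lacking the per-run data, the bookkeeping can be sketched with a simple comparison of reported means; the helper `tally` and the sample values are illustrative only, not the paper's statistical test.

```python
# Illustrative tally of "+/−/=" counts against a baseline optimizer.
# The paper applies the Wilcoxon signed rank-sum test to run-level data;
# here we only compare reported mean values, which mimics the bookkeeping.

def tally(baseline, rival, tol=0.0):
    plus = minus = equal = 0
    for f in baseline:
        diff = rival[f] - baseline[f]
        if abs(diff) <= tol:
            equal += 1
        elif diff < 0:        # rival reached a lower (better) mean
            plus += 1
        else:                 # baseline (TLOCTO) was better
            minus += 1
    return plus, minus, equal

# Mean values for F1, F9, and F11 taken from Table 1 (all minimization).
tlocto = {"F1": 0.0, "F9": 0.0, "F11": 0.0}
gwo = {"F1": 8.6133e-38, "F9": 0.0, "F11": 0.0}
print(tally(tlocto, gwo))  # GWO: worse on F1, tied on F9 and F11
```

On these three functions the tally is (0, 1, 2); over all 23 functions and with the proper significance test, the same bookkeeping yields the 0/20/3 entry reported for GWO.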
Table 2. Descriptions of the benchmark functions from CEC 2020.

| Function | Name | Range | fmin |
|---|---|---|---|
| F1 (CEC_01) | Shifted and rotated bent cigar function | [−100, 100]^Dim | 100 |
| F2 (CEC_02) | Shifted and rotated Schwefel's function | [−100, 100]^Dim | 1100 |
| F3 (CEC_03) | Shifted and rotated Lunacek bi-Rastrigin function | [−100, 100]^Dim | 700 |
| F4 (CEC_04) | Expanded Rosenbrock's plus Griewangk's function | [−100, 100]^Dim | 1900 |
| F5 (CEC_05) | Hybrid function 1 (N = 3) | [−100, 100]^Dim | 1700 |
| F6 (CEC_06) | Hybrid function 2 (N = 4) | [−100, 100]^Dim | 1600 |
| F7 (CEC_07) | Hybrid function 3 (N = 5) | [−100, 100]^Dim | 2100 |
| F8 (CEC_08) | Composition function 1 (N = 3) | [−100, 100]^Dim | 2200 |
| F9 (CEC_09) | Composition function 2 (N = 4) | [−100, 100]^Dim | 2400 |
| F10 (CEC_10) | Composition function 3 (N = 5) | [−100, 100]^Dim | 2500 |
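The AVG and STD entries reported for these functions in the following tables come from repeated independent runs of each optimizer. A minimal sketch of that protocol, using a stand-in objective and a toy random-search optimizer (both hypothetical, not the paper's TLOCTO nor the official CEC-2020 code), is:

```python
# Hypothetical benchmarking harness: run an optimizer N times on one
# function, record the best value per run, report mean and std. dev.
import random
import statistics

def sphere(x):
    # Stand-in objective; the real study uses the CEC-2020 suite.
    return sum(v * v for v in x)

def random_search(f, dim, evals, lo=-100.0, hi=100.0, seed=0):
    # Toy optimizer used only to make the harness runnable.
    rng = random.Random(seed)
    best = float("inf")
    for _ in range(evals):
        x = [rng.uniform(lo, hi) for _ in range(dim)]
        best = min(best, f(x))
    return best

runs = [random_search(sphere, dim=5, evals=2000, seed=s) for s in range(30)]
avg, std = statistics.mean(runs), statistics.stdev(runs)
```

Thirty independent runs with distinct seeds is the usual convention for tables like Tables 3–5; swapping `random_search` for any real optimizer leaves the harness unchanged.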
Table 3. Comparison results of algorithms on CEC 2020 (5D).

| Problem | Metric | TLOCTO | ABC | GWO | PSO | GA | COA | DBO | TLBO |
|---|---|---|---|---|---|---|---|---|---|
| F1 | AVG | 3.9393 × 10^2 | 3.9926 × 10^3 | 1.0928 × 10^8 | 1.1321 × 10^8 | 4.2118 × 10^3 | 9.5389 × 10^8 | 3.0308 × 10^3 | 3.0856 × 10^5 |
| | STD | 3.16 × 10^2 | 4.21 × 10^3 | 8.11 × 10^7 | 1.88 × 10^8 | 4.17 × 10^3 | 6.45 × 10^8 | 3.67 × 10^3 | 2.95 × 10^5 |
| F2 | AVG | 1.1942 × 10^3 | 1.2459 × 10^3 | 1.8413 × 10^3 | 1.5817 × 10^3 | 1.2611 × 10^3 | 2.1665 × 10^3 | 1.3873 × 10^3 | 1.3663 × 10^3 |
| | STD | 6.82 × 10^1 | 1.37 × 10^2 | 2.24 × 10^2 | 1.66 × 10^2 | 1.37 × 10^2 | 2.07 × 10^2 | 1.49 × 10^2 | 1.11 × 10^2 |
| F3 | AVG | 7.0722 × 10^2 | 7.0849 × 10^2 | 7.3054 × 10^2 | 7.1652 × 10^2 | 7.0796 × 10^2 | 7.5890 × 10^2 | 7.1223 × 10^2 | 7.1670 × 10^2 |
| | STD | 1.44 × 10^0 | 3.20 × 10^0 | 6.96 × 10^0 | 6.43 × 10^0 | 2.22 × 10^0 | 7.34 × 10^0 | 3.56 × 10^0 | 4.33 × 10^0 |
| F4 | AVG | 1.9002 × 10^3 | 1.9006 × 10^3 | 1.9083 × 10^3 | 3.1673 × 10^3 | 1.9003 × 10^3 | 1.3707 × 10^4 | 1.9013 × 10^3 | 1.9008 × 10^3 |
| | STD | 1.02 × 10^−1 | 3.04 × 10^−1 | 3.92 × 10^0 | 5.93 × 10^3 | 1.92 × 10^−1 | 1.38 × 10^4 | 1.15 × 10^0 | 3.40 × 10^−1 |
| F5 | AVG | 1.7051 × 10^3 | 1.7364 × 10^3 | 6.5142 × 10^5 | 7.8483 × 10^3 | 1.7394 × 10^3 | 3.3860 × 10^6 | 2.0135 × 10^3 | 1.8101 × 10^3 |
| | STD | 7.07 × 10^0 | 4.72 × 10^1 | 6.29 × 10^5 | 6.66 × 10^3 | 5.00 × 10^1 | 3.96 × 10^6 | 7.92 × 10^2 | 4.17 × 10^1 |
| F6 | AVG | 1.6008 × 10^3 | 1.6025 × 10^3 | 1.6927 × 10^3 | 1.6455 × 10^3 | 1.6115 × 10^3 | 1.8093 × 10^3 | 1.6055 × 10^3 | 1.6067 × 10^3 |
| | STD | 4.29 × 10^−1 | 7.38 × 10^0 | 8.03 × 10^1 | 5.27 × 10^1 | 3.07 × 10^1 | 1.04 × 10^2 | 1.19 × 10^1 | 6.44 × 10^0 |
| F7 | AVG | 2.1002 × 10^3 | 2.1004 × 10^3 | 2.1816 × 10^3 | 2.1225 × 10^3 | 2.1034 × 10^3 | 2.2411 × 10^3 | 2.1017 × 10^3 | 2.1008 × 10^3 |
| | STD | 3.13 × 10^−1 | 3.01 × 10^−1 | 7.25 × 10^1 | 2.68 × 10^1 | 1.02 × 10^1 | 8.05 × 10^1 | 6.02 × 10^0 | 1.96 × 10^−1 |
| F8 | AVG | 2.2461 × 10^3 | 2.2390 × 10^3 | 2.3204 × 10^3 | 2.2856 × 10^3 | 2.2748 × 10^3 | 2.4921 × 10^3 | 2.2433 × 10^3 | 2.2358 × 10^3 |
| | STD | 5.25 × 10^1 | 4.70 × 10^1 | 3.27 × 10^1 | 6.46 × 10^1 | 5.00 × 10^1 | 1.53 × 10^2 | 4.45 × 10^1 | 2.72 × 10^1 |
| F9 | AVG | 2.5181 × 10^3 | 2.5520 × 10^3 | 2.7130 × 10^3 | 2.6786 × 10^3 | 2.5883 × 10^3 | 2.7258 × 10^3 | 2.5000 × 10^3 | 2.5357 × 10^3 |
| | STD | 5.58 × 10^1 | 8.72 × 10^1 | 8.62 × 10^1 | 9.23 × 10^1 | 1.13 × 10^2 | 6.49 × 10^1 | 2.65 × 10^−4 | 6.33 × 10^0 |
| F10 | AVG | 2.8391 × 10^3 | 2.8458 × 10^3 | 2.8575 × 10^3 | 2.8736 × 10^3 | 2.8394 × 10^3 | 2.9488 × 10^3 | 2.8500 × 10^3 | 2.8247 × 10^3 |
| | STD | 3.78 × 10^1 | 8.65 × 10^0 | 7.03 × 10^0 | 2.68 × 10^1 | 2.14 × 10^1 | 5.51 × 10^1 | 2.01 × 10^1 | 1.18 × 10^1 |
| (+/−/=) | ~ | ~ | 0/7/3 | 0/10/0 | 0/10/0 | 0/8/2 | 0/10/0 | 1/7/2 | 2/7/1 |
Table 4. Comparison results of algorithms on CEC 2020 (10D).

| Problem | Metric | TLOCTO | ABC | GWO | PSO | GA | COA | DBO | TLBO |
|---|---|---|---|---|---|---|---|---|---|
| F1 | AVG | 2.6402 × 10^3 | 3.9681 × 10^3 | 2.0448 × 10^9 | 6.0622 × 10^9 | 1.7744 × 10^4 | 1.5653 × 10^10 | 1.2644 × 10^6 | 1.9777 × 10^8 |
| | STD | 2.96 × 10^3 | 3.59 × 10^3 | 6.82 × 10^8 | 3.51 × 10^9 | 1.73 × 10^4 | 6.10 × 10^9 | 4.50 × 10^6 | 9.19 × 10^7 |
| F2 | AVG | 1.5719 × 10^3 | 2.4585 × 10^3 | 3.0011 × 10^3 | 2.2721 × 10^3 | 1.5736 × 10^3 | 3.6023 × 10^3 | 2.0814 × 10^3 | 2.4658 × 10^3 |
| | STD | 2.81 × 10^2 | 6.75 × 10^2 | 3.01 × 10^2 | 3.37 × 10^2 | 2.25 × 10^2 | 3.34 × 10^2 | 3.09 × 10^2 | 2.50 × 10^2 |
| F3 | AVG | 7.2975 × 10^2 | 7.4313 × 10^2 | 8.0923 × 10^2 | 7.8781 × 10^2 | 7.2642 × 10^2 | 9.0337 × 10^2 | 7.5004 × 10^2 | 8.0925 × 10^2 |
| | STD | 7.97 × 10^0 | 2.08 × 10^1 | 1.42 × 10^1 | 3.24 × 10^1 | 7.30 × 10^0 | 2.78 × 10^1 | 1.86 × 10^1 | 2.81 × 10^1 |
| F4 | AVG | 1.9018 × 10^3 | 1.9031 × 10^3 | 2.2790 × 10^3 | 7.2218 × 10^4 | 1.9023 × 10^3 | 5.2207 × 10^5 | 1.9053 × 10^3 | 1.9102 × 10^3 |
| | STD | 8.77 × 10^−1 | 1.67 × 10^0 | 4.16 × 10^2 | 8.19 × 10^4 | 1.03 × 10^0 | 4.34 × 10^5 | 2.68 × 10^0 | 8.24 × 10^0 |
| F5 | AVG | 2.7587 × 10^3 | 3.0432 × 10^5 | 6.2118 × 10^5 | 7.9091 × 10^5 | 4.5979 × 10^5 | 7.3010 × 10^6 | 1.9503 × 10^4 | 1.1536 × 10^4 |
| | STD | 9.63 × 10^2 | 4.52 × 10^5 | 1.33 × 10^5 | 8.38 × 10^5 | 5.39 × 10^5 | 7.23 × 10^6 | 2.02 × 10^4 | 5.47 × 10^3 |
| F6 | AVG | 1.6706 × 10^3 | 1.7373 × 10^3 | 2.0147 × 10^3 | 2.1041 × 10^3 | 1.7671 × 10^3 | 2.8130 × 10^3 | 1.8091 × 10^3 | 1.7854 × 10^3 |
| | STD | 7.49 × 10^1 | 1.10 × 10^2 | 9.65 × 10^1 | 1.70 × 10^2 | 1.27 × 10^2 | 2.95 × 10^2 | 1.29 × 10^2 | 7.64 × 10^1 |
| F7 | AVG | 2.4853 × 10^3 | 1.1205 × 10^4 | 3.0861 × 10^6 | 6.6925 × 10^5 | 1.4839 × 10^5 | 4.0225 × 10^6 | 7.8715 × 10^3 | 4.9133 × 10^3 |
| | STD | 1.66 × 10^2 | 9.80 × 10^3 | 4.62 × 10^6 | 1.30 × 10^6 | 3.55 × 10^5 | 5.47 × 10^6 | 9.06 × 10^3 | 1.50 × 10^3 |
| F8 | AVG | 2.3051 × 10^3 | 2.3073 × 10^3 | 2.4639 × 10^3 | 2.7308 × 10^3 | 2.3100 × 10^3 | 3.6911 × 10^3 | 2.3115 × 10^3 | 2.4274 × 10^3 |
| | STD | 1.88 × 10^1 | 1.48 × 10^1 | 5.96 × 10^1 | 3.94 × 10^2 | 8.25 × 10^−3 | 5.98 × 10^2 | 2.22 × 10^0 | 1.59 × 10^2 |
| F9 | AVG | 2.7129 × 10^3 | 2.7553 × 10^3 | 2.8194 × 10^3 | 2.8307 × 10^3 | 2.7581 × 10^3 | 2.9661 × 10^3 | 2.7751 × 10^3 | 2.7689 × 10^3 |
| | STD | 8.25 × 10^1 | 1.70 × 10^1 | 1.20 × 10^1 | 1.02 × 10^2 | 1.50 × 10^1 | 1.08 × 10^2 | 3.88 × 10^1 | 4.82 × 10^1 |
| F10 | AVG | 2.9279 × 10^3 | 2.9359 × 10^3 | 3.0253 × 10^3 | 3.1485 × 10^3 | 2.9399 × 10^3 | 3.9653 × 10^3 | 2.9379 × 10^3 | 2.9490 × 10^3 |
| | STD | 2.32 × 10^1 | 2.07 × 10^1 | 4.66 × 10^1 | 1.31 × 10^2 | 2.73 × 10^1 | 3.73 × 10^2 | 6.66 × 10^1 | 1.21 × 10^1 |
| (+/−/=) | ~ | ~ | 0/7/3 | 0/10/0 | 0/9/1 | 1/6/3 | 0/10/0 | 0/10/0 | 0/10/0 |
Table 5. Comparison results of algorithms on CEC 2020 (20D).

| Problem | Metric | TLOCTO | ATLBO | ITLBO | TLSBO | LNTLBO | SASS |
|---|---|---|---|---|---|---|---|
| F1 | Ave | 1.5872 × 10^5 | 1.5285 × 10^6 | 4.2403 × 10^3 | 1.7599 × 10^3 | 8.3267 × 10^9 | 1.6100 × 10^3 |
| | Std | 3.79 × 10^5 | 3.25 × 10^6 | 3.95 × 10^3 | 2.24 × 10^3 | 2.71 × 10^9 | 2.26 × 10^3 |
| F2 | Ave | 3.2654 × 10^3 | 4.7114 × 10^3 | 5.1045 × 10^3 | 5.3031 × 10^3 | 4.4598 × 10^3 | 5.4914 × 10^3 |
| | Std | 1.52 × 10^2 | 6.72 × 10^2 | 4.15 × 10^2 | 2.20 × 10^2 | 5.89 × 10^2 | 2.63 × 10^2 |
| F3 | Ave | 8.6336 × 10^2 | 8.5826 × 10^2 | 8.4488 × 10^2 | 8.2675 × 10^2 | 1.0341 × 10^3 | 8.2943 × 10^2 |
| | Std | 4.33 × 10^1 | 3.80 × 10^1 | 2.88 × 10^1 | 3.89 × 10^1 | 6.54 × 10^1 | 8.71 × 10^0 |
| F4 | Ave | 1.9353 × 10^3 | 1.9204 × 10^3 | 1.9163 × 10^3 | 1.9117 × 10^3 | 1.7668 × 10^4 | 1.9088 × 10^3 |
| | Std | 1.85 × 10^1 | 9.98 × 10^0 | 6.88 × 10^0 | 3.14 × 10^0 | 1.45 × 10^4 | 1.40 × 10^0 |
| F5 | Ave | 4.4176 × 10^4 | 7.2356 × 10^4 | 1.3859 × 10^5 | 1.7059 × 10^5 | 1.6930 × 10^5 | 4.7068 × 10^4 |
| | Std | 3.71 × 10^4 | 5.66 × 10^4 | 1.01 × 10^5 | 1.29 × 10^5 | 2.16 × 10^5 | 4.50 × 10^4 |
| F6 | Ave | 1.6291 × 10^3 | 1.7099 × 10^3 | 1.8043 × 10^3 | 1.9356 × 10^3 | 1.7243 × 10^3 | 2.2605 × 10^3 |
| | Std | 2.31 × 10^−13 | 1.16 × 10^−12 | 1.16 × 10^−12 | 1.16 × 10^−12 | 1.16 × 10^−12 | 1.85 × 10^−12 |
| F7 | Ave | 1.6466 × 10^4 | 2.0766 × 10^4 | 5.2183 × 10^4 | 2.7900 × 10^4 | 8.8333 × 10^4 | 2.1520 × 10^4 |
| | Std | 9.40 × 10^3 | 1.51 × 10^4 | 4.76 × 10^4 | 1.71 × 10^4 | 1.47 × 10^5 | 1.30 × 10^4 |
| F8 | Ave | 2.3061 × 10^3 | 2.3098 × 10^3 | 2.4348 × 10^3 | 2.3072 × 10^3 | 3.8601 × 10^3 | 2.3069 × 10^3 |
| | Std | 3.74 × 10^0 | 9.77 × 10^0 | 7.25 × 10^2 | 1.30 × 10^0 | 8.36 × 10^2 | 9.61 × 10^1 |
| F9 | Ave | 2.8775 × 10^3 | 2.8706 × 10^3 | 2.8565 × 10^3 | 2.8418 × 10^3 | 3.0211 × 10^3 | 2.8633 × 10^3 |
| | Std | 3.13 × 10^1 | 2.41 × 10^1 | 2.78 × 10^1 | 1.52 × 10^1 | 5.75 × 10^1 | 4.19 × 10^1 |
| F10 | Ave | 2.9981 × 10^3 | 3.0151 × 10^3 | 2.9992 × 10^3 | 3.0768 × 10^3 | 3.4933 × 10^3 | 3.0328 × 10^3 |
| | Std | 3.10 × 10^1 | 4.33 × 10^1 | 3.31 × 10^1 | 3.28 × 10^1 | 2.77 × 10^2 | 3.87 × 10^1 |
| (+/−/=) | ~ | ~ | 0/6/4 | 1/6/3 | 1/5/4 | 0/7/3 | 1/5/4 |
Table 6. Comparison results for the planetary gear train design optimization problem.

| Algorithm | Best | Mean | Worst | Std | N1 | N2 | N3 | N4 | N5 | N6 | p | m1 | m2 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| TLOCTO | 0.52325 | 0.53592 | 0.53706 | 0.00364 | 32 | 18 | 15 | 19 | 15 | 69 | 4 | 2 | 2 |
| TLBO | 0.53706 | 0.53877 | 0.55667 | 0.00494 | 22 | 14 | 15 | 17 | 15 | 62 | 3 | 2 | 2 |
| DBO | 0.54846 | 1.8 × 10^20 | 0.77667 | 3.66 × 10^20 | 26 | 14 | 14 | 19 | 14 | 69 | 3 | 1.75 | 1.75 |
| COA | 0.55706 | 7.00 × 10^19 | 0.86074 | 1.44 × 10^20 | 17 | 14 | 20 | 17 | 14 | 62 | 3 | 1.75 | 1.75 |
| GWO | 0.52967 | 0.55229 | 0.71000 | 0.03423 | 47 | 24 | 15 | 21 | 14 | 76 | 3 | 2 | 2 |
| PSO | 0.52624 | 0.54632 | 0.80573 | 0.04942 | 26 | 17 | 22 | 24 | 14 | 87 | 3 | 2.75 | 1.75 |
| ABC | 0.59312 | 0.93813 | 1.76841 | 0.28424 | 56 | 26 | 19 | 28 | 28 | 103 | 3 | 2 | 1.75 |
Table 7. The constraint values of the planetary gear train design optimization problem.

| Algorithm | g1 | g2 | g3 | g4 | g5 | g6 | g7 | g8 | g9 | g10 |
|---|---|---|---|---|---|---|---|---|---|---|
| TLOCTO | −77 | −80 | −118 | −4 | −14.8553 | −20.6838 | −6.54163 | −1165.89 | −15 | −15 |
| TLBO | −91 | −116 | −122 | −18 | −14.6769 | −23.2032 | −10.2128 | −1698.90 | −91 | −116 |
| DBO | −77 | −108 | −122 | −26 | −18.141 | −31.1314 | −12.0788 | −3597.35 | −17 | −17 |
| COA | −91 | −126 | −126 | −18 | −10.3468 | −13.8731 | −10.3468 | −377.447 | −91 | −126 |
| GWO | −63 | −26 | −118 | −16 | −34.9878 | −35.3275 | −13.8109 | −4988.21 | −63 | −26 |
| PSO | −41 | −34 | −112 | −4 | −17.7391 | −31.7917 | −16.4090 | −4383.19 | −41 | −34 |
| ABC | −90 | −48 | 0 | — | −42.5141 | −51.2461 | −17.9974 | −7422.03 | −90 | — |
Table 8. Comparison results for the robot gripper problem.

| Algorithm | Best | Mean | Worst | Std | a | b | c | e | f | l | δ |
|---|---|---|---|---|---|---|---|---|---|---|---|
| TLOCTO | 2.77479 | 3.09241 | 3.31269 | 0.15419 | 149.2512 | 132.6060 | 200.0000 | 16.4597 | 149.9069 | 104.6298 | 2.4493 |
| TLBO | 3.01791 | 3.59001 | 5.98043 | 0.59922 | 143.5842 | 133.7948 | 200.0000 | 9.58430 | 149.9702 | 106.9715 | 2.4607 |
| DBO | 3.31432 | 5.82827 | 9.34913 | 1.43645 | 150.0000 | 149.4977 | 198.8000 | 0.03061 | 7.01180 | 124.6755 | 1.7178 |
| COA | 3.79460 | 2.94 × 10^22 | 3.56 × 10^23 | 7.68 × 10^22 | 147.8050 | 139.9458 | 153.5251 | 7.4652 | 147.8050 | 115.7780 | 2.6540 |
| GWO | 3.32379 | 3.77367 | 4.54004 | 0.30097 | 150.0000 | 140.7627 | 176.0328 | 8.7721 | 149.1059 | 118.9015 | 2.5634 |
| PSO | 3.44092 | 4.16993 | 9.54223 | 1.07760 | 150.0000 | 111.7020 | 199.5793 | 37.1027 | 144.0012 | 129.4336 | 2.7289 |
| ABC | 4.39851 | 8.21286 | 13.19273 | 2.44974 | 147.5731 | 138.0619 | 197.6639 | 6.5379 | 148.0575 | 160.0094 | 2.6131 |
Table 9. The constraint values of the robot gripper problem.

| Algorithm | g1 | g2 | g3 | g4 | g5 | g6 | g7 |
|---|---|---|---|---|---|---|---|
| TLOCTO | −32.4842 | −17.5158 | −45.1971 | −4.8029 | −68225.1644 | −70.6672 | −4.6299 |
| TLBO | −43.0799 | −6.9201 | −31.3405 | −18.6595 | −65404.3490 | −103.5279 | −6.9716 |
| DBO | −49.1101 | −0.8899 | −43.5540 | −6.4460 | −74154.8911 | −750.1439 | −24.6756 |
| COA | −6.3163 | −43.6837 | −20.6581 | −29.3419 | −69340.2484 | −359.3810 | −15.7781 |
| GWO | −21.0329 | −28.9671 | −26.2605 | −23.7395 | −70328.4313 | −488.4525 | −18.9016 |
| PSO | −40.5933 | −9.4067 | −34.8365 | −15.1635 | −50358.2696 | −1134.8062 | −29.4337 |
| ABC | −45.0324 | −4.9676 | −34.9849 | −15.0151 | −55941.6010 | −4430.9795 | −60.0095 |
Table 10. Comparison results for the speed reducer design problem.

| Algorithm | Best | Mean | Worst | Std | x1 | x2 | x3 | x4 | x5 | x6 | x7 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| TLOCTO | 2994.424 | 2994.424 | 2994.424 | 1.85 × 10^−12 | 3.50000 | 0.70000 | 17.0000 | 7.30000 | 7.71532 | 3.35054 | 5.28665 |
| TLBO | 2994.424 | 2994.492 | 2994.870 | 0.096 | 3.50000 | 0.700002 | 17.0000 | 7.30006 | 7.71532 | 3.35054 | 5.28666 |
| DBO | 3032.779 | 3406.531 | 5735.099 | 782.939 | 3.50264 | 0.70000 | 17.0000 | 7.30000 | 7.77305 | 3.35332 | 5.28696 |
| COA | 3060.413 | 4.57 × 10^17 | 1.31 × 10^19 | 2.38 × 10^18 | 3.50022 | 0.70000 | 17.0000 | 7.30000 | 7.89676 | 3.35155 | 5.28631 |
| GWO | 3003.825 | 3011.045 | 3018.471 | 3.854 | 3.50122 | 0.70002 | 17.0001 | 7.77206 | 7.82642 | 3.35226 | 5.28846 |
| PSO | 3007.437 | 3160.023 | 3363.736 | 120.986 | 3.50000 | 0.70001 | 17.0000 | 7.30000 | 8.30215 | 3.35054 | 5.28686 |
| ABC | 2549.639 | 2597.282 | 2635.205 | 20.995 | 5.99485 | 0.70402 | 14.4866 | 7.30748 | 7.90121 | 3.49492 | 5.29177 |
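The best designs in Table 10 can be sanity-checked by evaluating the objective, assuming the standard seven-variable speed reducer (Golinski) weight function; the coefficients below are the textbook formulation, not reproduced from this paper's text.

```python
# Standard speed-reducer (Golinski) weight objective. The constant
# coefficients are the widely used textbook formulation (an assumption
# here, since the paper's own equations are not quoted in this section).
def speed_reducer_weight(x1, x2, x3, x4, x5, x6, x7):
    return (0.7854 * x1 * x2**2 * (3.3333 * x3**2 + 14.9334 * x3 - 43.0934)
            - 1.508 * x1 * (x6**2 + x7**2)
            + 7.4777 * (x6**3 + x7**3)
            + 0.7854 * (x4 * x6**2 + x5 * x7**2))

# Best TLOCTO design vector as reported in Table 10.
w = speed_reducer_weight(3.50000, 0.70000, 17.0000, 7.30000,
                         7.71532, 3.35054, 5.28665)
print(round(w, 3))  # close to the reported best value 2994.424
```

With the rounded variables printed in the table, the recomputed weight agrees with the reported best of 2994.424 to well under one unit, which is a useful consistency check when transcribing such tables.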
Table 11. The constraint values of the speed reducer design problem.

| Algorithm | g1 | g2 | g3 | g4 | g5 | g6 | g7 | g8 | g9 | g10 | g11 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| TLOCTO | −2.16 × 10^0 | −9.81 × 10^1 | −1.93 × 10^0 | −1.83 × 10^1 | −9.35 × 10^−4 | −2.15 × 10^−3 | −2.81 × 10^1 | 0.00 × 10^0 | −7.00 × 10^0 | −3.74 × 10^−1 | −5.00 × 10^−6 |
| TLBO | −2.16 × 10^0 | −9.81 × 10^1 | −1.93 × 10^0 | −1.83 × 10^1 | −1.01 × 10^−3 | −2.67 × 10^−3 | −2.81 × 10^1 | −1.43 × 10^−5 | −7.00 × 10^0 | −3.74 × 10^−1 | −6.00 × 10^−6 |
| DBO | −2.18 × 10^0 | −9.85 × 10^1 | −1.94 × 10^0 | −1.79 × 10^1 | −2.73 × 10^0 | −1.38 × 10^−1 | −2.81 × 10^1 | −3.77 × 10^−3 | −7.00 × 10^0 | −3.70 × 10^−1 | −5.74 × 10^−2 |
| COA | −2.16 × 10^0 | −9.82 × 10^1 | −1.93 × 10^0 | −1.69 × 10^1 | −9.93 × 10^−1 | −1.96 × 10^−1 | −2.81 × 10^1 | −3.14 × 10^−4 | −7.00 × 10^0 | −3.73 × 10^−1 | −1.82 × 10^−1 |
| GWO | −2.17 × 10^0 | −9.83 × 10^1 | −1.27 × 10^0 | −1.75 × 10^1 | −7.98 × 10^−1 | −8.52 × 10^−1 | −2.81 × 10^1 | −1.60 × 10^−3 | −7.00 × 10^0 | −8.44 × 10^−1 | −1.09 × 10^−1 |
| PSO | −2.16 × 10^0 | −9.81 × 10^1 | −1.93 × 10^0 | −1.43 × 10^1 | −7.43 × 10^−4 | −9.41 × 10^−5 | −2.81 × 10^1 | −7.14 × 10^−5 | −7.00 × 10^0 | −3.74 × 10^−1 | −5.87 × 10^−1 |
| ABC | −1.60 × 10^1 | −2.26 × 10^2 | −1.97 × 10^0 | −1.43 × 10^1 | −1.29 × 10^2 | −2.19 × 10^0 | −2.98 × 10^1 | −3.52 × 10^0 | −3.48 × 10^0 | −1.65 × 10^−1 | −1.80 × 10^−1 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Wu, X.; Li, S.; Wu, F.; Jiang, X. Teaching–Learning Optimization Algorithm Based on the Cadre–Mass Relationship with Tutor Mechanism for Solving Complex Optimization Problems. Biomimetics 2023, 8, 462. https://0-doi-org.brum.beds.ac.uk/10.3390/biomimetics8060462