Article

Snow Leopard Optimization Algorithm: A New Nature-Based Optimization Algorithm for Solving Optimization Problems

1 Faculty of Science, University of Hradec Kralove, Rokitanskeho 62, 500 03 Hradec Kralove, Czech Republic
2 Faculty of Education, University of Hradec Kralove, Rokitanskeho 62, 500 03 Hradec Kralove, Czech Republic
3 Faculty of Natural Sciences, Constantine the Philosopher University in Nitra, A. Hlinku 1, 949 74 Nitra, Slovakia
* Author to whom correspondence should be addressed.
Author to whom correspondence should be addressed.
Submission received: 16 October 2021 / Revised: 31 October 2021 / Accepted: 3 November 2021 / Published: 8 November 2021
(This article belongs to the Special Issue Optimization Theory and Applications)

Abstract: Numerous optimization problems defined in different disciplines of science must be solved using effective techniques. Optimization algorithms are a widely used class of methods that can provide suitable solutions to such problems. In this paper, a new nature-based optimization algorithm called the Snow Leopard Optimization Algorithm (SLOA) is designed, which mimics the natural behaviors of snow leopards. SLOA is modeled in four phases: travel routes, hunting, reproduction, and mortality. The phases of the proposed algorithm are described, and the mathematical model of the SLOA is then presented so that it can be implemented on different optimization problems. A standard set of twenty-three objective functions is used to evaluate the ability of the proposed algorithm to provide appropriate solutions, and the optimization results obtained by the SLOA are compared with those of eight other well-known optimization algorithms. The results show that the proposed SLOA has a high ability to solve various optimization problems. Analysis and comparison with the eight other algorithms show that the SLOA provides quasi-optimal solutions that are more appropriate and closer to the global optimum, and that with its better performance it is highly competitive with similar algorithms.

1. Introduction

1.1. Motivation

The goal of optimization is to find the best acceptable answer given the limitations and needs of the problem [1]. A problem may have different solutions, and to compare them and select the optimal one, a function called the objective function is defined. The choice of this function depends on the nature of the problem, and choosing a suitable objective function is one of the most important steps in optimization [2]. From a mathematical point of view, an optimization problem is defined by three main parts: variables, objective functions, and constraints [3]. Once the optimization problem is mathematically modeled, it must be optimized using an appropriate method.
Optimization algorithms are problem-solving methods that can provide suitable solutions to optimization problems by randomly scanning the search space, without the need for gradient information. In recent years, scientists have designed many optimization algorithms based on simulations of natural phenomena, the laws of physics, the biological sciences, and the behavior of animals, insects, and other living things in nature. The main question that arises in the study of optimization algorithms is this: given the many algorithms introduced in recent years, is there still a need to develop and design new ones? The answer lies in the No Free Lunch (NFL) theorem [4]. According to the NFL theorem, an optimization algorithm that performs powerfully on several optimization problems will not necessarily be able to solve all optimization problems: an algorithm may be the best optimizer for one problem yet fail on another, because each optimization problem has its own complexity and nature. The NFL theorem therefore prompts researchers to design new optimization algorithms for different domains and applications, and it also motivated the authors of this paper to design and introduce a new optimization algorithm for solving optimization problems.

1.2. Research Gap

Although many optimization algorithms have been introduced to solve optimization problems, achieving quasi-optimal solutions closer to the global optimum is still a major challenge. The key issue is that, because optimization algorithms are stochastic methods, they cannot guarantee that the solutions they provide are globally optimal. It is therefore always possible to develop new optimization algorithms capable of providing quasi-optimal solutions closer to the global optimum. To solve optimization problems in different sciences more accurately and effectively, newer algorithms with a higher ability to approach the global optimum are needed. Accordingly, this study attempts to design a new optimization algorithm that can provide more suitable solutions to optimization problems.
One of the disadvantages of most optimization algorithms is that their population-update process depends too heavily on the best member of the population. This can lead to premature convergence or entrapment in locally optimal areas, because it does not allow population members to properly scan the search space in different directions. The advantage of the proposed SLOA is that it does not rely on the best member of the population, which increases its search power. In the SLOA, the population is updated in four different phases, none of which relies on the best member. In the first phase, simulating the zigzag motion of snow leopards is very effective for accurately scanning the search space and escaping locally optimal solutions. In the second phase, the behavior of snow leopards during hunting, which proceeds at two different speeds towards the prey, increases the exploitation power and convergence of the algorithm towards the solution. In the third phase, the reproduction process creates new solutions in different areas of the search space, which increases the exploration power of the algorithm. In the fourth phase, mortality helps the algorithm evolve by eliminating weak population members and preventing the search of non-optimal areas.

1.3. Contribution

The innovation and contribution of this paper lie in designing a new optimization algorithm called the Snow Leopard Optimization Algorithm (SLOA) to solve optimization problems in different sciences more effectively. The novelty of the proposed method is the simulation of the behavior and social life of snow leopards, with a focus on travel routes, hunting, reproduction, and mortality. In the algorithms introduced so far, researchers have not used the social life of snow leopards in an optimization process. The contributions of this study are as follows:
(i)
The main idea in designing SLOA is simulation of the different behaviors of snow leopards that are inspired by nature. In SLOA, the natural behaviors of snow leopards are modeled in four phases including travel routes, hunting, reproduction, and mortality.
(ii)
The proposed SLOA is mathematically modeled and simulated for use in solving and optimizing various problems.
(iii)
Twenty-three sets of standard objective functions of different types are employed to evaluate the power of the proposed SLOA in providing quasi-optimal and effective solutions.
(iv)
In order to further analyze the SLOA and evaluate the quality of the obtained optimization results, the performance of the SLOA is compared with eight well-known optimization algorithms.
Optimization algorithms are applied in all disciplines of science and in real-world problems wherever an optimization process or problem is defined and designed. The proposed SLOA can be applied to minimize or maximize various objective functions, for example in optimal design and the engineering sciences, where decision variables must be selected well to optimize device performance, as well as in data mining, medical science, clustering, and in general any application that involves optimization.

1.4. Paper Organization

The rest of the paper is organized as follows: a literature review is presented in Section 2. The proposed SLOA is introduced in Section 3. Simulation studies on the performance of the SLOA are presented in Section 4. The performance of the SLOA from the perspective of exploration and exploitation is analyzed and discussed in Section 5. Finally, conclusions and several suggestions for future studies are presented in Section 6.

2. Literature Review

Optimization problem solving methods include two categories of deterministic methods and stochastic methods [5].
The use of classical optimization methods, including linear programming, integer programming, dynamic programming, and nonlinear programming, is associated with disadvantages. The most important is that they are time consuming on large problems; even with today's advanced computing technologies, solving a large-scale problem with these techniques can take several years [6]. These difficulties have forced researchers to adjust their expectations, abandoning the search for the exact best answer in favor of good-enough answers, so that even for large-scale problems appropriate solutions can be reached in a reasonable time [7].
The weakness of methods such as gradients and numerical calculations in solving optimization problems has led to the creation of a special type of intelligent search algorithm: population-based optimization algorithms. Population-based optimization algorithms are stochastic methods inspired by nature and its mechanisms [8]. These methods try to move their initial population towards the global optimum and provide appropriate solutions close to it in a reasonable time [9].
Optimization algorithms work by first producing a number of candidate solutions, which form the population of the algorithm, and then improving these initial solutions in an iterative process based on the simulated collective intelligence of the population members, without the use of derivative information [10].
Every optimization problem has a precise, exact solution called the global optimum [11]. Given that optimization algorithms are stochastic methods that improve initially proposed solutions over successive iterations, they may not be able to reach exactly the global optimum. For this reason, the solutions obtained for an optimization problem using optimization algorithms are called quasi-optimal [12]. When evaluating several quasi-optimal solutions to an optimization problem, the most appropriate solution is the one closer to the global optimum. The main reason researchers design numerous optimization algorithms is to provide quasi-optimal solutions that are more appropriate and closer to the global optimal solution. In this regard, optimization algorithms have been applied to solve optimization problems in different branches of science, such as the design of rectangular microstrip antennas [13], electromagnetic problems [14], Proportional-Integral-Derivative (PID) control [15], the Flexible Job Shop Scheduling Problem (FJSP) [16], force analysis and optimization of kinematic parameters [17], simultaneous optimization of distributed generation [18], optimization of complex system reliability [19], design of model-based fuzzy controllers for networked control systems [20], design of stable robot controllers [21], and the Maki-Thompson rumor model [22].
Optimization algorithms have developed over the years based on various natural, physical, game, genetic, and any theory or process that has an evolving nature.
Genetic Algorithm (GA) is one of the oldest, most famous, and most widely used optimization methods. It is inspired by the science of genetics and Darwin's theory of evolution, and is based on the survival of the fittest, or natural selection. GA begins by producing a population of chromosomes, randomly generated within the upper and lower bounds of the problem variables. In the next step, the generated data structures (chromosomes) are evaluated. Chromosomes that provide better values of the objective function have a higher chance of being selected as parents for reproduction than weaker solutions [23]. Although GA has simple concepts and can be easily implemented, its several control parameters and time-consuming nature are its most important disadvantages.
Particle Swarm Optimization (PSO) is another popular and widely used algorithm, inspired by the collective behavior of birds or fish in nature. In PSO, a group of birds or fish are looking for food in an environment where there is only one piece of food. None of the birds know the location of the food; they only know their distance to it. One of the best strategies is to follow the bird that is closest to the food. In other words, every bird or fish, in addition to its own experience, also trusts the bird or fish closest to the food [24]. The main disadvantages of PSO are that it easily falls into local optima in high-dimensional problems and that it has a low convergence rate in the iterative process.
Gravitational Search Algorithm (GSA) is a physics-based algorithm introduced based on a simulation of gravitational force and Newton's laws of motion. According to the theory of gravity, objects at different distances from each other exert a gravitational force on each other. In GSA, the mass of objects is determined based on the values of the objective function. Objects in better positions in the search space pull other objects towards themselves based on simulations of the gravitational force and the laws of motion [25]. Slow convergence, a tendency to become trapped in local optima, and control parameters are the main disadvantages of GSA.
Teaching-Learning Based Optimization (TLBO) is a population-based optimization method designed based on a simulation of the behaviors of a teacher and students in a classroom. In TLBO, the best member of the population is considered the teacher and the rest of the population the students of the class. TLBO has two phases, called the teaching phase and the learning phase. In the teaching phase, the teacher passes her/his knowledge to the students, and in the learning phase, the students share their information with each other [26]. The main disadvantages of TLBO are that it consumes a lot of memory and involves many iterations, making it a time-consuming algorithm.
Gray Wolf Optimizer (GWO) is a nature-based optimizer introduced based on a simulation of the behaviors of gray wolves in nature. GWO mimics the hierarchical leadership and hunting mechanism of gray wolves. Four types of gray wolves, named alpha, beta, delta, and omega, are used to simulate the hierarchical leadership. In GWO, alpha is the best member of the population, beta and delta are the second- and third-best members, and the rest of the wolves are omega. In addition, three stages, namely searching for prey, encircling prey, and attacking prey, are used to simulate the hunting mechanism [27]. Low solving accuracy, a slow convergence rate, and poor local search ability are several disadvantages of GWO.
Grasshopper Optimization Algorithm (GOA) is a swarm-based algorithm introduced based on a simulation of the behavior of grasshopper swarms in nature. In GOA, the group movement of grasshoppers towards food sources is imitated and simulated, and a mathematical model is proposed for the attraction and repulsion forces between the grasshoppers. Attraction forces encourage grasshoppers to exploit promising regions, whereas repulsion forces let them explore the search space [28]. The most important disadvantages of GOA are its slow convergence speed, its time-consuming nature, and its control parameters.
Marine Predators Algorithm (MPA) is a nature-based algorithm inspired by the movement strategies that marine predators use when trapping their prey in the oceans. The predominant search behavior and hunting strategy of marine predators is modeled using the Levy flight method. MPA has three phases, defined by the relative speeds of predator and prey: Phase 1, when the prey moves faster than the predator; Phase 2, when the prey and the predator move at almost the same speed; and Phase 3, when the predator moves faster than the prey [29]. One of the main disadvantages of MPA is that it requires a huge number of iterations, especially for nonlinear optimization problems.
Tunicate Swarm Algorithm (TSA) is a bio-inspired algorithm introduced based on an imitation of the swarm behaviors of tunicates and their jet propulsion during navigation and foraging. In TSA, two behaviors of tunicates, jet propulsion and swarm intelligence, are employed for finding food sources. To model the jet propulsion behavior, tunicates must comply with three conditions: remaining close to the best search agent, moving towards the position of the best search agent, and avoiding conflicts between search agents. To model the swarm intelligence behavior, the positions of search agents are updated based on the best optimal solution [30]. Poor convergence in solving high-dimensional multimodal problems, control parameters, and complex calculations are the main disadvantages of TSA.

3. Snow Leopard Optimization Algorithm (SLOA)

In this section, the snow leopard is introduced first. Then, based on simulating the habits and natural behaviors of snow leopards, a new optimization algorithm called Snow Leopard Optimization Algorithm (SLOA) is developed. Mathematical modeling and formulation of the proposed SLOA for implementation in solving optimization problems is presented.

3.1. Snow Leopard

The snow leopard (Panthera uncia) is a felid species native to the high mountains of South and Central Asia. Snow leopards live in mountainous and alpine areas at altitudes of 3000 to 4500 m, from eastern Afghanistan, the Himalayas, and the Tibetan Plateau to southern Siberia, Mongolia, and western China [31].
The fur of snow leopards is whitish to gray with black spots on neck and head, with larger rosettes on the back, flanks, and bushy tail. The snow leopard’s belly is whitish. Its eyes are grey or pale green. Its nasal cavities are large. Its forehead is domed and its muzzle is short. The fur is thick with hairs between 5 and 12 cm long. Its body is stocky, short-legged, and slightly smaller than the other cats of the genus Panthera, reaching a shoulder height of 56 cm, and ranging in head to body size from 75 to 150 cm. Its tail is 80 to 105 cm long [32]. It weighs between 22 and 55 kg, with an occasional large male reaching 75 kg, and small female of under 25 kg. Its canine teeth are 28.6 mm long and are more slender than those of the other Panthera species [33].
Snow leopards have different behaviors and habits, including their travel routes and how they move towards each other, how they hunt, how they reproduce, and their mortality. The modeling of these natural behaviors has been used in the design of the proposed SLOA. In this design, four natural behaviors in the life of snow leopards are modeled.
The first behavior is travel routes and movement. Modeling the zig-zag pattern in which snow leopards move and follow each other leads to a more efficient search of the search space and helps cross locally optimal areas.
The second behavior is how to hunt. Modeling the movements of snow leopards in order to hunt prey leads to the convergence of the optimization algorithm towards the optimal areas.
The third behavior is reproduction. The reproduction of snow leopards can be modeled as a combination of two members of the population, which leads to the production of a new member that may improve the performance of the algorithm in achieving optimal areas.
The fourth behavior is mortality. Modeling the mortality of weak snow leopards leads to the elimination of inappropriate solutions and members of the algorithm, removing members located in inappropriate areas of the search space. In addition, modeling this behavior keeps the algorithm population constant during the iterations.

3.2. Mathematical Modeling

In the proposed SLOA, each snow leopard is a member of the algorithm population, and a certain number of snow leopards act as search agents. In population-based optimization algorithms, population members are identified using a matrix called the population matrix. The number of rows in the population matrix equals the number of members in the population, and the number of columns equals the number of variables in the optimization problem. The population matrix is specified in Equation (1).

$$X = \begin{bmatrix} X_1 \\ \vdots \\ X_i \\ \vdots \\ X_N \end{bmatrix}_{N \times m} = \begin{bmatrix} x_{1,1} & \cdots & x_{1,d} & \cdots & x_{1,m} \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ x_{i,1} & \cdots & x_{i,d} & \cdots & x_{i,m} \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ x_{N,1} & \cdots & x_{N,d} & \cdots & x_{N,m} \end{bmatrix}_{N \times m} \qquad (1)$$

where $X$ is the population of snow leopards, $X_i$ is the $i$th snow leopard, $x_{i,d}$ is the value of the $d$th problem variable suggested by the $i$th snow leopard, $N$ is the number of snow leopards in the algorithm population, and $m$ is the number of problem variables.
The position of each snow leopard as a member of the population in the problem-solving space determines the values for the problem variables. Therefore, for each snow leopard, a value can be calculated for the objective function of the problem. The values of the objective function are specified by a vector using Equation (2).
$$F = \begin{bmatrix} F_1 \\ \vdots \\ F_i \\ \vdots \\ F_N \end{bmatrix}_{N \times 1} = \begin{bmatrix} f(X_1) \\ \vdots \\ f(X_i) \\ \vdots \\ f(X_N) \end{bmatrix}_{N \times 1} \qquad (2)$$

where $F$ is the vector of objective function values and $F_i$ is the value of the objective function of the problem obtained for the $i$th snow leopard.
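In code, the population matrix of Equation (1) and the objective vector of Equation (2) can be set up as below. This is a minimal sketch, not the authors' implementation; the sphere function and the bounds are illustrative assumptions.

```python
import numpy as np

def initialize_population(N, m, lb, ub, rng):
    """Random population matrix X of Equation (1): N snow leopards, m variables."""
    return lb + rng.random((N, m)) * (ub - lb)

def evaluate(X, objective):
    """Objective vector F of Equation (2): one value per snow leopard."""
    return np.array([objective(x) for x in X])

rng = np.random.default_rng(0)
X = initialize_population(N=5, m=3, lb=-10.0, ub=10.0, rng=rng)
F = evaluate(X, lambda x: float(np.sum(x ** 2)))  # sphere function as a stand-in objective
```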
Members of the population are updated in the proposed SLOA based on simulating the natural behaviors of snow leopards in four phases: displacement, hunting, reproduction, and mortality. The mathematical modeling of these four phases and the mentioned natural behaviors are presented in the following subsections.

3.2.1. Phase 1: Travel Routes and Movement

Snow leopards, like other cats, use scent signs to show their locations and travel routes. These signs are usually caused by scraping the ground with the hind feet before depositing urine or scat [34]. Snow leopards also move in a zig-zag pattern in indirect lines [35]. So, snow leopards can move towards or follow each other based on this natural behavior.
This phase of the proposed SLOA is mathematically modeled using Equations (3)–(5).
$$x_{i,d}^{P1} = x_{i,d} + r \times \left(x_{k,d} - I \times x_{i,d}\right) \times \operatorname{sign}\left(F_i - F_k\right), \quad k \in \{1, 2, 3, \dots, N\}, \; d = 1, 2, 3, \dots, m \qquad (3)$$

$$X_i = \begin{cases} X_i^{P1}, & F_i^{P1} < F_i \\ X_i, & \text{else} \end{cases} \qquad (4)$$

$$I = \operatorname{round}(1 + r) \qquad (5)$$

where $x_{i,d}^{P1}$ is the new value of the $d$th problem variable obtained by the $i$th snow leopard in phase 1, $r$ is a random number in the interval $[0, 1]$, $k$ is the row number of the snow leopard selected to guide the $i$th snow leopard along the $d$th axis, $X_i^{P1}$ is the updated location of the $i$th snow leopard based on phase 1, and $F_i^{P1}$ is its objective function value.
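The phase 1 update can be sketched as follows. This is an illustrative reading, not the authors' code: drawing the random number r and the coefficient I per dimension is an assumption, and the guiding member k is chosen uniformly at random.

```python
import numpy as np

def phase1_travel(X, F, objective, rng):
    """Travel-routes update, Equations (3)-(5), with the greedy
    acceptance rule of Equation (4). Sketch: r and I are drawn per
    dimension, which is one reading of Equation (3)."""
    N, m = X.shape
    for i in range(N):
        k = rng.integers(N)              # guiding leopard k in Eq. (3)
        r = rng.random(m)                # random numbers in [0, 1]
        I = np.round(1 + rng.random(m))  # Eq. (5): I is 1 or 2
        x_new = X[i] + r * (X[k] - I * X[i]) * np.sign(F[i] - F[k])
        f_new = objective(x_new)
        if f_new < F[i]:                 # Eq. (4): accept only improvements
            X[i], F[i] = x_new, f_new
    return X, F
```

Because of the greedy acceptance in Equation (4), a member's objective value can never get worse in this phase.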

3.2.2. Phase 2: Hunting

In the second phase, the behavior of snow leopards while hunting and attacking prey is used to update the population. Based on an observation recorded in Hemis National Park, a snow leopard uses rocky cliffs for cover when approaching its prey. After reaching a distance of 40 m from the prey, the snow leopard first walked slowly for the first 15 m, then ran the last 25 m, and finally killed the prey by biting its neck [36].
This natural hunting behavior is mathematically modeled using Equations (6)–(8). Equation (6) specifies the location of the prey for the ith snow leopard. Equation (7) simulates how a snow leopard moves toward its prey. According to the observations, a snow leopard walks about 37.5% of the distance to the prey and then runs the remaining 62.5%. Therefore, a parameter called P is used in Equation (7) to simulate this type of motion; P is the fraction of the distance to the prey that the snow leopard walks, and based on the observations it is set to 0.375 in the simulations of the proposed SLOA. Equation (8) simulates the new position of the snow leopard after the attack on the prey. For this purpose, an effective update is used, in which the new position is accepted for an algorithm member only if the value of the objective function in the new position is better than in the previous position.
$$p_{i,d} = x_{j,d}, \quad d = 1, 2, 3, \dots, m \qquad (6)$$

$$x_{i,d}^{P2} = x_{i,d} + r \times \left(p_{i,d} - x_{i,d}\right) \times P + \left(p_{i,d} - 2 \times x_{i,d}\right) \times (1 - P) \times \operatorname{sign}\left(F_i - F_p\right) \qquad (7)$$

$$X_i = \begin{cases} X_i^{P2}, & F_i^{P2} < F_i \\ X_i, & \text{else} \end{cases} \qquad (8)$$

where $p_{i,d}$ is the $d$th dimension of the prey location considered for the $i$th snow leopard, $F_p$ is the objective function value at the prey location, $x_{i,d}^{P2}$ is the new value of the $d$th problem variable obtained by the $i$th snow leopard in phase 2, and $F_i^{P2}$ is its objective function value.
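The hunting phase can be sketched as below. This is a hedged reading, not the authors' code: the prey of leopard i is taken as a randomly chosen population member j (Equation (6) does not pin down how j is selected), and r is drawn per dimension.

```python
import numpy as np

P = 0.375  # fraction of the approach covered at a walk (Section 3.2.2)

def phase2_hunt(X, F, objective, rng):
    """Hunting update, Equations (6)-(8), with greedy acceptance."""
    N, m = X.shape
    for i in range(N):
        j = rng.integers(N)          # Eq. (6): prey position taken from member j
        prey, F_p = X[j], F[j]
        r = rng.random(m)
        x_new = (X[i]
                 + r * (prey - X[i]) * P                              # slow walk
                 + (prey - 2 * X[i]) * (1 - P) * np.sign(F[i] - F_p)) # fast run, Eq. (7)
        f_new = objective(x_new)
        if f_new < F[i]:             # Eq. (8): accept only improvements
            X[i], F[i] = x_new, f_new
    return X, F
```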

3.2.3. Phase 3: Reproduction

In this phase, based on the natural reproductive behavior of snow leopards, new members equal to half the total population are added to the population of the algorithm. It is assumed that one cub is born from the mating of each pair of snow leopards. The reproduction process is mathematically modeled based on these concepts using Equation (9).
$$C_l = \frac{X_l + X_{N-l+1}}{2}, \quad l = 1, 2, 3, \dots, \frac{N}{2} \qquad (9)$$

where $C_l$ is the $l$th cub, born from the mating of two snow leopards.
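Equation (9) pairs member l with member N−l+1 and takes their midpoint. A minimal sketch (0-based indices, so the partner of index l is N−1−l):

```python
import numpy as np

def phase3_reproduce(X):
    """Equation (9): cub l is the midpoint of leopards l and N-l+1
    (indices shifted to 0-based), producing N/2 cubs."""
    N = X.shape[0]
    return np.array([(X[l] + X[N - 1 - l]) / 2 for l in range(N // 2)])
```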

3.2.4. Phase 4: Mortality

Living things are always in danger of dying. Although reproduction increases the population of snow leopards, the number of snow leopards remains constant during the iterations of the algorithm due to mortality and losses. In the proposed SLOA, it is assumed that in each iteration, after reproduction, a number of snow leopards equal to the number of cubs born face mortality. The criterion for snow leopard mortality in the SLOA is the value of the objective function; snow leopards with a weaker objective function value are more prone to death, and some newborn cubs may also die because of a poor objective function value.
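One concrete reading of the mortality phase is to pool parents and cubs and keep only the N fittest, so the population size stays constant. The paper only states that deaths equal the number of cubs born and that weaker objective values are more prone to die, so the strict truncation below is an assumption, not the authors' exact rule.

```python
import numpy as np

def phase4_mortality(X, F, cubs, objective):
    """Sketch of the mortality phase: pool parents and cubs, then keep
    the N members with the best (smallest) objective values."""
    N = X.shape[0]
    F_cubs = np.array([objective(c) for c in cubs])
    pool_X = np.vstack([X, cubs])
    pool_F = np.concatenate([F, F_cubs])
    keep = np.argsort(pool_F)[:N]   # minimization: weakest values die off
    return pool_X[keep], pool_F[keep]
```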

3.3. Flowchart of SLOA

In the proposed SLOA, snow leopards are updated in each iteration according to the first and second phases, then the population of the algorithm according to the third and fourth phases is faced with natural processes of reproduction and mortality.
These steps of the SLOA are repeated until the stop condition is reached. After fully implementing the SLOA on an optimization problem, the SLOA makes available the best obtained solution as the best quasi-optimal solution. The various stages of implementation of the SLOA are specified as flowcharts in Figure 1 and its pseudocode is presented in Algorithm 1.
Algorithm 1 Pseudocode of SLOA
Start SLOA.
1. Input problem information: variables, objective function, and constraints.
2. Set the number of snow leopards (N) and iterations (T).
3. Generate an initial population matrix at random.
4. Evaluate the objective function.
5. For t = 1:T
6.   Phase 1: travel routes and movement
7.   For i = 1:N
8.     For d = 1:m
9.       Calculate $x_{i,d}^{P1}$ using Equations (3) and (5).
10.    End
11.    Update $X_i$ using Equation (4).
12.  End
13.  Phase 2: hunting
14.  For i = 1:N
15.    For d = 1:m
16.      Calculate the location of prey $p_{i,d}$ using Equation (6).
17.      Calculate $x_{i,d}^{P2}$ using Equation (7).
18.    End
19.    Update $X_i$ using Equation (8).
20.  End
21.  Phase 3: reproduction
22.  For l = 1:0.5 × N
23.    Generate cub $C_l$ using Equation (9).
24.  End
25.  Phase 4: mortality
26.  Adjust the number of snow leopards to N, with mortality based on the objective function criterion.
27.  Save the best quasi-optimal solution obtained with the SLOA so far.
28. End For t = 1:T
29. Output the best quasi-optimal solution obtained with the SLOA.
End SLOA.
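Putting the four phases together, Algorithm 1 can be sketched as a compact implementation. This is an illustrative reading, not the authors' code: the prey-selection rule, the per-dimension random numbers, and the survivor-selection rule in the mortality phase are assumptions, and the sphere function serves only as a test objective.

```python
import numpy as np

def sloa(objective, m, lb, ub, N=20, T=100, P=0.375, seed=0):
    """Compact sketch of Algorithm 1 (minimization)."""
    rng = np.random.default_rng(seed)
    X = lb + rng.random((N, m)) * (ub - lb)         # step 3: random population
    F = np.array([objective(x) for x in X])         # step 4: evaluate
    best_x, best_f = X[np.argmin(F)].copy(), float(F.min())
    for _ in range(T):                              # step 5
        for i in range(N):                          # Phase 1, Eqs. (3)-(5)
            k = rng.integers(N)
            I = np.round(1 + rng.random(m))
            x_new = X[i] + rng.random(m) * (X[k] - I * X[i]) * np.sign(F[i] - F[k])
            f_new = objective(x_new)
            if f_new < F[i]:                        # Eq. (4)
                X[i], F[i] = x_new, f_new
        for i in range(N):                          # Phase 2, Eqs. (6)-(8)
            j = rng.integers(N)
            x_new = (X[i] + rng.random(m) * (X[j] - X[i]) * P
                     + (X[j] - 2 * X[i]) * (1 - P) * np.sign(F[i] - F[j]))
            f_new = objective(x_new)
            if f_new < F[i]:                        # Eq. (8)
                X[i], F[i] = x_new, f_new
        cubs = np.array([(X[l] + X[N - 1 - l]) / 2 for l in range(N // 2)])  # Eq. (9)
        F_cubs = np.array([objective(c) for c in cubs])
        pool_X, pool_F = np.vstack([X, cubs]), np.concatenate([F, F_cubs])
        keep = np.argsort(pool_F)[:N]               # Phase 4: N fittest survive
        X, F = pool_X[keep], pool_F[keep]
        if F[0] < best_f:                           # step 27: track the best so far
            best_x, best_f = X[0].copy(), float(F[0])
    return best_x, best_f

best_x, best_f = sloa(lambda x: float(np.sum(x ** 2)), m=5, lb=-10.0, ub=10.0)
```

Since every phase either accepts only improvements or keeps the fittest survivors, the tracked best objective value is non-increasing over iterations.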

4. Simulation Studies and Results

In this section, the performance of the proposed SLOA in optimization and in providing effective solutions to optimization problems is studied. For this purpose, a standard set of twenty-three objective functions of three different types, unimodal, high-dimensional multimodal, and fixed-dimensional multimodal, has been used. The complete information on these objective functions is specified in Appendix A, in Table A1, Table A2, and Table A3. The optimization results obtained using the SLOA are compared with the performance of eight optimization algorithms: Genetic Algorithm (GA) [23], Particle Swarm Optimization (PSO) [24], Gravitational Search Algorithm (GSA) [25], Teaching-Learning Based Optimization (TLBO) [26], Gray Wolf Optimizer (GWO) [27], Grasshopper Optimization Algorithm (GOA) [28], Marine Predators Algorithm (MPA) [29], and Tunicate Swarm Algorithm (TSA) [30]. The proposed SLOA and the eight compared algorithms are each run in twenty independent executions on the twenty-three objective functions F1 to F23. The most important criterion for determining the superiority of optimization algorithms is the value of the objective function. Although convergence speed is an important performance criterion, the main purpose of optimization is to provide a suitable quasi-optimal solution; an algorithm may have a high convergence speed yet converge to local or unsuitable solutions. In optimizing an objective function, the superior algorithm is the one that offers a better solution with a more optimal objective function value.
For this reason, the results of the optimization algorithms on the mentioned objective functions are reported using two indicators: the average of the best solutions obtained in the twenty independent executions (ave) and the standard deviation of these best obtained solutions (std). These two indicators are calculated using Equations (10) and (11). The values used for the main control parameters of the compared optimization algorithms are specified in Table 1.
$$ave = \frac{1}{20} \times \sum_{i=1}^{20} BOS_i \qquad (10)$$

$$std = \sqrt{\frac{1}{20} \times \sum_{i=1}^{20} \left(BOS_i - ave\right)^2} \qquad (11)$$

where $BOS_i$ is the best solution obtained in the $i$th independent run.
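The two reporting indicators can be computed as follows. A small sketch; the sample values are illustrative, not results from the paper.

```python
import numpy as np

def ave_std(bos):
    """Equations (10)-(11): mean and (population) standard deviation
    of the best obtained solutions over the independent runs."""
    bos = np.asarray(bos, dtype=float)
    n = bos.size
    ave = bos.sum() / n
    std = np.sqrt(np.sum((bos - ave) ** 2) / n)  # population form, as in Eq. (11)
    return ave, std

ave, std = ave_std([3.0, 1.0, 2.0, 2.0])  # illustrative values only
```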

4.1. Evaluation of Unimodal Objective Functions

Seven unimodal functions, F1 to F7, are selected to analyze the ability of optimization algorithms to provide quasi-optimal solutions for this type of objective function. The results of optimizing these functions using the proposed SLOA and the eight other algorithms are presented in Table 2. The results show that the proposed SLOA converges to the global optimal solution for F1 and F6 and is the best optimizer for F2, F3, F4, F5, and F7. For F1, F2, F3, and F4 in particular, the performance of the proposed SLOA is significantly superior to that of the eight compared algorithms.
Analysis and comparison of these results shows that the SLOA provides quasi-optimal solutions for this type of objective function more effectively than similar algorithms.

4.2. Evaluation of High-Dimensional Multimodal Objective Functions

The second group of objective functions, the six high-dimensional multimodal functions F8 to F13, is selected to analyze the power of optimization algorithms in solving this type of problem. The performance of the compared algorithms and of the SLOA on F8 to F13 is presented in Table 3. The results show that the proposed algorithm provides the global optimal solution for F9 and F11 and is the best optimizer for F10 and F12. For F8, SLOA is the sixth-best optimizer, after the GA, TLBO, PSO, GWO, and TSA algorithms. GSA is the best optimizer for F13, with the proposed algorithm second-best.
Analysis of these results shows the acceptable ability of the SLOA to solve high-dimensional multimodal optimization problems.

4.3. Evaluation of Fixed-Dimensional Multimodal Objective Functions

Ten objective functions, F14 to F23, are considered to analyze the performance of optimization algorithms in solving fixed-dimensional multimodal problems. The results of optimizing these functions using the proposed SLOA and the eight other algorithms are presented in Table 4. They indicate that the proposed SLOA is the best optimizer for F15, F16, F17, F19, and F20. For F14, SLOA performs similarly to GA, GOA, and MPA in the Ave indicator; however, owing to its lower std indicator, the proposed SLOA is the more efficient method for F14. For F18, SLOA is the best optimizer by virtue of its lower std indicator. For F21, F22, and F23, SLOA and MPA perform similarly in the Ave indicator, but since the proposed SLOA has a lower std indicator, it is the best optimizer and MPA is second-best.
What is clear from this analysis is that the SLOA has a high ability to solve fixed-dimensional multimodal optimization problems and provides more effective solutions, with less standard deviation, than similar algorithms.
The behavior of the proposed SLOA and of the eight compared optimization algorithms is presented as boxplots in Figure 2. The figure intuitively demonstrates the superiority of the proposed SLOA in optimizing F1 to F7, F9 to F12, and F14 to F23.

4.4. Statistical Analysis

Reporting the optimization results with the standard statistical indicators of average and standard deviation of the best solutions provides useful and valuable information. However, the superiority of an algorithm on a given problem may still be coincidental, even after twenty independent runs. Therefore, the Wilcoxon rank-sum test is applied to statistically analyze the results obtained from the proposed SLOA and the eight other optimization algorithms. The Wilcoxon rank-sum test is a non-parametric test that compares two independent samples on a ranking scale; it is applied here to determine whether the results obtained by SLOA differ from those of the competing algorithms in a statistically significant way.
A p-value indicates whether the observed difference is statistically significant: if the p-value for a comparison is less than 0.05, the difference is considered significant. The results of the Wilcoxon rank-sum test are presented in Table 5. Based on these results, the proposed SLOA has a significant superiority over another algorithm wherever the p-value is less than 0.05. Accordingly, the proposed SLOA is significantly superior to all eight compared algorithms on the unimodal functions F1 to F7. On the high-dimensional multimodal functions, SLOA is significantly superior to MPA and GOA, with p-values less than 0.05. On the fixed-dimensional multimodal functions, the proposed SLOA is significantly superior to all eight compared algorithms, again with p-values less than 0.05.
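As a concrete illustration of this decision rule, the rank-sum statistic and its two-sided p-value can be computed with the normal approximation. This is a self-contained sketch: the paper does not state which implementation was used, and for small or heavily tied samples an exact or tie-corrected variant would be preferable.

```python
import math
from itertools import chain

def rank_sum_p(sample_a, sample_b):
    """Two-sided Wilcoxon rank-sum test via the normal approximation
    (adequate for 20 runs per algorithm). Tied values share the
    average rank. Returns the p-value."""
    pooled = sorted(chain(sample_a, sample_b))
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2  # mean of ranks i+1 .. j
        i = j
    n1, n2 = len(sample_a), len(sample_b)
    w = sum(ranks[x] for x in sample_a)       # rank sum of sample A
    mu = n1 * (n1 + n2 + 1) / 2               # mean of w under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mu) / sigma
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Significance at the 0.05 level, as in Table 5:
significant = rank_sum_p([0.1, 0.2, 0.3], [1.1, 1.2, 1.3]) < 0.05
```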

4.5. Sensitivity Analysis

Sensitivity analysis studies how the output of a mathematical model changes as its input parameters change: if the value of one independent parameter is varied while the other parameters are held constant, how does the dependent variable respond under the specified conditions? To further analyze the proposed SLOA, its sensitivity to three parameters is investigated: the maximum number of iterations, the number of members of the snow leopard population, and the parameter P.
To analyze the sensitivity of the proposed algorithm to the maximum number of iterations, the SLOA is run independently on the objective functions F1 to F23 with maxima of 100, 500, 800, and 1000 iterations. The results of this analysis are presented in Table 6, and the corresponding convergence curves in Figure 3. The simulation results show that as the maximum number of iterations increases, SLOA converges towards more suitable quasi-optimal solutions.
To analyze the sensitivity of the SLOA to the population size, the algorithm is run independently on all objective functions F1 to F23 with snow leopard populations of 20, 30, 50, and 80 members. The results are presented in Table 7, and the convergence curves in Figure 4. They show that as the population size increases, the objective function values decrease and the algorithm converges towards solutions closer to the global optimum.
The parameter P represents the fraction of the distance that the snow leopard covers when attacking prey. To analyze the sensitivity of the proposed algorithm to this parameter, SLOA is run independently with P equal to 0.2, 0.375, 0.6, and 0.8. The results for all objective functions F1 to F23 are reported in Table 8, and the convergence curves in Figure 5. For F6, F9, F11, F14, F16, F17, F18, F19, F20, and F23, changes in P had no effect on the performance of the proposed SLOA; for F1, F5, F10, F21, and F22, they had very little effect. Based on this analysis, 0.375 is a suitable value for P, and the authors suggest that researchers use this value in their own simulations.
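These one-at-a-time sweeps can be organized as a small harness. The sketch below substitutes a plain random search for SLOA (the algorithm itself is defined earlier in the paper and is not reproduced here), so `random_search`, its bounds, and the sphere objective are illustrative stand-ins only.

```python
import random

def sphere(x):
    """F1 from Table A1, used here as a cheap test objective."""
    return sum(v * v for v in x)

def random_search(objective, dim, iters, pop, bounds=(-100.0, 100.0), seed=0):
    """Stand-in optimizer: evaluates iters * pop random points and keeps
    the best value. SLOA would be plugged in here; this stub only
    illustrates the sweep structure behind Tables 6 to 8."""
    rng = random.Random(seed)
    lo, hi = bounds
    best = float("inf")
    for _ in range(iters * pop):
        x = [rng.uniform(lo, hi) for _ in range(dim)]
        best = min(best, objective(x))
    return best

# Sweep one parameter at a time, e.g. the maximum number of iterations
# (Table 6); the same loop shape applies to population size (Table 7)
# and to P (Table 8).
results = {iters: random_search(sphere, dim=5, iters=iters, pop=20)
           for iters in (100, 500, 800, 1000)}
```

With a fixed seed, a larger evaluation budget can only improve the best value found, which mirrors the monotone trend reported in Table 6.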

5. Discussion

Exploitation and exploration are two key criteria for analyzing the performance of optimization algorithms and evaluating their ability to provide appropriate quasi-optimal solutions.
Exploitation is the ability of an algorithm to provide a suitable quasi-optimal solution close to the global optimum. Among several algorithms solving the same problem, the one that provides the best quasi-optimal solution, closest to the global optimum, has the higher exploitation. Exploitation is especially important for optimization problems that have no local optima. The unimodal objective functions F1 to F7 have a single main solution and no local optima, so they are suitable for evaluating exploitation. The results for these functions are presented in Table 2; their analysis makes clear that the proposed SLOA provides more suitable quasi-optimal solutions, and thus has a higher exploitation capability, than the other eight algorithms.
Exploration is the ability of an algorithm to scan the search space of an optimization problem accurately and effectively. This index is especially important for problems that have several local optima in addition to the global optimum: during execution, an algorithm must search the various regions of the search space well and approach the global optimum by passing through the local optimal regions. Among several algorithms solving such a problem, the one that scans the search space well has the higher exploration. The high-dimensional multimodal functions F8 to F13 and the fixed-dimensional multimodal functions F14 to F23 have several local optima in addition to the global optimum, so they are suitable for analyzing exploration. The corresponding results are presented in Table 3 and Table 4; their analysis indicates that the proposed SLOA, with its high exploration power, scans the search space of problems containing local optimal regions and approaches the global optimum by passing through them.
Exploration power allows the algorithm to scan different areas of the search space, passing through local optimal areas and discovering the region of the main optimum; exploitation power causes the algorithm to converge as closely as possible to the optimal solution once that region has been found. The main feature and advantage of the proposed SLOA is that, while scanning the search space, it does not rely on a specific member such as the best member of the population. The first and second phases of SLOA, which simulate the snow leopard's zigzag movement and hunting strategy, increase the exploitation power of the algorithm in converging to the optimal solution; the effect of these two phases is clearly evident in the results for F1 to F4. The third and fourth phases, which model reproduction and mortality, strongly affect the exploration power of the algorithm in scanning new areas of the search space and moving away from non-optimal areas; this high exploration power is well evident in the optimization of F9, F11, and F14 to F23.
In summary, the optimization results for the three groups of objective functions show that the proposed SLOA has high capability in both the exploitation and the exploration indicators, and these two abilities are the main reason for its superiority over the compared algorithms. With its high exploration capability, SLOA searches different areas of the search space and crosses local optimal areas; with its high exploitation capability, it then converges as closely as possible to the global optimum. As the simulations show, SLOA provides the global optimal solution for F1, F6, F9, and F11. The simulation results thus indicate that the proposed SLOA is superior to the compared algorithms and much more competitive.

6. Conclusions and Future Studies

Numerous optimization problems in different sciences must be optimized with an appropriate technique. Population-based optimization algorithms are stochastic methods that can provide acceptable quasi-optimal solutions to such problems. In this paper, a new optimization algorithm called the Snow Leopard Optimization Algorithm (SLOA) has been presented in order to solve optimization problems in various sciences effectively and to provide quasi-optimal solutions that are more desirable and closer to the global optimum. The theory and the different phases of the proposed SLOA have been stated, and its mathematical model has been presented for implementation on optimization problems. The performance of the proposed SLOA has been tested on a standard set of twenty-three objective functions of the unimodal, high-dimensional multimodal, and fixed-dimensional multimodal types. To assess the quality of the obtained solutions, the results of the SLOA have been compared with those of eight other well-known algorithms, namely Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Gravitational Search Algorithm (GSA), Teaching-Learning Based Optimization (TLBO), Gray Wolf Optimizer (GWO), Grasshopper Optimization Algorithm (GOA), Marine Predators Algorithm (MPA), and Tunicate Swarm Algorithm (TSA).
The optimization results indicate the high ability of the SLOA to provide suitable quasi-optimal solutions for optimization problems of different types. The comparative analysis also showed that the SLOA delivers better results and is much more competitive than the eight compared optimization algorithms.
The authors offer several suggestions and perspectives for future studies. The design of binary and multi-objective versions of SLOA is a main line of development. In addition, applying the proposed SLOA to real-world optimization problems, as well as to various other types of optimization problems, is suggested; see, e.g., [34,35,36,37,38,39].
An important point about all optimization algorithms is that no particular algorithm can be claimed to be the best optimizer for all optimization problems. One limitation of the proposed SLOA is therefore that, for some problems, it may not provide a quasi-optimal solution very close to the global optimum. Another limitation is that it is always possible to design newer algorithms with a higher ability to converge to the optimal solution. In addition, as science and technology advance, more complex optimization problems arise that existing algorithms, including the proposed one, may not be able to solve, requiring the improvement of existing methods or the design of new ones.

Author Contributions

Conceptualization, Š.H. and P.C.; methodology, P.C.; software, Š.H. and P.C.; validation, Š.H., P.C. and M.H.; formal analysis, Z.B.; investigation, M.H.; resources, Š.H.; data curation, P.C.; writing, Š.H., Z.B. and P.C. All authors have read and agreed to the published version of the manuscript.

Funding

The research was supported by the Project of Specific Research, PrF UHK No. 2101/2021 and by the project KEGA, 036UKF-4/2019, Adaptation of the learning process using sensor networks and the Internet of Things.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The authors declare to honor the Principles of Transparency and Best Practice in Scholarly Publishing about Data.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Information on the twenty-three objective functions is provided in Table A1, Table A2 and Table A3.
Table A1. Unimodal test functions.
Objective function | Range | Dimensions | Fmin
1. $F_1(x) = \sum_{i=1}^{m} x_i^2$ | $[-100, 100]$ | 30 | 0
2. $F_2(x) = \sum_{i=1}^{m} |x_i| + \prod_{i=1}^{m} |x_i|$ | $[-10, 10]$ | 30 | 0
3. $F_3(x) = \sum_{i=1}^{m} \left( \sum_{j=1}^{i} x_j \right)^2$ | $[-100, 100]$ | 30 | 0
4. $F_4(x) = \max_i \left\{ |x_i|, \ 1 \le i \le m \right\}$ | $[-100, 100]$ | 30 | 0
5. $F_5(x) = \sum_{i=1}^{m-1} \left[ 100 \left( x_{i+1} - x_i^2 \right)^2 + \left( x_i - 1 \right)^2 \right]$ | $[-30, 30]$ | 30 | 0
6. $F_6(x) = \sum_{i=1}^{m} \left( \lfloor x_i + 0.5 \rfloor \right)^2$ | $[-100, 100]$ | 30 | 0
7. $F_7(x) = \sum_{i=1}^{m} i x_i^4 + \mathrm{random}[0, 1)$ | $[-1.28, 1.28]$ | 30 | 0
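As one possible reading of Table A1, a few of the unimodal functions in Python. The floor in F6 and the half-open noise interval in F7 follow the standard benchmark definitions and are assumptions where the typeset formulas are ambiguous.

```python
import math
import random

def f1_sphere(x):
    """F1: sphere function, minimum 0 at the origin."""
    return sum(v * v for v in x)

def f4_max_abs(x):
    """F4: largest absolute coordinate."""
    return max(abs(v) for v in x)

def f6_step(x):
    """F6: step function; flat plateaus, minimum 0 around the origin."""
    return sum(math.floor(v + 0.5) ** 2 for v in x)

def f7_quartic_noise(x):
    """F7: noisy quartic; the random[0, 1) term makes every
    evaluation stochastic."""
    return sum(i * v ** 4 for i, v in enumerate(x, start=1)) + random.random()
```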
Table A2. High dimensional Multimodal test functions.
Objective function | Range | Dimensions | Fmin
8. $F_8(x) = \sum_{i=1}^{m} -x_i \sin\left( \sqrt{|x_i|} \right)$ | $[-500, 500]$ | 30 | $-1.2569 \times 10^{4}$
9. $F_9(x) = \sum_{i=1}^{m} \left[ x_i^2 - 10 \cos(2\pi x_i) + 10 \right]$ | $[-5.12, 5.12]$ | 30 | 0
10. $F_{10}(x) = -20 \exp\left( -0.2 \sqrt{\frac{1}{m} \sum_{i=1}^{m} x_i^2} \right) - \exp\left( \frac{1}{m} \sum_{i=1}^{m} \cos(2\pi x_i) \right) + 20 + e$ | $[-32, 32]$ | 30 | 0
11. $F_{11}(x) = \frac{1}{4000} \sum_{i=1}^{m} x_i^2 - \prod_{i=1}^{m} \cos\left( \frac{x_i}{\sqrt{i}} \right) + 1$ | $[-600, 600]$ | 30 | 0
12. $F_{12}(x) = \frac{\pi}{m} \left\{ 10 \sin^2(\pi y_1) + \sum_{i=1}^{m-1} \left( y_i - 1 \right)^2 \left[ 1 + 10 \sin^2(\pi y_{i+1}) \right] + \left( y_m - 1 \right)^2 \right\} + \sum_{i=1}^{m} u(x_i, 10, 100, 4)$, with $y_i = 1 + \frac{x_i + 1}{4}$ and $u(x_i, a, k, n) = k (x_i - a)^n$ if $x_i > a$; $0$ if $-a \le x_i \le a$; $k (-x_i - a)^n$ if $x_i < -a$ | $[-50, 50]$ | 30 | 0
13. $F_{13}(x) = 0.1 \left\{ \sin^2(3\pi x_1) + \sum_{i=1}^{m-1} \left( x_i - 1 \right)^2 \left[ 1 + \sin^2(3\pi x_{i+1}) \right] + \left( x_m - 1 \right)^2 \left[ 1 + \sin^2(2\pi x_m) \right] \right\} + \sum_{i=1}^{m} u(x_i, 5, 100, 4)$ | $[-50, 50]$ | 30 | 0
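The first three multimodal functions of Table A2 after F8 (Rastrigin, Ackley, and Griewank) translate directly into code; a sketch assuming the standard definitions:

```python
import math

def f9_rastrigin(x):
    """F9: Rastrigin; many regularly spaced local minima, global minimum 0."""
    return sum(v * v - 10 * math.cos(2 * math.pi * v) + 10 for v in x)

def f10_ackley(x):
    """F10: Ackley; nearly flat outer region with a deep central funnel."""
    m = len(x)
    s1 = sum(v * v for v in x) / m
    s2 = sum(math.cos(2 * math.pi * v) for v in x) / m
    return -20 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20 + math.e

def f11_griewank(x):
    """F11: Griewank; the product term couples the dimensions."""
    s = sum(v * v for v in x) / 4000
    p = 1.0
    for i, v in enumerate(x, start=1):
        p *= math.cos(v / math.sqrt(i))
    return s - p + 1
```

All three attain their global minimum of 0 at the origin, which matches the Fmin column.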
Table A3. Fixed dimensional Multimodal test functions.
Objective function | Range | Dimensions | Fmin
14. $F_{14}(x) = \left( \frac{1}{500} + \sum_{j=1}^{25} \frac{1}{j + \sum_{i=1}^{2} \left( x_i - a_{ij} \right)^6} \right)^{-1}$ | $[-65.53, 65.53]$ | 2 | 0.998
15. $F_{15}(x) = \sum_{i=1}^{11} \left[ a_i - \frac{x_1 \left( b_i^2 + b_i x_2 \right)}{b_i^2 + b_i x_3 + x_4} \right]^2$ | $[-5, 5]$ | 4 | 0.00030
16. $F_{16}(x) = 4 x_1^2 - 2.1 x_1^4 + \frac{1}{3} x_1^6 + x_1 x_2 - 4 x_2^2 + 4 x_2^4$ | $[-5, 5]$ | 2 | $-1.0316$
17. $F_{17}(x) = \left( x_2 - \frac{5.1}{4 \pi^2} x_1^2 + \frac{5}{\pi} x_1 - 6 \right)^2 + 10 \left( 1 - \frac{1}{8\pi} \right) \cos x_1 + 10$ | $[-5, 10] \times [0, 15]$ | 2 | 0.398
18. $F_{18}(x) = \left[ 1 + \left( x_1 + x_2 + 1 \right)^2 \left( 19 - 14 x_1 + 3 x_1^2 - 14 x_2 + 6 x_1 x_2 + 3 x_2^2 \right) \right] \times \left[ 30 + \left( 2 x_1 - 3 x_2 \right)^2 \left( 18 - 32 x_1 + 12 x_1^2 + 48 x_2 - 36 x_1 x_2 + 27 x_2^2 \right) \right]$ | $[-5, 5]$ | 2 | 3
19. $F_{19}(x) = -\sum_{i=1}^{4} c_i \exp\left( -\sum_{j=1}^{3} a_{ij} \left( x_j - P_{ij} \right)^2 \right)$ | $[0, 1]$ | 3 | $-3.86$
20. $F_{20}(x) = -\sum_{i=1}^{4} c_i \exp\left( -\sum_{j=1}^{6} a_{ij} \left( x_j - P_{ij} \right)^2 \right)$ | $[0, 1]$ | 6 | $-3.22$
21. $F_{21}(x) = -\sum_{i=1}^{5} \left[ \left( X - a_i \right) \left( X - a_i \right)^T + c_i \right]^{-1}$ | $[0, 10]$ | 4 | $-10.1532$
22. $F_{22}(x) = -\sum_{i=1}^{7} \left[ \left( X - a_i \right) \left( X - a_i \right)^T + c_i \right]^{-1}$ | $[0, 10]$ | 4 | $-10.4029$
23. $F_{23}(x) = -\sum_{i=1}^{10} \left[ \left( X - a_i \right) \left( X - a_i \right)^T + c_i \right]^{-1}$ | $[0, 10]$ | 4 | $-10.5364$
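F21 to F23 are Shekel functions differing only in the number of terms. A sketch using the conventional Shekel data; the matrix `a` and vector `c` below are the standard values, assumed here because the table above does not list them.

```python
def shekel(x, k):
    """Shekel function with k terms: F21 (k=5), F22 (k=7), F23 (k=10).
    The global minimum lies near (4, 4, 4, 4) and deepens with k."""
    a = [[4, 4, 4, 4], [1, 1, 1, 1], [8, 8, 8, 8], [6, 6, 6, 6],
         [3, 7, 3, 7], [2, 9, 2, 9], [5, 5, 3, 3], [8, 1, 8, 1],
         [6, 2, 6, 2], [7, 3.6, 7, 3.6]]
    c = [0.1, 0.2, 0.2, 0.4, 0.4, 0.6, 0.3, 0.7, 0.5, 0.5]
    return -sum(
        1.0 / (sum((xi - aij) ** 2 for xi, aij in zip(x, a[i])) + c[i])
        for i in range(k))
```

Evaluating at (4, 4, 4, 4) recovers the Fmin values listed above to within rounding.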

References

  1. Dhiman, G. SSC: A hybrid nature-inspired meta-heuristic optimization algorithm for engineering applications. Knowl.-Based Syst. 2021, 222, 106926. [Google Scholar] [CrossRef]
  2. Dhiman, G.; Kumar, V. Emperor penguin optimizer: A bio-inspired algorithm for engineering problems. Knowl.-Based Syst. 2018, 159, 20–50. [Google Scholar] [CrossRef]
  3. Doumari, S.A.; Givi, H.; Dehghani, M.; Malik, O.P. Ring Toss Game-Based Optimization Algorithm for Solving Various Optimization Problems. Int. J. Intell. Eng. Syst. 2021, 14, 545–554. [Google Scholar]
  4. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef] [Green Version]
  5. Dehghani, M.; Montazeri, Z.; Dehghani, A.; Ramirez-Mendoza, R.A.; Samet, H.; Guerrero, J.M.; Dhiman, G. MLO: Multi leader optimizer. Int. J. Intell. Eng. Syst. 2020, 13, 364–373. [Google Scholar] [CrossRef]
  6. Yang, X.-S. Firefly algorithm, stochastic test functions and design optimisation. Int. J. Bio-Inspired Comput. 2010, 2, 78–84. [Google Scholar] [CrossRef]
  7. Yang, X.S.; Gandomi, A.H. Bat algorithm: A novel approach for global engineering optimization. Eng. Comput. 2012, 29, 464–483. [Google Scholar] [CrossRef] [Green Version]
  8. Dhiman, G.; Kaur, A. STOA: A bio-inspired based optimization algorithm for industrial engineering problems. Eng. Appl. Artif. Intell. 2019, 82, 148–174. [Google Scholar] [CrossRef]
  9. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl.-Based Syst. 2015, 89, 228–249. [Google Scholar] [CrossRef]
  10. Geem, Z.W.; Kim, J.H.; Loganathan, G.V. A new heuristic optimization algorithm: Harmony search. Simulation 2001, 76, 60–68. [Google Scholar] [CrossRef]
  11. Dhiman, G.; Kumar, V. Spotted hyena optimizer: A novel bio-inspired based metaheuristic technique for engineering applications. Adv. Eng. Softw. 2017, 114, 48–70. [Google Scholar] [CrossRef]
  12. Sadeghi, A.; Doumari, S.A.; Dehghani, M.; Montazeri, Z.; Trojovský, P.; Ashtiani, H.J. A New “Good and Bad Groups-Based Optimizer” for Solving Various Optimization Problems. Appl. Sci. 2021, 11, 4382. [Google Scholar] [CrossRef]
  13. Yilmaz, A.E.; Kuzuoglu, M. Calculation of optimized parameters of rectangular microstrip patch antenna using particle swarm optimization. Microw. Opt. Technol. Lett. 2007, 49, 2905–2907. [Google Scholar] [CrossRef]
  14. Duca, A.; Duca, L.; Ciuprina, G.; Egemen Yilmaz, A.; Altinoz, T. PSO algorithms and GPGPU technique for electromagnetic problems. Int. J. Appl. Electromagn. Mech. 2017, 53 (Suppl. S2), S249–S259. [Google Scholar] [CrossRef]
  15. Altinoz, O.T.; Yilmaz, A.E.; Weber, G.-W. Application of chaos embedded PSO for PID parameter tuning. Int. J. Comput. Commun. 2012, 7, 204–217. [Google Scholar] [CrossRef] [Green Version]
  16. Stanković, A.; Petrović, G.; Ćojbašić, Ž.; Marković, D. An application of metaheuristic optimization algorithms for solving the flexible job-shop scheduling problem. Oper. Res. Eng. Sci. Theory Appl. 2020, 3, 13–28. [Google Scholar] [CrossRef]
  17. Todorov, T.; Mitrev, R.; Penev, I. Force analysis and kinematic optimization of a fluid valve driven by shape memory alloys. Rep. Mech. Eng. 2020, 1, 61–76. [Google Scholar] [CrossRef]
  18. Ganguly, S. Multi-objective distributed generation penetration planning with load model using particle swarm optimization. Decis. Mak. Appl. Manag. Eng. 2020, 3, 30–42. [Google Scholar] [CrossRef]
  19. Negi, G.; Kumar, A.; Pant, S.; Ram, M. Optimization of Complex System Reliability using Hybrid Grey Wolf Optimizer. Decis. Mak. Appl. Manag. Eng. 2021, 4, 241–256. [Google Scholar] [CrossRef]
  20. Precup, R.-E.; Preitl, S.; Petriu, E.; Bojan-Dragos, C.-A.; Szedlak-Stinean, A.-I.; Roman, R.-C.; Hedrea, E.-L. Model-based fuzzy control results for networked control systems. Rep. Mech. Eng. 2020, 1, 10–25. [Google Scholar] [CrossRef]
  21. Gharib, M.R. Comparison of robust optimal QFT controller with TFC and MFC controller in a multi-input multi-output system. Rep. Mech. Eng. 2020, 1, 151–161. [Google Scholar] [CrossRef]
  22. Belen, S.; Kropa, E.; Weber, G.-W. Rumours within time dependent Maki-Thompson model. Preprint 2008, 127. in press. [Google Scholar]
  23. Goldberg, D.E.; Holland, J.H. Genetic Algorithms and Machine Learning. Mach. Learn. 1988, 3, 95–99. [Google Scholar] [CrossRef]
  24. Kennedy, J.; Eberhart, R. Particle Swarm Optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; IEEE: Piscataway, NJ, USA, 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  25. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  26. Rao, R.V.; Savsani, V.J.; Vakharia, D. Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput.-Aided Des. 2011, 43, 303–315. [Google Scholar] [CrossRef]
  27. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef] [Green Version]
  28. Saremi, S.; Mirjalili, S.; Lewis, A. Grasshopper optimisation algorithm: Theory and application. Adv. Eng. Softw. 2017, 105, 30–47. [Google Scholar] [CrossRef] [Green Version]
  29. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine Predators Algorithm: A nature-inspired metaheuristic. Expert Syst. Appl. 2020, 152, 113377. [Google Scholar] [CrossRef]
  30. Kaur, S.; Awasthi, L.K.; Sangal, A.L.; Dhiman, G. Tunicate Swarm Algorithm: A new bio-inspired based metaheuristic paradigm for global optimization. Eng. Appl. Artif. Intell. 2020, 90, 103541. [Google Scholar] [CrossRef]
  31. Janečka, J.E.; Jackson, R.; Yuquang, Z.; Diqiang, L.; Munkhtsog, B.; Buckley-Beason, V.; Murphy, W. Population monitoring of snow leopards using noninvasive collection of scat samples: A pilot study. Anim. Conserv. 2008, 11, 401–411. [Google Scholar] [CrossRef]
  32. Hemmer, H. Uncia uncia. Mamm. Species 1972, 20, 1–5. [Google Scholar] [CrossRef]
  33. Christiansen, P. Canine morphology in the larger Felidae: Implications for feeding ecology. Biol. J. Linn. Soc. 2007, 91, 573–592. [Google Scholar] [CrossRef] [Green Version]
  34. Sunquist, M.; Sunquist, F. Wild Cats of the World; University of Chicago Press: Chicago, IL, USA, 2017. [Google Scholar]
  35. Jackson, R.M. Home Range, Movements and Habitat Use of Snow Leopard (Uncia uncia) in Nepal; University of London: London, UK, 1996. [Google Scholar]
  36. Fox, J.L.; Chundawat, R.S. Observations of snow leopard stalking, killing, and feeding behavior. Mammalia 1988, 52, 137–140. [Google Scholar]
  37. Veteska, J.; Kursch, M. The research on the efficiency of the methods of talent management within organizations. New Educ. Rev. 2018, 52, 28–42. [Google Scholar] [CrossRef] [Green Version]
  38. Veteska, J.; Kursch, M.; Svobodova, Z.; Tureckiova, M. Longitudinal co-teaching projects–scoping review. In Proceedings of the 17th International Conference on Cognition and Exploratory Learning in Digital Age, Lisbon, Portugal, 18–20 November 2020. [Google Scholar]
  39. Coufal, P.; Hubálovský, Š.; Hubálovská, M. Application of the Basic Graph Theory in Autonomous Motion of Robots. Mathematics 2021, 9, 919. [Google Scholar] [CrossRef]
Figure 1. Flowchart of SLOA.
Figure 2. Boxplot of composition objective functions results for different optimization algorithms.
Figure 3. Sensitivity analysis of SLOA for maximum number of iterations.
Figure 4. Sensitivity analysis of SLOA for number of population members.
Figure 5. Sensitivity analysis of SLOA for P parameter.
Table 1. Parameter values for the comparative algorithms.
Algorithm | Parameter | Value
GA | Type | Real coded
GA | Selection | Roulette wheel (proportionate)
GA | Crossover | Whole arithmetic (probability = 0.8, α = 0.5, 1.5)
GA | Mutation | Gaussian (probability = 0.05)
PSO | Topology | Fully connected
PSO | Cognitive and social constants | C1 = 2, C2 = 2
PSO | Inertia weight | Linear reduction from 0.9 to 0.1
PSO | Velocity limit | 10% of dimension range
GSA | Alpha, G0, Rnorm, Rpower | 20, 100, 2, 1
TLBO | Teaching factor TF | TF = round(1 + rand), where rand is a random number in [0, 1]
GWO | Convergence parameter a | Linear reduction from 2 to 0
GOA | Cmin, Cmax, l, f | 0.0001, 1, 1.5, 0.5; r1, r2, r3 are random numbers in [0, 1]
TSA | Pmin, Pmax | 1, 4; c1, c2, c3 are random numbers in [0, 1]
MPA | Constant number | P = 0.5
MPA | Random vector | R is a vector of uniform random numbers in [0, 1]
MPA | Fish Aggregating Devices | FADs = 0.2
MPA | Binary vector | U = 0 or 1
Table 2. Optimization results of SLOA and the other algorithms on the unimodal test functions.
Function | Index | GA | PSO | GSA | TLBO | GWO | GOA | TSA | MPA | SLOA
F1 | Ave | 13.2405 | 1.77 × 10−5 | 2.03 × 10−17 | 8.34 × 10−60 | 1.09 × 10−58 | 8.62 × 10−8 | 7.71 × 10−38 | 3.27 × 10−21 | 0
F1 | std | 4.77 × 10−15 | 6.44 × 10−21 | 1.14 × 10−32 | 4.94 × 10−76 | 5.14 × 10−74 | 5.32 × 10−22 | 7.00 × 10−21 | 4.62 × 10−21 | 0
F2 | Ave | 2.4794 | 0.3411 | 2.37 × 10−8 | 7.17 × 10−35 | 1.30 × 10−34 | 0.6149 | 8.48 × 10−39 | 1.57 × 10−12 | 2.70 × 10−228
F2 | std | 2.23 × 10−15 | 7.45 × 10−17 | 5.18 × 10−24 | 6.69 × 10−50 | 1.91 × 10−50 | 3.25 × 10−15 | 5.92 × 10−41 | 1.42 × 10−12 | 0
F3 | Ave | 1536.8965 | 89.4922 | 79.3439 | 2.75 × 10−15 | 7.41 × 10−15 | 6.91 × 10−7 | 1.15 × 10−21 | 0.0864 | 1.86 × 10−52
F3 | std | 6.61 × 10−13 | 7.12 × 10−13 | 1.21 × 10−13 | 2.65 × 10−31 | 5.64 × 10−30 | 6.79 × 10−25 | 6.70 × 10−21 | 0.1444 | 3.35 × 10−50
F4 | Ave | 2.0942 | 3.9634 | 3.25 × 10−9 | 9.42 × 10−15 | 1.26 × 10−14 | 6.13 × 10−4 | 1.33 × 10−23 | 2.60 × 10−8 | 8.8 × 10−152
F4 | std | 2.23 × 10−15 | 1.99 × 10−16 | 2.03 × 10−24 | 2.12 × 10−30 | 1.06 × 10−29 | 6.20 × 10−22 | 1.15 × 10−22 | 9.25 × 10−9 | 0
F5 | Ave | 310.4273 | 50.2624 | 536.10695 | 146.4564 | 26.8607 | 45.3518 | 28.8615 | 46.0492 | 4.25997
F5 | std | 2.10 × 10−13 | 1.59 × 10−14 | 3.10 × 10−14 | 1.91 × 10−14 | 0 | 5.16 × 10−12 | 4.76 × 10−3 | 0.4219 | 1.51 × 10−14
F6 | Ave | 14.552 | 0.25 | 0 | 0.4435 | 0.6423 | 6.14 × 10−8 | 7.10 × 10−21 | 0.398 | 0
F6 | std | 3.18 × 10−15 | 7.56 × 10−4 | 0 | 4.22 × 10−16 | 6.21 × 10−17 | 1.32 × 10−24 | 1.12 × 10−25 | 0.1914 | 0
F7 | Ave | 5.68 × 10−3 | 0.1134 | 0.0206 | 0.0017 | 0.0008 | 0.7261 | 3.72 × 10−4 | 0.0018 | 1.47 × 10−4
F7 | std | 7.76 × 10−19 | 4.34 × 10−17 | 2.72 × 10−18 | 3.88 × 10−19 | 7.27 × 10−20 | 4.93 × 10−16 | 5.09 × 10−5 | 0.001 | 5.45 × 10−20
Table 3. Optimization results of SLOA and the other algorithms on the high-dimensional multimodal test functions.
Function | Index | GA | PSO | GSA | TLBO | GWO | GOA | TSA | MPA | SLOA
F8 | Ave | −8184.41 | −6908.6558 | −2849.0724 | −7408.6107 | −5885.1172 | −1783.2648 | −5740.3388 | −3594.16321 | −5583.06
F8 | std | 8.33 × 10+2 | 6.26 × 10+2 | 2.64 × 10+2 | 5.14 × 10+2 | 4.68 × 10+2 | 7.62 × 10+2 | 4.15 × 10+1 | 8.11 × 10+2 | 2.03 × 10−12
F9 | Ave | 62.4114 | 57.0613 | 1.63 × 10+1 | 1.02 × 10+1 | 8.53 × 10−15 | 3.1962 | 5.70 × 10−3 | 1.40 × 10+2 | 0
F9 | std | 2.54 × 10−14 | 6.36 × 10−15 | 3.18 × 10−15 | 5.56 × 10−15 | 5.64 × 10−30 | 1.70 × 10−14 | 1.46 × 10−3 | 2.63 × 10+1 | 0
F10 | Ave | 3.2218 | 2.1546 | 3.57 × 10−9 | 2.76 × 10−1 | 1.71 × 10−14 | 3.16 × 10−1 | 9.80 × 10−14 | 9.70 × 10−12 | 4.44 × 10−15
F10 | std | 5.16 × 10−15 | 7.94 × 10−16 | 3.70 × 10−25 | 2.56 × 10−15 | 2.75 × 10−29 | 6.35 × 10−14 | 4.51 × 10−12 | 6.13 × 10−12 | 0
F11 | Ave | 1.2302 | 0.0462 | 3.74 | 6.08 × 10−1 | 3.70 × 10−3 | 1.26 × 10−1 | 1.00 × 10−7 | 0 | 0
F11 | std | 8.44 × 10−16 | 3.10 × 10−18 | 2.78 × 10−15 | 1.99 × 10−16 | 1.26 × 10−18 | 5.62 × 10−16 | 7.46 × 10−7 | 0 | 0
F12 | Ave | 0.047 | 0.4806 | 0.0362 | 0.0203 | 0.0372 | 1.5263 | 0.0368 | 0.0851 | 0.010594
F12 | std | 4.65 × 10−17 | 1.86 × 10−16 | 6.21 × 10−17 | 7.76 × 10−16 | 4.34 × 10−17 | 7.53 × 10−10 | 1.55 × 10−2 | 0.0052 | 2.21 × 10−17
F13 | Ave | 1.2085 | 0.5084 | 0.002 | 0.3293 | 0.5763 | 3.52 × 10−1 | 2.96 | 0.4901 | 0.111886
F13 | std | 3.23 × 10−14 | 4.97 × 10−15 | 4.26 × 10−14 | 2.11 × 10−14 | 2.48 × 10−15 | 6.46 × 10−12 | 1.57 × 10−12 | 0.1932 | 1.55 × 10−16
Table 4. Optimization results of SLOA and the other algorithms on the fixed-dimensional multimodal test functions.
Table 4. Optimization results of SLOA and other algorithms on fixed dimensional test function.
GAPSOGSATLBOGWOGOATSAMPASLOA
F14Ave9.99 × 10−12.173.592.273.749.98 × 10−11.990.9980.998
std1.56 × 10−157.94 × 10−167.94 × 10−161.99 × 10−166.45 × 10−157.62 × 10−152.65 × 10−74.27 × 10−161.47 × 10−16
F15Ave5.40 × 10−25.35 × 10−22.40 × 10−33.30 × 10−30.00635.10 × 10−34.00 × 10−43.00 × 10−30.0003
std7.08 × 10−183.88 × 10−162.91 × 10−181.22 × 10−171.16 × 10−182.17 × 10−169.0125 × 10−44.10 × 10−157.15 × 10−19
F16Ave−1.0316−1.03−1.03−1.03−1.03−1.03−1.03−1.03−1.03163
std7.94 × 10−163.48 × 10−165.96 × 10−161.44 × 10−153.97 × 10−166.33 × 10−155.65 × 10−164.47 × 10−162.48 × 10−16
F17Ave0.43697.85 × 10−13.98 × 10−13.98 × 10−13.98 × 10−14.04 × 10−13.99 × 10−13.98 × 10−10.397887
std4.97 × 10−144.97 × 10−159.93 × 10−167.45 × 10−168.69 × 10−163.83 × 10−142.16 × 10−169.12 × 10−150
F18Ave4.3592333.000933333
std5.96 × 10−163.67 × 10−156.95 × 10−161.59 × 10−152.09 × 10−158.61 × 10−152.65 × 10−151.96 × 10−151.94 × 10−17
F19Ave−3.85434−3.8627−3.8627−3.8609−3.86−3.81−3.8066−3.8627−3.86278
std9.93 × 10−148.94 × 10−158.34 × 10−157.35 × 10−152.48 × 10−157.62 × 10−152.64 × 10−154.24 × 10−155.96 × 10−16
F20Ave−2.8239−3.2619−3.0396−3.2014−3.2523−3.24−3.3206−3.32−3.32199
std3.97 × 10−112.98 × 10−122.18 × 10−141.79 × 10−152.18 × 10−155.68 × 10−155.69 × 10−151.14 × 10−111.99 × 10−16
F21Ave−4.304−5.3891−5.1486−9.1746−9.6452−7.3862−5.5021−10.1532−10.1532
std1.59 × 10−121.49 × 10−132.98 × 10−148.54 × 10−156.55 × 10−155.61 × 10−105.46 × 10−132.54 × 10−117.94 × 10−16
F22Ave−5.12−7.63−9.02−1.00 × 10+1−1.04 × 10+1−8.81−5.0625−10.4029−10.4029
std6.29 × 10−157.59 × 10−151.65 × 10−121.53 × 10−141.99 × 10−158.42 × 10−148.46 × 10−142.82 × 10−116.36 × 10−16
F23Ave−6.56−6.16−8.90−9.29−1.01 × 10+1−9.96−10.3613−10.5364−10.5364
std3.87 × 10−152.78 × 10−157.15 × 10−146.19 × 10−154.57 × 10−158.61 × 10−147.65 × 10−123.99 × 10−111.45 × 10−16
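Each Ave/std pair in the tables above is the mean and standard deviation of the best objective value found across independent runs of an algorithm on one benchmark function. The following minimal sketch shows how such table entries are produced; the `random_search` optimizer is only a hypothetical placeholder standing in for SLOA (whose update rules are defined earlier in the paper), and the run count and bounds are illustrative assumptions, not the paper's exact setup.

```python
import random
import statistics

def sphere(x):
    # Sphere benchmark (F1 family): f(x) = sum(x_i^2), global minimum 0 at the origin
    return sum(v * v for v in x)

def random_search(obj, dim, lo, hi, iters, rng):
    # Placeholder optimizer standing in for SLOA (illustration only).
    best = [rng.uniform(lo, hi) for _ in range(dim)]
    best_f = obj(best)
    for _ in range(iters):
        cand = [rng.uniform(lo, hi) for _ in range(dim)]
        f = obj(cand)
        if f < best_f:
            best, best_f = cand, f
    return best_f

# One table cell = mean (Ave) and standard deviation (std) of the best
# objective value over independent runs.
rng = random.Random(42)
runs = [random_search(sphere, dim=5, lo=-100, hi=100, iters=500, rng=rng)
        for _ in range(20)]
print(f"Ave = {statistics.mean(runs):.4f}, std = {statistics.pstdev(runs):.4f}")
```

A lower Ave indicates better average accuracy, while a lower std indicates more repeatable behavior across runs; both are needed to read the comparisons above.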
Table 5. Obtained results from the Wilcoxon test (p ≥ 0.05).

| Compared Algorithms | Unimodal | High-Multimodal | Fixed-Multimodal |
|---|---|---|---|
| SLOA vs. MPA | 0.015625 | 0.0625 | 0.0625 |
| SLOA vs. TSA | 0.015625 | 0.4375 | 0.003906 |
| SLOA vs. GOA | 0.015625 | 0.03125 | 0.007813 |
| SLOA vs. GWO | 0.015625 | 0.4375 | 0.011719 |
| SLOA vs. TLBO | 0.015625 | 0.4375 | 0.005859 |
| SLOA vs. GSA | 0.03125 | 0.15625 | 0.019531 |
| SLOA vs. PSO | 0.015625 | 0.4375 | 0.003906 |
| SLOA vs. GA | 0.015625 | 0.4375 | 0.001953 |
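The p-values in Table 5 are exact Wilcoxon signed-rank values over paired per-function results. For instance, with the seven unimodal functions F1-F7 as pairs, the smallest possible two-sided p-value is 2/2^7 = 0.015625, which is exactly the value reported in most of the unimodal comparisons. The sketch below computes this exact p-value from scratch on hypothetical paired data (the arrays `a` and `b` are invented for illustration, not taken from the paper's tables).

```python
from itertools import product

def wilcoxon_exact_p(x, y):
    # Exact two-sided Wilcoxon signed-rank p-value.
    # Assumes no zero differences and no tied |differences|.
    d = [a - b for a, b in zip(x, y)]
    order = sorted(range(len(d)), key=lambda i: abs(d[i]))
    w_plus = sum(r + 1 for r, i in enumerate(order) if d[i] > 0)
    n = len(d)
    m = n * (n + 1) // 2                      # maximum possible rank sum
    w_small = min(w_plus, m - w_plus)
    # Under H0, each rank is + or - with probability 1/2; enumerate all 2^n patterns
    tail = 0
    for signs in product((0, 1), repeat=n):
        w = sum((r + 1) * s for r, s in enumerate(signs))
        if min(w, m - w) <= w_small:
            tail += 1
    return tail / 2 ** n

# Hypothetical mean errors of two algorithms on 7 test functions,
# where algorithm A wins on every function:
a = [0.0, 1e-20, 1e-5, 1e-9, 24.3, 1e-3, 1e-4]
b = [2.1, 3.9, 5e-3, 2.1e-1, 310.4, 14.6, 5.7e-3]
print(wilcoxon_exact_p(a, b))  # 0.015625 = 2 / 2**7
```

In practice the same result is obtained from `scipy.stats.wilcoxon`, which falls back to this exact null distribution for small samples without ties.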
Table 6. Results of the algorithm sensitivity analysis to the maximum number of iterations.

| Objective Function | 100 | 500 | 800 | 1000 |
|---|---|---|---|---|
| F1 | 4.74 × 10−39 | 3 × 10−213 | 0 | 0 |
| F2 | 2.42 × 10−21 | 1 × 10−112 | 3.1 × 10−181 | 2.7 × 10−228 |
| F3 | 0.030659 | 8.02 × 10−18 | 1.14 × 10−28 | 1.86 × 10−52 |
| F4 | 6.24 × 10−14 | 3.96 × 10−74 | 2.8 × 10−119 | 8.8 × 10−152 |
| F5 | 26.82582 | 25.78279 | 25.44565 | 24.25997 |
| F6 | 0 | 0 | 0 | 0 |
| F7 | 0.002388 | 0.000623 | 0.000463 | 0.000147 |
| F8 | −3883.26 | −4381.65 | −4649.31 | −5583.06 |
| F9 | 0 | 0 | 0 | 0 |
| F10 | 4.62 × 10−15 | 4.44 × 10−15 | 4.44 × 10−15 | 4.44 × 10−15 |
| F11 | 0 | 0 | 0 | 0 |
| F12 | 0.035171 | 0.030258 | 0.024427 | 0.010594 |
| F13 | 0.376541 | 0.349986 | 0.269467 | 0.111886 |
| F14 | 0.998008 | 1.097209 | 0.998 | 0.998 |
| F15 | 0.00059 | 0.000538 | 0.000446 | 0.0003 |
| F16 | −1.03163 | −1.03163 | −1.03163 | −1.03163 |
| F17 | 0.397887 | 0.397887 | 0.397887 | 0.397887 |
| F18 | 3 | 3 | 3 | 3 |
| F19 | −3.86278 | −3.86278 | −3.86278 | −3.86278 |
| F20 | −3.31457 | −3.32195 | −3.32196 | −3.32199 |
| F21 | −10.1393 | −10.1521 | −10.1528 | −10.1532 |
| F22 | −10.35 | −10.3681 | −10.4022 | −10.4029 |
| F23 | −10.5031 | −10.5357 | −10.5361 | −10.5364 |
Table 7. Results of the algorithm sensitivity analysis to the number of population members.

| Objective Function | 20 | 30 | 50 | 80 |
|---|---|---|---|---|
| F1 | 0 | 0 | 0 | 0 |
| F2 | 2.8 × 10−223 | 7.2 × 10−225 | 2.7 × 10−228 | 9.6 × 10−230 |
| F3 | 1.38 × 10−41 | 1.92 × 10−51 | 1.86 × 10−52 | 7.69 × 10−62 |
| F4 | 2.2 × 10−146 | 2 × 10−149 | 8.8 × 10−152 | 2.6 × 10−155 |
| F5 | 26.65993 | 26.17808 | 24.25997 | 24.12891 |
| F6 | 0 | 0 | 0 | 0 |
| F7 | 0.001409 | 0.000883 | 0.000147 | 9.81 × 10−5 |
| F8 | −4525.52 | −4571.32 | −5583.06 | −5894.9 |
| F9 | 0 | 0 | 0 | 0 |
| F10 | 5.15 × 10−15 | 4.44 × 10−15 | 4.44 × 10−15 | 4.44 × 10−15 |
| F11 | 0.001355 | 0 | 0 | 0 |
| F12 | 0.115373 | 0.054686 | 0.010594 | 0.00869 |
| F13 | 1.318733 | 0.699014 | 0.111886 | 0.053976 |
| F14 | 2.760396 | 2.272137 | 0.998 | 0.998 |
| F15 | 0.00156 | 0.000499 | 0.0003 | 0.0003 |
| F16 | −1.03163 | −1.03163 | −1.03163 | −1.03163 |
| F17 | 0.397916 | 0.39789 | 0.397887 | 0.397887 |
| F18 | 7.05 | 3 | 3 | 3 |
| F19 | −3.86275 | −3.86278 | −3.86278 | −3.86278 |
| F20 | −3.29751 | −3.29803 | −3.32199 | −3.32199 |
| F21 | −9.54574 | −10.1514 | −10.1532 | −10.1532 |
| F22 | −8.61249 | −10.1353 | −10.4029 | −10.4029 |
| F23 | −8.21416 | −10.213 | −10.5364 | −10.5364 |
Table 8. Results of the algorithm sensitivity analysis to the value of the P parameter.

| Objective Function | 0.2 | 0.375 | 0.6 | 0.8 |
|---|---|---|---|---|
| F1 | 0 | 0 | 0 | 3.7 × 10−218 |
| F2 | 1.5 × 10−230 | 2.7 × 10−228 | 1.8 × 10−187 | 1.2 × 10−128 |
| F3 | 6.71 × 10−44 | 1.86 × 10−52 | 1.76 × 10−36 | 4.61 × 10−28 |
| F4 | 1.7 × 10−164 | 8.8 × 10−152 | 1.43 × 10−94 | 1.38 × 10−50 |
| F5 | 24.82493 | 24.25997 | 24.48763 | 24.33416 |
| F6 | 0 | 0 | 0 | 0 |
| F7 | 0.000232 | 0.000147 | 0.000272 | 0.000438 |
| F8 | −4851.53 | −5583.06 | −4864.46 | −4643.05 |
| F9 | 0 | 0 | 0 | 0 |
| F10 | 4.44 × 10−15 | 4.44 × 10−15 | 4.44 × 10−15 | 7.99 × 10−15 |
| F11 | 0 | 0 | 0 | 0 |
| F12 | 0.006463 | 0.010594 | 0.021812 | 0.019961 |
| F13 | 0.23632 | 0.111886 | 0.225848 | 0.627989 |
| F14 | 0.998 | 0.998 | 0.998 | 0.998 |
| F15 | 0.000399 | 0.0003 | 0.000341 | 0.00037 |
| F16 | −1.03163 | −1.03163 | −1.03163 | −1.03163 |
| F17 | 0.397887 | 0.397887 | 0.397887 | 0.397887 |
| F18 | 3 | 3 | 3 | 3 |
| F19 | −3.86278 | −3.86278 | −3.86278 | −3.86278 |
| F20 | −3.32199 | −3.32199 | −3.32199 | −3.32199 |
| F21 | −10.1525 | −10.1532 | −10.1528 | −10.1532 |
| F22 | −10.4029 | −10.4029 | −10.4028 | −10.4027 |
| F23 | −10.5364 | −10.5364 | −10.5364 | −10.5364 |
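Tables 6-8 are produced by a one-at-a-time sensitivity sweep: the algorithm is re-run while varying a single setting (maximum iterations, population size, or the P parameter) and the best objective value is recorded for each configuration. The sketch below shows the shape of such a sweep; `toy_optimizer` is a hypothetical placeholder for SLOA (a simple move-toward-best heuristic, not the SLOA update rules), and the bounds and noise scale are illustrative assumptions.

```python
import random

def sphere(x):
    # Sphere benchmark: global minimum 0 at the origin
    return sum(v * v for v in x)

def toy_optimizer(obj, dim, iters, pop, rng):
    # Placeholder population-based optimizer standing in for SLOA.
    swarm = [[rng.uniform(-100, 100) for _ in range(dim)] for _ in range(pop)]
    best = min(swarm, key=obj)
    for _ in range(iters):
        for i, member in enumerate(swarm):
            # Move each member toward the current best position, with noise
            cand = [m + rng.uniform(0, 1) * (b - m) + rng.gauss(0, 0.1)
                    for m, b in zip(member, best)]
            if obj(cand) < obj(member):
                swarm[i] = cand
        best = min(swarm, key=obj)
    return obj(best)

# One-at-a-time sweeps, mirroring the layout of Tables 6 and 7:
rng = random.Random(1)
for iters in (100, 500, 800, 1000):
    print(f"iters={iters:5d}  best={toy_optimizer(sphere, 5, iters, 20, rng):.3e}")
for pop in (20, 30, 50, 80):
    print(f"pop={pop:3d}  best={toy_optimizer(sphere, 5, 200, pop, rng):.3e}")
```

Reading the tables the same way, SLOA's accuracy generally improves with more iterations and larger populations, while the P parameter shows a best setting around 0.375 on most functions.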
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Coufal, P.; Hubálovský, Š.; Hubálovská, M.; Balogh, Z. Snow Leopard Optimization Algorithm: A New Nature-Based Optimization Algorithm for Solving Optimization Problems. Mathematics 2021, 9, 2832. https://0-doi-org.brum.beds.ac.uk/10.3390/math9212832