Article

A Particle Swarm Algorithm Based on a Multi-Stage Search Strategy

School of Software, Yunnan University, Kunming 650504, China
*
Author to whom correspondence should be addressed.
Submission received: 10 August 2021 / Revised: 29 August 2021 / Accepted: 9 September 2021 / Published: 11 September 2021
(This article belongs to the Special Issue Swarm Models: From Biological and Social to Artificial Systems)

Abstract

Particle swarm optimization (PSO) has the disadvantages of easily becoming trapped in local optima and low search accuracy. Scores of approaches have been used to improve the diversity, search accuracy, and results of PSO, but the balance between exploration and exploitation remains sub-optimal. Many scholars have divided the population into multiple sub-populations with the aim of managing it in space. In this paper, a multi-stage search strategy, dominated by mutual repulsion among particles and supplemented by attraction, is proposed to control the traits of the population. From the angle of iteration time, the algorithm adequately enhances the entropy of the population while still satisfying convergence, creating a more balanced search process. Satisfactory results were obtained on the CEC2017 test functions by applying the strategy to the standard PSO and to improved PSO variants.

1. Introduction

Scholars have applied different approaches to the increasing number of structurally complex optimization problems, which are difficult to solve using traditional means, including evolutionary algorithms such as the genetic algorithm (GA) [1], the artificial bee colony (ABC) algorithm [2], differential evolution (DE) [3], simulated annealing (SA) [4], ant colony optimization (ACO) [5], and PSO [6].
PSO was proposed by Kennedy and Eberhart [7] in 1995 as a population-based heuristic optimization algorithm. With the advantages of a simple implementation, high efficiency, and few parameters, it is widely used in fields such as path planning [8], image segmentation [9], neural networks [10,11], data prediction [12], and noise control [13]. However, PSO is prone to getting trapped in local optima and lacks search accuracy in late iterations. To address these issues, PSO has been improved from four main points of view.
(1) Parameter tuning. Gunasundari et al. [14] proposed velocity-bounded Boolean PSO (VbBoPSO) based on binary PSO (BPSO), in which particles are initialized with random binary positions and velocities, with the velocity constrained to a specific range of values [15], to explore more regions and obtain better convergence. Sen et al. [16] proposed adaptive modified particle velocity PSO (MPV-PSO). Another method balances the search capability of PSO using a strategy of linearly decreasing inertia weights [17]. Ratnaweera et al. [18] proposed a self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients (HPSO-TVAC), which varies the acceleration coefficients to improve the search capabilities. Zhan et al. [19] proposed adaptive particle swarm optimization (APSO) with adaptive control parameters. Chen et al. [20] proposed chaotic dynamic weight particle swarm optimization (CDW-PSO), which uses chaotically mapped inertia weights to modify the search direction.
(2) Learning strategy. PSO is prone to getting trapped in local optima, and the search accuracy is not sufficient in late iterations because it uses only two experiences to guide particle learning. Improving this aspect of the learning strategy has attracted much attention. Liang et al. [21] proposed a dynamic multi-swarm particle swarm optimizer (DMS-PSO) with a dynamic neighborhood structure, in which the learning of each particle is no longer restricted to one population. Liu et al. [22] proposed a hierarchical simple time hierarchy strategy (THSPSO) algorithm using different learning strategies in different search phases. Zhan et al. [23] proposed orthogonal learning particle swarm optimization (OLPSO) with orthogonal learning, in which each particle obtains useful information from its own historical best experience and that of its neighbors. Xu et al. [24] proposed two-swarm learning particle swarm optimization (TSLPSO) based on dimensional learning, which constructs a learning paradigm for each particle by learning each dimension of its individual optimal position from the corresponding dimension of the population optimal position. Finally, Li et al. [25] proposed a learning strategy based on the collaboration of multiple populations to achieve information sharing and co-evolution among populations.
(3) Topology. Kennedy [26] pointed out that the use of topology is effective for population-based algorithms. In structured populations, information is often exchanged between closely linked individuals based on fitness and topological relationships as a way to slow down convergence. Mendes et al. [27] proposed fully informed particle swarm optimization (FIPSO), which uses a fully informed strategy in which each particle is updated based on the historical best experience of its neighbors. Janson et al. [28] constructed a dynamically changing tree topology in which each particle learns from its parent, effectively using the information of each particle.
(4) Algorithm crossover. Hybrid algorithms are a key research area for improving the performance of PSO algorithms. They incorporate operators such as crossover, mutation, and selection to improve the search quality of the population individuals and the overall efficiency of the algorithm. For example, the use of genetic operators can improve population diversity and convergence to the global optimum. Hybrid algorithms can better escape local optima and overcome certain inherent drawbacks of single algorithms. Zhang et al. [29] proposed differential mutation and novel social learning PSO (DSPSO), which combined four differential variation operations with social learning particle swarm optimization (SLPSO). Nasiraghdam et al. [30] proposed a new approach based on a hybrid genetic algorithm and PSO. Related studies [31,32,33,34] showed that a hybrid PSO algorithm incorporating other evolutionary algorithms not only improves population diversity but also prevents premature convergence and increases the probability of finding a global optimal solution.
In summary, the key to PSO improvement lies in balancing diversity and convergence, preventing premature convergence to a local optimum, and improving local exploitation. Previous work used entropy as an evaluation index to improve the algorithm but stopped at using entropy to constrain the population traits and did not explore a more optimal search process. Other work divided the search process among strategies that were either new or vastly different from one another. This paper proposes a multi-stage search strategy that fully maintains population diversity for the global search in the early stage and modifies the update formula in the late stage to help particles jump out of local optima, improving the local search ability. The strategy preserves the convergence of the original algorithm while improving its exploitation ability and search accuracy. It can be used as an improvement operator and implemented in heterogeneous comprehensive learning particle swarm optimization (HCLPSO) and TSLPSO. Both of these algorithms divide sub-populations to balance exploration and exploitation; however, adding a new balancing strategy can have adverse effects. This paper explores these two representative algorithms, which are compatible with the new strategy, to illustrate its effectiveness. The experimental results show that the strategy can effectively improve the probability of the algorithm converging to the global optimal solution.

2. Related Knowledge

2.1. Particle Swarm Optimization

PSO tends toward global convergence through the cooperation and competition of particles in the search space. The particle velocity and position updates are determined by the best position found by the corresponding particle (pbest) and the best position found by the whole population up to the current iteration of the algorithm (gbest). Let the total number of particles be N, the dimension D, and the maximum number of iterations maxiter. Then, the velocity of the ith particle at moment t is v_i(t) = [v_{i1}(t), v_{i2}(t), …, v_{iD}(t)]^T, the position is x_i(t) = [x_{i1}(t), x_{i2}(t), …, x_{iD}(t)]^T, the best position found by the corresponding particle at moment t is pbest_i(t) = [p_{i1}(t), p_{i2}(t), …, p_{iD}(t)]^T, and the best position found by the whole population is gbest = [g_1, g_2, …, g_D]^T. The particles search iteratively according to the following formulae:
v_i(t+1) = ω v_i(t) + c_1 r_1 (pbest_i(t) − x_i(t)) + c_2 r_2 (gbest − x_i(t)),  (1)
x_i(t+1) = x_i(t) + v_i(t+1),  (2)
where r1 and r2 are random numbers uniformly distributed in [0,1]. The two factors c1 and c2, known as “acceleration coefficients” or learning factors, are positive constants that determine how strongly the ith particle is accelerated towards pbest and gbest, respectively. The symbol ω denotes the inertia weight, which was originally introduced to address the velocity explosion problem.
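The update in Formulae (1) and (2) can be sketched for a single particle as follows; the default values for ω, c1, and c2 are conventional choices from the PSO literature, not values specified in this paper:

```python
import random

def pso_step(x, v, pbest, gbest, w=0.729, c1=1.49445, c2=1.49445, rng=random.random):
    """One velocity/position update for a single particle, per Formulae (1)-(2)."""
    new_v = [w * vd
             + c1 * rng() * (pb - xd)   # cognitive pull toward the particle's pbest
             + c2 * rng() * (gb - xd)   # social pull toward the swarm's gbest
             for vd, xd, pb, gb in zip(v, x, pbest, gbest)]
    new_x = [xd + vd for xd, vd in zip(x, new_v)]
    return new_x, new_v

# With r1 = r2 = 0, only the inertia term remains: v' = w * v.
x, v = [1.0, 2.0], [0.5, -0.5]
new_x, new_v = pso_step(x, v, [0.0, 0.0], [0.0, 0.0], w=0.5, rng=lambda: 0.0)
```

Fixing the random numbers to zero, as in the usage line, isolates the inertia term and makes the update easy to check by hand.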

2.2. TSLPSO Algorithm

Currently, TSLPSO is one of the algorithms that achieves superior results on the Congress on Evolutionary Computation (CEC) test functions [24]. One sub-population uses the learning paradigm constructed by the dimensional learning strategy to guide the local search of particles, and the other uses the learning paradigm constructed by the comprehensive learning strategy to guide the global search. The learning paradigm constructed by dimensional learning can guide the particles to search in better regions; however, if the majority of particles are near a local optimum and become trapped there, premature convergence occurs. To solve this problem, the comprehensive learning strategy is introduced to enhance population diversity and help particles escape local optima.

2.3. HCLPSO Algorithm

In HCLPSO, the population is divided into two sub-populations: the first aims to enhance exploration and the second to enhance exploitation. In both sub-populations, exemplars are generated using the comprehensive learning (CL) strategy: for each dimension of a particle, a random number is generated and compared with the particle's learning probability Pci. The velocity of the exploration sub-population is updated as
V_i^d = ω V_i^d + c · rand_i^d · (pbest_{f_i(d)}^d − X_i^d),  (3)
and the velocity of the exploitation sub-population as
V_i^d = ω V_i^d + c_1 rand1_i^d (pbest_{f_i(d)}^d − X_i^d) + c_2 rand2_i^d (gbest^d − X_i^d),  (4)
where f_i = [f_i(1), f_i(2), …, f_i(D)] indicates, for each dimension d, whether the ith particle follows its own pbest or that of another particle. When rand_i^d > Pci, f_i(d) = i; when rand_i^d < Pci, two particles X_j and X_k are randomly selected from the sub-population of the ith particle. If fitness(X_j) ≤ fitness(X_k), then f_i(d) = j; otherwise, f_i(d) = k.
Since the exploring particles are not allowed to access the information of the exploiting particles, there is no information flow from the exploiting sub-population to the exploring sub-population. To prevent loss of diversity, exploring and exploiting individuals do not interact. Inertia weights and dynamic acceleration coefficients are used for both sub-populations.
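The per-dimension exemplar choice described above can be sketched as follows. This is a minimal illustration of the CL selection rule, assuming minimisation; the function name and the sample fitness values are hypothetical:

```python
import random

def select_exemplars(i, dims, pbest_fitness, pc_i, rng=random):
    """CL exemplar choice: for each dimension, learn from the particle's own
    pbest, or (with probability pc_i) from the better of two random particles."""
    f = []
    for _ in range(dims):
        if rng.random() >= pc_i:
            f.append(i)                      # follow own pbest on this dimension
        else:                                # tournament between two random particles
            j, k = rng.sample(range(len(pbest_fitness)), 2)
            f.append(j if pbest_fitness[j] <= pbest_fitness[k] else k)
    return f

random.seed(0)
fit = [3.0, 1.0, 2.0, 5.0]                   # hypothetical pbest fitness values
f0 = select_exemplars(0, 5, fit, pc_i=0.5)   # mixed own/tournament exemplars
f_self = select_exemplars(2, 4, fit, pc_i=0.0)  # pc_i = 0: always own pbest
```

With pc_i = 0 the particle only ever follows its own pbest, which recovers the behaviour of a particle that draws every random number above its learning probability.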

3. Particle Swarm Optimization with a Multi-Stage Search Strategy

3.1. Multi-Stage Search Strategy

The paper uses the multi-stage search strategy to improve the search results based on the original mechanisms of PSO, TSLPSO, and HCLPSO. The multi-stage search strategy is described below.
Before population iteration, each dimension of each particle is given an additional attribute Ra, whose value is 1 or −1. As shown in Figure 1, blue signifies that the Ra attribute value of the particle is 1, and red signifies that it is −1. In subsequent action judgments, this value determines whether two particles attract or repel each other: particles with opposite Ra values attract each other, while those with the same value repel each other. In the first stage, a certain proportion of better-adapted particles are selected; these are neither attracted nor repelled, as indicated by the black box at the edge of the particles in Figure 1. The remaining particles (those without the black border in the figure), in addition to changing their position according to the original algorithm, are affected by the better-adapted particle swarm, and the strength of this influence is determined by the parameter pow. To prevent premature clustering, the Euclidean distance between particles is calculated as
d(X_i, X_j) = √( Σ_{d=1}^{D} (X_i^d − X_j^d)^2 ).  (5)
When the distance is less than a threshold, the particle with the worse fitness is bounced away.
In the second stage, no operation is applied to the particle population. In the third stage, only the Ra of the optimal particle is kept unchanged, and all other particles have the opposite property value of the optimal particle in each dimension, i.e., they will be attracted by it.
In velocity update Formula (1), particles are pulled by the gbest and pbest points. This seems similar to the attraction and repulsion of this strategy; in fact, however, PSO is easily trapped in local optima because particles are influenced by the current gbest point in the early stage, and the poor search accuracy in the later stage is caused by the attraction of the pbest point. Existing improved algorithms make dynamic linear changes to the learning factors c1 and c2 in velocity update Formula (1) to achieve the expected improvement. From another perspective, however, the improvement strategy here is independent of the velocity and position update Formulae (1) and (2), and this study sought to improve the algorithm in terms of exploration and exploitation and to control the shape of the population by relatively direct and simple means. From the analysis of the experimental results in Section 6, it can be seen that the algorithm's performance has a potential point of balance between exploration and exploitation, and thus a higher probability of obtaining the optimal solution.

3.2. Definition of Particle Actions

3.2.1. Particle Action Definition for Stage 1

The position and velocity of all particles are updated according to velocity and position update Formulae (1) and (2). The positions of the lagging particles additionally change according to their fitness values, with the direction of movement determined by the Ra value. The Ra value is an inherent property of each particle: its sign determines repulsion or attraction between particles, i.e., particles attract those with the opposite Ra value but repel those with the same value. Each particle has an Ra value for each dimension:
Ra_i^d = 1 if rand(0,1) > 0.5, and −1 otherwise.  (6)
As a result of the randomness of Ra, the action of a lagging particle also has randomness. When the force value pow is large enough, the new interaction force between particles can dominate the movement of the lagging particle swarm. The force value is calculated as
pow = Σ_{j=1}^{M} Σ_{d=1}^{D} (1 − |X_j^d − X_i^d| / searchrange) · Ra_j^d · Ra_i^d · str,  (7)
where str is a constant parameter that scales pow, which changes the speed of the particle, and the search range is limited by X_min and X_max. The particle itself has a velocity, which means that these particles may be pushed away from the gbest point, may approach it, or may oscillate in its vicinity. However, since the size of the lagging particle population is fixed, only a certain number of particles act to make up the difference. If a lagging particle improves its fitness beyond that of other particles after an iteration, other particles replace it in the lagging particle population. Put differently, this mechanism ensures that part of the particle swarm exploits locally while the rest explores globally; this study sought a balance between them.
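Formula (7) might be implemented as below for one lagging particle. The sign convention follows Section 3.1 (opposite Ra values attract, equal values repel); the function and argument names are illustrative, not from the paper:

```python
def stage1_force(x_i, ra_i, better_swarm, better_ra, search_range, strength):
    """Aggregate force on a lagging particle from the better swarm, per Formula (7).

    Closer neighbours contribute more (the 1 - |dx|/range term); the product of
    Ra signs sets the direction: opposite signs give a negative (attractive)
    contribution, equal signs a positive (repulsive) one.
    """
    pow_val = 0.0
    for x_j, ra_j in zip(better_swarm, better_ra):
        for d in range(len(x_i)):
            closeness = 1.0 - abs(x_j[d] - x_i[d]) / search_range
            pow_val += closeness * ra_j[d] * ra_i[d] * strength
    return pow_val

# One better particle at 1.0, lagging particle at 0.0, search range 10, str = 25.
attract = stage1_force([0.0], [1], [[1.0]], [[-1]], 10.0, 25.0)  # opposite Ra
repel = stage1_force([0.0], [1], [[1.0]], [[1]], 10.0, 25.0)     # same Ra
```

Swapping only the neighbour's Ra sign flips the sign of the force while its magnitude (0.9 × 25 = 22.5 here) is unchanged, which matches the attract/repel symmetry described above.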
To limit premature convergence, i.e., the tendency to become trapped in local optima, the Euclidean distance between particles is incorporated in stage 1. Once the distance between any two particles falls below a threshold, the particle with the worse fitness value is selected and a force is applied to it in the direction away from the gbest point, pushing it out of the region in which it was originally located. This mechanism coexists with the repulsion and attraction strategy described above, and the two affect each other.

3.2.2. Particle Action Definition for Stage 2

As shown in Figure 2, stage 2 is mainly used to judge the population traits and to control the transition between stages; in this study, no additional actions were performed on the particles during it. By monitoring the diversity and convergence of the improved algorithm, parameters were set to control the end of stage 1 and the beginning of stage 3. Stage 2 did not exist in our original design, but we found that incorporating it improved the experimental results, so it was retained.

3.2.3. Particle Action Definition for Stage 3

In stage 3, we redefined the additional behavior of the particles: the Ra value of the particle currently closest to the gbest point is maintained, and the Ra values of all other particles are set to the opposite in each dimension, i.e., all other particles are attracted by the particle near the gbest point. The pow force decreases with the number of iterations and decays to 0 by the end of the run:
pow = pow − (str · N) / (EFS · (0.5 − stage)),  (8)
where “stage” is the parameter that divides the stages, N is the population size, and EFS is the total number of fitness evaluations.
It should be noted that the optimal particle may change, so the pow of each particle is recalculated after each iteration; apart from the optimal particle, there is no Ra influence between particles in this stage.
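The decay schedule in Formula (8) can be checked numerically. The sketch below computes the per-iteration decrement under the reading that stage 3 occupies the last (0.5 − stage) fraction of the run; the function name and the example run sizes (N = 100, 10^5 evaluations) are assumptions for illustration:

```python
def stage3_pow_decrement(str_, n_particles, total_evals, stage):
    """Per-iteration decrement of pow in stage 3, per Formula (8).

    Stage 3 spans the last (0.5 - stage) fraction of the run, so subtracting
    this amount every iteration drives an initial pow of str_ down to 0 at
    the final iteration.
    """
    return str_ * n_particles / (total_evals * (0.5 - stage))

# Hypothetical run: str = 25, N = 100, 10^5 fitness evaluations, stage = 0.1.
dec = stage3_pow_decrement(25.0, 100, 100_000, 0.1)
iters_in_stage3 = (0.5 - 0.1) * 100_000 / 100   # iterations in stage 3
```

Multiplying the decrement by the number of stage-3 iterations recovers the full initial strength, confirming that pow reaches exactly 0 at the end of the run under this reading.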
Unlike the standard PSO, in which particles are influenced by the information of the pbest and gbest points, in stage 3, particles are influenced by the information of the current optimal particle position, which does not overlap with the gbest point position. The subtle differences between the two can be seen in Figure 3. The addition of a new guidance factor can help improve the convergence performance of PSO, and such a definition reduces the problem of poor search accuracy caused by the traction of the pbest point in the late stage of the search. Moreover, the current optimal particle position and the gbest point jointly determine the search direction to improve convergence performance.

3.3. Particle Property Change History

Ra is randomly assigned to each particle at the beginning of stage 1 and changes in the following situations:
(1)
A particle distance determination is triggered in stage 1: the Ra of the less adapted of the two particles is reassigned the same value as that of the other particle;
(2)
At the beginning of stage 3, every particle other than the best adapted one is given an Ra value opposite to that of the best particle;
(3)
When the optimal particle changes in stage 3, (2) is repeated.

3.4. Inter-Particle Action

The interactions between particles are summarized as follows; repulsion and attraction are realized as changes in the direction and magnitude of the particle velocity:
(1)
In all phases, on the basis of the original algorithm's velocity and position update formulae, all particles are subject to attraction from the gbest point;
(2)
In stage 1, lagging particles are acted on by the force pow in Formula (7) exerted by the better particle population;
(3)
In stage 1, the worse of two particles that come too close is subject to a repulsive force;
(4)
In stage 3, non-optimal particles are attracted by the optimal particle.

3.5. Framework of RaPSO Algorithm

Input: Tmax is the maximum number of iterations, Dim is the number of dimensions, Ra denotes the additional particle property, pow denotes the force parameter, and a and b are constants controlling the phase division.
Output: GbestValue, the global optimal solution.
Figure 4 shows the framework of the RaPSO algorithm (Algorithm 1).
Algorithm 1.
1  Initialize the population: particle velocities, positions, and fitness; initialize pbest and gbest
2  Randomly assign Ra to each particle
3  for t = 1 to Tmax do
4      Update the velocity v of the particles according to the particle population update formula
5      if t/Tmax < a then    // stage 1
6          Calculate the additional offset of the particle position based on pow and Ra
7          Update the particle velocity v and position x again
8          Update pbest and gbest
9          Perform distance detection and adjust Ra
10     else if t/Tmax > b then    // stage 3
11         Set the Ra of each non-optimal particle opposite to that of the optimal particle
12         Calculate the additional offset of the particle position based on pow and Ra
13         Update the particle velocity v and position x again
14     end if
15 end for
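Algorithm 1 can be condensed into a runnable sketch as below. This is a simplified stand-in, not the authors' implementation: the stage-1 Ra interaction is reduced to a small Ra-signed offset on the worst `prop` fraction of particles, and the stage-3 attraction is a decaying pull toward gbest; all parameter defaults follow Section 5.2 where stated and are otherwise conventional PSO values:

```python
import random

def rapso(fitness, dim, n=30, t_max=200, lb=-5.0, ub=5.0,
          w=0.729, c1=1.49445, c2=1.49445,
          str_=25.0, prop=0.5, stage=0.1, seed=0):
    """Condensed sketch of Algorithm 1 (RaPSO) for minimisation."""
    rng = random.Random(seed)
    X = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    Ra = [[rng.choice((1, -1)) for _ in range(dim)] for _ in range(n)]  # line 2
    pbest = [x[:] for x in X]
    pfit = [fitness(x) for x in X]
    g = min(range(n), key=lambda i: pfit[i])
    gbest, gfit = pbest[g][:], pfit[g]
    a, b = 0.5 - stage, 0.5 + stage          # phase-division constants
    for t in range(t_max):
        frac = t / t_max
        worst = set(sorted(range(n), key=lambda i: pfit[i])[int(n * (1 - prop)):])
        for i in range(n):
            for d in range(dim):             # line 4: standard PSO update
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (pbest[i][d] - X[i][d])
                           + c2 * rng.random() * (gbest[d] - X[i][d]))
                X[i][d] = min(ub, max(lb, X[i][d] + V[i][d]))
            if frac < a and i in worst:      # stage 1: Ra-signed nudge (cf. Formula (7))
                pow_ = str_ / ((ub - lb) * t_max)
                for d in range(dim):
                    X[i][d] = min(ub, max(lb, X[i][d] + pow_ * Ra[i][d]))
            elif frac > b:                   # stage 3: decaying pull (cf. Formula (8))
                pow_ = str_ * (1.0 - frac) / t_max
                for d in range(dim):
                    X[i][d] += pow_ * (gbest[d] - X[i][d])
            f = fitness(X[i])
            if f < pfit[i]:
                pfit[i], pbest[i] = f, X[i][:]
                if f < gfit:
                    gfit, gbest = f, X[i][:]
    return gbest, gfit

sphere = lambda x: sum(v * v for v in x)     # simple test objective
best, best_fit = rapso(sphere, dim=5)
```

On the sphere function the sketch converges close to the origin, illustrating that the staged offsets do not break the convergence of the underlying PSO loop.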

4. Entropy and Convergence

This section discusses the effectiveness of the strategy in terms of entropy and convergence. An increase in population disorder often means a decrease in convergence performance; however, if it increases the likelihood that the population finds the optimal value without affecting convergence, i.e., it produces only small fluctuations in the convergence curve, it helps improve the algorithm. We chose population entropy and DBSCAN as reference tools for the figures that establish that the multi-stage search strategy works.

4.1. Population Entropy

Entropy is a measure of the degree of chaos in a system, and population entropy is an indicator of group diversity. The general definition of entropy may not apply in high-dimensional space; therefore, we chose population entropy as the evaluation criterion for algorithm convergence. In this paper, we assume that the number of particles is M and that the historical best positions of the particles have fitness values f_pb. Calculating the population entropy can be divided into three steps:
Step 1: Calculate the minimum fitness of the particles' historical best positions, f_min = min(f_pb), and the maximum fitness, f_max = max(f_pb). The interval of consideration is [f_min, f_max];
Step 2: Divide the interval [f_min, f_max] into M equidistant small regions and count the number of f_pb values in each small interval, k_i, i = 1, 2, …, M, with Σ_{i=1}^{M} k_i = M;
Step 3: Calculate the population entropy, E_t = −Σ_{i=1}^{M} p_i log p_i, where p_i = k_i / M.
The larger the population entropy, the more chaotic the particle distribution in space and the less convergent the algorithm.
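The three steps above can be sketched directly; the function name and the example fitness lists are illustrative, and the number of bins defaults to the swarm size M as in Step 2:

```python
import math

def population_entropy(f_pb, n_bins=None):
    """Population entropy of the swarm's pbest fitness values (Steps 1-3).

    Bins [f_min, f_max] into M equal intervals, counts pbest values per bin,
    and returns -sum(p_i * log p_i); larger values mean a more chaotic swarm.
    """
    m = n_bins or len(f_pb)
    f_min, f_max = min(f_pb), max(f_pb)
    if f_max == f_min:                # fully converged swarm: zero entropy
        return 0.0
    width = (f_max - f_min) / m
    counts = [0] * m
    for f in f_pb:
        idx = min(int((f - f_min) / width), m - 1)   # clamp f_max into last bin
        counts[idx] += 1
    total = len(f_pb)
    return -sum((k / total) * math.log(k / total) for k in counts if k > 0)

# A uniformly spread swarm has higher entropy than a clustered one.
spread = [float(i) for i in range(10)]
clustered = [0.0] * 9 + [10.0]
```

Comparing the two example swarms shows the intended behaviour: the spread swarm occupies every bin once, while the clustered swarm concentrates in one bin and scores much lower.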
We compared PSO and the improved algorithm RaPSO using CEC2017-Fun10 as the objective function, conducting experiments in 10-dimensional space. The fitness value in Figure 5a is higher than that in Figure 5b. As can be seen from Figure 5a, the population entropy of PSO shows an essentially continuous decreasing trend, and PSO performs almost no effective exploration in the middle and at the end of the iterations, after which the fitness value no longer changes. This also demonstrates that PSO is easily trapped in local optima. Figure 5b shows the history of the population entropy after adding the stage search strategy, from which we can see that the algorithm is in a more chaotic state from beginning to end, and the population entropy is clearly larger than that of PSO at each moment. Nevertheless, the experimental results are better than those in Figure 5a, which shows that, under the premise of algorithm convergence, increasing population entropy benefits the particles' search for the optimal solution: increasing population entropy at the beginning of an iteration helps global exploration, while increasing it at the end benefits local exploitation.

4.2. DBSCAN

DBSCAN measures the degree of population disorder from the perspective of density. It defines clusters as maximal sets of density-connected points, which allows it to divide regions of sufficient density into clusters and to find clusters of arbitrary shape in a noisy spatial database. The DBSCAN procedure is as follows:
Step 1: Define the neighbourhood radius e and the core-point threshold;
Step 2: Starting from one particle, find all particles that are density-connected to it; this forms one cluster. Then start again from another unvisited particle;
Step 3: Repeat Step 2 until all particles have been traversed.
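A minimal self-contained sketch of the procedure above follows; the example point set (two tight clusters plus one outlier, standing in for swarm positions) is hypothetical:

```python
def dbscan(points, eps, min_pts):
    """Minimal DBSCAN (Steps 1-3): returns a label per point,
    0, 1, ... for clusters and -1 for noise."""
    def neighbours(i):
        xi = points[i]
        return [j for j, xj in enumerate(points)
                if sum((a - b) ** 2 for a, b in zip(xi, xj)) <= eps ** 2]

    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbours(i)
        if len(nbrs) < min_pts:
            labels[i] = -1                  # provisionally noise
            continue
        cluster += 1                        # i is a core point: new cluster
        labels[i] = cluster
        queue = list(nbrs)
        while queue:                        # grow the density-connected set
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster         # border point reclaimed from noise
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_nbrs = neighbours(j)
            if len(j_nbrs) >= min_pts:      # j is also a core point: expand
                queue.extend(j_nbrs)
    return labels

# Two dense groups and one far-away outlier (hypothetical 2-D swarm positions).
pts = [(0, 0), (0.1, 0), (0, 0.1),
       (5, 5), (5.1, 5), (5, 5.1),
       (100, 100)]
labels = dbscan(pts, eps=0.5, min_pts=2)
```

The two tight groups come out as separate clusters and the distant point is labelled noise, which is exactly the clustered-versus-dispersed distinction used to read Figure 6.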
We used the same conditions as in the population entropy experiment and plotted Figure 6. From the figure, we can see that DBSCAN depicts the clustering behaviour of the population more graphically than population entropy. The PSO algorithm enters a high degree of clustering after 400 iterations, at which point the particle population cannot jump out of the local optimum due to insufficient guidance information. Similarly, it can be seen that the overall convergence performance of the population remains unchanged after adding the strategy, while the degree of disorder is increased.

5. Experimental Analysis of the Improved Algorithm

5.1. Experimental Design Method

We chose 29 test functions from CEC2017 as the objective functions for optimization. Among them, Fun1 is unimodal, Fun3–Fun10 are multimodal, Fun11–Fun20 are hybrid, and Fun21–Fun30 are composite. The details are shown in Table 1.

5.2. Experimental Parameter Selection

The 29 test functions of CEC2017 were selected as target functions for optimization tests, and experiments were conducted with a population size N = 100, repeat = 30, and D = 10, 30. Other parameter settings are shown in Table 2. In order to achieve a better comparison, the parameters were all selected based on previous experiments [24].
After adding the improvement mechanism, there were three additional controllable parameters in each algorithm: str, prop, and stage. In order to ensure the accuracy of parameter selection, the number of experimental repetitions was increased to 100.

5.2.1. The Parameter Str

The parameter str affects the magnitude of all repulsive and attractive forces in stages 1 and 3. The algorithm used in the experiment was the standard PSO. With the other parameters unchanged, Fun5, Fun14, and Fun23 in CEC2017 were selected as objects for the comparison experiments. The x-axis in Figure 7 represents the value of parameter str, and the y-axis represents the average fitness after normalization. The range shown on the x-axis was obtained after preliminary screening over a much larger experimental range. The sub-graph in Figure 7 is a detailed view of the x-axis from 20 to 30. The results in Figure 7 show that str = 25 gives the better results, so this value was chosen.

5.2.2. The Parameter Prop

The parameter prop determines the proportion of poorly adapted particles in stage 1. Because only Fun5 in CEC2017 was selected as the test function, the fitness values on the y-axis are unprocessed. The other settings are the same as in the experiment in Section 5.2.1. Based on the experimental results in Figure 8, a value of 0.5 was chosen, meaning that half of the particles are attracted to or repelled by the other half.

5.2.3. The Parameter Stage

The parameter stage is the basis for dividing the three stages. Similarly, we used the standard PSO as the experimental algorithm in 10 dimensions, with Fun25 in CEC2017 as the test function. Based on the experimental results in Table 3, stage was set to 0.1, which means that the first stage accounts for 40% of the iterations, the second stage for 20%, and the third stage for 40%.

5.3. Analysis of Experimental Results

5.3.1. Standard PSO Based on Multi-Stage Search Strategy

Table 4 and Table 5 present the results of PSO and RaPSO in 10 and 30 dimensions, respectively, using the parameters given in Section 5.2. The better results are shown in bold.

5.3.2. TSLPSO Based on Multi-Stage Search Strategy

Table 6 and Table 7 present the results of TSLPSO and RaTSLPSO in 10 and 30 dimensions, respectively, using the parameters given in Section 5.2. The better results are shown in bold.

5.3.3. HCLPSO Based on Multi-Stage Search Strategy

Table 8 and Table 9 present the results of HCLPSO and RaHCLPSO in 10 and 30 dimensions, respectively, using the parameters given in Section 5.2. The better results are shown in bold.

6. Analysis of the Experimental Results

Figure 9 shows the test functions with the more significant optimization effects from the six tables in Section 5.3. The x-axis is the number of fitness evaluations (FEs; 10^5 for 10 dimensions and 3 × 10^5 for 30 dimensions). The y-axis is the average fitness value. Different colors represent different original algorithms; solid lines represent algorithms with the strategy and dotted lines those without. According to the figure, the algorithms adopting the strategy achieve lower (better) fitness and better search performance. Since the strategy increases population disorder during early iterations, slower convergence and poorer fitness values in early iterations can be observed in certain cases in the figure; however, these are usually remedied by the end of the search. Full exploration in the preliminary stage lays a more solid foundation for later exploitation and can in turn yield better fitness values. According to Figure 9, the red dotted curve of PSO is always the first to stabilize, which means that PSO is the most likely to become trapped in local optima. Both TSLPSO and HCLPSO improve on this, but the assistance of the new strategy produces better results still. According to the figure, the strategy has a significant optimizing effect on PSO and on its two improved algorithms. The same conclusion is drawn in 10 and 30 dimensions, which further verifies the effectiveness of the strategy with respect to its scope of application.

7. Conclusions

A relatively direct and simple method is proposed herein to address problems such as premature convergence and poor search precision in PSO, i.e., improving particle swarm diversity in the early iterations of the algorithm and introducing guidance toward the best particle location in the population at the end of the iterations to enhance search precision. The concept was applied to PSO and two of its improved algorithms and acts as a multi-stage search strategy operator. It guides the particle swarm toward different goals in different stages, improves the comprehensive exploration and exploitation ability, and increases population disorder without changing the convergence performance of the algorithm too much, thus effectively improving the particles' ability to jump out of local optima. Particle swarm behavior before and after applying the strategy was compared and described, and the effectiveness of the strategy was verified with the population entropy and DBSCAN tools. The contrast experiments indicated the selection process for the three parameters of the proposed strategy. Moreover, according to the experimental results on the test functions, the strategy can improve search performance.
The effectiveness of the proposed strategy was demonstrated experimentally; a mathematical analysis and proof are still required. In future work, we hope to prove that the selected parameter values are optimal, establish the general applicability of the strategy, and demonstrate the adequacy and necessity of the chosen stage division.
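As a concrete illustration of the population-entropy measure used for verification, the following sketch bins 1-D particle positions and computes Shannon entropy. The equal-width binning is an assumption; the paper does not restate its exact estimator in this section.

```python
import math

def population_entropy(positions, low, high, bins=10):
    """Shannon entropy of a 1-D swarm's positions over equal-width bins.

    Higher entropy = particles spread across more of the search range
    (more disorder/diversity); a converged swarm collapses into one bin
    and its entropy tends to 0.
    """
    counts = [0] * bins
    width = (high - low) / bins
    for x in positions:
        i = min(int((x - low) / width), bins - 1)  # clamp the upper edge
        counts[i] += 1
    n = len(positions)
    return -sum((c / n) * math.log(c / n) for c in counts if c > 0)
```

A uniformly spread swarm attains the maximum value log(bins), while a fully clustered swarm scores 0, matching the intuition behind Figure 5.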

Author Contributions

Conceptualization, Y.S.; methodology, Y.S.; project administration, W.C. and H.K.; software, X.S.; validation, X.S. and Q.C.; visualization, W.C. and H.Z.; formal analysis, Y.S.; investigation, H.Z.; resources, H.K.; data curation, W.C.; writing—original draft preparation, W.C.; writing—review and editing, Y.S. and H.Z.; supervision, X.S.; funding acquisition, Y.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant numbers 61663046 and 61876166, and by the Open Foundation of the Key Laboratory of Software Engineering of Yunnan Province, grant numbers 2020SE308 and 2020SE309.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hosseinabadi, A.A.R.; Vahidi, J.; Saemi, B.; Sangaiah, A.K.; Elhoseny, M. Extended Genetic Algorithm for solving open-shop scheduling problem. Soft Comput. 2018, 23, 5099–5116.
  2. Xue, Y.; Jiang, J.; Zhao, B.; Ma, T. A self-adaptive artificial bee colony algorithm based on global best for global optimization. Soft Comput. 2018, 22, 2935–2952.
  3. Li, W.; Li, S.; Chen, Z.; Zhong, L.; Ouyang, C. Self-feedback differential evolution adapting to fitness landscape characteristics. Soft Comput. 2017, 23, 1151–1163.
  4. Wei, L.; Zhang, Z.; Zhang, D.; Leung, S.C. A simulated annealing algorithm for the capacitated vehicle routing problem with two-dimensional loading constraints. Eur. J. Oper. Res. 2018, 265, 843–859.
  5. Engin, O.; Güçlü, A. A new hybrid ant colony optimization algorithm for solving the no-wait flow shop scheduling problems. Appl. Soft Comput. 2018, 72, 166–176.
  6. Yadav, S.; Ekbal, A.; Saha, S. Feature selection for entity extraction from multiple biomedical corpora: A PSO-based approach. Soft Comput. 2018, 22, 6881–6904.
  7. Eberhart, R.; Kennedy, J. A new optimizer using particle swarm theory. In Proceedings of MHS'95, the Sixth International Symposium on Micro Machine and Human Science, Nagoya, Japan, 4–6 October 1995; pp. 39–43.
  8. Han, Y.S.; Zhang, L.; Tan, H.Y.; Xue, X.L. Mobile robot path planning based on improved particle swarm optimization algorithm. J. Xi’an Polytech. Univ. 2019, 33, 517–523. (In Chinese)
  9. Yang, Y. Research on image segmentation algorithm with dynamic particle swarm optimization k-means. Modern Comput. 2019, 8, 63–67. (In Chinese)
  10. Zhao, H.W.; Li, S.P. Research on resources scheduling method in cloud computing based on PSO and RBF neural network. Comput. Sci. 2016, 43, 113–117. (In Chinese)
  11. Liu, H.; Shen, X.; Qu, H.; Wang, P. Neural network PID biogas dry fermentation temperature control by particle swarm optimization. Comput. Eng. Design 2017, 38, 784–788.
  12. Sun, Q.; Ren, S.; Le, Y.G. Research on data prediction model based on particle swarm optimization extreme learning machine. J. Sichuan Inst. Technol. (Nat. Sci. Ed.) 2019, 32, 35–41. (In Chinese)
  13. Zhou, X.; Cheng, X.; Peng, W.; Sun, L.; Lu, J. A particle swarm optimization algorithm-based noise control method for cooling tower clusters. J. Xi’an Univ. Eng. 2019, 33, 568–574.
  14. Gunasundari, S.; Janakiraman, S.; Meenambal, S. Velocity bounded Boolean particle swarm optimization for improved feature selection in liver and kidney disease diagnosis. Expert Syst. 2016, 56, 28–47.
  15. Marandi, A.; Afshinmanesh, F.; Shahabadi, M.; Bahrami, F. Boolean Particle Swarm Optimization and Its Application to the Design of a Dual-Band Dual-Polarized Planar Antenna. In Proceedings of the 2006 IEEE International Conference on Evolutionary Computation, Vancouver, BC, Canada, 16–21 July 2006.
  16. Sen, T.; Pragallapati, N.; Agarwal, V.; Kumar, R. Global maximum power point tracking of PV arrays under partial shading conditions using a modified particle velocity-based PSO technique. IET Renew. Power Gener. 2017, 12, 555–564.
  17. Shi, Y.; Eberhart, R. A modified particle swarm optimizer. In Proceedings of the 1998 IEEE International Conference on Evolutionary Computation (IEEE World Congress on Computational Intelligence), Anchorage, AK, USA, 4–9 May 1998; pp. 69–73.
  18. Ratnaweera, A.; Halgamuge, S.; Watson, H.C. Self-Organizing Hierarchical Particle Swarm Optimizer with Time-Varying Acceleration Coefficients. IEEE Trans. Evol. Comput. 2004, 8, 240–255.
  19. Zhan, Z.H.; Zhang, J.; Li, Y.; Chung, H.S.H. Adaptive particle swarm optimization. IEEE Trans. Syst. Man Cybern. Part B Cybern. 2009, 39, 1362–1381.
  20. Chen, K.; Zhou, F.Y.; Liu, A.L. Chaotic dynamic weight particle swarm optimization for numerical function optimization. Knowl. Based Syst. 2018, 139, 23–40.
  21. Liang, J.; Suganthan, P. Dynamic multi-swarm particle swarm optimizer. In Proceedings of the IEEE Swarm Intelligence Symposium, Pasadena, CA, USA, 8–10 June 2005; pp. 124–129.
  22. Liu, H.R.; Cui, J.C.; Lu, Z.D.; Liu, D.Y.; Deng, Y.J. A hierarchical simple particle swarm optimization with mean dimensional information. Appl. Soft Comput. 2019, 76, 712–725.
  23. Zhan, Z.-H.; Zhang, J.; Li, Y.; Shi, Y.-H. Orthogonal Learning Particle Swarm Optimization. IEEE Trans. Evol. Comput. 2010, 15, 832–847.
  24. Xu, G.; Cui, Q.; Shi, X. Particle swarm optimization based on dimensional learning strategy. Swarm Evol. Comput. 2019, 45, 33–51.
  25. Li, W.; Meng, X.; Huang, Y.; Fu, Z.-H. Multipopulation cooperative particle swarm optimization with a mixed mutation strategy. Inf. Sci. 2020, 529, 179–196.
  26. Kennedy, J.; Mendes, R. Population structure and particle swarm performance. In Proceedings of the 2002 Congress on Evolutionary Computation (CEC’02), Honolulu, HI, USA, 12–17 May 2002; pp. 1671–1676.
  27. Mendes, R.; Kennedy, J.; Neves, J. The Fully Informed Particle Swarm: Simpler, Maybe Better. IEEE Trans. Evol. Comput. 2004, 8, 204–210.
  28. Janson, S.; Middendorf, M. A hierarchical particle swarm optimizer and its adaptive variant. IEEE Trans. Syst. Man Cybern. Part B Cybern. 2005, 35, 1272–1282.
  29. Zhang, X.; Wang, X.; Kang, Q.; Cheng, J. Differential mutation and novel social learning particle swarm optimization algorithm. Inf. Sci. 2019, 480, 109–129.
  30. Nasiraghdam, M.; Nafar, M. New Approach Based on Hybrid GA and PSO as HGAPSO in Low-Frequency Oscillation Damping Using UPFC Controller. J. Basic Appl. Sci. Res. 2011, 1, 2208–2218.
  31. Wang, F.; Zhu, H.; Li, W.; Li, K. A hybrid convolution network for serial number recognition on banknotes. Inf. Sci. 2019, 512, 952–963.
  32. Meng, A.; Li, Z.; Yin, H.; Chen, S.; Guo, Z. Accelerating particle swarm optimization using crisscross search. Inf. Sci. 2016, 329, 52–72.
  33. Gong, Y.-J.; Li, J.-J.; Zhou, Y.; Li, Y.; Chung, H.; Shi, Y.-H.; Zhang, J. Genetic Learning Particle Swarm Optimization. IEEE Trans. Cybern. 2015, 46, 2277–2290.
  34. Bergh, F.V.D.; Engelbrecht, A. A Cooperative Approach to Particle Swarm Optimization. IEEE Trans. Evol. Comput. 2004, 8, 225–239.
Figure 1. Particle swarm optimization in search strategy 1.
Figure 2. Axis diagram of population iteration number.
Figure 3. Particle velocity diagram in stage 3.
Figure 4. Framework of the RaPSO algorithm.
Figure 5. (a) Population entropy without strategy; (b) population entropy with strategy.
Figure 6. (a) The number of clusters without applying policies; (b) the number of clusters applying policies.
Figure 7. Str parameter selection.
Figure 8. Prop parameter selection.
Figure 9. Comparison of the six algorithms.
Table 1. CEC17 functions. U: unimodal; M: multimodal; H: hybrid; C: composition.
Num | Function Name | Property | Best Value
Fun1 | Shifted and Rotated Bent Cigar Function | U | 100
Fun3 | Shifted and Rotated Zakharov Function | M | 300
Fun4 | Shifted and Rotated Rosenbrock’s Function | M | 400
Fun5 | Shifted and Rotated Rastrigin’s Function | M | 500
Fun6 | Shifted and Rotated Expanded Scaffer’s F6 Function | M | 600
Fun7 | Shifted and Rotated Lunacek Bi_Rastrigin Function | M | 700
Fun8 | Shifted and Rotated Non-Continuous Rastrigin’s Function | M | 800
Fun9 | Shifted and Rotated Levy Function | M | 900
Fun10 | Shifted and Rotated Schwefel’s Function | M | 1000
Fun11 | Hybrid Function 1 (N = 3) | H | 1100
Fun12 | Hybrid Function 2 (N = 3) | H | 1200
Fun13 | Hybrid Function 3 (N = 3) | H | 1300
Fun14 | Hybrid Function 4 (N = 4) | H | 1400
Fun15 | Hybrid Function 5 (N = 4) | H | 1500
Fun16 | Hybrid Function 6 (N = 4) | H | 1600
Fun17 | Hybrid Function 6 (N = 5) | H | 1700
Fun18 | Hybrid Function 6 (N = 5) | H | 1800
Fun19 | Hybrid Function 6 (N = 5) | H | 1900
Fun20 | Hybrid Function 6 (N = 6) | H | 2000
Fun21 | Composition Function 1 (N = 3) | C | 2100
Fun22 | Composition Function 2 (N = 3) | C | 2200
Fun23 | Composition Function 3 (N = 4) | C | 2300
Fun24 | Composition Function 4 (N = 4) | C | 2400
Fun25 | Composition Function 5 (N = 5) | C | 2500
Fun26 | Composition Function 6 (N = 5) | C | 2600
Fun27 | Composition Function 7 (N = 6) | C | 2700
Fun28 | Composition Function 8 (N = 6) | C | 2800
Fun29 | Composition Function 9 (N = 3) | C | 2900
Fun30 | Composition Function 10 (N = 3) | C | 3000
Table 2. Parameter selection of the three algorithms.
PSO | w = 0.5, c1 = c2 = 2
TSLPSO | w = 0.9–0.4, c1 = c2 = 1.49445, c3 = 0.5–2.5
HCLPSO | w = 0.99–0.2, c1 = 2.5–0.5, c2 = 0.5–2.5, c = 3–1.5, g1 = 37, g2 = 63
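Several entries in Table 2 are ranges (e.g., TSLPSO's inertia weight w from 0.9 down to 0.4), which conventionally means the coefficient is varied over the run. A linear schedule is the common choice in the PSO literature and is assumed in this sketch; the original algorithms may use a different schedule.

```python
def linear_schedule(start, end, t, T):
    """Linearly interpolate a PSO coefficient from `start` at t = 0
    to `end` at t = T (the final iteration).

    Assumed linear form for the range-valued entries in Table 2;
    e.g., linear_schedule(0.9, 0.4, 500, 1000) gives the mid-run
    inertia weight (about 0.65) for TSLPSO.
    """
    return start + (end - start) * t / T
```

The same helper covers increasing ranges, such as TSLPSO's c3 = 0.5–2.5, by passing start < end.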
Table 3. Stage parameter selection.
Stage Value | Min | Max | Ave | Std
0 | 2893.39 | 2954.96 | 2931.93 | 22.69
0.05 | 2893.58 | 2956.03 | 2928.58 | 23.68
0.1 | 2892.97 | 2957.38 | 2927.66 | 25.22
0.15 | 2893.77 | 2957.98 | 2934.97 | 22.25
0.2 | 2892.86 | 2959.27 | 2935.50 | 21.88
0.25 | 2892.85 | 2953.28 | 2934.69 | 20.67
0.3 | 2893.24 | 2960.55 | 2933.29 | 22.98
0.35 | 2893.69 | 2956.03 | 2932.67 | 22.84
0.4 | 2892.68 | 2955.04 | 2931.57 | 23.60
0.45 | 2893.62 | 2959.70 | 2932.11 | 22.51
0.5 | 2893.16 | 2953.78 | 2934.50 | 21.79
Table 4. Comparison of experimental results of PSO and RaPSO in the case of 10 dimensions.
Fun | PSO Min | PSO Max | PSO Mean | PSO Std | RaPSO Min | RaPSO Max | RaPSO Mean | RaPSO Std
F1 | 1.02 × 10^2 | 2.54 × 10^3 | 1.29 × 10^3 | 8.30 × 10^2 | 1.13 × 10^2 | 1.53 × 10^3 | 7.13 × 10^2 | 5.06 × 10^2
F3 | 3.00 × 10^2 | 3.00 × 10^2 | 3.00 × 10^2 | 1.06 × 10^−14 | 3.00 × 10^2 | 3.00 × 10^2 | 3.00 × 10^2 | 9.64 × 10^−11
F4 | 4.00 × 10^2 | 4.35 × 10^2 | 4.17 × 10^2 | 1.66 × 10^1 | 4.00 × 10^2 | 4.35 × 10^2 | 4.16 × 10^2 | 1.66 × 10^1
F5 | 5.04 × 10^2 | 5.34 × 10^2 | 5.16 × 10^2 | 7.92 × 10^0 | 5.04 × 10^2 | 5.18 × 10^2 | 5.11 × 10^2 | 3.53 × 10^0
F6 | 6.00 × 10^2 | 6.00 × 10^2 | 6.00 × 10^2 | 4.22 × 10^−14 | 6.00 × 10^2 | 6.00 × 10^2 | 6.00 × 10^2 | 3.69 × 10^−13
F7 | 7.13 × 10^2 | 7.28 × 10^2 | 7.20 × 10^2 | 3.77 × 10^0 | 7.14 × 10^2 | 7.22 × 10^2 | 7.18 × 10^2 | 2.29 × 10^0
F8 | 8.10 × 10^2 | 8.40 × 10^2 | 8.21 × 10^2 | 7.84 × 10^0 | 8.06 × 10^2 | 8.19 × 10^2 | 8.13 × 10^2 | 3.22 × 10^0
F9 | 9.00 × 10^2 | 9.00 × 10^2 | 9.00 × 10^2 | 0.00 × 10^0 | 9.00 × 10^2 | 9.00 × 10^2 | 9.00 × 10^2 | 0.00 × 10^0
F10 | 1.13 × 10^3 | 1.85 × 10^3 | 1.42 × 10^3 | 1.93 × 10^2 | 1.01 × 10^3 | 1.51 × 10^3 | 1.30 × 10^3 | 1.38 × 10^2
F11 | 1.10 × 10^3 | 1.13 × 10^3 | 1.11 × 10^3 | 7.65 × 10^0 | 1.10 × 10^3 | 1.12 × 10^3 | 1.11 × 10^3 | 4.54 × 10^0
F12 | 1.50 × 10^3 | 4.31 × 10^5 | 2.66 × 10^4 | 7.69 × 10^4 | 1.72 × 10^3 | 2.37 × 10^4 | 1.13 × 10^4 | 6.96 × 10^3
F13 | 1.31 × 10^3 | 8.14 × 10^3 | 3.09 × 10^3 | 2.00 × 10^3 | 1.31 × 10^3 | 4.62 × 10^3 | 2.33 × 10^3 | 9.75 × 10^2
F14 | 1.44 × 10^3 | 1.56 × 10^3 | 1.48 × 10^3 | 2.63 × 10^1 | 1.44 × 10^3 | 1.49 × 10^3 | 1.47 × 10^3 | 1.32 × 10^1
F15 | 1.51 × 10^3 | 1.73 × 10^3 | 1.56 × 10^3 | 5.09 × 10^1 | 1.50 × 10^3 | 1.58 × 10^3 | 1.54 × 10^3 | 1.96 × 10^1
F16 | 1.60 × 10^3 | 1.85 × 10^3 | 1.71 × 10^3 | 7.88 × 10^1 | 1.60 × 10^3 | 1.73 × 10^3 | 1.69 × 10^3 | 5.15 × 10^1
F17 | 1.72 × 10^3 | 1.81 × 10^3 | 1.75 × 10^3 | 1.94 × 10^1 | 1.73 × 10^3 | 1.75 × 10^3 | 1.74 × 10^3 | 8.12 × 10^0
F18 | 1.87 × 10^3 | 2.15 × 10^4 | 7.02 × 10^3 | 4.88 × 10^3 | 2.00 × 10^3 | 1.12 × 10^4 | 6.13 × 10^3 | 2.85 × 10^3
F19 | 1.90 × 10^3 | 2.00 × 10^3 | 1.92 × 10^3 | 2.22 × 10^1 | 1.91 × 10^3 | 1.94 × 10^3 | 1.92 × 10^3 | 8.57 × 10^0
F20 | 2.01 × 10^3 | 2.20 × 10^3 | 2.07 × 10^3 | 5.13 × 10^1 | 2.01 × 10^3 | 2.09 × 10^3 | 2.04 × 10^3 | 1.95 × 10^1
F21 | 2.25 × 10^3 | 2.25 × 10^3 | 2.25 × 10^3 | 8.44 × 10^−14 | 2.20 × 10^3 | 2.20 × 10^3 | 2.20 × 10^3 | 1.19 × 10^−13
F22 | 2.21 × 10^3 | 2.35 × 10^3 | 2.35 × 10^3 | 2.56 × 10^1 | 2.20 × 10^3 | 2.30 × 10^3 | 2.30 × 10^3 | 1.83 × 10^1
F23 | 2.40 × 10^3 | 2.83 × 10^3 | 2.67 × 10^3 | 1.18 × 10^2 | 2.40 × 10^3 | 2.69 × 10^3 | 2.63 × 10^3 | 9.40 × 10^1
F24 | 2.50 × 10^3 | 2.83 × 10^3 | 2.62 × 10^3 | 9.09 × 10^1 | 2.50 × 10^3 | 2.60 × 10^3 | 2.58 × 10^3 | 3.79 × 10^1
F25 | 2.89 × 10^3 | 2.96 × 10^3 | 2.92 × 10^3 | 2.55 × 10^1 | 2.90 × 10^3 | 2.95 × 10^3 | 2.93 × 10^3 | 2.15 × 10^1
F26 | 2.80 × 10^3 | 3.76 × 10^3 | 2.94 × 10^3 | 2.67 × 10^2 | 2.60 × 10^3 | 2.90 × 10^3 | 2.84 × 10^3 | 8.14 × 10^1
F27 | 3.15 × 10^3 | 3.46 × 10^3 | 3.27 × 10^3 | 8.58 × 10^1 | 3.10 × 10^3 | 3.28 × 10^3 | 3.18 × 10^3 | 5.27 × 10^1
F28 | 3.10 × 10^3 | 3.15 × 10^3 | 3.14 × 10^3 | 2.21 × 10^1 | 3.10 × 10^3 | 3.15 × 10^3 | 3.13 × 10^3 | 2.35 × 10^1
F29 | 3.14 × 10^3 | 3.30 × 10^3 | 3.18 × 10^3 | 3.60 × 10^1 | 3.14 × 10^3 | 3.20 × 10^3 | 3.17 × 10^3 | 1.45 × 10^1
F30 | 3.50 × 10^3 | 3.70 × 10^4 | 1.15 × 10^4 | 9.46 × 10^3 | 3.43 × 10^3 | 1.04 × 10^4 | 5.61 × 10^3 | 1.91 × 10^3
Table 5. Comparison of experimental results of PSO and RaPSO in the case of 30 dimensions.
Fun | PSO Min | PSO Max | PSO Mean | PSO Std | RaPSO Min | RaPSO Max | RaPSO Mean | RaPSO Std
F1 | 1.00 × 10^2 | 1.07 × 10^9 | 1.02 × 10^8 | 3.13 × 10^8 | 1.00 × 10^2 | 1.91 × 10^3 | 2.42 × 10^2 | 3.72 × 10^2
F3 | 3.02 × 10^2 | 4.20 × 10^2 | 3.27 × 10^2 | 2.60 × 10^1 | 4.03 × 10^2 | 5.69 × 10^2 | 4.81 × 10^2 | 5.06 × 10^1
F4 | 4.04 × 10^2 | 5.80 × 10^2 | 4.86 × 10^2 | 4.48 × 10^1 | 4.04 × 10^2 | 4.96 × 10^2 | 4.70 × 10^2 | 2.00 × 10^1
F5 | 5.66 × 10^2 | 6.57 × 10^2 | 6.05 × 10^2 | 2.48 × 10^1 | 5.58 × 10^2 | 6.04 × 10^2 | 5.88 × 10^2 | 1.24 × 10^1
F6 | 6.00 × 10^2 | 6.25 × 10^2 | 6.07 × 10^2 | 6.46 × 10^0 | 6.00 × 10^2 | 6.08 × 10^2 | 6.04 × 10^2 | 2.43 × 10^0
F7 | 7.75 × 10^2 | 9.14 × 10^2 | 8.21 × 10^2 | 2.96 × 10^1 | 7.70 × 10^2 | 8.29 × 10^2 | 8.00 × 10^2 | 1.74 × 10^1
F8 | 8.83 × 10^2 | 9.94 × 10^2 | 9.21 × 10^2 | 3.13 × 10^1 | 8.67 × 10^2 | 9.34 × 10^2 | 9.10 × 10^2 | 1.74 × 10^1
F9 | 1.01 × 10^3 | 5.26 × 10^3 | 2.83 × 10^3 | 1.16 × 10^3 | 9.02 × 10^2 | 2.81 × 10^3 | 1.94 × 10^3 | 6.10 × 10^2
F10 | 2.86 × 10^3 | 4.84 × 10^3 | 3.89 × 10^3 | 5.55 × 10^2 | 2.60 × 10^3 | 4.47 × 10^3 | 3.78 × 10^3 | 5.39 × 10^2
F11 | 1.17 × 10^3 | 1.38 × 10^3 | 1.24 × 10^3 | 4.55 × 10^1 | 1.16 × 10^3 | 1.28 × 10^3 | 1.23 × 10^3 | 3.38 × 10^1
F12 | 3.02 × 10^3 | 1.80 × 10^7 | 1.16 × 10^6 | 4.33 × 10^6 | 2.41 × 10^3 | 1.51 × 10^4 | 6.47 × 10^3 | 3.37 × 10^3
F13 | 1.33 × 10^3 | 2.36 × 10^4 | 2.73 × 10^3 | 4.05 × 10^3 | 1.36 × 10^3 | 2.62 × 10^3 | 1.77 × 10^3 | 3.73 × 10^2
F14 | 1.52 × 10^3 | 1.96 × 10^3 | 1.67 × 10^3 | 1.06 × 10^2 | 1.56 × 10^3 | 1.76 × 10^3 | 1.65 × 10^3 | 6.13 × 10^1
F15 | 1.53 × 10^3 | 1.98 × 10^3 | 1.60 × 10^3 | 7.78 × 10^1 | 1.54 × 10^3 | 1.62 × 10^3 | 1.58 × 10^3 | 2.20 × 10^1
F16 | 2.11 × 10^3 | 2.98 × 10^3 | 2.54 × 10^3 | 2.28 × 10^2 | 1.76 × 10^3 | 2.40 × 10^3 | 2.11 × 10^3 | 1.77 × 10^2
F17 | 1.78 × 10^3 | 2.69 × 10^3 | 2.12 × 10^3 | 2.01 × 10^2 | 1.79 × 10^3 | 2.09 × 10^3 | 1.95 × 10^3 | 8.23 × 10^1
F18 | 1.01 × 10^4 | 1.38 × 10^5 | 5.53 × 10^4 | 3.32 × 10^4 | 1.12 × 10^4 | 8.45 × 10^4 | 3.96 × 10^4 | 1.92 × 10^4
F19 | 2.01 × 10^3 | 1.64 × 10^4 | 5.69 × 10^3 | 3.76 × 10^3 | 1.94 × 10^3 | 5.62 × 10^3 | 3.06 × 10^3 | 1.01 × 10^3
F20 | 2.26 × 10^3 | 2.74 × 10^3 | 2.47 × 10^3 | 1.37 × 10^2 | 2.26 × 10^3 | 2.56 × 10^3 | 2.42 × 10^3 | 7.98 × 10^1
F21 | 2.20 × 10^3 | 2.20 × 10^3 | 2.20 × 10^3 | 4.78 × 10^−13 | 2.25 × 10^3 | 2.25 × 10^3 | 2.25 × 10^3 | 5.00 × 10^−13
F22 | 2.30 × 10^3 | 2.30 × 10^3 | 2.30 × 10^3 | 4.55 × 10^−13 | 2.35 × 10^3 | 2.35 × 10^3 | 2.35 × 10^3 | 4.55 × 10^−13
F23 | 2.96 × 10^3 | 4.50 × 10^3 | 3.53 × 10^3 | 3.57 × 10^2 | 2.98 × 10^3 | 4.01 × 10^3 | 3.46 × 10^3 | 3.14 × 10^2
F24 | 2.60 × 10^3 | 2.61 × 10^3 | 2.60 × 10^3 | 1.62 × 10^0 | 2.60 × 10^3 | 2.60 × 10^3 | 2.60 × 10^3 | 1.16 × 10^−12
F25 | 2.90 × 10^3 | 3.12 × 10^3 | 2.97 × 10^3 | 5.84 × 10^1 | 2.90 × 10^3 | 2.97 × 10^3 | 2.93 × 10^3 | 3.08 × 10^1
F26 | 2.80 × 10^3 | 5.43 × 10^3 | 2.90 × 10^3 | 4.79 × 10^2 | 2.80 × 10^3 | 2.80 × 10^3 | 2.80 × 10^3 | 1.50 × 10^−12
F27 | 3.81 × 10^3 | 5.30 × 10^3 | 4.53 × 10^3 | 3.39 × 10^2 | 3.73 × 10^3 | 4.47 × 10^3 | 4.17 × 10^3 | 2.25 × 10^2
F28 | 3.22 × 10^3 | 3.42 × 10^3 | 3.30 × 10^3 | 5.42 × 10^1 | 3.22 × 10^3 | 3.30 × 10^3 | 3.26 × 10^3 | 2.43 × 10^1
F29 | 3.31 × 10^3 | 4.10 × 10^3 | 3.69 × 10^3 | 2.25 × 10^2 | 3.28 × 10^3 | 3.63 × 10^3 | 3.44 × 10^3 | 8.41 × 10^1
F30 | 4.23 × 10^3 | 5.44 × 10^4 | 9.02 × 10^3 | 9.30 × 10^3 | 4.09 × 10^3 | 1.43 × 10^4 | 6.76 × 10^3 | 2.78 × 10^3
Table 6. Comparison of experimental results of TSLPSO and RaTSLPSO in the case of 10 dimensions.
Fun | TSLPSO Min | TSLPSO Max | TSLPSO Mean | TSLPSO Std | RaTSLPSO Min | RaTSLPSO Max | RaTSLPSO Mean | RaTSLPSO Std
F1 | 1.31 × 10^2 | 1.43 × 10^3 | 7.12 × 10^2 | 3.85 × 10^2 | 1.00 × 10^2 | 1.46 × 10^3 | 6.30 × 10^2 | 4.44 × 10^2
F3 | 3.00 × 10^2 | 3.00 × 10^2 | 3.00 × 10^2 | 1.26 × 10^−5 | 3.00 × 10^2 | 3.00 × 10^2 | 3.00 × 10^2 | 2.22 × 10^−3
F4 | 4.00 × 10^2 | 4.05 × 10^2 | 4.01 × 10^2 | 1.40 × 10^0 | 4.00 × 10^2 | 4.05 × 10^2 | 4.02 × 10^2 | 1.76 × 10^0
F5 | 5.03 × 10^2 | 5.14 × 10^2 | 5.08 × 10^2 | 2.56 × 10^0 | 5.03 × 10^2 | 5.10 × 10^2 | 5.07 × 10^2 | 1.74 × 10^0
F6 | 6.00 × 10^2 | 6.00 × 10^2 | 6.00 × 10^2 | 2.72 × 10^−5 | 6.00 × 10^2 | 6.00 × 10^2 | 6.00 × 10^2 | 3.44 × 10^−9
F7 | 7.12 × 10^2 | 7.23 × 10^2 | 7.19 × 10^2 | 2.82 × 10^0 | 7.08 × 10^2 | 7.23 × 10^2 | 7.19 × 10^2 | 3.43 × 10^0
F8 | 8.04 × 10^2 | 8.12 × 10^2 | 8.08 × 10^2 | 2.25 × 10^0 | 8.03 × 10^2 | 8.10 × 10^2 | 8.07 × 10^2 | 1.79 × 10^0
F9 | 9.00 × 10^2 | 9.02 × 10^2 | 9.00 × 10^2 | 4.43 × 10^−1 | 9.00 × 10^2 | 9.00 × 10^2 | 9.00 × 10^2 | 2.73 × 10^−2
F10 | 1.01 × 10^3 | 1.40 × 10^3 | 1.21 × 10^3 | 1.05 × 10^2 | 1.00 × 10^3 | 1.36 × 10^3 | 1.21 × 10^3 | 1.16 × 10^2
F11 | 1.10 × 10^3 | 1.11 × 10^3 | 1.10 × 10^3 | 1.94 × 10^0 | 1.10 × 10^3 | 1.11 × 10^3 | 1.10 × 10^3 | 1.38 × 10^0
F12 | 2.41 × 10^3 | 2.98 × 10^4 | 1.42 × 10^4 | 8.42 × 10^3 | 2.13 × 10^3 | 2.25 × 10^4 | 1.13 × 10^4 | 6.36 × 10^3
F13 | 1.31 × 10^3 | 1.46 × 10^3 | 1.37 × 10^3 | 3.47 × 10^1 | 1.31 × 10^3 | 1.98 × 10^3 | 1.52 × 10^3 | 1.99 × 10^2
F14 | 1.42 × 10^3 | 1.48 × 10^3 | 1.44 × 10^3 | 1.27 × 10^1 | 1.41 × 10^3 | 1.44 × 10^3 | 1.43 × 10^3 | 6.42 × 10^0
F15 | 1.50 × 10^3 | 1.58 × 10^3 | 1.53 × 10^3 | 1.91 × 10^1 | 1.50 × 10^3 | 1.53 × 10^3 | 1.52 × 10^3 | 8.45 × 10^0
F16 | 1.60 × 10^3 | 1.62 × 10^3 | 1.61 × 10^3 | 4.05 × 10^0 | 1.60 × 10^3 | 1.61 × 10^3 | 1.60 × 10^3 | 1.85 × 10^0
F17 | 1.71 × 10^3 | 1.75 × 10^3 | 1.73 × 10^3 | 7.73 × 10^0 | 1.71 × 10^3 | 1.74 × 10^3 | 1.73 × 10^3 | 6.45 × 10^0
F18 | 1.85 × 10^3 | 3.34 × 10^3 | 2.27 × 10^3 | 2.99 × 10^2 | 1.96 × 10^3 | 2.87 × 10^3 | 2.38 × 10^3 | 2.88 × 10^2
F19 | 1.90 × 10^3 | 1.92 × 10^3 | 1.91 × 10^3 | 3.61 × 10^0 | 1.90 × 10^3 | 1.91 × 10^3 | 1.91 × 10^3 | 2.47 × 10^0
F20 | 2.00 × 10^3 | 2.04 × 10^3 | 2.02 × 10^3 | 6.89 × 10^0 | 2.00 × 10^3 | 2.03 × 10^3 | 2.02 × 10^3 | 9.97 × 10^0
F21 | 2.20 × 10^3 | 2.20 × 10^3 | 2.20 × 10^3 | 7.51 × 10^−13 | 2.20 × 10^3 | 2.20 × 10^3 | 2.20 × 10^3 | 2.14 × 10^−7
F22 | 2.20 × 10^3 | 2.30 × 10^3 | 2.23 × 10^3 | 3.81 × 10^1 | 2.20 × 10^3 | 2.30 × 10^3 | 2.23 × 10^3 | 3.80 × 10^1
F23 | 2.65 × 10^3 | 2.67 × 10^3 | 2.66 × 10^3 | 3.51 × 10^0 | 2.64 × 10^3 | 2.66 × 10^3 | 2.66 × 10^3 | 5.08 × 10^0
F24 | 2.43 × 10^3 | 2.66 × 10^3 | 2.53 × 10^3 | 4.81 × 10^1 | 2.40 × 10^3 | 2.54 × 10^3 | 2.50 × 10^3 | 2.95 × 10^1
F25 | 2.85 × 10^3 | 2.90 × 10^3 | 2.90 × 10^3 | 9.15 × 10^0 | 2.85 × 10^3 | 2.90 × 10^3 | 2.90 × 10^3 | 8.92 × 10^0
F26 | 2.60 × 10^3 | 2.95 × 10^3 | 2.74 × 10^3 | 1.45 × 10^2 | 2.60 × 10^3 | 2.90 × 10^3 | 2.73 × 10^3 | 1.38 × 10^2
F27 | 3.11 × 10^3 | 3.15 × 10^3 | 3.13 × 10^3 | 1.16 × 10^1 | 3.10 × 10^3 | 3.15 × 10^3 | 3.13 × 10^3 | 1.42 × 10^1
F28 | 3.10 × 10^3 | 3.15 × 10^3 | 3.13 × 10^3 | 2.22 × 10^1 | 3.06 × 10^3 | 3.15 × 10^3 | 3.12 × 10^3 | 2.44 × 10^1
F29 | 3.14 × 10^3 | 3.19 × 10^3 | 3.16 × 10^3 | 1.25 × 10^1 | 3.10 × 10^3 | 3.17 × 10^3 | 3.15 × 10^3 | 1.66 × 10^1
F30 | 3.43 × 10^3 | 2.15 × 10^4 | 7.91 × 10^3 | 4.06 × 10^3 | 3.75 × 10^3 | 1.04 × 10^4 | 7.26 × 10^3 | 2.13 × 10^3
Table 7. Comparison of experimental results of TSLPSO and RaTSLPSO in the case of 30 dimensions.
Fun | TSLPSO Min | TSLPSO Max | TSLPSO Mean | TSLPSO Std | RaTSLPSO Min | RaTSLPSO Max | RaTSLPSO Mean | RaTSLPSO Std
F1 | 1.01 × 10^2 | 4.46 × 10^3 | 1.39 × 10^3 | 1.21 × 10^3 | 1.03 × 10^2 | 2.49 × 10^3 | 1.11 × 10^3 | 9.16 × 10^2
F3 | 3.46 × 10^2 | 1.07 × 10^4 | 1.78 × 10^3 | 2.08 × 10^3 | 3.31 × 10^2 | 7.38 × 10^2 | 4.37 × 10^2 | 1.36 × 10^2
F4 | 4.00 × 10^2 | 5.44 × 10^2 | 4.47 × 10^2 | 4.22 × 10^1 | 4.00 × 10^2 | 4.70 × 10^2 | 4.22 × 10^2 | 3.00 × 10^1
F5 | 5.36 × 10^2 | 6.03 × 10^2 | 5.63 × 10^2 | 1.54 × 10^1 | 5.38 × 10^2 | 5.62 × 10^2 | 5.53 × 10^2 | 7.30 × 10^0
F6 | 6.00 × 10^2 | 6.00 × 10^2 | 6.00 × 10^2 | 2.01 × 10^−7 | 6.00 × 10^2 | 6.00 × 10^2 | 6.00 × 10^2 | 2.10 × 10^−13
F7 | 7.70 × 10^2 | 8.41 × 10^2 | 7.94 × 10^2 | 1.41 × 10^1 | 7.61 × 10^2 | 7.94 × 10^2 | 7.84 × 10^2 | 8.10 × 10^0
F8 | 8.36 × 10^2 | 9.03 × 10^2 | 8.64 × 10^2 | 1.55 × 10^1 | 8.36 × 10^2 | 8.62 × 10^2 | 8.51 × 10^2 | 7.44 × 10^0
F9 | 9.01 × 10^2 | 2.09 × 10^3 | 1.05 × 10^3 | 2.46 × 10^2 | 9.01 × 10^2 | 9.95 × 10^2 | 9.45 × 10^2 | 3.26 × 10^1
F10 | 1.97 × 10^3 | 4.42 × 10^3 | 3.23 × 10^3 | 6.23 × 10^2 | 2.29 × 10^3 | 3.32 × 10^3 | 2.85 × 10^3 | 2.88 × 10^2
F11 | 1.12 × 10^3 | 1.30 × 10^3 | 1.18 × 10^3 | 4.69 × 10^1 | 1.13 × 10^3 | 1.16 × 10^3 | 1.15 × 10^3 | 1.20 × 10^1
F12 | 2.45 × 10^3 | 1.86 × 10^4 | 7.36 × 10^3 | 4.16 × 10^3 | 2.78 × 10^3 | 6.16 × 10^3 | 4.35 × 10^3 | 1.03 × 10^3
F13 | 1.34 × 10^3 | 2.12 × 10^3 | 1.66 × 10^3 | 2.27 × 10^2 | 1.34 × 10^3 | 1.88 × 10^3 | 1.57 × 10^3 | 1.66 × 10^2
F14 | 1.43 × 10^3 | 1.21 × 10^4 | 2.21 × 10^3 | 2.01 × 10^3 | 1.44 × 10^3 | 1.47 × 10^3 | 1.45 × 10^3 | 9.13 × 10^0
F15 | 1.52 × 10^3 | 1.78 × 10^3 | 1.65 × 10^3 | 8.34 × 10^1 | 1.56 × 10^3 | 2.57 × 10^3 | 1.81 × 10^3 | 2.74 × 10^2
F16 | 1.78 × 10^3 | 2.35 × 10^3 | 2.04 × 10^3 | 1.19 × 10^2 | 1.83 × 10^3 | 1.99 × 10^3 | 1.92 × 10^3 | 4.91 × 10^1
F17 | 1.77 × 10^3 | 2.07 × 10^3 | 1.92 × 10^3 | 8.16 × 10^1 | 1.78 × 10^3 | 1.92 × 10^3 | 1.87 × 10^3 | 3.70 × 10^1
F18 | 1.29 × 10^4 | 2.43 × 10^5 | 6.93 × 10^4 | 6.14 × 10^4 | 1.59 × 10^4 | 5.67 × 10^4 | 3.76 × 10^4 | 1.06 × 10^4
F19 | 1.92 × 10^3 | 3.28 × 10^3 | 2.26 × 10^3 | 3.32 × 10^2 | 1.94 × 10^3 | 4.84 × 10^3 | 2.71 × 10^3 | 8.30 × 10^2
F20 | 2.08 × 10^3 | 2.30 × 10^3 | 2.21 × 10^3 | 5.27 × 10^1 | 2.09 × 10^3 | 2.21 × 10^3 | 2.15 × 10^3 | 3.64 × 10^1
F21 | 2.11 × 10^3 | 2.23 × 10^3 | 2.17 × 10^3 | 2.61 × 10^1 | 2.10 × 10^3 | 2.17 × 10^3 | 2.15 × 10^3 | 3.44 × 10^1
F22 | 2.23 × 10^3 | 2.30 × 10^3 | 2.27 × 10^3 | 1.70 × 10^1 | 2.23 × 10^3 | 2.27 × 10^3 | 2.25 × 10^3 | 1.14 × 10^1
F23 | 2.82 × 10^3 | 2.90 × 10^3 | 2.85 × 10^3 | 1.89 × 10^1 | 2.83 × 10^3 | 2.86 × 10^3 | 2.85 × 10^3 | 1.04 × 10^1
F24 | 2.60 × 10^3 | 3.41 × 10^3 | 3.29 × 10^3 | 2.34 × 10^2 | 2.60 × 10^3 | 3.35 × 10^3 | 2.85 × 10^3 | 2.78 × 10^2
F25 | 2.90 × 10^3 | 2.99 × 10^3 | 2.93 × 10^3 | 2.92 × 10^1 | 2.90 × 10^3 | 2.92 × 10^3 | 2.92 × 10^3 | 4.89 × 10^0
F26 | 2.90 × 10^3 | 5.24 × 10^3 | 4.17 × 10^3 | 8.37 × 10^2 | 2.90 × 10^3 | 4.50 × 10^3 | 3.43 × 10^3 | 5.46 × 10^2
F27 | 3.45 × 10^3 | 3.56 × 10^3 | 3.49 × 10^3 | 2.65 × 10^1 | 3.46 × 10^3 | 3.54 × 10^3 | 3.51 × 10^3 | 2.26 × 10^1
F28 | 3.22 × 10^3 | 3.44 × 10^3 | 3.26 × 10^3 | 5.64 × 10^1 | 3.18 × 10^3 | 3.28 × 10^3 | 3.23 × 10^3 | 2.88 × 10^1
F29 | 3.32 × 10^3 | 3.61 × 10^3 | 3.45 × 10^3 | 7.47 × 10^1 | 3.26 × 10^3 | 3.43 × 10^3 | 3.38 × 10^3 | 5.17 × 10^1
F30 | 4.13 × 10^3 | 1.96 × 10^4 | 9.30 × 10^3 | 4.27 × 10^3 | 4.11 × 10^3 | 3.45 × 10^4 | 1.61 × 10^4 | 9.66 × 10^3
Table 8. Comparison of experimental results of HCLPSO and RaHCLPSO in the case of 10 dimensions.
Fun | HCLPSO Min | HCLPSO Max | HCLPSO Mean | HCLPSO Std | RaHCLPSO Min | RaHCLPSO Max | RaHCLPSO Mean | RaHCLPSO Std
F1 | 1.07 × 10^2 | 1.53 × 10^3 | 4.30 × 10^2 | 4.29 × 10^2 | 1.00 × 10^2 | 3.48 × 10^2 | 1.75 × 10^2 | 8.54 × 10^1
F3 | 3.00 × 10^2 | 3.00 × 10^2 | 3.00 × 10^2 | 1.03 × 10^−8 | 3.00 × 10^2 | 3.00 × 10^2 | 3.00 × 10^2 | 5.51 × 10^−3
F4 | 4.00 × 10^2 | 4.05 × 10^2 | 4.01 × 10^2 | 1.55 × 10^0 | 4.00 × 10^2 | 4.00 × 10^2 | 4.00 × 10^2 | 1.01 × 10^−1
F5 | 5.02 × 10^2 | 5.12 × 10^2 | 5.06 × 10^2 | 2.29 × 10^0 | 5.02 × 10^2 | 5.05 × 10^2 | 5.05 × 10^2 | 8.56 × 10^−1
F6 | 6.00 × 10^2 | 6.00 × 10^2 | 6.00 × 10^2 | 7.44 × 10^−8 | 6.00 × 10^2 | 6.00 × 10^2 | 6.00 × 10^2 | 1.15 × 10^−3
F7 | 7.14 × 10^2 | 7.29 × 10^2 | 7.18 × 10^2 | 3.50 × 10^0 | 7.13 × 10^2 | 7.18 × 10^2 | 7.16 × 10^2 | 1.50 × 10^0
F8 | 8.02 × 10^2 | 8.09 × 10^2 | 8.06 × 10^2 | 1.97 × 10^0 | 8.02 × 10^2 | 8.06 × 10^2 | 8.05 × 10^2 | 1.18 × 10^0
F9 | 9.00 × 10^2 | 9.00 × 10^2 | 9.00 × 10^2 | 1.64 × 10^−13 | 9.00 × 10^2 | 9.00 × 10^2 | 9.00 × 10^2 | 4.36 × 10^−5
F10 | 1.01 × 10^3 | 1.38 × 10^3 | 1.14 × 10^3 | 1.16 × 10^2 | 1.00 × 10^3 | 1.14 × 10^3 | 1.05 × 10^3 | 4.50 × 10^1
F11 | 1.10 × 10^3 | 1.10 × 10^3 | 1.10 × 10^3 | 1.07 × 10^0 | 1.10 × 10^3 | 1.10 × 10^3 | 1.10 × 10^3 | 7.65 × 10^−1
F12 | 2.61 × 10^3 | 4.38 × 10^4 | 1.30 × 10^4 | 9.90 × 10^3 | 2.57 × 10^3 | 2.02 × 10^4 | 1.07 × 10^4 | 5.24 × 10^3
F13 | 1.30 × 10^3 | 1.42 × 10^3 | 1.35 × 10^3 | 3.24 × 10^1 | 1.31 × 10^3 | 1.49 × 10^3 | 1.39 × 10^3 | 5.37 × 10^1
F14 | 1.42 × 10^3 | 1.47 × 10^3 | 1.45 × 10^3 | 1.21 × 10^1 | 1.43 × 10^3 | 1.45 × 10^3 | 1.44 × 10^3 | 7.23 × 10^0
F15 | 1.51 × 10^3 | 1.56 × 10^3 | 1.52 × 10^3 | 1.13 × 10^1 | 1.51 × 10^3 | 1.53 × 10^3 | 1.52 × 10^3 | 7.12 × 10^0
F16 | 1.60 × 10^3 | 1.62 × 10^3 | 1.60 × 10^3 | 3.28 × 10^0 | 1.60 × 10^3 | 1.60 × 10^3 | 1.60 × 10^3 | 6.90 × 10^−1
F17 | 1.72 × 10^3 | 1.74 × 10^3 | 1.73 × 10^3 | 4.61 × 10^0 | 1.70 × 10^3 | 1.73 × 10^3 | 1.73 × 10^3 | 6.16 × 10^0
F18 | 1.93 × 10^3 | 4.12 × 10^3 | 2.62 × 10^3 | 5.66 × 10^2 | 1.89 × 10^3 | 2.69 × 10^3 | 2.31 × 10^3 | 2.63 × 10^2
F19 | 1.90 × 10^3 | 1.94 × 10^3 | 1.91 × 10^3 | 8.08 × 10^0 | 1.90 × 10^3 | 1.91 × 10^3 | 1.91 × 10^3 | 2.44 × 10^0
F20 | 2.00 × 10^3 | 2.04 × 10^3 | 2.01 × 10^3 | 1.12 × 10^1 | 2.00 × 10^3 | 2.02 × 10^3 | 2.01 × 10^3 | 9.24 × 10^0
F21 | 2.20 × 10^3 | 2.20 × 10^3 | 2.20 × 10^3 | 3.97 × 10^−6 | 2.10 × 10^3 | 2.26 × 10^3 | 2.23 × 10^3 | 4.20 × 10^1
F22 | 2.20 × 10^3 | 2.30 × 10^3 | 2.27 × 10^3 | 4.35 × 10^1 | 2.20 × 10^3 | 2.39 × 10^3 | 2.25 × 10^3 | 7.68 × 10^1
F23 | 2.40 × 10^3 | 2.66 × 10^3 | 2.64 × 10^3 | 4.71 × 10^1 | 2.55 × 10^3 | 2.65 × 10^3 | 2.63 × 10^3 | 2.81 × 10^1
F24 | 2.50 × 10^3 | 2.55 × 10^3 | 2.50 × 10^3 | 1.20 × 10^1 | 2.49 × 10^3 | 2.50 × 10^3 | 2.50 × 10^3 | 2.34 × 10^0
F25 | 2.89 × 10^3 | 2.90 × 10^3 | 2.90 × 10^3 | 1.14 × 10^0 | 2.89 × 10^3 | 2.90 × 10^3 | 2.90 × 10^3 | 7.35 × 10^−1
F26 | 2.60 × 10^3 | 2.90 × 10^3 | 2.73 × 10^3 | 1.25 × 10^2 | 2.60 × 10^3 | 2.82 × 10^3 | 2.66 × 10^3 | 9.55 × 10^1
F27 | 3.10 × 10^3 | 3.14 × 10^3 | 3.11 × 10^3 | 1.34 × 10^1 | 3.10 × 10^3 | 3.11 × 10^3 | 3.10 × 10^3 | 4.33 × 10^0
F28 | 3.08 × 10^3 | 3.15 × 10^3 | 3.10 × 10^3 | 1.33 × 10^1 | 3.08 × 10^3 | 3.10 × 10^3 | 3.10 × 10^3 | 2.75 × 10^0
F29 | 3.13 × 10^3 | 3.18 × 10^3 | 3.15 × 10^3 | 1.08 × 10^1 | 3.12 × 10^3 | 3.15 × 10^3 | 3.14 × 10^3 | 5.79 × 10^0
F30 | 3.41 × 10^3 | 1.17 × 10^4 | 5.38 × 10^3 | 2.31 × 10^3 | 3.67 × 10^3 | 5.66 × 10^3 | 4.58 × 10^3 | 5.92 × 10^2
Table 9. Comparison of experimental results of HCLPSO and RaHCLPSO in the case of 30 dimensions.
Fun | HCLPSO Min | HCLPSO Max | HCLPSO Mean | HCLPSO Std | RaHCLPSO Min | RaHCLPSO Max | RaHCLPSO Mean | RaHCLPSO Std
F1 | 1.00 × 10^2 | 1.32 × 10^3 | 4.10 × 10^2 | 3.47 × 10^2 | 1.08 × 10^2 | 8.05 × 10^2 | 3.47 × 10^2 | 2.53 × 10^2
F3 | 3.00 × 10^2 | 3.05 × 10^2 | 3.01 × 10^2 | 1.35 × 10^0 | 3.01 × 10^2 | 3.01 × 10^2 | 3.01 × 10^2 | 1.52 × 10^−1
F4 | 4.04 × 10^2 | 5.15 × 10^2 | 4.74 × 10^2 | 2.32 × 10^1 | 4.03 × 10^2 | 4.72 × 10^2 | 4.52 × 10^2 | 2.89 × 10^1
F5 | 5.19 × 10^2 | 5.58 × 10^2 | 5.37 × 10^2 | 9.26 × 10^0 | 5.17 × 10^2 | 5.36 × 10^2 | 5.29 × 10^2 | 6.47 × 10^0
F6 | 6.00 × 10^2 | 6.00 × 10^2 | 6.00 × 10^2 | 2.02 × 10^−6 | 6.00 × 10^2 | 6.00 × 10^2 | 6.00 × 10^2 | 8.97 × 10^−4
F7 | 7.52 × 10^2 | 8.16 × 10^2 | 7.83 × 10^2 | 1.70 × 10^1 | 7.61 × 10^2 | 7.90 × 10^2 | 7.79 × 10^2 | 9.24 × 10^0
F8 | 8.21 × 10^2 | 8.65 × 10^2 | 8.40 × 10^2 | 1.21 × 10^1 | 8.25 × 10^2 | 8.38 × 10^2 | 8.32 × 10^2 | 3.31 × 10^0
F9 | 9.00 × 10^2 | 9.15 × 10^2 | 9.03 × 10^2 | 4.02 × 10^0 | 9.00 × 10^2 | 9.02 × 10^2 | 9.01 × 10^2 | 5.40 × 10^−1
F10 | 2.06 × 10^3 | 3.92 × 10^3 | 2.91 × 10^3 | 4.45 × 10^2 | 1.96 × 10^3 | 2.65 × 10^3 | 2.43 × 10^3 | 1.87 × 10^2
F11 | 1.13 × 10^3 | 1.24 × 10^3 | 1.15 × 10^3 | 2.51 × 10^1 | 1.13 × 10^3 | 1.15 × 10^3 | 1.14 × 10^3 | 8.40 × 10^0
F12 | 2.17 × 10^3 | 1.05 × 10^4 | 4.62 × 10^3 | 1.97 × 10^3 | 8.21 × 10^3 | 2.72 × 10^4 | 1.96 × 10^4 | 5.33 × 10^3
F13 | 1.33 × 10^3 | 1.60 × 10^3 | 1.43 × 10^3 | 7.55 × 10^1 | 1.36 × 10^3 | 1.83 × 10^3 | 1.49 × 10^3 | 1.50 × 10^2
F14 | 1.45 × 10^3 | 1.60 × 10^3 | 1.52 × 10^3 | 3.89 × 10^1 | 1.47 × 10^3 | 1.59 × 10^3 | 1.53 × 10^3 | 4.29 × 10^1
F15 | 1.53 × 10^3 | 1.79 × 10^3 | 1.62 × 10^3 | 7.42 × 10^1 | 1.55 × 10^3 | 2.34 × 10^3 | 1.94 × 10^3 | 2.62 × 10^2
F16 | 1.61 × 10^3 | 2.32 × 10^3 | 1.94 × 10^3 | 1.74 × 10^2 | 1.62 × 10^3 | 1.96 × 10^3 | 1.82 × 10^3 | 1.09 × 10^2
F17 | 1.77 × 10^3 | 1.94 × 10^3 | 1.83 × 10^3 | 4.61 × 10^1 | 1.77 × 10^3 | 1.82 × 10^3 | 1.80 × 10^3 | 1.20 × 10^1
F18 | 7.96 × 10^3 | 1.13 × 10^5 | 3.49 × 10^4 | 2.69 × 10^4 | 1.14 × 10^4 | 3.66 × 10^4 | 2.68 × 10^4 | 6.88 × 10^3
F19 | 1.93 × 10^3 | 2.13 × 10^3 | 2.01 × 10^3 | 5.89 × 10^1 | 1.94 × 10^3 | 2.54 × 10^3 | 2.10 × 10^3 | 2.02 × 10^2
F20 | 2.06 × 10^3 | 2.27 × 10^3 | 2.15 × 10^3 | 6.79 × 10^1 | 2.06 × 10^3 | 2.14 × 10^3 | 2.10 × 10^3 | 2.43 × 10^1
F21 | 2.25 × 10^3 | 2.25 × 10^3 | 2.25 × 10^3 | 1.08 × 10^−12 | 2.12 × 10^3 | 2.18 × 10^3 | 2.17 × 10^3 | 1.98 × 10^1
F22 | 2.35 × 10^3 | 2.35 × 10^3 | 2.35 × 10^3 | 1.33 × 10^−12 | 2.22 × 10^3 | 2.24 × 10^3 | 2.23 × 10^3 | 3.85 × 10^0
F23 | 2.82 × 10^3 | 2.88 × 10^3 | 2.85 × 10^3 | 1.43 × 10^1 | 2.78 × 10^3 | 2.84 × 10^3 | 2.83 × 10^3 | 1.71 × 10^1
F24 | 2.60 × 10^3 | 2.60 × 10^3 | 2.60 × 10^3 | 5.28 × 10^−7 | 2.60 × 10^3 | 2.61 × 10^3 | 2.60 × 10^3 | 9.18 × 10^−1
F25 | 2.90 × 10^3 | 2.97 × 10^3 | 2.91 × 10^3 | 1.72 × 10^1 | 2.90 × 10^3 | 2.92 × 10^3 | 2.91 × 10^3 | 7.42 × 10^0
F26 | 2.90 × 10^3 | 2.90 × 10^3 | 2.90 × 10^3 | 3.71 × 10^−3 | 2.90 × 10^3 | 2.90 × 10^3 | 2.90 × 10^3 | 4.74 × 10^−3
F27 | 3.39 × 10^3 | 3.63 × 10^3 | 3.54 × 10^3 | 6.41 × 10^1 | 3.35 × 10^3 | 3.51 × 10^3 | 3.46 × 10^3 | 4.44 × 10^1
F28 | 3.14 × 10^3 | 3.30 × 10^3 | 3.23 × 10^3 | 3.42 × 10^1 | 3.14 × 10^3 | 3.22 × 10^3 | 3.19 × 10^3 | 2.94 × 10^1
F29 | 3.21 × 10^3 | 3.41 × 10^3 | 3.29 × 10^3 | 5.57 × 10^1 | 3.26 × 10^3 | 3.30 × 10^3 | 3.28 × 10^3 | 1.33 × 10^1
F30 | 4.22 × 10^3 | 1.56 × 10^4 | 7.68 × 10^3 | 3.06 × 10^3 | 9.50 × 10^3 | 3.57 × 10^4 | 2.69 × 10^4 | 7.65 × 10^3
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Shen, Y.; Cai, W.; Kang, H.; Sun, X.; Chen, Q.; Zhang, H. A Particle Swarm Algorithm Based on a Multi-Stage Search Strategy. Entropy 2021, 23, 1200. https://0-doi-org.brum.beds.ac.uk/10.3390/e23091200