Article

Improved Differential Evolution Algorithm Guided by Best and Worst Positions Exploration Dynamics

1 ASH (Mathematics) Department, REC Bijnor, Chandpur 246725, UP, India
2 Department of Basic Sciences, Preparatory Year, King Faisal University, Al Ahsa 31982, Saudi Arabia
* Author to whom correspondence should be addressed.
Submission received: 27 November 2023 / Revised: 26 January 2024 / Accepted: 10 February 2024 / Published: 16 February 2024
(This article belongs to the Special Issue Nature-Inspired Metaheuristic Optimization Algorithms 2024)

Abstract

The exploration of promising new positions is regarded as a fundamental function of every evolutionary algorithm and is achieved through the crossover and mutation stages of the differential evolution (DE) method. A novel exploration approach for the DE algorithm, guided by the best and worst positions, is presented in this study. The proposed version, named "Improved DE with Best and Worst positions (IDEBW)", offers a more advantageous alternative for exploring new locations, either moving directly towards the best location or away from the worst location. The performance of the proposed IDEBW is investigated and compared with that of other DE variants and meta-heuristic algorithms on 42 benchmark functions, comprising 13 classical functions and 29 IEEE CEC-2017 test functions, as well as 3 real-life applications from the IEEE CEC-2011 test suite. The results confirm that the proposed approach successfully accomplishes its task and makes the DE algorithm more efficient.

1. Introduction

Nowadays, the optimization problems of various science and engineering domains are becoming more complex due to the presence of various problem properties like non-differentiability, non-convexity, non-linearity, etc., and hence it is often not possible to deal with them using traditional methods. For that reason, new meta-heuristic methods are emerging to deal with these challenges in optimization fields. A meta-heuristic is a general term for heuristic methods that can be useful in a wider range of situations than the precise conditions of any specific problem. These meta-heuristic methods can be categorized into different groups, such as (i) the EA-based group, e.g., genetic algorithm [1], differential evolution algorithm [2], Jaya algorithm [3], etc.; (ii) the swarm-based group, e.g., particle swarm optimization [4], artificial bee colony [5], gray wolf optimization [6], whale optimization algorithm [7], manta ray foraging optimization [8], reptile search algorithm [9], etc.; (iii) the physics-based group, e.g., gravitational search algorithm [10], sine-cosine algorithm [11], atom search optimization [12], etc.; and (iv) the human-based group, e.g., brain storm optimization [13], teaching–learning-based optimization [14], gaining–sharing knowledge optimization [15], etc.
The DE algorithm has maintained its influence for the last three decades due to its excellent performance. Many of its variants have placed among the top ranks in the IEEE CEC conference series [16,17]. Its straightforward execution, simple and compact structure, and quick convergence can be considered the main reasons for its great efficiency. It has been successfully applied to a wide range of real-life applications, such as image processing [18,19], industrial noise recognition [20], bitcoin price forecasting [21], optimal power flow [22], neural network optimization [23], engineering design problems [24], and so on. There are also several other fields, like control theory [25,26], that remain open for the application of the DE algorithm.
In spite of its many promising characteristics, DE also faces some shortcomings, such as stagnation problems, a slow convergence rate, and a failure to perform in many other critical situations. In the past three decades, a number of studies have been executed to improve its performance and overcome its shortcomings. Many improvements have been developed in the areas of mutation operation and control parameter adjustment. For example, Brest et al. [27] suggested a self-adaptive method of selecting the control parameters F and Cr. Later, Zhang et al. [28] proposed JADE by adapting Cauchy-distributed control parameters and the DE/current-to-pbest/1 strategy. Gong et al. [29] made self-adaptive rules to implement various mutation strategies with JADE. The idea behind JADE was further improved in SHADE [30] by maintaining a success-history memory of the control parameters. Later, LSHADE [31] was proposed to improve the search capacity of SHADE by adapting a linear population size reduction approach. Subsequently, several enhanced variants, such as iLSHADE [32], LSHADE-SPA [33], LSHADE-CLM [34], and iLSHADE-RSP [35], were also presented to improve the performance of the LSHADE variant. The iLSHADE variant was also improved by Brest et al. in their new variant named jSO [36].
Beyond these famous variants, many other DE variants have been presented throughout the years, in which diverse tactics have been adapted to modify the mutation operation; for example, Ali et al. applied a Cauchy distribution-based mutation operation and proposed MDE [37]. Later, Choi et al. [38] modified the MDE and presented ACM-DE by adapting the advanced Cauchy mutation operator. Kumar and Pant presented MRLDE [39] by dividing the population into three subregions in order to perform mutation operations. Mallipeddi et al. presented EPSDE [40] using ensemble mutation strategies. Gong and Cai [41] introduced a ranking-based selection idea of using vectors for mutation operation in the current population. Xiang et al. [42] combined two mutation strategies, DE/current/1/bin and DE/pbest/1/bin, to enhance the performance of the DE algorithm. Some recent research on the development of mutation operations is included in [43,44,45,46,47,48,49].
Apart from these, several good research projects have also been executed in different domains, such as improving population initializing strategies [50,51,52,53], crossover operations [54], selection operations [55,56], local exploration strategies [57,58,59], and so on.
An interesting and detailed literature survey on modifications in the DE algorithm over the last decades is given in [60].
It can be noticed that most of the advanced DE variants compromise their simple structure by including some supplementary features. Therefore, in order to enhance the performance of the DE algorithm without overly complicating its simple structure, a new exploration method guided by the best and worst positions is proposed in this paper. The proposed method attempts to optimally explore the search space by moving forward toward the best position or backward away from the worst position. Additionally, a DE/αbest/1 [39,42] approach is also incorporated with the proposed exploration strategies in the selection operation to achieve a better balance between exploitation and exploration. The proposed variant is termed 'IDEBW' and has been implemented on various test cases and real-life applications.
The remainder of the paper is organized as follows: a concise description of DE is given in Section 2. The proposed approach for the IDEBW variant is explained in Section 3. The parameter settings and the empirical results from various test suites and real-life applications are discussed in Section 4. Finally, the conclusion of the complete study is presented in Section 5.

2. DE Algorithm

A basic representation of DE can be expressed as DE/a/b/c, where 'a' stands for the mutation approach, 'b' for the number of difference vectors, and 'c' for the crossover approach. The various phases in the operation of the DE algorithm are explained next.
The working structure of the DE algorithm is very easy to implement. It begins with a randomly generated population Pop(G) = {Y_i(G) : i = 1, 2, …, N} of N d-dimensional vectors within a specified bounded domain [Y_l, Y_u], as shown in Equation (1).

Y_i(G) = Y_l + rand × (Y_u − Y_l)
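As a minimal sketch, the initialization of Equation (1) can be written in NumPy as follows (the sizes, bounds, and function names here are illustrative, not the paper's):

```python
import numpy as np

def init_population(N, d, y_l, y_u, rng):
    """Equation (1): N random d-dimensional vectors in the bounded domain [y_l, y_u]."""
    return y_l + rng.random((N, d)) * (y_u - y_l)

rng = np.random.default_rng(0)
pop = init_population(100, 30, -100.0, 100.0, rng)  # shape (100, 30)
```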
Subsequently, the mutation, crossover, and selection phases are started for the generation and selection of new vectors for the next-generation population.
Mutation: This phase is a key operation in the DE algorithm and is used to explore new positions in the search space. Some mutation schemes to generate a perturbed vector, say M_i(G+1) = {m_{i,j}(G+1) : j = 1, 2, …, d}, are given in Equation (2).

DE/rand/1: M_i(G+1) = Y_a1(G) + F × (Y_a2(G) − Y_a3(G))
DE/rand/2: M_i(G+1) = Y_a1(G) + F × (Y_a2(G) − Y_a3(G)) + F × (Y_a4(G) − Y_a5(G))
DE/best/1: M_i(G+1) = Y_best(G) + F × (Y_a2(G) − Y_a3(G))
DE/best/2: M_i(G+1) = Y_best(G) + F × (Y_a2(G) − Y_a3(G)) + F × (Y_a4(G) − Y_a5(G))
DE/curr-to-best/1: M_i(G+1) = Y_i(G) + F × (Y_best(G) − Y_i(G)) + F × (Y_a2(G) − Y_a3(G))

where Y_a1, Y_a2, Y_a3, Y_a4, Y_a5 are mutually different vectors randomly chosen from Pop(G), and the parameter F ∈ (0, 1] is used to manage the magnification of the vector differences.
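These classical schemes can be sketched as below (a hedged NumPy illustration; the function and strategy names are our own):

```python
import numpy as np

def mutate(pop, i, best_idx, F, rng, strategy="rand/1"):
    """Generate a perturbed vector M_i via the schemes of Equation (2)."""
    N = len(pop)
    # five mutually different indices, all different from the target index i
    a1, a2, a3, a4, a5 = (pop[k] for k in rng.choice(
        [k for k in range(N) if k != i], size=5, replace=False))
    best = pop[best_idx]
    if strategy == "rand/1":
        return a1 + F * (a2 - a3)
    if strategy == "rand/2":
        return a1 + F * (a2 - a3) + F * (a4 - a5)
    if strategy == "best/1":
        return best + F * (a2 - a3)
    if strategy == "best/2":
        return best + F * (a2 - a3) + F * (a4 - a5)
    if strategy == "curr-to-best/1":
        return pop[i] + F * (best - pop[i]) + F * (a2 - a3)
    raise ValueError(f"unknown strategy: {strategy}")
```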
Crossover: This phase is generally responsible for maintaining population diversity and generates a trial vector X_i(G+1) = {x_{i,j}(G+1) : j = 1, 2, …, d} by blending the target vector Y_i(G) = {y_{i,j}(G) : j = 1, 2, …, d} and the perturbed vector M_i(G+1) = {m_{i,j}(G+1) : j = 1, 2, …, d}, as explained in Equation (3).

x_{i,j}(G+1) = m_{i,j}(G+1) if rand ≤ CR or j = randi(d); y_{i,j}(G) otherwise

where CR ∈ (0, 1) is known as the crossover parameter, and randi(d) denotes a random index used to ensure that at least one component of the trial vector is taken from the mutant vector.
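A minimal sketch of this binomial crossover (illustrative NumPy naming):

```python
import numpy as np

def binomial_crossover(target, mutant, CR, rng):
    """Equation (3): blend target and mutant; the forced index guarantees that
    at least one component of the trial vector comes from the mutant."""
    d = len(target)
    mask = rng.random(d) <= CR
    mask[rng.integers(d)] = True  # the randi(d) component
    return np.where(mask, mutant, target)

trial = binomial_crossover(np.zeros(8), np.ones(8), 0.9, np.random.default_rng(2))
```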
Selection: This procedure selects the better of the target and trial vectors for the next-generation population based on their fitness values, as determined by Equation (4).

Y_i(G+1) = X_i(G+1) if fun(X_i(G+1)) ≤ fun(Y_i(G)); Y_i(G) otherwise
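For minimization, the greedy selection of Equation (4) reduces to a one-line comparison (sketch; the sphere function below is only an example objective):

```python
import numpy as np

def select(target, trial, fun):
    """Equation (4): keep the trial vector only if its fitness is no worse."""
    return trial if fun(trial) <= fun(target) else target

sphere = lambda y: float(np.sum(y * y))
winner = select(np.array([2.0, 2.0]), np.array([1.0, 1.0]), sphere)
```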

3. Proposed IDEBW Algorithm

To improve the performance of the DE algorithm without making any major changes to its structure, we designed our variant IDEBW by modifying the original DE algorithm in two ways: first, by exploring the search area guided by the best and worst positions, and second, by improving the selection operation, where a DE/αbest/1 approach is incorporated to generate new trial vectors whenever the old trial vectors are not selected into the next generation. The proposed approaches are explained in detail below:

3.1. Proposed Exploration Strategies

Rao [3] presented the idea of searching for new positions by moving towards the best position and away from the worst position, as shown in Equation (5).

Y_i'(G) = Y_i(G) + rand1 × (Y_best(G) − Y_i(G)) − rand2 × (Y_worst(G) − Y_i(G))
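Equation (5) can be sketched as follows (NumPy; rand1 and rand2 stand for the two independent uniform random factors in the equation):

```python
import numpy as np

def best_worst_move(y_i, y_best, y_worst, rng):
    """Equation (5): move toward the best position and away from the worst."""
    d = len(y_i)
    rand1, rand2 = rng.random(d), rng.random(d)
    return y_i + rand1 * (y_best - y_i) - rand2 * (y_worst - y_i)
```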
Motivated by this remarkable idea, we have utilized this approach to explore new positions through mutation and crossover phases, as given below.
To find the new position Xi corresponding to the ith vector Yi, first we choose a random vector, say Yr, from the population and use Equations (6) and (7) to create the components of Xi:
Crossover Operation by Best Position:
DE/rand/best/1: x_{i,j}(G) = y_{r,j}(G) + randB × (y_{best,j}(G) − y_{i,j}(G)) if rand ≤ CRB; y_{i,j}(G) otherwise
Crossover Operation by Worst Position:
DE/rand/worst/1: x_{i,j}(G) = y_{r,j}(G) − randW × (y_{worst,j}(G) − y_{i,j}(G)) if rand ≤ CRW; y_{i,j}(G) otherwise
where rand, randB, and randW are different uniform random numbers in (0, 1), and CRB and CRW are fixed constants used to control the crossover rate. One of the two proposed crossover strategies is then picked randomly on the basis of a fixed probability, called 'Pr'.
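A sketch of the proposed best/worst-guided crossover (Equations (6) and (7)); the function name and the per-component sampling of randB/randW are our reading of the equations:

```python
import numpy as np

def guided_crossover(pop, i, best, worst, Pr, CRB, CRW, rng):
    """Pick DE/rand/best/1 with probability Pr, else DE/rand/worst/1."""
    N, d = pop.shape
    r = rng.choice([k for k in range(N) if k != i])  # random base vector Y_r
    y_r, y_i = pop[r], pop[i]
    trial = y_i.copy()
    if rng.random() <= Pr:                       # Equation (6): toward the best
        mask = rng.random(d) <= CRB
        trial[mask] = y_r[mask] + rng.random(d)[mask] * (best[mask] - y_i[mask])
    else:                                        # Equation (7): away from the worst
        mask = rng.random(d) <= CRW
        trial[mask] = y_r[mask] - rng.random(d)[mask] * (worst[mask] - y_i[mask])
    return trial
```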
The difference between exploration by DE/rand/1 and by the proposed strategies is graphically demonstrated in Figure 1. In the left image, the yellow and green dots represent the possible crossover positions as determined using the DE/rand/1 strategy; with this strategy, there are four possible crossover positions for the target vector Yi. In the right image, the yellow and blue dots represent the possible crossover positions assessed using the DE/rand/best/1 strategy, while the green and red dots represent the possible crossover positions determined using the DE/rand/worst/1 strategy. Eight improved possible crossover positions for the target vector Yi are obtained using these strategies. Hence, the proposed strategies improve the exploration capability of the DE algorithm by providing additional and better positions for generating trial vectors compared to the DE/rand/1 approach.

3.2. Improved Selection Operation

If a vector created through the proposed crossover operation is not able to beat its target vector, then DE/αbest/1 is applied to create an additional trial vector. This approach is an adapted version of DE/rand/1 that also utilizes the advantage of another approach, namely DE/best/1, by selecting the base vector Y_a1 from the top α% of the current population. The crossover operation for DE/αbest/1 is defined by Equation (8) below:
DE/αbest/1: x_{i,j}(G+1) = y_{a1,j}(G) + Fα × (y_{a2,j}(G) − y_{a3,j}(G)) if rand ≤ CRα or j = randi(d); y_{i,j}(G) otherwise
where Y_a1 is a randomly selected vector from the top α% of the current population; Y_a2 and Y_a3 are another two randomly selected vectors; and Fα and CRα are control parameters.
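The DE/αbest/1 operation of Equation (8) can be sketched as below (minimization assumed, so the top α% are the lowest-fitness vectors; the naming is ours):

```python
import numpy as np

def alpha_best_trial(pop, fitness, i, alpha, F_a, CR_a, rng):
    """Equation (8): base vector drawn from the top alpha% of the population."""
    N, d = pop.shape
    n_top = max(1, int(np.ceil(alpha / 100.0 * N)))
    top = np.argsort(fitness)[:n_top]            # lowest fitness = best
    a1 = rng.choice(top)
    a2, a3 = rng.choice([k for k in range(N) if k not in (i, a1)],
                        size=2, replace=False)
    mask = rng.random(d) <= CR_a
    mask[rng.integers(d)] = True                 # the randi(d) component
    return np.where(mask, pop[a1] + F_a * (pop[a2] - pop[a3]), pop[i])
```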
Therefore, by using the proposed IDEBW, we not only obtain an additional approach to generating the trial vector, but also a way to improve it via a modified selection operation. However, apart from these advantages, we can also face drawbacks like slightly increased complexity and population stagnation problems in some cases.
The working steps, pseudo-code (Algorithm 1), and flowchart (Figure 2) of the proposed IDEBW are given below:
(a) 
Working Steps:
Step-1: 
Initialize the parameter settings, like population size (N), CRB, CRW, CRα, Fα, probability constant (Pr), and Max-iteration, and generate the initial population.
Step-2: 
Generate a uniform random number rand and go to Step-3.
Step-3: 
If rand ≤ Pr, then use Equation (6); otherwise, use Equation (7) to generate the trial vector.
Step-4: 
Select this trial vector for the next generation if it gives a smaller fitness value than its corresponding target vector; otherwise, generate an additional trial vector using Equation (8) and repeat the selection operation.
Step-5: 
Repeat all the above steps for all remaining vectors and obtain the best value once Max-iteration is reached.
(b) 
Pseudo-Code of proposed IDEBW
Algorithm 1. IDEBW Algorithm
Input: N, d, Max-iteration, Pr, CRB, CRW, CRα, Fα
Generate initial population Pop(G) via Equation (1)
Calculate function value f(Yi) for each i
While iteration ≤ Max-iteration
  Obtain best and worst positions
  For i = 1:N
    Select Yr randomly from Pop(G)
    IF rand ≤ Pr
      For j = 1:d
        Generate trial vector Xi via Equation (6) // DE/rand/best/1
      End For
    Else
      For j = 1:d
        Generate trial vector Xi via Equation (7) // DE/rand/worst/1
      End For
    End IF
    IF f(Xi) ≤ f(Yi)
      Update Yi via Xi
      Update best position
    Else
      Select Ya1 randomly from the top α% and Ya2, Ya3 from Pop(G)
      For j = 1:d
        Generate trial vector Xi via Equation (8) // DE/αbest/1
      End For
      IF f(Xi) ≤ f(Yi)
        Update Yi via Xi
        Update best position
      End IF
    End IF
  End For
  iteration = iteration + 1
End While
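The pseudo-code can be condensed into a runnable sketch as follows (minimization of a sphere function; all parameter values, names, and the once-per-generation refresh of the best/worst indices are illustrative simplifications, not the paper's tuned settings):

```python
import numpy as np

def idebw(fun, d=10, N=50, max_iter=200, Pr=0.5, CRB=0.9, CRW=0.5,
          CRa=0.9, Fa=0.5, alpha=20, lo=-100.0, hi=100.0, seed=0):
    rng = np.random.default_rng(seed)
    pop = lo + rng.random((N, d)) * (hi - lo)          # Equation (1)
    fit = np.array([fun(y) for y in pop])
    for _ in range(max_iter):
        b, w = np.argmin(fit), np.argmax(fit)          # best and worst positions
        for i in range(N):
            r = rng.choice([k for k in range(N) if k != i])
            trial = pop[i].copy()
            if rng.random() <= Pr:                     # Equation (6)
                m = rng.random(d) <= CRB
                trial[m] = pop[r, m] + rng.random(d)[m] * (pop[b, m] - pop[i, m])
            else:                                      # Equation (7)
                m = rng.random(d) <= CRW
                trial[m] = pop[r, m] - rng.random(d)[m] * (pop[w, m] - pop[i, m])
            ft = fun(trial)
            if ft > fit[i]:                            # fallback: Equation (8)
                n_top = max(1, int(np.ceil(alpha / 100.0 * N)))
                a1 = rng.choice(np.argsort(fit)[:n_top])
                a2, a3 = rng.choice([k for k in range(N) if k != i],
                                    size=2, replace=False)
                m = rng.random(d) <= CRa
                m[rng.integers(d)] = True
                trial = np.where(m, pop[a1] + Fa * (pop[a2] - pop[a3]), pop[i])
                ft = fun(trial)
            if ft <= fit[i]:                           # greedy selection
                pop[i], fit[i] = trial, ft
    return pop[np.argmin(fit)], float(fit.min())

sphere = lambda y: float(np.sum(y * y))
best_y, best_f = idebw(sphere)
```

Note that this sketch refreshes the best and worst indices once per generation, whereas Algorithm 1 updates the best position immediately after each successful replacement.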
(c) 
Flow Chart of proposed IDEBW
Figure 2. Flow chart of IDEBW.

4. Result Analysis and Discussion

The performance assessment of the proposed IDEBW on various test suites and real-life problems is discussed in this section.

4.1. Experimental Settings

All experiments are executed under the following conditions:
  • System Configuration: OS-64 Bit, Windows-10, Processor: 2.6-GHz Intel Core i3 processor, RAM-8GB.
  • N = 100; d = 30.
  • α = 20, Fα = 0.5, CRα = 0.9, CRB = 0.9, CRW = 0.5.
  • Max-iteration = 100 × d.
  • Total Run = 30.

4.2. Performance Evaluation of IDEBW on Classical Functions

A test suite of 13 simple and classical benchmark problems is selected from different studies [21,22,23]. The functions can be classified as unimodal (f1–f6) and multimodal (f8–f13) functions, or as the noisy function f7. As per the literature, the unimodal and multimodal functions are essential for testing the convergence and exploration effectiveness of the algorithms.
The performance assessment of IDEBW is performed against six other state-of-the-art DE variants, namely jDE [27], JADE [28], APadapSS-JADE [29], SHADE [30], CJADE [58], and DEGOS [57]. The results for jDE and JADE are copied from [28], while the results for APadapSS-JADE are taken from [29]. For SHADE, CJADE, and DEGOS, the results are obtained by using the code provided by the respective authors at http://toyamaailab.githhub.io/soucedata.html (accessed on 23 July 2023). The numerical results for the average error and standard deviation of 30 independent runs are presented in Table 1.
From Table 1, it is clear that the proposed IDEBW improves the quality of results, obtaining first rank for eight functions, namely f1, f2, f3, f5, f10, f11, f12, and f13, and second rank for function f6. For the remaining functions f4 and f7, it takes third rank, while for f8 and f9, it takes sixth and fifth ranks, respectively. APadapSS-JADE obtains first rank in three cases (f6, f7, and f11), whereas SHADE, JADE, and jDE obtain first rank for f4, f9, and f8, respectively. The win/loss/tie (w/l/t) counts represent the pairwise competition and indicate that IDEBW exceeds CJADE, DEGOS, SHADE, APadapSS-JADE, JADE, and jDE in 10, 13, 10, 8, 11, and 11 cases, respectively.
To check the time complexity of the algorithm, the average CPU run time is also calculated for the algorithms IDEBW, CJADE, DEGOS, and SHADE. We can see that the CPU times for IDEBW, CJADE, DEGOS, and SHADE are 11.6, 13.2, 11.4 and 12.1 s, respectively. Hence, IDEBW takes less computing time than CJADE and SHADE. The exception is DEGOS, which is better than all algorithms in terms of time complexity.
The signs '+', '−', and '=' indicate whether IDEBW is significantly better, worse, or equal, respectively. The p-values for the pairwise 'Wilcoxon sign test' are also presented in the table, verifying the statistical effectiveness of the proposed IDEBW over the others.
The Wilcoxon rank sum test outcomes are listed in Table 2. The results present pairwise ranks, sums of ranks, and p-values. The lower rank and higher positive rank sum evidence the effectiveness of the proposed IDEBW over its competitors. However, the p-values show that IDEBW is significantly better than CJADE, DEGOS, and jDE, while there is no significant difference between IDEBW and SHADE, APadapSS-JADE, or JADE.
The Friedman’s rank and critical difference (CD) values obtained through the Bonferroni–Dunn test are presented in Table 3 in order to examine the global difference between the algorithms. The IDEBW obtained the lowest average rank, confirming its significance over others.
Figure 3 presents the algorithms' rank bars and two horizontal control lines, which mark significance levels of 10% and 5%, respectively. The graph shows that the rank bars of IDEBW, SHADE, APadapSS-JADE, and JADE are below the control lines, and hence these algorithms are of equal significance, while CJADE, DEGOS, and jDE are considered significantly worse than the proposed IDEBW algorithm.
Figure 4 presents the convergence graphs of the algorithms for some selected functions: f1, f2, f10, and f11. The X- and Y-axes indicate the iterations and fitness values, respectively. The graph lines verify the faster convergence of the proposed IDEBW compared to its competitors.

4.3. Performance Evaluation of IDEBW on CEC-2017 Functions

In this section, a performance assessment of IDEBW is performed on the well-known IEEE CEC-2017 test suite of 29 (C1–C30) more complicated and composite functions. These functions can be divided into four groups: unimodal (C1–C3), multimodal (C4–C10), hybrid (C11–C20), and composite (C21–C30). For each function, the optimum value is 100 × (function number), while the initial bounds are (−100, 100) for all functions. A full specification of these functions is given in [61].
Next, performance assessments of IDEBW against DE variants and against other meta-heuristics are carried out separately. Their numerical results are presented in Table 4 and Table 5, respectively, while the statistical analyses of these results are given in Table 6 and Table 7.

4.3.1. Performance Assessment with DE Variants

Five state-of-the-art DE variants, namely SHADE [30], DEGOS [57], CJADE [58], TRADE [59], and IMODE [62], are selected for performance assessment against IDEBW. TRADE, CJADE, and DEGOS are recently developed DE variants, while SHADE and IMODE are winner algorithms from the CEC-2014 and CEC-2020 competitions, respectively. The population size and maximum iterations are taken as 100 and 3000, respectively, for all algorithms. The other parameter settings of the algorithms are taken as suggested in their original works.
Table 4 presents the numerical results for the average error and standard deviation of 30 runs. The value to reach (VTR) is taken as 10−8; i.e., the error is taken as 0 if it crosses the fixed VTR. Table 4 shows that IDEBW obtains first rank in 11 cases, namely C1, C6, C9, C13, C15, C18, C19, C22, C25, C29, and C30. Similarly, TRADE obtains first rank in 11 cases, namely C1, C6, C9, C16, C17, C20, C22, C23, C25, C26, and C27. SHADE obtains the best position in 10 cases, namely C1, C3, C5, C7, C8, C10, C21, C22, C24, and C25. CJADE and DEGOS both obtain first rank in 5 cases, (C1, C6, C9, C22, and C25) and (C1, C9, C11, C14, and C22), respectively, whereas IMODE takes first place in only 3 cases, namely C4, C12, and C28. All algorithms except IMODE equally obtain first rank for C1 and C22, while IDEBW, TRADE, DEGOS, and CJADE perform equally in the cases of C6 and C9. The pairwise w/l/t performance demonstrates that IDEBW exceeds TRADE, CJADE, DEGOS, SHADE, and IMODE in 13, 14, 16, 14, and 25 cases, respectively.
The average CPU times for IDEBW, TRADE, CJADE, DEGOS, SHADE, and IMODE are 146.2, 165.4, 172.9, 144.5, 148.1, and 168.2 s, respectively. Hence, IDEBW takes less computing time than all DE variants except DEGOS, which is better than all algorithms in terms of time complexity.
The p-values obtained by the pairwise ‘Wilcoxon sign test’ also verify the statistical effectiveness of the proposed IDEBW on the others.
The Wilcoxon rank sum test outcomes with pairwise ranks, sum of ranks, and p-values are listed in Table 6. The lower rank and higher positive rank sum evidence the effectiveness of the proposed IDEBW over its competitors. However, the p-values show that the IDEBW is significantly better than IMODE, while there is no significant difference between the performance of the IDEBW, TRADE, CJADE, DEGOS, and SHADE.
The Friedman's rank and critical difference (CD) values obtained through the Bonferroni–Dunn test are presented in Table 7 to examine the global difference between the algorithms. TRADE obtained the lowest average rank; however, the bar graphs presented in Figure 5a show that IDEBW, TRADE, DEGOS, and SHADE are considered significantly equal, while CJADE and IMODE are significantly worse than these algorithms.

4.3.2. Performance Assessment with Other Meta-Heuristics

In this section, the performance of IDEBW is compared with that of five other meta-heuristic algorithms, namely TDSD [63], EJaya [64], AGBSO [65], HMRFO [66], and disGSA [67]. The HMRFO, disGSA, AGBSO, and EJaya methods are recently developed variants of the MRFO, GSA, BSO, and Jaya algorithms, respectively, whereas TDSD is a hybrid variant of three search dynamics: spherical search, hypercube search, and chaotic local search.
The population size and maximum iterations are taken as 100 and 3000, respectively, for all algorithms. The other parameter settings of algorithms are taken as suggested in their original works.
Table 5 presents the obtained average error and standard deviation of 30 runs. It shows that IDEBW obtains first rank in 14 cases, namely C1, C6, C9, C11, C13, C14, C15, C18, C19, C20, C22, C27, C29, and C30, whereas AGBSO obtains first rank in 9 cases, namely C5, C8, C9, C10, C16, C17, C21, C22, and C23. EJAYA, HMRFO, disGSA, and TDSD obtain first rank in 3 cases (C3, C12, and C22), 1 case (C22), 4 cases (C7, C22, C24, and C26), and 2 cases (C4 and C25), respectively. The pairwise w/l/t demonstrates that IDEBW exceeds EJAYA, HMRFO, AGBSO, disGSA, and TDSD in 24, 25, 16, 17, and 22 cases, respectively.
The average CPU times for IDEBW, EJAYA, HMRFO, AGBSO, disGSA, and TDSD are 146.2, 105.4, 165.2, 154.4, 159.2, and 189.3 s, respectively. Hence, IDEBW takes less computing time than all meta-heuristics except EJAYA, which is better than all algorithms in terms of time complexity.
The p-values, obtained by the pairwise ‘Wilcoxon sign test’, also verify the statistical effectiveness of the proposed IDEBW on the others.
The Wilcoxon rank sum test outcomes with pairwise ranks, sums of ranks, and p-values are listed in Table 6. The lower rank and higher positive rank sum evidence the effectiveness of the proposed IDEBW over its competitors. The p-values show that only AGBSO demonstrates a significantly equal performance with IDEBW, whereas all other meta-heuristics are significantly worse than IDEBW.
The Friedman’s rank and critical difference (CD) values obtained through the Bonferroni–Dunn test are presented in Table 7 to test out the global difference between the algorithms. The IDEBW obtains the lowest average rank and shows its significance.
The bar graphs presented in Figure 5b show that IDEBW and AGBSO are significantly equal, while the others cross the control lines and are considered significantly worse than these two algorithms.
Figure 6 presents the convergence graphs of the algorithms for some selected functions: C1, C10, C21, and C30. The X- and Y-axes indicate the iterations and fitness values, respectively. The graph lines verify the faster convergence of the proposed IDEBW over its competitors.

4.4. Performance Evaluation of IDEBW on Real-Life Applications

In this section, the practical applicability of the proposed IDEBW is tested on three IEEE CEC-2011 real-life applications, as given below:
RP1:
Frequency-modulated (FM) sound wave problem.
RP2:
Spread-spectrum radar polyphase code design problem.
RP3:
Non-linear stirred tank reactor optimal control problem.
The complete details of these problems are specified in [68].
The performance assessment is carried out against five qualified algorithms, including DEGOS, SHADE, DE, EJAYA, and TDSD. The outcomes for SHADE and TDSD are copied from [63]. The maximum iterations are taken as 100 × d, i.e., 600, 2000, and 100 for RP1, RP2, and RP3, respectively. The results for the best values, mean values, and standard deviations obtained in 30 independent runs are presented in Table 8.
The results show that the proposed IDEBW improves the quality of results and obtains first rank by reaching the optimum value in each case, whether RP1, RP2, or RP3. The SHADE algorithm takes second rank for RP1 and RP2, while TDSD takes second rank for RP3. Hence, the proposed IDEBW confirms its feasibility for real-life problems as well.
The convergence graphs for IDEBW, DEGOS, DE, and EJAYA are presented in Figure 7. The X- and Y-axes indicate the iterations and fitness values, respectively. The graph lines also demonstrate the faster convergence speed of IDEBW compared to its opponents.

5. Conclusions

A best- and worst-location-guided exploration approach for the DE algorithm is presented in this study. The proposed technique offers an improved search alternative by either moving towards the best location or avoiding the most unfavorable one. The proposed variant, named 'IDEBW', also uses the DE/αbest/1 approach in the selection operation when the trial vectors are not selected for the next generation. The 'IDEBW' variant is tested on 13 classical functions, 29 hybrid and composite CEC-2017 benchmark functions, and 3 real-life optimization problems from the CEC-2011 test suite. The results are compared with those of eight other state-of-the-art DE variants, namely jDE, JADE, SHADE, APadapSS-JADE, CJADE, DEGOS, TRADE, and IMODE, and five other enhanced meta-heuristic variants, namely EJAYA, HMRFO, disGSA, AGBSO, and TDSD. The outcomes verify the success of the new exploration strategy in terms of improvement in solution quality as well as convergence speed.
Our future works will focus on employing the proposed IDEBW in some complicated, constrained, and multi-objective real-life applications. Second, it will also be quite exciting to apply the proposed idea to other meta-heuristic algorithms to improve their performance.

Author Contributions

Conceptualization, P.K.; methodology, P.K.; software, P.K. and M.A.; validation, P.K. and M.A.; formal analysis, P.K.; investigation, P.K. and M.A.; resources, P.K. and M.A.; data curation, P.K.; writing—original draft preparation, P.K. and M.A.; writing—review and editing, P.K. and M.A.; visualization, P.K. and M.A.; supervision, P.K. and M.A.; project administration, P.K.; funding acquisition, M.A. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Deanship of Scientific Research, Vice Presidency for Graduate Studies and Scientific Research, King Faisal University, Saudi Arabia (Grant No. 5806).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

All related data is contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Wright, A.H. Genetic Algorithms for Real Parameter Optimization. Found. Genet. Algorithms 1991, 1, 205–218.
2. Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359.
3. Venkata Rao, R. Jaya: A Simple and New Optimization Algorithm for Solving Constrained and Unconstrained Optimization Problems. Int. J. Ind. Eng. Comput. 2016, 7, 19–34.
4. Kennedy, J.; Eberhart, R. Particle Swarm Optimization. In Proceedings of the IEEE International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume IV, pp. 1942–1948.
5. Karaboga, D.; Basturk, B. A Powerful and Efficient Algorithm for Numerical Function Optimization: Artificial Bee Colony (ABC) Algorithm. J. Glob. Optim. 2007, 39, 459–471.
6. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
7. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
8. Zhao, W.; Zhang, Z.; Wang, L. Manta Ray Foraging Optimization: An Effective Bio-Inspired Optimizer for Engineering Applications. Eng. Appl. Artif. Intell. 2020, 87, 103300.
9. Abualigah, L.; Elaziz, M.A.; Sumari, P.; Geem, Z.W.; Gandomi, A.H. Reptile Search Algorithm (RSA): A Nature-Inspired Meta-Heuristic Optimizer. Expert Syst. Appl. 2022, 191, 116158.
10. Rashedi, E.; Nezamabadi-pour, H.; Saryazdi, S. GSA: A Gravitational Search Algorithm. Inf. Sci. 2009, 179, 2232–2248.
11. Mirjalili, S. SCA: A Sine Cosine Algorithm for Solving Optimization Problems. Knowl.-Based Syst. 2016, 96, 120–133.
12. Zhao, W.; Wang, L.; Zhang, Z. A Novel Atom Search Optimization for Dispersion Coefficient Estimation in Groundwater. Future Gener. Comput. Syst. 2019, 91, 601–610.
13. Shi, Y. Brain Storm Optimization Algorithm. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2011.
14. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching-Learning-Based Optimization: A Novel Method for Constrained Mechanical Design Optimization Problems. CAD Comput. Aided Des. 2011, 43, 303–315.
15. Mohamed, A.W.; Hadi, A.A.; Mohamed, A.K. Gaining-Sharing Knowledge Based Algorithm for Solving Optimization Problems: A Novel Nature-Inspired Algorithm. Int. J. Mach. Learn. Cybern. 2020, 11, 1501–1529.
16. Deng, L.B.; Zhang, L.L.; Fu, N.; Sun, H.L.; Qiao, L.Y. ERG-DE: An Elites Regeneration Framework for Differential Evolution. Inf. Sci. 2020, 539, 81–103.
17. Zhang, K.; Yu, Y. An Enhancing Differential Evolution Algorithm with a Rankup Selection: RUSDE. Mathematics 2021, 9, 569.
18. Kumar, S.; Kumar, P.; Sharma, T.K.; Pant, M. Bi-Level Thresholding Using PSO, Artificial Bee Colony and MRLDE Embedded with Otsu Method. Memetic Comput. 2013, 5, 323–334.
19. Chakraborty, S.; Saha, A.K.; Ezugwu, A.E.; Agushaka, J.O.; Zitar, R.A.; Abualigah, L. Differential Evolution and Its Applications in Image Processing Problems: A Comprehensive Review. Arch. Comput. Methods Eng. 2023, 30, 985–1040.
20. Kumar, P.; Pant, M. Recognition of Noise Source in Multi Sounds Field by Modified Random Localized Based DE Algorithm. Int. J. Syst. Assur. Eng. Manag. 2018, 9, 245–261.
21. Jana, R.K.; Ghosh, I.; Das, D. A Differential Evolution-Based Regression Framework for Forecasting Bitcoin Price. Ann. Oper. Res. 2021, 306, 295–320.
22. Yi, W.; Lin, Z.; Lin, Y.; Xiong, S.; Yu, Z.; Chen, Y. Solving Optimal Power Flow Problem via Improved Constrained Adaptive Differential Evolution. Mathematics 2023, 11, 1250.
23. Baioletti, M.; Di Bari, G.; Milani, A.; Poggioni, V. Differential Evolution for Neural Networks Optimization. Mathematics 2020, 8, 69.
24. Mohamed, A.W. A Novel Differential Evolution Algorithm for Solving Constrained Engineering Optimization Problems. J. Intell. Manuf. 2018, 28, 149–164.
25. Chi, R.; Li, H.; Shen, D.; Hou, Z.; Huang, B. Enhanced P-Type Control: Indirect Adaptive Learning from Set-Point Updates. IEEE Trans. Automat. Contr. 2023, 68, 1600–1613.
26. Roman, R.-C.; Precup, R.-E.; Petriu, E.M.; Borlea, A.-I. Hybrid Data-Driven Active Disturbance Rejection Sliding Mode Control with Tower Crane Systems Validation. Sci. Technol. 2024, 27, 3–17.
27. Brest, J.; Greiner, S.; Bošković, B.; Mernik, M.; Zumer, V. Self-Adapting Control Parameters in Differential Evolution: A Comparative Study on Numerical Benchmark Problems. IEEE Trans. Evol. Comput. 2006, 10, 646–657.
28. Zhang, J.; Sanderson, A.C. JADE: Adaptive Differential Evolution with Optional External Archive. IEEE Trans. Evol. Comput. 2009, 13, 945–958.
29. Gong, W.; Fialho, Á.; Cai, Z.; Li, H. Adaptive Strategy Selection in Differential Evolution for Numerical Optimization: An Empirical Study. Inf. Sci. 2011, 181, 5364–5386.
30. Tanabe, R.; Fukunaga, A. Success-History Based Parameter Adaptation for Differential Evolution. In Proceedings of the 2013 IEEE Congress on Evolutionary Computation, CEC 2013, Cancun, Mexico, 20–23 June 2013.
31. Tanabe, R.; Fukunaga, A.S. Improving the Search Performance of SHADE Using Linear Population Size Reduction. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation, CEC 2014, Beijing, China, 6–11 July 2014.
32. Brest, J.; Maučec, M.S.; Bošković, B. IL-SHADE: Improved L-SHADE Algorithm for Single Objective Real-Parameter Optimization. In Proceedings of the 2016 IEEE Congress on Evolutionary Computation, CEC 2016, Vancouver, BC, Canada, 24–29 July 2016.
33. Hadi, A.A.; Mohamed, A.W.; Jambi, K.M. LSHADE-SPA Memetic Framework for Solving Large-Scale Optimization Problems. Complex Intell. Syst. 2019, 5, 25–40.
34. Zhao, F.; Zhao, L.; Wang, L.; Song, H. A Collaborative LSHADE Algorithm with Comprehensive Learning Mechanism. Appl. Soft Comput. J. 2020, 96, 106609.
35. Choi, T.J.; Ahn, C.W. An Improved LSHADE-RSP Algorithm with the Cauchy Perturbation: ILSHADE-RSP. Knowl.-Based Syst. 2021, 215, 106628.
36. Brest, J.; Maučec, M.S.; Bošković, B. Single Objective Real-Parameter Optimization: Algorithm JSO. In Proceedings of the 2017 IEEE Congress on Evolutionary Computation, CEC 2017—Proceedings, Donostia, Spain, 5–8 June 2017.
37. Ali, M.; Pant, M. Improving the Performance of Differential Evolution Algorithm Using Cauchy Mutation. Soft Comput. 2011, 15, 991–1007.
38. Choi, T.J.; Togelius, J.; Cheong, Y.G. Advanced Cauchy Mutation for Differential Evolution in Numerical Optimization. IEEE Access 2020, 8, 8720–8734.
39. Kumar, P.; Pant, M. Enhanced Mutation Strategy for Differential Evolution. In Proceedings of the 2012 IEEE Congress on Evolutionary Computation, CEC 2012, Brisbane, QLD, Australia, 10–15 June 2012.
40. Mallipeddi, R.; Suganthan, P.N.; Pan, Q.K.; Tasgetiren, M.F. Differential Evolution Algorithm with Ensemble of Parameters and Mutation Strategies. Appl. Soft Comput. J. 2011, 11, 1679–1696.
41. Gong, W.; Cai, Z. Differential Evolution with Ranking-Based Mutation Operators. IEEE Trans. Cybern. 2013, 43, 2066–2081.
42. Xiang, W.L.; Meng, X.L.; An, M.Q.; Li, Y.Z.; Gao, M.X. An Enhanced Differential Evolution Algorithm Based on Multiple Mutation Strategies. Comput. Intell. Neurosci. 2015, 2015, 285730.
43. Gupta, S.; Su, R. An Efficient Differential Evolution with Fitness-Based Dynamic Mutation Strategy and Control Parameters. Knowl.-Based Syst. 2022, 251, 109280.
44. Wang, L.; Zhou, X.; Xie, T.; Liu, J.; Zhang, G. Adaptive Differential Evolution with Information Entropy-Based Mutation Strategy. IEEE Access 2021, 9, 146783–146796.
45. Sun, G.; Lan, Y.; Zhao, R. Differential Evolution with Gaussian Mutation and Dynamic Parameter Adjustment. Soft Comput. 2019, 23, 1615–1642.
46. Cheng, J.; Pan, Z.; Liang, H.; Gao, Z.; Gao, J. Differential Evolution Algorithm with Fitness and Diversity Ranking-Based Mutation Operator. Swarm Evol. Comput. 2021, 61, 100816.
47. Li, Y.; Wang, S.; Yang, B. An Improved Differential Evolution Algorithm with Dual Mutation Strategies Collaboration. Expert Syst. Appl. 2020, 153, 113451.
48. AlKhulaifi, D.; AlQahtani, M.; AlSadeq, Z.; ur Rahman, A.; Musleh, D. An Overview of Self-Adaptive Differential Evolution Algorithms with Mutation Strategy. Math. Model. Eng. Probl. 2022, 9, 1017–1024.
49. Kumar, P.; Ali, M. SaMDE: A Self Adaptive Choice of DNDE and SPIDE Algorithms with MRLDE. Biomimetics 2023, 8, 494.
50. Zhu, W.; Tang, Y.; Fang, J.A.; Zhang, W. Adaptive Population Tuning Scheme for Differential Evolution. Inf. Sci. 2013, 223, 164–191.
51. Poikolainen, I.; Neri, F.; Caraffini, F. Cluster-Based Population Initialization for Differential Evolution Frameworks. Inf. Sci. 2015, 297, 216–235.
52. Meng, Z.; Zhong, Y.; Yang, C. CS-DE: Cooperative Strategy Based Differential Evolution with Population Diversity Enhancement. Inf. Sci. 2021, 577, 663–696.
53. Stanovov, V.; Akhmedova, S.; Semenkin, E. Dual-Population Adaptive Differential Evolution Algorithm L-NTADE. Mathematics 2022, 10, 4666.
54. Meng, Z.; Chen, Y. Differential Evolution with Exponential Crossover Can Be Also Competitive on Numerical Optimization. Appl. Soft Comput. 2023, 146, 110750.
55. Zeng, Z.; Zhang, M.; Chen, T.; Hong, Z. A New Selection Operator for Differential Evolution Algorithm. Knowl.-Based Syst. 2021, 226, 107150.
56. Kumar, A.; Biswas, P.P.; Suganthan, P.N. Differential Evolution with Orthogonal Array-Based Initialization and a Novel Selection Strategy. Swarm Evol. Comput. 2022, 68, 101010.
57. Yu, Y.; Gao, S.; Wang, Y.; Todo, Y. Global Optimum-Based Search Differential Evolution. IEEE/CAA J. Autom. Sin. 2019, 6, 379–394.
58. Gao, S.; Yu, Y.; Wang, Y.; Wang, J.; Cheng, J.; Zhou, M. Chaotic Local Search-Based Differential Evolution Algorithms for Optimization. IEEE Trans. Syst. Man Cybern. Syst. 2021, 51, 3954–3967.
59. Cai, Z.; Yang, X.; Zhou, M.C.; Zhan, Z.H.; Gao, S. Toward Explicit Control between Exploration and Exploitation in Evolutionary Algorithms: A Case Study of Differential Evolution. Inf. Sci. 2023, 649, 119656.
60. Ahmad, M.F.; Isa, N.A.M.; Lim, W.H.; Ang, K.M. Differential Evolution: A Recent Review Based on State-of-the-Art Works. Alex. Eng. J. 2022, 61, 3831–3872.
61. Awad, N.H.; Ali, M.Z.; Liang, J.J.; Qu, B.Y.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the CEC 2017 Special Session and Competition on Single Objective Bound Constrained Real-Parameter Numerical Optimization; Technical Report; Nanyang Technological University: Singapore, 2016; pp. 1–34.
62. Sallam, K.M.; Elsayed, S.M.; Chakrabortty, R.K.; Ryan, M.J. Improved Multi-Operator Differential Evolution Algorithm for Solving Unconstrained Problems. In Proceedings of the 2020 IEEE Congress on Evolutionary Computation, CEC 2020—Conference Proceedings, Glasgow, UK, 19–24 July 2020.
63. Li, X.; Cai, Z.; Wang, Y.; Todo, Y.; Cheng, J.; Gao, S. TDSD: A New Evolutionary Algorithm Based on Triple Distinct Search Dynamics. IEEE Access 2020, 8, 76752–76764.
64. Zhang, Y.; Chi, A.; Mirjalili, S. Enhanced Jaya Algorithm: A Simple but Efficient Optimization Method for Constrained Engineering Design Problems. Knowl.-Based Syst. 2021, 233, 107555.
65. Cai, Z.; Gao, S.; Yang, X.; Yang, G.; Cheng, S.; Shi, Y. Alternate Search Pattern-Based Brain Storm Optimization. Knowl.-Based Syst. 2022, 238, 107896.
66. Tang, Z.; Wang, K.; Tao, S.; Todo, Y.; Wang, R.L.; Gao, S. Hierarchical Manta Ray Foraging Optimization with Weighted Fitness-Distance Balance Selection. Int. J. Comput. Intell. Syst. 2023, 16, 114.
67. Guo, A.; Wang, Y.; Guo, L.; Zhang, R.; Yu, Y.; Gao, S. An Adaptive Position-Guided Gravitational Search Algorithm for Function Optimization and Image Threshold Segmentation. Eng. Appl. Artif. Intell. 2023, 121, 106040.
68. Das, S.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for CEC 2011 Competition on Testing Evolutionary Algorithms on Real World Optimization Problems; Jadavpur University: Kolkata, India; Nanyang Technological University: Singapore, 2010; pp. 341–359.
Figure 1. Difference of exploration by DE/rand/mutation and proposed mutation.
Figure 3. The Friedman ranks and Bonferroni–Dunn test presentation for classical functions.
Figure 4. Performance evaluation of IDEBW by convergence graphs of classical functions. (a) F01 (Unimodal). (b) F02 (Unimodal). (c) F10 (Multimodal). (d) F11 (Multimodal).
Figure 5. The Friedman ranks and Bonferroni–Dunn test presentation for CEC-2017 functions for (a) DE variants and (b) meta-heuristic variants.
Figure 6. Convergence graphs for CEC-2017 functions: (a) C01, (b) C05, (c) C15, and (d) C30.
Figure 7. Convergence graphs for real-life problems: (a) RP1, (b) RP2, and (c) RP3.
Table 1. Performance evaluation of IDEBW on classical functions. Entries are mean (standard deviation).

| F | Iter. | IDEBW | CJADE | DEGOS | SHADE | APadapSS-JADE | JADE | jDE |
|---|---|---|---|---|---|---|---|---|
| f1 | 1.5 × 10^3 | 3.51 × 10^−81 (7.1 × 10^−81) | 4.07 × 10^−62 (2.32 × 10^−62) + | 6.14 × 10^−26 (4.85 × 10^−26) + | 3.76 × 10^−74 (2.34 × 10^−74) + | 2.45 × 10^−75 (1.39 × 10^−74) + | 1.79 × 10^−60 (8.29 × 10^−60) + | 2.49 × 10^−28 (4.39 × 10^−28) + |
| rank | | 1 | 4 | 7 | 3 | 2 | 5 | 6 |
| f2 | 2.0 × 10^3 | 7.08 × 10^−56 (4.55 × 10^−56) | 7.03 × 10^−34 (4.53 × 10^−34) + | 1.98 × 10^−19 (3.43 × 10^−19) + | 1.04 × 10^−47 (3.24 × 10^−47) + | 1.90 × 10^−44 (1.29 × 10^−43) + | 1.89 × 10^−25 (9.01 × 10^−25) + | 1.49 × 10^−23 (1.01 × 10^−23) + |
| rank | | 1 | 4 | 7 | 2 | 3 | 5 | 6 |
| f3 | 5.0 × 10^3 | 1.57 × 10^−68 (2.19 × 10^−68) | 1.06 × 10^−59 (9.05 × 10^−59) + | 1.39 × 10^−20 (1.09 × 10^−20) + | 4.56 × 10^−63 (2.15 × 10^−63) + | 2.49 × 10^−68 (8.40 × 10^−68) + | 5.99 × 10^−61 (2.90 × 10^−60) + | 5.19 × 10^−14 (1.11 × 10^−14) + |
| rank | | 1 | 5 | 6 | 3 | 2 | 4 | 7 |
| f4 | 5.0 × 10^3 | 1.11 × 10^−49 (1.53 × 10^−49) | 1.97 × 10^−61 (2.34 × 10^−60) + | 2.34 × 10^−01 (4.82 × 10^−01) + | 7.86 × 10^−64 (4.83 × 10^−64) − | 5.15 × 10^−22 (5.39 × 10^−22) + | 8.19 × 10^−24 (4.01 × 10^−23) + | 1.39 × 10^−15 (1.09 × 10^−15) + |
| rank | | 3 | 2 | 7 | 1 | 5 | 4 | 6 |
| f5 | 5.0 × 10^3 | 2.14 × 10^−28 (1.98 × 10^−28) | 6.02 × 10^−01 (4.82 × 10^−01) + | 9.53 × 10^−22 (4.28 × 10^−22) + | 8.12 × 10^−02 (4.34 × 10^−02) + | 3.20 × 10^−01 (1.09 × 10^00) + | 8.01 × 10^−02 (7.19 × 10^−01) + | 1.30 × 10^01 (1.40 × 10^01) + |
| rank | | 1 | 6 | 2 | 4 | 5 | 3 | 7 |
| f6 | 1.0 × 10^2 | 1.02 × 10^−01 (3.22 × 10^−01) | 3.57 × 10^00 (6.43 × 10^−01) + | 9.34 × 10^01 (3.45 × 10^01) + | 4.11 × 10^00 (1.01 × 10^00) + | 3.99 × 10^−02 (1.95 × 10^−02) − | 2.90 × 10^00 (1.10 × 10^00) + | 1.09 × 10^03 (2.09 × 10^02) + |
| rank | | 2 | 4 | 6 | 5 | 1 | 3 | 7 |
| f7 | 3.0 × 10^3 | 1.05 × 10^−03 (9.23 × 10^−04) | 1.21 × 10^−03 (5.24 × 10^−03) + | 2.22 × 10^−03 (3.34 × 10^−03) + | 1.18 × 10^−03 (3.38 × 10^−04) + | 5.89 × 10^−04 (1.79 × 10^−04) + | 6.39 × 10^−04 (2.19 × 10^−04) − | 3.29 × 10^−03 (8.49 × 10^−04) + |
| rank | | 3 | 5 | 6 | 4 | 1 | 2 | 7 |
| f8 | 1.0 × 10^3 | 9.49 × 10^02 (3.37 × 10^02) | 1.05 × 10^−03 (1.39 × 10^−05) − | 2.62 × 10^03 (7.11 × 10^03) + | 1.01 × 10^−03 (0.00 × 10^00) − | 1.79 × 10^−08 (1.20 × 10^−07) + | 3.29 × 10^−05 (2.1 × 10^−05) − | 7.19 × 10^−11 (1.29 × 10^−10) − |
| rank | | 6 | 5 | 7 | 4 | 2 | 3 | 1 |
| f9 | 1.0 × 10^3 | 1.42 × 10^01 (2.59 × 10^00) | 7.01 × 10^02 (3.22 × 10^00) + | 2.53 × 10^01 (1.03 × 10^01) + | 3.38 × 10^00 (1.37 × 10^00) − | 2.89 × 10^−01 (5.70 × 10^−01) − | 1.09 × 10^−04 (6.09 × 10^−05) − | 1.49 × 10^−04 (1.99 × 10^−04) − |
| rank | | 5 | 7 | 6 | 4 | 3 | 1 | 2 |
| f10 | 5.0 × 10^2 | 5.63 × 10^−13 (2.81 × 10^−13) | 4.69 × 10^−09 (3.42 × 10^−09) + | 4.85 × 10^−04 (1.09 × 10^−04) + | 1.25 × 10^−11 (3.45 × 10^−11) + | 1.11 × 10^−11 (1.90 × 10^−10) + | 8.19 × 10^−10 (7.01 × 10^−10) + | 3.49 × 10^−04 (1.05 × 10^−04) − |
| rank | | 1 | 5 | 7 | 3 | 2 | 4 | 6 |
| f11 | 5.0 × 10^2 | 0.00 (0.00) | 1.70 × 10^−15 (4.34 × 10^−16) + | 3.33 × 10^−05 (5.32 × 10^−05) + | 1.55 × 10^−16 (3.47 × 10^−16) + | 0.00 (0.00) = | 9.89 × 10^−08 (6.01 × 10^−07) + | 1.89 × 10^−05 (5.79 × 10^−05) + |
| rank | | 1 | 4 | 6 | 3 | 1 | 5 | 7 |
| f12 | 5.0 × 10^2 | 2.13 × 10^−25 (1.88 × 10^−25) | 3.42 × 10^−18 (3.41 × 10^−18) + | 5.63 × 10^−04 (8.45 × 10^−04) + | 4.56 × 10^−19 (3.23 × 10^−19) + | 2.19 × 10^−22 (7.69 × 10^−22) + | 4.39 × 10^−17 (2.10 × 10^−16) + | 1.59 × 10^−07 (1.50 × 10^−07) + |
| rank | | 1 | 4 | 7 | 3 | 2 | 5 | 6 |
| f13 | 5.0 × 10^2 | 1.83 × 10^−23 (3.47 × 10^−23) | 4.56 × 10^−17 (4.21 × 10^−17) + | 1.23 × 10^−03 (3.42 × 10^−03) + | 2.67 × 10^−18 (1.03 × 10^−18) + | 3.80 × 10^−20 (1.19 × 10^−19) + | 2.09 × 10^−16 (6.59 × 10^−16) + | 1.48 × 10^−06 (9.80 × 10^−07) + |
| rank | | 1 | 4 | 7 | 3 | 2 | 5 | 6 |
| CPU time (s) | | 11.6 | 13.2 | 11.4 | 12.1 | – | – | – |
| w/l/t | | | 11/2/0 | 13/0/0 | 10/3/0 | 8/4/1 | 10/3/0 | 11/2/0 |
| p-value | | | 0.022 + | <0.001 + | 0.092 = | 0.388 = | 0.092 = | 0.022 + |

‘+’, ‘−’, and ‘=’ stand for significantly better, worse, and equal, respectively.
Table 2. ‘Wilcoxon rank sum test’ outcomes for the classical functions.

| Algorithms | Pairwise Rank | ΣR+ | ΣR− | z-Value | p-Value | Sig. at α = 0.05 |
|---|---|---|---|---|---|---|
| IDEBW vs. CJADE | (1.15, 1.85) | 75 | 16 | 2.062 | 0.039 | + |
| IDEBW vs. DEGOS | (1.00, 2.00) | 91 | 0 | 3.180 | 0.001 | + |
| IDEBW vs. SHADE | (1.23, 1.77) | 63 | 28 | 1.223 | 0.221 | = |
| IDEBW vs. APadapSS-JADE | (1.35, 1.65) | 40 | 38 | 0.078 | 0.937 | = |
| IDEBW vs. JADE | (1.23, 1.77) | 57 | 34 | 0.804 | 0.422 | = |
| IDEBW vs. jDE | (1.15, 1.85) | 75 | 16 | 2.062 | 0.039 | + |

‘+’, ‘−’, and ‘=’ stand for significantly better, worse, and equal, respectively.
Table 3. Friedman ranks and Bonferroni–Dunn’s CD values for classical functions.

| | IDEBW | CJADE | DEGOS | SHADE | APadapSS-JADE | JADE | jDE | CD (α = 0.1) | CD (α = 0.05) |
|---|---|---|---|---|---|---|---|---|---|
| Rank | 2.12 | 4.54 | 6.31 | 3.23 | 2.42 | 3.77 | 5.62 | 2.0285 | 2.2352 |
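The CD values reported above follow the standard Bonferroni–Dunn critical-difference formula, CD = q_α · √(k(k + 1)/(6N)), where k is the number of compared algorithms and N the number of test problems. The sketch below recomputes the Table 3 values; the two-tailed critical values q_0.10 ≈ 2.394 and q_0.05 ≈ 2.638 for k = 7 are taken from standard Bonferroni–Dunn tables.

```python
import math

def bonferroni_dunn_cd(q_alpha, k, n):
    """Critical difference for the Bonferroni-Dunn post-hoc test:
    CD = q_alpha * sqrt(k * (k + 1) / (6 * n))."""
    return q_alpha * math.sqrt(k * (k + 1) / (6 * n))

# k = 7 algorithms compared on n = 13 classical functions (Table 3).
print(round(bonferroni_dunn_cd(2.394, 7, 13), 4))  # prints 2.0285
print(round(bonferroni_dunn_cd(2.638, 7, 13), 4))  # prints 2.2352
```

Two algorithms differ significantly when the gap between their Friedman ranks exceeds the CD; the same formula with k = 6 and N = 29 reproduces the CD values used for the CEC-2017 comparison in Table 7.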
Table 4. Comparison of IDEBW with other DE variants on CEC-2017 functions. Entries are mean (standard deviation).

| Fun | IDEBW | TRADE | CJADE | DEGOS | SHADE | IMODE |
|---|---|---|---|---|---|---|
| C1 | 0.00 × 10^00 (0.00 × 10^00) | 0.00 × 10^00 (0.00 × 10^00) | 0.00 × 10^00 (0.00 × 10^00) | 0.00 × 10^00 (0.00 × 10^00) | 0.00 × 10^00 (0.00 × 10^00) | 8.1 × 10^−11 (1.4 × 10^−03) |
| C3 | 1.8 × 10^−08 (1.9 × 10^−07) | 2.40 × 10^01 (4.41 × 10^01) | 8.5 × 10^−04 (1.42 × 10^04) | 2.8 × 10^−05 (6.9 × 10^−05) | 0.00 × 10^00 (0.00 × 10^00) | 1.4 × 10^−07 (8.1 × 10^−09) |
| C4 | 5.86 × 10^01 (0.00 × 10^00) | 5.98 × 10^01 (2.45 × 10^00) | 3.66 × 10^01 (3.08 × 10^01) | 5.92 × 10^01 (1.85 × 10^00) | 5.86 × 10^01 (3.1 × 10^−14) | 2.19 × 10^01 (2.84 × 10^02) |
| C5 | 3.55 × 10^01 (1.22 × 10^01) | 1.90 × 10^01 (4.91 × 10^00) | 2.66 × 10^01 (6.09 × 10^00) | 2.70 × 10^01 (1.25 × 10^01) | 1.55 × 10^01 (2.70 × 10^00) | 2.59 × 10^02 (4.14 × 10^00) |
| C6 | 0.00 × 10^00 (0.00 × 10^00) | 0.00 × 10^00 (0.00 × 10^00) | 0.00 × 10^00 (0.00 × 10^00) | 7.9 × 10^−07 (1.5 × 10^−06) | 3.8 × 10^−05 (3.2 × 10^−05) | 5.82 × 10^01 (6.34 × 10^00) |
| C7 | 7.25 × 10^01 (1.17 × 10^01) | 5.43 × 10^01 (9.85 × 10^00) | 5.64 × 10^01 (5.68 × 10^00) | 7.56 × 10^01 (5.13 × 10^01) | 4.67 × 10^01 (3.46 × 10^00) | 9.23 × 10^02 (3.12 × 10^02) |
| C8 | 2.39 × 10^01 (2.94 × 10^01) | 2.42 × 10^01 (4.48 × 10^00) | 2.62 × 10^01 (3.75 × 10^00) | 3.17 × 10^01 (1.44 × 10^01) | 1.64 × 10^01 (4.36 × 10^00) | 2.08 × 10^01 (3.99 × 10^00) |
| C9 | 0.00 × 10^00 (0.00 × 10^00) | 0.00 × 10^00 (0.00 × 10^00) | 0.00 × 10^00 (0.00 × 10^00) | 0.00 × 10^00 (0.00 × 10^00) | 6.3 × 10^−02 (1.4 × 10^−01) | 5.49 × 10^03 (1.52 × 10^03) |
| C10 | 3.15 × 10^03 (6.19 × 10^02) | 7.28 × 10^03 (3.28 × 10^02) | 1.86 × 10^03 (2.83 × 10^02) | 3.85 × 10^03 (1.92 × 10^03) | 1.65 × 10^03 (3.85 × 10^02) | 3.81 × 10^03 (4.74 × 10^02) |
| C11 | 1.49 × 10^01 (2.51 × 10^01) | 1.67 × 10^01 (2.01 × 10^01) | 2.00 × 10^01 (7.31 × 10^00) | 1.08 × 10^01 (2.31 × 10^00) | 2.22 × 10^01 (1.59 × 10^01) | 1.95 × 10^02 (4.82 × 10^01) |
| C12 | 1.32 × 10^04 (1.62 × 10^04) | 1.39 × 10^04 (8.83 × 10^03) | 1.35 × 10^03 (9.05 × 10^02) | 8.54 × 10^03 (9.68 × 10^03) | 1.18 × 10^03 (3.99 × 10^02) | 1.12 × 10^03 (3.74 × 10^02) |
| C13 | 2.42 × 10^01 (8.11 × 10^00) | 2.95 × 10^01 (5.50 × 10^00) | 3.11 × 10^01 (9.70 × 10^00) | 2.71 × 10^01 (1.13 × 10^01) | 3.99 × 10^01 (1.86 × 10^01) | 3.99 × 10^02 (1.75 × 10^02) |
| C14 | 2.06 × 10^01 (1.46 × 10^01) | 2.38 × 10^01 (6.04 × 10^00) | 1.46 × 10^03 (3.03 × 10^03) | 2.02 × 10^01 (1.01 × 10^01) | 2.96 × 10^01 (3.03 × 10^00) | 1.93 × 10^02 (5.62 × 10^01) |
| C15 | 6.54 × 10^00 (4.11 × 10^00) | 7.10 × 10^00 (2.32 × 10^00) | 3.49 × 10^02 (9.92 × 10^02) | 8.18 × 10^00 (3.66 × 10^00) | 3.73 × 10^01 (3.27 × 10^01) | 2.14 × 10^02 (8.74 × 10^01) |
| C16 | 3.28 × 10^02 (4.08 × 10^02) | 1.59 × 10^01 (9.80 × 10^00) | 4.68 × 10^02 (1.60 × 10^02) | 4.49 × 10^02 (5.06 × 10^02) | 4.10 × 10^02 (1.27 × 10^02) | 1.47 × 10^03 (4.66 × 10^02) |
| C17 | 2.46 × 10^02 (7.48 × 10^01) | 2.71 × 10^01 (2.90 × 10^00) | 7.38 × 10^01 (4.16 × 10^01) | 1.02 × 10^02 (7.04 × 10^01) | 5.13 × 10^01 (1.24 × 10^01) | 8.69 × 10^02 (2.63 × 10^02) |
| C18 | 2.43 × 10^01 (1.07 × 10^00) | 2.80 × 10^01 (8.32 × 10^00) | 6.85 × 10^01 (4.16 × 10^01) | 3.24 × 10^01 (1.59 × 10^01) | 5.82 × 10^01 (4.37 × 10^01) | 1.59 × 10^02 (7.48 × 10^01) |
| C19 | 4.11 × 10^00 (2.20 × 10^00) | 5.61 × 10^00 (1.78 × 10^00) | 2.49 × 10^01 (2.63 × 10^01) | 7.37 × 10^00 (3.08 × 10^00) | 1.20 × 10^01 (3.68 × 10^00) | 5.91 × 10^02 (3.57 × 10^02) |
| C20 | 2.72 × 10^01 (5.15 × 10^01) | 2.02 × 10^01 (7.15 × 10^00) | 1.06 × 10^02 (5.03 × 10^01) | 6.93 × 10^01 (9.83 × 10^01) | 5.75 × 10^01 (3.66 × 10^01) | 6.80 × 10^02 (1.94 × 10^02) |
| C21 | 2.45 × 10^02 (1.34 × 10^01) | 2.21 × 10^02 (4.23 × 10^00) | 2.26 × 10^02 (5.65 × 10^00) | 2.25 × 10^02 (9.81 × 10^00) | 2.17 × 10^02 (1.56 × 10^00) | 4.15 × 10^02 (3.20 × 10^01) |
| C22 | 1.00 × 10^02 (0.00 × 10^00) | 1.00 × 10^02 (0.00 × 10^00) | 1.00 × 10^02 (0.00 × 10^00) | 1.00 × 10^02 (0.00 × 10^00) | 1.00 × 10^02 (0.00 × 10^00) | 1.33 × 10^03 (1.96 × 10^03) |
| C23 | 3.76 × 10^02 (8.45 × 10^00) | 3.61 × 10^02 (8.74 × 10^00) | 3.72 × 10^02 (4.62 × 10^00) | 3.76 × 10^02 (1.45 × 10^01) | 3.65 × 10^02 (6.99 × 10^00) | 7.97 × 10^02 (8.41 × 10^01) |
| C24 | 4.65 × 10^02 (1.15 × 10^01) | 4.41 × 10^02 (4.84 × 10^00) | 4.40 × 10^02 (4.80 × 10^00) | 4.51 × 10^02 (1.83 × 10^01) | 4.36 × 10^02 (2.58 × 10^00) | 9.60 × 10^02 (7.35 × 10^01) |
| C25 | 3.87 × 10^02 (1.1 × 10^−01) | 3.87 × 10^02 (2.7 × 10^−02) | 3.87 × 10^02 (1.8 × 10^−01) | 4.51 × 10^02 (1.83 × 10^01) | 3.87 × 10^02 (3.3 × 10^−01) | 3.95 × 10^02 (1.85 × 10^01) |
| C26 | 1.38 × 10^03 (1.52 × 10^02) | 9.77 × 10^02 (7.79 × 10^01) | 1.20 × 10^03 (2.89 × 10^01) | 1.23 × 10^03 (9.75 × 10^01) | 1.10 × 10^03 (7.06 × 10^01) | 4.42 × 10^03 (1.14 × 10^03) |
| C27 | 5.02 × 10^02 (6.51 × 10^00) | 4.94 × 10^02 (1.16 × 10^01) | 5.04 × 10^02 (1.10 × 10^01) | 5.01 × 10^02 (7.98 × 10^00) | 5.06 × 10^02 (6.86 × 10^00) | 7.59 × 10^02 (1.24 × 10^02) |
| C28 | 3.42 × 10^02 (7.91 × 10^01) | 3.36 × 10^02 (5.35 × 10^01) | 3.54 × 10^02 (5.68 × 10^01) | 3.48 × 10^02 (7.37 × 10^01) | 3.43 × 10^02 (5.62 × 10^01) | 3.31 × 10^02 (5.81 × 10^01) |
| C29 | 4.19 × 10^02 (1.13 × 10^02) | 4.23 × 10^02 (2.79 × 10^01) | 4.86 × 10^02 (5.07 × 10^01) | 4.61 × 10^02 (8.08 × 10^01) | 4.69 × 10^02 (3.85 × 10^01) | 1.56 × 10^03 (4.15 × 10^02) |
| C30 | 2.04 × 10^03 (1.35 × 10^02) | 2.07 × 10^03 (4.59 × 10^01) | 2.18 × 10^03 (1.69 × 10^02) | 2.10 × 10^03 (1.06 × 10^02) | 2.11 × 10^03 (7.53 × 10^01) | 4.35 × 10^03 (1.43 × 10^03) |
| CPU time (s) | 146.2 | 165.4 | 172.9 | 144.5 | 148.1 | 168.2 |
| w/l/t | | 13/11/5 | 14/10/5 | 16/9/4 | 14/11/4 | 25/4/0 |
| p-values | | 0.839 = | 0.541 = | 0.030 + | 0.690 = | 0.001 + |

‘+’ and ‘=’ stand for significantly better and equal, respectively.
Table 5. Comparison of IDEBW with other meta-heuristics on CEC-2017 functions. Entries are mean (standard deviation).

| Fun | IDEBW | EJaya | HMRFO | AGBSO | DisGSA | TDSD |
|---|---|---|---|---|---|---|
| C1 | 0.00 × 10^00 (0.00 × 10^00) | 1.28 × 10^02 (2.03 × 10^02) | 2.98 × 10^03 (2.34 × 10^03) | 2.16 × 10^03 (2.63 × 10^03) | 2.44 × 10^03 (1.17 × 10^03) | 1.75 × 10^03 (9.03 × 10^02) |
| C3 | 1.8 × 10^−08 (1.9 × 10^−07) | 4.9 × 10^−10 (5.13 × 10^05) | 6.14 × 10^01 (3.57 × 10^01) | 4.97 × 10^01 (1.07 × 10^02) | 4.93 × 10^03 (2.12 × 10^03) | 4.04 × 10^04 (9.54 × 10^03) |
| C4 | 5.86 × 10^01 (0.00 × 10^00) | 2.64 × 10^01 (1.42 × 10^01) | 3.64 × 10^01 (3.50 × 10^01) | 9.02 × 10^01 (1.56 × 10^01) | 1.02 × 10^02 (2.38 × 10^01) | 2.02 × 10^01 (2.10 × 10^01) |
| C5 | 3.55 × 10^01 (1.22 × 10^01) | 5.21 × 10^01 (2.05 × 10^01) | 6.00 × 10^01 (1.91 × 10^01) | 1.67 × 10^01 (6.08 × 10^00) | 1.75 × 10^01 (6.51 × 10^00) | 7.99 × 10^01 (1.14 × 10^01) |
| C6 | 0.00 × 10^00 (0.00 × 10^00) | 4.17 × 10^00 (1.35 × 10^01) | 4.9 × 10^−01 (1.07 × 10^00) | 7.7 × 10^−05 (4.7 × 10^−05) | 4.1 × 10^−05 (7.3 × 10^−05) | 3.30 × 10^00 (6.9 × 10^−01) |
| C7 | 7.25 × 10^01 (1.17 × 10^01) | 1.10 × 10^02 (4.52 × 10^00) | 1.17 × 10^02 (3.98 × 10^01) | 5.10 × 10^01 (7.47 × 10^00) | 5.02 × 10^01 (3.97 × 10^00) | 1.32 × 10^02 (1.34 × 10^01) |
| C8 | 2.39 × 10^01 (2.94 × 10^01) | 7.34 × 10^01 (9.85 × 10^00) | 6.53 × 10^01 (1.89 × 10^01) | 1.47 × 10^01 (4.97 × 10^00) | 1.71 × 10^01 (3.25 × 10^00) | 8.33 × 10^01 (8.19 × 10^00) |
| C9 | 0.00 × 10^00 (0.00 × 10^00) | 2.30 × 10^02 (2.88 × 10^01) | 4.64 × 10^01 (4.07 × 10^01) | 0.00 × 10^00 (0.00 × 10^00) | 1.8 × 10^−13 (6.2 × 10^−14) | 1.71 × 10^03 (3.44 × 10^02) |
| C10 | 3.15 × 10^03 (6.19 × 10^02) | 3.82 × 10^03 (2.87 × 10^02) | 3.47 × 10^03 (6.76 × 10^02) | 4.80 × 10^02 (2.70 × 10^02) | 1.98 × 10^03 (5.54 × 10^02) | 2.19 × 10^03 (2.20 × 10^02) |
| C11 | 1.49 × 10^01 (2.51 × 10^01) | 8.64 × 10^01 (1.25 × 10^01) | 4.29 × 10^01 (1.05 × 10^01) | 5.07 × 10^01 (2.82 × 10^01) | 9.61 × 10^01 (2.82 × 10^01) | 6.86 × 10^01 (2.40 × 10^02) |
| C12 | 1.32 × 10^04 (1.62 × 10^04) | 7.48 × 10^03 (2.03 × 10^05) | 3.65 × 10^04 (1.35 × 10^04) | 6.12 × 10^05 (3.01 × 10^05) | 9.75 × 10^03 (1.76 × 10^03) | 2.91 × 10^05 (1.73 × 10^05) |
| C13 | 2.42 × 10^01 (8.11 × 10^00) | 2.56 × 10^03 (2.54 × 10^03) | 1.48 × 10^04 (1.02 × 10^04) | 1.07 × 10^04 (6.58 × 10^03) | 4.75 × 10^03 (2.51 × 10^03) | 6.95 × 10^02 (3.24 × 10^02) |
| C14 | 2.06 × 10^01 (1.46 × 10^01) | 1.12 × 10^02 (1.42 × 10^03) | 1.68 × 10^03 (9.51 × 10^02) | 2.74 × 10^03 (2.96 × 10^03) | 3.41 × 10^03 (2.51 × 10^03) | 8.42 × 10^03 (6.07 × 10^03) |
| C15 | 6.54 × 10^00 (4.11 × 10^00) | 9.58 × 10^02 (8.67 × 10^00) | 2.87 × 10^03 (3.81 × 10^03) | 3.47 × 10^03 (3.80 × 10^03) | 1.66 × 10^03 (1.66 × 10^03) | 3.56 × 10^02 (2.24 × 10^02) |
| C16 | 3.28 × 10^02 (4.08 × 10^02) | 4.74 × 10^02 (1.37 × 10^02) | 6.30 × 10^02 (2.92 × 10^02) | 1.13 × 10^02 (9.81 × 10^01) | 5.71 × 10^02 (2.39 × 10^02) | 4.85 × 10^02 (1.16 × 10^02) |
| C17 | 2.46 × 10^02 (7.48 × 10^01) | 1.19 × 10^02 (6.91 × 10^01) | 1.95 × 10^02 (1.36 × 10^02) | 5.10 × 10^01 (3.85 × 10^01) | 1.71 × 10^02 (1.34 × 10^02) | 9.38 × 10^01 (3.88 × 10^01) |
| C18 | 2.43 × 10^01 (1.07 × 10^00) | 4.04 × 10^03 (1.47 × 10^04) | 8.15 × 10^04 (3.38 × 10^04) | 9.27 × 10^04 (5.46 × 10^04) | 4.13 × 10^04 (1.74 × 10^04) | 8.14 × 10^04 (3.67 × 10^04) |
| C19 | 4.11 × 10^00 (2.20 × 10^00) | 2.54 × 10^02 (2.82 × 10^03) | 2.52 × 10^03 (2.66 × 10^03) | 5.39 × 10^03 (6.64 × 10^03) | 3.67 × 10^03 (1.32 × 10^03) | 1.53 × 10^02 (1.01 × 10^02) |
| C20 | 2.72 × 10^01 (5.15 × 10^01) | 3.27 × 10^02 (4.15 × 10^01) | 2.70 × 10^02 (1.24 × 10^02) | 1.01 × 10^02 (6.55 × 10^01) | 1.74 × 10^02 (1.29 × 10^01) | 1.38 × 10^02 (5.41 × 10^01) |
| C21 | 2.45 × 10^02 (1.34 × 10^01) | 2.51 × 10^02 (9.11 × 10^00) | 2.52 × 10^02 (1.84 × 10^01) | 2.17 × 10^02 (5.54 × 10^00) | 2.28 × 10^02 (8.81 × 10^00) | 2.22 × 10^02 (8.14 × 10^01) |
| C22 | 1.00 × 10^02 (0.00 × 10^00) | 1.00 × 10^02 (1.6 × 10^−06) | 1.00 × 10^02 (2.4 × 10^−13) | 1.00 × 10^02 (2.3 × 10^−06) | 1.00 × 10^02 (4.7 × 10^−09) | 1.11 × 10^02 (1.89 × 10^00) |
| C23 | 3.76 × 10^02 (8.45 × 10^00) | 4.18 × 10^02 (1.42 × 10^01) | 4.30 × 10^02 (2.51 × 10^01) | 3.60 × 10^02 (5.07 × 10^00) | 3.73 × 10^02 (4.97 × 10^00) | 4.54 × 10^02 (1.72 × 10^02) |
| C24 | 4.65 × 10^02 (1.15 × 10^01) | 4.89 × 10^02 (4.34 × 10^00) | 4.81 × 10^02 (1.71 × 10^01) | 4.36 × 10^02 (1.10 × 10^01) | 4.13 × 10^02 (1.68 × 10^01) | 4.25 × 10^02 (1.87 × 10^02) |
| C25 | 3.87 × 10^02 (1.1 × 10^−01) | 4.03 × 10^02 (8.94 × 10^00) | 3.92 × 10^02 (1.37 × 10^01) | 3.86 × 10^02 (1.11 × 10^00) | 3.87 × 10^02 (2.11 × 10^00) | 3.83 × 10^02 (1.15 × 10^01) |
| C26 | 1.38 × 10^03 (1.52 × 10^02) | 2.25 × 10^03 (5.46 × 10^02) | 1.68 × 10^03 (8.49 × 10^02) | 9.93 × 10^02 (7.67 × 10^01) | 2.00 × 10^02 (1.8 × 10^−08) | 2.21 × 10^02 (5.54 × 10^00) |
| C27 | 5.02 × 10^02 (6.51 × 10^00) | 5.54 × 10^02 (7.04 × 10^00) | 5.45 × 10^02 (1.58 × 10^01) | 5.05 × 10^02 (5.80 × 10^00) | 5.48 × 10^02 (1.89 × 10^01) | 5.21 × 10^02 (6.13 × 10^00) |
| C28 | 3.42 × 10^02 (7.91 × 10^01) | 3.80 × 10^02 (1.10 × 10^01) | 3.34 × 10^02 (5.73 × 10^01) | 3.80 × 10^02 (4.18 × 10^01) | 3.66 × 10^02 (6.07 × 10^01) | 4.09 × 10^02 (1.48 × 10^01) |
| C29 | 4.19 × 10^02 (1.13 × 10^02) | 6.21 × 10^02 (1.01 × 10^02) | 7.66 × 10^02 (1.78 × 10^02) | 4.67 × 10^02 (3.45 × 10^01) | 6.35 × 10^02 (1.71 × 10^02) | 5.90 × 10^02 (4.56 × 10^01) |
| C30 | 2.04 × 10^03 (1.35 × 10^02) | 4.88 × 10^03 (2.90 × 10^04) | 3.91 × 10^03 (1.15 × 10^03) | 5.14 × 10^04 (4.22 × 10^04) | 5.17 × 10^03 (7.06 × 10^02) | 5.04 × 10^03 (8.29 × 10^02) |
| CPU time (s) | 146.2 | 105.4 | 165.2 | 154.4 | 159.2 | 189.3 |
| w/l/t | | 24/4/1 | 25/3/1 | 16/11/2 | 17/10/2 | 22/7/0 |
| p-value | | 0.001 + | 0.001 + | 0.441 = | 0.248 = | 0.009 + |

‘+’ and ‘=’ stand for significantly better and equal, respectively.
Table 6. ‘Wilcoxon rank sum test’ outcomes for the CEC-2017 functions.

| Algorithms | Pairwise Rank | ΣR+ | ΣR− | z-Value | p-Value | Sig. at α = 0.05 |
|---|---|---|---|---|---|---|
| IDEBW vs. TRADE | (1.47, 1.53) | 127 | 173 | 0.657 | 0.511 | = |
| IDEBW vs. CJADE | (1.43, 1.57) | 160 | 140 | 0.286 | 0.775 | = |
| IDEBW vs. DEGOS | (1.38, 1.62) | 191 | 133 | 0.794 | 0.427 | = |
| IDEBW vs. SHADE | (1.45, 1.55) | 166 | 159 | 0.094 | 0.927 | = |
| IDEBW vs. IMODE | (1.14, 1.86) | 392 | 43 | 3.773 | 0.001 | + |
| IDEBW vs. EJaya | (1.16, 1.84) | 355 | 51 | 3.461 | 0.001 | + |
| IDEBW vs. HMRFO | (1.12, 1.88) | 376 | 30 | 3.939 | 0.001 | + |
| IDEBW vs. AGBSO | (1.41, 1.59) | 266 | 112 | 1.850 | 0.062 | = |
| IDEBW vs. DisGSA | (1.38, 1.62) | 272 | 106 | 1.994 | 0.042 | + |
| IDEBW vs. TDSD | (1.24, 1.76) | 356 | 79 | 2.995 | 0.003 | + |

‘+’, ‘−’, and ‘=’ stand for significantly better, worse, and equal, respectively.
Table 7. Friedman ranks and Bonferroni–Dunn’s CD values for CEC-2017 functions.

| DE Variants | Rank | Other Meta-Heuristics | Rank |
|---|---|---|---|
| IDEBW | 2.86 | IDEBW | 2.31 |
| TRADE | 2.66 | EJAYA | 3.95 |
| CJADE | 3.83 | HMRFO | 4.38 |
| DEGOS | 3.62 | AGBSO | 3.10 |
| SHADE | 2.97 | DisGSA | 3.50 |
| IMODE | 5.02 | TDSD | 3.76 |
| CD (α = 0.1) | 1.1428 | CD (α = 0.1) | 1.1428 |
| CD (α = 0.05) | 1.2656 | CD (α = 0.05) | 1.2656 |
Table 8. Performance evaluation of IDEBW on real-life optimization problems.

| Problem | Iter. | Value | IDEBW | DEGOS | SHADE | CJADE | EJAYA | TDSD |
|---|---|---|---|---|---|---|---|---|
| RP1 | 600 | Best | 0.00 | 2.24 × 10^−20 | 0.00 | 0.00 | 1.400 | 0.00 |
| | | Mean | 1.16 | 3.11 | 1.82 | 2.2980 | 10.68 | 3.93 |
| | | SD | 0.9084 | 6.95 | 2.60 | 6.171 | 15.450 | 64.97 |
| | | rank | 1 | 5 | 2 | 3 | 6 | 4 |
| RP2 | 2000 | Best | 0.5891 | 0.7092 | 1.0345 | 0.7029 | 0.5000 | 0.8701 |
| | | Mean | 0.7332 | 1.467 | 1.2256 | 0.9171 | 1.0094 | 1.0234 |
| | | SD | 0.1924 | 0.3537 | 0.0974 | 0.1066 | 0.3017 | 0.0773 |
| | | rank | 1 | 5 | 2 | 3 | 6 | 4 |
| RP3 | 100 | Best | 13.770 | 13.783 | 13.77 | 13.832 | 14.981 | 13.77 |
| | | Mean | 13.921 | 14.362 | 14.28 | 14.329 | 15.006 | 13.93 |
| | | SD | 0.2856 | 1.7475 | 0.20 | 1.212 | 2.302 | 0.17 |
| | | rank | 1 | 5 | 3 | 4 | 6 | 2 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Kumar, P.; Ali, M. Improved Differential Evolution Algorithm Guided by Best and Worst Positions Exploration Dynamics. Biomimetics 2024, 9, 119. https://0-doi-org.brum.beds.ac.uk/10.3390/biomimetics9020119