Article

Mutational Slime Mould Algorithm for Gene Selection

1 Department of Computer Science and Artificial Intelligence, Wenzhou University, Wenzhou 325035, China
2 Information Systems, University of Canterbury, Christchurch 8014, New Zealand
3 Department of Information Technology, Wenzhou Polytechnic, Wenzhou 325035, China
4 Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
5 Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
6 Department of Information Engineering, Hangzhou Vocational & Technical College, Hangzhou 310018, China
* Author to whom correspondence should be addressed.
Submission received: 22 June 2022 / Revised: 14 August 2022 / Accepted: 16 August 2022 / Published: 22 August 2022
(This article belongs to the Special Issue 10th Anniversary of Biomedicines—Advances in Genetic Research)

Abstract

A large volume of high-dimensional genetic data has been produced in modern medicine and biology, and data-driven decision-making is particularly crucial to clinical practice and related procedures. However, the high dimensionality of these data increases the processing complexity and scale, and identifying representative genes to reduce the data's dimensionality is often challenging. The purpose of gene selection is to eliminate irrelevant or redundant features, thereby reducing the computational cost and improving classification accuracy. Wrapper gene selection models evaluate candidate feature sets directly, which allows them to reduce the number of features while improving classification accuracy. This paper proposes a wrapper gene selection method based on the slime mould algorithm (SMA), a recent algorithm with considerable potential in the feature selection field, to solve this problem. We improve the original SMA by combining the Cauchy mutation mechanism with a crossover mutation strategy based on differential evolution (DE); a transfer function then converts the continuous optimizer into a binary version to solve the gene selection problem. Firstly, the continuous version of the method, ISMA, is tested on 33 classical continuous optimization problems. Then, the discrete version, BISMA, is thoroughly studied by comparing it with other gene selection methods on 14 gene expression datasets. Experimental results show that the continuous version of the algorithm achieves an effective balance between local exploitation and global exploration, while the discrete version attains the highest accuracy when selecting the smallest number of genes.

1. Introduction

Microarray technology [1,2] is an analytical tool that simultaneously measures the expression levels of thousands of genes in a single experiment, greatly helping researchers understand disease at the genetic level. However, gene expression data are high-dimensional, and the number of features far exceeds the number of samples [3,4]. Large numbers of irrelevant and complex features degrade computational performance and waste computational resources, hindering the classification of gene expression data [5,6,7]. The application of feature selection to genes, namely gene selection, is a screening technique that removes irrelevant genes and reduces the gene dimensionality [8,9,10]. With this technique, the feature size can be effectively reduced and the classification performance improved [11,12,13].
Feature selection is an essential technique in data processing and machine learning [7,14]. Its essence is to pick out the relatively optimal features from the raw data so that the data move from high to low dimensionality [15,16]. The commonly used (classical) feature selection methods can be divided into filter, wrapper, embedded, and hybrid methods [17]. Filter methods typically select and evaluate features independently, without assessing feature subsets as a whole, so they may ignore correlations between feature combinations [18,19,20,21]. Because no learning algorithm is involved, the computation is light, but the optimal gene subset may not be found. Wrapper methods rely on a classification algorithm to select the feature subset; they can achieve the desired effect, but at a high computational cost [22,23,24]. Embedded methods usually train machine learning models and then select the best feature subset through the classifier algorithm [25]; the corresponding thresholds are obtained automatically during training, realized by algorithms with built-in feature selection. Hybrid methods combine the advantages of the filter and wrapper methods: an independent measure determines the best subsets of a given cardinality, and a mining algorithm then selects the final subset from among them [26,27,28,29]. A small sketch contrasting the filter and wrapper styles is given below.
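To make the distinction concrete, the following is a minimal sketch assuming scikit-learn; the synthetic dataset, the 1-NN classifier, and the top-10 subset size are illustrative choices, not this paper's experimental setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=60, n_features=200, n_informative=10,
                           random_state=0)

# Filter: score each feature independently (no classifier involved).
scores = mutual_info_classif(X, y, random_state=0)
filter_subset = np.argsort(scores)[-10:]          # top-10 scored features

# Wrapper: evaluate a candidate subset with the classifier itself.
def wrapper_score(subset):
    return cross_val_score(KNeighborsClassifier(n_neighbors=1),
                           X[:, subset], y, cv=5).mean()

print("filter top-10 wrapper accuracy:", wrapper_score(filter_subset))
```

The filter pass is cheap because each feature is scored once, while the wrapper call must retrain and cross-validate the classifier for every candidate subset, which is the cost-accuracy trade-off described above.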
Optimization methods can be approximate or deterministic [30], and their models can be single-objective or multi-objective, including algorithms that handle multiple objectives at once [31,32]. In recent years, wrapper methods based on meta-heuristic algorithms or their variants have been widely used in feature selection because they can find an acceptable solution, that is, an approximately optimal subset of features [33,34]. In this study, we use an improved slime mould algorithm (SMA), called ISMA, to develop an efficient wrapper gene selection method for finding the smallest feature subset. The proposed optimization algorithm targets the shortcomings and characteristics of the original SMA: it keeps SMA's main operators, but some operators undergo binary conversion to fit the gene selection problem, because the original algorithm was designed for continuous problems. SMA is a meta-heuristic algorithm recently proposed by Li et al. [35] for continuous global optimization and engineering design problems. It simulates the dynamic oscillatory behavior of slime mould during dispersive foraging and food searching. The method consists of three search patterns with different morphological variations, expressed mathematically in a unified model. The model mainly adopts adaptive weights to simulate the propagation wave of the biological oscillator and generates positive feedback during optimization, which helps form an exploration trajectory toward the optimal solution with good search capability. Surveys and results confirm that SMA achieves a competitive balance between global exploration and local exploitation, with a notable tendency toward local exploitation. With the help of adaptive weighting and an efficient, reasonable structure, SMA performs significantly better than many recognized advanced algorithms, such as the whale optimization algorithm (WOA), grey wolf optimization (GWO), grasshopper optimization algorithm (GOA), moth-flame optimization (MFO), ant lion optimizer (ALO), bat algorithm (BA), salp swarm algorithm (SSA), sine cosine algorithm (SCA), particle swarm optimization (PSO), and differential evolution (DE) [36]. Other examples include biogeography-based learning particle swarm optimization (BLPSO) [37], the comprehensive learning particle swarm optimizer (CLPSO) [38], improved grey wolf optimization (IGWO) [39], and the binary whale optimization algorithm (BWOA) [40]. SMA has therefore been applied to engineering design problems [35,41], solar photovoltaic cell parameter estimation [42,43], multi-spectral image segmentation [44], numerical optimization [45], prediction problems [46,47], support vector regression parameter tuning [48], and other areas. The algorithm is an effective meta-heuristic optimizer, but it may suffer from local optimal convergence and slow convergence speed on some complex problems. There are therefore open challenges in improving SMA's optimization capability and expanding its application value.
To alleviate the shortcomings of the traditional SMA and strengthen the coordination between global exploration and local exploitation, an advanced SMA variant is proposed based on a reasonable integration of Cauchy mutation (CM) and crossover mutation (MC). After the initial search agents are generated, the solution is updated in three phases. First, the SMA search process is executed and the search agents are updated. In the second phase, the Cauchy mutation strategy adjusts the SMA-based search agents. Finally, the optimal search agent is selected from the previous generation of search agents through the crossover mutation strategy. In addition, we convert the continuous ISMA into a discrete version using a transfer function. Tests on gene expression datasets show that BISMA is highly effective and has significant advantages over several advanced gene selection methods, indicating that ISMA can effectively solve high-dimensional, complex gene selection problems and that the improvement of SMA is worthwhile.
The main contributions in this paper can be summarized as follows:
(1) An improved slime mould algorithm (ISMA) is proposed to solve continuous global optimization problems and high-dimensional gene selection problems.
(2) The performance of the ISMA algorithm is verified by comparing it with several well-known optimization algorithms.
(3) Different transfer functions are used to transform the proposed ISMA into a discrete version, BISMA, and they are compared to choose the most suitable transfer function for the binary ISMA optimizer.
(4) The best BISMA version is selected as a gene selection optimizer to pick the optimal gene subset from the gene expression datasets.
(5) The performance of the selected method is verified by comparing it with several other advanced optimizers.
The rest of this article is organized as follows: Section 2 reviews related work on gene selection and meta-heuristic algorithms. Section 3 introduces the Cauchy mutation and the crossover mutation strategy based on the DE algorithm in detail, and proposes ISMA. Section 4 presents a series of comparative experiments between ISMA and other similar algorithms. Section 5 designs the wrapper gene selection structure for the discrete ISMA. Section 6 discusses the application of BISMA and other related algorithms to gene selection. Section 7 summarizes the proposed work together with its shortcomings and implications. Section 8 briefly recapitulates the work of this paper and points out future directions.

2. Related Works

Microarray data typically have extremely asymmetric dimensions (far more genes than samples) and are highly redundant, and most genes are considered irrelevant to the category under study. Traditional classification methods cannot effectively process such data. Many researchers have achieved good results using machine learning techniques to process gene expression datasets.

2.1. Machine Learning for Gene Selection

Singh et al. [49] proposed a hybrid improved chaotic emperor penguin (CEPO) algorithm based on the Fisher criterion, ReliefF, and extreme learning machine (ELM) for microarray data analysis. In this paper, the Fisher criterion and ReliefF method were first used as gene selection filters, and then relevant data were used to train the ELM to obtain a better model. Banu et al. [50] used the fuzzy clustering method to assign initial values to each gene and then predicted the likelihood of belonging to each cluster to carry out gene selection. The comparative experimental results show that the fuzzy clustering algorithm performs well in gene prediction and selection. Chen et al. [51] proposed a support vector machine for binary tumor diagnosis, extending the three kinds of support vector machines to improve the performance of gene selection. At the same time, lasso, elastic net, and other sparse regression methods were introduced for cancer classification and gene selection. Mahendran et al. [52] conducted an extensive review of recent work on machine learning-based selection and its performance analysis, classified various feature selection algorithms under supervised, unsupervised and semi-supervised learning, and discussed the problems in dealing with high and low sample data. Tan et al. [53] proposed an integrated machine learning approach to analyze multiple gene expression profiles of cervical cancer to find the genomes associated with it, with the expectation that it could help in diagnosis and prognosis. The gene expression data were identified effectively through the analysis of three steps.
Zhou et al. [54] proposed an improved discretized particle swarm optimization algorithm for feature selection. In their work, a modest pre-screening process is first applied to obtain fewer features; then, a better cutting combination is found through a PSO-based encoding and decoding method and a probability-guided local search strategy to obtain the desired feature subset. Sadeghian et al. [55] proposed a three-stage feature selection method based on the S-BBOA algorithm. In the first stage, minimum redundancy—maximum new classification information (MRMNCI) feature selection removes 80% of the irrelevant and redundant features; the best feature subset is then chosen using IG-BBOA in the second stage; finally, a similarity ranking approach selects the final feature subset. Coleto-Alcudia et al. [56] proposed a new hybrid method based on the dominance degree artificial bee colony algorithm (ABCD) to investigate the gene selection problem. The method combines a first gene-screening step with a second optimization step to find the optimal subset of genes for the classification task. The first step uses the Analytic Hierarchy Process (AHP) to select the most relevant genes in the dataset through five ranking methods; this gene filtering reduces the number of genes that need to be managed. In the second step, gene selection is cast as two objectives: minimizing the number of selected genes and maximizing classification accuracy. Lee et al. [57] embedded a formal definition of relevance into the Markov blanket (MB) and established a new multi-feature ranking method, which was applied to high-dimensional microarray data, enhancing the efficiency of gene selection and, as a result, the accuracy of microarray data classification.

2.2. Swarm Intelligence for Gene Selection

Shukla et al. [4] created TLBOGSA, a hybrid wrapper approach that combines features of teaching-learning-based optimization (TLBO) and the gravitational search algorithm (GSA). TLBOGSA was given a new encoding approach that transformed the continuous search space into a binary one, yielding a binary variant. First, significant genes from the gene expression dataset were chosen using the minimum redundancy maximum relevance (mRMR) feature selection approach. Then, using a wrapper strategy, informative genes were chosen from the reduced data generated by mRMR. They introduced the gravitational search mechanism into the teaching stage to boost the evolutionary process's searching capability. The technique selected the most reasonable genes using a naive Bayes classifier as the fitness function, which is useful for accurate cancer classification. Based on the phase diagram approach, Khani et al. [58] suggested a unique gene selection algorithm in which ridge logistic regression analysis evaluates the likelihood that genes belong to a stable group with excellent classification ability; a methodology is suggested for the final selection of the chosen set, and the model's performance is assessed using the B632+ error estimation approach. To identify genes from gene expression data and extract valuable informative genes from cancer data, Chen et al. [59] presented a decision tree optimizer based on particle swarm optimization; experimental results demonstrate that this strategy outperforms popular classifiers, including support vector machines, self-organizing maps, and back-propagation neural networks. Dabba et al. [10] developed the quantum MFO algorithm (QMFOA), a swarm-intelligence gene selection technique fusing quantum computing with MFO, to discover a relatively small subset of genes for high-precision sample classification. QMFOA has two stages: preprocessing, which acquires a preprocessed gene set by measuring the redundancy and correlation of genes, and hybrid gene selection, which utilizes MFO, quantum computing, and a support vector machine. To select a limited, representative fraction of cancer-related genetic information, Mohamad et al. [60] developed an enhanced binary particle swarm optimization (BPSO) for gene selection, in which particle velocity gives the rate of particle position change and a particle position update rule is presented. The experimental findings show that the suggested technique outperforms the classic BPSO in terms of classification accuracy while picking fewer genes.

3. The Proposed ISMA

3.1. SMA

Several swarm intelligence optimization techniques have appeared in recent years, such as the slime mould algorithm (SMA) [35], Harris hawks optimization (HHO) [61], hunger games search (HGS) [62], the Runge Kutta optimizer (RUN) [63], the colony predation algorithm (CPA) [64], and weighted mean of vectors (INFO) [65]. Due to their simplicity and efficiency, swarm intelligence algorithms have been widely used in many different fields, such as image segmentation [66,67], the traveling salesman problem [68], feature selection [69,70], practical engineering problems [71,72], fault diagnosis [73], scheduling problems [74,75,76], multi-objective problems [77,78], medical diagnosis [79,80], economic emission dispatch problems [81], robust optimization [82,83], solar cell parameter identification [84], and optimization of machine learning models [85]. Among them, SMA is a new bionic stochastic optimization algorithm that simulates the behavior and morphological changes of slime mould during foraging. SMA uses weights to simulate the positive and negative feedback of slime mould propagation waves during foraging, constructing a venous network of varying thickness. The morphology of the slime mould changes through three search patterns: approaching food, wrapping around food, and oscillating.
From the brief description of SMA shown in Figure 1, the random value $rand$ helps to find the optimal solution. The slime moulds are randomly distributed in any direction to search for solutions (food); when $rand < z$, there is no venous structure. During the search phase, when $rand \ge z$ and $r < p$, individuals form diffuse venous structures to approach food. The adaptive change of the decision parameter $p$ ensures a smooth transition from the exploration stage to the exploitation stage. During the exploitation phase, when $r \ge p$, the individual wraps around the solution (food) through venous contraction.
Based on the following significant parameters, a specific mathematical model of SMA can be constructed to represent the three contraction modes of slime mould:
$$X(t+1) = \begin{cases} rand \cdot (UB - LB) + LB, & rand < z \\ X_b(t) + vb \cdot \left(W \cdot X_A(t) - X_B(t)\right), & r < p \\ vc \cdot X(t), & r \ge p \end{cases}$$
where $X(t)$ and $X(t+1)$ represent the position vectors of the slime mould at iterations $t$ and $t+1$, respectively. $UB$ and $LB$ indicate the upper and lower boundaries of the search space. $X_b$ denotes the position vector of the individual with the best fitness (highest food concentration). $X_A(t)$ and $X_B(t)$ indicate the position vectors of random individuals selected from the slime mould at iteration $t$. $rand$ and $r$ are random values between 0 and 1. The parameter $z$ is set to 0.03 as in the original literature.
In addition, the decision parameter p can be calculated as follows:
$$p = \tanh \left| S(i) - DF \right|$$
where $S(i)$ indicates the fitness of the $i$-th individual of the slime mould $X$, $i \in \{1, 2, \ldots, N\}$, $N$ denotes the size of the population, and $DF$ represents the best fitness attained over all iterations.
W is the weight vector of slime mould, which can be obtained from the following equation. This vector mimics the rate at which slime mould shrinks around food for different food masses.
$$W(SmellIndex(i)) = \begin{cases} 1 + r \cdot \log\left(\dfrac{bF - SmellOrder(i)}{bF - wF} + 1\right), & condition \\ 1 - r \cdot \log\left(\dfrac{bF - SmellOrder(i)}{bF - wF} + 1\right), & \text{otherwise} \end{cases}$$
$$[SmellOrder, SmellIndex] = sort(S)$$
where $bF$ and $wF$ are the best and worst fitness obtained in the current iteration, respectively. $SmellOrder$ denotes the fitness values sorted in ascending order (for minimization problems), and $SmellIndex$ denotes the corresponding sequence of indices. $condition$ indicates the individuals in the first half of the $SmellOrder$ ranking; it simulates individuals dynamically adjusting their search patterns according to the quality of the food.
The collaborative interaction between the parameters $vb$ and $vc$ simulates the selection behavior of the slime mould. $vb$ is a random value in the interval $[-a, a]$, and $vc$ is a random value in the interval $[-b, b]$, which shrinks as the number of iterations increases.
$$a = \operatorname{arctanh}\left(1 - \frac{t}{Max\_iter}\right)$$
$$b = 1 - \frac{t}{Max\_iter}$$
where $Max\_iter$ indicates the maximum number of iterations.
The simplified pseudo-code of SMA is listed in Algorithm 1. We can find more specific descriptions in the original literature.
Algorithm 1: Pseudo-code of SMA
Begin
 Initialize the parameters: Max_iter, N
 Initialize the slime mould population X
 While t ≤ Max_iter
  Calculate the fitness of each individual in the slime mould
  Update the best fitness and X_b
  Calculate the weight W according to Equation (3)
  Calculate a according to Equation (4)
  Calculate b according to Equation (5)
  For i = 1, 2, ..., N (each search agent)
   Update p according to Equation (2)
   Update vb, vc based on a and b, respectively
   Update the positions according to Equation (1)
  EndFor
  t = t + 1
 EndWhile
 Return the best fitness and X_b
End
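To make the model above concrete, the following is a minimal Python sketch of one SMA iteration for a minimization problem. It is an illustrative reimplementation of Equations (1)–(5), not the authors' code; using the current-iteration best fitness in place of the all-iteration best $DF$, and starting the iteration counter at 1, are simplifying assumptions.

```python
import numpy as np

def sma_step(X, fit, t, max_iter, lb, ub, z=0.03):
    """One SMA position update (Equation (1)); t runs from 1 to max_iter."""
    N, D = X.shape
    order = np.argsort(fit)                       # ascending: best first
    bF, wF = fit[order[0]], fit[order[-1]]
    Xb = X[order[0]].copy()

    # Adaptive weight W (Equation (3)); eps guards a zero denominator.
    r = np.random.rand(N, D)
    lg = np.log10((fit[order] - bF) / (wF - bF + 1e-300) + 1)[:, None]
    W = np.empty((N, D))
    half = N // 2
    W[order[:half]] = 1 + r[:half] * lg[:half]    # better half: 'condition'
    W[order[half:]] = 1 - r[half:] * lg[half:]

    a = np.arctanh(1 - t / max_iter)              # Equation (4)
    b = 1 - t / max_iter                          # Equation (5)

    Xnew = np.empty_like(X)
    for i in range(N):
        if np.random.rand() < z:                  # first branch of Eq. (1)
            Xnew[i] = np.random.rand(D) * (ub - lb) + lb
        else:
            # Equation (2); the current best stands in for DF here.
            p = np.tanh(abs(fit[i] - bF))
            vb = np.random.uniform(-a, a, D)
            vc = np.random.uniform(-b, b, D)
            A, B = np.random.randint(N, size=2)
            Xnew[i] = np.where(np.random.rand(D) < p,
                               Xb + vb * (W[i] * X[A] - X[B]),
                               vc * X[i])
    return np.clip(Xnew, lb, ub), Xb
```

Iterating `sma_step` while tracking the best fitness found so far reproduces the loop of Algorithm 1.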

3.2. The Cauchy Mutation Operator

In this section, we will briefly introduce the Cauchy mutation. The Cauchy density function can be described as:
$$f_t(x) = \frac{1}{\pi} \cdot \frac{t}{t^2 + x^2}, \quad -\infty < x < +\infty$$
where $t > 0$ is the scale parameter, and the distribution function is expressed as follows:
$$F_t(x) = \frac{1}{2} + \frac{1}{\pi} \arctan\left(\frac{x}{t}\right)$$
By increasing the search range in each generation, individuals are given the chance to find better solutions in a wider area, thus avoiding local optima. Therefore, Cauchy mutation is selected as an improvement mechanism.
Based on Equations (6) and (7), the Cauchy mutation operation applied to the original SMA is expressed as:
$$x_{i\_cauchy} = x_i \times (1 + Cauchy)$$
where $Cauchy$ is a random number drawn from the Cauchy distribution, $x_i$ is a position in the SMA at the current iteration, and $x_{i\_cauchy}$ is the corresponding position of $x_i$ after Cauchy mutation. The Cauchy mutation mechanism improves the foraging behavior of the slime mould when searching unknown space, so the quality of SMA solutions can be enhanced by using the Cauchy operator.
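As a concrete illustration, here is a minimal sketch of Equation (8) in Python, assuming the standard Cauchy distribution ($t = 1$) sampled by inverse transform from the distribution function $F_t(x)$ above.

```python
import numpy as np

def cauchy_mutation(x):
    """Equation (8): x_cauchy = x * (1 + Cauchy), elementwise."""
    # Inverse transform of F_t with t = 1: sample = tan(pi * (U - 1/2)).
    cauchy = np.tan(np.pi * (np.random.rand(*x.shape) - 0.5))
    return x * (1 + cauchy)
```

Because the Cauchy distribution has heavy tails, occasional large samples push an individual far from its current position, which is exactly the widened search range motivating this operator.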

3.3. The Mutation and Crossover Strategy in DE

During the optimization procedure, the major operations are mutation and crossover. Each solution $x_i = \{x_i^1, x_i^2, x_i^3, \ldots, x_i^n\}$ is a vector of $n$ dimensions.
A. Mutation
A mutant vector $u_i$ can be generated via the mutation operator from components of randomly nominated vectors $x_a$, $x_b$, and $x_c$, where $a \ne b \ne c \ne i$. The mathematical equation can be represented as follows:
$$u_i = x_a + F \cdot (x_b - x_c)$$
where $F$ is a scaling factor that controls the perturbation size of the mutation.
B. Crossover
The crossover operator constructs a trial vector $v_i$ by applying crossover to the mutant vector: items are randomly selected from the mutant $u_i$ and the target vector $x_i$ depending on the crossover probability $P_c$. The mathematical formula is as follows:
$$v_i^j = \begin{cases} u_i^j, & rand \le P_c \ \text{or} \ j = j_0 \\ x_i^j, & \text{otherwise} \end{cases}$$
where the crossover probability $P_c$ controls the diversity of the swarm and relieves the risk of local optima, and $j_0$ is a randomly chosen index in $\{1, 2, 3, \ldots, N_p\}$ that guarantees $v_i$ obtains at least one component from $u_i$.
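A minimal Python sketch of these two operators is shown below; the values of $F$ and $P_c$ are illustrative placeholders rather than tuned settings.

```python
import numpy as np

def de_trial_vector(X, i, F=0.5, Pc=0.9):
    """DE mutation (Equation (9)) and binomial crossover (Equation (10))."""
    N, D = X.shape
    # Three mutually distinct indices a, b, c, all different from i.
    a, b, c = np.random.choice([k for k in range(N) if k != i],
                               size=3, replace=False)
    u = X[a] + F * (X[b] - X[c])       # mutant vector
    j0 = np.random.randint(D)          # index guaranteed to come from u
    mask = np.random.rand(D) <= Pc
    mask[j0] = True
    return np.where(mask, u, X[i])     # trial vector
```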

3.4. The Hybrid Structure of the Proposed ISMA

Considering that the original SMA may converge prematurely to suboptimal solutions or risk falling into local optima, the improved algorithm proposed in this paper combines two strategies, Cauchy mutation and DE-based crossover mutation, to promote the coordination of global exploration and local exploitation, forming a new SMA variant, namely ISMA. The structure of the proposed ISMA is shown in Figure 2 and demonstrated in Algorithm 2. Under the ISMA framework, these two strategies are applied in turn to generate the new search agents and the best agent with the best solution in the current iteration. As Figure 2 illustrates, the position of each agent may be rebuilt after its location is updated according to Equation (1), implying that each agent seeks the best solution in a larger search space.
The SMA-based position update solves the position vector of the slime mould according to the optimization rules of SMA, as detailed in Section 3.1; this phase produces an SMA-based population. The Cauchy mutation and crossover mutation mechanisms described in Sections 3.2 and 3.3 then adjust the position vectors of the SMA-based individuals to produce a new population. In this stage, the advantages of the Cauchy and crossover mutation mechanisms in exploration are utilized to make up for the shortcomings of SMA's exploration; considering both mechanisms' effects on search ability, this effectively enlarges the pool of candidate solutions and thus the population diversity. The experiments show that this stage not only helps coordinate exploration and exploitation capabilities but also helps improve solution quality and accelerate convergence.
Algorithm 2: Pseudo-code of ISMA
Begin
 Initialize the parameters: Max_iter, N
 Initialize the slime mould population X
 While t ≤ Max_iter
  Calculate the fitness of each individual in the slime mould
  Update X_b and the best fitness
  Calculate the weight W, a, and b according to Equations (3)–(5)
  For i = 1 : N
   Update p using Equation (2)
   Update vb, vc based on a and b, respectively
   Update the positions by Equation (1)
  EndFor
  Use the Cauchy mutation strategy to update the best individual and the best fitness
  Adopt the MC strategy to update the best individual and the best fitness
  t = t + 1
 EndWhile
 Return the best fitness and X_b as the best solution
End
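For concreteness, the following self-contained sketch mirrors Algorithm 2's three-phase structure on a toy sphere objective. The SMA phase is reduced to a simple shrink toward the best agent so the example stays short; only the Cauchy mutation and DE-style crossover phases, each kept under greedy selection, follow the design described above.

```python
import numpy as np

rng = np.random.default_rng(0)
fitness = lambda x: float(np.sum(x ** 2))        # toy objective (sphere)
N, D, max_iter = 20, 10, 100
X = rng.uniform(-10, 10, (N, D))

for t in range(1, max_iter + 1):
    fit = np.array([fitness(x) for x in X])
    best = np.argmin(fit)
    xb, fb = X[best].copy(), fit[best]

    # Phase 1 (stand-in for the SMA update): shrink toward the best agent.
    X = xb + (1 - t / max_iter) * rng.uniform(-1, 1, (N, D)) * (X - xb)

    # Phase 2: Cauchy mutation of the best agent, kept only if it improves.
    trial = xb * (1 + np.tan(np.pi * (rng.random(D) - 0.5)))
    if fitness(trial) < fb:
        xb, fb = trial, fitness(trial)

    # Phase 3: DE mutation and binomial crossover around the best agent.
    a, b, c = rng.choice(N, 3, replace=False)
    u = X[a] + 0.5 * (X[b] - X[c])
    mask = rng.random(D) <= 0.9
    mask[rng.integers(D)] = True                 # guaranteed crossover index
    trial = np.where(mask, u, xb)
    if fitness(trial) < fb:
        xb, fb = trial, fitness(trial)

    X[best] = xb                                 # keep the refined best agent

print("best fitness:", fb)
```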

3.5. Computational Complexity

The proposed ISMA structure mainly includes the following parts: initialization, fitness evaluation, fitness sorting, weight updating, position updating based on the SMA strategy, position updating based on the Cauchy mutation strategy, and position updating based on the crossover mutation strategy, where $N$ is the number of slime mould cells, $D$ is the dimension of the function, and $T$ is the maximum number of iterations. The computational complexity of initialization is $O(D)$. The evaluation and sorting of fitness costs $O(N + N \log N)$. Updating the weights costs $O(N \times D)$, as does the SMA-based position update; similarly, the position updates based on the Cauchy mutation mechanism and the crossover mutation mechanism each cost $O(N \times D)$. Therefore, the total computational complexity of ISMA is $O(D + T \times N \times (1 + 4D + \log N))$.

4. Experimental Design and Analysis of Global Optimization Problem

To evaluate the continuous version of ISMA, we considered two experiments comparing the methods presented in this section with several competitors. We used 23 continuous benchmark functions (7 unimodal functions, 6 multimodal functions, and 10 fixed-dimensional multimodal functions) and 10 typical CEC2014 benchmark functions (2 hybrid functions and 8 composition functions), for a total of 33 benchmark cases. Experiment 1 compares a series of SMA variants with different update strategies, ISMA, CSMA, and MCSMA, against the original SMA and the DE algorithm to identify the best variant. Experiment 2 compares the ISMA algorithm with 8 other advanced optimization algorithms, including multi-population ensemble differential evolution (MPEDE) [86], success-history-based adaptive DE with linear population size reduction (LSHADE) [87], particle swarm optimization with an aging leader and challengers (ALCPSO) [88], the comprehensive learning particle swarm optimizer (CLPSO) [38], the chaos-enhanced sine cosine-inspired algorithm (CESCA) [89], improved grey wolf optimization (IGWO) [39], the whale optimization algorithm with the β-hill climbing (BHC) algorithm and associative learning and memory (BMWOA) [90], and a modified GWO with random spiral motions, simplified hierarchy, random leaders, opposition-based learning (OBL), Levy flight (LF) with a random decreasing stability index, and greedy selection (GS) mechanisms (OBLGWO) [91]. All experimental evaluations were conducted on a Windows 10 (64-bit) operating system with 32 GB RAM and an Intel(R) Xeon(R) Silver 4110 CPU @ 2.40 GHz 2.10 GHz (dual processor), using MATLAB R2014a.
Table A1, Table A2, Table A3 and Table A4 describe the 23 benchmark functions and the 10 classic CEC2014 benchmark functions. The 33 functions used in the experiments cover a wide variety of problems; they can be used to verify not only local exploitation ability and global exploration ability but also the ability to balance the two. In addition, to reduce the impact of algorithmic randomness on the experiment [92], we conducted 30 independent runs for each test case. To exclude the influence of other factors, all tested algorithms were run under the same settings and conditions [93,94,95]. The maximum number of function evaluations was set to 300,000, and the population size was 30.
In addition, statistical measures such as the mean and standard deviation (std) are used to represent the global optimization ability and robustness of the evaluated methods. The Wilcoxon signed-rank test at the 0.05 significance level was used to measure whether the degree of improvement is statistically significant; the label '+/=/−' in the results indicates that ISMA is significantly superior to, equal to, or worse than the other competitors. For a comprehensive statistical comparison, the Friedman test was used to determine whether the performance differences among all compared algorithms on the benchmark functions are statistically significant, and the average ranking value (ARV) of the Friedman test was used to evaluate the average performance of the investigated methods. It is worth noting that a reliable comparison should involve more than 5 algorithms on more than 10 test cases [96].
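The sketch below illustrates this testing protocol with SciPy, assuming 30 independent runs per algorithm; the normally distributed fitness values are synthetic stand-ins for real per-run results.

```python
import numpy as np
from scipy.stats import wilcoxon, friedmanchisquare

rng = np.random.default_rng(1)
ours = rng.normal(0.10, 0.02, 30)      # 30 runs of the proposed method
rival = rng.normal(0.12, 0.02, 30)     # 30 runs of a competitor
third = rng.normal(0.11, 0.02, 30)     # 30 runs of another competitor

# Wilcoxon signed-rank test at the 0.05 level for one pairwise comparison.
stat, p = wilcoxon(ours, rival)
if p >= 0.05:
    label = "="                        # no significant difference
else:
    label = "+" if ours.mean() < rival.mean() else "-"
print("Wilcoxon p =", p, "label:", label)

# Friedman test across all compared algorithms on the same cases.
print(friedmanchisquare(ours, rival, third))
```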

4.1. Comparison between SMA Variant and Original SMA and DE Algorithm

In this section, to prove the superiority of the Cauchy mutation mechanism and the DE-based mutation and crossover strategy, we compare the three combinations of the two mechanisms (CSMA, MCSMA, and ISMA) with the original SMA and the DE algorithm. The comparison results are shown in Table A5, Table A6 and Table A7, and the algorithm convergence curves are shown in Figure 3.
As the results in Table A5 and Table A6 show, ISMA clearly outperforms the other mechanism combinations, the original SMA, and the DE algorithm, handling most of the test functions better than almost all competitors. As can be seen from the ARV of the Friedman test in Table A7, ISMA ranks first among the five compared algorithms. The mean and std in Table A5 also indicate the superiority of ISMA on the F1–F6, F9–F14, F26–F28, and F30–F33 functions; ISMA ranks second on F7, F15–F17, F19–F25, and F29. According to the p-values in Table A6, almost all values in the SMA column are less than 0.05, indicating that ISMA significantly improves on the original SMA algorithm. On the F1–F3, F9–F11, and F26–F28 functions, CSMA and MCSMA reach the same final optimization results as ISMA. In summary, the Wilcoxon signed-rank test shows that, statistically, ISMA has significantly improved performance compared with the other algorithms. The results show that adding the Cauchy mutation strategy and the DE-based crossover mutation strategy benefits ISMA's exploitation ability, its exploration ability, and the balance between the two.
The convergence analysis can show which optimizer as an iterative method can reach better quality results within a shorter time [97,98]. Figure 3 shows the convergence curves of the comparison method on 12 functions. We can intuitively find that, compared with the original SMA, DE, and other two SMA variants, the ISMA using the two mechanisms has a better effect. Combining the two mechanisms makes the SMA avoid falling into the local optimal solution and can obtain the global optimal solution. The overall advantage of ISMA is significant because of the positive effect of the Cauchy mutation mechanism and the crossover mutation strategy on SMA, which highlights the optimization capability of the proposed method.

4.2. Comparison with Advanced Algorithms

In this experiment, we compare ISMA with several typical advanced algorithms, namely MPEDE [86], LSHADE [87], ALCPSO [88], CLPSO [38], BMWOA [90], CESCA [89], IGWO [39] and OBLGWO [91], in order to fully prove the proposed algorithm's ability to avoid local optima and to explore globally. These include two superior DE variants, two classic PSO variants, and variants of WOA, GWO, and SCA.
Table A8, Table A9 and Table A10 record the results of the comparison between ISMA and the eight advanced algorithms. As can be seen from Table A10, among ISMA and the 8 advanced meta-heuristic algorithms, the average Friedman test result of ISMA is 3.7075758, ranking first, followed by CLPSO. The statistics in Table A8 show that, among all compared algorithms, ISMA attains a std of 0 on more test functions, so ISMA is more stable. In addition, the results on specific functions show that ISMA handles complex and hybrid functions better than the other advanced algorithms. The mean and std in Table A8 also indicate the superiority of ISMA on the F1–F6, F9–F15, F26–F28, and F30–F33 functions, and ISMA also ranks high on F7 and F21–F23. Table A9 shows the Wilcoxon signed-rank test results between ISMA and the other advanced algorithms: ISMA outperforms the others on most benchmark functions, most notably CESCA, which it beats on 90.9% of the functions. As a result, ISMA is superior to these strong competitors.
The convergence curves of all nine algorithms on 12 functions, shown in Figure 4, indicate that the convergence rate of ISMA is competitive with the other, more advanced methods, which often converge to a local optimum earlier than ISMA. This demonstrates that the ISMA algorithm has a strong ability to avoid local optima while searching globally and can produce more accurate solutions.
To sum up, the optimization power of ISMA is reflected in the overall superior performance of ISMA in different types of functions compared to the more challenging advanced methods. The combination of the Cauchy mutation mechanism and crossover mutation strategy based on the DE algorithm enables the proposed ISMA to obtain a higher quality solution in the optimization process and makes exploration and exploitation in a better equilibrium state.

5. The Proposed Technique for Gene Selection

In this section, the proposed ISMA is applied to the gene selection problem, which makes the improvement of the proposed algorithm more practical. For this purpose, we transform the continuous ISMA into a discrete wrapper version, namely BISMA, to solve the gene selection problem as a binary optimization task.

5.1. System Architecture of Gene Selection Based on ISMA

The procedure of selecting or generating some of the most significant features from a feature set in order to lower the dimension of the training dataset is known as feature selection. Many fields with large datasets need to reduce the dimensionality of application data, such as gene selection for high-dimensional gene expression datasets in the medical field. The task of gene selection is to reduce the number of irrelevant and unimportant genes, identify the most relevant genomes with the greatest classification accuracy, lower the otherwise high computational cost, and improve the accuracy of disease analysis. The continuous ISMA optimizer is converted to binary ISMA (BISMA) using a transfer function (TF) for the gene selection problem. A machine learning algorithm was used as a classifier to evaluate the ability of BISMA to identify discriminant genes and eliminate irrelevant, redundant genes in high-dimensional gene expression datasets. In addition, cross-validation (CV) was used to evaluate the optimality of the selected gene subsets for classification during the evaluation process.

5.2. Fitness Function

Gene selection is a process that uses the smallest subset of genes to obtain the optimal classification accuracy, and both goals need to be achieved simultaneously. Therefore, to satisfy both objectives, the fitness function expressed in Equation (11) is designed to comprehensively evaluate candidate solutions using the classification accuracy and the number of selected genes.
$$fit = \alpha \times (1 - Acc) + \beta \times \frac{|R|}{|D|}$$
where $Acc$ indicates the classification accuracy of the classifier (the machine learning method), so $1 - Acc$ is the classifier's error rate. The weighting factors $\alpha$ and $\beta$ express the importance of the error rate and of the number of selected genes, respectively, with $\alpha \in [0, 1]$ and $\beta = 1 - \alpha$. $|D|$ is the total number of genes in the dataset, and the numerator $|R|$ is the number of genes selected by the proposed gene selection optimizer. In this study, $\alpha$ and $\beta$ were set to 0.95 and 0.05, respectively.
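A minimal sketch of Equation (11) follows; `error_rate` would come from the classifier's evaluation (here it is simply passed in), and the default α of 0.95 matches the setting above.

```python
import numpy as np

def gene_selection_fitness(mask, error_rate, alpha=0.95):
    """Equation (11): mask is a 0/1 gene vector; error_rate is 1 - Acc."""
    beta = 1.0 - alpha
    return alpha * error_rate + beta * (np.sum(mask) / mask.size)

# Example: 20 genes selected out of 2000, with a 5% classification error.
mask = np.zeros(2000, dtype=int)
mask[:20] = 1
print(gene_selection_fitness(mask, error_rate=0.05))
```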

5.3. Implementation of the Discrete BISMA

In the preceding sections, the proposed ISMA optimizer searches for the optimal solution in a continuous search space, whereas gene selection is a binary problem. The transfer function restricts the continuous search space to 0 or 1: a value of 0 means the gene is not selected, and a value of 1 means it is selected.
Individuals with binary position vectors are initialized through a random threshold, as shown below:
$$x_i^d = \begin{cases} 0, & rand \le 0.5 \\ 1, & rand > 0.5 \end{cases}$$
where $x_i^d$ is the gene on the $d$-th dimension of the position vector of the $i$-th slime mould individual.
In addition, the transfer function (TF) is a suitable converter that can convert a continuous optimization algorithm to a discrete version without changing the algorithm’s structure because it is convenient and efficient [99]. There are 8 types of TFs, which can be divided into S-shaped and V-shaped according to their shapes. Their mathematical formulae and graphical descriptions are shown in Table A11.
For an S-shaped family, a gene of the position vector at the next iteration can be converted according to the TFS1-TFS4 shown in Table A11 as follows:
$$x_i^d(t+1) = \begin{cases} 1, & rand < T\left(x_i^d(t+1)\right) \\ 0, & rand \ge T\left(x_i^d(t+1)\right) \end{cases}$$
where $T\left(x_i^d(t+1)\right)$ represents the probability value of the gene on the $d$-th dimension of the $i$-th individual at the next iteration.
For a V-shaped family, the gene of the position vector at the next iteration can be converted according to the TFV1-TFV4 shown in Table A11 as follows:
$$x_i^d(t+1) = \begin{cases} \neg x_i^d(t), & rand < T\left(x_i^d(t+1)\right) \\ x_i^d(t), & rand \ge T\left(x_i^d(t+1)\right) \end{cases}$$
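The sketch below shows how the initialization and the two conversion rules fit together in Python. Since Table A11's exact formulas are not reproduced here, the sigmoid and arctan forms are common examples of S-shaped and V-shaped TFs, not necessarily TFS1 or TFV4.

```python
import numpy as np

def s_shaped(v):
    return 1.0 / (1.0 + np.exp(-v))              # a common S-shaped TF

def v_shaped(v):
    # A common V-shaped TF: |2/pi * arctan(pi/2 * v)|.
    return np.abs((2 / np.pi) * np.arctan((np.pi / 2) * v))

def init_binary(n_genes):
    # Random 0/1 initialization of a position vector.
    return (np.random.rand(n_genes) > 0.5).astype(int)

def binarize_s(v_continuous):
    # S-shaped rule: set each bit from the probability T(x).
    return (np.random.rand(v_continuous.size)
            < s_shaped(v_continuous)).astype(int)

def binarize_v(v_continuous, x_old):
    # V-shaped rule: flip the previous bit with probability T(x).
    flip = np.random.rand(v_continuous.size) < v_shaped(v_continuous)
    return np.where(flip, 1 - x_old, x_old)
```

The design difference matters: an S-shaped TF resets each bit from scratch, while a V-shaped TF only decides whether to flip the current bit, which tends to preserve good partial solutions.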

6. Experimental Design and Discussion on Gene Selection

6.1. Experimental Design

In this experiment, two kinds of comparison results are used to evaluate the optimization ability of the proposed algorithm. In the first assessment, we studied BISMA with different TFs to determine the best version of BISMA out of the eight TFs. The resulting BISMA is compared with other mature meta-heuristic optimizers in the second evaluation. Fourteen gene expression datasets were used in the two case studies. Table A12 lists the detailed characteristics of these microarray datasets, including the number of samples, the number of genes per sample, and the number of categories. These 14 representative gene datasets have been widely used to test a variety of gene selection optimizers to evaluate their performance.
In addition, to obtain more convincing results, this paper adopts leave-one-out cross-validation (LOOCV) to validate the gene selection process. One sample in the dataset is taken as the test set to verify the classification accuracy of the classifier, while the remaining samples form the training set; the number of validations per dataset therefore equals the dataset size. The KNN classifier is used for the classification tasks, with the neighborhood size k in KNN set to 1. The distance measure D is as follows:
$$D(x, y) = \left( \sum_{k=1}^{N} (x_k - y_k)^2 \right)^{\frac{1}{2}}$$
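A minimal sketch of this evaluation protocol, assuming scikit-learn, is shown below; the random matrix stands in for a candidate gene subset of a real expression dataset, and the 1-NN classifier's default metric matches the Euclidean distance D above.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(40, 25))          # 40 samples, 25 selected genes
y = rng.integers(0, 2, size=40)        # binary class labels

# k = 1 with the default Minkowski metric at p = 2 (Euclidean).
acc = cross_val_score(KNeighborsClassifier(n_neighbors=1),
                      X, y, cv=LeaveOneOut()).mean()
print("LOOCV accuracy:", acc)
```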
For a fair comparison [100,101,102], each evaluation involving BISMA was performed in the same computing environment, namely an Intel(R) Xeon(R) Silver 4110 CPU @ 2.40 GHz (two processors) with 8 GB RAM, Windows 10 (64-bit), and MATLAB R2014a. For each algorithm, the maximum number of iterations and the number of search agents were set to 50 and 20, respectively, and each algorithm was run 10 times independently. The initial parameters of all algorithms follow their original references.

6.2. The Proposed BISMA with Different TFs

Considering the effect of the TF on the performance of the gene selection optimizer, we developed eight BISMA optimizers using eight different TFs and evaluated their effectiveness in finding the optimal genes from each gene dataset listed in Table A12. These eight TFs include four S-shaped and four V-shaped TFs, as shown in Table A11. This assessment helps to obtain the best binary version of BISMA for the gene selection problem. Table A13, Table A14, Table A15 and Table A16 show the average number of selected genes, the average error rate, the average fitness, the average computation time, and the corresponding std and ARV for the 8 developed versions of the BISMA optimizer.
The average number of selected genes produced by each version of BISMA on the 14 datasets is shown in Table A13. The V-shaped BISMA versions required the fewest genes of all versions. As can be seen from the ARV values, BISMA based on TFV4 selected the fewest genes on average and ranked first, the four V-shaped versions occupied the first four places, and their numbers of selected genes were significantly lower than those of the S-shaped versions.
Table A14 records the average classification error rates of the eight versions of BISMA on the benchmark gene datasets. Judging from the average ranking value, BISMA with TFV4 is significantly better than the other competitors. The four V-shaped BISMAs obtained an average error of 0 on 57% of the gene datasets, indicating the stability of V-shaped-based feature selection; moreover, BISMA based on TFV4 obtained an error of 0 with a standard deviation of 0 on 85.7% of the gene datasets. Therefore, judging from the average error rate, the ability of BISMA with V-shaped TFs to solve the gene selection task is superior to that of its S-shaped counterparts.
According to the average fitness results reported in Table A15, BISMA_V3 achieved the best fitness on about 42.9% of the benchmark gene datasets, slightly ahead of BISMA_V4 on this count and clearly ahead of the other competitors. However, by ranking mean, BISMA_V4 ranked first, followed by BISMA_V3, BISMA_V1, BISMA_V2, BISMA_S1, BISMA_S2, BISMA_S3, and BISMA_S4. The fitness results also showed that the V-shaped family of TFs outperformed the S-shaped family.
Similarly, in terms of computation time, except for V1, the V-shaped versions take less time to run than the S-shaped versions. In particular, first-place V4 takes much less time on average than second-place V3. The computational overhead of BISMA_V4, which has the best average ranking value, is lower than that of the other versions across all the benchmark datasets.
As shown in Table A13, Table A14, Table A15 and Table A16, the BISMA version with TFV4 was superior to the other versions in terms of the average number of selected genes, average error rate, average fitness, and average time cost, far outstripping the second-best version in average time cost. Comparing the S-shaped and V-shaped families, V-shaped achieves the better results. Therefore, the transfer function TFV4 was chosen to establish a BISMA optimizer with better stability for gene selection problems. From here on, BISMA denotes BISMA_V4, which is further evaluated by comparison in the following sections.

6.3. Comparative Evaluation with Other Optimizers

In this section, the superiority of the proposed BISMA optimizer is evaluated by comparing it with several state-of-the-art meta-heuristic approaches. These meta-heuristic algorithms are bGWO [103], BGSA [104], BPSO [99], bALO [105], BBA [106], BSSA [107], bWOA [108], BSMA (the binary form of the original SMA [35]), and BISMA (the discrete version of the improved ISMA). Table A17 shows the parameter settings of the compared optimizers.
Table A18, Table A19, Table A20 and Table A21 show the selected genes’ statistical results in terms of length, error rate, fitness and computational time. According to the average gene length in Table A18, the proposed BISMA had the least number of selected genes on 57.1% of the gene datasets, while bWOA had the least number of selected genes on 42.9% of the gene datasets. It can be seen that in the 14 data sets, BISMA and bWOA are far more competitive than other algorithms in reducing the data dimensions.
The mean error rate results are shown in Table A19, which demonstrates the superiority of the proposed BISMA. BISMA achieves the minimum mean error rate on 85.7% of the gene datasets and performs only slightly worse on Lung_Cancer and Tumor_14. bGWO showed the best error rate on the Tumor_14 gene dataset, while bWOA showed competitive results on the Lung_Cancer gene dataset. From the perspective of the ARV index, BISMA ranked first, followed by bWOA, bGWO, BSMA, BGSA, BPSO, bALO, BSSA, and BBA.
The fitness measure shown in Table A20 comprises the weighted error rate and the weighted number of selected genes. The proposed BISMA clearly outperforms the other competitors on 64.3% of the gene datasets, and the average fitness of BISMA and bWOA on the 14 gene datasets was significantly better than that of the other algorithms.
In addition, according to the std values shown in Table A18, Table A19 and Table A20, BISMA showed better performance, satisfactory standard deviations, and excellent average fitness values on most of the tested gene datasets, indicating that BISMA is more stable than bALO, BSSA, BBA, etc. There is a big gap between the overall performance of BISMA, BSMA, bWOA, and bGWO and that of BGSA, BPSO, bALO, BBA, and BSSA; the first four optimizers are clearly better than the last five.
As can be seen from the average computation time results in Table A21, the proposed BISMA has the highest time cost, and the time complexity of the well-performing BSMA and bWOA is also relatively high, reflecting the computation time added by the performance improvements. The time cost of BISMA is driven by the introduced Cauchy mutation and the DE-based crossover mutation strategy; as Table A21 shows, the computation time of the original SMA is also relatively high, which further contributes to BISMA's cost.
Comparing the gene selection optimizers, BISMA proves the best overall. Although its computation time is not ideal, BISMA selects the optimal gene subset on the vast majority of microarray datasets, achieving the best fitness and the best classification error rate without losing meaningful genes. This confirms that the combination of Cauchy mutation and the DE-based crossover mutation strategy improves global exploration in the proposed BISMA and achieves a more effective balance between local exploitation and global exploration.

7. Discussions

In this section, we discuss the proposed ISMA algorithm, its advantages, and the points that remain to be improved. In the original SMA, the global exploration ability of the slime mould is not strong, and the algorithm falls into local optima on some problems, limiting its use. In this paper, Cauchy mutation (CM) and crossover mutation are introduced to update the population, enlarging the globally explored space and avoiding local optima. Experiments show that the dual mechanism is more effective than either single mechanism and that ISMA outperforms several advanced optimization algorithms.
However, ISMA exposes some common shortcomings of random optimizers in certain areas. As seen in Table A5 and Table A8, when processing some multimodal functions, the algorithm’s performance is sometimes poor due to the randomness of the crossover mutation mechanism. The search speed is slow in global exploration and local exploitation.
The binary algorithm (BISMA) performs feature selection on 14 datasets. The experimental results show that the proposed algorithm attains smaller average fitness and lower classification error rates while selecting fewer features. However, although the Cauchy mutation and crossover mutation mechanisms bring clear benefits, they also lengthen the algorithm's running time, which is the highest among all compared algorithms.
In the study [109], Ornek et al. combined the position update of the sine cosine algorithm with the slime mould algorithm, using various sine and cosine updates to modify the oscillation process of the slime mould; experimental results show that the algorithm has good exploration and exploitation abilities. Gurses et al. [110] applied a new hybrid of the slime mould algorithm and simulated annealing (HSMA-SA) to structural engineering design problems, demonstrating the feasibility of the proposed algorithm for shape optimization. Cai et al. [111] proposed an artificial slime mould algorithm to solve the traffic network node selection problem, with results of great significance to the study of traffic node selection and artificial learning mechanisms. These ideas can serve as references for improving the shortcomings of ISMA in the future so that it can be applied in more fields, such as dynamic module detection [112,113], road network planning [114], information retrieval services [115,116,117], drug discovery [118,119], microgrid planning [120], image dehazing [121], location-based services [122,123], power flow optimization [124], disease identification and diagnosis [125,126], recommender systems [127,128,129,130], human activity recognition [131], and image-to-image translation [132].

8. Conclusions

In this study, based on the basic SMA, an improved version, ISMA, is proposed; the combination of the Cauchy mutation and the crossover mutation strategy based on the DE algorithm improves the SMA so as to achieve coordination between global exploration and local exploitation. We first evaluate the effectiveness of the continuous ISMA on 33 benchmark functions for global optimization, comparing it with several advanced swarm intelligence algorithms; the results show that ISMA has a strong global exploration capability. To verify the performance of ISMA in practical applications, BISMA was obtained by mapping ISMA into binary space through a transfer function and was then applied to the gene selection problem on 14 commonly used gene expression datasets. To identify the best transfer function for the ISMA variant, we compared the number of selected genes, average error rate, average fitness, and computational cost, finding BISMA_V4 superior to the other versions; BISMA_V4 is therefore taken as the final method for the gene selection problem. We compared BISMA_V4 with binary SMA, binary GWO, and several other advanced methods. The experimental results show that BISMA selects fewer features and obtains higher classification accuracy.
Therefore, we believe that the proposed BISMA is a promising gene selection technique. There are several ways to extend this work. First, BISMA can be applied to other high-dimensional datasets to study its effectiveness there. Secondly, other strategies can be used to improve the SMA and the coordination between its global exploration and local exploitation. Thirdly, interested researchers can apply SMA to more areas, such as financial forecasting, optimization of photovoltaic parameters, and other engineering applications. Finally, the application of ISMA can be extended to multi-objective optimization, image segmentation, machine learning, and other fields.

Author Contributions

Conceptualization, G.L. and H.C.; methodology, G.L. and H.C.; software, G.L. and H.C.; validation, F.Q., P.Z., A.A.H., G.L., H.C., F.K.K., H.E. and H.L.; formal analysis, F.K.K., H.E. and H.L.; investigation, F.Q., P.Z. and A.A.H.; resources, F.K.K., H.E. and H.L.; data curation, F.K.K., H.E. and H.L.; writing—original draft preparation, F.Q., P.Z. and A.A.H.; writing—review and editing, G.L. and H.C.; visualization, G.L. and H.C.; supervision, F.K.K., H.E. and H.L.; project administration, F.K.K., H.E. and H.L.; funding acquisition, G.L. and H.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research project was funded by the Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R300), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia; the Zhejiang University Students Science and Technology Innovation Activity Plan (2022R429B045); and the Graduate Innovation Fund of Wenzhou University (316202102088).

Data Availability Statement

The data involved in this study are all public data, which can be downloaded through public channels.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Appendix A

Table A1. Descriptions of unimodal benchmark functions.
Function | Dim | Range | fmin
$f_1(x)=\sum_{i=1}^{n} x_i^2$ | 30 | [−100, 100] | 0
$f_2(x)=\sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|$ | 30 | [−10, 10] | 0
$f_3(x)=\sum_{i=1}^{n} \left( \sum_{j=1}^{i} x_j \right)^2$ | 30 | [−100, 100] | 0
$f_4(x)=\max_i \{ |x_i|, \ 1 \le i \le n \}$ | 30 | [−100, 100] | 0
$f_5(x)=\sum_{i=1}^{n-1} \left[ 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right]$ | 30 | [−30, 30] | 0
$f_6(x)=\sum_{i=1}^{n} (\lfloor x_i + 0.5 \rfloor)^2$ | 30 | [−100, 100] | 0
$f_7(x)=\sum_{i=1}^{n} i x_i^4 + \mathrm{random}[0, 1)$ | 30 | [−1.28, 1.28] | 0
Table A2. Descriptions of multimodal benchmark functions.
Function | Dim | Range | fmin
$f_8(x)=\sum_{i=1}^{n} -x_i \sin\left(\sqrt{|x_i|}\right)$ | 30 | [−500, 500] | −418.9829 × 30
$f_9(x)=\sum_{i=1}^{n} \left[ x_i^2 - 10\cos(2\pi x_i) + 10 \right]$ | 30 | [−5.12, 5.12] | 0
$f_{10}(x)=-20\exp\left(-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^2}\right) - \exp\left(\frac{1}{n}\sum_{i=1}^{n} \cos(2\pi x_i)\right) + 20 + e$ | 30 | [−32, 32] | 0
$f_{11}(x)=\frac{1}{4000}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left(\frac{x_i}{\sqrt{i}}\right) + 1$ | 30 | [−600, 600] | 0
$f_{12}(x)=\frac{\pi}{n}\left\{10\sin^2(\pi y_1) + \sum_{i=1}^{n-1} (y_i - 1)^2 \left[1 + 10\sin^2(\pi y_{i+1})\right] + (y_n - 1)^2\right\} + \sum_{i=1}^{n} \mu(x_i, 10, 100, 4)$, where $y_i = 1 + \frac{x_i + 1}{4}$ and $\mu(x_i, a, k, m) = \begin{cases} k(x_i - a)^m & x_i > a \\ 0 & -a < x_i < a \\ k(-x_i - a)^m & x_i < -a \end{cases}$ | 30 | [−50, 50] | 0
$f_{13}(x)=0.1\left\{\sin^2(3\pi x_1) + \sum_{i=1}^{n-1} (x_i - 1)^2 \left[1 + \sin^2(3\pi x_{i+1})\right] + (x_n - 1)^2 \left[1 + \sin^2(2\pi x_n)\right]\right\} + \sum_{i=1}^{n} \mu(x_i, 5, 100, 4)$ | 30 | [−50, 50] | 0
Table A3. Descriptions of fixed-dimension multimodal benchmark functions.
Function | Dim | Range | fmin
$f_{14}(x)=\left(\frac{1}{500} + \sum_{j=1}^{25} \frac{1}{j + \sum_{i=1}^{2}(x_i - a_{ij})^6}\right)^{-1}$ | 2 | [−65, 65] | 1
$f_{15}(x)=\sum_{i=1}^{11}\left[a_i - \frac{x_1(b_i^2 + b_i x_2)}{b_i^2 + b_i x_3 + x_4}\right]^2$ | 4 | [−5, 5] | 0.00030
$f_{16}(x)=4x_1^2 - 2.1x_1^4 + \frac{1}{3}x_1^6 + x_1 x_2 - 4x_2^2 + 4x_2^4$ | 2 | [−5, 5] | −1.0316
$f_{17}(x)=\left(x_2 - \frac{5.1}{4\pi^2}x_1^2 + \frac{5}{\pi}x_1 - 6\right)^2 + 10\left(1 - \frac{1}{8\pi}\right)\cos x_1 + 10$ | 2 | [−5, 5] | 0.398
$f_{18}(x)=\left[1 + (x_1 + x_2 + 1)^2(19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1 x_2 + 3x_2^2)\right] \times \left[30 + (2x_1 - 3x_2)^2(18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1 x_2 + 27x_2^2)\right]$ | 2 | [−2, 2] | 3
$f_{19}(x)=-\sum_{i=1}^{4} c_i \exp\left(-\sum_{j=1}^{3} a_{ij}(x_j - p_{ij})^2\right)$ | 3 | [1, 3] | −3.86
$f_{20}(x)=-\sum_{i=1}^{4} c_i \exp\left(-\sum_{j=1}^{6} a_{ij}(x_j - p_{ij})^2\right)$ | 6 | [0, 1] | −3.32
$f_{21}(x)=-\sum_{i=1}^{5} \left[(X - a_i)(X - a_i)^T + c_i\right]^{-1}$ | 4 | [0, 10] | −10.1532
$f_{22}(x)=-\sum_{i=1}^{7} \left[(X - a_i)(X - a_i)^T + c_i\right]^{-1}$ | 4 | [0, 10] | −10.4028
$f_{23}(x)=-\sum_{i=1}^{10} \left[(X - a_i)(X - a_i)^T + c_i\right]^{-1}$ | 4 | [0, 10] | −10.5363
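As an illustration of how the benchmark definitions above translate into code, the following minimal sketch implements three of the functions (f1, f9, and f10) directly from the tables; it assumes NumPy and serves only as a sanity check of the formulas at their known optima.

import numpy as np

def f1_sphere(x):
    """f1: sum of squares (unimodal); global minimum 0 at the origin."""
    return np.sum(x ** 2)

def f9_rastrigin(x):
    """f9: Rastrigin function (multimodal); global minimum 0 at the origin."""
    return np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10)

def f10_ackley(x):
    """f10: Ackley function (multimodal); global minimum 0 at the origin."""
    n = x.size
    return (-20 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20 + np.e)

# quick check at the known optimum (30 dimensions, as in Tables A1 and A2)
x0 = np.zeros(30)
print(f1_sphere(x0), f9_rastrigin(x0), round(f10_ackley(x0), 12))  # 0.0 0.0 0.0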
Table A4. Descriptions of CEC2014 functions. (Search range: [−100, 100]D).
Function | Class | Functions | Optimum
F24 | Hybrid | Hybrid Function 5 (N = 5) | 2100
F25 | Hybrid | Hybrid Function 6 (N = 5) | 2200
F26 | Composition | Composition Function 1 (N = 5) | 2300
F27 | Composition | Composition Function 2 (N = 3) | 2400
F28 | Composition | Composition Function 3 (N = 3) | 2500
F29 | Composition | Composition Function 4 (N = 5) | 2600
F30 | Composition | Composition Function 5 (N = 5) | 2700
F31 | Composition | Composition Function 6 (N = 5) | 2800
F32 | Composition | Composition Function 7 (N = 3) | 2900
F33 | Composition | Composition Function 8 (N = 3) | 3000
Table A5. The SMA variants are compared with the original SMA and DE algorithms.
F1–F3 (mean | std | mean):
ISMA | 0.0000 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0
SMA | 3.2559 × 10^−44 | 1.7833 × 10^−43 | 1.7856 × 10^−44
CSMA | 0.0000 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0
MCSMA | 0.0000 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0
DE | 1.8673 × 10^−159 | 4.1198 × 10^−159 | 1.3001 × 10^−94
F4–F6 (mean | std | mean):
ISMA | 0.0000 × 10^0 | 0.0000 × 10^0 | 1.5210 × 10^−20
SMA | 9.1947 × 10^−44 | 5.0362 × 10^−43 | 4.5273 × 10^−1
CSMA | 0.0000 × 10^0 | 0.0000 × 10^0 | 1.0735 × 10^0
MCSMA | 5.5509 × 10^−247 | 0.0000 × 10^0 | 3.7675 × 10^0
DE | 6.3804 × 10^−15 | 1.3750 × 10^−14 | 3.2827 × 10^1
F7–F9 (mean | std | mean):
ISMA | 5.2004 × 10^−5 | 4.4680 × 10^−5 | 6.5535 × 10^4
SMA | 1.8109 × 10^−3 | 1.9112 × 10^−3 | −1.256 × 10^4
CSMA | 1.0466 × 10^−5 | 7.1026 × 10^−6 | 6.5535 × 10^4
MCSMA | 2.8153 × 10^−4 | 1.4821 × 10^−4 | 1.256 × 10^4
DE | 2.4715 × 10^−3 | 4.9474 × 10^−4 | −1.244 × 10^4
F10–F12 (mean | std | mean):
ISMA | 8.8818 × 10^−16 | 0.0000 × 10^0 | 0.0000 × 10^0
SMA | 8.8818 × 10^−16 | 0.0000 × 10^0 | 0.0000 × 10^0
CSMA | 8.8818 × 10^−16 | 0.0000 × 10^0 | 0.0000 × 10^0
MCSMA | 8.8818 × 10^−16 | 0.0000 × 10^0 | 0.0000 × 10^0
DE | 7.7568 × 10^−15 | 9.0135 × 10^−16 | 0.0000 × 10^0
F13–F15 (mean | std | mean):
ISMA | 1.3498 × 10^−32 | 5.5674 × 10^−48 | 9.9800 × 10^−1
SMA | 4.8249 × 10^−3 | 7.4218 × 10^−3 | 1.3350 × 10^0
CSMA | 4.3078 × 10^−3 | 6.3340 × 10^−3 | 1.2955 × 10^0
MCSMA | 1.3498 × 10^−32 | 5.5674 × 10^−48 | 9.9800 × 10^−1
DE | 1.3498 × 10^−32 | 5.5674 × 10^−48 | 1.0311 × 10^0
F16–F18 (mean | std | mean):
ISMA | −1.032 × 10^0 | 1.2770 × 10^−8 | 3.9838 × 10^−1
SMA | −8.2436 × 10^−1 | 4.1923 × 10^−1 | 4.1640 × 10^−1
CSMA | −1.031 × 10^0 | 1.1109 × 10^−3 | 4.1829 × 10^−1
MCSMA | −1.031 × 10^0 | 6.5572 × 10^−4 | 3.9865 × 10^−1
DE | −1.031 × 10^0 | 6.7752 × 10^−16 | 3.9789 × 10^−1
F19–F21 (mean | std | mean):
ISMA | −3.863 × 10^0 | 1.1037 × 10^−4 | −3.163 × 10^0
SMA | −3.782 × 10^0 | 9.4398 × 10^−2 | −2.958 × 10^0
CSMA | −3.795 × 10^0 | 7.9965 × 10^−2 | −2.901 × 10^0
MCSMA | −3.861 × 10^0 | 1.9880 × 10^−3 | −3.042 × 10^0
DE | −3.862 × 10^0 | 2.7101 × 10^−15 | −3.321 × 10^0
F22–F24 (mean | std | mean):
ISMA | −1.040 × 10^1 | 3.3560 × 10^−6 | −1.054 × 10^1
SMA | −1.032 × 10^1 | 9.7684 × 10^−2 | −1.044 × 10^1
CSMA | −9.877 × 10^0 | 1.2268 × 10^0 | −1.041 × 10^1
MCSMA | −1.040 × 10^1 | 6.2358 × 10^−6 | −1.054 × 10^1
DE | −1.040 × 10^1 | 1.8067 × 10^−15 | −1.053 × 10^1
F25–F27 (mean | std | mean):
ISMA | 3.4989 × 10^3 | 2.2734 × 10^2 | 2.5000 × 10^3
SMA | 1.0429 × 10^4 | 2.8215 × 10^4 | 2.5169 × 10^3
CSMA | 4.7397 × 10^3 | 1.2900 × 10^3 | 2.5000 × 10^3
MCSMA | 3.6251 × 10^3 | 1.8988 × 10^2 | 2.5000 × 10^3
DE | 2.3554 × 10^3 | 8.2085 × 10^1 | 2.6152 × 10^3
F28–F30 (mean | std | mean):
ISMA | 2.7000 × 10^3 | 0.0000 × 10^0 | 2.7147 × 10^3
SMA | 2.7000 × 10^3 | 0.0000 × 10^0 | 2.7732 × 10^3
CSMA | 2.7000 × 10^3 | 0.0000 × 10^0 | 2.7172 × 10^3
MCSMA | 2.7000 × 10^3 | 0.0000 × 10^0 | 2.7788 × 10^3
DE | 2.7066 × 10^3 | 8.5796 × 10^−1 | 2.7003 × 10^3
F31–F33 (mean | std | mean):
ISMA | 3.0000 × 10^3 | 0.0000 × 10^0 | 3.1000 × 10^3
SMA | 4.1186 × 10^3 | 1.9606 × 10^3 | 2.8989 × 10^7
CSMA | 3.0000 × 10^3 | 0.0000 × 10^0 | 3.1000 × 10^3
MCSMA | 5.4386 × 10^3 | 1.1178 × 10^3 | 4.0742 × 10^7
DE | 3.6286 × 10^3 | 2.4807 × 10^1 | 1.2080 × 10^5
Table A6. Wilcoxon signed-rank test results between the SMA variants and the original SMA and DE algorithms.
Function | SMA | CSMA | MCSMA | DE
F1 | 1.7344 × 10^−6 | 1.0000 × 10^0 | 1.0000 × 10^0 | 1.7344 × 10^−6
F2 | 1.7344 × 10^−6 | 1.0000 × 10^0 | 1.0000 × 10^0 | 1.7344 × 10^−6
F3 | 1.7344 × 10^−6 | 1.0000 × 10^0 | 1.0000 × 10^0 | 1.7344 × 10^−6
F4 | 1.7344 × 10^−6 | 1.0000 × 10^0 | 1.7344 × 10^−6 | 1.7344 × 10^−6
F5 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 2.3438 × 10^−2 | 1.7344 × 10^−6
F6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.0000 × 10^0 | 1.0000 × 10^0
F7 | 2.3534 × 10^−6 | 4.0715 × 10^−5 | 2.6033 × 10^−6 | 1.7344 × 10^−6
F8 | 1.6503 × 10^−1 | 1.2720 × 10^−1 | 1.3851 × 10^−1 | 1.6268 × 10^−1
F9 | 1.0000 × 10^0 | 1.0000 × 10^0 | 1.0000 × 10^0 | 5.0000 × 10^−1
F10 | 1.0000 × 10^0 | 1.0000 × 10^0 | 1.0000 × 10^0 | 1.0135 × 10^−7
F11 | 1.0000 × 10^0 | 1.0000 × 10^0 | 1.0000 × 10^0 | 1.0000 × 10^0
F12 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.0000 × 10^0 | 1.0000 × 10^0
F13 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.0000 × 10^0 | 1.0000 × 10^0
F14 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.0000 × 10^0 | 1.0000 × 10^0
F15 | 2.8786 × 10^−6 | 2.6033 × 10^−6 | 6.7328 × 10^−1 | 3.5888 × 10^−4
F16 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6
F17 | 1.2381 × 10^−5 | 8.4661 × 10^−6 | 9.5899 × 10^−1 | 1.7344 × 10^−6
F18 | 7.3433 × 10^−1 | 4.0483 × 10^−1 | 1.1973 × 10^−3 | 1.7344 × 10^−6
F19 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 2.6033 × 10^−6 | 1.7344 × 10^−6
F20 | 6.3391 × 10^−6 | 6.3391 × 10^−6 | 2.6033 × 10^−6 | 1.7344 × 10^−6
F21 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 9.0993 × 10^−1 | 3.1123 × 10^−5
F22 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.9569 × 10^−2 | 1.7344 × 10^−6
F23 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 4.2843 × 10^−1 | 1.7344 × 10^−6
F24 | 6.9838 × 10^−6 | 2.5967 × 10^−5 | 3.1618 × 10^−3 | 1.7344 × 10^−6
F25 | 3.1123 × 10^−5 | 1.1265 × 10^−5 | 4.2767 × 10^−2 | 1.7344 × 10^−6
F26 | 2.5000 × 10^−1 | 1.0000 × 10^0 | 1.0000 × 10^0 | 4.3205 × 10^−8
F27 | 5.0000 × 10^−1 | 1.0000 × 10^0 | 1.0000 × 10^0 | 1.7344 × 10^−6
F28 | 1.0000 × 10^0 | 1.0000 × 10^0 | 1.0000 × 10^0 | 1.7344 × 10^−6
F29 | 6.5213 × 10^−6 | 1.8326 × 10^−3 | 1.6789 × 10^−5 | 1.7344 × 10^−6
F30 | 3.7896 × 10^−6 | 1.0000 × 10^0 | 1.7344 × 10^−6 | 1.7344 × 10^−6
F31 | 4.8828 × 10^−4 | 1.0000 × 10^0 | 2.5631 × 10^−6 | 1.7344 × 10^−6
F32 | 7.8125 × 10^−3 | 1.0000 × 10^0 | 1.7344 × 10^−6 | 1.7344 × 10^−6
F33 | 3.7896 × 10^−6 | 1.0000 × 10^0 | 1.7344 × 10^−6 | 1.7344 × 10^−6
+/=/− | 25/8/0 | 16/16/1 | 15/18/1 | 16/7/10
Table A7. Average ranking values using the Friedman test.
Algorithm | ISMA | SMA | CSMA | MCSMA | DE
AVR | 2.256060606 | 3.847979798 | 3.202525253 | 2.90959596 | 2.783838384
Rank | 1 | 5 | 4 | 3 | 2
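The p-values in Table A6 and the average ranking values in Table A7 come from the Wilcoxon signed-rank test and the Friedman test, respectively. The sketch below shows how such statistics can be computed with SciPy; the per-run fitness values here are synthetic placeholders standing in for the 30 independent runs of each algorithm, so the printed numbers will not match the tables.

import numpy as np
from scipy.stats import wilcoxon, friedmanchisquare, rankdata

# Hypothetical per-run best-fitness values on one benchmark (30 runs each);
# in the actual experiments these come from the optimizer runs themselves.
rng = np.random.default_rng(0)
results = {name: rng.normal(loc=mu, scale=0.1, size=30)
           for name, mu in [("ISMA", 0.0), ("SMA", 0.5), ("CSMA", 0.3),
                            ("MCSMA", 0.2), ("DE", 0.1)]}

# Pairwise Wilcoxon signed-rank tests: ISMA vs. each competitor.
for name in ["SMA", "CSMA", "MCSMA", "DE"]:
    stat, p = wilcoxon(results["ISMA"], results[name])
    print(f"ISMA vs {name}: p = {p:.4e}")

# Friedman test plus average ranks across algorithms (lower rank = better).
scores = np.column_stack(list(results.values()))   # shape: runs x algorithms
stat, p = friedmanchisquare(*scores.T)
avg_ranks = rankdata(scores, axis=1).mean(axis=0)
print("Friedman p =", p, "average ranks:", dict(zip(results, avg_ranks)))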
Table A8. Comparison of the numerical results obtained by ISMA and other advanced methods.
F1–F3 (mean | std | mean):
ISMA | 0.0000 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0
MPEDE | 5.6838 × 10^−223 | 0.0000 × 10^0 | 2.0352 × 10^−109
LSHADE | 8.6954 × 10^−203 | 0.0000 × 10^0 | 2.6224 × 10^−85
ALCPSO | 4.5530 × 10^−186 | 0.0000 × 10^0 | 1.0128 × 10^−6
CLPSO | 2.7917 × 10^−34 | 2.0632 × 10^−34 | 5.6730 × 10^−21
CESCA | 1.0264 × 10^3 | 7.6509 × 10^2 | 7.2069 × 10^0
IGWO | 0.0000 × 10^0 | 0.0000 × 10^0 | 5.4179 × 10^−260
BMWOA | 8.7826 × 10^−4 | 1.9389 × 10^−3 | 8.5362 × 10^−3
OBLGWO | 2.6476 × 10^−281 | 0.0000 × 10^0 | 5.6311 × 10^−142
F4–F6 (mean | std | mean):
ISMA | 0.0000 × 10^0 | 0.0000 × 10^0 | 5.6931 × 10^−12
MPEDE | 1.3923 × 10^−5 | 2.6447 × 10^−5 | 1.1960 × 10^0
LSHADE | 1.3040 × 10^−4 | 2.3249 × 10^−4 | 5.3155 × 10^−1
ALCPSO | 2.6029 × 10^−5 | 3.4443 × 10^−5 | 2.5603 × 10^1
CLPSO | 1.3451 × 10^0 | 2.6110 × 10^−1 | 6.5461 × 10^−1
CESCA | 2.0286 × 10^1 | 7.5303 × 10^0 | 2.4759 × 10^5
IGWO | 7.5149 × 10^−26 | 4.1158 × 10^−25 | 2.3186 × 10^1
BMWOA | 3.6139 × 10^−3 | 3.9430 × 10^−3 | 3.9781 × 10^−3
OBLGWO | 2.7133 × 10^−157 | 1.4861 × 10^−156 | 2.6112 × 10^1
F7–F9 (mean | std | mean):
ISMA | 9.4873 × 10^−5 | 6.6385 × 10^−5 | 6.5535 × 10^4
MPEDE | 3.2148 × 10^−3 | 1.6021 × 10^−3 | −1.187 × 10^4
LSHADE | 6.5393 × 10^−3 | 5.0546 × 10^−3 | −1.895 × 10^3
ALCPSO | 9.6181 × 10^−2 | 3.9035 × 10^−2 | −1.147 × 10^4
CLPSO | 2.6752 × 10^−3 | 7.7407 × 10^−4 | −1.256 × 10^4
CESCA | 5.3895 × 10^−1 | 3.4475 × 10^−1 | −3.901 × 10^3
IGWO | 2.7827 × 10^−4 | 2.2936 × 10^−4 | −7.436 × 10^3
BMWOA | 1.1610 × 10^−3 | 8.5016 × 10^−4 | −1.257 × 10^4
OBLGWO | 2.3640 × 10^−5 | 2.4037 × 10^−5 | −1.253 × 10^4
F10–F12 (mean | std | mean):
ISMA | 8.8818 × 10^−16 | 0.0000 × 10^0 | 0.0000 × 10^0
MPEDE | 2.0353 × 10^0 | 6.7054 × 10^−1 | 1.5065 × 10^−2
LSHADE | 3.3455 × 10^−14 | 3.7417 × 10^−15 | 1.2274 × 10^−2
ALCPSO | 8.3257 × 10^−1 | 8.5957 × 10^−1 | 1.7674 × 10^−2
CLPSO | 1.2138 × 10^−14 | 2.4831 × 10^−15 | 0.0000 × 10^0
CESCA | 6.7169 × 10^0 | 1.9070 × 10^0 | 1.0700 × 10^1
IGWO | 4.6777 × 10^−15 | 9.0135 × 10^−16 | 0.0000 × 10^0
BMWOA | 4.6994 × 10^−3 | 5.2250 × 10^−3 | 1.7612 × 10^−3
OBLGWO | 8.8818 × 10^−16 | 0.0000 × 10^0 | 0.0000 × 10^0
F13–F15 (mean | std | mean):
ISMA | 1.3498 × 10^−32 | 5.5674 × 10^−48 | 9.9800 × 10^−1
MPEDE | 3.2626 × 10^−1 | 9.4775 × 10^−1 | 9.9800 × 10^−1
LSHADE | 1.1303 × 10^−1 | 4.0369 × 10^−1 | 9.9800 × 10^−1
ALCPSO | 1.1403 × 10^−2 | 3.4415 × 10^−2 | 9.9800 × 10^−1
CLPSO | 1.3498 × 10^−32 | 5.5674 × 10^−48 | 9.9800 × 10^−1
CESCA | 4.2932 × 10^5 | 6.0065 × 10^5 | 3.0471 × 10^0
IGWO | 1.6832 × 10^−2 | 3.2997 × 10^−2 | 9.9800 × 10^−1
BMWOA | 1.7335 × 10^−4 | 5.7395 × 10^−4 | 9.9800 × 10^−1
OBLGWO | 2.4316 × 10^−2 | 3.9405 × 10^−2 | 9.9800 × 10^−1
F16–F18 (mean | std | mean):
ISMA | −1.032 × 10^0 | 6.9699 × 10^−9 | 3.9808 × 10^−1
MPEDE | −1.032 × 10^0 | 6.7752 × 10^−16 | 3.9789 × 10^−1
LSHADE | −1.032 × 10^0 | 6.7752 × 10^−16 | 3.9789 × 10^−1
ALCPSO | −1.032 × 10^0 | 5.6082 × 10^−16 | 3.9789 × 10^−1
CLPSO | −1.032 × 10^0 | 6.4539 × 10^−16 | 3.9789 × 10^−1
CESCA | −1.026 × 10^0 | 5.9057 × 10^−3 | 7.0892 × 10^−1
IGWO | −1.032 × 10^0 | 2.2583 × 10^−13 | 3.9789 × 10^−1
BMWOA | −1.031 × 10^0 | 4.4024 × 10^−16 | 3.9789 × 10^−1
OBLGWO | −1.032 × 10^0 | 9.0832 × 10^−9 | 3.9801 × 10^−1
F19–F21 (mean | std | mean):
ISMA | −3.863 × 10^0 | 9.7215 × 10^−5 | −3.159 × 10^0
MPEDE | −3.863 × 10^0 | 2.7101 × 10^−15 | −3.271 × 10^0
LSHADE | −3.863 × 10^0 | 1.3042 × 10^−4 | −1.952 × 10^0
ALCPSO | −3.862 × 10^0 | 2.5243 × 10^−15 | −3.274 × 10^0
CLPSO | −3.863 × 10^0 | 2.7101 × 10^−15 | −3.322 × 10^0
CESCA | −3.610 × 10^0 | 1.6803 × 10^−1 | −2.176 × 10^0
IGWO | −3.863 × 10^0 | 1.0500 × 10^−9 | −3.251 × 10^0
BMWOA | −3.863 × 10^0 | 1.5134 × 10^−14 | −3.290 × 10^0
OBLGWO | −3.863 × 10^0 | 1.3281 × 10^−6 | −3.223 × 10^0
F22–F24 (mean | std | mean):
ISMA | −1.040 × 10^1 | 5.9774 × 10^−6 | −1.054 × 10^1
MPEDE | −9.542 × 10^0 | 2.2747 × 10^0 | −9.817 × 10^0
LSHADE | −1.023 × 10^1 | 9.6292 × 10^−1 | −1.053 × 10^1
ALCPSO | −9.876 × 10^0 | 1.6093 × 10^0 | −9.997 × 10^0
CLPSO | −1.040 × 10^1 | 5.7155 × 10^−9 | −1.054 × 10^1
CESCA | −1.091 × 10^0 | 4.2964 × 10^−1 | −1.172 × 10^0
IGWO | −9.166 × 10^0 | 2.2815 × 10^0 | −1.018 × 10^1
BMWOA | −1.040 × 10^1 | 9.4634 × 10^−11 | −1.054 × 10^1
OBLGWO | −1.040 × 10^1 | 3.5332 × 10^−5 | −1.054 × 10^1
F25–F27 (mean | std | mean):
ISMA | 3.4696 × 10^3 | 1.5041 × 10^2 | 2.5000 × 10^3
MPEDE | 2.5483 × 10^3 | 2.1545 × 10^2 | 2.6152 × 10^3
LSHADE | 2.4214 × 10^3 | 1.2400 × 10^2 | 2.6152 × 10^3
ALCPSO | 2.6317 × 10^3 | 1.8339 × 10^2 | 2.6153 × 10^3
CLPSO | 2.4055 × 10^3 | 8.0140 × 10^1 | 2.6152 × 10^3
CESCA | 5.5650 × 10^3 | 9.4857 × 10^2 | 3.0675 × 10^3
IGWO | 2.5661 × 10^3 | 1.8331 × 10^2 | 2.6206 × 10^3
BMWOA | 2.9003 × 10^3 | 1.9433 × 10^2 | 2.5005 × 10^3
OBLGWO | 2.6973 × 10^3 | 2.3782 × 10^2 | 2.6188 × 10^3
F28–F30 (mean | std | mean):
ISMA | 2.7000 × 10^3 | 0.0000 × 10^0 | 2.7181 × 10^3
MPEDE | 2.7112 × 10^3 | 4.6410 × 10^0 | 2.7202 × 10^3
LSHADE | 2.7056 × 10^3 | 3.3938 × 10^0 | 2.7104 × 10^3
ALCPSO | 2.7124 × 10^3 | 5.0481 × 10^0 | 2.7553 × 10^3
CLPSO | 2.7072 × 10^3 | 9.5781 × 10^−1 | 2.7004 × 10^3
CESCA | 2.7206 × 10^3 | 8.6833 × 10^0 | 2.7123 × 10^3
IGWO | 2.7107 × 10^3 | 2.5492 × 10^0 | 2.7007 × 10^3
BMWOA | 2.7000 × 10^3 | 1.1250 × 10^−2 | 2.7006 × 10^3
OBLGWO | 2.7000 × 10^3 | 0.0000 × 10^0 | 2.7005 × 10^3
F31–F33 (mean | std | mean):
ISMA | 3.0000 × 10^3 | 0.0000 × 10^0 | 3.1000 × 10^3
MPEDE | 3.9778 × 10^3 | 3.4239 × 10^2 | 1.6519 × 10^6
LSHADE | 3.7470 × 10^3 | 8.7552 × 10^1 | 2.9248 × 10^5
ALCPSO | 4.4793 × 10^3 | 5.0276 × 10^2 | 2.8922 × 10^6
CLPSO | 3.7271 × 10^3 | 8.5165 × 10^1 | 3.8465 × 10^3
CESCA | 5.4621 × 10^3 | 2.9312 × 10^2 | 1.6432 × 10^7
IGWO | 3.7942 × 10^3 | 1.0332 × 10^2 | 8.4824 × 10^5
BMWOA | 3.0001 × 10^3 | 1.8250 × 10^−1 | 3.8977 × 10^5
OBLGWO | 3.5344 × 10^3 | 4.8730 × 10^2 | 3.4895 × 10^6
Table A9. Wilcoxon signed-rank test results between the ISMA and other advanced algorithms.
Function | MPEDE | LSHADE | ALCPSO | CLPSO | CESCA | IGWO | BMWOA | OBLGWO
F1 | 1.7344 × 10^−6 | 1.7333 × 10^−6 | 1.7333 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.0000 × 10^0 | 1.7344 × 10^−6 | 1.0000 × 10^0
F2 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6
F3 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 2.5000 × 10^−1
F4 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 3.7896 × 10^−6
F5 | 8.1806 × 10^−5 | 5.9829 × 10^−2 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6
F6 | 3.5657 × 10^−4 | 2.4414 × 10^−4 | 1.7333 × 10^−6 | 1.0000 × 10^0 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6
F7 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 5.7924 × 10^−5 | 1.7344 × 10^−6 | 3.1123 × 10^−5
F8 | 1.4831 × 10^−3 | 1.4591 × 10^−3 | 1.4835 × 10^−3 | 1.3642 × 10^−3 | 1.4557 × 10^−3 | 1.4839 × 10^−3 | 1.4839 × 10^−3 | 1.4839 × 10^−3
F9 | 1.7300 × 10^−6 | 5.0136 × 10^−6 | 1.7344 × 10^−6 | 1.0000 × 10^0 | 1.7344 × 10^−6 | 1.0000 × 10^0 | 1.7344 × 10^−6 | 1.0000 × 10^0
F10 | 1.7203 × 10^−6 | 8.7824 × 10^−7 | 1.7041 × 10^−6 | 1.0651 × 10^−6 | 1.7344 × 10^−6 | 1.0135 × 10^−7 | 1.7344 × 10^−6 | 1.0000 × 10^0
F11 | 1.9472 × 10^−4 | 3.9586 × 10^−5 | 1.3163 × 10^−4 | 1.0000 × 10^0 | 1.7333 × 10^−6 | 1.0000 × 10^0 | 1.7333 × 10^−6 | 1.0000 × 10^0
F12 | 2.6499 × 10^−5 | 1.7948 × 10^−5 | 1.7311 × 10^−6 | 1.0000 × 10^0 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6
F13 | 5.2772 × 10^−5 | 4.0204 × 10^−4 | 1.7062 × 10^−6 | 1.0000 × 10^0 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6
F14 | 1.0000 × 10^0 | 1.0000 × 10^0 | 1.0000 × 10^0 | 1.0000 × 10^0 | 1.7344 × 10^−6 | 4.1722 × 10^−7 | 3.9063 × 10^−3 | 1.7344 × 10^−6
F15 | 1.4795 × 10^−2 | 1.9209 × 10^−6 | 2.7653 × 10^−3 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 5.9836 × 10^−2 | 2.7653 × 10^−3 | 1.8519 × 10^−2
F16 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.1748 × 10^−2
F17 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 2.1827 × 10^−2
F18 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.3059 × 10^−1 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6
F19 | 1.7344 × 10^−6 | 3.1123 × 10^−5 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 2.3534 × 10^−6
F20 | 3.8822 × 10^−6 | 1.9152 × 10^−1 | 3.8822 × 10^−6 | 1.7344 × 10^−6 | 1.9209 × 10^−6 | 8.4661 × 10^−6 | 6.3391 × 10^−6 | 2.2248 × 10^−4
F21 | 6.4352 × 10^−1 | 1.6503 × 10^−1 | 1.4795 × 10^−2 | 7.7309 × 10^−3 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6
F22 | 1.4795 × 10^−2 | 3.1123 × 10^−5 | 2.7653 × 10^−3 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 4.9498 × 10^−2 | 1.7344 × 10^−6 | 1.7344 × 10^−6
F23 | 2.7653 × 10^−3 | 1.7344 × 10^−6 | 2.7653 × 10^−3 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 6.8836 × 10^−1 | 1.7344 × 10^−6 | 2.6033 × 10^−6
F24 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 2.6033 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6
F25 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.9209 × 10^−6 | 1.7344 × 10^−6
F26 | 4.3205 × 10^−8 | 6.7988 × 10^−8 | 1.7344 × 10^−6 | 1.7333 × 10^−6 | 1.7333 × 10^−6 | 1.7333 × 10^−6 | 1.7333 × 10^−6 | 1.7344 × 10^−6
F27 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.0000 × 10^0
F28 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.0000 × 10^0
F29 | 7.8647 × 10^−2 | 1.4839 × 10^−3 | 1.4139 × 10^−1 | 1.7344 × 10^−6 | 2.5637 × 10^−2 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6
F30 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 2.5000 × 10^−1
F31 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 2.9305 × 10^−4
F32 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6
F33 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6 | 1.7344 × 10^−6
+/=/− | 22/3/8 | 20/4/9 | 23/2/8 | 16/6/11 | 30/1/2 | 19/5/9 | 21/0/12 | 16/8/9
Table A10. Average ranking values using the Friedman test.
Algorithm | ISMA | MPEDE | LSHADE | ALCPSO | CLPSO | CESCA | IGWO | BMWOA | OBLGWO
AVR | 3.7075758 | 4.0257576 | 4.1979798 | 5.1949495 | 3.8792929 | 8.8474747 | 5.0984848 | 5.080303 | 4.9681818
Rank | 1 | 3 | 4 | 8 | 2 | 9 | 7 | 6 | 5
Table A11. The descriptions of two types of TFs.
S-Shaped Family
Name | TF
TFS1 | $T(x_{ij}^t) = \frac{1}{1 + \exp(-2 x_{ij}^t)}$
TFS2 | $T(x_{ij}^t) = \frac{1}{1 + \exp(-x_{ij}^t)}$
TFS3 | $T(x_{ij}^t) = \frac{1}{1 + \exp(-x_{ij}^t / 2)}$
TFS4 | $T(x_{ij}^t) = \frac{1}{1 + \exp(-x_{ij}^t / 3)}$
V-Shaped Family
Name | TF
TFV1 | $T(x_{ij}^t) = \left| \operatorname{erf}\left( \frac{\sqrt{\pi}}{2} x_{ij}^t \right) \right| = \left| \frac{2}{\sqrt{\pi}} \int_0^{(\sqrt{\pi}/2) x_{ij}^t} e^{-t^2} \, dt \right|$
TFV2 | $T(x_{ij}^t) = \left| \tanh(x_{ij}^t) \right|$
TFV3 | $T(x_{ij}^t) = \left| \frac{x_{ij}^t}{\sqrt{1 + (x_{ij}^t)^2}} \right|$
TFV4 | $T(x_{ij}^t) = \left| \frac{2}{\pi} \arctan\left( \frac{\pi}{2} x_{ij}^t \right) \right|$
Note: $x_{ij}^t$ denotes the $i$-th element on the $j$-th dimension in the position vector at iteration $t$.
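The eight transfer functions above are what turn the continuous ISMA into the BISMA variants compared in Tables A13–A16. A minimal sketch of the two families and of the usual probabilistic binarization rule follows; note that some binary optimizers flip the current bit under a V-shaped TF rather than sampling it directly, so the sampling variant shown here is one common choice, not necessarily the exact rule used in BISMA.

import numpy as np
from scipy.special import erf

# S-shaped family: squash a real-valued position into a probability.
S = {
    "S1": lambda x: 1 / (1 + np.exp(-2 * x)),
    "S2": lambda x: 1 / (1 + np.exp(-x)),
    "S3": lambda x: 1 / (1 + np.exp(-x / 2)),
    "S4": lambda x: 1 / (1 + np.exp(-x / 3)),
}
# V-shaped family: probability grows with the magnitude of the position.
V = {
    "V1": lambda x: np.abs(erf(np.sqrt(np.pi) / 2 * x)),
    "V2": lambda x: np.abs(np.tanh(x)),
    "V3": lambda x: np.abs(x / np.sqrt(1 + x ** 2)),
    "V4": lambda x: np.abs(2 / np.pi * np.arctan(np.pi / 2 * x)),
}

rng = np.random.default_rng(1)

def binarize(x, tf):
    """Map a continuous position vector to a 0/1 gene mask by sampling."""
    return (rng.random(x.shape) < tf(x)).astype(int)

x = rng.normal(size=10)        # a continuous SMA position
mask = binarize(x, V["V4"])    # BISMA_V4-style binarization
print(mask)                    # e.g. [0 1 0 ...]: 1 = gene kept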
Table A12. Characteristics of gene expression datasets.
Datasets | Samples | Genes | Categories
Colon | 62 | 2000 | 2
SRBCT | 83 | 2309 | 4
Leukemia | 72 | 7131 | 2
Brain_Tumor1 | 90 | 5920 | 5
Brain_Tumor2 | 50 | 10,367 | 4
CNS | 60 | 7130 | 2
DLBCL | 77 | 5470 | 4
Leukemia1 | 72 | 5328 | 5
Leukemia2 | 72 | 11,225 | 3
Lung_Cancer | 203 | 12,601 | 3
Prostate_Tumor | 102 | 10,509 | 2
Tumors_9 | 60 | 5726 | 9
Tumors_11 | 174 | 12,533 | 11
Tumors_14 | 308 | 15,009 | 26
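On each of these datasets, a wrapper method scores a candidate gene mask by the fitness reported in Tables A15 and A20, i.e., a weighted combination of classification error and subset size. The sketch below shows one common form of such a fitness function; the KNN classifier, the 5-fold cross-validation, and the weight alpha = 0.99 are typical wrapper choices rather than the exact configuration of BISMA.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def fitness(mask, X, y, alpha=0.99):
    """Wrapper fitness: weighted classification error plus gene ratio.

    alpha trades error against subset size; 0.99 is a common choice in
    wrapper feature selection, not necessarily the paper's setting.
    """
    if mask.sum() == 0:            # an empty gene subset is invalid
        return 1.0
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=5),
                          X[:, mask.astype(bool)], y, cv=5).mean()
    return alpha * (1 - acc) + (1 - alpha) * mask.sum() / X.shape[1]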
Table A13. Overall results of eight versions of BISMA according to S-shaped and V-shaped TFs in terms of average number of the selected genes.
Datasets | Metrics | BISMA_S1 | BISMA_S2 | BISMA_S3 | BISMA_S4 | BISMA_V1 | BISMA_V2 | BISMA_V3 | BISMA_V4
Colon | std | 143.6448 | 157.4435 | 173.4187 | 162.0243 | 0.4216 | 0.9718 | 0.6992 | 0.6992
 | avg | 307.5000 | 464.5000 | 476.5000 | 498.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000
SRBCT | std | 138.2114 | 95.9528 | 156.2727 | 154.4375 | 2.9515 | 2.9364 | 1.9322 | 1.4337
 | avg | 376.5000 | 465.5000 | 566.0000 | 565.0000 | 4.0000 | 5.0000 | 4.5000 | 4.5000
Leukemia | std | 589.3556 | 296.6164 | 135.8554 | 64.6241 | 0.9487 | 1.2517 | 0.3162 | 0.3162
 | avg | 1595.5000 | 1359.0000 | 1738.5000 | 1755.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000
Brain_Tumor1 | std | 926.7275 | 778.2962 | 44.3653 | 560.9920 | 147.6392 | 8.0939 | 11.1679 | 19.3724
 | avg | 1050.0000 | 1319.5000 | 1451.5000 | 1461.5000 | 2.0000 | 3.0000 | 2.5000 | 2.5000
Brain_Tumor2 | std | 755.7944 | 978.0951 | 955.6762 | 430.0868 | 1.7512 | 0.9944 | 1.2293 | 0.4831
 | avg | 1938.0000 | 2509.5000 | 2510.0000 | 2529.5000 | 1.0000 | 2.0000 | 1.5000 | 1.0000
CNS | std | 504.4472 | 867.2775 | 732.4766 | 489.2598 | 2.2136 | 0.5164 | 0.0000 | 0.4216
 | avg | 1685.0000 | 1720.5000 | 1805.0000 | 1935.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000
DLBCL | std | 292.2214 | 169.4024 | 129.8839 | 79.6573 | 0.0000 | 0.6750 | 0.3162 | 0.6325
 | avg | 490.5000 | 1295.0000 | 1334.5000 | 1371.5000 | 1.0000 | 1.0000 | 1.0000 | 1.0000
Leukemia1 | std | 348.2874 | 536.8715 | 66.7750 | 77.7810 | 1.3499 | 1.8135 | 1.1005 | 1.2472
 | avg | 1163.0000 | 1271.5000 | 1283.0000 | 1328.5000 | 2.0000 | 2.0000 | 2.0000 | 2.0000
Leukemia2 | std | 731.5217 | 497.7822 | 232.6141 | 929.4172 | 3.7357 | 1.6633 | 1.4142 | 1.4181
 | avg | 1255.5000 | 2532.5000 | 2673.5000 | 2737.5000 | 3.0000 | 2.5000 | 1.5000 | 3.0000
Lung_Cancer | std | 1191.4138 | 1241.8645 | 1162.5447 | 623.9975 | 19.8161 | 16.1593 | 29.0746 | 93.8666
 | avg | 3066.0000 | 3122.0000 | 3111.0000 | 3162.0000 | 23.5000 | 19.0000 | 16.5000 | 15.5000
Prostate_Tumor | std | 1573.8463 | 1270.5976 | 1119.6290 | 1279.6201 | 6.2405 | 37.9867 | 1.0750 | 1.8529
 | avg | 2540.0000 | 2709.0000 | 2631.5000 | 2760.5000 | 3.5000 | 2.0000 | 2.5000 | 2.5000
Tumors_9 | std | 785.7851 | 856.2383 | 533.6090 | 595.3492 | 243.1681 | 42.0502 | 595.2484 | 139.8144
 | avg | 1376.5000 | 1409.5000 | 1698.0000 | 1421.0000 | 1.0000 | 2.0000 | 2.5000 | 4.0000
Tumors_11 | std | 1040.6752 | 1660.6726 | 1391.3213 | 1285.5454 | 108.9483 | 288.1741 | 948.9861 | 248.4647
 | avg | 3118.5000 | 4607.0000 | 4642.0000 | 3287.0000 | 210.0000 | 304.5000 | 374.5000 | 233.0000
Tumors_14 | std | 2353.3411 | 1657.2601 | 974.4708 | 1551.2076 | 1520.8509 | 930.6287 | 618.4779 | 966.3795
 | avg | 4920.0000 | 7469.0000 | 7450.0000 | 6775.0000 | 1143.5000 | 760.5000 | 540.5000 | 569.5000
ARV | 5.7143 | 6.3893 | 6.8143 | 7.0393 | 2.5286 | 2.6536 | 2.4464 | 2.4143
Rank | 5 | 6 | 7 | 8 | 3 | 4 | 2 | 1
Table A14. Overall results of eight versions of BISMA according to S-shaped and V-shaped TFs in terms of average error rate.
Datasets | Metrics | BISMA_S1 | BISMA_S2 | BISMA_S3 | BISMA_S4 | BISMA_V1 | BISMA_V2 | BISMA_V3 | BISMA_V4
Colon | std | 1.305 × 10^−1 | 1.399 × 10^−1 | 1.620 × 10^−1 | 1.042 × 10^−1 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0
 | avg | 0.1429 | 0.1667 | 0.1667 | 0.1548 | 0.0000 | 0.0000 | 0.0000 | 0.0000
SRBCT | std | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0
 | avg | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000
Leukemia | std | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0
 | avg | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000
Brain_Tumor1 | std | 5.463 × 10^−2 | 5.604 × 10^−2 | 7.147 × 10^−2 | 5.520 × 10^−2 | 3.162 × 10^−2 | 3.162 × 10^−2 | 3.162 × 10^−2 | 0.000 × 10^0
 | avg | 0.0000 | 0.0000 | 0.0500 | 0.0500 | 0.0000 | 0.0000 | 0.0000 | 0.0000
Brain_Tumor2 | std | 9.088 × 10^−2 | 8.051 × 10^−2 | 1.370 × 10^−1 | 8.051 × 10^−2 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0
 | avg | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000
CNS | std | 8.794 × 10^−2 | 1.466 × 10^−1 | 8.607 × 10^−2 | 1.528 × 10^−1 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0
 | avg | 0.1548 | 0.0000 | 0.0000 | 0.1548 | 0.0000 | 0.0000 | 0.0000 | 0.0000
DLBCL | std | 3.953 × 10^−2 | 4.518 × 10^−2 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0
 | avg | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000
Leukemia1 | std | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0
 | avg | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000
Leukemia2 | std | 0.000 × 10^0 | 4.518 × 10^−2 | 4.518 × 10^−2 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0
 | avg | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000
Lung_Cancer | std | 2.528 × 10^−2 | 2.561 × 10^−2 | 2.491 × 10^−2 | 3.310 × 10^−2 | 0.000 × 10^0 | 1.506 × 10^−2 | 0.000 × 10^0 | 0.000 × 10^0
 | avg | 0.0000 | 0.0238 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000
Prostate_Tumor | std | 6.449 × 10^−2 | 5.020 × 10^−2 | 7.071 × 10^−2 | 5.182 × 10^−2 | 3.162 × 10^−2 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0
 | avg | 0.0000 | 0.0909 | 0.0000 | 0.0455 | 0.0000 | 0.0000 | 0.0000 | 0.0000
Tumors_9 | std | 7.313 × 10^−2 | 1.315 × 10^−1 | 6.325 × 10^−2 | 1.406 × 10^−1 | 5.271 × 10^−2 | 0.000 × 10^0 | 0.000 × 10^0 | 0.000 × 10^0
 | avg | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000
Tumors_11 | std | 4.353 × 10^−2 | 4.395 × 10^−2 | 5.206 × 10^−2 | 4.678 × 10^−2 | 2.886 × 10^−2 | 2.413 × 10^−2 | 2.975 × 10^−2 | 1.757 × 10^−2
 | avg | 0.0556 | 0.0590 | 0.0572 | 0.0572 | 0.0000 | 0.0000 | 0.0000 | 0.0000
Tumors_14 | std | 4.856 × 10^−2 | 1.028 × 10^−1 | 5.861 × 10^−2 | 4.875 × 10^−2 | 4.411 × 10^−2 | 7.900 × 10^−2 | 3.750 × 10^−2 | 6.582 × 10^−2
 | avg | 0.2952 | 0.2540 | 0.2971 | 0.2833 | 0.2500 | 0.2374 | 0.2457 | 0.2379
ARV | 5.1107 | 5.1107 | 5.0429 | 5.0571 | 4.9857 | 4.0429 | 3.9571 | 3.9429
Rank | 8 | 8 | 6 | 7 | 5 | 4 | 3 | 2
Table A15. Overall results of eight versions of BISMA according to S-shaped and V-shaped TFs in terms of average fitness.
Datasets | Metrics | BISMA_S1 | BISMA_S2 | BISMA_S3 | BISMA_S4 | BISMA_V1 | BISMA_V2 | BISMA_V3 | BISMA_V4
Colon | std | 1.2251 × 10^−1 | 1.3115 × 10^−1 | 1.5248 × 10^−1 | 9.9228 × 10^−2 | 1.0500 × 10^−5 | 2.4300 × 10^−5 | 1.7500 × 10^−5 | 1.7500 × 10^−5
 | avg | 0.14415 | 0.16695 | 0.16966 | 0.16554 | 2.50 × 10^−5 | 2.50 × 10^−5 | 2.50 × 10^−5 | 2.50 × 10^−5
SRBCT | std | 2.9942 × 10^−3 | 2.0787 × 10^−3 | 3.3855 × 10^−3 | 3.3457 × 10^−3 | 6.3900 × 10^−5 | 6.3600 × 10^−5 | 4.1900 × 10^−5 | 3.1100 × 10^−5
 | avg | 0.0081564 | 0.010084 | 0.012262 | 0.01224 | 8.67 × 10^−5 | 0.00010832 | 9.75 × 10^−5 | 9.75 × 10^−5
Leukemia | std | 4.1329 × 10^−3 | 2.0801 × 10^−3 | 9.5270 × 10^−4 | 4.5318 × 10^−4 | 6.6500 × 10^−6 | 8.7800 × 10^−6 | 2.2200 × 10^−6 | 2.2200 × 10^−6
 | avg | 0.011189 | 0.0095302 | 0.012191 | 0.012307 | 7.01 × 10^−6 | 7.01 × 10^−6 | 7.01 × 10^−6 | 7.01 × 10^−6
Brain_Tumor1 | std | 5.1602 × 10^−2 | 5.4124 × 10^−2 | 6.7843 × 10^−2 | 5.1574 × 10^−2 | 2.9920 × 10^−2 | 3.0033 × 10^−2 | 3.0031 × 10^−2 | 1.6362 × 10^−4
 | avg | 0.018758 | 0.018163 | 0.059215 | 0.06541 | 1.69 × 10^−5 | 2.53 × 10^−5 | 2.11 × 10^−5 | 2.11 × 10^−5
Brain_Tumor2 | std | 8.4413 × 10^−2 | 7.4520 × 10^−2 | 1.3015 × 10^−1 | 7.6131 × 10^−2 | 8.4500 × 10^−6 | 4.8000 × 10^−6 | 5.9300 × 10^−6 | 2.3300 × 10^−6
 | avg | 0.012262 | 0.015231 | 0.012441 | 0.013635 | 4.82 × 10^−6 | 9.65 × 10^−6 | 7.23 × 10^−6 | 4.82 × 10^−6
CNS | std | 8.4987 × 10^−2 | 1.3856 × 10^−1 | 8.0292 × 10^−2 | 1.4740 × 10^−1 | 1.5500 × 10^−5 | 3.6200 × 10^−6 | 0.0000 × 10^0 | 2.9600 × 10^−6
 | avg | 0.1548 | 0.018712 | 0.023061 | 0.1594 | 7.01 × 10^−6 | 7.01 × 10^−6 | 7.01 × 10^−6 | 7.01 × 10^−6
DLBCL | std | 3.7662 × 10^−2 | 4.3039 × 10^−2 | 1.1875 × 10^−3 | 7.2826 × 10^−4 | 0.0000 × 10^0 | 6.1700 × 10^−6 | 2.8900 × 10^−6 | 5.7800 × 10^−6
 | avg | 0.0044844 | 0.012009 | 0.012201 | 0.012539 | 9.14 × 10^−6 | 9.14 × 10^−6 | 9.14 × 10^−6 | 9.14 × 10^−6
Leukemia1 | std | 3.2691 × 10^−3 | 5.0392 × 10^−3 | 6.2676 × 10^−4 | 7.3006 × 10^−4 | 1.2700 × 10^−5 | 1.7000 × 10^−5 | 1.0300 × 10^−5 | 1.1700 × 10^−5
 | avg | 0.010916 | 0.011934 | 0.012042 | 0.012469 | 1.88 × 10^−5 | 1.88 × 10^−5 | 1.88 × 10^−5 | 1.88 × 10^−5
Leukemia2 | std | 3.2584 × 10^−3 | 4.3716 × 10^−2 | 4.3135 × 10^−2 | 4.1399 × 10^−3 | 1.6600 × 10^−5 | 7.4100 × 10^−6 | 6.3000 × 10^−6 | 6.3200 × 10^−6
 | avg | 0.0055924 | 0.011281 | 0.011909 | 0.012194 | 1.34 × 10^−5 | 1.11 × 10^−5 | 6.68 × 10^−6 | 1.34 × 10^−5
Lung_Cancer | std | 2.3808 × 10^−2 | 2.2944 × 10^−2 | 2.2084 × 10^−2 | 3.1035 × 10^−2 | 7.8600 × 10^−5 | 1.4293 × 10^−2 | 1.1538 × 10^−4 | 3.7249 × 10^−4
 | avg | 0.018605 | 0.04004 | 0.022837 | 0.013115 | 9.33 × 10^−5 | 8.73 × 10^−5 | 6.55 × 10^−5 | 6.15 × 10^−5
Prostate_Tumor | std | 6.2632 × 10^−2 | 4.4868 × 10^−2 | 6.4827 × 10^−2 | 4.9589 × 10^−2 | 3.0037 × 10^−2 | 1.8073 × 10^−4 | 5.1100 × 10^−6 | 8.8200 × 10^−6
 | avg | 0.018427 | 0.098843 | 0.024919 | 0.06217 | 2.85 × 10^−5 | 9.52 × 10^−6 | 1.19 × 10^−5 | 1.19 × 10^−5
Tumors_9 | std | 7.3201 × 10^−2 | 1.2925 × 10^−1 | 6.0930 × 10^−2 | 1.3705 × 10^−1 | 5.1706 × 10^−2 | 3.6719 × 10^−4 | 5.1978 × 10^−3 | 1.2209 × 10^−3
 | avg | 0.012256 | 0.012308 | 0.014827 | 0.012408 | 8.73 × 10^−6 | 1.75 × 10^−5 | 2.18 × 10^−5 | 3.49 × 10^−5
Tumors_11 | std | 4.0291 × 10^−2 | 4.4019 × 10^−2 | 4.8341 × 10^−2 | 4.3815 × 10^−2 | 2.7431 × 10^−2 | 2.2469 × 10^−2 | 2.7845 × 10^−2 | 1.7275 × 10^−2
 | avg | 0.0646 | 0.074911 | 0.071693 | 0.06889 | 0.0013903 | 0.0019768 | 0.0046557 | 0.00092955
Tumors_14 | std | 4.2144 × 10^−2 | 9.9535 × 10^−2 | 5.5160 × 10^−2 | 4.2749 × 10^−2 | 4.2056 × 10^−2 | 7.3598 × 10^−2 | 3.5842 × 10^−2 | 6.0899 × 10^−2
 | avg | 0.30311 | 0.26614 | 0.30706 | 0.28783 | 0.24017 | 0.22696 | 0.23548 | 0.22735
ARV | 5.9786 | 5.9786 | 6.2214 | 6.6214 | 6.6929 | 2.7036 | 2.7536 | 2.6036
Rank | 5 | 5 | 6 | 7 | 8 | 3 | 4 | 2
Table A16. Overall results of eight versions of BISMA according to S-shaped and V-shaped TFs in terms of average computational time.
Datasets | Metrics | BISMA_S1 | BISMA_S2 | BISMA_S3 | BISMA_S4 | BISMA_V1 | BISMA_V2 | BISMA_V3 | BISMA_V4
Colon | std | 1.2191 | 1.3052 | 2.4355 | 1.4757 | 1.1938 | 1.4909 | 1.2686 | 1.3287
 | avg | 85.9626 | 90.0505 | 121.3363 | 89.5739 | 94.0297 | 84.1583 | 82.0512 | 82.549
SRBCT | std | 1.5885 | 1.8773 | 2.8525 | 1.084 | 1.8508 | 2.45 | 3.0216 | 2.414
 | avg | 102.6595 | 105.9233 | 153.5824 | 106.7198 | 110.4927 | 101.3722 | 94.9202 | 98.0153
Leukemia | std | 4.6463 | 7.1863 | 8.4485 | 4.176 | 5.6477 | 8.9345 | 7.1624 | 5.6551
 | avg | 288.1026 | 369.0878 | 418.859 | 300.3361 | 312.7438 | 281.3737 | 263.277 | 262.0363
Brain_Tumor1 | std | 15.0486 | 5.518 | 5.9123 | 3.6483 | 6.5985 | 7.0206 | 5.2887 | 9.7095
 | avg | 257.1141 | 329.2853 | 355.1649 | 265.6545 | 268.0102 | 235.1687 | 226.2937 | 221.4106
Brain_Tumor2 | std | 26.0923 | 16.4663 | 5.829 | 5.9109 | 5.7821 | 6.9892 | 8.4514 | 4.4363
 | avg | 394.7483 | 557.1407 | 417.7936 | 408.0532 | 429.047 | 378.0446 | 403.66 | 366.1612
CNS | std | 18.5258 | 8.2571 | 4.9416 | 4.7855 | 5.2549 | 5.1788 | 6.3485 | 2.5764
 | avg | 282.115 | 399.2233 | 297.2468 | 292.9844 | 305.7291 | 270.9286 | 305.3575 | 257.4227
DLBCL | std | 13.4459 | 7.3986 | 3.7698 | 3.2965 | 6.6934 | 6.881 | 5.9037 | 6.3564
 | avg | 229.0604 | 318.173 | 239.9178 | 235.4501 | 243.1863 | 222.2096 | 206.4178 | 207.6545
Leukemia1 | std | 13.0915 | 7.1145 | 4.2194 | 3.5637 | 4.3786 | 5.3226 | 3.4246 | 4.1366
 | avg | 221.6625 | 306.661 | 230.06 | 226.9516 | 236.7801 | 206.948 | 201.3261 | 199.8014
Leukemia2 | std | 27.9557 | 27.8952 | 7.3565 | 6.2185 | 9.6514 | 10.0649 | 7.2691 | 9.5984
 | avg | 454.5811 | 626.5679 | 467.8857 | 467.3684 | 482.2521 | 424.8834 | 411.641 | 408.7297
Lung_Cancer | std | 40.0181 | 14.4133 | 21.3431 | 26.7837 | 47.3963 | 37.8825 | 48.654 | 32.9511
 | avg | 835.7816 | 1064.939 | 847.6348 | 828.0133 | 677.3208 | 558.4493 | 534.8904 | 521.5364
Prostate_Tumor | std | 25.1417 | 10.4573 | 6.7311 | 10.3808 | 19.9367 | 16.2087 | 12.0796 | 24.8174
 | avg | 470.1901 | 659.3352 | 485.7299 | 477.1534 | 464.5947 | 415.6169 | 390.0605 | 389.1298
Tumors_9 | std | 13.5588 | 8.8614 | 3.2316 | 4.0011 | 2.6109 | 4.0259 | 4.3268 | 3.487
 | avg | 231.0626 | 333.3597 | 240.6433 | 238.7621 | 246.7118 | 220.015 | 208.5856 | 206.8161
Tumors_11 | std | 39.8624 | 15.6572 | 18.8506 | 15.9373 | 46.4145 | 36.5902 | 20.4801 | 15.6274
 | avg | 744.1785 | 985.7713 | 758.463 | 752.3035 | 630.7326 | 555.8758 | 502.5768 | 483.1388
Tumors_14 | std | 73.1984 | 62.0491 | 69.1097 | 103.7124 | 77.7274 | 74.0669 | 49.1133 | 57.7599
 | avg | 1560.365 | 1901.44 | 1556.638 | 1541.604 | 1087.476 | 880.2872 | 760.1826 | 723.8812
ARV | 4.7 | 7.5143 | 6.4071 | 5.0571 | 5.8929 | 2.8643 | 2.1357 | 1.4286
Rank | 4 | 8 | 7 | 5 | 6 | 3 | 2 | 1
Table A17. Parameter settings.
Optimizers | Parameters | Value
bGWO | $a_{max}$ | 2
 | $a_{min}$ | 0
BPSO | Min inertia weight | 0.4
 | Max inertia weight | 0.9
 | $c_1$, $c_2$ | 0.2
bWOA | $a_{max}$ | 2
 | $a_{min}$ | 0
Table A18. Comparison of BISMA with other gene selection optimizers in terms of average number of the selected genes.
Datasets | Metrics | BISMA | BSMA | bGWO | BGSA | BPSO | bALO | BBA | BSSA | bWOA
Colon | std | 0.5164 | 29.5727 | 15.9753 | 23.2178 | 18.7901 | 26.9081 | 57.2076 | 413.9399 | 1.6499
 | avg | 1 | 46 | 153.5 | 769 | 899 | 876 | 818 | 424.5 | 2
SRBCT | std | 1.2649 | 20.5721 | 15.2567 | 28.0515 | 17.2321 | 21.7348 | 88.7612 | 234.9426 | 1.8974
 | avg | 3 | 33.5 | 192 | 898.5 | 1023 | 996 | 936 | 1073.5 | 4
Leukemia | std | 0.4216 | 421.9699 | 41.075 | 22.2264 | 31.8531 | 27.3595 | 180.2885 | 1254.8997 | 0.91894
 | avg | 1 | 367 | 91.5 | 3106 | 3354 | 3288 | 2850 | 3427 | 2
Brain_Tumor1 | std | 3.1429 | 78.3272 | 37.8001 | 45.6636 | 31.3739 | 42.0132 | 104.9288 | 1333.051 | 1.2649
 | avg | 3.5 | 65 | 631 | 2559 | 2766 | 2737 | 2449.5 | 2646.5 | 3
Brain_Tumor2 | std | 1.7029 | 240.6062 | 75.5373 | 55.0019 | 55.9691 | 46.9871 | 135.9838 | 2454.5883 | 1.1785
 | avg | 2.5 | 156 | 1148.5 | 4672.5 | 4914.5 | 4864.5 | 4209 | 2946.5 | 2.5
CNS | std | 0.31623 | 136.7067 | 42.7265 | 96.6304 | 35.9623 | 50.9117 | 198.0223 | 1551.1952 | 3.2335
 | avg | 1 | 87.5 | 852 | 3171 | 3386.5 | 3344.5 | 2985 | 3293 | 2
DLBCL | std | 0.42164 | 33.4865 | 23.7957 | 48.9182 | 24.6162 | 37.7601 | 156.4013 | 833.0272 | 0.99443
 | avg | 1 | 40.5 | 571.5 | 2329.5 | 2522.5 | 2489 | 2245 | 2625.5 | 2
Leukemia1 | std | 0.8165 | 25.3588 | 33.1832 | 39.1324 | 20.8017 | 31.6665 | 190.6413 | 1124.43 | 1.2649
 | avg | 2 | 405 | 50.5 | 2303 | 2473.5 | 2419 | 2132 | 2538.5 | 3.5
Leukemia2 | std | 1.2649 | 22.3617 | 46.4113 | 57.6102 | 51.1196 | 42.9973 | 252.5475 | 2534.8708 | 1.1972
 | avg | 2.5 | 551 | 245.5 | 5021.5 | 5320.5 | 5272.5 | 4592 | 5412.5 | 3
Lung_Cancer | std | 27.2472 | 40.5198 | 66.0041 | 77.9308 | 42.3663 | 48.9689 | 688.2611 | 2587.9333 | 13.898
 | avg | 10 | 172 | 1504 | 5750.5 | 6030 | 5947.5 | 5097.5 | 6092 | 5.5
Prostate_Tumor | std | 1.4181 | 234.6364 | 63.2583 | 109.3395 | 83.1836 | 39.8112 | 191.4855 | 2202.9629 | 1.792
 | avg | 2 | 181.5 | 1262.5 | 4772.5 | 5029 | 4955.5 | 4401.5 | 5041 | 3
Tumors_9 | std | 102.7665 | 812.1526 | 43.0834 | 71.0286 | 45.3878 | 37.3722 | 171.1166 | 1120.278 | 3.0258
 | avg | 8 | 174 | 674 | 2529 | 2732.5 | 2655.5 | 2376.5 | 2750 | 3
Tumors_11 | std | 231.5253 | 558.048 | 45.2396 | 142.4798 | 102.5773 | 88.8522 | 190.7012 | 1889.8508 | 113.9361
 | avg | 235.5 | 497 | 1596.5 | 5776.5 | 6080.5 | 5968.5 | 5281.5 | 6134.5 | 110.5
Tumors_14 | std | 681.7162 | 562.6438 | 127.5985 | 132.4234 | 80.2649 | 77.5818 | 187.68 | 3261.4366 | 664.218
 | avg | 682 | 1469 | 2382.5 | 7337.5 | 7401 | 7357.5 | 6349.5 | 7426.5 | 565
ARV | 1.4643 | 3.1357 | 4.1714 | 6.2643 | 8.175 | 7.5107 | 5.5286 | 7.1286 | 1.6214
Rank | 1 | 3 | 4 | 6 | 9 | 8 | 5 | 7 | 2
Table A19. Comparison of BISMA with other gene selection optimizers in terms of average error rate.
Datasets | Metrics | BISMA | BSMA | bGWO | BGSA | BPSO | bALO | BBA | BSSA | bWOA
Colon | std | 0.0000 | 0.0527 | 0.1162 | 0.1925 | 0.1229 | 0.2222 | 0.1592 | 0.1554 | 0.0000
 | avg | 0.0000 | 0.0000 | 0.0000 | 0.0833 | 0.1667 | 0.0833 | 0.2262 | 0.0714 | 0.0000
SRBCT | std | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0901 | 0.0000 | 0.0000
 | avg | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.1056 | 0.0000 | 0.0000
Leukemia | std | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0707 | 0.0000 | 0.0000
 | avg | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000
Brain_Tumor1 | std | 0.0316 | 0.0502 | 0.0560 | 0.0546 | 0.0564 | 0.0735 | 0.0881 | 0.0574 | 0.0351
 | avg | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0500 | 0.0000 | 0.1111 | 0.0000 | 0.0000
Brain_Tumor2 | std | 0.0000 | 0.0000 | 0.0777 | 0.0831 | 0.0866 | 0.1235 | 0.1454 | 0.1235 | 0.0000
 | avg | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.2083 | 0.0000 | 0.0000
CNS | std | 0.0000 | 0.0883 | 0.0703 | 0.1179 | 0.1194 | 0.0856 | 0.1315 | 0.1365 | 0.0000
 | avg | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0714 | 0.3333 | 0.0714 | 0.0000
DLBCL | std | 0.0000 | 0.0000 | 0.0000 | 0.0395 | 0.0395 | 0.0395 | 0.1111 | 0.0000 | 0.0000
 | avg | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0625 | 0.0000 | 0.0000
Leukemia1 | std | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0602 | 0.0000 | 0.0000
 | avg | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000
Leukemia2 | std | 0.0000 | 0.0000 | 0.0000 | 0.0395 | 0.0395 | 0.0527 | 0.0979 | 0.0452 | 0.0000
 | avg | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0625 | 0.0000 | 0.0000
Lung_Cancer | std | 0.0158 | 0.0206 | 0.0234 | 0.0341 | 0.0363 | 0.0248 | 0.0463 | 0.0359 | 0.0151
 | avg | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0476 | 0.0732 | 0.0238 | 0.0000
Prostate_Tumor | std | 0.0000 | 0.0483 | 0.0422 | 0.0701 | 0.0844 | 0.0699 | 0.1589 | 0.0787 | 0.0000
 | avg | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0500 | 0.3000 | 0.0955 | 0.0000
Tumors_9 | std | 0.0000 | 0.0703 | 0.0904 | 0.0000 | 0.0703 | 0.0811 | 0.2532 | 0.1309 | 0.0000
 | avg | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.3667 | 0.0000 | 0.0000
Tumors_11 | std | 0.0223 | 0.0614 | 0.0211 | 0.0570 | 0.0488 | 0.0508 | 0.0638 | 0.0586 | 0.0369
 | avg | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0263 | 0.0557 | 0.1144 | 0.0588 | 0.0263
Tumors_14 | std | 0.0599 | 0.0516 | 0.0603 | 0.0719 | 0.0368 | 0.0559 | 0.0818 | 0.1008 | 0.0682
 | avg | 0.2624 | 0.2808 | 0.1759 | 0.2028 | 0.2713 | 0.2379 | 0.3906 | 0.2583 | 0.2284
ARV | 4.0786 | 4.625 | 4.35 | 4.8393 | 5.0964 | 5.1571 | 7.4357 | 5.2857 | 4.1321
Rank | 1 | 4 | 3 | 5 | 6 | 7 | 9 | 8 | 2
Table A20. Comparison of BISMA with other gene selection optimizers in terms of average fitness.
Datasets | Metrics | BISMA | BSMA | bGWO | BGSA | BPSO | bALO | BBA | BSSA | bWOA
Colon | std | 1.2910 × 10^−5 | 5.0206 × 10^−2 | 1.1035 × 10^−1 | 1.8282 × 10^−1 | 1.1702 × 10^−1 | 2.1096 × 10^−1 | 1.3282 × 10^−1 | 1.4476 × 10^−1 | 4.1248 × 10^−5
 | avg | 2.5000 × 10^−5 | 1.1500 × 10^−3 | 4.3875 × 10^−3 | 9.8642 × 10^−2 | 1.8077 × 10^−1 | 1.0080 × 10^−1 | 1.7705 × 10^−1 | 8.0020 × 10^−2 | 5.0000 × 10^−5
SRBCT | std | 2.7403 × 10^−5 | 4.4567 × 10^−4 | 3.3052 × 10^−4 | 6.0770 × 10^−4 | 3.7331 × 10^−4 | 4.7086 × 10^−4 | 5.4394 × 10^−2 | 5.0897 × 10^−3 | 4.1104 × 10^−5
 | avg | 6.4991 × 10^−5 | 7.2574 × 10^−4 | 4.1594 × 10^−3 | 1.9465 × 10^−2 | 2.2162 × 10^−2 | 2.1577 × 10^−2 | 1.9757 × 10^−2 | 2.3256 × 10^−2 | 8.6655 × 10^−5
Leukemia | std | 2.9568 × 10^−6 | 1.5407 × 10^−4 | 2.8804 × 10^−4 | 1.5587 × 10^−4 | 2.2337 × 10^−4 | 1.9186 × 10^−4 | 3.8230 × 10^−2 | 8.8001 × 10^−3 | 6.4442 × 10^−6
 | avg | 7.0126 × 10^−6 | 2.5245 × 10^−4 | 5.5505 × 10^−3 | 2.1781 × 10^−2 | 2.3520 × 10^−2 | 2.3058 × 10^−2 | 1.6518 × 10^−2 | 2.4032 × 10^−2 | 1.4025 × 10^−5
Brain_Tumor1 | std | 3.0044 × 10^−2 | 4.7527 × 10^−2 | 5.3069 × 10^−2 | 5.1816 × 10^−2 | 5.3452 × 10^−2 | 6.9773 × 10^−2 | 6.3670 × 10^−2 | 4.9713 × 10^−2 | 3.3378 × 10^−2
 | avg | 2.9561 × 10^−5 | 9.4172 × 10^−4 | 5.6841 × 10^−3 | 2.2204 × 10^−2 | 7.1128 × 10^−2 | 2.3408 × 10^−2 | 1.2274 × 10^−1 | 2.4928 × 10^−2 | 2.5338 × 10^−5
Brain_Tumor2 | std | 8.2133 × 10^−6 | 1.1604 × 10^−3 | 7.3988 × 10^−2 | 7.9166 × 10^−2 | 8.2304 × 10^−2 | 1.1744 × 10^−1 | 1.3429 × 10^−1 | 1.2498 × 10^−1 | 5.6840 × 10^−6
 | avg | 1.2057 × 10^−5 | 7.5239 × 10^−4 | 5.5899 × 10^−3 | 2.2574 × 10^−2 | 2.3715 × 10^−2 | 2.3488 × 10^−2 | 2.3165 × 10^−2 | 1.4332 × 10^−2 | 1.2057 × 10^−5
CNS | std | 2.2179 × 10^−6 | 8.3687 × 10^−2 | 6.6773 × 10^−2 | 1.1206 × 10^−1 | 1.1335 × 10^−1 | 8.1427 × 10^−2 | 1.5272 × 10^−1 | 1.3596 × 10^−1 | 2.2679 × 10^−5
 | avg | 7.0136 × 10^−6 | 2.2373 × 10^−3 | 6.0843 × 10^−3 | 2.3320 × 10^−2 | 2.4165 × 10^−2 | 9.1574 × 10^−2 | 1.8163 × 10^−1 | 9.2401 × 10^−2 | 1.4027 × 10^−5
DLBCL | std | 3.8548 × 10^−6 | 3.0615 × 10^−4 | 2.1755 × 10^−4 | 3.7526 × 10^−2 | 3.7505 × 10^−2 | 3.7552 × 10^−2 | 6.1711 × 10^−2 | 7.6159 × 10^−3 | 9.0915 × 10^−6
 | avg | 9.1424 × 10^−6 | 3.7027 × 10^−4 | 5.2249 × 10^−3 | 2.1348 × 10^−2 | 2.3149 × 10^−2 | 2.2820 × 10^−2 | 1.9112 × 10^−2 | 2.4003 × 10^−2 | 1.8285 × 10^−5
Leukemia1 | std | 7.6638 × 10^−6 | 2.3802 × 10^−4 | 3.1146 × 10^−4 | 3.6730 × 10^−4 | 1.9525 × 10^−4 | 2.9723 × 10^−4 | 3.6838 × 10^−3 | 1.0554 × 10^−2 | 1.1873 × 10^−5
 | avg | 1.8772 × 10^−5 | 3.7545 × 10^−4 | 5.1671 × 10^−3 | 2.1616 × 10^−2 | 2.3217 × 10^−2 | 2.2705 × 10^−2 | 1.9378 × 10^−2 | 2.3827 × 10^−2 | 3.2852 × 10^−5
Leukemia2 | std | 5.6343 × 10^−6 | 9.9607 × 10^−5 | 2.0673 × 10^−4 | 3.7572 × 10^−2 | 3.7745 × 10^−2 | 4.9938 × 10^−2 | 5.3561 × 10^−2 | 4.6911 × 10^−2 | 5.3328 × 10^−6
 | avg | 1.1136 × 10^−5 | 2.4499 × 10^−4 | 5.5479 × 10^−3 | 2.2367 × 10^−2 | 2.3699 × 10^−2 | 2.3510 × 10^−2 | 1.9595 × 10^−2 | 2.4109 × 10^−2 | 1.3363 × 10^−5
Lung_Cancer | std | 1.5004 × 10^−2 | 1.9354 × 10^−2 | 2.2153 × 10^−2 | 3.2336 × 10^−2 | 3.4492 × 10^−2 | 2.3622 × 10^−2 | 3.1687 × 10^−2 | 4.1718 × 10^−2 | 1.4294 × 10^−2
 | avg | 5.1587 × 10^−5 | 1.1885 × 10^−3 | 6.1905 × 10^−3 | 2.3317 × 10^−2 | 2.4093 × 10^−2 | 6.8815 × 10^−2 | 6.3121 × 10^−2 | 4.6873 × 10^−2 | 2.5794 × 10^−5
Prostate_Tumor | std | 6.7472 × 10^−6 | 4.6054 × 10^−2 | 4.0020 × 10^−2 | 6.6461 × 10^−2 | 8.0247 × 10^−2 | 6.6461 × 10^−2 | 1.1415 × 10^−1 | 7.7648 × 10^−2 | 8.5258 × 10^−6
 | avg | 9.5157 × 10^−6 | 2.0126 × 10^−3 | 6.1828 × 10^−3 | 2.3454 × 10^−2 | 2.4241 × 10^−2 | 7.1027 × 10^−2 | 1.0987 × 10^−1 | 9.3377 × 10^−2 | 1.4273 × 10^−5
Tumors_9 | std | 8.9737 × 10^−4 | 6.7888 × 10^−2 | 8.5899 × 10^−2 | 6.2023 × 10^−4 | 6.6747 × 10^−2 | 7.7087 × 10^−2 | 1.9970 × 10^−1 | 1.2797 × 10^−1 | 2.6422 × 10^−5
 | avg | 6.9857 × 10^−5 | 1.5194 × 10^−3 | 5.8854 × 10^−3 | 2.2083 × 10^−2 | 2.4162 × 10^−2 | 2.3411 × 10^−2 | 2.3214 × 10^−2 | 2.4703 × 10^−2 | 2.6196 × 10^−5
Tumors_11 | std | 2.1319 × 10^−2 | 5.7197 × 10^−2 | 1.9912 × 10^−2 | 5.4119 × 10^−2 | 4.6341 × 10^−2 | 4.8083 × 10^−2 | 5.9891 × 10^−2 | 5.7273 × 10^−2 | 3.4943 × 10^−2
 | avg | 1.3923 × 10^−3 | 6.6026 × 10^−3 | 6.4171 × 10^−3 | 2.3604 × 10^−2 | 4.9392 × 10^−2 | 7.6599 × 10^−2 | 1.2055 × 10^−1 | 6.7590 × 10^−2 | 2.6123 × 10^−2
Tumors_14 | std | 5.5130 × 10^−2 | 5.3345 × 10^−2 | 5.7317 × 10^−2 | 6.8032 × 10^−2 | 3.5029 × 10^−2 | 5.2960 × 10^−2 | 7.0141 × 10^−2 | 9.5849 × 10^−2 | 6.4709 × 10^−2
 | avg | 2.5180 × 10^−1 | 2.7576 × 10^−1 | 1.7527 × 10^−1 | 2.1745 × 10^−1 | 2.8210 × 10^−1 | 2.5053 × 10^−1 | 3.1859 × 10^−1 | 2.7012 × 10^−1 | 2.2178 × 10^−1
ARV | 1.6964 | 1.6964 | 3.7571 | 4.2286 | 5.9 | 7.1286 | 6.8143 | 6.8071 | 6.6429
Rank | 1 | 1 | 3 | 4 | 5 | 9 | 8 | 7 | 6
Table A21. Comparison of BISMA with other gene selection optimizers in terms of average computational time.
Datasets | Metrics | BISMA | BSMA | bGWO | BGSA | BPSO | bALO | BBA | BSSA | bWOA
Colon | std | 0.93215 | 0.55407 | 0.098098 | 0.11018 | 0.076472 | 0.069336 | 0.19183 | 0.23189 | 0.4158
 | avg | 79.1933 | 35.9194 | 14.2079 | 7.2215 | 4.2471 | 4.1295 | 13.9446 | 23.2622 | 26.0384
SRBCT | std | 2.1619 | 0.51149 | 0.15702 | 0.11402 | 0.13534 | 0.16851 | 0.25667 | 0.3163 | 0.37599
 | avg | 93.6061 | 41.244 | 16.2856 | 8.8596 | 5.4073 | 5.2877 | 16.3119 | 27.1393 | 29.9446
Leukemia | std | 7.2303 | 1.8022 | 0.3074 | 0.42745 | 0.27052 | 0.35454 | 0.51794 | 0.93854 | 1.2515
 | avg | 256.4992 | 122.7257 | 44.8501 | 23.5815 | 12.5313 | 12.2151 | 45.0914 | 79.2949 | 89.7865
Brain_Tumor1 | std | 6.9684 | 1.0527 | 0.278 | 0.45035 | 0.47769 | 0.32493 | 0.49618 | 1.069 | 1.2276
 | avg | 220.5351 | 103.2569 | 38.7039 | 21.6636 | 13.2493 | 12.7106 | 40.4449 | 68.416 | 74.6861
Brain_Tumor2 | std | 4.4718 | 2.0085 | 0.40876 | 0.4924 | 0.4705 | 0.34669 | 0.57963 | 1.4683 | 1.9697
 | avg | 354.245 | 176.7797 | 63.4666 | 29.9924 | 13.3176 | 12.3337 | 60.1049 | 110.4912 | 131.5993
CNS | std | 4.7455 | 1.4788 | 0.51911 | 0.26787 | 0.19856 | 0.2563 | 0.6056 | 0.94437 | 1.1606
 | avg | 248.504 | 122.8625 | 44.5969 | 22.0423 | 10.8899 | 10.3101 | 43.5037 | 77.9093 | 89.9202
DLBCL | std | 5.3919 | 0.93684 | 0.23064 | 0.16063 | 0.29357 | 0.16353 | 0.47267 | 0.61759 | 1.1042
 | avg | 200.6234 | 94.9785 | 35.3048 | 18.7326 | 10.6001 | 10.3269 | 35.7383 | 61.9793 | 69.2286
Leukemia1 | std | 4.3623 | 1.1156 | 0.31638 | 0.35829 | 0.27141 | 0.19161 | 0.52207 | 0.86822 | 0.85614
 | avg | 194.386 | 92.0658 | 34.0793 | 17.7135 | 9.9582 | 9.5621 | 34.4391 | 60.048 | 66.7794
Leukemia2 | std | 7.374 | 3.0726 | 0.49382 | 0.61819 | 0.5241 | 0.50114 | 0.61738 | 1.9491 | 2.7261
 | avg | 399.2129 | 192.557 | 69.7935 | 36.9327 | 19.1668 | 18.0316 | 69.7835 | 123.9545 | 144.0557
Lung_Cancer | std | 41.6799 | 4.5389 | 1.0736 | 3.7226 | 4.3019 | 3.6112 | 4.3848 | 2.089 | 2.7099
 | avg | 515.1456 | 233.6435 | 99.2504 | 93.316 | 77.8452 | 75.9999 | 127.7409 | 190.8128 | 167.6969
Prostate_Tumor | std | 17.4231 | 2.7956 | 0.56853 | 0.55518 | 0.82953 | 0.5596 | 1.1379 | 1.3348 | 1.8037
 | avg | 383.2946 | 183.4421 | 68.6138 | 42.0024 | 25.8001 | 25.2016 | 72.0151 | 122.9783 | 133.436
Tumors_9 | std | 2.9367 | 1.1039 | 0.25397 | 0.356 | 0.42194 | 0.23273 | 0.39294 | 0.77615 | 1.0106
 | avg | 203.6074 | 98.814 | 36.386 | 18.1879 | 9.237 | 8.8404 | 35.6073 | 62.8321 | 71.9579
Tumors_11 | std | 11.6164 | 3.5569 | 1.045 | 2.7062 | 2.8401 | 3.2718 | 3.7758 | 1.9198 | 1.6793
 | avg | 465.5284 | 226.3375 | 93.2904 | 78.8383 | 61.7486 | 60.114 | 113.7264 | 175.511 | 163.5641
Tumors_14 | std | 78.3081 | 9.9354 | 2.1979 | 13.6846 | 7.0692 | 10.4616 | 8.8472 | 4.4011 | 5.6434
 | avg | 664.032 | 309.3361 | 159.1758 | 202.5748 | 176.6571 | 176.1249 | 235.2403 | 308.2242 | 212.441
ARV | 9 | 7.95 | 4.1857 | 3.1 | 1.8929 | 1.2571 | 4.6643 | 6.2643 | 6.6857
Rank | 9 | 8 | 4 | 3 | 2 | 1 | 5 | 6 | 7

References

  1. Ye, M.; Wang, W.; Yao, C.; Fan, R.; Wang, P. Gene Selection Method for Microarray Data Classification Using Particle Swarm Optimization and Neighborhood Rough Set. Curr. Bioinform. 2019, 14, 422–431. [Google Scholar] [CrossRef]
  2. Wang, S.; Kong, W.; Zeng, W.; Hong, X. Hybrid Binary Imperialist Competition Algorithm and Tabu Search Approach for Feature Selection Using Gene Expression Data. Biomed Res. Int. 2016, 2016, 9721713. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Uthayan, K. A novel microarray gene selection and classification using intelligent dynamic grey wolf optimization. Genetika 2019, 51, 805–828. [Google Scholar] [CrossRef] [Green Version]
  4. Shukla, A.; Singh, P.; Vardhan, M. Gene selection for cancer types classification using novel hybrid metaheuristics approach. Swarm Evol. Comput. 2020, 54, 100661. [Google Scholar] [CrossRef]
  5. Sharma, A.; Rani, R. C-HMOSHSSA: Gene selection for cancer classification using multi-objective meta-heuristic and machine learning methods. Comput. Methods Programs Biomed. 2019, 178, 219–235. [Google Scholar] [CrossRef]
  6. Mohamad, M.; Omatu, S.; Deris, S.; Yoshioka, M.; Abdullah, A.; Ibrahim, Z. An enhancement of binary particle swarm optimization for gene selection in classifying cancer classes. Algorithms Mol. Biol. 2013, 8, 15. [Google Scholar] [CrossRef] [Green Version]
  7. Mabu, A.; Prasad, R.; Yadav, R. Gene Expression Dataset Classification Using Artificial Neural Network and Clustering-Based Feature Selection. Int. J. Swarm Intell. Res. 2020, 11, 65–86. [Google Scholar] [CrossRef]
  8. Jin, C.; Jin, S. Gene selection approach based on improved swarm intelligent optimisation algorithm for tumour classification. Iet Syst. Biol. 2016, 10, 107–115. [Google Scholar] [CrossRef]
  9. Dabba, A.; Tari, A.; Meftali, S.; Mokhtari, R. Gene selection and classification of microarray data method based on mutual information and moth flame algorithm. Expert Syst. Appl. 2021, 166, 114012. [Google Scholar] [CrossRef]
  10. Dabba, A.; Tari, A.; Meftali, S. Hybridization of Moth flame optimization algorithm and quantum computing for gene selection in microarray data. J. Ambient. Intell. Humaniz. Comput. 2021, 12, 2731–2750. [Google Scholar] [CrossRef]
  11. Xu, X.; Li, J.; Chen, H.-L. Enhanced support vector machine using parallel particle swarm optimization. In Proceedings of the 2014 10th International Conference on Natural Computation (ICNC), Xiamen, China, 19–21 August 2014. [Google Scholar]
  12. Alshamlan, H.; Badr, G.; Alohali, Y. mRMR-ABC: A Hybrid Gene Selection Algorithm for Cancer Classification Using Microarray Gene Expression Profiling. Biomed Res. Int. 2015, 2015, 604910. [Google Scholar] [CrossRef] [Green Version]
  13. Alshamlan, H.; Badr, G.; Alohali, Y. Genetic Bee Colony (GBC) algorithm: A new gene selection method for microarray cancer classification. Comput. Biol. Chem. 2015, 56, 49–60. [Google Scholar] [CrossRef]
  14. Liu, B.; Tian, M.; Zhang, C.; Li, X. Discrete Biogeography Based Optimization for Feature Selection in Molecular Signatures. Mol. Inform. 2015, 34, 197–215. [Google Scholar] [CrossRef]
  15. Best, M.; Sol, N.; In’t Veld, S.G.J.G.; Vancura, A.; Muller, M.; Niemeijer, A.N.; Fejes, A.V.; Tjon Kon Fat, L.A.; Huis In’t Veld, A.E.; Leurs, C.; et al. Swarm Intelligence-Enhanced Detection of Non-Small-Cell Lung Cancer Using Tumor-Educated Platelets. Cancer Cell 2017, 32, 238. [Google Scholar] [CrossRef]
  16. Best, M.; In’t Veld, S.; Sol, N.; Wurdinger, T. RNA sequencing and swarm intelligence-enhanced classification algorithm development for blood-based disease diagnostics using spliced blood platelet RNA. Nat. Protoc. 2019, 14, 1206–1234. [Google Scholar] [CrossRef]
  17. Ang, J.; Mirzal, A.; Haron, H.; Hamed, H. Supervised, Unsupervised, and Semi-Supervised Feature Selection: A Review on Gene Selection. IEEE-Acm Trans. Comput. Biol. Bioinform. 2016, 13, 971–989. [Google Scholar] [CrossRef]
  18. Sun, Y.; Lu, C.; Li, X. The Cross-Entropy Based Multi-Filter Ensemble Method for Gene Selection. Genes 2018, 9, 258. [Google Scholar] [CrossRef] [Green Version]
  19. Mundra, P.; Rajapakse, J. SVM-RFE With MRMR Filter for Gene Selection. IEEE Trans. Nanobioscience 2010, 9, 31–37. [Google Scholar] [CrossRef]
  20. Li, J.; Su, L.; Pang, Z. A Filter Feature Selection Method Based on MFA Score and Redundancy Excluding and It∙s Application to Tumor Gene Expression Data Analysis. Interdiscip. Sci.-Comput. Life Sci. 2015, 7, 391–396. [Google Scholar] [CrossRef]
  21. Kim, Y.; Yoon, Y. A genetic filter for cancer classification on gene expression data. Bio-Med. Mater. Eng. 2015, 26, S1993–S2002. [Google Scholar] [CrossRef] [Green Version]
  22. Chandrashekar, G.; Sahin, F. A survey on feature selection methods. Comput. Electr. Eng. 2014, 40, 16–28. [Google Scholar] [CrossRef]
  23. Bolon-Canedo, V.; Sanchez-Marono, N.; Alonso-Betanzos, A. A review of feature selection methods on synthetic data. Knowl. Inf. Syst. 2013, 34, 483–519. [Google Scholar] [CrossRef]
  24. Lee, S.; Xu, Z.; Li, T.; Yang, Y. A novel bagging C4.5 algorithm based on wrapper feature selection for supporting wise clinical decision making. J. Biomed. Inform. 2018, 78, 144–155. [Google Scholar]
  25. Al-Thanoon, N.; Qasim, O.; Algamal, Z. Tuning parameter estimation in SCAD-support vector machine using firefly algorithm with application in gene selection and cancer classification. Comput. Biol. Med. 2018, 103, 262–268. [Google Scholar] [CrossRef]
  26. Yang, A.; Cao, T.; Li, R.; Liao, B. A Hybrid Gene Selection Method for Cancer Classification Based on Clustering Algorithm and Euclidean Distance. J. Comput. Theor. Nanosci. 2012, 9, 611–615. [Google Scholar] [CrossRef]
  27. Wang, L.; Han, B. Hybrid feature selection method for gene expression analysis. Electron. Lett. 2014, 50, 1269–1270. [Google Scholar]
  28. Sungheetha, A.; Sharma, R. Extreme Learning Machine and Fuzzy K-Nearest Neighbour Based Hybrid Gene Selection Technique for Cancer Classification. J. Med. Imaging Health Inform. 2016, 6, 1652–1656. [Google Scholar] [CrossRef]
  29. Lu, H.; Chen, J.; Yan, K.; Jin, Q.; Xue, Y.; Gao, Z. A hybrid feature selection algorithm for gene expression data classification. Neurocomputing 2017, 256, 56–62. [Google Scholar] [CrossRef]
  30. Cao, B.; Zhao, J.; Lv, Z.; Yang, P. Diversified personalized recommendation optimization based on mobile data. IEEE Trans. Intell. Transp. Syst. 2020, 22, 2133–2139. [Google Scholar] [CrossRef]
  31. Cao, B.; Fan, S.; Zhao, J.; Tian, S.; Zheng, Z.; Yan, Y.; Yang, P. Large-scale many-objective deployment optimization of edge servers. IEEE Trans. Intell. Transp. Syst. 2021, 22, 3841–3849. [Google Scholar] [CrossRef]
  32. Zhang, M.; Chen, Y.; Susilo, W. PPO-CPQ: A privacy-preserving optimization of clinical pathway query for e-healthcare systems. IEEE Internet Things J. 2020, 7, 10660–10672. [Google Scholar] [CrossRef]
  33. Wang, L.; Wang, Y.; Chang, Q. Feature selection methods for big data bioinformatics: A survey from the search perspective. Methods 2016, 111, 21–31. [Google Scholar] [CrossRef]
  34. Prasartvit, T.; Banharnsakun, A.; Kaewkamnerdpong, B.; Achalakul, T. Reducing bioinformatics data dimension with ABC-kNN. Neurocomputing 2013, 116, 367–381. [Google Scholar] [CrossRef]
  35. Li, S.; Chen, H.; Wang, M.; Heidari, A.; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst. -Int. J. Escience 2020, 111, 300–323. [Google Scholar] [CrossRef]
  36. Mirjalili, S.; Dong, J.S.; Lewis, A. Nature-Inspired Optimizers: Theories, Literature Reviews and Applications; Springer: Berlin/Heidelberg, Germany, 2019; Volume 811. [Google Scholar]
  37. Chen, X.; Tianfield, H.; Mei, C.; Du, W.; Liu, G. Biogeography-based learning particle swarm optimization. Soft Comput. 2017, 21, 7519–7541. [Google Scholar] [CrossRef]
  38. Liang, J.; Qin, A.; Suganthan, P.; Baskar, S. Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Trans. Evol. Comput. 2006, 10, 281–295. [Google Scholar] [CrossRef]
  39. Cai, Z.; Gu, J.; Luo, J.; Zhang, Q.; Chen, H.; Pan, Z.; Li, Y.; Li, C. Evolving an optimal kernel extreme learning machine by using an enhanced grey wolf optimization strategy. Expert Syst. Appl. 2019, 138, 112814. [Google Scholar] [CrossRef]
  40. Reddy, K.; Panwar, L.; Panigrahi, B.; Kumar, R. Binary whale optimization algorithm: A new metaheuristic approach for profit-based unit commitment problems in competitive electricity markets. Eng. Optim. 2019, 51, 369–389. [Google Scholar] [CrossRef]
  41. Kouadri, R.; Slimani, L.; Bouktir, T. Slime mould algorithm for practical optimal power flow solutions incorporating stochastic wind power and static var compensator device. Electr. Eng. Electromechanics 2020, 45–54. [Google Scholar] [CrossRef]
  42. Mostafa, M.; Rezk, H.; Aly, M.; Ahmed, E. A new strategy based on slime mould algorithm to extract the optimal model parameters of solar PV panel. Sustain. Energy Technol. Assess. 2020, 42, 100849. [Google Scholar] [CrossRef]
  43. Kumar, C.; Raj, T.; Premkumar, M. A new stochastic slime mould optimization algorithm for the estimation of solar photovoltaic cell parameters. Optik 2020, 223, 165277. [Google Scholar] [CrossRef]
  44. Abdel-Basset, M.; Chang, V.; Mohamed, R. HSMA_WOA: A hybrid novel Slime mould algorithm with whale optimization algorithm for tackling the image segmentation problem of chest X-ray images. Appl. Soft Comput. 2020, 95, 106642. [Google Scholar] [CrossRef] [PubMed]
  45. Sun, K.; Jia, H.; Li, Y.; Jiang, Z. Hybrid improved slime mould algorithm with adaptive beta hill climbing for numerical optimization. J. Intell. Fuzzy Syst. 2021, 40, 1667–1679. [Google Scholar] [CrossRef]
  46. Zubaidi, S.; Abdulkareem, I.H.; Hashim, K.S.; Al-Bugharbee, H.; Ridha, H.M.; Gharghan, S.K.; Al-Qaim, F.F.; Muradov, M.; Kot, P.; Al-Khaddar, R. Hybridised Artificial Neural Network Model with Slime Mould Algorithm: A Novel Methodology for Prediction of Urban Stochastic Water Demand. Water 2020, 12, 2692. [Google Scholar] [CrossRef]
  47. Zhang, Y.; Liu, R.; Heidari, A.A.; Wang, X.; Chen, Y.; Wang, M.; Chen, H. Towards augmented kernel extreme learning models for bankruptcy prediction: Algorithmic behavior and comprehensive analysis. Neurocomputing 2021, 430, 185–212. [Google Scholar] [CrossRef]
  48. Chen, Z.; Liu, W. An Efficient Parameter Adaptive Support Vector Regression Using K-Means Clustering and Chaotic Slime Mould Algorithm. IEEE Access 2020, 8, 156851–156862. [Google Scholar] [CrossRef]
  49. Baliarsingh, S.; Vipsita, S. Chaotic emperor penguin optimised extreme learning machine for microarray cancer classification. Iet Syst. Biol. 2020, 14, 85–95. [Google Scholar] [CrossRef]
  50. Banu, P.; Azar, A.; Inbarani, H. Fuzzy firefly clustering for tumour and cancer analysis. Int. J. Model. Identif. Control. 2017, 27, 92–103. [Google Scholar] [CrossRef]
  51. Chen, L.; Li, J.; Chang, M. Cancer Diagnosis and Disease Gene Identification via Statistical Machine Learning. Curr. Bioinform. 2020, 15, 956–962. [Google Scholar] [CrossRef]
  52. Mahendran, N.; Vincent, P.; Srinivasan, K.; Chang, C. Machine Learning Based Computational Gene Selection Models: A Survey, Performance Evaluation, Open Issues, and Future Research Directions. Front. Genet. 2020, 11, 603808. [Google Scholar] [CrossRef]
  53. Tan, M.; Chang, S.; Cheah, P.; Yap, H. Integrative machine learning analysis of multiple gene expression profiles in cervical cancer. Peerj 2018, 6, e5285. [Google Scholar] [CrossRef] [PubMed]
  54. Zhou, Y.; Lin, J.; Guo, H. Feature subset selection via an improved discretization-based particle swarm optimization. Appl. Soft Comput. 2021, 98, 106794. [Google Scholar] [CrossRef]
  55. Sadeghian, Z.; Akbari, E.; Nematzadeh, H. A hybrid feature selection method based on information theory and binary butterfly optimization algorithm. Eng. Appl. Artif. Intell. 2021, 97, 104079. [Google Scholar] [CrossRef]
  56. Coleto-Alcudia, V.; Vega-Rodriguez, M. Artificial Bee Colony algorithm based on Dominance (ABCD) for a hybrid gene selection method. Knowl.-Based Syst. 2020, 205, 106323. [Google Scholar] [CrossRef]
  57. Lee, J.; Choi, I.; Jun, C. An efficient multivariate feature ranking method for gene selection in high-dimensional microarray data. Expert Syst. Appl. 2021, 166, 113971. [Google Scholar] [CrossRef]
  58. Khani, E.; Mahmoodian, H. Phase diagram and ridge logistic regression in stable gene selection. Biocybern. Biomed. Eng. 2020, 40, 965–976. [Google Scholar] [CrossRef]
  59. Chen, K.; Wang, K.; Wang, K.; Angelia, M. Applying particle swarm optimization-based decision tree classifier for cancer classification on gene expression data. Appl. Soft Comput. 2014, 24, 773–780. [Google Scholar] [CrossRef]
  60. Mohamad, M.; Omatu, S.; Deris, S.; Yoshioka, M. A Modified Binary Particle Swarm Optimization for Selecting the Small Subset of Informative Genes From Gene Expression Data. IEEE Trans. Inf. Technol. Biomed. 2011, 15, 813–822. [Google Scholar] [CrossRef] [Green Version]
61. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  62. Yang, Y.; Chen, H.; Heidari, A.A.; Gandomi, A.H. Hunger games search: Visions, conception, implementation, deep analysis, perspectives, and towards performance shifts. Expert Syst. Appl. 2021, 177, 114864. [Google Scholar] [CrossRef]
  63. Ahmadianfar, I.; Heidari, A.A.; Gandomi, A.H.; Chu, X.; Chen, H. RUN Beyond the Metaphor: An Efficient Optimization Algorithm Based on Runge Kutta Method. Expert Syst. Appl. 2021, 181, 115079. [Google Scholar] [CrossRef]
  64. Tu, J.; Chen, H.; Wang, M.; Gandomi, A.H. The Colony Predation Algorithm. J. Bionic Eng. 2021, 18, 674–710. [Google Scholar] [CrossRef]
  65. Ahmadianfar, I.; Heidari, A.A.; Noshadian, S.; Chen, H.; Gandomi, A.H. INFO: An Efficient Optimization Algorithm based on Weighted Mean of Vectors. Expert Syst. Appl. 2022, 195, 116516. [Google Scholar] [CrossRef]
  66. Hussien, A.G.; Heidari, A.A.; Ye, X.; Liang, G.; Chen, H.; Pan, Z. Boosting whale optimization with evolution strategy and Gaussian random walks: An image segmentation method. Eng. Comput. 2022. [Google Scholar] [CrossRef]
  67. Yu, H.; Song, J.; Chen, C.; Heidari, A.A.; Liu, J.; Chen, H.; Zaguia, A.; Mafarja, M. Image segmentation of Leaf Spot Diseases on Maize using multi-stage Cauchy-enabled grey wolf algorithm. Eng. Appl. Artif. Intell. 2022, 109, 104653. [Google Scholar] [CrossRef]
  68. Lai, X.; Zhou, Y. Analysis of multiobjective evolutionary algorithms on the biobjective traveling salesman problem (1, 2). Multimed. Tools Appl. 2020, 79, 30839–30860. [Google Scholar] [CrossRef]
  69. Hu, J.; Chen, H.; Heidari, A.A.; Wang, M.; Zhang, X.; Chen, Y.; Pan, Z. Orthogonal learning covariance matrix for defects of grey wolf optimizer: Insights, balance, diversity, and feature selection. Knowl.-Based Syst. 2021, 213, 106684. [Google Scholar] [CrossRef]
  70. Hu, J.; Gui, W.; Heidari, A.A.; Cai, Z.; Liang, G.; Chen, H.; Pan, Z. Dispersed foraging slime mould algorithm: Continuous and binary variants for global optimization and wrapper-based feature selection. Knowl.-Based Syst. 2022, 237, 107761. [Google Scholar] [CrossRef]
  71. Chen, H.; Wang, M.; Zhao, X. A multi-strategy enhanced sine cosine algorithm for global optimization and constrained practical engineering problems. Appl. Math. Comput. 2020, 369, 124872. [Google Scholar] [CrossRef]
  72. Yu, H.; Qiao, S.; Heidari, A.A.; Bi, C.; Chen, H. Individual Disturbance and Attraction Repulsion Strategy Enhanced Seagull Optimization for Engineering Design. Mathematics 2022, 10, 276. [Google Scholar] [CrossRef]
  73. Yu, H.; Yuan, K.; Li, W.; Zhao, N.; Chen, W.; Huang, C.; Chen, H.; Wang, M. Improved Butterfly Optimizer-Configured Extreme Learning Machine for Fault Diagnosis. Complexity 2021, 2021, 6315010. [Google Scholar] [CrossRef]
  74. Han, X.; Han, Y.; Chen, Q.; Li, J.; Sang, H.; Liu, Y.; Pan, Q.; Nojima, Y. Distributed Flow Shop Scheduling with Sequence-Dependent Setup Times Using an Improved Iterated Greedy Algorithm. Complex Syst. Modeling Simul. 2021, 1, 198–217. [Google Scholar] [CrossRef]
  75. Gao, D.; Wang, G.-G.; Pedrycz, W. Solving fuzzy job-shop scheduling problem using DE algorithm improved by a selection mechanism. IEEE Trans. Fuzzy Syst. 2020, 28, 3265–3275. [Google Scholar] [CrossRef]
  76. Wang, G.-G.; Gao, D.; Pedrycz, W. Solving multi-objective fuzzy job-shop scheduling problem by a hybrid adaptive differential evolution algorithm. IEEE Trans. Ind. Inform. 2022. [Google Scholar] [CrossRef]
  77. Deng, W.; Zhang, X.; Zhou, Y.; Liu, Y.; Zhou, X.; Chen, H.; Zhao, H. An enhanced fast non-dominated solution sorting genetic algorithm for multi-objective problems. Inf. Sci. 2022, 585, 441–453. [Google Scholar] [CrossRef]
  78. Hua, Y.; Liu, Q.; Hao, K.; Jin, Y. A Survey of Evolutionary Algorithms for Multi-Objective Optimization Problems With Irregular Pareto Fronts. IEEE/CAA J. Autom. Sin. 2021, 8, 303–318. [Google Scholar] [CrossRef]
  79. Li, Q.; Chen, H.; Huang, H.; Zhao, X.; Cai, Z.; Tong, C.; Liu, W.; Tian, X. An Enhanced Grey Wolf Optimization Based Feature Selection Wrapped Kernel Extreme Learning Machine for Medical Diagnosis. Comput. Math. Methods Med. 2017, 2017, 9512741. [Google Scholar] [CrossRef]
  80. Cai, Z.; Gu, J.; Wen, C.; Zhao, D.; Huang, C.; Huang, H.; Tong, C.; Li, J.; Chen, H. An Intelligent Parkinson’s Disease Diagnostic System Based on a Chaotic Bacterial Foraging Optimization Enhanced Fuzzy KNN Approach. Comput. Math. Methods Med. 2018, 2018, 2396952. [Google Scholar] [CrossRef]
  81. Dong, R.; Chen, H.; Heidari, A.A.; Turabieh, H.; Mafarja, M.; Wang, S. Boosted kernel search: Framework, analysis and case studies on the economic emission dispatch problem. Knowl.-Based Syst. 2021, 233, 107529. [Google Scholar] [CrossRef]
  82. He, Z.; Yen, G.G.; Ding, J. Knee-based decision making and visualization in many-objective optimization. IEEE Trans. Evol. Comput. 2020, 25, 292–306. [Google Scholar] [CrossRef]
  83. He, Z.; Yen, G.G.; Lv, J. Evolutionary multiobjective optimization with robustness enhancement. IEEE Trans. Evol. Comput. 2019, 24, 494–507. [Google Scholar] [CrossRef]
  84. Ye, X.; Liu, W.; Li, H.; Wang, M.; Chi, C.; Liang, G. Modified Whale Optimization Algorithm for Solar Cell and PV Module Parameter Identification. Complexity 2021, 2021, 8878686. [Google Scholar] [CrossRef]
  85. Chen, H.L.; Yang, B.; Wang, S.J.; Wang, G.; Li, H.Z.; Liu, W.B. Towards an optimal support vector machine classifier using a parallel particle swarm optimization strategy. Appl. Math. Comput. 2014, 239, 180–197. [Google Scholar] [CrossRef]
  86. Wu, G.; Mallipeddi, R.; Suganthan, P.; Wang, R.; Chen, H. Differential evolution with multi-population based ensemble of mutation strategies. Inf. Sci. 2016, 329, 329–345. [Google Scholar]
  87. Piotrowski, A. L-SHADE optimization algorithms with population-wide inertia. Inf. Sci. 2018, 468, 117–141. [Google Scholar] [CrossRef]
88. Chen, W.; Zhang, J.; Lin, Y.; Chen, N.; Zhan, Z.H.; Chung, H.S.H.; Li, Y.; Shi, Y.H. Particle Swarm Optimization with an Aging Leader and Challengers. IEEE Trans. Evol. Comput. 2013, 17, 241–258. [Google Scholar] [CrossRef]
  89. Lin, A.; Wu, Q.; Heidari, A.A.; Xu, Y.; Chen, H.; Geng, W.; Li, C. Predicting Intentions of Students for Master Programs Using a Chaos-Induced Sine Cosine-Based Fuzzy K-Nearest Neighbor Classifier. IEEE Access 2019, 7, 67235–67248. [Google Scholar] [CrossRef]
  90. Heidari, A.; Aljarah, I.; Faris, H.; Chen, H.; Luo, J.; Mirjalili, S. An enhanced associative learning-based exploratory whale optimizer for global optimization. Neural Comput. Appl. 2020, 32, 5185–5211. [Google Scholar] [CrossRef]
  91. Heidari, A.; Abbaspour, R.; Chen, H. Efficient boosted grey wolf optimizers for global search and kernel extreme learning machine training. Appl. Soft Comput. 2019, 81, 105521. [Google Scholar] [CrossRef]
  92. Wang, S.; Guo, H.; Zhang, S.; Barton, D.; Brooks, P. Analysis and prediction of double-carriage train wheel wear based on SIMPACK and neural networks. Adv. Mech. Eng. 2022, 14, 16878132221078491. [Google Scholar] [CrossRef]
  93. Lv, Z.; Li, Y.; Feng, H.; Lv, H. Deep learning for security in digital twins of cooperative intelligent transportation systems. IEEE Trans. Intell. Transp. Syst. 2021, 1–10. [Google Scholar] [CrossRef]
  94. Lv, Z.; Chen, D.; Feng, H.; Zhu, H.; Lv, H. Digital twins in unmanned aerial vehicles for rapid medical resource delivery in epidemics. IEEE Trans. Intell. Transp. Syst. 2021, 1–9. [Google Scholar] [CrossRef]
  95. Zou, Q.; Xing, P.; Wei, L.; Liu, B. Gene2vec: Gene subsequence embedding for prediction of mammalian N6-methyladenosine sites from mRNA. RNA 2019, 25, 205–218. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  96. Faramarzi, A.; Heidarinejad, M.; Stephens, B.; Mirjalili, S. Equilibrium optimizer: A novel optimization algorithm. Knowl.-Based Syst. 2020, 191, 105190. [Google Scholar] [CrossRef]
  97. Zhou, L.; Fan, Q.; Huang, X.; Liu, Y. Weak and strong convergence analysis of Elman neural networks via weight decay regularization. Optimization 2022, 1–23. [Google Scholar] [CrossRef]
  98. Fan, Q.; Zhang, Z.; Huang, X. Parameter Conjugate Gradient with Secant Equation Based Elman Neural Network and its Convergence Analysis. Adv. Theory Simul. 2022, 2200047. [Google Scholar] [CrossRef]
  99. Mirjalili, S.; Lewis, A. S-shaped versus V-shaped transfer functions for binary Particle Swarm Optimization. Swarm Evol. Comput. 2013, 9, 1–14. [Google Scholar] [CrossRef]
  100. Xu, Q.; Zeng, Y.; Tang, W.; Peng, W.; Xia, T.; Li, Z.; Teng, F.; Li, W.; Guo, J. Multi-task joint learning model for segmenting and classifying tongue images using a deep neural network. IEEE J. Biomed. Health Inform. 2020, 24, 2481–2489. [Google Scholar] [CrossRef]
  101. Li, J.; Xu, K.; Chaudhuri, S.; Yumer, E.; Zhang, H.; Guibas, L. Grass: Generative recursive autoencoders for shape structures. ACM Trans. Graph. 2017, 36, 1–14. [Google Scholar] [CrossRef]
  102. Zhao, H.; Zhu, C.; Xu, X.; Huang, H.; Xu, K. Learning practically feasible policies for online 3D bin packing. Sci. China Inf. Sci. 2022, 65, 1–17. [Google Scholar] [CrossRef]
103. Emary, E.; Zawbaa, H.; Hassanien, A. Binary grey wolf optimization approaches for feature selection. Neurocomputing 2016, 172, 371–381. [Google Scholar] [CrossRef]
  104. Rashedi, E.; Nezamabadi-pour, H.; Saryazdi, S. BGSA: Binary gravitational search algorithm. Nat. Comput. 2010, 9, 727–745. [Google Scholar] [CrossRef]
  105. Emary, E.; Zawbaa, H.; Hassanien, A. Binary ant lion approaches for feature selection. Neurocomputing 2016, 213, 54–65. [Google Scholar] [CrossRef]
  106. Mirjalili, S.; Mirjalili, S.; Yang, X. Binary bat algorithm. Neural Comput. Appl. 2014, 25, 663–681. [Google Scholar] [CrossRef]
  107. Reddy, K.; Panwar, L.; Panigrahi, B.; Kumar, R. A New Binary Variant of Sine-Cosine Algorithm: Development and Application to Solve Profit-Based Unit Commitment Problem. Arab. J. Sci. Eng. 2018, 43, 4041–4056. [Google Scholar] [CrossRef]
  108. Mafarja, M.; Mirjalili, S. Whale optimization approaches for wrapper feature selection. Appl. Soft Comput. 2018, 62, 441–453. [Google Scholar] [CrossRef]
  109. Örnek, B.N.; Aydemir, S.B.; Düzenli, T.; Özak, B. A novel version of slime mould algorithm for global optimization and real world engineering problems: Enhanced slime mould algorithm. Math. Comput. Simul. 2022, 198, 253–288. [Google Scholar] [CrossRef]
  110. Gürses, D.; Bureerat, S.; Sait, S.M.; Yıldız, A.R. Comparison of the arithmetic optimization algorithm, the slime mold optimization algorithm, the marine predators algorithm, the salp swarm algorithm for real-world engineering applications. Mater. Test. 2021, 63, 448–452. [Google Scholar] [CrossRef]
  111. Cai, Z.; Xiong, Z.; Wan, K.; Xu, Y.; Xu, F. A node selecting approach for traffic network based on artificial slime mold. IEEE Access 2020, 8, 8436–8448. [Google Scholar] [CrossRef]
  112. Li, D.; Zhang, S.; Ma, X. Dynamic Module Detection in Temporal Attributed Networks of cancers. IEEE/ACM Trans. Comput. Biol. Bioinform. 2021, 19, 2219–2230. [Google Scholar] [CrossRef]
  113. Ma, X.; Sun, P.G.; Gong, M. An integrative framework of heterogeneous genomic data for cancer dynamic modules based on matrix decomposition. IEEE/ACM Trans. Comput. Biol. Bioinform. 2020, 19, 305–316. [Google Scholar] [CrossRef]
  114. Huang, L.; Yang, Y.; Chen, H.; Zhang, Y.; Wang, Z.; He, L. Context-aware road travel time estimation by coupled tensor decomposition based on trajectory data. Knowl.-Based Syst. 2022, 245, 108596. [Google Scholar] [CrossRef]
  115. Wu, Z.; Li, R.; Xie, J.; Zhou, Z.; Guo, J.; Xu, X. A user sensitive subject protection approach for book search service. J. Assoc. Inf. Sci. Technol. 2020, 71, 183–195. [Google Scholar] [CrossRef]
  116. Wu, Z.; Shen, S.; Lian, X.; Su, X.; Chen, E. A dummy-based user privacy protection approach for text information retrieval. Knowl.-Based Syst. 2020, 195, 105679. [Google Scholar] [CrossRef]
  117. Wu, Z.; Shen, S.; Zhou, H.; Li, H.; Lu, C.; Zou, D. An effective approach for the protection of user commodity viewing privacy in e-commerce website. Knowl.-Based Syst. 2021, 220, 106952. [Google Scholar] [CrossRef]
  118. Li, Y.; Li, X.X.; Hong, J.J.; Wang, Y.X.; Fu, J.B.; Yang, H.; Yu, C.Y.; Li, F.C.; Hu, J.; Xue, W.W.; et al. Clinical trials, progression-speed differentiating features and swiftness rule of the innovative targets of first-in-class drugs. Brief. Bioinform. 2020, 21, 649–662. [Google Scholar] [CrossRef] [Green Version]
  119. Zhu, F.; Li, X.; Yang, S.; Chen, Y. Clinical success of drug targets prospectively predicted by in silico study. Trends Pharmacol. Sci. 2018, 39, 229–231. [Google Scholar] [CrossRef]
  120. Cao, X.; Sun, X.; Xu, Z.; Zeng, B.; Guan, X. Hydrogen-Based Networked Microgrids Planning Through Two-Stage Stochastic Programming with Mixed-Integer Conic Recourse. IEEE Trans. Autom. Sci. Eng. 2021, 1–14. [Google Scholar] [CrossRef]
  121. Zhang, X.; Wang, J.; Wang, T.; Jiang, R. Hierarchical feature fusion with mixed convolution attention for single image dehazing. IEEE Trans. Circuits Syst. Video Technol. 2021, 32, 510–522. [Google Scholar] [CrossRef]
  122. Wu, Z.; Li, G.; Shen, S.; Cui, Z.; Lian, X.; Xu, G. Constructing dummy query sequences to protect location privacy and query privacy in location-based services. World Wide Web 2021, 24, 25–49. [Google Scholar] [CrossRef]
  123. Wu, Z.; Wang, R.; Li, Q.; Lian, X.; Xu, G. A location privacy-preserving system based on query range cover-up for location-based services. IEEE Trans. Veh. Technol. 2020, 69, 5244–5254. [Google Scholar] [CrossRef]
  124. Cao, X.; Wang, J.; Zeng, B. A Study on the Strong Duality of Second-Order Conic Relaxation of AC Optimal Power Flow in Radial Networks. IEEE Trans. Power Syst. 2022, 37, 443–455. [Google Scholar] [CrossRef]
  125. Tian, Y.; Su, X.; Su, Y.; Zhang, X. EMODMI: A multi-objective optimization based method to identify disease modules. IEEE Trans. Emerg. Top. Comput. Intell. 2020, 5, 570–582. [Google Scholar] [CrossRef]
  126. Su, Y.; Li, S.; Zheng, C.; Zhang, X. A heuristic algorithm for identifying molecular signatures in cancer. IEEE Trans. NanoBioscience 2019, 19, 132–141. [Google Scholar] [CrossRef]
127. Wang, D.; Liang, Y.; Xu, D.; Feng, X.; Guan, R. A content-based recommender system for computer science publications. Knowl.-Based Syst. 2018, 157, 1–9. [Google Scholar] [CrossRef]
  128. Li, J.; Chen, C.; Chen, H.; Tong, C. Towards Context-aware Social Recommendation via Individual Trust. Knowl.-Based Syst. 2017, 127, 58–66. [Google Scholar] [CrossRef]
  129. Li, J.; Lin, J. A probability distribution detection based hybrid ensemble QoS prediction approach. Inf. Sci. 2020, 519, 289–305. [Google Scholar] [CrossRef]
  130. Li, J.; Zheng, X.-L.; Chen, S.-T.; Song, W.-W.; Chen, D.-R. An efficient and reliable approach for quality-of-service-aware service composition. Inf. Sci. 2014, 269, 238–254. [Google Scholar] [CrossRef]
  131. Qiu, S.; Zhao, H.; Jiang, N.; Wang, Z.; Liu, L.; An, Y.; Zhao, H.; Miao, X.; Liu, R.; Fortino, G. Multi-sensor information fusion based on machine learning for real applications in human activity recognition: State-of-the-art and research challenges. Inf. Fusion 2022, 80, 241–265. [Google Scholar] [CrossRef]
132. Zhang, X.; Fan, C.; Xiao, Z.; Zhao, L.; Chen, H.; Chang, X. Random Reconstructed Unpaired Image-to-Image Translation. IEEE Trans. Ind. Inform. 2022. [Google Scholar] [CrossRef]
Figure 1. A brief description of SMA.
Figure 2. The framework of the proposed ISMA.
Figure 3. Convergence curves of the SMA variants and the original SMA and DE algorithms on twelve functions.
Figure 4. Convergence curves of the ISMA and the other advanced algorithms on twelve functions.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
