Article

Tool for Predicting College Student Career Decisions: An Enhanced Support Vector Machine Framework

1
The Student Affairs Office, Wenzhou University, Wenzhou 325035, China
2
Department of Information Technology, Wenzhou Polytechnic, Wenzhou 325035, China
3
College of Computer Science and Artificial Intelligence, Wenzhou University, Wenzhou 325035, China
*
Authors to whom correspondence should be addressed.
Submission received: 8 April 2022 / Revised: 27 April 2022 / Accepted: 29 April 2022 / Published: 9 May 2022
(This article belongs to the Special Issue Soft Computing Application to Engineering Design)

Abstract

The goal of this research is to offer an effective intelligent model for forecasting college students' career decisions, in order to provide a useful reference for career decisions and for policy formulation by relevant departments. The proposed prediction model is based on a support vector machine (SVM) tuned by an enhanced butterfly optimization algorithm with a communication mechanism and a Gaussian bare-bones mechanism (CBBOA). To obtain a better set of parameters and feature subsets, we first added a communication mechanism to BOA to improve its global search capability and balance its exploration and exploitation trends. A Gaussian bare-bones mechanism was then added to increase the population diversity of BOA and its ability to jump out of local optima. The optimal SVM model (CBBOA-SVM) was then developed to predict the career decisions of college students based on the parameters and feature subsets optimized by CBBOA. To verify the effectiveness of CBBOA, we compared it with several advanced algorithms on all benchmark functions of CEC2014. Simulation results demonstrate that the performance of CBBOA is indeed more comprehensive. Meanwhile, comparisons between CBBOA-SVM and other machine learning approaches for career decision prediction were carried out, and the findings demonstrate that CBBOA-SVM achieves better classification and more stable performance. As a result, it is plausible to conclude that CBBOA-SVM can serve as an effective tool for predicting college student career decisions.

1. Introduction

With the advancement of technology and the development of society, the world today has become more challenging and uncertain. It can be said that we are now in the VUCA era, characterized by volatility, uncertainty, complexity, and ambiguity. The frequent "black swan events" of recent years and the COVID-19 pandemic that has swept the world are two prominent examples [1]. In such an uncertain VUCA era, it is crucial for everyone to find their own positioning and future direction of development. As a special group, college students are the backbone of China's future society and the group responsible for realizing China's dream of great rejuvenation. Thus, strong career development ability is a requirement of their comprehensive quality and professionalism and also reflects their learning achievements during their college years. In September 2019, the Ministry of Education issued "the Opinions on Deepening the Reform of Undergraduate Education and Teaching to Comprehensively Improve the Quality of Talent Training", proposing to "deepen the reform of the education and teaching system", "develop personalized training programs and academic career plans" for college students, and "build a professional setting management system oriented to economic and social development and students' career development needs". The career development of college students is closely related to their academic and professional careers, as well as to the results of undergraduate education and teaching reform and the quality of talent cultivation, which are valued at the national level.
In recent years, especially since the Ministry of Education issued the notice of "Teaching Requirements of Career Development and Career Guidance Course for College Students", career planning education has been in full swing in colleges and universities, and the career development of college students has received attention from the state, society, colleges and universities, and scientific research institutions at all levels. There is much research on the career development of college students, but the existing research mainly focuses on career education and guidance and on career theory and application, while empirical research and model construction for college students' career development remain underexplored; in particular, there is still much room for combining the latest theoretical research results with the characteristics of Chinese local students. Since 2018, with the entry of "post-00" college students into colleges and universities, "post-00" students have come to account for half of the student population. "Post-00" college students have a strong sense of autonomy, reflected in their desire to choose their own learning style, major, and life circle according to their own interests. They have a strong sense of self-awareness and self-identity, reflected in their courage to express and insist on their own opinions; they have pragmatic and rational life goals and realization paths, reflected in their belief that success mainly depends on personal effort. After sorting out these characteristics, it is found that Self-Determination Theory (SDT), currently a focus of academic circles, fits both the characteristics of college students as the research target and the characteristics of the times.
Self-determination theory was first proposed in the 1980s by two well-known American psychologists, Edward L. Deci and Richard M. Ryan, as a cognitive-motivational explanation of self-determined human action [2]. Proponents of self-determination theory hold that people are active creatures who possess an inbuilt capacity for self-determination and psychological growth. This potential leads people to engage in interest-oriented behaviors that are conducive to the development of their abilities, and this innate motivation for self-determination constitutes an intrinsic motivation for human behavior. After several decades of research and development, self-determination theory has gradually formed a relatively complete theoretical system on human motivation and personality, which has been widely applied in the fields of organizational management, sports, psychological medicine, and educational counseling [3].
“Autonomy needs” are about having the psychological freedom to do things of one’s own choice, “Competence needs” are about having control over one’s environment and growing as a person, and “Belonging needs”, also called relationship needs, are about having a sense of connection with other people. These three needs are important for people to grow, internalize, and be happy [4]. The main assumption of this theory is that when the three basic psychological needs of autonomy, competence, and relationship are met, people will be more willing and able to participate in activities, which will lead to more sustained and high-quality behaviors and better behavioral outcomes as well as better physical and mental health for people. At the same time, we find that the three basic psychological needs are very individualized and, interestingly, can exist widely across cultures and situations. Through a literature review, the core assumptions of self-determination theory are consistent with the characteristics of college students; therefore, it is feasible to discuss the construction of a career development model for college students based on self-determination theory.
Until now, many studies have been conducted to better investigate and discuss the career development of college students. Using interview data from product development interns at a single engineering business, Powers et al. [5] contributed insights into the particular abilities that interns describe as gaining in their internship and identified linkages between school-and-work learning. Kim et al. [6] used a sample of 420 South Korean college students to analyze the cultural validity of the family impact scale in order to determine the degree to which family played a role in college students' career development within collectivistic societies. Kiselev et al. [7] addressed the social constructivism foundations of machine learning approaches in career advising, as well as the relevance of social networks in psychological research. Chung et al. [8] employed random forests in machine learning to identify and assist students who are at risk of dropping out. Luo et al. [9] looked at how stereotyped attitudes about STEM occupations influenced STEM self-efficacy and STEM career-related outcome expectancies, as well as how these constructs predicted STEM career desire in upper primary pupils. Nauta et al. [10] looked at the effects of interpersonal interactions on gay, lesbian, bisexual, and heterosexual college students' job decisions. Park et al. [11] investigated the impacts of a future time perspective on job selections, which comprised three sessions: opportunity, value, and connectivity. By collecting 558 completed questionnaires, Lee et al. [12] investigated the influence of several significant professional decision-making elements (i.e., advisers, industry mentors, parents, faculty members, and social media) on students.
Therefore, in order to discuss the construction of a career development model for college students based on self-determination theory, this paper proposes a support vector machine (SVM) combined with an improved butterfly optimization algorithm (BOA), named CBBOA, which adds two mechanisms to BOA. First, a communication mechanism (CM) was added, which enhances the exploitation ability and improves the convergence accuracy of the original BOA. We also introduced a Gaussian bare-bones mechanism into the original BOA; its mutation mechanism can increase the diversity of the population and avoid falling into local optima. In addition, in view of the shortcomings of SVM, we propose a new CBBOA-SVM model, which can substantially improve the classification accuracy of the original SVM by optimizing its parameters. In order to verify the effectiveness of the proposed CBBOA, we conducted a series of experiments based on benchmark functions. Simulation results illustrate that the algorithm performs better than the original BOA. To better study the career decision factors of college students, comparative experiments between CBBOA-SVM and other algorithms were also conducted. The results show that CBBOA-SVM produces more accurate classification results as well as greater stability on the four indicators studied when compared with all other methods.
The following are some of its most significant contributions:
  • An enhanced BOA (CBBOA) is proposed, in which a communication mechanism is used to boost exploitation ability and convergence accuracy, and a Gaussian bare-bones mechanism is utilized to increase the diversity of the population and the capacity to avoid falling into local optima.
  • A new CBBOA-SVM model with feature selection is developed to predict the future career decisions of college students, which can not only aid jobless college students in selecting suitable occupations but also help the government's macro-management of the college graduate employment market.
  • The performance of CBBOA is experimentally verified by comparing it with several high-quality algorithms, which shows that CBBOA outperforms its peers.
  • To further validate the performance of CBBOA-SVM with feature selection, we compared it with five other similar methods; the results indicate that it performs better than the others and can be used to study the impact of career decisions.
The remainder of the paper is structured as follows. The proposed CBBOA and the CBBOA-SVM model are described in detail in Section 2 and Section 3, respectively. Section 4 introduces the data source and simulation settings. The experimental results of CBBOA on the benchmark functions and of CBBOA-SVM on the real-world dataset are discussed in Section 5. Section 6 is devoted to a discussion of the improved algorithm and its implications. Finally, a concluding section provides summaries and suggestions for future work.

2. Proposed CBBOA

Compared with the original BOA, CBBOA adds two mechanisms: the communication mechanism and Gaussian bare-bones. The two mechanisms and the main flow of the proposed CBBOA are described in detail in the remainder of this section.

2.1. Communication Mechanism (CM)

The communication mechanism (CM) is inspired by DE and was proposed in recent work [13]. CM selects an individual in the population and lets it communicate with the two best individuals in the population, thus generating the position of a new individual. This update enhances the exploitation capability. The CM update formula is as follows:
X_i^{t+1} = a X_i^t + (1 - a)(S_{(1)} + S_{(2)})/2  (1)
where a is a random number in the range [0, 1]; X_i^t represents the i-th individual in the population at the t-th iteration; and S_{(1)} and S_{(2)} represent the two best individuals in the population.
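As a concrete illustration, Equation (1) can be vectorized over the whole population as follows. This is a sketch, not the authors' implementation; all names and the toy population are illustrative:

```python
import numpy as np

def communication_mechanism(X, fitness):
    # CM per Equation (1): blend each individual with the mean of the
    # two best individuals; 'a' is drawn fresh for every individual.
    order = np.argsort(fitness)            # ascending: best first (minimization)
    mid = (X[order[0]] + X[order[1]]) / 2  # midpoint (S(1) + S(2)) / 2
    a = np.random.rand(X.shape[0], 1)      # random a in [0, 1) per individual
    return a * X + (1 - a) * mid

X = np.random.uniform(-10, 10, (5, 3))     # toy population: 5 individuals, 3 dims
fit = (X ** 2).sum(axis=1)                 # sphere fitness (minimization)
V = communication_mechanism(X, fit)
```

Because a lies in [0, 1], each new component is a convex combination of the individual and the midpoint of the two best solutions, which is what pulls the population toward promising regions.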

2.2. Gaussian Bare-Bones

Gaussian bare-bones has been widely used in other algorithms. For example, Kennedy et al. [14] proposed bare-bones particle swarm optimization (BBPSO), and Omran et al. [15] proposed bare-bones differential evolution (BBDE) through this mechanism. The application of Gaussian bare-bones in these algorithms has, of course, been adapted and improved. In this paper, one branch of the Gaussian bare-bones update adopts a new Gaussian variation strategy proposed by Wang et al. [16]. The formula of the Gaussian variation strategy is as follows:
x_{i,j}^{t+1} = N(μ, σ)  (2)
where N(μ, σ) denotes a random draw from a Gaussian distribution with mean μ = (x_{best,j} + x_{i,j})/2 and standard deviation σ = |x_{best,j} − x_{i,j}|.
In this paper, in addition to the Gaussian variation mentioned above, the other branch adopts a mutation strategy based on DE. The two mutation methods can effectively increase population diversity and prevent the algorithm from falling into a local optimum. The complete Gaussian bare-bones update formula is as follows:
x_{i,j}^{t+1} = N(μ, σ),  if rand < CR;  x_{i,j}^{t+1} = x_{i1,j}^t + k (x_{i2,j}^t − x_{i3,j}^t),  if rand ≥ CR  (3)
where CR is the predetermined mutation probability; x_{i1,j}^t, x_{i2,j}^t, and x_{i3,j}^t are the j-th components of the i1-th, i2-th, and i3-th individuals, respectively; and i1, i2, and i3 are three distinct random integers in [1, N] (i1 ≠ i2 ≠ i3).
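The two-branch update in Equation (3) can be sketched in vectorized form. The values of CR and the DE scale factor k below are assumptions for illustration, not the paper's settings:

```python
import numpy as np

def bare_bones_update(X, best, CR=0.5, k=0.5):
    # Equation (3): with probability CR a component is redrawn from a
    # Gaussian centered between the best and current individual; otherwise
    # a DE-style difference mutation is applied.
    N, dim = X.shape
    mu = (best + X) / 2                    # mean  (x_best + x_i) / 2
    sigma = np.abs(best - X)               # std   |x_best - x_i|
    gauss = np.random.normal(mu, sigma)
    # three mutually distinct random indices i1, i2, i3 for each individual
    idx = np.array([np.random.choice(N, 3, replace=False) for _ in range(N)])
    de = X[idx[:, 0]] + k * (X[idx[:, 1]] - X[idx[:, 2]])
    mask = np.random.rand(N, dim) < CR
    return np.where(mask, gauss, de)

X = np.random.uniform(-5, 5, (6, 4))       # toy population
S = bare_bones_update(X, best=X[0])
```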

2.3. Description of CBBOA

In this section, we propose CBBOA based on the above two mechanisms and introduce its entire process. CBBOA adds the two mechanisms to BOA. After updating the population, CBBOA enters the communication mechanism stage: Equation (1) is used to generate a temporary individual V, and Equation (3) is then used to generate a temporary population S. Afterwards, each trial solution is compared with the original and replaces it if it is better. Finally, the optimal solution is updated, completing one full iteration. Figure 1 shows the concrete process.
The time complexity of CBBOA is governed by the maximum number of iterations (T), the problem dimension (dim), and the population size (N). The overall time complexity is O(CBBOA) = O(initialization) + O(fitness calculation) + T × (O(position update by BOA) + O(communication mechanism) + O(Gaussian bare-bones) + O(update of the optimal solution)). Initializing the population has a time complexity of O(N × dim). Computing fitness takes O(N); the BOA position update takes O(N); CM takes O(N); Gaussian bare-bones takes O(N × dim); and updating the optimal solution takes O(N). Therefore, the final complexity is O(CBBOA) = O(N × dim) + O(N) + T × (3O(N) + O(N × dim)), i.e., O(T × N × dim).

3. Proposed CBBOA-SVM Method

Two factors affect the classification accuracy of SVM: the setting of the hyperparameters and the selection of the feature set. The hyperparameters include the penalty factor C and the kernel parameter γ, which greatly affect the classification accuracy, while using the entire feature set or a randomly selected subset results in low efficiency and accuracy. Based on this, we propose CBBOA-SVM, which optimizes the SVM by searching for the optimal hyperparameters as well as an optimal subset of features. We then apply the model to two realistic scenarios to test its superiority. Figure 2 depicts the framework of CBBOA-SVM, which consists of two key components. In the right half, the classification accuracy (ACC) of the optimized SVM is obtained using 10-fold cross-validation, with nine folds used for training and the remaining fold for testing.
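The fitness that guides the optimizer can be sketched as the 10-fold CV accuracy of an RBF-SVM evaluated on a candidate (C, γ, feature mask) triple. The snippet below uses scikit-learn and a stand-in dataset purely for illustration; the paper's own data and exact fitness definition may differ:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

def svm_fitness(C, gamma, mask, X, y):
    # Mean 10-fold CV accuracy of an RBF-SVM with candidate (C, gamma)
    # on the feature subset encoded by the boolean mask.
    cols = np.flatnonzero(mask)
    if cols.size == 0:
        return 0.0                      # an empty feature subset is invalid
    clf = SVC(C=C, gamma=gamma, kernel="rbf")
    return cross_val_score(clf, X[:, cols], y, cv=10).mean()

X, y = load_breast_cancer(return_X_y=True)                # stand-in dataset
X = MinMaxScaler(feature_range=(-1, 1)).fit_transform(X)  # scale to [-1, 1]
mask = np.ones(X.shape[1], dtype=bool)                    # all features selected
acc = svm_fitness(C=1.0, gamma=0.1, mask=mask, X=X, y=y)
```

In the full framework, CBBOA would repeatedly call such a fitness function with different candidate triples and keep the best-scoring one.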

4. Experiments

4.1. Collection of Data

In this paper, a random web survey was conducted using Questionnaire Star to randomly select students from general undergraduate colleges and universities (comprehensive category), Sino-foreign cooperative colleges and universities, higher vocational colleges and universities (comprehensive category), and general undergraduate colleges and universities (specialist category). A total of 557 questionnaires were collected. Taking into account the volume of questions and the speed of answering, those with an answer time of 120 seconds or more were classified as valid questionnaires, totaling 445, of which 310 were male and 247 were female, and the distribution of majors included science and technology, medicine and health, literature, history and philosophy, arts, and sports. By examining the gender, education, grade, major, place of origin of the respondents, as well as the Chinese version of the Basic Psychological Needs Satisfaction Scale (nine attributes) using self-determination theory, and the Student Career Construct Questionnaire (25 attributes) (see Table 1), the importance of these attributes and their intrinsic connections were explored, and a prediction model for college students’ career decisions was established on this basis.

4.2. Experimental Setup

The experiments were carried out in MATLAB R2018. Prior to classification, the data were scaled to [−1, 1].
In computational science, the fair setting of experiments plays a very important role in the comparison of methods, such as molecular signature identification [17,18], drug discovery [19,20], and recommender systems [21,22,23,24]. The data were split using the k-fold cross-validation (CV) method, with k set to 10 [25,26].

5. Experimental Results

5.1. Benchmark Function Validation

This section mainly introduces and discusses related experiments to verify the performance of CBBOA. In order to verify the performance of CBBOA, we tested it with other advanced algorithms on the selected benchmark functions. In addition, in order to explore the impact of the mechanisms on CBBOA, a balance and diversity analysis experiment was also added.

5.1.1. Test Conditions and Benchmark Functions

It is important to set fair test conditions for the comparison experiments [27,28,29,30,31]. All experiments were tested in the same environment: the dimension, population size, and number of independent runs were all set to 30, the maximum number of evaluations was 300,000, and CEC2014 was chosen as the test function set [32]. Table A1 shows the selected 30 benchmark functions, which have been used in many recent studies [33,34]. In the table, the last three columns indicate the dimensionality, the upper and lower bounds of the search space, and the optimal solution of the corresponding function. The functions are divided into unimodal, simple multimodal, hybrid, and composition functions; different types of functions were chosen to evaluate the performance of the algorithm more comprehensively.

5.1.2. Comparison with the Excellent Algorithms

In order to test the effectiveness of the proposed CBBOA, several high-quality algorithms were selected for comparison with CBBOA in this section. The tested algorithms include BMWOA [35], ACWOA [36], COSCA [37], CESCA [38], CGPSO [39], ALCPSO [40], GL25 [41], DECLS [42], DE [43], GWO [44], and BOA. Table A2 records the experimental results of CBBOA and the above algorithms on the CEC14 functions.
In the table, Avg represents the average result, and Std represents the standard deviation. The best result on each function has been bolded. CBBOA ranks first on nine functions. On F1~F16, CBBOA ranks in the top three on 15 functions; on F17~F30, which have more complicated structures, CBBOA ranks in the top three on nine functions. In contrast, the original BOA performed well only on F23, F24, and F25, and ranked near the bottom on the other functions. This shows that the improvement to BOA is effective: CBBOA achieves good results on functions of different kinds, demonstrating comprehensive and versatile performance.
In order to analyze the experimental results more comprehensively, we used the Wilcoxon signed-rank test [45] to analyze the experimental results. In the Wilcoxon signed-rank test, when the p-value is less than 0.05, it indicates that this algorithm has a significant improvement compared with another algorithm in statistics.
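A minimal example of such a test, using scipy with synthetic per-run results (the numbers below are made up purely to demonstrate the procedure), might look like this:

```python
import numpy as np
from scipy.stats import wilcoxon

# Synthetic per-run errors for two algorithms over 30 independent runs;
# in practice these would be the recorded results on one benchmark function.
rng = np.random.default_rng(0)
cbboa_runs = rng.normal(1.0, 0.1, 30)             # hypothetical CBBOA errors
boa_runs = cbboa_runs + rng.normal(0.5, 0.1, 30)  # hypothetical worse BOA errors
stat, p = wilcoxon(cbboa_runs, boa_runs)          # paired signed-rank test
# a p-value below 0.05 indicates a statistically significant difference
```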
Table A3 shows the p-value of CBBOA compared with other algorithms. Values greater than or equal to 0.05 are indicated in bold. From the table, we can see that CBBOA has improved significantly over CESCA in all functions. Compared with BMWOA, it can be found that only one function has a test value not less than 0.05. Compared with the original BOA, CBBOA has significant improvements in functions other than F24~F27. This once again proves that the performance of CBBOA is superior and more comprehensive.
The convergence curves of this experiment can be seen in Figure 3. In F1, F2, F6, and F17, the convergence curve of CBBOA has obvious advantages. In F9, algorithms such as ALCPSO and GL25 stabilize in the early stage of iteration, while CBBOA continues to decline. This shows that CBBOA does not easily fall into a local optimum, which is where CBBOA has improved over BOA.

5.1.3. Mechanism Comparison Experiment

We conducted a mechanism comparison experiment to compare the effects of the two mechanisms added in CBBOA. To isolate the effect of each mechanism, we set up algorithms that use only a single mechanism; hence, the following four algorithms were tested: CBBOA, BOABB, BOACM, and BOA. BOABB adds only Gaussian bare-bones to the original BOA; BOACM adds only the communication mechanism; CBBOA adds both mechanisms; BOA is the original version. The four algorithms were tested on the same 30 benchmark functions, and Table A4 records the experimental results. As before, the best solution for each function is bolded. Among the 30 functions, CBBOA ranks first on 19. In addition to comparing Avg, comparing the Std values shows that CBBOA is more stable than the other algorithms. This indicates that the combination of the two mechanisms exerts a better effect.
Figure 4 shows the convergence curves of the above experiment. In F1 and F11, we can clearly see that CBBOA obtained a better solution. The optimal solutions of the four algorithms are relatively similar in F24 and F25; however, the convergence speed of CBBOA is faster than that of BOABB and BOA on these two functions. These analyses all show that the combination of the two mechanisms helps BOA achieve better performance.

5.1.4. Qualitative Analysis

The purpose of this part is to undertake a qualitative study of CBBOA on the CEC14 functions. The findings for CBBOA and BOA are shown in Figure 5. The figure shows five columns of data, from left to right: the three-dimensional distribution, the two-dimensional distribution, and the one-dimensional distribution of the search trajectory of CBBOA in the multidimensional space, the variation of the average fitness, and the convergence curve. In Figure 5b, the red dot shows the position of the optimal solution, and the black dots represent the locations searched by CBBOA. The fact that the black dots are dispersed over the whole search plane indicates that CBBOA is capable of traversing the solution space to the greatest extent conceivable, and the black dots are densest near the optimal solution, indicating that CBBOA can identify the target region and perform additional exploitation there. The trajectory curve in Figure 5c fluctuates greatly in the early period and tends to stabilize later; the fluctuation indicates that the algorithm searches extensively in the early period, and when it finds the target area, the trajectory becomes stable. In Figure 5d, the average fitness decreases during the iteration and drops to a low value by the mid-term, indicating that CBBOA has a good convergence speed. In Figure 5e, the convergence curve of CBBOA is lower than that of BOA, showing that the quality of the solutions found by CBBOA is better.
Figure 6 shows the results of the balance analysis of CBBOA and BOA, tested on the same functions as the diversity analysis (Figure 7). Each graph contains three curves: red indicates the exploration behavior of the algorithm, blue indicates the exploitation behavior, and green is the increment–decrement curve, which describes the trend of the red and blue curves. When the green curve rises, exploration dominates; when it falls, exploitation dominates. From the figure, we can see that the two behaviors are at the same level when the increment–decrement curve reaches its maximum.
From the selected graphs, we can see that the added mechanisms have a great influence on the balance of the original BOA. The original BOA maintains a high exploration and low exploitation trend in functions. However, from the proportion of the two behaviors, we can see that the proportion of BOA exploration behavior is too high. For example, in F10, exploration accounts for more than 90%. This may lead to BOA not being able to get a high-quality solution. In contrast, the exploitation behavior of CBBOA accounts for a higher proportion. It means that it spends most of its time exploiting the target area.
Figure 7 is the result of the diversity analysis. From the figure we can see that the diversity curves are all decreasing curves. The reason is that the algorithm randomly generates the population at the beginning, so the diversity at the beginning is large. In the process of algorithm iteration, the continuous narrowing of the search range makes the population diversity continue to decrease. As can be seen from Figure 7, the population diversity of BOA is maintained at a high value in multiple functions. This shows that BOA has been kept in a large search range and cannot determine the target area, which makes the algorithm sometimes unable to find high-quality solutions. The diversity of CBBOA can maintain a steady decline, indicating that it has determined the region where the optimal solution is located and further developed.
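The text does not specify the exact diversity metric behind these curves, but a common choice is the mean Euclidean distance of individuals to the population centroid, sketched here with illustrative populations:

```python
import numpy as np

def population_diversity(X):
    # mean Euclidean distance of individuals to the population centroid
    centroid = X.mean(axis=0)
    return float(np.linalg.norm(X - centroid, axis=1).mean())

early = np.random.uniform(-100, 100, (30, 10))  # spread-out initial population
late = np.random.uniform(-1, 1, (30, 10))       # contracted late population
```

Under this metric, a randomly initialized population yields high diversity, and the value falls as the search contracts toward the optimal region, matching the decreasing curves in Figure 7.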

5.2. Predicting Results of Employment Stability

In this subsection, to investigate the impact of career decisions, we evaluated CBBOA-SVM with feature selection (CBBOA-SVM-FS) on the collected real-world dataset. The results of the evaluation using accuracy (ACC), Matthews correlation coefficient (MCC), sensitivity, and specificity are given in Table 2. The ACC of a model is defined as the proportion of properly categorized events out of all classified events, and it represents the model's overall classification performance. Specificity evaluates the ability of a binary classification model to correctly identify normal (negative) cases, while sensitivity evaluates its ability to correctly identify abnormal (positive) cases. The MCC is used to examine the effectiveness of the classification model; it provides a more objective predictive evaluation than accuracy alone. Among the results, the ACC of CBBOA-SVM-FS is 94.2%, the MCC is 88.9%, the sensitivity is 94.5%, and the specificity is 94%. The analysis of these four evaluation metrics fully illustrates the feasibility of using CBBOA-SVM-FS to study the impact of career decisions for college students.
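For reference, the four metrics follow directly from confusion-matrix counts; the counts below are illustrative, not the paper's:

```python
import math

def binary_metrics(tp, fp, tn, fn):
    # ACC, sensitivity, specificity, and MCC from confusion-matrix counts
    acc = (tp + tn) / (tp + fp + tn + fn)
    sens = tp / (tp + fn)                 # recall on the positive class
    spec = tn / (tn + fp)                 # recall on the negative class
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return acc, sens, spec, mcc

# hypothetical counts chosen only to demonstrate the formulas
acc, sens, spec, mcc = binary_metrics(tp=52, fp=3, tn=47, fn=3)
```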
To further validate the performance of CBBOA-SVM-FS, we compared it with five other similar methods, namely CBBOA-SVM, BOA-SVM, ANN, RF, and KELM. The evaluation results of ACC, sensitivity, specificity, and MCC for each method are shown in Figure 8. For the most important metric, ACC, CBBOA-SVM-FS obtained a high result of 94.20%, which is 1.90% better than CBBOA-SVM, 2.40% better than BOA-SVM, 8.80% better than ANN, 2.90% better than RF, and 4.90% better than KELM. Therefore, CBBOA-SVM-FS performs the best among all the compared methods.
Moreover, in terms of the variance of the ACC obtained by the methods involved in the comparison, CBBOA-SVM-FS is 0.037, CBBOA-SVM is 0.076, BOA-SVM is 0.071, ANN is 0.095, RF is 0.083, and KELM is 0.075. By comparing the variance, we can find that the stability of CBBOA-SVM-FS is also better with respect to ACC.
In terms of sensitivity, CBBOA-SVM obtains the best result of 98.20%, and CBBOA-SVM-FS is the second-best, with a result of 94.50%. In addition, the results of other methods are 93.30% for BOA-SVM, 92.60% for RF, 87.70% for KELM, and 78.20% for ANN, respectively, which shows that the proposed methods CBBOA-SVM-FS and CBBOA-SVM are also superior to other methods in terms of Sensitivity.
Further, the results in terms of variance also show that CBBOA-SVM-FS and CBBOA-SVM are more stable than the other methods. When evaluated using Specificity, CBBOA-SVM-FS is 94%, which is 8.00% better than CBBOA-SVM, 4.00% better than BOA-SVM, 1.00% better than ANN, 4.00% better than RF, and 3.00% better than KELM, showing that CBBOA-SVM-FS is the best. Moreover, on top of stability, CBBOA-SVM-FS is also better than CBBOA-SVM, BOA-SVM, RF, and KELM.
In the MCC results, CBBOA-SVM-FS is the best with 88.90%. CBBOA-SVM, BOA-SVM, ANN, RF, and KELM achieve 85.30%, 84.30%, 72.60%, 83.00%, and 79.20%, respectively, so CBBOA-SVM-FS is 3.60% better than the next-best CBBOA-SVM and 16.30% better than the worst, ANN. Figure 8 also shows that CBBOA-SVM-FS is the most stable on this metric. In summary, CBBOA-SVM-FS performs better than the other similar methods and can be used to study the impact of career decisions.
The proposed CBBOA not only accomplishes the optimal configuration of the SVM’s hyperparameters but also selects the best feature subset throughout the process. We adopted a ten-fold cross-validation (CV) approach. Figure 9 depicts the frequency distribution of the primary features identified by CBBOA-SVM via the 10-fold CV procedure.
As seen in the chart, the characteristics that appeared most frequently were “Education” (F2), “Accept and win difficult challenges” (F10), “Feel empowered by what you do” (F11), “Decide what is more important to me” (values) (F17), “Set an example in your mind” (F19), “Decide what you want to do” (F29), and “Reconfirmation of a wise career choice” (F33). The five most frequent characteristics appeared 9, 9, 8, 7, and 8 times, respectively. As a result, the study concluded that such attributes may play an important role in predicting the effect of career decisions.
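The per-fold frequency tally behind such a figure can be sketched as follows. The per-fold subsets here are made-up placeholders (the study’s actual per-fold selections are not listed in the text), so only the counting mechanism is illustrated.

```python
from collections import Counter

# Hypothetical selected-feature subsets, one per CV fold (placeholders only).
fold_subsets = [
    {"F2", "F10", "F11", "F17", "F33"},
    {"F2", "F10", "F11", "F19", "F29"},
    {"F2", "F10", "F17", "F29", "F33"},
    # ...one subset per remaining fold...
]

# Count how often each feature was selected across folds.
freq = Counter(f for subset in fold_subsets for f in subset)
ranked = freq.most_common()  # features sorted by selection count, descending
```

Features selected in the most folds (here F2 and F10) would be treated as the most informative attributes.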

6. Discussion

By analyzing the experimental results for the questionnaire, it can be found that the six most important attributes among the 39 are F2, F10, F11, F17, F29, and F33, which have a more prominent impact on the career decision-making ability of college students.
The survey data show that college students’ career decision-making ability differs across education levels: students with master’s degrees are stronger than those with bachelor’s degrees, who in turn are stronger than those with a college (higher vocational) diploma. It can thus be seen that the higher the education level, the stronger the career decision-making ability. This is because college students with strong career decision-making abilities have clearer, more targeted goals and clearer directions for their studies and future development, and they are more committed to furthering their education.
F10 and F11 are two characteristics that measure the degree to which competence needs, among an individual’s basic psychological needs, are satisfied. We find that college students who have accepted and won difficult challenges and who feel competent in what they do have stronger career decision-making ability, because individuals whose competence needs are satisfied tend to have more confidence in their abilities in various respects, including career decision making. At the same time, successes won over difficulties in career decision making promote the individual’s sense of competence.
The characteristic F17 reflects the values in the individual’s self-concept, and it can be seen that the clearer university students are about their own values, the stronger their career decision-making ability is. Values are the overall evaluation of the meaning, role, effect, and importance of objective things (including people, objects, and events) and of the results of one’s own behavior; they are the principles and standards that drive and guide one’s decisions and actions, and they are one of the core elements of the psychological structure of personality. The clearer one’s values are, the clearer one is about what one wants and needs, and the clearer and more determined one’s decisions will be; therefore, the clarity of one’s values also determines the strength of one’s career decision-making ability. Similarly, for F29, the stronger the degree of deciding what one wants to do, the clearer the career decisions and goals, and the stronger the career decision-making ability, because deciding what to do and how to do it is itself part of decision-making ability.
F33 is in fact a reconfirmation and reinforcement of the individual’s decision, which can be regarded as a recognition of one’s own decision-making ability and can enhance the individual’s self-efficacy for decision making.

This research still needs improvement in several respects. First, the number of samples collected for model construction needs to be increased so that the model is more comprehensive and accurate. Second, sample collection is currently concentrated in one city and is therefore influenced by factors such as the urban environment and college environment; it is necessary to cover different geographical areas, especially different provinces, colleges and universities in cities of different tiers (first-tier, new first-tier, second-tier, third-tier, and fourth-tier cities), and different types of institutions to enrich the model. Third, more attributes that affect college students’ career decision-making ability can be explored, and more influential attributes sought, to make the model more convincing.
The practical value of this study on forecasting college students’ career selections is considerable: it can not only aid unemployed college students in selecting suitable occupations but also support the government’s macro-management of the college graduate employment market. As a result, prediction studies on college students’ future profession choices can provide very valuable information.
Because the proposed CBBOA has a strong optimization ability, it can be applied to many other scenarios in the future, such as disease module identification [46,47], identification of molecular signatures for cancer diagnosis [48,49], drug–disease association prediction [50], fractional-order controllers [51], prediction of mortality rates in high-risk diabetic foot patients [52], urban road planning [53], fault diagnosis [54,55], information retrieval services [56,57,58], and location-based services [59,60]. In addition, the method proposed in this paper can be extended to distributed versions [61,62], multi-objective or many-objective versions [63,64,65], or matrix-based evolutionary computation versions [66].

7. Conclusions and Future Work

In this study, we developed an effective hybrid CBBOA-SVM model to predict career decisions for college students. This paper proposes an improved BOA, called CBBOA, obtained by introducing the Gaussian bare-bones and communication mechanisms. These mechanisms effectively improve the exploitation ability and convergence accuracy of BOA. To evaluate the performance of CBBOA, we tested it against other algorithms on the CEC2014 benchmark functions. Experimental results show that CBBOA greatly improves on BOA for most functions and is also competitive with several advanced algorithms. Meanwhile, by optimizing the SVM with CBBOA, it is feasible to obtain better parameter combinations and feature subsets than with previous approaches. Compared to previous machine learning approaches, the suggested method predicts more accurately and performs more consistently when dealing with the problem of predicting career selections for college students.
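The joint optimization of SVM hyperparameters and feature subsets summarized above can be sketched by the solution encoding below. This is a minimal illustrative sketch under our own assumptions, not the paper’s exact configuration: each candidate vector is assumed to concatenate two continuous entries (mapped to the penalty factor c and kernel parameter γ on a log scale) with a thresholded feature mask over the 39 questionnaire attributes.

```python
import random

N_FEATURES = 39  # questionnaire attributes in this study

def decode(vector):
    """Split one candidate solution into (C, gamma, feature_mask).

    vector[0] and vector[1] lie in [0, 1] and are scaled to hyperparameter
    ranges (log scale, an illustrative assumption); the remaining entries
    are thresholded at 0.5 to pick features.
    """
    C = 10 ** (vector[0] * 6 - 3)        # C in [1e-3, 1e3]
    gamma = 10 ** (vector[1] * 6 - 3)    # gamma in [1e-3, 1e3]
    mask = [v > 0.5 for v in vector[2:]]
    return C, gamma, mask

random.seed(0)
candidate = [random.random() for _ in range(2 + N_FEATURES)]
C, gamma, mask = decode(candidate)
```

In such a scheme, the optimizer’s fitness function would train an SVM with (C, γ) on the masked features and return the cross-validation accuracy.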
In future work, given its strong performance, the proposed CBBOA-SVM model will be generalized in China and utilized to anticipate various issues, such as medical diagnostics and financial risk prediction. Furthermore, it is envisaged that the CBBOA method may be expanded to new application domains, such as photovoltaic cell optimization and the optimization of deep learning network nodes.

Author Contributions

Conceptualization, Z.W., G.L. and H.C.; Methodology, H.C. and Z.W.; software, Z.W.; validation, H.C., G.L. and Z.W.; formal analysis, Z.W. and G.L.; investigation, Z.W.; resources, H.C. and G.L.; data curation, Z.W.; writing—original draft preparation, Z.W.; writing—review and editing, H.C., Z.W. and G.L.; visualization, Z.W. and G.L.; supervision, Z.W. and G.L.; project administration, Z.W.; funding acquisition, H.C., G.L. and Z.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data involved in this study can be provided upon request.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Test benchmark functions.
ID | Function | Dim | Range | F(min)
Unimodal Functions
F1 | Rotated High Conditioned Elliptic Function | 30 | [−100, 100] | 100
F2 | Rotated Bent Cigar Function | 30 | [−100, 100] | 200
F3 | Rotated Discus Function | 30 | [−100, 100] | 300
Simple Multimodal Functions
F4 | Shifted and Rotated Rosenbrock’s Function | 30 | [−100, 100] | 400
F5 | Shifted and Rotated Ackley’s Function | 30 | [−100, 100] | 500
F6 | Shifted and Rotated Weierstrass Function | 30 | [−100, 100] | 600
F7 | Shifted and Rotated Griewank’s Function | 30 | [−100, 100] | 700
F8 | Shifted Rastrigin’s Function | 30 | [−100, 100] | 800
F9 | Shifted and Rotated Rastrigin’s Function | 30 | [−100, 100] | 900
F10 | Shifted Schwefel’s Function | 30 | [−100, 100] | 1000
F11 | Shifted and Rotated Schwefel’s Function | 30 | [−100, 100] | 1100
F12 | Shifted and Rotated Katsuura Function | 30 | [−100, 100] | 1200
F13 | Shifted and Rotated HappyCat Function | 30 | [−100, 100] | 1300
F14 | Shifted and Rotated HGBat Function | 30 | [−100, 100] | 1400
F15 | Shifted and Rotated Expanded Griewank’s plus Rosenbrock’s Function | 30 | [−100, 100] | 1500
F16 | Shifted and Rotated Expanded Scaffer’s F6 Function | 30 | [−100, 100] | 1600
Hybrid Functions
F17 | Hybrid Function 1 | 30 | [−100, 100] | 1700
F18 | Hybrid Function 2 | 30 | [−100, 100] | 1800
F19 | Hybrid Function 3 | 30 | [−100, 100] | 1900
F20 | Hybrid Function 4 | 30 | [−100, 100] | 2000
F21 | Hybrid Function 5 | 30 | [−100, 100] | 2100
F22 | Hybrid Function 6 | 30 | [−100, 100] | 2200
Composition Functions
F23 | Composition Function 1 | 30 | [−100, 100] | 2300
F24 | Composition Function 2 | 30 | [−100, 100] | 2400
F25 | Composition Function 3 | 30 | [−100, 100] | 2500
F26 | Composition Function 4 | 30 | [−100, 100] | 2600
F27 | Composition Function 5 | 30 | [−100, 100] | 2700
F28 | Composition Function 6 | 30 | [−100, 100] | 2800
F29 | Composition Function 7 | 30 | [−100, 100] | 2900
F30 | Composition Function 8 | 30 | [−100, 100] | 3000
Table A2. Comparison results of CBBOA and other algorithms.
F1 F2
Avg | Std | Avg | Std
CBBOA1.8030 × 1061.0304 × 1062.0001 × 1029.2127 × 10−3
BMWOA1.0717 × 1084.3745 × 1072.6567 × 1081.5529 × 108
ACWOA1.4833 × 1085.5638 × 1077.9939 × 1093.4474 × 109
COSCA8.9386 × 1085.3031 × 1086.3658 × 10102.1103 × 1010
CESCA1.1876 × 1092.0735 × 1087.6944 × 10104.2170 × 109
CGPSO9.7702 × 1062.5780 × 1061.5498 × 1082.1136 × 107
ALCPSO6.4899 × 1066.6766 × 1062.2696 × 1033.8048 × 103
GL256.8886 × 1063.8457 × 1061.2582 × 1031.1673 × 103
DECLS3.3217 × 1079.1089 × 1061.1338 × 1033.6390 × 103
DE1.9301 × 1075.7413 × 1068.2095 × 1021.6488 × 103
GWO5.5892 × 1073.5683 × 1072.1901 × 1092.5341 × 109
BOA1.6643 × 1093.1079 × 1087.8163 × 10108.3707 × 109
F3 F4
Avg | Std | Avg | Std
CBBOA4.0850 × 1021.3860 × 1024.7838 × 1023.6867 × 101
BMWOA5.4968 × 1049.3525 × 1036.7336 × 1027.3102 × 101
ACWOA4.9095 × 1048.8924 × 1031.1387 × 1032.8025 × 102
COSCA5.6942 × 1041.7497 × 1049.8014 × 1033.0534 × 103
CESCA1.1103 × 1051.5056 × 1041.1600 × 1041.4613 × 103
CGPSO2.2727 × 1036.6309 × 1024.6431 × 1023.1795 × 101
ALCPSO5.1468 × 1025.7009 × 1025.1660 × 1024.4261 × 101
GL257.1686 × 1031.1124 × 1045.1182 × 1023.3736 × 101
DECLS5.1436 × 1021.8144 × 1025.0966 × 1021.9755 × 101
DE4.0464 × 1021.0122 × 1025.0399 × 1022.6263 × 101
GWO3.1663 × 1048.2882 × 1036.4242 × 1028.7207 × 101
BOA7.3093 × 1047.0422 × 1031.6572 × 1042.0742 × 103
F5 F6
Avg | Std | Avg | Std
CBBOA5.2054 × 1029.1124 × 10−26.1011 × 1023.4478 × 100
BMWOA5.2097 × 1029.2063 × 10−26.3329 × 1022.9235 × 100
ACWOA5.2080 × 1021.5647 × 10−16.3348 × 1023.4412 × 100
COSCA5.2104 × 1027.6610 × 10−26.2130 × 1024.7481 × 100
CESCA5.2103 × 1025.0365 × 10−26.4161 × 1021.5680 × 100
CGPSO5.2096 × 1025.2391 × 10−26.2282 × 1023.2396 × 100
ALCPSO5.2082 × 1026.2548 × 10−26.1614 × 1023.6276 × 100
GL255.2107 × 1021.2606 × 10−16.1431 × 1022.9503 × 100
DECLS5.2064 × 1026.0533 × 10−26.2158 × 1021.9812 × 100
DE5.2057 × 1025.7521 × 10−26.1966 × 1022.0167 × 100
GWO5.2096 × 1025.3745 × 10−26.1427 × 1022.8626 × 100
BOA5.2105 × 1024.2576 × 10−26.3918 × 1021.5880 × 100
F7 F8
Avg | Std | Avg | Std
CBBOA7.0001 × 1021.3748 × 10−28.1898 × 1021.0020 × 101
BMWOA7.0291 × 1028.9324 × 10−19.7229 × 1022.6449 × 101
ACWOA7.3695 × 1021.6757 × 1011.0027 × 1032.4405 × 101
COSCA1.1459 × 1031.7911 × 1021.0319 × 1031.0624 × 102
CESCA1.4151 × 1035.7444 × 1011.2180 × 1031.5325 × 101
CGPSO7.0237 × 1021.8918 × 10−19.8251 × 1022.2402 × 101
ALCPSO7.0001 × 1021.2223 × 10−28.2025 × 1029.3172 × 100
GL257.0005 × 1024.7467 × 10−28.5966 × 1021.7631 × 101
DECLS7.0000 × 1024.9595 × 10−68.0092 × 1029.9010 × 10−1
DE7.0000 × 1025.6014 × 10−138.0097 × 1028.1531 × 10−1
GWO7.1835 × 1021.7971 × 1018.8460 × 1022.3224 × 101
BOA1.5084 × 1039.3231 × 1011.0800 × 1032.3929 × 101
F9 F10
Avg | Std | Avg | Std
CBBOA9.9137 × 1022.3780 × 1011.6058 × 1033.6765 × 102
BMWOA1.1237 × 1032.4960 × 1014.8429 × 1036.8002 × 102
ACWOA1.1296 × 1032.6544 × 1014.8371 × 1038.0218 × 102
COSCA1.1912 × 1034.7617 × 1013.9767 × 1037.9244 × 102
CESCA1.3040 × 1031.5796 × 1018.8141 × 1032.9668 × 102
CGPSO1.1188 × 1032.3346 × 1015.5081 × 1035.8401 × 102
ALCPSO1.0081 × 1033.0592 × 1011.7183 × 1034.3505 × 102
GL251.0288 × 1033.3241 × 1013.7255 × 1039.2637 × 102
DECLS1.0230 × 1039.6967 × 1001.0252 × 1032.5235 × 101
DE1.0098 × 1039.3311 × 1001.0301 × 1034.1062 × 101
GWO1.0040 × 1032.1414 × 1013.4005 × 1035.9025 × 102
BOA1.2821 × 1034.7244 × 1016.9291 × 1033.9695 × 102
F11 F12
Avg | Std | Avg | Std
CBBOA5.0119 × 1036.0250 × 1021.2009 × 1032.3915 × 10−1
BMWOA7.1530 × 1036.0590 × 1021.2024 × 1034.8900 × 10−1
ACWOA6.4025 × 1038.5223 × 1021.2018 × 1034.4736 × 10−1
COSCA6.3104 × 1036.1537 × 1021.2020 × 1034.1068 × 10−1
CESCA9.1132 × 1032.4588 × 1021.2036 × 1034.0427 × 10−1
CGPSO6.0632 × 1034.8645 × 1021.2026 × 1032.3568 × 10−1
ALCPSO4.1988 × 1036.6139 × 1021.2015 × 1034.8177 × 10−1
GL256.4044 × 1036.7616 × 1021.2025 × 1038.3770 × 10−1
DECLS6.1820 × 1032.5230 × 1021.2011 × 1031.3085 × 10−1
DE5.8411 × 1033.0834 × 1021.2009 × 1031.2453 × 10−1
GWO3.9793 × 1039.0198 × 1021.2019 × 1031.0066 × 100
BOA7.6936 × 1033.0339 × 1021.2029 × 1033.7932 × 10−1
F13 F14
Avg | Std | Avg | Std
CBBOA1.3004 × 1038.5067 × 10−21.4002 × 1033.4669 × 10−2
BMWOA1.3006 × 1031.5480 × 10−11.4003 × 1034.3141 × 10−2
ACWOA1.3015 × 1031.0365 × 1001.4191 × 1031.5916 × 101
COSCA1.3057 × 1032.6847 × 1001.6137 × 1038.0522 × 101
CESCA1.3078 × 1034.8390 × 10−11.6506 × 1032.2394 × 101
CGPSO1.3004 × 1031.0740 × 10−11.4003 × 1031.1888 × 10−1
ALCPSO1.3005 × 1038.7747 × 10−21.4007 × 1033.0709 × 10−1
GL251.3004 × 1038.0749 × 10−21.4003 × 1035.0502 × 10−2
DECLS1.3004 × 1036.2113 × 10−21.4003 × 1038.7249 × 10−2
DE1.3004 × 1034.2062 × 10−21.4003 × 1034.5312 × 10−2
GWO1.3004 × 1031.0651 × 10−11.4028 × 1035.0769 × 100
BOA1.3088 × 1033.9353 × 10−11.6981 × 1033.5360 × 101
F15 F16
Avg | Std | Avg | Std
CBBOA1.5098 × 1032.5115 × 1001.6113 × 1036.4364 × 10−1
BMWOA1.5759 × 1034.1733 × 1011.6125 × 1033.2151 × 10−1
ACWOA2.0604 × 1036.0202 × 1021.6120 × 1035.3325 × 10−1
COSCA3.6836 × 1051.6560 × 1051.6125 × 1033.8935 × 10−1
CESCA4.3189 × 1051.0374 × 1051.6135 × 1032.1406 × 10−1
CGPSO1.5174 × 1031.1700 × 1001.6115 × 1033.5374 × 10−1
ALCPSO1.5095 × 1033.1903 × 1001.6117 × 1035.2657 × 10−1
GL251.5220 × 1036.9231 × 1001.6120 × 1035.0226 × 10−1
DECLS1.5133 × 1038.2274 × 10−11.6118 × 1032.8453 × 10−1
DE1.5121 × 1039.6677 × 10−11.6115 × 1032.9432 × 10−1
GWO2.1591 × 1031.3597 × 1031.6108 × 1036.5913 × 10−1
BOA3.8960 × 1051.3187 × 1051.6134 × 1032.8382 × 10−1
F17 F18
Avg | Std | Avg | Std
CBBOA2.1792 × 1051.2091 × 1053.9527 × 1032.1688 × 103
BMWOA5.5900 × 1063.7204 × 1068.5449 × 1046.4288 × 104
ACWOA1.6134 × 1071.0735 × 1075.0025 × 1073.6272 × 107
COSCA3.0648 × 1073.5516 × 1077.1814 × 1082.1449 × 109
CESCA1.0504 × 1081.8015 × 1074.0499 × 1099.6455 × 108
CGPSO3.1879 × 1051.2726 × 1052.8501 × 1066.4288 × 105
ALCPSO4.7325 × 1055.4772 × 1058.3538 × 1037.5279 × 103
GL258.2784 × 1054.6405 × 1052.7815 × 1031.4342 × 103
DECLS2.0297 × 1068.4694 × 1051.9452 × 1041.6845 × 104
DE1.5640 × 1066.5718 × 1058.6163 × 1034.8731 × 103
GWO2.6886 × 1062.9678 × 1066.2481 × 1061.4825 × 107
BOA1.7617 × 1088.5409 × 1076.4244 × 1092.5330 × 109
F19 F20
Avg | Std | Avg | Std
CBBOA1.9123 × 1031.6232 × 1013.6498 × 1031.2646 × 103
BMWOA1.9331 × 1032.3707 × 1013.1152 × 1042.7581 × 104
ACWOA2.0086 × 1032.9753 × 1014.1432 × 1041.9260 × 104
COSCA2.0930 × 1032.0199 × 1022.5987 × 1049.0261 × 103
CESCA2.2572 × 1034.4261 × 1014.4640 × 1052.8889 × 105
CGPSO1.9177 × 1032.8220 × 1002.4449 × 1038.9684 × 101
ALCPSO1.9132 × 1031.5071 × 1013.0207 × 1035.0969 × 102
GL251.9177 × 1031.8849 × 1011.5454 × 1049.4537 × 103
DECLS1.9090 × 1031.1062 × 1005.5417 × 1031.9236 × 103
DE1.9083 × 1037.1246 × 10−14.9888 × 1031.5815 × 103
GWO1.9421 × 1032.6740 × 1011.8691 × 1049.2334 × 103
BOA2.4468 × 1037.1856 × 1011.3607 × 1051.0837 × 105
F21 F22
Avg | Std | Avg | Std
CBBOA1.5122 × 1059.4180 × 1042.5175 × 1031.4193 × 102
BMWOA1.7781 × 1061.7201 × 1062.8403 × 1032.1005 × 102
ACWOA5.3323 × 1064.3368 × 1062.9696 × 1032.3798 × 102
COSCA4.2385 × 1054.1713 × 1052.5624 × 1031.3627 × 102
CESCA3.6023 × 1071.1128 × 1075.6077 × 1031.2488 × 103
CGPSO1.7853 × 1051.1978 × 1052.9412 × 1032.1692 × 102
ALCPSO6.5221 × 1046.3401 × 1042.6419 × 1032.3292 × 102
GL252.4524 × 1051.5452 × 1052.5479 × 1031.2323 × 102
DECLS3.4290 × 1051.7048 × 1052.4056 × 1038.5474 × 101
DE2.9194 × 1051.4617 × 1052.3745 × 1039.1063 × 101
GWO5.7270 × 1051.3318 × 1062.5510 × 1031.7590 × 102
BOA6.5589 × 1076.2651 × 1072.9678 × 1042.4451 × 104
F23 F24
Avg | Std | Avg | Std
CBBOA2.6152 × 1031.1517 × 10−122.6000 × 1035.1366 × 10−13
BMWOA2.5006 × 1039.7621 × 10−12.6003 × 1032.8177 × 10−1
ACWOA2.5187 × 1035.7177 × 1012.6000 × 1031.9497 × 10−6
COSCA2.5000 × 1030.0000 × 1002.6000 × 1031.4417 × 10−4
CESCA3.1011 × 1031.1310 × 1022.6559 × 1032.1852 × 101
CGPSO2.5000 × 1031.6453 × 10−32.6000 × 1031.2108 × 10−2
ALCPSO2.6153 × 1032.6885 × 10−22.6317 × 1031.2483 × 101
GL252.6152 × 1035.4858 × 10−112.6348 × 1037.6871 × 100
DECLS2.5000 × 1031.5125 × 10−32.6000 × 1031.1092 × 10−2
DE2.6152 × 1031.3876 × 10−122.6266 × 1032.9836 × 100
GWO2.6348 × 1038.6286 × 1002.6000 × 1036.9708 × 10−4
BOA2.5000 × 1030.0000 × 1002.6000 × 1034.9958 × 10−13
F25 F26
Avg | Std | Avg | Std
CBBOA2.7000 × 1030.0000 × 1002.7568 × 1035.0205 × 101
BMWOA2.7000 × 1037.5052 × 10−32.7006 × 1031.4230 × 10−1
ACWOA2.7000 × 1030.0000 × 1002.7570 × 1035.0001 × 101
COSCA2.7000 × 1030.0000 × 1002.7535 × 1035.0524 × 101
CESCA2.7173 × 1037.7569 × 1002.7126 × 1031.3655 × 100
CGPSO2.7000 × 1034.5423 × 10−52.7900 × 1033.0386 × 101
ALCPSO2.7112 × 1033.5602 × 1002.7659 × 1038.0123 × 101
GL252.7149 × 1033.2778 × 1002.7470 × 1035.0602 × 101
DECLS2.7000 × 1032.8374 × 10−52.7003 × 1036.4985 × 10−2
DE2.7070 × 1031.0605 × 1002.7003 × 1033.7748 × 10−2
GWO2.7093 × 1034.8242 × 1002.7635 × 1034.8781 × 101
BOA2.7000 × 1030.0000 × 1002.7710 × 1033.9722 × 101
F27 F28
Avg | Std | Avg | Std
CBBOA3.2793 × 1031.4039 × 1023.8798 × 1031.9790 × 102
BMWOA2.9001 × 1031.6129 × 10−13.0002 × 1032.0453 × 10−1
ACWOA3.7057 × 1033.4813 × 1023.6620 × 1031.0791 × 103
COSCA2.9168 × 1036.4131 × 1013.0000 × 1030.0000 × 100
CESCA3.9890 × 1031.7575 × 1025.4311 × 1033.3799 × 102
CGPSO2.9750 × 1031.8597 × 1023.1379 × 1037.5511 × 102
ALCPSO3.4654 × 1032.3066 × 1024.3818 × 1034.7586 × 102
GL253.2986 × 1031.1776 × 1024.0664 × 1032.1919 × 102
DECLS3.0343 × 1031.8568 × 1023.0442 × 1031.6814 × 102
DE3.2247 × 1031.0311 × 1023.6416 × 1032.4955 × 101
GWO3.3653 × 1031.2923 × 1023.9109 × 1032.0234 × 102
BOA3.2618 × 1038.1426 × 1019.4775 × 1039.0608 × 102
F29 F30
Avg | Std | Avg | Std
CBBOA2.3085 × 1063.9835 × 1066.0232 × 1031.0071 × 103
BMWOA9.2623 × 1052.7754 × 1065.0326 × 1044.0784 × 104
ACWOA2.4172 × 1072.4069 × 1074.1810 × 1053.6525 × 105
COSCA3.4284 × 1061.0796 × 1074.6562 × 1041.2930 × 105
CESCA1.7522 × 1073.5720 × 1061.3828 × 1063.1673 × 105
CGPSO6.3314 × 1032.7923 × 1031.3065 × 1041.0994 × 104
ALCPSO1.2267 × 1063.7342 × 1061.2522 × 1048.6766 × 103
GL254.1844 × 1032.1535 × 1026.7032 × 1038.8310 × 102
DECLS6.6570 × 1039.6293 × 1036.6640 × 1031.2451 × 103
DE5.0815 × 1031.8380 × 1036.2438 × 1031.1828 × 103
GWO1.3376 × 1063.5286 × 1065.2609 × 1043.5890 × 104
BOA3.1000 × 1030.0000 × 1003.2000 × 1035.9714 × 10−5
Table A3. The p-value of CBBOA versus other algorithms.
BMWOA | ACWOA | COSCA | CESCA | CGPSO | ALCPSO
F11.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−62.8308 × 10−4
F21.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−62.1266 × 10−6
F31.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−61.5286 × 10−1
F41.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−69.3676 × 10−25.3197 × 10−3
F51.7344 × 10−63.8822 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−6
F61.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−62.3534 × 10−61.6394 × 10−5
F71.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−65.0085 × 10−1
F81.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−69.7539 × 10−1
F91.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−67.5213 × 10−2
F101.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−62.7116 × 10−1
F111.9209 × 10−69.3157 × 10−63.5152 × 10−61.7344 × 10−65.7517 × 10−66.8923 × 10−5
F121.9209 × 10−61.7344 × 10−61.9209 × 10−61.7344 × 10−61.7344 × 10−63.8811 × 10−4
F135.7517 × 10−61.7344 × 10−61.9209 × 10−61.7344 × 10−68.6121 × 10−15.3070 × 10−5
F141.4773 × 10−41.7344 × 10−61.7344 × 10−61.7344 × 10−64.4052 × 10−11.9209 × 10−6
F151.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−69.0993 × 10−1
F162.6033 × 10−64.4493 × 10−52.3534 × 10−61.7344 × 10−63.0010 × 10−24.3896 × 10−3
F171.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−67.7309 × 10−37.5213 × 10−2
F181.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−61.7088 × 10−3
F192.0515 × 10−41.7344 × 10−63.5152 × 10−61.7344 × 10−63.5888 × 10−41.1093 × 10−1
F201.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−64.7292 × 10−63.0010 × 10−2
F212.1266 × 10−61.7344 × 10−64.8603 × 10−51.7344 × 10−65.9994 × 10−15.2872 × 10−4
F221.2381 × 10−52.1266 × 10−62.2888 × 10−11.7344 × 10−61.9209 × 10−61.9569 × 10−2
F231.7322 × 10−62.9944 × 10−74.3205 × 10−81.7322 × 10−61.7322 × 10−61.7333 × 10−6
F241.7344 × 10−66.8988 × 10−51.1914 × 10−11.7344 × 10−61.7344 × 10−62.1253 × 10−6
F251.7333 × 10−61.0000 × 1001.0000 × 1001.7333 × 10−61.7333 × 10−61.7344 × 10−6
F262.2545 × 10−31.8408 × 10−19.1082 × 10−13.6085 × 10−37.5137 × 10−58.9718 × 10−2
F271.7344 × 10−64.0715 × 10−51.7344 × 10−61.7344 × 10−64.8603 × 10−57.7122 × 10−4
F281.7344 × 10−62.6230 × 10−11.7344 × 10−61.7344 × 10−63.1123 × 10−56.3198 × 10−5
F295.5774 × 10−13.7243 × 10−52.7653 × 10−31.7344 × 10−63.7094 × 10−11.8462 × 10−1
F302.1266 × 10−67.6909 × 10−63.8203 × 10−11.7344 × 10−61.7518 × 10−21.6394 × 10−5
GL25 | DECLS | DE | GWO | BOA
F11.9209 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−6
F21.7344 × 10−63.3173 × 10−43.6094 × 10−31.7344 × 10−61.7344 × 10−6
F31.7344 × 10−69.8421 × 10−38.7740 × 10−11.7344 × 10−61.7344 × 10−6
F41.1138 × 10−34.1955 × 10−43.8542 × 10−31.9209 × 10−61.7344 × 10−6
F51.9209 × 10−63.4053 × 10−52.2888 × 10−11.7344 × 10−61.7344 × 10−6
F63.8811 × 10−41.7344 × 10−62.1266 × 10−62.8308 × 10−41.7344 × 10−6
F71.6046 × 10−41.4093 × 10−41.5391 × 10−41.7344 × 10−61.7344 × 10−6
F81.7344 × 10−61.7333 × 10−61.9185 × 10−61.9209 × 10−61.7344 × 10−6
F98.9187 × 10−51.9729 × 10−56.6392 × 10−41.1093 × 10−11.7344 × 10−6
F101.7344 × 10−61.9209 × 10−61.9209 × 10−61.7344 × 10−61.7344 × 10−6
F112.1266 × 10−62.6033 × 10−65.2165 × 10−64.0715 × 10−51.7344 × 10−6
F121.7344 × 10−61.0357 × 10−31.2544 × 10−12.8434 × 10−51.7344 × 10−6
F138.1302 × 10−11.6503 × 10−11.8519 × 10−25.8571 × 10−11.7344 × 10−6
F141.2453 × 10−21.6394 × 10−53.5152 × 10−64.0715 × 10−51.7344 × 10−6
F151.7344 × 10−62.1630 × 10−54.5336 × 10−41.7344 × 10−61.7344 × 10−6
F162.5967 × 10−51.6046 × 10−41.2544 × 10−14.9498 × 10−21.7344 × 10−6
F172.3534 × 10−61.7344 × 10−61.7344 × 10−63.5152 × 10−61.7344 × 10−6
F184.2767 × 10−22.3534 × 10−61.8910 × 10−42.3704 × 10−51.7344 × 10−6
F197.7122 × 10−42.8021 × 10−18.4508 × 10−11.9209 × 10−61.7344 × 10−6
F201.7344 × 10−64.1955 × 10−45.7064 × 10−41.7344 × 10−61.7344 × 10−6
F211.2453 × 10−21.7988 × 10−51.1499 × 10−47.8647 × 10−21.7344 × 10−6
F225.8571 × 10−17.7122 × 10−44.1955 × 10−45.3044 × 10−11.7344 × 10−6
F231.6354 × 10−61.7344 × 10−61.0000 × 1001.7344 × 10−64.3205 × 10−8
F241.7333 × 10−61.7333 × 10−61.7333 × 10−61.7333 × 10−66.2500 × 10−2
F251.7333 × 10−61.7333 × 10−61.7333 × 10−61.2279 × 10−51.0000 × 100
F265.0383 × 10−12.3704 × 10−51.9729 × 10−51.0937 × 10−18.5924 × 10−2
F276.5833 × 10−13.4053 × 10−52.0589 × 10−13.3269 × 10−26.4352 × 10−1
F281.2866 × 10−31.7344 × 10−61.7344 × 10−65.4401 × 10−11.7344 × 10−6
F293.0650 × 10−41.3591 × 10−18.2206 × 10−22.8948 × 10−11.7344 × 10−6
F301.3194 × 10−22.4308 × 10−24.1653 × 10−11.7344 × 10−61.7344 × 10−6
Table A4. Comparison results of CBBOA and other algorithms.
F1 F2
Avg | Std | Avg | Std
CBBOA1.4818 × 1067.6067 × 105CBBOA2.0000 × 1028.9172 × 10−3
BOABB2.1815 × 1061.1008 × 106BOABB2.0000 × 1023.7543 × 10−3
BOACM1.4030 × 1092.0120 × 108BOACM7.6129 × 10101.0019 × 1010
BOA1.7705 × 1092.9979 × 108BOA7.5017 × 10106.8016 × 109
F3 F4
Avg | Std | Avg | Std
CBBOA4.1096 × 1021.2917 × 102CBBOA4.7678 × 1023.7929 × 101
BOABB4.5098 × 1021.6666 × 102BOABB4.8281 × 1023.7593 × 101
BOACM7.6585 × 1044.3189 × 103BOACM1.3699 × 1043.0500 × 103
BOA7.4196 × 1045.6359 × 103BOA1.6618 × 1042.1750 × 103
F5 F6
Avg | Std | Avg | Std
CBBOA5.2052 × 1029.0294 × 10−2CBBOA6.1162 × 1023.4318 × 100
BOABB5.2050 × 1027.6702 × 10−2BOABB6.1223 × 1023.0495 × 100
BOACM5.2100 × 1025.8757 × 10−2BOACM6.3825 × 1021.7127 × 100
BOA5.2105 × 1026.0785 × 10−2BOA6.3823 × 1022.2752 × 100
F7 F8
Avg | Std | Avg | Std
CBBOA7.0001 × 1021.3063 × 10−2CBBOA8.1958 × 1027.8919 × 100
BOABB7.0001 × 1021.1507 × 10−2BOABB8.2585 × 1021.0404 × 101
BOACM1.3306 × 1038.9903 × 101BOACM1.0957 × 1032.1952 × 101
BOA1.5138 × 1036.5163 × 101BOA1.0867 × 1032.8582 × 101
F9 F10
Avg | Std | Avg | Std
CBBOA9.9045 × 1022.9712 × 101CBBOA1.5833 × 1032.9764 × 102
BOABB9.7194 × 1022.1876 × 101BOABB1.6668 × 1033.8606 × 102
BOACM1.2604 × 1031.8657 × 101BOACM6.9789 × 1033.5990 × 102
BOA1.2788 × 1033.3273 × 101BOA6.8713 × 1034.0484 × 102
F11 F12
Avg | Std | Avg | Std
CBBOA4.6879 × 1037.0222 × 102CBBOA1.2008 × 1031.9694 × 10−1
BOABB5.0674 × 1037.8285 × 102BOABB1.2008 × 1032.1869 × 10−1
BOACM7.5830 × 1035.1201 × 102BOACM1.2028 × 1033.1569 × 10−1
BOA7.5866 × 1033.4520 × 102BOA1.2028 × 1034.2705 × 10−1
F13 F14
Avg | Std | Avg | Std
CBBOA1.3004 × 1036.5511 × 10−2CBBOA1.4002 × 1033.9803 × 10−2
BOABB1.3004 × 1036.4113 × 10−2BOABB1.4003 × 1031.0610 × 10−1
BOACM1.3084 × 1037.9012 × 10−1BOACM1.6396 × 1032.4950 × 101
BOA1.3088 × 1036.0851 × 10−1BOA1.6987 × 1032.9022 × 101
F15 F16
Avg | Std | Avg | Std
CBBOA1.5085 × 1032.6655 × 100CBBOA1.6112 × 1034.8598 × 10−1
BOABB1.5093 × 1032.9331 × 100BOABB1.6113 × 1034.3135 × 10−1
BOACM1.5791 × 1055.9606 × 104BOACM1.6128 × 1032.6040 × 10−1
BOA3.6709 × 1051.3770 × 105BOA1.6134 × 1031.8603 × 10−1
F17 F18
Avg | Std | Avg | Std
CBBOA2.4572 × 1051.6260 × 105CBBOA3.7389 × 1032.3791 × 103
BOABB2.6297 × 1051.2144 × 105BOABB4.4248 × 1032.6027 × 103
BOACM1.6116 × 1087.3356 × 107BOACM5.6867 × 1092.4342 × 109
BOA2.1369 × 1081.0842 × 108BOA6.6090 × 1091.9399 × 109
F19 F20
Avg | Std | Avg | Std
CBBOA1.9117 × 1031.5060 × 101CBBOA4.1870 × 1031.5712 × 103
BOABB1.9148 × 1032.0418 × 101BOABB4.3564 × 1031.8372 × 103
BOACM2.3700 × 1038.8841 × 101BOACM1.8488 × 1055.1230 × 105
BOA2.4259 × 1036.0800 × 101BOA1.1590 × 1057.4029 × 104
F21 F22
Avg | Std | Avg | Std
CBBOA9.6956 × 1046.6030 × 104CBBOA2.4939 × 1031.4941 × 102
BOABB1.2602 × 1051.2720 × 105BOABB2.4979 × 1031.2878 × 102
BOACM4.9917 × 1074.5993 × 107BOACM3.3284 × 1046.0774 × 104
BOA4.8587 × 1072.6658 × 107BOA4.7721 × 1048.5557 × 104
F23 F24
Avg | Std | Avg | Std
CBBOA2.6037 × 1033.5164 × 101CBBOA2.6000 × 1035.9711 × 10−13
BOABB2.6152 × 1031.5548 × 10−12BOABB2.6064 × 1031.0767 × 101
BOACM2.5000 × 1030.0000 × 100BOACM2.6000 × 1038.4444 × 10−14
BOA2.5000 × 1030.0000 × 100BOA2.6000 × 1034.4684 × 10−13
F25 F26
Avg | Std | Avg | Std
CBBOA2.7000 × 1030.0000 × 100CBBOA2.7602 × 1034.9627 × 101
BOABB2.7000 × 1030.0000 × 100BOABB2.7116 × 1034.6443 × 101
BOACM2.7000 × 1030.0000 × 100BOACM2.7714 × 1033.8205 × 101
BOA2.7000 × 1030.0000 × 100BOA2.7733 × 1033.8978 × 101
F27 F28
Avg | Std | Avg | Std
CBBOA3.2691 × 1031.2932 × 102CBBOA3.8365 × 1031.3805 × 102
BOABB3.2718 × 1031.5141 × 102BOABB3.8623 × 1031.9048 × 102
BOACM2.9598 × 1036.6784 × 101BOACM3.6505 × 1031.4264 × 103
BOA3.2628 × 1037.4755 × 101BOA8.9360 × 1038.8515 × 102
F29 F30
Avg | Std | Avg | Std
CBBOA3.0868 × 1064.5614 × 106CBBOA6.2096 × 1038.9321 × 102
BOABB2.6707 × 1063.6263 × 106BOABB6.6561 × 1032.0851 × 103
BOACM3.1000 × 1030.0000 × 100BOACM3.2000 × 1033.7537 × 10−5
BOA3.1000 × 1030.0000 × 100BOA7.3943 × 1032.2973 × 104

Appendix B. SVM & BOA

Appendix B.1. Support Vector Machine (SVM)

SVM has been applied to many real-life problems, including diagnosis of breast cancer [67], diagnosis of tuberculous pleural effusion [68], analysis of patients with paraquat poisoning [69], prognosis of paraquat poisoning patients [70], electricity price forecasting [71], electricity spot-price forecasting [72], and prediction of Parkinson’s disease [73]. The principle of SVM is to discover an optimal hyperplane that maximizes the separation between classes of data; the support vectors are the data points closest to the decision boundary. In data processing, the SVM is frequently used as a supervised learning method to determine the optimal hyperplane that accurately distinguishes positive from negative samples. Given the data set G = {(x_i, y_i)}, i = 1, …, N, x_i ∈ ℝ^d, y_i ∈ {±1}, the hyperplane can be written as follows:
g(x) = ω^T x + b        (A1)
Geometrically, maximizing the margin of the hyperplane is equivalent to minimizing ‖ω‖. To tolerate a few outliers, the idea of a “soft margin” is introduced via the slack variables ξ_i ≥ 0. The penalty factor c, which represents the capacity to accept outliers, is one of the major factors that can impact the effectiveness of SVM classification. The standard SVM model is as follows:
min Φ(ω) = (1/2)‖ω‖² + c Σ_{i=1}^{N} ξ_i²
s.t.  y_i(ω^T x_i + b) ≥ 1 − ξ_i,  i = 1, 2, …, N        (A2)
where ω is the weight vector and b is the bias constant.
By this method, lower-dimensional data are transformed into a higher-dimensional space, where the optimal classification surface can be obtained by combining linear techniques. Specifically, the SVM maps the linearly inseparable sample set nonlinearly via Φ: ℝ^d → H. To ensure that the computational results for the samples in the high-dimensional space remain consistent with those in the low-dimensional space, a suitable kernel function k(x_i, x_j) is constructed; with α_i denoting the Lagrange multipliers, the problem is converted into Equation (A3) as follows:
min Q(α) = (1/2) Σ_{i=1}^{N} Σ_{j=1}^{N} α_i α_j y_i y_j k(x_i, x_j) − Σ_{i=1}^{N} α_i
s.t.  Σ_{i=1}^{N} α_i y_i = 0,  0 ≤ α_i ≤ C,  i = 1, 2, …, N        (A3)
The generalized radial basis kernel function is employed in this work, and its formulation is as follows.
k(x_i, x_j) = exp(−γ‖x_i − x_j‖²)        (A4)
where γ is a kernel parameter that specifies the width of the kernel function; it is another factor that is highly significant to the classification performance of SVM.
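As a small self-check of the radial basis kernel above (an illustrative sketch, not the study’s code), the kernel can be evaluated directly; it equals 1 for identical inputs and decays with squared distance at a rate set by γ:

```python
import math

def rbf_kernel(x, y, gamma):
    """k(x, y) = exp(-gamma * ||x - y||^2) for two equal-length vectors."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

k_same = rbf_kernel([1.0, 2.0], [1.0, 2.0], gamma=0.5)  # identical points
k_far = rbf_kernel([1.0, 2.0], [2.0, 0.0], gamma=0.5)   # squared distance 5
```

Larger γ narrows the kernel, so only very close samples influence each other, which is why γ (together with c) dominates SVM classification performance.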

Appendix B.2. Butterfly Optimization Algorithm (BOA)

In recent years, many new intelligent algorithms have been proposed to solve practical problems, such as the weighted mean of vectors (INFO) [74], hunger games search (HGS) [75], the Runge Kutta optimizer (RUN) [76], Harris hawks optimization (HHO) [77], the slime mould algorithm (SMA) [78], and the colony predation algorithm (CPA) [79]. These algorithms have shown potential in solving many problems, such as plant disease recognition [80], medical diagnosis [81,82,83], feature selection [84,85,86], image segmentation [87,88,89], engineering design [90,91], the economic emission dispatch problem [92], multi-attribute decision making [93,94,95,96], parameter tuning for machine learning models [97,98], green supplier selection [99], scheduling problems [100,101], combinatorial optimization problems [102], parameter optimization [103,104,105,106], and big data optimization problems [107].
Butterfly optimization algorithm (BOA) [108] is a swarm intelligence optimization algorithm proposed in 2018. Since its introduction, BOA has been applied to many problems, such as fault diagnosis [109], disease diagnosis [110], optimal cluster head choice for wireless sensor networks [111], parameter identification of photovoltaic models [112], image segmentation [113] and feature selection [114]. BOA searches for the optimal solution of a problem by imitating the foraging behavior of butterflies. In biology, butterflies have chemoreceptors on their bodies, with which they can smell the fragrance of food. Therefore, a function f representing fragrance is defined in BOA. The calculation formula of f is as follows:
$$f = c \times I^{a}$$
where $c$ represents the sensory modality, $a$ is the power exponent that depends on the sensory modality, and $I$ is the stimulus intensity, which corresponds to the fitness value in the algorithm.
The update formula of c is as follows:
$$c_{new} = c + \frac{0.025}{c \times t}$$
where t represents the current iteration number.
Two situations arise in BOA during the search phase. In the first, when one butterfly produces a fragrance that the other butterflies can perceive, those butterflies move toward it; this phase is the global search. In the second, when butterflies cannot perceive the fragrance produced by other butterflies in the search space, they take random steps to search their surroundings; this phase is the local search. From this analysis, we obtain the two update formulas of BOA. The formula for the global search phase is shown in Equation (A7):
$$x_i^{t+1} = x_i^{t} + \left(r^2 \times g^{*} - x_i^{t}\right) \times f$$
where $x_i^{t}$ represents the solution vector of the i-th butterfly in the t-th iteration, $g^{*}$ denotes the best solution found so far, and $r$ is a random number in the range [0, 1].
The formula for the local search phase is shown in Equation (A8):
$$x_i^{t+1} = x_i^{t} + \left(r^2 \times x_j^{t} - x_k^{t}\right) \times f$$
where $x_j^{t}$ and $x_k^{t}$ are the positions of the j-th and k-th butterflies in the population, $j$ and $k$ are random indices in the range [1, N], $N$ is the population size, and $r$ is a random number in the range [0, 1].
To switch between global search and local search while searching for the optimal solution, a probability parameter p is set in BOA: in each update, the algorithm chooses between the two formulas according to p. Each newly generated individual is evaluated and replaces the original one only if its fitness is better; this greedy replacement completes one iteration of BOA. Figure A1 shows the concrete process.
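The iterative process in Figure A1 can be condensed into a short Python sketch. This is a simplified illustration of the standard BOA update rules described above, not the authors' implementation; the sphere function, the search bounds, and all parameter values (p, c, a) are illustrative stand-ins:

```python
import numpy as np

def boa(fitness, dim=5, n=20, max_iter=100, p=0.8, c=0.01, a=0.1, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-10.0, 10.0, (n, dim))        # butterfly positions
    fit = np.apply_along_axis(fitness, 1, X)      # stimulus intensity I
    best = X[np.argmin(fit)].copy()               # g*, best solution so far
    for t in range(1, max_iter + 1):
        f = c * np.abs(fit) ** a                  # fragrance f = c * I^a
        for i in range(n):
            r = rng.random()
            if rng.random() < p:                  # global search toward g*
                cand = X[i] + (r**2 * best - X[i]) * f[i]
            else:                                 # local random walk
                j, k = rng.integers(0, n, 2)
                cand = X[i] + (r**2 * X[j] - X[k]) * f[i]
            cf = fitness(cand)
            if cf < fit[i]:                       # greedy replacement
                X[i], fit[i] = cand, cf
        best = X[np.argmin(fit)].copy()
        c = c + 0.025 / (c * max_iter)            # sensory-modality update
    return best, float(fit.min())

# Minimize the sphere function as a stand-in objective.
best, val = boa(lambda x: float(np.sum(x**2)))
```

In CBBOA, this basic loop is further augmented with the communication mechanism and the Gaussian bare-bones mechanism described in the main text.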
Figure A1. Flowchart of BOA.

References

  1. Wakelin-Theron, N.; Spowart, J. Determining tourism graduate employability, knowledge, skills, and competencies in a VUCA world: Constructing a tourism employability model. Afr. J. Hosp. Tour. Leis. 2019, 8, 1–18. [Google Scholar]
  2. Deci, E.L.; Ryan, R.M. Intrinsic Motivation and Self-Determination in Human Behavior; Springer Science & Business Media: New York, NY, USA, 2013. [Google Scholar]
  3. Rigby, C.S.; Ryan, R.M. Self-determination theory in human resource development: New directions and practical considerations. Adv. Dev. Hum. Resour. 2018, 20, 133–147. [Google Scholar] [CrossRef]
  4. Deci, E.L.; Olafsen, A.H.; Ryan, R.M. Self-determination theory in work organizations: The state of a science. Annu. Rev. Organ. Psychol. Organ. Behav. 2017, 4, 19–43. [Google Scholar] [CrossRef]
  5. Powers, K.; Chen, H.; Prasad, K.; Gilmartin, S.; Sheppard, S. Exploring How Engineering Internships and Undergraduate Research Experiences Inform and Influence College Students’ Career Decisions and Future Plans. In Proceedings of the American Society for Engineering Education Annual Conference, Salt Lake City, UT, USA, 24–27 June 2018. [Google Scholar]
  6. Kim, S.-Y.; Ahn, T.; Fouad, N. Family influence on Korean students’ career decisions: A social cognitive perspective. J. Career Assess. 2016, 24, 513–526. [Google Scholar] [CrossRef]
  7. Kiselev, P.; Kiselev, B.; Matsuta, V.; Feshchenko, A.; Bogdanovskaya, I.; Kosheleva, A. Career guidance based on machine learning: Social networks in professional identity construction. Procedia Comput. Sci. 2020, 169, 158–163. [Google Scholar] [CrossRef]
  8. Chung, J.Y.; Lee, S. Dropout early warning systems for high school students using machine learning. Child. Youth Serv. Rev. 2019, 96, 346–353. [Google Scholar] [CrossRef]
  9. Luo, T.; So, W.W.M.; Wan, Z.H.; Li, W.C. STEM stereotypes predict students’ STEM career interest via self-efficacy and outcome expectations. Int. J. STEM Educ. 2021, 8, 36. [Google Scholar] [CrossRef]
  10. Nauta, M.M.; Saucier, A.M.; Woodard, L.E. Interpersonal influences on students’ academic and career decisions: The impact of sexual orientation. Career Dev. Q. 2001, 49, 352–362. [Google Scholar] [CrossRef]
  11. Park, I.-J.; Rie, J.; Kim, H.S.; Park, J. Effects of a future time perspective–based career intervention on career decisions. J. Career Dev. 2020, 47, 96–110. [Google Scholar] [CrossRef]
  12. Lee, P.C.; Lee, M.J.; Dopson, L.R. Who influences college students’ career choices? An empirical study of hospitality management students. J. Hosp. Tour. Educ. 2019, 31, 74–86. [Google Scholar] [CrossRef] [Green Version]
  13. Tu, J.; Chen, H.; Liu, J.; Heidari, A.A.; Zhang, X.; Wang, M.; Ruby, R.; Pham, Q.-V. Evolutionary biogeography-based whale optimization methods with communication structure: Towards measuring the balance. Knowl.-Based Syst. 2021, 212, 106642. [Google Scholar] [CrossRef]
  14. Kennedy, J. Bare Bones Particle Swarms. In Proceedings of the 2003 IEEE Swarm Intelligence Symposium, Indianapolis, IN, USA, 26 April 2003; pp. 80–87. [Google Scholar] [CrossRef]
  15. Omran, M.; Engelbrecht, A.; Salman, A. Bare bones differential evolution. Eur. J. Oper. Res. 2009, 196, 128–139. [Google Scholar] [CrossRef] [Green Version]
  16. Wang, H.; Rahnamayan, S.; Sun, H.; Omran, M. Gaussian Bare-Bones Differential Evolution. IEEE Trans. Cybern. 2013, 43, 634–647. [Google Scholar] [CrossRef]
  17. Fu, J.; Zhang, Y.; Wang, Y.; Zhang, H.; Liu, J.; Tang, J.; Yang, Q.; Sun, H.; Qiu, W.; Ma, Y.; et al. Optimization of metabolomic data processing using NOREVA. Nat. Protoc. 2022, 17, 129–151. [Google Scholar] [CrossRef]
  18. Li, B.; Tang, J.; Yang, Q.; Li, S.; Cui, X.; Li, Y.; Chen, Y.; Xue, W.; Li, X.; Zhu, F. NOREVA: Normalization and evaluation of MS-based metabolomics data. Nucleic Acids Res. 2017, 45, W162–W170. [Google Scholar] [CrossRef] [Green Version]
  19. Li, Y.; Li, X.; Hong, J.; Wang, Y.; Fu, J.; Yang, H.; Yu, C.; Li, F.; Hu, J.; Xue, W.; et al. Clinical trials, progression-speed differentiating features and swiftness rule of the innovative targets of first-in-class drugs. Brief. Bioinform. 2020, 21, 649–662. [Google Scholar] [CrossRef] [Green Version]
  20. Zhu, F.; Li, X.; Yang, S.; Chen, Y. Clinical success of drug targets prospectively predicted by in silico study. Trends Pharmacol. Sci. 2018, 39, 229–231. [Google Scholar] [CrossRef]
  21. Wang, D.; Liang, Y.; Xu, D.; Feng, X.; Guan, R.J.K.-B.S. A content-based recommender system for computer science publications. Knowl.-Based Syst. 2018, 157, 1–9. [Google Scholar] [CrossRef]
  22. Li, J.; Chen, C.; Chen, H.; Tong, C. Towards Context-aware Social Recommendation via Individual Trust. Knowl.-Based Syst. 2017, 127, 58–66. [Google Scholar] [CrossRef]
  23. Li, J.; Lin, J. A probability distribution detection based hybrid ensemble QoS prediction approach. Inf. Sci. 2020, 519, 289–305. [Google Scholar] [CrossRef]
  24. Li, J.; Zheng, X.-L.; Chen, S.-T.; Song, W.-W.; Chen, D.-r. An efficient and reliable approach for quality-of-service-aware service composition. Inf. Sci. 2014, 269, 238–254. [Google Scholar] [CrossRef]
  25. Yin, J.; Sun, W.; Li, F.; Hong, J.; Li, X.; Zhou, Y.; Lu, Y.; Liu, M.; Zhang, X.; Chen, N.; et al. VARIDT 1.0: Variability of drug transporter database. Nucleic Acids Res. 2020, 48, D1042–D1050. [Google Scholar] [CrossRef] [PubMed]
  26. Zhu, F.; Shi, Z.; Qin, C.; Tao, L.; Liu, X.; Xu, F.; Zhang, L.; Song, Y.; Zhang, J.; Han, B.; et al. Therapeutic target database update 2012: A resource for facilitating target-oriented drug discovery. Nucleic Acids Res. 2012, 40, D1128–D1136. [Google Scholar] [CrossRef] [PubMed]
  27. Zhang, Q.; Chen, H.; Heidari, A.A.; Zhao, X.; Xu, Y.; Wang, P.; Li, Y.; Li, C. Chaos-induced and mutation-driven schemes boosting salp chains-inspired optimizers. IEEE Access 2019, 7, 31243–31261. [Google Scholar] [CrossRef]
  28. Qiu, S.; Hao, Z.; Wang, Z.; Liu, L.; Liu, J.; Zhao, H.; Fortino, G. Sensor Combination Selection Strategy for Kayak Cycle Phase Segmentation Based on Body Sensor Networks. IEEE Internet Things J. 2021, 9, 4190–4201. [Google Scholar] [CrossRef]
  29. Qiu, S.; Zhao, H.; Jiang, N.; Wu, D.; Song, G.; Zhao, H.; Wang, Z. Sensor network oriented human motion capture via wearable intelligent system. Int. J. Intell. Syst. 2021, 37, 1646–1673. [Google Scholar] [CrossRef]
  30. Pei, H.; Yang, B.; Liu, J.; Chang, K.C.C. Active Surveillance via Group Sparse Bayesian Learning. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 1133–1148. [Google Scholar] [CrossRef]
  31. Guan, R.; Zhang, H.; Liang, Y.; Giunchiglia, F.; Huang, L.; Feng, X. Deep Feature-Based Text Clustering and Its Explanation. IEEE Trans. Knowl. Data Eng. 2020, 14, 8. [Google Scholar] [CrossRef]
  32. Chen, H.; Heidari, A.A.; Zhao, X.; Zhang, L.; Chen, H. Advanced orthogonal learning-driven multi-swarm sine cosine optimization: Framework and case studies. Expert Syst. Appl. 2020, 144, 113113. [Google Scholar] [CrossRef]
  33. Wang, X.; Chen, H.; Heidari, A.A.; Zhang, X.; Xu, J.; Xu, Y.; Huang, H. Multi-population following behavior-driven fruit fly optimization: A Markov chain convergence proof and comprehensive analysis. Knowl.-Based Syst. 2020, 210, 106437. [Google Scholar] [CrossRef]
  34. Song, S.; Wang, P.; Heidari, A.A.; Wang, M.; Zhao, X.; Chen, H.; He, W.; Xu, S. Dimension decided Harris hawks optimization with Gaussian mutation: Balance analysis and diversity patterns. Knowl.-Based Syst. 2021, 215, 106425. [Google Scholar] [CrossRef]
  35. Heidari, A.A.; Aljarah, I.; Faris, H.; Chen, H.; Luo, J.; Mirjalili, S. An enhanced associative learning-based exploratory whale optimizer for global optimization. Neural Comput. Appl. 2019, 32, 5185–5211. [Google Scholar] [CrossRef]
  36. Elhosseini, M.A.; Haikal, A.Y.; Badawy, M.; Khashan, N. Biped robot stability based on an A–C parametric Whale Optimization Algorithm. J. Comput. Sci. 2019, 31, 17–32. [Google Scholar] [CrossRef]
  37. Liang, X.; Cai, Z.-N.; Wang, M.; Zhao, X.; Chen, H.; Li, C. Chaotic oppositional sine–cosine method for solving global optimization problems. Eng. Comput. 2020, 38, 1223–1239. [Google Scholar] [CrossRef]
  38. Lin, A.; Wu, Q.; Heidari, A.A.; Xu, Y.; Chen, H.; Geng, W.; Li, Y.; Li, C. Predicting Intentions of Students for Master Programs Using a Chaos-Induced Sine Cosine-Based Fuzzy K-Nearest Neighbor Classifier. IEEE Access 2019, 7, 67235–67248. [Google Scholar] [CrossRef]
  39. Sun, T.-Y.; Liu, C.-C.; Tsai, S.-J.; Hsieh, S.-T.; Li, K.-Y. Cluster Guide Particle Swarm Optimization (CGPSO) for Underdetermined Blind Source Separation With Advanced Conditions. IEEE Trans. Evol. Comput. 2011, 15, 798–811. [Google Scholar] [CrossRef]
  40. Singh, C.L.; Anandini, C.; Gogoi, A.; Baishnab, K. Automated sizing of low-noise CMOS analog amplifier using ALCPSO optimization algorithm. J. Inf. Optim. Sci. 2017, 39, 99–111. [Google Scholar] [CrossRef]
  41. García-Martínez, C.; Lozano, M.; Herrera, F.; Molina, D.; Sánchez, A. Global and local real-coded genetic algorithms based on parent-centric crossover operators. Eur. J. Oper. Res. 2008, 185, 1088–1113. [Google Scholar] [CrossRef]
  42. Gao, S.; Yu, Y.; Wang, Y.; Wang, J.; Cheng, J.; Zhou, M. Chaotic Local Search-Based Differential Evolution Algorithms for Optimization. IEEE Trans. Syst. Man Cybern. Syst. 2019, 51, 3954–3967. [Google Scholar] [CrossRef]
  43. Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  44. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef] [Green Version]
  45. Carrasco, J.; García, S.; Rueda, M.M.; Das, S.; Herrera, F. Recent trends in the use of statistical tests for comparing swarm and evolutionary computing algorithms: Practical guidelines and a critical review. Swarm Evol. Comput. 2020, 54, 100665. [Google Scholar] [CrossRef] [Green Version]
  46. Su, Y.; Liu, C.; Niu, Y.; Cheng, F.; Zhang, X. A community structure enhancement-based community detection algorithm for complex networks. IEEE Trans. Syst. Man Cybern. Syst. 2019, 51, 2833–2846. [Google Scholar] [CrossRef]
  47. Tian, Y.; Su, X.; Su, Y.; Zhang, X. EMODMI: A multi-objective optimization based method to identify disease modules. IEEE Trans. Emerg. Top. Comput. Intell. 2020, 5, 570–582. [Google Scholar] [CrossRef]
  48. Su, Y.; Li, S.; Zheng, C.; Zhang, X. A heuristic algorithm for identifying molecular signatures in cancer. IEEE Trans. NanoBioscience 2019, 19, 132–141. [Google Scholar] [CrossRef]
  49. Li, L.; Gao, Z.; Wang, Y.-T.; Zhang, M.-W.; Ni, J.-C.; Zheng, C.-H.; Su, Y. SCMFMDA: Predicting microRNA-disease associations based on similarity constrained matrix factorization. PLoS Comput. Biol. 2021, 17, e1009165. [Google Scholar] [CrossRef]
  50. Cai, L.; Lu, C.; Xu, J.; Meng, Y.; Wang, P.; Fu, X.; Zeng, X.; Su, Y. Drug repositioning based on the heterogeneous information fusion graph convolutional network. Brief. Bioinform. 2021, 22, bbab319. [Google Scholar] [CrossRef]
  51. Li, G.; Li, Y.; Chen, H.; Deng, W. Fractional-Order Controller for Course-Keeping of Underactuated Surface Vessels Based on Frequency Domain Specification and Improved Particle Swarm Optimization Algorithm. Appl. Sci. 2022, 12, 3139. [Google Scholar] [CrossRef]
  52. Zhang, X.; Wang, H.; Du, C.; Fan, X.; Cui, L.; Chen, H.; Deng, F.; Tong, Q.; He, M.; Yang, M.; et al. Custom-Molded Offloading Footwear Effectively Prevents Recurrence and Amputation, and Lowers Mortality Rates in High-Risk Diabetic Foot Patients: A Multicenter, Prospective Observational Study. Diabetes Metab. Syndr. Obes. Targets Ther. 2022, 15, 103–109. [Google Scholar] [CrossRef]
  53. Ran, X.; Zhou, X.; Lei, M.; Tepsan, W.; Deng, W. A novel k-means clustering algorithm with a noise algorithm for capturing urban hotspots. Appl. Sci. 2021, 11, 11202. [Google Scholar] [CrossRef]
  54. Cui, H.; Guan, Y.; Chen, H. Rolling element fault diagnosis based on VMD and sensitivity MCKD. IEEE Access 2021, 9, 120297–120308. [Google Scholar] [CrossRef]
  55. Deng, W.; Li, Z.; Li, X.; Chen, H.; Zhao, H. Compound Fault Diagnosis Using Optimized MCKD and Sparse Representation for Rolling Bearings. IEEE Trans. Instrum. Meas. 2022, 71, 1–9. [Google Scholar] [CrossRef]
  56. Wu, Z.; Li, R.; Zhou, Z.; Guo, J.; Jiang, J.; Su, X. A user sensitive subject protection approach for book search service. J. Assoc. Inf. Sci. Technol. 2020, 71, 183–195. [Google Scholar] [CrossRef]
  57. Wu, Z.; Shen, S.; Lian, X.; Su, X.; Chen, E. A dummy-based user privacy protection approach for text information retrieval. Knowl.-Based Syst. 2020, 195, 105679. [Google Scholar] [CrossRef]
  58. Wu, Z.; Shen, S.; Zhou, H.; Li, H.; Lu, C.; Zou, D. An effective approach for the protection of user commodity viewing privacy in e-commerce website. Knowl.-Based Syst. 2021, 220, 106952. [Google Scholar] [CrossRef]
  59. Wu, Z.; Li, G.; Shen, S.; Lian, X.; Chen, E.; Xu, G. Constructing dummy query sequences to protect location privacy and query privacy in location-based services. World Wide Web 2021, 24, 25–49. [Google Scholar] [CrossRef]
  60. Wu, Z.; Wang, R.; Li, Q.; Lian, X.; Xu, G.; Chen, E.; Liu, X. A location privacy-preserving system based on query range cover-up for location-based services. IEEE Trans. Veh. Technol. 2020, 69, 5244–5254. [Google Scholar] [CrossRef]
  61. Zhan, Z.-H.; Wang, Z.-J.; Jin, H.; Zhang, J. Adaptive distributed differential evolution. IEEE Trans. Cybern. 2019, 50, 4633–4647. [Google Scholar] [CrossRef]
  62. Zhan, Z.-H.; Liu, X.-F.; Zhang, H.; Yu, Z.; Weng, J.; Li, Y.; Gu, T.; Zhang, J. Cloudde: A heterogeneous differential evolution algorithm and its distributed cloud version. IEEE Trans. Parallel Distrib. Syst. 2016, 28, 704–716. [Google Scholar] [CrossRef]
  63. Gu, Z.-M.; Wang, G.-G. Improving NSGA-III algorithms with information feedback models for large-scale many-objective optimization. Future Gener. Comput. Syst. 2020, 107, 49–69. [Google Scholar] [CrossRef]
  64. Liu, X.-F.; Zhan, Z.-H.; Gao, Y.; Zhang, J.; Kwong, S.; Zhang, J. Coevolutionary particle swarm optimization with bottleneck objective learning strategy for many-objective optimization. IEEE Trans. Evol. Comput. 2018, 23, 587–602. [Google Scholar] [CrossRef]
  65. Deng, W.; Zhang, X.; Zhou, Y.; Liu, Y.; Zhou, X.; Chen, H.; Zhao, H. An enhanced fast non-dominated solution sorting genetic algorithm for multi-objective problems. Inf. Sci. 2022, 585, 441–453. [Google Scholar] [CrossRef]
  66. Zhan, Z.-H.; Zhang, J.; Lin, Y.; Li, J.-Y.; Huang, T.; Guo, X.-Q.; Wei, F.-F.; Kwong, S.; Zhang, X.-Y.; You, R. Matrix-based evolutionary computation. IEEE Trans. Emerg. Top. Comput. Intell. 2021, 6, 315–328. [Google Scholar] [CrossRef]
  67. Huang, H.; Feng, X.a.; Zhou, S.; Jiang, J.; Chen, H.; Li, Y.; Li, C. A new fruit fly optimization algorithm enhanced support vector machine for diagnosis of breast cancer based on high-level features. BMC Bioinform. 2019, 20, 290. [Google Scholar] [CrossRef] [Green Version]
  68. Li, C.; Hou, L.; Sharma, B.Y.; Li, H.; Chen, C.; Li, Y.; Zhao, X.; Huang, H.; Cai, Z.; Chen, H. Developing a new intelligent system for the diagnosis of tuberculous pleural effusion. Comput. Methods Programs Biomed. 2018, 153, 211–225. [Google Scholar] [CrossRef]
  69. Hu, L.; Lin, F.; Li, H.; Tong, C.; Pan, Z.; Li, J.; Chen, H. An intelligent prognostic system for analyzing patients with paraquat poisoning using arterial blood gas indexes. J. Pharmacol. Toxicol. Methods 2017, 84, 78–85. [Google Scholar] [CrossRef]
  70. Chen, H.; Hu, L.; Li, H.; Hong, G.; Zhang, T.; Ma, J.; Lu, Z. An Effective Machine Learning Approach for Prognosis of Paraquat Poisoning Patients Using Blood Routine Indexes. Basic Clin. Pharmacol. Toxicol. 2017, 120, 86–96. [Google Scholar] [CrossRef]
  71. Weron, R. Electricity price forecasting: A review of the state-of-the-art with a look into the future. Int. J. Forecast. 2014, 30, 1030–1081. [Google Scholar] [CrossRef] [Green Version]
  72. Cincotti, S.; Gallo, G.; Ponta, L.; Raberto, M. Modeling and forecasting of electricity spot-prices: Computational intelligence vs classical econometrics. AI Commun. 2014, 27, 301–314. [Google Scholar] [CrossRef]
  73. Cai, Z.; Gu, J.; Chen, H. A New Hybrid Intelligent Framework for Predicting Parkinson’s Disease. IEEE Access 2017, 5, 17188–17200. [Google Scholar] [CrossRef]
  74. Ahmadianfar, I.; Heidari, A.A.; Noshadian, S.; Chen, H.; Gandomi, A.H. INFO: An Efficient Optimization Algorithm based on Weighted Mean of Vectors. Expert Syst. Appl. 2022, 195, 116516. [Google Scholar] [CrossRef]
  75. Yang, Y.; Chen, H.; Heidari, A.A.; Gandomi, A.H. Hunger games search: Visions, conception, implementation, deep analysis, perspectives, and towards performance shifts. Expert Syst. Appl. 2021, 177, 114864. [Google Scholar] [CrossRef]
  76. Ahmadianfar, I.; Heidari, A.A.; Gandomi, A.H.; Chu, X.; Chen, H. RUN Beyond the Metaphor: An Efficient Optimization Algorithm Based on Runge Kutta Method. Expert Syst. Appl. 2021, 181, 115079. [Google Scholar] [CrossRef]
  77. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  78. Li, S.; Chen, H.; Wang, M.; Heidari, A.A.; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst. 2020, 111, 300–323. [Google Scholar] [CrossRef]
  79. Tu, J.; Chen, H.; Wang, M.; Gandomi, A.H. The Colony Predation Algorithm. J. Bionic Eng. 2021, 18, 674–710. [Google Scholar] [CrossRef]
  80. Yu, H.; Cheng, X.; Chen, C.; Heidari, A.A.; Liu, J.; Cai, Z.; Chen, H. Apple leaf disease recognition method with improved residual network. Multimed. Tools Appl. 2022, 81, 7759–7782. [Google Scholar] [CrossRef]
  81. Xia, J.; Wang, Z.; Yang, D.; Li, R.; Liang, G.; Chen, H.; Heidari, A.A.; Turabieh, H.; Mafarja, M.; Pan, Z. Performance optimization of support vector machine with oppositional grasshopper optimization for acute appendicitis diagnosis. Comput. Biol. Med. 2022, 143, 105206. [Google Scholar] [CrossRef]
  82. Xia, J.; Yang, D.; Zhou, H.; Chen, Y.; Zhang, H.; Liu, T.; Heidari, A.A.; Chen, H.; Pan, Z. Evolving kernel extreme learning machine for medical diagnosis via a disperse foraging sine cosine algorithm. Comput. Biol. Med. 2022, 141, 105137. [Google Scholar] [CrossRef]
  83. Hu, J.; Han, Z.; Heidari, A.A.; Shou, Y.; Ye, H.; Wang, L.; Huang, X.; Chen, H.; Chen, Y.; Wu, P. Detection of COVID-19 severity using blood gas analysis parameters and Harris hawks optimized extreme learning machine. Comput. Biol. Med. 2022, 142, 105166. [Google Scholar] [CrossRef]
  84. Hu, J.; Chen, H.; Heidari, A.A.; Wang, M.; Zhang, X.; Chen, Y.; Pan, Z. Orthogonal learning covariance matrix for defects of grey wolf optimizer: Insights, balance, diversity, and feature selection. Knowl.-Based Syst. 2021, 213, 106684. [Google Scholar] [CrossRef]
  85. Hu, J.; Gui, W.; Heidari, A.A.; Cai, Z.; Liang, G.; Chen, H.; Pan, Z. Dispersed foraging slime mould algorithm: Continuous and binary variants for global optimization and wrapper-based feature selection. Knowl.-Based Syst. 2022, 237, 107761. [Google Scholar] [CrossRef]
  86. Too, J.; Liang, G.; Chen, H. Memory-based Harris hawk optimization with learning agents: A feature selection approach. Eng. Comput. 2021, 1–22. [Google Scholar] [CrossRef]
  87. Hussien, A.G.; Heidari, A.A.; Ye, X.; Liang, G.; Chen, H.; Pan, Z. Boosting whale optimization with evolution strategy and Gaussian random walks: An image segmentation method. Eng. Comput. 2022, 1–45. [Google Scholar] [CrossRef]
  88. Su, H.; Zhao, D.; Yu, F.; Heidari, A.A.; Zhang, Y.; Chen, H.; Li, C.; Pan, J.; Quan, S. Horizontal and vertical search artificial bee colony for image segmentation of COVID-19 X-ray images. Comput. Biol. Med. 2022, 142, 105181. [Google Scholar] [CrossRef] [PubMed]
  89. Yu, H.; Song, J.; Chen, C.; Heidari, A.A.; Liu, J.; Chen, H.; Zaguia, A.; Mafarja, M. Image segmentation of Leaf Spot Diseases on Maize using multi-stage Cauchy-enabled grey wolf algorithm. Eng. Appl. Artif. Intell. 2022, 109, 104653. [Google Scholar] [CrossRef]
  90. Zhang, H.; Liu, T.; Ye, X.; Heidari, A.A.; Liang, G.; Chen, H.; Pan, Z. Differential evolution-assisted salp swarm algorithm with chaotic structure for real-world problems. Eng. Comput. 2022, 1–35. [Google Scholar] [CrossRef]
  91. Shan, W.; Qiao, Z.; Heidari, A.A.; Chen, H.; Turabieh, H.; Teng, Y. Double adaptive weights for stabilization of moth flame optimizer: Balance analysis, engineering cases, and medical diagnosis. Knowl.-Based Syst. 2021, 214, 106728. [Google Scholar] [CrossRef]
  92. Dong, R.; Chen, H.; Heidari, A.A.; Turabieh, H.; Mafarja, M.; Wang, S. Boosted kernel search: Framework, analysis and case studies on the economic emission dispatch problem. Knowl.-Based Syst. 2021, 233, 107529. [Google Scholar] [CrossRef]
  93. Fan, C.; Hu, K.; Feng, S.; Ye, J.; Fan, E. Heronian mean operators of linguistic neutrosophic multisets and their multiple attribute decision-making methods. Int. J. Distrib. Sens. Netw. 2019, 15, 1550147719843059. [Google Scholar] [CrossRef]
  94. Cui, W.-H.; Ye, J. Logarithmic similarity measure of dynamic neutrosophic cubic sets and its application in medical diagnosis. Comput. Ind. 2019, 111, 198–206. [Google Scholar]
  95. Fan, C.; Fan, E.; Hu, K. New form of single valued neutrosophic uncertain linguistic variables aggregation operators for decision-making. Cogn. Syst. Res. 2018, 52, 1045–1055. [Google Scholar]
  96. Ye, J.; Cui, W. Modeling and stability analysis methods of neutrosophic transfer functions. Soft Comput. 2020, 24, 9039–9048. [Google Scholar] [CrossRef]
  97. Shi, B.; Ye, H.; Zheng, L.; Lyu, J.; Chen, C.; Heidari, A.A.; Hu, Z.; Chen, H.; Wu, P. Evolutionary warning system for COVID-19 severity: Colony predation algorithm enhanced extreme learning machine. Comput. Biol. Med. 2021, 136, 104698. [Google Scholar] [CrossRef]
  98. Sun, Y.; Xue, B.; Zhang, M.; Yen, G.G. Evolving deep convolutional neural networks for image classification. IEEE Trans. Evol. Comput. 2019, 24, 394–407. [Google Scholar] [CrossRef] [Green Version]
  99. Liu, P.; Gao, H. A novel green supplier selection method based on the interval type-2 fuzzy prioritized choquet bonferroni means. IEEE/CAA J. Autom. Sin. 2020, 8, 1549–1566. [Google Scholar] [CrossRef]
  100. Han, X.; Han, Y.; Chen, Q.; Li, J.; Sang, H.; Liu, Y.; Pan, Q.; Nojima, Y. Distributed Flow Shop Scheduling with Sequence-Dependent Setup Times Using an Improved Iterated Greedy Algorithm. Complex Syst. Model. Simul. 2021, 1, 198–217. [Google Scholar] [CrossRef]
  101. Gao, D.; Wang, G.-G.; Pedrycz, W. Solving fuzzy job-shop scheduling problem using DE algorithm improved by a selection mechanism. IEEE Trans. Fuzzy Syst. 2020, 28, 3265–3275. [Google Scholar] [CrossRef]
  102. Zhao, F.; Di, S.; Cao, J.; Tang, J. A novel cooperative multi-stage hyper-heuristic for combination optimization problems. Complex Syst. Model. Simul. 2021, 1, 91–108. [Google Scholar] [CrossRef]
  103. Heidari, A.A.; Abbaspour, R.A.; Chen, H. Efficient boosted grey wolf optimizers for global search and kernel extreme learning machine training. Appl. Soft Comput. 2019, 81, 105521. [Google Scholar]
  104. Shen, L.; Chen, H.; Yu, Z.; Kang, W.; Zhang, B.; Li, H.; Yang, B.; Liu, D. Evolving support vector machines using fruit fly optimization for medical data classification. Knowl.-Based Syst. 2016, 96, 61–75. [Google Scholar] [CrossRef]
  105. Wang, M.; Chen, H.; Yang, B.; Zhao, X.; Hu, L.; Cai, Z.; Huang, H.; Tong, C. Toward an optimal kernel extreme learning machine using a chaotic moth-flame optimization strategy with applications in medical diagnoses. Neurocomputing 2017, 267, 69–84. [Google Scholar] [CrossRef]
  106. Wang, M.; Chen, H. Chaotic multi-swarm whale optimizer boosted support vector machine for medical diagnosis. Appl. Soft Comput. 2020, 88, 105946. [Google Scholar] [CrossRef]
  107. Yi, J.-H.; Deb, S.; Dong, J.; Alavi, A.H.; Wang, G.-G. An improved NSGA-III algorithm with adaptive mutation operator for Big Data optimization problems. Future Gener. Comput. Syst. 2018, 88, 571–585. [Google Scholar] [CrossRef]
  108. Arora, S.; Singh, S. Butterfly optimization algorithm: A novel approach for global optimization. Soft Comput. 2019, 23, 715–734. [Google Scholar] [CrossRef]
  109. Yu, H.; Yuan, K.; Li, W.; Zhao, N.; Chen, W.; Huang, C.; Chen, H.; Wang, M. Improved Butterfly Optimizer-Configured Extreme Learning Machine for Fault Diagnosis. Complexity 2021, 2021, 6315010. [Google Scholar] [CrossRef]
  110. Liu, G.; Jia, W.; Luo, Y.; Wang, M.; Heidari, A.A.; Ouyang, J.; Chen, H.; Chen, M. Prediction Optimization of Cervical Hyperextension Injury: Kernel Extreme Learning Machines With Orthogonal Learning Butterfly Optimizer and Broyden- Fletcher-Goldfarb-Shanno Algorithms. IEEE Access 2020, 8, 119911–119930. [Google Scholar] [CrossRef]
  111. Maheshwari, P.; Sharma, A.K.; Verma, K. Energy efficient cluster based routing protocol for WSN using butterfly optimization algorithm and ant colony optimization. Ad Hoc Netw. 2021, 110, 102317. [Google Scholar] [CrossRef]
  112. Long, W.; Wu, T.; Xu, M.; Tang, M.; Cai, S. Parameters identification of photovoltaic models by using an enhanced adaptive butterfly optimization algorithm. Energy 2021, 229, 120750. [Google Scholar] [CrossRef]
  113. Sharma, S.; Saha, A.K.; Majumder, A.; Nama, S. MPBOA-A novel hybrid butterfly optimization algorithm with symbiosis organisms search for global optimization and image segmentation. Multimed. Tools Appl. 2021, 80, 12035–12076. [Google Scholar] [CrossRef]
  114. Sadeghian, Z.; Akbari, E.; Nematzadeh, H. A hybrid feature selection method based on information theory and binary butterfly optimization algorithm. Eng. Appl. Artif. Intell. 2021, 97, 104079. [Google Scholar] [CrossRef]
Figure 1. Flowchart of CBBOA.
Figure 2. Flowchart of the CBBOA-SVM model.
Figure 3. Convergence curves of the CBBOA and advanced algorithms (First row: F1, F2, F6; second row: F9, F12, F14; third row: F17, F24, F25).
Figure 4. Convergence curves of CBBOA and its variants (First row: F1, F3, F8; second row: F10, F11, F16; third row: F21, F24, F25).
Figure 5. (a) Three-dimensional location distribution of CBBOA, (b) two-dimensional location distribution of CBBOA, (c) trajectory of CBBOA in the first dimension, (d) average fitness of CBBOA, (e) convergence curves of CBBOA and BOA.
Figure 6. Balance analysis of CBBOA and BOA.
Figure 7. Diversity analysis of CBBOA and BOA.
Figure 8. Classification results of five models in terms of four metrics.
Figure 9. Frequency of the features chosen from CBBOA-SVM through the 10-fold CV procedure.
Table 1. Descriptions of each attribute.
Attribute | Name | Description
F1 | Gender | 1 = female, 2 = male
F2 | Education | Three categories: undergraduate in progress, specialist (senior) in progress, and master's in progress, coded 1, 2, and 3, respectively
F3 | Grade | Six categories: freshman, sophomore, junior, senior, other grades, and graduate student, coded 1-6, respectively
F4 | Specialty | Four categories: science and technology; literature, history, and philosophy; medicine and health; and arts and sports, coded 1-4, respectively
F5 | Source | Two categories, urban and rural, coded 1 and 2, respectively
F6 | Make choices based on interests and values | Measures satisfaction of the need for autonomy among the individual's basic psychological needs; higher values indicate higher satisfaction
F7 | Do things freely in your own way | Same as F6
F8 | Express "true self" through my choice | Same as F6
F9 | Successfully complete difficult tasks and programs | Measures satisfaction of the need for competence among the individual's basic psychological needs; higher values indicate higher satisfaction
F10 | Accept and win difficult challenges | Same as F9
F11 | Feel empowered by what you do | Same as F9
F12 | Have a sense of connection with people who care about me and who I care about | Measures satisfaction of the need for relatedness among the individual's basic psychological needs; higher values indicate higher satisfaction
F13 | Feel intimately connected to significant others | Same as F12
F14 | Have a strong sense of closeness to those around you | Same as F12
F15 | Have a clear understanding of the formation of self-personality traits | Measures the clarity of the individual's self-concept; higher values indicate greater clarity
F16 | Know your natural abilities | Same as F15
F17 | Decide what is more important to me (values) | Same as F15
F18 | Know what others see in me | Same as F15
F19 | Set an example in your mind | Same as F15
F20 | Find out what I am interested in | Same as F15
F21 | Set goals for yourself | Same as F15
F22 | Interview with career professionals | Measures the progress of the individual's career exploration; higher values indicate better progress
F23 | Discuss your career | Same as F22
F24 | Understand the type of job | Same as F22
F25 | Read career information | Same as F22
F26 | Research suitable careers | Same as F22
F27 | Conduct career experience | Same as F22
F28 | Identify the training required for the occupation | Same as F22
F29 | Decide what you want to do | Measures the clarity of the individual's career decisions and goals; higher values indicate greater clarity
F30 | Find the right job path | Same as F29
F31 | Choose a career that you are satisfied with | Same as F29
F32 | Develop an action plan for choosing a career | Same as F29
F33 | Reconfirm a wise career choice | Same as F29
F34 | Develop specific knowledge or skills needed for the job | Measures the degree to which the individual has learned the required skills; higher values indicate higher levels of learning
F35 | Get the training and experience you need | Same as F34
F36 | Participate in the training required for the job | Same as F34
F37 | Have the qualifications required for the job | Same as F34
F38 | Develop a job search plan | Measures the individual's readiness for the school-to-work transition; higher values indicate greater readiness
F39 | Get a job once you complete the training | Same as F38
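Under the coding scheme in Table 1, each respondent becomes a 39-dimensional numeric vector: F1-F5 are categorical codes and F6-F39 are scale ratings. A minimal sketch follows; the rating scale's range is not stated in this excerpt, so the F6-F39 values are placeholders.

```python
# F1-F5 per Table 1: e.g. female (1), undergraduate in progress (1),
# junior (3), science and technology (1), rural (2)
demographics = [1, 1, 3, 1, 2]

# F6-F39: 34 scale items; placeholder ratings (the actual scale
# range is not given in this excerpt)
ratings = [4] * 34

x = demographics + ratings  # one 39-dimensional SVM input sample
```

These 39 attributes form the candidate pool from which CBBOA selects feature subsets while simultaneously tuning the SVM parameters.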
Table 2. Classification results of CBBOA-SVM-FS in the light of four metrics.
Fold | ACC | MCC | Sensitivity | Specificity
Num.1 | 0.950 | 0.905 | 0.900 | 1.000
Num.2 | 0.952 | 0.909 | 0.909 | 1.000
Num.3 | 0.952 | 0.908 | 1.000 | 0.900
Num.4 | 0.952 | 0.909 | 0.909 | 1.000
Num.5 | 0.952 | 0.908 | 1.000 | 0.900
Num.6 | 0.857 | 0.716 | 0.909 | 0.800
Num.7 | 0.905 | 0.826 | 0.818 | 1.000
Num.8 | 0.950 | 0.905 | 1.000 | 0.900
Num.9 | 0.950 | 0.905 | 1.000 | 0.900
Num.10 | 1.000 | 1.000 | 1.000 | 1.000
AVG | 0.942 | 0.889 | 0.945 | 0.940
STD | 0.037 | 0.074 | 0.064 | 0.070
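The four metrics in Table 2 follow from each fold's confusion matrix. The sketch below shows the standard definitions; the confusion-matrix counts are an assumed example (a 20-sample fold with 10 positives and 10 negatives), chosen so that the output matches the Num.1 row to three decimals.

```python
import math

def fold_metrics(tp, tn, fp, fn):
    """Standard ACC, MCC, sensitivity, and specificity from one fold's
    confusion matrix (tp/tn = true positives/negatives, fp/fn = false
    positives/negatives)."""
    acc = (tp + tn) / (tp + tn + fp + fn)
    sens = tp / (tp + fn)                  # true positive rate
    spec = tn / (tn + fp)                  # true negative rate
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return acc, mcc, sens, spec

# Assumed fold: 10 positives (9 correctly classified) and
# 10 negatives (all correctly classified)
acc, mcc, sens, spec = fold_metrics(tp=9, tn=10, fp=0, fn=1)
```

MCC is included alongside ACC because it stays informative when the two classes are imbalanced within a fold, which plain accuracy can mask.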
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Wang, Z.; Liang, G.; Chen, H. Tool for Predicting College Student Career Decisions: An Enhanced Support Vector Machine Framework. Appl. Sci. 2022, 12, 4776. https://0-doi-org.brum.beds.ac.uk/10.3390/app12094776

