Article

Neural Computing Improvement Using Four Metaheuristic Optimizers in Bearing Capacity Analysis of Footings Settled on Two-Layer Soils

by Hossein Moayedi 1,2, Dieu Tien Bui 3 and Phuong Thao Thi Ngo 4,*
1 Department for Management of Science and Technology Development, Ton Duc Thang University, Ho Chi Minh City 758307, Vietnam
2 Faculty of Civil Engineering, Ton Duc Thang University, Ho Chi Minh City 758307, Vietnam
3 Geographic Information System Group, Department of Business and IT, University of South-Eastern Norway, N-3800 Bø i Telemark, Norway
4 Institute of Research and Development, Duy Tan University, Da Nang 550000, Vietnam
* Author to whom correspondence should be addressed.
Submission received: 4 November 2019 / Revised: 25 November 2019 / Accepted: 27 November 2019 / Published: 3 December 2019
(This article belongs to the Special Issue Artificial Intelligence in Smart Buildings)

Abstract:
This study outlines the applicability of four metaheuristic algorithms, namely, the whale optimization algorithm (WOA), league champion optimization (LCA), moth–flame optimization (MFO), and ant colony optimization (ACO), for improving the performance of an artificial neural network (ANN) in analyzing the bearing capacity of footings settled on two-layered soils. To this end, the models estimate the stability/failure of the system by taking key soil factors into consideration. The complexity of each network is optimized through a sensitivity analysis process. The performance of the ensembles is compared with a typical ANN to evaluate the efficiency of the applied optimizers. It was shown that incorporating the WOA, LCA, MFO, and ACO algorithms resulted in 14.49%, 13.41%, 18.30%, and 35.75% reductions in the prediction error of the ANN, respectively. Moreover, a ranking system is developed to compare the efficiency of the models. The results revealed that the ACO–ANN performs most accurately, followed by the MFO–ANN, WOA–ANN, and LCA–ANN. Lastly, the outcomes demonstrated that the ACO–ANN can be a promising alternative to the traditional methods used for analyzing the bearing capacity of two-layered soils.

1. Introduction

Soil bearing capacity is one of the most crucial engineering parameters and needs to be meticulously investigated before any construction activity [1,2]. Thus, an accurate approximation of the bearing capacity is a very important prerequisite of many geotechnical engineering projects, as it is a function of various soil characteristics [3]. The ultimate applicable stress (Fult) is obtained at the maximum settlement ratio, which is 0.1 of the footing width [4,5]. In this regard, many scholars have investigated or introduced relationships for estimating Fult [6,7]. Lotfizadeh and Kamalian [8], for example, used the stress characteristic lines method for forecasting the static bearing capacity of strip footings installed on two-layered soils. Up to now, different numerical and analytical approaches have been utilized to analyze the bearing capacity [9,10,11]. However, traditional and laboratory approaches are not applicable without spending a considerable amount of time and money. On the other hand, owing to the high competency of artificial intelligence techniques in different engineering applications, they can be used as inexpensive yet accurate models for estimating geotechnical parameters like the bearing capacity.
The advent of soft computing provided accurate models such as the artificial neural network (ANN) and the adaptive neuro-fuzzy inference system (ANFIS) for numerous engineering calculations, with a focus on estimation tasks. These models were also successfully used for bearing capacity analysis [12,13,14]. In this sense, Padmini et al. [15] employed neuro-fuzzy, ANN, and fuzzy models for predicting the ultimate bearing capacity of shallow foundations on cohesionless soil. Their results showed the superiority of intelligent models over popular bearing capacity theories. Also, Alavi and Sadrossadat [16] employed linear genetic programming to estimate the ultimate bearing capacity of shallow foundations resting on rock masses.
Metaheuristic algorithms offer potent solutions for many optimization problems. They are also used for optimizing the performance of well-known predictive models like the support vector machine (SVM), ANN, and ANFIS [17,18,19]. As for the application of metaheuristic algorithms in bearing capacity analysis, different algorithms have been applied to enhance the accuracy of the mentioned models [2,20,21]. Moayedi et al. [22] applied the biogeography-based optimization (BBO) algorithm to the ANN and ANFIS for estimating the failure likelihood of shallow footings. The results showed that the used algorithm can increase the classification accuracy of the ANN (from 98.2% to 98.4%) and, more considerably, of the ANFIS (from 97.6% to 98.5%). Likewise, Moayedi et al. [23] compared the optimization capability of the dragonfly algorithm (DA) and Harris hawks optimization (HHO) in adjusting the computational parameters of the ANN. Their study revealed that both algorithms can effectively handle the mentioned task; however, referring to the calculated values of the area under the curve (AUC), the DA (AUC = 0.942 and error = 0.1171) performed more accurately than the HHO (AUC = 0.915 and error = 0.1350).
According to the literature review, despite the broad application of popular metaheuristics (e.g., the imperialist competition algorithm (ICA) and particle swarm optimization (PSO)) in bearing capacity analysis [2,24,25], there are still many unused techniques which might be more capable. Hence, the main focus of the present paper was to investigate the applicability of several metaheuristic algorithms, namely, the whale optimization algorithm (WOA), league champion optimization (LCA), moth–flame optimization (MFO), and ant colony optimization (ACO), for optimizing the performance of the ANN in order to discover more powerful models. The necessity of coupling these algorithms lies in some computational drawbacks of the ANN [26,27], which can be overcome through proper adjustment of its weights and biases. In other words, the main contribution of these algorithms to the stated problem is to exploit their search advantages for an accurate evaluation of the relationship between the bearing capacity and the soil parameters.
Hereafter, the paper is structured in four major parts. The used algorithms are described in Section 2, data provision is explained in Section 3, results are presented and discussed in Section 4, and Section 5 gives the conclusion.

2. Methodology

2.1. Artificial Neural Network

The artificial neural network (ANN) is the basic model of this study, which we aimed to optimize. ANNs have shown high capability in estimating different engineering parameters [28,29,30]. Their robustness in dealing with complex and non-linear tasks makes them universal approximators [31]. The idea of neural learning was first suggested by McCulloch and Pitts [32]. The ANN can be realized in different forms, such as the radial basis function (RBF) network and the generalized regression neural network (GRNN), but the most common of these is the multi-layer perceptron (MLP) [33]. An ordinary view of the MLP is depicted in Figure 1. It follows the so-called backpropagation (BP) learning method [34] with a Levenberg–Marquardt (LM) training algorithm [35] by default. These components allow the model to map the relationship between two sets of variables called input(s) and target(s). Mathematically, assuming T_m (m = 1, 2, …, M) as the inputs of the j-th computational unit, the response O_j is calculated as follows:
O_j = F( Σ_{m=1}^{M} T_m W_{mj} + b_j ),  (1)
where F stands for the activation function, and the terms W_mj and b_j are the corresponding weights and bias, respectively.
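For illustration, Equation (1) can be evaluated for a whole layer of neurons with a few lines of NumPy; the dimensions and values below are arbitrary examples rather than parameters of this study.

```python
import numpy as np

def tansig(x):
    # Tangent-sigmoid activation, numerically equivalent to np.tanh(x)
    return 2.0 / (1.0 + np.exp(-2.0 * x)) - 1.0

# Example dimensions: M = 3 inputs feeding one hidden layer of 2 neurons
T = np.array([0.5, -1.2, 0.3])                 # inputs T_m
W = np.array([[0.1, -0.4],                     # W[m, j]: weight from input m to neuron j
              [0.7,  0.2],
              [-0.3, 0.5]])
b = np.array([0.05, -0.1])                     # biases b_j

O = tansig(T @ W + b)                          # Equation (1) applied to every neuron j
print(O)
```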

2.2. Hybrid Metaheuristic Algorithms

Due to the successful performance of metaheuristic algorithms in optimizing regular predictive models, four such algorithms are applied to the ANN in this study. The considered optimizers are the whale optimization algorithm (WOA), league champion optimization (LCA), moth–flame optimization (MFO), and ant colony optimization (ACO), which serve as search methods for finding the optimal solution to a given problem. In the case of this study, a general MLP is posed as the problem and, with respect to a cost function, the algorithms aim to find the best weights and biases for the network.
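In practice, this coupling treats all ANN weights and biases as a single real-valued vector over which the metaheuristic searches. A minimal sketch of such a cost (fitness) function is given below; the decoding layout and function names are illustrative assumptions, and the training RMSE serves as the objective, as described later in Section 4.

```python
import numpy as np

def decode(theta, n_in, n_hid):
    """Split a flat parameter vector into the MLP weight matrices and bias vectors."""
    i = 0
    W1 = theta[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = theta[i:i + n_hid];                              i += n_hid
    W2 = theta[i:i + n_hid].reshape(n_hid, 1);            i += n_hid
    b2 = theta[i:i + 1]
    return W1, b1, W2, b2

def fitness(theta, X, y, n_in, n_hid):
    """Cost handed to the optimizer: training RMSE of the candidate MLP."""
    W1, b1, W2, b2 = decode(theta, n_in, n_hid)
    hidden = np.tanh(X @ W1 + b1)          # tangent-sigmoid hidden layer
    y_hat = (hidden @ W2 + b2).ravel()     # linear (purelin) output layer
    return np.sqrt(np.mean((y - y_hat) ** 2))
```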
The WOA was designed by Mirjalili and Lewis [36], inspired by the bubble-net hunting of humpback whales. Its three major stages are (a) shrinking encircling of the prey, (b) exploitation (i.e., bubble-net attacking), and (c) exploration (i.e., searching for the prey). More information about the WOA can be found in previous studies [37,38,39,40]. The LCA was proposed by Kashan [41], based on the competitions held in sports leagues. Considering the league schedule and some relationships for determining the winner/loser of each artificial match, the most appropriate solution is found. The LCA was detailed in previous studies [42,43,44]. As a further nature-inspired optimization technique, the MFO was suggested by Mirjalili [45]. The pivotal idea of this algorithm is the navigation method of moths, known as transverse orientation. The candidate solutions in the MFO are moths, and their positions in space express the problem’s variables. The optimization process of this optimizer was well described in previous studies [46,47,48]. Lastly, the ACO is a population-based optimization technique which mimics the foraging behavior of ant colonies. It was presented by Dorigo and Di Caro [49]. In this algorithm, artificial ants guide each other to achieve a proper (i.e., short) path leading to a promising food source. For more details, please refer to References [50,51,52].
The pseudo-codes of the WOA, LCA, MFO, and ACO algorithms are shown below as Algorithms 1–4.
Algorithm 1. The pseudo-code of the whale optimization algorithm (WOA) [53]
Initialize the whale population Xi (i = 1, 2, …, n)
Calculate the fitness of each search agent
X* = the best search agent
while (t < maximum number of iterations)
  for each search agent
   Update a, A, C, l, and P
   if1 (P < 0.5)
      if2 (|A| < 1)
       Update the position of the current search agent
      else if2 (|A| ≥ 1)
      Select a random search agent (Xrand)
      Update the position of the current search agent
      end if2
   else if1 (P ≥ 0.5)
      Update the position of the current search agent
   end if1
  end for
  Check if any search agent goes beyond the search space and amend it
  Calculate the fitness of each search agent
  Update X* if there is a better solution
  t = t + 1
end while
return X∗
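For readers who prefer code to pseudo-code, the following is a minimal NumPy sketch of the position updates summarized in Algorithm 1; it follows the standard WOA rules of Mirjalili and Lewis [36] (shrinking encircling, random-agent exploration, and a logarithmic spiral), while the spiral constant b and the function interface are illustrative assumptions rather than settings used in this study.

```python
import numpy as np

def woa_step(positions, best, t, max_iter, b=1.0, rng=None):
    """One WOA iteration: update every whale with respect to the best agent (Algorithm 1)."""
    rng = np.random.default_rng() if rng is None else rng
    n, dim = positions.shape
    a = 2.0 - 2.0 * t / max_iter                     # a decreases linearly from 2 to 0
    new_positions = positions.copy()
    for i in range(n):
        r1, r2 = rng.random(), rng.random()
        A, C = 2.0 * a * r1 - a, 2.0 * r2
        p = rng.random()
        if p < 0.5:
            if abs(A) < 1:                           # exploitation: encircle the best agent
                D = np.abs(C * best - positions[i])
                new_positions[i] = best - A * D
            else:                                    # exploration: move toward a random agent
                x_rand = positions[rng.integers(n)]
                D = np.abs(C * x_rand - positions[i])
                new_positions[i] = x_rand - A * D
        else:                                        # bubble-net: logarithmic spiral around the best
            l = rng.uniform(-1.0, 1.0)
            D = np.abs(best - positions[i])
            new_positions[i] = D * np.exp(b * l) * np.cos(2.0 * np.pi * l) + best
    return new_positions
```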
Algorithm 2. The pseudo-code of the league champion optimization algorithm (LCA) [54]
Initialize the league size (L) and the number of seasons (S); t = 1;
Generate a league schedule;
Initialize team formations (generate a population of L solutions) and determine the playing strengths (function or fitness value) along with them. Let the initialization also be the team’s current best formation;
While t <= S × (L − 1)
   Based on the league schedule at week t, determine the winner/loser among every pair of teams using a playing strength-based criterion;
   t = t + 1
   For i = 1 to L
   Devise a new formation for team i for the forthcoming match, while taking into account the team’s current best formation and previous week events. Evaluate the playing strength of the resulting arrangement;
      If the new formation is the fittest one (that is, the new solution is the best solution achieved so far for the i-th member of the population), hereafter consider the new formation as the team’s current best formation;
   End For
   If mod (t, L−1) = 0
      Generate a league schedule;
   End If
End While
Algorithm 3. The pseudo-code of the moth–flame optimization (MFO) algorithm [55]
While iteration < max iteration
Update flame number
Obj = fitness function (Moths);
if Iteration = 1
      Sort the moths based on their objective functions; update the flames
      Iteration = 0;
else   Sort the moths based on their objective functions and flames from last iteration; update the flames
end
linearly decrease the convergence constant
for j = 1: Number of moths
      for k = 1: Number of variables, update r and t
Calculate the distance of moth from each flame; update the values of the variables of moth from the corresponding flame
      end
end
Iteration = iteration + 1;
end
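Similarly, the core moth-position update of Algorithm 3 can be sketched as below, following Mirjalili's original spiral formulation [45]; the flame list is assumed to be already sorted by fitness, the linear reduction of the flame number is handled outside this function, and the constant b is an illustrative assumption.

```python
import numpy as np

def mfo_step(moths, flames, iteration, max_iter, b=1.0, rng=None):
    """Spiral update of every moth around its assigned (sorted) flame (Algorithm 3)."""
    rng = np.random.default_rng() if rng is None else rng
    n, dim = moths.shape
    r = -1.0 - iteration / max_iter                  # convergence constant decreases from -1 to -2
    new_moths = np.empty_like(moths)
    for j in range(n):
        flame = flames[min(j, len(flames) - 1)]      # surplus moths all fly toward the last flame
        for k in range(dim):
            t = (r - 1.0) * rng.random() + 1.0       # t drawn from [r, 1]
            D = abs(flame[k] - moths[j, k])          # distance of the moth from its flame
            new_moths[j, k] = D * np.exp(b * t) * np.cos(2.0 * np.pi * t) + flame[k]
    return new_moths
```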
Algorithm 4. The pseudo-code of the ant colony optimization (ACO) algorithm [56]
Initialization:
   Algorithm parameters;
   Ant population size K;
   Maximum number of iteration NMax;
Generation:
   Generating the pheromone matrix for the ant k;
   Update the pheromone values and set x* = k;
   i = 1;
Repeat
   for k = 1 to K
   Compute the cost function for the ant k;
   Compute probability move of ant individual;
   if f(k) < f(x*) Then
        Update the pheromone values;
        Set x* = k;
End if
End for
   i = i + 1;
Until i > NMax;

3. Data Collection

By implementing a two-dimensional (2D) axisymmetric finite element model, a shallow footing settled on a two-layered soil was analyzed under different conditions, and the settlement (Uy) was derived in each stage. A total of 901 analyses were carried out using 15-node triangular elements, where the effective variables were the unit weight (kN/m3), friction angle, elastic modulus (kN/m2), dilation angle, Poisson’s ratio (v), applied stress (kN/m2), and setback distance (m). Figure 2 illustrates the distribution pattern of these factors.
The descriptive statistics of the dataset are also presented in Table 1. As can be seen, the minimum and maximum values obtained for the settlement were 0 and 0.10 m, respectively. Similar to a previous study [23], it was deemed that, if Uy is less than 0.05 m, the system fails; otherwise, it remains stable. The failure and stability of the system are represented by the values 1 and 0, respectively. The gathered dataset (without normalization) was then randomly divided into two groups, namely, training (for development of the intelligent models) and testing (for evaluating their prediction capability). In this regard, 721 samples (i.e., 80% of the whole dataset) were assigned to the training group, and the remaining 180 samples (i.e., 20%) served as the testing data (addressing unseen soil conditions).
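To make the labeling and splitting step concrete, a small sketch is given below; the array names and the random seed are illustrative, and the stand-in data merely mimic the reported shapes (in the real study, X and uy come from the 901 finite element analyses).

```python
import numpy as np

rng = np.random.default_rng(0)                 # illustrative seed
X = rng.random((901, 7))                       # stand-in for the seven soil inputs
uy = rng.random(901) * 0.10                    # stand-in for the computed settlements (m)

labels = (uy < 0.05).astype(int)               # 1 = failure, 0 = stable (rule stated above)

idx = rng.permutation(len(X))                  # shuffle the 901 FEM analyses
n_train = 721                                  # 80% of the data, as reported
X_train, y_train = X[idx[:n_train]], labels[idx[:n_train]]
X_test, y_test = X[idx[n_train:]], labels[idx[n_train:]]
```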

4. Results and Discussion

To meet the objective of the study (i.e., investigating the optimization capability of the abovementioned metaheuristic algorithms), each algorithm was coupled with the ANN and tasked with finding the most appropriate matrix of weights and biases for it. To this end, firstly, an ANN with one hidden layer containing six neurons (determined by a trial-and-error process) was proposed as the base model. Thus, regarding the number of input/output parameters, the considered MLP took the form 7 × 6 × 1. Note that, in the present study, the activation functions of the hidden and output neurons were set to “tangent-sigmoid (i.e., Tansig)” and “purelin”, respectively. Next, the MLP was coupled with the WOA, LCA, MFO, and ACO algorithms to create the WOA–ANN, LCA–ANN, MFO–ANN, and ACO–ANN neural ensembles.
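As a quick check of the search-space size implied by this 7 × 6 × 1 structure, each optimizer has to tune 7 × 6 + 6 + 6 × 1 + 1 = 55 weights and biases, as the short computation below illustrates.

```python
# Number of tunable parameters implied by the 7-6-1 structure
n_in, n_hid, n_out = 7, 6, 1
dim = n_in * n_hid + n_hid + n_hid * n_out + n_out
print(dim)   # 55 values form each metaheuristic's candidate solution vector
```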

4.1. Hybridizing the ANN Using Metaheuristic Algorithms

After creating the ensembles, a population-based trial-and-error process was carried out to determine the best-fitted complexity of the metaheuristic algorithms. To do so, all four networks were tested with nine different population sizes, namely, 10, 25, 50, 75, 100, 200, 300, 400, and 500. Each model was run for 1000 iterations to minimize the error. In this process, the root-mean-square error (RMSE) was set as the objective function (OF) measuring the training error in each iteration. This function is expressed in Equation (2). Figure 3a shows the obtained RMSEs for the tested population sizes, and the convergence curve of the most accurate configuration of each ensemble is also illustrated in this figure.
RMSE = √[ (1/N) Σ_{i=1}^{N} (Y_i,observed − Y_i,predicted)^2 ],  (2)
where N is the number of samples, and Y_i,observed and Y_i,predicted stand for the observed and predicted stability values, respectively.
As can be seen, all four models exhibited an acceptable error in analyzing the relationship between the stability condition and its influential parameters. In detail, the smallest error was obtained with a population size of 400 for the WOA–ANN (RMSE = 0.307658318), 200 for the LCA–ANN (RMSE = 0.312263011), 50 for the MFO–ANN (RMSE = 0.298588793), and 10 for the ACO–ANN (RMSE = 0.274504799).
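This sensitivity analysis is essentially a grid search over the candidate population sizes. A hedged sketch is given below, where run_optimizer is a hypothetical wrapper around any of the four coupled metaheuristics and fitness is the training-RMSE cost function sketched in Section 2.2.

```python
population_sizes = [10, 25, 50, 75, 100, 200, 300, 400, 500]
results = {}
for pop in population_sizes:
    # run_optimizer is a hypothetical wrapper around one of the coupled metaheuristics;
    # fitness is the illustrative training-RMSE cost function sketched in Section 2.2
    best_theta, best_rmse = run_optimizer(objective=lambda th: fitness(th, X_train, y_train, 7, 6),
                                          dim=55, pop_size=pop, iters=1000)
    results[pop] = best_rmse

best_pop = min(results, key=results.get)   # population size yielding the lowest training RMSE
```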

4.2. Accuracy Assessment Criteria

The classification accuracy of the models was measured using a well-known criterion, namely, the area under the receiver operating characteristic curve (AUROC). Note that it was obtained by plotting the ROC diagrams, which are a good way of assessing accuracy in diagnostic problems, such as natural hazard models [57,58,59,60,61]. Moreover, two error criteria, the RMSE and the mean absolute error (MAE), were used to measure the performance error of the models. Equation (3) expresses the formulation of the MAE.
MAE = (1/N) Σ_{i=1}^{N} | Y_i,observed − Y_i,predicted |.  (3)
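For reference, the three criteria can be computed as in the sketch below; the roc_auc_score call assumes scikit-learn is available, and the toy arrays are only for illustration.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def rmse(y_obs, y_pred):
    return np.sqrt(np.mean((y_obs - y_pred) ** 2))     # Equation (2)

def mae(y_obs, y_pred):
    return np.mean(np.abs(y_obs - y_pred))             # Equation (3)

# Toy example: binary stability labels vs. continuous network outputs
y_obs = np.array([1, 0, 1, 0, 1])
y_pred = np.array([0.9, 0.2, 0.7, 0.4, 0.8])
print(rmse(y_obs, y_pred), mae(y_obs, y_pred), roc_auc_score(y_obs, y_pred))
```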

4.3. Accuracy Assessment of the Predictive Models

In this part, the results of the best-fitted models (i.e., with elite population sizes) are evaluated to examine their simulation capability. As is known, the results of the training phase address the learning quality of the model, and the testing results indicate the generalization capability for unseen conditions of the problem.
In the training phase, the calculated values of RMSE and MAE for the typical ANN were 0.3465 and 0.3055, respectively. Both of these indices experienced considerable decreases by applying the WOA (0.3076 and 0.2555), LCA (0.3122 and 0.2592), MFO (0.2985 and 0.2430), and ACO (0.2745 and 0.1783) optimization techniques. Also, in terms of the AUROC, the accuracy of the ANN was increased from 0.956 to 0.969, 0.964, 0.969, and 0.965, respectively. At a glance, it can be deduced that the models can improve the learning capability of the ANN. Figure 4 displays the predicted and actual stability values for the ensemble models. The output ranges were [−0.196124259, 1.163826771], [−0.285459666, 1.165811194], [−0.280854543, 1.220819059], and [−0.323683705, 1.197618633], respectively.
Similar to the first phase, all the neural–metaheuristic ensembles surpassed the ANN in the testing phase, which means the algorithms performed efficiently in adjusting the computational weights and biases of this tool. In detail, the RMSE was reduced from 0.3687 to 0.3399, 0.3426, 0.3330, and 0.3133, and the MAE fell from 0.3312 to 0.2832, 0.2868, 0.2706, and 0.2128. The differences between the actual and predicted stability values (labeled as error) are illustrated in Figure 5, along with the histogram of the errors. The products of the WOA-ANN, LCA-ANN, MFO-ANN, and ACO-ANN vary in the ranges [−0.19260775, 1.121547514], [−0.221848351, 1.183820947], [−0.176990489, 1.103579629], and [−0.072023941, 1.206028442], respectively.
Moreover, the ROC curves for the predictions of the ensemble models are shown in Figure 6. The calculated areas under the curves indicate more than 90% accuracy for all five models; however, the AUROCs of the hybrid ensembles were consistently higher than that of the unreinforced ANN (AUROC = 0.930).
Up to this point, all the used criteria confirmed that the metaheuristic algorithms can develop a more powerful ANN compared to the BP learning method. The results of the WOA–ANN, LCA–ANN, MFO–ANN, and ACO–ANN tools are evaluated in this section to compare the efficiency of the algorithms. A score-based system was developed to rank the models and determine the most accurate one. As Table 2 and Table 3 show, each model received three scores based on the calculated RMSE, MAE, and AUROC. The summation of these scores then determined the model producing the most consistent outputs in each phase. According to Table 3, the ACO-based model obtained the highest scores for all accuracy criteria except the training AUROC (0.965), which was second to the WOA and MFO (0.969). Therefore, it attained the highest overall scores in both the training and the testing phases, followed by the MFO–ANN, which closely surpassed the WOA–ANN, while the LCA–ANN ranked lowest.
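The scoring scheme can be reproduced as in the sketch below, which ranks the models on each criterion using the testing values of Table 2 and sums the per-criterion scores; the tie-handling rule (tied models share the higher score) is inferred from the reported Table 3 values.

```python
# Testing-phase values from Table 2: (RMSE, MAE, AUROC) per model
models = {
    "MLP":     (0.3687, 0.3312, 0.930),
    "WOA-ANN": (0.3399, 0.2832, 0.939),
    "LCA-ANN": (0.3426, 0.2868, 0.935),
    "MFO-ANN": (0.3330, 0.2706, 0.939),
    "ACO-ANN": (0.3133, 0.2128, 0.944),
}

def criterion_scores(values, higher_is_better):
    """Score = number of models minus the count of distinct strictly better values (ties share a score)."""
    scores = {}
    for model, v in values.items():
        better = {w for w in values.values() if (w > v if higher_is_better else w < v)}
        scores[model] = len(values) - len(better)
    return scores

rmse_s = criterion_scores({m: v[0] for m, v in models.items()}, higher_is_better=False)
mae_s  = criterion_scores({m: v[1] for m, v in models.items()}, higher_is_better=False)
auc_s  = criterion_scores({m: v[2] for m, v in models.items()}, higher_is_better=True)
totals = {m: rmse_s[m] + mae_s[m] + auc_s[m] for m in models}
print(totals)   # ACO-ANN obtains the highest total score (15) in the testing phase
```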
Moreover, in comparison with the HHO and DA applied to the same data by Moayedi et al. [23], it was deduced that the methods of the current study present a more accurate analysis and approximation of bearing capacity. In detail, the RMSE and MAE obtained for the DA–MLP (superior to the HHO–MLP) were 0.3421 and 0.2904, which are larger than the results obtained for our WOA–ANN, MFO–ANN, and ACO–ANN. Also, the best AUROC of this study was higher than that for both models in the mentioned reference (0.944 vs. 0.942).

4.4. Presenting the Neural Predictive Formula

In this section, owing to the highest accuracy obtained for the ACO–ANN, the neural relationship of this model was extracted and presented in the form of Equation (4) to predict the stability value from the considered effective parameters. Note that this formula was developed from the optimized weights and bias of the MLP output neuron. There are six middle parameters (A, B, …, F), which are expressed by Equation (5).
Stability value_ACO–ANN = 0.0684 × A + 0.1911 × B − 0.4022 × C − 0.8409 × D − 0.0458 × E − 0.9175 × F − 0.5984.  (4)
[A, B, C, D, E, F]^T = Tansig( W × [X_1, X_2, X_3, X_4, X_5, X_6, X_7]^T + b ),  (5)

where

W =
[ 0.1966  0.5483  0.9172  1.1127  0.2130  0.8492  0.2915 ]
[ 0.8962  0.6059  1.1167  0.2273  0.1856  0.8686  0.1111 ]
[ 1.0910  0.1197  0.0399  0.3251  1.0160  0.9038  0.3303 ]
[ 0.6287  0.3309  0.7786  0.1076  0.7085  0.7997  1.0031 ]
[ 0.3544  1.2248  0.5554  0.6322  0.6777  0.0116  0.6907 ]
[ 0.9035  0.0708  0.8111  0.2542  1.0138  0.2361  0.8017 ]

b = [1.8084, 1.0850, 0.3617, 0.3617, 1.0850, 1.8084]^T, and X_1–X_7 denote the input parameters listed in Table 1.
Tansig(x) = 2 / (1 + e^(−2x)) − 1.
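If the closed-form model is to be used directly, Equations (4) and (5) can be evaluated as in the sketch below; the output-layer coefficients are taken from Equation (4), whereas the hidden-layer matrix W_hidden and bias b_hidden must be filled in from Equation (5), and the ordering of the seven inputs is assumed to follow Table 1.

```python
import numpy as np

def tansig(x):
    return 2.0 / (1.0 + np.exp(-2.0 * x)) - 1.0          # as defined above

# Output-layer coefficients of Equation (4)
w_out = np.array([0.0684, 0.1911, -0.4022, -0.8409, -0.0458, -0.9175])
b_out = -0.5984

def aco_ann_stability(x, W_hidden, b_hidden):
    """x: seven soil inputs (ordered as in Table 1); W_hidden (6x7) and b_hidden (6,) from Equation (5)."""
    hidden = tansig(W_hidden @ x + b_hidden)              # middle parameters A ... F
    return float(w_out @ hidden + b_out)                  # Equation (4)
```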

5. Conclusions

The optimization potential of four metaheuristic techniques, namely, the WOA, LCA, MFO, and ACO, was evaluated in this paper. The algorithms were coupled with an artificial neural network, and the developed ensembles were applied to an important geotechnical problem, bearing capacity analysis. It was revealed that the ACO–ANN was the most accurate model, followed by the MFO–ANN as the second most efficient ensemble. Based on the findings of this study, the combination of neural computing and metaheuristic techniques (especially the ACO–ANN) provides fast and inexpensive yet accurate models for analyzing the stability of footings over two-layered soils. This is in contrast to traditional methods (e.g., laboratory studies and finite element techniques), which are time-consuming and entail costly and destructive tests. Furthermore, the comparison with the conventional MLP neural network showed that using metaheuristic techniques is an advantageous way of enhancing its performance (i.e., around 95% prediction accuracy). In fact, this research suggests the use of powerful inspirations from real-world phenomena for optimizing the computational parameters of the ANN.

Author Contributions

H.M. and D.T.B. wrote the manuscript, and discussed and analyzed the data. H.M. and P.T.T.N. edited, restructured, and professionally optimized the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Momeni, E.; Nazir, R.; Armaghani, D.J.; Sohaie, H. Bearing capacity of precast thin-walled foundation in sand. Proc. Inst. Civ. Eng. Geotech. Eng. 2015, 168, 539–550. [Google Scholar] [CrossRef]
  2. Moayedi, H.; Moatamediyan, A.; Nguyen, H.; Bui, X.-N.; Bui, D.T.; Rashid, A.S.A. Prediction of ultimate bearing capacity through various novel evolutionary and neural network models. Eng. Comput. 2019, 35, 1–17. [Google Scholar] [CrossRef]
  3. Keskin, M.S.; Laman, M. Model studies of bearing capacity of strip footing on sand slope. Ksce J. Civ. Eng. 2013, 17, 699–711. [Google Scholar] [CrossRef]
  4. Das, B.M.; Sobhan, K. Principles of Geotechnical Engineering; Cengage Learning: Belmont, CA, USA, 2013. [Google Scholar]
  5. Ranjan, G.; Rao, A. Basic and Applied Soil Mechanics; New Age International: New Delhi, India, 2007. [Google Scholar]
  6. Meyerhof, G.; Hanna, A. Ultimate bearing capacity of foundations on layered soils under inclined load. Can. Geotech. J. 1978, 15, 565–572. [Google Scholar] [CrossRef]
  7. Terzaghi, K.; Peck, R.B.; Mesri, G. Soil Mechanics in Engineering Practice; John Wiley & Sons: Hoboken, NJ, USA, 1996. [Google Scholar]
  8. Lotfizadeh, M.R.; Kamalian, M. Estimating bearing capacity of strip footings over two-layered sandy soils using the characteristic lines method. Int. J. Civ. Eng. 2016, 14, 107–116. [Google Scholar] [CrossRef]
  9. Frydman, S.; Burd, H.J. Numerical studies of bearing-capacity factor N γ. J. Geotech. Geoenviron. Eng. 1997, 123, 20–29. [Google Scholar] [CrossRef]
  10. Florkiewicz, A. Upper bound to bearing capacity of layered soils. Can. Geotech. J. 1989, 26, 730–736. [Google Scholar] [CrossRef]
  11. Ghazavi, M.; Eghbali, A.H. A simple limit equilibrium approach for calculation of ultimate bearing capacity of shallow foundations on two-layered granular soils. Geotech. Geol. Eng. 2008, 26, 535–542. [Google Scholar] [CrossRef]
  12. Ziaee, S.A.; Sadrossadat, E.; Alavi, A.H.; Mohammadzadeh Shadmehri, D. Explicit formulation of bearing capacity of shallow foundations on rock masses using artificial neural networks: Application and supplementary studies. Environ. Earth Sci. 2015, 73, 3417–3431. [Google Scholar] [CrossRef]
  13. Moayedi, H.; Hayati, S. Modelling and optimization of ultimate bearing capacity of strip footing near a slope by soft computing methods. Appl. Soft Comput. 2018, 66, 208–219. [Google Scholar] [CrossRef]
  14. Acharyya, R.; Dey, A.; Kumar, B. Finite element and ANN-based prediction of bearing capacity of square footing resting on the crest of c-φ soil slope. Int. J. Geotech. Eng. 2018, 1–12. [Google Scholar] [CrossRef]
  15. Padmini, D.; Ilamparuthi, K.; Sudheer, K. Ultimate bearing capacity prediction of shallow foundations on cohesionless soils using neurofuzzy models. Comput. Geotech. 2008, 35, 33–46. [Google Scholar] [CrossRef]
  16. Alavi, A.H.; Sadrossadat, E. New design equations for estimation of ultimate bearing capacity of shallow foundations resting on rock masses. Geosci. Front. 2016, 7, 91–99. [Google Scholar] [CrossRef] [Green Version]
  17. Nguyen, H.; Mehrabi, M.; Kalantar, B.; Moayedi, H.; Abdullahi, M.A.M. Potential of hybrid evolutionary approaches for assessment of geo-hazard landslide susceptibility mapping. Geomat. Nat. Hazards Risk 2019, 10, 1667–1693. [Google Scholar] [CrossRef]
  18. Moayedi, H.; Raftari, M.; Sharifi, A.; Jusoh, W.A.W.; Rashid, A.S.A. Optimization of ANFIS with GA and PSO estimating α ratio in driven piles. Eng. Comput. 2019, 1–12. [Google Scholar] [CrossRef]
  19. Wang, J.; Xing, Y.; Cheng, L.; Qin, F.; Ma, T. The prediction of Mechanical Properties of Cement Soil Based on PSO-SVM. In Proceedings of the 2010 International Conference on Computational Intelligence and Software Engineering, Wuhan, China, 10–12 December 2010; IEEE: Piscataway, NJ, USA; pp. 1–4. [Google Scholar]
  20. Moayedi, H.; Armaghani, D.J. Optimizing an ANN model with ICA for estimating bearing capacity of driven pile in cohesionless soil. Eng. Comput. 2018, 34, 347–356. [Google Scholar] [CrossRef]
  21. Harandizadeh, H.; Toufigh, M.M.; Toufigh, V. Application of improved ANFIS approaches to estimate bearing capacity of piles. Soft Comput. 2019, 23, 9537–9549. [Google Scholar] [CrossRef]
  22. Moayedi, H.; Nguyen, H.; Rashid, A.S.A. Novel metaheuristic classification approach in developing mathematical model-based solutions predicting failure in shallow footing. Eng. Comput. 2019, 1–8. [Google Scholar] [CrossRef]
  23. Moayedi, H.; Nguyen, H.; Rashid, A.S.A. Comparison of dragonfly algorithm and Harris hawks optimization evolutionary data mining techniques for the assessment of bearing capacity of footings over two-layer foundation soils. Eng. Comput. 2019, 1–11. [Google Scholar] [CrossRef]
  24. Moayedi, H.; Kalantar, B.; Dounis, A.; Tien Bui, D.; Foong, L.K. Development of Two Novel Hybrid Prediction Models Estimating Ultimate Bearing Capacity of the Shallow Circular Footing. Appl. Sci. 2019, 9, 4594. [Google Scholar] [CrossRef] [Green Version]
  25. Xiaohui, L. Determination of subsoil bearing capacity using genetic algorithm. Chin. J. Rock Mech. Eng. 2001, 20, 394–398. [Google Scholar]
  26. Moayedi, H.; Mehrabi, M.; Mosallanezhad, M.; Rashid, A.S.A.; Pradhan, B. Modification of landslide susceptibility mapping using optimized PSO-ANN technique. Eng. Comput. 2018, 35, 1–18. [Google Scholar] [CrossRef]
  27. Shakti, S.P.; Hassan, M.K.; Zhenning, Y.; Caytiles, R.D.; SN, I.N.C. Annual Automobile Sales Prediction Using ARIMA Model. Int. J. Hybrid Inf. Technol. 2017, 10, 13–22. [Google Scholar] [CrossRef]
  28. Moayedi, H.; Rezaei, A. An artificial neural network approach for under-reamed piles subjected to uplift forces in dry sand. Neural Comput. Appl. 2019, 31, 327–336. [Google Scholar] [CrossRef]
  29. Alnaqi, A.A.; Moayedi, H.; Shahsavar, A.; Nguyen, T.K. Prediction of energetic performance of a building integrated photovoltaic/thermal system thorough artificial neural network and hybrid particle swarm optimization models. Energy Convers. Manag. 2019, 183, 137–148. [Google Scholar] [CrossRef]
  30. Xi, W.; Li, G.; Moayedi, H.; Nguyen, H. A particle-based optimization of artificial neural network for earthquake-induced landslide assessment in Ludian county, China. Geomat. Nat. Hazards Risk 2019, 10, 1750–1771. [Google Scholar] [CrossRef] [Green Version]
  31. Hornik, K.; Stinchcombe, M.; White, H. Multilayer feedforward networks are universal approximators. Neural Netw. 1989, 2, 359–366. [Google Scholar] [CrossRef]
  32. McCulloch, W.S.; Pitts, W. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 1943, 5, 115–133. [Google Scholar] [CrossRef]
  33. Hornik, K. Approximation capabilities of multilayer feedforward networks. Neural Netw. 1991, 4, 251–257. [Google Scholar] [CrossRef]
  34. Hecht-Nielsen, R. Theory of the backpropagation neural network. In Neural Networks for Perception; Elsevier: Amsterdam, The Netherlands, 1992; pp. 65–93. [Google Scholar]
  35. Moré, J.J. The Levenberg-Marquardt algorithm: Implementation and theory. In Numerical Analysis; Springer: Berlin/Heidelberg, Germany, 1978; pp. 105–116. [Google Scholar]
  36. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  37. Mafarja, M.M.; Mirjalili, S. Hybrid whale optimization algorithm with simulated annealing for feature selection. Neurocomputing 2017, 260, 302–312. [Google Scholar] [CrossRef]
  38. Trivedi, I.N.; Jangir, P.; Kumar, A.; Jangir, N.; Totlani, R. A novel hybrid PSO–WOA algorithm for global numerical functions optimization. In Advances in Computer and Computational Sciences; Springer: Berlin/Heidelberg, Germany, 2018; pp. 53–60. [Google Scholar]
  39. Nasiri, J.; Khiyabani, F.M. A whale optimization algorithm (WOA) approach for clustering. Cogent Math. Stat. 2018, 5, 1483565. [Google Scholar] [CrossRef]
  40. Rana, N.; Latiff, M.S.A. A Cloud-based Conceptual Framework for Multi-Objective Virtual Machine Scheduling using Whale Optimization Algorithm. Int. J. Innov. Comput. 2018, 8. [Google Scholar] [CrossRef]
  41. Kashan, A.H. League Championship Algorithm: A New Algorithm for Numerical Function Optimization. In Proceedings of the 2009 International Conference of Soft Computing and Pattern Recognition, Paris, France, 7–10 December 2009; IEEE: Piscataway, NJ, USA; pp. 43–48. [Google Scholar]
  42. Kashan, A.H. League Championship Algorithm (LCA): An algorithm for global optimization inspired by sport championships. Appl. Soft Comput. 2014, 16, 171–200. [Google Scholar] [CrossRef]
  43. Jalili, S.; Kashan, A.H.; Hosseinzadeh, Y. League championship algorithms for optimum design of pin-jointed structures. J. Comput. Civ. Eng. 2016, 31, 04016048. [Google Scholar] [CrossRef]
  44. Kashan, A.H. An efficient algorithm for constrained global optimization and application to mechanical engineering design: League championship algorithm (LCA). Comput. Aided Des. 2011, 43, 1769–1792. [Google Scholar] [CrossRef]
  45. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl. Based Syst. 2015, 89, 228–249. [Google Scholar] [CrossRef]
  46. Savsani, V.; Tawhid, M.A. Non-dominated sorting moth flame optimization (NS-MFO) for multi-objective problems. Eng. Appl. Artif. Intell. 2017, 63, 20–32. [Google Scholar] [CrossRef]
  47. Yamany, W.; Fawzy, M.; Tharwat, A.; Hassanien, A.E. Moth-flame optimization for training multi-layer perceptrons. In Proceedings of the 2015 11th International Computer Engineering Conference (ICENCO), Cairo, Egypt, 29–30 December 2015; IEEE: Piscataway, NJ, USA; pp. 267–272. [Google Scholar]
  48. Yıldız, B.S.; Yıldız, A.R. Moth-flame optimization algorithm to determine optimal machining parameters in manufacturing processes. Mater. Test. 2017, 59, 425–429. [Google Scholar] [CrossRef]
  49. Dorigo, M.; Di Caro, G. Ant colony optimization: A new meta-heuristic. In Proceedings of the 1999 Congress on Evolutionary Computation-CEC99 (Cat. No. 99TH8406), Washington, DC, USA, 6–9 July 1999; IEEE: Piscataway, NJ, USA; pp. 1470–1477. [Google Scholar]
  50. Dorigo, M.; Birattari, M. Ant Colony Optimization; Springer: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
  51. Dorigo, M.; Blum, C. Ant colony optimization theory: A survey. Theor. Comput. Sci. 2005, 344, 243–278. [Google Scholar] [CrossRef]
  52. Socha, K.; Dorigo, M. Ant colony optimization for continuous domains. Eur. J. Oper. Res. 2008, 185, 1155–1173. [Google Scholar] [CrossRef] [Green Version]
  53. Sanprasit, K.; Artrit, P. Optimal Comparison Using MOWOA and MOGWO for PID Tuning of DC Servo Motor. J. Autom. Control Eng. 2019, 7, 45–49. [Google Scholar] [CrossRef]
  54. Bingol, H.; Alatas, B. Chaotic league championship algorithms. Arab. J. Sci. Eng. 2016, 41, 5123–5147. [Google Scholar] [CrossRef]
  55. Khalilpourazari, S.; Pasandideh, S.H.R. Multi-item EOQ model with nonlinear unit holding cost and partial backordering: Moth-flame optimization algorithm. J. Ind. Prod. Eng. 2017, 34, 42–51. [Google Scholar] [CrossRef]
  56. Le, D.-N. A Comparatives Study of Gateway Placement Optimization in Wireless Mesh Network using GA, PSO and ACO. Int. J. Inf. Netw. Secur. 2013, 2, 292. [Google Scholar] [CrossRef] [Green Version]
  57. Gao, W.; Wu, H.; Siddiqui, M.K.; Baig, A.Q. Study of biological networks using graph theory. Saudi J. Biol. Sci. 2018, 25, 1212–1219. [Google Scholar] [CrossRef]
  58. Gao, W.; Guirao, J.L.G.; Basavanagoud, B.; Wu, J. Partial multi-dividing ontology learning algorithm. Inf. Sci. 2018, 467, 35–58. [Google Scholar] [CrossRef]
  59. Gao, W.; Guirao, J.L.G.; Abdel-Aty, M.; Xi, W. An independent set degree condition for fractional critical deleted graphs. Discrete Contin. Dyn. Syst. S 2019, 12, 877–886. [Google Scholar]
  60. Gao, W.; Dimitrov, D.; Abdo, H. Tight independent set neighborhood union condition for fractional critical deleted graphs and ID deleted graphs. Discrete Contin. Dyn. Syst. S 2018, 12, 711–721. [Google Scholar]
  61. Gao, W.; Wang, W.; Dimitrov, D.; Wang, Y. Nano properties analysis via fourth multiplicative ABC indicator calculating. Arab. J. Chem. 2018, 11, 793–801. [Google Scholar] [CrossRef]
Figure 1. The structure of a multi-layer perceptron (MLP) neural network.
Figure 2. Distribution of bearing capacity influential factors: (a) unit weight, (b) friction angle, (c) elastic modulus, (d) dilation angle, (e) Poisson’s ratio, (f) applied stress, (g) setback distance, and (h) settlement.
Figure 3. Executed sensitivity analysis based on the population size: (a) obtained RMSE values; the convergence curves of (b) MFO-ANN, (c) WOA-ANN, (d) ACO-ANN, and (e) LCA-ANN.
Figure 4. The results obtained for (a) WOA-ANN, (b) LCA-ANN, (c) MFO-ANN, and (d) ACO-ANN predictions in the training phase.
Figure 5. The results obtained for (a,b) whale optimization algorithm (WOA)–artificial neural network (ANN), (c,d) league champion optimization algorithm (LCA)–ANN, (e,f) moth–flame optimization (MFO)–ANN, and (g,h) ant colony optimization (ACO)–ANN predictions for the testing samples.
Figure 6. The receiver operating characteristic (ROC) curves of (a) WOA–ANN, (b) LCA–ANN, (c) MFO–ANN, and (d) ACO–ANN predictions in the testing phase.
Table 1. Descriptive statistics of the used dataset.

Features | Symbol | Mean | Standard Error | Median | Mode | Standard Deviation | Sample Variance | Skewness | Minimum | Maximum
Friction angle | X1 | 36.75 | 0.13 | 36.00 | 36.00 | 3.91 | 15.28 | −0.14 | 30.00 | 42.00
Dilation angle | X2 | 8.28 | 0.09 | 8.00 | 8.00 | 2.61 | 6.83 | −0.39 | 3.40 | 11.50
Unit weight (kN/m3) | X3 | 20.44 | 0.02 | 20.50 | 20.50 | 0.65 | 0.43 | −0.95 | 19.00 | 21.10
Elastic modulus (kN/m2) | X4 | 41,087.68 | 546.65 | 35,000.00 | 35,000.00 | 16,408.72 | 269,246,192.50 | 0.22 | 17,500.00 | 65,000.00
Poisson’s ratio (v) | X5 | 0.29 | 0.00 | 0.29 | 0.29 | 0.03 | 0.00 | 0.14 | 0.25 | 0.33
Setback distance | X6 | 4.19 | 0.07 | 5.00 | 5.00 | 2.08 | 4.31 | −0.13 | 1.00 | 7.00
Applied stress (kN/m2) | X7 | 289.74 | 7.89 | 245.65 | 0.00 | 236.97 | 56,152.92 | 1.26 | 0.00 | 1132.65
Settlement (m) | Y | 0.04 | 0.00 | 0.03 | 0.00 | 0.03 | 0.00 | 0.46 | 0.00 | 0.10
Table 2. The obtained values of root-mean-square error (RMSE), mean absolute error (MAE), and area under the receiver operating characteristic curve (AUROC). MLP—multi-layer perceptron; WOA—whale optimization algorithm; LCA—league champion optimization algorithm; MFO—moth–flame optimization; ACO—ant colony optimization; ANN—artificial neural network.

Models | Training RMSE | Training MAE | Training AUROC | Testing RMSE | Testing MAE | Testing AUROC
MLP | 0.3465 | 0.3055 | 0.956 | 0.3687 | 0.3312 | 0.930
WOA–ANN | 0.3076 | 0.2555 | 0.969 | 0.3399 | 0.2832 | 0.939
LCA–ANN | 0.3122 | 0.2592 | 0.964 | 0.3426 | 0.2868 | 0.935
MFO–ANN | 0.2985 | 0.2430 | 0.969 | 0.3330 | 0.2706 | 0.939
ACO–ANN | 0.2745 | 0.1783 | 0.965 | 0.3133 | 0.2128 | 0.944
Table 3. The developed ranking system based on the calculated accuracy criteria.

Models | Training RMSE | Training MAE | Training AUROC | Training Score | Testing RMSE | Testing MAE | Testing AUROC | Testing Score
MLP | 1 | 1 | 2 | 4 | 1 | 1 | 2 | 4
WOA–ANN | 3 | 3 | 5 | 11 | 3 | 3 | 4 | 10
LCA–ANN | 2 | 2 | 3 | 7 | 2 | 2 | 3 | 7
MFO–ANN | 4 | 4 | 5 | 13 | 4 | 4 | 4 | 12
ACO–ANN | 5 | 5 | 4 | 14 | 5 | 5 | 5 | 15
