Article

Towards Designing Durable Sculptural Elements: Ensemble Learning in Predicting Compressive Strength of Fiber-Reinforced Nano-Silica Modified Concrete

1 School of Fine Arts and Design, Guangzhou University, Guangzhou 510006, China
2 School of Civil Engineering, Guangzhou University, Guangzhou 510006, China
* Authors to whom correspondence should be addressed.
Submission received: 29 December 2023 / Revised: 19 January 2024 / Accepted: 24 January 2024 / Published: 1 February 2024
(This article belongs to the Section Building Materials, and Repair & Renovation)

Abstract

Fiber-reinforced nano-silica concrete (FrRNSC) was applied to a concrete sculpture to address the issue of brittle fracture, and the primary objective of this study was to explore the potential of hybridizing the Grey Wolf Optimizer (GWO) with four robust and intelligent ensemble learning techniques, namely XGBoost, LightGBM, AdaBoost, and CatBoost, to anticipate the compressive strength of FrRNSC for sculptural elements. The hyperparameters of these techniques were optimized using the GWO metaheuristic algorithm, enhancing accuracy through the creation of four hybrid ensemble learning models: GWO-XGBoost, GWO-LightGBM, GWO-AdaBoost, and GWO-CatBoost. A comparative analysis was conducted between the results obtained from these hybrid models and their conventional counterparts. The models were evaluated using five key indices: R2, RMSE, VAF, MAE, and bias, providing an objective assessment of their predictive performance and capabilities. The outcomes reveal that GWO-XGBoost, exhibiting R2 values of 0.971 and 0.978 for the train and test stages, respectively, emerges as the best predictive model for estimating the compressive strength of FrRNSC compared to the other models. Consequently, the proposed GWO-XGBoost algorithm proves to be an efficient tool for anticipating CSFrRNSC.

1. Introduction

Cities promote the development of human culture. Cities are products created by people and are shaped by people’s thoughts and concepts. With people’s constant pursuit of the value of urban life, art enters the city stage for some relatively simple purposes, such as beautifying cities and transmitting culture. Among the arts, sculpture stands out for its permanent artistic life, public character, and rich expressive techniques, and it has become a symbol of urban art. Sculpture expresses humanity’s understanding of the natural world and imagination of a better world through plastic and sculpting techniques.
Concrete has evolved as a material that is commonly preferred for use in construction, according to a number of research studies [1,2,3,4]. Concrete is an artificial stone with a natural ability to be shaped, and its construction methods have much in common with sculpture. At the same time, the excellent plasticity of concrete makes it a carrier of sculptural art, which has been widely accepted by artists and architects. Concrete is also widely used in urban sculpture, and Figure 1 shows examples of concrete sculptures by the authors. This building material takes on a special spiritual character when applied to urban sculpture, and the diversity of the medium enriches the expressive techniques of urban sculpture.
However, it should be noted that concrete sculpture structures are prone to brittle fracture due to their high rigidity, which can damage the created sculpture. To overcome the intrinsic brittleness of concrete, researchers have shifted their attention to fiber-reinforced concrete (FRC), which is recognized for having higher ductility than ordinary concrete [5,6,7]. This upgrade helps to improve the durability and structural reliability of concrete sculpture structures.
The development of cracks in concrete is the first step in its failure process. The incorporation of fibers, such as glass, steel, and polypropylene, into FRC is intended to improve the mechanical strength and energy absorption capacity of concrete. This is accomplished by limiting the propagation of fractures, thus making it possible for structural parts to endure higher distortions after the first cracks have appeared [8,9,10].
Nanoparticles, and more especially nano-silica (NS), have been shown to be helpful in filling gaps within cement paste, ultimately resulting in higher durability and mechanical strength of concrete [11,12,13]. As a result, the introduction of nanoparticles into FRC shows promise for developing a material with exceptional performance, which is well suited to the construction of buildings that are both long-lasting and high-performing. A decrease in the initial and final setting times of concrete is one of the benefits that NS brings, along with an increase in the early-age strength of the material. The nanostructure of NS, which is distinguished by an exceptionally high specific surface area (SSA), functions as a binder between cement and aggregate [14,15,16]. In line with the powerful pozzolanic effect that NS demonstrates [14,17,18], the nanoparticle size should be considered. The interfacial transition zone (ITZ), which is known to be a vulnerable region in concrete, is improved as a consequence of the thorough filling of gaps and voids by these nanoparticles [18,19,20], which ultimately results in a reduction in permeability. Research has shown that NS is an extremely efficient component that speeds up the process of concrete hydration [21] and encourages the production of calcium–silicate–hydrate (C-S-H) gel, which is an essential factor in determining the strength of the material [22,23,24]. NS reacts with Ca(OH)2, which results in a denser final product [25,26] and causes the proportion of portlandite (Ca(OH)2) in cementitious materials to decrease. NS may be used to replace up to 4% of cement, according to previous research [27,28]. This can significantly improve the material’s durability and strength, particularly when subjected to challenging circumstances, such as corrosion and high temperatures. Even though there are many different uses, the best results are obtained when NS is used as a cement substitute in the range of 0.5 to 4%, because using excessive amounts of NS may result in the agglomeration of particles and a reduction in workability [29]. A wide variety of nanoparticles, including NS, are added to concrete in order to improve its macroscopic qualities and performance. Nevertheless, the limited practical application of NS in construction may be related to its much higher price, which is about one thousand times that of normal cement [30,31].
The inclusion of fibers in FrRNSC serves as a crucial mechanism for crack arrest and mitigation. The fibers act as a network within the concrete matrix, effectively arresting the propagation of cracks. This reinforcement mechanism imparts toughness to the material, minimizing the likelihood of brittle fractures and enhancing the structural resilience of concrete sculptures. Nano-silica further improves the ductility of FrRNSC: nano-silica particles contribute to a denser and more homogeneous microstructure, reducing pore size and enhancing the material’s resistance to crack propagation. This increased ductility adds a layer of flexibility to the material, making it more resilient to external forces and mitigating the risk of brittle failure. Synergistic effects arise from the combination of fiber reinforcement and nano-silica: the two components complement each other, and their interaction contributes to the mitigation of brittle fracture. The intertwined effects include improved tensile strength, enhanced flexural toughness, and heightened resistance to environmental factors, collectively bolstering the overall structural integrity and durability of FrRNSC sculptures [32,33,34,35].
Researchers have made notable advancements in the field of nanotechnology through the identification of nanoparticles (NPs) measuring less than 100 nm [25,27,36,37]. These NPs exhibit the potential to enhance the mechanical properties of diverse materials, such as polymers [29] and cementitious materials [30,31,38,39]. Moreover, they find applications in the medical, engineering, and food sectors [40]. Consequently, there has been a heightened interest among researchers in investigating the influence of NS on concrete [41]. Various types of NPs, including nano ZnO, nano Fe2O3, nano Al2O3, nano TiO2, and NS, have undergone scrutiny. Notably, NS stands out for its ability to significantly improve compressive strength (CS) in concrete. Additionally, it has been observed that NS reduces the initial and final setting times of concrete while augmenting its early-age strength. The nanostructure of NS plays a pivotal role, offering an unusually large SSA and functioning as an aggregate–cement binder [42]. The remarkable pozzolanic activity of NS is attributed to its nanoparticle size [43]. Furthermore, the ITZ, recognized as a weak phase in cementitious materials, experiences enhancement due to the packing of these tiny NPs in gaps and voids, thereby reducing permeability [44,45,46]. NS emerges as a highly active component that expedites the hydration process of cementitious materials [47], fostering the formation of more calcium silicate hydrate (C-S-H) gel [48], which is crucial for material strength [49]. The proportion of portlandite (Ca(OH)2) decreases in cementitious materials as NS combines with Ca(OH)2 to form a denser product [50,51]. Previous studies indicate that substituting NS for up to 4% of the cement in concrete enhances its mechanical strength and durability, especially under adverse conditions, like elevated temperatures and corrosion [52]. While NS has demonstrated efficacy in specific applications for cementitious materials, optimal utilization occurs in proportions ranging from 0.5% to 4% as a cement substitute. However, an excess amount of NS may lead to agglomeration, compromising workability due to improper dispersion [53]. The distinguishing characteristic of NPs lies in their high surface-area-to-volume ratio, as depicted in Figure 2. Numerous NPs serve as nano-additives in cementitious composites to enhance their macroscopic characteristics and performance, with NS gaining prevalence among these NPs. Nevertheless, the limited practical adoption of NS in construction is primarily attributed to its high cost, which is roughly 1000 times that of ordinary cement [54,55,56].
Compressive strength (CS) is generally acknowledged to be an important factor [38], even though different tests are used to assess the performance of concrete. The CS value is a helpful indicator of a variety of concrete parameters, which are either directly or indirectly connected to mechanical and durability characteristics [39]. In the quest for efficiency, predictive models for material strength are becoming more popular as a means of reducing redundant experiments and the resources they consume. When it comes to effectively modeling the nonlinear character of cement-based composites, traditional models, such as best-fit curves, often fall short [58]. Furthermore, regression methods have the potential to significantly overstate the significance of certain components [40]. In response, novel modeling approaches, such as artificial intelligence (AI) methods, and more especially supervised machine learning (ML), are gaining popularity in the area [41,42,59,60,61,62,63,64,65,66,67,68,69,70]. These methodologies make use of input attributes in order to model responses, and the resulting models are verified via testing. The use of machine learning approaches has been demonstrated in the prediction of the characteristics of concrete and bituminous mixtures [44,47,48,49,71,72,73].
Furthermore, in addition to experimental research, the implementation of different machine learning methods for predicting the fresh and hardened properties of concrete mixes has demonstrated great benefits. Using artificial intelligence technology, recent research developed a data-driven system for estimating the compressive strength (CS) of foamed cellular concrete; this formulation outperformed all empirical models in terms of forecasting CS [74]. Similar studies investigated the possibility of artificial intelligence systems predicting the features of concrete, proposing that AI may serve as an alternative to exploratory and empirical programs for modeling fresh and hardened characteristics [52]. As part of their investigation into artificial intelligence approaches, Behnood et al. [57] focused on the model tree as an AI strategy for predicting CS in regular and high-performance concrete data records, and they demonstrated the correctness of this methodology throughout their research. Gholampour et al. [75] focused their attention on artificial intelligence approaches for calculating the mechanical characteristics of recycled aggregate concrete, and they also investigated the application of predictive AI models in predesign and modeling. Few studies have attempted to forecast the characteristics of fiber-reinforced nano-silica concrete (FrRNSC), despite the fact that early machine learning-based research focused mostly on predicting CS for regular cement-based materials [54,76,77,78,79].
In the realm of predicting the compressive strength of FrRNSC, the existing literature predominantly focuses on traditional optimization algorithms and individual machine learning models. Notably, limited attention has been directed towards the combined utilization of the Grey Wolf Optimization Algorithm and ensemble learning techniques, including CatBoost, LightGBM, AdaBoost, and XGBoost, in the context of this specific application. Metaheuristic optimization is a popular approach in many fields of study [80,81]. Consequently, there exists a discernible research gap that warrants exploration and investigation into the potential synergies and performance enhancements that may arise from the integration of the Grey Wolf Optimization Algorithm with the aforementioned ensemble learning methods. The understanding of how these advanced optimization and ensemble techniques collectively contribute to the accuracy and robustness of compressive strength predictions for FrRNSC remains underexplored, highlighting the need for comprehensive research in this niche area. Addressing this research gap can significantly contribute to the advancement of predictive modeling in the domain of FrRNSC, offering valuable insights for both academic and practical applications in construction and materials engineering. Therefore, this study optimizes the hyperparameters of four robust techniques, namely CatBoost, LightGBM, AdaBoost, and XGBoost, using the GWO algorithm for predicting the CS of FrRNSC.
This study is of paramount importance in the domain of construction materials and computational modeling. The incorporation of advanced optimization algorithms and ensemble learning techniques addresses a critical gap in the understanding of estimating the CS of concrete reinforced with fibers and modified with nano-silica. This research is vital for advancing the state-of-the-art in predictive modeling for enhanced concrete compositions, where the inclusion of fibers and nano-silica represents a growing area of interest in the construction industry. The optimization algorithms, particularly the GWO algorithm, offer a sophisticated approach to refining model parameters, contributing to the precision and reliability of predictions. The ensemble learning methodology, involving CatBoost, LightGBM, AdaBoost, and XGBoost, introduces a comprehensive strategy for leveraging the strengths of multiple algorithms, resulting in a more robust and accurate predictive model. The significance of this research extends beyond theoretical advancements, holding practical implications for the construction industry and infrastructure development. By accurately predicting the compressive strength of FrRNSC, this research directly informs engineering practices, enabling professionals to design and construct more resilient and durable structures. The outcomes of this study have the potential to revolutionize construction material selection and usage, influencing the development of cost-effective and sustainable solutions.
The remaining sections of the present research are organized as follows: the backgrounds of the AdaBoost, CatBoost, LightGBM, XGBoost, and GWO techniques are examined in Section 2. Section 3 describes the dataset employed, together with the preprocessing and data preparation. Section 4 presents the results obtained from the predictive models and discusses them. Conclusions are presented in Section 5.

2. Materials and Methods

2.1. AdaBoost

Because it can only make one decision at a time, a single decision tree is considered a weak learner. Scholars speculated about the possibility of producing a superior learner by combining numerous standalone base learners into one group. In 1990, Schapire [55,82] demonstrated that this hypothesis was correct, thus laying the groundwork for the boosting technique, which merges several weak learners in a sequential fashion. Equation (1) shows that, whenever an additional tree model is introduced to the overall system, weaker trees are discarded and only the most powerful trees are retained [56,83]. As more repeated computations are performed in this manner, the overall efficiency of the model steadily increases. On the other hand, there is an issue with this: when the first, most fundamental tree model is produced, a portion of the points in the dataset are properly classified, while others are misclassified. The AdaBoost technique is a straightforward approach to improving poor classification algorithms. It enhances a system’s capacity to classify data by continuously subjecting it to training. Learning the training samples yields the initial weak learner, which is then used to create an additional training set by adding the incorrectly predicted examples to the untrained samples. The second weak classifier is acquired by training on this new dataset. The inaccurate samples are again merged with the previously trained points to create another training set, which is then trained to generate the third weak classifier. After carrying out these steps a number of times, the upgraded strong classifier is obtained. The AdaBoost method assigns various weights in order to increase the number of accurate classifications: the samples that are successfully classified are given relatively low weights, while the weights of the samples that are incorrectly classified are increased [82,84,85]. This causes the algorithm to place greater focus on the misclassified samples [82]. Adjustments must be applied to the weight allocated to each point within the dataset before retraining the initial tree model. Because each training dataset is unique, the outcomes of the training are also expected to be distinct. Finally, the individual sets of findings are compiled, and a summation of results is achieved, as presented in Equation (1) [86]:
F_n(x) = F_{n-1}(x) + \arg\min_h \sum_{i=1}^{n} L\left(y_i,\, F_{n-1}(x_i) + h(x_i)\right)
where Fn(x) denotes the overall model, Fn−1(x) signifies the overall model achieved in the previous iteration, yi indicates the target value of the ith sample, and h(xi) is the currently generated tree.
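To make the sequential re-weighting idea concrete, the following is a minimal sketch of an AdaBoost regressor fitted on synthetic data; it assumes scikit-learn and is illustrative only, not the authors' exact pipeline. The two hyperparameters shown, learning_rate and n_estimators, are the ones later tuned by GWO in this study.

```python
# Minimal AdaBoost regression sketch (assumes scikit-learn); synthetic data,
# illustrative only. n_estimators and learning_rate are the two AdaBoost
# hyperparameters tuned by GWO later in the paper.
import numpy as np
from sklearn.ensemble import AdaBoostRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(42)
X = rng.random((175, 6))                                       # 6 synthetic input features
y = 20 + 60 * X[:, 4] + 10 * X[:, 5] + rng.normal(0, 2, 175)   # synthetic "compressive strength"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

model = AdaBoostRegressor(n_estimators=200, learning_rate=0.1, random_state=0)
model.fit(X_tr, y_tr)
print("test RMSE:", mean_squared_error(y_te, model.predict(X_te)) ** 0.5)
```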

2.2. XGBoost

The XGBoost model originated from early experimental work at the University of Washington [87]. It stands out as an enhanced iteration of the gradient boosting algorithm, boasting increased scalability and efficiency. The distinctive features that set the XGBoost algorithm apart are elaborated upon in detail in [88,89,90]. Notably, automatic feature extraction becomes feasible, and XGBoost incorporates regularization techniques to mitigate overfitting, demonstrating proficiency in learning from nonlinear datasets. Additionally, its parallelization feature enables training with numerous CPU cores, positioning it as one of the tree-based ensemble additive models comprising several base-learner systems. In a broad sense, XGBoost can be denoted as in Equations (2) and (3):
F = \{b_1,\, b_2,\, \ldots,\, b_n\}
\hat{y}_i = \sum_{t=1}^{n} b_t(x_i)
where b_t represents the base learners, n is their number, and \hat{y}_i is the model prediction.
In the context of the XGBoost predictive model, represented as \hat{y}_i, which combines numerous base learners, x_i serves as the input feature vector for the base learners. The objective function for XGBoost is delineated in Equation (4):
O(\theta) = \sum_{i=1}^{m} L(o_i, \hat{o}_i) + \sum_{i=1}^{n} \Omega(f_i)
Examining this equation, we find that the objective function comprises two parts: the first part is the loss function, denoted as L, representing the training loss (e.g., logistic or squared loss); the second part adds each tree’s complexity, with o_i representing the measured values, \hat{o}_i the estimated values, \Omega the regularization term, n the number of constructed trees, and f the tree function.
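The following is a minimal sketch of an XGBoost regressor, assuming the xgboost Python package and synthetic data; it is illustrative rather than the authors' implementation. The six hyperparameters shown are exactly the ones this study later optimizes with GWO, and reg_alpha and gamma belong to the regularization part \Omega of Equation (4).

```python
# Minimal XGBoost regression sketch (assumes the xgboost package); synthetic data,
# illustrative only. The six hyperparameters listed here are the GWO search
# variables used later in the paper.
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.random((175, 6))
y = 20 + 60 * X[:, 4] + 10 * X[:, 5] + rng.normal(0, 2, 175)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

model = XGBRegressor(
    n_estimators=300,
    learning_rate=0.1,
    gamma=0.0,             # minimum loss reduction required to make a split
    max_depth=4,
    min_child_weight=1,
    reg_alpha=0.01,        # L1 regularization term (part of Omega in Eq. (4))
    objective="reg:squarederror",
)
model.fit(X_tr, y_tr)
print("test R2:", r2_score(y_te, model.predict(X_te)))
```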

2.3. LightGBM

LightGBM stands out as a widely employed boosted model renowned for its support of parallel training, akin to extreme gradient boosting [91,92,93]. Particularly adept at handling multidimensional datasets, LightGBM surpasses traditional boosting algorithms and can even outperform XGBoost. Unlike conventional boosting algorithms that divide the tree architecture horizontally (i.e., level-wise growth), LightGBM takes a vertical, leaf-wise approach, leading to enhanced performance. The divergence in tree branch growth between the level-wise and leaf-wise methodologies is illustrated in Figure 3.
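A minimal LightGBM regression sketch follows, assuming the lightgbm package and synthetic data; it is illustrative only. Because LightGBM grows trees leaf-wise, num_leaves (rather than max_depth alone) is the main capacity control; the hyperparameters the paper later tunes with GWO are learning_rate, n_estimators, max_depth, and reg_alpha.

```python
# Minimal LightGBM regression sketch (assumes the lightgbm package); synthetic
# data, illustrative only.
import numpy as np
from lightgbm import LGBMRegressor

rng = np.random.default_rng(0)
X, y = rng.random((175, 6)), rng.random(175) * 70 + 20

model = LGBMRegressor(
    n_estimators=300,
    learning_rate=0.05,
    max_depth=-1,        # -1 defers depth control to leaf-wise growth
    num_leaves=31,       # main capacity control for leaf-wise trees
    reg_alpha=0.01,      # L1 regularization, as in the GWO search space
)
model.fit(X, y)
```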

2.4. CatBoost

CatBoost, introduced in 2017 by Yandex researchers, marks the inception of the first Russian machine learning algorithm [94]. As a tree-based boosting algorithm, CatBoost, short for categorical boosting, extends its applicability beyond categorical features, addressing absolute values and various other aspects, including regression problems and automatic feature engineering. Consequently, CatBoost exhibits a reduced training time compared to various gradient boosting algorithms [95,96].
Unlike the conventional gradient boosting techniques that generally adhere to a standard GBT approach for constructing decision trees, CatBoost adopts a dual approach in tree construction [97,98]. The first method involves an ordered technique, while the second employs a basic technique.
In the ordered mode, a random permutation technique is applied during training, involving n supporting models (m1, m2, …, mn). Each model mi is trained using only the first i samples in the permutation. Subsequently, in each repetition, the model mj−1 is utilized to obtain the residual of the jth sample. This distinctive approach sets CatBoost apart in its tree construction methodology.
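The sketch below shows a CatBoost regressor with the ordered boosting mode selected; it assumes the catboost package and synthetic data, and is illustrative only. The two hyperparameters the study tunes with GWO, learning_rate and n_estimators, map here to learning_rate and iterations.

```python
# Minimal CatBoost regression sketch (assumes the catboost package); synthetic
# data, illustrative only. boosting_type="Ordered" selects the ordered mode
# with random permutations described above.
import numpy as np
from catboost import CatBoostRegressor

rng = np.random.default_rng(0)
X, y = rng.random((175, 6)), rng.random(175) * 70 + 20

model = CatBoostRegressor(
    iterations=300,           # equivalent to n_estimators
    learning_rate=0.05,
    boosting_type="Ordered",  # ordered boosting mode
    verbose=False,
)
model.fit(X, y)
```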

2.5. Grey Wolf Optimization Algorithm

The group predation behavior of gray wolves served as the basis for the creation of the unique swarm intelligence optimization algorithm known as GWO. This method has found use in a variety of technical domains. Within the population of gray wolves, there is a distinct and well-established hierarchy, and the whole population of wolves can be broken down into four distinct levels, namely α, β, δ, and ω [97,98,99]. According to Figure 4, the best gray wolf is α, which is the leader of the wolves’ team and has the authority to determine all significant matters pertaining to the whole wolves’ team. Moreover, β is the second-rank grey wolf, which provides assistance to the leader wolf in making choices. Notably, δ is at the third level and is responsible for sentry, reconnaissance, and other responsibilities; and ω is the lowest-ranking wolf and is commanded by the first three tiers of gray wolves. The act of predation carried out by gray wolves may be broken down into three distinct stages: the search, the surrounding, and the assault stages. During the hunting phase, each gray wolf updates its location based on the given equations in order to capture its victim. The location guiding and updating mechanism of the ω wolves adopted by GWO is presented in Figure 5. The flowchart of GWO is illustrated in Figure 6.
In the equations, t represents the current iteration number; D represents the distance separating an individual grey wolf from the prey; X signifies the position vector of an individual grey wolf, whereas X_p represents the position vector of the prey; A and C represent coefficient vectors; a is the convergence factor, which declines linearly from 2 to 0 with the number of iterations; and r_1 and r_2 are vectors of random values in the range (0,1) [100,101]. The required formulas are shown in Equations (5)–(8):
D = \left| C \times X_p(t) - X(t) \right|
X(t+1) = X_p(t) - A \times D
A = 2a \times r_1 - a
C = 2 \times r_2
Figure 4. Social hierarchy of wolves in GWO [102].
Figure 5. The location guiding and updating mechanism of the ω wolves adopted by GWO [102].
Figure 6. Flowchart of GWO algorithm.
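To make Equations (5)–(8) concrete, the following is a minimal sketch of the position update for a single wolf moving toward an assumed prey position; it is illustrative only. In the full algorithm the prey position is not known, so this update is repeated with the α, β, and δ wolves standing in for the prey and the three resulting positions are averaged.

```python
# Sketch of the GWO encircling/position update of Eqs. (5)-(8); illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def gwo_step(X, X_p, a):
    """One encircling update for a wolf at X toward prey position X_p."""
    r1, r2 = rng.random(X.shape), rng.random(X.shape)
    A = 2 * a * r1 - a          # Eq. (7): A shrinks as a decays from 2 to 0
    C = 2 * r2                  # Eq. (8)
    D = np.abs(C * X_p - X)     # Eq. (5): distance to the prey
    return X_p - A * D          # Eq. (6): new position

X = rng.uniform(-5, 5, size=3)  # current wolf position (e.g., 3 hyperparameters)
X_p = np.zeros(3)               # assumed prey / current best position
T = 100
for t in range(T):
    a = 2 - 2 * t / T           # convergence factor: linear decay from 2 to 0
    X = gwo_step(X, X_p, a)
print("final position:", X)
```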

3. Data Analysis and Data Preparation

To achieve the desired outcome, the ML methods necessitate a diverse array of input variables. The computation of the CSFrRNSC was conducted using data extracted from the literature. To mitigate bias, data points were selected randomly from recently conducted research, focusing on data points that include CS results for algorithmic execution. The number of input features and the dataset’s size significantly influence the model’s target. In the current study, 175 data points were considered for running the artificial intelligence (AI) techniques. Three types of fibers (steel, polypropylene, and glass) were employed in the FrRNSC samples. The data were acquired on the basis of the mixture characteristics and the specific concern of interest, ensuring that the models received comparable input variables for mixtures to generate the desired target.
In this investigation, the prediction of CSFrRNSC involved the utilization of six input parameters, sourced from the research conducted by Anjum et al. [103]. Furthermore, four studies [104,105,106,107] considered those input parameters when gathering datasets to construct predictive models. The ongoing study aims to formulate a strategy that is beneficial for engineers in enhancing different aspects of fiber-reinforced nano-silica concrete relevant to construction operations. Numerical values have been assigned to the parameters, both inputs and outputs, with their ranges presented in Table 1. Data obtained from 175 laboratory tests were considered in this study. The input variables, encompassing fiber volume (FV), coarse-aggregate-to-fine-aggregate ratio (CA/FA), water-to-binder ratio (w/b), nano-silica (NS), superplasticizer-to-binder ratio (SP/B), and specimen age (A), were incorporated into the models intended for training the AI methods. Simultaneously, the CSFrRNSC was entered into the predictive framework as the output variable for AI modeling.
Additionally, Figure 7 illustrates the heatmap of correlation coefficients for the parameters used in this study. The correlation between two variables is typically assessed using the Pearson correlation coefficient (p, p ∈ [−1, 1]). The Pearson correlation coefficient is determined by the formula provided in Equation (9) [108,109]:
p = \frac{\sum_{i=1}^{t} (d_i - d_m)(r_i - r_m)}{\sqrt{\sum_{i=1}^{t} (d_i - d_m)^2} \times \sqrt{\sum_{i=1}^{t} (r_i - r_m)^2}}
where t represents the number of raw data points, dm denotes the average value across all d data, and rm signifies the average value across all r data. A positive correlation is indicated by a value of p greater than zero (p > 0), signifying a positive linear relationship between the two parameters. A stronger positive correlation is indicated by a value of p closer to one (p ≃ 1). Conversely, if p is less than zero (p < 0), it implies a negative linear correlation between the two parameters. A stronger negative correlation is denoted by a value of p closer to minus one (p ≃ −1) [110,111].
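As an illustration of how a heatmap like Figure 7 is produced from Equation (9), the sketch below computes pairwise Pearson coefficients on a synthetic DataFrame; it assumes pandas, seaborn, and matplotlib, and the column names simply mirror this study's variables rather than the actual 175-sample dataset.

```python
# Sketch of a Pearson correlation heatmap (Eq. (9)); synthetic data, illustrative only.
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
cols = ["FV", "CA/FA", "w/b", "NS", "SP/B", "Age", "CS"]
df = pd.DataFrame(rng.random((175, 7)), columns=cols)

corr = df.corr(method="pearson")   # pairwise Pearson p values in [-1, 1]
sns.heatmap(corr, annot=True, cmap="coolwarm", vmin=-1, vmax=1)
plt.title("Pearson correlation heatmap")
plt.show()
```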
As depicted in Figure 7, four parameters, namely FV, NS, SP/B, and Age, exhibit a positive linear correlation with CSFrRNSC, while C/F and w/b display a negative linear correlation with CSFrRNSC. Additionally, by examining the absolute values of the Pearson correlation coefficients in Figure 7, it becomes evident that C/F manifests a notably strong negative linear correlation, and SP/B demonstrates a marked positive linear correlation with CSFrRNSC. The correlations between the remaining input parameters and CSFrRNSC are observed to be of weak-to-medium strength. Furthermore, the violin plot of parameters is presented in Figure 8. A violin plot is a statistical data visualization tool that combines elements of box plots and kernel density plots to provide insights into the distribution and probability density of a dataset across different categories or groups. It is particularly useful for comparing the distribution of multiple datasets or subgroups within a larger dataset.
Table 1 shows the maximum and minimum values of 91.2 MPa and 19.1 MPa for CSFrRNSC, respectively. Furthermore, the skewness and kurtosis of each variable presented in Table 1 reveal more insights into the shape and distribution of the dataset. Skewness measures the asymmetry of the distribution. A positive skewness (greater than 0) indicates that the data are skewed to the right, meaning that the tail on the right side is longer or fatter than that on the left side. A negative skewness (less than 0) indicates that the data are skewed to the left, meaning that the tail on the left side is longer or fatter than that on the right side. Accordingly, FV, CF, w/b, NS, and Age all have positive skewness values, indicating that these variables have right-skewed distributions, whereas SP/B and CSFrRNSC have negative skewness values, suggesting left-skewed distributions.
Kurtosis measures the thickness of the tails of a distribution relative to the normal distribution. A kurtosis greater than 3 (positive excess kurtosis) indicates heavy tails and a sharper peak than the normal distribution (leptokurtic), whereas a kurtosis less than 3 (negative excess kurtosis) indicates light tails and a flatter peak (platykurtic) [112]. Accordingly, FV, CF, w/b, NS, Age, and CSFrRNSC exceed this threshold, indicating that these variables have heavier tails and sharper peaks compared to a normal distribution. Notably, SP/B falls below the threshold, suggesting lighter tails and a flatter peak compared to a normal distribution.
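As a brief illustration of how such descriptive statistics are obtained, the sketch below computes skewness and kurtosis with pandas on synthetic data; it is illustrative only. Note that pandas' kurt() returns excess kurtosis (normal distribution = 0), whereas the thresholds discussed above are stated against a raw kurtosis of 3.

```python
# Sketch of computing skewness and kurtosis as reported in Table 1; synthetic
# data, illustrative only. pandas' kurt() is *excess* kurtosis (normal = 0).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({"CS": rng.gamma(shape=5.0, scale=10.0, size=175)})

print("skewness:", df["CS"].skew())          # > 0 -> right-skewed
print("excess kurtosis:", df["CS"].kurt())   # > 0 -> heavier tails than normal
```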
The main phase of this paper revolves around the elucidation of the dataset’s foundations and a succinct examination of its properties. Subsequently, in the preprocessing phase, we employ a careful approach to splitting the FrRNSC compressive-strength dataset into training and testing sets to ensure a robust model evaluation. We utilized a stratified sampling technique to maintain the distribution of compressive strength values in both the training and testing sets, preventing biased model training and evaluation. Specifically, the dataset is randomly divided into two subsets with a predefined ratio (e.g., 75% for training and 25% for testing). This ensures that the models are trained on a diverse range of data and evaluated on unseen instances [113,114]. Accordingly, the compressive-strength data of FrRNSC were bifurcated into two primary categories: the training set, encompassing seventy-five percent of the entire concrete dataset (131 data points); and the test part, constituting twenty-five percent of the concrete dataset (44 data samples). The details of the training and testing datasets are presented in Table 2. The CatBoost, LightGBM, AdaBoost, and XGBoost algorithms were applied to the training dataset following this division. Simultaneously, the GWO metaheuristic algorithm was employed to ascertain optimal hyperparameter values for the aforementioned models. The mean squared error (MSE) serves as the objective function for the GWO. This function quantifies the disparity between the actual compressive strength of the fiber-reinforced nano-silica concrete data in the training phase and the predicted compressive strength generated by the CatBoost, LightGBM, AdaBoost, and XGBoost techniques, utilizing the inputs from the training set. Equation (10) furnishes a delineation of the framework governing the objective function.
X_o = \mathrm{MSE}(X_r, \hat{X}_r)
In this equation, X_r and \hat{X}_r indicate the measured and predicted CSFrRNSC, respectively. It should be noted that \hat{X}_r can be separately defined for each of the CatBoost, LightGBM, AdaBoost, and XGBoost models, as shown in Equations (11)–(14):
\hat{X}_r^{\mathrm{XGBoost}} = \mathrm{XGBoost}(h_1, h_2, h_3, h_4, h_5, h_6).\mathrm{fit}(\text{training set})
\hat{X}_r^{\mathrm{LightGBM}} = \mathrm{LightGBM}(h_1, h_2, h_3, h_4).\mathrm{fit}(\text{training set})
\hat{X}_r^{\mathrm{AdaBoost}} = \mathrm{AdaBoost}(h_1, h_2).\mathrm{fit}(\text{training set})
\hat{X}_r^{\mathrm{CatBoost}} = \mathrm{CatBoost}(h_1, h_2).\mathrm{fit}(\text{training set})
In these equations, h denotes the hyperparameters of the techniques, and the CatBoost, LightGBM, AdaBoost, and XGBoost models are fitted on the training set. In Equation (11), h1–h6 are, respectively, n_estimators, learning_rate, gamma, max_depth, min_child_weight, and reg_alpha, which function as hyperparameters of the XGBoost method. In Equation (12), h1–h4 are, respectively, learning_rate, n_estimators, max_depth, and reg_alpha, which function as hyperparameters of the LightGBM method. In Equations (13) and (14), h1 and h2 are, respectively, learning_rate and n_estimators, the hyperparameters of both the CatBoost and AdaBoost models.
In the final phase of the procedure, the potential remedies proposed by the GWO are assessed using various statistical metrics on the testing set. This evaluation facilitates the identification of optimal hyperparameters for the CatBoost, LightGBM, AdaBoost, and XGBoost frameworks. Furthermore, the cosine amplitude technique is employed to enhance the sensitivity analysis of CSFrRNSC concerning its influential parameters. This analytical process relies on the well-refined CatBoost, LightGBM, AdaBoost, and XGBoost models, each configured with optimal hyperparameter values.

4. Prediction Results

4.1. Hyperparameter Tuning

When delving into the realm of soft computing techniques, a paramount responsibility emerges, termed hyperparameter tuning. This crucial task serves to mitigate the risk of algorithmic overfitting, enhance the algorithm’s generalization capabilities, and curtail overall model complexity [115,116,117]. In the present study, the dataset underwent random partitioning into a training subset, encompassing 75% of the FrRNSC data, and a testing subset, incorporating the remaining 25%. This segregation facilitated the development of the CatBoost, LightGBM, AdaBoost, and XGBoost methodologies. The accuracy of each model in predicting the CSFrRNSC was assessed using the testing subset, while the training subset contributed to the initial algorithmic development. As previously mentioned, the creation of the CatBoost, LightGBM, AdaBoost, and XGBoost methodologies necessitates the adjustment of several hyperparameters, including n_estimators, learning_rate, gamma, max_depth, min_child_weight, and reg_alpha for the XGBoost model. These values are configured within the model’s designated file. Consequently, the GWO was employed to optimize these hyperparameters. The GWO was configured with swarm sizes ranging from 10 to 100 grey wolves and 1000 iterations.
The optimization process begins with the initialization of a population of grey wolves. In the context of the GWO algorithm, these wolves represent potential solutions in the search space of hyperparameters for the ensemble learning models. Each grey wolf in the population corresponds to a unique set of hyperparameters for the respective ensemble learning model (XGBoost, LightGBM, AdaBoost, and CatBoost). The optimization process aims to find the optimal combination of hyperparameters that minimizes a predefined objective function. Therefore, the objective function is tailored to measure the accuracy in predicting the compressive strength of FrRNSC. The GWO algorithm iteratively updates the positions of grey wolves based on their fitness scores, which are determined by evaluating the objective function. During each iteration, wolves adjust their positions to explore the hyperparameter space effectively. The GWO algorithm is seamlessly integrated with ensemble learning techniques (XGBoost, LightGBM, AdaBoost, and CatBoost). At each iteration, the algorithm optimizes the hyperparameters for each base learner within the ensemble, leading to an ensemble configuration that maximizes predictive accuracy. The optimized hyperparameter sets for individual base learners are then combined to form an ensemble model. The final prediction is a weighted combination of predictions from each base learner, where weights are determined by their respective optimized hyperparameters. To ensure robustness and avoid overfitting, the optimized ensemble model undergoes a validation process, where it is fine-tuned and validated on separate datasets. This step further refines the model’s generalization capability.
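The following is a compact sketch of this GWO-driven hyperparameter search wrapped around XGBoost (the GWO-XGBoost idea); it is illustrative only, using synthetic data, assumed bounds, a subset of the six hyperparameters, and a plain train/validation MSE instead of the paper's ten-fold cross-validation. It composes the position update of Equations (5)–(8) with the MSE objective of Equations (10) and (11).

```python
# Sketch of GWO-based hyperparameter tuning for XGBoost; assumptions: synthetic
# data, assumed bounds, three of the six hyperparameters, hold-out MSE objective.
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.random((175, 6))
y = 20 + 60 * X[:, 4] + 10 * X[:, 5] + rng.normal(0, 2, 175)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

# Search space: (n_estimators, learning_rate, max_depth) with assumed bounds.
lb = np.array([50, 0.01, 2])
ub = np.array([500, 0.50, 10])

def objective(pos):
    n_est, lr, depth = int(pos[0]), float(pos[1]), int(pos[2])
    model = XGBRegressor(n_estimators=n_est, learning_rate=lr, max_depth=depth,
                         objective="reg:squarederror")
    model.fit(X_tr, y_tr)
    return mean_squared_error(y_val, model.predict(X_val))   # MSE objective, Eq. (10)

n_wolves, n_iter = 10, 20
wolves = lb + rng.random((n_wolves, 3)) * (ub - lb)
for t in range(n_iter):
    fitness = np.array([objective(w) for w in wolves])
    alpha, beta, delta = wolves[np.argsort(fitness)[:3]]      # three best wolves
    a = 2 - 2 * t / n_iter                                    # linear decay 2 -> 0
    for i in range(n_wolves):
        new_pos = np.zeros(3)
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(3), rng.random(3)
            A, C = 2 * a * r1 - a, 2 * r2                     # Eqs. (7)-(8)
            D = np.abs(C * leader - wolves[i])                # Eq. (5)
            new_pos += (leader - A * D) / 3.0                 # Eq. (6), averaged
        wolves[i] = np.clip(new_pos, lb, ub)
print("alpha position (n_estimators, learning_rate, max_depth):", alpha)
```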
The hyperparameter ranges are detailed in Table 3. Nonetheless, the hybrid systems’ generalization efficacy is evaluated through a tenfold cross-validation approach during the training phase of this investigation. Learning outcomes pertinent to GWO-LightGBM, GWO-XGBoost, GWO-CatBoost, and GWO-AdaBoost are depicted in Figure 9. Observing Figure 9 reveals that the minimum MSE for the GWO-LightGBM, GWO-XGBoost, GWO-CatBoost, and GWO-AdaBoost models was achieved with swarm sizes of 30, 10, 10, and 50, respectively. Moreover, it is evident that these systems did not begin to converge until approximately 98, 215, 95, and 110 iterations, respectively. Ultimately, the optimum values of the determined hyperparameters are given in Table 3.
It is paramount to evaluate the effectiveness of the trained models following the completion of model training and the acquisition of accurate models. Consequently, to gauge the methods’ efficacy, assessment measures such as the coefficient of determination (R2), root mean squared error (RMSE), variance accounted for (VAF), mean absolute error (MAE), and bias were utilized. In the context of regression analysis, these five metrics commonly serve as benchmarks for assessing the performance of AI models and can be computed by applying Equations (15)–(19) [32,100,118,119,120,121,122,123,124]:
R^2 = 1 - \frac{\sum_{i=1}^{n} (O_i - P_i)^2}{\sum_{i=1}^{n} (P_i - \bar{P}_i)^2}
\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (O_i - P_i)^2}
\mathrm{VAF} = 100\left(1 - \frac{\mathrm{var}(O_i - P_i)}{\mathrm{var}(O_i)}\right)
\mathrm{MAE} = \frac{\sum_{i=1}^{n} \left| O_i - P_i \right|}{n}
\mathrm{Bias} = \frac{1}{n} \sum_{i=1}^{n} (P_i - O_i)
where Oi denotes the real CSFrRNSC, Pi stands for the anticipated CSFrRNSC, P ¯ i signifies the average of the anticipated CSFrRNSC, and n stands for the number of data samples. It is noteworthy that achieving values of one, zero, one hundred, zero, and zero for R2, RMSE, VAF, MAE, and bias, respectively, signifies optimal model capability and performance [125,126].
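The small helper below evaluates the five indices exactly as the formulas are stated above (note that the R2 denominator uses the mean of the predicted values, following the definition given in the text); it is an illustrative sketch on a toy example, not the authors' evaluation script.

```python
# Sketch of the five evaluation indices in Eqs. (15)-(19); illustrative helper.
import numpy as np

def evaluate(O, P):
    O, P = np.asarray(O, float), np.asarray(P, float)
    r2 = 1 - np.sum((O - P) ** 2) / np.sum((P - P.mean()) ** 2)   # Eq. (15)
    rmse = np.sqrt(np.mean((O - P) ** 2))                          # Eq. (16)
    vaf = 100 * (1 - np.var(O - P) / np.var(O))                    # Eq. (17)
    mae = np.mean(np.abs(O - P))                                   # Eq. (18)
    bias = np.mean(P - O)                                          # Eq. (19)
    return {"R2": r2, "RMSE": rmse, "VAF": vaf, "MAE": mae, "Bias": bias}

print(evaluate([30, 45, 60, 75], [31, 44, 62, 73]))   # toy measured vs. predicted
```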
Given these considerations, the outcomes of the computations for the five aforementioned assessment indices of the amalgamated methodologies are outlined in the following. According to Figure 10 and Figure 11, which respectively show the obtained RMSE and R2 values for the conventional and optimized modes in both the training and testing phases, the model with the lowest RMSE and highest R2 in both phases is the GWO-XGBoost model.
In the training phase of the GWO-XGBoost model, the R2 value stands at 0.971, with an accompanying RMSE value of 1.933. On the testing subset, the R2 value is 0.978, and the RMSE value is 2.129. Evidently, the developed GWO-XGBoost technique exhibits exceptional precision, effectively predicting the CSFrRNSC. Notably, the model’s performance on the testing subset surpasses that on the training subset to some extent.

4.2. Results of Predictive Models

This research evaluated the estimation ability of the hybrid algorithms in anticipating CSFrRNSC. The efficiency of these models was considerable, and they can acceptably be employed for prediction purposes. Nevertheless, the hybrid algorithms’ performance was systematically compared with that of their conventional forms to validate their predictive capabilities. Therefore, the LightGBM, XGBoost, CatBoost, and AdaBoost methods were trained for predicting CSFrRNSC and then optimized by GWO to construct hybrid models that predict CSFrRNSC with a high level of accuracy. The models’ capability and effectiveness in forecasting CSFrRNSC were assessed; however, it is imperative to explore and compare their efficiency, capability, and success in prediction. The performance of LightGBM, XGBoost, CatBoost, and AdaBoost was determined using the evaluation metrics applied to the hybrid systems, and the results are presented in Table 4. In addition, scatterplots illustrating the relationship between measured and predicted values of CSFrRNSC by the LightGBM, XGBoost, CatBoost, AdaBoost, GWO-LightGBM, GWO-XGBoost, GWO-CatBoost, and GWO-AdaBoost models for both the training and testing parts are presented in Figure 12, Figure 13, Figure 14, Figure 15, Figure 16, Figure 17, Figure 18 and Figure 19. Table 5 first rates the obtained statistical indices for the developed models, then sums the rates and ranks the models by the highest total. A rating of 1 to 8 is assigned to the models based on their performance, with the highest rating given to the highest R2 and VAF values and the lowest MAE, RMSE, and bias values. According to Table 5, the GWO-XGBoost model can predict CSFrRNSC better than the other techniques. Indeed, the optimized XGBoost model, i.e., GWO-XGBoost, produces the most accurate results, with RMSE values of 1.933 and 2.129 for the training and testing phases, respectively. As shown in Figure 12, Figure 13, Figure 14, Figure 15, Figure 16, Figure 17, Figure 18 and Figure 19, which illustrate the scatterplots of measured and predicted CSFrRNSC in both the training and testing parts, the trained systems anticipated the CSFrRNSC with acceptable performance. However, the GWO-XGBoost technique, with an R2 of 0.978 for the testing phase, was more accurate than LightGBM, with an R2 of 0.926; XGBoost, with an R2 of 0.930; CatBoost, with an R2 of 0.930; AdaBoost, with an R2 of 0.905; GWO-LightGBM, with an R2 of 0.937; GWO-CatBoost, with an R2 of 0.924; and GWO-AdaBoost, with an R2 of 0.929. Hence, the GWO-XGBoost technique is selected as the superior system for estimating CSFrRNSC. Figure 20 provides a visual representation of the comparison between the CSFrRNSC anticipated by the hybrid systems and the measured CSFrRNSC. From a logical standpoint, a majority of the projected CSFrRNSC values closely align with the actual measurements.
For a more detailed comparison, the Taylor diagram and violin plot illustrating the accuracy of the models are presented in Figure 21 and Figure 22, respectively. The Taylor diagram is a powerful tool for assessing the similarity between observed and simulated datasets. It provides a comprehensive view of multiple statistical metrics, such as the correlation coefficient, root mean square error, and standard deviation ratio, in a single plot. By using the Taylor diagram, we aim to offer a holistic perspective on how well our models replicate the observed data. Each model is represented by a point on the diagram, and the proximity of these points to the reference point (representing the observed data) allows for a quick and intuitive assessment of model performance. The Taylor diagram facilitates the comparison of multiple models across various performance metrics simultaneously, and it aids in comparing the GWO-XGBoost model with the other models used in the study, providing a holistic view of its efficiency relative to the alternative approaches. In Figure 21, the GWO-XGBoost symbol (depicted as a pink square) exhibits the closest proximity to the reference dataset (representing the actual values) in both the training and testing parts. This indicates that the GWO-XGBoost technique performed exceptionally well, emerging as the most accurate among the models. Moreover, Figure 22 illustrates the violin plot for the developed models. The violin plot is employed to provide a distributional overview of the prediction errors generated by the different models. It complements the Taylor diagram by offering insights into the variability and spread of errors. The width of the violin plot at each point indicates the density of data points, allowing us to identify not only central tendencies but also the distribution of errors. This visualization aids in identifying models that not only perform well on average but also exhibit consistent and reliable performance. The figure underscores that the results of the GWO-XGBoost model closely align with the measured values.
It is well-established in the machine learning community that a highly complex dataset can pose challenges for even the most optimized algorithms. The concept of Kolmogorov complexity, denoting the length of the shortest computer program that produces the output, becomes pertinent in this discussion. Simplifying the dataset can lead to improved model accuracy by enabling the algorithm to focus on essential patterns and relationships, reducing the risk of overfitting to noise or irrelevant features [127,128].
The synergy between the GWO and XGBoost plays a pivotal role in the model’s superior performance. The GWO algorithm excels in balancing exploration and exploitation during the optimization process. This ability allows GWO-XGBoost to effectively explore the solution space, identifying optimal hyperparameters for XGBoost. The model’s enhanced exploration capability helps to avoid getting stuck in local minima and facilitates the discovery of more robust and accurate predictive patterns. Notably, GWO’s inherent global search capability complements the ensemble learning nature of XGBoost. By efficiently exploring diverse regions of the feature space, GWO enhances the diversity of base learners within the XGBoost ensemble. This diversity is crucial for capturing different facets of the complex relationships within the FrRNSC dataset, leading to a more robust and accurate predictive model. It should be mentioned that GWO is known for its adaptability to different problem characteristics. By hybridizing GWO with XGBoost, we leverage this adaptability to tailor the optimization process specifically to the challenges posed by predicting the compressive strength of FrRNSC. The adaptability of GWO ensures that the optimization process is aligned with the unique characteristics and nuances present in the concrete strength prediction problem. Furthermore, the GWO-XGBoost hybridization facilitates precise tuning of XGBoost hyperparameters. GWO’s optimization process fine-tunes the parameters of XGBoost, optimizing its performance for the specific task at hand. This precision in parameter tuning contributes significantly to the model’s overall predictive capabilities.
In the final phase of this study, the identification of the most and least influential parameters in determining CSFrRNSC was undertaken. To achieve this, a sensitivity analysis technique called the Cosine Amplitude Method (CAM) was applied [129]. The CAM assesses the impact of each influential variable on CSFrRNSC. Equation (20) was utilized for this purpose [118]:
r_{ij} = \frac{\sum_{k=1}^{m} x_{ik} \, x_{jk}}{\sqrt{\left(\sum_{k=1}^{m} x_{ik}^2\right)\left(\sum_{k=1}^{m} x_{jk}^2\right)}}
where r_ij is the sensitivity value between x_i (input) and x_j (output).
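A minimal sketch of Equation (20) follows, computed column by column on a synthetic matrix whose columns stand in for the (normalized) inputs and CSFrRNSC; it is illustrative only and does not use the study's actual dataset.

```python
# Sketch of the Cosine Amplitude Method, Eq. (20); synthetic data, illustrative only.
import numpy as np

def cam_strength(x_i, x_j):
    """r_ij between an input vector x_i and the output vector x_j (Eq. (20))."""
    return np.sum(x_i * x_j) / np.sqrt(np.sum(x_i ** 2) * np.sum(x_j ** 2))

rng = np.random.default_rng(0)
inputs = rng.random((175, 6))    # columns: FV, CA/FA, w/b, NS, SP/B, Age (normalized)
output = rng.random(175)         # CSFrRNSC (normalized)

for name, col in zip(["FV", "CA/FA", "w/b", "NS", "SP/B", "Age"], inputs.T):
    print(name, round(cam_strength(col, output), 3))
```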
The results of the sensitivity analysis revealed the most impactful parameters. Figure 23 illustrates that the SP/B parameter exerts the most significant influence on CSFrRNSC, with a strength value of 0.958. In contrast, the FV parameter exhibits the least impact, with a strength of 0.718. Notably, the prioritization of parameter influence based on rij values in ascending order is as follows: FV < Age < NS < w/b < CF < SP/B, with impact values of 0.718, 0.755, 0.821, 0.948, 0.954, and 0.958, respectively. The rank value represents the degree of influence of each parameter on CSFrRNSC: the smaller the ordinal number, the higher the influence of the parameter. Therefore, SP/B has the greatest influence, and FV has the least.

5. Conclusions

Fiber-reinforced nano-silica concrete (FrRNSC) was used to construct a concrete sculpture to address the issue of brittle fracture, and this research endeavored to advance the predictive modeling of CSFrRNSC by employing a hybrid approach that integrates the Grey Wolf Optimizer (GWO) with ensemble learning techniques—specifically XGBoost, LightGBM, AdaBoost, and CatBoost. The optimization of hyperparameters through the GWO metaheuristic algorithm results in the creation of four hybrid ensemble learning models: GWO-XGBoost, GWO-LightGBM, GWO-AdaBoost, and GWO-CatBoost. The comparative analysis undertaken between these hybrid models and their conventional counterparts reveals the superior predictive capabilities of GWO-XGBoost. With impressive R2 values of 0.971 and 0.978, RMSE values of 1.933 and 2.129, VAF values of 96.960 and 97.774, MAE values of 1.653 and 1.802, and bias values of 1.653 and 1.802 for the train and test stages, respectively, GWO-XGBoost emerges as the most efficient predictor for estimating the CSFrRNSC when compared to other models. In essence, the proposed GWO-XGBoost algorithm not only enhances accuracy but also establishes itself as a powerful and reliable tool for anticipating CSFrRNSC. This study contributes to the ongoing efforts in the field, providing valuable insights for the application of advanced optimization algorithms and ensemble learning techniques in the prediction of concrete compressive strength. While this research demonstrates the effectiveness of the GWO-XGBoost algorithm for predicting the compressive strength of FrRNSC in sculptural elements, it is important to mention two main limitations. First, the study primarily focuses on FrRNSC for sculptural elements. The findings may not be directly generalizable to other types of concrete or construction applications. Future research could explore the applicability of the proposed approach to a broader range of concrete formulations and use cases.
Second, the success of machine learning models is highly dependent on the quality and representativeness of the training data. Limitations may arise if the dataset used is specific to certain material compositions or manufacturing processes, potentially impacting the model’s performance when applied to variations in concrete formulations.

Author Contributions

Conceptualization, R.W., J.Z., Y.L. and J.H.; Methodology, R.W., Y.L. and J.H.; Software, R.W. and J.H.; Data curation, J.Z.; Writing—original draft, R.W., Y.L. and J.H.; Writing—review & editing, J.Z., Y.L. and J.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Guangdong Provincial Department of Education Innovative Strong School Youth Innovative Talent Project (Social Science) (funding number: 2022WQNCX055) and China Postdoctoral Science Foundation (funding number: 2022M720878).

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Khan, M.; Cao, M.; Xie, C.; Ali, M. Effectiveness of Hybrid Steel-Basalt Fiber Reinforced Concrete under Compression. Case Stud. Constr. Mater. 2022, 16, e00941.
2. Khan, M.; Cao, M.; Chaopeng, X.; Ali, M. Experimental and Analytical Study of Hybrid Fiber Reinforced Concrete Prepared with Basalt Fiber under High Temperature. Fire Mater. 2022, 46, 205–226.
3. Khan, M.; Cao, M.; Ai, H.; Hussain, A. Basalt Fibers in Modified Whisker Reinforced Cementitious Composites. Period. Polytech. Civ. Eng. 2022, 66, 344–354.
4. Ren, J.L.; Xu, Y.S.; Zhao, Z.D.; Chen, J.C.; Cheng, Y.Y.; Huang, J.D.; Yang, C.X.; Wang, J. Fatigue Prediction of Semi-Flexible Composite Mixture Based on Damage Evolution. Constr. Build. Mater. 2022, 318, 126004.
5. Cao, M.; Mao, Y.; Khan, M.; Si, W.; Shen, S. Different Testing Methods for Assessing the Synthetic Fiber Distribution in Cement-Based Composites. Constr. Build. Mater. 2018, 184, 128–142.
6. Sun, Y.T.; Bi, R.Y.; Chang, Q.L.; Taherdangkoo, R.; Zhang, J.F.; Sun, J.B.; Huang, J.D.; Li, G.C. Stability Analysis of Roadway Groups under Multi-Mining Disturbances. Appl. Sci. 2021, 11, 7953.
7. Cui, K.; Chang, J.; Sabri, M.M.S.; Huang, J.D. Toughness, Reinforcing Mechanism, and Durability of Hybrid Steel Fiber Reinforced Sulfoaluminate Cement Composites. Buildings 2022, 12, 1243.
8. Afroughsabet, V.; Biolzi, L.; Ozbakkaloglu, T. Influence of Double Hooked-End Steel Fibers and Slag on Mechanical and Durability Properties of High Performance Recycled Aggregate Concrete. Compos. Struct. 2017, 181, 273–284.
9. Afroughsabet, V.; Ozbakkaloglu, T. Mechanical and Durability Properties of High-Strength Concrete Containing Steel and Polypropylene Fibers. Constr. Build. Mater. 2015, 94, 73–82.
10. Chun, B.; Kim, S.; Yoo, D.-Y. Reinforcing Effect of Surface-Modified Steel Fibers in Ultra-High-Performance Concrete under Tension. Case Stud. Constr. Mater. 2022, 16, e01125.
11. Murad, Y. Compressive Strength Prediction for Concrete Modified with Nanomaterials. Case Stud. Constr. Mater. 2021, 15, e00660.
12. Cao, M.; Khan, M.; Ahmed, S. Effectiveness of Calcium Carbonate Whisker in Cementitious Composites. Period. Polytech. Civ. Eng. 2020, 64, 265.
13. Bahari, A.; Berenjian, J.; Sadeghi-Nik, A. Modification of Portland Cement with Nano SiC. Proc. Natl. Acad. Sci. India Sect. A Phys. Sci. 2016, 86, 323–331.
14. Wang, X.F.; Huang, Y.J.; Wu, G.Y.; Fang, C.; Li, D.W.; Han, N.X.; Xing, F. Effect of Nano-SiO2 on Strength, Shrinkage and Cracking Sensitivity of Lightweight Aggregate Concrete. Constr. Build. Mater. 2018, 175, 115–125.
15. Ren, J.L.; Xu, Y.S.; Huang, J.D.; Wang, Y.; Jia, Z.R. Gradation Optimization and Strength Mechanism of Aggregate Structure Considering Macroscopic and Mesoscopic Aggregate Mechanical Behaviour in Porous Asphalt Mixture. Constr. Build. Mater. 2021, 300, 124262.
16. Gao, Y.; Huang, J.D.; Li, M.; Dai, Z.R.; Jiang, R.L.; Zhang, J.X. Chemical Modification of Combusted Coal Gangue for U(VI) Adsorption: Towards a Waste Control by Waste Strategy. Sustainability 2021, 13, 8421.
17. Ying, J.; Zhou, B.; Xiao, J. Pore Structure and Chloride Diffusivity of Recycled Aggregate Concrete with Nano-SiO2 and Nano-TiO2. Constr. Build. Mater. 2017, 150, 49–55.
18. Ardalan, R.B.; Jamshidi, N.; Arabameri, H.; Joshaghani, A.; Mehrinejad, M.; Sharafi, P. Enhancing the Permeability and Abrasion Resistance of Concrete Using Colloidal Nano-SiO2 Oxide and Spraying Nanosilicon Practices. Constr. Build. Mater. 2017, 146, 128–135.
19. Sharkawi, A.M.; Abd-Elaty, M.A.; Khalifa, O.H. Synergistic Influence of Micro-Nano Silica Mixture on Durability Performance of Cementious Materials. Constr. Build. Mater. 2018, 164, 579–588.
20. Xu, J.; Kong, F.; Song, S.; Cao, Q.; Huang, T.; Cui, Y. Effect of Fenton Pre-Oxidation on Mobilization of Nutrients and Efficient Subsequent Bioremediation of Crude Oil-Contaminated Soil. Chemosphere 2017, 180, 1–10.
21. Zahiri, F.; Eskandari-Naddaf, H. Optimizing the Compressive Strength of Concrete Containing Micro-Silica, Nano-Silica, and Polypropylene Fibers Using Extreme Vertices Mixture Design. Front. Struct. Civ. Eng. 2019, 13, 821–830.
22. Mohammed, B.S.; Liew, M.S.; Alaloul, W.S.; Khed, V.C.; Hoong, C.Y.; Adamu, M. Properties of Nano-Silica Modified Pervious Concrete. Case Stud. Constr. Mater. 2018, 8, 409–422.
23. Norhasri, M.S.M.; Hamidah, M.S.; Fadzil, A.M. Applications of Using Nano Material in Concrete: A Review. Constr. Build. Mater. 2017, 133, 91–97.
24. Zhang, H.; Chang, Q.; Li, S.; Huang, J.D. Determining the Efficiency of the Sponge City Construction Pilots in China Based on the DEA-Malmquist Model. Int. J. Environ. Res. Public Health 2022, 19, 11195.
25. Massana, J.; Reyes, E.; Bernal, J.; León, N.; Sánchez-Espinosa, E. Influence of Nano- and Micro-Silica Additions on the Durability of a High-Performance Self-Compacting Concrete. Constr. Build. Mater. 2018, 165, 93–103.
26. Tian, Q.; Su, Z.L.; Fiorentini, N.; Zhou, J.; Luo, H.; Lu, Y.J.; Xu, X.Q.; Chen, C.P.; Huang, J.D. Ensemble Learning Models to Predict the Compressive Strength of Geopolymer Concrete: A Comparative Study for Geopolymer Composition Design. Multiscale Multidiscip. Model. Exp. Des. 2023.
27. Mahapatra, C.K.; Barai, S.V. Temperature Impact on Residual Properties of Self-Compacting Based Hybrid Fiber Reinforced Concrete with Fly Ash and Colloidal Nano Silica. Constr. Build. Mater. 2019, 198, 120–132.
28. Erdem, S.; Hanbay, S.; Güler, Z. Micromechanical Damage Analysis and Engineering Performance of Concrete with Colloidal Nano-Silica and Demolished Concrete Aggregates. Constr. Build. Mater. 2018, 171, 634–642.
29. Zareei, S.A.; Ameri, F.; Bahrami, N.; Shoaei, P.; Moosaei, H.R.; Salemi, N. Performance of Sustainable High Strength Concrete with Basic Oxygen Steel-Making (BOS) Slag and Nano-Silica. J. Build. Eng. 2019, 25, 100791.
30. Fang, Y.; Wang, J.; Ma, H.; Wang, L.; Qian, X.; Qiao, P. Performance Enhancement of Silica Fume Blended Mortars Using Bio-Functionalized Nano-Silica. Constr. Build. Mater. 2021, 312, 125467.
31. Reches, Y. Nanoparticles as Concrete Additives: Review and Perspectives. Constr. Build. Mater. 2018, 175, 483–495.
  32. Zhou, J.; Su, Z.; Hosseini, S.; Tian, Q.; Lu, Y.; Luo, H.; Xu, X.; Chen, C.; Huang, J. Decision Tree Models for the Estimation of Geo-Polymer Concrete Compressive Strength. Math. Biosci. Eng. 2024, 21, 1413–1444. [Google Scholar] [CrossRef]
  33. Wang, Q.-A.; Zhang, J.; Huang, J. Simulation of the Compressive Strength of Cemented Tailing Backfill through the Use of Firefly Algorithm and Random Forest Model. Shock Vib. 2021, 2021, 5536998. [Google Scholar] [CrossRef]
  34. Huang, J.; Zhou, M.; Sabri, M.M.S.; Yuan, H. A Novel Neural Computing Model Applied to Estimate the Dynamic Modulus (Dm) of Asphalt Mixtures by the Improved Beetle Antennae Search. Sustainability 2022, 14, 5938. [Google Scholar] [CrossRef]
  35. Huang, J.; Kumar, G.S.; Ren, J.; Zhang, J.; Sun, Y. Accurately Predicting Dynamic Modulus of Asphalt Mixtures in Low-Temperature Regions Using Hybrid Artificial Intelligence Model. Constr. Build. Mater. 2021, 297, 123655. [Google Scholar] [CrossRef]
  36. Niewiadomski, P.; Stefaniuk, D.; Hoła, J. Microstructural Analysis of Self-Compacting Concrete Modified with the Addition of Nanoparticles. Procedia Eng. 2017, 172, 776–783. [Google Scholar] [CrossRef]
  37. Ren, J.; Lai, Y.; Gao, J. Exploring the Influence of SiO2 and TiO2 Nanoparticles on the Mechanical Properties of Concrete. Constr. Build. Mater. 2018, 175, 277–285. [Google Scholar] [CrossRef]
  38. Zhang, B.; Ahmad, W.; Ahmad, A.; Aslam, F.; Joyklad, P. A Scientometric Analysis Approach to Analyze the Present Research on Recycled Aggregate Concrete. J. Build. Eng. 2022, 46, 103679. [Google Scholar] [CrossRef]
  39. Singh, N.; Kumar, P.; Goyal, P. Reviewing the Behaviour of High Volume Fly Ash Based Self Compacting Concrete. J. Build. Eng. 2019, 26, 100882. [Google Scholar] [CrossRef]
  40. Sadrmomtazi, A.; Sobhani, J.; Mirgozar, M.A. Modeling Compressive Strength of EPS Lightweight Concrete Using Regression, Neural Network and ANFIS. Constr. Build. Mater. 2013, 42, 205–216. [Google Scholar] [CrossRef]
  41. Nafees, A.; Amin, M.N.; Khan, K.; Nazir, K.; Ali, M.; Javed, M.F.; Aslam, F.; Musarat, M.A.; Vatin, N.I. Modeling of Mechanical Properties of Silica Fume-Based Green Concrete Using Machine Learning Techniques. Polymers 2022, 14, 30. [Google Scholar] [CrossRef]
  42. Nafees, A.; Javed, M.F.; Khan, S.; Nazir, K.; Farooq, F.; Aslam, F.; Musarat, M.A.; Vatin, N.I. Predictive Modeling of Mechanical Properties of Silica Fume-Based Green Concrete Using Artificial Intelligence Approaches: MLPNN, ANFIS, and GEP. Materials 2021, 14, 7531. [Google Scholar] [CrossRef]
  43. Ni, H.G.; Wang, J.Z. Prediction of Compressive Strength of Concrete by Neural Networks. Cem. Concr. Res. 2000, 30, 1245–1250. [Google Scholar] [CrossRef]
  44. Sobhani, J.; Najimi, M.; Pourkhorshidi, A.R.; Parhizkar, T. Prediction of the Compressive Strength of No-Slump Concrete: A Comparative Study of Regression, Neural Network and ANFIS Models. Constr. Build. Mater. 2010, 24, 709–718. [Google Scholar] [CrossRef]
  45. Huang, J.; Zhang, J.; Gao, Y. Evaluating the Clogging Behavior of Pervious Concrete (PC) Using the Machine Learning Techniques. CMES-Comput. Model. Eng. Sci. 2022, 130, 805–821. [Google Scholar] [CrossRef]
  46. Huang, J.; Zhou, M.; Yuan, H.; Sabri, M.M.S.; Li, X. Prediction of the Compressive Strength for Cement-Based Materials with Metakaolin Based on the Hybrid Machine Learning Method. Materials 2022, 15, 3500. [Google Scholar] [CrossRef]
  47. Awoyera, P.O.; Kirgiz, M.S.; Viloria, A.; Ovallos-Gazabon, D. Estimating Strength Properties of Geopolymer Self-Compacting Concrete Using Machine Learning Techniques. J. Mater. Res. Technol. 2020, 9, 9016–9028. [Google Scholar] [CrossRef]
  48. Hodhod, O.A.; Ahmed, H.I. Modeling the Corrosion Initiation Time of Slag Concrete Using the Artificial Neural Network. HBRC J. 2014, 10, 231–234. [Google Scholar] [CrossRef]
  49. Bal, L.; Buyle-Bodin, F. Artificial Neural Network for Predicting Drying Shrinkage of Concrete. Constr. Build. Mater. 2013, 38, 248–254. [Google Scholar] [CrossRef]
  50. Ben Chaabene, W.; Flah, M.; Nehdi, M.L. Machine Learning Prediction of Mechanical Properties of Concrete: Critical Review. Constr. Build. Mater. 2020, 260, 119889. [Google Scholar] [CrossRef]
  51. Huang, J.; Sabri, M.M.S.; Ulrikh, D.V.; Ahmad, M.; Alsaffar, K.A.M. Predicting the Compressive Strength of the Cement-Fly Ash–Slag Ternary Concrete Using the Firefly Algorithm (Fa) and Random Forest (Rf) Hybrid Machine-Learning Method. Materials 2022, 15, 4193. [Google Scholar] [CrossRef]
  52. Sonebi, M.; Cevik, A.; Grünewald, S.; Walraven, J. Modelling the Fresh Properties of Self-Compacting Concrete Using Support Vector Machine Approach. Constr. Build. Mater. 2016, 106, 55–64. [Google Scholar] [CrossRef]
  53. Kalman Šipoš, T.; Miličević, I.; Siddique, R. Model for Mix Design of Brick Aggregate Concrete Based on Neural Network Modelling. Constr. Build. Mater. 2017, 148, 757–769. [Google Scholar] [CrossRef]
  54. Chou, J.S.; Tsai, C.F.; Pham, A.D.; Lu, Y.H. Machine Learning in Concrete Strength Simulations: Multi-Nation Data Analytics. Constr. Build. Mater. 2014, 73, 771–780. [Google Scholar] [CrossRef]
  55. Huang, J.; Leandri, P.; Cuciniello, G.; Losa, M. Mix Design and Laboratory Characterisation of Rubberised Mixture Used as Damping Layer in Pavements. Int. J. Pavement Eng. 2022, 23, 2746–2760. [Google Scholar] [CrossRef]
  56. Huang, J.; Duan, T.; Sun, Y.; Wang, L.; Lei, Y. Finite Element (Fe) Modeling of Indirect Tension to Cylindrical (It-Cy) Specimen Test for Damping Asphalt Mixtures (Dams). Adv. Civ. Eng. 2020, 2020, 6694180. [Google Scholar] [CrossRef]
  57. Behnood, A.; Behnood, V.; Modiri Gharehveran, M.; Alyamac, K.E. Prediction of the Compressive Strength of Normal and High-Performance Concretes Using M5P Model Tree Algorithm. Constr. Build. Mater. 2017, 142, 199–207. [Google Scholar] [CrossRef]
  58. Awoyera, P.O. Nonlinear Finite Element Analysis of Steel Fibre-Reinforced Concrete Beam under Static Loading. J. Eng. Sci. Technol. 2016, 11, 1669–1677. [Google Scholar]
  59. Nafees, A.; Khan, S.; Javed, M.F.; Alrowais, R.; Mohamed, A.M.; Mohamed, A.; Vatin, N.I. Forecasting the Mechanical Properties of Plastic Concrete Employing Experimental Data Using Machine Learning Algorithms: DT, MLPNN, SVM, and RF. Polymers 2022, 14, 1583. [Google Scholar] [CrossRef] [PubMed]
  60. Ilyas, I.; Zafar, A.; Afzal, M.T.; Javed, M.F.; Alrowais, R.; Althoey, F.; Mohamed, A.M.; Mohamed, A.; Vatin, N.I. Advanced Machine Learning Modeling Approach for Prediction of Compressive Strength of FRP Confined Concrete Using Multiphysics Genetic Expression Programming. Polymers 2022, 14, 1789. [Google Scholar] [CrossRef]
  61. Cavaleri, L.; Barkhordari, M.S.; Repapis, C.C.; Armaghani, D.J.; Ulrikh, D.V.; Asteris, P.G. Convolution-Based Ensemble Learning Algorithms to Estimate the Bond Strength of the Corroded Reinforced Concrete. Constr. Build. Mater. 2022, 359, 129504. [Google Scholar] [CrossRef]
  62. Barkhordari, M.S.; Armaghani, D.J.; Sabri, M.M.S.; Ulrikh, D.V.; Ahmad, M. The Efficiency of Hybrid Intelligent Models in Predicting Fiber-Reinforced Polymer Concrete Interfacial-Bond Strength. Materials 2022, 15, 3019. [Google Scholar] [CrossRef] [PubMed]
  63. Asteris, P.G.; Lourenço, P.B.; Roussis, P.C.; Adami, C.E.; Armaghani, D.J.; Cavaleri, L.; Chalioris, C.E.; Hajihassani, M.; Lemonis, M.E.; Mohammed, A.S. Revealing the Nature of Metakaolin-Based Concrete Materials Using Artificial Intelligence Techniques. Constr. Build. Mater. 2022, 322, 126500. [Google Scholar] [CrossRef]
  64. Barkhordari, M.S.; Armaghani, D.J.; Mohammed, A.S.; Ulrikh, D.V. Data-Driven Compressive Strength Prediction of Fly Ash Concrete Using Ensemble Learner Algorithms. Buildings 2022, 12, 132. [Google Scholar] [CrossRef]
  65. Liao, J.; Asteris, P.G.; Cavaleri, L.; Mohammed, A.S.; Lemonis, M.E.; Tsoukalas, M.Z.; Skentou, A.D.; Maraveas, C.; Koopialipoor, M.; Armaghani, D.J. Novel Fuzzy-Based Optimization Approaches for the Prediction of Ultimate Axial Load of Circular Concrete-Filled Steel Tubes. Buildings 2021, 11, 629. [Google Scholar] [CrossRef]
  66. Shahmansouri, A.A.; Akbarzadeh Bengar, H.; Ghanbari, S. Compressive Strength Prediction of Eco-Efficient GGBS-Based Geopolymer Concrete Using GEP Method. J. Build. Eng. 2020, 31, 101326. [Google Scholar] [CrossRef]
  67. Apostolopoulou, M.; Asteris, P.G.; Armaghani, D.J.; Douvika, M.G.; Lourenço, P.B.; Cavaleri, L.; Bakolas, A.; Moropoulou, A. Mapping and Holistic Design of Natural Hydraulic Lime Mortars. Cem. Concr. Res. 2020, 136, 106167. [Google Scholar] [CrossRef]
  68. Asteris, P.G.; Armaghani, D.J.; Hatzigeorgiou, G.D.; Karayannis, C.G.; Pilakoutas, K. Predicting the Shear Strength of Reinforced Concrete Beams Using Artificial Neural Networks. Comput. Concr. 2019, 24, 469–488. [Google Scholar] [CrossRef]
  69. Koopialipoor, M.; Asteris, P.G.; Salih Mohammed, A.; Alexakis, D.E.; Mamou, A.; Armaghani, D.J. Introducing Stacking Machine Learning Approaches for the Prediction of Rock Deformation. Transp. Geotech. 2022, 34, 100756. [Google Scholar] [CrossRef]
  70. Armaghani, D.J.; Asteris, P.G. A Comparative Study of ANN and ANFIS Models for the Prediction of Cement-Based Mortar Materials Compressive Strength. Neural Comput. Appl. 2021, 33, 4501–4532. [Google Scholar] [CrossRef]
  71. Öztaş, A.; Pala, M.; Özbay, E.; Kanca, E.; Çaǧlar, N.; Bhatti, M.A. Predicting the Compressive Strength and Slump of High Strength Concrete Using Neural Network. Constr. Build. Mater. 2006, 20, 769–775. [Google Scholar] [CrossRef]
  72. Saridemir, M. Predicting the Compressive Strength of Mortars Containing Metakaolin by Artificial Neural Networks and Fuzzy Logic. Adv. Eng. Softw. 2009, 40, 920–927. [Google Scholar] [CrossRef]
  73. Shafabakhsh, G.H.; Ani, O.J.; Talebsafa, M. Artificial Neural Network Modeling (ANN) for Predicting Rutting Performance of Nano-Modified Hot-Mix Asphalt Mixtures Containing Steel Slag Aggregates. Constr. Build. Mater. 2015, 85, 136–143. [Google Scholar] [CrossRef]
  74. Kiani, B.; Gandomi, A.H.; Sajedi, S.; Liang, R.Y. New Formulation of Compressive Strength of Preformed-Foam Cellular Concrete: An Evolutionary Approach. J. Mater. Civ. Eng. 2016, 28, 04016092. [Google Scholar] [CrossRef]
  75. Gholampour, A.; Mansouri, I.; Kisi, O.; Ozbakkaloglu, T. Evaluation of Mechanical Properties of Concretes Containing Coarse Recycled Concrete Aggregates Using Multivariate Adaptive Regression Splines (MARS), M5 Model Tree (M5Tree), and Least Squares Support Vector Regression (LSSVR) Models. Neural Comput. Appl. 2020, 32, 295–308. [Google Scholar] [CrossRef]
  76. Young, B.A.; Hall, A.; Pilon, L.; Gupta, P.; Sant, G. Can the Compressive Strength of Concrete Be Estimated from Knowledge of the Mixture Proportions? New Insights from Statistical Analysis and Machine Learning Methods. Cem. Concr. Res. 2019, 115, 379–388. [Google Scholar] [CrossRef]
  77. Duan, Z.H.; Kou, S.C.; Poon, C.S. Prediction of Compressive Strength of Recycled Aggregate Concrete Using Artificial Neural Networks. Constr. Build. Mater. 2013, 40, 1200–1206. [Google Scholar] [CrossRef]
  78. Veloso de Melo, V.; Banzhaf, W. Improving the Prediction of Material Properties of Concrete Using Kaizen Programming with Simulated Annealing. Neurocomputing 2017, 246, 25–44. [Google Scholar] [CrossRef]
  79. Yeh, I.C.; Lien, L.C. Knowledge Discovery of Concrete Material Using Genetic Operation Trees. Expert Syst. Appl. 2009, 36, 5807–5812. [Google Scholar] [CrossRef]
  80. Tran, V.T.; Le, M.H.; Vo, M.T.; Le, Q.T.; Hoang, V.H.; Tran, N.-T.; Nguyen, V.-T.; Nguyen, T.-A.-T.; Nguyen, H.N.; Nguyen, V.T.T. Optimization Design for Die-Sinking EDM Process Parameters Employing Effective Intelligent Method. Cogent Eng. 2023, 10, 2264060. [Google Scholar] [CrossRef]
  81. Huynh, N.-T.; Nguyen, T.V.T.; Tam, N.T.; Nguyen, Q.-M. Optimizing Magnification Ratio for the Flexible Hinge Displacement Amplifier Mechanism Design. In Proceedings of the 2nd Annual International Conference on Material, Machines and Methods for Sustainable Development (MMMS2020), Nha Trang, Vietnam, 12–15 November 2020; pp. 769–778. [Google Scholar]
  82. Liaw, A.; Wiener, M. Classification and Regression by RandomForest. R News 2002, 2, 18–22. [Google Scholar]
  83. Huang, J.; Sun, Y. Viscoelastic Analysis of the Damping Asphalt Mixtures (DAMs) Made with a High Content of Asphalt Rubber (AR). Adv. Civ. Eng. 2020, 2020, 8826926. [Google Scholar] [CrossRef]
  84. Huang, J.; Duan, T.; Lei, Y.; Hasanipanah, M. Finite Element Modeling for the Antivibration Pavement Used to Improve the Slope Stability of the Open-Pit Mine. Shock Vib. 2020, 2020, 6650780. [Google Scholar] [CrossRef]
  85. Huang, J.; Zhang, J.; Li, X.; Qiao, Y.; Zhang, R.; Kumar, G.S. Investigating the Effects of Ensemble and Weight Optimization Approaches on Neural Networks’ Performance to Estimate the Dynamic Modulus of Asphalt Concrete. Road Mater. Pavement Des. 2022, 24, 1939–1959. [Google Scholar] [CrossRef]
  86. Schapire, R.E. Explaining Adaboost. In Empirical Inference: Festschrift in Honor of Vladimir N. Vapnik; Springer: Berlin/Heidelberg, Germany, 2013; pp. 37–52. [Google Scholar]
  87. Yan, D.; Zhou, Q.; Wang, J.; Zhang, N. Bayesian Regularisation Neural Network Based on Artificial Intelligence Optimisation. Int. J. Prod. Res. 2017, 55, 2266–2287. [Google Scholar] [CrossRef]
  88. Ampomah, E.K.; Qin, Z.; Nyame, G. Evaluation of Tree-Based Ensemble Machine Learning Models in Predicting Stock Price Direction of Movement. Information 2020, 11, 332. [Google Scholar] [CrossRef]
  89. Huang, J.; Losa, M.; Leandri, P.; Kumar, S.G.; Zhang, J.; Sun, Y. Potential Anti-Vibration Pavements with Damping Layer: Finite Element (FE) Modeling, Validation, and Parametrical Studies. Constr. Build. Mater. 2021, 281, 122550. [Google Scholar] [CrossRef]
  90. Huang, J.; Li, X.; Zhang, J.; Sun, Y.; Ren, J. Determining the Rayleigh Damping Parameters of Flexible Pavements for Finite Element Modeling. J. Vib. Control 2022, 28, 3181–3194. [Google Scholar] [CrossRef]
  91. Dorogush, A.V.; Ershov, V.; Gulin, A. CatBoost: Gradient Boosting with Categorical Features Support. arXiv 2018, arXiv:1810.11363. [Google Scholar]
  92. Zhu, F.; Wu, X.; Zhou, M.; Sabri, M.M.S.; Huang, J. Intelligent Design of Building Materials: Development of an Ai-Based Method for Cement-Slag Concrete Design. Materials 2022, 15, 3833. [Google Scholar] [CrossRef]
  93. Huang, J.; Zhou, M.; Yuan, H.; Sabri, M.M.S.; Li, X. Towards Sustainable Construction Materials: A Comparative Study of Prediction Models for Green Concrete with Metakaolin. Buildings 2022, 12, 772. [Google Scholar] [CrossRef]
  94. Patel, J.; Shah, S.; Thakkar, P.; Kotecha, K. Predicting Stock and Stock Price Index Movement Using Trend Deterministic Data Preparation and Machine Learning Techniques. Expert Syst. Appl. 2015, 42, 259–268. [Google Scholar] [CrossRef]
  95. Huang, J.; Sun, Y. Effect of Modifiers on the Rutting, Moisture-Induced Damage, and Workability Properties of Hot Mix Asphalt Mixtures. Appl. Sci. 2020, 10, 7145. [Google Scholar] [CrossRef]
  96. Huang, J.; Suhatril, M.; Baharom, S.; Alaskar, A.; Assilzadeh, H. Influence of Porosity and Cement Grade on Concrete Mechanical Properties. Adv. Concr. Constr. 2020, 10, 393–402. [Google Scholar]
  97. Huang, J.; Li, X.; Kumar, G.S.; Deng, Y.; Gong, M.; Dong, N. Rheological Properties of Bituminous Binder Modified with Recycled Waste Toner. J. Clean. Prod. 2021, 317, 128415. [Google Scholar] [CrossRef]
  98. Huang, J.; Xue, J. Optimization of Svr Functions for Flyrock Evaluation in Mine Blasting Operations. Environ. Earth Sci. 2022, 81, 434. [Google Scholar] [CrossRef]
  99. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  100. Hosseini, S.; Mousavi, A.; Monjezi, M.; Khandelwal, M. Mine-to-Crusher Policy: Planning of Mine Blasting Patterns for Environmentally Friendly and Optimum Fragmentation Using Monte Carlo Simulation-Based Multi-Objective Grey Wolf Optimization Approach. Resour. Policy 2022, 79, 103087. [Google Scholar] [CrossRef]
  101. Chantar, H.; Mafarja, M.; Alsawalqah, H.; Heidari, A.A.; Aljarah, I.; Faris, H. Feature Selection Using Binary Grey Wolf Optimizer with Elite-Based Crossover for Arabic Text Classification. Neural Comput. Appl. 2020, 32, 12201–12220. [Google Scholar] [CrossRef]
  102. Yang, B.; Zhang, X.; Yu, T.; Shu, H.; Fang, Z. Grouped Grey Wolf Optimizer for Maximum Power Point Tracking of Doubly-Fed Induction Generator Based Wind Turbine. Energy Convers. Manag. 2017, 133, 427–443. [Google Scholar] [CrossRef]
  103. Anjum, M.; Khan, K.; Ahmad, W.; Ahmad, A.; Amin, M.N.; Nafees, A. Application of Ensemble Machine Learning Methods to Estimate the Compressive Strength of Fiber-Reinforced Nano-Silica Modified Concrete. Polymers 2022, 14, 3906. [Google Scholar] [CrossRef]
  104. Ashrafian, A.; Amiri, M.J.T.; Rezaie-Balf, M.; Ozbakkaloglu, T.; Lotfi-Omran, O. Prediction of Compressive Strength and Ultrasonic Pulse Velocity of Fiber Reinforced Concrete Incorporating Nano Silica Using Heuristic Regression Methods. Constr. Build. Mater. 2018, 190, 479–494. [Google Scholar] [CrossRef]
  105. Salemi, N.; Behfarnia, K. Effect of Nano-Particles on Durability of Fiber-Reinforced Concrete Pavement. Constr. Build. Mater. 2013, 48, 934–941. [Google Scholar] [CrossRef]
  106. Fallah, S.; Nematzadeh, M. Mechanical Properties and Durability of High-Strength Concrete Containing Macro-Polymeric and Polypropylene Fibers with Nano-Silica and Silica Fume. Constr. Build. Mater. 2017, 132, 170–187. [Google Scholar] [CrossRef]
  107. Sadrmomtazi, A.; Fasihi, A. Influence of Polypropylene Fibers on the Performance of Nano-SiO2-Incorporated Mortar. 2010. Available online: https://journals.shirazu.ac.ir/article_690.html (accessed on 28 December 2023).
  108. Wang, X.; Hosseini, S.; Jahed Armaghani, D.; Tonnizam Mohamad, E. Data-Driven Optimized Artificial Neural Network Technique for Prediction of Flyrock Induced by Boulder Blasting. Mathematics 2023, 11, 2358. [Google Scholar] [CrossRef]
  109. Bakhtavar, E.; Hosseini, S.; Hewage, K.; Sadiq, R. Air Pollution Risk Assessment Using a Hybrid Fuzzy Intelligent Probability-Based Approach: Mine Blasting Dust Impacts. Nat. Resour. Res. 2021, 30, 2607–2627. [Google Scholar] [CrossRef]
  110. Hosseini, S.; Mousavi, A.; Monjezi, M. Prediction of Blast-Induced Dust Emissions in Surface Mines Using Integration of Dimensional Analysis and Multivariate Regression Analysis. Arab. J. Geosci. 2022, 15, 163. [Google Scholar] [CrossRef]
  111. Bakhtavar, E.; Hosseini, S.; Hewage, K.; Sadiq, R. Green Blasting Policy: Simultaneous Forecast of Vertical and Horizontal Distribution of Dust Emissions Using Artificial Causality-Weighted Neural Network. J. Clean. Prod. 2021, 283, 124562. [Google Scholar] [CrossRef]
  112. Hosseini, S.; Monjezi, M.; Bakhtavar, E.; Mousavi, A. Prediction of Dust Emission Due to Open Pit Mine Blasting Using a Hybrid Artificial Neural Network. Nat. Resour. Res. 2021, 30, 4773–4788. [Google Scholar] [CrossRef]
  113. Hosseini, S.; Monjezi, M.; Bakhtavar, E. Minimization of Blast-Induced Dust Emission Using Gene-Expression Programming and Grasshopper Optimization Algorithm: A Smart Mining Solution Based on Blasting Plan Optimization. Clean. Technol. Environ. Policy 2022, 24, 2313–2328. [Google Scholar] [CrossRef]
  114. Hosseini, S.; Poormirzaee, R.; Hajihassani, M.; Kalatehjari, R. An ANN-Fuzzy Cognitive Map-Based Z-Number Theory to Predict Flyrock Induced by Blasting in Open-Pit Mines. Rock. Mech. Rock. Eng. 2022, 55, 4373–4390. [Google Scholar] [CrossRef]
  115. Huang, J.; Zhou, M.; Zhang, J.; Ren, J.; Vatin, N.I.; Sabri, M.M.S. Development of a New Stacking Model to Evaluate the Strength Parameters of Concrete Samples in Laboratory. Iran. J. Sci. Technol. Trans. Civ. Eng. 2022, 46, 4355–4370. [Google Scholar] [CrossRef]
  116. Huang, J.; Zhou, M.; Zhang, J.; Ren, J.; Vatin, N.I.; Sabri, M.M.S. The Use of GA and PSO in Evaluating the Shear Strength of Steel Fiber Reinforced Concrete Beams. KSCE J. Civ. Eng. 2022, 26, 3918–3931. [Google Scholar] [CrossRef]
  117. Zhang, R.; Chen, H.; Guo, H.; Zhou, M.; Jiandong, H. Development of a Prior Model to Predict the Cracking Performance of Asphalt Mixture in General for Asphalt Material Selection and Mix Design. Int. J. Pavement Eng. 2023, 24, 2251080. [Google Scholar] [CrossRef]
  118. Zhao, J.; Hosseini, S.; Chen, Q.; Armaghani, D.J. Super Learner Ensemble Model: A Novel Approach for Predicting Monthly Copper Price in Future. Resour. Policy 2023, 85, 103903. [Google Scholar] [CrossRef]
  119. Hosseini, S.; Khatti, J.; Taiwo, B.O.; Fissha, Y.; Grover, K.S.; Ikeda, H.; Pushkarna, M.; Berhanu, M.; Ali, M. Assessment of the Ground Vibration during Blasting in Mining Projects Using Different Computational Approaches. Sci. Rep. 2023, 13, 18582. [Google Scholar] [CrossRef]
  120. Hosseini, S.; Poormirzaee, R.; Gilani, S.-O.; Jiskani, I.M. A Reliability-Based Rock Engineering System for Clean Blasting: Risk Analysis and Dust Emissions Forecasting. Clean Technol. Environ. Policy 2023, 25, 1903–1920. [Google Scholar] [CrossRef]
  121. Hosseini, S.; Pourmirzaee, R.; Armaghani, D.J.; Sabri Sabri, M.M. Prediction of Ground Vibration Due to Mine Blasting in a Surface Lead–Zinc Mine Using Machine Learning Ensemble Techniques. Sci. Rep. 2023, 13, 6591. [Google Scholar] [CrossRef]
  122. Lawal, A.I.; Hosseini, S.; Kim, M.; Ogunsola, N.O.; Kwon, S. Prediction of Factor of Safety of Slopes Using Stochastically Modified ANN and Classical Methods: A Rigorous Statistical Model Selection Approach. Nat. Hazards 2023, 1–22. [Google Scholar] [CrossRef]
  123. Wang, Q.; Qi, J.; Hosseini, S.; Rasekh, H.; Huang, J. ICA-LightGBM Algorithm for Predicting Compressive Strength of Geo-Polymer Concrete. Buildings 2023, 13, 2278. [Google Scholar] [CrossRef]
  124. Hosseini, S.; Poormirzaee, R.; Hajihassani, M. Application of Reliability-Based Back-Propagation Causality-Weighted Neural Networks to Estimate Air-Overpressure Due to Mine Blasting. Eng. Appl. Artif. Intell. 2022, 115, 105281. [Google Scholar] [CrossRef]
  125. Hosseini, S.; Pourmirzaee, R. Green Policy for Managing Blasting Induced Dust Dispersion in Open-Pit Mines Using Probability-Based Deep Learning Algorithm. Expert. Syst. Appl. 2023, 240, 122469. [Google Scholar] [CrossRef]
  126. Hosseini, S.; Javanshir, S.; Sabeti, H.; Tahmasebizadeh, P. Mathematical-Based Gene Expression Programming (GEP): A Novel Model to Predict Zinc Separation from a Bench-Scale Bioleaching Process. J. Sustain. Metall. 2023, 9, 1601–1619. [Google Scholar] [CrossRef]
  127. Bolón-Canedo, V.; Remeseiro, B. Feature Selection in Image Analysis: A Survey. Artif. Intell. Rev. 2020, 53, 2905–2931. [Google Scholar] [CrossRef]
  128. Kabir, H.; Garg, N. Machine Learning Enabled Orthogonal Camera Goniometry for Accurate and Robust Contact Angle Measurements. Sci. Rep. 2023, 13, 1497. [Google Scholar] [CrossRef]
  129. Hosseini, S.; Poormirzaee, R.; Hajihassani, M. An Uncertainty Hybrid Model for Risk Assessment and Prediction of Blast-Induced Rock Mass Fragmentation. Int. J. Rock Mech. Min. Sci. 2022, 160, 105250. [Google Scholar] [CrossRef]
Figure 1. Concrete sculptures designed by the authors.
Figure 2. Particle sizes and specific surface area (SSA) of cementitious materials [57].
Figure 3. Tree branch growth: level-wise versus leaf-wise.
Figure 7. Correlation matrix of the input parameters and CSFrRNSC.
Figure 8. Violin plots of the input parameters and CSFrRNSC.
Figure 9. Hyperparameter optimization of the GWO-LightGBM, GWO-XGBoost, GWO-CatBoost, and GWO-AdaBoost models.
Figure 10. RMSE values of the developed models in the training and testing phases.
Figure 11. R2 values of the developed models in the training and testing phases.
Figure 12. Correlation between estimated and measured CSFrRNSC for the CatBoost model.
Figure 13. Correlation between estimated and measured CSFrRNSC for the AdaBoost model.
Figure 14. Correlation between estimated and measured CSFrRNSC for the LightGBM model.
Figure 15. Correlation between estimated and measured CSFrRNSC for the XGBoost model.
Figure 16. Correlation between estimated and measured CSFrRNSC for the GWO-CatBoost model.
Figure 17. Correlation between estimated and measured CSFrRNSC for the GWO-AdaBoost model.
Figure 18. Correlation between estimated and measured CSFrRNSC for the GWO-LightGBM model.
Figure 19. Correlation between estimated and measured CSFrRNSC for the GWO-XGBoost model.
Figure 20. Comparison of estimated and measured CSFrRNSC on the train and test data points for the GWO-XGBoost model.
Figure 21. Taylor diagrams of the trained models for the train (left) and test (right) sets.
Figure 22. Violin diagrams of the developed models for the train (left) and test (right) data points.
Figure 23. The impact of each influential variable on CSFrRNSC.
Table 1. Descriptive statistics of datasets [104,105,106,107].
Variable (Unit, Notation) | Min | Ave | Max | StD | Skewness | Kurtosis
Input: Fiber volume (%, FV) | 0 | 0.198 | 0.9 | 0.185 | 1.974 | 5.204
Input: CA/FA (-, CF) | 0.874 | 0.906 | 1.135 | 0.060 | 2.382 | 6.035
Input: w/b (-, w/b) | 0.31 | 0.408 | 0.48 | 0.041 | 0.509 | 0.236
Input: Nano-silica (kg/m3, NS) | 0 | 21.214 | 49.6 | 17.303 | 0.317 | −1.084
Input: SP/B (-, SP/B) | 0.005 | 0.017 | 0.025 | 0.006 | −1.176 | −0.396
Input: Age (day, Age) | 7 | 41.651 | 120 | 38.252 | 0.785 | −0.930
Output: Compressive strength of fiber-reinforced nano-silica concrete (MPa, CSFrRNSC) | 19.1 | 66.483 | 91.2 | 17.829 | −0.838 | −0.243
StD: standard deviation. Min: minimum. Max: maximum. Ave: average.
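Summary statistics of this kind can be regenerated for any candidate mix-design dataset with standard data-analysis tooling. The following is a minimal sketch assuming a pandas DataFrame whose column names mirror the notation in Table 1; the file name used here is hypothetical, not the authors' data file.

```python
# Minimal sketch (hypothetical file and column names): reproduce a Table 1 style
# summary (Min, Ave, Max, StD, skewness, kurtosis) with pandas.
import pandas as pd

df = pd.read_csv("frrnsc_mix_data.csv")  # hypothetical CSV of mix-design records
cols = ["FV", "CF", "w/b", "NS", "SP/B", "Age", "CSFrRNSC"]

summary = pd.DataFrame({
    "Min": df[cols].min(),
    "Ave": df[cols].mean(),
    "Max": df[cols].max(),
    "StD": df[cols].std(),
    "Skewness": df[cols].skew(),       # adjusted Fisher-Pearson estimator in pandas
    "Kurtosis": df[cols].kurtosis(),   # excess kurtosis (0 for a normal distribution)
})
print(summary.round(3))
```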
Table 2. The details of training and testing data points.
Training Set:
Parameter | Min | Ave | Max | StD | Kurtosis | Skewness
FV | 0 | 0.2 | 0.5 | 0.129 | 0.566 | 0.723
CF | 0.874 | 0.882 | 0.973 | 0.026 | 7.457 | 3.051
w/b | 0.39 | 0.398 | 0.48 | 0.025 | 7.323 | 3.035
NS | 0 | 23.710 | 49.6 | 18.282 | −1.318 | 0.110
SP/B | 0.005 | 0.019 | 0.02 | 0.004 | 7.323 | −3.035
Age | 7 | 38.756 | 90 | 35.202 | −1.349 | 0.663
CSFrRNSC | 42.1 | 74.114 | 91.2 | 11.111 | 0.129 | −0.838
Testing Set:
Parameter | Min | Ave | Max | StD | Kurtosis | Skewness
FV | 0 | 0.193 | 0.9 | 0.295 | 2.140 | 1.876
CF | 0.905 | 0.977 | 1.135 | 0.074 | 0.778 | 1.334
w/b | 0.31 | 0.439 | 0.48 | 0.061 | 0.718 | −1.452
NS | 0 | 13.784 | 31.5 | 11.209 | −1.256 | 0.128
SP/B | 0.005 | 0.010 | 0.025 | 0.008 | −0.645 | 1.009
Age | 7 | 50.273 | 120 | 45.528 | −1.169 | 0.755
CSFrRNSC | 19.1 | 43.764 | 69.1 | 14.404 | −1.148 | 0.232
Table 3. Hyperparameter tuning in the GWO-LightGBM, GWO-XGBoost, GWO-CatBoost, and GWO-AdaBoost techniques.
Technique | Optimizer | Hyperparameters | Optimum values
LightGBM | GWO | learning_rate, n_estimators, max_depth, and reg_alpha | learning_rate = 0.005; n_estimators = 170; max_depth = 9; reg_alpha = 0.45
XGBoost | GWO | n_estimators, learning_rate, gamma, max_depth, min_child_weight, and reg_alpha | n_estimators = 100; learning_rate = 0.25; gamma = 0.6; max_depth = 2; min_child_weight = 5; reg_alpha = 1
CatBoost | GWO | learning_rate and n_estimators | learning_rate = 0.003; n_estimators = 300
AdaBoost | GWO | learning_rate and n_estimators | learning_rate = 0.001; n_estimators = 500
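To make the optimum values in Table 3 easier to reuse, the snippet below shows how the four tuned learners could be instantiated in Python once the GWO search has returned these hyperparameters. It is a minimal sketch assuming the standard xgboost, lightgbm, catboost, and scikit-learn regressor APIs; the GWO search loop and data handling are omitted, and X_train/X_test/y_train are hypothetical placeholders rather than the authors' code.

```python
# Minimal sketch (not the authors' code): instantiate the learners with the
# GWO-selected optima reported in Table 3.
from xgboost import XGBRegressor
from lightgbm import LGBMRegressor
from catboost import CatBoostRegressor
from sklearn.ensemble import AdaBoostRegressor

gwo_lightgbm = LGBMRegressor(learning_rate=0.005, n_estimators=170,
                             max_depth=9, reg_alpha=0.45)
gwo_xgboost = XGBRegressor(n_estimators=100, learning_rate=0.25, gamma=0.6,
                           max_depth=2, min_child_weight=5, reg_alpha=1)
gwo_catboost = CatBoostRegressor(learning_rate=0.003, n_estimators=300, verbose=0)
gwo_adaboost = AdaBoostRegressor(learning_rate=0.001, n_estimators=500)

# Each model is then fitted on the training split and evaluated on the test split, e.g.:
# gwo_xgboost.fit(X_train, y_train)
# y_pred = gwo_xgboost.predict(X_test)
```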
Table 4. Performance evaluation of developed models.
Train phase:
Technique | MAE | R2 | RMSE | VAF | Bias
CatBoost | 3.107 | 0.914 | 3.691 | 89.484 | 3.107
AdaBoost | 3.164 | 0.910 | 3.628 | 89.632 | 3.164
LightGBM | 2.674 | 0.929 | 3.040 | 92.593 | 2.674
XGBoost | 2.129 | 0.954 | 2.499 | 94.987 | 2.129
GWO-CatBoost | 2.880 | 0.923 | 3.322 | 91.114 | 2.880
GWO-AdaBoost | 2.669 | 0.938 | 3.106 | 93.090 | 2.669
GWO-LightGBM | 2.455 | 0.948 | 2.911 | 94.499 | 2.455
GWO-XGBoost | 1.653 | 0.971 | 1.933 | 96.960 | 1.653
Test phase:
Technique | MAE | R2 | RMSE | VAF | Bias
CatBoost | 3.184 | 0.930 | 3.838 | 92.782 | 3.184
AdaBoost | 4.294 | 0.905 | 4.858 | 88.709 | 4.294
LightGBM | 3.506 | 0.926 | 4.102 | 91.715 | 3.506
XGBoost | 3.746 | 0.930 | 4.232 | 91.387 | 3.746
GWO-CatBoost | 3.273 | 0.924 | 3.971 | 92.325 | 3.273
GWO-AdaBoost | 3.569 | 0.929 | 4.101 | 91.789 | 3.569
GWO-LightGBM | 3.036 | 0.937 | 3.606 | 93.600 | 3.036
GWO-XGBoost | 1.802 | 0.978 | 2.129 | 97.774 | 1.802
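The five indices reported in Table 4 can be reproduced directly from measured and predicted compressive strengths. The sketch below uses one common set of definitions; the VAF formula and the treatment of bias as the mean prediction error are conventional choices assumed here, not definitions quoted from the paper.

```python
# Minimal sketch of the five evaluation indices in Table 4, using common definitions
# (the VAF formula and mean-error bias are assumptions, not quotes from the paper).
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

def evaluate(y_true, y_pred):
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mae = mean_absolute_error(y_true, y_pred)
    rmse = np.sqrt(mean_squared_error(y_true, y_pred))
    r2 = r2_score(y_true, y_pred)
    vaf = (1.0 - np.var(y_true - y_pred) / np.var(y_true)) * 100.0  # variance accounted for, %
    bias = float(np.mean(y_pred - y_true))                          # mean error (one common choice)
    return {"MAE": mae, "R2": r2, "RMSE": rmse, "VAF": vaf, "Bias": bias}

# Example usage with hypothetical arrays:
# scores = evaluate(y_test, gwo_xgboost.predict(X_test))
```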
Table 5. Rating of the obtained statistical indices for selecting a superior model.
Technique | Train phase (MAE, R2, RMSE, VAF, Bias) | Test phase (MAE, R2, RMSE, VAF, Bias) | Total Rate | Rank
CatBoost | 2, 2, 1, 1, 2 | 6, 5, 6, 6, 6 | 37 | 6
AdaBoost | 1, 1, 2, 2, 1 | 1, 1, 1, 1, 1 | 12 | 8
LightGBM | 4, 4, 5, 4, 4 | 4, 3, 3, 3, 4 | 38 | 5
XGBoost | 7, 7, 7, 7, 7 | 2, 6, 2, 2, 2 | 49 | 3
GWO-CatBoost | 3, 3, 3, 3, 3 | 5, 2, 5, 5, 5 | 37 | 6
GWO-AdaBoost | 5, 5, 4, 5, 5 | 3, 4, 4, 4, 3 | 42 | 4
GWO-LightGBM | 6, 6, 6, 6, 6 | 7, 7, 7, 7, 7 | 65 | 2
GWO-XGBoost | 8, 8, 8, 8, 8 | 8, 8, 8, 8, 8 | 80 | 1
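Table 5 follows a simple scoring scheme: for every index in each phase, the eight models are rated from 1 (weakest) to 8 (strongest), and the ten ratings are summed into a total rate that sets the final rank. The sketch below is an illustrative reconstruction of that scheme, assuming higher is better for R2 and VAF and lower is better for MAE, RMSE, and the magnitude of bias; tie handling may differ from the authors' procedure.

```python
# Illustrative reconstruction of the Table 5 rating scheme (not the authors' code):
# each model receives a score of 1..n per index, and the scores are summed.
import pandas as pd

def rate_models(metrics: pd.DataFrame) -> pd.Series:
    """metrics: rows = models, columns = indices such as MAE, R2, RMSE, VAF, Bias."""
    higher_is_better = {"R2", "VAF"}
    scores = pd.DataFrame(index=metrics.index)
    for col in metrics.columns:
        values = metrics[col].abs() if col == "Bias" else metrics[col]
        # Best model gets the highest score: rank ascending for "higher is better"
        # indices, descending (largest error = score 1) for error-type indices.
        scores[col] = values.rank(ascending=(col in higher_is_better), method="min")
    return scores.sum(axis=1)

# Summing the train-phase and test-phase scores obtained from Table 4 in this way
# yields totals close to the "Total Rate" column, up to how ties are resolved.
```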
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

