Article

Novel Ensemble Tree Solution for Rockburst Prediction Using Deep Forest

1 School of Resources and Safety Engineering, Central South University, Changsha 410083, China
2 Department of Urban Planning, Engineering Networks and Systems, Institute of Architecture and Construction, South Ural State University, 76, Lenin Prospect, 454080 Chelyabinsk, Russia
* Author to whom correspondence should be addressed.
Submission received: 9 February 2022 / Revised: 23 February 2022 / Accepted: 25 February 2022 / Published: 1 March 2022

Abstract

The occurrence of rockburst can cause significant disasters in underground rock engineering, so it is crucial to predict and prevent rockburst in deep tunnels and mines. In this paper, the deficiencies of ensemble learning algorithms in rockburst prediction were investigated. To address these shortcomings, a novel machine learning model, the deep forest, was proposed to predict rockburst risk. The deep forest combines the characteristics of deep learning and ensemble models, enabling it to solve complex problems. To develop the deep forest model for rockburst prediction, 329 real rockburst cases were collected to build a comprehensive database for intelligent analysis, and Bayesian optimization was applied to tune the hyperparameters of the deep forest. As a result, the deep forest model achieved 100% training accuracy and 92.4% testing accuracy, showing a greater capability to forecast rockburst disasters than other widely used models (i.e., random forest, boosting tree models, neural networks, support vector machines, etc.). Sensitivity analysis revealed the impact of the variables on rockburst levels and the applicability of the deep forest with only a few input parameters. Finally, real rockburst cases from two gold mines in China were used for validation, with the required data sets prepared through field observations and laboratory tests. The promising results of the developed model during the validation phase confirm that practicing engineers can use it with a high level of accuracy to predict rockburst occurrences.

1. Introduction

Rockburst is a geological catastrophe induced by the sudden release of strain energy stored in a rock mass during or after the excavation of underground openings in high in-situ stress areas. Rockburst occurs in many countries around the world [1], and it is generally believed that the intensity and frequency of rockburst increase with depth. The occurrence of rockburst damages underground tunnels and facilities and poses a severe threat to the safety of operators on site. The gold mines in South Africa have greater mining depths than those in other countries, and most rockburst disasters occur there. In 1975, 73 laborers died in 680 rockburst incidents in 31 gold mines. From 1984 to 1993, 3275 laborers lost their lives in mining geological disasters owing to the lack of mining technology to cope with rockburst below 2000 m [2]. Rockburst is thus a complicated problem restricting the progress of underground engineering, and it is necessary for researchers to study how to prevent and control it.
Many scholars have taken various measures to evaluate rockburst risk. These methods include empirical indicators, numerical modeling, rock mechanics tests, intelligent techniques, etc. [1,3]. Xue et al. [4] adopted the empirical method to estimate the rockburst grade at the Jiangbian hydropower station, China. The empirical method is simple and easy to implement, but its accuracy is limited. Zhai et al. [5] carried out rockburst tests on six hard brittle rocks subjected to one-free-face true triaxial loading. Their results revealed that strength, fracturing, fragmentation characteristics, and failure modes had a remarkable impact on rockburst proneness. Owing to scale effects, experimental methods are suitable for investigating the failure process and mechanism of rockburst rather than predicting it [6,7,8]. Moreover, field conditions are difficult to reproduce in the laboratory. Wang et al. [9] summarized the numerical simulations, including continuum, discontinuum, and hybrid techniques, for rockburst evaluation. Numerical simulation is economical, safe, and time-saving [1,3,10]; nevertheless, an appropriate constitutive model and simulation method must be chosen for each specific problem. With the rapid development of artificial intelligence and big data, intelligent algorithms are increasingly used to predict rockburst. Compared to empirical, numerical, and experimental methods, intelligent models offer high efficiency and good practicability and can forecast and help prevent rockburst in time; however, they require high-quality data.
Machine learning (ML) algorithms are an essential part of intelligent techniques [11,12]. The ML algorithms for rockburst classification mainly include linear models (LM), decision trees (DT), artificial neural networks (ANN), k-nearest neighbors (KNN), Bayes classifiers, support vector machines (SVM), ensemble models, etc. Each ML model has its own strengths and drawbacks, and, per the 'No Free Lunch' theorem, no single model performs best on every practical engineering problem. Table 1 compiles recent ML techniques for rockburst estimation and compares their advantages and disadvantages.
To overcome the limitations of single ML models, some researchers have recently combined multiple intelligent techniques into ensemble models for rockburst estimation. Zhang et al. [25] combined seven extensively applied ML techniques using a voting strategy to construct an ensemble model, which outperformed the individual classifiers in rockburst prediction. Liang et al. [26] compared five DT-based ensemble models for forecasting short-term rockburst and found that RF was the optimal model. Liang et al. [27] utilized weighted voting to combine six intelligent techniques to forecast short-term rockburst; the combined model outperformed its base classifiers. Yin et al. [28] used stacking to integrate KNN, SVM, deep neural networks, and recurrent neural networks (RNN) for rockburst prediction, and the ensemble adopting KNN and RNN performed best among all the ensembles. Although these ensemble models show a high level of prediction accuracy, some practical problems [27,28] prevent them from being widely used.
(1)
Ensemble models are prone to overfitting. To obtain higher prediction accuracy, ensemble models such as RF become complex, which degrades their generalization [13].
(2)
The selection of the base classifiers and the combination strategy is difficult [25,27,28,29]. Different problems require different combination strategies, the base classifiers should be sufficiently diverse from one another, and both must be chosen carefully; otherwise, performance does not improve.
(3)
Many hyperparameters need to be tuned, and the parameter settings of an ensemble model have a significant effect on its capability.
To address the above limitations, this study proposes a novel methodology, the deep forest model, for rockburst prediction. Motivated by the theory behind deep neural networks and ensemble models, Zhou et al. [30] proposed the deep forest (DF), which combines the characteristics of deep learning models and ensemble models. Like deep learning models, DF can deal with complex problems, yet it has far fewer parameters to tune. The DF model is easy to use, and its complexity is determined automatically from the data, which effectively prevents overfitting; it is therefore also suitable for small data sets. Validation on data from different fields shows that the DF model performs well even with the default parameter configuration. It is thus meaningful to build a more powerful and robust intelligent model with DF to predict and prevent rockburst.
Additionally, Bayesian optimization (BO) is applied to optimize the hyperparameters of DF. BO has been widely used for hyperparameter optimization in different ML studies (e.g., [31,32,33]). Unlike many other optimization algorithms, BO is well suited to objective functions that are expensive to evaluate: it constructs a probabilistic model of the objective function to be optimized and then uses that model to determine the next point to be evaluated [34]. BO has also been increasingly applied in geotechnical engineering [31,35,36,37].
The structure of this study is as follows: the ‘Methodology’ section introduces the theory and composition of DF and BO. The ‘Data’ section presents the source and statistical description of the rockburst database. The ‘Simulation’ exhibits how to construct and optimize the DF model for rockburst prediction. In the ‘Discussion’ section, the capability of DF to predict rockburst is evaluated. Furthermore, the influence of variables on rockburst intensities is analyzed by sensitivity analysis. Finally, the DF is applied to forecast the rockburst in practical engineering.

2. Methodology

2.1. DF

Zhou et al. [38] presented gcForest to build the DF model, which consists of a cascade forest and multi-grained scanning. Multi-grained scanning is needed only when gcForest addresses sequential or image-style data. The rockburst data in this study have no spatial or sequential relationships, so the multi-grained scanning structure of gcForest is omitted.
RF and completely random forest (CRF) are the base classifiers in the DF model. RF is an ensemble model composed of K decision trees h(X, θ_k), k = 1, …, K, where the θ_k are independent and identically distributed random vectors [39]. Figure 1 shows the flowchart for building RF.
Similarly, CRF randomly selects a single feature for each split; otherwise its construction is the same as that of RF. As shown in Figure 2, RF and CRF are used to construct the cascade layers. The input features are fed to the first cascade layer; thereafter, the output of the previous cascade layer, concatenated with the input features, is fed to the next layer. At this step, the training set is divided into a growing set and an estimating set. Each time the cascade grows by one layer, the estimating set tests the whole generated DF model; if its performance is lower than that of the previous layer, the DF model stops growing and no further cascade layer is added. In the last layer, the average of all the output probability vectors is calculated, and the label with the maximum probability is output as the prediction result. Once the cascade stops growing, the DF model is retrained on the whole training set. The structure of the DF model is thus determined automatically, which reduces the risk of overfitting.
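The layer-growing procedure above can be sketched with off-the-shelf scikit-learn estimators. This is only a minimal illustration of the feature-augmentation idea on hypothetical toy data: ExtraTreesClassifier with max_features=1 stands in for the completely random forest, and the cross-validated class-vector generation and estimating-set stopping rule of the full algorithm are omitted.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier

# Hypothetical stand-in for the 7-feature, 4-class rockburst data.
X, y = make_classification(n_samples=300, n_features=7, n_informative=5,
                           n_classes=4, random_state=0)

def cascade_layer(X_aug, y, seed):
    """One cascade layer: an RF and a completely random forest (here
    approximated by extremely randomized trees) each emit a
    class-probability vector per sample."""
    rf = RandomForestClassifier(n_estimators=100, random_state=seed).fit(X_aug, y)
    crf = ExtraTreesClassifier(n_estimators=100, max_features=1,
                               random_state=seed).fit(X_aug, y)
    return np.hstack([rf.predict_proba(X_aug), crf.predict_proba(X_aug)])

# Layer 1 sees the raw features; layer 2 sees the raw features
# concatenated with the probability vectors produced by layer 1.
probs1 = cascade_layer(X, y, seed=1)
X_layer2 = np.hstack([X, probs1])
probs2 = cascade_layer(X_layer2, y, seed=2)
print(X.shape, X_layer2.shape)  # (300, 7) -> (300, 15): 7 raw + 2 forests x 4 classes
```

In practice, the authors of gcForest distribute a `deep-forest` Python package whose `CascadeForestClassifier` implements the full cascade, including automatic selection of the number of layers.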

2.2. BO

BO is appropriate for tasks with expensive evaluation costs [40]. BO consists of two parts, the surrogate model and the acquisition function [41]. The Gaussian process (GP) [42] is the most extensively applied surrogate model in BO due to its flexibility and tractability [40]. BO has three basic acquisition functions (AF), which are the probability of improvement [43], expected improvement [44,45], and upper/lower confidence bound. It is vital to choose an appropriate acquisition function to match the surrogate model. GP-Hedge is proposed to select an appropriate AF in each iteration, and detailed information about GP-Hedge can be found in previous studies [46].

3. Data

3.1. Data Collection and Description

The database, including 329 real rockburst cases worldwide, was established, as shown in Table 2. According to the criteria for the classification of rockburst in Table 3, rockburst levels can be grouped into four categories: none (53 cases), light (101 cases), moderate (119 cases), and strong (56 cases). In the collected database (Supplementary Materials), the number of light and moderate rockbursts is greater than that of none and strong rockbursts.
Rockburst often occurs at the excavation face in deep underground construction; it is induced by the sudden release of strain energy stored in the rock mass, and the most common phenomenon is strain burst. Rockburst mechanisms are complicated [47] and are related to crustal stress, rock properties, rock mass structure, groundwater, and so on [48]. In this study, seven factors, namely the maximum tangential stress (σ_θ), uniaxial compressive strength (σ_c), tensile strength (σ_t), elastic strain energy index (W_et), stress concentration factor (SCF = σ_θ/σ_c), rock brittleness index B1 (B1 = σ_c/σ_t), and rock brittleness index B2 (B2 = (σ_c − σ_t)/(σ_c + σ_t)), are adopted as the input variables of the DF model [13,28]. Table 4 displays the statistical description of the four rockburst intensities. Pearson correlation coefficients (Equation (1)) between the variables are calculated, as shown in Figure 3. Figure 4 exhibits the boxplots and histograms of the seven input variables for the four rockburst intensities. The boxplots are not symmetrical and many points lie outside the upper and lower whiskers, so the collected database does not follow a normal distribution.
r = Σᵢ₌₁ⁿ (Xᵢ − X̄)(Yᵢ − Ȳ) / √( Σᵢ₌₁ⁿ (Xᵢ − X̄)² · Σᵢ₌₁ⁿ (Yᵢ − Ȳ)² )    (1)
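Equation (1) is straightforward to implement; the sketch below, on made-up numbers, checks a hand-rolled version against NumPy's built-in correlation.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient, as in Equation (1)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    dx, dy = x - x.mean(), y - y.mean()
    return (dx * dy).sum() / np.sqrt((dx ** 2).sum() * (dy ** 2).sum())

# Made-up near-linear pair: r should be close to +1.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.0, 9.8]
print(round(pearson_r(x, y), 4))
print(np.isclose(pearson_r(x, y), np.corrcoef(x, y)[0, 1]))  # True
```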

3.2. Step-by-Step Study Flowchart

The database is divided into a training set (Tr) and a testing set (Te) according to three split ratios: Tr (75%)-Te (25%), Tr (80%)-Te (20%), and Tr (85%)-Te (15%). The Z-score (Equation (2)) is used to standardize the input parameters. As shown in Figure 5, the training part is applied to build the DF model for rockburst estimation. BO is implemented to tune the hyperparameters of the DF, and 5-fold cross-validation is used to choose the optimized DF model. The permutation feature importance algorithm and partial dependence plots are introduced to interpret the DF model. A sensitivity analysis is employed to analyze the effective variables on rockburst intensities and the robustness of the DF model. Finally, the intelligent model is applied to forecast rockburst in practical engineering.
X′ = (X − X̄) / σ    (2)
In Equation (2), X̄ denotes the average value and σ the standard deviation of the variable being standardized.
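A minimal sketch of Equation (2) applied column-wise, on made-up numbers; scikit-learn's StandardScaler performs the same transformation inside a pipeline.

```python
import numpy as np

def z_score(X):
    """Column-wise Z-score standardization, as in Equation (2)."""
    X = np.asarray(X, dtype=float)
    return (X - X.mean(axis=0)) / X.std(axis=0)

# Two made-up features on very different scales.
X = np.array([[10.0, 200.0],
              [20.0, 500.0],
              [30.0, 800.0]])
Xs = z_score(X)
print(Xs.mean(axis=0))  # each column now has mean ~0
print(Xs.std(axis=0))   # and standard deviation ~1
```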

4. Simulation

4.1. Model Metrics

Accuracy is often used as the metric in ML classification problems and is calculated with Equation (3). According to the actual and predicted labels, each sample can be classed as a true positive (TP), false positive (FP), true negative (TN), or false negative (FN), with the total number of samples equal to TP + FP + TN + FN. From these counts, precision and recall can be calculated, as shown in Equations (4) and (5). When one of precision and recall is higher, the other is usually lower, so their harmonic mean, f1, is taken to integrate them (Equation (6)). Additionally, the receiver operating characteristic (ROC) curve is introduced to evaluate the capability for single rockburst types. The area under the ROC curve lies between 0 and 1, and a larger area indicates a better prediction effect of the model.
Accuracy = n′ / n    (3)
precision = TP / (TP + FP)    (4)
recall = TP / (TP + FN)    (5)
f1 = 2 × precision × recall / (precision + recall)    (6)
In Equation (3), n represents the total number of samples and n′ stands for the number of instances whose predicted labels equal the actual labels.
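Equations (3)-(6) correspond directly to scikit-learn's metric functions. The sketch below uses made-up labels for a 4-class problem (0 = none, 1 = light, 2 = moderate, 3 = strong), with the per-class scores macro-averaged.

```python
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

# Hypothetical actual and predicted rockburst levels for ten cases.
y_true = [0, 1, 2, 3, 2, 1, 0, 3, 2, 1]
y_pred = [0, 1, 2, 3, 1, 1, 0, 3, 2, 2]

acc = accuracy_score(y_true, y_pred)                     # Equation (3)
prec = precision_score(y_true, y_pred, average="macro")  # Equation (4), per class, averaged
rec = recall_score(y_true, y_pred, average="macro")      # Equation (5)
f1 = f1_score(y_true, y_pred, average="macro")           # Equation (6)
print(acc)  # 8 of 10 labels match -> 0.8
```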

4.2. Cross-Validation

When the hyperparameters of an ML model are optimized, the generalization of the model must be evaluated to select the optimal configuration, and K-fold cross-validation is often adopted for this evaluation. In cross-validation, k is usually set to 10 or 5 [56]; in this study, k was set to 5, following a previous study [25], to reduce the running time. As shown in Figure 6, the training set is divided into five equal folds: four folds are used for training each time, and the remaining fold is used for validation. The process is repeated five times, and the average of the five validation scores is taken.
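The 5-fold scheme maps directly onto scikit-learn's cross_val_score. A minimal sketch on hypothetical stand-in data (a plain random forest substitutes for the DF model here):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical 7-feature, 4-class stand-in for the rockburst training set.
X, y = make_classification(n_samples=250, n_features=7, n_informative=5,
                           n_classes=4, random_state=0)

# cv=5: five train/validate splits, one validation score per split.
scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5)
print(len(scores), round(scores.mean(), 3))  # 5 scores and their average
```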

4.3. DF Optimization

σ_θ, σ_c, σ_t, SCF, B1, B2, and W_et were input to the DF to develop the rockburst prediction model. The cascade layers in the DF are determined automatically from the training set. The number of forests, the number of trees in each forest, and the maximum number of cascade layers are the key parameters that influence the performance of DF. Referring to the previous study by Zhou et al. [30], the optimization ranges of these hyperparameters were determined, as shown in Table 5.
BO was implemented to optimize the hyperparameters of the DF and choose an optimal model. Before performing BO, the objective function to be optimized needed to be defined. The cross-entropy loss function, shown in Equation (7), is commonly used for classification problems in ML; the smaller its value, the better the capability of the model. To improve the generalization of the DF model, the cross-entropy loss under 5-fold cross-validation was chosen as the objective function, as shown in Equation (8).
loss = −(1/n) Σᵢ₌₁ⁿ log p_model[yᵢ ∈ C_{yᵢ}]    (7)
Objective_function = (1/5) Σᵢ₌₁⁵ lossᵢ    (8)
In Equation (7), p_model[yᵢ ∈ C_{yᵢ}] is the predicted probability of the actual label of sample i. Equation (8) means that the training set is split into five folds, four folds are applied to train the DF model, and the cross-entropy loss of the DF on the remaining fold is calculated. The process is repeated five times, and the average of the five loss values is taken as the objective function.
BO was performed using Scikit-Optimize, an open-source Python library [57], with the library's default parameter values. Table 6 lists the BO settings used in this study: a Gaussian process was chosen as the surrogate model, and GP-Hedge was selected as the acquisition function. Figure 7 exhibits the flowchart in which BO tunes the hyperparameters of the DF model. In the BO process, GP-Hedge determines the next point to be evaluated, and the DF model is trained with the hyperparameter values it recommends. After the DF is built, the objective function is calculated and the GP model is updated. By repeating this process N times, the optimal hyperparameters are obtained. Figure 8 exhibits the convergence of the objective function during BO; BO efficiently minimizes the objective function to find the optimal DF model. Table 7 shows the optimal hyperparameters of the DF models with the different training sets at the end of BO.

4.4. Results

The optimized DF models with different training sets were obtained according to Table 7, and in three different training sets, the training accuracies of DF models were 100%. The remaining three testing sets were adopted to evaluate the capability of DF models. Table 8, Table 9 and Table 10 present the performances of DF models on the different testing sets. The three DF models developed by different training datasets had the same capabilities in predicting strong rockburst. In terms of testing accuracy, the DF model with Tr (80%)-Te (20%) has the best capacity. Accordingly, it is appropriate to develop DF models to predict rockburst by 80% training set and 20% testing set.

5. Discussion

5.1. Model Performance Comparison

The DF model consists of RF and CRF. To analyze the advantages of DF over its base classifiers, RF and CRF were also built independently with the same hyperparameters (Tr (80%)-Te (20%)) as in Table 7. The training accuracies of RF and CRF were 100% and 99.6%, respectively, and Table 11 and Table 12 display their testing performances. Compared with Table 9, the DF has higher testing accuracy than RF and CRF, which reveals that model combination can improve the capability for predicting rockburst. The ROC curve is introduced to compare the performance on each rockburst intensity for DF, RF, and CRF. Figure 9 exhibits the ROC curves of the four rockburst intensities for the three models; a larger area under the ROC curve indicates better model performance. DF, RF, and CRF perform similarly for strong rockburst prediction, but DF outperforms RF and CRF for none, light, and moderate rockburst prediction.
Additionally, to explore the power of the DF model, widely used ML models were also developed in this study, and they included LR, Naive Bayes, KNN, SVM, DT, adaptive boosting (AdaBoost), ANN, XGB, and GBM. XGB was built using the default parameters in XGBoost (a Python library) [51], and other models used the Scikit-learn [52] default parameters. Training set (80%) was applied to develop these models, and testing set (20%) was adopted to evaluate these models. Figure 10 displays the training and testing accuracy of these models. The DT suffers from serious overfitting, and its performance differs markedly between the training and testing sets. Many ensemble tree models, i.e., GBM, XGB, and RF, perform better than other ML models. Taylor diagrams [58,59] were introduced to determine the strength of the DF model compared to other models. In this study, Taylor diagrams combine the Matthews correlation coefficient (MCC), centered root mean square error (green dotted lines in Figure 11), and standard deviation into a polar diagram. Equation (9) shows the equation to calculate MCC. The reference points with black star shapes depict the actual rockburst, and when other points are closer to the reference points, the corresponding models have better performance in predicting rockburst. It is worth noting that the DF model outperforms other commonly used tree models in the training and testing sets, according to Figure 11.
Additionally, Table 13 compares the capabilities of DF and some intelligent models proposed by other scholars in recent years, and the DF model has better performance than other models during training and testing phases. The results reveal that the DF model is a powerful technique to forecast and prevent rockburst.
MCC = (TP × TN − FP × FN) / √( (TP + FP)(TP + FN)(TN + FP)(TN + FN) )    (9)
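For a binary confusion matrix, Equation (9) matches scikit-learn's matthews_corrcoef; the sketch below verifies this on made-up counts (the study uses MCC for the multi-class case, to which scikit-learn's implementation also generalizes).

```python
import math
from sklearn.metrics import matthews_corrcoef

# Made-up binary confusion counts.
TP, TN, FP, FN = 40, 35, 5, 10
mcc_manual = (TP * TN - FP * FN) / math.sqrt(
    (TP + FP) * (TP + FN) * (TN + FP) * (TN + FN))  # Equation (9)

# Rebuild label vectors that realize exactly those counts.
y_true = [1] * (TP + FN) + [0] * (TN + FP)
y_pred = [1] * TP + [0] * FN + [0] * TN + [1] * FP
print(round(mcc_manual, 4), round(matthews_corrcoef(y_true, y_pred), 4))
```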

5.2. Sensitivity Analysis

Permutation feature importance was implemented to determine the crucial variables affecting rockburst in the DF model. Permutation feature importance is useful for analyzing the relative importance of input variables in nonlinear or opaque ML models [60], and it is introduced here to score the input parameters. Figure 12 shows the importance scores of the input parameters in the DF model. According to Figure 12, σ_θ, W_et, and SCF are the vital parameters that influence the performance of rockburst prediction.
To determine the impact of the parameters on rockburst levels, partial dependence plots (PDP) [61] were introduced to analyze the relationship between the variables and the predicted results of the DF model, as shown in Figure 13. A PDP displays the dependence of the predicted probability of the different rockburst levels on the variable of interest while the other variables are fixed. With increasing σ_θ, W_et, and SCF, the predicted probability of a strong rockburst increases and the predicted probability of no rockburst decreases; there is no apparent relationship between the other variables and the predicted probabilities in the DF model. These results indicate that larger σ_θ, W_et, and SCF values are accompanied by more serious rockburst. Accordingly, it is vital to reduce σ_θ, W_et, and SCF in the surrounding rock of underground excavations to mitigate rockburst risk. Measures such as smooth blasting, pressure-relief blasting, and deformable bolts and mesh can be applied to prevent rockburst on-site [62].
Having identified the key parameters affecting rockburst intensities, it becomes possible to analyze how the performance of the DF model varies with different influential variables. According to the relative importance of the input variables, seven models adopting different input parameters were developed, as shown in Table 14; all seven used the same hyperparameters (Tr (80%)-Te (20%)) as in Table 7. Figure 14 displays the variation of training and testing accuracy as the number of input variables changes. As the number of input parameters decreases, the accuracy of DF declines, which suggests that, given the complexity of rockburst, considering more factors is beneficial to improving the generalization of DF and to predicting and preventing rockburst. Nevertheless, with only three input parameters the testing accuracy of the DF model is still 81.82%, which indicates that the DF model is robust and remains applicable when few input parameters are available.

5.3. Engineering Validation

The Xincheng Gold Mine and Sanshandao Gold Mine are located in Yantai, Shandong, China; Figure 15 presents their locations. After years of mining production, the two gold mines have reached mining depths of more than 1000 m. To meet production needs, it is necessary to excavate ore in deeper strata. Owing to the complexity of deep high stress and geological conditions, many engineering problems are inevitably faced in deep mining operations, among which rockburst poses a severe threat to the safety of facilities and workers. A series of field investigations and rock mechanics tests were carried out in the two gold mines to avoid the threat of rockburst to shaft construction. Seven rock blocks were taken from different locations in the Xincheng Gold Mine, and five rock blocks were taken from the Sanshandao Gold Mine. These blocks were processed into standard specimens for rock mechanics tests and in-situ stress analysis [63]. Table 15 presents the rock parameters used to forecast rockburst. According to the standard of classification for rockburst intensities in Table 3, the rockburst level of each site was determined. The DF model was then used to evaluate the rockburst, and Table 15 shows the predicted results, which are consistent with the actual rockburst situations. These results suggest that the DF model has superior engineering practicability.

6. Limitations and Future Studies

Rockburst is a complex geological disaster that is related to many factors, such as geological structure, in-situ stress conditions, rock strength, excavation method, excavation size, etc. However, in this study, only seven related parameters were considered to build the intelligent model for rockburst prediction. Other parameters, such as rock quality index, rock integrity coefficient, and the geometric size of the cross-section of the excavation, are also essential to determine rockburst level. In the future, more influential factors should be considered to add to the database. Additionally, increasing the size of the database contributes to building a more powerful intelligent rockburst prediction model. As for the DF model, to apply DF to predict rockburst in different engineering problems, it is necessary to tune the hyperparameters of DF according to the size and complexity of data.

7. Conclusions

(1)
Deep forest, a novel tree-based ensemble model, was proposed to build a rockburst prediction model based on 329 collected real rockburst cases. Bayesian optimization was used to tune the hyperparameters of the DF. The DF achieved 100% accuracy on the training set and 92.4% accuracy on the testing set, performed better than other ML models, and can forecast rockburst disasters.
(2)
σ θ and W e t are the essential parameters that affect the performance of the DF model for rockburst prediction. Sensitivity analysis reveals that more factors can be taken into account to build a more accurate rockburst prediction model for the complexity of rockburst. Moreover, it also confirms that the proposed DF model has good performance with fewer input parameters.
(3)
A field investigation was carried out in the Xincheng Gold Mine and Sanshandao Gold mine, Shandong, China, and the collected rock blocks were tested in the laboratory. The obtained parameters were input into the trained DF model, and the predicted results matched the rockburst situation on site. The validation datasets from gold mines can expand the rockburst database to establish more powerful models.
(4)
The DF model was trained on datasets from mines, tunnels, large underground chambers, traffic tunnels, etc., and was validated with cases from two gold mines. Accordingly, the proposed DF model is applicable with a high level of accuracy not only to gold mines but also to other deep mining and underground excavation engineering.

Supplementary Materials

The collected rockburst database can be downloaded at: https://0-www-mdpi-com.brum.beds.ac.uk/article/10.3390/math10050787/s1, Table S1: Collected rockburst database.

Author Contributions

Conceptualization, D.L. and Z.L.; methodology, Z.L.; software, Z.L.; validation, P.X.; investigation, P.X.; resources, D.L.; writing—original draft preparation, Z.L. and D.L.; writing—review and editing, D.L., D.J.A. and J.Z.; visualization, Z.L.; supervision, D.L.; project administration, D.L.; funding acquisition, D.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Natural Science Foundation of China (Grant No.:52074349).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhou, J.; Li, X.; Mitri, H.S. Evaluation method of rockburst: State-of-the-art literature review. Tunn. Undergr. Space Technol. 2018, 81, 632–659. [Google Scholar] [CrossRef]
  2. Cai, M. Prediction and prevention of rockburst in metal mines–A case study of Sanshandao gold mine. J. Rock Mech. Geotech. Eng. 2016, 8, 204–211. [Google Scholar] [CrossRef]
  3. Afraei, S.; Shahriar, K.; Madani, S.H. Developing intelligent classification models for rock burst prediction after recognizing significant predictor variables, Section 1: Literature review and data preprocessing procedure. Tunn. Undergr. Space Technol. 2019, 83, 324–353. [Google Scholar] [CrossRef]
  4. Xue, Y.; Bai, C.; Kong, F.; Qiu, D.; Li, L.; Su, M.; Zhao, Y. A two-step comprehensive evaluation model for rockburst prediction based on multiple empirical criteria. Eng. Geol. 2020, 268, 105515. [Google Scholar] [CrossRef]
  5. Zhai, S.; Su, G.; Yin, S.; Zhao, B.; Yan, L. Rockburst characteristics of several hard brittle rocks: A true triaxial experimental study. J. Rock Mech. Geotech. Eng. 2020, 12, 279–296. [Google Scholar] [CrossRef]
  6. Khan, N.M.; Ahmad, M.; Cao, K.; Ali, I.; Liu, W.; Rehman, H.; Hussain, S.; Rehman, F.U.; Ahmed, T. Developing a new bursting liability index based on energy evolution for coal under different loading rates. Sustainability 2022, 14, 1572. [Google Scholar] [CrossRef]
  7. Wang, S.; Li, X.; Yao, J.; Gong, F.; Li, X.; Du, K.; Tao, M.; Huang, L.; Du, S. Experimental investigation of rock breakage by a conical pick and its application to non-explosive mechanized mining in deep hard rock. Int. J. Rock Mech. Min. Sci. 2019, 122, 104063. [Google Scholar] [CrossRef]
  8. Wang, S.; Sun, L.; Li, X.; Wang, S.; Du, K.; Li, X.; Feng, F. Experimental investigation of cuttability improvement for hard rock fragmentation using conical cutter. Int. J. Geomech. 2021, 21, 06020039. [Google Scholar] [CrossRef]
  9. Wang, J.; Apel, D.B.; Pu, Y.; Hall, R.; Wei, C.; Sepehri, M. Numerical modeling for rockbursts: A state-of-the-art review. J. Rock Mech. Geotech. Eng. 2021, 13, 457–478. [Google Scholar] [CrossRef]
  10. Pu, Y.; Apel, D.B.; Liu, V.; Mitri, H. Machine learning methods for rockburst prediction-state-of-the-art review. Int. J. Min. Sci. Technol. 2019, 29, 565–570. [Google Scholar] [CrossRef]
  11. Ma, L.; Khan, N.M.; Cao, K.; Rehman, H.; Salman, S.; Rehman, F.U. Prediction of sandstone dilatancy point in different water contents using infrared radiation characteristic: Experimental and machine learning approaches. Lithosphere 2022, 2021, 3243070. [Google Scholar] [CrossRef]
  12. Khan, N.M.; Ma, L.; Cao, K.; Hussain, S.; Liu, W.; Xu, Y.; Yuan, Q.; Gu, J. Prediction of an early failure point using infrared radiation characteristics and energy evolution for sandstone with different water contents. Bull. Eng. Geol. Environ. 2021, 80, 6913–6936. [Google Scholar] [CrossRef]
  13. Zhou, J.; Li, X.; Mitri, H.S. Classification of rockburst in underground projects: Comparison of ten supervised learning methods. J. Comput. Civ. Eng. 2016, 30, 04016003. [Google Scholar] [CrossRef]
  14. Li, N.; Jimenez, R. A logistic regression classifier for long-term probabilistic prediction of rock burst hazard. Nat. Hazards 2018, 90, 197–215. [Google Scholar] [CrossRef]
  15. Ghasemi, E.; Gholizadeh, H.; Adoko, A.C. Evaluation of rockburst occurrence and intensity in underground structures using decision tree approach. Eng. Comput. 2020, 36, 213–225. [Google Scholar] [CrossRef]
  16. Pu, Y.; Apel, D.B.; Lingga, B. Rockburst prediction in kimberlite using decision tree with incomplete data. J. Sustain. Min. 2018, 17, 158–165. [Google Scholar] [CrossRef]
  17. Li, N.; Feng, X.; Jimenez, R. Predicting rock burst hazard with incomplete data using Bayesian networks. Tunn. Undergr. Space Technol. 2017, 61, 61–70. [Google Scholar] [CrossRef]
  18. Zhou, J.; Guo, H.; Koopialipoor, M.; Jahed Armaghani, D.; Tahir, M. Investigating the effective parameters on the risk levels of rockburst phenomena by developing a hybrid heuristic algorithm. Eng. Comput. 2021, 37, 1679–1694. [Google Scholar] [CrossRef]
  19. Zhou, J.; Koopialipoor, M.; Li, E.; Armaghani, D.J. Prediction of rockburst risk in underground projects developing a neuro-bee intelligent system. Bull. Eng. Geol. Environ. 2020, 79, 4265–4279. [Google Scholar] [CrossRef]
  20. Li, D.; Liu, Z.; Xiao, P.; Zhou, J.; Jahed Armaghani, D. Intelligent rockburst prediction model with sample category balance using feedforward neural network and Bayesian optimization. Undergr. Space 2022, in press. [Google Scholar] [CrossRef]
  21. Pu, Y.; Apel, D.B.; Wang, C.; Wilson, B. Evaluation of burst liability in kimberlite using support vector machine. Acta Geophys. 2018, 66, 973–982. [Google Scholar] [CrossRef]
  22. Lin, Y.; Zhou, K.; Li, J. Application of Cloud Model in Rock Burst Prediction and Performance Comparison with Three Machine Learning Algorithms. IEEE Access 2018, 6, 30958–30968. [Google Scholar] [CrossRef]
  23. Wang, S.-M.; Zhou, J.; Li, C.-Q.; Armaghani, D.J.; Li, X.-B.; Mitri, H.S. Rockburst prediction in hard rock mines developing bagging and boosting tree-based ensemble techniques. J. Cent. South Univ. 2021, 28, 527–542. [Google Scholar] [CrossRef]
  24. Xie, X.; Jiang, W.; Guo, J. Research on rockburst prediction classification based on GA-XGB model. IEEE Access 2021, 9, 83993–84020. [Google Scholar] [CrossRef]
  25. Zhang, J.; Wang, Y.; Sun, Y.; Li, G. Strength of ensemble learning in multiclass classification of rockburst intensity. Int. J. Numer. Anal. Methods Geomech. 2020, 44, 1833–1853. [Google Scholar] [CrossRef]
  26. Liang, W.; Sari, A.; Zhao, G.; McKinnon, S.D.; Wu, H. Short-term rockburst risk prediction using ensemble learning methods. Nat. Hazards 2020, 104, 1923–1946. [Google Scholar] [CrossRef]
  27. Liang, W.; Sari, Y.A.; Zhao, G.; McKinnon, S.D.; Wu, H. Probability estimates of short-term rockburst risk with ensemble classifiers. Rock Mech. Rock Eng. 2021, 54, 1799–1814. [Google Scholar] [CrossRef]
  28. Yin, X.; Liu, Q.; Pan, Y.; Huang, X.; Wu, J.; Wang, X. Strength of stacking technique of ensemble learning in rockburst prediction with imbalanced data: Comparison of eight single and ensemble models. Nat. Resour. Res. 2021, 30, 1795–1815. [Google Scholar] [CrossRef]
  29. Li, D.; Liu, Z.; Armaghani, D.J.; Xiao, P.; Zhou, J. Novel ensemble intelligence methodologies for rockburst assessment in complex and variable environments. Sci. Rep. 2022, 12, 1844. [Google Scholar] [CrossRef]
  30. Zhou, Z.-H.; Feng, J. Deep forest. arXiv 2017, arXiv:1702.08835. [Google Scholar] [CrossRef]
  31. Zhou, J.; Qiu, Y.; Zhu, S.; Armaghani, D.J.; Khandelwal, M.; Mohamad, E.T. Estimation of the TBM advance rate under hard rock conditions using XGBoost and Bayesian optimization. Undergr. Space 2021, 6, 506–515. [Google Scholar] [CrossRef]
  32. Zhou, J.; Asteris, P.G.; Armaghani, D.J.; Pham, B.T. Prediction of ground vibration induced by blasting operations through the use of the Bayesian Network and random forest models. Soil Dyn. Earthq. Eng. 2020, 139, 106390. [Google Scholar] [CrossRef]
  33. Han, H.; Armaghani, D.J.; Tarinejad, R.; Zhou, J.; Tahir, M. Random forest and bayesian network techniques for probabilistic prediction of flyrock induced by blasting in quarry sites. Nat. Resour. Res. 2020, 29, 655–667. [Google Scholar] [CrossRef]
  34. Shahriari, B.; Swersky, K.; Wang, Z.; Adams, R.P.; Freitas, N.D. Taking the human out of the loop: A review of bayesian optimization. Proc. IEEE 2015, 104, 148–175. [Google Scholar] [CrossRef] [Green Version]
  35. Liang, X. Image-based post-disaster inspection of reinforced concrete bridge systems using deep learning with Bayesian optimization. Comput.-Aided Civ. Infrastruct. Eng. 2019, 34, 415–430. [Google Scholar] [CrossRef]
  36. Zhang, Q.; Hu, W.; Liu, Z.; Tan, J. TBM performance prediction with Bayesian optimization and automated machine learning. Tunn. Undergr. Space Technol. 2020, 103, 103493. [Google Scholar] [CrossRef]
  37. Sameen, M.I.; Pradhan, B.; Lee, S. Application of convolutional neural networks featuring Bayesian optimization for landslide susceptibility assessment. Catena 2020, 186, 104249. [Google Scholar] [CrossRef]
  38. Lu, X.; Duan, Z.; Qian, Y.; Zhou, W. A malicious code classification method based on deep forest. J. Softw. 2020, 31, 1454–1464. [Google Scholar]
  39. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar]
  40. Snoek, J.; Larochelle, H.; Adams, R.P. Practical bayesian optimization of machine learning algorithms. In Proceedings of the Advances in Neural Information Processing Systems 25: 26th Annual Conference on Neural Information Processing Systems 2012, Lake Tahoe, NV, USA, 3–6 December 2012; Volume 25. [Google Scholar]
  41. Yang, L.; Shami, A. On hyperparameter optimization of machine learning algorithms: Theory and practice. Neurocomputing 2020, 415, 295–316. [Google Scholar] [CrossRef]
  42. Seeger, M. Gaussian processes for machine learning. Int. J. Neural Syst. 2004, 14, 69–106. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  43. Kushner, H.J. A new method of locating the maximum point of an arbitrary multipeak curve in the presence of noise. J. Basic Eng. 1964, 86, 97–106. [Google Scholar] [CrossRef]
  44. Mockus, J.; Tiesis, V.; Zilinskas, A. The application of Bayesian methods for seeking the extremum. Towards Glob. Optim. 1978, 2, 2. [Google Scholar]
  45. Jones, D.R.; Schonlau, M.; Welch, W.J. Efficient global optimization of expensive black-box functions. J. Glob. Optim. 1998, 13, 455–492. [Google Scholar] [CrossRef]
  46. Brochu, E.; Hoffman, M.W.; De Freitas, N. Portfolio allocation for bayesian optimization. arXiv 2010, arXiv:1009.5419. [Google Scholar]
  47. He, M.; Ren, F.; Liu, D. Rockburst mechanism research and its control. Int. J. Min. Sci. Technol. 2018, 28, 829–837. [Google Scholar] [CrossRef]
  48. Zhou, J.; Li, X.; Shi, X. Long-term prediction model of rockburst in underground openings using heuristic algorithms and support vector machines. Saf. Sci. 2012, 50, 629–644. [Google Scholar] [CrossRef]
  49. Pu, Y.; Apel, D.B.; Xu, H. Rockburst prediction in kimberlite with unsupervised learning method and support vector classifier. Tunn. Undergr. Space Technol. 2019, 90, 12–18. [Google Scholar] [CrossRef]
  50. Ran, L.; Ye, Y.; Hu, N.; Hu, C.; Wang, X. Classified prediction model of rockburst using rough sets-normal cloud. Neural Comput. Appl. 2019, 31, 8185–8193. [Google Scholar]
  51. Xue, Y.; Li, Z.; Li, S.; Qiu, D.; Tao, Y.; Wang, L.; Yang, W.; Zhang, K. Prediction of rock burst in underground caverns based on rough set and extensible comprehensive evaluation. Bull. Eng. Geol. Environ. 2019, 78, 417–429. [Google Scholar] [CrossRef]
  52. Wu, S.; Wu, Z.; Zhang, C. Rock burst prediction probability model based on case analysis. Tunn. Undergr. Space Technol. 2019, 93, 103069. [Google Scholar] [CrossRef]
  53. Du, Z.; Xu, M.; Liu, Z.; Xuan, W. Laboratory integrated evaluation method for engineering wall rock rock-burst. Gold 2006, 27, 26–30. [Google Scholar]
  54. Jia, Q.; Wu, L.; Li, B.; Chen, C.; Peng, Y. The comprehensive prediction model of rockburst tendency in tunnel based on optimized unascertained measure theory. Geotech. Geol. Eng. 2019, 37, 3399–3411. [Google Scholar] [CrossRef]
  55. Xue, Y.; Bai, C.; Qiu, D.; Kong, F.; Li, Z. Predicting rockburst with database using particle swarm optimization and extreme learning machine. Tunn. Undergr. Space Technol. 2020, 98, 103287. [Google Scholar] [CrossRef]
  56. Zhou, J.; Li, E.; Wang, M.; Chen, X.; Shi, X.; Jiang, L. Feasibility of stochastic gradient boosting approach for evaluating seismic liquefaction potential based on SPT and CPT case histories. J. Perform. Constr. Facil. 2019, 33, 04019024. [Google Scholar] [CrossRef]
  57. Head, T.; MechCoder, G.L.; Shcherbatyi, I. Scikit-optimize/scikit-optimize: v0.5.2. Zenodo 2018, 1207017. [Google Scholar]
  58. Taylor, K.E. Taylor Diagram Primer. Working Paper. 2005, pp. 1–4. Available online: http://www.atmos.albany.edu/daes/atmclasses/atm401/spring_2016/ppts_pdfs/Taylor_diagram_primer.pdf (accessed on 8 February 2022).
  59. Zhou, J.; Zhu, S.; Qiu, Y.; Armaghani, D.J.; Zhou, A.; Yong, W. Predicting tunnel squeezing using support vector machine optimized by whale optimization algorithm. Acta Geotech. 2022. [Google Scholar] [CrossRef]
  60. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V. Scikit-learn: Machine learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830. [Google Scholar]
  61. Friedman, J.H. Greedy function approximation: A gradient boosting machine. Ann. Stat. 2001, 29, 1189–1232. [Google Scholar] [CrossRef]
  62. Xiao, P.; Li, D.; Zhao, G.; Liu, H. New criterion for the spalling failure of deep rock engineering based on energy release. Int. J. Rock Mech. Min. Sci. 2021, 148, 104943. [Google Scholar] [CrossRef]
  63. Xiao, P.; Li, D.; Zhao, G.; Liu, M. Experimental and Numerical Analysis of Mode I Fracture Process of Rock by Semi-Circular Bend Specimen. Mathematics 2021, 9, 1769. [Google Scholar] [CrossRef]
Figure 1. The flowchart to establish RF model.
Figure 2. Schematic diagram of cascade forest.
Figure 3. The heatmap of correlations between different variables.
Figure 4. Boxplots and histograms of the seven variables for the four rockburst intensities.
Figure 5. The flowchart to develop DF model for rockburst prediction.
Figure 6. The steps to perform 5-fold CV.
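The 5-fold CV procedure of Figure 6 amounts to simple index bookkeeping: the training set is split into five folds, and each fold serves once as the validation part while the other four are used for fitting. A minimal sketch in plain Python; the 263-case count below is illustrative (roughly the 80% training split of the 329-case database).

```python
def k_fold_indices(n_samples, k=5):
    """Yield (train_idx, val_idx) pairs for k contiguous folds."""
    indices = list(range(n_samples))
    # Distribute any remainder so fold sizes differ by at most one.
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(indices[start:start + size])
        start += size
    for i in range(k):
        val_idx = folds[i]
        train_idx = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train_idx, val_idx

# Each of the 5 validation folds holds 52-53 of the 263 training cases.
splits = list(k_fold_indices(263, k=5))
```

In practice the cases would be shuffled (or stratified by rockburst grade) before splitting; this sketch keeps them contiguous for clarity.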
Figure 7. The flowchart of BO.
Figure 8. Convergence of objective function during BO.
Figure 9. Model performance comparison for each rockburst grade using ROC curves.
Figure 10. Training and testing results of the developed models. (a) Training accuracy; (b) Testing accuracy.
Figure 11. Taylor diagrams for comparing the performance of developed models. (a) Training results; (b) Testing results.
Figure 12. The relative importance of input variables in the DF model.
Figure 13. PDP analysis of the influence of each variable on the predicted rockburst probability. (a) σθ; (b) σc; (c) σt; (d) SCF; (e) B1; (f) B2; (g) Wet.
Figure 14. Variation of model performance with the number of input parameters.
Figure 15. The location of Xincheng Gold Mine and Sanshandao Gold Mine.
Table 1. Advantages and drawbacks of ML techniques recently applied to rockburst estimation.
Algorithm | Superiority | Drawback
LDA [13]; LR [14] | Fast training and prediction speed; simple and easy to interpret. | Unsuitable for high-dimensional data.
C5.0 DT [15]; DT [16] | Suitable for data with missing values; can process continuous and discrete variables simultaneously. | Tends to produce an overly complex model, which reduces generalization.
KNN [13] | Simple and easy to implement. | Unsuitable for unbalanced samples.
Naïve Bayes [13]; BN [17] | Simple and fast; performs very well when the independence assumption holds. | The independence assumption is difficult to meet in practical projects.
ANN [13,18,19,20] | Strong mapping ability; can deal with complex nonlinear problems. | Many hyperparameters to tune; easy to overfit.
SVM [13,21] | Solid theoretical basis; can be applied to complex nonlinear data. | Difficult to handle multiclass classification problems.
RF [13,22]; Bagging [23] | Suitable for high-dimensional data; good generalization ability. | Overfitting appears in classification tasks with high noise.
GBM [13]; XGB [24] | Handles continuous and discrete values; robust to outliers through robust loss functions. | Difficult to train in parallel.
Note: LDA = linear discriminant analysis; LR = logistic regression; DT = decision tree; KNN = k-nearest neighbors; BN = Bayesian network; ANN = artificial neural network; SVM = support vector machine; RF = random forest; GBM = gradient boosting machine; XGB = extreme gradient boosting.
Table 2. The database sources.
No. | Number of Cases | Reference
1 | N (43 cases), L (78 cases), M (81 cases), S (44 cases) | Zhou et al. [13]
2 | L (1 case), M (11 cases) | Pu et al. [49]
3 | N (3 cases), L (4 cases), M (8 cases), S (1 case) | Liu et al. [50]
4 | N (3 cases), L (7 cases), M (7 cases), S (3 cases) | Xue et al. [51]
5 | L (1 case), M (5 cases), S (1 case) | Wu et al. [52]
6 | N (1 case), L (2 cases), S (4 cases) | Du et al. [53]
7 | L (3 cases), M (3 cases) | Jia et al. [54]
8 | N (3 cases), L (5 cases), M (4 cases), S (3 cases) | Xue et al. [55]
Sum | N (53 cases), L (101 cases), M (119 cases), S (56 cases) | 329 cases
Table 3. Standard of classification for the four intensities of rockburst [48].
Rockburst Label | Failure Characteristics
None | No rockburst sound and no rockburst activity.
Light | The surrounding rock spalls, cracks, or forms strips; there is no ejection phenomenon and only a weak sound.
Moderate | The surrounding rock deforms and fractures; a considerable amount of rock chips are ejected, with loose and sudden destruction accompanied by crisp crackling, often occurring in local cavities of the surrounding rock.
Strong | The surrounding rock bursts severely and is suddenly thrown or shot into the tunnel, accompanied by strong bursting and roaring sounds, air jets, continuous bursting, and rapid expansion into the deep surrounding rock.
Table 4. Statistical description of the input parameters.
Grade | Statistical Indicator | σθ | σc | σt | SCF | B1 | B2 | Wet
None | Mean value | 25.27 | 101.96 | 5.98 | 0.30 | 21.08 | 0.87 | 2.78
 | Standard deviation | 16.32 | 49.39 | 3.90 | 0.25 | 12.72 | 0.07 | 1.94
 | Min value | 2.60 | 20.00 | 0.40 | 0.05 | 5.38 | 0.69 | 0.81
 | 25th percentile | 12.30 | 67.40 | 3.00 | 0.13 | 10.75 | 0.83 | 1.50
 | 50th percentile | 21.50 | 96.41 | 5.00 | 0.21 | 18.75 | 0.90 | 2.04
 | 75th percentile | 31.20 | 123.60 | 7.60 | 0.31 | 29.40 | 0.93 | 3.60
 | Max value | 77.69 | 241.00 | 17.66 | 1.05 | 47.93 | 1.00 | 7.80
Light | Mean value | 44.42 | 116.64 | 6.68 | 0.41 | 21.53 | 0.89 | 3.72
 | Standard deviation | 20.63 | 39.56 | 3.91 | 0.19 | 10.12 | 0.07 | 1.54
 | Min value | 13.50 | 30.00 | 1.90 | 0.10 | 2.52 | 0.43 | 0.85
 | 25th percentile | 29.70 | 88.00 | 3.60 | 0.26 | 12.70 | 0.85 | 2.53
 | 50th percentile | 43.21 | 117.00 | 5.90 | 0.38 | 23.60 | 0.92 | 3.20
 | 75th percentile | 57.97 | 142.00 | 8.95 | 0.56 | 28.10 | 0.93 | 4.61
 | Max value | 126.72 | 263.00 | 22.60 | 0.90 | 69.69 | 0.97 | 9.00
Moderate | Mean value | 51.50 | 116.58 | 6.12 | 0.47 | 25.20 | 0.90 | 5.06
 | Standard deviation | 22.91 | 43.03 | 3.80 | 0.20 | 16.34 | 0.05 | 2.69
 | Min value | 13.02 | 30.00 | 1.30 | 0.10 | 0.15 | 0.69 | 1.20
 | 25th percentile | 37.15 | 84.30 | 2.98 | 0.34 | 15.02 | 0.87 | 3.66
 | 50th percentile | 51.50 | 112.50 | 5.26 | 0.47 | 21.69 | 0.91 | 5.00
 | 75th percentile | 65.84 | 147.53 | 8.30 | 0.59 | 27.76 | 0.93 | 5.91
 | Max value | 118.77 | 237.20 | 17.66 | 1.27 | 80.00 | 0.98 | 21.00
Strong | Mean value | 119.65 | 129.08 | 10.34 | 1.18 | 14.12 | 0.85 | 8.91
 | Standard deviation | 83.11 | 52.37 | 4.67 | 1.17 | 5.94 | 0.06 | 6.02
 | Min value | 16.43 | 30.00 | 2.50 | 0.10 | 5.53 | 0.69 | 2.03
 | 25th percentile | 61.98 | 91.30 | 7.04 | 0.53 | 11.16 | 0.84 | 5.86
 | 50th percentile | 91.37 | 127.09 | 10.27 | 0.72 | 13.27 | 0.86 | 7.20
 | 75th percentile | 126.71 | 158.60 | 13.86 | 0.97 | 16.84 | 0.89 | 9.03
 | Max value | 297.80 | 304.20 | 22.60 | 4.87 | 32.20 | 0.94 | 30.00
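The indicators in Table 4 (mean, sample standard deviation, minimum, quartiles, maximum) can be reproduced for any variable with a short helper. The sketch below uses Python's standard library only; the sample values are illustrative, not the database itself.

```python
import statistics

def describe(values):
    """Compute the statistical indicators used in Table 4 for one variable."""
    ordered = sorted(values)
    # statistics.quantiles with n=4 returns the 25th/50th/75th percentile cut points
    q1, q2, q3 = statistics.quantiles(ordered, n=4)
    return {
        "mean": statistics.mean(ordered),
        "std": statistics.stdev(ordered),  # sample standard deviation
        "min": ordered[0],
        "25%": q1, "50%": q2, "75%": q3,
        "max": ordered[-1],
    }

# Illustrative Wet-like values only, not rows from the 329-case database.
stats = describe([0.81, 1.50, 2.04, 2.78, 3.60, 7.80])
```

Note that percentile conventions differ slightly between tools; `statistics.quantiles` defaults to the exclusive method, so third-decimal differences from other software are expected.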
Table 5. The optimization range of hyperparameters in the DF model.
Hyperparameter | Optimization Range
The number of forests | (1, 4)
The number of trees in each forest | (10, 100)
The maximum number of cascade layers | (10, 30)
Table 6. The parameters of BO.
Surrogate Model | Acquisition Function | Surrogate Model Hyperparameters
GP | GP-Hedge | 1. Kernel function: Matern kernel and white kernel; 2. Noise: Gaussian distribution
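The BO loop behind Figure 7 and Table 6 can be sketched in a few lines: fit a GP surrogate to the evaluated points, maximize an acquisition function, evaluate the objective there, and repeat. This is an illustrative stand-in, not the paper's implementation: a squared-exponential kernel replaces the Matern-plus-white kernel, a single expected-improvement acquisition replaces the GP-Hedge portfolio, and a toy 1-D objective replaces the cross-validated DF accuracy.

```python
import math
import numpy as np

def kernel(a, b, length=1.0):
    # Squared-exponential kernel (illustrative stand-in for Matern + white).
    d = a.reshape(-1, 1) - b.reshape(1, -1)
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_obs, y_obs, x_new, noise=1e-6):
    # Standard GP regression equations: posterior mean and std at x_new.
    K = kernel(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = kernel(x_obs, x_new)
    Kss = kernel(x_new, x_new)
    K_inv = np.linalg.inv(K)
    mu = Ks.T @ K_inv @ y_obs
    var = np.clip(np.diag(Kss - Ks.T @ K_inv @ Ks), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sigma, best):
    # EI for maximization: (mu - best) * Phi(z) + sigma * phi(z).
    z = (mu - best) / sigma
    cdf = 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))
    pdf = np.exp(-0.5 * z ** 2) / math.sqrt(2.0 * math.pi)
    return (mu - best) * cdf + sigma * pdf

def objective(x):
    # Toy stand-in for the 5-fold CV accuracy; maximized at x = 2.
    return -(x - 2.0) ** 2

grid = np.linspace(0.0, 5.0, 101)     # candidate hyperparameter values
x_obs = np.array([0.0, 5.0])          # initial design points
y_obs = objective(x_obs)
for _ in range(10):                   # a few BO iterations
    mu, sigma = gp_posterior(x_obs, y_obs, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y_obs.max()))]
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, objective(x_next))

best_x = x_obs[np.argmax(y_obs)]      # should land near the true optimum x = 2
```

In the paper's setting, `objective` would train a DF with the candidate hyperparameters of Table 5 and return the cross-validated accuracy; the cited scikit-optimize library [57] packages this loop.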
Table 7. The optimal hyperparameters in the DF model.
Dataset | Hyperparameter | Value
Tr (75%)-Te (25%) | The number of forests | 4
 | The number of trees in each forest | 36
 | The maximum number of cascade layers | 30
Tr (80%)-Te (20%) | The number of forests | 4
 | The number of trees in each forest | 18
 | The maximum number of cascade layers | 19
Tr (85%)-Te (15%) | The number of forests | 4
 | The number of trees in each forest | 25
 | The maximum number of cascade layers | 25
Table 8. The testing performance of the DF model with Tr (75%)-Te (25%).
Rockburst Type | Precision | Recall | f1 | Number
None | 0.86 | 0.92 | 0.89 | 13
Light | 0.95 | 0.76 | 0.84 | 25
Moderate | 0.88 | 1.00 | 0.94 | 30
Strong | 1.00 | 1.00 | 1.00 | 14
Accuracy: 91.5% (82 cases in total)
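The per-class metrics reported in Tables 8-12 follow the usual definitions: precision = TP/(TP + FP), recall = TP/(TP + FN), and f1 is their harmonic mean. A minimal sketch, with toy labels rather than the paper's test set:

```python
def class_metrics(actual, predicted, label):
    """Precision, recall, and f1 for one rockburst class."""
    tp = sum(1 for a, p in zip(actual, predicted) if a == label and p == label)
    fp = sum(1 for a, p in zip(actual, predicted) if a != label and p == label)
    fn = sum(1 for a, p in zip(actual, predicted) if a == label and p != label)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy labels only (N/L/M/S = none/light/moderate/strong).
actual    = ["N", "N", "L", "L", "M", "S"]
predicted = ["N", "L", "L", "L", "M", "S"]
p, r, f = class_metrics(actual, predicted, "L")
# Two of three predicted "L" are correct (precision 2/3); both actual "L"
# cases are found (recall 1.0), giving f1 = 0.8.
```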
Table 9. The testing performance of the DF model with Tr (80%)-Te (20%).
Rockburst Type | Precision | Recall | f1 | Number
None | 0.91 | 0.91 | 0.91 | 11
Light | 0.94 | 0.80 | 0.86 | 20
Moderate | 0.89 | 1.00 | 0.94 | 24
Strong | 1.00 | 1.00 | 1.00 | 11
Accuracy: 92.4% (66 cases in total)
Table 10. The testing performance of the DF model with Tr (85%)-Te (15%).
Rockburst Type | Precision | Recall | f1 | Number
None | 1.00 | 0.88 | 0.93 | 8
Light | 0.92 | 0.80 | 0.86 | 15
Moderate | 0.86 | 1.00 | 0.92 | 18
Strong | 1.00 | 1.00 | 1.00 | 8
Accuracy: 91.8% (49 cases in total)
Table 11. The testing performance of the RF model.
Rockburst Type | Precision | Recall | f1 | Number
None | 0.75 | 0.82 | 0.78 | 11
Light | 0.74 | 0.85 | 0.79 | 20
Moderate | 0.95 | 0.83 | 0.89 | 24
Strong | 1.00 | 0.91 | 0.95 | 11
Accuracy: 84.8% (66 cases in total)
Table 12. The testing performance of the CRF model.
Rockburst Type | Precision | Recall | f1 | Number
None | 0.75 | 0.82 | 0.78 | 11
Light | 0.75 | 0.75 | 0.75 | 20
Moderate | 0.84 | 0.88 | 0.86 | 24
Strong | 1.00 | 0.82 | 0.90 | 11
Accuracy: 81.8% (66 cases in total)
Table 13. Comparison of DF and other ML models proposed in recent years.
Algorithm/Model | Input Parameters | Data Size | Training Accuracy | Testing Accuracy
Voting model [25] | H, σθ, σc, σt, Wet | 188 | 94% | 80%
PSO-ELM [55] | σθ, σc, σt, SCF, B1, Wet | 344 | 98.99% | 88.89%
Bagging [23] | σθ, σc, σt, SCF, B1, Wet | 102 | 100% | 88.24%
Boosting [23] | σθ, σc, σt, SCF, B1, Wet | 102 | 100% | 91.18%
Stacking model [28] | σθ, σc, σt, SCF, B1, B2, Wet | 246 | n/a | 88.52%
DF | σθ, σc, σt, SCF, B1, B2, Wet | 329 | 100% | 92.40%
Note: Voting model is the combination of back propagation neural network, KNN, SVM, LR, linear model, DT, and Naïve Bayes; H = depth; PSO = particle swarm optimization; ELM = extreme learning machine; stacking model is the combination of KNN, SVM, deep neural network, and recurrent neural network.
Table 14. Seven models and their input parameters.
Model | Input Parameters | Number of Input Parameters
M1 | σθ, σc, σt, SCF, B1, B2, Wet | 7
M2 | σθ, σc, SCF, B1, B2, Wet | 6
M3 | σθ, SCF, B1, B2, Wet | 5
M4 | σθ, SCF, B1, Wet | 4
M5 | σθ, SCF, Wet | 3
M6 | σθ, Wet | 2
M7 | σθ | 1
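The seven input sets of Table 14 are nested: each model drops one more variable, ending with the tangential stress alone. The sketch below rebuilds them from the removal order implied by the table (σt dropped first, then σc, B2, B1, SCF, Wet); the identifier names are illustrative.

```python
# Variables ordered from most to least important, as implied by Table 14:
# sigma_t is dropped first and sigma_theta is kept to the end.
ranked = ["sigma_theta", "Wet", "SCF", "B1", "B2", "sigma_c", "sigma_t"]

# M1..M7: progressively drop the least important remaining variable.
models = {f"M{i}": ranked[:7 - (i - 1)] for i in range(1, 8)}
```

Each subset can then be fed to the DF model to trace the accuracy trend shown in Figure 14.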
Table 15. Application of the DF model in practical engineering.
No. | Engineering | σθ/MPa | σc/MPa | σt/MPa | SCF | B1 | B2 | Wet | Actual Grade | Predicted Grade
1 | Xincheng Gold Mine | 87.60 | 139.07 | 10.63 | 0.63 | 13.08 | 0.86 | 5.56 | S | S
2 | | 108.31 | 149.99 | 11.97 | 0.72 | 12.53 | 0.85 | 6.88 | S | S
3 | | 89.46 | 155.45 | 12.05 | 0.58 | 12.90 | 0.86 | 3.98 | S | S
4 | | 100.00 | 137.52 | 13.73 | 0.73 | 10.01 | 0.82 | 5.27 | S | S
5 | | 107.25 | 182.67 | 12.11 | 0.59 | 15.08 | 0.88 | 5.48 | S | S
6 | | 107.87 | 140.38 | 12.06 | 0.77 | 11.64 | 0.84 | 8.50 | S | S
7 | | 109.57 | 174.34 | 12.07 | 0.63 | 14.44 | 0.87 | 7.69 | S | S
8 | Sanshandao Gold Mine | 94.64 | 160.94 | 9.74 | 0.59 | 16.52 | 0.89 | 4.94 | M | M
9 | | 32.45 | 138.25 | 9.04 | 0.23 | 15.29 | 0.88 | 3.73 | L | L
10 | | 23.13 | 146.29 | 19.6 | 0.16 | 7.46 | 0.76 | 6.45 | N | N
11 | | 34.12 | 154.28 | 13.98 | 0.22 | 11.04 | 0.83 | 4.61 | L | L
12 | | 34.07 | 128.5 | 11.71 | 0.27 | 10.97 | 0.83 | 1.92 | N | N
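The derived indices in Table 15 follow the definitions standard in the rockburst literature: the stress concentration factor SCF = σθ/σc and the brittleness ratios B1 = σc/σt and B2 = (σc − σt)/(σc + σt), while Wet (the elastic strain energy index) comes from laboratory loading-unloading tests rather than a formula. The sketch below is a consistency check against the table's first row, not code from the paper.

```python
def rockburst_indices(sigma_theta, sigma_c, sigma_t):
    """Derive SCF, B1, and B2 from the three measured stresses (MPa)."""
    scf = sigma_theta / sigma_c                            # stress concentration factor
    b1 = sigma_c / sigma_t                                 # brittleness ratio
    b2 = (sigma_c - sigma_t) / (sigma_c + sigma_t)         # brittleness index
    return scf, b1, b2

# Case 1 of Table 15 (Xincheng Gold Mine) lists SCF = 0.63, B1 = 13.08, B2 = 0.86.
scf, b1, b2 = rockburst_indices(87.60, 139.07, 10.63)
```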
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Li, D.; Liu, Z.; Armaghani, D.J.; Xiao, P.; Zhou, J. Novel Ensemble Tree Solution for Rockburst Prediction Using Deep Forest. Mathematics 2022, 10, 787. https://0-doi-org.brum.beds.ac.uk/10.3390/math10050787

