Article

Experimental and Numerical Investigation Integrated with Machine Learning (ML) for the Prediction Strategy of DP590/CFRP Composite Laminates

1 School of Materials Science and Engineering, Hebei University of Technology, Tianjin 300401, China
2 School of Mechanical Engineering, Tianjin Sino-German University of Applied Sciences, Tianjin 300350, China
3 School of Mechanical Engineering, Hebei University of Technology, Tianjin 300401, China
4 School of System Design and Intelligent Manufacturing, Southern University of Science and Technology, Shenzhen 518055, China
5 Tianjin Sino-Spanish Machining Tool Vocational Training Center, Tianjin Sino-German University of Applied Sciences, Tianjin 300350, China
* Authors to whom correspondence should be addressed.
Submission received: 26 April 2024 / Revised: 22 May 2024 / Accepted: 29 May 2024 / Published: 3 June 2024

Abstract

This study unveils a machine learning (ML)-assisted framework designed to optimize the stacking sequence and orientation of carbon fiber-reinforced polymer (CFRP)/metal composite laminates, aiming to enhance their mechanical properties under quasi-static loading conditions. This work pioneers the expansion of initial datasets for ML analysis in the field by uniquely integrating the experimental results with finite element simulations. Nine ML models, including XGBoost and gradient boosting, were assessed for their precision in predicting tensile and bending strengths. The findings reveal that the XGBoost and gradient boosting models excel in tensile strength prediction due to their low error rates and high interpretability. In contrast, the decision trees, K-nearest neighbors (KNN), and random forest models show the highest accuracy in bending strength predictions. Tree-based models demonstrated exceptional performance across various metrics, notably for CFRP/DP590 laminates. Additionally, this study investigates the impact of layup sequences on mechanical properties, employing an innovative combination of ML, numerical, and experimental approaches. The novelty of this study lies in the first-time application of these ML models to the performance optimization of CFRP/metal composites and in providing a novel perspective through the comprehensive integration of experimental, numerical, and ML methods for composite material design and performance prediction.


1. Introduction

With the growing emphasis on environmental protection and increasing concerns over energy scarcity, industrial demand for lightweight, high-strength materials, especially reinforced composites, has risen rapidly [1]. Despite their high performance, carbon fiber composites still exhibit some disadvantages compared to traditional metallic materials, such as poor impact resistance, high cost, and low flexibility. To overcome these issues, fiber/metal composite laminates have emerged [2,3,4,5]. These materials combine the advantages of metals and fiber composites, exhibiting high strength and stiffness while maintaining relatively low density, excellent fatigue performance, and high damage tolerance, and they have demonstrated superior performance in impact resistance, energy absorption, vibration reduction, heat dissipation, and sound insulation [6,7,8,9]. Studies have shown that steel/carbon fiber composites offer a higher load-bearing capacity and better energy absorption than single-material structures, making them highly applicable in the automotive and rail transportation industries [10]. This combination offers both structural strength and lightweight design advantages, providing innovative solutions for a range of engineering applications [11,12,13,14,15]. To fully harness the performance of these materials, a thorough understanding of their reliability during service is crucial. Tensile and bending properties are key indicators of their fundamental mechanical behavior [16,17,18,19,20,21,22,23,24,25]. Tensile performance directly affects the stability and ductility of the material under tensile stress, forming the basis for evaluating its overall strength, while bending performance plays a crucial role in practical composite structures, as many engineering scenarios involve bending rather than pure tension. By understanding the behavior of fiber/metal composites under tensile and bending loads, their performance in real-world applications can be predicted more accurately, enabling design optimization and ensuring reliability and stability across engineering applications.
In traditional materials performance evaluation, experimental testing has been indispensable; however, its time-consuming and costly nature poses challenges for researchers [26,27,28]. The complexity of experiments and the requirements of sample preparation limit the speed and range of testing, especially for composite materials whose long-term performance must be predicted under multiple load conditions. In this context, numerical simulation has become one of the approaches researchers have turned to. By establishing mathematical models, numerical simulation reproduces the behavior of materials under different environments and loading conditions, providing valuable insights for material design [13,23,29,30]. Despite the significant progress that numerical methods have brought to materials research, they still face challenges related to large computational resource requirements and a strong dependence on model accuracy. Wadagbalkar and Liu [31] noted that ballistic penetration of laminated plates is complex but crucial for designing effective structural protection: experiments are expensive and often dangerous, while numerical simulation, though a useful supplement, is computationally intensive. They therefore trained neural network and decision tree regression models as an efficient tool for the real-time prediction of projectile penetration through laminated plates.
In recent years, with the vigorous development of machine learning technology, new possibilities have emerged for predicting the performance of composite materials. Okafor et al. [32] summarized the latest developments in using machine learning to enhance composite material design for lightweight industrial structures. They discussed the limitations of traditional methods and provided detailed schemes for introducing machine learning into composite technology, focusing on the implementation of machine learning algorithms, data cleaning, material/process selection, and data acquisition techniques. Their assessment covered emerging digital tool platforms for implementing machine learning algorithms and pointed out research gaps for future composite material design. The uniqueness of machine learning lies in its ability to learn complex patterns of material behavior from large amounts of data and to predict performance under untested conditions [33,34,35,36,37]. Some applications of machine learning to composite materials replace traditional experiments and numerical simulations, providing researchers with a new perspective. By learning from data, machine learning models can capture nonlinear relationships and complex patterns that are difficult to obtain through traditional means, yielding insight that is crucial for understanding the performance variations, challenges, and advantages of composite materials under different load conditions.
Many researchers have used machine learning models to predict the performance of composite laminates under different load conditions. Faramarz et al. and Christos et al. predicted the non-destructive strength of composite laminate panels based on deep learning and stochastic finite element methods [38]. Chen et al. developed efficient data-driven models by applying ensemble learning algorithms such as gradient boosting decision trees and random forests to a collected database of steel/fiber single-lap shear tests. The results showed that the model generated by the gradient boosting decision tree algorithm achieved the best accuracy (R2 = 0.98) in predicting the steel/fiber interface adhesion strength, surpassing the other ensemble and machine learning algorithms [36].
However, despite the enormous innovative potential that machine learning brings to composite material research, its application still faces challenges. A major one is the quality and diversity of the data: the performance of machine learning models depends heavily on their training data, and acquiring large-scale, high-quality data remains an urgent issue. Chahar et al. pointed out that machine learning requires a substantial amount of high-fidelity data and advocated multi-fidelity (MF) machine learning algorithms based on Gaussian processes, combined with finite element simulations, to quantify uncertainties through trained surrogate models [39].
Additionally, model interpretability is a crucial issue, especially in highly complex composite material systems [36,40,41,42]. Machine learning has paved a new path for composite material research, providing more efficient and economical means to predict and understand material performance. With a deepening understanding of data and algorithms, coupled with continuous improvements in related technologies, machine learning is expected to play an increasingly important role in the future of composite material engineering, driving more innovations and breakthroughs [41,43,44,45,46].
This paper is dedicated to comprehensively studying the mechanical properties of metal/CFRP composite materials under tension and bending conditions through the integrated application of machine learning, numerical simulation, and experimental testing. Firstly, we established a performance database for the tension and bending of metal/CFRP composite materials through experimental tests. Subsequently, we developed finite element simulation models to simulate the tensile and bending behavior of metal/CFRP composite laminates. The model considered material nonlinearity, anisotropy, and interlayer bonding. The established finite element model could accurately predict the experimental data. The finite element simulation data, combined with the experimental data, formed a high-precision dataset for training machine learning models to predict the behavior of composite materials in untested conditions.
Furthermore, the prediction accuracy of different models is extensively discussed, as well as their applicability and limitations in various scenarios. The aim is to comprehensively assess material performance and deepen the understanding of material behavior through finite element simulation and machine learning methods. This integrated research approach can contribute to a more in-depth understanding of composite material performance and provide a scientific basis for future engineering design and material selection.

2. Machine Learning Methods

In the study of the mechanical performance of DP590/CFRP composite panels under tensile and bending conditions, a selection of nine machine-learning models was made to encompass a broad range of algorithmic families. This selection was guided by several criteria crucial for robust prediction capabilities in the context of material behavior. These criteria included the complexity and nonlinearity of the data patterns, the limited amount of data available for training, and the requirement for interpretability, which is essential in materials science.
Three major classes of algorithms—linear, nonlinear, and tree-based models—were included to leverage their distinct advantages under different data characteristics. Linear models, namely linear regression, ridge regression, and lasso regression, were selected for their efficiency and the provision of a baseline in scenarios where relationships might predominantly appear linear or when baseline model performance comparisons were necessary.
Nonlinear models, such as K-nearest neighbors and polynomial regression, were chosen for their capacity to model complex relationships beyond linear interactions, particularly valuable in datasets that exhibit nonlinear patterns, as anticipated in the behavior of composite materials.
Tree-based models including decision trees, random forests, gradient boosting, and XGBoost were incorporated due to their superior ability to manage complex, high-dimensional data structures, which are typical in materials engineering. These models excel in handling interactions and heterogeneities inherent in composite material data.
The choice to exclude more complex algorithms like deep learning stemmed from the inadequate volume of available data, which might lead to overfitting, and the necessity for computational efficiency and interpretability in results, which are critical in applied material research.

2.1. Introduction to Machine Learning Models

2.1.1. Linear Regression

Linear regression was chosen for modeling simple relationships [47]. Due to its simplicity and interpretability, it is suitable for cases with an approximately linear relationship between the features and the target. Linear regression serves as a benchmark that helps us understand the linear relationships between the features and the target.
The basic formula for linear regression is
y = b_0 + b_1 x_1 + b_2 x_2 + \cdots + b_n x_n
where y is the target variable, x1, x2, …, xn are the feature variables, and b0, b1, b2, …, bn are the model parameters.
Linear regression is based on the least squares method, aiming to minimize the sum of the squared differences between the actual observed and model-predicted values. By solving for the coefficients b, the model obtains the best-fitting line for the data. This model assumes a linear relationship between the target variable and the features.

2.1.2. Ridge Regression

Ridge regression is a regularized linear regression method applicable in multicollinear situations (high correlation among features) [48]. By introducing a regularization term, ridge regression helps to prevent overfitting, improving the model’s generalization ability. The chosen ridge regression addresses the possible correlation among the features.
The objective function of ridge regression is
J(\beta) = \sum_{i=1}^{m} \left( y_i - \beta_0 - \sum_{j=1}^{n} \beta_j x_{ij} \right)^2 + \alpha \sum_{j=1}^{n} \beta_j^2
where J(β) is the loss function, β1, β2, …, βn are the coefficients, and α is the regularization coefficient.
By introducing an L2 regularization term based on the least squares method, ridge regression aims to prevent model overfitting. The regularization term penalizes coefficients, causing the model’s coefficients to lean more towards zero, thereby alleviating the collinearity problem.

2.1.3. Lasso Regression

Lasso regression is another regularized linear regression method, but unlike ridge regression, it uses L1 regularization. A characteristic of lasso regression is that it can produce sparse models, i.e., it automatically drops unimportant features by shrinking their coefficients to zero [49]. In our study, choosing lasso regression helps identify the features most important for predicting composite material performance.
J(\beta) = \sum_{i=1}^{m} \left( y_i - \beta_0 - \sum_{j=1}^{n} \beta_j x_{ij} \right)^2 + \alpha \sum_{j=1}^{n} \left| \beta_j \right|
where J(β) is the loss function, β1, β2, …, βn are the coefficients, and α is the regularization coefficient.
Lasso regression also introduces a regularization term based on the least squares method but uses L1 regularization. It causes some coefficients to become zero, achieving the result of automatic feature selection, suitable for high-dimensional datasets.

2.1.4. K-Nearest Neighbors

K-nearest neighbors (K-NN) is an instance-based learning method suitable for non-parametric models [50]. In our study, we chose K-NN to account for the local relationships between features, i.e., similar features may have similar mechanical properties. K-NN can flexibly adapt to the local structure of the data, and it performs well in modeling nonlinear relationships.
\hat{y} = \frac{1}{k} \sum_{i=1}^{k} y_i
where \hat{y} is the predicted value of the target variable, and y_i are the target values of the k nearest neighbors of the input sample.
For regression, K-nearest neighbors predicts a target value as the (optionally distance-weighted) average of the target values of the nearest neighbors. The underlying assumption is that samples with similar features are likely to have similar outputs, which makes the method suitable for data with strong local relationships.

2.1.5. Polynomial Regression

Polynomial regression is used to handle the nonlinear relationship between features and targets [51]. By introducing higher-order terms of features, polynomial regression can more flexibly adapt to the nonlinear structure of the data. In our study, considering that the performance of composite materials may be subject to complex nonlinear effects, we chose polynomial regression to capture these relationships better.
y = b_0 + b_1 x + b_2 x^2 + \cdots + b_n x^n
In this context, y is the target variable, x refers to the feature variables, and b0, b1, b2, …, bn are the model parameters. These parameters are estimated during the training process to minimize the difference between the predicted and the actual values of the target variable.
By adding higher-order terms of the features, polynomial regression allows the model to adapt to the nonlinear relationships within the data. Essentially, it is an extension of linear regression.

2.1.6. Decision Tree

A decision tree is a tree-shaped model suitable for handling complex nonlinear relationships and interaction effects [52]. Decision trees divide the data space recursively, capturing the nonlinear relationships between features. In our study, we chose decision trees to account for possible nonlinear structures and to provide interpretability through the structure of the tree. Instead of a closed-form prediction formula, a decision tree predicts through a tree structure in which each internal node tests a feature, each branch represents a decision rule, and each leaf node stores a predicted value of the target variable. The tree is grown by recursively splitting on the features that most increase the purity of the resulting subsets, which makes it well suited to nonlinear relationships and interaction effects while remaining interpretable.

2.1.7. Random Forest

Random forest is an ensemble learning method based on decision trees, which improves the robustness and accuracy of the model by combining the predictions of multiple decision trees [53]. In our study, we chose random forest to harness the advantages of multiple decision trees and adapt to more complex model structures.
Random forest consists of an ensemble of decision trees, and the prediction result is the average or voting result of all trees. Random forest improves the robustness and accuracy of the model by training multiple decision trees and aggregating their predictions. Each tree is trained on a different subset of data, introducing randomness.

2.1.8. Gradient Boosting

Gradient boosting is an ensemble learning method that trains weak learners iteratively and combines them into a strong learner [54]. Gradient boosting was chosen to enhance the model’s predictive performance further. Gradient boosting has a strong ability to fit complex nonlinear relationships.
F(x) = \sum_{m=1}^{M} \gamma_m h_m(x)
In this context, F(x) is the final predicted result, γm represents the weight of the m-th weak learner, and hm(x) is the prediction of that weak learner. The method combines multiple weak learners to form a strong model.
Gradient boosting works by iteratively training multiple weak learners, each time adjusting the model to reduce the gradient of the loss function. It is suitable for nonlinear relationships and can gradually improve the performance of the model.

2.1.9. XGBoost

XGBoost is an improved version of the gradient boosting algorithm, providing superior efficiency and performance [54]. XGBoost regression was selected to take full advantage of its exceptional training speed and superior generalization performance.
F(x) = \sum_{m=1}^{M} \gamma_m h_m(x) + \sum_{k=1}^{K} \Omega(f_k)
The term Ω(fk) is the regularization term for each tree in the model. This term is used to constrain the complexity of the tree structure and prevent overfitting.
XGBoost, building upon gradient boosting, introduces a regularization term to optimize model complexity, enhancing the training speed and performance. In this complex machine learning task, XGBoost offers considerable advantages, including high predictive performance, robustness against features, and the capability to handle missing values. These advantages make it a powerful modeling tool for predicting the mechanical properties of DP590/CFRP composite laminate.

2.2. Fine-Tuning Machine Learning Models

To enhance the model’s performance, we employed two tuning methods: grid search and cross-validation. Grid search is a tuning technique to find the optimal hyperparameters for the model. It works by specifying a grid of potential values for the hyperparameters and then systematically working through multiple combinations of those hyperparameters. Cross-validation evaluates these models to determine which combination offers the highest validation score. The model’s performance would thereby be significantly enhanced with the optimal parameters.
Cross-validation is a resampling procedure used to evaluate the model’s performance on a limited data sample. The most common cross-validation method is k-fold cross-validation, where ‘k’ is the number of groups a dataset sample will split into.
After separating the data into k groups, we fit the model using k-1 groups and evaluate the model’s performance on the remaining part of the data. We carry out this process k times, so we obtain k models and performance estimates, which we then average, providing a more accurate model performance indicative of the model’s ability to generalize to unseen data. By using systematic parameter search and validation, we aim to find the best model parameters to ensure the model’s robust generalization ability.

2.3. Machine Learning Model Evaluation Indicators

This study employs several evaluation metrics, including the MAE (mean absolute error), MSE (mean squared error), R2 (coefficient of determination), and MAPE (mean absolute percentage error). Together, these metrics assess the model's performance from several angles, supporting comprehensive and accurate predictions of the composite material's properties. The MAE measures the average magnitude of the errors in a set of predictions without considering their direction; it is the average of the absolute differences between the predictions and the actual observations over the test sample, with all individual differences weighted equally. The MSE measures the average squared difference between the estimated and actual values and is more sensitive to outliers than the MAE, as the differences are squared before being averaged. The R2, or coefficient of determination, is a statistical measure of the proportion of the variance of the dependent variable that is explained by the independent variables in a regression model; the closer this value is to 1, the better the fit. Lastly, the MAPE expresses the average error as a simple percentage and is often used in forecasting; the smaller the MAPE, the better the accuracy. By using all of these metrics, we obtain a more holistic view of each model's performance, acknowledging its strengths and the areas where it can improve.

3. Machine Learning Data Acquisition

3.1. Experimental Introduction

The laminate fabrication process involves thawing the prepreg at room temperature for about 0.5 h, followed by cutting and lay-up (including placement of the steel plate), compression molding, and finally demolding and cooling. The stacking sequence of the CFRP layers in the ply design of the DP590/CFRP composite laminates for both the tension and bending models is shown in Figure 1. The tensile strength tests were conducted in accordance with the ASTM D638 standard [55], and the flexural strength tests followed the ASTM D790 standard [56].

3.2. Introduction to Finite Element Modeling

The numerical model implements a three-dimensional, strain-based damage law through an Abaqus explicit user material subroutine (VUMAT). Interface failure was simulated using a bilinear cohesive contact model, and the steel layer was described by the Johnson–Cook model. For the fiber layers, the 2D Hashin criterion, which considers only the in-plane stress components, was replaced by a 3D Hashin failure criterion implemented in the VUMAT. In the implementation, the initiation of fiber failure and matrix tensile failure is governed by the maximum stress criterion and the strain-based Hashin criterion, respectively, whereas matrix compression failure is assessed using the Puck failure criterion.
The established finite element model can accurately predict the response and failure mechanisms of composite laminates during tensile and bending processes. The numerical model accurately predicted the experimental curves, notably achieving a 97% accuracy in predicting the maximum load. In the bending tests, both the experimental and numerical models demonstrated typical bilinear behaviors, with prediction accuracies reaching 88% for the bending modulus and 97% for the bending strength. Finally, utilizing the established FEA-VUMAT model, we studied the ultimate failure modes within and between layers. It was found that the predicted failure modes in the bending tests were consistent with the experimental results [57].

3.3. Ply Stacking Design of CFRP Layers in DP590/CFRP Composite Laminates

Utilizing experimentally validated finite element models, we extensively discussed the impact of different ply stacking sequences on the strength of DP590/CFRP composite laminates under tension and bending loading conditions. The analysis encompasses the reasons for strength variations, interlaminar stress distributions, and other aspects, aiming to understand the performance characteristics comprehensively.
In this study, 28 different layup sequences were selected to explore the mechanical properties of composite laminates, incorporating commonly used fiber angles such as 0°, ±45°, and 90°. The design included not only symmetric layups but also asymmetric configurations. This approach aims to comprehensively assess the impact of these varying fiber angles on the performance of the composite laminates. By integrating both symmetric and asymmetric layups, this study not only investigates the balance and stability of symmetric layups but also evaluates the performance advantages of asymmetric layups in specific applications. The specific layup sequences are presented in Table 1.

3.4. Data Augmentation and Preprocessing

This paper obtained tensile and bending strength data for multiple sets of DP590/CFRP composite laminates through experiments. The finite element model was then validated against these experimental data, and the validated model was used to analyze the tensile and bending strengths of laminates with different CFRP layup sequences. The initial values obtained through the finite element and experimental methods amounted to 48 sets, and the training set was expanded through data augmentation from the original 48 sample sets to 1296 sets. Feature encoding was performed by applying one-hot encoding to the angle values in the layup sequence, and the encoded angle values were merged with the original data to better handle the angle information. Standardization was employed to eliminate the dimensional differences between features, scaling all features to a mean of 0 and a standard deviation of 1, which improves the training effectiveness and stability of the model. Furthermore, a correlation analysis was conducted, and the correlation coefficients between the features and the strengths were calculated. Only features with non-zero standard deviations were considered, with invariant features excluded to minimize their impact on the results. The results of the correlation analysis are presented as heatmaps, visually illustrating the degree of correlation between the features. The original and expanded data used in this study are available for download in the Supplementary Materials section at the end of this article.
Figure 2, Figure 3, Figure 4 and Figure 5 compare the correlation coefficient heatmaps of the tensile strength and bending strength for DP590/CFRP composite plates before and after data augmentation. In these heatmaps, the X-axis and Y-axis represent different variables or parameters used in this study, including the material’s angles at various stacking sequences, roughness, cross-sectional thickness, cross-sectional width, cross-sectional area, and tensile strength. Each cell in the heatmap represents the correlation coefficient between the corresponding variables, ranging from −1 to 1, where 1 indicates a perfect positive correlation, −1 indicates a perfect negative correlation, and 0 indicates no correlation. Warm colors indicate positive correlations, meaning that as one variable increases, the other variable tends to increase as well; cold colors indicate negative correlations, meaning that as one variable increases, the other variable tends to decrease.
For instance, in Figure 2, the correlation coefficient between angle_1_0 (the one-hot indicator that the first ply is oriented at 0°) and the tensile strength is 0.61, indicating a strong positive correlation: laminates whose first ply is at 0° tend to have a higher tensile strength. Conversely, the correlation coefficient between angle_1_45 and the tensile strength is −0.38, indicating a moderate negative correlation: a 45° first ply is associated with a lower tensile strength. After data augmentation, as shown in Figure 3, although the correlation coefficients of most variables changed, the overall trend remained relatively stable. For example, the correlation coefficient between the cross-sectional area and strength decreased after augmentation but remained relatively high, indicating that the data augmentation method preserved the intrinsic relationships between variables, ensuring the model's robustness and generalizability.
Overall, despite some changes in the variable correlations, most remained relatively stable, indicating that data augmentation did not significantly alter the characteristics of the data source. The analysis results suggest that the model built using this data augmentation method exhibits excellent performance, validating its effectiveness in predicting the mechanical properties of composite materials. This method not only successfully expanded the training set but also maintained the model’s robustness, significantly improving its generalizability. These data preprocessing steps provide a solid foundation for subsequent model training, ensuring the model can accurately capture patterns and effectively predict the mechanical properties of materials.
The research presented in this paper is primarily divided into three stages. The first stage is the data preparation phase, where initial data are obtained through experimental and numerical simulation methods. Subsequently, data augmentation techniques incorporating randomness were employed to expand the dataset, ensuring it could support the training requirements of machine learning models. The second stage involved model training and validation, where the models were optimized using grid search and cross-validation methods. The third stage comprised the comparison of predictions from different machine learning models, including a summary analysis of the model hyperparameters and performance validation results. By comparing the predictive outcomes of machine learning models with the findings from mechanistic studies, a multi-faceted understanding and interpretation of the performance of CFRP/metal composite materials were achieved, which offers new insights into the research in this field. Figure 6 summarizes the entire procedure.

4. Results and Discussion

4.1. The Impact of Different Layup Sequences on the Tensile and Bending Properties of DP590/CFRP Composite Laminates

Table 2 presents the tensile and bending strengths of the DP590/CFRP composite laminates across 28 CFRP lamination sequences. The data were obtained using experimental methods under specific layup conditions (0°/90°/90°/90°/90°/0°). Subsequently, the validity of the established finite element model was verified using these experimental data. Utilizing the finite element model, the tensile and bending strengths of the DP590/CFRP composite laminates under various layup conditions were discussed.
Lamination sequence 2 (“0°/0°/0°/0°/0°/0°”) stands out by exhibiting the highest tensile strength at 819.97 MPa and the highest bending strength at 947.67 MPa. This superior performance is primarily attributed to the unidirectional lay-up, in which fibers uniformly oriented at 0° ensure a more even stress distribution under both tensile and bending loads. This uniform stress distribution is pivotal in enhancing the laminate’s mechanical properties.
The analysis reveals that specific lamination sequences significantly influence the laminate’s mechanical behavior. For instance, sequences employing alternating fiber orientations, such as sequences 5 (“0°/90°/90°/90°/90°/0°”) and 6 (“0°/90°/0°/0°/90°/0°”), demonstrated improved tensile and bending performances due to better stress dispersion. Moreover, the inclusion of 45° fibers in sequences like 9 (“0°/0°/45°/−45°/0°/0°”) and 11 (“0°/45°/0°/0°/−45°/0°”) has been shown to optimize the tensile behavior in specific configurations further.
The findings underscore the critical role of the lamination sequence in optimizing the performance of fiber/metal composite materials. Uniform 0° fiber orientation, or the strategic alternation of orientations, contributes significantly to the superior mechanical properties of the composite laminates. These insights provide valuable guidance for the design and performance optimization of composite materials, highlighting the synergistic effect of fiber orientation in enhancing the laminate’s tensile and bending strengths.
Initially, the combination of finite element methods and experimental techniques was used to obtain the tensile and bending strengths of the DP590/CFRP composite laminates under 28 different layup sequences, as presented in Table 2. These values served as the raw data. To expand the dataset for machine learning purposes, a method involving the introduction of random bias was employed. Specifically, random bias coefficients were generated for certain fields within the dataset, such as the tensile and bending strengths, and these coefficients were applied to compute new field values, thereby synthesizing new data points. This randomization not only increased the quantity of the dataset but also enhanced its diversity, contributing to the model’s robustness and accuracy when dealing with real-world data. Ultimately, the newly generated dataset was sufficient to meet the demands of machine learning, allowing for further analysis and model training, as illustrated in Figure 7 and Figure 8.
Figure 7 and Figure 8 show the fully expanded tensile and bending datasets. The term “Sample NO.” denotes the serial number assigned to each data point in the dataset; all data points are arranged in ascending order.

4.2. Predictive Results of Tensile Strength with Different Machine Learning Models

Figure 9 presents a comparative analysis of the tensile strength predictions made by various regression and machine learning models against the experimental values, highlighting their predictive accuracies and error distributions. Figure 9a illustrates the performance of the linear regression model, noting a maximum prediction error of −91.3843 MPa and a minimum of 0.0840 MPa, with 87 samples exceeding an absolute error of 20 MPa. The error spread predominantly ranges between −71 MPa and +59 MPa. Similarly, Figure 9b evaluates the ridge regression model, revealing a maximum error of −82.2506 MPa, a minimum of 0.0851 MPa, and 87 samples with errors above 20 MPa. Its error distribution extends from −69 MPa to +61 MPa. In the case of the lasso regression model, shown in Figure 9c, the errors stretch from −85.6135 MPa to −0.0688 MPa, with the same number of samples exhibiting significant errors. The error span is noted between −72 MPa and +60 MPa. Figure 9d assesses the K-nearest neighbor model, marking a notable improvement with a maximum error of 25.9403 MPa, a minimum of 0 MPa, and only four samples with errors beyond 20 MPa, showcasing a more contained error range from −16 MPa to +16 MPa.
The polynomial regression model’s performance is depicted in Figure 9e, where the maximum error reaches −131.8372 MPa and the minimum −0.0625 MPa, with 48 samples exceeding the 20 MPa error threshold. The error distribution is observed between −39 MPa and +26 MPa. The decision tree model evaluations in Figure 9f show a maximum error of −41.0732 MPa, a minimum of 0 MPa, and only two samples with significant errors, indicating an error distribution from −14 MPa to +14 MPa. Figure 9g explores the random forest model, presenting a maximum error of 35.4226 MPa, a minimum of 0.1779 MPa, and 18 samples with errors over 20 MPa. The error range is between −25 MPa and +14 MPa. Lastly, Figure 9h,i analyze the gradient boosting and XGBoost models, respectively. The former shows a maximum error of −21.7254 MPa and a minimum of 0.1260 MPa, with five samples exceeding the 20 MPa error benchmark and an error distribution from −19 MPa to +12 MPa. The latter demonstrates superior performance with no samples exceeding 20 MPa in error and a distribution range from −15 MPa to +13 MPa.
In terms of prediction error, the XGBoost, gradient boosting, and K-nearest neighbors models show a commendable reduction in maximum error, enhancing their reliability for practical use. The XGBoost (version 2.0.3), decision tree, and K-nearest neighbors models also excel in stability, effectively limiting high-error outliers. The error distributions of these models suggest a more accurate capture of the tensile strength characteristics, with XGBoost outperforming the others in terms of a lower maximum error and stable predictive accuracy. Consequently, the XGBoost model is highlighted as the most effective, followed by the K-nearest neighbors and gradient boosting models for their satisfactory predictive capabilities.
Table 3 offers a detailed comparison of the performance and hyperparameter configurations of various machine learning models in predicting the tensile strength of CFRP/DP590. The evaluation encompasses the MAE, MSE, R2, MAPE, and the optimization of hyperparameters through methods such as grid search and cross-validation. The key insights from this comparison are summarized as follows:
XGBoost and gradient boosting are the top performers, which showcase exemplary precision with MAE values of 6.080 and 6.067, MSE scores of 56.15 and 59.86, R2 values at 0.996 for both, and MAPE at 1.08. Their hyperparameter configurations, finely tuned for optimal balance, underscore their superior performance.
Random forest exhibits solid results in the MAE, MSE, and R2 metrics, albeit slightly behind XGBoost and gradient boosting, hinting at the potential for further enhancement through hyperparameter optimization. K-nearest neighbors demonstrates commendable outcomes but reveals a higher MAPE, suggesting a possible sensitivity to noise in the dataset. Adjustments in its hyperparameters could mitigate this issue. Decision tree achieves satisfactory results with default parameters, though improvements in the MAE, MSE, and R2 could be achieved through tuning.
Polynomial regression underperforms, potentially due to overfitting, indicating a need to revisit the degree of polynomials used. Linear regression, lasso regression, and ridge regression are the least effective, with higher MAPEs and lower performance metrics, suggesting difficulty in capturing the data’s complexity.
In summary, XGBoost and gradient boosting are the most effective models, offering lower prediction errors and high explanatory power. Xing Liu et al. [58] have pointed out in their research that XGBoost, as a machine learning technique, is capable of providing accurate predictions for tabular datasets and possesses good predictive interpretability. The selection of a model should consider not only the performance metrics but also the complexity and interpretability to align with the requirements of practical applications.

4.3. Different Machine Learning Model Predictions for Bending Strength

Figure 10 offers a detailed comparison of the bending strength predictions from the nine distinct machine learning models against the experimental values, highlighting their predictive accuracy through specific metrics such as the maximum and minimum errors, the count of samples with significant errors (≥20 MPa), and the overall distribution of these errors.
Linear regression showcased a broad error range with maximum and minimum errors of 367.8585 MPa and 0.1032 MPa, respectively, and a high number of samples (199) exceeding the 20 MPa error threshold, indicating significant variability in predictions. Ridge regression revealed a slightly improved accuracy with a maximum error of 369.6862 MPa and a minimum of −156.3737 MPa, yet it still had many samples (159) with significant errors. The lasso model presented a similar performance to ridge regression with a maximum error of 369.8354 MPa and 159 samples with errors above 20 MPa, indicating challenges in prediction accuracy. K-nearest neighbor (KNN) demonstrated a narrower error distribution from −32 MPa to +19 MPa and fewer samples (36) with significant errors, showcasing better model reliability. Polynomial regression exhibited the widest error range, with a maximum error of 381.2560 MPa, underscoring potential overfitting issues.
Decision tree regression showed improved precision with a maximum error of −41.5498 MPa and only 28 samples exceeding the 20 MPa error mark, indicating a more accurate prediction capability. The random forest model presented a balanced performance with a maximum error of −134.6730 MPa, and 65 samples had significant errors, suggesting a moderate level of prediction accuracy. Gradient boosting regression and XGBoost regression displayed concentrated error distributions and fewer samples with significant errors (29 and 34, respectively), indicating high consistency and reliability in their predictions. In conclusion, the decision tree, K-nearest neighbors, and random forest models were the top performers based on the smallest maximum errors, indicating their higher precision in bending strength predictions. The decision tree, gradient boosting, and XGBoost models had the fewest number of samples with significant errors, showcasing their reliability. The K-nearest neighbors, gradient boosting, and XGBoost models featured the most concentrated error distributions, highlighting their consistency and accuracy. These insights guide the selection of the optimal models for predicting the bending strength in materials science, emphasizing the importance of error minimization and predictive reliability.
Table 4 compares the prediction performance and hyperparameters of the different machine learning models for the CFRP/DP590 bending strength. The XGBoost and decision tree models exhibit excellent performance in terms of the MAE (9.66 and 9.91), MSE (136.35 and 145.46), R2 (0.98), and relatively low MAPE (1.156 and 1.191), respectively, demonstrating superior predictive accuracy and interpretability, and their performance is relatively stable. During hyperparameter tuning, XGBoost used carefully adjusted settings, while the decision tree used default settings, indicating the exceptional performance of the decision tree on this problem. K-nearest neighbors and gradient boosting perform relatively well but are slightly inferior to XGBoost and decision tree. The hyperparameter settings for K-nearest neighbors {‘n_neighbors’: 11, ‘weights’: ‘distance’} reveal consideration of the distance weights of the neighbors, while the settings for gradient boosting {‘learning_rate’: 0.33, ‘max_depth’: 6, ‘min_samples_leaf’: 1, ‘min_samples_split’: 4, ‘n_estimators’: 66} indicate a certain trade-off introduced in learning. The random forest performs only moderately and might require further fine-tuning of its hyperparameters; its settings {‘max_depth’: 19, ‘min_samples_leaf’: 1, ‘min_samples_split’: 2, ‘n_estimators’: 33} might lead to a rather complex model, calling for careful adjustment. Lasso regression, ridge regression, polynomial regression, and linear regression show the poorest performance in terms of the MAE, MSE, and R2, and have a relatively higher MAPE, indicating that linear models and polynomial regression might be inadequate to capture the nonlinear relationships in the data. The hyperparameter settings for lasso regression and ridge regression are {‘alpha’: 0.18} and {‘alpha’: 5.25}, respectively, showing some balance achieved in the regularization process.
In summary, XGBoost and decision tree are the most effective models for predicting the bending strength, offering both accuracy and interpretability; C. Furtado and colleagues reached similar conclusions, noting that XGBoost performs excellently in predicting the design of composite laminate panels [59]. Notably, the decision tree reaches this level of performance with default settings, highlighting its suitability for this problem, whereas XGBoost requires finely tuned hyperparameters. K-nearest neighbors and gradient boosting are commendable but slightly inferior, while random forest would require further hyperparameter tuning for potential improvement. Linear models, including polynomial regression, prove least effective for this dataset, underscoring that the hyperparameters are closely tied to model performance and that careful model selection and tuning are needed to achieve precise predictive outcomes.

5. Conclusions

Through a comprehensive analysis of the DP590/CFRP composite laminate performance under various layup sequences, this study has identified key findings that substantially contribute to future material design and engineering applications:
  • Optimal Layup Sequences: Layup sequence 2, in which all CFRP plies are oriented at 0° (a unidirectional layup), demonstrated superior mechanical properties with a tensile strength of 819.97 MPa and a bending strength of 947.67 MPa. Other sequences showing robust performance include layups 5, 6, 8, 9, 11, 15, 16, and 25, all exceeding 600 MPa in tensile strength, and sequences 5 through 12, which exceed 900 MPa in bending strength.
  • Machine Learning Model Performance: Among the machine learning models evaluated, XGBoost and gradient boosting emerged as the top performers across multiple metrics, including maximum error, mean absolute error (MAE), mean squared error (MSE), and the coefficient of determination (R2). These models exhibited robustness and high interpretability, effectively capturing the complex relationships in the composite performance data.
  • Synergy Between Experimental and Numerical Approaches: Integrating experimental data with numerical simulations and machine learning analysis has enriched our understanding of CFRP/steel composite materials. This holistic approach not only validates the finite element models but also enhances our insight into the material behavior under various conditions, demonstrating the complementary nature of these methodologies.
This study’s findings provide critical design references for optimizing the performance of fiber/metal composites and underline the effectiveness of advanced analytical models in predicting material behaviors.

Supplementary Materials

The following supporting information can be downloaded at: https://0-www-mdpi-com.brum.beds.ac.uk/article/10.3390/polym16111589/s1, Dataset S1: Original Tensile Strength Dataset; Dataset S2: Original bending strength Dataset; Dataset S3: Expanded Tensile strength Dataset; Dataset S4: Expanded Bending strength Dataset.

Author Contributions

H.H.: Conceptualization, writing original draft, formal analysis, validation. Q.W.: Supervision, validation. T.W.: Machine learning, visualization. Q.M.: Validation. P.J.: Data curation, visualization. S.P.: Formal analysis, visualization. F.L.: Visualization. S.W.: Investigation, numerical simulation. Y.Y.: Investigation, numerical simulation. Y.L.: Formal analysis. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Science & Technology Development Fund of Tianjin Education Commission for Higher Education, grant number 2021KJ099; Tianjin Applied Basic Research Project, grant number 22JCYBJC00330; and the Tianjin Technical Expert Project, grant number 23YDTPJC00940.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article and Supplementary Materials; further inquiries can be directed to the corresponding authors.

Acknowledgments

The authors would like to thank the team members who assisted with the experimental work.

Conflicts of Interest

The authors declare no potential conflicts of interest with respect to the research, authorship, and publication of this article.

References

  1. Erik, F. Carbon fibers: Precursor systems, processing, structure, and properties. Angew. Chem. (Int. Ed. Engl.) 2014, 21, 5262–5298. [Google Scholar]
  2. Yao, Y.; Shi, P.; Qi, S.; Yan, C.; Chen, G.; Liu, D.; Zhu, Y.; Herrmann, A. Manufacturing and mechanical properties of steel-CFRP hybrid composites. J. Compos. Mater. 2020, 54, 3673–3682. [Google Scholar] [CrossRef]
  3. Vlot, A.; Gunnink, J. Fibre Metal Laminates: An Introduction; Springer: Berlin/Heidelberg, Germany, 2001. [Google Scholar]
  4. Gholami, M.; Sam, A.R.M.; Yatim, J.M.; Tahir, M.M. A review on steel/CFRP strengthening systems focusing environmental performance. Constr. Build. Mater. 2013, 47, 301–310. [Google Scholar] [CrossRef]
  5. Chang, P.-Y.; Yeh, P.-C.; Yang, J.-M. Fatigue crack initiation in hybrid boron/glass/aluminum fiber metal laminates. Mater. Sci. Eng. A 2008, 496, 273–280. [Google Scholar] [CrossRef]
  6. Zuo, P.; Srinivasan, D.V.; Vassilopoulos, A.P. Review of hybrid composites fatigue. Compos. Struct. 2021, 274, 114358. [Google Scholar] [CrossRef]
  7. Liu, H.; Falzon, B.G.; Tan, W. Predicting the Compression-After-Impact (CAI) strength of damage-tolerant hybrid unidirectional/woven carbon-fibre reinforced composite laminates. Compos. Part A Appl. Sci. Manuf. 2018, 105, 189–202. [Google Scholar] [CrossRef]
  8. Dadej, K.; Bieniaś, J. On fatigue stress-cycle curves of carbon, glass and hybrid carbon/glass-reinforced fibre metal laminates. Int. J. Fatigue 2020, 140, 105843. [Google Scholar] [CrossRef]
  9. Banik, A.; Zhang, C.; Khan, M.H.; Wilson, M.; Tan, K.T. Low-velocity ice impact response and damage phenomena on steel and CFRP sandwich composite. Int. J. Impact Eng. 2022, 162, 104134. [Google Scholar] [CrossRef]
  10. Quagliato, L.; Jang, C.; Kim, N. Manufacturing process and mechanical properties characterisation for steel skin—Carbon fiber reinforced polymer core laminate structures. Compos. Struct. 2019, 209, 1–12. [Google Scholar] [CrossRef]
  11. Zhu, Q.; Zhang, C.; Curiel-Sosa, J.L.; Quoc Bui, T.; Xu, X. Finite element simulation of damage in fiber metal laminates under high velocity impact by projectiles with different shapes. Compos. Struct. 2019, 214, 73–82. [Google Scholar] [CrossRef]
  12. Taherzadeh-Fard, A.; Liaghat, G.; Ahmadi, H.; Razmkhah, O.; Chitsaz Charandabi, S.; Amin Zarezadeh-mehrizi, M.; Khodadadi, A. Experimental and numerical investigation of the impact response of elastomer layered fiber metal laminates (EFMLs). Compos. Struct. 2020, 245, 112264. [Google Scholar] [CrossRef]
  13. Nassir, N.A.; Birch, R.S.; Cantwell, W.J.; Sierra, D.R.; Edwardson, S.P.; Dearden, G.; Guan, Z.W. Experimental and numerical characterisation of titanium-based fibre metal laminates. Compos. Struct. 2020, 245, 112398. [Google Scholar] [CrossRef]
  14. Hu, C.; Sang, L.; Jiang, K.; Xing, J.; Hou, W. Experimental and numerical characterisation of flexural properties and failure behavior of CFRP/Al laminates. Compos. Struct. 2022, 281, 115036. [Google Scholar] [CrossRef]
  15. Blala, H.; Lang, L.; Li, L.; Alexandrov, S. Deep drawing of fiber metal laminates using an innovative material design and manufacturing process. Compos. Commun. 2021, 23, 100590. [Google Scholar] [CrossRef]
  16. Yao, L.; Zhang, S.; Cao, X.; Gu, Z.; Wang, C.; He, W. Tensile mechanical behavior and failure mechanisms of fiber metal laminates under various temperature environments. Compos. Struct. 2022, 284, 115142. [Google Scholar] [CrossRef]
  17. Guocai, W.; Yang, J.M. The mechanical behavior of GLARE laminates for aircraft structures. J. Miner. Mater. Soc. 2005, 57, 72–79. [Google Scholar]
  18. Wu, G.; Wu, Z.-S.; Luo, Y.-B.; Sun, Z.-Y.; Hu, X.-Q. Mechanical Properties of Steel-FRP Composite Bar under Uniaxial and Cyclic Tensile Loads. J. Mater. Civ. Eng. 2010, 22, 1056–1066. [Google Scholar] [CrossRef]
  19. Reyes, G.; Gupta, S. Manufacturing and mechanical properties of thermoplastic hybrid laminates based on DP500 steel. Compos. Part A Appl. Sci. Manuf. 2009, 40, 176–183. [Google Scholar] [CrossRef]
  20. Gonzalez-Canche, N.G.; Flores-Johnson, E.; Carrillo, J.G. Mechanical characterisation of fiber metal laminate based on aramid fiber reinforced polypropylene. Compos. Struct. 2017, 172, 259–266. [Google Scholar] [CrossRef]
  21. Zhang, H.; Yang, D.; Ding, H.; Wang, H.; Xu, Q.; Ma, Y.; Bi, Y. Effect of Z-pin insertion angles on low-velocity impact mechanical response and damage mechanism of CFRP laminates with different layups. Compos. Part A Appl. Sci. Manuf. 2021, 150, 106593. [Google Scholar] [CrossRef]
  22. Su, B.; Liu, S.; Zhang, P.; Wu, J.; Wang, Y. Mechanical properties and failure mechanism of overlap structure for cord-rubber composite. Compos. Struct. 2021, 274, 114350–114358. [Google Scholar] [CrossRef]
  23. Samborski, S. Numerical analysis of the DCB test configuration applicability to mechanically coupled Fiber Reinforced Laminated Composite beams. Compos. Struct. 2016, 152, 477–487. [Google Scholar] [CrossRef]
  24. Lauter, C.; Wang, Z.; Koke, I.; Troester, T. Influences of process parameters on the mechanical properties of hybrid sheet metal-FRP-composites manufactured by prepreg press technology. In Proceedings of the 20th International Conference on Composite Materials, Wollongong, Australia, 15–19 February 2015. [Google Scholar]
  25. Alphonse, M.; Raja, V.K.B.; Krishna, V.G.; Kiran, R.S.U.; Subbaiah, B.V.; Chandra, L.V.R. Mechanical behavior of sandwich structures with varying core material—A review. Mater. Today Proc. 2021, 44, 3751–3759. [Google Scholar] [CrossRef]
  26. Liu, S.; Cui, Y.; Cui, S.; Li, Z.; Zhou, F.; Wang, H. Experimental investigation on rock fracturing performance under high-pressure foam impact. Eng. Fract. Mech. 2021, 252, 107838. [Google Scholar] [CrossRef]
  27. Huo, J.; Zhang, X.; Yang, J.; Xiao, Y. Experimental study on dynamic behavior of CFRP-to-steel interface. Structures 2019, 20, 465–475. [Google Scholar] [CrossRef]
  28. Draganić, H.; Gazić, G.; Lukić, S.; Jeleč, M. Experimental investigation on blast load resistance of reinforced concrete slabs retrofitted with epoxy resin impregnated glass fiber textiles. Compos. Struct. 2021, 274, 114349. [Google Scholar] [CrossRef]
  29. Thomson, D.M.; Cui, H.; Erice, B.; Hoffmann, J.; Wiegand, J.; Petrinic, N. Experimental and numerical study of strain-rate effects on the IFF fracture angle using a new efficient implementation of Puck’s criterion. Compos. Struct. 2017, 181, 325–335. [Google Scholar] [CrossRef]
  30. Banat, D.; Mania, R.J. Damage analysis of thin-walled GLARE members under axial compression—Numerical and experiment investigations. Compos. Struct. 2020, 241, 112102. [Google Scholar] [CrossRef]
  31. Wadagbalkar, P.; Liu, G.R. Real-time prediction of projectile penetration to laminates by training machine learning models with finite element solver as the trainer. Def. Technol. 2021, 17, 147–160. [Google Scholar] [CrossRef]
  32. Okafor, C.E.; Iweriolor, S.; Ani, O.I.; Ahmad, S.; Mehfuz, S.; Ekwueme, G.O.; Chukwumuanya, O.E.; Abonyi, S.E.; Ekengwu, I.E.; Chikelu, O.P. Advances in machine learning-aided design of reinforced polymer composite and hybrid material systems. Hybrid Adv. 2023, 2, 100026. [Google Scholar] [CrossRef]
  33. Zhao, J.; Wang, B.; Lyu, Q.; Xie, W.; Guo, Z.; Wang, B. Compression after multiple impact strength of composite laminates prediction method based on machine learning approach. Aerosp. Sci. Technol. 2023, 136, 108243. [Google Scholar] [CrossRef]
  34. Yuan, M.; Zhao, H.; Xie, Y.; Ren, H.; Tian, L.; Wang, Z.; Zhang, B.; Chen, J. Prediction of stiffness degradation based on machine learning: Axial elastic modulus of [0m/90n]s composite laminates. Compos. Sci. Technol. 2022, 218, 109186. [Google Scholar] [CrossRef]
  35. Yuan, M.; Zhao, H.; Liu, S.; Ren, H.; Zhang, B.; Chen, J. Prediction of matrix-cracking-induced stiffness degradation of cross-ply laminates based on data-driven method. Compos. Sci. Technol. 2022, 230, 109716. [Google Scholar] [CrossRef]
  36. Chen, S.-Z.; Feng, D.-C.; Han, W.-S.; Wu, G. Development of data-driven prediction model for CFRP-steel bond strength by implementing ensemble learning algorithms. Constr. Build. Mater. 2021, 303, 124470. [Google Scholar] [CrossRef]
  37. Bagherzadeh, F.; Shafighfard, T.; Khan, R.M.A.; Szczuko, P.; Mieloszyk, M. Prediction of maximum tensile stress in plain-weave composite laminates with interacting holes via stacked machine learning algorithms: A comparative study. Mech. Syst. Signal Process. 2023, 195, 110315. [Google Scholar] [CrossRef]
  38. Nastos, C.; Komninos, P.; Zarouchas, D. Non-destructive strength prediction of composite laminates utilising deep learning and the stochastic finite element methods. Compos. Struct. 2023, 311, 116815. [Google Scholar] [CrossRef]
  39. Chahar, R.S.; Mukhopadhyay, T. Multi-fidelity machine learning based uncertainty quantification of progressive damage in composite laminates through optimal data fusion. Eng. Appl. Artif. Intell. 2023, 125, 106647. [Google Scholar] [CrossRef]
  40. Wanigasekara, C.; Oromiehie, E.; Swain, A.; Prusty, B.G.; Nguang, S.K. Machine learning-based inverse predictive model for AFP based thermoplastic composites. J. Ind. Inf. Integr. 2021, 22, 100197. [Google Scholar] [CrossRef]
  41. Moein, M.M.; Saradar, A.; Rahmati, K.; Mousavinejad, S.H.G.; Bristow, J.; Aramali, V.; Karakouzian, M. Predictive models for concrete properties using machine learning and deep learning approaches: A review. J. Build. Eng. 2023, 63, 105444. [Google Scholar] [CrossRef]
  42. Jalali, S.S.; Mahzoon, M.; Mohammadi, H. Identification of damage properties of glass/epoxy laminates using machine learning models. Int. J. Impact Eng. 2023, 177, 104510. [Google Scholar] [CrossRef]
  43. Stergiou, K.; Ntakolia, C.; Varytis, P.; Koumoulos, E.; Karlsson, P.; Moustakidis, S. Enhancing property prediction and process optimisation in building materials through machine learning: A review. Comput. Mater. Sci. 2023, 220, 112031. [Google Scholar] [CrossRef]
  44. Sánchez-Garrido, A.J.; Navarro, I.J.; García, J.; Yepes, V. A systematic literature review on modern methods of construction in building: An integrated approach using machine learning. J. Build. Eng. 2023, 73, 106725. [Google Scholar] [CrossRef]
  45. Puchi-Cabrera, E.S.; Rossi, E.; Sansonetti, G.; Sebastiani, M.; Bemporad, E. Machine learning aided nanoindentation: A review of the current state and future perspectives. Curr. Opin. Solid State Mater. Sci. 2023, 27, 101091. [Google Scholar] [CrossRef]
  46. Chaupal, P.; Rajendran, P. A review on recent developments in vibration-based damage identification methods for laminated composite structures: 2010–2022. Compos. Struct. 2023, 311, 116809. [Google Scholar] [CrossRef]
  47. Seber, G.A.F.; Lee, A.J. Linear Regression Analysis, 2nd ed.; John Wiley & Sons: Hoboken, NJ, USA, 2012. [Google Scholar]
  48. Wu, R. Forecast analysis of securities index based on ridge regression—In case of shanghai composite index. Bus. Glob. 2016, 4, 47–55. [Google Scholar] [CrossRef]
  49. Boonyakunakorn, P.; Nunti, C.; Yamaka, W. Forecasting of Thailand’s Rice Exports Price: Based on Ridge and Lasso Regression; ACM Press: New York, NY, USA, 2019. [Google Scholar]
  50. Mohammed, A.J.; Mohammed, A.S.; Mohammed, A.S. Prediction of Tribological Properties of UHMWPE/SiC Polymer Composites Using Machine Learning Techniques. Polymers 2023, 15, 4057. [Google Scholar] [CrossRef] [PubMed]
  51. Li, H.; Li, F.; Zhu, L. A Fast and Efficient Approach to Strength Prediction for Carbon/Epoxy Composites with Resin-Missing Defects. Polymers 2024, 16, 742. [Google Scholar] [CrossRef] [PubMed]
  52. Siddiqui, E.F.; Ahmed, T.; Nayak, S.K. A decision tree approach for enhancing real-time response in exigent healthcare unit using edge computing. Meas. Sens. 2024, 32, 100979. [Google Scholar] [CrossRef]
  53. Li, X.; Yuan, Y. Hybrid and gradient design of ultra-thin-ply composite laminates for synergistic suppression of delamination and fiber fracture damage modes. Eng. Fract. Mech. 2024, 295, 109822. [Google Scholar] [CrossRef]
  54. Uddin, M.J.; Fan, J. Interpretable Machine Learning Framework to Predict the Glass Transition Temperature of Polymers. Polymers 2024, 16, 1049. [Google Scholar] [CrossRef] [PubMed]
  55. ASTM D638-2014; Standard Test Method for Tensile Properties of Plastics. ASTM International: West Conshohocken, PA, USA, 2014.
  56. ASTM D790-2017; Standard Test Methods for Flexural Properties of Unreinforced and Reinforced Plastics and Electrical Insulating Materials. ASTM International: West Conshohocken, PA, USA, 2017.
  57. Hu, H.; Hu, N.; Wei, Q.; Liu, B.; Wu, J.; Wang, Z.; Yang, C. Characterisation of progressive damage behaviour and failure mechanism of carbon fibre reinforced DP590 laminates. Thin-Walled Struct. 2021, 168, 13. [Google Scholar] [CrossRef]
  58. Liu, X.; Liu, T.Q.; Feng, P. Long-term performance prediction framework based on XGBoost decision tree for pultruded FRP composites exposed to water, humidity and alkaline solution. Compos. Struct. 2022, 284, 115184. [Google Scholar] [CrossRef]
  59. Furtado, C.; Pereira, L.F.; Tavares, R.P.; Salgado, M.; Otero, F.; Catalanotti, G.; Arteiro, A.; Bessa, M.A.; Camanho, P.P. A methodology to generate design allowables of composite laminates using machine learning. Int. J. Solids Struct. 2021, 233, 111095. [Google Scholar] [CrossRef]
Figure 1. Schematic diagram of the laminated plate structure [47].
Figure 2. Heatmap of correlation coefficients before augmentation of the tensile strength data.
Figure 3. Heatmap of correlation coefficients after augmentation of the tensile strength data.
Figure 4. Heatmap of correlation coefficients before augmentation of the bending strength data.
Figure 5. Heatmap of correlation coefficients after augmentation of the bending strength data.
Figure 6. The three data processing stages. (Green marks the data preparation phase, symbolizing the inception and accumulation of data; blue marks the model training and validation stage, emphasizing the stability and systematic nature of the process; light green marks the prediction, comparison, and analysis stage, showing the natural transition from data preparation to application analysis.)
Figure 7. Tensile strength dataset.
Figure 8. Bending strength dataset.
Figure 9. Comparison of tensile strength predictions by different machine learning models. (a) Linear regression. (b) Ridge regression. (c) Lasso regression. (d) K-nearest neighbors. (e) Polynomial regression. (f) Decision tree. (g) Random forest. (h) Gradient boosting. (i) XGBoost.
Figure 10. Comparison of bending strength predictions by different machine learning models. (a) Linear regression. (b) Ridge regression. (c) Lasso regression. (d) K-nearest neighbors. (e) Polynomial regression. (f) Decision tree. (g) Random forest. (h) Gradient boosting. (i) XGBoost.
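Figures 2–5 display Pearson correlation heatmaps of the layup features and strength values before and after data augmentation. A heatmap of this kind can be generated as sketched below; the miniature four-row table and all plotting choices are illustrative assumptions, not the data or settings behind the published figures.

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Miniature illustrative table (ply angles in degrees, strengths in MPa, cf. Table 2).
df = pd.DataFrame({
    "ply_1": [0, 0, 90, 45],
    "ply_2": [90, 0, 90, -45],
    "ply_3": [0, 0, 90, 45],
    "ply_4": [90, 0, 90, -45],
    "ply_5": [0, 0, 90, 45],
    "ply_6": [90, 0, 90, -45],
    "tensile_MPa": [578.96, 819.97, 307.10, 358.92],
    "bending_MPa": [892.76, 947.67, 831.92, 854.89],
})

corr = df.corr(method="pearson")  # pairwise Pearson correlation coefficients
sns.heatmap(corr, annot=True, fmt=".2f", cmap="coolwarm", vmin=-1, vmax=1)
plt.title("Correlation of ply angles and strengths")
plt.tight_layout()
plt.show()
```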
Table 1. Layer design of CFRP laminate in tensile and bending models for DP590/CFRP composite panels.

Serial Number | Laying Sequence
1 | 0°/90°/0°/90°/0°/90°
2 | 0°/0°/0°/0°/0°/0°
3 | 90°/90°/90°/90°/90°/90°
4 | 45°/−45°/45°/−45°/45°/−45°
5 | 0°/90°/90°/90°/90°/0°
6 | 0°/90°/0°/0°/90°/0°
7 | 0°/90°/45°/−45°/90°/0°
8 | 0°/0°/90°/90°/0°/0°
9 | 0°/0°/45°/−45°/0°/0°
10 | 0°/45°/90°/90°/−45°/0°
11 | 0°/45°/0°/0°/−45°/0°
12 | 0°/45°/−45°/45°/−45°/0°
13 | 90°/90°/0°/0°/90°/90°
14 | 90°/90°/45°/−45°/90°/90°
15 | 90°/0°/90°/90°/0°/90°
16 | 90°/0°/0°/0°/0°/90°
17 | 90°/0°/45°/−45°/0°/90°
18 | 90°/45°/−45°/45°/−45°/90°
19 | 90°/45°/90°/90°/−45°/90°
20 | 90°/45°/0°/0°/−45°/90°
21 | 45°/90°/90°/90°/90°/−45°
22 | 45°/90°/0°/0°/90°/−45°
23 | 45°/90°/−45°/45°/90°/−45°
24 | 45°/0°/90°/90°/0°/−45°
25 | 45°/0°/0°/0°/0°/−45°
26 | 45°/0°/−45°/45°/0°/−45°
27 | 45°/−45°/90°/90°/45°/−45°
28 | 45°/−45°/0°/0°/45°/−45°
Table 2. CFRP layup designs and the corresponding tensile and bending strengths of the DP590/CFRP composite laminates.

Serial Number | Laying Sequence | Tensile Strength (MPa) | Bending Strength (MPa)
1 | 0°/90°/0°/90°/0°/90° | 578.96 | 892.76
2 | 0°/0°/0°/0°/0°/0° | 819.97 | 947.67
3 | 90°/90°/90°/90°/90°/90° | 307.10 | 831.92
4 | 45°/−45°/45°/−45°/45°/−45° | 358.92 | 854.89
5 | 0°/90°/90°/90°/90°/0° | 609.36 | 916.06
6 | 0°/90°/0°/0°/90°/0° | 663.30 | 920.51
7 | 0°/90°/45°/−45°/90°/0° | 530.09 | 920.68
8 | 0°/0°/90°/90°/0°/0° | 662.96 | 941.62
9 | 0°/0°/45°/−45°/0°/0° | 686.26 | 947.16
10 | 0°/45°/90°/90°/−45°/0° | 524.35 | 918.80
11 | 0°/45°/0°/0°/−45°/0° | 684.70 | 923.89
12 | 0°/45°/−45°/45°/−45°/0° | 523.48 | 921.92
13 | 90°/90°/0°/0°/90°/90° | 510.61 | 841.18
14 | 90°/90°/45°/−45°/90°/90° | 338.09 | 840.21
15 | 90°/0°/90°/90°/0°/90° | 606.61 | 867.77
16 | 90°/0°/0°/0°/0°/90° | 662.26 | 872.63
17 | 90°/0°/45°/−45°/0°/90° | 529.74 | 872.53
18 | 90°/45°/−45°/45°/−45°/90° | 376.00 | 846.48
19 | 90°/45°/90°/90°/−45°/90° | 344.31 | 839.51
20 | 90°/45°/0°/0°/−45°/90° | 526.78 | 847.02
21 | 45°/90°/90°/90°/90°/−45° | 335.86 | 839.65
22 | 45°/90°/0°/0°/90°/−45° | 520.73 | 845.33
23 | 45°/90°/−45°/45°/90°/−45° | 378.16 | 848.72
24 | 45°/0°/90°/90°/0°/−45° | 523.23 | 870.43
25 | 45°/0°/0°/0°/0°/−45° | 680.35 | 874.89
26 | 45°/0°/−45°/45°/0°/−45° | 523.83 | 875.69
27 | 45°/−45°/90°/90°/45°/−45° | 363.65 | 849.09
28 | 45°/−45°/0°/0°/45°/−45° | 522.75 | 853.98
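As a reproducibility aid, the sketch below shows one way the layup sequences and strengths in Table 2 could be arranged into a feature matrix and target vectors for the regression models compared in Tables 3 and 4. The encoding (one numeric column per ply angle) and the column names (`ply_1`…`ply_6`, `tensile_MPa`, `bending_MPa`) are illustrative assumptions rather than the exact pipeline used in this work.

```python
import pandas as pd

# Illustrative subset of Table 2; the full table contains 28 layups.
# Assumption: each ply angle (in degrees) is used directly as a numeric feature.
records = [
    ([0, 90, 0, 90, 0, 90],       578.96, 892.76),
    ([0, 0, 0, 0, 0, 0],          819.97, 947.67),
    ([90, 90, 90, 90, 90, 90],    307.10, 831.92),
    ([45, -45, 45, -45, 45, -45], 358.92, 854.89),
]

rows = []
for seq, tensile, bending in records:
    row = {f"ply_{i + 1}": angle for i, angle in enumerate(seq)}
    row["tensile_MPa"] = tensile
    row["bending_MPa"] = bending
    rows.append(row)

df = pd.DataFrame(rows)
X = df[[f"ply_{i}" for i in range(1, 7)]]  # features: six ply angles
y_tensile = df["tensile_MPa"]              # target for the Table 3 models
y_bending = df["bending_MPa"]              # target for the Table 4 models
print(df)
```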
Table 3. Comparison of prediction performance and hyperparameters of different machine learning models for the CFRP/DP590 tensile strength.

Model | MAE | MSE | R² | MAPE | Hyperparameters
XGBoost | 6.080 | 56.15 | 0.996 | 1.08 | {'colsample_bytree': 1, 'learning_rate': 0.5, 'max_depth': 5, 'min_child_weight': 1, 'n_estimators': 45, 'subsample': 1}
Gradient boosting | 6.067 | 59.86 | 0.996 | 1.08 | {'learning_rate': 0.19, 'max_depth': 5, 'min_samples_leaf': 1, 'min_samples_split': 7, 'n_estimators': 50}
Decision tree | 6.469 | 66.20 | 0.994 | 1.13 | Default
K-nearest neighbors | 6.81 | 73.01 | 0.995 | 1.19 | {'n_neighbors': 13, 'weights': 'distance'}
Random forest | 7.35 | 107.53 | 0.992 | 1.28 | {'max_depth': 9, 'min_samples_leaf': 4, 'min_samples_split': 2, 'n_estimators': 91}
Polynomial regression | 8.82 | 152.54 | 0.989 | 1.54 | {'poly__degree': 3}
Linear regression | 18.91 | 682.98 | 0.950 | 3.65 | Default
Lasso regression | 18.77 | 685.25 | 0.950 | 3.72 | {'alpha': 0.41}
Ridge regression | 19.19 | 689.05 | 0.950 | 3.74 | {'alpha': 0.32}
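The hyperparameters reported in Table 3 are the kind of values returned by a grid search over an XGBoost regressor. The following is a minimal sketch of such a search under a scikit-learn-style workflow; the grid ranges, split ratio, and random seed are illustrative assumptions, and `X`/`y_tensile` refer to the feature sketch given after Table 2.

```python
from sklearn.model_selection import GridSearchCV, train_test_split
from xgboost import XGBRegressor

# Assumes X and y_tensile hold the full (augmented) dataset, e.g. built as in the
# feature sketch after Table 2; the split ratio and random seed are illustrative.
X_train, X_test, y_train, y_test = train_test_split(
    X, y_tensile, test_size=0.2, random_state=0
)

# Search grid centred on the values reported in Table 3; the exact ranges used in
# the study are not given here, so these are assumptions.
param_grid = {
    "n_estimators": [25, 45, 65],
    "max_depth": [3, 5, 7],
    "learning_rate": [0.1, 0.3, 0.5],
    "min_child_weight": [1, 3],
    "subsample": [0.8, 1.0],
    "colsample_bytree": [0.8, 1.0],
}

search = GridSearchCV(
    XGBRegressor(objective="reg:squarederror"),
    param_grid,
    scoring="neg_mean_absolute_error",  # model selection on MAE, as in Table 3
    cv=5,
)
search.fit(X_train, y_train)
print(search.best_params_)
print(search.best_estimator_.score(X_test, y_test))  # R^2 on the held-out split
```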
Table 4. Comparison of prediction performance and hyperparameters of different machine learning models for the bending strength of CFRP/DP590.

Model | MAE | MSE | R² | MAPE | Hyperparameters
XGBoost | 9.66 | 136.35 | 0.983 | 1.156 | {'colsample_bytree': 0.6, 'learning_rate': 0.66, 'max_depth': 5, 'min_child_weight': 1, 'n_estimators': 97, 'subsample': 1}
Decision tree | 9.91 | 145.46 | 0.985 | 1.191 | Default
K-nearest neighbors | 11.61 | 341.60 | 0.957 | 1.379 | {'n_neighbors': 11, 'weights': 'distance'}
Gradient boosting | 11.67 | 399.24 | 0.949 | 1.392 | {'learning_rate': 0.33, 'max_depth': 6, 'min_samples_leaf': 1, 'min_samples_split': 4, 'n_estimators': 66}
Random forest | 14.12 | 442.95 | 0.944 | 1.743 | {'max_depth': 19, 'min_samples_leaf': 1, 'min_samples_split': 2, 'n_estimators': 33}
Lasso regression | 40.72 | 4701.84 | 0.403 | 5.630 | {'alpha': 0.18}
Ridge regression | 40.80 | 4700.70 | 0.403 | 5.638 | {'alpha': 5.25}
Polynomial regression | 43.35 | 4614.39 | 0.414 | 5.916 | {'poly__degree': 2}
Linear regression | 43.53 | 4799.88 | 0.390 | 5.938 | Default
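The error metrics in Tables 3 and 4 (MAE, MSE, R², MAPE) can be reproduced from predicted and measured strengths with standard scikit-learn utilities. The sketch below assumes `model` is any already fitted regressor (for example, the hypothetical `search.best_estimator_` from the tuning sketch above) and `X_test`/`y_test` are the held-out data.

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score


def report_metrics(model, X_test, y_test):
    """Return MAE, MSE, R2, and MAPE (in percent) for an already fitted regressor."""
    y_true = np.asarray(y_test, dtype=float)
    y_pred = model.predict(X_test)
    return {
        "MAE": mean_absolute_error(y_true, y_pred),
        "MSE": mean_squared_error(y_true, y_pred),
        "R2": r2_score(y_true, y_pred),
        "MAPE_%": float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100),
    }


# Example (hypothetical): report_metrics(search.best_estimator_, X_test, y_test)
```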