Article

A Deep Learning-Based Integration Method for Hybrid Seismic Analysis of Building Structures: Numerical Validation

Department of Architecture and Civil Engineering, Toyohashi University of Technology, Toyohashi 441-8580, Japan
* Author to whom correspondence should be addressed.
Submission received: 26 January 2022 / Revised: 27 February 2022 / Accepted: 22 March 2022 / Published: 23 March 2022
(This article belongs to the Special Issue Artificial Neural Networks Applied in Civil Engineering)

Abstract

A hybrid seismic analysis that computes the full nonlinear response of building structures is proposed and validated in this paper. Recurrent neural networks are trained to predict the nonlinear hysteretic response of isolation devices with deformation- and velocity-dependent behavior and are then implemented in an explicit time integration method for time history analysis. A comprehensive framework is proposed to develop and test deep learning models, considering the data framing, the network architecture, and the learning behavior. Hybrid seismic analyses of three base-isolated building models subjected to four ground motions with different properties were performed to check their efficiency. The small relative errors of the computed results with respect to those of the conventional analysis validate the accuracy of the proposed analysis. Its computation time depends mainly on the ground motion duration and is considered negligible. The development of the machine learning model is more time-consuming but nonrepetitive, since the model can be saved and reused to analyze any new structure containing the same target components. The proposed hybrid seismic analysis overcomes the shortcoming of usual applications of machine learning in structural response prediction problems, which are limited to specific response quantities of the same structures used in the training process. By taking advantage of both mechanics-based and data-driven methods, the results show that hybrid analysis is an efficient tool for building-response simulation.

1. Introduction

The conventional method to simulate the time history response of a building model subjected to dynamic loadings, such as earthquake ground motion and wind load, is based on the principles of mechanics. Newton's laws of motion are the essence behind the establishment of the equilibrium equations. Owing to the complexity of the original formulation and the development of numerical computation resources in the second half of the 20th century, the resulting second-order differential equations were reformulated and numerically solved through discretization in space and time using the Finite Element Method (FEM) and time integration algorithms, respectively. Many analytical models simulating the nonlinear behavior of common structural members, such as reinforced concrete (RC) structural components, have already been formulated and validated against experimental results [1]. The combination of all these modules makes it possible to compute a building response by performing a nonlinear time history analysis (NTHA). Many assumptions and/or simplifications are involved, whether in the original physics laws, the FEM, or the hysteresis models of structural components. Time integration methods also generate numerical errors, and their stability may be compromised [2,3]. Despite these inevitable limitations, the NTHA remains the most accurate and reliable mechanics-based method to evaluate the seismic performance of a structural model. This know-how must be the reference for response prediction problems (RPPs) of building structures.
In the last decade, data-driven methods have gained momentum in several applications in earthquake engineering. These applications can be categorized into three sub-fields: RPPs [4], damage evaluation (DE) [5], and system identification (SI) [6]. Xie et al. [7] conducted an exhaustive state-of-the-art review of the implementation of machine learning (ML) in both earthquake engineering and seismology. For RPPs, machine learning models (MLMs) are generated using regression ML algorithms such as support vector regression, random forest, response surface models, and artificial neural networks (ANNs). The development of these surrogate models requires data. The input data may be physical building information (number of stories, structural materials, lateral force resisting system, etc.), building modal properties (natural periods, mode shapes, etc.), ground motion data (time history, frequency content, etc.), and/or other variables. The output data are the responses of interest (acceleration, shear, drift, etc.) to be predicted at specific degrees of freedom of the studied structure, depending on the problem framing. Hybrid MLMs have been proposed by stacking different ML algorithms [8], by combining different types of inputs such as structural analysis response quantities and simply measured or evaluated building properties [9], and/or by incorporating physics-based principles into the design of the ML algorithm [10]. During the training phase, an ML algorithm forms a mapping between input and output data that can predict the response of interest for new unseen data of the same nature as the original training data. Hence, these black-box models can predict only limited response quantities of specific structures, and their accuracy and reliability are highly dependent on the nature and amount of data used during training. Therefore, some skepticism regarding their application is still present.
For RPPs, deep learning (DL) models such as multi-layer perceptrons (MLPs), convolutional neural networks (CNNs), and recurrent neural networks (RNNs) are becoming more attractive, since there is no need to manually identify features within the input data and more sophisticated DL algorithms are constantly being developed [11]. Even though RNNs were initially designed for time series forecasting problems, only a few of their applications in earthquake engineering have been reported. Zhang et al. [12] predicted the nonlinear inter-story drifts of three building structures (a five degrees-of-freedom system, a six-story instrumented RC building, and a three-story steel building model) using Long Short-Term Memory (LSTM) recurrent networks [13]. In general, very high prediction accuracy was observed for close-to-linear structural responses, and it decreased with increasing nonlinearity. The same research team later improved the prediction accuracy and robustness of the previously developed LSTM models by encoding some laws of physics into the network architecture and embedding them in the overall loss function [14]. Eshkevari et al. [10] developed a physics-based recurrent cell (DynNet) that predicts the full state space (acceleration, velocity, displacement, and internal force) of multi-degrees-of-freedom (MDOF) systems, given a ground motion. Nonlinear responses of two four-degree-of-freedom systems with different nonlinearity types (elastoplastic and nonlinear elastic stiffnesses) were predicted quite successfully, leading to the conclusion that the performance of DynNet in capturing the nonlinear response is promising. All these implementations of RNNs for RPPs highlighted some drop of accuracy in predicting large nonlinearities, and their respective models remain limited to predicting a few response quantities of the studied structures.
In order to address many of the aforementioned limitations, the current study combines the advantages of both the NTHA (a mechanics-based method) and MLMs (data-driven models) to propose a hybrid seismic analysis for building structures. The term hybrid refers to the time integration algorithm of the NTHA, not to the input data nor to the model architecture. Some existing or newly developed materials, structural components, or devices may have an excessively complicated analytical model and/or a still incomplete understanding of their true behavior among the research and practitioner communities. Instead of adopting simplified analytical models based on many assumptions, which introduce a modeling error, an RNN trained on available experimental data can capture the true behavior of the target component or group of components and then be used to make predictions on new unseen data at each time step of the hybrid seismic analysis proposed in this study. Therefore, this paper presents the novelty of implementing DL models into the numerical time integration algorithm, targeting only the structural components of interest. The resulting hybrid seismic analysis is not limited to a specific building structure, since the MLM can be saved and reused in any new building model containing the same target components; it computes the full dynamic response; and it aims to further increase the accuracy of conventional NTHAs. Since this paper represents the first attempt at such an application of DL in earthquake engineering, artificial data are generated and used to check the validity of the method.
After a presentation of the principle of the proposed hybrid seismic analysis, the properties of the three isolated-building models and the four ground motions used in this study are described. Then, a comprehensive framework for generating synthetic data, designing, and testing RNN models is proposed and applied to develop three MLMs simulating the isolation layer of respective buildings. Eventually, these MLMs are implemented to perform twelve hybrid analyses whose results are compared with those of conventional NTHAs. The efficiency of the proposed hybrid analysis is checked in terms of accuracy and necessary computation time. Further related studies are proposed in the conclusion section.

2. Principle of the Proposed Hybrid Seismic Analysis

The hybrid seismic analysis proposed in this paper is inspired by the hybrid simulation developed by Nakashima et al. [15,16] for real-time pseudo-dynamic testing. The target structure is divided into a numerical model and a physically tested model; the latter may be a structural component or a group of components acting as a sub-structure. It is an efficient and more economical alternative to testing an entire structure. Controlling and limiting the numerical error requires the choice of an appropriate numerical integration method [17]. Nakashima et al. [18] proposed a mixed implicit-explicit direct integration method that incorporates the physically measured restoring force into the numerically solved second-order equation of motion. It is a predictor-corrector displacement method with a single correction, based on the operator splitting technique proposed by Hughes et al. [19] and the implicit Newmark-β method [20]. This method (referred to as the OpS method in the remainder of this paper) was proven to be unconditionally stable for pseudo-dynamic testing of structures with softening-type nonlinearities.
Instead of measuring the restoring force developed in a tested physical model, the hybrid seismic analysis proposed in this paper evaluates it using an MLM previously trained to simulate the hysteretic behavior of the target component or group of components (here, the isolation layer). The commonly used OpS method is selected as a suitable numerical integration algorithm for the NTHA. Figure 1 summarizes the principle of the proposed hybrid seismic analysis. Given a building structure and an input ground motion, the predictor displacement vector is first evaluated at the current time step following the algorithm of the OpS method. The restoring forces of structural components with well-understood hysteresis behavior, such as reinforced concrete or steel columns and beams, are evaluated using available validated analytical models. Some existing or newly developed materials, components, or devices may have an excessively complicated analytical model and/or a still incomplete understanding of their true behavior among the research and practitioner communities. Instead of adopting simplified analytical models based on many assumptions, an MLM previously trained on available experimental data can capture the true behavior of the target component or group of components and then be used to make predictions within the time integration algorithm. It predicts the component restoring force at the next time step, based on the displacement time history and the current predictor displacement. Then, the full seismic response at the next time step is computed as in the OpS method. The same process is repeated until the end of the analysis. A simplified sketch of this integration loop is given below.
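To make this loop concrete, the following is a minimal single-degree-of-freedom sketch of the operator-splitting integration with an ML-supplied restoring force. The SDOF simplification, the at-rest initial conditions, and the callable name predict_restoring_force are illustrative assumptions; the paper uses the full MDOF formulation of the OpS method, whose exact corrector terms may differ.

```python
import numpy as np

def hybrid_ops_sdof(m, c, k_init, ag, dt, predict_restoring_force,
                    beta=0.25, gamma=0.5):
    """Illustrative SDOF sketch of the operator-splitting (OpS) scheme with an
    ML-predicted restoring force (not the paper's full MDOF implementation).
    The system is assumed to start at rest."""
    n = len(ag)
    d, v, a = np.zeros(n), np.zeros(n), np.zeros(n)
    for i in range(n - 1):
        # Explicit predictor displacement
        d_pred = d[i] + dt * v[i] + dt ** 2 * (0.5 - beta) * a[i]
        # Restoring force at the predictor displacement, supplied here by the
        # trained RNN instead of an analytical hysteresis model
        r_pred = predict_restoring_force(d_pred)
        # Implicit correction using the initial stiffness k_init
        m_eff = m + gamma * dt * c + beta * dt ** 2 * k_init
        f_ext = -m * ag[i + 1]
        a[i + 1] = (f_ext - r_pred - c * (v[i] + (1 - gamma) * dt * a[i])) / m_eff
        v[i + 1] = v[i] + dt * ((1 - gamma) * a[i] + gamma * a[i + 1])
        d[i + 1] = d_pred + beta * dt ** 2 * a[i + 1]
    return d, v, a
```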
The concept of the proposed hybrid seismic analysis is general to any structure and relies on developing the MLMs beforehand. The development of an MLM from experimental data is nonrepetitive, since the model can be saved and reused in any new structure containing the same target components. For instance, the same isolation device can be used in different buildings but with different configurations and numbers. New experimental data of the same nature may nevertheless be used to update the MLM.
In addition to the numerical error of the selected integration method, making predictions by an MLM introduces an inevitable uncertainty. It is due to the epistemic uncertainty of the experimental data used for the training process and the aleatory uncertainty of ANNs explained further in this paper. Since artificial data are generated in this study from NTHAs, only the intrinsic uncertainty of MLMs needs to be dealt with.

3. Overview of Studied Numerical Cases

3.1. Base-Isolated Building Structures

Three base-isolated buildings of 5, 10, and 15 stories are considered in this study. A Lumped Mass Model (LMM) is adopted for the superstructure, which behaves linearly. The isolation layer is formed by Natural Rubber Bearing (NRB), Lead Rubber Bearing (LRB), and Oil Damper (Oil), as shown in Figure 2. The combination of both NRB and LRB devices is assumed to perform a bilinear hysteresis behavior. The force developed in the Oil Damper device is assumed to depend only on the relative velocity of its edges. The three buildings were designed according to Japanese engineering practice [21] and modeled using the software STERA_3D (Structural Earthquake Response Analysis 3D) [22]. Table 1 and Table 2 provide all the necessary properties to reproduce the same models. More design information and even the STERA_3D models are available [23].
Synthetic data (rather than experimental data) were generated to develop the three MLMs (MLM1, MLM2, and MLM3) simulating the behavior of the respective isolation layers (NRB + LRB + Oil Damper) of the studied buildings (B01, B02, and B03), as shown in Table 2. Since the MLM aims to predict the true behavior of target components, property modification factors of design codes may be applied to its predicted values, and the same surrogate model can be reused within different code requirements.
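For reference, the constitutive behavior summarized above, a bilinear LRB + NRB spring in parallel with a bilinear (relief-type) velocity-dependent oil damper, can be expressed as in the following sketch. The parallel elastic-plastic decomposition of the bilinear spring and the state handling are illustrative assumptions; the exact hysteresis rules implemented in STERA_3D may differ.

```python
import numpy as np

def bilinear_shear(d, state, k1, k2, fy):
    """Simplified bilinear hysteresis (LRB + NRB): a linear spring k2 in
    parallel with an elastic-perfectly-plastic spring of stiffness k1 - k2.
    `state` carries the plastic deformation between calls."""
    k_epp = k1 - k2
    fy_epp = fy * (1.0 - k2 / k1)              # yield force of the e.p.p. part
    f_trial = k_epp * (d - state["d_plastic"])
    if abs(f_trial) > fy_epp:                  # yielding: update plastic deformation
        sign = np.sign(f_trial)
        state["d_plastic"] += (abs(f_trial) - fy_epp) / k_epp * sign
        f_epp = fy_epp * sign
    else:
        f_epp = f_trial
    return k2 * d + f_epp

def oil_damper_force(v, c1, c2, vr):
    """Bilinear dashpot: slope c1 up to the relief velocity vr, slope c2 beyond."""
    if abs(v) <= vr:
        return c1 * v
    return np.sign(v) * (c1 * vr + c2 * (abs(v) - vr))
```

With the parameter values of Table 2, the total isolation-layer shear at a given instant would then be the sum of bilinear_shear(d, state, k1, k2, fy) and oil_damper_force(v, c1, c2, vr).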

3.2. Earthquake Ground Motions

As shown in Table 3, GMs 1 and 2 are used to generate the training data for developing the MLMs. They are both derived from the same strong-motion record of the Hyogo-ken Nanbu earthquake at the Japan Meteorological Agency (JMA) station in Kobe city, but with different durations for reasons explained later in this paper. GMs 3, 4, and 5 are used to test the generalization capability of the developed MLMs. Eventually, the structural responses of the three building models subjected to GMs 2, 3, 4, and 5 are evaluated by the proposed hybrid seismic analysis and compared to those of the conventional analysis, taken as a reference. All GMs were intentionally derived from well-known earthquake records with different amplitudes, frequency contents, and durations. In the acceleration response spectra graph (Figure 3), the fundamental horizontal periods of the buildings' superstructures (Tn,1, Tn,2, and Tn,3) and the effective periods of their respective isolation layers (Teff,1, Teff,2, and Teff,3) are marked with vertical dashed lines, covering the common period range of base-isolated structures.

4. Development of MLMs for Hybrid Seismic Analysis

4.1. Framework for Developing and Testing the MLMs

As explained previously, the MLM is developed to predict the shear time history given the displacement time history of the target isolation layer. The MLM input and output are time sequences whose lengths may be very long, considering the very small time step commonly adopted for accurate and stable NTHAs. The sequence length is variable, as is the duration of the GMs. Hysteresis behavior may depend on many parameters and phenomena; thus, patterns in the data may be difficult to describe. Considering these conditions, a well-suited machine learning algorithm for this study is the RNN, which is designed to deal with sequence data and to infer patterns in data automatically. This paper proposes an innovative application of RNNs in civil engineering rather than descriptions of or advances in their algorithms. For a brief and basic presentation of common recurrent units/cells, the reader may refer to the provided reference [24]. To develop and test RNN models, Figure 4 summarizes the five stages proposed in this study: (1) generating artificial data, (2) preprocessing the data, (3) tuning the model architecture, (4) tuning the learning behavior, and eventually (5) testing the developed model on new unseen data corresponding to GMs 3, 4, and 5.
Stage 1: Given the displacement time history of the target isolation layer (NRB + LRB + Oil Damper), the MLM should predict the corresponding shear force time history. Samples of displacement/shear time histories are generated by performing NTHAs of the equivalent 1DOF system having the deformation- and velocity-dependent hysteresis behavior presented in Table 2. Natural periods and inherent viscous damping are chosen randomly with the goal of reaching the displacement limit of the target isolation layer (here, fixed to 25 cm). Data derived from GMs 1 and 2 are used to develop the MLMs (Stages 3 and 4), and those corresponding to GMs 3, 4, and 5 are used for testing (Stage 5). The five-second duration of GM 1 is chosen solely to reduce the computation time of Stage 3. Even though a large artificial dataset could be generated, only 50 samples are produced intentionally, to simulate the often-faced case of a small experimental dataset. A sketch of this sampling procedure is given below.
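The loop below sketches this sampling procedure under stated assumptions: the period and damping ranges, the seed, the choice of the mass from the elastic stiffness, and the run_ntha_1dof callable (any conventional 1DOF NTHA, for example a Newmark-β integration with the isolation-layer force model of Section 3.1) are illustrative, not values or routines taken from the paper.

```python
import numpy as np

def generate_stage1_samples(ag, dt, k1, run_ntha_1dof, n_samples=50,
                            period_range=(2.0, 6.0), damping_range=(0.0, 0.05),
                            seed=0):
    """Sketch of Stage 1: repeated NTHAs of the equivalent 1DOF system with
    randomly drawn natural period and inherent viscous damping, collecting
    displacement/shear time-history pairs for the target isolation layer.
    `run_ntha_1dof(m, c, ag, dt)` is any conventional 1DOF NTHA routine."""
    rng = np.random.default_rng(seed)
    samples = []
    for _ in range(n_samples):
        tn = rng.uniform(*period_range)        # natural period (s), assumed range
        h = rng.uniform(*damping_range)        # inherent viscous damping ratio, assumed range
        m = k1 * (tn / (2.0 * np.pi)) ** 2     # mass giving an elastic period of tn
        c = 2.0 * h * np.sqrt(k1 * m)          # inherent viscous damping coefficient
        disp, shear = run_ntha_1dof(m, c, ag, dt)
        samples.append((disp, shear))
    return samples
```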
Stage 2: Training ANNs requires as much clean data as possible. The artificial samples generated in this study are supposedly clean. A data augmentation technique is applied to compensate for the intentionally small dataset: one-second splitting and zero-padding of the sequences through time were enough to feed Stage 3 with 250 samples and Stage 4 with 750 samples. These training datasets are scaled by their maximum absolute value to lie within the range [−1, 1], which avoids exploding or vanishing gradients during the backpropagation process [25]. Then, the datasets are shaped into 3-dimensional arrays: samples, time steps, features. The test data are scaled and shaped similarly, but no size augmentation is needed. A preprocessing sketch is given below.
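A minimal preprocessing sketch, assuming lists or NumPy arrays of equal-rate time histories as input (function and variable names are illustrative):

```python
import numpy as np

def preprocess_stage2(disp_samples, shear_samples, dt=0.004, segment_sec=1.0):
    """Sketch of Stage 2: split each sequence into one-second segments,
    zero-pad the last segment, scale to [-1, 1] by the maximum absolute value,
    and reshape to the 3D form (samples, time steps, features)."""
    seg_len = int(round(segment_sec / dt))
    X, y = [], []
    for disp, shear in zip(disp_samples, shear_samples):
        for start in range(0, len(disp), seg_len):
            d_seg = np.asarray(disp[start:start + seg_len], dtype=float)
            s_seg = np.asarray(shear[start:start + seg_len], dtype=float)
            pad = seg_len - len(d_seg)                     # zero-padding through time
            X.append(np.pad(d_seg, (0, pad)))
            y.append(np.pad(s_seg, (0, pad)))
    X, y = np.array(X), np.array(y)
    x_scale, y_scale = np.abs(X).max(), np.abs(y).max()    # kept to unscale predictions later
    X, y = X / x_scale, y / y_scale                        # now within [-1, 1]
    # 3D shape expected by recurrent layers: (samples, time steps, features)
    return X[..., np.newaxis], y[..., np.newaxis], x_scale, y_scale
```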
Stage 3: ANNs are stochastic models: the same architecture trained with the same data and the same hyperparameters leads to different predictions. The randomness of both the network weight initialization and the splitting into training and validation data generates an aleatory uncertainty. All candidate architectures are therefore trained many times, and their final performances are summarized in a box-and-whisker plot to select the most appropriate one based on accuracy and precision. Ten acts of training (runs) per case are performed for the first-level screening to choose between the two most common types of RNN cells: the Long Short-Term Memory (LSTM) [13] and the Gated Recurrent Unit (GRU) [26]. Then, 20 acts of training per case are performed for the second-level screening to select the network architecture. In order to further test the reliability of the final network, batches of 10, 20, 30, 40, and 50 independent acts of training were performed to analyze the scattering of its final performances. The range of candidate architectures and the number of training repetitions are limited only by the computation resources; this is why the sequences corresponding to the 5 s duration (derived from GM1) are trained up to only 500 epochs at this stage, which provides a good balance between model reliability and computation effort. Model performance is evaluated by computing the Mean Square Error (MSE) between the normalized reference and predicted values, given as
$$\mathrm{MSE} = \frac{\displaystyle\sum_{i \in \mathrm{data}} \left(\text{Reference value}_i - \text{Predicted value}_i\right)^2}{N_{\mathrm{data}}}$$
Stage 4: Given the same data and the same network architecture, hyperparameters such as the loss function [27] (the performance assessment metric), the optimizer [28] (the algorithm used to update the network weights), the batch size (the amount of data loaded for each weight update), and the number of training epochs may influence the learning behavior of the model. A sensitivity analysis with respect to the hyperparameters is performed, followed by a detailed diagnosis of the learning behavior through the training epochs. The objective of this stage is to select appropriate hyperparameters among widely used ones and to train the final MLMs on full-length sequences derived from GM2. The same performance assessment metric as in Stage 3 (MSE) is used for this purpose.
Stage 5: The developed MLMs are tested on new unseen datasets derived from GMs 3, 4, and 5. MSEs are computed for the three test sub-datasets containing 50 samples each. To visualize each model's performance, the shear time history predictions of a few random test samples are unscaled and then combined with their respective displacement time histories to draw the corresponding nonlinear cyclic hysteresis loops. In order to keep the same units as the original waves, the Root Mean Square Error (RMSE) of the unscaled sequences (ShearOpS versus ShearPredicted) is evaluated as follows:
$$\text{Shear RMSE (kN)} = \sqrt{\frac{\displaystyle\sum_{i \in \mathrm{time}} \left(\mathrm{shear}_{\mathrm{OpS},i} - \mathrm{shear}_{\mathrm{Predicted},i}\right)^2}{N_{\text{time steps}}}}$$
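A minimal sketch of this evaluation step, assuming a trained Keras model and the shear scaling factor kept from preprocessing (names are illustrative):

```python
import numpy as np

def shear_rmse_kn(model, disp_scaled, shear_ref_kn, y_scale):
    """Predict the scaled shear sequence for one test sample, unscale it back
    to kN, and evaluate the RMSE against the reference (OpS) sequence."""
    seq = disp_scaled[np.newaxis, :, np.newaxis]               # (1, time steps, 1)
    shear_pred_kn = model.predict(seq, verbose=0)[0, :, 0] * y_scale
    return np.sqrt(np.mean((shear_ref_kn - shear_pred_kn) ** 2))
```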
Python scripts are written to perform all NTHAs necessary to generate samples of Deformation/Shear time histories, to process these data for ML, and to train/test the recurrent models. The framework presented herein is not limited to this study but may be used to design RNN models for similar sequence-to-sequence prediction problems.

4.2. First Screening: Appropriate Type of the Recurrent Unit (Stage 3)

The architecture of a pure RNN is described mainly by the depth (number of recurrent layers), the width (number of recurrent cells in each layer), and the type of the recurrent cell (or unit). It is very common to add a fully connected Dense layer as the output layer, as presented in the original 1997 LSTM paper [13]. More elaborate configurations may be considered depending on the problem treated [29]. Since the objective is to predict the shear time history from the displacement time history, a sequence-to-sequence model is adopted. Once trained and tested, the MLM will be implemented to perform hybrid seismic analyses by predicting the shear force at each integration time step; thus, its architecture should be as small as possible to optimize the computation time. Deep RNNs do not systematically increase the performance of the network [30], as opposed to MLPs and CNNs, as long as overfitting-type problems are treated. When unfolded through time, a shallow RNN is actually a deep network. Therefore, a stacked RNN of only three layers is adopted herein: a first recurrent layer as the input layer, a second recurrent layer as a hidden layer, and a one-unit Dense layer as the output layer. With the network depth fixed, random searches are performed to optimize the type of the recurrent cell (first screening) and the network width (second screening).
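As an illustration of this three-layer configuration, a Keras sketch is given below; the factory name, the variable-length input, and the compile settings are assumptions. With TensorFlow's default GRU implementation, two 60-unit recurrent layers plus the one-unit Dense output give 33,361 trainable parameters, which matches the count reported in Section 4.5.

```python
import tensorflow as tf

def build_seq2seq_rnn(units=60, cell="gru", learning_rate=0.001):
    """Sketch of the stacked sequence-to-sequence network: two recurrent
    layers (input and hidden) and a one-unit Dense output layer applied at
    every time step."""
    RNNLayer = tf.keras.layers.GRU if cell == "gru" else tf.keras.layers.LSTM
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(None, 1)),       # variable-length displacement sequence
        RNNLayer(units, return_sequences=True),       # recurrent input layer
        RNNLayer(units, return_sequences=True),       # recurrent hidden layer
        tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(1)),  # shear at each time step
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=learning_rate),
                  loss="mse")
    return model
```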
The LSTM [13] and GRU [26] are the most common and widely used recurrent cells (or units), since they outperform traditional RNN cells [31]. They are gated recurrent units capable of learning long-term temporal dependencies, which is important in this study to predict the nonlinear shear force from the displacement time history. The GRU is a more recent variant of the LSTM with fewer trainable parameters. However, the performance of one relative to the other is still an open research question [32]. Both LSTM and GRU cells are used and compared in this study. Box-and-whisker plots are used to describe the statistical distribution of the models' final performances. For each RNN architecture, the scores (MSE on validation data after 500 training epochs) are marked with dots next to the boxplot, their mean value with a cross, and the median value with a horizontal line within the interquartile range (IQR). Figure 5 shows the performance of the LSTM (red) and GRU (blue) models, all with the same depth but different widths. The horizontal axis represents the number of RNN cells in each of the two recurrent layers; a 5-unit step is adopted from 10 up to 100 units. For each architecture, 10 independent runs are performed, leading to 10 different evaluations, which attest to the aleatory uncertainty of the training process. The MSEs of both the LSTM and GRU models decrease uniformly up to a width of 35 cells. From 35 to 100 cells, the performance of the GRU models is more stable than that of the LSTM models, which show more outliers and larger IQRs. Therefore, the GRU cell slightly outperforms the LSTM cell for the problem treated in this study. The width range between 60 and 80 GRU cells is selected for the second screening, since there is no major gain in accuracy or precision for larger widths, and minimizing the network architecture is a crucial criterion for an efficient implementation of MLMs in hybrid seismic analyses.
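The repeated-training protocol behind these box plots can be sketched as follows, reusing the build_seq2seq_rnn factory from the previous sketch; the validation fraction is an assumption, as the paper does not state the split ratio.

```python
import numpy as np

def screening_scores(X, y, widths, cell, n_runs=10, epochs=500, batch_size=32,
                     val_fraction=0.2):
    """Sketch of the screening procedure: train each candidate width several
    times with random weight initialization and a random train/validation
    split, and collect the final validation MSEs for box-and-whisker plots."""
    rng = np.random.default_rng()
    scores = {}
    for units in widths:
        scores[units] = []
        for _ in range(n_runs):
            idx = rng.permutation(len(X))          # new random split for each run
            n_val = int(val_fraction * len(X))
            val_idx, tr_idx = idx[:n_val], idx[n_val:]
            model = build_seq2seq_rnn(units=units, cell=cell)
            hist = model.fit(X[tr_idx], y[tr_idx],
                             validation_data=(X[val_idx], y[val_idx]),
                             epochs=epochs, batch_size=batch_size, verbose=0)
            scores[units].append(hist.history["val_loss"][-1])  # final validation MSE
    return scores

# Example usage for the first screening over widths 10, 15, ..., 100:
# lstm_scores = screening_scores(X, y, range(10, 101, 5), cell="lstm")
# gru_scores = screening_scores(X, y, range(10, 101, 5), cell="gru")
```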

4.3. Second Screening and Reliability of the Selected GRU Model (Stage 3)

GRU architectures within the previously selected width range are trained and evaluated 20 independent times each, as shown in Figure 6A. The global performance is still stable, as all MSE mean and median values are less than 3 × 10−4, and the data are less scattered (tighter IQRs). Since all these models perform similarly (in terms of mean value, median value, and IQR), the one with the fewest GRU units is selected to minimize the computation time when implemented for hybrid seismic analysis. The reliability of the selected 60-GRU architecture is tested further to ensure its robustness, as shown in Figure 6B. All MSE mean and median values are again less than 3 × 10−4, although a few outliers are present in the data. It is worth mentioning that the 10- and 20-run cases of the selected 60-GRU architecture are independent of those performed in the first and second screenings; the difference in results is due to the randomness of the training process.

4.4. Sensitivity Analysis to Hyperparameters (Stage 4)

The optimizer, the number of epochs, and the batch size are commonly recognized as the hyperparameters with the greatest influence on the performance (accuracy and computation time) of neural networks [29]. More specifically, the learning rate of the optimizer (here, the Adam optimizer [33]), which scales the magnitude of the network weight updates, is a key hyperparameter [34]. The loss may be a highly non-convex function [35]. Very small learning rates delay the network learning and/or may trap the loss function in a local minimum, whereas very high learning rates make the learning very noisy, with a risk of divergence. The number of epochs combined with the batch size defines how often the weights are updated, which influences both the learning accuracy and the computation time. For a fixed number of 500 epochs, the sensitivity of the selected RNN to commonly used values of the learning rate and batch size is investigated. As shown in Figure 7A, high learning rates (0.005~0.01) combined with frequent weight updates (batch sizes of 16 and 32) lead to a divergence of the loss function, and a small learning rate (0.0001) delays the learning, especially for infrequent weight updates (batch sizes of 64 and 128). A good balance is obtained for intermediate (diagonal) values. The learning rate has almost no influence on the training duration, which is inversely proportional to the batch size, as shown in Figure 7B. Since the RNN architecture was minimized in Stage 3 and GRU cells have fewer trainable parameters than LSTM cells, the training time is less than one hour in all cases. Therefore, the learning rate (0.001) and batch size (32) used previously in Stage 3 are kept unchanged.
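A sketch of this sensitivity study is given below, again reusing build_seq2seq_rnn; the exact set of candidate learning rates is an assumption beyond the boundary values quoted above, while the batch sizes follow the values listed in the text.

```python
import itertools
import time

def sensitivity_grid(X, y, epochs=500):
    """Sketch of the Stage 4 sensitivity analysis: train the selected 60-GRU
    architecture over a grid of learning rates and batch sizes, recording the
    final validation MSE and the training time for each combination."""
    learning_rates = [0.0001, 0.0005, 0.001, 0.005, 0.01]   # candidate values (assumed set)
    batch_sizes = [16, 32, 64, 128]
    results = {}
    for lr, bs in itertools.product(learning_rates, batch_sizes):
        model = build_seq2seq_rnn(units=60, cell="gru", learning_rate=lr)
        t0 = time.time()
        hist = model.fit(X, y, validation_split=0.2, epochs=epochs,
                         batch_size=bs, verbose=0)
        results[(lr, bs)] = (hist.history["val_loss"][-1], time.time() - t0)
    return results
```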

4.5. Diagnosing the Learning Behavior of the Selected Architecture (Stage 4)

The selected GRU architecture has shown satisfactory final performance up to 500 epochs. In order to decide on the number of training epochs for the final model, the learning behavior is diagnosed throughout the first 1500 epochs for five independent runs (5 colors in Figure 8). The MSE on training data is represented by solid lines and that on validation data by dashed lines. The most important phase of learning takes place within the first 125 epochs (Range 1): both validation and training losses decrease and stabilize jointly for the five independent runs. No pronounced underfitting or overfitting behavior is observed after 125 training epochs. The mean loss on both training and validation data decreases below 5 × 10−5 after only 1000 epochs and keeps converging asymptotically to zero (Range 2). Similar nonlinear dynamic response prediction problems treated in recent studies using recurrent networks required 10,000 [14] and 50,000 [12] epochs to reach losses of the same order, with networks having about 130,000 [12] trainable parameters (here, only 33,361 for the selected 60-GRU architecture). MLM1 is trained for 1500 epochs, and its MSE on validation data is 1.69 × 10−5. This outstanding performance is due to the quality and size of the training data and to the network optimization process.
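This diagnosis can be reproduced by overlaying the loss histories of independent runs, for example with the following sketch (Keras History objects are assumed as input):

```python
import matplotlib.pyplot as plt

def plot_learning_curves(histories):
    """Overlay the training (solid) and validation (dashed) MSE curves of
    several independent training runs, on a logarithmic scale."""
    for i, hist in enumerate(histories):
        plt.semilogy(hist.history["loss"], "-", label=f"run {i + 1}: training")
        plt.semilogy(hist.history["val_loss"], "--", label=f"run {i + 1}: validation")
    plt.xlabel("Epoch")
    plt.ylabel("MSE")
    plt.legend()
    plt.show()
```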
MLM2 and MLM3 are designed for a sequence prediction problem of the same nature as that of MLM1. The only differences are the properties of the isolation layers they simulate (Table 2). Therefore, the same architecture and hyperparameters adopted for MLM1 were used for their training on datasets of the same size but with different contents. The MSE on validation data is 6.51 × 10−6 for MLM2 and 1.24 × 10−5 for MLM3.

4.6. Testing the Developed MLMs (Stage 5)

Three test datasets of 50 samples each were derived from GMs 3~5 (Stages 1 and 2). In order to test the developed MLMs, their MSEs on these new unseen data are evaluated. As shown in Table 4, all MSEs are in the order of 10−4, greater than those on the training data (~10−5), but this is still a very good performance, considering that the test data were intentionally generated from GMs with different properties than GM2 used for training (see Table 3 and Figure 3). To illustrate the accuracy of the developed models on real-scale data, the shear time history predictions of a few random samples are unscaled and then combined with their respective displacement time histories to draw the corresponding nonlinear cyclic hysteresis loops. Figure 9, Figure 10 and Figure 11 show the resulting curves compared with the reference NTHA ones. It is worth mentioning that, for each sample case, both the MLM and the NTHA share the same displacement time history; thus, the RMSE is provided only for the shear sequence. The predicted hystereses fit the reference curves very well, and all RMSEs are in the order of tens of kN, which is widely accepted in earthquake engineering.

4.7. Computation Time for Developing the MLMs

All training acts are performed on a computer with 16 Intel® Xeon® W-2245 CPUs and one NVIDIA RTX 5000 GPU card, using the TensorFlow machine learning library [36] under Python 3.6. Table 5 shows the computation time consumed by each step of Stages 3 and 4. GRU models are about 27% faster than LSTM models, since they are designed with fewer trainable parameters. The third stage is the most time-consuming in the proposed framework, since 630 independent training acts were performed for a cumulative computation time of almost 3 days. The sequence length is the number of integration time steps used for the NTHAs, which is equal to the GM duration divided by the modified time step of its original record (here, 0.02/5 = 0.004 s); the subdivision of the time step by five is commonly used for an accurate nonlinear analysis. Considering the randomness of the training process of ANNs, the skill of a model should never be evaluated from a single run. The number of repeated training acts is limited only by the available time and computation resources.

5. Hybrid Seismic Analyses of Studied Building Models

5.1. Results and Discussion for Building 1

Seismic analyses of Building 1 subjected to GMs 2, 3, 4, and 5 are performed using both the conventional NTHA (OpS method), taken as a reference, and the proposed hybrid analysis (MLM1 simulating the isolation layer: NRB, LRB, and Oil Damper). The superstructure response is evaluated by its analytical models in both cases. Although the hybrid analysis computes the full response of the building (acceleration, velocity, and drift at each degree of freedom), only the hysteresis loops of the isolation layer are shown in Figure 12 for ease of presentation (black for the hybrid analysis and red for the reference analysis). Since both drift and shear are computed in both analyses at each time step, the drift RMSE is also evaluated as follows:
$$\text{Drift RMSE (cm)} = \sqrt{\frac{\displaystyle\sum_{i \in \mathrm{time}} \left(\mathrm{drift}_{\mathrm{OpS},i} - \mathrm{drift}_{\mathrm{Hybrid},i}\right)^2}{N_{\text{time steps}}}}$$
The hysteresis loops of the isolation layer are not perfectly bilinear because of the effect of the Oil Damper. The hybrid seismic analysis results fit the reference curves very well for all the GMs considered in this study, despite the large nonlinearities (drift up to 13.5 cm for the case of GM2). The maximum drift RMSE (respectively shear RMSE) is 0.098 cm for GM5 (respectively 32.4 kN for GM4). All RMSEs are of the same order and remain within an acceptable range. Maximum deformations at each floor are given in Figure 13 for each GM (4 colors) and each seismic analysis (solid lines for the hybrid analysis and dashed lines for the conventional one). The superstructure deforms linearly, and the responses of both analyses fit very well. The maximum drift RMSE is 0.00528 cm (GM2, roof), and the maximum shear RMSE is 27.1 kN (GM4, first floor).

5.2. Results and Discussion for Buildings 2 and 3

Since the superstructures of Buildings 2 and 3 behaved linearly and similarly to that of Building 1, only the hysteresis loops of their isolation layers are shown in Figure 14 and Figure 15 to check the reliability of the proposed hybrid analysis under larger displacements of the isolation layer (up to 21.6 cm for Building 3 under GM 2). The maximum drift RMSE (respectively shear RMSE) is 0.253 cm (respectively 46.3 kN) for Building 2, and 0.338 cm (respectively 34.4 kN) for Building 3. Considering the intentionally chosen differences in the properties of the GMs and buildings used in this study, the fact that the MLMs were trained with data derived from only one GM, and the very small RMSEs observed in all previous cases, the accuracy of the proposed hybrid seismic analysis is validated.

5.3. Computation Time of the Proposed Hybrid Seismic Analysis

After validating the accuracy of the hybrid analysis, its applicability is investigated hereafter. The developed MLM is incorporated into the numerical integration method (here, the OpS method) to predict the shear force of the target component or group of components (here, the isolation layer) at each time step. The resulting hybrid integration module, written here as a Python script, is intended to be implemented within structural analysis software for common use by practitioners. The optimization of the network architecture might be time-consuming, but once the MLM is trained, it can be saved and reused in any new structural model containing the same target component or group of components. For similar nonlinear structural RPPs, it is recommended to adopt the same RNN architecture determined in this paper for a straightforward application. Therefore, the key factor in evaluating the applicability of the hybrid analysis is its computation time. A sketch of how a saved MLM can be wrapped for use inside the integration loop is given below.
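The sketch below illustrates, under stated assumptions, how a saved MLM could be wrapped so that the integration loop of Section 2 queries the isolation-layer shear at each predictor displacement. The file name is hypothetical, and this naive version re-feeds the whole scaled displacement history at every step; a stateful recurrent cell would avoid the repeated computation.

```python
import numpy as np
import tensorflow as tf

def make_rnn_force_predictor(model, x_scale, y_scale):
    """Wrap a trained sequence-to-sequence MLM as a step-by-step restoring
    force predictor for the hybrid integration loop (naive, non-stateful)."""
    history = []
    def predict(d_pred):
        history.append(d_pred / x_scale)                        # scaled displacement history
        seq = np.array(history)[np.newaxis, :, np.newaxis]      # (1, time steps, 1)
        shear_scaled = model.predict(seq, verbose=0)[0, -1, 0]  # prediction at the last step
        return float(shear_scaled * y_scale)                    # back to kN
    return predict

# Example usage (file name hypothetical; x_scale and y_scale from preprocessing):
# mlm1 = tf.keras.models.load_model("mlm1_isolation_layer.h5")
# predict_force = make_rnn_force_predictor(mlm1, x_scale, y_scale)
# d, v, a = hybrid_ops_sdof(m, c, k_init, ag, dt, predict_force)
```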
Table 6 presents the computation times of the twelve hybrid seismic analyses performed in this study: Buildings 1, 2, and 3 subjected to GMs 2, 3, 4, and 5. A Python script is written to perform the structural analysis of the corresponding MDOF models and to load the previously developed MLMs. All computations are performed on the same CPU (16 Intel® Xeon® W-2245 CPUs) using the TensorFlow machine learning library [36]. For a given computer performance, the consumed time depends mainly on the number of integration time steps, since the MLM makes a prediction at each step. This number is a function of the duration of the GM, the original time step of the record (here, 0.02 s), and its subdivision (here, by 5) commonly chosen for accurate NTHA. The minimum computation time corresponds to GM2 (the shortest duration), with an average of 12.80 min. The most time-consuming case, as expected, corresponds to GM5, which has the longest duration, with an average value of 38.12 min. Furthermore, the computation times of the hybrid analyses of different buildings under the same GM are almost the same, since the analytical models describing the behavior of the respective superstructures are processed in a very short time (here, less than 1 s). The time will only increase if more MLMs are used to simulate other components in the same building. Since structural design is performed once in the lifetime of a building, the additional computation time is considered negligible. Considering the accuracy advantage and the reasonable computation time, the efficiency of the proposed hybrid seismic analysis is demonstrated in this study.

6. Conclusions

A hybrid seismic analysis is elaborated in this paper to compute the full response of a building structure by incorporating both analytical and RNN models into an explicit time integration method. The MLM predicts the nonlinear response of interest at each time step of the computation. A comprehensive framework is proposed for generating artificial data, optimizing the network architecture and hyperparameters, training the MLM, and eventually testing it. Numerical simulations of three isolated buildings (5, 10, and 15 stories) subjected to four GMs with different amplitudes, frequency contents, and durations are performed to check the efficiency of the proposed hybrid analysis. The developed MLMs simulate the isolation layer (NRB + LRB + Oil Damper) of the studied buildings by mapping its displacement time history to the corresponding shear force time history. They were tested and then implemented to perform a total of twelve hybrid seismic analyses. The following conclusions are drawn:
  • The proposed hybrid seismic analysis computes the full response in terms of acceleration, velocity, and displacement at each degree of freedom. The term hybrid is related to the time integration algorithm, not to the input data nor to the MLM architecture, offering a novel application of ML in civil engineering. Once the MLM is developed, it can be saved and reused to perform the structural analysis of any building model containing the same component or group of components the MLM was trained to simulate. Therefore, the hybrid analysis proposed in this paper overcomes the shortcoming of most applications of ML in structural RPPs, which are limited to predicting specific response quantities of the structures used in the training process.
  • The framework proposed for developing the MLMs covers the three main steps of designing a sequence-to-sequence prediction model: data framing, tuning of the model architecture, and diagnosis of the learning behavior. A shallow GRU model with two recurrent layers of 60 cells each and a one-unit Dense output layer was found to be the most appropriate in terms of accuracy, reliability (similar performance after 150 repetitions), and simplicity (only 33,361 trainable parameters). The MSEs on scaled training and validation data decreased below 5 × 10−5 after only 1000 epochs and were in the order of 10−4 on the test data. The optimized RNN architecture may be used for similar structural RPPs, and the proposed framework is highly recommended for designing RNN models for other sequence-to-sequence prediction problems.
  • The responses of the hybrid seismic analyses of the studied building models under the four GMs are compared with the results of the conventional analysis (OpS method), taken as a reference in this study. The linear response of the superstructures is evaluated very well, with a maximum drift RMSE (respectively shear RMSE) of 0.00528 cm (respectively 27.1 kN) among all twelve cases. The hybrid analysis accurately captured the nonlinear hysteresis loops of the isolation layers, with a maximum drift RMSE (respectively shear RMSE) at the isolation level of 0.338 cm (respectively 46.3 kN) among all configurations. These results provide a numerical validation of the accuracy of the proposed hybrid seismic analysis, since all RMSEs are considered acceptable in earthquake engineering practice.
  • The computation times necessary for developing the MLMs and performing the hybrid seismic analyses were provided to assess the applicability of such a study by practitioners. Optimizing the RNN architecture is the most time-consuming stage of the whole framework: a total duration of 67.42 h, counting 630 independent training acts, was necessary to find an accurate, reliable, and simple architecture capable of simulating the structural behavior of both deformation- and velocity-dependent devices simultaneously. The hybrid seismic analysis itself is far less time-consuming, taking an average of 0.27 s per integration time step on a standard CPU. The maximum computation time was 39.30 min, corresponding to 6750 integration time steps. The capability of the proposed hybrid seismic analysis to simulate the true structural behavior of a building model in a few minutes demonstrates its efficiency for engineering practice. The development of reliable and accurate MLMs remains a challenging task, though.
Synthetic data were generated in this study to develop the MLMs for hybrid seismic analysis, and this paper validates this novel application of ML in structural analysis. However, the ultimate objective is the development of MLMs simulating structural components or groups of components whose only available and reliable data are those obtained from well-designed experimental tests. In that case, an MLM would outperform an analytical counterpart that has not yet been experimentally validated, thus further increasing the accuracy of conventional NTHA. Since experimental data contain the contributions of all known and unknown physical phenomena, as well as the epistemic uncertainty of measurement, the development of such MLMs would be both more beneficial and more challenging. This unfulfilled task is targeted by the authors in another study.

Author Contributions

Conceptualization, T.S. and N.M.; methodology, N.M.; software, N.M.; validation, T.S.; formal analysis, N.M.; investigation, T.S.; Resources, T.S.; data curation, N.M.; writing−original draft preparation, N.M.; writing−review and editing, T.S.; visualization, N.M.; supervision, T.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

Nomenclatures

| Abbreviation | Definition |
|---|---|
| ANN | Artificial Neural Network |
| CNN | Convolutional Neural Network |
| DE | Damage Evaluation |
| DL | Deep Learning |
| IQR | Inter Quartile Range |
| FEM | Finite Element Method |
| GM | Ground Motion |
| GRU | Gated Recurrent Unit |
| JMA | Japan Meteorological Agency |
| LRB | Lead Rubber Bearing |
| LSTM | Long Short-Term Memory |
| MDOF | Multi Degrees Of Freedom |
| ML | Machine Learning |
| MLM | Machine Learning Model |
| MLP | Multi-Layer Perceptron |
| MSE | Mean Square Error |
| NTHA | Nonlinear Time History Analysis |
| NRB | Natural Rubber Bearing |
| OpS | Operator Splitting |
| RC | Reinforced Concrete |
| RMSE | Root Mean Square Error |
| RNN | Recurrent Neural Network |
| RPP | Response Prediction Problem |
| SI | System Identification |
| STERA_3D | Structural Earthquake Response Analysis 3D |

References

  1. Sengupta, P.; Li, B. Hysteresis Modeling of Reinforced Concrete Structures: State of the Art. ACI Struct. J. 2017, 114, 25–38.
  2. Dokainish, M.A.; Subbaraj, K. A survey of direct time-integration methods in computational structural dynamics-I. Explicit methods. Comput. Struct. 1989, 32, 1371–1386.
  3. Subbaraj, K.; Dokainish, M.A. A survey of direct time-integration methods in computational structural dynamics-II. Implicit methods. Comput. Struct. 1989, 32, 1387–1401.
  4. Zhang, R.; Liu, Y.; Sun, H. Physics-guided convolutional neural network (PhyCNN) for data-driven seismic response modeling. Eng. Struct. 2020, 215, 110704.
  5. Dang, H.V.; Raza, M.; Nguyen, T.V.; Bui-Tien, T.; Nguyen, H.X. Deep learning-based detection of structural damage using time-series data. Struct. Infrastruct. Eng. 2020, 17, 1474–1493.
  6. Mekaoui, N.; Horioka, T.; Hayashi, K.; Saito, T. Real-time parameter identification of seismically excited building structures using response surface method and Bayesian optimization technique: Case study. AIJ J. Struct. Eng. 2021, 67, 643–653.
  7. Xie, Y.; Sichani, M.E.; Padgett, J.E.; DesRoches, R. The promise of implementing machine learning in earthquake engineering: A state-of-the-art review. Earthq. Spectra 2020, 36, 1769–1801.
  8. Yaseen, Z.M.; Afan, H.A.; Tran, M.T. Beam-column shear prediction using hybridized deep learning neural network with genetic algorithm. IOP Conf. Ser. Earth Environ. Sci. (CUTE) 2018, 143, 012025.
  9. Guan, X.; Burton, H.; Shokrabadi, M.; Yi, Z. Seismic drift demand estimation for steel moment frame buildings: From mechanics-based to data-driven models. ASCE J. Struct. Eng. 2021, 146, 04021058.
  10. Eshkevari, S.S.; Takac, M.; Pakzad, S.N.; Jahani, M. DynNet: Physics-based neural architecture design for nonlinear structural response modeling and prediction. Eng. Struct. 2021, 229, 111582.
  11. Brownlee, J. Deep Learning for Time Series Forecasting; Machine Learning Mastery: 2020; v1.8, 555p. Available online: https://machinelearningmastery.com/deep-learning-for-time-series-forecasting/ (accessed on 25 January 2022).
  12. Zhang, R.; Chen, Z.; Chen, S.; Zheng, J.; Buyukozturk, O.; Sun, H. Deep long short-term memory networks for nonlinear structural seismic response prediction. Comput. Struct. 2019, 220, 55–68.
  13. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780.
  14. Zhang, R.; Liu, Y.; Sun, H. Physics-informed multi-LSTM networks for metamodeling of nonlinear structures. Comput. Methods Appl. Mech. Eng. 2020, 369, 113226.
  15. Nakashima, M.; Kato, H.; Takaoka, E. Development of real-time pseudo dynamic testing. Earthq. Eng. Struct. Dyn. 1992, 21, 79–92.
  16. Nakashima, M. Hybrid simulation: An early history. Earthq. Eng. Struct. Dyn. 2020, 49, 949–962.
  17. Schellenberg, A.H.; Mahin, S.A.; Fenves, G.L. Advanced Implementation of Hybrid Simulation; Pacific Earthquake Engineering Research Center, University of California: Berkeley, CA, USA, 2009; Volume 104, 286p.
  18. Nakashima, M.; Ishida, M.; Ando, K. Integration techniques for substructure pseudo dynamic test. J. Struct. Constr. Eng. 1990, 417, 107–117. (In Japanese)
  19. Hughes, T.J.R.; Pister, K.S.; Taylor, R.L. Implicit-explicit finite elements in nonlinear transient analysis. Comput. Methods Appl. Mech. Eng. 1979, 17, 159–182.
  20. Newmark, N.M. A method of computation for structural dynamics. ASCE J. Eng. Mech. Div. 1959, 85, 67–94.
  21. MLIT. The Notification and Commentary on the Structural Calculation Procedures for Building with Seismic Isolation; Ministry of Land, Infrastructure, Transport and Tourism: Tokyo, Japan, 2000. (In Japanese)
  22. Saito, T. Structural Earthquake Response Analysis, STERA_3D Version 11.0. Available online: http://www.rc.ace.tut.ac.jp/saito/software-e.html (accessed on 25 January 2022).
  23. Lumped Mass Models (with Seismic Isolation). Available online: http://www.rc.ace.tut.ac.jp/saito/software_sample_SI02-e.html (accessed on 25 January 2022).
  24. Colah's Blog. Available online: https://colah.github.io/posts/2015-08-Understanding-LSTMs/ (accessed on 25 January 2022).
  25. Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning representations by back-propagating errors. Nature 1986, 323, 533–536.
  26. Cho, K.; Merrienboer, B.V.; Bahdanau, D.; Bengio, Y. On the properties of neural machine translation: Encoder-decoder approaches. In Proceedings of the SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation (W14-4012), Doha, Qatar, October 2014; pp. 103–111.
  27. Keras API Reference/Losses. Available online: https://keras.io/api/losses/ (accessed on 25 January 2022).
  28. Keras API Reference/Optimizers. Available online: https://keras.io/api/optimizers/ (accessed on 25 January 2022).
  29. Brownlee, J. Long Short-Term Memory Networks with Python; Machine Learning Mastery: 2020; v1.7, 245p. Available online: https://machinelearningmastery.com/lstms-with-python/ (accessed on 25 January 2022).
  30. Levine, Y.; Sharir, O.; Shashua, A. Benefits of depth for long-term memory of recurrent networks. In Proceedings of the 6th International Conference on Learning Representations (ICLR), Vancouver, BC, Canada, April–May 2018.
  31. Chung, J.; Gulcehre, C.; Cho, K.; Bengio, Y. Empirical evaluation of gated recurrent neural networks on sequence modeling. In Proceedings of the NIPS Deep Learning and Representation Learning Workshop, Montreal, QC, Canada, December 2014.
  32. Yang, S.; Zhou, Y.; Yu, X. LSTM and GRU neural network performance comparison study. In Proceedings of the International Workshop on Electronic Communication and Artificial Intelligence (IWECAI), Qingdao, China, June 2020.
  33. Kingma, D.P.; Ba, J.L. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations (ICLR), San Diego, CA, USA, May 2015.
  34. Smith, L.N. Cyclical learning rates for training neural networks. In Proceedings of the 2017 IEEE Winter Conference on Applications of Computer Vision (WACV), Santa Rosa, CA, USA, March 2017.
  35. Li, H.; Xu, Z.; Taylor, G.; Studer, C.; Goldstein, T. Visualizing the loss landscape of neural nets. In Proceedings of the 9th International Conference on Learning Representations (ICLR), virtual only, May 2021.
  36. TensorFlow. Available online: https://www.tensorflow.org/ (accessed on 25 January 2022).
Figure 1. Principle of the proposed hybrid seismic analysis [18].
Figure 2. (A) Overview of base-isolated building and (B) constitutive law of isolation devices.
Figure 3. Acceleration response spectra of used earthquake ground motions (damping h = 5%).
Figure 4. Framework for developing and testing the MLMs.
Figure 5. 1st screening: Random search for the appropriate architecture (10 runs).
Figure 6. (A) 2nd screening (20 runs), and (B) reliability of the selected GRU model.
Figure 7. Sensitivity analysis to hyperparameters: (A) MSE, and (B) training time (500 epochs).
Figure 8. Learning behavior of the MLM1 (5 independent training acts).
Figure 9. Unscaled MLM1 predictions on a random sample from each test dataset.
Figure 10. Unscaled MLM2 predictions on a random sample from each test dataset.
Figure 11. Unscaled MLM3 predictions on a random sample from each test dataset.
Figure 12. Nonlinear cyclic hystereses of isolation level of Building 1.
Figure 13. Maximum story deformations of Building 1.
Figure 14. Nonlinear cyclic hystereses of isolation level of Building 2.
Figure 15. Nonlinear cyclic hystereses of isolation level of Building 3.
Table 1. Lumped-mass model properties of studied buildings superstructures. Building 1: B01_05F_LMM_SI (Tn,1 = 0.5 s); Building 2: B02_10F_LMM_SI (Tn,2 = 1.0 s); Building 3: B03_15F_LMM_SI (Tn,3 = 1.5 s).

| Story Level | Building 1 Stiffness (kN/mm) | Building 1 Weight (kN) | Building 2 Stiffness (kN/mm) | Building 2 Weight (kN) | Building 3 Stiffness (kN/mm) | Building 3 Weight (kN) |
|---|---|---|---|---|---|---|
| 15th | - | - | - | - | 80 | 3000 |
| 14th | - | - | - | - | 156 | 3000 |
| 13th | - | - | - | - | 225 | 3000 |
| 12th | - | - | - | - | 290 | 3000 |
| 11th | - | - | - | - | 349 | 3000 |
| 10th | - | - | 121 | 3000 | 402 | 3000 |
| 9th | - | - | 229 | 3000 | 451 | 3000 |
| 8th | - | - | 326 | 3000 | 494 | 3000 |
| 7th | - | - | 410 | 3000 | 531 | 3000 |
| 6th | - | - | 483 | 3000 | 563 | 3000 |
| 5th | 241 | 3000 | 543 | 3000 | 590 | 3000 |
| 4th | 435 | 3000 | 592 | 3000 | 612 | 3000 |
| 3rd | 579 | 3000 | 628 | 3000 | 628 | 3000 |
| 2nd | 676 | 3000 | 652 | 3000 | 639 | 3000 |
| 1st | 724 | 3000 | 664 | 3000 | 644 | 3000 |
Table 2. Isolation level properties of studied building models.

| Isolation Level | Building 1 | Building 2 | Building 3 |
|---|---|---|---|
| Effective period Teff (s) | Teff,1 = 2.5 | Teff,2 = 4.0 | Teff,3 = 6.0 |
| Base weight (kN) | 4500 | 4500 | 4500 |
| Oil Damper: C1 (kN·s/mm) | 2.194 | 1.617 | 1.237 |
| Oil Damper: C2/C1 | 0.067 | 0.067 | 0.067 |
| Oil Damper: Vr (mm/s) | 320 | 320 | 320 |
| LRB+NRB: K1 (kN/mm) | 175.5 | 129.4 | 99 |
| LRB+NRB: K2/K1 | 0.046 | 0.042 | 0.031 |
| LRB+NRB: Fy (kN) | 1755 | 1294 | 990 |
| ML model designation | MLM1 | MLM2 | MLM3 |
Table 3. Overview of earthquake ground motions used in this study. (Acceleration sparkline images are omitted.)

| No. | Ground Motion Designation | Earthquake Occurrence Date | PGA (Gal) | Duration (s) | Usage in This Study |
|---|---|---|---|---|---|
| 1 | Kobe 1995 NS | 17 January 1995 | 818 | 5 | Tuning network architecture |
| 2 | Kobe 1995 NS | 17 January 1995 | 818 | 15 | Training the MLM; hybrid analysis |
| 3 | Taft 1952 EW | 21 July 1952 | 176 | 19 | Testing the MLM; hybrid analysis |
| 4 | Tohoku 1978 NS | 12 June 1978 | 258 | 23 | Testing the MLM; hybrid analysis |
| 5 | El Centro 1940 NS | 18 May 1940 | 342 | 27 | Testing the MLM; hybrid analysis |
Table 4. MSEs of developed MLMs on test datasets (50 samples each).

| | Test Dataset 3 (GM3: Taft 1952 EW) | Test Dataset 4 (GM4: Tohoku 1978 NS) | Test Dataset 5 (GM5: El Centro 1940 NS) |
|---|---|---|---|
| MLM1 | 2.73 × 10−4 | 6.04 × 10−4 | 3.84 × 10−4 |
| MLM2 | 5.04 × 10−4 | 7.04 × 10−4 | 4.97 × 10−4 |
| MLM3 | 6.75 × 10−4 | 9.72 × 10−4 | 6.63 × 10−4 |
Table 5. Computation time for developing the MLMs.

| Training | Sequence Length | Epochs | Batch Size | Number of Trainings | Time (h) |
|---|---|---|---|---|---|
| 1st screening (LSTM) | 1251 | 500 | 32 | 19 × 10 = 190 | 23.93 |
| 1st screening (GRU) | 1251 | 500 | 32 | 19 × 10 = 190 | 18.87 |
| 2nd screening | 1251 | 500 | 32 | 5 × 20 = 100 | 9.82 |
| Model reliability | 1251 | 500 | 32 | 10 + 20 + 30 + 40 + 50 = 150 | 14.80 |
| Sensitivity analysis | 3751 | 500 | 16~128 | 20 | 10.27 |
| MLM1 | 3751 | 1500 | 32 | 5 | 8.49 |
| MLM2 | 3751 | 1500 | 32 | 1 | 1.65 |
| MLM3 | 3751 | 1500 | 32 | 1 | 1.66 |
Table 6. Computation time of performed hybrid seismic analyses.

| | GM2 (Kobe 1995 NS) | GM3 (Taft 1952 EW) | GM4 (Tohoku 1978 NS) | GM5 (El Centro 1940 NS) |
|---|---|---|---|---|
| GM duration (s) | 15 | 19 | 23 | 27 |
| Integration time steps | 3750 | 4750 | 5750 | 6750 |
| Time (min): B01_05F_LMM_SI | 12.71 | 19.18 | 29.93 | 39.30 |
| Time (min): B02_10F_LMM_SI | 12.66 | 20.25 | 29.53 | 38.76 |
| Time (min): B03_15F_LMM_SI | 12.93 | 19.08 | 26.75 | 36.31 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
