Article

Advanced Elastic and Reservoir Properties Prediction through Generative Adversarial Network

by Muhammad Anwar Ishak 1,2, Abdul Halim Abdul Latiff 1,*, Eric Tatt Wei Ho 1, Muhammad Izzuljad Ahmad Fuad 2, Nian Wei Tan 2, Muhammad Sajid 2 and Emad Elsebakhi 2

1 Centre for Subsurface Imaging, Department of Geosciences, Universiti Teknologi PETRONAS, Bandar Seri Iskandar 32610, Perak, Malaysia
2 PETRONAS Research Sdn Bhd (PRSB), Lot 3288 and 3289, Off Jln Ayer Itam, Kawasan Institusi Bangi, Kajang 43000, Selangor, Malaysia
* Author to whom correspondence should be addressed.
Submission received: 21 February 2023 / Revised: 26 March 2023 / Accepted: 23 April 2023 / Published: 22 May 2023
(This article belongs to the Special Issue Big Data and Machine Learning in Earth Sciences)

Abstract
The prediction of subsurface properties such as velocity, density, porosity, and water saturation has long been a main focus of petroleum geosciences. Advanced methods such as Full Waveform Inversion (FWI), Joint Migration Inversion (JMI), and ML-based rock physics produce better predictions than their predecessors, but they still require tedious manual interpretation that is prone to human error, and research on these methods remains open as they suffer from technical limitations. As computing resources become cheaper, it is feasible to use a single deep generative adversarial network to predict all of these properties in a completely data-driven manner. With our proposed multiscale pix2pix method, applied to the SEG SEAM salt data, we mapped a single input, post-stack seismic data, to several outputs of reservoir and elastic properties, such as porosity, velocity, and density, using only one trained model and without manually interpreting or pre-processing the input data. With 90% accuracy on the synthetic data tests, the method is worth exploring by the petroleum geoscience community.

1. Introduction

Accurate prediction of elastic and reservoir properties is crucial to obtaining a precise image of the subsurface. This leads to the correct placement of a well, reducing risks and uncertainties and thus increasing the chance of success in finding hydrocarbons. Velocity model building (VMB), as well as seismic inversion and rock physics studies, have been used to achieve these objectives. Although significant advances have been made in these methods, they remain an open topic with much ongoing research. There are several approaches to VMB [1]. The most conventional method is tomography, an iterative process involving several steps, such as sorting the data into common image points (CIP) and picking the reflections to flatten the image gathers prior to producing the final velocity model. Another widely used approach is Full Waveform Inversion (FWI) [2], which matches calculated data with observed data by considering amplitudes and traveltimes [3,4]. Traditional FWI is depth-limited because the diving waves do not penetrate deep enough into the subsurface; hence, reflection FWI (RFWI) was proposed [5]. RFWI, however, suffers from a highly non-linear coupling of density and velocity. This problem can be addressed by Joint Migration Inversion (JMI) [6,7], which manages to reduce the non-linearity of the inversion, but JMI is based on a one-way wave equation, which is inferior to FWI. In 2018, a hybrid FWI-JMI was proposed [8]. Seismic inversion combined with rock physics study is another method used in reservoir characterization. Unlike VMB, this method focuses much more on the reservoir itself than on the entire seismic volume [9,10]. In a multi-step inversion approach, elastic properties such as P-impedance, S-impedance, and density are produced by stochastic or deterministic inversion of seismic data; next, at a well, a rock physics model is developed connecting these elastic properties to reservoir properties. In a single-loop approach, high-resolution models of the reservoir properties are created before rock physics transformations are used to create the elastic property volumes, after which calculated synthetic seismic traces are compared to the actual seismic data. To combine these multi-step and single-loop approaches, Grana, D. [9] suggested using the output from the first approach as a prior for the second. As previously established, inversion and rock physics are reservoir-centric, and accuracy decreases as reservoir depth increases. A regional rock physics template (RPT), based on the rock physics inclusion model, has been proposed to address these issues [11].

2. Related Work

AI-Velocity Model. The application of neural networks to velocity model building dates back to the 1990s. Röth, G. et al. [12] used a neural network to predict 1D velocity from shot gathers. Similar work has recently been published [13], mapping a 1D vertical velocity profile from data cubes of neighboring common midpoint (CMP) gathers; it applied an advanced DL architecture, the Visual Geometry Group (VGG) network, and is able to accommodate lateral heterogeneity in the model. This generalization problem was also faced by [14] in training neural networks, which the authors addressed by creating four different sets of geologically inspired geometric training models, with and without a background velocity gradient; they managed to produce plausible results with only one neural net (NN) and one fully connected (FC) layer as the network architecture. Earlier work [15] also showed the feasibility of the UNET architecture for approximating the non-linear mapping from multi-shot gathers to a velocity model, and its authors suggested that a generative adversarial network (GAN) be used in future testing. Another example is the DL tomography of [16], in which the authors applied three dense layers to learn the tomography operator from the seismic data. Based on these examples, most model-building techniques focus only on building the velocity model [13,14,16,17] and, in certain cases, the density model [18].
AI-Rock Physics. The application of machine learning or deep learning to rock physics has gained attention in recent years. One of the early applications used two simple 1D convolutional layers to solve the seismic inversion problem of predicting the elastic model of the subsurface from recorded seismic data [19]; the authors generated a numerical training set of P-impedance realizations with the corresponding seismic responses and obtained a well-generalized network. In another example, convolutional neural networks (CNN) were used to predict reservoir properties directly in the depth domain given time-domain pre-stack seismic data [20]. The feasibility of the method was shown by comparing two CNN networks, namely PetroNet (an end-to-end CNN) and ElasticNet-ElasticPetroNet (a cascaded CNN). In addition to deriving the reservoir and elastic properties directly from the seismic data, a neural net has also been tested in predicting the reservoir properties from the elastic properties [21], in which several inputs of different elastic properties were passed together into a three-layer neural net to produce the reservoir properties.
Generative Adversarial Networks. Deep learning is a subset of machine learning, and machine learning itself is a subset of artificial intelligence. A generative adversarial network (GAN) [22] is a two-network architecture consisting of a generator, which produces the translated image, and a discriminator, which tries to classify whether its input is generated (fake) or comes from the label set (real). GAN has become one of the best-known deep learning architectures due to its ability to produce remarkable photorealistic results in many computer vision tasks, such as image generation [22], image transformation or translation [22,23], styling [24], and super-resolution [25,26]. Furthermore, many GAN networks have been applied to medical image segmentation [27,28,29,30,31,32,33]. Despite the success of GAN in both domains, its application in geosciences is limited; some of the applications are compressive sensing [34], facies classification [35], and rock-type inference [36,37].
In this paper, we modified a generative adversarial network (GAN) method called pix2pix, proposed by Isola, P. et al. [22], and applied it to synthetic post-stack seismic data to verify its feasibility in directly predicting, or mapping, the reservoir and elastic properties. First, the input and target data were prepared as required by the DL network. Then, we split the data into training/validation and testing sets and trained for a number of epochs. Next, we examined the validation results, tested on the unseen data, and finally discussed those results, the shortcomings, and the way forward for this method.
Overall, there are two main contributions of our work:
  • We lay the groundwork for a new, entirely data-driven method, alongside velocity model building and rock physics, for predicting subsurface properties without any manual interaction.
  • The multiscale patchGAN extracts features at different scales and thus improves the accuracy of final predictions.

3. Pix2pix and Multiscale PatchGAN of Pix2Pix

Pix2pix is a generalized image-to-image translation network based on a conditional GAN [22]. It comprises a generator, $G$, and a discriminator, $D$. In the original pix2pix network, $G$ is a U-NET while $D$ is a PatchGAN. $G$ takes the observed data $x$ and a random noise vector $z$ as its input and generates an output, $\hat{y} = G(x, z)$:
$$ G : \{x, z\} \rightarrow \hat{y} \tag{1} $$
$G$ is trained to produce plausible outputs that are indistinguishable from the label data (real), while $D$ is trained to detect whether its incoming input is real (from the label data) or fake (generated by $G$). $G$ and $D$ compete in a min-max game in which $G$ tries to minimize its loss by producing output as similar as possible to the true label (Figure 1), while $D$ tries to maximize its "correctness" at detecting whether its incoming input is real, $y$, or fake, $\hat{y}$ (Figure 2). The ability of pix2pix to transform different images is helped by its two objective functions: the reconstruction loss, $\mathcal{L}_{L1}(G)$ (Equation (2)), and the conditional GAN adversarial loss, $\mathcal{L}_{cGAN}(G, D)$ (Equation (3)). In the reconstruction loss, the difference between the label $y$ and the generated output $\hat{y}$ is calculated as an L1 distance, as L1 encourages less blurriness [22]. The adversarial loss, on the other hand, is given by the sum of the likelihood of $D$ correctly classifying the real data as real and the generated data as fake.
$$ \mathcal{L}_{L1}(G) = \mathbb{E}_{x,y,z}\left[ \left\| y - G(x, z) \right\|_{1} \right] \tag{2} $$
$$ \mathcal{L}_{cGAN}(G, D) = \mathbb{E}_{x,y}\left[ \log D(x, y) \right] + \mathbb{E}_{x,z}\left[ \log\left( 1 - D(x, G(x, z)) \right) \right] \tag{3} $$
The final objective function is
$$ G^{*} = \arg \min_{G} \max_{D} \; \mathcal{L}_{cGAN}(G, D) + \lambda \, \mathcal{L}_{L1}(G) \tag{4} $$
with λ being the regularization coefficient.
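To make the objective concrete, the following is a minimal PyTorch sketch of these losses, assuming `G` is a U-NET generator and `D` a conditional PatchGAN discriminator built elsewhere; the tensor names and the default λ = 100 follow the original pix2pix paper [22], but the snippet is illustrative rather than our exact implementation.

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()  # adversarial term on raw discriminator logits
l1 = nn.L1Loss()              # reconstruction term, Equation (2)

def generator_loss(D, x, y, y_hat, lam=100.0):
    """Total generator loss: adversarial + lambda * L1 (Equation (4))."""
    d_fake = D(torch.cat([x, y_hat], dim=1))     # conditional D sees the input too
    adv = bce(d_fake, torch.ones_like(d_fake))   # G wants D to label its output "real"
    return adv + lam * l1(y_hat, y)

def discriminator_loss(D, x, y, y_hat):
    """Adversarial loss for D (Equation (3)): label data real, generated data fake."""
    d_real = D(torch.cat([x, y], dim=1))
    d_fake = D(torch.cat([x, y_hat.detach()], dim=1))  # detach: D's step must not update G
    return bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
```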
Proposed Network
The proposed network is an enhanced version of the original pix2pix. The general idea is to run the original pix2pix twice. The first run (Figure 3), with all the default hyperparameters, is used to obtain the base model. This base model is then re-trained in a second run, but this time the kernel size of the discriminator is changed from 4 × 4 to 3 × 3 (Figure 4). In theory, a smaller kernel size means higher-resolution feature extraction, but it cannot be too small or there will not be enough information for the kernel to extract. In addition, the proposed solution is based on StarGAN [23], an approach in which a single network is trained to perform image-to-image translation across multiple domains. Such a unified model architecture allows for the simultaneous training of multiple datasets with different domains within a single network. We adapted this idea to transform the seismic data into four different reservoir and elastic properties using only one model. A summary of our proposed network can be seen in Table 1.
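As an illustration of the only architectural change between the two runs, the sketch below builds a PatchGAN-style discriminator whose kernel size is a constructor argument, so that `kernel_size=4` corresponds to the first run and `kernel_size=3` to the second. The channel count (1 seismic input plus 4 property outputs, per Table 1) and the layer widths are our assumptions about a typical PatchGAN, not the verbatim network.

```python
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """PatchGAN: emits one real/fake logit per overlapping patch of the (x, y) pair."""
    def __init__(self, in_channels=5, base=64, kernel_size=4):
        super().__init__()
        k, p = kernel_size, (kernel_size - 1) // 2
        layers, ch = [], in_channels
        for i, out in enumerate([base, base * 2, base * 4, base * 8]):
            stride = 2 if i < 3 else 1                  # downsample in the first three blocks
            layers += [nn.Conv2d(ch, out, k, stride, p),
                       nn.BatchNorm2d(out) if i > 0 else nn.Identity(),
                       nn.LeakyReLU(0.2, inplace=True)]
            ch = out
        layers.append(nn.Conv2d(ch, 1, k, 1, p))        # per-patch logit map
        self.net = nn.Sequential(*layers)

    def forward(self, xy):
        return self.net(xy)

# Run 1: default 4 x 4 kernels; Run 2: re-train the base model against 3 x 3 kernels.
d_run1 = PatchDiscriminator(kernel_size=4)
d_run2 = PatchDiscriminator(kernel_size=3)
```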

4. Experiments

In this section, we validate our proposed method on the SEG SEAM salt dataset. The dataset is a 3D representation of a deepwater Gulf of Mexico salt domain, complete with fine-scale stratigraphy that includes oil and gas reservoirs. The model extends 35 km east-west, 40 km north-south, and 15 km in depth, with a 10 m grid in all directions. The stratigraphic variation was created using geostatistics, and all model properties were derived from fundamental rock properties that follow typical compaction gradients below the water bottom. Hence, properties have subtle contrasts at microlayer boundaries, especially in the shallow section, generating very realistic synthetic data [38]. In our project, we used the post-stack seismic, velocity, porosity, density, and water saturation data.
Figure 5 shows the complete workflow for the experiment.
A.
Data preparation
  • Each of the datasets (except porosity) was normalized to the [0, 1] range using min-max normalization with fixed property ranges, i.e., 1500–5000 for velocity, −0.3 to 0.3 for seismic amplitude, and 1–5 for density. We then selected 1000 inlines from each dataset and transformed them into 256 × 256 patches, giving 25,000 patches per dataset. The seismic data served as our input x, and velocity, porosity, density, and water saturation as our targets y (y1, y2, y3, y4) in the data loader, which we split into 90% training and 10% validation. The training patches were fed to the network in random order (a preparation sketch follows this item).
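A minimal sketch of this preparation step, assuming the volumes are already in memory as NumPy arrays; the fixed normalization ranges are the ones quoted above, and `to_patches` simply tiles an inline into non-overlapping 256 × 256 windows.

```python
import numpy as np

RANGES = {"velocity": (1500.0, 5000.0),
          "seismic": (-0.3, 0.3),
          "density": (1.0, 5.0)}       # porosity is left as-is, already within [0, 1]

def minmax_normalize(section, name):
    """Scale a property to [0, 1] using its fixed global range, not the section's own."""
    lo, hi = RANGES[name]
    return (section - lo) / (hi - lo)

def to_patches(section, size=256):
    """Tile a 2D inline into non-overlapping size x size patches."""
    h, w = section.shape
    return [section[i:i + size, j:j + size]
            for i in range(0, h - size + 1, size)
            for j in range(0, w - size + 1, size)]
```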
B.
Training/validation
  • We started the training with the default pix2pix settings for up to 2000 epochs. We implemented early stopping [39] manually by performing quality control (QC) on the validation results and the model every 50 epochs. The advantage of early stopping is that we do not have to wait until training completes all 2000 epochs; if we obtain a good intermediate model, we can stop early. We selected the model that gave the highest validation accuracy as our base model, which in this case was model 1000 (Figure 6).
Next, we re-trained the base model using the same hyperparameters except for the discriminator kernel size, which we changed from 4 × 4 to 3 × 3, and applied the same early-stopping strategy (a sketch of this checkpointing loop follows). Here, the best model was model 0, as it gave the highest accuracy on the validation set (Figure 7).
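The manual early stopping amounts to checkpointing every 50 epochs, keeping the checkpoint with the best validation score, and aborting once the trend clearly decreases. Below is a sketch under those assumptions, where `train_one_epoch` and `validate` are placeholders for the adversarial training pass and our QC step.

```python
def train_with_qc(G, train_one_epoch, validate, max_epochs=2000, qc_every=50, patience=5):
    """Manual early stopping [39]: QC every `qc_every` epochs, keep the best checkpoint,
    and abort once `patience` consecutive checkpoints fail to beat the best score."""
    best_acc, best_state, since_best = -1.0, None, 0
    for epoch in range(1, max_epochs + 1):
        train_one_epoch()                                  # one adversarial training pass
        if epoch % qc_every == 0:
            acc = validate()                               # QC on the held-out 10%
            if acc > best_acc:
                best_acc, since_best = acc, 0
                best_state = {k: v.clone() for k, v in G.state_dict().items()}
            else:
                since_best += 1
                if since_best >= patience:                 # accuracy trend decreasing
                    break
    if best_state is not None:
        G.load_state_dict(best_state)                      # e.g., model 1000 in the first run
    return best_acc
```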
C.
Testing
  • With our base model (pix2pix) and final model (MS-pix2pix, the proposed method) in hand, we conducted testing on two inlines that were not included in the training phase (Figure 8). Inline 1 was chosen to represent an area of simple geology, layer-cake strata with a small salt body, while Inline 2 represented an area of complex geology with a large salt body. Each inline was 1024 × 1024 samples in size; we normalized them before inputting them into both models, as sketched below. We compare the results of both models on both inlines in the next section.
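Because both the base and final generators are fully convolutional, a whole 1024 × 1024 inline can be fed through them directly rather than patch by patch. A sketch of that inference step follows; the model variables are placeholders for the two trained generators.

```python
import torch

@torch.no_grad()
def predict_inline(G, inline):
    """Map one normalized 2D inline (NumPy array, H x W) to the property sections."""
    x = torch.from_numpy(inline).float()[None, None]  # -> (batch=1, channel=1, H, W)
    y_hat = G.eval()(x)                               # 4 channels: velocity, porosity, density, Sw
    return y_hat.squeeze(0).numpy()

pred_base  = predict_inline(pix2pix_G, inline1)       # base model (placeholder variable)
pred_final = predict_inline(ms_pix2pix_G, inline1)    # proposed MS-pix2pix (placeholder)
```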

5. Results

This section is organized as follows. Each figure contains three images: the prediction from pix2pix, the prediction from MS-pix2pix (the proposed method), and the ground truth. The first two figures show the velocity prediction for Inline 1 and Inline 2, the next two show the porosity prediction, followed by density and, lastly, water saturation. The accuracy metrics used were correlation (Equation (5)) and the structural similarity index, SSIM (Equation (6)). The SSIM metric calculates the similarity between two images based on three key features: luminance, $l$; contrast, $c$; and structure, $s$ [41]. Both metrics range between −1, meaning the two images are very different, and +1, meaning the two images are very similar. The results can be found in Table 2.
$$ \mathrm{CORR}(y, \hat{y}) = \frac{\sum \left( y - \bar{y} \right)\left( \hat{y} - \bar{\hat{y}} \right)}{\sqrt{\sum \left( y - \bar{y} \right)^{2}} \, \sqrt{\sum \left( \hat{y} - \bar{\hat{y}} \right)^{2}}} \tag{5} $$
$$ \mathrm{SSIM}(y, \hat{y}) = \left[ l(y, \hat{y}) \right]^{\alpha} \left[ c(y, \hat{y}) \right]^{\beta} \left[ s(y, \hat{y}) \right]^{\gamma} \tag{6} $$
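Both metrics are standard, and a sketch of how they might be computed is below; the SSIM call uses scikit-image, whose implementation fixes the exponents α = β = γ = 1, which we take as a reasonable stand-in for Equation (6).

```python
import numpy as np
from skimage.metrics import structural_similarity

def correlation(y, y_hat):
    """Pearson correlation between ground truth and prediction (Equation (5))."""
    a, b = y.ravel() - y.mean(), y_hat.ravel() - y_hat.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2)))

def ssim(y, y_hat):
    """Structural similarity (Equation (6)) on sections normalized to [0, 1]."""
    return structural_similarity(y, y_hat, data_range=1.0)
```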
MS-pix2pix produced predictions of higher accuracy, both qualitatively and quantitatively, than the pix2pix network for all subsurface properties (Figure 9, Figure 10, Figure 11, Figure 12, Figure 13, Figure 14, Figure 15 and Figure 16). Although the predictions were better, some areas still require further attention. In Figure 9, Figure 11, Figure 13, and Figure 15, for example, MS-pix2pix substantially improved the pix2pix predictions; however, there is an erroneous prediction of a salt body at shallow depth. This could be due to the "dimmed" amplitude in the seismic data resembling a salt-body signature. In Figure 16, although the prediction from MS-pix2pix is better, it is over-predicted: there should not be a layer inside the salt body. As pix2pix is conditioned on the input image, the layer may have been carried over from the input seismic. It is also important to note that although both accuracy metrics gave high scores for water saturation, the network actually failed to detect the water layer; the high scores came from the salt and non-salt formations of the water saturation model. The salt and non-salt formations must be removed from the training for a more accurate water saturation prediction.

6. Application on Field Data

Applying deep learning to field data is very challenging for two main reasons: (1) the residual noise present in the seismic data and (2) the unavailability of true label data. To test the robustness of our approach on field data, we used seismic data from a Malaysian field, with labels generated by A. Fuad, M.I., et al. [42] via rock-physics-guided, deep-learning-based property inversion (Figure 17). The same workflow was applied to the field data: normalization and transformation into 256 × 256 patches. The total number of patches was 512, of which 90% were used for training and 10% for validation. We repeated the same training procedure: the best model from the first training was re-trained with a smaller discriminator kernel size, and the best model from that run was used for testing. The testing data (Figure 18) was a patch never used in training or validation.
Before examining the results in detail, let us look at the accuracy metric. Our objective here was to predict values as close as possible to the real values, so we established an accuracy metric (Figure 19) that compares each element with a 5% margin of error. We counted the number of correct predictions and divided it by the total number of elements to obtain our final accuracy figure.
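A sketch of this element-wise metric, assuming the margin is taken relative to the true value at each sample, which is how we read Figure 19:

```python
import numpy as np

def margin_accuracy(y, y_hat, margin=0.05):
    """Fraction of samples whose prediction falls within `margin` of the true value.
    Note: with a relative margin, samples whose true value is at or near zero are
    nearly impossible to match, which makes the metric strict for such fields."""
    correct = np.abs(y_hat - y) <= margin * np.abs(y)
    return float(correct.mean())
```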
Table 3 shows the results of the field data test. In general, MS-pix2pix produced predictions of higher accuracy than the pix2pix network (Figure 20). MS-pix2pix (Figure 21) managed to predict the velocity and density accurately. This would help produce better migrated images of the subsurface, and thus a better interpretation of the geological features of the area. The porosity prediction was also improved in MS-pix2pix, which can lead to better delineation of good reservoir rocks. However, the prediction of water saturation remains a challenge.

7. Conclusions and Discussion

In this paper, we lay the groundwork for implementing a deep learning model to predict subsurface properties. We propose multi-scale pix2pix (MS-pix2pix), based on pix2pix, a conditional generative adversarial network, as an alternative way to address the non-linearity and non-uniqueness problems encountered in geosciences. We devised an experiment that transforms post-stack seismic data into velocity, porosity, density, and water saturation using a single network. The synthetic data test produced encouraging results: MS-pix2pix outperformed pix2pix, and although there were mispredictions, the general prediction matched the ground truth. The results of the field data test were less accurate, even though MS-pix2pix still produced better results than pix2pix. This was expected, as the real data contained many unknowns and much noise that were absent from the synthetic data. The accuracy of the field data test varied by property: velocity and density predictions recorded high accuracy, whereas porosity and water saturation predictions recorded low accuracy. In both tests, synthetic and field data, the prediction of water saturation remained challenging. So why were we able to predict velocity, porosity, and density, but failed to predict water saturation? One explanation is that the first three properties have continuous values that suit our regression network, MS-pix2pix, whereas water saturation is normally quantified as 0 or 1, which is better suited to a classification problem. In the synthetic test, the salt and background formations needed to be removed from the label to allow a better judgment of the water saturation prediction. Moving forward, the proposed MS-pix2pix model looks promising and could form the basis for further research.

Author Contributions

Conceptualization, M.A.I., A.H.A.L. and E.T.W.H.; methodology, M.A.I.; validation, M.A.I.; formal analysis, M.A.I.; investigation, M.A.I.; data curation, M.A.I. and M.I.A.F.; writing—original draft preparation, M.A.I.; writing—review and editing, A.H.A.L., E.T.W.H. and N.W.T.; supervision, A.H.A.L., E.T.W.H., E.E. and M.S.; project administration, A.H.A.L. and M.S. All authors have read and agreed to the published version of the manuscript.

Funding

The work is supported by PETRONAS Research Sdn Bhd and Universiti Teknologi PETRONAS under grant 015MD0-076.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Jones, I.F. Velocities, Imaging and Waveform Inversion: The Evolution of Characterising the Earth’s Subsurface; EAGE Publications: Houten, The Netherlands, 2018. [Google Scholar]
  2. Virieux, J.; Operto, S. An Overview of Full-Waveform Inversion in Exploration Geophysics. Geophysics 2009, 74, WCC1–WCC26. [Google Scholar] [CrossRef]
  3. Liu, Y.; Teng, J.; Xu, T.; Wang, Y.; Liu, Q.; Badal, J. Robust Time-Domain Full Waveform Inversion with Normalized Zero-Lag Cross-Correlation Objective Function. Geophys. J. Int. 2016, 209, 106–122. [Google Scholar] [CrossRef]
  4. da Silva, S.L.E.F.; Kaniadakis, G. κ-Statistics Approach to Optimal Transport Waveform Inversion. Phys. Rev. E 2022, 106, 034113. [Google Scholar] [CrossRef] [PubMed]
  5. Xu, S.S.; Wang, D.; Chen, F.L.; Zhang, Y.; Lambaré, G. Full Waveform Inversion for Reflected Seismic Data. In Proceedings of the 74th EAGE Conference and Exhibition Incorporating EUROPEC 2012, Copenhagen, Denmark, 4–7 June 2012. [Google Scholar]
  6. Berkhout, A.J. Combining Full Wavefield Migration and Full Waveform Inversion, a Glance into the Future of Seismic Imaging. Geophysics 2012, 77, S43–S50. [Google Scholar] [CrossRef]
  7. Verschuur, D.J.; Staal, X.R.; Berkhout, A.J. Joint Migration Inversion: Simultaneous Determination of Velocity Fields and Depth Images Using All Orders of Scattering. Lead. Edge 2016, 35, 1037–1046. [Google Scholar] [CrossRef]
  8. Ishak, M.A.; Verschuur, D.J.; Ghazali, A.R. A Hybrid Fwi-Jmi for High Resolution Velocity Estimation. In Proceedings of the 81st EAGE Conference and Exhibition, London, UK, 3–6 June 2019. [Google Scholar]
  9. Grana, D.; Dvorkin, J.P. The Link between Seismic Inversion, Rock Physics, and Geostatistical Simulations in Seismic Reservoir Characterization Studies. Lead. Edge 2011, 30, 54–61. [Google Scholar] [CrossRef]
  10. Mavko, G.; Mukerji, T.; Dvorkin, J.P. The Rock Physics Handbook: Tools for Seismic Analysis of Porous Media; Cambridge University Press: Cambridge, UK, 1998. [Google Scholar]
  11. Ahmad Fuad, M.I.; Ahmad Munif, H.A. Regional Rock Physics Implementation for Enhanced Lithological and Fluid Predictions: A Case Study in Deep Reservoirs. In Proceedings of the 81st EAGE Conference and Exhibition, London, UK, 3–6 June 2019. [Google Scholar]
  12. Röth, G.; Tarantola, A. Neural Networks and Inversion of Seismic Data. J. Geophys. Res. Solid Earth 1994, 99, 6753–6768. [Google Scholar] [CrossRef]
  13. Kazei, V.; Ovcharenko, O.; Alkhalifah, T. Velocity Model Building by Deep Learning: From General Synthetics to Field Data Application. In SEG Technical Program Expanded Abstracts; Society of Exploration Geophysicists: Houston, TX, USA, 2020; pp. 1561–1565. [Google Scholar]
  14. Alzahrani, H.; Shragge, J. Seismic Velocity Model Building Using Neural Networks: Training Data Design and Learning Generalization. Geophysics 2022, 87, R193–R211. [Google Scholar]
  15. Yang, F.; Ma, J. Deep-Learning Inversion: A Next-Generation Seismic Velocity Model Building Method. Geophysics 2019, 84, R583–R599. [Google Scholar]
  16. Araya-Polo, M.; Jennings, J.; Adler, A.; Dahlke, T. Deep-Learning Tomography. Lead. Edge 2018, 37, 58–66. [Google Scholar] [CrossRef]
  17. Martin, T.; Bell, M. An Innovative Approach to Automation for Velocity Model Building. First Break 2019, 37, 57–65. [Google Scholar] [CrossRef]
  18. Gao, Z.; Li, C.; Zhang, B.; Jiang, X.; Pan, Z.; Gao, J.; Xu, Z. Building Large-Scale Density Model via a Deep-Learning-Based Data-Driven Method. Geophysics 2020, 86, M1–M15. [Google Scholar] [CrossRef]
  19. Das, V.; Pollack, A.; Wollner, U.; Mukerji, T. Effect of Rock Physics Modeling in Impedance Inversion from Seismic Data Using Convolutional Neural Network. In Proceedings of the 13th SEGJ International Symposium, Tokyo, Japan, 12–14 November 2018. [Google Scholar]
  20. Das, V.; Mukerji, T. Petrophysical Properties Prediction from Prestack Seismic Data Using Convolutional Neural Networks. Geophysics 2020, 85, N41–N55. [Google Scholar] [CrossRef]
  21. Weinzierl, W.; Wiese, B. Deep Learning a Poroelastic Rock-Physics Model for Pressure and Saturation Discrimination. Geophysics 2021, 86, MR53–MR66. [Google Scholar] [CrossRef]
  22. Isola, P.; Zhu, J.-Y.; Zhou, T.; Efros, A.A. Image-to-Image Translation with Conditional Adversarial Networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 5967–5976. [Google Scholar]
  23. Choi, Y.; Choi, M.; Kim, M.; Ha, J.-W.; Kim, S.; Choo, J. StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 8789–8797. [Google Scholar]
  24. Karras, T.; Laine, S.; Aila, T. A Style-Based Generator Architecture for Generative Adversarial Networks. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 4396–4405. [Google Scholar]
  25. Ledig, C.; Theis, L.; Huszár, F.; Caballero, J.; Aitken, A.P.; Tejani, A.; Totz, J.; Wang, Z.; Shi, W. Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 105–114. [Google Scholar]
  26. Wang, X.; Yu, K.; Wu, S.; Gu, J.; Liu, Y.; Dong, C.; Loy, C.C.; Qiao, Y.; Tang, X. ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops, Munich, Germany, 8–14 September 2018. [Google Scholar]
  27. Jiang, M.; Zhi, M.; Wei, L.; Yang, X.; Zhang, J.; Li, Y.; Wang, P.; Huang, J.; Yang, G. FA-GAN: Fused attentive generative adversarial networks for MRI image super-resolution. Comput. Med. Imaging Graph. Off. J. Comput. Med. Imaging Soc. 2021, 92, 101969. [Google Scholar] [CrossRef]
  28. Zhang, K.; Hu, H.; Philbrick, K.; Conte, G.M.; Sobek, J.; Rouzrokh, P.; Erickson, B. SOUP-GAN: Super-Resolution MRI Using Generative Adversarial Networks. Tomography 2021, 8, 905–919. [Google Scholar] [CrossRef]
  29. Cirillo, M.D.; Abramian, D.; Eklund, A. Vox2Vox: 3D-GAN for Brain Tumour Segmentation. arXiv 2020, arXiv:2003.13653. [Google Scholar]
  30. Skandarani, Y.; Jodoin, P.-M.; Lalande, A. GANs for Medical Image Synthesis: An Empirical Study. arXiv 2021, arXiv:2105.05318. [Google Scholar] [CrossRef]
  31. Calimeri, F.; Marzullo, A.; Stamile, C.; Terracina, G. Biomedical Data Augmentation Using Generative Adversarial Neural Networks. In Proceedings of the 26th International Conference on Artificial Neural Networks, Alghero, Italy, 11–14 September 2017; pp. 626–634. [Google Scholar]
  32. Kazeminia, S.; Baur, C.; Kuijper, A.; Ginneken, B.; Navab, N.; Albarqouni, S.; Mukhopadhyay, A. GANs for Medical Image Analysis. Artif. Intell. Med. 2020, 109, 101938. [Google Scholar] [CrossRef]
  33. Frid-Adar, M.; Diamant, I.; Klang, E.; Amitai, M.M.; Goldberger, J.; Greenspan, H. GAN-Based Synthetic Medical Image Augmentation for Increased CNN Performance in Liver Lesion Classification. Neurocomputing 2018, 321, 321–331. [Google Scholar] [CrossRef]
  34. Li, X.R.; Mitsakos, N.; Lu, P.; Xiao, Y.; Zhao, X. Seismic Compressive Sensing by Generative Inpainting Network: Toward an Optimized Acquisition Survey. Lead. Edge 2019, 38, 923–933. [Google Scholar]
  35. Liu, M.; Jervis, M.; Li, W.; Nivlet, P. Seismic Facies Classification Using Supervised Convolutional Neural Networks and Semisupervised Generative Adversarial Networks. Geophysics 2020, 85, O47–O58. [Google Scholar] [CrossRef]
  36. Dupont, E.; Zhang, T.; Tilke, P.; Liang, L.; Bailey, W.J. Generating Realistic Geology Conditioned on Physical Measurements with Generative Adversarial Networks. arXiv 2018, arXiv:1802.03065. [Google Scholar]
  37. Laloy, E.; Hérault, R.; Jacques, D.; Linde, N. Training-Image Based Geostatistical Inversion Using a Spatial Generative Adversarial Neural Network. Water Resour. Res. 2017, 54, 381–406. [Google Scholar] [CrossRef]
  38. Fehler, M.; Keliher, P.J. Chapter 2: Model Development. In SEAM Phase I: Challenges of Subsalt Imaging in Tertiary Basins, with Emphasis on Deepwater Gulf of Mexico; Society of Exploration Geophysicists: Tulsa, OK, USA, 2011; p. 7. [Google Scholar]
  39. Prechelt, L. Early Stopping-But When? In Neural Networks: Tricks of the Trade; Orr, G.B., Müller, K.-R., Eds.; Springer: Berlin/Heidelberg, Germany, 1998; pp. 55–69. [Google Scholar]
  40. Saxena, D.; Cao, J. Generative Adversarial Networks (GANs): Challenges, Solutions, and Future Directions. arXiv 2020, arXiv:2005.00065. [Google Scholar]
  41. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar]
  42. Ahmad Fuad, M.I.; Jaya, M.S.; Abdrahman, S.; Lew, C.L.; Law, M. Deep Learning Based Seismic Elastic Properties Inversion Guided by Rock Physics; ADIPEC: Abu Dhabi, United Arab Emirates, 2022. [Google Scholar]
Figure 1. A diagram for generator training. The generator takes the input and is trained to produce output that is similar and indistinguishable from the label data. The prediction is generated by the generator and the target is the label. The reconstruction loss is calculated based on the difference between the generated output and the label. The generated output is also sent to the discriminator to “fool” it. The adversarial loss is given by the ability of the discriminator to correctly classify the generated output as fake and the label as real. The total loss for the generator training is the sum of the reconstruction loss and the adversarial loss.
Figure 2. A diagram for the discriminator training. The discriminator loss is the adversarial loss, which is the ability of the discriminator to correctly classify the generated output as fake and the label as real. It is worth noting that in conditional GAN, the discriminator sees the input too.
Figure 3. The original pix2pix network. The original pix2pix is used to obtain the base model.
Figure 4. The architecture of the proposed method. Once the base model is obtained in Figure 3, the model is re-trained but this time a smaller kernel size for the discriminator is used.
Figure 5. The complete workflow of the proposed method. The required data were post-stack seismic, velocity, porosity, density, and water saturation. All the data were normalized to [0, 1] using min-max normalization. One thousand inlines were selected from these data and transformed into 256 × 256 patches. The patches were fed into the data loader and trained in random order using MS-Pix2Pix (a combination of two pix2pix networks with different kernel sizes). The final model was then selected and tested on unseen data.
Figure 6. Validation accuracy of the training of the original pix2pix. Model 1000 was selected as our base model as it gave the highest validation accuracy. Note that we stopped the training at epoch 1250 rather than 2000 as the accuracy trend was decreasing. As mentioned above, the generator combines two losses, which can lead to instability [40] in the training, hence the oscillation pattern seen in the accuracy curve.
Figure 7. Validation accuracy of the second network training with different kernel sizes. Model 0 gave the highest accuracy. Oscillation was also seen here.
Figure 8. (left): Inline 1. Simple geological section with a small salt body, (right): Inline 2. Complex geological section with a bigger salt body. The color bar is exaggerated to make the sections clearer.
Figure 9. Velocity prediction results of Inline 1. (a): Prediction from pix2pix. (b): Prediction from MS-pix2pix. (c): The ground truth. Visually, the MS-pix2pix showed better results. There was an artifact detected at the shallow part of the velocity model.
Figure 10. Velocity prediction results of Inline 2. (a): Prediction from pix2pix. (b): Prediction from MS-pix2pix. (c): The ground truth. Although MS-pix2pix produced a better prediction, the pre-salt velocity requires further refinement.
Figure 11. Porosity prediction results of Inline 1. (a): Prediction from pix2pix. (b): Prediction from MS-pix2pix. (c): The ground truth. The same shallow artifact was observed in the porosity model.
Figure 12. Porosity prediction results of Inline 2. (a): Prediction from pix2pix. (b): Prediction from MS-pix2pix. (c): The ground truth. In general, MS-pix2pix produced a better porosity model compared to pix2pix prediction. However, there was misprediction in the deep pre-salt formation.
Figure 13. Density prediction results of Inline 1. (a): Prediction from pix2pix. (b): Prediction from MS-pix2pix. (c): The ground truth. The same shallow artifact can be seen here too.
Figure 14. Density prediction results of Inline 2. (a): Prediction from pix2pix. (b): Prediction from MS-pix2pix. (c): The ground truth. MS-pix2pix showed a big improvement in the density model over pix2pix. However, there was misprediction inside the salt body.
Figure 15. Water saturation prediction results of Inline 1. (a): Prediction from pix2pix. (b): Prediction from MS-pix2pix. (c): The ground truth. MS-pix2pix improved pix2pix’s prediction except for the shallow part of the water saturation model.
Figure 16. Water saturation prediction results of Inline 2. (a): Prediction from pix2pix. (b): Prediction from MS-pix2pix. (c): The ground truth. MS-pix2pix corrected the structure predicted by pix2pix. However, it failed to detect the water saturation layer.
Figure 17. Seismic and subsurface property data used in the field data testing. The subsurface properties were generated via the rock physics method. The actual generated data contained only 160 samples; as pix2pix requires 256 samples, samples 161 to 256 were filled with samples 1 to 96.
Figure 18. The seismic section used as the input data for testing.
Figure 19. The accuracy metric. As we are interested in the right value at each location, we introduced element-wise comparison with a 5% margin of error. The values at the same location of the two images were compared. If the difference in value was within 5% of the true value, it was considered “correct”. The total number of “correct” predictions was then divided by the total number of elements to give us the final accuracy.
Figure 20. The pix2pix prediction of the field data test. The first row shows the result for velocity, followed by porosity in the second row, density in the third row, and water saturation in the last row. The first column shows the pix2pix result, followed by the ground truth in the second column and the difference in the third column. The velocity, porosity, and density predictions follow the geological structure of the ground truth, whereas the water saturation prediction is horizontal, cutting across the structure and far from the ground truth.
Figure 21. The MS-pix2pix prediction of the field data test. The first row shows the result for velocity, followed by porosity in the second row, density in the third row, and water saturation in the last row. The first column shows the MS-pix2pix result, followed by the ground truth in the second column and the difference in the third column. MS-pix2pix showed better predictions than pix2pix, as shown by the smaller range in the difference plot. Again, water saturation is the hardest to predict.
Table 1. Summary of the changes made to the original pix2pix.

Parameter         | Original Pix2pix | Proposed Method
Input channel     | 3                | 1
Output channel    | 3                | 4
Kernel size of G  | 4 × 4            | 4 × 4
Kernel size of D  | 4 × 4            | 4 × 4 and 3 × 3
Table 2. The prediction accuracy was calculated using cross-correlation and SSIM. MS-pix2pix produced higher-accuracy predictions for all properties and all inlines.

Properties                | Correlation (Pix2pix) | Correlation (MS-Pix2pix) | SSIM (Pix2pix) | SSIM (MS-Pix2pix)
Velocity Inline 1         | 0.70 | 0.91 | 0.77 | 0.93
Velocity Inline 2         | 0.91 | 0.99 | 0.90 | 0.97
Porosity Inline 1         | 0.40 | 0.83 | 0.54 | 0.90
Porosity Inline 2         | 0.82 | 0.98 | 0.78 | 0.92
Density Inline 1          | 0.97 | 0.99 | 0.93 | 0.99
Density Inline 2          | 0.97 | 0.99 | 0.95 | 0.98
Water saturation Inline 1 | 0.43 | 0.84 | 0.76 | 0.95
Water saturation Inline 2 | 0.77 | 0.95 | 0.89 | 0.95
Table 3. The prediction accuracy (%) of the field data testing. MS-pix2pix improved on the predictions of the pix2pix network.

Properties        | Pix2pix | MS-pix2pix
P-velocity        | 91.1    | 97.1
Porosity          | 21.6    | 33.1
Density           | 97.3    | 99.8
Water saturation  | 0.04    | 17.0
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
