10th Anniversary of Computation—Computational Engineering

A special issue of Computation (ISSN 2079-3197). This special issue belongs to the section "Computational Engineering".

Deadline for manuscript submissions: closed (31 March 2024) | Viewed by 49336

Special Issue Editors


Prof. Dr. Yudong Zhang
Guest Editor
School of Informatics, University of Leicester, Leicester LE1 7RH, UK
Interests: deep learning; artificial intelligence; machine learning

Dr. Francesco Cauteruccio
Guest Editor
Department of Information Engineering, Polytechnic University of Marche, 60121 Ancona, Italy
Interests: social and complex network analysis; Internet of Things; logic programming and methods for coupling inductive and deductive reasoning; advanced algorithms for sequences comparison; bioinformatics and medical informatics applications; data mining and data science

Special Issue Information

Dear Colleagues,

First published in 2013, Computation celebrates its 10th anniversary this year. To mark the occasion, we propose a Special Issue that aims to showcase cutting-edge research and advancements in the field of computational engineering. Over the past decade, the journal has been at the forefront of promoting innovative approaches in computational methods, simulations, and their applications in engineering. This Special Issue provides a valuable opportunity to highlight the most significant contributions made by researchers in this dynamic and rapidly evolving discipline.

The Special Issue will focus on original research articles, review papers, and case studies that encompass a wide range of topics related to computational engineering. We invite contributions that emphasize novel numerical methods, optimization techniques, data-driven approaches, and the integration of artificial intelligence and machine learning in engineering simulations. Moreover, we encourage submissions that explore the application of computational methods in diverse fields, including, but not limited to, fluid dynamics, structural analysis, materials science, renewable energy systems, and biomedical engineering.

Prof. Dr. Yudong Zhang
Dr. Francesco Cauteruccio
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Computation is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • computational engineering
  • big data
  • data analysis
  • complex engineering phenomena
  • optimization
  • computational design
  • multiphysics modeling

Published Papers (37 papers)

Research

20 pages, 1879 KiB  
Article
A Weighted and Epsilon-Constraint Biased-Randomized Algorithm for the Biobjective TOP with Prioritized Nodes
by Lucia Agud-Albesa, Neus Garrido, Angel A. Juan, Almudena Llorens and Sandra Oltra-Crespo
Computation 2024, 12(4), 84; https://doi.org/10.3390/computation12040084 - 20 Apr 2024
Viewed by 290
Abstract
This paper addresses a multiobjective version of the Team Orienteering Problem (TOP). The TOP focuses on selecting a subset of customers for maximum rewards while considering time and fleet size constraints. This study extends the TOP by considering two objectives: maximizing total rewards from customer visits and maximizing visits to prioritized nodes. The MultiObjective TOP (MO-TOP) is formulated mathematically to tackle these objectives concurrently. A multistart biased-randomized algorithm is proposed to solve the MO-TOP, integrating exploration and exploitation techniques. The algorithm employs a constructive heuristic that defines a biefficiency measure to select the edges of the routing plans. Through iterative exploration from various starting points, the algorithm converges to high-quality solutions. The Pareto frontier for the MO-TOP is generated using the weighted method, the epsilon-constraint method, and the epsilon-modified method. Computational experiments validate the proposed approach’s effectiveness, illustrating its ability to generate diverse, high-quality solutions on the Pareto frontier. The algorithms demonstrate the ability to optimize rewards and prioritize node visits, offering valuable insights for real-world decision making in team orienteering applications.
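
For orientation, the epsilon-constraint method named above generates Pareto-optimal points of a biobjective problem by optimizing one objective while bounding the other; in generic form (our gloss, not a formula quoted from the paper):

$$\max_{x \in X} f_1(x) \quad \text{subject to} \quad f_2(x) \ge \varepsilon,$$

where sweeping $\varepsilon$ over the attainable range of $f_2$ traces out the Pareto frontier, while the weighted method instead maximizes $w_1 f_1(x) + w_2 f_2(x)$ for varying nonnegative weights.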

10 pages, 1126 KiB  
Article
Application of Machine Learning to Predict Blockage in Multiphase Flow
by Nazerke Saparbayeva, Boris V. Balakin, Pavel G. Struchalin, Talal Rahman and Sergey Alyaev
Computation 2024, 12(4), 67; https://doi.org/10.3390/computation12040067 - 31 Mar 2024
Viewed by 640
Abstract
This study presents a machine-learning-based approach to predict blockage in multiphase flow with cohesive particles. The aim is to predict blockage from parameters such as the Reynolds and capillary numbers using a random forest classifier trained on experimental and simulation data. The experimental observations come from a lab-scale flow loop with ice slurry in decane. The plugging simulation is based on Computational Fluid Dynamics coupled with the Discrete Element Method (CFD-DEM). The resulting classifier demonstrated high accuracy, validated by precision, recall, and F1-score metrics, providing precise blockage prediction under specific flow conditions. Additionally, sensitivity analyses highlighted the model’s adaptability to variations in cohesion. Equipped with the trained classifier, we generated a detailed machine-learning-based flow map and compared it with earlier literature, simulation, and experimental results. This graphical representation clarifies the blockage boundaries under given conditions. The methodology’s success demonstrates the potential of advanced predictive modelling in diverse flow systems, contributing to improved blockage prediction and prevention.
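
To illustrate the kind of classifier described (a minimal sketch with invented feature names and a toy dataset, not the authors' code or data), a random forest predicting blockage from dimensionless flow parameters could be set up as follows:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

# Hypothetical samples: each row holds (Reynolds number, capillary number);
# each label is 1 if the flow loop plugged, 0 otherwise.
rng = np.random.default_rng(0)
X = rng.uniform([1e2, 1e-4], [1e5, 1e-1], size=(500, 2))
y = (X[:, 0] < 1e4) & (X[:, 1] > 1e-3)  # toy blockage rule, illustration only

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

pred = clf.predict(X_test)
print(precision_score(y_test, pred), recall_score(y_test, pred), f1_score(y_test, pred))
```

A flow map like the one in the paper can then be drawn by evaluating the trained classifier on a grid of (Reynolds, capillary) pairs and contouring the predicted class.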

19 pages, 12184 KiB  
Article
Generalized Approach to Optimal Polylinearization for Smart Sensors and Internet of Things Devices
by Marin B. Marinov and Slav Dimitrov
Computation 2024, 12(4), 63; https://doi.org/10.3390/computation12040063 - 23 Mar 2024
Viewed by 621
Abstract
This study introduces an innovative numerical approach for polylinear approximation (polylinearization) of non-self-intersecting compact sensor characteristics (transfer functions) specified either pointwise or analytically. The goal is to partition the sensor characteristic optimally, i.e., to select the number of vertices of the approximating polyline (approximant) along with their positions on the characteristic, so that the distance (i.e., the separation) between the approximant and the characteristic falls below a certain problem-specific tolerance. To achieve this goal, two alternative nonlinear optimization problems are solved, which differ in the adopted quantitative measure of the separation between the transfer function and the approximant. In the first problem, which relates to absolutely integrable sensor characteristics (their energy is not necessarily finite, but they can be represented in terms of convergent Fourier series), the polylinearization is constructed by numerically minimizing the L1-metric (a distance-based separation measure) with respect to the number of polyline vertices and their locations. In the second problem, which covers quadratically integrable sensor characteristics (whose energy is finite, but which do not necessarily admit a representation in terms of convergent Fourier series), the polylinearization is constructed by numerically minimizing the L2-metric (an area- or energy-based separation measure) over the same set of optimization variables: the locations and the number of polyline vertices.
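
In standard notation (our paraphrase, not the paper's equations), the two optimization problems minimize, over the number $n$ and positions $v_1, \dots, v_n$ of the polyline vertices,

$$\int_a^b \lvert f(t) - p(t) \rvert \, dt \quad (L_1) \qquad \text{or} \qquad \left( \int_a^b \bigl( f(t) - p(t) \bigr)^2 \, dt \right)^{1/2} \quad (L_2),$$

where $f$ is the sensor characteristic and $p$ the polyline through the chosen vertices, subject to the separation staying below the problem-specific tolerance.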

13 pages, 557 KiB  
Article
Exploring Numba and CuPy for GPU-Accelerated Monte Carlo Radiation Transport
by Tair Askar, Argyn Yergaliyev, Bekdaulet Shukirgaliyev and Ernazar Abdikamalov
Computation 2024, 12(3), 61; https://doi.org/10.3390/computation12030061 - 20 Mar 2024
Viewed by 785
Abstract
This paper examines the performance of two popular GPU programming platforms, Numba and CuPy, for Monte Carlo radiation transport calculations. We conducted tests involving random number generation and one-dimensional Monte Carlo radiation transport in plane-parallel geometry on three GPU cards: NVIDIA Tesla A100, Tesla V100, and GeForce RTX3080. We compared Numba and CuPy to each other and to our CUDA C implementation. The results show that CUDA C, as expected, has the fastest performance and highest energy efficiency, while Numba offers comparable performance when data movement is minimal. While CuPy offers ease of implementation, it performs slower for compute-heavy tasks.
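
A minimal sketch (not the benchmark code from the paper) contrasting the two platforms on the random number generation task:

```python
import math
import numpy as np
import cupy as cp
from numba import cuda
from numba.cuda.random import (create_xoroshiro128p_states,
                               xoroshiro128p_uniform_float32)

n = 10_000_000

# CuPy: one line of array-style code, convenient but with little kernel control.
samples_cupy = cp.random.random(n, dtype=cp.float32)

# Numba: an explicit CUDA kernel with per-thread RNG state, closer to CUDA C.
@cuda.jit
def fill_uniform(states, out):
    i = cuda.grid(1)
    if i < out.size:
        out[i] = xoroshiro128p_uniform_float32(states, i)

out = cuda.device_array(n, dtype=np.float32)
threads = 256
blocks = math.ceil(n / threads)
states = create_xoroshiro128p_states(threads * blocks, seed=1)
fill_uniform[blocks, threads](states, out)
```

The trade-off reported in the abstract mirrors this split: CuPy favors brevity, while Numba's explicit kernel style keeps data on the device and approaches CUDA C performance.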

13 pages, 298 KiB  
Article
Practical Improvement in the Implementation of Two Avalanche Tests to Measure Statistical Independence in Stream Ciphers
by Evaristo José Madarro-Capó, Eziel Christians Ramos Piñón, Guillermo Sosa-Gómez and Omar Rojas
Computation 2024, 12(3), 60; https://doi.org/10.3390/computation12030060 - 19 Mar 2024
Viewed by 731
Abstract
This study describes the implementation of two algorithms in a parallel environment. These algorithms correspond to two statistical tests based on the bit independence criterion and the strict avalanche criterion, and they are utilized to measure avalanche properties in stream ciphers. These criteria allow the statistical independence between the outputs and the internal state of a bit-level cipher to be determined. Both tests require extensive input parameters to assess the performance of current stream ciphers, leading to long execution times. The presented implementation significantly reduces the execution time of both tests, making them suitable for evaluating ciphers in practical applications. The evaluation results compare the performance of the RC4 and HC256 stream ciphers in both sequential and parallel environments.

18 pages, 4576 KiB  
Article
High-Compression Crash Simulations and Tests of PLA Cubes Fabricated Using Additive Manufacturing FDM with a Scaling Strategy
by Andres-Amador Garcia-Granada
Computation 2024, 12(3), 40; https://doi.org/10.3390/computation12030040 - 23 Feb 2024
Viewed by 1111
Abstract
Impacts due to drops or crashes between moving vehicles necessitate the search for energy absorption elements to prevent damage to the transported goods or individuals. To ensure safety, a given level of acceptable deceleration is specified. The optimization of deformable parts to absorb impact energy is typically conducted through explicit simulations, in which kinetic energy is converted into plastic deformation energy. The introduction of additive manufacturing techniques enables this optimization to be conducted with more efficient shapes, previously unachievable with conventional manufacturing methods. This paper presents an initial approach to validating explicit simulations of impacts against solid cubes of varying sizes and fabrication directions. The cubes were fabricated from PLA, the most widely used 3D printing material, on a desktop printer. All simulations could be conducted using a single material law description, employing solid elements with a controlled time step suitable for industrial applications. With this approach, the simulations were capable of predicting deceleration levels across a broad range of impact configurations for solid cubes.

13 pages, 864 KiB  
Article
Injury Patterns and Impact on Performance in the NBA League Using Sports Analytics
by Vangelis Sarlis, George Papageorgiou and Christos Tjortjis
Computation 2024, 12(2), 36; https://doi.org/10.3390/computation12020036 - 16 Feb 2024
Viewed by 1757
Abstract
This research paper examines Sports Analytics, focusing on injury patterns in the National Basketball Association (NBA) and their impact on players’ performance. It employs a unique dataset to identify common NBA injuries, determine the most affected anatomical areas, and analyze how these injuries influence players’ post-recovery performance. This study’s novelty lies in its integrative approach that combines injury data with performance metrics and salary data, providing new insights into the relationship between injuries and economic and on-court performance. It investigates the periodicity and seasonality of injuries, seeking patterns related to time and external factors. Additionally, it examines the effect of specific injuries on players’ per-match analytics and performance, offering perspectives on the implications of injury rehabilitation for player performance. This paper contributes significantly to sports analytics, assisting coaches, sports medicine professionals, and team management in developing injury prevention strategies, optimizing player rotations, and creating targeted rehabilitation plans. Its findings illuminate the interplay between injuries, salaries, and performance in the NBA, aiming to enhance player welfare and the league’s overall competitiveness. With a comprehensive and sophisticated analysis, this research offers unprecedented insights into the dynamics of injuries and their long-term effects on athletes.

17 pages, 709 KiB  
Article
Accelerating Multiple Sequence Alignments Using Parallel Computing
by Qanita Bani Baker, Ruba A. Al-Hussien and Mahmoud Al-Ayyoub
Computation 2024, 12(2), 32; https://doi.org/10.3390/computation12020032 - 09 Feb 2024
Viewed by 1386
Abstract
Multiple sequence alignment (MSA) stands as a critical tool for understanding the evolutionary and functional relationships among biological sequences. Obtaining an exact solution for MSA, termed exact-MSA, is a significant challenge due to the combinatorial nature of the problem, and solving MSA by dynamic programming is highly computationally demanding. Parallel computing offers the potential for significant speedup. In this study, we investigated the use of parallelization to solve the exact-MSA via three proposed novel approaches, in which multi-threading is used to improve the performance of the dynamic programming algorithms. The three parallel approaches, named diagonal traversing, blocking, and slicing, accelerated the exact-MSA algorithm by around 4×. These approaches could serve as foundational elements, offering potential integration with existing techniques for comprehensive MSA enhancement.
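
To illustrate the "diagonal traversing" idea on the simplest case (a pairwise-alignment sketch of ours; the paper targets the multi-sequence problem), cells on the same anti-diagonal of the dynamic programming matrix have no mutual dependencies and can be filled concurrently:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def align_scores(a, b, match=1, mismatch=-1, gap=-2):
    """Needleman-Wunsch score matrix filled wave by wave along anti-diagonals."""
    m, n = len(a), len(b)
    H = np.zeros((m + 1, n + 1), dtype=np.int64)
    H[:, 0] = gap * np.arange(m + 1)
    H[0, :] = gap * np.arange(n + 1)

    def fill(i, j):
        s = match if a[i - 1] == b[j - 1] else mismatch
        H[i, j] = max(H[i - 1, j - 1] + s, H[i - 1, j] + gap, H[i, j - 1] + gap)

    with ThreadPoolExecutor() as pool:
        # Anti-diagonal d holds the cells (i, j) with i + j == d; they depend
        # only on diagonals d-1 and d-2, so each diagonal is one parallel wave.
        # (CPython's GIL limits real speedup in this sketch; the dependency
        # pattern is the point, and the paper uses native multi-threading.)
        for d in range(2, m + n + 1):
            cells = [(i, d - i) for i in range(max(1, d - n), min(m, d - 1) + 1)]
            list(pool.map(lambda ij: fill(*ij), cells))
    return H
```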

21 pages, 8030 KiB  
Article
Numerical Modeling and Analysis of Transient and Three-Dimensional Heat Transfer in 3D Printing via Fused-Deposition Modeling (FDM)
by Büryan Apaçoğlu-Turan, Kadir Kırkköprü and Murat Çakan
Computation 2024, 12(2), 27; https://doi.org/10.3390/computation12020027 - 05 Feb 2024
Cited by 1 | Viewed by 1339
Abstract
Fused-Deposition Modeling (FDM) is a commonly used 3D printing method for rapid prototyping and the fabrication of plastic components. The history of temperature variation during the FDM process plays a crucial role in the degree of bonding between layers. This study presents research on the thermal analysis of the 3D printing process using a developed simulation code. The code employs numerical discretization methods with an implicit scheme and an effective heat transfer coefficient for cooling. The computational model is validated by comparing the results with analytical solutions, demonstrating an agreement of more than 99%. The code is then utilized to perform thermal analyses for the 3D printing process. Interlayer and intralayer reheating effects, sensitivity to printing parameters, and realistic printing patterns are investigated. It is shown that concentric and zigzag paths yield similar peaks at different time intervals. Nodal temperatures can fall below the glass transition temperature (Tg) during the printing process, especially at the outer nodes of the domain and under conditions where the cooling period is longer and the printed volume per unit time is smaller. The article suggests future work to calculate welding time at different conditions and locations for the estimation of the degree of bonding.
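
As a gloss on the numerical ingredients named above (a standard form under our assumptions, not an equation quoted from the paper), an implicit (backward Euler) update for a node $i$ cooled through an effective heat transfer coefficient $h_{\mathrm{eff}}$ reads

$$\rho c_p \, \frac{T_i^{n+1} - T_i^{n}}{\Delta t} = k \, \nabla^2 T_i^{n+1} - \frac{h_{\mathrm{eff}} A_i}{V_i} \left( T_i^{n+1} - T_\infty \right),$$

with all spatial terms evaluated at the new time level $n+1$, which is what makes the scheme implicit and unconditionally stable.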

15 pages, 1266 KiB  
Article
Two Iterative Methods for Sizing Pipe Diameters in Gas Distribution Networks with Loops
by Dejan Brkić
Computation 2024, 12(2), 25; https://doi.org/10.3390/computation12020025 - 01 Feb 2024
Viewed by 1417
Abstract
Closed-loop pipe systems allow the possibility of the flow of gas from both directions across each route, ensuring supply continuity in the event of a failure at one point, but their main shortcoming is in the necessity to model them using iterative methods. Two iterative methods of determining the optimal pipe diameter in a gas distribution network with closed loops are described in this paper, offering the advantage of maintaining the gas velocity within specified technical limits, even during peak demand. They are based on the following: (1) a modified Hardy Cross method with the correction of the diameter in each iteration and (2) the node-loop method, which provides a new diameter directly in each iteration. The calculation of the optimal pipe diameter in such gas distribution networks relies on ensuring mass continuity at nodes, following the first Kirchhoff law, and concluding when the pressure drops in all the closed paths are algebraically balanced, adhering to the second Kirchhoff law for energy equilibrium. The presented optimisation is based on principles developed by Hardy Cross in the 1930s for the moment distribution analysis of statically indeterminate structures. The results are for steady-state conditions and for the highest possible estimated demand of gas, while the distributed gas is treated as a noncompressible fluid due to the relatively small drop in pressure in a typical network of pipes. There is no unique solution; instead, an infinite number of potential outcomes exist, alongside infinite combinations of pipe diameters for a given fixed flow pattern that can satisfy the first and second Kirchhoff laws in the given topology of the particular network at hand.
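
For reference, the classical Hardy Cross correction applied to the flow in each loop at every iteration is (standard form; the paper's first method adapts the idea to correct pipe diameters instead):

$$\Delta Q = - \frac{\sum_j r_j \, Q_j \lvert Q_j \rvert^{\,n-1}}{n \sum_j r_j \lvert Q_j \rvert^{\,n-1}},$$

where the sums run over the pipes of the loop, $r_j Q_j \lvert Q_j \rvert^{\,n-1}$ is the signed pressure drop of pipe $j$, and $n$ is the flow exponent of the pressure-drop formula.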

17 pages, 381 KiB  
Article
Exploring Controlled Passive Particle Motion Driven by Point Vortices on a Sphere
by Carlos Balsa, M. Victoria Otero-Espinar and Sílvio Gama
Computation 2024, 12(2), 23; https://doi.org/10.3390/computation12020023 - 31 Jan 2024
Viewed by 1172
Abstract
This work focuses on optimizing the displacement of a passive particle interacting with vortices located on the surface of a sphere. The goal is to minimize the energy expended during the displacement within a fixed time. Modeling the particle dynamics in Cartesian or in spherical coordinates gives rise to alternative formulations of the identical problem. Thanks to these two versions of the same problem, we can assert that the algorithm employed to transform the optimal control problem into an optimization problem is effective, as evidenced by the obtained controls. The numerical resolution of these formulations through a direct approach consistently produces optimal solutions, regardless of the selected coordinate system.
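
In the textbook model underlying such studies (stated here for orientation, not reproduced from the paper), a passive particle at position $\mathbf{x}$ on the unit sphere is advected by $N$ point vortices at positions $\mathbf{x}_i$ with circulations $\Gamma_i$ according to

$$\dot{\mathbf{x}} = \frac{1}{4\pi} \sum_{i=1}^{N} \Gamma_i \, \frac{\mathbf{x}_i \times \mathbf{x}}{1 - \mathbf{x} \cdot \mathbf{x}_i},$$

and the control problem adds a bounded actuation to this drift whose energy is minimized over the fixed transfer time.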

23 pages, 441 KiB  
Article
Maxwell’s True Current
by Robert S. Eisenberg
Computation 2024, 12(2), 22; https://doi.org/10.3390/computation12020022 - 31 Jan 2024
Viewed by 1213
Abstract
Maxwell defined a ‘true’ or ‘total’ current in a way not widely used today. He said that “… true electric current … is not the same thing as the current of conduction but that the time-variation of the electric displacement must be taken into account in estimating the total movement of electricity”. We show that the true or total current is a universal property of electrodynamics independent of the properties of matter. We use mathematics without the approximation of a dielectric constant. The resulting Maxwell current law is a generalization of the Kirchhoff law of current used in circuit analysis which also includes the displacement current. The generalization is not a long-time low-frequency approximation, in contrast to the traditional presentation of Kirchhoff’s law.
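
In modern notation, the quantity Maxwell called the true current, and the conservation law it obeys as a consequence of the Ampère–Maxwell equation $\nabla \times \mathbf{H} = \mathbf{J} + \partial \mathbf{D} / \partial t$, are

$$\mathbf{J}_{\mathrm{true}} = \mathbf{J}_{\mathrm{cond}} + \frac{\partial \mathbf{D}}{\partial t}, \qquad \nabla \cdot \mathbf{J}_{\mathrm{true}} = \nabla \cdot (\nabla \times \mathbf{H}) = 0,$$

so the true current is exactly solenoidal at every instant, which is the generalization of Kirchhoff's current law referred to in the abstract.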
22 pages, 3189 KiB  
Article
A Technical Comparative Heart Disease Prediction Framework Using Boosting Ensemble Techniques
by Najmu Nissa, Sanjay Jamwal and Mehdi Neshat
Computation 2024, 12(1), 15; https://doi.org/10.3390/computation12010015 - 16 Jan 2024
Viewed by 1697
Abstract
This paper addresses the global surge in heart disease prevalence and its impact on public health, stressing the need for accurate predictive models. The timely identification of individuals at risk of developing cardiovascular ailments is paramount for implementing preventive measures and timely interventions. The World Health Organization (WHO) reports that cardiovascular diseases, responsible for an alarming 17.9 million annual fatalities, constitute a significant 31% of the global mortality rate. The intricate clinical landscape, characterized by inherent variability and a complex interplay of factors, poses challenges for accurately diagnosing the severity of cardiac conditions and predicting their progression. Consequently, early identification emerges as a pivotal factor in the successful treatment of heart-related ailments. This research presents a comprehensive framework for the prediction of cardiovascular diseases, leveraging advanced boosting techniques and machine learning methodologies, including CatBoost, Random Forest, Gradient Boosting, LightGBM, and AdaBoost. Focusing on early heart disease prediction using boosting techniques, this paper aims to contribute to the development of robust models capable of reliably forecasting cardiovascular health risks. Model performance is rigorously assessed using a substantial dataset on heart illnesses from the UCI machine learning repository. With 26 feature-based numerical and categorical variables, this dataset encompasses 8763 samples collected globally. The empirical findings highlight AdaBoost as the preeminent performer, achieving a notable accuracy of 95% and excelling in metrics such as negative predictive value (0.83), false positive rate (0.04), false negative rate (0.04), and false discovery rate (0.01). These results underscore AdaBoost’s superiority in predictive accuracy and overall performance compared to alternative algorithms, contributing valuable insights to the field of cardiovascular health prediction.
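
For clarity, the confusion-matrix metrics quoted above have the standard definitions (our addition, with TP, TN, FP, FN the true/false positive/negative counts):

$$\mathrm{NPV} = \frac{TN}{TN + FN}, \qquad \mathrm{FPR} = \frac{FP}{FP + TN}, \qquad \mathrm{FNR} = \frac{FN}{FN + TP}, \qquad \mathrm{FDR} = \frac{FP}{FP + TP}.$$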

21 pages, 5167 KiB  
Article
Enhancement of Machine-Learning-Based Flash Calculations near Criticality Using a Resampling Approach
by Eirini Maria Kanakaki, Anna Samnioti and Vassilis Gaganis
Computation 2024, 12(1), 10; https://doi.org/10.3390/computation12010010 - 09 Jan 2024
Cited by 1 | Viewed by 1453
Abstract
Flash calculations are essential in reservoir engineering applications, most notably in compositional flow simulation and separation processes, to provide phase distribution factors, known as k-values, at a given pressure and temperature. The calculation output is subsequently used to estimate composition-dependent properties of interest, such as the equilibrium phases’ molar fraction, composition, density, and compressibility. However, when the flash conditions approach criticality, minor inaccuracies in the computed k-values may lead to significant deviations in the dependent properties, which are eventually inherited by the simulator, leading to large errors in the simulation. Although several machine-learning-based regression approaches have emerged to drastically accelerate flash calculations, the criticality issue persists. To address this problem, a novel resampling technique for the ML models’ training data population is proposed, which aims to fine-tune the training dataset distribution and optimally exploit the models’ learning capacity across various flash conditions. The results demonstrate significantly improved accuracy in predicting phase behavior near criticality, offering valuable contributions not only to the subsurface reservoir engineering industry but also to the broader field of thermodynamics. By understanding and optimizing the model’s training, this research enables more precise predictions and better-informed decision-making processes in domains involving phase separation phenomena. The proposed technique is applicable to any ML-dominated regression problem where properties dependent on the machine output, rather than the model output itself, are of interest.

16 pages, 4913 KiB  
Article
LSTM Reconstruction of Turbulent Pressure Fluctuation Signals
by Konstantinos Poulinakis, Dimitris Drikakis, Ioannis W. Kokkinakis, S. Michael Spottswood and Talib Dbouk
Computation 2024, 12(1), 4; https://doi.org/10.3390/computation12010004 - 01 Jan 2024
Viewed by 1683
Abstract
This paper concerns the application of a long short-term memory (LSTM) model for high-resolution reconstruction of turbulent pressure fluctuation signals from sparse (reduced) data. The model’s training was performed using data from high-resolution computational fluid dynamics (CFD) simulations of high-speed turbulent boundary layers over a flat panel. During the preprocessing stage, we employed cubic spline functions to increase the fidelity of the sparse signals and subsequently fed them to the LSTM model for a precise reconstruction. We evaluated our reconstruction method with the root mean squared error (RMSE) metric and via inspection of power spectrum plots. Our study reveals that the model achieved a precise high-resolution reconstruction of the training signal and could be transferred to new, unseen signals of a similar nature with very high success. The numerical simulations show promising results for complex turbulent signals, which may be produced experimentally or computationally.
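
A minimal sketch of the preprocessing step described (with an invented signal and array names; the LSTM itself is omitted), using cubic splines to upsample a sparse pressure signal before it is fed to the network:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical sparse signal: pressure fluctuations sampled at a reduced rate.
t_sparse = np.linspace(0.0, 1.0, 64)
p_sparse = np.sin(40 * t_sparse) + 0.1 * np.random.default_rng(0).normal(size=64)

# Cubic-spline upsampling to the target high resolution; the dense signal is
# what an LSTM would receive for the final reconstruction.
t_dense = np.linspace(0.0, 1.0, 1024)
p_dense = CubicSpline(t_sparse, p_sparse)(t_dense)

# Shape expected by recurrent layers: (batch, time steps, features).
lstm_input = p_dense.reshape(1, -1, 1)
```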

20 pages, 2411 KiB  
Article
MSVR & Operator-Based System Design of Intelligent MIMO Sensorless Control for Microreactor Devices
by Tatsuma Kato, Kosuke Nishizawa and Mingcong Deng
Computation 2024, 12(1), 2; https://doi.org/10.3390/computation12010002 - 25 Dec 2023
Viewed by 1317
Abstract
Recently, microreactors, which are tubular reactors capable of fast and highly efficient chemical reactions, have attracted attention. However, precise temperature control is required because temperature changes due to reaction heat can cause reactions to proceed differently from those designed. In a previous study, a single-input/output nonlinear control system was proposed using a model in which the microreactor is divided into three regions and the thermal equation is formulated considering the temperature gradient, but it could not control two different temperatures. In this paper, a multi-input, multi-output nonlinear control system was designed using operator theory. On the other hand, when the number of parallel microreactors is increased, a sensorless control method using M–SVR with a generalized Gaussian kernel was incorporated into the MIMO nonlinear control system from the viewpoint of cost reduction, and the effectiveness of the proposed method was confirmed via experimental results.

34 pages, 5630 KiB  
Article
Shear-Enhanced Compaction Analysis of the Vaca Muerta Formation
by José G. Hasbani, Evan M. C. Kias, Roberto Suarez-Rivera and Victor M. Calo
Computation 2023, 11(12), 250; https://doi.org/10.3390/computation11120250 - 10 Dec 2023
Viewed by 1492
Abstract
The laboratory measurements conducted on Vaca Muerta formation samples demonstrate stress-dependent elastic behavior and compaction under representative in situ conditions. The experimental results reveal that the analyzed samples display elastoplastic deformation and shear-enhanced compaction as primary plasticity mechanisms. These experimental findings contradict the expected linear elastic response anticipated before brittle failure, as reported in several studies on the geomechanical characterization of the Vaca Muerta formation. Therefore, we present a comprehensive laboratory analysis of Vaca Muerta formation samples showing their nonlinear elastic behavior and irrecoverable shear-enhanced compaction. Additionally, we calibrate an elastoplastic constitutive model based on these experimental observations. The resulting model accurately reproduces the observed phenomena, playing a pivotal role in geoengineering applications within the energy industry.

18 pages, 6424 KiB  
Article
Effects of Running in Minimal, Maximal and Conventional Footwear on Tibial Stress Fracture Probability: An Examination Using Finite Element and Probabilistic Analyses
by Jonathan Sinclair and Paul John Taylor
Computation 2023, 11(12), 248; https://doi.org/10.3390/computation11120248 - 06 Dec 2023
Viewed by 1596
Abstract
This study examined the effects of minimal, maximal and conventional running footwear on tibial strains and stress fracture probability using finite element and probabilistic analyses. The current investigation examined fifteen males running in three footwear conditions (minimal, maximal and conventional). Kinematic data were collected during overground running at 4.0 m/s using an eight-camera motion-capture system and ground reaction forces using a force plate. Tibial strains were quantified using finite element modelling and stress fracture probability calculated via probabilistic modelling over 100 days of running. Ninetieth percentile tibial strains were significantly greater in minimal (4681.13 με) (p < 0.001) and conventional (4498.84 με) (p = 0.007) footwear compared to maximal (4069.65 με). Furthermore, tibial stress fracture probability was significantly greater in minimal footwear (0.22) (p = 0.047) compared to maximal (0.15). The observations from this investigation show that compared to minimal footwear, maximal running shoes appear to be effective in attenuating runners’ likelihood of developing a tibial stress fracture.

26 pages, 7420 KiB  
Article
Design and Implementation of a Camera-Based Tracking System for MAV Using Deep Learning Algorithms
by Stefan Hensel, Marin B. Marinov and Raphael Panter
Computation 2023, 11(12), 244; https://doi.org/10.3390/computation11120244 - 04 Dec 2023
Viewed by 1562
Abstract
In recent years, the advancement of micro-aerial vehicles has been rapid, leading to their widespread utilization across various domains due to their adaptability and efficiency. This research paper focuses on the development of a camera-based tracking system specifically designed for low-cost drones. The primary objective of this study is to build a system capable of detecting objects and locating them on a map in real time. Detection and positioning are achieved solely through the drone’s camera and sensors. To accomplish this goal, several deep learning algorithms are assessed and adopted based on their suitability for the system. Object detection is based upon a single-shot detector architecture, chosen for maximum computation speed, and tracking is based upon deep neural-network-based features combined with an efficient sorting strategy. Subsequently, the developed system is evaluated using diverse metrics to determine its detection and tracking performance. To further validate the approach, the system is deployed in the real world in two distinct scenarios chosen to adjust the algorithms and system setup: a search and rescue scenario with user interaction and precise geolocalization of missing objects, and a livestock control scenario, showing the capability of surveying individual members and keeping track of their number and area. The results demonstrate that the system is capable of operating in real time, and the evaluation verifies that the implemented system enables precise and reliable determination of detected object positions. The ablation studies prove that object identification through small variations in phenotypes is feasible with our approach.

24 pages, 571 KiB  
Article
Wind Farm Cable Connection Layout Optimization Using a Genetic Algorithm and Integer Linear Programming
by Eduardo J. Solteiro Pires, Adelaide Cerveira and José Baptista
Computation 2023, 11(12), 241; https://doi.org/10.3390/computation11120241 - 03 Dec 2023
Viewed by 1398
Abstract
This work addresses wind farm (WF) layout optimization considering several substations. Given a set of wind turbines jointly with a set of substations, the goal is to obtain the optimal design that minimizes the infrastructure cost and the cost of electrical energy losses during the wind farm lifetime. The turbine set is partitioned into subsets to assign to each substation. The cable type and the connections to collect wind-turbine-produced energy, forwarding it to the corresponding substation, are selected in each subset. The proposed technique uses a genetic algorithm (GA) and an integer linear programming (ILP) model simultaneously. The GA creates a partition of the turbine set and assigns each of the obtained subsets to a substation to optimize a fitness function that corresponds to the minimum total cost of the WF layout. The fitness function evaluation requires solving an ILP model for each substation to determine the optimal cable connection layout. This methodology is applied to four onshore WFs. The obtained results show that the proposed approach achieves economic savings of up to 0.17% when compared to clustering with an ILP approach (an exact approach).

14 pages, 18797 KiB  
Article
Effects of the Number of Classes and Pressure Map Resolution on Fine-Grained In-Bed Posture Classification
by Luís Fonseca, Fernando Ribeiro and José Metrôlho
Computation 2023, 11(12), 239; https://doi.org/10.3390/computation11120239 - 02 Dec 2023
Viewed by 1316
Abstract
In-bed posture classification has attracted considerable research interest and has significant potential to enhance healthcare applications. Recent works generally use approaches based on pressure maps and machine learning algorithms and focus mainly on finding solutions to obtain high accuracy in posture classification. Typically, these solutions use different datasets with varying numbers of sensors and classify the four main postures (supine, prone, left-facing, and right-facing) or, in some cases, include some variants of those main postures. Accordingly, this article has three main objectives: (i) fine-grained detection of the postures of bedridden people, identifying a large number of postures, including small variations; considering 28 different postures, considerably more than in any other related work, helps to identify the actual position of the bedridden person with higher accuracy; (ii) analysis of the impact of pressure map resolution on posture classification accuracy, which has not been addressed in other studies; and (iii) use of the PoPu dataset, which includes pressure maps from 60 participants in 28 different postures. The dataset was analyzed using five distinct ML algorithms (k-nearest neighbors, linear support vector machines, decision tree, random forest, and multi-layer perceptron). This study’s findings show that the used algorithms achieve high accuracy in 4-posture classification (up to 99% in the case of MLP) using the PoPu dataset, with lower accuracies when attempting the finer-grained 28-posture classification (up to 68% in the case of random forest). The results indicate that using ML algorithms for finer-grained applications makes it possible to specify the patient’s exact position to some degree, since the parent posture is still accurately classified. Furthermore, reducing the resolution of the pressure maps seems to affect the classifiers only slightly, which suggests that a lower resolution might suffice for applications that do not need finer granularity.

15 pages, 585 KiB  
Article
Development of AI-Based Tools for Power Generation Prediction
by Ana Paula Aravena-Cifuentes, Jose David Nuñez-Gonzalez, Andoni Elola and Malinka Ivanova
Computation 2023, 11(11), 232; https://doi.org/10.3390/computation11110232 - 16 Nov 2023
Viewed by 1430
Abstract
This study presents a model for predicting photovoltaic power generation based on meteorological, temporal and geographical variables, without using irradiance values, which have traditionally posed challenges and difficulties for accurate predictions. Validation methods and evaluation metrics are used to analyse four different approaches that vary in the distribution of the training and test database, and whether or not location-independent modelling is performed. The coefficient of determination, R2, is used to measure the proportion of variation in photovoltaic power generation that can be explained by the model’s variables, while gCO2eq represents the amount of CO2 emissions equivalent to each unit of power generation. Both are used to compare model performance and environmental impact. The results show significant differences between the locations, with substantial improvements in some cases, while in others improvements are limited. The importance of customising the predictive model for each specific location is emphasised. Furthermore, it is concluded that environmental impact studies in model production are an additional step towards the creation of more sustainable and efficient models. Likewise, this research considers both the accuracy of solar energy predictions and the environmental impact of the computational resources used in the process, thereby promoting the responsible and sustainable progress of data science.
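
The coefficient of determination used above is the standard

$$R^2 = 1 - \frac{\sum_i \left( y_i - \hat{y}_i \right)^2}{\sum_i \left( y_i - \bar{y} \right)^2},$$

where $y_i$ are the observed generation values, $\hat{y}_i$ the model predictions, and $\bar{y}$ the mean of the observations.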

22 pages, 6759 KiB  
Article
Deep Learning Enriched Automation in Damage Detection for Sustainable Operation in Pipelines with Welding Defects under Varying Embedment Conditions
by Li Shang, Zi Zhang, Fujian Tang, Qi Cao, Nita Yodo, Hong Pan and Zhibin Lin
Computation 2023, 11(11), 218; https://doi.org/10.3390/computation11110218 - 02 Nov 2023
Cited by 1 | Viewed by 1413
Abstract
Welded joints are used to connect metallic pipelines and other metallic structures. Welding defects, such as cracks and lack of fusion, are vulnerable to initiating early-age cracking and corrosion. Present damage identification techniques use ultrasonic-guided wave procedures, which depend on the change in the physical characteristics of waveforms as they propagate to determine damage states. However, the complexity of geometry and material discontinuity (e.g., the roughness of a weldment with or without defects) can lead to complicated wave reflection and scattering, thus increasing the difficulty of the signal processing. Artificial intelligence and machine learning exhibit their capability for data fusion, including processing signals originating from ultrasonic-guided waves. This study aims to utilize deep learning approaches, including a convolutional neural network (CNN), a long short-term memory network (LSTM), and a hybrid CNN-LSTM model, to demonstrate automated damage detection for pipes with welded joints embedded in soil. Damage features in terms of welding defect type and severity, as well as multiple defects, are used to assess the effectiveness of the hybrid CNN-LSTM model, which is further compared to the two commonly used deep learning approaches, CNN and LSTM. The results showed that the hybrid CNN-LSTM model achieves much higher classification accuracy for damage states under all scenarios than the CNN and LSTM models. Furthermore, the impacts of pipelines embedded in different types of materials, ranging from loose sand to stiff soil, on signal processing and data classification were calibrated. The results demonstrate that these deep learning approaches can still perform well in detecting various pipeline damage under varying embedment conditions. However, when concrete is used as the embedding material, the strong absorption of signal energy by the concrete can pose a challenge for signal processing, particularly under high noise levels.
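
A generic hybrid CNN-LSTM of the kind compared in the study (an architectural sketch with assumed input length and class count, not the authors' network):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Assumed input: guided-wave signals of 2048 samples, one channel;
# output: softmax over five hypothetical damage-state classes.
model = models.Sequential([
    layers.Input(shape=(2048, 1)),
    layers.Conv1D(32, kernel_size=8, activation="relu"),  # local waveform features
    layers.MaxPooling1D(pool_size=4),
    layers.Conv1D(64, kernel_size=8, activation="relu"),
    layers.MaxPooling1D(pool_size=4),
    layers.LSTM(64),                                      # temporal dependencies
    layers.Dense(5, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

The design intuition is that the convolutional front end compresses raw waveforms into local features and the LSTM then models their ordering, which is consistent with the hybrid model's reported advantage.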

28 pages, 9378 KiB  
Article
A Simulated-Annealing-Quasi-Oppositional-Teaching-Learning-Based Optimization Algorithm for Distributed Generation Allocation
by Seyed Iman Taheri, Mohammadreza Davoodi and Mohd. Hasan Ali
Computation 2023, 11(11), 214; https://doi.org/10.3390/computation11110214 - 02 Nov 2023
Cited by 1 | Viewed by 1829
Abstract
Conventional evolutionary optimization techniques often struggle with finding global optima, getting stuck in local optima instead, and can be sensitive to initial conditions and parameter settings. Efficient Distributed Generation (DG) allocation in distribution systems hinges on streamlined optimization algorithms that handle complex energy operations, support real-time decisions, adapt to dynamics, and improve system performance, considering cost and power quality. This paper proposes the Simulated-Annealing-Quasi-Oppositional-Teaching-Learning-Based Optimization Algorithm to efficiently allocate DGs within a distribution test system. The study focuses on wind turbines, photovoltaic units, and fuel cells as prominent DG due to their growing usage trends. The optimization goals include minimizing voltage losses, reducing costs, and mitigating greenhouse gas emissions in the distribution system. The proposed algorithm is implemented and evaluated on the IEEE 70-bus test system, with a comparative analysis conducted against other evolutionary methods such as Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Honey Bee Mating Optimization (HBMO), and Teaching-Learning-Based Optimization (TLBO) algorithms. Results indicate that the proposed algorithm is effective in allocating the DGs. Statistical testing confirms significant results (probability < 0.1), indicating superior optimization capabilities for this specific problem. Crucially, the proposed algorithm excels in both accuracy and computational speed compared to other methods studied.
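
The "quasi-oppositional" ingredient of the algorithm's name commonly refers to quasi-opposition-based learning; in its standard form (stated for orientation, not quoted from the paper), a candidate $x \in [a, b]$ with opposite point $\breve{x} = a + b - x$ and interval center $c = (a + b)/2$ yields a quasi-opposite point drawn uniformly between them,

$$x^{qo} \sim U(c, \breve{x}),$$

and the fitter of $x$ and $x^{qo}$ is kept, which raises the chance of sampling near the global optimum.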

17 pages, 4411 KiB  
Article
Modeling of Wind Turbine Interactions and Wind Farm Losses Using the Velocity-Dependent Actuator Disc Model
by Ziemowit Malecha and Gideon Dsouza
Computation 2023, 11(11), 213; https://doi.org/10.3390/computation11110213 - 01 Nov 2023
Cited by 1 | Viewed by 1490
Abstract
This paper analyzes the interaction of wind turbines and losses in wind farms using computational fluid dynamics (CFD). The mathematical model used consisted of three-dimensional Reynolds-averaged Navier–Stokes (RANS) equations, while the presence of wind turbines in the flow was simulated as additional source [...] Read more.
This paper analyzes the interaction of wind turbines and losses in wind farms using computational fluid dynamics (CFD). The mathematical model used consisted of three-dimensional Reynolds-averaged Navier–Stokes (RANS) equations, while the presence of wind turbines in the flow was simulated as additional source terms. The novelty of the research is the definition of the source term as a velocity-dependent actuator disc model (ADM). This allowed for modeling the operation of a wind farm consisting of real wind turbines, characterized by power coefficients Cp and thrust force coefficients CT, which are a function of atmospheric wind speed. The calculations presented used a real 5 MW Gamesa turbine. Two different turbine spacings, 5D and 10D, where D is the diameter of the turbine, and two different locations corresponding to the offshore and onshore conditions were examined. The proposed model can be used to analyze wind farm losses not only in terms of the geometric distribution of individual turbines but also in terms of a specific type of wind turbine and in the entire wind speed spectrum. Full article
(This article belongs to the Special Issue 10th Anniversary of Computation—Computational Engineering)

12 pages, 5127 KiB  
Article
Transformer-Based Model for Predicting Customers’ Next Purchase Day in e-Commerce
by Alexandru Grigoraș and Florin Leon
Computation 2023, 11(11), 210; https://0-doi-org.brum.beds.ac.uk/10.3390/computation11110210 - 29 Oct 2023
Viewed by 2930
Abstract
The paper focuses on predicting the next purchase day (NPD) for customers in e-commerce, a task with applications in marketing, inventory management, and customer retention. A novel transformer-based model for NPD prediction is introduced and compared to traditional methods such as ARIMA, XGBoost, and LSTM. Transformers offer advantages in capturing long-term dependencies within time series data through their self-attention mechanism. Their adaptability to various time series patterns, including trends, seasonality, and irregularities, makes them a promising choice for NPD prediction. The transformer model demonstrates improvements in prediction accuracy over the baselines. Additionally, a clustered transformer model is proposed, which further enhances accuracy, emphasizing the potential of this architecture for NPD prediction. Full article
(This article belongs to the Special Issue 10th Anniversary of Computation—Computational Engineering)
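For orientation, the sketch below shows the general shape of a transformer-encoder regressor for this kind of task: a sequence of past inter-purchase gaps is embedded, passed through self-attention layers, pooled, and mapped to a single predicted number of days. All hyperparameters and the feature layout are assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

class NPDTransformer(nn.Module):
    """Minimal transformer-encoder regressor: a sequence of past
    inter-purchase gaps (in days) -> predicted days until the next purchase."""

    def __init__(self, d_model=64, nhead=4, num_layers=2, max_len=128):
        super().__init__()
        self.embed = nn.Linear(1, d_model)                          # scalar gap -> d_model
        self.pos = nn.Parameter(torch.zeros(1, max_len, d_model))   # learned positions
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, 1)

    def forward(self, gaps):                            # gaps: (batch, seq)
        x = self.embed(gaps.unsqueeze(-1))              # (batch, seq, d_model)
        x = x + self.pos[:, : x.size(1)]
        x = self.encoder(x)                             # self-attention over the history
        return self.head(x.mean(dim=1)).squeeze(-1)     # pooled -> scalar NPD

# Smoke test on random data.
model = NPDTransformer()
gaps = torch.rand(8, 30) * 30                           # 8 customers, 30 past gaps each
loss = nn.functional.mse_loss(model(gaps), torch.rand(8) * 30)
loss.backward()
print(float(loss))
```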

16 pages, 465 KiB  
Article
Numerical Solution of the Retrospective Inverse Parabolic Problem on Disjoint Intervals
by Miglena N. Koleva and Lubin G. Vulkov
Computation 2023, 11(10), 204; https://0-doi-org.brum.beds.ac.uk/10.3390/computation11100204 - 16 Oct 2023
Viewed by 1179
Abstract
The retrospective inverse problem for evolution equations is the reconstruction of unknown initial data from a given solution at the final time. We consider the retrospective inverse problem for a one-dimensional parabolic equation in two disconnected intervals, with weak solutions in weighted Sobolev spaces. The two solutions are connected by nonstandard interface conditions, so the problem is solved over the whole spatial region. Such a problem, like other inverse problems, is ill-posed, and specific techniques are required for its numerical solution. The direct problem is first discretized by a difference scheme that provides second-order approximation in space. For the resulting system of ordinary differential equations, positive coercivity is established. Next, we develop an iterative conjugate gradient method to solve the ill-posed systems of difference equations of the inverse problem obtained after weighted time discretization. Test examples with noisy input data are discussed. Full article
(This article belongs to the Special Issue 10th Anniversary of Computation—Computational Engineering)
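In generic terms (a sketch, not the paper's weighted-space setting), the retrospective problem and its conjugate gradient treatment can be summarized as follows:

```latex
% Sketch: retrospective problem for u_t = (k(x)\,u_x)_x on (0,T):
% recover the initial state u_0 from the measured final state \varphi.
u(\cdot,0) = u_0 \ (\text{unknown}), \qquad u(\cdot,T) = \varphi \ (\text{given}).

% With S(T)u_0 := u(\cdot,T) the solution operator, minimize the final-time misfit
J(u_0) = \tfrac{1}{2}\,\lVert S(T)\,u_0 - \varphi \rVert^{2},
\qquad
J'(u_0) = S(T)^{*}\bigl(S(T)\,u_0 - \varphi\bigr),

% where S(T)^{*} is evaluated by an adjoint (backward-in-time) solve; the
% conjugate gradient iteration driven by J', stopped early, acts as the
% regularization of the ill-posed discrete system.
```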

12 pages, 2940 KiB  
Article
A Graphical Calibration Method for a Water Quality Model Considering Process Variability Versus Delay Time: Theory and a Case Study
by Eyal Brill and Michael Bendersky
Computation 2023, 11(10), 200; https://0-doi-org.brum.beds.ac.uk/10.3390/computation11100200 - 7 Oct 2023
Viewed by 1159
Abstract
Process Variability (PV) is a significant measurement in water quality time series and a critical element in detecting abnormality. Typically, a quality control system should raise an alert if the PV exceeds its normal value after a proper delay time (DT). The literature does not address the relation between extended process variability and the time delay before a warning. The current paper presents a graphical method for calibrating a water quality model based on these two parameters. The amount of variability is calculated from the Euclidean distance between records in a dataset. Typically, each multivariable process exhibits some relation between variability and time delay. Over a short period (a few minutes), the PV may be high; however, as the relevant DT grows longer, the PV is expected to converge to some steady state. The current paper examines a method for estimating the relationship between the two measurements (PV and DT) as a detection tool for abnormality. Given the user's classification of actual events as true or false, the method shows how to build a graphical map that helps the user select the best thresholds for the model. The last section of the paper presents an implementation of the method using real-world data. Full article
(This article belongs to the Special Issue 10th Anniversary of Computation—Computational Engineering)
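One possible reading of the PV-versus-DT computation, assuming PV is taken as the mean pairwise Euclidean distance between the records inside a trailing window of length DT (the paper may aggregate the distances differently), is sketched below on synthetic data.

```python
import numpy as np

rng = np.random.default_rng(1)

def process_variability(window):
    """PV of a window of multivariate records: mean pairwise Euclidean distance."""
    d = window[:, None, :] - window[None, :, :]
    dist = np.sqrt((d ** 2).sum(-1))
    return dist[np.triu_indices(len(window), 1)].mean()

def pv_vs_dt(records, delays):
    """PV computed over trailing windows of increasing delay time DT."""
    return [process_variability(records[-dt:]) for dt in delays]

# Synthetic multivariable water-quality stream (e.g., pH, turbidity, chlorine).
records = rng.normal(size=(500, 3)).cumsum(axis=0) * 0.01 + rng.normal(size=(500, 3)) * 0.1
delays = [5, 10, 20, 50, 100, 200]
for dt, pv in zip(delays, pv_vs_dt(records, delays)):
    print(f"DT = {dt:3d} samples  ->  PV = {pv:.3f}")
# Plotting PV against DT for events labelled true/false yields the calibration
# map from which the alert thresholds are read off.
```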

19 pages, 4282 KiB  
Article
Fuzzy Transform Image Compression in the YUV Space
by Barbara Cardone, Ferdinando Di Martino and Salvatore Sessa
Computation 2023, 11(10), 191; https://0-doi-org.brum.beds.ac.uk/10.3390/computation11100191 - 1 Oct 2023
Viewed by 1290
Abstract
This research proposes a new image compression method based on the F1-transform, which improves the quality of the reconstructed image without increasing the coding/decoding CPU time. The advantage of compressing color images in the YUV space is that, whereas the Red, Green, and Blue bands are perceived roughly equally by the human eye, in the YUV space most of the image information perceived by the human eye is contained in the Y band, as opposed to the U and V bands. Exploiting this property, we construct a new color image compression algorithm based on the F1-transform in which compression is carried out in the YUV space, so that better-quality compressed images are obtained without increasing the execution time. Tests performed on a set of color images show that our method improves the quality of the decoded images with respect to JPEG, the F1-transform on the RGB color space, and the F-transform on the YUV color space, regardless of the selected compression rate and with comparable CPU times. Full article
(This article belongs to the Special Issue 10th Anniversary of Computation—Computational Engineering)
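To make the channel-dependent idea concrete, here is a compact sketch using the plain (degree-zero) F-transform; the F1-transform used in the paper additionally carries first-degree (linear) components in each basis cell, which this sketch omits. The YUV conversion coefficients are the common analog approximations.

```python
import numpy as np

def tri_partition(n, k):
    """k triangular basis functions forming a Ruspini partition of n samples."""
    nodes = np.linspace(0, n - 1, k)
    h = nodes[1] - nodes[0]
    i = np.arange(n)
    return np.maximum(0.0, 1.0 - np.abs(i[None, :] - nodes[:, None]) / h)  # (k, n)

def ft_compress(img, kx, ky):
    A, B = tri_partition(img.shape[0], kx), tri_partition(img.shape[1], ky)
    num = A @ img @ B.T                         # basis-weighted sums
    den = A.sum(1)[:, None] * B.sum(1)[None, :]
    return num / den                            # (kx, ky) F-transform components

def ft_decompress(F, shape):
    A, B = tri_partition(shape[0], F.shape[0]), tri_partition(shape[1], F.shape[1])
    return A.T @ F @ B                          # inverse F-transform

rgb = np.random.rand(64, 64, 3)
R, G, Bch = rgb[..., 0], rgb[..., 1], rgb[..., 2]
Y = 0.299 * R + 0.587 * G + 0.114 * Bch        # luminance carries most perceived detail
U, V = 0.492 * (Bch - Y), 0.877 * (R - Y)

# Compress Y mildly, chrominance aggressively (the eye tolerates chroma loss).
Yc, Uc, Vc = ft_compress(Y, 32, 32), ft_compress(U, 8, 8), ft_compress(V, 8, 8)
Yr = ft_decompress(Yc, Y.shape)
print("Y reconstruction RMSE:", np.sqrt(((Y - Yr) ** 2).mean()))
```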

20 pages, 4220 KiB  
Article
Estimation of Temperature-Dependent Thermal Conductivity and Heat Capacity Given Boundary Data
by Abdulaziz Sharahy and Zaid Sawlan
Computation 2023, 11(9), 184; https://0-doi-org.brum.beds.ac.uk/10.3390/computation11090184 - 14 Sep 2023
Viewed by 1092
Abstract
This work aims to estimate temperature-dependent thermal conductivity and heat capacity given measurements of temperature and heat flux at the boundaries. This estimation problem has many engineering and industrial applications, such as those for the building sector and chemical reactors. Two approaches are proposed to address this problem. The first method uses an integral approach and a polynomial approximation of the temperature profile. The second method uses a numerical solver for the nonlinear heat equation and an optimization algorithm. The performance of the two methods is compared using synthetic data generated with different boundary conditions and configurations. The results demonstrate that the integral approach works in limited scenarios, whereas the numerical approach is effective in estimating temperature-dependent thermal properties. The second method is also extended to account for noisy measurements and a comprehensive uncertainty quantification framework is developed. Full article
(This article belongs to the Special Issue 10th Anniversary of Computation—Computational Engineering)
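A minimal sketch of the second approach (numerical solver plus optimizer), assuming a linear conductivity law k(T) = k0 + k1*T, a constant heat capacity for brevity, and an interior sensor standing in for the paper's boundary heat-flux data:

```python
import numpy as np
from scipy.optimize import minimize

L, nx, dt, nt = 0.1, 41, 0.05, 400
grid = np.linspace(0, L, nx); dx = grid[1] - grid[0]
rho = 2000.0                                    # density assumed known, kg/m^3

def solve_heat(params, T_left, T_right, T0=20.0):
    """Explicit FD solve of rho*c*T_t = (k(T)*T_x)_x with Dirichlet boundary data."""
    k0, k1, c0 = params
    T = np.full(nx, T0)
    history = []
    for _ in range(nt):
        k = k0 + k1 * T                         # temperature-dependent conductivity
        kf = 0.5 * (k[1:] + k[:-1])             # face-centred conductivities
        flux = kf * np.diff(T) / dx
        Tn = T.copy()
        Tn[1:-1] += dt * np.diff(flux) / (dx * rho * c0)
        Tn[0], Tn[-1] = T_left, T_right         # measured boundary temperatures
        T = Tn
        history.append(T[nx // 2])              # interior sensor stand-in
    return np.array(history)

true = (1.0, 0.004, 900.0)
data = solve_heat(true, 80.0, 20.0) + np.random.default_rng(2).normal(0, 0.05, nt)

def misfit(p):
    return ((solve_heat(p, 80.0, 20.0) - data) ** 2).sum()

fit = minimize(misfit, x0=(0.5, 0.001, 700.0), method="Nelder-Mead")
print("estimated (k0, k1, c0):", fit.x)
```

Noisy measurements enter exactly as in the synthetic `data` above; the paper goes further and wraps the estimate in a full uncertainty quantification framework.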

28 pages, 13353 KiB  
Article
In-Silico Prediction of Mechanical Behaviour of Uniform Gyroid Scaffolds Affected by Its Design Parameters for Bone Tissue Engineering Applications
by Haja-Sherief N. Musthafa, Jason Walker, Talal Rahman, Alvhild Bjørkum, Kamal Mustafa and Dhayalan Velauthapillai
Computation 2023, 11(9), 181; https://0-doi-org.brum.beds.ac.uk/10.3390/computation11090181 - 12 Sep 2023
Cited by 1 | Viewed by 1773
Abstract
Due to their excellent properties, triply periodic minimal surfaces (TPMS) have been used to design scaffolds for bone tissue engineering applications. Predicting the mechanical response of bone scaffolds under different loading conditions is vital to scaffold design, and optimal mechanical properties can be achieved by tuning the geometrical parameters to mimic those of natural bone. In this study, we designed gyroid scaffolds with different user-specified pore and strut sizes using a combined TPMS and signed distance field (SDF) method to obtain varying architectures and porosities. The designed scaffolds were converted to surface, volume, and finite element (FE) volume meshes to create FE models with different boundary and loading conditions. The scaffolds under compressive loading were then evaluated numerically using the finite element method (FEM) to predict and compare effective elastic moduli. Effective elastic moduli ranging from 0.05 GPa to 1.93 GPa were predicted for scaffolds of different architectures, comparable to human trabecular bone. The results assert that optimal mechanical properties can be achieved by tuning the design and morphological parameters of the scaffolds to match the mechanical properties of human bone. Full article
(This article belongs to the Special Issue 10th Anniversary of Computation—Computational Engineering)
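The TPMS part of the pipeline is easy to reproduce: the gyroid is commonly approximated by a trigonometric implicit function whose iso-value shifts the solid/void balance. A sketch (grid size and iso-values are illustrative, and the implicit function is a standard proxy for the true SDF):

```python
import numpy as np

def gyroid_field(n=64, cells=2.0, t=0.0):
    """Implicit gyroid field on an n^3 grid.

    g(x,y,z) = sin x cos y + sin y cos z + sin z cos x - t ; the level set
    g = 0 approximates the gyroid surface, and the iso-value t shifts the
    strut/pore proportions (and hence the porosity).
    """
    s = np.linspace(0, 2 * np.pi * cells, n)
    X, Y, Z = np.meshgrid(s, s, s, indexing="ij")
    return (np.sin(X) * np.cos(Y) + np.sin(Y) * np.cos(Z)
            + np.sin(Z) * np.cos(X) - t)

for t in (-0.5, 0.0, 0.5):
    g = gyroid_field(t=t)
    porosity = (g > 0).mean()       # void fraction, with the solid taken as g <= 0
    print(f"iso-value t = {t:+.1f}  ->  porosity ~ {porosity:.2f}")
```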

11 pages, 1999 KiB  
Article
Solving the Problem of Elasticity for a Layer with N Cylindrical Embedded Supports
by Vitaly Miroshnikov, Oleksandr Savin, Vladimir Sobol and Vyacheslav Nikichanov
Computation 2023, 11(9), 172; https://0-doi-org.brum.beds.ac.uk/10.3390/computation11090172 - 3 Sep 2023
Viewed by 947
Abstract
The main goal of deformable solid mechanics is to determine the stress–strain state of parts, structural elements, and their connections. Accurate calculation of this state makes it possible to optimize design objects. However, not all models can be solved using exact methods. One such model is the problem of a layer with cylindrical embedded supports that are parallel to each other and to the layer boundaries. In this work, the supports are represented by cylindrical cavities with zero displacements prescribed on them. The layer is considered in Cartesian coordinates, and the cavities in cylindrical coordinates. To solve the problem, the Lamé equation is used, and the basic solutions in the different coordinate systems are linked by the generalized Fourier method. Satisfying the boundary conditions and linking the coordinate systems yields an infinite system of linear algebraic equations, whose unknowns are found numerically by the reduction method. The numerical analysis shows that the boundary conditions are fulfilled with high accuracy; the physical pattern of the stress distribution and comparison with the results of similar studies confirm the accuracy of the obtained results. The proposed method can be applied to the calculation of structures modeled as a layer with cylindrical embedded supports, and the numerical results make it possible to predetermine the geometric parameters of the model to be designed. Full article
(This article belongs to the Special Issue 10th Anniversary of Computation—Computational Engineering)
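For reference, in a standard notation (not necessarily the paper's), the homogeneous Lamé (Navier) equation solved for the displacement field u is

```latex
\mu\,\Delta\mathbf{u} + (\lambda + \mu)\,\nabla(\nabla\cdot\mathbf{u}) = \mathbf{0},
\qquad
\lambda = \frac{E\nu}{(1+\nu)(1-2\nu)}, \qquad \mu = \frac{E}{2(1+\nu)},
```

with zero displacements, u = 0, prescribed on the cavity surfaces that represent the embedded supports; the truncation of the resulting infinite algebraic system (the reduction method) is what makes the problem numerically tractable.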

23 pages, 5619 KiB  
Article
Impact of Cross-Tie Material Nonlinearity on the Dynamic Behavior of Shallow Flexible Cable Networks
by Amir Younespour and Shaohong Cheng
Computation 2023, 11(9), 169; https://0-doi-org.brum.beds.ac.uk/10.3390/computation11090169 - 1 Sep 2023
Viewed by 752
Abstract
Cross-ties have proven their efficacy in mitigating vibrations in bridge stay cables. Several factors, such as cross-tie malfunction due to slackening or snapping, as well as the use of high-energy-dissipative materials, can introduce nonlinear restoring forces in the cross-ties. While previous studies have investigated the influence of the former on cable network dynamics, the impact of nonlinear cross-tie materials remains unexplored. In the current research, an existing analytical model of a network of two shallow flexible cables is extended to incorporate cross-tie material nonlinearity in the formulation. The harmonic balance method (HBM) is employed to determine the equivalent linear stiffness of the cross-ties, and the dynamic response of a cable network containing nonlinear cross-ties is approximated by that of the equivalent linear system. Additionally, the study examines the effects of cable vibration amplitude, cross-tie material properties, installation location, and the length ratio between constituent cables on both the fundamental frequency of the cable network and the equivalent linear stiffness of the cross-ties. The findings reveal that cross-tie nonlinearity significantly influences the in-plane modal response of the cable network: not only are the frequencies of all modes reduced, but the formation of local modes is also delayed to higher orders. In contrast to an earlier finding based on a linear cross-tie assumption, with nonlinearity present, moving a cross-tie towards the mid-span of a cable does not enhance the in-plane stiffness of the network. Moreover, the impact of the length ratio on the network's in-plane stiffness and frequency is contingent on its combined effect on the cross-tie axial stiffness and the lateral stiffness of neighboring cables. Full article
(This article belongs to the Special Issue 10th Anniversary of Computation—Computational Engineering)
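The harmonic-balance linearization step can be illustrated with the textbook cubic restoring force (the paper's cross-tie law may differ): taking F(x) = k1*x + k3*x^3 and a harmonic response x(t) = A*cos(wt), balancing the fundamental harmonic gives

```latex
F\bigl(A\cos\omega t\bigr)
= \Bigl(k_1 + \tfrac{3}{4}\,k_3 A^{2}\Bigr) A\cos\omega t
+ \tfrac{1}{4}\,k_3 A^{3}\cos 3\omega t
\;\Longrightarrow\;
k_{\mathrm{eq}}(A) = k_1 + \tfrac{3}{4}\,k_3 A^{2},
```

which is why the equivalent cross-tie stiffness, and hence the network frequencies, depend on the vibration amplitude A.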

27 pages, 1255 KiB  
Article
Adapting PINN Models of Physical Entities to Dynamical Data
by Dmitriy Tarkhov, Tatiana Lazovskaya and Valery Antonov
Computation 2023, 11(9), 168; https://0-doi-org.brum.beds.ac.uk/10.3390/computation11090168 - 1 Sep 2023
Viewed by 1123
Abstract
This article examines the possibilities of adapting approximate solutions of boundary value problems for differential equations, obtained with physics-informed neural networks (PINNs), to changes in the data about the physical entity being modelled. Two types of models are considered: PINN and parametric PINN (PPINN). The former is constructed for a fixed value of the problem parameter, while the latter includes the parameter among the input variables. The models are tested on three problems. The first involves modelling the bending of a cantilever rod under varying loads. The second is a non-stationary problem of a thermal explosion in the plane-parallel case; here the initial model is constructed from an ordinary differential equation, while the modelled object satisfies a partial differential equation. The third is to solve a time-dependent partial differential equation of mixed type. In all cases, the initial models are adapted to the corresponding pseudo-measurements generated from the changing equations. For each problem, a series of experiments is carried out with different parameter functions reflecting the character of changes in the object. A comparative analysis of the quality of the PINN and PPINN models and of their resistance to data changes is conducted for the first time in this study. Full article
(This article belongs to the Special Issue 10th Anniversary of Computation—Computational Engineering)
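For readers new to PINNs, the sketch below shows the adaptation mechanism in its simplest form: a physics residual for a toy ODE plus a weighted data-misfit term over pseudo-measurements. It is a generic illustration, not any of the paper's three problems.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 32),
                    nn.Tanh(), nn.Linear(32, 1))

def pinn_loss(t_col, t_obs, u_obs, lam=1.0):
    """Physics residual for the toy ODE u' = u(1 - u), plus a data-misfit term
    that adapts the model to (pseudo-)measurements of the changing object."""
    t = t_col.clone().requires_grad_(True)
    u = net(t)
    du = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    residual = ((du - u * (1 - u)) ** 2).mean()
    ic = (net(torch.zeros(1, 1)) - 0.5).pow(2).mean()   # initial condition u(0) = 0.5
    data = ((net(t_obs) - u_obs) ** 2).mean()
    return residual + ic + lam * data

t_col = torch.rand(128, 1) * 4.0                        # collocation points
t_obs = torch.linspace(0, 4, 9).reshape(-1, 1)          # pseudo-measurement times
u_obs = 0.5 * torch.exp(t_obs) / (1 + 0.5 * (torch.exp(t_obs) - 1))  # exact logistic

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    loss = pinn_loss(t_col, t_obs, u_obs)
    loss.backward()
    opt.step()
print("final loss:", float(loss))
```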

16 pages, 1491 KiB  
Article
Evolutionary PINN Learning Algorithms Inspired by Approximation to Pareto Front for Solving Ill-Posed Problems
by Tatiana Lazovskaya, Dmitriy Tarkhov, Maria Chistyakova, Egor Razumov, Anna Sergeeva and Tatiana Shemyakina
Computation 2023, 11(8), 166; https://0-doi-org.brum.beds.ac.uk/10.3390/computation11080166 - 21 Aug 2023
Viewed by 889
Abstract
The article presents newly developed physics-informed evolutionary neural network learning algorithms. These algorithms aim to address the challenges of ill-posed problems by constructing a population close to the Pareto front. The study compares the algorithms' capabilities based on three quality criteria for the solutions. Two benchmark problems are used to evaluate performance: the first involves solving the Laplace equation in a square region with discontinuous boundary conditions; the second has no boundary conditions but includes measurements. Additionally, the study investigates the influence of hyperparameters on the final results. Comparisons are made between the proposed algorithms and standard physics-informed neural network training algorithms (commonly referred to as vanilla algorithms). The results demonstrate the advantage of the proposed algorithms in achieving better performance on ill-posed problems. Furthermore, the proposed algorithms are able to identify particular solutions with the desired smoothness. Full article
(This article belongs to the Special Issue 10th Anniversary of Computation—Computational Engineering)
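The population-level selection step can be illustrated by a plain non-dominated filter over three loss criteria (the criteria named in the comment are assumptions for illustration, not the paper's exact choices):

```python
import numpy as np

def non_dominated(scores):
    """Boolean mask of Pareto-optimal rows; scores[i, k] is the k-th loss
    (all minimized) of the i-th candidate network in the population."""
    n = scores.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        # Candidate j dominates i if it is no worse in every criterion
        # and strictly better in at least one.
        dominated = ((scores <= scores[i]).all(1) & (scores < scores[i]).any(1))
        if dominated.any():
            mask[i] = False
    return mask

# Three quality criteria per candidate, e.g. PDE residual, boundary/measurement
# misfit, and a smoothness penalty (illustrative).
rng = np.random.default_rng(3)
pop = rng.random((50, 3))
front = pop[non_dominated(pop)]
print(f"{len(front)} of {len(pop)} candidates lie on the approximate Pareto front")
```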

28 pages, 548 KiB  
Article
The Complexity of the Super Subdivision of Cycle-Related Graphs Using Block Matrices
by Mohamed R. Zeen El Deen, Walaa A. Aboamer and Hamed M. El-Sherbiny
Computation 2023, 11(8), 162; https://0-doi-org.brum.beds.ac.uk/10.3390/computation11080162 - 15 Aug 2023
Viewed by 885
Abstract
The complexity (number of spanning trees) of a finite graph Γ (network) is a crucial quantity. The number of spanning trees is a fundamental indicator for assessing the dependability of a network: the best and most dependable network is the one with the most spanning trees. In graph theory, one constantly strives to create novel structures from existing ones. The super subdivision operation produces more complicated networks, and the matrices of these networks can be partitioned into block matrices. Using methods from linear algebra and the properties of block matrices, we derive explicit formulas for the complexity of the super subdivision of a certain family of graphs, including the cycle Cn, where n=3,4,5,6; the dumbbell graph Dbm,n; the dragon graph Pm(Cn); the prism graph Πn, where n=3,4; the cycle Cn with a Pn2-chord, where n=4,6; and the complete graph K4. Additionally, 3D plots created using our results serve as illustrations. Full article
(This article belongs to the Special Issue 10th Anniversary of Computation—Computational Engineering)
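Closed-form counts of this kind can be spot-checked numerically with Kirchhoff's matrix-tree theorem, which states that the complexity equals any cofactor of the graph Laplacian. A small sketch verifying that the cycle Cn has exactly n spanning trees:

```python
import numpy as np

def spanning_trees(adj):
    """Kirchhoff's matrix-tree theorem: the complexity is any cofactor of the
    Laplacian L = D - A, i.e. det(L) with one row and column removed."""
    L = np.diag(adj.sum(axis=1)) - adj
    return round(np.linalg.det(L[1:, 1:]))

def cycle(n):
    a = np.zeros((n, n))
    for i in range(n):
        a[i, (i + 1) % n] = a[(i + 1) % n, i] = 1
    return a

# C_n has exactly n spanning trees (remove any one of its n edges).
for n in (3, 4, 5, 6):
    print(f"C_{n}: {spanning_trees(cycle(n))} spanning trees")
```

In a super subdivision, each edge of the original graph is replaced by a complete bipartite graph K2,m; the Laplacian of the result inherits the block structure that the paper exploits to obtain its explicit formulas.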

13 pages, 4351 KiB  
Article
Study on Optical Positioning Using Experimental Visible Light Communication System
by Nikoleta Vitsi, Argyris N. Stassinakis, Nikolaos A. Androutsos, George D. Roumelas, George K. Varotsos, Konstantinos Aidinis and Hector E. Nistazakis
Computation 2023, 11(8), 161; https://0-doi-org.brum.beds.ac.uk/10.3390/computation11080161 - 14 Aug 2023
Viewed by 821
Abstract
Visible light positioning (VLP) systems have attracted significant commercial and research interest because of the many advantages they possess over alternatives such as radio frequency (RF) positioning systems. In this work, an experimental configuration of an indoor VLP system based on the well-known Lambertian light emission model is investigated. The corresponding results are presented and show that the system maintains sufficiently high accuracy to remain operational, even in cases of low transmitted power and high background noise. Full article
(This article belongs to the Special Issue 10th Anniversary of Computation—Computational Engineering)
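A minimal sketch of how Lambertian received-signal-strength positioning works, with illustrative geometry and parameters rather than the paper's experimental setup: for a ceiling LED pointing down and a receiver facing up, cos(phi) = cos(psi) = h/d, so each measured power constrains the horizontal distance to one LED, and the position follows from least squares.

```python
import numpy as np
from scipy.optimize import least_squares

m, A_pd, Pt, h = 1.0, 1e-4, 1.0, 2.5     # Lambertian order, PD area, TX power, height (m)

leds = np.array([[0.5, 0.5], [0.5, 2.5], [2.5, 0.5], [2.5, 2.5]])  # ceiling grid (m)

def rss(rx):
    """Received power from each LED for a downward-facing transmitter and an
    upward-facing receiver: H(0) = (m+1)A/(2*pi*d^2) * cos^m(phi) * cos(psi),
    with cos(phi) = cos(psi) = h/d."""
    d = np.sqrt(((leds - rx) ** 2).sum(1) + h ** 2)
    return Pt * (m + 1) * A_pd / (2 * np.pi * d ** 2) * (h / d) ** (m + 1)

true_rx = np.array([1.7, 1.1])
meas = rss(true_rx) * (1 + np.random.default_rng(4).normal(0, 0.01, len(leds)))

est = least_squares(lambda p: rss(p) - meas, x0=np.array([1.5, 1.5])).x
print("true position:", true_rx, "estimated:", est)
```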
