Computation, Volume 8, Issue 4 (December 2020) – 24 articles

13 pages, 1651 KiB  
Article
Modeling Interactions between Graphene and Heterogeneous Molecules
by Kyle Stevens, Thien Tran-Duc, Ngamta Thamwattana and James M. Hill
Computation 2020, 8(4), 107; https://0-doi-org.brum.beds.ac.uk/10.3390/computation8040107 - 21 Dec 2020
Cited by 4 | Viewed by 2198
Abstract
The Lennard–Jones potential and a continuum approach can be used to successfully model interactions between various regularly shaped molecules and nanostructures. For molecules of a single atomic species, the interaction can be approximated by assuming a uniform distribution of atoms over surfaces or volumes, which gives rise to a constant atomic density over or throughout the molecule. However, for heterogeneous molecules, which comprise more than one type of atom, the situation is more complicated. Thus far, two extended modeling approaches have been considered for heterogeneous molecules, namely a multi-surface semi-continuous model and a fully continuous model with average smearing of the atomic contributions. In this paper, we propose yet another modeling approach using a single continuous surface, but replacing the atomic density and the attractive and repulsive constants in the Lennard–Jones potential with functions that depend on the heterogeneity across the molecule, and the new model is applied to study the adsorption of coronene onto a graphene sheet. Results are compared between the new model, the two other existing approaches, and molecular dynamics simulations performed using the LAMMPS molecular dynamics simulator. We find that the new approach is superior to the other continuum models and provides excellent agreement with molecular dynamics simulations. Full article
(This article belongs to the Section Computational Chemistry)
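As an illustration of the continuum approximation the abstract describes (not the authors' code), the sketch below integrates a 6-12 Lennard–Jones pair potential over an infinite uniform plane numerically and checks it against the closed form E(z) = 2πn(−A/(4z⁴) + B/(10z¹⁰)). The constants A, B and the atomic density n are illustrative carbon–carbon values, not the paper's fitted parameters.

```python
import math

def lj_pair(r, A=15.2, B=24.1e3):
    # 6-12 Lennard-Jones pair potential; A (eV A^6) and B (eV A^12) are
    # illustrative carbon-carbon constants, not the paper's fitted values.
    return -A / r**6 + B / r**12

def continuum_energy_numeric(z, n=0.3812, A=15.2, B=24.1e3,
                             rho_max=100.0, steps=200000):
    # Midpoint-rule integration of the pair potential over an infinite plane
    # of uniform atomic density n (atoms/A^2), summed over annular rings.
    drho = rho_max / steps
    total = 0.0
    for i in range(steps):
        rho = (i + 0.5) * drho
        total += lj_pair(math.hypot(z, rho), A, B) * n * 2.0 * math.pi * rho * drho
    return total

def continuum_energy_analytic(z, n=0.3812, A=15.2, B=24.1e3):
    # Closed form of the same surface integral:
    # E(z) = 2*pi*n * (-A/(4 z^4) + B/(10 z^10))
    return 2.0 * math.pi * n * (-A / (4.0 * z**4) + B / (10.0 * z**10))
```

With these constants, the analytic minimum falls near z = (B/A)^(1/6) ≈ 3.4 Å, the familiar graphitic spacing; the heterogeneous model proposed in the paper replaces the constants n, A, B with position-dependent functions.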
24 pages, 8891 KiB  
Article
Prediction of Drug Potencies of BACE1 Inhibitors: A Molecular Dynamics Simulation and MM_GB(PB)SA Scoring
by Mazen Y. Hamed
Computation 2020, 8(4), 106; https://0-doi-org.brum.beds.ac.uk/10.3390/computation8040106 - 15 Dec 2020
Cited by 4 | Viewed by 2159
Abstract
Alzheimer’s disease (AD) is a progressive neurodegenerative brain disorder. One of the important therapeutic approaches of AD is the inhibition of β-site APP cleaving enzyme-1 (BACE1). This enzyme plays a central role in the synthesis of the pathogenic β-amyloid peptides (Aβ) in Alzheimer’s disease. A group of potent BACE1 inhibitors with known X-ray structures (PDB ID 5i3X, 5i3Y, 5iE1, 5i3V, 5i3W, 4LC7, 3TPP) were studied by molecular dynamics simulation and binding energy calculation employing MM_GB(PB)SA. The calculated binding energies gave Kd values of 0.139 µM, 1.39 nM, 4.39 mM, 24.3 nM, 1.39 mM, 29.13 mM, and 193.07 nM, respectively. These inhibitors showed potent inhibitory activities in enzymatic and cell assays. The Kd values are compared with experimental values and the structures are discussed in view of the energy contributions to binding. Drug likeness of these inhibitors is also discussed. Accommodation of ligands in the catalytic site of BACE1 is discussed depending on the type of fragment involved in each structure. Molecular dynamics (MD) simulations and energy studies were used to explore the recognition of the selected BACE1 inhibitors by Asp32, Asp228, and the hydrophobic flap. The results show that selective BACE1 inhibition may be due to the formation of strong electrostatic interactions with Asp32 and Asp228 and a large number of hydrogen bonds, in addition to π–π and van der Waals interactions with the amino acid residues located inside the catalytic cavity. Interactions with the ligands show a similar binding mode with BACE1. These results help to rationalize the design of selective BACE1 inhibitors. Full article
(This article belongs to the Section Computational Chemistry)
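The reported Kd values follow from the computed binding free energies through the standard thermodynamic relation ΔG_bind = RT ln Kd; a minimal conversion sketch (assuming T = 298.15 K, which the abstract does not state) is:

```python
import math

R_KCAL = 0.0019872  # gas constant, kcal/(mol*K)

def kd_from_dg(dg_kcal_per_mol, T=298.15):
    # dG_bind = R*T*ln(Kd)  =>  Kd = exp(dG/(R*T)); Kd in mol/L.
    return math.exp(dg_kcal_per_mol / (R_KCAL * T))

def dg_from_kd(kd_molar, T=298.15):
    # Inverse relation: binding free energy from a dissociation constant.
    return R_KCAL * T * math.log(kd_molar)
```

For instance, the first reported value, Kd = 0.139 µM, corresponds to ΔG ≈ −9.35 kcal/mol at this temperature.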
12 pages, 1148 KiB  
Article
Generalized Pattern Search Algorithm for Crustal Modeling
by Mulugeta Dugda and Farzad Moazzami
Computation 2020, 8(4), 105; https://0-doi-org.brum.beds.ac.uk/10.3390/computation8040105 - 08 Dec 2020
Cited by 1 | Viewed by 2260
Abstract
In computational seismology, receiver functions represent the impulse response for the earth structure beneath a seismic station and, in general, these are functionals that show several seismic phases in the time-domain related to discontinuities within the crust and the upper mantle. This paper introduces a new technique called generalized pattern search (GPS) for inverting receiver functions to obtain the depth of the crust–mantle discontinuity, i.e., the crustal thickness H, and the ratio of crustal P-wave velocity Vp to S-wave velocity Vs. In particular, the GPS technique, which is a direct search method, does not need derivative or directional vector information. Moreover, the technique allows simultaneous determination of the weights needed for the converted and reverberated phases. Compared to previously introduced variable weights approaches for inverting H-κ stacking of receiver functions, with κ = Vp/Vs, the GPS technique has some advantages in terms of saving computational time and also suitability for simultaneous determination of crustal parameters and associated weights. Finally, the technique is tested using seismic data from the East Africa Rift System and it provides results that are consistent with previously published studies. Full article
(This article belongs to the Section Computational Engineering)
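The GPS inversion described above minimizes a receiver-function misfit over (H, κ) and the phase weights; the toy sketch below shows only the core poll-and-contract loop of a generalized pattern search, run on a stand-in quadratic objective rather than the seismological misfit.

```python
def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=10000):
    # Generalized pattern search: poll +/- step along each coordinate axis,
    # accept any improving point, and contract the step when no poll improves.
    # Derivative-free, as the abstract notes: no gradient or directional
    # vector information is needed.
    x = list(x0)
    fx = f(x)
    it = 0
    while step > tol and it < max_iter:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = list(x)
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= 0.5
        it += 1
    return x, fx
```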
15 pages, 424 KiB  
Article
Classification of Categorical Data Based on the Chi-Square Dissimilarity and t-SNE
by Luis Ariosto Serna Cardona, Hernán Darío Vargas-Cardona, Piedad Navarro González, David Augusto Cardenas Peña and Álvaro Ángel Orozco Gutiérrez
Computation 2020, 8(4), 104; https://0-doi-org.brum.beds.ac.uk/10.3390/computation8040104 - 04 Dec 2020
Cited by 11 | Viewed by 2824
Abstract
The recurrent use of databases with categorical variables in different applications demands new alternatives to identify relevant patterns. Classification is an interesting approach for the recognition of this type of data. However, there are few methods for this purpose in the literature, and those techniques focus specifically on kernels, leading to accuracy problems and high computational cost. For this reason, we propose an identification approach for categorical variables using conventional classifiers (LDC-QDC-KNN-SVM) and different mapping techniques to increase the separability of classes. Specifically, we map the initial features (categorical attributes) to another space, using the Chi-square (C-S) statistic as a measure of dissimilarity. Then, we employ t-distributed stochastic neighbor embedding (t-SNE) to reduce the dimensionality of the data to two or three features, allowing a significant reduction of computational times in learning methods. We evaluate the performance of the proposed approach in terms of accuracy for several experimental configurations and public categorical datasets downloaded from the UCI repository, and we compare it with relevant state-of-the-art methods. Results show that C-S mapping and t-SNE considerably diminish the computational times in recognition tasks, while accuracy is preserved. Moreover, when we apply only the C-S mapping to the datasets, the separability of classes is enhanced, and thus the performance of the learning algorithms is clearly increased. Full article
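One common form of the chi-square dissimilarity on dummy-encoded categorical attributes, paired with a k-NN classifier, can be sketched as below; this is an illustration of the general idea only, and the paper's exact mapping and the t-SNE stage are not reproduced.

```python
def one_hot(rows, categories):
    # Dummy-encode categorical rows; categories[j] lists attribute j's levels.
    vecs = []
    for row in rows:
        v = []
        for j, cats in enumerate(categories):
            v.extend(1.0 if row[j] == c else 0.0 for c in cats)
        vecs.append(v)
    return vecs

def chi2_distance(x, y, eps=1e-12):
    # Chi-square dissimilarity: 0.5 * sum_i (x_i - y_i)^2 / (x_i + y_i).
    return 0.5 * sum((a - b) ** 2 / (a + b + eps) for a, b in zip(x, y))

def knn_predict(train, labels, query, k=3):
    # k-NN vote with the chi-square dissimilarity standing in for the metric.
    ranked = sorted(range(len(train)), key=lambda i: chi2_distance(train[i], query))
    votes = [labels[i] for i in ranked[:k]]
    return max(set(votes), key=votes.count)
```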
26 pages, 1050 KiB  
Article
Graph Reachability on Parallel Many-Core Architectures
by Stefano Quer and Andrea Calabrese
Computation 2020, 8(4), 103; https://0-doi-org.brum.beds.ac.uk/10.3390/computation8040103 - 02 Dec 2020
Cited by 2 | Viewed by 2497
Abstract
Many modern applications are modeled using graphs of some kind. Given a graph, reachability, that is, discovering whether there is a path between two given nodes, is a fundamental problem as well as one of the most important steps of many other algorithms. The rapid accumulation of very large graphs (up to tens of millions of vertices and edges) from a diversity of disciplines demands efficient and scalable solutions to the reachability problem. General-purpose computing has been successfully used on Graphics Processing Units (GPUs) to parallelize algorithms that present a high degree of regularity. In this paper, we extend the applicability of GPU processing to graph-based manipulation by re-designing a simple but efficient state-of-the-art graph-labeling method, namely the GRAIL (Graph Reachability Indexing via RAndomized Interval) algorithm, for many-core CUDA-based GPUs. This algorithm first generates a label for each vertex of the graph and then exploits these labels to answer reachability queries. Unfortunately, the original algorithm executes a sequence of depth-first visits which are intrinsically recursive and cannot be efficiently implemented on parallel systems. For that reason, we design an alternative approach in which a sequence of breadth-first visits substitutes the original depth-first traversal to generate the labeling, and in which a high number of concurrent visits is exploited during query evaluation. The paper describes our strategy to re-design these steps, the difficulties we encountered in implementing them, and the solutions adopted to overcome the main inefficiencies. To prove the validity of our approach, we compare (in terms of time and memory requirements) our GPU-based approach with the original sequential CPU-based tool. Finally, we report some hints on how to conduct further research in the area. Full article
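Leaving the CUDA labeling aside, the breadth-first visit that replaces the intrinsically recursive depth-first traversal can be sketched as a plain iterative reachability query:

```python
from collections import deque

def reachable(adj, src, dst):
    # Iterative breadth-first search: True iff dst is reachable from src.
    # BFS is the visit primitive that, per the abstract, substitutes the
    # recursive depth-first traversal when targeting parallel hardware.
    seen = {src}
    frontier = deque([src])
    while frontier:
        u = frontier.popleft()
        if u == dst:
            return True
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                frontier.append(v)
    return False
```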
12 pages, 5914 KiB  
Article
Simulation of Diffusion Bonding of Different Heat Resistant Nickel-Base Alloys
by Albert R. Khalikov, Evgeny A. Sharapov, Vener A. Valitov, Elvina V. Galieva, Elena A. Korznikova and Sergey V. Dmitriev
Computation 2020, 8(4), 102; https://0-doi-org.brum.beds.ac.uk/10.3390/computation8040102 - 30 Nov 2020
Cited by 5 | Viewed by 2564
Abstract
Currently, an important fundamental problem of practical importance is the production of high-quality solid-phase compounds of various metals. This paper presents a theoretical model that allows one to study the diffusion process in nickel-base refractory alloys. As an example, a two-dimensional model of ternary alloy is considered to model diffusion bonding of the alloys with different compositions. The main idea is to divide the alloy components into three groups: (i) the base element Ni, (ii) the intermetallic forming elements Al and Ti and (iii) the alloying elements. This approach allows one to consider multi-component alloys as ternary alloys, which greatly simplifies the analysis. The calculations are carried out within the framework of the hard sphere model when describing interatomic interactions by pair potentials. The energy of any configuration of a given system is written in terms of order parameters and ordering energies. A vacancy diffusion model is described, which takes into account the gain/loss of potential energy due to a vacancy jump and temperature. Diffusion bonding of two dissimilar refractory alloys is modeled. The concentration profiles of the components and order parameters are analyzed at different times. The results obtained indicate that the ternary alloy model is efficient in modeling the diffusion bonding of dissimilar Ni-base refractory alloys. Full article
(This article belongs to the Section Computational Chemistry)
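A minimal sketch of the vacancy-jump Metropolis step the abstract describes, on a 2D periodic lattice of two species plus one vacancy; the bond energies in EPS and the temperature are illustrative stand-ins, not the paper's ordering energies.

```python
import math, random

# Illustrative symmetric bond energies for species A, B and a vacancy V.
EPS = {('A', 'A'): -1.0, ('B', 'B'): -1.0, ('A', 'B'): -0.4,
       ('A', 'V'): 0.0, ('B', 'V'): 0.0, ('V', 'V'): 0.0}

def bond(a, b):
    return EPS[tuple(sorted((a, b)))]

def local_energy(lat, i, j):
    # Sum of the four nearest-neighbour bond energies (periodic boundaries).
    n = len(lat)
    return sum(bond(lat[i][j], lat[(i + di) % n][(j + dj) % n])
               for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))

def vacancy_step(lat, vi, vj, kT, rng):
    # Metropolis vacancy jump: swap the vacancy with a random neighbour and
    # accept with probability min(1, exp(-dE/kT)); otherwise swap back.
    n = len(lat)
    di, dj = rng.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
    ni, nj = (vi + di) % n, (vj + dj) % n
    e_before = local_energy(lat, vi, vj) + local_energy(lat, ni, nj)
    lat[vi][vj], lat[ni][nj] = lat[ni][nj], lat[vi][vj]
    dE = local_energy(lat, vi, vj) + local_energy(lat, ni, nj) - e_before
    if dE > 0 and rng.random() >= math.exp(-dE / kT):
        lat[vi][vj], lat[ni][nj] = lat[ni][nj], lat[vi][vj]  # reject
        return vi, vj
    return ni, nj
```

Iterating this step and recording concentration profiles across an interface is the basic mechanism behind the diffusion-bonding simulations described above.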
27 pages, 397 KiB  
Article
Combined Heat and Power Dynamic Economic Emissions Dispatch with Valve Point Effects and Incentive Based Demand Response Programs
by Nnamdi Nwulu
Computation 2020, 8(4), 101; https://0-doi-org.brum.beds.ac.uk/10.3390/computation8040101 - 23 Nov 2020
Cited by 5 | Viewed by 2026
Abstract
In this paper, the Combined Heat and Power Dynamic Economic Emissions Dispatch (CHPDEED) problem formulation is considered. This problem is a complicated nonlinear mathematical formulation with multiple, conflicting objective functions. The aim of this mathematical problem is to obtain the optimal quantities of heat and power output for the committed generating units, which include power-only and heat-only units. Heat and load demand are expected to be satisfied throughout the total dispatch interval. In this paper, valve point effects are considered in the fuel cost function of the units, which leads to a non-convex cost function. Furthermore, an Incentive Based Demand Response Program formulation is also simultaneously considered with the CHPDEED problem, further complicating the mathematical problem. The decision variables are thus the optimal power and heat output of the generating units and the optimal power curbed and monetary incentive for the participating demand response consumers. The resulting mathematical formulations are tested on four practical scenarios depicting different system operating conditions, and the obtained results show the efficacy of the developed mathematical optimization model. The results indicate that, when the Incentive-Based Demand Response (IBDR) program's operational hours are unrestricted with a residential load profile, the energy curtailed is highest (2680 MWh), the energy produced by the generators is lowest (38,008.53 MWh), power losses are lowest (840.5291 MW) and both fuel costs and emissions are lowest. Full article
(This article belongs to the Section Computational Engineering)
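The valve point effect adds a rectified-sine term to each unit's quadratic fuel cost, which is what makes the dispatch non-convex. The sketch below evaluates that cost and brute-forces a tiny two-unit, power-only dispatch on a coarse grid; all coefficients are illustrative, and the paper's heat units, dynamics, and IBDR terms are omitted.

```python
import math
from itertools import product

def fuel_cost(P, a, b, c, e, f, Pmin):
    # Quadratic fuel cost plus the rectified-sine valve point term,
    # which makes the objective non-convex and non-smooth.
    return a + b * P + c * P ** 2 + abs(e * math.sin(f * (Pmin - P)))

def dispatch(demand, units, step=1.0):
    # Brute-force grid search over unit outputs that meet demand.
    best_cost, best_combo = float('inf'), None
    grids = [[u['Pmin'] + k * step
              for k in range(int((u['Pmax'] - u['Pmin']) / step) + 1)]
             for u in units]
    for combo in product(*grids):
        if abs(sum(combo) - demand) < step / 2:
            cost = sum(fuel_cost(P, u['a'], u['b'], u['c'],
                                 u['e'], u['f'], u['Pmin'])
                       for P, u in zip(combo, units))
            if cost < best_cost:
                best_cost, best_combo = cost, combo
    return best_cost, best_combo
```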
15 pages, 6345 KiB  
Article
Near-Field Flow Structure and Entrainment of a Round Jet at Low Exit Velocities: Implications on Microclimate Ventilation
by Alan Kabanshi
Computation 2020, 8(4), 100; https://0-doi-org.brum.beds.ac.uk/10.3390/computation8040100 - 23 Nov 2020
Cited by 2 | Viewed by 2456
Abstract
This paper explores the flow structure, mean/turbulent statistical characteristics of the vector field, and entrainment of round jets issued from a smooth contracting nozzle at low nozzle exit velocities (1.39–6.44 m/s). The motivation of the study was to increase understanding of the near field and to gain insight into how to control and reduce entrainment, particularly in applications that use jets with low-to-medium momentum flow, such as microclimate ventilation systems. Additionally, the near field of free jets with low momentum flow is not extensively covered in the literature. Particle image velocimetry (PIV), a whole-field vector measurement method, was used for data acquisition of the flow from a 0.025 m smooth contracting nozzle. The results show that at low nozzle exit velocities the jet flow was unstable with oscillations, which increased entrainment; however, increasing the nozzle exit velocity stabilized the jet flow and reduced entrainment. This is linked to the momentum flow of the jet, the structural characteristics of the flow, and the type or disintegration distance of vortices created in the shear layer. The study discusses practical implications for microclimate ventilation systems and at the same time contributes data to the development and validation of a planned computational turbulence model for microclimate ventilation. Full article
17 pages, 1031 KiB  
Article
A Discrete Particle Swarm Optimization to Solve the Put-Away Routing Problem in Distribution Centres
by Rodrigo Andrés Gómez-Montoya, Jose Alejandro Cano, Pablo Cortés and Fernando Salazar
Computation 2020, 8(4), 99; https://0-doi-org.brum.beds.ac.uk/10.3390/computation8040099 - 18 Nov 2020
Cited by 9 | Viewed by 3018
Abstract
Put-away operations typically consist of moving products from depots to allocated storage locations using either operators or Material Handling Equipment (MHE), accounting for important operative costs in warehouses and impacting operations efficiency. Therefore, this paper aims to formulate and solve a Put-away Routing Problem (PRP) in distribution centres (DCs). This PRP formulation represents a novel approach due to the consideration of a fleet of homogeneous Material Handling Equipment (MHE), heterogeneous products linked to a put-away list size, depot location and multi-parallel aisles in a distribution centre. It should be noted that the slotting problem, rather than the PRP, has usually been studied in the literature, whereas the PRP is addressed in this paper. The PRP is solved using a discrete particle swarm optimization (PSO) algorithm that is compared to tabu search approaches (Classical Tabu Search (CTS), Tabu Search (TS) 2-Opt) and an empirical rule. As a result, it was found that a discrete PSO generates the best solutions, as the time savings range from 2 to 13% relative to CTS and TS 2-Opt for different combinations of factor levels evaluated in the experimentation. Full article
(This article belongs to the Section Computational Engineering)
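Discrete PSO variants for routing differ in how continuous particle positions map to routes. The sketch below uses random-key decoding (sorting each particle's keys yields a put-away visiting order), which is one common discretization and not necessarily the operator used in the paper; the distance matrix is an illustrative toy instance with node 0 as the depot.

```python
import random

def tour_cost(order, dist):
    # Cost of visiting storage locations in 'order', starting and ending
    # at the depot (node 0); order holds location indices 0..n-1.
    path = [0] + [i + 1 for i in order] + [0]
    return sum(dist[path[k]][path[k + 1]] for k in range(len(path) - 1))

def decode(position):
    # Random-key decoding: sorting the continuous keys yields a permutation.
    return sorted(range(len(position)), key=lambda i: position[i])

def pso_route(dist, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    n = len(dist) - 1  # locations excluding the depot
    X = [[rng.random() for _ in range(n)] for _ in range(n_particles)]
    V = [[0.0] * n for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pcost = [tour_cost(decode(x), dist) for x in X]
    g = min(range(n_particles), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (pbest[i][d] - X[i][d])
                           + c2 * rng.random() * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
            cost = tour_cost(decode(X[i]), dist)
            if cost < pcost[i]:
                pbest[i], pcost[i] = X[i][:], cost
                if cost < gcost:
                    gbest, gcost = X[i][:], cost
    return decode(gbest), gcost
```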
30 pages, 13413 KiB  
Article
Effect of Computational Schemes on Coupled Flow and Geo-Mechanical Modeling of CO2 Leakage through a Compromised Well
by Mohammad Islam, Nicolas Huerta and Robert Dilmore
Computation 2020, 8(4), 98; https://0-doi-org.brum.beds.ac.uk/10.3390/computation8040098 - 13 Nov 2020
Cited by 3 | Viewed by 2811
Abstract
Carbon capture, utilization, and storage (CCUS) describes a set of technically viable processes to separate carbon dioxide (CO2) from industrial byproduct streams and inject it into deep geologic formations for long-term storage. Legacy wells located within the spatial domain of new injection and production activities represent potential pathways for fluids (i.e., CO2 and aqueous phase) to leak through compromised components (e.g., through fractures or micro-annulus pathways). The finite element (FE) method is a well-established numerical approach to simulate the coupling between multi-phase fluid flow and solid phase deformation interactions that occur in a compromised well system. We assumed the spatial domain consists of a three-phase system: a solid, a liquid, and a gas phase. For flow in the two fluid phases, we considered two sets of primary variables: the first considering capillary pressure and gas pressure (the PP scheme), and the second considering liquid pressure and gas saturation (the PS scheme). Fluid phases were coupled with the solid phase using the full coupling (i.e., monolithic coupling) and iterative coupling (i.e., sequential coupling) approaches. The challenge of achieving numerical stability in the coupled formulation in heterogeneous media was addressed using the mass lumping and upwinding techniques. Numerical results were compared with three benchmark problems to assess the performance of the coupled FE solutions: 1D Terzaghi consolidation, the Liakopoulos experiments, and the Kueper and Frind experiments. We found good agreement between our results and the three benchmark problems. For the Kueper and Frind test, the PP scheme successfully captured the observed experimental response of the non-aqueous phase infiltration, in contrast to the PS scheme. These exercises demonstrate the importance of fluid phase primary variable selection for heterogeneous porous media.
We then applied the developed model to the hypothetical case of leakage along a compromised well representing a heterogeneous medium. With the mass lumping and upwinding techniques, both the monolithic and the sequential coupling provided identical results, but mass lumping was needed to avoid numerical instabilities in the sequential coupling. Additionally, in the monolithic coupling, the magnitudes of the primary variables in the coupled solution without mass lumping and upwinding are higher, which is essential for risk-based analyses. Full article
(This article belongs to the Special Issue Computational Models for Complex Fluid Interfaces across Scales)
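The 1D Terzaghi consolidation benchmark cited above has a classical series solution for the excess pore pressure; a minimal sketch (layer drained at the top, impervious at the base, illustrative parameters) is:

```python
import math

def terzaghi_pressure(z_over_H, Tv, u0=1.0, terms=200):
    # Terzaghi 1D consolidation: excess pore pressure u(z, t) for a layer
    # drained at z/H = 0 and impervious at z/H = 1, with uniform initial
    # pressure u0. Tv = cv * t / H^2 is the dimensionless time factor.
    u = 0.0
    for m in range(terms):
        M = (2 * m + 1) * math.pi / 2.0
        u += (2.0 * u0 / M) * math.sin(M * z_over_H) * math.exp(-M * M * Tv)
    return u
```

A coupled FE solution is typically checked against this series: the simulated pore pressure at the impervious base should start at u0 and dissipate monotonically as Tv grows.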
2 pages, 149 KiB  
Editorial
Computational Insights into Industrial Chemistry
by Alexander S. Novikov
Computation 2020, 8(4), 97; https://0-doi-org.brum.beds.ac.uk/10.3390/computation8040097 - 12 Nov 2020
Cited by 1 | Viewed by 2236
Abstract
This brief Editorial is dedicated to announcing the Special Issue “Computational Insights into Industrial Chemistry”. The Special Issue covers the most recent progress in the rapidly growing field of computational chemistry, and the application of computer modeling in topics relevant to industrial chemistry (chemical industrial processes and materials, environmental effects caused by chemical industry activities, computer-aided design of catalysts, green chemistry, etc.). Full article
(This article belongs to the Special Issue Computational Insights into Industrial Chemistry)
13 pages, 1184 KiB  
Article
Application of the Robust Fixed Point Iteration Method in Control of the Level of Twin Tanks Liquid
by Hamza Khan, Hazem Issa and József K. Tar
Computation 2020, 8(4), 96; https://0-doi-org.brum.beds.ac.uk/10.3390/computation8040096 - 10 Nov 2020
Cited by 4 | Viewed by 2620
Abstract
Precise control of the flow rate of fluids stored in multiple tank systems is an important task in the process industries. For this reason, coupled tanks are popular paradigms in control studies, because they form strongly nonlinear systems that challenge controller designers to develop various approaches. In this paper, the application of a novel, Fixed Point Iteration (FPI)-based technique is reported to control the fluid level in a "lower tank" that is fed by the egress of an "upper" one. The control signal is the ingress rate at the upper tank. Numerical simulation results, obtained with simple sequential Julia code using Euler integration, are presented to illustrate the efficiency of this approach. Full article
(This article belongs to the Section Computational Engineering)
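The principle behind FPI-based control is that a contractive iterated map converges to its fixed point (Banach's theorem); a minimal sketch of that idea, inverting a stand-in nonlinear plant response rather than the paper's RFPT controller or tank model, is:

```python
import math

def fixed_point_iterate(G, x0, tol=1e-10, max_iter=1000):
    # Banach-style fixed point iteration x_{k+1} = G(x_k); converges when G
    # is a contraction. In FPI-based control the iterated map deforms the
    # control input until the observed response matches the desired one.
    x = x0
    for _ in range(max_iter):
        x_new = G(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Illustrative use: invert an assumed plant response g(u) = u + 0.2*sin(u)
# so the output reaches a desired y_star; the relaxed update below is a
# contraction because 1 - 0.8*g'(u) stays within (0, 1) for this g.
g = lambda u: u + 0.2 * math.sin(u)
y_star = 1.0
u_star = fixed_point_iterate(lambda u: u + 0.8 * (y_star - g(u)), 0.0)
```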
20 pages, 13025 KiB  
Article
Explicit Sensitivity Coefficients for Estimation of Temperature-Dependent Thermophysical Properties in Inverse Transient Heat Conduction Problems
by Farzad Mohebbi
Computation 2020, 8(4), 95; https://0-doi-org.brum.beds.ac.uk/10.3390/computation8040095 - 06 Nov 2020
Cited by 2 | Viewed by 2611
Abstract
Explicit expressions are obtained for sensitivity coefficients to separately estimate temperature-dependent thermophysical properties, such as specific heat and thermal conductivity, in two-dimensional inverse transient heat conduction problems for bodies with irregular shape from temperature measurement readings of a single sensor inside the body. The proposed sensitivity analysis scheme allows for the computation of all sensitivity coefficients in only one direct problem solution at each iteration with no need to solve the sensitivity and adjoint problems. In this method, a boundary-fitted grid generation (elliptic) method is used to mesh the irregular shape of the heat conducting body. Explicit expressions are obtained to calculate the sensitivity coefficients efficiently and the conjugate gradient method as an iterative gradient-based optimization method is used to minimize the objective function and reach the solution. A test case with different initial guesses and sensor locations is presented to investigate the proposed inverse analysis. Full article
(This article belongs to the Section Computational Engineering)
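The paper's point is that explicit sensitivity expressions need only one direct solve per iteration; to make the avoided cost concrete, the sketch below shows the naive finite-difference alternative (two direct solves per parameter) for the sensitivity of a sensor reading to thermal conductivity, on a 1D explicit-scheme problem with illustrative parameters.

```python
def solve_1d_transient(k, nx=21, nt=200, L=1.0, t_end=0.1,
                       T_left=1.0, T_right=0.0):
    # Explicit finite-difference solution of T_t = k * T_xx with fixed
    # boundary temperatures and a zero initial condition.
    dx = L / (nx - 1)
    dt = t_end / nt
    assert k * dt / dx ** 2 <= 0.5, "explicit scheme stability limit"
    T = [0.0] * nx
    T[0], T[-1] = T_left, T_right
    for _ in range(nt):
        Tn = T[:]
        for i in range(1, nx - 1):
            Tn[i] = T[i] + k * dt / dx ** 2 * (T[i + 1] - 2 * T[i] + T[i - 1])
        T = Tn
    return T

def sensitivity_fd(k, sensor=10, dk=1e-6):
    # Finite-difference sensitivity coefficient dT/dk at one sensor node:
    # two direct solves per parameter, the cost the explicit-expression
    # scheme in the paper is designed to avoid.
    T1 = solve_1d_transient(k)[sensor]
    T2 = solve_1d_transient(k + dk)[sensor]
    return (T2 - T1) / dk
```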
10 pages, 2022 KiB  
Article
Spatial and Temporal Validation of a CFD Model Using Residence Time Distribution Test in a Tubular Reactor
by José Rivas, M. Constanza Sadino-Riquelme, Ignacio Garcés, Andrea Carvajal and Andrés Donoso-Bravo
Computation 2020, 8(4), 94; https://0-doi-org.brum.beds.ac.uk/10.3390/computation8040094 - 06 Nov 2020
Cited by 2 | Viewed by 2869
Abstract
Computational fluid dynamics (CFD) has been increasingly exploited for the design and optimization of (bio)chemical processes. Validation is a crucial part of any modeling application, and in CFD, when validation is done, complex and expensive techniques are normally employed. The aim of this study was to test the capability of a CFD model to represent a residence time distribution (RTD) test in a temporal and spatial fashion inside a reactor. The RTD tests were carried out in a tubular reactor operated in continuous mode, with and without the presence of artificial biomass. Two hydraulic retention times of 7.2 and 13 h and superficial velocities of 0.65, 0.6, 1.3, and 1.1 m h−1 were evaluated. As a tracer, an aqueous solution of methylene blue was used. The CFD model was implemented in ANSYS Fluent, and to solve the equation system, the SIMPLE scheme and second-order discretization methods were selected. The proposed CFD model of the reactor was able to predict the spatial and temporal distribution of the tracer injected in the reactor. The main disagreements between the simulations and the experimental results were observed in the first 50 min of the RTD, caused by different error sources associated with the manual execution of the triplicates, as well as some channeling or tracer by-pass that cannot be predicted by the CFD model. The CFD model performed better as the time of the experiment elapsed for all the sampling ports. A validation methodology based on an RTD with sampling at different reactor positions can be employed as a simple way to validate CFD models. Full article
(This article belongs to the Section Computational Chemistry)
15 pages, 4654 KiB  
Article
The Influence of Pressure on the Formation of FM/AF Configurations in LSMO Films: A Monte Carlo Approach
by Hugo Hernán Ortiz-Álvarez, Francy Nelly Jiménez-García, Carolina Márquez-Narváez, José Dario Agudelo-Giraldo and Elisabeth Restrepo-Parra
Computation 2020, 8(4), 93; https://0-doi-org.brum.beds.ac.uk/10.3390/computation8040093 - 06 Nov 2020
Viewed by 1953
Abstract
In this work, Monte Carlo simulations of the magnetic properties of thin films, including the influence of an external pressure, are presented. These simulations were developed using a Hamiltonian composed of terms that represent the exchange interaction, dipolar interaction, Zeeman effect, monocrystalline anisotropy, and pressure influence. The term representing the pressure influence on the magnetic properties was included because, in many applications, magnetic materials form part of a multiferroic material together with a piezoelectric or ferroelectric compound. Initially, the model was developed using generic parameters in order to verify its performance; after that, the parameters were adjusted to simulate thin films of La0.67Sr0.33MnO3, a manganite with several technological applications because its Curie temperature is greater than room temperature. When the pressure influence was included, the formation of several kinds of FM/AF configurations, such as stripe, labyrinth, and checkerboard patterns, was observed. Furthermore, as the pressure increased, the critical temperature tended to decrease, in agreement with experimental reports. Full article
(This article belongs to the Section Computational Chemistry)
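The simulation strategy in the abstract above can be sketched with a minimal Metropolis loop. The code below uses a 2D Ising lattice with a hypothetical linear pressure correction to the exchange constant, J(P) = J0·(1 − alpha·P); the real study uses a richer Hamiltonian (dipolar, Zeeman, and anisotropy terms), so this is only an assumption-laden toy showing how pressure can suppress magnetic order.

```python
import math
import random

def metropolis_ising(L=16, T=0.5, J0=1.0, alpha=0.1, P=0.0, sweeps=200, seed=1):
    """Minimal Metropolis sketch: 2D Ising lattice whose exchange constant
    is scaled by an assumed linear pressure correction J(P) = J0*(1 - alpha*P).
    Returns the absolute magnetization per site after `sweeps` lattice sweeps."""
    rng = random.Random(seed)
    J = J0 * (1.0 - alpha * P)
    spins = [[1 for _ in range(L)] for _ in range(L)]  # start fully ordered
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j] +
                  spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            dE = 2.0 * J * spins[i][j] * nb  # energy cost of flipping (i, j)
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                spins[i][j] = -spins[i][j]
    m = sum(sum(row) for row in spins) / (L * L)
    return abs(m)
```

At zero pressure the lattice stays ordered well below the critical temperature; a large pressure weakens J enough that the same temperature disorders the film, mirroring the reported decrease of the critical temperature with pressure.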

34 pages, 1003 KiB  
Article
A Modified Heart Dipole Model for the Generation of Pathological ECG Signals
by Mario Versaci, Giovanni Angiulli and Fabio La Foresta
Computation 2020, 8(4), 92; https://0-doi-org.brum.beds.ac.uk/10.3390/computation8040092 - 06 Nov 2020
Cited by 5 | Viewed by 3484
Abstract
In this paper, we introduce a new dynamic model for the simulation of electrocardiograms (ECGs) affected by pathologies, starting from the well-known McSharry dynamic model for ECGs without cardiac disorders. In particular, the McSharry model has been generalized (by a linear transformation and a rotation) to simulate ECGs affected by heart diseases, verifying, on the one hand, the existence and uniqueness of the solution and, on the other hand, whether it admits instabilities. The results, obtained numerically by a procedure based on a four-stage Lobatto IIIa formula, show the good performance of the proposed model in producing ECGs, with or without heart diseases, very similar to those recorded directly from patients. Moreover, having verified through computation of the linear index and the fuzzy entropy index (whose values are close to unity) that the ECG signals are affected by uncertainty and/or imprecision, the similarities among ECG signals (with or without heart diseases) have been quantified by a well-established fuzzy approach based on fuzzy similarity computations, highlighting that the proposed model for simulating ECGs affected by pathologies can be considered a solid starting point for the development of synthetic pathological ECG signals. Full article
(This article belongs to the Section Computational Biology)
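For context, the baseline McSharry model that the abstract above generalizes is a three-variable ODE: a limit cycle on the unit circle in (x, y) drives a z-trace whose P, Q, R, S, and T deflections are Gaussian bumps placed at fixed angles. The sketch below integrates it with a plain Euler step (the paper uses a four-stage Lobatto IIIa scheme) and assumes typical published parameter values.

```python
import math

# P, Q, R, S, T event angles (radians), amplitudes, and widths: typical
# values quoted for the original McSharry model (assumed here).
THETA = [-math.pi / 3, -math.pi / 12, 0.0, math.pi / 12, math.pi / 2]
A = [1.2, -5.0, 30.0, -7.5, 0.75]
B = [0.25, 0.1, 0.1, 0.1, 0.4]

def mcsharry_step(x, y, z, dt=0.002, omega=2 * math.pi):
    alpha = 1.0 - math.hypot(x, y)      # pulls (x, y) onto the unit circle
    theta = math.atan2(y, x)
    dz = 0.0
    for ai, bi, ti in zip(A, B, THETA):
        dth = (theta - ti + math.pi) % (2 * math.pi) - math.pi  # wrapped angle
        dz -= ai * dth * math.exp(-dth * dth / (2 * bi * bi))
    dz -= z  # relax the baseline toward z0 = 0 (assumed)
    return (x + dt * (alpha * x - omega * y),
            y + dt * (alpha * y + omega * x),
            z + dt * dz)

def simulate(n=5000):
    x, y, z = 1.0, 0.0, 0.0
    trace = []
    for _ in range(n):
        x, y, z = mcsharry_step(x, y, z)
        trace.append(z)
    return x, y, trace
```

The z-trace is the synthetic ECG; the pathological generalization in the paper applies a linear transformation and a rotation to this system.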

13 pages, 3143 KiB  
Article
All-Nitrogen Cages and Molecular Crystals: Topological Rules, Stability, and Pyrolysis Paths
by Konstantin P. Katin, Valeriy B. Merinov, Alexey I. Kochaev, Savas Kaya and Mikhail M. Maslov
Computation 2020, 8(4), 91; https://0-doi-org.brum.beds.ac.uk/10.3390/computation8040091 - 06 Nov 2020
Cited by 8 | Viewed by 3272
Abstract
We combined ab initio molecular dynamics with the intrinsic reaction coordinate method in order to investigate the mechanisms of stability and pyrolysis of N4–N120 fullerene-like nitrogen cages. The stability of the cages was evaluated in terms of the activation barriers and the activation Gibbs energies of their thermally induced breaking. We found that binding energies, bond lengths, and quantum-mechanical descriptors failed to predict the stability of the cages. However, we derived a simple topological rule: adjacent hexagons on the cage surface result in instability. For this reason, the number of stable nitrogen cages is significantly restricted in comparison with their carbon counterparts. As a rule, smaller clusters are more stable, whereas the large cages proposed earlier collapse at room temperature. The most stable all-nitrogen cages are the N4 and N6 clusters, which can form van der Waals crystals with densities of 1.23 and 1.36 g/cm3, respectively. Examination of their band structures and densities of electronic states shows that both are insulators. Their power and sensitivity are not inferior to those of modern advanced high-energy nanosystems. Full article
(This article belongs to the Section Computational Chemistry)
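The topological rule stated in the abstract above is easy to check mechanically. Assuming a cage is given as a list of polyhedral faces (vertex tuples), two hexagonal faces are adjacent when they share an edge, i.e., at least two vertices; the sketch below flags such cages.

```python
def has_adjacent_hexagons(faces):
    """Check the topological instability rule on a polyhedral cage given as a
    list of faces (tuples of vertex indices): the cage is flagged unstable if
    any two hexagonal faces share an edge. Sharing two vertices is taken to
    mean sharing an edge (an assumption valid for convex polyhedral faces)."""
    hexes = [set(f) for f in faces if len(f) == 6]
    for i in range(len(hexes)):
        for j in range(i + 1, len(hexes)):
            if len(hexes[i] & hexes[j]) >= 2:
                return True
    return False
```

A tetrahedral N4 cage has no hexagons at all and passes the rule, whereas any cage containing two edge-sharing hexagons is flagged.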

32 pages, 4477 KiB  
Article
Self-Adjusting Variable Neighborhood Search Algorithm for Near-Optimal k-Means Clustering
by Lev Kazakovtsev, Ivan Rozhnov, Aleksey Popov and Elena Tovbis
Computation 2020, 8(4), 90; https://0-doi-org.brum.beds.ac.uk/10.3390/computation8040090 - 05 Nov 2020
Cited by 5 | Viewed by 2653
Abstract
The k-means problem is one of the most popular models in cluster analysis; it minimizes the sum of the squared distances from the clustered objects to the sought cluster centers (centroids). The simplicity of its algorithmic implementation encourages researchers to apply it in a variety of engineering and scientific branches. Nevertheless, the problem is proven to be NP-hard, which makes exact algorithms inapplicable to large-scale problems, while the simplest and most popular algorithms yield very poor values of the sum of squared distances. When a problem must be solved within a limited time with an accuracy that would be difficult to improve using known methods without increasing computational costs, variable neighborhood search (VNS) algorithms, which search in randomized neighborhoods formed by the application of greedy agglomerative procedures, are competitive. In this article, we investigate the influence of the most important parameter of such neighborhoods on computational efficiency and propose a new VNS-based algorithm (solver), implemented on a graphics processing unit (GPU), which adjusts this parameter. Benchmarking on data sets composed of up to millions of objects demonstrates the advantage of the new algorithm over known local search algorithms within a fixed time, allowing for online computation. Full article
(This article belongs to the Section Computational Engineering)
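The VNS idea described in the abstract above combines a local search (Lloyd's k-means) with randomized "shaking" of the current solution. The sketch below is a heavily simplified serial version: the shake just replaces one centroid with a random data point instead of the paper's greedy agglomerative neighborhoods, and there is no GPU or self-adjusting parameter.

```python
import random

def sse(points, centers):
    """Sum of squared distances from each 2-D point to its nearest center."""
    return sum(min((px - cx) ** 2 + (py - cy) ** 2 for cx, cy in centers)
               for px, py in points)

def lloyd(points, centers, iters=20):
    """Standard Lloyd's k-means local search."""
    k = len(centers)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda c: (p[0] - centers[c][0]) ** 2 +
                                              (p[1] - centers[c][1]) ** 2)
            clusters[idx].append(p)
        centers = [(sum(x for x, _ in cl) / len(cl),
                    sum(y for _, y in cl) / len(cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers

def vns_kmeans(points, k=2, shakes=30, seed=0):
    """VNS-style loop: shake the incumbent, re-run local search, keep if better."""
    rng = random.Random(seed)
    centers = lloyd(points, rng.sample(points, k))
    best = sse(points, centers)
    for _ in range(shakes):
        cand = list(centers)
        cand[rng.randrange(k)] = rng.choice(points)  # shake: swap one centroid
        cand = lloyd(points, cand)
        if sse(points, cand) < best:
            centers, best = cand, sse(points, cand)
    return centers, best
```

On two well-separated blobs the loop reliably reaches the globally optimal sum of squared distances, which plain Lloyd's algorithm can miss from a bad start.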

26 pages, 14485 KiB  
Article
Modeling of Isocyanate Synthesis by the Thermal Decomposition of Carbamates
by Ratmir Dashkin, Georgii Kolesnikov, Pavel Tsygankov, Igor Lebedev, Artem Lebedev, Natalia Menshutina, Khusrav Ghafurov and Abakar Bagomedov
Computation 2020, 8(4), 89; https://0-doi-org.brum.beds.ac.uk/10.3390/computation8040089 - 18 Oct 2020
Cited by 7 | Viewed by 3670
Abstract
The presented work is devoted to a model of isocyanate synthesis by the thermal decomposition of carbamates. The work describes the existing processes for obtaining isocyanates and the main problems in the study of isocyanate synthesis by the thermal decomposition of carbamates, which can be solved using mathematical and computer models. Experiments with carbamates of various structures were carried out. After processing the experimental data, the activation energy and the pre-exponential factor for isocyanate synthesis by the thermal decomposition of carbamates were determined. A mathematical model of the reactor for the thermal decomposition of carbamates was then developed using the COMSOL Multiphysics software, and computational experiments under different conditions were carried out with it. The calculation results were shown to correspond to the experimental ones, so the suggested model can be used in the design of equipment for isocyanate synthesis by the thermal decomposition of carbamates. Full article
(This article belongs to the Section Computational Chemistry)
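Extracting an activation energy and pre-exponential factor from rate data, as described in the abstract above, is classically done by linearizing the Arrhenius equation, ln k = ln A − Ea/(R·T), and fitting a straight line to ln k versus 1/T. The sketch below implements that least-squares fit; it is a generic illustration, not the paper's data-processing pipeline.

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def arrhenius_fit(temps_K, rate_consts):
    """Least-squares fit of ln k = ln A - Ea/(R*T): returns (Ea, A)
    recovered from rate-constant measurements at several temperatures."""
    xs = [1.0 / T for T in temps_K]
    ys = [math.log(k) for k in rate_consts]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) /
             sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar
    return -slope * R, math.exp(intercept)
```

With synthetic data generated from known Ea and A, the fit recovers both parameters essentially exactly, which is a useful sanity check before applying it to noisy experimental points.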

21 pages, 4509 KiB  
Article
A QP Solver Implementation for Embedded Systems Applied to Control Allocation
by Christina Schreppel and Jonathan Brembeck
Computation 2020, 8(4), 88; https://0-doi-org.brum.beds.ac.uk/10.3390/computation8040088 - 13 Oct 2020
Cited by 3 | Viewed by 3160
Abstract
Quadratic programming problems (QPs) frequently appear in control engineering. For use on embedded platforms, a QP solver implementation is required in the programming language C. A new solver for quadratic optimization problems, EmbQP, is described, which was implemented in readable C code. The algorithm is based on the dual method of Goldfarb and Idnani and solves strictly convex QPs with a positive definite objective function matrix and linear equality and inequality constraints. The algorithm is outlined, and some details of an efficient implementation in C are shown with regard to the requirements of embedded systems. The newly implemented QP solver is demonstrated on the control allocation of an over-actuated vehicle as an application example, and its performance is assessed in a simulation experiment. Full article
(This article belongs to the Section Computational Engineering)
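The full Goldfarb–Idnani dual method adds and drops inequality constraints iteratively; at its core, each iteration solves an equality-constrained strictly convex QP. The sketch below (Python rather than the paper's C, and without the active-set machinery) shows only that core step: minimize ½xᵀHx + gᵀx subject to Ax = b by solving the KKT system [[H, Aᵀ], [A, 0]]·[x; λ] = [−g; b].

```python
def solve_linear(M, rhs):
    """Naive Gaussian elimination with partial pivoting (sketch quality)."""
    n = len(M)
    A = [row[:] + [rhs[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

def eq_qp(H, g, A, b):
    """Solve min 0.5*x'Hx + g'x s.t. Ax = b via the KKT system
    [[H, A'], [A, 0]] [x; lam] = [-g; b]; returns the primal solution x."""
    n, m = len(H), len(A)
    K = [[0.0] * (n + m) for _ in range(n + m)]
    for i in range(n):
        for j in range(n):
            K[i][j] = H[i][j]
        for j in range(m):
            K[i][n + j] = A[j][i]
            K[n + j][i] = A[j][i]
    rhs = [-gi for gi in g] + list(b)
    return solve_linear(K, rhs)[:n]
```

For example, minimizing ½(x₁² + x₂²) subject to x₁ + x₂ = 1 gives x = (0.5, 0.5); with no constraints the KKT system reduces to Hx = −g.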

24 pages, 10056 KiB  
Article
Complex Modelling and Design of Catalytic Reactors Using Multiscale Approach—Part 2: Catalytic Reactions Modelling with Cellular Automata Approach
by Natalia Menshutina, Igor Lebedev, Evgeniy Lebedev, Andrey Kolnoochenko, Alexander Troyankin, Ratmir Dashkin, Michael Shishanov, Pavel Flegontov and Maxim Burdeyniy
Computation 2020, 8(4), 87; https://0-doi-org.brum.beds.ac.uk/10.3390/computation8040087 - 10 Oct 2020
Viewed by 2211
Abstract
The presented work is devoted to a model of the reactions for obtaining 4,4’-diaminodiphenylmethane (MDA) in the presence of a catalyst. The work describes the importance of studying the MDA production process and the potential of the cellular automata (CA) approach for modelling chemical reactions. The work suggests a CA model that makes it possible to predict the kinetic curves of the studied MDA-producing reaction. The developed model was used to carry out computational experiments under different conditions: aniline:formaldehyde:catalyst ratios, stirrer speed, and reaction temperature. The results of the computational experiments were compared with the corresponding experimental data, and the suggested model was shown to be suitable for predicting the kinetics of the MDA-producing reaction. The proposed CA model can be used with the CFD model suggested in Part 1, allowing complex multiscale modelling of a flow catalytic reactor from the molecular level to the level of the entire apparatus. Full article
(This article belongs to the Section Computational Chemistry)
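To make the CA idea from the abstract above concrete, the toy below simulates a generic second-order reaction A + B → C on a 1-D lattice: cells swap with neighbours (stirring/diffusion) and adjacent A/B pairs react with some probability. This is a deliberately simplified stand-in for the paper's MDA chemistry, with entirely hypothetical species and rates.

```python
import random

def reaction_ca(L=30, steps=200, p_react=0.5, seed=2):
    """Toy stochastic cellular automaton for A + B -> C on a 1-D lattice:
    each step picks a random adjacent pair; an A/B pair reacts with
    probability p_react, otherwise the two cells swap (mixing). Returns the
    final grid and the A-count before and after."""
    rng = random.Random(seed)
    grid = ['A'] * (L // 2) + ['B'] * (L - L // 2)
    rng.shuffle(grid)
    history = [grid.count('A')]
    for _ in range(steps):
        i = rng.randrange(L - 1)
        if {grid[i], grid[i + 1]} == {'A', 'B'} and rng.random() < p_react:
            grid[i] = grid[i + 1] = 'C'              # reaction consumes both
        else:
            grid[i], grid[i + 1] = grid[i + 1], grid[i]  # stirring / diffusion
    history.append(grid.count('A'))
    return grid, history
```

Tracking the species counts over time yields the kinetic curves that a CA model of this kind is meant to predict; mass is conserved, and A and B are consumed in equal amounts.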

15 pages, 1361 KiB  
Article
Biomass Steam Gasification: A Comparison of Syngas Composition between a 1-D MATLAB Kinetic Model and a 0-D Aspen Plus Quasi-Equilibrium Model
by Vera Marcantonio, Andrea Monforti Ferrario, Andrea Di Carlo, Luca Del Zotto, Danilo Monarca and Enrico Bocci
Computation 2020, 8(4), 86; https://0-doi-org.brum.beds.ac.uk/10.3390/computation8040086 - 05 Oct 2020
Cited by 20 | Viewed by 5106
Abstract
Biomass is one of the most widespread and accessible energy sources, and steam gasification is one of the most important processes for converting biomass into combustible gases. However, to date, the differences between the results of the main models used to predict the composition of the producer gas from steam gasification have not been analyzed in detail. Indeed, gasification, involving heterogeneous reactions, does not reach thermodynamic equilibrium, so thermodynamic models with experimental corrections and kinetic models are mainly applied. Thus, this paper compares a 1-D kinetic model developed in MATLAB, combining hydrodynamics and reaction kinetics, with a 0-D thermodynamic model developed in Aspen Plus, based on Gibbs free energy minimization applying the quasi-equilibrium approach and calibrated by experimental data. After comparing the results of the models against experimental data at two S/B ratios, a sensitivity analysis over a wide range of S/B ratios was performed. The experimental comparison and the sensitivity analysis show that the two models provide sufficiently similar data in terms of the main components of the syngas, although with increasing S/B the thermodynamic model shows a greater increase of H2 and CO2 and a smaller decrease of CH4 and CO with respect to the kinetic model and the experimental data. Thus, the thermodynamic model, despite being calibrated by experimental data, should be used mainly to analyze global plant performance, where the discrepancy is of reduced importance from a global energy and plant perspective, while the more complex kinetic model should be used when a more precise gas composition is needed and, of course, for reactor design. Full article
(This article belongs to the Section Computational Engineering)
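A minimal illustration of the equilibrium side of the comparison above is the water-gas shift reaction, CO + H2O ⇌ CO2 + H2, which largely controls the H2/CO split in syngas. The sketch below solves for the equilibrium extent at a given temperature using Moe's commonly quoted correlation for Kp (an assumption; the paper's Aspen Plus model minimizes Gibbs free energy over the full species set instead).

```python
import math

def wgs_equilibrium_extent(n_CO, n_H2O, n_CO2, n_H2, T):
    """Equilibrium extent x (mol) of CO + H2O <-> CO2 + H2 at temperature T
    in kelvin, using the assumed correlation Kp = exp(4577.8/T - 4.33).
    The reaction is equimolar, so total moles cancel out of Kp; the extent
    is found by bisection on the monotone residual f(x)."""
    Kp = math.exp(4577.8 / T - 4.33)
    lo = -min(n_CO2, n_H2) + 1e-9
    hi = min(n_CO, n_H2O) - 1e-9

    def f(x):
        return (n_CO2 + x) * (n_H2 + x) - Kp * (n_CO - x) * (n_H2O - x)

    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

For an equimolar CO/H2O feed at 1073 K the correlation gives Kp ≈ 0.94, so just under half of the CO is shifted to CO2 and H2.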

14 pages, 641 KiB  
Article
Causal Modeling of Twitter Activity during COVID-19
by Oguzhan Gencoglu and Mathias Gruber
Computation 2020, 8(4), 85; https://0-doi-org.brum.beds.ac.uk/10.3390/computation8040085 - 29 Sep 2020
Cited by 27 | Viewed by 6371
Abstract
Understanding the characteristics of public attention and sentiment is an essential prerequisite for appropriate crisis management during adverse health events. This is even more crucial during a pandemic such as COVID-19, as the primary responsibility for risk management is not centralized in a single institution but distributed across society. While numerous studies have utilized Twitter data in a descriptive or predictive context during the COVID-19 pandemic, causal modeling of public attention has not been investigated. In this study, we propose a causal inference approach to discover and quantify causal relationships between pandemic characteristics (e.g., number of infections and deaths) and Twitter activity as well as public sentiment. Our results show that the proposed method can successfully capture the epidemiological domain knowledge and identify variables that affect public attention and sentiment. We believe our work contributes to the field of infodemiology by distinguishing events that correlate with public attention from events that cause public attention. Full article
(This article belongs to the Special Issue Computation to Fight SARS-CoV-2 (CoVid-19))
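The correlation-versus-causation distinction emphasized in the abstract above can be demonstrated with a toy confounding example: a hidden variable Z (say, epidemic severity) drives both an exposure X and tweet volume Y, so X and Y correlate even though X has no effect on Y. Stratifying on Z (backdoor adjustment) removes the spurious association. The variables and probabilities below are entirely synthetic.

```python
import random

def naive_and_adjusted_effect(n=20000, seed=3):
    """Toy confounding demo: binary confounder Z raises both X and Y; X has
    no direct effect on Y. Returns (naive, adjusted) mean differences in Y
    across X, where `adjusted` averages the difference within Z strata."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        z = rng.random() < 0.5
        x = rng.random() < (0.8 if z else 0.2)       # Z -> X
        y = (2.0 if z else 0.0) + rng.gauss(0, 1)    # Z -> Y, no X -> Y
        data.append((z, x, y))

    def mean(vals):
        return sum(vals) / len(vals)

    naive = (mean([y for _, x, y in data if x]) -
             mean([y for _, x, y in data if not x]))
    strata = []
    for zval in (True, False):
        d1 = [y for z, x, y in data if z == zval and x]
        d0 = [y for z, x, y in data if z == zval and not x]
        strata.append(mean(d1) - mean(d0))
    return naive, mean(strata)
```

The naive difference is strongly positive purely through Z, while the adjusted estimate is near zero, illustrating in miniature why the study's causal approach matters for Twitter data.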

13 pages, 1420 KiB  
Article
Development of a Parallel 3D Navier–Stokes Solver for Sediment Transport Calculations in Channels
by Gokhan Kirkil
Computation 2020, 8(4), 84; https://0-doi-org.brum.beds.ac.uk/10.3390/computation8040084 - 25 Sep 2020
Viewed by 2171
Abstract
We propose a method to parallelize a 3D incompressible Navier–Stokes solver that uses a fully implicit fractional-step method to simulate sediment transport in prismatic channels. The governing equations are transformed into generalized curvilinear coordinates on a non-staggered grid. To develop a parallel version of the code that can run on various platforms, in particular on PC clusters, the code was parallelized using the Message Passing Interface (MPI), one of the most flexible parallel programming libraries. Parallelization is accomplished by "message passing", whereby the code explicitly uses library calls to communicate between the individual processors of the machine (e.g., a PC cluster). As part of the parallelization effort, besides the Navier–Stokes solver, the deformable bed module used in simulations with loose beds was also parallelized. The flow, sediment transport, and bathymetry at equilibrium conditions were computed with the parallel and serial versions of the code for a 140-degree curved channel bend with a rectangular cross-section. A parallel simulation conducted on eight processors gives exactly the same results as the serial solver, and the parallel version of the solver showed good scalability. Full article
(This article belongs to the Section Computational Engineering)
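The claim above that the parallel and serial solvers agree exactly rests on the standard domain-decomposition pattern: each subdomain carries ghost ("halo") cells that are refreshed from its neighbours before every update. The serial Python sketch below mimics that pattern for a 1-D explicit diffusion solve split into two blocks; in real MPI code the halo update lines would be MPI_Send/MPI_Recv calls between neighbouring ranks.

```python
def halo_exchange_diffusion(n=40, steps=500, nu=0.4):
    """Serial sketch of the ghost-cell (halo) pattern behind MPI domain
    decomposition: a 1-D explicit diffusion solve split into two subdomains,
    each padded with one ghost cell per side. Outer ghosts are fixed at 0
    (Dirichlet boundaries); inner ghosts are refreshed every step."""
    half = n // 2
    left = [0.0] * (half + 2)           # interior cells 1..half + 2 ghosts
    right = [0.0] * (n - half + 2)
    left[half // 2 + 1] = 1.0           # initial condition: a spike
    for _ in range(steps):
        left[-1] = right[1]             # halo exchange (stands in for MPI
        right[0] = left[-2]             # point-to-point messages)
        for u in (left, right):
            new = u[:]
            for i in range(1, len(u) - 1):
                new[i] = u[i] + nu * (u[i - 1] - 2 * u[i] + u[i + 1])
            u[:] = new
    return left[1:-1] + right[1:-1]

def serial_diffusion(n=40, steps=500, nu=0.4):
    """Same solve on the undivided domain, for comparison."""
    u = [0.0] * (n + 2)
    u[n // 4 + 1] = 1.0
    for _ in range(steps):
        new = u[:]
        for i in range(1, n + 1):
            new[i] = u[i] + nu * (u[i - 1] - 2 * u[i] + u[i + 1])
        u = new
    return u[1:-1]
```

Because the halo refresh happens before each update, the decomposed solve performs exactly the same arithmetic as the single-domain one, so the two results match to the last bit, which is the same bitwise-agreement property the paper reports for its eight-processor run.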
