Computation doi: 10.3390/computation9030028

Authors: Malgorzata Peszynska Joseph Umhoefer Choah Shin

In this paper, we consider an important problem for modeling complex coupled phenomena in porous media at multiple scales. In particular, we consider flow and transport in the void space between the pores when the pore space is altered by new solid obstructions formed by microbial growth or reactive transport, and we are mostly interested in pore-coating and pore-filling type obstructions, observed in applications to biofilm in porous media and hydrate crystal formation, respectively. We consider the impact of these obstructions on the macroscopic properties of the porous medium, such as porosity, permeability and tortuosity, for which we build an experimental probability distribution with reduced models, which involves three steps: (1) generation of independent realizations of obstructions, followed by, (2) flow and transport simulations at pore-scale, and (3) upscaling. For the first step, we consider three approaches: (1A) direct numerical simulations (DNS) of the PDE model of the actual physical process called BN which forms the obstructions, and two non-DNS methods, which we call (1B) CLPS and (1C) LP. LP is a lattice Ising-type model, and CLPS is a constrained version of an Allen–Cahn model for phase separation with a localization term. Both LP and CLPS are model approximations of BN, and they seek local minima of some nonconvex energy functional, which provide plausible realizations of the obstructed geometry and are tuned heuristically to deliver either pore-coating or pore-filling obstructions. Our methods work with rock-void geometries obtained by imaging, but bypass the need for imaging in real-time, are fairly inexpensive, and can be tailored to other applications. The reduced models LP and CLPS are less computationally expensive than DNS, and can be tuned to the desired fidelity of the probability distributions of upscaled quantities.
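
Purely as an illustrative sketch (not the authors' LP model: the energy form, the parameters J and h, and the greedy descent used here are assumptions), an Ising-type lattice relaxation that grows obstructions inside a given void geometry can look like this:

```python
import random

def relax_obstructions(void, J=1.0, h=3.0, steps=20000, seed=0):
    # void: 2D list of 0/1; 1 = open pore space, 0 = solid rock (fixed).
    # Returns obs, where obs[i][j] = 1 marks a newly formed obstruction.
    rng = random.Random(seed)
    n, m = len(void), len(void[0])
    obs = [[0] * m for _ in range(n)]
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(m)
        if not void[i][j] or obs[i][j]:
            continue  # obstructions can only form in still-open void cells
        nbrs = [(i + di, j + dj)
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= i + di < n and 0 <= j + dj < m]
        like = sum(obs[a][b] for a, b in nbrs)        # neighboring obstructions
        wall = sum(1 - void[a][b] for a, b in nbrs)   # neighboring rock cells
        # energy decrease if this cell solidifies: an Ising-like clumping
        # term plus a wall-attraction field term
        gain = J * (2 * like - len(nbrs)) + h * wall
        if gain > 0:
            obs[i][j] = 1   # greedy downhill step toward a local minimum
    return obs
```

A large wall-attraction weight h makes growth hug the rock surface (pore-coating); dropping h and relying on the clumping term J instead would favor compact pore-filling aggregates, which need randomly seeded nuclei to start.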

Computation doi: 10.3390/computation9030027

Authors: Nattakarn Numpanviwat Pearanat Chuchard

The semi-analytical solution for transient electroosmotic flow through elliptic cylindrical microchannels is derived from the Navier-Stokes equations using the Laplace transform. The electroosmotic force expressed by the linearized Poisson-Boltzmann equation is considered the external force in the Navier-Stokes equations. The velocity field solution is obtained in the form of the Mathieu and modified Mathieu functions and it is capable of describing the flow behavior in the system when the boundary condition is either constant or varied. The fluid velocity is calculated numerically using the inverse Laplace transform in order to describe the transient behavior. Moreover, the flow rates and the relative errors on the flow rates are presented to investigate the effect of eccentricity of the elliptic cross-section. The investigation shows that, when the area of the channel cross-sections is fixed, the relative errors are less than 1% if the eccentricity is not greater than 0.5. As a result, an elliptic channel with the eccentricity not greater than 0.5 can be assumed to be circular when the solution is written in the form of trigonometric functions in order to avoid the difficulty in computing the Mathieu and modified Mathieu functions.

Computation doi: 10.3390/computation9030026

Authors: Amzin Domagała

In turbulent premixed flames, mixing at the molecular level of reactants and products on the flame surface is crucial to sustaining combustion. This mixing is characterized by the scalar dissipation rate, which may be broadly defined as the rate of micro-mixing at small scales. This term appears in many turbulent combustion methods, including the Conditional Moment Closure (CMC) and the Probability Density Function (PDF) approaches, and requires an accurate model. In this study, a mathematical closure for the conditional mean scalar dissipation rate, <Nc|ζ>, in the Reynolds-Averaged Navier–Stokes (RANS) context is proposed and tested against two different Direct Numerical Simulation (DNS) databases with different thermochemical and turbulence conditions. These databases consist of lean turbulent premixed V-flames of a CH4–air mixture and stoichiometric turbulent premixed flames of H2–air. The mathematical model successfully predicted the peak and the typical profile of <Nc|ζ> over the sample space ζ, and its predictions are consistent with an earlier study.

Computation doi: 10.3390/computation9030025

Authors: Elsa Garavaglia Raffaella Pavani Luca Sgambi

Within the context of structure deterioration studies, we propose a new numerical method based on the use of fragility curves. In particular, the present work aims to theoretically study the degradation of concrete bridge structures subjected to aggressive environments. A simple probabilistic method based on fragility curves is presented which allows the forecasting of the lifetime of the considered structural system and the best monitoring time. The method was applied to investigate the degradation of a concrete bridge used as a case study. A Monte Carlo numerical procedure was used to simulate the variation over time of the residual resistant section and the ultimate bending moment of the deck of the case study. Within this context, fragility curves are used as reliable indicators of possible monitoring scenarios. In comparison with other methods, the main advantage of the proposed approach is the small amount of computing time required to obtain rapid assessment of reliability and deterioration level of the considered structure.

Computation doi: 10.3390/computation9020024

Authors: Guillermo A. Martínez-Mascorro José R. Abreu-Pederzini José C. Ortiz-Bayliss Angel Garcia-Collantes Hugo Terashima-Marín

Crime generates significant losses, both human and economic. Every year, billions of dollars are lost due to attacks, crimes, and scams. Surveillance video camera networks generate vast amounts of data, and the surveillance staff cannot process all the information in real-time. Human sight has critical limitations. Among those limitations, visual focus is one of the most critical when dealing with surveillance. For example, in a surveillance room, a crime can occur in a different screen segment or on a distinct monitor, and the surveillance staff may overlook it. Our proposal focuses on shoplifting crimes by analyzing situations that an average person will consider as typical conditions, but may eventually lead to a crime. While other approaches identify the crime itself, we instead model suspicious behavior—the one that may occur before the build-up phase of a crime—by detecting precise segments of a video with a high probability of containing a shoplifting crime. By doing so, we provide the staff with more opportunities to act and prevent crime. We implemented a 3DCNN model as a video feature extractor and tested its performance on a dataset composed of daily action and shoplifting samples. The results are encouraging as the model correctly classifies suspicious behavior in most of the scenarios where it was tested. For example, when classifying suspicious behavior, the best model generated in this work obtains precision and recall values of 0.8571 and 1 in one of the test scenarios, respectively.

Computation doi: 10.3390/computation9020023

Authors: Narisara Khamsing Kantimarn Chindaprasert Rapeepan Pitakaso Worapot Sirirak Chalermchat Theeraviriya

This research presents a solution to the family tourism route problem by considering daily time windows. To find the best travel routing, the modified adaptive large neighborhood search (MALNS) method, using four destruction and four reconstruction operators, is applied. The solution-finding performance of the MALNS method is compared with an exact method running on the Lingo program. As shown by various solutions, the MALNS method can balance travel routing designs, including when many tourist attractions are present in each path. For small problem sizes, the results of the MALNS method are not significantly different from those of the exact method. For medium and large problem sizes, the MALNS method shows higher performance and a smaller processing time for finding solutions. The average total travel cost obtained by the MALNS method differs from that of the exact method by approximately 0.18% for medium and 0.05% for large problems, and the average travel satisfaction rating by approximately 0.24% and 0.21%, respectively. Moreover, the MALNS method requires far less processing time, reducing computation time by approximately 99.95% relative to the exact method. In this case study, the MALNS result shows a suitable balance between satisfaction and the number of tourism places, accounting for differences in satisfaction between family members of different ages and genders in tour route planning. The proposed methodology yields effective, high-quality solutions, suggesting that the MALNS method has the potential to be a highly competitive algorithm. According to the empirical results, the MALNS method would be useful for creating route plans for tourism organizations that support travel route selection for family tours in Thailand.

Computation doi: 10.3390/computation9020022

Authors: Jitsin Piyawatthanachot Narongsak Yotha Kanit Mukdasai

The problem of delay-range-dependent stability analysis for linear systems with distributed time-varying delays and nonlinear perturbations is studied without using the model transformation and delay-decomposition approach. Less conservative stability criteria are obtained for the systems by constructing a new augmented Lyapunov–Krasovskii functional and various inequalities, and are presented in terms of linear matrix inequalities (LMIs). Four numerical examples are given to illustrate the effectiveness of the results and their improvement over other methods.

Computation doi: 10.3390/computation9020021

Authors: Leonid V. Moroz Volodymyr V. Samotyy Oleh Y. Horyachyy

Many low-cost platforms that support floating-point arithmetic, such as microcontrollers and field-programmable gate arrays, do not include fast hardware or software methods for calculating the square root and/or reciprocal square root. Typically, such functions are implemented using direct lookup tables or polynomial approximations, with a subsequent application of the Newton–Raphson method. Other, more complex solutions include high-radix digit-recurrence and bipartite or multipartite table-based methods. In contrast, this article proposes a simple modification of the fast inverse square root method that has high accuracy and relatively low latency. Algorithms are given in C/C++ for single- and double-precision numbers in the IEEE 754 format for both square root and reciprocal square root functions. These are based on the switching of magic constants in the initial approximation, depending on the input interval of the normalized floating-point numbers, in order to minimize the maximum relative error on each subinterval after the first iteration—giving 13 correct bits of the result. Our experimental results show that the proposed algorithms provide a fairly good trade-off between accuracy and latency after two iterations for numbers of type float, and after three iterations for numbers of type double when using fused multiply–add instructions—giving almost complete accuracy.
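
The bit-level trick behind such methods can be sketched as follows: a Python transcription of the classic single-constant fast inverse square root (using the well-known constant 0x5F3759DF; the paper's variant instead switches between interval-specific magic constants, which are not reproduced here), followed by Newton–Raphson refinement:

```python
import struct

def fast_rsqrt(x: float) -> float:
    # Reinterpret the float32 bit pattern as an integer (emulating C
    # type-punning), apply the magic-constant shift, reinterpret back.
    i = struct.unpack('I', struct.pack('f', x))[0]
    i = 0x5F3759DF - (i >> 1)                     # initial approximation
    y = struct.unpack('f', struct.pack('I', i))[0]
    # Two Newton-Raphson iterations refine y toward 1/sqrt(x)
    y = y * (1.5 - 0.5 * x * y * y)
    y = y * (1.5 - 0.5 * x * y * y)
    return y
```

After the first iteration the classic constant yields roughly 11-12 correct bits; the interval-switched constants described in the abstract push this to 13.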

Computation doi: 10.3390/computation9020020

Authors: Despoina Mouratidis Maria Nefeli Nikiforos Katia Lida Kermanidis

The past decade has seen the rapid spread of large volumes of online information among an increasing number of social network users, a phenomenon often exploited by malicious users and entities who forge, distribute, and reproduce fake news and propaganda. In this paper, we present a novel approach to the automatic detection of fake news on Twitter that involves (a) pairwise text input, (b) a novel deep neural network learning architecture that allows for flexible input fusion at various network layers, and (c) various input modes, such as word embeddings and both linguistic and network account features. Furthermore, tweets are innovatively separated into news headers and news text, and an extensive experimental setup performs classification tests using both. Our main results show high overall accuracy in fake news detection. The proposed deep learning architecture outperforms state-of-the-art classifiers while using fewer features and embeddings from the tweet text.

Computation doi: 10.3390/computation9020019

Authors: Dimitrios Myridakis Paul Myridakis Athanasios Kakarountas

Recently, there has been a sharp increase in the production of smart devices and related networks, and consequently in the Internet of Things. One increasingly critical concern for these devices is their protection against attacks, given their heterogeneity and the absence of international standards to achieve this goal. Thus, these devices are becoming vulnerable, with many of them not even showing any signs of malfunction or suspicious behavior. The aim of the present work is to introduce a circuit that is connected in series with the power supply of a smart device, specifically an IP camera, and allows analysis of its behavior. The detection circuit operates in real time, sampling the supply current of the device, processing the sampled values, and finally indicating any detection of abnormal activity based on a comparison with normal operating conditions. By utilizing techniques borrowed from simple power analysis side-channel attacks, it was possible to detect deviations from the expected operation of the IP camera as they occurred due to intentional attacks, quarantining the monitored device from the rest of the network. The circuit is analyzed and a low-cost implementation (under US$5) is illustrated. It achieved 100% success in the test results, showing excellent performance in intrusion detection.

Computation doi: 10.3390/computation9020018

Authors: Fleurianne Bertrand Emilie Pirch

This paper investigates numerical properties of a flux-based finite element method for the discretization of a SEIQRD (susceptible-exposed-infected-quarantined-recovered-deceased) model for the spread of COVID-19. The model is largely based on the SEIRD (susceptible-exposed-infected-recovered-deceased) models developed in recent works, extended by a quarantined compartment of the living population, and the resulting first-order system of coupled PDEs is solved by a least-squares meso-scale method. We incorporate data on political measures for the containment of the spread gathered during the course of the year 2020 and develop an indicator that influences the predictions calculated by the method. The numerical experiments conducted show promising accuracy in predicting the space-time behavior of the virus compared to real disease-spreading data.

Computation doi: 10.3390/computation9020017

Authors: Halima Saker Rainer Machné Jörg Fallmann Douglas B. Murray Ahmad M. Shahin Peter F. Stadler

The problem of segmenting linearly ordered data is frequently encountered in time-series analysis, computational biology, and natural language processing. Segmentations obtained independently from replicate data sets or from the same data with different methods or parameter settings pose the problem of computing an aggregate or consensus segmentation. This Segmentation Aggregation problem amounts to finding a segmentation that minimizes the sum of distances to the input segmentations. It is again a segmentation problem and can be solved by dynamic programming. The aim of this contribution is (1) to gain a better mathematical understanding of the Segmentation Aggregation problem and its solutions and (2) to demonstrate that consensus segmentations have useful applications. Extending previously known results we show that for a large class of distance functions only breakpoints present in at least one input segmentation appear in the consensus segmentation. Furthermore, we derive a bound on the size of consensus segments. As show-case applications, we investigate a yeast transcriptome and show that consensus segments provide a robust means of identifying transcriptomic units. This approach is particularly suited for dense transcriptomes with polycistronic transcripts, operons, or a lack of separation between transcripts. As a second application, we demonstrate that consensus segmentations can be used to robustly identify growth regimes from sets of replicate growth curves.
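
To make the flavor of the problem concrete, here is a toy consensus under the simplest member of such distance families, the symmetric difference of breakpoint sets, for which the optimum reduces to a per-breakpoint majority vote (an illustrative special case, not the paper's general dynamic program):

```python
from collections import Counter

def consensus_segmentation(segmentations):
    # Each segmentation is given as its set of breakpoint positions.
    # Under the symmetric-difference distance, keeping a breakpoint iff it
    # occurs in more than half of the inputs minimizes the total distance,
    # and clearly only breakpoints present in at least one input appear,
    # matching the structural result stated in the abstract.
    counts = Counter(b for s in segmentations for b in set(s))
    k = len(segmentations)
    return sorted(b for b, c in counts.items() if 2 * c > k)
```

For richer distance functions the per-breakpoint decisions interact, which is where the dynamic programming formulation becomes necessary.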

Computation doi: 10.3390/computation9020016

Authors: George Tsakalidis Kostas Georgoulakos Dimitris Paganias Kostas Vergidis

Business process optimization (BPO) has become an increasingly attractive subject in the wider area of business process intelligence and is considered the problem of composing feasible business process designs with optimal attribute values, such as execution time and cost. Although many approaches have produced promising results regarding the enhancement of attribute performance, little has been done to reduce the computational complexity due to the size of the problem. The proposed approach introduces an elaborate preprocessing phase as a component of an established optimization framework (bpoF) that applies evolutionary multi-objective optimization algorithms (EMOAs) to generate a series of diverse optimized business process designs based on specific process requirements. The preprocessing phase follows a systematic rule-based algorithmic procedure for reducing the library size of candidate tasks. The experimental results on synthetic data demonstrate a considerable reduction of the library size and a positive influence on the performance of the EMOAs, expressed as the generation of an increasing number of nondominated solutions. An important feature of the proposed phase is that the preprocessing effects are explicitly measured before the EMOAs are applied; thus, the library size reduction is directly correlated with the improved performance of the EMOAs in terms of average execution time and nondominated solution generation. The work presented in this paper intends to pave the way for addressing the abiding optimization challenges related to the computational complexity of the search space by working on the problem specification at an earlier stage.

Computation doi: 10.3390/computation9020015

Authors: Maria Eftychia Angelaki Theodoros Karvounidis Christos Douligeris

This paper proposes the use of motivational features in mobile applications to support adolescents’ education in sustainable urban travel behavior, so that they become more mindful of their environmental impact. To this effect, existing persuasive strategies are adopted, implemented, and integrated into six simulated screens of a prospective mobile application named ESTA, designed for this purpose through a user-centered design process. These screens are then assessed by secondary education pupils, and the outcome of this assessment is analyzed and presented in detail. The analysis takes into consideration the possibility of daily use of ESTA in order for adolescents to foster an eco-friendly and healthy transit attitude and make more sustainable mobility choices that will follow them throughout their lives. The potential effectiveness of ESTA is demonstrated via two use cases: the “Daily Commuting” case is addressed towards adolescents who want to move within their area of residence or neighborhood following their daily routine and activities, while the “Weekend Entertainment” case is addressed towards adolescents who want to move using the available public transport modes, encouraging them to adopt greener weekend travel habits.

Computation doi: 10.3390/computation9020014

Authors: Ezzeddine Touti Hossem Zayed Remus Pusca Raphael Romary

Renewable energy systems have been extensively developed, and they are likely to become widespread in the future because they can deliver energy at a competitive price and generally do not cause environmental pollution. However, stand-alone energy systems may not be practical for satisfying electric load demands, especially in places with unsteady, highly unpredictable wind speeds. Hybrid energy systems seem to be a more economically feasible alternative to satisfy the energy demands of several isolated clients worldwide. The combination of these systems makes it possible to guarantee power stability, efficiency, and reliability. The aim of this paper is to present a comprehensive analysis and propose a technical solution for integrating a self-excited induction generator in a low-power multisource system. To avoid voltage collapse and machine demagnetization, the various machine parameters have to be identified. This procedure allows a safe operating area to be delimited, where the best stability of the machine can be obtained; hence, the load variation interval is determined. An improvement of the induction generator stability is then analyzed. Simulation results are validated through experimental tests.

Computation doi: 10.3390/computation9020013

Authors: Ehsan Reyhanian Benedikt Dorschner Ilya Karlin

We investigate a kinetic model for compressible non-ideal fluids. The model imposes the local thermodynamic pressure through a rescaling of the particles’ velocities, which accounts for both long- and short-range effects and hence full thermodynamic consistency. The model is fully Galilean invariant and treats mass, momentum, and energy as local conservation laws. The analysis and derivation of the hydrodynamic limit is followed by an assessment of accuracy and robustness through benchmark simulations ranging from the Joule–Thomson effect to phase change and high-speed flows. In particular, we show the direct simulation of the inversion line of a van der Waals gas, followed by simulations of phase change such as the one-dimensional evaporation of a saturated liquid, nucleate and film boiling, and eventually we investigate the stability of a perturbed strong shock front in two different fluid media. In all of the cases, we find excellent agreement with the corresponding theoretical analysis and experimental correlations. We show that our model can operate in the entire phase diagram, including super- as well as sub-critical regimes, and inherently captures phase-change phenomena.

Computation doi: 10.3390/computation9020012

Authors: Evangelos Maltezos Athanasios Douklias Aris Dadoukis Fay Misichroni Lazaros Karagiannidis Markos Antonopoulos Katerina Voulgary Eleftherios Ouzounoglou Angelos Amditis

Situational awareness is a critical aspect of the decision-making process in emergency response and civil protection and requires the availability of up-to-date information on the current situation. In this context, the related research should focus not only on developing innovative single solutions for (real-time) data collection, but also on transforming data into information so that the latter can serve as a basis for action and decision making. Unmanned systems (UxV), as data acquisition platforms and autonomous or semi-autonomous measurement instruments, have become attractive for many applications in emergency operations. This paper proposes a multipurpose situational awareness platform by exploiting advanced on-board processing capabilities and efficient computer vision, image processing, and machine learning techniques. The main pillars of the proposed platform are: (1) a modular architecture that exploits unmanned aerial vehicle (UAV) and terrestrial assets; (2) deployment of on-board data capturing and processing; (3) provision of geolocalized object detection and tracking events; and (4) a user-friendly operational interface for standalone deployment and seamless integration with external systems. Experimental results are provided using RGB and thermal video datasets and applying novel object detection and tracking algorithms. The results show the utility and the potential of the proposed platform, and future directions for extension and optimization are presented.

Computation doi: 10.3390/computation9020011

Authors: Robin Trunk Timo Weckerle Nicolas Hafen Gudrun Thäter Hermann Nirschl Mathias J. Krause

The simulation of surface-resolved particles is a valuable tool to gain more insight into the behaviour of particulate flows in engineering processes. In this work, the homogenized lattice Boltzmann method, as one approach for such direct numerical simulations, is revisited and validated for different scenarios. Those include a 3D case of a settling sphere for various Reynolds numbers. On the basis of this dynamic case, different algorithms for the calculation of the momentum exchange between fluid and particle are evaluated, along with different forcing schemes. The result is an updated version of the method, which is in good agreement with benchmark values based on simulations and experiments. The method is then applied to the investigation of the tubular pinch effect discovered by Segrè and Silberberg and the simulation of hindered settling. For the latter, the computational domain is equipped with periodic boundaries for both fluid and particles. The results are compared to the model by Richardson and Zaki and are found to be in good agreement. As no explicit contact treatment is applied, this suggests sufficient momentum transfer between particles via the surrounding fluid. The implementations are based on the open-source C++ lattice Boltzmann library OpenLB.

Computation doi: 10.3390/computation9020010

Authors: Adhika Satyadharma Harinaldi

Although the grid convergence index (GCI) is widely used for the estimation of discretization error in computational fluid dynamics, it still has some problems. These problems are mainly rooted in the usage of the observed order of convergence within the model, a fundamental variable that the model is built upon. To improve the model, a new perspective must be taken. By analyzing the behavior of the gradient within simulation data, a gradient-based model was created. The performance of this model is tested on its accuracy, its precision, and its effect on the computational time of a simulation. The testing is conducted on a dataset of 36 simulated variables, simulated using the method of manufactured solutions, with an average of 26.5 meshes per case. The results show that the new gradient-based method is more accurate and more precise than the GCI. This allows a coarser mesh to be used for the analysis; thus, it has the potential to reduce the overall computational time by at least 25% and also makes discretization error analysis more accessible for general usage.
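
For reference, the standard GCI that the new method is compared against can be computed from solutions on three systematically refined grids (Roache's formulation; the safety factor 1.25 is the conventional choice for three-grid studies):

```python
import math

def gci_fine(f1, f2, f3, r=2.0, Fs=1.25):
    # f1, f2, f3: solution values on fine, medium, and coarse grids with a
    # constant refinement ratio r. Returns the GCI on the fine grid.
    p = math.log(abs((f3 - f2) / (f2 - f1))) / math.log(r)  # observed order
    e21 = abs((f2 - f1) / f1)                               # relative error
    return Fs * e21 / (r ** p - 1.0)
```

Note that the observed order p sits in the exponent of the denominator, which is exactly why errors in estimating p propagate strongly into the index, the weakness the abstract targets.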

Computation doi: 10.3390/computation9020009

Authors: Konstantin Isupov

The residue number system (RNS) is known for its parallel arithmetic and has been used in recent decades in various important applications, from digital signal processing and deep neural networks to cryptography and high-precision computation. However, comparison, sign identification, overflow detection, and division are still hard to implement in RNS. For such operations, most of the methods proposed in the literature only support small dynamic ranges (up to several tens of bits), so they are only suitable for low-precision applications. We recently proposed a method that supports arbitrary moduli sets with cryptographically sized dynamic ranges, up to several thousands of bits. The practical interest of our method, compared to existing methods, is that it relies only on very fast standard floating-point operations, so it is suitable for multiple-precision applications and can be efficiently implemented on many general-purpose platforms that support IEEE 754 arithmetic. In this paper, we make further improvements to this method and demonstrate that it can successfully be applied to implement efficient data-parallel primitives operating in the RNS domain, namely, finding the maximum element of an array of RNS numbers on graphics processing units. Our experimental results on an NVIDIA RTX 2080 GPU show that for random residues and a 128-moduli set with a 2048-bit dynamic range, the proposed implementation reduces the running time by a factor of 39 and the memory consumption by a factor of 13 compared to an implementation based on mixed-radix conversion.
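
The floating-point idea can be illustrated with a toy fractional-CRT magnitude function (a naive sketch with small moduli; the paper's method adds the careful error control needed for large moduli sets, which is omitted here):

```python
from math import prod

def rns_frac(x, moduli):
    # Approximate relative magnitude X/M of an RNS number x = (x_1, ..., x_k)
    # via the fractional CRT sum frac( sum_i x_i * (M_i^{-1} mod m_i) / m_i ),
    # where M = m_1 * ... * m_k and M_i = M / m_i. Floating point only.
    M = prod(moduli)
    s = 0.0
    for xi, mi in zip(x, moduli):
        wi = pow(M // mi, -1, mi)   # modular inverse of M_i modulo m_i
        s += xi * wi / mi
    return s % 1.0                  # in [0, 1): larger means larger X

def rns_max(numbers, moduli):
    # Compare RNS numbers without converting them to binary: pick the one
    # whose approximate relative magnitude is largest.
    return max(numbers, key=lambda x: rns_frac(x, moduli))
```

For example, with moduli (3, 5, 7) the residues (2, 0, 1) encode 50 and (0, 0, 2) encode 30, and `rns_max` selects the former without ever reconstructing 50.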

Computation doi: 10.3390/computation9020008

Authors: Chendi Cao Mitchell Neilsen

Dam embankment breaches caused by overtopping or internal erosion can impact both life and property downstream. It is important to accurately predict the amount of erosion, peak discharge, and the resulting downstream flow. This paper presents a new model based on the material point method to simulate soil and water interaction and predict failure rate parameters. The model assumes that the dam consists of a homogeneous embankment constructed with cohesive soil, and water inflow is defined by a hydrograph using other readily available reach routing software. The model uses continuum mixture theory to describe each phase, where each species individually obeys the conservation of mass and momentum. A two-grid material point method is used to discretize the governing equations. The Drucker–Prager plastic flow model, combined with a Hencky-strain-based hyperelasticity model, is used to compute soil stress. Water is modeled as a weakly compressible fluid. Analysis of the model demonstrates the efficacy of our approach for existing examples of overtopping dam breach, dam failures, and collisions. Simulation results from our model are compared with a physics-based breach model, WinDAM C. The new model can capture water and soil interaction at a finer granularity than WinDAM C and gradually removes the granular material during the breach process. The impact of material properties on the dam breach process is also analyzed.

Computation doi: 10.3390/computation9010007

Authors: Computation Editorial Office Computation Editorial Office

Peer review is the driving force of journal development, and reviewers are gatekeepers who ensure that Computation maintains its standards for the high quality of its published papers [...]

Computation doi: 10.3390/computation9010006

Authors: Maria Eleni Skarkala Manolis Maragoudakis Stefanos Gritzalis Lilian Mitrou

Distributed medical, financial, or social databases are analyzed daily for the discovery of patterns and useful information. Privacy concerns have emerged, as some database segments contain sensitive data. Data mining techniques are used to parse, process, and manage enormous amounts of data while ensuring the preservation of private information. Cryptography, as shown by previous research, is the most accurate approach to acquiring knowledge while maintaining privacy. In this paper, we present an extension of a privacy-preserving data mining algorithm, thoroughly designed and developed for both horizontally and vertically partitioned databases, which contain either nominal or numeric attribute values. The proposed algorithm exploits a multi-candidate election schema to construct a privacy-preserving tree-augmented naive Bayesian classifier, a more robust variation of the classical naive Bayes classifier. The security analysis shows that the use of the Paillier cryptosystem and its distinctive homomorphic primitive ensures privacy and that the proposed algorithm provides strong defences against common attacks. Experiments on real-world databases demonstrate the preservation of private data while mining processes occur and the efficient handling of both database partition types.
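
For intuition, the additive homomorphism of the Paillier cryptosystem that such privacy-preserving constructions rely on can be shown in a few lines (a toy sketch with insecure, tiny primes; the multi-candidate election schema and the classifier protocol themselves are not reproduced):

```python
import random
from math import gcd

# Toy Paillier keypair (illustrative, insecure key size).
p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
g = n + 1                                       # standard generator choice

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)             # precomputed decryption factor

def enc(m):
    # Enc(m) = g^m * r^n mod n^2 for a random r coprime to n.
    r = random.randrange(2, n)
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Multiplying ciphertexts adds the underlying plaintexts:
assert dec((enc(17) * enc(25)) % n2) == 17 + 25
```

In a distributed setting each party contributes an encrypted value and only the aggregate is ever decrypted, which is the kind of property election-style schemas build on.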

Computation doi: 10.3390/computation9010005

Authors: Maria Vasilyeva Dmitry Ammosov Vasily Vasil’ev

In this work, we consider a mathematical model and finite element implementation of heat transfer and the mechanics of soils with phase change. We present the construction of a simplified mathematical model based on the definition of water and ice volume fractions as functions of temperature. In the presented model, soil deformations occur due to porosity growth driven by the density difference between ice and water. We consider a finite element discretization of the presented thermoelastic model with implicit time approximation. The implementation of the basic mathematical model is performed using the FEniCS finite element library and is openly available for download. The results of the numerical investigation are presented for two-dimensional and three-dimensional model problems for two test cases in three different geometries. We consider algorithms with linearization from the previous time layer (one Picard iteration) and the Picard iterative method. Computational time is presented together with the total number of nonlinear iterations. A numerical investigation of the convergence of the nonlinear iterations is presented for different time step sizes, where we calculate relative errors for temperature and displacements between the current solution and a reference solution with the largest number of time layers. Numerical results illustrate the influence of the porosity change, due to the phase change of pore water into ice, on the deformation of the soils. We observed good numerical convergence of the presented implementation with a small number of nonlinear iterations, which depends on the time step size.
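The Picard strategy mentioned above (freezing the nonlinear coefficient at the previous iterate and re-solving the linear system until the relative change is small) can be illustrated on a much simpler problem than the paper's thermoelastic model: one implicit time step of a 1-D heat equation with a temperature-dependent conductivity. All values here are illustrative, not taken from the paper:

```python
import numpy as np

def k(u):
    # Illustrative nonlinear conductivity.
    return 1.0 + u ** 2

def picard_step(u_old, dt=0.01, tol=1e-10, max_iter=50):
    """One implicit step of u_t = (k(u) u_x)_x on [0,1], Dirichlet BCs,
    with k frozen at the previous Picard iterate."""
    n = len(u_old)
    h = 1.0 / (n - 1)
    u = u_old.copy()
    its = 0
    for its in range(1, max_iter + 1):
        km = 0.5 * (k(u[:-1]) + k(u[1:]))      # face conductivities
        A = np.zeros((n, n))
        rhs = u_old.copy()
        A[0, 0] = A[-1, -1] = 1.0              # Dirichlet boundary rows
        for i in range(1, n - 1):
            A[i, i - 1] = -dt * km[i - 1] / h ** 2
            A[i, i + 1] = -dt * km[i] / h ** 2
            A[i, i] = 1.0 + dt * (km[i - 1] + km[i]) / h ** 2
        u_new = np.linalg.solve(A, rhs)
        # Relative error between successive iterates, as in the paper's study.
        err = np.linalg.norm(u_new - u) / max(np.linalg.norm(u_new), 1e-14)
        u = u_new
        if err < tol:
            break
    return u, its

x = np.linspace(0.0, 1.0, 41)
u0 = np.sin(np.pi * x)
u1, iters = picard_step(u0)
```

Taking a single pass of this loop corresponds to the "linearization from the previous time layer" variant; iterating to tolerance is the full Picard method.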

Computation doi: 10.3390/computation9010004

Authors: Wenhuan Zeng Anupam Gautam Daniel H. Huson

The current COVID-19 pandemic, caused by the rapid worldwide spread of the SARS-CoV-2 virus, is having severe consequences for human health and the world economy. The virus affects different individuals differently, with many infected patients showing only mild symptoms and others showing critical illness. To lessen the impact of the epidemic, one problem is to determine which factors play an important role in a patient's progression of the disease. Here, we construct an enhanced COVID-19 structured dataset from more than one source, using natural language processing to add local weather conditions and country-specific research sentiment. The enhanced structured dataset contains 301,363 samples and 43 features, and we applied both machine learning and deep learning algorithms to it in order to forecast a patient's survival probability. In addition, we import alignment sequence data to improve the performance of the model. Application of Extreme Gradient Boosting (XGBoost) on the enhanced structured dataset achieves 97% accuracy in predicting a patient's survival, with climatic factors, followed by age, showing the greatest importance. Similarly, the application of a Multi-Layer Perceptron (MLP) achieves 98% accuracy. This work suggests that enhancing the available data, mostly basic information on patients, to include additional, potentially important features, such as weather conditions, is useful. The explored models suggest that textual weather descriptions can improve outcome forecasts.

Computation doi: 10.3390/computation9010003

Authors: Sima Sarv Ahrabi Michele Scarpiniti Enzo Baccarelli Alireza Momenzadeh

In parallel with the vast medical research on the clinical treatment of COVID-19, an important action for keeping the disease under control is to carefully monitor the patients. Detection of COVID-19 relies mostly on viral tests; however, the study of X-rays is helpful due to their ready availability. Various studies employ Deep Learning (DL) paradigms aimed at reinforcing the radiography-based recognition of lung infection by COVID-19. In this regard, we compare the noteworthy approaches devoted to the binary classification of infected images using DL techniques, and we also propose a variant of a convolutional neural network (CNN) with optimized parameters, which performs very well on a recent dataset of COVID-19. The proposed model's effectiveness is demonstrated to be of considerable importance due to its uncomplicated design, in contrast to other presented models. In our approach, we randomly set several images of the utilized dataset aside as a hold-out set; the model detects most of the COVID-19 X-rays correctly, with an excellent overall accuracy of 99.8%. In addition, the significance of the results obtained by testing different datasets of diverse characteristics (which, more specifically, are not used in the training process) demonstrates the effectiveness of the proposed approach, with an accuracy of up to 93%.

Computation doi: 10.3390/computation9010002

Authors: Călin-Ioan Gheorghiu

We are concerned with the study of some classical spectral collocation methods, mainly Chebyshev and sinc, as well as with the new software system Chebfun, in computing high-order eigenpairs of singular and regular Schrödinger eigenproblems. We want to highlight both the qualities and the shortcomings of these methods and evaluate them in conjunction with the usual ones. In order to resolve a boundary singularity, we use Chebfun with domain truncation. Although spectral collocation is applicable, a special technique to introduce boundary conditions, as well as a coordinate transform that maps an unbounded domain to a finite one, are the special ingredients. A challenging set of "hard" benchmark problems, for which the usual numerical methods (finite differences, finite elements, shooting, etc.) fail, were analyzed. In order to separate "good" and "bad" eigenvalues, we estimated the drift of the set of eigenvalues of interest with respect to the order of approximation and/or the scaling of the domain parameter. This automatically provides us with a measure of the error within which the eigenvalues are computed and a hint on numerical stability. We pay particular attention to problems with almost multiple eigenvalues as well as to problems with a mixed spectrum.
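The basic ingredients named above (a Chebyshev differentiation matrix plus domain truncation of an unbounded problem) can be sketched on the simplest benchmark, the quantum harmonic oscillator, whose exact eigenvalues are 1, 3, 5, ... This is a generic Trefethen-style collocation sketch, not the paper's Chebfun computations; the truncation length L and order N are illustrative:

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix and nodes on [-1, 1] (Trefethen-style)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

# Harmonic oscillator -u'' + y^2 u = E u, truncated to y in [-L, L], u(+-L) = 0.
L, N = 6.0, 48
D, x = cheb(N)
y = L * x
D2 = (D @ D) / L ** 2                     # chain rule for the affine map y = L x
H = -D2[1:-1, 1:-1] + np.diag(y[1:-1] ** 2)   # drop boundary rows (Dirichlet)
E = np.sort(np.linalg.eigvals(H).real)
# The lowest computed eigenvalues approximate the exact spectrum 1, 3, 5, 7, ...
```

The "drift" diagnostic the abstract describes amounts to repeating this computation for several N and L and watching which eigenvalues stay put.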

Computation doi: 10.3390/computation9010001

Authors: Anton Kasprzhitskii Georgy Lazorenko Tatiana Nazdracheva Victor Yavna

This research evaluates the inhibitory effect of L-amino acids (AAs) with different side chain lengths on Fe (100) surfaces using Monte Carlo (MC) simulation. A quantitative and qualitative description of the adsorption behavior of AAs on the iron surface has been carried out. Calculations have shown that the absolute values of the adsorption energy of L-amino acids increase with side chain prolongation; they are also determined by the presence of heteroatoms. The maximum absolute value of the adsorption energy of AAs on the iron surface, in accordance with the side chain classification, increases in the following sequence: Glu (acidic) < Gln (polar) < Trp (nonpolar) < Arg (basic). AAs from the nonpolar and basic groups have the best adsorption ability on the iron surface, which indicates their highest inhibitory efficiency according to the results of the MC simulation. The calculation results agree with the experimental data.

Computation doi: 10.3390/computation8040107

Authors: Kyle Stevens Thien Tran-Duc Ngamta Thamwattana James M. Hill

The Lennard–Jones potential and a continuum approach can be used to successfully model interactions between various regularly shaped molecules and nanostructures. For molecules of a single atomic species, the interaction can be approximated by assuming a uniform distribution of atoms over surfaces or volumes, which gives rise to a constant atomic density either over or throughout the molecule. However, for heterogeneous molecules, which comprise more than one type of atom, the situation is more complicated. Thus far, two extended modeling approaches have been considered for heterogeneous molecules, namely a multi-surface semi-continuous model and a fully continuous model with average smearing of the atomic contribution. In this paper, we propose yet another modeling approach using a single continuous surface, but replacing the atomic density and the attractive and repulsive constants in the Lennard–Jones potential with functions that depend on the heterogeneity across the molecules, and the new model is applied to study the adsorption of coronene onto a graphene sheet. Results are compared between the new model, the two other existing approaches, and molecular dynamics simulations performed using the LAMMPS molecular dynamics simulator. We find that the new approach is superior to the other continuum models and provides excellent agreement with molecular dynamics simulations.
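The simplest instance of the continuum approach described above is a homogeneous case: smearing the 12-6 Lennard–Jones potential over an infinite plane of uniform areal density, which integrates to the closed-form "10-4" surface potential. The sketch below (with reduced units ε = σ = ρ = 1, chosen for illustration, not the paper's coronene/graphene parameters) checks the quadrature against the closed form:

```python
import numpy as np

eps, sigma, rho_s = 1.0, 1.0, 1.0     # well depth, LJ length, atoms per unit area

def lj(r):
    """12-6 Lennard-Jones pair potential."""
    return 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)

def plane_potential_numeric(z, rmax=60.0, n=400000):
    """Integrate the pair potential over a uniformly smeared plane of atoms."""
    rp = np.linspace(0.0, rmax, n)
    d = np.sqrt(z * z + rp * rp)
    integrand = 2.0 * np.pi * rho_s * rp * lj(d)
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(rp))

def plane_potential_analytic(z):
    """Closed-form '10-4' potential obtained from the same integral."""
    return 2.0 * np.pi * eps * rho_s * sigma ** 2 * (
        0.4 * (sigma / z) ** 10 - (sigma / z) ** 4)

v_num = plane_potential_numeric(1.2)
v_exact = plane_potential_analytic(1.2)
```

The paper's model generalizes exactly this integral by letting the density and the attractive/repulsive constants vary across the surface instead of staying constant.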

Computation doi: 10.3390/computation8040106

Authors: Mazen Y. Hamed

Alzheimer's disease (AD) is a progressive neurodegenerative brain disorder. One of the important therapeutic approaches to AD is the inhibition of β-site APP cleaving enzyme-1 (BACE1). This enzyme plays a central role in the synthesis of the pathogenic β-amyloid peptides (Aβ) in Alzheimer's disease. A group of potent BACE1 inhibitors with known X-ray structures (PDB IDs 5i3X, 5i3Y, 5iE1, 5i3V, 5i3W, 4LC7, 3TPP) were studied by molecular dynamics simulation and binding energy calculation employing MM_GB(PB)SA. The calculated binding energies gave Kd values of 0.139 µM, 1.39 nM, 4.39 mM, 24.3 nM, 1.39 mM, 29.13 mM, and 193.07 nM, respectively. These inhibitors showed potent inhibitory activities in enzymatic and cell assays. The Kd values are compared with experimental values, and the structures are discussed in view of the energy contributions to binding. The drug likeness of these inhibitors is also discussed. The accommodation of ligands in the catalytic site of BACE1 is discussed depending on the type of fragment involved in each structure. Molecular dynamics (MD) simulations and energy studies were used to explore the recognition of the selected BACE1 inhibitors by Asp32, Asp228, and the hydrophobic flap. The results show that selective BACE1 inhibition may be due to the formation of strong electrostatic interactions with Asp32 and Asp228 and a large number of hydrogen bonds, in addition to π–π and van der Waals interactions with the amino acid residues located inside the catalytic cavity. Interactions with the ligands show a similar binding mode with BACE1. These results help to rationalize the design of selective BACE1 inhibitors.
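The conversion from a computed binding free energy to a dissociation constant, as used above, follows the standard thermodynamic relation ΔG = RT ln Kd. A minimal sketch (temperature and the example energy are assumptions for illustration; the paper does not state its conversion conditions):

```python
from math import exp, log

R = 1.987204e-3   # gas constant in kcal/(mol K)
T = 298.15        # assumed temperature, K

def kd_from_dg(dg_kcal):
    """Dissociation constant (molar) from a binding free energy (kcal/mol)."""
    return exp(dg_kcal / (R * T))

def dg_from_kd(kd_molar):
    """Binding free energy (kcal/mol) from a dissociation constant (molar)."""
    return R * T * log(kd_molar)

# A binding free energy near -9.36 kcal/mol corresponds to a Kd of roughly
# 0.14 uM, i.e. the order of magnitude reported for the tightest binders here.
kd = kd_from_dg(-9.36)
```

This makes explicit why a few kcal/mol of MM-GB(PB)SA error moves Kd by orders of magnitude, which is worth keeping in mind when comparing the values above with experiment.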

Computation doi: 10.3390/computation8040105

Authors: Mulugeta Dugda Farzad Moazzami

In computational seismology, receiver functions represent the impulse response of the earth structure beneath a seismic station and, in general, these are functionals that show several seismic phases in the time domain related to discontinuities within the crust and the upper mantle. This paper introduces a new technique called generalized pattern search (GPS) for inverting receiver functions to obtain the depth of the crust–mantle discontinuity, i.e., the crustal thickness H, and the ratio of the crustal P-wave velocity Vp to the S-wave velocity Vs. In particular, the GPS technique, which is a direct search method, does not need derivative or directional vector information. Moreover, the technique allows the simultaneous determination of the weights needed for the converted and reverberated phases. Compared to previously introduced variable-weight approaches for inverting H-κ stacking of receiver functions, with κ = Vp/Vs, the GPS technique has some advantages in terms of saving computational time and its suitability for the simultaneous determination of crustal parameters and the associated weights. Finally, the technique is tested using seismic data from the East African Rift System, and it provides results that are consistent with previously published studies.
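The derivative-free polling idea behind generalized pattern search can be sketched in its simplest ("compass search") form: poll the objective along ±eᵢ at the current mesh size, move on improvement, and halve the mesh when all polls fail. The 2-D quadratic below merely stands in for an (H, κ) misfit surface and is an assumption for illustration, not the paper's objective:

```python
import numpy as np

def pattern_search(f, x0, step=1.0, tol=1e-8, max_iter=10000):
    """Compass/generalized pattern search: derivative-free polling of +-e_i."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    it = 0
    while step > tol and it < max_iter:
        improved = False
        for i in range(len(x)):
            for s in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += s * step
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5          # shrink the mesh when polling fails
        it += 1
    return x, fx

# Hypothetical smooth misfit with minimum at (3.0, 1.8), standing in for (H, kappa).
f = lambda v: (v[0] - 3.0) ** 2 + 4.0 * (v[1] - 1.8) ** 2
xbest, fbest = pattern_search(f, [0.0, 0.0])
```

No gradients or directional vectors are evaluated anywhere, which is exactly the property the abstract highlights for GPS.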

Computation doi: 10.3390/computation8040104

Authors: Luis Ariosto Serna Cardona Hernán Darío Vargas-Cardona Piedad Navarro González David Augusto Cardenas Peña Álvaro Ángel Orozco Gutiérrez

The recurrent use of databases with categorical variables in different applications demands new alternatives for identifying relevant patterns. Classification is an interesting approach for the recognition of this type of data. However, there are few methods for this purpose in the literature, and those techniques are focused specifically on kernels, suffering from accuracy problems and high computational cost. For this reason, we propose an identification approach for categorical variables using conventional classifiers (LDC, QDC, KNN, SVM) and different mapping techniques to increase the separability of classes. Specifically, we map the initial features (categorical attributes) to another space, using the Chi-square (C-S) statistic as a measure of dissimilarity. Then, we employ t-distributed stochastic neighbor embedding (t-SNE) to reduce the dimensionality of the data to two or three features, allowing a significant reduction of the computational times of the learning methods. We evaluate the performance of the proposed approach in terms of accuracy for several experimental configurations and public categorical datasets downloaded from the UCI repository, and we compare it with relevant state-of-the-art methods. Results show that the C-S mapping and t-SNE considerably diminish the computational times in recognition tasks, while the accuracy is preserved. Moreover, when we apply only the C-S mapping to the datasets, the separability of the classes is enhanced, and thus the performance of the learning algorithms is clearly increased.
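One common way to turn categorical attributes into a Chi-square dissimilarity, sketched below, is to one-hot encode the rows and compare their profiles with column masses as weights. This is a generic construction assumed for illustration; the paper's exact C-S mapping may differ in its normalization:

```python
import numpy as np

def one_hot(X):
    """One-hot encode a matrix of categorical attributes (integer codes)."""
    cols = []
    for j in range(X.shape[1]):
        vals = np.unique(X[:, j])
        cols.append((X[:, j][:, None] == vals[None, :]).astype(float))
    return np.hstack(cols)

def chi_square_distances(X):
    """Pairwise Chi-square dissimilarity on the one-hot profile matrix."""
    F = one_hot(X)
    P = F / F.sum(axis=1, keepdims=True)     # row profiles
    w = F.sum(axis=0) / F.sum()              # column masses as weights
    diff = P[:, None, :] - P[None, :, :]
    return np.sqrt((diff ** 2 / w).sum(axis=2))

# Three objects over two categorical attributes; rows 0 and 1 are identical.
X = np.array([[0, 1], [0, 1], [1, 0]])
D = chi_square_distances(X)
```

The resulting distance matrix is what a subsequent embedding such as t-SNE would consume to produce the low-dimensional features.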

Computation doi: 10.3390/computation8040103

Authors: Stefano Quer Andrea Calabrese

Many modern applications are modeled using graphs of some kind. Given a graph, reachability, that is, discovering whether there is a path between two given nodes, is a fundamental problem as well as one of the most important steps of many other algorithms. The rapid accumulation of very large graphs (up to tens of millions of vertices and edges) from a diversity of disciplines demands efficient and scalable solutions to the reachability problem. General-purpose computing has been successfully used on Graphics Processing Units (GPUs) to parallelize algorithms that present a high degree of regularity. In this paper, we extend the applicability of GPU processing to graph-based manipulation by re-designing a simple but efficient state-of-the-art graph-labeling method, namely the GRAIL (Graph Reachability Indexing via RAndomized Interval Labeling) algorithm, for many-core CUDA-based GPUs. This algorithm first generates a label for each vertex of the graph, then it exploits these labels to answer reachability queries. Unfortunately, the original algorithm executes a sequence of depth-first visits which are intrinsically recursive and cannot be efficiently implemented on parallel systems. For that reason, we design an alternative approach in which a sequence of breadth-first visits replaces the original depth-first traversal to generate the labeling, and in which a high number of concurrent visits is exploited during query evaluation. The paper describes our strategy to re-design these steps, the difficulties we encountered while implementing them, and the solutions adopted to overcome the main inefficiencies. To prove the validity of our approach, we compare (in terms of time and memory requirements) our GPU-based approach with the original sequential CPU-based tool. Finally, we report some hints on how to conduct further research in the area.
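The interval-labeling idea that both the original GRAIL and the GPU redesign build on can be sketched with the recursive DFS variant the abstract says is hard to parallelize (a single traversal, i.e. d = 1; the redesigned BFS/GPU version is what the paper actually contributes):

```python
def grail_labels(succ, roots):
    """One DFS interval label per vertex (GRAIL with a single traversal).

    label[v] = (low, post): post is v's post-order rank and low is the smallest
    rank in v's DFS subtree. If label[v] is NOT contained in label[u], then u
    certainly cannot reach v; containment alone may be a false positive that
    the full algorithm resolves with an exception check.
    """
    label, counter = {}, [0]

    def dfs(u):
        low = float("inf")
        for w in succ.get(u, []):
            if w not in label:
                dfs(w)
            low = min(low, label[w][0])
        counter[0] += 1
        label[u] = (min(low, counter[0]), counter[0])

    for r in roots:
        if r not in label:
            dfs(r)
    return label

def may_reach(label, u, v):
    """True if label[v] is contained in label[u] (reachable or false positive)."""
    return label[u][0] <= label[v][0] and label[v][1] <= label[u][1]

# A small DAG: 0 -> 1 -> 3 and 0 -> 2 -> 3.
succ = {0: [1, 2], 1: [3], 2: [3]}
lab = grail_labels(succ, roots=[0])
```

Note that on this DAG `may_reach(lab, 2, 1)` returns True even though 2 does not reach 1: that is precisely the kind of false positive randomized multi-traversal labels and exception checks are there to prune.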

Computation doi: 10.3390/computation8040102

Authors: Albert R. Khalikov Evgeny A. Sharapov Vener A. Valitov Elvina V. Galieva Elena A. Korznikova Sergey V. Dmitriev

Currently, an important fundamental problem of practical significance is the production of high-quality solid-phase compounds of various metals. This paper presents a theoretical model that allows one to study the diffusion process in nickel-based refractory alloys. As an example, a two-dimensional model of a ternary alloy is considered in order to model the diffusion bonding of alloys with different compositions. The main idea is to divide the alloy components into three groups: (i) the base element Ni, (ii) the intermetallic-forming elements Al and Ti, and (iii) the alloying elements. This approach allows one to treat multi-component alloys as ternary alloys, which greatly simplifies the analysis. The calculations are carried out within the framework of the hard-sphere model, describing interatomic interactions by pair potentials. The energy of any configuration of a given system is written in terms of order parameters and ordering energies. A vacancy diffusion model is described, which takes into account the gain/loss of potential energy due to a vacancy jump and the temperature. The diffusion bonding of two dissimilar refractory alloys is modeled. The concentration profiles of the components and the order parameters are analyzed at different times. The results obtained indicate that the ternary alloy model is efficient in modeling the diffusion bonding of dissimilar Ni-based refractory alloys.

Computation doi: 10.3390/computation8040101

Authors: Nnamdi Nwulu

In this paper, the Combined Heat and Power Dynamic Economic Emissions Dispatch (CHPDEED) problem formulation is considered. This is a complicated nonlinear mathematical formulation with multiple conflicting objective functions. The aim of this mathematical problem is to obtain the optimal quantities of heat and power output for the committed generating units, which include power-only and heat-only units. Heat and load demand are expected to be satisfied throughout the total dispatch interval. In this paper, valve point effects are considered in the fuel cost function of the units, which leads to a non-convex cost function. Furthermore, an Incentive-Based Demand Response program formulation is also considered simultaneously with the CHPDEED problem, further complicating the mathematical problem. The decision variables are thus the optimal power and heat output of the generating units and the optimal power curbed and monetary incentive for the participating demand response consumers. The resulting mathematical formulations are tested on four practical scenarios depicting different system operating conditions, and the obtained results show the efficacy of the developed mathematical optimization model. The obtained results indicate that, when the Incentive-Based Demand Response (IBDR) program's operational hours are unrestricted with a residential load profile, the energy curtailed is highest (2680 MWh), the energy produced by the generators is lowest (38,008.53 MWh), power losses are lowest (840.5291 MW), and both fuel costs and emissions are lowest.
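The non-convexity introduced by valve point effects comes from a rectified-sine ripple added to the quadratic fuel cost, F(P) = a + bP + cP² + |e·sin(f·(Pmin − P))|. A quick sketch with illustrative coefficients (typical of the test systems used in dispatch studies, not values taken from this paper) shows the curvature changing sign:

```python
import numpy as np

def fuel_cost(P, a=550.0, b=8.1, c=0.00028, e=300.0, f=0.035, Pmin=100.0):
    """Quadratic fuel cost with a rectified-sine valve-point term ($/h)."""
    return a + b * P + c * P ** 2 + np.abs(e * np.sin(f * (Pmin - P)))

P = np.linspace(100.0, 500.0, 2001)   # output range in MW (illustrative)
C = fuel_cost(P)
# Second differences of a convex function never change sign; here the |sin|
# ripple dominates the tiny quadratic curvature, so they do.
d2 = np.diff(C, 2)
```

That sign change is why gradient-based convex solvers stall on such cost curves and why the formulation in the paper is substantially harder than its smooth counterpart.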

Computation doi: 10.3390/computation8040100

Authors: Alan Kabanshi

This paper explores the flow structure, the mean/turbulent statistical characteristics of the vector field, and the entrainment of round jets issued from a smooth contracting nozzle at low nozzle exit velocities (1.39–6.44 m/s). The motivation of the study was to increase understanding of the near field and gain insights on how to control and reduce entrainment, particularly in applications that use jets with low-to-medium momentum flow, such as microclimate ventilation systems. Additionally, the near field of free jets with low momentum flow is not extensively covered in the literature. Particle image velocimetry (PIV), a whole-field vector measurement method, was used for data acquisition of the flow from a 0.025 m smooth contracting nozzle. The results show that at low nozzle exit velocities the jet flow was unstable with oscillations, and this increased entrainment; however, increasing the nozzle exit velocity stabilized the jet flow and reduced entrainment. This is linked to the momentum flow of the jet, the structural characteristics of the flow, and the type or disintegration distance of vortices created in the shear layer. The study discusses practical implications for microclimate ventilation systems and at the same time contributes data to the development and validation of a planned computational turbulence model for microclimate ventilation.

Computation doi: 10.3390/computation8040099

Authors: Rodrigo Andrés Gómez-Montoya Jose Alejandro Cano Pablo Cortés Fernando Salazar

Put-away operations typically consist of moving products from depots to allocated storage locations using either operators or Material Handling Equipment (MHE), accounting for important operative costs in warehouses and impacting operations efficiency. Therefore, this paper aims to formulate and solve a Put-away Routing Problem (PRP) in distribution centres (DCs). This PRP formulation represents a novel approach due to the consideration of a fleet of homogeneous Material Handling Equipment (MHE), heterogeneous products linked to a put-away list size, depot location and multi-parallel aisles in a distribution centre. It should be noted that the slotting problem, rather than the PRP, has usually been studied in the literature, whereas the PRP is addressed in this paper. The PRP is solved using a discrete particle swarm optimization (PSO) algorithm that is compared to tabu search approaches (Classical Tabu Search (CTS), Tabu Search (TS) 2-Opt) and an empirical rule. As a result, it was found that a discrete PSO generates the best solutions, as the time savings range from 2 to 13% relative to CTS and TS 2-Opt for different combinations of factor levels evaluated in the experimentation.
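Particle swarm optimization, the metaheuristic at the core of the paper above, is easiest to see in its continuous global-best form before the discrete adaptation used for routing. The sketch below minimizes a simple sphere function with standard inertia/acceleration coefficients; it is a generic illustration, not the paper's discrete PSO or its PRP encoding:

```python
import numpy as np

rng = np.random.default_rng(7)

def pso(f, dim, n_particles=30, iters=200, w=0.72, c1=1.49, c2=1.49):
    """Basic global-best particle swarm optimization (continuous version)."""
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Velocity update: inertia + cognitive pull + social pull.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, float(pbest_f.min())

sphere = lambda p: float(np.sum(p * p))
gbest, fbest = pso(sphere, dim=2)
```

A discrete PRP variant replaces the real-valued positions with permutations of put-away locations and redefines the velocity update accordingly, which is where the paper's contribution lies.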

Computation doi: 10.3390/computation8040098

Authors: Mohammad Islam Nicolas Huerta Robert Dilmore

Carbon capture, utilization, and storage (CCUS) describes a set of technically viable processes to separate carbon dioxide (CO2) from industrial byproduct streams and inject it into deep geologic formations for long-term storage. Legacy wells located within the spatial domain of new injection and production activities represent potential pathways for fluids (i.e., CO2 and the aqueous phase) to leak through compromised components (e.g., through fractures or micro-annulus pathways). The finite element (FE) method is a well-established numerical approach to simulate the coupling between multi-phase fluid flow and solid-phase deformation that occurs in a compromised well system. We assumed that the spatial domain consists of a three-phase system: a solid, a liquid, and a gas phase. For flow in the two fluid phases, we considered two sets of primary variables: the first uses capillary pressure and gas pressure (the PP scheme), and the second uses liquid pressure and gas saturation (the PS scheme). The fluid phases were coupled with the solid phase using the full coupling (i.e., monolithic coupling) and iterative coupling (i.e., sequential coupling) approaches. The challenge of achieving numerical stability in the coupled formulation in heterogeneous media was addressed using the mass lumping and upwinding techniques. Numerical results were compared with three benchmark problems to assess the performance of the coupled FE solutions: 1D Terzaghi consolidation, the Liakopoulos experiments, and the Kueper and Frind experiments. We found good agreement between our results and the three benchmark problems. For the Kueper and Frind test, the PP scheme successfully captured the observed experimental response of the non-aqueous phase infiltration, in contrast to the PS scheme. These exercises demonstrate the importance of fluid-phase primary variable selection for heterogeneous porous media. We then applied the developed model to the hypothetical case of leakage along a compromised well in a heterogeneous medium. With the mass lumping and upwinding techniques, both the monolithic and the sequential coupling provided identical results, but mass lumping was needed to avoid numerical instabilities in the sequential coupling. Additionally, in the monolithic coupling, the magnitude of the primary variables in the coupled solution without mass lumping and upwinding is higher, which is important for risk-based analyses.
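The first benchmark named above, 1-D Terzaghi consolidation, has a classical series solution that coupled FE codes are routinely verified against. A sketch of that reference solution, with unit material parameters assumed for illustration:

```python
import numpy as np

def terzaghi_pressure(z, t, cv=1.0, H=1.0, u0=1.0, n_terms=200):
    """Excess pore pressure u(z, t) for 1-D Terzaghi consolidation.

    Drained boundary at z = 0, impervious base at z = H. Series solution with
    M = pi*(2m+1)/2 and dimensionless time factor Tv = cv*t/H^2.
    """
    Tv = cv * t / H ** 2
    m = np.arange(n_terms)
    M = np.pi * (2 * m + 1) / 2.0
    terms = (2.0 / M) * np.sin(M * z / H) * np.exp(-M ** 2 * Tv)
    return u0 * float(terms.sum())

# Early time: pore pressure away from the drained face is still near u0;
# it then dissipates monotonically as the load transfers to the soil skeleton.
u_mid = terzaghi_pressure(z=0.5, t=0.2)
```

Matching this curve at several (z, t) points is the standard first check of a poromechanical coupling before moving to the harder two-fluid benchmarks such as Liakopoulos or Kueper and Frind.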

Computation doi: 10.3390/computation8040097

Authors: Alexander S. Novikov

This brief Editorial is dedicated to announcing the Special Issue "Computational Insights into Industrial Chemistry". The Special Issue covers the most recent progress in the rapidly growing field of computational chemistry, and the application of computer modeling to topics relevant to industrial chemistry (chemical industrial processes and materials, environmental effects caused by chemical industry activities, computer-aided design of catalysts, green chemistry, etc.).

Computation doi: 10.3390/computation8040096

Authors: Hamza Khan Hazem Issa József K. Tar

Precise control of the flow rate of fluids stored in multiple tank systems is an important task in the process industries. For this reason, coupled tanks are popular paradigms in studies, because they form strongly nonlinear systems that challenge controller designers to develop various approaches. In this paper, the application of a novel Fixed Point Iteration (FPI)-based technique is reported to control the fluid level in a "lower tank" that is fed by the egress of an "upper" one. The control signal is the ingress rate at the upper tank. Numerical simulation results, obtained by the use of simple sequential Julia code with Euler integration, are presented to illustrate the efficiency of this approach.
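The strongly nonlinear plant behind such studies is the Torricelli-type cascade: the upper tank drains into the lower one, with outflow proportional to the square root of the level. The sketch below integrates that open-loop model with explicit Euler, as in the paper's simulations, but with illustrative parameters and a constant ingress rate; the FPI controller itself is not reproduced here:

```python
import numpy as np

A1, A2 = 1.0, 1.0        # tank cross-sections (illustrative)
c1, c2 = 0.05, 0.04      # lumped outlet coefficients (illustrative)
q_in = 0.01              # constant ingress rate at the upper tank

def simulate(h1=0.0, h2=0.0, dt=0.5, t_end=2000.0):
    """Explicit Euler integration of the two coupled tank levels."""
    for _ in range(int(t_end / dt)):
        out1 = c1 * np.sqrt(max(h1, 0.0))          # upper-tank egress
        out2 = c2 * np.sqrt(max(h2, 0.0))          # lower-tank egress
        h1 = max(h1 + dt * (q_in - out1) / A1, 0.0)
        h2 = max(h2 + dt * (out1 - out2) / A2, 0.0)
    return h1, h2

h1, h2 = simulate()
# At steady state q_in = c1*sqrt(h1*) = c2*sqrt(h2*), so the levels settle at
# h1* = (q_in/c1)^2 and h2* = (q_in/c2)^2.
```

The square-root outflow is what makes the level-to-inflow map nonlinear, and it is this nonlinearity that the FPI-based controller compensates for.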

Computation doi: 10.3390/computation8040095

Authors: Farzad Mohebbi

Explicit expressions are obtained for the sensitivity coefficients needed to separately estimate temperature-dependent thermophysical properties, such as specific heat and thermal conductivity, in two-dimensional inverse transient heat conduction problems for irregularly shaped bodies, using the temperature readings of a single sensor inside the body. The proposed sensitivity analysis scheme allows for the computation of all sensitivity coefficients in only one direct problem solution at each iteration, with no need to solve the sensitivity and adjoint problems. In this method, a boundary-fitted (elliptic) grid generation method is used to mesh the irregular shape of the heat-conducting body. Explicit expressions are obtained to calculate the sensitivity coefficients efficiently, and the conjugate gradient method, an iterative gradient-based optimization method, is used to minimize the objective function and reach the solution. A test case with different initial guesses and sensor locations is presented to investigate the proposed inverse analysis.

Computation doi: 10.3390/computation8040094

Authors: José Rivas M. Constanza Sadino-Riquelme Ignacio Garcés Andrea Carvajal Andrés Donoso-Bravo

Computational fluid dynamics (CFD) has been increasingly exploited for the design and optimization of (bio)chemical processes. Validation is a crucial part of any modeling application; in CFD, when validation is done, complex and expensive techniques are normally employed. The aim of this study was to test the capability of a CFD model to represent a residence time distribution (RTD) test in a temporal and spatial fashion inside a reactor. The RTD tests were carried out in a tubular reactor operated in continuous mode, with and without the presence of artificial biomass. Two hydraulic retention times of 7.2 and 13 h and superficial velocities of 0.65, 0.6, 1.3, and 1.1 m h−1 were evaluated. As a tracer, an aqueous solution of methylene blue was used. The CFD model was implemented in ANSYS Fluent, and to solve the equation system, the SIMPLE scheme and second-order discretization methods were selected. The proposed CFD model of the reactor was able to predict the spatial and temporal distribution of the tracer injected in the reactor. The main disagreements between the simulations and the experimental results were observed especially in the first 50 min of the RTD, caused by different error sources associated with the manual execution of the triplicates, as well as some channeling or tracer bypass that cannot be predicted by the CFD model. The CFD model performed better as the time of the experiment elapsed, for all the sampling ports. A validation methodology based on an RTD obtained by sampling at different reactor positions can be employed as a simple way to validate CFD models.
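Reducing a measured tracer curve C(t) to an RTD and its moments, which is what gets compared between experiment and CFD in such a validation, is a short calculation: normalize to E(t) = C(t)/∫C dt, then take the first moment (mean residence time) and second central moment. The synthetic curve below is an ideal CSTR pulse response with the 7.2 h retention time mentioned above, chosen so the expected moments are known; it is not the paper's data:

```python
import numpy as np

def rtd_moments(t, c):
    """Normalize a tracer curve C(t) into E(t); return (tau, variance)."""
    dt = np.diff(t)
    area = np.sum(0.5 * (c[1:] + c[:-1]) * dt)               # trapezoid rule
    e = c / area                                             # E(t)
    tau = np.sum(0.5 * (t[1:] * e[1:] + t[:-1] * e[:-1]) * dt)
    m2 = np.sum(0.5 * ((t[1:] - tau) ** 2 * e[1:]
                       + (t[:-1] - tau) ** 2 * e[:-1]) * dt)
    return tau, m2

# Synthetic pulse response of an ideal CSTR with residence time 7.2 h,
# for which tau = 7.2 and variance = tau^2 exactly.
t = np.linspace(0.0, 120.0, 20001)
c = np.exp(-t / 7.2)
tau, var = rtd_moments(t, c)
```

Computing these moments per sampling port is what turns the port-by-port comparison in the paper into a handful of scalar discrepancies rather than full-curve overlays.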

Computation doi: 10.3390/computation8040093

Authors: Hugo Hernán Ortiz-Álvarez Francy Nelly Jiménez-García Carolina Márquez-Narváez José Dario Agudelo-Giraldo Elisabeth Restrepo-Parra

In this work, Monte Carlo simulations of the magnetic properties of thin films, including the influence of an external pressure, are presented. These simulations were developed using a Hamiltonian composed of terms that represent the exchange interaction, the dipolar interaction, the Zeeman effect, monocrystalline anisotropy, and the pressure influence. The term that represents the influence of pressure on the magnetic properties was included since, for many applications, magnetic materials form part of a multiferroic material together with a piezoelectric or a ferroelectric compound. Initially, the model was developed using generic parameters in order to probe its performance; after that, the parameters were adjusted for simulating thin films of La0.67Sr0.33MnO3, a manganite with several technological applications because its Curie temperature is greater than room temperature. Including the pressure influence, the formation of several kinds of FM/AF configurations, such as stripe, labyrinth, and chessboard patterns, was observed. Furthermore, it was observed that, as the pressure increased, the critical temperature tended to decrease, and this result was in agreement with experimental reports.
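The Metropolis Monte Carlo machinery behind such simulations is most easily seen with only the exchange term of the Hamiltonian retained, i.e. a plain 2-D Ising layer (no dipolar, Zeeman, anisotropy, or pressure terms, which the paper adds on top). All parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

def metropolis_sweep(s, T, J=1.0):
    """One Metropolis sweep of a 2-D Ising layer with periodic boundaries."""
    n = s.shape[0]
    for _ in range(n * n):
        i, j = rng.integers(0, n, 2)
        nb = (s[(i + 1) % n, j] + s[(i - 1) % n, j]
              + s[i, (j + 1) % n] + s[i, (j - 1) % n])
        dE = 2.0 * J * s[i, j] * nb          # energy change of flipping (i, j)
        if dE <= 0.0 or rng.random() < np.exp(-dE / T):
            s[i, j] *= -1
    return s

n = 16
s = np.ones((n, n))                  # ordered ferromagnetic start
for _ in range(50):
    metropolis_sweep(s, T=1.5)       # well below the 2-D Curie point ~2.269 J
m = abs(s.mean())                    # magnetization per spin stays near 1
```

Adding the dipolar and pressure-dependent terms to `dE` is what produces the stripe, labyrinth, and chessboard textures reported above; the acceptance rule itself does not change.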

Computation doi: 10.3390/computation8040092

Authors: Mario Versaci Giovanni Angiulli Fabio La Foresta

In this paper, we introduce a new dynamic model for the simulation of electrocardiograms (ECGs) affected by pathologies, starting from the well-known McSharry dynamic model for ECGs without cardiac disorders. In particular, the McSharry model has been generalized (by a linear transformation and a rotation) to simulate ECGs affected by heart diseases, verifying, on the one hand, the existence and uniqueness of the solution and, on the other hand, whether it admits instabilities. The results, obtained numerically by a procedure based on a four-stage Lobatto IIIa formula, show the good performance of the proposed model in producing ECGs, with or without heart diseases, very similar to those recorded directly on patients. Moreover, having verified that the ECG signals are affected by uncertainty and/or imprecision through the computation of the linear index and the fuzzy entropy index (whose obtained values are close to unity), these similarities among ECG signals (with or without heart diseases) have been quantified by a well-established fuzzy approach based on fuzzy similarity computations, highlighting that the proposed model for simulating ECGs affected by pathologies can be considered a solid starting point for the development of synthetic pathological ECG signals.
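The baseline McSharry model that the paper generalizes is a three-dimensional ODE: a limit cycle on the unit circle in (x, y) plus a z-equation driven by Gaussian events at the P, Q, R, S, T phase angles. The sketch below integrates that healthy baseline with RK4 (the paper uses a four-stage Lobatto IIIa formula instead); the event parameters are the commonly quoted textbook values, assumed here for illustration:

```python
import numpy as np

# Gaussian event parameters (theta_i, a_i, b_i) for P, Q, R, S, T.
theta_i = np.array([-np.pi / 3, -np.pi / 12, 0.0, np.pi / 12, np.pi / 2])
a_i = np.array([1.2, -5.0, 30.0, -7.5, 0.75])
b_i = np.array([0.25, 0.1, 0.1, 0.1, 0.4])
omega = 2.0 * np.pi                        # angular velocity: 60 bpm rhythm

def rhs(state):
    x, y, z = state
    alpha = 1.0 - np.sqrt(x * x + y * y)   # attracts (x, y) to the unit circle
    theta = np.arctan2(y, x)
    dth = np.angle(np.exp(1j * (theta - theta_i)))   # wrapped phase differences
    dz = -np.sum(a_i * dth * np.exp(-dth ** 2 / (2.0 * b_i ** 2))) - z
    return np.array([alpha * x - omega * y, alpha * y + omega * x, dz])

def rk4(state, dt):
    k1 = rhs(state)
    k2 = rhs(state + 0.5 * dt * k1)
    k3 = rhs(state + 0.5 * dt * k2)
    k4 = rhs(state + dt * k3)
    return state + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

state = np.array([1.0, 0.0, 0.0])
ecg = []
for _ in range(2000):                      # two seconds at 1 kHz
    state = rk4(state, 1e-3)
    ecg.append(state[2])
ecg = np.array(ecg)
```

The z-component traces the synthetic ECG; the paper's contribution is the linear transformation and rotation applied to this system so that its output mimics pathological recordings.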

Computation doi: 10.3390/computation8040091

Authors: Konstantin P. Katin Valeriy B. Merinov Alexey I. Kochaev Savas Kaya Mikhail M. Maslov

We combined ab initio molecular dynamics with the intrinsic reaction coordinate in order to investigate the mechanisms of stability and pyrolysis of N4–N120 fullerene-like nitrogen cages. The stability of the cages was evaluated in terms of the activation barriers and the activation Gibbs energies of their thermally induced breaking. We found that binding energies, bond lengths, and quantum-mechanical descriptors failed to predict the stability of the cages. However, we derived a simple topological rule: adjacent hexagons on the cage surface result in its instability. For this reason, the number of stable nitrogen cages is significantly restricted in comparison with their carbon counterparts. As a rule, smaller clusters are more stable, whereas the previously proposed large cages collapse at room temperature. The most stable all-nitrogen cages are the N4 and N6 clusters, which can form van der Waals crystals with densities of 1.23 and 1.36 g/cm3, respectively. The examination of their band structures and densities of electronic states shows that they are both insulators. Their power and sensitivity are not inferior to those of modern advanced high-energy nanosystems.

]]>Computation doi: 10.3390/computation8040090

Authors: Lev Kazakovtsev Ivan Rozhnov Aleksey Popov Elena Tovbis

The k-means problem is one of the most popular models in cluster analysis; it minimizes the sum of the squared distances from clustered objects to the sought cluster centers (centroids). The simplicity of its algorithmic implementation encourages researchers to apply it in a variety of engineering and scientific branches. Nevertheless, the problem is proven to be NP-hard, which makes exact algorithms inapplicable for large-scale problems, and the simplest and most popular algorithms yield very poor values of the sum of squared distances. If a problem must be solved within a limited time with an accuracy that would be difficult to improve using known methods without increasing computational costs, the variable neighborhood search (VNS) algorithms, which search in randomized neighborhoods formed by the application of greedy agglomerative procedures, are competitive. In this article, we investigate the influence of the most important parameter of such neighborhoods on the computational efficiency and propose a new VNS-based algorithm (solver), implemented on the graphics processing unit (GPU), which adjusts this parameter. Benchmarking on data sets composed of up to millions of objects demonstrates the advantage of the new algorithm over known local search algorithms within a fixed time, allowing for online computation.
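For readers unfamiliar with the underlying objective, a minimal sketch of the classical Lloyd iteration for k-means (not the VNS solver described in the abstract; the crude farthest-first seeding used here is our own choice for determinism):

```python
def dist2(p, q):
    """Squared Euclidean distance between two points (tuples)."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, k, iters=50):
    """Plain Lloyd iteration for the k-means objective: alternate
    nearest-centroid assignment and centroid recomputation."""
    # Crude deterministic seeding: the first point, then repeatedly the
    # point farthest from the already-chosen centroids.
    centroids = [points[0]]
    while len(centroids) < k:
        centroids.append(max(points, key=lambda p: min(dist2(p, c) for c in centroids)))
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda j: dist2(p, centroids[j]))].append(p)
        centroids = [
            tuple(sum(coord) / len(cl) for coord in zip(*cl)) if cl else centroids[j]
            for j, cl in enumerate(clusters)
        ]
    # The objective the VNS solver tries to minimize: sum of squared
    # distances from each object to its nearest centroid (SSE).
    sse = sum(min(dist2(p, c) for c in centroids) for p in points)
    return centroids, sse
```

The VNS algorithm of the paper searches far beyond this local procedure, but each greedy agglomerative step still evaluates candidate solutions by the same SSE objective.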

]]>Computation doi: 10.3390/computation8040089

Authors: Ratmir Dashkin Georgii Kolesnikov Pavel Tsygankov Igor Lebedev Artem Lebedev Natalia Menshutina Khusrav Ghafurov Abakar Bagomedov

The presented work is devoted to a model of isocyanate synthesis by the thermal decomposition of carbamates. The work describes the existing processes for obtaining isocyanates and the main problems in the study of isocyanate synthesis by the thermal decomposition of carbamates, which can be solved using mathematical and computer models. Experiments with carbamates of various structures were carried out. After processing the experimental data, the activation energy and the pre-exponential factor for isocyanate synthesis by the thermal decomposition of carbamates were determined. Then, a mathematical model of the reactor for the thermal decomposition of carbamates was developed using the COMSOL Multiphysics software. For this model, computational experiments under different conditions were carried out. It was shown that the calculation results correspond to the experimental ones, so the suggested model can be used in the design of equipment for isocyanate synthesis by the thermal decomposition of carbamates.

]]>Computation doi: 10.3390/computation8040088

Authors: Christina Schreppel Jonathan Brembeck

Quadratic programming problems (QPs) frequently appear in control engineering. For use on embedded platforms, a QP solver implementation is required in the programming language C. A new solver for quadratic optimization problems, EmbQP, is described, which was implemented in readable C code. The algorithm is based on the dual method of Goldfarb and Idnani and solves strictly convex QPs with a positive definite objective function matrix and linear equality and inequality constraints. The algorithm is outlined and some details for an efficient implementation in C are shown, with regard to the requirements of embedded systems. The newly implemented QP solver is demonstrated in the context of control allocation of an over-actuated vehicle as an application example. Its performance is assessed in a simulation experiment.

]]>Computation doi: 10.3390/computation8040087

Authors: Natalia Menshutina Igor Lebedev Evgeniy Lebedev Andrey Kolnoochenko Alexander Troyankin Ratmir Dashkin Michael Shishanov Pavel Flegontov Maxim Burdeyniy

The presented work is devoted to a model of the reactions for obtaining 4,4′-diaminodiphenylmethane (MDA) in the presence of a catalyst. The work describes the importance of studying the MDA-obtaining process and the applicability of the cellular automata (CA) approach to the modelling of chemical reactions. The work suggests a CA model that makes it possible to predict the kinetic curves of the studied MDA-obtaining reaction. The developed model was used to carry out computational experiments under different conditions: aniline:formaldehyde:catalyst ratios, stirrer speed, and reaction temperature. The results of the computational experiments were compared with the corresponding experimental data. The suggested model was shown to be suitable for predicting the kinetics of the MDA-obtaining reaction. The proposed CA model can be used with the CFD model suggested in Part 1, allowing the implementation of complex multiscale modeling of a flow catalytic reactor from the molecule level to the level of the entire apparatus.

]]>Computation doi: 10.3390/computation8040086

Authors: Vera Marcantonio Andrea Monforti Ferrario Andrea Di Carlo Luca Del Zotto Danilo Monarca Enrico Bocci

Biomass is one of the most widespread and accessible energy sources, and steam gasification is one of the most important processes for converting biomass into combustible gases. However, to date, the differences between the results of the main models used to predict the composition of steam gasification producer gas have not been analyzed in detail. Indeed, gasification, involving heterogeneous reactions, does not reach thermodynamic equilibrium, so thermodynamic models with experimental corrections and kinetic models are mainly applied. Thus, this paper compares a 1-D kinetic model developed in MATLAB, combining hydrodynamics and reaction kinetics, with a 0-D thermodynamic model developed in Aspen Plus, based on Gibbs free energy minimization applying the quasi-equilibrium approach, calibrated by experimental data. After a comparison of the results of the models against experimental data at two S/B ratios, a sensitivity analysis over a wide range of S/B ratios was performed. The experimental comparison and sensitivity analysis show that the two models provide sufficiently similar data in terms of the main components of the syngas, although the thermodynamic model shows, with increasing S/B, a greater increase of H2 and CO2 and a smaller decrease of CH4 and CO with respect to the kinetic model and the experimental data. Thus, the thermodynamic model, despite being calibrated by experimental data, can be used mainly to analyze global plant performance, given the reduced importance of the discrepancy from a global energy and plant perspective. Meanwhile, the more complex kinetic model should be used when a more precise gas composition is needed and, of course, for reactor design.

]]>Computation doi: 10.3390/computation8040085

Authors: Oguzhan Gencoglu Mathias Gruber

Understanding the characteristics of public attention and sentiment is an essential prerequisite for appropriate crisis management during adverse health events. This is even more crucial during a pandemic such as COVID-19, as the primary responsibility of risk management is not centralized in a single institution but distributed across society. While numerous studies have utilized Twitter data in a descriptive or predictive context during the COVID-19 pandemic, causal modeling of public attention has not been investigated. In this study, we propose a causal inference approach to discover and quantify causal relationships between pandemic characteristics (e.g., number of infections and deaths) and Twitter activity as well as public sentiment. Our results show that the proposed method can successfully capture the epidemiological domain knowledge and identify variables that affect public attention and sentiment. We believe our work contributes to the field of infodemiology by distinguishing events that correlate with public attention from events that cause public attention.

]]>Computation doi: 10.3390/computation8040084

Authors: Gokhan Kirkil

We propose a method to parallelize a 3D incompressible Navier–Stokes solver that uses a fully implicit fractional-step method to simulate sediment transport in prismatic channels. The governing equations are transformed into generalized curvilinear coordinates on a non-staggered grid. To develop a parallel version of the code that can run on various platforms, in particular on PC clusters, it was decided to parallelize the code using the Message Passing Interface (MPI), which is one of the most flexible parallel programming libraries. Code parallelization is accomplished by “message passing”, whereby the computer explicitly uses library calls to accomplish communication between the individual processors of the machine (e.g., a PC cluster). As part of the parallelization effort, besides the Navier–Stokes solver, the deformable bed module used in simulations with loose beds is also parallelized. The flow, sediment transport, and bathymetry at equilibrium conditions were computed with the parallel and serial versions of the code for the case of a 140-degree curved channel bend of rectangular section. The parallel simulation conducted on eight processors gives exactly the same results as the serial solver. The parallel version of the solver showed good scalability.

]]>Computation doi: 10.3390/computation8030083

Authors: Md. Mamun Molla Preetom Nag Sharaban Thohura Amirul Khan

A modified power-law (MPL) viscosity model of non-Newtonian fluid flow has been used with the multiple-relaxation-time (MRT) lattice Boltzmann method (LBM) and then validated on benchmark problems using graphics processing unit (GPU) parallel computing via the Compute Unified Device Architecture (CUDA) C platform. The MPL model for characterizing the non-Newtonian behavior is an empirical correlation that accounts for the Newtonian behavior of a non-Newtonian fluid at very low and very high shear rates. A new time unit parameter (λ) governing the flow has been identified; this parameter is the consequence of the induced length scale introduced by the power law. The MPL model is free from any singularities at very low or even zero shear rate. The proposed MPL model was first validated on the benchmark lid-driven cavity and channel flows. The model was then applied to shear-thinning and shear-thickening fluid flows through a backward-facing step at relatively low Reynolds numbers, Re = 100–400. In the case of shear-thinning fluids (n = 0.5), laminar-to-transitional flow arises for Re ≥ 300, and the large vortex breaks into several small vortices. The numerical results are presented in terms of the velocity distribution, streamlines, and the lengths of the reattachment points.

]]>Computation doi: 10.3390/computation8030082

Authors: Chang Phang Yoke Teng Toh Farah Suraya Md Nasrudin

In this work, we derive the operational matrix using poly-Bernoulli polynomials. These polynomials generalize the Bernoulli polynomials via a generating function involving a polylogarithm function. We first show some new properties of these poly-Bernoulli polynomials; then we derive a new operational matrix, based on poly-Bernoulli polynomials, for the Atangana–Baleanu derivative. A delay operational matrix based on poly-Bernoulli polynomials is also derived. The error bound of this new method is shown. We applied this poly-Bernoulli operational matrix to solve fractional delay differential equations with variable coefficients. The numerical examples show that this method is easy to use and yet able to give accurate results.
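For context (following Kaneko's standard convention for the poly-Bernoulli numbers; the polynomials used in the paper may differ in normalization), the polylogarithm-based generating function reads:

```latex
\frac{\operatorname{Li}_k(1 - e^{-x})}{1 - e^{-x}}
  = \sum_{n=0}^{\infty} B_n^{(k)} \frac{x^n}{n!},
\qquad
\operatorname{Li}_k(z) = \sum_{m=1}^{\infty} \frac{z^m}{m^k}.
```

For k = 1, Li_1(z) = -ln(1 - z), so the left-hand side reduces to x e^x / (e^x - 1), recovering the classical Bernoulli numbers (with B_1 = +1/2).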

]]>Computation doi: 10.3390/computation8030081

Authors: Martine Castellà-Ventura Alain Moissette Emile Kassab

The Si/Al ratio and confinement effects of the zeolite framework on the energetics and vibrational frequencies of pyridine and 4,4′-bipyridine adsorbed on Brønsted acid sites in the straight channel of H-ZSM-5 are investigated by DFT calculations at the B3LYP and M06-2X+D3 levels. The straight channel of H-ZSM-5 is simulated by a cluster of 32 tetrahedral centers covering the intersection between the straight and zigzag channels. Pyridine and 4,4′-bipyridine adsorption at two different sites, in the intersection (open region) and/or in the narrow region situated between two intersections (closed region), is studied. For two Si/Al ratios (31, 15), the ion pair complexes formed by proton transfer upon pyridine and 4,4′-bipyridine adsorption in the open region and, for the first time, in the closed region are characterized. Our results indicate: (i) the stability of all adsorption complexes is essentially governed by dispersive van der Waals interactions, and the open region is energetically more favorable than the closed region owing to the predominance of the dispersive interactions over the steric constraints exerted by the confinement effects; (ii) as the Al centers are sufficiently spaced apart, the Si/Al ratio does not influence the pyridine adsorption energy, but significantly affects the adsorption energies and the relative stability of the 4,4′-bipyridine complexes; (iii) neither the Si/Al ratio nor confinement significantly influences the pyridine and 4,4′-bipyridine vibrational frequencies within their complexes.

]]>Computation doi: 10.3390/computation8030080

Authors: Christos Kalyvas Manolis Maragoudakis

One of the most common tasks nowadays in big data environments is the need to classify large amounts of data. There are numerous classification models designed to perform best in different environments and datasets, each with its advantages and disadvantages. However, when dealing with big data, their performance is significantly degraded because they are not designed for, or even capable of, handling very large datasets. The approach proposed here exploits the dynamics of skyline queries to efficiently identify the decision boundary and classify big data. A comparison against the popular k-nearest neighbor (k-NN), support vector machine (SVM) and naïve Bayes classification algorithms shows that the proposed method is faster than k-NN and SVM. The novelty of this method lies in the fact that only a small number of computations are needed in order to make a prediction, while its full potential is revealed on very large datasets.
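To illustrate the dominance notion that skyline queries build on (a generic sketch, not the authors' classifier; smaller-is-better in every dimension is assumed here):

```python
def dominates(p, q):
    """p dominates q if p is no worse in every dimension and strictly
    better in at least one (assuming smaller values are better)."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points):
    """Return the skyline: the points not dominated by any other point.
    Naive O(n^2) scan, sufficient to convey the definition."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

The skyline is exactly the Pareto-optimal frontier of the dataset; the paper's contribution is to use such a frontier to approximate a decision boundary cheaply.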

]]>Computation doi: 10.3390/computation8030079

Authors: Ibrahim Ahmad Muhammad Kanikar Muangchoo Auwal Muhammad Ya’u Sabo Ajingi Ibrahim Yahaya Muhammad Ibrahim Dauda Umar Abubakar Bakoji Muhammad

Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) was found to be a severe threat to global public health in late 2019. Nevertheless, no approved medicines have been found to inhibit the virus effectively. Anti-malarial and antiviral medicines have been reported to target the SARS-CoV-2 virus. This paper chose eight natural eucalyptus compounds to study their binding interactions with the SARS-CoV-2 main protease (Mpro) to assess their potential for becoming herbal drugs against the new SARS-CoV-2 infection. In-silico methods such as molecular docking, molecular dynamics (MD) simulations, and Molecular Mechanics Poisson-Boltzmann Surface Area (MM/PBSA) analysis were used to examine interactions at the atomistic level. The results of molecular docking indicate that Mpro has good binding energy for all compounds studied. Three docked compounds, α-gurjunene, aromadendrene, and allo-aromadendrene, with the highest binding energies of −7.34 kcal/mol (−30.75 kJ/mol), −7.23 kcal/mol (−30.25 kJ/mol), and −7.17 kcal/mol (−29.99 kJ/mol), respectively, were simulated with the GROningen MAchine for Chemical Simulations (GROMACS) to measure the molecular interactions between Mpro and the inhibitors in detail. Our MD simulation results show that α-gurjunene has the strongest binding energy of −20.37 kcal/mol (−85.21 kJ/mol), followed by aromadendrene with −18.99 kcal/mol (−79.45 kJ/mol), and finally allo-aromadendrene with −17.91 kcal/mol (−74.95 kJ/mol). The findings indicate that eucalyptus may be used to inhibit the Mpro enzyme as a drug candidate. This is the first computational analysis that gives an insight into the potential role of structural flexibility during interactions with eucalyptus compounds. It also sheds light on the structural design of new herbal medicinal products against Mpro.

]]>Computation doi: 10.3390/computation8030078

Authors: Claudia Germoso Giacomo Quaranta Jean Louis Duval Francisco Chinesta

Mesh-based solution of 3D models defined in plate or shell domains remains challenging nowadays because the required meshes generally involve too many degrees of freedom. When the considered problem also involves parameters and a parametric solution is sought, the difficulty is twofold. The authors proposed, in some of their former works, strategies for addressing both issues; however, these suffer from deep intrusiveness. This paper proposes a totally novel approach that, from any existing discretization, is able to reduce the 3D parametric complexity to that of a simple 2D calculation. Thus, the 3D complexity is reduced to 2D, the parameters are included naturally in the solution, and the procedure is applied to a discretization performed with standard software, which taken together enables real-time engineering.

]]>Computation doi: 10.3390/computation8030077

Authors: Giulia Culletta Maria Rita Gulotta Ugo Perricone Maria Zappalà Anna Maria Almerico Marco Tutone

To date, SARS-CoV-2 infectious disease, named COVID-19 by the World Health Organization (WHO) in February 2020, has caused millions of infections and hundreds of thousands of deaths. Despite the scientific community's efforts, there are currently no approved therapies for treating this coronavirus infection. The process of new drug development is expensive and time-consuming, so drug repurposing may be the ideal solution to fight the pandemic. In this paper, we selected the proteins encoded by SARS-CoV-2 and, using homology modeling, identified high-quality models of these proteins. A structure-based pharmacophore modeling study was performed to identify the pharmacophore features for each target. The pharmacophore models were then used to perform a virtual screening against the DrugBank library (investigational, approved and experimental drugs). Potential inhibitors were identified for each target using XP docking and induced-fit docking. MM-GBSA was also performed to better prioritize potential inhibitors. This study will provide important new insight into the crucial binding hot spots usable for further studies on COVID-19. Our results can be used to guide supervised virtual screening of large commercially available libraries.

]]>Computation doi: 10.3390/computation8030076

Authors: Gilberto González-Parra Miguel Díaz-Rodríguez Abraham J. Arenas

In this paper, we study and explore two control strategies to decrease the spread of the Zika virus in the human and mosquito populations. The control strategies that we consider in this study are awareness and spraying campaigns. We solve several optimal control problems relying on a mathematical epidemic model of Zika that considers both human and mosquito populations. The first control strategy is broad and includes information campaigns, encouraging people to use bed netting, wear long-sleeve shirts, or take similar protective actions. The second control is more specific and relies on spraying insecticides. The control system relies on a Zika mathematical model with control functions. To solve the optimal control problem, we use Pontryagin's maximum principle, which is numerically solved as a boundary value problem. For the mathematical model of the Zika epidemic, we use parameter values extracted from real data from an outbreak in Colombia. We study the effect of the costs related to the controls and infected populations. These costs are important in real life since they can change the outcomes and recommendations for health authorities dramatically. Finally, we explore different options regarding which control measures are more cost-efficient for society.

]]>Computation doi: 10.3390/computation8030075

Authors: Angel E. Rodriguez-Fernandez Bernardo Gonzalez-Torres Ricardo Menchaca-Mendez Peter F. Stadler

MAX-CUT is one of the well-studied NP-hard combinatorial optimization problems. It can be formulated as an integer quadratic programming problem and admits a simple relaxation obtained by replacing the integer “spin” variables x_i by unit vectors v_i. The Goemans–Williamson rounding algorithm assigns each solution vector of the relaxed quadratic program to a corresponding integer spin depending on the sign of the scalar product v_i · r with a random vector r. Here, we investigate whether better graph cuts can be obtained by instead using a more sophisticated clustering algorithm. We answer this question affirmatively. Different initializations of k-means and k-medoids clustering produce better cuts for the graph instances of the most well-known benchmark for MAX-CUT. In particular, we found a strong correlation between cluster quality and cut weights during the evolution of the clustering algorithms. Finally, since in general the maximal cut weight of a graph is not known beforehand, we derived instance-specific lower bounds for the approximation ratio, which give information on how close a solution is to the global optimum for a particular instance. For the graphs in our benchmark, the instance-specific lower bounds significantly exceed the Goemans–Williamson guarantee.
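The hyperplane-rounding step described above can be sketched as follows (a generic illustration of Goemans–Williamson rounding; function and variable names are ours, not from the paper):

```python
def gw_round(vectors, edges, r):
    """Goemans-Williamson hyperplane rounding: each relaxed unit vector
    v_i is mapped to the integer spin sign(v_i . r) for a vector r.
    edges is a list of (i, j, weight) triples."""
    spins = [1 if sum(vi * ri for vi, ri in zip(v, r)) >= 0 else -1
             for v in vectors]
    # Cut weight: total weight of edges whose endpoints get opposite spins.
    cut = sum(w for i, j, w in edges if spins[i] != spins[j])
    return spins, cut
```

In practice r is sampled uniformly on the unit sphere and the best of several roundings is kept; the paper replaces this rounding step with k-means/k-medoids clustering of the relaxed vectors.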

]]>Computation doi: 10.3390/computation8030074

Authors: Silvia Mirri Giovanni Delnevo Marco Roccetti

The Nobel laureate Niels Bohr once said: “Predictions are very difficult, especially if they are about the future”. Nonetheless, models that can forecast future COVID-19 outbreaks are receiving special attention from policymakers and health authorities, with the aim of putting in place control measures before the infections begin to increase. However, two main problems emerge. First, there is no general agreement on which kind of data should be registered for judging the resurgence of the virus (e.g., infections, deaths, percentage of hospitalizations, reports from clinicians, signals from social media). Moreover, all these data suffer from common defects, linked to their reporting delays and to the uncertainties in the collection process. Second, the complex nature of COVID-19 outbreaks makes it difficult to understand whether traditional epidemiological models, such as susceptible–infectious–recovered (SIR), are more effective for a timely prediction of an outbreak than alternative computational models. Well aware of the complexity of this forecasting problem, we propose here an innovative metric for predicting COVID-19 diffusion based on the hypothesis that a relation exists between the spread of the virus and the presence in the air of particulate pollutants, such as PM2.5, PM10, and NO2. Drawing on the recent assumption of 239 experts who claimed that this virus can be airborne, and further considering that particulate matter may favor this airborne route, we developed a machine learning (ML) model that was instructed with: (i) all the COVID-19 infections that occurred in the Italian region of Emilia-Romagna, one of the most polluted areas in Europe, in the period of February–July 2020, (ii) the daily values of all the particulates taken in the same period and in the same region, and finally (iii) the chronology according to which restrictions were imposed by the Italian Government on human activities.
Our ML model was then subjected to a classic ten-fold cross-validation procedure that returned a promising 90% accuracy value. Finally, the model was used to predict a possible resurgence of the virus in all nine provinces of Emilia-Romagna, in the period of September–December 2020. To make those predictions, the input to our ML model was the daily measurements of the aforementioned pollutants registered in the periods of September–December 2017/2018/2019, along with the hypothesis that the mild containment measures taken in Italy in the so-called Phase 3 are obeyed. At the time we write this article, we cannot have a confirmation of the precision of our predictions. Nevertheless, we are projecting a scenario based on an original hypothesis that makes our COVID-19 prediction model unique in the world. Its accuracy will soon be judged by history, and this, too, is science at the service of society.

]]>Computation doi: 10.3390/computation8030073

Authors: Dmitriy Klyuchinskiy Nikita Novikov Maxim Shishlenin

We investigate a mathematical model of 2D acoustic wave propagation in a heterogeneous domain. The hyperbolic first-order system of partial differential equations is considered and solved by the Godunov method of first-order approximation. This is a direct problem with appropriate initial and boundary conditions. We then solve the coefficient inverse problem (IP) of recovering the density. The IP is reduced to an optimization problem, which is solved by the gradient descent method. The quality of the IP solution highly depends on the quantity of IP data and the positions of the receivers. We introduce a new approach for computing the gradient in the descent method in order to use as much IP data as possible on each iteration of descent.

]]>Computation doi: 10.3390/computation8030072

Authors: Amit Kumar Verma Mukesh Kumar Rawani Ravi P. Agarwal

In this paper, we propose a 7th-order weakly L-stable time integration scheme. In the derivation of the scheme, we use an explicit backward Taylor's polynomial approximation of sixth order and a Hermite interpolation polynomial approximation of fifth order. We apply this formula in vector form in order to solve Burgers' equation, which is a simplified form of the Navier-Stokes equation. The literature survey reveals that several methods fail to capture the solutions in the presence of inconsistency and for small values of viscosity, e.g., 10^-3, whereas the present scheme produces highly accurate results. To check the effectiveness of the scheme, we examine it on six test problems and generate several tables and figures. All of the calculations are executed with the help of Mathematica 11.3. The stability and convergence of the scheme are also discussed.
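For reference, the viscous Burgers equation targeted by the scheme, in its common one-dimensional form with viscosity ν:

```latex
\frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x}
  = \nu\,\frac{\partial^2 u}{\partial x^2}.
```

Small values of ν (e.g., 10^-3) produce steep, near-shock gradients, which is precisely the regime where many time integration schemes lose accuracy.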

]]>Computation doi: 10.3390/computation8030071

Authors: Miroslava Mikusova Jamshid Abdunazarov Joanna Zukowska Juraj Jagelcak

Nowadays, all cities face an acute shortage of parking spaces. The number of vehicles is constantly increasing, not only in big cities and megacities but also in small towns, and there are not enough parking places; the pace of solving the problem is several times slower than the growth rate of vehicle ownership among citizens. The paper is dedicated to the determination of the optimal size of a parking place for design vehicles, with the parking space considered as an element of roads. Using the example of passenger cars and trucks, the optimal number of parking places is presented. The results of the research on the dimensioning of parking spaces serve as recommendations and can be used for the design of transportation infrastructure. Based on this research, the authors introduce the term “design vehicle” and provide its definition. They also determine the optimal parameters for each design vehicle and recommend a special template for designing parking places.

]]>Computation doi: 10.3390/computation8030070

Authors: YM Tang Ka-Yin Chau Wenqiang Li TW Wan

Time series forecasting technology and related applications for stock price forecasting are gradually receiving attention. These approaches can be a great help in making decisions based on historical information to predict possible future situations. This research aims to establish forecasting models with deep learning technology for share price prediction in the logistics industry. The historical share price data of five logistics companies in Hong Kong were collected and trained with various time series forecasting algorithms. Based on the Mean Absolute Percentage Error (MAPE) results, we adopted Long Short-Term Memory (LSTM) as the methodology to further predict the share price. The proposed LSTM model was trained with different hyperparameters and validated by the Root Mean Square Error (RMSE). In this study, we found optimal parameters for the proposed LSTM model for six different logistics stocks in Hong Kong, and the best RMSE result was 0.43%. Finally, the LSTM model can be used to forecast economic recessions through the prediction of these stocks.
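The two error metrics used above for model selection (MAPE) and validation (RMSE) are standard and can be sketched as follows (a generic implementation, not the authors' code):

```python
import math

def mape(actual, pred):
    """Mean Absolute Percentage Error, in percent.
    Assumes no actual value is zero."""
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, pred)) / len(actual)

def rmse(actual, pred):
    """Root Mean Square Error, in the units of the series."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual))
```

MAPE is scale-free, which makes it convenient for comparing forecasting algorithms across stocks of different price levels, whereas RMSE penalizes large misses more heavily and is reported per model.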

]]>Computation doi: 10.3390/computation8030069

Authors: Gus I. Argyros Michael I. Argyros Samundra Regmi Ioannis K. Argyros Santhosh George

The method of discretization is used to solve nonlinear equations involving Banach-space-valued operators using Lipschitz or Hölder constants. However, these constants cannot always be found. That is why we present results using ω-continuity conditions on the Fréchet derivative of the operator involved. This way, we extend the applicability of the discretization technique. It turns out that if we specialize ω-continuity, our new results also improve those in the literature in the case of Lipschitz or Hölder continuity. Our analysis includes tighter upper error bounds on the distances involved.

]]>Computation doi: 10.3390/computation8030068

Authors: Kizito Muzhinji Stanford Shateyi

In this paper, we consider the numerical solution of optimal control problems for the elliptic partial differential equation. Numerically tackling these problems using the finite element method produces a large coupled block algebraic system of equations in saddle point form. These systems are of large dimension, block-structured, sparse, indefinite and ill-conditioned. The solution of such systems is a major computational task and poses a great challenge for iterative techniques. Thus, they require specialised methods involving preconditioning strategies. The preconditioned solvers must have good convergence properties independent of changes in the discretisation and problem parameters. Most well-known preconditioned solvers converge independently of the mesh size, but not of the decreasing regularisation parameter. This work proposes and extends the formulation of preconditioners that yield optimal performance of the iterative solvers independent of both the decreasing mesh size and the regularisation parameter. In this paper, we solve the indefinite system using the preconditioned minimum residual method. The main task of this work was to analyse the 3 × 3 block diagonal preconditioner based on an approximation of the Schur complement obtained from the matrix system. The eigenvalue distributions of both the proposed Schur complement approximation and the preconditioned system are investigated, since the clustering of eigenvalues points to the effectiveness of the preconditioner in accelerating an iterative solver. This is done in order to create fast, efficient solvers for such problems. Numerical experiments demonstrate the effectiveness and performance of the proposed approximation compared to other approximations and show that it can be used in practice. The numerical experiments confirm that the solver is robust and optimal with respect to changes in both the mesh size and the regularisation parameter.

]]>Computation doi: 10.3390/computation8030067

Authors: Riccardo Longo Alessandro Sebastian Podda Roberto Saia

Currently, an increasing number of third-party applications exploit the Bitcoin blockchain to store immutable, tamper-proof records of their executions. For this purpose, they leverage the few extra bytes available for encoding custom metadata in Bitcoin transactions. A sequence of records of the same application can thus be abstracted as a stand-alone subchain inside the Bitcoin blockchain. However, several existing approaches make no guarantees about the consistency of their subchains, either (i) neglecting the possibility that this sequence of messages can be altered, mainly due to unhandled concurrency, network malfunctions, application bugs, or malicious users, or (ii) giving only weak guarantees about their security. To tackle this issue, in this paper we propose an improved version of a consensus protocol formalized in our previous work, built on top of the Bitcoin protocol, to incentivize third-party nodes to consistently extend their subchains. In addition, we perform an extensive analysis of this protocol, both defining its properties and presenting some real-world attack scenarios, to show how its specific design choices and parameter configurations can be crucial to prevent malicious practices.

]]>Computation doi: 10.3390/computation8030066

Authors: Suyash Verma Arman Hemmati

The wake dynamics of sharp-edged rigid panels are examined using the Overset Grid Assembly (OGA) method available in OpenFOAM, an open-source platform. The OGA method is an efficient solution technique based on the overlap of one or more moving grids on a stationary background grid. Five test cases for a stationary panel at different angles of attack are compared with available computational data and show good agreement in predicting global flow variables, such as mean drag. The models also provide accurate results in predicting the main flow features and structures. The flow past a pitching square panel is also investigated at two Reynolds numbers. The study of the surface pressure distribution and shear forces acting on the panel suggests that a higher streamwise pressure gradient exists in the high Reynolds number case, which leads to an increase in lift, whereas the highly viscous effects at low Reynolds number lead to increased drag production. The wake visualizations for the stationary and pitching motion cases show that the vortex shedding and wake characteristics are captured accurately by the OGA method.

]]>Computation doi: 10.3390/computation8030065

Authors: Winter Sinkala

Construction of conservation laws of differential equations is an essential part of the mathematical study of differential equations. In this paper we derive, using two approaches, general formulas for finding conservation laws of the Black-Scholes equation. In one approach, we exploit nonlinear self-adjointness and Lie point symmetries of the equation, while in the other approach we use the multiplier method. We present illustrative examples and also show how every solution of the Black-Scholes equation leads to a conservation law of the same equation.

]]>Computation doi: 10.3390/computation8030064

Authors: Shengkun Xie Anna T. Lawniczak Junlin Hao

A lot of effort has been devoted to the mathematical modelling and simulation of complex systems for a better understanding of their dynamics and control. The modelling and analysis of computer simulation outcomes are also important aspects of studying the behaviour of complex systems, and they often involve both traditional and modern statistical approaches, including multiple linear regression, generalized linear models and non-linear regression models such as artificial neural networks. In this work, we first conduct a simulation study of agents' decisions as they learn to cross a cellular automaton-based highway, and then we model the simulation data using artificial neural networks. Our research shows that artificial neural networks are capable of capturing the functional relationships between the input and output variables of our simulation experiments, and that they outperform the classical modelling approaches. Variable importance measure techniques can consistently identify the most dominant factors that affect the response variables, which helps us to better understand how the decision-making of the autonomous agents is affected by the input factors. The significance of this work lies in extending the investigation of complex systems from mathematical modelling and computer simulations to the analysis and modelling of the data obtained from the simulations using advanced statistical models.

]]>Computation doi: 10.3390/computation8030063

Authors: Uygulana Gavrilieva Maria Vasilyeva Eric T. Chung

In this work, we consider elastic wave propagation in fractured media. The mathematical model is described by the Helmholtz problem for wave propagation with specific interface conditions (the Linear Slip Model, LSM) on the fractures in the frequency domain. For the numerical solution, we construct a fine grid that resolves all fracture interfaces on the grid level and build an approximation using a finite element method. We use a discontinuous Galerkin method for the spatial approximation, which helps to weakly impose the interface conditions on the fractures. Such an approximation leads to a large system of equations and is computationally expensive. In this work, we construct a coarse grid approximation for an effective solution using the Generalized Multiscale Finite Element Method (GMsFEM). We construct and compare two types of multiscale methods: the Continuous Galerkin Generalized Multiscale Finite Element Method (CG-GMsFEM) and the Discontinuous Galerkin Generalized Multiscale Finite Element Method (DG-GMsFEM). Multiscale basis functions are constructed by solving local spectral problems in each local domain to extract the dominant modes of the local solution. In CG-GMsFEM, we construct continuous multiscale basis functions that are defined in the local domains associated with the coarse grid nodes; each such domain contains four coarse grid cells of the structured coarse grid. The multiscale basis functions in DG-GMsFEM are discontinuous and defined in each coarse grid cell. The results of the numerical solution of the two-dimensional Helmholtz equation are presented for CG-GMsFEM and DG-GMsFEM for different numbers of multiscale basis functions.

]]>Computation doi: 10.3390/computation8030062

Authors: Ravi P. Agarwal

Following the corrected chronology of ancient Hindu scientists/mathematicians, in this article, a sincere effort is made to report the origin of Pythagorean triples. We shall account for the development of these triples from the period of their origin and list some known astonishing directions. Although for researchers in this field, there is not much that is new in this article, we genuinely hope students and teachers of mathematics will enjoy this article and search for new directions/patterns.
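The triples discussed above are easy to reproduce computationally; a minimal sketch of Euclid's classical parametrisation, which generates every primitive Pythagorean triple exactly once:

```python
from math import gcd, isqrt

def euclid_triples(limit):
    """Primitive Pythagorean triples (a, b, c) with c <= limit via
    Euclid's formula: a = m^2 - n^2, b = 2mn, c = m^2 + n^2,
    for coprime m > n >= 1 of opposite parity."""
    triples = []
    for m in range(2, isqrt(limit) + 1):
        for n in range(1, m):
            if (m - n) % 2 == 1 and gcd(m, n) == 1:
                a, b, c = m * m - n * n, 2 * m * n, m * m + n * n
                if c <= limit:
                    triples.append((a, b, c))
    return triples

triples = euclid_triples(100)
```

For example, (m, n) = (2, 1) yields the familiar (3, 4, 5).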

]]>Computation doi: 10.3390/computation8030061

Authors: Kifayat Ullah Junaid Ahmad Manuel de la Sen

We introduce a very general class of generalized non-expansive maps. This new class properly includes the classes of Suzuki non-expansive maps, Reich–Suzuki-type non-expansive maps, and generalized α-non-expansive maps. We establish some basic properties and a demiclosedness principle for this class of maps. We then establish existence and convergence results for this class in the context of uniformly convex Banach spaces and compare several well-known iterative algorithms.
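As a generic illustration of the kind of iterative algorithm compared in such studies (not the authors' specific scheme), the classical Mann iteration approximates a fixed point of a non-expansive map:

```python
import math

def mann_iteration(T, x0, alpha=0.5, steps=200):
    """Mann iteration: x_{k+1} = (1 - alpha) * x_k + alpha * T(x_k),
    a standard scheme for approximating fixed points of non-expansive maps."""
    x = x0
    for _ in range(steps):
        x = (1 - alpha) * x + alpha * T(x)
    return x

# cos is non-expansive on the reals; its unique fixed point is the Dottie number
fp = mann_iteration(math.cos, 1.0)
```

Variants such as the Ishikawa and Agarwal iterations differ in how the convex combination is formed at each step; convergence-rate comparisons of this type are what the paper carries out in Banach spaces.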

]]>Computation doi: 10.3390/computation8030060

Authors: Orchidea Maria Lecian

The optical equivalence principle is analyzed with regard to the possibility of describing unbounded states, and suitable approximations are calculated for highly energetic phenomena. Among these possibilities, the relevance to laser fields, interferometers, and optomechanical systems is examined. Their suitability for research in General Relativity, Cosmology, and High-Energy Physics is outlined.

]]>Computation doi: 10.3390/computation8020059

Authors: Giovanni Delnevo Silvia Mirri Marco Roccetti

As we prepare to emerge from an extensive and unprecedented lockdown period, due to the COVID-19 infection that hit the Northern regions of Italy with Europe's highest death toll, it becomes clear that what went wrong rests upon a combination of demographic, healthcare, political, business, organizational, and climatic factors that are outside our scientific scope. Nonetheless, looking at this problem from a patient's perspective, it is indisputable that the risk factors associated with the development of the virus disease include older age, a history of smoking, hypertension and heart disease. While several studies have already shown that many of these conditions can also be favored by protracted exposure to air pollution, there has recently been an insurgence of negative commentary against authors who have correlated the fatal consequences of COVID-19 (also) with exposure to specific air pollutants. Well aware that understanding the real connection between the spread of this fatal virus and air pollutants would require many further investigations at a level appropriate to the scale of this phenomenon (e.g., biological, chemical, and physical), we present the results of a study in which series of daily values of PM2.5, PM10, and NO2 were considered over time, and the Granger causality statistical hypothesis test was used to determine the presence of a possible correlation with the series of new daily COVID-19 infections in the period February–April 2020 in Emilia-Romagna. Results taken both before and after the governmental lockdown decisions show a clear correlation, although strictly in the Granger causality sense. Beyond the question of the real extent of such a correlation, our scientific efforts aim at reinvigorating the debate on a relevant case that should not remain unsolved or uninvestigated.
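A minimal, self-contained sketch of the idea behind such a test: a one-lag F-statistic asking whether past values of one series improve a linear autoregressive prediction of another. The synthetic data and the helper function are illustrative assumptions, not the study's code, which used daily pollutant and infection series:

```python
import numpy as np

def granger_f(y, x, lag=1):
    """F-statistic testing whether the lagged series x improves a linear
    autoregressive prediction of y (a one-lag Granger-causality sketch)."""
    n = len(y)
    target = y[lag:]
    X_restricted = np.column_stack([np.ones(n - lag), y[:-lag]])  # y's own lag only
    X_full = np.column_stack([X_restricted, x[:-lag]])            # plus x's lag
    rss = lambda X: np.sum((target - X @ np.linalg.lstsq(X, target, rcond=None)[0]) ** 2)
    df = n - lag - X_full.shape[1]
    return (rss(X_restricted) - rss(X_full)) / (rss(X_full) / df)

rng = np.random.default_rng(0)
x = rng.standard_normal(300)
noise = 0.5 * rng.standard_normal(300)
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.3 * y[t - 1] + 0.8 * x[t - 1] + noise[t]  # x "Granger-causes" y
F = granger_f(y, x)
```

A large F relative to the F(1, df) distribution rejects the null hypothesis that x adds no predictive information; production analyses would use a tested implementation with multiple lags and proper p-values.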

]]>Computation doi: 10.3390/computation8020058

Authors: Valentin Alekseev Qili Tang Maria Vasilyeva Eric T. Chung Yalchin Efendiev

In this paper, we consider a coupled system of equations that describes a simplified magnetohydrodynamics (MHD) problem in perforated domains. We construct a fine grid that resolves the perforations on the grid level in order to use a traditional approximation. For the solution on the fine grid, we construct an approximation using the mixed finite element method. To reduce the size of the fine grid system, we develop a Mixed Generalized Multiscale Finite Element Method (Mixed GMsFEM). The method differs from existing approaches and requires some modifications to represent the flow and magnetic fields. Numerical results are presented for a two-dimensional model problem in perforated domains, a special case of the general 3D problem. We study the influence of the number of multiscale basis functions on the accuracy of the method and show that the proposed method provides good accuracy with few basis functions.

]]>Computation doi: 10.3390/computation8020057

Authors: Winter Sinkala Tembinkosi F. Nkalashe

Two equations are considered in this paper: the Black–Scholes equation and an equation that models the spatial dynamics of a brain tumor under some treatment regime, which we shall call the tumor equation. The Black–Scholes and tumor equations are partial differential equations that arise in very different contexts. The tumor equation is used to model the propagation of brain tumors, while the Black–Scholes equation arises in financial mathematics as a model for the fair price of a European option and other related derivatives. We use Lie symmetry analysis to establish a mapping between them and hence deduce solutions of the tumor equation from solutions of the Black–Scholes equation.

]]>Computation doi: 10.3390/computation8020056

Authors: Ikha Magdalena Muh Fadhel Atras Leo Sembiring M. A. Nugroho Roi Solomon B. Labay Marian P. Roque

In this paper, we investigate the wave damping mechanism caused by the presence of submerged bars using the Shallow Water Equations (SWEs). We first solve these equations for the single bar case using separation of variables to obtain the analytical solution for the wave elevation over a rectangular bar wave reflector with specific heights and lengths. From the analytical solution, we derive the wave reflection and transmission coefficients and determine the optimal height and length of the bar that would give the smallest transmission coefficient. We also measure the effectiveness of the bar by comparing the amplitude of the incoming wave before and after the wave passes the submerged bar, and extend the result to the case of n-submerged bars. We then construct a numerical scheme for the SWEs based on the finite volume method on a staggered grid to simulate the propagation of a monochromatic wave as it passes over a single submerged rectangular bar. For validation, we compare the transmission coefficient values obtained from the analytical solution, numerical scheme, and experimental data. The result of this paper may be useful in wave reflector engineering and design, particularly that of rectangle-shaped wave reflectors, as it can serve as a basis for designing bar wave reflectors that reduce wave amplitudes optimally.
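For intuition about the coefficients involved (using a single abrupt depth change rather than the finite-length bars analysed in the paper), the classical linear long-wave reflection and transmission coefficients at a depth step can be sketched as:

```python
from math import sqrt

def step_coefficients(h1, h2):
    """Linear long-wave reflection/transmission coefficients at an abrupt
    depth change from h1 to h2, a single-step idealisation of a submerged bar.
    Wave speeds scale as sqrt(g * h), so the ratio depends only on the depths."""
    c1, c2 = sqrt(h1), sqrt(h2)
    kr = (c1 - c2) / (c1 + c2)   # reflection coefficient
    kt = 2 * c1 / (c1 + c2)      # transmission coefficient
    return kr, kt

kr, kt = step_coefficients(2.0, 0.5)   # wave running onto a shallow shelf
```

These satisfy kt − kr = 1 (continuity of surface elevation) and the energy-flux balance kr² + (c2/c1)·kt² = 1; the paper's rectangular bars add a second step and a finite length, which is what makes an optimal bar geometry possible.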

]]>Computation doi: 10.3390/computation8020055

Authors: Stanford Shateyi Hillary Muzara

A thorough and detailed investigation of the unsteady free convection boundary layer flow of an incompressible, electrically conducting Williamson fluid over a stretching sheet saturated with a porous medium has been carried out numerically. The governing partial differential equations are transformed into a system of non-linear dimensionless ordinary differential equations by employing suitable similarity transformations. The resulting equations are then solved numerically using the spectral quasi-linearization method. Numerical solutions are obtained in terms of the velocity, temperature and concentration profiles, as well as the skin friction and the heat and mass transfer rates. These numerical results are presented graphically and in tabular form. From the results, it is found that the Weissenberg number, the local electric parameter, the unsteadiness parameter, and the magnetic, porosity and buoyancy parameters have significant effects on the flow properties.

]]>Computation doi: 10.3390/computation8020054

Authors: Piotr Soczówka Renata Żochowska Grzegorz Karoń

The transport system of a Smart City consists of many subsystems; therefore, modeling the transportation network that maps its structure requires consideration of both the connections between individual subsystems and the relationships within each of them. The road and street network is one of the most important subsystems, whose main task is to ensure access to the places generating travel demand in the city, so its effectiveness should be at an appropriate level of quality. Connectivity is one of the most important characteristics of a road and street network: it describes how the elements of the network are connected, which translates into travel times and costs. The analysis of the connectivity of road and street networks in urban areas is often conducted with topological measures. For a large city area, such analysis requires dividing the area into smaller parts, which may affect the computed values of these measures. The main goal of the study was therefore to present a method of analysis based on computing the numerical values of selected measures of connectivity of the road and street network for a city area divided into fields of regular shape. To achieve that goal, the analyzed area was split into a regular grid, and the numerical values of the chosen measures of connectivity were then calculated for each basic field; the results allowed us to determine whether they are influenced by the method of division of the area. The results showed that the size of the basic field influences the numerical values of the measures of connectivity; however, that influence differs for each of the selected measures.
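Typical topological measures of this kind (the study's exact selection is not enumerated here) include the classical alpha, beta and gamma indices for a planar network with v vertices and e edges; a minimal sketch:

```python
def connectivity_indices(v, e):
    """Classical topological connectivity measures for a planar network
    with v vertices and e edges."""
    beta = e / v                        # average number of edges per node
    gamma = e / (3 * (v - 2))           # share of the maximum edges a planar graph allows
    alpha = (e - v + 1) / (2 * v - 5)   # share of the maximum independent cycles
    return alpha, beta, gamma

# K4 is the densest planar graph on 4 nodes: all three indices are maximal
alpha, beta, gamma = connectivity_indices(4, 6)
```

Computing these per grid cell, as in the study, then shows directly how the chosen cell size changes the resulting values.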

]]>Computation doi: 10.3390/computation8020053

Authors: Zhen Qiao Hongtao Zhang Hai-Feng Ji Qian Chen

Since the outbreak of the 2019 novel coronavirus disease (COVID-19), the medical research community is vigorously seeking a treatment to control the infection and save the lives of severely infected patients. The main potential candidates for the control of viruses are virally targeted agents. In this short letter, we report our calculations on the inhibitors for the SARS-CoV-2 3CL protease and the spike protein for the potential treatment of COVID-19. The results show that the most potent inhibitors of the SARS-CoV-2 3CL protease include saquinavir, tadalafil, rivaroxaban, sildenafil, dasatinib, etc. Ergotamine, amphotericin b, and vancomycin are most promising to block the interaction of the SARS-CoV-2 S-protein with human ACE-2.

]]>Computation doi: 10.3390/computation8020052

Authors: Jerwin Jay E. Taping Junie B. Billones Voltaire G. Organo

Nickel(II) complexes of mono-functionalized pyridine-tetraazamacrocycles (PyMACs) are a new class of catalysts that possess promising activity similar to biological peroxidases. Experimental studies with ABTS (2,2′-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid), substrate) and H2O2 (oxidant) proposed that hydrogen-bonding and proton-transfer reactions facilitated by their pendant arm were responsible for their catalytic activity. In this work, density functional theory calculations were performed to unravel the influence of pendant arm functionalization on the catalytic performance of Ni(II)–PyMACs. Generated frontier orbitals suggested that Ni(II)–PyMACs activate H2O2 by satisfying two requirements: (1) the deprotonation of H2O2 to form the highly nucleophilic HOO−, and (2) the generation of low-spin, singlet state Ni(II)–PyMACs to allow the binding of HOO−. COSMO solvation-based energies revealed that the O–O Ni(II)–hydroperoxo bond, regardless of pendant arm type, ruptures favorably via heterolysis to produce high-spin (S = 1) [(L)Ni3+–O·]2+ and HO−. Aqueous solvation was found crucial in the stabilization of charged species, thereby favoring the heterolytic process over the homolytic one. The redox reaction of [(L)Ni3+–O·]2+ with ABTS obeyed a 1:2 stoichiometric ratio, followed by proton transfer to produce the final intermediate. The regeneration of Ni(II)–PyMACs at the final step involved the liberation of HO−, which was highly favorable when protons were readily available or when the pKa of the pendant arm was low.

]]>Computation doi: 10.3390/computation8020051

Authors: Evgenia Ishchukova Ekaterina Maro Pavel Pristalov

In January 2016, a new standard for symmetric block encryption was established in the Russian Federation. The standard contains two encryption algorithms: Magma and Kuznyechik. In this paper we propose to consider the possibility of applying the algebraic analysis method to these ciphers. To do this, we use the simplified algorithms Magma ⊕ and S-KN2. To solve sets of nonlinear Boolean equations, we choose two different approaches: a reduction and solving of the Boolean satisfiability problem (by using the CryptoMiniSat solver) and an extended linearization method (XL). In our research, we suggest using a security assessment approach that identifies the resistance of block ciphers to algebraic cryptanalysis. The algebraic analysis of an eight-round Magma (68 key bits were fixed) with the CryptoMiniSat solver demanded four known text pairs and took 3029.56 s to complete (the search took 416.31 s). The algebraic analysis of a five-round Magma cipher with weakened S-boxes required seven known text pairs and took 1135.61 s (the search took 3.36 s). The algebraic analysis of a five-round Magma cipher with disabled S-blocks (equivalent value substitution) led to getting only one solution for five known text pairs in 501.18 s (the search took 4.92 s). The complexity of the XL algebraic analysis of a four-round S-KN2 cipher with three text pairs was 236.33 s (took 1.191 Gb RAM).

]]>Computation doi: 10.3390/computation8020050

Authors: Stephan Lenz Martin Geier Manfred Krafczyk

The simulation of fire is a challenging task due to its occurrence on multiple space-time scales and the non-linear interaction of multiple physical processes. Current state-of-the-art software such as the Fire Dynamics Simulator (FDS) implements most of the required physics, yet a significant drawback of this implementation is its limited scalability on modern massively parallel hardware. The current paper presents a massively parallel implementation of a Gas Kinetic Scheme (GKS) on General Purpose Graphics Processing Units (GPGPUs) as a potential alternative modeling and simulation approach. The implementation is validated for turbulent natural convection against experimental data. Subsequently, it is validated for two simulations of fire plumes, including a small-scale table top setup and a fire on the scale of a few meters. We show that the present GKS achieves comparable accuracy to the results obtained by FDS. Yet, due to the parallel efficiency on dedicated hardware, our GKS implementation delivers a reduction of wall-clock times of more than an order of magnitude. This paper demonstrates the potential of explicit local schemes in massively parallel environments for the simulation of fire.

]]>Computation doi: 10.3390/computation8020049

Authors: Khalid Hattaf

This paper proposes a new definition of fractional derivative with non-singular kernel in the sense of Caputo which generalizes various forms existing in the literature. Furthermore, the version in the sense of Riemann&ndash;Liouville is defined. Moreover, fundamental properties of the new generalized fractional derivatives in the sense of Caputo and Riemann&ndash;Liouville are rigorously studied. Finally, an application in epidemiology as well as in virology is presented.

]]>Computation doi: 10.3390/computation8020048

Authors: Stefano Quer Andrea Marcelli Giovanni Squillero

The maximum common subgraph of two graphs is the largest possible common subgraph, i.e., the common subgraph with as many vertices as possible. Even though this problem is very challenging, having long been proven NP-hard, its countless practical applications still motivate the search for exact solutions. This work discusses the possibility of extending an existing, very effective branch-and-bound procedure to parallel multi-core and many-core architectures. We analyze a parallel multi-core implementation that exploits a divide-and-conquer approach based on a thread pool, which does not deteriorate the original algorithmic efficiency and minimizes data structure repetitions. We also extend the original algorithm to parallel many-core GPU architectures adopting the CUDA programming framework, and we show how to handle the heavy workload unbalance and the massive data dependency. We then suggest new heuristics to reorder the adjacency matrix, to deal with “dead-ends”, and to randomize the search with automatic restarts. These heuristics can achieve significant speed-ups on specific instances, even if they may not be competitive with the original strategy on average. Finally, we propose a portfolio approach, which integrates all the different local search algorithms as component tools; rather than choosing the best tool for a given instance up-front, the portfolio takes the decision on-line. The proposed approach drastically limits memory bandwidth constraints and avoids other typical portfolio fragilities, as the CPU and GPU versions often show complementary efficiency and run on separate platforms. Experimental results support these claims and motivate further research to better exploit GPUs in embedded task-intensive and multi-engine parallel applications.
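A toy version of the underlying search (exact maximum common induced subgraph by branch and bound, without the paper's parallelism, reordering heuristics, or restarts) can be sketched as:

```python
def mcs_size(g1, g2):
    """Size of the maximum common induced subgraph of two graphs given as
    adjacency dicts {vertex: set_of_neighbours}, via a naive
    branch-and-bound over partial vertex mappings."""
    best = [0]

    def extend(mapping):
        best[0] = max(best[0], len(mapping))
        free1 = [a for a in g1 if a not in mapping]
        if len(mapping) + len(free1) <= best[0]:
            return                      # bound: this branch cannot improve on best
        used2 = set(mapping.values())
        for a in free1:
            for b in g2:
                if b in used2:
                    continue
                # adjacency with every already-mapped pair must agree
                if all((a in g1[ma]) == (b in g2[mb]) for ma, mb in mapping.items()):
                    extend({**mapping, a: b})

    extend({})
    return best[0]

triangle = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
path = {1: {2}, 2: {1, 3}, 3: {2}}
```

The real procedure prunes far more aggressively and orders the branching carefully; this sketch only shows the mapping-extension structure that the parallel versions distribute across threads and GPU blocks.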

]]>Computation doi: 10.3390/computation8020047

Authors: Kirkil Lin

A high-resolution large eddy simulation (LES) of wind flow over the Oklahoma City downtown area was performed to explain the effect of building height on wind flow over the city. Wind flow over cities is vital for pedestrian and traffic comfort as well as urban heat effects. An average southerly wind speed of eight meters per second was used at the inflow section. It was found that the heights and distribution of the buildings have the greatest impact on the wind flow patterns, and that the complexity of the flow field mainly depends on the locations of buildings relative to each other and on their heights. Strong up- and downflows in the wake of tall buildings, as well as large-scale coherent eddies between the low-rise buildings, were observed; among these effects, high-rise buildings had the greatest impact on the urban wind patterns. Other characteristics of urban canopy flows, such as wind shadows and channeling effects, are also successfully captured by the LES. The LES solver was shown to be a powerful tool for understanding urban canopy flows; therefore, it can be used in similar studies (e.g., other cities, dispersion studies, etc.) in the future.

]]>Computation doi: 10.3390/computation8020046

Authors: Ashis Kumar Mandal M. N. M. Kahar Graham Kendall

The paper investigates a partial exam assignment approach for solving the examination timetabling problem. Current approaches involve scheduling all of the exams into time slots and rooms (i.e., produce an initial solution) and then continuing by improving the initial solution in a predetermined number of iterations. We propose a modification of this process that schedules partially selected exams into time slots and rooms followed by improving the solution vector of partial exams. The process then continues with the next batch of exams until all exams are scheduled. The partial exam assignment approach utilises partial graph heuristic orderings with a modified great deluge algorithm (PGH-mGD). The PGH-mGD approach is tested on two benchmark datasets, a capacitated examination dataset from the 2nd international timetable competition (ITC2007) and an un-capacitated Toronto examination dataset. Experimental results show that PGH-mGD is able to produce quality solutions that are competitive with those of the previous approaches reported in the scientific literature.
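The great deluge acceptance rule at the heart of the modified algorithm is simple; a generic minimisation sketch on a toy objective (the exam-timetabling neighbourhoods and the authors' modifications are omitted, and the objective and step function are illustrative assumptions):

```python
import random

def great_deluge(cost, neighbour, x0, decay=0.02, iters=2000, seed=1):
    """Great deluge search: accept a candidate whenever its cost is below a
    'water level' that is lowered by a fixed amount each iteration."""
    rng = random.Random(seed)
    x = best = x0
    level = cost(x0)
    for _ in range(iters):
        cand = neighbour(x, rng)
        if cost(cand) <= level:         # accept anything under the water level
            x = cand
            if cost(x) < cost(best):
                best = x
        level -= decay                  # lower the level: gradually stricter
    return best

f = lambda x: (x - 3.0) ** 2                        # toy objective, minimum at 3
step = lambda x, rng: x + rng.uniform(-0.5, 0.5)    # toy neighbourhood move
best = great_deluge(f, step, x0=10.0)
```

Unlike simulated annealing, acceptance here is governed by a deterministic level rather than a temperature-dependent probability, which makes the decay rate the single key parameter to tune.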

]]>Computation doi: 10.3390/computation8020045

Authors: Yulia Shichkina Muon Ha

In this article, we describe a new formalized method for constructing a MongoDB NoSQL document database, taking into account the structure of the queries planned for execution against the database. The method is based on set theory. The initial data are the properties of the objects whose information is stored in the database, and the set of queries that are executed most often or whose execution speed should be maximal. In order to determine the need to create embedded documents, our method uses the type of relationship between the tables in a relational database. Our studies have shown that this method complements the method of creating collections without embedded documents. In the article, we also describe a methodology for determining which methods should be used in which cases to make working with databases more efficient. It should be noted that this approach can be used both for translating data from MySQL to MongoDB and for the consolidation of such databases.

]]>Computation doi: 10.3390/computation8020044

Authors: Ivan Girotto Sebastiano Fabio Schifano Enrico Calore Gianluca Di Staso Federico Toschi

This paper presents an analysis of both the computing performance and the energy efficiency of a Lattice Boltzmann Method (LBM) based application used to simulate three-dimensional multicomponent turbulent systems on massively parallel architectures for high-performance computing. Extending results reported in previous works, the analysis demonstrates the impact of using optimized data layouts designed for LBM-based applications on high-end computing platforms. A particular focus is given to the Intel Skylake processor and to comparing this target architecture with other models of the Intel processor family. We introduce the main motivations of the presented work as well as the relevance of its scientific application. We analyse the measured performance of the implemented data layouts on the Skylake processor while scaling the number of threads per socket, compare the results obtained on several CPU generations of the Intel processor family, and analyse the energy efficiency of the Skylake processor compared with the Intel Xeon Phi processor, finally adding our interpretation of the presented results.

]]>Computation doi: 10.3390/computation8020043

Authors: Marc Haussmann Florian Ries Jonathan B. Jeppener-Haltenhoff Yongxiang Li Marius Schmidt Cooper Welch Lars Illmann Benjamin Böhm Hermann Nirschl Mathias J. Krause Amsini Sadiki

In this paper, we compare the capabilities of two open source near-wall-modeled large eddy simulation (NWM-LES) approaches regarding prediction accuracy, computational costs and ease of use to predict complex turbulent flows relevant to internal combustion (IC) engines. The applied open source tools are the commonly used OpenFOAM, based on the finite volume method (FVM), and OpenLB, an implementation of the lattice Boltzmann method (LBM). The near-wall region is modeled by the Musker equation coupled to a van Driest damped Smagorinsky-Lilly sub-grid scale model to decrease the required mesh resolution. The results of both frameworks are compared to a stationary engine flow bench experiment by means of particle image velocimetry (PIV). The validation covers a detailed error analysis using time-averaged and root mean square (RMS) velocity fields. Grid studies are performed to examine the performance of the two solvers. In addition, the differences in the processes of grid generation are highlighted. The performance results show that the OpenLB approach is on average 32 times faster than the OpenFOAM implementation for the tested configurations. This indicates the potential of LBM for the simulation of IC engine-relevant complex turbulent flows using NWM-LES with computationally economic costs.

]]>Computation doi: 10.3390/computation8020042

Authors: Boris Avdeev Roman Dema Sergei Chernyi

The magnetic field distribution along the radius and height of the working chamber of a hydrocyclone with a radial magnetic field is studied. This distribution is one of the most important parameters of magnetic hydrocyclones, as it is necessary for calculating the coagulation forces and the magnetic force acting on a particle or flocculus. The magnetic field strength was calculated from the magnetic induction, measured by a teslameter at equal intervals and at different values of the supply DC current. The obtained values of the magnetic field strength are presented as graphs. Field distribution curves were constructed from the dependences found earlier, and the correlation coefficients were calculated. It was shown that the analyzed dependences can be used in further calculations of the coagulation forces and the magnetic force, because the theoretical and experimental data compare favourably with each other. The distribution along the radius and height in the cylindrical part of the magnetic hydrocyclone was consistent with data published in the scientific literature.

]]>Computation doi: 10.3390/computation8020041

Authors: Felicia Anisoara Damian Simona Moldovanu Nilanjan Dey Amira S. Ashour Luminita Moraru

(1) Background: In this research, we aimed to identify and validate a set of relevant features to distinguish between benign nevi and melanoma lesions. (2) Methods: Two datasets with 70 melanomas and 100 nevi were investigated. The first contained raw images; the second contained images preprocessed for noise removal and uneven illumination reduction. The images in both datasets were segmented, and features describing form/shape and color were then extracted, such as asymmetry, eccentricity, circularity, asymmetry of color distribution, quadrant asymmetry, fast Fourier transform (FFT) normalization amplitude, and the 6th and 7th Hu's moments. The FFT normalization amplitude is an atypical feature, computed as a Fourier transform descriptor, that focuses on geometric signatures of skin lesions using frequency domain information. The receiver operating characteristic (ROC) curve and the area under the curve (AUC) were employed to ascertain the relevance of the selected features and their capability to differentiate between nevi and melanoma. (3) Results: The ROC curves and AUC were employed for all experiments and selected features. A comparison in terms of accuracy and AUC was performed, and the performance of the analyzed features was evaluated. (4) Conclusions: The asymmetry index and eccentricity, together with the F6 Hu invariant moment, provided a fairly good separation between malignant melanoma and benign lesions. The FFT normalization amplitude feature should also be exploited, as it shows potential for classification.
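The AUC used in such evaluations can be computed without plotting the ROC curve, via the equivalent Mann–Whitney rank statistic; a minimal sketch with illustrative labels and scores:

```python
import numpy as np

def auc_score(labels, scores):
    """Area under the ROC curve via the Mann-Whitney identity:
    the fraction of (positive, negative) pairs the score ranks correctly."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()    # correctly ordered pairs
    ties = (pos[:, None] == neg[None, :]).sum()   # ties count half
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

auc = auc_score([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])
```

An AUC of 0.5 corresponds to a feature with no discriminative power, and 1.0 to a feature that separates melanoma from nevi perfectly.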

Computation doi: 10.3390/computation8020040

Authors: Valery Ochkov Inna Vasileva Massimiliano Nori Konstantin Orlov Evgeny Nikulchev

In this article, we examine the use of symmetry groups for modeling applied problems through computer symbolic calculus. We consider the problem of solving radical equations symbolically with computer mathematical packages and propose methods to obtain a correct analytical solution for this class of equations in the Mathcad package. The application of symmetric polynomials is proposed to ensure a correct approach to the solution. Issues of solvability based on the physical sense of a problem are discussed, and common errors in solving radical equations related to the specifics of computer usage are analyzed. Illustrative electrical and geometrical problems are given as examples.
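One of the common errors the abstract alludes to is that squaring a radical equation can introduce extraneous roots, which a symbolic package may report unless they are filtered against the original equation. A minimal numerical sketch of that check (an illustration, not the authors' Mathcad workflow):

```python
import math

# Solve sqrt(x + 5) = x - 1. Squaring both sides yields
# x^2 - 3x - 4 = 0, with candidate roots 4 and -1.
def solve_radical():
    a, b, c = 1.0, -3.0, -4.0           # coefficients after squaring
    disc = math.sqrt(b * b - 4 * a * c)
    candidates = [(-b + disc) / (2 * a), (-b - disc) / (2 * a)]
    # Keep only roots satisfying the ORIGINAL equation: the radicand must
    # be nonnegative and the right-hand side nonnegative, since squaring
    # silently admits sign-flipped (extraneous) solutions.
    return [x for x in candidates
            if x + 5 >= 0 and x - 1 >= 0
            and math.isclose(math.sqrt(x + 5), x - 1)]

roots = solve_radical()
```

Here x = -1 solves the squared polynomial but not the original equation (the right-hand side would be -2), so only x = 4 survives the check.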

Computation doi: 10.3390/computation8020039

Authors: Varadarajan Rengaraj Michael Lass Christian Plessl Thomas D. Kühne

In scientific computing, the acceleration of atomistic computer simulations by means of custom hardware is finding ever-growing application. A major limitation, however, is that high efficiency in terms of performance and low power consumption entails the massive use of low-precision computing units. Here, based on the approximate computing paradigm, we present an algorithmic method to rigorously compensate for numerical inaccuracies due to low-accuracy arithmetic operations, while still obtaining exact expectation values, by means of a properly modified Langevin-type equation.
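The underlying idea can be sketched as a noise budget: if the low-precision force evaluation contributes white noise of known variance, the Langevin thermostat's injected noise can be shrunk so that the total fluctuation still satisfies the fluctuation-dissipation relation. This is only a schematic illustration of that bookkeeping, not the authors' exact scheme:

```python
# Sketch (assumption-laden): per-step force noise of standard deviation
# sigma_f from low-precision arithmetic is folded into the thermostat by
# reducing the injected thermal noise so the TOTAL noise variance still
# matches the fluctuation-dissipation target:
#   sigma_total^2  = 2 * gamma * kT * m / dt
#   sigma_inject^2 = sigma_total^2 - sigma_f^2

def injected_noise_std(gamma, kT, m, dt, sigma_f):
    target_var = 2.0 * gamma * kT * m / dt
    inject_var = target_var - sigma_f ** 2
    if inject_var < 0:
        # The friction gamma would have to be raised to absorb this much
        # numerical noise; here we simply flag the condition.
        raise ValueError("numerical noise exceeds the thermostat budget")
    return inject_var ** 0.5

s = injected_noise_std(gamma=1.0, kT=1.0, m=1.0, dt=0.01, sigma_f=10.0)
```

With these (hypothetical) parameters the thermostat target variance is 200, of which the arithmetic noise already supplies 100, so only the remainder is injected explicitly.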

Computation doi: 10.3390/computation8020038

Authors: Andreas G. Fotopoulos Dionissios P. Margaris

Our study presents the computational implementation of an air lubrication system on a commercial ship with a 154,800 m3 liquefied natural gas (LNG) capacity. Air lubrication reduces the skin friction between the ship's wetted area and the sea water. We analyze the real operating conditions, as well as the assumptions needed to approach the problem as accurately as possible. The computational analysis is performed with the ANSYS Fluent software. Two separate geometries (two different models) are drawn for the ship's hull: with and without an air lubrication system. Our aim is to extract two different skin friction coefficients, which affect the fuel consumption and the CO2 emissions of the ship. To our knowledge, this is the first time a ship's hull has been modeled at full scale with air lubrication injectors fitted in a computational environment in order to simulate the operation of the air lubrication system. The system's impact on minimizing the LNG transfer cost and on reducing fuel consumption and CO2 emissions is also examined, and the study demonstrates how to install the entire system in a newbuild. Fuel consumption can be reduced by up to 8%, and daily savings could reach up to EUR 8000 per travelling day.
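Since frictional resistance scales linearly with the skin friction coefficient at a fixed speed (F = 0.5 * rho * U^2 * Cf * A), the relative drag saving from air lubrication equals the relative reduction in Cf. A minimal sketch with entirely hypothetical values, not figures from the study:

```python
def skin_friction_drag(cf, rho, u, wetted_area):
    """Frictional resistance F = 0.5 * rho * U^2 * Cf * A (newtons)."""
    return 0.5 * rho * u ** 2 * cf * wetted_area

# Hypothetical inputs for illustration only:
rho = 1025.0      # sea water density, kg/m^3
u = 10.0          # ship speed, m/s
area = 10000.0    # wetted hull area, m^2
f_base = skin_friction_drag(0.0020, rho, u, area)   # no air lubrication
f_lub = skin_friction_drag(0.0018, rho, u, area)    # with air lubrication
saving = 1.0 - f_lub / f_base    # fractional reduction in friction drag
```

A 10% reduction in Cf translates directly into a 10% reduction in skin friction drag here; the net fuel saving in practice is smaller, since friction is only part of total resistance and the injectors consume power.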

Computation doi: 10.3390/computation8020037

Authors: Kaijie Fan Biagio Cosenza Ben Juurlink

Energy optimization is an increasingly important aspect of today's high-performance computing applications. In particular, dynamic voltage and frequency scaling (DVFS) has become a widely adopted solution for balancing performance and energy consumption, and hardware vendors provide management libraries that allow the programmer to change both memory and core frequencies manually to minimize energy consumption while maximizing performance. This article focuses on modeling the energy consumption and speedup of GPU applications under different frequency configurations. The task is not straightforward because of the large set of possible and uniformly distributed configurations, and because of the multi-objective nature of the problem, which minimizes energy consumption while maximizing performance. This article proposes a machine learning-based method to predict the best core and memory frequency configurations on GPUs for an input OpenCL kernel. The method is based on two models, for speedup and normalized energy predictions over the default frequency configuration, which are later combined into a multi-objective approach that predicts a Pareto set of frequency configurations. Results show that our approach is very accurate at predicting extrema and the Pareto set, and finds frequency configurations that dominate the default configuration in either energy or performance.
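The Pareto set the method predicts consists of the frequency configurations not dominated by any other, i.e. no alternative is at least as good in both objectives and strictly better in one. A minimal sketch with hypothetical predicted values (normalized so the default configuration is 1.0 in both objectives):

```python
def pareto_set(configs):
    """Return configurations not dominated by any other.
    Each config is (label, energy, runtime); lower is better for both."""
    def dominates(a, b):
        return (a[1] <= b[1] and a[2] <= b[2]) and (a[1] < b[1] or a[2] < b[2])
    return [c for c in configs
            if not any(dominates(other, c) for other in configs if other is not c)]

# Hypothetical core/memory frequency configurations with predicted
# normalized energy and runtime relative to the default (1.0, 1.0).
configs = [
    ("default", 1.00, 1.00),
    ("low-core", 0.80, 1.10),   # saves energy, slightly slower
    ("high-mem", 0.95, 0.90),   # dominates the default outright
    ("bad",      1.10, 1.05),   # dominated, never selected
]
front = [c[0] for c in pareto_set(configs)]
```

In this toy instance the default itself falls off the front, mirroring the paper's finding that predicted configurations can dominate the default in energy or performance.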

Computation doi: 10.3390/computation8020036

Authors: Alessio Fuoco Giorgio De Luca Elena Tocci Johannes Carolus Jansen

Computational modelling and simulation form a consolidated branch in the multidisciplinary field of membrane science and technology [...]
