
Computation, Volume 8, Issue 2 (June 2020) – 38 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; the PDF is the official version of record. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
Article
Particulate Matter and COVID-19 Disease Diffusion in Emilia-Romagna (Italy). Already a Cold Case?
Computation 2020, 8(2), 59; https://0-doi-org.brum.beds.ac.uk/10.3390/computation8020059 - 23 Jun 2020
Cited by 10 | Viewed by 1687
Abstract
As we prepare to emerge from an extensive and unprecedented lockdown period, due to the COVID-19 virus infection that hit the Northern regions of Italy with Europe’s highest death toll, it becomes clear that what has gone wrong rests upon a combination of demographic, healthcare, political, business, organizational, and climatic factors that are out of our scientific scope. Nonetheless, looking at this problem from a patient’s perspective, it is indisputable that risk factors associated with the development of the virus disease include older age, a history of smoking, hypertension, and heart disease. While several studies have already shown that many of these diseases can also be favored by protracted exposure to air pollution, there has recently been a surge of negative commentary against authors who have correlated the fatal consequences of COVID-19 (also) to exposure to specific air pollutants. Well aware that understanding the real connection between the spread of this fatal virus and air pollutants would require many other investigations at a level appropriate to the scale of this phenomenon (e.g., biological, chemical, and physical), we propose the results of a study in which a series of daily measurements of PM2.5, PM10, and NO2 was considered over time, while the Granger causality statistical hypothesis test was used to determine the presence of a possible correlation with the series of new daily COVID-19 infections, in the period February–April 2020, in Emilia-Romagna. Results taken both before and after the governmental lockdown decisions show a clear correlation, although strictly seen from a Granger causality perspective. Moving beyond the relevance of our results towards the real extent of such a correlation, our scientific efforts aim at reinvigorating the debate on a relevant case that should not remain unsolved or uninvestigated. Full article
(This article belongs to the Special Issue Computation to Fight SARS-CoV-2 (CoVid-19))
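The Granger test used in the study can be sketched in a few lines. The series below are synthetic (not the Emilia-Romagna pollution or infection data), and the lag order is an arbitrary choice for the example; the F-statistic compares an autoregression of y on its own lags against one that also includes lags of x.

```python
import numpy as np

def granger_f_stat(y, x, lag=2):
    """F-statistic for the hypothesis 'x Granger-causes y' at a given lag.

    Compares a restricted AR model of y on its own lags with an
    unrestricted model that also includes lags of x.
    """
    n = len(y)
    rows = n - lag
    Y = y[lag:]
    own = np.column_stack([y[lag - k:n - k] for k in range(1, lag + 1)])
    other = np.column_stack([x[lag - k:n - k] for k in range(1, lag + 1)])
    ones = np.ones((rows, 1))
    X_r = np.hstack([ones, own])             # restricted model
    X_u = np.hstack([ones, own, other])      # unrestricted model
    rss_r = np.sum((Y - X_r @ np.linalg.lstsq(X_r, Y, rcond=None)[0]) ** 2)
    rss_u = np.sum((Y - X_u @ np.linalg.lstsq(X_u, Y, rcond=None)[0]) ** 2)
    df_u = rows - X_u.shape[1]
    return ((rss_r - rss_u) / lag) / (rss_u / df_u)

# Synthetic example: x leads y by one step, so x should "Granger-cause" y.
rng = np.random.default_rng(0)
x = rng.normal(size=300)
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()
```

A large F-statistic for (y, x) and a small one for the reversed direction is the Granger-causality signature; in practice the statistic is compared against an F distribution to obtain a p-value.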

Article
Mixed Generalized Multiscale Finite Element Method for a Simplified Magnetohydrodynamics Problem in Perforated Domains
Computation 2020, 8(2), 58; https://0-doi-org.brum.beds.ac.uk/10.3390/computation8020058 - 23 Jun 2020
Viewed by 1145
Abstract
In this paper, we consider a coupled system of equations that describes a simplified magnetohydrodynamics (MHD) problem in perforated domains. We construct a fine grid that resolves the perforations on the grid level in order to use a traditional approximation. For the solution on the fine grid, we construct an approximation using the mixed finite element method. To reduce the size of the fine grid system, we develop a Mixed Generalized Multiscale Finite Element Method (Mixed GMsFEM). The method differs from existing approaches and requires some modifications to represent the flow and magnetic fields. Numerical results are presented for a two-dimensional model problem in perforated domains. This model problem is a special case of the general 3D problem. We study the influence of the number of multiscale basis functions on the accuracy of the method and show that the proposed method provides good accuracy with few basis functions. Full article

Article
Studying a Tumor Growth Partial Differential Equation via the Black–Scholes Equation
Computation 2020, 8(2), 57; https://0-doi-org.brum.beds.ac.uk/10.3390/computation8020057 - 16 Jun 2020
Viewed by 890
Abstract
Two equations are considered in this paper—the Black–Scholes equation and an equation that models the spatial dynamics of a brain tumor under some treatment regime. We shall call the latter equation the tumor equation. The Black–Scholes and tumor equations are partial differential equations that arise in very different contexts. The tumor equation is used to model propagation of brain tumor, while the Black–Scholes equation arises in financial mathematics as a model for the fair price of a European option and other related derivatives. We use Lie symmetry analysis to establish a mapping between them and hence deduce solutions of the tumor equation from solutions of the Black–Scholes equation. Full article
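The kind of mapping the paper establishes can be oriented by a classical result (shown here for context, not as the paper's own derivation): the Black–Scholes equation can be reduced to the heat equation by a change of variables, after which solutions transfer between the two PDEs.

```latex
% Black–Scholes equation for an option value V(S, t):
\frac{\partial V}{\partial t}
  + \tfrac{1}{2}\sigma^{2} S^{2} \frac{\partial^{2} V}{\partial S^{2}}
  + r S \frac{\partial V}{\partial S} - rV = 0 .
% Substituting S = e^{x}, \tau = \tfrac{1}{2}\sigma^{2}(T - t), and
% V = e^{\alpha x + \beta \tau} u(x, \tau), for suitable constants
% \alpha, \beta depending on r and \sigma, reduces it to the heat
% equation u_\tau = u_{xx}.
```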
Article
Wave Transmission by Rectangular Submerged Breakwaters
Computation 2020, 8(2), 56; https://0-doi-org.brum.beds.ac.uk/10.3390/computation8020056 - 09 Jun 2020
Cited by 3 | Viewed by 1309
Abstract
In this paper, we investigate the wave damping mechanism caused by the presence of submerged bars using the Shallow Water Equations (SWEs). We first solve these equations for the single bar case using separation of variables to obtain the analytical solution for the wave elevation over a rectangular bar wave reflector with specific heights and lengths. From the analytical solution, we derive the wave reflection and transmission coefficients and determine the optimal height and length of the bar that would give the smallest transmission coefficient. We also measure the effectiveness of the bar by comparing the amplitude of the incoming wave before and after the wave passes the submerged bar, and extend the result to the case of n-submerged bars. We then construct a numerical scheme for the SWEs based on the finite volume method on a staggered grid to simulate the propagation of a monochromatic wave as it passes over a single submerged rectangular bar. For validation, we compare the transmission coefficient values obtained from the analytical solution, numerical scheme, and experimental data. The result of this paper may be useful in wave reflector engineering and design, particularly that of rectangle-shaped wave reflectors, as it can serve as a basis for designing bar wave reflectors that reduce wave amplitudes optimally. Full article
(This article belongs to the Section Computational Engineering)
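As a hedged illustration of how reflection and transmission coefficients arise (this is the textbook long-wave result for a single depth step, not the paper's full solution for a finite-length rectangular bar), matching surface elevation and mass flux at a depth discontinuity gives closed-form coefficients:

```python
import math

def step_coefficients(h1, h2, g=9.81):
    """Long-wave reflection/transmission at a single depth step h1 -> h2.

    Classic shallow-water result: R = (c1 - c2)/(c1 + c2),
    T = 2*c1/(c1 + c2), with wave speeds c_i = sqrt(g*h_i).
    """
    c1, c2 = math.sqrt(g * h1), math.sqrt(g * h2)
    R = (c1 - c2) / (c1 + c2)
    T = 2 * c1 / (c1 + c2)
    return R, T

# A submerged bar (h2 < h1) partially reflects the incoming wave.
R, T = step_coefficients(h1=4.0, h2=1.0)
```

Energy flux is conserved across the step, c1 = R²·c1 + T²·c2, which is a useful sanity check for any numerical scheme built on the SWEs.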

Article
On the Numerical Analysis of Unsteady MHD Boundary Layer Flow of Williamson Fluid Over a Stretching Sheet and Heat and Mass Transfers
Computation 2020, 8(2), 55; https://0-doi-org.brum.beds.ac.uk/10.3390/computation8020055 - 02 Jun 2020
Cited by 4 | Viewed by 1118
Abstract
A thorough and detailed investigation of an unsteady free convection boundary layer flow of an incompressible electrically conducting Williamson fluid over a stretching sheet saturated with a porous medium has been numerically carried out. The governing partial differential equations are transformed into a system of non-linear dimensionless ordinary differential equations by employing suitable similarity transformations. The resultant equations are then numerically solved using the spectral quasi-linearization method. Numerical solutions are obtained in terms of the velocity, temperature and concentration profiles, as well as the skin friction and the heat and mass transfer rates. These numerical results are presented graphically and in tabular forms. From the results, it is found that the Weissenberg number, local electric parameter, the unsteadiness parameter, and the magnetic, porosity and buoyancy parameters have significant effects on the flow properties. Full article
(This article belongs to the Special Issue Computational Heat, Mass, and Momentum Transfer—II)

Article
Method of the Analysis of the Connectivity of Road and Street Network in Terms of Division of the City Area
Computation 2020, 8(2), 54; https://0-doi-org.brum.beds.ac.uk/10.3390/computation8020054 - 02 Jun 2020
Cited by 3 | Viewed by 997
Abstract
The transport system of a Smart City consists of many subsystems; therefore, the modeling of the transportation network, which maps its structure, requires consideration of both the connections between individual subsystems and the relationships within each of them. The road and street network is one of the most important subsystems, whose main task is to ensure access to places generating travel demand in the city. Thus, its effectiveness should be at an appropriate level of quality. Connectivity is one of the most important characteristics of a road and street network. It describes how elements of that network are connected, which translates into travel times and costs. The analysis of the connectivity of the road and street network in urban areas is often conducted with the application of topological measures. In the case of a large city area, such analysis requires its division into smaller parts, which may affect the computational results of these measures; therefore, the main goal of the study was to present a method of performing analysis based on the computation of numerical values of selected measures of connectivity of the road and street network, for a city area divided into fields of regular shape. To achieve that goal, the analyzed area was split into a regular grid. Subsequently, numerical values of the chosen measures of connectivity were calculated for each basic field, and the results allowed us to determine whether they are influenced by the method of division of the area. The obtained results showed that the size of the basic field influences the numerical values of the measures of connectivity; however, that influence differs for each of the selected measures. Full article
(This article belongs to the Special Issue Transport Modelling for Smart Cities)
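Topological connectivity measures of the kind computed per basic field are typically simple functions of the vertex and edge counts of the field's subnetwork. The abstract does not name the exact measures, so treat the classic beta, gamma, and alpha indices below as generic examples:

```python
def connectivity_indices(v, e):
    """Classic topological indices for a planar road network graph
    with v nodes (intersections) and e edges (street links)."""
    beta = e / v                        # edges per node
    gamma = e / (3 * (v - 2))           # share of the max possible planar edges
    alpha = (e - v + 1) / (2 * v - 5)   # share of the max possible circuits
    return beta, gamma, alpha

# A 2x2 grid of intersections: 4 nodes, 4 links forming one loop.
beta, gamma, alpha = connectivity_indices(4, 4)
```

Because all three indices depend on the counts v and e inside each basic field, changing the field size changes the counts, and hence the index values, which is exactly the sensitivity the study quantifies.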

Article
Computational View toward the Inhibition of SARS-CoV-2 Spike Glycoprotein and the 3CL Protease
Computation 2020, 8(2), 53; https://0-doi-org.brum.beds.ac.uk/10.3390/computation8020053 - 31 May 2020
Cited by 9 | Viewed by 1900
Abstract
Since the outbreak of the 2019 novel coronavirus disease (COVID-19), the medical research community has been vigorously seeking a treatment to control the infection and save the lives of severely infected patients. The main potential candidates for the control of viruses are virally targeted agents. In this short letter, we report our calculations on the inhibitors for the SARS-CoV-2 3CL protease and the spike protein for the potential treatment of COVID-19. The results show that the most potent inhibitors of the SARS-CoV-2 3CL protease include saquinavir, tadalafil, rivaroxaban, sildenafil, and dasatinib, among others. Ergotamine, amphotericin B, and vancomycin are the most promising for blocking the interaction of the SARS-CoV-2 S-protein with human ACE-2. Full article
(This article belongs to the Special Issue Computation to Fight SARS-CoV-2 (CoVid-19))

Article
Influence of Varying Functionalization on the Peroxidase Activity of Nickel(II)–Pyridine Macrocycle Catalysts: Mechanistic Insights from Density Functional Theory
Computation 2020, 8(2), 52; https://0-doi-org.brum.beds.ac.uk/10.3390/computation8020052 - 31 May 2020
Viewed by 1332
Abstract
Nickel(II) complexes of mono-functionalized pyridine-tetraazamacrocycles (PyMACs) are a new class of catalysts that possess promising activity similar to biological peroxidases. Experimental studies with ABTS (2,2′-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid), substrate) and H2O2 (oxidant) proposed that hydrogen-bonding and proton-transfer reactions facilitated by their pendant arm were responsible for their catalytic activity. In this work, density functional theory calculations were performed to unravel the influence of pendant arm functionalization on the catalytic performance of Ni(II)–PyMACs. Generated frontier orbitals suggested that Ni(II)–PyMACs activate H2O2 by satisfying two requirements: (1) the deprotonation of H2O2 to form the highly nucleophilic HOO−, and (2) the generation of low-spin, singlet state Ni(II)–PyMACs to allow the binding of HOO−. COSMO solvation-based energies revealed that the O–O bond of the Ni(II)–hydroperoxo species, regardless of pendant arm type, ruptures favorably via heterolysis to produce high-spin (S = 1) [(L)Ni3+–O·]2+ and HO−. Aqueous solvation was found crucial in the stabilization of charged species, thereby favoring the heterolytic process over the homolytic one. The redox reaction of [(L)Ni3+–O·]2+ with ABTS obeyed a 1:2 stoichiometric ratio, followed by proton transfer to produce the final intermediate. The regeneration of Ni(II)–PyMACs at the final step involved the liberation of HO−, which was highly favorable when protons were readily available or when the pKa of the pendant arm was low. Full article
(This article belongs to the Section Computational Chemistry)

Article
Algebraic Analysis of a Simplified Encryption Algorithm GOST R 34.12-2015
Computation 2020, 8(2), 51; https://0-doi-org.brum.beds.ac.uk/10.3390/computation8020051 - 28 May 2020
Viewed by 992
Abstract
In January 2016, a new standard for symmetric block encryption was established in the Russian Federation. The standard contains two encryption algorithms: Magma and Kuznyechik. In this paper we propose to consider the possibility of applying the algebraic analysis method to these ciphers. To do this, we use the simplified algorithms Magma ⊕ and S-KN2. To solve sets of nonlinear Boolean equations, we choose two different approaches: a reduction and solving of the Boolean satisfiability problem (by using the CryptoMiniSat solver) and an extended linearization method (XL). In our research, we suggest using a security assessment approach that identifies the resistance of block ciphers to algebraic cryptanalysis. The algebraic analysis of an eight-round Magma (68 key bits were fixed) with the CryptoMiniSat solver demanded four known text pairs and took 3029.56 s to complete (the search took 416.31 s). The algebraic analysis of a five-round Magma cipher with weakened S-boxes required seven known text pairs and took 1135.61 s (the search took 3.36 s). The algebraic analysis of a five-round Magma cipher with disabled S-boxes (equivalent value substitution) yielded only one solution for five known text pairs in 501.18 s (the search took 4.92 s). The complexity of the XL algebraic analysis of a four-round S-KN2 cipher with three text pairs was 236.33 s (1.191 GB RAM). Full article
(This article belongs to the Special Issue Recent Advances in Computation Engineering)
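The first step of such SAT-based cryptanalysis is encoding cipher operations as CNF clauses. A minimal sketch (a single XOR gate, far simpler than Magma's round function; the variable numbering is arbitrary) shows the idea of a clause set whose satisfying assignments are exactly the valid gate behaviours:

```python
from itertools import product

# CNF for c = a XOR b over variables 1=a, 2=b, 3=c; a negative
# literal denotes negation. Each clause forbids one invalid row
# of the XOR truth table.
clauses = [(1, 2, -3), (1, -2, 3), (-1, 2, 3), (-1, -2, -3)]

def satisfies(assign, clauses):
    """assign maps variable number -> bool; a clause holds if any literal does."""
    return all(any((lit > 0) == assign[abs(lit)] for lit in cl) for cl in clauses)

# Enumerate all assignments that satisfy the clause set.
models = [bits for bits in product([False, True], repeat=3)
          if satisfies({1: bits[0], 2: bits[1], 3: bits[2]}, clauses)]
```

Exactly the four assignments with c = a XOR b survive; a solver such as CryptoMiniSat finds such models without exhaustive enumeration, which is what makes the key-recovery experiments above tractable.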

Article
Simulation of Fire with a Gas Kinetic Scheme on Distributed GPGPU Architectures
Computation 2020, 8(2), 50; https://0-doi-org.brum.beds.ac.uk/10.3390/computation8020050 - 26 May 2020
Cited by 1 | Viewed by 1159
Abstract
The simulation of fire is a challenging task due to its occurrence on multiple space-time scales and the non-linear interaction of multiple physical processes. Current state-of-the-art software such as the Fire Dynamics Simulator (FDS) implements most of the required physics, yet a significant drawback of this implementation is its limited scalability on modern massively parallel hardware. The current paper presents a massively parallel implementation of a Gas Kinetic Scheme (GKS) on General Purpose Graphics Processing Units (GPGPUs) as a potential alternative modeling and simulation approach. The implementation is validated for turbulent natural convection against experimental data. Subsequently, it is validated for two simulations of fire plumes, including a small-scale table top setup and a fire on the scale of a few meters. We show that the present GKS achieves comparable accuracy to the results obtained by FDS. Yet, due to the parallel efficiency on dedicated hardware, our GKS implementation delivers a reduction of wall-clock times of more than an order of magnitude. This paper demonstrates the potential of explicit local schemes in massively parallel environments for the simulation of fire. Full article
(This article belongs to the Section Computational Engineering)

Article
A New Generalized Definition of Fractional Derivative with Non-Singular Kernel
Computation 2020, 8(2), 49; https://0-doi-org.brum.beds.ac.uk/10.3390/computation8020049 - 21 May 2020
Cited by 9 | Viewed by 1221
Abstract
This paper proposes a new definition of fractional derivative with non-singular kernel in the sense of Caputo which generalizes various forms existing in the literature. Furthermore, the version in the sense of Riemann–Liouville is defined. Moreover, fundamental properties of the new generalized fractional derivatives in the sense of Caputo and Riemann–Liouville are rigorously studied. Finally, an application in epidemiology as well as in virology is presented. Full article
(This article belongs to the Section Computational Biology)
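One well-known special case that a generalized non-singular-kernel definition covers (shown here for orientation, not as the paper's new definition) is the Caputo–Fabrizio derivative, whose exponential kernel is bounded at the endpoint where the classical Caputo kernel blows up:

```latex
% Caputo–Fabrizio derivative of order 0 < \alpha < 1, with normalization
% M(\alpha) satisfying M(0) = M(1) = 1:
{}^{CF}\!D^{\alpha} f(t)
  = \frac{M(\alpha)}{1-\alpha}
    \int_{0}^{t} f'(s)\, \exp\!\left( -\frac{\alpha\,(t-s)}{1-\alpha} \right) ds .
% Unlike the classical Caputo kernel (t - s)^{-\alpha}, this exponential
% kernel is bounded at s = t, i.e., non-singular.
```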

Article
The Maximum Common Subgraph Problem: A Parallel and Multi-Engine Approach
Computation 2020, 8(2), 48; https://0-doi-org.brum.beds.ac.uk/10.3390/computation8020048 - 18 May 2020
Cited by 2 | Viewed by 1011
Abstract
The maximum common subgraph of two graphs is the largest possible common subgraph, i.e., the common subgraph with as many vertices as possible. Even though this problem is very challenging, as it has long been proven NP-hard, its countless practical applications still motivate the search for exact solutions. This work discusses the possibility of extending an existing, very effective branch-and-bound procedure to parallel multi-core and many-core architectures. We analyze a parallel multi-core implementation that exploits a divide-and-conquer approach based on a thread pool, which does not deteriorate the original algorithmic efficiency and minimizes data structure repetitions. We also extend the original algorithm to parallel many-core GPU architectures adopting the CUDA programming framework, and we show how to handle the heavy workload unbalance and the massive data dependencies. Then, we suggest new heuristics to reorder the adjacency matrix, to deal with “dead-ends”, and to randomize the search with automatic restarts. These heuristics can achieve significant speed-ups on specific instances, even if they may not be competitive with the original strategy on average. Finally, we propose a portfolio approach, which integrates all the different local search algorithms as component tools; such a portfolio, rather than choosing the best tool for a given instance up-front, takes the decision on-line. The proposed approach drastically limits memory bandwidth constraints and avoids other typical portfolio fragilities, as the CPU and GPU versions often show complementary efficiency and run on separate platforms. Experimental results support the claims and motivate further research to better exploit GPUs in embedded task-intensive and multi-engine parallel applications. Full article
(This article belongs to the Section Computational Engineering)
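For scale, the problem itself (though not the paper's branch-and-bound procedure) can be stated in a few lines of brute force. The exponential enumeration below, over graphs given as adjacency matrices, only works on toy instances, which is precisely why clever pruning and parallelism are needed:

```python
from itertools import combinations, permutations

def mcs_size(g1, g2):
    """Size of the maximum common induced subgraph of two small graphs,
    found by brute-force enumeration (exponential; illustration only)."""
    n1, n2 = len(g1), len(g2)
    for k in range(min(n1, n2), 0, -1):
        for vs1 in combinations(range(n1), k):
            for vs2 in permutations(range(n2), k):
                # Check the induced subgraphs agree on every vertex pair.
                if all(g1[vs1[i]][vs1[j]] == g2[vs2[i]][vs2[j]]
                       for i in range(k) for j in range(i + 1, k)):
                    return k
    return 0

# A triangle and a 3-vertex path share at most a single edge (2 vertices).
triangle = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
path3 = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
```

A branch-and-bound solver explores the same mapping space but prunes whole subtrees with an upper bound on the attainable common size, which is where the parallel decomposition discussed above applies.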

Article
Large Eddy Simulation of Wind Flow over A Realistic Urban Area
Computation 2020, 8(2), 47; https://0-doi-org.brum.beds.ac.uk/10.3390/computation8020047 - 18 May 2020
Viewed by 1487
Abstract
A high-resolution large eddy simulation (LES) of wind flow over the Oklahoma City downtown area was performed to explain the effect of building height on wind flow over the city. Wind flow over cities is vital for pedestrian and traffic comfort as well as urban heat effects. An average southerly wind speed of eight meters per second was used at the inflow section. It was found that the heights and distribution of the buildings have the greatest impact on the wind flow patterns. The complexity of the flow field mainly depended on the location of buildings relative to each other and on their heights. Strong up- and downflows in the wake of tall buildings, as well as large-scale coherent eddies between the low-rise buildings, were observed. It was found that high-rise buildings had the highest impact on the urban wind patterns. Other characteristics of urban canopy flows, such as wind shadows and channeling effects, were also successfully captured by the LES. The LES solver was shown to be a powerful tool for understanding urban canopy flows; therefore, it can be used in similar studies (e.g., other cities, dispersion studies, etc.) in the future. Full article
(This article belongs to the Section Computational Engineering)

Article
Addressing Examination Timetabling Problem Using a Partial Exams Approach in Constructive and Improvement
Computation 2020, 8(2), 46; https://0-doi-org.brum.beds.ac.uk/10.3390/computation8020046 - 17 May 2020
Cited by 3 | Viewed by 1013
Abstract
The paper investigates a partial exam assignment approach for solving the examination timetabling problem. Current approaches involve scheduling all of the exams into time slots and rooms (i.e., produce an initial solution) and then continuing by improving the initial solution in a predetermined number of iterations. We propose a modification of this process that schedules partially selected exams into time slots and rooms followed by improving the solution vector of partial exams. The process then continues with the next batch of exams until all exams are scheduled. The partial exam assignment approach utilises partial graph heuristic orderings with a modified great deluge algorithm (PGH-mGD). The PGH-mGD approach is tested on two benchmark datasets, a capacitated examination dataset from the 2nd international timetable competition (ITC2007) and an un-capacitated Toronto examination dataset. Experimental results show that PGH-mGD is able to produce quality solutions that are competitive with those of the previous approaches reported in the scientific literature. Full article
(This article belongs to the Section Computational Engineering)
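The great deluge idea behind the mGD component can be sketched in its classic form due to Dueck (on a toy integer minimization, not the paper's modified variant with partial exam batches): accept any candidate whose cost stays at or below a steadily lowered "water level".

```python
import random

def great_deluge(cost, neighbour, x0, iters=20000):
    """Classic great deluge minimization: accept a candidate whenever its
    cost is at or below the water level, then lower the level linearly."""
    level = cost(x0)
    decay = level / iters
    x, best = x0, x0
    for _ in range(iters):
        cand = neighbour(x)
        if cost(cand) <= level:
            x = cand
            if cost(x) < cost(best):
                best = x
        level -= decay
    return best

# Toy problem: minimize (x - 3)^2 over the integers with +-1 moves.
random.seed(1)
best = great_deluge(lambda x: (x - 3) ** 2,
                    lambda x: x + random.choice((-1, 1)),
                    x0=30)
```

Unlike simulated annealing, acceptance depends only on the deterministic level, which makes the trajectory easy to control; the paper's modification applies this ratchet to a solution vector of partially scheduled exams.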

Article
Creating Collections with Embedded Documents for Document Databases Taking into Account the Queries
Computation 2020, 8(2), 45; https://0-doi-org.brum.beds.ac.uk/10.3390/computation8020045 - 15 May 2020
Viewed by 817
Abstract
In this article, we describe a new formalized method for constructing MongoDB NoSQL document databases that takes into account the structure of the queries planned for execution against the database. The method is based on set theory. The initial data are the properties of the objects whose information is stored in the database, and the set of queries that are executed most often or whose execution speed should be maximal. In order to determine the need to create embedded documents, our method uses the type of relationship between tables in a relational database. Our studies have shown that this method complements the method of creating collections without embedded documents. In the article, we also describe a methodology for determining which method should be used in which cases to make working with databases more efficient. It should be noted that this approach can be used for translating data from MySQL to MongoDB and for the consolidation of these databases. Full article
(This article belongs to the Section Computational Engineering)
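The 1:N relational case that drives the embedding decision can be sketched as a plain transformation (toy data; the field names are made up for the example): rows of a child table are folded into their parent as an embedded array, mirroring what a MongoDB collection with embedded documents would store.

```python
def embed_one_to_many(parents, children, fk, key):
    """Fold child rows into their parent row under `key`,
    dropping the now-redundant foreign key `fk`."""
    by_parent = {}
    for c in children:
        child = {k: v for k, v in c.items() if k != fk}
        by_parent.setdefault(c[fk], []).append(child)
    return [dict(p, **{key: by_parent.get(p["id"], [])}) for p in parents]

# Toy relational data: one author row, two book rows referencing it.
authors = [{"id": 1, "name": "Ann"}]
books = [{"author_id": 1, "title": "A"}, {"author_id": 1, "title": "B"}]
docs = embed_one_to_many(authors, books, fk="author_id", key="books")
```

Whether this embedded form beats two flat collections depends on the query workload, which is exactly the trade-off the proposed method formalizes.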

Article
Performance and Energy Assessment of a Lattice Boltzmann Method Based Application on the Skylake Processor
Computation 2020, 8(2), 44; https://0-doi-org.brum.beds.ac.uk/10.3390/computation8020044 - 08 May 2020
Viewed by 935
Abstract
This paper presents the performance analysis for both the computing performance and the energy efficiency of a Lattice Boltzmann Method (LBM) based application, used to simulate three-dimensional multicomponent turbulent systems on massively parallel architectures for high-performance computing. Extending results reported in previous works, the analysis is meant to demonstrate the impact of using optimized data layouts designed for LBM based applications on high-end computer platforms. A particular focus is given to the Intel Skylake processor and to compare the target architecture with other models of the Intel processor family. We introduce the main motivations of the presented work as well as the relevance of its scientific application. We analyse the measured performances of the implemented data layouts on the Skylake processor while scaling the number of threads per socket. We compare the results obtained on several CPU generations of the Intel processor family and we make an analysis of energy efficiency on the Skylake processor compared with the Intel Xeon Phi processor, finally adding our interpretation of the presented results. Full article
(This article belongs to the Special Issue Energy-Efficient Computing on Parallel Architectures)
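The layout question at the heart of the analysis can be made concrete with a generic sketch (D3Q19 and the index conventions are illustrative, not the paper's exact code): the same lattice populations can be stored cell-major (array of structures) or direction-major (structure of arrays), and sweeps over one direction touch memory very differently in the two cases.

```python
import numpy as np

Q, N = 19, 4096              # D3Q19: 19 distribution functions per cell, N cells

f_aos = np.arange(Q * N, dtype=np.float64).reshape(N, Q)  # array of structures
f_soa = np.ascontiguousarray(f_aos.T)                     # structure of arrays

# Same value, two layouts: f_i(cell) lives at [cell, i] vs. [i, cell].
cell, i = 42, 7
assert f_aos[cell, i] == f_soa[i, cell]

# An SoA sweep over one direction is a contiguous read; the AoS
# equivalent strides through memory with stride Q.
contig = f_soa[i]
strided = f_aos[:, i]
```

Contiguous SoA access vectorizes well on wide-SIMD CPUs such as Skylake, which is why layout choice has a measurable effect on both performance and energy per update.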

Article
Evaluation of a Near-Wall-Modeled Large Eddy Lattice Boltzmann Method for the Analysis of Complex Flows Relevant to IC Engines
Computation 2020, 8(2), 43; https://0-doi-org.brum.beds.ac.uk/10.3390/computation8020043 - 05 May 2020
Cited by 8 | Viewed by 2929
Abstract
In this paper, we compare the capabilities of two open source near-wall-modeled large eddy simulation (NWM-LES) approaches regarding prediction accuracy, computational costs and ease of use to predict complex turbulent flows relevant to internal combustion (IC) engines. The applied open source tools are the commonly used OpenFOAM, based on the finite volume method (FVM), and OpenLB, an implementation of the lattice Boltzmann method (LBM). The near-wall region is modeled by the Musker equation coupled to a van Driest damped Smagorinsky-Lilly sub-grid scale model to decrease the required mesh resolution. The results of both frameworks are compared to a stationary engine flow bench experiment by means of particle image velocimetry (PIV). The validation covers a detailed error analysis using time-averaged and root mean square (RMS) velocity fields. Grid studies are performed to examine the performance of the two solvers. In addition, the differences in the processes of grid generation are highlighted. The performance results show that the OpenLB approach is on average 32 times faster than the OpenFOAM implementation for the tested configurations. This indicates the potential of LBM for the simulation of IC engine-relevant complex turbulent flows using NWM-LES with computationally economic costs. Full article

Article
Study and Modeling of the Magnetic Field Distribution in the Fricker Hydrocyclone Cylindrical Part
Computation 2020, 8(2), 42; https://0-doi-org.brum.beds.ac.uk/10.3390/computation8020042 - 02 May 2020
Cited by 5 | Viewed by 797
Abstract
The magnetic field distribution along the radius and height in the working chamber of a hydrocyclone with a radial magnetic field is studied. One of the most important parameters of magnetic hydrocyclones is the magnetic field distribution along the radius and height of [...] Read more.
The magnetic field distribution along the radius and height of the working chamber is one of the most important parameters of magnetic hydrocyclones; here, it is studied for a hydrocyclone with a radial magnetic field. It is necessary for calculating the coagulation forces and the magnetic force affecting the particle or flocculus. The magnetic field strength was calculated through magnetic induction, measured by a teslameter at equal intervals and at different values of the supply DC current. The obtained values for the magnetic field strength are presented in the form of graphs. Field distribution curves were then constructed from the previously derived dependences. The correlation coefficients were calculated. It was proven that the analyzed dependences could be used in further calculations of coagulation forces and magnetic force, because theoretical and experimental data compared favourably with each other. The distribution along the radius and height in the cylindrical part of the magnetic hydrocyclone was consistent with data published in the scientific literature. Full article

Article
Feature Selection of Non-Dermoscopic Skin Lesion Images for Nevus and Melanoma Classification
Computation 2020, 8(2), 41; https://doi.org/10.3390/computation8020041 - 30 Apr 2020
Cited by 7 | Viewed by 1681
Abstract
(1) Background: In this research, we aimed to identify and validate a set of relevant features to distinguish between benign nevi and melanoma lesions. (2) Methods: Two datasets with 70 melanomas and 100 nevi were investigated. The first one contained raw images. The [...] Read more.
(1) Background: In this research, we aimed to identify and validate a set of relevant features to distinguish between benign nevi and melanoma lesions. (2) Methods: Two datasets with 70 melanomas and 100 nevi were investigated. The first one contained raw images. The second dataset contained images preprocessed for noise removal and uneven illumination reduction. Further, the images belonging to both datasets were segmented, followed by extracting features considered in terms of form/shape and color, such as asymmetry, eccentricity, circularity, asymmetry of color distribution, quadrant asymmetry, fast Fourier transform (FFT) normalization amplitude, and 6th and 7th Hu’s moments. The FFT normalization amplitude is an atypical feature that is computed as a Fourier transform descriptor and focuses on geometric signatures of skin lesions using the frequency domain information. The receiver operating characteristic (ROC) curve and area under the curve (AUC) were employed to ascertain the relevance of the selected features and their capability to differentiate between nevi and melanoma. (3) Results: The ROC curves and AUC were employed for all experiments and selected features. A comparison in terms of accuracy and AUC was performed, and the performance of the analyzed features was evaluated. (4) Conclusions: The asymmetry index and eccentricity, together with the F6 Hu’s invariant moment, provided a reasonably good separation between malignant melanoma and benign lesions. The FFT normalization amplitude feature is also worth exploiting, as it shows potential for classification. Full article

Article
Symbolic Computation for Solving an Irrational Equation Based on the Symmetric Polynomials Method
Computation 2020, 8(2), 40; https://0-doi-org.brum.beds.ac.uk/10.3390/computation8020040 - 28 Apr 2020
Cited by 1 | Viewed by 981
Abstract
In this article, we examine the use of symmetry groups for modeling applied problems through computer symbolic calculus. We consider the problem of solving radical equations symbolically using computer mathematical packages. We propose some methods to obtain a correct analytical solution for this [...] Read more.
In this article, we examine the use of symmetry groups for modeling applied problems through computer symbolic calculus. We consider the problem of solving radical equations symbolically using computer mathematical packages. We propose some methods to obtain a correct analytical solution for this class of equations by means of the Mathcad package. The application of symmetric polynomials is proposed to ensure a correct approach to the solution. Issues of solvability based on the physical sense of a problem are discussed. Common errors in solving radical equations related to the specifics of computer usage are analyzed. Electrical and geometrical problems are given as worked examples. Full article

Article
Accurate Sampling with Noisy Forces from Approximate Computing
Computation 2020, 8(2), 39; https://0-doi-org.brum.beds.ac.uk/10.3390/computation8020039 - 28 Apr 2020
Cited by 3 | Viewed by 2238
Abstract
In scientific computing, the acceleration of atomistic computer simulations by means of custom hardware is finding ever-growing application. A major limitation, however, is that the high efficiency in terms of performance and low power consumption entails the massive usage of low precision computing [...] Read more.
In scientific computing, the acceleration of atomistic computer simulations by means of custom hardware is finding ever-growing application. A major limitation, however, is that the high efficiency in terms of performance and low power consumption entails the massive usage of low-precision computing units. Here, based on the approximate computing paradigm, we present an algorithmic method to rigorously compensate for the numerical inaccuracies caused by low-accuracy arithmetic operations, while still obtaining exact expectation values using a properly modified Langevin-type equation. Full article
(This article belongs to the Section Computational Chemistry)
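The compensation idea, injecting less thermostat noise to offset the noise already carried by the forces, can be illustrated with a toy overdamped Langevin sampler of a harmonic potential. This is only a sketch of the principle under simplifying assumptions (known Gaussian force noise, overdamped dynamics), not the paper's modified Langevin-type equation:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T, sigma_f = 0.01, 1.0, 2.0    # time step, temperature, force-noise level
n_steps = 200_000

x, samples = 0.0, []
for _ in range(n_steps):
    # Exact force for U(x) = x^2/2 is -x; we only have a noisy estimate of it.
    noisy_force = -x + rng.normal(0.0, sigma_f)
    # Inject *less* thermostat noise so that the total injected noise
    # (force noise + thermostat noise) matches the target temperature.
    thermo = np.sqrt(2.0 * T * dt - (dt * sigma_f) ** 2)
    x = x + dt * noisy_force + thermo * rng.normal()
    samples.append(x)

var = np.var(samples[n_steps // 10 :])   # discard a short transient
print(var)   # should be close to the exact stationary variance T = 1
```

Without the compensation (i.e., using the full `sqrt(2*T*dt)` thermostat term), the sampled variance would be systematically too large, corresponding to an effective temperature above T.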

Article
Computational Analysis of Air Lubrication System for Commercial Shipping and Impacts on Fuel Consumption
Computation 2020, 8(2), 38; https://0-doi-org.brum.beds.ac.uk/10.3390/computation8020038 - 28 Apr 2020
Cited by 1 | Viewed by 1230
Abstract
Our study presents the computational implementation of an air lubrication system on a commercial ship with 154,800 m3 Liquified Natural Gas capacity. The air lubrication reduces the skin friction between the ship’s wetted area and sea water. We analyze the real operating [...] Read more.
Our study presents the computational implementation of an air lubrication system on a commercial ship with 154,800 m3 Liquified Natural Gas capacity. The air lubrication reduces the skin friction between the ship’s wetted area and sea water. We analyze the real operating conditions, as well as the assumptions that allow the problem to be approached as accurately as possible. The computational analysis is performed with the ANSYS FLUENT software. Two separate geometries (two different models) are drawn for a ship’s hull: with and without an air lubrication system. Our aim is to extract two different skin friction coefficients, which affect the fuel consumption and the CO2 emissions of the ship. To our knowledge, a ship’s hull has not previously been modeled in real scale with air lubrication injectors in a computational environment in order to simulate the function of an air lubrication system. The system’s impact on the minimization of LNG transfer cost and on the reduction in fuel consumption and CO2 emissions is also examined. The study demonstrates how to install the entire system in a newbuilding. Fuel consumption can be reduced by up to 8%, and daily savings could reach up to EUR 8000 per travelling day. Full article

Article
Accurate Energy and Performance Prediction for Frequency-Scaled GPU Kernels
Computation 2020, 8(2), 37; https://0-doi-org.brum.beds.ac.uk/10.3390/computation8020037 - 27 Apr 2020
Cited by 1 | Viewed by 1298
Abstract
Energy optimization is an increasingly important aspect of today’s high-performance computing applications. In particular, dynamic voltage and frequency scaling (DVFS) has become a widely adopted solution to balance performance and energy consumption, and hardware vendors provide management libraries that allow the programmer to [...] Read more.
Energy optimization is an increasingly important aspect of today’s high-performance computing applications. In particular, dynamic voltage and frequency scaling (DVFS) has become a widely adopted solution to balance performance and energy consumption, and hardware vendors provide management libraries that allow the programmer to change both memory and core frequencies manually to minimize energy consumption while maximizing performance. This article focuses on modeling the energy consumption and speedup of GPU applications while using different frequency configurations. The task is not straightforward, because of the large set of possible and uniformly distributed configurations and because of the multi-objective nature of the problem, which minimizes energy consumption and maximizes performance. This article proposes a machine learning-based method to predict the best core and memory frequency configurations on GPUs for an input OpenCL kernel. The method is based on two models for speedup and normalized energy predictions over the default frequency configuration. Those are later combined into a multi-objective approach that predicts a Pareto-set of frequency configurations. Results show that our approach is very accurate at predicting extrema and the Pareto set, and finds frequency configurations that dominate the default configuration in either energy or performance. Full article
(This article belongs to the Special Issue Energy-Efficient Computing on Parallel Architectures)
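The multi-objective step described in the abstract, keeping only those frequency configurations that no other configuration beats on both energy and performance, amounts to computing a Pareto set. A minimal sketch with made-up configuration values (not measurements from the paper):

```python
def pareto_set(configs):
    """Return the configs not dominated by any other.

    Each config is (name, energy, speedup); lower energy and higher
    speedup are better. A config is dominated if another config is at
    least as good on both objectives and strictly better on one.
    """
    front = []
    for c in configs:
        dominated = any(
            o[1] <= c[1] and o[2] >= c[2] and (o[1] < c[1] or o[2] > c[2])
            for o in configs
        )
        if not dominated:
            front.append(c)
    return front

# Hypothetical (energy, speedup) values normalized to the default config.
configs = [
    ("default",      1.00, 1.00),
    ("low-mem-freq", 0.80, 0.95),
    ("high-core",    1.10, 1.20),
    ("balanced",     0.85, 1.05),  # dominates "default" on both axes
]
print([name for name, _, _ in pareto_set(configs)])
# -> ['low-mem-freq', 'high-core', 'balanced']
```

In the paper's setting, the (energy, speedup) pairs come from the two trained prediction models rather than from measurements, and the Pareto set is computed over the predicted values.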

Editorial
Computational Approaches in Membrane Science and Engineering
Computation 2020, 8(2), 36; https://0-doi-org.brum.beds.ac.uk/10.3390/computation8020036 - 23 Apr 2020
Cited by 1 | Viewed by 1034
Abstract
Computational modelling and simulation form a consolidated branch in the multidisciplinary field of membrane science and technology [...] Full article
Article
Optimal Design in Roller Pump System Applications for Linear Infusion
Computation 2020, 8(2), 35; https://0-doi-org.brum.beds.ac.uk/10.3390/computation8020035 - 19 Apr 2020
Viewed by 1443
Abstract
In this study, an infusion roller pump comprising two separate innovative resilient tube designs is presented. The first incorporates the flexible tubing cross-section area in its relaxed state as a lenticular one for power reduction reasons. The second keeps the previous lenticular cross-section [...] Read more.
In this study, an infusion roller pump comprising two separate innovative resilient tube designs is presented. The first incorporates the flexible tubing cross-section area in its relaxed state as a lenticular one for power reduction reasons. The second keeps the previous lenticular cross-section along its length, while it additionally incorporates an inflating portion, creating a momentary positive flow pulse to balance the void generated by the roller disengagement. Fluid–Structure Interaction (FSI) simulations cannot provide quantitatively realistic results, due to the limitation of full compression of the tube, and are only used qualitatively to reveal how to position the inflated portion along the tube length in order to suppress backflow and achieve a constant flow rate. Finally, indirect lumen volume measurements were performed numerically, and an optimum design was found by testing eight design approaches. These indirect fluid volume measurements identify the optimum inflated portion of the tube, leading to the elimination of backflow and pulsation. The optimum design has an inflation portion of 75 degrees covering almost 42% of the curved part of the tube, while it has a constant zone with the maximum value of inflated lenticular cross-section, within the portion, of 55 degrees covering about 73% of the inflation portion. Full article

Article
Performance and Energy Footprint Assessment of FPGAs and GPUs on HPC Systems Using Astrophysics Application
Computation 2020, 8(2), 34; https://0-doi-org.brum.beds.ac.uk/10.3390/computation8020034 - 17 Apr 2020
Cited by 1 | Viewed by 999
Abstract
New challenges in Astronomy and Astrophysics (AA) are urging the need for many exceptionally computationally intensive simulations. “Exascale” (and beyond) computational facilities are mandatory to address the size of theoretical problems and data coming from the new generation of observational facilities in AA. [...] Read more.
New challenges in Astronomy and Astrophysics (AA) are urging the need for many exceptionally computationally intensive simulations. “Exascale” (and beyond) computational facilities are mandatory to address the size of theoretical problems and data coming from the new generation of observational facilities in AA. Currently, the High-Performance Computing (HPC) sector is undergoing a profound phase of innovation, in which the primary challenge to the achievement of the “Exascale” is the power consumption. The goal of this work is to give some insights about performance and energy footprint of contemporary architectures for a real astrophysical application in an HPC context. We use a state-of-the-art N-body application that we re-engineered and optimized to exploit the heterogeneous underlying hardware fully. We quantitatively evaluate the impact of computation on energy consumption when running on four different platforms. Two of them represent the current HPC systems (Intel-based and equipped with NVIDIA GPUs), one is a micro-cluster based on ARM-MPSoC, and one is a “prototype towards Exascale” equipped with ARM-MPSoCs tightly coupled with FPGAs. We investigate the behavior of the different devices, where the high-end GPUs excel in terms of time-to-solution while MPSoC-FPGA systems outperform GPUs in power consumption. Our experience reveals that considering FPGAs for computationally intensive applications seems very promising, as their performance is improving to meet the requirements of scientific applications. This work can be a reference for future platform development for astrophysics applications where computationally intensive calculations are required. Full article
(This article belongs to the Special Issue Energy-Efficient Computing on Parallel Architectures)

Article
Exploration of Equivalent Design Approaches for Tanks Transporting Flammable Liquids
Computation 2020, 8(2), 33; https://0-doi-org.brum.beds.ac.uk/10.3390/computation8020033 - 16 Apr 2020
Cited by 1 | Viewed by 828
Abstract
Tank vehicles are widely used for the road transportation of dangerous goods and especially flammable liquid fuels. Existing gross weight limitations, in such transportations, render the self-weight of the tank structure a crucial parameter of the design. For the design and the construction [...] Read more.
Tank vehicles are widely used for the road transportation of dangerous goods and especially flammable liquid fuels. Existing gross weight limitations, in such transportations, render the self-weight of the tank structure a crucial parameter of the design. For the design and the construction of metallic tank vehicles carrying dangerous goods, the European Standard EN13094:2015 is applied. This Standard specifies a minimum shell thickness for the tank construction according to the mechanical properties of the construction material. In the present paper, primarily, the proposed design was investigated and a weight minimization of such a tank vehicle with respect to its structural integrity was performed. As test case, a tank vehicle with a box-shaped cross-section and low gross weight was considered. For the evaluation of the structural integrity of the tank construction, the mechanical analysis software ANSYS ® 2019R1 was used. The boundary values and the suitable computation for structural integrity were applied, as they are defined in the corresponding Standards. The thickness of the construction material was decreased to a minimum, lower than that posed by the standards, indicating that the limit they pose is by no means critical in terms of structural integrity. Full article

Article
Embedded Exponentially-Fitted Explicit Runge-Kutta-Nyström Methods for Solving Periodic Problems
Computation 2020, 8(2), 32; https://0-doi-org.brum.beds.ac.uk/10.3390/computation8020032 - 15 Apr 2020
Cited by 1 | Viewed by 1282
Abstract
In this work, a pair of embedded explicit exponentially-fitted Runge–Kutta–Nyström methods is formulated for solving special second-order ordinary differential equations (ODEs) with periodic solutions. A variable step-size technique is used for the derivation of the 5(3) embedded pair, which provides a cheap local [...] Read more.
In this work, a pair of embedded explicit exponentially-fitted Runge–Kutta–Nyström methods is formulated for solving special second-order ordinary differential equations (ODEs) with periodic solutions. A variable step-size technique is used for the derivation of the 5(3) embedded pair, which provides a cheap local error estimation. The numerical results obtained signify that the new adapted method is more efficient and accurate compared with the existing methods. Full article
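The cheap local error estimation that an embedded pair provides can be illustrated with a toy first-order embedded 2(1) pair (Heun/Euler) and a standard step-size controller. This is only a sketch of the general mechanism, not the exponentially-fitted 5(3) Runge-Kutta-Nyström pair derived in the paper:

```python
import math

def heun_euler_step(f, t, y, h):
    """One step of an embedded 2(1) pair (Heun / Euler).

    The two solutions share the same stage evaluations, so the
    difference between them gives a local error estimate at almost
    no extra cost.
    """
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    y_high = y + h * (k1 + k2) / 2.0   # 2nd-order (Heun)
    y_low = y + h * k1                 # 1st-order (Euler)
    return y_high, abs(y_high - y_low)

def integrate(f, t0, y0, t_end, tol=1e-6):
    """Adaptive integration: grow/shrink h from the error estimate."""
    t, y, h = t0, y0, (t_end - t0) / 100.0
    while t < t_end:
        h = min(h, t_end - t)
        y_new, err = heun_euler_step(f, t, y, h)
        if err <= tol:                 # accept the step
            t, y = t + h, y_new
        # standard controller for a pair of orders 2(1), with safety factor
        h *= min(2.0, max(0.1, 0.9 * (tol / max(err, 1e-16)) ** 0.5))
    return y

# y' = -y, y(0) = 1  =>  y(1) = e^{-1}
approx = integrate(lambda t, y: -y, 0.0, 1.0, 1.0)
print(abs(approx - math.exp(-1.0)))
```

The paper's method applies the same accept/reject logic, but with stages exponentially fitted to the problem's dominant frequency, which is what makes it efficient for oscillatory second-order ODEs.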

Article
Brain Tissue Evaluation Based on Skeleton Shape and Similarity Analysis between Hemispheres
Computation 2020, 8(2), 31; https://0-doi-org.brum.beds.ac.uk/10.3390/computation8020031 - 15 Apr 2020
Viewed by 773
Abstract
Background: The purpose of this article is to provide a new evaluation tool based on skeleton maps to assess the tumoral and non-tumoral regions of the 2D MRI in PD-weighted (proton density) and T2w (T2-weighted type) brain images. Methods: The proposed [...] Read more.
Background: The purpose of this article is to provide a new evaluation tool based on skeleton maps to assess the tumoral and non-tumoral regions of the 2D MRI in PD-weighted (proton density) and T2w (T2-weighted type) brain images. Methods: The proposed method investigated inter-hemisphere brain tissue similarity using a mask in the right hemisphere and its mirror reflection in the left one. At the hemisphere level and for each ROI (region of interest), a morphological skeleton algorithm was used to efficiently investigate the similarity between hemispheres. Two datasets with 88 T2w and PD images belonging to healthy patients and patients diagnosed with glioma were investigated: D1 contains the original raw images affected by Rician noise and D2 consists of the same images pre-processed for noise removal. Results: The investigation was based on structural similarity assessment by using the Structural Similarity Index (SSIM) and a modified Jaccard metrics. A novel S-Jaccard (Skeleton Jaccard) metric was proposed. Cluster accuracy was estimated based on the Silhouette method (SV). The Silhouette coefficient (SC) indicates the quality of the clustering process for the SSIM and S-Jaccard. To assess the overall classification accuracy, an ROC curve analysis was carried out. Conclusions: Consistent results were obtained for healthy patients and for PD images of glioma. We demonstrated that the S-Jaccard metric based on skeletal similarity is an efficient tool for an inter-hemisphere brain similarity evaluation. The accuracy of the proposed skeletonization method was lower for the original images affected by Rician noise (AUC = 0.883 (T2w) and 0.904 (PD)) but increased for denoised images (AUC = 0.951 (T2w) and 0.969 (PD)). Full article
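The Jaccard component underlying the proposed S-Jaccard metric reduces to the standard Jaccard index on binary masks. A minimal sketch on toy skeleton masks (illustrative data, not the paper's images or its morphological skeletonization step):

```python
import numpy as np

def jaccard(a, b):
    """Jaccard index |A ∩ B| / |A ∪ B| between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # two empty masks are treated as identical
    return float(np.logical_and(a, b).sum() / union)

# Two toy 5x5 "skeleton" masks standing in for a hemisphere skeleton
# and its mirrored counterpart.
left = np.zeros((5, 5), dtype=bool)
left[2, :] = True          # horizontal branch, 5 pixels
right = np.zeros((5, 5), dtype=bool)
right[2, 1:] = True        # same branch, one pixel shorter

print(jaccard(left, right))  # 4 shared pixels / 5 in the union = 0.8
```

In the paper, the masks compared this way are morphological skeletons of each ROI and of its mirror reflection in the opposite hemisphere, so a low score flags an inter-hemisphere asymmetry.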

Article
Comparison and Evaluation of Different Methods for the Feature Extraction from Educational Contents
Computation 2020, 8(2), 30; https://0-doi-org.brum.beds.ac.uk/10.3390/computation8020030 - 15 Apr 2020
Cited by 4 | Viewed by 1237
Abstract
This paper analyses the capabilities of different techniques to build a semantic representation of educational digital resources. Educational digital resources are modeled using the Learning Object Metadata (LOM) standard, and these semantic representations can be obtained from different LOM fields, like the title, [...] Read more.
This paper analyses the capabilities of different techniques to build a semantic representation of educational digital resources. Educational digital resources are modeled using the Learning Object Metadata (LOM) standard, and these semantic representations can be obtained from different LOM fields, such as the title and description, in order to extract the features/characteristics of the digital resources. The feature extraction methods used in this paper are Best Matching 25 (BM25), Latent Semantic Analysis (LSA), Doc2Vec, and Latent Dirichlet Allocation (LDA). The features/descriptors they generate are tested on three types of educational digital resources (scientific publications, learning objects, patents), a paraphrase corpus, and two use cases: an information retrieval context and an educational recommendation system. The analysis uses unsupervised metrics, namely two similarity functions and entropy, to determine the quality of the features proposed by each method. In addition, the paper presents tests of the techniques for the classification of paraphrases. The experiments show that, depending on the type of content and the metric, the performance of the feature extraction methods differs considerably: a method that outperforms the others in some cases is outperformed in others. Full article
(This article belongs to the Section Computational Engineering)
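The kind of unsupervised metrics the abstract mentions, similarity functions and entropy, can be sketched on toy feature vectors. The vectors below are made up for illustration; in the paper, the features come from BM25, LSA, Doc2Vec, and LDA applied to LOM fields:

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def entropy(weights):
    """Shannon entropy (bits) of a feature-weight vector.

    Lower entropy means the representation concentrates its weight on
    a few features; higher entropy means it is spread out.
    """
    p = np.asarray(weights, dtype=float)
    p = p / p.sum()
    p = p[p > 0]                      # 0 * log 0 = 0 by convention
    return float(-(p * np.log2(p)).sum())

# Toy feature vectors standing in for two documents' representations.
doc_a = np.array([0.2, 0.5, 0.0, 0.3])
doc_b = np.array([0.1, 0.6, 0.1, 0.2])
print(cosine_similarity(doc_a, doc_b))
print(entropy([0.25, 0.25, 0.25, 0.25]))  # uniform over 4 features -> 2 bits
```

Metrics of this kind let the feature sets be compared without labeled data, which is why the paper pairs them with the task-based evaluations (retrieval and recommendation).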
