
Computation, Volume 9, Issue 12 (December 2021) – 21 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF, click the "PDF Full-text" link and open it with the free Adobe Reader.
Editorial
The Reasonable Effectiveness of Randomness in Scalable and Integrative Gene Regulatory Network Inference and Beyond
Computation 2021, 9(12), 146; https://0-doi-org.brum.beds.ac.uk/10.3390/computation9120146 - 20 Dec 2021
Abstract
Gene regulation is orchestrated by a vast number of molecules, including transcription factors and co-factors, chromatin regulators, as well as epigenetic mechanisms, and it has been shown that transcriptional misregulation, e.g., caused by mutations in regulatory sequences, is responsible for a plethora of diseases, including cancer, developmental or neurological disorders. As a consequence, decoding the architecture of gene regulatory networks has become one of the most important tasks in modern (computational) biology. However, to advance our understanding of the mechanisms involved in the transcriptional apparatus, we need scalable approaches that can deal with the increasing number of large-scale, high-resolution, biological datasets. In particular, such approaches need to be capable of efficiently integrating and exploiting the biological and technological heterogeneity of such datasets in order to best infer the underlying, highly dynamic regulatory networks, often in the absence of sufficient ground truth data for model training or testing. With respect to scalability, randomized approaches have proven to be a promising alternative to deterministic methods in computational biology. As an example, one of the top performing algorithms in a community challenge on gene regulatory network inference from transcriptomic data is based on a random forest regression model. In this concise survey, we aim to highlight how randomized methods may serve as a highly valuable tool, in particular, with increasing amounts of large-scale biological experiments and datasets being collected. Given the complexity and interdisciplinary nature of the gene regulatory network inference problem, we hope our survey may be helpful to both computational and biological scientists.
It is our aim to provide a starting point for a dialogue about the concepts, benefits, and caveats of the toolbox of randomized methods, since unravelling the intricate web of highly dynamic, regulatory events will be one fundamental step in understanding the mechanisms of life and eventually developing efficient therapies to treat and cure diseases. Full article
(This article belongs to the Special Issue Inference of Gene Regulatory Networks Using Randomized Algorithms)
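The random-forest approach mentioned in the editorial (used, e.g., by the GENIE3 algorithm) regresses each gene's expression on that of all other genes and ranks candidate regulatory edges by feature importance. A minimal pure-Python sketch of that per-target regression-and-ranking loop, substituting a simple correlation-based importance score for the random-forest importance; all gene names and expression values are hypothetical:

```python
import math

def importance(x, y):
    """Absolute Pearson correlation as a stand-in importance score.
    (GENIE3 would use a random-forest feature importance instead.)"""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return abs(cov / (sx * sy)) if sx and sy else 0.0

def infer_network(expr):
    """For each target gene, score every other gene as a candidate
    regulator; return all directed edges sorted by importance."""
    edges = []
    for target, y in expr.items():
        for regulator, x in expr.items():
            if regulator != target:
                edges.append((regulator, target, importance(x, y)))
    return sorted(edges, key=lambda e: e[2], reverse=True)

# Toy expression profiles over five samples (hypothetical data):
expr = {
    "geneA": [1.0, 2.0, 3.0, 4.0, 5.0],
    "geneB": [2.1, 4.0, 6.2, 7.9, 10.1],   # tracks geneA closely
    "geneC": [5.0, 1.0, 4.0, 2.0, 3.0],    # unrelated
}
ranked = infer_network(expr)
```

The strongly co-varying pair (geneA, geneB) rises to the top of the ranked edge list, which is the basic output a network-inference method delivers.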
Article
Nonuniformity of Isometric Properties of Automotive Driveshafts
Computation 2021, 9(12), 145; https://0-doi-org.brum.beds.ac.uk/10.3390/computation9120145 - 20 Dec 2021
Abstract
This paper presents an analysis of the constant velocity joints (CVJs) of automotive driveshafts with respect to the nonuniformity of their isometric properties. In the automotive industry, driveshafts are considered to have constant velocity through their joints (free tripod joints and fixed ball joints), which was proved by Metzner's indirect method and Orain's direct method for tripod joints. Based on vectorial mechanics, the paper proves the quasi-isometry of velocity for polypod joints such as fixed ball joints. In addition, the global nonuniformity of constant velocity joints for modern driveshafts was computed based on the Dudita-Diaconescu homokinetic approach. The nonuniformity of the velocity isometry of driveshafts was computed as a function of the input angular velocity of the driveshaft, the angular inclination between the tripod–tulip axis and the midshaft axis, and the angular inclination between the bowl axis and the midshaft axis. The main aim of this article is to improve the geometric and kinematic approach by adding an important correction to predictions of driveshaft dynamics such as forced torsional vibrations, forced bending–shearing vibrations, and coupled torsional–bending vibrations in the regions of specific resonances such as principal parametric resonance, internal resonance, combined resonance, and simultaneous resonances. The corrections thus obtained are important in the early stages of design for the driveshafts themselves, for the prediction of their torsional dynamic behavior, and for the prediction of their bending–shearing dynamic behavior. The results presented in the article represent a starting point for future research on dynamic phenomena in this area. Full article
Article
A Computational Analysis for Active Flow and Pressure Control Using Moving Roller Peristalsis
Computation 2021, 9(12), 144; https://0-doi-org.brum.beds.ac.uk/10.3390/computation9120144 - 20 Dec 2021
Abstract
Peristaltic motion arises in many physiological, medical, pharmaceutical and industrial processes. Control of the fluid volume rate and pressure is crucial for pumping applications, such as the infusion of intravenous liquid drugs, blood transportation, etc. In this study, a simulation of peristaltic flow is presented in which occlusion is imposed by pairs of circular rollers that squeeze a deformable channel connected to a reservoir with constant fluid pressure. Naturally, this kind of flow is laminar; hence, the computations were performed in the laminar regime. The effect of the number and speed of the pairs of rollers, as well as that of the intrapair roller gap, is investigated. Non-Newtonian fluids are considered, and the effect of the degree of shear-thinning behavior is examined. The volumetric flow rate is found to increase with an increase in the number of rollers or in the relative occlusion. A reduction in the Bird–Carreau power index resulted in a small reduction in transport efficiency. The pumping characteristic, i.e., the induced pressure as a function of the volumetric flow rate, was also computed. A strong positive correlation exists between relative occlusion and induced pressure. Shear-thinning behavior significantly decreases the developed pressure compared to Newtonian fluids. The immersed boundary method on curvilinear coordinates is adapted and validated for non-Newtonian fluids. Full article
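The shear-thinning behavior referred to above is commonly described by the Bird–Carreau constitutive law, in which the apparent viscosity decreases with shear rate according to the power index n. A small sketch of that law; the parameter values below are illustrative, not taken from the paper:

```python
def carreau_viscosity(shear_rate, eta0, eta_inf, lam, n):
    """Bird-Carreau apparent viscosity:
    eta = eta_inf + (eta0 - eta_inf) * (1 + (lam*gdot)**2)**((n - 1)/2).
    n = 1 recovers a Newtonian fluid; n < 1 gives shear thinning."""
    return eta_inf + (eta0 - eta_inf) * (1.0 + (lam * shear_rate) ** 2) ** ((n - 1.0) / 2.0)

# Illustrative parameters (Pa*s for viscosities, s for the time constant):
eta0, eta_inf, lam = 0.056, 0.0035, 3.313
newtonian = carreau_viscosity(10.0, eta0, eta_inf, lam, n=1.0)
thinning = carreau_viscosity(10.0, eta0, eta_inf, lam, n=0.36)
```

Lowering n below 1 drives the apparent viscosity from the zero-shear plateau eta0 toward the infinite-shear limit eta_inf, which is the effect the study varies.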
Article
Numerical Analysis of a Novel Twin-Impeller Centrifugal Compressor
Computation 2021, 9(12), 143; https://0-doi-org.brum.beds.ac.uk/10.3390/computation9120143 - 18 Dec 2021
Abstract
Centrifugal compressors are widely used in many industrial fields, such as the automotive, aviation, and aerospace industries. However, these turbomachines suffer from instability phenomena, called rotating stall and surge, when the flow rate is too high or too low. These phenomena cause operation failure, pressure fluctuations, and vibrations throughout the system. Numerous mechanical solutions have been presented to minimize these instabilities and expand the operating range towards low flow rates, such as active control of the flow path, variable inlet guide vanes, and casing treatment. Recently, our team developed a novel compressor composed of a twin impeller powered by autonomous systems, and observed experimentally that this compressor improves performance and suppresses instabilities. In this paper, an active control method is introduced that controls the speed and rotation direction of the impellers to expand the operating range. A CFD study is then conducted to analyze the flow morphology and thermodynamic characteristics based on the experimental observations at three specific operating points. Numerical results and experimental measurements of the compressor maps are consistent. Full article
(This article belongs to the Special Issue Computational Heat, Mass, and Momentum Transfer—III)
Article
Evaluation of Pseudo-Random Number Generation on GPU Cards
Computation 2021, 9(12), 142; https://0-doi-org.brum.beds.ac.uk/10.3390/computation9120142 - 14 Dec 2021
Abstract
Monte Carlo methods rely on sequences of random numbers to obtain solutions to many problems in science and engineering. In this work, we evaluate the performance of different pseudo-random number generators (PRNGs) of the Curand library on a number of modern Nvidia GPU cards. As a numerical test, we generate pseudo-random number (PRN) sequences and obtain non-uniform distributions using the acceptance-rejection method. We consider GPU, CPU, and hybrid CPU/GPU implementations. For the GPU, we additionally consider two different implementations using the host and device application programming interfaces (API). We study how the performance depends on implementation parameters, including the number of threads per block and the number of blocks per streaming multiprocessor. To achieve the fastest performance, one has to minimize the time consumed by PRNG seed setup and state update. The seed setup time increases with the number of threads, while the PRNG state update time decreases. Hence, the fastest performance is achieved by an optimal balance of these opposing effects. Full article
(This article belongs to the Section Computational Engineering)
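The acceptance-rejection method used as the numerical test above draws candidates from a simple proposal distribution and keeps each with probability f(x)/(M·g(x)). A minimal CPU sketch in Python; the target density f(x) = 2x on [0, 1] is a hypothetical example, not one from the paper:

```python
import random

def accept_reject(f, m, n_samples, rng):
    """Sample from density f on [0, 1] using a uniform proposal g = 1
    and an envelope constant m >= max f; returns n_samples accepted draws."""
    samples = []
    while len(samples) < n_samples:
        x = rng.random()              # candidate from the uniform proposal
        u = rng.random()              # acceptance test
        if u < f(x) / m:
            samples.append(x)
    return samples

rng = random.Random(12345)
samples = accept_reject(lambda x: 2.0 * x, 2.0, 10000, rng)
mean = sum(samples) / len(samples)    # the mean of f(x) = 2x is 2/3
```

On a GPU, the same accept/reject test runs per thread, which is why the per-thread PRNG seed setup and state update costs discussed in the abstract dominate performance.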
Article
Evaluation of a Moisture Diffusion Model for Analyzing the Convective Drying Kinetics of Lavandula x allardii Leaves
Computation 2021, 9(12), 141; https://0-doi-org.brum.beds.ac.uk/10.3390/computation9120141 - 13 Dec 2021
Abstract
In the present case study, a moisture diffusion model is developed to simulate the drying kinetics of Lavandula x allardii leaves for non-stationary convective drying regimes. Increasing temperature profiles are applied over the drying duration, and the influence of the temperature advancing rates on the moisture removal and the drying rate is investigated. The model assumes one-dimensional moisture transfer under transient conditions, which occurs from the leaf center to the surface by liquid diffusion, driven by the concentration gradient that develops as surface water evaporates due to the difference in water vapor partial pressure between the drying medium and the leaf surface. A numerical solution of Fick's 2nd law is obtained by an in-house code using the finite volume method, including shrinkage and a variable temperature-dependent effective moisture diffusion coefficient. The numerical results have been validated against experimental data for selected cases using statistical indices, and the predicted dehydration curves showed good agreement for the higher temperature advancing rates. The examined modeling approach was found to be stable and can output, in a computationally efficient way, the temporal changes of moisture content and drying rate. Thus, the present model could be used for engineering applications involving the design, optimization and development of drying equipment and drying schedules for the examined type of non-stationary drying patterns. Full article
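The governing equation above, Fick's 2nd law, can be illustrated with a minimal explicit finite-difference sketch. The paper itself uses a finite volume scheme with shrinkage and a temperature-dependent diffusivity; this toy version assumes a constant diffusion coefficient and hypothetical grid parameters:

```python
def diffusion_step(m, d, dx, dt):
    """One explicit finite-difference step of Fick's 2nd law,
    dM/dt = D * d2M/dx2, on a 1D grid with fixed (Dirichlet) ends.
    Stable only if r = D*dt/dx**2 <= 0.5."""
    r = d * dt / dx ** 2
    assert r <= 0.5, "explicit scheme unstable"
    inner = [m[i] + r * (m[i + 1] - 2 * m[i] + m[i - 1])
             for i in range(1, len(m) - 1)]
    return [m[0]] + inner + [m[-1]]

# Hypothetical setup: uniform initial moisture 1.0 across 11 nodes,
# both surfaces instantly dried to the equilibrium value 0.0.
m = [0.0] + [1.0] * 9 + [0.0]
for _ in range(200):
    m = diffusion_step(m, d=1.0e-9, dx=1.0e-4, dt=4.0)
```

Moisture drains monotonically toward the dried surfaces, reproducing qualitatively the falling-rate drying behavior the model simulates.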
Article
Mass Media as a Mirror of the COVID-19 Pandemic
Computation 2021, 9(12), 140; https://0-doi-org.brum.beds.ac.uk/10.3390/computation9120140 - 13 Dec 2021
Abstract
The media plays an important role in disseminating facts and knowledge to the public at critical times, and the COVID-19 pandemic is a good example of such a period. This research is devoted to performing a comparative analysis of the representation of topics connected with the pandemic in the internet media of Kazakhstan and the Russian Federation. The main goal of the research is to propose a method that makes it possible to analyze the correlation between mass media dynamic indicators and the World Health Organization COVID-19 data. In order to solve this task, three approaches to representing mass media dynamics in numerical form (automatically obtained topics, average sentiment, and dynamic indicators) were proposed and applied according to a manually selected list of search queries. The results of the analysis indicate similarities and differences in the ways in which the epidemiological situation is reflected in publications in Russia and in Kazakhstan. In particular, the publication activity in both countries correlates with the absolute indicators, such as the daily number of new infections and the daily number of deaths. However, mass media tend to ignore the positive rate of confirmed cases and the virus reproduction rate. With respect to the strictness of quarantine measures, mass media in Russia show a rather high correlation, while in Kazakhstan the correlation is much lower. Analysis of search queries revealed that in Kazakhstan the problem of fake news and disinformation is more acute during periods of deterioration of the epidemiological situation, when the levels of crime and poverty increase. The novelty of this work is the proposal and implementation of a method that allows a comparative analysis of objective COVID-19 statistics and several mass media indicators to be performed. In addition, this is the first time that such a cross-country comparative analysis has been performed on a corpus in a language other than English. Full article
(This article belongs to the Special Issue Computation to Fight SARS-CoV-2 (CoVid-19))
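The core quantity in the method above, the correlation between a mass media indicator (e.g., daily publication counts) and an official statistic (e.g., daily new infections), is the Pearson coefficient. A minimal sketch with hypothetical daily series:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(var_x * var_y)

# Hypothetical week of data: daily new infections vs. article counts.
new_cases = [120, 150, 180, 260, 310, 400, 380]
articles = [14, 16, 21, 30, 33, 41, 40]
r = pearson(new_cases, articles)
```

A value of r near 1 indicates publication activity closely tracking the epidemiological indicator, which is the pattern the study reports for absolute indicators such as daily new infections.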
Article
Data Analysis and Symbolic Regression Models for Predicting CO and NOx Emissions from Gas Turbines
Computation 2021, 9(12), 139; https://0-doi-org.brum.beds.ac.uk/10.3390/computation9120139 - 13 Dec 2021
Abstract
Predictive emission monitoring systems (PEMS) are software solutions for the validation and supplementation of costly continuous emission monitoring systems for natural gas electrical generation turbines. The basis of PEMS is that of predictive models trained on past data to estimate emission components. The gas turbine process dataset from the University of California at Irvine open data repository has initiated a challenge of sorts to investigate the quality of models of various machine learning methods to build a model for predicting CO and NOx emissions depending on ambient variables and the parameters of the technological process. The novelty and features of this paper are: (i) a contribution to the study of the features of the open dataset on CO and NOx emissions for gas turbines, which will enable one to more objectively compare different machine learning methods for further research; (ii) for the first time for the CO and NOx emissions, a model based on symbolic regression and a genetic algorithm is presented; the advantage of this is the transparency of the influence of factors and the interpretability of the model; (iii) a new classification model based on the symbolic regression model and fuzzy inference system is proposed. The coefficients of determination of the developed models are R² = 0.83 for NOx emissions and R² = 0.89 for CO emissions. Full article
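The coefficient of determination reported above measures the fraction of variance a model explains: R² = 1 − SS_res / SS_tot. A short sketch with hypothetical measurements and predictions (not values from the paper):

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: R^2 = 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Hypothetical measured NOx values and model predictions:
measured = [60.0, 65.0, 70.0, 80.0, 75.0]
predicted = [62.0, 64.0, 71.0, 78.0, 76.0]
score = r_squared(measured, predicted)
```

Perfect predictions give R² = 1, while always predicting the mean gives R² = 0, which puts the paper's 0.83 and 0.89 values in context.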
Article
Plasma Confined Ground and Excited State Helium Atom: A Comparison Theorem Study Using Variational Monte Carlo and Lagrange Mesh Method
Computation 2021, 9(12), 138; https://0-doi-org.brum.beds.ac.uk/10.3390/computation9120138 - 10 Dec 2021
Abstract
The energy eigenvalues of the ground state and the lowest two excited states (configuration 1s2s) of the helium atom embedded in a plasma environment, modeled by Hulthén, Debye–Hückel and exponential cosine screened Coulomb potentials, are investigated within the variational Monte Carlo method, starting with ultracompact trial wave functions in the form of generalized Hylleraas–Kinoshita functions and Guevara–Harris–Turbiner functions. Lagrange mesh method calculations of the energy are reported for the He atom in the ground and excited ¹S and ³S states, which are in excellent agreement with the variational Monte Carlo results. Interesting relative orderings of the eigenvalues corresponding to the different screened Coulomb potentials in the He ground and excited electronic states are reported and rationalized in terms of the comparison theorem of quantum mechanics. Full article
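The three screening models named above replace the bare Coulomb attraction −Z/r with screened forms controlled by a screening parameter λ (atomic units), all reducing to −Z/r as λ → 0. A sketch of the potentials, written in the forms commonly used in the screened-Coulomb literature:

```python
import math

def debye_huckel(r, z, lam):
    """Debye-Hueckel (Yukawa) potential: -Z * exp(-lam*r) / r."""
    return -z * math.exp(-lam * r) / r

def hulthen(r, z, lam):
    """Hulthen potential: -Z * lam * exp(-lam*r) / (1 - exp(-lam*r))."""
    return -z * lam * math.exp(-lam * r) / (1.0 - math.exp(-lam * r))

def ecsc(r, z, lam):
    """Exponential cosine screened Coulomb:
    -Z * exp(-lam*r) * cos(lam*r) / r."""
    return -z * math.exp(-lam * r) * math.cos(lam * r) / r
```

At a fixed r and screening strength the three potentials have different depths (Hulthén deepest, then Debye–Hückel, then the exponential cosine form), and it is precisely such pointwise orderings that the comparison theorem turns into orderings of the energy eigenvalues.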
Article
Optimal Economic–Environmental Operation of BESS in AC Distribution Systems: A Convex Multi-Objective Formulation
Computation 2021, 9(12), 137; https://0-doi-org.brum.beds.ac.uk/10.3390/computation9120137 - 10 Dec 2021
Abstract
This paper deals with the multi-objective operation of battery energy storage systems (BESS) in AC distribution systems using a convex reformulation. The objective functions considered are the CO2 emissions and the cost of the daily energy losses. The conventional non-linear, nonconvex, branch-based multi-period optimal power flow model is reformulated as a second-order cone programming (SOCP) model, which ensures finding the global optimum for each point on the Pareto front. The weighting-factors methodology is used to convert the multi-objective model into a convex single-objective model, which allows the optimal Pareto front to be found using an iterative search. Two operational scenarios regarding the BESS are considered: (i) unity power factor operation and (ii) variable power factor operation. The numerical results demonstrate that including the reactive power capabilities of the BESS reduces CO2 emissions by 200 kg and costs by USD 80 per day of operation. All of the numerical validations were developed in MATLAB 2020b with the CVX tool and the SEDUMI and SDPT3 solvers. Full article
(This article belongs to the Section Computational Engineering)
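The weighting-factors methodology described above scalarizes the two objectives as f = w·f1 + (1 − w)·f2 and sweeps w over [0, 1]; for a convex problem, each minimizer is a Pareto-optimal point. A toy illustration with two convex scalar objectives f1 = (x − 1)² and f2 = (x + 1)², standing in for the paper's SOCP model, which is not reproduced here:

```python
def minimize_scalar(f, lo, hi, iters=200):
    """Ternary search for the minimum of a unimodal function on [lo, hi]."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2.0

f1 = lambda x: (x - 1.0) ** 2   # stands in for the energy-loss cost
f2 = lambda x: (x + 1.0) ** 2   # stands in for the CO2 emissions

# Sweep the weighting factor to trace the Pareto front.
front = []
for k in range(11):
    w = k / 10.0
    x_opt = minimize_scalar(lambda x: w * f1(x) + (1.0 - w) * f2(x), -2.0, 2.0)
    front.append((f1(x_opt), f2(x_opt)))
```

As w moves from 0 to 1, the solution trades f2 for f1, and the collected (f1, f2) pairs trace the Pareto front exactly because each scalarized subproblem is solved to global optimality, the same guarantee the SOCP reformulation provides.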
Article
Numerical Calculation of Sink Velocities for Helminth Eggs in Water
Computation 2021, 9(12), 136; https://0-doi-org.brum.beds.ac.uk/10.3390/computation9120136 - 10 Dec 2021
Abstract
The settling velocities of helminth eggs of three types, namely Ascaris suum (ASC), Trichuris suis (TRI), and Oesophagostomum spp. (OES), in clean tap water are computationally determined by means of computational fluid dynamics, using the general-purpose CFD software ANSYS Fluent 18.0. The previous measurements of other authors are taken as the basis for the problem formulation and validation, whereby the latter is performed by comparing the predicted sink velocities with those measured in an Owen tube. To enable a computational treatment, the measured shapes of the eggs are parametrized by idealizing them in terms of elementary geometric forms. As the egg shapes show a variation within each class, “mean” shapes are considered. The sink velocities are obtained through the computationally obtained drag coefficients. The latter are defined by means of steady-state calculations. Predicted sink velocities are compared with the measured ones. It is observed that the calculated values show a better agreement with the measurements, for ASC and TRI, compared to the theoretical sink values delivered by the Stokes theory. However, the observed agreement is still found not to be very satisfactory, indicating the role of further parameters, such as the uncertainties in the characterization of egg shapes or flocculation effects even in clean tap water. Full article
(This article belongs to the Section Computational Engineering)
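The Stokes theory used as the baseline above gives the terminal sink velocity of a small sphere in creeping flow as v = (ρp − ρf)·g·d² / (18·μ). A quick sketch with hypothetical egg-like parameters, not the measured values from the paper:

```python
def stokes_sink_velocity(d, rho_p, rho_f, mu, g=9.81):
    """Terminal settling velocity of a sphere in the Stokes
    (creeping-flow) regime: v = (rho_p - rho_f) * g * d**2 / (18 * mu)."""
    return (rho_p - rho_f) * g * d ** 2 / (18.0 * mu)

# Hypothetical egg: 60 um diameter, density 1100 kg/m^3, in water at 20 C.
v = stokes_sink_velocity(d=60e-6, rho_p=1100.0, rho_f=998.0, mu=1.0e-3)
```

Real helminth eggs are not spheres, which is why the paper computes drag coefficients for parametrized shapes via CFD instead of relying on this closed-form result.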
Article
Extraction of a One-Particle Reduced Density Matrix from a Quantum Monte Carlo Electronic Density: A New Tool for Studying Nondynamic Correlation
Computation 2021, 9(12), 135; https://0-doi-org.brum.beds.ac.uk/10.3390/computation9120135 - 09 Dec 2021
Abstract
In this work, we present a method to build a first order reduced density matrix (1-RDM) of a molecule from variational Quantum Monte Carlo (VMC) computations by means of a given correlated mapping wave function. Such a wave function is modeled on a Generalized Valence Bond plus Complete Active Space Configuration Interaction form and best fits the density resulting from the Slater-Jastrow wave function of VMC. The accuracy of the proposed method was verified by comparing the resulting kinetic energy with the corresponding VMC value. This 1-RDM is used to analyze the amount of correlation eventually captured in Kohn-Sham calculations performed in an unrestricted approach (UKS-DFT) and with different energy functionals. We performed test calculations on a selected set of molecules that show a significant multireference character. In this analysis, we compared both local and global indicators of nondynamic and dynamic correlation. Moreover, following the natural orbital decomposition of the 1-RDM, we also compared the effective temperatures of the corresponding Fermi-like distributions. Although there is a general agreement between UKS-DFT and VMC, we found the best match with the LC-BLYP functional. Full article
(This article belongs to the Special Issue Electronic Correlation)
Article
Fuzzy Mathematics-Based Outer-Loop Control Method for Converter-Connected Distributed Generation and Storage Devices in Micro-Grids
Computation 2021, 9(12), 134; https://0-doi-org.brum.beds.ac.uk/10.3390/computation9120134 - 09 Dec 2021
Abstract
The modern changes in electric systems present new issues for control strategies. When power converters and distributed energy resources are included in the micro-grid, its model is more complex than the simplified representations commonly used, which sometimes lose essential data. This paper proposes a unified fuzzy mathematics-based control method applied to the outer loop of a voltage source converter (VSC) in both grid-connected and islanded modes, in order to avoid using simplified models in complex micro-grids and to handle the uncertain and non-stationary behaviour of nonlinear systems. The proposed control method is straightforwardly designed without simplifying the controlled system. The VSC is a crucial device for integrating renewable sources and storage devices in a micro-grid. Simulation results validated the novel control strategy, demonstrating its capabilities for real field applications. Full article
(This article belongs to the Special Issue Recent Advances in Process Modeling and Optimisation)
Article
Principal Components Analysis of EEG Signals for Epileptic Patient Identification
Computation 2021, 9(12), 133; https://0-doi-org.brum.beds.ac.uk/10.3390/computation9120133 - 09 Dec 2021
Abstract
According to the behavior of its neuronal connections, it is possible to determine whether the brain suffers from abnormalities such as epilepsy. This disease produces seizures and alters the patient's behavior and lifestyle. Neurologists employ the electroencephalogram (EEG) to diagnose the disease through brain signals. Neurologists visually analyze these signals, recognizing patterns, to identify some indication of brain disorder that allows for the epilepsy diagnosis. This article proposes a study, based on Fourier analysis through the fast Fourier transform and principal component analysis (PCA), to quantitatively identify patterns to diagnose and differentiate between healthy patients and those with the disease. Subsequently, principal component analysis can be used to classify patients, employing frequency bands as the signal features. In addition, a classification comparison is made before and after applying principal component analysis. The classification is performed via logistic regression, with a reduction from 5 to 4 dimensions, as well as from 8 to 7, achieving an improvement in the precision, recall, and F1 score metrics with 7 dimensions. The best results obtained without PCA are: precision 0.560, recall 0.690, and F1 score 0.620; meanwhile, the best values obtained using PCA are: precision 0.734, recall 0.787, and F1 score 0.776. Full article
(This article belongs to the Section Computational Biology)
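The Fourier-analysis step described above reduces each EEG epoch to band powers (delta, theta, alpha, beta), which then serve as features for PCA and logistic regression. A minimal sketch of the band-power extraction using a naive DFT; the classifier stage is omitted, and the sampling rate and band edges are typical values, not necessarily those of the paper:

```python
import cmath, math

def band_powers(signal, fs, bands):
    """Compute spectral power per frequency band via a naive DFT."""
    n = len(signal)
    powers = {name: 0.0 for name in bands}
    for k in range(1, n // 2):           # skip DC, keep positive freqs
        freq = k * fs / n
        xk = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                 for t in range(n))
        for name, (lo, hi) in bands.items():
            if lo <= freq < hi:
                powers[name] += abs(xk) ** 2
    return powers

bands = {"delta": (0.5, 4.0), "theta": (4.0, 8.0),
         "alpha": (8.0, 13.0), "beta": (13.0, 30.0)}

# Synthetic 1-second epoch at 128 Hz dominated by a 10 Hz (alpha) rhythm.
fs, n = 128, 128
epoch = [math.sin(2 * math.pi * 10 * t / fs) for t in range(n)]
pw = band_powers(epoch, fs, bands)
```

The resulting band-power dictionary is exactly the kind of low-dimensional feature vector that PCA then compresses before logistic regression.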
Article
TS Fuzzy Robust Sampled-Data Control for Nonlinear Systems with Bounded Disturbances
Computation 2021, 9(12), 132; https://0-doi-org.brum.beds.ac.uk/10.3390/computation9120132 - 08 Dec 2021
Abstract
We investigate robust fault-tolerant control pertaining to Takagi–Sugeno (TS) fuzzy nonlinear systems with bounded disturbances, actuator failures, and time delays. A new fault model based on a sampled-data scheme that is able to satisfy certain criteria in relation to the actuator fault matrix is introduced. Specifically, we formulate a reliable controller with state feedback, such that the resulting closed-loop fuzzy system is robust, asymptotically stable, and able to satisfy a prescribed H∞ performance constraint. Linear matrix inequality (LMI) techniques, together with a proper construction of the Lyapunov–Krasovskii functional, are leveraged to derive delay-dependent sufficient conditions for the existence of a robust H∞ controller. It is straightforward to obtain the solution by using the MATLAB LMI toolbox. We demonstrate the effectiveness of the control law and the reduced conservativeness of the results through two numerical simulations. Full article
(This article belongs to the Special Issue Control Systems, Mathematical Modeling and Automation)
Article
Adjustment of Planned Surveying and Geodetic Networks Using Second-Order Nonlinear Programming Methods
Computation 2021, 9(12), 131; https://0-doi-org.brum.beds.ac.uk/10.3390/computation9120131 - 03 Dec 2021
Abstract
Due to the huge amount of redundant data, the problem arises of finding a single integral solution that satisfies numerous possible accuracy options. Mathematical processing of such measurements by traditional geodetic methods can take significant time while still not providing the required accuracy. This article discusses the application of nonlinear programming methods to the computational processing of geodetic data. Thanks to the development of computer technology, a modern surveyor can solve newly emerging production problems using nonlinear programming methods, carrying out preliminary computational experiments that allow the effectiveness of a particular method for a specific problem to be evaluated. A comparison of the efficiency and performance of various nonlinear programming methods in the adjustment of a trilateration network on a plane is shown. An algorithm based on a modified second-order Newton's method is proposed, which uses the matrix of second partial derivatives together with the Powell and the Davies–Swann–Campey (DSK) methods in the computational process. The new method simplifies the computational process and frees the user from calculating high-accuracy preliminary values of the parameters to be determined, since it expands the region of convergence of the solution. Full article
(This article belongs to the Section Computational Engineering)
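The iterative adjustment described in this abstract can be illustrated with a plain Gauss–Newton iteration, a simpler first-order relative of the paper's second-order scheme; the function name and the tiny four-station trilateration setup below are illustrative assumptions, not the authors' implementation:

```python
import math

def gauss_newton_trilateration(stations, distances, x0, y0, iters=20):
    """Adjust an approximate point position (x0, y0) from redundant
    distance observations to known stations via Gauss-Newton iteration.

    stations: list of (xs, ys) control-point coordinates
    distances: measured distances to each station (redundant set)
    """
    x, y = x0, y0
    for _ in range(iters):
        # Accumulators for the 2x2 normal equations J^T J * delta = J^T r
        a11 = a12 = a22 = b1 = b2 = 0.0
        for (xs, ys), d_obs in zip(stations, distances):
            dx, dy = x - xs, y - ys
            d_calc = math.hypot(dx, dy)
            # Jacobian row of the distance function and the residual
            j1, j2 = dx / d_calc, dy / d_calc
            r = d_obs - d_calc
            a11 += j1 * j1; a12 += j1 * j2; a22 += j2 * j2
            b1 += j1 * r;   b2 += j2 * r
        det = a11 * a22 - a12 * a12
        ddx = (b1 * a22 - b2 * a12) / det
        ddy = (b2 * a11 - b1 * a12) / det
        x, y = x + ddx, y + ddy
        if math.hypot(ddx, ddy) < 1e-12:
            break
    return x, y
```

The paper's method additionally exploits second partial derivatives, which is what enlarges the region of convergence relative to a sketch like this one.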
Article
Video-Based Deep Learning Approach for 3D Human Movement Analysis in Institutional Hallways: A Smart Hallway
Computation 2021, 9(12), 130; https://0-doi-org.brum.beds.ac.uk/10.3390/computation9120130 - 02 Dec 2021
Abstract
New artificial intelligence (AI)-based marker-less motion-capture models provide a basis for quantitative movement analysis within healthcare and eldercare institutions, increasing clinician access to quantitative movement data and improving decision making. This research modelled, simulated, designed, and implemented a novel marker-less AI motion-analysis approach for institutional hallways: a Smart Hallway. Computer simulations were used to develop a system configuration with four ceiling-mounted cameras. After implementing camera synchronization and calibration methods, OpenPose was used to generate body keypoints for each frame. The OpenPose BODY25 model generated 2D keypoints, from which 3D keypoints were calculated and postprocessed to extract outcome measures. The system was validated by comparing ground-truth body-segment lengths to calculated body-segment lengths and ground-truth foot events to foot events detected by the system. Body-segment length measurements were within 1.56 (SD = 2.77) cm, and foot-event detection was within four frames (67 ms), with an absolute error of three frames (50 ms) from ground-truth foot-event labels. This Smart Hallway delivers stride parameters, limb angles, and limb measurements to aid clinical decision making, providing relevant information without user intervention for data extraction and thereby increasing access to high-quality gait analysis for healthcare and eldercare institutions. Full article
(This article belongs to the Section Computational Engineering)
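As a hedged illustration of the foot-event detection idea (not the authors' actual pipeline), foot strikes can be labelled as local minima of a heel keypoint's vertical trajectory; the function name and tolerance are hypothetical:

```python
def detect_foot_events(heel_y, tolerance=1e-9):
    """Label frames where the heel's vertical coordinate reaches a local
    minimum, a common proxy for foot strike in marker-less gait analysis.

    heel_y: per-frame vertical heel positions (larger = higher)
    Returns the list of frame indices flagged as foot events.
    """
    events = []
    for i in range(1, len(heel_y) - 1):
        # Strictly below both neighbours (beyond a small tolerance)
        if (heel_y[i] < heel_y[i - 1] - tolerance
                and heel_y[i] < heel_y[i + 1] - tolerance):
            events.append(i)
    return events
```

Comparing such detected indices against manually labelled frames gives exactly the frame-offset error (e.g. "within four frames") that the abstract reports.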
Article
Recent Developments of Noise Attenuation Using Acoustic Barriers for a Specific Edge Geometry
Computation 2021, 9(12), 129; https://0-doi-org.brum.beds.ac.uk/10.3390/computation9120129 - 02 Dec 2021
Abstract
The aim of this research is to provide a better prediction of noise attenuation by thin rigid barriers. In particular, the paper presents an analysis of four methods for computing noise attenuation with acoustic barriers: the Maekawa–Tatge formulation, the Kurze and Anderson algorithm, the Menounou formulation, and the general prediction method (GPM, ISO 9613). To improve the GPM, the prediction of noise attenuation by an acoustic barrier was optimized by considering additional effects, such as attenuation due to geometrical divergence, ground absorption-reflections, and atmospheric absorption. The new method, the modified GPM (MGPM), was tested for the optimization of a Y-shaped edge geometry of the noise barrier, and close agreement with experimental data from the published literature was found. The specific Y-shaped edge geometry of the noise barrier contributes to the attenuation through diffraction phenomena. This aspect is based on Kirchhoff diffraction theory, which incorporates the Huygens–Fresnel principle, applied to a semi-infinite acoustic barrier. The new MGPM for predicting noise attenuation with acoustic barriers takes the following phenomena into consideration: the effect of the relative position of the receiver, the effect of the proximity of the source or receiver to the midplane of the barrier, the effect of the proximity of the receiver to the shadow boundary, the effect of ground absorption-reflections, the effect of atmospheric absorption, and the meteorological effect due to downwind. The paper concludes with the optimization of the method for computing noise attenuation with acoustic barriers, including the necessary corrections for ISO 9613 and the SoundPLAN software, as well as a case study on a specific edge-barrier geometry. Full article
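The Maekawa formulation mentioned in this abstract is commonly written as Att ≈ 10 log10(3 + 20N), with Fresnel number N = 2δ/λ, where δ is the path-length difference introduced by the barrier edge; a minimal sketch (variable names are ours, not the paper's):

```python
import math

def maekawa_attenuation(delta, frequency, c=343.0):
    """Maekawa's empirical barrier insertion loss in dB.

    delta: path-length difference (m) between the diffracted path over
           the barrier edge and the direct source-receiver path
    frequency: sound frequency (Hz); c: speed of sound (m/s)
    """
    wavelength = c / frequency
    fresnel_n = 2.0 * delta / wavelength  # Fresnel number N
    return 10.0 * math.log10(3.0 + 20.0 * fresnel_n)
```

At δ = 0 (receiver on the shadow boundary) this gives 10 log10(3) ≈ 4.8 dB, and attenuation grows with frequency and path difference, which is the baseline the MGPM corrections build on.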
Article
Transient Two-Way Molecular-Continuum Coupling with OpenFOAM and MaMiCo: A Sensitivity Study
Computation 2021, 9(12), 128; https://0-doi-org.brum.beds.ac.uk/10.3390/computation9120128 - 30 Nov 2021
Abstract
Molecular-continuum methods, as considered in this work, decompose the computational domain into continuum and molecular dynamics (MD) sub-domains. Compared to plain MD simulations, they greatly reduce computational effort. However, the quality of a fully two-way coupled simulation result strongly depends on a variety of system-specific parameters, and the corresponding sensitivity is only rarely addressed in the literature. Using a state-flux molecular-continuum coupling algorithm, we investigated the influence of various parameters, such as the size of the overlapping region, the coupling time step, and the quality of ensemble-based sampling of flow velocities, in a Couette flow scenario. In particular, we considered a large setup in terms of domain size and number of time steps, which allowed us to investigate the long-term behavior of the coupling algorithm close to the incompressible regime. While good agreement was mostly reached on short time scales, the long-term behavior differed even between slightly differently parametrized simulations. We demonstrate our findings by measuring the error in velocity and summarize our main observations with a few lessons learned. Full article
(This article belongs to the Section Computational Engineering)
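The Couette flow scenario used in such studies has a classical analytical startup solution that is often used as the reference when measuring velocity errors; a sketch, assuming an impulsively started lower wall and a fixed upper wall (parameters are illustrative, not from the paper):

```python
import math

def couette_velocity(y, t, U=1.0, H=1.0, nu=1.0, n_terms=200):
    """Analytical startup Couette flow between parallel plates.

    The wall at y = 0 impulsively starts moving with speed U at t = 0;
    the wall at y = H stays fixed. Returns the velocity u(y, t) from the
    standard Fourier-series solution of the diffusion equation.
    """
    u = U * (1.0 - y / H)  # steady-state linear profile
    for n in range(1, n_terms + 1):
        # Transient modes decay with rate (n*pi/H)^2 * nu
        u -= (2.0 * U / (n * math.pi)) * math.sin(n * math.pi * y / H) \
             * math.exp(-(n * math.pi / H) ** 2 * nu * t)
    return u
```

Comparing sampled coupled-simulation velocities against this profile at matching times is one way to quantify the short-term versus long-term deviations the abstract describes.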
Article
Unimodal and Multimodal Perception for Forest Management: Review and Dataset
Computation 2021, 9(12), 127; https://0-doi-org.brum.beds.ac.uk/10.3390/computation9120127 - 29 Nov 2021
Abstract
Robotic navigation and perception for forest management are challenging due to the many obstacles that must be detected and avoided and to sharp illumination changes. Advanced perception systems are needed because they enable the development of robotic and machinery solutions that accomplish smarter, more precise, and more sustainable forestry. This article presents a state-of-the-art review of unimodal and multimodal perception in forests, detailing current work on perception using a single type of sensor (unimodal) and on combining data from different kinds of sensors (multimodal). This work also compares existing perception datasets in the literature and presents a new multimodal dataset, composed of images and laser-scanning data, as a contribution to this research field. Lastly, a critical analysis of the collected works is conducted, identifying strengths and research trends in this domain. Full article
(This article belongs to the Special Issue Computation and Analysis of Remote Sensing Imagery and Image Motion)
Review
More on the Supremum Statistic to Test Multivariate Skew-Normality
Computation 2021, 9(12), 126; https://0-doi-org.brum.beds.ac.uk/10.3390/computation9120126 - 29 Nov 2021
Abstract
This review verifies and generalizes the supremum test statistic developed by Balakrishnan et al. for testing multivariate skew-normality. Exhaustive simulation studies are conducted for various dimensions to determine the empirical size of this test statistic. The Monte Carlo simulation studies indicate that the Type-I error of the supremum test can be controlled reasonably well across dimensions at the nominal significance levels of 0.05 and 0.01. Cut-off values are provided for the number of samples required to attain these nominal significance levels. Some new and relevant information about the supremum test statistic is reported here. Full article
(This article belongs to the Special Issue Modern Statistical Methods for Spatial and Multivariate Data)
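The Monte Carlo estimation of a test's empirical size can be sketched generically; the following uses a simple one-sample z-test under the null as a stand-in for the supremum statistic (which is not reproduced here), and all names are ours:

```python
import random
import statistics

def empirical_size(n_samples=30, n_trials=20000, alpha=0.05, seed=1):
    """Monte Carlo estimate of the Type-I error of a two-sided z-test
    for zero mean on N(0, 1) data (known unit variance), at nominal
    significance level alpha. Under the null, the rejection fraction
    should be close to alpha -- the 'empirical size' of the test.
    """
    rng = random.Random(seed)
    z_crit = statistics.NormalDist().inv_cdf(1.0 - alpha / 2.0)
    rejections = 0
    for _ in range(n_trials):
        sample = [rng.gauss(0.0, 1.0) for _ in range(n_samples)]
        mean = sum(sample) / n_samples
        z = mean * n_samples ** 0.5  # standardized test statistic
        if abs(z) > z_crit:
            rejections += 1
    return rejections / n_trials
```

Repeating this for a grid of dimensions and sample sizes, with the supremum statistic in place of the z-test, yields exactly the kind of size tables and sample-size cut-offs the review reports.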