Mathematical and Computational Applications doi: 10.3390/mca26030053

Authors: Mauro Dell’Amico, Matteo Magnani

We consider the distributor’s pallet loading problem, in which a set of different boxes is packed on the smallest number of pallets while satisfying a given set of constraints. In particular, we refer to a real-life environment where each pallet is loaded with a set of layers made of boxes, and both a stability constraint and a compression constraint must be respected. The stability requirement imposes the following: (a) the layer loaded at level k+1 must have a total area (i.e., the sum of the areas of the bottom faces of the boxes in the layer) not exceeding α times the area of the layer at level k (where α≥1), and (b) the difference between the tallest and the shortest box of a layer must not exceed a given threshold. The compression constraint defines the maximum weight that each layer k can sustain; hence, the total weight of the layers loaded above k must not exceed that value. Some stability and compression constraints are considered in other works, but to our knowledge, none are defined as they are faced in a real-life problem. We present a matheuristic approach which works in two phases. In the first, a number of layers are defined using classical 2D bin packing algorithms applied to a smart selection of boxes. In the second phase, the layers are packed on the minimum number of pallets by means of a specialized MILP model solved with Gurobi. Computational experiments on real-life instances are used to assess the effectiveness of the algorithm.
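
The two layer constraints above can be sketched as simple feasibility checks. This is our own illustration, not the paper’s code: the layer representation, function names, and threshold values are assumptions.

```python
# Illustrative checks for the stability and compression constraints.
# A layer is a list of (bottom_area, height) tuples, one per box; layers
# are listed bottom-up, level 0 first.

def stability_ok(layers, alpha=1.0, height_gap=0.05):
    """(a) area of level k+1 must not exceed alpha * area of level k;
    (b) tallest-minus-shortest box within a layer is bounded."""
    for lower, upper in zip(layers, layers[1:]):
        if sum(a for a, _ in upper) > alpha * sum(a for a, _ in lower):
            return False
    for layer in layers:
        heights = [h for _, h in layer]
        if max(heights) - min(heights) > height_gap:
            return False
    return True

def compression_ok(layers, weights, max_load):
    """weights[k]: total weight of layer k; max_load[k]: the maximum
    weight layer k can sustain from the layers stacked above it."""
    for k in range(len(layers)):
        if sum(weights[k + 1:]) > max_load[k]:
            return False
    return True
```

In the matheuristic, checks of this kind would act as feasibility filters when layers produced in phase one are assigned to pallets in phase two.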

Mathematical and Computational Applications doi: 10.3390/mca26030052

Authors: Anthony S. Walker, Kyle E. Niemeyer

The partial differential equations describing compressible fluid flows can be notoriously difficult to resolve on a pragmatic scale and often require the use of high-performance computing systems and/or accelerators. However, these systems face scaling issues such as latency, i.e., the fixed cost of communicating information between devices in the system. The swept rule is a technique designed to minimize these costs by obtaining a solution to unsteady equations at as many spatial locations and times as possible prior to communicating. In this study, we implemented and tested the swept rule for solving two-dimensional problems on heterogeneous computing systems across two distinct systems and three key parameters: problem size, GPU block size, and work distribution. Our solver showed a speedup range of 0.22–2.69 for the heat diffusion equation and 0.52–1.46 for the compressible Euler equations. We conclude that the swept rule offers potential for both speedups and slowdowns, and that care should be taken when designing such a solver to maximize the benefits; these results can inform such design decisions.

Mathematical and Computational Applications doi: 10.3390/mca26030051

Authors: Tamme Claus, Jonas Bünger, Manuel Torrilhon

The spatial resolution of electron probe microanalysis (EPMA), a non-destructive method to determine the chemical composition of materials, is currently restricted to a pixel size larger than the volume of interaction between beam electrons and the material, as a result of limitations on the underlying k-ratio model. Using more sophisticated models to predict k-ratios while solving the inverse problem of reconstruction offers a possibility to increase the spatial resolution. Here, a k-ratio model based on the deterministic M1-model in Boltzmann Continuous Slowing-Down approximation (BCSD) will be utilized to present a reconstruction method for EPMA which is implemented as a PDE-constrained optimization problem. Iterative gradient-based optimization techniques are used in combination with the adjoint state method to calculate the gradient in order to solve the optimization problem efficiently. The accuracy of the spatial resolution still depends on the number and quality of the measured data, but in contrast to conventional reconstruction methods, an overlapping of the interaction volumes of different measurements is permissible without ambiguous solutions. The combination of k-ratios measured with various electron beam configurations is necessary for a high resolution. Attempts to reconstruct materials with synthetic data show challenges that occur with small reconstruction pixels, but also indicate the potential to improve the spatial resolution in EPMA using the presented method.

Mathematical and Computational Applications doi: 10.3390/mca26030050

Authors: Fábio A. O. Fernandes, Mariusz Ptak

Traumatic brain injury (TBI) is one of the leading causes of death and disability [...]

Mathematical and Computational Applications doi: 10.3390/mca26030049

Authors: Rangan Gupta, Christian Pierdzioch

Using data for the group of G7 countries and China for the sample period 1996Q1 to 2020Q4, we study the role of uncertainty and spillovers for the out-of-sample forecasting of the realized variance of gold returns and its upside (good) and downside (bad) counterparts. We go beyond earlier research in that we do not focus exclusively on U.S.-based measures of uncertainty, and in that we account for international spillovers of uncertainty. Our results, based on the Lasso estimator, show that, across the various model configurations that we study, uncertainty has a more systematic effect on out-of-sample forecast accuracy than spillovers. Our results have important implications for investors in terms of, for example, pricing of related derivative securities and the development of portfolio-allocation strategies.

Mathematical and Computational Applications doi: 10.3390/mca26020048

Authors: Pedro N. Oliveira, Elza M. M. Fonseca, Raul D. S. G. Campilho, Paulo A. G. Piloto

Some analytical methods are available for temperature evaluation in solid bodies. These methods can be used due to their simplicity and good results. The main goal of this work is to present the temperature calculation in different cross-sections of structural hot-rolled steel profiles (IPE, HEM, L, and UAP) using the lumped capacitance method and the simplified equation from Eurocode 3. The basis of the lumped capacitance method is that the temperature of the solid body is uniform at any given time instant during a heat transient process. The profiles were studied when subjected to fire action according to the nominal temperature–time curves (the standard temperature–time curve ISO 834, the external fire curve, and the hydrocarbon fire curve). The obtained results allow us to verify the agreement between the two methodologies and the influence of the different nominal fire curves on the temperature field. This finding enables us to conclude that the lumped capacitance method is accurate and can be easily applied.
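
The lumped capacitance method reduces the heat transient to the single ODE ρVc·dT/dt = hA·(T_gas − T). A minimal time-stepping sketch follows; the parameter values and function names are our own assumptions, and a constant gas temperature is used here, whereas the nominal fire curves above would supply a time-dependent T_gas(t).

```python
import math

def lumped_capacitance(T0, T_gas, h, A, rho, V, c, dt, t_end):
    """Explicit-Euler integration of rho*V*c*dT/dt = h*A*(T_gas - T)
    for a uniform-temperature body (the lumped assumption)."""
    T, t = T0, 0.0
    while t < t_end:
        T += dt * h * A * (T_gas - T) / (rho * V * c)
        t += dt
    return T

def exact(T0, T_gas, h, A, rho, V, c, t):
    """Closed-form solution for constant T_gas, for comparison."""
    tau = rho * V * c / (h * A)  # thermal time constant
    return T_gas + (T0 - T_gas) * math.exp(-t / tau)
```

With a small step size the numerical and closed-form temperatures agree closely, mirroring the agreement between methodologies reported above.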

Mathematical and Computational Applications doi: 10.3390/mca26020047

Authors: Julien Eustache, Antony Plait, Frédéric Dubas, Raynal Glises

Compared to conventional vapor-compression refrigeration systems, magnetic refrigeration is a promising alternative technology. The magnetocaloric effect (MCE) is used to produce heat and cold sources through a magnetocaloric material (MCM). The material is submitted to a magnetic field with active magnetic regenerative refrigeration (AMRR) cycles. Initially, this effect was widely used in cryogenic applications to achieve very low temperatures. However, the technology must be improved to replace vapor-compression devices operating around room temperature. Therefore, over the last 30 years, many studies have been carried out to obtain more efficient devices, and modeling is a crucial step in preliminary study and optimization. In this paper, after a broad introduction to MCE research, a state-of-the-art review of multi-physics modeling of the AMRR cycle is presented. To conclude, innovative and advanced modeling solutions for studying magnetocaloric regenerators are suggested.

Mathematical and Computational Applications doi: 10.3390/mca26020046

Authors: Esmeralda López, René F. Domínguez-Cruz, Iván Salgado-Tránsito

Optimization of energy resources is a priority issue for our society. An improper imbalance between demand and power generation can lead to inefficient use of installed capacity, waste of fuels, adverse effects on the environment, and higher costs. This paper presents the preliminary results of a study of seventeen interconnected power generation plants situated in eastern Mexico. The aim of the research is to apply a linear programming model to find the system-optimal solution by minimizing operating costs for this grid of power plants. The calculations were made taking into account the actual parameters of each plant; the demand and production of energy were analyzed in four 6 h periods during a day. The results show the cost-optimal configuration of the current power infrastructure obtained from a simple implementation of the model in MATLAB® software. The contribution of this paper is to adapt a linear programming model to an electrical distribution network formed by different types of power generation technology. The study shows that fossil fuel plants, besides emitting greenhouse gases that affect human health and the environment, incur maintenance expenses even when not operating. The results are a helpful instrument for decision-making regarding the rational use of the available installed capacity.
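
The paper solves a full linear program in MATLAB; as a minimal illustration only, the special case of a single period with nothing but capacity bounds reduces to merit-order dispatch, where plants are loaded in increasing order of marginal cost. The plant data and names below are hypothetical.

```python
def merit_order_dispatch(plants, demand):
    """plants: list of (name, capacity_MW, cost_per_MWh) tuples.
    Greedy merit-order dispatch: for a single period constrained only
    by capacities, this coincides with the LP cost minimum."""
    dispatch, total_cost = {}, 0.0
    for name, cap, cost in sorted(plants, key=lambda p: p[2]):
        q = min(cap, demand)          # load the cheapest plant first
        dispatch[name] = q
        total_cost += q * cost
        demand -= q
        if demand <= 0:
            break
    if demand > 0:
        raise ValueError("demand exceeds installed capacity")
    return dispatch, total_cost
```

The actual study adds the couplings (four daily periods, plant-specific parameters, fixed maintenance costs) that make the general LP formulation necessary.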

Mathematical and Computational Applications doi: 10.3390/mca26020045

Authors: Evangelos Roussos

We show how modern Bayesian Machine Learning tools can be effectively used in order to develop efficient methods for filtering Earth Observation signals. Bayesian statistical methods can be thought of as a generalization of the classical least-squares adjustment methods where both the unknown signals and the parameters are endowed with probability distributions, the priors. Statistical inference under this scheme is the derivation of posterior distributions, that is, distributions of the unknowns after the model has seen the data. Least squares can then be thought of as a special case that uses Gaussian likelihoods, or error statistics. In principle, for most non-trivial models, this framework requires performing integration in high-dimensional spaces. Variational methods are effective tools for approximate inference in Statistical Machine Learning and Computational Statistics. In this paper, after introducing the general variational Bayesian learning method, we apply it to the modelling and implementation of sparse mixtures of Gaussians (SMoG) models, intended to be used as adaptive priors for the efficient representation of sparse signals in applications such as wavelet-type analysis. Wavelet decomposition methods have been very successful in denoising real-world, non-stationary signals that may also contain discontinuities. For this purpose we construct a constrained hierarchical Bayesian model capturing the salient characteristics of such sets of decomposition coefficients. We express our model as a Dirichlet mixture model. We then show how variational ideas can be used to derive efficient methods for bypassing the need for integration: the task of integration becomes one of optimization. 
We apply our SMoG implementation to the problem of denoising of Synthetic Aperture Radar images, inherently affected by speckle noise, and show that it achieves improved performance compared to established methods, both in terms of speckle reduction and image feature preservation.
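
As a minimal illustration of the wavelet-domain shrinkage that such adaptive priors support, the sketch below applies a plain soft-threshold rule to one level of a Haar transform. This is a standard baseline, not the paper’s SMoG prior, and all names are ours.

```python
def haar_step(x):
    """One level of the orthonormal Haar wavelet transform
    (x must have even length)."""
    s = 2 ** -0.5
    approx = [s * (a + b) for a, b in zip(x[::2], x[1::2])]
    detail = [s * (a - b) for a, b in zip(x[::2], x[1::2])]
    return approx, detail

def soft(c, t):
    """Soft thresholding: shrink coefficients towards zero by t."""
    return (abs(c) - t) * (1 if c > 0 else -1) if abs(c) > t else 0.0

def denoise_one_level(x, t):
    """Threshold the detail coefficients, then invert the transform."""
    approx, detail = haar_step(x)
    detail = [soft(c, t) for c in detail]
    s = 2 ** -0.5
    out = []
    for a, d in zip(approx, detail):
        out += [s * (a + d), s * (a - d)]
    return out
```

A mixture prior such as SMoG effectively replaces the fixed threshold with an adaptive, data-driven shrinkage rule learned by variational inference.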

Mathematical and Computational Applications doi: 10.3390/mca26020044

Authors: Eric Chung, Hyea-Hyun Kim, Ming-Fai Lam, Lina Zhao

In this paper, we consider the balancing domain decomposition by constraints (BDDC) algorithm with adaptive coarse spaces for a class of stochastic elliptic problems. The key ingredient in the construction of the coarse space is the solutions of local spectral problems, which depend on the coefficient of the PDE. This poses a significant challenge for stochastic coefficients as it is computationally expensive to solve the local spectral problems for every realization of the coefficient. To tackle this computational burden, we propose a machine learning approach. Our method is based on the use of a deep neural network (DNN) to approximate the relation between the stochastic coefficients and the coarse spaces. For the input of the DNN, we apply the Karhunen–Loève expansion and use the first few dominant terms in the expansion. The output of the DNN is the resulting coarse space, which is then applied with the standard adaptive BDDC algorithm. We will present some numerical results with oscillatory and high contrast coefficients to show the efficiency and robustness of the proposed scheme.

Mathematical and Computational Applications doi: 10.3390/mca26020043

Authors: Constantino Grau Turuelo, Cornelia Breitkopf

The prediction and control of the transformation of void structures during high-temperature processing is a critical area in many engineering applications. In this work, focused on the void shape evolution of silicon, a novel algebraic model for the calculation of final equilibrium structures from initial cylindrical void trenches, driven by surface diffusion, is introduced. This algebraic model provides a simple and fast way to obtain expressions that predict the final geometrical characteristics, based on linear perturbation analysis. The obtained results are similar to most compared literature data, especially to those in which a final transformation is reached. Additionally, the model can be applied to any material affected by surface diffusion. With such a model, the calculation of void structure design points is greatly simplified, not only in the semiconductor field but also in other engineering fields where the surface diffusion phenomenon is studied.

Mathematical and Computational Applications doi: 10.3390/mca26020042

Authors: José A. O. Matos, Paulo B. Vasconcelos

With the fast advances in computational sciences, there is a need for more accurate computations, especially in large-scale solutions of differential problems and long-term simulations. Among the many numerical approaches to solving differential problems, both local and global, spectral methods can offer greater accuracy. The downside is that spectral methods often require high-order polynomial approximations, which brings numerical instability issues to the problem resolution. In particular, the large condition numbers associated with the large operational matrices prevent stable algorithms from working within machine precision. Software-based solutions that implement arbitrary precision arithmetic are available and should be explored to obtain higher accuracy when needed, even at the cost of higher computing time. In this work, experimental results on the computation of approximate solutions of differential problems via spectral methods are detailed with recourse to quadruple precision arithmetic. Variable precision arithmetic was used in Tau Toolbox, a mathematical software package for solving integro-differential problems via the spectral Tau method.
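
The effect of raising the working precision on an ill-conditioned evaluation can be demonstrated with Python’s standard `decimal` module. This is a generic illustration of variable precision arithmetic, not Tau Toolbox code: the expanded form of (x − 1)⁷ suffers catastrophic cancellation in double precision but is evaluated exactly at 50 significant digits.

```python
from decimal import Decimal, getcontext

def horner(coeffs, x):
    """Horner evaluation; works for float or Decimal inputs."""
    acc = x * 0  # zero of the same numeric type as x
    for c in coeffs:
        acc = acc * x + c
    return acc

# (x - 1)^7 in expanded (ill-conditioned) form:
C = [1, -7, 21, -35, 35, -21, 7, -1]

x_f = 1.0001                        # IEEE double precision
approx_f = horner(C, x_f)           # polluted by cancellation

getcontext().prec = 50              # ~50 significant decimal digits
x_d = Decimal("1.0001")
approx_d = horner([Decimal(c) for c in C], x_d)

true_val = Decimal("0.0001") ** 7   # exact value: 1e-28
```

Here every intermediate Decimal product fits within the 50-digit precision, so `approx_d` is exact, while the double-precision result carries rounding noise many orders of magnitude larger than the true value of 1e-28.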

Mathematical and Computational Applications doi: 10.3390/mca26020041

Authors: Mohammad Mehdi Rashidi, Mikhail A. Sheremet, Maryam Sadri, Satyaranjan Mishra, Pradyumna Kumar Pattnaik, Faranak Rabiei, Saeid Abbasbandy, Hussein Sahihi, Esmaeel Erfani

In this research, the analytical methods of the differential transform method (DTM), homotopy analysis method (HAM), optimal homotopy asymptotic method (OHAM), Adomian decomposition method (ADM), variational iteration method (VIM), and reproducing kernel Hilbert space method (RKHSM), and the numerical finite difference method (FDM), are applied to the (analytical–numerical) simulation of 2D viscous flow along expanding/contracting channels with permeable borders. The solutions of the analytical methods are obtained in series form (and the series are convergent), while for the numerical method the solution is obtained using approximation techniques of second-order accuracy. The OHAM and HAM provide an appropriate means of controlling the convergence of the discretization series and adjusting convergence domains, although the obtained series solutions can be large; by contrast, the series solution of the DTM is very compact for the same order of accuracy. It is hard to judge which method is the best, and all of them have their advantages and disadvantages. For instance, applying the DTM to BVPs is difficult, whereas solving BVPs with the HAM, OHAM, and VIM is simple and straightforward. The extracted solutions, in comparison with the computational solutions (a shooting procedure combined with a fourth-order Runge–Kutta scheme, and the finite difference method), demonstrate remarkable accuracy. Finally, CPU time, average error, and residual error for different cases are presented in tables and figures.
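
To make the flavor of such series methods concrete, here is a minimal differential transform method (DTM) sketch for the toy problem y′ = y, y(0) = 1. This is our own illustration, not one of the paper’s channel-flow computations: the DTM recurrence (k+1)·Y[k+1] = Y[k] yields Y[k] = 1/k!, and the solution is recovered as a power series.

```python
import math

def dtm_exponential(x, N=20):
    """DTM for y' = y, y(0) = 1.  The transform of y' is
    (k+1)*Y[k+1], so the ODE gives the recurrence Y[k+1] = Y[k]/(k+1);
    y(x) is then the truncated series sum of Y[k] * x**k."""
    Y = [1.0]                       # Y[0] from the initial condition
    for k in range(N):
        Y.append(Y[k] / (k + 1))    # recurrence from the transformed ODE
    return sum(c * x ** k for k, c in enumerate(Y))
```

With 20 terms the truncated series reproduces e^x to near machine precision on moderate arguments, which reflects the compactness of DTM series noted above.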

Mathematical and Computational Applications doi: 10.3390/mca26020040

Authors: Michael W. Daniels, Daniel Dvorkin, Rani K. Powers, Katerina Kechris

Integrating gene-level data is useful for predicting the role of genes in biological processes. This problem has typically focused on supervised classification, which requires large training sets of positive and negative examples. However, training data sets that are too small for supervised approaches can still provide valuable information. We describe a hierarchical mixture model that uses limited positively labeled gene training data for semi-supervised learning. We focus on the problem of predicting essential genes, where a gene is required for the survival of an organism under particular conditions. We applied cross-validation and found that the inclusion of positively labeled samples in a semi-supervised learning framework with the hierarchical mixture model improves the detection of essential genes compared to unsupervised, supervised, and other semi-supervised approaches. Prediction performance also improved when genes were incorrectly assumed to be non-essential. Our comparisons indicate that the incorporation of even small amounts of existing knowledge improves the accuracy of prediction and decreases variability in predictions. Although we focused on gene essentiality, the hierarchical mixture model and semi-supervised framework are applicable to any problem focused on the prediction of genes or other features, with multiple data types characterizing the feature and a small set of positive labels.

Mathematical and Computational Applications doi: 10.3390/mca26020039

Authors: Juan P. Sánchez-Hernández, Juan Frausto-Solís, Juan J. González-Barbosa, Diego A. Soto-Monterrubio, Fanny G. Maldonado-Nava, Guadalupe Castilla-Valdez

The Protein Folding Problem (PFP) is a major challenge that has remained unsolved for more than fifty years. The problem consists of obtaining the tertiary structure, or Native Structure (NS), of a protein knowing its amino acid sequence. The computational methodologies applied to this problem are classified into two groups, known as Template-Based Modeling (TBM) and ab initio models. In the latter methodology, only information from the primary structure of the target protein is used. In the literature, Hybrid Simulated Annealing (HSA) algorithms are among the best ab initio algorithms for the PFP; Golden Ratio Simulated Annealing (GRSA) is a family of these algorithms designed for peptides. Algorithms designed with TBM, in contrast, use information from the target protein’s primary structure as well as information from similar or analogous proteins. This paper presents the GRSA-SSP methodology, which implements a secondary structure prediction to build an initial model and refines it with HSA algorithms. Additionally, we compare the performance of the GRSAX-SSP algorithms versus the corresponding GRSAX algorithms. Finally, our best algorithm, GRSAX-SSP, is compared with PEP-FOLD3, I-TASSER, QUARK, and Rosetta, showing that it is competitive on small peptides, except when predicting the largest ones.

Mathematical and Computational Applications doi: 10.3390/mca26020038

Authors: Peter Mitic

Selecting a suitable method to solve a black-box optimization problem that uses noisy data was considered. A targeted stop condition for the function to be optimized, implemented as a stochastic algorithm, makes established Bayesian methods inadmissible. A simple modification was proposed and shown to improve the optimization efficiency considerably. The optimization effectiveness was measured in terms of the mean and standard deviation of the number of function evaluations required to achieve the target. Comparisons with alternative methods showed that the modified Bayesian method and binary search were both performant, but in different ways. In a sequence of identical runs, the former had a lower expected value for the number of runs needed to find an optimal value, while the latter had a lower standard deviation for the same sequence of runs. Additionally, we suggested a way to find an approximate solution to the same problem using symbolic computation. Faster results could be obtained at the expense of some impaired accuracy and increased memory requirements.
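
The mean/standard-deviation comparison of evaluation counts can be sketched on a toy problem. The sketch below runs a binary (bisection) search to a target on a deterministic objective and summarizes the evaluation counts over repeated runs; the objective, target distribution, and tolerance are our assumptions, not the paper’s black box.

```python
import random, statistics

def evals_to_target(f, lo, hi, target, tol=1e-6):
    """Bisection on an increasing function f over [lo, hi], stopping
    when f(mid) is within tol of target; returns the number of
    function evaluations used (the paper's cost measure)."""
    n = 0
    while True:
        mid = 0.5 * (lo + hi)
        n += 1
        y = f(mid)
        if abs(y - target) < tol:
            return n
        if y < target:
            lo = mid
        else:
            hi = mid

# Summarize evaluation counts over randomly drawn targets:
random.seed(0)
counts = [evals_to_target(lambda x: x ** 2, 0.0, 1.0, random.random())
          for _ in range(100)]
mean, sd = statistics.mean(counts), statistics.stdev(counts)
```

The same mean/standard-deviation summary, applied to a modified Bayesian optimizer and to binary search, underlies the trade-off reported above: lower expected cost versus lower run-to-run variability.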

Mathematical and Computational Applications doi: 10.3390/mca26020037

Authors: Noah Giansiracusa

The voting patterns of the nine justices on the United States Supreme Court continue to fascinate and perplex observers of the Court. While it is commonly understood that the division of the justices into a liberal branch and a conservative branch inevitably drives many case outcomes, there are finer, less transparent divisions within these two main branches that have proven difficult to extract empirically. This study imports methods from evolutionary biology to help illuminate the intricate and often overlooked branching structure of the justices’ voting behavior. Specifically, phylogenetic tree estimation based on voting disagreement rates is used to extend ideal point estimation to the non-Euclidean setting of hyperbolic metrics. After introducing this framework, comparing it to one- and two-dimensional multidimensional scaling, and arguing that it flexibly captures important higher-dimensional voting behavior, a handful of potential ways to apply this tool are presented. The emphasis throughout is on interpreting these judicial trees and extracting qualitative insights from them.

Mathematical and Computational Applications doi: 10.3390/mca26020036

Authors: Alejandro Estrada-Padilla, Daniela Lopez-Garcia, Claudia Gómez-Santillán, Héctor Joaquín Fraire-Huacuja, Laura Cruz-Reyes, Nelson Rangel-Valdez, María Lucila Morales-Rodríguez

A common issue in the Multi-Objective Portfolio Optimization Problem (MOPOP) is the presence of uncertainty that affects individual decisions, e.g., variations in the resources or benefits of projects. Fuzzy numbers are successful in dealing with imprecise numerical quantities, and they have found numerous applications in optimization. However, so far, they have not been used to tackle uncertainty in MOPOP. Hence, this work proposes to tackle MOPOP’s uncertainty with a new optimization model based on fuzzy trapezoidal parameters. Additionally, it proposes three novel steady-state algorithms as the model’s solution process. One approach integrates the Fuzzy Adaptive Multi-objective Evolutionary (FAME) methodology; the other two apply the Non-dominated Sorting Genetic Algorithm II (NSGA-II) methodology. One steady-state algorithm uses the Spatial Spread Deviation as a density estimator to improve the distribution of the Pareto fronts. The final contribution of this work is the development of a new defuzzification mapping that allows algorithms’ performance to be measured using widely known metrics. The results show a significant difference in performance favoring the proposed steady-state algorithm based on the FAME methodology.
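
For readers unfamiliar with defuzzification, the classical centroid (center-of-gravity) rule for a trapezoidal fuzzy number gives the idea; note that the paper develops its own defuzzification mapping, and this standard rule is shown only as an illustration.

```python
def centroid(a, b, c, d):
    """Centroid defuzzification of a trapezoidal fuzzy number with
    support [a, d] and core [b, c] (membership 1 on [b, c], linear
    ramps on [a, b] and [c, d]):
        x_bar = (d^2 + c^2 + c*d - a^2 - b^2 - a*b) / (3*(d + c - a - b))
    """
    return (d * d + c * c + c * d - a * a - b * b - a * b) / (3.0 * (d + c - a - b))
```

A crisp value of this kind is what allows fuzzy-valued objectives to be scored with the widely known (crisp) multi-objective performance metrics mentioned above.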

Mathematical and Computational Applications doi: 10.3390/mca26020035

Authors: Teodoro Macias-Escobar, Laura Cruz-Reyes, César Medina-Trejo, Claudia Gómez-Santillán, Nelson Rangel-Valdez, Héctor Fraire-Huacuja

The decision-making process can be complex and is often underestimated, and mismanagement can lead to poor results and excessive spending. This situation appears in highly complex multi-criteria problems such as the project portfolio selection (PPS) problem. Therefore, a recommender system becomes crucial to guide the solution search process. To our knowledge, most recommender systems that use argumentation theory are not proposed for multi-criteria optimization problems. In addition, most of the current recommender systems focused on PPS problems do not attempt to justify their recommendations. This work studies the characterization of the cognitive tasks involved in the decision-aiding process to propose a framework for the Decision Aid Interactive Recommender System (DAIRS). The proposed system focuses on a user-system interaction that guides the search towards the best solution considering a decision-maker’s preferences. The developed framework uses argumentation theory supported by argumentation schemes, dialogue games, proof standards, and two state transition diagrams (STDs) to generate and explain its recommendations to the user. This work presents a prototype of DAIRS to evaluate the user experience on multiple real-life case simulations through a usability measurement. The prototype and both STDs received a satisfactory score and broad overall acceptance from the test users.

Mathematical and Computational Applications doi: 10.3390/mca26020034

Authors: Isaac Gibert Martínez, Frederico Afonso, Simão Rodrigues, Fernando Lau

The objective of this work is to study the coupling of two efficient optimization techniques, Aerodynamic Shape Optimization (ASO) and Topology Optimization (TO), in 2D airfoils. To achieve this goal, two open-source codes, SU2 and Calculix, are employed for ASO and TO, respectively, using the Sequential Least SQuares Programming (SLSQP) and the Bi-directional Evolutionary Structural Optimization (BESO) algorithms; the latter is well-known for allowing the addition of material in the TO, which constitutes, to the best of our knowledge, a novelty for this kind of application. These codes are linked by means of a script capable of reading the geometry and pressure distribution obtained from the ASO and defining the boundary conditions to be applied in the TO. The Free-Form Deformation technique is chosen for the definition of the design variables to be used in the ASO, while the densities of the inner elements are defined as design variables of the TO. As a test case, a widely used benchmark transonic airfoil, the RAE2822, is chosen here with an internal geometric constraint to simulate the wing-box of a transonic wing. First, the two optimization procedures are tested separately to gain insight and then are run in a sequential way for two test cases with available experimental data: (i) Mach 0.729 at α=2.31°; and (ii) Mach 0.730 at α=2.79°. In the ASO problem, the lift is fixed and the drag is minimized, while in the TO problem, compliance minimization is set as the objective for a prescribed volume fraction. Improvements in both aerodynamic and structural performance are found, as expected: the ASO reduced the total pressure on the airfoil surface in order to minimize drag, which resulted in lower stress values experienced by the structure.

Mathematical and Computational Applications doi: 10.3390/mca26020033

Authors: Muhammad Usman, Shaaban Abdallah, Mudassar Imran

In this work, the response of a ship rolling in regular beam waves is studied. The model is a one-degree-of-freedom model of nonlinear ship dynamics, consisting of terms for inertia, damping, restoring forces, and external forces. The asymptotic perturbation method is used to study the primary resonance phenomenon. The effects of various parameters on the stability of steady states are studied. It is shown that the variation of the bifurcation parameters affects the bending of the bifurcation curve. The slope stability theorems are also presented.

Mathematical and Computational Applications doi: 10.3390/mca26020032

Authors: Stefan Banholzer, Bennet Gebken, Lena Reichle, Stefan Volkwein

The goal in multiobjective optimization is to determine the so-called Pareto set. Our optimization problem is governed by a parameter-dependent semi-linear elliptic partial differential equation (PDE). To solve it, we use a gradient-based set-oriented numerical method. The numerical solution of the PDE by standard discretization methods usually leads to high computational effort. To overcome this difficulty, reduced-order modeling (ROM) is developed utilizing the reduced basis method. These model simplifications cause inexactness in the gradients. For that reason, an additional descent condition is proposed. Applying a modified subdivision algorithm, numerical experiments illustrate the efficiency of our solution approach.

Mathematical and Computational Applications doi: 10.3390/mca26020031

Authors: Manuel Berkemeier, Sebastian Peitz

We present a local trust region descent algorithm for unconstrained and convexly constrained multiobjective optimization problems. It is targeted at heterogeneous and expensive problems, i.e., problems that have at least one objective function that is computationally expensive. Convergence to a Pareto critical point is proven. The method is derivative-free in the sense that derivative information need not be available for the expensive objectives. Instead, a multiobjective trust region approach is used that works similarly to its well-known scalar counterparts and complements multiobjective line-search algorithms. Local surrogate models constructed from evaluation data of the true objective functions are employed to compute possible descent directions. In contrast to existing multiobjective trust region algorithms, these surrogates are not polynomial but carefully constructed radial basis function networks. This has the important advantage that the number of data points needed per iteration scales linearly with the decision space dimension. The local models qualify as fully linear and the corresponding general scalar framework is adapted for problems with multiple objectives.

Mathematical and Computational Applications doi: 10.3390/mca26020030

Authors: Riccardo Fazio, Alessandra Insana, Alessandra Jannelli

In this paper, we present an implicit finite difference method for the numerical solution of the Black–Scholes model of American put options without dividend payments. We combine the proposed numerical method with a front-fixing approach, in which the option price and the early exercise boundary are computed simultaneously. We study the consistency and prove the stability of the implicit method by fixing the values of the free boundary and of its first derivative. We improve the accuracy of the computed solution via a mesh refinement based on Richardson’s extrapolation. Comparisons with some proposed methods for the American options problem are carried out to validate the obtained numerical results and to show the efficiency of the proposed numerical method. Finally, by using an a posteriori error estimator, we find a suitable computational grid requiring that the computed solution verifies a prefixed error tolerance.
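
Richardson extrapolation combines two step sizes to cancel the leading error term of a discretization. The sketch below applies one Richardson step to a second-order central difference (a generic illustration of the mesh-refinement idea, not the paper’s Black–Scholes solver):

```python
import math

def central(f, x, h):
    """Second-order central difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def richardson(f, x, h):
    """One Richardson step: combine step sizes h and h/2 to cancel the
    leading O(h^2) error term, yielding an O(h^4) estimate:
        D = D(h/2) + (D(h/2) - D(h)) / (2^2 - 1)."""
    d1, d2 = central(f, x, h), central(f, x, h / 2)
    return d2 + (d2 - d1) / 3
```

The same combination, applied to solutions computed on a coarse and a refined mesh, raises the effective order of accuracy of the scheme.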

Mathematical and Computational Applications doi: 10.3390/mca26020029

Authors: Juan Frausto-Solís, Lucía J. Hernández-González, Juan J. González-Barbosa, Juan Paulo Sánchez-Hernández, Edgar Román-Rangel

The COVID-19 disease constitutes a global health contingency. This disease has left millions of people infected, and its spread has dramatically increased. This study proposes a new method based on a Convolutional Neural Network (CNN) and a temporal Component Transformation (CT), called CNN–CT. This method is applied to confirmed cases of COVID-19 in the United States, Mexico, Brazil, and Colombia. The CT changes daily predictions and observations to weekly components and vice versa. In addition, CNN–CT adjusts the predictions made by the CNN using the AutoRegressive Integrated Moving Average (ARIMA) and Exponential Smoothing (ES) methods. This combination of strategies provides better predictions than most of the individual methods by themselves. In this paper, we present the mathematical formulation of this strategy. Our experiments encompass the fine-tuning of the parameters of the algorithms. We compared the best hybrid methods obtained with CNN–CT versus the individual CNN, Long Short-Term Memory (LSTM), ARIMA, and ES methods. Our results show that our hybrid method surpasses the performance of LSTM and consistently achieves competitive results in terms of the MAPE metric, as opposed to the individual CNN and ARIMA methods, whose performance varies largely across scenarios.
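
The daily-to-weekly component transformation can be sketched as aggregation plus profile-based disaggregation. The abstract does not specify the exact mapping, so the functions below are a plausible illustration under our own assumptions (weekly totals, and a fixed within-week profile for the inverse direction), not the paper’s CT.

```python
def daily_to_weekly(daily):
    """Aggregate a day-level series into week-level totals
    (the series length is assumed to be a multiple of 7)."""
    return [sum(daily[i:i + 7]) for i in range(0, len(daily), 7)]

def weekly_to_daily(weekly, profile):
    """Disaggregate weekly totals back to days using a within-week
    profile: 7 nonnegative shares summing to 1 (e.g., to reproduce
    weekday/weekend reporting patterns)."""
    return [w * p for w in weekly for p in profile]
```

A transformation pair of this kind lets weekly-level adjustments (such as those from ARIMA or ES) be mapped back onto the daily predictions produced by the CNN.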

]]>Mathematical and Computational Applications doi: 10.3390/mca26020028

Authors: Mercedes Perez-Villafuerte Laura Cruz-Reyes Nelson Rangel-Valdez Claudia Gomez-Santillan Héctor Fraire-Huacuja

Many real-world optimization problems involving several conflicting objective functions frequently appear in current scenarios, and it is expected that they will remain present in the future. However, approaches combining multi-objective optimization with the incorporation of the decision maker's (DM's) preferences through multi-criteria ordinal classification are still scarce. In addition, preferences are rarely associated with a DM's characteristics; the preference selection is arbitrary. This paper proposes a new hybrid multi-objective optimization algorithm called P-HMCSGA (preference hybrid multi-criteria sorting genetic algorithm) that allows the DM's preferences to be incorporated in the early phases of the optimization process and updated during the search. P-HMCSGA incorporates preferences using a multi-criteria ordinal classification to distinguish good solutions from bad ones; its parameters are determined with a preference disaggregation method. The main feature of P-HMCSGA is the new method proposed to associate preferences with the characterization profile of a DM and its integration with ordinal classification. This increases the selective pressure towards the region of interest most in agreement with the DM's preferences specified in realistic profiles. The method is illustrated by solving real-sized instances of the multi-objective project portfolio problem (PPP). The experimentation aims to answer three questions: (i) To what extent does allowing the DM to express their preferences through a characterization profile impact the quality of the solution obtained in the optimization? (ii) How sensitive is the proposal to different profiles? (iii) How much does the level of robustness of a profile impact the quality of final solutions (this question is related to the level of knowledge that a DM has about his/her preferences)? In conclusion, the proposal fulfills several desirable characteristics of a preference incorporation method concerning these questions.

]]>Mathematical and Computational Applications doi: 10.3390/mca26020027

Authors: Alejandro Castellanos-Alvarez Laura Cruz-Reyes Eduardo Fernandez Nelson Rangel-Valdez Claudia Gómez-Santillán Hector Fraire José Alfredo Brambila-Hernández

Most real-world problems require the optimization of multiple objective functions simultaneously, and these objectives can conflict with each other. The environment of these problems usually involves imprecise information derived from inaccurate measurements or from variability in decision-makers' (DMs') judgments and beliefs, which can lead to unsatisfactory solutions. The imperfect knowledge can be present in the objective functions, in the constraints, or in the decision-maker's preferences. These optimization problems have been solved using various techniques such as multi-objective evolutionary algorithms (MOEAs). This paper proposes a new MOEA called NSGA-III-P (non-dominated sorting genetic algorithm III with preferences). The main characteristic of NSGA-III-P is an ordinal multi-criteria classification method for preference integration that guides the algorithm to the region of interest given by the decision-maker's preferences. In addition, the use of interval analysis allows preferences to be expressed with imprecision. The experiments contrasted several versions of the proposed method with the original NSGA-III to analyze the different selective pressures induced by the DM's preferences. In these experiments, the algorithms solved three-objective instances of the DTLZ problem. The obtained results showed a better approximation to the region of interest for a DM when their preferences are considered.

]]>Mathematical and Computational Applications doi: 10.3390/mca26020026

Authors: Qi-Wen Jin Zheng Liu Shuan-Hai He

Structural reliability and structural robustness, from different research fields, are usually employed for the evaluative analysis of building and civil engineering structures. Structural reliability has been widely used for structural analysis and optimization design, while structural robustness is still in rapid development. Several dimensionless evaluation indexes have been defined for structural robustness so far, such as the structural reliability-based redundancy index. However, these evaluation indexes are usually based on subjective definitions, and they are also difficult to put into engineering practice. The mathematical relational model between structural reliability and structural robustness has not been established yet. This paper is a quantitative study focusing on the mathematical relation between structural reliability and structural robustness so as to further develop the theory of structural robustness. A strain energy evaluation index for structural robustness is first introduced by considering the energy principle. The mathematical relational model of structural reliability and structural robustness is then derived, followed by a further comparative study on sensitivity, structural damage, and the random variation factor. A cantilever beam and a truss beam are presented as two case studies. In this study, a parabolic mathematical model relating structural reliability and structural robustness is established. A significant variation trend for their sensitivities is also observed, and the complex interaction mechanism of the joint effect of structural damage and the random variation factor is reflected. With consideration of the variation trend of the structural reliability index under different degrees of structural damage (mild, moderate, and severe impairment), a three-stage framework for structural life-cycle maintenance management is also proposed. This study can help us gain a better understanding of structural robustness and structural reliability. Some practical references are also provided for the better decision-making of maintenance and management departments.

]]>Mathematical and Computational Applications doi: 10.3390/mca26020025

Authors: Gilberto Gonzalez-Parra David Martínez-Rodríguez Rafael Villanueva-Micó

Several SARS-CoV-2 variants have emerged around the world, and the appearance of further variants depends on many factors. These new variants might have different characteristics that can affect transmissibility and the death rate. The administration of vaccines against the coronavirus disease 2019 (COVID-19) started in early December 2020, and in some countries vaccines will not be widely available for some time. For this article, we studied the impact of a new, more transmissible SARS-CoV-2 strain on prevalence, hospitalizations, and deaths related to the SARS-CoV-2 virus. We studied different transmissibility scenarios in order to provide scientific support for public health policies and bring awareness of potential future situations related to the COVID-19 pandemic. We constructed a compartmental mathematical model based on differential equations to study these different scenarios. In this way, we are able to understand how a new, more infectious strain of the virus can impact the dynamics of the COVID-19 pandemic. We studied several metrics related to the possible outcomes of the COVID-19 pandemic in order to assess the impact of a higher transmissibility of a new SARS-CoV-2 strain on these metrics. We found that, even if the new variant has the same death rate, its higher transmissibility can increase the number of infected people, those hospitalized, and deaths. The simulation results show that health institutions need to focus on increasing non-pharmaceutical interventions and the pace of vaccine inoculation, since a new variant with higher transmissibility, such as VOC-202012/01 of lineage B.1.1.7, may cause more devastating outcomes in the population.
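A compartmental model of the kind described can be sketched with a few lines of forward-Euler integration; the SEIR structure and all parameter values below are illustrative assumptions, not the calibrated model of the paper.

```python
def seir_peak(beta, sigma=1 / 5.2, gamma=1 / 10, n_days=300, dt=0.1):
    # Forward-Euler integration of a basic SEIR model on population
    # fractions; returns the peak fraction of infectious individuals.
    # beta: transmission rate, sigma: incubation rate, gamma: recovery rate.
    S, E, I, R = 0.999, 0.0, 0.001, 0.0
    peak = I
    for _ in range(int(n_days / dt)):
        dS = -beta * S * I
        dE = beta * S * I - sigma * E
        dI = sigma * E - gamma * I
        dR = gamma * I
        S += dS * dt; E += dE * dt; I += dI * dt; R += dR * dt
        peak = max(peak, I)
    return peak

base = seir_peak(beta=0.25)
variant = seir_peak(beta=0.25 * 1.5)   # a 50% more transmissible strain
print(base, variant)  # higher transmissibility -> higher infectious peak
```

Even with identical recovery and death parameters, the larger peak translates directly into more hospitalizations and deaths, which is the qualitative conclusion of the abstract.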

]]>Mathematical and Computational Applications doi: 10.3390/mca26010024

Authors: Sittisak Injan Angkool Wangwongchai Usa Humphries

Climate change in Thailand is related to the El Niño and Southern Oscillation (ENSO) phenomenon, in particular drought and heavy precipitation. The data assimilation method is used to improve the accuracy of the Ensemble Intermediate Coupled Model (EICM) that simulates the sea surface temperature (SST). The four-dimensional variational (4D-Var) and three-dimensional variational (3D-Var) schemes have been used for data assimilation purposes. The simulation was performed by the model with and without data assimilation from satellite data in 2011. The result shows that the model with data assimilation is better than the model without data assimilation. The 4D-Var scheme is the best method, with a Root Mean Square Error (RMSE) of 0.492 and a Correlation Coefficient of 0.684. The relationship between precipitation in Thailand and the ENSO area in Niño 3.4 was consistent for seven months, with a correlation coefficient of −0.882.

]]>Mathematical and Computational Applications doi: 10.3390/mca26010023

Authors: Robin Willems Léo A. J. Friedrich Clemens V. Verhoosel

Axial-Flux Permanent Magnet (AFPM) machines have gained popularity over the past few years due to their compact design. Their application can be found, for example, in the automotive and medical sectors. For typically considered materials, excessive heat can be generated, causing possible irreversible damage to the magnets, bonding, or other structural parts. In order to optimize cooling, knowledge of the flow and the consequent temperature distribution is required. This paper discusses the flow types and heat transfer present inside a typical AFPM machine. An Isogeometric Analysis (IGA) laminar-energy model is developed using the Nutils open-source Python package. The developed analysis tool is used to study the effects of various important design parameters, such as the air-inlet, the gap-length, and the rotation speed on the heat transfer in an AFPM machine. It is observed that the convective heat transfer at the stator core is negatively affected by adding an air-inlet. However, the heat dissipation of the entire stator improves as convective heat transfer occurs within the air-inlet.

]]>Mathematical and Computational Applications doi: 10.3390/mca26010022

Authors: Riccardo Fazio Alessandra Jannelli

This paper deals with a non-standard implicit finite difference scheme that is defined on a quasi-uniform mesh for approximate solutions of the Magneto-Hydro Dynamics (MHD) boundary layer flow of an incompressible fluid past a flat plate for a wide range of the magnetic parameter. The proposed approach allows imposing the given boundary conditions at infinity exactly. We show how to improve the obtained numerical results via a mesh refinement and a Richardson extrapolation. The obtained numerical results are favourably compared with those available in the literature.

]]>Mathematical and Computational Applications doi: 10.3390/mca26010021

Authors: Ahmad Taher Azar Fernando E. Serrano Nashwa Ahmad Kamal

In this paper, a loop shaping controller design methodology for single-input single-output (SISO) systems is proposed. The theoretical background for this approach is based on complex elliptic functions, which allow a flexible design of a SISO controller given that elliptic functions have a double periodicity. The gain and phase margins of the closed-loop system can be selected appropriately with this new loop shaping design procedure. The loop shaping design methodology consists of implementing suitable filters to obtain a desired frequency response of the closed-loop system by selecting appropriate poles and zeros via Abel's theorem, which is fundamental in the theory of elliptic functions. The elliptic function properties are used to facilitate the loop shaping controller design, along with their fundamental background and contributions from complex analysis that are very useful in the automatic control field. Finally, apart from the filter design, a PID controller loop shaping synthesis is proposed, implementing a design procedure similar to that of the first part of this study.

]]>Mathematical and Computational Applications doi: 10.3390/mca26010020

Authors: Francisco-Ronay López-Estrada Guillermo Valencia-Palomo

Control-systems engineering is a multidisciplinary subject that applies automatic-control theory to design systems with desired behaviors in control environments [...]

]]>Mathematical and Computational Applications doi: 10.3390/mca26010019

Authors: Peter Mitic

A model for financial stress testing and stability analysis is presented. Given operational risk loss data within a time window, short-term projections are made using Loess fits to sequences of lognormal parameters. The projections can be scaled by a sequence of risk factors, derived from economic data in response to international regulatory requirements. Historic and projected loss data are combined using a lengthy nonlinear algorithm to calculate a capital reserve for the upcoming year. The model is embedded in a general framework, in which arrays of risk factors can be swapped in and out to assess their effect on the projected losses. Risk factor scaling is varied to assess the resilience and stability of financial institutions to economic shock. Symbolic analysis of projected losses shows that they are well-conditioned with respect to risk factors. Specific reference is made to the effect of the 2020 COVID-19 pandemic. For a 1-year projection, the framework indicates a requirement for an increase in regulatory capital of approximately 3% for mild stress, 8% for moderate stress, and 32% for extreme stress. The proposed framework is significant because it is the first formal methodology to link financial risk with economic factors in an objective way without recourse to correlations.

]]>Mathematical and Computational Applications doi: 10.3390/mca26010018

Authors: Riccardo Fazio

This work is concerned with the existence and uniqueness of solutions of boundary value problems defined on semi-infinite intervals. These kinds of problems seldom admit exactly known solutions, and therefore theoretical information on their well-posedness is essential before attempting to derive an approximate solution by analytical or numerical means. Our main contribution in this context is the definition of a numerical test for investigating the existence and uniqueness of solutions of boundary value problems defined on semi-infinite intervals. The main result is a theorem relating the existence and uniqueness question to the number of real zeros of a function implicitly defined within the formulation of the iterative transformation method. As a consequence, we can investigate the existence and uniqueness of solutions by studying the behaviour of that function. Within this context, the numerical test is illustrated by two examples where we find meaningful numerical results.

]]>Mathematical and Computational Applications doi: 10.3390/mca26010017

Authors: Thomas Daniel Fabien Casenave Nissrine Akkari David Ryckelynck

Classification algorithms have recently found applications in computational physics for the selection of numerical methods or models adapted to the environment and the state of the physical system. For such classification tasks, labeled training data come from numerical simulations and generally correspond to physical fields discretized on a mesh. Three challenging difficulties arise: the lack of training data, their high dimensionality, and the non-applicability of common data augmentation techniques to physics data. This article introduces two algorithms to address these issues: one for dimensionality reduction via feature selection, and one for data augmentation. These algorithms are combined with a wide variety of classifiers for their evaluation. When combined with a stacking ensemble made of six multilayer perceptrons and a ridge logistic regression, they enable reaching an accuracy of 90% on our classification problem for nonlinear structural mechanics.

]]>Mathematical and Computational Applications doi: 10.3390/mca26010016

Authors: Christopher Cullenbine Joseph Rohrer Erin Almand J. Steel Matthew Davis Christopher Carson Steven Hasstedt John Sitko Douglas Wickert

A closed-form equation, the Fizzle Equation, was derived from a mathematical model predicting Severe Acute Respiratory Syndrome Coronavirus-2 (SARS-CoV-2) dynamics, optimized for a 4000-student university cohort. This equation seeks to determine the frequency and percentage of random surveillance testing required to prevent an outbreak, enabling an institution to develop scientifically sound public health policies that bring the effective reproduction number of the virus below one, halting virus progression. Model permutations evaluated the potential spread of the virus based on the level of random surveillance testing, increased viral infectivity, and the implementation of additional safety measures. The model outcomes included the required level of surveillance testing, the number of infected individuals, and the number of quarantined individuals. Using the derived equations, this study illustrates the expected infection load and how testing policy can prevent outbreaks at an institution. Furthermore, the process is iterative, making it possible to develop responsive policies that scale the amount of surveillance testing based on prior testing results, further conserving resources.

]]>Mathematical and Computational Applications doi: 10.3390/mca26010015

Authors: Marie-Sophie Hartig

It is common practice in science and engineering to approximate smooth surfaces and their geometric properties by using triangle meshes with vertices on the surface. Here, we study the approximation of the Gaussian curvature through the Gauss–Bonnet scheme. In this scheme, the Gaussian curvature at a vertex on the surface is approximated by the quotient of the angular defect and the area of the Voronoi region. The Voronoi region is the subset of the mesh that contains all points that are closer to the vertex than to any other vertex. Numerical error analyses suggest that the Gauss–Bonnet scheme always converges with quadratic convergence speed. However, the general validity of this conclusion remains uncertain. We perform an analytical error analysis on the Gauss–Bonnet scheme. Under certain conditions on the mesh, we derive the convergence speed of the Gauss–Bonnet scheme as a function of the maximal distance between the vertices. We show that the conditions are sufficient and necessary for a linear convergence speed. For the special case of locally spherical surfaces, we find a better convergence speed under weaker conditions. Furthermore, our analysis shows that the Gauss–Bonnet scheme, while generally efficient and effective, can give erroneous results in some specific cases.
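The Gauss–Bonnet scheme described above can be sketched on a single vertex of an octahedron. Since the incident faces are equilateral, the Voronoi region covers exactly one-third of each triangle by symmetry, so no separate Voronoi construction is needed in this special case.

```python
import math
import numpy as np

def angle(apex, p, q):
    # Interior angle at `apex` in the triangle (apex, p, q).
    u, v = p - apex, q - apex
    return math.acos(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def tri_area(a, b, c):
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a))

# Octahedron vertex (0, 0, 1) and the ring of its four neighbors.
apex = np.array([0.0, 0.0, 1.0])
ring = [np.array(p, dtype=float) for p in
        [(1, 0, 0), (0, 1, 0), (-1, 0, 0), (0, -1, 0)]]
faces = [(ring[i], ring[(i + 1) % 4]) for i in range(4)]

# Angular defect: 2*pi minus the sum of the incident interior angles.
defect = 2 * math.pi - sum(angle(apex, p, q) for p, q in faces)
# For equilateral faces the Voronoi region is one-third of each triangle.
voronoi_area = sum(tri_area(apex, p, q) for p, q in faces) / 3.0

K = defect / voronoi_area   # Gauss–Bonnet estimate of the Gaussian curvature
print(defect, K)            # defect = 2*pi/3; K = pi/sqrt(3) ~ 1.81
```

The vertices lie on the unit sphere, whose true Gaussian curvature is 1; the coarse estimate of about 1.81 illustrates why the convergence behavior under mesh refinement, analyzed in the paper, matters.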

]]>Mathematical and Computational Applications doi: 10.3390/mca26010014

Authors: Maria Teresa Signes-Pont José Juan Cortés-Plana Higinio Mora-Mora

This paper presents a discrete compartmental Susceptible–Exposed–Infected–Recovered/Dead (SEIR/D) model to address the expansion of COVID-19. The model is based on a grid. As time passes, the status of the cells is updated by means of binary rules following a neighborhood and a delay pattern. This model has already been analyzed in previous works and successfully compared with the corresponding continuous models solved by ordinary differential equations (ODEs), with the intention of finding the homologous parameters between both approaches. Thus, it has been possible to prove that the combination of neighborhood and update rule is responsible for the rate of expansion and recovery/death of the disease. The delays (between Susceptible and Asymptomatic, Asymptomatic and Infected, and Infected and Recovered/Dead) may have a crucial impact on both the height and timing of the peak of Infected and on the Recovery/Death rate. This theoretical model has been successfully tested in the case of the dissemination of information through mobile social networks and in the case of plant pests.
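A minimal sketch of such a binary-rule grid update, assuming a von Neumann neighborhood, periodic boundaries, and a one-step delay for every transition (simpler than the paper's general delay patterns), could look like this:

```python
import numpy as np

# States: 0 = Susceptible, 1 = Exposed, 2 = Infected, 3 = Recovered/Dead.
def step(grid):
    new = grid.copy()
    infected = (grid == 2)
    # A susceptible cell becomes exposed if any 4-neighbor is infected
    # (np.roll makes the boundaries periodic).
    neigh = (np.roll(infected, 1, 0) | np.roll(infected, -1, 0) |
             np.roll(infected, 1, 1) | np.roll(infected, -1, 1))
    new[(grid == 0) & neigh] = 1   # S -> E via the neighborhood rule
    new[grid == 1] = 2             # E -> I after a one-step delay
    new[grid == 2] = 3             # I -> R/D after a one-step delay
    return new

grid = np.zeros((21, 21), dtype=int)
grid[10, 10] = 2                   # single infected seed cell
for _ in range(5):
    grid = step(grid)
print((grid > 0).sum())            # the epidemic front expands from the seed
```

Longer, state-specific delays — the crucial ingredient identified in the abstract — would be modeled by counters per cell rather than by single-step transitions.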

]]>Mathematical and Computational Applications doi: 10.3390/mca26010013

Authors: Luis Gerardo de la Fraga

In this work, the behavior of the differential evolution (DE) algorithm under fixed-point arithmetic is analyzed, together with its behavior using half-precision floating-point (FP16) numbers of 16 bits. We consider it important to analyze DE under these conditions with the goal of reducing its power consumption and the storage size of its variables, and of improving its speed. All these aspects become important if one needs to design dedicated hardware, such as an embedded DE optimizer within a circuit chip, that performs optimization. Under these conditions, DE is tested using three common multimodal benchmark functions in 10 dimensions: Rosenbrock, Rastrigin, and Ackley. Results are obtained in software by simulating all numbers using the C programming language.
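The paper's experiments simulate the reduced-precision numbers in C; as a hedged NumPy sketch of the same idea, a DE/rand/1/bin loop can store its population in FP16 while evaluating fitness in FP32 (the latter is an assumption made here for numerical safety, not a detail taken from the paper).

```python
import numpy as np

F16 = np.float16  # the population and all stored positions live in FP16

def rastrigin(x):
    # Fitness evaluated in FP32 (an assumption of this sketch).
    x = x.astype(np.float32)
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def de_fp16(dim=10, pop_size=40, F=0.5, CR=0.9, gens=200, seed=1):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5.12, 5.12, (pop_size, dim)).astype(F16)
    fit = np.array([rastrigin(p) for p in pop])
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            # DE/rand/1/bin mutation and crossover, results stored in FP16.
            mutant = (a + F16(F) * (b - c)).astype(F16)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True   # ensure at least one component
            trial = np.where(cross, mutant, pop[i]).astype(F16)
            f = rastrigin(trial)
            if f <= fit[i]:                   # greedy selection
                pop[i], fit[i] = trial, f
    return fit.min()

best = de_fp16()
print(best)  # DE still makes progress despite the reduced precision
```

The FP16 granularity limits how finely positions can be resolved near an optimum, which is exactly the trade-off the study examines.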

]]>Mathematical and Computational Applications doi: 10.3390/mca26010012

Authors: Andrea Giunta Gaetano Giunta Domenico Marino Francesco Oliveri

The aim of this work is to simulate market behavior in order to study the evolution of wealth distribution. The numerical simulations are carried out on a simple economic model with a finite number of economic agents, which are able to exchange goods/services and money; the agents interact with each other by means of random exchanges. The model is micro-founded, self-consistent, and predictive. Despite the simplicity of the model, the simulations show a complex and non-trivial behavior. First of all, we are able to recognize two solution classes, namely two phases, separated by a threshold region. The analysis of the wealth distribution of the model agents in the threshold region shows functional forms resembling empirical quantitative studies of the probability distributions of wealth and income in the United Kingdom and the United States. Furthermore, the decile distribution of the population wealth of the simulated model, in the threshold region, overlaps in a suggestive way with the real data of the Italian population wealth in the last few years. Finally, the results of the simulated model allow us to draw important considerations for designing effective policies for economic and human development.

]]>Mathematical and Computational Applications doi: 10.3390/mca26010011

Authors: Fatima Oumellal Abdellah Lamnii

In this paper, constructions of both open and closed trigonometric Hermite interpolation curves using derivative information are presented. The combination of tension, continuity, and bias control provides a very powerful type of interpolation; it is applied to both open and closed Hermite interpolation curves. Surface construction utilizing the studied trigonometric Hermite interpolation is explored, and several examples obtained with the C1 trigonometric Hermite interpolation surface are given to show the usefulness of this method.

]]>Mathematical and Computational Applications doi: 10.3390/mca26010010

Authors: MCA Editorial Office MCA Editorial Office

Peer review is the driving force of journal development, and reviewers are gatekeepers who ensure that MCA maintains its standards for the high quality of its published papers [...]

]]>Mathematical and Computational Applications doi: 10.3390/mca26010009

Authors: Daniele Mortari Roberto Furfaro

This work presents a methodology to derive analytical functionals with embedded linear constraints among the components of a vector (e.g., coordinates) that is a function of a single variable (e.g., time). This work prepares the background necessary for the indirect solution of optimal control problems via the application of the Pontryagin Maximum Principle. The methodology presented is part of the univariate Theory of Functional Connections, which has been developed to solve constrained optimization problems. To increase the clarity and practical usefulness of the proposed method, the work is mostly presented via examples of applications rather than via rigorous mathematical definitions and proofs.

]]>Mathematical and Computational Applications doi: 10.3390/mca26010008

Authors: Juan Frausto-Solis Leonor Hernández-Ramírez Guadalupe Castilla-Valdez Juan J. González-Barbosa Juan P. Sánchez-Hernández

The Job Shop Scheduling Problem (JSSP) has enormous industrial applicability. This problem refers to a set of jobs that should be processed in a specific order using a set of machines. For the single-objective JSSP, Simulated Annealing is among the best algorithms. However, for the Multi-Objective JSSP (MOJSSP), these algorithms have barely been analyzed, and the Threshold Accepting algorithm has not been published for this problem. It is worth mentioning that researchers in this area have not reported studies with more than three objectives, and they typically use no more than two or three metrics to measure performance. In this paper, we present two MOJSSP metaheuristics based on Simulated Annealing: Chaotic Multi-Objective Simulated Annealing (CMOSA) and Chaotic Multi-Objective Threshold Accepting (CMOTA). We developed these algorithms to minimize three objective functions and compared them, using the HV metric, with the recently published algorithms MOMARLA, MOPSO, CMOEA, and SPEA. The best algorithm is CMOSA (HV of 0.76), followed by MOMARLA and CMOTA (both with an HV of 0.68) and MOPSO (with an HV of 0.54). In addition, we show a complexity comparison of these algorithms, showing that CMOSA, CMOTA, and MOMARLA share a similar complexity class, followed by MOPSO.
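Threshold Accepting differs from Simulated Annealing in replacing the probabilistic acceptance test with a deterministic one: a worse neighbor is accepted whenever its deterioration is below the current threshold. A single-objective sketch follows (the paper's CMOTA is multi-objective and chaotic; the cost function, neighborhood, and threshold schedule below are illustrative assumptions).

```python
import random

def threshold_accepting(cost, neighbor, x0, thresholds, iters_per_level=100):
    # Threshold Accepting: accept a neighbor y whenever
    # cost(y) - cost(x) < T, with T lowered level by level.
    x, best = x0, x0
    for T in thresholds:
        for _ in range(iters_per_level):
            y = neighbor(x)
            if cost(y) - cost(x) < T:
                x = y
                if cost(x) < cost(best):
                    best = x
    return best

random.seed(0)
cost = lambda x: (x - 3.0) ** 2                    # toy 1D objective
neighbor = lambda x: x + random.uniform(-0.5, 0.5)
thresholds = [1.0 * 0.9 ** k for k in range(30)]   # geometric threshold schedule
best = threshold_accepting(cost, neighbor, x0=-10.0, thresholds=thresholds)
print(best)  # close to the minimizer x = 3
```

Since no random acceptance draw is needed, the inner loop is cheaper than the Metropolis test of Simulated Annealing, one reason TA is attractive in practice.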

]]>Mathematical and Computational Applications doi: 10.3390/mca26010007

Authors: Simone Balmelli Francesco Moresino

When designing a new product, conjoint analysis is a powerful tool to estimate the perceived value of the prospects. However, it has a drawback: when the product has too many attributes and levels, it may be difficult to administer the survey because respondents will be overwhelmed by too many questions. In this paper, we propose an alternative approach that permits us to bypass this problem. Contrary to conjoint analysis, which estimates respondents' utility functions, our approach directly estimates market shares. This enables us to split the questionnaire among respondents and, therefore, to reduce the burden on each respondent as much as desired. However, this new method has two weaknesses that conjoint analysis does not have: first, inferences on a single respondent cannot be made; second, the competition's product profiles have to be known before administering the survey. Therefore, our method is best used when traditional methods are less easily implementable, i.e., when the number of attributes and levels is large.

]]>Mathematical and Computational Applications doi: 10.3390/mca26010006

Authors: Iman Bahreini Toussi Abdolmajid Mohammadian Reza Kianoush

Base excitation of liquid storage tanks can cause large impact forces on the tank roof, which can lead to structural damage as well as economic and environmental losses. The use of artificial intelligence in solving engineering problems is becoming popular in various research fields, and the Genetic Programming (GP) method has received increasing attention in recent years as a regression tool and as an approach for finding empirical expressions between data. In this study, an OpenFOAM numerical model, validated by the authors in a previous study, is used to simulate various tank sizes with different liquid heights. The tanks are excited in three different orientations with harmonic sinusoidal loadings. The excitation frequencies are chosen to be equal to the tanks' natural frequencies so that the tanks are subject to a resonance condition. The maximum pressure in each case is recorded and made dimensionless; then, using Multi-Gene Genetic Programming (MGGP) methods, a relationship between the dimensionless maximum pressure and the dimensionless liquid height is acquired. Finally, some error measurements are calculated, and the sensitivity and uncertainty of the proposed equation are analyzed.

]]>Mathematical and Computational Applications doi: 10.3390/mca26010005

Authors: Kalyanmoy Deb Proteek Chandan Roy Rayan Hussein

Most practical optimization problems comprise multiple conflicting objectives and constraints that involve time-consuming simulations. Constructing metamodels of the objectives and constraints from a few high-fidelity solutions and then optimizing the metamodels to find in-fill solutions in an iterative manner remains a common metamodeling-based optimization strategy. The authors have previously proposed a taxonomy of 10 different metamodeling frameworks for multiobjective optimization problems, each of which constructs metamodels of objectives and constraints independently or in an aggregated manner. Of the 10 frameworks, five follow a generative approach, in which a single Pareto-optimal solution is found at a time, while the other five find multiple Pareto-optimal solutions simultaneously. Two of the frameworks (M3-2 and M4-2) are detailed here for the first time, involving multimodal optimization methods. In this paper, we also propose an adaptive switching-based metamodeling (ASM) approach that switches among all 10 frameworks in successive epochs using a statistical comparison of the metamodeling accuracy of all 10 frameworks. On 18 problems with three to five objectives, the ASM approach performs better than the individual frameworks alone. Finally, the ASM approach is compared with three other recently proposed multiobjective metamodeling methods, and the superior performance of the ASM approach is observed. With growing interest in metamodeling approaches for multiobjective optimization, this paper evaluates existing strategies and proposes a viable adaptive strategy, portraying the importance of using an ensemble of metamodeling frameworks for more reliable multiobjective optimization under a limited budget of solution evaluations.

]]>Mathematical and Computational Applications doi: 10.3390/mca26010004

Authors: Antonino Amoddeo

A mathematical model describing the interaction of cancer cells with the urokinase plasminogen activation system is represented by a system of partial differential equations, in which cancer cell dynamics accounts for diffusion, chemotaxis, and haptotaxis contributions. The mutual relations between nerve fibers and tumors have recently been investigated, in particular the role of nerves in the development of tumors, as well as neurogenesis induced by cancer cells. Such mechanisms are mediated by neurotransmitters released by neurons as a consequence of electrical stimuli flowing along the nerves, and therefore electric fields can be present inside biological tissues, in particular inside tumors. Considering cancer cells as negatively charged particles immersed in the appropriate biological environment and subjected to an external electric field, the effect of the latter on cancer cell dynamics is still unknown. Here, we implement a mathematical model that accounts for the interaction of cancer cells with the urokinase plasminogen activation system subjected to a uniform applied electric field, simulating the first stage of cancer cell dynamics in a three-dimensional, axially symmetric domain. The obtained numerical results predict that cancer cells can be moved along a preferred direction by an applied electric field, suggesting new and interesting strategies in cancer therapy.

]]>Mathematical and Computational Applications doi: 10.3390/mca26010003

Authors: Benito Chen-Charpentier Clara Garza-Hume María Jorge

Marital relations depend on many factors which can increase the amount of satisfaction or unhappiness in the relation. A large percentage of marriages end up in divorce. While there are many studies about the causes of divorce and how to prevent it, there are very few mathematical models dealing with marital relations. In this paper, we present a continuous model based on the ideas presented by Gottman and coauthors. We show that the type of influence functions that describe the interaction between husband and wife is critical in determining the outcome of a marriage. We also introduce stochasticity into the model to account for the many factors that affect the marriage and that are not easily quantified, such as economic climate, work stress, and family relations. We show that these factors are able to change the equilibrium state of the couple.
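A Gottman-style model of the kind referenced can be sketched as two coupled equations in which each spouse relaxes toward a natural mood while being pushed by the partner's influence function; the tanh influence and all coefficients below are illustrative assumptions, not the paper's calibrated functions.

```python
import math

def simulate(I_hw, I_wh, steps=2000, dt=0.01):
    # Continuous Gottman-style dynamics integrated by forward Euler:
    # each spouse's state relaxes toward a natural mood (a, b) at rate r
    # and is pushed by the partner's influence function.
    h, w = 0.5, -0.5           # initial moods (husband, wife)
    a, b = 1.0, 1.0            # natural (uninfluenced) moods
    r = 0.5                    # relaxation (inertia) rate
    for _ in range(steps):
        dh = r * (a - h) + I_wh(w)   # wife's influence on husband
        dw = r * (b - w) + I_hw(h)   # husband's influence on wife
        h += dh * dt
        w += dw * dt
    return h, w

h, w = simulate(I_hw=math.tanh, I_wh=math.tanh)
print(h, w)  # both states settle near a positive equilibrium
```

As the abstract emphasizes, the shape of the influence functions is decisive: replacing the saturating tanh with, say, a piecewise or negatively biased influence can move the equilibrium, which is the mechanism the paper exploits, and adding a noise term to each step would model the stochastic external factors.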

]]>Mathematical and Computational Applications doi: 10.3390/mca26010002

Authors: Zhuo-Jia Fu Lu-Feng Li De-Shun Yin Li-Li Yuan

In this paper, we introduce a novel localized collocation solver for two-dimensional (2D) phononic crystal analysis. In the proposed collocation solver, the displacement at each node is expressed as a linear combination of T-complete functions in each stencil support, and the sparse linear system is obtained by satisfying the considered governing equation at interior nodes and the boundary conditions at boundary nodes. The efficiency and accuracy of the proposed localized collocation solver are verified on a benchmark example by comparison with finite element method (FEM) results and analytical solutions. Then, the proposed method is applied to 2D phononic crystals with various lattice forms and scatterer shapes, for which the related band structures, transmission spectra, and displacement amplitude distributions are calculated and compared with FEM results.

]]>Mathematical and Computational Applications doi: 10.3390/mca26010001

Authors: Mehmet Ersoy Omar Lakkis Philip Townsend

We propose a one-dimensional Saint-Venant (open-channel) model for overland flows, including a water input–output source term modeling recharge via rainfall and infiltration (or exfiltration). We derive the model via asymptotic reduction from the two-dimensional Navier–Stokes equations under the shallow water assumption, with boundary conditions including recharge via ground infiltration and runoff. This new model recovers existing models as special cases, and adds more scope by adding water-mixing friction terms that depend on the rate of water recharge. We propose a novel entropy function and its flux, which are useful in validating the model's conservation or dissipation properties. Based on this entropy function, we propose a finite volume scheme extending a class of kinetic schemes and provide numerical comparisons with respect to the newly introduced mixing friction coefficient. We also provide a comparison with experimental data.
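As a hedged sketch of the finite-volume viewpoint (a plain Lax–Friedrichs step rather than the kinetic scheme of the paper; the grid, rainfall rate R and all constants are illustrative), one step of the 1D Saint-Venant system with a rainfall recharge term in the mass equation could look like:

```python
# Illustrative sketch (not the authors' kinetic scheme): one Lax-Friedrichs
# finite-volume step for the 1D Saint-Venant equations h_t + (hu)_x = R,
# (hu)_t + (hu^2 + g h^2/2)_x = 0, with a uniform rainfall recharge R.
g = 9.81

def flux(h, q):
    # physical flux of the state (h, q), with q = h*u the discharge
    u = q / h
    return q, q * u + 0.5 * g * h * h

def lf_step(h, q, dx, dt, R):
    n = len(h)
    hn, qn = h[:], q[:]
    for i in range(n):
        l, r = (i - 1) % n, (i + 1) % n      # periodic boundaries
        fl_h, fl_q = flux(h[l], q[l])
        fr_h, fr_q = flux(h[r], q[r])
        # Lax-Friedrichs update + rainfall source in the mass equation only
        hn[i] = 0.5 * (h[l] + h[r]) - 0.5 * dt / dx * (fr_h - fl_h) + dt * R
        qn[i] = 0.5 * (q[l] + q[r]) - 0.5 * dt / dx * (fr_q - fl_q)
    return hn, qn

dx, dt, R = 0.1, 0.001, 1e-3
h = [1.0 + 0.1 * (i == 5) for i in range(10)]   # small bump on flat water
q = [0.0] * 10
h2, q2 = lf_step(h, q, dx, dt, R)
# with periodic boundaries the fluxes telescope, so the total volume
# grows by exactly R*dt per unit length of channel
mass_gain = (sum(h2) - sum(h)) * dx
```

The telescoping of the numerical fluxes makes the exact mass balance a handy sanity check for any such recharge term.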

]]>Mathematical and Computational Applications doi: 10.3390/mca25040080

Authors: Fernanda Beltrán Oliver Cuate Oliver Schütze

Problems where several incommensurable objectives have to be optimized concurrently arise in many engineering and financial applications. Continuation methods for the treatment of such multi-objective optimization problems (MOPs) are very efficient if all objectives are continuous, since in that case one can expect the solution set to form, at least locally, a manifold. Recently, the Pareto Tracer (PT) has been proposed, which is such a multi-objective continuation method. While the method works reliably for MOPs with box and equality constraints, no strategy has yet been proposed to adequately treat general inequalities, which we address in this work. We formulate the extension of the PT and present numerical results on some selected benchmark problems. The results indicate that the new method can indeed handle general MOPs, which greatly enhances its applicability.

]]>Mathematical and Computational Applications doi: 10.3390/mca25040079

Authors: Jismi Mathew Christophe Chesneau

The Lomax distribution is arguably one of the most useful lifetime distributions, which explains the development of its extensions or generalizations through various schemes. The Marshall–Olkin length-biased Lomax distribution is one of these extensions. The associated model has been used successfully in the frameworks of data fitting and reliability tests. However, the theory behind this distribution remains largely undeveloped, and the results obtained on the fit of data were sufficiently encouraging to warrant further exploration, with broader comparisons with existing models. This study contributes in these directions. Our theoretical contributions on the Marshall–Olkin length-biased Lomax distribution include an original compounding property, various stochastic ordering results, equivalences of the main functions at the boundaries, a new quantile analysis, expressions of the incomplete moments in the form of a series expansion, and the determination of the stress–strength parameter in a particular case. Subsequently, we contribute to the applicability of the Marshall–Olkin length-biased Lomax model. When combined with the maximum likelihood approach, the model is very effective. We confirm this claim through a complete simulation study. Then, four selected real-life data sets were analyzed to illustrate the importance and flexibility of the model. In particular, based on well-established standard statistical criteria, we show that it outperforms six strong competitors, including some extended Lomax models, when applied to these data sets. To our knowledge, such comprehensive applied work has never been carried out for this model.
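The Marshall–Olkin scheme itself is compact enough to sketch. Assuming the standard form of the scheme with tilt parameter a > 0, and using the plain Lomax survival function as a stand-in baseline (the paper's baseline is the length-biased Lomax, whose survival function involves incomplete-beta terms):

```python
# The Marshall-Olkin scheme maps a baseline survival function S to
#   S_MO(x) = a*S(x) / (1 - (1-a)*S(x)),  a > 0.
# Hedged sketch: the plain Lomax baseline S(x) = (1 + x/lam)**(-beta) is
# used here in place of the paper's length-biased Lomax baseline.
def lomax_sf(x, lam=1.0, beta=2.0):
    return (1.0 + x / lam) ** (-beta)

def marshall_olkin_sf(x, a, base_sf=lomax_sf):
    s = base_sf(x)
    return a * s / (1.0 - (1.0 - a) * s)

xs = [0.0, 0.5, 1.0, 2.0, 5.0]
vals = [marshall_olkin_sf(x, a=0.5) for x in xs]   # a valid survival function
```

Two properties worth checking: S_MO(0) = 1 for any a, and a = 1 recovers the baseline exactly.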

]]>Mathematical and Computational Applications doi: 10.3390/mca25040078

Authors: Anouk F. G. Pelzer Alef E. Sterk

In this paper, we study a family of dynamical systems with circulant symmetry, which are obtained from the Lorenz-96 model by modifying its nonlinear terms. For each member of this family, the dimension n can be arbitrarily chosen and a forcing parameter F acts as a bifurcation parameter. The primary focus in this paper is on the occurrence of finite cascades of pitchfork bifurcations, where the length of such a cascade depends on the divisibility properties of the dimension n. A particularly intriguing aspect of this phenomenon is that the parameter values F of the pitchfork bifurcations seem to satisfy the Feigenbaum scaling law. Further bifurcations can lead to the coexistence of periodic or chaotic attractors. We also describe scenarios in which the number of coexisting attractors can be reduced through collisions with an equilibrium.
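The Feigenbaum scaling law mentioned here can be checked numerically from a list of bifurcation parameter values. As an illustration (using the classical logistic-map period-doubling values as stand-ins, not the pitchfork cascade values of the paper):

```python
# Feigenbaum scaling: successive bifurcation parameter values F_n satisfy
# (F_n - F_{n-1}) / (F_{n+1} - F_n) -> delta ~ 4.669...
def feigenbaum_ratios(F):
    return [(F[i] - F[i - 1]) / (F[i + 1] - F[i]) for i in range(1, len(F) - 1)]

# first period-doubling parameters of the logistic map (illustrative data)
F_logistic = [3.0, 3.44949, 3.54409, 3.56440, 3.56875]
ratios = feigenbaum_ratios(F_logistic)   # should approach 4.669
```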

]]>Mathematical and Computational Applications doi: 10.3390/mca25040077

Authors: Frédéric Dubas Kamel Boughrara

Electrical machines are used in many electrical engineering applications [...]

]]>Mathematical and Computational Applications doi: 10.3390/mca25040076

Authors: Perla Rubi Castañeda-Aviña Esteban Tlelo-Cuautle Luis Gerardo de la Fraga

The optimization of analog integrated circuits requires taking into account a number of considerations and trade-offs that are specific to each circuit, meaning that each design case may be subject to different constraints to accomplish target specifications. This paper shows the single-objective optimization of a complementary metal-oxide-semiconductor (CMOS) four-stage voltage-controlled oscillator (VCO) to maximize the oscillation frequency. The stages are designed by using CMOS current-mode logic or differential pairs and are connected in a ring structure. The optimization is performed by applying the differential evolution (DE) algorithm, in which the design variables are the control voltage and the transistors' widths and lengths. The objective is to maximize the oscillation frequency under constraints ensuring that the CMOS VCO is robust in Monte Carlo simulations and under process-voltage-temperature (PVT) variations. The optimization results show that DE provides feasible solutions oscillating at 5 GHz with a wide control voltage range that are robust in both Monte Carlo and PVT analyses.
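A minimal sketch of the DE/rand/1/bin scheme referred to here, applied to the sphere function under box constraints (the population size, F, CR and the test function are illustrative; the paper's objective is the VCO oscillation frequency under robustness constraints):

```python
import random

# DE/rand/1/bin sketch: mutate with a scaled difference of two random
# members, binomially cross over with the target, keep the better vector.
def de(fobj, bounds, np_=20, F=0.8, CR=0.9, gens=100, seed=1):
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    fit = [fobj(x) for x in pop]
    for _ in range(gens):
        for i in range(np_):
            a, b, c = rng.sample([j for j in range(np_) if j != i], 3)
            jr = rng.randrange(dim)           # forced crossover index
            trial = [
                min(max(pop[a][j] + F * (pop[b][j] - pop[c][j]),
                        bounds[j][0]), bounds[j][1])
                if (rng.random() < CR or j == jr) else pop[i][j]
                for j in range(dim)
            ]
            ft = fobj(trial)
            if ft <= fit[i]:                  # greedy selection
                pop[i], fit[i] = trial, ft
    best = min(range(np_), key=lambda i: fit[i])
    return pop[best], fit[best]

sphere = lambda x: sum(v * v for v in x)
best_x, best_f = de(sphere, [(-5.0, 5.0)] * 3)
```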

]]>Mathematical and Computational Applications doi: 10.3390/mca25040075

Authors: Nicholas Fantuzzi

Authors of the present Special Issue are gratefully acknowledged for writing papers of very high standard [...]

]]>Mathematical and Computational Applications doi: 10.3390/mca25040074

Authors: Fernando Alcántara-López Carlos Fuentes Fernando Brambila-Paz Jesús López-Estrada

The present work proposes a new model to capture the high heterogeneity of single-phase flow in naturally fractured vuggy reservoirs. The model considers a reservoir with three porous media, namely the fractured system, the vugular system and the matrix, and the case of an infinite reservoir with a fully penetrating wellbore. Furthermore, the model relaxes the classic hypotheses by considering that the matrix permeability has a significant impact on the pressure deficit at the wellbore, leading to a triple-permeability, triple-porosity model in which the wellbore is fed by all the porous media and not exclusively by the fractured system, and where a pseudosteady interporous flow is assumed. In addition, anomalous flow is modeled, for each porous medium independently and for the system as a whole, through a temporal fractional derivative of Caputo type; the resulting phenomenon is studied for fractional orders in (0, 2), covering the superdiffusive and subdiffusive regimes. Synthetic results highlight the effect of anomalous flow throughout the entire transient behavior when the matrix permeability is significant, contrasted with the case of an almost negligible matrix permeability. The model is solved analytically in the Laplace space, incorporating the Tartaglia–Cardano equations.
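For context, the Caputo fractional derivative used for such anomalous-flow terms has a standard definition and Laplace transform; for an order γ with n−1 < γ ≤ n (here γ ∈ (0, 2), so n = 1 or 2), the textbook formulas are:

```latex
% Caputo fractional derivative of order \gamma, with n-1 < \gamma \le n
D^{\gamma} f(t) \;=\; \frac{1}{\Gamma(n-\gamma)} \int_{0}^{t}
  \frac{f^{(n)}(\tau)}{(t-\tau)^{\gamma+1-n}} \, d\tau ,
\qquad
% its Laplace transform, which is what makes the model solvable in
% Laplace space with ordinary initial conditions
\mathcal{L}\{D^{\gamma} f\}(s) \;=\; s^{\gamma} F(s)
  - \sum_{k=0}^{n-1} s^{\gamma-1-k} f^{(k)}(0).
```

The second identity is why the Caputo form (rather than the Riemann–Liouville one) pairs naturally with analytical Laplace-space solutions.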

]]>Mathematical and Computational Applications doi: 10.3390/mca25040073

Authors: Xiatong Cai Abdolmajid Mohammadian Hamidreza Shirkhani

Combining multiple modules into one framework is a key step in modelling a complex system. In this study, rather than focusing on modifying a specific model, we studied the performance of different calculation structures in a multi-objective optimization framework. The Hydraulic and Risk Combined Model (HRCM) combines hydraulic performance and pipe breaking risk in a drainage system to provide optimal rehabilitation strategies. We evaluated different framework structures for the HRCM model. The results showed that the conventional framework structure used in engineering optimization research, which includes (1) constraint functions; (2) objective functions; and (3) multi-objective optimization, is inefficient for the drainage rehabilitation problem. It was shown that the conventional framework can be significantly improved in terms of calculation speed and cost-effectiveness by removing the constraint function and adding more objective functions. The results indicated that the model performance improved remarkably, while the calculation speed did not change substantially. In addition, we found that mixed-integer optimization can decrease the optimization performance compared to using continuous variables and adding a post-processing module at the last stage to remove unsatisfactory results. This study (i) highlights the importance of the framework structure in efficiently solving engineering problems, and (ii) provides a simplified efficient framework for engineering optimization problems.

]]>Mathematical and Computational Applications doi: 10.3390/mca25040072

Authors: José-Yaír Guzmán-Gaspar Efrén Mezura-Montes Saúl Domínguez-Isidro

This study presents an empirical comparison of standard differential evolution (DE) against three random sampling methods for solving robust optimization over time (ROOT) problems with a survival time approach, to analyze its viability and its capacity to solve problems in dynamic environments. A set of instances with four different dynamics, generated by two different configurations of two well-known benchmarks, is solved. This work also introduces a comparison criterion that allows the algorithm to discriminate among solutions with similar survival times to benefit the selection process. The results show that standard DE performs well in finding ROOT solutions, improving on the results reported by state-of-the-art approaches in the studied environments. Finally, it was found that the chaotic dynamic, regardless of the type of peak movement in the search space, is a source of difficulty for the proposed DE algorithm.

]]>Mathematical and Computational Applications doi: 10.3390/mca25040071

Authors: Md. Taksir Hasan Majumder Md. Mahabur Rahman Anindya Iqbal M. Sohel Rahman

Homoglyphs are pairs of visual representations of Unicode characters that look similar to the human eye. Identifying homoglyphs is extremely useful for building a strong defence mechanism against many phishing and spoofing attacks, ID imitation, profanity abuse, etc. Although there is a list of discovered homoglyphs published by the Unicode consortium, the regular expansion of Unicode character scripts necessitates a robust and reliable algorithm that is capable of identifying all possible new homoglyphs. In this article, we first show that shallow Convolutional Neural Networks are capable of identifying homoglyphs. We propose two variations, both of which obtain very high accuracy (99.44%) on our benchmark dataset. We also report that the adoption of transfer learning allows another model to achieve 100% recall on our dataset. We ensemble these three methods to obtain 99.72% accuracy on our independent test dataset. These results illustrate the superiority of our ensembled model in detecting homoglyphs and suggest that our model can be used to detect new homoglyphs as Unicode characters are added. As a by-product, we also prepare a benchmark dataset based on the currently available list of homoglyphs.

]]>Mathematical and Computational Applications doi: 10.3390/mca25040070

Authors: Julien Petitgirard Tony Piguet Philippe Baucour Didier Chamagne Eric Fouillien Jean-Christophe Delmare

The study concerns the thermal design of winding heads of electrical machines in harsh thermal environments. The new approach is suited to all basic shapes and solves the thermal behaviour of a random wire layout. The model uses the nodal method but avoids the homogenization commonly applied to the winding slot, so the impact of the layout on the location of hotspots can be studied precisely. To achieve this, a Delaunay triangulation provides the thermal links between adjoining wires in the slot, and a Voronoï tessellation provides a partition used to estimate the thermal conductances between adjoining wires. The thermal behaviour is simulated on this cell partition and simplified using the thermal-bridge notion to obtain these conductances in a simple manner. Dirichlet boundary conditions are imposed on the slot borders, and the solution procedure with many Dirichlet conditions is described. Results illustrate different possible applications with rectangular and round shapes, one or many boundaries, different boundary values and different layouts. The model can be integrated into a larger model representing the stator to obtain better results.
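At steady state, the nodal method underlying this model reduces to an energy balance Σ_j G_ij (T_j − T_i) = 0 at each free node, with Dirichlet temperatures fixed on the boundary. A hedged sketch on a 3-node chain (the conductances and temperatures are illustrative; the paper obtains G_ij from the Delaunay/Voronoï construction):

```python
# Nodal-method sketch: steady-state balance sum_j G_ij*(T_j - T_i) = 0 at
# each free node, with Dirichlet temperatures imposed at the two ends of a
# 3-node chain (stand-in for the paper's wire network).
def solve_linear(A, b):
    # naive Gauss-Jordan elimination with partial pivoting
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

G = 2.0                        # identical conductance between neighbours [W/K]
T_left, T_right = 100.0, 0.0   # Dirichlet boundary temperatures [degC]
# unknowns T1, T2, T3; moving the known boundary terms to the right-hand side
A = [[-2 * G, G, 0.0], [G, -2 * G, G], [0.0, G, -2 * G]]
b = [-G * T_left, 0.0, -G * T_right]
T = solve_linear(A, b)         # expect the linear profile 75, 50, 25
```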

]]>Mathematical and Computational Applications doi: 10.3390/mca25040069

Authors: Christophe Guyeux

Asynchronous iterations have long been used in distributed computing to produce calculation methods that are potentially faster than serial or parallel approaches, but whose convergence is more difficult to demonstrate. Conversely, over the past decade, the study of the complex dynamics of asynchronous iterations has been initiated and deepened, as well as their use in computer security and bioinformatics. The first of these studies focused on chaotic discrete dynamical systems, and links were established between these dynamics and random or complex behaviours, in the sense of the theories of the same names. Computer security applications have focused on pseudo-random number generation, hash functions, information hiding, and various security aspects of wireless sensor networks. At the bioinformatics level, this study of complex systems has allowed an original approach to understanding the evolution of genomes and protein folding. These various contributions are detailed in this review article, which is an extension of the paper “An update on the topological properties of asynchronous iterations” presented during the Sixth International Conference on Parallel, Distributed, GPU and Cloud Computing (Pareng 2019).
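A minimal sketch of the chaotic (asynchronous) iterations studied in this line of work, using the vectorial negation as iteration function and a strategy sequence that selects which single component to update at each step (illustrative only; practical generators post-process the orbit):

```python
# Chaotic iterations sketch: state x in {0,1}^N, at step n only the
# component chosen by the strategy S is updated, here to its negation:
#   x_{S_n} <- 1 - x_{S_n}.
# The successive states (the orbit) are the raw material of such PRNGs.
def chaotic_iterations(x0, strategy):
    x = list(x0)
    orbit = [tuple(x)]
    for i in strategy:
        x[i] = 1 - x[i]        # negate only the selected component
        orbit.append(tuple(x))
    return orbit

orbit = chaotic_iterations([0, 1, 0, 1], [0, 2, 2, 1, 3])
```

Each state differs from its predecessor in exactly one bit, which is the asynchronous (one-component-at-a-time) character of the iteration.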

]]>Mathematical and Computational Applications doi: 10.3390/mca25040068

Authors: Desmond Adair Martin Jaeger

Free in-plane vibrations of a scimitar-type nonprismatic rotating curved beam, with a variable cross-section and increasing sweep along the leading edge, are calculated using an innovative, efficient and accurate solver called the Adomian modified decomposition method (AMDM). The equation of motion includes the axial force resulting from centrifugal stiffening, and the boundary conditions imposed are those of a cantilever beam, i.e., clamped-free and simple-free. The AMDM allows the governing differential equation to become a recursive algebraic equation suitable for symbolic computation, and, after additional simple mathematical operations, the natural frequencies and corresponding closed-form series solution of the mode shapes are obtained simultaneously. Two main advantages of the application of the AMDM are its fast convergence rate to a solution and its high degree of accuracy. The design shape parameters of the beam, such as transitioning from a straight beam pattern to a curved beam pattern, are investigated. The accuracy of the model is investigated using previously reported investigations and using an innovative error analysis procedure.
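The key step of the AMDM, turning the differential equation into a recursion on series terms, can be illustrated on the toy IVP u' = u, u(0) = 1, where u_{k+1}(t) = ∫₀ᵗ u_k gives u_k = tᵏ/k! and the partial sums converge to eᵗ (a hedged stand-in; the paper applies the modified method to a rotating-beam eigenproblem):

```python
import math

# Adomian-style recursion on the toy problem u' = u, u(0) = 1:
# u_0 = 1 and u_{k+1}(t) = integral_0^t u_k(tau) dtau, so u_k = t^k / k!.
# Polynomials are stored as coefficient lists [c_0, c_1, ...].
def integrate_poly(c):
    # antiderivative with zero constant: c_k t^k -> c_k/(k+1) t^(k+1)
    return [0.0] + [ck / (k + 1) for k, ck in enumerate(c)]

def adm_series(n_terms):
    terms, u = [], [1.0]            # u_0(t) = 1
    for _ in range(n_terms):
        terms.append(u)
        u = integrate_poly(u)       # u_{k+1} = integral of u_k
    return terms

def eval_sum(terms, t):
    return sum(ck * t ** k for c in terms for k, ck in enumerate(c))

approx = eval_sum(adm_series(15), 1.0)   # partial sum approximating e^1
```

The fast convergence visible here (15 terms already give ~12 correct digits of e) is the "fast convergence rate" advantage the abstract refers to.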

]]>Mathematical and Computational Applications doi: 10.3390/mca25040067

Authors: Anna Kirkpatrick Kalen Patton Prasad Tetali Cassie Mitchell

Ribonucleic acid (RNA) secondary structures and branching properties are important for determining functional ramifications in biology. While energy minimization of the Nearest Neighbor Thermodynamic Model (NNTM) is commonly used to identify such properties (number of hairpins, maximum ladder distance, etc.), it is difficult to know whether the resultant values fall within expected dispersion thresholds for a given energy function. The goal of this study was to construct a Markov chain capable of examining the dispersion of RNA secondary structures and branching properties obtained from NNTM energy function minimization independent of a specific nucleotide sequence. Plane trees are studied as a model for RNA secondary structure, with energy assigned to each tree based on the NNTM, and a corresponding Gibbs distribution is defined on the trees. Through a bijection between plane trees and 2-Motzkin paths, a Markov chain converging to the Gibbs distribution is constructed, and fast mixing time is established by estimating the spectral gap of the chain. The spectral gap estimate is obtained through a series of decompositions of the chain and also by building on known mixing time results for other chains on Dyck paths. The resulting algorithm can be used as a tool for exploring the branching structure of RNA, especially for long sequences, and to examine branching structure dependence on energy model parameters. Full exposition is provided for the mathematical techniques used with the expectation that these techniques will prove useful in bioinformatics, computational biology, and additional extended applications.

]]>Mathematical and Computational Applications doi: 10.3390/mca25040066

Authors: Seifu Endris Yimer Poom Kumam Anteneh Getachew Gebrie

In this paper, we consider a bilevel optimization problem as the task of finding the optimum of the upper-level problem subject to the solution set of the split feasibility problem of fixed point problems and optimization problems. Based on proximal and gradient methods, we propose a strongly convergent iterative algorithm with an inertial effect for solving the bilevel optimization problem under consideration. Furthermore, we present a numerical example to illustrate the applicability of our algorithm.
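A hedged one-dimensional sketch of an inertial projected-gradient step of the kind combined in such algorithms (the test problem, step size t and inertia parameter theta are illustrative, not the paper's scheme):

```python
# Inertial projected-gradient sketch for min f(x) = 0.5*(x - 3)^2 over [0, 2]:
#   y_k     = x_k + theta*(x_k - x_{k-1})        (inertial extrapolation)
#   x_{k+1} = proj_[0,2](y_k - t * f'(y_k))      (gradient + projection)
# The constrained minimizer is x* = 2.
def proj(x, lo=0.0, hi=2.0):
    return min(max(x, lo), hi)

def inertial_pg(x0, t=0.5, theta=0.3, iters=50):
    x_prev, x = x0, x0
    for _ in range(iters):
        y = x + theta * (x - x_prev)
        x_prev, x = x, proj(y - t * (y - 3.0))   # f'(y) = y - 3
    return x

x_star = inertial_pg(0.0)
```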

]]>Mathematical and Computational Applications doi: 10.3390/mca25040065

Authors: Jismi Mathew Christophe Chesneau

It is well established that classical one-parameter distributions lack the flexibility to model the characteristics of a complex random phenomenon. This fact motivates clever generalizations of these distributions by applying various mathematical schemes. In this paper, we contribute in extending the one-parameter length-biased Maxwell distribution through the famous Marshall–Olkin scheme. We thus introduce a new two-parameter lifetime distribution called the Marshall–Olkin length-biased Maxwell distribution. We emphasize the pliancy of the main functions, strong stochastic order results and versatile moments measures, including the mean, variance, skewness and kurtosis, offering more possibilities compared to the parental length-biased Maxwell distribution. The statistical characteristics of the new model are discussed on the basis of the maximum likelihood estimation method. Applications to simulated and practical data sets are presented. In particular, for five referenced data sets, we show that the proposed model outperforms five other comparable models, also well known for their fitting skills.

]]>Mathematical and Computational Applications doi: 10.3390/mca25040064

Authors: Lorenzo G. Resca Nicholas A. Mecholsky

Biological mapping of the visual field from the eye retina to the primary visual cortex, also known as occipital area V1, is central to vision and eye movement phenomena and research. That mapping is critically dependent on the existence of cortical magnification factors. Once unfolded, V1 has a convex three-dimensional shape, which can be mathematically modeled as a surface of revolution embedded in three-dimensional Euclidean space. Thus, we solve the problem of differential geometry and geodesy for the mapping of the visual field to V1, involving both isotropic and non-isotropic cortical magnification factors of a most general form. We provide illustrations of our technique and results that apply to V1 surfaces with curve profiles relevant to vision research in general and to visual phenomena such as 'crowding' effects and eye movement guidance in particular. From a mathematical perspective, we also find intriguing and unexpected differential geometry properties of V1 surfaces, discovering that geodesic orbits have alternating prograde and retrograde characteristics, depending on the interplay between local curvature and global topology.
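The prograde/retrograde behaviour of geodesics on a surface of revolution can be understood through Clairaut's relation, a standard differential-geometry fact stated here for context (the paper's analysis additionally involves general cortical magnification factors):

```latex
% Clairaut's relation: along any geodesic on a surface of revolution,
% with r the distance to the symmetry axis and \psi the angle between
% the geodesic and the local meridian,
r \sin\psi \;=\; \text{const}.
% A geodesic therefore turns around at radii where \sin\psi = 1, which
% is what produces alternating prograde and retrograde arcs of an orbit.
```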

]]>Mathematical and Computational Applications doi: 10.3390/mca25040063

Authors: Anthony Overmars Sitalakshmi Venkatraman

The security of RSA relies on the computationally challenging factorization of the RSA modulus N = p1 p2, with N being a large semi-prime consisting of two primes p1 and p2, for the generation of RSA keys in commonly adopted cryptosystems. The property of p1 and p2 both being congruent to 1 mod 4 is used in Euler's factorization method to theoretically factorize them. While this caters to only a quarter of the possible combinations of primes, the rest of the combinations, congruent to 3 mod 4, can be found by extending the method using Gaussian primes. However, based on the Pythagorean primes that are applied in RSA, the semi-prime has only two sums of two squares in the range of possible squares [N/2, N−1]. As N becomes large, the probability of finding the two sums of two squares becomes computationally intractable in the practical world. In this paper, we apply Pythagorean primes to explore how the number of sums of two squares in the search field can be increased, thereby increasing the likelihood that a sum of two squares can be found. Once two such sums of squares are found, even though many may exist, we show that it is sufficient to find only two solutions to factorize the original semi-prime. We present the algorithm, showing the simplicity of steps that use rudimentary arithmetic operations requiring minimal memory, with search cycle time being a factor for very large semi-primes, which can be contained. We demonstrate the correctness of our approach with practical illustrations for breaking RSA keys. Our enhanced factorization method is an improvement on our previous work, with results compared to other factorization algorithms, and continues to be an ongoing area of our research.
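Euler's factorization step mentioned here is easy to state concretely: from two representations N = a² + b² = c² + d², a classical identity yields a nontrivial factor. The brute-force search for the representations below is purely illustrative (the paper's contribution is precisely about enlarging and searching this space efficiently):

```python
from math import gcd, isqrt

# Euler's factorization from two distinct representations
#   N = a^2 + b^2 = c^2 + d^2   (with a - c and d - b both even), using
#   N = ((k/2)^2 + (h/2)^2) * (((a-c)/k)^2 + ((d-b)/k)^2),
# where k = gcd(a - c, d - b) and h = gcd(a + c, d + b).
def two_square_reps(N):
    # naive search for representations N = a^2 + b^2 with a >= b
    reps = []
    for a in range(isqrt(N // 2), isqrt(N) + 1):
        b2 = N - a * a
        b = isqrt(b2)
        if b * b == b2 and b <= a:
            reps.append((a, b))
    return reps

def euler_factor(N):
    reps = two_square_reps(N)
    if len(reps) < 2:
        return None                     # prime, or no second representation
    (a, b), (c, d) = reps[0], reps[1]
    if (a - c) % 2:                     # make a - c (and hence d - b) even
        c, d = d, c
    k = gcd(abs(a - c), abs(d - b))
    h = gcd(a + c, d + b)
    f1 = (k // 2) ** 2 + (h // 2) ** 2
    return f1, N // f1

# 1000009 = 1000^2 + 3^2 = 972^2 + 235^2, and factors as 293 * 3413
factors = euler_factor(1000009)
```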

]]>Mathematical and Computational Applications doi: 10.3390/mca25040062

Authors: Paweł Olejnik

Nonlinear dynamics takes its origins from physics and applied mathematics [...]

]]>Mathematical and Computational Applications doi: 10.3390/mca25030061

Authors: Christophe Bastien Clive Neal-Sturgess Huw Davies Xiang Cheng

In the real world, the severity of traumatic injuries is measured using the Abbreviated Injury Scale (AIS). However, the AIS scale cannot currently be computed by using the output from finite element human computer models, which currently rely on maximum principal strains (MPS) to capture serious and fatal injuries. In order to overcome these limitations, a unique Organ Trauma Model (OTM) able to calculate the threat to the life of a brain model at all AIS levels is introduced. The OTM uses a power method, named Peak Virtual Power (PVP), and defines brain white and grey matter trauma responses as a function of impact location and impact speed. This research has considered ageing in the injury severity computation by including soft tissue material degradation, as well as brain volume changes due to ageing. Further, to account for the limitations of the Lagrangian formulation of the brain model in representing hemorrhage, an approach to include the effects of subdural hematoma is proposed and included as part of the predictions. The OTM model was tested against two real-life falls and has proven to correctly predict the post-mortem outcomes. This paper is a proof of concept, and pending more testing, could support forensic studies.

]]>Mathematical and Computational Applications doi: 10.3390/mca25030060

Authors: Yi Hong

This article explores arbitrage valuation bounds on currency basket options. Instead of using a sophisticated model to price these options, we consider a set of pricing models that are consistent with the prices of available hedging assets. In the absence of arbitrage, we identify valuation bounds on currency basket options without model specifications. Our results extend the work in the literature by seeking tight arbitrage valuation bounds on these options. Specifically, the valuation bounds are enforced by static portfolios that consist of both cross-currency options and individual options denominated in the numeraire currency.

]]>Mathematical and Computational Applications doi: 10.3390/mca25030059

Authors: Conghua Wen Junwei Wei

This article aims to study schemes for forecasting the volatilities of Chinese futures markets and sector stocks. An improved method based on the cyclical two-component model (CTCM) introduced by Harris et al. in 2011 is provided. The performance of the CTCM is compared with benchmark models of the Heterogeneous Autoregressive model of Realized Volatility (HAR-RV) type. The impact of open interest for the futures market is included in the HAR-RV-type models. We employ three different evaluation rules to determine the most efficient models when the results of different evaluation rules are inconsistent. The empirical results show that the CTCM is more accurate than the HAR-RV-type models in both estimation and forecasting. The results also show that the realized range-based tripower volatility (RTV) is the most efficient estimator for both Chinese futures markets and sector stocks.

]]>Mathematical and Computational Applications doi: 10.3390/mca25030058

Authors: Minh Nguyen Mehmet Aktas Esra Akbas

The growth of social media in recent years has contributed to an ever-increasing network of user data in every aspect of life. This volume of generated data is becoming a vital asset for the growth of companies and organizations as a powerful tool to gain insights and make crucial decisions. However, data is not always reliable, primarily because it can be manipulated and disseminated from unreliable sources. In the field of social network analysis, this problem can be tackled by implementing machine learning models that can learn to classify between humans and bots, which are mostly harmful computer programs exploited to shape public opinions and circulate false information on social media. In this paper, we propose a novel topological feature extraction method for bot detection on social networks. We first create weighted ego networks of each user. We then encode the higher-order topological features of ego networks using persistent homology. Finally, we use these extracted features to train a machine learning model and use that model to classify users as bot vs. human. Our experimental results suggest that using the higher-order topological features coming from persistent homology is promising in bot detection and more effective than using classical graph-theoretic structural features.

]]>Mathematical and Computational Applications doi: 10.3390/mca25030057

Authors: Oscar-David Ramírez-Cárdenas Felipe Trujillo-Romero

In this work, the sensorless speed control of a brushless direct current motor utilizing a neural network is presented. The control is performed using a two-layer neural network trained with the backpropagation algorithm. The values provided by a Proportional, Integral, and Derivative (PID) controller for this type of motor are used to train the network. From this PID control, the velocity values and their corresponding control signal (u) are recovered for different load torque values. Five different load torque values were used so as to cover the entire working range of the motor to be controlled. After training, it was observed that the proposed network could handle constant as well as variable load torques. Several tests were carried out at the simulation level, which showed that the neural-network-based control is robust. Finally, it is worth mentioning that this control strategy can be realized without the need for a speed sensor.
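A hedged sketch of the data-generation stage described here: a discrete PID loop around a crude first-order motor model, logging (speed, u) pairs of the kind used to train the network (the gains and plant constants are invented for illustration, not taken from the paper):

```python
# Discrete PID speed loop around a first-order motor model
#   tau * dw/dt = -w + K*u   (tau = 0.1 s, K = 1, both illustrative).
# The (speed, u) samples logged each step are PID "teacher" data of the
# kind used to train a neural speed controller.
def pid_run(setpoint, kp=2.0, ki=1.0, kd=0.05, dt=0.01, steps=2000):
    w, integ, err_prev = 0.0, 0.0, setpoint   # err_prev init avoids D kick
    samples = []
    for _ in range(steps):
        err = setpoint - w
        integ += err * dt
        u = kp * err + ki * integ + kd * (err - err_prev) / dt
        err_prev = err
        w += dt / 0.1 * (-w + u)              # explicit Euler plant update
        samples.append((w, u))
    return samples

samples = pid_run(100.0)          # regulate speed to 100 (arbitrary units)
final_speed = samples[-1][0]
```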

]]>Mathematical and Computational Applications doi: 10.3390/mca25030056

Authors: Ildeberto Santos-Ruiz Francisco-Ronay López-Estrada Vicenç Puig Guillermo Valencia-Palomo

This paper presents a proposal to estimate simultaneously, through nonlinear optimization, the roughness and head loss coefficients in a non-straight pipeline. With the proposed technique, the calculation of friction is optimized by minimizing the fitting error in the Colebrook–White equation for an operating interval of the pipeline from the flow and pressure measurements at the pipe ends. The proposed method has been implemented in MATLAB and validated in a serpentine-shaped experimental pipeline by contrasting the theoretical friction for the estimated coefficients obtained from the Darcy–Weisbach equation for a set of steady-state measurements.
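The Colebrook–White equation is implicit in the friction factor f, but writing x = 1/√f gives a rapidly converging fixed-point iteration; a hedged sketch with illustrative pipe data (not the paper's estimated coefficients):

```python
import math

# Colebrook-White: 1/sqrt(f) = -2 log10( eps/(3.7 D) + 2.51/(Re sqrt(f)) ).
# With x = 1/sqrt(f) this is the fixed point of a strongly contracting map.
def colebrook(eps, D, Re, iters=50):
    x = 8.0                       # x = 1/sqrt(f), rough initial guess
    for _ in range(iters):
        x = -2.0 * math.log10(eps / (3.7 * D) + 2.51 * x / Re)
    return 1.0 / (x * x)          # Darcy friction factor f

def head_loss(f, L, D, v, g=9.81):
    # Darcy-Weisbach head loss [m] over pipe length L
    return f * (L / D) * v * v / (2.0 * g)

# illustrative pipe: eps = 0.1 mm roughness, D = 50 mm, Re = 1e5
f = colebrook(eps=1e-4, D=0.05, Re=1e5)
hL = head_loss(f, L=100.0, D=0.05, v=1.5)
```

The paper inverts this relationship: it tunes the roughness (and minor-loss) coefficients so the equation best fits measured flow/pressure data.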

]]>Mathematical and Computational Applications doi: 10.3390/mca25030055

Authors: Mario Heras-Cervantes Adriana del Carmen Téllez-Anguiano Juan Anzurez-Marín Elisa Espinosa-Juárez

In this paper, as an introduction, the nonlinear model of a distillation column is presented in order to understand the fundamental role that the column heating actuator has in the distillation process dynamics, as well as in the quality and safety of the process. In order to facilitate the implementation of control strategies that keep the heating power regulated in the distillation process, it is necessary to represent the behavior of the heating power actuator adequately; therefore, three different models (switching, nonlinear and fuzzy Takagi–Sugeno) of a DC-DC Buck-Boost power converter, selected to regulate the electric power related to the heating power, are presented and compared. Considering that online measurements of the two main variables of the converter, the inductor current and the capacitor voltage, are not always available, two different fuzzy observers (with and without sliding modes) are developed to allow monitoring of the physical variables in the converter, and their responses are compared to determine which has the better performance. The role of the observers is to estimate the state variables so that they can be used in sensor fault diagnosis through the analytical redundancy concept; likewise, from the estimates of these variables, other non-measurable quantities, such as the caloric power, can be determined. The stability analysis and observer gains are obtained by linear matrix inequalities (LMIs). The observers are validated by MATLAB® simulations to verify their convergence and analyze their response under system disturbances.

]]>Mathematical and Computational Applications doi: 10.3390/mca25030054

Authors: Safeer Hussain Khan Timilehin Opeyemi Alakoya Oluwatosin Temitope Mewomo

In each iteration, projection methods require computing at least one projection onto a closed convex set. However, projections onto a general closed convex set are not easily executed, a fact that might affect the efficiency and applicability of projection methods. To overcome this drawback, we propose two iterative methods with self-adaptive step sizes that combine the Halpern method with a relaxed projection method for approximating a common solution of variational inequality and fixed point problems for an infinite family of multivalued relatively nonexpansive mappings in the setting of Banach spaces. The core of our algorithms is to replace every projection onto the closed convex set with a projection onto some half-space, which guarantees the easy implementation of our proposed methods. Moreover, the step size of each algorithm is self-adaptive. We prove strong convergence theorems without knowledge of the Lipschitz constant of the monotone operator, and we apply our results to finding a common solution of constrained convex minimization and fixed point problems in Banach spaces. Finally, we present some numerical examples in order to demonstrate the efficiency of our algorithms in comparison with some recent iterative methods.
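
The appeal of the relaxation is that a projection onto a half-space, unlike one onto a general closed convex set, has a closed form. A minimal sketch (the notation and example values are ours, not from the paper):

```python
def project_halfspace(x, a, b):
    """Project x onto the half-space {y : <a, y> <= b}.
    Closed form: x - max(0, (<a,x> - b) / ||a||^2) * a."""
    dot = sum(ai * xi for ai, xi in zip(a, x))
    norm2 = sum(ai * ai for ai in a)
    t = max(0.0, (dot - b) / norm2)
    return [xi - t * ai for xi, ai in zip(x, a)]

# a point already inside the half-space is left unchanged
print(project_halfspace([0.0, 0.0], [1.0, 1.0], 1.0))  # -> [0.0, 0.0]
# an outside point is moved onto the bounding hyperplane
print(project_halfspace([2.0, 2.0], [1.0, 1.0], 2.0))  # -> [1.0, 1.0]
```

In the relaxed methods, `a` and `b` come from a subgradient inequality describing the original convex set at the current iterate.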

]]>Mathematical and Computational Applications doi: 10.3390/mca25030053

Authors: Penglei Gao Rui Zhang Xi Yang

Stock index price prediction is prevalent in both academic and economic fields. The index price is hard to forecast due to its uncertain noise. With the development of computer science, neural networks have been applied in various industrial fields. In this paper, we introduce four different machine learning methods: three typical models, namely the Multilayer Perceptron (MLP), Long Short-Term Memory (LSTM) and Convolutional Neural Network (CNN), and one attention-based neural network. The main task is to predict the next day&rsquo;s index price from the historical data. The dataset consists of the SP500, CSI300 and Nikkei225 indices from three different financial markets, representing the most developed, a less developed and a developing market, respectively. Seven variables are chosen as inputs, comprising daily trading data, technical indicators and macroeconomic variables. The results show that the attention-based model has the best performance among the alternative models. Furthermore, all the introduced models achieve better accuracy in the developed financial market than in the developing ones.
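
Whatever model is chosen, next-day prediction reduces to supervised learning on sliding windows of the historical series. A small sketch of that data preparation step (the lookback length and prices are illustrative, not from the paper):

```python
def make_windows(series, lookback):
    """Build (features, target) pairs: the previous `lookback` values
    predict the next day's value."""
    X, y = [], []
    for t in range(lookback, len(series)):
        X.append(series[t - lookback:t])
        y.append(series[t])
    return X, y

prices = [10.0, 10.5, 10.2, 10.8, 11.0, 10.9]
X, y = make_windows(prices, lookback=3)
print(X[0], y[0])  # -> [10.0, 10.5, 10.2] 10.8
```

In the paper each window row would hold the seven input variables per day rather than a single price.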

]]>Mathematical and Computational Applications doi: 10.3390/mca25030052

Authors: Naveed Iqbal Humaira Yasmin Bawfeh K. Kometa Adel A. Attiya

This article deals with Sisko fluid flow exhibiting the peristaltic mechanism in an asymmetric channel with a sinusoidal wave propagating down its walls. The channel walls satisfy the convective conditions in the heat transfer process. The flow and heat transfer equations are modeled and non-dimensionalized. The analysis has been carried out subject to low Reynolds number and long wavelength considerations. An analytical solution is obtained using the regular perturbation method, taking the Sisko fluid parameter as the perturbation parameter. The shear-thickening and shear-thinning properties of the Sisko fluid in the present nonlinear analysis are examined. A comparison is provided between the Sisko fluid outcomes and those of viscous fluids. Velocity and temperature distributions, the pressure gradient and the streamline pattern are addressed with respect to different parameters of interest. Trapping and pumping processes have also been studied. The thermal analysis indicates that an increase in the non-Newtonian parameter, the Biot numbers or the Brinkman number increases the thermal stability of the liquid.

]]>Mathematical and Computational Applications doi: 10.3390/mca25030051

Authors: Jesus R. Pulido-Luna Jorge A. López-Rentería Nohe R. Cazarez-Castro

In this work, a generalization of a synchronization methodology applied to a pair of chaotic systems with heterogeneous dynamics is given. The proposed control law is designed using error state feedback and Lyapunov theory to guarantee asymptotic stability. The control law is used to synchronize two systems with different numbers of scrolls in their dynamics, defined by different numbers of pieces. The proposed control law is implemented on an FPGA in order to test the performance of the synchronization schemes.
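
To illustrate the error state feedback idea, the sketch below synchronizes two Lorenz copies (an identical-system toy case, whereas the paper treats heterogeneous multi-scroll systems) by driving the slave with u = -k(slave - master); the gain, step size, and initial conditions are hypothetical:

```python
def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

def synchronize(k=100.0, dt=0.0005, steps=40000):
    """Master-slave synchronization via error state feedback:
    the slave dynamics receive the control u = -k * (slave - master)."""
    m, s = [1.0, 1.0, 1.0], [5.0, -4.0, 10.0]
    for _ in range(steps):
        dm = lorenz(m)
        ds = [d - k * (si - mi) for d, si, mi in zip(lorenz(s), s, m)]
        m = [mi + dt * di for mi, di in zip(m, dm)]  # explicit Euler step
        s = [si + dt * di for si, di in zip(s, ds)]
    return max(abs(si - mi) for si, mi in zip(s, m))

print(synchronize() < 1e-3)  # the synchronization error has decayed
```

The gain k dominates the local expansion of the chaotic flow, so the error dynamics contract; the paper's Lyapunov analysis makes this rigorous for the heterogeneous case.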

]]>Mathematical and Computational Applications doi: 10.3390/mca25030050

Authors: Ana Arnal Fernando Casas Cristina Chiralt

We propose a unified approach for different exponential perturbation techniques used in the treatment of time-dependent quantum mechanical problems, namely the Magnus expansion, the Floquet&ndash;Magnus expansion for periodic systems, the quantum averaging technique, and the Lie&ndash;Deprit perturbative algorithms. Even the standard perturbation theory fits in this framework. The approach is based on carrying out an appropriate change of coordinates (or picture) in each case, and it can be formulated for any time-dependent linear system of ordinary differential equations. All of the procedures (except the standard perturbation theory) lead to approximate solutions preserving by construction unitarity when applied to the time-dependent Schr&ouml;dinger equation.

]]>Mathematical and Computational Applications doi: 10.3390/mca25030049

Authors: Silvia Licciardi Rosa Maria Pidatella Marcello Artioli Giuseppe Dattoli

In this paper, we show that the use of methods of an operational nature, such as umbral calculus, allows us to achieve a double target: on one side, the study of the Voigt function, which plays a pivotal role in spectroscopic studies and in other applications, from a new point of view, and on the other, the introduction of a Voigt transform and its possible uses. Furthermore, by the same method, we point out that the Hermite and Laguerre functions, extensions of the corresponding polynomials to negative and/or real indices, can be defined in a straightforward and unified fashion. It is illustrated how the techniques we suggest provide an easy derivation of the relevant properties, along with generalizations to higher-order functions.

]]>Mathematical and Computational Applications doi: 10.3390/mca25030048

Authors: Francisco-Ronay López-Estrada Oscar Santos-Estudillo Guillermo Valencia-Palomo Samuel Gómez-Peñate Carlos Hernández-Gutiérrez

The main aim of this paper is to propose a robust fault-tolerant control for a three degree of freedom (DOF) mechanical crane by using a convex quasi-Linear Parameter Varying (qLPV) approach for modeling the crane and a passive fault-tolerant scheme. The control objective is to minimize the load oscillations while the desired path is tracked. The convex qLPV model is obtained by considering the nonlinear sector approach, which can represent the nonlinear system exactly under bounded nonlinear terms. To improve the system safety, tolerance to partial actuator faults is considered. Performance requirements of the tracking control system are specified by an H&infin; criterion that guarantees robustness against measurement noise and partial faults. As a result, a set of Linear Matrix Inequalities is derived to compute the controller gains. Numerical experiments on a realistic 3 DOF crane model confirm the applicability of the control scheme.

]]>Mathematical and Computational Applications doi: 10.3390/mca25030047

Authors: Guash Haile Taddele Poom Kumam Anteneh Getachew Gebrie Kanokwan Sitthithakerngkiet

In this paper, we study an iterative method for solving the multiple-set split feasibility problem: find a point in the intersection of a finite family of closed convex sets in one space such that its image under a linear transformation belongs to the intersection of another finite family of closed convex sets in the image space. We obtain a strongly convergent algorithm by relaxing the closed convex sets to half-spaces, using the projections onto those half-spaces, and introducing an extended form of the step-size selection used in the relaxed CQ algorithm for solving the split feasibility problem. We also give several numerical examples illustrating the efficiency and implementation of our algorithm in comparison with existing algorithms in the literature.
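
A scalar illustration of the underlying CQ iteration for the split feasibility problem, with intervals playing the role of the (already relaxed) sets; the operator, sets, and step size are our own toy choices, not from the paper:

```python
def proj_interval(x, lo, hi):
    """Projection onto a closed interval (a 1-D closed convex set)."""
    return min(max(x, lo), hi)

def cq_iteration(x, A, C, Q, tau, steps):
    """Basic CQ iteration for the split feasibility problem:
    find x in C with A*x in Q (scalar illustration)."""
    for _ in range(steps):
        Ax = A * x
        # gradient of 0.5 * ||Ax - P_Q(Ax)||^2
        grad = A * (Ax - proj_interval(Ax, *Q))
        x = proj_interval(x - tau * grad, *C)
    return x

# find x in [0, 1] with 2.5 * x in [2, 3]; solutions are x in [0.8, 1]
x = cq_iteration(0.0, A=2.5, C=(0.0, 1.0), Q=(2.0, 3.0), tau=0.1, steps=200)
print(round(x, 6))  # -> 0.8
```

The paper's algorithm handles finitely many sets per space, self-adaptive step sizes, and a Halpern-type anchor to force strong convergence; the sketch shows only the projected-gradient core.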

]]>Mathematical and Computational Applications doi: 10.3390/mca25030046

Authors: Mario Kovač Philippe Notton Daniel Hofman Josip Knezović

In this paper, we present an overview of the European Processor Initiative (EPI), one of the cornerstones of the EuroHPC Joint Undertaking, a new European Union strategic entity focused on pooling the Union&rsquo;s and national resources on HPC to acquire, build and deploy the most powerful supercomputers in the world within Europe. EPI started its activities in December 2018. The first three years drew processor and platform designers, embedded software, middleware, applications and usage experts from 10 EU countries together to co-design Europe&rsquo;s first HPC Systems on Chip and accelerators with its unique Common Platform (CP) technology. One of EPI&rsquo;s core activities also takes place in the automotive sector, providing architectural solutions for a novel embedded high-performance computing (eHPC) platform and ensuring the overall economic viability of the initiative.

]]>Mathematical and Computational Applications doi: 10.3390/mca25030045

Authors: José Luis Hernández-Caceres René Iván González-Fernández Marlis Ontivero-Ortega Guido Nolte

Nonlinear frequency coupling is assessed with bispectral measures, such as bicoherence. In this study, BisQ, a new bicoherence-derived index, is proposed for assessing nonlinear processes in cardiac regulation. To find BisQ, 110 ten-minute ECG traces obtained from 55 participants were initially studied. Via bispectral analysis, a bicoherence matrix (BC) was obtained from each trace (0.06 to 1.8 Hz with a resolution of 0.01 Hz). Each frequency pair in BC was tested for correlation with the HRV recurrence quantification analysis (RQA) index Lmean, obtained from tachograms of the same ECG trace. BisQ is the result of adding the BC values corresponding to the three frequency pairs exhibiting the highest correlation with Lmean. BisQ values were estimated for different groups of subjects: healthy persons, persons with arrhythmia, persons with epilepsy, and preterm neonates. ECG traces from persons with arrhythmia showed no significant differences in BisQ values with respect to healthy persons, while persons with epilepsy and neonates showed higher BisQ values (p &lt; 0.05; Mann-Whitney U-test). BisQ reflects nonlinear interactions at the level of the sinus and atrioventricular nodes, and most likely cardiorespiratory coupling as well. We expect that BisQ will allow for further exploration of cardiac nonlinear dynamics, complementing available HRV indices.

]]>Mathematical and Computational Applications doi: 10.3390/mca25030044

Authors: Abraham Efraim Rodriguez-Mata Yaneth Bustos-Terrones Victor Gonzalez-Huitrón Pablo Antonio Lopéz-Peréz Omar Hernández-González Leonel Ernesto Amabilis-Sosa

The deterioration of current environmental water sources has led to the need to find ways to monitor water quality conditions. In this paper, we propose the use of Streeter&ndash;Phelps contaminant distribution models and state estimation techniques (observers) to estimate variables that are very difficult to measure in rivers with online sensors, such as the Biochemical Oxygen Demand (BOD). We propose the design of a novel Fractional Order High Gain Observer (FOHO), using Lyapunov convergence functions to demonstrate its stability, and compare it to the classical extended Luenberger observer from the literature in order to study the convergence of BOD estimation in rivers. The proposed methodology was used to estimate Dissolved Oxygen (DO) and BOD in the Culiacan River, Sinaloa, Mexico. Our numerical studies show that the use of fractional order in high-gain observers markedly improves BOD estimation performance. The theoretical results show that robust observer design can help solve problems in estimating complex variables.
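
As a classical (integer-order) stand-in for the proposed FOHO, the sketch below runs a high-gain Luenberger-type observer on a two-state Streeter&ndash;Phelps-like system in which only the first state is measured and the BOD-like state is reconstructed; all rates, gains, and initial conditions are hypothetical:

```python
def simulate_observer(theta, steps=4000, dt=0.01):
    """Classical high-gain observer on a two-state Streeter-Phelps-like
    system: x1 (oxygen deficit) is measured, x2 (BOD) is reconstructed.
    Observer gains grow with the high-gain parameter theta."""
    kd, ka = 0.3, 0.5                  # hypothetical decay/reaeration rates
    x1, x2 = 2.0, 8.0                  # true (unknown) states
    z1, z2 = 0.0, 0.0                  # observer states
    l1, l2 = 2.0 * theta, theta ** 2   # high-gain observer gains
    for _ in range(steps):
        dx1, dx2 = kd * x2 - ka * x1, -kd * x2   # plant dynamics
        e = x1 - z1                              # output injection error
        dz1 = kd * z2 - ka * z1 + l1 * e
        dz2 = -kd * z2 + l2 * e
        x1, x2 = x1 + dt * dx1, x2 + dt * dx2    # explicit Euler steps
        z1, z2 = z1 + dt * dz1, z2 + dt * dz2
    return abs(x2 - z2)                          # BOD estimation error

print(simulate_observer(theta=5.0) < 1e-2)  # -> True
```

The fractional-order version in the paper replaces the integer-order observer dynamics with fractional derivatives, which the authors show improves the estimation performance.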

]]>Mathematical and Computational Applications doi: 10.3390/mca25030043

Authors: Meirav Amram Etan Fisher Shai Gul Uzi Vishne

The goal of this research is to maximize chord-based composition possibilities given a relatively small amount of information. A transformational approach, based on group theory, was chosen, focusing on chord intervals as the components of a modified Markov process. The Markov process was modified to balance between average harmony, representing familiarity, and entropy, representing novelty. Uniform triadic transformations are suggested as a further extension of the transformational approach, improving the quality of the tonality. The composition algorithms are demonstrated on a short chord progression as well as on a larger database of albums by the Beatles. The results demonstrate the capabilities and limitations of the algorithms.
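
The Markov backbone of such a composer can be sketched as weighted sampling from a chord transition table; the vocabulary and probabilities below are illustrative, not taken from the paper or the Beatles corpus:

```python
import random

def next_chord(chord, transitions, rng):
    """Sample the next chord from a transition table; higher-probability
    moves favor familiarity, the rest inject novelty."""
    choices, weights = zip(*transitions[chord].items())
    return rng.choices(choices, weights=weights)[0]

# hypothetical transition probabilities over a toy chord vocabulary
transitions = {
    "C":  {"F": 0.4, "G": 0.4, "Am": 0.2},
    "F":  {"C": 0.5, "G": 0.5},
    "G":  {"C": 0.7, "Am": 0.3},
    "Am": {"F": 0.6, "G": 0.4},
}
rng = random.Random(42)  # seeded for reproducibility
progression = ["C"]
for _ in range(7):
    progression.append(next_chord(progression[-1], transitions, rng))
print(progression)  # an 8-chord progression starting from C
```

The paper modifies these weights to trade off average harmony against entropy and lifts the state space to chord intervals under uniform triadic transformations.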

]]>Mathematical and Computational Applications doi: 10.3390/mca25030042

Authors: Yasushi Ota Naoki Mizutani

In this study, building on our previous work, in which the proposed model was derived from the SIR model and E. M. Rogers&rsquo;s Diffusion of Innovation Theory and includes the aspects of contact and time delay, we examined the mathematical properties of our proposed model, especially the stability of its equilibria. Using these stability results, we took actual data representing transient and resurgent booms and conducted parameter estimation for our model using Bayesian inference. In addition, we fitted the model to five actual datasets. We thereby reconfirmed that the resurgences and minute oscillations of the actual data can be expressed by our proposed model.

]]>Mathematical and Computational Applications doi: 10.3390/mca25030041

Authors: Antonio Calcagnì Massimiliano Pastore Gianmarco Altoé

Recent technological advances have provided new settings to enhance individual-based data collection, and computerized-tracking data have become common in much behavioral and social research. Collected with instantaneous tracking devices such as computer mice, Wii controllers, and joysticks, such data provide new insights for analysing the dynamic unfolding of the response process. ssMousetrack is an R package for modeling and analysing computerized-tracking data by means of a Bayesian state-space approach. The package provides a set of functions to prepare data, fit the model, and assess results via simple diagnostic checks. This paper describes the package and illustrates how it can be used to model and analyse computerized-tracking data. A case study is also included to show the use of the package in empirical research.

]]>Mathematical and Computational Applications doi: 10.3390/mca25030040

Authors: Daniel Jodlbauer Ulrich Langer Thomas Wick

Phase-field fracture models lead to variational problems that can be written as a coupled variational equality and inequality system. Numerically, such problems can be treated with Galerkin finite elements and primal-dual active set methods. Specifically, low-order and high-order finite elements may be employed, where, for the latter, only a few studies exist to date. The most time-consuming part of the discrete version of the primal-dual active set (semi-smooth Newton) algorithm is solving the changing linear systems arising at each semi-smooth Newton step. We propose a new parallel matrix-free monolithic multigrid preconditioner for these systems. We provide two numerical tests and discuss the performance of the proposed parallel solver. Furthermore, we compare our new preconditioner with a block-AMG preconditioner available in the literature.

]]>Mathematical and Computational Applications doi: 10.3390/mca25030039

Authors: David Martínez-Galicia Alejandro Guerra-Hernández Nicandro Cruz-Ramírez Xavier Limón Francisco Grimaldo

Windowing is a sub-sampling method, originally proposed to cope with large datasets when inducing decision trees with the ID3 and C4.5 algorithms. The method exhibits a strong negative correlation between the accuracy of the learned models and the number of examples used to induce them, i.e., the higher the accuracy of the obtained model, the fewer the examples used to induce it. This paper contributes to a better understanding of this behavior in order to promote windowing as a sub-sampling method for Distributed Data Mining. For this, the generalization of the behavior of windowing beyond decision trees is established by corroborating the observed negative correlation when adopting inductive algorithms of a different nature. Then, focusing on decision trees, the windows (samples) and the obtained models are analyzed in terms of Minimum Description Length (MDL), Area Under the ROC Curve (AUC), Kullback&ndash;Leibler divergence, and the similitude metric Sim1, and compared to those obtained when using traditional methods: random, balanced, and stratified samplings. It is shown that the aggressive sampling performed by windowing, up to 3% of the original dataset, induces models that are significantly more accurate than those obtained from the traditional sampling methods, among which only the balanced sampling is comparable in terms of AUC. Although the considered informational properties did not correlate with the obtained accuracy, they provide clues about the behavior of windowing and suggest further experiments to enhance such understanding and the performance of the method, e.g., studying the evolution of the windows over time.
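
The windowing loop itself is simple: train on a small window, add the misclassified examples, and repeat until the model is consistent with the full dataset. The sketch below uses a nearest-centroid classifier as a stand-in for ID3/C4.5 decision trees; the data are synthetic:

```python
def fit_centroids(window):
    """Train a nearest-centroid classifier (stand-in for a decision tree)."""
    acc = {}
    for x, label in window:
        s, n = acc.get(label, ([0.0] * len(x), 0))
        acc[label] = ([si + xi for si, xi in zip(s, x)], n + 1)
    return {lab: [si / n for si in s] for lab, (s, n) in acc.items()}

def predict(model, x):
    return min(model, key=lambda lab: sum((a - b) ** 2
                                          for a, b in zip(x, model[lab])))

def windowing(dataset, init=4, max_rounds=10):
    """Windowing loop: train on a window, add misclassified examples,
    stop when the model is consistent with the whole dataset."""
    window = list(dataset[:init])
    for _ in range(max_rounds):
        model = fit_centroids(window)
        misses = [ex for ex in dataset
                  if ex not in window and predict(model, ex[0]) != ex[1]]
        if not misses:
            return model, window
        window += misses
    return fit_centroids(window), window

# two well-separated clusters: windowing stops with a small window
data = [([i, i], "a") for i in range(10)] + [([i + 20, i], "b") for i in range(10)]
model, window = windowing(data)
print(len(window) < len(data))  # -> True: only part of the data was needed
```

In the paper the fraction retained can be far smaller on large datasets, which is exactly what makes windowing attractive for distributed data mining.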

]]>Mathematical and Computational Applications doi: 10.3390/mca25020038

Authors: Konrad Lang Sarah Stryeck David Bodruzic Manfred Stepponat Slave Trajanoski Ursula Winkler Stefanie Lindstaedt

Life sciences (LS) are advanced in research data management, since LS have established disciplinary tools for data archiving as well as metadata standards for data reuse. However, there is a lack of tools supporting the active research process in terms of data management and data analytics. This leads to tedious and demanding work to ensure that research data before and after publication are FAIR (findable, accessible, interoperable and reusable) and that analyses are reproducible. The initiative CyVerse, from the University of Arizona, US, supports all processes from data generation, management, sharing and collaboration to analytics. Within the presented project, we deployed an independent instance of CyVerse in Graz, Austria (CAT), within the frame of the BioTechMed association. CAT helped to enhance and simplify collaborations between the three main universities in Graz. The necessary steps were (i) creating a distributed computational and data management architecture (iRODS-based), (ii) identifying and incorporating relevant data from researchers in LS and (iii) identifying and hosting relevant tools, including analytics software, to ensure reproducible analytics using Docker technology for the researchers taking part in the initiative. This initiative supports research-related processes, including data management and analytics, for LS researchers. It also holds the potential to serve other disciplines and offers Austrian universities a way to integrate their infrastructure into the European Open Science Cloud.

]]>Mathematical and Computational Applications doi: 10.3390/mca25020037

Authors: Vicente-Josué Aguilera-Rueda Nicandro Cruz-Ramírez Efrén Mezura-Montes

We present a novel bi-objective approach to address the data-driven learning problem of Bayesian networks. Both the log-likelihood and the complexity of each candidate Bayesian network are considered as objectives to be optimized by our proposed algorithm, named Nondominated Sorting Genetic Algorithm for learning Bayesian networks (NS2BN), which is based on the well-known NSGA-II algorithm. The core idea is to reduce the implicit selection bias-variance decomposition while identifying a set of competitive models using both objectives. Numerical results suggest that, in stark contrast to the single-objective approach, our bi-objective approach is useful to find competitive Bayesian networks, especially in terms of complexity. Furthermore, our approach presents the end user with a set of solutions, showing different Bayesian networks and their respective MDL and classification accuracy results.
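
The backbone of NSGA-II-style algorithms such as NS2BN is nondominated sorting of candidates by their objective vectors. A compact sketch over hypothetical (negative log-likelihood, complexity) pairs, both minimized:

```python
def dominates(p, q):
    """p dominates q if p is no worse in every objective and strictly
    better in at least one (minimization convention)."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def nondominated_sort(points):
    """Return the successive Pareto fronts as lists of indices."""
    fronts, remaining = [], set(range(len(points)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(sorted(front))
        remaining -= set(front)
    return fronts

# hypothetical (negative log-likelihood, complexity) scores of 5 networks
scores = [(10, 5), (8, 7), (9, 6), (12, 4), (11, 6)]
print(nondominated_sort(scores))  # -> [[0, 1, 2, 3], [4]]
```

The first front is the set of trade-off models presented to the end user; NSGA-II additionally uses crowding distance to keep that front well spread.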

]]>Mathematical and Computational Applications doi: 10.3390/mca25020036

Authors: Tilmann Glimm Jianying Zhang

We propose a numerical approach that combines a radial basis function (RBF) meshless approximation with a finite difference discretization to solve a nonlinear system of integro-differential equations. The equations are of advection-reaction-diffusion type, modeling the formation of pre-cartilage condensations in embryonic chicken limbs. The computational domain is four-dimensional in the sense that the cell density depends continuously on two spatial variables as well as two structure variables, namely membrane-bound counterreceptor densities. The biologically proper Dirichlet boundary conditions imposed on the semi-infinite structure-variable region favor a meshless method with Gaussian basis functions. Coupled with WENO5 finite difference spatial discretization and the method of integrating factors, the time integration via the method of lines achieves optimal complexity. In addition, the proposed scheme can be extended to similar models with more general boundary conditions. Numerical results are provided to showcase the validity of the scheme.
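
The RBF ingredient can be sketched in one dimension: assemble the Gaussian kernel matrix on the nodes, solve for the weights, and evaluate the interpolant (the nodes, shape parameter, and test function below are illustrative, not from the paper):

```python
import math

def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def rbf_interpolant(centers, values, shape=1.0):
    """Gaussian RBF interpolation: s(x) = sum_j w_j exp(-(shape*(x-x_j))^2)."""
    phi = lambda r: math.exp(-(shape * r) ** 2)
    A = [[phi(xi - xj) for xj in centers] for xi in centers]
    w = gauss_solve(A, values)
    return lambda x: sum(wj * phi(x - xj) for wj, xj in zip(w, centers))

# interpolate f(x) = x^2 on a few nodes; the interpolant matches at nodes
centers = [0.0, 0.5, 1.0, 1.5, 2.0]
s = rbf_interpolant(centers, [c * c for c in centers])
print(round(s(1.0), 6))  # -> 1.0
```

Because Gaussians decay to zero, such an interpolant naturally accommodates the vanishing Dirichlet data on a semi-infinite structure-variable region, which is the property the paper exploits.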

]]>Mathematical and Computational Applications doi: 10.3390/mca25020035

Authors: Tijani A. Apalara Aminu M. Nass Hamdan Al Sulaimani

In the present work, we study a one-dimensional laminated Timoshenko beam with a single nonlinear structural damping due to interfacial slip. We use the multiplier method and some properties of convex functions to establish an explicit and general decay result. Interestingly, the result is established without any additional internal or boundary damping term and without imposing any restrictive growth assumption on the nonlinear term, provided the wave speeds of the first equations of the system are equal.

]]>Mathematical and Computational Applications doi: 10.3390/mca25020034

Authors: Zeinab Mansour Maryam Al-Towailb

In this paper, we introduce the complementary q-Lidstone interpolating polynomial of degree 2n, which interpolates data at the odd-order q-derivatives. For this polynomial, we provide a q-Peano representation of the error function. Next, we use these results to prove the existence of solutions of the complementary q-Lidstone boundary value problems. Some examples are included.
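
For readers unfamiliar with the q-calculus, the Jackson q-derivative underlying these constructions can be checked numerically; the test function below is ours:

```python
def q_derivative(f, x, q):
    """Jackson q-derivative: D_q f(x) = (f(qx) - f(x)) / ((q - 1) x),
    defined for x != 0 and q != 1; it reduces to f'(x) as q -> 1."""
    return (f(q * x) - f(x)) / ((q - 1) * x)

# for f(x) = x^2 one has D_q x^2 = (q + 1) x, i.e. the q-analogue [2]_q * x
f = lambda x: x * x
print(q_derivative(f, 3.0, q=1.5))  # -> (1.5 + 1) * 3 = 7.5
```

The odd-order q-derivatives interpolated by the complementary q-Lidstone polynomial are iterates of this operator.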

]]>