Mathematics doi: 10.3390/math10111799

Authors: Yongxu Liu Zhi Zhang Yan Liu Yao Zhu

In recent decades, non-invasive neuroimaging techniques and graph theory have enabled a better understanding of the structural patterns of the human brain at a macroscopic level. As one of the most widely used non-invasive techniques, the electroencephalogram (EEG) may collect non-neuronal signals from "bad channels". Automatically detecting these bad channels represents an imbalanced classification task, and research on the topic is rather limited. Because the human brain can be naturally modeled as a complex graph network based on its structural and functional characteristics, we seek to extend previous imbalanced node classification techniques to the bad-channel detection task. We specifically propose a novel edge generator that accounts for the prominent small-world organization of the human brain network. We leverage the attention mechanism to adaptively calculate the weighted edge connections between each node and its neighboring nodes. Moreover, we follow the homophily assumption in graph theory to add edges between similar nodes. Adding new edges between nodes sharing identical labels shortens the path length, thus facilitating low-cost information messaging.
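The homophily-based edge addition can be illustrated with a simple sketch, assuming NumPy; the cosine-similarity measure and the threshold here are arbitrary stand-ins for the paper's attention-weighted edge generator:

```python
import numpy as np

def add_homophily_edges(features, adj, threshold=0.95):
    """Add undirected edges between node pairs whose feature vectors
    have cosine similarity above `threshold` (homophily assumption)."""
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    unit = features / np.clip(norms, 1e-12, None)
    sim = unit @ unit.T                       # pairwise cosine similarity
    new_adj = adj.copy()
    n = len(features)
    for i in range(n):
        for j in range(i + 1, n):
            if sim[i, j] > threshold:
                new_adj[i, j] = new_adj[j, i] = 1
    return new_adj

# three toy nodes: the first two have near-identical features
features = np.array([[1.0, 0.0], [0.99, 0.05], [0.0, 1.0]])
adj = np.zeros((3, 3), dtype=int)
adj2 = add_homophily_edges(features, adj, threshold=0.95)
```

In the paper's setting the added edges connect nodes expected to share a label, shortening paths for message passing; here the edge between the first two nodes appears because their features are nearly parallel.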

Mathematics doi: 10.3390/math10111798

Authors: Cao Zhang Cao

Many manufacturers sell products of differential quality to retailers or directly to consumers, and the retailers might promote high- or low-quality products. Given different channel structures, how can the supply chain be optimized? We developed a game-theoretic framework with a manufacturer and retailer as a leader and follower, respectively, in which the retailer makes the promotional effort. We examined the effects of product quality, promotional effort, and hybrid channels on supply chain performance in four dual-channel structures. We found that, regardless of quality, retailers generally prefer to engage in the promotion even though manufacturers are reluctant to share promotional costs. However, promotional effort does not always improve the supply chain profit across channels, and there is an interaction between product-channel structure and promotional effort. The preferences of manufacturers and retailers in all feasible regions of quality levels within the aforementioned structures can be ranked. There exists a feasible region of quality levels where the supply chain can achieve a Pareto improvement without any additional coordination mechanism, and both players prefer the channel structure (Π4) in which retailers sell high-quality products with promotional effort. Moreover, the extended analysis suggests that the less significant the product variety is, the less effort is made by the retailer to promote the products.

Mathematics doi: 10.3390/math10111797

Authors: Musaev Makshanov Grigoriev

We consider the problem of evolutionary self-organization of control strategies using the example of speculative trading in a non-stationary immersion market environment. The main issue that obstructs obtaining real profit is the extremely high instability of the system component of the observation series, which implements stochastic chaos. In these conditions, traditional techniques for increasing the stability of control strategies are ineffective. In particular, the use of adaptive computational schemes is difficult due to the high volatility and non-stationarity of the observation series, which leads to significant statistical errors of both kinds in the generated control decisions. An alternative approach based on dynamic robustification technologies significantly reduces the effectiveness of the decisions. In the current work, we propose a method based on evolutionary modeling, which provides structural and parametric self-organization of the control model.

Mathematics doi: 10.3390/math10111796

Authors: Toloo Khodabandelou Oukil

Fractional programming (FP) refers to a family of optimization problems whose objective function is a ratio of two functions. FP has been studied extensively in economics, management science, information theory, optics, graph theory, communication, and computer science. This paper presents a bibliometric review of FP-related publications over the past five decades in order to track research outputs and scholarly trends in the field. The review is conducted through the Science Citation Index Expanded (SCI-EXPANDED) database of the Web of Science Core Collection (Clarivate Analytics). Based on the bibliometric analysis of 1811 documents, various theme-related research indicators are described, such as the most prominent authors and the most commonly cited papers, journals, institutions, and countries. Three research directions emerged: Electrical and Electronic Engineering, Telecommunications, and Applied Mathematics.

Mathematics doi: 10.3390/math10111795

Authors: Liu Wang Huang Pang

The dynamic optimization of the closed-loop supply chain (CLSC) is a hot research topic. Members' competitive behavior and product goodwill play an important role in the decision making of CLSC members. In this paper, a CLSC with competing manufacturers and a single retailer is studied, in which the manufacturers produce and recycle the products and the retailer is responsible for the sales of the products. On this basis, a dynamic linear differential equation of product goodwill is constructed, the optimal dynamic path of each decision variable is found, and the influence of price competition among manufacturers on the decision making of members in a dynamic CLSC is studied. The conclusions are verified by an example. The results show that goodwill directly affects the wholesale price, the retail price, the recovery price, and the profit of supply chain members. The wholesale price and the retail price of products are positively affected not only by their own goodwill but also by the goodwill of competing products. The intensity of manufacturer competition affects the product price and the supply chain members' profits: to a certain extent, the more intense the manufacturers' competition is, the higher the wholesale and retail prices, and the greater the profit of the supply chain members.

Mathematics doi: 10.3390/math10111794

Authors: Olga Bureneva Nikolay Safyannikov Zoya Aleksanyan

Singular spectrum analysis (SSA) is a method of time series analysis used in various fields, including medicine. A tremorogram is a biological signal that allows evaluation of a person's neuromotor reactions in order to infer the state of the motor parts of the central nervous system (CNS). A tremorogram has a complex structure, and its analysis requires advanced methods of signal processing and intelligent analysis. The paper's novelty lies in the application of the SSA method to extract diagnostically significant features from tremorograms, with subsequent evaluation of the state of the motor parts of the CNS. The article presents the application of singular spectrum decomposition, a comparison of known variants of classification, and the grouping of principal components for determining the components of the tremorogram corresponding to the trend, periodic components, and noise. After analyzing the results of the SSA of tremorograms, we propose a new grouping algorithm based on the analysis of the singular values of the trajectory matrix. An example of applying the SSA method to the analysis of tremorograms is shown. A comparison of known clustering methods and the proposed algorithm showed reasonable agreement between the proposed algorithm and the traditional methods of classification and pairing in the set of periodic components.
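A minimal sketch of the SSA decomposition step, assuming NumPy: embed the series in a lagged trajectory matrix, take its SVD, and reconstruct each elementary component by anti-diagonal (Hankel) averaging. The grouping of components into trend, periodic parts, and noise — the step the paper's algorithm bases on the singular values — is left out here.

```python
import numpy as np

def ssa_decompose(x, L):
    """Basic SSA: build the L-lagged trajectory matrix of x, take its SVD,
    and return the singular values plus the elementary series components
    obtained by anti-diagonal averaging of each rank-one term."""
    N = len(x)
    K = N - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])  # L x K trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    comps = []
    for k in range(len(s)):
        Xk = s[k] * np.outer(U[:, k], Vt[k])
        # average each anti-diagonal back into a length-N series
        comp = np.array([np.mean(Xk[::-1, :].diagonal(i - (L - 1)))
                         for i in range(N)])
        comps.append(comp)
    return s, np.array(comps)

t = np.arange(100)
x = 0.05 * t + np.sin(2 * np.pi * t / 10)   # trend + periodic part
s, comps = ssa_decompose(x, L=30)
```

Because the SVD is exact and Hankel averaging is linear, the components sum back to the original series; grouping then amounts to assigning each component index to trend, periodicity, or noise.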

Mathematics doi: 10.3390/math10111792

Authors: Zubair Ahmad Zahra Almaspoor Faridoon Khan Mahmoud El-Morshedy

Predicting and modeling time-to-event data is a crucial and interesting research area. For modeling and predicting such data, numerous statistical models have been suggested and implemented. This study introduces a new statistical model, namely, a new modified flexible Weibull extension (NMFWE) distribution, for modeling the mortality rate of COVID-19 patients. The introduced model is obtained by modifying the flexible Weibull extension model. The maximum likelihood estimators of the NMFWE model are obtained, and their performance is assessed in a simulation study. The flexibility and applicability of the NMFWE model are established using two datasets representing the mortality rates of COVID-19-infected persons in Mexico and Canada. For predictive modeling, we consider two pure statistical models and two machine learning (ML) algorithms. The pure statistical models include the autoregressive moving average (ARMA) and non-parametric autoregressive moving average (NP-ARMA), and the ML algorithms include neural network autoregression (NNAR) and support vector regression (SVR). To evaluate their forecasting performance, three standard measures of accuracy, namely, root mean square error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE), are calculated. The findings demonstrate that ML algorithms are very effective at predicting the mortality rate data.
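For context, a sketch of the baseline flexible Weibull extension distribution that the NMFWE modifies, with CDF F(x) = 1 − exp(−exp(αx − β/x)) for x > 0; the exact form of the modified NMFWE model is defined in the paper and is not reproduced here, and the parameter values below are arbitrary:

```python
import math

def fwe_cdf(x, alpha, beta):
    """CDF of the baseline flexible Weibull extension:
    F(x) = 1 - exp(-exp(alpha*x - beta/x)), x > 0."""
    return 1.0 - math.exp(-math.exp(alpha * x - beta / x))

def fwe_pdf(x, alpha, beta):
    """Density obtained by differentiating the CDF."""
    g = alpha * x - beta / x
    return (alpha + beta / x**2) * math.exp(g) * math.exp(-math.exp(g))

# example evaluation at arbitrary parameters
F1 = fwe_cdf(1.0, 0.5, 0.5)
```

Maximum likelihood estimation for such a model maximizes the summed log of the density over the observed event times, which is what the paper's simulation study assesses for the modified version.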

Mathematics doi: 10.3390/math10111791

Authors: Siyu Zhang Liusan Wu Ming Cheng Dongqing Zhang

The achievement of the carbon peaking and carbon neutrality targets requires adjustment of the energy structure, in which the dual-carbon progress of the power industry will directly affect the realization of the goal. Accordingly, an accurate demand forecast is imperative for government and enterprise decision makers to develop an optimal strategy for electric energy planning in advance. Based on data on whole-society electricity consumption in Jiangsu Province, China, from 2015 to 2019, this paper uses an improved particle swarm optimization algorithm to calculate the fractional order r of the FGM (1, 1) model and establishes a metabolic FGM (1, 1) model to predict whole-society electricity consumption in Jiangsu Province from 2020 to 2023. The results show that in the next few years whole-society electricity consumption in Jiangsu Province will continue to grow, but the growth rate will generally slow down. The prediction accuracy of the metabolic FGM (1, 1) model is higher than that of the GM (1, 1) and FGM (1, 1) models. In addition, the paper analyzes the reasons for the changes in whole-society electricity consumption in Jiangsu Province and provides support for government decision making.
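A sketch of the classic GM (1, 1) grey forecast underlying this approach, assuming NumPy; the paper's FGM (1, 1) generalizes the first-order accumulation below to a fractional order r tuned by improved particle swarm optimization, and the metabolic variant refits the model as each new observation arrives. The data here are invented for illustration.

```python
import numpy as np

def gm11_forecast(x0, steps):
    """Classic GM(1,1): accumulate the series, fit the grey differential
    equation x0(k) + a*z1(k) = b by least squares, and forecast."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                          # first-order accumulation
    z1 = 0.5 * (x1[1:] + x1[:-1])               # background values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.concatenate([[x1_hat[0]], np.diff(x1_hat)])  # de-accumulate

x0 = [100, 105, 111, 118, 126]      # hypothetical consumption series
pred = gm11_forecast(x0, steps=3)   # fitted values plus 3 forecasts
```

The fractional variant replaces `np.cumsum` with an r-order accumulation operator, which is where the optimized r enters.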

Mathematics doi: 10.3390/math10111790

Authors: Yitong Guo Jie Mei Zhiting Pan Haonan Liu Weiwei Li

Ensemble learning techniques are widely applied to classification tasks such as credit-risk evaluation. In most real-world credit-risk evaluation scenarios, only imbalanced data are available for model construction, and the performance of ensemble models still needs to be improved. An ideal ensemble algorithm is supposed to improve diversity in an effective manner. Therefore, we provide insight into an ensemble diversity-promotion method for imbalanced learning tasks. A novel ensemble structure is proposed that combines self-adaptive optimization techniques and a diversity-promotion method (SA-DP Forest). Additional artificially constructed samples, generated by a fuzzy sampling method at each iteration, directly create diverse hypotheses and address the imbalanced classification problem while training the proposed model. Meanwhile, the self-adaptive optimization mechanism within the ensemble simultaneously balances individual accuracy as the diversity increases. The results using the decision tree as a base classifier indicate that SA-DP Forest outperforms the comparative algorithms, as reflected by most evaluation metrics on three credit data sets and seven other imbalanced data sets. The advantage also holds on experimental data constructed with a series of artificial imbalance ratios from the original credit data set.

Mathematics doi: 10.3390/math10111788

Authors: Chuanying Li Peibing Du Kuan Li Yu Liu Hao Jiang Zhe Quan

The Horner and Goertzel algorithms are frequently used in polynomial evaluation, and each can be less expensive than the other in special cases. In this paper, we present a new compensated algorithm to improve the accuracy of the Goertzel algorithm by using error-free transformations. We derive the forward round-off error bound for our algorithm, which implies that it yields full-precision accuracy for polynomials that are not too ill-conditioned. A dynamic error estimate in our algorithm is also presented by running round-off error analysis. Moreover, we show the cases in which our algorithm is less expensive than the compensated Horner algorithm for evaluating polynomials. Numerical experiments indicate that our algorithm runs faster than the compensated Horner algorithm in those cases while producing the same accurate results, and our algorithm is absolutely stable when the condition number is smaller than 10^16. An application is given to illustrate that our algorithm is more accurate than MATLAB's fft function. The results show that the relative error of our algorithm is from 10^-15 to 10^-17, and that of the fft is from 10^-12 to 10^-15.
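The error-free transformations behind such compensated schemes can be sketched as follows; shown here is the well-known compensated Horner scheme rather than the paper's compensated Goertzel algorithm, whose construction is analogous (each floating-point sum and product is split exactly into a result and a rounding error, and the errors are accumulated in a correction term):

```python
def two_sum(a, b):
    """Error-free sum (Knuth): returns (s, e) with s = fl(a+b) and
    a + b = s + e exactly."""
    s = a + b
    bp = s - a
    e = (a - (s - bp)) + (b - bp)
    return s, e

def two_prod(a, b):
    """Error-free product via Dekker splitting (no FMA required)."""
    p = a * b
    c = 134217729.0 * a            # 2**27 + 1 splits a double in half
    ah = c - (c - a); al = a - ah
    c = 134217729.0 * b
    bh = c - (c - b); bl = b - bh
    e = ((ah * bh - p) + ah * bl + al * bh) + al * bl
    return p, e

def comp_horner(coeffs, x):
    """Compensated Horner: Horner's rule plus an accumulated correction
    built from the exact rounding errors of each step."""
    s = coeffs[0]          # leading coefficient first
    corr = 0.0
    for c in coeffs[1:]:
        p, pi = two_prod(s, x)
        s, sigma = two_sum(p, c)
        corr = corr * x + (pi + sigma)
    return s + corr

# (x - 1)**5 near x = 1: ill-conditioned for plain Horner
val = comp_horner([1.0, -5.0, 10.0, -10.0, 5.0, -1.0], 1.0 + 2.0**-10)
```

At x = 1 + 2^-10 the exact value is 2^-50; the compensated result recovers it to near full precision, whereas plain Horner loses most significant digits through cancellation.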

Mathematics doi: 10.3390/math10111789

Authors: Ling Zhu

By using the power series of the functions 1/sin^n t and cos t/sin^n t (n = 1, 2, 3, 4, 5), and the estimation of the ratio of two adjacent Bernoulli numbers, we obtained new bounds for the arithmetic mean A by the weighted means M_tan^(1/3) M_sin^(2/3) and (1/3)M_tan + (2/3)M_sin, as well as M_tanh^(1/3) M_sinh^(2/3) and (1/3)M_tanh + (2/3)M_sinh, where M_tan(x,y) and M_sin(x,y), M_tanh(x,y) and M_sinh(x,y) are the tangent mean, sine mean, hyperbolic tangent mean, and hyperbolic sine mean, respectively. The upper and lower bounds obtained in this paper are compared in detail with the conclusions of the previous literature.

Mathematics doi: 10.3390/math10111787

Authors: Rekha R. Jaichander Izhar Ahmad Krishna Kummari Suliman Al-Homidan

In this paper, Karush-Kuhn-Tucker type robust necessary optimality conditions for a robust nonsmooth interval-valued optimization problem (UCIVOP) are formulated using the concept of LU-optimal solution and the generalized robust Slater constraint qualification (GRSCQ). These Karush-Kuhn-Tucker type robust necessary conditions are shown to be sufficient optimality conditions under generalized convexity. The Wolfe and Mond-Weir type robust dual problems are formulated over cones using generalized convexity assumptions, and usual duality results are established. The presented results are illustrated by non-trivial examples.

Mathematics doi: 10.3390/math10101786

Authors: Ali Muhib Osama Moaaz Clemente Cesarano Shami A. M. Alsallami Sayed Abdel-Khalek Abd Elmotaleb A. M. A. Elamin

In this work, new criteria were established for testing the oscillatory behavior of solutions of a class of even-order delay differential equations. We follow an approach that depends on obtaining new monotonic properties for the decreasing positive solutions of the studied equation. Moreover, we use these properties to provide new oscillation criteria of an iterative nature. We provide an example to support the significance of the results and compare them with the related previous work.

Mathematics doi: 10.3390/math10101785

Authors: Alfonso García-Pérez

The spatio-temporal variogram is an important factor in spatio-temporal prediction through kriging, especially in fields such as environmental sustainability or climate change, where spatio-temporal data analysis is based on this concept. However, the traditional spatio-temporal variogram estimator, which is commonly employed for these purposes, is extremely sensitive to outliers. We approach this problem in two ways in the paper. First, new robust spatio-temporal variogram estimators are introduced, which are defined as M-estimators of an original data transformation. Second, we compare the classical estimate against a robust one, identifying spatio-temporal outliers in this way. To accomplish this, we use a multivariate scale-contaminated normal model to produce reliable approximations for the sample distribution of these new estimators. In addition, we define and study a new class of M-estimators in this paper, including real-world applications, in order to determine whether there are any significant differences in the spatio-temporal variogram between two temporal lags and, if so, whether we can reduce the number of lags considered in the spatio-temporal analysis.
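A minimal sketch of M-estimation in the simplest (location) setting, assuming NumPy; this illustrates only the general idea of downweighting outliers that the abstract invokes — the paper's new estimators act on a transformation of spatio-temporal variogram data, not on raw observations, and the tuning constant below is the conventional Huber choice:

```python
import numpy as np

def huber_location(x, k=1.345, tol=1e-8, max_iter=100):
    """Huber M-estimate of location via iteratively reweighted means;
    the scale is held fixed at the normalized MAD."""
    x = np.asarray(x, dtype=float)
    mu = np.median(x)
    scale = 1.4826 * np.median(np.abs(x - mu))
    if scale == 0.0:
        scale = 1.0
    for _ in range(max_iter):
        r = np.abs(x - mu) / scale
        w = np.minimum(1.0, k / np.maximum(r, 1e-12))  # Huber weights
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

data = np.array([1.0, 1.2, 0.9, 1.1, 1.05, 50.0])  # one gross outlier
mu = huber_location(data)
```

The classical (mean-based) estimate is dragged toward the outlier, while the M-estimate stays near the bulk of the data; comparing the two is the same diagnostic logic the paper uses to flag spatio-temporal outliers.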

Mathematics doi: 10.3390/math10101784

Authors: Shqair Farrag Al-Smadi

The solution of the coupled system of neutron diffusion equations in a spherical nuclear reactor is presented using the homotopy perturbation method (HPM). The HPM is a remarkable approximation method that successfully solves different systems of diffusion equations, and in this work the system is solved for the first time using this approximation method. The considered system of neutron diffusion equations consists of two consistent subsystems: the first covers the multi-group subsystem of equations in the reactor core, and the other the multi-group subsystem of equations in the reactor reflector; each subsystem can handle any finite number of neutron energy groups. The system is simplified numerically to a one-group bare and reflected reactor, which is compared with the modified differential transform method; a two-group bare reactor, which is compared with the residual power series method; a two-group reflected reactor, which is compared with the classical method; and a four-group bare reactor, which is compared with the residual power series method.

Mathematics doi: 10.3390/math10101783

Authors: Antonella Calzolari Barbara Torti

The strong predictable representation property of semi-martingales and the notion of enlargement of filtration meet naturally in modeling financial markets, and theoretical problems arise. Here, first, we illustrate some of them through classical examples. Then, we review recent results obtained by studying predictable martingale representations for filtrations enlarged by means of a full process, possibly with accessible components in its jump times. The emphasis is on the non-uniqueness of the martingale enjoying the strong predictable representation property with respect to the same enlarged filtration.

Mathematics doi: 10.3390/math10101782

Authors: Akram M. Abdurraqeeb Abdullrahman A. Al-Shamma’a Abdulaziz Alkuhayli Abdullah M. Noman Khaled E. Addoweesh

The instability of DC microgrids is the most prominent problem limiting the expansion of their use, and one of the most important causes of instability is constant power loads (CPLs). In this paper, a robust RST digital feedback controller is proposed to overcome the instability caused by the negative-resistance effect of CPLs, to improve robustness against power-load perturbations and input-voltage fluctuations, and to achieve good tracking performance. To develop the proposed controller, the dynamic model of the DC/DC buck converter with a CPL is first identified. Second, based on pole placement and the sensitivity function shaping technique, a controller is designed and applied to the buck converter system. Then, the proposed controller is validated using MATLAB/Simulink. Finally, the experimental validation of the RST controller is performed on a DC/DC buck converter with a CPL using real-time hardware-in-the-loop (HIL) simulation. The OPAL-RT OP4510 RCP/HIL and dSPACE DS1104 controller board are used to model the DC/DC buck converter and to implement the suggested RST controller, respectively. The simulation and HIL experimental results indicate that the suggested RST controller has high efficiency.

Mathematics doi: 10.3390/math10101781

Authors: Mikhail Zymbler Andrey Goglachev

Summarization of a long time series often occurs in analytical applications related to decision-making, modeling, planning, and so on. Informally, summarization aims at discovering a small-sized set of typical patterns (subsequences) to briefly represent the long time series. Apparent approaches to summarization, like motifs, shapelets, and cluster centroids, either require training data or do not provide an analyst with information regarding the fraction of the time series that a typical subsequence found corresponds to. The recently introduced time series snippet concept overcomes the above-mentioned limitations. A snippet is a subsequence that is similar to many other subsequences of the time series with respect to a specially defined similarity measure based on the Euclidean distance. However, the original Snippet-Finder algorithm has cubic time complexity in the lengths of the time series and the snippet. In this article, we propose the PSF (Parallel Snippet-Finder) algorithm, which accelerates the original snippet discovery schema with a GPU and ensures acceptable performance over very long time series. As opposed to the original algorithm, PSF splits the calculation of the similarity of all the time series subsequences to a snippet into several steps, each of which is performed in parallel. Experimental evaluation over real-world time series shows that PSF outruns both the original algorithm and a straightforward parallelization.
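A serial, unoptimized sketch of the underlying idea, assuming NumPy: score each candidate subsequence by how close it is to all other subsequences, which is the bulk similarity computation that PSF splits into parallel GPU steps. The snippet definition in the paper uses a specially defined profile-based measure; here it is simplified to a summed Euclidean distance profile purely for illustration.

```python
import numpy as np

def distance_profile(ts, query):
    """Euclidean distances from `query` to every same-length subsequence."""
    m = len(query)
    return np.array([np.linalg.norm(ts[i:i + m] - query)
                     for i in range(len(ts) - m + 1)])

def best_snippet(ts, m):
    """Pick the length-m subsequence whose summed distance profile is
    smallest, i.e. the one most representative of the whole series
    (a simplified serial stand-in for the Snippet-Finder / PSF schema)."""
    best_i, best_cost = 0, np.inf
    for i in range(len(ts) - m + 1):
        cost = distance_profile(ts, ts[i:i + m]).sum()
        if cost < best_cost:
            best_i, best_cost = i, cost
    return best_i, ts[best_i:best_i + m]

t = np.arange(200)
ts = np.sin(2 * np.pi * t / 20)          # purely periodic toy series
idx, snip = best_snippet(ts, m=20)
```

The double loop over candidates and subsequences is exactly the cubic-cost structure the abstract mentions, and it is embarrassingly parallel, which is what makes the GPU decomposition pay off.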

Mathematics doi: 10.3390/math10101780

Authors: Majid Aghasharifian Esfahani Mohammadmehdi Namazi Theoklis Nikolaidis Soheil Jafari

New propulsion systems in aircraft must meet strict regulations and emission limitations. The Flightpath 2050 goals set by the Advisory Council for Aviation Research and Innovation in Europe (ACARE) include reductions of 75%, 90%, and 65% in CO2, NOx, and noise, respectively. These goals cannot be fully met by marginal improvements in gas turbine technology or aircraft design. A novel control design procedure for the next generation of turbofan engines is proposed in this paper to improve Full Authority Digital Engine Control (FADEC) systems and reduce emission levels to meet the Flightpath 2050 regulations. Hence, an Adaptive Network-based Fuzzy Inference System (ANFIS), nonlinear autoregressive network with exogenous inputs (NARX) techniques, and the block-structured Hammerstein-Wiener approach are used to develop a model of a turbofan engine. The Min-Max control structure is chosen as the most widely used practical control algorithm for gas turbine aero engines. The objective function is formulated to minimize the engine's emission level over a pre-defined maneuver while maintaining engine performance in its various aspects. A Genetic Algorithm (GA) is applied to find the optimized control structure. The results confirm the effectiveness of the proposed approach in emission reduction for the next generation of turbofan engines.

Mathematics doi: 10.3390/math10101779

Authors: Alvaro Rodríguez-Prieto Manuel Callejas Ernesto Primera Guglielmo Lomonaco Ana María Camacho

The aim of this work is to present a new analytical model to jointly evaluate the mechanical integrity and the fitness-for-service of nuclear reactor pressure-vessel steels. This methodology integrates a robust and regulated irradiation-embrittlement prediction model, the ASTM E-900, with the ASME Fitness-for-Service code used widely in other demanding industries, such as oil and gas, to evaluate, among others, the risk of degradation mechanisms such as brittle fracture (generated, in this case, by irradiation embrittlement). This multicriteria analytical model, based on a new formulation of the brittle fracture criterion, allows an adequate prediction of the irradiation effect on the fracture toughness of reactor pressure-vessel steels, letting us jointly evaluate the mechanical integrity and the fitness-for-service of the vessel using standardized limit conditions. This supports decision making during the design, manufacturing, and in-service stages of reactor pressure vessels. The results obtained by applying the methodology are consistent with several historical experimental works.

Mathematics doi: 10.3390/math10101778

Authors: Yongbo Pan Xunlin Zhu

The cutterhead torque and thrust, reflecting the degree of geological obstruction and the behavior of excavation, are the key operating parameters for the tunneling of tunnel boring machines (TBMs). In this paper, a hybrid hidden Markov model (HMM) combined with ensemble learning is proposed to predict the value intervals of the cutterhead torque and thrust based on historical tunneling data. First, the target variables are encoded into discrete states by means of the HMM. Then, ensemble learning models including AdaBoost, random forest (RF), and extremely randomized trees (ERT) are employed to predict the discrete states. On this basis, the performances of those models are compared under different forms of the same input parameters. Moreover, to further validate the effectiveness and superiority of the proposed method, two excavation datasets from actual projects under different geological conditions, from Beijing and Zhengzhou, are utilized for comparison. The results show that the ERT outperforms the other models, with prediction accuracies up to 0.93 and 0.99 for the cutterhead torque and thrust, respectively. Therefore, the ERT combined with the HMM can be used as a valuable tool for predicting the cutterhead torque and thrust, helping alert the operator to abnormal excavation and assisting intelligent tunneling.
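The target-discretization step can be sketched as follows, assuming NumPy; quantile binning into value intervals stands in for the HMM state encoding used in the paper, purely for illustration, and the torque values are invented:

```python
import numpy as np

def encode_states(y, n_states):
    """Discretize a continuous target into value-interval states using
    quantile bin edges; the resulting labels can then be predicted by
    classifiers such as AdaBoost, RF, or ERT."""
    edges = np.quantile(y, np.linspace(0, 1, n_states + 1)[1:-1])
    return np.digitize(y, edges), edges

torque = np.array([2.1, 3.5, 1.9, 4.8, 2.7, 5.2, 3.1, 4.0])  # toy data
states, edges = encode_states(torque, n_states=3)
```

Turning the regression target into a small number of interval states is what converts the problem into the classification task on which the ensemble accuracies (0.93 and 0.99) are reported.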

Mathematics doi: 10.3390/math10101777

Authors: Rosa Fernández Ropero María Julia Flores Rafael Rumí

Environmental data often present inconveniences that make modeling tasks difficult. During the phase of data collection, two problems were found: (i) a block of five months of data was unavailable, and (ii) no information was collected from the coastal area, which made flood-risk estimation difficult. Thus, our aim is to explore and provide possible solutions to both issues. To avoid removing a variable (or those missing months), the proposed solution is a Bayesian network (BN)-based regression model using fixed probabilistic graphical structures to impute the missing variable as accurately as possible. For the second problem, the lack of information, an unsupervised classification method based on BNs was developed to predict flood risk in the coastal area. Results showed that the proposed regression solution could predict the behavior of the continuous missing variable, avoiding the initial drawback of rejecting it. Moreover, the unsupervised classifier could classify all observations into a set of groups according to upstream river behavior and rainfall information, and return the probability of belonging to each group, providing appropriate predictions about the risk of flood in the coastal area.

Mathematics doi: 10.3390/math10101776

Authors: Zhishuo Zhang Yao Xiao Huayong Niu

Data envelopment analysis (DEA), a non-parametric estimation method for evaluating the relative effectiveness of research objects, has been widely applied to evaluate the performance of banks, enterprises, governments, research institutions, hospitals, and other entities. However, its efficient frontier is composed from the input-output data of existing decision units, which makes it challenging to apply the method to predict the future performance of other decision units. In this paper, the Slack Based Measure (SBM) model of DEA is used to measure the relative efficiency values of decision units, and then eleven machine learning models are used to train the absolute efficient frontier for the performance prediction of new decision units. To further improve the prediction effect of the models, this paper proposes a training set under a DEA classification method, addressing both training-set sample selection and input feature indicators. Regression prediction of test-set performance based on training sets under different classification combinations is performed, and the prediction effects of proportional relative indicators and absolute number indicators as machine-learning input features are explored. The robustness of the efficient frontier under the integrated model is verified. An integrated model of DEA and machine learning with better prediction performance is proposed, taking the prediction of China's regional carbon-dioxide emission (carbon emission) performance as an example. The novelty of this work is mainly as follows: firstly, the integrated model can achieve performance prediction by constructing an efficient frontier, and the empirical results show that this is a feasible methodological technique. Secondly, two schemes to improve the prediction effectiveness of integrated models are discussed in terms of training-set partitioning and feature selection, and the effectiveness of the schemes is demonstrated using carbon-emission performance prediction as an example. This study has some application value and complements the existing literature.

Mathematics doi: 10.3390/math10101775

Authors: Zeeshan Rasool Shah Waris Khan Essam R. El-Zahar Se-Jin Yook Nehad Ali Shah

Sakiadis rheology of a generalised polymeric material, together with a heat source or sink and a magnetic field, is the subject of this study. Thermal radiation has been introduced into the convective heating process. The translation of the physical situation into a set of nonlinear equations was achieved through mathematical modelling, and appropriate transformations were used to convert the resulting partial differential equations into a set of nonlinear ordinary differential equations. The velocity and temperature profiles are generated both analytically by HAM and numerically by the Runge-Kutta method (RK-4). Numerical and graphical depictions are offered to analyse the behaviour of the physical quantities involved, and an error analysis of the nonlinear system is offered to show that the acquired findings are correct. The heat flux study is shown using bar charts. For the essential factors involved, the local Nusselt number and local skin friction are presented in tabular form. The molecular mobility of the fluid particles was slowed by the magnetic field and porosity, and the heat transfer rates were shown to be lowered when magnetic and porosity effects are present. This regulating property of the magnetic field and porosity effects has applications in MHD ion propulsion and power production, the electromagnetic casting of metals, etc. Furthermore, internal heat absorption and generation have diametrically opposed impacts on fluid temperature. The novelty of the present study is that no one has investigated the Sakiadis flow of a thermally convective magnetised Oldroyd-B fluid with a heat reservoir across a porous sheet. In limiting circumstances, a satisfactory match with previously published work is revealed, corroborating the current attempt. The findings of this study are expected to be applicable to a wide range of technical and industrial processes, including steel extrusion, wire protective layers, fiber rolling, fabrication, polythene products such as broadsheet, fiber, and stainless steel sheets, and even the process of depositing a thin layer where the sheet is squeezed.

Mathematics doi: 10.3390/math10101773

Authors: Chunyeung Kwok

This paper investigates the possibility of using the global VAR (GVAR) model to estimate a simple New Keynesian DSGE-type multi-country model. The long-run forecasts from an estimated GVAR model were used to calculate the steady-states of macro variables as differences. The deviations from the long-run forecasts were taken as the deviations from the steady-states and were used to estimate a simple NK open economy model with an IS curve, Phillips curve, Taylor rule, and an exchange rate equation. The shocks to these equations were taken as the demand shock, supply shock, monetary shock, and exchange rate shock, respectively. An alternative model was constructed to compare the results from GVAR long-run forecasts. The alternative model used a Hodrick&ndash;Prescott (HP) filter to derive deviations from the steady-states. The impulse response functions from the shocks were then compared to results from other DSGE models in the literature. The GVAR and HP estimates produced dissimilar results, although the GVAR managed to capture more from the data, given the explicit co-integration relationships. For the IRFs, both GVAR and HP estimated DSGE models appeared to be as expected before the pandemic; however, if we include the pandemic data, i.e., 2020, the IRFs are very different, due to the nature of the policy actions. In general, DSGE&ndash;GVAR models appear to be much more versatile, and are able to capture dynamics that HP filters are not.
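The HP-filter detrending step mentioned above can be sketched via its closed-form penalized least-squares solution; the series and smoothing parameter below are illustrative, not the paper's data.

```python
import numpy as np

def hp_filter(y, lam=1600.0):
    """Hodrick-Prescott trend: minimize ||y - tau||^2 + lam*||D2 tau||^2,
    whose normal equations give (I + lam * D'D) tau = y in closed form."""
    n = len(y)
    # Second-difference operator D of shape (n-2, n)
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i], D[i, i + 1], D[i, i + 2] = 1.0, -2.0, 1.0
    tau = np.linalg.solve(np.eye(n) + lam * (D.T @ D), y)
    return tau, y - tau  # trend and cyclical deviation

# A purely linear series has zero second differences, so the trend
# reproduces it exactly and the cyclical component vanishes.
t = np.arange(40, dtype=float)
trend, cycle = hp_filter(2.0 + 0.5 * t)
```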

]]>Mathematics doi: 10.3390/math10101774

Authors: Tareq Saeed Kamel Djeddi Juan L. G. Guirao Hamed H. Alsulami Mohammed Sh. Alhodaly

In this paper, we present a cancer system in a continuous state as well as some numerical results. We present discretization methods, e.g., the Euler method, the Taylor series expansion method, and the Runge&ndash;Kutta method, and apply them to the cancer system. We study the stability of the fixed points of the discrete cancer system and, using the new version of Marotto&rsquo;s theorem at a fixed point, we prove that the discrete cancer system is chaotic. Finally, we present numerical simulations, e.g., Lyapunov exponents and bifurcation diagrams.
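The Euler and Runge&ndash;Kutta discretizations named in this abstract can be sketched generically; the scalar test equation below is illustrative and is not the cancer model studied in the paper.

```python
import math

def euler_step(f, y, t, h):
    # Explicit Euler: y_{n+1} = y_n + h * f(t_n, y_n)
    return y + h * f(t, y)

def rk4_step(f, y, t, h):
    # Classical fourth-order Runge-Kutta step
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

f = lambda t, y: -y          # test equation with exact solution e^(-t)
h, steps = 0.1, 10
ye = yr = 1.0
for n in range(steps):
    ye = euler_step(f, ye, n * h, h)
    yr = rk4_step(f, yr, n * h, h)
exact = math.exp(-1.0)       # exact solution at t = 1
```

As expected, the fourth-order scheme is far more accurate than Euler at the same step size.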

]]>Mathematics doi: 10.3390/math10101772

Authors: Xinyu Liu Yuting Ding

As COVID-19 continues to threaten public health around the world, research on specific vaccines has been underway. In this paper, we establish an SVIR model on booster vaccination with two time delays, which represent the time of booster vaccination and the time of booster vaccine invalidation, respectively. Then, we investigate the impact of delay on the stability of the non-negative equilibria of the model by considering the duration of the vaccine, and the system undergoes Hopf bifurcation when the duration of the vaccine passes through certain critical values. We obtain the normal form of the Hopf bifurcation by applying the multiple time scales method. Next, we study the model with two delays and show the conditions under which the nontrivial equilibria are locally asymptotically stable. Finally, through analysis of official data, we select two groups of parameters to simulate the actual epidemic situation of countries with low vaccination rates and countries with high vaccination rates. On this basis, we select a third group of parameters to simulate the ideal situation in which the epidemic can be well controlled. Through comparative analysis of the numerical simulations, we conclude that the most appropriate time for the booster shot is 6 months after the basic vaccine. The priority for countries with low vaccination rates is to increase vaccination rates; otherwise, outbreaks will continue. Countries with high vaccination rates need to develop more effective vaccines while maintaining their coverage rates. When the vaccine lasts longer and the failure rate is lower, the epidemic can be well controlled within 20 years.

]]>Mathematics doi: 10.3390/math10101771

Authors: Manuel D. Ortigueira

In this paper, some myths associated with the initial condition problem are studied and demystified. It is shown that the initial conditions provided by the one-sided Laplace transform are not those required for Riemann-Liouville and Caputo derivatives. The problem is studied and solved with generality as well as applied to continuous-time fractional autoregressive-moving average systems.

]]>Mathematics doi: 10.3390/math10101770

Authors: Abdulilah Mohammad Mayet Seyed Mehdi Alizadeh Zana Azeez Kakarash Ali Awadh Al-Qahtani Abdullah K. Alanazi Hala H. Alhashimi Ehsan Eftekhari-Zadeh Ehsan Nazemi

When fluids flow in pipes, the materials they carry cause deposits to form inside the pipes over time, which threatens the efficiency of the equipment and accelerates its depreciation. In the present study, a method for detecting the volume percentage of a two-phase flow in the presence of scale inside the test pipe is presented using artificial intelligence networks. The method is non-invasive: a detector located on one side of the pipe records the photons that have passed through the pipe from a dual source of the isotopes barium-133 and cesium-137 placed on the other side. The Monte Carlo N-Particle Code (MCNP) simulates the structure, and wavelet features are extracted from the data recorded by the detector. These features serve as inputs to a group method of data handling (GMDH) neural network. A neural network is trained to determine the volume percentage with high accuracy, independent of the thickness of the scale in the pipe. In this research, to implement a precise system for working in operating conditions, different conditions are simulated, including different flow regimes, different scale thickness values, and different volume percentages. The proposed system is able to determine the volume percentages with high accuracy, regardless of the type of flow regime and the amount of scale inside the pipe. The use of feature extraction techniques in the implementation of the proposed detection system not only reduces the number of detectors, lowers costs, and simplifies the system, but also increases the accuracy considerably.

]]>Mathematics doi: 10.3390/math10101769

Authors: Irina Opraie Dorian Popa Liana Timboş

In this paper, we prove the Ulam stability of the Fr&eacute;chet functional equation f(x+y+z)+f(x)+f(y)+f(z)=f(x+y)+f(y+z)+f(z+x) arising from the characterization of inner product spaces and we determine its best Ulam constant. Using this result, we give a stability result for a pexiderized version of the Fr&eacute;chet functional equation.
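A quick numerical sanity check (illustrative, not taken from the paper): the squared norm of an inner product space satisfies the Fr&eacute;chet equation exactly, while, e.g., a cubic does not.

```python
import random

def frechet_gap(f, x, y, z):
    # Difference between the two sides of the Frechet functional equation
    lhs = f(x + y + z) + f(x) + f(y) + f(z)
    rhs = f(x + y) + f(y + z) + f(z + x)
    return lhs - rhs

sq = lambda x: x * x  # f(x) = |x|^2, the squared norm of an inner product space
random.seed(0)
gaps = [frechet_gap(sq, random.uniform(-5, 5), random.uniform(-5, 5),
                    random.uniform(-5, 5)) for _ in range(100)]
# In exact arithmetic every gap is zero; a cubic f violates the equation:
# frechet_gap(x**3, 1, 2, 3) = 252 - 216 = 36.
```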

]]>Mathematics doi: 10.3390/math10101768

Authors: Pavel Ilyushin Aleksandr Kulikov Konstantin Suslov Sergey Filippov

In the context of energy industry decentralization, electrical networks encounter deviations of power quality indices (PQI), including violations of the sinusoidality of current and voltage signals, which increase errors in the joint digital processing of spatially separated signals in digital devices. This paper addresses specific features of using the concept of spatial coherence in the measurement and digital processing of current and voltage signals. Methods for assessing the coherence of current and voltage signals during synchronized measurements are considered for the case of PQI deviation. The example of a double-ended transmission line fault location (hereafter, DTLFL) demonstrates that the lower the cross-correlation coefficient, the higher the error and the lower the accuracy of calculating the distance to the fault site. The nature of the influence of spatial coherence violations on errors in DTLFL depends on the expression used to calculate the distance to the fault point. The application of a normalized cross-correlation coefficient for finding errors in the digital processing of current and voltage signals, in the case of spatial coherence violation, was substantiated. The influence of interharmonics and noise on errors in DTLFL, in the case of violations of spatial coherence of signals, was investigated. The magnitude of distortions and error in estimating the current and voltage amplitude depends on the ratio between the amplitudes and phases of the fundamental and distorting interharmonics. Filtration of the original and decimated signals based on the discrete Fourier transform eliminates the noise components of the power frequency harmonics.

]]>Mathematics doi: 10.3390/math10101767

Authors: Georgiana Ingrid Stoleru Adrian Iftene

Alzheimer&rsquo;s Disease (AD) is a highly prevalent condition and most of the people suffering from it receive the diagnosis late in the process. The diagnosis is currently established following an evaluation of the protein biomarkers in cerebrospinal fluid (CSF), brain imaging, cognitive tests, and the medical history of the individuals. While diagnostic tools based on CSF collections are invasive, the tools used for acquiring brain scans are expensive. Taking these into account, an early predictive system, based on Artificial Intelligence (AI) approaches, targeting the diagnosis of this condition, as well as the identification of lead biomarkers becomes an important research direction. In this survey, we review the state-of-the-art research on machine learning (ML) techniques used for the detection of AD and Mild Cognitive Impairment (MCI). We attempt to identify the most accurate and efficient diagnostic approaches, which employ ML techniques and therefore, the ones most suitable to be used in practice. Research is still ongoing to determine the best biomarkers for the task of AD classification. At the beginning of this survey, after an introductory part, we enumerate several available resources, which can be used to build ML models targeting the diagnosis and classification of AD, as well as their main characteristics. After that, we discuss the candidate markers which were used to build AI models with the best results in terms of diagnostic accuracy, as well as their limitations.

]]>Mathematics doi: 10.3390/math10101765

Authors: Ion Necoara Tudor-Corneliu Ionescu

In this paper, we compute a (local) optimal reduced order model that matches a prescribed set of moments of a stable linear time-invariant system of high dimension. We fix the interpolation points and parametrize the models achieving moment-matching in a set of free parameters. Based on the parametrization and using the H2-norm of the approximation error as the objective function, we derive a nonconvex optimization problem, i.e., we search for the optimal free parameters to determine the model yielding the minimal H2-norm of the approximation error. Furthermore, we provide the necessary first-order optimality conditions in terms of the controllability and the observability Gramians of a minimal realization of the error system. We then propose two gradient-type algorithms to compute the (local) optimal models, with mathematical guarantees on the convergence. We also derive convex semidefinite programming relaxations for the nonconvex problem, under the assumption that the error system admits block-diagonal Gramians, and derive sufficient conditions to guarantee the block diagonalization. The solutions resulting at each step of the proposed algorithms guarantee the achievement of the imposed moment matching conditions. The second gradient-based algorithm exhibits the additional property that, when stopped, yields a stable approximation with a reduced H2-error norm. We illustrate the theory on a CD-player and on a discretized heat equation.

]]>Mathematics doi: 10.3390/math10101766

Authors: Jun Moon

In this paper, we consider the two-player state and control path-dependent stochastic zero-sum differential game. In our problem setup, the state process, which is controlled by the players, is dependent on (current and past) paths of state and control processes of the players. Furthermore, the running cost of the objective functional depends on both state and control paths of the players. We use the notion of non-anticipative strategies to define lower and upper value functionals of the game, where unlike the existing literature, these value functions are dependent on the initial states and control paths of the players. In the first main result of this paper, we prove that the (lower and upper) value functionals satisfy the dynamic programming principle (DPP), for which unlike the existing literature, the Skorohod metric is necessary to maintain the separability of c&agrave;dl&agrave;g (state and control) spaces. We introduce the lower and upper Hamilton&ndash;Jacobi&ndash;Isaacs (HJI) equations from the DPP, which correspond to the state and control path-dependent nonlinear second-order partial differential equations. In the second main result of this paper, we show that by using the functional It&ocirc; calculus, the lower and upper value functionals are viscosity solutions of (lower and upper) state and control path-dependent HJI equations, where the notion of viscosity solutions is defined on a compact &kappa;-H&ouml;lder space to use several important estimates and to guarantee the existence of minimum and maximum points between the (lower and upper) value functionals and the test functions. Based on these two main results, we also show that the Isaacs condition and the uniqueness of viscosity solutions imply the existence of the game value. 
Finally, we prove the uniqueness of classical solutions for the (state path-dependent) HJI equations in the state path-dependent case, where its proof requires establishing an equivalent classical solution structure as well as an appropriate contradiction argument.

]]>Mathematics doi: 10.3390/math10101764

Authors: Anuar R. Giménez Jesús Martín-Vaquero Manuel Rodríguez-Martín

In industrial engineering degrees in Spain, mathematics subjects are usually taught during the first two academic years. Consequently, students often do not feel motivated to learn subjects such as Mathematics II (calculus). Nevertheless, this subject is fundamental for understanding other subjects in the degree study plan, as well as for the graduate&rsquo;s future professional career as an engineer. To address this, a problem-based teaching methodology was carried out with the help of a fourth-year student who explained an activity to first-year students in a manner which was both friendly and approachable. In this experiment, the student went through a series of practical problems taken from different engineering subjects, which required multivariable integrals to be calculated and which he had learned in mathematics as a first-year student. In addition, a method based on pre-test and post-test assessments was applied. From this work, various benefits were observed in terms of learning, as well as an increase in the level of motivation of first-year students. There was a greater appreciation of the usefulness of calculus and computer programs to solve real-life problems, and the students generally responded positively to this type of activity.

]]>Mathematics doi: 10.3390/math10101763

Authors: Shing-Yun Jung Ting-Han Lin Chia-Hung Liao Shyan-Ming Yuan Chuen-Tsai Sun

We study the problem of controllable citation text generation by introducing a new concept to generate citation texts. Citation text generation, as an assistive writing approach, has drawn a number of researchers&rsquo; attention. However, current research related to citation text generation rarely addresses how to generate citation texts that satisfy the citation intents specified by the paper&rsquo;s authors, especially at the beginning of paper writing. We propose a controllable citation text generation model that extends pre-trained sequence-to-sequence models, namely BART and T5, by using the citation intent as the control code to generate citation text that meets the paper authors&rsquo; citation intent. Experimental results demonstrate that our model can generate citation texts semantically similar to the reference citation texts and satisfy the given citation intent. Additionally, the results from human evaluation also indicate that incorporating the citation intent may enable the models to generate relevant citation texts almost as scientific paper authors do, even when only a little information from the citing paper is available.

]]>Mathematics doi: 10.3390/math10101762

Authors: Mario Martínez García Jesse Y. Rumbo Morales Gerardo Ortiz Torres Salvador A. Rodríguez Paredes Sebastián Vázquez Reyes Felipe de J. Sorcia Vázquez Alan F. Pérez Vidal Jorge S. Valdez Martínez Ricardo Pérez Zúñiga Erasmo M. Renteria Vargas

One of the separation processes used for the production and purification of hydrogen is molecular sieve adsorption using the Pressure Swing Adsorption (PSA) method. The process uses two beds containing activated carbon and a sequence of four steps (adsorption, depressurization, purge, and repressurization) for hydrogen production and purification. The initial composition is 0.11 CO, 0.61 H2, and 0.28 CH4 in molar fractions. The aim of this work is to bring the purity of hydrogen to 0.99 in molar fraction and implement controllers that can maintain the desired purity even in the presence of the disturbances that occur in the PSA process. The controller design (discrete PID and state feedback control) was based on the Hammerstein&ndash;Wiener model, which had an 80% fit over the rigorous PSA model. Both controllers were validated on a virtual plant of the PSA process, showing great performance and robustness against disturbances. The results obtained show that it is possible to follow the desired trajectory and attenuate double disturbances, while managing to maintain the purity of hydrogen at a value of 0.99 in molar fraction, which meets the international standards to be used as a biofuel.
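A minimal sketch of a discrete PID loop of the kind described above; the first-order plant and the gains are illustrative assumptions, not the Hammerstein&ndash;Wiener model or the tuned controllers of the paper.

```python
def pid_step(e, state, kp, ki, kd, dt):
    """One update of a discrete PID controller with a
    backward-difference derivative term."""
    integ, e_prev = state
    integ += e * dt                 # rectangular integration of the error
    deriv = (e - e_prev) / dt
    u = kp * e + ki * integ + kd * deriv
    return u, (integ, e)

# Illustrative first-order plant x' = -x + u driven to setpoint r = 1.
dt, r, x = 0.1, 1.0, 0.0
state = (0.0, 0.0)
for _ in range(500):
    u, state = pid_step(r - x, state, kp=2.0, ki=1.0, kd=0.0, dt=dt)
    x += dt * (-x + u)              # explicit Euler update of the plant
```

The integral term removes the steady-state offset, so the plant output settles at the setpoint.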

]]>Mathematics doi: 10.3390/math10101761

Authors: Ludovica Tognolatti Cristina Ponti Massimo Santarsiero Giuseppe Schettini

In this paper we present an efficient Matlab computation of a 3-D electromagnetic scattering problem, in which a plane wave impinges with a generic inclination onto a conducting ellipsoid of revolution. This solid is obtained by the rotation of an ellipse around one of its axes, which is also known as a spheroid. We have developed a fast and ad hoc code to solve the electromagnetic scattering problem, using spheroidal vector wave functions, which are special functions used to describe physical problems in which a prolate or oblate spheroidal reference system is considered. Numerical results are presented, both for TE and TM polarization of the incident wave, and are validated by a comparison with results obtained by a commercial electromagnetic simulator.

]]>Mathematics doi: 10.3390/math10101760

Authors: Juliana Castaneda Xabier A. Martin Majsa Ammouriova Javier Panadero Angel A. Juan

Both stochastic and fuzzy uncertainty can be found in most real-world systems, and considering the two simultaneously makes optimization problems especially challenging. In this paper, we analyze the permutation flow shop problem (PFSP) with both stochastic and fuzzy processing times. The main goal is to find the solution (permutation of jobs) that minimizes the expected makespan. However, due to the existence of uncertainty, other characteristics of the solution are also taken into account. In particular, we illustrate how survival analysis can be employed to enrich the probabilistic information given to decision-makers. To solve the aforementioned optimization problem, we extend the concept of a simheuristic framework so it can also include fuzzy elements. Hence, both stochastic and fuzzy uncertainty are simultaneously incorporated in the PFSP. In order to test our approach, classical PFSP instances have been adapted and extended so that processing times become either stochastic or fuzzy. The experimental results show the effectiveness of the proposed approach when compared with more traditional ones.
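The makespan objective referred to above follows the standard flow shop completion-time recurrence; the toy instance below is illustrative, not one of the adapted benchmark instances of the paper.

```python
def makespan(perm, p):
    """Permutation flow shop completion times:
    C[job][machine] = max(C[prev job][same machine],
                          C[same job][prev machine]) + p[job][machine]."""
    m = len(p[0])            # number of machines
    C = [0.0] * m            # rolling row: last scheduled job's completion
                             # time on each machine
    for job in perm:
        prev = 0.0           # this job's completion on the previous machine
        for j in range(m):
            C[j] = max(C[j], prev) + p[job][j]
            prev = C[j]
    return C[-1]             # makespan = completion on the last machine

# Two jobs, two machines; p[job][machine]
p = [[3.0, 2.0], [1.0, 4.0]]
ms_01 = makespan([0, 1], p)  # job 0 first
ms_10 = makespan([1, 0], p)  # job 1 first
```

In a simheuristic, the same recurrence would be evaluated over sampled (stochastic or defuzzified) processing times to estimate the expected makespan.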

]]>Mathematics doi: 10.3390/math10101759

Authors: Natalia Dilna Michal Fečkan

The exact conditions sufficient for the unique solvability of the initial value problem for a system of linear fractional functional differential equations determined by isotone operators are established. In a sense, the conditions obtained are optimal. The method of test elements, intended for the estimation of the spectral radius of a linear operator, is used. The unique solution is presented by means of the Neumann series. All theoretical results are illustrated with examples. A pantograph-type model from electrodynamics is studied.
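As a hedged illustration of the Neumann-series representation mentioned above: when a linear operator has spectral radius below 1, the series &Sigma; A^k b solves (I &minus; A)x = b. The operator here is a small numeric matrix, not the fractional functional operator of the paper.

```python
import numpy as np

def neumann_solve(A, b, terms=200):
    """Approximate (I - A)^(-1) b by the truncated Neumann series
    sum_{k>=0} A^k b, valid when the spectral radius of A is below 1."""
    x = np.zeros_like(b)
    term = b.copy()
    for _ in range(terms):
        x = x + term
        term = A @ term   # next term A^(k+1) b
    return x

A = np.array([[0.2, 0.1], [0.0, 0.3]])   # spectral radius 0.3 < 1
b = np.array([1.0, 2.0])
x = neumann_solve(A, b)
# Agrees with the direct solve of (I - A) x = b.
```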

]]>Mathematics doi: 10.3390/math10101758

Authors: Long-Sheng Liu Qing-Wen Wang Mahmoud Saad Mehany

We derive the solvability conditions and a formula of a general solution to a Sylvester-type matrix equation over Hamilton quaternions. As an application, we investigate the necessary and sufficient conditions for the solvability of the quaternion matrix equation, which involves &eta;-Hermicity. We also provide an algorithm with a numerical example to illustrate the main results of this paper.

]]>Mathematics doi: 10.3390/math10101757

Authors: Sherzod N. Tashpulatov

We consider the wholesale electricity market prices in England and Wales during its complete history, where price-cap regulation and divestment series were introduced at different points in time. We compare the impact of these regulatory reforms on the dynamics of electricity prices. For this purpose, we apply flexible distributions that account for asymmetry, heavy tails, and excess kurtosis usually observed in data or model residuals. The application of skew generalized error distribution is appropriate for our case study. We find that after the second series of divestments, price level and volatility are lower than during price-cap regulation and after the first series of divestments. This finding implies that a sufficient horizontal restructuring through divestment series may be superior to price-cap regulation. The conclusion could be interesting to other countries because the England and Wales electricity market served as the benchmark model for liberalizing energy markets worldwide.

]]>Mathematics doi: 10.3390/math10101756

Authors: Xuan-Yi Xue Si-Rui Wen Jun-Yi Sun Xiao-Ting He

In this study, we analytically solved the thermal stress problem of a bimodular functionally graded bending beam under arbitrary temperature rise modes. First, based on the strain suppression method in a one-dimensional case, we obtained the thermal stress of a bimodular functionally graded beam subjected to bending moment under arbitrary temperature rise modes. Using the stress function method based on compatibility conditions, we also derived two-dimensional thermoelasticity solutions for the same problem under pure bending and lateral-force bending, respectively. During the solution process, the number of unknown integration constants is doubled due to the introduction of the bimodular effect; thus, the determination of these constants depends not only on the boundary conditions, but also on the continuity conditions at the neutral layer. The comparisons indicate that the one- and two-dimensional thermal stress solutions are consistent in essence, with a slight difference in the axial stress, which exactly reflects the distinctions between one- and two-dimensional problems. In addition, the temperature rise modes in this study are not explicitly indicated, which further expands the applicability of the solutions obtained. The originality of this work is that the one- and two-dimensional thermal stress solutions for bimodular functionally graded beams are derived for the first time. The results obtained in this study may serve as a theoretical reference for the analysis and design of beam-like structures with obvious bimodular functionally graded properties in a thermal environment.

]]>Mathematics doi: 10.3390/math10101755

Authors: Yong Pei Churong Chen Dechang Pi

This paper studies a topology identification problem of complex networks with dynamics on different time scales. Using the adaptive synchronization method, some criteria for a successful estimation are obtained. In particular, by regulating the original network to synchronize with an auxiliary chaotic network, this work further explores a way to avoid the precondition of linear independence. When the adaptive controller fails to achieve the outer synchronization, an impulsive control method is used. In the end, we conclude with three numerical simulations. The results obtained in this paper generalize the continuous case, the discrete case with arbitrary time step size, and the mixed case.

]]>Mathematics doi: 10.3390/math10101754

Authors: Yanbing Li Wei Zhao Huilong Fan

The accuracy of short-term traffic flow prediction is one of the important issues in the construction of smart cities, and improving it is an effective way to alleviate traffic congestion. Most previous studies could not effectively mine the potential relationship between the temporal and spatial dimensions of traffic data. Because traffic flow data vary greatly with road conditions, we treat them as dynamic and, in this paper, propose a dynamic-aware graph neural network model for the hidden spatio-temporal relationships in deep learning segments. This model mixes temporal features and spatial features in a graph representation and connects them to each other so that their potential relationships can be learned, enabling more accurate prediction of traffic speed in future time periods. We performed experiments on real data sets and compared the model with several baseline models. The experiments show that the method proposed in this paper has certain advantages.

]]>Mathematics doi: 10.3390/math10101753

Authors: Kamsing Nonlaopon Pshtiwan Othman Mohammed Y. S. Hamed Rebwar Salih Muhammad Aram Bahroz Brzo Hassen Aydi

In this paper, first, we intend to determine the relationship between the sign of &Delta;_{c0}^{&beta;}y(c0+1), for 1&lt;&beta;&lt;2, and &Delta;y(c0+1)&gt;0, in the case we assume that &Delta;_{c0}^{&beta;}y(c0+1) is negative. After that, by considering the sets D_{&#8467;+1,&theta;}&sube;D_{&#8467;,&theta;}, which are subsets of (1,2), we will extend our previous result to make the relationship between the sign of &Delta;_{c0}^{&beta;}y(z) and &Delta;y(z)&gt;0 (the monotonicity of y), where &Delta;_{c0}^{&beta;}y(z) will be assumed to be negative for each z&isin;N_{c0}^{T}:={c0,c0+1,c0+2,&#8943;,T} and some T&isin;N_{c0}:={c0,c0+1,c0+2,&#8943;}. The last part of this work is devoted to seeing, by means of numerical simulation, the possibility of information reduction regarding the monotonicity of y despite the non-positivity of &Delta;_{c0}^{&beta;}y(z).

]]>Mathematics doi: 10.3390/math10101752

Authors: Viktor A. Rukavishnikov Alexey V. Rukavishnikov

We consider the Stokes problem with the homogeneous Dirichlet boundary condition in a polygonal domain with one re-entrant corner on its boundary. We define an R&nu;-generalized solution of the problem in a nonsymmetric variational formulation. The solution defined in this way allows us to construct numerical methods for finding an approximate solution without loss of accuracy. In the paper, the existence and uniqueness of an R&nu;-generalized solution in weighted sets is proved.

]]>Mathematics doi: 10.3390/math10101751

Authors: Habib ur Rehman Wiyada Kumam Kamonrat Sombut

Equilibrium problems arise in a variety of mathematical computing applications, including minimax and numerical programming, saddle-point problems, fixed-point problems, and variational inequalities. In this paper, we introduce improved iterative techniques for evaluating the numerical solution of an equilibrium problem in a Hilbert space with a pseudomonotone and Lipschitz-type bifunction. These techniques are based on two computing steps of a proximal-like mapping with inertial terms. We investigate two simplified stepsize rules that do not require a line search, allowing the technique to be carried out without knowledge of the Lipschitz-type constant of the cost bifunction. Once control parameter constraints are put in place, the iterative sequences converge to a solution of the problem. We prove strong convergence theorems without knowing the Lipschitz-type constants of the bifunction. A sequence of numerical tests was performed, and the results confirmed the correctness and speedy convergence of the new techniques over the traditional ones.
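The two-evaluation structure of such proximal-like methods can be illustrated, under assumptions, by the classical extragradient iteration applied to an affine monotone variational inequality; this is a special case for orientation, not the paper's algorithm.

```python
import numpy as np

def extragradient(F, x0, gamma, iters=2000):
    """Extragradient iteration: two operator evaluations per step,
    y = x - gamma*F(x), then x_next = x - gamma*F(y)."""
    x = x0
    for _ in range(iters):
        y = x - gamma * F(x)
        x = x - gamma * F(y)
    return x

A = np.array([[2.0, 1.0], [-1.0, 2.0]])   # monotone: symmetric part is 2*I
b = np.array([1.0, 1.0])
F = lambda x: A @ x - b                   # affine monotone operator
x_star = np.linalg.solve(A, b)            # the point where F vanishes
# Step size below 1/||A|| (here ||A|| = sqrt(5)) ensures convergence.
x = extragradient(F, np.zeros(2), gamma=0.3)
```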

]]>Mathematics doi: 10.3390/math10101750

Authors: Tonglai Liu Ronghai Luo Longqin Xu Dachun Feng Liang Cao Shuangyin Liu Jianjun Guo

Recently, attention mechanisms combining spatial and channel information have been widely used in various deep convolutional neural networks (CNNs), proving their great potential in improving model performance. However, these usually use 2D global pooling operations to compress spatial information, or scaling methods to reduce the computational overhead of channel attention. Such methods can result in severe information loss. Therefore, we propose a spatial-channel attention mechanism that captures cross-dimensional interaction, does not involve dimensionality reduction, and brings significant performance improvement with negligible computational overhead. The proposed attention mechanism can be seamlessly integrated into any convolutional neural network, since it is a lightweight, general module. Our method achieves an improvement of 2.08% on ResNet and 1.02% on MobileNetV2 in top-1 error rate on the ImageNet dataset.

]]>Mathematics doi: 10.3390/math10101748

Authors: Nicolás Alonso Fernández Coronado Jaime I. García-García Elizabeth H. Arredondo Ismael Andrés Araya Naveas

The competencies that today&rsquo;s citizen must possess have led to a transformation of the teaching of probability, which has been repositioned on the school curriculum from an algorithmic view to a practical one based on the understanding of the concepts and their application in daily life. In this task, the understanding of the binomial distribution is essential, as it allows the analysis of discrete data, the modeling of random situations, and the learning of other notions. However, weaknesses with respect to the binomial distribution are identified in both teachers and students, attributed to a lack of knowledge of its origin and meaning throughout history. For this reason, our work consists of the identification of its partial meanings and essential components, as well as their relationships, from a historical epistemological study and its analysis, based on the notions of the Ontosemiotic Approach (OSA) to Mathematical Knowledge and Instruction and the specialized literature on statistics and probability. As a result of our work, we present a reconstruction of the holistic meaning of the binomial distribution from the elements that compose it, which are essential for didactic purposes such as the identification and resolution of learning conflicts, the design of evaluation criteria, and teacher education.

]]>Mathematics doi: 10.3390/math10101749

Authors: Khizer Mehmood Naveed Ishtiaq Chaudhary Zeshan Aslam Khan Muhammad Asif Zahoor Raja Khalid Mehmood Cheema Ahmad H. Milyani

Swarm intelligence-based metaheuristic algorithms have attracted the attention of the research community and have been exploited for effectively solving different optimization problems of engineering, science, and technology. This paper considers the parameter estimation of the control autoregressive (CAR) model by applying a novel swarm intelligence-based optimization algorithm called the Aquila optimizer (AO). The parameter tuning of the AO is performed statistically on different generations and population sizes. The performance of the AO is investigated statistically at various noise levels for the parameters with the best tuning. The robustness and reliability of the AO are carefully examined under various scenarios for CAR identification. The experimental results indicate that the AO is accurate, convergent, and robust for parameter estimation of CAR systems. The comparison of the AO heuristics with recent state-of-the-art counterparts through nonparametric statistical tests established the efficacy of the proposed scheme for CAR estimation.

]]>Mathematics doi: 10.3390/math10101747

Authors: Carmen López-Esteban Fernando Almaraz-Menéndez

The Economic Society of Friends of the Country of Asturias, Spain, was an instrument of enlightened reformism which operated in a region suffering serious economic backwardness. It was founded in 1780 at the initiative of Campomanes and followed the Matritense model, meaning that it focused on economic development and popular education. It is known that Jovellanos directly participated in the establishment of the Royal Institute of Nautical Studies and Mineralogy of Gij&oacute;n. In this work, the historical method of research in education was used with the objective of determining the sociogenesis of the kind of technical mathematics taught in this Institute. The results show the role of the Asturian Economic Society in the creation of the Institute; we also analysed the mathematics teaching and curriculum there and whether it was in line with the internal debates of the discipline at that historical moment. The limitations of the Jovellanista model of the Technical Training School for Sailors and Miners created in Gij&oacute;n are made clear, although the attempt to start it up in such a peripheral place is no less remarkable.

]]>Mathematics doi: 10.3390/math10101745

Authors: Ivan Gonzalez Igor Kondrashuk Victor H. Moll Alfredo Vega

Analytic expressions for the N-dimensional Debye function are obtained by the method of brackets. The new expressions are suitable for the study of the heat capacity of solids and for the analysis of the asymptotic behavior of this function in both the high- and low-temperature limits.
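For readers unfamiliar with the function, the standard integral definition and the two limits mentioned in the abstract can be recorded (a textbook definition, not the paper's bracket-series expressions):

```latex
D_n(x) = \frac{n}{x^{n}} \int_{0}^{x} \frac{t^{n}}{e^{t}-1}\,dt,
\qquad
D_n(x) \xrightarrow[x \to 0^{+}]{} 1,
\qquad
D_n(x) \sim \frac{n\,\Gamma(n+1)\,\zeta(n+1)}{x^{n}} \quad (x \to \infty).
```

The small-$x$ (high-temperature) expansion follows from $1/(e^t-1) = 1/t - 1/2 + \cdots$, giving $D_n(x) = 1 - \frac{n}{2(n+1)}x + O(x^2)$; the large-$x$ (low-temperature) form follows from $\int_0^\infty t^n/(e^t-1)\,dt = \Gamma(n+1)\,\zeta(n+1)$.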

]]>Mathematics doi: 10.3390/math10101746

Authors: Jianguo Zhang Peitao Li Xin Yin Sheng Wang Yuanguang Zhu

The mechanical parameters of the surrounding rock are an essential basis for roadway excavation and support design. To address the difficulty of obtaining these parameters and the large experimental errors involved, an optimized BP neural network model is proposed in this paper. The mind evolutionary algorithm can adequately search for optimal initial weights and thresholds, while the neural network has strong nonlinear prediction ability; the optimized BP neural network model (MEA-BP model) combines the advantages of both. It can not only avoid the local-extremum problem but also improve the accuracy and reliability of the prediction results. Training samples and test samples are established based on the orthogonal test method and the finite element analysis method. The nonlinear relationship between rock mechanical parameters and roadway deformation is established by the BP model and the MEA-BP model, respectively. An importance analysis of the three input variables shows that &#8710;D is the most important input variable, while &#8710;BC has the smallest impact. A comparison of prediction performance between the MEA-BP model and the BP model demonstrates that the optimized initial weights and thresholds improve the accuracy of the predicted values. Finally, the MEA-BP model has been successfully applied to predicting the mechanical parameters of the surrounding rock in the Pingdingshan mine area, which confirms the accuracy and reliability of the optimized model.
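The two-stage idea (evolutionary search for initial weights, then gradient refinement) can be sketched on synthetic data; the target function, network size, and both search loops below are simplified assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic stand-in for the rock-parameter -> roadway-deformation mapping
X = rng.uniform(-1, 1, (200, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 - X[:, 2]

H = 6                           # hidden units
n_params = 3 * H + H + H + 1    # W1, b1, W2, b2 flattened into one vector

def predict(w, X):
    W1 = w[:3 * H].reshape(3, H)
    b1 = w[3 * H:4 * H]
    W2 = w[4 * H:5 * H]
    b2 = w[5 * H]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def loss(w):
    return np.mean((predict(w, X) - y) ** 2)

# Stage 1: evolutionary search for good initial weights
# (a crude stand-in for the mind evolutionary algorithm)
pop = rng.normal(0, 0.5, (40, n_params))
for gen in range(30):
    elite = pop[np.argsort([loss(w) for w in pop])[:10]]
    pop = elite[rng.integers(0, 10, 40)] + rng.normal(0, 0.1, (40, n_params))
w = min(pop, key=loss)
init_loss = loss(w)

# Stage 2: gradient refinement (numerical gradient, step halving for stability)
def num_grad(w, eps=1e-5):
    g = np.zeros_like(w)
    for k in range(len(w)):
        d = np.zeros_like(w)
        d[k] = eps
        g[k] = (loss(w + d) - loss(w - d)) / (2 * eps)
    return g

lr = 0.05
for _ in range(300):
    w_new = w - lr * num_grad(w)
    if loss(w_new) < loss(w):
        w = w_new
    else:
        lr *= 0.5

print(round(init_loss, 4), round(loss(w), 4))
```

The step-halving rule makes the refined loss non-increasing, which is the practical point of the hybrid scheme: the evolutionary stage supplies a good starting point, and local descent then polishes it.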

]]>Mathematics doi: 10.3390/math10101744

Authors: Qing Tian Chih-Chiang Fang Chun-Wu Yeh

In the software development life cycle, the quality and reliability of software are critical to software developers. Poor quality and reliability not only cause the loss of customers and sales but also increase operational risk due to unreliable code. Therefore, software developers should try their best to reduce potential software defects by undertaking a software testing project. However, pursuing perfect and faultless software is unrealistic, since the budget, time, and testing resources are limited, and software developers need to reach a compromise that balances software reliability and testing cost. Using the model presented in this study, software developers can devise multiple alternatives for a software testing project, each with its own allocation of human resources, and select the best alternative. Furthermore, the allocation incorporates debuggers&rsquo; learning and negligence factors, both of which influence the efficiency of software testing in practice. Accordingly, the study considers both human factors and the nature of errors during the debugging process to develop a software reliability growth model that estimates the related costs and the reliability indicator. Additionally, the issue of error classification is extended by considering the impact of errors on the system, and the expected time required to remove simple or complex errors can be estimated based on different truncated exponential distributions. Finally, numerical examples are presented and sensitivity analyses are performed to provide managerial insights and useful directions for software release strategies.
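The truncated-exponential point can be made concrete. Assuming a fix time that is exponential with rate &lambda; but must complete within a deadline b (a truncated exponential; the rates and deadline below are illustrative, not from the paper), the expected removal time has the closed form E[T] = 1/&lambda; &minus; b/(e^(&lambda;b) &minus; 1):

```python
import numpy as np

def mean_truncated_exp(lam, b):
    """E[T] for T ~ Exp(lam) truncated to [0, b]: 1/lam - b/(exp(lam*b) - 1)."""
    return 1.0 / lam - b / np.expm1(lam * b)

def mean_by_quadrature(lam, b, n=400_000):
    """Numerical check: integrate t * pdf(t) over [0, b] for the truncated density."""
    t = np.linspace(0.0, b, n)
    pdf = lam * np.exp(-lam * t) / (-np.expm1(-lam * b))
    return float(np.sum(t * pdf) * (t[1] - t[0]))

# simple errors are removed faster (higher rate) than complex ones
lam_simple, lam_complex, deadline = 2.0, 0.5, 4.0
print(round(mean_truncated_exp(lam_simple, deadline), 4))   # ~0.4987
print(round(mean_truncated_exp(lam_complex, deadline), 4))  # ~1.3739
```

Truncation pulls the mean below 1/&lambda;, and the effect is much stronger for slow (complex-error) rates, which is why the two error classes are modeled with different truncated exponentials.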

]]>Mathematics doi: 10.3390/math10101743

Authors: Palanivel Kaliyaperumal Amrit Das

The problem of optimizing an objective function subject to equality and inequality constraints is addressed by nonlinear programming (NLP). If all of the functions are linear, the problem is a linear program; otherwise, it is referred to as a nonlinear program. The development of highly efficient and robust linear programming (LP) algorithms and software, the advent of high-speed computers, and practitioners&rsquo; wider understanding of mathematical modeling and analysis have all contributed to LP&rsquo;s importance in solving problems in a variety of fields. However, due to nonlinearity in the objective function or the constraints, several practical situations cannot be adequately modeled as a linear program, and efforts to solve such nonlinear problems quickly and efficiently have made rapid progress in recent decades. Because of the uncertainty that exists in all aspects of nature and human life, these models must be viewed through a system known as a fuzzy system. In this article, a new fuzzy model is proposed to address the vagueness present in nonlinear programming problems (NLPPs). The proposed model is described, and its mathematical formulation and detailed computational procedure are shown with numerical illustrations employing trapezoidal fuzzy membership functions (TFMFs). Here, the computational procedure plays an important role in acquiring the optimum result, utilizing the necessary and sufficient conditions of the Lagrangian multiplier method in terms of fuzziness. Additionally, the proposed model builds on previous research in the literature, and the obtained optimal result is justified with TFMFs. A performance evaluation of the model was completed with different sets of inputs, followed by a comparison analysis, results, and discussion.
Lastly, the performance evaluation shows that the proposed model is highly effective. The code to solve the model is implemented in LINGO, which comes with a collection of built-in solvers for various problems.
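Since the model's fuzziness is carried by trapezoidal fuzzy membership functions, a minimal sketch of a TFMF may help; the corner points below are illustrative, not taken from the paper:

```python
def tfmf(x, a, b, c, d):
    """Trapezoidal fuzzy membership function with corners a < b <= c < d:
    0 outside [a, d], 1 on [b, c], linear ramps in between."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# a trapezoidal fuzzy number "roughly between 2 and 4"
print([tfmf(x, 1, 2, 4, 6) for x in (0, 1.5, 3, 5)])  # → [0.0, 0.5, 1.0, 0.5]
```

In a fuzzy NLPP, each imprecise coefficient is represented by such a four-parameter trapezoid, and the optimization is carried out over membership grades rather than crisp values.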

]]>Mathematics doi: 10.3390/math10101742

Authors: Yonggang Chen Yu Qiao Xiangtuan Xiong

The inverse and ill-posed problem of determining a solute concentration for the two-dimensional nonhomogeneous fractional diffusion equation is investigated. This problem is much more severely ill-posed than its homogeneous counterpart because of the source term. We propose a modified kernel regularization technique for the stable numerical reconstruction of the solution. Convergence estimates under both a priori and a posteriori parameter choice rules are proven.

]]>Mathematics doi: 10.3390/math10101741

Authors: Yuanyuan Jiang Xingzhong Xu

In classical statistics, the primary test statistic is the likelihood ratio. However, for high dimensional data, the likelihood ratio test is no longer effective and sometimes fails altogether. By replacing the maximum of the likelihood with the integral of the likelihood, the Bayes factor is obtained. The posterior Bayes factor is the ratio of the integrals of the likelihood function with respect to the posterior. In this paper, we investigate the performance of the posterior Bayes factor in high dimensional hypothesis testing through the problem of testing the equality of two multivariate normal mean vectors. The asymptotic normality of a linear function of the logarithm of the posterior Bayes factor is established. We then construct a test with an asymptotically nominal significance level and derive its asymptotic power. Simulation results and an application example are presented, which show good performance of the test. Hence, taking the posterior Bayes factor as a statistic in high dimensional hypothesis testing is a reasonable methodology.

]]>Mathematics doi: 10.3390/math10101740

Authors: Friedrich Götze Andrei Yu. Zaitsev

The paper studies a connection between the Littlewood&ndash;Offord problem and estimates of the concentration functions of some symmetric infinitely divisible distributions. It is shown that the concentration function of a weighted sum of independent identically distributed random variables is bounded in terms of the concentration function of a symmetric infinitely divisible distribution whose spectral measure is concentrated on the set of plus-minus weights.

]]>Mathematics doi: 10.3390/math10101738

Authors: Carlos M. Avendaño-Lopez Rogelio Castro-Sanchez Dora L. Almanza-Ojeda Juan Gabriel Avina-Cervantes Miguel A. Gomez-Martinez Mario A. Ibarra-Manzano

This paper proposes a visible light positioning system that utilizes commercial Light-Emitting Diode (LED) lamps as transmitters and silicon PIN photodiodes as receivers. The light signals are transmitted and received using Intensity Modulation and Direct Detection (IMDD). The lamps are modulated using On&ndash;Off Keying (OOK) with the Manchester code, and medium access control is achieved by Time-Division Multiplexing (TDM). The position is estimated using trilateration based on the Received Signal Strength (RSS). The system&rsquo;s scalability is accomplished by replicating primary localization cells composed of seven lamps and drawing on neighborhood synchrony, exploiting the spatial multiplexing property of light. A basic unit in the cell comprises three lamps forming a localization triangle; one primary localization cell thus consists of six triangles sharing lamps among neighboring basic units. A cell prototype was implemented to prove the working principle of the system. Three estimation methods were used to compute the position: a deterministic approach based on least-squares regression, an Artificial Neural Network (ANN) per lamp, and an ANN for the complete system. The best per-lamp estimator was the ANN, computing positions with an experimental accuracy of 2.5 cm under indoor conditions.
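The trilateration step can be sketched as follows: given three lamp positions and RSS-derived distances, subtracting the first circle equation from the others yields a linear system solvable by least squares. The lamp layout and target position below are invented for illustration:

```python
import numpy as np

def trilaterate(anchors, dists):
    """Least-squares 2D position from anchor coordinates and measured distances.
    Linearizes by subtracting the first circle equation from the rest."""
    x0, y0 = anchors[0]
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], dists[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(dists[0]**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol

lamps = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0)]       # one localization triangle
target = np.array([0.7, 1.1])
d = [np.hypot(*(target - np.array(l))) for l in lamps]
print(np.round(trilaterate(lamps, d), 3))  # → [0.7 1.1]
```

With noisy RSS-derived distances the same least-squares solve returns the best linear fit rather than the exact point, which is why the paper also evaluates ANN-based estimators.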

]]>Mathematics doi: 10.3390/math10101739

Authors: Miao Ye Qinghao Zhang Ruoyu Wei Yong Wang Xiaofang Deng

In distributed storage systems, when data need to be recovered after a node failure, erasure-code redundancy occupies less storage space than multi-copy replication. Existing repair mechanisms that use erasure codes to reconstruct a failed node consider only the effect of link bandwidth on the repair rate, not the impact of the selection of the data-providing node set on repair performance. To solve these problems, a single-node fault data reconstruction method based on the Software Defined Network (SDN) using erasure codes is designed. This method collects the network link state through the SDN, establishes a multi-attribute decision-making model of the data-providing node set based on node performance, and determines the nodes participating in providing data through the ideal point method. Then, the data recovery problem of a single failed node is modeled as the optimization of an optimal repair tree, and a hybrid genetic algorithm is designed to solve it. The experimental results show that, at the same erasure-code scale, after selecting the nodes of the data-providing node set, the repair delay of the designed repair method is reduced by 15% and 45% compared with the traditional tree topology and star topology, respectively, while the repair traffic is close to that of the star topology and is reduced by 40% compared with traditional tree repair.
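The multi-attribute selection of data-providing nodes via the ideal point method can be sketched as a TOPSIS-style ranking; the attributes, weights, and scores below are invented for illustration, not taken from the paper:

```python
import numpy as np

# candidate provider nodes scored on (bandwidth, load, latency) -- illustrative data
M = np.array([
    [100.0, 0.3, 5.0],
    [ 80.0, 0.1, 8.0],
    [ 60.0, 0.7, 2.0],
    [ 90.0, 0.5, 4.0],
])
benefit = np.array([True, False, False])   # only bandwidth is higher-is-better
w = np.array([0.5, 0.2, 0.3])              # attribute weights (sum to 1)

V = w * M / np.linalg.norm(M, axis=0)      # vector-normalized, weighted matrix
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_pos = np.linalg.norm(V - ideal, axis=1)  # distance to the ideal point
d_neg = np.linalg.norm(V - anti, axis=1)   # distance to the anti-ideal point
closeness = d_neg / (d_pos + d_neg)        # 1 = at the ideal point
print(np.argsort(-closeness))              # nodes ranked best-first
```

Nodes with the highest closeness to the ideal point are the ones selected to provide data; the repair tree is then built over this subset.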

]]>Mathematics doi: 10.3390/math10101736

Authors: Tzu-Chi Huang Guo-Hao Huang Ming-Fong Tsai

The MapReduce architecture can reliably distribute massive datasets to cloud worker nodes for processing. When each worker node processes its input data, the Map program generates intermediate data that are used by the Reduce program for integration. However, as the worker nodes process MapReduce tasks, the amount of intermediate data created differs between nodes, due to variation in the operating-system environments and the input data; this results in laggard nodes and affects the completion time of each small-scale cloud application task. In this paper, we propose a dynamic task adjustment mechanism with an intermediate-data processing cycle prediction algorithm, with the aim of improving the execution performance of small-scale cloud applications. Our mechanism dynamically adjusts the number of Map and Reduce program tasks based on the intermediate-data processing capabilities of each cloud worker node, in order to mitigate the performance degradation caused by laggards on the Google Cloud platform (Hadoop cluster). The proposed dynamic task adjustment mechanism was compared with a simulated Hadoop system in a performance analysis, and an improvement of at least 5% in processing efficiency was found for a small-scale cloud application.

]]>Mathematics doi: 10.3390/math10101735

Authors: Song-Kyoo (Amang) Kim

This article analyzes the behavior of a Brownian fluctuation process under a mixed strategic game setup. A variant of a compound Brownian motion has been newly proposed, which is called the Shifted Brownian Fluctuation Process to predict the turning points of a stochastic process. This compound process evolves until it reaches one step prior to the turning point. The Shifted Brownian Fluctuation Game has been constructed based on this new process to find the optimal moment of actions. Analytically tractable results are obtained by using the fluctuation theory and the mixed strategy game theory. The joint functional of the Shifted Brownian Fluctuation Process is targeted for transformation of the first passage time and its index. These results enable us to predict the moment of a turning point and the moment of actions to obtain the optimal payoffs of a game. This research adapts the theoretical framework to implement an autonomous trader for value assets including stocks and cybercurrencies.
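As a numerical companion (standard Brownian motion only, not the paper's shifted compound process, whose joint functional is handled analytically), the first passage time of a threshold can be estimated by simulation; the level, step size, and horizon are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

def first_passage_time(level, dt=1e-3, t_max=10.0):
    """First time a simulated standard Brownian path reaches `level`, or None."""
    steps = rng.normal(0.0, np.sqrt(dt), int(t_max / dt))
    path = np.cumsum(steps)
    hit = int(np.argmax(path >= level))
    if path[hit] < level:
        return None  # never crossed within the horizon
    return (hit + 1) * dt

times = [first_passage_time(0.5) for _ in range(2000)]
times = [t for t in times if t is not None]
# By the reflection principle, P(tau_a <= t) = 2 * P(W_t >= a); without horizon
# truncation the median of tau_0.5 is about (0.5 / 0.6745)^2 ≈ 0.55.
print(len(times), round(float(np.median(times)), 2))
```

Monte Carlo checks like this are what the analytic fluctuation-theory results replace: the paper's transforms give the passage time and its index in closed form rather than by path simulation.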

]]>Mathematics doi: 10.3390/math10101737

Authors: Sumit Pant Ebrahem A. Algehyne

The motive of this work is to numerically evaluate the effect of variable gravitational fields and varying viscosity on the onset of convection in a ferromagnetic fluid layer. The fluid layer is bounded by two free boundaries and subject to gravitational fields that vary with distance across the layer. The authors hypothesized two categories of gravitational field variation, subdivided into six distinct cases: (i) f(z) = z, (ii) f(z) = e^z, (iii) f(z) = log(1+z), (iv) f(z) = &minus;z, (v) f(z) = &minus;z^2, and (vi) f(z) = z^2 &minus; 2z. The normal mode method was applied, and the single-term Galerkin approach was used to solve the ensuing eigenvalue problem. The results imply that, in the first three cases, the gravity variation parameter speeds up the onset of convection, while, in the last three cases, the viscosity variation parameter and gravity variation parameter slow down the onset of convection. It was also observed that, in the absence of the viscosity variation parameter, the non-buoyancy magnetization parameter has a destabilizing effect on the onset of convection but, in the presence of the viscosity variation parameter, it may have either a destabilizing or a stabilizing effect. In the case of oscillatory convection, the results illustrate that oscillatory modes are not permitted, confirming the validity of the principle of exchange of stabilities. Additionally, it was discovered that the system is most stable in case (vi) and most unstable in case (ii).

]]>Mathematics doi: 10.3390/math10101734

Authors: Cláudio Moura Silva Sérgio Ronaldo Granemann Patricia Guarnieri Gladston Luiz Da Silva

The vast Brazilian territory and the accelerated economic growth of the cities of the country&rsquo;s interior in recent years have created a favourable environment for the expansion of regional aviation. In 2015, the Brazilian Government launched a program of investments in regional airports, equipping them to receive commercial flights. However, the economic crisis and the scarcity of resources drive the prioritisation of projects with a greater economic and social return. This article aims to present a multicriteria decision aid (MCDA) model to measure cities&rsquo; attractiveness for receiving investments in regional airports. The MCDA approach can deal with multiple indicators and different points of view and provides systematised steps for supporting decision-makers. For this purpose, we selected 12 criteria among the evaluation parameters identified in the literature, which led to the construction of the evaluation model and the elaboration of a ranking of the localities participating in the investment program. This study contributes scientifically by proposing the use of an MCDA approach to support decisions related to logistics and infrastructure. It can help managers and practitioners with a structured and systematised model to address decisions related to airport investments.

]]>Mathematics doi: 10.3390/math10101733

Authors: Shaojie Ai Jia Song Guobiao Cai

The remaining useful life (RUL) of an unmanned aerial vehicle (UAV) is primarily determined by the discharge state of the lithium-polymer battery and the expected flight maneuver. It needs to be accurately predicted to measure the UAV&rsquo;s capacity to perform future missions. However, existing works usually provide a one-step prediction based on a single feature, which cannot meet the reliability requirements. This paper provides a multilevel fusion transformer-network-based sequence-to-sequence model to predict the RUL of a highly maneuverable UAV. The end-to-end method is improved by introducing an external factor attention and a multi-scale feature mining mechanism. Simulation experiments are conducted based on a high-fidelity quad-rotor UAV electric propulsion model. The proposed method predicts more rapidly and precisely than the state of the art: it can predict the future RUL sequence over four times the observation length (32 s) with a precision of 83%, within 60 ms.

]]>Mathematics doi: 10.3390/math10101732

Authors: Leishi Wang Mingtao Li Xin Pei Juan Zhang

China&rsquo;s livestock output has been growing, but domestic livestock products such as beef, mutton, and pork have been unable to meet domestic consumers&rsquo; demands. The imbalance between supply and demand causes unstable livestock prices and affects profits from livestock. Therefore, the purpose of this paper is to provide optimal breeding strategies for livestock farmers to maximize profits and adjust the balance between supply and demand. Firstly, when prices change, livestock farmers may respond in one of two ways: keeping the scale of livestock fixed regardless of the price, or adjusting the scale with the price. Combining a price model with these two behaviors, two livestock breeding models were established. Secondly, we proposed four optimal breeding strategies based on these models; the main research method is Pontryagin&rsquo;s Maximum Principle, and the optimal breeding strategies are achieved by controlling the growth and output of livestock. Their existence was also verified. Finally, we simulated two situations and found the most suitable strategy for each by comparing the profits of the four strategies. From this, we obtained several conclusions: the optimal strategy under constant prices is not always reasonable; the effect of price on livestock can promote a faster balance; to earn higher profits, livestock farmers should adjust the farm&rsquo;s productivity reasonably; and it is necessary to calculate the optimal strategy results under different behaviors.

]]>Mathematics doi: 10.3390/math10101731

Authors: Sameh Shenawy Carlo Alberto Mantica Luca Guido Molinari Nasser Bin Turki

Sufficient conditions for a Lorentzian generalized quasi-Einstein manifold (M, g, f, &mu;) to be a generalized Robertson&ndash;Walker spacetime with Einstein fibers are derived. The Ricci tensor in this case takes the perfect fluid form. Likewise, it is proven that a (&lambda;, n+m)-Einstein manifold (M, g, w) having harmonic Weyl tensor, &nabla;^j w &nabla;^m w C_{jklm} = 0 and &nabla;^l w &nabla;_l w &lt; 0 reduces to a perfect fluid generalized Robertson&ndash;Walker spacetime with Einstein fibers. Finally, (M, g, w) reduces to a perfect fluid manifold if &phi; = &minus;m &nabla; ln w is a &phi;Ric-vector field on M, and to an Einstein manifold if &psi; = &nabla;w is a &psi;Ric-vector field on M. Some consequences of these results are considered.

]]>Mathematics doi: 10.3390/math10101730

Authors: Tao Li Qing-Wen Wang Xin-Fang Zhang

This paper proposes a modified conjugate residual method for solving the generalized coupled Sylvester tensor equations. To further improve its convergence rate, we derive a preconditioned modified conjugate residual method based on Kronecker product approximations for solving the tensor equations. A theoretical analysis shows that the proposed method converges to an exact solution from any initial tensor in at most finitely many steps in the absence of round-off errors. Compared with a modified conjugate gradient method, the numerical results illustrate that our methods perform much better in terms of the number of iteration steps and computing time.
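For orientation, a minimal matrix (not tensor) version of the conjugate residual iteration is sketched below; the paper's method operates on coupled Sylvester tensor equations with Kronecker-product preconditioning, which this plain sketch does not attempt:

```python
import numpy as np

def conjugate_residual(A, b, tol=1e-10, max_iter=1000):
    """Conjugate residual iteration for a symmetric matrix A."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    Ar = A @ r
    Ap = Ar.copy()
    rAr = r @ Ar
    for _ in range(max_iter):
        alpha = rAr / (Ap @ Ap)        # minimizes the residual norm along p
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        Ar = A @ r
        rAr_new = r @ Ar
        beta = rAr_new / rAr           # keeps the A*p directions A-orthogonal
        p = r + beta * p
        Ap = Ar + beta * Ap            # update A@p without an extra product
        rAr = rAr_new
    return x

rng = np.random.default_rng(3)
M = rng.normal(size=(30, 30))
A = M @ M.T + 30 * np.eye(30)          # symmetric positive definite test matrix
b = rng.normal(size=30)
x = conjugate_residual(A, b)
print(np.linalg.norm(A @ x - b) < 1e-8)  # → True
```

In exact arithmetic the iteration terminates in at most n steps, which is the finite-termination property the abstract refers to for the tensor analogue.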

]]>Mathematics doi: 10.3390/math10101729

Authors: José Luis Díaz Palencia Julián Roa González Almudena Sánchez Sánchez

The goal of the present study is to characterize solutions, under a travelling wave formulation, to a degenerate Fisher-KPP problem. By the degenerate problem, we refer to the following: a heterogeneous diffusion formulated with a high-order operator, a non-linear advection, and a non-Lipschitz spatially heterogeneous reaction. The paper examines the existence of solutions, uniqueness, and travelling wave oscillatory properties (also called instabilities). Such oscillatory behaviour may lead to negative solutions in the proximity of zero. A numerical exploration is provided, with the following main finding: the solutions keep oscillating in the proximity of the null stationary solution due to the high-order operator, except when the reaction term is quasi-Lipschitz, in which case it is possible to define a region where solutions are positive locally in time.

]]>Mathematics doi: 10.3390/math10101727

Authors: Aliya Naaz Siddiqui Ali Hussain Alkhaldi Lamia Saeed Alqahtani

The geometry of Hessian manifolds is a fruitful branch of physics, statistics, and Kaehlerian and affine differential geometry. The study of inequalities for statistical submanifolds in Hessian manifolds of constant Hessian curvature was initiated in 2018 by Mihai, A. and Mihai, I., who dealt with the Chen-Ricci and Euler inequalities. Later on, Siddiqui, A.N., Ahmad, K. and Ozel, C. studied the Casorati inequality for statistical submanifolds in the same ambient space using an algebraic technique. Also, Chen, B.-Y., Mihai, A. and Mihai, I. obtained a Chen first inequality for such submanifolds. In 2020, Mihai, A. and Mihai, I. studied the Chen inequality for the &delta;(2,2)-invariant. Continuing this line of work, we establish the generalized Wintgen inequality for statistical submanifolds in Hessian manifolds of constant Hessian curvature. Some examples are discussed at the end.

]]>Mathematics doi: 10.3390/math10101728

Authors: Munayim Dilxat Shoulan Gao Dong Liu Limeng Xia

This paper mainly considers a class of non-weight modules over a Lie algebra of Weyl type. First, we construct the U(h)-free modules of rank one over the differential operator algebra. Then, we characterize the tensor products of these kinds of modules and the quasi-finite highest weight modules. Finally, we carry out the same study for the differential operator algebra in several variables.

]]>Mathematics doi: 10.3390/math10101726

Authors: Tahir Mahmood Ubaid ur Rehman Zeeshan Ali Muhammad Aslam Ronnason Chinram

The idea of bipolar complex fuzzy (BCF) sets, as a genuine modification of both bipolar fuzzy sets and complex fuzzy sets, provides a valuable framework for representing and evaluating ambiguous information. In intelligent decision making based on BCF sets, comparing or ranking positive and negative membership grades is a critical issue. In this framework, we deliberate various techniques for aggregating a collection of information into a single value, called the BCF weighted arithmetic averaging (BCFWAA), BCF ordered weighted arithmetic averaging (BCFOWAA), BCF weighted geometric averaging (BCFWGA), and BCF ordered weighted geometric averaging (BCFOWGA) operators for BCF numbers (BCFNs). To illustrate the feasibility and originality of the proposed approaches, we demonstrate various properties of the proposed operators, including the property that the aggregated value of a set of BCF numbers is a unique BCF number. Further, multiattribute decision making (&ldquo;MADM&rdquo;) refers to a technique employed to compute a brief and dominant assessment of opinions with multiple attributes. The main contribution of this work is the implementation of the proposed operators in the field of MADM under BCF settings. Finally, a benchmark problem is used for comparison with various prevailing techniques to justify the validity and superiority of the proposed operators.

]]>Mathematics doi: 10.3390/math10101725

Authors: Yaning Yu Ziye Zhang

In this paper, the problem of state estimation for complex-valued inertial neural networks with leakage, additive and distributed delays is considered. By means of the Lyapunov&ndash;Krasovskii functional method, the Jensen inequality, and the reciprocally convex approach, a delay-dependent criterion based on linear matrix inequalities (LMIs) is derived. At the same time, the network state is estimated by observing the output measurements to ensure the global asymptotic stability of the error system. Finally, two examples are given to verify the effectiveness of the proposed method.

]]>Mathematics doi: 10.3390/math10101723

Authors: Taiguang Gao Kui Wang Yali Mei Shan He Yanfang Wang

The free-riding behavior of companies that do not act brings losses to companies that provide services. A market consists of two secondary supply chains of manufacturers and retailers, and each supply chain can choose to adopt promotional strategies to expand its market demand. This paper constructs centralized decision-making within each supply chain and a Nash game competition model between the supply chains, and primarily studies the impact of risk aversion and the free-riding coefficient on supply chain pricing, promotion strategy selection, and expected utility. We show that the supply chain with high risk aversion has relatively low pricing, but its demand and total expected utility are high. We also identify that, given the same degree of risk aversion in the two supply chains, when the free-riding coefficients between the chains are small and equal, the supply chains tend to implement the promotion strategy. When consumers have the same preference for the products of the two retailers, the pricing of the free-riding supply chain increases with the free-riding coefficient, while that of the supply chain with a promotion strategy decreases. Based on the numerical results, we further give the optimal one-way free-riding coefficient when the two supply chains have the same degree of risk aversion; when there is bidirectional free-riding behavior in the market, competition among supply chains gradually tends toward the first two scenarios.

]]>Mathematics doi: 10.3390/math10101724

Authors: Muhammad Qasim Hind Alamri Ishak Altun Nawab Hussain

Considering the &omega;-distance function defined by Kosti&#263; in proximity spaces, we prove the Matkowski and Boyd&ndash;Wong fixed-point theorems in proximity spaces using the &omega;-distance and provide some examples to explain the novelty of our work. Moreover, we characterize an Edelstein-type fixed-point theorem in compact proximity spaces. Finally, we investigate an existence and uniqueness result for the solution of a second-order boundary value problem via the obtained Matkowski-type fixed-point results under suitable conditions.

]]>Mathematics doi: 10.3390/math10101722

Authors: Maoxiong Liao Tao Zhang Jinggu Cao

A time-domain adaptive algorithm was developed for solving elasto-dynamics problems through a mixed meshless local Petrov-Galerkin finite volume method (MLPG5). In this time-adaptive algorithm, each time-dependent variable is interpolated by an nth-order time series function, where n is determined by a criterion at each step. The high-order series expansion of the variables brings high accuracy in the time domain, especially for the elasto-dynamic equations, which are second-order PDEs in time. In the present mixed MLPG5 dynamic formulation, the strains are interpolated independently, as are the displacements, in the local weak form, which eliminates the expensive differentiation of the shape functions; in the traditional MLPG5, both the shape function and its derivative must be calculated for each node. By taking the Heaviside function as the test function, the local domain integration of the stiffness matrix is avoided. Several numerical examples, including comparisons of our method, the MLPG5&ndash;Newmark method, and FEM (ANSYS), are given to demonstrate the advantages of the presented method: (1) a large time step can be used in solving an elasto-dynamics problem; (2) computational efficiency and accuracy are improved in both space and time; (3) smaller support sizes can be used in the mixed MLPG5.

]]>Mathematics doi: 10.3390/math10101721

Authors: Florin Leon Mircea Hulea Marius Gavrilescu

Recent advancements in artificial intelligence and machine learning have led to the development of powerful tools for use in problem solving in a wide array of scientific and technical fields [...]

]]>Mathematics doi: 10.3390/math10101720

Authors: Manuel Jesús Cardoso-Pulido Juan Ramón Guijarro-Ojeda Cristina Pérez-Valverde

The teaching profession carries an important emotional burden that, together with the erosion of the different elements that compose it (from continuous educational reform to the bad behavior and demotivation of students), has led many teachers to experience physical and psychological illness or to leave the profession. Nevertheless, studies and interventions in this regard are still insufficient in the Spanish context. This situation also affects pre-service teachers exponentially; according to numerous studies, this is the stage during which the decline of teacher well-being begins and consolidates. Within this panorama, the authors of this study seek to determine which dimensions of teacher well-being can predict the professional success of 88 pre-service primary education teachers specializing in a foreign language, so that these dimensions can be addressed in the training process. To this end, an ex post facto study was carried out correlating the following instruments: the Teacher Distress Questionnaire, the Trait Emotional Intelligence Questionnaire and the Maslach Burnout Inventory-Educators Survey, with an adaptation of the Rueda de la vida escolar sobre el &eacute;xito y la satisfacci&oacute;n laboral del docente&nbsp;(Wheel of school life on teacher success and job satisfaction). Multiple linear regression revealed that, of all the variables studied for teacher well-being (intrinsic motivation, expectations about good professional performance, professional distress, professional exhaustion, irrational beliefs, emotional intelligence and burnout), only emotional intelligence and intrinsic motivation can predict the success of teachers in training in their future professional performance. This result is of paramount importance for reconsidering the training that teachers receive during their university stage, which currently and substantially prioritizes the cognitive component over the psychosocial and emotional components.

]]>Mathematics doi: 10.3390/math10101719

Authors: Nazlıhan Terzioğlu Can Kızılateş Wei-Shih Du

In this paper, with the help of finite operators and Fibonacci numbers, we define a new family of quaternions whose components are the Fibonacci finite operator numbers. We also provide some properties of these types of quaternions. Moreover, we derive many identities related to Fibonacci finite operator quaternions by using matrix representations.
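The abstract does not define the finite operator components, but constructions of this type extend Horadam's classical Fibonacci quaternions Q_n = F_n + F_{n+1}i + F_{n+2}j + F_{n+3}k. As a hedged anchor, the sketch below shows only that classical base case and checks the well-known norm identity F_n^2 + F_{n+1}^2 + F_{n+2}^2 + F_{n+3}^2 = 3F_{2n+3}; the finite-operator generalization itself is not reproduced here.

```python
# Classical Fibonacci quaternions (Horadam): Q_n = F_n + F_{n+1} i + F_{n+2} j + F_{n+3} k.
# The paper generalizes the components to Fibonacci finite operator numbers,
# which this sketch does not attempt to define.
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def qmul(p, q):
    # Hamilton product of two quaternions given as (a, b, c, d) tuples.
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def fib_quat(n):
    return tuple(fib(n + k) for k in range(4))

# Norm identity: N(Q_n) = sum of squared components = 3 * F_{2n+3}.
q = fib_quat(5)                      # (5, 8, 13, 21)
norm = sum(c * c for c in q)
print(q, norm, 3 * fib(13))          # norm and 3*F_13 both equal 699
```

The norm identity is one of the simplest examples of the matrix-representation identities the abstract alludes to.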

]]>Mathematics doi: 10.3390/math10101718

Authors: Shuhui Li Yihua Shen Xinyue Jiao Su Cai

Using multiple representations is advocated and emphasized in mathematics and science education. However, many students have difficulty connecting multiple representations of linear functions. Augmented Reality (AR) may alleviate these teaching and learning difficulties by offering dynamically linked representations. Inspired by this, our study aims to develop, implement, and evaluate an AR-based multi-representational learning environment (MRLE) with three representations of linear functions. The data were collected from 82 seventh graders from two high-performing classes in an urban area in China through a pre-test, a post-questionnaire, and follow-up interviews. The results reveal that students were satisfied with the AR-based MRLE, which helped enhance their understanding of the real-life, symbolic, and graphical representations and of the connections among them. Regarding students&rsquo; interactions with multiple representations, apparent differences in learning sequences and preferences existed among students in terms of their representational learning profiles. In sum, learning in the AR-based MRLE is a complex interaction process between the mathematics content, the forms of representation, the digital features, and students&rsquo; representational learning profiles.

]]>Mathematics doi: 10.3390/math10101717

Authors: Myrzagali Ospanov Kordan Ospanov

We study a type of third-order linear differential equation with variable and unbounded coefficients, defined on an infinite interval. We also consider a non-linear generalization in which the coefficients depend on the unknown function. We establish sufficient conditions for the correctness of the linear equation and a maximal regularity estimate for its solution. Using these results, we prove the solvability of the nonlinear differential equation and estimate the norms of its terms.

]]>Mathematics doi: 10.3390/math10101716

Authors: Mingyang Zhang Xufeng Yang Taichiu Edwin Cheng Chen Chang

In recent years, many retailers have come to sell their products not only through offline channels but also through online platforms. The sales of perishable goods on e-commerce platforms recorded phenomenal growth in 2020. However, some retailers are overconfident and order more products than the optimal ordering quantity, resulting in great losses due to product decay. In this paper, we apply the newsvendor model to analyze the impacts of overconfident behavior on the retailer&rsquo;s optimal pricing and order quantity decisions and profit. Our model provides the overconfident retailer with a feasible and effective method to adjust its optimal ordering and pricing decisions. Through numerical studies, we examine the retailer&rsquo;s optimal decisions under the scenarios of complete rationality, over-estimation, and over-precision. We find that the over-estimating retailer always orders more products than the optimal order quantity, while the over-precise retailer always orders fewer. Under some conditions, overconfidence hurts the retailer&rsquo;s revenue to a large extent. Therefore, it is beneficial for the overconfident retailer to adjust its order quantity according to our research findings.
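As a hedged illustration of the baseline model only (not the paper's full joint pricing-and-ordering formulation), the classical newsvendor order quantity is the critical fractile of the demand distribution, and an over-estimating retailer can be sketched as one who plans against an inflated demand mean. The bias factor and all cost parameters below are hypothetical.

```python
from statistics import NormalDist

def newsvendor_q(price, cost, salvage, mu, sigma):
    """Classical newsvendor: order up to the critical fractile
    F^{-1}(cu / (cu + co)) under Normal(mu, sigma) demand."""
    cu = price - cost        # underage cost: margin lost per unit of unmet demand
    co = cost - salvage      # overage cost: loss per leftover unit
    fractile = cu / (cu + co)
    return NormalDist(mu, sigma).inv_cdf(fractile)

# A rational retailer versus an over-estimating one who believes mean
# demand is inflated by a 20% bias (hypothetical parameterization).
q_rational = newsvendor_q(price=10, cost=6, salvage=2, mu=100, sigma=20)
q_overest  = newsvendor_q(price=10, cost=6, salvage=2, mu=100 * 1.2, sigma=20)
print(round(q_rational, 1), round(q_overest, 1))  # → 100.0 120.0
```

The over-estimating retailer systematically over-orders, matching the qualitative finding in the abstract; modeling over-precision (an understated sigma) would shift the order toward the biased fractile in the analogous way.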

]]>Mathematics doi: 10.3390/math10101715

Authors: Xuelin Liang Fengrui Xu Mengqiao Chen Wensheng Liu

In this paper, a relative-threshold event-triggered novel complementary sliding mode control (NSMC) algorithm for the all-electric aircraft (AEA) anti-skid braking system (ABS) is proposed to guarantee braking stability and the tracking precision of reference wheel slip control. First, a model of the braking system is established in strict-feedback form. Then, a virtual controller with a nonlinear control algorithm is proposed to address the constrained control of the wheel slip rate with asymptotic stability. Next, a novel approaching-law-based complementary sliding mode controller is developed to track the braking pressure. Moreover, a robust adaptive law is designed to estimate the uncertainties of the braking system online so as to alleviate the chattering problem of the braking pressure controller. Additionally, to reduce network communication and actuator wear in the AEA-ABS, a relative-threshold event-trigger mechanism is proposed to transmit the output of the NSMC on demand. The simulation results under various algorithms on three types of runway indicate that the proposed algorithms can improve braking control performance. In addition, the hardware-in-the-loop (HIL) experimental results prove that the proposed methods are practical for real-time applications.

]]>Mathematics doi: 10.3390/math10101714

Authors: Asmita Mahajan Nonita Sharma Silvia Aparicio-Obregon Hashem Alyami Abdullah Alharbi Divya Anand Manish Sharma Nitin Goyal

Infectious disease prediction aims to anticipate both seasonal epidemics and future pandemics. However, a single model will most likely not capture all the patterns and qualities of a dataset. Ensemble learning combines multiple models to obtain a single prediction that exploits the qualities of each model. This study aims to develop a stacked ensemble model to accurately predict future occurrences of infectious diseases that have at some point been regarded as epidemics, namely, dengue, influenza, and tuberculosis. The main objective is to enhance the prediction performance of the proposed model by reducing prediction errors. Autoregressive integrated moving average, exponential smoothing, and neural network autoregression models are applied to each disease dataset individually. A gradient boosting model then combines the predictions of these three statistical models to obtain the ensemble model. The results show that the forecasting precision of the proposed stacked ensemble model is better than that of the standard gradient boosting model. The ensemble model reduces the prediction error, measured as root-mean-square error, for the dengue, influenza, and tuberculosis datasets by approximately 30%, 24%, and 25%, respectively.
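The stacking scheme described above can be sketched compactly. This is a simplified stand-in, not the paper's pipeline: an AR(1) fit and simple exponential smoothing replace ARIMA/ETS/NNAR as base learners, least-squares stacking replaces gradient boosting as the meta-learner, and the case counts are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(200)
# Synthetic weekly case counts with a yearly seasonal cycle (illustrative only).
y = 50 + 10 * np.sin(2 * np.pi * t / 52) + rng.normal(0, 2, t.size)

# Base learner 1: AR(1) via least squares (stand-in for ARIMA).
X = np.column_stack([np.ones(t.size - 1), y[:-1]])
b = np.linalg.lstsq(X, y[1:], rcond=None)[0]
pred_ar = X @ b

# Base learner 2: simple exponential smoothing (stand-in for ETS).
alpha, level = 0.3, y[0]
pred_es = []
for v in y[:-1]:
    level = alpha * v + (1 - alpha) * level
    pred_es.append(level)          # one-step-ahead forecast
pred_es = np.array(pred_es)

# Meta-learner: least-squares stacking of the base predictions
# (the paper uses gradient boosting here).
Z = np.column_stack([pred_ar, pred_es])
w = np.linalg.lstsq(Z, y[1:], rcond=None)[0]
pred_stack = Z @ w

def rmse(p):
    return float(np.sqrt(np.mean((y[1:] - p) ** 2)))

print(rmse(pred_ar), rmse(pred_es), rmse(pred_stack))
```

In-sample, the stacked forecast is never worse than either base learner, since each base prediction lies in the meta-model's column space; the reported 24-30% RMSE reductions are out-of-sample gains of the full method.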

]]>Mathematics doi: 10.3390/math10101713

Authors: Rundong Luo Yiming Chen Shuai Song

Estimating the expected value of a random variable by data-driven methods is one of the most fundamental problems in statistics. In this study, we present an extension of Olivier Catoni&rsquo;s classical M-estimators of the empirical mean, which focuses on heavy-tailed data by imposing more precise inequalities on the exponential moments of Catoni&rsquo;s estimator. We show that our estimator behaves better than Catoni&rsquo;s both in theory and in practice. Its performance is illustrated in simulations and on real data.
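For readers unfamiliar with the baseline, Catoni's classical estimator is the root theta of sum_i psi(alpha*(x_i - theta)) = 0 with the "narrowest" influence function psi(x) = sign(x) log(1 + |x| + x^2/2). A minimal sketch follows (the paper's refined estimator is not reproduced); the sample, the variance bound passed to the demo, and the confidence level are all illustrative choices.

```python
import math, random

def catoni_mean(xs, delta=0.05, v=None):
    """Catoni's M-estimator of the mean, solved by bisection.
    alpha follows Catoni's choice sqrt(2 log(1/delta) / (n v)),
    where v bounds the variance (plug-in estimate if not given)."""
    n = len(xs)
    if v is None:                       # practical plug-in shortcut
        m = sum(xs) / n
        v = sum((x - m) ** 2 for x in xs) / n
    alpha = math.sqrt(2 * math.log(1 / delta) / (n * (v + 1e-12)))

    def psi(x):                         # narrowest influence function
        return math.copysign(math.log(1 + abs(x) + x * x / 2), x)

    def g(theta):                       # decreasing in theta
        return sum(psi(alpha * (x - theta)) for x in xs)

    lo, hi = min(xs), max(xs)           # g(lo) >= 0 >= g(hi)
    for _ in range(80):
        mid = (lo + hi) / 2
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

random.seed(1)
# Heavy-tailed sample: mostly N(0, 1) plus a few large outliers.
xs = [random.gauss(0, 1) for _ in range(500)] + [80.0, -5.0, 120.0]
# v=1.0 is the variance bound of the clean component (a demo assumption).
print(sum(xs) / len(xs), catoni_mean(xs, v=1.0))
```

The empirical mean is dragged upward by the outliers, while the log-damped influence function keeps Catoni's estimate much closer to the true mean of 0.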

]]>Mathematics doi: 10.3390/math10101712

Authors: Xiangyu Wang Wei Niu

The bifurcation of limit cycles is an important part of the study of switching systems. The investigation of limit cycles includes both their number and their configuration, which are related to Hilbert&rsquo;s 16th problem. Most researchers have studied the number of limit cycles, and only a few works have focused on their configuration. In this paper, we develop a general method to determine the configuration of limit cycles based on the Lyapunov constants. To illustrate our method with an example, we study a class of cubic switching systems with three equilibria, (0,0) and (&plusmn;1,0), compute the Lyapunov constants based on the Poincar&eacute; return map, and find at least 10 small-amplitude limit cycles that bifurcate around (1,0) or (&minus;1,0). Using our method, we determine the location distribution of these ten limit cycles.

]]>Mathematics doi: 10.3390/math10101710

Authors: Mohamed S. Zaky Shaaban M. Shaaban Tamer Fetouh Haitham Z. Azazi Yehya I. Mesalam

Instability of the adaptive flux observer (AFO) in the regenerating mode at low frequencies is a great challenge for sensorless induction motor (SIM) drives. Zero observer feedback gains (OFGs) in the regenerating mode at low frequencies are the main reason the dominant zero of the speed estimator moves to the unstable region. The OFGs should be appropriately selected to transfer the unstable dominant zero to the stable region. In this paper, genetic algorithm (GA) and particle swarm optimization (PSO) techniques were used to design the OFGs for a stable observer. A fair comparison of the dominant zero location between the two approaches using the optimized OFGs is presented under parameter deviation. Analytical results and the design procedure of the OFGs using the two approaches are presented under deviations of the stator resistance and mutual inductance to guarantee a stable dominant zero in the regenerating mode of the IM. The dominant zeros obtained by PSO had superior locations to those obtained by GA for both stator resistance and mutual inductance deviations. It was observed that one of the gains had an almost constant value over a wide range of parameter deviations, whereas the value of the other gain depended on the deviation of the machine parameters. The advantage of PSO over GA is that the relation between the gain and the parameter deviation can be represented by a deterministic and mostly linear relationship. Simulation and experimental results for the SIM drive are presented and evaluated under the optimized OFGs.

]]>Mathematics doi: 10.3390/math10101711

Authors: Lucille Salha Jeremy Bleyer Karam Sab Joanna Bodgi

Building upon recent works devoted to the development of a stress-based layerwise model for multilayered plates, we explore an alternative finite-element discretization to the conventional displacement-based finite-element method. We rely on a mixed finite-element approach in which both stresses and displacements are interpolated. Since conforming stress-based finite elements ensuring traction continuity are difficult to construct, we consider a hybridization strategy in which traction continuity is relaxed by the introduction of an additional displacement-like Lagrange multiplier defined on the element facets. Such a strategy offers the advantage of uncoupling many degrees of freedom, so that static condensation can be performed at the element level, yielding a much smaller final system to solve. Illustrative applications demonstrate that the proposed mixed approach is free from any shear locking in the thin-plate limit and is more accurate than a displacement approach for the same number of degrees of freedom. As a result, this method can be used to efficiently capture strong intra- and inter-laminar stress variations near free edges or cracks.

]]>Mathematics doi: 10.3390/math10101709

Authors: Shan Yang Kaijun Su Bing Wang Zitong Xu

To effectively protect citizens&rsquo; property from the infringement of fund-raising fraud, it is necessary to investigate the dissemination, identification, and causation of fund-raising fraud. In this study, the Susceptible&ndash;Infected&ndash;Recovered (SIR) model, a Back-Propagation (BP) neural network, a fault tree, and a Bayesian network were used to analyze the dissemination, identification, and causation of fund-raising fraud. Firstly, relevant data about fund-raising fraud were collected from residents in the same area via a questionnaire survey. Secondly, the SIR model was used to simulate the dissemination of victims, susceptibles, alerts, and the fraud amount; the BP neural network was used to identify the financial fraud data, with the recognition accuracy analyzed for varying numbers of neurons and hidden layers; and the fault-tree and Bayesian network models were employed to analyze the causation and the importance of basic events. Finally, security measures against fund-raising fraud were simulated by changing the dissemination parameters. The results show that (1) for the spread of the scam, the scale of the victims expands sharply as the fraud cycle increases, and the victims of the final fraud cycle account for 12.5% of people in the region; (2) for the source of infection of the scam, the initial recognition rate of fraud by the BP neural network varies from 90.9% to 93.9%; (3) for the victims of the scam, reducing fraud publicity, improving risk awareness, and strengthening fraud supervision can effectively reduce the probability of fraud; and (4) reducing the fraud rate can reduce the number of victims and delay the outbreak time. Improving the alert rate can reduce victims on a large scale. Strengthening supervision can restrict the scale of victims and prolong the duration of fraud.
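The dissemination mechanism above can be sketched with a plain discrete-time SIR iteration, reading "infected" as active victims and "recovered" as alerts. This is a generic SIR sketch, not the paper's calibrated model (which also tracks the fraud amount); the transmission and alert rates and the population size are hypothetical.

```python
# Discrete SIR dynamics: S -> I at rate beta*S*I/N (exposure to the scam),
# I -> R at rate gamma (victims become alerts). Forward-Euler with unit step.
def simulate_sir(beta, gamma, s0, i0, r0, steps):
    s, i, r = float(s0), float(i0), float(r0)
    n = s + i + r
    history = [(s, i, r)]
    for _ in range(steps):
        new_inf = beta * s * i / n
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        history.append((s, i, r))
    return history

base = simulate_sir(beta=0.5, gamma=0.1, s0=9990, i0=10, r0=0, steps=120)
# "Strengthening supervision": lower transmission, higher alert rate.
ctrl = simulate_sir(beta=0.3, gamma=0.2, s0=9990, i0=10, r0=0, steps=120)

peak = lambda h: max(i for _, i, _ in h)
print(round(peak(base)), round(peak(ctrl)))
```

Raising the alert rate and cutting transmission shrinks both the victim peak and the final victim count, mirroring findings (3) and (4) in the abstract.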

]]>Mathematics doi: 10.3390/math10101705

Authors: Ivan Derpich Juan M. Sepúlveda Rodrigo Barraza Fernanda Castro

It has been widely demonstrated by many research works that the layout of a factory can condition its productivity. Because of this, a factory in Santiago, Chile, asked the authors for advice on evaluating the company&rsquo;s current situation and on what alternatives could be proposed to improve performance by increasing productivity without incurring excessively high costs. Among the most important requirements was a study of the current design of the raw-materials warehouse within the plant, since this is a main pillar of the plant design. For this purpose, the current layout was analyzed and alternative designs were proposed under two scenarios: using the same area that is currently available, or making the design from scratch, considering land purchases to build the warehouse. In order to give a precise answer to many possible design decisions, formulations were developed and deduced to calculate the optimal dimensions of the warehouse, and qualitative criteria were incorporated for decision making.

]]>Mathematics doi: 10.3390/math10101701

Authors: Asha S. Kotnurkar Joonabi Beleri Irfan Anjum Badruddin Khaleed H.M.T. Sarfaraz Kamangar Nandalur Ameer Ahammad

The effect of double-diffusive convection with a magneto-Jeffrey nanofluid on peristaltic motion under MHD and a porous medium, through a flexible channel with a permeable wall, is theoretically examined. A non-linearized Rosseland approximation is utilized to model the thermal radiation effect. The governing equations are converted to standard non-linear partial differential equations by using suitable non-dimensional parameters. Solutions of the emerging equations are obtained by using the multi-step differential transformation method (Ms-DTM). The differential transformation method (DTM) can be applied directly to nonlinear differential equations without requiring linearization and discretization; therefore, it is not affected by errors associated with discretization. The role of influential factors in the concentration, temperature, volume fraction, and velocity is determined using graphs. A significant outcome of the present article is that the presence of double-diffusive convection can change the nature of convection in the system. The present results have wide biological applicability, including for biomicrofluidic devices that regulate fluid flow through a flexible endoscope and other medical pumping systems.

]]>Mathematics doi: 10.3390/math10101708

Authors: Teodor M. Atanackovic Cemal Dolicanin Enes Kacapor

Here, we study the internal variable approach to viscoelasticity. First, we generalize the classical approach by introducing a fractional derivative into the equation for time evolution of the internal variables. Next, we derive restrictions on the coefficients that follow from the dissipation inequality (entropy inequality under isothermal conditions). In the example of wave propagation, we show that the restrictions that follow from entropy inequality are sufficient to guarantee the existence of the solution. We present a numerical solution to the wave equation for several values of the parameters.

]]>Mathematics doi: 10.3390/math10101706

Authors: Gopinath Veeram Pasam Poojitha Harika Katta Sanakkayala Hemalatha Macherla Jayachandra Babu Chakravarthula S. K. Raju Nehad Ali Shah Se-Jin Yook

The heat transmission capabilities of hybrid nanofluids are superior to those of mono nanofluids. In addition to solar collectors and military equipment, they may be found in a number of areas, including heat exchangers, the automotive industry, transformer cooling, and electronic cooling. The purpose of this study was to evaluate the significance of the higher-order chemical reaction parameter on the radiative flow of a hybrid nanofluid (polyethylene glycol (PEG)&ndash;water combination: base fluid; zirconium dioxide and magnesium oxide: nanoparticles) over a curved shrinking sheet with viscous dissipation. The flow-driven equations were transformed into nonlinear ODEs using appropriate similarity transmutations and then solved using the bvp4c solver (a MATLAB built-in function). The results of two scenarios, PEG&minus;Water+ZrO2+MgO (hybrid nanofluid) and PEG&minus;Water+ZrO2 (nanofluid), are reported. In order to draw important inferences about physical features, such as the heat transfer rate, a correlation coefficient was used. The main findings of this study were that the curvature parameter lowers the fluid velocity and the Eckert number increases the fluid temperature. It was observed that the volume fraction of nanoparticles enhances the skin friction coefficient, while the curvature parameter lessens it. It was noticed that when the curvature parameter (K) takes input in the range 0.5&le;K&le;2.5, the skin friction coefficient decreases at a rate of 1.46633 (i.e., 146.633%) (in the case of the hybrid nanofluid) and 1.11236 (i.e., 111.236%) (in the case of the nanofluid) per unit value of the curvature parameter. The increasing rates of the skin friction parameter were 3.481179 (i.e., 348.1179%) (in the case of the hybrid nanofluid) and 2.745679 (in the case of the nanofluid) when the volume fraction of nanoparticles (&#981;1) takes input in the range 0&le;&#981;1&le;0.2. It was detected that, when the Eckert number (Eck) increases, the Nusselt number decreases. The decrement rates were observed as 1.41148 (i.e., 141.148%) (in the case of the hybrid nanofluid) and 1.15337 (i.e., 115.337%) (in the case of the nanofluid) when the Eckert number takes input in the range 0&le;Eck&le;0.2. In the case of the hybrid nanofluid, it was discovered that the mass transfer rate increases at a rate of 1.497214 (i.e., 149.7214%) when the chemical reaction parameter (Kr) takes input in the range 0&le;Kr&le;0.2. In addition, we checked our findings against those of other researchers and discovered a respectable degree of agreement.

]]>Mathematics doi: 10.3390/math10101707

Authors: Stefano Innamorati Fulvio Zuanni

In this paper, we remove the solid incidence assumption in a characterization of H(4,q2) by J. Schillewaert and J. A. Thas by proving that Hermitian plane incidence numbers imply Hermitian solid incidence numbers, except for a few possible small cases.

]]>Mathematics doi: 10.3390/math10101704

Authors: Zahra Eidinejad Reza Saadati Radko Mesiar

In this work, by considering a class of matrix-valued fuzzy controllers and using a (&kappa;,&sigmaf;)-Cauchy&ndash;Jensen additive functional equation ((&kappa;,&sigmaf;)-CJAFE), we apply the Radu&ndash;Mihet method (RMM), which is derived from an alternative fixed point theorem, and obtain the existence of a unique solution and the H&ndash;U&ndash;R (Hyers&ndash;Ulam&ndash;Rassias) stability for homomorphisms and Jordan homomorphisms on Lie matrix-valued fuzzy algebras with &sigmaf; members (&sigmaf;-LMVFA). With regard to each theorem, we consider the aggregation function as a matrix-valued fuzzy control function and investigate the results obtained.

]]>Mathematics doi: 10.3390/math10101703

Authors: Gerardo Amato Roberto D’Amato Alessandro Ruggiero

A frequency-dependent adaptive noise cancellation-based tracking controller (ANC-TC) is a known technique for the stabilization of several nonlinear dynamical systems. In recent years, this control strategy has been introduced and applied for the stabilization of a flexible rotor supported on fully lubricated journal bearings. This paper proposes a theoretical investigation, based on robust immersion and invariance (I&amp;I) theory, of a novel ANC-frequency estimation (FE) technique designed to stabilize a flexible rotor shaft affected by self-generated sinusoidal disturbances, generalized to the case of unknown frequency. A structural proof, under assumptions on the closed-loop output signals, shows that the sinusoidal disturbance rejection is exponential. Numerical simulations are presented to validate the mathematical results in silico. The iterative inexact Newton method is applied to the disturbance frequency and phase estimation-error point series. The data fitting confirms that the phase estimation succession has an exponential convergence behavior and that the asymptotic frequency estimation is a warm-up phase of the overall closed-loop disturbance estimation process. In two different operating conditions, the orders of convergence obtained from the phase and frequency estimate timeseries are p&phi;=1,&nbsp;p&omega;,unc=0.9983 and p&omega;,cav=1.005. Rejection of the rotor dynamic disturbance occurs approximately 76% earlier in the cavitated than in the uncavitated condition, at 2 (s) and 8.5 (s), respectively.

]]>Mathematics doi: 10.3390/math10101702

Authors: Ajay Kumar Sara Salem Alzaid Badr Saad T. Alkahtani Sunil Kumar

We apply a new generalized Caputo operator to investigate the dynamical behaviour of a non-integer-order food web model (FWM). This dynamical model has three population species and is nonlinear. Three types of species are considered in this population: prey, intermediate predators, and top predators, with the top predators further divided into mature and immature predators. We establish the uniqueness and existence of solutions by applying the fixed-point hypothesis. Our study examines the possibility of obtaining new dynamical phase portraits with the new generalized Caputo operator and demonstrates the portraits for several values of the fractional order. A generalized predictor&ndash;corrector (P&ndash;C) approach is utilized to solve this food web model numerically. For the resulting system of nonlinear equations, the effectiveness of the scheme is highly evident, and it is easy to implement. In addition, a stability analysis was conducted for this numerical scheme.

]]>Mathematics doi: 10.3390/math10101700

Authors: Shaomin Li Haoyu Wei Xiaoyu Lei

Recently, high-dimensional negative binomial regression (NBR) for count data has been widely used in many scientific fields. However, most studies assume the dispersion parameter to be constant, which may not hold in practice. This paper studies variable selection and dispersion estimation for heterogeneous NBR models, which model the dispersion parameter as a function. Specifically, we propose a double regression and apply a double &#8467;1-penalty to both regressions. Under restricted eigenvalue conditions, we prove oracle inequalities for the lasso estimators of the two partial regression coefficients for the first time, using concentration inequalities for empirical processes. Furthermore, the consistency and convergence rates for the estimators derived from the oracle inequalities provide theoretical guarantees for further statistical inference. Finally, both simulations and a real data analysis demonstrate that the new methods are effective.

]]>Mathematics doi: 10.3390/math10101699

Authors: Keyou Shi Yong Liu Weizhang Liang

Rockburst is a severe geological disaster accompanied by the violent ejection of rock debris, which greatly threatens the safety of underground workers and equipment. This study aims to propose a novel multi-criteria decision-making (MCDM) approach for evaluating rockburst risk under uncertain environments. First, considering the heterogeneity of rock masses and the complexity of geological environments, trapezoidal fuzzy numbers (TrFNs) are adopted to express the initial indicator information. Thereafter, the superiority linguistic ratings of experts and a modified entropy weight model with TrFNs are used to calculate the subjective and objective weights, respectively. Then, comprehensive weights are determined by integrating the subjective and objective weights based on game theory. After that, the organisation, rangement et synth&egrave;se de donn&eacute;es relationnelles (ORESTE) approach is extended to obtain evaluation results in a trapezoidal fuzzy environment. Finally, the proposed approach is applied to assess rockburst risk in the Kaiyang phosphate mine. In addition, the evaluation results are compared with those of empirical methods and other trapezoidal fuzzy MCDM approaches. The results show that the proposed extended ORESTE approach is reliable for evaluating rockburst risk and provides an effective reference for the design of prevention techniques.
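The objective-weight step above rests on the standard entropy weight method; the crisp backbone can be sketched as follows. The paper's version operates on trapezoidal fuzzy numbers, which this sketch omits, and the indicator names and values below are hypothetical illustrations, not data from the Kaiyang mine.

```python
import math

def entropy_weights(matrix):
    """Entropy weighting on a crisp decision matrix (rows: alternatives,
    columns: indicators): columns whose values vary more across the
    alternatives carry more discriminating information, hence more weight."""
    m, n = len(matrix), len(matrix[0])
    divergences = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [v / total for v in col]
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(m)
        divergences.append(1 - e)      # 1 - normalized entropy
    s = sum(divergences)
    return [d / s for d in divergences]

# Hypothetical rockburst indicators for four sites (e.g., stress ratio,
# elastic energy index, a near-constant third index) -- illustrative only.
matrix = [
    [0.35, 2.0, 5.1],
    [0.55, 4.5, 5.0],
    [0.30, 2.2, 5.2],
    [0.60, 5.0, 4.9],
]
w = entropy_weights(matrix)
print([round(x, 3) for x in w])
```

The near-constant third indicator receives almost no weight, which is exactly the behavior the entropy model contributes before the game-theoretic combination with the subjective weights.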

]]>