Mathematical Modeling, Optimization and Machine Learning

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Mathematics and Computer Science".

Deadline for manuscript submissions: closed (30 September 2023) | Viewed by 31641

Special Issue Editors


Prof. Dr. Andrey Gorshenin
Guest Editor
Federal Research Center “Computer Science and Control” of the Russian Academy of Sciences, 119333 Moscow, Russia
Interests: machine learning; neural networks; semiparametric models; stochastic models; mixture distributions; computational statistics; data analysis

Prof. Dr. Mikhail Posypkin
Guest Editor
Federal Research Center “Computer Science and Control” of the Russian Academy of Sciences, 119333 Moscow, Russia
Interests: discrete optimization; global optimization; parallel programming; multi-objective optimization; complex systems

Prof. Dr. Vladimir Titarev
Guest Editor
Federal Research Center “Computer Science and Control” of the Russian Academy of Sciences, 119333 Moscow, Russia
Interests: computational fluid dynamics; numerical analysis; parallel computing; computational physics; rarefied gas dynamics

Special Issue Information

Dear Colleagues,

Mathematical optimization and machine learning are two highly sophisticated, advanced analytics technologies that are used in a vast array of applications. Both are based on a substantial mathematical background and are convincing examples of how mathematics can be used to solve complex problems. Both technologies have a seemingly endless range of applications, including image and speech recognition, virtual personal assistants, fraud detection, autonomous vehicles, production planning, workforce scheduling, electric power distribution, shipment routing, design optimization, robotics, etc.

Optimization and machine learning are tightly coupled with mathematical modeling, a mature but still highly demanded research direction. For example, optimization operates with a detailed mathematical model of a business process, technical construct, or physical phenomenon. Machine learning methods can be effectively employed to estimate the parameters of models when traditional methods fail due to uncertainty, including variance or noise in the specific data values.

This Special Issue of Mathematics is devoted to topics in mathematical modeling, optimization methods, and various machine learning approaches. Submitted papers should satisfy the general requirements of the Mathematics journal, with a strong focus on new analytic or numerical methods for solving challenging problems. Potential topics include but are not limited to:

  • Mathematical foundations of machine learning;
  • New machine learning algorithms, approaches, and architectures of neural networks;
  • Mathematical models and machine learning;
  • Data analysis based on mathematical models, optimization, and machine learning algorithms;
  • Mathematical models, optimization techniques, and machine learning algorithms in applied sciences;
  • Statistical models and stochastic processes;
  • Continuous and discrete optimization, linear and nonlinear optimization, derivative-free optimization;
  • Deterministic and stochastic optimization algorithms;
  • Numerical simulation in physical, social, and life sciences;
  • High-performance computing for mathematical modeling;
  • Application of machine learning, mathematical modeling, and optimization in science and technology.

Prof. Dr. Andrey Gorshenin
Prof. Dr. Mikhail Posypkin
Prof. Dr. Vladimir Titarev
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com after registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • mathematical modeling
  • mathematical optimization
  • control theory and applications
  • high-performance computing
  • stochastic processes
  • numerical analysis and simulation
  • computational fluid dynamics
  • machine learning
  • data analytics

Published Papers (18 papers)


Research

19 pages, 676 KiB  
Article
Non-Convex Optimization: Using Preconditioning Matrices for Optimally Improving Variable Bounds in Linear Relaxations
by Victor Reyes and Ignacio Araya
Mathematics 2023, 11(16), 3549; https://0-doi-org.brum.beds.ac.uk/10.3390/math11163549 - 17 Aug 2023
Viewed by 788
Abstract
The performance of branch-and-bound algorithms for solving non-convex optimization problems greatly depends on convex relaxation techniques. They generate convex regions that are used to improve the bounds of variable domains. In particular, convex polyhedral regions can be represented by a linear system Ax = b. Then, the bounds of variable domains can be improved by minimizing and maximizing variables in the linear system. Optimally reducing or contracting variable domains in linear systems, however, is an expensive task: it requires solving up to two linear programs for each variable (one for each variable bound). Suboptimal strategies, such as preconditioning, may offer satisfactory approximations of the optimal reduction at a lower cost. In non-square linear systems, a preconditioner P can be chosen such that PA is close to a diagonal matrix. Thus, the projection of the equivalent system PAx = Pb onto x, using an iterative method such as Gauss–Seidel, can significantly improve the contraction. In this paper, we show how to generate an optimal preconditioner, i.e., a preconditioner that helps the Gauss–Seidel method to optimally reduce the variable domains. Despite the cost of generating the preconditioner, it can be reused in sub-regions of the search space without losing too much effectiveness. Experimental results show that, when used for reducing domains in non-square linear systems, the approach is significantly more effective than Gauss-based elimination techniques. Finally, the approach also shows promising results when used as a component of a solver for non-convex optimization problems. Full article
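
To illustrate the contraction step the abstract describes, here is a minimal sketch of an interval Gauss–Seidel sweep applied to a preconditioned system PAx = Pb. The pseudoinverse-based preconditioner and all helper names are illustrative assumptions, not the optimal preconditioner constructed in the paper.

```python
import numpy as np

def imul(a, x):              # scalar * interval
    lo, hi = a * x[0], a * x[1]
    return (min(lo, hi), max(lo, hi))

def isub(x, y):              # interval - interval
    return (x[0] - y[1], x[1] - y[0])

def idiv(x, a):              # interval / nonzero scalar
    lo, hi = x[0] / a, x[1] / a
    return (min(lo, hi), max(lo, hi))

def gauss_seidel_contract(A, b, box, sweeps=3):
    """Contract the interval domains `box` for the real linear system A x = b."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    box = [tuple(iv) for iv in box]
    m, n = A.shape
    for _ in range(sweeps):
        for i in range(min(m, n)):
            if A[i, i] == 0.0:
                continue
            acc = (b[i], b[i])                    # b_i - sum_{j != i} A_ij * x_j
            for j in range(n):
                if j != i:
                    acc = isub(acc, imul(A[i, j], box[j]))
            cand = idiv(acc, A[i, i])
            lo, hi = max(box[i][0], cand[0]), min(box[i][1], cand[1])
            if lo > hi:                           # empty intersection: infeasible box
                return None
            box[i] = (lo, hi)
    return box

def precondition(A, b):
    """Multiply the system by the Moore-Penrose pseudoinverse of A so that P A is
    close to the identity -- a simple heuristic, not the paper's optimal preconditioner."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    P = np.linalg.pinv(A)
    return P @ A, P @ b
```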

28 pages, 1865 KiB  
Article
Modelling Sign Language with Encoder-Only Transformers and Human Pose Estimation Keypoint Data
by Luke T. Woods and Zeeshan A. Rana
Mathematics 2023, 11(9), 2129; https://0-doi-org.brum.beds.ac.uk/10.3390/math11092129 - 01 May 2023
Cited by 1 | Viewed by 2131
Abstract
We present a study on modelling American Sign Language (ASL) with encoder-only transformers and human pose estimation keypoint data. Using an enhanced version of the publicly available Word-level ASL (WLASL) dataset, and a novel normalisation technique based on signer body size, we show the impact model architecture has on accurately classifying sets of 10, 50, 100, and 300 isolated, dynamic signs using two-dimensional keypoint coordinates only. We demonstrate the importance of running and reporting results from repeated experiments to describe and evaluate model performance. We include descriptions of the algorithms used to normalise the data and generate the train, validation, and test data splits. We report top-1, top-5, and top-10 accuracy results, evaluated with two separate model checkpoint metrics based on validation accuracy and loss. We find models with fewer than 100k learnable parameters can achieve high accuracy on reduced vocabulary datasets, paving the way for lightweight consumer hardware to perform tasks that are traditionally resource-intensive, requiring expensive, high-end equipment. We achieve top-1, top-5, and top-10 accuracies of 97%, 100%, and 100%, respectively, on a vocabulary size of 10 signs; 87%, 97%, and 98% on 50 signs; 83%, 96%, and 97% on 100 signs; and 71%, 90%, and 94% on 300 signs, thereby setting a new benchmark for this task. Full article
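
As a rough illustration of normalising pose keypoints by signer body size, the sketch below centres each frame on the shoulder midpoint and divides by the shoulder width. The keypoint indices and the choice of shoulders as the size reference are assumptions of this sketch, not necessarily the normalisation defined in the paper.

```python
import numpy as np

def normalise_keypoints(frames, left_shoulder=5, right_shoulder=6):
    """
    Normalise 2D keypoints by signer body size.
    frames: array of shape (T, K, 2) -- T frames, K keypoints, (x, y) coordinates.
    left_shoulder / right_shoulder: keypoint indices (assumed COCO-style layout).
    """
    frames = np.asarray(frames, dtype=float)
    mid = 0.5 * (frames[:, left_shoulder] + frames[:, right_shoulder])       # (T, 2)
    scale = np.linalg.norm(
        frames[:, left_shoulder] - frames[:, right_shoulder], axis=-1)       # (T,)
    scale = np.where(scale > 1e-6, scale, 1.0)                               # guard zero width
    return (frames - mid[:, None, :]) / scale[:, None, None]
```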

18 pages, 2757 KiB  
Article
Synthesis of Nonlinear Nonstationary Stochastic Systems by Wavelet Canonical Expansions
by Igor Sinitsyn, Vladimir Sinitsyn, Eduard Korepanov and Tatyana Konashenkova
Mathematics 2023, 11(9), 2059; https://0-doi-org.brum.beds.ac.uk/10.3390/math11092059 - 26 Apr 2023
Viewed by 585
Abstract
The article is devoted to Bayes optimization problems for nonlinear observable stochastic systems (NLOStSs) based on wavelet canonical expansions (WLCEs). The input stochastic processes (StPs) and output StPs of the considered nonlinear StSs depend on random parameters and additive independent Gaussian noises. For stochastic synthesis, we use a Bayes approach with a given loss function and a minimum risk condition. WLCEs are formed from the expansion coefficients of the covariance function in a two-dimensional orthonormal wavelet basis with compact support. New results: (i) a general Bayes criteria synthesis algorithm for NLOStSs by WLCE is presented; (ii) partial synthesis algorithms for three Bayes criteria (minimum mean square error, damage accumulation, and probability of error exit outside the limits) are given; (iii) an approximate algorithm based on statistical linearization is described; and (iv) three test examples are provided. Applications: wavelet optimization and parameter calibration in complex measurement and control systems. Some generalizations are formulated. Full article

14 pages, 3460 KiB  
Article
Tensor Train-Based Higher-Order Dynamic Mode Decomposition for Dynamical Systems
by Keren Li and Sergey Utyuzhnikov
Mathematics 2023, 11(8), 1809; https://0-doi-org.brum.beds.ac.uk/10.3390/math11081809 - 11 Apr 2023
Cited by 1 | Viewed by 1710
Abstract
Higher-order dynamic mode decomposition (HODMD) has proved to be an efficient tool for the analysis and prediction of complex dynamical systems described by data-driven models. In the present paper, we propose a realization of HODMD that is based on the low-rank tensor decomposition of potentially high-dimensional datasets. It is used to compute the HODMD modes and eigenvalues to effectively reduce the computational complexity of the problem. The proposed extension also provides a more efficient realization of the ordinary dynamic mode decomposition with the use of the tensor-train decomposition. The high efficiency of the tensor-train-based HODMD (TT-HODMD) is illustrated by a few examples, including forecasting the load of a power system, which provides comparisons between TT-HODMD and HODMD with respect to the computing time and accuracy. The developed algorithm can be effectively used for the prediction of high-dimensional dynamical systems. Full article
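
For context, a dense (non-tensorised) sketch of the delay-embedded DMD step that HODMD builds on is given below; the tensor-train compression the paper introduces is not reproduced, and the delay depth and truncation rank are illustrative parameters.

```python
import numpy as np

def hodmd_modes(X, d=2, r=10):
    """
    Plain higher-order DMD sketch: delay-embed the snapshot matrix, then run
    standard (exact) DMD on the enlarged snapshots.
    X : (n, m) snapshot matrix, columns ordered in time.
    d : number of stacked delays; r : truncation rank (r <= min(n*d, m-d)).
    """
    n, m = X.shape
    # delay embedding: column k holds snapshots k, k+1, ..., k+d-1 stacked vertically
    Z = np.vstack([X[:, k:m - d + 1 + k] for k in range(d)])
    Z0, Z1 = Z[:, :-1], Z[:, 1:]
    U, s, Vh = np.linalg.svd(Z0, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    Atil = U.T @ Z1 @ Vh.T @ np.diag(1.0 / s)   # reduced linear operator
    eigvals, W = np.linalg.eig(Atil)            # DMD eigenvalues
    modes = Z1 @ Vh.T @ np.diag(1.0 / s) @ W    # exact DMD modes
    return eigvals, modes
```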

14 pages, 10624 KiB  
Article
Reducing the Reality Gap Using Hybrid Data for Real-Time Autonomous Operations
by Suleyman Yildirim and Zeeshan A. Rana
Mathematics 2023, 11(7), 1696; https://0-doi-org.brum.beds.ac.uk/10.3390/math11071696 - 02 Apr 2023
Cited by 1 | Viewed by 1144
Abstract
This paper presents an ablation study aimed at investigating the impact of a hybrid dataset, domain randomisation, and a custom-designed neural network architecture on the performance of object localisation. In this regard, real images were gathered from the Boeing 737-400 aircraft, while synthetic images were generated using the domain randomisation technique, which involved randomising various parameters of the simulation environment in a photo-realistic manner. The study results indicated that the use of the hybrid dataset, domain randomisation, and the custom-designed neural network architecture yielded a significant enhancement in object localisation performance. Furthermore, the study demonstrated that domain randomisation facilitated the reduction of the reality gap between the real-world and simulation environments, leading to better generalisation of the neural network architecture on real-world data. Additionally, the ablation study delved into the impact of each randomisation parameter on the neural network architecture’s performance. The insights gleaned from this investigation shed light on the importance of each constituent component of the proposed methodology and how they interact to enhance object localisation performance. The study affirms that deploying a hybrid dataset, domain randomisation, and a custom-designed neural network architecture is an effective approach to training deep neural networks for object localisation tasks. The findings of this study can be applied to a wide range of computer vision applications, particularly in scenarios where collecting large amounts of labelled real-world data is challenging. The study employed a custom-designed neural network architecture that achieved 99.19% accuracy, 98.26% precision, 99.58% recall, and 97.92% [email protected], trained using a hybrid dataset comprising synthetic and real images. Full article

23 pages, 824 KiB  
Article
Modeling COVID-19 Using a Modified SVIR Compartmental Model and LSTM-Estimated Parameters
by Alejandra Wyss and Arturo Hidalgo
Mathematics 2023, 11(6), 1436; https://0-doi-org.brum.beds.ac.uk/10.3390/math11061436 - 16 Mar 2023
Viewed by 1531
Abstract
This article presents a modified version of the SVIR compartmental model for predicting the evolution of the COVID-19 pandemic, which incorporates vaccination and a saturated incidence rate, as well as piece-wise time-dependent parameters that enable self-regulation based on the epidemic trend. We have established the positivity of the ODE version of the model and explored its local stability. Artificial neural networks are used to estimate time-dependent parameters. Numerical simulations are conducted using a fourth-order Runge–Kutta numerical scheme, and the results are compared and validated against actual data from the Autonomous Communities of Spain. The modified model also includes explicit parameters to examine potential future scenarios. In addition, the modified SVIR model is transformed into a system of one-dimensional PDEs with diffusive terms, and solved using a finite volume framework with fifth-order WENO reconstruction in space and an RK3-TVD scheme for time integration. Overall, this work demonstrates the effectiveness of the modified SVIR model and its potential for improving our understanding of the COVID-19 pandemic and supporting decision-making in public health. Full article
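
For orientation, here is a minimal textbook-style SVIR right-hand side with a saturated incidence term, integrated with the classical fourth-order Runge–Kutta scheme mentioned in the abstract. The constant coefficients and parameter names are simplifying assumptions; the paper's modified model uses piece-wise time-dependent, LSTM-estimated parameters that are not reproduced here.

```python
import numpy as np

def svir_rhs(t, y, beta, beta_v, alpha, gamma, mu, k):
    """Basic SVIR model with saturated incidence beta*S*I/(1 + k*I)."""
    S, V, I, R = y
    inc_s = beta * S * I / (1.0 + k * I)      # saturated incidence (susceptible)
    inc_v = beta_v * V * I / (1.0 + k * I)    # reduced incidence for vaccinated
    dS = mu - inc_s - alpha * S - mu * S      # alpha: vaccination rate, mu: demography
    dV = alpha * S - inc_v - mu * V
    dI = inc_s + inc_v - gamma * I - mu * I   # gamma: recovery rate
    dR = gamma * I - mu * R
    return np.array([dS, dV, dI, dR])

def rk4(f, y0, t0, t1, n, *args):
    """Classical fourth-order Runge-Kutta integrator."""
    t, y, h = t0, np.asarray(y0, float), (t1 - t0) / n
    out = [y.copy()]
    for _ in range(n):
        k1 = f(t, y, *args)
        k2 = f(t + h / 2, y + h / 2 * k1, *args)
        k3 = f(t + h / 2, y + h / 2 * k2, *args)
        k4 = f(t + h, y + h * k3, *args)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
        out.append(y.copy())
    return np.array(out)
```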

19 pages, 1839 KiB  
Article
Machine-Learning Methods on Noisy and Sparse Data
by Konstantinos Poulinakis, Dimitris Drikakis, Ioannis W. Kokkinakis and Stephen Michael Spottswood
Mathematics 2023, 11(1), 236; https://0-doi-org.brum.beds.ac.uk/10.3390/math11010236 - 03 Jan 2023
Cited by 26 | Viewed by 4247
Abstract
Experimental and computational data and field data obtained from measurements are often sparse and noisy. Consequently, interpolating unknown functions under these restrictions to provide accurate predictions is very challenging. This study compares machine-learning methods and cubic splines on the sparsity of training data they can handle, especially when training samples are noisy. We compare deviation from a true function f using the mean square error, signal-to-noise ratio and the Pearson R2 coefficient. We show that, given very sparse data, cubic splines constitute a more precise interpolation method than deep neural networks and multivariate adaptive regression splines. In contrast, machine-learning models are robust to noise and can outperform splines after a training data threshold is met. Our study aims to provide a general framework for interpolating one-dimensional signals, often the result of complex scientific simulations or laboratory experiments. Full article
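
The comparison the abstract describes can be reproduced in miniature: fit a cubic spline and a small neural network to sparse, noisy samples of a known function and compare their mean square errors on a dense grid. The test function, noise level, and model sizes below are illustrative choices, not the study's benchmark.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
f = lambda x: np.sin(2 * np.pi * x)                 # stand-in "true" signal

# sparse, noisy 1-D training samples
x_tr = np.sort(rng.uniform(0, 1, 12))
y_tr = f(x_tr) + rng.normal(0, 0.05, x_tr.size)
x_te = np.linspace(0, 1, 500)

spline = CubicSpline(x_tr, y_tr)
mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000,
                   random_state=0).fit(x_tr[:, None], y_tr)

mse = lambda y_hat: np.mean((y_hat - f(x_te)) ** 2)
print("cubic spline MSE:", mse(spline(x_te)))
print("MLP MSE:         ", mse(mlp.predict(x_te[:, None])))
```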

17 pages, 66877 KiB  
Article
Joint Semantic Deep Learning Algorithm for Object Detection under Foggy Road Conditions
by Mingdi Hu, Yixuan Li, Jiulun Fan and Bingyi Jing
Mathematics 2022, 10(23), 4526; https://0-doi-org.brum.beds.ac.uk/10.3390/math10234526 - 30 Nov 2022
Cited by 2 | Viewed by 1707
Abstract
Current mainstream deep learning methods for object detection are generally trained on high-quality datasets and may therefore perform poorly under bad weather conditions. In this paper, a joint semantic deep learning algorithm is proposed to address object detection under foggy road conditions; it is constructed by embedding three attention modules and a 4-layer UNet multi-scale decoding module in the feature extraction module of the backbone network Faster RCNN. The algorithm differs from other object detection methods in that it is designed to solve low- and high-level joint tasks, including dehazing and object detection, through end-to-end training. Furthermore, the location of the fog is learned by the attention modules to assist image recovery, the image quality is recovered by the UNet decoding module for dehazing, and then the feature representations of the original image and the recovered image are fused and fed into the FPN (Feature Pyramid Network) module to achieve joint semantic learning. The joint semantic features are leveraged to boost the ability of the subsequent network modules and therefore make the proposed algorithm work better for the object detection task under real-world foggy conditions. Moreover, this method and Faster RCNN have the same testing time owing to weight sharing in the feature extraction module. Extensive experiments confirm that the average accuracy of our algorithm outperforms typical object detection algorithms and state-of-the-art joint low- and high-level task algorithms for the detection of seven kinds of road-traffic objects under normal weather or foggy conditions. Full article

19 pages, 5106 KiB  
Article
WSI: A New Early Warning Water Survival Index for the Domestic Water Demand
by Dong-Her Shih, Ching-Hsien Liao, Ting-Wei Wu, Huan-Shuo Chang and Ming-Hung Shih
Mathematics 2022, 10(23), 4478; https://0-doi-org.brum.beds.ac.uk/10.3390/math10234478 - 27 Nov 2022
Cited by 1 | Viewed by 1153
Abstract
A reservoir is an integrated water resource management infrastructure that can be used for water storage, flood control, power generation, and recreational activities. Predicting reservoir levels is critical for water supply management and can influence operations and intervention strategies. Currently, a water supply monitoring index is used to issue warnings about the water levels of most reservoirs. However, there is no precise calculation method for the current water supply monitoring index to warn about the adequacy of the domestic water demand. Therefore, taking the Feitsui Reservoir as an example, this study proposes a new early warning water survival index (WSI) to warn users whether there will be a shortage of domestic water in the future. The calculation of the WSI was divided into two stages. In the first stage, the daily rainfall, daily inflow, daily outflow, and daily water level of the Feitsui Reservoir were used as input variables to predict the water level of the reservoir by a machine learning method. In the second stage, an interpolation method was used to calculate the daily domestic water demand in Greater Taipei. Combined with the water level prediction results of the first stage, the estimated remaining days of domestic water supply from the Feitsui Reservoir to Greater Taipei City were calculated. Then, the difference between the estimated remaining days of domestic water supply and its moving average was converted into a bias ratio to obtain the new WSI. The WSI can be divided into short-term and long-term bias ratios. In this study, the degree of the bias ratio of the WSI was expressed in three colors, namely, condition blue, condition green, and condition red, to warn users of a future shortage of domestic water. The research results showed that, compared with the existing water supply monitoring index, the new WSI proposed in this study can faithfully warn of a future shortage of domestic water. Full article
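
The bias-ratio step can be sketched as follows: take the estimated remaining days of supply, compare them with a moving average, and map the percentage deviation to an alert colour. The window length and colour thresholds below are placeholders; the paper defines its own short-term and long-term bias ratios and boundaries.

```python
import pandas as pd

def water_survival_index(remaining_days, window=7):
    """Bias ratio (in percent) of the estimated remaining days of domestic
    supply relative to its moving average over `window` days."""
    s = pd.Series(remaining_days, dtype=float)
    ma = s.rolling(window, min_periods=1).mean()
    return (s - ma) / ma * 100.0

def alert_colour(wsi_value, warn=-5.0, critical=-15.0):
    """Illustrative mapping of the bias ratio to the three alert conditions."""
    if wsi_value <= critical:
        return "condition red"
    if wsi_value <= warn:
        return "condition green"
    return "condition blue"
```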

14 pages, 3973 KiB  
Article
Dendrite Net with Acceleration Module for Faster Nonlinear Mapping and System Identification
by Gang Liu, Yajing Pang, Shuai Yin, Xiaoke Niu, Jing Wang and Hong Wan
Mathematics 2022, 10(23), 4477; https://0-doi-org.brum.beds.ac.uk/10.3390/math10234477 - 27 Nov 2022
Cited by 1 | Viewed by 1121
Abstract
Nonlinear mapping is an essential and common demand in online systems, such as sensor systems and mobile phones. Accelerating nonlinear mapping will directly speed up such online systems. The authors of this paper previously proposed a Dendrite Net (DD) with an enormously lower time complexity than existing nonlinear mapping algorithms; however, there are still redundant calculations in DD. This paper presents a DD with an acceleration module (AC) to accelerate nonlinear mapping further. We conduct three experiments to verify whether DD with AC has lower time complexity while retaining DD’s nonlinear mapping and system identification properties. The first experiment concerns the precision and identification of unary nonlinear mapping, reflecting the calculation performance of DD with AC for basic functions in online systems. The second experiment concerns the mapping precision and identification of a multi-input nonlinear system, reflecting the performance when designing online systems via DD with AC. Finally, this paper compares the time complexity of DD and DD with AC and analyzes the theoretical reasons through repeated experiments. Results: DD with AC retains DD’s excellent mapping and identification properties and has lower time complexity. Significance: DD with AC can be used in most engineering systems, such as sensor systems, and will speed up computation in these online systems. Full article

16 pages, 4907 KiB  
Article
Stock Portfolio Optimization with Competitive Advantages (MOAT): A Machine Learning Approach
by Ana Lorena Jiménez-Preciado, Francisco Venegas-Martínez and Abraham Ramírez-García
Mathematics 2022, 10(23), 4449; https://0-doi-org.brum.beds.ac.uk/10.3390/math10234449 - 25 Nov 2022
Cited by 1 | Viewed by 2101
Abstract
This paper aimed to develop a useful Machine Learning (ML) model for detecting companies with lasting competitive advantages (companies’ moats) according to their financial ratios in order to improve the performance of investment portfolios. First, we computed the financial ratios of companies belonging to the S&P 500. Subsequently, we assessed the stocks’ moats according to an evaluation between 0 and 5 for each financial ratio. The sum over all the ratios provided a score between 0 and 100 used to classify the companies as wide, narrow, or null moats. Finally, several ML models were applied for classification to obtain an efficient, faster, and less expensive method of selecting companies with lasting competitive advantages. The main findings are: (1) the model with the highest precision is the Random Forest; and (2) the most important financial ratios for detecting competitive advantages are long-term debt-to-net income, Depreciation and Amortization (D&A)-to-gross profit, interest expense-to-Earnings Before Interest and Taxes (EBIT), and the Earnings Per Share (EPS) trend. This research provides a new combination of ML tools and information that can improve the performance of investment portfolios; to the authors’ knowledge, this has not been done before. The algorithm developed in this paper has a limitation in the calculation of the stocks’ moats, since it does not consider their cost, price-to-earnings (PE) ratio, or valuation. Due to this limitation, the algorithm does not represent a strategy for short-term or intraday trading. Full article
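
A compressed version of the classification pipeline, with synthetic stand-ins for the S&P 500 ratio scores, might look like the sketch below; the cut-offs separating null, narrow, and wide moats are assumed values, not those used in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# per-company scores between 0 and 5 for 20 financial ratios (synthetic placeholder data)
X = rng.integers(0, 6, size=(500, 20)).astype(float)
total = X.sum(axis=1)                                  # overall 0-100 moat score
y = np.digitize(total, bins=[40, 70])                  # 0 = null, 1 = narrow, 2 = wide (assumed cut-offs)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
clf = RandomForestClassifier(n_estimators=300, random_state=1).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
print("most informative ratios:", np.argsort(clf.feature_importances_)[::-1][:4])
```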

22 pages, 971 KiB  
Article
Low Dissipative Entropic Lattice Boltzmann Method
by Oleg Ilyin
Mathematics 2022, 10(21), 3928; https://0-doi-org.brum.beds.ac.uk/10.3390/math10213928 - 23 Oct 2022
Cited by 2 | Viewed by 1333
Abstract
In the entropic lattice Boltzmann approach, the stability properties are governed by the parameter α, which in turn affects the viscosity of a flow. The variation of this parameter allows one to guarantee the fulfillment of the discrete H-theorem for all spatial nodes. In the ideal case, the alteration of α from its normal value in the conventional lattice Boltzmann method (α=2) should be as small as possible. In the present work, the problem of the evaluation of α securing the H-theorem and having an average value close to α=2 is addressed. The main idea is to approximate the H-function by a quadratic function on the parameter α around α=2. The entropy balance requirement leads to a closed form expression for α depending on the values of the H-function and its derivatives. To validate the proposed method, several benchmark problems are considered: the Sod shock tube, the propagation of shear, acoustic waves, and doubly shear layer. It is demonstrated that the obtained formula for α yields solutions that show very small excessive dissipation. The simulation results are also compared with the essentially entropic and Zhao–Yong lattice Boltzmann approaches. Full article
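
The entropy-balance condition the abstract refers to can be written down directly. The sketch below finds the nontrivial root of H(f + α(f_eq − f)) = H(f) numerically, whereas the paper replaces this root-finding with a closed-form estimate obtained from a quadratic approximation of H around α = 2; the bracketing interval and the standard form of the H-function are assumptions of this sketch.

```python
import numpy as np
from scipy.optimize import brentq

def H(f, w):
    """Discrete H-function H(f) = sum_i f_i ln(f_i / w_i); populations are
    clipped to stay positive inside this sketch."""
    f = np.maximum(f, 1e-300)
    return float(np.sum(f * np.log(f / w)))

def entropic_alpha(f, feq, w, alpha_max=4.0):
    """Nontrivial root of the entropy balance near the LBGK value alpha = 2."""
    g = lambda a: H(f + a * (feq - f), w) - H(f, w)
    try:
        return brentq(g, 1.0 + 1e-9, alpha_max)
    except ValueError:          # no sign change found: fall back to the standard value
        return 2.0
```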

15 pages, 440 KiB  
Article
Entropy-Randomized Clustering
by Yuri S. Popkov, Yuri A. Dubnov and Alexey Yu. Popkov
Mathematics 2022, 10(19), 3710; https://0-doi-org.brum.beds.ac.uk/10.3390/math10193710 - 10 Oct 2022
Viewed by 838
Abstract
This paper proposes a clustering method based on a randomized representation of an ensemble of possible clusters with a probability distribution. The concept of a cluster indicator is introduced as the average distance between the objects included in the cluster. The indicators averaged over the entire ensemble are considered the latter’s characteristics. The optimal distribution of clusters is determined using the randomized machine learning approach: an entropy functional is maximized with respect to the probability distribution subject to constraints imposed on the averaged indicator of the cluster ensemble. The resulting entropy-optimal cluster corresponds to the maximum of the optimal probability distribution. This method is developed for binary clustering as a basic procedure. Its extension to t-ary clustering is considered. Some illustrative examples of entropy-randomized clustering are given. Full article
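
A toy illustration of the maximum-entropy construction described in the abstract: over an exhaustively enumerated ensemble of candidate clusters, the constrained entropy maximum is a Gibbs distribution, and the entropy-optimal cluster is its mode. Exhaustive enumeration and the specific constraint value only work for tiny examples and are assumptions of this sketch, not the paper's actual algorithm.

```python
import numpy as np
from itertools import combinations
from scipy.optimize import brentq
from scipy.spatial.distance import pdist, squareform

def entropy_randomized_binary_clustering(X, target_indicator):
    """Enumerate candidate clusters (subsets of objects), score each by its
    indicator (mean pairwise distance), and place a maximum-entropy (Gibbs)
    distribution over the ensemble subject to a constraint on the averaged
    indicator; return the entropy-optimal cluster (the mode)."""
    D = squareform(pdist(X))
    n = len(X)
    subsets, indicators = [], []
    for k in range(2, n):
        for idx in combinations(range(n), k):
            sub = D[np.ix_(idx, idx)]
            subsets.append(idx)
            indicators.append(sub.sum() / (k * (k - 1)))    # mean pairwise distance
    ind = np.array(indicators)

    def gibbs(beta):                       # numerically stable softmax weights
        z = -beta * ind
        z -= z.max()
        p = np.exp(z)
        return p / p.sum()

    # choose beta so the ensemble-averaged indicator meets the constraint
    # (assumes target_indicator lies strictly between min(ind) and max(ind))
    beta = brentq(lambda b: gibbs(b) @ ind - target_indicator, -50.0, 50.0)
    p = gibbs(beta)
    return subsets[int(np.argmax(p))], p
```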

13 pages, 2372 KiB  
Article
Accelerating Extreme Search of Multidimensional Functions Based on Natural Gradient Descent with Dirichlet Distributions
by Ruslan Abdulkadirov, Pavel Lyakhov and Nikolay Nagornov
Mathematics 2022, 10(19), 3556; https://0-doi-org.brum.beds.ac.uk/10.3390/math10193556 - 29 Sep 2022
Cited by 3 | Viewed by 1537
Abstract
Attaining high accuracy with less complex neural network architectures remains one of the most important problems in machine learning. In many studies, increases in recognition and prediction quality are obtained by extending neural networks with ordinary or special neurons, which significantly increases the training time. However, employing an optimization algorithm that brings the value of the loss function into the neighborhood of the global minimum can reduce the number of layers and epochs. In this work, we explore the extreme search of multidimensional functions by the proposed natural gradient descent based on Dirichlet and generalized Dirichlet distributions. The natural gradient is based on describing a multidimensional surface with probability distributions, which allows us to reduce the change in the accuracy of the gradient and the step size. The proposed algorithm is equipped with step-size adaptation, which allows it to obtain higher accuracy with a small number of iterations of the minimization process, compared with the usual gradient descent and adaptive moment estimation. We provide experiments on test functions in four- and three-dimensional spaces, where the natural gradient descent proves its ability to converge to the neighborhood of the global minimum. Such an approach can find application in minimizing the loss function in various types of neural networks, such as convolutional, recurrent, spiking, and quantum networks. Full article
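
The core update can be illustrated with the known Fisher information matrix of the Dirichlet distribution; the learning rate and the absence of the paper's step-size adaptation are simplifications of this sketch, and the gradient is assumed to be supplied by the loss being minimized.

```python
import numpy as np
from scipy.special import polygamma

def dirichlet_fisher(alpha):
    """Fisher information matrix of a Dirichlet distribution with parameters alpha:
    F = diag(psi'(alpha_i)) - psi'(sum(alpha)) * 1 1^T, with psi' the trigamma function."""
    alpha = np.asarray(alpha, float)
    a0 = alpha.sum()
    return np.diag(polygamma(1, alpha)) - polygamma(1, a0) * np.ones((alpha.size, alpha.size))

def natural_gradient_step(alpha, grad, lr=0.1):
    """One natural-gradient step: premultiply the Euclidean gradient by the
    inverse Fisher matrix, then keep the parameters positive."""
    nat_grad = np.linalg.solve(dirichlet_fisher(alpha), grad)
    return np.maximum(alpha - lr * nat_grad, 1e-6)
```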

20 pages, 1731 KiB  
Article
Comparative Study of Markov Chain Filtering Schemas for Stabilization of Stochastic Systems under Incomplete Information
by Alexey Bosov and Andrey Borisov
Mathematics 2022, 10(18), 3381; https://0-doi-org.brum.beds.ac.uk/10.3390/math10183381 - 17 Sep 2022
Viewed by 1032
Abstract
The object under investigation is a controllable linear stochastic differential system affected by external, statistically uncertain, piecewise continuous disturbances. They are directly unobservable but are assumed to form a continuous-time Markov chain. The problem is to stabilize the system output with respect to a quadratic optimality criterion. As is known, the separation theorem holds for this system. The goal of the paper is a performance analysis of various numerical schemes applied to the filtering of the external Markov input for system stabilization purposes. The paper briefly presents the theoretical solution to the considered problem of optimal stabilization for systems with Markov jump external disturbances: the conditions providing the separation theorem, the equations of optimal control, and the ones defining the Wonham filter. It also contains a set of stable numerical approximations of the filter, designed for time-discretized observations, along with their accuracy characteristics. The approximations of orders 1/2, 1, and 2, along with the classical Euler–Maruyama scheme, are chosen for the comparison of the numerical realization of the Wonham filter. The filtering estimates are used in the practical stabilization of various second-order linear systems. The numerical experiments confirm the significant influence of the filtering precision on the stabilization performance and the superiority of the proposed stable schemes of numerical filtering. Full article
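
For context, one simple stable discretization of the Wonham filter (exact prediction with the transition matrix, Gaussian reweighting of the observation increment, renormalization) is sketched below; it is not one of the order-1/2, order-1, or order-2 schemes compared in the paper, and the scalar-observation setup is an assumption.

```python
import numpy as np
from scipy.linalg import expm

def discrete_wonham_step(pi, Lam, h, sigma, dY, dt):
    """
    One step of a simple stable discretization of the Wonham filter for a
    Markov chain with generator Lam and observations dY = h(state) dt + sigma dW.
    pi : current state-probability vector; h : drift value for each state.
    """
    pred = expm(Lam.T * dt) @ pi                                  # prediction step
    lik = np.exp(-(dY - h * dt) ** 2 / (2.0 * sigma ** 2 * dt))   # observation likelihood
    post = pred * lik                                             # Bayes correction
    return post / post.sum()                                      # renormalize
```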

20 pages, 1509 KiB  
Article
Identification of Continuous-Discrete Hidden Markov Models with Multiplicative Observation Noise
by Andrey Borisov and Andrey Gorshenin
Mathematics 2022, 10(17), 3062; https://0-doi-org.brum.beds.ac.uk/10.3390/math10173062 - 25 Aug 2022
Cited by 2 | Viewed by 1294
Abstract
The paper aims to identify hidden Markov model parameters. The unobservable state represents a finite-state Markov jump process. The observations contain Wiener noise with state-dependent intensity. The identified parameters include the transition intensity matrix of the system state, conditional drift and diffusion coefficients in the observations. We propose an iterative identification algorithm based on the fixed-interval smoothing of the Markov state. Using the calculated state estimates, we restore all required system parameters. The paper contains a detailed description of the numerical schemes of state estimation and parameter identification. The comprehensive numerical study confirms the high precision of the proposed identification estimates. Full article

20 pages, 2459 KiB  
Article
Modified Erlang Loss System for Cognitive Wireless Networks
by Evsey Morozov, Stepan Rogozin, Hung Q. Nguyen and Tuan Phung-Duc
Mathematics 2022, 10(12), 2101; https://0-doi-org.brum.beds.ac.uk/10.3390/math10122101 - 16 Jun 2022
Cited by 3 | Viewed by 1711
Abstract
This paper considers a modified Erlang loss system for cognitive wireless networks and related applications. A primary user has pre-emptive priority over secondary users, and a primary customer is lost if, upon arrival, all the channels are used by other primary users. Secondary users cognitively use idle channels, and they can stay (either in an infinite buffer or in an orbit) in cases where idle channels are not available upon arrival or they are interrupted by primary users. While the infinite buffer model represents the case of zero sensing time, the infinite orbit model represents the case of positive sensing time. We obtain an explicit stability condition for the case where the arrival processes of primary and secondary users follow Poisson processes and their service times follow two distinct arbitrary distributions. The stability condition is insensitive to the service time distributions and implies the maximal throughput of secondary users. Moreover, we extend the stability analysis to the system with outgoing calls. For the special case of exponential service time distributions, we analyze the buffered system in depth to show the effect of the parameters on the delay performance and the mean number of interruptions of secondary users. Our simulations for distributions other than exponential reveal that the mean number of terminations of secondary users is less sensitive to the service time distribution of primary users. Full article
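
Since the primary users alone see a classical Erlang loss system (secondary users occupy only idle channels), their blocking probability follows the standard Erlang B recursion, sketched below; the secondary-user stability and delay results in the paper require the full analysis and are not reproduced here.

```python
def erlang_b(servers, offered_load):
    """Blocking probability of the classical Erlang loss system via the standard
    recursion B(0) = 1, B(c) = a*B(c-1) / (c + a*B(c-1)) with offered load a."""
    b = 1.0
    for c in range(1, servers + 1):
        b = offered_load * b / (c + offered_load * b)
    return b

# example: 10 channels and an offered primary load of a = lambda/mu = 7 Erlangs
print(erlang_b(10, 7.0))
```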

21 pages, 1054 KiB  
Article
Assessment of Machine Learning Methods for State-to-State Approach in Nonequilibrium Flow Simulations
by Lorenzo Campoli, Elena Kustova and Polina Maltseva
Mathematics 2022, 10(6), 928; https://0-doi-org.brum.beds.ac.uk/10.3390/math10060928 - 14 Mar 2022
Cited by 7 | Viewed by 2050
Abstract
State-to-state numerical simulations of high-speed reacting flows are the most detailed but also often prohibitively computationally expensive. In this work, we explore the usage of machine learning algorithms to alleviate such a burden. Several tasks have been identified. Firstly, data-driven machine learning regression models were compared for the prediction of the relaxation source terms appearing in the right-hand side of the state-to-state Euler system of equations for a one-dimensional reacting flow of a N2/N binary mixture behind a plane shock wave. Results show that, by appropriately choosing the regressor and opportunely tuning its hyperparameters, it is possible to achieve accurate predictions compared to the full-scale state-to-state simulation in significantly shorter times. Secondly, several strategies to speed-up our in-house state-to-state solver were investigated by coupling it with the best-performing pre-trained machine learning algorithm. The embedding of machine learning algorithms into ordinary differential equations solvers may offer a speed-up of several orders of magnitude. Nevertheless, performances are found to be strongly dependent on the interfaced codes and the set of variables onto which the coupling is realized. Finally, the solution of the state-to-state Euler system of equations was inferred by means of a deep neural network by-passing the use of the solver while relying only on data. Promising results suggest that deep neural networks appear to be a viable technology also for this task. Full article
