Numerical and Machine Learning Methods for Applied Sciences

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Computational and Applied Mathematics".

Deadline for manuscript submissions: 15 September 2024 | Viewed by 4826

Special Issue Editors


Dr. Sergio Luis Suárez Gómez
Guest Editor
Applied Mathematical Modeling Group, Department of Mathematics, University of Oviedo, 33003 Oviedo, Spain
Interests: applied and computational mathematics; machine learning; nonlinear dynamics; time series; adaptive optics; computational particle physics

Dr. Carlos González-Gutiérrez
Guest Editor
Applied Mathematical Modeling Group, Department of Mathematics, University of Oviedo, 33003 Oviedo, Spain
Interests: deep learning; neural networks; mathematical modeling

Special Issue Information

Dear Colleagues,

Artificial intelligence methods are mathematical tools designed to learn and model complex physical systems and other real-world situations in science. The broad capabilities of intelligent models have enabled a vast number of applications in recent years in engineering, health science, physics, economics, the computational sciences, and beyond. The aim of this Special Issue is to gather the latest improvements in intelligent models as well as new approaches to their application in science. The topics covered in this Special Issue include, but are not restricted to, the following:

  1. Design of self-learning models;
  2. Improvements to already established artificial intelligence models;
  3. Performance improvements in computing simulations;
  4. Applications of these models to engineering problems, such as robotics, materials, etc.;
  5. Modeling of complex systems;
  6. Implementation of numerical models;
  7. Applications to cutting-edge physics problems;
  8. Applications in the health sciences;
  9. Bioengineering design;
  10. Many other possible applications.

Improvements in the fields of Machine Learning and Artificial Neural Networks are particularly welcome.

Dr. Sergio Luis Suárez Gómez
Dr. Carlos González-Gutiérrez
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and a short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Artificial neural networks
  • Machine learning
  • Artificial intelligence
  • Learning-based models
  • Control and optimization
  • Numerical computing
  • Nonlinear dynamics
  • Time series
  • Mathematical optics
  • Computational particle physics
  • Computational imaging
  • Computational biomechanics
  • Science applications

Published Papers (4 papers)


Research

18 pages, 5605 KiB  
Article
Adaptive Graph Attention and Long Short-Term Memory-Based Networks for Traffic Prediction
by Taomei Zhu, Maria Jesus Lopez Boada and Beatriz Lopez Boada
Mathematics 2024, 12(2), 255; https://doi.org/10.3390/math12020255 - 12 Jan 2024
Viewed by 694
Abstract
While the increased availability of traffic data is allowing us to better understand urban mobility, research on data-driven and predictive modeling is also providing new methods for improving traffic management and reducing congestion. In this paper, we present a hybrid predictive modeling architecture, namely GAT-LSTM, by incorporating graph attention (GAT) and long short-term memory (LSTM) networks for handling traffic prediction tasks. In this architecture, GAT networks capture the spatial dependencies of the traffic network, LSTM networks capture the temporal correlations, and the Dayfeature component incorporates time and external information (such as day of the week, extreme weather conditions, holidays, etc.). A key attention block is designed to integrate GAT, LSTM, and the Dayfeature components as well as learn and assign weights to these different components within the architecture. This method of integration is proven effective at improving prediction accuracy, as shown by the experimental results obtained with the PeMS08 open dataset, and the proposed model demonstrates state-of-the-art performance in these experiments. Furthermore, the hybrid model demonstrates adaptability to dynamic traffic conditions, different prediction horizons, and various traffic networks. Full article
(This article belongs to the Special Issue Numerical and Machine Learning Methods for Applied Sciences)
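The two building blocks that the GAT-LSTM architecture above combines can be illustrated in miniature. The following is an illustrative NumPy sketch, not the authors' implementation: a single graph-attention head (neighbor-masked softmax over LeakyReLU scores, as in standard GAT) and one LSTM cell update. All names and dimensions are hypothetical.

```python
import numpy as np

def graph_attention(h, adj, W, a, slope=0.2):
    """Single GAT head: h (N,F) node features, adj (N,N) 0/1 mask, W (F,F'), a (2F',)."""
    z = h @ W
    n = z.shape[0]
    e = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            s = a @ np.concatenate([z[i], z[j]])
            e[i, j] = s if s > 0 else slope * s        # LeakyReLU score
    e = np.where(adj > 0, e, -1e9)                     # attend only to neighbors
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)          # row-wise softmax
    return alpha @ z, alpha                            # aggregated features, attention weights

def lstm_step(x, h, c, Wx, Wh, b):
    """One LSTM cell update; gate rows of Wx/Wh are stacked as [input, forget, cell, output]."""
    H = h.shape[0]
    g = Wx @ x + Wh @ h + b
    i = 1.0 / (1.0 + np.exp(-g[:H]))                   # input gate
    f = 1.0 / (1.0 + np.exp(-g[H:2*H]))                # forget gate
    u = np.tanh(g[2*H:3*H])                            # candidate cell state
    o = 1.0 / (1.0 + np.exp(-g[3*H:]))                 # output gate
    c_new = f * c + i * u
    return o * np.tanh(c_new), c_new
```

In a hybrid model of this kind, the GAT output for each time step would be fed as `x` into the recurrent update, so that spatial aggregation precedes temporal modeling; the paper's additional attention block for fusing the Dayfeature component is omitted here.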

23 pages, 3525 KiB  
Article
An Advanced Segmentation Approach to Piecewise Regression Models
by Kang-Ping Lu and Shao-Tung Chang
Mathematics 2023, 11(24), 4959; https://doi.org/10.3390/math11244959 - 14 Dec 2023
Cited by 1 | Viewed by 1115
Abstract
Two problems concerning the detection of change-points in linear regression models are considered: one involves discontinuous jumps in a regression model, and the other involves regression lines connected at unknown locations. A significant literature has developed around estimating piecewise regression models because of their broad range of applications. The segmented (SEG) regression method, available as an R package, has been employed by many researchers since it is easy to use, converges quickly, and produces adequate estimates. The SEG method allows for multiple change-points but is restricted to continuous models. This restriction seriously limits the practical applicability of SEG, since discontinuous jumps are encountered very often in real change-point problems. In this paper, we propose a piecewise regression model allowing for discontinuous jumps, connected lines, or the co-occurrence of jumps and connected change-points in a single model. The proposed segmentation approach can derive estimates of jump points, connected change-points, and regression parameters simultaneously, allowing for multiple change-points. The initialization of the proposed algorithm and the choice of the number of segments are discussed. Experimental results and comparisons demonstrate the effectiveness and superiority of the proposed method, and several real examples from diverse areas illustrate its practicability. Full article
(This article belongs to the Special Issue Numerical and Machine Learning Methods for Applied Sciences)
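The key point of the abstract above, that allowing a discontinuous jump requires fitting the segments independently rather than forcing them to meet, can be shown with a minimal sketch. This is not the paper's algorithm: it is a brute-force two-segment fit in NumPy that scans every admissible breakpoint and fits each side by least squares, so a jump between the two lines is permitted.

```python
import numpy as np

def fit_two_segments(x, y, min_size=3):
    """Scan every admissible breakpoint; fit each side independently by least
    squares, so a discontinuous jump between the two lines is allowed."""
    best_sse, best_k, best_params = np.inf, None, None
    for k in range(min_size, len(x) - min_size + 1):
        sse, params = 0.0, []
        for xs, ys in ((x[:k], y[:k]), (x[k:], y[k:])):
            A = np.column_stack([xs, np.ones_like(xs)])      # design matrix [x, 1]
            coef, *_ = np.linalg.lstsq(A, ys, rcond=None)    # (slope, intercept)
            sse += float(np.sum((A @ coef - ys) ** 2))
            params.append(coef)
        if sse < best_sse:
            best_sse, best_k, best_params = sse, k, params
    return best_k, best_params, best_sse
```

On data with a genuine jump, the breakpoint minimizing the total sum of squared errors coincides with the jump location; the paper's method generalizes this idea to multiple change-points, mixed jump/connected types, and simultaneous parameter estimation.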

12 pages, 1702 KiB  
Article
Wavefront Recovery for Multiple Sun Regions in Solar SCAO Scenarios with Deep Learning Techniques
by Sergio Luis Suárez Gómez, Francisco García Riesgo, Saúl Pérez Fernández, Francisco Javier Iglesias Rodríguez, Enrique Díez Alonso, Jesús Daniel Santos Rodríguez and Francisco Javier De Cos Juez
Mathematics 2023, 11(7), 1561; https://doi.org/10.3390/math11071561 - 23 Mar 2023
Cited by 1 | Viewed by 836
Abstract
The main objective of an Adaptive Optics (AO) system is to correct the aberrations in received wavefronts caused by atmospheric turbulence. From measurements taken by ground-based telescopes, AO systems must reconstruct all the turbulence traversed by the incoming light and calculate a correction. The turbulence is characterized as a phenomenon that can be modeled as several independent, random, and constantly changing layers. In the case of Solar Single-Conjugate Adaptive Optics (Solar SCAO), the key is to reconstruct the turbulence on-axis with the direction of observation. Previous research has shown that artificial neural networks (ANNs) are a viable alternative when they have been trained on the regions of the Sun where they must perform the reconstructions. In this research, a new solution based on Artificial Intelligence (AI) is proposed that predicts the atmospheric turbulence from the data obtained by the telescope sensors and generalizes to recovering wavefronts in regions of the Sun that were previously completely unknown to the model. The presented results show the quality of the reconstructions made by this new technique based on Artificial Neural Networks (ANNs), specifically the Multi-layer Perceptron (MLP). Full article
(This article belongs to the Special Issue Numerical and Machine Learning Methods for Applied Sciences)
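The MLP approach described above amounts to learning a regression from sensor measurements to a wavefront representation. The sketch below is a toy stand-in, not the paper's network or data: a one-hidden-layer MLP trained with plain gradient descent to map synthetic "sensor slope" vectors to "wavefront coefficient" vectors generated from a hypothetical linear map `T`. All sizes and names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# toy stand-in: sensor slope vectors -> wavefront (e.g. Zernike-like) coefficients
n_in, n_hidden, n_out, n_samples = 8, 16, 4, 200
T = rng.normal(size=(n_in, n_out))           # hypothetical turbulence-to-coefficients map
X = rng.normal(size=(n_samples, n_in))       # synthetic sensor measurements
Y = X @ T                                    # target coefficients

W1 = rng.normal(scale=0.1, size=(n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, n_out)); b2 = np.zeros(n_out)

def forward(X):
    H = np.tanh(X @ W1 + b1)                 # hidden layer
    return H, H @ W2 + b2                    # hidden activations, predictions

init_mse = float(np.mean((forward(X)[1] - Y) ** 2))
lr = 0.05
for _ in range(500):                         # full-batch gradient descent on MSE
    H, P = forward(X)
    G = 2.0 * (P - Y) / n_samples            # dLoss/dP
    GH = (G @ W2.T) * (1.0 - H ** 2)         # backprop through tanh
    W2 -= lr * H.T @ G;  b2 -= lr * G.sum(axis=0)
    W1 -= lr * X.T @ GH; b1 -= lr * GH.sum(axis=0)
final_mse = float(np.mean((forward(X)[1] - Y) ** 2))
```

A real solar SCAO reconstructor would of course be trained on measured Shack-Hartmann data rather than a synthetic linear map; the sketch only shows the supervised-regression structure of the problem.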

19 pages, 3312 KiB  
Article
Assessment of CNN-Based Models for Odometry Estimation Methods with LiDAR
by Miguel Clavijo, Felipe Jiménez, Francisco Serradilla and Alberto Díaz-Álvarez
Mathematics 2022, 10(18), 3234; https://doi.org/10.3390/math10183234 - 06 Sep 2022
Cited by 1 | Viewed by 1289
Abstract
The problem of simultaneous localization and mapping (SLAM) in mobile robotics remains a crucial issue for ensuring the safety of autonomous vehicles' navigation. One approach to the SLAM problem and odometry estimation has been through perception sensors, leading to V-SLAM and visual odometry solutions. Computer vision approaches are quite widespread for these purposes, but LiDAR is a more reliable technology for obstacle detection, and its application could be broadened. However, in most cases, definitive results are not achieved, or they suffer from a high computational load that limits real-time operation. Deep Learning techniques have proven their validity in many different fields, one of them being the perception of the environment of autonomous vehicles. This paper proposes an approach to estimating the ego-vehicle position from 3D LiDAR data, taking advantage of the capabilities of a system based on Machine Learning models and analyzing possible limitations. The models were evaluated on two real datasets. The results lead to the conclusion that CNN-based odometry can guarantee local consistency, but it loses accuracy due to cumulative errors in the evaluation of the global trajectory, so global consistency is not guaranteed. Full article
(This article belongs to the Special Issue Numerical and Machine Learning Methods for Applied Sciences)
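The abstract's conclusion, local consistency without global consistency, follows from integrating per-frame estimates: small frame-to-frame errors accumulate into trajectory drift. The NumPy sketch below illustrates this with entirely synthetic numbers (a hypothetical 2D trajectory with small noise plus a tiny bias per frame); it is not derived from the paper's datasets or models.

```python
import numpy as np

rng = np.random.default_rng(1)
n_frames = 100
true_steps = np.tile([1.0, 0.0], (n_frames, 1))   # ground-truth frame-to-frame motion (x, y)
# locally accurate odometry: small noise plus a tiny systematic bias per frame
est_steps = true_steps + rng.normal(scale=0.02, size=true_steps.shape) + [0.01, 0.0]

local_err = np.linalg.norm(est_steps - true_steps, axis=1)   # per-frame error stays small
traj_true = np.cumsum(true_steps, axis=0)                    # integrate steps into a trajectory
traj_est = np.cumsum(est_steps, axis=0)
global_err = np.linalg.norm(traj_est - traj_true, axis=1)    # drift grows along the route
```

Even though every individual step is accurate to a few centimeters in this toy setup, the final position error is an order of magnitude larger, which is why purely CNN-based odometry typically needs loop closure or another global correction mechanism to recover global consistency.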
