Artificial Intelligence in Modeling and Simulation

A special issue of Algorithms (ISSN 1999-4893). This special issue belongs to the section "Combinatorial Optimization, Graph, and Network Algorithms".

Deadline for manuscript submissions: closed (31 March 2024) | Viewed by 23678

Special Issue Editors


Guest Editor: Dr. Nuno Fachada
COPELABS, Lusófona University, 1700-097 Lisboa, Portugal
Interests: computer science; machine learning; artificial intelligence; modeling and simulation; high performance computing

Guest Editor: Dr. Nuno David
DINÂMIA'CET, Iscte – Instituto Universitário de Lisboa, 1649-026 Lisboa, Portugal
Interests: simulation; agent-based models; computer ethics; privacy and data protection; philosophy of technology; data science

Special Issue Information

Dear Colleagues,

Modeling and simulation (M&S), through continuous, discrete-event, agent-based, and other approaches, aims to describe different aspects of a real system. It avoids actual experimentation, which can be costly, time-consuming, or even impossible, as in the case of inaccessible systems, systems with uncontrollable factors, or systems still being designed. M&S is therefore a crucial component of engineering and of many scientific fields, such as physics, social science, medicine, defense, transport, chemistry, ecology, and biology.

Artificial intelligence (AI) can be thought of as intelligence displayed by artificial systems capable of understanding their environment and performing tasks in order to achieve their goals. Following the widespread adoption of AI in academia and industry in recent years, M&S has also welcomed AI, embedding it into the simulation models themselves, using AI in their development, or both. When AI is present in a real system (e.g., transportation, factories, social networks), the respective model needs to include AI as well. In other scenarios, models need to mimic natural intelligence, relying on AI techniques to do so. AI and machine learning techniques enable the intelligent optimization and fine-tuning of simulation models, for example by training models with real system data or by effectively synchronizing models with live data, fostering both model verification and validation. These techniques also open the door to the efficient generation of metamodels, which can predict model responses for many parameter combinations without running the simulation. Metamodels bring about considerable accelerations in model optimization, especially in scenarios where models have many parameters and exploring their factor space is unfeasible due to long simulation times, as in the case of agent-based or multiscale models. Conversely, by generating simulated data, simulation models can train AI systems to be deployed in the real world.
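As a minimal illustration of the metamodeling idea sketched above, consider fitting a cheap surrogate to a handful of (parameter, response) pairs produced by a slow simulation and then querying the surrogate instead of re-running the model. The sketch below is a toy example under stated assumptions: the `simulate` function stands in for an expensive simulation run, and the Gaussian process is just one of many possible metamodel choices.

```python
# Sketch: replacing an expensive simulation with a metamodel (surrogate).
# "simulate" is a toy stand-in for a costly simulation run (an assumption).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(42)

def simulate(params):
    """Toy stand-in for an expensive simulation: returns a scalar response."""
    x, y = params
    return np.sin(3 * x) + 0.5 * y ** 2 + rng.normal(scale=0.05)

# Small design of experiments: the only points where the "real" model is run.
design = rng.uniform(0, 1, size=(30, 2))
responses = np.array([simulate(p) for p in design])

# Fit the metamodel on the simulated (input, output) pairs.
metamodel = GaussianProcessRegressor().fit(design, responses)

# The metamodel now predicts responses for unseen parameter combinations
# without invoking the simulation, making large-scale exploration cheap.
grid = rng.uniform(0, 1, size=(1000, 2))
predicted, std = metamodel.predict(grid, return_std=True)
print(predicted[:3], std[:3])
```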

This Special Issue aims to investigate the relationship between AI and M&S from perspectives such as:

  1) the use of AI techniques, such as machine learning and computational intelligence algorithms, in applications of M&S to different systems;
  2) the use of AI for implementing and optimizing simulation models, with special focus on verification and validation;
  3) AI-driven creation of metamodels from agent-based and other computationally costly models;
  4) how simulation models or model-generated synthetic data drive machine learning.

We invite researchers to submit papers on the topic, from all viewpoints, including theoretical issues, algorithms, and systems, as well as academic and industrial applications in all areas of knowledge.

Dr. Nuno Fachada
Dr. Nuno David
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, you can access the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • modeling and simulation
  • artificial intelligence
  • machine learning
  • verification and validation
  • metamodeling

Published Papers (10 papers)


Research


20 pages, 4414 KiB  
Article
Uncertainty in Visual Generative AI
by Kara Combs, Adam Moyer and Trevor J. Bihl
Algorithms 2024, 17(4), 136; https://doi.org/10.3390/a17040136 - 27 Mar 2024
Viewed by 882
Abstract
Recently, generative artificial intelligence (GAI) has impressed the world with its ability to create text, images, and videos. However, there are still areas in which GAI produces undesirable or unintended results due to being “uncertain”. Before wider use of AI-generated content, it is important to identify concepts where GAI is uncertain to ensure the usage thereof is ethical and to direct efforts for improvement. This study proposes a general pipeline to automatically quantify uncertainty within GAI. To measure uncertainty, the textual prompt to a text-to-image model is compared to captions supplied by four image-to-text models (GIT, BLIP, BLIP-2, and InstructBLIP). Its evaluation is based on machine translation metrics (BLEU, ROUGE, METEOR, and SPICE) and word embedding’s cosine similarity (Word2Vec, GloVe, FastText, DistilRoBERTa, MiniLM-6, and MiniLM-12). The generative AI models performed consistently across the metrics; however, the vector space models yielded the highest average similarity, close to 80%, which suggests more ideal and “certain” results. Suggested future work includes identifying metrics that best align with a human baseline to ensure quality and consideration for more GAI models. The work within can be used to automatically identify concepts in which GAI is “uncertain” to drive research aimed at increasing confidence in these areas. Full article
(This article belongs to the Special Issue Artificial Intelligence in Modeling and Simulation)
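As a hedged illustration of the similarity measurement this paper describes, the sketch below compares a text-to-image prompt with captions supplied by image-to-text models using embedding cosine similarity; the sentence-transformers model name and the example captions are assumptions for illustration, not the authors' exact pipeline.

```python
# Sketch: quantify "uncertainty" by comparing the generation prompt with
# captions returned by image-to-text models, via embedding cosine similarity.
# The encoder name and example captions are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util

prompt = "a red bicycle leaning against a brick wall"
captions = [
    "a red bike parked next to a brick wall",   # e.g., caption from one model
    "a bicycle in front of a building",         # e.g., caption from another
    "a person riding a bicycle on the street",  # a less faithful caption
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
prompt_vec = encoder.encode(prompt, convert_to_tensor=True)
caption_vecs = encoder.encode(captions, convert_to_tensor=True)

# Higher average similarity -> captions agree with the prompt -> lower uncertainty.
similarities = util.cos_sim(prompt_vec, caption_vecs)[0]
for caption, score in zip(captions, similarities):
    print(f"{float(score):.3f}  {caption}")
print("mean similarity:", float(similarities.mean()))
```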

15 pages, 438 KiB  
Article
Framework Based on Simulation of Real-World Message Streams to Evaluate Classification Solutions
by Wenny Hojas-Mazo, Francisco Maciá-Pérez, José Vicente Berná Martínez, Mailyn Moreno-Espino, Iren Lorenzo Fonseca and Juan Pavón
Algorithms 2024, 17(1), 47; https://doi.org/10.3390/a17010047 - 21 Jan 2024
Viewed by 1349
Abstract
Analysing message streams in a dynamic environment is challenging. Various methods and metrics are used to evaluate message classification solutions, but often fail to realistically simulate the actual environment. As a result, the evaluation can produce overly optimistic results, rendering current solution evaluations inadequate for real-world environments. This paper proposes a framework based on the simulation of real-world message streams to evaluate classification solutions. The framework consists of four modules: message stream simulation, processing, classification and evaluation. The simulation module uses techniques and queueing theory to replicate a real-world message stream. The processing module refines the input messages for optimal classification. The classification module categorises the generated message stream using existing solutions. The evaluation module evaluates the performance of the classification solutions by measuring accuracy, precision and recall. The framework can model different behaviours from different sources, such as different spammers with different attack strategies, press media or social network sources. Each profile generates a message stream that is combined into the main stream for greater realism. A spam detection case study is developed that demonstrates the implementation of the proposed framework and identifies latency and message body obfuscation as critical classification quality parameters. Full article
(This article belongs to the Special Issue Artificial Intelligence in Modeling and Simulation)
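The sketch below illustrates, under simplified assumptions, the kind of stream simulation and evaluation the framework describes: each source profile emits messages as a Poisson process (a basic queueing-theory assumption), the per-profile streams are merged into a main stream, and a toy classifier is scored with precision and recall. The profiles, rates, and keyword-based classifier are purely illustrative, not the authors' implementation.

```python
# Sketch: per-profile message streams with Poisson arrivals, merged by time,
# then a toy classifier is evaluated on the merged stream.
import heapq
import numpy as np
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(0)

# Illustrative profiles: (messages per minute, spam label).
profiles = {"spammer": (5.0, 1), "press": (2.0, 0), "social": (8.0, 0)}

def generate(profile, rate, label, horizon=60.0):
    """Yield (arrival_time, text, label) with exponential inter-arrival times."""
    t = 0.0
    while True:
        t += rng.exponential(1.0 / rate)
        if t > horizon:
            return
        yield (t, f"message from {profile}", label)

# Merge the per-profile streams into one main stream ordered by arrival time.
stream = list(heapq.merge(*(generate(p, r, y) for p, (r, y) in profiles.items())))

# Toy classifier: flag anything coming from the "spammer" profile as spam.
y_true = [label for _, _, label in stream]
y_pred = [1 if "spammer" in text else 0 for _, text, _ in stream]

print("messages:", len(stream))
print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
```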

20 pages, 2119 KiB  
Article
A Biased-Randomized Discrete Event Algorithm to Improve the Productivity of Automated Storage and Retrieval Systems in the Steel Industry
by Mattia Neroni, Massimo Bertolini and Angel A. Juan
Algorithms 2024, 17(1), 46; https://doi.org/10.3390/a17010046 - 19 Jan 2024
Viewed by 1432
Abstract
In automated storage and retrieval systems (AS/RSs), the utilization of intelligent algorithms can reduce the makespan required to complete a series of input/output operations. This paper introduces a simulation optimization algorithm designed to minimize the makespan in a realistic AS/RS commonly found in the steel sector. This system includes weight and quality constraints for the selected items. Our hybrid approach combines discrete event simulation with biased-randomized heuristics. This combination enables us to efficiently address the complex time dependencies inherent in such dynamic scenarios. Simultaneously, it allows for intelligent decision making, resulting in feasible and high-quality solutions within seconds. A series of computational experiments illustrates the potential of our approach, which surpasses an alternative method based on traditional simulated annealing. Full article
(This article belongs to the Special Issue Artificial Intelligence in Modeling and Simulation)
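The following sketch shows the general biased-randomization idea used in such hybrid approaches, under toy assumptions: a greedy constructive heuristic is softened by sampling from the sorted candidate list with a geometric distribution, and many fast replications keep the best solution found. The `makespan` objective below is a stand-in, not the weight- and quality-constrained AS/RS model of the paper.

```python
# Sketch of biased randomization: soften a greedy constructive heuristic by
# sampling candidates with a geometric distribution, run many replications,
# keep the best. The "makespan" objective below is a toy assumption.
import math
import random

random.seed(7)

def makespan(sequence, durations):
    """Toy objective: total completion time of a job sequence."""
    clock, total = 0.0, 0.0
    for job in sequence:
        clock += durations[job]
        total += clock
    return total

def biased_greedy(durations, beta=0.3):
    """Build one solution; index 0 (the greedy choice) is most likely but not forced."""
    remaining = sorted(durations, key=durations.get)   # best candidates first
    sequence = []
    while remaining:
        idx = int(math.log(1 - random.random()) / math.log(1 - beta))
        sequence.append(remaining.pop(min(idx, len(remaining) - 1)))
    return sequence

durations = {f"job{i}": random.uniform(1, 10) for i in range(12)}
best = min((biased_greedy(durations) for _ in range(2000)),
           key=lambda seq: makespan(seq, durations))
print(best, round(makespan(best, durations), 2))
```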

27 pages, 721 KiB  
Article
Efficient Multi-Objective Simulation Metamodeling for Researchers
by Ken Jom Ho, Ender Özcan and Peer-Olaf Siebers
Algorithms 2024, 17(1), 41; https://doi.org/10.3390/a17010041 - 18 Jan 2024
Viewed by 1337
Abstract
Solving multiple objective optimization problems can be computationally intensive even when experiments can be performed with the help of a simulation model. There are many methodologies that can achieve good tradeoffs between solution quality and resource use. One possibility is using an intermediate “model of a model” (metamodel) built on experimental responses from the underlying simulation model and an optimization heuristic that leverages the metamodel to explore the input space more efficiently. However, determining the best metamodel and optimizer pairing for a specific problem is not directly obvious from the problem itself, and not all domains have experimental answers to this conundrum. This paper introduces a discrete multiple objective simulation metamodeling and optimization methodology that allows algorithmic testing and evaluation of four Metamodel-Optimizer (MO) pairs for different problems. For running our experiments, we have implemented a test environment in R and tested four different MO pairs on four different problem scenarios in the Operations Research domain. The results of our experiments suggest that patterns of relative performance between the four MO pairs tested differ in terms of computational time costs for the four problems studied. With additional integration of problems, metamodels and optimizers, the opportunity to identify ex ante the best MO pair to employ for a general problem can lead to a more profitable use of metamodel optimization. Full article
(This article belongs to the Special Issue Artificial Intelligence in Modeling and Simulation)
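As an illustrative sketch of one metamodel-optimizer pairing (not the authors' implementation), the code below fits one surrogate per objective on a few expensive simulation runs and then uses the cheap surrogates to screen a large candidate set, keeping the predicted non-dominated points. The two toy objectives are assumptions standing in for real simulation responses.

```python
# Sketch of a metamodel-optimizer pairing for two objectives: fit one surrogate
# per objective on a few expensive runs, then screen many candidates cheaply
# and keep the predicted non-dominated points. Toy objectives are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

def simulate(x):
    """Toy bi-objective response (e.g., cost vs. penalty), both minimized."""
    return np.array([np.sum(x ** 2), np.sum((x - 1.0) ** 2)])

train_x = rng.uniform(0, 1, size=(40, 3))                # "expensive" runs
train_y = np.array([simulate(x) for x in train_x])

surrogates = [RandomForestRegressor(random_state=0).fit(train_x, train_y[:, k])
              for k in range(2)]

candidates = rng.uniform(0, 1, size=(2000, 3))           # cheap to score
scores = np.column_stack([m.predict(candidates) for m in surrogates])

def pareto_indices(points):
    """Indices of non-dominated points when every objective is minimized."""
    keep = []
    for i, p in enumerate(points):
        dominated = np.any(np.all(points <= p, axis=1) & np.any(points < p, axis=1))
        if not dominated:
            keep.append(i)
    return keep

front = pareto_indices(scores)
print(len(front), "predicted non-dominated candidates out of", len(candidates))
```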

38 pages, 10361 KiB  
Article
Exploring the Use of Artificial Intelligence in Agent-Based Modeling Applications: A Bibliometric Study
by Ștefan Ionescu, Camelia Delcea, Nora Chiriță and Ionuț Nica
Algorithms 2024, 17(1), 21; https://doi.org/10.3390/a17010021 - 03 Jan 2024
Cited by 5 | Viewed by 2166
Abstract
This research provides a comprehensive analysis of the dynamic interplay between agent-based modeling (ABM) and artificial intelligence (AI) through a meticulous bibliometric study. This study reveals a substantial increase in scholarly interest, particularly post-2006, peaking in 2021 and 2022, indicating a contemporary surge in research on the synergy between AI and ABM. Temporal trends and fluctuations prompt questions about influencing factors, potentially linked to technological advancements or shifts in research focus. The sustained increase in citations per document per year underscores the field’s impact, with the 2021 peak suggesting cumulative influence. Reference Publication Year Spectroscopy (RPYS) reveals historical patterns, and the recent decline prompts exploration into shifts in research focus. Lotka’s law is reflected in the author’s contributions, supported by Pareto analysis. Journal diversity signals extensive exploration of AI applications in ABM. Identifying impactful journals and clustering them per Bradford’s Law provides insights for researchers. Global scientific production dominance and regional collaboration maps emphasize the worldwide landscape. Despite acknowledging limitations, such as citation lag and interdisciplinary challenges, our study offers a global perspective with implications for future research and as a resource in the evolving AI and ABM landscape. Full article
(This article belongs to the Special Issue Artificial Intelligence in Modeling and Simulation)
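As a small, hedged illustration of one bibliometric regularity mentioned above, the sketch below estimates the Lotka exponent from synthetic per-author publication counts; the data are generated for illustration only and do not reproduce the paper's corpus.

```python
# Sketch: estimate the Lotka exponent a in f(n) ~ C / n**a, where f(n) is the
# number of authors with n publications. Counts are synthetic assumptions.
import numpy as np

rng = np.random.default_rng(11)
pubs_per_author = rng.zipf(2.0, size=5000)          # skewed, Lotka-like counts
pubs_per_author = pubs_per_author[pubs_per_author <= 50]

ns, freqs = np.unique(pubs_per_author, return_counts=True)

# Fit log f(n) = log C - a * log n by least squares.
slope, _ = np.polyfit(np.log(ns), np.log(freqs), 1)
print("estimated Lotka exponent:", round(-slope, 2))
```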

22 pages, 1766 KiB  
Article
Comparing Activation Functions in Machine Learning for Finite Element Simulations in Thermomechanical Forming
by Olivier Pantalé
Algorithms 2023, 16(12), 537; https://doi.org/10.3390/a16120537 - 25 Nov 2023
Viewed by 1710
Abstract
Finite element (FE) simulations have been effective in simulating thermomechanical forming processes, yet challenges arise when applying them to new materials due to nonlinear behaviors. To address this, machine learning techniques and artificial neural networks play an increasingly vital role in developing complex models. This paper presents an innovative approach to parameter identification in flow laws, utilizing an artificial neural network that learns directly from test data and automatically generates a Fortran subroutine for the Abaqus standard or explicit FE codes. We investigate the impact of activation functions on prediction and computational efficiency by comparing Sigmoid, Tanh, ReLU, Swish, Softplus, and the less common Exponential function. Despite its infrequent use, the Exponential function demonstrates noteworthy performance and reduced computation times. Model validation involves comparing predictive capabilities with experimental data from compression tests, and numerical simulations confirm the numerical implementation in the Abaqus explicit FE code. Full article
(This article belongs to the Special Issue Artificial Intelligence in Modeling and Simulation)
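The sketch below illustrates the comparison idea in a hedged, simplified form: the same small network is trained on the same regression data while only the hidden activation function is swapped. The synthetic "flow stress" data are an assumption for illustration; the paper fits real compression-test data and exports a Fortran subroutine for Abaqus.

```python
# Sketch: train the same small network on the same data, swapping only the
# hidden activation, and compare the final fit. Synthetic data are assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.rand(512, 2)                                       # e.g., strain, temperature
y = (50.0 + 200.0 * x[:, :1]) * torch.exp(-2.0 * x[:, 1:])   # toy flow-stress law

activations = {"sigmoid": nn.Sigmoid(), "tanh": nn.Tanh(),
               "relu": nn.ReLU(), "softplus": nn.Softplus()}

for name, act in activations.items():
    net = nn.Sequential(nn.Linear(2, 16), act, nn.Linear(16, 16), act,
                        nn.Linear(16, 1))
    optimizer = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(500):
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(net(x), y)
        loss.backward()
        optimizer.step()
    print(f"{name:8s} final training MSE: {loss.item():.3f}")
```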

16 pages, 3049 KiB  
Article
A Largely Unsupervised Domain-Independent Qualitative Data Extraction Approach for Empirical Agent-Based Model Development
by Rajiv Paudel and Arika Ligmann-Zielinska
Algorithms 2023, 16(7), 338; https://doi.org/10.3390/a16070338 - 14 Jul 2023
Cited by 3 | Viewed by 1574
Abstract
Agent-based model (ABM) development needs information on system components and interactions. Qualitative narratives contain contextually rich system information beneficial for ABM conceptualization. Traditional qualitative data extraction is manual, complex, and time- and resource-consuming. Moreover, manual data extraction is often biased and may produce questionable and unreliable models. A possible alternative is to employ automated approaches borrowed from Artificial Intelligence. This study presents a largely unsupervised qualitative data extraction framework for ABM development. Using semantic and syntactic Natural Language Processing tools, our methodology extracts information on system agents, their attributes, and actions and interactions. In addition to expediting information extraction for ABM, the largely unsupervised approach also minimizes biases arising from modelers’ preconceptions about target systems. We also introduce automatic and manual noise-reduction stages to make the framework usable on large semi-structured datasets. We demonstrate the approach by developing a conceptual ABM of household food security in rural Mali. The data for the model contain a large set of semi-structured qualitative field interviews. The data extraction is swift, predominantly automatic, and devoid of human manipulation. We contextualize the model manually using the extracted information. We also put the conceptual model to stakeholder evaluation for added credibility and validity. Full article
(This article belongs to the Special Issue Artificial Intelligence in Modeling and Simulation)
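As a heavily simplified, hedged sketch of the syntactic step in such a pipeline, the code below uses dependency parsing to pull (agent, action, object) triples out of narrative sentences, which could then seed an ABM's agents and interactions. The example sentences are invented, and this is not the authors' full framework or its noise-reduction stages.

```python
# Sketch: pull (agent, action, object) triples from narrative text with
# dependency parsing, as raw material for ABM agents and interactions.
# Assumes the spaCy model "en_core_web_sm" is installed; sentences are invented.
import spacy

nlp = spacy.load("en_core_web_sm")

text = ("The household sells livestock during the dry season. "
        "Neighbours share grain with poorer families.")

triples = []
for sentence in nlp(text).sents:
    for token in sentence:
        if token.pos_ == "VERB":
            subjects = [c.text for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
            objects = [c.text for c in token.children if c.dep_ in ("dobj", "obj")]
            for subj in subjects:
                for obj in objects or [None]:
                    triples.append((subj, token.lemma_, obj))

print(triples)   # e.g., [('household', 'sell', 'livestock'), ('Neighbours', 'share', 'grain')]
```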

17 pages, 3390 KiB  
Article
CNN Based on Transfer Learning Models Using Data Augmentation and Transformation for Detection of Concrete Crack
by Md. Monirul Islam, Md. Belal Hossain, Md. Nasim Akhtar, Mohammad Ali Moni and Khondokar Fida Hasan
Algorithms 2022, 15(8), 287; https://doi.org/10.3390/a15080287 - 15 Aug 2022
Cited by 39 | Viewed by 4540
Abstract
Cracks in concrete cause initial structural damage to civil infrastructures such as buildings, bridges, and highways, which in turn causes further damage and is thus regarded as a serious safety concern. Early detection of it can assist in preventing further damage and can enable safety in advance by avoiding any possible accident caused while using those infrastructures. Machine learning-based detection is gaining favor over time-consuming classical detection approaches that can only fulfill the objective of early detection. To identify concrete surface cracks from images, this research developed a transfer learning approach (TL) based on Convolutional Neural Networks (CNN). This work employs the transfer learning strategy by leveraging four existing deep learning (DL) models named VGG16, ResNet18, DenseNet161, and AlexNet with pre-trained (trained on ImageNet) weights. To validate the performance of each model, four performance indicators are used: accuracy, recall, precision, and F1-score. Using the publicly available CCIC dataset, the suggested technique on AlexNet outperforms existing models with a testing accuracy of 99.90%, precision of 99.92%, recall of 99.80%, and F1-score of 99.86% for crack class. Our approach is further validated by using an external dataset, BWCI, available on Kaggle. Using BWCI, models VGG16, ResNet18, DenseNet161, and AlexNet achieved the accuracy of 99.90%, 99.60%, 99.80%, and 99.90% respectively. This proposed transfer learning-based method, which is based on the CNN method, is demonstrated to be more effective at detecting cracks in concrete structures and is also applicable to other detection tasks. Full article
(This article belongs to the Special Issue Artificial Intelligence in Modeling and Simulation)
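The sketch below shows the general transfer-learning setup described in the abstract, under stated assumptions: an ImageNet pre-trained backbone (AlexNet here) has its classifier head replaced for the two classes (crack / no crack). Dataset paths, augmentation, and the full training loop are omitted; the dummy batch is illustrative only.

```python
# Sketch: ImageNet pre-trained backbone with a new two-class head
# (crack / no crack). The dummy batch replaces a real data loader.
import torch
import torch.nn as nn
from torchvision import models

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)

# Freeze the convolutional features and train only the new classifier head.
for param in model.features.parameters():
    param.requires_grad = False
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 2)

optimizer = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print("dummy-batch loss:", round(loss.item(), 3))
```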

22 pages, 2169 KiB  
Article
Validating and Testing an Agent-Based Model for the Spread of COVID-19 in Ireland
by Elizabeth Hunter and John D. Kelleher
Algorithms 2022, 15(8), 270; https://doi.org/10.3390/a15080270 - 03 Aug 2022
Cited by 7 | Viewed by 2434
Abstract
Agent-based models can be used to better understand the impacts of lifting restrictions or implementing interventions during a pandemic. However, agent-based models are computationally expensive, and running a model of a large population can result in a simulation taking too long to run for the model to be a useful analysis tool during a public health crisis. To reduce computing time and power while running a detailed agent-based model for the spread of COVID-19 in the Republic of Ireland, we introduce a scaling factor that equates 1 agent to 100 people in the population. We present the results from model validation and show that the scaling factor increases the variability in the model output, but the average model results are similar in scaled and un-scaled models of the same population, and the scaled model is able to accurately simulate the number of cases per day in Ireland during the autumn of 2020. We then test the usability of the model by using the model to explore the likely impacts of increasing community mixing when schools reopen after summer holidays. Full article
(This article belongs to the Special Issue Artificial Intelligence in Modeling and Simulation)
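As a toy, hedged illustration of the scaling idea, the sketch below runs a minimal SIR-style agent model in which one agent stands in for `scale` people and reports counts re-scaled back to population level; it is not the detailed Irish model used in the paper.

```python
# Sketch: SIR-style agent model where one agent represents `scale` people;
# reported counts are re-scaled back to population level. A toy model only.
import random

random.seed(3)

def run_abm(population, scale=100, beta=0.3, gamma=0.1, days=60, seed_cases=500):
    n_agents = population // scale
    state = ["S"] * n_agents
    for i in range(max(1, seed_cases // scale)):
        state[i] = "I"
    daily_infected = []
    for _ in range(days):
        infectious = state.count("I")
        p_infect = 1.0 - (1.0 - beta / n_agents) ** infectious  # per-susceptible risk
        for i, s in enumerate(state):
            if s == "S" and random.random() < p_infect:
                state[i] = "I"
            elif s == "I" and random.random() < gamma:
                state[i] = "R"
        daily_infected.append(state.count("I") * scale)   # agents -> people
    return daily_infected

print(run_abm(100_000, scale=100)[:10])   # scaled: 1 agent = 100 people
print(run_abm(100_000, scale=1)[:10])     # un-scaled baseline (slower to run)
```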

Review


45 pages, 2733 KiB  
Review
A Literature Review on Some Trends in Artificial Neural Networks for Modeling and Simulation with Time Series
by Angel E. Muñoz-Zavala, Jorge E. Macías-Díaz, Daniel Alba-Cuéllar and José A. Guerrero-Díaz-de-León
Algorithms 2024, 17(2), 76; https://doi.org/10.3390/a17020076 - 07 Feb 2024
Viewed by 1616
Abstract
This paper reviews the application of artificial neural network (ANN) models to time series prediction tasks. We begin by briefly introducing some basic concepts and terms related to time series analysis, and by outlining some of the most popular ANN architectures considered in the literature for time series forecasting purposes: feedforward neural networks, radial basis function networks, recurrent neural networks, and self-organizing maps. We analyze the strengths and weaknesses of these architectures in the context of time series modeling. We then summarize some recent time series ANN modeling applications found in the literature, focusing mainly on the previously outlined architectures. In our opinion, these summarized techniques constitute a representative sample of the research and development efforts made in this field. We aim to provide the general reader with a good perspective on how ANNs have been employed for time series modeling and forecasting tasks. Finally, we comment on possible new research directions in this area. Full article
(This article belongs to the Special Issue Artificial Intelligence in Modeling and Simulation)
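As a minimal, hedged sketch of the most basic setup surveyed in this review, the code below trains a feedforward network on lagged windows of a synthetic series for one-step-ahead forecasting; the series, window length, and network size are illustrative assumptions.

```python
# Sketch: feedforward network on lagged windows for one-step-ahead forecasting.
# The synthetic series, window length and network size are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
t = np.arange(400)
series = np.sin(2 * np.pi * t / 25) + 0.1 * rng.standard_normal(t.size)

window = 10
X = np.array([series[i:i + window] for i in range(series.size - window)])
y = series[window:]

split = 350
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])

forecast = model.predict(X[split:])
rmse = float(np.sqrt(np.mean((forecast - y[split:]) ** 2)))
print("one-step-ahead RMSE on the held-out tail:", round(rmse, 3))
```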
