Feature Papers on Artificial Intelligence Algorithms and Their Applications

A topical collection in Algorithms (ISSN 1999-4893). This collection belongs to the section "Evolutionary Algorithms and Machine Learning".

Editors


Prof. Dr. Ulrich Kerzel
Collection Editor
Data Science and Artificial Intelligence, IU International University of Applied Sciences, 53604 Bad Honnef, Germany
Interests: data science; artificial intelligence; AI in materials science; algorithmic economy

Dr. Mostafa Abbaszadeh
Collection Editor
Department of Applied Mathematics, Faculty of Mathematics and Computer Sciences, Amirkabir University of Technology, Tehran 1591634311, Iran
Interests: meshless methods; fractional PDEs; finite element method; computational mechanics; machine learning

Dr. Andres Iglesias
Collection Editor
1. Department of Applied Mathematics and Computational Sciences, University of Cantabria, C.P. 39005 Santander, Spain
2. Department of Information Science, Faculty of Sciences, Toho University, 2-2-1 Miyama, Funabashi 274-8510, Japan
Interests: swarm intelligence and swarm robotics; bio-inspired optimization; computer graphics; geometric modelling

Prof. Dr. Akemi Galvez Tomida
Collection Editor
1. Department of Applied Mathematics and Computational Sciences, University of Cantabria, C.P. 39005 Santander, Spain
2. Department of Information Science, Faculty of Sciences, Toho University, 2-2-1 Miyama, Funabashi 274-8510, Japan
Interests: artificial intelligence; soft computing for optimization; evolutionary computation; computational intelligence

Topical Collection Information

Dear Colleagues,

The field of artificial intelligence has made tremendous progress in recent years, and many developments have emerged that are of strong interest both for fundamental research in AI and for the application of AI methods in academic and industrial settings. AI continues to shape both our professional and private lives.

We invite you to submit high-quality feature papers to this Topical Collection, entitled “Feature Papers on Artificial Intelligence Algorithms and Their Applications”, covering the whole range of subjects from theory to applications. In particular, we welcome contributions focusing on new algorithms or new methods in AI, as well as applications in industrial or academic settings, including the natural and social sciences, medicine, and engineering.

Prof. Dr. Ulrich Kerzel
Dr. Mostafa Abbaszadeh
Dr. Andres Iglesias
Prof. Dr. Akemi Galvez Tomida
Collection Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the collection website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • AI algorithms
  • AI methods
  • applications of AI
  • AI in natural sciences
  • AI in social sciences
  • AI in engineering
  • AI in medicine

Published Papers (6 papers)

2024

19 pages, 1005 KiB  
Article
Evaluating Diffusion Models for the Automation of Ultrasonic Nondestructive Evaluation Data Analysis
by Nick Torenvliet and John Zelek
Algorithms 2024, 17(4), 167; https://doi.org/10.3390/a17040167 - 21 Apr 2024
Abstract
We develop decision support and automation for the task of ultrasonic non-destructive evaluation data analysis. First, we develop a probabilistic model for the task and then implement the model as a series of neural networks based on Conditional Score-Based Diffusion and Denoising Diffusion Probabilistic Model architectures. We use the neural networks to generate estimates for peak amplitude response time of flight and perform a series of tests probing their behavior, capacity, and characteristics in terms of the probabilistic model. We train the neural networks on a series of datasets constructed from ultrasonic non-destructive evaluation data acquired during an inspection at a nuclear power generation facility. We modulate the partition classifying nominal and anomalous data in the dataset and observe that the probabilistic model predicts trends in neural network model performance, thereby demonstrating a principled basis for explainability. We improve on previous related work as our methods are self-supervised and require no data annotation or pre-processing, and we train on a per-dataset basis, meaning we do not rely on out-of-distribution generalization. The capacity of the probabilistic model to predict trends in neural network performance, as well as the quality of the estimates sampled from the neural networks, supports the development of a technical justification for usage of the method in safety-critical contexts such as nuclear applications. The method may provide a basis or template for extension into similar non-destructive evaluation tasks in other industrial contexts.
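
For readers unfamiliar with denoising diffusion models, the sketch below illustrates the forward-noising and noise-prediction training objective that such architectures build on. It is a generic, minimal illustration rather than the authors' conditional score-based implementation; the signal length, noise schedule, and stand-in network are assumptions.

    import torch
    import torch.nn as nn

    # Minimal DDPM-style sketch (illustrative only): a 1-D signal is progressively
    # noised, and a small network is trained to predict the added noise.
    T = 200                                      # number of diffusion steps
    betas = torch.linspace(1e-4, 0.02, T)        # linear noise schedule
    alphas_bar = torch.cumprod(1.0 - betas, 0)   # cumulative product \bar{alpha}_t

    signal_len = 128                             # assumed length of an A-scan-like signal
    denoiser = nn.Sequential(                    # stand-in for the real architecture
        nn.Linear(signal_len + 1, 256), nn.ReLU(),
        nn.Linear(256, signal_len),
    )
    opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

    def training_step(x0: torch.Tensor) -> float:
        """One denoising-diffusion training step on a batch of clean signals x0."""
        b = x0.shape[0]
        t = torch.randint(0, T, (b,))                        # random timestep per sample
        a_bar = alphas_bar[t].unsqueeze(1)
        eps = torch.randn_like(x0)                           # noise to be predicted
        x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * eps   # forward noising q(x_t | x_0)
        t_feat = (t.float() / T).unsqueeze(1)                # crude timestep conditioning
        eps_hat = denoiser(torch.cat([x_t, t_feat], dim=1))  # predict the added noise
        loss = ((eps - eps_hat) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()

    loss = training_step(torch.randn(32, signal_len))        # synthetic batch stands in for real data

At inference time, sampling runs this process in reverse, starting from noise and repeatedly applying the trained denoiser; a conditional variant, as used by the authors, would additionally feed conditioning inputs to the network.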

19 pages, 481 KiB  
Article
Program Code Generation with Generative AIs
by Baskhad Idrisov and Tim Schlippe
Algorithms 2024, 17(2), 62; https://doi.org/10.3390/a17020062 - 31 Jan 2024
Abstract
Our paper compares the correctness, efficiency, and maintainability of human-generated and AI-generated program code. For that, we analyzed the computational resources of AI- and human-generated program code using metrics such as time and space complexity as well as runtime and memory usage. Additionally, we evaluated the maintainability using metrics such as lines of code, cyclomatic complexity, Halstead complexity, and maintainability index. For our experiments, we had generative AIs produce program code in Java, Python, and C++ that solves problems defined on the competition coding website leetcode.com. We selected six LeetCode problems of varying difficulty, resulting in 18 program codes generated by each generative AI. GitHub Copilot, powered by Codex (GPT-3.0), performed best, solving 9 of the 18 problems (50.0%), whereas CodeWhisperer did not solve a single problem. BingAI Chat (GPT-4.0) generated correct program code for seven problems (38.9%), ChatGPT (GPT-3.5) and Code Llama (Llama 2) for four problems (22.2%), and StarCoder and InstructCodeT5+ for only one problem (5.6%). Surprisingly, although ChatGPT generated only four correct program codes, it was the only generative AI capable of providing a correct solution to a coding problem of difficulty level hard. In summary, 26 AI-generated codes (20.6%) solve the respective problem. For 11 AI-generated incorrect codes (8.7%), only minimal modifications to the program code are necessary to solve the problem, resulting in time savings of between 8.9% and 71.3% compared to writing the program code from scratch.
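
As background on the maintainability metrics listed above, the sketch below computes the classic maintainability index from Halstead volume, cyclomatic complexity, and source lines of code, rescaled to 0-100 as many tools report it. The formula variant, constants, and example values are illustrative assumptions and are not taken from the paper.

    import math

    def maintainability_index(halstead_volume: float, cyclomatic: float, sloc: int) -> float:
        """Classic maintainability index, rescaled to the 0-100 range.
        Inputs: Halstead volume V, cyclomatic complexity G, source lines of code."""
        mi = (171.0
              - 5.2 * math.log(halstead_volume)
              - 0.23 * cyclomatic
              - 16.2 * math.log(sloc))
        return max(0.0, mi * 100.0 / 171.0)

    # Hypothetical small function with V = 120.0, G = 3, SLOC = 15
    print(round(maintainability_index(120.0, 3, 15), 1))   # roughly 59: moderately maintainable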

2023

14 pages, 303 KiB  
Article
On Enhancement of Text Classification and Analysis of Text Emotions Using Graph Machine Learning and Ensemble Learning Methods on Non-English Datasets
by Fatemeh Gholami, Zahed Rahmati, Alireza Mofidi and Mostafa Abbaszadeh
Algorithms 2023, 16(10), 470; https://doi.org/10.3390/a16100470 - 04 Oct 2023
Abstract
In recent years, machine learning approaches, in particular graph learning methods, have achieved great results in the field of natural language processing, in particular in text classification tasks. However, many such models have shown limited generalization on datasets in different languages. In this research, we investigate and elaborate on graph machine learning methods for non-English datasets (such as the Persian Digikala dataset, which consists of users’ opinions) for the task of text classification. More specifically, we investigate different combinations of (Pars)BERT with various graph neural network (GNN) architectures (such as GCN, GAT, and GIN), as well as ensemble learning methods, in order to tackle the text classification task on certain well-known non-English datasets. Our analysis and results demonstrate how applying GNN models helps in achieving good scores on the task of text classification by better capturing the topological information between textual data. Additionally, our experiments show how models employing language-specific pre-trained models (like ParsBERT, instead of BERT) capture better information about the data, resulting in better accuracies.
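
To make the GNN component concrete, here is a minimal sketch of a single graph-convolution (GCN) step applied to document embeddings. The toy graph, feature dimensions, and weight matrix are assumptions; in the paper, node features would come from (Pars)BERT and the model would be trained end to end.

    import torch

    def gcn_layer(H: torch.Tensor, A: torch.Tensor, W: torch.Tensor) -> torch.Tensor:
        """One graph-convolution step: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W)."""
        A_hat = A + torch.eye(A.shape[0])                 # add self-loops
        d_inv_sqrt = A_hat.sum(dim=1).pow(-0.5)           # inverse square root of node degrees
        A_norm = d_inv_sqrt.unsqueeze(1) * A_hat * d_inv_sqrt.unsqueeze(0)
        return torch.relu(A_norm @ H @ W)

    # Toy example: 4 documents with 8-dimensional "BERT-like" embeddings on a path graph.
    H = torch.randn(4, 8)                                 # document features
    A = torch.tensor([[0., 1., 0., 0.],
                      [1., 0., 1., 0.],
                      [0., 1., 0., 1.],
                      [0., 0., 1., 0.]])                  # binary adjacency between documents
    W = torch.randn(8, 2)                                 # projection to 2 classes
    logits = gcn_layer(H, A, W)                           # shape (4, 2)

Stacking such layers lets each document's representation aggregate information from its neighbours, which is the topological signal the abstract refers to.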

18 pages, 7529 KiB  
Article
End-to-End Approach for Autonomous Driving: A Supervised Learning Method Using Computer Vision Algorithms for Dataset Creation
by Inês A. Ribeiro, Tiago Ribeiro, Gil Lopes and A. Fernando Ribeiro
Algorithms 2023, 16(9), 411; https://doi.org/10.3390/a16090411 - 28 Aug 2023
Cited by 2
Abstract
This paper presents a solution for an autonomously driven vehicle (a robotic car) based on artificial intelligence using a supervised learning method. A scaled-down robotic car containing only one camera as a sensor was developed to participate in the RoboCup Portuguese Open Autonomous Driving League competition. This study is based solely on the development of this robotic car, and the results presented are only from this competition. Teams usually solve the competition problem by relying on computer vision algorithms, and no research could be found on neural network model-based assistance for vehicle control. This technique is commonly used in general autonomous driving, and the amount of research is increasing. To train a neural network, a large number of labelled images is necessary; however, these are difficult to obtain. In order to address this problem, a graphical simulator was used with an environment containing the track and the robot/car to extract images for the dataset. A classical computer vision algorithm developed by the authors processes the image data to extract relevant information about the environment and uses it to determine the optimal direction for the vehicle to follow on the track, which is then associated with the respective image grab. Several training runs were carried out with the created dataset to reach the final neural network model; tests were performed within a simulator, and the effectiveness of the proposed approach was additionally demonstrated through experimental results on two real robotic cars, which performed better than expected. This system proved to be very successful in steering the robotic car on a road-like track, and the agent’s performance increased with the use of supervised learning methods. With computer vision algorithms, the system performed an average of 23 complete laps around the track before going off-track, whereas with assistance from the neural network model the system never went off the track.
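
As a rough illustration of how a classical vision step can turn simulator frames into steering labels for supervised training, the sketch below derives a normalised steering value from a binary track mask. It is not the authors' algorithm; the mask format, the use of the lower half of the image, and the label convention are assumptions.

    import numpy as np

    def steering_label(track_mask: np.ndarray) -> float:
        """Map a binary track mask extracted from a camera frame to a steering
        label in [-1, 1]: negative steers left, positive steers right."""
        h, w = track_mask.shape
        lower = track_mask[h // 2:, :]              # look only at the road directly ahead
        cols = np.nonzero(lower)[1]                 # column indices of track pixels
        if cols.size == 0:
            return 0.0                              # no track visible: keep straight
        track_center = cols.mean()
        return float((track_center - w / 2) / (w / 2))

    # Toy frame: track pixels slightly to the right of centre -> positive label
    frame = np.zeros((64, 64), dtype=np.uint8)
    frame[32:, 40:48] = 1
    print(steering_label(frame))                    # about 0.36, i.e. steer right

Pairing each grabbed image with such a label is what allows the neural network to be trained by supervised learning without manual annotation.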

17 pages, 5504 KiB  
Article
Model Retraining: Predicting the Likelihood of Financial Inclusion in Kiva’s Peer-to-Peer Lending to Promote Social Impact
by Tasha Austin and Bharat S. Rawal
Algorithms 2023, 16(8), 363; https://doi.org/10.3390/a16080363 - 28 Jul 2023
Cited by 2
Abstract
The purpose of this study is to show how machine learning can be leveraged as a tool to govern social impact and drive fair and equitable investments. Many organizations today are establishing financial inclusion goals to promote social impact and have been increasing their investments in this space. Financial inclusion is the opportunity for individuals and businesses to access affordable financial products, including loans, credit, and insurance, that they may otherwise not have access to through traditional financial institutions. Peer-to-peer (P2P) lending serves as a platform that can support and foster financial inclusion and influence social impact, and it is becoming more popular today as a resource for underserved communities. Loans issued through P2P lending can fund projects and initiatives focused on climate change, workforce diversity, women’s rights, equity, labor practices, natural resource management, accounting standards, carbon emissions, and several other areas. With this in mind, AI can be a powerful governance tool to help manage risks and promote opportunities for an organization’s financial inclusion goals. In this paper, we explore how AI, specifically machine learning, can help manage the P2P platform Kiva’s investment risks and deliver impact, emphasizing the importance of prediction model retraining to account for regulatory and other changes across the P2P landscape and to drive better decision-making. As part of this research, we also explore how changes in important model variables affect aggregate model predictions.
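
To illustrate the retraining idea in code, the sketch below monitors a simple loan-outcome classifier on incoming batches and retrains it when ranking performance drops below a floor. The synthetic data generator, drift mechanism, and AUC threshold are assumptions for demonstration only and do not reflect Kiva's data or the authors' models.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)

    def make_batch(n=500, drift=0.0):
        """Synthetic stand-in for loan features and outcomes; `drift` rotates which
        features matter, mimicking regulatory or landscape changes."""
        X = rng.normal(size=(n, 5))
        w = np.array([np.cos(drift), np.sin(drift), 0.3, 0.0, 0.0])
        y = (X @ w + rng.normal(0, 0.5, n) > 0).astype(int)
        return X, y

    X_train, y_train = make_batch()
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    AUC_FLOOR = 0.70                                     # illustrative retraining trigger
    for month, drift in enumerate([0.0, 0.5, 2.5]):      # simulated incoming monthly batches
        X_new, y_new = make_batch(drift=drift)
        auc = roc_auc_score(y_new, model.predict_proba(X_new)[:, 1])
        if auc < AUC_FLOOR:                              # performance degraded: retrain on fresh data
            model = LogisticRegression(max_iter=1000).fit(X_new, y_new)
            print(f"month {month}: AUC {auc:.2f} below floor, model retrained")
        else:
            print(f"month {month}: AUC {auc:.2f}, model kept")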

36 pages, 6469 KiB  
Article
Physics-Informed Deep Learning for Traffic State Estimation: A Survey and the Outlook
by Xuan Di, Rongye Shi, Zhaobin Mo and Yongjie Fu
Algorithms 2023, 16(6), 305; https://doi.org/10.3390/a16060305 - 17 Jun 2023
Cited by 4
Abstract
For its robust predictive power (compared to pure physics-based models) and sample-efficient training (compared to pure deep learning models), physics-informed deep learning (PIDL), a paradigm hybridizing physics-based models and deep neural networks (DNNs), has been booming in science and engineering fields. One key challenge of applying PIDL to various domains and problems lies in the design of a computational graph that integrates physics and DNNs: in other words, how the physics is encoded into DNNs and how the physics and data components are represented. In this paper, we offer an overview of a variety of architecture designs of PIDL computational graphs and how these structures are customized to traffic state estimation (TSE), a central problem in transportation engineering. When observation data, problem type, and goal vary, we demonstrate potential architectures of PIDL computational graphs and compare these variants using the same real-world dataset.
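
As a concrete, simplified example of the physics-data coupling surveyed here, the sketch below adds an LWR conservation-law residual (with a Greenshields flux) to a data-fitting loss for a density network rho(t, x). The network size, normalised units, and collocation sampling are assumptions; the computational graphs reviewed in the paper vary considerably.

    import torch
    import torch.nn as nn

    # Physics-informed loss sketch for traffic state estimation (illustrative).
    # A network rho(t, x) fits sparse density observations while being penalised
    # for violating the LWR conservation law with a Greenshields flux:
    #   d(rho)/dt + d(q)/dx = 0,   q(rho) = v_max * rho * (1 - rho / rho_max)
    rho_net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                            nn.Linear(64, 64), nn.Tanh(),
                            nn.Linear(64, 1))
    v_max, rho_max = 1.0, 1.0                         # normalised free-flow speed and jam density

    def physics_residual(tx: torch.Tensor) -> torch.Tensor:
        tx = tx.clone().requires_grad_(True)
        rho = rho_net(tx)
        grads = torch.autograd.grad(rho.sum(), tx, create_graph=True)[0]
        rho_t, rho_x = grads[:, 0:1], grads[:, 1:2]
        q_prime = v_max * (1 - 2 * rho / rho_max)     # dq/drho for the Greenshields flux
        return rho_t + q_prime * rho_x                # chain rule: dq/dx = q'(rho) * drho/dx

    def pidl_loss(tx_obs, rho_obs, tx_col, lam=1.0):
        data_loss = ((rho_net(tx_obs) - rho_obs) ** 2).mean()   # fit sparse sensor data
        phys_loss = (physics_residual(tx_col) ** 2).mean()      # penalise PDE violation
        return data_loss + lam * phys_loss

    # Toy usage: 50 sparse observations and 500 collocation points in (t, x) within [0, 1]^2
    loss = pidl_loss(torch.rand(50, 2), torch.rand(50, 1), torch.rand(500, 2))
    loss.backward()

The weight lam balances how strongly the physics prior constrains the fit; different PIDL designs trade off the data and physics terms in different ways.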
