Artificial Intelligence and Data Engineering in Engineering Applications

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (20 December 2022) | Viewed by 35624

Special Issue Editor


Prof. Dr. Krzysztof Koszela
Guest Editor
Department of Biosystems Engineering, Faculty of Environmental and Mechanical Engineering, Poznań University of Life Sciences, Wojska Polskiego 50, 60-627 Poznań, Poland
Interests: artificial intelligence; neural networks; machine learning; computer image analysis; computer engineering; prediction; production process optimization and modeling; process management; Internet of Things; business management; management process; trends in management

Special Issue Information

Dear Colleagues,

Artificial intelligence has been one of the most dynamically developing fields of science for the past dozen or so years. While many researchers see in it an enormous opportunity for ever faster development, lower manufacturing costs, and increased production efficiency, others worry about potentially massive layoffs and dependence on technology, or even about the possibility of those in possession of intelligent technologies taking control of the world. Regardless of who is right, one thing is certain—for most businesses in the economy, there is no way back from the application of artificial intelligence. During everyday operational activities, IT systems gather a great deal of information about how processes are carried out in a company. Often these are only logs—confirmations of receipt of goods, machine start-ups, clocking in and clocking out. Thanks to algorithms, it is possible to make use of this sort of data: applying them allows the process actually being carried out to be detected automatically and analyzed. This makes it possible to identify bottlenecks and redundant activities, and to see how the work actually performed in a company differs from what was laid down in approved procedures. Artificial intelligence heralds an age of productivity and innovation, setting new standards for speed, flexibility, and optimization. Most importantly, companies that automate their processes will gain a competitive advantage over their rivals and maintain market leadership while meeting the growing expectations of their clients.

This Special Issue will concentrate on methods of artificial intelligence and data analysis and their implementation in engineering. The topics of interest include, but are not limited to:

  • Artificial intelligence in optimization of production and logistics processes;
  • The Internet of Things as a competitive advantage;
  • Big data in managing operating activities;
  • Knowledge engineering and expert systems in knowledge management and in supporting business decisions;
  • Machine learning and decision making in engineering;
  • Intelligent sensors and systems in machine vision and control;
  • Artificial neural networks in identification and classification of objects;
  • Computer (digital) image analysis in production processes;
  • Business intelligence in decision-making systems.

Prof. Dr. Krzysztof Koszela
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence (AI)
  • intelligent manufacturing
  • big data
  • Industry 4.0
  • Internet of Things
  • manufacturing systems
  • cloud manufacturing
  • manufacturing processes
  • artificial vision
  • management in new digitally powered manufacturing concepts

Published Papers (12 papers)

Research

17 pages, 5677 KiB  
Article
Impact of Interdependencies: Multi-Component System Perspective toward Predictive Maintenance Based on Machine Learning and XAI
by Milot Gashi, Belgin Mutlu and Stefan Thalmann
Appl. Sci. 2023, 13(5), 3088; https://doi.org/10.3390/app13053088 - 27 Feb 2023
Cited by 1 | Viewed by 1117
Abstract
Taking a multi-component perspective in Predictive Maintenance (PdM) is a promising approach to improving prediction quality. To this end, detecting and modeling interdependencies within systems is important, especially as systems become more complex and personalized. However, existing PdM solutions mostly take a single-component perspective, neglecting the dependencies between components, even though interdependencies can be found between most components in the real world. The major reason for this lost opportunity is the challenge of identifying and modeling interdependencies between components. This paper introduces a framework to identify interdependencies and explain their impact on PdM within a Multi-Component System (MCS). The contribution of this approach is two-fold. First, it shows the impact of modeling interdependencies in predictive analytics. Second, it helps to understand which components interact with each other and to what degree they affect the deterioration state of the corresponding components. As a result, our approach can identify and explain the existence of interdependencies between components. In particular, we demonstrate that the time since a component was last changed is a valuable feature for quantifying interdependencies. Moreover, we show that taking the interdependencies into account provides a statistically significant improvement in F1-score of 7% on average compared to a model in which interdependencies are neglected. We expect that our findings will improve maintenance scheduling in industry while improving prediction models in general.
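
As a rough illustration of the multi-component idea, the following minimal Python sketch (not the authors' code) builds a synthetic dataset in which one component's failures depend on the time since a coupled component was last changed, then compares F1-scores with and without that interdependency feature; all data, features, and models here are illustrative assumptions.

```python
# Minimal sketch: "time since a neighbouring component was last changed" as an
# interdependency feature in PdM. Data, model, and threshold are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
own_age = rng.uniform(0, 1000, n)        # hours since component A was changed
neighbour_age = rng.uniform(0, 1000, n)  # hours since coupled component B was changed
load = rng.normal(50, 10, n)             # generic operating condition

# Synthetic ground truth: A's failure risk rises with its own age AND with the
# neighbour's age (the interdependency we want the model to exploit).
risk = 0.002 * own_age + 0.001 * neighbour_age + 0.01 * (load - 50)
fails = (risk + rng.normal(0, 0.4, n)) > 1.5

X_single = np.column_stack([own_age, load])                # single-component view
X_multi = np.column_stack([own_age, load, neighbour_age])  # multi-component view

for name, X in [("single-component", X_single), ("multi-component", X_multi)]:
    Xtr, Xte, ytr, yte = train_test_split(X, fails, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
    print(name, "F1 =", round(f1_score(yte, clf.predict(Xte)), 3))
```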

20 pages, 2732 KiB  
Article
Choosing Solution Strategies for Scheduling Automated Guided Vehicles in Production Using Machine Learning
by Felicia Schweitzer, Günter Bitsch and Louis Louw
Appl. Sci. 2023, 13(2), 806; https://doi.org/10.3390/app13020806 - 06 Jan 2023
Cited by 5 | Viewed by 2230
Abstract
Artificial intelligence is considered a significant technology for driving the future evolution of smart manufacturing environments. At the same time, automated guided vehicles (AGVs) play an essential role in manufacturing systems due to their potential to improve internal logistics by increasing production flexibility. The productivity of the entire system relies on the quality of the schedule, which can achieve production cost savings by minimizing delays and the total makespan. However, traditional scheduling algorithms often have difficulty adapting to changing environmental conditions, and the performance of a selected algorithm depends on the individual scheduling problem. Therefore, this paper analyzes the scheduling problem classes of AGVs, applying design science research to develop an algorithm selection approach. The designed artifact draws on a catalogue of problem characteristics and uses several machine learning algorithms to find the optimal solution strategy for a given scheduling problem. The contribution of this paper is an algorithm selection method that automatically selects a scheduling algorithm depending on the problem class and the algorithm space. In this way, production efficiency can be increased by dynamically adapting the AGV schedules. A computational study with benchmark instances from the literature demonstrated the successful use of constraint programming solvers for solving JSSP and FJSSP scheduling problems and of machine learning algorithms for predicting the most promising solver. The performance of the solvers strongly depended on the given problem class and problem instance; consequently, overall production performance increased when selecting the algorithm per instance. A field experiment in the learning factory at Reutlingen University enabled validation of the approach within a running production scenario.
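
The per-instance selection idea can be sketched as a small classification problem; in the hypothetical Python example below, the instance features and winning-solver labels are invented placeholders, not the paper's actual characteristic catalogue or solver set.

```python
# Minimal sketch of per-instance algorithm selection: a classifier trained on
# offline benchmark outcomes predicts the most promising solver for a new
# scheduling instance. All values below are illustrative.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Features per scheduling instance: [n_jobs, n_machines, flexibility_ratio]
X = np.array([
    [10, 5, 0.0], [50, 10, 0.0], [20, 8, 0.6], [40, 12, 0.8],
    [15, 6, 0.1], [60, 15, 0.9], [30, 10, 0.2], [25, 9, 0.7],
])
# Label = solver that won the offline benchmark on that instance
# (e.g., 0 = CP solver tuned for JSSP, 1 = CP solver tuned for FJSSP).
y = np.array([0, 0, 1, 1, 0, 1, 0, 1])

selector = GradientBoostingClassifier(random_state=0).fit(X, y)

new_instance = np.array([[35, 11, 0.75]])
print("predicted best solver:", selector.predict(new_instance)[0])
```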

33 pages, 7304 KiB  
Article
Comparative Study of Various Neural Network Types for Direct Inverse Material Parameter Identification in Numerical Simulations
by Paul Meißner, Tom Hoppe and Thomas Vietor
Appl. Sci. 2022, 12(24), 12793; https://doi.org/10.3390/app122412793 - 13 Dec 2022
Cited by 1 | Viewed by 1561
Abstract
Increasing product requirements in the mechanical engineering industry and efforts to reduce time-to-market demand highly accurate and resource-efficient finite element simulations. The required calibration of material model parameters is becoming increasingly challenging given the growing variety of available materials. Besides the classical iterative optimization-based parameter identification method, novel machine learning-based methods represent promising alternatives, especially in terms of efficiency. However, the machine learning algorithms, architectures, and settings significantly affect the resulting accuracy. This work presents a comparative study of different machine learning algorithms, based on virtual datasets with varying settings, for the direct inverse material parameter identification method. Multilayer perceptrons, convolutional neural networks, and Bayesian neural networks are compared, and their resulting prediction accuracies are investigated. Furthermore, the advantages of uncertainty quantification via the Bayesian probabilistic approach for material parameter identification are examined and discussed. The results show increased prediction quality when using convolutional neural networks instead of multilayer perceptrons. The assessment of aleatoric and epistemic uncertainties with Bayesian neural networks also demonstrated advantages in evaluating the reliability of the predicted material parameters and their influence on the subsequent finite element simulations.
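
A minimal sketch of the direct inverse approach is given below, using a toy elastic/ideal-plastic material law and scikit-learn in place of the architectures studied in the paper; the parameter ranges and response model are illustrative assumptions.

```python
# Minimal sketch of direct inverse parameter identification: virtual response
# curves are simulated from sampled parameters, and a network learns the
# inverse map from curve back to the parameters (E, sy). All values assumed.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
eps = np.linspace(0.0, 0.05, 50)                 # fixed strain grid

def simulate(E, sy):
    """Toy material response: linear elastic, capped at the yield stress."""
    return np.minimum(E * eps, sy)

lo, hi = np.array([50e3, 100.0]), np.array([210e3, 500.0])  # bounds on (E, sy)
params = rng.uniform(lo, hi, size=(2000, 2))
curves = np.array([simulate(E, sy) for E, sy in params])

# Normalize inputs and targets so both parameters contribute comparably.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(curves / hi[1], (params - lo) / (hi - lo))

true_E, true_sy = 120e3, 300.0
x = simulate(true_E, true_sy).reshape(1, -1) / hi[1]
pred = net.predict(x)[0] * (hi - lo) + lo        # de-normalize the prediction
print("identified E, sy:", pred.round(1))
```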

15 pages, 4640 KiB  
Article
A Sample Balance-Based Regression Module for Object Detection in Construction Sites
by Xiaoyu Wang, Hengyou Wang, Changlun Zhang, Qiang He and Lianzhi Huo
Appl. Sci. 2022, 12(13), 6752; https://doi.org/10.3390/app12136752 - 03 Jul 2022
Cited by 2 | Viewed by 1433
Abstract
Object detection plays an important role in safety monitoring, quality control, and productivity management at construction sites. Currently, the dominant detection methods are deep neural networks (DNNs), and state-of-the-art object detectors rely on a bounding box regression (BBR) module to localize objects. However, the detection results suffer from a bounding box redundancy problem caused by inaccurate BBR. In this paper, we propose an improvement to the object detection regression module that addresses the bounding box redundancy problem. The inaccuracy of BBR in the detection results is caused by the imbalance between hard and easy samples in the BBR process, i.e., the number of easy samples with small regression errors is much smaller than that of hard samples. Therefore, a strategy for balancing hard and easy samples is introduced into the EIOU (Efficient Intersection over Union) loss and the FocalL1 regression loss function, respectively, and the two are combined into a new regression loss function, namely the EFocalL1-SEIOU (Efficient FocalL1-Segmented Efficient Intersection over Union) loss. Finally, the proposed EFocalL1-SEIOU loss is evaluated on four different DNN-based detectors using the MOCS (Moving Objects in Construction Sites) dataset. The experimental results show that the EFocalL1-SEIOU loss improves the object detection ability of different detectors at construction sites.
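
For orientation, the following NumPy sketch implements the plain EIoU loss that the proposed EFocalL1-SEIOU builds on (an IoU term plus center-distance and width/height penalties); the authors' segmented and focal re-weighting is not reproduced here.

```python
# Minimal sketch of the plain EIoU loss between a predicted box and a ground
# truth box, both given as (x1, y1, x2, y2). Not the paper's full loss.
import numpy as np

def eiou_loss(box, gt):
    # intersection and IoU
    ix1, iy1 = max(box[0], gt[0]), max(box[1], gt[1])
    ix2, iy2 = min(box[2], gt[2]), min(box[3], gt[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    iou = inter / (area(box) + area(gt) - inter + 1e-9)

    # smallest enclosing box
    cw = max(box[2], gt[2]) - min(box[0], gt[0])
    ch = max(box[3], gt[3]) - min(box[1], gt[1])

    # center distance and width/height differences
    cx, cy = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
    gx, gy = (gt[0] + gt[2]) / 2, (gt[1] + gt[3]) / 2
    rho2 = (cx - gx) ** 2 + (cy - gy) ** 2
    dw2 = ((box[2] - box[0]) - (gt[2] - gt[0])) ** 2
    dh2 = ((box[3] - box[1]) - (gt[3] - gt[1])) ** 2

    return (1 - iou
            + rho2 / (cw**2 + ch**2 + 1e-9)
            + dw2 / (cw**2 + 1e-9)
            + dh2 / (ch**2 + 1e-9))

print(eiou_loss((10, 10, 50, 50), (12, 15, 55, 48)))
```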

21 pages, 1563 KiB  
Article
Error-Bounded Learned Scientific Data Compression with Preservation of Derived Quantities
by Jaemoon Lee, Qian Gong, Jong Choi, Tania Banerjee, Scott Klasky, Sanjay Ranka and Anand Rangarajan
Appl. Sci. 2022, 12(13), 6718; https://doi.org/10.3390/app12136718 - 02 Jul 2022
Cited by 11 | Viewed by 1785
Abstract
Scientific applications continue to grow and produce extremely large amounts of data, which require efficient compression algorithms for long-term storage. Compression errors in scientific applications can have a deleterious impact on downstream processing, so it is crucial to preserve all the “known” Quantities of Interest (QoI) during compression. To address this issue, most existing approaches bound the reconstruction error of the original or primary data (PD) but cannot directly control the preservation of the QoI. In this work, we propose a physics-informed compression technique that is composed of two parts: (i) reduction of the PD with bounded errors and (ii) preservation of the QoI. In the first step, we combine tensor decompositions, autoencoders, product quantizers, and error-bounded lossy compressors to bound the reconstruction error at high levels of compression. In the second step, we use constraint-satisfaction post-processing followed by quantization to preserve the QoI. To illustrate the challenges of reducing the reconstruction errors of the PD and QoI, we focus on simulation data generated by a large-scale fusion code, XGC, which can produce tens of petabytes in a single day. The results show that our approach can achieve high compression ratios while accurately preserving the QoI within scientifically acceptable bounds.
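
The constraint-satisfaction step can be illustrated with a toy example: for a linear QoI such as the sum of the field, the minimal-norm correction that restores the QoI exactly is a uniform shift of the reconstruction. The sketch below shows only that single step, not the paper's full pipeline.

```python
# Minimal sketch of QoI-preserving post-processing: after any lossy
# reconstruction, project the reconstructed field so a linear QoI (here the
# sum) matches the primary data exactly. The lossy stage is a stand-in.
import numpy as np

rng = np.random.default_rng(0)
pd_field = rng.random(10_000)                    # primary data
recon = pd_field + rng.normal(0, 0.01, 10_000)   # stand-in for lossy reconstruction

# Minimal-norm correction enforcing sum(recon') == sum(pd_field): distribute
# the QoI residual uniformly (least-squares projection onto the constraint).
residual = pd_field.sum() - recon.sum()
recon_qoi = recon + residual / recon.size

print("QoI error before:", abs(pd_field.sum() - recon.sum()))
print("QoI error after: ", abs(pd_field.sum() - recon_qoi.sum()))
```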

21 pages, 388 KiB  
Article
Using Artificial Intelligence for Space Challenges: A Survey
by Antonia Russo and Gianluca Lax
Appl. Sci. 2022, 12(10), 5106; https://doi.org/10.3390/app12105106 - 19 May 2022
Cited by 18 | Viewed by 12571
Abstract
Artificial intelligence is applied to many fields and contributes to many important applications and research areas, such as intelligent data processing, natural language processing, autonomous vehicles, and robots. The adoption of artificial intelligence in several fields has been the subject of many research papers, and recently the space sector has also become a field in which artificial intelligence is receiving significant attention. This paper surveys the most relevant problems in the field of space applications solved by artificial intelligence techniques. We focus on applications related to mission design, space exploration, and Earth observation, and we provide a taxonomy of the current challenges. Moreover, we present and discuss current solutions proposed for each challenge to allow researchers to identify and compare the state of the art in this context.

17 pages, 6803 KiB  
Article
A GAN-Augmented Corrosion Prediction Model for Uncoated Steel Plates
by Feng Jiang and Mikihito Hirohata
Appl. Sci. 2022, 12(9), 4706; https://doi.org/10.3390/app12094706 - 07 May 2022
Cited by 3 | Viewed by 1517
Abstract
The deterioration and damage of aging steel structures have raised serious safety concerns. Corrosion has been identified as a major cause of this deterioration and damage, as it causes steel members to lose material. As a result, the structures’ stiffness and load-bearing capacity are reduced, leading to economic losses and safety hazards. For the maintenance and repair of steel structures, fast and accurate prediction of corrosion development plays a critical role in numerical simulation analysis and can save time and costs. In this research, we build a simulation system based on GAN data augmentation, with UNet as the generator and MobileNetV2 as the discriminator. The goal is to effectively predict the corrosion behavior of uncoated steel structures over time and under different circumstances. The system can simulate three stages of corrosion based on a dataset collected from experiments and can predict the corrosion of steel plates in the next stage. The discriminator of the system can also be used to classify the type of steel, the stage of corrosion, and the number of days of corrosion. In comparative experiments, our system demonstrates outstanding performance and outperforms the baseline model.
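
A minimal PyTorch sketch of the GAN wiring described above follows, with a small convolutional net standing in for the UNet generator and a MobileNetV2 classifier as the discriminator; image sizes, dummy data, and the training loop are illustrative assumptions.

```python
# Minimal sketch of an image-to-image GAN for next-stage corrosion prediction;
# not the authors' architecture or training protocol.
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2

generator = nn.Sequential(               # stand-in for the UNet generator
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
)
discriminator = mobilenet_v2(num_classes=1)   # real/fake logit

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

current = torch.rand(4, 3, 224, 224)     # corrosion images at stage t (dummy batch)
future = torch.rand(4, 3, 224, 224)      # matching images at stage t+1 (dummy batch)

for step in range(2):                    # illustrative loop, not a full schedule
    # discriminator: real next-stage images vs. generated ones
    fake = generator(current).detach()
    d_loss = bce(discriminator(future), torch.ones(4, 1)) + \
             bce(discriminator(fake), torch.zeros(4, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # generator: try to fool the discriminator
    g_loss = bce(discriminator(generator(current)), torch.ones(4, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    print(f"step {step}: d_loss={d_loss.item():.3f} g_loss={g_loss.item():.3f}")
```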

13 pages, 5045 KiB  
Article
Development of an Automated Body Temperature Detection Platform for Face Recognition in Cattle with YOLO V3-Tiny Deep Learning and Infrared Thermal Imaging
by Shih-Sian Guo, Kuo-Hua Lee, Liyun Chang, Chin-Dar Tseng, Sin-Jhe Sie, Guang-Zhi Lin, Jih-Yi Chen, Yi-Hsin Yeh, Yu-Jie Huang and Tsair-Fwu Lee
Appl. Sci. 2022, 12(8), 4036; https://doi.org/10.3390/app12084036 - 16 Apr 2022
Cited by 6 | Viewed by 3000
Abstract
This study developed an automated temperature measurement and monitoring platform for dairy cattle. The platform used the YOLO V3-tiny (you only look once, YOLO) deep learning algorithm to identify and classify dairy cattle images. The system included three layers of YOLO V3-tiny identification: (1) the dairy cow body; (2) the individual number (identity, ID); and (3) the eye socket in the thermal image. We recorded each cow’s individual number and body temperature data after the three layers of identification and carried out long-term body temperature tracking. The average prediction score was 96%, and the recognition accuracy was 90.0%. The eye-socket recognition rate in thermal images was >99%. The area under the receiver operating characteristic curve (AUC) of the prediction model was 0.813 (0.717–0.910), showing that the model had excellent predictive ability. This system provides a rapid and convenient temperature measurement solution for ranchers, and recognition can be further improved by collecting more image data. In the future, this platform is expected to replace traditional intrusive radio-frequency identification for individual recognition.

17 pages, 4752 KiB  
Article
Prediction of In-Cylinder Pressure of Diesel Engine Based on Extreme Gradient Boosting and Sparrow Search Algorithm
by Ying Sun, Lin Lv, Peng Lee and Yunkai Cai
Appl. Sci. 2022, 12(3), 1756; https://doi.org/10.3390/app12031756 - 08 Feb 2022
Cited by 2 | Viewed by 1749
Abstract
In-cylinder pressure is one of the most important references in the process of diesel engine performance optimization. Acquiring effective in-cylinder pressure values requires many physical tests: the cost of physical testing is high, various uncertain factors introduce errors into the test results, and engine tests take so long that the results cannot meet real-time requirements. It is therefore necessary to develop technology with high accuracy and a fast response to predict the in-cylinder pressure of diesel engines. In this paper, the in-cylinder pressure values of a high-speed diesel engine under different conditions are used to train an extreme gradient boosting model, and the sparrow search algorithm—a swarm intelligence optimization algorithm—is introduced to optimize the hyperparameters of the model. The results show that the extreme gradient boosting model combined with the sparrow search algorithm can predict the in-cylinder pressure under each verification condition with high accuracy; the proportion of validation samples with a prediction error below 10% is 94%. In the process of model optimization, it was found that, compared with the grid search method, the sparrow search algorithm has a stronger hyperparameter optimization ability, reducing the mean square error of the prediction model by 27.99%.
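
As a sketch of the tuning loop, the hypothetical Python example below trains an XGBoost regressor on synthetic pressure-like data and searches its hyperparameters; plain random search stands in for the sparrow search algorithm, whose update rules are not reproduced here.

```python
# Minimal sketch of tuning a gradient-boosting pressure model; random search
# is a stand-in for the paper's sparrow search algorithm (SSA), and the
# synthetic "crank angle + speed -> pressure" data are purely illustrative.
import numpy as np
from xgboost import XGBRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
angle = rng.uniform(-180, 180, 4000)             # crank angle [deg]
speed = rng.uniform(1000, 3000, 4000)            # engine speed [rpm]
pressure = 60 * np.exp(-(angle / 40) ** 2) * (speed / 2000) \
           + rng.normal(0, 0.5, 4000)            # toy pressure trace [bar]

X = np.column_stack([angle, speed])
Xtr, Xte, ytr, yte = train_test_split(X, pressure, random_state=0)

best = (None, np.inf)
for _ in range(20):                              # SSA would steer this sampling
    params = dict(
        n_estimators=int(rng.integers(100, 600)),
        max_depth=int(rng.integers(3, 10)),
        learning_rate=float(rng.uniform(0.01, 0.3)),
    )
    model = XGBRegressor(**params).fit(Xtr, ytr)
    mse = mean_squared_error(yte, model.predict(Xte))
    if mse < best[1]:
        best = (params, mse)

print("best params:", best[0], "MSE:", round(best[1], 4))
```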

18 pages, 4983 KiB  
Article
Optimization of the Sowing Unit of a Piezoelectrical Sensor Chamber with the Use of Grain Motion Modeling by Means of the Discrete Element Method. Case Study: Rape Seed
by Łukasz Gierz, Weronika Kruszelnicka, Mariola Robakowska, Krzysztof Przybył, Krzysztof Koszela, Anna Marciniak and Tomasz Zwiachel
Appl. Sci. 2022, 12(3), 1594; https://doi.org/10.3390/app12031594 - 02 Feb 2022
Cited by 6 | Viewed by 1471
Abstract
Nowadays, in the face of continuous technological progress and environmental requirements, all manufacturing processes and machines need to be optimized in order to achieve the highest possible efficiency. Agricultural machines such as seed drills and cultivation units are no exception. Their efficiency depends on the amount of sowing material used and on keeping the seed transport tubes and colters unobstructed. Most available control systems for seed drills are optical and do not work effectively close to the ground because of heavy dust. Thus, there is still a need for seed drill control systems with sensors suitable for use under conditions of heavy dust that shorten the reaction time to clogging and are affordable for every farmer. This study presents an analysis of grain motion in the sowing system and of the operating efficiency of an original, patent-pending piezoelectric sensor. The novelty of this work lies in the specially designed piezoelectric sensor in the sowing unit, for which an analysis of indication errors was carried out; an arrangement of this type has not been described before. We analyze the influence of the seed tube tilt angle and the type of its exit hole end on the coordinates of the point at which a grain collides with the sensor surface, and on erroneous indications of the amount of sown grain identified by the piezoelectric sensor. Low sensor indication errors (up to 10%), particularly for small tilt angles (0° and 5°), confirm its high grain detection efficiency, comparable with that of other sensors used in sowing systems (e.g., photoelectric, fiber-optic, or infrared sensors), and its suitability for commercial application. The results presented in this work broaden the knowledge on the use of sensors in seeding systems and provide a basis for the development of precise systems with piezoelectric sensors.
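
The geometric part of the analysis can be illustrated with simple ballistics: given a tilt angle, the seed's exit velocity components determine where it strikes the sensor plane. The sketch below uses an assumed exit speed and drop height and neglects air drag.

```python
# Minimal ballistic sketch of where a seed leaving the tube outlet strikes the
# sensor plane as a function of tube tilt angle; all geometry values assumed.
import numpy as np

g = 9.81          # gravitational acceleration [m/s^2]
v0 = 2.0          # assumed exit speed [m/s]
h = 0.15          # assumed vertical drop from outlet to sensor plane [m]

for tilt_deg in (0, 5, 10, 15):
    a = np.radians(tilt_deg)
    vx, vy = v0 * np.sin(a), v0 * np.cos(a)     # tilt measured from vertical
    # solve h = vy*t + 0.5*g*t^2 for the flight time t (downward positive)
    t = (-vy + np.sqrt(vy**2 + 2 * g * h)) / g
    print(f"tilt {tilt_deg:2d} deg -> impact offset x = {vx * t * 1000:.1f} mm")
```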

15 pages, 3232 KiB  
Article
Measurements and Analysis of the Physical Properties of Cereal Seeds Depending on Their Moisture Content to Improve the Accuracy of DEM Simulation
by Łukasz Gierz, Ewelina Kolankowska, Piotr Markowski and Krzysztof Koszela
Appl. Sci. 2022, 12(2), 549; https://doi.org/10.3390/app12020549 - 06 Jan 2022
Cited by 9 | Viewed by 3555
Abstract
This article presents the results of research on the influence of moisture on changes in the physical properties, i.e., the length, width, thickness, and weight, of dressed and untreated cereal seeds, with the aim of improving simulations based on the discrete element method (DEM). The research was conducted on the seeds of three winter cereals: triticale, rye, and barley. Seeds with an initial moisture content of about 7% were moistened to five levels ranging from 9.5% to 17.5%, in increments of 2%. The statistical analysis showed that moisture significantly influenced the seeds’ length, width, thickness, and weight. As the moisture content of the seeds increased, the differences in their weight grew. The average increase in the thousand kernel weight resulting from the increase in moisture content ranged from 4 to 6 mg. The change in seed moisture content from 9.5% to 17.5% significantly increased the volume of rye seeds by 3.10% to 14.99%, the volume of triticale seeds by 1.00% to 13.40%, and the volume of barley seeds by 1.00% to 15.33%. These data can be used as parameters to improve the DEM simulation process.
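
Such moisture-property trends can be turned into DEM input curves by simple regression; in the hypothetical sketch below, the sample values are placeholders of the same order as the reported ranges, not the paper's measured data.

```python
# Minimal sketch of turning a moisture-property trend into a DEM calibration
# curve; the sample values are illustrative placeholders.
import numpy as np

moisture = np.array([9.5, 11.5, 13.5, 15.5, 17.5])    # moisture content [%]
volume_gain = np.array([3.1, 6.0, 9.1, 12.0, 15.0])   # assumed volume increase [%]

slope, intercept = np.polyfit(moisture, volume_gain, 1)  # linear trend
predict = lambda m: slope * m + intercept

print(f"estimated volume gain at 12.0% moisture: {predict(12.0):.1f}%")
```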

13 pages, 383 KiB  
Article
Load Classification: A Case Study for Applying Neural Networks in Hyper-Constrained Embedded Devices
by Andrea Agiollo and Andrea Omicini
Appl. Sci. 2021, 11(24), 11957; https://doi.org/10.3390/app112411957 - 15 Dec 2021
Cited by 5 | Viewed by 1832
Abstract
The application of Artificial Intelligence to the industrial world and its appliances has recently grown in popularity. Indeed, AI techniques are now becoming the de facto technology for the resolution of complex tasks concerning computer vision, natural language processing, and many other areas. In recent years, most of the research community’s efforts have focused on increasing the performance of common AI techniques—e.g., Neural Networks—at the expense of their complexity. Indeed, many works in the AI field identify and propose hyper-efficient techniques targeting high-end devices. However, the application of such AI techniques to devices and appliances characterised by limited computational capabilities remains an open research issue. In the industrial world, this problem heavily affects low-end appliances, which are developed with a focus on cost saving and rely on computationally constrained components. While some efforts have been made in this area through the proposal of AI-simplification and AI-compression techniques, it is still relevant to study which available AI techniques can be used in modern constrained devices. Therefore, in this paper we propose a load classification task as a case study to analyse which state-of-the-art NN solutions can be embedded successfully into constrained industrial devices. The presented case study is tested on a simple microcontroller with very limited computational performance (i.e., few FLOPS), to faithfully mirror the design process of low-end appliances. A handful of NN models are tested, showing positive outcomes and possible limitations, and highlighting the complexity of AI embedding.
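
The feasibility question behind such a study can be approximated with a back-of-the-envelope estimate of a network's memory and compute footprint; the sketch below uses an assumed MLP shape and assumed microcontroller budgets.

```python
# Minimal sketch of checking whether a small MLP fits on a hyper-constrained
# MCU: estimate parameters and multiply-accumulates (MACs) per inference and
# compare with assumed flash and compute budgets.
layers = [16, 32, 32, 4]          # assumed: input, two hidden layers, output classes

params = sum(n_in * n_out + n_out for n_in, n_out in zip(layers, layers[1:]))
macs = sum(n_in * n_out for n_in, n_out in zip(layers, layers[1:]))

FLASH_BUDGET = 64 * 1024          # assumed: 64 KiB of flash for weights
MACS_BUDGET = 100_000             # assumed: per-inference MAC budget

print(f"parameters: {params} (~{params * 4} bytes as float32)")
print(f"MACs per inference: {macs}")
print("fits flash:", params * 4 <= FLASH_BUDGET,
      "| fits compute:", macs <= MACS_BUDGET)
```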
