Applied and Innovative Computational Intelligence Systems

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (30 December 2022) | Viewed by 54,609

Special Issue Editors


Guest Editor
NOVA LINCS and Instituto Superior de Engenharia (ISE), University of the Algarve, 8005-139 Faro, Portugal
Interests: computer vision; human–computer interaction; human–machine cooperation; artificial intelligence

Special Issue Information

Dear Colleagues,

This Special Issue on ‘Applied and Innovative Computational Intelligence Systems’ provides a venue where Computational Intelligence (CI) researchers and practitioners can publish their theoretical and experimental outcomes in a journal with an Impact Factor of 2.679 and a CiteScore of 3.4 in 2021 (updated on 4 November). Built on pillars such as Neural Networks, Fuzzy Systems, and Evolutionary Computation, CI pursues intelligent systems characterized by computational adaptability, fault tolerance, and high performance: adaptive platforms that enable or facilitate intelligent behavior in complex and dynamic environments, yielding technology that lets machines think, behave, or act more humanely.

In this context, this Special Issue intends to explore CI and complementary application and theory fields including, but not restricted to, Artificial Intelligence in general, Machine Learning, Deep Learning, Computer Vision, Augmented Reality, Human–Computer Interaction, Smart Spaces, Smart Cities, Ubiquitous Intelligence, Data Analysis and Science, Time-Series, Internet of Things/Everything, Fault Detection, Sentiment Analysis, Natural Language Processing, Operational Research, Evolutionary Computation, Fuzzy Logic, Robotics, etc.

Accepted papers will build a comprehensive collection of research and development trends in contemporary applied and innovative computational intelligence systems, serving as a convenient reference for established CI experts as well as newly arrived practitioners being introduced to the field’s trends. Following the journal’s policy, there is no limit on manuscript length, and full experimental details should be provided so that other researchers can reproduce the results. Furthermore, electronic files and software can be deposited as supplementary electronic material, allowing full reproducibility and future analysis, which increases the visibility of the authors and their work.

Prof. Dr. João M. F. Rodrigues
Prof. Dr. Pedro J. S. Cardoso
Prof. Dr. Cristina Portalés Ricart
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • machine learning
  • deep learning
  • computer vision
  • augmented reality
  • human–computer interaction
  • smart spaces
  • smart cities
  • ubiquitous intelligence
  • data analysis and science
  • time-series
  • internet of things/everything
  • fault detection
  • sentiment analysis
  • natural language processing
  • operational research
  • evolutionary computation
  • robotics

Published Papers (22 papers)


Research


16 pages, 4052 KiB  
Article
Long Short-Term Memory Recurrent Neural Network Approach for Approximating Roots (Eigen Values) of Transcendental Equation of Cantilever Beam
by Madiha Bukhsh, Muhammad Saqib Ali, Abdullah Alourani, Khlood Shinan, Muhammad Usman Ashraf, Abdul Jabbar and Weiqiu Chen
Appl. Sci. 2023, 13(5), 2887; https://0-doi-org.brum.beds.ac.uk/10.3390/app13052887 - 23 Feb 2023
Cited by 2 | Viewed by 2768
Abstract
In this study, the natural frequencies and roots (Eigenvalues) of the transcendental equation in a cantilever steel beam for transverse vibration with clamped free (CF) boundary conditions are estimated using a long short-term memory-recurrent neural network (LSTM-RNN) approach. The finite element method (FEM) package ANSYS is used for dynamic analysis and, with the aid of simulated results, the Euler–Bernoulli beam theory is adopted for the generation of sample datasets. Then, a deep neural network (DNN)-based LSTM-RNN technique is implemented to approximate the roots of the transcendental equation. Datasets are mainly based on the cantilever beam geometry characteristics used for training and testing the proposed LSTM-RNN network. Furthermore, an algorithm using MATLAB platform for numerical solutions is used to cross-validate the dataset results. The network performance is evaluated using the mean square error (MSE) and mean absolute error (MAE). Finally, the numerical and simulated results are compared using the LSTM-RNN methodology to demonstrate the network validity. Full article
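
The clamped–free configuration leads to the classical characteristic equation cos(βL)·cosh(βL) + 1 = 0, whose roots the network learns to approximate. As a rough sketch of the numerical cross-validation step (the paper uses a MATLAB routine; SciPy stands in here, and the bracketing step size is an assumption), one can refine the first few roots and tabulate them as training targets:

```python
# Sketch: generate reference roots of the cantilever-beam characteristic
# equation cos(x)cosh(x) + 1 = 0 (x = beta*L), as used to cross-validate
# an LSTM-RNN approximator. SciPy replaces the paper's MATLAB routine.
import numpy as np
from scipy.optimize import brentq

def characteristic(x: float) -> float:
    return np.cos(x) * np.cosh(x) + 1.0

def beam_roots(n_roots: int) -> list[float]:
    roots, x, step = [], 0.5, 0.1
    while len(roots) < n_roots:
        if characteristic(x) * characteristic(x + step) < 0:  # sign change brackets a root
            roots.append(brentq(characteristic, x, x + step))
        x += step
    return roots

roots = beam_roots(5)
print(roots)  # ~[1.8751, 4.6941, 7.8548, 10.9955, 14.1372]
# Natural frequencies follow from Euler-Bernoulli theory as
# f_i = (roots[i]**2 / (2*pi*L**2)) * sqrt(E*I / (rho*A)).
```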

25 pages, 1961 KiB  
Article
Anomaly Detection of Consumption in Hotel Units: A Case Study Comparing Isolation Forest and Variational Autoencoder Algorithms
by Tomás Mendes, Pedro J. S. Cardoso, Jânio Monteiro and João Raposo
Appl. Sci. 2023, 13(1), 314; https://0-doi-org.brum.beds.ac.uk/10.3390/app13010314 - 27 Dec 2022
Cited by 2 | Viewed by 1727
Abstract
Buildings are responsible for a high percentage of global energy consumption, and thus, the improvement of their efficiency can positively impact not only the costs to the companies they house, but also at a global level. One way to reduce that impact is to constantly monitor the consumption levels of these buildings and to quickly act when unjustified levels are detected. Currently, a variety of sensor networks can be deployed to constantly monitor many variables associated with these buildings, including distinct types of meters, air temperature, solar radiation, etc. However, as consumption is highly dependent on occupancy and environmental variables, the identification of anomalous consumption levels is a challenging task. This study focuses on the implementation of an intelligent system, capable of performing the early detection of anomalous sequences of values in consumption time series applied to distinct hotel unit meters. The development of the system was performed in several steps, which resulted in the implementation of several modules. An initial (i) Exploratory Data Analysis (EDA) phase was made to analyze the data, including the consumption datasets of electricity, water, and gas, obtained over several years. The results of the EDA were used to implement a (ii) data correction module, capable of dealing with the transmission losses and erroneous values identified during the EDA’s phase. Then, a (iii) comparative study was performed between a machine learning (ML) algorithm and a deep learning (DL) one, respectively, the isolation forest (IF) and a variational autoencoder (VAE). The study was made, taking into consideration a (iv) proposed performance metric for anomaly detection algorithms in unsupervised time series, also considering computational requirements and adaptability to different types of data. (v) The results show that the IF algorithm is a better solution for the presented problem, since it is easily adaptable to different sources of data, to different combinations of features, and has lower computational complexity. This allows its deployment without major computational requirements, high knowledge, and data history, whilst also being less prone to problems with missing data. As a global outcome, an architecture of a platform is proposed that encompasses the mentioned modules. The platform represents a running system, performing continuous detection and quickly alerting hotel managers about possible anomalous consumption levels, allowing them to take more timely measures to investigate and solve the associated causes. Full article
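
As a minimal sketch of the isolation-forest route the study favors (synthetic hourly data; the window length and contamination rate are assumptions, not the paper's tuned values), consumption readings can be cut into sliding windows and scored for anomalous sequences:

```python
# Sketch: unsupervised anomaly detection on a consumption time series with
# an isolation forest, scoring fixed-length sliding windows of readings.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
hours = np.arange(24 * 90)                       # ~3 months of hourly meter data
series = 50 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 2, hours.size)
series[1000:1012] += 60                          # injected anomalous surge

W = 24                                           # one-day windows (assumed)
windows = np.lib.stride_tricks.sliding_window_view(series, W)

model = IsolationForest(n_estimators=200, contamination=0.01, random_state=0)
labels = model.fit_predict(windows)              # -1 marks an anomalous window
for start in np.flatnonzero(labels == -1):
    print(f"anomalous window starting at hour {start}")
```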

14 pages, 2205 KiB  
Article
Reducing the Complexity of Musculoskeletal Models Using Gaussian Process Emulators
by Ivan Benemerito, Erica Montefiori, Alberto Marzo and Claudia Mazzà
Appl. Sci. 2022, 12(24), 12932; https://0-doi-org.brum.beds.ac.uk/10.3390/app122412932 - 16 Dec 2022
Viewed by 1360
Abstract
Musculoskeletal models (MSKMs) are used to estimate the muscle and joint forces involved in human locomotion, often associated with the onset of degenerative musculoskeletal pathologies (e.g., osteoarthritis). Subject-specific MSKMs offer more accurate predictions than their scaled-generic counterparts. This accuracy is achieved through time-consuming personalisation of models and manual tuning procedures that suffer from potential repeatability errors, hence limiting the wider application of this modelling approach. In this work we have developed a methodology relying on Sobol’s sensitivity analysis (SSA) for ranking muscles based on their importance to the determination of the joint contact forces (JCFs) in a cohort of older women. The thousands of data points required for SSA are generated using Gaussian Process emulators, a Bayesian technique to infer the input–output relationship between nonlinear models from a limited number of observations. Results show that there is a pool of muscles whose personalisation has little effects on the predictions of JCFs, allowing for a reduced but still accurate representation of the musculoskeletal system within shorter timeframes. Furthermore, joint forces in subject-specific and generic models are influenced by different sets of muscles, suggesting the existence of a model-specific component to the sensitivity analysis. Full article
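
A compressed sketch of the emulator-plus-sensitivity workflow (a toy function stands in for the musculoskeletal model; the Saltelli first-order estimator is one standard choice, not necessarily the paper's exact implementation):

```python
# Sketch: fit a Gaussian Process emulator on a few expensive "model runs",
# then estimate first-order Sobol indices from cheap emulator evaluations.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)
d = 3                                             # e.g., 3 muscle parameters

def expensive_model(X):                           # toy stand-in for the MSK model
    return np.sin(X[:, 0]) + 2.0 * X[:, 1] ** 2 + 0.1 * X[:, 2]

X_train = rng.uniform(0, 1, (60, d))              # limited number of observations
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3)).fit(
    X_train, expensive_model(X_train))

N = 4096                                          # cheap emulator calls
A, B = rng.uniform(0, 1, (N, d)), rng.uniform(0, 1, (N, d))
yA, yB = gp.predict(A), gp.predict(B)
var = np.var(np.concatenate([yA, yB]))
for i in range(d):
    ABi = A.copy(); ABi[:, i] = B[:, i]           # swap column i (Saltelli scheme)
    S1 = np.mean(yB * (gp.predict(ABi) - yA)) / var
    print(f"first-order Sobol index S1[{i}] ~= {S1:.3f}")
```

Muscles whose parameters receive a near-zero index are candidates for generic, non-personalised modelling.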

21 pages, 638 KiB  
Article
Factors Affecting Information Security and the Implementation of Bring Your Own Device (BYOD) Programmes in the Kingdom of Saudi Arabia (KSA)
by Adel A. Bahaddad, Khalid A. Almarhabi and Ahmed M. Alghamdi
Appl. Sci. 2022, 12(24), 12707; https://0-doi-org.brum.beds.ac.uk/10.3390/app122412707 - 11 Dec 2022
Cited by 4 | Viewed by 1996
Abstract
In recent years, desktop computer use has decreased while smartphone use has increased. This trend is also prevalent in the Middle East, particularly in the Kingdom of Saudi Arabia (KSA). Therefore, the Saudi government has prioritised overcoming the challenges that smartphone users face, as smartphones are considered critical infrastructure. The high number of information security (InfoSec) breaches and concerns has prompted most government stakeholders to develop comprehensive policies and regulations that introduce inclusive InfoSec systems. This has mostly been motivated by a keenness to adopt digital transformations and increase productivity while spending efficiently. The present study used quantitative measures to assess user acceptance of bring your own device (BYOD) programmes and identified the main factors affecting their adoption using the unified theory of acceptance and use of technology (UTAUT) model. Constructs such as perceived business threats (PT-Bs), perceived private threats (PT-Ps), and employer attractiveness (EA) were also added to the UTAUT model to provide the public, private, and non-profit sectors with an acceptable method of adopting BYOD programmes. The factors affecting the adoption of BYOD programmes by the studied sectors of the KSA were derived from the responses of 857 participants. Full article

19 pages, 3397 KiB  
Article
Decision-Refillable-Based Two-Material-View Fuzzy Classification for Personal Thermal Comfort
by Zhaofei Xu, Weidong Lu, Zhenyu Hu, Ta Zhou, Yi Zhou, Wei Yan and Feifei Jiang
Appl. Sci. 2022, 12(22), 11700; https://0-doi-org.brum.beds.ac.uk/10.3390/app122211700 - 17 Nov 2022
Viewed by 949
Abstract
The personal thermal comfort model is used to design and control the thermal environment and to predict the thermal comfort responses of individuals, rather than to reflect the average response of a population. Previous individual thermal comfort models mainly focused on a single material environment, whereas in real life individuals experience thermal comfort through various channels. Therefore, a new personal thermal comfort evaluation method is constructed by means of a reliable decision-based fuzzy classification model built from two views. In this study, a two-view thermal comfort fuzzy classification model was constructed using the interpretable zero-order Takagi–Sugeno–Kang (TSK) fuzzy classifier as the basic training subblock; it is the first time an optimized machine learning algorithm has been used to study an interpretable thermal comfort model. The relevant information (including basic information, sampling conditions, physiological parameters, physical environment, environmental perception, and self-assessment parameters) was obtained from 157 subjects in experimental chambers with two different materials. The proposed method has the following features: (1) The training samples in the input layer contain the feature data gathered under experimental conditions with two different materials; the training models constructed from the samples under these two conditions complement and constrain each other and improve the accuracy of the overall model training. (2) In the rule layer of the training unit, interpretable short fuzzy rules are designed to address a known problem: the intermediate-layer outputs and fuzzy rules of a fuzzy classifier are otherwise difficult to explain. (3) Better decision-making knowledge is obtained both in the rule layer of each single-view training model and in the two-view fusion model. In addition, the feature mapping space is generated according to the contribution of the decision-making information from the two single training views, which not only preserves the feature information of the source training samples to a large extent but also improves the training accuracy of the model and enhances its generalization performance. Experimental results indicated that TMV-TSK-FC has better classification and generalization performance than several related state-of-the-art non-fuzzy classifiers applied in this study. Significantly, compared with the single-view fuzzy classifier, the training and testing accuracies of TMV-TSK-FC are improved by 3–11% and 2–9%, respectively. The experimental results also showed that TMV-TSK-FC has good semantic interpretability. Full article
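
For readers unfamiliar with the base learner, here is a minimal sketch of a generic zero-order TSK fuzzy classifier (Gaussian antecedents placed by k-means, constant consequents fitted by least squares); this is a textbook construction, not the paper's TMV-TSK-FC:

```python
# Sketch: zero-order TSK fuzzy classifier. Each rule has Gaussian membership
# functions (antecedent) and a constant output (consequent); the prediction
# is the firing-strength-weighted average of the rule constants.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_features=4, random_state=0)
y_pm = 2.0 * y - 1.0                              # labels in {-1, +1}

R = 8                                             # number of fuzzy rules (assumed)
centers = KMeans(n_clusters=R, n_init=10, random_state=0).fit(X).cluster_centers_
sigma = X.std(axis=0) + 1e-6                      # shared spread per feature

def firing(X):
    # product of per-feature Gaussian memberships, one column per rule
    d2 = ((X[:, None, :] - centers[None]) / sigma) ** 2
    F = np.exp(-0.5 * d2.sum(axis=2))
    return F / (F.sum(axis=1, keepdims=True) + 1e-12)   # normalised strengths

F = firing(X)
consequents, *_ = np.linalg.lstsq(F, y_pm, rcond=None)  # one constant per rule
pred = np.sign(firing(X) @ consequents)
print("training accuracy:", (pred == y_pm).mean())
```

The short "IF x is near center_r THEN output c_r" rules are what gives this family of models its interpretability.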

22 pages, 735 KiB  
Article
Graph-Based Multi-Label Classification for WiFi Network Traffic Analysis
by Giuseppe Granato, Alessio Martino, Andrea Baiocchi and Antonello Rizzi
Appl. Sci. 2022, 12(21), 11303; https://0-doi-org.brum.beds.ac.uk/10.3390/app122111303 - 7 Nov 2022
Cited by 3 | Viewed by 2036
Abstract
Network traffic analysis, and specifically anomaly and attack detection, calls for sophisticated tools relying on a large number of features. Mathematical modeling is extremely difficult, given the ample variety of traffic patterns and the subtle and varied ways that malicious activity can be carried out in a network. We address this problem by exploiting data-driven modeling and computational intelligence techniques. Sequences of packets captured on the communication medium are considered, along with multi-label metadata. Graph-based modeling of the data is introduced, resorting to the powerful GRALG approach based on feature information granulation, identification of a representative alphabet, embedding, and genetic optimization. The obtained classifier is evaluated in terms of both accuracy and complexity on two different supervised problems and compared with state-of-the-art algorithms. We show that the proposed preprocessing strategy is able to describe higher-level relations between data instances in the input domain, allowing the algorithms to suitably reconstruct the structure of the input domain itself. Furthermore, the considered Granular Computing approach is able to extract knowledge on multiple semantic levels, effectively describing anomalies as subgraph-based symbols of the whole network graph in a specific time interval. Interesting performances can thus be achieved in identifying network traffic patterns, in spite of the complexity of the considered traffic classes. Full article

22 pages, 2387 KiB  
Article
Ensuring Security and Energy Efficiency of Wireless Sensor Network by Using Blockchain
by Abdul Rehman, Saima Abdullah, Muqaddas Fatima, Muhammad Waseem Iqbal, Khalid Ali Almarhabi, M. Usman Ashraf and Saqib Ali
Appl. Sci. 2022, 12(21), 10794; https://0-doi-org.brum.beds.ac.uk/10.3390/app122110794 - 25 Oct 2022
Cited by 7 | Viewed by 2108
Abstract
Security has become one of the biggest issues accompanying the advance of new technology, and blockchain technology is one way to address it. Most recent work has focused on homogeneous systems; in this research, the primary focus is on securing wireless sensor networks using blockchain. Over the last few decades, the Internet of Things (IoT) has advanced rapidly, as intelligent devices and associated technologies have spread through every field, such as smart cities, education, agriculture, banking, healthcare, etc., and many applications use IoT technologies for real-time monitoring. Because of limited storage capacity and low processing power, smart devices and gadgets remain vulnerable to attack, as existing cryptography techniques and security measures are insufficient. In this research work, we first review and identify the privacy and security issues in IoT systems; we then address these issues with blockchain technology, examining how data behave on a distributed, decentralized network architecture. Researchers introduced clustering techniques for wireless sensor networks to improve network efficiency: spreading the workload lets the system operate faster and more efficiently. A cluster comprises a number of nodes, and a cluster head (CH) manages the local interactions between the nodes in the cluster. In general, cluster members communicate with the cluster head, which aggregates and fuses the acquired data in order to save energy; before reaching the sink, the cluster heads may additionally form another layer of clusters among themselves. Clustering divides data traffic into groups whose members resemble each other while differing from the members of other groups. The results presented at the end of this paper show the performance of the nodes in specific areas of the network, how the approach works, and how efficient it is; the blockchain likewise operates in a distributed manner. Full article
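
A minimal sketch of the underlying idea (a hash-linked ledger kept by a cluster head, with aggregated sensor readings as the block payload; this is a generic illustration, not the paper's protocol):

```python
# Sketch: a toy hash-linked ledger. A cluster head aggregates member readings
# and appends them as a block whose hash covers the previous block, making
# the recorded sensor data tamper-evident.
import hashlib
import json
import time

class Block:
    def __init__(self, index, payload, prev_hash):
        self.index, self.payload, self.prev_hash = index, payload, prev_hash
        self.timestamp = time.time()
        self.hash = self.compute_hash()

    def compute_hash(self):
        body = json.dumps({"i": self.index, "p": self.payload,
                           "prev": self.prev_hash, "t": self.timestamp},
                          sort_keys=True)
        return hashlib.sha256(body.encode()).hexdigest()

chain = [Block(0, {"genesis": True}, "0" * 64)]

def cluster_head_commit(readings):
    # aggregate member readings (mean here) before committing, to save energy
    payload = {"mean": sum(readings) / len(readings), "n": len(readings)}
    chain.append(Block(len(chain), payload, chain[-1].hash))

cluster_head_commit([21.4, 21.9, 22.1])
cluster_head_commit([21.7, 22.3, 21.8])
ok = all(b.prev_hash == chain[i].hash for i, b in enumerate(chain[1:]))
print("chain valid:", ok, "| blocks:", len(chain))
```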

11 pages, 1709 KiB  
Article
Utilizing Ensemble Learning to Improve the Distance Information for UWB Positioning
by Che-Cheng Chang, Yee-Ming Ooi, Shih-Tung Tsui, Ting-Hui Chiang and Ming-Han Tsai
Appl. Sci. 2022, 12(19), 9614; https://0-doi-org.brum.beds.ac.uk/10.3390/app12199614 - 25 Sep 2022
Cited by 2 | Viewed by 962
Abstract
An ultra-wideband (UWB) positioning system consists of at least three anchors and a tag for the positioning procedure. Via the UWB transceivers mounted on all devices in the system, we can obtain the distance information between each pair of devices and thus localize the tag. However, uncertain measurements in the real world may introduce incorrect measurement information, e.g., of time, distance, positioning, and so on. Therefore, we incorporate the technique of ensemble learning into UWB positioning to improve its performance. In this paper, we present two methods. The experimental results show that our ideas can be applied to different scenarios and work well. Of note, compared with existing research in the literature, our first algorithm was more accurate and stable, and our second algorithm performed even better than the first. Moreover, we also provide a comprehensive discussion of an ill-advised point that is often used to evaluate positioning efficiency in the literature. Full article
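
A rough sketch of the pipeline the paper builds on: correct noisy anchor–tag distances with an ensemble regressor, then localize the tag by linearized least squares. The anchor layout, noise model, and training setup below are assumptions for illustration:

```python
# Sketch: ensemble-corrected UWB ranging followed by least-squares positioning.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
anchors = np.array([[0.0, 0.0], [6.0, 0.0], [3.0, 5.0]])   # 3 anchors (assumed)

def true_dist(tag):
    return np.linalg.norm(anchors - tag, axis=1)

# Train an ensemble to map noisy, biased measurements back to true distances.
tags = rng.uniform(0, 6, (500, 2))
d_true = np.array([true_dist(t) for t in tags])
d_meas = d_true + 0.15 + rng.normal(0, 0.10, d_true.shape)  # bias + noise
corrector = RandomForestRegressor(n_estimators=100, random_state=0)
corrector.fit(d_meas, d_true)

def locate(d):
    # Linearize: subtracting the first range equation yields a linear system.
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    return np.linalg.lstsq(A, b, rcond=None)[0]

tag = np.array([2.0, 3.0])
d_noisy = true_dist(tag) + 0.15 + rng.normal(0, 0.10, 3)
d_hat = corrector.predict(d_noisy[None])[0]
print("raw fix:", locate(d_noisy), "| corrected fix:", locate(d_hat))
```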

20 pages, 7607 KiB  
Article
CCFont: Component-Based Chinese Font Generation Model Using Generative Adversarial Networks (GANs)
by Jangkyoung Park, Ammar Ul Hassan and Jaeyoung Choi
Appl. Sci. 2022, 12(16), 8005; https://0-doi-org.brum.beds.ac.uk/10.3390/app12168005 - 10 Aug 2022
Cited by 1 | Viewed by 2516
Abstract
Font generation using deep learning has made considerable progress using image style transfer, but the automatic conversion/generation of Chinese characters still remains a difficult task owing to the complex character shape and large number of Chinese characters. Most known Chinese character generation models use the image conversion method of the Chinese character shape itself; however, it is difficult to reproduce complex Chinese characters. Recent methods have utilized character compositionality by separating up to three or four components to improve the quality of generated characters, but it is still difficult to generate high-quality results for complex Chinese characters with many components. In this study, we proposed the CCFont model (component-based Chinese font generation model using generative adversarial networks (GANs)) that automatically generates all Chinese characters using Chinese character components (up to 17 components). The CCFont model generates all Chinese characters in various styles using the components of Chinese characters based on conditional GAN. By acquiring local style information from the components, the information is more accurate and there is less information loss than when global information is obtained from the image of the entire character, reducing the failure of style conversion and improving quality to produce high-quality results. Additionally, the CCFont model generates high-quality results without any additional training (zero-shot font generation without any additional training) for the first-seen characters and styles. For example, the CCFont model, which was trained with only traditional Chinese (TC) characters, generates high-quality results for languages that can be divided into components, such as Korean and Thai, as well as simplified Chinese (SC) characters that are only seen during inference. CCFont can be adopted as a multi-lingual font-generation model that can be applied to all languages, which can be divided into components. To the best of our knowledge, the proposed method is the first to generate a zero-shot multilingual generation model using components. Qualitative and quantitative experiments were conducted to demonstrate the effectiveness of the proposed method. Full article

15 pages, 18997 KiB  
Article
A Global-Local Feature Fusion Convolutional Neural Network for Bone Age Assessment of Hand X-ray Images
by Qinglei Hui, Chunlin Wang, Junwei Weng, Ming Chen and Dexing Kong
Appl. Sci. 2022, 12(14), 7218; https://0-doi-org.brum.beds.ac.uk/10.3390/app12147218 - 18 Jul 2022
Cited by 1 | Viewed by 1434
Abstract
Bone age assessment plays a critical role in the investigation of endocrine, genetic, and growth disorders in children. This process is usually conducted manually, with some drawbacks, such as reliance on the pediatrician’s experience and extensive labor, as well as high variations among methods. Most deep learning models use one neural network to extract the global information from the whole input image, ignoring the local details that doctors care about. In this paper, we propose a global-local feature fusion convolutional neural network, including a global pathway to capture the global contextual information and a local pathway to extract the fine-grained information from local patches. The fine-grained information is integrated into the global context information layer-by-layer to assist in predicting bone age. We evaluated the proposed method on a dataset with 11,209 X-ray images with an age range of 4–18 years. Compared with other state-of-the-art methods, the proposed global-local network reduces the mean absolute error of the estimated ages to 0.427 years for males and 0.455 years for females; the average accuracy rate is within 6 months and 12 months, reaching 70% and 91%, respectively. In addition, the effectiveness and rationality of the model were verified on a public dataset. Full article
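
A minimal PyTorch sketch of the global–local idea (tiny backbones and a single late-fusion point for brevity; the paper fuses the fine-grained patch features into the global context layer by layer):

```python
# Sketch: two-pathway CNN that fuses global image context with features from
# a local patch (e.g., a carpal region) before regressing bone age.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, stride=2, padding=1),
                         nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

class GlobalLocalNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.global_path = nn.Sequential(conv_block(1, 16), conv_block(16, 32),
                                         nn.AdaptiveAvgPool2d(1))
        self.local_path = nn.Sequential(conv_block(1, 16), conv_block(16, 32),
                                        nn.AdaptiveAvgPool2d(1))
        self.head = nn.Sequential(nn.Linear(64, 32), nn.ReLU(),
                                  nn.Linear(32, 1))        # age in years

    def forward(self, full_image, patch):
        g = self.global_path(full_image).flatten(1)        # global context
        l = self.local_path(patch).flatten(1)              # fine-grained detail
        return self.head(torch.cat([g, l], dim=1))

net = GlobalLocalNet()
age = net(torch.randn(4, 1, 256, 256), torch.randn(4, 1, 64, 64))
print(age.shape)  # torch.Size([4, 1])
```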

26 pages, 5375 KiB  
Article
AAQAL: A Machine Learning-Based Tool for Performance Optimization of Parallel SPMV Computations Using Block CSR
by Muhammad Ahmed, Sardar Usman, Nehad Ali Shah, M. Usman Ashraf, Ahmed Mohammed Alghamdi, Adel A. Bahadded and Khalid Ali Almarhabi
Appl. Sci. 2022, 12(14), 7073; https://0-doi-org.brum.beds.ac.uk/10.3390/app12147073 - 13 Jul 2022
Cited by 5 | Viewed by 1783
Abstract
The sparse matrix–vector product (SpMV), considered one of the seven dwarfs (numerical methods of significance), is essential in high-performance real-world scientific and analytical applications requiring the solution of large sparse linear equation systems, where SpMV is a key computing operation. As the sparsity patterns of sparse matrices are unknown before runtime, we used machine learning-based performance optimization of the SpMV kernel, exploiting the structure of the sparse matrices via the Block Compressed Sparse Row (BCSR) storage format. As the structure of sparse matrices varies across application domains, optimizing the block size is important for reducing the overall execution time, and manual allocation of block sizes is error prone and time consuming. Thus, we propose AAQAL, a data-driven, machine learning-based tool that automates the process of data distribution and the selection of near-optimal block sizes based on the structure of the matrix. We trained and tested the tool using different machine learning methods—decision tree, random forest, gradient boosting, ridge regressor, and AdaBoost—and nearly 700 real-world matrices from 43 application domains, including computer vision, robotics, and computational fluid dynamics. AAQAL achieved 93.47% of the maximum attainable performance, a substantial improvement over the manual or random selection of block sizes used in practice. To the best of our knowledge, this is the first attempt to exploit the matrix structure using BCSR to select optimal block sizes for SpMV computations using machine learning techniques. Full article
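
A small sketch of the measurement that makes block-size selection matter: converting one matrix to BCSR with several candidate block sizes and timing the SpMV kernel. AAQAL replaces this exhaustive timing with a learned predictor over matrix-structure features; the matrix and block sizes below are illustrative assumptions:

```python
# Sketch: time SpMV under different BCSR block sizes for one matrix. A tool
# like AAQAL learns to predict the best block size from structural features
# (size, nnz, density, block fill-in) instead of timing every candidate.
import time
import numpy as np
import scipy.sparse as sp

n = 4096
A_csr = sp.random(n, n, density=0.002, format="csr", random_state=0)
x = np.ones(n)

for r, c in [(1, 1), (2, 2), (4, 4), (8, 8)]:
    A = A_csr.tobsr(blocksize=(r, c))          # block size must divide the shape
    t0 = time.perf_counter()
    for _ in range(50):
        A @ x
    dt = (time.perf_counter() - t0) / 50
    print(f"block {r}x{c}: {dt * 1e3:.3f} ms, stored values = {A.nnz}")
# Features such as (n, nnz, density, per-row nnz variance) plus the winning
# block size would form one training row for the regressor.
```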

30 pages, 10797 KiB  
Article
A Feedback System Supporting Students Approaching a High-Level Programming Course
by Jong-Yih Kuo, Hui-Chi Lin, Ping-Feng Wang and Zhen-Gang Nie
Appl. Sci. 2022, 12(14), 7064; https://0-doi-org.brum.beds.ac.uk/10.3390/app12147064 - 13 Jul 2022
Cited by 2 | Viewed by 1525
Abstract
This study analyzes the mistakes students are prone to make in programming and uses the GDB and Valgrind tools to implement dynamic analysis techniques for application to programs created by students. In the analysis process, spectral fault localization is used to strengthen the dynamic analysis and find errors more accurately. The analyzed results are sorted, and corresponding feedback is given to students so that they can better understand the content of the errors when revising their programs, while the types of errors made are classified and counted. This study identifies the mistakes students make most frequently and the topics on which they are most likely to make them. The developed system was evaluated in experiments with students from a programming course divided into an experimental group and a control group. Both groups used a system for uploading and submitting assignments; the experimental group additionally used the code analysis and feedback system, whereas the control group used the submission system only for basic assignment uploading, verification, and comparison against test data. After a program was submitted, it was decomposed into declarative statements and dynamically sliced, and the data were sent to the GNU Debugger (GDB) and Valgrind for spectral fault localization; the classification and recording of error types; and the interpretation of the number of erroneous lines, the error types, and the related variables. The resulting feedback report was returned to the student interface, giving the experimental group effective, actionable feedback for revising their homework, while the types and numbers of errors made in each week's homework, together with the students' answers to the questions, were recorded in the database. Analyzing the pass rates of both groups on each homework test clarified the weekly differences in learning success between the two groups. The weekly pass rates and numbers of detected errors in the experimental group versus the control group were plotted on a distribution map to examine whether there was a positive correlation between the detected information, the feedback to the students, the test pass rates, and other related data. The system statistically tracked the feedback and the degree of improvement of the homework programs; specially designed questionnaires were then distributed to all students to directly obtain and quantify their feedback and the perceived benefits of the system, thereby verifying its effectiveness and practicality. Full article
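
A small sketch of the Valgrind leg of such a pipeline (the binary path and the feedback wording are invented; the Valgrind flags are standard):

```python
# Sketch: run a student's compiled program under Valgrind and distill its
# memory-error report into short feedback lines for the student interface.
import re
import subprocess

def valgrind_feedback(binary="./student_prog"):   # hypothetical binary path
    proc = subprocess.run(
        ["valgrind", "--leak-check=full", "--error-exitcode=1", binary],
        capture_output=True, text=True)
    feedback = []
    for line in proc.stderr.splitlines():         # Valgrind reports on stderr
        if m := re.search(r"definitely lost: ([\d,]+) bytes", line):
            feedback.append(f"Memory leak: {m.group(1)} bytes never freed.")
        if "Invalid read" in line or "Invalid write" in line:
            feedback.append("Invalid memory access (out-of-bounds or freed).")
    return feedback or ["No memory errors detected by Valgrind."]

print("\n".join(valgrind_feedback()))
```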

15 pages, 5720 KiB  
Article
Compressive Domain Deep CNN for Image Classification and Performance Improvement Using Genetic Algorithm-Based Sensing Mask Learning
by Baba Fakruddin Ali B H and Prakash Ramachandran
Appl. Sci. 2022, 12(14), 6881; https://0-doi-org.brum.beds.ac.uk/10.3390/app12146881 - 7 Jul 2022
Cited by 3 | Viewed by 1956
Abstract
The majority of digital images are stored in compressed form, yet image classification with convolutional neural networks (CNNs) is generally done on uncompressed images. Training the CNN in the compressed domain eliminates the decompression step and results in improved efficiency, minimal storage, and lower cost. Compressive sensing (CS) is an effective and efficient method for signal acquisition and recovery, and training a CNN on CS measurements makes the entire process compact. The most popular sensing phenomenon used in CS is image acquisition with a single-pixel camera (SPC), which has a complex hardware design and is usually represented by a matrix simulation in numerical demonstrations. CS measurements from this process are visually unlike the original image, so adding them to the training set of a compressed-learning framework requires an inverse SPC process applied across all training and testing samples. In this paper, we propose a simpler sensing phenomenon that can be implemented on the output of a standard digital camera by retaining a few pixels and forcing the rest to zero, treating the reduced pixel set as the CS measurements. The process is modeled as a binary mask applied to the image; the resulting image remains subjectively legible to human vision and can be used directly in the training dataset. Because the sensing mask has very few active pixels at arbitrary locations, there is considerable scope to heuristically learn a mask suited to the dataset. Only a few attempts have been made to learn the sensing matrix, and the isolated effect of this learning on CNN model accuracy has not been reported. We adopt an ablation approach to study how sensing-matrix learning improves the accuracy of a basic CNN architecture. We applied CS to a two-class image dataset using a Primitive Walsh Hadamard (PWH) binary mask and performed classification with a basic CNN, varying the number of retained pixels in the training and testing sets and reporting the resulting training and validation accuracies. A novel Genetic Algorithm-based compressive learning (GACL) method is proposed to learn the PWH mask and optimize the model training accuracy using two different crossover techniques. For a compression ratio (CR) of 90% (retaining only 10% of the pixels in every training and testing image of the two classes), the training accuracy improved from 67% to 85% when diagonal crossover was used to create offspring in GACL. The robustness of the method is examined by applying GACL to a user-defined multiclass dataset, where it achieved better CNN model accuracies. This work demonstrates the strength of sensing-matrix learning, which can be integrated with advanced training models to minimize the amount of information sent to central servers, making it suitable for a typical IoT framework. Full article
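
A compact skeleton of the mask-learning loop. The fitness function below is a stub standing in for CNN training accuracy, and this ‘diagonal crossover’ is one plausible reading of the paper's term; both are assumptions:

```python
# Sketch: genetic algorithm over binary sensing masks. Fitness here is a
# placeholder; in GACL it would be the CNN training accuracy obtained with
# images masked by the candidate.
import numpy as np

rng = np.random.default_rng(3)
H = W = 32
KEEP = int(0.10 * H * W)                  # CR 90%: retain 10% of pixels

def random_mask():
    m = np.zeros(H * W, dtype=bool)
    m[rng.choice(H * W, KEEP, replace=False)] = True
    return m.reshape(H, W)

def fitness(mask):                        # stub: stands in for CNN accuracy
    target = np.indices((H, W)).sum(0) % 7 == 0
    return (mask & target).sum()

def diagonal_crossover(a, b):             # assumed variant: swap across diagonal
    child = a.copy()                      # (pixel budget only roughly preserved)
    upper = np.triu(np.ones((H, W), dtype=bool))
    child[upper] = b[upper]
    return child

pop = [random_mask() for _ in range(20)]
for gen in range(30):                     # mutation omitted for brevity
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    children = [diagonal_crossover(parents[i], parents[(i + 1) % 10])
                for i in range(10)]
    pop = parents + children
pop.sort(key=fitness, reverse=True)
print("best stub fitness:", fitness(pop[0]))
```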

17 pages, 2329 KiB  
Article
Accurate Sinusoidal Frequency Estimation Algorithm for Internet of Things Based on Phase Angle Interpolation Using Frequency Shift
by Minglong Cheng, Guoqing Jia, Weidong Fang, Huiyue Yi and Wuxiong Zhang
Appl. Sci. 2022, 12(12), 6232; https://0-doi-org.brum.beds.ac.uk/10.3390/app12126232 - 19 Jun 2022
Cited by 1 | Viewed by 1413
Abstract
Frequency estimation of a sinusoidal signal is a fundamental problem in signal processing for the Internet of Things. The frequency interpolation estimation algorithm based on the fast Fourier transform is susceptible to being disturbed by noise, which leads to estimation error. In order to improve the accuracy of frequency estimation, an improved Rife frequency estimation algorithm based on phase angle interpolation is proposed in this paper, namely the PAI–Rife algorithm. We changed the existing frequency deviation factor of the Rife algorithm using phase angle interpolation. Then, by setting the frequency shift threshold, the frequency that is not within the threshold range is shifted to the optimal estimation space. The simulation results show that the proposed algorithm has a wider valid estimation range, and the estimated standard deviation is closer to the Cramer–Rao lower bound. Compared with the Rife algorithm and some recently proposed advanced algorithms, the proposed algorithm has less computational complexity, lower misjudgment rate, and more stable performance. Full article
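
For context, a minimal sketch of the classical Rife estimator that the paper improves upon (FFT peak plus an interpolation term from the larger neighbouring bin); the PAI–Rife refinements (phase-angle interpolation and the frequency-shift threshold) are not reproduced here:

```python
# Sketch: classical Rife frequency estimator. The coarse estimate is the FFT
# peak bin; the fractional correction comes from the magnitude ratio of the
# peak and its larger neighbour.
import numpy as np

def rife_estimate(x, fs):
    N = x.size
    X = np.abs(np.fft.rfft(x))
    k = int(np.argmax(X[1:-1])) + 1            # peak bin (skip DC/Nyquist edges)
    side = k + 1 if X[k + 1] >= X[k - 1] else k - 1
    delta = X[side] / (X[k] + X[side])         # fractional bin offset in [0, 0.5]
    return (k + np.sign(side - k) * delta) * fs / N

fs, f_true, N = 1000.0, 123.4, 1024
t = np.arange(N) / fs
x = np.sin(2 * np.pi * f_true * t) + 0.1 * np.random.default_rng(4).normal(size=N)
print(f"estimated {rife_estimate(x, fs):.2f} Hz vs true {f_true} Hz")
```

When the true frequency falls near a bin center, delta is noise-dominated; that is the failure mode the paper's frequency-shift threshold targets.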

25 pages, 1713 KiB  
Article
A Comprehensive Survey for Deep-Learning-Based Abnormality Detection in Smart Grids with Multimodal Image Data
by Fangrong Zhou, Gang Wen, Yi Ma, Hao Geng, Ran Huang, Ling Pei, Wenxian Yu, Lei Chu and Robert Qiu
Appl. Sci. 2022, 12(11), 5336; https://0-doi-org.brum.beds.ac.uk/10.3390/app12115336 - 25 May 2022
Cited by 6 | Viewed by 2280
Abstract
In this paper, we provide a comprehensive survey of the recent advances in abnormality detection in smart grids using multimodal image data, which include visible light, infrared, and optical satellite images. The applications in visible light and infrared images, enabling abnormality detection at short range, further include several typical applications in intelligent sensors deployed in smart grids, while optical satellite image data focus on abnormality detection from a large distance. Moreover, the literature in each aspect is organized according to the considered techniques. In addition, several key methodologies and conditions for applying these techniques to abnormality detection are identified to help determine whether to use deep learning and which kind of learning techniques to use. Traditional approaches are also summarized together with their performance comparison with deep-learning-based approaches, based on which the necessity, seen in the surveyed literature, of adopting image-data-based abnormality detection is clarified. Overall, this comprehensive survey categorizes and carefully summarizes insights from representative papers in this field, which will widely benefit practitioners and academic researchers. Full article

18 pages, 1382 KiB  
Article
Train Me If You Can: Decentralized Learning on the Deep Edge
by Diogo Costa, Miguel Costa and Sandro Pinto
Appl. Sci. 2022, 12(9), 4653; https://0-doi-org.brum.beds.ac.uk/10.3390/app12094653 - 6 May 2022
Cited by 5 | Viewed by 2341
Abstract
The end of Moore’s Law, together with data privacy concerns, is forcing machine learning (ML) to shift from the cloud to the deep edge. In next-generation ML systems, inference and part of the training process will be performed at the edge, while the cloud stays responsible for major updates. This new computing paradigm, called federated learning (FL), alleviates the load on the cloud and network infrastructure while increasing data privacy. Recent advances have empowered the inference pass of quantized artificial neural networks (ANNs) on Arm Cortex-M and RISC-V microcontroller units (MCUs). Nevertheless, training remains confined to the cloud, imposing the transaction of high volumes of private data over a network and leading to unpredictable delays when ML applications attempt to adapt to adversarial environments. To fill this gap, we make the first attempt to evaluate the feasibility of ANN training on Arm Cortex-M MCUs. Of the available optimization algorithms, stochastic gradient descent (SGD) has the best trade-off between accuracy, memory footprint, and latency. However, its original form and the variants available in the literature still do not fit the stringent requirements of Arm Cortex-M MCUs. We propose L-SGD, a lightweight implementation of SGD optimized for maximum speed and minimal memory footprint on this class of MCUs. We developed a floating-point version and another that operates over quantized weights. For a fully-connected ANN trained on the MNIST dataset, L-SGD (float-32) is 4.20× faster than SGD while requiring only 2.80% of the memory, with negligible accuracy loss. The results also show that quantized training is still not feasible for training an ANN from scratch, but it is a lightweight solution for performing minor model fixes and counteracting the fairness problem in typical FL systems. Full article
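
A rough NumPy sketch of the kind of training step L-SGD streamlines: a two-layer network with explicit float32 buffers and a hand-written backward pass. The paper's memory-layout and speed optimizations for Cortex-M are not reproduced, and the sizes below are arbitrary:

```python
# Sketch: hand-rolled SGD step for a tiny fully-connected classifier, with all
# buffers kept in float32 as an MCU implementation would require.
import numpy as np

rng = np.random.default_rng(5)
D, H, C, B = 16, 32, 4, 8                      # input, hidden, classes, batch
W1 = rng.normal(0, 0.1, (D, H)).astype(np.float32)
b1 = np.zeros(H, dtype=np.float32)
W2 = rng.normal(0, 0.1, (H, C)).astype(np.float32)
b2 = np.zeros(C, dtype=np.float32)

def sgd_step(x, y, lr=np.float32(0.05)):
    global W1, b1, W2, b2
    h = np.maximum(x @ W1 + b1, 0.0)           # ReLU hidden layer
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)          # softmax
    dlogits = p.copy()
    dlogits[np.arange(B), y] -= 1.0
    dlogits /= B                               # cross-entropy gradient
    dW2, db2 = h.T @ dlogits, dlogits.sum(0)
    dh = (dlogits @ W2.T) * (h > 0)            # back through ReLU
    dW1, db1 = x.T @ dh, dh.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2
    return -np.log(p[np.arange(B), y] + 1e-9).mean()

x = rng.normal(size=(B, D)).astype(np.float32)
y = rng.integers(0, C, B)
for step in range(200):                        # repeated steps on one toy batch
    loss = sgd_step(x, y)
print("final batch loss:", float(loss))
```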

16 pages, 625 KiB  
Article
Improving Semantic Dependency Parsing with Higher-Order Information Encoded by Graph Neural Networks
by Bin Li, Yunlong Fan, Yikemaiti Sataer, Zhiqiang Gao and Yaocheng Gui
Appl. Sci. 2022, 12(8), 4089; https://0-doi-org.brum.beds.ac.uk/10.3390/app12084089 - 18 Apr 2022
Cited by 8 | Viewed by 2503
Abstract
Higher-order information brings significant accuracy gains in semantic dependency parsing. However, modeling higher-order information is non-trivial. Graph neural networks (GNNs) have been demonstrated to be an effective tool for encoding higher-order information in many graph learning tasks. Inspired by the success of GNNs, we investigate improving semantic dependency parsing with higher-order information encoded by multi-layer GNNs. Experiments are conducted on the SemEval 2015 Task 18 dataset in three languages (Chinese, English, and Czech). Compared to the previous state-of-the-art parser, our parser yields 0.3% and 2.2% improvement in average labeled F1-score on English in-domain (ID) and out-of-domain (OOD) test sets, 2.6% improvement on Chinese ID test set, and 2.0% and 1.8% improvement on Czech ID and OOD test sets. Experimental results show that our parser outperforms the previous best one on the SemEval 2015 Task 18 dataset in three languages. The outstanding performance of our parser demonstrates that the higher-order information encoded by GNNs is exceedingly beneficial for improving SDP. Full article
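
A minimal sketch of why stacking GNN layers encodes higher-order information: with a plain GCN propagation rule, k layers mix features from k-hop neighbourhoods. The toy graph and dimensions are assumptions, and the paper's parser architecture is not reproduced:

```python
# Sketch: two GCN layers over a token graph. Each layer mixes a node's features
# with its neighbours', so two layers reach second-order (2-hop) context.
import torch

def gcn_layer(A_norm, H, W):
    return torch.relu(A_norm @ H @ W)

n, d = 5, 8                                   # 5 tokens, 8-dim features
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]      # toy dependency arcs (undirected)
A = torch.eye(n)                              # self-loops
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
deg_inv_sqrt = A.sum(1).rsqrt()
A_norm = deg_inv_sqrt[:, None] * A * deg_inv_sqrt[None, :]   # D^-1/2 A D^-1/2

H0 = torch.randn(n, d)
W1, W2 = torch.randn(d, d) * 0.1, torch.randn(d, d) * 0.1
H1 = gcn_layer(A_norm, H0, W1)                # 1-hop information
H2 = gcn_layer(A_norm, H1, W2)                # 2-hop (higher-order) information
print(H2.shape)  # torch.Size([5, 8])
```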

29 pages, 1495 KiB  
Article
Variable Neighborhood Search for the Two-Echelon Electric Vehicle Routing Problem with Time Windows
by Mehmet Anıl Akbay, Can Berk Kalayci, Christian Blum and Olcay Polat
Appl. Sci. 2022, 12(3), 1014; https://0-doi-org.brum.beds.ac.uk/10.3390/app12031014 - 19 Jan 2022
Cited by 7 | Viewed by 2723 | Correction
Abstract
Increasing environmental concerns and legal regulations have led to the development of sustainable technologies and systems in logistics, as in many fields. The adoption of multi-echelon distribution networks and the use of environmentally friendly vehicles in freight distribution have become major concepts for reducing the negative impact of urban transportation activities. In this line, the present paper proposes a two-echelon electric vehicle routing problem. In the first echelon of the distribution network, products are transported from central warehouses to satellites located in the surroundings of cities. This is achieved by means of large conventional trucks. Subsequently, relatively smaller-sized electric vehicles distribute these products from the satellites to demand points/customers located in the cities. The proposed problem also takes into account the limited driving range of electric vehicles that need to be recharged at charging stations when necessary. In addition, the proposed problem considers time window constraints for the delivery of products to customers. A mixed-integer linear programming formulation is developed and small-sized instances are solved using CPLEX. Furthermore, we propose a constructive heuristic based on a modified Clarke and Wright savings heuristic. The solutions of this heuristic serve as initial solutions for a variable neighborhood search metaheuristic. The numerical results show that the variable neighborhood search matches CPLEX in the context of small problems. Moreover, it consistently outperforms CPLEX with the growing size and difficulty of problem instances. Full article
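
A stripped-down skeleton of the variable neighborhood search loop on a single plain route (shaking by k random segment reversals, 2-opt as local search); the paper's satellites, battery constraints, and time windows are omitted:

```python
# Sketch: variable neighborhood search for a single-vehicle route. Shake with
# k random segment reversals, improve with 2-opt, and widen the neighborhood
# only when no progress is made.
import numpy as np

rng = np.random.default_rng(6)
pts = rng.uniform(0, 100, (25, 2))            # toy customer coordinates

def cost(tour):
    return np.linalg.norm(pts[tour] - pts[np.roll(tour, -1)], axis=1).sum()

def two_opt(tour):
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 2):
            for j in range(i + 1, len(tour) - 1):
                cand = np.concatenate([tour[:i], tour[i:j + 1][::-1], tour[j + 1:]])
                if cost(cand) < cost(tour):
                    tour, improved = cand, True
    return tour

def shake(tour, k):
    t = tour.copy()
    for _ in range(k):                        # k reversals = k-th neighborhood
        i, j = sorted(rng.choice(len(t), 2, replace=False))
        t[i:j + 1] = t[i:j + 1][::-1]
    return t

best = two_opt(np.arange(len(pts)))
k, k_max = 1, 4
while k <= k_max:
    cand = two_opt(shake(best, k))
    if cost(cand) < cost(best):
        best, k = cand, 1                     # success: restart from k = 1
    else:
        k += 1                                # failure: try a larger neighborhood
print("best tour cost:", round(cost(best), 1))
```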

Review


20 pages, 731 KiB  
Review
Trends and Application of Artificial Intelligence Technology in Orthodontic Diagnosis and Treatment Planning—A Review
by Farraj Albalawi and Khalid A. Alamoud
Appl. Sci. 2022, 12(22), 11864; https://0-doi-org.brum.beds.ac.uk/10.3390/app122211864 - 21 Nov 2022
Cited by 9 | Viewed by 2914
Abstract
Artificial intelligence (AI) is a new breakthrough in technological advancements based on the concept of simulating human intelligence. These emerging technologies highly influence the diagnostic process in the field of medical sciences, with enhanced accuracy in diagnosis. This review article intends to report on the trends and application of AI models designed for diagnosis and treatment planning in orthodontics. A data search for the original research articles that were published over the last 22 years (from 1 January 2000 until 31 August 2022) was carried out in the most renowned electronic databases, which mainly included PubMed, Google Scholar, Web of Science, Scopus, and Saudi Digital Library. A total of 56 articles that met the eligibility criteria were included. The research trend shows a rapid increase in articles over the last two years. In total: 17 articles have reported on AI models designed for the automated identification of cephalometric landmarks; 12 articles on the estimation of bone age and maturity using cervical vertebra and hand-wrist radiographs; two articles on palatal shape analysis; seven articles for determining the need for orthodontic tooth extractions; two articles for automated skeletal classification; and 16 articles for the diagnosis and planning of orthognathic surgeries. AI is a significant development that has been successfully implemented in a wide range of image-based applications. These applications can facilitate clinicians in diagnosing, treatment planning, and decision-making. AI applications are beneficial as they are reliable, with enhanced speed, and have the potential to automatically complete the task with an efficiency equivalent to experienced clinicians. These models can prove as an excellent guide for less experienced orthodontists. Full article

18 pages, 1123 KiB  
Review
Performance of Artificial Intelligence (AI) Models Designed for Application in Pediatric Dentistry—A Systematic Review
by Sanjeev Balappa Khanagar, Khalid Alfouzan, Lubna Alkadi, Farraj Albalawi, Kiran Iyer and Mohammed Awawdeh
Appl. Sci. 2022, 12(19), 9819; https://0-doi-org.brum.beds.ac.uk/10.3390/app12199819 - 29 Sep 2022
Cited by 1 | Viewed by 2811
Abstract
Oral diseases are the most prevalent chronic childhood diseases, presenting as a major public health issue affecting children of all ages in the developing and developed countries. Early detection and control of these diseases is very crucial for a child’s oral health and general wellbeing. The aim of this systematic review is to assess the performance of artificial intelligence models designed for application in pediatric dentistry. A systematic search of the literature was conducted using different electronic databases, primarily (PubMed, Scopus, Web of Science, Embase, Cochrane) and secondarily (Google Scholar and the Saudi Digital Library) for studies published from 1 January 2000, until 20 July 2022, related to the research topic. The quality of the twenty articles that satisfied the eligibility criteria were critically analyzed based on the QUADAS-2 guidelines. Artificial intelligence models have been utilized for the detection of plaque on primary teeth, prediction of children’s oral health status (OHS) and treatment needs (TN); detection, classification and prediction of dental caries; detection and categorization of fissure sealants; determination of the chronological age; determination of the impact of oral health on adolescent’s quality of life; automated detection and charting of teeth; and automated detection and classification of mesiodens and supernumerary teeth in primary or mixed dentition. Artificial intelligence has been widely applied in pediatric dentistry in order to help less-experienced clinicians in making more accurate diagnoses. These models are very efficient in identifying and categorizing children into various risk groups at the individual and community levels. They also aid in developing preventive strategies, including designing oral hygiene practices and adopting healthy eating habits for individuals. Full article

13 pages, 477 KiB  
Review
Discrimination, Bias, Fairness, and Trustworthy AI
by Daniel Varona and Juan Luis Suárez
Appl. Sci. 2022, 12(12), 5826; https://0-doi-org.brum.beds.ac.uk/10.3390/app12125826 - 8 Jun 2022
Cited by 16 | Viewed by 9633
Abstract
In this study, we analyze “Discrimination”, “Bias”, “Fairness”, and “Trustworthiness” as working variables in the context of the social impact of AI. It has been identified that there exists a set of specialized variables, such as security, privacy, responsibility, etc., that are used to operationalize the principles in the Principled AI International Framework. These variables are defined in such a way that they contribute to others of more general scope, for example, the ones examined here, in what appears to be a generalization–specialization relationship. Our aim is to comprehend how the available notions of bias, discrimination, fairness, and other related variables that will be assured during the software project’s lifecycle (security, privacy, responsibility, etc.) can be used when developing trustworthy algorithmic decision-making systems (ADMS). Bias, discrimination, and fairness are mainly approached with an operational interest by the Principled AI International Framework, so we included sources from outside the framework to complement (from a conceptual standpoint) their study and their relationship with each other. Full article

15 pages, 841 KiB  
Review
Performance of Artificial Intelligence Models Designed for Diagnosis, Treatment Planning and Predicting Prognosis of Orthognathic Surgery (OGS)—A Scoping Review
by Sanjeev B. Khanagar, Khalid Alfouzan, Mohammed Awawdeh, Lubna Alkadi, Farraj Albalawi and Maryam A. Alghilan
Appl. Sci. 2022, 12(11), 5581; https://0-doi-org.brum.beds.ac.uk/10.3390/app12115581 - 31 May 2022
Cited by 3 | Viewed by 2342
Abstract
The technological advancements in the field of medical science have led to an escalation in the development of artificial intelligence (AI) applications, which are being extensively used in health sciences. This scoping review aims to outline the application and performance of artificial intelligence models used for diagnosing, treatment planning and predicting the prognosis of orthognathic surgery (OGS). Data for this paper was searched through renowned electronic databases such as PubMed, Google Scholar, Scopus, Web of science, Embase and Cochrane for articles related to the research topic that have been published between January 2000 and February 2022. Eighteen articles that met the eligibility criteria were critically analyzed based on QUADAS-2 guidelines and the certainty of evidence of the included studies was assessed using the GRADE approach. AI has been applied for predicting the post-operative facial profiles and facial symmetry, deciding on the need for OGS, predicting perioperative blood loss, planning OGS, segmentation of maxillofacial structures for OGS, and differential diagnosis of OGS. AI models have proven to be efficient and have outperformed the conventional methods. These models are reported to be reliable and reproducible, hence they can be very useful for less experienced practitioners in clinical decision making and in achieving better clinical outcomes. Full article
