Computers, Volume 11, Issue 5 (May 2022) – 26 articles

Cover Story: Using brain–computer interfaces (BCIs), brain activity signals can be acquired, preprocessed, and classified, and then utilized in various fields of application such as prosthetics, robot control, or even entertainment. The extracted brain features and their classification method play crucial roles in the system’s ability to obtain and retain high robustness and efficiency. In this paper, we identify the most robustly effective approaches in the field of motor imagery (MI) BCIs. The results show that wavelet transforms combined with deep learning achieved the highest scores in terms of robustness and performance.
15 pages, 4109 KiB  
Article
How Machine Learning Classification Accuracy Changes in a Happiness Dataset with Different Demographic Groups
by Colm Sweeney, Edel Ennis, Maurice Mulvenna, Raymond Bond and Siobhan O’Neill
Computers 2022, 11(5), 83; https://doi.org/10.3390/computers11050083 - 23 May 2022
Cited by 10 | Viewed by 2745
Abstract
This study aims to explore how machine learning classification accuracy changes with different demographic groups. The HappyDB is a dataset that contains over 100,000 happy statements, incorporating demographic information that includes marital status, gender, age, and parenthood status. Using the happiness category field, we test different types of machine learning classifiers to predict what category of happiness the statements belong to, for example, whether they indicate happiness relating to achievement or affection. The tests were initially conducted with three distinct classifiers, and the best performing model was the convolutional neural network (CNN) model, a deep learning algorithm, which achieved an F1 score of 0.897 when used with the complete dataset. This model was then used as the main classifier to further analyze the results and to establish any variation in performance when tested on different demographic groups. We analyzed the results to see if classification accuracy was improved for different demographic groups, and found that the accuracy of prediction within this dataset declined with age, with the exception of the single-parent subgroup. The results also showed improved performance for the married and parent subgroups, and lower performance for the non-parent and unmarried subgroups, even when investigating a balanced sample.
(This article belongs to the Special Issue Advances of Machine and Deep Learning in the Health Domain)
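A minimal sketch of the per-subgroup evaluation idea, using scikit-learn with toy statements; a linear classifier stands in for the paper's CNN, and the subgroup tags and data are invented for illustration.

```python
# Toy sketch: how classification F1 can be measured per demographic subgroup.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

statements = ["Finished my thesis today", "My daughter hugged me",
              "Got a promotion at work", "Dinner with my husband"]
labels = ["achievement", "affection", "achievement", "affection"]
groups = ["young", "parent", "young", "married"]  # assumed subgroup tags

vec = TfidfVectorizer()
X = vec.fit_transform(statements)
clf = LogisticRegression().fit(X, labels)  # stand-in for the paper's CNN

# Per-subgroup macro F1 (evaluated on training data here, since this is a toy)
for g in set(groups):
    idx = [i for i, s in enumerate(groups) if s == g]
    preds = clf.predict(X[idx])
    print(g, f1_score([labels[i] for i in idx], preds, average="macro"))
```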

21 pages, 2268 KiB  
Article
Botanical Leaf Disease Detection and Classification Using Convolutional Neural Network: A Hybrid Metaheuristic Enabled Approach
by Madhumini Mohapatra, Ami Kumar Parida, Pradeep Kumar Mallick, Mikhail Zymbler and Sachin Kumar
Computers 2022, 11(5), 82; https://doi.org/10.3390/computers11050082 - 20 May 2022
Cited by 13 | Viewed by 2891
Abstract
Botanical plants suffer from several types of diseases that must be identified early to improve the production of fruits and vegetables. Mango fruit is one of the most popular and desirable fruits worldwide due to its taste and richness in vitamins. However, plant diseases also affect these plants’ production and quality. This study proposes a convolutional neural network (CNN)-based metaheuristic approach for disease diagnosis and detection. The proposed approach involves preprocessing, image segmentation, feature extraction, and disease classification. First, the image of mango leaves is enhanced using histogram equalization and contrast enhancement. Then, a geometric mean-based neutrosophic with a fuzzy c-means method is used for segmentation. Next, the essential features are retrieved from the segmented images, including the Upgraded Local Binary Pattern (ULBP), color, and pixel features. Finally, these features are fed into the disease detection phase, which is modeled using a CNN (a deep learning model). Furthermore, to enhance the classification accuracy of the CNN, the weights are fine-tuned using a new hybrid optimization model referred to as the Cat Swarm Updated Black Widow Model (CSUBW), developed by hybridizing the standard Cat Swarm Optimization Algorithm (CSO) and the Black Widow Optimization Algorithm (BWO). Finally, a performance evaluation is carried out to validate the efficiency of the projected model.
(This article belongs to the Special Issue Human Understandable Artificial Intelligence)
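The CSUBW hybrid itself is not reproduced here; the sketch below only shows the general shape of population-based weight fine-tuning it belongs to, with a placeholder objective standing in for the CNN's validation error.

```python
# Generic population-based random search as a stand-in for a swarm-style
# weight fine-tuner (the actual CSUBW update rules differ).
import numpy as np

rng = np.random.default_rng(0)

def loss(w):                       # hypothetical objective (assumption):
    return np.sum((w - 0.5) ** 2)  # stands in for CNN validation error

dim, pop_size, iters = 10, 20, 100
pop = rng.normal(size=(pop_size, dim))       # candidate weight vectors
for _ in range(iters):
    fitness = np.array([loss(w) for w in pop])
    best = pop[fitness.argmin()]
    # move candidates toward the best one, plus exploration noise,
    # loosely mimicking swarm-style position updates
    pop = pop + 0.5 * (best - pop) + 0.1 * rng.normal(size=pop.shape)

print("best loss:", loss(best))
```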

13 pages, 3292 KiB  
Communication
Energy Efficiency of IoT Networks for Environmental Parameters of Bulgarian Cities
by Zlatin Zlatev, Tsvetelina Georgieva, Apostol Todorov and Vanya Stoykova
Computers 2022, 11(5), 81; https://doi.org/10.3390/computers11050081 - 17 May 2022
Cited by 5 | Viewed by 1976
Abstract
Building modern Internet of Things (IoT) systems is associated with a number of challenges. One of the most significant among them is the need for wireless technology to provide connectivity between the individual components of such systems. In the larger cities of Bulgaria, measures have been taken to ensure low levels of harmful emissions, reduce noise levels, and ensure comfort in urban environments. LoRa technology shows advantages in transmission distance and low energy consumption compared to other technologies, which is why it was chosen for the design of wireless sensor networks (WSNs) for six cities in Bulgaria. These networks have the potential to be used in IoT configurations. Appropriate modules and devices for building WSNs for cities in Bulgaria have been selected. It has been found that a greater number of nodes in the WSN leads to an increase in the average power consumed in the network. On the other hand, depending on the location of these nodes, the energy consumed may decrease. The performance of wireless sensor networks can be optimized by applying appropriate routing protocols, as proposed in the available literature. The presented methodology for energy efficiency analysis of WSNs can be used in the design of wireless sensor networks for measuring environmental parameters, with the possibility of application in the IoT.

12 pages, 248 KiB  
Article
Comparison of Statistical and Machine-Learning Models on Road Traffic Accident Severity Classification
by Paulo Infante, Gonçalo Jacinto, Anabela Afonso, Leonor Rego, Vitor Nogueira, Paulo Quaresma, José Saias, Daniel Santos, Pedro Nogueira, Marcelo Silva, Rosalina Pisco Costa, Patrícia Gois and Paulo Rebelo Manuel
Computers 2022, 11(5), 80; https://doi.org/10.3390/computers11050080 - 16 May 2022
Cited by 10 | Viewed by 2737
Abstract
Portugal has the sixth highest road fatality rate among European Union members. This is a problem with several dimensions and serious consequences for people’s lives. This study analyses daily data from police and government authorities on road traffic accidents that occurred between 2016 and 2019 in a district of Portugal. This paper looks for the determinants that contribute to the existence of victims in road traffic accidents, as well as the determinants of fatalities and/or serious injuries in accidents with victims. We use logistic regression models, and the results are compared with machine-learning model results. For the severity model, where the response variable indicates whether the traffic accident resulted in property damage only or in casualties, we used a large sample with a small imbalance. For the serious injuries model, where the response variable indicates whether or not there were victims with serious injuries and/or fatalities in a traffic accident with victims, we used a small sample with very imbalanced data. Empirical analysis supports the conclusion that, with a small sample of imbalanced data, machine-learning models generally do not perform better than statistical models; however, they perform similarly when the sample is large and has a small imbalance.
(This article belongs to the Special Issue Feature Paper in Computers)
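A hedged sketch of the statistical-versus-machine-learning comparison on imbalanced data, using synthetic features in place of the non-public accident records.

```python
# Compare a statistical model with an ML model on ~95/5 imbalanced data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# 5% positives mimic a rare "serious injury" response (synthetic data)
X, y = make_classification(n_samples=2000, weights=[0.95], random_state=1)
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=1)

for model in (LogisticRegression(class_weight="balanced", max_iter=1000),
              RandomForestClassifier(class_weight="balanced", random_state=1)):
    pred = model.fit(Xtr, ytr).predict(Xte)
    print(type(model).__name__, f1_score(yte, pred))
```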

17 pages, 1080 KiB  
Article
Design of a Cattle-Health-Monitoring System Using Microservices and IoT Devices
by Isak Shabani, Tonit Biba and Betim Çiço
Computers 2022, 11(5), 79; https://doi.org/10.3390/computers11050079 - 12 May 2022
Cited by 10 | Viewed by 5451
Abstract
This article proposes a new concept of microservice-based architecture for the future of distributed systems. This architecture is a bridge between Internet-of-Things (IoT) devices and the applications used to monitor the physical and health parameters of cattle in real time. Within this architecture, machine-learning algorithms were used to predict cattle health and inform farmers about the health of each head of cattle in real time. Six microservices were proposed that had the tasks of receiving, processing, and sending data upon request. In addition, one of the six microservices was developed for the prediction of cattle health using the LightGBM machine-learning algorithm. Through this algorithm, it is possible to determine the current percentage value of the health of each head of cattle, based on the parameters sent from the mobile node. If health problems are identified in the cattle, the architecture notifies the farmer in real time. Based on the proposed solution, farmers will have 24 h online access to monitor the following parameters for each head of cattle: body temperature, heart rate, humidity, and position.
(This article belongs to the Special Issue Edge Computing for the IoT)
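A sketch of the core of such a health-prediction microservice, assuming LightGBM as named in the abstract; the sensor features, threshold, and training data below are synthetic stand-ins.

```python
# Train a toy LightGBM health classifier and score one incoming packet.
import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)
# columns: body temperature (deg C), heart rate (bpm), humidity (%)
X = rng.normal([38.5, 60.0, 70.0], [0.5, 8.0, 10.0], size=(500, 3))
y = (X[:, 0] < 39.2).astype(int)   # toy label: 1 = healthy (assumption)

model = lgb.LGBMClassifier(n_estimators=50).fit(X, y)

reading = [[39.0, 65.0, 72.0]]     # one packet from the mobile node
health_pct = 100 * model.predict_proba(reading)[0, 1]
print(f"estimated health: {health_pct:.1f}%")
```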

20 pages, 5245 KiB  
Article
A Real Time Arabic Sign Language Alphabets (ArSLA) Recognition Model Using Deep Learning Architecture
by Zaran Alsaadi, Easa Alshamani, Mohammed Alrehaili, Abdulmajeed Ayesh D. Alrashdi, Saleh Albelwi and Abdelrahman Osman Elfaki
Computers 2022, 11(5), 78; https://doi.org/10.3390/computers11050078 - 10 May 2022
Cited by 13 | Viewed by 5159
Abstract
Currently, treating sign language issues and producing high-quality solutions has attracted researchers’ and practitioners’ attention due to the considerable prevalence of hearing disabilities around the world. The literature shows that Arabic Sign Language (ArSL) is one of the most popular sign languages due to its rate of use. ArSL is categorized into two groups: the first group is ArSL, where words are represented by signs, i.e., pictures; the second group is ArSL alphabetic (ArSLA), where each Arabic letter is represented by a sign. This paper introduces a real time ArSLA recognition model using a deep learning architecture. As a methodology, the following steps were taken. First, a trusted scientific ArSLA dataset was located. Second, candidate deep learning architectures were chosen by investigating related works. Third, an experiment was conducted to test the selected deep learning architectures. Fourth, the deep learning architecture was selected based on the extracted results. Finally, a real time recognition system was developed. The results of the experiment show that the AlexNet architecture is the best due to its high accuracy rate. The model was developed based on the AlexNet architecture and successfully tested in real time with a 94.81% accuracy rate.
(This article belongs to the Special Issue Human Understandable Artificial Intelligence)
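A sketch of fine-tuning AlexNet for ArSLA classification with torchvision; the class count of 32 is an assumption, and dataset loading is omitted.

```python
# Adapt ImageNet-pretrained AlexNet to a new alphabet-classification head.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 32                                    # assumed ArSLA class count
model = models.alexnet(weights="IMAGENET1K_V1")
model.classifier[6] = nn.Linear(4096, num_classes)  # replace the final layer

# forward pass on a dummy 224x224 RGB frame, as in real-time inference
x = torch.randn(1, 3, 224, 224)
pred = model(x).argmax(dim=1)
print(pred.item())
```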

13 pages, 1455 KiB  
Article
QoS-Aware Scheduling Algorithm Enabling Video Services in LTE Networks
by Amal Abulgasim Masli, Falah Y. H. Ahmed and Ali Mohamed Mansoor
Computers 2022, 11(5), 77; https://doi.org/10.3390/computers11050077 - 09 May 2022
Cited by 7 | Viewed by 2084
Abstract
The Long-Term Evolution (LTE) system was a result of the 3rd-Generation Partnership Project (3GPP) to assure Quality-of-Service (QoS) performance pertaining to non-real-time and real-time services. An effective design with regards to resource allocation scheduling involves core challenges to realising a satisfactory service in an LTE system, particularly with the growing demand for network applications. The continuous rise in the number of network users has impacted the performance of networks, which also creates resource allocation issues when performing downlink scheduling in an LTE network. This paper reviews the optimisation of packet scheduling performance in the LTE downlink system and introduces a new downlink-scheduling algorithm for serving video applications through LTE cellular networks, which also accounts for QoS needs and channel conditions. The recommended algorithms’ performances were compared with regards to delay, throughput, packet loss ratio (PLR), and fairness by utilising the LTE-SIM simulator for video flows. On the basis of the outcomes obtained, the algorithms recommended in this research work considerably enhance the efficacy of video streaming compared with well-known LTE algorithms.

13 pages, 2825 KiB  
Article
Co-Design of Multicore Hardware and Multithreaded Software for Thread Performance Assessment on an FPGA
by George K. Adam
Computers 2022, 11(5), 76; https://doi.org/10.3390/computers11050076 - 09 May 2022
Cited by 3 | Viewed by 2848
Abstract
Multicore and multithreaded architectures increase the performance of computing systems. The increase in cores and threads, however, raises further issues in the efficiency achieved in terms of speedup and parallelization, particularly for the real-time requirements of Internet of things (IoT)-embedded applications. This research investigates the efficiency of a 32-core field-programmable gate array (FPGA) architecture, with memory management unit (MMU) and real-time operating system (OS) support, to exploit the thread level parallelism (TLP) of tasks running in parallel as threads on multiple cores. The research outcomes confirm the feasibility of the proposed approach in the efficient execution of recursive sorting algorithms, as well as their evaluation in terms of speedup and parallelization. The results reveal that parallel implementation of the prevalent merge sort and quicksort algorithms on this platform is more efficient than their sequential execution. The increase in the speedup is proportional to the core scaling, reaching a maximum of 53% for the configuration with the highest number of cores and threads. However, the maximum magnitude of the parallelization (66%) was found to be bounded to a low number of two cores and four threads. A further increase in the number of cores and threads did not add to the improvement of the parallelism.
(This article belongs to the Special Issue Real-Time Embedded Systems in IoT)
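The abstract reports speedup and parallelization percentages; for orientation, a standard (though not necessarily the paper's) way to relate measured speedup on n cores to the parallelizable fraction p of a workload is Amdahl's law and its inversion:

$$S(n) \;=\; \frac{T_1}{T_n} \;=\; \frac{1}{(1-p)+p/n}, \qquad p \;=\; \frac{n}{n-1}\left(1-\frac{1}{S(n)}\right).$$

Substituting a measured S(n) into the right-hand expression gives the implied parallel fraction, which is one way to quantify why parallelism saturated beyond two cores and four threads.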

17 pages, 4675 KiB  
Article
Performance Evaluation of Massively Parallel Systems Using SPEC OMP Suite
by Dheya Mustafa
Computers 2022, 11(5), 75; https://doi.org/10.3390/computers11050075 - 05 May 2022
Cited by 1 | Viewed by 2884
Abstract
Performance analysis plays an essential role in achieving a scalable performance of applications on massively parallel supercomputers equipped with thousands of processors. This paper is an empirical investigation to study, in depth, the performance of two of the most common High-Performance Computing architectures in the world. IBM has developed three generations of Blue Gene supercomputers—Blue Gene/L, P, and Q—that use, at a large scale, low-power processors to achieve high performance. Better CPU core efficiency has been empowered by a higher level of integration to gain more parallelism per processing element. On the other hand, the Intel Xeon Phi coprocessor, armed with 61 on-chip x86 cores, provides high theoretical peak performance, as well as software development flexibility with existing high-level programming tools. We present an extensive evaluation study of the performance peaks and scalability of these two modern architectures using SPEC OMP benchmarks.
(This article belongs to the Topic Innovation of Applied System)

17 pages, 1115 KiB  
Article
Algebraic Zero Error Training Method for Neural Networks Achieving Least Upper Bounds on Neurons and Layers
by Juraj Kacur
Computers 2022, 11(5), 74; https://doi.org/10.3390/computers11050074 - 04 May 2022
Viewed by 1653
Abstract
In the domain of artificial neural networks, it is important to know what their representation, classification and generalization capabilities are. There is also a need for time- and resource-efficient training algorithms. Here, a new zero-error training method is derived for digital computers and single hidden layer networks. This method also attains the least upper bound on the number of hidden neurons. The bound states that if there are N input vectors expressed as rational numbers, a network having N − 1 neurons in the hidden layer and M neurons at the output represents a bounded function F: ℝ^D → ℝ^M for all input vectors. Such a network has massively shared weights calculated by 1 + M regular systems of linear equations. Compared to similar approaches, this new method achieves a theoretical least upper bound, is fast, robust, adapted to floating-point data, and uses few free parameters. This is documented by theoretical analyses and comparative tests. In theory, this method provides a new constructional proof of the least upper bound on the number of hidden neurons, extends the classes of supported activation functions, and relaxes conditions for mapping functions. Practically, it is a non-iterative zero-error training algorithm providing a minimum number of neurons and layers.
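The paper's exact construction is not reproduced here; as a generic illustration of zero-error training by linear algebra, a single hidden layer of N − 1 neurons plus an output bias gives, for every output neuron m, a square system over the N training pairs (x_i, t_i):

$$\mathbf{H}\,\boldsymbol{\beta}_m=\mathbf{t}_m,\qquad \mathbf{H}=\begin{pmatrix}h_1(\mathbf{x}_1)&\cdots&h_{N-1}(\mathbf{x}_1)&1\\ \vdots& &\vdots&\vdots\\ h_1(\mathbf{x}_N)&\cdots&h_{N-1}(\mathbf{x}_N)&1\end{pmatrix}\in\mathbb{R}^{N\times N},\qquad m=1,\dots,M,$$

where h_j(x_i) is the output of hidden neuron j on input x_i and β_m holds the output weights and bias of output neuron m. If H is regular, β_m = H⁻¹t_m fits all N targets exactly, i.e., one regular system per output neuron; the abstract's count of 1 + M systems presumably adds one more for the shared hidden-layer weights.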

21 pages, 2409 KiB  
Article
A Highly Adaptive Oversampling Approach to Address the Issue of Data Imbalance
by Szilvia Szeghalmy and Attila Fazekas
Computers 2022, 11(5), 73; https://doi.org/10.3390/computers11050073 - 04 May 2022
Cited by 2 | Viewed by 1925
Abstract
Data imbalance is a serious problem in machine learning that can be alleviated at the data level by balancing the class distribution with sampling. In the last decade, several sampling methods have been published to address the shortcomings of the initial ones, such as noise sensitivity and incorrect neighbor selection. Based on the review of the literature, it has become clear to us that the algorithms achieve varying performance on different data sets. In this paper, we present a new oversampler that has been developed based on the key steps and sampling strategies identified by analyzing dozens of existing methods and that can be fitted to various data sets through an optimization process. Experiments were performed on a number of data sets, which show that the proposed method had a similar or better effect on the performance of SVM, DTree, kNN and MLP classifiers compared with other well-known samplers found in the literature. The results were also confirmed by statistical tests.
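A minimal sketch of SMOTE-style interpolation, the generic mechanism behind many of the surveyed samplers (not the proposed optimized method itself).

```python
# Generate synthetic minority samples by interpolating toward neighbors.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
minority = rng.normal(size=(20, 2))        # toy minority-class points

nn = NearestNeighbors(n_neighbors=4).fit(minority)
_, idx = nn.kneighbors(minority)           # idx[:, 0] is the point itself

synthetic = []
for _ in range(30):
    i = rng.integers(len(minority))        # pick a minority sample
    j = rng.choice(idx[i, 1:])             # one of its 3 nearest neighbors
    gap = rng.random()                     # random point on the segment
    synthetic.append(minority[i] + gap * (minority[j] - minority[i]))
synthetic = np.asarray(synthetic)
print(synthetic.shape)
```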

13 pages, 1594 KiB  
Article
Emotion Recognition in Human–Robot Interaction Using the NAO Robot
by Iro Athina Valagkouti, Christos Troussas, Akrivi Krouska, Michalis Feidakis and Cleo Sgouropoulou
Computers 2022, 11(5), 72; https://doi.org/10.3390/computers11050072 - 02 May 2022
Cited by 10 | Viewed by 3887
Abstract
Affective computing can be implemented across many fields in order to provide a unique experience by tailoring services and products according to each person’s needs and interests. More specifically, digital learning and robotics in education can benefit from affective computing with a redesign of the curriculum’s contents based on students’ emotions during teaching. This key feature is observed during traditional learning methods, and robot tutors are adapting to it gradually. Following this trend, this work focused on creating a game that aims to raise environmental awareness by using the social robot NAO as a conversation agent. This quiz-like game supports emotion recognition with DeepFace, allowing users to review their answers if a negative emotion is detected. A version of this game was tested during real-life circumstances and produced favorable results, both for emotion analysis and overall user enjoyment.
(This article belongs to the Special Issue Interactive Technology and Smart Education)
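A sketch of the emotion-detection step, assuming the open-source DeepFace library named in the abstract; the snapshot filename is hypothetical, and the return format varies slightly across library versions.

```python
# Detect the dominant emotion in a captured frame and decide on a retry.
from deepface import DeepFace

result = DeepFace.analyze(img_path="answer_frame.jpg", actions=["emotion"])
dominant = result[0]["dominant_emotion"]   # recent versions return a list

# the game could then offer a review when a negative emotion is detected
if dominant in {"angry", "sad", "fear", "disgust"}:
    print("Negative emotion detected: offer the user a chance to review.")
```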

13 pages, 4217 KiB  
Article
Traffic Request Generation through a Variational Auto Encoder Approach
by Stefano Chiesa and Sergio Taraglio
Computers 2022, 11(5), 71; https://doi.org/10.3390/computers11050071 - 29 Apr 2022
Cited by 1 | Viewed by 1676
Abstract
Traffic and transportation forecasting is a key issue in urban planning, aimed at providing a greener and more sustainable environment to residents. Residents’ privacy is a second key issue, one that calls for synthetic travel data. A possible solution is offered by generative models. Here, a variational autoencoder architecture has been trained on a floating car dataset in order to grasp the statistical features of the traffic demand in the city of Rome. The architecture is based on multilayer dense neural networks for the encoding and decoding parts. A brief analysis of parameter influence is conducted. The generated trajectories are compared with those in the dataset. The resulting reconstructed synthetic data are employed to compute the traffic fluxes and the geographic distribution of parked cars. Further work directions are provided.
(This article belongs to the Special Issue Selected Papers from ICCSA 2021)
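A minimal VAE sketch in PyTorch over four-feature trip descriptors (an assumed data layout, e.g. origin and destination coordinates); the paper's exact encoder and decoder sizes may differ.

```python
import torch
import torch.nn as nn

class TripVAE(nn.Module):
    def __init__(self, d_in=4, d_lat=2):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, 32), nn.ReLU())
        self.mu = nn.Linear(32, d_lat)
        self.logvar = nn.Linear(32, d_lat)
        self.dec = nn.Sequential(nn.Linear(d_lat, 32), nn.ReLU(),
                                 nn.Linear(32, d_in))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.dec(z), mu, logvar

def vae_loss(x, x_hat, mu, logvar):
    rec = ((x - x_hat) ** 2).sum()                              # reconstruction
    kld = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum()  # KL term
    return rec + kld

# after training, synthetic trips are drawn by decoding latent samples
model = TripVAE()
fake_trips = model.dec(torch.randn(5, 2)).detach()
```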

28 pages, 18492 KiB  
Article
The Influence of Genetic Algorithms on Learning Possibilities of Artificial Neural Networks
by Martin Kotyrba, Eva Volna, Hashim Habiballa and Josef Czyz
Computers 2022, 11(5), 70; https://doi.org/10.3390/computers11050070 - 29 Apr 2022
Cited by 6 | Viewed by 3156
Abstract
The presented research study focuses on demonstrating the learning ability of a neural network using a genetic algorithm and on finding the most suitable neural network topology for solving a demonstration problem. The network topology significantly depends on the required level of generalization. A more robust topology of a neural network is usually more suited to the particular details of the training set, and it loses the ability to abstract general information. Therefore, we often design the network topology by taking into account the required generalization, rather than theoretical calculations. The next part of the article investigates whether a modification of the parameters of the genetic algorithm can optimize and accelerate the neural network learning process. The function of the neural network and its learning by using the genetic algorithm is demonstrated in a program for solving a computer game. The research focuses mainly on assessing the influence of changes in the neural networks’ topology and of changes in the genetic algorithm’s parameters on the achieved results and the speed of neural network training. The achieved results are statistically presented and compared depending on the network topology and changes in the learning algorithm.
(This article belongs to the Special Issue Human Understandable Artificial Intelligence)

19 pages, 2508 KiB  
Article
Performance Analysis of an Adaptive Rate Scheme for QoE-Assured Mobile VR Video Streaming
by Thi My Chinh Chu and Hans-Jürgen Zepernick
Computers 2022, 11(5), 69; https://doi.org/10.3390/computers11050069 - 29 Apr 2022
Cited by 6 | Viewed by 2064
Abstract
The emerging 5G mobile networks are essential enablers for mobile virtual reality (VR) video streaming applications assuring high quality of experience (QoE) at the end-user. In addition, mobile edge computing brings computational resources closer to the user equipment (UE), which allows offloading computationally intensive processing. In this paper, we consider a network architecture for mobile VR video streaming applications consisting of a server that holds the VR video content, a mobile edge virtualization with prefetching (MVP) unit that handles the VR video packets, and a head-mounted display along with a buffer, which together serve as the UE. Several modulation and coding schemes with different rates are provided by the MVP unit to adaptively cope with the varying wireless link conditions to the UE and the state of the UE buffer. The UE buffer caches VR video packets as needed to compensate for the adaptive rates. A performance analysis is conducted in terms of blocking probability, throughput, queueing delay, and average packet error rate. To capture the effect of fading severity, the analytical expressions for these performance metrics are derived for Nakagami-m fading on the wireless link from the MVP unit to the UE. Numerical results show that the proposed system meets the network requirements needed to assure the QoE levels of different mobile VR video streaming applications.
(This article belongs to the Special Issue Selected Papers from Computer Graphics & Visual Computing (CGVC 2021))
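For reference, the metric derivations assume Nakagami-m fading on the MVP-to-UE link; the standard Nakagami-m envelope probability density function is

$$f_X(x) \;=\; \frac{2\,m^m}{\Gamma(m)\,\Omega^m}\,x^{2m-1}\exp\!\left(-\frac{m\,x^2}{\Omega}\right), \qquad x \ge 0,\; m \ge \tfrac{1}{2},$$

where Ω is the mean-square envelope value and m is the fading-severity parameter (m = 1 recovers Rayleigh fading).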

21 pages, 2607 KiB  
Article
Active Learning Activities in a Collaborative Teacher Setting in Colours, Design and Visualisation
by Jonathan C. Roberts
Computers 2022, 11(5), 68; https://doi.org/10.3390/computers11050068 - 29 Apr 2022
Cited by 2 | Viewed by 2483
Abstract
We present our experience with developing active learning activities in a collaborative teacher setting, along with guidelines for teachers to create them. We focus on developing learner skills in colours, design, and visualisation. Typically, teachers create content before considering learning tasks. In contrast, we develop them concurrently. In addition, teaching in a collaborative setting (where many teachers deliver or produce content) brings its own set of challenges. We developed and used a set of processes to help guide teachers to deliver appropriate learning activities within a theme that appear similarly structured and can be categorised and searched in a consistent way. Our presentation and experience of using these guidelines can act as a blueprint for others to follow and apply. We describe many of the learning activities we created and discuss how we delivered them in a bilingual (English, Welsh) setting. Delivering the learning activities within a theme (in our case, colours) means that it is possible to integrate a range of learning outcomes. Lessons can focus on, for instance, skill development in mathematics, physics, computer graphics, art, design, computer programming, and critical thought. Furthermore, colour is a topic that can motivate: it sparks curiosity and creativity, and people can learn to create their own colourful pictures, while learning and developing computing skills.
(This article belongs to the Special Issue Selected Papers from Computer Graphics & Visual Computing (CGVC 2021))

18 pages, 1398 KiB  
Article
IoTwins: Toward Implementation of Distributed Digital Twins in Industry 4.0 Settings
by Alessandro Costantini, Giuseppe Di Modica, Jean Christian Ahouangonou, Doina Cristina Duma, Barbara Martelli, Matteo Galletti, Marica Antonacci, Daniel Nehls, Paolo Bellavista, Cedric Delamarre and Daniele Cesini
Computers 2022, 11(5), 67; https://doi.org/10.3390/computers11050067 - 28 Apr 2022
Cited by 24 | Viewed by 3546
Abstract
While the digital twins paradigm has attracted the interest of several research communities over the past twenty years, it has also gained ground recently in industrial environments, where mature technologies such as cloud, edge and IoT promise to enable the cost-effective implementation of digital twins. In the industrial manufacturing field, a digital model refers to a virtual representation of a physical product or process that integrates data taken from various sources, such as application program interface (API) data, historical data, embedded sensor data and open data, and that is capable of providing manufacturers with unprecedented insights into the product’s expected performance or the defects that may cause malfunctions. The EU-funded IoTwins project aims to build a solid platform that manufacturers can access to develop hybrid digital twins (DTs) of their assets, deploy them as close to the data origin as possible (on IoT gateways or on edge nodes) and take advantage of cloud-based resources to off-load intensive computational tasks such as, e.g., big data analytics and machine learning (ML) model training. In this paper, we present the main research goals of the IoTwins project and discuss its reference architecture, platform functionalities and building components. Finally, we discuss an industry-related use case that showcases how manufacturers can leverage the potential of the IoTwins platform to develop and execute distributed DTs for predictive-maintenance purposes.
(This article belongs to the Special Issue Selected Papers from ICCSA 2021)

34 pages, 7068 KiB  
Article
Metaheuristic Extreme Learning Machine for Improving Performance of Electric Energy Demand Forecasting
by Sarunyoo Boriratrit, Chitchai Srithapon, Pradit Fuangfoo and Rongrit Chatthaworn
Computers 2022, 11(5), 66; https://doi.org/10.3390/computers11050066 - 27 Apr 2022
Cited by 7 | Viewed by 2284
Abstract
Electric energy demand forecasting is very important for electric utilities to procure and supply electric energy for consumers sufficiently, safely, reliably, and continuously. Consequently, the processing time and accuracy of the forecast system are essential to consider when applying it in real power system operations. Nowadays, the Extreme Learning Machine (ELM) is significant for forecasting as it provides acceptable forecasting accuracy and consumes less computation time when compared with state-of-the-art forecasting models. However, the results of electric energy demand forecasting from the ELM were unstable, and its accuracy can be increased by reducing the overfitting of the ELM model. In this research, metaheuristic optimization combined with the ELM is proposed to increase accuracy and reduce the causes of overfitting in three forecasting models: the Jellyfish Search Extreme Learning Machine (JS-ELM), the Harris Hawk Extreme Learning Machine (HH-ELM), and the Flower Pollination Extreme Learning Machine (FP-ELM). Actual electric energy demand datasets in Thailand were collected from 2018 to 2020 and used to test and compare the performance of the proposed and state-of-the-art forecasting models. The overall results show that the JS-ELM provides the best minimum root mean square error compared with the state-of-the-art forecasting models. Moreover, the JS-ELM consumes an appropriate processing time in this experiment.
(This article belongs to the Special Issue Computing, Electrical and Industrial Systems 2022)
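For reference, the core of a plain (non-metaheuristic) ELM in NumPy: hidden parameters are random, and output weights are solved in closed form. The JS-ELM, HH-ELM, and FP-ELM variants replace the purely random hidden parameters with metaheuristically tuned ones; toy regression data is used below.

```python
# Minimal Extreme Learning Machine: random hidden layer + pseudo-inverse.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 3))          # e.g. lagged demand features
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

n_hidden = 50
W = rng.normal(size=(3, n_hidden))             # random input weights
b = rng.normal(size=n_hidden)                  # random biases
H = np.tanh(X @ W + b)                         # hidden-layer outputs
beta = np.linalg.pinv(H) @ y                   # closed-form output weights

y_hat = np.tanh(X @ W + b) @ beta
print("train RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))
```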

17 pages, 831 KiB  
Article
Comparison of REST and GraphQL Interfaces for OPC UA
by Riku Ala-Laurinaho, Joel Mattila, Juuso Autiosalo, Jani Hietala, Heikki Laaki and Kari Tammi
Computers 2022, 11(5), 65; https://doi.org/10.3390/computers11050065 - 27 Apr 2022
Cited by 1 | Viewed by 2665
Abstract
Industry 4.0 and cyber-physical systems require easy access to shop-floor data, which allows the monitoring and optimization of the manufacturing process. To achieve this, several papers have proposed various ways to make OPC UA (Open Platform Communications Unified Architecture), a standard protocol for industrial communication, RESTful (Representational State Transfer). As an alternative to REST, GraphQL has recently gained popularity amongst web developers. This paper compares the characteristics of REST and GraphQL interfaces for OPC UA and conducts measurements on reading and writing data. The measurements show that GraphQL offers better performance than REST when multiple values are read or written, whereas REST is faster with single values. However, using OPC UA directly outperforms both the REST and GraphQL interfaces. In conclusion, this paper recommends using a GraphQL interface alongside an OPC UA server in smart factories to simultaneously yield easy data access, the best performance, and maximum interoperability.
(This article belongs to the Section Internet of Things (IoT) and Industrial IoT)
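A sketch of the read-access difference, with hypothetical gateway URLs and node identifiers; the actual interfaces measured in the paper may differ.

```python
# REST needs one round trip per value; GraphQL batches them in one request.
import requests

# REST: one GET per OPC UA node (identifiers shown unescaped for readability)
values = {}
for node in ("ns=2;s=Temperature", "ns=2;s=Pressure", "ns=2;s=Speed"):
    r = requests.get(f"http://gateway.local/api/nodes/{node}/value")
    values[node] = r.json()

# GraphQL: a single POST fetching all three values at once
query = """
{
  temperature: node(id: "ns=2;s=Temperature") { value }
  pressure:    node(id: "ns=2;s=Pressure")    { value }
  speed:       node(id: "ns=2;s=Speed")       { value }
}
"""
r = requests.post("http://gateway.local/graphql", json={"query": query})
print(r.json())
```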

18 pages, 7844 KiB  
Article
A Transfer-Learning-Based Novel Convolution Neural Network for Melanoma Classification
by Mohammad Naved Qureshi, Mohammad Sarosh Umar and Sana Shahab
Computers 2022, 11(5), 64; https://doi.org/10.3390/computers11050064 - 26 Apr 2022
Cited by 7 | Viewed by 2519
Abstract
Skin cancer is one of the most common human malignancies, which is generally diagnosed by screening and dermoscopic analysis followed by histopathological assessment and biopsy. Deep-learning-based methods have been proposed for skin lesion classification in the last few years. The major drawback of all these methods is that they require a considerable amount of training data, which poses a challenge for classifying medical images, as limited datasets are available. The problem can be tackled through transfer learning, in which a model pre-trained on a huge dataset is utilized and fine-tuned as per the problem domain. This paper proposes a new convolutional neural network architecture to classify skin lesions into two classes: benign and malignant. The Google Xception model is used as a base model, on top of which new layers are added and then fine-tuned. The model is optimized using various optimizers to achieve the maximum possible performance gain for the classifier output. On ISIC archive data, the model achieved the highest training accuracy of 99.78% using the Adam and LazyAdam optimizers, and validation and test accuracies of 97.94% and 96.8% using RMSProp; on the HAM10000 dataset, utilizing the RMSProp optimizer, the model achieved the highest training and prediction accuracies of 98.81% and 91.54%, respectively, when compared to other models.
(This article belongs to the Special Issue Advances of Machine and Deep Learning in the Health Domain)
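A sketch of this transfer-learning setup in tf.keras; the head layers added after the Xception base are illustrative assumptions, not the paper's exact architecture.

```python
# Freeze a pre-trained Xception base and train a new binary-classification head.
import tensorflow as tf

base = tf.keras.applications.Xception(weights="imagenet", include_top=False,
                                      input_shape=(299, 299, 3), pooling="avg")
base.trainable = False                      # freeze pre-trained weights first

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # benign vs. malignant
])
model.compile(optimizer=tf.keras.optimizers.RMSprop(1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets omitted
```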

16 pages, 1450 KiB  
Article
Window-Based Multi-Objective Optimization for Dynamic Patient Scheduling with Problem-Specific Operators
by Ali Nader Mahmed and M. N. M. Kahar
Computers 2022, 11(5), 63; https://doi.org/10.3390/computers11050063 - 25 Apr 2022
Cited by 2 | Viewed by 1708
Abstract
The problem of patient admission scheduling (PAS) is a nondeterministic polynomial time (NP)-hard combinatorial optimization problem with numerous constraints. Researchers have divided the constraints of this problem into hard (i.e., feasible solution) and soft constraints (i.e., quality solution). The majority of research has dealt with PAS using integer linear programming (ILP) and single objective meta-heuristic searching-based approaches. ILP-based approaches carry a high computational demand and the risk of non-feasibility for a large dataset. In a single objective optimization, there is a risk of local minima due to the non-convexity of the problem. In this article, we present the first Pareto front-based optimization for PAS using a set of meta-heuristic approaches. We selected four multi-objective optimization methods, and problem-specific operators were developed for each of them. Next, we compared them with single objective optimization approaches, namely, simulated annealing and particle swarm optimization. In addition, this article also deals with the dynamic aspect of this problem by comparing historical window-based decomposition with day decomposition, as previously proposed in the literature. An evaluation of the models proposed in the article and a comparison with traditional models reveal the superiority of our proposed multi-objective optimization with window incorporation in terms of optimality.
(This article belongs to the Special Issue Advances of Machine and Deep Learning in the Health Domain)
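A small helper illustrating the Pareto-front idea for two minimized objectives; pairing hard-constraint violations with a soft-constraint penalty is an assumed example, not the paper's exact formulation.

```python
# Keep only the non-dominated (objective1, objective2) pairs.
def pareto_front(solutions):
    """Return the non-dominated subset of (obj1, obj2) tuples (minimized)."""
    front = []
    for s in solutions:
        dominated = any(o[0] <= s[0] and o[1] <= s[1] and o != s
                        for o in solutions)
        if not dominated:
            front.append(s)
    return front

# e.g. (feasibility violations, quality penalty) per candidate schedule
print(pareto_front([(3, 5), (2, 7), (4, 4), (3, 6), (5, 3)]))
# -> [(3, 5), (2, 7), (4, 4), (5, 3)]; (3, 6) is dominated by (3, 5)
```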

14 pages, 1660 KiB  
Article
The Use of Reactive Programming in the Proposed Model for Cloud Security Controlled by ITSS
by Dhuratë Hyseni, Nimete Piraj, Betim Çiço and Isak Shabani
Computers 2022, 11(5), 62; https://doi.org/10.3390/computers11050062 - 25 Apr 2022
Cited by 1 | Viewed by 2233
Abstract
Reactive programming is a popular paradigm that has been used as a new solution in our proposed model for security in the cloud. In this context, we have been able to reduce the execution time compared to our previous work on the proposed model for cloud security, where security control depends on the options selected by the ITSS (IT security specialist) of a given organization. Some of the difficulties we encountered in our previous work while using traditional programming were the coordination of parallel processes and the modification of real-time data. This study provides results for two programming methods used to implement the proposed model for cloud security: the first method uses traditional programming, and the second uses reactive programming, which proved to be the most suitable solution in our case. While taking the measurements in this paper, we used the same algorithms, and we present comparative results between the two programming methods. The results are presented in tables and graphs, which show that reactive programming in the proposed model for cloud security offers better results compared to traditional programming.
(This article belongs to the Special Issue Innovative Authentication Methods)
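A toy contrast between the two styles, assuming the RxPY library (v4, imported as reactivex); the security-event names are hypothetical.

```python
# Imperative vs. reactive handling of a stream of security events.
import reactivex as rx
from reactivex import operators as ops

events = ["login_ok", "login_fail", "login_fail", "policy_change"]

# traditional (imperative) style: explicit loop and mutable state
alerts = []
for e in events:
    if e == "login_fail":
        alerts.append(e.upper())

# reactive style: the same logic as a declarative event pipeline
rx.from_iterable(events).pipe(
    ops.filter(lambda e: e == "login_fail"),
    ops.map(lambda e: e.upper()),
).subscribe(on_next=print)
```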

19 pages, 1611 KiB  
Review
Robustly Effective Approaches on Motor Imagery-Based Brain Computer Interfaces
by Seraphim S. Moumgiakmas and George A. Papakostas
Computers 2022, 11(5), 61; https://doi.org/10.3390/computers11050061 - 24 Apr 2022
Cited by 4 | Viewed by 2785
Abstract
Motor Imagery Brain Computer Interfaces (MI-BCIs) are systems that receive the users’ brain activity as an input signal, detect the imagination of a movement, and use it to communicate between the brain and the interface or to trigger an action to be performed. Brainwave features are crucial for increasing the performance of the interface. The robustness of these features must be ensured in order for the effectiveness to remain high across various subjects. The present work is a review of scientific publications related to the use of robust feature extraction methods in Motor Imagery from 2017 until today. The research showed that the majority of the works focus on spatial features through Common Spatial Patterns (CSP) methods (44.26%). Based on the combination of accuracy percentages and K-values, which show the effectiveness of each approach, Wavelet Transform (WT) has shown higher robustness than CSP and power spectral density (PSD) methods in the majority of the datasets used for comparison, and also in the majority of the works included in the present review, although WT had a lower usage percentage in the literature (16.65%). The research showed an increase in 2019 in the use of spatial features to increase the robustness of an approach, but time-frequency features, or a combination of those, achieve better results, with their use increasing from 2019 onwards. Additionally, Wavelet Transforms and their variants, in combination with deep learning, manage to achieve high percentages, thus making a method robustly accurate.
(This article belongs to the Special Issue Advances of Machine and Deep Learning in the Health Domain)
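A sketch of wavelet-based feature extraction from a single EEG channel, assuming the PyWavelets package; the downstream classifier (e.g. a deep network, per the review's conclusion) is omitted.

```python
# Discrete wavelet decomposition of one channel, then per-sub-band features.
import numpy as np
import pywt

rng = np.random.default_rng(0)
eeg = rng.normal(size=1000)                # stand-in for one MI-EEG channel

coeffs = pywt.wavedec(eeg, wavelet="db4", level=4)   # discrete WT
# a common robust feature choice: log-energy of each sub-band
features = [np.log(np.sum(c ** 2)) for c in coeffs]
print(features)
```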

24 pages, 3167 KiB  
Article
Application Prospects of Blockchain Technology to Support the Development of Interport Communities
by Patrizia Serra, Gianfranco Fancello, Roberto Tonelli and Lodovica Marchesi
Computers 2022, 11(5), 60; https://doi.org/10.3390/computers11050060 - 21 Apr 2022
Cited by 3 | Viewed by 3201
Abstract
A key aspect of the efficiency and security of maritime transport is linked to the associated information flows. The optimal management of maritime transport requires the sharing of data in real time between the various participating organizations. Moreover, as supply chains become increasingly integrated, the connectivity of stakeholders must be ensured not only within a single port but also between ports. Blockchain could offer interesting opportunities in this regard and is believed to have a huge impact on the future of the digitization of the port and maritime industry. This paper analyzes the state of the art and practice of blockchain applications in the maritime industry and explores the application prospects and practical implications of blockchain for building an interport community. The paper uses SWOT analysis to address several research questions concerning the practical impacts and barriers related to the implementation of blockchain technology in port communities and develops a Proof of Concept (PoC) to concretely show how blockchain technology can be applied to roll-on roll-off transport and interport communities in real environments. In this regard, this study intends to contribute to the sector literature by providing a detailed framework that describes how to choose the correct blockchain scheme and implement the various management and operational aspects of an interport community by benefiting from the blockchain.
(This article belongs to the Special Issue Selected Papers from ICCSA 2021)

20 pages, 13947 KiB  
Article
Digital Game-Based Support for Learning the Phlebotomy Procedure in the Biomedical Laboratory Scientist Education
by Tord Hettervik Frøland, Ilona Heldal, Turid Aarhus Braseth, Irene Nygård, Gry Sjøholt and Elisabeth Ersvær
Computers 2022, 11(5), 59; https://doi.org/10.3390/computers11050059 - 21 Apr 2022
Cited by 1 | Viewed by 3373
Abstract
Practice-based training in education is important, expensive, and resource-demanding. Digital games can provide complementary training opportunities for practicing procedural skills and increase the value of the limited laboratory training time in biomedical laboratory science (BLS) education. This paper presents how a serious game can be integrated into a BLS course and supplement traditional learning and teaching with accessible learning material for phlebotomy. To gather information on challenges relevant to integrating Digital Game-Based Learning (DGBL), a case study was carried out using mixed methods. Through a semester-long, longitudinal, interventional cohort study, data and information were obtained from teachers and students about the learning impact of the current application. The game motivated students to train more, and teachers were positive towards using it in education. The results provide increased insight into how DGBL can be integrated into education and give rise to a discussion of the current challenges of DGBL for practice-based learning.
(This article belongs to the Special Issue Interactive Technology and Smart Education)

26 pages, 1686 KiB  
Article
Foot-to-Ground Phases Detection: A Comparison of Data Representation Formatting Methods with Respect to Adaption of Deep Learning Architectures
by Youness El Marhraoui, Hamdi Amroun, Mehdi Boukallel, Margarita Anastassova, Sylvie Lamy, Stéphane Bouilland and Mehdi Ammi
Computers 2022, 11(5), 58; https://doi.org/10.3390/computers11050058 - 20 Apr 2022
Cited by 1 | Viewed by 2259
Abstract
Identifying the foot stance and foot swing phases, also known as foot-to-ground (FTG) detection, is a branch of Human Activity Recognition (HAR). Our study aims to detect the two main phases of the gait (i.e., foot-off and foot-contact) corresponding to the moments when each foot is in contact with the ground or not. This will allow medical professionals to characterize and identify the different phases of the human gait and their respective patterns. This detection process is paramount for extracting gait features (e.g., step width, stride width, gait speed, cadence, etc.) used by medical experts to highlight gait anomalies, stance issues, or any other walking irregularities. It will be used to assist health practitioners with patient monitoring, in addition to developing a full pipeline for FTG detection that would help compute gait indicators. In this paper, a comparison of different training configurations, including model architectures, data formatting, and pre-processing, was conducted to select the parameters leading to the highest detection accuracy. This binary classification provides a label for each timestamp informing whether the foot is in contact with the ground or not. Models such as CNN, LSTM, and ConvLSTM were the best fits for this study; we also retained DNNs and machine learning models such as Random Forest and XGBoost in order to have a wide range of possible comparisons. As a result of our experiments, which included 27 senior participants who had previously had a stroke and wore IMU sensors on their ankles, the ConvLSTM model achieved a high accuracy of 97.01% for raw windowed data with a size of 3 frames per window, where each window was formatted to have two superimposed channels (accelerometer and gyroscope channels). The model was trained without any knowledge of the participants’ personal information, including age, gender, health condition, the type of activity, or the foot used; in other words, the model’s input data only originated from the IMU sensors. Overall, in terms of FTG detection, the combination of the ConvLSTM model and this data representation was important in outperforming other state-of-the-art configurations; in addition, the compromise between the model’s complexity and its accuracy is a major asset for deploying this model and developing real-time solutions.
(This article belongs to the Special Issue Survey in Deep Learning for IoT Applications)
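A sketch of the windowing scheme described in the abstract: 3-frame windows with the accelerometer and gyroscope stacked as two channels. The window size and channel layout follow the abstract; the random arrays stand in for real IMU recordings.

```python
# Slice a recording into overlapping 3-frame, 2-channel windows.
import numpy as np

rng = np.random.default_rng(0)
n = 300                                   # timestamps in one recording
acc = rng.normal(size=(n, 3))             # 3-axis accelerometer
gyr = rng.normal(size=(n, 3))             # 3-axis gyroscope

win = 3                                   # frames per window, as in the paper
windows = np.stack([
    np.stack([acc[i:i + win], gyr[i:i + win]])   # 2 channels per window
    for i in range(n - win + 1)
])
print(windows.shape)                      # (298, 2, 3, 3)
```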
