Computers, Volume 9, Issue 4 (December 2020) – 28 articles

Cover Story: Embedded systems are widespread in our daily life and are a vital component of larger structures such as wireless sensor networks and the Internet of Things. In many scientific fields, a crucial task is to accurately determine the location of an ongoing acoustic event. The current work combines computation, software testing, and the application of sensor networks in signal processing. It focuses on developing a test-bench to evaluate and test the performance of a swarm-based localization method running on an embedded implementation. The main findings corroborate the feasibility of the proposed concept: real-life product-development platforms were employed for testing, demonstrating the high potential of the localization algorithm for embedded implementation and real-time processing.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
18 pages, 2477 KiB  
Article
EEG and Deep Learning Based Brain Cognitive Function Classification
by Saraswati Sridhar and Vidya Manian
Computers 2020, 9(4), 104; https://doi.org/10.3390/computers9040104 - 21 Dec 2020
Cited by 21 | Viewed by 4908
Abstract
Electroencephalogram signals are used to assess neurodegenerative diseases and develop sophisticated brain machine interfaces for rehabilitation and gaming. Most of the applications use only motor imagery or evoked potentials. Here, a deep learning network based on a sensory motor paradigm (auditory, olfactory, movement, and motor-imagery) that employs a subject-agnostic Bidirectional Long Short-Term Memory (BLSTM) Network is developed to assess cognitive functions and identify their relationship with brain signal features, which are hypothesized to consistently indicate cognitive decline. Testing occurred with healthy subjects aged 20–40, 40–60, and >60, and with mildly cognitively impaired (MCI) subjects. Auditory and olfactory stimuli were presented to the subjects, and the subjects imagined and conducted movement of each arm, during which Electroencephalogram (EEG)/Electromyogram (EMG) signals were recorded. A deep BLSTM Neural Network is trained with Principal Component features from evoked signals and assesses their corresponding pathways. Wavelet analysis is used to decompose evoked signals and calculate the band power of component frequency bands. This deep learning system performs better than conventional deep neural networks in detecting MCI. Most features studied peaked in the 40–60 age range and were lower for the MCI group than for any other group tested. Detection accuracy of left-hand motor imagery signals best indicated cognitive aging (p = 0.0012); here, the mean classification accuracy per age group declined from 91.93% to 81.64%, and is 69.53% for MCI subjects. Motor-imagery-evoked band power, particularly in gamma bands, also indicated cognitive aging (p = 0.007). The classification accuracy of the potentials effectively distinguished cognitive aging from MCI (p < 0.05), followed by gamma-band power. Full article
(This article belongs to the Special Issue Feature Paper in Computers)
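Editor's note: as a rough illustration of the pipeline described above, here is a minimal sketch (not the authors' code) of a subject-agnostic bidirectional LSTM over PCA-reduced EEG epochs; the shapes, layer sizes, and four-class setup are illustrative assumptions.

```python
# Minimal sketch: PCA per time step, then a BLSTM classifier (assumed sizes).
import numpy as np
from sklearn.decomposition import PCA
from tensorflow.keras import layers, models

def build_blstm(n_steps: int, n_features: int, n_classes: int) -> models.Model:
    model = models.Sequential([
        layers.Input(shape=(n_steps, n_features)),
        layers.Bidirectional(layers.LSTM(64)),
        layers.Dense(32, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

rng = np.random.default_rng(0)
X = rng.standard_normal((120, 256, 32))   # 120 trials, 256 samples, 32 channels
y = rng.integers(0, 4, size=120)          # 4 classes (e.g., age groups / MCI)
pca = PCA(n_components=8).fit(X.reshape(-1, 32))
X_pca = pca.transform(X.reshape(-1, 32)).reshape(120, 256, 8)
model = build_blstm(256, 8, 4)
model.fit(X_pca, y, epochs=3, batch_size=16, verbose=0)
```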

14 pages, 5647 KiB  
Article
Design and Implementation of Automated Steganography Image-Detection System for the KakaoTalk Instant Messenger
by Jun Park and Youngho Cho
Computers 2020, 9(4), 103; https://doi.org/10.3390/computers9040103 - 18 Dec 2020
Cited by 6 | Viewed by 3185
Abstract
As the popularity of social network service (SNS) messengers (such as Telegram, WeChat or KakaoTalk) grows rapidly, cyberattackers and cybercriminals have started targeting them, and various media report numerous cyber incidents that have occurred on SNS messenger platforms. In particular, according to existing studies, a novel type of botnet, the so-called steganography-based botnet (stego-botnet), can be constructed and implemented in SNS chat messengers. In the stego-botnet, by using various steganography techniques, every botnet command and control (C&C) message is secretly embedded into multimedia files (such as image or video files) frequently shared in the SNS messenger. As a result, the stego-botnet can hide its malicious messages between a bot master and bots much better than existing botnets, evading traditional botnet-detection methods that lack steganography-detection functions. Meanwhile, existing studies have focused on devising and improving steganography-detection algorithms, but no study has built an automated steganography image-detection system, even though there are a large number of SNS chatrooms on the Internet and therefore potentially many steganography images in those chatrooms that need to be inspected for security. Consequently, in this paper, we propose an automated system that detects steganography image files by collecting and inspecting all image files shared in an SNS chatroom, based on open image steganography tools. In addition, we implement our proposed system with two open steganography tools (Stegano and Cryptosteganography) in the KakaoTalk SNS messenger and present experimental results validating that our proposed automated detection system works successfully according to our design purposes. Full article
(This article belongs to the Special Issue IoT: Security, Privacy and Best Practices)
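Editor's note: to make the inspection idea concrete, here is a minimal sketch of a detection loop built on the open-source Stegano tool named above; the directory of collected images and the PNG-only assumption are illustrative, and collecting the files from KakaoTalk itself is out of scope.

```python
# Minimal sketch: scan a folder of collected chatroom images for LSB payloads.
from pathlib import Path
from stegano import lsb

def inspect_chatroom_images(image_dir: str) -> list[str]:
    """Return paths of images in which an LSB-embedded payload is found."""
    suspicious = []
    for path in Path(image_dir).glob("*.png"):   # LSB needs lossless formats
        try:
            hidden = lsb.reveal(str(path))       # raises/None when no payload
        except Exception:
            hidden = None
        if hidden:
            suspicious.append(str(path))
    return suspicious

print(inspect_chatroom_images("./downloaded_chatroom_images"))
```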

18 pages, 3376 KiB  
Article
An Interactive Serious Mobile Game for Supporting the Learning of Programming in JavaScript in the Context of Eco-Friendly City Management
by Rytis Maskeliūnas, Audrius Kulikajevas, Tomas Blažauskas, Robertas Damaševičius and Jakub Swacha
Computers 2020, 9(4), 102; https://doi.org/10.3390/computers9040102 - 17 Dec 2020
Cited by 25 | Viewed by 4893
Abstract
In the pedagogical process, a serious game acts as a method of teaching and upbringing and as a means of transferring accumulated experience and knowledge. In this paper, we describe an interactive serious game, grounded in game-based learning, for teaching JavaScript programming in an introductory course at university. The game was developed by adopting the gamification pattern-based approach. The game is based on visualizations of different types of algorithms, which are interpreted in the context of city life. The game encourages interactivity and pursues deeper learning of programming concepts. The results of the evaluation of the game using pre-test and post-test knowledge assessment, the Technology Acceptance Model (TAM), and the Technology-Enhanced Training Effectiveness Model (TETEM) are presented. Full article
(This article belongs to the Special Issue Game-Based Learning, Gamification in Education and Serious Games)

20 pages, 7991 KiB  
Article
Multiple View Relations Using the Teaching and Learning-Based Optimization Algorithm
by Alan López-Martínez and Francisco Javier Cuevas
Computers 2020, 9(4), 101; https://doi.org/10.3390/computers9040101 - 17 Dec 2020
Cited by 2 | Viewed by 2402
Abstract
In computer vision, estimating geometric relations between two different views of the same scene has great importance due to its applications in 3D reconstruction, object recognition and digitization, image registration, pose retrieval, visual tracking and more. The Random Sample Consensus (RANSAC) is the most popular heuristic technique to tackle this problem. However, RANSAC-like algorithms present a drawback regarding either the tuning of the number of samples and the threshold error or the computational burden. To relieve this problem, we propose an estimator based on a metaheuristic, the Teaching–Learning-Based Optimization algorithm (TLBO), which is motivated by the teaching–learning process. We use the TLBO algorithm in the problem of computing multiple view relations given by the homography and the fundamental matrix. To improve the method, candidate models are evaluated with a more precise objective function. To validate the efficacy of the proposed approach, several tests and comparisons with two RANSAC-based algorithms and other metaheuristic-based estimators were executed. Full article
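Editor's note: for readers unfamiliar with TLBO, the sketch below shows the algorithm's two phases on a generic objective; the toy cost function merely stands in for the model-fitting error of a homography (8 free parameters), and population sizes are arbitrary assumptions.

```python
# Minimal sketch of TLBO's teacher and learner phases (not the paper's code).
import numpy as np

def tlbo_minimize(f, bounds, n_learners=30, n_iters=100, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_learners, len(lo)))
    for _ in range(n_iters):
        scores = np.array([f(x) for x in X])
        teacher, mean = X[scores.argmin()], X.mean(axis=0)
        # Teacher phase: move learners toward the teacher, away from the mean.
        Tf = rng.integers(1, 3, size=(n_learners, 1))   # teaching factor {1, 2}
        X_new = X + rng.random(X.shape) * (teacher - Tf * mean)
        # Learner phase: each learner interacts with a random peer.
        for i in range(n_learners):
            j = rng.integers(n_learners)
            if f(X[j]) < f(X_new[i]):
                X_new[i] += rng.random(len(lo)) * (X[j] - X_new[i])
            else:
                X_new[i] += rng.random(len(lo)) * (X_new[i] - X[j])
        X_new = np.clip(X_new, lo, hi)
        # Greedy selection: keep the better of the old and new positions.
        keep = np.array([f(a) < f(b) for a, b in zip(X, X_new)])
        X = np.where(keep[:, None], X, X_new)
    return min(X, key=f)

best = tlbo_minimize(lambda x: ((x - 0.7) ** 2).sum(), (np.zeros(8), np.ones(8)))
```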

17 pages, 713 KiB  
Article
Folding-BSD Algorithm for Binary Sequence Decomposition
by Jose Luis Martin-Navarro and Amparo Fúster-Sabater
Computers 2020, 9(4), 100; https://doi.org/10.3390/computers9040100 - 15 Dec 2020
Cited by 3 | Viewed by 2427
Abstract
The Internet of Things (IoT) revolution leads to a range of critical services which rely on IoT devices. Nevertheless, these devices often lack proper security, becoming the gateway to attacks on the whole system. IoT security protocols often rely on stream ciphers, in which pseudo-random number generators (PRNGs) are an essential part. In this article, a family of ciphers with strong characteristics that make them difficult to analyze by standard methods is described. In addition, we discuss an innovative technique of sequence decomposition and present a novel algorithm to evaluate the strength of binary sequences, a key part of the IoT security stack. The density of the binomial sequences in the decomposition has been studied experimentally to compare the performance of the presented algorithm with previous works. Full article
(This article belongs to the Special Issue Selected Papers from ICCSA 2020)
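Editor's note: as a quick illustration of the objects involved, the sketch below generates binomial sequences over GF(2) via Lucas' theorem and XOR-combines them; it shows only the decomposition's building blocks and is not the paper's folding algorithm.

```python
# The k-th binomial sequence is C(n, k) mod 2; by Lucas' theorem this is 1
# exactly when k's set bits are a subset of n's set bits.
def binomial_sequence(k: int, length: int) -> list[int]:
    return [1 if (n & k) == k else 0 for n in range(length)]

def xor_combine(*seqs: list[int]) -> list[int]:
    # A binary sequence decomposes into an XOR (GF(2) sum) of such sequences.
    return [sum(bits) % 2 for bits in zip(*seqs)]

s = xor_combine(binomial_sequence(1, 16), binomial_sequence(4, 16))
print(s)
```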

16 pages, 1037 KiB  
Article
Toward Smart Lockdown: A Novel Approach for COVID-19 Hotspots Prediction Using a Deep Hybrid Neural Network
by Sultan Daud Khan, Louai Alarabi and Saleh Basalamah
Computers 2020, 9(4), 99; https://doi.org/10.3390/computers9040099 - 11 Dec 2020
Cited by 35 | Viewed by 3814
Abstract
COVID-19 caused the largest economic recession in history by placing more than one third of the world's population in lockdown. The prolonged restrictions on economic and business activities caused huge economic turmoil that significantly affected the financial markets. To ease the growing pressure on the economy, scientists proposed intermittent lockdowns commonly known as “smart lockdowns”. Under a smart lockdown, areas that contain infected clusters of population, namely hotspots, are placed under lockdown, while economic activities are allowed to operate in un-infected areas. In this study, we propose a novel deep learning prediction framework for the accurate prediction of hotspots. We exploit the benefits of two deep learning models, i.e., the Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM), and propose a hybrid framework that has the ability to extract multi time-scale features from the convolutional layers of the CNN. The multi time-scale features are then concatenated and provided as input to a two-layer LSTM model. The LSTM model identifies short, medium and long-term dependencies by learning the representation of time-series data. We perform a series of experiments and compare the proposed framework with other state-of-the-art statistical and machine learning based prediction models. From the experimental results, we demonstrate that the proposed framework beats other existing methods by a clear margin. Full article
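Editor's note: a minimal sketch of the hybrid architecture described above, under our own layer-size assumptions: parallel Conv1D branches extract features at several time scales, which are concatenated and passed to a two-layer LSTM.

```python
# Minimal sketch of a multi-scale CNN front end feeding a two-layer LSTM.
from tensorflow.keras import layers, models

def build_hybrid(n_steps: int, n_features: int) -> models.Model:
    inp = layers.Input(shape=(n_steps, n_features))
    branches = [layers.Conv1D(16, k, padding="same", activation="relu")(inp)
                for k in (3, 5, 7)]              # three assumed time scales
    x = layers.Concatenate()(branches)
    x = layers.LSTM(64, return_sequences=True)(x)
    x = layers.LSTM(32)(x)
    out = layers.Dense(1)(x)                     # next-step case count
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    return model

model = build_hybrid(n_steps=14, n_features=4)   # 14-day window, 4 signals
```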

29 pages, 13853 KiB  
Article
FuseVis: Interpreting Neural Networks for Image Fusion Using Per-Pixel Saliency Visualization
by Nishant Kumar and Stefan Gumhold
Computers 2020, 9(4), 98; https://doi.org/10.3390/computers9040098 - 10 Dec 2020
Cited by 8 | Viewed by 4918
Abstract
Image fusion helps in merging two or more images to construct a more informative single fused image. Recently, unsupervised learning-based convolutional neural networks (CNN) have been used for different types of image-fusion tasks, such as medical image fusion, infrared-visible image fusion for autonomous driving, as well as multi-focus and multi-exposure image fusion for satellite imagery. However, it is challenging to analyze the reliability of these CNNs for the image-fusion tasks since no ground truth is available. This has led to the use of a wide variety of model architectures and optimization functions yielding quite different fusion results. Additionally, due to the highly opaque nature of such neural networks, it is difficult to explain the internal mechanics behind their fusion results. To overcome these challenges, we present a novel real-time visualization tool, named FuseVis, with which the end-user can compute per-pixel saliency maps that examine the influence of the input image pixels on each pixel of the fused image. We trained several image-fusion CNNs on medical image pairs and then, using our FuseVis tool, performed case studies on a specific clinical application by interpreting the saliency maps from each of the fusion methods. We specifically visualized the relative influence of each input image on the predictions of the fused image and showed that some of the evaluated image-fusion methods are better suited for the specific clinical application. To the best of our knowledge, there is currently no approach for the visual analysis of neural networks for image fusion. Therefore, this work opens a new research direction to improve the interpretability of deep fusion networks. The FuseVis tool can also be adapted to other deep neural network-based image processing applications to make them interpretable. Full article
(This article belongs to the Special Issue Feature Paper in Computers)

16 pages, 2846 KiB  
Article
COTS-Based Real-Time System Development: An Effective Application in Pump Motor Control
by George K. Adam, Nikos Petrellis, Panagiotis A. Kontaxis and Tilemachos Stylianos
Computers 2020, 9(4), 97; https://doi.org/10.3390/computers9040097 - 05 Dec 2020
Cited by 7 | Viewed by 3073
Abstract
The progress of embedded control systems in the last several years has made possible the realization of highly effective controllers in many domains. It is essential for such systems to provide effective performance at an affordable cost. Furthermore, real-time embedded control systems must have low energy consumption, as well as be reliable and timely. This research investigates primarily the feasibility of implementing an embedded real-time control system based on a low-cost, commercial off-the-shelf (COTS) microcontroller platform. It explores real-time issues, such as the reliability and timely response, of such a system implementation. This work presents the development and performance evaluation of a novel real-time control architecture, based upon a BeagleBoard microcontroller, applied to the PWM (pulse width modulation) control of a three-phase induction motor in a suction pump. The approach followed makes minimal use of general-purpose hardware (BeagleBone Black microcontroller board) and open-source software components (including the Linux operating system with PREEMPT_RT real-time support) for building a reliable real-time control system. The applicability of the proposed control system architecture is validated and evaluated in a real case study in manufacturing. The results provide sufficient evidence of the efficiency and reliability of the proposed approach in the development of a real-time control system based upon COTS components. Full article

20 pages, 2326 KiB  
Article
kNN Prototyping Schemes for Embedded Human Activity Recognition with Online Learning
by Paulo J. S. Ferreira, João M. P. Cardoso and João Mendes-Moreira
Computers 2020, 9(4), 96; https://doi.org/10.3390/computers9040096 - 03 Dec 2020
Cited by 19 | Viewed by 2622
Abstract
The kNN machine learning method is widely used as a classifier in Human Activity Recognition (HAR) systems. Although the kNN algorithm works similarly in both online and offline modes, the use of all training instances is much more critical online than offline due to the time and memory restrictions of the online mode. Some methods propose decreasing the high computational costs of kNN by focusing, e.g., on approximate kNN solutions such as the ones relying on Locality-Sensitive Hashing (LSH). However, embedded kNN implementations also need to address the target device's memory constraints, especially as the use of online classification needs to cope with those constraints to be practical. This paper discusses online approaches to reduce the number of training instances stored in the kNN search space. To address practical implementations of HAR systems using kNN, this paper presents simple, energy- and computationally efficient, real-time feasible schemes to maintain at runtime a maximum number of training instances stored by kNN. The proposed schemes include policies for substituting the training instances, keeping the search space within a maximum size. Experiments on HAR datasets show the efficiency of our best schemes. Full article
(This article belongs to the Special Issue Feature Paper in Computers)
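Editor's note: the sketch below illustrates the general idea of a bounded instance store with a substitution policy; FIFO replacement is just one plausible policy, not necessarily the paper's best-performing scheme.

```python
# Minimal sketch: kNN with a hard budget on stored training instances.
from collections import deque
import numpy as np

class BoundedKNN:
    def __init__(self, k: int = 3, max_instances: int = 500):
        self.k = k
        self.store = deque(maxlen=max_instances)   # FIFO eviction policy

    def learn_one(self, x: np.ndarray, y: int) -> None:
        self.store.append((x, y))                  # oldest instance drops out

    def predict_one(self, x: np.ndarray) -> int:
        nearest = sorted(self.store, key=lambda p: np.linalg.norm(p[0] - x))
        votes = [y for _, y in nearest[: self.k]]
        return max(set(votes), key=votes.count)    # majority vote
```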

15 pages, 937 KiB  
Article
Machine-Learning-Based Emotion Recognition System Using EEG Signals
by Rania Alhalaseh and Suzan Alasasfeh
Computers 2020, 9(4), 95; https://doi.org/10.3390/computers9040095 - 30 Nov 2020
Cited by 57 | Viewed by 7827
Abstract
Many scientific studies have been concerned with building automatic systems to recognize emotions, and building such systems usually relies on brain signals. These studies have shown that brain signals can be used to classify many emotional states. This process is considered difficult, especially since the brain's signals are not stable. Human emotions are generated as a result of reactions to different emotional states, which affect brain signals. Thus, the performance of emotion recognition systems based on brain signals depends on the efficiency of the algorithms used to extract features, the feature selection algorithm, and the classification process. Recently, the study of electroencephalography (EEG) signals has received much attention due to the availability of several standard databases, especially since brain signal recording devices, including wireless ones, have become available on the market at reasonable prices. This work aims to present an automated model for identifying emotions based on EEG signals. The proposed model focuses on creating an effective method that combines the basic stages of EEG signal handling and feature extraction. Differently from previous studies, the main contribution of this work lies in using empirical mode decomposition/intrinsic mode functions (EMD/IMF) and variational mode decomposition (VMD) for signal processing purposes. Despite the fact that EMD/IMF and VMD methods are widely used in biomedical and disease-related studies, they are not commonly utilized in emotion recognition; in other words, the methods used in the signal processing stage of this work differ from the methods used in the literature. After the signal processing stage, namely in the feature extraction stage, two well-known techniques were used: entropy and Higuchi's fractal dimension (HFD). Finally, in the classification stage, four classification methods were used—naïve Bayes, k-nearest neighbor (k-NN), convolutional neural network (CNN), and decision tree (DT)—for classifying emotional states. To evaluate the performance of our proposed model, experiments were applied to the common DEAP database using several evaluation measures, including accuracy, specificity, and sensitivity. The experiments showed the efficiency of the proposed method; a 95.20% accuracy was achieved using the CNN-based method. Full article
(This article belongs to the Special Issue Machine Learning for EEG Signal Processing)
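Editor's note: for concreteness, here is a minimal sketch of Higuchi's fractal dimension, one of the two feature-extraction techniques named above, applied to a 1-D EEG segment (k_max is an assumption).

```python
# Minimal sketch of Higuchi's fractal dimension for a 1-D signal.
import numpy as np

def higuchi_fd(x: np.ndarray, k_max: int = 8) -> float:
    n = len(x)
    lk = []
    for k in range(1, k_max + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            # Curve length for offset m at scale k, Higuchi-normalized.
            length = np.abs(np.diff(x[idx])).sum() * (n - 1) / (len(idx) - 1) / k
            lengths.append(length / k)
        lk.append(np.mean(lengths))
    # HFD is the slope of log L(k) versus log(1/k).
    slope, _ = np.polyfit(np.log(1.0 / np.arange(1, k_max + 1)), np.log(lk), 1)
    return slope

print(higuchi_fd(np.sin(np.linspace(0, 20 * np.pi, 1000))))
```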

31 pages, 789 KiB  
Article
An UML Based Performance Evaluation of Real-Time Systems Using Timed Petri Net
by Tanuja Shailesh, Ashalatha Nayak and Devi Prasad
Computers 2020, 9(4), 94; https://doi.org/10.3390/computers9040094 - 27 Nov 2020
Cited by 11 | Viewed by 5080
Abstract
Performance is a critical non-functional parameter for real-time systems, and performance analysis is an important task that becomes more challenging for complex real-time systems. Performance analysis is mostly performed after system development, but early-stage analysis and validation of performance using system models can improve system quality. In this paper, we present an early-stage automated performance evaluation methodology to analyse system performance using the UML sequence diagram model annotated with the modeling and analysis of real-time and embedded systems (MARTE) profile. MARTE offers a performance domain sub-profile that is used for representing real-time system properties essential for performance evaluation. In this paper, a transformation technique and transformation rules are proposed to map the UML sequence diagram model into a Generalized Stochastic Timed Petri net model. All the transformation rules are implemented using a metamodel-based approach and the Atlas Transformation Language (ATL). A case study from the manufacturing domain, a Kanban system, is used for validating the proposed technique. Full article

13 pages, 1621 KiB  
Article
CogniSoft: A Platform for the Automation of Cognitive Assessment and Rehabilitation of Multiple Sclerosis
by Dessislava Petrova-Antonova, Ivaylo Spasov, Yanita Petkova, Ilina Manova and Sylvia Ilieva
Computers 2020, 9(4), 93; https://doi.org/10.3390/computers9040093 - 16 Nov 2020
Cited by 1 | Viewed by 3866
Abstract
Cognitive disorders remain a major cause of disability in Multiple Sclerosis (MS). They lead to unemployment, the need for daily assistance, and a poor quality of life. Understanding the origin, factors, processes, and consequences of cognitive dysfunction is key to its prevention, early diagnosis, and rehabilitation. Neuropsychological testing and continuous monitoring of cognitive status as part of the overall evaluation of patients with MS, in parallel with clinical and paraclinical examinations, are highly recommended. In order to improve health and disease understanding, a close linkage between fundamental, clinical, epidemiological, and socio-economic research is required. The effective sharing of data, standardized data processing, and the linkage of such data with large-scale cohort studies is a prerequisite for the translation of research findings into the clinical setting. In this context, this paper proposes a software platform for the cognitive assessment and rehabilitation of patients with MS called CogniSoft. The platform automates the Beck Depression Inventory (BDI-II) test and diagnostic tests for the evaluation of memory and executive functions based on the nature of the Brief International Cognitive Assessment for MS (BICAMS), as well as implementing a set of games for cognitive rehabilitation based on BICAMS. The software architecture, core modules, and technologies used for their implementation are presented. Special attention is given to the development of cognitive tests for diagnostics and rehabilitation. Their automation enables better perception, avoids the bias introduced by classic paper tests administered by different neurophysiologists, provides easy administration, and allows data collection in a uniform manner, which further enables analysis using statistical and machine learning algorithms. The CogniSoft platform is registered as medical software by the Bulgarian Drug Agency and is currently deployed in the Neurological Clinic of the National Hospital of Cardiology in Sofia, Bulgaria. The first experiments prove the feasibility of the platform, showing that it saves time and financial resources while reducing subjectivity in the interpretation of cognitive test results. Full article
(This article belongs to the Special Issue Selected Papers from ICCSA 2020)

26 pages, 1881 KiB  
Review
Recommendations for Integrating a P300-Based Brain–Computer Interface in Virtual Reality Environments for Gaming: An Update
by Grégoire Cattan, Anton Andreev and Etienne Visinoni
Computers 2020, 9(4), 92; https://doi.org/10.3390/computers9040092 - 14 Nov 2020
Cited by 22 | Viewed by 5081
Abstract
The integration of a P300-based brain–computer interface (BCI) into virtual reality (VR) environments is promising for the video games industry. However, it faces several limitations, mainly due to hardware constraints and limitations engendered by the stimulation needed by the BCI. The main restriction is still the low transfer rate that can be achieved by current BCI technology, preventing movement while using VR. The goal of this paper is to review current limitations and to provide application creators with design recommendations to overcome them, thus significantly reducing the development time and making the domain of BCI more accessible to developers. We review the design of video games from the perspective of BCI and VR with the objective of enhancing the user experience. An essential recommendation is to use the BCI only for non-complex and non-critical tasks in the game. Also, the BCI should be used to control actions that are naturally integrated into the virtual world. Finally, adventure and simulation games, especially if cooperative (multi-user), appear to be the best candidates for designing an effective VR game enriched by BCI technology. Full article

10 pages, 1599 KiB  
Article
Social Distancing in Indoor Spaces: An Intelligent Guide Based on the Internet of Things: COVID-19 as a Case Study
by Malek Alrashidi
Computers 2020, 9(4), 91; https://doi.org/10.3390/computers9040091 - 07 Nov 2020
Cited by 11 | Viewed by 2856
Abstract
Using Internet of Things (IoT) solutions is a promising way to ensure that social distancing is respected, especially in common indoor spaces. This paper proposes a system for the placement and relocation of people within an indoor space, using an intelligent method based on two optimizers to find the optimal relocation of a set of people equipped with IoT devices that control their locations and movements. Two evolutionary algorithms are proposed to solve the studied problem: ant colony optimization and particle swarm optimization. As a real-world test, an amphitheater with students was used, and the algorithms guided students toward correct, safe positions. A comparative analysis was then performed between these two algorithms and a genetic algorithm, using different evaluation metrics to assess the behavior of the proposed system. The results show the efficiency of the proposed intelligent IoT system. Full article
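Editor's note: to illustrate one of the two optimizers, the sketch below applies a standard particle swarm update to the placement problem; the room size, 1.5 m threshold, and inertia/acceleration coefficients are illustrative assumptions.

```python
# Minimal sketch: PSO over candidate (x, y) positions for all people, with a
# fitness that counts pairs closer than the distancing threshold.
import numpy as np

def fitness(pos: np.ndarray, min_dist: float = 1.5) -> float:
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    violations = (d < min_dist) & ~np.eye(len(pos), dtype=bool)
    return violations.sum() / 2                    # number of violating pairs

def pso_placement(n_people=30, room=(10.0, 8.0), n_particles=40, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(0, room, size=(n_particles, n_people, 2))
    V = np.zeros_like(X)
    pbest, pbest_f = X.copy(), np.array([fitness(p) for p in X])
    gbest = pbest[pbest_f.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        V = 0.7 * V + 1.5 * r1 * (pbest - X) + 1.5 * r2 * (gbest - X)
        X = np.clip(X + V, 0, room)                # keep positions inside room
        f = np.array([fitness(p) for p in X])
        better = f < pbest_f
        pbest[better], pbest_f[better] = X[better], f[better]
        gbest = pbest[pbest_f.argmin()]
    return gbest

seats = pso_placement()
```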

13 pages, 2601 KiB  
Article
Developing a POS Tagged Corpus of Urdu Tweets
by Amber Baig, Mutee U Rahman, Hameedullah Kazi and Ahsanullah Baloch
Computers 2020, 9(4), 90; https://doi.org/10.3390/computers9040090 - 07 Nov 2020
Cited by 2 | Viewed by 3216
Abstract
Processing social media text like tweets is challenging for traditional Natural Language Processing (NLP) tools developed for well-edited text, due to the noisy nature of such text. However, demand for tools and resources to correctly process such noisy text has increased in recent years due to its usefulness in various applications. The literature reports various efforts made to develop tools and resources to process such noisy text for various languages, notably part-of-speech (POS) tagging, an NLP task having a direct effect on the performance of successive text processing activities. Still, no such attempt has been made to develop a POS tagger for Urdu social media content. Thus, the focus of this paper is POS tagging of Urdu tweets. We introduce a new tagset for the POS tagging of Urdu tweets along with a POS-tagged Urdu tweets corpus. We also investigated bootstrapping as a potential solution for overcoming the shortage of manually annotated data and present a supervised POS tagger achieving 93.8% precision, 92.9% recall and a 93.3% F-measure. Full article

26 pages, 23280 KiB  
Article
An Optimal Stacked Ensemble Deep Learning Model for Predicting Time-Series Data Using a Genetic Algorithm—An Application for Aerosol Particle Number Concentrations
by Ola M. Surakhi, Martha Arbayani Zaidan, Sami Serhan, Imad Salah and Tareq Hussein
Computers 2020, 9(4), 89; https://doi.org/10.3390/computers9040089 - 05 Nov 2020
Cited by 12 | Viewed by 3355
Abstract
Time-series prediction is an important area that inspires numerous research disciplines for various applications, including air quality databases. Developing a robust and accurate model for time-series data becomes a challenging task because it involves training different models and optimization. In this paper, we proposed and tested three machine learning techniques—recurrent neural networks (RNN), a heuristic algorithm and ensemble learning—to develop a predictive model for estimating atmospheric particle number concentrations in the form of a time-series database. Here, the RNN included three variants—Long Short-Term Memory, Gated Recurrent Network, and Bi-directional Recurrent Neural Network—with various configurations. A Genetic Algorithm (GA) was then used to find the optimal time-lag in order to enhance the model's performance. The optimized models were used to construct a stacked ensemble model as well as to perform the final prediction. The results demonstrated that the time-lag value can be optimized by using the heuristic algorithm; consequently, this improved the model's prediction accuracy. Further improvement can be achieved by using ensemble learning that combines several models for better performance and more accurate predictions. Full article
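Editor's note: a minimal sketch of the time-lag search, where in the full pipeline the fitness would be the validation error of an RNN trained with that lag; here the evaluation is stubbed with a hypothetical toy function, and all GA parameters are assumptions.

```python
# Minimal sketch: a GA over integer time-lags.
import random

def evaluate_lag(lag: int) -> float:
    # Placeholder for "train RNN with this lag, return validation RMSE".
    return abs(lag - 17) + random.random()

def ga_best_lag(lag_range=(1, 48), pop_size=20, generations=30, p_mut=0.2):
    pop = [random.randint(*lag_range) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=evaluate_lag)                 # elitist selection
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) // 2                   # arithmetic crossover
            if random.random() < p_mut:            # small integer mutation
                child = min(max(child + random.choice((-2, -1, 1, 2)),
                                lag_range[0]), lag_range[1])
            children.append(child)
        pop = parents + children
    return min(pop, key=evaluate_lag)

print(ga_best_lag())
```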

27 pages, 374 KiB  
Article
Minimal Complexity Support Vector Machines for Pattern Classification
by Shigeo Abe
Computers 2020, 9(4), 88; https://doi.org/10.3390/computers9040088 - 04 Nov 2020
Cited by 4 | Viewed by 2335
Abstract
Minimal complexity machines (MCMs) minimize the VC (Vapnik–Chervonenkis) dimension to obtain high generalization abilities. However, because the regularization term is not included in the objective function, the solution is not unique. In this paper, to solve this problem, we discuss fusing the MCM and the standard support vector machine (L1 SVM). This is realized by minimizing the maximum margin in the L1 SVM. We call the machine the minimum complexity L1 SVM (ML1 SVM). The associated dual problem has twice the number of dual variables, and the ML1 SVM is trained by alternately optimizing the dual variables associated with the regularization term and those associated with the VC dimension. We compare the ML1 SVM with other types of SVMs, including the L1 SVM, using several benchmark datasets, and show that the ML1 SVM performs better than or comparably to the L1 SVM. Full article
(This article belongs to the Special Issue Artificial Neural Networks in Pattern Recognition)
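Editor's note: for reference, the standard soft-margin L1 SVM that the MCM is fused with solves the following primal problem (the textbook formulation, not reproduced from the paper):

```latex
% Standard soft-margin (L1) SVM primal, shown for reference.
\min_{\mathbf{w},\, b,\, \boldsymbol{\xi}} \;\; \frac{1}{2}\lVert \mathbf{w} \rVert^{2} + C \sum_{i=1}^{m} \xi_{i}
\quad \text{s.t.} \quad y_{i}\left(\mathbf{w}^{\top}\phi(\mathbf{x}_{i}) + b\right) \ge 1 - \xi_{i},\quad \xi_{i} \ge 0,\; i = 1,\dots,m.
```

The ML1 SVM described above augments this objective with a term tied to the VC-dimension bound and optimizes the two groups of dual variables alternately.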

19 pages, 2037 KiB  
Article
Development of a Test-Bench for Evaluating the Embedded Implementation of the Improved Elephant Herding Optimization Algorithm Applied to Energy-Based Acoustic Localization
by Sérgio D. Correia, João Fé, Slavisa Tomic and Marko Beko
Computers 2020, 9(4), 87; https://doi.org/10.3390/computers9040087 - 03 Nov 2020
Cited by 9 | Viewed by 4146
Abstract
The present work addresses the development of a test-bench for the embedded implementation, validation, and testing of the recently proposed Improved Elephant Herding Optimization (iEHO) algorithm, applied to the acoustic localization problem. The implemented methodology aims to corroborate the feasibility of applying iEHO in real-time applications on low-complexity and low-power devices, where three different electronic modules are used and tested. Swarm-based metaheuristic methods are usually examined by employing high-level languages on centralized computers, demonstrating their capability in finding global or good local solutions. This work considers an iEHO implementation in C running on an embedded processor. Several random scenarios are generated, uploaded, and processed by the embedded processor to demonstrate the algorithm's effectiveness and the test-bench's usability, low complexity, and high reliability. On the one hand, the results obtained on our test-bench are concordant in accuracy with high-level implementations using MatLab®. On the other hand, concerning processing time and as a breakthrough, the results obtained on the test-bench demonstrate the high suitability of the embedded iEHO implementation for real-time applications due to its low latency. Full article

17 pages, 2375 KiB  
Article
Predicting Employee Attrition Using Machine Learning Techniques
by Francesca Fallucchi, Marco Coladangelo, Romeo Giuliano and Ernesto William De Luca
Computers 2020, 9(4), 86; https://doi.org/10.3390/computers9040086 - 03 Nov 2020
Cited by 79 | Viewed by 26849
Abstract
There are several areas in which organisations can adopt technologies that will support decision-making: artificial intelligence is one of the most innovative technologies widely used to assist organisations in business strategies, organisational aspects and people management. In recent years, attention has increasingly been paid to human resources (HR), since worker quality and skills represent a growth factor and a real competitive advantage for companies. After having been introduced in sales and marketing departments, artificial intelligence is also starting to guide employee-related decisions within HR management. The purpose is to support decisions that are based not on subjective aspects but on objective data analysis. The goal of this work is to analyse how objective factors influence employee attrition, in order to identify the main causes that contribute to a worker's decision to leave a company, and to be able to predict whether a particular employee will leave the company. After training, the obtained model for the prediction of employee attrition is tested on a real dataset provided by IBM analytics, which includes 35 features and about 1500 samples. Results are expressed in terms of classical metrics, and the algorithm that produced the best results for the available dataset is the Gaussian Naïve Bayes classifier. It achieves the best recall (0.54), a metric that measures the ability of a classifier to find all the positive instances, and an overall false negative rate equal to 4.5% of the total observations. Full article
(This article belongs to the Special Issue Feature Paper in Computers)
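Editor's note: as a concrete illustration of the winning classifier type, here is a minimal scikit-learn sketch on an attrition-style table; the file name and column names are hypothetical, not the IBM dataset's actual schema.

```python
# Minimal sketch: Gaussian Naive Bayes on a tabular attrition dataset.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import recall_score

df = pd.read_csv("attrition.csv")                      # hypothetical file
X = pd.get_dummies(df.drop(columns=["Attrition"]))     # one-hot categoricals
y = (df["Attrition"] == "Yes").astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=42, stratify=y)
clf = GaussianNB().fit(X_tr, y_tr)
print("recall:", recall_score(y_te, clf.predict(X_te)))
```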

14 pages, 438 KiB  
Article
Statistical Model-Based Classification to Detect Patient-Specific Spike-and-Wave in EEG Signals
by Antonio Quintero-Rincón, Valeria Muro, Carlos D’Giano, Jorge Prendes and Hadj Batatia
Computers 2020, 9(4), 85; https://doi.org/10.3390/computers9040085 - 29 Oct 2020
Cited by 2 | Viewed by 5481
Abstract
Spike-and-wave discharge (SWD) pattern detection in electroencephalography (EEG) is a crucial signal processing problem in epilepsy applications. It is particularly important for overcoming the time-consuming, difficult, and error-prone manual analysis of long-term EEG recordings. This paper presents a new method to detect SWD with low computational complexity, making it easy to train with data from standard medical protocols. Precisely, EEG signals are divided into time segments for which the continuous Morlet 1-D wavelet decomposition is computed. The generalized Gaussian distribution (GGD) is fitted to the resulting coefficients, and their variance and median are calculated. Next, a k-nearest neighbors (k-NN) classifier is trained to detect the spike-and-wave patterns, using the scale parameter of the GGD in addition to the variance and the median. Experiments were conducted using EEG signals from six human patients. Specifically, 106 spike-and-wave and 106 non-spike-and-wave signals were used for training, and 96 other segments for testing. The proposed SWD classification method achieved 95% sensitivity (true positive rate), 87% specificity (true negative rate), and 92% accuracy. These promising results set the path for new research to study the causes underlying so-called absence epilepsy in long-term EEG recordings. Full article
(This article belongs to the Special Issue Machine Learning for EEG Signal Processing)
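Editor's note: a minimal sketch of the feature pipeline described above: Morlet CWT of an EEG segment, a generalized Gaussian fit to the coefficients, and a k-NN over the (scale, variance, median) features; the wavelet scale range, segment length, and synthetic data are assumptions.

```python
# Minimal sketch: CWT -> GGD fit -> k-NN over three features per segment.
import numpy as np
import pywt
from scipy.stats import gennorm
from sklearn.neighbors import KNeighborsClassifier

def segment_features(segment: np.ndarray) -> list[float]:
    coefs, _ = pywt.cwt(segment, scales=np.arange(1, 32), wavelet="morl")
    c = coefs.ravel()
    _, _, ggd_scale = gennorm.fit(c)             # GGD scale parameter
    return [ggd_scale, np.var(c), np.median(c)]

rng = np.random.default_rng(1)
train_X = [segment_features(rng.standard_normal(512)) for _ in range(20)]
train_y = rng.integers(0, 2, size=20)            # 1 = spike-and-wave segment
clf = KNeighborsClassifier(n_neighbors=3).fit(train_X, train_y)
```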

15 pages, 3265 KiB  
Article
State Estimation and Localization Based on Sensor Fusion for Autonomous Robots in Indoor Environment
by Mamadou Doumbia and Xu Cheng
Computers 2020, 9(4), 84; https://doi.org/10.3390/computers9040084 - 16 Oct 2020
Cited by 6 | Viewed by 3876
Abstract
Currently, almost all robot state estimation and localization systems are based on the Kalman filter (KF) and its derived methods, in particular the unscented Kalman filter (UKF). When applying the UKF alone, the estimate of the state is not sufficiently precise. In this paper, a new hierarchical infrared navigational algorithm hybridization (HIRNAH) system is developed to provide better state estimation and localization for mobile robots. Two navigation subsystems (an inertial navigation system (INS) and Odom-NIRNA, which uses a novel infrared navigation algorithm (NIRNA)) and an RPLIDAR-A3 scanner cooperate to build HIRNAH. The robot pose (position and orientation) errors are estimated by a system filtering module (SFM) and used to smooth the robot's final poses. A prototype (two rotary encoders, one smartphone-based robot sensing model and one RPLIDAR-A3 scanner) has been built and mounted on a four-wheeled mobile robot (4-WMR). Simulation results have motivated real-life experiments, and the obtained results are compared to some existing approaches (hardware and control technology navigation (HCTNav), rapid exploring random tree (RRT) and stand-alone INS) for performance measurement. The experimental results confirm that HIRNAH presents a more accurate estimation and a lower mean square error (MSE) of the robot's state than those calculated by the previously cited HCTNav, RRT and INS. Full article

12 pages, 491 KiB  
Review
Cybersecurity in Intelligent Transportation Systems
by Teodora Mecheva and Nikolay Kakanakov
Computers 2020, 9(4), 83; https://doi.org/10.3390/computers9040083 - 13 Oct 2020
Cited by 23 | Viewed by 6878
Abstract
Intelligent Transportation Systems (ITS) are an emerging field characterized by complex data models, dynamics and strict time requirements. Ensuring cybersecurity in ITS is a complex task on which the safety and efficiency of transportation depend. The imposition of standards for a comprehensive architecture, as well as specific security standards, is one of the key steps in the evolution of ITS. The article examines the general outlines of the ITS architecture and its security issues. The main foci of security approaches are: configuration and initialization of devices during manufacturing at the perception layer; anonymous authentication of nodes in VANET at the network layer; defense of fog-based structures at the support layer; and the description and standardization of the complex model of data and metadata, together with the defense of AI-based systems, at the application layer. The article reviews some conventional methods, such as network segmentation and cryptography, that should be adapted in order to be applied to ITS cybersecurity. The focus is on innovative approaches that have recently been trying to find their place in ITS security strategies. These approaches include blockchain, bloom filters, fog computing, artificial intelligence, game theory and ontologies. In conclusion, a correlation is made between the discussed methods, the problems they solve and the architectural layers in which they are applied. Full article
(This article belongs to the Special Issue Smart Computing for Smart Cities (SC2))

10 pages, 739 KiB  
Editorial
Special Issue “Post-IP Networks: Advances on RINA and other Alternative Network Architectures”
by John Day, Eduard Grasa and Peyman Teymoori
Computers 2020, 9(4), 82; https://doi.org/10.3390/computers9040082 - 13 Oct 2020
Viewed by 2458
Abstract
Over the last two decades, research funding bodies have supported “Future Internet”, “New-IP”, and “Next Generation” design initiatives intended to reduce network complexity by redesigning the network protocol architecture, questioning some of its key principles [...] Full article

25 pages, 556 KiB  
Article
Asymmetric Attributional Word Similarity Measures to Detect the Relations of Textual Generality
by Sebastião Pais and Gaël Dias
Computers 2020, 9(4), 81; https://doi.org/10.3390/computers9040081 - 10 Oct 2020
Viewed by 2333
Abstract
In this work, we present a new unsupervised and language-independent methodology to detect relations of textual generality. For this, we introduce a particular case of Textual Entailment (TE), namely Textual Entailment by Generality (TEG). TE aims to capture primary semantic inference needs across applications in Natural Language Processing (NLP). Since 2005, in the TE Recognition (RTE) task, systems have been asked to automatically judge whether the meaning of a portion of text, the Text (T), entails the meaning of another text, the Hypothesis (H). Several novel approaches and improvements in TE technologies demonstrated in RTE Challenges are signaling renewed interest in a deeper and better understanding of the core phenomena involved in TE. In line with this direction, in this work, we focus on a particular case of entailment, entailment by generality, to detect the relations of textual generality. In text, there are different kinds of entailments, yielded from different types of implicative reasoning (lexical, syntactical, common sense based), but here we focus just on TEG, which can be defined as an entailment from a specific statement towards a relatively more general one. Therefore, we have TEG whenever the premise T entails the hypothesis H, with H also being more general than the premise. We propose an unsupervised and language-independent method to recognize TEGs from a pair (T, H) having an entailment relation. To this end, we introduce an Informative Asymmetric Measure (IAM) called Simplified Asymmetric InfoSimba (AISs), which we combine with different Asymmetric Association Measures (AAM). In this work, we hypothesize about the existence of a particular mode of TE, namely TEG. Thus, the main contribution of our study is highlighting the importance of this inference mechanism. Consequently, the new annotation data seem to be a valuable resource for the community. Full article

23 pages, 861 KiB  
Review
Progress and Challenges in Generative Product Design: A Review of Systems
by James Mountstephens and Jason Teo
Computers 2020, 9(4), 80; https://doi.org/10.3390/computers9040080 - 09 Oct 2020
Cited by 29 | Viewed by 5262
Abstract
Design is a challenging task that is crucial to all product development. Advances in design computing may allow machines to move from a supporting role to generators of design content. Generative Design systems produce designs by algorithms and offer the potential for the exploration of vast design spaces, the fostering of creativity, the combination of objective and subjective requirements, and the revolutionary integration of conceptual and detailed design phases. The application of generative methods to the design of discrete, physical, engineered products has not yet been reviewed. This paper reviews the Generative Product Design systems developed since 1998 in order to identify significant approaches and trends. Systems are analyzed according to their primary goal, generative method, the design phase they focus on, whether the generation is automatic or interactive, the number of design options they generate, and the types of design requirements involved in the generation process. Progress using this approach is recognized, and a number of challenges that must be addressed in order to achieve widespread acceptance are identified. Possible solutions are offered, including innovative approaches in Human–Computer Interaction. Full article

23 pages, 3246 KiB  
Article
Structured (De)composable Representations Trained with Neural Networks
by Graham Spinks and Marie-Francine Moens
Computers 2020, 9(4), 79; https://doi.org/10.3390/computers9040079 - 02 Oct 2020
Cited by 1 | Viewed by 2291
Abstract
This paper proposes a novel technique for representing templates and instances of concept classes. A template representation refers to the generic representation that captures the characteristics of an entire class. The proposed technique uses end-to-end deep learning to learn structured and composable representations from input images and discrete labels. The obtained representations are based on distance estimates between the distributions given by the class label and those given by contextual information, which are modeled as environments. We prove that the representations have a clear structure allowing decomposing the representation into factors that represent classes and environments. We evaluate our novel technique on classification and retrieval tasks involving different modalities (visual and language data). In various experiments, we show how the representations can be compressed and how different hyperparameters impact performance. Full article
(This article belongs to the Special Issue Artificial Neural Networks in Pattern Recognition)

14 pages, 2495 KiB  
Article
Comparison of Frontal-Temporal Channels in Epilepsy Seizure Prediction Based on EEMD-ReliefF and DNN
by Aníbal Romney and Vidya Manian
Computers 2020, 9(4), 78; https://doi.org/10.3390/computers9040078 - 29 Sep 2020
Cited by 9 | Viewed by 3675
Abstract
Epilepsy patients who do not have their seizures controlled with medication or surgery live in constant fear. The psychological burden of uncertainty surrounding the occurrence of random seizures is one of the most stressful and debilitating aspects of the disease. Despite the research progress in this field, there is a need for a non-invasive prediction system that helps disrupt epileptiform seizures. Electroencephalogram (EEG) signals are non-stationary and nonlinear and vary with each patient and every recording. Full use of the non-invasive electrode channels is impractical for real-time use. To address these challenges, we propose using two frontal-temporal electrode channels based on ensemble empirical mode decomposition (EEMD) and Relief methods. The EEMD decomposes the segmented data frame in the ictal state into its intrinsic mode functions, and then we apply Relief to select the most relevant oscillatory components. A deep neural network (DNN) model learns these features to perform seizure prediction and early detection on patient-specific EEG recordings. The model yields an average sensitivity and specificity of 86.7% and 89.5%, respectively. The two-channel model shows the ability to capture patterns from brain locations for non-frontal-temporal seizures. Full article
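Editor's note: the sketch below shows a simplified Relief-style scoring of candidate features (e.g., band powers of EEMD components), keeping the highest-scoring ones for the DNN; the EEMD front end and the paper's exact ReliefF variant are omitted, and this is an illustration under those assumptions.

```python
# Minimal sketch of simplified Relief scoring: reward features that separate a
# sample from its nearest other-class neighbor more than from its nearest
# same-class neighbor.
import numpy as np

def relief_scores(X: np.ndarray, y: np.ndarray, n_rounds: int = 100, seed=0):
    rng = np.random.default_rng(seed)
    X = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + 1e-12)
    w = np.zeros(X.shape[1])
    for _ in range(n_rounds):
        i = rng.integers(len(X))
        same, diff = (y == y[i]), (y != y[i])
        same[i] = False                              # exclude the sample itself
        d = np.abs(X - X[i]).sum(axis=1)
        hit = X[same][d[same].argmin()]              # nearest same-class sample
        miss = X[diff][d[diff].argmin()]             # nearest other-class sample
        w += np.abs(X[i] - miss) - np.abs(X[i] - hit)
    return w / n_rounds

# Features with the highest scores are kept as inputs to the DNN classifier.
```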

18 pages, 2665 KiB  
Article
PriADA: Management and Adaptation of Information Based on Data Privacy in Public Environments
by Hugo Lopes, Ivan Miguel Pires, Hector Sánchez San Blas, Raúl García-Ovejero and Valderi Leithardt
Computers 2020, 9(4), 77; https://doi.org/10.3390/computers9040077 - 28 Sep 2020
Cited by 22 | Viewed by 3404
Abstract
Mobile devices cause a constant struggle in the pursuit of data privacy. The number of mobile devices in the world keeps increasing, and with this increase and technological evolution, large amounts of data associated with each individual are generated and stored remotely. Thus, the topic of data privacy is highlighted in several areas, and inherent to it is the need to control and manage the data in circulation. This article presents an approach to the interaction between the individual and the public environment, where this interaction determines access to information. The analysis is based on a data privacy management model for open environments created after reviewing and analyzing current technologies. A mobile application based on location via the Global Positioning System (GPS) was developed to substantiate this model, which takes the General Data Protection Regulation (GDPR) into account to control and manage access to each individual's data. Full article
(This article belongs to the Special Issue IoT: Security, Privacy and Best Practices)
