Special Issue "Feature Paper in Computers"

A special issue of Computers (ISSN 2073-431X).

Deadline for manuscript submissions: closed (31 December 2020).

Special Issue Editor

Prof. Dr. Stefan Gumhold
Guest Editor
Professorship for Computer Graphics and Visualization, Technische Universität Dresden, 01062 Dresden, Germany
Interests: scientific visualization; visual analysis; geometry processing; 3D acquisition; scene understanding

Special Issue Information

Dear Colleagues,

This Special Issue consists of high-quality, open access papers by Editorial Board Members, or by authors invited by the Editorial Office and the Editor-in-Chief, covering topics across the computer sciences.

Prof. Dr. Stefan Gumhold
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Computers is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (18 papers)

Research

Open Access Article
Improving Business Performance by Employing Virtualization Technology: A Case Study in the Financial Sector
Computers 2021, 10(4), 52; https://0-doi-org.brum.beds.ac.uk/10.3390/computers10040052 - 16 Apr 2021
Viewed by 238
Abstract
The financial crisis of the last decade has left many financial institutions with limited personnel and equipment resources. Thus, the IT departments of these institutions are being asked to explore novel approaches to resolve these constraints in a cost-effective and efficient manner. The goal of this paper is to measure the impact of modern enabling technologies, such as virtualization, in the process of replacing legacy infrastructures. This paper proposes an IT services upgrade plan for an organization using modern technologies. For this purpose, research took place in an operating financial institution, which required a significant upgrade of both its service level and its hardware infrastructure. A virtualization implementation and deployment assessment for the entire infrastructure was conducted, and the resulting consolidated data are presented and analysed. The paper concludes with a five-year financial evaluation of the proposed approach with respect to the projection of expenditures, the return on investment and profitability. Full article
(This article belongs to the Special Issue Feature Paper in Computers)
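As a purely illustrative aside (not material from the paper), the kind of five-year expenditure projection and return-on-investment figure the abstract mentions can be sketched in a few lines of Python; all cost and saving values below are invented placeholders:

```python
# Hypothetical five-year comparison of a legacy vs. virtualized infrastructure.
# Every monetary figure is an illustrative placeholder, not a value from the study.
legacy_costs = [120_000, 125_000, 130_000, 135_000, 140_000]   # projected yearly spend, legacy
virtual_costs = [180_000, 60_000, 62_000, 64_000, 66_000]      # year 1 includes migration capex

extra_investment = virtual_costs[0] - legacy_costs[0]           # additional up-front spend
yearly_savings = [l - v for l, v in zip(legacy_costs, virtual_costs)]

cumulative = 0.0
for year, saving in enumerate(yearly_savings, start=1):
    cumulative += saving
    print(f"Year {year}: saving {saving:>9,.0f}, cumulative {cumulative:>10,.0f}")

roi = sum(yearly_savings) / extra_investment   # simple ROI over the five-year horizon
print(f"Five-year ROI relative to the extra initial investment: {roi:.1%}")
```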

Open Access Article
Online Judging Platform Utilizing Dynamic Plagiarism Detection Facilities
Computers 2021, 10(4), 47; https://0-doi-org.brum.beds.ac.uk/10.3390/computers10040047 - 08 Apr 2021
Viewed by 369
Abstract
A programming contest generally involves the host presenting a set of logical and mathematical problems to the contestants. The contestants are required to write computer programs that are capable of solving these problems. An online judge system is used to automate the judging procedure of the programs that are submitted by the users. Online judges are systems designed for the reliable evaluation of the source code submitted by the users. Traditional online judging platforms are not well suited for programming labs, as they do not support partial scoring and efficient detection of plagiarized code. Considering this, in this paper, we present an online judging framework that is capable of automatically scoring submissions by efficiently detecting plagiarized content and the level of accuracy of the code. Our system detects plagiarism by extracting fingerprints of programs and comparing the fingerprints instead of the whole files. We used winnowing to select fingerprints among the k-gram hash values of a source code, generated by the Rabin–Karp algorithm. The proposed system is compared with existing online judging platforms to show its superiority in terms of time efficiency, correctness, and feature availability. In addition, we evaluated our system on large data sets, comparing its run time with that of MOSS, a widely used plagiarism detection tool. Full article
(This article belongs to the Special Issue Feature Paper in Computers)
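As a purely illustrative aside (not the authors' implementation), fingerprinting source code by winnowing over k-gram hashes can be sketched as follows; Python's built-in hash stands in for the Rabin–Karp rolling hash, and the k and w values are arbitrary:

```python
# Minimal sketch of k-gram fingerprinting with winnowing (Schleimer et al. style).
# The hash function and the parameters are simplifications, not the paper's code.

def kgram_hashes(text: str, k: int) -> list[int]:
    """Hash every k-character substring (a Rabin-Karp rolling hash would do this in O(n))."""
    return [hash(text[i:i + k]) & 0xFFFFFFFF for i in range(len(text) - k + 1)]

def winnow(hashes: list[int], w: int) -> set[tuple[int, int]]:
    """From every window of w consecutive hashes, keep the rightmost minimum as a fingerprint."""
    fingerprints = set()
    for start in range(len(hashes) - w + 1):
        window = hashes[start:start + w]
        smallest = min(window)
        pos = start + max(i for i, h in enumerate(window) if h == smallest)
        fingerprints.add((smallest, pos))
    return fingerprints

def similarity(a: str, b: str, k: int = 5, w: int = 4) -> float:
    """Jaccard similarity of the two fingerprint sets (positions ignored)."""
    fa = {h for h, _ in winnow(kgram_hashes(a, k), w)}
    fb = {h for h, _ in winnow(kgram_hashes(b, k), w)}
    return len(fa & fb) / max(1, len(fa | fb))

print(similarity("for i in range(10): total += i",
                 "for j in range(10): acc += j"))
```

A full system would normalize the source first (strip whitespace, comments, and identifier names) so that renamed variables still collide on the same fingerprints.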

Open Access Article
Simulation and Analysis of Self-Replicating Robot Decision-Making Systems
Computers 2021, 10(1), 9; https://0-doi-org.brum.beds.ac.uk/10.3390/computers10010009 - 06 Jan 2021
Viewed by 758
Abstract
Self-replicating robot systems (SRRSs) are a new prospective paradigm for robotic exploration. They can potentially lower mission costs and enhance mission capabilities by allowing some of the materials needed for robotic system construction to be collected in situ and used for robot fabrication. The use of a self-replicating robot system can also lower risk aversion, since lost or damaged robots can potentially be replenished, and may increase the likelihood of mission success. This paper proposes and compares system configurations of an SRRS. A simulation system was designed and is used to model how an SRRS performs based on its system configuration, attributes, and operating environment. Experiments were conducted using this simulation and the results are presented. Full article
(This article belongs to the Special Issue Feature Paper in Computers)

Open Access Article
EEG and Deep Learning Based Brain Cognitive Function Classification
Computers 2020, 9(4), 104; https://0-doi-org.brum.beds.ac.uk/10.3390/computers9040104 - 21 Dec 2020
Viewed by 790
Abstract
Electroencephalogram signals are used to assess neurodegenerative diseases and develop sophisticated brain machine interfaces for rehabilitation and gaming. Most applications use only motor imagery or evoked potentials. Here, a deep learning network based on a sensory motor paradigm (auditory, olfactory, movement, and motor imagery) that employs a subject-agnostic Bidirectional Long Short-Term Memory (BLSTM) network is developed to assess cognitive functions and identify their relationship with brain signal features, which is hypothesized to consistently indicate cognitive decline. Testing was carried out with healthy subjects aged 20–40, 40–60, and >60 years, and with mildly cognitively impaired (MCI) subjects. Auditory and olfactory stimuli were presented to the subjects, and the subjects imagined and conducted movement of each arm, during which Electroencephalogram (EEG)/Electromyogram (EMG) signals were recorded. A deep BLSTM neural network is trained with principal component features from the evoked signals and assesses their corresponding pathways. Wavelet analysis is used to decompose the evoked signals and calculate the band power of the component frequency bands. This deep learning system performs better than conventional deep neural networks in detecting MCI. Most features studied peaked in the 40–60 age range and were lower for the MCI group than for any other group tested. Detection accuracy of left-hand motor imagery signals best indicated cognitive aging (p = 0.0012); here, the mean classification accuracy per age group declined from 91.93% to 81.64%, and is 69.53% for MCI subjects. Motor-imagery-evoked band power, particularly in the gamma bands, also indicated cognitive aging (p = 0.007). The classification accuracy of the evoked potentials most effectively distinguished cognitive aging from MCI (p < 0.05), followed by gamma-band power. Full article
(This article belongs to the Special Issue Feature Paper in Computers)
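As a purely illustrative aside (layer sizes, the number of principal components, and the epoch length below are assumptions, not values from the paper), a subject-agnostic bidirectional LSTM classifier over sequences of principal-component features could be set up roughly like this in Keras:

```python
# Sketch of a bidirectional LSTM classifier for evoked-signal feature sequences.
# All shapes and hyperparameters are placeholders, not the paper's configuration.
import numpy as np
import tensorflow as tf

timesteps, n_components, n_classes = 128, 8, 4   # assumed: PCA-reduced epochs, 4 subject groups

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(timesteps, n_components)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Dummy arrays standing in for PCA features of EEG/EMG epochs and their group labels.
X = np.random.randn(32, timesteps, n_components).astype("float32")
y = np.random.randint(0, n_classes, size=32)
model.fit(X, y, epochs=1, batch_size=8, verbose=0)
print(model.predict(X[:2]).shape)   # (2, n_classes) class probabilities
```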

Open Access Article
FuseVis: Interpreting Neural Networks for Image Fusion Using Per-Pixel Saliency Visualization
Computers 2020, 9(4), 98; https://0-doi-org.brum.beds.ac.uk/10.3390/computers9040098 - 10 Dec 2020
Viewed by 1200
Abstract
Image fusion helps in merging two or more images to construct a more informative single fused image. Recently, unsupervised learning-based convolutional neural networks (CNN) have been used for different types of image-fusion tasks such as medical image fusion, infrared-visible image fusion for autonomous driving, as well as multi-focus and multi-exposure image fusion for satellite imagery. However, it is challenging to analyze the reliability of these CNNs for image-fusion tasks since no ground truth is available. This has led to the use of a wide variety of model architectures and optimization functions yielding quite different fusion results. Additionally, due to the highly opaque nature of such neural networks, it is difficult to explain the internal mechanics behind their fusion results. To overcome these challenges, we present a novel real-time visualization tool, named FuseVis, with which the end user can compute per-pixel saliency maps that examine the influence of the input image pixels on each pixel of the fused image. We trained several image-fusion CNNs on medical image pairs and then, using our FuseVis tool, performed case studies on a specific clinical application by interpreting the saliency maps from each of the fusion methods. We specifically visualized the relative influence of each input image on the predictions of the fused image and showed that some of the evaluated image-fusion methods are better suited for the specific clinical application. To the best of our knowledge, there is currently no approach for the visual analysis of neural networks for image fusion. Therefore, this work opens a new research direction to improve the interpretability of deep fusion networks. The FuseVis tool can also be adapted to other deep neural network-based image processing applications to make them interpretable. Full article
(This article belongs to the Special Issue Feature Paper in Computers)
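As a purely illustrative aside (a generic autograd recipe rather than the FuseVis implementation, with a trivial averaging stand-in for a trained fusion CNN), the gradient of one fused-image pixel with respect to both input images can be computed as follows:

```python
# Sketch: per-pixel saliency of a fused image with respect to its two inputs, via autograd.
import torch

def toy_fuse(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    return 0.5 * (a + b)                      # placeholder for a trained fusion network

a = torch.rand(1, 1, 64, 64, requires_grad=True)   # e.g., an MRI slice
b = torch.rand(1, 1, 64, 64, requires_grad=True)   # e.g., a PET slice
fused = toy_fuse(a, b)

y, x = 32, 32                                       # the fused pixel under inspection
grad_a, grad_b = torch.autograd.grad(fused[0, 0, y, x], (a, b))

# grad_a and grad_b are saliency maps: how strongly each input pixel influences fused[y, x].
print(grad_a.abs().sum().item(), grad_b.abs().sum().item())
```

Doing this for every fused pixel at interactive rates is where a dedicated tool becomes necessary; the snippet only shows the underlying derivative.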

Open Access Article
kNN Prototyping Schemes for Embedded Human Activity Recognition with Online Learning
Computers 2020, 9(4), 96; https://0-doi-org.brum.beds.ac.uk/10.3390/computers9040096 - 03 Dec 2020
Viewed by 566
Abstract
The kNN machine learning method is widely used as a classifier in Human Activity Recognition (HAR) systems. Although the kNN algorithm works similarly in both online and offline modes, the use of all training instances is much more critical online than offline due to the time and memory restrictions of the online mode. Some methods propose decreasing the high computational costs of kNN by focusing, e.g., on approximate kNN solutions such as the ones relying on Locality-Sensitive Hashing (LSH). However, embedded kNN implementations also need to address the target device’s memory constraints, especially since online classification must cope with those constraints to be practical. This paper discusses online approaches to reduce the number of training instances stored in the kNN search space. To address practical implementations of HAR systems using kNN, this paper presents simple, energy- and computationally efficient, real-time feasible schemes to maintain at runtime a maximum number of training instances stored by kNN. The proposed schemes include policies for substituting the training instances, keeping the search space within a maximum size. Experiments in the context of HAR datasets show the efficiency of our best schemes. Full article
(This article belongs to the Special Issue Feature Paper in Computers)
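As a purely illustrative aside (the substitution policy shown, evicting a random stored instance of the incoming class once the store is full, is an assumption for the sketch and not necessarily one of the paper's schemes), a capacity-bounded instance store for online kNN might look like this:

```python
# Sketch: a fixed-capacity instance store for online kNN with a simple substitution policy.
import random
from collections import Counter

class BoundedKNN:
    def __init__(self, k: int = 3, capacity: int = 100):
        self.k, self.capacity = k, capacity
        self.X: list[list[float]] = []
        self.y: list[int] = []

    def add(self, x: list[float], label: int) -> None:
        if len(self.X) >= self.capacity:
            same = [i for i, lbl in enumerate(self.y) if lbl == label]
            victim = random.choice(same) if same else random.randrange(len(self.X))
            self.X[victim], self.y[victim] = x, label     # substitute instead of growing
        else:
            self.X.append(x)
            self.y.append(label)

    def predict(self, x: list[float]) -> int:
        dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
        nearest = sorted(range(len(self.X)), key=lambda i: dist(self.X[i], x))[: self.k]
        return Counter(self.y[i] for i in nearest).most_common(1)[0][0]

knn = BoundedKNN(k=3, capacity=50)
for _ in range(200):                        # a stream of fake accelerometer feature vectors
    label = random.randint(0, 2)
    knn.add([random.gauss(label, 0.3) for _ in range(3)], label)
print(knn.predict([1.0, 1.0, 1.0]))         # the store never exceeds 50 instances
```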

Open Access Article
Predicting Employee Attrition Using Machine Learning Techniques
Computers 2020, 9(4), 86; https://0-doi-org.brum.beds.ac.uk/10.3390/computers9040086 - 03 Nov 2020
Viewed by 1371
Abstract
There are several areas in which organisations can adopt technologies that will support decision-making: artificial intelligence is one of the most innovative technologies that is widely used to assist organisations in business strategies, organisational aspects and people management. In recent years, attention has increasingly been paid to human resources (HR), since worker quality and skills represent a growth factor and a real competitive advantage for companies. After having been introduced in sales and marketing departments, artificial intelligence is also starting to guide employee-related decisions within HR management. The purpose is to support decisions that are based not on subjective aspects but on objective data analysis. The goal of this work is to analyse how objective factors influence employee attrition, in order to identify the main causes that contribute to a worker’s decision to leave a company, and to be able to predict whether a particular employee will leave the company. After training, the obtained model for predicting employee attrition is tested on a real dataset provided by IBM analytics, which includes 35 features and about 1500 samples. Results are expressed in terms of classical metrics, and the algorithm that produced the best results for the available dataset is the Gaussian Naïve Bayes classifier. It achieves the best recall rate (0.54), a metric that measures the ability of a classifier to find all the positive instances, with an overall false negative rate equal to 4.5% of the total observations. Full article
(This article belongs to the Special Issue Feature Paper in Computers)
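As a purely illustrative aside (synthetic data and an arbitrary train/test split stand in for the IBM HR dataset and the paper's experimental protocol), a Gaussian Naïve Bayes attrition classifier scored on recall can be set up with scikit-learn as follows:

```python
# Sketch: Gaussian Naive Bayes for attrition prediction, evaluated on recall.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import recall_score, confusion_matrix

rng = np.random.default_rng(0)
X = rng.normal(size=(1500, 35))            # placeholder feature matrix (35 features, ~1500 rows)
y = rng.binomial(1, 0.2, size=1500)        # assumed imbalanced attrition label, illustrative only

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
clf = GaussianNB().fit(X_tr, y_tr)
pred = clf.predict(X_te)

tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
print("recall:", recall_score(y_te, pred))
print("false negatives over all test observations:", fn / len(y_te))
```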

Open Access Article
Fog Computing for Realizing Smart Neighborhoods in Smart Grids
Computers 2020, 9(3), 76; https://0-doi-org.brum.beds.ac.uk/10.3390/computers9030076 - 21 Sep 2020
Viewed by 1122
Abstract
Cloud Computing provides on-demand computing services like software, networking, storage, analytics, and intelligence over the Internet (“the cloud”). However, it faces challenges because of the explosion of Internet of Things (IoT) devices and the volume, variety, veracity and velocity of the data generated by these devices. There is a need for ultra-low latency and reliable service, along with security and privacy. Fog Computing is a promising solution to overcome these challenges. The originality, scope and novelty of this paper lie in the definition and formulation of the problem of smart neighborhoods in the context of smart grids. This is achieved through an extensive literature study, firstly on Fog Computing, its foundation technologies and its applications, followed by a review of Fog Computing research in various application domains. Thereafter, we introduce the smart grid and community MicroGrid concepts and their challenges to give an in-depth background to the problem, and then formalize it. The smart grid, which ensures a reliable, secure, and cost-effective power supply to smart neighborhoods, needs a Fog Computing architecture to achieve its purpose effectively. This paper also identifies, without rigorous analysis, potential solutions to address the problem of smart neighborhoods. The challenges in the integration of Fog Computing and smart grids are also discussed. Full article
(This article belongs to the Special Issue Feature Paper in Computers)

Open Access Article
Unmanned Aerial Vehicle Control through Domain-Based Automatic Speech Recognition
Computers 2020, 9(3), 75; https://0-doi-org.brum.beds.ac.uk/10.3390/computers9030075 - 19 Sep 2020
Cited by 1 | Viewed by 1092
Abstract
Currently, unmanned aerial vehicles, such as drones, are becoming a part of our lives and extending into many areas of society, including the industrialized world. A common alternative for controlling the movements and actions of a drone is through unwired tactile interfaces, for which different remote control devices are used. However, control through such devices is not a natural, human-like communication interface, and it can be difficult for some users to master. In this research, we experimented with a domain-based speech recognition architecture to effectively control an unmanned aerial vehicle such as a drone. The drone control was performed in a more natural, human-like way of communicating the instructions. Moreover, we implemented an algorithm for command interpretation in both Spanish and English, as well as for controlling the movements of the drone in a simulated domestic environment. We conducted experiments in which participants gave voice commands to the drone in both languages in order to compare the effectiveness of each, considering the mother tongue of the participants in the experiment. Additionally, different levels of distortion were applied to the voice commands to test the proposed approach when it encounters noisy input signals. The results obtained showed that the unmanned aerial vehicle was capable of interpreting user voice instructions. Speech-to-action recognition improved for both languages with phoneme matching in comparison to only using the cloud-based algorithm without domain-based instructions. Using raw audio inputs, the cloud-based approach achieves 74.81% and 97.04% accuracy for English and Spanish instructions, respectively. However, with our phoneme matching approach the results improve, yielding 93.33% accuracy for English and 100.00% accuracy for Spanish. Full article
(This article belongs to the Special Issue Feature Paper in Computers)
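As a purely illustrative aside (a crude string-level stand-in for the paper's phoneme matching, with an invented command set and threshold), snapping a noisy transcription onto a small domain vocabulary can be sketched like this:

```python
# Sketch: map a possibly misrecognized transcript onto a fixed drone command set.
import difflib

COMMANDS = ["take off", "land", "move forward", "move backward",
            "turn left", "turn right", "go up", "go down", "stop"]

def interpret(transcript: str, cutoff: float = 0.6) -> str | None:
    """Return the closest domain command, or None if nothing is similar enough."""
    match = difflib.get_close_matches(transcript.lower().strip(), COMMANDS, n=1, cutoff=cutoff)
    return match[0] if match else None

print(interpret("take of"))        # close to "take off", so the dropped letter is recovered
print(interpret("turnleft"))       # close to "turn left"
print(interpret("play music"))     # no command is similar enough, so None is returned
```

Working at the phoneme level, as the paper does, makes this kind of matching robust to pronunciation and transcription errors that a plain string comparison would miss.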

Open Access Article
Toward a Sustainable Cybersecurity Ecosystem
Computers 2020, 9(3), 74; https://0-doi-org.brum.beds.ac.uk/10.3390/computers9030074 - 17 Sep 2020
Cited by 1 | Viewed by 1532
Abstract
Cybersecurity issues constitute a key concern of today’s technology-based economies. Cybersecurity has become a core need for providing a sustainable and safe society to online users in cyberspace. Considering the rapid increase of technological implementations, it has become a global necessity to adopt security countermeasures, whether direct or indirect, and to protect systems from cyberthreats. Identifying, characterizing, and classifying such threats and their sources is required for a sustainable cyber-ecosystem. This paper focuses on the cybersecurity of smart grids and on emerging trends such as using blockchain in the Internet of Things (IoT). The cybersecurity of emerging technologies such as smart cities is also discussed. In addition, associated solutions based on artificial intelligence and machine learning frameworks to prevent cyber-risks are discussed. Our review will serve as a reference for policy-makers from industry, government, and the cybersecurity research community. Full article
(This article belongs to the Special Issue Feature Paper in Computers)

Open Access Article
Privacy-Preserving Passive DNS
Computers 2020, 9(3), 64; https://0-doi-org.brum.beds.ac.uk/10.3390/computers9030064 - 12 Aug 2020
Cited by 3 | Viewed by 1850
Abstract
The Domain Name System (DNS) was created so that the easily remembered names of web servers can be resolved to their IP addresses. When it was initially created, security was not a major concern; nowadays, this lack of inherent security and trust has exposed the global DNS infrastructure to malicious actors. The passive DNS data collection process creates a database containing various DNS data elements, some of which are personal and need to be protected to preserve the privacy of the end users. To this end, we propose the use of distributed ledger technology. We use Hyperledger Fabric to create a permissioned blockchain, which only authorized entities can access. The proposed solution supports queries for storing and retrieving data from the blockchain ledger, allowing the use of the passive DNS database for further analysis, e.g., for the identification of malicious domain names. Additionally, it effectively protects the DNS personal data from unauthorized entities, including administrators who could act as malicious insiders, and allows only the data owners to perform queries over these data. We evaluated our proposed solution by creating a proof-of-concept experimental setup that passively collects DNS data from a network and then uses the distributed ledger technology to store the data in an immutable ledger, thus providing a full historical overview of all the records. Full article
(This article belongs to the Special Issue Feature Paper in Computers)
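As a purely illustrative aside (this is not Hyperledger Fabric chaincode; it only sketches the idea of keyed pseudonymization of the personal fields of a passive DNS record before it is handed to a ledger client, with invented field names):

```python
# Sketch: pseudonymize the personal fields of a passive DNS record with a keyed hash
# before submitting it to a ledger. Field names and the key handling are illustrative.
import hmac, hashlib, json

SECRET_KEY = b"replace-with-a-per-deployment-secret"

def pseudonymize(value: str) -> str:
    """Deterministic keyed digest: equal inputs stay linkable, but the raw value is never stored."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()

def prepare_record(raw: dict) -> str:
    record = {
        "query": raw["query"],                        # domain name, kept in clear for threat analysis
        "rrtype": raw["rrtype"],
        "answer": raw["answer"],
        "first_seen": raw["first_seen"],
        "client_id": pseudonymize(raw["client_ip"]),  # personal field, stored only as a digest
    }
    return json.dumps(record, sort_keys=True)

print(prepare_record({"query": "example.com", "rrtype": "A", "answer": "93.184.216.34",
                      "first_seen": "2020-08-12T10:00:00Z", "client_ip": "192.0.2.7"}))
```

Access control, i.e., ensuring that only the data owners can query their own records, is what the permissioned blockchain layer adds on top of this.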

Open Access Article
Possibilities of Electromagnetic Penetration of Displays of Multifunction Devices
Computers 2020, 9(3), 62; https://0-doi-org.brum.beds.ac.uk/10.3390/computers9030062 - 08 Aug 2020
Cited by 1 | Viewed by 1230
Abstract
Protection of information against electromagnetic penetration is very often considered in terms of the possibility of obtaining data contained in printed documents or displayed on screen monitors. However, many printing devices are equipped with screens based on LED technology or liquid crystal displays. The most frequently displayed information consists of options for selecting the parameters of the printed document and the technical settings of the device (e.g., screen activity time). On more extensive displays, more detailed information appears, which may contain data that are not always irrelevant to third parties. Such data can be: names of printed documents (or documents registered and available on the internal media), service password access, user names, or the activity of regular printer users. The printer display can be treated as a source of revealing emissions, like a typical screen monitor. The emissions correlated with the displayed data may allow us to obtain the abovementioned information. The article includes analyses of various types of computer printer displays. The test results concerning the existing threat are presented in the form of reconstructed images that show the possibility of reading the text data contained in them. Full article
(This article belongs to the Special Issue Feature Paper in Computers)

Open Access Article
Increasing Innovative Working Behaviour of Information Technology Employees in Vietnam by Knowledge Management Approach
Computers 2020, 9(3), 61; https://0-doi-org.brum.beds.ac.uk/10.3390/computers9030061 - 01 Aug 2020
Cited by 1 | Viewed by 1765
Abstract
Today, Knowledge Management (KM) is becoming a popular approach for improving organizational innovation, but whether encouraging knowledge sharing leads to better innovative working behaviour among employees is still an open question. This study aims to identify the factors of KM affecting the innovative working behaviour of Information Technology (IT) employees in Vietnam. The research model involves three elements (attitude, subjective norm and perceived behavioural control) affecting knowledge sharing, which in turn affects innovative working behaviour. The research method is quantitative. The survey was conducted with 202 samples via a five-point scale questionnaire. The analysis results show that knowledge sharing has a positive impact on the innovative working behaviour of IT employees in Vietnam. Besides, attitude and perceived behavioural control are confirmed to have a strong positive effect on knowledge sharing, but the subjective norm has no significant impact on knowledge sharing. Based on these results, recommendations to promote knowledge sharing and the innovative work behaviour of IT employees in Vietnam are made. Full article
(This article belongs to the Special Issue Feature Paper in Computers)

Open Access Article
Predicting LoRaWAN Behavior: How Machine Learning Can Help
Computers 2020, 9(3), 60; https://0-doi-org.brum.beds.ac.uk/10.3390/computers9030060 - 31 Jul 2020
Cited by 1 | Viewed by 1278
Abstract
Large-scale deployments of Internet of Things (IoT) networks are becoming reality. From a technology perspective, a lot of information related to device parameters, channel states, network and application data is stored in databases and can be used for extensive analysis to improve the functionality of IoT systems in terms of network performance and user services. LoRaWAN (Long Range Wide Area Network) is one of the emerging IoT technologies, with a simple protocol based on LoRa modulation. In this work, we discuss whether and how machine learning approaches can be used to improve network performance. To this aim, we describe a methodology to process LoRaWAN packets and apply a machine learning pipeline to: (i) perform device profiling, and (ii) predict the inter-arrival times of IoT packets. The latter analysis is closely related to channel and network usage and can be leveraged in the future for system performance enhancements. Our analysis mainly focuses on the use of k-means, Long Short-Term Memory Neural Networks and Decision Trees. We test these approaches on a real large-scale LoRaWAN network where the overall captured traffic is stored in a proprietary database. Our study shows how profiling techniques enable a machine learning prediction algorithm even when training is not possible because of the high error rates perceived by some devices. In this challenging case, the prediction of the inter-arrival time of packets has an error of about 3.5% for 77% of the real sequence cases. Full article
(This article belongs to the Special Issue Feature Paper in Computers)
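As a purely illustrative aside (the feature set and the number of clusters are assumptions, not the paper's configuration), device profiling from per-device traffic statistics can be sketched with k-means as follows:

```python
# Sketch: cluster LoRaWAN devices into behavioural profiles from simple traffic features.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n_devices = 200
features = np.column_stack([
    rng.exponential(600, n_devices),     # mean packet inter-arrival time per device (s)
    rng.integers(7, 13, n_devices),      # typical spreading factor
    rng.uniform(0.0, 0.4, n_devices),    # packet error rate
])

X = StandardScaler().fit_transform(features)
profiles = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

for cluster in range(4):
    members = features[profiles == cluster]
    print(f"profile {cluster}: {len(members)} devices, "
          f"mean inter-arrival {members[:, 0].mean():.0f} s, "
          f"mean PER {members[:, 2].mean():.2f}")
```

Such profiles can then inform, per device, whether an inter-arrival predictor (e.g., an LSTM) is likely to be usable, which is in the spirit of the combination the abstract describes.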

Open Access Article
An Adversarial Approach for Intrusion Detection Systems Using Jacobian Saliency Map Attacks (JSMA) Algorithm
Computers 2020, 9(3), 58; https://0-doi-org.brum.beds.ac.uk/10.3390/computers9030058 - 20 Jul 2020
Viewed by 1407
Abstract
In today’s digital world, information systems are revolutionizing the way we connect. As people adopt and integrate intelligent systems into their daily lives, the risks around cyberattacks on user-specific information have grown significantly. To ensure safe communication, Intrusion Detection Systems (IDS) have been developed, often using machine learning (ML) algorithms that have the ability to detect malware and network security violations. Recently, it was reported that IDS are prone to carefully crafted perturbations known as adversarial examples. With the aim of understanding the impact of such attacks, in this paper, we propose a novel random neural network-based adversarial intrusion detection system (RNN-ADV). The NSL-KDD dataset is utilized for training. For adversarial attack crafting, the Jacobian Saliency Map Attack (JSMA) algorithm is used, which identifies the feature that can cause the maximum change to the benign samples with the minimum added perturbation. To check the effectiveness of the proposed adversarial scheme, the results are compared with those of a deep neural network, which indicates that RNN-ADV performs better in terms of accuracy, precision, recall, F1 score and training epochs. Full article
(This article belongs to the Special Issue Feature Paper in Computers)
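As a purely illustrative aside (the tiny model, feature count, and perturbation size are placeholders; this is a generic sketch of the saliency-map step used by JSMA-style attacks, not the paper's RNN-ADV setup):

```python
# Sketch: the saliency-map core of a JSMA-style attack on a small feature-vector classifier.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_features, n_classes = 20, 2                       # stand-in for NSL-KDD-like inputs
model = nn.Sequential(nn.Linear(n_features, 16), nn.ReLU(), nn.Linear(16, n_classes))

x = torch.rand(1, n_features)                       # a benign sample, features scaled to [0, 1]
target = 1                                          # the class the attacker wants predicted

# Jacobian of the class scores with respect to the input features.
jac = torch.autograd.functional.jacobian(lambda inp: model(inp).squeeze(0), x).squeeze(1)

alpha = jac[target]                                 # effect of each feature on the target class
beta = jac.sum(dim=0) - alpha                       # combined effect on all other classes
saliency = torch.where((alpha > 0) & (beta < 0), alpha * beta.abs(), torch.zeros_like(alpha))

feature = int(saliency.argmax())                    # feature giving the largest useful change
x_adv = x.clone()
x_adv[0, feature] = torch.clamp(x_adv[0, feature] + 0.2, 0, 1)   # small perturbation, theta = 0.2
print("perturbed feature:", feature,
      "| prediction before/after:", int(model(x).argmax()), int(model(x_adv).argmax()))
```

A full attack iterates this step, perturbing one or two features at a time, until the classifier's prediction flips or a distortion budget is exhausted.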

Open Access Article
ERF: An Empirical Recommender Framework for Ascertaining Appropriate Learning Materials from Stack Overflow Discussions
Computers 2020, 9(3), 57; https://0-doi-org.brum.beds.ac.uk/10.3390/computers9030057 - 20 Jul 2020
Viewed by 1319
Abstract
Computer programmers require various kinds of instructive information during coding and development. Such information is dispersed across different sources like language documentation, wikis, and forums. As an information exchange platform, programmers broadly utilize Stack Overflow, a Web-based question answering site. In this paper, we propose a recommender system which uses a supervised machine learning approach to investigate Stack Overflow posts and present instructive information to programmers. This might help programmers solve the programming problems that they confront in their daily work. We analyzed posts related to two of the most popular programming languages, Python and PHP. We performed a few trials and found that the supervised approach could effectively extract valuable information from our corpus. We validated the performance of our system through human evaluation, which showed an accuracy of 71%. We also present an interactive interface that answers users’ queries with the matching sentences containing the most instructive information. Full article
(This article belongs to the Special Issue Feature Paper in Computers)
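As a purely illustrative aside (the labels, the pipeline, and the toy sentences are made up for the sketch; they are not the paper's corpus or model), a supervised classifier that flags instructive sentences in Stack Overflow answers could be set up like this:

```python
# Sketch: flag "instructive" sentences with a TF-IDF + logistic regression pipeline.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

sentences = [
    "Use a context manager to make sure the file is closed.",       # instructive
    "You should escape user input before building the SQL query.",  # instructive
    "Prefer list comprehensions over map with lambda for clarity.", # instructive
    "I had the same problem last week.",                             # not instructive
    "Thanks, that worked for me!",                                   # not instructive
    "Any update on this issue?",                                     # not instructive
]
labels = [1, 1, 1, 0, 0, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(sentences, labels)

new_sentences = ["Wrap the call in try/except to handle the timeout.",
                 "Did you ever solve this?"]
print(clf.predict(new_sentences))   # predicted labels, 1 = instructive, 0 = not
```

In practice the corpus would be far larger and the features richer; this only illustrates the supervised-classification step the abstract refers to.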

Review

Open Access Review
Toward Management of Uncertainty in Self-Adaptive Software Systems: IoT Case Study
Computers 2021, 10(3), 27; https://0-doi-org.brum.beds.ac.uk/10.3390/computers10030027 - 27 Feb 2021
Viewed by 496
Abstract
Adaptivity is the ability of a system to change its behavior whenever it does not achieve its requirements. Self-adaptive software systems (SASS) are considered a milestone in software development in many modern complex scientific and engineering fields. Employing self-adaptation in a system can accomplish better functionality or performance; however, it may lead to unexpected system behavior and consequently to uncertainty. The uncertainty that results from using SASS needs to be tackled from different perspectives. The Internet of Things (IoT), which utilizes the attributes of SASS, presents great development opportunities. Because the IoT is a relatively new domain, it carries a high level of uncertainty. The goal of this work is to highlight more details about self-adaptivity in software systems, describe all possible sources of uncertainty, and illustrate their effect on the ability of the system to fulfill its objectives. We provide a survey of state-of-the-art approaches coping with uncertainty in SASS and discuss their performance. We classify the different sources of uncertainty based on their location and nature in SASS. Moreover, we present the IoT as a case study to define uncertainty at different layers of the IoT stack. We use this case study to identify the sources of uncertainty, categorize the sources according to IoT stack layers, demonstrate the effect of uncertainty on the ability of the system to fulfill its objectives, and discuss the state-of-the-art approaches to mitigate the sources of uncertainty. We conclude with a set of challenges that provide a guide for future study. Full article
(This article belongs to the Special Issue Feature Paper in Computers)

Open Access Review
A Review of Agent-Based Programming for Multi-Agent Systems
Computers 2021, 10(2), 16; https://0-doi-org.brum.beds.ac.uk/10.3390/computers10020016 - 27 Jan 2021
Viewed by 773
Abstract
Intelligent and autonomous agents form a subarea of symbolic artificial intelligence in which agents decide, either reactively or proactively, upon a course of action by reasoning about the information that is available about the world (including the environment, the agent itself, and other agents). It encompasses a multitude of techniques, such as negotiation protocols, agent simulation, multi-agent argumentation, multi-agent planning, and many others. In this paper, we focus on agent programming and provide a systematic review of the literature on agent-based programming for multi-agent systems. In particular, we discuss both veteran (still maintained) and novel agent programming languages, their extensions, work on comparing some of these languages, and applications found in the literature that make use of agent programming. Full article
(This article belongs to the Special Issue Feature Paper in Computers)