Computers, Volume 8, Issue 1 (March 2019) – 27 articles

Cover Story: We investigated user satisfaction in AR applied to three practical use cases. User satisfaction can be divided into satisfaction with the interaction and with the delivery device. A total of 142 participants from the three different industrial sectors of aeronautics, medicine, and astronautics contributed to this study. In our analysis, we investigated the influence of different factors, such as age, gender, education level, Internet knowledge, and the participants’ roles in the different sectors. Our results showed that computer knowledge has a positive effect on user satisfaction. Further analysis using two-factor interactions showed that there is no significant interaction between the different factors and user satisfaction. The results affirm that the questionnaires developed for user satisfaction of smart glasses and AR application performed well, with recommendations for further improvement. [...]
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
15 pages, 786 KiB  
Article
Symmetric-Key-Based Security for Multicast Communication in Wireless Sensor Networks
by Matthias Carlier, Kris Steenhaut and An Braeken
Computers 2019, 8(1), 27; https://0-doi-org.brum.beds.ac.uk/10.3390/computers8010027 - 19 Mar 2019
Cited by 15 | Viewed by 5641
Abstract
This paper presents a new key management protocol for group-based communications in non-hierarchical wireless sensor networks (WSNs), applied to a recently proposed IP-based multicast protocol. Confidentiality, integrity, and authentication are established using solely symmetric-key-based operations. The protocol features a cloud-based network multicast manager (NMM), which can create, control, and authenticate groups in the WSN, but is not able to derive the actual constructed group key. Three main phases are distinguished in the protocol. First, in the registration phase, the motes register with the group by sending a request to the NMM. Second, the members of the group calculate the shared group key in the key construction phase. For this phase, two different methods are tested. In the unicast approach, the key material is sent to each member individually using unicast messages, and in the multicast approach, a combination of Lagrange interpolation and a multicast packet is used. Finally, in the multicast communication phase, these keys are used to send confidential and authenticated messages. To investigate the impact of the proposed mechanisms on the WSN, the protocol was implemented in ContikiOS and simulated using COOJA, considering different group sizes and multi-hop communication. These simulations show that, compared to the unicast approach, the multicast approach results in significantly smaller delays, is slightly more energy efficient, and requires roughly the same amount of memory for the code. Full article
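The key construction phase above relies on Lagrange interpolation. As a hedged illustration of that building block only (the field size, share format, and message flow of the paper's actual protocol are not reproduced here), the following Python sketch reconstructs a shared group secret from polynomial shares over a prime field:

```python
# Illustrative sketch: Lagrange interpolation over a prime field, the kind of operation
# the multicast key-construction phase relies on. The modulus and share format are
# assumptions made for this example, not values taken from the paper.
P = 2**61 - 1  # a Mersenne prime used as the field modulus (assumption)

def lagrange_at_zero(shares, p=P):
    """Reconstruct f(0) from (x, y) shares of a polynomial over GF(p)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * (-xj)) % p       # product of (0 - x_j)
                den = (den * (xi - xj)) % p   # product of (x_i - x_j)
        secret = (secret + yi * num * pow(den, -1, p)) % p
    return secret

# Toy example: shares of f(x) = 123456 + 42x; two shares recover the "group key".
shares = [(1, (123456 + 42 * 1) % P), (2, (123456 + 42 * 2) % P)]
print(lagrange_at_zero(shares))  # -> 123456
```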

16 pages, 3890 KiB  
Article
The Use of an Artificial Neural Network to Process Hydrographic Big Data during Surface Modeling
by Marta Wlodarczyk-Sielicka and Jacek Lubczonek
Computers 2019, 8(1), 26; https://0-doi-org.brum.beds.ac.uk/10.3390/computers8010026 - 14 Mar 2019
Cited by 9 | Viewed by 5568
Abstract
At the present time, spatial data are often acquired using varied remote sensing sensors and systems, which produce big data sets. One significant product from these data is a digital model of geographical surfaces, including the surface of the sea floor. To improve data processing, presentation, and management, it is often indispensable to reduce the number of data points. This paper presents research regarding the application of artificial neural networks to bathymetric data reductions. This research considers results from radial networks and self-organizing Kohonen networks. During reconstructions of the seabed model, the results show that neural networks with fewer hidden neurons than the number of data points can replicate the original data set, while the Kohonen network can be used for clustering during big geodata reduction. Practical implementations of neural networks capable of creating surface models and reducing bathymetric data are presented. Full article
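To make the Kohonen-network reduction idea above concrete, here is a minimal, illustrative self-organizing map in NumPy that maps a large set of (x, y, depth) soundings onto a small set of prototype nodes. The map size, learning rate, and neighbourhood schedule are arbitrary assumptions, not the configuration used in the paper:

```python
# Minimal 1-D Kohonen (self-organizing map) reduction sketch; hyperparameters are
# illustrative assumptions.
import numpy as np

def kohonen_reduce(points, n_nodes=100, epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Reduce (x, y, depth) soundings to n_nodes prototype points on a 1-D map."""
    rng = np.random.default_rng(seed)
    weights = points[rng.choice(len(points), n_nodes, replace=False)].astype(float)
    idx = np.arange(n_nodes)
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)                    # decaying learning rate
        sigma = max(sigma0 * (1 - epoch / epochs), 0.5)    # shrinking neighbourhood
        for p in points[rng.permutation(len(points))]:
            bmu = np.argmin(np.linalg.norm(weights - p, axis=1))   # best-matching unit
            h = np.exp(-((idx - bmu) ** 2) / (2 * sigma ** 2))     # neighbourhood weights
            weights += lr * h[:, None] * (p - weights)
    return weights  # the reduced data set

soundings = np.random.rand(5000, 3)                   # synthetic (x, y, depth) data
print(kohonen_reduce(soundings, n_nodes=50).shape)    # (50, 3)
```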

16 pages, 3445 KiB  
Article
Concepts of a Modular System Architecture for Distributed Robotic Systems
by Uwe Jahn, Carsten Wolff and Peter Schulz
Computers 2019, 8(1), 25; https://0-doi-org.brum.beds.ac.uk/10.3390/computers8010025 - 14 Mar 2019
Cited by 12 | Viewed by 7859
Abstract
Modern robots often use more than one processing unit to meet the computational requirements of robotics. Robots are frequently designed in a modular manner so that they can be extended for future tasks. The use of multiple processing units leads to a distributed system within a single robot. Therefore, the system architecture is even more important than in single-computer robots. The presented concept of a modular and distributed system architecture was designed for robotic systems. The architecture is based on the Operator–Controller Module (OCM). This article describes the adaptation of the distributed OCM for mobile robots, considering the requirements of such robots, including, for example, real-time and safety constraints. The presented architecture splits the system hierarchically into a three-layer structure of controllers and operators. The controllers interact directly with all sensors and actuators within the system and must therefore comply with hard real-time constraints. The reflective operator, in contrast, processes the information from the controllers, which can be done by model-based principles using state machines. The cognitive operator is used to optimize the system. The article also shows the exemplary design of the DAEbot, a self-developed robot, and discusses the experience of applying these concepts to this robot. Full article

21 pages, 16961 KiB  
Article
An Evaluation Approach for a Physically-Based Sticky Lip Model
by Matthew Leach and Steve Maddock
Computers 2019, 8(1), 24; https://0-doi-org.brum.beds.ac.uk/10.3390/computers8010024 - 08 Mar 2019
Cited by 2 | Viewed by 8532
Abstract
Physically-based mouth models operate on the principle that a better mouth animation will be produced by simulating physically accurate behaviour of the mouth. In the development of these models, it is useful to have an evaluation approach which can be used to judge the effectiveness of a model and draw comparisons against other models and real-life mouth behaviour. This article presents a set of metrics which can be used to describe the motion of the lips, as well as a process for measuring these from video of real or simulated mouths, implemented using Python and OpenCV. As an example, the process is used to evaluate a physically-based mouth model focusing on recreating the stickiness effect of saliva between the lips. The metrics highlight the changes in behaviour due to the addition of stickiness between the lips in the synthetic mouth model and show quantitatively improved behaviour in relation to real mouth movements. The article concludes that the presented metrics provide a useful approach for evaluation of mouth animation models that incorporate sticky lip effects. Full article
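As a hedged sketch of the kind of video-based measurement process described above (the paper uses Python and OpenCV), the snippet below extracts two simple per-frame lip metrics, mouth width and height, from a crude colour segmentation. The threshold values and the choice of metrics are illustrative assumptions, not the paper's pipeline:

```python
# Illustrative per-frame lip-metric extraction with OpenCV 4.x; the HSV threshold and
# the bounding-box metrics are assumptions made for this sketch.
import cv2
import numpy as np

def lip_metrics(video_path):
    cap = cv2.VideoCapture(video_path)
    widths, heights = [], []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, (0, 70, 50), (10, 255, 255))   # crude red-ish lip mask
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
            widths.append(w)
            heights.append(h)
    cap.release()
    return np.array(widths), np.array(heights)   # per-frame mouth width and height
```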

12 pages, 1710 KiB  
Article
An Efficient Multicore Algorithm for Minimal Length Addition Chains
by Hazem M. Bahig and Yasser Kotb
Computers 2019, 8(1), 23; https://0-doi-org.brum.beds.ac.uk/10.3390/computers8010023 - 07 Mar 2019
Cited by 6 | Viewed by 5265
Abstract
A minimal length addition chain for a positive integer m is a finite sequence of positive integers such that (1) the first and last elements in the sequence are 1 and m, respectively, (2) any element greater than 1 in the sequence is the addition of two earlier elements (not necessarily distinct), and (3) the length of the sequence is minimal. Generating the minimal length addition chain for m is challenging due to the running time, which increases with the size of m and particularly with the number of 1s in the binary representation of m. In this paper, we introduce a new parallel algorithm to find the minimal length addition chain for m. The experimental studies on multicore systems show that the running time of the proposed algorithm is faster than the sequential algorithm. Moreover, the maximum speedup obtained by the proposed algorithm is 2.5 times the best known sequential algorithm. Full article
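The definition above can be checked mechanically. The helper below verifies that a given sequence is a valid addition chain for m; it does not search for a minimal chain, which is the hard part the parallel algorithm addresses:

```python
# Validity check for an addition chain (an illustration of the definition, not the search).
def is_addition_chain(chain, m):
    if not chain or chain[0] != 1 or chain[-1] != m:
        return False
    for k in range(1, len(chain)):
        # every element after the first must be the sum of two (not necessarily
        # distinct) earlier elements
        earlier = chain[:k]
        if not any(chain[k] == a + b for a in earlier for b in earlier):
            return False
    return True

print(is_addition_chain([1, 2, 3, 6, 12, 15], 15))  # True: a chain of length 5 for 15
print(is_addition_chain([1, 2, 4, 5, 15], 15))      # False: 15 is not a sum of earlier terms
```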

14 pages, 3090 KiB  
Article
Natural Language Processing in OTF Computing: Challenges and the Need for Interactive Approaches
by Frederik S. Bäumer, Joschka Kersting and Michaela Geierhos
Computers 2019, 8(1), 22; https://0-doi-org.brum.beds.ac.uk/10.3390/computers8010022 - 06 Mar 2019
Cited by 3 | Viewed by 6240
Abstract
The vision of On-the-Fly (OTF) Computing is to compose and provide software services ad hoc, based on requirement descriptions in natural language. Since non-technical users write their software requirements themselves and in unrestricted natural language, deficits such as inaccuracy and incompleteness occur. These deficits are usually addressed by natural language processing methods, which face special challenges in OTF Computing because maximum automation is the goal. In this paper, we present current automatic approaches for resolving inaccuracy and incompleteness in natural language requirement descriptions and elaborate on open challenges. In particular, we discuss the necessity of domain-specific resources and show why, despite far-reaching automation, an intelligent and guided integration of end users into the compensation process is required. In this context, we present our idea of a chatbot that integrates users into the compensation process depending on the given circumstances. Full article

28 pages, 493 KiB  
Article
J48SS: A Novel Decision Tree Approach for the Handling of Sequential and Time Series Data
by Andrea Brunello, Enrico Marzano, Angelo Montanari and Guido Sciavicco
Computers 2019, 8(1), 21; https://0-doi-org.brum.beds.ac.uk/10.3390/computers8010021 - 05 Mar 2019
Cited by 16 | Viewed by 6797
Abstract
Temporal information plays a very important role in many analysis tasks, and can be encoded in at least two different ways. It can be modeled by discrete sequences of events as, for example, in the business intelligence domain, with the aim of tracking the evolution of customer behaviors over time. Alternatively, it can be represented by time series, as in the stock market to characterize price histories. In some analysis tasks, temporal information is complemented by other kinds of data, which may be represented by static attributes, e.g., categorical or numerical ones. This paper presents J48SS, a novel decision tree inducer capable of natively mixing static (i.e., numerical and categorical), sequential, and time series data for classification purposes. The novel algorithm is based on the popular C4.5 decision tree learner, and it relies on the concepts of frequent pattern extraction and time series shapelet generation. The algorithm is evaluated on a text classification task in a real business setting, as well as on a selection of public UCR time series datasets. Results show that it is capable of providing competitive classification performances, while generating highly interpretable models and effectively reducing the data preparation effort. Full article
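To illustrate the shapelet notion mentioned above, the sketch below computes the usual shapelet-to-series distance: the minimum Euclidean distance over all sliding windows. How J48SS generates and selects shapelets, and how it combines them with frequent patterns, is not reproduced here:

```python
# Distance between a candidate shapelet and a time series (sliding-window minimum).
import numpy as np

def shapelet_distance(series, shapelet):
    series, shapelet = np.asarray(series, float), np.asarray(shapelet, float)
    L = len(shapelet)
    return min(np.linalg.norm(series[i:i + L] - shapelet)
               for i in range(len(series) - L + 1))

ts = [0, 0, 1, 3, 1, 0, 0]                 # a series containing a small "bump"
print(shapelet_distance(ts, [1, 3, 1]))    # 0.0: one window matches the shapelet exactly
print(shapelet_distance(ts, [2, 2, 2]))    # > 0: no window matches this shape
```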

11 pages, 3396 KiB  
Article
Software Requirement Specification Based on a Gray Box for Embedded Systems: A Case Study of a Mobile Phone Camera Sensor Controller
by Soojin Park
Computers 2019, 8(1), 20; https://0-doi-org.brum.beds.ac.uk/10.3390/computers8010020 - 02 Mar 2019
Cited by 3 | Viewed by 7763
Abstract
One of the most widely used models for specifying functional requirements is the use case model. The use case model views a system as a black box and focuses on descriptions of external interactions between the system and related environments. However, for embedded systems that do not disclose most of their implementation logic, black box-based use case models suffer from the drawback that considerable information that must be defined for system development is omitted. To address this shortcoming, several studies have proposed a kind of white box technique in which the dynamic behaviors of embedded systems are first defined using a state diagram and the results are reflected in the requirement specifications. However, white box-based modeling has not been widely adopted by developers because it requires time-consuming tasks during the requirement analysis phase, at the very start of the software development life cycle. This study proposes a gray box-based requirement specification method as a trade-off between the two contradictory concerns of the black and white box-based approaches: the amount of information required to develop an embedded system and the cost of the effort required during the requirement analysis phase. The proposed method suggests an appropriate depth of embedded system modeling for defining the requirements. This study also proposes a mechanism that automatically generates an application programming interface for each component based on the created model. The proposed method was applied to the development of a camera sensor controller in a mobile phone, and the case study demonstrates the feasibility of the method through discussion of the application results. Full article

19 pages, 436 KiB  
Article
Automatic Correction of Arabic Dyslexic Text
by Maha M. Alamri and William J. Teahan
Computers 2019, 8(1), 19; https://0-doi-org.brum.beds.ac.uk/10.3390/computers8010019 - 21 Feb 2019
Cited by 8 | Viewed by 7786
Abstract
This paper proposes an automatic correction system that detects and corrects dyslexic errors in Arabic text. The system uses a language model based on the Prediction by Partial Matching (PPM) text compression scheme that generates possible alternatives for each misspelled word. Furthermore, the generated candidate list is based on edit operations (insertion, deletion, substitution and transposition), and the correct alternative for each misspelled word is chosen on the basis of the compression codelength of the trigram. The system is compared with widely-used Arabic word processing software and the Farasa tool. The system provided good results compared with the other tools, with a recall of 43%, precision 89%, F1 58% and accuracy 81%. Full article
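As a hedged sketch of the candidate-generation step described above, the function below produces every string one edit operation away (insertion, deletion, substitution, transposition) and intersects the result with a lexicon. The alphabet and lexicon are placeholders, and the subsequent ranking by PPM compression codelength is not shown:

```python
# Edit-distance-1 candidate generation with the four operations named in the abstract.
def edit1_candidates(word, alphabet):
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes     = {a + b[1:] for a, b in splits if b}
    transposes  = {a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1}
    substitutes = {a + c + b[1:] for a, b in splits if b for c in alphabet}
    inserts     = {a + c + b for a, b in splits for c in alphabet}
    return deletes | transposes | substitutes | inserts

lexicon = {"cat", "cart", "coat", "act"}             # placeholder lexicon
candidates = edit1_candidates("cta", "abcdefghijklmnopqrstuvwxyz")
print(sorted(candidates & lexicon))                  # ['cat'] -> alternatives to be ranked next
```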

15 pages, 625 KiB  
Article
Resource Allocation Model for Sensor Clouds under the Sensing as a Service Paradigm
by Joel Guerreiro, Luís Rodrigues and Noélia Correia
Computers 2019, 8(1), 18; https://0-doi-org.brum.beds.ac.uk/10.3390/computers8010018 - 20 Feb 2019
Cited by 12 | Viewed by 6141
Abstract
Sensing as a Service is emerging as a new Internet of Things (IoT) business model for sensors and data sharing in the cloud. Under this paradigm, a resource allocation model for the assignment of both sensors and cloud resources to clients/applications is proposed. This model, contrary to previous approaches, is suited to emerging IoT Sensing as a Service business models supporting multi-sensing applications and mashups of Things in the cloud. A heuristic algorithm based on this model is also proposed. Results show that the approach is able to incorporate strategies that lead to the allocation of fewer devices, while selecting the most adequate ones for application needs. Full article

18 pages, 4330 KiB  
Article
SoS TextVis: An Extended Survey of Surveys on Text Visualization
by Mohammad Alharbi and Robert S. Laramee
Computers 2019, 8(1), 17; https://0-doi-org.brum.beds.ac.uk/10.3390/computers8010017 - 20 Feb 2019
Cited by 28 | Viewed by 7357
Abstract
Text visualization is a rapidly growing sub-field of information visualization and visual analytics. Many approaches and techniques are introduced every year to address a wide range of challenges and analysis tasks, enabling researchers from different disciplines to obtain leading-edge knowledge from digitized collections of text. This can be challenging, particularly when the data is massive. Additionally, the sources of digital text have spread substantially in recent decades in various forms, such as web pages, blogs, Twitter, email, electronic publications, and digitized books. In response to the explosion of text visualization research literature, the first text visualization survey article was published in 2010. Since then, a growing number of surveys have reviewed existing techniques and classified them based on text research methodology. In this work, we aim to present the first Survey of Surveys (SoS) that reviews all of the surveys and state-of-the-art papers on text visualization techniques and provides an SoS classification. We study and compare the 14 surveys and categorize them into five groups: (1) document-centered, (2) user task analysis, (3) cross-disciplinary, (4) multi-faceted, and (5) satellite-themed. We provide survey recommendations for researchers in the field of text visualization. The result is a unique, valuable starting point and overview of the current state of the art in the text visualization research literature. Full article

22 pages, 6891 KiB  
Article
Inter-Vehicle Communication Protocol Design for a Yielding Decision at an Unsignalized Intersection and Evaluation of the Protocol Using Radio Control Cars Equipped with Raspberry Pi
by Hayato Yajima and Kazumasa Takami
Computers 2019, 8(1), 16; https://0-doi-org.brum.beds.ac.uk/10.3390/computers8010016 - 18 Feb 2019
Cited by 2 | Viewed by 6140
Abstract
The Japanese government aims to introduce self-driven vehicles by 2020 to reduce the number of accidents and traffic jams. Various methods have been proposed for traffic control at accident-prone intersections to achieve safe and efficient self-driving. Most of them require roadside units to identify and control vehicles. However, it is difficult to install roadside units at all intersections. This paper proposes an inter-vehicle communication protocol that enables vehicles to transmit their vehicle information and moving-direction information to nearby vehicles. Vehicles identify nearby vehicles using images captured by vehicle-mounted cameras. These arrangements make it possible for vehicles to exchange yielding intentions at an unsignalized intersection without using a roadside unit. To evaluate the operation of the proposed protocol, we implemented it on Raspberry Pi computers connected to cameras and mounted on radio control cars, and conducted experiments. The experiments simulated an unsignalized intersection where both self-driven and human-driven vehicles were present. The vehicle that had sent a yielding request identified the yielding vehicle by recognizing the colour of each radio control car, which was part of the vehicle information, in the image captured by its camera. We measured the time needed to complete each yielding sequence and evaluated the validity of the yielding decisions. Full article

17 pages, 3884 KiB  
Article
High Dynamic Range Image Deghosting Using Spectral Angle Mapper
by Muhammad Murtaza Khan
Computers 2019, 8(1), 15; https://0-doi-org.brum.beds.ac.uk/10.3390/computers8010015 - 09 Feb 2019
Cited by 1 | Viewed by 5485
Abstract
The generation of high dynamic range (HDR) images in the presence of moving objects results in the appearance of blurred objects. These blurred objects are called ghosts. Over the past decade, numerous deghosting techniques have been proposed for removing blurred objects from HDR images. These methods may try to identify moving objects and maximize dynamic range locally or may focus on removing moving objects and displaying static objects while enhancing the dynamic range. The resultant image may suffer from broken/incomplete objects or noise, depending upon the type of methodology selected. Generally, deghosting methods are computationally intensive; however, a simple deghosting method may provide sufficiently acceptable results while being computationally inexpensive. Inspired by this idea, a simple deghosting method based on the spectral angle mapper (SAM) measure is proposed. The advantage of using SAM is that it is intensity independent and focuses only on identifying the spectral—i.e., color—similarity between two images. The proposed method focuses on removing moving objects while enhancing the dynamic range of static objects. The subjective and objective results demonstrate the effectiveness of the proposed method. Full article
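The spectral angle mapper measure that the method builds on can be stated in a few lines: it is the angle between two colour vectors, which stays near zero when one pixel is simply a differently exposed version of the other. The sketch below is illustrative only; the thresholds and the way the angle map drives deghosting are not reproduced:

```python
# Spectral angle between two colour/spectral vectors (intensity-independent similarity).
import numpy as np

def spectral_angle(p, q, eps=1e-12):
    """Angle in radians between vectors p and q."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    cos = np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q) + eps)
    return np.arccos(np.clip(cos, -1.0, 1.0))

print(spectral_angle([100, 50, 20], [200, 100, 40]))  # ~0: same colour, different exposure
print(spectral_angle([100, 50, 20], [20, 50, 100]))   # large: genuinely different colour
```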

16 pages, 5162 KiB  
Article
Robust Computer Vision Chess Analysis and Interaction with a Humanoid Robot
by Andrew Tzer-Yeu Chen and Kevin I-Kai Wang
Computers 2019, 8(1), 14; https://0-doi-org.brum.beds.ac.uk/10.3390/computers8010014 - 08 Feb 2019
Cited by 11 | Viewed by 10610
Abstract
As we move towards improving the skill of computers to play games like chess against humans, the ability to accurately perceive real-world game boards and game states remains a challenge in many cases, hindering the development of game-playing robots. In this paper, we present a computer vision algorithm developed as part of a chess robot project that detects the chess board, squares, and piece positions in relatively unconstrained environments. Dynamically responding to lighting changes in the environment, accounting for perspective distortion, and using accurate detection methodologies results in a simple but robust algorithm that succeeds 100% of the time in standard environments, and 80% of the time in extreme environments with external lighting. The key contributions of this paper are a dynamic approach to the Hough line transform, and a hybrid edge and morphology-based approach for object/occupancy detection, that enable the development of a robot chess player that relies solely on the camera for sensory input. Full article
(This article belongs to the Special Issue Smart Interfacing)
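For readers unfamiliar with the line-detection step referred to above, the following is a minimal OpenCV sketch of Canny edge detection followed by a probabilistic Hough transform. The fixed thresholds here are illustrative assumptions; the paper's contribution is precisely a dynamic variant that adapts such parameters to lighting:

```python
# Static Canny + probabilistic Hough line detection; threshold values are assumptions.
import cv2
import numpy as np

def detect_board_lines(image_path):
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=100, maxLineGap=10)
    return [] if lines is None else [tuple(l[0]) for l in lines]  # (x1, y1, x2, y2) segments
```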

17 pages, 8555 KiB  
Article
Location Intelligence Systems and Data Integration for Airport Capacities Planning
by Mirza Ponjavic and Almir Karabegovic
Computers 2019, 8(1), 13; https://0-doi-org.brum.beds.ac.uk/10.3390/computers8010013 - 07 Feb 2019
Cited by 6 | Viewed by 8146
Abstract
This paper describes an approach that introduces location intelligence, built from open-source software components, as a solution for planning and constructing airport infrastructure. As a case study, the spatial information system of the International Airport in Sarajevo is selected. Due to the frequent construction work on new terminals and the expansion of existing airport capacities, the development team suggested that airport management introduce location intelligence, as one of the measures for more efficient management of airport infrastructure, by upgrading the existing information system with a functional WebGIS solution. This solution is based on the OpenGeo architecture, which includes a set of spatial data management technologies used to create an online Internet map and build a location intelligence infrastructure. Full article
(This article belongs to the Special Issue Computer Technologies for Human-Centered Cyber World)

21 pages, 8075 KiB  
Article
Feature-Rich, GPU-Assisted Scatterplots for Millions of Call Events
by Dylan Rees, Richard C. Roberts, Robert S. Laramee, Paul Brookes, Tony D’Cruze and Gary A. Smith
Computers 2019, 8(1), 12; https://0-doi-org.brum.beds.ac.uk/10.3390/computers8010012 - 05 Feb 2019
Cited by 2 | Viewed by 6011
Abstract
The contact center industry represents a large proportion of many countries’ economies. For example, 4% of the working population of both the United States and the UK is employed in this sector. As in most modern industries, contact centers generate gigabytes of operational data that require analysis to provide insight and to improve efficiency. Visualization is a valuable approach to data analysis, enabling trends and correlations to be discovered, particularly when using scatterplots. We present a feature-rich application that visualizes large call center data sets using scatterplots that support millions of points. The application features a scatterplot matrix providing an overview of the call center data attributes and animation of call start and end times, and utilizes both CPU and GPU acceleration for processing and filtering. We illustrate the use of the Open Computing Language (OpenCL) to exploit a commodity graphics card for the fast filtering of fields with multiple attributes. We demonstrate the use of the application with millions of call events from a month’s worth of real-world data and report domain expert feedback from our industry partner. Full article

20 pages, 1965 KiB  
Article
Automated Hints Generation for Investigating Source Code Plagiarism and Identifying The Culprits on In-Class Individual Programming Assessment
by Ariel Elbert Budiman and Oscar Karnalim
Computers 2019, 8(1), 11; https://0-doi-org.brum.beds.ac.uk/10.3390/computers8010011 - 02 Feb 2019
Cited by 9 | Viewed by 6403
Abstract
Most source code plagiarism detection tools rely only on source code similarity to indicate plagiarism. This can be an issue, since not all source code pairs with high similarity are plagiarism. Moreover, the culprits (i.e., the ones who plagiarise) cannot be differentiated from the victims, even though the two groups need to be educated in different ways. This paper proposes a mechanism to generate hints for investigating source code plagiarism and identifying the culprits in in-class individual programming assessments. The hints are collected from the culprits’ copying behaviour during the assessment. According to our evaluation, the hints from the source code creation process and from seating position are 76.88% and at least 80.87% accurate, respectively, for indicating plagiarism. Further, the hints from the source code creation process can be helpful for identifying the culprits, as the culprits’ code exhibits at least one of our predefined conditions for copying behaviour. Full article

17 pages, 2520 KiB  
Article
Generalized Majority Voter Design Method for N-Modular Redundant Systems Used in Mission- and Safety-Critical Applications
by Jaytrilok Choudhary, Padmanabhan Balasubramanian, Danny M. Varghese, Dhirendra Pratap Singh and Douglas Maskell
Computers 2019, 8(1), 10; https://0-doi-org.brum.beds.ac.uk/10.3390/computers8010010 - 28 Jan 2019
Cited by 11 | Viewed by 6175
Abstract
Mission- and safety-critical circuits and systems employ redundancy in their designs to overcome any faults or failures of constituent circuits and systems during the normal operation. In this aspect, the N-modular redundancy (NMR) is widely used. An NMR system is comprised of N identical systems, the corresponding outputs of which are majority voted to generate the system outputs. To perform majority voting, a majority voter is required, and the sizes of majority voters tend to vary depending on an NMR system. Majority voters corresponding to NMR systems are physically realized by enumerating the majority input clauses corresponding to an NMR system and then synthesizing the majority logic equation. The issue is that the number of majority input clauses corresponding to an NMR system is governed by a mathematical combination, the complexity of which increases substantially with increases in the level of redundancy. In this context, the design of a majority voter of any size corresponding to an NMR specification based on a new, generalized design approach is described. The proposed approach is inherently hierarchical and progressive since any NMR majority voter can be constructed from an (N − 2)MR majority voter along with additional logic corresponding to the two extra inputs. Further, the proposed approach paves the way for simultaneous production of the NMR system outputs corresponding to different degrees of redundancy, which is not intrinsic to the existing methods. This feature is additionally useful for any sharing of common logic with diverse degrees of redundancy in appropriate portions of an NMR implementation. Full article
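A behavioural toy model of an N-input majority voter (N odd) is given below, written so that the N-input vote reuses the (N - 2)-input vote, loosely mirroring the hierarchical construction described above. It illustrates the voting function only and is not the proposed gate-level design:

```python
# Behavioural N-input majority vote built recursively from the (N - 2)-input vote.
def majority(bits):
    """Majority value of an odd-length tuple of 0/1 module outputs."""
    n = len(bits)
    if n == 1:
        return bits[0]
    a, b = bits[-2], bits[-1]            # the two extra inputs of the N-input voter
    if a != b:
        return majority(bits[:-2])       # the extras cancel; the (N-2)-input vote decides
    # the extras agree, so they decide unless the remaining inputs outvote them
    ones = sum(bits[:-2])
    zeros = (n - 2) - ones
    if a == 1:
        return 1 if ones + 2 > zeros else 0
    return 0 if zeros + 2 > ones else 1

print(majority((1, 1, 0)))            # 1: triple modular redundancy (TMR) example
print(majority((1, 0, 1, 0, 0)))      # 0: 5MR example
```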

23 pages, 6821 KiB  
Article
User Satisfaction in Augmented Reality-Based Training Using Microsoft HoloLens
by Hui Xue, Puneet Sharma and Fridolin Wild
Computers 2019, 8(1), 9; https://0-doi-org.brum.beds.ac.uk/10.3390/computers8010009 - 25 Jan 2019
Cited by 44 | Viewed by 11064
Abstract
With the recent developments in augmented reality (AR) technologies comes an increased interest in the use of smart glasses for hands-on training. Whether this interest is turned into market success depends at least on whether the interaction with smart AR glasses satisfies users, an aspect of AR use that has so far received little attention. With this contribution, we seek to change this. The objective of the article, therefore, is to investigate user satisfaction in AR applied to three cases of practical use. User satisfaction with AR can be broken down into satisfaction with the interaction and satisfaction with the delivery device. A total of 142 participants from three different industrial sectors contributed to this study, namely, aeronautics, medicine, and astronautics. In our analysis, we investigated the influence of different factors, such as age, gender, level of education, level of Internet knowledge, and the roles of the participants in the different sectors. Even though users were not familiar with the smart glasses, the results show that general computer knowledge has a positive effect on user satisfaction. Further analysis using two-factor interactions showed that there is no significant interaction between the different factors and user satisfaction. The results of the study affirm that the questionnaires developed for user satisfaction with smart glasses and the AR application performed well, but leave room for improvement. Full article
(This article belongs to the Special Issue Augmented and Mixed Reality in Work Context)

13 pages, 3141 KiB  
Article
Hidden Link Prediction in Criminal Networks Using the Deep Reinforcement Learning Technique
by Marcus Lim, Azween Abdullah, NZ Jhanjhi and Mahadevan Supramaniam
Computers 2019, 8(1), 8; https://0-doi-org.brum.beds.ac.uk/10.3390/computers8010008 - 11 Jan 2019
Cited by 44 | Viewed by 7404
Abstract
Criminal network activities, which are usually secret and stealthy, present certain difficulties in conducting criminal network analysis (CNA) because of the lack of complete datasets. The collection of criminal activities data in these networks tends to be incomplete and inconsistent, which is reflected structurally in the criminal network in the form of missing nodes (actors) and links (relationships). Criminal networks are commonly analyzed using social network analysis (SNA) models. Most machine learning techniques that rely on the metrics of SNA models in the development of hidden or missing link prediction models utilize supervised learning. However, supervised learning usually requires the availability of a large dataset to train the link prediction model in order to achieve an optimum performance level. Therefore, this research is conducted to explore the application of deep reinforcement learning (DRL) in developing a criminal network hidden links prediction model from the reconstruction of a corrupted criminal network dataset. The experiment conducted on the model indicates that the dataset generated by the DRL model through self-play or self-simulation can be used to train the link prediction model. The DRL link prediction model exhibits a better performance than a conventional supervised machine learning technique, such as the gradient boosting machine (GBM) trained with a relatively smaller domain dataset. Full article

4 pages, 440 KiB  
Editorial
Acknowledgement to Reviewers of Computers in 2018
by Computers Editorial Office
Computers 2019, 8(1), 7; https://0-doi-org.brum.beds.ac.uk/10.3390/computers8010007 - 10 Jan 2019
Viewed by 4100
Abstract
Rigorous peer-review is the corner-stone of high-quality academic publishing [...] Full article
16 pages, 1155 KiB  
Article
Position Certainty Propagation: A Localization Service for Ad-Hoc Networks
by Abdallah Sobehy, Eric Renault and Paul Muhlethaler
Computers 2019, 8(1), 6; https://0-doi-org.brum.beds.ac.uk/10.3390/computers8010006 - 07 Jan 2019
Cited by 5 | Viewed by 4965
Abstract
Location services for ad-hoc networks are of indispensable value for a wide range of applications, such as the Internet of Things (IoT) and vehicular ad-hoc networks (VANETs). Each context requires a solution that addresses the specific needs of the application. For instance, IoT sensor nodes have resource constraints (i.e., limited computational capabilities), so a localization service should be highly efficient to conserve the lifespan of these nodes. We propose an optimized, energy-aware, and computationally lightweight solution that requires three GPS-equipped nodes (anchor nodes) in the network. Moreover, the computations are lightweight and can be implemented in a distributed manner among the nodes. Knowing the maximum communication range of all nodes and the distances between 1-hop neighbors, each node localizes itself and shares its location with the network in an efficient manner. We simulate our proposed algorithm in the NS-3 simulator and compare our solution with state-of-the-art methods. Our method is capable of localizing more nodes (≈90% of nodes in a network with an average degree of ≈10). Full article
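As an illustration of the kind of local computation such an anchor-based localization service relies on, the sketch below recovers a 2-D position from distances to three GPS-equipped anchor nodes by linearizing the circle equations. This is only the basic trilateration step, assumed here for illustration, not the paper's certainty-propagation scheme:

```python
# Trilateration from three anchors by subtracting circle equations -> a 2x2 linear system.
import numpy as np

def trilaterate(anchors, distances):
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = distances
    A = np.array([[2 * (x2 - x1), 2 * (y2 - y1)],
                  [2 * (x3 - x1), 2 * (y3 - y1)]], float)
    b = np.array([d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2,
                  d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2], float)
    return np.linalg.solve(A, b)

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = np.array([3.0, 4.0])
dists = [np.linalg.norm(true_pos - np.array(a)) for a in anchors]
print(trilaterate(anchors, dists))   # ~[3. 4.]
```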

11 pages, 491 KiB  
Article
Robust Cochlear-Model-Based Speech Recognition
by Mladen Russo, Maja Stella, Marjan Sikora and Vesna Pekić
Computers 2019, 8(1), 5; https://0-doi-org.brum.beds.ac.uk/10.3390/computers8010005 - 01 Jan 2019
Cited by 13 | Viewed by 8359
Abstract
Accurate speech recognition can provide a natural interface for human–computer interaction. The recognition rates of modern speech recognition systems are highly dependent on background noise levels, and the choice of acoustic feature extraction method can have a significant impact on system performance. This paper presents a robust speech recognition system based on a front-end motivated by human cochlear processing of audio signals. In the proposed front-end, cochlear behavior is first emulated by the filtering operations of the gammatone filterbank and subsequently by the Inner Hair Cell (IHC) processing stage. Experimental results using a continuous density Hidden Markov Model (HMM) recognizer show that the proposed Gammatone Hair Cell (GHC) coefficients yield lower recognition rates under clean speech conditions, but demonstrate a significant improvement in performance under noisy conditions compared to the standard Mel-Frequency Cepstral Coefficients (MFCC) baseline. Full article
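A single channel of the gammatone filterbank mentioned above can be sketched directly from its impulse response, g(t) = t^(n-1) exp(-2*pi*b*t) cos(2*pi*fc*t). The filter order, bandwidth scaling, and sampling rate below are illustrative values, not the paper's exact front-end configuration:

```python
# One gammatone channel's impulse response; parameters are illustrative assumptions.
import numpy as np

def gammatone_ir(fc, fs=16000, duration=0.025, order=4):
    t = np.arange(0, duration, 1.0 / fs)
    erb = 24.7 * (4.37 * fc / 1000.0 + 1.0)       # equivalent rectangular bandwidth
    b = 1.019 * erb                               # common bandwidth scaling (assumption)
    g = t ** (order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t)
    return g / np.max(np.abs(g))                  # normalized impulse response

print(gammatone_ir(fc=1000.0).shape)              # (400,) samples at fs = 16 kHz
```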

16 pages, 4553 KiB  
Article
Sentiment Analysis of Lithuanian Texts Using Traditional and Deep Learning Approaches
by Jurgita Kapočiūtė-Dzikienė, Robertas Damaševičius and Marcin Woźniak
Computers 2019, 8(1), 4; https://0-doi-org.brum.beds.ac.uk/10.3390/computers8010004 - 01 Jan 2019
Cited by 48 | Viewed by 8637
Abstract
We describe the sentiment analysis experiments that were performed on the Lithuanian Internet comment dataset using traditional machine learning (Naïve Bayes Multinomial—NBM and Support Vector Machine—SVM) and deep learning (Long Short-Term Memory—LSTM and Convolutional Neural Network—CNN) approaches. The traditional machine learning techniques were used with features based on lexical, morphological, and character information. The deep learning approaches were applied on top of two types of word embeddings (Word2Vec continuous bag-of-words with negative sampling and FastText). Both traditional and deep learning approaches had to solve the positive/negative/neutral sentiment classification task on the balanced and full dataset versions. The best deep learning results (reaching 0.706 accuracy) were achieved on the full dataset with CNN applied on top of the FastText embeddings, replaced emoticons, and eliminated diacritics. The traditional machine learning approaches demonstrated the best performance (0.735 accuracy) on the full dataset with the NBM method, replaced emoticons, restored diacritics, and lemma unigrams as features. Although the traditional machine learning approaches were superior to the deep learning methods, deep learning demonstrated good results when applied to the small datasets. Full article
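For orientation, a minimal scikit-learn baseline in the spirit of the NBM experiments above is sketched below: unigram counts feeding a multinomial Naive Bayes classifier. The toy English data and the use of raw tokens instead of lemma unigrams are simplifying assumptions:

```python
# Tiny Naive Bayes Multinomial sentiment baseline; data and features are placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts  = ["great phone, works well", "awful service, never again",
          "it is okay, nothing special", "really happy with this"]
labels = ["positive", "negative", "neutral", "positive"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)
print(model.predict(["happy with the service"]))   # ['positive']
```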

24 pages, 8439 KiB  
Article
Utilizing Transfer Learning and Homomorphic Encryption in a Privacy Preserving and Secure Biometric Recognition System
by Milad Salem, Shayan Taheri and Jiann-Shiun Yuan
Computers 2019, 8(1), 3; https://0-doi-org.brum.beds.ac.uk/10.3390/computers8010003 - 29 Dec 2018
Cited by 36 | Viewed by 8676
Abstract
Biometric verification systems have become prevalent in the modern world with the wide usage of smartphones. These systems rely heavily on storing sensitive biometric data in the cloud. Because biometric data such as fingerprints and irises cannot be changed, storing them in the cloud creates vulnerability and can potentially have catastrophic consequences if these data are leaked. In recent years, in order to preserve the privacy of users, homomorphic encryption has been used to enable computation on encrypted data and to eliminate the need for decryption. This work presents DeepZeroID: a privacy-preserving, cloud-based, multiple-party biometric verification system that uses homomorphic encryption. Via transfer learning, training on sensitive biometric data is eliminated, and one pre-trained deep neural network is used as a feature extractor. By developing an exhaustive search algorithm, this feature extractor is applied to the tasks of biometric verification and liveness detection. By eliminating the need for training on and decrypting the sensitive biometric data, this system preserves privacy, requires zero knowledge of the sensitive data distribution, and is highly scalable. Our experimental results show that DeepZeroID can deliver a 95.47% F1 score in the verification of combined iris and fingerprint feature vectors with zero false positives, and 100% accuracy in liveness detection. Full article

26 pages, 3377 KiB  
Article
Neural Network-Based Formula for the Buckling Load Prediction of I-Section Cellular Steel Beams
by Miguel Abambres, Komal Rajana, Konstantinos Daniel Tsavdaridis and Tiago Pinto Ribeiro
Computers 2019, 8(1), 2; https://0-doi-org.brum.beds.ac.uk/10.3390/computers8010002 - 26 Dec 2018
Cited by 56 | Viewed by 9549
Abstract
Cellular beams are an attractive option for the steel construction industry due to their versatility in terms of strength, size, and weight. Further benefits are the integration of services thereby reducing ceiling-to-floor depth (thus, building’s height), which has a great economic impact. Moreover, the complex localized and global failures characterizing those members have led several scientists to focus their research on the development of more efficient design guidelines. This paper aims to propose an artificial neural network (ANN)-based formula to precisely compute the critical elastic buckling load of simply supported cellular beams under uniformly distributed vertical loads. The 3645-point dataset used in ANN design was obtained from an extensive parametric finite element analysis performed in ABAQUS. The independent variables adopted as ANN inputs are the following: beam’s length, opening diameter, web-post width, cross-section height, web thickness, flange width, flange thickness, and the distance between the last opening edge and the end support. The proposed model shows a strong potential as an effective design tool. The maximum and average relative errors among the 3645 data points were found to be 3.7% and 0.4%, respectively, whereas the average computing time per data point is smaller than a millisecond for any current personal computer. Full article

14 pages, 1238 KiB  
Article
Prototypes of User Interfaces for Mobile Applications for Patients with Diabetes
by Jan Pavlas, Ondrej Krejcar, Petra Maresova and Ali Selamat
Computers 2019, 8(1), 1; https://0-doi-org.brum.beds.ac.uk/10.3390/computers8010001 - 23 Dec 2018
Cited by 8 | Viewed by 6174
Abstract
We live in a heavily technologized global society. It is therefore not surprising that efforts are being made to integrate current information technology into the treatment of diabetes mellitus. This paper is dedicated to improving the treatment of this disease through the use of well-designed mobile applications. Our analysis of relevant literature sources and existing solutions has revealed that the current state of mobile applications for diabetics is unsatisfactory. These limitations relate both to the content and to the Graphical User Interface (GUI) of existing applications. Following the analysis of relevant studies, there are four key elements that a diabetes mobile application should contain. These elements are: (1) blood glucose level monitoring; (2) effective treatment; (3) proper eating habits; and (4) physical activity. As the next step in this study, three prototypes of new mobile applications were designed, each representing one group of applications according to a set of given rules. The most suitable solution based on the users’ preferences was determined through a questionnaire survey conducted with a sample of 30 respondents, who participated after providing their informed consent. Participants were aged 15 to 30 years; 13 were male and 17 were female. As a result of this study, the specifications of the proposed application were identified, which aim to respond to the findings of the analytical part of the study and to eliminate the limitations of current solutions. All of the respondents expressed preference for an application that includes not only the key functions but also a number of additional functions, namely synchronization with one of the external devices for measuring blood glucose levels, while five-sixths of them found the suggested additional functions sufficient. Full article
(This article belongs to the Special Issue Computer Technologies in Personalized Medicine and Healthcare)
