Computers, Volume 8, Issue 2 (June 2019) – 23 articles

Cover Story: Shadow-retargeting maps depict the appearance of real shadows to virtual shadows given the corresponding deformation of scene geometry, such that appearance is seamlessly maintained. By performing virtual shadow reconstruction from unoccluded real-shadow samples observed in the camera frame, this method efficiently recovers deformed shadow appearance. In this manuscript, we introduce a light-estimation approach that enables light-source detection using flat Fresnel lenses that allow this method to work without a set of pre-established conditions. Results are presented on a range of objects, deformations, and illumination conditions in real-time augmented reality (AR) on a mobile device. We demonstrate the practical application of the method in generating otherwise laborious in-betweening frames for 3D-printed stop-motion animation.
17 pages, 5065 KiB  
Article
Serverless Computing: An Investigation of Deployment Environments for Web APIs
by Cosmina Ivan, Radu Vasile and Vasile Dadarlat
Computers 2019, 8(2), 50; https://0-doi-org.brum.beds.ac.uk/10.3390/computers8020050 - 25 Jun 2019
Cited by 10 | Viewed by 6913
Abstract
Cloud vendors offer a variety of serverless technologies promising high availability and dynamic scaling while reducing operational and maintenance costs. One such technology, serverless computing, or function-as-a-service (FaaS), is advertised as a good candidate for web applications, data processing, or backend services, where the customer pays only for usage. Unlike virtual machines (VMs), FaaS deployments come with automatic resource provisioning and allocation, providing elastic and automatic scaling. We present the results from our investigation of a specific serverless candidate, a Web Application Programming Interface (Web API), deployed on virtual machines and as functions-as-a-service. We contrast these deployments by varying the number of concurrent users and measuring response times and costs. We found no significant response-time differences between deployments when the VMs are configured for the expected load and the test scenarios stay within the FaaS hardware limitations. Higher numbers of concurrent users or unexpected user growth are effortlessly handled by FaaS, whereas additional labor must be invested in VMs for equivalent results. We conclude that, despite the advantages serverless computing brings, there is no clear choice between serverless and virtual machines for a Web API application: one needs to carefully measure costs and factor in all components that are included with FaaS.
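As an illustration of the kind of measurement harness such a comparison needs (a sketch, not the authors' tooling), the snippet below fires batches of concurrent requests at a Web API endpoint and reports latency statistics; the URL is a placeholder.

```python
# Minimal load-test sketch: N concurrent users, latency mean and p95.
# The endpoint URL is hypothetical, not the deployment from the paper.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "https://example.com/api/items"  # placeholder Web API endpoint

def timed_call(_):
    start = time.perf_counter()
    requests.get(URL, timeout=30)
    return time.perf_counter() - start

def run_load(concurrent_users: int, total_requests: int) -> dict:
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        latencies = sorted(pool.map(timed_call, range(total_requests)))
    return {
        "mean_s": statistics.mean(latencies),
        "p95_s": latencies[int(0.95 * len(latencies)) - 1],
    }

if __name__ == "__main__":
    for users in (10, 50, 100):
        print(users, run_load(users, users * 10))
```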

25 pages, 1891 KiB  
Article
Expressing the Tacit Knowledge of a Digital Library System as Linked Data
by Angela Di Iorio and Marco Schaerf
Computers 2019, 8(2), 49; https://0-doi-org.brum.beds.ac.uk/10.3390/computers8020049 - 15 Jun 2019
Cited by 4 | Viewed by 5968
Abstract
Library organizations have enthusiastically undertaken semantic web initiatives, in particular publishing data as linked data. Nevertheless, several surveys report the experimental nature of these initiatives and the difficulty consumers have in re-using the data. These barriers hinder the use of linked datasets as an infrastructure that enhances the library and related information services. This paper presents an approach for encoding, as a linked vocabulary, the “tacit” knowledge of the information system that manages the data source. The objective is to improve the process of interpreting the meaning of the linked data in published datasets. We analyzed a digital library system as a case study for prototyping the “semantic data management” method, in which data and the knowledge about it are natively managed, taking into account the linked data pillars. The ultimate objective of semantic data management is to curate the correct consumer interpretation of data and to facilitate its proper re-use. The prototype defines the ontological entities representing the knowledge of the digital library system that is stored neither in the data source nor in the existing ontologies related to the system’s semantics. Thus, we present the local ontology and its matching with the existing ontologies Preservation Metadata Implementation Strategies (PREMIS) and Metadata Objects Description Schema (MODS), and we discuss linked data triples prototyped from the legacy relational database by using the local ontology. We show how semantic data management can deal with the inconsistency of system data, and we conclude that a specific change in the system developer’s mindset is necessary for extracting and “codifying” the tacit knowledge needed to improve the data interpretation process.
(This article belongs to the Special Issue REMS 2018: Multidisciplinary Symposium on Computer Science and ICT)
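A minimal sketch of the triple-prototyping step described above, using rdflib; the local namespace, the property names, and the sample row are hypothetical, while the PREMIS namespace is the standard Library of Congress one.

```python
# Illustrative sketch (not the authors' code): mapping one relational row to
# linked-data triples with a local ontology aligned to a PREMIS term.
from rdflib import Graph, Literal, Namespace, RDF, URIRef

LOCAL = Namespace("http://example.org/dlsystem/")     # hypothetical local ontology
PREMIS = Namespace("http://www.loc.gov/premis/rdf/v1#")

g = Graph()
g.bind("local", LOCAL)
g.bind("premis", PREMIS)

# A row from the legacy database: (id, title, preservation_level)
row = ("obj-42", "Digitized manuscript", "full")

obj = URIRef(LOCAL[row[0]])
g.add((obj, RDF.type, PREMIS.IntellectualEntity))
g.add((obj, LOCAL.title, Literal(row[1])))
# "Tacit" system knowledge made explicit as a vocabulary term:
g.add((obj, LOCAL.preservationLevel, Literal(row[2])))

print(g.serialize(format="turtle"))
```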

12 pages, 1553 KiB  
Article
Cloud-Based Image Retrieval Using GPU Platforms
by Sidi Ahmed Mahmoudi, Mohammed Amin Belarbi, El Wardani Dadi, Saïd Mahmoudi and Mohammed Benjelloun
Computers 2019, 8(2), 48; https://0-doi-org.brum.beds.ac.uk/10.3390/computers8020048 - 14 Jun 2019
Cited by 6 | Viewed by 6559
Abstract
The process of image retrieval presents an interesting tool for different domains related to computer vision, such as multimedia retrieval, pattern recognition, medical imaging, video surveillance, and movement analysis. Visual characteristics of images such as color, texture, and shape are used to identify their content. However, the retrieval process becomes very challenging due to the difficulty of managing large databases in terms of storage, computational complexity, temporal performance, and similarity representation. In this paper, we propose a cloud-based platform that integrates several feature-extraction algorithms used in content-based image retrieval (CBIR) systems. Moreover, we propose an efficient combination of SIFT and SURF descriptors that allows image features to be extracted and matched, and hence improves the image retrieval process. The proposed algorithms have been implemented on the CPU and also adapted to fully exploit the power of GPUs. Our platform is presented with a responsive web solution that offers users the possibility to exploit, test, and evaluate image retrieval methods. The platform offers simple access to algorithms such as the SIFT and SURF descriptors without the need to set up an environment or install anything, while spending minimal effort on preprocessing and configuration. On the other hand, our cloud-based CPU and GPU implementations are scalable, which means that they can be used even with large databases of multimedia documents. The obtained results showed: 1. retrieval quality improvement in terms of recall and precision; 2. performance improvement in terms of computation time as a result of exploiting GPUs in parallel; 3. reduction of energy consumption.
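A rough sketch of combining SIFT and SURF matches into one similarity score; the fusion rule (summing ratio-test survivors) is an illustration rather than the paper's exact combination, and SURF requires an opencv-contrib build with non-free modules enabled.

```python
import cv2

def good_matches(desc_a, desc_b, ratio=0.75):
    """Lowe ratio test over brute-force k-NN matches."""
    matcher = cv2.BFMatcher()
    kept = []
    for pair in matcher.knnMatch(desc_a, desc_b, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            kept.append(pair[0])
    return kept

def similarity(img_a, img_b):
    """Fuse SIFT and SURF evidence as a summed good-match count (toy rule)."""
    sift = cv2.SIFT_create()
    surf = cv2.xfeatures2d.SURF_create(400)   # needs a non-free contrib build
    score = 0
    for detector in (sift, surf):
        _, da = detector.detectAndCompute(img_a, None)
        _, db = detector.detectAndCompute(img_b, None)
        if da is not None and db is not None:
            score += len(good_matches(da, db))
    return score

# img_a = cv2.imread("query.jpg", cv2.IMREAD_GRAYSCALE)
# img_b = cv2.imread("candidate.jpg", cv2.IMREAD_GRAYSCALE)
# print(similarity(img_a, img_b))
```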

29 pages, 1149 KiB  
Article
Modelling and Simulation of a Cloud Platform for Sharing Distributed Digital Fabrication Resources
by Gianluca Cornetta, Francisco Javier Mateos, Abdellah Touhafi and Gabriel-Miro Muntean
Computers 2019, 8(2), 47; https://0-doi-org.brum.beds.ac.uk/10.3390/computers8020047 - 12 Jun 2019
Cited by 7 | Viewed by 5719
Abstract
Fabrication as a Service (FaaS) is a new concept developed within the framework of the NEWTON Horizon 2020 project. It is aimed at empowering digital fabrication laboratories (Fab Labs) by providing hardware and software wrappers that expose expensive numerically-controlled fabrication equipment as web services. More specifically, FaaS leverages cloud and IoT technologies to enable a wide learning community to have remote access to these labs’ computer-controlled tools and equipment over the Internet. In this context, the fabrication machines can be seen as networked resources distributed over a wide geographical area. These resources can communicate through machine-to-machine protocols and a centralized cloud infrastructure and can be digitally monitored and controlled through programmatic interfaces relying on REST APIs. This paper introduces FaaS in the context of Fab Lab challenges and describes FaaS deployment within NEWTON Fab Labs, part of the NEWTON European Horizon 2020 project on technology-enhanced learning. The NEWTON Fab Labs architecture is described in detail, targeting the software, hardware, and network architecture. The system has been extensively load-tested simulating real use-case scenarios, and it is presently in production. In particular, this paper shows how the measured data have been used to build a simulation model to estimate system performance and identify possible bottlenecks. The measurements performed show that the platform delays exhibit a tail distribution with Pareto-like behaviour; this finding has been used to build a simple mathematical model and a simulator on top of CloudSim to estimate the latencies of the critical paths of the NEWTON Fab Lab platform under several load conditions.
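The Pareto-tail finding lends itself to a compact simulation; the sketch below samples critical-path latency as a sum of per-stage Pareto delays, with made-up stage parameters rather than the NEWTON measurements.

```python
# Toy latency model: each service stage contributes a Pareto-tailed delay.
import random

def pareto_delay(scale_ms: float, alpha: float) -> float:
    """One delay sample from a Pareto(alpha) tail with minimum scale_ms."""
    return scale_ms * random.paretovariate(alpha)

def critical_path_latency(stages) -> float:
    """Latency of a chain of services = sum of per-stage Pareto delays."""
    return sum(pareto_delay(scale, alpha) for scale, alpha in stages)

# Three hypothetical stages: (minimum delay in ms, tail index alpha)
stages = [(20, 2.5), (35, 1.9), (10, 3.0)]
samples = sorted(critical_path_latency(stages) for _ in range(10_000))
print("median ms:", round(samples[5000], 1), " p99 ms:", round(samples[9899], 1))
```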

15 pages, 2072 KiB  
Article
An Efficient Energy-Aware Tasks Scheduling with Deadline-Constrained in Cloud Computing
by Said Ben Alla, Hicham Ben Alla, Abdellah Touhafi and Abdellah Ezzati
Computers 2019, 8(2), 46; https://0-doi-org.brum.beds.ac.uk/10.3390/computers8020046 - 10 Jun 2019
Cited by 17 | Viewed by 6270
Abstract
Nowadays, Cloud Computing (CC) has emerged as a new paradigm for hosting and delivering services over the Internet. However, the wider deployment of the Cloud and the rapid increase in the capacity and size of data centers induce a tremendous rise in electricity consumption, escalating data center ownership costs and increasing carbon footprints. This expanding scale of data centers has made energy consumption an imperative issue. Besides, users’ requirements regarding execution time, deadlines, and QoS have become more sophisticated and demanding. These requirements often conflict with the objectives of cloud providers, especially in a high-stress environment in which tasks have very critical deadlines. To address these issues, this paper proposes an efficient Energy-Aware Tasks Scheduling with Deadline-constrained in Cloud Computing (EATSD). The main goal of the proposed solution is to reduce the energy consumption of the cloud resources, consider different users’ priorities, and optimize the makespan under deadline constraints. Further, the proposed algorithm has been simulated using the CloudSim simulator. The experimental results validate that the proposed approach can effectively achieve good performance by minimizing the makespan, reducing energy consumption, and improving resource utilization while meeting deadline constraints.
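A toy greedy scheduler in the spirit of EATSD (not the published algorithm): among the VMs that can still meet a task's deadline, pick the one with the lowest energy cost for that task. All task and VM parameters are illustrative.

```python
from dataclasses import dataclass

@dataclass
class VM:
    mips: float            # processing speed
    power_w: float         # active power draw
    busy_until: float = 0.0

def schedule(tasks, vms):
    """tasks: (length_mi, deadline_s, priority); higher-priority tasks first."""
    plan, energy = [], 0.0
    for length, deadline, _prio in sorted(tasks, key=lambda t: -t[2]):
        best = None
        for vm in vms:
            runtime = length / vm.mips
            finish = vm.busy_until + runtime
            cost = runtime * vm.power_w
            if finish <= deadline and (best is None or cost < best[1]):
                best = (vm, cost, finish)
        if best is None:
            continue  # deadline would be missed; the real EATSD handles this explicitly
        vm, cost, finish = best
        vm.busy_until, energy = finish, energy + cost
        plan.append((length, vm.mips))
    return plan, energy

vms = [VM(mips=1000, power_w=90), VM(mips=2000, power_w=160)]
tasks = [(4000, 5.0, 2), (1000, 2.0, 1), (6000, 8.0, 3)]
print(schedule(tasks, vms))
```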

19 pages, 6521 KiB  
Article
Distributed Management Systems for Infocommunication Networks: A Model Based on TM Forum Frameworx
by Valery Mochalov, Natalia Bratchenko, Gennady Linets and Sergey Yakovlev
Computers 2019, 8(2), 45; https://0-doi-org.brum.beds.ac.uk/10.3390/computers8020045 - 04 Jun 2019
Cited by 5 | Viewed by 5868
Abstract
The existing management systems for networks and communication services do not fully meet users’ demands for next-generation infocommunication services, which are dictated by the business processes of companies. Open digital architecture (ODA) can dramatically simplify and automate main business processes using the logic of distributed computing and management, which allows services to be implemented on a set of network nodes. The performance of a distributed operational management system depends on the quality of solving several tasks: the distribution of program components among processor modules; the prioritization of business processes with parallel execution; the elimination of dead states and interlocks during execution; and the reduction of the system cost of integrating separate components of business processes. The program components can be distributed among processor modules by an iterative algorithm that calculates the frequency of resource conflicts; this algorithm yields a rational distribution in a finite number of iterations. The interlocks of parallel business processes can be eliminated using the classic file-sharing example with two processes together with the methodology of colored Petri nets. The system cost of integration processes in a distributed management system is reduced by partitioning the network into segments with several controllers that interact with each other and manage the network in a coordinated way. This paper develops a model of a distributed operational management system for next-generation infocommunication networks that assesses the efficiency of operational activities for a communication company.
(This article belongs to the Special Issue REMS 2018: Multidisciplinary Symposium on Computer Science and ICT)
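The iterative distribution idea can be sketched as a simple local-search loop: repeatedly move the component whose relocation most reduces same-module resource conflicts, stopping when no move helps. The conflict model below is illustrative, not the paper's algorithm.

```python
import itertools

def conflicts(assign, conflict_pairs):
    """Count conflicting component pairs placed on the same module."""
    return sum(1 for a, b in conflict_pairs if assign[a] == assign[b])

def place(components, modules, conflict_pairs, max_iter=100):
    # Start from a round-robin assignment, then improve greedily.
    assign = {c: modules[i % len(modules)] for i, c in enumerate(components)}
    for _ in range(max_iter):
        best_gain, best_move = 0, None
        for c, m in itertools.product(components, modules):
            old = assign[c]
            if m == old:
                continue
            before = conflicts(assign, conflict_pairs)
            assign[c] = m
            gain = before - conflicts(assign, conflict_pairs)
            assign[c] = old
            if gain > best_gain:
                best_gain, best_move = gain, (c, m)
        if best_move is None:
            break          # converged in a finite number of iterations
        assign[best_move[0]] = best_move[1]
    return assign

print(place(["c1", "c2", "c3"], ["m1", "m2"], [("c1", "c2"), ("c2", "c3")]))
```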

16 pages, 2366 KiB  
Article
Use of Virtual Environment and Virtual Prototypes in Co-Design: The Case of Hospital Design
by Tarja Tiainen and Tiina Jouppila
Computers 2019, 8(2), 44; https://0-doi-org.brum.beds.ac.uk/10.3390/computers8020044 - 01 Jun 2019
Cited by 6 | Viewed by 5841
Abstract
Co-design is used for improving innovation, obtaining better solutions, and achieving higher user satisfaction. In this paper we present how the use of a walk-in virtual environment and actual-size virtual prototypes supports co-design. Unlike most studies, we presented the prototypes to users in an early phase of the design process. This study examines the co-design of healthcare facilities with multi-occupational groups. The practical case examines designing single-patient rooms for an intensive care unit. In this design process, 238 participants from different hospital professions evaluated virtual prototypes in three iterative rounds. The participants improved the design by discussing their work practices. The walk-in virtual environment and actual-size virtual prototypes create an easy setting for users to discuss and evaluate the design without any design knowledge. In addition to describing the co-design results, we also outline some important issues and guidelines for creating the virtual prototypes and organizing the participants’ visits to the virtual environment.
(This article belongs to the Special Issue Augmented and Mixed Reality in Work Context)

19 pages, 1583 KiB  
Article
Autonomous Wireless Sensor Networks in an IPM Spatial Decision Support System
by Mina Petrić, Jurgen Vandendriessche, Cedric Marsboom, Tom Matheussen, Els Ducheyne and Abdellah Touhafi
Computers 2019, 8(2), 43; https://0-doi-org.brum.beds.ac.uk/10.3390/computers8020043 - 28 May 2019
Cited by 4 | Viewed by 5251
Abstract
Until recently, data acquisition in integrated pest management (IPM) relied on manual collection of both pest and environmental data. Autonomous wireless sensor networks (WSN) provide a way forward by reducing the need for manual offload and maintenance; however, there is still a significant gap in pest management using WSN, with most applications failing to provide a low-cost, autonomous monitoring system that can operate in remote areas. In this study, we investigate the feasibility of implementing a reliable, fully independent, low-power WSN that provides high-resolution, near-real-time input to a spatial decision support system (SDSS), capturing the small-scale heterogeneity needed for intelligent IPM. The WSN hosts a dual uplink, taking advantage of both satellite and terrestrial communication. A set of tests was conducted to assess metrics such as signal strength, data transmission, and bandwidth of the SatCom module, as well as mesh configuration, energetic autonomy, point-to-point communication, and data loss of the WSN nodes. Finally, we demonstrate the SDSS output from two vector models forced by WSN data from a field site in Belgium. We believe that this system can be a cost-effective solution for intelligent IPM in remote areas without a reliable terrestrial connection.
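A sketch of a dual-uplink policy of the kind described, with assumed thresholds and stubbed driver calls: prefer the terrestrial link while its signal is usable, fall back to SatCom, and buffer locally when neither is available.

```python
# Placeholder driver: a real node would call its radio/modem stack here.
def send(link: str, samples) -> None:
    print(f"sending {len(samples)} samples via {link}")

def choose_uplink(terrestrial_rssi_dbm: float, satcom_available: bool) -> str:
    TERRESTRIAL_FLOOR_DBM = -100.0   # assumed usable-signal threshold
    if terrestrial_rssi_dbm > TERRESTRIAL_FLOOR_DBM:
        return "terrestrial"
    if satcom_available:
        return "satcom"
    return "buffer"                  # store readings until a link returns

def transmit(samples, rssi_dbm: float, sat_ok: bool):
    link = choose_uplink(rssi_dbm, sat_ok)
    if link == "buffer":
        return samples               # keep for the next attempt, avoiding data loss
    send(link, samples)
    return []

print(transmit([21.3, 21.4], rssi_dbm=-92.0, sat_ok=True))   # terrestrial
print(transmit([21.3, 21.4], rssi_dbm=-110.0, sat_ok=True))  # satcom fallback
```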

14 pages, 2314 KiB  
Article
Improved Measures of Redundancy and Relevance for mRMR Feature Selection
by Insik Jo, Sangbum Lee and Sejong Oh
Computers 2019, 8(2), 42; https://0-doi-org.brum.beds.ac.uk/10.3390/computers8020042 - 27 May 2019
Cited by 50 | Viewed by 8848
Abstract
Many biological and medical datasets have numerous features. Feature selection is one of the data preprocessing steps that can remove noise from data as well as save computing time when the dataset has several hundred thousand or more features. Another goal of feature selection is to improve classification accuracy in machine learning tasks. Minimum Redundancy Maximum Relevance (mRMR) is a well-known feature selection algorithm that selects features by calculating redundancy between features and relevance between features and the class vector. mRMR adopts mutual information theory to measure redundancy and relevance. In this research, we propose a method to improve the performance of mRMR feature selection. We apply Pearson’s correlation coefficient as a measure of redundancy and the R-value as a measure of relevance. To compare the original mRMR and the proposed method, features were selected using both methods from various datasets, and then we performed a classification test. The classification accuracy was used as the measure of performance. In many cases, the proposed method showed higher accuracy than the original mRMR.
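A compact sketch of the modified selection loop: Pearson correlation supplies the redundancy term; since the R-value is defined in the article itself, absolute feature-class correlation stands in for relevance here.

```python
import numpy as np

def select_features(X: np.ndarray, y: np.ndarray, k: int):
    """Greedy max-relevance, min-redundancy selection (toy relevance term)."""
    n_features = X.shape[1]
    corr = np.abs(np.corrcoef(X, rowvar=False))   # feature-feature |r|
    relevance = np.abs(
        [np.corrcoef(X[:, j], y)[0, 1] for j in range(n_features)]
    )
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(n_features):
            if j in selected:
                continue
            redundancy = corr[j, selected].mean()
            score = relevance[j] - redundancy     # relevance minus redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 6))
y = X[:, 0] + 0.5 * X[:, 3] + 0.1 * rng.standard_normal(200)
print(select_features(X, y, k=3))
```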

13 pages, 1676 KiB  
Article
A Sparse Analysis-Based Single Image Super-Resolution
by Vahid Anari, Farbod Razzazi and Rasoul Amirfattahi
Computers 2019, 8(2), 41; https://0-doi-org.brum.beds.ac.uk/10.3390/computers8020041 - 27 May 2019
Cited by 5 | Viewed by 4788
Abstract
In the current study, we were inspired by sparse analysis signal representation theory to propose a novel single-image super-resolution method termed “sparse analysis-based super resolution” (SASR). This study presents and demonstrates mapping between low- and high-resolution images using a coupled sparse analysis operator learning method to reconstruct high-resolution (HR) images. We further show that the proposed method selects more informative high- and low-resolution (LR) learning patches based on image texture complexity to train the high- and low-resolution operators more efficiently. The coupled high- and low-resolution operators are used for high-resolution image reconstruction at a low computational cost. The experimental results, for the quantitative criteria of peak signal-to-noise ratio (PSNR), root mean square error (RMSE), structural similarity index (SSIM), and elapsed time, for human observation as a qualitative measure, and for computational complexity, verify the improvements offered by the proposed SASR algorithm.
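The quantitative part of such an evaluation is easy to reproduce; the sketch below scores a reconstruction against ground truth with PSNR, SSIM, and RMSE on synthetic stand-in images.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(hr_true: np.ndarray, hr_est: np.ndarray) -> dict:
    """PSNR/SSIM/RMSE for float images in [0, 1]."""
    rmse = float(np.sqrt(np.mean((hr_true - hr_est) ** 2)))
    return {
        "PSNR_dB": peak_signal_noise_ratio(hr_true, hr_est, data_range=1.0),
        "SSIM": structural_similarity(hr_true, hr_est, data_range=1.0),
        "RMSE": rmse,
    }

rng = np.random.default_rng(0)
truth = rng.random((64, 64))
estimate = np.clip(truth + 0.05 * rng.standard_normal((64, 64)), 0, 1)
print(evaluate(truth, estimate))
```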

4 pages, 164 KiB  
Editorial
The Emergence of Internet of Things (IoT): Connecting Anything, Anywhere
by Md Arafatur Rahman and A. Taufiq Asyhari
Computers 2019, 8(2), 40; https://0-doi-org.brum.beds.ac.uk/10.3390/computers8020040 - 17 May 2019
Cited by 42 | Viewed by 9170
Abstract
The Internet of Things (IoT) plays the role of an expert technical tool by turning physical resources into smart entities through existing network infrastructures. Its prime focus is to provide smart and seamless services at the user end without any interruption. The IoT paradigm is aimed at formulating a complex information system by combining sensor data acquisition, efficient data exchange through networking, machine learning, artificial intelligence, big data, and clouds. Conversely, collecting information while maintaining the confidentiality of independent entities, and operating with privacy and security provision in IoT, is the main issue of concern. Thus, new challenges in using and advancing existing technologies, such as new applications and usage policies, cloud computing, smart vehicular systems, protective protocols, analytics tools for IoT-generated data, and communication protocols, deserve further investigation. This Special Issue reviews the latest contributions of IoT application frameworks and the advancement of their supporting technology. It is imperative for academic and industrial stakeholders to propagate solutions that can leverage the opportunities and minimize the challenges in terms of using this state-of-the-art technological development.
12 pages, 4861 KiB  
Article
Diffusion of Innovation: Case of Co-Design of Cabins in Mobile Work Machine Industry
by Asko Ellman and Tarja Tiainen
Computers 2019, 8(2), 39; https://0-doi-org.brum.beds.ac.uk/10.3390/computers8020039 - 11 May 2019
Cited by 4 | Viewed by 5315
Abstract
This paper describes the development of virtual reality use for work purposes in one application area over a decade. Virtual reality technology has developed rapidly, from walk-in CAVE-like virtual environments to head-mounted displays, within a decade. In this paper, the development is studied through the lens of diffusion of innovation theory, which focuses not only on the innovation itself but also on the social system. The development of virtual technology is studied through one case: cabin design in the mobile work machine industry. This design process has been especially suitable for the use of virtual reality technology.
(This article belongs to the Special Issue Augmented and Mixed Reality in Work Context)

33 pages, 11112 KiB  
Article
Procedural Modeling of Buildings Composed of Arbitrarily-Shaped Floor-Plans: Background, Progress, Contributions and Challenges of a Methodology Oriented to Cultural Heritage
by Telmo Adão, Luís Pádua, Pedro Marques, Joaquim João Sousa, Emanuel Peres and Luís Magalhães
Computers 2019, 8(2), 38; https://0-doi-org.brum.beds.ac.uk/10.3390/computers8020038 - 11 May 2019
Cited by 8 | Viewed by 8084
Abstract
Virtual models’ production is of high pertinence in research and business fields such as architecture, archeology, and video games, whose requirements might range from expeditious virtual building generation for extensively populating computer-based synthesized environments to hypothesis testing through digital reconstructions. There are some known approaches to achieve the production/reconstruction of virtual models, namely digital settlements and buildings. Manual modeling requires highly skilled manpower and a considerable amount of time to achieve the desired digital contents, in a process composed of many stages that are typically repeated over time. Both image-based and range-scanning approaches are more suitable for digital preservation of well-conserved structures. However, they usually require trained human resources to prepare field operations and manipulate expensive equipment (e.g., 3D scanners) and advanced software tools (e.g., photogrammetric applications). To tackle the issues presented by previous approaches, a class of cost-effective, efficient, and scarce-data-tolerant techniques/methods known as procedural modeling has been developed, aiming at the semi- or fully-automatic production of virtual environments composed of hollow buildings exclusively represented by outer façades or of traversable buildings with interiors, either for expeditious generation or for reconstruction. Despite the many achievements of the existing procedural modeling approaches, the production of virtual buildings with both interiors and exteriors composed of non-rectangular shapes (convex or concave n-gons) at the floor-plan level is still seldom addressed. Therefore, a methodology (and respective system) capable of semi-automatically producing ontology-based traversable buildings composed of arbitrarily-shaped floor-plans has been proposed and continuously developed, and is analyzed in this paper, along with its contributions towards the accomplishment of other virtual reality (VR) and augmented reality (AR) projects/works oriented to digital applications for cultural heritage. Recent roof-production-related enhancements resorting to the well-established straight skeleton approach are also addressed, as well as forthcoming challenges. The aim is to consolidate this procedural modeling methodology as a valuable computer graphics work and discuss its future directions.
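The floor-plan-driven core of such generation can be illustrated in a few lines: extruding an arbitrary (convex or concave) footprint into wall quads. Roof production via the straight skeleton, as discussed in the paper, is a separate and harder step.

```python
def extrude_walls(footprint, height):
    """footprint: CCW list of (x, y); returns one quad (4 vertices) per edge."""
    walls = []
    n = len(footprint)
    for i in range(n):
        (x0, y0), (x1, y1) = footprint[i], footprint[(i + 1) % n]
        walls.append([(x0, y0, 0.0), (x1, y1, 0.0),
                      (x1, y1, height), (x0, y0, height)])
    return walls

# Concave L-shaped floor plan (a non-rectangular n-gon), 3 m walls:
L_shape = [(0, 0), (4, 0), (4, 2), (2, 2), (2, 4), (0, 4)]
print(len(extrude_walls(L_shape, 3.0)), "wall quads")
```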

29 pages, 2357 KiB  
Article
An App that Changes Mentalities about Mobile Learning—The EduPARK Augmented Reality Activity
by Lúcia Pombo and Margarida M. Marques
Computers 2019, 8(2), 37; https://0-doi-org.brum.beds.ac.uk/10.3390/computers8020037 - 10 May 2019
Cited by 10 | Viewed by 6211
Abstract
The public usually associates mobile devices with distraction and learning disruption, and they are not frequently used in formal education. Additionally, games and parks are both associated with play and leisure time, not with learning. This study shows that the combination of mobiles, games, and parks can promote authentic learning and contribute to changing conventional mentalities. The study is framed by the EduPARK project, which created an innovative app for authentic learning, supported by mobile and augmented reality (AR) technologies, for game-based approaches in a green park. A case study of the EduPARK strategy’s educational value, according to 86 Basic Education undergraduate students, was conducted. The participants experienced the app in the park and presented their opinions about: (i) mobile learning; (ii) the app’s usability; and (iii) the impact of the educational strategy in terms of factors such as intrinsic motivation and authentic learning. Data collection included a survey and document collection of student reflections. Data were subjected to descriptive statistics, System Usability Scale score computation, and content analysis. Students considered that the EduPARK strategy has educational value, particularly regarding content learning and motivation. From this study emerged seven supporting pillars that constitute a set of guidelines for the future development of mobile game-based learning.
(This article belongs to the Special Issue Augmented and Mixed Reality in Work Context)
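The System Usability Scale scoring mentioned above follows a standard rule (odd items contribute r - 1, even items 5 - r, total scaled by 2.5); a sketch with example responses:

```python
def sus_score(responses):
    """Standard SUS scoring: ten 1-5 responses -> a 0-100 usability score."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)  # odd positive, even negative
    return total * 2.5                               # scale 0-40 raw points to 0-100

print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))     # example answers -> 85.0
```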

13 pages, 768 KiB  
Article
Homogenous Granulation and Its Epsilon Variant
by Krzysztof Ropiak and Piotr Artiemjew
Computers 2019, 8(2), 36; https://0-doi-org.brum.beds.ac.uk/10.3390/computers8020036 - 10 May 2019
Cited by 3 | Viewed by 5298
Abstract
In the era of Big Data, there is still a place for techniques that reduce the data size while maintaining its internal knowledge. This problem is the main subject of research of a family of granulation techniques proposed by Polkowski. In our recent works, we have developed new, effective, and simple techniques for decision approximation: homogenous granulation and epsilon homogenous granulation. The real problem in this family of methods was the choice of an effective approximation parameter for arbitrary datasets. It was resolved by the homogenous techniques: there is no need to estimate optimal approximation parameters for these methods, because they are set in a dynamic way according to the data’s internal indiscernibility level. This work is an extension of the work presented at the ICIST 2018 conference. We present results for homogenous and epsilon homogenous granulation with a comparison of their effectiveness.
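A toy rendering of the homogenous granulation idea: around each object, grow the indiscernibility radius while the granule stays pure in its decision, and stop at the first impure radius. The data encoding is illustrative, not the authors' implementation.

```python
def granule(center, objects, radius):
    """Objects within 'radius' attribute mismatches of the center object."""
    cx, _ = center
    return [(x, d) for x, d in objects
            if sum(a != b for a, b in zip(cx, x)) <= radius]

def homogenous_granule(center, objects):
    """Largest granule around 'center' that is pure in its decision class."""
    _, decision = center
    best = [center]
    for radius in range(len(center[0]) + 1):
        g = granule(center, objects, radius)
        if all(d == decision for _, d in g):
            best = g       # still pure: accept the larger granule
        else:
            break          # first impure radius: stop (homogeneity bound)
    return best

data = [((1, 0, 1), "yes"), ((1, 1, 1), "yes"),
        ((0, 0, 0), "no"), ((1, 0, 0), "no")]
print(homogenous_granule(data[0], data))
```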

16 pages, 1946 KiB  
Article
Detecting Website Defacements Based on Machine Learning Techniques and Attack Signatures
by Xuan Dau Hoang and Ngoc Tuong Nguyen
Computers 2019, 8(2), 35; https://0-doi-org.brum.beds.ac.uk/10.3390/computers8020035 - 08 May 2019
Cited by 14 | Viewed by 8091
Abstract
Defacement attacks have long been considered one of the prime threats to websites and web applications of companies, enterprises, and government organizations. Defacement attacks can bring serious consequences to the owners of websites, including immediate interruption of website operations and damage to the owner’s reputation, which may result in huge financial losses. Many solutions have been researched and deployed for monitoring and detecting website defacement attacks, such as those based on checksum comparison, diff comparison, DOM tree analysis, and complex algorithms. However, some solutions only work on static websites and others demand extensive computing resources. This paper proposes a hybrid defacement-detection model based on the combination of machine learning-based detection and signature-based detection. The machine learning-based detection first constructs a detection profile using training data of both normal and defaced web pages. Then, it uses the profile to classify monitored web pages as either normal or attacked. The machine learning-based component can effectively detect defacements on both static and dynamic pages. On the other hand, the signature-based detection is used to boost the model’s processing performance for common types of defacements. Extensive experiments show that our model produces an overall accuracy of more than 99.26% and a false positive rate of about 0.27%. Moreover, our model is suitable for implementing a real-time website defacement monitoring system because it does not demand extensive computing resources.
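The shape of the hybrid detector can be sketched as a cheap signature pass followed by a text classifier; the signatures, training pages, and classifier choice below are placeholders, not the paper's models.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

SIGNATURES = ("hacked by", "owned by", "defaced")     # illustrative patterns

train_pages = ["welcome to our shop", "annual report 2019",
               "hacked by team x", "this site was owned by z"]
train_labels = ["normal", "normal", "defaced", "defaced"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
model.fit(train_pages, train_labels)

def classify(page_text: str) -> str:
    text = page_text.lower()
    if any(sig in text for sig in SIGNATURES):        # fast signature pass first
        return "defaced"
    return model.predict([text])[0]                   # ML pass for the rest

print(classify("HACKED BY team x"), classify("annual report 2020"))
```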

28 pages, 700 KiB  
Article
Security Pattern for Cloud SaaS: From System and Data Security to Privacy Case Study in AWS and Azure
by Annanda Rath, Bojan Spasic, Nick Boucart and Philippe Thiran
Computers 2019, 8(2), 34; https://0-doi-org.brum.beds.ac.uk/10.3390/computers8020034 - 03 May 2019
Cited by 17 | Viewed by 16523
Abstract
The Cloud is fast becoming a popular platform for SaaS, a widely used software delivery model. This is because the Cloud has many advantages over traditional private infrastructure, such as increased flexibility, no maintenance, a smaller management burden, easy access, and easy information sharing. However, there are many concerns around issues like system security, communication security, data security, privacy, latency, and availability. When designing and developing a Cloud SaaS application, these security issues need to be addressed in order to ensure regulatory compliance, security, and a trusted environment for Cloud SaaS users. In this paper, we explore security patterns for Cloud SaaS. We work on patterns covering different security aspects, from system and data security to privacy. Our goal is to produce security best practices and security knowledge documentation that SaaS developers can use as a guideline for developing Cloud SaaS applications from the ground up. In addition, we also provide a case study of security patterns and solutions in AWS and Azure.

12 pages, 693 KiB  
Article
A Novel Dictionary-Driven Mental Spelling Application Based on Code-Modulated Visual Evoked Potentials
by Felix Gembler and Ivan Volosyak
Computers 2019, 8(2), 33; https://0-doi-org.brum.beds.ac.uk/10.3390/computers8020033 - 30 Apr 2019
Cited by 9 | Viewed by 5570
Abstract
Brain–computer interfaces (BCIs) based on code-modulated visual evoked potentials (c-VEPs) typically utilize a synchronous approach to identify targets (i.e., after preset time periods the system produces command outputs). Hence, users have only a limited amount of time to fixate on a desired target. This hinders the use of more complex interfaces, as these require the BCI to distinguish between intentional and unintentional fixations. In this article, we investigate a dynamic sliding-window mechanism as well as the implementation of software-based stimulus synchronization to enable threshold-based target identification for the c-VEP paradigm. To further improve the usability of the system, an ensemble-based classification strategy was investigated. In addition, a software-based approach for stimulus-onset determination is proposed, which allows for an easier setup of the system, as it reduces additional hardware dependencies. The methods were tested with an eight-target spelling application utilizing an n-gram word prediction model. The performance of eighteen participants without disabilities was tested; all participants completed word- and sentence-spelling tasks using the c-VEP BCI with mean information transfer rates (ITRs) of 75.7 and 57.8 bpm, respectively.
(This article belongs to the Special Issue Computer Technologies for Human-Centered Cyber World)
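For context, the bpm figures above are information transfer rates; a standard way to compute them is the Wolpaw formula, sketched below. The eight-target count matches the speller described, while the accuracy and selection time are assumed example values.

```python
from math import log2

def itr_bits_per_min(n_targets: int, accuracy: float,
                     secs_per_selection: float) -> float:
    """Wolpaw ITR: bits per selection scaled to selections per minute."""
    p, n = accuracy, n_targets
    bits = log2(n)
    if 0 < p < 1:
        bits += p * log2(p) + (1 - p) * log2((1 - p) / (n - 1))
    elif p == 0:
        bits = 0.0
    return bits * (60.0 / secs_per_selection)

# Eight targets, 95% accuracy, one selection every 2 s -> about 77 bpm:
print(round(itr_bits_per_min(8, 0.95, 2.0), 1))
```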

21 pages, 8921 KiB  
Article
A Comparison of Compression Codecs for Maritime and Sonar Images in Bandwidth Constrained Applications
by Chiman Kwan, Jude Larkin, Bence Budavari, Bryan Chou, Eric Shang and Trac D. Tran
Computers 2019, 8(2), 32; https://0-doi-org.brum.beds.ac.uk/10.3390/computers8020032 - 28 Apr 2019
Cited by 12 | Viewed by 6700
Abstract
Since lossless compression can only achieve two to four times data compression, it may not be efficient to deploy lossless compression in bandwidth-constrained applications. Instead, it would be more economical to adopt perceptually lossless compression, which can attain ten times or more compression without loss of important information. Consequently, one can transmit more images over bandwidth-limited channels. In this research, we first aimed to compare and select the best compression algorithm in the literature to achieve a compression ratio of 0.1 and 40 dB or more in terms of a performance metric known as the human visual system model (HVSm) for maritime and sonar images. Our second objective was to demonstrate error-concealment algorithms that can handle corrupted pixels due to transmission errors in interference-prone communication channels. Using four state-of-the-art codecs, we demonstrated that perceptually lossless compression can be achieved for realistic maritime and sonar images. At the same time, we also selected the best codec for this purpose using four performance metrics. Finally, error concealment was demonstrated to be useful in recovering lost pixels due to transmission errors.
(This article belongs to the Special Issue Vision, Image and Signal Processing (ICVISP))
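One way to reproduce the ratio-versus-quality trade-off on any test image is to walk encoder quality down until the size ratio hits 0.1 and then check PSNR; JPEG stands in here for the codecs actually compared in the paper.

```python
import io

import numpy as np
from PIL import Image

def psnr(a: np.ndarray, b: np.ndarray) -> float:
    """PSNR in dB for 8-bit images."""
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

def compress_to_ratio(img: Image.Image, target_ratio: float = 0.1):
    """Lower JPEG quality until compressed/raw size <= target_ratio."""
    raw_bytes = img.width * img.height * len(img.getbands())
    for quality in range(95, 4, -5):
        buf = io.BytesIO()
        img.save(buf, "JPEG", quality=quality)
        if buf.tell() / raw_bytes <= target_ratio:
            decoded = Image.open(io.BytesIO(buf.getvalue()))
            return quality, psnr(np.asarray(img), np.asarray(decoded))
    return None  # target ratio not reachable for this image

# img = Image.open("sonar.png").convert("L")   # any 8-bit test image
# print(compress_to_ratio(img))
```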

20 pages, 2171 KiB  
Article
The Harvest Coach Architecture: Embedding Deviation-Tolerance in a Harvest Logistic Solution
by Hugo Daniel Macedo, René Søndergaard Nilsson and Peter Gorm Larsen
Computers 2019, 8(2), 31; https://0-doi-org.brum.beds.ac.uk/10.3390/computers8020031 - 23 Apr 2019
Cited by 6 | Viewed by 5259
Abstract
We introduce a deviation-tolerance software architecture, devised for a prototype of a cloud-based harvest operation optimisation system issuing harvest plans. The deviation-tolerance architecture adapts the fault-tolerance notions originating in the area of systems engineering to the harvest domain and embeds them into the Vienna Development Method (VDM) model at the core of our harvest logistics system prototype. The fault-tolerance supervision/execution-level architecture is framed under the notion of a “harvest coach”, which diagnoses deviations from the planned operations using “harvest deviation monitors” and deploys a novel “plan” (controller) that mitigates the encountered “deviation” (fault). The architecture enabled an early start of field experiments with the harvest logistics system prototype, which led to the validation/refutation of early design-stage assumptions about the behaviours and capabilities of the diverse system components. For instance, we incidentally found discrepancies in the arithmetic precision of open-source libraries used in the conversion of vehicle positioning coordinates, we assessed the maturity of the frameworks used to develop the field user interfaces, and we calibrated the level of system-operator interactivity when deviations occur. The obtained results indicate that the architecture may have a positive impact in the context of developing systems featuring intrinsic human-driven deviations that require mitigation.
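The coordinate-conversion precision issue is easy to demonstrate: the sketch below runs a simple equirectangular lat/lon-to-metres projection at single and double precision and prints the drift. The projection and coordinates are illustrative, not the project's libraries.

```python
import numpy as np

def east_north(lat, lon, lat0, lon0, dtype):
    """Equirectangular lat/lon -> local metres, at the given float precision."""
    R = dtype(6_371_000.0)
    d2r = dtype(np.pi) / dtype(180.0)
    x = R * (dtype(lon) - dtype(lon0)) * d2r * np.cos(dtype(lat0) * d2r)
    y = R * (dtype(lat) - dtype(lat0)) * d2r
    return float(x), float(y)

# Nearby points: subtracting close coordinates amplifies float32 rounding.
p64 = east_north(56.1629, 10.2039, 56.1628, 10.2037, np.float64)
p32 = east_north(56.1629, 10.2039, 56.1628, 10.2037, np.float32)
print("drift (m):", abs(p64[0] - p32[0]), abs(p64[1] - p32[1]))
```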

19 pages, 2353 KiB  
Article
A Study of Image Upsampling and Downsampling Filters
by Dragoș Dumitrescu and Costin-Anton Boiangiu
Computers 2019, 8(2), 30; https://0-doi-org.brum.beds.ac.uk/10.3390/computers8020030 - 23 Apr 2019
Cited by 23 | Viewed by 11138
Abstract
In this paper, a set of techniques used for downsampling and upsampling of 2D images is analyzed on various image datasets. The comparison takes into account a significant number of interpolation kernels, their parameters, and their algebraic form, focusing mostly on linear interpolation methods with symmetric kernels. The most suitable metrics for measuring the performance of combinations of upsampling and downsampling filters are presented, discussing their strengths and weaknesses. A test benchmark is proposed, and the obtained results are analyzed with respect to the presented metrics, offering explanations for specific filter behaviors in general or in certain circumstances. In the end, a set of filter and parameter recommendations is offered, based on extensive testing on carefully selected image datasets. The entire research is based on the study of a large set of research papers and on a solid discussion of the underlying signal processing theory.
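A benchmark skeleton matching the paper's setup: downsample then upsample with every kernel pair and score the round trip against the original. Pillow's built-in kernels stand in for the larger kernel set studied.

```python
import numpy as np
from PIL import Image

KERNELS = {"nearest": Image.NEAREST, "bilinear": Image.BILINEAR,
           "bicubic": Image.BICUBIC, "lanczos": Image.LANCZOS}

def round_trip_mse(img: Image.Image, down_k, up_k, factor: int = 2) -> float:
    """MSE after downsampling by 'factor' with down_k and upsampling with up_k."""
    small = img.resize((img.width // factor, img.height // factor), down_k)
    back = small.resize((img.width, img.height), up_k)
    a, b = np.asarray(img, float), np.asarray(back, float)
    return float(np.mean((a - b) ** 2))

img = Image.open("test.png").convert("L")   # any grayscale test image
for dn, dk in KERNELS.items():
    for un, uk in KERNELS.items():
        print(f"{dn:>8} -> {un:<8} MSE {round_trip_mse(img, dk, uk):.2f}")
```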

21 pages, 62249 KiB  
Article
Enhanced Shadow Retargeting with Light-Source Estimation Using Flat Fresnel Lenses
by Llogari Casas Cambra, Matthias Fauconneau, Maggie Kosek, Kieran Mclister and Kenny Mitchell
Computers 2019, 8(2), 29; https://0-doi-org.brum.beds.ac.uk/10.3390/computers8020029 - 02 Apr 2019
Cited by 2 | Viewed by 6918
Abstract
Shadow-retargeting maps depict the appearance of real shadows to virtual shadows given corresponding deformation of scene geometry, such that appearance is seamlessly maintained. By performing virtual shadow reconstruction from unoccluded real-shadow samples observed in the camera frame, this method efficiently recovers deformed shadow appearance. In this manuscript, we introduce a light-estimation approach that enables light-source detection using flat Fresnel lenses that allow this method to work without a set of pre-established conditions. We extend the adeptness of this approach by handling scenarios with multiple receiver surfaces and a non-grounded occluder with high accuracy. Results are presented on a range of objects, deformations, and illumination conditions in real-time Augmented Reality (AR) on a mobile device. We demonstrate the practical application of the method in generating otherwise laborious in-betweening frames for 3D-printed stop-motion animation.

12 pages, 4470 KiB  
Article
The Application of Ant Colony Algorithms to Improving the Operation of Traction Rectifier Transformers
by Barbara Kulesz, Andrzej Sikora and Adam Zielonka
Computers 2019, 8(2), 28; https://0-doi-org.brum.beds.ac.uk/10.3390/computers8020028 - 28 Mar 2019
Cited by 2 | Viewed by 5184
Abstract
In this paper, we discuss a technical issue occurring in electric traction. Tram traction may use DC voltage, which is obtained by rectifying the AC voltage supplied by the power grid. In the simplest and most commonly used design, only uncontrolled diode rectifiers are used. The rectified voltage is not smooth; it always contains a pulsating (AC) component. The amount of pulsation varies and depends, among other factors, on the design of the transformer-rectifier set. In the 12-pulse system, we use a three-winding transformer consisting of one primary winding and two secondary windings: one delta-connected and the other star-connected. The unbalance of the secondary windings is an extra factor increasing the pulsation of the DC voltage. To equalize the secondary-side voltages, a tap changer may be used. The setting of the tap changer is the question resolved in this paper; it is optimized by application of the ant colony optimization (ACO) algorithm. We have analyzed different supply voltage variants, in particular distorted voltage containing 5th and 7th harmonics. The results of applying ant colony optimization are described in this paper.
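A minimal ant-colony sketch for the discrete choice the paper optimizes: selecting the tap position that minimizes DC-side pulsation. The pulsation function here is a placeholder for the transformer-rectifier model.

```python
import random

TAPS = list(range(-4, 5))                 # hypothetical tap positions

def pulsation(tap: int) -> float:
    """Stand-in objective; a real model would evaluate the DC ripple."""
    return (tap - 1.3) ** 2 + 0.1         # minimum near tap 1

def aco(n_ants=20, n_iters=50, rho=0.3, seed=1):
    random.seed(seed)
    tau = {t: 1.0 for t in TAPS}          # pheromone per tap position
    best_tap, best_cost = None, float("inf")
    for _ in range(n_iters):
        total = sum(tau.values())
        for _ in range(n_ants):
            tap = random.choices(TAPS, weights=[tau[t] / total for t in TAPS])[0]
            cost = pulsation(tap)
            if cost < best_cost:
                best_tap, best_cost = tap, cost
        for t in TAPS:                    # evaporate, then reinforce the best
            tau[t] *= (1 - rho)
        tau[best_tap] += 1.0 / best_cost
    return best_tap, best_cost

print(aco())  # expected to settle on tap 1
```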
