Informatics, Volume 6, Issue 1 (March 2019) – 14 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
21 pages, 465 KiB  
Article
Selective Wander Join: Fast Progressive Visualizations for Data Joins
by Marianne Procopio, Carlos Scheidegger, Eugene Wu and Remco Chang
Informatics 2019, 6(1), 14; https://0-doi-org.brum.beds.ac.uk/10.3390/informatics6010014 - 25 Mar 2019
Cited by 7 | Viewed by 6230
Abstract
Progressive visualization offers a great deal of promise for big data visualization; however, current progressive visualization systems do not allow for continuous interaction. What if users want to see more confident results on a subset of the visualization? This can happen when users are in exploratory analysis mode but want to ask some directed questions of the data as well. In a progressive visualization system, the online aggregation algorithm determines the database sampling rate and resulting convergence rate, not the user. In this paper, we extend a recent method in online aggregation, called Wander Join, that is optimized for queries that join tables, one of the most computationally expensive operations. This extension leverages importance sampling to enable user-driven sampling when data joins are in the query. We applied user interaction techniques that allow the user to view and adjust the convergence rate, providing more transparency and control over the online aggregation process. By leveraging importance sampling, our extension of Wander Join also allows for stratified sampling of groups when there is data distribution skew. We also improve the convergence rate of filtering queries, but with additional overhead costs not needed in the original Wander Join algorithm. Full article
(This article belongs to the Special Issue Progressive Visual Analytics)
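As an illustration of the sampling idea described in this abstract, the following minimal Python sketch (toy tables, weights and function names are assumptions, not the authors' implementation) estimates a SUM aggregate over a join with random walks while oversampling a user-selected subset via importance weights:

```python
import random
from collections import defaultdict

# Toy tables: R(key, val) joins S(key, payload) on `key`.
R = [{"key": k % 10, "val": float(k)} for k in range(1000)]
S_index = defaultdict(list)
for key in range(10):
    for j in range(5):                        # every key matches 5 rows in S
        S_index[key].append({"key": key, "payload": j})

def estimate_sum(n_walks, focus=lambda r: False, focus_boost=5.0):
    """Estimate SUM(r.val) over the join of R and S with importance-weighted
    random walks; rows where `focus` is True are sampled `focus_boost` times
    more often, and the weight correction keeps the estimate unbiased."""
    weights = [focus_boost if focus(r) else 1.0 for r in R]
    total_w = sum(weights)
    estimate = 0.0
    for _ in range(n_walks):
        r = random.choices(R, weights=weights, k=1)[0]
        p_r = (focus_boost if focus(r) else 1.0) / total_w
        matches = S_index[r["key"]]
        if not matches:
            continue                          # failed walk contributes zero
        s = random.choice(matches)            # extend the walk uniformly into S
        p_path = p_r / len(matches)           # probability of this join path
        estimate += r["val"] / p_path         # Horvitz-Thompson contribution
    return estimate / n_walks

truth = sum(r["val"] * len(S_index[r["key"]]) for r in R)
approx = estimate_sum(20000, focus=lambda r: r["key"] < 3)
print(f"exact: {truth:.0f}  progressive estimate: {approx:.0f}")
```

Because each walk's contribution is divided by its exact sampling probability, the oversampled subset converges faster without biasing the overall estimate.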

21 pages, 3315 KiB  
Article
Creating a Multimodal Translation Tool and Testing Machine Translation Integration Using Touch and Voice
by Carlos S. C. Teixeira, Joss Moorkens, Daniel Turner, Joris Vreeke and Andy Way
Informatics 2019, 6(1), 13; https://0-doi-org.brum.beds.ac.uk/10.3390/informatics6010013 - 25 Mar 2019
Cited by 11 | Viewed by 8066
Abstract
Commercial software tools for translation have, until now, been based on the traditional input modes of keyboard and mouse, latterly with a small amount of speech recognition input becoming popular. In order to test whether a greater variety of input modes might aid translation from scratch, translation using translation memories, or machine translation postediting, we developed a web-based translation editing interface that permits multimodal input via touch-enabled screens and speech recognition in addition to keyboard and mouse. The tool also conforms to web accessibility standards. This article describes the tool and its development process over several iterations. Between these iterations we carried out two usability studies, also reported here. Findings were promising, albeit somewhat inconclusive. Participants liked the tool and the speech recognition functionality. Reports of the touchscreen were mixed, and we consider that it may require further research to incorporate touch into a translation interface in a usable way. Full article
(This article belongs to the Special Issue Advances in Computer-Aided Translation Technology)

15 pages, 2304 KiB  
Article
Improvement in the Efficiency of a Distributed Multi-Label Text Classification Algorithm Using Infrastructure and Task-Related Data
by Martin Sarnovsky and Marek Olejnik
Informatics 2019, 6(1), 12; https://0-doi-org.brum.beds.ac.uk/10.3390/informatics6010012 - 18 Mar 2019
Cited by 2 | Viewed by 5722
Abstract
Distributed computing technologies allow a wide variety of tasks that use large amounts of data to be solved. Various paradigms and technologies are already widely used, but many of them fall short when it comes to the optimization of resource usage. The aim of this paper is to present optimization methods that increase the efficiency of distributed implementations of a text-mining model by utilizing information about the text-mining task extracted from the data and information about the current state of the distributed environment obtained from the computational nodes, in order to improve the distribution of the task across the distributed infrastructure. Two optimization solutions are developed and implemented, both based on the prediction of the expected task duration on the existing infrastructure. The solutions are experimentally evaluated in a scenario where a distributed tree-based multi-label classifier is built based on two standard text data collections. Full article
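As a rough illustration of the duration-prediction idea in this abstract, the sketch below (synthetic history, feature names and the regressor are assumptions, not the paper's implementation) fits a simple model of task duration from task size and node load and uses it to place tasks greedily:

```python
from sklearn.linear_model import LinearRegression

# Historical runs: [n_documents, n_labels, node_cpu_load] -> duration in seconds
X_hist = [[1000, 10, 0.2], [5000, 40, 0.5], [2000, 15, 0.8], [8000, 60, 0.3]]
y_hist = [12.0, 95.0, 40.0, 130.0]
model = LinearRegression().fit(X_hist, y_hist)

def schedule(tasks, nodes):
    """Greedy placement: put each task on the node whose predicted finish
    time (current queue + predicted task duration) is smallest."""
    finish = {n["name"]: 0.0 for n in nodes}
    plan = []
    for t in tasks:
        best = min(
            nodes,
            key=lambda n: finish[n["name"]]
            + model.predict([[t["docs"], t["labels"], n["load"]]])[0],
        )
        dur = model.predict([[t["docs"], t["labels"], best["load"]]])[0]
        finish[best["name"]] += dur
        plan.append((t["id"], best["name"], round(dur, 1)))
    return plan

tasks = [{"id": i, "docs": 3000 + 500 * i, "labels": 20 + i} for i in range(4)]
nodes = [{"name": "node-a", "load": 0.2}, {"name": "node-b", "load": 0.7}]
print(schedule(tasks, nodes))
```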

19 pages, 1534 KiB  
Article
IGR Token-Raw Material and Ingredient Certification of Recipe Based Foods Using Smart Contracts
by Ricardo Borges dos Santos, Nunzio Marco Torrisi, Erick Reyann Kasai Yamada and Rodrigo Palucci Pantoni
Informatics 2019, 6(1), 11; https://0-doi-org.brum.beds.ac.uk/10.3390/informatics6010011 - 11 Mar 2019
Cited by 25 | Viewed by 10287
Abstract
The use of smart contracts and blockchain tokens to implement a consumer-trustworthy ingredient certification scheme for commingled, i.e., recipe-based, food products is described. The proposed framework allows ingredients that carry any desired property (including social or environmental customer-perceived value) to be certified by any certification authority, at the moment of harvest or extraction, using the IGR Ethereum token. The mechanism involves the transfer of tokens containing the internet URL published on the authority's website from the farmer all along the supply chain to the final consumer at each transfer of custody of the ingredient, following the Critical Tracking Event/Key Data Elements (CTE/KDE) philosophy of the Institute of Food Technologists (IFT). This allows the end consumer to easily inspect and be assured of the origin of the ingredient by means of a mobile application. A code implementation of the framework was successfully deployed, tested and is running as a beta version on the Ethereum live blockchain as the IGR token. The main contribution of the framework is the possibility to ensure the true origin of any instance or lot of an ingredient within a recipe to the customer, without harming the food processor's legitimate right to protect its recipes and suppliers. Full article
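The custody-transfer idea can be pictured with the toy Python simulation below (the data model and names are illustrative assumptions; the actual framework is an Ethereum smart contract, not Python):

```python
from dataclasses import dataclass, field

@dataclass
class IngredientToken:
    """Token minted by a certification authority for one ingredient lot."""
    lot_id: str
    certificate_url: str                      # published on the authority's site
    custody_chain: list = field(default_factory=list)

    def transfer(self, from_party: str, to_party: str) -> None:
        # One entry per Critical Tracking Event (change of custody).
        self.custody_chain.append((from_party, to_party))

    def provenance(self) -> str:
        chain = " -> ".join([self.custody_chain[0][0]]
                            + [to for _, to in self.custody_chain])
        return f"lot {self.lot_id}: {chain} (cert: {self.certificate_url})"

# Mint at harvest, then follow the lot along the supply chain to the consumer.
token = IngredientToken("cocoa-2019-001", "https://authority.example/cert/123")
token.transfer("farmer", "processor")
token.transfer("processor", "retailer")
token.transfer("retailer", "consumer")
print(token.provenance())
```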

13 pages, 2595 KiB  
Article
ETL Best Practices for Data Quality Checks in RIS Databases
by Otmane Azeroual, Gunter Saake and Mohammad Abuosba
Informatics 2019, 6(1), 10; https://0-doi-org.brum.beds.ac.uk/10.3390/informatics6010010 - 05 Mar 2019
Cited by 15 | Viewed by 9727
Abstract
The topic of data integration from external data sources or independent IT systems has recently received increasing attention in IT departments as well as at management level, in particular concerning data integration in federated database systems. An example of the latter are commercial research information systems (RIS), which regularly import, cleanse, transform and prepare for analysis the research information of institutions from a variety of databases, and all of these steps must be carried out at an assured level of quality. As several internal and external data sources are loaded into the RIS, ensuring information quality becomes increasingly challenging for research institutions. Before research information is transferred to a RIS, it must be checked and cleaned up; data quality is therefore always a decisive factor for successful data integration. The removal of data errors (such as duplicates, inconsistent data and outdated data) and the harmonization of the data structure are essential tasks of data integration using extract, transform, and load (ETL) processes: data is extracted from the source systems, transformed and loaded into the RIS, and at this point conflicts between different data sources are resolved and data quality issues arising during integration are eliminated. Against this background, our paper presents the process of data transformation in the context of RIS, which gives an overview of the quality of research information in an institution's internal and external data sources during its integration into the RIS. In addition, we address the question of how to control and improve quality issues during the integration process. Full article
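The kind of transform-stage quality checks discussed here can be sketched as follows (a pandas-based toy example; column names and rules are assumptions, not the paper's RIS schema):

```python
import pandas as pd

# Toy extract: publication records pulled from two source systems.
raw = pd.DataFrame({
    "doi":   ["10.1/a", "10.1/a", "10.1/b", None],
    "title": ["Paper A ", "Paper A", "paper b", "Paper C"],
    "year":  [2018, 2018, 1890, 2019],
})

def transform(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    df["title"] = df["title"].str.strip().str.title()        # harmonise structure
    df = df.drop_duplicates(subset=["doi", "title"])          # remove duplicates
    df["missing_doi"] = df["doi"].isna()                      # flag incomplete rows
    df["implausible_year"] = ~df["year"].between(1900, 2019)  # flag invalid data
    return df

clean = transform(raw)     # the "load" step would then write `clean` into the RIS
print(clean)
```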

13 pages, 1646 KiB  
Article
Using Malone’s Theoretical Model on Gamification for Designing Educational Rubrics
by Daniel Corona Martínez and José Julio Real García
Informatics 2019, 6(1), 9; https://0-doi-org.brum.beds.ac.uk/10.3390/informatics6010009 - 04 Mar 2019
Cited by 1 | Viewed by 10072
Abstract
How could a structured proposal for an evaluation rubric benefit from assessing and including the organizational variables used when Thomas W. Malone established one of the first definitions of gamification related to game theory in 1980? By studying the importance and current validity of the corollaries in Malone's article "What Makes Things Fun to Learn?", this work covers the different characteristics of the concepts once used to define the term "gamification." Based on the results of this analysis, we propose different evaluation concepts to be assessed and included in a qualitative proposal for an evaluation rubric, with the ultimate goal of offering a holistic approach to all the different aspects of evaluation for active methodologies in a secondary-education environment. Full article

15 pages, 1459 KiB  
Article
Evaluating Awareness and Perception of Botnet Activity within Consumer Internet-of-Things (IoT) Networks
by Christopher D. McDermott, John P. Isaacs and Andrei V. Petrovski
Informatics 2019, 6(1), 8; https://0-doi-org.brum.beds.ac.uk/10.3390/informatics6010008 - 18 Feb 2019
Cited by 17 | Viewed by 7783
Abstract
The growth of the Internet of Things (IoT), and demand for low-cost, easy-to-deploy devices, has led to the production of swathes of insecure Internet-connected devices. Many can be exploited and leveraged to perform large-scale attacks on the Internet, such as those seen by the Mirai botnet. This paper presents a cross-sectional study of how users value and perceive security and privacy in smart devices found within the IoT. It analyzes user requirements from IoT devices, and the importance placed upon security and privacy. An experimental setup was used to assess user ability to detect threats, in the context of technical knowledge and experience. It clearly demonstrated that, without any clear signs that an IoT device was infected, it was very difficult for consumers to detect threats exploiting their home networks and remain situationally aware of them. It also demonstrated that, without adequate presentation of data to users, there is no clear correlation between level of technical knowledge and ability to detect infected devices. Full article
(This article belongs to the Special Issue Human Factors in Security and Privacy in IoT (HFSP-IoT))

22 pages, 1244 KiB  
Article
What Is This Sensor and Does This App Need Access to It?
by Maryam Mehrnezhad and Ehsan Toreini
Informatics 2019, 6(1), 7; https://0-doi-org.brum.beds.ac.uk/10.3390/informatics6010007 - 24 Jan 2019
Cited by 13 | Viewed by 9832
Abstract
Mobile sensors have already proven to be helpful in different aspects of people’s everyday lives such as fitness, gaming, navigation, etc. However, illegitimate access to these sensors can provide a malicious program with an exploit path. While the users are benefiting from richer and more personalized apps, the growing number of sensors introduces new security and privacy risks to end users and makes the task of sensor management more complex. In this paper, first, we discuss the issues around the security and privacy of mobile sensors. We investigate the available sensors on mainstream mobile devices and study the permission policies that Android, iOS and mobile web browsers offer for them. Second, we report the results of two workshops that we organized on mobile sensor security. In these workshops, the participants were introduced to mobile sensors by working with sensor-enabled apps. We evaluated the risk levels perceived by the participants for these sensors after they understood their functionalities. The results showed that getting to know sensors by working with sensor-enabled apps would not immediately improve the users’ inference of the actual security risks of these sensors. However, other factors, such as prior general knowledge about these sensors and their risks, had a strong impact on the users’ perception. We also taught the participants about the ways in which they could audit their apps and their permissions. Our findings showed that when mobile users were provided with reasonable choices and intuitive teaching, they could easily self-direct themselves to improve their security and privacy. Finally, we provide recommendations for educators, app developers, and mobile users to contribute toward awareness and education on this topic. Full article
(This article belongs to the Special Issue Human Factors in Security and Privacy in IoT (HFSP-IoT))

16 pages, 9324 KiB  
Article
Hybrid Design Tools—Image Quality Assessment of a Digitally Augmented Blackboard Integrated System
by Ovidiu Banias and Camil Octavian Milincu
Informatics 2019, 6(1), 6; https://0-doi-org.brum.beds.ac.uk/10.3390/informatics6010006 - 21 Jan 2019
Cited by 1 | Viewed by 6208
Abstract
In the last two decades, Interactive White Boards (IWBs) have been widely available as a pedagogic tool. We consider the usability of these boards debatable in multiple regards for teaching disciplines where complex drawings are needed. In a previous study, we proposed an alternative to IWBs: a blackboard augmented with a minimum of necessary digital elements. The current study continues our previous research on hybrid design tools, analyzing the limitations of the developed hybrid system regarding the perceived quality of the images being repeatedly captured, annotated, and reprojected onto the board. We validated the hybrid system by evaluating the quality of the projected and reprojected images over a blackboard, using both objective measurements and subjective human perception in extensive and realistic case studies. Based on the results achieved in the current research, we conclude that the proposed hybrid system provides good-quality support for teaching disciplines that require complex drawings and board interaction. Full article
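For the objective side of such an assessment, a minimal sketch (assuming scikit-image is available and simulating the capture/reprojection loop with added noise, which is not the authors' setup) could compare an original slide with its re-captured version:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
original = rng.random((240, 320))                      # stand-in for a projected slide
# Simulate one capture-and-reproject cycle as mild additive noise.
recaptured = np.clip(original + rng.normal(0, 0.05, original.shape), 0, 1)

print("PSNR:", round(peak_signal_noise_ratio(original, recaptured, data_range=1.0), 2))
print("SSIM:", round(structural_similarity(original, recaptured, data_range=1.0), 3))
```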

13 pages, 1384 KiB  
Article
Statistical Deadband: A Novel Approach for Event-Based Data Reporting
by Nunzio Marco Torrisi
Informatics 2019, 6(1), 5; https://0-doi-org.brum.beds.ac.uk/10.3390/informatics6010005 - 18 Jan 2019
Viewed by 6189
Abstract
Deadband algorithms are implemented inside industrial gateways to reduce the volume of data sent across different networks. By tuning the deadband sampling resolution via a preset interval Δ, it is possible to estimate the balance between the traffic rates of the networks connected by industrial SCADA gateways. This work describes the design and implementation of two original deadband algorithms based on statistical concepts derived from John Bollinger's financial technical analysis. Unlike non-statistical algorithms, the proposed statistical algorithms do not require the setup of a preset interval. All algorithms were evaluated and compared by computing their effectiveness and fidelity over a public collection of random pseudo-periodic signals. The overall performance measured in the simulations showed better results, in terms of effectiveness and fidelity, for the statistical algorithms, although their use of computing resources was less efficient than that of the non-statistical deadband algorithms. Full article
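A minimal Python sketch of a Bollinger-band-style deadband (window size, the k factor and the toy signal are assumptions, not the paper's exact algorithms) reports a sample only when it leaves the mean ± k·σ band of recent values, so no preset interval Δ has to be configured:

```python
import math
import random
from collections import deque
from statistics import mean, stdev

def statistical_deadband(samples, window=20, k=2.0):
    """Return (index, value) pairs that fall outside mean +/- k*stdev of
    the last `window` observed values; no preset interval is required."""
    history = deque(maxlen=window)
    reported = []
    for t, x in enumerate(samples):
        if len(history) < 2:
            reported.append((t, x))                    # warm-up: always report
        elif abs(x - mean(history)) > k * stdev(history):
            reported.append((t, x))                    # outside the band: report
        history.append(x)
    return reported

signal = [math.sin(t / 10) + random.gauss(0, 0.01) for t in range(200)]
signal[120] += 1.5                                     # inject a genuine event
events = statistical_deadband(signal)
print(f"reported {len(events)} of {len(signal)} samples; event at t=120 reported:",
      any(t == 120 for t, _ in events))
```

The report rate traded against fidelity is governed by the window length and the k factor rather than by a fixed Δ threshold.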

11 pages, 1681 KiB  
Article
Unstructured Text in EMR Improves Prediction of Death after Surgery in Children
by Oguz Akbilgic, Ramin Homayouni, Kevin Heinrich, Max Raymond Langham and Robert Lowell Davis
Informatics 2019, 6(1), 4; https://0-doi-org.brum.beds.ac.uk/10.3390/informatics6010004 - 10 Jan 2019
Cited by 3 | Viewed by 8121
Abstract
Text fields in electronic medical records (EMR) contain information on important factors that influence health outcomes; however, they are underutilized in clinical decision making due to their unstructured nature. We analyzed 6497 inpatient surgical cases with 719,308 free-text notes from the Le Bonheur Children’s Hospital EMR. We used a text-mining approach on preoperative notes to obtain a text-based risk score for predicting death within 30 days of surgery. In addition, we evaluated the performance of a hybrid model that included the text-based risk score along with structured data pertaining to clinical risk factors. The C-statistic of a logistic regression model with five-fold cross-validation significantly improved from 0.76 to 0.92 when text-based risk scores were included in addition to structured data. We conclude that preoperative free-text notes in EMR include significant information that can predict adverse surgery outcomes. Full article
(This article belongs to the Special Issue Data-Driven Healthcare Research)
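The hybrid-model idea can be sketched as below (synthetic toy data; the vectorizer, feature names and model choices are assumptions, not the study's pipeline): a text-based risk score is derived out-of-fold from the notes and then added to structured features in a cross-validated logistic regression.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict, cross_val_score

notes = ["stable vitals routine procedure", "sepsis high risk icu transfer",
         "elective hernia repair", "emergent laparotomy unstable"] * 25
y = np.array([0, 1, 0, 1] * 25)                         # 30-day mortality label
structured = np.column_stack([np.tile([1, 4, 1, 5], 25),   # e.g., ASA class
                              np.tile([6, 2, 8, 1], 25)])  # e.g., age group

# Text-based risk score: out-of-fold predicted probability from the notes alone.
X_text = TfidfVectorizer().fit_transform(notes)
text_score = cross_val_predict(LogisticRegression(max_iter=1000), X_text, y,
                               cv=5, method="predict_proba")[:, 1]

# Hybrid model: structured clinical features plus the text-based risk score.
X_hybrid = np.column_stack([structured, text_score])
auc = cross_val_score(LogisticRegression(max_iter=1000), X_hybrid, y,
                      cv=5, scoring="roc_auc").mean()
print("hybrid model C-statistic (toy data):", round(auc, 3))
```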

3 pages, 123 KiB  
Editorial
Acknowledgement to Reviewers of Informatics in 2018
by Informatics Editorial Office
Informatics 2019, 6(1), 3; https://0-doi-org.brum.beds.ac.uk/10.3390/informatics6010003 - 09 Jan 2019
Viewed by 5491
Abstract
Rigorous peer-review is the corner-stone of high-quality academic publishing [...] Full article
43 pages, 12836 KiB  
Article
Bringing the Illusion of Reality Inside Museums—A Methodological Proposal for an Advanced Museology Using Holographic Showcases
by Eva Pietroni, Daniele Ferdani, Massimiliano Forlani, Alfonsina Pagano and Claudio Rufa
Informatics 2019, 6(1), 2; https://0-doi-org.brum.beds.ac.uk/10.3390/informatics6010002 - 04 Jan 2019
Cited by 14 | Viewed by 11623
Abstract
The basic idea of a hologram is an apparition of something that does not exist but appears as if it were just in front of our eyes. These illusion techniques were invented a long time ago. The philosopher and alchemist Giovanni Battista della Porta invented an effect that was later developed and brought to fame by Prof. J. H. Pepper (1821–1900) and applied in theatrical performances. Nowadays, the innovation lies in the technology adopted to produce them. Taking advantage of the available digital technologies, the challenge we are going to discuss is using holograms in the museum context, inside showcases, to realize a new form of scenography and dramaturgy around the exhibited objects. Case studies will be presented, with a detailed analysis of the EU project CEMEC (Connecting Early Medieval European Collections), where holographic showcases have been designed, built and tested in EU museums. In this case, the coexistence in the same space of the real artifact and the virtual contents, together with the interior setup of the showcase, its dynamic lighting system, the script and the sound, converges to create an expressive unity. The reconstruction of sensory and symbolic dimensions that are ‘beyond’ any museum object can place the visitor in the middle of a lively and powerful experience with such technology, and represents an advancement in the museological sector. User experience results and a list of best practices will be presented in the second part of the paper, drawn from the tests and research activities conducted over the three years of the project. Full article
(This article belongs to the Section Social Informatics and Digital Humanities)

17 pages, 547 KiB  
Article
Improving the Classification Efficiency of an ANN Utilizing a New Training Methodology
by Ioannis E. Livieris
Informatics 2019, 6(1), 1; https://0-doi-org.brum.beds.ac.uk/10.3390/informatics6010001 - 28 Dec 2018
Cited by 27 | Viewed by 7306
Abstract
In this work, a new approach for training artificial neural networks is presented which utilises techniques for solving the constraint optimisation problem. More specifically, this study converts the training of a neural network into a constraint optimisation problem. Furthermore, we propose a new neural network training algorithm based on the L-BFGS-B method. Our numerical experiments illustrate the classification efficiency of the proposed algorithm and of our proposed methodology, leading to more efficient, stable and robust predictive models. Full article
(This article belongs to the Special Issue Advances in Randomized Neural Networks)
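A minimal sketch of the underlying idea (toy XOR data; the architecture, loss and weight bounds are illustrative assumptions, not the algorithm proposed in the paper): the network's weights are flattened into a single vector and optimised with SciPy's bound-constrained L-BFGS-B routine.

```python
import numpy as np
from scipy.optimize import minimize

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])                   # XOR targets
H = 4                                                # hidden units

def unpack(w):
    """Split the flat parameter vector into layer weights and biases."""
    i = 0
    W1 = w[i:i + 2 * H].reshape(2, H); i += 2 * H
    b1 = w[i:i + H]; i += H
    W2 = w[i:i + H]; i += H
    b2 = w[i]
    return W1, b1, W2, b2

def forward(w, X):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))      # sigmoid output

def loss(w):
    return float(np.mean((forward(w, X) - y) ** 2))  # simple squared error

n_params = 2 * H + H + H + 1
w0 = np.random.default_rng(0).normal(scale=0.5, size=n_params)
bounds = [(-5.0, 5.0)] * n_params                    # box constraints on all weights
res = minimize(loss, w0, method="L-BFGS-B", bounds=bounds)
print("final loss:", round(res.fun, 4), "| outputs:", forward(res.x, X).round(2))
```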
