
Informatics, Volume 8, Issue 1 (March 2021) – 20 articles

Cover Story: This paper introduces the imperative role that the concept of augmented, intelligent Enterprise Systems plays in safeguarding organizations from the problems it cites. The paper sets out to establish the foundational arguments for transforming the Enterprise Systems architecture status quo through intelligent augmentation using Deep Learning (here, a Siamese-LSTM); to shed light on the importance of machine learning and what we call Deep Learning Representational Frameworks (DL-ReFrams); and to present research directions addressing sociotechnical frontiers that both academics and practitioners need to tackle. View this paper.
Article
The Rare Word Issue in Natural Language Generation: A Character-Based Solution
Informatics 2021, 8(1), 20; https://0-doi-org.brum.beds.ac.uk/10.3390/informatics8010020 - 23 Mar 2021
Viewed by 539
Abstract
In this paper, we analyze the problem of generating fluent English utterances from tabular data, focusing on the development of a sequence-to-sequence neural model with two major features: the ability to read and generate character-wise, and the ability to switch between generating and copying characters from the input, an essential feature when inputs contain rare words such as proper names, telephone numbers, or foreign words. Working with characters instead of words is a challenge that can increase the difficulty of the training phase and the probability of error during inference. Nevertheless, our work shows that these issues can be solved, and the effort is repaid by the creation of a fully end-to-end system whose inputs and outputs are not constrained to a predefined vocabulary, as in word-based models. Furthermore, our copying technique is integrated with an innovative shift mechanism, which enhances the ability to produce outputs directly from inputs. We assess performance on the E2E dataset, the benchmark used for the E2E NLG challenge, and on a modified version of it created to highlight the rare-word copying capabilities of our model. The results demonstrate clear improvements over the baseline and promising performance compared to recent techniques in the literature. Full article
(This article belongs to the Section Machine Learning)
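The generate-or-copy switch described in the abstract can be illustrated with a toy decoder loop. This is a hypothetical simplification: the paper's model learns these decisions with a neural network, and its shift mechanism is likewise learned, whereas here the gate decisions are hard-coded to show only the data flow.

```python
# Toy sketch of a generate-vs-copy decoder with a shift pointer.
# Each step is ("gen", char) to emit a vocabulary character, or
# ("copy", None) to copy the character at the current input pointer
# and shift the pointer forward by one.

def decode_with_copy(input_chars, steps):
    out = []
    ptr = 0  # shift mechanism: advances only on copy steps
    for op, ch in steps:
        if op == "gen":
            out.append(ch)
        elif op == "copy" and ptr < len(input_chars):
            out.append(input_chars[ptr])
            ptr += 1
    return "".join(out)

# Copying lets a rare word (e.g. a proper name) pass through unchanged:
name = "Zizzi"
steps = [("gen", c) for c in "name is "] + [("copy", None)] * len(name)
print(decode_with_copy(name, steps))  # name is Zizzi
```

Because outputs are assembled character by character, nothing constrains the emitted string to a predefined word vocabulary.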

Article
An Experimental Analysis of Data Annotation Methodologies for Emotion Detection in Short Text Posted on Social Media
Informatics 2021, 8(1), 19; https://0-doi-org.brum.beds.ac.uk/10.3390/informatics8010019 - 12 Mar 2021
Cited by 1 | Viewed by 1014
Abstract
Opinion mining techniques, which investigate whether a text expresses a positive or negative opinion, continue to gain in popularity, attracting the attention of many scientists from different disciplines. Specific use cases, however, where the expressed opinion is indisputably positive or negative, render such solutions obsolete and emphasize the need for a more in-depth analysis of the available text. Emotion analysis is a solution to this problem, but the multi-dimensional elements of the emotions expressed in text, along with the complexity of the features that allow their identification, pose a significant challenge. Machine learning solutions fail to achieve high accuracy, mainly due to the limited availability of annotated training datasets and the bias introduced into the annotations by individuals’ personal interpretations of emotions. A hybrid rule-based algorithm that allows the acquisition of a dataset annotated with regard to Plutchik’s eight basic emotions is proposed in this paper. Emoji, keywords and semantic relationships are used in order to identify, in an objective and unbiased way, the emotion expressed in a short phrase or text. The acquired datasets are used to train machine learning classification models. The accuracy of the models and the parameters that affect it are presented at length through an experimental analysis. The most accurate model is selected and offered through an API to tackle emotion detection in social media posts. Full article
(This article belongs to the Special Issue Information Analysis and Retrieval in Social Media)
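The emoji-and-keyword part of such a rule-based annotator can be sketched as simple lexicon lookups over Plutchik's eight basic emotions. The lexicons below are tiny made-up samples, not the paper's actual rules (which also use semantic relationships):

```python
# Illustrative rule-based emotion tagging; lexicons are hypothetical samples.
PLUTCHIK = ["joy", "trust", "fear", "surprise",
            "sadness", "disgust", "anger", "anticipation"]

EMOJI_LEXICON = {"😀": "joy", "😢": "sadness", "😡": "anger", "😱": "fear"}
KEYWORD_LEXICON = {"happy": "joy", "scared": "fear", "furious": "anger"}

def tag_emotions(text):
    """Return the set of Plutchik emotions signalled by emoji or keywords."""
    found = set()
    for ch in text:                      # emoji are single characters
        if ch in EMOJI_LEXICON:
            found.add(EMOJI_LEXICON[ch])
    for word in text.lower().split():    # keyword rules
        word = word.strip(".,!?")
        if word in KEYWORD_LEXICON:
            found.add(KEYWORD_LEXICON[word])
    return found

print(tag_emotions("So happy today 😀"))  # {'joy'}
```

Because the rules are deterministic, the resulting labels are reproducible and free of per-annotator interpretation bias, which is the point the abstract makes.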

Review
Measuring Digital Citizenship: A Comparative Analysis
Informatics 2021, 8(1), 18; https://0-doi-org.brum.beds.ac.uk/10.3390/informatics8010018 - 06 Mar 2021
Cited by 5 | Viewed by 1082
Abstract
This paper presents the state of the art on digital citizenship from a methodological point of view, focusing on how this construct is measured. The review of the scientific literature offers at least ten definitions and nine different measurement scales. A comparative and diachronic analysis of the content of the definitions reveals two conceptions of digital citizenship, one more focused on digital competences and the other on critical and activist aspects. This paper replicates and compares three digital citizenship measurement scales, selected for their relevance and administered to a sample of 366 university students, to analyze their psychometric properties and the coincidences and divergences among the three. The most outstanding conclusion is that not all of them seem to measure the same construct, owing to its diversity of dimensions. An online activism dimension needs to be incorporated if digital citizenship is to be measured. There is an urgent need for international agreement on a definition of digital citizenship, with its corresponding dimensions, so that a reliable and valid measuring instrument can be elaborated. Full article
(This article belongs to the Special Issue Building Smart Cities and Infrastructures for a Sustainable Future)
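Analyses of a scale's psychometric properties, as described above, commonly include an internal-consistency check such as Cronbach's alpha. A minimal computation is sketched below; the Likert responses are made up for illustration and are not from the study's sample:

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances)/var(totals)).
from statistics import pvariance

def cronbach_alpha(items):
    """items: one list of responses per scale item, all of equal length."""
    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]   # per-respondent totals
    item_var = sum(pvariance(it) for it in items)  # sum of item variances
    return (k / (k - 1)) * (1 - item_var / pvariance(totals))

# Three Likert items answered by five (hypothetical) respondents:
items = [[4, 5, 3, 4, 2],
         [4, 4, 3, 5, 2],
         [5, 5, 2, 4, 3]]
print(round(cronbach_alpha(items), 3))  # 0.886
```

Values near or above 0.7 are conventionally read as acceptable internal consistency, though the threshold depends on the construct.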

Article
Estimating Freeway Level-of-Service Using Crowdsourced Data
Informatics 2021, 8(1), 17; https://0-doi-org.brum.beds.ac.uk/10.3390/informatics8010017 - 05 Mar 2021
Cited by 1 | Viewed by 713
Abstract
In traffic operations, the aim of transportation agencies and researchers is typically to reduce congestion and improve safety. To attain these goals, agencies need continuous and accurate information about the traffic situation. Level-of-Service (LOS) is a useful index of traffic operations for monitoring freeways. The Highway Capacity Manual (HCM) provides analytical methods to assess LOS based on traffic density and highway characteristics. Generally, obtaining reliable density data on every road in a large network using traditional fixed-location sensors and cameras is expensive and often impractical, and traditional intelligent transportation system facilities are typically limited to major urban areas in different states. Crowdsourced data are an emerging, low-cost alternative that can potentially improve safety and operations. This study incorporates crowdsourced data provided by Waze to propose an algorithm for LOS assessment on an hourly basis. The proposed algorithm exploits various features from big data (crowdsourced Waze user alerts and speed/travel time variation) to perform LOS classification using machine learning models. Three categories of model inputs are introduced: basic statistical measures of speed, travel time reliability measures, and the number of hourly Waze alerts. Data collected from fixed-location sensors were used to calculate ground-truth LOS. The results reveal that using Waze crowdsourced alerts can improve LOS estimation accuracy by about 10% (accuracy = 0.93, Kappa = 0.83). The proposed method was also tested and confirmed using data collected after the onset of coronavirus disease 2019 (COVID-19), when a stay-at-home policy caused severe traffic breakdown, and it is extendible to freeways in other locations. The results of this research provide transportation agencies with an LOS method based on crowdsourced data for different freeway segments, regardless of the availability of traditional fixed-location sensors. Full article
(This article belongs to the Special Issue Big Data and Transportation)
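The three input categories named in the abstract can be sketched as one hourly feature vector per freeway segment; field names and the crude 95th-percentile reliability measure below are our own illustrative choices, not the study's exact features:

```python
# Hourly feature vector for one freeway segment: speed statistics,
# a travel-time reliability measure, and the count of Waze alerts.
from statistics import mean, stdev

def hourly_features(speeds_mph, travel_times_s, n_waze_alerts):
    """speeds/travel times observed in one hour on a freeway segment."""
    tt_sorted = sorted(travel_times_s)
    p95 = tt_sorted[round(0.95 * (len(tt_sorted) - 1))]  # crude 95th pct.
    buffer_index = (p95 - mean(travel_times_s)) / mean(travel_times_s)
    return {
        "speed_mean": mean(speeds_mph),
        "speed_std": stdev(speeds_mph),
        "tt_buffer_index": buffer_index,  # travel-time reliability
        "waze_alerts": n_waze_alerts,
    }

f = hourly_features([62, 58, 65, 40, 55], [300, 320, 310, 450, 330], 7)
print(f["waze_alerts"])  # 7
```

Vectors like this, labelled with ground-truth LOS from fixed-location sensors, would then feed any standard classifier.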

Review
Application of Machine Learning in Intensive Care Unit (ICU) Settings Using MIMIC Dataset: Systematic Review
Informatics 2021, 8(1), 16; https://0-doi-org.brum.beds.ac.uk/10.3390/informatics8010016 - 03 Mar 2021
Viewed by 1180
Abstract
Modern Intensive Care Units (ICUs) provide continuous monitoring of critically ill patients susceptible to many complications affecting morbidity and mortality. ICU settings require a high staff-to-patient ratio and generate a vast volume of data, making real-time interpretation of data and decision-making a challenging task for clinicians. Machine Learning (ML) techniques in ICUs are making headway in the early detection of high-risk events, owing to increased processing power and freely available datasets such as the Medical Information Mart for Intensive Care (MIMIC). We conducted a systematic literature review to evaluate the effectiveness of applying ML in ICU settings using the MIMIC dataset. A total of 322 articles were reviewed, and a quantitative descriptive analysis was performed on 61 qualifying articles that applied ML techniques in ICU settings using MIMIC data. We assembled the qualifying articles to provide insights into the areas of application, clinical variables used, and treatment outcomes, which can pave the way for further adoption of this promising technology and possible use in routine clinical decision-making. The lessons learned from our review can guide researchers in applying ML techniques and increasing their rate of adoption in healthcare. Full article
(This article belongs to the Special Issue Machine Learning in Healthcare)

Review
Organizational Strategies for End-User Development—A Systematic Literature Mapping
Informatics 2021, 8(1), 15; https://0-doi-org.brum.beds.ac.uk/10.3390/informatics8010015 - 28 Feb 2021
Viewed by 671
Abstract
In the last few years, many organizations have been looking for strategies to meet the needs of users of Information Technology (IT). The decentralization of IT and the empowerment of nonprofessional users have been a viable option among these strategies. This study aimed to identify the End-User Development (EUD) strategies adopted by organizations. A systematic mapping was performed in order to provide a structured body of knowledge and find potential research gaps. The results show that EUD methods and techniques are the most common strategies addressed in the literature. Also, most of the EUD strategies identified focus either on EUD managerial issues, such as risk management, or on more technical elements, such as the implementation of components for EUD applications. The benefits of and barriers to the adoption of EUD by organizations are also presented in this study. In general, the definition of EUD processes is a common gap in EUD surveys. We reinforce the need for more research on the adoption of EUD in organizations, with a high level of evidence to validate the results. Full article

Article
Exploring the Interdependence Theory of Complementarity with Case Studies. Autonomous Human–Machine Teams (A-HMTs)
Informatics 2021, 8(1), 14; https://0-doi-org.brum.beds.ac.uk/10.3390/informatics8010014 - 26 Feb 2021
Viewed by 544
Abstract
Rational models of human behavior aim to predict, and possibly control, humans. There are two primary models: the cognitive model, which treats behavior as implicit, and the behavioral model, which treats beliefs as implicit. The cognitive model reigned supreme until reproducibility issues arose, including Axelrod’s prediction that cooperation produces the best outcomes for societies. In contrast, by dismissing the value of beliefs, predictions of behavior improved dramatically, but only in situations where beliefs were suppressed or unimportant, or in low-risk, highly certain environments, e.g., enforced cooperation. Rational models also lack supporting evidence for their mathematical predictions, impeding generalizations to artificial intelligence (AI), and they cannot scale to teams or systems. Above all, rational models fail in the presence of uncertainty or conflict, their fatal flaw. These shortcomings leave rational models ill-prepared to assist the technical revolution posed by autonomous human–machine teams (A-HMTs) or autonomous systems. For A-HMTs, we have developed the interdependence theory of complementarity, largely overlooked because of the bewilderment interdependence causes in the laboratory. Where the rational model fails in the face of uncertainty or conflict, interdependence theory thrives. The best human science teams are fully interdependent; intelligence has been located in the interdependent interactions of teammates, and interdependence is quantum-like. We have reported in the past that, facing uncertainty, human debate exploits interdependent bistable views of reality in tradeoffs seeking the best path forward. Explaining uncertain contexts, which no single agent can determine alone, necessitates that members of A-HMTs express their actions in causal terms, however imperfectly.
Our purpose in this paper is to review our two newest discoveries here, both of which generalize and scale, first, following new theory to separate entropy production from structure and performance, and second, discovering that the informatics of vulnerability generated during competition propels evolution, invisible to the theories and practices of cooperation. Full article
(This article belongs to the Special Issue Feature Paper in Informatics)

Article
On Blockchain-Based Cross-Service Communication and Resource Orchestration on Edge Clouds
Informatics 2021, 8(1), 13; https://0-doi-org.brum.beds.ac.uk/10.3390/informatics8010013 - 26 Feb 2021
Cited by 1 | Viewed by 603
Abstract
With the advent of 5G verticals and the Internet of Things paradigm, Edge Computing has emerged as the dominant service delivery architecture, placing augmented computing resources in the proximity of end users. The resource orchestration of edge clouds relies on the concept of network slicing, which provides logically isolated computing and network resources. However, although there is significant progress on automating resource orchestration within a single cloud or edge-cloud datacenter, orchestration across multi-domain infrastructures or multiple administrative domains is still an open challenge. Towards exploiting the network service marketplace at its full capacity, while remaining aligned with the ETSI Network Function Virtualization architecture, this article proposes a novel Blockchain-based service orchestrator that leverages the automation capabilities of smart contracts to establish cross-service communication between network slices of different tenants. In particular, we introduce a multi-tier architecture for a Blockchain-based network marketplace and design the lifecycle of the cross-service orchestration. To evaluate the proposed approach, we set up cross-service communication in an edge cloud and demonstrate that the orchestration overhead is lower than in other cross-service solutions. Full article

Review
Visual Analytics for Electronic Health Records: A Review
Informatics 2021, 8(1), 12; https://0-doi-org.brum.beds.ac.uk/10.3390/informatics8010012 - 23 Feb 2021
Cited by 1 | Viewed by 657
Abstract
The increasing use of electronic health record (EHR)-based systems has led to the generation of clinical data at an unprecedented rate, which produces an untapped resource for healthcare experts to improve the quality of care. Despite the growing demand for adopting EHRs, the large amount of clinical data has made some analytical and cognitive processes more challenging. The emergence of a type of computational system called visual analytics has the potential to handle information overload challenges in EHRs by integrating analytics techniques with interactive visualizations. In recent years, several EHR-based visual analytics systems have been developed to fulfill healthcare experts’ computational and cognitive demands. In this paper, we conduct a systematic literature review to present the research papers that describe the design of EHR-based visual analytics systems and provide a brief overview of 22 systems that met the selection criteria. We identify and explain the key dimensions of the EHR-based visual analytics design space, including visual analytics tasks, analytics, visualizations, and interactions. We evaluate the systems using the selected dimensions and identify the gaps and areas with little prior work. Full article
(This article belongs to the Special Issue Feature Papers: Health Informatics)

Article
Deep Learning for Enterprise Systems Implementation Lifecycle Challenges: Research Directions
Informatics 2021, 8(1), 11; https://0-doi-org.brum.beds.ac.uk/10.3390/informatics8010011 - 20 Feb 2021
Viewed by 713
Abstract
Transforming the state-of-the-art definition and anatomy of enterprise systems (ESs) seems to some academics and practitioners an unavoidable destiny. Value depletion led by early retirement and/or replacement of ES solutions has been a constant throughout the past decade, driving an enormous amount of research on the problems behind the resource drain. That waste has persisted throughout the phases and dimensions of the ES implementation lifecycle, especially the post-live phases, depleting the value of both the social and technical dimensions of the lifecycle. Parallel to this research stream, deep learning (DL) algorithms and platforms have been gaining momentum exponentially, fueling advances toward artificial intelligence and automated augmentation. Correspondingly, this paper sets out to present five key research directions through which DL would contribute to transforming the ES state-of-the-art. The paper reviews the ES implementation lifecycle challenges and their intersection with DL research conducted on ESs by analyzing and synthesizing key basket journals (the basket-of-journals list of the Association for Information Systems). The paper also presents results from several experiments showcasing the effectiveness of DL in adding a level of augmentation to ESs, based on a large set of data extracted from the Atlassian Jira Software Issue Tracking System across different ecosystems. The paper concludes by presenting the research directions and discussing socio-technical research courses that address key frontiers identified within this scholarly work. Full article
(This article belongs to the Special Issue Feature Paper in Informatics)

Article
Assessing Human Post-Editing Efforts to Compare the Performance of Three Machine Translation Engines for English to Russian Translation of Cochrane Plain Language Health Information: Results of a Randomised Comparison
Informatics 2021, 8(1), 9; https://0-doi-org.brum.beds.ac.uk/10.3390/informatics8010009 - 10 Feb 2021
Viewed by 691
Abstract
Cochrane produces independent research to improve healthcare decisions. It translates its research summaries into different languages to enable wider access, relying largely on volunteers. Machine translation (MT) could improve efficiency in Cochrane’s low-resource environment. We compared three off-the-shelf machine translation engines (MTEs)—DeepL, Google Translate and Microsoft Translator—for Russian translations of Cochrane plain language summaries (PLSs) by assessing the quantitative human post-editing effort within an established translation workflow and quality assurance process. Thirty PLSs were pre-translated with each of the three MTEs. Ten volunteer translators post-edited nine randomly assigned PLSs each—three per MTE—in their usual translation system, Memsource. Two editors performed a second editing step. Memsource’s Machine Translation Quality Estimation (MTQE) feature provided an artificial intelligence (AI)-powered estimate of how much editing each PLS would require, and the analysis feature calculated the amount of human editing after each editing step. Google Translate performed best, with the highest average quality estimates for its initial MT output and the lowest amount of human post-editing. DeepL performed slightly worse, and Microsoft Translator worst. Future developments in MT research and the associated industry may change these results. Full article
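Post-editing effort of the kind reported by Memsource's analysis feature can be approximated, for intuition, by a normalised character-level edit distance between the raw MT output and the post-edited text. This sketch is our own simplification, not Memsource's MTQE formula:

```python
# Normalised edit distance as a rough post-editing effort measure.

def levenshtein(a, b):
    """Character-level edit distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def edit_effort(mt_output, post_edited):
    """0.0 = no edits needed; 1.0 = everything rewritten."""
    if not post_edited:
        return 1.0 if mt_output else 0.0
    return levenshtein(mt_output, post_edited) / max(len(mt_output),
                                                     len(post_edited))

print(edit_effort("kitten", "sitting"))  # 3 edits over 7 characters
```

Under a measure like this, the engine whose raw output needs the fewest edits to reach the approved translation "performs best", which is the sense in which the study ranks the three MTEs.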

Article
Windows PE Malware Detection Using Ensemble Learning
Informatics 2021, 8(1), 10; https://0-doi-org.brum.beds.ac.uk/10.3390/informatics8010010 - 10 Feb 2021
Cited by 2 | Viewed by 975
Abstract
In this Internet age, there are ever more threats to the security and safety of users. One such threat is malicious software, otherwise known as malware (ransomware, Trojans, viruses, etc.), which can lead to the loss or malicious replacement of important information (such as bank account details). Malware creators have been able to bypass traditional methods of malware detection, which can be time-consuming and unreliable for unknown malware. This motivates the need for intelligent ways to detect malware, especially new malware that has not been evaluated or studied before. Machine learning provides an intelligent way to detect malware and comprises two stages: feature extraction and classification. This study proposes an ensemble learning-based method for malware detection. The base-stage classification is done by a stacked ensemble of fully connected and one-dimensional convolutional neural networks (CNNs), whereas the end-stage classification is done by a machine learning algorithm. As the meta-learner, we analyzed and compared 15 machine learning classifiers. As baselines for comparison, five machine learning algorithms were used: naïve Bayes, decision tree, random forest, gradient boosting, and AdaBoost. The results of experiments on the Windows Portable Executable (PE) malware dataset are presented. The best results were obtained by an ensemble of seven neural networks with the ExtraTrees classifier as the final-stage classifier. Full article
(This article belongs to the Special Issue Towards the Next-Generation of Network Monitoring Systems)
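The two-stage stacking idea can be sketched in miniature: base models score a sample, and a meta-learner classifies from the vector of base scores. The paper's base models are neural networks and its meta-learner an ExtraTrees classifier; here both are replaced by trivial stand-ins to show only the data flow:

```python
# Minimal stacking sketch: base models -> score vector -> meta-learner.

def base_scores(sample, base_models):
    """First stage: each base model maps a sample to a malware score."""
    return [m(sample) for m in base_models]

def meta_classify(scores, weights, threshold=0.5):
    """Second stage: a (hypothetical) weighted meta-learner over scores."""
    s = sum(w * x for w, x in zip(weights, scores))
    return "malware" if s / sum(weights) >= threshold else "benign"

# Three toy base models scoring a vector of binary suspicion flags:
bases = [lambda x: min(1.0, sum(x) / 4),  # density of raised flags
         lambda x: 1.0 if x[0] else 0.0,  # keys on the first flag only
         lambda x: x[-1]]                 # keys on the last flag only
sample = [1, 1, 0, 1]
print(meta_classify(base_scores(sample, bases), weights=[1, 1, 1]))
```

The point of stacking is that the meta-learner can weight base models by where each is reliable, rather than averaging them blindly.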

Article
Factors Driving Users’ Engagement in Patient Social Network Systems
Informatics 2021, 8(1), 8; https://0-doi-org.brum.beds.ac.uk/10.3390/informatics8010008 - 09 Feb 2021
Viewed by 1438
Abstract
Participatory medicine and e-health help to promote health literacy among non-medical professionals. Users of e-health systems actively participate in a patient social network system (PSNS) to share health information and experiences with other users with similar health conditions. Users’ activities provide valuable healthcare resources to develop effective participatory medicine between patients, caregivers, and medical professionals. This study aims to investigate the factors of patients’ engagement in a PSNS by integrating and modifying an existing behavioral model and information system model (i.e., affective events theory (AET) and self-determination theory (SDT)). The AET is used to model the structure, the affective aspects of the driven behavior, and actual affective manifestation. The SDT is used to model interest and its relations with behavior. The data analysis and model testing are based on structural equation modeling, using responses from 428 users. The results indicate that interest and empathy promote users’ engagement in a PSNS. The findings from this study suggest recommendations to further promote users’ participation in a PSNS from the sociotechnical perspective, which include sensitizing and constructive engagement features. Furthermore, the data generated from a user’s participation in a PSNS could contribute to the study of clinical manifestations of disease, especially an emerging disease. Full article
(This article belongs to the Section Health Informatics)

Article
Towards a Better Integration of Fuzzy Matches in Neural Machine Translation through Data Augmentation
Informatics 2021, 8(1), 7; https://0-doi-org.brum.beds.ac.uk/10.3390/informatics8010007 - 29 Jan 2021
Viewed by 1040
Abstract
We identify a number of aspects that can boost the performance of Neural Fuzzy Repair (NFR), an easy-to-implement method to integrate translation memory matches and neural machine translation (NMT). We explore various ways of maximising the added value of retrieved matches within the NFR paradigm for eight language combinations, using Transformer NMT systems. In particular, we test the impact of different fuzzy matching techniques, sub-word-level segmentation methods and alignment-based features on overall translation quality. Furthermore, we propose a fuzzy match combination technique that aims to maximise the coverage of source words. This is supplemented with an analysis of how translation quality is affected by input sentence length and fuzzy match score. The results show that applying a combination of the tested modifications leads to a significant increase in estimated translation quality over all baselines for all language combinations. Full article
(This article belongs to the Special Issue Feature Paper in Informatics)
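Retrieving fuzzy matches from a translation memory, the step NFR builds on, can be sketched with a simple similarity ratio. The paper tests several fuzzy matching techniques; difflib's ratio below is just one convenient stand-in, and the tiny memory is invented:

```python
# Fuzzy match retrieval against a toy translation memory.
from difflib import SequenceMatcher

def best_fuzzy_match(source, memory, min_score=0.5):
    """Return (score, tm_source, tm_target) of the closest TM entry."""
    best = (0.0, None, None)
    for tm_src, tm_tgt in memory:
        score = SequenceMatcher(None, source, tm_src).ratio()
        if score > best[0]:
            best = (score, tm_src, tm_tgt)
    return best if best[0] >= min_score else (0.0, None, None)

memory = [("the cat sleeps", "le chat dort"),
          ("the dog barks", "le chien aboie")]
score, src, tgt = best_fuzzy_match("the cat sleeps here", memory)
print(src, "->", tgt)  # the cat sleeps -> le chat dort
```

In NFR, the retrieved target-side match is then concatenated to the source sentence as extra input to the NMT system, so the match score and coverage of source words directly affect translation quality.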

Article
Social Media Adoption by Health Professionals: A TAM-Based Study
Informatics 2021, 8(1), 6; https://0-doi-org.brum.beds.ac.uk/10.3390/informatics8010006 - 29 Jan 2021
Cited by 1 | Viewed by 1171
Abstract
This research identifies the underlying drivers of healthcare professionals’ social media usage behaviours, using the technology acceptance model (TAM) as the theoretical lens. A self-administered survey questionnaire was developed and administered to 219 healthcare professionals. Data are analysed using the structural equation modelling (SEM) technique. The SEM model demonstrated an acceptable fit (χ2 = 534.241, df = 239, χ2/df = 2.235, RMSEA = 0.06, IFI = 0.92, TLI = 0.93, CFI = 0.92) and indicates that content quality, perceived risk, perceived credibility, peer influence, confirmation of expectations, supporting conditions, and perceived cost significantly influence perceived social media usefulness. Furthermore, perceived social media usefulness positively affects the social media usage behaviour of healthcare professionals. This research generates important insights into what drives the adoption of social media by healthcare professionals. These insights could help develop social media guidelines and strategies to improve professional interactions between health professionals and their clients. Full article
(This article belongs to the Section Health Informatics)
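The fit indices reported in the abstract can be checked against commonly cited cut-offs. The thresholds below (χ2/df < 3, RMSEA < 0.08, and incremental indices > 0.90) are conventional rules of thumb from the SEM literature, not values prescribed by this paper.

```python
def sem_fit_summary(chi2, df, rmsea, cfi, tli, ifi):
    """Compare reported SEM fit indices against conventional cut-offs
    (chi2/df < 3, RMSEA < 0.08, CFI/TLI/IFI > 0.90 -- rules of thumb,
    not thresholds taken from the paper itself)."""
    ratio = chi2 / df
    return {
        "chi2_df": round(ratio, 3),
        "acceptable": ratio < 3 and rmsea < 0.08
                      and all(i > 0.90 for i in (cfi, tli, ifi)),
    }

# Values reported in the abstract:
fit = sem_fit_summary(534.241, 239, rmsea=0.06, cfi=0.92, tli=0.93, ifi=0.92)
```

With the reported values, the χ2/df ratio works out to 2.235, matching the abstract, and all indices clear the conventional thresholds.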
Article
The Influence of Sociological Variables on Users’ Feelings about Programmatic Advertising and the Use of Ad-Blockers
Informatics 2021, 8(1), 5; https://0-doi-org.brum.beds.ac.uk/10.3390/informatics8010005 - 27 Jan 2021
Viewed by 695
Abstract
The evolution from digital advertising, which is aimed at a mass audience, to programmatic advertising, which targets individual users according to their profile, has raised concerns about the use of personal data and the invasion of user privacy on the Internet. Concerned users install ad-blockers that prevent ads from being displayed, which in turn has led many companies to deploy anti-ad-blockers. This study investigates the sociological variables that make users feel that advertising is annoying and then decide to use ad-blockers to avoid it. Our results provide useful information for companies to segment user profiles appropriately. To this end, data collected from Internet users (n = 19,973) about what makes online advertising annoying and why they decide to use ad-blockers are analyzed. First, the existing literature on the subject was reviewed; then, the relevant sociological variables that influence users’ feelings about online advertising and the use of ad-blockers were investigated. This work contributes new information to the discussion about user privacy on the Internet. Some of the key findings suggest that Internet advertising can be very intrusive for many users and that all the variables investigated, except marital status and education, influence users’ opinions. It was also found that all the variables in this study are important when a user decides to use an ad-blocker. A clear inverse correlation between age and the perception of advertising as annoying was observed, along with a clear difference of opinion by gender. The results suggest that users without children use ad-blockers the least, while retirees and housewives use them the most. Full article
(This article belongs to the Section Digital Humanities)
Editorial
Acknowledgment to Reviewers of Informatics in 2020
Informatics 2021, 8(1), 4; https://0-doi-org.brum.beds.ac.uk/10.3390/informatics8010004 - 24 Jan 2021
Viewed by 478
Abstract
Peer review is the driving force of journal development, and reviewers are gatekeepers who ensure that Informatics maintains its standards for the high quality of its published papers [...] Full article
Article
Thai Tattoo Wisdom’s Representation of Knowledge by Ontology
Informatics 2021, 8(1), 3; https://0-doi-org.brum.beds.ac.uk/10.3390/informatics8010003 - 21 Jan 2021
Viewed by 585
Abstract
The Sak Yan Ontology (SYO) models knowledge derived from Thai tattoos for the design of cultural heritage preservation planning. Ontology Development 101 was the technique used to create the ontology model. The aims of this study are to report on the ontology's development and its evaluation. The study focuses specifically on validation by domain experts and automated evaluation using OOPS! (OntOlogy Pitfall Scanner, a tool that detects some of the most common pitfalls arising when developing ontologies). The results obtained from OOPS! show that SYO is free of fatal errors; it does, however, have one critical, three important, and three minor problems. Four of the problems have been fixed, while the others remain open. Combining automatic and human validation methodologies improves the quality of the ontology being modeled. The tools enhance the traditional methodology by making evaluation quicker, easier, and less dependent on subjective analysis. In conclusion, solutions for the remaining problems are suggested. Full article
(This article belongs to the Section Digital Humanities)
Article
Deep Full-Body HPE for Activity Recognition from RGB Frames Only
Informatics 2021, 8(1), 2; https://0-doi-org.brum.beds.ac.uk/10.3390/informatics8010002 - 18 Jan 2021
Viewed by 997
Abstract
Human Pose Estimation (HPE) is defined as the problem of localizing human joints (also known as keypoints: elbows, wrists, etc.) in images or videos. It can also be defined as the search for a specific pose in the space of all articulated joints. HPE has recently received significant attention from the scientific community. The main reason behind this trend is that pose estimation is considered a key step for many computer vision tasks. Although many approaches have reported promising results, this domain remains largely unsolved due to several challenges such as occlusions, small and barely visible joints, and variations in clothing and lighting. In the last few years, the power of deep neural networks has been demonstrated in a wide variety of computer vision problems, and especially in the HPE task. In this context, we present in this paper a Deep Full-Body HPE (DFB-HPE) approach from RGB images only. Based on ConvNets, fifteen human joint positions are predicted and can be further exploited for a large range of applications such as gesture recognition, sports performance analysis, or human-robot interaction. To evaluate the proposed deep pose estimation model, we apply it to recognize the daily activities of a person in an unconstrained environment. The extracted features, represented by deep estimated poses, are fed to an SVM classifier. To validate the proposed architecture, our approach is tested on two publicly available benchmarks for pose estimation and activity recognition, namely the J-HMDB and CAD-60 datasets. The obtained results demonstrate the efficiency of the proposed method based on ConvNets and SVM and show how deep pose estimation can improve recognition accuracy. In comparison with state-of-the-art methods, we achieve the best HPE performance, as well as the best activity recognition precision, on the CAD-60 dataset. Full article
(This article belongs to the Section Machine Learning)
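The pipeline the abstract describes (deep estimated poses fed to an SVM classifier) requires turning the fifteen predicted joint positions into a fixed-length feature vector. The sketch below shows one plausible encoding, normalizing joints relative to a root joint for translation and scale invariance; the paper's actual feature encoding may differ.

```python
import math

def pose_to_feature(joints, root=0):
    """Convert estimated (x, y) joint positions into a translation- and
    scale-invariant feature vector suitable for an SVM classifier.
    One illustrative encoding, not necessarily the paper's."""
    rx, ry = joints[root]                       # use root (e.g. torso) joint as origin
    centred = [(x - rx, y - ry) for x, y in joints]
    scale = max(math.hypot(x, y) for x, y in centred) or 1.0
    # Flatten to a 2 * len(joints) dimensional vector, normalised by body scale.
    return [v / scale for xy in centred for v in xy]
```

For DFB-HPE, each frame's fifteen joints would yield a 30-dimensional vector, and sequences of such vectors form the activity recognition features passed to the SVM.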
Article
From Data to Rhizomes: Applying a Geographical Concept to Understand the Mobility of Tourists from Geo-Located Tweets
Informatics 2021, 8(1), 1; https://0-doi-org.brum.beds.ac.uk/10.3390/informatics8010001 - 24 Dec 2020
Viewed by 894
Abstract
In geography, the concept of the “rhizome” provides a theoretical tool for conceiving the way people move in space in terms of “mobility networks”: the space lived in by people is delimited and characterized on the basis of both the places they visit and the sequences of their transfers from place to place. Researchers are now asking whether, in the new era of data-driven geography, it is possible to give concrete shape to the concept of the rhizome by analyzing big data describing the movement of people traced through social media. This paper is a first attempt to do so, by interpreting the rhizome as a problem of “itemset mining”, a well-known data mining technique originally developed for market-basket analysis. We studied how the application of this technique, if supported by adequate visualization strategies, can provide geographers with a concrete shape for rhizomes, suitable for further studies. To validate these ideas, we chose the case study of tourists visiting a city: the rhizome can be conceived as the set of places visited by many tourists, together with the common transfers made by tourists within the city. Itemsets extracted from a real-life data set were used to study the effectiveness of both a topographic representation and a topological representation for visualizing rhizomes. We examine how three different interpretations are able to give a concrete, visual shape to the concept of the rhizome. The results presented and discussed here open further investigations into the problem. Full article
(This article belongs to the Section Digital Humanities)
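The itemset mining interpretation the abstract describes treats each tourist's set of visited places like a market basket. The brute-force sketch below (a simple stand-in for Apriori-style miners and the paper's actual pipeline) counts how many tourists visited every combination of places, keeping the combinations above a support threshold; the place names are invented for illustration.

```python
from itertools import combinations
from collections import Counter

def frequent_itemsets(baskets, min_support=2, max_size=3):
    """Brute-force frequent itemset mining: an itemset of places is
    frequent if at least `min_support` tourists visited all of them."""
    counts = Counter()
    for basket in baskets:
        for size in range(1, max_size + 1):
            # Sort so the same set of places always yields the same key.
            for combo in combinations(sorted(basket), size):
                counts[combo] += 1
    return {itemset: c for itemset, c in counts.items() if c >= min_support}

# Hypothetical tourist trajectories, each a set of visited places:
trajectories = [
    {"Duomo", "Castle"},
    {"Duomo", "Museum"},
    {"Duomo", "Castle", "Park"},
]
frequent = frequent_itemsets(trajectories, min_support=2)
```

In this reading, the frequent itemsets (e.g. the pair visited by two of the three tourists above) are the candidate nodes and co-visits of the rhizome, which the paper then renders through topographic and topological visualizations.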
