Information, Volume 11, Issue 11 (November 2020) – 49 articles

Cover Story: The ITS-G5 standard is the basis for European communication technologies and protocols that assist public road users by providing them with additional traffic information. The scientific community is developing ITS-G5 applications for various purposes, and our research team is currently working on ITS applications for public transport networks to support the dissemination of ITS technology. At this stage, our focus is an ITS-G5 prototype that aims to increase the safety of pedestrians and drivers in the vicinity of a pedestrian crosswalk by sending ITS-G5 DENM messages to nearby vehicles. These messages are analyzed and, if relevant, presented to the driver on an onboard infotainment system. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; PDF is the official version. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
16 pages, 2996 KiB  
Article
Access Control in NB-IoT Networks: A Deep Reinforcement Learning Strategy
by Yassine Hadjadj-Aoul and Soraya Ait-Chellouche
Information 2020, 11(11), 541; https://0-doi-org.brum.beds.ac.uk/10.3390/info11110541 - 23 Nov 2020
Cited by 7 | Viewed by 1843
Abstract
The Internet of Things (IoT) is a key enabler of the digital transformation of our society. Driven by various services and applications, Machine Type Communications (MTC) will become an integral part of our daily life over the next few years. Meeting the ITU-T requirements in terms of density, battery longevity, coverage, price, and supported mechanisms and functionalities, Cellular IoT, and particularly Narrowband-IoT (NB-IoT), is identified as a promising candidate for handling massive MTC accesses. However, this massive connectivity poses a huge scalability challenge for network operators. Indeed, connecting to a cellular IoT network requires a random access procedure, and a high concentration of IoT devices would very quickly lead to a bottleneck; this procedure must therefore be enhanced to cope with such considerable connectivity. With this in mind, we propose in this paper to apply the access class barring (ACB) mechanism to regulate the number of devices competing for access. In order to derive the blocking factor, we formulated the access problem as a Markov decision process, which we solved using one of the most advanced deep reinforcement learning techniques. The evaluation of the proposed access control through simulations shows the effectiveness of our approach compared to existing approaches such as the adaptive one and the Proportional Integral Derivative (PID) controller. Indeed, it manages to keep the proportion of access attempts close to the optimum, despite the lack of accurate information on the number of access attempts. Full article
(This article belongs to the Special Issue Wireless IoT Network Protocols)

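As a rough illustration of what the barring factor regulates (not the authors' deep-RL controller, and with hypothetical numbers), a slotted random-access model shows why throughput collapses without access control: with K preambles and m contenders, the expected number of successful accesses is m(1 − 1/K)^(m−1), which peaks when roughly K devices contend per slot. A simple ACB rule therefore admits each backlogged device with probability p ≈ K/N:

```python
def expected_successes(m: float, K: int) -> float:
    """Expected number of preambles chosen by exactly one of m contenders."""
    if m <= 0:
        return 0.0
    return m * (1.0 - 1.0 / K) ** (m - 1)

def acb_barring_factor(backlog_estimate: float, K: int) -> float:
    """ACB controller target: admit on average ~K contenders per slot."""
    if backlog_estimate <= 0:
        return 1.0
    return min(1.0, K / backlog_estimate)

K = 54            # preambles in one random-access slot (hypothetical)
N = 1000          # backlogged IoT devices (hypothetical)
p = acb_barring_factor(N, K)      # barring factor broadcast to devices
contenders = N * p                # expected devices passing the ACB check
throughput = expected_successes(contenders, K)
```

Without barring, all 1000 devices contend and almost every preamble collides; with barring, contention stays near the throughput-maximizing point. The paper's contribution is estimating this factor without accurate backlog information.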
11 pages, 3187 KiB  
Article
Humanities: The Outlier of Research Assessments
by Güleda Doğan and Zehra Taşkın
Information 2020, 11(11), 540; https://0-doi-org.brum.beds.ac.uk/10.3390/info11110540 - 23 Nov 2020
Cited by 2 | Viewed by 4578
Abstract
Commercial bibliometric databases, and the quantitative indicators they present, are widely used for research assessment purposes, which is not fair to the humanities. The humanities differ by nature from all other areas in many respects. This study aimed to show the extent of the difference in terms of five size-independent bibliometric indicators based on citations and collaborations. We used categorical InCites data (1980–2020) to compare the six main Organisation for Economic Co-operation and Development (OECD) subject areas, and the 45,987 humanities sources to compare subareas of the humanities. Results showed that the humanities are statistically different from all other areas, including the social sciences, with high effect sizes for the five indicators considered. Moreover, all the subareas of the humanities differ from each other. This main finding indicates that the humanities do not need new indicators for quantitative evaluation, but rather different assessment approaches, such as bottom-up approaches. Full article
(This article belongs to the Special Issue ICT Enhanced Social Sciences and Humanities)

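One common way to quantify a "high effect size" of the kind the abstract reports is Cohen's d (whether this is the exact statistic the paper uses is not stated here); a minimal sketch on hypothetical citations-per-paper samples:

```python
from math import sqrt

def cohens_d(a: list, b: list) -> float:
    """Cohen's d: standardized difference between two group means."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    pooled = sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled

# hypothetical citations-per-paper samples for two fields
humanities = [0.2, 0.5, 0.1, 0.4, 0.3]
sciences = [3.1, 2.8, 3.5, 2.9, 3.2]
d = cohens_d(sciences, humanities)   # |d| >= 0.8 is conventionally "large"
```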
40 pages, 13898 KiB  
Article
Addressing Misinformation in Online Social Networks: Diverse Platforms and the Potential of Multiagent Trust Modeling
by Robin Cohen, Karyn Moffatt, Amira Ghenai, Andy Yang, Margaret Corwin, Gary Lin, Raymond Zhao, Yipeng Ji, Alexandre Parmentier, Jason P’ng, Wil Tan and Lachlan Gray
Information 2020, 11(11), 539; https://0-doi-org.brum.beds.ac.uk/10.3390/info11110539 - 23 Nov 2020
Cited by 4 | Viewed by 4644
Abstract
In this paper, we explore how various social networking platforms currently support the spread of misinformation. We then examine the potential of a few specific multiagent trust modeling algorithms from artificial intelligence for detecting that misinformation. Our investigation reveals that the specific requirements of each environment may call for distinct processing solutions. This leads to a higher-level proposal for the actions to be taken in order to judge trustworthiness. Our final reflection concerns what information should be provided to users once posts are suspected of being misleading. Our aim is to enlighten both the organizations that host social networking and the users of those platforms, and to promote steps forward for more pro-social behaviour in these environments. Looking to the future and the growing need to address this vital topic, we also reflect on two related topics of possible interest: the case of older adult users and the potential to track misinformation through dedicated data science studies, of particular use for healthcare. Full article
(This article belongs to the Special Issue Tackling Misinformation Online)

10 pages, 1024 KiB  
Article
American Children’s Screen Time: Diminished Returns of Household Income in Black Families
by Shervin Assari
Information 2020, 11(11), 538; https://0-doi-org.brum.beds.ac.uk/10.3390/info11110538 - 20 Nov 2020
Cited by 8 | Viewed by 2770
Abstract
While increased household income is associated with overall decreased screen time for children, less is known about racial variation in this association. According to Minorities’ Diminished Returns (MDRs) theory, family income and other economic resources show weaker associations with children’s developmental, behavioral, and health outcomes for racialized groups such as black families, due to the effects of racism and social stratification. In this study, we investigated the association, by race, between family income and children’s screen time. This longitudinal study followed 15,022 American children aged 9–11 over a 1-year period. The data came from the baseline of the Adolescent Brain Cognitive Development (ABCD) study. The independent variable was family income, categorized as a three-level nominal variable. The dependent variable, screen time, was a continuous variable. Ethnicity, gender, parental education, and marital status were the covariates. The results showed that family income was inversely associated with children’s screen time. However, this inverse association was weaker in black families than in white families, as documented by a statistically significant interaction between race and family income on children’s screen time. The diminished association between family income and children’s screen time for black families, compared with white families, is consistent with MDRs and reflects a health risk to high-income black children. In a society where race and skin color determine opportunities and treatment, children from middle-class black families remain at risk across multiple domains. We should not assume that income promotes the health of all racial and ethnic groups equally. Addressing health and behavioral inequalities requires interventions that go beyond equalizing socioeconomic resources for black families. Marginalization, racism, and poverty interfere with the normal income-related development of American children. Full article

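The MDRs pattern described above is, statistically, an interaction effect: the income slope on the outcome is flatter for one group. A minimal sketch with hypothetical numbers (not the ABCD data) makes the idea concrete:

```python
def slope(x, y):
    """Ordinary least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

# hypothetical data: income level (1-3) vs daily screen-time hours
income = [1, 2, 3, 1, 2, 3]
screen_white = [6.0, 4.5, 3.0, 6.2, 4.4, 3.1]   # steep inverse slope
screen_black = [6.0, 5.5, 5.1, 6.1, 5.6, 5.0]   # flatter slope (diminished returns)

b_white = slope(income, screen_white)
b_black = slope(income, screen_black)
interaction = b_black - b_white   # positive: income "protects" less in black families
```

In a regression framework the same quantity appears as the coefficient on the race-by-income product term, which is what the paper reports as significant.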
15 pages, 348 KiB  
Article
Evaluation of Attackers’ Skill Levels in Multi-Stage Attacks
by Terézia Mézešová, Pavol Sokol and Tomáš Bajtoš
Information 2020, 11(11), 537; https://0-doi-org.brum.beds.ac.uk/10.3390/info11110537 - 19 Nov 2020
Cited by 1 | Viewed by 2934
Abstract
The rapid move to digitalization and the use of online information systems bring new and evolving threats that organizations must protect themselves from and respond to. Monitoring an organization’s network for malicious activity has become standard practice, together with event and log collection from network hosts. Security operation centers deal with a growing number of alerts raised by intrusion detection systems that process the collected data and monitor networks. The alerts must be processed so that the relevant stakeholders can make informed decisions when responding to situations. Correlating alerts into more expressive intrusion scenarios is an important tool for reducing false-positive and noisy alerts. In this paper, we propose correlation rules for identifying multi-stage attacks. Another contribution of this paper is a methodology for inferring from an alert the values needed to evaluate the attack in terms of the attacker’s skill level. We present our results on the CSE-CIC-IDS2018 data set. Full article
(This article belongs to the Special Issue Advanced Topics in Systems Safety and Security)

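A minimal sketch of the kind of correlation rule described above (the grouping attribute and time window here are hypothetical stand-ins, not the paper's actual rules): alerts from the same source arriving within a time window of each other are chained into one multi-stage scenario.

```python
from datetime import datetime, timedelta

def correlate(alerts, window=timedelta(minutes=30)):
    """Group alerts from the same source into multi-stage scenarios:
    an alert joins the source's open scenario if it arrives within
    `window` of the previous alert from that source."""
    scenarios = []
    open_scenario = {}            # src ip -> index of its open scenario
    for alert in sorted(alerts, key=lambda a: a["time"]):
        src = alert["src"]
        idx = open_scenario.get(src)
        if idx is not None and alert["time"] - scenarios[idx][-1]["time"] <= window:
            scenarios[idx].append(alert)
        else:
            scenarios.append([alert])
            open_scenario[src] = len(scenarios) - 1
    return scenarios

t0 = datetime(2020, 11, 1, 12, 0)
alerts = [
    {"src": "10.0.0.5", "sig": "port-scan",    "time": t0},
    {"src": "10.0.0.9", "sig": "port-scan",    "time": t0 + timedelta(minutes=5)},
    {"src": "10.0.0.5", "sig": "brute-force",  "time": t0 + timedelta(minutes=10)},
    {"src": "10.0.0.5", "sig": "exfiltration", "time": t0 + timedelta(minutes=25)},
]
scenarios = correlate(alerts)     # one 3-stage scenario plus a lone scan
```

The reconstructed stage sequence (scan, then brute force, then exfiltration) is exactly the kind of feature from which an attacker's skill level could then be inferred.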
17 pages, 860 KiB  
Article
Document Summarization Based on Coverage with Noise Injection and Word Association
by Heechan Kim and Soowon Lee
Information 2020, 11(11), 536; https://0-doi-org.brum.beds.ac.uk/10.3390/info11110536 - 19 Nov 2020
Cited by 1 | Viewed by 1720
Abstract
Automatic document summarization is a field of natural language processing that is rapidly improving with the development of end-to-end deep learning models. In this paper, we propose a novel summarization model that consists of three methods. The first is a coverage method based on noise injection that makes the attention mechanism select only important words by treating previous context information as noise; this alleviates the problem of the summarization model generating the same word sequence repeatedly. The second is a word association method that updates the information of each word by comparing the information of the current step with that of all previous decoding steps; this captures changes in the meaning of already-decoded words as subsequent words are generated. The third is a suppression loss function that explicitly minimizes the probabilities of non-answer words. The proposed summarization model showed good performance on several Recall-Oriented Understudy for Gisting Evaluation (ROUGE) metrics compared to state-of-the-art models on the CNN/Daily Mail summarization task, and these results were achieved with very few learning steps compared to those models. Full article
(This article belongs to the Special Issue Natural Language Processing for Social Media)

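The suppression-loss idea can be sketched on a toy vocabulary; the exact form below (cross-entropy plus a λ-weighted penalty on the total non-answer probability mass) is an illustrative assumption, not necessarily the paper's formulation:

```python
from math import exp, log

def softmax(logits):
    m = max(logits)
    exps = [exp(v - m) for v in logits]
    s = sum(exps)
    return [v / s for v in exps]

def loss_with_suppression(logits, answer_idx, lam=0.5):
    """Cross-entropy on the answer word plus an explicit penalty on the
    probability mass assigned to non-answer words."""
    probs = softmax(logits)
    ce = -log(probs[answer_idx])
    suppression = sum(p for i, p in enumerate(probs) if i != answer_idx)
    return ce + lam * suppression

# toy 4-word vocabulary; word 2 is the reference (answer) word
sharp = [0.1, 0.2, 5.0, 0.3]   # confident in the answer word
flat  = [1.0, 1.1, 1.2, 0.9]   # mass spread over non-answer words
```

A distribution concentrated on the answer word incurs a much smaller loss than a flat one, so minimizing this loss pushes non-answer probabilities down explicitly rather than only implicitly through cross-entropy.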
21 pages, 2129 KiB  
Article
Towards Context-Aware Opinion Summarization for Monitoring Social Impact of News
by Alejandro Ramón-Hernández, Alfredo Simón-Cuevas, María Matilde García Lorenzo, Leticia Arco and Jesús Serrano-Guerrero
Information 2020, 11(11), 535; https://0-doi-org.brum.beds.ac.uk/10.3390/info11110535 - 18 Nov 2020
Cited by 2 | Viewed by 2690
Abstract
Opinion mining and summarization of the growing volume of user-generated content on digital platforms (e.g., news platforms) play significant roles in the success of government programs and initiatives in digital governance, by extracting and analyzing citizens’ sentiments for decision-making. Opinion mining provides the sentiment of the content, whereas summarization aims to condense the most relevant information. However, most reported opinion summarization methods are conceived to obtain generic summaries, and the context that originates the opinions (e.g., the news) has not usually been considered. In this paper, we present a context-aware opinion summarization model for monitoring the opinions generated by news. In this approach, topic modeling and the news content are combined to determine the “importance” of opinionated sentences. The effectiveness of different settings of our model was evaluated through several experiments carried out on Spanish news and opinions collected from a real news platform. The results show that our model can generate opinion summaries focused on essential aspects of the news that also cover the main topics in the opinionated texts well. The integration of term clustering, word embeddings, and similarity-based sentence-to-news scoring turned out to be the most promising and effective setting of our model. Full article
(This article belongs to the Special Issue Information Retrieval and Social Media Mining)

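The similarity-based sentence-to-news scoring can be sketched with plain term-frequency vectors and cosine similarity (the paper combines such scoring with topic modeling, term clustering, and word embeddings; everything below is a simplified stand-in):

```python
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def score_sentences(news: str, opinions: list):
    """Rank opinionated sentences by their similarity to the news text."""
    news_vec = Counter(news.lower().split())
    scored = [(cosine(Counter(s.lower().split()), news_vec), s) for s in opinions]
    return sorted(scored, reverse=True)

news = "city council approves new budget for public transport"
opinions = [
    "great to see the budget for public transport was needed",
    "i had pizza for lunch today",
]
ranking = score_sentences(news, opinions)
```

Sentences that actually discuss the news rank above off-topic chatter, which is the "context-aware" part: importance is measured against the originating article, not the opinion stream alone.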
16 pages, 928 KiB  
Article
Identification of Social Aspects by Means of Inertial Sensor Data
by Luca Bedogni and Giacomo Cabri
Information 2020, 11(11), 534; https://0-doi-org.brum.beds.ac.uk/10.3390/info11110534 - 17 Nov 2020
Cited by 4 | Viewed by 1778
Abstract
Today’s applications and providers are very interested in knowing the social aspects of users in order to customize their services and be more effective. Among these aspects, users’ most frequented places and the paths taken to reach them are very useful for defining their habits. The most commonly exploited means of acquiring positions and paths is the GPS sensor; however, it has been shown that inertial sensor data can also lead to path identification. In this work, we present the Computationally Efficient algorithm to Reconstruct Vehicular Traces (CERT), a novel algorithm that computes the path traveled by a vehicle using accelerometer and magnetometer data. We show that, by analyzing accelerometer and magnetometer data in vehicular scenarios, CERT achieves almost perfect identification for medium and small sized cities. Moreover, we show that the longer the path, the easier it is to recognize. We also present results characterizing the privacy risks depending on the area of the world since, as we show, urban dynamics play a key role in path detection. Full article
(This article belongs to the Special Issue The Integration of Digital and Social Systems)

31 pages, 1498 KiB  
Article
Cybersecurity Challenges in Industry: Measuring the Challenge Solve Time to Inform Future Challenges
by Tiago Espinha Gasiba, Ulrike Lechner and Maria Pinto-Albuquerque
Information 2020, 11(11), 533; https://0-doi-org.brum.beds.ac.uk/10.3390/info11110533 - 16 Nov 2020
Cited by 4 | Viewed by 3152
Abstract
Cybersecurity vulnerabilities in industrial control systems have been steadily increasing over the last few years. One possible way to address this issue is to raise software developers’ awareness through education, with the intent of increasing software quality and reducing the number of vulnerabilities. CyberSecurity Challenges (CSCs) are a novel serious game genre that aims to raise industrial software developers’ awareness of secure coding, secure coding guidelines, and secure coding best practices. An important industry-specific requirement to consider in designing these kinds of games is the duration of the whole event and how much time it takes to solve each challenge individually: the challenge solve time. In this work, we present two different methods to compute the challenge solve time: one based on data collected from the CSC dashboard and another based on a challenge heartbeat. The results obtained by both methods are presented; the methods are compared to each other, and the advantages and limitations of each are discussed. Furthermore, we introduce the notion of a player profile, derived from dashboard data. Our results and contributions aim to establish a method of measuring the challenge solve time, inform the design of future challenges, and improve coaching during CSC gameplay. Full article
(This article belongs to the Special Issue Computer Programming Education)

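The two solve-time measurements can be sketched as follows; the event format and the 30-second heartbeat interval are hypothetical stand-ins for the actual CSC dashboard and heartbeat data:

```python
from datetime import datetime, timedelta

def solve_time_dashboard(events):
    """Method 1: time between the first dashboard interaction with a
    challenge and the submission that solved it."""
    start = min(e["time"] for e in events)
    solved = min(e["time"] for e in events if e["type"] == "solved")
    return solved - start

def solve_time_heartbeat(heartbeats, interval=timedelta(seconds=30)):
    """Method 2: number of heartbeats received while the player had the
    challenge open, multiplied by the heartbeat interval."""
    return len(heartbeats) * interval

t0 = datetime(2020, 11, 16, 9, 0)
events = [
    {"type": "opened", "time": t0},
    {"type": "hint",   "time": t0 + timedelta(minutes=12)},
    {"type": "solved", "time": t0 + timedelta(minutes=41)},
]
hb = [t0 + i * timedelta(seconds=30) for i in range(80)]  # 80 beats = 40 min
```

The two estimates generally differ: the dashboard method counts wall-clock time including breaks, while the heartbeat method only counts time with the challenge actually open, which is one of the trade-offs the paper discusses.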
30 pages, 11905 KiB  
Article
Connecting Semantic Situation Descriptions with Data Quality Evaluations—Towards a Framework of Automatic Thematic Map Evaluation
by Timo Homburg
Information 2020, 11(11), 532; https://0-doi-org.brum.beds.ac.uk/10.3390/info11110532 - 15 Nov 2020
Cited by 1 | Viewed by 2180
Abstract
A continuing question in the geospatial community is the evaluation of the fitness for use of map data for a variety of use cases. While data quality metrics and dimensions have been discussed broadly in the geospatial community and have been modelled in semantic web vocabularies, an ontological connection between use cases and data quality expressions, which would allow reasoning approaches to determine the fitness for use of semantic web map data, has not yet been attempted. This publication introduces such an ontological model to represent situations and link them with geospatial data quality metrics in order to evaluate thematic map contents. The ontology model constitutes the data storage element of a framework for use-case-based data quality assurance, which creates suggestions for data quality evaluations that are verified and improved upon by end users. The requirement profiles created in this way are associated with semantic web concepts and shared, and therefore contribute to a pool of linked data describing situation-based data quality assessments, which may be used by a variety of applications. The framework is tested on two scenarios, which are evaluated and discussed in a wider context. Full article
(This article belongs to the Section Information Processes)

13 pages, 6433 KiB  
Article
An Accurate GNSS-Based Redundant Safe Braking System for Urban Elevated Rail Maglev Trains
by João Batista Pinto Neto, Lucas de Carvalho Gomes, Miguel Elias Mitre Campista and Luís Henrique Maciel Kosmalski Costa
Information 2020, 11(11), 531; https://0-doi-org.brum.beds.ac.uk/10.3390/info11110531 - 15 Nov 2020
Cited by 1 | Viewed by 1964
Abstract
The combination of elevated rail structures and Maglev (magnetic levitation) trains is a promising alternative for urban transportation. Besides being cost-effective in comparison with underground solutions, Maglev technology is a clean and low-noise form of mass transportation. In this paper, we propose a low-cost automatic braking system for Maglev trains. A myriad of sensors and positioning techniques are used to improve the accuracy, precision, and stability of train navigation systems, but most of them entail high implementation costs. In this paper, we develop an affordable solution, called the Redundant Autonomous Safe Braking System (RASBS), for the MagLev-Cobra train, a magnetic levitation vehicle developed at the Federal University of Rio de Janeiro (UFRJ), Brazil. The proposed braking system employs GNSS (Global Navigation Satellite System) receivers at the stations and on the trains, connected via an ad hoc wireless network. The system uses a cooperative error correction algorithm to achieve sub-meter distance precision. We experimentally evaluated the performance of RASBS on the MagLev prototype located at the UFRJ campus. The results show that, using RASBS, the train is able to dynamically determine the precise location at which to start the braking procedure. Full article

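Cooperative error correction of this kind resembles a differential-GNSS scheme: a station at a surveyed position measures the local GNSS error and broadcasts it over the ad hoc network, and the train subtracts that error from its own raw fix. The sketch below (planar coordinates in metres, hypothetical numbers, not the authors' exact algorithm) shows the idea:

```python
def station_correction(true_pos, gnss_fix):
    """A station at a surveyed position derives the local GNSS error."""
    return (gnss_fix[0] - true_pos[0], gnss_fix[1] - true_pos[1])

def corrected_position(train_fix, correction):
    """The train applies the broadcast correction to its own raw fix."""
    return (train_fix[0] - correction[0], train_fix[1] - correction[1])

def braking_point_reached(train_pos, station_pos, braking_distance):
    """Start braking once within braking_distance of the station."""
    dx = train_pos[0] - station_pos[0]
    dy = train_pos[1] - station_pos[1]
    return (dx * dx + dy * dy) ** 0.5 <= braking_distance

station_true = (0.0, 0.0)         # surveyed station position
station_fix  = (2.1, -1.4)        # raw fix, off by a common-mode error
train_fix    = (502.3, -1.1)      # train's raw fix shares most of that error

corr = station_correction(station_true, station_fix)
train_pos = corrected_position(train_fix, corr)   # close to (500.2, 0.3)
```

Because the dominant GNSS errors are common to nearby receivers, subtracting the station's measured error leaves the train with sub-metre residuals, enough to place the braking point precisely.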
21 pages, 7758 KiB  
Article
Context-Aware Wireless Sensor Networks for Smart Building Energy Management System
by Najem Naji, Mohamed Riduan Abid, Driss Benhaddou and Nissrine Krami
Information 2020, 11(11), 530; https://0-doi-org.brum.beds.ac.uk/10.3390/info11110530 - 15 Nov 2020
Cited by 13 | Viewed by 3794
Abstract
Energy Management Systems (EMS) are indispensable for Smart Energy-Efficient Buildings (SEEB). This paper proposes a Wireless Sensor Network (WSN)-based EMS deployed and tested in a real-world smart building on a university campus. The at-scale implementation enabled the deployment of a WSN mesh topology to evaluate performance in terms of routing capabilities, data collection, and throughput. The proposed EMS uses the Context-Based Reasoning (CBR) model to represent different types of buildings and offices. We implemented a new energy-efficient policy for electrical heater control based on a Finite State Machine (FSM) leveraging context-related events, which proved significantly effective in minimizing the processing load, especially when adopting multithreading in data acquisition and control. To optimize the sensors’ battery lifetime, we deployed a new Energy-Aware Context Recognition Algorithm (EACRA) that dynamically configures sensors to send data only under specific conditions and at particular times, avoiding redundant transmissions. EACRA increases the sensors’ battery lifetime by optimizing the number of samples, modules used, and transmissions. Our proposed EMS design can be used as a model to retrofit other kinds of buildings, such as residential and industrial ones, thus converting them into SEEBs. Full article
(This article belongs to the Special Issue Data Processing in the Internet of Things)

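An FSM-based heater policy driven by context events can be sketched as a transition table keyed by (state, event); the states and events below are hypothetical illustrations, not the paper's actual model. The appeal is exactly what the abstract notes: the controller only does work when an event arrives, keeping the processing load minimal.

```python
# Hypothetical states and context events illustrating the FSM idea.
TRANSITIONS = {
    ("off",     "occupied_and_cold"): "heating",
    ("heating", "setpoint_reached"):  "idle",
    ("heating", "room_vacated"):      "off",
    ("idle",    "temperature_drop"):  "heating",
    ("idle",    "room_vacated"):      "off",
}

class HeaterFSM:
    def __init__(self):
        self.state = "off"

    def on_event(self, event: str) -> str:
        """Advance the FSM on a context event; unknown (state, event)
        pairs are ignored, so no per-tick polling is needed."""
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state

fsm = HeaterFSM()
for ev in ["occupied_and_cold", "setpoint_reached",
           "temperature_drop", "room_vacated"]:
    fsm.on_event(ev)        # off -> heating -> idle -> heating -> off
```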
17 pages, 3398 KiB  
Article
Optimized Particle Swarm Optimization Algorithm for the Realization of an Enhanced Energy-Aware Location-Aided Routing Protocol in MANET
by Taj-Aldeen Naser Abdali, Rosilah Hassan, Ravie Chandren Muniyandi, Azana Hafizah Mohd Aman, Quang Ngoc Nguyen and Ahmed Salih Al-Khaleefa
Information 2020, 11(11), 529; https://0-doi-org.brum.beds.ac.uk/10.3390/info11110529 - 15 Nov 2020
Cited by 33 | Viewed by 3227
Abstract
A Mobile Ad hoc Network (MANET) is a wireless network topology with mobile nodes and movable communication routes, in which nodes are free to either join or leave the network. Routing in MANETs is typically multi-hop because of the nodes’ limited communication range, and various routing protocols have been developed for them. Among these, energy-aware location-aided routing (EALAR) is an efficient reactive MANET routing protocol recently obtained by integrating particle swarm optimization (PSO) with a mutation operation into the conventional LAR protocol. However, the nonuniform mutation operation used in EALAR has some drawbacks, which cause EALAR to provide insufficient exploration, exploitation, and diversity of solutions. Therefore, this study proposes an Optimized PSO (OPSO) that adopts a uniform mutation operation instead of a nonuniform one. The OPSO is integrated into the LAR protocol to enhance all critical performance metrics, including packet delivery ratio, energy consumption, overhead, and end-to-end delay. Full article
(This article belongs to the Special Issue Wireless IoT Network Protocols)

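The difference between the two mutation operators can be sketched directly (the nonuniform form below is the common Michalewicz-style operator, assumed here for illustration): uniform mutation keeps a constant exploration range over the whole run, while the nonuniform operator's perturbation range collapses in late generations, which is the exploration deficit the abstract describes.

```python
import random

def uniform_mutation(x, lo, hi, rng):
    """Uniform mutation: resample anywhere in [lo, hi], so the
    exploration strength stays constant over generations."""
    return rng.uniform(lo, hi)

def nonuniform_mutation(x, lo, hi, rng, gen, max_gen, b=5.0):
    """Michalewicz-style nonuniform mutation: the perturbation range
    shrinks as gen approaches max_gen, limiting late-stage exploration."""
    shrink = (1.0 - gen / max_gen) ** b
    delta = (hi - lo) * (1.0 - rng.random() ** shrink)
    step = rng.random() * delta
    return min(hi, x + step) if rng.random() < 0.5 else max(lo, x - step)

rng = random.Random(42)
late = [abs(nonuniform_mutation(0.5, 0.0, 1.0, rng, gen=95, max_gen=100) - 0.5)
        for _ in range(1000)]
uni = [abs(uniform_mutation(0.5, 0.0, 1.0, rng) - 0.5) for _ in range(1000)]
```

Near the end of a run the nonuniform perturbations are vanishingly small, while uniform mutation still moves particles across the whole search interval.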
12 pages, 543 KiB  
Article
Semantic Enhanced Distantly Supervised Relation Extraction via Graph Attention Network
by Xiaoye Ouyang, Shudong Chen and Rong Wang
Information 2020, 11(11), 528; https://0-doi-org.brum.beds.ac.uk/10.3390/info11110528 - 14 Nov 2020
Cited by 3 | Viewed by 2483
Abstract
Distantly Supervised relation extraction methods can automatically extract the relations between entity pairs, which is essential for the construction of a knowledge graph. However, the automatically constructed datasets comprise large numbers of low-quality sentences and noisy words, and current Distantly Supervised methods ignore these noisy data, resulting in unacceptable accuracy. To mitigate this problem, we present SEGRE (Semantic Enhanced Graph attention networks Relation Extraction), a novel Distantly Supervised approach for improved relation extraction. Our model first uses word position and entity type information to provide abundant local features and background knowledge. It then builds dependency trees to remove noisy words that are irrelevant to the relations and employs Graph Attention Networks (GATs) to encode the syntactic information, which also captures the important semantic features of relational words in each instance. Furthermore, to make our model more robust against noisy words, an intra-bag attention module is used to weight the bag representation and mitigate noise within the bag. Through extensive experiments on the Riedel New York Times (NYT) and Google IISc Distantly Supervised (GIDS) datasets, we demonstrate SEGRE’s effectiveness. Full article
(This article belongs to the Section Artificial Intelligence)

15 pages, 5476 KiB  
Article
Distributed Simulation with Multi-Agents for IoT in a Retail Pharmacy Facility
by Mohammed Basingab
Information 2020, 11(11), 527; https://0-doi-org.brum.beds.ac.uk/10.3390/info11110527 - 13 Nov 2020
Cited by 1 | Viewed by 1980
Abstract
Nowadays, Internet of Things (IoT) technology is considered one of the key future technologies, and its adoption is receiving close attention from many industries as competitive pressures inspire them to move forward and invest. As technologies such as IoT continue to advance, there is a vital need for an approach to assess their viability. This research proposes the adoption of IoT technology and the use of a simulation paradigm to capture the complexity of a system, offer reliable and continuous insights into its present and likely future state, and evaluate the economic feasibility of such an adoption. A case study of one of the largest pharmacy retail chains is presented, in which IoT devices are suggested for remotely monitoring the failures of a geographically distributed system of refrigeration units. A multi-agent distributed simulation is proposed to model the operational behavior of the refrigerators and calculate the return on investment (ROI) of the proposed IoT implementation. Full article
(This article belongs to the Special Issue Distributed Simulation 2020)

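The ROI evaluation can be sketched with a minimal agent-style failure simulation; all costs, failure rates, and fleet sizes below are hypothetical, not figures from the case study. Each refrigerator agent may fail on any day, and IoT monitoring reduces the loss per failure by catching it early:

```python
import random

def simulate_losses(n_units, days, p_fail, loss_undetected, loss_detected,
                    iot_enabled, rng):
    """Each refrigerator agent may fail on any day; with IoT monitoring
    a failure is caught early, so the spoilage loss is much smaller."""
    total = 0.0
    for _ in range(n_units):
        for _ in range(days):
            if rng.random() < p_fail:
                total += loss_detected if iot_enabled else loss_undetected
    return total

rng = random.Random(7)
baseline = simulate_losses(200, 365, 0.002, 4000.0, 500.0, False, rng)
rng = random.Random(7)              # same failure draws for a fair comparison
with_iot = simulate_losses(200, 365, 0.002, 4000.0, 500.0, True, rng)

investment = 120_000.0              # sensors + connectivity (hypothetical)
roi = ((baseline - with_iot) - investment) / investment
```

Running both scenarios on identical failure draws isolates the monitoring effect; the ROI then compares the avoided losses against the cost of the IoT deployment.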
16 pages, 806 KiB  
Article
Evaluating the Investment Climate for China’s Cross-Border E-Commerce: The Application of Back Propagation Neural Network
by Yi Lei and Xiaodong Qiu
Information 2020, 11(11), 526; https://0-doi-org.brum.beds.ac.uk/10.3390/info11110526 - 12 Nov 2020
Cited by 5 | Viewed by 2343
Abstract
China’s cross-border e-commerce is ushering in a new golden age of development. Based on seven countries along the “Belt and Road” (the Russian Federation, Mongolia, Ukraine, Kazakhstan, Tajikistan, Kyrgyzstan and Belarus), an evaluation system of cross-border e-commerce investment climate indicators is established in this study. The entropy method is applied twice to comprehensively evaluate the investment climate of the seven countries on the basis of five years of panel data; the countries are then classified into politics-oriented and industry-oriented groups, and the weight of the indicators for each category is analyzed. In addition, it is proposed that cross-border e-commerce investors prioritize industry-oriented countries. A back propagation neural network is used to map the existing data and, in combination with a genetic algorithm, to optimize the evaluation index system. This research seeks the combination of evaluation indices that yields the best overall score, makes the established evaluation index system applicable to other countries, and provides a reference for cross-border e-commerce investors when evaluating the investment climate of each country. The study offers important practical implications for the sustainable development of China’s cross-border e-commerce environment. Full article
(This article belongs to the Special Issue Personalized Visual Recommendation for E-Commerce)
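The entropy weighting step the abstract mentions is a standard technique: indicators whose values vary more across countries carry more information and receive larger weights. A minimal sketch, with invented toy numbers rather than the study's panel data:

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method for an (n_samples, n_indicators) matrix
    of non-negative indicator values."""
    # Normalize each indicator column into a probability distribution.
    P = X / X.sum(axis=0)
    n = X.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        logs = np.where(P > 0, np.log(P), 0.0)
    # Shannon entropy per indicator, scaled to [0, 1].
    e = -(P * logs).sum(axis=0) / np.log(n)
    d = 1.0 - e              # degree of diversification
    return d / d.sum()       # weights summing to 1

# Toy data: 4 countries x 3 indicators (purely illustrative numbers).
X = np.array([[0.9, 10.0, 3.0],
              [0.8, 10.0, 7.0],
              [0.7, 10.0, 2.0],
              [0.6, 10.0, 8.0]])
w = entropy_weights(X)
print(w)  # the constant second indicator gets (near) zero weight
```

An indicator that is identical for every country (the second column) carries no discriminating information, so its entropy is maximal and its weight collapses to zero.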
13 pages, 348 KiB  
Article
Graph Convolutional Neural Network for a Pharmacy Cross-Selling Recommender System
by Franz Hell, Yasser Taha, Gereon Hinz, Sabine Heibei, Harald Müller and Alois Knoll
Information 2020, 11(11), 525; https://0-doi-org.brum.beds.ac.uk/10.3390/info11110525 - 11 Nov 2020
Cited by 10 | Viewed by 4809
Abstract
Recent advancements in deep neural networks for graph-structured data have led to state-of-the-art performance in recommender system benchmarks. Adapting these methods to pharmacy product cross-selling recommendation tasks with a million products and hundreds of millions of sales remains a challenge, due to the intricate medical and legal properties of pharmaceutical data. To tackle this challenge, we developed a graph convolutional network (GCN) algorithm called PharmaSage, which uses graph convolutions to generate embeddings for pharmacy products that are then used in a downstream recommendation task. In the underlying graph, we incorporate cross-sales information from the sales transactions within the graph structure, as well as product information as node features. Via modifications to the sampling involved in the network optimization process, we address a common phenomenon in recommender systems, the so-called popularity bias: popular products are frequently recommended, while less popular items are often neglected and recommended seldom or not at all. We deployed PharmaSage using real-world sales data and trained it on 700,000 articles represented as nodes in a graph whose edges represent approximately 100 million sales transactions. By exploiting pharmaceutical product properties, such as indications, ingredients, and adverse effects, and combining these with large sales histories, we achieved better results than with a purely statistics-based approach. To our knowledge, this is the first application of deep graph embeddings for pharmacy product cross-selling recommendation at this scale to date. Full article
(This article belongs to the Special Issue Information Retrieval and Social Media Mining)
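PharmaSage itself is not public, but the core graph-convolution idea, embedding each product by mixing in the features of the products it is co-sold with, can be sketched generically. The normalization below follows the common Kipf-and-Welling formulation, which is an assumption about the layer form, and all data are random toys:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W).
    Each node's new embedding averages over its neighbors' features."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Toy cross-sales graph: 4 products, an edge = items bought together.
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.random.default_rng(0).normal(size=(4, 8))  # node features
W = np.random.default_rng(1).normal(size=(8, 2))  # learned weights
Z = gcn_layer(A, H, W)
# Recommend by embedding similarity, e.g. score all products against product 0.
sims = Z @ Z[0]
print(Z.shape, sims.shape)
```

In a production system such as the one described, the layer weights would be trained end-to-end and neighbors would be sampled rather than aggregated exhaustively, since a dense adjacency matrix is infeasible at 700,000 nodes.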
21 pages, 2455 KiB  
Article
GPRS Sensor Node Battery Life Span Prediction Based on Received Signal Quality: Experimental Study
by Joseph Habiyaremye, Marco Zennaro, Chomora Mikeka, Emmanuel Masabo, Santhi Kumaran and Kayalvizhi Jayavel
Information 2020, 11(11), 524; https://0-doi-org.brum.beds.ac.uk/10.3390/info11110524 - 11 Nov 2020
Cited by 6 | Viewed by 3426
Abstract
Nowadays, with the evolution of the Internet of Things (IoT), building a network of sensors for measuring data from remote locations requires careful planning across many parameters, including power consumption. Many communication technologies, such as Wi-Fi, Bluetooth, ZigBee, LoRa, Sigfox, and GSM/GPRS, are used depending on the application, and each application has requirements such as communication range, power consumption, and the nature of the data to be transmitted. In some places, especially hilly areas like Rwanda where GSM connectivity is already available, GSM/GPRS may be the best choice for IoT applications. Energy consumption is a major challenge for battery-powered sensor nodes, as the lifetime of the node and of the network depends on the battery's state of charge. In this paper, we focus on static sensor nodes communicating over the GPRS protocol. We measured the current consumption of a sensor node at different locations, together with the corresponding received signal quality, and experimentally derived a data-driven mathematical model for estimating the GSM/GPRS sensor node battery lifetime from the received signal strength indicator (RSSI). This outcome will help to predict GPRS sensor node life, replacement intervals, and dynamic handover, which will in turn provide uninterrupted data service. The model can be deployed in various remote WSN- and IoT-based applications, such as forests and volcanoes. Our results are striking: a 30 dB drop in RSSI roughly doubles the current consumption of the node's radio unit. Full article
(This article belongs to the Special Issue Wireless IoT Network Protocols)
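The headline finding, that each 30 dB drop in RSSI roughly doubles the radio current, implies an exponential lifetime model. A hypothetical sketch with assumed calibration constants, not the paper's fitted coefficients:

```python
def radio_current_ma(rssi_dbm, i_ref_ma=20.0, rssi_ref_dbm=-70.0):
    """Illustrative model of the headline finding: every 30 dB drop in
    RSSI doubles the radio unit's current draw. i_ref_ma and
    rssi_ref_dbm are assumed calibration values."""
    return i_ref_ma * 2.0 ** ((rssi_ref_dbm - rssi_dbm) / 30.0)

def battery_life_hours(capacity_mah, rssi_dbm, base_current_ma=5.0):
    """Lifetime = battery capacity / (MCU + sensor current + radio current)."""
    return capacity_mah / (base_current_ma + radio_current_ma(rssi_dbm))

# A node in a weak-signal valley drains its battery much faster:
good = battery_life_hours(2400, rssi_dbm=-70)
bad = battery_life_hours(2400, rssi_dbm=-100)  # 30 dB weaker signal
print(round(good, 1), round(bad, 1))
```

With these assumed constants a 2400 mAh battery lasts 96 hours at good signal; at 30 dB weaker signal the radio term doubles and the lifetime shrinks accordingly, which is exactly the kind of estimate the paper proposes for planning replacement intervals.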
15 pages, 416 KiB  
Article
A Method of Ultra-Large-Scale Matrix Inversion Using Block Recursion
by HouZhen Wang, Yan Guo and HuanGuo Zhang
Information 2020, 11(11), 523; https://0-doi-org.brum.beds.ac.uk/10.3390/info11110523 - 10 Nov 2020
Cited by 6 | Viewed by 3787
Abstract
Ultra-large-scale matrix inversion is a fundamental operation in numerous domains, owing to the growth of big data and matrix applications. Taking cryptography as an example, solving ultra-large-scale systems of linear equations over finite fields is important in many cryptanalysis schemes. However, inverting matrices of extremely high order, such as in the millions, is challenging; nonetheless, the need has become increasingly urgent. Hence, we propose a parallel distributed block recursive computing method, based on Strassen’s method, that can process matrices at a significantly larger scale, and we describe the corresponding algorithm herein. Comparative experimental results demonstrate the efficiency and superiority of our method, which can process matrices of up to 140,000 dimensions in a supercomputing center. Full article
(This article belongs to the Special Issue Cyberspace Security, Privacy & Forensics)
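The block-recursive idea rests on the classic 2x2 blockwise inversion identity via the Schur complement. A minimal floating-point sketch of that recursion (the paper targets finite fields and distributed execution, both of which this toy omits):

```python
import numpy as np

def block_inverse(M, threshold=2):
    """Recursive 2x2 blockwise inversion. With M = [[A, B], [C, D]] and
    Schur complement S = D - C A^-1 B:
        M^-1 = [[A^-1 + A^-1 B S^-1 C A^-1, -A^-1 B S^-1],
                [-S^-1 C A^-1,               S^-1        ]]
    Only two recursive inversions (A and S) are needed per level; the
    block products are where Strassen-style multiplication pays off.
    Assumes the leading block A is invertible at every level."""
    n = M.shape[0]
    if n <= threshold:
        return np.linalg.inv(M)
    k = n // 2
    A, B = M[:k, :k], M[:k, k:]
    C, D = M[k:, :k], M[k:, k:]
    Ai = block_inverse(A, threshold)
    S = D - C @ Ai @ B                 # Schur complement of A
    Si = block_inverse(S, threshold)
    top = np.hstack([Ai + Ai @ B @ Si @ C @ Ai, -Ai @ B @ Si])
    bot = np.hstack([-Si @ C @ Ai, Si])
    return np.vstack([top, bot])

rng = np.random.default_rng(7)
M = rng.normal(size=(8, 8)) + 8 * np.eye(8)  # well-conditioned test matrix
err = np.abs(block_inverse(M) @ M - np.eye(8)).max()
print(err)
```

The diagonal shift keeps every leading block invertible in this toy; a production implementation would pivot, work in the target finite field, and distribute the block products across nodes.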
17 pages, 1818 KiB  
Article
The Challenges and Opportunities to Formulate and Integrate an Effective ICT Policy at Mountainous Rural Schools of Gilgit-Baltistan
by Sabit Rahim, Tehmina Bibi, Sadruddin Bahadur Qutoshi, Shehla Gul, Yasmeen Gul, Naveed Ali Khan Kaim Khani and Muhammad Shahid Malik
Information 2020, 11(11), 522; https://0-doi-org.brum.beds.ac.uk/10.3390/info11110522 - 09 Nov 2020
Viewed by 3238
Abstract
The study, through the lens of school principals’ views, investigates the challenges and opportunities in formulating an information and communications technology (ICT) policy and integrating it into teaching and learning practices at schools in the mountainous rural areas of Gilgit-Baltistan (GB). This quantitative study focuses on three different educational systems (regional, national, and international) operating in GB, Pakistan, as its source of data. Questionnaires administered to principals and reviews of policy documents were used to collect the required data, which were analyzed using SPSS. The results show that both groups (male and female) strongly agree that an ICT policy should be formulated and integrated into teaching and learning to drive improvement at the school level. The results also show that school heads face a number of challenges (e.g., lack of infrastructure, finance, Internet access, technical staff, time, awareness, and training facilities) in formulating an ICT policy and integrating it into teaching and learning. The results further reveal that the majority of schools lack an ICT policy despite having competent principals. Therefore, the research recommends that school-level ICT policy be developed and integrated into teaching and learning practices to create an environment of powerful learning at schools and to fulfill the needs and demands of 21st century education. Full article
(This article belongs to the Special Issue ICT Enhanced Social Sciences and Humanities)
31 pages, 517 KiB  
Review
Wearable Sensors for Monitoring and Preventing Noncommunicable Diseases: A Systematic Review
by Annica Kristoffersson and Maria Lindén
Information 2020, 11(11), 521; https://0-doi-org.brum.beds.ac.uk/10.3390/info11110521 - 06 Nov 2020
Cited by 13 | Viewed by 4015
Abstract
Ensuring healthy lives and promoting well-being for all at all ages are among the goals of Agenda 2030 for Sustainable Development. Considering that noncommunicable diseases (NCDs) are the leading cause of death worldwide, reducing NCD mortality is an important target. Reaching this goal requires means for detecting and reacting to warning signals, and here remote health monitoring in real time has great potential. This article provides a systematic review of the use of wearable sensors for the monitoring and prevention of NCDs. It not only provides in-depth information about the retrieved articles, but also discusses examples of studies assessing warning signals that, if left untreated, may result in serious health conditions such as stroke and cardiac arrest. One finding is that, even though many good examples of wearable sensor systems for monitoring and controlling NCDs are presented, many issues remain to be solved. One major issue is the lack of testing on people who are representative from a sociodemographic perspective. Even though substantial work remains, wearable sensor systems have great potential in the battle against NCDs by providing the means to diagnose, monitor and prevent them. Full article
(This article belongs to the Special Issue Ubiquitous Sensing for Smart Health Monitoring)
13 pages, 2311 KiB  
Article
Two-Dimensional Jamming Recognition Algorithm Based on the Sevcik Fractal Dimension and Energy Concentration Property for UAV Frequency Hopping Systems
by Rui Xue, Jing Liu and Huaiyu Tang
Information 2020, 11(11), 520; https://0-doi-org.brum.beds.ac.uk/10.3390/info11110520 - 06 Nov 2020
Cited by 5 | Viewed by 1952
Abstract
Unmanned aerial vehicle frequency hopping (UAV-FH) systems face multiple types of jamming, and no single anti-jamming method can cope with all of them. Therefore, the jamming signals in the environment of a UAV-FH system must be identified and classified, and anti-jamming measures must be selected according to the jamming type. The proposed algorithm first extracts the Sevcik fractal dimension from the frequency domain (SFDF) and the degree of energy concentration from the fractional Fourier domain of the various jamming types. These two parameters are then combined into a two-dimensional feature vector used for classification and recognition. Lastly, a binary tree-based support vector machine (BT-SVM) multi-classifier classifies the jamming signal. Simulation results show that the feature parameters extracted by the proposed method are well separated and highly stable. Compared with the existing box-dimension recognition algorithm, the new algorithm not only identifies the jamming type quickly and accurately but also has greater advantages when the jamming-to-noise ratio (JNR) is low. Full article
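Sevcik's fractal dimension has a closed form, which makes it cheap to extract as a feature. A minimal sketch, applied here to time-domain toy signals rather than the frequency-domain spectra the paper uses, shows why it separates smooth tones from noise-like jamming:

```python
import math

def sevcik_fd(y):
    """Sevcik fractal dimension of a 1-D sequence: normalize the
    waveform into the unit square, measure its length L, then
    D = 1 + ln(L) / ln(2 * (N - 1))."""
    n = len(y)
    lo, hi = min(y), max(y)
    ys = [(v - lo) / (hi - lo) for v in y]
    xs = [i / (n - 1) for i in range(n)]
    L = sum(math.hypot(xs[i + 1] - xs[i], ys[i + 1] - ys[i])
            for i in range(n - 1))
    return 1.0 + math.log(L) / math.log(2 * (n - 1))

# A smooth tone has a lower dimension than a noise-like signal.
N = 512
tone = [math.sin(2 * math.pi * 5 * i / N) for i in range(N)]
noise = [math.sin(12345.678 * i * i) for i in range(N)]  # pseudo-noise
print(sevcik_fd(tone), sevcik_fd(noise))
```

A jagged, noise-like curve travels much farther inside the unit square than a smooth one, so its dimension is higher; paired with the fractional-Fourier energy concentration, this yields the two-dimensional feature the BT-SVM classifies.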
13 pages, 460 KiB  
Article
Random Forest with Sampling Techniques for Handling Imbalanced Prediction of University Student Depression
by Siriporn Sawangarreerak and Putthiporn Thanathamathee
Information 2020, 11(11), 519; https://0-doi-org.brum.beds.ac.uk/10.3390/info11110519 - 05 Nov 2020
Cited by 24 | Viewed by 4002
Abstract
In this work, we propose a combined sampling technique to improve the performance of imbalanced classification of university student depression data. In our experiments, we found that combining random oversampling with Tomek links undersampling generated a relatively balanced depression dataset without losing significant information. Random oversampling was used to sample the minority class and balance the number of samples between the classes; the Tomek links technique was then used to undersample the data by removing depression records considered less relevant and noisy. The relatively balanced dataset was classified with a random forest. The results show an overall accuracy of 94.17% in predicting adolescent depression, outperforming either sampling technique alone. Moreover, our proposed method was tested on another dataset to assess its external validity, achieving a predictive accuracy of 93.33%. Full article
(This article belongs to the Special Issue Data Modeling and Predictive Analytics)
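The combined sampling scheme can be sketched from its two standard ingredients. The snippet below is a generic illustration on toy 2-D data, not the study's depression dataset, and follows the common convention of removing the majority-class member of each Tomek link:

```python
import numpy as np

def random_oversample(X, y, rng):
    """Duplicate random minority-class samples until classes are balanced."""
    classes, counts = np.unique(y, return_counts=True)
    minority = classes[counts.argmin()]
    idx = np.flatnonzero(y == minority)
    extra = rng.choice(idx, size=counts.max() - counts.min(), replace=True)
    return np.vstack([X, X[extra]]), np.concatenate([y, y[extra]])

def remove_tomek_links(X, y, majority):
    """A Tomek link is a pair of opposite-class points that are each
    other's nearest neighbor; drop the majority-class member of each."""
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)
    nn = d.argmin(axis=1)
    drop = [i for i in range(len(y))
            if nn[nn[i]] == i and y[i] != y[nn[i]] and y[i] == majority]
    keep = np.setdiff1d(np.arange(len(y)), drop)
    return X[keep], y[keep]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (40, 2)), rng.normal(2, 1, (10, 2))])
y = np.array([0] * 40 + [1] * 10)          # imbalanced: 40 vs 10
X2, y2 = random_oversample(X, y, rng)       # now 40 vs 40
X3, y3 = remove_tomek_links(X2, y2, majority=0)
print(np.bincount(y2), np.bincount(y3))
```

The balanced, link-cleaned dataset would then be fed to a random forest; in practice the `imbalanced-learn` package offers ready-made `RandomOverSampler` and `TomekLinks` transformers for this pipeline.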
16 pages, 1058 KiB  
Article
Urdu Documents Clustering with Unsupervised and Semi-Supervised Probabilistic Topic Modeling
by Mubashar Mustafa, Feng Zeng, Hussain Ghulam and Hafiz Muhammad Arslan
Information 2020, 11(11), 518; https://0-doi-org.brum.beds.ac.uk/10.3390/info11110518 - 05 Nov 2020
Cited by 9 | Viewed by 5315
Abstract
Document clustering groups documents according to certain semantic features. Topic models have a rich semantic structure and considerable potential for helping users understand document corpora. Unfortunately, owing to their purely unsupervised nature, this potential is stymied for collections of text documents whose categories overlap. To solve this problem, some semi-supervised models have been proposed for English. However, no such work is available for the low-resource language Urdu, which has its own morphology, syntax and semantics, making document clustering in Urdu a challenging task. In this study, we propose a semi-supervised framework for clustering Urdu documents that deals with the challenges of Urdu morphology. The proposed model combines pre-processing techniques, a seeded LDA model and Gibbs sampling; we call it seeded Urdu Latent Dirichlet Allocation (seeded-ULDA). We apply the proposed model and other methods to the categorization of Urdu news datasets. Two conditions are considered for document clustering: a “dataset without overlapping”, in which all classes are distinct, and a “dataset with overlapping”, in which the categories overlap and the classes are connected to each other. The aim of this study is threefold. First, it shows that unsupervised models (Latent Dirichlet Allocation (LDA), non-negative matrix factorization (NMF) and K-means) give satisfying results on the dataset without overlapping. Second, it shows that these unsupervised models do not perform well on the dataset with overlapping, because on this dataset they find topics that are neither entirely meaningful nor effective in extrinsic tasks. Third, our proposed semi-supervised model, seeded-ULDA, performs well on both datasets because it is a straightforward and effective way to instruct topic models to find topics of specific interest. The paper shows that the semi-supervised model, seeded-ULDA, provides significant results compared to the unsupervised algorithms. Full article
(This article belongs to the Special Issue Natural Language Processing for Social Media)
19 pages, 2957 KiB  
Article
Heracles: A Context-Based Multisensor Sensor Data Fusion Algorithm for the Internet of Things
by Flávia C. Delicato, Tayssa Vandelli, Mario Bonicea and Claudio M. de Farias
Information 2020, 11(11), 517; https://0-doi-org.brum.beds.ac.uk/10.3390/info11110517 - 04 Nov 2020
Cited by 1 | Viewed by 1921
Abstract
In the Internet of Things (IoT), extending the average battery duration of devices is of paramount importance, since it promotes uptime without intervention in the environment, which can be undesirable or costly. In the IoT, the system’s functionalities are distributed among devices that (i) collect, (ii) transmit and (iii) apply algorithms to process and analyze data. A widely adopted technique for increasing the lifetime of an IoT system is using data fusion on the devices that process and analyze data. There are already several works proposing data fusion algorithms for the context of wireless sensor networks and IoT. However, most of them consider that application requirements (such as the data sampling rate and the data range of the events of interest) are previously known, and the solutions are tailored for a single target application. In the context of a smart city, we envision that the IoT will provide a sensing and communication infrastructure to be shared by multiple applications, that will make use of this infrastructure in an opportunistic and dynamic way, with no previous knowledge about its requirements. In this work, we present Heracles, a new data fusion algorithm tailored to meet the demands of the IoT for smart cities. Heracles considers the context of the application, adapting to the features of the dataset to perform the data analysis. Heracles aims at minimizing data transmission to save energy while generating value-added information, which will serve as input for decision-making processes. Results of the performed evaluation show that Heracles is feasible, enhances the performance of decision methods and extends the system lifetime. Full article
(This article belongs to the Special Issue Data Processing in the Internet of Things)
15 pages, 585 KiB  
Article
Botnet Defense System: Concept, Design, and Basic Strategy
by Shingo Yamaguchi
Information 2020, 11(11), 516; https://0-doi-org.brum.beds.ac.uk/10.3390/info11110516 - 04 Nov 2020
Cited by 19 | Viewed by 3773
Abstract
This paper proposes a new kind of cyber-security system, named Botnet Defense System (BDS), which defends an Internet of Things (IoT) system against malicious botnets. The concept of the BDS is to “fight fire with fire”: its distinguishing feature is that it uses white-hat botnets to fight malicious ones. A BDS consists of four components: Monitor, Strategy Planner, Launcher, and Command and Control (C&C) server. The Monitor component watches over a target IoT system. If it detects a malicious botnet, the Strategy Planner component devises a strategy against the botnet. Based on the planned strategy, the Launcher component sends white-hat worms into the IoT system and constructs a white-hat botnet. The C&C server component commands and controls the white-hat botnet to exterminate the malicious botnet. Strategy studies are essential to produce the intended results, so we propose three basic strategies for launching white-hat worms: All-Out, Few-Elite, and Environment-Adaptive. We evaluated the BDS and the proposed strategies by simulating an agent-oriented Petri net model representing the battle between Mirai botnets and white-hat botnets. The results show that the Environment-Adaptive strategy is the best, reducing the number of white-hat worms needed to 38.5% while leaving the extermination rate for Mirai bots almost unchanged. Full article
(This article belongs to the Special Issue Security and Privacy in the Internet of Things)
16 pages, 248 KiB  
Article
Social Capital on Social Media—Concepts, Measurement Techniques and Trends in Operationalization
by Flora Poecze and Christine Strauss
Information 2020, 11(11), 515; https://0-doi-org.brum.beds.ac.uk/10.3390/info11110515 - 04 Nov 2020
Cited by 7 | Viewed by 7080
Abstract
The introduction of the Web 2.0 era and the associated emergence of social media platforms opened an interdisciplinary research domain, wherein a growing number of studies are focusing on the interrelationship of social media usage and perceived individual social capital. The primary aim of the present study is to introduce the existing measurement techniques of social capital in this domain, explore trends, and offer promising directions and implications for future research. Applying the method of a scoping review, a set of 80 systematically identified scientific publications were analyzed, categorized, grouped and discussed. Focus was placed on the employed viewpoints and measurement techniques necessary to tap into the possible consistencies and/or heterogeneity in this domain in terms of operationalization. The results reveal that multiple views and measurement techniques are present in this research area, which might raise a challenge in future synthesis approaches, especially in the case of future meta-analytical contributions. Full article
(This article belongs to the Special Issue Information Retrieval and Social Media Mining)
14 pages, 475 KiB  
Article
Nutrient Profiling of Romanian Traditional Dishes—Prerequisite for Supporting the Flexitarian Eating Style
by Lelia Voinea, Dorin Vicențiu Popescu, Teodor Mihai Negrea and Răzvan Dina
Information 2020, 11(11), 514; https://0-doi-org.brum.beds.ac.uk/10.3390/info11110514 - 02 Nov 2020
Cited by 4 | Viewed by 2698
Abstract
Currently, most countries must deal with the discrepancies that have arisen between the constraints of sustainable development and the return to traditions, involving food producers as well as consumers; these tensions are readily observed in Romania. The main purpose of this study was therefore to assess the nutritional quality of the traditional Romanian diet using a nutrient profiling method based on the Nutri-Score algorithm, applied to several representative Romanian traditional dishes. Because this algorithm highlights the proportion (%) of fruits, vegetables, and nuts in a dish, it can be considered an indicator of the sustainability of the selected meals. The results showed that the traditional menus do not correspond to balanced and sustainable eating behavior; it is therefore recommended to improve the Romanian pattern of food consumption and place it on a sustainable basis. To achieve this goal, we propose a new paradigm for the contemporary Romanian food style built on three main directions of action: acceptance, adaptation, and transformation. Full article
(This article belongs to the Special Issue Green Marketing)
10 pages, 202 KiB  
Article
Perceptions and Misperceptions of Smartphone Use: Applying the Social Norms Approach
by John McAlaney, Mohamed Basel Almourad, Georgina Powell and Raian Ali
Information 2020, 11(11), 513; https://0-doi-org.brum.beds.ac.uk/10.3390/info11110513 - 02 Nov 2020
Cited by 4 | Viewed by 2613
Abstract
The social norms approach is an established technique to bring about behaviour change through challenging misperceptions of peer behaviour. This approach is limited by a reliance on self-report and a lack of interactivity with the target population. At the same time, excessive use of digital devices, known as digital addiction, has been recognized as an emergent issue. There is potential to apply the social norms approach to digital addiction and, in doing so, address some of the limitations of the social norms field. In this study, we trialled a social norms intervention with a sample of smartphone users (n = 94) recruited from the users of a commercial app designed to empower individuals to reduce their device usage. Our results indicate that most of the sample overestimated peer use of smartphone apps, demonstrating the existence of misperceptions relating to smartphone use. Such misperceptions are the basis for the social norms approach. We also document the discrepancy between self-report and smartphone usage data as recorded through data collected directly from the device. The potential for the application of the social norms approach and directions for future research are discussed. Full article
(This article belongs to the Special Issue Interactive e-Health Interventions for Digital Addiction)
18 pages, 1224 KiB  
Article
Making the Case for a P2P Personal Health Record
by William Connor Horne and Zina Ben Miled
Information 2020, 11(11), 512; https://0-doi-org.brum.beds.ac.uk/10.3390/info11110512 - 31 Oct 2020
Cited by 2 | Viewed by 3295
Abstract
Improved health care services can benefit from a more seamless exchange of medical information between patients and health care providers. This exchange is especially important considering the increasing trends in mobility, comorbidity and outbreaks. However, current Electronic Health Records (EHR) tend to be institution-centric, often leaving the medical information of the patient fragmented and more importantly inaccessible to the patient for sharing with other health providers in a timely manner. Nearly a decade ago, several client–server models for personal health records (PHR) were proposed. The aim of these previous PHRs was to address data fragmentation issues. However, these models were not widely adopted by patients. This paper discusses the need for a new PHR model that can enhance the patient experience by making medical services more accessible. The aims of the proposed model are to (1) help patients maintain a complete lifelong health record, (2) facilitate timely communication and data sharing with health care providers from multiple institutions and (3) promote integration with advanced third-party services (e.g., risk prediction for chronic diseases) that require access to the patient’s health data. The proposed model is based on a Peer-to-Peer (P2P) network as opposed to the client–server architecture of the previous PHR models. This architecture consists of a central index server that manages the network and acts as a mediator, a peer client for patients and providers that allows them to manage health records and connect to the network, and a service client that enables third-party providers to offer services to the patients. This distributed architecture is essential since it promotes ownership of the health record by the patient instead of the health care institution. Moreover, it allows the patient to subscribe to an extended range of personalized e-health services. Full article
(This article belongs to the Section Information Applications)
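The three-component architecture described above (an index server acting as mediator, with peer clients that own their records) can be caricatured in a few lines. Class names, addresses, and the message flow below are illustrative assumptions, not the paper's protocol:

```python
class IndexServer:
    """Minimal sketch of the model's central index server: it only
    registers peers and answers lookups; the health records themselves
    stay on the patients' and providers' own peer clients."""
    def __init__(self):
        self.registry = {}  # peer_id -> network address

    def register(self, peer_id, address):
        self.registry[peer_id] = address

    def lookup(self, peer_id):
        # The requester then connects to the peer directly (P2P).
        return self.registry.get(peer_id)

class PeerClient:
    """A patient's or provider's node; it owns its records locally."""
    def __init__(self, peer_id, address, index):
        self.peer_id = peer_id
        self.records = []
        index.register(peer_id, address)

    def share_with(self, index, other_peer_id):
        addr = index.lookup(other_peer_id)
        return f"would send records directly to {addr}" if addr else None

index = IndexServer()
patient = PeerClient("patient-42", "10.0.0.5:9000", index)
clinic = PeerClient("clinic-7", "10.0.0.9:9000", index)
print(patient.share_with(index, "clinic-7"))
```

The design point this toy makes is the one the abstract argues: the index server never holds health data, so record ownership stays with the patient's peer rather than with any institution's server.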