
Future Internet, Volume 13, Issue 5 (May 2021) – 34 articles

Cover Story: This article details the design, development, and assessment of a wildlife monitoring application for IoT animal-repelling devices that can cover large areas thanks to low-power wide-area network (LPWAN) technology, which bridges the gap between cellular and short-range wireless technologies. The article first presents a detailed survey of LPWANs for smart agriculture and wildlife monitoring applications. It then evaluates the performance of LoRa (the global de facto LPWAN transmission technology), operating in the 433 and 868 MHz bands, for wildlife monitoring in a forest vegetation area. The findings show that achievable performance can vary greatly between the 433 and 868 MHz bands, and caution is required when taking numbers at face value, as this can have implications for IoT applications.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; PDF is the official version. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
Article
Generating Synthetic Training Data for Supervised De-Identification of Electronic Health Records
Future Internet 2021, 13(5), 136; https://0-doi-org.brum.beds.ac.uk/10.3390/fi13050136 - 20 May 2021
Viewed by 686
Abstract
A major hurdle in the development of natural language processing (NLP) methods for Electronic Health Records (EHRs) is the lack of large, annotated datasets. Privacy concerns prevent the distribution of EHRs, and the annotation of data is known to be costly and cumbersome. Synthetic data presents a promising solution to the privacy concern, if synthetic data has comparable utility to real data and if it preserves the privacy of patients. However, the generation of synthetic text alone is not useful for NLP because of the lack of annotations. In this work, we propose the use of neural language models (LSTM and GPT-2) for generating artificial EHR text jointly with annotations for named-entity recognition. Our experiments show that artificial documents can be used to train a supervised named-entity recognition model for de-identification, which outperforms a state-of-the-art rule-based baseline. Moreover, we show that combining real data with synthetic data improves the recall of the method, without manual annotation effort. We conduct a user study to gain insights on the privacy of artificial text. We highlight privacy risks associated with language models to inform future research on privacy-preserving automated text generation and metrics for evaluating privacy-preservation during text generation. Full article
(This article belongs to the Special Issue Natural Language Engineering: Methods, Tasks and Applications)
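The joint generation of text and annotations described in the abstract can be illustrated with a minimal, template-based sketch; the paper instead samples such annotated text from trained LSTM/GPT-2 language models, and the templates, names, and dates below are purely illustrative:

```python
import random

# Hypothetical templates with inline entity placeholders; the paper instead
# samples annotated text from a trained LSTM/GPT-2 language model.
TEMPLATES = [
    "Patient {NAME} was admitted on {DATE} complaining of chest pain.",
    "{NAME} was discharged on {DATE} in stable condition.",
]
FILLERS = {"NAME": ["Jane Roe", "John Doe"], "DATE": ["2021-05-01", "2021-05-12"]}

def generate_annotated(rng: random.Random):
    """Return synthetic text together with character-span NER annotations."""
    template = rng.choice(TEMPLATES)
    text, spans = "", []
    # Split the template so placeholders become standalone parts.
    for part in template.replace("{", "\x00{").replace("}", "}\x00").split("\x00"):
        if part.startswith("{") and part.endswith("}"):
            label = part[1:-1]
            value = rng.choice(FILLERS[label])
            spans.append((len(text), len(text) + len(value), label))  # record span
            text += value
        else:
            text += part
    return text, spans

doc, spans = generate_annotated(random.Random(0))
```

Because the annotations are produced jointly with the text, the synthetic documents can be fed directly into a supervised named-entity recognizer without manual labeling.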
Show Figures

Figure 1

Article
Cognitive Profiling of Nodes in 6G through Multiplex Social Network and Evolutionary Collective Dynamics
Future Internet 2021, 13(5), 135; https://0-doi-org.brum.beds.ac.uk/10.3390/fi13050135 - 20 May 2021
Viewed by 475
Abstract
Complex systems are fully described by the connectedness of their elements, studying how these elements develop collective behavior by interacting with each other according to their inner features, and by the structure and dynamics of the entire system. The forthcoming 6G will attempt to rewrite the communication networks' perspective, focusing on a radical revolution in the way entities and technologies are conceived, integrated, and used. This will lead to innovative approaches aimed at providing new directions for the network challenges posed by the upcoming 6G; complex systems could thus become an enabling set of tools and methods for designing a self-organized, resilient, and cognitive network suitable for many application fields, such as digital health or smart city living scenarios. Here, we propose a complex profiling approach for the heterogeneous nodes of the network. It incorporates the multiplex social network as a mathematical representation that captures multiple types of interaction; the collective dynamics of diffusion and competition, through social contagion and evolutionary game theory; and the mesoscale organization in communities to drive learning and cognition. We detail the step-by-step modeling framework and apply it to a real dataset, demonstrating how the proposed model detects deeply complex, knowable roles of nodes. Full article
(This article belongs to the Section Techno-Social Smart Systems)

Article
Comparative Study of Distributed Consensus Gossip Algorithms for Network Size Estimation in Multi-Agent Systems
Future Internet 2021, 13(5), 134; https://0-doi-org.brum.beds.ac.uk/10.3390/fi13050134 - 18 May 2021
Viewed by 621
Abstract
Determining the network size is a critical process in numerous areas (e.g., computer science, logistics, epidemiology, social networking services, mathematical modeling, demography, etc.). However, many modern real-world systems are so extensive that measuring their size poses a serious challenge. Therefore, algorithms for determining or estimating this parameter effectively have been gaining popularity over the past decades. In this paper, we analyze five frequently applied distributed consensus gossip-based algorithms for network size estimation in multi-agent systems (namely, the Randomized gossip algorithm, the Geographic gossip algorithm, the Broadcast gossip algorithm, the Push-Sum protocol, and the Push-Pull protocol). We examine the performance of these algorithms with bounded execution over random geometric graphs by applying two metrics: the number of sent messages required for consensus achievement and the estimation precision, quantified as the median deviation from the real value of the network size. The experimental part consists of two scenarios: consensus achievement is conditioned by either the values of the inner states or the network size estimates. In both scenarios, either the best-connected or the worst-connected agent is chosen as the leader. The goal of this paper is to identify whether all the examined algorithms are applicable to estimating the network size, which algorithm provides the best performance, how the leader selection affects the performance of the algorithms, and how to most effectively configure the applied stopping criterion. Full article
(This article belongs to the Special Issue Modern Trends in Multi-Agent Systems)
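Of the five algorithms compared, the Push-Sum protocol is easy to sketch. In this minimal version, gossip runs over a complete graph rather than the paper's random geometric graphs, and the leader and round settings are illustrative:

```python
import random

def push_sum_network_size(n: int, rounds: int = 100, seed: int = 0):
    """Push-Sum size estimation: the leader starts with mass s=1, every node
    with weight w=1. Each node's ratio s/w converges to 1/n, so each node
    estimates the network size as w/s. For simplicity this sketch gossips
    over a complete graph; the paper evaluates random geometric graphs."""
    rng = random.Random(seed)
    s = [1.0] + [0.0] * (n - 1)   # only the leader holds the "mass"
    w = [1.0] * n
    for _ in range(rounds):
        new_s, new_w = [0.0] * n, [0.0] * n
        for i in range(n):
            j = rng.randrange(n)          # random gossip partner
            for k in (i, j):              # push half to self, half to partner
                new_s[k] += s[i] / 2
                new_w[k] += w[i] / 2
        s, w = new_s, new_w
    return [w[i] / s[i] for i in range(n)]  # per-node estimates of n

estimates = push_sum_network_size(20)
```

Mass conservation guarantees that the sums of `s` and `w` never change, which is why the per-node ratios converge to the true average 1/n.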

Article
Blockchain Implementation Method for Interoperability between CBDCs
Future Internet 2021, 13(5), 133; https://0-doi-org.brum.beds.ac.uk/10.3390/fi13050133 - 18 May 2021
Viewed by 612
Abstract
Central Bank Digital Currency (CBDC) is a digital currency issued by a central bank. Motivated by the financial crisis and the prospect of a cashless society, many countries are researching CBDCs; the central bank of the Bahamas, for example, has released the Sand Dollar. Recently, global consideration has been given to paying basic income to avoid shrinking consumer sentiment and recession due to epidemics, and CBDC is coming into the spotlight as a way to manage national public finance policy comprehensively. Each country's central bank must also consider situations in which CBDCs are exchanged. CBDC transactions are open data: each transaction registers the central bank's CBDC exchange information in the blockchain, and open data on currency exchange between countries will reveal the flow of money between them. This paper proposes a blockchain system and management method, based on the ISO/IEC 11179 metadata registry, for exchange between CBDCs that records transactions between registered CBDCs. Each country's CBDC will have a different implementation and time of publication. We implement the blockchain system, experiment with the operation method, and measure the block generation time of blockchains using the proposed method. Full article
(This article belongs to the Special Issue Open Data and Artificial Intelligence)
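The core bookkeeping of a blockchain that records exchanges between registered CBDCs can be sketched as a hash-linked list of exchange records; the field names below are illustrative and do not follow the paper's ISO/IEC 11179-based schema:

```python
import hashlib
import json
import time

def make_block(prev_hash: str, exchange: dict) -> dict:
    """Append-only record of one CBDC-to-CBDC exchange, linked by hash.
    The exchange fields are illustrative; the paper registers exchange
    metadata following the ISO/IEC 11179 metadata registry."""
    block = {
        "prev_hash": prev_hash,
        "timestamp": time.time(),
        "exchange": exchange,  # e.g. source CBDC, target CBDC, amount
    }
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

# Build a two-block chain of cross-CBDC exchanges.
chain = [make_block("0" * 64, {"from": "CBDC-A", "to": "CBDC-B", "amount": 100})]
chain.append(make_block(chain[-1]["hash"], {"from": "CBDC-B", "to": "CBDC-A", "amount": 40}))
```

Timing how long `make_block` takes under load is essentially the block-generation-time measurement the abstract mentions.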

Review
Trust, but Verify: Informed Consent, AI Technologies, and Public Health Emergencies
Future Internet 2021, 13(5), 132; https://0-doi-org.brum.beds.ac.uk/10.3390/fi13050132 - 18 May 2021
Viewed by 688
Abstract
To use technology or engage with research or medical treatment typically requires user consent: agreeing to terms of use with technology or services, or providing informed consent for research participation, for clinical trials and medical intervention, or as one legal basis for processing personal data. Introducing AI technologies, where explainability and trustworthiness are focus items for both government guidelines and responsible technologists, imposes additional challenges. Understanding enough of the technology to be able to make an informed decision, or consent, is essential but involves an acceptance of uncertain outcomes. Further, the contribution of AI-enabled technologies, not least during the COVID-19 pandemic, raises ethical concerns about the governance associated with their development and deployment. Using three typical scenarios (contact tracing, big data analytics, and research during public emergencies), this paper explores a trust-based alternative to consent. Unlike existing consent-based mechanisms, this approach sees consent as a typical behavioural response to perceived contextual characteristics. Decisions to engage derive from the assumption that all relevant stakeholders, including research participants, will negotiate on an ongoing basis. Accepting dynamic negotiation between the main stakeholders, as proposed here, introduces a specifically socio-psychological perspective into the debate about human responses to artificial intelligence. This trust-based consent process leads to a set of recommendations for the ethical use of advanced technologies as well as for the ethical review of applied research projects. Full article

Article
A Hierarchical Cache Size Allocation Scheme Based on Content Dissemination in Information-Centric Networks
Future Internet 2021, 13(5), 131; https://0-doi-org.brum.beds.ac.uk/10.3390/fi13050131 - 15 May 2021
Viewed by 543
Abstract
With the rapid growth of mass content retrieval on the Internet, the Information-Centric Network (ICN) has become one of the hotspots in the field of future network architectures. The in-network cache is an important feature of ICN. For better network performance, the cache size on each node should be allocated in proportion to its importance. However, in some current studies, the importance of cache nodes is determined only by their location in the network topology, ignoring their roles in the actual content transmission process. In this paper, we focus on the allocation of cache size to each node within a given total cache space budget. We explore the impact of heterogeneous cache allocation on content dissemination under the same ICN infrastructure, and we quantify the importance of nodes in terms of both content dissemination and network topology. To this end, we implement a hierarchy partitioning method based on content dissemination, formulate weight calculation methods for these hierarchies, and provide a per-node allocation that distributes the total cache space budget across the network. The performance of the scheme is evaluated on the Garr topology; comparisons of average hit ratio, latency, and load show that the proposed scheme outperforms other schemes in these respects. Full article
(This article belongs to the Section Network Virtualization and Edge/Fog Computing)
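The basic idea of splitting a total cache budget in proportion to node importance can be sketched as follows; the node names and weights are illustrative stand-ins for the paper's hierarchy-based weight calculation:

```python
def allocate_cache(total_budget: int, weights: dict) -> dict:
    """Split a total cache budget across nodes in proportion to their
    importance weights; a simplified stand-in for the paper's
    hierarchy-based weight calculation."""
    total_w = sum(weights.values())
    alloc = {node: int(total_budget * w / total_w) for node, w in weights.items()}
    # Hand any integer-rounding remainder to the most important node.
    top = max(weights, key=weights.get)
    alloc[top] += total_budget - sum(alloc.values())
    return alloc

# Illustrative importance weights for three cache nodes.
weights = {"edge-1": 1.0, "edge-2": 1.5, "core-1": 3.5}
alloc = allocate_cache(1000, weights)
```

The paper's contribution is in how the weights are computed (from content dissemination, not just topology); the allocation step itself is this simple proportional split.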

Article
A Digital Currency Architecture for Privacy and Owner-Custodianship
Future Internet 2021, 13(5), 130; https://0-doi-org.brum.beds.ac.uk/10.3390/fi13050130 - 14 May 2021
Viewed by 625
Abstract
In recent years, electronic retail payment mechanisms, especially e-commerce and card payments at the point of sale, have increasingly replaced cash in many developed countries. As a result, societies are losing a critical public retail payment option, and retail consumers are losing important rights associated with using cash. To address this concern, we propose an approach to digital currency that would allow people without banking relationships to transact electronically and privately, including both e-commerce purchases and point-of-sale purchases that are required to be cashless. Our proposal introduces a government-backed, privately-operated digital currency infrastructure to ensure that every transaction is registered by a bank or money services business, and it relies upon non-custodial wallets backed by privacy-enhancing technology, such as blind signatures or zero-knowledge proofs, to ensure that transaction counterparties are not revealed. Our approach to digital currency can also facilitate more efficient and transparent clearing, settlement, and management of systemic risk. We argue that our system can restore and preserve the salient features of cash, including privacy, owner-custodianship, fungibility, and accessibility, while also preserving fractional reserve banking and the existing two-tiered banking system. We also show that it is possible to introduce regulation of digital currency transactions involving non-custodial wallets that unconditionally protect the privacy of end-users. Full article
(This article belongs to the Special Issue Blockchain Security and Privacy)
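Blind signatures, one of the privacy-enhancing technologies the paper relies on for non-custodial wallets, can be demonstrated with textbook RSA and toy parameters; a real system would use a vetted cryptographic library and full-size keys:

```python
# Textbook RSA blind signature with toy parameters (p=61, q=53).
n, e, d = 3233, 17, 2753           # public modulus/exponent, private exponent
m = 65                             # message (e.g. a hash of a coin serial)
r = 7                              # blinding factor, coprime to n

blinded = (m * pow(r, e, n)) % n   # user blinds the message with r^e
s_blind = pow(blinded, d, n)       # signer signs without ever seeing m
s = (s_blind * pow(r, -1, n)) % n  # user removes r, leaving m^d mod n
assert pow(s, e, n) == m           # anyone can verify: s^e = m (mod n)
```

The signer learns nothing about `m`, yet the unblinded signature verifies against the signer's public key, which is exactly the property needed to register a transaction with a bank without revealing the counterparties.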

Review
Hashtag Recommendation Methods for Twitter and Sina Weibo: A Review
Future Internet 2021, 13(5), 129; https://0-doi-org.brum.beds.ac.uk/10.3390/fi13050129 - 14 May 2021
Viewed by 541
Abstract
Hashtag recommendation suggests hashtags to users while they write microblogs on social media platforms. Although researchers have investigated various methods and factors that affect the performance of hashtag recommendation on Twitter and Sina Weibo, a systematic review of these methods is lacking. The objectives of this study are to present a comprehensive overview of research on hashtag recommendation for tweets and to distill insights from previous research papers. We searched CiteSeer, IEEE Xplore, Springer, and the ACM digital libraries for articles related to our research published between 2010 and 2020. From the 61 articles included in this study, we observe that most research focuses on the textual content of tweets rather than other data, and that collaborative filtering methods are seldom used on their own in hashtag recommendation. From this perspective, we present a taxonomy of hashtag recommendation based on the research methodologies that have been used and provide a critical review of each class in the taxonomy. We also discuss the challenges remaining in the field and outline future research directions in this area of study. Full article
(This article belongs to the Special Issue Social Networks Analysis and Mining)

Article
Research on Task-Oriented Computation Offloading Decision in Space-Air-Ground Integrated Network
Future Internet 2021, 13(5), 128; https://0-doi-org.brum.beds.ac.uk/10.3390/fi13050128 - 13 May 2021
Viewed by 440
Abstract
In Space–Air–Ground Integrated Networks (SAGIN), computation offloading is a new way to improve the processing efficiency of node tasks and to ease the limitations of computing and storage resources. To reduce the large delay and energy-consumption costs of task computation offloading, which are caused by the complex and variable network offloading environment and the large volume of offloading tasks, a computation offloading decision scheme based on Markov models and Deep Q-Networks (DQN) is proposed. First, we select the optimal offloading network based on the characteristics of the movement of the task offloading process in the network. Then, the task offloading process is transformed into a Markov state transition process to build a model of the computation offloading decision process. Finally, delay and energy-consumption weights are introduced into the DQN algorithm to update the computation offloading decision process, and the optimal low-cost offloading decision is achieved according to the task attributes. The simulation results show that, compared with the traditional Lyapunov-based offloading decision scheme and the classical Q-learning algorithm, the delay and energy consumption are reduced by 68.33% and 11.21%, respectively, under equal weights when the offloading task volume exceeds 500 Mbit. Moreover, compared with offloading to edge nodes or backbone nodes of the network alone, the proposed mixed offloading model can satisfy more than 100 task requests with low energy consumption and low delay. The computation offloading decision proposed in this paper thus effectively reduces the delay and energy consumption of task computation offloading in the SAGIN environment and can select the optimal offloading sites to execute tasks according to the characteristics of the task itself. Full article
(This article belongs to the Section Big Data and Augmented Intelligence)
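The weighted delay/energy cost at the heart of the offloading decision can be sketched with a tabular stand-in for the paper's DQN; the per-node delay and energy figures below are illustrative:

```python
import random

# Candidate offloading targets with illustrative (delay, energy) costs.
NODES = {"edge": (0.2, 0.5), "backbone": (0.6, 0.2), "satellite": (1.0, 0.1)}
W_DELAY, W_ENERGY = 0.5, 0.5      # equal weights, as in the paper's experiment

def cost(node: str) -> float:
    """Weighted delay/energy cost of offloading to a node."""
    delay, energy = NODES[node]
    return W_DELAY * delay + W_ENERGY * energy

def train(episodes: int = 2000, alpha: float = 0.1, eps: float = 0.1, seed: int = 0):
    """Epsilon-greedy tabular learning of per-node costs; a toy stand-in
    for the paper's DQN over a Markov offloading model."""
    rng = random.Random(seed)
    q = {node: 0.0 for node in NODES}
    for _ in range(episodes):
        node = (rng.choice(list(NODES)) if rng.random() < eps
                else min(q, key=q.get))            # exploit: lowest learned cost
        q[node] += alpha * (cost(node) - q[node])  # single-step cost update
    return min(q, key=q.get)  # learned offloading choice

best = train()
```

With these numbers the weighted cost favors the edge node (0.35 vs. 0.40 and 0.55); changing `W_DELAY`/`W_ENERGY` shifts the decision, which is the trade-off the weights in the DQN encode.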

Article
User Acceptance of Smart Watch for Medical Purposes: An Empirical Study
Future Internet 2021, 13(5), 127; https://0-doi-org.brum.beds.ac.uk/10.3390/fi13050127 - 12 May 2021
Viewed by 770
Abstract
This study investigates the variables that most effectively drive use of the smartwatch (SW) in a medical environment. To achieve this aim, the study uses an integrated research model that combines constructs from the well-established Technology Acceptance Model (TAM) with two additional features critical to the effectiveness of SWs: content richness and personal innovativeness. TAM is used to detect the determinants affecting the adoption of SWs. The study relies on an online questionnaire composed of 20 items, distributed among a group of doctors, nurses, and administrative staff in medical centers within the UAE; the total number of respondents is 325. The collected data were used to test the study model and the proposed constructs and hypotheses using the SmartPLS software. The results show that the main constructs in the model contribute differently to the acceptance of SWs: content richness and innovativeness are critical factors that enrich the user's perceived usefulness, and perceived ease of use significantly predicts both perceived usefulness and behavioral intention. Overall, the findings suggest that SWs are in high demand in the medical field, serve as a common channel between doctors and their patients, and facilitate the transmission of information among users. The outcomes indicate the importance of certain external factors for the acceptance of the technology. The genuine value of this study lies in its conceptual framework, which emphasizes the close relationship between the TAM constructs of perceived usefulness and perceived ease of use and the constructs of content richness and innovativeness. Finally, this study helps us recognize the underlying motives for using SWs in a medical environment, where the main motive is to enhance and facilitate the effective roles of doctors and patients. Full article
(This article belongs to the Special Issue The Future Internet of Medical Things)

Article
Reducing Videoconferencing Fatigue through Facial Emotion Recognition
Future Internet 2021, 13(5), 126; https://0-doi-org.brum.beds.ac.uk/10.3390/fi13050126 - 12 May 2021
Viewed by 603
Abstract
In the last 14 months, COVID-19 made face-to-face meetings impossible and this has led to rapid growth in videoconferencing. As highly social creatures, humans strive for direct interpersonal interaction, which means that in most of these video meetings the webcam is switched on and people are “looking each other in the eyes”. However, it is far from clear what the psychological consequences of this shift to virtual face-to-face communication are and if there are methods to alleviate “videoconferencing fatigue”. We have studied the influence of emotions of meeting participants on the perceived outcome of video meetings. Our experimental setting consisted of 35 participants collaborating in eight teams over Zoom in a one semester course on Collaborative Innovation Networks in bi-weekly video meetings, where each team presented its progress. Emotion was tracked through Zoom face video snapshots using facial emotion recognition that recognized six emotions (happy, sad, fear, anger, neutral, and surprise). Our dependent variable was a score given after each presentation by all participants except the presenter. We found that the happier the speaker is, the happier and less neutral the audience is. More importantly, we found that the presentations that triggered wide swings in “fear” and “joy” among the participants are correlated with a higher rating. Our findings provide valuable input for online video presenters on how to conduct better and less tiring meetings; this will lead to a decrease in “videoconferencing fatigue”. Full article
(This article belongs to the Special Issue Social Networks Analysis and Mining)
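The paper's key measurement, relating emotion "swings" during a presentation to the audience rating, can be sketched as the variance of per-frame emotion probabilities followed by a Pearson correlation; the series and ratings below are illustrative:

```python
from statistics import mean, pvariance

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# Illustrative data: per-presentation "fear" probabilities over time
# (as a facial-emotion classifier would output) and the audience rating.
fear_series = [
    [0.1, 0.1, 0.1, 0.1],   # flat emotion
    [0.1, 0.6, 0.1, 0.5],   # wide swings
    [0.2, 0.3, 0.2, 0.3],   # mild swings
]
ratings = [3.0, 4.5, 3.5]

swings = [pvariance(series) for series in fear_series]  # "swing" = variance
r = pearson(swings, ratings)
```

With these illustrative numbers the presentations with wider swings get higher ratings, yielding a strongly positive correlation, which mirrors the paper's finding for "fear" and "joy".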

Article
Towards Practical Applications in Modeling Blockchain System
Future Internet 2021, 13(5), 125; https://0-doi-org.brum.beds.ac.uk/10.3390/fi13050125 - 12 May 2021
Viewed by 672
Abstract
Like multiservice networks, blockchain technology is currently experiencing significant development because of its decentralization and its ability to organize secure, seamless, and reliable data exchange and storage. Given the significant demand for the technology, there is a need to analyze the impact of blockchain processes on network characteristics, in order to predict traffic behavior and ensure the required quality indicators, as well as their impact on the stability of public communication network elements. Conducting a full-scale experiment is a time-consuming task that cannot always be accomplished, so in this paper we consider approaches to modeling these systems and, as an example, use a simulation system to assess the performance of the network and its elements. Full article
(This article belongs to the Special Issue The Next Blockchain Wave Current Challenges and Future Prospects)

Article
Development of Knowledge Graph for Data Management Related to Flooding Disasters Using Open Data
Future Internet 2021, 13(5), 124; https://0-doi-org.brum.beds.ac.uk/10.3390/fi13050124 - 11 May 2021
Viewed by 732
Abstract
Despite the development of various technologies and systems that use artificial intelligence (AI) to solve problems related to disasters, difficult challenges remain. Data are the foundation for solving diverse disaster problems with AI, big data analysis, and so on; therefore, we must focus on these various data. Disaster data are divided into domains by disaster type, are heterogeneous, and lack interoperability. In particular, open data related to disasters raise several issues: the source and format of the data differ because the data are collected by different organizations, and the vocabularies used in each domain are inconsistent. This study proposes a knowledge graph to resolve the heterogeneity among various disaster data and to provide interoperability among domains. Among disaster domains, we describe the knowledge graph for flooding disasters using Korean open datasets and cross-domain knowledge graphs. Furthermore, the proposed knowledge graph is used to assist in solving and managing disaster problems. Full article
(This article belongs to the Special Issue Open Data and Artificial Intelligence)
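The interoperability idea can be sketched with a minimal triple store that links entities across domains; the predicates and resource names below are illustrative, not the paper's vocabulary:

```python
# Minimal triple representation linking open datasets across domains
# (flood events, weather observations, shelters, geography).
triples = {
    ("flood:Event2020", "hasLocation", "geo:HanRiverBasin"),
    ("flood:Event2020", "hasRainfall", "weather:Obs-0731"),
    ("shelter:S-17", "locatedIn", "geo:HanRiverBasin"),
}

def query(triples, predicate, obj):
    """All subjects related to `obj` through `predicate`."""
    return {s for s, p, o in triples if p == predicate and o == obj}

# Cross-domain question: which entities are tied to the flooded basin?
in_basin = (query(triples, "hasLocation", "geo:HanRiverBasin")
            | query(triples, "locatedIn", "geo:HanRiverBasin"))
```

Because both the flood event and the shelter point at the same geographic resource, a single query joins datasets that were published by different organizations, which is the interoperability the knowledge graph provides.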

Article
A Collaborative Merging Strategy with Lane Changing in Multilane Freeway On-Ramp Area with V2X Network
Future Internet 2021, 13(5), 123; https://0-doi-org.brum.beds.ac.uk/10.3390/fi13050123 - 10 May 2021
Viewed by 478
Abstract
The merging area of a freeway has a high incidence of traffic accidents. With the development of connected and automated vehicles (CAVs) and V2X technology, the traffic efficiency of freeway ramp areas has been significantly improved. However, current research mostly focuses on merging between a single mainline lane and a ramp; multilane cases are rarely considered. In this paper, we present a collaborative merging model with a rule-based lane-changing strategy in a V2X environment. First, the vehicle selects an appropriate gap to change lanes safely without affecting other vehicles. Second, we establish a linear time-discrete model to optimize vehicle trajectories in real time. Finally, the proposed model and strategy are implemented in SUMO and Python. The simulation results show that the merging model based on the proposed lane-changing strategy performs well in terms of the number of stops, average delay, and average speed. Full article
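A rule-based gap-acceptance check of the kind the strategy describes might look like the following; the headway and minimum-gap thresholds are illustrative, not the paper's calibrated values:

```python
def gap_is_safe(gap_front: float, gap_rear: float, dv_front: float,
                dv_rear: float, t_safe: float = 1.5, g_min: float = 5.0) -> bool:
    """Rule-based check that a merging/lane-changing vehicle fits a gap:
    each gap (meters) must exceed a minimum plus a time-headway term that
    grows with the closing speed (m/s). Thresholds are illustrative."""
    ok_front = gap_front > g_min + t_safe * max(dv_front, 0.0)  # dv>0: closing in
    ok_rear = gap_rear > g_min + t_safe * max(dv_rear, 0.0)
    return ok_front and ok_rear

# 30 m ahead while closing at 2 m/s, 20 m behind with the follower 1 m/s faster.
accept = gap_is_safe(30.0, 20.0, 2.0, 1.0)
```

Only after a gap passes a check like this would the trajectory-optimization step plan the actual merge, which keeps the lane change from disturbing surrounding vehicles.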

Article
An Income Model Using Historical Data, Power-Law Distributions and Monte Carlo Method for University Technology Transfer Offices
Future Internet 2021, 13(5), 122; https://0-doi-org.brum.beds.ac.uk/10.3390/fi13050122 - 06 May 2021
Viewed by 547
Abstract
Engineering education pushes the creation of new technology to solve community problems. The process of technology transfer promotes educational innovation in universities, a vital process that can improve citizens' quality of life in cities and rural communities. As a result, university technology transfer offices (TTOs) have to create strategies that motivate students and researchers to generate technology. Thus, a primary challenge that TTOs face is to know and communicate their income potential compared to their much more predictable and limited expense budgets. Institutional budgeting for a TTO's growth would be simplified if the office were on a solid financial footing, i.e., breaking even or making a financial return. Many offices assume that income is unpredictable: that it is a lottery, that it comes down to luck, that more irons in the fire improve the odds of hitting a winner, and so on. These common assumptions provide only a vague insight into how to move an intellectual property (IP) portfolio strategy forward. How can a TTO be assessed for quantitative value and not just be a cost center adding qualitative value? This paper illustrates the first steps in understanding how to project potential income against a much more predictable expense budget, which would allow universities to improve their technology transfer strategy and results. As a result, TTOs would operate under a more sustainable IP portfolio strategy, promote educational innovation in universities, and generate a more significant community impact. Full article
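The paper's premise, that licensing income follows a power law and can be projected by simulation, can be sketched with Monte Carlo sampling from a Pareto distribution; all parameters below are illustrative and not calibrated to any historical TTO data:

```python
import random

def simulate_portfolio_income(n_licenses: int, trials: int = 10000,
                              alpha: float = 1.16, x_min: float = 1.0,
                              seed: int = 0):
    """Monte Carlo estimate of TTO licensing income when per-license income
    follows a power law (Pareto): a few 'hits' dominate the total.
    alpha of about 1.16 gives an 80/20-like distribution; all parameters
    are illustrative."""
    rng = random.Random(seed)
    totals = []
    for _ in range(trials):
        # Inverse-transform sampling from Pareto(alpha, x_min).
        total = sum(x_min / (1.0 - rng.random()) ** (1.0 / alpha)
                    for _ in range(n_licenses))
        totals.append(total)
    totals.sort()
    return totals[len(totals) // 2], totals[int(trials * 0.9)]  # median, 90th pct

median, p90 = simulate_portfolio_income(20)
```

Reporting the median alongside a high percentile, instead of a single point estimate, is what lets a TTO present income potential against a fixed expense budget without pretending the income is predictable.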
Article
Exploiting Machine Learning for Improving In-Memory Execution of Data-Intensive Workflows on Parallel Machines
Future Internet 2021, 13(5), 121; https://0-doi-org.brum.beds.ac.uk/10.3390/fi13050121 - 05 May 2021
Viewed by 628
Abstract
Workflows are largely used to orchestrate complex sets of operations required to handle and process huge amounts of data. Parallel processing is often vital to reduce execution time when complex data-intensive workflows must be run efficiently, and at the same time, in-memory processing [...] Read more.
Workflows are widely used to orchestrate the complex sets of operations required to handle and process huge amounts of data. Parallel processing is often vital to reduce execution time when complex data-intensive workflows must be run efficiently, and at the same time, in-memory processing can bring important benefits that accelerate execution. However, optimization techniques are necessary to fully exploit in-memory processing and avoid performance drops due to memory saturation events. This paper proposes a novel solution, called the Intelligent In-memory Workflow Manager (IIWM), for optimizing the in-memory execution of data-intensive workflows on parallel machines. IIWM is based on two complementary strategies: (1) a machine learning strategy for predicting the memory occupancy and execution time of workflow tasks; (2) a scheduling strategy that allocates tasks to a computing node, taking into account the (predicted) memory occupancy and execution time of each task and the memory available on that node. The effectiveness of the machine learning-based predictor and the scheduling strategy was demonstrated experimentally using Spark as a testbed, a high-performance Big Data processing framework that exploits in-memory computing to speed up the execution of large-scale applications. In particular, two synthetic workflows were prepared for testing the robustness of the IIWM in scenarios characterized by a high level of parallelism and a limited amount of memory reserved for execution. Furthermore, a real data analysis workflow was used as a case study to better assess the benefits of the proposed approach. Thanks to its high accuracy in predicting the resources used at runtime, the IIWM was able to avoid disk writes caused by memory saturation, outperforming a traditional strategy in which only dependencies among tasks are taken into account. Specifically, the IIWM achieved makespan reductions of up to 31% and 40%, and performance improvements of up to 1.45× and 1.66×, on the synthetic workflows and the real case study, respectively. Full article
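The interplay of the two IIWM strategies can be illustrated with a toy memory-aware scheduler: given per-task memory predictions (as strategy (1) would supply), it places tasks on nodes without exceeding their available memory. This is a minimal sketch of the general idea, not the paper's actual algorithm; the names and the best-fit-decreasing heuristic are assumptions.

```python
def schedule(tasks, nodes):
    """Greedy memory-aware task placement (sketch of the IIWM idea).

    tasks: list of (name, predicted_mem) pairs, e.g. from an ML predictor.
    nodes: dict mapping node name -> available memory.
    Tasks that fit nowhere are deferred to a later scheduling round,
    which is how memory-saturation-induced disk spills would be avoided.
    """
    free = dict(nodes)
    placement, deferred = {}, []
    # Place the most memory-hungry tasks first (best-fit decreasing)
    for name, mem in sorted(tasks, key=lambda t: -t[1]):
        candidates = [n for n, avail in free.items() if avail >= mem]
        if candidates:
            # Pick the node whose residual memory after placement is smallest
            best = min(candidates, key=lambda n: free[n] - mem)
            placement[name] = best
            free[best] -= mem
        else:
            deferred.append(name)
    return placement, deferred
```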
(This article belongs to the Special Issue The Future of Supercomputing)
Article
Analysis and Prediction of “AI + Education” Attention Based on Baidu Index—Taking Guizhou Province as an Example
Future Internet 2021, 13(5), 120; https://0-doi-org.brum.beds.ac.uk/10.3390/fi13050120 - 30 Apr 2021
Viewed by 770
Abstract
Studying the attention of “artificial intelligence + education” in ethnic areas is of great significance for China for promoting the integrated development of new educational modes and modern technology in the western region. Guizhou province is an area inhabited by ethnic minorities, located [...] Read more.
Studying the attention paid to “artificial intelligence + education” in ethnic areas is of great significance for China in promoting the integrated development of new educational modes and modern technology in the western region. Guizhou province, located in the heart of Southwest China, is an area inhabited by ethnic minorities, and the development of its intelligent education offers strong lessons for both the region and the whole country. Therefore, this paper selects the Baidu Index of “artificial intelligence (AI) + education” in Guizhou province from 2013 to 2020, analyzes the spatial–temporal characteristics of its network attention using the elastic coefficient method, and builds an ARIMA model on this basis to predict future development. The results show that the public’s attention to “AI + education” differs significantly in time and space. Then, according to the prediction results, this paper puts forward relevant suggestions for the country to promote the sustainable development of education in western ethnic areas. Full article
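The forecasting step rests on the autoregressive core of the ARIMA family. The toy AR(1) fit below is a stand-in sketch of that core, not the paper's fitted model (whose orders and Baidu Index data are not reproduced here):

```python
def ar1_forecast(series, steps=3):
    """Fit y_t = c + phi * y_{t-1} by least squares and forecast ahead.

    A toy stand-in for the AR component of an ARIMA model; a real
    application would also difference the series (the "I" part) and
    model moving-average terms (the "MA" part).
    """
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    phi = cov / var
    c = my - phi * mx
    out, last = [], series[-1]
    for _ in range(steps):
        last = c + phi * last  # iterate the fitted recurrence forward
        out.append(last)
    return out
```

On a perfectly linear series such as `[1, 2, 3, 4, 5]` the fitted recurrence simply continues the trend.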
Article
The Social and Transfer Massive Open Online Course: Post-Digital Learning
Future Internet 2021, 13(5), 119; https://0-doi-org.brum.beds.ac.uk/10.3390/fi13050119 - 30 Apr 2021
Viewed by 689
Abstract
This research provides a current view on post-digital learning experiences with a massive open online course (MOOC), in relation to user profiles, universal instructional design, digital resources, inclusive activities and collaborative assessment. The study is based on a mixed research methodology, creating a [...] Read more.
This research provides a current view of post-digital learning experiences with a massive open online course (MOOC) in relation to user profiles, universal instructional design, digital resources, inclusive activities and collaborative assessment. The study is based on a mixed research methodology, creating a questionnaire aimed at people with experience in any MOOC typology, in which the learning methodology, the instructional didactic design of the MOOCs, the resources, proposed activities, and accessibility are analyzed. Additionally, interviews and focus groups were carried out with the creators of massive open online social courses alongside the students of the official Master of Communication and Education on the Internet, offered by the UNED (Universidad Nacional de Educación a Distancia, Spain), on the subject of virtual participation scenarios. The data obtained are subjected to statistical tests to establish scientific rigor: Cronbach’s alpha, the Kolmogorov-Smirnov normality test, and the non-parametric tests of Spearman’s correlation coefficient and Kendall’s tau-b. In conclusion, the social massive open online course (sMOOC) and transfer massive open online course (tMOOC) models emerge as approaches projected through social networks. The sMOOC and tMOOC are online training models in constant development and evolution, offering a social, creative, collaborative, interactive, and inclusive learning methodology and new challenges for the digital distance education of the future. The research carried out relates only to the experiences of different people with the sMOOC and tMOOC. Full article
(This article belongs to the Special Issue Social Network and Sustainable Distance Education)
Review
Survey on Intelligence Edge Computing in 6G: Characteristics, Challenges, Potential Use Cases, and Market Drivers
Future Internet 2021, 13(5), 118; https://0-doi-org.brum.beds.ac.uk/10.3390/fi13050118 - 30 Apr 2021
Cited by 1 | Viewed by 812
Abstract
Intelligence Edge Computing (IEC) is the key enabler of emerging 5G technologies networks and beyond. IEC is considered to be a promising backbone of future services and wireless communication systems in 5G integration. In addition, IEC enables various use cases and applications, including [...] Read more.
Intelligence Edge Computing (IEC) is a key enabler of emerging 5G networks and beyond. IEC is considered to be a promising backbone of future services and wireless communication systems in 5G integration. In addition, IEC enables various use cases and applications, including autonomous vehicles, augmented and virtual reality, big data analytics, and other customer-oriented services. Moreover, it is one of the 5G technologies that most strongly enhances market drivers in fields such as customer service, healthcare, education, IoT in agriculture and energy sustainability. However, 5G technological improvements face many challenges, such as traffic volume, privacy, security, digitization capabilities, and required latency. Therefore, 6G is considered a promising technology for the future. To this end, compared to other surveys, this paper provides a comprehensive and inclusive overview of Intelligence Edge Computing (IEC) technologies in 6G, focusing on up-to-date characteristics, challenges, potential use cases and market drivers. Furthermore, we summarize research efforts on IEC in 5G from 2014 to 2021, highlighting the integration of IEC and 5G technologies. Finally, open research challenges and new future directions for IEC in 6G networks are discussed. Full article
Article
H2O: Secure Interactions in IoT via Behavioral Fingerprinting
Future Internet 2021, 13(5), 117; https://0-doi-org.brum.beds.ac.uk/10.3390/fi13050117 - 30 Apr 2021
Viewed by 442
Abstract
Sharing data and services in the Internet of Things (IoT) can give rise to significant security concerns with information being sensitive and vulnerable to attacks. In such an environment, objects can be either public resources or owned by humans. For this reason, the [...] Read more.
Sharing data and services in the Internet of Things (IoT) can give rise to significant security concerns, with information being sensitive and vulnerable to attacks. In such an environment, objects can be either public resources or owned by humans. For this reason, with the increasingly pervasive adoption of this paradigm, the need to monitor the reliability of all involved actors, both persons and smart objects, ensuring that they really are who they claim to be, is becoming an essential property of the IoT. In this paper, we tackle this problem by proposing a new framework, called H2O (Human to Object). Our solution is able to continuously authenticate an entity in the network, providing a reliability assessment mechanism based on behavioral fingerprinting. A detailed security analysis evaluates the robustness of the proposed protocol; furthermore, a performance analysis shows the feasibility of our approach. Full article
Article
A Framework for Mobile-Assisted Formative Assessment to Promote Students’ Self-Determination
Future Internet 2021, 13(5), 116; https://0-doi-org.brum.beds.ac.uk/10.3390/fi13050116 - 30 Apr 2021
Viewed by 525
Abstract
Motivation is an important issue to consider when designing learning activities, including mobile learning and assessment. While previous research provides evidence for the motivational impact of mobile learning, not many pedagogical frameworks exist for the design of mobile-assisted learning and assessment. The current [...] Read more.
Motivation is an important issue to consider when designing learning activities, including mobile learning and assessment. While previous research provides evidence for the motivational impact of mobile learning, not many pedagogical frameworks exist for the design of mobile-assisted learning and assessment. The current study is grounded in the Self-Determination Theory of motivation and proposes a pedagogical framework for mobile-assisted formative assessment, aiming at enhancing student motivation. For a preliminary evaluation of the framework, fifty-one students from a public European high school participated in a series of formative assessment activities. The tasks that were implemented according to the proposed mobile-based formative assessment framework had a significant positive impact on student perceived levels of autonomy, competence, and relatedness, enhancing students’ intrinsic motivation levels. Study findings highlighted the capacity of the proposed framework to guide the design of mobile-based formative assessment activities that enhance and promote student motivation. The study makes a theoretical contribution by proposing a framework that aligns mobile learning and assessment with elements of the Self-Determination Theory of motivation and also has a practical contribution by implementing mobile learning and assessment practices that have the potential to promote student motivation. Full article
(This article belongs to the Special Issue Technology Enhanced Learning and Mobile Learning)
Article
Experimental Evaluation of a LoRa Wildlife Monitoring Network in a Forest Vegetation Area
Future Internet 2021, 13(5), 115; https://0-doi-org.brum.beds.ac.uk/10.3390/fi13050115 - 29 Apr 2021
Viewed by 650
Abstract
Smart agriculture and wildlife monitoring are one of the recent trends of Internet of Things (IoT) applications, which are evolving in providing sustainable solutions from producers. This article details the design, development and assessment of a wildlife monitoring application for IoT animal repelling [...] Read more.
Smart agriculture and wildlife monitoring are among the recent trends of Internet of Things (IoT) applications, which are evolving to provide sustainable solutions for producers. This article details the design, development and assessment of a wildlife monitoring application for IoT animal-repelling devices that is able to cover large areas, thanks to low-power wide area networks (LPWAN), which bridge the gap between cellular technologies and short-range wireless technologies. LoRa, the global de-facto LPWAN, continues to attract attention given its open specification and the ready availability of off-the-shelf hardware, with claims of several kilometers of range in harsh, challenging environments. This article first presents a survey of LPWAN for smart agriculture applications. We then evaluate the performance of the LoRa transmission technology operating in the 433 MHz and 868 MHz bands, aimed at wildlife monitoring in a forest vegetation area. To characterize the communication link, we mainly use the signal-to-noise ratio (SNR), received signal strength indicator (RSSI) and packet delivery ratio (PDR). Findings from this study show that achievable performance can vary greatly between the 433 MHz and 868 MHz bands, and caution is required when taking numbers at face value, as this can have implications for IoT applications. In addition, our results show that the link reaches up to 860 m in highly dense forest vegetation, while in less dense forest vegetation it reaches up to 2050 m. Full article
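The three link metrics named above are simple aggregates over logged receptions. A minimal sketch (the tuple format of `received` is an assumption about how a field logger might record arrivals):

```python
def link_summary(packets_sent, received):
    """Summarise a LoRa link test from logged receptions.

    packets_sent: total packets transmitted during the test.
    received: list of (rssi_dbm, snr_db) tuples, one per packet that arrived.
    Returns the packet delivery ratio plus mean RSSI and SNR.
    """
    n = len(received)
    pdr = n / packets_sent if packets_sent else 0.0
    mean_rssi = sum(r for r, _ in received) / n if n else float("nan")
    mean_snr = sum(s for _, s in received) / n if n else float("nan")
    return {"pdr": pdr, "mean_rssi_dbm": mean_rssi, "mean_snr_db": mean_snr}
```

Computed per distance step and per band (433 vs 868 MHz), these summaries are exactly the kind of numbers the study compares.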
Article
Collecting a Large Scale Dataset for Classifying Fake News Tweets Using Weak Supervision
Future Internet 2021, 13(5), 114; https://0-doi-org.brum.beds.ac.uk/10.3390/fi13050114 - 29 Apr 2021
Cited by 2 | Viewed by 775
Abstract
The problem of automatic detection of fake news in social media, e.g., on Twitter, has recently drawn some attention. Although, from a technical perspective, it can be regarded as a straight-forward, binary classification problem, the major challenge is the collection of large enough [...] Read more.
The problem of automatic detection of fake news in social media, e.g., on Twitter, has recently drawn some attention. Although, from a technical perspective, it can be regarded as a straightforward, binary classification problem, the major challenge is the collection of large enough training corpora, since the manual annotation of tweets as fake or non-fake news is an expensive and tedious endeavor, and recent approaches utilizing distributional semantics require large training corpora. In this paper, we introduce an alternative approach for creating a large-scale dataset for tweet classification with minimal user intervention. The approach relies on weak supervision and automatically collects a large-scale, but very noisy, training dataset comprising hundreds of thousands of tweets. As a weak supervision signal, we label tweets by their source, i.e., trustworthy or untrustworthy, and train a classifier on this dataset. We then use that classifier for a different classification target, i.e., the classification of fake and non-fake tweets. Although the labels are not accurate with respect to the new classification target (not all tweets from an untrustworthy source need to be fake news, and vice versa), we show that despite this unclean, inaccurate dataset, the results are comparable to those achieved using a manually labeled set of tweets. Moreover, we show that combining the large-scale noisy dataset with a human-labeled one yields better results than either of the two alone. Full article
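The weak-supervision signal amounts to labeling each tweet by the trustworthiness of its source rather than by its content. A minimal sketch of that labeling step, with purely hypothetical source lists:

```python
TRUSTED = {"reuters", "apnews"}          # hypothetical trustworthy sources
UNTRUSTED = {"hoaxdaily", "clickbait"}   # hypothetical untrustworthy sources

def weak_label(tweets):
    """Assign noisy labels by tweet source (the weak-supervision signal).

    tweets: list of (text, source) pairs. Returns (text, label) pairs,
    where 0 = from a trusted source, 1 = from an untrusted source;
    tweets from unknown sources are skipped rather than mislabeled.
    """
    labeled = []
    for text, source in tweets:
        src = source.lower()
        if src in TRUSTED:
            labeled.append((text, 0))
        elif src in UNTRUSTED:
            labeled.append((text, 1))
    return labeled
```

A text classifier trained on this noisy output is then repurposed for the fake/non-fake target, which is the paper's central move.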
(This article belongs to the Special Issue Digital and Social Media in the Disinformation Age)
Article
Dynamic Control Architecture Based on Software Defined Networking for the Internet of Things
Future Internet 2021, 13(5), 113; https://0-doi-org.brum.beds.ac.uk/10.3390/fi13050113 - 28 Apr 2021
Cited by 1 | Viewed by 499
Abstract
Software Defined Networking (SDN) provides a new perspective for the Internet of Things (IoT), since, with the separation of the control from the data planes, it is viable to optimise the traditional networks operation management. In particular, the SDN Controller has a global [...] Read more.
Software Defined Networking (SDN) provides a new perspective for the Internet of Things (IoT), since, with the separation of the control and data planes, it becomes viable to optimise the operation and management of traditional networks. In particular, the SDN Controller has a global view of the sensor/actuator network domain, allowing real-time reconfiguration of network nodes and data flows. As a consequence, devices, which usually face limited communication and computing resources, are relieved of the route selection task that they would otherwise perform in a distributed and, thus, suboptimal way. This paper proposes an SDN-IoT architecture, focusing specifically on the Controller design, which dynamically optimises end-to-end flow delivery in real time. In particular, the dynamic routing policy adaptation is based on a real-time estimate of the network status and jointly minimises end-to-end latency and energy consumption, consequently improving the network lifetime. The performance of the proposed approach is analysed in terms of average latency, energy consumption and overhead, showing better behaviour in comparison with existing distributed approaches. Full article
(This article belongs to the Special Issue Service-Oriented Systems and Applications)
Article
Exploring the Roles of Local Mobility Patterns, Socioeconomic Conditions, and Lockdown Policies in Shaping the Patterns of COVID-19 Spread
Future Internet 2021, 13(5), 112; https://0-doi-org.brum.beds.ac.uk/10.3390/fi13050112 - 28 Apr 2021
Cited by 1 | Viewed by 539
Abstract
The COVID-19 crisis has shown that we can only prevent the risk of mass contagion through timely, large-scale, coordinated, and decisive actions. This pandemic has also highlighted the critical importance of generating rigorous evidence for decision-making, and actionable insights from data, considering further [...] Read more.
The COVID-19 crisis has shown that we can only prevent the risk of mass contagion through timely, large-scale, coordinated, and decisive actions. This pandemic has also highlighted the critical importance of generating rigorous evidence for decision-making, and actionable insights from data, considering further the intricate web of causes and drivers behind observed patterns of contagion diffusion. Using mobility, socioeconomic, and epidemiological data recorded throughout the pandemic development in the Santiago Metropolitan Region, we seek to understand the observed patterns of contagion. We characterize human mobility patterns during the pandemic through different mobility indices and correlate such patterns with the observed contagion diffusion, providing data-driven models for insights, analysis, and inferences. Through these models, we examine some effects of the late application of mobility restrictions in high-income urban regions that were affected by high contagion rates at the beginning of the pandemic. Using augmented synthetic control methods, we study the consequences of the early lifting of mobility restrictions in low-income sectors connected by public transport to high-risk and high-income communes. The Santiago Metropolitan Region is one of the largest Latin American metropolises with features that are common to large cities. Therefore, it can be used as a relevant case study to unravel complex patterns of the spread of COVID-19. Full article
(This article belongs to the Section Techno-Social Smart Systems)
Article
Designing a Network Intrusion Detection System Based on Machine Learning for Software Defined Networks
Future Internet 2021, 13(5), 111; https://0-doi-org.brum.beds.ac.uk/10.3390/fi13050111 - 28 Apr 2021
Viewed by 615
Abstract
Software-defined Networking (SDN) has recently developed and been put forward as a promising and encouraging solution for future internet architecture. Managed, the centralized and controlled network has become more flexible and visible using SDN. On the other hand, these advantages bring us a [...] Read more.
Software-defined Networking (SDN) has recently developed and been put forward as a promising and encouraging solution for future internet architecture. Managed centrally, the network has become more flexible and visible through SDN. On the other hand, these advantages bring a more vulnerable environment and dangerous threats, causing network breakdowns, system paralysis, online banking fraud and robberies. These issues have a significantly destructive impact on organizations, companies and even economies. Accuracy, high performance and real-time operation are essential to detect such attacks successfully. Extending intelligent machine learning algorithms into a network intrusion detection system (NIDS) through a software-defined network (SDN) has attracted considerable attention in the last decade. Big data availability, the diversity of data analysis techniques, and the massive improvement in machine learning algorithms enable the building of an effective, reliable and dependable system for detecting the different types of attacks that frequently target networks. This study demonstrates the use of machine learning algorithms for traffic monitoring to detect malicious behavior in the network as part of an NIDS in the SDN controller. Classical and advanced tree-based machine learning techniques, namely Decision Tree, Random Forest and XGBoost, are chosen to demonstrate attack detection. The NSL-KDD dataset is used for training and testing the proposed methods; it is considered a benchmark dataset for several state-of-the-art approaches in NIDS. Several advanced preprocessing techniques are performed on the dataset in order to extract the best form of the data, which produces outstanding results compared to other systems. Using just five out of the 41 features of NSL-KDD, a multi-class classification task is conducted, detecting whether there is an attack and classifying its type (DDoS, PROBE, R2L, and U2R), achieving an accuracy of 95.95%. Full article
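The overall shape of such a multi-class attack classifier can be sketched without the paper's tree ensembles. The nearest-centroid rule below is a deliberately simple stand-in for Decision Tree/Random Forest/XGBoost; the features, classes and classifier are illustrative, not the paper's pipeline:

```python
def train_centroids(X, y):
    """Nearest-centroid multi-class classifier (sketch).

    X: list of feature vectors (the paper uses 5 NSL-KDD features);
    y: attack-class labels (e.g. "normal", "DDoS", "PROBE", ...).
    Stores one mean vector per class.
    """
    sums, counts = {}, {}
    for row, label in zip(X, y):
        s = sums.setdefault(label, [0.0] * len(row))
        for i, v in enumerate(row):
            s[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in s] for lbl, s in sums.items()}

def predict(centroids, row):
    """Assign the class whose centroid is closest (squared Euclidean)."""
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(c, row))
    return min(centroids, key=lambda lbl: dist2(centroids[lbl]))
```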
(This article belongs to the Special Issue Mobile and Wireless Network Security and Privacy)
Article
Reviewing Stranger on the Internet: The Role of Identifiability through “Reputation” in Online Decision Making
Future Internet 2021, 13(5), 110; https://0-doi-org.brum.beds.ac.uk/10.3390/fi13050110 - 27 Apr 2021
Cited by 1 | Viewed by 564
Abstract
The stranger on the Internet effect has been studied in relation to self-disclosure. Nonetheless, quantitative evidence about how people mentally represent and perceive strangers online is still missing. Given the dynamic development of web technologies, quantifying how much strangers can be considered suitable [...] Read more.
The stranger on the Internet effect has been studied in relation to self-disclosure. Nonetheless, quantitative evidence about how people mentally represent and perceive strangers online is still missing. Given the dynamic development of web technologies, quantifying how far strangers can be considered suitable for pro-social acts such as self-disclosure appears fundamental for a whole series of phenomena, ranging from privacy protection to fake news spreading. Using a modified online version of the Ultimatum Game (UG), we quantified the mental representation of the stranger on the Internet effect and tested whether people modify their behaviors according to the interactors’ identifiability (i.e., reputation). A total of 444 adolescents took part in a 2 × 2 design experiment in which reputation was set as active or not for the two traditional UG tasks. We discovered that, when matched with strangers, people donate the same amount of money as if the other party had a good reputation. Moreover, reputation significantly affected the donation size, the acceptance rate and the feedback decision making as well. Full article
(This article belongs to the Special Issue Selected Papers from the INSCI2019: Internet Science 2019)
Article
How Schools Affected the COVID-19 Pandemic in Italy: Data Analysis for Lombardy Region, Campania Region, and Emilia Region
Future Internet 2021, 13(5), 109; https://0-doi-org.brum.beds.ac.uk/10.3390/fi13050109 - 27 Apr 2021
Cited by 1 | Viewed by 2193
Abstract
Background: Coronavirus Disease 2019 (COVID-19) is the main discussed topic worldwide in 2020 and at the beginning of the Italian epidemic, scientists tried to understand the virus diffusion and the epidemic curve of positive cases with controversial findings and numbers. Objectives: In this [...] Read more.
Background: Coronavirus Disease 2019 (COVID-19) was the most discussed topic worldwide in 2020, and at the beginning of the Italian epidemic, scientists tried to understand the virus’s diffusion and the epidemic curve of positive cases, with controversial findings and numbers. Objectives: In this paper, a data analytics study on the diffusion of COVID-19 in the Lombardy and Campania Regions is developed in order to identify the driver that sparked the second wave in Italy. Methods: Starting from all the available official data on the diffusion of COVID-19, we analyzed Google mobility data, school data and infection data for two large Italian regions, Lombardy and Campania, which adopted two different approaches to opening and closing schools. To reinforce our findings, we also extended the analysis to the Emilia Romagna Region. Results: The paper shows how the different policies adopted for school opening/closing may have had an impact on the spread of COVID-19, while other factors related to citizen mobility did not affect the second Italian wave. Conclusions: The paper shows that a clear correlation exists between school contagion and the subsequent overall contagion in a geographical area. Moreover, it is clear that highly populated provinces had the greatest spread of the virus. Full article
(This article belongs to the Special Issue Software Engineering and Data Science)
Article
Inferring Urban Social Networks from Publicly Available Data
Future Internet 2021, 13(5), 108; https://0-doi-org.brum.beds.ac.uk/10.3390/fi13050108 - 26 Apr 2021
Viewed by 588
Abstract
The definition of suitable generative models for synthetic yet realistic social networks is a widely studied problem in the literature. By not being tied to any real data, random graph models cannot capture all the subtleties of real networks and are inadequate for [...] Read more.
The definition of suitable generative models for synthetic yet realistic social networks is a widely studied problem in the literature. By not being tied to any real data, random graph models cannot capture all the subtleties of real networks and are inadequate for many practical contexts—including areas of research, such as computational epidemiology, which have recently been high on the agenda. At the same time, the so-called contact networks describe interactions, rather than relationships, and are strongly dependent on the application and on the size and quality of the sample data used to infer them. To fill the gap between these two approaches, we present a data-driven model for urban social networks, implemented and released as open source software. By using just widely available aggregated demographic and social-mixing data, we are able to create, for a territory of interest, an age-stratified and geo-referenced synthetic population whose individuals are connected by “strong ties” of two types: intra-household (e.g., kinship) or friendship. While household links are entirely data-driven, we propose a parametric probabilistic model for friendship, based on the assumption that distances and age differences play a role, and that not all individuals are equally sociable. The demographic and geographic factors governing the structure of the obtained network, under different configurations, are thoroughly studied through extensive simulations focused on three Italian cities of different sizes. Full article
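The parametric friendship model described above can be sketched as a tie probability that decays with geographic distance and age difference, scaled by each individual's sociability. The functional form and parameter names below are hypothetical, not the paper's fitted model:

```python
import math

def friendship_prob(dist_km, age_diff, sociability_i, sociability_j,
                    beta_d=0.5, beta_a=0.1):
    """Hypothetical friendship-tie probability between two individuals.

    Decays exponentially with distance (km) and age difference (years),
    scaled by the two individuals' sociability values in (0, 1], so that
    not everyone is equally likely to form ties.
    """
    decay = math.exp(-beta_d * dist_km - beta_a * age_diff)
    return sociability_i * sociability_j * decay
```

Sampling a Bernoulli edge with this probability for each candidate pair would yield the friendship layer of such a synthetic network.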
Article
Mutual Influence of Users Credibility and News Spreading in Online Social Networks
Future Internet 2021, 13(5), 107; https://0-doi-org.brum.beds.ac.uk/10.3390/fi13050107 - 25 Apr 2021
Cited by 1 | Viewed by 566
Abstract
A real-time news spreading is now available for everyone, especially thanks to Online Social Networks (OSNs) that easily endorse gate watching, so the collective intelligence and knowledge of dedicated communities are exploited to filter the news flow and to highlight and debate relevant [...] Read more.
Real-time news spreading is now available to everyone, especially thanks to Online Social Networks (OSNs), which readily support gate watching: the collective intelligence and knowledge of dedicated communities are exploited to filter the news flow and to highlight and debate relevant topics. The main drawback is that the responsibility for judging the content and accuracy of information moves from editors and journalists to online information users, with the side effect of the potential growth of fake news. In such a scenario, the trustworthiness of information providers can no longer be overlooked; rather, it increasingly helps in discerning real news from fakes. In this paper we evaluate how trustworthiness among OSN users influences the news spreading process. To this purpose, we model news spreading as a Susceptible-Infected-Recovered (SIR) process on an OSN, adding the credibility of users as a layer on top of the OSN. Simulations with both fake and true news spreading on such a multiplex network show that credibility improves the diffusion of real news while limiting the propagation of fakes. The proposed approach can also be extended to real social networks. Full article
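A credibility-weighted SIR process of this kind can be sketched in a few lines; the parameter values and the exact coupling (a spreader's credibility scaling the per-contact transmission probability) are illustrative assumptions, not the paper's model:

```python
import random

def spread(neighbors, credibility, seeds, beta=0.3, gamma=0.2,
           steps=50, seed=7):
    """SIR-style news spreading where the spreader's credibility scales
    the per-contact transmission probability (sketch of the multiplex idea).

    neighbors: dict node -> list of neighbor nodes (the OSN layer).
    credibility: dict node -> value in [0, 1] (the credibility layer).
    Returns the set of nodes that ever carried the news.
    """
    rng = random.Random(seed)
    infected = set(seeds)
    recovered = set()
    for _ in range(steps):
        new_inf, new_rec = set(), set()
        for u in infected:
            for v in neighbors.get(u, []):
                if v not in infected and v not in recovered:
                    # Low-credibility spreaders transmit less effectively
                    if rng.random() < beta * credibility[u]:
                        new_inf.add(v)
            if rng.random() < gamma:
                new_rec.add(u)  # spreader loses interest (recovers)
        infected = (infected | new_inf) - new_rec
        recovered |= new_rec
    return infected | recovered
```

With credibility uniformly zero the news never leaves its seed, which is the limiting case of fakes being throttled by low-credibility sources.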
(This article belongs to the Special Issue Digital and Social Media in the Disinformation Age)