Information, Volume 10, Issue 6 (June 2019) – 38 articles

Cover Story: Online Social Networks (OSNs) have found widespread applications in every area of our life. Large numbers of people have signed up to OSNs for different purposes, including meeting old friends, choosing a given company, and identifying expert users on a given topic, producing a large number of social connections. These aspects have led to the birth of a new generation of OSNs, called Multimedia Social Networks (MSNs), in which user-generated content plays a key role in enabling interactions among users. In this work, we propose a novel expert-finding technique exploiting a hypergraph-based data model for MSNs. In particular, user-ranking measures, obtained by considering only particularly useful hyperpaths, are used to evaluate a user's degree of expertise with respect to a given social topic.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
16 pages, 3322 KiB  
Article
Using an Exponential Random Graph Model to Recommend Academic Collaborators
by Hailah Al-Ballaa, Hmood Al-Dossari and Azeddine Chikh
Information 2019, 10(6), 220; https://0-doi-org.brum.beds.ac.uk/10.3390/info10060220 - 25 Jun 2019
Cited by 5 | Viewed by 4862
Abstract
Academic collaboration networks can be formed by grouping different faculty members into a single group. Grouping these faculty members together is a complex process that involves searching multiple web pages in order to collect and analyze information, and establishing new connections among prospective collaborators. A recommender system (RS) for academic collaborations can help reduce the time and effort required to establish a new collaboration. Content-based recommender systems make recommendations based on similarity without taking social context into consideration. Hybrid recommender systems can be used to combine similarity and social context. In this paper, we propose a weighting method that can be used to combine two or more social context factors in a recommendation engine that leverages an exponential random graph model (ERGM) based on historical network data. We demonstrate our approach using real data from collaborations with faculty members at the College of Computer and Information Sciences (CCIS) in Saudi Arabia. Our results demonstrate that weighting social context factors helps increase recommendation accuracy for new users. Full article
(This article belongs to the Special Issue Modern Recommender Systems: Approaches, Challenges and Applications)
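As a rough sketch of the weighting idea (illustrative only: the function, the linear blend, and the numbers are assumptions made here, and in the paper the factor weights come from a fitted ERGM on historical network data rather than being set by hand):

def hybrid_score(similarity, context_factors, ergm_weights, alpha=0.5):
    # similarity: content-based score in [0, 1].
    # context_factors: social context factor values in [0, 1].
    # ergm_weights: per-factor weights, assumed here to be derived from
    # fitted ERGM coefficients on historical collaboration data.
    total = sum(ergm_weights[f] for f in context_factors)
    social = sum(ergm_weights[f] * v for f, v in context_factors.items()) / total
    return alpha * similarity + (1 - alpha) * social

# Example: score one candidate collaborator for a target researcher.
score = hybrid_score(
    similarity=0.80,
    context_factors={"same_department": 1.0, "shared_coauthors": 0.4},
    ergm_weights={"same_department": 1.2, "shared_coauthors": 2.0},
)
print(round(score, 3))  # higher scores rank the candidate higher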
20 pages, 1837 KiB  
Article
Driving Style: How Should an Automated Vehicle Behave?
by Luis Oliveira, Karl Proctor, Christopher G. Burns and Stewart Birrell
Information 2019, 10(6), 219; https://0-doi-org.brum.beds.ac.uk/10.3390/info10060219 - 25 Jun 2019
Cited by 55 | Viewed by 10578
Abstract
This article reports on a study to investigate how the driving behaviour of autonomous vehicles influences trust and acceptance. Two different designs were presented to two groups of participants (n = 22/21), using actual autonomously driving vehicles. The first was a vehicle programmed to drive similarly to a human, “peeking” when approaching road junctions as if it were looking before proceeding. The second design had a vehicle programmed to convey the impression that it was communicating with other vehicles and infrastructure and “knew” if the junction was clear so could proceed without ever stopping or slowing down. Results showed non-significant differences in trust between the two vehicle behaviours. However, there were significant increases in trust scores overall for both designs as the trials progressed. Post-interaction interviews indicated that there were pros and cons for both driving styles, and participants suggested which aspects of the driving styles could be improved. This paper presents user information recommendations for the design and programming of driving systems for autonomous vehicles, with the aim of improving their users’ trust and acceptance. Full article
(This article belongs to the Special Issue Automotive User Interfaces and Interactions in Automated Driving)
13 pages, 1560 KiB  
Article
Design Framework of a Traceability System for the Rice Agroindustry Supply Chain in West Java
by Pradeka Brilyan Purwandoko, Kudang Boro Seminar, Sutrisno and Sugiyanta
Information 2019, 10(6), 218; https://0-doi-org.brum.beds.ac.uk/10.3390/info10060218 - 25 Jun 2019
Cited by 6 | Viewed by 6948
Abstract
Rice is a vital food commodity in Indonesia due to its role as a staple food for most Indonesian people. The rice supply chain in Indonesia varies from one region to another and it is difficult to trace movement along the chain from land to customers. This introduces non-transparency and uncertainty in the quantity and quality of rice at every node along the supply chain. The crucial issues of food safety and security, as well as consumer concern and curiosity in buying and consuming foods, increase the need for a traceability system for the rice value chain which can be easily and widely accessed. This paper describes the design framework of an IT (Information Technology)-based traceability system for the rice supply chain on web platforms. The system approach has been followed (where the system requirements are identified based on supply chain characteristics) and then the logical framework for implementing internal and external traceability was modeled using IDEF-0 (Integrated Definition Modeling). This paper further presents an explanation of the ERD (Entity Relationship Diagram) as an initial step to modeling the data requirements and a model of information exchange between stakeholders that explains the data that must be recorded and forwarded to the next stakeholder. Finally, we propose the CBIS (Computer Based Information System) concept to develop a traceability system in the rice supply chain. Full article
13 pages, 1333 KiB  
Article
Drowsiness Estimation Using Electroencephalogram and Recurrent Support Vector Regression
by Izzat Aulia Akbar and Tomohiko Igasaki
Information 2019, 10(6), 217; https://0-doi-org.brum.beds.ac.uk/10.3390/info10060217 - 24 Jun 2019
Cited by 6 | Viewed by 3385
Abstract
As a cause of accidents, drowsiness can cause economic and physical damage. A range of drowsiness estimation methods have been proposed in previous studies to aid accident prevention and address this problem. However, none of these methods are able to improve their estimation ability as the length of time or number of trials increases. Thus, in this study, we aim to find an effective drowsiness estimation method that is also able to improve its prediction ability as the subject’s activity increases. We used electroencephalogram (EEG) data to estimate drowsiness, and the Karolinska sleepiness scale (KSS) for drowsiness evaluation. Five parameters (α, β/α, (θ+α)/β, activity, and mobility) from the O1 electrode site were selected. By combining these parameters and KSS, we demonstrate that a typical support vector regression (SVR) algorithm can estimate drowsiness with a correlation coefficient (R²) of up to 0.64 and a root mean square error (RMSE) of up to 0.56. We propose a “recurrent SVR” (RSVR) method with improved estimation performance, as highlighted by an R² value of up to 0.83, and an RMSE of up to 0.15. These results suggest that in addition to being able to estimate drowsiness based on EEG data, RSVR is able to improve its drowsiness estimation performance. Full article
(This article belongs to the Section Information Processes)
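As a small aside, the “activity” and “mobility” features listed above are the standard Hjorth descriptors, and a non-recurrent SVR baseline can be sketched in a few lines; the snippet below uses synthetic stand-ins for the O1-channel EEG epochs and KSS ratings, and is not the paper's RSVR method:

import numpy as np
from sklearn.svm import SVR

def hjorth_activity_mobility(x):
    # Standard Hjorth descriptors: activity = var(x);
    # mobility = sqrt(var(x') / var(x)), with x' the first difference.
    activity = np.var(x)
    mobility = np.sqrt(np.var(np.diff(x)) / activity)
    return activity, mobility

rng = np.random.default_rng(0)
epochs = rng.standard_normal((50, 512))   # synthetic stand-in EEG epochs
X = np.array([hjorth_activity_mobility(e) for e in epochs])
kss = rng.uniform(1, 9, size=50)          # stand-in KSS ratings (1-9 scale)

model = SVR(kernel="rbf").fit(X, kss)     # plain (non-recurrent) SVR baseline
print(model.predict(X[:3]))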
12 pages, 2797 KiB  
Article
Managing Software Security Knowledge in Context: An Ontology Based Approach
by Shao-Fang Wen and Basel Katt
Information 2019, 10(6), 216; https://0-doi-org.brum.beds.ac.uk/10.3390/info10060216 - 20 Jun 2019
Cited by 2 | Viewed by 3798
Abstract
Knowledge of software security is highly complex since it is quite context-specific and can be applied in diverse ways. To secure software development, software developers require not only knowledge about general security concepts but also about the context for which the software is being developed. With traditional security-centric knowledge formats, it is difficult for developers or knowledge users to retrieve their required security information based on the requirements of software products and development technologies. We argue that, for security knowledge to operate effectively and become an essential part of practical software development, it must first incorporate features that specify which contextual characteristics are to be handled, and it must be represented in a format that is understandable and acceptable to individuals. This study introduces a novel ontology approach for modeling security knowledge with a context-based approach, by which security knowledge can be retrieved, taking the context of the software application at hand into consideration. In this paper, we present our security ontology with the design concepts and the corresponding evaluation process. Full article
(This article belongs to the Section Information Systems)
22 pages, 916 KiB  
Article
Comparative Performance Evaluation of an Accuracy-Enhancing Lyapunov Solver
by Vasile Sima
Information 2019, 10(6), 215; https://0-doi-org.brum.beds.ac.uk/10.3390/info10060215 - 19 Jun 2019
Cited by 2 | Viewed by 3024
Abstract
Lyapunov equations are key mathematical objects in systems theory, analysis and design of control systems, and in many applications, including balanced realization algorithms, procedures for reduced order models, Newton methods for algebraic Riccati equations, or stabilization algorithms. A new iterative accuracy-enhancing solver for both standard and generalized continuous- and discrete-time Lyapunov equations is proposed and investigated in this paper. The underlying algorithm and some technical details are summarized. At each iteration, the computed solution of a reduced Lyapunov equation serves as a correction term to refine the current solution of the initial equation. The best available algorithms for solving Lyapunov equations with dense matrices, employing the real Schur(-triangular) form of the coefficient matrices, are used. The reduction to Schur(-triangular) form has to be done only once, before starting the iterative process. The algorithm converges in very few iterations. The results obtained by solving series of numerically difficult examples derived from the SLICOT benchmark collections for Lyapunov equations are compared to the solutions returned by the MATLAB and SLICOT solvers. The new solver can be more accurate than these state-of-the-art solvers and requires little additional computational effort. Full article
(This article belongs to the Special Issue ICSTCC 2018: Advances in Control and Computers)
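For orientation, the equations involved take the standard forms (textbook notation, not quoted from the paper)

A X + X A^T + Q = 0 (continuous-time),   A X A^T − X + Q = 0 (discrete-time),

and the refinement step described above acts on the residual of the current approximate solution X_k,

R_k = A X_k + X_k A^T + Q,

solving the reduced Lyapunov equation A D_k + D_k A^T + R_k = 0 for a correction D_k and updating X_{k+1} = X_k + D_k, with the coefficient matrices kept in real Schur form so that each solve is cheap after the one-time reduction.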
29 pages, 415 KiB  
Article
Optimal Control of Virus Spread under Different Conditions of Resources Limitations
by Paolo Di Giamberardino and Daniela Iacoviello
Information 2019, 10(6), 214; https://0-doi-org.brum.beds.ac.uk/10.3390/info10060214 - 19 Jun 2019
Viewed by 2943
Abstract
The paper addresses the problem of human virus spread reduction when the resources for the control actions are somehow limited. This kind of problem can be successfully solved in the framework of the optimal control theory, where the best solution, which minimizes a cost function while satisfying input constraints, can be provided. The problem is formulated in this context for the case of the HIV/AIDS virus, making use of a model that considers two classes of susceptible subjects, the wise people and the people with incautious behaviours, and three classes of infected, the ones still not aware of their status, the pre-AIDS patients and the AIDS ones; the control actions are represented by an information campaign, to reduce the category of subjects with unwise behaviour, a test campaign, to reduce the number of subjects not aware of having the virus, and the medication on patients with a positive diagnosis. The cost function considered aims at reducing patients with positive diagnosis using as few resources as possible. Four different types of resource bounds are considered, divided into two classes: limitations on the instantaneous control and fixed total budgets. The optimal solutions are numerically computed, and the results of simulations performed are illustrated and compared to highlight the different behaviours of the control actions. Full article
(This article belongs to the Special Issue ICSTCC 2018: Advances in Control and Computers)
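In generic optimal control notation (a standard formulation; the symbols here are chosen for illustration and are not taken from the paper), the problem class is

min_u J = \int_0^{t_f} ( x^T(t) Q x(t) + u^T(t) R u(t) ) dt, subject to \dot{x} = f(x, u),

with the three controls u_1, u_2, u_3 standing for the information campaign, the test campaign, and the medication; the resource limitations enter either as instantaneous bounds 0 <= u_i(t) <= u_i^max or as fixed total budgets \int_0^{t_f} u_i(t) dt <= B_i, which correspond to the two classes of bounds compared in the paper.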
17 pages, 974 KiB  
Article
Optimal Resource Allocation to Reduce an Epidemic Spread and Its Complication
by Paolo Di Giamberardino and Daniela Iacoviello
Information 2019, 10(6), 213; https://0-doi-org.brum.beds.ac.uk/10.3390/info10060213 - 13 Jun 2019
Cited by 8 | Viewed by 3344
Abstract
Mathematical modeling represents a useful instrument to describe epidemic spread and to propose useful control actions, such as vaccination scheduling, quarantine, informative campaign, and therapy, especially in the realistic hypothesis of resource limitations. Moreover, the same representation could efficiently describe different epidemic scenarios, involving, for example, computer viruses spreading in the network. In this paper, a new model describing an infectious disease and a possible complication is proposed; after a deep model analysis discussing the role of the reproduction number, an optimal control problem is formulated and solved to reduce the number of dead patients, minimizing the control effort. The results show the reasonability of the proposed model and the effectiveness of the control action, aiming at an efficient resource allocation; the model also describes the different reactions of a population with respect to an epidemic disease depending on the economic and social original conditions. The optimal control theory applied to the proposed new epidemic model provides a considerable reduction in the number of dead patients, also suggesting the suitable scheduling of the vaccination control. Future work will be devoted to the identification of the model parameters for a specific epidemic disease and its complications, also taking into account the geographic and social scenario. Full article
(This article belongs to the Special Issue ICSTCC 2018: Advances in Control and Computers)
21 pages, 988 KiB  
Article
Large Scale Linguistic Processing of Tweets to Understand Social Interactions among Speakers of Less Resourced Languages: The Basque Case
by Joseba Fernandez de Landa, Rodrigo Agerri and Iñaki Alegria
Information 2019, 10(6), 212; https://0-doi-org.brum.beds.ac.uk/10.3390/info10060212 - 13 Jun 2019
Cited by 3 | Viewed by 6355
Abstract
Social networks like Twitter are increasingly important in the creation of new ways of communication. They have also become useful tools for social and linguistic research due to the massive amounts of public textual data available. This is particularly important for less resourced languages, as it allows current natural language processing techniques to be applied to large amounts of unstructured data. In this work, we study the linguistic and social aspects of young and adult people’s behaviour based on their tweets’ contents and the social relations that arise from them. With this objective in mind, we have gathered over 10 million tweets from more than 8000 users. First, we classified each user in terms of their life stage (young/adult) according to the writing style of their tweets. Second, we applied topic modelling techniques to the personal tweets to find the most popular topics according to life stages. Third, we established the relations and communities that emerge based on the retweets. We conclude that using large amounts of unstructured data provided by Twitter facilitates social research using computational techniques such as natural language processing, giving the opportunity both to segment communities based on demographic characteristics and to discover how they interact or relate to one another. Full article
(This article belongs to the Special Issue Natural Language Processing and Text Mining)
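As an illustration of the topic modelling step, a minimal sketch with gensim's LDA on toy tokenized documents follows; the tokens and topic count are invented for the example (the study works on Basque tweets), so this shows the mechanics rather than the paper's setup:

from gensim import corpora, models

# Toy stand-in for tokenized personal tweets.
tweets = [
    ["football", "match", "goal"],
    ["exam", "university", "study"],
    ["goal", "team", "win"],
    ["study", "library", "exam"],
]

dictionary = corpora.Dictionary(tweets)
corpus = [dictionary.doc2bow(t) for t in tweets]

# Fit a small LDA model and inspect the discovered topics.
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, random_state=0)
for topic_id, words in lda.print_topics(num_words=3):
    print(topic_id, words)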
31 pages, 1500 KiB  
Article
What Message Characteristics Make Social Engineering Successful on Facebook: The Role of Central Route, Peripheral Route, and Perceived Risk
by Abdullah Algarni
Information 2019, 10(6), 211; https://0-doi-org.brum.beds.ac.uk/10.3390/info10060211 - 13 Jun 2019
Cited by 10 | Viewed by 9847
Abstract
Past research suggests that the human ability to detect social engineering deception is very limited, and it is even more limited in the virtual environment of social networking sites (SNS) such as Facebook. At the organizational level, research suggests that social engineers could succeed even among those organizations that identify themselves as being aware of social engineering techniques. This may be partly due to the complexity of human behaviors in failing to recognize social engineering tricks in SNSs. Due to the vital role that persuasion and perception play in users’ decisions to accept or reject social engineering tricks, this paper aims to investigate the impact of message characteristics on users’ susceptibility to social engineering victimization on Facebook. In doing so, we investigate the role of the central route of persuasion, peripheral route of persuasion, and perceived risk on susceptibility to social engineering on Facebook. In addition, we investigate the mediation effects between the explored factors, and whether there is any relationship between their effectiveness and users’ demographics. Full article
(This article belongs to the Special Issue Insider Attacks)
17 pages, 1109 KiB  
Article
Electronic Identification for Universities: Building Cross-Border Services Based on the eIDAS Infrastructure
by Diana Berbecaru, Antonio Lioy and Cesare Cameroni
Information 2019, 10(6), 210; https://0-doi-org.brum.beds.ac.uk/10.3390/info10060210 - 12 Jun 2019
Cited by 17 | Viewed by 4516
Abstract
The European Union (EU) Regulation 910/2014 on electronic IDentification, Authentication, and trust Services (eIDAS) for electronic transactions in the internal market went into effect on 29 September 2018, meaning that EU Member States are required to recognize the electronic identities issued in the countries that have notified their eID schemes. Technically speaking, a unified interoperability platform—named eIDAS infrastructure—has been set up to connect the EU countries’ national eID schemes to allow a person to authenticate in their home EU country when getting access to services provided by an eIDAS-enabled Service Provider (SP) in another EU country. The eIDAS infrastructure allows the transfer of authentication requests and responses back and forth between its nodes, transporting basic attributes about a person, e.g., name, surname, date of birth, and a so-called eIDAS identifier. However, to build new eIDAS-enabled services in specific domains, additional attributes are needed. We describe our approach to retrieve and transport new attributes through the eIDAS infrastructure, and we detail their exploitation in a selected set of academic services. First, we describe the definition and the support for the additional attributes in the eIDAS nodes. We then present a solution for their retrieval from our university. Finally, we detail the design, implementation, and installation of two eIDAS-enabled academic services at our university: the eRegistration in the Erasmus student exchange program and the Login facility with national eIDs on the university portal. Full article
(This article belongs to the Special Issue ICSTCC 2018: Advances in Control and Computers)
17 pages, 2477 KiB  
Article
An Intelligent Spam Detection Model Based on Artificial Immune System
by Abdul Jabbar Saleh, Asif Karim, Bharanidharan Shanmugam, Sami Azam, Krishnan Kannoorpatti, Mirjam Jonkman and Friso De Boer
Information 2019, 10(6), 209; https://0-doi-org.brum.beds.ac.uk/10.3390/info10060209 - 12 Jun 2019
Cited by 27 | Viewed by 9180
Abstract
Spam emails, also known as non-self, are unsolicited commercial or malicious emails, sent to affect either a single individual or a corporation or a group of people. Besides advertising, these may contain links to phishing or malware hosting websites set up to steal confidential information. In this paper, a study of the effectiveness of using a Negative Selection Algorithm (NSA) for anomaly detection applied to spam filtering is presented. NSA has a high performance and a low false detection rate. The designed framework intelligently works through three detection phases to finally determine an email’s legitimacy based on the knowledge gathered in the training phase. The system operates by elimination through Negative Selection, similar to the functionality of T-cells in biological systems. It has been observed that with the inclusion of more datasets, the performance continues to improve, resulting in a 6% increase in the True Positive and True Negative detection rate while achieving an actual detection rate of spam and ham of 98.5%. The model has been further compared against similar studies, and the result shows that the proposed system results in an increase of 2 to 15% in the correct detection rate of spam and ham. Full article
(This article belongs to the Special Issue Machine Learning for Cyber-Security)
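A minimal, self-contained sketch of the negative selection idea follows: random detectors are kept only if they fail to match any “self” (ham) training sample, and an email is flagged as non-self (spam) when some detector matches it. The bit-string encoding, the matching rule, and the threshold are illustrative assumptions, not the paper's three-phase framework:

import random

def matches(a, b, threshold):
    # A detector "matches" a sample if they agree in at least
    # `threshold` of the bit positions (a simple affinity rule).
    return sum(x == y for x, y in zip(a, b)) >= threshold

def generate_detectors(self_set, n_detectors, length, threshold):
    # Negative selection: discard any candidate that matches a self sample.
    detectors = []
    while len(detectors) < n_detectors:
        d = [random.randint(0, 1) for _ in range(length)]
        if not any(matches(d, s, threshold) for s in self_set):
            detectors.append(d)
    return detectors

def is_spam(sample, detectors, threshold):
    # Non-self (spam) if any surviving detector matches the sample.
    return any(matches(d, sample, threshold) for d in detectors)

random.seed(1)
ham = [[0, 0, 0, 1, 0, 0, 1, 0], [0, 1, 0, 0, 0, 0, 0, 1]]  # toy "self" set
detectors = generate_detectors(ham, n_detectors=20, length=8, threshold=7)
print(is_spam([1, 1, 1, 0, 1, 1, 0, 1], detectors, threshold=7))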
16 pages, 2651 KiB  
Article
Latent Feature Group Learning for High-Dimensional Data Clustering
by Wenting Wang, Yulin He, Liheng Ma and Joshua Zhexue Huang
Information 2019, 10(6), 208; https://0-doi-org.brum.beds.ac.uk/10.3390/info10060208 - 10 Jun 2019
Cited by 1 | Viewed by 3086
Abstract
In this paper, we propose a latent feature group learning (LFGL) algorithm to discover the feature grouping structures and subspace clusters for high-dimensional data. The feature grouping structures, which are learned in an analytical way, can enhance the accuracy and efficiency of high-dimensional data clustering. In the LFGL algorithm, the Darwinian evolutionary process is used to explore the optimal feature grouping structures, which are coded as chromosomes in the genetic algorithm. The feature grouping weighting k-means algorithm is used as the fitness function to evaluate the chromosomes or feature grouping structures in each generation of evolution. To better handle the diverse densities of clusters in high-dimensional data, the original feature grouping weighting k-means is revised with the mass-based dissimilarity measure rather than the Euclidean distance measure, and the feature weights are optimized as a nonnegative matrix factorization problem under the orthogonal constraint of the feature weight matrix. The genetic operations of mutation and crossover are used to generate new chromosomes for the next generation. In comparison with well-known clustering algorithms, the LFGL algorithm produced encouraging experimental results on real-world datasets, which demonstrate its better performance when clustering high-dimensional data. Full article
(This article belongs to the Section Artificial Intelligence)
19 pages, 550 KiB  
Article
Privacy-Aware MapReduce Based Multi-Party Secure Skyline Computation
by Saleh Ahmed, Mahboob Qaosar, Asif Zaman, Md. Anisuzzaman Siddique, Chen Li, Kazi Md. Rokibul Alam and Yasuhiko Morimoto
Information 2019, 10(6), 207; https://0-doi-org.brum.beds.ac.uk/10.3390/info10060207 - 08 Jun 2019
Viewed by 3226
Abstract
Selecting representative objects from a large-scale dataset is an important task for understanding the dataset. Skyline is a popular technique for selecting representative objects from a large dataset. It is obvious that the skyline computed from the collective databases of multiple organizations is more effective than the skyline computed from the database of a single organization. However, due to privacy-awareness, every organization is also concerned about the security and privacy of their data. In this regard, we propose an efficient multi-party secure skyline computation method that computes the skyline on encrypted data and preserves the confidentiality of each party’s database objects. Although several distributed skyline computing methods have been proposed, very few of them consider data privacy and security issues, and existing privacy-preserving multi-party skyline computation techniques are not efficient enough. In our proposed method, we present a secure computation model that is more efficient than existing privacy-preserving multi-party skyline computation models in terms of computation and communication complexity. In our computation model, we also introduce MapReduce as a distributed, scalable, open-source, cost-effective, and reliable framework to handle multi-party data efficiently. Full article
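For reference, the dominance relation that defines a skyline is standard (assuming here that smaller values are better in every dimension), and a naive, non-secure version fits in a few lines; the paper's contribution is performing this computation securely over encrypted multi-party data with MapReduce, which this sketch deliberately does not address:

def dominates(p, q):
    # p dominates q (minimization): p is no worse in every dimension
    # and strictly better in at least one.
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points):
    # Naive O(n^2) skyline: keep the points no other point dominates.
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

print(skyline([(1, 4), (2, 2), (3, 1), (3, 3), (4, 4)]))
# -> [(1, 4), (2, 2), (3, 1)]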
18 pages, 330 KiB  
Article
Generalized Hamacher Aggregation Operators for Intuitionistic Uncertain Linguistic Sets: Multiple Attribute Group Decision Making Methods
by Yun Jin, Hecheng Wu, Jose M. Merigó and Bo Peng
Information 2019, 10(6), 206; https://0-doi-org.brum.beds.ac.uk/10.3390/info10060206 - 08 Jun 2019
Cited by 4 | Viewed by 3109
Abstract
In this paper, we consider multiple attribute group decision making (MAGDM) problems in which the attribute values take the form of intuitionistic uncertain linguistic variables. Based on Hamacher operations, we developed several Hamacher aggregation operators, which generalize the arithmetic aggregation operators and geometric aggregation operators, and extend the algebraic aggregation operators and Einstein aggregation operators. A number of special cases for the two operators with respect to the parameters are discussed in detail. Also, we developed an intuitionistic uncertain linguistic generalized Hamacher hybrid weighted average operator to reflect the importance degrees of both the given intuitionistic uncertain linguistic variables and their ordered positions. Based on the generalized Hamacher aggregation operator, we propose a method for MAGDM for intuitionistic uncertain linguistic sets. Finally, a numerical example and comparative analysis with related decision making methods are provided to illustrate the practicality and feasibility of the proposed method. Full article
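For context, the Hamacher t-norm and t-conorm with parameter γ > 0, on which such aggregation operators are built, take the standard forms (textbook definitions, stated here for the reader):

T(a, b) = ab / (γ + (1 − γ)(a + b − ab)),
S(a, b) = (a + b − ab − (1 − γ)ab) / (1 − (1 − γ)ab).

Setting γ = 1 recovers the algebraic operations and γ = 2 the Einstein operations, which is the sense in which the proposed operators generalize and extend both families.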
15 pages, 422 KiB  
Article
Event Extraction and Representation: A Case Study for the Portuguese Language
by Paulo Quaresma, Vítor Beires Nogueira, Kashyap Raiyani and Roy Bayot
Information 2019, 10(6), 205; https://0-doi-org.brum.beds.ac.uk/10.3390/info10060205 - 08 Jun 2019
Cited by 6 | Viewed by 5758
Abstract
Text information extraction is an important natural language processing (NLP) task, which aims to automatically identify, extract, and represent information from text. In this context, event extraction plays a relevant role, allowing actions, agents, objects, places, and time periods to be identified and represented. The extracted information can be represented by specialized ontologies, supporting knowledge-based reasoning and inference processes. In this work, we will describe, in detail, our proposal for event extraction from Portuguese documents. The proposed approach is based on a pipeline of specialized natural language processing tools; namely, a part-of-speech tagger, a named entities recognizer, a dependency parser, semantic role labeling, and a knowledge extraction module. The architecture is language-independent, but its modules are language-dependent and can be built using adequate AI (i.e., rule-based or machine learning) methodologies. The developed system was evaluated with a corpus of Portuguese texts and the obtained results are presented and analysed. The current limitations and future work are discussed in detail. Full article
(This article belongs to the Special Issue Natural Language Processing and Text Mining)
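To make the pipeline idea concrete, here is a minimal sketch using spaCy's off-the-shelf Portuguese model, an illustration of the same class of tools rather than the authors' actual toolchain (which additionally includes semantic role labeling and a knowledge extraction module); it assumes the pt_core_news_sm model has been installed:

import spacy

# Assumes: python -m spacy download pt_core_news_sm
nlp = spacy.load("pt_core_news_sm")
doc = nlp("O presidente visitou Lisboa na segunda-feira.")

# Part-of-speech tags and dependency relations for each token.
for token in doc:
    print(token.text, token.pos_, token.dep_, token.head.text)

# Named entities: candidate agents, places, and times of an event.
for ent in doc.ents:
    print(ent.text, ent.label_)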
27 pages, 7051 KiB  
Article
Machine Vibration Monitoring for Diagnostics through Hypothesis Testing
by Alessandro Paolo Daga and Luigi Garibaldi
Information 2019, 10(6), 204; https://0-doi-org.brum.beds.ac.uk/10.3390/info10060204 - 07 Jun 2019
Cited by 20 | Viewed by 4663
Abstract
Nowadays, the subject of machine diagnostics is gathering growing interest in the research field, as switching from a programmed to a preventive maintenance regime based on the real health conditions (i.e., condition-based maintenance) can lead to great advantages both in terms of safety and costs. Nondestructive tests monitoring the state of health are fundamental for this purpose. An effective form of condition monitoring is that based on vibration (vibration monitoring), which exploits inexpensive accelerometers to perform machine diagnostics. In this work, statistics and hypothesis testing will be used to build a solid foundation for damage detection by recognition of patterns in a multivariate dataset which collects simple time features extracted from accelerometric measurements. In this regard, data from high-speed aeronautical bearings were analyzed. These were acquired on a test rig built by the Dynamic and Identification Research Group (DIRG) of the Department of Mechanical and Aerospace Engineering at Politecnico di Torino. The proposed strategy was to reduce the multivariate dataset to a single index from which the health conditions can be determined. This dimensionality reduction was initially performed using Principal Component Analysis, which proved to be a lossy compression. Improvement was obtained via Fisher’s Linear Discriminant Analysis, which finds the direction with maximum distance between the damaged and healthy indices. This method is still ineffective in highlighting phenomena that develop in directions orthogonal to the discriminant. Finally, a lossless compression was achieved using the Mahalanobis distance-based Novelty Indices, which were also able to compensate for possible latent confounding factors. Further, considerations about confidence, sensitivity, the curse of dimensionality, and the minimum number of samples were also tackled to ensure statistical significance. The results obtained here were very good not only in terms of reduced amounts of missed and false alarms, but also considering the speed of the algorithms, their simplicity, and their full independence from human interaction, which make them suitable for real-time implementation and integration in condition-based maintenance (CBM) regimes. Full article
(This article belongs to the Special Issue Fault Diagnosis, Maintenance and Reliability)
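The Mahalanobis distance behind such Novelty Indices is standard (stated here as background, not quoted from the paper): for a feature vector x and a healthy-condition training set with sample mean μ and covariance matrix Σ, the index is

NI(x) = sqrt( (x − μ)^T Σ^{-1} (x − μ) ),

a single scalar whose unusually large values signal departure from the healthy condition; because Σ^{-1} rescales correlated directions, the index can also compensate for latent confounding factors, as noted above.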
19 pages, 977 KiB  
Article
Asymmetric Residual Neural Network for Accurate Human Activity Recognition
by Jun Long, Wuqing Sun, Zhan Yang and Osolo Ian Raymond
Information 2019, 10(6), 203; https://0-doi-org.brum.beds.ac.uk/10.3390/info10060203 - 06 Jun 2019
Cited by 28 | Viewed by 4213
Abstract
Human activity recognition (HAR) using deep neural networks has become a hot topic in human–computer interaction. Machines can effectively identify human naturalistic activities by learning from a large collection of sensor data. Activity recognition is not only an interesting research problem but also has many real-world practical applications. Based on the success of residual networks in achieving a high level of aesthetic representation of automatic learning, we propose a novel asymmetric residual network, named ARN. ARN is implemented using two identical path frameworks consisting of (1) a short time window, which is used to capture spatial features, and (2) a long time window, which is used to capture fine temporal features. The long time window path can be made very lightweight by reducing its channel capacity, while still being able to learn useful temporal representations for activity recognition. In this paper, we mainly focus on proposing a new model to improve the accuracy of HAR. In order to demonstrate the effectiveness of the ARN model, we carried out extensive experiments on benchmark datasets (i.e., OPPORTUNITY, UniMiB-SHAR) and compared the results with some conventional and state-of-the-art learning-based methods. We discuss the influence of network parameters on performance to provide insights about its optimization. Results from our experiments show that ARN is effective in recognizing human activities via wearable datasets. Full article
(This article belongs to the Special Issue Application of Artificial Intelligence in Sports)
9 pages, 738 KiB  
Article
Spelling Correction of Non-Word Errors in Uyghur–Chinese Machine Translation
by Rui Dong, Yating Yang and Tonghai Jiang
Information 2019, 10(6), 202; https://0-doi-org.brum.beds.ac.uk/10.3390/info10060202 - 06 Jun 2019
Cited by 4 | Viewed by 6980
Abstract
This research was conducted to solve the out-of-vocabulary problem caused by Uyghur spelling errors in Uyghur–Chinese machine translation, so as to improve the quality of Uyghur–Chinese machine translation. This paper assesses three spelling correction methods based on machine translation: 1. Using a Bilingual Evaluation Understudy (BLEU) score; 2. Using a Chinese language model; 3. Using a bilingual language model. The best results were achieved in both the spelling correction task and the machine translation task by using the BLEU score for spelling correction. A maximum F1 score of 0.72 was reached for spelling correction, and the translation result increased the BLEU score by 1.97 points, relative to the baseline system. However, the method of using a BLEU score for spelling correction requires the support of a bilingual parallel corpus, which is a supervised method that can be used in corpus pre-processing. Unsupervised spelling correction can be performed by using either a Chinese language model or a bilingual language model. These two methods can be easily extended to other languages, such as Arabic. Full article
(This article belongs to the Special Issue Natural Language Processing and Text Mining)
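As a rough illustration of the first method (selection by BLEU score), the sketch below ranks candidate spelling corrections of a source sentence by the BLEU score their machine translations achieve against a reference translation from a parallel corpus. The translate() function is a placeholder assumption (it just tokenizes here so the example runs), and this is a reading of the general idea, not the paper's Uyghur–Chinese pipeline:

from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def translate(sentence):
    # Placeholder for a real Uyghur-to-Chinese MT system; echoing the
    # tokenized input keeps the example self-contained and runnable.
    return sentence.split()

def best_correction(candidates, reference_translation):
    # Pick the candidate whose translation scores highest against the
    # reference translation (this is why a parallel corpus is needed).
    smooth = SmoothingFunction().method1
    return max(
        candidates,
        key=lambda c: sentence_bleu(
            [reference_translation], translate(c), smoothing_function=smooth
        ),
    )

candidates = ["an example sentense", "an example sentence"]
reference = "an example sentence".split()
print(best_correction(candidates, reference))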
14 pages, 1161 KiB  
Article
Project Procurement Method Selection Using a Multi-Criteria Decision-Making Method with Interval Neutrosophic Sets
by Limin Su, Tianze Wang, Lunyan Wang, Huimin Li and Yongchao Cao
Information 2019, 10(6), 201; https://0-doi-org.brum.beds.ac.uk/10.3390/info10060201 - 05 Jun 2019
Cited by 7 | Viewed by 4717
Abstract
Project procurement method (PPM) selection influences the efficiency of project implementation. Owners are presented with different options for project delivery. However, selecting the appropriate PPM poses great challenges to owners, given the existence of ambiguous information. The interval neutrosophic set (INS) shows power in handling imprecise and ambiguous information. This paper aims to develop a PPM selection model under an interval neutrosophic environment for owners. The main contributions of this paper are as follows: (1) The similarity measure is innovatively introduced with interval neutrosophic information to handle the PPM selection problem. (2) The similarity measure based on minimum and maximum operators is applied to construct a decision-making model for PPM selection, through considering the truth, falsity, and indeterminacy memberships simultaneously. (3) This study establishes a PPM selection method with INS by applying similarity measures that take into account the determinacy, indeterminacy, and hesitation of the decision experts when giving an evaluation value. A case study on selecting a PPM is presented to show the applicability of the proposed approach. Finally, the results of the proposed method are compared with those of existing methods, which exhibit the superiority of the proposed PPM selection method. Full article
17 pages, 1094 KiB  
Article
Optimization and Security in Information Retrieval, Extraction, Processing, and Presentation on a Cloud Platform
by Adrian Alexandrescu
Information 2019, 10(6), 200; https://0-doi-org.brum.beds.ac.uk/10.3390/info10060200 - 05 Jun 2019
Cited by 7 | Viewed by 4386
Abstract
This paper presents the processing steps needed in order to have a fully functional vertical search engine. Four actions are identified (i.e., retrieval, extraction, presentation, and delivery) and are required to crawl websites, get the product information from the retrieved webpages, process that data, and offer the end-user the possibility of looking for various products. The whole application flow is focused on low resource usage, and especially on the delivery action, which consists of a web application that uses cloud resources and is optimized for cost efficiency. Novel methods for representing the crawl and extraction template, for product index optimizations, and for deploying and storing data in the cloud database are identified and explained. In addition, key aspects are discussed regarding ethics and security in the proposed solution. A practical use-case scenario is also presented, where products are extracted from seven online board and card game retailers. Finally, the potential of the proposed solution is discussed in terms of researching new methods for improving various aspects of the proposed solution in order to increase cost efficiency and scalability. Full article
(This article belongs to the Special Issue ICSTCC 2018: Advances in Control and Computers)
13 pages, 1442 KiB  
Article
A Robust Automatic Ultrasound Spectral Envelope Estimation
by Jinkai Li, Yi Zhang, Xin Liu, Paul Liu, Hao Yin and Dong C. Liu
Information 2019, 10(6), 199; https://0-doi-org.brum.beds.ac.uk/10.3390/info10060199 - 05 Jun 2019
Cited by 4 | Viewed by 4219
Abstract
Accurate estimation of the ultrasound Doppler spectrogram envelope is essential for clinical pathological diagnosis of various cardiovascular diseases. However, due to intrinsic spectral broadening in the power spectrum and speckle noise existing in ultrasound images, it is difficult to obtain the accurate maximum velocity. Each of the standard existing methods has its own limitations and does not work well in complicated recordings. This paper proposes a robust automatic spectral envelope estimation method that is more accurate in phantom recordings and various in-vivo recordings than the currently used methods. Comparisons were performed on phantom recordings of the carotid artery with varying noise and additional in-vivo recordings. The accuracy of the proposed method was on average 8% greater than that of the existing methods. The experimental results demonstrate the wide applicability under different blood conditions and the robustness of the proposed algorithm. Full article
13 pages, 589 KiB  
Article
Investigating Users’ Continued Usage Intentions of Online Learning Applications
by Zhi Ji, Zhenhua Yang, Jianguo Liu and Changrui Yu
Information 2019, 10(6), 198; https://0-doi-org.brum.beds.ac.uk/10.3390/info10060198 - 04 Jun 2019
Cited by 20 | Viewed by 6327
Abstract
Understanding users’ continued usage intentions for online learning applications is significant for online education. In this paper, we explore a scale to measure users’ usage intentions of online learning applications and empirically investigate the factors that influence users’ continued usage intentions based on data from 275 participants. Using the extended Technology Acceptance Model (TAM) and Structural Equation Modelling (SEM), the results show that males or users off campus are more likely to use online learning applications; that system characteristics (SC), social influence (SI), and perceived ease of use (PEOU) positively affect the perceived usefulness (PU), with coefficients of 0.74, 0.23, and 0.04, which imply that SC is the most significant to the PU of online learning applications; that facilitating conditions (FC) and individual differences (ID) positively affect the PEOU, with coefficients of 0.72 and 0.37, which suggest that FC is more important to the PEOU of online learning applications; and that both PEOU and PU positively affect the behavioral intention (BI), with coefficients of 0.83 and 0.51, which indicate that PEOU is more influential than PU to users’ continued usage intentions. In particular, the output quality, perceived enjoyment, and objective usability are critical to users’ continued usage intentions of online learning applications. This study contributes to the technology acceptance research field in the context of a fast-growing market, namely online learning applications. Our methods and results would benefit both academics and managers with useful suggestions for research directions and user-centered strategies for the design of online learning applications. Full article
(This article belongs to the Section Information Applications)
15 pages, 3020 KiB  
Article
Multi-Sensor Activity Monitoring: Combination of Models with Class-Specific Voting
by Lingfei Mo, Lujie Zeng, Shaopeng Liu and Robert X. Gao
Information 2019, 10(6), 197; https://0-doi-org.brum.beds.ac.uk/10.3390/info10060197 - 04 Jun 2019
Cited by 2 | Viewed by 3549
Abstract
This paper presents a multi-sensor model combination system with class-specific voting for physical activity (PA) monitoring, which combines multiple classifiers obtained by splicing sensor data from different nodes into new data frames to improve the diversity of model inputs. Data obtained from a wearable multi-sensor wireless integrated measurement system (WIMS) consisting of two accelerometers and one ventilation sensor have been analysed to identify 10 different activity types of varying intensities performed by 110 voluntary participants. It is noted that each classifier shows better performance on some specific activity classes. Through class-specific weighted majority voting, the recognition accuracy of the 10 PA types has been improved from 86% to 92% compared with the non-combination approach. Furthermore, the combination method has been shown to be effective in reducing the subject-to-subject variability (standard deviation of recognition accuracies across subjects) in activity recognition and has better performance in monitoring physical activities of varying intensities than traditional homogeneous classifiers. Full article
(This article belongs to the Special Issue Activity Monitoring by Multiple Distributed Sensing)
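A minimal sketch of class-specific weighted majority voting follows; in practice the weights would be per-classifier, per-class validation accuracies, and the numbers here are invented for illustration rather than taken from the WIMS experiments:

import numpy as np

def class_specific_vote(predictions, class_weights, n_classes):
    # predictions: predicted class index from each classifier.
    # class_weights[m][c]: weight of classifier m when it votes for
    # class c (e.g., its validation accuracy on that class).
    scores = np.zeros(n_classes)
    for m, c in enumerate(predictions):
        scores[c] += class_weights[m][c]
    return int(np.argmax(scores))

# Classifiers 0 and 1 are weak on class 0, so the single confident
# vote for class 1 overturns the plain majority.
weights = np.array([[0.40, 0.70, 0.60],
                    [0.45, 0.65, 0.70],
                    [0.60, 0.95, 0.80]])
print(class_specific_vote([0, 0, 1], weights, n_classes=3))  # -> 1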
16 pages, 10408 KiB  
Article
A Hierarchical Resource Allocation Scheme Based on Nash Bargaining Game in VANET
by Xiaoshuai Zhao, Xiaoyong Zhang and Yingjuan Li
Information 2019, 10(6), 196; https://0-doi-org.brum.beds.ac.uk/10.3390/info10060196 - 04 Jun 2019
Cited by 7 | Viewed by 3311
Abstract
Due to the selfishness of vehicles and the scarcity of spectrum resources, how to realize fair and effective spectrum resource allocation has become one of the primary tasks in VANET. In this paper, we propose a hierarchical resource allocation scheme based on the Nash bargaining game. Firstly, we analyze the spectrum resource allocation problem between different Road Side Units (RSUs), which obtain resources from the central cloud. Thereafter, considering the differences among vehicular users (VUEs), we construct a matching degree index between VUEs and RSUs. Then, we deal with the spectrum resource allocation problem between VUEs and RSUs. To reduce computational overhead, we transform the original problem into two sub-problems, power allocation and slot allocation, according to the time division multiplexing mechanism. The simulation results show that the proposed scheme can fairly and effectively allocate resources in VANET according to VUEs’ demand. Full article
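For orientation, the Nash bargaining solution underlying such schemes takes the standard form (textbook notation; the symbols here are illustrative):

maximize \prod_i ( U_i(r_i) − d_i ) subject to \sum_i r_i <= R and U_i(r_i) >= d_i,

where r_i is the resource allocated to RSU or VUE i, U_i its utility, d_i its disagreement point, and R the total spectrum resource; the product objective is what yields the fairness properties the hierarchical scheme exploits at both the RSU and VUE levels.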
14 pages, 4315 KiB  
Article
Multi-Regional Online Car-Hailing Order Quantity Forecasting Based on the Convolutional Neural Network
by Zihao Huang, Gang Huang, Zhijun Chen, Chaozhong Wu, Xiaofeng Ma and Haobo Wang
Information 2019, 10(6), 193; https://0-doi-org.brum.beds.ac.uk/10.3390/info10060193 - 04 Jun 2019
Cited by 9 | Viewed by 3969
Abstract
With the development of online car-hailing services, the demand for travel prediction is increasing in order to reduce the information asymmetry between passengers and drivers of online car-hailing. This paper proposes a travel demand forecasting model named OC-CNN, based on the convolutional neural network, to forecast the travel demand. In order to make full use of the spatial characteristics of the travel demand distribution, this paper meshes the prediction area and creates a travel demand dataset with a grid-like graphical structure to preserve its spatial properties. Taking advantage of the convolutional neural network in image feature extraction, the historical demand data of the first twenty-five minutes of the entire region are used as the model input to predict the travel demand for the next five minutes. In order to verify the performance of the proposed method, one month of online car-hailing data from the Chengdu Fourth Ring Road area is used. The results show that the model successfully extracts the spatiotemporal features of the data, and the prediction accuracies of the proposed method are superior to those of the representative methods, including the Bayesian Ridge Model, Linear Regression, Support Vector Regression, and Long Short-Term Memory networks. Full article
(This article belongs to the Special Issue Machine Learning on Scientific Data and Information)
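A minimal sketch of the input/output idea (a stack of historical five-minute demand grids in, one predicted grid out) is shown below with a small convolutional network; the 16x16 grid, the layer sizes, and the use of five historical grids (one reading of the abstract's twenty-five minutes of history) are illustrative assumptions, not the OC-CNN architecture:

import torch
import torch.nn as nn

# Five past 5-minute demand grids enter as input channels;
# the network outputs one grid of per-cell demand for the next interval.
model = nn.Sequential(
    nn.Conv2d(in_channels=5, out_channels=16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=1),
)

history = torch.rand(8, 5, 16, 16)   # batch of 8 samples, 5 grids, 16x16 cells
next_grid = model(history)           # shape: (8, 1, 16, 16)
print(next_grid.shape)

# Training would minimize, e.g., the MSE against the observed next grid:
loss = nn.MSELoss()(next_grid, torch.rand(8, 1, 16, 16))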
17 pages, 1123 KiB  
Article
Coupled Least Squares Support Vector Ensemble Machines
by Dickson Keddy Wornyo and Xiang-Jun Shen
Information 2019, 10(6), 195; https://0-doi-org.brum.beds.ac.uk/10.3390/info10060195 - 03 Jun 2019
Cited by 3 | Viewed by 2991
Abstract
The least squares support vector method is a popular data-driven modeling method which shows good performance and has been successfully applied in a wide range of applications. In this paper, we propose a novel coupled least squares support vector ensemble machine (C-LSSVEM). The proposed coupling ensemble helps improve robustness and produces better classification performance than the single-model approach. The proposed C-LSSVEM can choose appropriate kernel types and their parameters in a good coupling strategy with a set of classifiers being trained simultaneously. The proposed method can further minimize the total loss of ensembles in kernel space. Thus, we form an ensemble regressor by co-optimizing and weighting base regressors. Experiments conducted on several datasets, such as artificial datasets, UCI classification datasets, UCI regression datasets, handwritten digits datasets and the NWPU-RESISC45 dataset, indicate that C-LSSVEM performs better in achieving the minimal regression loss and the best classification accuracy relative to selected state-of-the-art regression and classification techniques. Full article
22 pages, 9203 KiB  
Article
A Novel Improved Bat Algorithm Based on Hybrid Parallel and Compact for Balancing an Energy Consumption Problem
by Trong-The Nguyen, Jeng-Shyang Pan and Thi-Kien Dao
Information 2019, 10(6), 194; https://0-doi-org.brum.beds.ac.uk/10.3390/info10060194 - 03 Jun 2019
Cited by 31 | Viewed by 3766
Abstract
This paper proposes an improved Bat algorithm based on hybridizing a parallel and compact method (namely pcBA) for reducing the number of stored variables in optimization problems. The parallel method enhances solution diversity for exploring the search space and shares the computation load, while the compact method saves stored variables used for computation in the optimization. In the experimental section, selected benchmark functions and the energy balance problem in wireless sensor networks (WSNs) are used to evaluate the performance of the proposed method. Results compared with the other methods in the literature demonstrate that the proposed algorithm offers a practical way of reducing both the number of stored memory variables and the running time. Full article
17 pages, 2374 KiB  
Article
Call Details Record Analysis: A Spatiotemporal Exploration toward Mobile Traffic Classification and Optimization
by Kashif Sultan, Hazrat Ali, Adeel Ahmad and Zhongshan Zhang
Information 2019, 10(6), 192; https://0-doi-org.brum.beds.ac.uk/10.3390/info10060192 - 03 Jun 2019
Cited by 13 | Viewed by 5480
Abstract
The information contained within Call Details Records (CDRs) of mobile networks can be used to study the operational efficacy of cellular networks and the behavioural patterns of mobile subscribers. In this study, we extract actionable insights from CDR data and show that there exists a strong spatiotemporal predictability in real network traffic patterns. This knowledge can be leveraged by mobile operators for effective network planning, such as resource management and optimization. Motivated by this, we perform a spatiotemporal analysis of the CDR data publicly available from Telecom Italia. On the basis of these spatiotemporal insights, we propose a framework for mobile traffic classification. Experimental results show that the proposed model, based on a machine learning technique, is able to accurately model and classify network traffic patterns. Furthermore, we demonstrate the application of such insights for resource optimisation. Full article
20 pages, 1575 KiB  
Article
Computation Offloading Strategy in Mobile Edge Computing
by Jinfang Sheng, Jie Hu, Xiaoyu Teng, Bin Wang and Xiaoxia Pan
Information 2019, 10(6), 191; https://0-doi-org.brum.beds.ac.uk/10.3390/info10060191 - 02 Jun 2019
Cited by 43 | Viewed by 4956
Abstract
Mobile phone applications have been rapidly growing and emerging with Internet of Things (IoT) applications in augmented reality, virtual reality, and ultra-clear video due to the development of mobile Internet services in the last three decades. These applications demand intensive computing to support data analysis, real-time video processing, and decision-making for optimizing the user experience. Mobile smart devices play a significant role in our daily life, and such an upward trend is continuous. Nevertheless, these devices suffer from limited resources such as CPU, memory, and energy. Computation offloading is a promising technique that can promote the lifetime and performance of smart devices by offloading local computation tasks to edge servers. In light of this situation, the strategy of computation offloading has been adopted to solve this problem. In this paper, we propose a computation offloading strategy for a scenario with multiple users and multiple mobile edge servers that considers the performance of intelligent devices and server resources. The strategy contains three main stages. In the offloading decision-making stage, the basis of offloading decision-making is put forward by considering the factors of computing task size, computing requirements, the computing capacity of the server, and network bandwidth. In the server selection stage, the candidate servers are evaluated comprehensively by multi-objective decision-making, and the appropriate servers are selected for the computation offloading. In the task scheduling stage, a task scheduling model based on an improved auction algorithm is proposed by considering the time requirements of the computing tasks and the computing performance of the mobile edge computing server. Extensive simulations have demonstrated that the proposed computation offloading strategy can effectively reduce service delay and the energy consumption of intelligent devices, and improve the user experience. Full article
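As background, the classic decision basis such offloading strategies build on compares local and remote completion times (a textbook formulation; the symbols are chosen here for illustration):

T_local = C / f_dev,   T_offload = D / B + C / f_server,

where C is the task's computing requirement (CPU cycles), D its data size, B the available network bandwidth, and f_dev and f_server the processing speeds of the device and the edge server; a task becomes a candidate for offloading when T_offload < T_local (or when the analogous energy comparison favours offloading), which covers exactly the factors the decision-making stage above considers.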