
Information, Volume 11, Issue 9 (September 2020) – 57 articles

Cover Story: In the 20th century, the role of information increased immensely and the speed of information production accelerated. To adapt, people created information-processing technology and began exploring information, information systems, and information processes. Information studies intensified and many theories of information were created. Despite all these efforts, there is no unanimous understanding of the nature and essence of information. In this work, two approaches to information are presented as a dialogue between two researchers. One demonstrates how the general theory of information elucidates the phenomenon of information by explaining its foundations and comprehensive mathematical theory. The other presents an info-autopoietic theory of information based on Bateson’s approach. The goal is to help the reader better understand the phenomenon of information.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
Article
Integrated Question-Answering System for Natural Disaster Domains Based on Social Media Messages Posted at the Time of Disaster
Information 2020, 11(9), 456; https://0-doi-org.brum.beds.ac.uk/10.3390/info11090456 - 21 Sep 2020
Cited by 2 | Viewed by 1182
Abstract
Natural disasters are events that humans cannot control, and Japan has suffered from many such disasters over its long history. Many of these have caused severe damage to human lives and property. These days, numerous Japanese people have gained considerable experience preparing for disasters and are now striving to predict the effects of disasters using social network services (SNSs) to exchange information in real time. Currently, Twitter is the most popular and powerful SNS tool used for disaster response in Japan because it allows users to exchange and disseminate information quickly. However, since almost all of the Japan-related content is written in Japanese, which restricts most of its benefits to Japanese speakers, we feel it is necessary to create a disaster response system that can help people who do not understand Japanese. Accordingly, this paper presents the framework of a question-answering (QA) system that was developed using a Twitter dataset containing more than nine million tweets compiled during the Osaka North Earthquake that occurred on 18 June 2018. We also studied the structure of the questions posed and developed methods for classifying them into particular categories in order to find answers from the dataset using an ontology, word similarity, keyword frequency, and natural language processing. The experimental results presented herein confirm the accuracy of the answers generated by our proposed system. Full article
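The question-classification step described above can be illustrated with a minimal keyword-frequency sketch; the categories and keyword lists below are invented for illustration and are not the taxonomy used in the paper:

```python
# Minimal keyword-frequency question classifier sketch.
# The categories and keyword sets are illustrative inventions,
# not the classification scheme from the paper.
CATEGORY_KEYWORDS = {
    "transport": {"train", "bus", "station", "line", "running"},
    "shelter":   {"shelter", "evacuation", "open", "refuge"},
    "utilities": {"water", "gas", "electricity", "power", "outage"},
}

def classify_question(question: str) -> str:
    """Assign the category whose keywords occur most often in the question."""
    tokens = question.lower().split()
    scores = {cat: sum(tokens.count(kw) for kw in kws)
              for cat, kws in CATEGORY_KEYWORDS.items()}
    return max(scores, key=scores.get)
```

In the paper this frequency signal is combined with an ontology and word-similarity measures; the sketch shows only the simplest component.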

Article
Identification of Malignancies from Free-Text Histopathology Reports Using a Multi-Model Supervised Machine Learning Approach
Information 2020, 11(9), 455; https://0-doi-org.brum.beds.ac.uk/10.3390/info11090455 - 21 Sep 2020
Viewed by 1264
Abstract
We explored various Machine Learning (ML) models to evaluate how each performs in the task of classifying histopathology reports. We trained, optimized, and performed classification with Stochastic Gradient Descent (SGD), Support Vector Machine (SVM), Random Forest (RF), K-Nearest Neighbor (KNN), Adaptive Boosting (AB), Decision Trees (DT), Gaussian Naïve Bayes (GNB), Logistic Regression (LR), and Dummy classifiers. We started with 60,083 histopathology reports, which were reduced to 60,069 after pre-processing. The F1-scores for SVM, SGD, KNN, RF, DT, LR, AB, and GNB were 97%, 96%, 96%, 96%, 92%, 96%, 84%, and 88%, respectively, while the misclassification rates were 3.31%, 5.25%, 4.39%, 1.75%, 3.5%, 4.26%, 23.9%, and 19.94%, respectively. The approximate run times were 2 h, 20 min, 40 min, 8 h, 40 min, 10 min, 50 min, and 4 min, respectively. RF had the longest run time but the lowest misclassification rate on the labeled data. Our study demonstrated the possibility of applying ML techniques to the processing of free-text pathology reports for cancer registries for cancer incidence reporting in a Sub-Saharan Africa setting. This is an important consideration for resource-constrained environments, which can leverage ML techniques to reduce workloads and improve the timeliness of cancer statistics reporting. Full article
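For reference, the two metrics quoted above relate as follows: F1 is the harmonic mean of precision and recall for the positive class, while the misclassification rate is simply 1 minus accuracy. A minimal sketch (not the authors' code):

```python
# F1-score and misclassification rate for a binary classification task.
def f1_score(y_true, y_pred, positive=1):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def misclassification_rate(y_true, y_pred):
    return sum(t != p for t, p in zip(y_true, y_pred)) / len(y_true)
```

Note that the two metrics need not agree in ranking models, which is why the abstract reports both.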

Article
The Role of Artificial Intelligence, MLR and Statistical Analysis in Investigations about the Correlation of Swab Tests and Stress on Health Care Systems by COVID-19
Information 2020, 11(9), 454; https://0-doi-org.brum.beds.ac.uk/10.3390/info11090454 - 21 Sep 2020
Cited by 5 | Viewed by 1466
Abstract
The outbreak of the new Coronavirus (COVID-19) pandemic has prompted investigations on various aspects. This research aims to study the possible correlation between the number of swab tests and the trend of confirmed cases of infection, while paying particular attention to the sickness level. The study is carried out in relation to the Italian case, but the result is of more general importance, particularly for countries with limited ICU (intensive care unit) availability. The statistical analysis showed that, as the number of tests increased, the trend of home isolation cases was positive, while the trends of mild cases admitted to hospitals, intensive care cases, and daily deaths were all negative. The result of the statistical analysis provided the basis for an AI study using an artificial neural network (ANN). In addition, the results were validated using a multivariate linear regression (MLR) approach. Our main result was to identify a significant statistical effect of a reduction of pressure on the health care system due to an increase in tests. The relevance of this result is not confined to the COVID-19 outbreak, because the high demand for hospitalizations and ICU treatments due to this pandemic has an indirect effect on the possibility of guaranteeing adequate treatment for other high-fatality diseases, such as cardiological and oncological ones. Our results show that swab testing may play a significant role in decreasing stress on the health system. Therefore, this case study is relevant, in particular, for plans to control the pandemic in countries with a limited capacity for admissions to ICU units. Full article
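The MLR validation step can be sketched with ordinary least squares on synthetic data; the variables and coefficients below are invented placeholders, not the Italian dataset:

```python
import numpy as np

# Multivariate linear regression sketch (not the authors' code):
# regress a daily outcome (e.g. home-isolation cases) on the number of
# swab tests plus a second covariate, via ordinary least squares.
rng = np.random.default_rng(0)
tests = np.linspace(10_000, 60_000, 50)        # daily swab tests (synthetic)
covariate = rng.normal(size=50)                # placeholder second regressor
X = np.column_stack([np.ones_like(tests), tests, covariate])
y = 100 + 0.004 * tests + 2.0 * covariate      # noiseless synthetic outcome

# coef = [intercept, slope for tests, slope for covariate]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
```

On real data the sign and significance of the `tests` coefficient would carry the paper's claim that more testing correlates with reduced pressure on hospitals.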

Article
Popularity Prediction of Instagram Posts
Information 2020, 11(9), 453; https://0-doi-org.brum.beds.ac.uk/10.3390/info11090453 - 18 Sep 2020
Cited by 9 | Viewed by 2054
Abstract
Predicting the popularity of posts on social networks has taken on significant importance in recent years, and several social media management tools now offer solutions to improve and optimize the quality of published content and to enhance the attractiveness of companies and organizations. Scientific research has recently moved in this direction, with the aim of exploiting advanced techniques such as machine learning, deep learning, natural language processing, etc., to support such tools. In light of the above, in this work we aim to address the challenge of predicting the popularity of a future post on Instagram, by defining the problem as a classification task and by proposing an original approach based on Gradient Boosting and feature engineering, which led us to promising experimental results. The proposed approach exploits big data technologies for scalability and efficiency, and it is general enough to be applied to other social media as well. Full article
(This article belongs to the Special Issue Emerging Trends and Challenges in Supervised Learning Tasks)
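A minimal sketch of the Gradient Boosting classification setup described above, using scikit-learn on invented post features (the feature names, thresholds, and labels are illustrative assumptions, not the authors' engineered features):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Popularity prediction framed as a classification task (a sketch,
# not the authors' pipeline). Features and labels are synthetic.
rng = np.random.default_rng(42)
n = 200
followers = rng.uniform(1e3, 1e6, n)   # account size
hashtags = rng.integers(0, 15, n)      # hashtags per post
hour = rng.integers(0, 24, n)          # posting hour
X = np.column_stack([followers, hashtags, hour])

# Synthetic binary label: "popular" if the account is large.
y = (followers > 5e5).astype(int)

clf = GradientBoostingClassifier(random_state=42).fit(X, y)
```

The paper additionally relies on feature engineering and big data infrastructure for scalability; this fragment only shows the classifier at the core of such a pipeline.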

Article
SlowTT: A Slow Denial of Service against IoT Networks
Information 2020, 11(9), 452; https://0-doi-org.brum.beds.ac.uk/10.3390/info11090452 - 18 Sep 2020
Cited by 6 | Viewed by 1146
Abstract
The security of Internet of Things environments is a critical and trending topic, due to the nature of the networks and the sensitivity of the exchanged information. In this paper, we investigate the security of the Message Queue Telemetry Transport (MQTT) protocol, widely adopted in IoT infrastructures. We exploit two specific weaknesses of MQTT, identified during our research activities, which allow the client to configure the KeepAlive parameter and to craft MQTT packets in order to execute an innovative cyber attack against the MQTT broker. In order to validate the exploitation of such vulnerabilities, we propose SlowTT, a novel “Slow” denial of service attack aimed at targeting MQTT through low-rate techniques, characterized by minimal attack bandwidth and computational power requirements. We validate SlowTT against real MQTT services, by considering both plaintext and encrypted communications and by comparing the effects of the attack when targeting different application daemons and protocol versions. Results show that SlowTT is extremely successful, as it can exploit the identified vulnerabilities to execute a denial of service against the IoT network by keeping the connection alive for a long time. Full article
(This article belongs to the Special Issue Security and Privacy in the Internet of Things)

Article
Decision-Making for Project Delivery System with Related-Indicators Based on Pythagorean Fuzzy Weighted Muirhead Mean Operator
Information 2020, 11(9), 451; https://0-doi-org.brum.beds.ac.uk/10.3390/info11090451 - 17 Sep 2020
Cited by 2 | Viewed by 707
Abstract
An appropriate project delivery system plays an essential role in sustainable construction project management. Due to the complexity of practical problems and the ambiguity of human thinking, selecting an appropriate project delivery system (PDS) is an enormous challenge for owners. This paper aims to develop a PDS selection method for the case of related indicators by combining the advantages of Pythagorean fuzzy sets (PFSs) and the Pythagorean fuzzy weighted Muirhead mean (PFWMM) operator. The contributions of this paper are as follows: (1) This study innovatively introduced the PFWMM operator to deal with PDS selection problems in which the indicators affecting PDS selection are interrelated in a complex environment. (2) A new method of solving indicator weights was proposed, adapted to the related-indicators PDS selection problem, by investigating the differences between the ideal PDS and the alternative PDSs under all indicators. (3) A decision-making framework for PDS selection was constructed by making comprehensive use of the advantages of PFSs and the PFWMM operator in dealing with related-indicators PDS decision-making problems. An example of selecting a PDS is exhibited to illustrate the effectiveness and applicability of the proposed method. Full article
(This article belongs to the Special Issue New Applications in Multiple Criteria Decision Analysis)
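For context, the classical Muirhead mean (MM), of which the PFWMM is a Pythagorean fuzzy extension, aggregates arguments a_1, …, a_n under a parameter vector P = (p_1, …, p_n) that encodes interrelationships among them. The following is the standard MM definition, not notation reproduced from the paper:

```latex
\mathrm{MM}^{P}(a_1,\dots,a_n)
  = \left( \frac{1}{n!} \sum_{\sigma \in S_n} \prod_{j=1}^{n} a_{\sigma(j)}^{\,p_j} \right)^{\!1 / \sum_{j=1}^{n} p_j}
```

Here S_n is the set of all permutations of {1, …, n}; in the Pythagorean fuzzy version this aggregation is applied to the membership and non-membership degrees of the PFS-valued indicator assessments.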

Article
Detecting and Tracking Significant Events for Individuals on Twitter by Monitoring the Evolution of Twitter Followership Networks
Information 2020, 11(9), 450; https://0-doi-org.brum.beds.ac.uk/10.3390/info11090450 - 16 Sep 2020
Cited by 1 | Viewed by 826
Abstract
People publish tweets on Twitter to share everything from global news to their daily life. Abundant user-generated content has made Twitter one of the major channels for people to obtain information about real-world events. Event detection techniques help to extract events from massive amounts of Twitter data. However, most existing techniques are based on Twitter information streams, which contain plenty of noise and polluted content that can affect the accuracy of the detection results. In this article, we present an event discovery method based on changes in a user's followers, which can detect the occurrence of significant events relevant to that particular user. We divide these events into categories according to their positive or negative effect on the specific user. Further, we observe the evolution of individuals' followership networks and analyze the dynamics of these networks. The results show that events have different effects on the evolution of different features of Twitter followership networks. Our findings may play an important role in understanding how patterns of social interaction are impacted by events and can be applied in fields such as public opinion monitoring, disaster warning, crisis management, and intelligent decision making. Full article
(This article belongs to the Special Issue Information Retrieval and Social Media Mining)

Article
An Assessment of Data Location Vulnerability for Human Factors Using Linear Regression and Collaborative Filtering
Information 2020, 11(9), 449; https://0-doi-org.brum.beds.ac.uk/10.3390/info11090449 - 16 Sep 2020
Viewed by 877
Abstract
End-user devices and applications (data locations) are becoming more capable and user friendly and are used in various Health Information Systems (HIS) by employees of many health organizations to perform their day-to-day tasks. Data locations are connected via the internet. In terms of technology, these locations have relatively good information security mechanisms to minimize attacks on and through them. However, human factors are often ignored in their security ecosystem. In this paper, we propose a human factor framework merged with an existing technological framework. We also explore how human factors affect data locations via linear regression computations and rank data location vulnerability using collaborative filtering. Our results show that human factors play a major role in data location breaches. Laptops are ranked as the most susceptible location and electronic medical records as the least. We validate the ranking using the root mean square error. Full article

Article
Bimodal CT/MRI-Based Segmentation Method for Intervertebral Disc Boundary Extraction
Information 2020, 11(9), 448; https://0-doi-org.brum.beds.ac.uk/10.3390/info11090448 - 15 Sep 2020
Cited by 1 | Viewed by 975
Abstract
Intervertebral disc (IVD) localization and segmentation have triggered intensive research efforts in the medical image analysis community, since IVD abnormalities are strong indicators of various spinal cord-related pathologies. Despite the intensive research efforts to address IVD boundary extraction based on MR images, the potential of bimodal approaches, which benefit from complementary information derived from both magnetic resonance imaging (MRI) and computed tomography (CT), has not yet been fully realized. Furthermore, most existing approaches rely on manual intervention or on learning, although sufficiently large and labelled 3D datasets are not always available. In this light, this work introduces a bimodal segmentation method for vertebrae and IVD boundary extraction, which requires a limited amount of intervention and is not based on learning. The proposed method comprises various image processing and analysis stages, including CT/MRI registration, Otsu-based thresholding, and Chan–Vese-based segmentation. The method was applied to 98 expert-annotated pairs of CT and MR spinal cord images with varying slice thicknesses and pixel sizes, which were obtained from 7 patients using different scanners. The experimental results yielded a Dice similarity coefficient of 94.77% for CT and 86.26% for MRI, and a Hausdorff distance of 4.4 pixels for CT and 4.5 pixels for MRI. Experimental comparisons with state-of-the-art CT and MRI segmentation methods led to the conclusion that the proposed method provides a reliable alternative for vertebrae and IVD boundary extraction. Moreover, the segmentation results are utilized to perform a bimodal visualization of the spine, which could potentially aid differential diagnosis with respect to several spine-related pathologies. Full article
(This article belongs to the Special Issue Computer Vision for Biomedical Image Applications)
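The two evaluation metrics reported above can be sketched as follows for binary masks and boundary point sets (a generic implementation, not the authors' evaluation code):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(a, b):
    """Dice similarity 2|A∩B| / (|A| + |B|) for binary segmentation masks."""
    a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two boundary point sets."""
    return max(directed_hausdorff(points_a, points_b)[0],
               directed_hausdorff(points_b, points_a)[0])
```

Dice measures region overlap (1.0 is perfect), while the Hausdorff distance captures the worst-case boundary deviation, which is why both are commonly reported together.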

Article
Social Media Goes Green—The Impact of Social Media on Green Cosmetics Purchase Motivation and Intention
Information 2020, 11(9), 447; https://0-doi-org.brum.beds.ac.uk/10.3390/info11090447 - 15 Sep 2020
Cited by 10 | Viewed by 4971
Abstract
In the 21st century, green consumption has grown into a global trend, which inclines cosmetic companies to be more environmentally friendly and to offer a larger green product portfolio to satisfy these new consumer needs. Social media has contributed to this trend, shaping consumers' attitudes toward more environmentally conscious behavior. The present study applied the Theory of Planned Behavior (TPB) to explain the impact of social media on consumers' purchase intention and motivation (altruism and egoism). Based on an empirical investigation, an online survey was developed to measure the proposed conceptual model. The reliability and validity of the reflective constructs were tested using the partial least squares (PLS) modeling technique. The results indicate the importance of social media for consumers' attitudes, subjective norms, and altruistic and egoistic motivations, and the impact of these variables as antecedents of green cosmetics purchase intention. These results have important theoretical implications: they reveal that external factors, such as social media as an information source, play an important role in the formation of consumer motivation and green cosmetic purchasing intentions. The findings are relevant for marketers seeking to implement better communication strategies on social media to increase consumers' motivations and purchase intention toward green cosmetics. Full article
(This article belongs to the Special Issue Green Marketing)

Article
A Fast Algorithm to Initialize Cluster Centroids in Fuzzy Clustering Applications
Information 2020, 11(9), 446; https://0-doi-org.brum.beds.ac.uk/10.3390/info11090446 - 15 Sep 2020
Viewed by 842
Abstract
The goal of partitioning clustering analysis is to divide a dataset into a predetermined number of homogeneous clusters. The quality of final clusters from a prototype-based partitioning algorithm is highly affected by the initially chosen centroids. In this paper, we propose the InoFrep, a novel data-dependent initialization algorithm for improving computational efficiency and robustness in prototype-based hard and fuzzy clustering. The InoFrep is a single-pass algorithm using the frequency polygon data of the feature with the highest peaks count in a dataset. By using the Fuzzy C-means (FCM) clustering algorithm, we empirically compare the performance of the InoFrep on one synthetic and six real datasets to those of two common initialization methods: Random sampling of data points and K-means++. Our results show that the InoFrep algorithm significantly reduces the number of iterations and the computing time required by the FCM algorithm. Additionally, it can be applied to multidimensional large datasets because of its shorter initialization time and independence from dimensionality due to working with only one feature with the highest number of peaks. Full article
(This article belongs to the Special Issue New Trends in Massive Data Clustering)
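The initialization idea described above can be sketched as follows: build a frequency polygon (histogram) of one feature, detect its local peaks, and take the k tallest peak centres as initial one-dimensional centroids. This is a minimal sketch of the described idea, not the published InoFrep code:

```python
import numpy as np

def peak_init_centroids(x, k, bins=20):
    """Return the centres of the k tallest local histogram peaks of x."""
    counts, edges = np.histogram(x, bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2
    # A local peak is strictly higher than both neighbours
    # (the ends are padded with zero counts).
    padded = np.concatenate([[0], counts, [0]])
    is_peak = (padded[1:-1] > padded[:-2]) & (padded[1:-1] > padded[2:])
    peaks, heights = centers[is_peak], counts[is_peak]
    order = np.argsort(heights)[::-1][:k]       # k tallest peaks
    return np.sort(peaks[order])
```

A single pass over one feature like this is what makes the initialization cheap and dimension-independent, compared with random sampling or K-means++ over the full feature space.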

Article
Determinants of Social Media Usage in Business by Women: Age and Development of the Country
Information 2020, 11(9), 445; https://0-doi-org.brum.beds.ac.uk/10.3390/info11090445 - 15 Sep 2020
Viewed by 1222
Abstract
This paper aims to identify the most important purposes for which women use social media in business, according to respondents' age and the stage of economic development of their respective country. Research was done through an online survey in 2017–2018, followed by an analysis of the results from eight countries: four representing emerging economies and four representing developed economies. Participants responded to questions concerning social technologies and their purposes of usage, as well as resulting job opportunities. Facebook, as the platform that received the highest number of responses in the survey, was analyzed in detail. In this paper, detailed results are presented, including a comparative analysis between the two groups of economies. Findings reveal that in both groups, the usage of Facebook in business is related mostly to a positive experience. The results showed that among women in emerging economies, social media were used more broadly, and from an age perspective, marketing is a key benefit emphasized among older respondents. The communication benefit of Facebook usage in business was noted as a key factor by respondents in groups from both developed and emerging economies. Full article

Article
The Effects of Major Selection Motivations on Dropout, Academic Achievement and Major Satisfactions of College Students Majoring in Foodservice and Culinary Arts
Information 2020, 11(9), 444; https://0-doi-org.brum.beds.ac.uk/10.3390/info11090444 - 14 Sep 2020
Viewed by 891
Abstract
This study aims to determine the effects of major selection motivation on the dropout rate, academic achievement, and major satisfaction of college students majoring in foodservice and culinary arts. To accomplish this, an empirical survey was conducted and analyzed through a structural equation model. The findings showed that students are likely to drop out of college due to a career change or major maladjustment if they choose their major based on college reputation or department recognition rather than their own aptitude. Unlike existing studies, this study has practical implications in showing that students' academic achievement is affected by their relationships and their perceived major satisfaction rather than by their major selection motivations. Full article
(This article belongs to the Special Issue Big Data Integration)

Article
Intelligent Adversary Placements for Privacy Evaluation in VANET
Information 2020, 11(9), 443; https://0-doi-org.brum.beds.ac.uk/10.3390/info11090443 - 14 Sep 2020
Cited by 2 | Viewed by 815
Abstract
Safety applications in Vehicular Ad-hoc Networks (VANETs) often require vehicles to share information such as current position, speed, and vehicle status on a regular basis. This information can be collected to obtain private information about vehicles/drivers, such as home or office locations and frequently visited places, creating serious privacy vulnerabilities. The use of pseudonyms, rather than actual vehicle IDs, can alleviate this problem and several different Pseudonym Management Techniques (PMTs) have been proposed in the literature. These PMTs are typically evaluated assuming a random placement of attacking stations. However, an adversary can utilize knowledge of traffic patterns and PMTs to place eavesdropping stations in a more targeted manner, leading to an increased tracking success rate. In this paper, we propose two new adversary placement strategies and study the impact of intelligent adversary placement on tracking success using different PMTs. The results indicate that targeted placement of attacking stations, based on traffic patterns, road type, and knowledge of PMT used, can significantly increase tracking success. Therefore, it is important to take this into consideration when developing PMTs that can protect vehicle privacy even in the presence of targeted placement techniques. Full article
(This article belongs to the Special Issue Vehicle-To-Everything (V2X) Communication)

Article
Accuracy Assessment of Small Unmanned Aerial Vehicle for Traffic Accident Photogrammetry in the Extreme Operating Conditions of Kuwait
Information 2020, 11(9), 442; https://0-doi-org.brum.beds.ac.uk/10.3390/info11090442 - 14 Sep 2020
Cited by 3 | Viewed by 954
Abstract
This study presents the first accuracy assessment of a low-cost small unmanned aerial vehicle (sUAV) in reconstructing three-dimensional (3D) models of traffic accidents in extreme operating environments. To date, previous studies have focused on the feasibility of adopting sUAVs in traffic accident photogrammetry applications, as well as their accuracy under normal operating conditions. In this study, 3D models of simulated accident scenes were reconstructed using a low-cost sUAV and a cloud-based photogrammetry platform. Several experiments were carried out to evaluate the measurement accuracy at different flight altitudes during high-temperature, low-light, scattered-rain, and dusty high-wind environments. Quantitative analyses are presented to highlight the precision range of the reconstructed traffic accident 3D model. Reported results range from highly accurate to fairly accurate, represented by a root mean squared error (RMSE) between 0.97 and 4.66 and a mean absolute percentage error (MAPE) between 1.03% and 20.2% at normal and extreme operating conditions, respectively. The findings offer insight into the robustness and generalizability of the UAV-based photogrammetry method for traffic accidents in extreme environments. Full article
(This article belongs to the Special Issue UAVs for Smart Cities: Protocols, Applications, and Challenges)
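The two reported error metrics can be computed as follows for paired ground-truth and photogrammetric measurements (a generic sketch, not the authors' evaluation code):

```python
import math

def rmse(actual, predicted):
    """Root mean squared error over paired measurements."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted))
                     / len(actual))

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)
```

RMSE keeps the units of the measurements (and so depends on scene scale), while MAPE is scale-free, which is why the abstract reports both ranges.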

Article
Mobile Phone in the Lives of Young People of Rural Mountainous Areas of Gilgit-Baltistan, Pakistan: Challenges and Opportunities
Information 2020, 11(9), 441; https://0-doi-org.brum.beds.ac.uk/10.3390/info11090441 - 14 Sep 2020
Viewed by 958
Abstract
This research aims to investigate the impact of mobile phones on the lives of young people in the mountainous rural areas of Gilgit-Baltistan (GB). A total of 272 respondents (133 male and 139 female) aged between 16 and 25 years participated in this study. Descriptive statistics (mean, SD, and percentage) and inferential statistics such as the independent-samples t-test were used to analyze demographic data such as age, gender, and district. Regression analysis was used to analyze the relationship between the independent and dependent variables: mobile phone features (M = 3.66, SD = 1.15); the mobile phone as a tool for socio-economic impact (M = 3.80, SD = 1.20); as a fashion symbol (M = 1.29, SD = 0.11); and as a tool for safety (M = 3.91, SD = 1.06). The findings show that 97% (M = 1.026, SD = 0.159) of youths from GB own a mobile phone (47% male and 48% female). The findings also verify that a mobile phone is beneficial to its users in the fields of economics, education, safety, and security. However, using a mobile phone as a status symbol could have a negative impact on the lives of youths. This study recommends that the government develop effective and efficient policies for mobile phone usage and that users be aware of the benefits and risks associated with using a mobile phone in their lives. Full article
(This article belongs to the Section Information and Communications Technology)
Article
A Web-Based Honeypot in IPv6 to Enhance Security
Information 2020, 11(9), 440; https://0-doi-org.brum.beds.ac.uk/10.3390/info11090440 - 12 Sep 2020
Viewed by 1328
Abstract
IPv6 is the next-generation IP protocol that replaces IPv4. It not only expands the pool of network address resources but also solves the problem of connecting many access devices to the Internet. While IPv6 has brought great convenience to the public, related security issues have gradually emerged, and assessing the security situation in IPv6 has become more important. Unlike passive defense, a honeypot is a security device for active defense. The real network application and the fake one disguised by the honeypot are located on a similar subnet and provide the same network application service, but in both cases the behavior logs of unauthorized users are captured. In this manner, and to protect web-based applications from attacks, this article introduces the design and implementation of a web-based honeypot that includes a weak-password module and an SQL injection module, and that supports the IPv6 network, to capture unauthorized access behavior. We also propose the Security Situation Index (SSI), which measures the security situation of the network application environment. The value of the SSI is established from different honeypot-based parameters. There is a firewall outside the test environment, so the obtained data can be treated as real intrusion data, and the captured behavior is not a false positive. Threats can be detected intelligently by deploying honeypots; this paper demonstrates that the honeypot is an excellent means of capturing malicious requests, and that the security of the whole system can be measured with the SSI. Using this information, the administrator can adjust the current security policy, improving the security level of the whole IPv6 network system. Full article
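The abstract does not give the SSI formula, so the following is only a hypothetical sketch of how a honeypot-driven index of this kind could be computed; the weights, event names, and squashing function are all invented:

```python
def security_situation_index(events, weights):
    """Toy Security Situation Index: weighted sum of honeypot event counts,
    squashed to [0, 1] (higher = worse situation). The actual SSI in the
    paper is defined from its own honeypot-based parameters."""
    raw = sum(weights[kind] * count for kind, count in events.items())
    return raw / (raw + 1.0)  # simple bounded score

# Illustrative event weights and counts captured by the two honeypot modules.
weights = {"weak_password_attempt": 1.0, "sql_injection_attempt": 3.0}
events = {"weak_password_attempt": 12, "sql_injection_attempt": 4}
ssi = security_situation_index(events, weights)
```

With a bounded score like this, an administrator could set a threshold above which the security policy is tightened.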
(This article belongs to the Special Issue Cyberspace Security, Privacy & Forensics)
Article
Exploiting the User Social Context to Address Neighborhood Bias in Collaborative Filtering Music Recommender Systems
Information 2020, 11(9), 439; https://0-doi-org.brum.beds.ac.uk/10.3390/info11090439 - 11 Sep 2020
Cited by 6 | Viewed by 1088
Abstract
Recent research in the field of recommender systems focuses on the incorporation of social information into collaborative filtering methods to improve the reliability of recommendations. Social networks enclose valuable data regarding user behavior and connections that can be exploited in this area to infer knowledge about user preferences and social influence. The fact that streaming music platforms have some social functionalities also allows this type of information to be used for music recommendation. In this work, we take advantage of the friendship structure to address a type of recommendation bias derived from the way collaborative filtering methods compute the neighborhood. These methods restrict the rating predictions for a user to the items that have been rated by their nearest neighbors, leaving out other items that might be of interest to them. This problem is different from the popularity bias caused by the power-law distribution of the item rating frequency (long tail), well known in the music domain, although both shortcomings can be related. Our proposal is based on extending and diversifying the neighborhood by capturing trust and homophily effects between users through social structure metrics. The results show an increase in potentially recommendable items while reducing recommendation error rates. Full article
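A minimal sketch of the neighborhood-extension idea, assuming a simple friends and friends-of-friends expansion; the paper's actual social structure metrics for trust and homophily are more elaborate:

```python
def extend_neighborhood(user, knn, friends, max_extra=5):
    """Extend a user's k-nearest-neighbor set with social contacts
    (direct friends first, then friends of friends) not already present,
    so rating predictions can draw on a more diverse item pool."""
    neighborhood = list(knn)
    candidates = [f for f in friends.get(user, []) if f not in neighborhood]
    for f in friends.get(user, []):  # friends of friends
        candidates += [g for g in friends.get(f, [])
                       if g != user and g not in neighborhood and g not in candidates]
    return neighborhood + candidates[:max_extra]

# Toy friendship graph and a k-NN set computed by a standard CF method.
friends = {"u1": ["u2", "u3"], "u2": ["u4"], "u3": ["u5"]}
extended = extend_neighborhood("u1", ["u9", "u2"], friends, max_extra=2)
```

Items rated by the extra neighbors become candidates for recommendation that the original k-NN restriction would have excluded.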
(This article belongs to the Special Issue Information Retrieval and Social Media Mining)
Article
My-AHA: Software Platform to Promote Active and Healthy Ageing
Information 2020, 11(9), 438; https://0-doi-org.brum.beds.ac.uk/10.3390/info11090438 - 11 Sep 2020
Cited by 1 | Viewed by 1518
Abstract
The population is ageing, and the use of technology has improved the quality of life of the senior population. This is confirmed by the increasing number of solutions targeting healthy and active ageing. Such solutions keep track of the daily routine of the elderly and combine it with other relevant information (e.g., biosignals, physical activity, social activity, nutrition) to help identify early signs of decline. Caregivers and elders use this information to adjust their routine, focusing on improving the current condition. With that in mind, we have developed a software platform to support My-AHA, composed of a multi-platform middleware, a decision support system (DSS), and a dashboard. The middleware seamlessly merges data coming from multiple platforms targeting healthy and active ageing, the DSS performs intelligent computation on top of the collected data, and the dashboard provides user interaction with the whole system. To show the potential of the proposed My-AHA software platform, we introduce the My Personal Dashboard web-based application over a frailty use case to illustrate how senior well-being can benefit from the use of technology. Full article
(This article belongs to the Special Issue e-Health Pervasive Wireless Applications and Services (e-HPWAS'19))
Article
Evaluating Richer Features and Varied Machine Learning Models for Subjectivity Classification of Book Review Sentences in Portuguese
Information 2020, 11(9), 437; https://0-doi-org.brum.beds.ac.uk/10.3390/info11090437 - 11 Sep 2020
Viewed by 935
Abstract
Texts published on social media have been a valuable source of information for companies and users, as the analysis of this data helps improve and select products and services of interest. Due to the huge amount of data, techniques for automatically analyzing user opinions are necessary. The research field that investigates these techniques is called sentiment analysis. This paper focuses specifically on the task of subjectivity classification, which aims to predict whether a text passage conveys an opinion. We report the study and comparison of machine learning methods of different paradigms for subjectivity classification of book review sentences in Portuguese, which has been shown to be a challenging domain in the area. Specifically, we explore richer features for the task, using several lexical, centrality-based, and discourse features. We show the contributions of the different feature sets and provide evidence that the combination of lexical, centrality-based, and discourse features produces better results than any of the feature sets individually. Additionally, by analyzing the achieved results and the knowledge acquired by some symbolic machine learning methods, we show that some discourse relations may clearly signal subjectivity. Our corpus annotation also reveals some distinctive discourse structuring patterns for sentence subjectivity. Full article
(This article belongs to the Special Issue Selected Papers from PROPOR 2020)
Article
Atrial Fibrillation Detection Directly from Compressed ECG with the Prior of Measurement Matrix
Information 2020, 11(9), 436; https://0-doi-org.brum.beds.ac.uk/10.3390/info11090436 - 10 Sep 2020
Cited by 1 | Viewed by 998
Abstract
In wearable health monitoring based on compressed sensing, detecting atrial fibrillation directly from the compressed ECG can effectively reduce the time cost of data processing compared with classifying after reconstruction. However, existing methods for atrial fibrillation detection from compressed ECG have not fully exploited the available prior information, resulting in unsatisfactory classification performance, especially in applications that require a high compression ratio (CR). In this paper, we propose a deep learning method to detect atrial fibrillation directly from compressed ECG without reconstruction. Specifically, we design a deep network model for one-dimensional ECG signals, and the measurement matrix is used to initialize the first layer of the model so that the model can obtain more prior information, which improves the classification performance of atrial fibrillation detection from compressed ECG. The experimental results on the MIT-BIH Atrial Fibrillation Database show that when the CR is 10%, the accuracy and F1 score of the proposed method reach 97.52% and 98.02%, respectively. Compared with atrial fibrillation detection from the original ECG, the corresponding accuracy and F1 score are reduced by only 0.88% and 0.69%. Even at a high CR of 90%, the accuracy and F1 score are reduced by only 6.77% and 5.31%, respectively. All of the experimental results demonstrate that the proposed method is superior to other existing methods for atrial fibrillation detection from compressed ECG. Therefore, the proposed method is promising for atrial fibrillation detection in wearable health monitoring based on compressed sensing. Full article
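A rough sketch of the measurement-matrix initialization idea, using NumPy with invented dimensions; the paper's actual network architecture is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 128, 13  # signal length and number of measurements (CR of roughly 10%)

# Random measurement matrix used on the device to compress the ECG segment.
phi = rng.standard_normal((m, n)) / np.sqrt(m)

x = rng.standard_normal(n)  # stand-in for an ECG segment
y = phi @ x                 # compressed measurements, as acquired

# First dense layer initialized from the measurement matrix: with weights
# phi.T, its output approximates a projection back to signal space, giving
# the classifier prior knowledge of how the data was compressed.
w1 = phi.T.copy()
h = np.maximum(w1 @ y, 0.0)  # ReLU(phi^T y): first hidden activation
```

During training, `w1` would be fine-tuned along with the rest of the network rather than kept fixed.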
(This article belongs to the Special Issue Deep Learning in Biomedical Informatics)
Article
Calculated vs. Ad Hoc Publics in the #Brexit Discourse on Twitter and the Role of Business Actors
Information 2020, 11(9), 435; https://0-doi-org.brum.beds.ac.uk/10.3390/info11090435 - 10 Sep 2020
Cited by 1 | Viewed by 940
Abstract
Mobilization theory posits that social media gives a voice to non-traditional actors in socio-political discourse. This study uses network analytics to understand the underlying structure of the Brexit discourse and whether the main sub-networks identify new publics and influencers in political participation, specifically industry stakeholders. Content analytics and peak detection analysis are used to provide greater explanatory value for the organizing themes of these sub-networks. Our findings suggest that the Brexit discourse on Twitter can be largely explained by calculated publics organized around the two campaigns and political parties. Ad hoc communities were identified based on (i) the media, (ii) geo-location, and (iii) the US presidential election. Other than the media, significant sub-communities did not form around industry as a whole or around individual sectors or leaders. Participation by business accounts in the Twitter discourse had limited impact. Full article
Article
Utilization of ICT as a Digital Infrastructure Concerning Disaster Countermeasures in Japan
Information 2020, 11(9), 434; https://0-doi-org.brum.beds.ac.uk/10.3390/info11090434 - 10 Sep 2020
Viewed by 915
Abstract
The present study describes the utilization of Information and Communication Technology (ICT) as a digital infrastructure for disaster countermeasures in Japan. Specifically, the study introduces development cases of systems integrating social media and Geographic Information Systems (GIS) and presents their potential as a digital infrastructure. Additionally, taking Twitter as a familiar digital infrastructure, the study presents its utilization potential based on the case of the heavy-rain disaster in Western Japan in 2018. Because of the close relationship between real and virtual spaces, the key issue is how to make information circulating in the virtual space efficiently and effectively aid rescue and support activities in the real space. The systems above are effective in solving this issue because they can efficiently consolidate the essential information on the digital map of a Web-GIS. Additionally, it is necessary to set rules for the utilization of social media, to sift through the information, and to share only what is necessary with the affected local governments and those involved in rescue and support activities. Furthermore, various communication methods, including verbal calls in addition to ICT, are necessary, especially for information-vulnerable people. Full article
(This article belongs to the Special Issue ICT Enhanced Social Sciences and Humanities)
Article
On the Feasibility of Adversarial Sample Creation Using the Android System API
Information 2020, 11(9), 433; https://0-doi-org.brum.beds.ac.uk/10.3390/info11090433 - 10 Sep 2020
Cited by 1 | Viewed by 1133
Abstract
Due to its popularity, the Android operating system is a critical target for malware attacks. Multiple security efforts have been made on the design of malware detection systems to identify potentially harmful applications. In this sense, machine learning-based systems, leveraging both static and dynamic analysis, have been increasingly adopted to discriminate between legitimate and malicious samples due to their capability of identifying novel variants of malware samples. At the same time, attackers have been developing several techniques to evade such systems, such as the generation of evasive apps, i.e., carefully-perturbed samples that can be classified as legitimate by the classifiers. Previous work has shown the vulnerability of detection systems to evasion attacks, including those designed for Android malware detection. However, most works neglected to bring the evasive attacks onto the so-called problem space, i.e., by generating concrete Android adversarial samples, which requires preserving the app’s semantics and being realistic for human expert analysis. In this work, we aim to understand the feasibility of generating adversarial samples specifically through the injection of system API calls, which are typical discriminating characteristics for malware detectors. We perform our analysis on a state-of-the-art ransomware detector that employs the occurrence of system API calls as features of its machine learning algorithm. In particular, we discuss the constraints that are necessary to generate real samples, and we use techniques inherited from interpretability to assess the impact of specific API calls to evasion. We assess the vulnerability of such a detector against mimicry and random noise attacks. Finally, we propose a basic implementation to generate concrete and working adversarial samples. The attained results suggest that injecting system API calls could be a viable strategy for attackers to generate concrete adversarial samples. However, we point out the low suitability of mimicry attacks and the necessity to build more sophisticated evasion attacks. Full article
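The constraint that API calls can only be injected, never removed (removing the app's own calls would break its semantics), can be sketched as a feature-space mimicry step; the API names, counts, and the "match the benign target" rule below are illustrative only, not the paper's procedure:

```python
def mimicry_injection(sample, benign_target):
    """Mimicry restricted to feature addition: raise each API-call occurrence
    count to at least the benign target's count, never lowering the sample's
    own counts (calls can be injected; removal would break the app)."""
    return {api: max(sample.get(api, 0), benign_target.get(api, 0))
            for api in set(sample) | set(benign_target)}

# Toy occurrence-count feature vectors over system API calls.
malware = {"Cipher.getInstance": 5, "File.delete": 9}
benign = {"Cipher.getInstance": 1, "Log.d": 7}
evasive = mimicry_injection(malware, benign)
```

Note that the malware's original counts survive unchanged, which is one reason mimicry alone tends to be of limited effectiveness against such detectors.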
(This article belongs to the Special Issue New Frontiers in Android Malware Analysis and Detection)
Article
Fusion of Angle Measurements from Hull Mounted and Towed Array Sensors
Information 2020, 11(9), 432; https://0-doi-org.brum.beds.ac.uk/10.3390/info11090432 - 09 Sep 2020
Cited by 1 | Viewed by 867
Abstract
Two sensor arrays, a hull-mounted array and a towed array, are considered for bearings-only tracking. An algorithm is designed to combine the bearing (angle) measurements from both sensor arrays to give a better solution. Using data from two different sensor arrays reduces the observability problem, and the observer need not follow an S-maneuver to attain observability of the process. The performance of the fusion algorithm is comparable to the theoretical Cramér–Rao lower bound and to that of the algorithm using bearing measurements from a single sensor array. Different filters are used to analyze both algorithms, and Monte Carlo runs are carried out to evaluate their performance more accurately. The performance of the fusion algorithm is also evaluated in terms of solution convergence time. Full article
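A minimal sketch of fusing two bearing measurements by inverse-variance weighting on unit vectors, which handles angle wrap-around correctly; the paper's filter-based fusion is more involved, and the variances below are invented:

```python
import math

def fuse_bearings(theta1, var1, theta2, var2):
    """Inverse-variance weighted fusion of two bearing measurements (radians).
    Working on unit vectors avoids the 359°/1° discontinuity."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    s = w1 * math.sin(theta1) + w2 * math.sin(theta2)
    c = w1 * math.cos(theta1) + w2 * math.cos(theta2)
    return math.atan2(s, c)

# Hull-mounted and towed-array bearings of the same target (illustrative values);
# the more accurate towed-array measurement dominates the fused estimate.
fused = fuse_bearings(math.radians(30.0), 4.0, math.radians(34.0), 1.0)
```

In a tracking filter this fused bearing (or both raw bearings) would feed the measurement-update step at each time instant.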
(This article belongs to the Special Issue Data Modeling and Predictive Analytics)
Article
Proposing a Supply Chain Collaboration Framework for Synchronous Flow Implementation in the Automotive Industry: A Moroccan Case Study
Information 2020, 11(9), 431; https://0-doi-org.brum.beds.ac.uk/10.3390/info11090431 - 07 Sep 2020
Cited by 1 | Viewed by 1254
Abstract
The present paper reports on the implementation of synchronous flow, a lean supply chain tool, through a collaborative relationship with suppliers. It consolidates, with a new contribution, the development and application of a supply chain collaboration framework between an automotive manufacturer and first-tier equipment suppliers to achieve a synchronous flow of components. The objective is to provide automotive companies with a decision-making tool for selecting strategic suppliers to collaborate with, examining the collaboration context in terms of motivators, drivers, and barriers, and evaluating collaboration performance. Our contribution is structured as follows. As a first step, an overview of papers on collaboration, lean supply chains, and synchronous flow is provided to identify the key elements of successful collaborative relationships; as a result, a preliminary framework is elaborated. The second step describes the case study of the leading automotive firm RENAULT and its suppliers in Morocco. Based on semi-structured interviews conducted with participants from these companies, the preliminary framework was improved. The next section discusses the obtained results as well as the improved framework. Finally, conclusions and suggestions for further work are included. Full article
(This article belongs to the Special Issue Modeling of Supply Chain Systems)
Article
Enhancing Software Comments Readability Using Flesch Reading Ease Score
Information 2020, 11(9), 430; https://0-doi-org.brum.beds.ac.uk/10.3390/info11090430 - 07 Sep 2020
Viewed by 1500
Abstract
Comments are used to explain the meaning of code and ease communication between programmers themselves, quality assurance auditors, and code reviewers. A tool has been developed to help programmers write readable comments and to measure their readability level. It enhances software readability by providing alternatives to both keywords and comment statements from a local database and an online dictionary, and it also serves as a word-finding query engine for developers. Readability level is measured using three different formulas: the fog index, the Flesch reading ease score, and the Flesch–Kincaid grade level. A questionnaire was distributed to 42 programmers and 35 students to compare the readability of new comments written with the tool against the original comments written by previous programmers and developers. Programmers stated that the comments from the proposed tool had fewer complex words and took less time to read and understand. Nevertheless, this did not significantly affect the understandability of the text, as programmers normally have quite a high level of English. However, the results from students show that the tool affects the understandability of the text and the time taken to read it, while the text complexity results show that the tool produces new comment text that is more readable, as measured by the three studied variables. Full article
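Of the three readability formulas, the Flesch reading ease score has a well-known closed form: 206.835 − 1.015 × (words/sentences) − 84.6 × (syllables/words). A minimal sketch with a deliberately crude syllable counter follows; the tool's own implementation may differ:

```python
import re

def count_syllables(word):
    """Crude syllable count: runs of vowels, with a silent-'e' adjustment."""
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1 and not word.endswith(("le", "ee")):
        n -= 1
    return max(n, 1)

def flesch_reading_ease(text):
    """Flesch reading ease: higher scores mean easier text (90+ is very easy)."""
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

score = flesch_reading_ease("The cat sat on the mat.")
```

Production tools typically use a dictionary-backed syllable count, since regex heuristics like this one miss many English irregularities.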
(This article belongs to the Special Issue Computer Programming Education)
Article
Lift Charts-Based Binary Classification in Unsupervised Setting for Concept-Based Retrieval of Emotionally Annotated Images from Affective Multimedia Databases
Information 2020, 11(9), 429; https://0-doi-org.brum.beds.ac.uk/10.3390/info11090429 - 03 Sep 2020
Cited by 1 | Viewed by 1053
Abstract
Evaluation of document classification is straightforward if complete information on the documents’ true categories exists. In this case, the rank of each document can be accurately determined and evaluated. However, in an unsupervised setting, where the exact document category is not available, lift charts become an advantageous method for evaluating retrieval quality and categorizing ranked documents. We introduce lift charts as binary classifiers of ranked documents and explain how to apply them to the concept-based retrieval of emotionally annotated images as one of the possible retrieval methods for this application. Furthermore, we describe affective multimedia databases using the representative example of the International Affective Picture System (IAPS) dataset, their applications, advantages, and deficiencies, and explain how lift charts may be used as a helpful method for document retrieval in this domain. Optimization of lift charts for recall and precision is also described. A typical document retrieval scenario is presented on a set of 800 affective pictures labeled with an unsupervised glossary. In lift charts-based retrieval using the approximate matching method, the highest attained accuracy, precision, and recall were 51.06%, 47.41%, and 95.89% when optimized for recall, and 81.83%, 99.70%, and 33.56% when optimized for precision. Full article
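A toy sketch of using a lift chart as a binary classifier over a ranked list: accumulate hits by depth, then pick the cut-off that maximizes precision (everything above the cut-off is classified positive). The ranking and the precision-only criterion are illustrative; the paper's optimization is more refined:

```python
def lift_curve(ranked_relevance):
    """Cumulative hits at each cut-off depth of a ranked document list
    (1 = relevant, 0 = not). A lift chart plots these against depth."""
    hits, curve = 0, []
    for rel in ranked_relevance:
        hits += rel
        curve.append(hits)
    return curve

def best_cutoff_for_precision(ranked_relevance):
    """Depth maximizing precision (hits/depth); ties go to the shallowest depth."""
    curve = lift_curve(ranked_relevance)
    precisions = [h / (i + 1) for i, h in enumerate(curve)]
    return precisions.index(max(precisions)) + 1

# Toy relevance judgments for a ranked retrieval result.
ranking = [1, 1, 0, 1, 0, 0, 1, 0]
depth = best_cutoff_for_precision(ranking)
```

Optimizing for recall instead would push the cut-off deeper, trading precision for coverage, which matches the two operating points reported above.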
(This article belongs to the Section Information Processes)
Article
Developing Amaia: A Conversational Agent for Helping Portuguese Entrepreneurs—An Extensive Exploration of Question-Matching Approaches for Portuguese
Information 2020, 11(9), 428; https://0-doi-org.brum.beds.ac.uk/10.3390/info11090428 - 01 Sep 2020
Viewed by 947
Abstract
This paper describes how we tackled the development of Amaia, a conversational agent for Portuguese entrepreneurs. After introducing the domain corpus used as Amaia’s Knowledge Base (KB), we make an extensive comparison of approaches for automatically matching user requests with Frequently Asked Questions (FAQs) in the KB, covering Information Retrieval (IR), approaches based on static and contextual word embeddings, and a model of Semantic Textual Similarity (STS) trained for Portuguese, which achieved the best performance. We further describe how we decreased the model’s complexity and improved scalability, with minimal impact on performance. In the end, Amaia combines an IR library and an STS model with reduced features. Towards a more human-like behavior, Amaia can also answer out-of-domain questions, based on a second corpus integrated in the KB. Such interactions are identified with a text classifier, also described in the paper. Full article
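A bare-bones sketch of the IR-style question matching that such an agent combines with an STS model; this bag-of-words cosine baseline is illustrative only, not the paper's actual library or model:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def match_faq(request, faqs):
    """Return the FAQ question most similar to the user request."""
    bow = lambda s: Counter(s.lower().split())
    q = bow(request)
    return max(faqs, key=lambda f: cosine(q, bow(f)))

# Invented English stand-ins for the Portuguese FAQ corpus.
faqs = ["how do I register a company", "what taxes does a company pay"]
best = match_faq("register a new company", faqs)
```

A trained STS model replaces the raw cosine score with a learned similarity, which is what gave the best performance in the paper.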
(This article belongs to the Special Issue Selected Papers from PROPOR 2020)
Article
Modeling the Paraphrase Detection Task over a Heterogeneous Graph Network with Data Augmentation
Information 2020, 11(9), 422; https://0-doi-org.brum.beds.ac.uk/10.3390/info11090422 - 01 Sep 2020
Cited by 2 | Viewed by 889
Abstract
Paraphrase detection is a Natural-Language Processing (NLP) task that aims at automatically identifying whether two sentences convey the same meaning (even with different words). For the Portuguese language, most works model this task as a machine-learning solution, extracting features and training a classifier. In this paper, following a different line, we explore a graph structure representation and model the paraphrase identification task over a heterogeneous network. We also adopt a back-translation strategy for data augmentation to balance the dataset we use. Our approach, although simple, outperforms the best results reported for the paraphrase detection task in Portuguese, showing that graph structures may better capture the semantic relatedness between sentences. Full article
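A minimal sketch of the back-translation augmentation step, with a placeholder translator: any MT system could be plugged in as `translate`, and the identity function below only demonstrates the data flow, not real translation:

```python
def back_translate(sentence, translate):
    """Back-translation: translate to a pivot language and back, yielding a
    (hopefully) meaning-preserving paraphrase of the input sentence.
    `translate(text, src, tgt)` is a placeholder for any MT system."""
    pivot = translate(sentence, "pt", "en")
    return translate(pivot, "en", "pt")

def augment_minority(pairs, label, translate):
    """Add a back-translated copy of each example of the minority label,
    a simple way to balance a paraphrase-detection dataset."""
    extra = [(back_translate(s, translate), label) for s, l in pairs if l == label]
    return pairs + extra

# Identity stand-in for a real translator, just to show the pipeline shape.
identity = lambda text, src, tgt: text
data = [("frase um", 1), ("frase dois", 0)]
balanced = augment_minority(data, 1, identity)
```

With a real translator, the appended examples would be genuine paraphrases rather than verbatim copies, adding lexical variety to the minority class.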
(This article belongs to the Special Issue Selected Papers from PROPOR 2020)