Digital and Social Media in the Disinformation Age

A special issue of Future Internet (ISSN 1999-5903). This special issue belongs to the section "Internet of Things".

Deadline for manuscript submissions: closed (19 November 2021) | Viewed by 44567

Special Issue Editor


Dr. Jari Jussila
Guest Editor
Häme University of Applied Sciences, Finland
Interests: social media; big social data analytics; disinformation literacy; digital competence; design science

Special Issue Information

Dear Colleagues,

This Special Issue is dedicated to digital and social media in the disinformation age. It covers theories, methods, and tools for detecting and processing disinformation and fake news in digital and social media; the theoretical and conceptual constructs of disinformation literacy and news literacy; digital competence and information literacy skills; the dark side of social media; and approaches for mitigating the generation and spread of misinformation and disinformation in digital and social media.

Troll farms, fake news, and the propagation of misinformation and disinformation in general are some of the dark-side phenomena of social media that present large risks for individuals, communities, firms, and society at large. This Issue aims to promote solutions and debate related to combating the dark side of social media. Researchers from different disciplines and methodological backgrounds are invited to discuss new ideas, research questions, recent results, and future challenges in this emerging area of research and public interest.

Potential topics include, but are not limited to:

  • Disinformation detection from digital and social media;
  • Disinformation, misinformation, and news literacy;
  • Detecting and mitigating fake news, deepfakes, dark advertising, and troll farms;
  • Understanding and coping with the dark side of social media;
  • Development of digital competence and information literacy skills related to social media.

Dr. Jari Jussila
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, use the online submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers are published continuously in the journal (as soon as accepted) and listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Future Internet is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Disinformation
  • Disinformation literacy
  • Fake news
  • News literacy
  • Deepfakes
  • Dark advertising
  • Dark side of social media
  • Digital competence

Published Papers (11 papers)

Research

14 pages, 5780 KiB  
Article
Face Swapping Consistency Transfer with Neural Identity Carrier
by Kunlin Liu, Ping Wang, Wenbo Zhou, Zhenyu Zhang, Yanhao Ge, Honggu Liu, Weiming Zhang and Nenghai Yu
Future Internet 2021, 13(11), 298; https://doi.org/10.3390/fi13110298 - 22 Nov 2021
Cited by 2 | Viewed by 2784
Abstract
Deepfake aims to swap the face in an image with someone else's likeness in a plausible manner. Existing methods usually perform deepfake generation frame by frame, ignoring video consistency and producing incoherent results. To address this problem, we propose a novel framework, Neural Identity Carrier (NICe), which learns an identity transformation from an arbitrary face-swapping proxy via a U-Net. By modeling the incoherence between frames as noise, NICe naturally suppresses its disturbance and preserves the primary identity information. Concretely, NICe takes the original frame as input and learns the transformation supervised by swapped pseudo-labels. As the temporal incoherence has an uncertain, stochastic pattern, NICe can filter out such outliers and maintain the target content through uncertainty prediction. With the predicted temporally stable appearance, NICe enhances detail by enforcing 3D geometry consistency, learning fine-grained facial structure across poses. In this way, NICe guarantees the temporal stability of deepfake approaches and predicts detailed results without over-smoothing. Extensive experiments on benchmarks demonstrate that NICe significantly improves the quality of existing deepfake methods at the video level. Moreover, data generated by our method can benefit video-level deepfake detection methods.
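To make the core mechanism concrete, here is a minimal, illustrative PyTorch sketch of the general idea the abstract describes: a small U-Net that maps an original frame to a face-swapped frame, trained against pseudo-labels produced by an arbitrary frame-wise face swapper. The architecture, L1 loss, and dummy data below are assumptions for illustration, not the authors' implementation (which adds uncertainty prediction and 3D geometry constraints).

```python
# Illustrative sketch only: a tiny U-Net fitting a frame-wise identity
# transformation to a face-swapping proxy's outputs (pseudo-labels).
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(3, 16)          # full-resolution encoder
        self.enc2 = conv_block(16, 32)         # half-resolution encoder
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec = conv_block(32 + 16, 16)     # decoder with skip connection
        self.out = nn.Conv2d(16, 3, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d = self.dec(torch.cat([self.up(e2), e1], dim=1))  # skip from e1
        return torch.sigmoid(self.out(d))

net = TinyUNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-4)
frames = torch.rand(4, 3, 128, 128)   # dummy original frames
pseudo = torch.rand(4, 3, 128, 128)   # dummy proxy (swapped) frames as labels
opt.zero_grad()
loss = F.l1_loss(net(frames), pseudo)  # fit the transformation to the proxy
loss.backward()
opt.step()
```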

39 pages, 6772 KiB  
Article
University Community Members’ Perceptions of Labels for Online Media
by Ryan Suttle, Scott Hogan, Rachel Aumaugher, Matthew Spradling, Zak Merrigan and Jeremy Straub
Future Internet 2021, 13(11), 281; https://doi.org/10.3390/fi13110281 - 31 Oct 2021
Cited by 6 | Viewed by 1780
Abstract
Fake news is prevalent in society. A variety of methods have been used in an attempt to mitigate the spread of misinformation and fake news, ranging from machine learning detection to paying fact checkers to manually verify media. In this paper, three studies were conducted at two universities with different regional demographic characteristics to gain a better understanding of respondents' perceptions of online media labeling techniques. The first study deals with which fields should appear on a media label. The second looks into which types of informative labels respondents would use. The third focuses on blocking-type labels. Participants' perceptions, preferences, and results are analyzed by demographic characteristics.

15 pages, 1760 KiB  
Article
A Retrospective Analysis of the COVID-19 Infodemic in Saudi Arabia
by Ashwag Alasmari, Aseel Addawood, Mariam Nouh, Wajanat Rayes and Areej Al-Wabil
Future Internet 2021, 13(10), 254; https://doi.org/10.3390/fi13100254 - 30 Sep 2021
Cited by 9 | Viewed by 3196
Abstract
COVID-19 has had broad disruptive effects on economies, healthcare systems, governments, societies, and individuals. Uncertainty concerning the scale of this crisis has given rise to countless rumors, hoaxes, and misinformation. Much of this conversation and misinformation about the pandemic now occurs online, in particular on social media platforms such as Twitter. This study took a data-driven approach to map the contours of misinformation and to contextualize the COVID-19 pandemic with regard to socio-religious-political information. The work combines quantitative and qualitative methodologies to assess how information-exchanging behaviors can be used to minimize the effects of emergent misinformation. The study found that social media platforms were the most significant source of rumors, transmitting information rapidly through the community: WhatsApp users accounted for about 46% of rumor sources on online platforms, while Twitter showed a declining trend in rumors of 41%. Moreover, the results indicate that the second-most common type of misinformation concerned pharmaceutical companies, while a prevalent type of misinformation spreading worldwide during the pandemic concerned biological warfare. This combined retrospective analysis suggests that social media, through varying approaches to public discourse, can contribute to efficient public health responses.

20 pages, 362 KiB  
Article
Machine Learning in Detecting COVID-19 Misinformation on Twitter
by Mohammed N. Alenezi and Zainab M. Alqenaei
Future Internet 2021, 13(10), 244; https://doi.org/10.3390/fi13100244 - 23 Sep 2021
Cited by 29 | Viewed by 5005
Abstract
Social media platforms such as Facebook, Instagram, and Twitter are an inescapable part of daily life. These platforms are effective tools for disseminating news, photos, and other types of information, but alongside the convenience they offer, they are often used to propagate malicious data or information. This misinformation may mislead users and even have a dangerous impact on a society's culture, economy, and healthcare. The propagation of such an enormous amount of misinformation is difficult to counter; the spread of misinformation related to the COVID-19 pandemic and its treatment and vaccination may pose severe challenges for each country's frontline workers. It is therefore essential to build effective machine-learning (ML) models for detecting misinformation about COVID-19. In this paper, we propose three such detection models: long short-term memory (LSTM) networks, a special type of recurrent neural network (RNN); a multichannel convolutional neural network (MC-CNN); and k-nearest neighbors (KNN). Simulations were conducted to evaluate the performance of the proposed models in terms of various evaluation metrics, and the proposed models obtained superior results to those reported in the literature.
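As a concrete illustration of one of the three model families named above, the following is a minimal sketch of an LSTM tweet classifier in Keras. The vocabulary size, sequence length, architecture, and dummy data are assumptions for illustration, not the paper's configuration.

```python
# Minimal illustrative LSTM misinformation classifier (not the paper's model).
import numpy as np
from tensorflow.keras import layers, models

VOCAB, MAXLEN = 20000, 60   # assumed vocabulary size and padded tweet length

model = models.Sequential([
    layers.Embedding(VOCAB, 64),            # token ids -> dense vectors
    layers.LSTM(64),                        # the "special type of RNN" above
    layers.Dense(1, activation="sigmoid"),  # P(tweet is misinformation)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Dummy stand-ins for tokenized, padded tweets and binary labels.
X = np.random.randint(0, VOCAB, size=(256, MAXLEN))
y = np.random.randint(0, 2, size=(256,))
model.fit(X, y, epochs=1, batch_size=32, verbose=0)
print(model.predict(X[:2], verbose=0))      # misinformation probabilities
```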

14 pages, 628 KiB  
Article
Socioeconomic Correlates of Anti-Science Attitudes in the US
by Minda Hu, Ashwin Rao, Mayank Kejriwal and Kristina Lerman
Future Internet 2021, 13(6), 160; https://doi.org/10.3390/fi13060160 - 19 Jun 2021
Cited by 3 | Viewed by 2668
Abstract
Successful responses to societal challenges require sustained behavioral change. However, as responses to the COVID-19 pandemic in the US showed, political partisanship and mistrust of science can reduce public willingness to adopt recommended behaviors such as wearing a mask or receiving a vaccination. To better understand this phenomenon, we explored attitudes toward science using social media posts (tweets) that were linked to counties in the US through their locations. The data allowed us to study how attitudes towards science relate to the socioeconomic characteristics of the communities from which people tweet. Our analysis revealed three types of communities with distinct behaviors: those in large metro centers, smaller urban places, and rural areas. While partisanship and race are strongly associated with the share of anti-science users across all communities, income was negatively associated with anti-science attitudes in suburban areas and positively associated with them in rural areas. We observed that negative high-arousal emotions are expressed in tweets by many anti-science users in suburban and rural communities, but not in large urban places. These trends were not apparent when pooled across all counties. In addition, we found that anti-science attitudes expressed five years earlier were significantly associated with lower COVID-19 vaccination rates. Our analysis demonstrates the feasibility of using spatially resolved social media data to monitor public attitudes on issues of social importance.

16 pages, 1028 KiB  
Article
Text Analysis Methods for Misinformation-Related Research on Finnish Language Twitter
by Jari Jussila, Anu Helena Suominen, Atte Partanen and Tapani Honkanen
Future Internet 2021, 13(6), 157; https://doi.org/10.3390/fi13060157 - 17 Jun 2021
Cited by 8 | Viewed by 3437
Abstract
The dissemination of disinformation and fabricated content on social media is growing. Yet little is known about which Twitter data analysis methods work well for languages (such as Finnish) in which words are formed from stems and endings together with derivation and compounding. Furthermore, there is a need to understand which themes linked with misinformation—and the concepts related to it—manifest in Twitter discourse in different countries and language areas. To address this issue, this study explores misinformation and its related concepts (disinformation, fake news, and propaganda) in Finnish-language tweets. We utilized (1) word cloud clustering, (2) topic modeling, and (3) word count analysis and clustering to detect and analyze misinformation-related concepts and the themes connected to them in Finnish-language Twitter discussions. Our results are two-fold: those concerning functional data analysis methods and those concerning the themes connected in discourse to the misinformation-related concepts. We found that each method has critical limitations on its own, especially the automated methods when processing Finnish, yet combined they bring value to the analysis. Moreover, we discovered that politics, both internal and external, is prominent in Twitter discussions connected with misinformation and its related concepts of disinformation, fake news, and propaganda.
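As an illustration of the topic-modeling step, here is a small sketch using scikit-learn's LDA over a handful of invented Finnish-language tweets. The tweets, vectorizer settings, and topic count are assumptions, and the abstract's caveat applies: raw token counts handle Finnish inflection and compounding poorly without stemming or lemmatization.

```python
# Illustrative topic modeling over (invented) Finnish tweets with scikit-learn.
# Note: raw token counts treat each inflected Finnish word form as a distinct
# token, which is exactly the limitation the paper highlights.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = [                                   # invented example tweets
    "disinformaatio leviää sosiaalisessa mediassa nopeasti",
    "valeuutiset ja propaganda näkyvät politiikan keskustelussa",
    "misinformaatio koskee sekä sisäpolitiikkaa että ulkopolitiikkaa",
]
vec = CountVectorizer()
X = vec.fit_transform(tweets)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-3:][::-1]]
    print(f"topic {k}: {' '.join(top)}")
```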

19 pages, 2957 KiB  
Article
Memetics of Deception: Spreading Local Meme Hoaxes during COVID-19 1st Year
by Raúl Rodríguez-Ferrándiz, Cande Sánchez-Olmos, Tatiana Hidalgo-Marí and Estela Saquete-Boro
Future Internet 2021, 13(6), 152; https://doi.org/10.3390/fi13060152 - 10 Jun 2021
Cited by 9 | Viewed by 4091
Abstract
The central thesis of this paper is that memetic practices can be crucial to understanding deception at present, when hoaxes have increased globally due to COVID-19. We therefore employ existing memetic theory to describe the qualities and characteristics of meme hoaxes in terms of the way they are replicated by altering some aspects of the original and then shared on social media platforms in order to connect global and local issues. The sample comprised hoaxes retrieved from, and related to, the local territory of the province of Alicante (Spain) during the first year of the pandemic (n = 35). Once typology, hoax topics, and memetic qualities were identified, we analysed their characteristics according to form, following Shifman (2014), and then their content and stance concordances both within and outside our sample (Spain and abroad). The results show, firstly, that the hoaxes are mainly disinformation and are related to the pandemic. Secondly, despite the notion that local hoaxes are linked to local circumstances that are difficult to extrapolate, our conclusions demonstrate their extraordinary memetic and "glocal" capacity: they rapidly adapt hoaxes from other places to local areas, very often supplanting reliable sources, thereby demonstrating consistency and opportunism.

26 pages, 4663 KiB  
Article
Protection from ‘Fake News’: The Need for Descriptive Factual Labeling for Online Content
by Matthew Spradling, Jeremy Straub and Jay Strong
Future Internet 2021, 13(6), 142; https://doi.org/10.3390/fi13060142 - 28 May 2021
Cited by 19 | Viewed by 4872
Abstract
So-called ‘fake news’—deceptive online content that attempts to manipulate readers—is a growing problem. A tool of intelligence agencies, scammers, and marketers alike, it has been blamed for election interference, public confusion, and other issues in the United States and beyond. The problem is particularly pronounced as younger generations choose social media over journalistic sources for their information. This paper considers the prospective solution of providing consumers with ‘nutrition facts’-style information for online content. To this end, it reviews prior work in product labeling and considers several possible approaches and the arguments for and against such labels. Based on this analysis, a case is made for a nutrition facts-based labeling scheme for online content.

25 pages, 3390 KiB  
Article
Collecting a Large Scale Dataset for Classifying Fake News Tweets Using Weak Supervision
by Stefan Helmstetter and Heiko Paulheim
Future Internet 2021, 13(5), 114; https://doi.org/10.3390/fi13050114 - 29 Apr 2021
Cited by 15 | Viewed by 3933
Abstract
The problem of automatically detecting fake news in social media, e.g., on Twitter, has recently drawn some attention. Although, from a technical perspective, it can be regarded as a straightforward binary classification problem, the major challenge is the collection of large enough training corpora, since manually annotating tweets as fake or non-fake news is an expensive and tedious endeavor, and recent approaches utilizing distributional semantics require large training corpora. In this paper, we introduce an alternative approach for creating a large-scale dataset for tweet classification with minimal user intervention. The approach relies on weak supervision and automatically collects a large-scale, but very noisy, training dataset comprising hundreds of thousands of tweets. As a weak supervision signal, we label tweets by their source, i.e., as coming from a trustworthy or an untrustworthy outlet, and train a classifier on this dataset. We then use that classifier for a different classification target, i.e., the classification of fake and non-fake tweets. Although the labels are not accurate with respect to the new classification target (not all tweets from an untrustworthy source need to be fake news, and vice versa), we show that despite this unclean, inaccurate dataset, the results are comparable to those achieved using a manually labeled set of tweets. Moreover, we show that combining the large-scale noisy dataset with a human-labeled one yields better results than either of the two alone.
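The weak-supervision idea above lends itself to a compact sketch: label tweets by the trustworthiness of their source, train on those noisy labels, and then apply the classifier to the real fake/non-fake target. The source lists, features, and toy tweets below are invented for illustration and are not the paper's data or feature set.

```python
# Illustrative weak supervision: noisy source-based labels train a classifier
# that is then used to score individual tweets as fake vs. non-fake news.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

TRUSTED = {"wire_service", "public_broadcaster"}      # assumed source lists
UNTRUSTED = {"hoax_site", "clickbait_feed"}

tweets = [                                            # (text, source) toy data
    ("markets closed higher after the report", "wire_service"),
    ("secret cure that doctors refuse to mention", "hoax_site"),
    ("storm warnings issued for the coast tonight", "public_broadcaster"),
    ("this one trick erases all your debt", "clickbait_feed"),
]
texts = [t for t, _ in tweets]
y = [0 if src in TRUSTED else 1 for _, src in tweets]  # weak, noisy labels

vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(texts), y)

# Reuse for the actual target task: scoring unseen tweets.
print(clf.predict_proba(vec.transform(["miracle cure found, share now"]))[:, 1])
```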

15 pages, 596 KiB  
Article
Mutual Influence of Users Credibility and News Spreading in Online Social Networks
by Vincenza Carchiolo, Alessandro Longheu, Michele Malgeri, Giuseppe Mangioni and Marialaura Previti
Future Internet 2021, 13(5), 107; https://doi.org/10.3390/fi13050107 - 25 Apr 2021
Cited by 8 | Viewed by 2556
Abstract
Real-time news spreading is now available to everyone, especially thanks to Online Social Networks (OSNs), which readily support gatewatching: the collective intelligence and knowledge of dedicated communities are exploited to filter the news flow and to highlight and debate relevant topics. The main drawback is that the responsibility for judging the content and accuracy of information moves from editors and journalists to online information users, with the side effect of potential growth in fake news. In such a scenario, the trustworthiness of information providers can no longer be overlooked; rather, it increasingly helps in discerning real news from fakes. In this paper we evaluate how trustworthiness among OSN users influences the news spreading process. To this purpose, we model news spreading as a Susceptible-Infected-Recovered (SIR) process in an OSN, adding users' credibility as a layer on top of the OSN. Simulations with both fake and true news spreading on such a multiplex network show that credibility improves the diffusion of real news while limiting the propagation of fakes. The proposed approach can also be extended to real social networks.
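A toy simulation conveys the setup: SIR spreading on a social graph, with each user's credibility modulating how likely their posts are to be passed on. The graph model, the coupling of credibility to transmission probability, and all parameters below are illustrative assumptions, not the paper's multiplex formulation.

```python
# Toy discrete-time SIR news spreading with a credibility layer (illustrative).
import random
import networkx as nx

random.seed(0)
G = nx.erdos_renyi_graph(200, 0.05)       # stand-in for an OSN friendship graph
cred = {u: random.random() for u in G}    # credibility score layered on users
beta, gamma = 0.4, 0.2                    # assumed base spread / recovery rates

state = {u: "S" for u in G}
state[0] = "I"                            # seed user posts the news item

for _ in range(30):
    nxt = dict(state)
    for u in G:
        if state[u] == "I":
            for v in G.neighbors(u):
                # Assumed coupling: content from credible users spreads more.
                if state[v] == "S" and random.random() < beta * cred[u]:
                    nxt[v] = "I"
            if random.random() < gamma:   # the user stops resharing
                nxt[u] = "R"
    state = nxt

print(sum(s != "S" for s in state.values()), "of", len(G), "users reached")
```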

19 pages, 3934 KiB  
Article
iCaps-Dfake: An Integrated Capsule-Based Model for Deepfake Image and Video Detection
by Samar Samir Khalil, Sherin M. Youssef and Sherine Nagy Saleh
Future Internet 2021, 13(4), 93; https://doi.org/10.3390/fi13040093 - 5 Apr 2021
Cited by 32 | Viewed by 7354
Abstract
Fake media is spreading like wildfire across the internet as a result of great advances in deepfake creation tools and the huge interest researchers and corporations have shown in exploring their limits. Now anyone can create manipulated, unethical media to defame or humiliate others, or even scam them out of their money, at the click of a button. In this research, a new deepfake detection approach, iCaps-Dfake, is proposed that competes with state-of-the-art deepfake video detection techniques and addresses their low generalization problem. Two feature extraction methods are combined: texture-based Local Binary Patterns (LBP) and a modified High-Resolution Network (HRNet) based on Convolutional Neural Networks (CNNs), together with a capsule neural network (CapsNet) implementing a concurrent routing technique. Experiments were conducted on large benchmark datasets to evaluate the performance of the proposed model, and several performance metrics were applied and analyzed. The proposed model was primarily trained and tested on the DeepFakeDetectionChallenge-Preview (DFDC-P) dataset and then tested on Celeb-DF to examine its generalization capability. Experiments achieved an area under the curve (AUC) score improvement of 20.25% over state-of-the-art models.
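One ingredient named in the abstract, texture features via Local Binary Patterns, can be sketched in a few lines with scikit-image. The image, neighborhood parameters, and histogram settings are illustrative; the paper's full pipeline (modified HRNet plus a capsule network with concurrent routing) is not reproduced here.

```python
# Illustrative LBP texture descriptor for a face crop (not the full pipeline).
import numpy as np
from skimage.feature import local_binary_pattern

face = (np.random.rand(128, 128) * 255).astype(np.uint8)  # dummy grayscale crop
P, R = 8, 1                                  # 8 sampling points at radius 1
lbp = local_binary_pattern(face, P, R, method="uniform")

# "uniform" LBP yields values in [0, P + 1]; histogram them as the descriptor.
hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
print(hist)   # texture features to combine with CNN-derived features
```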
