Article

Disinformation in Social Networks and Bots: Simulated Scenarios of Its Spread from System Dynamics

by Alfredo Guzmán Rincón 1,*, Ruby Lorena Carrillo Barbosa 2, Nuria Segovia-García 1 and David Ricardo Africano Franco 2

1 Center of Research of Asturias (Centro de Investigación de Asturias), Corporación Universitaria de Asturias, Bogotá 110221, Colombia
2 Commercial Engineer, Universidad de Ciencias Aplicadas y Ambientales U.D.C.A, Bogotá 110221, Colombia
* Author to whom correspondence should be addressed.

Submission received: 16 February 2022 / Revised: 3 March 2022 / Accepted: 4 March 2022 / Published: 10 March 2022

Abstract

Social networks have become the scenario with the greatest potential for the circulation of disinformation; hence, there is growing interest in understanding how this type of information spreads, especially in relation to the mechanisms used by disinformation agents, such as bots and trolls. In this scenario, the potential of bots to facilitate the spread of disinformation is recognised; however, the analysis of how they do so is still at an early stage. Against this background, this paper aimed to model and simulate scenarios of disinformation propagation in social networks caused by bots, based on the dynamics of this mechanism documented in the literature. System Dynamics was used as the main modelling technique. The results include a mathematical model of disinformation spread through this mechanism, together with simulations of increases in the bot activation and deactivation rates. The results highlight the preponderant role of social networks in controlling disinformation spread through this mechanism, as well as the potential of bots to affect citizens.

1. Introduction

The academic community has shown widespread interest in understanding how disinformation spreads in virtual media, including social networks, e.g., [1,2,3,4,5,6,7], due to its potential to trigger various problems for governments, citizens, and other social actors [2]. Multiple consequences have been attributed to disinformation on social networks, such as the polarisation of citizens’ opinions [4], the destruction of the credibility of traditional media [8], and the mobilisation of citizens to block the development of public policies [9], among others.
Recently, the spread of disinformation has grown exponentially [1] as a result of the massive use of social networks. One example is the case of COVID-19, when the Russian media outlets RT and Sputnik accused NATO and the United States of America of creating the virus in order to destabilise the Chinese economy, information that was widely spread on social networks such as Facebook, Twitter and TikTok [3,10]. Another is the case of the vaccines developed for COVID-19, where the anti-vaccine movement sought to attribute effects such as autism and possible genetic malformations to their use, triggering mistrust among the population and hindering the control of the virus and the mitigation of its transmission [11]. In view of these examples, one of the main problems for social actors, in particular states, as well as for the academic community, is the lack of awareness of the existence of this type of information and the lack of understanding of the strategies used by the disinformation agent to ensure the propagation of disinformation on social networks [12,13,14].
This article focuses on the second problem. The literature shows that disinformation can be propagated by the direct intervention of individuals, either consciously or unconsciously, as well as by automated accounts known as bots [15]. These bots are present in all social networks, although in some their presence is more noticeable. An example is Twitter where, given the importance of this network in issues related to politics, it has been estimated that between 9% and 15% of active accounts are bots [15,16,17,18].
Due to the proliferation of bots used in social networks to disinform, and the interest in this mechanism, the literature has focused on establishing the role of bots in the spread of disinformation, mostly through case studies, e.g., [15,18,19,20,21,22,23,24]. However, the development of models to understand the propagation patterns of disinformation caused by this mechanism is still in its early stages, with most advances being generalist rather than specialised in bots, such as the works of Lazer et al. [24], Shao et al. [25], Vosoughi et al. [26] and Shao et al. [27]. The limited amount of work is due to the difficulty of finding the initial sources of disinformation [4], as well as the absence of robust and easily accessible tools for identifying bots and thus correctly identifying their activities [15].
In this context, the aim of this article was to model and simulate scenarios of disinformation propagation in social networks caused by bots, based on the dynamics of this mechanism documented in the literature. This strengthens the understanding of the phenomenon of disinformation through bots by making it possible to establish patterns of behaviour in the system and to evaluate the effects of the various decisions made by the actors involved in disinformation, especially in relation to social network policies.
It is important to highlight that, although there is no unified definition of disinformation, due to the large number of definitions and related terms found in the literature, particularly in Anglo-Saxon countries (e.g., fake news, misinformation, etc.), this work assumes disinformation to be any deviant information intended to distort and mislead a target audience in a predetermined way [3,28]. In this way, disinformation covers a wide range of content, including fake news, misinformation, misleading content, hate speech and deliberate misreporting, among others [29,30,31]. Additionally, disinformation is not only about the message itself; as a practice, it has the potential to discredit the messenger and true information, due to its close relationship with multiple social sectors, especially politics.
This article is structured in five main sections. The first broadens the conceptualisation of bots and their features; the second establishes the behaviour of disinformation caused by this mechanism in social networks and introduces the causal loop diagram with its dynamic hypotheses; the third presents the methodology; the fourth describes the results, with emphasis on the mathematical model developed and the simulations defined in the methodology; finally, the fifth presents the conclusions of the study.

2. Theoretical Framework and Dynamic Model

2.1. What Are Bots?

The word bots is an abbreviation of the Anglicism “software robot” [15], which has permeated the disinformation literature, including that of Latin American origin. As machines, bots are automated to some degree, operating either independently or with human intervention, and they are put at the service of external agents, as in the case of diplomacy. In this sense, this type of software can be categorised as either beneficial or malicious [32]. Beneficial bots are programmed, for example, by companies to improve the attention and service provided to their customers [33] or, as in the case of the social network Twitter, to announce the time, as documented by Yang et al. [32], or to support the raising of charitable resources [34].
Malicious bots, in some cases, are responsible for malware distribution or spam generation, but in social networks they have another purpose: to automatically produce content and interact with humans, trying to emulate them and possibly alter their behaviour [35,36,37]. According to the Academic Society for Management and Communication [38], these bots have three strategies for influencing the behaviour of online users on social networks:
  • Smoke screening: bots use context-related hashtags on social networks to distract online users from the main point of debate in a discussion.
  • Misdirecting: the use of hashtags related to the topic of discussion to talk about other non-corresponding plots to divert the attention of the target audience from the disinformation.
  • Astroturfing: the bot tries to influence public opinion, e.g., in a political debate, by creating the impression that a large majority is in favour of a certain position.
Although the strategies described above form the basis of those used by malicious bots, it must be recognised that, along with the emergence of new technologies, these strategies tend to evolve and new ones emerge. For this reason, a general view of the behaviour of the system generated by this type of software is required, rather than the study of each strategy in itself.

2.2. Disinformation, Bots and Causal Loop Diagrams

Disinformation, as a social phenomenon occurring in social networks, is not the result of chance but of a strategic analysis developed by the disinforming agent, as related by Guzmán and Rodríguez-Cánovas [2]. The disinforming agent seeks to attract the largest possible target population in order to convert it into a population susceptible to being disinformed [39], which is where the various mechanisms for disinformation, such as bots, emerge [5]. Once the population susceptible to disinformation has been recruited, the message is propagated [40] to consolidate the disinformed population, understood for the purposes of this study as the individuals who were exposed to the message of the disinforming agent.
On this basis, the role of bots as a mechanism for spreading disinformation begins with their activation, where the disinformation agent establishes how many he or she wishes to have; however, maintaining a fixed number over time is difficult, given the detection mechanisms that social networks use to eliminate or block this type of account [16,41]. The interaction of these fake accounts usually takes the form of publications based on powerful hashtags, comments and shared content which, being machine-generated, tend to be standardised in the number of characters as well as in the frequency and timing of publications [42]. With respect to the target population, which will become the susceptible and, later, the disinformed population, bots are characterised by relying exclusively on the organic reach of the account, so that as the number of followers grows this reach tends to decrease, requiring a greater number of bots to reach a larger share of the population [2]. Figure 1 shows the causal loop diagram that represents the behaviour of the studied phenomenon.
Finally, it should be noted that the spread of disinformation resembles the way a disease is spread: there is a population of infected agents (the bots) who seek to spread the disinformation message in a susceptible population, hence previous models are based on the SIR model (Susceptible-Infected-Recovered), e.g., [43,44]. Thus, previous studies have sought to complement the SIR model by analysing specific disinformation mechanisms, as well as models that integrate several of these mechanisms (e.g., bots, trolls, paid outreach, etc.). These models include the SIRaRu model, which has been used to understand the behaviour of disinformation in homogeneous and heterogeneous communities [45], the SEIR model (Susceptible-Exposed-Infectious-Recovered), and the SIR model for dynamic and complex social networks [46], among others. Hence, the model presented here, both in Figure 1 and in the subsequent sections, is based on the SIR model.
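To make the SIR analogy concrete before the full model is introduced in Section 4, the following minimal sketch in Python shows the Susceptible-Infected-Recovered structure on which the proposed model builds; the parameter values are purely illustrative and are not those of the calibrated model.

```python
# Minimal SIR sketch of the epidemic analogy for disinformation spread.
# Parameter values are illustrative only, not the calibrated parameters of Section 4.
def sir_step(S, I, R, beta, gamma, dt):
    """One Euler step: S -> I (exposure to disinformation), I -> R (recovery)."""
    new_infections = beta * S * I
    new_recoveries = gamma * I
    return (S - new_infections * dt,
            I + (new_infections - new_recoveries) * dt,
            R + new_recoveries * dt)

S, I, R = 0.99, 0.01, 0.0          # population shares
beta, gamma, dt = 0.4, 0.1, 0.25   # contact rate, recovery rate, time step (days)
for _ in range(int(360 / dt)):
    S, I, R = sir_step(S, I, R, beta, gamma, dt)
print(f"S = {S:.3f}, I = {I:.3f}, R = {R:.3f}")
```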

3. Methodology

The aim of this article was to model and simulate scenarios of disinformation propagation in social networks caused by bots, based on the dynamics of this mechanism documented in the literature; therefore, the main technique used for the development of the model was System Dynamics, with Bala et al. [47] and Bianchi [48] as theoretical references. The choice of modelling technique rests on the complexity of the disinformation caused by bots, in which various elements are involved and whose behaviour is non-linear, multicausal and subject to time lags [47]. The model is based on the existing literature on the problem under study, following the steps established by Bala et al. [47], summarised below:
  • Construction of the flow diagram and levels of the model: this corresponds to the physical structure of the system in a defined period and allows the generation of the model’s patterns. In this step, the variables that allow the system’s behaviour to be represented are defined.
  • The writing of differential equations: they represent the causes and effects of the system and allow its operationalisation.
  • Parameter estimation: this assigns the values of the variables that make it possible to simulate and obtain the system’s behaviour. For the present work, this stage was based on multiple sources, especially the study developed by Himelein-Wachowiak et al. [15].
  • The testing of the internal consistency of the model: this corresponds to the evaluation of the behaviours generated by the model and which, for the present case, is based on the theory exposed in Section 2.2.
With the proposed model, we proceeded to develop the simulations presented in Table 1, for which modifications were made to the parameters established in the initial model (Table 2). It should be noted that in each simulation only the parameter indicated in Table 1 was modified; all other parameters retained their initial values. The description of the modified parameters is presented in Table 2.
The simulations were analysed descriptively and, in order to determine the existence of statistically significant differences in the behaviour of the system, the medians of the PobD level were compared (see Table 2). The Kolmogorov-Smirnov statistic was applied to check whether the data fit a normal distribution (p-value > 0.05), and it was found that the data did not follow a normal distribution. Thus, to establish the differences between the behaviour of the system under the initial parameters and under the modified parameters, the Wilcoxon test was used, with differences considered significant at p-value < 0.05.
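As an illustration of this statistical workflow, the sketch below reproduces the sequence just described (normality check, then non-parametric comparison) using SciPy rather than SPSS; the two PobD series are synthetic placeholders, not the simulation outputs reported in Section 4.2.

```python
# Sketch of the statistical comparison: Kolmogorov-Smirnov normality check,
# then a Wilcoxon signed-rank test on paired PobD trajectories.
# The series below are synthetic placeholders, not actual simulation output.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pobd_sim1 = rng.gamma(shape=2.0, scale=1_000.0, size=360)      # reference run
pobd_sim2 = pobd_sim1 * 1.4 + rng.normal(0.0, 50.0, size=360)  # modified run

# Normality check against a normal distribution fitted to the data
ks_stat, ks_p = stats.kstest(pobd_sim1, "norm",
                             args=(pobd_sim1.mean(), pobd_sim1.std()))

if ks_p < 0.05:  # normality rejected -> non-parametric paired comparison
    w_stat, w_p = stats.wilcoxon(pobd_sim1, pobd_sim2)
    print(f"KS p = {ks_p:.3g}; Wilcoxon p = {w_p:.3g}")
```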
Finally, the model and the simulations were implemented in Stella Architect software, version 1.9.5. The following model settings were used: $t_i = 0$, $t_f = 360$, $\Delta t = 1/4$, where $t$ represents time in days; Euler was used as the integration method. SPSS software version 25 was used for the statistical analyses.

4. Results

The results are presented in two sections. The first describes the proposed model, which has the capacity to replicate the behaviour of the system based on the evidence from the literature review; the second describes the simulations and the corresponding statistical analyses.

4.1. Model

Figure 2 represents the flow and level diagram [1], which is based on the SIR model. The model consists of five level variables, five flow variables and 10 auxiliary variables. The green section represents the process of disinformation of the target population, and the blue section represents the activation and deactivation of bots.
The proposed model explains the behaviour of the phenomenon studied here, as long as the following assumptions are met:
  • Between the moment of the constitution of the population susceptible to disinformation (PobS) and the beginning of the spread of disinformation there is a delay.
  • There is a limited number of bots that the disinformation agent wishes to place on the social network.
  • There is a delay before the deactivation of bots begins.
Under the technical conditions of non-negativity of the level variables (i.e., their domain is restricted to 0 or positive numbers) and with $t = 0, 1, 2, \ldots, 360$, the system was represented by the following differential equations.
Target population:
$$PobO_t = PobO_{t-1} + \left( PobO_{t-1} \times TCpobC - PobO_{t-1} \times AO \times TEC \times Bots_{t-1} \right) dt$$
Susceptible population:
$$PobS_t = PobS_{t-1} + \left( PobO_{t-1} \times AO \times TEC - PobS_{t-1} \times f(X_t, X_{t-\tau}, dt;\ t \geq t_0) \right) dt$$
where:
$$X_t = PobS_{t-1} \times Bots_{t-1} \times AO \, dt$$
Disinformed population:
$$PobD_t = PobD_{t-1} + PobS_{t-1} \times f(X_t, X_{t-\tau}, dt;\ t \geq t_0) \, dt$$
Bots:
$$Bots_t = Bots_{t-1} + \left( Bots_{t-1} \times TAB \times \frac{METB - Bots_{t-1}}{METB} - Bots_{t-1} \times f(Y_t, Y_{t-\tau}, dt;\ t \geq t_0) \right) dt$$
where:
$$Y_t = Bots_{t-1} \times TDB \times RTDB \, dt$$
Deactivated bots:
$$BotsD_t = BotsD_{t-1} + Bots_{t-1} \times f(Y_t, Y_{t-\tau}, dt;\ t \geq t_0) \, dt$$
Table 2 presents the description of the variables and the initial parameters for the operationalisation of the model. Because of the generalist nature of the proposed model, which is applicable to any social network, the initial parameters are susceptible to modification. In this particular case, those of Twitter were used, so parameters such as organic reach, paid reach, effective contact rate, among others, must be modified for its use in other social networks.
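For readers who want to experiment outside Stella, the sketch below re-implements the equations above as a discrete Euler loop in Python with the initial parameters of Table 2. It is an assumed simplification rather than the published model: the delay functions are approximated as fixed start times (RD and RTDB), the organic reach graph is linearly interpolated, and the flows are written in a conserved form, so it does not reproduce the exact trajectories reported in Section 4.2.

```python
# Sketch: Euler integration of the bot-driven disinformation model.
# Simplified approximation of the Stella model: delay functions are treated as
# fixed start times and the flows are written in conserved form.
import numpy as np

def organic_reach(pobs):
    """Linear interpolation of the AO graphical function from Table 2."""
    xs = [0, 10_001, 100_001, 1_000_000, 10_000_001]
    ys = [0.0017, 0.0004, 0.00015, 0.00015, 0.00008]
    return float(np.interp(pobs, xs, ys))

def simulate(TAB=0.5, TDB=0.01973, RD=90, RTDB=98,
             TCpobC=0.001, TEC=0.0001, METB=300, dt=0.25, t_f=360):
    """Run the model with the Table 2 parameters and return the PobD series."""
    PobO, PobS, PobD, Bots, BotsD = 1_000_000.0, 0.0, 0.0, 5.0, 0.0
    pobd_series = []
    for step in range(int(t_f / dt)):
        t = step * dt
        AO = organic_reach(PobS)
        recruit    = PobO * AO * TEC * Bots                 # PobO -> PobS
        disinform  = PobS * AO * Bots if t >= RD else 0.0   # PobS -> PobD (delayed start)
        activate   = Bots * TAB * (METB - Bots) / METB      # activation towards the goal
        deactivate = Bots * TDB if t >= RTDB else 0.0       # Bots -> BotsD (delayed start)
        PobO += (PobO * TCpobC - recruit) * dt
        PobS += (recruit - disinform) * dt
        PobD += disinform * dt
        Bots += (activate - deactivate) * dt
        BotsD += deactivate * dt
        pobd_series.append(PobD)
    return pobd_series

reference = simulate()  # reference run in the spirit of SIM-1
print(f"PobD at t = 360 (sketch): {reference[-1]:,.0f}")
```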

4.2. Simulations

Regarding the results obtained in SIM-1, it was found that under the initial parameters, for t = 360, PobO increased by 230,000 people, with 159,000 people effectively disinformed, which represented 15.9% of the initial PobO. On the other hand, for t = 90, the period in which the disinformation of PobS begins, this population was close to 28,200 people and grew until t = 111; after this day, PobS decreases until it reaches 0. In the case of the bots, the highest number of active automata, 54.8 ≈ 55, was reached at t = 133, and for t = 360 the number of deactivated bots was 158. Figure 3 shows the behaviour of the system for SIM-1.
The results for SIM-2 show that with a higher bot activation rate, PobO for t = 360 would be equal to 1,150,000 people, with PobD reaching 228,000, an increase of 43.39% compared to SIM-1. For the bots in this simulation, the highest number of active automata, 81.2 ≈ 82, would be reached at t = 132. Similarly, due to the increase in the activation rate, a total of 238 bots would have been deactivated by t = 360, an increase of 50.63% in relation to SIM-1. On the other hand, a comparison of PobD between SIM-1 and SIM-2 showed statistically significant differences, with z = 16.42, p-value = 0.00. Figure 4 shows the behaviour of the system for SIM-2.
For SIM-3, which simulated a higher rate of bot deactivation by social networks, a change in system behaviour was evident (see Figure 5). The disinformation target population PobO for t = 360 would be equal to 1,260,000 people, with a total of 109,000 people disinformed, decreasing PobD by 31.44% in relation to SIM-1. Similarly, in the case of the susceptible population, there are two peaks, at t = 107 and t = 360 (see Figure 5a), correlated with the number of active bots (see Figure 5b). Regarding the PobD level, statistically significant differences were found between SIM-3 and SIM-1, with z = 14.00, p-value = 0.00.
Finally, in the case in which disinformation starts to circulate at t = 30 (SIM-4), the target population PobO for t = 360 would be equal to 1,240,000, similar to the behaviour derived from SIM-1. For the population that was disinformed, PobD at t = 360 was 145,000. In the case of the bots, and due to the ability of social networks to deactivate them, after t = 170 the number of active automata tends to stabilise; for t = 360, a total of 23.3 ≈ 24 active bots and 147 deactivated bots were observed. On the other hand, the comparison of PobD between SIM-1 and SIM-4 established statistically significant differences, with z = 14.66, p-value = 0.00. Figure 6 shows the behaviour of the system for SIM-4.
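As a usage note, the four scenarios of Table 1 can be explored with the simulate() sketch given after Section 4.1 by overriding one parameter at a time; again, this is an illustrative approximation of the Stella runs reported above, not a reproduction of them.

```python
# Scenario runs of Table 1, reusing the simulate() sketch from Section 4.1.
scenarios = {
    "SIM-1": {},              # reference simulation
    "SIM-2": {"TAB": 0.8},    # increased bot activation rate
    "SIM-3": {"TDB": 0.1},    # increased bot deactivation rate
    "SIM-4": {"RD": 30},      # disinformation starts at t = 30
}
for name, overrides in scenarios.items():
    pobd = simulate(**overrides)
    print(f"{name}: PobD at t = 360 = {pobd[-1]:,.0f}")
```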

5. Conclusions

The objective of this study, to model and simulate scenarios of the propagation of disinformation in social networks caused by bots based on the dynamics of this mechanism documented in the literature, was achieved. The model presented has the capacity to replicate the behaviour of the system, is consistent with the dynamic hypotheses set out in Figure 1, and complements previous studies on the use of bots as a mechanism to propagate false information, such as those developed by Lazer et al. [24], Shao et al. [25], Vosoughi et al. [26] and Shao et al. [27].
The model allows the actors involved in disinformation to analyse more objectively the behavioural patterns of disinformation caused by bots for decision making, and it rests on three assumptions: the first relates to the delay in the start of disinformation; the second to the limited number of bots that the disinformation agent can put in place; and the third to the limitations of social networks’ systems for detecting and deactivating automated accounts.
With that said, the simulations show that the system of disinformation using bots is sensitive to policies that lead to better detection, blocking and elimination of these types of accounts. Under such policies the disinformed population is smaller, hence the responsibility of social networks to design better detection mechanisms, and of citizens to report these types of accounts, in order to increase the effective rate of bot deactivation. However, if the disinformation agent starts its activity early, the impact over time is not on the size of the disinformed population but on the number of bots required to achieve its purpose, since the number of bots tends to stabilise over time.
From the proposed model and the simulations developed, it is necessary to recognise the role of bots in aggravating existing social problems through the propagation of false information. Hence the need to delve deeper into aspects such as the evolution of this type of mechanism, the new technologies bots incorporate to circumvent the security systems of social networks, and the use of artificial intelligence in them, among others. The academic community is also urged to make use of the model, to complement it and, above all, to remove the current barriers to the study of disinformation, such as the difficulty of accessing declassified information that would allow the model to be operationalised under conditions different from those expressed in this article, which are based on secondary data from other studies.

Author Contributions

Conceptualization, A.G.R., R.L.C.B., D.R.A.F. and N.S.-G.; methodology, A.G.R.; software, A.G.R. and N.S.-G.; validation, R.L.C.B.; formal analysis, A.G.R.; investigation, A.G.R. and D.R.A.F.; resources, A.G.R.; data curation, A.G.R. and R.L.C.B.; writing—original draft preparation, A.G.R.; writing—review and editing, N.S.-G.; supervision, R.L.C.B.; project administration, A.G.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding, and the APC was funded by Corporación Universitaria de Asturias.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data and the model are available at: https://exchange.iseesystems.com/models/player/alfredoguzmanrincon/disinformation-and-bots (accessed on 4 March 2022).

Acknowledgments

The authors thank Cecilia Carabajal who, with her unconditional support, carried out the style correction and translation of this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Jussila, J.; Suominen, A.H.; Partanen, A.; Honkanen, T. Text Analysis Methods for Misinformation–Related Research on Finnish Language Twitter. Future Internet 2021, 13, 157. [Google Scholar] [CrossRef]
  2. Guzmán, A.; Rodríguez-Cánovas, B. Disinformation Propagation in Social Networks as a Diplomacy Strategy: Analysis from System Dynamics. JANUS. NET e-J. Int. Relat. 2021, 11, 32–43. [Google Scholar] [CrossRef]
  3. Agarwal, N.K.; Alsaeedi, F. Understanding and Fighting Disinformation and Fake News: Towards an Information Behavior Framework. Proc. Assoc. Inf. Sci. Technol. 2020, 57, e327. [Google Scholar] [CrossRef]
  4. La Cour, C. Theorising Digital Disinformation in International Relations. Int. Polit. 2020, 57, 704–723. [Google Scholar] [CrossRef]
  5. Buchanan, T.; Benson, V. Spreading Disinformation on Facebook: Do Trust in Message Source, Risk Propensity, or Personality Affect the Organic Reach of “Fake News”? Soc. Media Soc. 2019, 5, 205630511988865. [Google Scholar] [CrossRef] [Green Version]
  6. Bjola, C. The Ethics of Countering Digital Propaganda. Ethics Int. Aff. 2018, 32, 305–315. [Google Scholar] [CrossRef]
  7. Gerrits, A.W.M. Disinformation in International Relations: How Important Is It? Secur. Hum. Rights 2018, 29, 3–23. [Google Scholar] [CrossRef] [Green Version]
  8. Bennett, W.L.; Livingston, S. The Disinformation Order: Disruptive Communication and the Decline of Democratic Institutions. Eur. J. Commun. 2018, 33, 122–139. [Google Scholar] [CrossRef]
  9. Acosta-Quiroz, J.; Iglesias-Osores, S. COVID-19: Desinformación en Redes Sociales. Rev. Cuerpo Med. HNAAA 2020, 13, 218–219. [Google Scholar] [CrossRef]
  10. Gottlieb, M.; Dyer, S. Information and Disinformation: Social Media in the COVID-19 Crisis. Acad. Emerg. Med. 2020, 27, 640–641. [Google Scholar] [CrossRef]
  11. Grimes, D.R. Medical Disinformation and the Unviable Nature of COVID-19 Conspiracy Theories. PLoS ONE 2021, 16, e0245900. [Google Scholar] [CrossRef] [PubMed]
  12. Spradling, M.; Straub, J.; Strong, J. Protection from ‘Fake News’: The Need for Descriptive Factual Labeling for Online Content. Future Internet 2021, 13, 142. [Google Scholar] [CrossRef]
  13. Helmstetter, S.; Paulheim, H. Collecting a Large Scale Dataset for Classifying Fake News Tweets Using Weak Supervision. Future Internet 2021, 13, 114. [Google Scholar] [CrossRef]
  14. Shu, K.; Bhattacharjee, A.; Alatawi, F.; Nazer, T.H.; Ding, K.; Karami, M.; Liu, H. Combating Disinformation in a Social Media Age. WIREs Data Min. Knowl. Discov. 2020, 10, e1385. [Google Scholar] [CrossRef]
  15. Himelein-Wachowiak, M.; Giorgi, S.; Devoto, A.; Rahman, M.; Ungar, L.; Schwartz, H.A.; Epstein, D.H.; Leggio, L.; Curtis, B. Bots and Misinformation Spread on Social Media: Implications for COVID-19. J. Med. Internet Res. 2021, 23, e26933. [Google Scholar] [CrossRef] [PubMed]
  16. Samper-Escalante, L.D.; Loyola-González, O.; Monroy, R.; Medina-Pérez, M.A. Bot Datasets on Twitter: Analysis and Challenges. Appl. Sci. 2021, 11, 4105. [Google Scholar] [CrossRef]
  17. Badawy, A.; Ferrara, E.; Lerman, K. Analyzing the Digital Traces of Political Manipulation: The 2016 Russian Interference Twitter Campaign. In Proceedings of the 2018 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), Barcelona, Spain, 28–31 August 2018; pp. 258–265. [Google Scholar]
  18. Jin, F.; Wang, W.; Zhao, L.; Dougherty, E.; Cao, Y.; Lu, C.-T.; Ramakrishnan, N. Misinformation Propagation in the Age of Twitter. Computer 2014, 47, 90–94. [Google Scholar] [CrossRef]
  19. Xu, W.; Sasahara, K. Characterizing the Roles of Bots on Twitter during the COVID-19 Infodemic. J. Comput. Soc. Sci. 2021, 1–19. [Google Scholar] [CrossRef]
  20. Lanius, C.; Weber, R.; MacKenzie, W.I. Use of Bot and Content Flags to Limit the Spread of Misinformation among Social Networks: A Behavior and Attitude Survey. Soc. Netw. Anal. Min. 2021, 11, 32. [Google Scholar] [CrossRef]
  21. Wang, P.; Angarita, R.; Renna, I. Is This the Era of Misinformation yet: Combining Social Bots and Fake News to Deceive the Masses. In Proceedings of the Web Conference 2018—WWW ’18 Companion, Lyon, France, 23–27 April 2018; ACM Press: Lyon, France, 2018; pp. 1557–1561. [Google Scholar]
  22. Dunn, A.G.; Surian, D.; Dalmazzo, J.; Rezazadegan, D.; Steffens, M.; Dyda, A.; Leask, J.; Coiera, E.; Dey, A.; Mandl, K.D. Limited Role of Bots in Spreading Vaccine-Critical Information Among Active Twitter Users in the United States: 2017–2019. Am. J. Public Health 2020, 110, S319–S325. [Google Scholar] [CrossRef]
  23. Tandoc, E.C. The Facts of Fake News: A Research Review. Sociol. Compass 2019, 13, e12724. [Google Scholar] [CrossRef]
  24. Lazer, D.M.J.; Baum, M.A.; Benkler, Y.; Berinsky, A.J.; Greenhill, K.M.; Menczer, F.; Metzger, M.J.; Nyhan, B.; Pennycook, G.; Rothschild, D.; et al. The Science of Fake News. Science 2018, 359, 1094–1096. [Google Scholar] [CrossRef] [PubMed]
  25. Shao, C.; Ciampaglia, G.L.; Varol, O.; Yang, K.-C.; Flammini, A.; Menczer, F. The Spread of Low-Credibility Content by Social Bots. Nat. Commun. 2018, 9, 4787. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  26. Vosoughi, S.; Roy, D.; Aral, S. The Spread of True and False News Online. Science 2018, 359, 1146–1151. [Google Scholar] [CrossRef] [PubMed]
  27. Shao, C.; Hui, P.-M.; Wang, L.; Jiang, X.; Flammini, A.; Menczer, F.; Ciampaglia, G.L. Anatomy of an Online Misinformation Network. PLoS ONE 2018, 13, e0196087. [Google Scholar] [CrossRef]
  28. Innes, M. Techniques of Disinformation: Constructing and Communicating “Soft Facts” after Terrorism. Br. J. Sociol. 2020, 71, 284–299. [Google Scholar] [CrossRef]
  29. Rodríguez, A.R. Fundamentos del Concepto de Desinformación Como Práctica Manipuladora en la Comunicación Política y Las Relaciones Internacionales. Hist. Comun. Soc. 2018, 23, 231–244. [Google Scholar] [CrossRef] [Green Version]
  30. McGonagle, T. “Fake News”: False Fears or Real Concerns? Neth. Q. Hum. Rights 2017, 35, 203–209. [Google Scholar] [CrossRef]
  31. Fallis, D. What is Disinformation? Libr. Trends 2015, 63, 401–426. [Google Scholar] [CrossRef] [Green Version]
  32. Yang, K.; Varol, O.; Davis, C.A.; Ferrara, E.; Flammini, A.; Menczer, F. Arming the Public with Artificial Intelligence to Counter Social Bots. Hum. Behav. Emerg. Tech. 2019, 1, 48–61. [Google Scholar] [CrossRef] [Green Version]
  33. Rossmann, A.; Zimmermann, A.; Hertweck, D. The Impact of Chatbots on Customer Service Performance. In Advances in the Human Side of Service Engineering; Spohrer, J., Leitner, C., Eds.; Springer International Publishing: Cham, Switzerland, 2020; Volume 1208, pp. 237–243. ISBN 9783030510565. [Google Scholar]
  34. Savage, S.; Monroy-Hernandez, A.; Höllerer, T. Botivist: Calling Volunteers to Action Using Online Bots. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing, San Francisco, CA, USA, 27 February–2 March 2016; ACM: San Francisco, CA, USA, 2016; pp. 813–822. [Google Scholar]
  35. Zhang, J.; Zhang, R.; Zhang, Y.; Yan, G. The Rise of Social Botnets: Attacks and Countermeasures. IEEE Trans. Dependable Secur. Comput. 2018, 15, 1068–1082. [Google Scholar] [CrossRef] [Green Version]
  36. Cresci, S.; Di Pietro, R.; Petrocchi, M.; Spognardi, A.; Tesconi, M. The Paradigm-Shift of Social Spambots: Evidence, Theories, and Tools for the Arms Race. In Proceedings of the 26th International Conference on World Wide Web Companion—WWW ’17 Companion, Perth, Australia, 3–7 April 2017; ACM Press: Perth, Australia, 2017; pp. 963–972. [Google Scholar]
  37. Ferrara, E.; Varol, O.; Davis, C.; Menczer, F.; Flammini, A. The Rise of Social Bots. Commun. ACM 2016, 59, 96–104. [Google Scholar] [CrossRef] [Green Version]
  38. Academic Society for Management and Communication. How Powerful Are Social Bots? Understanding the Types, Purposes and Impacts of Bots in Social Media. 2018. Available online: https://www.akademische-gesellschaft.com/fileadmin/webcontent/Publikationen/Communication_Snapshots/AGUK_CommunicationSnapshot_SocialBots_June2018.pdf (accessed on 18 December 2021).
  39. Hollenbaugh, E.E.; Ferris, A.L. Facebook Self-Disclosure: Examining the Role of Traits, Social Cohesion, and Motives. Comput. Hum. Behav. 2014, 30, 50–58. [Google Scholar] [CrossRef]
  40. Jahng, M.R. Is Fake News the New Social Media Crisis? Examining the Public Evaluation of Crisis Management for Corporate Organizations Targeted in Fake News. Int. J. Strateg. Commun. 2021, 15, 18–36. [Google Scholar] [CrossRef]
  41. Pozzana, I.; Ferrara, E. Measuring Bot and Human Behavioral Dynamics. Front. Phys. 2020, 8, 125. [Google Scholar] [CrossRef] [Green Version]
  42. Kooti, F.; Moro, E.; Lerman, K. Twitter Session Analytics: Profiling Users’ Short-Term Behavioral Changes. In Social Informatics; Spiro, E., Ahn, Y.-Y., Eds.; Springer International Publishing: Cham, Switzerland, 2016; Volume 10047, pp. 71–86. ISBN 9783319478739. [Google Scholar]
  43. Zhao, X.; Wang, J. Dynamical Model about Rumor Spreading with Medium. Discret. Dyn. Nat. Soc. 2013, 2013, 586867. [Google Scholar] [CrossRef] [Green Version]
  44. Rapoport, A.; Rebhun, L.I. On the Mathematical Theory of Rumor Spread. Bull. Math. Biophys. 1952, 14, 375–383. [Google Scholar] [CrossRef]
  45. Wang, J.; Zhao, L.; Huang, R. SIRaRu Rumor Spreading Model in Complex Networks. Phys. A Stat. Mech. Appl. 2014, 398, 43–55. [Google Scholar] [CrossRef]
  46. Zhu, H.; Wang, Y.; Yan, X.; Jin, Z. Research on Knowledge Dissemination Model in the Multiplex Network with Enterprise Social Media and Offline Transmission Routes. Phys. A Stat. Mech. Appl. 2022, 587, 126468. [Google Scholar] [CrossRef]
  47. Bala, B.K.; Arshad, F.M.; Noh, K.M. System Dynamics; Springer Texts in Business and Economics; Springer: Singapore, 2017; ISBN 9789811020438. [Google Scholar]
  48. Bianchi, C. Dynamic Performance Management. In System Dynamics for Performance Management; Springer International Publishing: Cham, Switzerland, 2016; Volume 1, ISBN 9783319318448. [Google Scholar]
Figure 1. Diagram of causal loops. Note: B represents the balance loops of the system and R represents the reinforcement loops.
Figure 2. Diagram of flows and levels of disinformation in social networks by means of bots.
Figure 3. Results SIM-1. Note: (a) behaviours for the levels of the disinformation process and (b) behaviour of the activation and deactivation of bots.
Figure 4. Results SIM-2. Note: (a) behaviours for the levels of the disinformation process and (b) behaviour of the activation and deactivation of bots.
Figure 5. Results SIM-3. Note: (a) behaviours for the levels of the disinformation process and (b) behaviour of the activation and deactivation of bots.
Figure 6. Results SIM-4. Note: (a) behaviours for the levels of the disinformation process and (b) behaviour of the activation and deactivation of bots.
Table 1. Computer simulations.

Code | Simulation | Modified Parameters
SIM-1 | Reference simulation | NA
SIM-2 | Increased activation rate | TAB = 0.8
SIM-3 | Increased deactivation rate | TDB = 0.1
SIM-4 | Beginning of disinformation | RD = 30
Table 2. Variables required for model development and initial parameters.

Variable | Description | Units | Initial Parameter
PobO_t | Target population: a population with specific characteristics, defined by the agent as susceptible to disinformation. | People | 1,000,000
PobS_t | Susceptible population: the population that is subscribed to one of the disinformation agent’s accounts. | People | 0
PobD_t | Disinformed population: the population that viewed or had some contact with the disinforming message. | People | 0
Bots_t | Bots: number of bots on the social network used by the disinformation agent at a given time. | Bots | 5
BotsD_t | Deactivated bots: number of bots that were deactivated by the algorithm or by social network staff. | Bots | 0
TCpobC | Target population growth rate: the growth ratio of the population PobO for a specified t. | % | 0.1%
AO | Organic reach: percentage of the bots’ publications viewed through the distribution of the algorithm. This rate is defined according to the number of PobS_t contacted. | % | GRAPH(TIME): (0, 0.0017), (10001, 0.0004), (100001, 0.00015), (300000, 0.00015), (400000, 0.00015), (500000, 0.00015), (600000, 0.00015), (700000, 0.00015), (800000, 0.00015), (1000000, 0.00015), (10000001, 0.00008)
TEC | Effective contact rate: percentage of people who subscribe to the disinformation agent’s account. | % | 0.01%
RD | Disinformation delay: the initial t at which the message starts its propagation. This lag was implemented using the delay function. | Days | 90
TAB | Bot activation rate: percentage of bots that are activated in each t. | % | 50%
METB | Goal bots: number of bots that the disinformation agent wishes to have active. | Bots | 300
TDB | Bot deactivation rate: percentage of bots that are deactivated by the social network at a given t. | % | 1.973%
RTDB | Bot deactivation delay: the initial t at which the deactivation of the bots begins. This lag was implemented using the delay function. | Days | 98
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
