Article
Peer-Review Record

Disinformation in Social Networks and Bots: Simulated Scenarios of Its Spread from System Dynamics

by Alfredo Guzmán Rincón 1,*, Ruby Lorena Carrillo Barbosa 2, Nuria Segovia-García 1 and David Ricardo Africano Franco 2
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 16 February 2022 / Revised: 3 March 2022 / Accepted: 4 March 2022 / Published: 10 March 2022

Round 1

Reviewer 1 Report

This is a very well written and researched paper on a highly topical contemporary problem affecting the global information society. It is enjoyable to read and is written in an easily understandable manner. Yet there are some issues that need to be addressed before the manuscript is ready for publication.

The definition of "misinformation" is not simply a language exercise, owing to the deeply politicized and deceptive nature of the practice. True, it is something that is a falsehood, and there is no single definition. But it is also a practice used to discredit an opposing messenger or message by accusing them of misinformation, owing to the prevailing nature of politics on the issue. This has been seen countless times during the pandemic: people or ideas have been dismissed by governments (the NZ prime minister presenting the government as the one-stop source for all true information), yet these have later proven to be true, and the accuser has actually been guilty of misinformation.

Some of the graphical headings are in Spanish, and probably need to be translated into English. 

This is a very clean and clinical study insofar as it concerns the non-human side of the engineering and technical aspects. However, even though this is not explicitly part of the research agenda of the manuscript, it should be stated and acknowledged. Bots on social networks are not entirely autonomous; they have been directed there by people with a specific social or political programme and certain goals and aims. This information programme supporting a political agenda needs to be understood in order to make sense of what the technical mechanisms of misinformation are likely to achieve. Thus, the human component of motivation for misinformation should be understood.

 

Author Response

Point 1: The definition of "misinformation" is not simply a language exercise, owing to the deeply politicized and deceptive nature of the practice. True, it is something that is a falsehood, and there is no single definition. But it is also a practice used to discredit an opposing messenger or message by accusing them of misinformation, owing to the prevailing nature of politics on the issue. This has been seen countless times during the pandemic: people or ideas have been dismissed by governments (the NZ prime minister presenting the government as the one-stop source for all true information), yet these have later proven to be true, and the accuser has actually been guilty of misinformation.

 

 

Response 1: The introduction was adjusted to incorporate this point of view, which had not previously been taken into account.

 

Point 2: Some of the graphical headings are in Spanish, and probably need to be translated into English.

 

Response 2: This was corrected; the graphical headings were translated into English.

 

Point 3: This is a very clean and clinical study insofar as it concerns the non-human side of the engineering and technical aspects. However, even though this is not explicitly part of the research agenda of the manuscript, it should be stated and acknowledged. Bots on social networks are not entirely autonomous; they have been directed there by people with a specific social or political programme and certain goals and aims. This information programme supporting a political agenda needs to be understood in order to make sense of what the technical mechanisms of misinformation are likely to achieve. Thus, the human component of motivation for misinformation should be understood.

 

Response 3: The text was clarified regarding the involvement of people in the creation and direction of bots.

Reviewer 2 Report

A relevant contribution to a very important area of study.

Author Response

Dear Reviewer,

We appreciate your comments on the document and your interest in the evaluation. Best regards.

Reviewer 3 Report

The manuscript describes the current relevance of automated agents such as bots in social media, and how their misuse can lead to the inappropriate propagation of fake information. To show this, several simulated scenarios are run and later discussed.

The results, discussed in the experimental section, show how valuable it can be to properly manage this source of misinformation on the Internet.

The paper is well structured and easy to follow. Enough background is given to put the reader in context. References are extensive and complete.

There are minor issues relating to some terms not translated into English in figures (Figure 1) and tables (Table 2 headers). On page 9, line 277, I guess that "stud" stands for "study".

 

Author Response

Point 1: There are minor issues relating to some terms not translated into English in figures (Figure 1) and tables (Table 2 headers). On page 9, line 277, I guess that "stud" stands for "study".

Response 1: This was corrected.
