Article

The Value of Information Searching against Fake News

José Martins and Alberto Pinto
Entropy 2020, 22(12), 1368
1 LIAAD-INESC TEC and School of Technology and Management, Polytechnic of Leiria, Campus 2, Morro do Lena-Alto do Vieiro, 2411-901 Leiria, Portugal
2 LIAAD-INESC TEC and Faculty of Sciences, University of Porto, R Campo Alegre, 4169-007 Porto, Portugal
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Submission received: 13 November 2020 / Accepted: 19 November 2020 / Published: 3 December 2020
(This article belongs to the Special Issue Complexity in Economic and Social Systems)

Abstract

Inspired by the Daley-Kendall and Goffman-Newill models, we propose an Ignorant-Believer-Unbeliever rumor (or fake news) spreading model with the following characteristics: (i) a network contact between individuals that determines the spread of rumors; (ii) the value (cost versus benefit) for individuals who search for truthful information (learning); (iii) an impact measure that assesses the risk of believing the rumor; (iv) an individual search strategy based on the probability that an individual searches for truthful information; (v) the population search strategy based on the proportion of individuals of the population who decide to search for truthful information; (vi) a payoff for the individuals that depends on the parameters of the model and the strategies of the individuals. Furthermore, we introduce evolutionary information search dynamics and study the dynamics of population search strategies. For each value of searching for information, we compute evolutionarily stable information (ESI) search strategies (occurring in non-cooperative environments), which are the attractors of the information search dynamics, and the optimal information (OI) search strategy (occurring in (eventually forced) cooperative environments) that maximizes the expected information payoff for the population. For rumors that are advantageous or harmful to the population (positive or negative impact), we show the existence of distinct scenarios that depend on the value of searching for truthful information. We fully discuss which evolutionarily stable information (ESI) search strategies and which optimal information (OI) search strategies eradicate (or not) the rumor and the corresponding expected payoffs. As a corollary of our results, a recommendation for legislators and policymakers who aim to eradicate harmful rumors is to make the search for truthful information free or rewarding.

1. Introduction

The theory of rumor (or fake news) spreading proposed by Daley and Kendall [1,2] became known as the DK model, in which a population is divided into three different groups: ignorants—people who are ignorant concerning the rumor; spreaders—people who actively spread the rumor; and stiflers—people who have heard the rumor but are no longer interested in spreading it. Goffman and Newill [3] also published a paper in 1964 that generalized epidemic theory and provided a clear analogy between the spreading of infectious disease and the transmission of ideas. In the subsequent years, several authors developed the theory of rumor spreading, proposing new models using complex networks [4], transitions capable of describing different issues in the transmission process [5], and Lévy noise [6]. In this paper, we develop a game-theoretical approach for a network-extended version of the rumor spreading models proposed in [1,3], using the ideas developed by Bauch and Earn [7] for the well-known Susceptible-Infected-Recovered (SIR) epidemiological model. Furthermore, the topic of fake news is quantified in terms of the value of searching for truthful information (learning), the impact of believing the fake rumor, and the individual’s payoff, which is of paramount importance in academia.
Rumors and fake news can be considered a form of cheating. Individuals might be pushed toward risk-seeking or loss aversion on the basis of their feelings (see [8]). Political news can have a strong effect on stock prices (see [9]). In terms of the outbreak of COVID-19, information on social media can lead to numerous negative behaviors that can reduce vaccination coverage and the use of COVID alert applications (see [10]). On the other hand, rumors and fake news do not necessarily have negative impacts. An extreme example occurred during the Cold War: the propaganda machines from the American and Soviet sides spread numerous rumors about (i) the intentions of their rivals and (ii) the achievements of their countries in several areas (e.g., science, business, and industry). These rumors served the purpose of contributing to improvements in the well-being of both populations. If we see a rumor as an exaggerated piece of information with an essence of truth, then it can have a positive impact on the population. Another example occurred during World War II, when many rumors were spread concerning Nazi Germany. Although some news was fake, it served the purpose of boosting the morale of the Allied population and the troops. Hence, fake news can have either a negative or a positive impact on an individual’s behavior.
In the Ignorant-Believer-Unbeliever (IBU) rumor (or fake news) spreading dynamical model, individuals are spatially distributed in a network and can be either ignorants, believers, or unbelievers regarding a certain rumor. When a rumor appears in a population, individuals will act differently depending on their beliefs about the rumor. If an individual believes the rumor, then he/she will spread the rumor to his/her neighbors. On the other hand, individuals who do not believe the rumor will not act as active spreaders. This spreading dynamical model is fully inspired by the SIR epidemiological model. The impact measure y of the rumor evaluates the gains and losses resulting from individuals’ decisions, provoked by their beliefs in the rumor. The value v of searching for truthful information (learning), instead of just believing the rumor, has natural benefits and costs to the individual. Each ignorant individual has his/her own information search strategy S based on his/her probability of searching for truthful information per unit of time. The population’s information search strategy s is the proportion of ignorant individuals who will choose to search for truthful information per unit of time. For instance, if all ignorant individuals follow the same strategy S (homogeneous strategy), then s = S . For an ignorant individual, we introduce the expected information search payoff, which depends on (i) his/her information search strategy S; (ii) the population information search strategy s; (iii) the value v of searching for truthful information; (iv) the impact measure y of the rumor; and (v) the spread dynamics of the rumor.
A population information search strategy S is a Nash equilibrium if not a single individual has an incentive to change his/her information search strategy to any other strategy S′ ≠ S (see [11]). A population information search strategy S is an evolutionarily stable information search strategy if any small group of individuals that tries to adopt a different strategy S′ ≠ S obtains a lower payoff than those adopting the original strategy S (see [11]). Evolutionarily stable information search strategies are Nash strategies that are practiced by individuals in non-cooperative environments. A population information search strategy S is an optimal information search strategy if it maximizes the payoffs of individuals. Optimal information search strategies are practiced by individuals in (eventually forced) cooperative environments. Here, we fully characterize the triples (v, y, S), where S is (i) a Nash strategy, (ii) an evolutionarily stable information search strategy, or (iii) an optimal information search strategy; v is the value of searching for information; and y is the impact measure of believing a false rumor (fake news). Finally, we introduce evolutionary information searching dynamics following the replicator dynamics theory [11,12,13], where the search strategies evolve over time to increase the payoffs of individuals. Evolutionarily stable information search strategies are the attractors of the dynamics; i.e., over time, the population search strategy tends toward the evolutionarily stable information search strategy (and not necessarily to the optimal information search strategies).
For rumors that are advantageous to the population (positive impact y > 0 ), three distinct scenarios occur, depending on the value of searching for truthful information: (i) for high positive values of searching, both evolutionarily stable information (ESI) and optimal information (OI) search strategies coincide, and all individuals search for truthful information, eradicating the rumor; (ii) (bi-stability) for small positive values of searching, there are two ESI search strategies: either all individuals search for truthful information (eradicating the rumor in non-cooperative environments), or no one searches for truthful information (persistence of the rumor in non-cooperative environments), and the OI search strategy jumps (at the right-boundary of this bi-stability region) from no one searching (persistence of the rumor in cooperative environments) to all individuals searching (eradicating the rumor in cooperative environments); (iii) for negative values of searching, we show that both ESI and OI search strategies coincide, and no individuals search for truthful information, and thus, the rumor persists.
For rumors that are harmful to the population (negative impact y < 0 ), we show the existence of three distinct scenarios that occur, depending on the value of searching for truthful information: (i) for positive values of searching, both ESI and OI search strategies coincide, and all individuals search for truthful information, eradicating the rumor; (ii) for small negative values of searching, the OI search strategy coincides with the critical probability that is necessary to eradicate the rumor, and thus, the rumor is eradicated in cooperative environments, but the ESI search strategy is less successful than the OI search strategy, so, unfortunately, the rumor persists in non-cooperative environments; (iii) for highly negative values of searching, both ESI and OI search strategies coincide, and no individuals search for truthful information, and thus, the rumor persists. Hence, a recommendation for legislators and policymakers who aim to eradicate harmful rumors is to make the search for truthful information free or rewarding, i.e., information search value v ≥ 0. For instance, truthful public social media campaigns can help by making the information easily available.
This paper is organized as follows. In Section 2, we introduce the IBU rumor spreading model for networks. In Section 3, we introduce a utility for individuals that depends on the value of information and the impact of believing the rumor. Nash and evolutionarily stable information search strategies are completely characterized. In Section 4, optimal information search strategies are deduced for different values of information. In Section 5, we introduce evolutionary information search dynamics and study its attractors. Section 6 provides the conclusions of the paper and directions for future research work.

2. The IBU Spreading Model on Regular Networks

Inspired by the work in [1,3], we propose the Ignorant-Believer-Unbeliever (IBU) dynamic model for rumor spreading based on the classical Susceptible-Infected-Recovered (SIR) epidemic model (see also [14,15]). Individuals can be either Ignorants, Believers, or Unbelievers of a certain rumor. The IBU model is directly analogous to the SIR model:
S—Susceptibles correspond to I—Ignorants,
I—Infected individuals correspond to B—Believers of the rumor,
R—Recovered individuals correspond to U—Unbelievers of the rumor.
Individuals who believe the rumor are the active spreaders: i.e., they are the individuals who transmit the rumor to ignorant individuals. Once a believer stops believing the rumor and becomes an unbeliever, he/she will stop transmitting the rumor. Hence, unbelievers are not active spreaders. As in epidemiology [10], a transition corresponding to vaccination is introduced in the model. This transition is due to information search activities that can be voluntarily adopted by an ignorant individual. State transitions in the IBU model are illustrated in Figure 1 and defined by the following reaction scheme:
$$I_i + B_j \xrightarrow{\;\beta\;} B_i + B_j, \qquad B_i \xrightarrow{\;\gamma\;} U_i, \qquad I_i \xrightarrow{\;\nu\;} U_i, \qquad I_i,\, B_i,\, U_i \xrightarrow{\;\mu\;} I_i.$$
The individual state variables I_i, B_i, and U_i ∈ {0, 1} identify the state of individual i, restricted to the condition that the individual belongs to one of the three classes. Hence,
$$I_i + B_i + U_i = 1.$$
The parameters of the model have the following interpretation: β is the rate at which one believer individual spreads the rumor; μ is the mean birth and death rate, and thus, 1/μ is the mean life expectancy at birth; γ is the rate at which a believer stops believing the rumor and stops spreading it, and thus, 1/(γ + μ) is the mean believing/spreading period; ν is the information search rate, i.e., the rate at which an ignorant individual searches for real information and becomes an unbeliever.
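To make the reaction scheme concrete, the following is a minimal Python sketch (not part of the paper) of a discrete-time Monte Carlo simulation of the IBU transitions on a ring network in which every individual has Q neighbors; all parameter values and the ring topology are illustrative assumptions.

```python
import numpy as np

def simulate_ibu(N=1000, Q=4, beta=2.75, gamma=1.0, nu=0.1, mu=0.1,
                 seeds=5, t_max=100.0, dt=0.01, seed=0):
    """Discrete-time Monte Carlo sketch of the IBU reaction scheme on a ring
    network where each of the N individuals has Q neighbors (Q even).
    States: 0 = Ignorant, 1 = Believer, 2 = Unbeliever."""
    rng = np.random.default_rng(seed)
    state = np.zeros(N, dtype=int)                       # everyone starts ignorant
    state[rng.choice(N, size=seeds, replace=False)] = 1  # a few initial believers
    offsets = np.concatenate([np.arange(1, Q // 2 + 1), -np.arange(1, Q // 2 + 1)])
    history = []
    for _ in range(int(t_max / dt)):
        believer = (state == 1).astype(float)
        # number of believer neighbors of each individual on the ring
        nb = sum(np.roll(believer, k) for k in offsets)
        ignorant = state == 0
        # I_i + B_j -> B_i + B_j at rate beta per believer neighbor
        infect = ignorant & (rng.random(N) < beta * nb * dt)
        # I_i -> U_i at rate nu (information search)
        search = ignorant & ~infect & (rng.random(N) < nu * dt)
        # B_i -> U_i at rate gamma (stop believing/spreading)
        stifle = (state == 1) & (rng.random(N) < gamma * dt)
        state[infect] = 1
        state[search] = 2
        state[stifle] = 2
        # I_i, B_i, U_i -> I_i at rate mu (death, replaced by an ignorant newborn)
        state[rng.random(N) < mu * dt] = 0
        history.append([np.mean(state == 0), np.mean(state == 1), np.mean(state == 2)])
    return np.array(history)  # columns: densities of I, B, U over time

if __name__ == "__main__":
    dens = simulate_ibu()
    print("final densities (I, B, U):", np.round(dens[-1], 3))
```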
Let us assume that a population is fixed in size with N individuals; hence,
$$\sum_{i=1}^{N} \left( I_i + B_i + U_i \right) = N.$$
To describe the neighbor structure of individuals in the population, we consider the N × N adjacency matrix J with elements J_{i,j} ∈ {0, 1} such that J_{i,j} = 1 if individual i is a neighbor of j, and J_{i,j} = 0 otherwise. The matrix J is symmetric with zero elements in the diagonal. Let I_1, B_1, U_1, ..., I_i, B_i, U_i, ..., U_N denote a certain state of the population, and let p(I_1, B_1, U_1, ..., I_i, B_i, U_i, ..., U_N, t) be the probability of that state occurring at time t. The time evolution of this probability is described by a master equation [16] given by an ordinary differential equation (ODE) system that models the probabilistic combination of states and the switching between those states, depending on the transition rates of the mathematical model and the spatial structure of the population. Following Glauber’s Ising spin dynamics [17] or Stollenwerk et al.’s reinfection SIRI model [18,19], the master equation for the IBU spreading model is given by
$$\begin{aligned}
\frac{d}{dt}\, p(I_1, B_1, U_1, \dots, I_i, B_i, U_i, \dots, U_N, t)
= {} & \sum_{i=1}^{N} \beta \sum_{j=1}^{N} J_{ij} B_j \,(1 - I_i)\; p(I_1, B_1, U_1, \dots, 1 - I_i, 1 - B_i, U_i, \dots, U_N, t) \\
& + \sum_{i=1}^{N} \gamma \,(1 - B_i)\; p(I_1, B_1, U_1, \dots, I_i, 1 - B_i, 1 - U_i, \dots, U_N, t) \\
& + \sum_{i=1}^{N} \nu \,(1 - I_i)\; p(I_1, B_1, U_1, \dots, 1 - I_i, B_i, 1 - U_i, \dots, U_N, t) \\
& + \sum_{i=1}^{N} \mu \,\Big[ (1 - I_i)\; p(I_1, B_1, U_1, \dots, 1 - I_i, B_i, U_i, \dots, U_N, t) \\
& \qquad\quad + (1 - B_i)\; p(I_1, B_1, U_1, \dots, 1 - I_i, 1 - B_i, U_i, \dots, U_N, t) \\
& \qquad\quad + (1 - U_i)\; p(I_1, B_1, U_1, \dots, 1 - I_i, B_i, 1 - U_i, \dots, U_N, t) \Big] \\
& - \sum_{i=1}^{N} \Big( \beta \sum_{j=1}^{N} J_{ij} B_j I_i + \gamma B_i + \nu I_i + \mu\,(I_i + B_i + U_i) \Big)\; p(\dots, I_i, B_i, U_i, \dots, t).
\end{aligned}$$
The expectation value for the total number of ignorant individuals in the population at a given time t is defined by
$$\langle I \rangle = \sum_{I_1=0}^{1} \sum_{B_1=0}^{1} \sum_{U_1=0}^{1} \sum_{I_2=0}^{1} \cdots \sum_{U_N=0}^{1} \; \sum_{i=1}^{N} I_i \cdot p(I_1, B_1, U_1, I_2, \dots, U_N, t),$$
and its time evolution is given by
$$\frac{d}{dt}\langle I \rangle = \sum_{I_1=0}^{1} \sum_{B_1=0}^{1} \sum_{U_1=0}^{1} \sum_{I_2=0}^{1} \cdots \sum_{U_N=0}^{1} \; \sum_{i=1}^{N} I_i \cdot \frac{d}{dt}\, p(I_1, B_1, U_1, I_2, \dots, U_N, t).$$
Inserting the master equation into Equation (3), after some computations, we obtain the dynamic equation for the mean quantity of ignorant individuals in the population:
$$\frac{d}{dt}\langle I \rangle = -\beta\,\langle I B \rangle_1 - \nu\,\langle I \rangle - \mu\,\langle I \rangle + \mu\,(\langle I \rangle + \langle B \rangle + \langle U \rangle).$$
Similarly, for the expectation value of the total number of believers and unbelievers, we obtain the following dynamic equations:
$$\frac{d}{dt}\langle B \rangle = \beta\,\langle I B \rangle_1 - \gamma\,\langle B \rangle - \mu\,\langle B \rangle$$
$$\frac{d}{dt}\langle U \rangle = \gamma\,\langle B \rangle + \nu\,\langle I \rangle - \mu\,\langle U \rangle.$$
The dynamics of the first moments depend on the second moment:
$$\langle I B \rangle_1 = \sum_{I_1=0}^{1} \sum_{B_1=0}^{1} \sum_{U_1=0}^{1} \sum_{I_2=0}^{1} \cdots \sum_{U_N=0}^{1} \; \sum_{i=1}^{N} \sum_{j=1}^{N} (J^1)_{ij}\, I_i B_j \cdot p(I_1, B_1, U_1, I_2, \dots, U_N, t),$$
which is the mean number of ignorant and believer neighbors. We can now proceed by computing the dynamic equation for the second moment ⟨IB⟩_1, or we can close the ODE system (4)–(6) by approximating ⟨IB⟩_1 by a mathematical formula involving only the first moments ⟨I⟩, ⟨B⟩, and ⟨U⟩. Here, we close the ODE system (4)–(6) using the mean-field approximation.
Let us assume that the individuals in the population are distributed in a regular network, where all individuals have the same number of neighbors Q, and hence,
$$\sum_{j=1}^{N} J_{ij} = Q.$$
In the mean-field approximation, the exact number of believers who are neighbors of a certain individual i is approximated by the average of the number of believers in the entire population:
$$\sum_{j=1}^{N} J_{ij} B_j \approx Q\,\frac{\langle B \rangle}{N}, \qquad i = 1, \dots, N.$$
Hence, the second moment I B 1 is approximated by
$$\langle I B \rangle_1 \approx \frac{Q}{N}\,\langle I \rangle \langle B \rangle,$$
and the ODE system (4)–(6) transforms into the closed system
$$\frac{d}{dt}\langle I \rangle = -\beta\,\frac{Q}{N}\,\langle I \rangle \langle B \rangle - \nu\,\langle I \rangle - \mu\,\langle I \rangle + \mu\,(\langle I \rangle + \langle B \rangle + \langle U \rangle)$$
$$\frac{d}{dt}\langle B \rangle = \beta\,\frac{Q}{N}\,\langle I \rangle \langle B \rangle - \gamma\,\langle B \rangle - \mu\,\langle B \rangle$$
$$\frac{d}{dt}\langle U \rangle = \gamma\,\langle B \rangle + \nu\,\langle I \rangle - \mu\,\langle U \rangle.$$
We observe that more complex ODEs can be obtained by using higher-order moment closures (see [18,20]).
Next, let the normalized state variables I(t) = ⟨I⟩/N, B(t) = ⟨B⟩/N and U(t) = ⟨U⟩/N denote the mean densities of ignorant, believer, and unbeliever individuals in the population; then, we rescale the time variable as τ = (γ + μ) t, i.e., we measure time in units of the mean believing/spreading period 1/(γ + μ). Hence, I(τ) + B(τ) + U(τ) = 1, and Equations (7)–(9) are rescaled to the following ODE system:
$$\frac{dI}{d\tau} = -R_0\, B I - (s + f)\, I + f$$
$$\frac{dB}{d\tau} = R_0\, B I - B$$
$$\frac{dU}{d\tau} = (1 - f)\, B + s I - f U;$$
where
(a)
f = μ/(γ + μ) > 0, typically very small, is the mean birth and death rate in the time unit given by the mean believing/spreading period (τ);
(b)
s = ν / ( γ + μ ) is the information search rate in the time unit ( τ ); and
(c)
R 0 = β Q / ( γ + μ ) is the so-called basic reproductive number R 0 (see [21]) in epidemiological models: i.e., R 0 is the rate at which the expected number of ignorant individuals become believers through the influence of the expected number of believer/spreader individuals in the time unit ( τ ).
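As a quick numerical companion to the rescaled system (10)–(12) (a sketch, not part of the paper), the ODEs can be integrated with scipy; R_0 = 20 and f = 0.01 match the values used in the figures, while the search rate s = 0.05 is an arbitrary assumption.

```python
import numpy as np
from scipy.integrate import solve_ivp

def ibu_mean_field(tau, x, R0, s, f):
    """Right-hand side of the rescaled IBU system (10)-(12)."""
    I, B, U = x
    dI = -R0 * B * I - (s + f) * I + f
    dB = R0 * B * I - B
    dU = (1 - f) * B + s * I - f * U
    return [dI, dB, dU]

R0, f = 20.0, 0.01          # values used in the paper's figures
s = 0.05                    # illustrative information search rate
x0 = [0.99, 0.01, 0.0]      # almost everyone ignorant, a few believers
sol = solve_ivp(ibu_mean_field, (0.0, 2000.0), x0, args=(R0, s, f),
                rtol=1e-8, atol=1e-10)

I_inf, B_inf, U_inf = sol.y[:, -1]
print(f"long-run densities: I={I_inf:.4f}, B={B_inf:.4f}, U={U_inf:.4f}")
# stationary values derived in the following subsection, for comparison
print("predicted I* =", 1 / R0, " B* =", f * (1 - 1 / R0) - s / R0)
```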

Stable Stationary States

Let the stationary values of ignorants, believers, and unbelievers of the rumor be denoted by I * , B * , and U * , respectively.
The stationary states of the ODE system (10)–(12) are given by
$$I_0^* = \frac{f}{s + f}, \qquad B_0^* = 0 \qquad \text{and} \qquad U_0^* = \frac{s}{s + f},$$
and by
$$I^* = \frac{1}{R_0}$$
$$B^* = f\left(1 - \frac{1}{R_0}\right) - \frac{s}{R_0} \;\ge\; 0$$
$$U^* = (1 - f)\left(1 - \frac{1}{R_0}\right) + \frac{s}{R_0}.$$
From Equation (15), we observe that the believers’ stationary state decreases linearly with the information search rate s (see also Figure 2), and the critical information search rate, which is the rate at which the believers’ stationary state vanishes, is
$$s_C = f\,(R_0 - 1).$$
Since f is a small number, we assume in this paper that 0 < s_C = f(R_0 − 1) < 1. We observe that the stationary states (I*, B*, U*) only hold for s ≤ s_C because of the natural restriction that B* ≥ 0. If s = s_C, then there is a single equilibrium (I_0*, B_0*, U_0*) = (I*, B*, U*).
Lemma 1.
For s < s C , the stationary states ( I 0 * , B 0 * , U 0 * ) are unstable, and the stationary states ( I * , B * , U * ) are stable. Furthermore, for s > s C , the stationary states ( I 0 * , B 0 * , U 0 * ) are stable.
Proof. 
The Jacobian matrix of the ODE system (10)–(12) is given by
$$J(I, B, U) = \begin{pmatrix} -R_0 B - s - f & -R_0 I & 0 \\ R_0 B & R_0 I - 1 & 0 \\ s & 1 - f & -f \end{pmatrix}$$
The eigenvalues of the Jacobian matrix J(I_0*, B_0*, U_0*) are
$$\lambda_1 = -f, \qquad \lambda_2 = -s - f \qquad \text{and} \qquad \lambda_3 = \frac{f(R_0 - 1) - s}{s + f}.$$
Hence, all eigenvalues have a negative real part if and only if s > f(R_0 − 1) = s_C. The eigenvalues of the Jacobian matrix J(I*, B*, U*) are
$$\lambda_1 = -f, \qquad \lambda_2 = -\tfrac{1}{2} f R_0 - \tfrac{1}{2}\sqrt{f^2 R_0^2 + 4s + 4f - 4 f R_0} \qquad \text{and} \qquad \lambda_3 = -\tfrac{1}{2} f R_0 + \tfrac{1}{2}\sqrt{f^2 R_0^2 + 4s + 4f - 4 f R_0}.$$
Hence, all eigenvalues have a negative real part if and only if
$$f^2 R_0^2 > f^2 R_0^2 + 4s + 4f - 4 f R_0.$$
This is equivalent to s < f ( R 0 1 ) = s C .  □
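The following sketch (assumed parameter values, not from the paper) reproduces the stability statement of Lemma 1 numerically: it evaluates the Jacobian at both stationary states and checks the sign of the eigenvalues for an information search rate below and above s_C.

```python
import numpy as np

def stationary_states(R0, f, s):
    """Rumor-free state (I0*, B0*, U0*) and endemic state (I*, B*, U*)."""
    free = (f / (s + f), 0.0, s / (s + f))
    endemic = (1 / R0, f * (1 - 1 / R0) - s / R0, (1 - f) * (1 - 1 / R0) + s / R0)
    return free, endemic

def jacobian(I, B, U, R0, f, s):
    """Jacobian of the rescaled system (10)-(12)."""
    return np.array([
        [-R0 * B - s - f, -R0 * I,     0.0],
        [ R0 * B,          R0 * I - 1, 0.0],
        [ s,               1 - f,      -f ],
    ])

R0, f = 20.0, 0.01
sC = f * (R0 - 1)                    # critical search rate, 0.19 here
for s in (0.05, 0.30):               # one value below s_C, one above
    free, endemic = stationary_states(R0, f, s)
    ev_free = np.linalg.eigvals(jacobian(*free, R0, f, s))
    print(f"s={s:.2f} (s_C={sC:.2f})  rumor-free eigenvalues:", np.round(ev_free, 4))
    if s < sC:                       # endemic state only admissible when B* >= 0
        ev_end = np.linalg.eigvals(jacobian(*endemic, R0, f, s))
        print("           endemic eigenvalues:   ", np.round(ev_end, 4))
```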

3. Nash and Evolutionarily Stable Information Search Strategies

In this section, we consider a game in which individuals have to decide between searching and not searching for real information to avoid believing the false rumor. Here, we define the Nash and evolutionarily stable information search strategies (see [7,10,11]).
S denotes the probability that an ignorant individual will choose to search for information. This probability S is the individual’s information search strategy in the game. The uptake level of searching for information in the population is the proportion of individuals who will choose to search for real information, i.e., the mean of all information search strategies. We denote the uptake level of searching for information by s, i.e., the population information search strategy.
Let b_L and c_L denote the benefits and the costs of searching for information, respectively, and let v = b_L − c_L denote the value of the information search. We define the payoff of an ignorant individual who searches for real information and does not believe in the false rumor by v.
Let b_B and c_B denote the benefits and the costs of believing the rumor, respectively, and let y = b_B − c_B denote the impact measure that assesses the risk of believing the rumor.
Let P(s) denote the probability that an ignorant individual who does not search for real information becomes a believer when a proportion s of the individuals in the population search for information. The probability P(s) is computed from the stable stationary states of ignorant and believer individuals obtained in Lemma 1:
$$P(s) = \frac{R_0 B^* I^*}{R_0 B^* I^* + f I^*} = \frac{f(R_0 - 1) - s}{f R_0 - s}, \qquad \text{if } s < s_C.$$
If s ≥ s_C, then B* = 0, and thus, P(s) = 0 (see Figure 2). In particular, P(0) = (R_0 − 1)/R_0. We define the payoff of an ignorant individual who does not search for real information and believes the rumor by
y P(s).
The expected information search payoff E ( S , s ) of an individual with an information search strategy S in a population with an information search strategy s is
$$E(S, s) = v\,S + y\,P(s)\,(1 - S) = y\,P(s) + S\,\big(v - y\,P(s)\big).$$
Nash and evolutionarily stable information search strategies are the typical strategies studied in game theory (see [10,12]).
A population information search strategy s = S * is an information search Nash equilibrium if
$$\Delta_{S^* \to S} = E(S, S^*) - E(S^*, S^*) \le 0,$$
for every strategy S ∈ [0, 1]. By Equation (19), an information search strategy S* is a Nash equilibrium if and only if
$$(S - S^*)\,\big(v - y\,P(S^*)\big) \le 0.$$
Let W ≡ y (R_0 − 1)/R_0 be the threshold for believing a rumor, where (R_0 − 1)/R_0 = P(0). The remark below follows, for instance, from Lemma 1 in [10].
Remark 1.
An information strategy S * is a Nash equilibrium if and only if S * satisfies one of the following conditions:
(a) 
S* = 0 and v ≤ W, with
E(0, 0) = W; or
(b) 
S* ∈ (0, 1) and v = y P(S*), with P(S*) < P(0) and
E(S*, S*) = y P(S*) = v; or
(c) 
S* = 1 and v ≥ 0, with
E ( 1 , 1 ) = v .
Hence, for every S* > 0, E(S*, S*) = v is constant, with |v| < |W|. We observe that (i) for every S* ∈ (0, s_C), P(S*) > 0, and (ii) for every S* ∈ [s_C, 1), P(S*) = 0, and thus, E(S*, S*) = 0 = v. In Figure 3, we plot the Nash information search strategies s = S* for each mixed Nash strategy with the value of information v = y P(s) and for the pure Nash strategies S* = 0 and S* = 1.
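A small numerical illustration (a sketch with assumed parameters R_0 = 20, f = 0.01 and impact y = -1; not part of the paper) of the payoff E(S, s) of Equation (19) and of the Nash conditions of Remark 1; the closed-form inversion of v = y P(S*) used below follows from Equation (18).

```python
import numpy as np

R0, f = 20.0, 0.01
sC = f * (R0 - 1)

def P(s):
    """Probability that a non-searching ignorant eventually believes the rumor."""
    return (f * (R0 - 1) - s) / (f * R0 - s) if s < sC else 0.0

def E(S, s, v, y):
    """Expected information search payoff of Equation (19)."""
    return v * S + y * P(s) * (1 - S)

def is_nash(S_star, v, y, grid=np.linspace(0, 1, 1001), tol=1e-12):
    """Check that no unilateral deviation S improves on S_star."""
    base = E(S_star, S_star, v, y)
    return all(E(S, S_star, v, y) <= base + tol for S in grid)

y = -1.0                              # harmful rumor, as in the left panels of the figures
W = y * (R0 - 1) / R0                 # rumor belief threshold, -0.95 here
for v, S_star in [(-2.0, 0.0),        # v <= W: never searching is Nash
                  (-0.5, None),       # W < v < 0: mixed Nash with v = y P(S*)
                  ( 0.5, 1.0)]:       # v >= 0: always searching is Nash
    if S_star is None:                # invert v = y P(S*) on (0, s_C)
        p = v / y
        S_star = (f * (R0 - 1) - p * f * R0) / (1 - p)
    print(f"v={v:+.2f}: S*={S_star:.4f}  Nash? {is_nash(S_star, v, y)}")
```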
To define an evolutionarily stable information search strategy, we start by assuming that all individuals in the population opt for an individual information search strategy S. If a group of size ε chooses a different individual information search strategy S′, then the population information search strategy becomes
$$s(\varepsilon) = (1 - \varepsilon)\,S + \varepsilon\,S'.$$
A population information search strategy S* is a left evolutionarily stable information search strategy if there is an ε_0 > 0 such that for every ε ∈ (0, ε_0) and for every S′ < S*,
$$\Delta E_{S^* \to S'}(s(\varepsilon)) = E(S', s(\varepsilon)) - E(S^*, s(\varepsilon)) = (S' - S^*)\,\big(v - y\,P(s(\varepsilon))\big) < 0.$$
The definition of a right evolutionarily stable information search strategy is similar. A population information search strategy S * is an evolutionarily stable information search strategy if it is a left and right evolutionarily stable strategy.
Theorem 1.
A Nash search strategy S * is an evolutionarily stable information (ESI) search strategy if and only if S * satisfies one of the following conditions:
(i) 
For positive impact measures y ≥ 0,
(a) 
S * = 0 and v < y P ( 0 ) ; or
(b) 
S * = 1 and v > 0 .
(ii) 
For negative impact measures y ≤ 0,
(a) 
S* = 0 and v ≤ y P(0); or
(b) 
S* ∈ (0, s_C) and v = y P(S*); or
(c) 
S * = 1 and v > 0 .
Moreover, S* is a Nash equilibrium and a left (and not a right) evolutionarily stable information search strategy if and only if S* = s_C, v = 0, and y < 0.
Hence, S* is a Nash equilibrium and not an evolutionarily stable information search strategy if S* ∈ [0, 1], v = y P(S*), and y > 0.
In Figure 3, we plot the evolutionarily stable information search strategies s = S * for each value v of searching for information.
Proof. 
The proof follows from Lemma 2 in [10], noting that v is negative and P(S*) is strictly decreasing for S* ∈ (0, s_C).  □

4. Optimal Strategies

In this section, we compute the optimal information (OI) search strategy for every value of searching for information and every value of the rumor impact measure, under the assumption that all individuals adopt the same information search strategy s = S (homogeneous strategy). Let s C = f ( R 0 1 ) be the critical information search strategy. Let
$$\tilde{E}(s) \equiv \tilde{E}(s; v) = v\,s + y\,P(s)\,(1 - s),$$
for 0 ≤ s ≤ s_C.
Lemma 2.
Assume that f is sufficiently small, where f < 1 / R 0 .
(i) 
E˜″(s) < 0 for positive impact measure values y > 0;
(ii) 
E˜″(s) = 0 for null impact measure values y = 0; and
(iii) 
E˜″(s) > 0 for negative impact measure values y < 0.
Proof. 
We have
$$\tilde{E}''(s) = y\,\big(P''(s)\,(1 - s) - 2\,P'(s)\big).$$
We observe that E˜″(s)/y < 0 is equivalent to 2 P′(s) > P″(s)(1 − s). By Equation (18), we have P′(s) = −f/(f R_0 − s)² and P″(s) = −2f/(f R_0 − s)³. Thus, 2 P′(s) > P″(s)(1 − s) is equivalent to
$$-\frac{2f}{(f R_0 - s)^2} > -\frac{2f\,(1 - s)}{(f R_0 - s)^3}.$$
Hence, we conclude that E˜″(s)/y < 0 is equivalent to f R_0 < 1.  □
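The computation above can be double-checked symbolically (a sketch using sympy, not part of the paper): the ratio E˜″(s)/y, multiplied by the positive factor (f R_0 − s)³, collapses to 2f(f R_0 − 1), which is negative exactly when f R_0 < 1.

```python
import sympy as sp

s, v, y, f, R0 = sp.symbols('s v y f R_0', real=True)
P = (f * (R0 - 1) - s) / (f * R0 - s)      # P(s) from Equation (18)
E_tilde = v * s + y * P * (1 - s)          # E~(s; v) as defined in Section 4

second = sp.diff(E_tilde, s, 2)            # E~''(s); the linear v*s term drops out
ratio = sp.simplify(second / y * (f * R0 - s) ** 3)
# prints 0, confirming E~''(s)/y * (f*R_0 - s)**3 == 2*f*(f*R_0 - 1)
print(sp.simplify(ratio - 2 * f * (f * R0 - 1)))
```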
By Equation (19), the expected information search payoff is given by
$$E(s; v) \equiv E(s, s; v) = \begin{cases} \tilde{E}(s; v) & \text{if } s \le s_C \\ v\,s & \text{if } s > s_C. \end{cases}$$
Since P ( s C ) = 0 , we note that E ˜ ( s C ; v ) = v s C , and thus, E is a continuous function (see also Figure 4). The optimal information (OI) search strategy (or strategies, eventually) is
$$s_O \equiv s_O(v) = \arg\max_{0 \le s \le 1} E(s; v).$$
The expected payoff of the optimal information search strategy is E O ( v ) = E ( s O ( v ) ; v ) . Let s E S I ( v ) denote the evolutionarily stable information search strategy (or strategies, eventually). The expected payoff of the evolutionarily stable information search strategy is E E S I ( v ) = E ( s E S I ( v ) ; v ) . Let s N a s h ( v ) denote the Nash search strategy (or strategies, eventually) that are not evolutionarily stable information search strategies. The Nash expected payoff is E N a s h ( v ) = E ( s N a s h ( v ) ; v ) .
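A brute-force sketch (assumed parameters; not part of the paper) that recovers the optimal information search strategy s_O(v) by maximizing the piecewise payoff E(s; v) defined above on a fine grid, for an advantageous (y = 1) and a harmful (y = -1) rumor.

```python
import numpy as np

R0, f = 20.0, 0.01
sC = f * (R0 - 1)

def P(s):
    return (f * (R0 - 1) - s) / (f * R0 - s) if s < sC else 0.0

def E(s, v, y):
    """Homogeneous-population payoff E(s; v) = E(s, s; v)."""
    return v * s + y * P(s) * (1 - s)

def s_optimal(v, y, grid=np.linspace(0.0, 1.0, 20001)):
    """Optimal information search strategy by exhaustive grid search."""
    payoffs = np.array([E(s, v, y) for s in grid])
    return grid[np.argmax(payoffs)]

for y in (+1.0, -1.0):
    W = y * (R0 - 1) / R0
    V = y / (f * R0)
    print(f"impact y={y:+.0f}  (W={W:+.2f}, V={V:+.2f}, s_C={sC:.2f})")
    for v in (-6.0, -2.0, -0.5, 0.5, 1.5):
        print(f"  v={v:+.1f} -> s_O = {s_optimal(v, y):.3f}")
```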

4.1. The OI Search Strategy for a Positive Impact Measure

Throughout this section, let us assume that the impact measure is positive y > 0. Hence, by Lemma 2 (see also Figure 4), E˜ is strictly concave, and thus, the optimal information search strategy is a pure strategy (0 or 1) or a mixed strategy s_C or s_M(v), where s_M(v) is the interior maximum point of E˜(s; v) (when it exists).
Let U = y (f R_0 (R_0 − 1) + 1)/(f R_0²) be the positive information search threshold. Note that 0 < W < U.
Lemma 3.
Assume that f is small, where f < 1 / R 0 . For a positive impact measure y > 0 , the optimal information (OI) search strategy is
(a) 
for v < W , s O ( v ) = 0 , with E ( s O ( v ) ) = W ;
(b) 
s_O(W) ∈ {0, 1}, with E(s_O(W)) = W; and
(c) 
for v > W, s_O(v) = 1, with E(s_O(v)) = v.
For a null impact measure y = 0, optimal information (OI) search strategies are similar to those described above, observing that s_O(0) ∈ [0, 1] (note that W = 0).
Proof. 
Since E˜ is strictly concave, if E′(0) ≤ 0, then 0 or 1 is the maximum point of E. Hence, let us first consider the case E′(0) ≤ 0. The first derivative of the expected payoff is
$$E'(s) = v - y\,\big(P(s) + (s - 1)\,P'(s)\big).$$
Since P(0) = (R_0 − 1)/R_0 and P′(0) = −1/(f R_0²), we have E′(0) = v − U. Hence, E′(0) ≤ 0 if and only if v ≤ U. Therefore, for v ≤ U, 0 is the maximum point of E when E(0) ≥ E(1), and 1 is the maximum point of E when E(0) ≤ E(1). Recall that E(0) = W and E(1) = v. Hence, E(0) ≥ E(1) if and only if v ≤ W.
Finally, for v > U > 0, let us prove that 1 is the maximum of E. This follows from the confirmation that E(s) < E(1) for every s ≤ s_C. We observe that E(s) < E(1) if and only if
$$s < f R_0 + \frac{y f}{v - y}.$$
Since s_C = f(R_0 − 1) and v > U > y (because f R_0 < 1), we confirm that
$$s_C = f(R_0 - 1) < f R_0 + \frac{y f}{v - y},$$
since y f/(v − y) > 0 > −f. Hence,
$$s \le s_C < f R_0 + \frac{y f}{v - y},$$
which concludes the proof.  □
Remark 2.
Assume that f is small, where f < 1 / R 0 , and the impact measure is positive y > 0 .
(a) 
s E S I ( v ) = s O ( v ) = 0 , for v < 0 ;
(b) 
0 = s_O(0) < s_Nash(0) ∈ [s_C, 1];
(c) 
0 = s_O(v) < s_Nash(v) < 1 and s_ESI(v) ∈ {0, 1}, for 0 < v < W;
(d) 
0 = s_Nash(W) < s_ESI(W) = 1 and s_O(W) ∈ {0, 1}; and
(e) 
s E S I ( v ) = s O ( v ) = 1 , for v > W .
For the null impact measure y = 0, the comparison is similar to that described above, observing that s_O(0), s_Nash(0) ∈ [0, 1] (note that W = 0).
By Lemma 3 and Remark 2, (i) for small values of the information search v ≤ W, the optimal strategy s_O = 0 coincides with the evolutionarily stable information search strategy s_ESI = 0, in which individuals never search for truthful information; (ii) for positive values of the information search v ≥ W, the optimal strategy s_O = 1 coincides with the evolutionarily stable information search strategy s_ESI = 1, in which individuals always search for truthful information. In Figure 5, we compare the expected payoff of the evolutionarily stable information search E_ESI(v) with that of the optimal information search, denoted by E_O(v).

4.2. The OI Search Strategy for a Negative Impact Measure

Throughout this section, let us assume that the impact measure is negative y < 0. Hence, by Lemma 2 (see also Figure 4), E˜ is strictly convex, and thus, the optimal information search strategy is a pure strategy (0 or 1) or, for v ≤ 0, the mixed strategy s_C. Let V = y/(f R_0) < 0 be the negative information search threshold.
Lemma 4.
Assume that f is small, where f < 1 / R 0 . For a negative impact measure y < 0 , the optimal information (OI) search strategy is
(a) 
for v < V , s O ( v ) = 0 , with E ( s O ( v ) ) = W ;
(b) 
s_O(V) ∈ {0, s_C}, with E(s_O(V)) = W;
(c) 
for V < v < 0, s_O(v) = s_C, with
E(s_O(v)) = v s_C = v f (R_0 − 1) = v W/V;
(d) 
s_O(0) ∈ [s_C, 1], with E(s_O(0)) = 0; and
(e) 
for v > 0 , s O ( v ) = 1 , with E ( s O ( v ) ) = v ;
Proof. 
Since E ˜ ( s ; v ) is a linear function in v, there is only one value V = y / ( f R 0 ) < 0 such that E ˜ ( 0 ; V ) = E ˜ ( s C ; V ) . Furthermore, E ( 0 ; v ) > E ( s C ; v ) if and only if v < V . (a) If v < V < 0 , E ( 0 ; v ) > E ( s C ; v ) and, by linearity, E ( s C ; v ) > E ( 1 ; v ) . Hence, s O ( v ) = 0 . (b) If v = V < 0 , E ( 0 ; V ) = E ( s C ; V ) and, by linearity, E ( s C ; V ) > E ( 1 ; V ) . Hence, s O ( V ) = 0 or s O ( V ) = s C . (c) If V < v < 0 , E ( s C ; v ) > E ( 0 ; v ) and, by linearity, E ( s C ; v ) > E ( 1 ; v ) . Hence, s O ( v ) = s C . (d) If v = 0 , E ( s C ; v ) > E ( 0 ; v ) and, by linearity, E ( s ; 0 ) = E ( s C ; 0 ) for all s [ s C , 1 ] . Hence, s O ( 0 ) [ s C , 1 ] . (e) If v > 0 , E ( s C ; v ) > E ( 0 ; v ) and, by linearity, E ( 1 ; v ) > E ( s C ; v ) . Hence, s O ( v ) = 1 .  □
Since V = y/(f R_0) < y (R_0 − 1)/R_0 = y P(0) ≡ W, we state the following remark.
Remark 3.
Assume that f is small, where f < 1 / R 0 and the impact measure is negative y < 0 .
(a) 
s E S I ( v ) = s O ( v ) = 0 , for v < V ;
(b) 
s_ESI(V) = 0, s_O(V) ∈ {0, s_C};
(c) 
s E S I ( v ) = 0 < s C = s O ( v ) , for V < v < W ;
(d) 
0 < s_ESI(v) = P⁻¹(v/y) < s_C = s_O(v), for W < v < 0;
(e) 
s_Nash(0), s_O(0) ∈ [s_C, 1]; and
(f) 
s E S I ( v ) = s O ( v ) = 1 , for v > 0 ;
By Lemma 4 and Remark 3, (i) for small values of searching for information v ≤ V, the optimal strategy s_O = 0 coincides with the evolutionarily stable information search strategy s_ESI = 0, in which individuals never search for truthful information; (ii) for positive values of searching for information v > 0, the optimal strategy s_O = 1 coincides with the evolutionarily stable information search strategy s_ESI = 1, in which individuals always search for truthful information; (iii) for intermediate values of searching for information V < v < 0, the optimal strategy coincides with the critical information search rate s_C, which eradicates the rumor. This value is above the value given by the evolutionarily stable information search strategy s_ESI that is not able to eradicate the rumor and yields a lower expected information search payoff E_ESI(v) < E_O(v) (see Figure 5).
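The payoff gap described in item (iii) can be seen directly with the following sketch (assumed parameters; the closed forms for s_ESI and s_O restate Theorem 1 and Lemma 4 and are not new results): for V < v < 0, the evolutionarily stable strategy earns strictly less than the optimal strategy s_C.

```python
import numpy as np

R0, f, y = 20.0, 0.01, -1.0
sC = f * (R0 - 1)
W = y * (R0 - 1) / R0          # -0.95
V = y / (f * R0)               # -5.0

def P(s):
    return (f * (R0 - 1) - s) / (f * R0 - s) if s < sC else 0.0

def E(s, v):
    return v * s + y * P(s) * (1 - s)

def s_esi(v):
    """ESI strategy for y < 0 (Theorem 1(ii))."""
    if v <= W:
        return 0.0
    if v < 0:                  # mixed ESI: v = y P(S*), inverted in closed form
        p = v / y
        return (f * (R0 - 1) - p * f * R0) / (1 - p)
    return 1.0

def s_oi(v):
    """OI strategy for y < 0 (Lemma 4)."""
    if v < V:
        return 0.0
    return sC if v < 0 else 1.0

for v in (-6.0, -3.0, -0.5, -0.1, 0.5):
    se, so = s_esi(v), s_oi(v)
    print(f"v={v:+.1f}: s_ESI={se:.3f} (payoff {E(se, v):+.3f})   "
          f"s_OI={so:.3f} (payoff {E(so, v):+.3f})")
```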

5. Evolutionary Information Search Dynamics

Evolutionary information search dynamics is introduced here (see [11,12,13]), under the assumption that all individuals adopt the same information search strategy s = S (homogeneous strategy).
Consider a case in which a small group of individuals of size ε modify their search strategy from the population information search strategy S to S + Δ S . The change in the expected information search payoff satisfies
$$\frac{\Delta E_{S \to S + \Delta S}}{\Delta S} = \frac{E(S + \Delta S,\, s(\varepsilon)) - E(S,\, s(\varepsilon))}{\Delta S} = v - y\,P(s(\varepsilon)),$$
where s(ε) = (1 − ε) S + ε (S + ΔS) = S + ε ΔS defines the new population search strategy.
Let s ( τ ) be the population information search strategy adopted at time τ . Hence, we define the evolutionary information search dynamics by
$$\frac{ds}{d\tau} = \eta(s)\,\lim_{\Delta S \to 0} \frac{\Delta E_{S \to S + \Delta S}}{\Delta S} = \eta(s)\,\big(v - y\,P(s)\big),$$
where η(s) ≥ 0 is a smooth map that measures the information search strategy adaptation speed of the population.
A point s is a dynamic equilibrium of the evolutionary information search dynamics if and only if d s / d τ = 0 . Hence, a point s is a dynamic equilibrium if and only if
( i ) η ( s ) = 0 or ( ii ) v = y P ( s ) .
Recall that f is assumed to be small, and thus, s_C = f(R_0 − 1) < 1; P(1) = 0, and W = y P(0) is the rumor belief threshold. As usual (see [10]), we assume the following for η: (i) η(s) > 0, for all 0 < s < 1; (ii) if v < W, then η(0) = 0 and η′(0) > 0; (iii) if v > W, then η(0) > 0; (iv) if v > 0, then η(1) = 0 and η′(1) < 0; and (v) if v < 0, then η(1) > 0.
We use the standard definition of left, right, and global attractors for a dynamic equilibrium p (see [10]).
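Here is a minimal numerical sketch (not from the paper) of the search dynamics ds/dτ = η(s)(v − y P(s)). The adaptation-speed map η(s) = s(1 − s) is only an illustrative choice: it keeps s in [0, 1] but makes both boundaries equilibria, so the integration is started from an interior strategy; all parameter values are assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

R0, f = 20.0, 0.01
sC = f * (R0 - 1)

def P(s):
    return (f * (R0 - 1) - s) / (f * R0 - s) if s < sC else 0.0

def search_dynamics(tau, state, v, y):
    """ds/dtau = eta(s) * (v - y P(s)) with the replicator-style eta(s) = s(1-s)."""
    s = float(state[0])
    eta = s * (1.0 - s)
    return [eta * (v - y * P(s))]

def long_run_strategy(v, y, s0=0.10, t_max=5000.0):
    sol = solve_ivp(search_dynamics, (0.0, t_max), [s0], args=(v, y),
                    rtol=1e-8, atol=1e-10)
    return sol.y[0, -1]

y = -1.0                                       # harmful rumor
for v in (-2.0, -0.5, 0.5):
    s_inf = long_run_strategy(v, y)
    print(f"v={v:+.1f}: population strategy tends to {s_inf:.3f} (s_C = {sC:.2f})")
```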
Theorem 2.
Assume that f is small, where f ( R 0 1 ) < 1 .
(i) 
For negative impact measures y ≤ 0, the dynamic equilibria of the evolutionary information search dynamics are as follows:
(a) 
for v < 0 , the evolutionarily stable information search strategy s E S I ( v ) is a global attractor;
(b) 
for v = 0 , the Nash information search strategies s N a s h ( v ) [ s C , 1 ] are equilibria points, and s C is a left (and not right) attractor;
(c) 
for v > 0 , the evolutionarily stable information search strategy s E S I ( v ) = 1 is a global attractor.
(ii) 
For positive impact measures y ≥ 0, the dynamic equilibria of the evolutionary information search dynamics are as follows:
(a) 
for v < W , the evolutionarily stable information search strategy s E S I ( v ) = 0 is an attractor (also global for v < 0 );
(b) 
for 0 ≤ v ≤ W, the Nash information search strategies s_Nash(v) are dynamical equilibria, but not attractors; and
(c) 
for v > 0 , the evolutionarily stable information search strategy s E S I ( v ) = 1 is an attractor (also global for v > W ).
In Figure 6, we show the dynamics described above. For advantageous rumors, we observe the existence of a bi-stability region, where the evolutionarily stable information search strategies in which no one searches (persistence of the rumor) or everyone searches (eradication of the rumor) are the attractors, and the Nash equilibria form the boundary of the basins of attraction of the two attractors. Hence, the Nash equilibria are unstable equilibria and are thus not observed (at least for large periods), but have the interesting property of determining the basins of attraction of the attractors. For harmful rumors, we observe that for negative values of the information search v < 0, the evolutionary information search dynamics drive the population search strategy to an evolutionarily stable information search strategy that is lower than the critical information search rate, s_ESI < s_C. Hence, to eradicate the rumor, a forcing mechanism must be implemented to increase the population search strategy to (or above) the critical information search rate s_C. For positive values of the information search v > 0, the evolutionary information search dynamics drive the population search strategy to the evolutionarily stable information search strategy in which individuals always search for truthful information, s_ESI = 1, and thus, the rumor is eradicated. Hence, a recommendation for legislators and policymakers who aim to eradicate harmful rumors is to make the search for truthful information free or rewarding, i.e., information search value v ≥ 0. Truthful public social media campaigns can help by facilitating access to information.
The proof of the above theorem follows similarly to the proofs of Theorems 6–8 in [10].
Proof. 
Let y < 0 (the proof follows similarly for y > 0 ). To simplify the presentation of the proof, let us introduce the function
$$F(s) = \eta(s)\,\big(v - y\,P(s)\big)$$
such that d s / d τ = F ( s ) .
(a) If v ≤ y P(0), then η(0) = 0, and thus, F(0) = 0. Hence, s = 0 is an equilibrium point. Since P(s) is decreasing, for every s ∈ (0, 1], P(s) < P(0), and thus, F(s) < 0. Hence,
$$\lim_{\tau \to \infty} s(\tau; s) = 0,$$
and therefore, s = 0 is a global attractor.
(b) If y P(0) < v < 0 and s* is such that v = y P(s*), then F(s*) = 0, and s* is an equilibrium point. Since P(s) is decreasing, for every s ∈ [0, s*), P(s) > P(s*), and thus, F(s) > 0. Hence,
$$\lim_{\tau \to \infty} s(\tau; s) = s^*,$$
and therefore, s* is a left attractor in [0, s*). For every s ∈ (s*, 1], P(s) < P(s*), and thus, F(s) < 0. Hence,
$$\lim_{\tau \to \infty} s(\tau; s) = s^*,$$
and therefore, s* is a right attractor in (s*, 1]. Hence, s* is a global attractor.
(c) For every s* ∈ [s_C, 1], P(s*) = 0, and thus, F(s*) = 0 if v = 0. Hence, the points s* ∈ [s_C, 1] are equilibrium points. Since P(s) is decreasing, for every s ∈ [0, s_C), P(s) > P(s_C) = 0, and thus, F(s) > 0. Hence,
$$\lim_{\tau \to \infty} s(\tau; s) = s_C,$$
and therefore, s_C is a left attractor in [0, s_C).
(d) If v > 0, then η(1) = 0, and thus, F(1) = 0. Hence, s = 1 is an equilibrium point. Since P(s) is decreasing, for every s ∈ [0, 1), P(s) > P(1), and thus, F(s) > 0. Hence,
$$\lim_{\tau \to \infty} s(\tau; s) = 1,$$
and therefore, s = 1 is a global attractor.  □
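For an advantageous rumor in the bi-stability region 0 < v < W, the following sketch (same illustrative η and assumed parameters as above; not part of the paper) starts the dynamics slightly below and slightly above the interior Nash equilibrium and recovers the two ESI attractors of Figure 6 (right).

```python
import numpy as np
from scipy.integrate import solve_ivp

R0, f, y = 20.0, 0.01, 1.0            # advantageous rumor
sC = f * (R0 - 1)
v = 0.5                               # inside the bi-stability region 0 < v < W = 0.95

def P(s):
    return (f * (R0 - 1) - s) / (f * R0 - s) if s < sC else 0.0

def rhs(tau, state):
    s = float(state[0])
    return [s * (1.0 - s) * (v - y * P(s))]

# interior Nash equilibrium v = y P(s*): the boundary between the two basins
p = v / y
s_nash = (f * (R0 - 1) - p * f * R0) / (1 - p)

for s0 in (s_nash - 0.02, s_nash + 0.02):
    sol = solve_ivp(rhs, (0.0, 5000.0), [s0], rtol=1e-8, atol=1e-10)
    print(f"start s0={s0:.3f} (Nash boundary {s_nash:.3f}) -> "
          f"long-run strategy {sol.y[0, -1]:.3f}")
```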

6. Conclusions

In this paper, we present a rumor spreading model with potential information searching in a population where individuals can be ignorants, believers, or unbelievers of the rumor. Depending on whether the impact measure, which assesses the risk of believing the rumor (fake news), is positive or negative and on the value of searching for information, we introduce an expected payoff or utility for the individuals. We derive all of the Nash and all of the evolutionarily stable information search strategies. Furthermore, we introduce evolutionary information search dynamics, whose attractors are evolutionarily stable information search strategies.
For advantageous rumors, we observe the existence of a bi-stability region, where the evolutionarily stable information search strategies are either to fully search for truthful information or not search at all. For harmful rumors, we observe that there is a single evolutionarily stable information search strategy by which individuals decide, or not, to search for information. When the benefits of searching for information outweigh the costs, i.e., the value of information search is positive, the evolutionarily stable information search strategy is to search for information with a probability of 1. However, when the value of the information search is negative, the evolutionarily stable information search strategy is smaller than the optimal information search strategy that eradicates the rumor. In this case, unfortunately, the rumor persists. The persistence of false rumors may be quite dangerous and lead to extensive damage to the individual, as well as to all of society. For example, when a disease is spreading, some cases of vaccination with moderate side-effects can be inflated by social media, provoking fear in the population and leading to a large proportion of individuals deciding against vaccination. In an outbreak with a large transmission rate, such as COVID-19, decisions against vaccination are a major contributor to the spread of the disease and so are quite harmful to all. A recommendation for legislators and policymakers who aim to eradicate harmful rumors is to make the search for truthful information free or rewarding.
The population is assumed to be distributed in a regular spatial network, where all individuals have the same number of neighbors, and thus, all of them can equally spread the rumor. This model will be the basis for future works that involve different and more complex spatial networks, heterogeneous strategies, and higher moment closure approximations and encompass the routes of modern social media transmission.

Author Contributions

Conceptualization, J.M. and A.P.; formal analysis, J.M. and A.P.; writing—original draft, J.M. and A.P.; writing—review and editing, J.M. and A.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by LIAAD-INESC TEC and FCT Fundação para a Ciência e Tecnologia (Portuguese Foundation for Science and Technology) within project UID/EEA/50014/2019 and by National Funds through the FCT within Project “Modelling, dynamics and games”, with reference PTDC/MAT-APL/31753/2017.

Acknowledgments

The authors thank the referees for their invaluable comments and suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Daley, D.J.; Kendall, D.G. Epidemics and rumours. Nature 1964, 204, 1118.
  2. Daley, D.J.; Kendall, D.G. Stochastic rumours. IMA J. Appl. Math. 1965, 1, 42–45.
  3. Goffman, W.; Newill, V.A. Generalization of epidemic theory: An application to the transmission of ideas. Nature 1964, 204, 225–228.
  4. Moreno, Y.; Nekovee, M.; Pacheco, A.F. Dynamics of rumor spreading in complex networks. Phys. Rev. E 2004, 69, 066130.
  5. Zhang, N.; Huang, H.; Duarte, M.; Zhang, J. Risk analysis for rumor propagation in metropolises based on improved 8-state ICSAR model and dynamic personal activity trajectories. Physica A 2016, 451, 403–419.
  6. Jia, F.; Lv, G.; Zou, G. Dynamic analysis of a rumor propagation model with Lévy noise. Math. Meth. Appl. Sci. 2018, 41, 1661–1673.
  7. Bauch, C.T.; Earn, D.J.D. Vaccination and the theory of games. Proc. Natl. Acad. Sci. USA 2004, 101, 13391–13394.
  8. Huynh, T.L.D. Replication: Cheating, loss aversion, and moral attitudes in Vietnam. J. Econ. Psychol. 2020, 78, 102277.
  9. Burggraf, T.; Fendel, R.; Huynh, T.L.D. Political news and stock prices: Evidence from Trump’s trade war. Appl. Econ. Lett. 2020, 27, 1–4.
  10. Martins, J.; Pinto, A. Bistability of Evolutionary Stable Vaccination Strategies in the Reinfection SIRI Model. Bull. Math. Biol. 2017, 79, 853–883.
  11. Maynard-Smith, J. Evolution and the Theory of Games; Cambridge University Press: Cambridge, UK, 1982.
  12. Hofbauer, J.; Sigmund, K. Evolutionary Games and Population Dynamics; Cambridge University Press: Cambridge, UK, 1998.
  13. Nowak, M. Evolutionary Dynamics: Exploring the Equations of Life; The Belknap Press of Harvard University Press: Cambridge, MA, USA, 2006.
  14. Kermack, W.O.; McKendrick, A.G. A contribution to the mathematical theory of epidemics. Proc. R. Soc. Lond. 1927, 115, 700–721.
  15. Pinto, A.; Martins, J. A game theoretical analysis in a rumor spreading model based on the SIR epidemic model. In Proceedings of the 17th International Conference on Computational and Mathematical Methods in Science and Engineering (CMMSE2017), Cadiz, Spain, 4–8 July 2017; pp. 1689–1694.
  16. Van Kampen, N.G. Stochastic Processes in Physics and Chemistry; North-Holland: Amsterdam, The Netherlands, 1992.
  17. Glauber, R.J. Time-dependent statistics of the Ising model. J. Math. Phys. 1963, 4, 294–307.
  18. Stollenwerk, N.; Martins, J.; Pinto, A. The phase transition lines in pair approximation for the basic reinfection model SIRI. Phys. Lett. A 2007, 371, 379–388.
  19. Stollenwerk, N.; van Noort, S.; Martins, J.; Aguiar, M.; Hilker, F.; Pinto, A.; Gomes, M.G. A spatially stochastic epidemic model with partial immunization shows in mean field approximation the reinfection threshold. J. Biol. Dyn. 2010, 4, 634–649.
  20. Martins, J.; Pinto, A.; Stollenwerk, N. Stationarity in moment closure and quasi-stationarity of the SIS model. Math. Biosci. 2012, 236, 126–131.
  21. Heesterbeek, J.A.P. A brief history of R0 and a recipe for its calculation. Acta Biotheor. 2002, 50, 189–204.
Figure 1. The compartmental Ignorant-Believer-Unbeliever (IBU) rumor spreading model.
Figure 2. (left) The stationary value of believers B*(s) and (right) the probability P(s) that an ignorant individual who does not search for real information becomes a believer, both depending on the information search rate s. The other parameter is f = 0.01.
Figure 3. Nash and evolutionarily stable information search strategies s, depending on the information search values v. On the (left), a negative impact measure y = −1 is considered, and on the (right), a positive impact measure y = 1 is considered. The blue line corresponds to Nash equilibria (that are not ESI strategies), and the black line corresponds to evolutionarily stable information (ESI) search strategies. Other parameter values: R_0 = 20, f = 0.01, and s_C = 0.19.
Figure 4. The expected information search payoff, depending on the information search strategy s, for different information search values v. On the (left), a negative impact measure y = −1 is considered, and on the (right), a positive impact measure y = 1 is considered. Other parameter values: f = 0.01, R_0 = 20, and s_C = 0.19.
Figure 5. The expected information search payoff E(s; v), depending on the value of the information search v for different search strategies s: critical search strategy s_C, OI search strategy s_O, ESI search strategy s_ESI, and Nash strategy s_Nash. On the (left), a negative impact measure y = −1 is considered, and on the (right), a positive impact measure y = 1 is considered. The other parameter values are f = 0.01 and R_0 = 20. Hence, V = −5 and W = −0.95 (left) or W = 0.95 (right).
Figure 6. The stable (solid line) and the unstable (dashed line) equilibria of the evolutionary information search dynamics. On the (left), a negative impact measure y = −1 is considered, and on the (right), a positive impact measure y = 1 is considered. Parameter values: f = 0.01 and R_0 = 20.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
