Interactions between the life history of a pathogen and the environment in which it is embedded drive the evolution of virulence. These interactions thus dictate both the experience of disease at the individual host level and the shape of disease dynamics in host populations [1]. The nature of the interaction between virulence and transmission has been the object of both theoretical and empirical examination [2]. Free-living survival, here defined as the ability of a pathogen to persist outside of its host, is one of many transmission life-history traits associated with virulence. The relationship between the two varies between host–pathogen types and across environments [4].
Several hypotheses serve as the canon in the evolution of virulence, theorizing its relationship with transmission traits. The Curse of the Pharaoh hypothesis, named after a tale about a mythical curse that torments individuals who dig up the tombs of Egyptian pharaohs [11], suggests that, if a parasite has high free-living survival, then it is far less dependent on its host for transmission and, consequently, will have no evolutionary incentive to decrease virulence [2]. The potential negative fitness consequences of killing hosts rapidly (being highly virulent) can be counteracted by persisting in the environment until the arrival of new susceptible hosts. Any presumptive selection on beneficence may be relaxed: parasites can detrimentally affect the health of hosts at no cost to transmission because most of their life cycle is spent outside of a host. Previous studies support a positive correlation between free-living survival and mortality per infection (a common proxy for virulence) [13].
Alternatively, the “tradeoff” hypothesis suggests that there is some intermediate level of parasite virulence [3] that is optimal for a given setting. In this scenario, too high a virulence kills both host and parasite, and too low a virulence leads to failure to transmit. Applying this hypothesis specifically to free-living survival would suggest that selection for increased free-living survival should come at the expense of virulence (producing a pathogen that is less harmful to the host). Mechanistically, as a consequence of increased adaptation to a nonhost environment, a virus may be less fit to replicate inside a host [9]. For example, a more robust viral capsid may help a virus survive harsh environmental conditions but may make it more difficult to package RNA/DNA [15]. More generally, the tradeoff hypothesis can be framed in the context of a life-history tradeoff: investment in certain parts of the life cycle often comes at the expense of others [2].
Theoretical studies have explored varying evolutionary relationships between heightened virulence and extreme pathogen longevity [4]. One critical component of these studies revolves around whether virulence evolves independently of free-living survival. For example, some models have argued [4] that pathogen virulence is independent of survival under a set of conditions: when the host–pathogen system is at an equilibrium (evolutionary and ecological), if host density fluctuates around an equilibrium, or if turnover of the infected host population is fast relative to that of the pathogen in the environment. However, if the host–pathogen system is at disequilibrium and if the dynamics of propagules in the environment are fast compared to the dynamics of infected hosts, then virulence is, as hypothesized, an increasing function of propagule survival [4]. Kamo and Boots [17] examined this hypothesis by incorporating spatial structure in the environment using a cellular automata model and found that, if virulence evolution is independent of transmission, then long-lived infective stages select for higher virulence. However, if there is a tradeoff between virulence and transmission, there is no evidence for the Curse of the Pharaoh hypothesis; in fact, higher virulence may be selected for by shorter- rather than longer-lived infectious stages. Further, the evolution of high virulence does not have to occur solely through a transmission–virulence tradeoff. Day [18] demonstrated how pathogens can evolve high virulence and even select for traits that kill the host (e.g., toxins) if pathogen transmission and reproductive success are decoupled. These studies emphasize the context dependence of virulence–survival relationships. Understanding where in the relationship between virulence and survival a given pathogen population exists may allow one to understand how virus evolution will manifest at the level of epidemics.
In this study, we examine the epidemic consequences of different virulence–survival relationships (positive and negative correlation) in a viral disease with an environmental transmission component. In order to measure how pathogen survival influences disease dynamics, we included an environmental compartment in our model, which represents contaminated environments that act as a reservoir for persisting pathogens, causing disease spread when they come in contact with susceptible individuals (infection via “fomites”) [20].
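The compartmental structure described here can be sketched as a minimal SIR model extended with an environmental reservoir. The compartment names, rate structure, and parameter values below are illustrative assumptions for exposition, not the fitted model from this study.

```python
from scipy.integrate import solve_ivp

def sir_w(t, y, beta_d, beta_w, sigma, mu, shed, decay):
    """SIR model with an environmental compartment W (illustrative).

    Hosts are infected by direct contact (beta_d) and by contact with
    contaminated environments/fomites (beta_w). Infected hosts recover
    at rate sigma and die of infection at rate mu (the virulence-linked
    term); they shed virus into W at rate `shed`, and free-living virus
    decays at rate `decay` (lower decay = higher free-living survival).
    """
    S, I, R, W = y
    force = (beta_d * I + beta_w * W) * S
    return [-force,                        # dS/dt: infection drains S
            force - (sigma + mu) * I,      # dI/dt: recovery + death drain I
            sigma * I,                     # dR/dt: recovered hosts
            shed * I - decay * W]          # dW/dt: shedding vs. decay

# Illustrative parameter values (assumptions, not estimates)
params = (0.3, 0.1, 0.1, 0.02, 0.5, 1.0)
sol = solve_ivp(sir_w, (0, 30), [0.999, 0.001, 0.0, 0.0],
                args=params, max_step=0.5)
S, I, R, W = sol.y
total_infected_30d = 1.0 - S[-1]  # cumulative fraction infected by day 30
```

Because the only outflow from S is infection, 1 − S(30) recovers the cumulative fraction infected in the first 30 days, one of the outbreak metrics discussed below.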
We find that the identity of the virulence–free-living survival relationship (e.g., positive vs. negative) has distinct implications for how an epidemic will unfold. Some, but not all, features of an outbreak are dramatically influenced by the nature of the underlying virulence–survival relationship. This indicates that signatures for evolution (adaptive or other) in a pathogen population will manifest more conspicuously in certain features of an outbreak. We reflect on these findings in light of their theoretical implications on the evolution and ecology of infectious disease and for their potential utility in public health interventions.
The virulence–survival relationship drives the consequences of virus evolution on the trajectory of an outbreak. In this study, we examined how different virulence–survival relationships may dictate different features of outbreaks at the endpoints of evolution (according to the positive or negative correlation scenarios). When the parameter space for virulence and survival is mapped, we find that certain outbreak metrics are more sensitive to change in free-living survival and virulence than others and that the nature of this sensitivity differs depending on whether survival and virulence are positively or negatively correlated.
For the positive correlation scenario, when free-living survival varies between 5% below and above the nominal value, we observed a dramatic change in the total number of infected individuals in the first 30 days (98% increase from minimum survival to maximum survival; Table 5), and R0 nearly doubles (94% increase; Table 5). These two traits are, of course, connected: the theoretical construction of the R0 metric specifically applies to settings where a pathogen spreads in a population of susceptible hosts [34], an early window that is captured in the first 30 days.
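The link between R0 and free-living survival can be made explicit with a next-generation calculation. As a hedged illustration (the generic host–environment model and parameter names below are assumptions, not this study's fitted model), suppose hosts are infected at rate (beta_d·I + beta_w·W)·S, leave the infectious class at rate sigma + mu, shed free-living virus at rate k, and environmental virus decays at rate delta. Then:

```python
def r0(beta_d, beta_w, k, delta, sigma, mu, S0=1.0):
    """Next-generation R0 for a host-environment SIR model (illustrative).

    The first term is direct transmission accrued over the mean
    infectious period 1/(sigma + mu); the second is the environmental
    route, where each infected host sheds k/(sigma + mu) propagules in
    expectation and each propagule persists for 1/delta time units on
    average. Lowering delta (raising free-living survival) raises R0.
    """
    return S0 * (beta_d + beta_w * k / delta) / (sigma + mu)

# Halving the environmental decay rate (doubling mean survival time)
# increases R0 through the environmental term only:
r0_low_survival = r0(0.3, 0.1, 0.5, 1.0, 0.1, 0.02)
r0_high_survival = r0(0.3, 0.1, 0.5, 0.5, 0.1, 0.02)
```

With beta_w = 0 the expression collapses to the familiar direct-transmission ratio beta_d/(sigma + mu), which makes the environmental contribution easy to isolate.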
When survival and virulence are negatively correlated, different outbreak dynamics emerge: while the R0 difference between minimum and maximum survival is significant (approximately an 84% decrease), the total number of infected individuals only changes by roughly 3% (Table 6). This large difference between R0 at higher and lower survival values also does not translate to a difference in the total number of infected individuals in the first 30 days of an infection (the early outbreak window). In a scenario where survival and virulence are negatively correlated, a highly virulent and a less virulent virus population can have similar signatures on a population with respect to the number of infected individuals in the first month. Thus, simply measuring the number of infected individuals in the first month of an outbreak is unlikely to reveal whether a pathogen population has undergone adaptive evolution or has evolved in a manner that meaningfully influences the natural history of disease.
Notably, in both the positive and the negative correlation scenarios, the time that it takes for an epidemic to reach its maximum number of infected individuals changes little across extreme values of survival (12% in the positive correlation scenario; 0.15% in the negative correlation scenario; see Table 5 and Table 6). That is, the time that it takes for an epidemic to reach its peak (however high) is not especially sensitive to evolution in virulence or survival.
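The outbreak metrics discussed above (cumulative infections in an early window, peak size, and time to peak) can each be read off a simulated trajectory. A minimal sketch, assuming a simple Euler-integrated host–environment SIR model with illustrative parameters (not this study's model or values):

```python
import numpy as np

def simulate(decay, beta_d=0.3, beta_w=0.1, sigma=0.1, mu=0.02,
             shed=0.5, days=120, dt=0.1):
    """Euler-integrate an illustrative host-environment SIR model.

    `decay` is the environmental clearance rate (lower decay = higher
    free-living survival); mu is the virulence-linked death rate.
    Returns time points and the S and I trajectories.
    """
    S, I, R, W = 0.999, 0.001, 0.0, 0.0
    t_traj, S_traj, I_traj = [], [], []
    for step in range(int(days / dt)):
        force = (beta_d * I + beta_w * W) * S
        # Simultaneous update: right-hand sides use the old state
        S, I, R, W = (S - dt * force,
                      I + dt * (force - (sigma + mu) * I),
                      R + dt * sigma * I,
                      W + dt * (shed * I - decay * W))
        t_traj.append((step + 1) * dt)
        S_traj.append(S)
        I_traj.append(I)
    return np.array(t_traj), np.array(S_traj), np.array(I_traj)

def outbreak_metrics(decay):
    """Extract the three outbreak metrics from one simulated run."""
    t, S, I = simulate(decay)
    peak = int(np.argmax(I))
    at_30 = int(np.searchsorted(t, 30.0))
    return {"cum_infected_30d": 1.0 - S[at_30],  # infection is the only outflow from S
            "peak_size": float(I[peak]),
            "time_to_peak": float(t[peak])}

low = outbreak_metrics(decay=2.0)   # fast clearance: low free-living survival
high = outbreak_metrics(decay=0.5)  # slow clearance: high free-living survival
```

In this sketch, lowering the environmental decay rate inflates the early cumulative count; which metrics move most, and by how much, depends on the parameter regime.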
Practical Implications for the Understanding of Outbreaks Caused by Emerging Viruses
That different features of an outbreak are differentially influenced by the endpoints of viral life-history evolution highlights how epidemiology should continue to consider principles from the evolution and ecology of infectious disease in its analyses and predictions. Because not all features of an epidemic will be equally reliable signatures of virus evolution, we should carefully consider data on how the dynamics of an epidemic change when making inferences about whether a pathogen population is essentially different from prior iterations (e.g., prior outbreaks of the same virus type). The results of this study suggest that carefully constructed, mechanistically sound models of epidemics are important, both for capturing the dynamics of an outbreak and for aiding our efforts to understand how the evolution of survival and virulence influences disease dynamics.
For example, the potential for adaptive evolution of SARS-CoV-2 has emerged as a possible explanation for different COVID-19 dynamics in different countries. We suggest that such interpretations should be considered with caution and that they require very specific types of evidence to support them. As of 1 July 2020, any conclusion that widespread SARS-CoV-2 evolution is an explanation for variation in disease patterns across settings (space and/or time) is premature.
The practical process of interpreting the evolutionary consequences of signals of virus evolution should encompass several discrete steps. Firstly, we should determine whether molecular signatures exist for adaptive evolution. Adaptive evolution would manifest in observable differences in genotype and phenotype and, perhaps, in the natural history of disease. Secondly, we should aim to attain knowledge of the underlying mechanistic relationship between survival and virulence. This knowledge is not necessarily easy to attain (it requires extensive laboratory studies) but would allow added biological insight: we may be able to extrapolate how changes in some traits (e.g., those that compose survival) influence others (e.g., those that influence virulence).
More generally, our findings suggest that the ability to detect the consequences of virus evolution depends on which feature of an outbreak an epidemiologist measures: in our analysis, R0 is most impacted by changes in virulence and survival. In addition, the total number of infected individuals in the early window and the size of the infected “peak” would each be readily impacted by changes in virulence–survival traits. The rate at which the epidemic peak is reached, on the other hand, shows relatively little change as survival increases or between the two correlation scenarios. Consequently, it would not serve as a useful proxy for virus evolution.
While the stochastic, sometimes entropic nature of epidemics renders them very challenging to predict [36], we suggest that canons such as life-history theory and the evolution of virulence provide useful lenses that can aid our ability to interpret how life-history changes in virus populations will manifest at the epidemiological scale. We propose that, in an age of accumulating genomic and phenotypic data in many pathogen–host systems, we continue to responsibly apply or modify existing theory in order to collate said data into an organized picture of how different components of the host–parasite interaction influence the shape of viral outbreaks of various kinds.