2. Materials and Methods
In November 2019, the U.S. NIEHS established a DR2 working group as part of its DR2 program to: (1) foster the development of an environmental health researcher “community of practice”; (2) identify environmental health disaster research scientific knowledge gaps and associated research needs in response to specific situations; (3) identify challenges associated with developing and implementing disaster research responses; (4) propose DR2 programmatic strategies and solutions to help address these challenges; and (5) promote collaborations and coordinated research response among the NIEHS community during a disaster.
Prior to the COVID-19 pandemic, the working group had initiated plans, including formation of a new subcommittee, focused on formulating strategies for identifying the environmental health sciences (EHS) research priorities in response to disasters. Recognizing the need to help inform the public of relevant SARS-CoV-2 scientific gaps and needs, the subcommittee quickly transitioned its efforts to focus on the identification and prioritization of EHS questions related to the current pandemic.
In light of travel and gathering restrictions, along with the immense personal and professional disruptions brought about by COVID-19, the subcommittee sought a process that could be rapid, remote, and asynchronous. The group decided to employ a modified Delphi method, which deviated from the classic Delphi technique by virtue of being conducted entirely via email and web-based survey (also known as an e-Delphi or online Delphi) [15].
The Delphi method is a widely used process for soliciting and refining expert opinion [15], in which a panel of experts responds to two or more rounds of questionnaires. In the first questionnaire, panelists provide open-ended responses on a certain topic or issue. Analyzed or synthesized responses are then returned to panelists in the form of statements or questions. Panelists rank or rate the statements or questions according to their expert opinion. Panelists receive controlled feedback and are asked to re-rate or re-rank the statements or questions until consensus is achieved, as defined by the study. The Delphi method’s key characteristics include iterative expert input, controlled feedback, participant anonymity, and statistical representation of the group response [15].
In our process, panelists were asked to identify research questions and rate their priority through a series of online questionnaires. Following an initial rating, a research question was determined to meet “consensus” at a particular priority category if a defined proportion of panelists assigned it a rating within that category. Items that did not reach consensus were redistributed to the panelists, along with feedback that included each panelist’s own rating of a particular item and a statistical representation of the overall panel response. Panelists were then asked to repeat the process (i.e., re-rate the remaining items), with the goal of moving closer to consensus [17]. An overview of our process is provided in Figure 1. As the goal of this activity was to identify high-priority environmental and occupational health sciences research needs, and was not research itself, institutional review board approval was not necessary.
2.1. Step 1: Identify Diverse Experts
We recruited an expert panel to represent a variety of EHS-related expertise relevant to SARS-CoV-2/COVID-19, including but not limited to: environmental toxicology, environmental microbiology, exposure science, environmental epidemiology, occupational health, environmental public health practice, and environmental health social science. We distributed an email invitation for self-nomination to the panel to members of the NIEHS DR2 Working Group and the Directors of NIEHS Environmental Health Sciences Core Centers [18]. The email encouraged broad distribution to relevant networks. Prospective panelists self-nominated by responding to questions about their expertise and uploading their biosketch or, for practitioners, a four-page resume via a Google Form. We purposefully recruited additional experts from our professional networks to ensure breadth and depth of expertise (e.g., from environmental public health practice). A total of 45 applications were received. The research priorities subcommittee reviewed panelist applications to assess relevance of expertise and to ensure diverse representation across different environmental and occupational health subdisciplines. This resulted in a total of 28 initial panelists. Applicants received an email notifying them of their selection status and outlining the process and timeline, and were asked to declare any significant financial conflicts of interest and to consent to participation via a Google Form. As this activity was not an NIEHS advisory panel, nor part of any official or unofficial efforts related to NIH funding, current or planned NIEHS grants were not considered conflicts of interest. One selected panelist declined to participate further, and another stopped responding, leaving 26 panelists who participated fully throughout the remainder of the process. Panelists’ self-reported expertise is documented in Table 1.
2.2. Step 2: Solicit Research Questions from Panelists
In round 1, panelists were invited to submit up to four, one-sentence, free-form research questions, along with a brief justification and explanation of the associated literature (four to five sentences each) and up to two supporting references through Google Forms. Participants were asked to submit questions reflective of EHS research that should be conducted in the context of the current pandemic to provide public health officials and the general public with additional accurate information about virus transmission, as well as individual and public health measures to limit the spread of disease or reduce its public health consequences. Panelists were provided with the following topics to focus their responses:
Transmission and routes of exposure;
Virus survival and infectivity;
Personal protective equipment (PPE);
Occupational health impacts and interventions;
Environmental public health impacts and interventions;
Environmental health risk communication.
Panelists were provided with instructions about the format for research question submission, along with examples. Instructions to panelists are available in Supplementary File S1, Methods S1. The research priorities subcommittee reviewed, clarified, consolidated, and thematically grouped the submitted research questions. A table outlining 61 unique research questions, justifications, and references was developed.
2.3. Step 3: Panelists Rate Priority of Research Questions
The table with round 1 results was distributed to panelists, along with a REDCap web-administered questionnaire. Study data were collected and managed using REDCap electronic data capture tools [19] hosted at the Institute for Translational Health Sciences. REDCap (Research Electronic Data Capture) is a secure, web-based application designed to support data capture for research studies, providing: (1) an intuitive interface for validated data entry; (2) audit trails for tracking data manipulation and export procedures; (3) automated export procedures for seamless data downloads to common statistical packages; and (4) procedures for importing data from external sources.
In round 2, panelists were asked to rate each question on a nine-point Likert scale (lowest to highest priority). Panelists could also provide explanatory comments, including a justification of their rating. As the research priorities subcommittee felt that some of the research questions submitted may be outside the scope of environmental health sciences, we opted to include an “N/A” option that panelists were instructed to use if they felt the research question was not relevant to environmental health sciences.
We calculated summary statistics and trichotomized responses into “low” (rating = 1, 2, or 3), “medium” (rating = 4, 5, or 6), and “high” (rating = 7, 8, or 9) priority categories at the time of analysis. For each unique research question, we calculated the proportion of respondents whose rating fell in each priority category. Because the N/A option was rarely selected, we re-coded N/A responses as “1”, or “lowest priority”. Through comments, we identified that one participant had consistently used the N/A option (at the far right of the scale) as a “10”, or highest priority, and we re-coded this respondent’s N/A selections as “9”. Each research question for which at least 69.2% of panelists (n = 18) assigned ratings in the same category (“low priority”, “medium priority”, or “high priority”) was determined to meet consensus at that priority rating and was eliminated from round 3. While we set a consensus threshold of 70% a priori, 69.2% (18 of 26 panelists) was the closest achievable proportion given the final number of panelists and was not meaningfully different from 70%.
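As an illustration, the trichotomization and consensus check described above can be sketched in a few lines of Python. This is not the study’s analysis code; the function names and example ratings are hypothetical, and N/A re-coding is assumed to have been applied beforehand.

```python
from collections import Counter

def categorize(rating):
    """Trichotomize a 1-9 Likert rating into a priority category."""
    if rating <= 3:
        return "low"
    if rating <= 6:
        return "medium"
    return "high"

def consensus_category(ratings, threshold=18 / 26):
    """Return the priority category meeting consensus, or None.

    `threshold` defaults to 18/26 (69.2%), the closest achievable
    proportion to the 70% set a priori. N/A responses are assumed
    to be re-coded to numeric values before this step.
    """
    counts = Counter(categorize(r) for r in ratings)
    n = len(ratings)
    for category, count in counts.items():
        if count / n >= threshold:  # at least 69.2% of panelists agree
            return category
    return None

# Hypothetical example: 19 of 26 panelists rate a question 7-9
ratings = [8] * 19 + [5] * 7
print(consensus_category(ratings))  # -> "high"
```

Questions for which `consensus_category` returns `None` would be carried forward to the next rating round.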
2.4. Step 4: Panelists Re-rate Priority of Research Questions, Considering the Group’s Response
Individual reports were developed for each panelist, including the research questions that did not reach consensus in round 2, their original justifications, the panelist’s own round 2 ratings, and the “group response”, comprising the mean rating and the proportion of responses at each priority level. After considering the “group response”, each panelist was asked to re-rate the remaining research questions using a new REDCap web-administered form. The N/A option was eliminated in round 3 given its low use in round 2, and participants were instructed to rate questions they considered outside the scope of environmental health sciences research as lowest priority (i.e., “1”). Summary statistics were again calculated, including the mean and the proportion of respondents who indicated “low priority”, “medium priority”, and “high priority”. Research questions that at least 69.2% of panelists rated as “low priority”, “medium priority”, or “high priority” were determined to meet consensus at that priority rating.
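The per-panelist feedback described above amounts to a simple group-response summary. A minimal sketch follows, assuming ratings are held in plain Python lists; the report structure, field names, and example values are illustrative, not the exact instrument used in the study.

```python
from statistics import mean

def band(rating):
    # Trichotomization mirroring the analysis: 1-3 low, 4-6 medium, 7-9 high
    return "low" if rating <= 3 else "medium" if rating <= 6 else "high"

def feedback_report(question, panelist_rating, all_ratings):
    """Summarize the group response returned to a panelist for re-rating.

    Includes the panelist's own prior rating, the group mean, and the
    proportion of responses at each priority level (all hypothetical names).
    """
    n = len(all_ratings)
    proportions = {b: sum(1 for r in all_ratings if band(r) == b) / n
                   for b in ("low", "medium", "high")}
    return {
        "question": question,
        "your_round2_rating": panelist_rating,
        "group_mean": round(mean(all_ratings), 2),
        "proportion_by_priority": {b: round(p, 2)
                                   for b, p in proportions.items()},
    }

# Hypothetical question and ratings from a seven-member subset of the panel
report = feedback_report("Role of HVAC systems in transmission", 7,
                         [7, 8, 5, 6, 9, 4, 7])
print(report["group_mean"])  # -> 6.57
```

Each panelist would receive one such summary per unresolved question before re-rating in round 3.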
We identified 26 interdisciplinary environmental health science experts to participate in our expert panel. These experts identified 61 unique research questions across several environmental health subdisciplines. Twenty-eight of these research questions were rated as “high priority” by panelists in either round 2 or round 3 (Table 2). No research question reached consensus in the “medium” or “low” priority category, potentially indicating that such a process has limited ability to discriminate among lower-priority questions.
Research questions achieving consensus in the first rating round focused on exposure and transmission variables including timing of infectivity, routes of exposure and their impact on severity, environmental surfaces, particle size in air, and occupational transmission related to work activities. The impact of co-exposure to environmental toxicants and of comorbidities also achieved consensus. The impact on and particular vulnerability of environmental justice communities achieved high priority consensus in the first round, as did understanding the needs of communities with barriers related to education, language, and culture. Prevention-related questions were also prioritized in the first round of rating, including the role of heating, ventilation, and air conditioning (HVAC) systems and public health interventions. The second rating round revealed consensus around similarly themed research questions, but they were often more specific in terms of exposure scenario, prevention strategy, or exposed group. In the weeks between ratings, the media and scientific community brought attention to different issues (e.g., particular essential worker groups and the role of aerosols in disease transmission), which may have also influenced ratings in addition to the group response.
In several cases, the 33 research questions that did not reach consensus could be characterized as very specific in scope (Supplementary Table S1). These questions received mean ratings between 3.73 and 6.46 in round 2, shifting to between 3.12 and 6.85 in round 3, compared with mean ratings of 6.69 to 7.62 for questions reaching consensus in round 2 and 6.42 to 7.73 for those reaching consensus in round 3. The relatively high ratings of some questions that did not reach consensus may reflect a specificity of scope not fully appreciated by panelists from other subdisciplines. These research questions were considered of moderate or high priority by at least 50% of our panel, and our prioritization process should not be construed as defining them as unimportant.
The need for a cohesive community of interdisciplinary disaster scientists “who focus on crises that severely disrupt the environment or threaten human health, and can apply scientific methods in a timely manner to understand how to prevent, mitigate, respond to, or recover from such events” has been recommended [20]. This includes the ongoing development of a community of trained environmental health scientists, in the U.S. and globally, to help provide timely expertise, experience, and applied research in response to health emergencies [5]. The ability of the nascent NIEHS DR2 working group to rapidly transition ongoing collaborative activities to help identify high-priority EHS research questions is a testament to the power of such communities of practice in the context of disaster responses. Yet, much work remains to build the capacity and flexibility of such communities to rapidly and cohesively initiate a science response to disasters.
The pace of the COVID-19 science response highlights the imperative to develop hazard-specific research agendas prior to a major disaster response that can be quickly built upon to address situational specificities. While a modified Delphi method offers a remote and relatively quick way to identify research needs, the scientific response has been ongoing and has vastly outpaced our efforts. As the response to the pandemic is likely to be protracted and to occur over multiple waves, we believe these identified research priorities will have significant value in contributing to the identification of scientific needs and opportunities for this ongoing response. Yet, we acknowledge the need for established processes and prepositioned resources to increase the speed of identifying and prioritizing research questions and issues of concern in the future. While event-specific scientific questions can and should be refined and prioritized during a response, processes for research issue identification must be initiated by the scientific community as a function of scientific preparedness. Forming a standing research prioritization team, whose main purpose would be to manage the administrative process of carrying out similar research issue identification activities in response to future disasters, may help streamline the process and provide relevant information to policy-makers, public health officials, and scientific funders earlier in the response.
Finally, the volume of scientific literature produced in the context of the COVID-19 pandemic raises questions about its quality and its ability to fully or adequately address scientific needs. A surge of COVID-19 manuscripts posted to online preprint servers (preliminary reports of findings released without peer review) has also raised questions about the quality of work published under accelerated timelines [21]. The National Academies of Sciences, Engineering, and Medicine (NASEM) have convened an ad hoc committee to develop and apply methods to review the evidence base for public health preparedness and response practices [22]. However, to the best of our knowledge, no standard accepted practice exists to rapidly review the quality or sufficiency of evidence generated in the context of a disaster. While scientists have made substantial efforts to translate rapidly emerging evidence into policy and practice (e.g., the Rapid Expert Consultations on the COVID-19 Pandemic by the NASEM Standing Committee on Emerging Infectious Diseases and 21st Century Health Threats) [23], the development of formal processes for rapid evidence typing and grading may help assess whether research needs have been met and inform the refinement of future research priorities.
While we deliberately selected a panel with diverse environmental health expertise, we received feedback that panelists felt unqualified to rate the scientific priority of research questions outside their particular research domains and requested that a “don’t know” option be considered in future iterations. The breadth of the environmental health sciences was also apparent in the range of ratings: while some participants indicated that certain research areas were outside the scope of environmental health sciences or rated them as low priority, others rated the same questions as high priority. As environmental and occupational health sciences is an inherently interdisciplinary field, future research agenda development may focus on questions specific to particular subdisciplines, with opportunities for cross-disciplinary input, discussion, and synthesis.
While we endeavored to include diverse expertise across the various domains of environmental health sciences in our panel, our closed process did not lend itself to identifying gaps in the types of expertise included. In addition, while we did not explicitly exclude international participants, we received self-nominations only from panelists employed by organizations located in the U.S., likely largely due to our recruitment approach, which focused on U.S. NIEHS grantees and their professional networks. Panelists were not provided with instructions on geographic focus when submitting or rating research priorities. While some of the identified research priorities are clearly applicable internationally (e.g., on routes of transmission) or have an international focus (e.g., on management approaches in areas where WASH access is limited), the research priorities identified may not reflect global or country-specific priorities outside of the U.S.
We purposefully kept panel participation confidential and anonymous (a key feature of the Delphi technique) throughout the rating process out of concern for unsolicited communication and/or undue influence from individuals or parties with significant financial or other interest in the resultant research priorities. This safeguard was not foolproof; while we asked individual panelists to declare financial conflicts of interest [24], we did not explicitly ask them to refrain from engaging with others when submitting research questions or rating their priority. Moreover, our decision to maintain panelist anonymity precluded the panelist interaction, discussion, and deliberation that may have brought clarity or enhanced the capacity to achieve consensus, and it inhibited broader scientific, practice, and community contributions to the development of this research agenda.
Evidence is only as useful as it is used. The challenges of evidence-based practice in public health preparedness and response have been well described and include disparities in perceived information needs between scholars and practitioners [25]. While panel participants included individuals from environmental health practice membership organizations, we did not engage practitioners directly. Future efforts should include more deliberate opportunities for practitioner feedback, along with that of community-based stakeholders, to ensure responsive research that benefits affected communities [26].
Here, we identified 28 interdisciplinary SARS-CoV-2/COVID-19 environmental health sciences research needs rated as “high priority” by a panel of 26 experts. Prioritizing disaster-specific research questions in the context of a disaster response can help funding organizations target research support and help researchers focus on the projects that matter most.
Yet, identifying scientifically and operationally relevant information specific to a given disaster is a necessary but insufficient step in public health disaster research response. Future environmental health science preparedness activities should seek to: identify hazard-specific research questions and develop processes for event-specific customization; build organization and workforce capacity to conduct environmental health science disaster research response; and refine processes to assess the quality and sufficiency of environmental health science evidence generated to inform public health and clinical responses in the context of a disaster. Given what is sure to be a lengthy response and recovery, alongside a rapidly evolving evidentiary environment, research needs will be inherently dynamic. Processes for refining and updating agendas should be prioritized and resources devoted to their implementation.