
Improving the utility of evidence synthesis for decision makers in the face of insufficient evidence

Open Access | Published: March 18, 2021 | DOI: https://doi.org/10.1016/j.jclinepi.2021.02.028

      Abstract

      Objective

      To identify and suggest strategies to make insufficient evidence ratings in systematic reviews more actionable.

      Study Design and Setting

A workgroup comprising members from the Evidence-based Practice Center (EPC) Program of the Agency for Healthcare Research and Quality convened throughout 2020. We conducted iterative discussions considering information from three data sources: a literature review for relevant publications and frameworks, a review of a convenience sample of past systematic reviews conducted by the EPCs, and an audit of methods used in past EPC technical briefs.

      Results

We identified five strategies for supplementing systematic review findings when evidence on benefits or harms is expected to be, or found to be, insufficient: 1) reconsider eligible study designs, 2) summarize indirect evidence, 3) summarize contextual and implementation evidence, 4) consider modeling, and 5) incorporate unpublished health system data in the evidence synthesis. While these strategies may not increase the strength of evidence, they may improve the utility of reports for decision makers. Adopting these strategies depends on the feasibility, timeline, funding, and expertise available to the systematic reviewers.

      Conclusion

Throughout the process of evidence synthesis, from early scoping and protocol development to review conduct and presentation, authors can consider these five strategies to supplement evidence rated as insufficient and make it more actionable for end-users.


      Abbreviations:

      AHRQ (Agency for Healthcare Research and Quality), EPC (Evidence-based Practice Center), GRADE (Grading of Recommendations Assessment, Development and Evaluation), SARS-CoV-2 (Severe Acute Respiratory Syndrome Coronavirus 2), SoE (Strength of evidence), SRC (Scientific Resource Center)
What is new?
• Throughout the process of evidence synthesis, from early scoping and protocol development to review conduct and presentation, systematic review authors should consider five strategies to supplement identified or expected insufficient strength of evidence: reconsidering eligible study designs, summarizing indirect evidence, summarizing contextual and implementation evidence, modeling, and incorporating unpublished health system data.
• The rationale for rating the strength of evidence as insufficient should be explicitly described. When there is no evidence available for a specific outcome, reviewers should use a statement such as “no studies” instead of “insufficient.”

      1. Introduction

The Agency for Healthcare Research and Quality's (AHRQ) Evidence-based Practice Center (EPC) Program conducts comprehensive systematic reviews for a variety of clinical audiences. Systematic reviewers synthesize a body of evidence and rate the strength of evidence available for each eligible outcome based on evaluation of the limitations of included studies, consistency, directness, precision, and additional factors. When criteria are not adequately met, evidence may be rated as “insufficient.” The phrase “insufficient evidence” is used by the AHRQ EPC Program to indicate that “We have no evidence, we are unable to estimate an effect, or we have no confidence in the estimate of effect for this outcome. No evidence is available or the body of evidence has unacceptable deficiencies, precluding reaching a conclusion” [Berkman et al.]. By contrast, the lowest category of certainty of evidence in the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach is called “very low,” which is defined as, “We have very little confidence in the effect estimate [for this outcome]. The true effect is likely to be substantially different from the estimate of effect” [Schünemann et al.].
The term “insufficient” may be interpreted differently by various end-users of the review and may refer to different limitations of a given literature base. In the absence of qualifiers, readers of systematic reviews may conflate the insufficiency of evidence about an effect (e.g., on benefits or harms in a particular population, intervention, comparison, or outcome) with the insufficiency of information to make a decision. Insufficient evidence does not necessarily mean that decision makers cannot or should not act on the evidence that is available. In fact, healthcare decision makers consider evidence as one of many decisional factors, which may include patient and healthcare provider values and preferences, resources, feasibility, and acceptability of the recommended actions [Alonso-Coello et al.], as well as concerns about inaction. When there is no evidence or insufficient evidence on benefits or harms, information on these other factors may be important to summarize for decision makers [Christensen et al.].
      A workgroup from the AHRQ EPC Program was convened to understand how systematic reviewers can support decision-making in the face of insufficient evidence. The workgroup aimed to identify (1) the various ways in which the term “insufficient evidence” has been used, defined, and understood in the literature; (2) published frameworks for decision-making based on insufficient evidence; and (3) strategies that can be adopted by systematic reviewers to provide additional information to support decision-making when facing insufficient evidence. Finally, the workgroup provided suggestions for systematic reviewers on how to handle insufficient evidence during scoping of the topic, developing the protocol, and conducting and reporting the review.

      2. Methods

This report draws on three sources of data: a literature review, a review of a convenience sample of systematic reviews conducted by EPCs that identified insufficient evidence, and an audit of EPC technical briefs, which are often prepared for topics anticipated to have only a small body of direct evidence. We identified potential strategies based on these three data sources and iterative discussions among the workgroup members.

      2.1 Literature review

The Scientific Resource Center (SRC) staff librarian conducted two literature searches (see Appendix A) to identify articles describing insufficient evidence in terms of (1) how it was defined or acted on in decision-making or guideline development and (2) how different audiences might react to the term “insufficient.” The workgroup organized the results thematically and used these findings to identify potential approaches that can facilitate decision-making.

      2.2 Review of systematic reviews

To uncover how EPCs currently classify and present insufficient evidence ratings, we reviewed a convenience sample of systematic reviews published by the EPCs within the last 5 years that included at least one outcome with an insufficient rating. We prioritized systematic reviews commissioned for a specific end-user because these reflect scenarios in which the review was most likely carried out to directly inform decision-making. For each included review, workgroup members extracted information on the decisional dilemma addressed by the review, whether the insufficient evidence rating was anticipated at the start of the review, the reasons for insufficient ratings, the approaches used to address the insufficient evidence and to help decision makers act on it, and, when information was available, whether the main stakeholder of the review took any actions (e.g., guideline recommendations) based on the insufficient evidence.

      2.3 Audit of technical briefs

Technical briefs often combine multiple sources of information (topic expert interviews, published literature, grey literature, audits of commercially available products) and consider practical aspects of implementing various clinical or quality improvement interventions. For each audited brief, we extracted information on the report's decisional dilemma, a subjective determination of how well the technical brief's research questions directly addressed that dilemma, the evidence synthesis methods, and whether peer and public reviewers recommended substantial changes to the synthesis, conduct, or framing of the report. We then reviewed the extracted data to identify common themes.

      2.4 Workgroup discussion and consensus process

The method for determining consensus on these strategies was informal: we discussed issues until no one voiced disagreement.

      3. Findings

The literature review, review of systematic reviews, audit of technical briefs, and iterative discussions among workgroup members contributed to the development of the following strategies that may be used to supplement findings of insufficient evidence. A complete description of the findings from each data source can be found in Appendices B-D. We identified five strategies: 1) reconsider eligible study designs, 2) summarize indirect evidence, 3) summarize contextual and implementation evidence, 4) consider modeling, and 5) incorporate unpublished health system data in the evidence synthesis (e.g., a primary observational study that uses data from the electronic medical record of the health system). Table 1 lists these strategies with examples. Some of these strategies are consistent with best practices regardless of the anticipated strength of evidence (SoE). When reviewers adopt a strategy, they should follow the methodological guidance relevant to it (e.g., best practices of qualitative synthesis or modeling) to maintain rigor and reproducibility.
Table 1. Strategies for addressing insufficient evidence in evidence synthesis programs*

Strategy 1: Reconsider eligible study designs
Description: In designing the original protocol, authors may have anticipated sufficient evidence from studies with stronger designs. However, if potential bias in the design or conduct of the available studies leads to insufficient evidence, authors may reconsider inclusion of observational studies, studies without comparisons, or other study designs.
Examples: Whole Exome Sequencing: Final Evidence Report; Masks for prevention of respiratory virus infections, including SARS-CoV-2, in health care and community settings: a living rapid review [Chou et al.].
Example description: The systematic review was conducted to support a recommendation for or against whole exome sequencing. The review summarized the results from single-arm studies in addition to modeling studies and studies with comparator arms.

Strategy 2: Summarize evidence outside the prespecified review parameters (indirect evidence)
Description: Evidence may be sought from studies excluded during the review process because of differences in populations, interventions, comparators, and settings. These excluded studies may have limited applicability to the review question; use of such evidence requires appropriate interpretation and contextualization by clinical experts. These results may be summarized as contextual evidence or in the discussion section of the report.
Example: American Society of Hematology 2020 guidelines for sickle cell disease: management of acute and chronic pain [Brandow et al.].
Example description: A systematic review was done to support guidelines about the management of pain in individuals with sickle cell disease. Because of the paucity of data, the EPC summarized published systematic reviews on pain management in conditions other than sickle cell disease that were deemed clinically similar by the guideline panel.

Strategy 3: Summarize evidence on contextual factors (factors other than benefits/harms)
Description (example 1): Decision makers must consider other factors besides evidence on the effectiveness and harms of an intervention. Evidence on other factors that may affect the decision, such as patient values, equity, resources, acceptability, and feasibility, may be helpful to decision makers.
Example 1: Comparative effectiveness and safety of cognitive behavioral therapy and pharmacotherapy for childhood anxiety disorders: a systematic review and meta-analysis [Wang et al.].
Example 1 description: An EPC report about the management of anxiety in children compared the different pharmacological and nonpharmacological treatments in terms of benefits and harms [Wang et al.]. Additional data were summarized in a subsequent report [Morrow et al.] that included contextual and implementation information (doses of common treatments, which patients are candidates for treatment, values and preferences, costs and resources, acceptability, impact on health equity, feasibility, alternative therapies, remission rates, and prognosis). Contextual or implementation evidence may require quantitative or qualitative evidence synthesis.
Description (example 2): Studies examining the effectiveness of complex interventions may be challenging to synthesize because of heterogeneity in the interventions or populations studied. Realist reviews or qualitative evidence synthesis may be helpful to explore reasons for heterogeneity and to uncover specific conditions under which a complex intervention may work better or worse.
Example 2: A systematic review of qualitative evidence on barriers and facilitators to the implementation of task-shifting in midwifery services [Colvin et al.].
Example 2 description: A qualitative evidence synthesis examined the qualitative literature to report implementation factors associated with midwifery task shifting and optimization. For this complex intervention, the question went beyond whether it works: the World Health Organization wanted to know how to implement it in the most effective way. The qualitative evidence synthesis elucidated challenges and other considerations when implementing such practices.

Strategy 4: Consider modeling if appropriate and expertise is available
Description: Various types of modeling, such as decision analysis, can be used to fill gaps in the evidence base. Modeling is time intensive but may be appropriate if models exist that can be adapted to address research gaps (a minimal illustrative sketch follows this table).
Example: Collaborative Modeling of U.S. Breast, Lung, Colorectal, and Cervical Cancer Screening Strategies [Mandelblatt et al.].
Example description: The systematic review addressed the question of benefits and harms of screening for breast cancer. Modeling was used to address specific remaining gaps about combinations of screening modalities, frequency, and starting age.

Strategy 5: Incorporate health system data into a review
Description: Local health system data can inform decision-making by augmenting the evidence base or by informing implementation efforts [Lin et al.].
Example: Endovascular treatment of internal carotid artery bifurcation aneurysms: a single-center experience and a systematic review and meta-analysis [Morales-Valero et al.].
Example description: To determine the outcomes of endovascular treatment of internal carotid artery bifurcation aneurysms, only 6 small surgical series were found in the literature (a total of only 158 patients). Reviewing the electronic medical record of a single health system (Mayo Clinic) identified 37 additional cases that were incorporated into the systematic review. This addition increased the size of the body of evidence by roughly 23% (37/158 ≈ 0.23) and provided more granular detail on patients' clinical characteristics; it may therefore further support decision-making in this context, although it may not increase the strength of evidence.

Acronym: EPC = Evidence-based Practice Center.
*These strategies may not always be logistically possible during the conduct of the review and may require a separate subsequent study.
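To make the modeling strategy concrete, the following is a minimal illustrative decision-analysis sketch in Python. It is not drawn from any report cited here; every probability and utility is a hypothetical placeholder that an analyst would replace with estimates derived from evidence or expert elicitation, and real screening models (such as the CISNET models cited above) are vastly more elaborate.

```python
# Minimal decision-tree sketch: expected utility of two hypothetical
# strategies, each with a single chance node. All probabilities and
# utilities are illustrative placeholders, not estimates from any
# report cited in this article.

def expected_utility(p_event: float, u_event: float, u_no_event: float) -> float:
    """Expected utility over a single chance node."""
    return p_event * u_event + (1 - p_event) * u_no_event

# Hypothetical inputs: probability of an adverse outcome under each
# strategy and utilities (0 = worst, 1 = best) for each branch; the
# small utility penalty on "screen" stands in for screening burden.
strategies = {
    "screen":    {"p_event": 0.01, "u_event": 0.60, "u_no_event": 0.995},
    "no_screen": {"p_event": 0.05, "u_event": 0.60, "u_no_event": 1.000},
}

for name, params in strategies.items():
    print(f"{name}: expected utility = {expected_utility(**params):.4f}")

# One-way sensitivity analysis on the event probability under screening,
# showing how a model can flag which uncertain inputs drive the decision
# when the direct evidence on that input is insufficient.
for p in (0.01, 0.02, 0.03, 0.04, 0.05):
    print(f"screen, p_event={p:.2f}: "
          f"expected utility = {expected_utility(p, 0.60, 0.995):.4f}")
```

In this toy example the preferred strategy flips once the event probability under screening rises above roughly 0.04; surfacing such thresholds is precisely how modeling can show decision makers which evidence gaps matter most.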

      3.1 Strategies during scoping the topic

During the scoping stage, when it is determined whether a systematic review is of interest, of value, and likely to have sufficient evidence to summarize, it may be possible to anticipate and plan for specific findings of insufficient evidence. Early identification and engagement of stakeholders can facilitate a clear understanding of the decisional context and dilemma. This early partnership can also clarify the anticipated volume of the literature, the timeline, and the feasibility of the review. At this stage, the specific question and approach can be discussed and modified if needed, the possibility of conducting a technical brief can be entertained, and the need for some of the approaches to addressing insufficient evidence can be determined (see below: Developing the Protocol, Conducting the Review).
For complex questions, including questions related to implementation, topic experts can be a good source of information about the quantity and quality of available evidence. Care should be taken that all relevant stakeholders are represented and that interview methods are adequate to reach thematic saturation. Scoping a review requires balance and consideration of the tradeoffs necessary to keep workloads manageable.
      See Appendix Table F1 for several examples of decisional dilemmas and approaches.

      3.2 Developing the protocol

When developing the protocol, systematic review authors should consider the most appropriate methods and inclusion criteria to provide unbiased information that answers the review's questions. Authors should determine a priori the critical and important outcomes, the specific thresholds for determining benefit or harm, and the outcomes that require SoE grading. The most appropriate outcomes to rate for SoE should account for stakeholder needs, decisional dilemmas, and context. At this stage, the authors can consider which prioritized outcomes are likely to have insufficient evidence and determine what methods are feasible and can be used to facilitate the decision-making process.
Strategies such as widening the inclusion criteria to other study designs or indirect evidence can be framed at the protocol stage as a “best evidence” approach [Treadwell et al.], which starts with narrower inclusion criteria but expands to other study designs or populations if evidence is insufficient. This approach is consistent with the EPC Methods Guide for Effectiveness and Comparative Effectiveness Reviews, which recommends including nonrandomized studies if insufficient evidence from randomized controlled studies is anticipated [Norris et al.].

      3.3 Strategies during conducting the review

Ideally, reviewers would anticipate insufficient evidence at the earlier stages of the evidence synthesis, during scoping or protocol development. However, if systematic reviewers find insufficient evidence during the course of the review and have the time and resources, they may consider these additional strategies. See Appendix Table F2 for a description of potential reasons for insufficient evidence and corresponding methods that may be used to supplement insufficient ratings. Notably, in some instances the suggested approaches may not be logistically feasible during the conduct of the review and may be more appropriate to recommend as a subsequent study.

      3.4 Strategies during reporting findings

Review authors should explicitly state when no studies are available (e.g., “no eligible studies have evaluated this outcome” or “no evidence available”) instead of using the term “insufficient” to imply a lack of evidence. Review authors should also qualify the term “insufficient” by stating the main reason that led to an insufficient rating (e.g., insufficient because of imprecision). Implications for decision makers, approaches to supplementing the evidence, and recommendations for future research may all differ depending on whether evidence is insufficient because of conflicting or heterogeneous studies, imprecise estimates of effect or association, poor applicability to the population of interest, or a high risk of bias.

      4. Discussion

      4.1 Main findings

EPC systematic reviews commonly examine the evidence on benefits and harms of interventions. This evidence is one of the main inputs to decision makers' decisions, but decision makers must also consider a range of other factors, such as costs, values, preferences, and impact on equity. The relative weight of these factors may vary depending on the topic or the availability of evidence for benefit or harm. We have summarized the findings of a workgroup from the EPC Program that sought to understand how systematic reviewers can further support decision-making in the face of insufficient evidence on benefits or harms. We identified potential strategies that systematic reviewers can use to facilitate decision-making in the context of insufficient evidence, including broadening eligibility criteria to other study designs, summarizing indirect evidence, summarizing contextual and implementation evidence, modeling, and incorporating unpublished health system data in the review.

      4.2 Limitations

The strategies may not be feasible within a specific timeline and budget. A key challenge is that the planning and budgeting of a review need to be done early, alongside conversations with topic experts about scope, whereas the determination of insufficient evidence may not be made until late in the review. However, early planning for and anticipation of one of these strategies can make it feasible with appropriate protocol amendments.
It is important to acknowledge the limitations of the evidence even with the implementation of these strategies, as well as the limitations of our approach to identifying them. These strategies do not “fix the problem” of insufficient evidence; rather, they facilitate decision-making in its presence. For example, adding unpublished health system data to a systematic review can improve the precision of estimates and may enhance applicability; however, such data are not peer reviewed and can suffer from various types of bias. We may not have included important strategies because our sample was limited to reports in which EPC investigators were involved, most of which were conducted for guideline groups or governmental agencies. The reports we identified as examples of these strategies (Table 1) provide only indirect evidence of their relative success, inferred from peer and public comments rather than directly from end-users.
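To illustrate the precision point above, the sketch below pools a hypothetical event proportion across six small published series and then repeats the calculation after appending one unpublished health-system series. All event counts are invented for illustration (they are not the data from the cited aneurysm review, although the sample sizes echo its 158 published and 37 health-system patients), and the method shown is a simple fixed-effect, inverse-variance pooling of logit-transformed proportions; a production analysis would typically use a random-effects model in established meta-analysis software.

```python
import math

def inv_logit(x: float) -> float:
    return 1 / (1 + math.exp(-x))

def pool_proportions(studies: list[tuple[int, int]]) -> tuple[float, float, float]:
    """Fixed-effect inverse-variance pooling of logit-transformed event
    proportions. Assumes 0 < events < n in every study (no zero cells).
    Returns the pooled proportion and its 95% confidence interval."""
    weights, logits = [], []
    for events, n in studies:
        p = events / n
        logits.append(math.log(p / (1 - p)))
        var = 1 / events + 1 / (n - events)  # variance of the logit
        weights.append(1 / var)
    pooled = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    return (inv_logit(pooled),
            inv_logit(pooled - 1.96 * se),
            inv_logit(pooled + 1.96 * se))

# Hypothetical (events, sample size) pairs for six small published series
# (sample sizes sum to 158) plus one unpublished health-system series of 37.
published = [(20, 30), (15, 25), (18, 28), (14, 22), (19, 29), (16, 24)]
with_health_system = published + [(26, 37)]

for label, data in (("published only", published),
                    ("plus health-system data", with_health_system)):
    p, lo, hi = pool_proportions(data)
    print(f"{label}: {p:.3f} (95% CI {lo:.3f} to {hi:.3f})")
```

Running the sketch shows the confidence interval narrowing once the additional series is added, which is the sense in which unpublished health system data can improve precision even though they do not, by themselves, raise the strength of evidence rating.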
      The examples, strategies, and suggestions in this document apply to the EPC program and may not apply to other systematic reviewers.

      5. Conclusion

Systematic reviews commonly examine the evidence on benefits and harms of interventions, but decision-making requires other factors as well. When the strength of this evidence warrants an insufficient rating, information on these factors can enhance the utility of systematic reviews for health systems and other stakeholders. We identified five potential strategies: broadening eligibility criteria to other study designs, summarizing indirect evidence, summarizing contextual and implementation evidence, modeling, and incorporating unpublished health system data in the review.

      Appendix. Supplementary materials

      References

Berkman ND, Lohr KN, Ansari M, McDonagh M, Balk E, Whitlock E, et al. Grading the strength of a body of evidence when assessing health care interventions for the effective health care program of the Agency for Healthcare Research and Quality: an update. Methods Guide for Effectiveness and Comparative Effectiveness Reviews [Internet]. Rockville, MD: Agency for Healthcare Research and Quality (US); 2013.

Schünemann H, Brożek J, Guyatt G, Oxman A. Handbook for grading the quality of evidence and the strength of recommendations using the GRADE approach. Grading of Recommendations, Assessment, Development and Evaluation (GRADE) Working Group; 2013.

Alonso-Coello P, Schünemann HJ, Moberg J, Brignardello-Petersen R, Akl E, Davoli M, et al. GRADE Evidence to Decision (EtD) frameworks: a systematic and transparent approach to making well informed healthcare choices. 1: Introduction. BMJ. 2016;353:i2016. https://doi.org/10.1136/bmj.i2016. PMID: 27353417.

Christensen V, Floyd N, Anderson J. “It Would’ve Been Nice if They Interpreted the Data a Little Bit. It Didn’t Really Say Much, and It Didn’t Really Help Us.”: a qualitative study of VA health system evidence needs. Med Care. 2019;57:S228. https://doi.org/10.1097/MLR.0000000000001171. PMID: 31517792.

Treadwell JR, Singh S, Talati R, McPheeters M, Reston J. A framework for “best evidence” approaches in systematic reviews. Agency for Healthcare Research and Quality; 2011. PMID: 21834173.

Norris S, Atkins D, Bruening W, Fox S, Johnson E, Kane R, et al. Selecting observational studies for comparing medical interventions. Methods guide for effectiveness and comparative effectiveness reviews [Internet]. Agency for Healthcare Research and Quality (US); 2010.

Brandow AM, Carroll CP, Creary S, Edwards-Elliott R, Glassberg J, Hurley R, et al. American Society of Hematology 2020 guidelines for sickle cell disease: management of acute and chronic pain. Blood Adv. 2020;4:2656-2701. https://doi.org/10.1182/bloodadvances.2020001851. PMID: 32559294.

Wang Z, Whiteside SP, Sim L, Farah W, Morrow AS, Alsawas M, et al. Comparative effectiveness and safety of cognitive behavioral therapy and pharmacotherapy for childhood anxiety disorders: a systematic review and meta-analysis. JAMA Pediatr. 2017;171:1049-1056. https://doi.org/10.1001/jamapediatrics.2017.3036. PMID: 28859190.

Morrow AS, Whiteside SP, Sim LA, Brito JP, Wang Z, Murad MH. Developing tools to enhance the use of systematic reviews for clinical care in health systems. BMJ Evid Based Med. 2018;23:206-209. https://doi.org/10.1136/bmjebm-2018-110995. PMID: 30194075.

Colvin CJ, de Heer J, Winterton L, Mellenkamp M, Glenton C, Noyes J, et al. A systematic review of qualitative evidence on barriers and facilitators to the implementation of task-shifting in midwifery services. Midwifery. 2013;29:1211-1221. https://doi.org/10.1016/j.midw.2013.05.001. PMID: 23769757.

Mandelblatt J, Cronin K, de Koning H, Miglioretti DL, Schechter C, Stout N; Writing Committee of the Breast Cancer Working Group, Cancer Intervention and Surveillance Modeling Network (CISNET), Breast Cancer Surveillance Consortium (BCSC). Collaborative modeling of U.S. breast cancer screening strategies. Agency for Healthcare Research and Quality (US); 2015. Report No. 14-05201-EF-4.

Lin JS, Murad MH, Leas B, Treadwell J, Chou R, Ivlev I, et al. Integrating health system data with systematic reviews. Agency for Healthcare Research and Quality (US); 2020. Report No. 19(20)-EHC023-EF. https://doi.org/10.23970/AHRQEPCMETHQUALIMPRINTEGRATING. PMID: 32271513.

Morales-Valero S, Brinjikji W, Murad MH, Wald JT, Lanzino G. Endovascular treatment of internal carotid artery bifurcation aneurysms: a single-center experience and a systematic review and meta-analysis. AJNR Am J Neuroradiol. 2014;35:1948-1953. https://doi.org/10.3174/ajnr.A3992. PMID: 24904050.

Chou R, Dana T, Jungbauer R, Weeks C, McDonagh M, et al. Masks for prevention of respiratory virus infections, including SARS-CoV-2, in health care and community settings: a living rapid review. Ann Intern Med. 2020. https://doi.org/10.7326/M20-3213. PMID: 32579379.