Original Article | Volume 63, Issue 5, P513–523, May 2010

AHRQ Series Paper 5: Grading the strength of a body of evidence when comparing medical interventions—Agency for Healthcare Research and Quality and the Effective Health-Care Program

      Abstract

      Objective

      To establish guidance on grading strength of evidence for the Evidence-based Practice Center (EPC) program of the U.S. Agency for Healthcare Research and Quality.

      Study Design and Setting

      Authors reviewed authoritative systems for grading strength of evidence, identified domains and methods that should be considered when grading bodies of evidence in systematic reviews, considered public comments on an earlier draft, and discussed the approach with representatives of the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) working group.

      Results

      The EPC approach is conceptually similar to the GRADE system of evidence rating; it requires assessment of four domains: risk of bias, consistency, directness, and precision. Additional domains to be used when appropriate include dose–response association, presence of confounders that would diminish an observed effect, strength of association, and publication bias. Strength of evidence receives a single grade: high, moderate, low, or insufficient. We give definitions, examples, mechanisms for scoring domains, and an approach for assigning strength of evidence.
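      To make the structure of the approach concrete, the sketch below shows one hypothetical way the required and additional domains and the single overall grade could be recorded for a body of evidence. It is illustrative only, assuming a simple data-structure representation; the class names, field values, and example entries are not taken from the article, and no scoring or aggregation rule from the EPC guidance is implemented here.

      ```python
      from dataclasses import dataclass
      from enum import Enum
      from typing import Optional

      class Grade(Enum):
          # The four overall strength-of-evidence levels named in the abstract.
          HIGH = "high"
          MODERATE = "moderate"
          LOW = "low"
          INSUFFICIENT = "insufficient"

      @dataclass
      class DomainScores:
          # Required domains, assessed for every body of evidence.
          risk_of_bias: str            # e.g., "low", "medium", "high" (illustrative values)
          consistency: str             # e.g., "consistent", "inconsistent", "unknown"
          directness: str              # e.g., "direct", "indirect"
          precision: str               # e.g., "precise", "imprecise"
          # Additional domains, used when appropriate.
          dose_response: Optional[str] = None
          confounders_diminish_effect: Optional[str] = None
          strength_of_association: Optional[str] = None
          publication_bias: Optional[str] = None

      @dataclass
      class EvidenceAssessment:
          outcome: str                 # graded separately for each major outcome
          comparison: str              # and, in comparative effectiveness reviews, each major comparison
          domains: DomainScores
          overall: Grade               # single overall grade assigned by the reviewers

      # Hypothetical record for one outcome-comparison pair.
      example = EvidenceAssessment(
          outcome="HbA1c reduction",
          comparison="drug A vs. drug B",
          domains=DomainScores(
              risk_of_bias="medium",
              consistency="consistent",
              directness="direct",
              precision="imprecise",
          ),
          overall=Grade.MODERATE,
      )
      ```

      Keeping the domain judgments alongside the overall grade, rather than storing only the final grade, mirrors the abstract's emphasis on scoring each domain before assigning a single strength-of-evidence rating.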

      Conclusion

      EPCs should grade strength of evidence separately for each major outcome and, for comparative effectiveness reviews, all major comparisons. We will collaborate with the GRADE group to address ongoing challenges in assessing the strength of evidence.

