
November 2022 Editor's Choice

COVID-19 has provided the stimulus for evidence-focused collaboration globally on many fronts, reflected in over 20 articles here in the Journal of Clinical Epidemiology (JCE). A commentary by McCaul et al. in this issue is one of several reports in the JCE [1,2] that have emerged from the COVID-19 Evidence Network to support Decision-making (COVID-END) initiative and the Global Commission on Evidence. This global initiative brought together more than 50 of the world's leading evidence synthesis groups with the objective of supporting and promoting better coordination, leading to improved prioritization, fewer decision-critical gaps, reduced waste, and consistently reliable quality across the evidence ecosystem. This was achieved partly through the work of several working groups, each tasked with finding solutions in key areas; the commentary reports on the work of two of these, the synthesizing and recommending groups. In addition to the substantive outputs produced by the network, members of the collaborative have supported various methodological advances in the evidence ecosystem, such as increased speed, living systematic reviews, and recommendation mapping. One important issue is addressing the major needs of low- and middle-income countries: research resources are so inequitably distributed that it is rare to find trained local systematic review teams who can take existing high-quality systematic reviews and guidelines from high-income countries and adapt them so that they are contextualized to local settings. In this commentary, McCaul et al. provide an example from South Africa of what can be achieved, described as part of the network's work in support of evidence and guideline producers: 42 rapid systematic reviews have been completed and contextualized, as commissioned by the National Essential Medicine List Committee, to inform its guidelines on how health-care workers should treat people with COVID-19.
No doubt stimulated by the COVID-19 pandemic as described above, acceptance and adoption of rapid systematic reviews completed in a few weeks is one of the most striking changes adopted by many organizations, such as Cochrane. However, as the Cochrane Rapid Reviews Methods Group points out, "while rapid review producers must answer the time-sensitive needs of the health decision-makers they serve, they must simultaneously ensure that the scientific imperative of methodological rigor is satisfied." To address this inherent tension adequately, methodological research and standards development are needed [https://methods.cochrane.org/rapidreviews/welcome2022]. In this issue, Beecher et al. report on using the James Lind Alliance methodology with a panel of multiple stakeholders (including patients, the public, and those who conduct these reviews) to identify the top 10 unanswered research questions about rapid review methodology. The top 10 prioritized questions are wide-ranging; they include how to establish the research question, which stakeholders to involve (including those from underserved groups), how findings from rapid reviews compare with those from traditional full systematic reviews, and which methods of a full review can be omitted. This list provides a parsimonious but critical agenda for future research to directly improve the robustness of rapid reviews, and the authors call for funders to incorporate these priorities into their research agendas.
One of the ways the field of clinical epidemiology is evolving is through increasing interest in moving from efficacy trials, which ask "can it work?", to effectiveness (or pragmatic) trials, which ask "does it work in practice?" [3]. Pragmatic trials thus focus on providing a realistic estimate of benefit and harm that can be generalized to real-world practice. However, as Taljaard et al. report, despite this concept being taught as a basic principle in all critical appraisal and graduate courses, few studies report sufficient design characteristics to justify the term pragmatic. They emphasize that the characteristics are multidimensional and argue that trials should address each of the following elements included in the PRECIS-2 tool: eligibility, recruitment, setting, organization, delivery flexibility, adherence, follow-up, relevant outcomes, and inclusion of all data in the analysis (https://www.jclinepi.com/article/S0895-4356(16)30410-3/fulltext). They reviewed 415 primary trial reports from ClinicalTrials.gov that used terms or phrases known to be associated with pragmatic approaches to trial design. A third failed to provide any justification, and most of the remainder described no more than one or two of the nine characteristics. Indeed, many studies included design elements more consistent with efficacy/explanatory trials and so should not have been classified as pragmatic.
Because even efficacy randomized controlled trials, let alone pragmatic trials, are not available for many interventions, clinical epidemiology is broadening its focus to endorse well-planned observational designs [4]. One model that provides real-world evidence and is growing in frequency is the "target trial" design [5]. The target trial design provides an explicit framework, modeled on randomized clinical trials, for comparative effectiveness research using big data, and the JCE is receiving increasing numbers of examples. One such example in this issue addresses a thromboprophylaxis clinical challenge in patients with COVID-19: the authors emulated the standards of a controlled trial in comparing observational data from 1,200 patients to assess the risks of bleeding and coagulopathy with and without an increased baseline risk. Although no randomized controlled trials were available for this question, we encourage more evidence comparing this target trial methodology with traditional controlled trials of the same question.
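For readers less familiar with the target trial framework, the following sketch illustrates its central discipline: eligibility, treatment assignment, and the start of follow-up are all anchored at a single "time zero", just as they would be at randomization, which is how the design prevents immortal time bias. This is a minimal illustration in Python, not the method of the study discussed; the data, column names, and two-day grace period are all hypothetical.

import pandas as pd

# Hypothetical cohort: one row per patient, with admission date, the date
# anticoagulation was started (NaT if never), and the outcome date.
cohort = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "admission": pd.to_datetime(["2021-01-01", "2021-01-02", "2021-01-03"]),
    "anticoag_start": pd.to_datetime(["2021-01-02", pd.NaT, "2021-01-10"]),
    "outcome_date": pd.to_datetime([pd.NaT, "2021-01-08", "2021-01-06"]),
})

# Time zero: treatment status is assigned from what is known at baseline.
# Assumption for this sketch: "treated" means therapy begun within 2 days.
GRACE_PERIOD_DAYS = 2
days_to_start = (cohort["anticoag_start"] - cohort["admission"]).dt.days
cohort["arm"] = days_to_start.between(0, GRACE_PERIOD_DAYS).map(
    {True: "treated", False: "control"}  # NaT (never treated) maps to control
)
print(cohort[["patient_id", "arm"]])

Classifying patients instead by whether they ever received therapy during follow-up would place patient 3, whose treatment began only after the outcome, in the treated arm, crediting that arm with pre-treatment survival time; this is precisely the self-inflicted injury that specifying a target trial prevents [5].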

      References

1. Dewidar O, Kawala B, Antequera A, et al. Methodological guidance for incorporating equity when informing rapid policy and guidelines development. J Clin Epidemiol 2022;150:142-153.
2. Stewart R, Boutron I, Akl EA. The Global Evidence Commission's Report provided a wake-up call for the evidence community. J Clin Epidemiol 2022; https://doi.org/10.1016/j.jclinepi.2022.10.002.
3. Haynes B. Can it work? Does it work? Is it worth it? The testing of healthcare interventions is evolving. BMJ 1999;319:652-653.
4. Deeks JJ, Dinnes J, D'Amico R, et al. Evaluating non-randomised intervention studies. Health Technol Assess 2003;7:1-173.
5. Hernán MA, Sauer BC, Hernández-Díaz S, Platt R, Shrier I. Specifying a target trial prevents immortal time bias and other self-inflicted injuries in observational analyses. J Clin Epidemiol 2016;79:70-75.