Evidence-Based Research Series | Volume 129, P167-171, January 2021


Evidence-Based Research Series-Paper 3: Using an Evidence-Based Research approach to place your results into context after the study is performed to ensure usefulness of the conclusion

Open Access | Published: September 23, 2020 | DOI: https://doi.org/10.1016/j.jclinepi.2020.07.021

      Abstract

      Background and Objective

      There is considerable actual and potential waste in research. Using evidence-based research (EBR) can ensure the value of a new study. The aim of this article, the third in a series, is to describe an EBR approach to putting research results into context.

      Study Design and Setting

      EBR is the use of prior research in a systematic and transparent way to inform a new study so that it is answering questions that matter in a valid, efficient, and accessible manner. In this third and final article of a series, we describe how to use the context of existing evidence to reach and present a trustworthy and useful conclusion when reporting results from a new clinical study.

      Results

      We describe a method, the EBR approach, in which a systematic and transparent consideration of earlier similar studies when interpreting and presenting the results of a new original study ensures the usefulness of the conclusion.

      Conclusion

      Using an EBR approach will improve the usefulness of a clinical study by providing the context to draw more valid conclusions and explicit information about new research needs.


      What is new?

        Key findings

      • The conclusion of a new study should not be based on the results from the new study alone but on a synthesis of existing evidence and the new results. The interpretation and reporting of results from a new study should be within the context of what is already known.

        What this adds to what was known?

      • This article describes the evidence-based research approach to making and reporting conclusions. This approach includes an update of the systematic review that was used to justify and design the study and a process to decide if there are implications for clinical practice or if further research is needed.

        What is the implication and what should change now?

      • To ensure valid and valuable studies, researchers should adopt the evidence-based research approach to determine and report the implications of the new study results for practice and for future research by explicitly considering the existing evidence.

      1. Introduction

      Evidence-Based Research (EBR) is the use of prior research in a systematic and transparent way to inform a new study so that it is answering questions that matter in a valid, efficient, and accessible manner [1]. In the previous article in our EBR series, we discussed the cumulative nature of science and showed the importance of justifying and designing new studies based on existing knowledge (see article #2) [2,3]. In this third article of the series, we focus on the use of an EBR approach after the completion of a study and make recommendations on how to interpret and report the results of a new study in the context of the existing evidence base. We argue that placing the new study results in the context of what is already known is a key requisite to creating a meaningful publication that lets readers assess the value added by the new study, its internal and external validity, and any similarities and differences between available studies to support decision-making in clinical practice and for future research [4].
      The importance of these characteristics of good science has been clearly stated in the reporting standards for clinical studies. The CONSORT statement, first published in 1996, noted the need to “State general interpretation of the data in light of the totality of the available evidence” [5]. When updated in 2001, the CONSORT statement included more detailed recommendations on how to place new results in the context of previous studies, recommending not only that new results be compared with other published studies but also that, whenever possible, this should be done by using a systematic review [4]. This recommendation was re-emphasized in 2010: “Readers will want to know how the present trial's results relate to those of other RCTs. This can best be achieved by including a formal systematic review in the results or discussion section of the report” [6].

      1.1 Is there a problem?

      A single study can very rarely (if ever) provide a definitive answer to the question investigated. Therefore, placing the new study in the context of relevant previous research is key, and metaresearch has shown that the interpretation of new results is at high risk of being biased if only a subset of earlier studies is included in the discussion of these new results [7,8,9].
      However, the results of new studies are rarely interpreted in the context of existing evidence [10-15]. Clarke et al. repeatedly investigated whether clinical trials published between 1997 and 2012 in five high-impact medical journals (Annals of Internal Medicine, BMJ, JAMA, The Lancet, and the New England Journal of Medicine) interpreted the new trial results by presenting an updated systematic review [10-14]. In 2009, 13 years after the publication of the first edition of the CONSORT statement, they concluded that only one study out of 29 examined contained an updated systematic review integrating the new results [13], and in subsequent updates they found no evidence of progress in reporting results using systematic reviews from 1997 to 2012.

      1.2 Suggested solution: the evidence-based research approach

      The best way to place new results in the context of existing evidence is to include the new results in a systematic review. In this way, the investigator of the new study avoids the risk of a biased selection of earlier studies and ensures a trustworthy synthesis of earlier studies.
      Preparing a systematic review from scratch is time- and effort-intensive [16,17]. Identifying or preparing a systematic review during the planning phase of the new study will considerably lessen the work required when aiming to place results in context. Consequently, the suggested EBR approach described in this third article should be considered alongside the processes for how to justify the need for a new study and how to design a new study based on the totality of earlier similar studies (see article #2 in this series).
      The relevant steps for this phase of the EBR approach are highlighted in Figure 1 and described in more detail in the next section.
      Fig. 1 The evidence-based research approach, here highlighting the approach after completion of the study.

      1.3 How to evaluate the contribution of the new study in the context of the overall evidence

      Building on the recommendations from CONSORT and the evaluations of research reporting [10-14,18], we propose the following EBR process to place the results from a new study in the context of prior studies.
      Considering the impact of the new study results on the existing evidence raises issues similar to those faced during the planning phase of the study, when deciding whether to update a systematic review [19] or use an existing one to justify and design the study [18]. The relevant process is illustrated in Figure 1 of article #2 in this series, which also describes how investigators should identify an up-to-date version of the existing systematic review (and, if none is available, update the systematic review) and how they should use it to justify and design their new study. The results of the up-to-date systematic review can then be used as the context for interpreting and discussing the new findings [20,21]. If the systematic review used during the planning phase of the study contained a meta-analysis and the necessary data from earlier studies are available, updating the meta-analysis with the new results should be straightforward. At the completion of the new study, the researcher should assess how adding the new study to the pre-existing systematic review changes the magnitude and precision of the summary effect, and whether and how the new results affect the conclusion and the level of certainty.
      Ellis et al. provide an example of adding the results of a new study to an existing meta-analysis in their investigation of taxanes as adjuvant chemotherapy for patients with early breast cancer [22]. When they embarked on the study in 2000, only a few studies had presented initial results, but by the time they were preparing to report their results in 2009, a simple search using the terms ‘taxane’ and ‘breast cancer’ identified a systematic review in the Cochrane Library by Ferguson et al. [23], as well as another systematic review published shortly after the Cochrane review [24]. Ellis et al. were able to update these meta-analyses by adding the results of their new study. Thus, when discussing the impact for future clinical practice and research, the key findings included not just the results from the new study alone but a much more meaningful estimate and conclusion based on the combined results of all studies examining the same clinical question.
      Merry et al. did not conduct a meta-analysis including the results of their study of a computerized self-help intervention for adolescents seeking help for depression (SPARX), but they discussed their results within the context of all the earlier studies [25]. They provided the context for their findings by comparing them with the results from earlier similar studies identified in a systematic review [26]. In addition, as data on adolescents were sparse, they complemented this by discussing another systematic review that included studies on adults and showed a positive effect similar to that found in the earlier studies and in their own new study with adolescents [27]. In this way, the authors were able to draw confident conclusions about the benefit of the self-help intervention tested.
      Once the new findings are combined with the existing evidence, whether through a meta-analysis or not, the next question is: Is it possible to draw a definitive conclusion, or is further research needed? If a definitive conclusion can be drawn, this leads to a second question: Do we have confidence in this conclusion, or, in other words, is the evidence of sufficiently high certainty? This process is very similar to the one described in article #2 of this series, where we established the need to take the ethical dimension and the grading of the evidence (including the statistics) into consideration when assessing the quality of the evidence (see article #2). Again, if the answer to this question is no, further research is needed. However, if our confidence in the conclusion is high, no further studies are needed, and a recommendation for or against the use of the intervention in clinical practice can be made (see Fig. 2).
      Fig. 2 The process steps from combining the results from the new study with existing evidence to the decision whether new research is needed, or whether a recommendation for or against the use of the intervention in clinical practice can be made.
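      As a rough illustration of the decision logic in Fig. 2, the following sketch (ours, with hypothetical labels rather than the authors' tool) encodes the two questions described above; in practice, the certainty judgment would come from a structured assessment such as GRADE.

```python
# A minimal sketch (illustrative only) of the decision steps in Fig. 2, applied
# after the new results have been combined with the existing evidence.
# The argument names are hypothetical labels for those steps.
def ebr_next_step(definitive_conclusion: bool, certainty_of_evidence: str) -> str:
    """certainty_of_evidence: a GRADE-like rating such as 'high', 'moderate', 'low'."""
    if not definitive_conclusion:
        return "Further research is needed; state the remaining evidence gap explicitly."
    if certainty_of_evidence != "high":
        return "Further research is needed to increase certainty in the conclusion."
    return ("No further studies are needed; recommend for or against "
            "the intervention in clinical practice.")

# Example: a definitive conclusion supported by high-certainty evidence.
print(ebr_next_step(definitive_conclusion=True, certainty_of_evidence="high"))
```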
      Finally, authors should formulate implications for clinical practice and suggestions for future research based on the totality of the evidence, highlighting the contribution of the new study. We recognize that numerous factors must be considered when evaluating the implications of a new study for both clinical practice and research but want to stress the role of the EBR approach as a vital step in this process.

      1.4 Discussion

      In this final article of our three-part series, we discussed how to use the EBR approach to reach and present a trustworthy and meaningful conclusion when reporting the results of a new clinical study. If new findings are not placed in the context of all earlier similar studies, the conclusion is at high risk of being biased. As a result, interventions without real effects may be introduced into clinical care, or there may be erroneous recommendations that further studies are needed, leading to new redundant studies and thus increased research waste.
      The focus for evidence-based clinical decisions needs to move beyond the point estimate and confidence interval of the single study to the aggregate estimate (and confidence and prediction intervals) from the accumulated evidence. If the certainty of the overall evidence is unclear or low, more research is needed, and reporting this fact will help mitigate the future risk of unnecessary medical reversal (premature uptake of therapies into practice) [28]. With high certainty of the evidence, clinicians can use the results for clinical decision-making, and researchers will be able to avoid repeating conclusive clinical research, thus avoiding wasteful redundant studies. With its focus on the totality of evidence and the end users’ perspective, EBR is directly linked to two of the three key components of evidence-based medicine: an objective assessment of all relevant evidence, combined with patient values and circumstances, when making decisions for clinical care [29].
      Researchers implementing the EBR approach will require more explicit guidance about how to interpret and report study results to ensure that study reports effectively support clinical decision-making and future research.

      Acknowledgments

      This work has been prepared as part of the Evidence-Based Research Network (ebrnetwork.org). The EBRNetwork is an international network that promotes the use of systematic reviews when prioritizing, designing, and interpreting research. Evidence-based research is the use of prior research in a systematic and transparent way to inform the new study so that it is answering questions that matter in a valid, efficient, and accessible manner.
      The authors thank the Centre for Evidence-Based Practice, Western Norway University of Applied Sciences for their very generous support of the EBRNetwork.
      The Parker Institute, Bispebjerg and Frederiksberg Hospital (Professor Christensen and Professor Henriksen) is supported by a core grant from the Oak Foundation USA (OCAY-18-774-OFIL).
      Financial support
      This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.


      References

        1. Robinson KA. Use of prior research in the justification and interpretation of clinical trials. ProQuest Dissertations and Theses: Johns Hopkins University; 2009.
        2. Chalmers I, Hedges LV, Cooper H. A brief history of research synthesis. Eval Health Prof 2002;25:12-37.
        3. Light RJ, Pillemer DB. Summing up: the science of reviewing research. Boston: Harvard University Press; 1984.
        4. Altman DG, Schulz KF, Moher D, Egger M, Davidoff F, Elbourne D, et al. The revised CONSORT statement for reporting randomized trials: explanation and elaboration. Ann Intern Med 2001;134:663-694.
        5. Begg C, Cho M, Eastwood S, Horton R, Moher D, Olkin I, et al. Improving the quality of reporting of randomized controlled trials: the CONSORT statement. JAMA 1996;276:637-639.
        6. MacPherson H, Altman DG, Hammerschlag R, Youping L, Taixiang W, White A, et al. Revised STandards for Reporting Interventions in Clinical Trials of Acupuncture (STRICTA): extending the CONSORT statement. PLoS Med 2010;7:e1000261.
        7. Fiorentino F, Vasilakis C, Treasure T. Clinical reports of pulmonary metastasectomy for colorectal cancer: a citation network analysis. Br J Cancer 2011;104:1085-1097.
        8. Greenberg SA. How citation distortions create unfounded authority: analysis of a citation network. BMJ 2009;339:b2680.
        9. Bastiaansen JA, de Vries YA, Munafo MR. Citation distortions in the literature on the serotonin-transporter-linked polymorphic region and amygdala activation. Biol Psychiatry 2015;78:e35-e36.
        10. Clarke M, Chalmers I. Discussion sections in reports of controlled trials published in general medical journals: islands in search of continents? JAMA 1998;280:280-282.
        11. Clarke M, Alderson P, Chalmers I. Discussion sections in reports of controlled trials published in general medical journals. JAMA 2002;287:2799-2801.
        12. Clarke M, Hopewell S, Chalmers I. Reports of clinical trials should begin and end with up-to-date systematic reviews of other relevant evidence: a status report. J R Soc Med 2007;100:187-190.
        13. Clarke M, Hopewell S, Chalmers I. Clinical trials should begin and end with systematic reviews of relevant evidence: 12 years and waiting. Lancet 2010;376:20-21.
        14. Clarke M, Hopewell S. Many reports of randomised trials still don't begin or end with a systematic review of the relevant evidence. J Bahrain Med Soc 2013;24:145-148.
        15. Helfer B, Prosser A, Samara MT, Geddes JR, Cipriani A, Davis JM, et al. Recent meta-analyses neglect previous systematic reviews and meta-analyses about the same topic: a systematic examination. BMC Med 2015;13:82.
        16. Allen IE, Olkin I. Estimating time to conduct a meta-analysis from number of citations retrieved. JAMA 1999;282:634-635.
        17. Borah R, Brown AW, Capers PL, Kaiser KA. Analysis of the time and workers needed to conduct systematic reviews of medical interventions using data from the PROSPERO registry. BMJ Open 2017;7:e012545.
        18. Robinson KA, Goodman SN. A systematic examination of the citation of prior research in reports of randomized, controlled trials. Ann Intern Med 2011;154:50-55.
        19. Garner P, Hopewell S, Chandler J, MacLehose H, Schünemann HJ, Akl EA, et al. When and how to update systematic reviews: consensus and checklist. BMJ 2016;354.
        20. Smith AJ, Goodman NW. The hypertensive response to intubation: do researchers acknowledge previous work? Can J Anaesth 1997;44:9-13.
        21. Lund H, Brunnhuber K, Juhl C, Robinson K, Leenaars M, Dorch BF, et al. Towards evidence based research. BMJ 2016;355:i5440.
        22. Ellis P, Barrett-Lee P, Johnson L, Cameron D, Wardley A, O'Reilly S, et al. Sequential docetaxel as adjuvant chemotherapy for early breast cancer (TACT): an open-label, phase III, randomised controlled trial. Lancet 2009;373:1681-1692.
        23. Ferguson T, Wilcken N, Vagg R, Ghersi D, Nowak AK. Taxanes for adjuvant treatment of early breast cancer. Cochrane Database Syst Rev 2007:CD004421.
        24. De Laurentiis M, Cancello G, D'Agostino D, Giuliano M, Giordano A, Montagna E, et al. Taxane-based combinations as adjuvant chemotherapy of early breast cancer: a meta-analysis of randomized trials. J Clin Oncol 2008;26:44-53.
        25. Merry SN, Stasiak K, Shepherd M, Frampton C, Fleming T, Lucassen MF. The effectiveness of SPARX, a computerised self help intervention for adolescents seeking help for depression: randomised controlled non-inferiority trial. BMJ 2012;344:e2598.
        26. Richardson T, Stallard P, Velleman S. Computerised cognitive behavioural therapy for the prevention and treatment of depression and anxiety in children and adolescents: a systematic review. Clin Child Fam Psychol Rev 2010;13:275-290.
        27. Andrews G, Cuijpers P, Craske MG, McEvoy P, Titov N. Computer therapy for the anxiety and depressive disorders is effective, acceptable and practical health care: a meta-analysis. PLoS One 2010;5:e13196.
        28. Prasad V, Vandross A, Toomey C, Cheung M, Rho J, Quinn S, et al. A decade of reversal: an analysis of 146 contradicted medical practices. Mayo Clin Proc 2013;88:790-798.
        29. Sackett DL, Rosenberg WM, Gray JA, Haynes RB, Richardson WS. Evidence based medicine: what it is and what it isn't. BMJ 1996;312:71-72.