Evidence-Based Research Series, Paper 2: Using an evidence-based research approach before a new study is conducted to ensure value

Open Access | Published: September 25, 2020 | DOI: https://doi.org/10.1016/j.jclinepi.2020.07.019

      Abstract

      Background and Objectives

      There is considerable actual and potential waste in research. The aim of this article is to describe how using an evidence-based research approach before conducting a study helps to ensure that the new study truly adds value.

      Study Design and Setting

      Evidence-based research is the use of prior research in a systematic and transparent way to inform a new study so that it is answering questions that matter in a valid, efficient, and accessible manner. In this second article of the evidence-based research series, we describe how to apply an evidence-based research approach before starting a new study.

      Results

      Before a new study is performed, researchers need to provide a solid justification for it using the available scientific knowledge as well as the perspectives of end users. The key method for both is to conduct a systematic review of earlier relevant studies.

      Conclusion

      Describing the ideal process illuminates the challenges and opportunities offered by the suggested evidence-based research approach. A systematic and transparent approach is needed to provide justification for, and to optimally design, a relevant and necessary new study.

      What is new?

         Key findings

      • An evidence-based research approach—the use of existing evidence in a transparent and explicit way—is needed to justify the need for and design a new study.

         What this adds to what was known?

      • Researchers are given guidance on why and how to use an evidence-based research approach to justify and design a new study.

         What are the implications and what should change now?

      • To ensure that only valuable studies are conducted in future, researchers should adopt the evidence-based research approach for justifying and designing a new study.

      1. Introduction

      This article is part of a series describing evidence-based research—the use of prior research in a systematic and transparent way to inform a new study so that it is answering questions that matter in a valid, efficient, and accessible manner [
      • Robinson K.A.
      Use of prior research in the justification and interpretation of clinical trials. ProQuest Dissertations and Theses; 2009.
      ]. By prior research, we mean original studies (also called primary studies), but even when planning a new systematic review (secondary study), the authors should perform a comprehensive search for earlier similar systematic reviews to avoid redundancy [
      • Juhl C.B.
      • Lund H.
      Do we really need another systematic review?.
      ,
      • Ioannidis J.P.
      The mass production of redundant, misleading, and conflicted systematic reviews and meta-analyses.
      ]. In this second of a three-part series, we describe how using the evidence-based research approach before starting a study helps to ensure its value.
      Funding agencies and research ethics committees act as key gatekeepers of the scientific process, reviewing new study protocols to evaluate their validity. If the design and chosen methods align with, and seem appropriate to answer, the proposed research question, the study is judged to be valid. In addition, if the recruitment of and all dealings with human participants (and their data) are ethically acceptable, the new study is usually approved and supported. However, a further ethical dimension needs to come into focus before the new study is allowed to go ahead: Is the proposed study worthwhile? Does it add true value?
      The extreme circumstances of formulating an indictment against the physicians investigated in the Nuremberg trials (1946–47) after the Second World War led to the formulation of an ethical code that included the following statement [
      • Freedman B.
      Scientific value and validity as ethical requirements for research: a proposed explication.
      ]: “2. The experiment should be such as to yield fruitful results for the good of society, unprocurable by any other methods or means of study, and not random or unnecessary in nature.” As Benjamin Freedman states, “These principles seem to require as ethical preconditions that the study be of some value, and not simply be valid.” [
      • Freedman B.
      Scientific value and validity as ethical requirements for research: a proposed explication.
      ].
      How can a new study be demonstrated to add value? Emanuel et al. provided an answer by reversing the argument, stating that research that is not “socially or scientifically valuable includes clinical research with nongeneralizable results, a trifling hypothesis, or substantial or total overlap with proven results.” [
      • Emanuel E.J.
      • Wendler D.
      • Grady C.
      What makes clinical research ethical?.
      ]. Hence, if the relevant question raised by the new study has already been answered elsewhere, or if a substantial or total overlap exists with the available evidence base, the new study is unnecessary and of limited value.
      A systematic review of earlier studies that addresses questions similar to the one to be investigated by the new study can identify evidence gaps and the presence of an 'overlap with proven results' [
      • Emanuel E.J.
      • Wendler D.
      • Grady C.
      What makes clinical research ethical?.
      ]. This will help ensure that any new study is necessary. However, judgments about value should include not only the identification of an evidence gap but also the perspectives of the end users, ensuring that the new study is relevant to those it affects and to society. If an identified evidence gap cannot be shown to be relevant (based on the perspectives of the end users), it may not need to be filled. Similarly, a need identified by end users does not in itself constitute a gap in our knowledge [
      • Robinson K.A.
      • Saldanha I.J.
      • McKoy N.A.
      Development of a framework to identify research gaps from systematic reviews.
      ]. Today, a number of tools and approaches exist to help incorporate end users' perspectives (see, e.g., http://www.jla.nihr.ac.uk/; https://www.involve.org.uk/; or https://www.patientslikeme.com/).
      Other factors, such as the availability of funding, access to relevant technologies, and the competency of the involved researchers, will also be strong determinants of whether a new study should be conducted, but an evidence-based research approach should always be applied first.

      1.1 The operationalization of the evidence-based research approach

      To assess whether the new study would indeed be filling an evidence gap, the existing evidence should be identified and synthesized systematically and transparently—currently this is performed by conducting a systematic review of earlier studies that answer the same research question.
      The use of an evidence-based research approach during the planning phase of a study can be illustrated as shown in Figure 1.
      Fig. 1. The evidence-based research approach highlighting the steps to be taken before a study is conducted.
      Whenever health researchers are planning a new study, an existing systematic review of earlier similar studies should be identified (and updated if necessary), or a new one conducted. In addition, a systematic and transparent gathering of the relevant end users’ perspectives should be undertaken.
      If the answer to the question of the new study's value is no, the figure shows that the researchers must either consider a new angle on the suggested research question or identify another research question altogether, and then once more consider the value of this new question. Only if the researchers can clearly demonstrate that the intended study will add value, that is, that there is both a societal need for it and that it will fill a demonstrated evidence gap, should a new study be planned and designed in more detail.

      2. Identifying gaps in the present knowledge

      Any identification of an evidence gap related to a specific clinical question must be based on a systematic review of earlier similar studies, both to acquire unbiased and trustworthy knowledge of the existing evidence and to avoid unnecessarily repeating a study whose answer is already known [
      • Clarke M.
      Doing new research? Don't forget the old.
      ]. Previous studies also provide researchers with the opportunity to reflect on and determine the optimal study design—a duty they owe the patients participating in their new study (see section 5 in the following).
      Unfortunately, numerous metaresearch studies show that researchers continue to perform redundant research even when results from similar studies already provide adequate evidence to address the question [
      • Andrade N.S.
      • Flynn J.P.
      • Bartanusz V.
      Twenty-year perspective of randomized controlled trials for surgery of chronic nonspecific low back pain: citation bias and tangential knowledge.
      ,
      • Clarke M.
      • Brice A.
      • Chalmers I.
      Accumulating research: a systematic account of how cumulative meta-analyses would have provided knowledge, improved health, reduced harm and saved resources.
      ,
      • Fergusson D.
      • Glass K.C.
      • Hutton B.
      • Shapiro S.
      Randomized controlled trials of aprotinin in cardiac surgery: could clinical equipoise have stopped the bleeding?.
      ,
      • Haapakoski R.
      • Mathieu J.
      • Ebmeier K.P.
      • Alenius H.
      • Kivimaki M.
      Cumulative meta-analysis of interleukins 6 and 1beta, tumour necrosis factor alpha and C-reactive protein in patients with major depressive disorder.
      ,
      • Habre C.
      • Tramer M.R.
      • Popping D.M.
      • Elia N.
      Ability of a meta-analysis to prevent redundant research: systematic review of studies on pain from propofol injection.
      ,
      • Juni P.
      • Nartey L.
      • Reichenbach S.
      • Sterchi R.
      • Dieppe P.A.
      • Egger M.
      Risk of cardiovascular events and rofecoxib: cumulative meta-analysis.
      ,
      • Ker K.
      • Edwards P.
      • Perel P.
      • Shakur H.
      • Roberts I.
      Effect of tranexamic acid on surgical bleeding: systematic review and cumulative meta-analysis.
      ,
      • Lau J.
      • Antman E.M.
      • Jimenez-Silva J.
      • Kupelnick B.
      • Mosteller F.
      • Chalmers T.C.
      Cumulative meta-analysis of therapeutic trials for myocardial infarction.
      ,
      • Lau J.
      • Schmid C.H.
      • Chalmers T.C.
      Cumulative meta-analysis of clinical trials builds evidence for exemplary medical care.
      ,
      • Poolman R.W.
      • Farrokhyar F.
      • Bhandari M.
      Hamstring tendon autograft better than bone patellar-tendon bone autograft in ACL reconstruction: a cumulative meta-analysis and clinically relevant sensitivity analysis applied to a previously published analysis.
      ]. It is very rare that researchers use systematic reviews to justify or design a new study [
      • Clarke M.
      • Alderson P.
      • Chalmers I.
      Discussion sections in reports of controlled trials published in general medical journals.
      ,
      • Clarke M.
      • Hopewell S.
      • Chalmers I.
      Reports of clinical trials should begin and end with up-to-date systematic reviews of other relevant evidence: a status report.
      ,
      • Goudie A.C.
      • Sutton A.J.
      • Jones D.R.
      • Donald A.
      Empirical assessment suggests that existing evidence could be used more fully in designing randomized controlled trials.
      ,
      • Clarke M.
      • Hopewell S.
      • Chalmers I.
      Clinical trials should begin and end with systematic reviews of relevant evidence: 12 years and waiting.
      ,
      • Clarke M.
      • Hopewell S.
      Many reports of randomised trials still don't begin or end with a systematic review of the relevant evidence.
      ,
      • Jones A.P.
      • Conroy E.
      • Williamson P.R.
      • Clarke M.
      • Gamble C.
      The use of systematic reviews in the planning, design and conduct of randomised trials: a retrospective cohort of NIHR HTA funded trials.
      ,
      • Helfer B.
      • Prosser A.
      • Samara M.T.
      • Geddes J.R.
      • Cipriani A.
      • Davis J.M.
      • et al.
      Recent meta-analyses neglect previous systematic reviews and meta-analyses about the same topic: a systematic examination.
      ]. Typically, they refer only to a very small proportion of the original similar studies [
      • Fergusson D.
      • Glass K.C.
      • Hutton B.
      • Shapiro S.
      Randomized controlled trials of aprotinin in cardiac surgery: could clinical equipoise have stopped the bleeding?.
      ,
      • Goudie A.C.
      • Sutton A.J.
      • Jones D.R.
      • Donald A.
      Empirical assessment suggests that existing evidence could be used more fully in designing randomized controlled trials.
      ,
      • Robinson K.A.
      • Goodman S.N.
      A systematic examination of the citation of prior research in reports of randomized, controlled trials.
      ,
      • Schrag M.
      • Mueller C.
      • Oyoyo U.
      • Smith M.A.
      • Kirsch W.M.
      Iron, zinc and copper in the Alzheimer's disease brain: a quantitative meta-analysis. Some insight on the influence of citation bias on scientific opinion.
      ,
      • Sheth U.
      • Simunovic N.
      • Tornetta 3rd, P.
      • Einhorn T.A.
      • Bhandari M.
      Poor citation of prior evidence in hip fracture trials.
      ,
      • Sawin V.I.
      • Robinson K.A.
      Biased and inadequate citation of prior research in reports of cardiovascular trials is a continuing source of waste in research.
      ], and there does not seem to be a relationship between the number of earlier studies available to refer to and the number of studies the researchers actually cite. The reasons for selecting references for a study seem to be subjective [
      • Macroberts M.H.
      • Macroberts B.R.
      Quantitative measures of communication in science - a study of the formal level.
      ,
      • Amancio D.R.
      • Nunes M.G.V.
      • Oliveira O.N.
      • Costa LdF.
      Using complex networks concepts to assess approaches for citations in scientific papers.
      ,
      • Thornley C.
      • Watkinson A.
      • Nicholas D.
      • Volentine R.
      • Jamali H.R.
      • Herman E.
      • et al.
      The role of trust and authority in the citation behaviour of researchers.
      ] and based on preferences and strategic considerations rather than on a systematic and transparent approach. Positive, significant, and supportive studies are much more commonly cited than those that are negative, nonsignificant, or critical [
      • Sawin V.I.
      • Robinson K.A.
      Biased and inadequate citation of prior research in reports of cardiovascular trials is a continuing source of waste in research.
      ,
      • Puder K.S.
      • Morgan J.P.
      Persuading by citation: an analysis of the references of fifty-three published reports of phenylpropanolamine's clinical toxicity.
      ,
      • Shadish W.R.
      • Tolliver D.
      • Gray M.
      • Gupta S.K.S.
      Author judgements about works they cite: three studies from psychology journals.
      ,
      • Greenberg S.A.
      How citation distortions create unfounded authority: analysis of a citation network.
      ,
      • Fiorentino F.
      • Vasilakis C.
      • Treasure T.
      Clinical reports of pulmonary metastasectomy for colorectal cancer: a citation network analysis.
      ,
      • Jannot A.S.
      • Agoritsas T.
      • Gayet-Ageron A.
      • Perneger T.V.
      Citation bias favoring statistically significant studies was present in medical research.
      ,
      • Bastiaansen J.A.
      • de Vries Y.A.
      • Munafo M.R.
      Citation distortions in the literature on the serotonin-transporter-linked polymorphic region and amygdala activation.
      ].

      2.1 Identifying an existing or preparing a new systematic review when planning a new study

      Conducting a systematic review from scratch demands specialized knowledge, skills, and a considerable amount of time [
      • Allen I.E.
      • Olkin I.
      Estimating time to conduct a meta-analysis from number of citations retrieved.
      ,
      • Borah R.
      • Brown A.W.
      • Capers P.L.
      • Kaiser K.A.
      Analysis of the time and workers needed to conduct systematic reviews of medical interventions using data from the PROSPERO registry.
      ] as well as additional expertise from subject matter experts such as specialized librarians and statisticians. An immense effort is invested worldwide in preparing systematic reviews and developing the underlying methodology. A great example is Cochrane, which has laid the foundations for gold-standard systematic reviewing since its launch in 1993 and continues to develop and enhance the methods for preparing and updating systematic reviews. A more recent development is the International Collaboration for the Automation of Systematic Reviews (ICASR, https://icasr.github.io/) [
      • Beller E.
      • Clark J.
      • Tsafnat G.
      • Adams C.
      • Diehl H.
      • Lund H.
      • et al.
      Making progress with the automation of systematic reviews: principles of the international collaboration for the automation of systematic reviews (ICASR).
      ] that is stimulating and supporting technical initiatives to increase the efficiency and speed of the systematic review process. Most importantly, the number of globally published systematic reviews is growing rapidly [
      • Bastian H.
      • Glasziou P.
      • Chalmers I.
      Seventy-five trials and eleven systematic reviews a day: how will we ever keep up?.
      ], meaning that in the future, health researchers will have a greater chance of identifying and using existing systematic reviews to justify and design their new studies. Unfortunately, among these is also an increasing number of irrelevant and redundant reviews, making quality appraisal ever more important [
      • Juhl C.B.
      • Lund H.
      Do we really need another systematic review?.
      ,
      • Ioannidis J.P.
      The mass production of redundant, misleading, and conflicted systematic reviews and meta-analyses.
      ].
      Early on during the planning of a new study, health researchers should establish whether a relevant systematic review exists. Table 1 lists a number of recommended sites to search for existing systematic reviews.
      Table 1. Key searchable resources for existing systematic reviews

      Resource | Link | Systematic review coverage
      Cochrane Library | https://www.cochranelibrary.com/ | Health care interventions and diagnostic tests
      Campbell Collaboration Library | https://campbellcollaboration.org/library.html | Research related to crime and justice, education, international development, knowledge translation and implementation, nutrition, and social welfare
      Epistemonikos | https://www.epistemonikos.org/ | Cochrane and non-Cochrane systematic reviews and overviews of reviews of health care research
      Joanna Briggs Institute EBP Database | https://joannabriggs.org/ | Health-related research questions relevant for allied health care professionals
      PROSPERO and Open Science Framework | PROSPERO (https://www.crd.york.ac.uk/prospero/); Open Science Framework (https://osf.io/) | Registered systematic reviews currently in progress
      If a published systematic review evaluating the same research question is identified, the next step is to determine whether it is of sufficient quality by using tools such as Risk Of Bias In Systematic reviews (ROBIS) [
      • Whiting P.
      • Savovic J.
      • Higgins J.P.
      • Caldwell D.M.
      • Reeves B.C.
      • Shea B.
      • et al.
      ROBIS: a new tool to assess risk of bias in systematic reviews was developed.
      ] or AMSTAR-2 [
      • Shea B.J.
      • Reeves B.C.
      • Wells G.
      • Thuku M.
      • Hamel C.
      • Moran J.
      • et al.
      Amstar 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both.
      ].
      Once confidence in the results of the systematic review has been established, the researcher needs to evaluate whether the review is up to date. A generally applicable threshold for the currency of reviews does not exist, but a review based on a search conducted within the last year will, in most cases, be current, whereas a review based on a search older than 5 years will probably not be. Specifying a precise threshold would be misleading; the decision must be based on an assessment using specialist knowledge of the clinical field in which the research study will take place and of the speed at which its evidence base evolves. If the systematic review is found to be out of date, the researcher needs to initiate an update (either in-house or outsourced) to identify, appraise, and incorporate more recent studies, so that the planning of the new study rests on comprehensive and up-to-date knowledge of all pre-existing evidence.
      Once all three components have been evaluated, and the systematic review has been determined to be relevant to the research question, up to date, and of acceptable quality—and its conclusions call for further research—the researcher can proceed to use it to justify the planned study and to inform its design.
      However, if no relevant and adequate systematic review is found, the researcher needs either to prepare such a systematic review themselves or to outsource the task to systematic review experts. If the identified systematic review is of acceptable quality but outdated, the researcher will need to update the search for original studies. This is necessary to ensure that a study published after the search date of the systematic review does not negate the need for a new study.
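      As a minimal sketch, the appraisal sequence described in this section (does a relevant review exist, is it of acceptable quality, is it up to date?) can be expressed as a simple decision procedure. The function and its return values below are purely illustrative and not part of the article; in practice, each judgment (e.g., quality via ROBIS or AMSTAR-2, currency via field expertise) requires expert assessment:

```python
def next_step(review_found: bool, quality_acceptable: bool, up_to_date: bool) -> str:
    """Illustrative sketch of appraising an existing systematic review
    when planning a new study. All names and outcomes are hypothetical."""
    if not review_found:
        # No relevant review exists: prepare one or outsource the task.
        return "prepare or outsource a new systematic review"
    if not quality_acceptable:
        # A review judged unreliable (e.g., per ROBIS/AMSTAR-2) cannot be used.
        return "prepare or outsource a new systematic review"
    if not up_to_date:
        # Acceptable quality but outdated: update the search for original studies.
        return "update the search of the existing review"
    # Relevant, trustworthy, and current: use it to justify and design the study.
    return "use the review to justify and design the new study"
```

      The sketch makes explicit that the only path to justifying a new study directly from an existing review requires all three judgments to be positive.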

      2.2 Interpreting the results from the identified systematic review: is there an evidence gap?

      When deciding if an evidence gap exists or not based on the identified systematic review, health researchers should consider three elements: the ethical element, the quality grading of the evidence found in the systematic review, and the use of statistical methods to support the decision process.

      2.2.1 The ethical element

      In 2000, Emanuel, Wendler, and Grady proposed seven requirements for ethical clinical research studies [
      • Emanuel E.J.
      • Wendler D.
      • Grady C.
      What makes clinical research ethical?.
      ]. In addition to respect for the participants, informed consent, independent review, a favorable risk–benefit ratio, fair subject selection, and scientific validity, they also ask health researchers to consider whether the proposed research will enhance knowledge or health and therefore have value.
      Their recommendations are formulated as two questions to consider in determining if a new study is ethical [
      • Emanuel E.J.
      • Wendler D.
      • Grady C.
      What makes clinical research ethical?.
      ]:
      • (a)
        Is the research question scientifically valid and not a trifling hypothesis?
      • (b)
        Will it be possible to generalize the new results, that is, beyond the sample and context of the study?

      2.2.2 Grading the quality of the existing evidence

      When a systematic review is found, updated, or prepared, the certainty of the existing body of evidence should be determined [
      • Owens D.K.
      • Lohr K.N.
      • Atkins D.
      • Treadwell J.R.
      • Reston J.T.
      • Bass E.B.
      • et al.
      AHRQ series paper 5: grading the strength of a body of evidence when comparing medical interventions--agency for healthcare research and quality and the effective health-care program.
      ]. Several different grading systems have been developed (see e.g., [
      • Owens D.K.
      • Lohr K.N.
      • Atkins D.
      • Treadwell J.R.
      • Reston J.T.
      • Bass E.B.
      • et al.
      AHRQ series paper 5: grading the strength of a body of evidence when comparing medical interventions--agency for healthcare research and quality and the effective health-care program.
      ,
      • Balshem H.
      • Helfand M.
      • Schunemann H.J.
      • Oxman A.D.
      • Kunz R.
      • Brozek J.
      • et al.
      GRADE guidelines: 3. Rating the quality of evidence.
      ]). If confidence in the conclusion is high, that is, if the quality grading of the evidence indicates that the certainty of the evidence is high, there is no need for a new study, but if confidence is low or the evidence is insufficient, there is a case to be made for a new study.

      2.2.3 Additional statistical methods

      The interest in deciding when a meta-analysis is conclusive has led to the development of different statistical methods. Besides the obvious use of the confidence interval, other methods have been suggested such as the prediction interval (Section 10.10.4.3 in [
      • Higgins J.P.T.
      • Thomas J.
      Cochrane Handbook for Systematic Reviews of Interventions.
      ]), funnel plots [
      • Egger M.
      • Smith G.D.
      Misleading meta-analysis.
      ], trial sequential analysis [
      • Wetterslev J.
      • Thorlund K.
      • Brok J.
      • Gluud C.
      Trial sequential analysis may establish when firm evidence is reached in cumulative meta-analysis.
      ], and others (see examples in [
      • Nuesch E.
      • Juni P.
      Commentary: which meta-analyses are conclusive?.
      ]). The prediction interval, for example, includes the heterogeneity of the included studies when calculating the probable range within which the true effect lies (10.10.4.3 in [
      • Higgins J.P.T.
      • Thomas J.
      Cochrane Handbook for Systematic Reviews of Interventions.
      ]). One could argue that when the prediction interval is broader than the confidence interval, the underlying higher heterogeneity necessitates further studies; conversely, no further studies are needed if the lower limit of the prediction interval lies above the minimal clinically important difference threshold.
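      To make the prediction interval concrete, the following sketch (with hypothetical data and function names) computes a standard DerSimonian–Laird random-effects summary and the approximate prediction interval mu ± t × sqrt(tau² + SE(mu)²) described in the Cochrane Handbook. The t critical value is supplied explicitly to keep the example dependency-free; for k studies, it is the 97.5th percentile of a t distribution with k − 2 degrees of freedom:

```python
import math

def random_effects_prediction_interval(effects, variances, t_crit):
    """DerSimonian-Laird random-effects summary with the approximate
    95% prediction interval mu +/- t_crit * sqrt(tau2 + SE(mu)^2)."""
    k = len(effects)
    w = [1.0 / v for v in variances]                       # inverse-variance weights
    sw = sum(w)
    mu_fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    q = sum(wi * (yi - mu_fixed) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)                     # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]         # random-effects weights
    mu = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    half = t_crit * math.sqrt(tau2 + se ** 2)
    return mu, tau2, (mu - half, mu + half)

# Five hypothetical trials (effect estimates with within-study variance 0.04);
# t_crit = 3.182 is the 97.5th percentile of t with k - 2 = 3 df.
mu, tau2, pi = random_effects_prediction_interval(
    [0.1, 0.3, 0.5, 0.7, 0.9], [0.04] * 5, t_crit=3.182)
# pi is roughly (-0.40, 1.40), far wider than the ~(0.22, 0.78) 95% confidence
# interval, flagging heterogeneity that may warrant further studies.
```

      In this hypothetical example, if the lower limit of the prediction interval instead lay above the minimal clinically important difference, one could argue, as above, that no further studies are needed.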
      To sum up, one can be fairly confident that a new study needs to be carried out if the answers to the two ethical questions are “yes”, if the grading of the evidence for the conclusion of the systematic review indicates that the certainty of evidence is low or very low, and if statistical methods support the low certainty of evidence. The final decision on whether to conduct a study will come down to a nuanced consideration of the balance between the existing evidence, the relevance of the topic, and the opportunity cost of the research.

      2.3 What if there are no earlier studies

      During the early days of a new treatment or diagnostic method, it can be difficult to identify any earlier similar studies for the research in question. Even if the intervention is genuinely new, a systematic review should be conducted; in such cases, the search should be easy and fast to perform and appraise, and it will conclusively document that no earlier studies exist. The main focus will then be to make sure that the intervention is relevant for end users and society (see Section 3 in the following).
      As stated in the explanation of the CONSORT guidelines, it is also important to consider whether there is any “plausible explanation for how the intervention under investigation might work, especially if there is little or no previous experience with the intervention” [
      • Altman D.G.
      • Schulz K.F.
      • Moher D.
      • Egger M.
      • Davidoff F.
      • Elbourne D.
      • et al.
      The revised CONSORT statement for reporting randomized trials: explanation and elaboration.
      ]. The example in Box 1 illustrates a possible approach in cases when no earlier similar original studies can be found.
      Box 1. Researchers are planning a randomized controlled trial (RCT) of a recently developed physiotherapy treatment for patients with balance deficits (e.g., due to multiple sclerosis or parkinsonism) and want to use an evidence-based research approach to justify their study. They decide to perform a qualitative study to obtain the perspectives of the patients and clinicians who will receive or prescribe this new intervention (see Section 3 in the following) and a scoping review to understand the present evidence for other nonpharmacological treatments for patients with balance deficits.

      3. Establishing whether a new study is relevant

      In addition to the identification of evidence gaps through the use of systematic reviews, the need for a new study should be justified by establishing that it addresses a question relevant to its end users. In the health context, end users usually encompass patients, caregivers, and clinicians. Patients and their caregivers may often be represented by national patient organizations (e.g., the American Heart Association or the Norwegian Rheumatic Association). In other circumstances, carefully selected individuals, committees, or boards arranged by researchers may be used to elicit patient perspectives. Clinicians, likewise, have their professional organizations or can provide input as individuals invited to a panel or committee.
      The US Patient-Centered Outcomes Research Institute defines patient-relevant research as “the evaluation of questions and outcomes meaningful and important to patients and caregivers. The definition rests on the axiom that patients have unique perspectives that can change and improve the pursuit of clinical questions…including the perspectives of end users of the research, which include patients, physicians, and other health care stakeholders, will enhance the relevance of research to actual health decisions these end users face” [
      • Frank L.
      • Basch E.
      • Selby J.V.
      Patient-Centered Outcomes Research Institute. The PCORI perspective on patient-centered outcomes research.
      ]. Within the last 20 years, and accentuated within the last 5–10 years, funders, regulators, and others supporting health research have been demanding the involvement of end users in research.
      Patients have first-hand lived experience that provides insight into the matters most important to a specific patient group under study. Numerous studies show a discrepancy between what end users need and what researchers focus on, indicating that researchers are poor at identifying, or simply ignore, the needs of end users when planning new research [
      • Tallon D.
      • Chard J.
      • Dieppe P.
      Relation between agendas of the research community and the research consumer.
      ,
      • Chalmers I.
      • Rounding C.
      • Lock K.
      Descriptive survey of non-commercial randomised controlled trials in the United Kingdom, 1980-2002.
      ,
      • Hewlett S.
      • Wit M.
      • Richards P.
      • Quest E.
      • Hughes R.
      • Heiberg T.
      • et al.
      Patients and professionals as research partners: challenges, practicalities, and benefits.
      ,
      • Chalmers I.
      • Glasziou P.
      Avoidable waste in the production and reporting of research evidence.
      ,
      • Crowe S.
      • Fenton M.
      • Hall M.
      • Cowan K.
      • Chalmers I.
      Patients’, clinicians’ and the research communities’ priorities for treatment research: there is an important mismatch.
      ,
      • Owens C.
      • Ley A.
      • Aitken P.
      Do different stakeholder groups share mental health research priorities? A four-arm Delphi study.
      ,
      • Stewart R.J.
      • Caird J.
      • Oliver K.
      • Oliver S.
      Patients' and clinicians' research priorities.
      ,
      • Kirwan J.R.
      • Minnock P.
      • Adebajo A.
      • Bresnihan B.
      • Choy E.
      • de Wit M.
      • et al.
      Patient perspective: fatigue as a recommended patient centered outcome measure in rheumatoid arthritis.
      ]. In addition, public funding of research only modestly correlates with disease burden, if at all [
      • Gross C.P.
      • Anderson G.F.
      • Powe N.R.
      The relation between funding by the National Institutes of Health and the burden of disease.
      ,
      • Stuckler D.
      • King L.
      • Robinson H.
      • McKee M.
      WHO's budgetary allocations and burden of disease: a comparative analysis.
      ,
      • Perel P.
      • Miranda J.J.
      • Ortiz Z.
      • Casas J.P.
      Relation between the global burden of disease and randomized clinical trials conducted in Latin America published in the five leading medical journals.
      ].

      3.1 How to include the end user perspective?

The evidence-based research approach does not prescribe any particular methodology for involving end users in the research process, but it does suggest that whatever method is chosen should be applied systematically and transparently (see Box 2). The optimal approach would be to conduct a systematic review of qualitative studies or surveys identifying the experiences and attitudes of patients or clinicians toward the disease, treatment, or diagnostic technique in question.
A new rehabilitation program for patients surviving a cardiac arrest was to be developed and tested. The PhD student applied an evidence-based research approach to planning and, instead of designing a new randomized clinical trial straightaway, first prepared the following three studies: The first study was a national population-based survey aiming to describe the self-reported prevalence of cognitive, psychological, and physical problems in people surviving a cardiac arrest and to analyze how these change over time. The study provided information about end users' perspectives in a systematic and transparent way. This could also have been achieved by conducting a systematic review of earlier published qualitative studies that had asked cardiac arrest survivors what was important to them when it came to their need for rehabilitation. The second study was a systematic review of the evidence for the effectiveness of nonpharmacological interventions, evaluating different existing rehabilitation programs for cardiac arrest survivors. Based on the results of the two initial studies, the PhD student prepared a third study exploring the acceptability and feasibility of a rehabilitation intervention designed for cardiac arrest survivors. Using an evidence-based research approach helped to lay firm foundations for future randomized controlled trials.
The challenge of using systematic reviews of qualitative studies or surveys is that such a review will most probably not cover all necessary aspects of the project. Thus, many health researchers have chosen to obtain end users' perspectives by inviting them to become members of a panel, committee, board, or even of the research group itself [Domecq et al.]. Unfortunately, this approach increases the risk of nonsystematic and nontransparent involvement of end users. We therefore suggest looking for systematic reviews of qualitative studies or surveys that capture the perspectives of the relevant end users, even though such systematic reviews are currently sparse [Tong et al.] compared with systematic reviews of randomized clinical trials [Bastian et al.].

      4. The interaction between identifying evidence gaps and including end users' perspectives

As illustrated in Figure 1, both end users' perspectives and the results of a relevant systematic review of similar earlier studies need to be considered when establishing the need for a new study. Moreover, the two elements interact with each other. If end users are asked to identify and prioritize the most relevant research questions, it is only fair to inform them about the existing evidence base, to avoid wasting time discussing research questions that have already been answered. Methods such as evidence gap maps, systematic maps, evidence maps, evidence mapping, and the like (see [Saran and White]) have been used successfully for this purpose.
      In cases where there are no earlier studies (like the case with the new nonpharmacological treatment for balance deficits among neurological patients, see Box 1), end users’ perspectives may well be the sole source for answering the question regarding the value of a new study.

      5. Applying evidence-based research principles to the design of a new study

Study design is determined by a range of methodological options, such as how samples are recruited and assigned, how data are collected and analyzed, and which interventions and instruments are used. Applying an evidence-based research approach closely links the research question with the chosen methods and design, thereby helping to ensure that the new study is valuable.
The justification process described previously is so closely related to the design process that in practice the two should not be performed separately. The justification process reaches a binary conclusion: the study adds value, or it does not. If the new study does not add value, there is no reason to progress to the design process. If the new study is determined to be of value, the process leading to this conclusion is the same as that for designing the new study.

      5.1 Design informed by a systematic review

A systematic review will typically conclude with implications for practice and implications for research. The newest version of the Cochrane Handbook (Chapter 15.6 [Higgins and Thomas]) suggests formulating the implications for research in relation to the grading of the evidence. A framework using a Population, Intervention, Comparison, Outcomes (PICO) approach to characterize important aspects of a possible evidence gap was published in 2011 [Robinson et al.]. Table 2 gives an example of evidence gaps identified, using Robinson's framework [Robinson et al.], through the synthesis of earlier studies in the 2016 Cochrane systematic review of aquatic exercise for knee and hip osteoarthritis [Bartels et al.].
Table 2. The example of a Cochrane review on aquatic exercise for patients with knee and hip osteoarthritis [Bartels et al.], using Robinson's framework to identify evidence gaps [Robinson et al.]
Population (P): Patients with knee and hip osteoarthritis; severity of disease unclearly reported. Gap: use an international standard for defining a patient with knee and hip osteoarthritis.
Intervention (I): Aquatic exercise. Gaps: specify a clear type of aquatic exercise; design studies testing different exercise doses relevant for clinical practice.
Comparison (C): Control. Gap: lack of studies comparing often-used alternative treatments.
Outcome (O): Pain, function, and quality of life were measured, but no studies have measured the effect of aquatic exercise on fatigue. Gaps: outcomes were measured only immediately after the end of treatment; an appropriate follow-up time is needed.

      6. Reporting

An inevitable question is how the applied evidence-based research approach should be documented when the study is written up for publication. CONSORT clearly recommends that the “need for a new trial should be justified in the introduction. Ideally, it should include a reference to a systematic review of previous similar trials or a note of the absence of such trials” [Moher et al.]. Even the first version of CONSORT [Begg et al.] emphasized this issue but, as metaresearch clearly indicates, very few authors follow this guidance [Fergusson et al.; Clarke and Hopewell; Helfer et al.; Sutton et al.; Engelking et al.; Cooper et al.; De Meulemeester et al.], and very few journals request, let alone enforce, it. Taking the example from Table 2, a new study aiming to address the evidence gaps listed in Bartels' Cochrane review should refer to and cite that systematic review when providing a justification for the new study [Bartels et al.].
The introduction section could also be used to document the process of determining the relevant components of the study design. In our example, the authors might simply state, with regard to the selected primary clinical outcomes, that “there is moderate quality evidence that aquatic exercise may have small, short-term, and clinically relevant effects on patient-reported pain, disability, and QoL in people with knee and hip osteoarthritis” [Bartels et al.]. Alternatively, they might mention, when justifying the study's patient groups, that the evidence has identified a number of gaps relating to disease severity. Authors of a new study protocol should not only state this fact but also explain how the research question and the design of the new study are justified and informed by the perspectives of end users and by the implications for research identified in a relevant systematic review.
In cases where a study protocol is published, this is the optimal opportunity to report the evidence-based research approach to study design. The protocol can document the reasoning behind the study's PICO selection and the prioritization of the identified evidence gaps, in relation to their clinical relevance for the patient and the practitioner, in much greater detail than the final article summarizing the study results. One good example is a protocol for a systematic review on the impact of periodontal therapy on measures of disease activity and actual inflammatory burden in patients with rheumatoid arthritis. Its introduction section included an evidence-based research component providing a detailed description of the PICO question, the information sources used, the results of the search, and the implications of these results for the justification and design of the new systematic review about this topic.
If ethics committees and funding agencies demanded such an approach and level of information in applications for ethical approval and financial support, this would create a strong incentive to consider systematically and transparently the relevance and necessity of any new study. Apart from a few exceptions [Chinnery et al.], funding agencies have not yet implemented this requirement [Nasser et al.].

      7. Discussion

      Although by itself it does not address all issues required to demonstrate the value of new research, the evidence-based research approach is a crucial part of the process of justifying and designing a new study.
In proposing the evidence-based research approach to justify and design new studies, we recognize a number of important challenges. First, as noted in the introduction to this series (REF to EBR article #1), many clinical researchers, despite being aware of systematic reviews, lack the knowledge, skills, and time to conduct one. Strongly related to this first challenge is the current lack of incentive for researchers to prioritize the preparation of a systematic review over actually starting their new study. These challenges need to be tackled by the wider research ecosystem and not seen purely as the responsibility of the individual researcher. Research institutions, funding agencies, ethics committees, and publishers need to adapt and remove barriers to the implementation of an evidence-based research approach. Relevant training curricula and facilities need to be provided to equip researchers with the necessary skills to identify, use, and, if need be, update or conduct systematic reviews when planning new studies.
Research waste caused by scientists asking the wrong questions, poor study design, inaccessible research, and selective and biased reporting [Kleinert and Benham] not only produces redundant studies with no apparent value for research, practice, or end users; it also perpetuates the irresponsible use of funding resources and risks damaging the public's trust in research [Maggio et al.].
How, then, should these challenges of disseminating and anchoring evidence-based research be met? We recommend structured and mandatory training in finding and critically appraising systematic reviews at undergraduate and postgraduate levels, and the active empowerment of mentors and supervisors by offering relevant training to senior researchers during the transition period until the evidence-based research approach is fully established.
      Some might argue that our recommendations will place a considerable burden of effort and time on researchers, but as outlined in our series, the evidence-based research approach is an absolute necessity to ensure valuable research. The greatest waste and burden is to carry out a completely unnecessary research study!

      Acknowledgments

      This work has been prepared as part of the Evidence-Based Research Network (ebrnetwork.org). The EBRNetwork is an international network that promotes the use of systematic reviews when prioritizing, designing, and interpreting research. Evidence-based research is the use of prior research in a systematic and transparent way to inform the new study so that it is answering questions that matter in a valid, efficient, and accessible manner.
      The authors thank the Centre for Evidence-Based Practice, Western Norway University of Applied Sciences for its very generous support of the EBRNetwork.
      The Parker Institute, Bispebjerg and Frederiksberg Hospital (Professor Christensen and Professor Henriksen) are supported by a core grant from the Oak Foundation USA ( OCAY-18-774-OFIL ).
      Financial support
      This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

      Supplementary data

      References

        • Robinson K.A.
        Use of prior research in the justification and interpretation of clinical trials [dissertation].
        Johns Hopkins University; 2009
        • Juhl C.B.
        • Lund H.
        Do we really need another systematic review?.
        Br J Sports Med. 2018;
        • Ioannidis J.P.
        The mass production of redundant, misleading, and conflicted systematic reviews and meta-analyses.
        Milbank Q. 2016; 94: 485-514
        • Freedman B.
        Scientific value and validity as ethical requirements for research: a proposed explication.
        IRB. 1987; 9: 7-10
        • Emanuel E.J.
        • Wendler D.
        • Grady C.
        What makes clinical research ethical?.
        JAMA. 2000; 283: 2701-2711
        • Robinson K.A.
        • Saldanha I.J.
        • McKoy N.A.
        Development of a framework to identify research gaps from systematic reviews.
        J Clin Epidemiol. 2011; 64: 1325-1330
        • Clarke M.
        Doing new research? Don't forget the old.
        PLoS Med. 2004; 1: e35
        • Andrade N.S.
        • Flynn J.P.
        • Bartanusz V.
        Twenty-year perspective of randomized controlled trials for surgery of chronic nonspecific low back pain: citation bias and tangential knowledge.
        Spine J. 2013; 13: 1698-1704
        • Clarke M.
        • Brice A.
        • Chalmers I.
        Accumulating research: a systematic account of how cumulative meta-analyses would have provided knowledge, improved health, reduced harm and saved resources.
        PLoS One. 2014; 9: e102670
        • Fergusson D.
        • Glass K.C.
        • Hutton B.
        • Shapiro S.
        Randomized controlled trials of aprotinin in cardiac surgery: could clinical equipoise have stopped the bleeding?.
        Clin Trials. 2005; 2: 218-229
        • Haapakoski R.
        • Mathieu J.
        • Ebmeier K.P.
        • Alenius H.
        • Kivimaki M.
        Cumulative meta-analysis of interleukins 6 and 1beta, tumour necrosis factor alpha and C-reactive protein in patients with major depressive disorder.
        Brain Behav Immun. 2015; 49: 206-215
        • Habre C.
        • Tramer M.R.
        • Popping D.M.
        • Elia N.
        Ability of a meta-analysis to prevent redundant research: systematic review of studies on pain from propofol injection.
        BMJ. 2014; 348: g5219
        • Juni P.
        • Nartey L.
        • Reichenbach S.
        • Sterchi R.
        • Dieppe P.A.
        • Egger M.
        Risk of cardiovascular events and rofecoxib: cumulative meta-analysis.
        Lancet. 2004; 364: 2021-2029
        • Ker K.
        • Edwards P.
        • Perel P.
        • Shakur H.
        • Roberts I.
        Effect of tranexamic acid on surgical bleeding: systematic review and cumulative meta-analysis.
        BMJ. 2012; 344: e3054
        • Lau J.
        • Antman E.M.
        • Jimenez-Silva J.
        • Kupelnick B.
        • Mosteller F.
        • Chalmers T.C.
        Cumulative meta-analysis of therapeutic trials for myocardial infarction.
        N Engl J Med. 1992; 327: 248-254
        • Lau J.
        • Schmid C.H.
        • Chalmers T.C.
        Cumulative meta-analysis of clinical trials builds evidence for exemplary medical care.
        J Clin Epidemiol. 1995; 48: 45-57
        • Poolman R.W.
        • Farrokhyar F.
        • Bhandari M.
        Hamstring tendon autograft better than bone patellar-tendon bone autograft in ACL reconstruction: a cumulative meta-analysis and clinically relevant sensitivity analysis applied to a previously published analysis.
        Acta Orthop. 2007; 78: 350-354
        • Clarke M.
        • Alderson P.
        • Chalmers I.
        Discussion sections in reports of controlled trials published in general medical journals.
        JAMA. 2002; 287: 2799-2801
        • Clarke M.
        • Hopewell S.
        • Chalmers I.
        Reports of clinical trials should begin and end with up-to-date systematic reviews of other relevant evidence: a status report.
        J R Soc Med. 2007; 100: 187-190
        • Goudie A.C.
        • Sutton A.J.
        • Jones D.R.
        • Donald A.
        Empirical assessment suggests that existing evidence could be used more fully in designing randomized controlled trials.
        J Clin Epidemiol. 2010; 63: 983-991
        • Clarke M.
        • Hopewell S.
        • Chalmers I.
        Clinical trials should begin and end with systematic reviews of relevant evidence: 12 years and waiting.
        Lancet. 2010; 376: 20-21
        • Clarke M.
        • Hopewell S.
        Many reports of randomised trials still don't begin or end with a systematic review of the relevant evidence.
        J Bahrain Med Soc. 2013; 24: 145-148
        • Jones A.P.
        • Conroy E.
        • Williamson P.R.
        • Clarke M.
        • Gamble C.
        The use of systematic reviews in the planning, design and conduct of randomised trials: a retrospective cohort of NIHR HTA funded trials.
        BMC Med Res Methodol. 2013; 13: 50
        • Helfer B.
        • Prosser A.
        • Samara M.T.
        • Geddes J.R.
        • Cipriani A.
        • Davis J.M.
        • et al.
        Recent meta-analyses neglect previous systematic reviews and meta-analyses about the same topic: a systematic examination.
        BMC Med. 2015; 13: 82
        • Robinson K.A.
        • Goodman S.N.
        A systematic examination of the citation of prior research in reports of randomized, controlled trials.
        Ann Intern Med. 2011; 154: 50-55
        • Schrag M.
        • Mueller C.
        • Oyoyo U.
        • Smith M.A.
        • Kirsch W.M.
        Iron, zinc and copper in the Alzheimer's disease brain: a quantitative meta-analysis. Some insight on the influence of citation bias on scientific opinion.
        Prog Neurobiol. 2011; 94: 296-306
        • Sheth U.
        • Simunovic N.
        • Tornetta 3rd, P.
        • Einhorn T.A.
        • Bhandari M.
        Poor citation of prior evidence in hip fracture trials.
        J Bone Joint Surg Am. 2011; 93: 2079-2086
        • Sawin V.I.
        • Robinson K.A.
        Biased and inadequate citation of prior research in reports of cardiovascular trials is a continuing source of waste in research.
        J Clin Epidemiol. 2015;
        • Macroberts M.H.
        • Macroberts B.R.
        Quantitative measures of communication in science - a study of the formal level.
        Soc Stud Sci. 1986; 16: 151-172
        • Amancio D.R.
        • Nunes M.G.V.
        • Oliveira O.N.
        • Costa LdF.
        Using complex networks concepts to assess approaches for citations in scientific papers.
        Scientometrics. 2012; 91: 827-842
        • Thornley C.
        • Watkinson A.
        • Nicholas D.
        • Volentine R.
        • Jamali H.R.
        • Herman E.
        • et al.
        The role of trust and authority in the citation behaviour of researchers.
        Inf Res. 2015; 20: 677
        • Puder K.S.
        • Morgan J.P.
        Persuading by citation: an analysis of the references of fifty-three published reports of phenylpropanolamine's clinical toxicity.
        Clin Pharmacol Ther. 1987; 42: 1-9
        • Shadish W.R.
        • Tolliver D.
        • Gray M.
        • Gupta S.K.S.
        Author judgements about works they cite: three studies from psychology journals.
        Soc Stud Sci. 1995; 25: 477-498
        • Greenberg S.A.
        How citation distortions create unfounded authority: analysis of a citation network.
        BMJ. 2009; 339: b2680
        • Fiorentino F.
        • Vasilakis C.
        • Treasure T.
        Clinical reports of pulmonary metastasectomy for colorectal cancer: a citation network analysis.
        Br J Cancer. 2011; 104: 1085-1097
        • Jannot A.S.
        • Agoritsas T.
        • Gayet-Ageron A.
        • Perneger T.V.
        Citation bias favoring statistically significant studies was present in medical research.
        J Clin Epidemiol. 2013; 66: 296-301
        • Bastiaansen J.A.
        • de Vries Y.A.
        • Munafo M.R.
        Citation distortions in the literature on the serotonin-transporter-linked polymorphic region and amygdala activation.
        Biol Psychiatry. 2015; 78: e35-e36
        • Allen I.E.
        • Olkin I.
        Estimating time to conduct a meta-analysis from number of citations retrieved.
        JAMA. 1999; 282: 634-635
        • Borah R.
        • Brown A.W.
        • Capers P.L.
        • Kaiser K.A.
        Analysis of the time and workers needed to conduct systematic reviews of medical interventions using data from the PROSPERO registry.
        BMJ Open. 2017; 7: e012545
        • Beller E.
        • Clark J.
        • Tsafnat G.
        • Adams C.
        • Diehl H.
        • Lund H.
        • et al.
        Making progress with the automation of systematic reviews: principles of the international collaboration for the automation of systematic reviews (ICASR).
        Syst Rev. 2018; 7: 77
        • Bastian H.
        • Glasziou P.
        • Chalmers I.
        Seventy-five trials and eleven systematic reviews a day: how will we ever keep up?.
        PLoS Med. 2010; 7: e1000326
        • Whiting P.
        • Savovic J.
        • Higgins J.P.
        • Caldwell D.M.
        • Reeves B.C.
        • Shea B.
        • et al.
        ROBIS: a new tool to assess risk of bias in systematic reviews was developed.
        J Clin Epidemiol. 2016; 69: 225-234
        • Shea B.J.
        • Reeves B.C.
        • Wells G.
        • Thuku M.
        • Hamel C.
        • Moran J.
        • et al.
        Amstar 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both.
        BMJ. 2017; 358: j4008
        • Owens D.K.
        • Lohr K.N.
        • Atkins D.
        • Treadwell J.R.
        • Reston J.T.
        • Bass E.B.
        • et al.
        AHRQ series paper 5: grading the strength of a body of evidence when comparing medical interventions--agency for healthcare research and quality and the effective health-care program.
        J Clin Epidemiol. 2010; 63: 513-523
        • Balshem H.
        • Helfand M.
        • Schunemann H.J.
        • Oxman A.D.
        • Kunz R.
        • Brozek J.
        • et al.
        GRADE guidelines: 3. Rating the quality of evidence.
        J Clin Epidemiol. 2011; 64: 401-406
        • Higgins J.P.T.
        • Thomas J.
        Cochrane Handbook for Systematic Reviews of Interventions.
        2nd ed. Wiley Blackwell, Hoboken, NJ2019
        • Egger M.
        • Smith G.D.
        Misleading meta-analysis.
        BMJ. 1995; 311: 753-754
        • Wetterslev J.
        • Thorlund K.
        • Brok J.
        • Gluud C.
        Trial sequential analysis may establish when firm evidence is reached in cumulative meta-analysis.
        J Clin Epidemiol. 2008; 61: 64-75
        • Nuesch E.
        • Juni P.
        Commentary: which meta-analyses are conclusive?.
        Int J Epidemiol. 2009; 38: 298-303
        • Altman D.G.
        • Schulz K.F.
        • Moher D.
        • Egger M.
        • Davidoff F.
        • Elbourne D.
        • et al.
        The revised CONSORT statement for reporting randomized trials: explanation and elaboration.
        Ann Intern Med. 2001; 134: 663-694