
A systematic survey of methods guidance suggests areas for improvement regarding access, development, and transparency

  • Julian Hirt: Department of Clinical Research, University Hospital Basel, University of Basel, Basel, Switzerland; International Graduate Academy, Institute for Health and Nursing Science, Medical Faculty, Martin Luther University Halle-Wittenberg, Halle (Saale), Germany
  • Hannah Ewald: University Medical Library, University of Basel, Basel, Switzerland
  • Daeria O. Lawson: Department of Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, Canada
  • Lars G. Hemkens: Department of Clinical Research, University Hospital Basel, University of Basel, Basel, Switzerland; Meta-Research Innovation Center Berlin (METRIC-B), Berlin Institute of Health, Berlin, Germany; Meta-Research Innovation Center at Stanford (METRICS), Stanford University, Stanford, CA, USA
  • Matthias Briel: Department of Clinical Research, University Hospital Basel, University of Basel, Basel, Switzerland; Department of Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, Canada
  • Stefan Schandelmaier (corresponding author, Tel.: +41613285412): Department of Clinical Research, University Hospital Basel, University of Basel, Basel, Switzerland; Department of Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, Canada
Open Access. Published: May 22, 2022. DOI: https://doi.org/10.1016/j.jclinepi.2022.05.005

      Abstract

      Objectives

      To assess the current practice of developing and presenting methods guidance and explore opportunities for improvement.

      Study Design and Setting

      We systematically surveyed methods guidance published in high-impact general and methodology-focused medical journals indexed in MEDLINE in 2020. We included articles that explicitly stated the objective to provide methods guidance for health research. We extracted characteristics related to findability, methods used for development, presentation, and transparency.

      Results

      We included 105 methods guidance articles published in 12 different journals. Less than half had a structured abstract (42%) or were indexed with medical subject headings (38%) or author keywords (17%) related to guidance. Methods for development, reported in 42%, differed between reporting guidelines (n = 13, 100% reported methods) and other guidance articles (n = 92, 34% reported methods). Frequent methods for presentation were illustrative case studies (45%), research checklists (34%), and step-by-step guides (10%). Few articles (22%) described the authors' expertise. Conflicts of interest, reported in 34%, were often unclear.

      Conclusion

      Potential areas for improving methods guidance include better findability through more consistent labeling and indexing and standards for development and reporting.


      What is new?

        Key findings

      • Published methods guidance articles were infrequently indexed as guidance, only a minority reported methods of development or the authors' expertise, and conflicts of interest were typically not clearly stated.
      • Common formats of methods guidance articles included illustrative case studies, research checklists, and step-by-step guides.

        What this adds to what is known?

      • In contrast to reporting guidance, guidance for the planning, conduct, analysis, and interpretation of health research is highly inconsistent regarding terminology, presentation, and methods for development.

        What is the implication, what should change now?

      • Research initiatives to create more consistent terminology and a freely accessible, more comprehensive inventory of methods guidance (e.g., www.lights.science) could improve findability.
      • Consensus-based recommendations for the development and reporting of methods guidance are needed to increase quality, consistency, and uptake.

      1. Introduction

      Studies suggesting that about 85% of research resources are wasted [Ioannidis, How to make more published research true] and that most published clinical research is false [Ioannidis, Why most published research findings are false] or useless [Ioannidis, Why most clinical research is not useful] have shaken the health research community. A survey among 1,576 researchers identified “more robust experimental design” and “better statistics” as the leading suggestions for improving research quality [Baker, Is there a reproducibility crisis?]. Empirical evidence supporting this view comes from numerous methodological studies that document inappropriate methodology as a major source of waste [Yordanov et al., Avoidable waste of research related to inadequate methods in clinical trials; Dechartres et al., Evolution of poor reporting and inadequate methods over time in 20 920 randomised controlled trials included in Cochrane reviews; Ndounga Diakou et al., Avoidable waste related to inadequate methods and incomplete reporting of interventions; Ramagopalan et al., Prevalence of primary outcome changes in clinical trials registered on ClinicalTrials.gov; Sun et al., Credibility of claims of subgroup effects in randomised controlled trials; SAGE Research Methods]. For instance, a review of 142 randomized clinical trials found that 96% suffered from one or more serious methodological flaws, most of which could have been corrected [Yordanov et al., Avoidable waste of research related to inadequate methods in clinical trials]. Another study provided a longitudinal perspective and found, despite a positive trend over time, that methodological limitations remain an urgent problem [Vinkers et al., The methodological quality of 176,620 randomized controlled trials published between 1966 and 2018].
      An important activity among research methodologists and statisticians is developing guidance for the design, conduct, analysis, interpretation, and reporting of health studies (i.e., methods guidance). Common formats of methods guidance include textbooks [SAGE Research Methods; Higgins et al., Cochrane Handbook for systematic reviews of interventions version 6.2], documents issued by regulatory organizations [ICH official web site; Office of the Commissioner, Search for FDA guidance documents], and journal articles. A recent survey showed that peer-reviewed journal articles are the primary knowledge dissemination instruments used by methodologists and, aside from exchange with colleagues, the second most important source for learning about new research methods [Pullenayegum et al., Knowledge translation in biostatistics].
      Literature addressing the content and quality of methods guidance is scarce. Some articles have focused on the content of reporting guidelines [Banno et al., The majority of reporting guidelines are not developed with the Delphi method; Wang et al., Methodology and reporting quality of reporting guidelines; Moher et al., Guidance for developers of health research reporting guidelines; Moher et al., Describing reporting guidelines for health research: a systematic review; Bennett et al., Reporting guidelines for survey research]. One study assessed the characteristics of 30 methods guidance articles published between 2009 and 2018 that were labeled as “frameworks” and included a methods section (i.e., a small fraction of methods guidance) [McMeekin et al., How methodological frameworks are being developed]. Another study assessed 18 guidance articles for psychiatric drug trials from the European Medicines Agency (EMA; published between 2005 and 2017) and the United States Food and Drug Administration (FDA; published between 2015 and 2019) and found limitations regarding generalizability and under-representation of nonconflicted stakeholders [Alonso-Coello et al., GRADE Evidence to Decision (EtD) frameworks].
      Research investigating methods guidance published in medical journals, covering its terminology, transparency, and methods, is lacking. Such research would be important to better understand the different types of methods guidance and the methods used for developing and presenting guidance, to help identify areas in need of improvement or further research, and to provide an empirical basis for developing quality and reporting criteria for methods guidance.
      Motivated by the potential of methods guidance to improve health research, we systematically characterized recent guidance articles to assess the current practice of developing and presenting them and explore opportunities for improvement.

      2. Methods

      2.1 Study design and sources of information

      A team consisting of experts in health research methodology and metaresearch (J.H., M.B., L.G.H., D.O.L., and St.S.), information science (H.E. and J.H.), and biostatistics (D.O.L.) performed a systematic survey to characterize recent methods guidance articles. We focused our survey on high-impact general medical and methodology-oriented journals published in 2020. Such a survey, focused on a specific time frame and selected relevant sources of information, has recently been labeled a mapping review [Alonso-Coello et al., GRADE Evidence to Decision (EtD) frameworks]. The six highest-impact general medical journals as per their 2020 Clarivate Analytics Impact Factor were The Lancet, The New England Journal of Medicine, Journal of the American Medical Association, The British Medical Journal, PLOS Medicine, and Annals of Internal Medicine. Methodology-focused journals that we identified as likely to have a high impact and include methods guidance for health research (selected based on our experience) were Statistics in Medicine, BMC Medical Research Methodology, Journal of Clinical Epidemiology, International Journal of Epidemiology, European Journal of Epidemiology, American Journal of Epidemiology, Epidemiology, and BMC Trials.

      2.2 Eligibility criteria

      A definition of methods guidance is not available. In this exploratory survey, we avoided prespecifying a potentially exclusive definition of methods guidance. Instead, we included journal articles that (1) stated the objective to provide methods guidance (author-defined, considering any potential alternative expression) or were part of a series, section, or type of article that journal editors characterized as methods guidance and (2) addressed methods for the design, conduct, analysis, interpretation, or reporting of health studies.
      We excluded (1) articles that provided methodological guidance in the discussion section only without specifying the development of guidance as an objective, (2) case studies and lessons learned articles written by groups of researchers whose original aim was to conduct a study but not develop methods guidance, (3) articles that explicitly stated that further development of the guidance is needed, (4) articles that provided recommendations on the content rather than the methods of research (such as research priorities, core outcome sets, or health measurement scales), (5) proposals for new methods and proof-of-concept studies, (6) letters to the editor, and (7) research protocols.

      2.3 Search and selection process

      We used a stepwise approach to create a diverse sample of recent methods guidance. First, a single reviewer (St.S.) searched the journals listed above for specific sections, series, and publication types that intended to provide methods guidance (provided in Fig. S1). We then included all eligible articles published in those sections, series, or types in the year 2020 (including Epub ahead of print). In a second step, we reviewed labels of sections, article series, article types, titles, abstracts, author keywords, and Medical Subject Headings (MeSH terms) of the included articles and recorded any alternative terms for guidance. In addition, we searched the InterTASC Information Specialists' Sub-Group (ISSG) repository to identify search filters for methods guidance (we found none) and for clinical practice guidelines, from which we extracted alternative terms for guidance. An information specialist trained in research methodology (H.E.) used the identified terms to develop a formal search strategy that we applied to MEDLINE (Supplement File 1). The search did not include a filter for “methods”; such filters are highly inefficient considering that most scientific publications include methodological terms in their abstracts. Instead, we limited the search to the abovementioned eight methodology-focused journals. A single reviewer (J.H., D.O.L., or St.S.) assessed abstracts or, if unclear, the full text for eligibility and recorded a quote to support eligibility (provided in Supplement File 2). In unclear cases, the reviewer made the decision together with a second reviewer.
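      The core idea of the term-based search step can be illustrated with a minimal sketch. The term list and helper function below are hypothetical simplifications; the actual MEDLINE strategy (Supplement File 1) is far more comprehensive and uses database-specific field syntax rather than plain-text matching.

```python
import re

# Hypothetical subset of alternative expressions for "guidance";
# illustrative only, not the published search strategy.
GUIDANCE_TERMS = [
    "guidance", "guideline", "guide", "recommendation",
    "tutorial", "checklist", "step-by-step", "how to",
]

def mentions_guidance(title_or_abstract: str) -> bool:
    """Return True if any alternative expression for guidance appears."""
    text = title_or_abstract.lower()
    # Word-boundary prefix match so "guide" also catches "guides".
    return any(re.search(r"\b" + re.escape(term), text) for term in GUIDANCE_TERMS)

records = [
    "A tutorial on causal modeling for epidemiologists",
    "Efficacy of drug X versus placebo: a randomized trial",
]
candidates = [r for r in records if mentions_guidance(r)]
# Only the first record survives this screen; human review of the
# abstract or full text would still be needed, as in the survey.
```

      A screen like this is deliberately sensitive rather than specific, which is why the survey combined it with journal restriction and manual eligibility assessment.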

      2.4 Data extraction and synthesis

      One reviewer (St.S., J.H., or D.O.L.) extracted information from the eligible methods guidance and recorded it in a spreadsheet. A second reviewer (St.S., J.H., or D.O.L.) double-checked the extracted information and discussed discrepancies with the first reviewer until they achieved consensus. Because formal expectations for the characteristics and quality of methods guidance are not available, we selected data items through discussion within our team, focusing on potential issues that we had repeatedly observed in our own research practice. We extracted the following information: bibliographic data (journal name, number of authors, and number of pages); terminology used to express guidance (in title, abstract, keywords, and MeSH terms); general document structure (abstract available or missing, abstract structured or not, and formal methods section in full text); transparency (reporting of the expertise of authors, conflicts of interest, the description of how they might have influenced the content of the guidance, and funding sources); methodological topic addressed in the guidance (study design and/or other methodological topic); medical context (such as oncology or public health); project phase (such as planning, analysis, and/or reporting); target audience addressed in the guidance; methods used for guidance development (such as literature review, consensus study, and/or user testing); whether the authors referenced any standards for developing the methods guidance; and methods used for presentation (such as checklists or illustrative examples).
      We created a codebook in which we collected and grouped alternative terms for variables that showed great variation in terminology (i.e., methodological topic, medical context, project phase, target audience, methods for development and presentation). We screened each guidance article for alternative terms and continuously updated the codebook. We identified overarching themes that provided the basis for categorizing and summarizing the characteristics of the included guidance articles. For variables that required the interpretation of a specific text segment (i.e., objectives, conflicts of interest, funding, expertise of authors) we extracted verbatim quotes before we devised categories.
      We characterized and described the guidance articles using absolute and relative frequencies for all categorical variables. Post hoc, we stratified our presentation of variables related to development and presentation by reporting guidelines versus other subtypes of methods guidance to illustrate possible effects of the existing guidance for reporting guidelines [Moher et al., Describing reporting guidelines for health research: a systematic review].

      2.5 Patient and public involvement

      We did not involve patients or members of the public in formulating the research objectives, designing the survey, interpreting the results, or writing the manuscript.

      3. Results

      3.1 Search results

      Of the 14 journals that we screened, five provided a specific section for methods guidance. Overall, we included 105 guidance articles published in 12 journals (Fig. S1). Most articles were published in the Journal of Clinical Epidemiology (28%), The British Medical Journal (13%), and BMC Medical Research Methodology (13%) (Table 1; full dataset in Supplement File 2).
      Table 1. General characteristics of the methods guidance articles (n = 105)

      Journal (Frequency n (%))
       Journal of Clinical Epidemiology: 29 (28%)
       The British Medical Journal: 14 (13%)
       BMC Medical Research Methodology: 14 (13%)
       Statistics in Medicine: 12 (11%)
       Journal of the American Medical Association: 12 (11%)
       International Journal of Epidemiology: 9 (9%)
       BMC Trials: 3 (3%)
       PLOS Medicine: 3 (3%)
       Annals of Internal Medicine: 3 (3%)
       European Journal of Epidemiology: 3 (3%)
       American Journal of Epidemiology: 2 (2%)
       Epidemiology: 1 (1%)
      Single or multiple articles
       Single article: 101 (96%)
       Two parts published as separate articles: 2 (2%, analyzed as four articles)
      Number of pages
       1: 0 (0%)
       2–4: 12 (11%)
       5–9: 35 (33%)
       10–14: 37 (35%)
       15–19: 12 (11%)
       20 or more: 9 (9%)
       Median (interquartile range): 10 (7, 13)
      Number of authors
       1: 4 (4%)
       2–4: 33 (31%)
       5–9: 36 (34%)
       10–14: 17 (16%)
       15–19: 6 (6%)
       20 or more: 9 (9%)
       Median (interquartile range): 6 (3, 11)
      Methods guidance network
       No network mentioned: 78 (74%)
       EQUATOR/reporting guidelines: 13 (12%)
       GRADE working group: 10 (9%)
       STRATOS initiative: 3 (3%)
       Cochrane: 1 (1%)
       Trial Forge: 1 (1%)
      Abstract
       None: 16 (15%)
       Structured: 44 (42%)
       Unstructured: 45 (43%)

      3.2 General characteristics

      The guidance articles focused either on a specific study design (19%, e.g., systematic review), a methodological topic other than study design (31%, e.g., causal modeling), or a combination of both (50%, e.g., effect modification in individual participant data meta-analyses). Most articles (90%) were not specific to a medical area. Some referred to a specific research network (26%), most frequently the Enhancing the QUAlity and Transparency Of health Research (EQUATOR) network for reporting guidelines (12%), the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) working group (9%), and the STRengthening Analytical Thinking for Observational Studies (STRATOS) initiative (3%) (Table 1 and Table S1). Guidance articles frequently addressed analysis (40%), design (34%), or reporting (20%) (more than one study phase possible). Target audiences, if mentioned (73%), were mostly primary researchers (35%), systematic review authors (22%), or statisticians (11%) (more than one audience possible) (Table S2).

      3.3 Terminology

      The most frequent labels used to express guidance were guide/guidance/guideline(s) (77%), followed by recommend/recommendation(s) (19%) and tutorial (19%). Overall, we identified 36 alternative expressions for guidance. Two-thirds of articles expressed guidance (any alternative expression) in the title (66%) but few (17%) provided author keywords expressing guidance. Most articles had MeSH terms (91%) but only 38% included a MeSH term expressing guidance (any alternative expression) (Table S3).

      3.4 Transparency

      Few guidance articles (22%) described the expertise of the authors. Most contained a conflicts of interest section (88%): 53% stated the absence of conflicts of interest and 34% provided information on conflicts. This information included that authors were involved in related work (14%), had previously received funding from the pharmaceutical industry (10%), or had received public funding (10%) (Table 2). None of the articles explained how the stated information might have influenced the content of the guidance (Table 2; verbatim quotes provided in Supplement File 2). Of the guidance articles that contained a funding section (79%), 36% reported funding received specifically for the guidance development. None of the guidance articles reported industry funding (Table 2; verbatim quotes provided in Supplement File 2).
      Table 2. Reporting of the guidance authors' expertise, conflicts of interest, and funding (n = 105)

      Description of the authors' expertise (Frequency n (%))
       Yes: 23 (22%)
       No: 82 (78%)
      Conflict of interest statement
       No conflict of interest statement: 13 (12%)
       Conflict mentioned (a): 36 (34%)
        Author(s) involved in related work: 15 (14%)
        Personal funding or consultancy fees from pharmaceutical industry: 11 (10%)
        Public funding: 11 (10%)
        Member of not-for-profit organization: 6 (6%)
        Editor of journal in which the guidance was published: 6 (6%)
        Employed by pharmaceutical industry: 3 (3%)
       Explicitly no conflicts of interest: 56 (53%)
      If conflicts mentioned, any explanation provided how it might have influenced the content of the guidance
       Yes: 0 (0%)
       No: 36 (100%)
      Type of funding
       No funding section: 22 (21%)
       Funding mentioned (a): 65 (62%)
        Public: 62 (59%)
        Not-for-profit: 23 (22%)
        Industry: 0 (0%)
       Explicitly no funding: 18 (17%)
      If funding mentioned, was the funding for the purpose of guidance development?
       Explicitly for the guidance development: 38 (36%)
       Purpose not mentioned: 22 (21%)
       Explicitly not for the guidance development: 5 (5%)
      If funding mentioned, role of funder specified
       Explicitly no role: 25 (24%)
       Not specified: 40 (38%)
      (a) More than one per article possible.

      3.5 Methods

      Less than half (42%) of the guidance articles reported methods for development. The spectrum of methods used was broad. Approaches included the involvement of different types of stakeholders (28%), different types of systematic reviews of the methodological literature (20%), consensus processes (20%), and reviews of primary studies (14%). Only 9% of the guidance articles included a reference to support the choice of methods, and all referred to the same article by Moher et al., “Guidance for Developers of Health Research Reporting Guidelines”. The reporting and types of methods differed markedly between reporting guidelines (n = 13) and other guidance (n = 92). All reporting guidelines reported methods and involved external stakeholders, typically using a formal consensus process (85%). In contrast, other types of guidance infrequently reported methods (34%), involved external stakeholders (18%), or performed a consensus study (11%) (Table 3). The most frequent methods for presenting the guidance were illustrative case studies (45%), research checklists (34%), and structured overviews of the literature (12%) (Table 4).
      Table 3. Methods used for development of methods guidance (n = 105)
      Columns: All articles (n = 105) | Reporting guidelines (n = 13) | Other methods guidance (n = 92)

      Any methods for developing guidance reported?
       Yes: 44 (42%) | 13 (100%) | 31 (34%)
       No: 61 (58%) | 0 (0%) | 61 (66%)
      Standards for developing methods guidance referenced?
       Yes (a) (all referring to Moher 2010 [Moher et al., Describing reporting guidelines for health research: a systematic review]): 9 (9%) | 8 (62%) | 1 (1%)
       No: 96 (91%) | 5 (38%) | 91 (99%)
      Reported methods for developing guidance (a)
       Stakeholder involvement: 30 (28%) | 13 (100%) | 17 (18%)
        External methodology experts involved in development process: 23 (22%) | 9 (62%) | 14 (15%)
         Involved in consensus study: 14 (13%) | 9 (62%) | 5 (5%)
         Provided feedback at conference workshop: 4 (4%) | 0 (0%) | 4 (4%)
         Provided feedback through online survey: 3 (3%) | 0 (0%) | 3 (3%)
         Tested the guidance: 3 (3%) | 2 (15%) | 1 (1%)
         Were interviewed: 2 (2%) | 0 (0%) | 2 (2%)
         Participated in workshop to develop guidance: 2 (2%) | 0 (0%) | 2 (2%)
         Provided feedback; no method reported: 2 (2%) | 0 (0%) | 2 (2%)
         Invited to project group: 1 (1%) | 1 (8%) | 0 (0%)
        Health research practitioners involved in development process: 19 (18%) | 9 (62%) | 10 (11%)
         Involved in consensus study: 9 (8%) | 9 (62%) | 0 (0%)
         Were interviewed: 4 (4%) | 0 (0%) | 4 (4%)
         Tested the guidance: 4 (4%) | 1 (8%) | 3 (3%)
         Provided feedback at conference workshop: 4 (4%) | 0 (0%) | 4 (4%)
         Participated in online survey: 2 (2%) | 0 (0%) | 2 (2%)
         Provided feedback through web application: 1 (1%) | 0 (0%) | 1 (1%)
         Provided feedback; no method reported: 1 (1%) | 0 (0%) | 1 (1%)
         Invited to project group: 1 (1%) | 1 (8%) | 0 (0%)
         Were observed while performing research: 1 (1%) | 0 (0%) | 1 (1%)
        Other stakeholders (b) involved in development process: 11 (10%) | 4 (31%) | 6 (7%)
         Participated in consensus study: 6 (6%) | 5 (31%) | 1 (1%)
         Tested draft guidance: 2 (2%) | 0 (0%) | 2 (2%)
         Participated in interviews: 2 (2%) | 0 (0%) | 2 (2%)
         Participated in focus groups: 1 (1%) | 0 (0%) | 1 (1%)
         Participated in online survey: 1 (1%) | 0 (0%) | 1 (1%)
       Systematic review of the methodological literature: 21 (20%) | 4 (31%) | 17 (18%)
        Review of methods guidance: 16 (15%) | 4 (31%) | 12 (13%)
        Review of alternative methods and concepts: 4 (4%) | 0 (0%) | 4 (4%)
        Review of qualitative studies investigating practical barriers: 2 (2%) | 0 (0%) | 2 (2%)
        Review of definitions: 2 (2%) | 0 (0%) | 2 (2%)
        Review of meta-research: 1 (1%) | 0 (0%) | 1 (1%)
        Review of lessons learned that health researchers report: 1 (1%) | 0 (0%) | 1 (1%)
        Review of sources of bias: 1 (1%) | 0 (0%) | 1 (1%)
       Consensus process: 21 (20%) | 11 (85%) | 10 (11%)
        Delphi study: 18 (17%) | 11 (85%) | 7 (8%)
        Consensus meeting: 13 (12%) | 9 (69%) | 4 (4%)
        Informal consensus process: 3 (3%) | 0 (0%) | 3 (3%)
       Review of primary studies: 15 (14%) | 4 (31%) | 5 (5%)
        Systematic metaresearch assessing current research practice: 14 (13%) | 4 (31%) | 10 (11%)
        Case series to study current research practice: 1 (1%) | 0 (0%) | 1 (1%)
        Single case study: 1 (1%) | 0 (0%) | 1 (1%)
       Other methods (c): 6 (6%) | 1 (8%) | 5 (5%)
      (a) More than one category per article possible.
      (b) Other stakeholders were: patients and public contributors (n = 5); funders and commissioners (n = 5); journal editors (n = 5); industry representatives (n = 2); policy makers (n = 4); experts in law and ethics (n = 3); regulators (n = 4); insurance representatives (n = 1); health economists (n = 1); health care providers (n = 1).
      (c) Other methods were: simulation study (n = 1); practical testing by developers (4%); public commenting on draft version (n = 1).
      Table 4. Methods used for presentation of guidance articles
      Reported methods for presenting guidance (a), Frequency n (%)
      Columns: All articles (n = 105) | Reporting guidelines (n = 13) | Other methods guidance (n = 92)

      Illustrative examples: 53 (50%) | 3 (23%) | 50 (54%)
       Real case study: 47 (45%) | 3 (23%) | 44 (48%)
       Illustrative simulation study: 10 (10%) | 0 (0%) | 10 (11%)
       Hypothetical case study: 7 (7%) | 0 (0%) | 7 (8%)
       Toy dataset: 1 (1%) | 0 (0%) | 1 (1%)
      Instruments and other practical aids: 50 (48%) | 12 (92%) | 38 (41%)
       Checklist: 36 (34%) | 12 (92%) | 24 (27%)
       Programming code: 5 (5%) | 0 (0%) | 5 (5%)
       Decision tree: 4 (4%) | 0 (0%) | 4 (4%)
       Criteria for deciding if a method should be applied or not: 3 (3%) | 0 (0%) | 3 (3%)
       Software: 3 (3%) | 0 (0%) | 3 (3%)
       Quality assessment tool: 2 (2%) | 0 (0%) | 2 (2%)
       Template for wording: 2 (2%) | 0 (0%) | 2 (2%)
       Conversation guide: 1 (1%) | 0 (0%) | 1 (1%)
      Structured overview of the literature: 13 (12%) | 0 (0%) | 13 (15%)
       Overview of alternative methods: 9 (9%) | 0 (0%) | 9 (10%)
       Overview of methods guidance: 2 (2%) | 0 (0%) | 2 (2%)
       Overview of software: 1 (1%) | 0 (0%) | 1 (1%)
      Specific method for structuring guidance text: 42 (40%) | 11 (85%) | 31 (34%)
       Checklist explained item-by-item: 28 (27%) | 11 (85%) | 17 (18%)
       Step-by-step guide: 10 (10%) | 0 (0%) | 10 (11%)
       Common mistakes and solutions: 3 (3%) | 0 (0%) | 3 (3%)
       Frequently asked questions: 1 (1%) | 0 (0%) | 1 (1%)
       Dos and don'ts: 1 (1%) | 0 (0%) | 1 (1%)
      Clarification of terminology: 7 (7%) | 1 (8%) | 6 (7%)
      Teaching material: 2 (2%) | 0 (0%) | 2 (2%)
      None of the above (only conceptual clarification through text and figures): 2 (2%) | 0 (0%) | 2 (2%)
      (a) More than one category per article possible.

      4. Discussion

We systematically surveyed a sample of methods guidance articles published in 2020. Our study highlights several issues that are likely to impede the uptake, and possibly undermine the trustworthiness, of existing methods guidance.

      4.1 Methods guidance articles are difficult to find

Literature searches are likely to miss a substantial proportion of methods guidance because of inconsistent and unspecific terminology. Some guidance articles can be identified by searching specific journal sections such as “Research Methods and Reporting” in The British Medical Journal or “Tutorial in Biostatistics” in Statistics in Medicine. However, such journal sections are not part of MEDLINE records and therefore cannot be targeted by database queries. Author keywords are searchable, but not all journals provide them, and authors rarely use them to signal methods guidance. At present, no MeSH term for methods guidance is available. Some guidance articles were indexed with related MeSH terms such as “checklist” or the subheading “standards”; these MeSH terms are, however, unspecific and primarily used for nonmethodological topics. The lack of structured abstracts further complicates the search process because structured abstracts are usually clearer, more comprehensive, and easier to search. Using clear titles, keywords, and structured abstracts, and introducing a MeSH term specifically for methods guidance, could therefore improve the findability of methods guidance. Another, more immediate solution to the findability issue is provided by a new searchable database for methods guidance that we are developing (Library of Guidance for Health Scientists, LIGHTS, www.lights.science).
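In the absence of specific indexing, searchers currently have to assemble broad ad hoc queries from heterogeneous terms. The following sketch illustrates the kind of query this implies; the terms and PubMed field tags shown are illustrative assumptions, not a validated search filter:

```python
# Illustrative only: candidate terms one might OR together to retrieve
# methods guidance from MEDLINE/PubMed. This is NOT a validated filter;
# the term list is a hypothetical example.
title_terms = [
    "reporting guideline", "methodological guidance",
    "tutorial", "how to", "practical guide",
]
mesh_terms = ["Checklist"]  # a related but unspecific MeSH term

def build_pubmed_query(title_terms, mesh_terms):
    """Combine candidate terms into a single OR-ed PubMed query string."""
    parts = [f'"{t}"[Title/Abstract]' for t in title_terms]
    parts += [f'"{m}"[MeSH Terms]' for m in mesh_terms]
    return " OR ".join(parts)

query = build_pubmed_query(title_terms, mesh_terms)
print(query)
```

The breadth of such a query, and its reliance on title and abstract wording, illustrates why a dedicated MeSH term or a curated database would be a more reliable route to methods guidance.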

      4.2 The development process is often unclear

In contrast to clinical practice guidelines, for which sophisticated development methods are available [Alonso-Coello et al.], most methods guidance articles do not report (and are probably not based on) any specific development methodology. The lack of methods can affect the credibility of the guidance (e.g., when it is not based on a systematic review or expert consensus) and its usefulness (e.g., when the guidance ignores common mistakes identified through metaresearch or is not tested for clarity and feasibility). In some cases, the article type (e.g., commentary or review) may preclude a formal methods section. Another possible explanation is that some methodologists and journal editors may not perceive the development of methods guidance as a formal research activity that requires a specific methodology. This is at odds with the explicit standards available for reporting guidelines (key steps include needs assessment, literature review, consensus study, and testing) [Moher et al.] and with methods guidance from regulatory organizations such as the FDA and EMA (the key method being publication of a draft for a minimum period so that stakeholders can provide detailed comments) [Boesen et al.].
Our review shows that different methods for developing methods guidance are available and feasible. These methods include systematic reviews of the methodological literature, metaresearch to identify common mistakes, simulation studies to quantify biases, consensus studies to maximize relevance and credibility, and user testing to optimize user-friendliness. Our findings concur with those of McMeekin et al., who analyzed a sample of methodology frameworks and found that the development steps, if reported, tend to be similar at a superficial level (e.g., review of the literature, structured development, testing) [McMeekin et al.]. However, on closer inspection, development methods vary substantially across methods guidance. It is currently unclear how guidance developers should choose among the different types of methodological reviews, metaresearch, simulation studies, consensus studies, and approaches for involving experts, users, and other stakeholders. Different subtypes of methods guidance may require different methods. For instance, a Delphi study may perform well for developing a reporting checklist but is likely less useful for developing a statistical tutorial; metaresearch can be highly valuable for identifying common mistakes but is only feasible once a method has been used in a sufficiently large number of studies. Future research may develop a conceptual framework for methods guidance that clarifies the different subtypes, discusses suitable development methods for each, and elaborates on their strengths and weaknesses.
Similar considerations apply to methods for presenting methods guidance. We identified established standards only for reporting guidelines, which typically use a checklist format with point-by-point explanation and elaboration [Moher et al.]. Our review shows that a spectrum of presentation styles is available, including checklists, step-by-step guides with or without worked examples, and decision algorithms. Further research is required to better understand the strengths and limitations of different presentation styles with respect to their clarity and usefulness from a user perspective. Research in the context of clinical practice guidelines could provide a model for systematically researching and user-testing the presentation of methodological recommendations [Andrews et al.; Yepes-Nuñez et al.].

      4.3 Transparency could be improved

Because most guidance articles lack a specific development methodology to support their credibility, the expertise of the guidance developers becomes an important (and for some articles the only) credibility factor. Most guidance articles, however, do not describe the authors’ expertise. Another potential transparency issue concerns conflicts of interest. In our sample, most articles included a section for conflicts of interest and typically declared none. When conflicts were stated, we often found it difficult to understand their relationship to the guidance. For instance, it was unclear why guidance authors listed previous funding under conflicts of interest or why being a journal editor might constitute a competing interest. The extent to which guidance authors should disclose intellectual conflicts of interest is another open question, considering that, unavoidably, methodologists “are influenced by personal preferences, past experiences, and own technical competence” [Boulesteix et al.]. Future research may clarify potential conflicts of interest in the context of methods guidance and develop corresponding reporting standards. Our concerns complement the findings of Boesen et al., who found that methods guidance from the United States FDA and the EMA discloses neither the members of the development committees nor their conflicts of interest [Boesen et al.].

      4.4 Limitations

Our study has important limitations. First, we made arbitrary choices when defining our sample. Some may disagree with considering reporting guidelines a subtype of methods guidance. However, we aimed to reflect the practical reality in which reporting guidelines can have a substantial influence on methodological decisions. This is especially true considering that reporting guidelines are frequently cited, relatively easy to access, and among the most formal guidance articles available. In addition, many reporting guidelines focus on specific study designs and include important methodological steps to consider [Chaplin et al.; Heus et al.; Howick et al.; Campbell et al.; Monsalves et al.]. Second, we limited our review to articles published in selected journals and acknowledge that other relevant journals could have been included. Third, we did not include sources of information other than journal articles, such as textbooks or guidance from regulatory organizations, which may contain highly relevant methods guidance. Fourth, our sample represents a selection of guidance articles that clearly expressed a guidance intent in their titles or abstracts, a selection that required judgment. Although we are confident that the suggestions for improvement inferred from our sample apply to any type of methods guidance, confirmation would be reassuring. Finally, considering that only 44 articles reported methods, we may have missed infrequently used methods.

      5. Conclusion

In summary, methods guidance published in biomedical journals represents a highly heterogeneous body of literature. Inconsistent terminology and inappropriate indexing in MEDLINE make methods guidance difficult to find. A new database may provide an effective solution. Reporting guidelines stood out as a subtype of guidance with a noticeable degree of standardization (e.g., typically based on a Delphi study and providing a checklist). Other subtypes of methods guidance seldom reported any methods. Our findings call for more research on the typology, terminology, development process, and reporting of methods guidance to maximize its impact on the quality of health research.

      Author contributions

      All authors have made substantial contributions to all of the following: (1) the conception and design of the study or acquisition of data or analysis and interpretation of data, (2) drafting the article or revising it critically for important intellectual content, and (3) final approval of the version to be submitted. All authors had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis. Julian Hirt: Conceptualization, data curation, formal analysis, investigation, methodology, validation, visualization, writing–original draft, and writing–review and editing. Hannah Ewald: Conceptualization, data curation, investigation, methodology, and writing–review and editing. Daeria O. Lawson: Conceptualization, data curation, formal analysis, investigation, methodology, and writing–review and editing. Lars G. Hemkens: Conceptualization, funding acquisition, investigation, methodology, and writing–review and editing. Matthias Briel: Conceptualization, funding acquisition, investigation, methodology, and writing–review and editing. Stefan Schandelmaier: Conceptualization, data curation, formal analysis, funding acquisition, methodology, project administration, validation, visualization, writing–original draft, and writing–review and editing.

      References

  • Ioannidis J.P.A. How to make more published research true. PLoS Med 2014;11:e1001747.
  • Ioannidis J.P.A. Why most published research findings are false. PLoS Med 2005;2:e124.
  • Ioannidis J.P.A. Why most clinical research is not useful. PLoS Med 2016;13:e1002049.
  • Baker M. Is there a reproducibility crisis? A Nature survey lifts the lid on how researchers view the crisis rocking science and what they think will help. Nature 2016;533:452–455.
  • Yordanov Y., Dechartres A., Porcher R., Boutron I., Altman D.G., Ravaud P. Avoidable waste of research related to inadequate methods in clinical trials. BMJ 2015;350:h809.
  • Dechartres A., Trinquart L., Atal I., Moher D., Dickersin K., Boutron I., et al. Evolution of poor reporting and inadequate methods over time in 20 920 randomised controlled trials included in Cochrane reviews: research on research study. BMJ 2017;357:j2490.
  • Ndounga Diakou L.A., Ntoumi F., Ravaud P., Boutron I. Avoidable waste related to inadequate methods and incomplete reporting of interventions: a systematic review of randomized trials performed in Sub-Saharan Africa. Trials 2017;18:291.
  • Ramagopalan S., Skingsley A.P., Handunnetthi L., Klingel M., Magnus D., Pakpoor J., et al. Prevalence of primary outcome changes in clinical trials registered on ClinicalTrials.gov: a cross-sectional study. F1000Res 2014;3:77.
  • Sun X., Briel M., Busse J.W., You J.J., Akl E.A., Mejza F., et al. Credibility of claims of subgroup effects in randomised controlled trials: systematic review. BMJ 2012;344:e1553.
  • Vinkers C.H., Lamberink H.J., Tijdink J.K., Heus P., Bouter L., Glasziou P., et al. The methodological quality of 176,620 randomized controlled trials published between 1966 and 2018 reveals a positive trend but also an urgent need for improvement. PLoS Biol 2021;19:e3001162.
  • SAGE Research Methods. Find resources to answer your research methods and statistics questions. Available at: https://methods.sagepub.com/. Accessed May 13, 2022.
  • Higgins J.P.T., Thomas J., Chandler J., Cumpston M., Li T., Page M.J., et al. Cochrane Handbook for Systematic Reviews of Interventions version 6.2 (updated February 2021). Cochrane; 2021. Available at: www.training.cochrane.org/handbook. Accessed May 13, 2022.
  • ICH. ICH guidelines. Available at: https://www.ich.org/page/ich-guidelines. Accessed May 13, 2022.
  • EMA. Scientific guidelines. (Available at)
  • Office of the Commissioner. Search for FDA guidance documents. (Available at)
  • Pullenayegum E.M., Platt R.W., Barwick M., Feldman B.M., Offringa M., Thabane L. Knowledge translation in biostatistics: a survey of current practices, preferences, and barriers to the dissemination and uptake of new statistical methods. Stat Med 2016;35:805–818.
  • Banno M., Tsujimoto Y., Kataoka Y. The majority of reporting guidelines are not developed with the Delphi method: a systematic review of reporting guidelines. J Clin Epidemiol 2020;124:50–57.
  • Wang X., Chen Y., Yang N., Deng W., Wang Q., Li N., et al. Methodology and reporting quality of reporting guidelines: systematic review. BMC Med Res Methodol 2015;15:74.
  • Moher D., Schulz K.F., Simera I., Altman D.G. Guidance for developers of health research reporting guidelines. PLoS Med 2010;7:e1000217.
  • Moher D., Weeks L., Ocampo M., Seely D., Sampson M., Altman D.G., et al. Describing reporting guidelines for health research: a systematic review. J Clin Epidemiol 2011;64:718–742.
  • Bennett C., Khangura S., Brehaut J.C., Graham I.D., Moher D., Potter B.K., et al. Reporting guidelines for survey research: an analysis of published guidance and reporting practices. PLoS Med 2010;8:e1001069.
  • McMeekin N., Wu O., Germeni E., Briggs A. How methodological frameworks are being developed: evidence from a scoping review. BMC Med Res Methodol 2020;20:173.
  • Boesen K., Gøtzsche P.C., Ioannidis J.P.A. EMA and FDA psychiatric drug trial guidelines: assessment of guideline development and trial design recommendations. Epidemiol Psychiatr Sci 2021;30:e35.
  • ISSG search filter resource. (Available at)
  • Alonso-Coello P., Schunemann H.J., Moberg J., Brignardello-Petersen R., Akl E.A., Davoli M., et al. GRADE Evidence to Decision (EtD) frameworks: a systematic and transparent approach to making well informed healthcare choices. 1: Introduction. BMJ 2016;353:i2016.
  • Andrews J., Guyatt G., Oxman A.D., Alderson P., Dahm P., Falck-Ytter Y., et al. GRADE guidelines: 14. Going from evidence to recommendations: the significance and presentation of recommendations. J Clin Epidemiol 2013;66:719–725.
  • Yepes-Nuñez J.J., Li S.-A., Guyatt G., Jack S.M., Brozek J.L., Beyene J., et al. Development of the summary of findings table for network meta-analysis. J Clin Epidemiol 2019;115:1–13.
  • Boulesteix A.-L., Lauer S., Eugster M.J.A. A plea for neutral comparison studies in computational sciences. PLoS One 2013;8:e61562.
  • Chaplin M., Kirkham J.J., Dwan K., Sloan D.J., Davies G., Jorgensen A.L. STrengthening the reporting of Pharmacogenetic studies: development of the STROPS guideline. PLoS Med 2020;17:e1003344.
  • Heus P., Reitsma J.B., Collins G.S., Damen J.A.A.G., Scholten R.J.P.M., Altman D.G., et al. Transparent reporting of multivariable prediction models in journal and conference abstracts: TRIPOD for abstracts. Ann Intern Med 2020 (online ahead of print). https://doi.org/10.7326/M20-0193. Accessed May 13, 2022.
  • Howick J., Webster R.K., Rees J.L., Turner R., Macdonald H., Price A., et al. TIDieR-Placebo: a guide and checklist for reporting placebo and sham controls. PLoS Med 2020;17:e1003294.
  • Campbell M., McKenzie J.E., Sowden A., Katikireddi S.V., Brennan S.E., Ellis S., et al. Synthesis without meta-analysis (SWiM) in systematic reviews: reporting guideline. BMJ 2020;368:l6890.
  • Monsalves M.J., Bangdiwala A.S., Thabane A., Bangdiwala S.I. LEVEL (Logical Explanations & Visualizations of Estimates in Linear mixed models): recommendations for reporting multilevel data and analyses. BMC Med Res Methodol 2020;20:3.