GRADE Series (Sharon Straus, Rachel Churchill and Sasha Shepperd, Guest Editors) | Volume 64, Issue 12, p1277-1282, December 2011

GRADE guidelines: 5. Rating the quality of evidence—publication bias

      Abstract

      In the GRADE approach, randomized trials start as high-quality evidence and observational studies as low-quality evidence, but both can be rated down if a body of evidence is associated with a high risk of publication bias. Even when individual studies included in best-evidence summaries have a low risk of bias, publication bias can result in substantial overestimates of effect. Authors should suspect publication bias when available evidence comes from a number of small studies, most of which have been commercially funded. A number of approaches based on examination of the pattern of data are available to help assess publication bias. The most popular of these is the funnel plot; all, however, have substantial limitations. Publication bias is likely frequent, and caution in the face of early results, particularly with small sample size and number of events, is warranted.


      1. Introduction

      Key points
      • Empirical evidence shows that, in general, studies with statistically significant results are more likely to be published than studies without statistically significant results (“negative studies”).
      • Systematic reviews performed early, when only a few initial studies are available, will overestimate effects when “negative” studies face delayed publication. Early positive studies, particularly if small in size, are suspect.
      • Recent revelations suggest that withholding of “negative” results by industry sponsors is common. Authors of systematic reviews should suspect publication bias when studies are uniformly small, particularly when industry sponsored.
      • Empirical examination of patterns of results (e.g., funnel plots) may suggest publication bias but should be interpreted with caution.
      In four previous articles in our series describing the GRADE system of rating the quality of evidence and grading the strength of recommendations, we have described the process of framing the question, introduced GRADE’s approach to rating the quality of evidence, and dealt with the possibility of rating down quality for study limitations (risk of bias). This fifth article deals with another of the five categories of reasons for rating down the quality of evidence: publication bias. Our exposition relies to some extent on prior work addressing issues related to publication bias [Montori et al.]; we did not conduct a systematic review of the literature relating to publication bias.
      Even if individual studies are perfectly designed and executed, syntheses of studies may provide biased estimates because systematic review authors or guideline developers fail to identify studies. In theory, the unidentified studies may yield systematically larger or smaller estimates of beneficial effects than those identified. In practice, the problem more often lies with “negative” studies, the omission of which leads to an upward bias in the estimate of effect. Failure to identify studies is typically a result of studies remaining unpublished or obscurely published (e.g., as abstracts or theses); thus, methodologists have labeled the phenomenon “publication bias.”
      An informative systematic review assessed the extent to which publication of a cohort of clinical trials is influenced by the statistical significance, perceived importance, or direction of their results [Hopewell et al.]. It found five studies that investigated these associations in a cohort of registered clinical trials. Trials with positive findings were more likely to be published than trials with negative or null findings (odds ratio: 3.90; 95% confidence interval [CI]: 2.68, 5.68). This corresponds to a risk ratio of 1.78 (95% CI: 1.58, 1.95), assuming that 41% of negative trials are published (the median among the included studies; range 11–85%). In absolute terms, this means that if 41% of negative trials are published, we would expect 73% of positive trials to be published. Two studies assessed time to publication: trials with positive findings tended to be published after 4–5 years, compared with 6–8 years for trials with negative findings. Three studies found no statistically significant association between sample size and publication. One study found no statistically significant association between funding mechanism, investigator rank, or sex and publication.
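      The odds-ratio to risk-ratio conversion above can be reproduced with a few lines of arithmetic. This is an illustrative sketch using the figures reported above (OR of 3.90, 41% of negative trials published); the function name is ours.

```python
# Reproduces the arithmetic above: an odds ratio of 3.90 for publication of
# positive vs. negative trials, with 41% of negative trials published.

def or_to_rr(odds_ratio, baseline_risk):
    """Convert an odds ratio to a risk ratio at a given baseline risk."""
    return odds_ratio / (1 - baseline_risk + baseline_risk * odds_ratio)

p_negative = 0.41                  # proportion of negative trials published
rr = or_to_rr(3.90, p_negative)    # risk ratio of publication
p_positive = rr * p_negative       # implied proportion of positive trials published

print(round(rr, 2))                # 1.78, as reported
print(round(p_positive * 100))     # 73 (percent), as reported
```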

      2. Publication bias vs. selective reporting bias

      In some classification systems, reporting bias has two subcategories: selective outcome reporting, with which we dealt in the previous article in the series, and publication bias. However, all the sources of bias that we have considered under study limitations, including selective outcome reporting, can be addressed in single studies. In contrast, when an entire study remains unreported and reporting is related to the size of the effect (publication bias), one can assess the likelihood of publication bias only by looking at a group of studies [Hopewell et al.; Dickersin et al.; Stern and Simes; Bardy; Egger and Smith; Song et al.]. Currently, we follow the Cochrane approach and consider selective reporting bias as an issue of risk of bias (study limitations). This issue is currently under review by the Cochrane Collaboration, and both Cochrane and GRADE may revise this in the future.

      3. Variations in publication bias

      The results of a systematic review will be biased if the sample of studies included is unrepresentative, whether or not the omitted studies are published. Thus, biased conclusions can result from an early review that omits studies with delayed publication, a phenomenon sometimes termed “lag bias” [Hopewell et al.]. Either because authors do not submit studies with what they perceive as uninteresting results to prominent journals or because of repeated rejection at such journals, a study may end up published in an obscure journal not indexed in major databases and not identified in a less-than-comprehensive search. Authors from non-English speaking countries may submit their negative studies to local journals not published in English [Egger et al.; Juni et al.]; these will inevitably be missed by any review that restricts itself to English-language journals. Negative studies may be published in some form (theses, book chapters, compendia of meeting abstracts), often referred to as “gray literature,” that tends to be omitted from systematic reviews lacking comprehensive searching [Hopewell et al.].
      With each of these variations of publication bias, there is a risk of overestimating the size of an effect. However, the importance of unpublished studies, non-English language publications, and gray literature is difficult to predict for individual systematic reviews.
      One may also encounter a mirror image of the usual publication bias: a study may be published more than once, with different authors and changes in presentation that make the duplication difficult to identify, potentially leading to double counting of results within systematic reviews [Rennie; Tramer et al.; Melander et al.; von Elm et al.].
      Meta-analyses of N-acetylcysteine for preventing contrast-induced nephropathy demonstrate a number of these phenomena [Vaitkus and Brar]. Randomized trials reported only in abstract form in major cardiology journals showed smaller effects than fully published trials. Of the published trials, the earlier studies showed larger effects than the later ones. Studies with positive results were published in journals with higher impact factors than studies with negative conclusions. Systematic reviews proved vulnerable to these factors: they included published studies more often than abstracts and conveyed inflated estimates of treatment effect. Table 1 presents a number of ways in which selective publication or nonpublication can bias the results of a best-evidence summary, classified according to the phase of the publication process.
      Table 1. Publication bias: phases of research publication and actions contributing to or resulting in bias
      • Preliminary and pilot studies: small studies, more likely to be “negative” (e.g., those with discarded or failed hypotheses), remain unpublished; companies classify some as proprietary information
      • Report completion: authors decide that reporting a “negative” study is uninteresting and do not invest the time and effort required for submission
      • Journal selection: authors decide to submit the “negative” report to a nonindexed, non-English, or limited-circulation journal
      • Editorial consideration: editor decides that the “negative” study does not warrant peer review and rejects the manuscript
      • Peer review: peer reviewers conclude that the “negative” study does not contribute to the field and recommend rejecting the manuscript; the author gives up or moves to a lower impact journal; publication is delayed
      • Author revision and resubmission: author of the rejected manuscript decides to forgo submission of the “negative” study or to submit it later to another journal (see “Journal selection” above)
      • Report publication: journal delays publication of the “negative” study; proprietary interests lead to the report being submitted to, and accepted by, different journals

      4. Bigger dangers of publication bias in reviews with small studies

      The risk of publication bias may be higher for reviews that are based on small randomized controlled trials (RCTs) [Begg and Berlin; Egger et al.; Ioannidis]. RCTs including large numbers of patients are less likely to remain unpublished or ignored and tend to provide more precise estimates of the treatment effect, whether positive or negative (i.e., showing or not showing a statistically significant difference between intervention and control groups). Discrepancies between results of meta-analyses of small studies and subsequent large trials may occur as often as 20% of the time [Cappelleri et al.], and publication bias may be a major contributor to these discrepancies [Sutton et al.].

      5. Large studies are not immune

      Although large studies are more likely to be published, sponsors who are displeased with the results may delay or even suppress publication [Melander et al.; Ioannidis; Lexchin et al.]. Furthermore, sponsors may publish, in journals with limited readership, studies whose significance warrants publication in the highest profile medical journals. They may also succeed in obscuring results using strategies that are scientifically unsound. The following example illustrates all these phenomena.
      The Salmeterol Multicentre Asthma Research Trial (SMART) was a randomized trial that examined the impact of salmeterol or placebo on a composite outcome of respiratory-related deaths and life-threatening experiences. In September 2002, after a data monitoring committee review of 25,858 randomized patients showed a nearly significant increase in the primary outcome in the salmeterol group, the sponsor, GlaxoSmithKline (GSK), terminated the study. Deviating from the original protocol, GSK submitted to the Food and Drug Administration (FDA) an analysis that included events in the 6 months after trial termination, an analysis that diminished the apparent dangers associated with salmeterol. The FDA eventually obtained the correct analysis [Lurie and Wolfe]. The correct SMART analysis was finally published in January 2006 in a specialty journal, CHEST [Nelson et al.].
      In another, more recent example, Schering-Plough delayed for almost 2 years the publication of a study of more than 700 patients that investigated a combination drug, ezetimibe plus simvastatin vs. simvastatin alone, for improving lipid profiles and preventing atherosclerosis [Mitka]. A review of submissions to the FDA in 2001 and 2002 found that many trials were still not published 5 years after FDA approval [Rising et al.]. These examples of lag time bias demonstrate the need to avoid excessive enthusiasm about early findings with new agents.

      6. When to rate down for publication bias—industry influence

      In general, review authors and guideline developers should consider rating down for likelihood of publication bias when the evidence consists of a number of small studies [Begg and Berlin; Egger et al.; Ioannidis; Cappelleri et al.; Sutton et al.]. The inclination to rate down for publication bias should increase if most of those small studies are industry sponsored or likely to be industry sponsored (or if the investigators share another conflict of interest) [Melander et al.; Lexchin et al.; Turner et al.].
      An investigation of 74 antidepressant trials with a mean sample size of fewer than 200 patients submitted to the FDA illustrates the paradigmatic situation [Turner et al.]. Of the 38 studies viewed as positive by the FDA, 37 were published. Of the 36 studies viewed as negative by the FDA, 14 were published. Publication bias of this magnitude can seriously bias effect estimates.
      Additional criteria for suspicion of publication bias include a relatively recent RCT or set of RCTs addressing a novel therapy and systematic review authors’ failure to conduct a comprehensive search (including a search for unpublished studies).

      7. Using study results to estimate the likelihood of publication bias

      Another criterion for publication bias is the pattern of study results. Suspicion may increase if visual inspection demonstrates an asymmetrical (Fig. 1b) rather than a symmetrical (Fig. 1a) funnel plot or if statistical tests of asymmetry are positive [Begg and Berlin; Begg and Mazumdar]. Although funnel plots may be helpful, review authors and guideline developers should bear in mind that visual assessment of funnel plots is distressingly prone to error [Terrin et al.; Lau et al.]. Enhancements of funnel plots may (or may not) help to improve the reproducibility and validity associated with their use [Peters et al.].
      Fig. 1(a) Funnel plot. The circles represent the point estimates of the trials. The pattern of distribution resembles an inverted funnel. Larger studies tend to be closer to the pooled estimate (the dashed line). In this case, the effect sizes of the smaller studies are more or less symmetrically distributed around the pooled estimate. (b) Publication bias. This funnel plot shows that the smaller studies are not symmetrically distributed around either the point estimate (dominated by the larger trials) or the results of the larger trials themselves. The trials expected in the bottom right quadrant are missing. One possible explanation for this set of results is publication bias—an overestimate of the treatment effect relative to the underlying truth.
      Statisticians have developed quantitative methods that rely on the same principles [Begg and Berlin; Begg and Mazumdar]. Other statisticians have, however, questioned their appropriateness [Song et al.; Irwig et al.; Stuck et al.; Seagroatt and Stratton].
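      As an illustration of such a quantitative method, the sketch below implements the core of Egger's regression test: regress the standardized effect (effect/SE) on precision (1/SE); an intercept well away from zero suggests small-study asymmetry. The data are hypothetical log risk ratios invented for illustration, constructed so that dropping the small “negative” studies, as publication bias would, produces asymmetry.

```python
# Minimal sketch of Egger's regression test for funnel plot asymmetry.
# Hypothetical log-risk-ratio effects: true effect 0.4, deviations of +/- 1 SE.

def ols(x, y):
    """Ordinary least squares with one predictor; returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx, slope

def egger_intercept(effects, ses):
    """Regress standardized effect on precision; intercept far from 0 suggests asymmetry."""
    z = [e / s for e, s in zip(effects, ses)]
    precision = [1 / s for s in ses]
    intercept, _ = ols(precision, z)
    return intercept

ses = [0.1, 0.1, 0.2, 0.2, 0.3, 0.3, 0.4, 0.4]
effects = [0.5, 0.3, 0.6, 0.2, 0.7, 0.1, 0.8, 0.0]   # symmetric around 0.4

sym = egger_intercept(effects, ses)   # essentially 0: no asymmetry
# Drop the small "negative" studies (indices 5 and 7):
biased = egger_intercept(effects[:5] + [effects[6]], ses[:5] + [ses[6]])

print(round(sym, 3), round(biased, 2))
```

      Note that, as the text cautions, a nonzero intercept is only suggestive: true heterogeneity or greater limitations in small studies can produce the same pattern.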
      Furthermore, systematic review and guideline authors should bear in mind that even if they find convincing evidence of asymmetry, publication bias is not the only explanation. For instance, if smaller studies suffer from greater study limitations, they may yield biased overestimates of effects. Another explanation would be that, because of a more restrictive (and thus responsive) population, or a more careful administration of the intervention, the effect may actually be larger in the small studies.
      A second set of tests, referred to as “trim and fill,” tries to impute missing information and address its impact. Such tests begin by removing small “positive” studies that do not have a “negative” study counterpart. This leaves a symmetric funnel plot that allows calculation of a putative true effect. The investigators then replace the “positive” studies they have removed and add hypothetical studies that mirror these “positive” studies, creating a symmetrical funnel plot that retains the new pooled effect estimate [Sutton et al.]. The same alternative explanations for asymmetry that we have noted for funnel plots apply here, and the imputation of missing studies represents a daring assumption that would leave many uncomfortable.
      Another set of tests estimates whether there are differential chances of publication based on the level of statistical significance [Hedges and Vevea; Vevea and Hedges]. These tests are well established in the educational and psychology literature but, probably because of their computational difficulty and complex assumptions, are uncommonly used in the medical sciences.
      Finally, a set of tests examines whether evidence changes over time. Recursive cumulative meta-analysis [Ioannidis et al.] performs a meta-analysis at the end of each year for trials ordered chronologically and notes changes in the summary effect. Continuously diminishing effects strongly suggest time lag bias. Another test examines whether the number of statistically significant results is larger than would be expected under plausible assumptions [Pan et al.].
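      Cumulative meta-analysis can be sketched in a few lines: pool trials chronologically with fixed-effect inverse-variance weights and watch the running estimate. The trials below are hypothetical, constructed to show the continuously diminishing effect that suggests time lag bias.

```python
# Sketch of cumulative meta-analysis with hypothetical (year, effect, SE) trials;
# effects are log risk ratios. A steadily shrinking cumulative estimate is the
# pattern that suggests time lag bias.

trials = [(1995, 0.9, 0.40), (1996, 0.7, 0.35), (1998, 0.5, 0.30),
          (2000, 0.3, 0.20), (2003, 0.1, 0.10)]

def pooled(subset):
    """Fixed-effect inverse-variance pooled estimate."""
    weights = [1 / se ** 2 for _, _, se in subset]
    return sum(w * eff for w, (_, eff, _) in zip(weights, subset)) / sum(weights)

cumulative = [pooled(trials[:k]) for k in range(1, len(trials) + 1)]
for (year, _, _), estimate in zip(trials, cumulative):
    print(year, round(estimate, 2))
```

      Here each successive pooled estimate is smaller than the last; recursive cumulative meta-analysis flags exactly this kind of monotone decline.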
      In summary, each of the approaches to using available data to provide insight into the likelihood of publication bias may be useful but has limitations. Concordant results of using more than one approach may strengthen inferences regarding publication bias.
      More compelling than any of these theoretical exercises is authors’ success in obtaining the results of some unpublished studies and demonstrating that the published and unpublished data show different results. In these circumstances, the possibility of publication bias looms large. For instance, a systematic review found that including unpublished studies of the use of quinine for the treatment of leg cramps decreased the estimated effect size by a factor of two [Man-Son-Hing et al.]. Unfortunately, obtaining the unpublished studies is not easy (although reliance on FDA submissions [or submissions to other regulatory agencies], as demonstrated in a number of examples we cited, can be very effective). On the other hand, reassurance may come from a systematic review that has succeeded in gaining industry cooperation and states that all trials have been revealed [Cranney et al.].
      Prospective registration of all RCTs at inception, before their results become available, enables review authors (and those using systematic reviews) to know when relevant trials have been conducted so that they can ask the responsible investigators for the relevant study data [DeAngelis et al.; Gulmezoglu et al.]. Mandatory registration of RCTs may be the only reliable method of addressing publication bias, and it is becoming increasingly common [Laine et al.]. Consequently, searching clinical trial registers is becoming increasingly valuable and should be considered by review authors and those using systematic reviews when assessing the risk of publication bias. There is currently no initiative for registration of observational studies, leaving them, for the foreseeable future, open to publication bias.

      8. Publication bias in observational studies

      The risk of publication bias is probably larger for observational studies than for RCTs [Dickersin et al.; Lau et al.], particularly for small observational studies and for studies conducted on data collected automatically (e.g., in an electronic medical record or a diabetes registry) or collected for a previous study. In these instances, it is difficult for the reviewer to know whether the observational studies that appear in the literature represent all or only a fraction of the studies conducted, and whether the analyses within them represent all or only a fraction of those performed. In such cases, reviewers may consider the risk of publication bias substantial [Easterbrook et al.; Altman].

      9. Rating down for publication bias—an example

      A systematic review of flavonoids in patients with hemorrhoids provides an example of a body of evidence for which rating down for publication bias is likely appropriate [Alonso-Coello et al.]. All trials, which ranged in size from 40 to 234 patients (most around 100), were industry sponsored. Furthermore, the funnel plot suggests the possibility of publication bias (Fig. 2).
      Fig. 2. Funnel plot of studies of flavonoids for ameliorating symptoms in patients with hemorrhoids [Alonso-Coello et al.]. RR, risk ratio.

      10. Acknowledging the difficulties in assessing the likelihood of publication bias

      Unfortunately, it is very difficult to be confident that publication bias is absent, and almost equally difficult to know where to place the threshold and rate down for its likely presence. Recognizing these challenges, the terms GRADE suggests using in GRADE evidence profiles for publication bias are “undetected” and “strongly suspected.” Acknowledging the uncertainty, GRADE suggests rating down a maximum of one level (rather than two) for suspicion of publication bias. Nevertheless, the examples cited herein suggest that publication bias is likely frequent, particularly in industry-funded studies. This suggests the wisdom of caution in the face of early results, particularly with small sample size and number of events.

      References

        • Montori V.
        • Ioannidis J.
        • Guyatt G.
        Reporting bias.
        in: Guyatt G. Users’ guides to the medical literature: a manual for evidence-based clinical practice. McGraw-Hill, New York, NY2008
        • Hopewell S.
        • Louden K.
        • Clarke M.
        • Oxman D.
        • Dickersin K.
        Publication bias in clinical trials due to statistical significance or direction of trial results.
        Cochrane Database Syst Rev. 2009; (MR000006)
        • Dickersin K.
        • Min Y.I.
        • Meinert C.L.
        Factors influencing publication of research results. Follow-up of applications submitted to two institutional review boards.
        JAMA. 1992; 267: 374-378
        • Stern J.M.
        • Simes R.J.
        Publication bias: evidence of delayed publication in a cohort study of clinical research projects.
        BMJ. 1997; 315: 640-645
        • Bardy A.H.
        Bias in reporting clinical trials.
        Br J Clin Pharmacol. 1998; 46: 147-150
        • Egger M.
        • Smith G.D.
        Bias in location and selection of studies.
        BMJ. 1998; 316: 61-66
        • Song F.
        • Eastwood A.
        • Gilbody S.
        • Duley L.
        • Sutton A.
        Publication and related biases.
        Health Technol Assess. 2000; 4: 1-115
        • Hopewell S.
        • Clarke M.
        • Stewart L.
        • Tierney J.
        Time to publication for results of clinical trials.
        Cochrane Database Syst Rev. 2008; (MR000011)
        • Egger M.
        • Zellweger-Zahner T.
        • Schneider M.
        • Junker C.
        • Lengeler C.
        • Antes G.
        Language bias in randomised controlled trials published in English and German.
        Lancet. 1997; 350: 326-329
        • Juni P.
        • Hollenstein F.
        • Sterne J.
        • Bartlett C.
        • Egger M.
        Direction and impact of language bias in meta-analyses of controlled trials: empirical study.
        Int J Epidemiol. 2002; 31: 115-123
        • Hopewell S.
        • McDonald S.
        • Clarke M.
        • Egger M.
        Grey literature in meta-analyses of randomized trials of health care interventions.
        Cochrane Database Syst Rev. 2007; (MR000010)
        • Rennie D.
        Fair conduct and fair reporting of clinical trials.
        JAMA. 1999; 282: 1766-1768
        • Tramer M.R.
        • Reynolds D.
        • Moore R.
        • McQuay H.
        Impact of covert duplicate publication on meta-analysis: a case study.
        BMJ. 1997; 315: 635-640
        • Melander H.
        • Ahlqvist-Rastad J.
        • Meijer G.
        • Beermann B.
        Evidence b(i)ased medicine—selective reporting from studies sponsored by pharmaceutical industry: review of studies in new drug applications.
        BMJ. 2003; 326: 1171-1173
        • von Elm E.
        • et al.
        Different patterns of duplicate publication: an analysis of articles used in systematic reviews.
        JAMA. 2004; 291: 974-980
        • Vaitkus P.T.
        • Brar C.
        N-acetylcysteine in the prevention of contrast-induced nephropathy: publication bias perpetuated by meta-analyses.
        Am Heart J. 2007; 153: 275-280
        • Begg C.B.
        • Berlin J.A.
        Publication bias and dissemination of clinical research.
        J Natl Cancer Inst. 1989; 81: 107-115
        • Egger M.
        • et al.
        Bias in meta-analysis detected by a simple, graphical test.
        BMJ. 1997; 315: 629-634
        • Ioannidis J.P.
        Why most published research findings are false.
        PLoS Med. 2005; 2: e124
        • Cappelleri J.C.
        • et al.
        Large trials vs meta-analysis of smaller trials: how do their results compare?.
        JAMA. 1996; 276: 1332-1338
        • Sutton A.J.
        • et al.
        Empirical assessment of effect of publication bias on meta-analyses.
        BMJ. 2000; 320: 1574-1577
        • Ioannidis J.P.
        Effect of the statistical significance of results on the time to completion and publication of randomized efficacy trials.
        JAMA. 1998; 279: 281-286
        • Lexchin J.
        • et al.
        Pharmaceutical industry sponsorship and research outcome and quality: systematic review.
        BMJ. 2003; 326: 1167-1170
        • Lurie P.
        • Wolfe S.
        Misleading data analyses in salmeterol (SMART) study.
        Lancet. 2005; 366: 1261-1262
        • Nelson H.S.
        • et al.
        The Salmeterol Multicenter Asthma Research Trial: a comparison of usual pharmacotherapy for asthma or usual pharmacotherapy plus salmeterol.
        Chest. 2006; 129: 15-26
        • Mitka M.
        Controversies surround heart drug study: questions about vytorin and trial sponsors’ conduct.
        JAMA. 2008; 299: 885-887
        • Rising K.
        • Bacchetti P.
        • Bero L.
        Reporting bias in drug trials submitted to the Food and Drug Administration: review of publication and presentation.
        PLoS Med. 2008; 5 (discussion e217): e217
        • Turner E.H.
        • et al.
        Selective publication of antidepressant trials and its influence on apparent efficacy.
        N Engl J Med. 2008; 358: 252-260
        • Begg C.
        • Berlin J.
        Publication bias: a problem in interpreting medical data.
        J R Statist Soc A. 1988; 151: 419-463
        • Begg C.B.
        • Mazumdar M.
        Operating characteristics of a rank correlation test for publication bias.
        Biometrics. 1994; 50: 1088-1101
        • Terrin N.
        • Schmid C.H.
        • Lau J.
        In an empirical evaluation of the funnel plot, researchers could not visually identify publication bias.
        J Clin Epidemiol. 2005; 58: 894-901
        • Lau J.
        • et al.
        The case of the misleading funnel plot.
        BMJ. 2006; 333: 597-600
        • Peters J.L.
        • et al.
        Contour-enhanced meta-analysis funnel plots help distinguish publication bias from other causes of asymmetry.
        J Clin Epidemiol. 2008; 61: 991-996
        • Irwig L.
        • Macaskill P.
        • Berry G.
        • Glasziou P.
        Bias in meta-analysis detected by a simple, graphical test. Graphical test is itself biased.
        BMJ. 1998; 316 (discussion 470–471): 470
        • Stuck A.
        • Rubenstein L.
        • Wieland D.
        Bias in meta-analysis detected by a simple, graphical test. Asymmetry detected in funnel plot was probably due to true heterogeneity.
        BMJ. 1998; 316: 469
        • Seagroatt V.
        • Stratton I.
        Bias in meta-analysis detected by a simple, graphical test. Test had 10% false positive rate.
        BMJ. 1998; 316 (discussion 470–471): 470
        • Hedges L.
        • Vevea J.
        Estimating effect size under publication bias: small sample properties and robustness of a random effects selection model.
        J Educ Behav Stat. 1996; 21: 299-333
        • Vevea J.
        • Hedges L.
        A general linear model for estimating effect size in the presence of publication bias.
        Psychometrika. 1995; 60: 419-435
        • Ioannidis J.P.
        • Contopoulos-Ioannidis D.G.
        • Lau J.
        Recursive cumulative meta-analysis: a diagnostic for the evolution of total randomized evidence from group and individual patient data.
        J Clin Epidemiol. 1999; 52: 281-291
        • Pan Z.
        • Trikalinos T.
        • Kavvoura F.
        • Lau J.
        • Ioannidis J.
        Local literature bias in genetic epidemiology: an empirical evaluation of the Chinese literature.
        PLoS Med. 2005; 2: e334
        • Man-Son-Hing M.
        • Wells G.
        • Lau A.
        Quinine for nocturnal leg cramps: a meta-analysis including unpublished data.
        J Gen Intern Med. 1998; 13: 600-606
        • Cranney A.
        • Wells G.
        • Willan A.
        • Griffith L.
        • Zytaruk N.
        • Robinson V.
        • et al.
        Meta-analyses of therapies for postmenopausal osteoporosis. II. Meta-analysis of alendronate for the treatment of postmenopausal women.
        Endocr Rev. 2002; 23: 508-516
        • DeAngelis C.D.
        • Drazen J.
        • Frizelle F.
        • Haug C.
        • Hoey J.
        • Horton R.
        • et al.
        Clinical trial registration: a statement from the International Committee of Medical Journal Editors.
        JAMA. 2004; 292: 1363-1364
        • Gulmezoglu A.M.
        • Pang T.
        • Horton R.
        • Dickersin K.
        WHO facilitates international collaboration in setting standards for clinical trial registration.
        Lancet. 2005; 365: 1829-1831
        • Laine C.
        • Horton R.
        • DeAngelis C.D.
        • Drazen J.M.
        • Frizelle F.A.
        • Godlee F.
        • et al.
        Clinical trial registration—looking back and moving ahead.
        N Engl J Med. 2007; 356: 2734-2736
        • Easterbrook P.J.
        • Gopalan R.
        • Berlin J.
        • Matthews D.
        Publication bias in clinical research.
        Lancet. 1991; 337: 867-872
        • Altman D.G.
        Systematic reviews of evaluations of prognostic variables.
        BMJ. 2001; 323: 224-228
        • Alonso-Coello P.
        • Zhou Q.
        • Martinez-Zapata M.
        • Mills E.
        • Heels-Ansdell D.
        • Johansen J.
        • Guyatt G.
        Meta-analysis of flavonoids for the treatment of haemorrhoids.
        Br J Surg. 2006; 93: 909-920