That confounded P-value revisited

Hooper [1] defends using P-values to answer the question, “do we think there is an effect at all?” But what advantage is there in viewing measurable phenomena as a dichotomy? The P-value’s role in significance testing only fosters this unfortunate dichotomous thinking. Quantitative thinking is preferable [2]. Although one could argue that a zero effect is qualitatively different from other values, one cannot distinguish zero from values close to it. It makes far more sense to consider zero on an equal footing with all other possible effect values. The question for the investigator ought to be “what is the best estimate of effect given the data in hand?” [3].
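As a rough illustration of the contrast, the sketch below uses hypothetical cohort counts (not data from any of the cited studies) and a standard large-sample Wald approximation to compute a risk ratio, its 95% confidence interval, and the corresponding two-sided P-value. The estimate and interval speak to “what is the best estimate of effect given the data in hand,” whereas the P-value alone invites only a verdict about the null value of 1.

```python
# Minimal sketch: quantitative estimation vs. a dichotomous significance verdict.
# Counts are hypothetical; the Wald/normal approximation is one common approach.
import math

# Hypothetical 2x2 cohort data: exposed (a cases of n1), unexposed (b cases of n0)
a, n1 = 30, 200
b, n0 = 20, 200

rr = (a / n1) / (b / n0)                         # risk ratio point estimate
se_log_rr = math.sqrt(1/a - 1/n1 + 1/b - 1/n0)   # SE of log(RR), large-sample approx.
z = math.log(rr) / se_log_rr                     # Wald statistic against RR = 1
p = math.erfc(abs(z) / math.sqrt(2))             # two-sided P-value from normal tail
ci_lo = math.exp(math.log(rr) - 1.96 * se_log_rr)  # 95% CI lower bound
ci_hi = math.exp(math.log(rr) + 1.96 * se_log_rr)  # 95% CI upper bound

print(f"RR = {rr:.2f}, 95% CI {ci_lo:.2f}-{ci_hi:.2f}, P = {p:.3f}")
# Output: RR = 1.50, 95% CI 0.88-2.55, P = 0.134
# A significance test would report only "not significant"; the estimate and
# interval show a possibly meaningful effect measured with limited precision.
```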

      References

1. Hooper R. P-values are misunderstood, but do not confound. J Clin Epidemiol. 2011;64:1047 (in this issue).
2. Stang A, Poole C, Kuss O. The ongoing tyranny of statistical significance testing in biomedical research. Eur J Epidemiol. 2010;25:225-230.
3. Oakes MW. Statistical inference. Chichester, UK: Wiley; 1986.
4. Lang JM, Rothman KJ, Cann CI. That confounded P-value. Epidemiology. 1998;9:7-8.
5. Lash TL. Heuristic thinking and inference from observational epidemiology. Epidemiology. 2007;18:67-72.
6. Lash TL, Fox MP, Fink AK. Applying quantitative bias analysis to epidemiologic data. Dordrecht, The Netherlands: Springer; 2009.
7. Poole C. Low P-values or narrow confidence intervals: which are more durable? Epidemiology. 2001;12:291-294.
