Development of the Review Quality Instrument (RQI) for Assessing Peer Reviews of Manuscripts


      Research on the value of peer review is limited by the lack of a validated instrument for measuring the quality of reviews. The aim of this study was to develop a simple, reliable, and valid scale for use in studies of peer review. A Review Quality Instrument (RQI) was devised and tested; it assesses the extent to which a reviewer has commented on five aspects of a manuscript (importance of the research question, originality of the paper, strengths and weaknesses of the method, presentation, and interpretation of results) and on two aspects of the review itself (constructiveness and substantiation of comments). Internal consistency was high (Cronbach’s alpha = 0.84). The mean total score (the seven items are each scored on a 5-point Likert scale) showed good test-retest (weighted kappa Kw = 1.00) and inter-rater (Kw = 0.83) reliability. There was no evidence of floor or ceiling effects, construct validity was evident, and respondent burden was acceptable (2–10 minutes). Although improvements to the RQI should be pursued, the instrument can be recommended for use in studies of peer review.
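      The internal-consistency statistic cited above can be made concrete with a small sketch. The function below computes Cronbach's alpha for a seven-item scale scored 1–5, following the standard formula alpha = k/(k−1) × (1 − Σ item variances / variance of totals); the sample ratings are illustrative stand-ins, not data from the study.

```python
# Sketch of a Cronbach's alpha calculation for a multi-item scale
# such as the seven-item RQI. Ratings are hypothetical examples.

def cronbach_alpha(ratings):
    """ratings: list of reviews, each a list of per-item scores (1-5)."""
    k = len(ratings[0])  # number of items (7 for the RQI)

    def variance(xs):
        # Sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [variance([r[i] for r in ratings]) for i in range(k)]
    total_var = variance([sum(r) for r in ratings])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical ratings: five reviews scored on the seven RQI items.
sample = [
    [4, 4, 5, 3, 4, 4, 5],
    [2, 3, 2, 2, 3, 2, 2],
    [5, 4, 5, 4, 5, 5, 4],
    [3, 3, 3, 2, 3, 3, 3],
    [1, 2, 2, 1, 2, 1, 2],
]
alpha = cronbach_alpha(sample)
```

      Because the illustrative items move together across reviews, the resulting alpha is high; the study's reported value of 0.84 likewise indicates strong internal consistency across the seven RQI items.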



