Abstract
Research on the value of peer review is limited by the lack of a validated instrument
to measure the quality of reviews. The aim of this study was to develop a simple,
reliable, and valid scale that could be used in studies of peer review. A Review Quality
Instrument (RQI) that assesses the extent to which a reviewer has commented on five
aspects of a manuscript (importance of the research question, originality of the paper,
strengths and weaknesses of the method, presentation, interpretation of results) and
on two aspects of the review (constructiveness and substantiation of comments) was
devised and tested. Its internal consistency was high (Cronbach’s alpha 0.84). The
mean total score (based on the seven items, each scored on a 5-point Likert scale) had good test-retest (Kw = 1.00) and inter-rater (Kw = 0.83) reliability. There was no evidence of floor or ceiling effects, construct
validity was evident, and the respondent burden was acceptable (2–10 minutes). Although
improvements to the RQI should be pursued, the instrument can be recommended for use
in the study of peer review.
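To illustrate the internal-consistency statistic reported above, the following sketch computes Cronbach's alpha for a set of reviews rated on seven items scored 1 to 5, as in the RQI. The ratings are hypothetical, invented purely for illustration; the formula is the standard one (k/(k-1) times one minus the ratio of summed item variances to total-score variance), not the authors' own computation.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_reviews, n_items) array of item ratings."""
    scores = np.asarray(scores, dtype=float)
    n_items = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item across reviews
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the total scores
    return (n_items / (n_items - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical example: four reviews, each rated on the seven RQI items (1-5)
ratings = [
    [4, 4, 5, 3, 4, 4, 4],
    [2, 3, 2, 2, 3, 2, 2],
    [5, 4, 5, 4, 5, 5, 4],
    [3, 3, 3, 2, 3, 3, 3],
]
print(round(cronbach_alpha(ratings), 2))
```

Values near 0.84, as reported for the RQI, indicate that the seven items tend to move together and can reasonably be summed into a single quality score.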
Article info
Publication history
Accepted: March 8, 1999
Copyright
© 1999 Elsevier Science Inc. Published by Elsevier Inc. All rights reserved.