Corresponding author. Division of Rheumatology, University Health Network, Ground Floor, East Wing, Toronto Western Hospital, 399 Bathurst Street, Toronto, Ontario M5T 2S8, Canada. Tel.: +416-603-6417; fax: +416-603-4348.
Division of Rheumatology, Department of Medicine, University Health Network, Toronto, Ontario, Canada; Department of Health Policy, Management and Evaluation, University of Toronto, Toronto, Ontario, Canada
Department of Health Policy, Management and Evaluation, University of Toronto, Toronto, Ontario, Canada; Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada; Division of Clinical Decision Making and Health Care, Toronto General Research Institute, Toronto, Ontario, Canada
Department of Health Policy, Management and Evaluation, University of Toronto, Toronto, Ontario, Canada; Division of Rheumatology, Department of Medicine, Women's College Hospital, Toronto, Ontario, Canada
Department of Health Policy, Management and Evaluation, University of Toronto, Toronto, Ontario, Canada; Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada; Division of Rheumatology, Department of Paediatrics, The Hospital for Sick Children, Toronto, Ontario, Canada
Objective
Bayesian analysis can incorporate clinicians' beliefs about treatment effectiveness into models that estimate treatment effects. Many elicitation methods are available, but it is unclear whether any confers an advantage based on principles of measurement science. We review belief-elicitation methods for Bayesian analysis and determine whether any of them has incremental value over the others based on its validity, reliability, and responsiveness.
Study Design and Setting
A systematic review was performed. MEDLINE, EMBASE, CINAHL, Health and Psychosocial Instruments, Current Index to Statistics, MathSciNet, and Zentralblatt Math were searched using the terms (prior OR prior probability distribution) AND (beliefs OR elicitation) AND (Bayes OR Bayesian). Studies were evaluated on: design, question stem, response options, analysis, consideration of validity, reliability, and responsiveness.
Results
We identified 33 studies describing methods for elicitation in a Bayesian context. Elicitation occurred mostly in cross-sectional studies (n=30, 89%) and was used to derive point estimates with individual-level variation (n=19, 58%). Although 64% (n=21) of studies considered the validity, 24% (n=8) the reliability, and 12% (n=4) the responsiveness of the elicitation methods, only 12% (n=4) formally tested validity, 6% (n=2) tested reliability, and none tested responsiveness.
Conclusions
We have summarized methods of belief elicitation for Bayesian priors. The validity, reliability, and responsiveness of elicitation methods have been infrequently evaluated. Until comparative studies are performed, strategies to reduce the effects of bias on the elicitation should be used.
What is new?
• This article summarizes methods that have been applied for belief elicitation;
• Reviews the published measurement properties of each method;
• Presents a conceptual framework for the belief-elicitation process;
• Identifies pragmatic methodologic strategies to reduce the effect of bias in belief-elicitation studies.
What should change now?
• Strategies to reduce the effect of bias include sampling from groups of experts, use of clear instructions and a standardized script, provision of examples and training exercises, avoidance of scenarios or anchoring data, provision of feedback and opportunity for revision of the response, and use of simple graphical methods.
1. Introduction
Bayesian analysis is an increasingly common method of statistical inference used in clinical research [
]. The empirical Bayesian approach is one where parameters of the prior distribution are estimated by using the same data used in the main analysis. When no prior information is available, investigators use a vague prior so that new data will dominate. The fully Bayesian approach is one that considers all sources of preexisting knowledge admissible for the analysis. One advantage of the fully Bayesian approach over the traditional “frequentist” approach to statistical inference or the empirical Bayesian approach is the ability to incorporate beliefs into models that estimate treatment effects. Once beliefs are elicited from a sample (e.g., experts in a field), the elicited beliefs (e.g., regarding the probability of a treatment effect) can be graphically expressed as a prior probability distribution. This distribution can be used to document clinical equipoise (a prerequisite for clinical trials) [
]. Therefore, to apply Bayesian prior probability distributions of existing belief about a treatment effect in clinical trials, clinical researchers would benefit from knowledge of existing belief-elicitation methods and identification of methods that have demonstrable methodologic rigor. In particular, belief-elicitation methods should be valid, reliable, responsive to change, and feasible. Thus, the primary objectives of this study were: (1) to review methods of eliciting prior beliefs for a Bayesian analysis; and (2) to review the measurement properties (validity, reliability, responsiveness, and feasibility) of these methods to determine if one method had incremental value over another. To better understand the processes by which experts formulate a belief, the processes by which investigators can elicit this belief, and the potential biases that may affect the validity, reliability, and responsiveness of these methods, the secondary objectives of this study were: (1) to develop, through review of the literature, a conceptual framework for the belief-elicitation process and the biases that may affect the elicited response; and (2) to identify methodologic strategies that may reduce the effect of bias on the elicitation process.
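To make the fully Bayesian use of an elicited prior concrete, the following is a minimal sketch in R, assuming a hypothetical binary outcome and an illustrative Beta prior; the numbers are invented for illustration and are not drawn from any study in this review. It combines an elicited prior on a response rate with new trial data through a conjugate Beta-binomial update.

```r
# Sketch: combine a hypothetical elicited prior with trial data (Beta-binomial model).
# An elicited belief such as "response rate around 30%, plausibly 15-50%" is
# approximated here by a Beta(6, 14) prior; all numbers are illustrative.
prior_a <- 6
prior_b <- 14

# Hypothetical new trial data: 24 responders out of 60 patients.
events <- 24
n      <- 60

# Conjugate update: posterior is Beta(a + events, b + non-events).
post_a <- prior_a + events
post_b <- prior_b + (n - events)

# Posterior mean and 95% credible interval for the response rate.
post_mean <- post_a / (post_a + post_b)
post_ci   <- qbeta(c(0.025, 0.975), post_a, post_b)
cat(sprintf("Posterior mean %.2f, 95%% CrI %.2f to %.2f\n",
            post_mean, post_ci[1], post_ci[2]))
```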
2. Methods
2.1 Search strategy
Eligible studies were identified using MEDLINE (1950 to week 2, June 2008), EMBASE (1980 to week 25, 2008), CINAHL (1982 to week 2, June 2008), Health and Psychosocial Instruments (1985 to March 2008), Current Index to Statistics (1974 to June 2008), MathSciNet (1940 to June 2008), and Zentralblatt Math (1868 to June 2008) using the search terms (prior OR prior probability distribution) AND (beliefs OR elicitation) AND (Bayes OR Bayesian). Mapping of terms to subject headings was used, where appropriate. Titles and abstracts were screened to exclude ineligible studies. Included studies were entered in the Science Citation Index and PubMed (with use of the "related articles" tool) to search for other potentially eligible studies. In addition, the bibliographies of included studies and published reviews were searched.
2.2 Inclusion and exclusion criteria
Eligible articles included published observational studies, randomized controlled trials, book chapters, and technical reports, which describe elicitation of beliefs in a Bayesian context. Studies using human and nonhuman subjects were included. Non–English language studies were excluded.
2.3 Data abstraction and methodologic assessment
Using a standardized form, the following data were abstracted: sample size, study design (cross-sectional, longitudinal, unspecified), level of elicitation (individual, group), questionnaire-administration format (in person, telephone interview, mail, Delphi consensus, other), questionnaire format (paper, computer assisted, other), question format (scenario with/without data provided in stem, predictive question, both, other), response options (visual analog scale, distribution of probabilities or proportions into bins, other), response rate (percentage, not specified, not applicable [methodologic or simulation papers]), analysis (point estimate with group-level variation, point estimate with individual-level variation), and graphical display (none, probability density function, cumulative distribution function, other). Often respondents are asked to make a probability estimate for an event which is not definitively known (e.g., probability of survival at 3 years). There may be some uncertainty around the reported point estimate. "Group-level variation" was used to characterize analyses that reported the variability of the group's point estimate. "Individual-level variation" was used to characterize analyses that reported the variability around the point estimate for each individual study participant.
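As an illustration of this distinction, the following R sketch uses invented numbers (five hypothetical experts, each giving a best guess and a personal 95% interval) to contrast a group-level summary of the point estimates with an individual-level summary that retains each expert's own interval.

```r
# Sketch: "group-level" vs. "individual-level" variation in elicited estimates.
# Simulated: each of 5 experts gives a best guess and a personal 95% interval.
best <- c(0.25, 0.30, 0.40, 0.35, 0.20)
low  <- c(0.15, 0.20, 0.30, 0.25, 0.10)
high <- c(0.40, 0.45, 0.55, 0.50, 0.35)

# Group-level variation: variability of the point estimates across experts.
cat(sprintf("Group point estimate %.2f (SD across experts %.2f)\n",
            mean(best), sd(best)))

# Individual-level variation: each expert's own uncertainty interval is retained.
print(data.frame(expert = 1:5, best, low, high))
```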
2.4 Measurement properties
Articles describing elicitation methods were evaluated for consideration of the following properties:
1. Validity. Face validity evaluates if the elicitation method appears to measure what it purports to measure. Content validity evaluates if the elicitation method captures all the relevant aspects of the belief [The health assessment questionnaire disability index and scleroderma health assessment questionnaire in scleroderma trials: an evaluation of their measurement properties]. Criterion validity evaluates the correlation of an elicitation method with the "gold standard." Under the assumption that there is no gold standard for the truth or belief, construct validity evaluates the relationship between two different methods of measuring the same belief. Convergent construct validity evaluates the correlation between two related aspects of the elicited belief, whereas divergent construct validity evaluates the ability of an elicitation method to correctly distinguish between dissimilar beliefs [].
2. Reliability. Reliability refers to the reproducibility of the measure. Intrarater reliability is evaluated when the elicitation method is applied to the same participant(s) on two different occasions, whereas interrater reliability is evaluated when the elicitation method is applied to different participants on the same occasion. In the context of belief measurement, interrater reliability is of lesser importance. Measures of reliability include the method of Bland and Altman, intraclass correlation coefficient, or Cohen's kappa [].
3. Responsiveness. Responsiveness refers to the ability of the elicitation method to detect important changes in belief that occur over time as new information is gained.
4. Feasibility. Determinants of feasibility include time, cost, and need for equipment or personnel.
Consideration of validity, reliability, responsiveness, and feasibility by investigators was categorized as commented on, evaluated (measure of association or change recorded), or not specified. The measures of validity, reliability, and responsiveness cited earlier (e.g., correlations, kappa) are appropriate when elicitation yields a single value per respondent. When each respondent provides an entire probability distribution, it is not clear how validity, reliability, and responsiveness should be measured.
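For example, a test-retest (intrarater) reliability analysis of elicited point estimates could be summarized with Bland-Altman limits of agreement and an intraclass correlation coefficient. The R sketch below uses simulated values (not data from any of the reviewed studies) and a simple one-way ICC; it is illustrative only.

```r
# Sketch: test-retest (intrarater) reliability of elicited probabilities.
# Simulated data: 10 experts give a probability estimate on two occasions.
set.seed(42)
occasion1 <- round(runif(10, 0.2, 0.8), 2)
occasion2 <- pmin(pmax(occasion1 + rnorm(10, 0, 0.05), 0), 1)

# Bland-Altman limits of agreement: mean difference +/- 1.96 SD of the differences.
d   <- occasion2 - occasion1
loa <- mean(d) + c(-1.96, 1.96) * sd(d)
cat(sprintf("Bland-Altman: bias %.3f, limits of agreement %.3f to %.3f\n",
            mean(d), loa[1], loa[2]))

# One-way intraclass correlation coefficient, ICC(1,1), from an ANOVA decomposition.
ratings <- data.frame(
  expert = factor(rep(1:10, times = 2)),
  value  = c(occasion1, occasion2)
)
ms  <- summary(aov(value ~ expert, data = ratings))[[1]][["Mean Sq"]]
msb <- ms[1]  # between-expert mean square
msw <- ms[2]  # within-expert (residual) mean square
icc <- (msb - msw) / (msb + msw)  # one-way ICC with k = 2 occasions
cat(sprintf("ICC(1,1) = %.2f\n", icc))
```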
2.5 Statistical analysis
Summary statistics were calculated using R 2.4 (R Foundation for Statistical Computing, Vienna, Austria).
3. Results
3.1 Search strategy
Systematic review of the literature identified 33 articles which described unique methods for belief elicitation in a Bayesian context (Fig. 1).
Table 1 summarizes the study characteristics. Belief elicitation mostly occurred in cross-sectional studies (91%), at the level of the individual (97%), using small sample sizes (median of 11 participants). Questionnaires were largely administered in person (58%) or on paper (52%), and to derive a point estimate with individual-level variation (58%).
Question stems (the question asked of the participant) and response options are summarized in Table 2. Investigators had asked participants about the mean [
Examples of question stems include:
(1) Neutron therapy trial:
(a) Express your belief about neutron therapy compared with an expected 12-month failure rate of 50% in the photon arm of the trial.
(b) Given 20 counters, place 2 of them at the upper and lower limits of belief. Place the remaining 18 counters so as to express your remaining prior beliefs about the neutron failure rates.
(2) Trimethoprim-sulfamethoxazole (TMS) prophylaxis trial:
(a) What is your best guess of the percentage of people assigned to the daily trimethoprim-sulfamethoxazole (TMS) group who will experience Pneumocystis pneumonia (PCP) 2 years after enrollment?
(b) Think about the people on the thrice-weekly arm and think about an interval estimate for what you would expect for the percentage of people on the thrice-weekly TMS arm who will experience PCP in 2 years, given that the proportion experiencing PCP on the daily TMS arm is what you guessed. Please specify the interval by an upper and lower number within which you think the percentage of people experiencing PCP on the thrice-weekly arm will lie.
(3) Project implementation:
(a) Suppose you were asked to predict whether a project would be successfully implemented. You can ask me any question you want about the project and I will find the answer for you. What questions would you ask of me?
(b) Please give me examples of answers that would make you optimistic and pessimistic about the chances of success.
(c) Estimate the prior probability of implementation success using an "estimate–talk–estimate" approach.
(4) Lipiodol hysterosalpingogram:
(a) Please give your best estimate of the relative probability of pregnancy in the 6 months following a lipiodol hysterosalpingogram, compared with a "no intervention" probability of pregnancy of 1.0.
(b) Please give 95% confidence limits to this estimate.
(c) What is the minimum relative probability of pregnancy following a lipiodol hysterosalpingogram that would justify, in your opinion, this being used as a standard for some women with unexplained infertility?
(5) First words in "Of Human Bondage":
(a) What is your guess of the percentage of the 758 "first words" in this particular edition of "Of Human Bondage" that have six or more letters?
(b) Imagine you were allowed to draw a sample of 10 randomly selected first words out of 758 pages. What weight (in decimal numbers) do you assign to a random sample of 10?
(c) What weight do you assign to the data if you were allowed to randomly select a larger sample of 50 pages from a total of 758?
(6) CHART radiotherapy trial: We are interested in your expectations of the difference in 2-year survival which might result from using CHART rather than the standard radical radiotherapy for eligible patients. Enter your weight of belief in each of the possible intervals. The stronger you believe that the difference will truly lie in a given interval, the greater should be your weight for that interval. If you believe that it is impossible that the difference lies in a given interval, your weight should be zero. Your weights should add up to 100. (See the sketch below, after the table footnotes, for one way such weights can be converted into a prior.)
(7) Treatment X vs. standard Y (2-year survival): We are interested in your expectations of the difference in 2-year survival rate which might result from using treatment X rather than the standard Y for eligible patients. Enter your weight of belief in each of the possible intervals. The stronger you believe that the difference will truly lie in a given interval, the greater should be your weight for that interval. If you believe that it is impossible that the difference lies in a given interval, your weight should be zero. Your weights should add up to 100.
(8) Ventilation tube insertion: Estimate the probability of complete hearing recovery and normal language recovery within a year, in a situation without treatment and in a situation with ventilation tube insertion.
(9) Treatment X vs. standard Y (death or hospitalization): We are interested in your expectations of the difference in rates of death or hospitalization which might result from using treatment X rather than the standard Y for eligible patients. Enter your weight of belief in each of the possible intervals. The stronger you believe that the difference will truly lie in a given interval, the greater should be your weight for that interval. If you believe that it is impossible that the difference lies in a given interval, your weight should be zero. Your weights should add up to 100. Suppose the annual event rate on placebo is 18%; what is your expectation for the annual event rate on X?
(10) Proportion of male students at a university.
Cumulative distribution function:
(a) What is the probability that a random student at the university is male?
(b) Can you determine a point such that it is equally likely that p is less than or greater than this point?
(c) Now suppose that you were told that p is less than I2. Determine a new point such that it is equally likely that p is less than or greater than this point.
(d) Now suppose that you were told that p is less than I3. Determine a new point such that it is equally likely that p is less than or greater than this point.
Probability density function:
(a) What do you consider the most likely value of p?
(b) Can you determine two values of p (one on each side of p) which are about half as likely as the value in (a)?
(c) Can you determine a point such that half of the area under the graph of the density function is to the left of the point and half of the area is to the right of the point?
(d) Such that 1/4 of the area is to the left of the point and 3/4 is to the right?
(e) Such that 3/4 of the area is to the left of the point and 1/4 is to the right?
(f) Such that 1/100 of the area is to the left of the point and 99/100 is to the right?
(g) Such that 99/100 of the area is to the left of the point and 1/100 is to the right?
Response format: (a) p = A%; (b) I2 = B%; (c) I3 = C%; (d) I4 = D%.
Questions have been paraphrased for space.
Abbreviations: CT, computed tomography; VAS, visual analog scale.
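Several of the stems above (for example, the CHART question) ask respondents to distribute 100 weights of belief across intervals of treatment difference. The following R sketch, using invented interval boundaries and weights, shows one way such responses can be normalized into a discrete prior and summarized; it is an illustration rather than the method used in any particular study.

```r
# Sketch: turn "weights of belief" placed in bins into a prior distribution.
# Bins: hypothetical intervals for the difference in 2-year survival (percentage points).
breaks  <- seq(-10, 20, by = 5)                 # interval boundaries
mids    <- head(breaks, -1) + diff(breaks) / 2  # interval midpoints
weights <- c(5, 10, 30, 35, 15, 5)              # one expert's weights, summing to 100

prior <- weights / sum(weights)                 # normalized discrete prior

# Summaries of the elicited prior.
prior_mean <- sum(mids * prior)
prior_sd   <- sqrt(sum(prior * (mids - prior_mean)^2))
cat(sprintf("Prior mean %.1f, SD %.1f percentage points\n", prior_mean, prior_sd))

# Simple histogram-style display of the elicited prior.
barplot(prior,
        names.arg = paste(head(breaks, -1), breaks[-1], sep = " to "),
        xlab = "Difference in 2-year survival (%)",
        ylab = "Prior probability")
```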
Of the identified studies, 64% (21 of 33) considered the validity, 24% (8 of 33) the reliability, 12% (4 of 33) the responsiveness, and 55% (18 of 33) the feasibility of the elicitation methods (Table 1). However, only four (12%) studies formally evaluated validity, two (6%) studies tested reliability, none tested responsiveness, and one (3%) study formally evaluated feasibility (Table 3).
Table 3. Summary of studies that considered validity, reliability, responsiveness, and feasibility
Through review of the literature [], we have developed a conceptual framework for this process (Fig. 3). An individual's belief about the effectiveness of an intervention is influenced by his or her knowledge of the research evidence and his or her clinical experience, which are presumably both approximations of the truth. Some schools of thought suggest that an individual does not have a preexisting quantification of his or her belief "ready for the picking" [
]. Rather, when asked about his or her belief about an intervention, an individual will synthesize his or her knowledge and experience into a “quantified belief prior” [
]. Using an elicitation procedure (question and response option), the investigator tries to elicit the belief. The investigator may quantify the elicited belief, express it graphically, and then combine multiple individual priors to form a group “clinical prior” [
Using the personalistic theory of probability, all self-consistent or coherent beliefs are admissible in a study as long as the individual feels that they correspond with his or her judgment [
]. The elicitation procedure, the manner in which the belief is elicited, can influence the creation of both the individual's quantified prior and the group's clinical prior [
]. A person may modify the reporting of his or her quantified belief depending on the method by which the belief was elicited. Biases that may threaten the validity of the elicited belief are summarized in Table 4 [
• Overconfidence may bias the validity of the elicited belief where some clinicians provide very little uncertainty around their estimate, corresponding to strong beliefs.
• Believability: clinicians are more likely to be influenced by study findings that are concordant with their preconceived beliefs about the disease process or treatment effect.
• Ordering: participants' probability estimates are influenced by data presented at the beginning of the question stem (primacy effect), while others are influenced by data presented at the end of the question stem (recency effect).
The reliability, responsiveness, and feasibility of an elicitation procedure are also important determinants of its utility. Threats to the reliability of an elicitation procedure include lack of understanding of the elicitation procedure, carelessness, lack of interest, and fatigue [
]. In the setting of a longitudinal study, an elicitation procedure should also be able to detect any important changes in belief that occur over time as new information is gained. Finally, the implementation of an elicitation method in clinical research is constrained by factors that affect its feasibility. Factors may include costs incurred through implementation of the method, need for specialized personnel or hardware, and the time required of the study participant.
3.6 Methodologic strategies to reduce bias
Methodologic strategies to reduce the influence of potential biases on the validity and reliability of elicitation methods are summarized in Table 4. Strategies to minimize bias can be implemented at each stage of the elicitation procedure: identification of the sample, framing of the question, choice of the response option, and summarizing of the data.
]. The training of a clinical expert generally extends over a period of time—years rather than weeks. During that time, the expert gains extensive experience with the specific events in question and with the factors that affect them [
]. An expert encounters the condition in a repetitive manner and receives relatively immediate feedback for the consequences of their therapeutic decisions [
]. As a result, experts are able to predict events about which they have special training, and tend to be more consistent in their beliefs than nonexperts [
]. This results in the elicited probability distributions being truncated at hard and perhaps unrealistic boundaries rather than extending to include extreme tail areas with very small probabilities [
]. Insufficient normative goodness (statistical understanding) and insufficient understanding of the elicitation question threaten the validity of the belief elicited [
The use of a dichotomous response option (e.g., I believe this intervention is effective. Yes/No) has insufficient content validity, as clinicians often have beliefs about the magnitude of the effect and varying degrees of certainty in the strength of their belief [
Strategies can be used to reduce the threat to the validity and reliability of the elicited belief of limited normative goodness, or the respondents' insufficient understanding of the elicitation procedure. Provision of feedback to the participant about the elicited belief allows for self-correction [
]. An opportunity for verification and revision of the elicited response allows the participant to detect and revise inconsistencies in their response [
]. The use of a response option that requires betting or utilizes penalties also improves validity and reliability. Participants will reflect more deeply when provided a disincentive, as there is a sense of potential loss associated with their response (e.g., an approach where a study participant has to wager his own money based on his assessed probability of an outcome) [
]. Bias introduced by base-rate neglect (which occurs when participants fail to take account of the prevalence of the outcome among untreated patients) may be reduced by asking the participant to state the baseline rate or describe the outcome in both untreated and treated patients [
There are a variety of methods by which individual priors are aggregated to form a group clinical prior. Although some studies have used consensus methods to derive a group clinical prior [
], most studies have combined individually elicited priors. Biases introduced by overoptimism or overconfidence may be reduced by the use of averaging methods for the group clinical prior [
]. It has also been suggested that the elicited belief could be weighted by occupation, level of experience, self-confidence, or other personal characteristics [
]. However, the value of these pooling and weighting methods remains uncertain and requires evaluation.
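As an illustration of the simplest averaging approach, the following R sketch forms a group clinical prior as a linear opinion pool of individual discrete priors, first with equal weights and then with hypothetical experience-based weights; the priors, bins, and weights are invented for illustration.

```r
# Sketch: combine individual discrete priors into a group "clinical prior"
# by linear opinion pooling (weighted averaging); all values are illustrative.
bins <- c("harm", "no effect", "small benefit", "large benefit")

# Each row is one expert's elicited prior over the bins (rows sum to 1).
individual_priors <- rbind(
  expert1 = c(0.10, 0.30, 0.40, 0.20),
  expert2 = c(0.05, 0.20, 0.50, 0.25),
  expert3 = c(0.25, 0.40, 0.25, 0.10)
)
colnames(individual_priors) <- bins

# Equal-weight pool: simple average across experts.
equal_pool <- colMeans(individual_priors)

# Hypothetical weights, e.g., proportional to years of relevant experience.
w <- c(10, 20, 5) / sum(c(10, 20, 5))
weighted_pool <- drop(w %*% individual_priors)

print(rbind(equal = equal_pool, weighted = weighted_pool))
```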
Graphical presentation of the combined clinical prior has been used to express the degree of variability of the elicited belief, illustrate the existence of clinical uncertainty, and demonstrate the amount of evidence that would be required from data to convince optimistic and skeptical clinicians. In general, people more easily comprehend normal distributions than fractiles, relative densities, or cumulative distribution functions [
]. A probability density function is more intuitive than a cumulative distribution function, and its use is associated with improved feasibility and validity [
]. The use of a concomitant histogram is useful for individuals who are less familiar with probability distributions. The use of simple graphical representations is preferred as the trade-off of more information is busier figures where patterns are harder to see [
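A minimal R sketch of this kind of display, using simulated elicited values rather than data from any reviewed study, plots a pooled prior as a probability density curve with a concomitant histogram and a reference line at no treatment effect.

```r
# Sketch: display a pooled clinical prior as a density curve with a concomitant
# histogram, a simpler alternative to a cumulative distribution function.
set.seed(1)
elicited <- rnorm(200, mean = 5, sd = 4)   # simulated elicited differences (%)

hist(elicited, breaks = 12, freq = FALSE,
     xlab = "Difference in 2-year survival (%)",
     main = "Pooled clinical prior (illustrative)")
lines(density(elicited), lwd = 2)
abline(v = 0, lty = 2)                     # line of no treatment effect
```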
4. Discussion
This systematic review summarizes methods of belief elicitation for use in a Bayesian analysis. The validity, reliability, and responsiveness of the methods have not been adequately evaluated. Identification of the "best" method based on the principles of measurement science is limited by the paucity of data. With the increasing use of Bayesian analysis in clinical research [
], evaluation of the measurement properties of elicitation methods is required in order for researchers to be confident that the methods meet methodologic standards. In particular, evaluation of the validity and reliability of methods is needed. If belief elicitation is to be used in a longitudinal setting where new information is gained over time, research on the responsiveness of the methods is warranted.
Through review of the literature, we have developed a conceptual framework outlining the process by which beliefs about treatment effects are formulated by experts and the process by which investigators may elicit beliefs. We have also identified potential biases which may threaten the validity, reliability, and responsiveness of the elicited belief, and incorporated these findings into the conceptual framework. Conceptual frameworks are increasingly being used to guide our thinking [
]. This framework is meant to lay down a foundation on which we synthesize the existing knowledge about the belief-elicitation process. It is not meant to be static, but rather meant to be modified as additional insights are gained. We summarize pragmatic methodologic strategies to reduce the effect of potential biases until comparative validity, reliability, and responsiveness studies are conducted. Strategies to minimize bias can be implemented at each stage of the elicitation procedure.
In an attempt to be comprehensive, we included all studies that elicited belief in a "Bayesian context." Although some studies elicited prior beliefs and then incorporated them with new data in a fully Bayesian analysis, other studies did not. For example, Bergus et al. evaluated diagnostic clinical reasoning of family physicians by comparing their elicited probabilities of different diagnoses with Bayesian-derived probabilities [
]. Under Bayesian inference, a subjective probability is not itself an uncertain quantity to be estimated; it is stated and used to describe one's uncertainty. However, probability elicitation is also used to estimate proportions or frequencies [
]. For example, investigators may ask participants to estimate their probability of being struck by lightning, when investigators are actually asking for an estimate of the proportion of individuals who are struck by lightning. Estimating the probability of the event does not allow one to consider uncertainty. Using a Bayesian paradigm, investigators could elicit both an estimate of this proportion and the individual's uncertainty about this proportion.
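One way to capture both the estimate and the uncertainty is to encode an elicited median and upper credible limit as a Beta prior. The R sketch below does this by numerically matching Beta quantiles to invented elicited values; the figures are illustrative and the fitting approach is only one of several possibilities.

```r
# Sketch: encode an elicited median (0.30) and 97.5th percentile (0.50) for a
# proportion as a Beta prior, capturing both the estimate and the uncertainty.
target <- c(median = 0.30, upper = 0.50)   # illustrative elicited values

# Find Beta(a, b) whose median and 97.5% quantile match the elicited values.
objective <- function(par) {
  a <- exp(par[1]); b <- exp(par[2])       # keep parameters positive
  q <- qbeta(c(0.5, 0.975), a, b)
  sum((q - target)^2)
}
fit <- optim(c(log(6), log(14)), objective)
a <- exp(fit$par[1]); b <- exp(fit$par[2])

cat(sprintf("Fitted Beta(%.1f, %.1f): median %.2f, 97.5th percentile %.2f\n",
            a, b, qbeta(0.5, a, b), qbeta(0.975, a, b)))
```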
One area of uncertainty is the number of participants required for a belief-elicitation study [
]. We found the median sample size of participants in belief-elicitation studies to be 11. Some investigators have advocated for the inclusion of more than one expert [
The correct method of sampling experts is also uncertain. The selection of a group of experts to participate in a belief-elicitation study is intended to yield some knowledge about the population of experts. It may not be possible to study the whole population. One option is simple random sampling. However, experts are not likely to be statistically independent. It may be preferable to include experts chosen nonrandomly (e.g., purposive expert sampling) and capture a range of opinions of the target population [
]. This has the advantage of instant graphical presentation of the elicited belief. However, these technologies have been criticized for their lack of usability and intuitiveness [
]. This is likely to be related to the software in question. Computer-assisted elicitation studies have been performed one-on-one. Internet-based, computer-assisted belief-elicitation surveys may be an option for future studies.
Evaluation of the validity of a belief-elicitation method for Bayesian priors is challenged by the lack of a “true objective” probability that represents subjective uncertainty about a fixed, unknown quantity. In the psychology literature, there have been studies that measure the calibration of elicited distributions compared with the true value that has been verified by the investigator (e.g., population of a country, dates of historical events, meaning of words) [
]. The use of these calibration methods in studies evaluating the probability of an intervention's treatment effect is limited as the “true” treatment effect is not known. Preexisting clinical trials or observational studies may provide estimates of the treatment effect but the “truth” remains unknown. In the setting where the gold standard is not known, an alternative option would include the evaluation of construct validity. For example, one study examined intensive care unit physicians' judgments for the probability of survival for patients compared with probabilities generated by a logistic model derived from the Acute Physiology And Chronic Health Evaluation (APACHE) II illness severity index [
]. Whether it is better to include experts or nonexperts remains a subject of controversy. The results of this review suggest that the inclusion of clinical experts rather than generalists in an elicitation procedure improves the validity and reliability of the elicited beliefs.
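A construct-validity comparison of the kind described above could be quantified, for example, by the correlation between elicited and model-based probabilities and by calibration-type summaries such as the Brier score. The R sketch below uses simulated values, not the APACHE II study data, and is intended only to illustrate the calculations.

```r
# Sketch: construct validity of elicited survival probabilities against
# probabilities from a prognostic model (values simulated for illustration).
set.seed(7)
model_prob    <- runif(30, 0.05, 0.95)                      # e.g., from a severity index
elicited_prob <- pmin(pmax(model_prob + rnorm(30, 0, 0.10), 0), 1)
outcome       <- rbinom(30, 1, model_prob)                  # observed survival (1 = survived)

# Agreement between elicited and model-based probabilities.
cat(sprintf("Spearman correlation: %.2f\n",
            cor(elicited_prob, model_prob, method = "spearman")))

# Calibration-type summary: Brier score for each source of probabilities.
brier <- function(p, y) mean((p - y)^2)
cat(sprintf("Brier score, elicited: %.3f; model: %.3f\n",
            brier(elicited_prob, outcome), brier(model_prob, outcome)))
```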
Whether prior beliefs should be included in a Bayesian analysis is also controversial. Proponents of the empirical Bayesian approach do not use information external to the data at hand. We argue that the fully Bayesian approach, whether priors are informative or vague, more closely approximates true medical practice. Often, there is no published evidence available to guide physicians in making a diagnosis, establishing a prognosis, or deciding to institute a therapy. In these settings, clinicians will use other sources of knowledge (education, experience, expert opinion) to guide their beliefs. The fully Bayesian approach allows quantification and incorporation of these beliefs into statistical models. The onus remains on clinical investigators to use belief-elicitation methods that have demonstrable methodologic rigor. In addition, Hiance et al. have demonstrated that elicitation of prior beliefs is not only feasible but also allows insights to be gained into the variability of experts' beliefs [
By summarizing methods that have been applied for belief elicitation, reviewing whatever is known about the measurement properties of each method, developing a conceptual framework for the belief-elicitation process, and identifying pragmatic methodologic strategies to reduce the effect of bias, we have synthesized the current state of knowledge for clinical researchers. This study lays the necessary groundwork for future research by highlighting areas requiring investigation. Through the use of measurement properties as criteria to assess the utility of belief-elicitation methods, we are rising to the challenge of using disciplined research methodology [
] when applying the Bayesian paradigm to clinical trials.
Our ability to comparatively evaluate the identified elicitation methods is limited by the paucity of data evaluating their measurement properties. It should be noted that for most of the studies, evaluation of the methodologic properties of the elicitation method was not the intent of the investigators. Furthermore, evaluation of the measurement properties of the methods may not have been considered necessary. In an era of evolving and more rigorous methodologic standards [
], evaluation of the measurement properties of the methods is needed and will provide objective criteria on which the comparative utility of the various methods can be judged.
5. Conclusion
This systematic review of the literature summarizes methods of belief elicitation for a Bayesian analysis. The measurement properties of the methods have not been adequately evaluated. Further evaluation of the validity, reliability, and responsiveness of elicitation methods is needed. Until comparative studies are performed, methodologic strategies to reduce the effect of bias on the validity and reliability of the elicited belief should be used. Based on the results of this systematic review, we recommend the following strategies: sample from groups of experts, use clear instructions and a standardized script, provide examples and/or training exercises, avoid the use of scenarios or anchoring data, ask participants to state the baseline rate in untreated patients, provide feedback and an opportunity for revision of the response, and use simple graphical methods.
Acknowledgments
Dr. Sindhu Johnson has been awarded a Canadian Institutes of Health Research Phase 1 Clinician Scientist Award. Dr. Brian Feldman is supported by a Canada Research Chair in Childhood Arthritis.