Abstract
Background
The wise integration of evidence from health care research into diagnostic decisions could influence patient outcomes by improving clinical diagnosis, reducing unnecessary testing, and minimizing diagnostic error. Yet for many, this promise does not match reality. Here, we collect and categorize barriers to the use of health care research evidence in diagnostic decisions, examine their potential consequences, and propose potential ways to overcome these impediments.
Methods
Barriers were derived from observations over years of trying to inform clinical diagnoses with research evidence, and from interpretations of the literature.
Results
Barriers are categorized into those related to the evidence itself, those related to diagnosticians, and those related to health care systems. Tentative solutions are proffered. Data are lacking on the frequency and impact of the identified barriers, as well as on the effectiveness of the proposed solutions.
Conclusions
Barriers to the sensible use of evidence from health care research in clinical diagnosis can be identified and categorized, and possible solutions can be imagined. We could, and should, muster the will to overcome these barriers.
1. Evidence-based clinical diagnosis
Readers of the Journal of Clinical Epidemiology are likely to be very familiar with the potential of evidence-based clinical diagnosis. Its central notion is that the wise integration of evidence from health care research into diagnostic decisions could influence patient outcomes by improving clinical diagnosis, reducing unnecessary testing, and minimizing diagnostic error [1,2]. Authors and editors of this Journal have made great contributions to the science of this art [3–10], even as they sometimes expressed concerns about its proper place in practice [11–13]. These pages have also hosted periodic thoughtful discussion of the barriers to the use of evidence in diagnosis [14].

Without disputing prior commentary, in this essay I aim to collect and categorize the barriers to evidence-based clinical diagnosis, in search of additional insight. I also aim to suggest possible ways to overcome these barriers and persuade some readers that we should find the will to do so. These barriers are collected from years of observations of my own and others' attempts to use evidence to inform sound diagnostic decisions, and from interpretations of the available literature. Unfortunately, I have found no published surveys of the nature, frequency, and impact of these barriers. The barriers are listed in Table 1, grouped into three categories: those related to the evidence itself, those related to diagnosticians, and those related to health care systems.
Table 1. Some barriers to integrating research evidence into clinical diagnosis
Category of barrier, with examples | Potential impact of barrier |
---|---|
Evidence issues | |
No published research at all. | Answer not yet knowable from research. |
Evidence incomplete or contradictory. | It is unclear what is known and what is not. |
Evidence is not accessible at the points of diagnostic thinking. | Although knowable, evidence is not usable for diagnostic decisions. |
Published research focused too narrowly, e.g., only on test accuracy. | Clinically important diagnostic questions not yet addressed by research. |
Diagnostician issues | |
Unfamiliarity with probabilistic tradition. | Cannot make use of any type of evidence. |
Unfamiliarity with specific evidence type. | Cannot use that specific type of evidence. |
Inability or unwillingness to quantify uncertainty or carry out arithmetic of probability revision. | Cannot use quantitative power of probabilistic tradition (although might still use information qualitatively). |
Deference to others' diagnostic authority. | Will not use analytic thought, including probabilistic reasoning and evidence. |
Inability or unwillingness to integrate probabilistic tradition with other traditions and negotiate any conflicts in approach. | May rely too heavily on one tradition, particularly deterministic ones, and not use strengths of probabilistic tradition when appropriate. |
Mismatch of thinking modes, e.g., use of “non-analytic” mode when analytic is needed for particular patient illness. | Will not use analytic thought, including probabilistic reasoning and evidence. |
Health care system issues | |
Multiple forces converge to pressure for early diagnostic labeling. | “Throughput” concerns can discourage the deliberate processes of analytic diagnostic thinking, including the probabilistic tradition. |
Both “rewards” (e.g., reimbursement, patient satisfaction) and “punishments” (e.g., quality measures, malpractice) may reinforce doing tests of limited utility. | Desire for rewards or fear of punishments can influence diagnostic strategies more strongly than analytic thought using research evidence. |
Existing knowledge resources do not incorporate evidence and are not integrated into health records and other systems. | By not supporting the delivery of evidence to the point of care, health care organizations do not enable individual clinicians to integrate evidence into diagnostic decisions. |
Despite using systems approaches for reducing other types of medical error, health care organizations tend to view diagnostic error as the fault of individuals. | By not redesigning processes of care, health care organizations may miss opportunities to incorporate evidence into explicit strategies to reduce diagnostic error. |
a Barriers printed in italics are explored in depth in the text.
2. Barriers related to the research evidence
The first two barriers listed in Table 1 can arise in any field of science when no data exist or when the data are incomplete or contradictory. In these situations, either no answer is yet knowable through research, or it is unclear what is known. Without the relevant surveys, we do not yet know how often the absence or incompleteness of research leaves us with uninformed diagnostic decisions.
The third barrier listed, that evidence is poorly accessible at the points of diagnostic thinking, begins with the way potentially useful research can be published in a vast, scattered array of biomedical publications, continues with the lack, until recently, of systematic reviews of such evidence, and follows through with the lack of synopses of this evidence integrated into clinical systems, thus affecting all four levels of the “4S model” of evidence collections [15]. For instance, although a systematic review of good quality evidence about the diagnostic accuracy of clinical and test findings for heart failure has recently been published [16], in how many emergent and urgent care centers is this evidence already quickly accessible at the point of diagnostic decision making? Even if research evidence is extant that could definitively answer a diagnostic question, if it is inaccessible, it will not be used.

The fourth barrier in the list bears further exploration. If we take a broad view of the decisions and questions clinicians face during diagnosis, we can see that several types of research are potentially relevant, 15 of which are listed in Table 2. The first type, cross-sectional studies of test accuracy, is presumably most familiar to readers of this Journal [17,18], whereas the second type, systematic reviews of test accuracy studies, is increasingly recognized as well [19–22]. These two types of studies may occupy most of our teaching and writing about evidence-based diagnosis [2,23–25], with less time devoted to how the other types of studies listed could inform diagnosis [26]. This focus on studies of test accuracy has at least one benefit: by concentrating our attention here, we can maximally advance the state of the science of these studies. The recent publication of the STARD (Standards for Reporting Diagnostic Accuracy) initiative [27] and the recent decision of the Cochrane Collaboration to undertake systematic reviews of test accuracy demonstrate some of what can be achieved with such concentration.

Table 2. Some forms of evidence from health care research that can be useful for diagnostic decisions
Type of research | Output of research | Questions addressed |
---|---|---|
1. Cross-sectional studies of test accuracy [17,18,84,85] | Accuracy and discriminatory power of tests. | Which tests should be ordered? |
 | | How should these test results be interpreted? |
2. Systematic reviews of cross-sectional studies of test accuracy [19–22,86] | Pooled test accuracy. | Which tests should be ordered? |
 | Summary levels of evidence supporting test use. | How should these test results be interpreted? |
3. Consecutive case series or cohort studies of defined clinical problems [32] | Frequency of underlying disorders that cause this clinical problem. | What is the starting probability of this target disorder? |
 | | Is this disorder likely enough that it should be pursued in all with this clinical problem? |
4. Derivation or validation studies of clinical decision rules [31] | Probability of the target disorder in different patient groups, divided by the decision rule. | What is the revised pretest probability of the target disorder (after using the decision rule)? |
5. Consecutive case series or cohort studies of the clinical manifestations of disease [87] | Frequency of clinical findings in those proved to have the target disorder. | Should this finding cue this diagnostic hypothesis? |
 | | Does the absence of this finding allow us to safely discard this diagnostic hypothesis? |
 | | Do the known manifestations of this target disorder adequately explain all the findings of this patient's illness? |
6. Case–control or cohort studies of risk factors for disease [88] | Strength of association between risk factor and target disorder. | Does this factor place the patient at particular risk of the target disorder? |
7. Cohort studies of prognosis and prognostic factors [89] | Range and likelihood over time of disease outcomes. | How serious is the target disorder if left undiagnosed and untreated? Therefore, how vigorously should this diagnosis be pursued? |
 | | Could the known course over time of the target disorder explain this individual person's illness trajectory so far? |
8. Randomized trials of treatments for the target disorder [90,91] | Effectiveness of treatments for this disorder. | How responsive is the target disorder to treatment? Therefore, how vigorously should this diagnosis be pursued? |
9. Clinical decision analyses of diagnostic or screening strategies [92,93] | Expected impact on outcomes if strategies are used. | When formulating diagnostic or screening policy, what impact on clinical outcomes can be expected from each strategy? |
10. Economic analyses of diagnostic or screening strategies [94,95] | Expected impact on resource use if strategies are used. | When formulating diagnostic or screening policy, what impact will each strategy have on resource use? |
 | | Is this diagnostic or screening program worth doing? |
11. Randomized trials of diagnostic or screening strategies [90,91,96] | Effectiveness and impact of these strategies. | When considering this target disorder, which diagnostic strategy yields the greatest impact? |
 | | Should screening for this target disorder be undertaken? |
12. Utilization review or observational “outcomes” studies of the impact of diagnostic policies, or of the occurrence of diagnostic errors [97,98] | Observed impact of diagnostic policy in real-world settings. | How well do these diagnostic strategies hold up in real-world clinical settings? |
 | Frequency and determinants of diagnostic error. | For which target disorders are errors most likely? |
 | | Under which conditions are we most prone to diagnostic error? |
13. Studies of use of computerized CDSS for diagnosis [99] | Impact of CDSS on diagnostic outcomes. | Should we implement this CDSS to improve our clinical diagnoses and reduce diagnostic error? |
14. Randomized trials of interventions to reduce diagnostic errors [90,91] | Effectiveness of interventions on errors. | By what methods can we most effectively reduce our chances of diagnostic error? |
15. Evidence-based practice guidelines of diagnostic or screening strategies [96,100–102] | Summary levels of evidence supporting use of strategies. | What are the recommended strategies for diagnosis or screening for this target disorder? |
 | Graded recommendations for diagnosis or screening. | When should these recommendations be followed, and when should they not? |
Abbreviation: CDSS, clinical decision support systems.
Yet having diagnostic evidence so narrowly focused also comes with two opportunity costs for clinicians. First, when research evidence ignores clinicians' other diagnostic questions (see right-hand column of Table 2), it preemptively dismisses opportunities to use other evidence to provide answers. For instance, for a patient with involuntary weight loss, a diagnostician might ask which of its many causes should be sought in this patient, or whether a particular disorder should be sought. Note that although these questions are not informed directly by evidence about test accuracy, they could be answered by evidence about disease probability for differential diagnosis. As these unanswered questions accumulate, so do the lost opportunities to integrate evidence into diagnostic decisions. Second, by not addressing their other questions, clinicians might even become disempowered to use test accuracy evidence. For instance, after reviewing a synthesis of evidence of test accuracy, a clinician might ask, “Where do the pretest probabilities come from?” [28]. Both of the “usual” answers to this question (“from clinical experience” and “population prevalence”) have limitations for most diagnostic situations [28–30]. This leaves clinicians unable to estimate pretest probability soundly, so it should not surprise us if clinicians give up trying to use test accuracy evidence quantitatively according to Bayes' theorem. Instead, with the broader view of evidence-based diagnosis, clinicians could be guided to research evidence that can inform estimates of pretest probability, whether clinical decision rules [31] or direct studies of disease probability [32], because such studies may be more frequently available than is widely recognized [33].

A narrow focus on test accuracy may also have three opportunity costs for investigators. First, to the extent that clinicians are disempowered to use evidence, the results of investigators' efforts will not be acted upon, and these lines of research risk being viewed as irrelevant and less fundable. Second, the narrowed focus could lead to reduced attention to solving methodologic problems in the 13 other types of evidence, so that these types could lag behind in quality and amount. Third, slow or poor methodologic advances could even retard solutions to problems in test accuracy research. For instance, studies of test accuracy may be done on patient samples that differ in the frequency of the underlying disease, and these differences may be considered by some as a source of confounding [34–36]. What if the methods for studies of disease probability were more advanced than they are now, to include a taxonomy of sources of differences in prevalence, empiric measurements of the impact of these differences, and methods for adjusting for these variations? If well developed and widely accepted, such methodologic standards for disease probability research could bring insight into how best to handle the parallel issues in studies of test accuracy or their systematic reviews.

3. The probabilistic diagnostic tradition
Before turning to the barriers involving diagnosticians, let us review briefly how research evidence could be used to inform diagnostic inferences. As Table 2 shows, almost all 15 types of research will yield quantitative results, whether frequencies, proportions, ratios, and so forth. We can use these quantitative results to estimate and revise probabilities of disease, thereby supporting inferences in what can be called the probabilistic diagnostic tradition [37]. Using this approach, a clinician identifies a plausible target disorder that could cause the patient's illness, and then quantifies the uncertainty by estimating the independent probability of this disorder being present, before any additional information is known, termed the pretest probability [38,39]. As new information arrives, it can serve to raise or lower the probability of disease, depending on its discriminatory power, described in conditional probability terms, such as with the likelihood ratio or with sensitivity and specificity [40–42]. The resulting changed probability, termed the posttest probability, can be interpreted in relation to diagnostic probability thresholds [43–45]. As the probability approaches 1, the diagnosis becomes nearly certain, yet short of this absolute proof is a threshold of sufficient certainty above which the clinician would consider the diagnosis confirmed. Alternatively, as the probability approaches 0, the diagnosis is excluded, yet short of this absolute disproof is a threshold of sufficiently low likelihood below which the clinician would not consider the diagnosis further [38–45].

Using this probabilistic approach to diagnostic thinking requires several cognitive skills of the diagnostician. First, it requires adequate skill in translating diagnostic uncertainty into the language of probability. Second, it requires access to credible information about the relevant probabilities, including independent, conditional, and threshold probabilities. Third, it requires sufficient arithmetic skill to revise probabilities accurately. Fourth, it requires the skills to compare posttest probabilities to the relevant probability thresholds, and then "back translate" from probabilities into diagnostic decisions and actions. In addition, using this approach wisely means understanding when and in which situations it applies and when it does not, as well as knowing how to reconcile conflicts that may arise when integrating the probabilistic tradition with knowledge and inferences from other diagnostic traditions (more on this in a moment).
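To make the arithmetic of probability revision concrete, consider a worked example; the numbers are illustrative only and are not drawn from any particular study. Suppose a test has sensitivity 0.80 and specificity 0.90, and the pretest probability of the target disorder is 0.20. The odds form of Bayes' theorem gives:

$$LR^{+} = \frac{\text{sensitivity}}{1 - \text{specificity}} = \frac{0.80}{0.10} = 8$$

$$\text{pretest odds} = \frac{0.20}{1 - 0.20} = 0.25, \qquad \text{posttest odds} = 0.25 \times 8 = 2$$

$$\text{posttest probability} = \frac{2}{1 + 2} \approx 0.67$$

If, say, the threshold for considering the diagnosis confirmed were 0.80, this single positive result would move the clinician closer to, but not past, that threshold, so further testing would be warranted.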
4. Barriers related to diagnosticians
The first two barriers related to diagnosticians listed in Table 1 are rooted in the preparation they undergo during their formal education. First, they may not have ever learned the probabilistic tradition of diagnosis, in which case they would be ill prepared to use any of the types of research evidence listed in Table 2. Second, even if clinicians were taught the probabilistic tradition, that education might have been limited in scope or effectiveness, so these diagnosticians may not have learned how to use the full range of research in Table 2, limiting their effective use of these types of evidence.
The third and fourth barriers listed are likely to be multifactorial, mixing issues of skill, motivation, and environment. Some diagnosticians may be unable or unwilling to use the language of probability to express uncertainty, to carry out the arithmetic for probability revision, or to compare posttest probabilities to thresholds for diagnostic action. If so, they would not be able to fully use the quantitative power of this research for diagnostic decision making. Some diagnosticians may find themselves deferring to others' diagnostic authority, either intermittently (when patient illnesses are beyond their field) or habitually, such that they would seldom be in a position to use research evidence, simply because they make so few diagnostic decisions on their own.
4.1 Other analytic diagnostic traditions
The fifth barrier listed bears further exploration. So far, this discussion has centered on the probabilistic tradition of analytic thought. Yet this is not the only approach: at least six traditions of analytic diagnostic thinking can be recognized, as listed in Table 3 [46]. Whereas a full discussion of these traditions is beyond the scope of this article, seven points are worth considering here. First, each tradition has a historical basis, measured in decades or centuries, complete with progenitors, proponents, and detractors. Second, each tradition is based on certain forms of knowledge learned in specific ways (e.g., anatomic study underlies the anatomic tradition, and clinical care research underlies the probabilistic approach), so these traditions represent complementary forms of diagnostic epistemology. Third, each tradition uses a different way of building the case for or against a particular diagnosis as the explanation of a patient's illness, so these traditions represent complementary forms of diagnostic rhetoric. Fourth, these traditions are not mutually exclusive, in that hybrids may exist, and they may not be jointly exhaustive, in that other useful traditions may be found now or in the future. Fifth, different specialties may draw upon these analytic traditions in differing proportions, in different sequences, and at different times. For example, my neurologic colleagues regularly exhort me to think anatomically, “Where is the lesion?,” before thinking probabilistically, “What usually causes lesions there?” Sixth, several of the traditions are by nature deterministic and may appear quite certain, whereas the probabilistic tradition is by nature stochastic and makes diagnostic uncertainty explicit. Seventh, these traditions are traditions, that is, they have stood the test of time, precisely because each has been found to be useful for clinical diagnosis.

Table 3. Some traditions of analytic diagnostic thinking
Tradition | Description | Decisions involved |
---|---|---|
Descriptive | Using a taxonomy of human illness based on detailed descriptions, the clinician identifies which class of disorders best fits the features of an individual person's illness. | Involves judging how well a person's illness fits the diseases in the taxonomy, and the “best fit wins.” |
Criteria based | Adds an explicit set of diagnostic criteria for each disease in taxonomy, which the clinician uses to find the best match with the features of an individual person's illness. | Requires an explicit set of criteria for each diagnosis, along with a scoring rule (e.g., how many criteria are needed to qualify). |
Anatomic | Examination of the patient yields anatomic findings, from which is inferred the cause of an individual person's illness. | Requires that the target disorder manifest some anatomic abnormality, whether at the gross, the microscopic, or the molecular level. |
Pathophysiologic | Testing of the patient or specimens detects a pathophysiologic state, from which is inferred the cause of an individual person's illness. | Requires that the target disorder manifest some detectable pathophysiologic derangement. |
Probabilistic | Clinical findings and test results are used to revise the probability of disease, until target disorder is confirmed or excluded. | Involves quantifying the uncertainty in diagnosis and the discriminatory power of findings or tests. |
Biopsychosocial | Examination of the patient and life context yield clues about psychologic or social well-being, which are integrated with biologic issues to identify cause of patient's illness. | Aims to integrate the biologic, the psychologic, and the sociologic dimensions into a more complete understanding of the patient's suffering. |
So what is the problem? For one thing, clinicians seldom, if ever, learn about all six of these diagnostic traditions at the same time, in an explicit and fully integrated fashion. As mentioned, many clinicians have not been taught the probabilistic approach at all. Even if they were taught it at some point, clinicians would usually have studied anatomy, pathophysiology, and disease descriptions and criteria well before they encounter the probabilistic approach, so they may be better prepared to use those analytic diagnostic traditions. Because those other approaches are quite useful, clinicians will be very reluctant to abandon them if their teachers of the probabilistic tradition insist they do so when learning about test accuracy. Further, clinicians are seldom taught explicitly when and for which problems each tradition is most useful, or how to negotiate any disagreements that may arise between two or more approaches. Because of these factors, along with discomfort about the overt uncertainty of the probabilistic method, we might expect many clinicians to be less well prepared to use the probabilistic approach than other diagnostic traditions, and thus reluctant or resistant to use research evidence for diagnosis, even when it would be very effective and efficient to do so.
4.2 “Analytic” and “Non-analytic” diagnostic thinking
The sixth barrier related to diagnosticians also bears further exploration. As novices, clinicians are taught to use a deliberate, analytic mode of diagnostic thought, using some or all of the traditions noted above [47–50]. In this mode, specific case details stimulate the recall of formal knowledge, which clinicians use to make inferences about patient findings and deduce the correct diagnoses [47–50]. Using this analytic mode reliably requires that clinicians access this formal knowledge and use it to make sound diagnostic inferences, so this mode is vulnerable to incomplete or outdated knowledge and to poor inferential logic [39,48,50,51].

Yet studies in the cognitive sciences repeatedly suggest that clinicians make many everyday diagnoses in a “non-analytic” fashion [52–55]. In this alternative mode of diagnostic thought, the clinician rapidly recognizes the patient's illness as an instance of a familiar disorder, based on similarities with illnesses in other patients [52,54–57]. To make this mode work well, clinicians need prior experience with the specific target disorder being seen; other experiences will not help much. When it works, the “non-analytic” mode of diagnostic thought is very fast and may be quite accurate, although it is vulnerable to “look-alike” disorders and to atypical presentations of disease.

How could this create barriers to the use of research evidence? First, the more frequently that clinical diagnoses are made in the “non-analytic” mode, the less often clinicians will need to use the analytic mode, which would reduce the opportunities for probabilistic reasoning. This could limit the potential impact of diagnostic research, particularly diagnostic practice guidelines. Keep in mind that for expert diagnosticians seeing familiar clinical problems, this “non-analytic” mode could be faster, more efficient, and just as accurate as the slower, analytic approach. Even for experts, though, there will be disorders or circumstances that are less familiar, and for which the analytic mode will be required to reach the correct explanation of the patient's illness. In these situations, if the diagnostician is unable or unwilling to use the appropriate analytic mode, this “mode mismatch” represents another lost opportunity to use evidence from diagnostic research, and may contribute to diagnostic error.
5. Barriers related to health care delivery systems
The first two barriers listed arise from how loudly the hectic and heavily conflicted environment of modern clinical practice “out-shouts” the softly whispered messages of evidence from diagnostic research. First, several forces converge to pressure clinicians to rapidly label patients' illnesses with diagnoses. These include patients' needs for explanations of their suffering, clinicians' aims to use diagnoses as springboards for offering cause-specific therapy, coding and billing requirements that stipulate diagnostic labels for reimbursement of health care visits, and documentation requirements for purposes such as audit or quality improvement, to name a few. These “throughput” concerns create disincentives for clinicians to use the more deliberate analytic mode of diagnostic thought, including probabilistic reasoning, leading to lost opportunities for research to guide diagnostic decisions. Second, the overall mixture of the desire for “rewards” (e.g., meeting patient demands or receiving reimbursement) and the fear of “punishments” (e.g., receiving “quality demerits” or being sued for malpractice) can be expected to strongly influence clinicians' behavior, in ways that may directly conflict with the guidance of diagnostic research evidence.
5.1 Lack of integrated, evidence-based knowledge resources
The next two barriers listed arise in health care delivery systems that hold, or simply fall into, the belief that the organization should share none of the responsibility for optimizing the quality of clinical diagnoses for the patients they serve. This “not my job” belief has two main negative consequences for the use of evidence in clinical diagnosis. First, in such health care systems, the existing patient data and health record systems seldom include knowledge resources to support and guide diagnostic efforts on behalf of their patients. Even when they do, these resources virtually never explicitly integrate synopses of synthesized diagnostic research evidence, along with other types of knowledge, into clinically realistic, biologically sound, and evidence-based systematic approaches to the diagnosis of presenting clinical problems [58,59]. By not supporting the delivery of integrated knowledge resources that bring evidence to the point of care, health care organizations lose a major opportunity to integrate the results of clinical care research into diagnostic decision making.

For instance, when considering the differential diagnosis for a patient with hemoptysis, the clinician must select disorders to pursue and estimate the pretest probability for these conditions. Yet even experienced diagnosticians have been found to have difficulties estimating pretest probabilities accurately [60–68], for several reasons [69–71]. Research evidence has been published that could guide clinicians to make these estimates soundly, yet without a commitment to make this knowledge accessible at the point of care, the health care system contributes to another lost opportunity to integrate evidence into diagnostic decisions.

5.2 Lack of a “systems approach” to diagnostic error
The second major negative consequence of this “not my job” belief is the near total neglect of the issue of diagnostic error at the health care system level [72]. Despite diagnostic errors being the second most frequently found cause of medical error, and despite the growing recognition of the value of using a “systems approach” to reduce other types of error, such as medication errors, many health care delivery systems do not appear to recognize the systems-level aspects of this problem, thereby perpetuating the “culture of blame” toward individual clinicians for diagnostic errors [72–78]. By not redesigning their diagnostic processes, health care organizations are not only sustaining the error-prone systems that enable diagnostic errors, but also losing enormous opportunities to incorporate research evidence into explicit strategies to reduce the frequency and impact of these errors.

6. Overcoming the barriers to evidence-based clinical diagnosis
Some solutions to these barriers are tentatively proposed in Table 4. For barriers related to the evidence, more diagnostic research could be undertaken, ideally of all 15 types listed in Table 2. Also, the methodologic standards for each of the 15 types could be advanced and disseminated, analogous to the STARD initiative [27], providing investigators broader guidance for doing high-quality studies of all types. Further, systematic reviews could be undertaken for a broader range of evidence types, such as evidence syntheses of clinical decision rules [79]. Teams of clinicians and methodologists could develop new knowledge about the comprehensibility, usefulness, and impact of different ways to portray the quantitative results of diagnostic research [80–82]. The results of this work could inform the choices involved in providing concise yet usable synopses of diagnostic evidence. New knowledge could also be developed about how best to integrate the summarized evidence with other knowledge useful for clinical diagnosis, such as selected anatomy and pathophysiology, and with clinical expertise. This new knowledge could inform how we go about incorporating evidence into active decision supports embedded within electronic health records. Preliminarily, we might expect that the different types of evidence listed in Table 2 would be useful to summarize and present to clinicians at different points of diagnostic thinking, such as when gathering clinical findings, selecting a patient-specific differential diagnosis, choosing a test strategy, interpreting test results, or verifying the final diagnosis [1,46].

Table 4. Potential solutions to address barriers to the use of evidence in diagnosis
Categories of barriers | Potential solutions |
---|---|
Evidence issues | |
No published research at all. | Undertake research in needed areas. |
Evidence incomplete or contradictory. | Undertake systematic reviews of existing evidence, highlighting gaps in knowledge. |
 | Undertake research designed to fill gaps. |
Evidence is not accessible at the points of diagnostic thinking. | Integrate synopses of diagnostic evidence into electronic health records in ways that can be used at times of diagnostic decision making, and that allow “drilling down” to the level of evidence syntheses or even the level of individual studies, as needed. |
Published research focused too narrowly, e.g., only on test accuracy. | Undertake research of all the types listed in Table 2. |
 | Develop methodologic standards for each type of evidence in Table 2. |
Diagnostician issues | |
Unfamiliarity with probabilistic tradition. | Incorporate the probabilistic tradition into formal education in diagnostic reasoning. |
Unfamiliarity with specific evidence type. | Teach the sound use of all the evidence types listed in Table 2. |
Inability or unwillingness to quantify uncertainty or carry out arithmetic of probability revision. | Help diagnosticians find motivation to use the probabilistic approach. |
 | Improve, and make more available, tools and calculators that help diagnosticians revise probability quickly yet accurately. |
Deference to others' diagnostic authority. | Rebuild group cultures, norms, and expectations regarding diagnosis. |
Inability or unwillingness to integrate probabilistic tradition with other traditions and negotiate any conflicts in approach. | Teach explicitly how to integrate the probabilistic approach and negotiate conflicts with the other traditions in Table 3. |
Mismatch of thinking modes, e.g., use of “non-analytic” mode when analytic is needed for particular patient illness. | Teach explicitly when to use analytic and non-analytic modes, for which situations, and in which types of patients. |
Health care system issues | |
Multiple forces converge to pressure for early diagnostic labeling. | Reduce or reverse incentives for premature diagnostic labeling. |
Both “rewards” (e.g., reimbursement, patient satisfaction) and “punishments” (e.g., quality measures, malpractice) may reinforce doing tests of limited utility. | Reduce or reverse incentives for ordering tests of limited utility. |
Existing knowledge resources do not incorporate evidence and are not integrated into health records and other systems. | Rebuild knowledge resources to incorporate diagnostic research evidence. |
 | Integrate evidence-based knowledge resources into electronic health records. |
Despite using systems approaches for reducing other types of medical error, health care organizations tend to view diagnostic error as the fault of individuals. | Replace the “culture of blame” with a culture of ongoing improvement and teamwork. |
 | Undertake systems reengineering to reduce the frequency and impact of diagnostic errors. |
Working past the barriers related to diagnosticians should begin with earlier and more explicit introduction of the probabilistic tradition within formal curricula, including the use of all 15 types of evidence listed in Table 2. This could be coupled with greater educational attention to when and how to integrate the probabilistic approach with other useful analytic diagnostic traditions, and how to resolve conflicts among them should they arise. The prevailing cultures of medical education and clinical practice could be changed to reduce the reliance on, and deference to, clinical authorities, while still celebrating and positively exploiting those with exceptional diagnostic acumen. Furthermore, we might hope to find ways to build clinicians' skills for both analytic and “non-analytic” modes of diagnostic thought.
Overcoming the barriers related to health care systems could begin with interventions to reduce the incentives for both premature diagnostic labeling and “defensive testing.” Steps could also be taken to reduce the culture of blame for mistaken diagnoses, while new knowledge is generated through more research into the determinants of diagnostic error. This new knowledge could then inform the broader use of systems approaches to reduce the frequency and impact of diagnostic errors.
Even without full prototypes, we can start to imagine what integrated, explicitly evidence-based diagnostic knowledge resources might look like. For instance, when seeing the patient with hemoptysis mentioned before, what if the clinician had rapid access to the following: (1) a compilation of all their own prior cases of hemoptysis, with the frequency of underlying diseases found; (2) a compilation of all the recent cases (e.g., last 5 years) of hemoptysis seen in that health care system, with the frequency of underlying disorders found; and (3) a synopsis of a systematic review of all the published studies of the frequency of causes of hemoptysis? The clinician would then have the combined power of knowledge from a practice database (probably better calibrated to the clinician's own practice) and knowledge from the published research evidence (potentially more rigorously derived) to inform the selection of the differential diagnosis and the estimation of pretest probability for the current patient with hemoptysis [28]. Once the differential diagnosis is selected and the probabilities estimated, the knowledge resource could then offer integrated, evidence-based recommendations for test strategies, along with pragmatic information about availability, cost, and patient instructions, and possibly links to test scheduling systems. Next, what if, when interpreting a positive test result, the clinician had immediate access to an interactive likelihood ratio nomogram [83], or other informatics tools to assist in deriving the posttest probability of the disorder? Further, what if the clinician could also compare these probabilities with formally derived threshold probabilities, both from published decision analyses and from a formal threshold-setting process undertaken at the health system level [38,43–45]? Finally, this imagined knowledge resource might link the clinician to diagnostic criteria and other knowledge that can aid in diagnostic verification [46].
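As a sketch of what the computational core of such an informatics tool might look like, the fragment below implements the odds form of Bayes' theorem and a comparison with action thresholds. It is a minimal illustration, not any published system; the function names and the threshold values are hypothetical placeholders, standing in for thresholds that would in practice come from decision analyses or a health-system process.

```python
def posttest_probability(pretest_p: float, likelihood_ratio: float) -> float:
    """Revise a disease probability with a likelihood ratio (odds form of Bayes' theorem)."""
    pretest_odds = pretest_p / (1.0 - pretest_p)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1.0 + posttest_odds)


def next_action(p: float, exclude_below: float = 0.05, confirm_above: float = 0.80) -> str:
    """Compare a posttest probability with action thresholds.

    The default thresholds here are hypothetical; a real system would draw them
    from published decision analyses or a formal threshold-setting process.
    """
    if p >= confirm_above:
        return "consider diagnosis confirmed"
    if p <= exclude_below:
        return "consider diagnosis excluded"
    return "diagnosis uncertain: further testing or observation needed"


# Example: pretest probability 0.20, positive result with likelihood ratio 8
p = posttest_probability(0.20, 8.0)  # 2/3, matching the worked example above
print(f"posttest probability = {p:.2f} -> {next_action(p)}")
```

Embedded behind an interactive nomogram or calculator, such logic would let the clinician move from pretest estimate to threshold-based action in a few keystrokes at the point of care.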
7. Why we should overcome these barriers
In aggregate, the barriers listed in Table 1 lead to two main consequences. First, a reduced fraction of all the high-quality diagnostic evidence that is published would be understood and used, thereby diminishing the impact of researchers' efforts and raising the “cost per impact” quotient of diagnostic research. Second, a reduced proportion of all the diagnostic decisions that could be guided by evidence would be, thereby reducing the chances that patients will actually benefit from the research. Through both pathways, the barriers considered here contribute to the wide gap between what is known from research and what is done in practice. They contribute to the inefficiencies of our diagnostic endeavors and raise the costs of care for little or no benefit. These barriers also negatively affect the quality of clinical diagnoses, and the frequency and impact of diagnostic error. Overcoming these barriers will take resources, and may be judged by some as expensive, yet not working past them is expensive too, maybe even more so. Thus, our motivations to overcome these barriers include wanting to improve diagnostic processes, to improve patient outcomes, to reduce diagnostic error, to reduce the costs of care, and to boost the wise use of evidence in everyday clinical diagnosis. We could, and should, find the will to overcome these barriers.
8. Conclusion
In conclusion, the potential barriers to the wise use of health care research evidence in decisions for clinical diagnosis have been categorized into those related to the evidence itself, those related to diagnosticians, and those related to health care systems. For each category, several barriers were listed and some were explored in depth. Possible ways to work past these barriers have been suggested, and the reasons why we should do this have been articulated.
References
- Evidence-based diagnosis: more is needed [EBM Note]. Evidence-Based Medicine. 1997; 2: 70-71
- Knottnerus J.A. The evidence base of clinical diagnosis. BMJ Books, London, UK; 2002
- Clinical biostatistics. XXXI. On the sensitivity, specificity, and discrimination of diagnostic tests. Clin Pharmacol Ther. 1975; 17: 104-116
- Problems of spectrum and bias in evaluating the efficacy of diagnostic tests. N Engl J Med. 1978; 299: 926-930
- A bibliography of publications on observer variability. J Chronic Dis. 1985; 38: 619-632
- A bibliography of publications on observer variability (final installment). J Clin Epidemiol. 1992; 45: 567-580
- Biases in assessment of diagnostic tests. Stat Med. 1987; 6: 411-423
- Use of methodological standards in diagnostic test research: getting better but still not good. JAMA. 1995; 274: 645-651
- Empirical evidence of design-related bias in studies of diagnostic tests. JAMA. 1999; 282: 1061-1066
- The Blame-X syndrome: problems and lessons in nosology, spectrum, and etiology. J Clin Epidemiol. 2001; 54: 433-439
- Clinical biostatistics. XXXIX. The haze of Bayes, the aerial palaces of decision analysis, and the computerized Ouija board. Clin Pharmacol Ther. 1977; 21: 482-496
- Academic calculations versus clinical judgments: practicing physicians' use of quantitative measures of test accuracy. Am J Med. 1998; 104: 374-380
- Misguided efforts and future challenges for research on 'diagnostic tests'. J Epidemiol Community Health. 2002; 56: 330-332
- Challenges and opportunities in evaluating diagnostic tests. J Clin Epidemiol. 2002; 55: 1178-1182
- Of studies, syntheses, synopses, and systems: the "4S" evolution of services for finding current best evidence [Editorial]. ACP J Club. 2001; 134: A11-A13
- Does this dyspneic patient in the emergency department have congestive heart failure? JAMA. 2005; 294: 1944-1956
- The architecture of diagnostic research. BMJ. 2002; 324: 539-541
- Assessment of the accuracy of diagnostic tests: the cross-sectional study. J Clin Epidemiol. 2003; 56: 1118-1128
- Guidelines for meta-analyses evaluating diagnostic tests. Ann Intern Med. 1994; 120: 667-676
- Meta-analytic methods for diagnostic test accuracy. J Clin Epidemiol. 1995; 48: 119-130
- Systematic reviews of evaluations of diagnostic and screening tests. In: Egger M. Smith G.D. Altman D.G. Systematic reviews in health care: meta-analyses in context. BMJ Books, London, UK; 2001
- Summarizing studies of diagnostic test performance. Clin Chem. 2003; 49: 1783-1784
- Clinical epidemiology: a basic science for clinical medicine, 2/e. Little, Brown, & Co., Boston, MA; 1991
- Black E.R. Bordley D.R. Tape T.G. Panzer R.J. Diagnostic strategies for common medical problems, 2/e. ACP, Philadelphia, PA; 1999
- Rational diagnosis and treatment: evidence-based clinical decision making, 3/e. Blackwell Science, Oxford, UK; 2000
- The winding road towards evidence based diagnosis. J Epidemiol Community Health. 2002; 56: 323-325
- Towards complete and accurate reporting of studies of diagnostic accuracy: the STARD initiative. Ann Intern Med. 2003; 138: 40-44
- Where do pretest probabilities come from? [Editorial]. Evidence-Based Medicine. 1999; 4: 68-69
- A new arrival—evidence about differential diagnosis [Editorial]. ACP J Club. 2000; 133: A11-A12
- Five uneasy pieces about pretest probability. J Gen Intern Med. 2002; 17: 882-883
- Users' guides to the medical literature. XXII. How to use articles about clinical decision rules. JAMA. 2000; 284: 79-84
- Users' guides to the medical literature. XV. How to use an article about disease probability for differential diagnosis. JAMA. 1999; 281: 1214-1219
- Could our pretest probabilities become evidence based? A prospective survey of hospital practice. J Gen Intern Med. 2003; 18: 203-208
- Designing studies to ensure that estimates of test accuracy are transferable. BMJ. 2002; 324: 669-671
- Spectrum bias or spectrum effect? Subgroup variation in diagnostic test evaluation. Ann Intern Med. 2002; 137: 598-602
- Sources of variation and bias in studies of diagnostic accuracy: a systematic review. Ann Intern Med. 2004; 140: 189-202
- Diagnostic reasoning. Ann Intern Med. 1989; 110: 893-900
- Sox H.C. Jr. Blatt M.A. Higgins M.C. Marton K.I. Medical decision making. Butterworth-Heinemann, Boston, MA; 1988
- Learning clinical reasoning. Williams & Wilkins, Baltimore, MD; 1991
- Guyatt G.H. Rennie D.R. Users' guides to the medical literature: a manual for evidence-based clinical practice. AMA Press, Chicago, IL; 2002
- Clinical epidemiology: the essentials, 4/e. Lippincott, Williams & Wilkins, Baltimore, MD; 2005
- Straus S.E. Richardson W.S. Glasziou P. Haynes R.B. Evidence-based medicine: how to practice and teach EBM, 3/e. Churchill-Livingstone, Edinburgh, UK; 2005
- The threshold approach to clinical decision making. N Engl J Med. 1980; 302: 1109-1117
- Making medical decisions: an approach to clinical decision making for practicing physicians. ACP Publications, Philadelphia, PA; 1999
- Hunink M. Glasziou P. Decision making in health and medicine: integrating evidence and values. Cambridge University Press, Cambridge, UK; 2001
- Integrating evidence into clinical diagnosis, Ch. 6. In: Montori V.M. Evidence-based endocrinology. Humana Press, Totowa, NJ; 2006: 69-89
- Developing clinical problem solving skills: a guide to more effective diagnosis and treatment. WW Norton, New York, NY; 1991
- Barondess J.A. Carpenter C.C.J. Differential diagnosis. Lea and Febiger, Philadelphia, PA; 1994
- Diagnosis: a brief introduction. Oxford University Press, Melbourne, Australia; 1996
- Doctor and patient: exploring clinical thinking. UNSW Press, Sydney, Australia; 1999
- Problems for clinical judgment: introducing cognitive psychology as one more basic science. CMAJ. 2001; 164: 358-360
- The non-analytical basis of clinical reasoning. Adv Health Sci Educ. 1997; 2: 173-184
- Medical problem solving: an analysis of clinical reasoning. Harvard University Press, Cambridge, MA; 1978
- What every teacher needs to know about clinical reasoning. Med Educ. 2004; 39: 98-106
- Research in clinical reasoning: past history and current trends. Med Educ. 2005; 39: 418-427
- Mental representations of medical diagnostic knowledge: a review. Acad Med. 1996; 71: S55-S61
- The epistemology of clinical reasoning: perspectives from philosophy, psychology, and neuroscience. Acad Med. 2000; 75: S127-S136
- Clinical decision support systems for the practice of evidence-based medicine. J Am Med Inform Assoc. 2001; 8: 527-534
- About time: diagnostic guidelines that help clinicians. Qual Saf Health Care. 2003; 12: 205-209
- An evaluation of clinicians' subjective prior probability estimates. Med Decis Making. 1986; 6: 216-223
- Clinical assessment of the probability of coronary artery disease: judgmental bias from personal knowledge. Med Decis Making. 1992; 12: 197-203
- Diagnostic accuracy of predicting coronary artery disease related to patients' characteristics. J Clin Epidemiol. 1994; 47: 389-395
- The effect of changing disease risk on clinical reasoning. J Gen Intern Med. 1994; 9: 488-495
- Quantitative evaluation of the diagnostic thinking process in medical students. J Gen Intern Med. 2002; 11: 839-844
- Probabilistic reasoning and clinical decision-making: do doctors overestimate diagnostic probabilities? QJM. 2003; 96: 763-769
- Generating pre-test probabilities: a neglected area in clinical decision making. Med J Aust. 2004; 180: 449-454
- Pretest probability estimates: a pitfall to the clinical utility of evidence-based medicine? Acad Emerg Med. 2004; 11: 692-694
- Variability in diagnostic probability estimates [Research Letter]. Ann Intern Med. 2004; 141: 578-579
- Judgment under uncertainty. Science. 1974; 185: 1124-1131
- Systematic errors in medical decision making: judgment limitations. J Gen Intern Med. 1987; 2: 183-187
- Rationality in medical decision making: a review of the literature on doctors' decision-making biases. J Eval Clin Pract. 2001; 7: 97-107
- Diagnostic errors in medicine: a case of neglect. Jt Comm J Qual Patient Saf. 2005; 31: 106-113 (PMID: 15791770)
- Cognitive errors in diagnosis. Am J Med. 1989; 86: 433-441
- Why did I miss the diagnosis? Some cognitive explanations and educational implications. Acad Med. 1999; 74: S138-S143
- Reducing diagnostic errors in medicine: what's the goal? Acad Med. 2002; 77: 981-992
- Diagnostic errors. Acad Emerg Med. 2002; 9: 740-750
- Diagnostic error in internal medicine. Arch Intern Med. 2005; 165: 1493-1499
- The cognitive psychology of missed diagnoses. Ann Intern Med. 2005; 142: 115-120
- Validity of clinical prediction rules for isolating inpatients with suspected tuberculosis: a systematic review. J Gen Intern Med. 2005; 20: 947-952
- Which methods for bedside Bayes? [Editorial]. ACP J Club. 2001; 135: A11-A12
- A randomized trial of ways to describe test accuracy: the effect on physicians' post-test probability estimates. Ann Intern Med. 2005; 143: 184-189
- Making medical research clinically friendly: a communication-based conceptual framework. Educ Health (Abingdon). 2004; 17: 374-384
- Centre for Evidence-Based Medicine. Available at: http://www.cebm.net/nomogram.asp. Accessed May 12, 2006.
- Users' guides to the medical literature. III. How to use an article about a diagnostic test. A. Are the results of the study valid? JAMA. 1994; 271: 389-391
- Users' guides to the medical literature. III. How to use an article about a diagnostic test. B. What are the results and will they help me in caring for my patients? JAMA. 1994; 271: 703-707
- Users' guides to the medical literature. VI. How to use an overview. JAMA. 1994; 272: 1367-1371
- Users' guides to the medical literature. XXIV. How to use an article on the clinical manifestations of disease. JAMA. 2000; 284: 869-875
- Users' guides to the medical literature. IV. How to use an article about harm. JAMA. 1994; 271: 1615-1619
- Users' guides to the medical literature. V. How to use an article about prognosis. JAMA. 1994; 272: 234-237
- Users' guides to the medical literature. II. How to use an article about therapy or prevention. A. Are the results of the study valid? JAMA. 1993; 270: 2598-2601
- Users' guides to the medical literature. II. How to use an article about therapy or prevention. B. What are the results and will they help me in caring for my patients? JAMA. 1994; 271: 59-63
- Users' guides to the medical literature. VII. How to use a clinical decision analysis. A. Are the results of the study valid? JAMA. 1995; 273: 1292-1295
- Users' guides to the medical literature. VII. How to use a clinical decision analysis. B. What are the results and will they help me in caring for my patients? JAMA. 1995; 273: 1610-1613
- Users' guides to the medical literature. XIII. How to use an article on economic analysis of clinical practice. A. Are the results of the study valid? JAMA. 1997; 277: 1552-1557
- Users' guides to the medical literature. XIII. How to use an article on economic analysis of clinical practice. B. What are the results and will they help me in caring for my patients? JAMA. 1997; 277: 1802-1806
- Users' guides to the medical literature. XVII. How to use guidelines and recommendations about screening. JAMA. 1999; 281: 2029-2034
- Users' guides to the medical literature. X. How to use an article reporting variations in the outcomes of health services. JAMA. 1996; 275: 554-558
- Users' guides to the medical literature. XI. How to use an article about a clinical utilization review. JAMA. 1996; 275: 1435-1439
- Users' guides to the medical literature. XVIII. How to use an article evaluating the clinical impact of a computer-based clinical decision support system. JAMA. 1999; 282: 67-74
- Users' guides to the medical literature. VIII. How to use clinical practice guidelines. A. Are the recommendations valid? JAMA. 1995; 274: 570-574
- Users' guides to the medical literature. VIII. How to use clinical practice guidelines. B. What are the recommendations and will they help you in caring for your patients? JAMA. 1995; 274: 1630-1632
- Users' guides to the medical literature. IX. A method for grading health care recommendations. JAMA. 1995; 274: 1800-1804
Article info
Publication history
Published online: September 6, 2006. Accepted: June 7, 2006.
Copyright
© 2007 Elsevier Inc. Published by Elsevier Inc. All rights reserved.