Predictably Unhelpful: Why Clinicians Do Not Use Prediction Rules

In the third year of my residency, I was given Clinical Epidemiology: A Basic Science for Clinical Medicine.1 Although that wonderful book has since gone out of print, it was transformative to my nascent practice of medicine. Written by the founders of the evidence-based medicine movement, it cogently argued in favor of using heady things such as likelihood ratios, treatment thresholds, and nomograms at the bedside to guide clinical decision making. I devoured it and taught its contents to the residents I subsequently supervised.

While I remain a firm believer in the philosophy of evidence-based medicine, I have grown increasingly skeptical about how it is to be operationalized. In the years since I read the book by Sackett et al,1 hundreds of decision tools in a variety of forms—guidelines, practice parameters, prediction rules—have been generated. Some have been good, some bad; some have been validated, others not. What they all have in common is that their overall use remains poor at best. In the meantime, those of us in academia continue to create them and those of us on editorial boards continue to vet them for methodological rigor. The cottage industry of decision tools has at least the appearance of an academic jobs program because, to clinicians in the real world, their utility remains largely unproven. For example, there are no fewer than 10 clinical prediction rules for something as common as streptococcal pharyngitis, and I would be surprised if most clinicians use even one.

Some (myself included at one point) would argue that the problem is merely one of dissemination.2 All we have to do is put these tools into the hands of busy clinicians and help integrate them into their practice. This approach has led to, among other things, integrated decision support, so that virtually no click in an electronic medical record passes without a prompt of some kind. The resultant “pop-up fatigue” has led most of us to click quickly past the latest reminder, significantly reducing the efficacy of the entire endeavor. In fact, what once seemed like a promising avenue of research—real-time messaging—may well end up a blind alley.

Others have argued that the problem of decision-tool uptake can be overcome by education; that is, too many clinicians lack the expertise to apply epidemiological principles to their practice. However, we have found that providing education about test characteristics often yields conflicting and counterintuitive results.3,4 In some cases, increased knowledge improved clinicians' estimates of disease probability; in other cases, it made them worse; and in many cases, it did not affect their therapeutic decision making at all.

Ultimately, I think the problem with decision tools is not their validity or their dissemination but the underlying philosophical approach that many clinicians take to the practice of medicine. The idea that clinicians would “go Bayesian” at the bedside is intriguing, but in the end it is the purest form of an academic exercise—great on teaching rounds, useless in practice. Clinicians are not so meticulously quantitative in their clinical reasoning. They are either unable or unwilling to articulate a treatment threshold, as aficionados of evidence-based medicine would have them do, and then act on it. A patient either does or does not have a disease.
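As a reminder of what “going Bayesian” actually asks of a clinician, the sketch below gives the odds form of Bayes' theorem and the classic treatment-threshold formula. The threshold form follows Pauker and Kassirer rather than anything stated in this editorial, and the worked numbers are illustrative only.

```latex
% Odds form of Bayes' theorem with a likelihood ratio:
%   posttest odds = pretest odds x LR
\[
  \frac{P(D \mid T^{+})}{1 - P(D \mid T^{+})}
    = \frac{P(D)}{1 - P(D)} \times \mathrm{LR}^{+}
\]
% Illustrative numbers (not from the editorial): a pretest probability
% of 10% is odds of 1:9; a test with LR+ = 6 gives posttest odds of
% 6:9, ie, a posttest probability of 40%.

% Treatment threshold: treat when the probability of disease exceeds
\[
  p^{*} = \frac{C}{C + B}
\]
% where C is the net harm of treating a patient without the disease
% and B is the net benefit of treating a patient who has it.
```

The editorial's argument is precisely that clinicians will not, in practice, supply the pretest probability, C, or B that these formulas require.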
Once I think a febrile infant might have a serious bacterial infection, it becomes immaterial to me whether the chance is 1%, 5%, or even 0.001%. All are intolerable given the perceived potential consequences of inaction. Perhaps this is one reason the Ottawa ankle rules have been implemented more effectively: in addition to their rigor and simplicity, the downside of missing an ankle fracture is tolerable to clinicians. Beyond the risks of missed diagnosis, however, an individual patient does not have 90% of a disease. He or she either has the disease or does not. What this means is that a rule, whatever its characteristics, simply is not that helpful unless it perfectly predicts outcomes.

Along then comes the latest prediction rule. This one, by Fine et al5 in this issue of the Archives, is for Lyme disease. Putting aside for a second whether its perfect test characteristics will withstand prospective validation, deploying it requires a simple piece of data: the background prevalence of Lyme disease in a community, dichotomized as more or less than 4 per 100 000 people. Fair enough. One might imagine that such data in this day and age are readily retrievable via a search on the Internet. Actually, they do not appear to be. After a few search attempts, I found a Centers for Disease Control and Prevention Web site6 that gives the number of cases and incidence by state (not county), but only through 2008. I do not need a prediction rule to tell me that this prediction rule will not be used much.
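To make plain how small the rule's epidemiological input is, and how large the data gap, here is a minimal sketch of the incidence gate the rule turns on. The 4-per-100 000 cutoff comes from the editorial; the county figures and the high_endemic helper are hypothetical placeholders standing in for exactly the county-level data that could not be found online.

```python
# Sketch of the prevalence dichotomization the Fine et al rule depends on.
# The cutoff is from the editorial; everything else here is hypothetical.

LYME_CUTOFF_PER_100K = 4.0

def high_endemic(cases: int, population: int) -> bool:
    """Return True if reported Lyme incidence meets the rule's cutoff."""
    incidence_per_100k = cases / population * 100_000
    return incidence_per_100k >= LYME_CUTOFF_PER_100K

# Hypothetical county-level counts; in practice only state-level figures
# through 2008 were retrievable, which is the editorial's point.
counties = {"Middlesex": (120, 800_000), "Elsewhere": (2, 900_000)}

for name, (cases, pop) in counties.items():
    tier = "high" if high_endemic(cases, pop) else "low"
    print(f"{name}: {tier}-incidence community")
```

The computation is trivial; the complaint is that the one input it needs, current county-level incidence, is not reliably available.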
But even stipulating that such information could readily be found, whether clinicians would seek it out and apply it to the patient in an examination room remains an open question, and therein lies the valuable research agenda. How can we truly influence decision making in meaningful ways? Given time constraints, ever-growing clinical to-do lists, an expanding evidence base, and increased productivity expectations, which decision tools are truly practical and implementable? Before we devise any more rules, we might well benefit from a needs assessment and focus on areas where we can make a difference. Such solution-oriented research would have us desist from making prediction rules until we learn how to implement them.7

Correspondence: Dr Christakis, Center for Child Health, Behavior, and Development, Seattle Children's Research Institute, University of Washington, 1100 Olive Way, Ste 500, Seattle, WA 98101 (dachris@u.washington.edu).

Financial Disclosure: None reported.

References

1. Sackett DL, Haynes RB, Guyatt GH, Tugwell P. Clinical Epidemiology: A Basic Science for Clinical Medicine. 2nd ed. Boston, MA: Little, Brown & Co; 1991.
2. Christakis DA, Rivara FP. Pediatricians' awareness of and attitudes about four clinical practice guidelines. Pediatrics. 1998;101(5):825-830.
3. Sox CM, Doctor JN, Koepsell TD, Christakis DA. The influence of types of decision support on physicians' decision making. Arch Dis Child. 2009;94(3):185-190.
4. Sox CM, Koepsell TD, Doctor JN, Christakis DA. Pediatricians' clinical decision making: results of 2 randomized controlled trials of test performance characteristics. Arch Pediatr Adolesc Med. 2006;160(5):487-492.
5. Fine AM, Brownstein JS, Nigrovic LE, et al. Integrating spatial epidemiology into a decision model for evaluation of facial palsy in children. Arch Pediatr Adolesc Med. 2011;165(1):61-67.
6. Centers for Disease Control and Prevention. Map: reported cases of Lyme disease, United States, 2003. http://www.cdc.gov/ncidod/dvbid/lyme/distribution_density.htm. Accessed November 2, 2010.
7. Robinson TN, Sirard JR. Preventing childhood obesity: a solution-oriented research paradigm. Am J Prev Med. 2005;28(2)(suppl 2):194-201.

Publisher: American Medical Association
Copyright: © 2011 American Medical Association. All Rights Reserved.
ISSN: 1072-4710; eISSN: 1538-3628
DOI: 10.1001/archpediatrics.2010.255


Journal: Archives of Pediatrics & Adolescent Medicine (American Medical Association)

Published: January 3, 2011

Keywords: decision support systems, evidence-based medicine, fatigue, fever, decision making, streptococcal sore throat, Lyme disease, ankle fractures, serious bacterial infection, clinical prediction rule, delayed diagnosis, epidemiology, Ottawa ankle rules, treatment threshold
