Everyone's a Little Bit Biased (Even Physicians)

Medical schools and professional medical associations have developed policies and guidelines in response to increasing concerns over potential conflicts of interest.1 While many physicians agree with these concerns, some view conflict-of-interest policies as affronts to their integrity and an indictment of the ethical conduct of the profession as a whole. These individuals believe that their training as scientists and their devotion to professionalism protect them from external influences that might bias their opinions. However, this view may be based on an incorrect understanding of human psychology. Conflicts of interest are problematic not only because they are widespread but also because most people incorrectly think that succumbing to them is due to intentional corruption, a problem for only a few bad apples. In this Commentary, we argue that succumbing to a conflict of interest is more likely to result from unintentional bias, something common to everyone. We review studies in neuropsychology, behavioral economics, cognitive psychology, and clinical epidemiology to illustrate this point.

The Ethical Brain, Post Hoc

Navigating a conflict of interest is a problem not only for the intentionally corrupt but also for well-meaning individuals who unintentionally succumb to conflicts and then rationalize their actions post hoc so as to appear (to themselves and to others) objective. Gazzaniga and LeDoux2 performed many experiments that reveal the human potential to rationalize actions. In the classic studies on "split-brain" patients, one participant was shown a picture of a chicken claw in his right visual field (which he consciously saw and reacted to) and a picture of a snow scene in his left visual field that, because of a brain injury affecting that field, he processed only subconsciously. The researchers then spread out several pictures and asked the patient to select the ones associated with the pictures he had just seen. The patient correctly picked out a picture of a chicken and could articulate that it was associated with the chicken claw; he also correctly picked out the item that went with the snow scene: a snow shovel. However, because the snow scene was not consciously processed, he could not correctly explain why he picked the shovel. Rather than be dumbfounded, his explanation was "Oh, that's easy. The chicken claw goes with the chicken and you need a shovel to clean out the chicken coop." As did other participants in similar experiments, he "filled in the blanks" in describing his actions by developing his own rationalization for what he witnessed himself doing. Furthermore, although it was an incorrect rationalization, he actually believed it.

Research in neuroscience and psychology (eg, by Haidt3) suggests that every person may be a bit more like this than he or she would like to think. Everyone is prone to rationalize actions and beliefs. So it is no wonder that physicians think conflicts of interest do not affect them, since so many of the effects are subconscious and hidden by post hoc rationalization. Most individuals believe they have good reasons for their beliefs, but these reasons are often constructed after the fact. For instance, does a physician recommend a drug because of a friendly, generous pharmaceutical representative or because it is really the best drug for the patient's condition?
Psychological Demonstrations of Subconscious Conflicts

Individuals form beliefs based on their observations and interpretations of the data presented to them. However, the process of doing so is complex and subjective. The question is whether beliefs are always constrained by the facts. Several psychological experiments speak to this point.

First, a US News & World Report survey4 asked one group of respondents: "If someone sues you and you win the case, should he pay your legal costs?" Eighty-five percent answered yes. Another group was asked the question in a different way: "If you sue someone and lose the case, should you pay his costs?" Only 44% answered yes.

Second, Moore et al5 asked 139 auditors (all employed by a major US accounting firm) to judge the accounting decisions made in 5 auditing vignettes. Across all vignettes, even when incentivized to be accurate, auditors were 30% more likely to report that the accounting behind a firm's financial reports complied with generally accepted accounting principles when they were told to imagine that they were the auditors for that firm than when told to imagine that they were independent auditors. Although these relationships were merely imagined, even imagined relationships, let alone real current or past ones, can affect judgment.

In both of these examples, it is likely that respondents could justify their answers to themselves because there were good reasons for and against both sets of judgments; the judgments required weighing favorable and unfavorable information in a subjective fashion. Dawson et al6 described 3 experiments that support the theory of "motivated reasoning," in which individuals use different strategies to evaluate propositions depending on whether the hypothesis is desirable or threatening/disagreeable to them. In the former (desirable) case, it is common to ask "Can I believe this?" and to use persuasive standards of evidence to support the belief (ie, evidence that persuades the person that the belief is correct). In the latter (disagreeable) case, it is common to ask "Must I believe this?" and to look for evidence that contradicts the belief. Many of these processes have been shown to be unintentional, difficult to control, and resistant to incentives for accuracy.7

Physician–Pharmaceutical Company Relations

Most physicians and academicians claim that small gifts, meals, educational grants, travel grants, and research awards have little influence on their opinions because they have integrity. They believe their opinions are truly their own; but that is not the point. The question is: how did they come to hold those beliefs? Research has long shown that an individual's judgment can be influenced by the first information encountered, even information the person is trying to ignore. In research on the "anchoring" bias, Tversky and Kahneman8 showed that purportedly random suggestions had significant effects on participants' answers. In their experiment, participants were asked to guess the percentage of African countries that belong to the United Nations. But first they were given a number between 0 and 100 determined by a spinning wheel and asked whether they thought the true percentage was higher or lower than that number.
In the study's sample, if the purportedly random number the participants first saw was 10, the median estimate was 25, whereas if the purportedly random number was 65, the median estimate was 45. If suggestions believed to be randomly generated can affect judgments, it stands to reason that the nonrandom "suggestions" of a pharmaceutical company representative might also affect a physician's judgment. McCormick et al9 showed that a policy of "no contact" between pharmaceutical representatives and medical residents had a lasting effect on the beliefs and behavior of those residents up to 5 years after they completed training. Tatsioni et al10 showed that evidence from observational epidemiology that is subsequently contradicted by randomized trials continues to have a substantial influence on beliefs and recommendations long after the contradictory evidence is published. These clinical studies support the notion that it is difficult to overcome the influence of early information on beliefs. Physicians have many relationships other than those involving pharmaceutical companies that may result in bias,11 including nonfinancial conflicts of interest. Such bias may be difficult to undo.

Disclosure

The response of the academic community to potential conflicts of interest often focuses on disclosing those interests to the audience. In a behavioral economics experiment involving 147 undergraduate students at Carnegie Mellon University, Cain et al12 showed that disclosure may in fact exacerbate the influence of those conflicts. One group in the study, designated "estimators," looked at a sequence of jars of coins from a distance and estimated how much money was in each jar; the estimators were incentivized to be accurate. A second group, the "advisors," had a closer look at the jars and was told a range of possible values. The advisors gave the estimators advice on how much money was in each jar. Some advisors were subject to conflicts of interest: one group of conflicted advisors was paid more if the estimators overestimated the value in the jar, and, not surprisingly, those conflicted advisors gave higher advice. However, when the estimators were warned of this conflict via disclosure, their estimates failed to show adequate downward adjustment. In fact, disclosure actually worsened (ie, inflated) the advisors' advice, perhaps because the advisors were less concerned about giving accurate advice once the estimators had been warned of the conflict. The inflated advice and the insufficient downward adjustment left the estimators in the disclosure condition worse off for having been warned.

Full disclosure, by itself, may have the perverse effect of making professionals more biased rather than less. This is not to say that conflicts of interest should be hidden, but rather that disclosure may not be the solution. Even where disclosure does not make advice worse, the large body of research on anchoring (in which advice is often disclosed as being randomly generated yet still influences judgment) suggests that disclosure is often unlikely to serve as a sufficient warning. If a disclosure that some advice is randomly generated does not completely undo the influence of that advice, other disclosures (especially those in fine-print legalese) might similarly fall short. Moreover, even where disclosure is better than no disclosure, it may cause indirect harm by replacing more effective solutions.
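To make the arithmetic of this dynamic concrete, the following minimal Python sketch works through one hypothetical round of the coin-jar setup. All of the numbers (the jar's true value, the amount of advice inflation with and without disclosure, and the estimators' 10% discount) are illustrative assumptions, not figures from Cain et al12; the sketch only shows how inflated advice combined with insufficient discounting can leave warned estimators further from the truth.

    # Hypothetical illustration of the disclosure dynamic described above.
    # The payoffs, jar value, and adjustment rates are assumed for illustration,
    # not taken from the study.

    def estimator_guess(advice, discount):
        """Estimator anchors on the advisor's advice and discounts it by a fraction."""
        return advice * (1 - discount)

    true_value = 10.00          # assumed true value of the coins in the jar (dollars)

    # Conflicted advisor, no disclosure: advice is inflated above the true value.
    undisclosed_advice = 13.00  # assumed inflation under an undisclosed conflict
    # Conflicted advisor, with disclosure: advisor inflates even more once warned estimators seem "on notice".
    disclosed_advice = 15.00    # assumed further inflation after disclosure

    # Estimators discount disclosed advice, but not by enough to offset the extra inflation.
    error_no_disclosure = abs(estimator_guess(undisclosed_advice, discount=0.00) - true_value)
    error_disclosure = abs(estimator_guess(disclosed_advice, discount=0.10) - true_value)

    print(f"Estimator error without disclosure: ${error_no_disclosure:.2f}")  # $3.00
    print(f"Estimator error with disclosure:    ${error_disclosure:.2f}")     # $3.50
    # Under these assumed numbers, disclosure leaves the estimator worse off,
    # mirroring the pattern reported in the experiment.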
Surowiecki13 summarized this problem by noting, "Transparency is well and good, but accuracy and objectivity are even better. [The profession] does not have to keep confessing its sins. It just has to stop committing them."

Conclusion

Researchers and clinicians have been taught that scientific experiments involving observations or judgments must be protected from the unintentional influences of bias.14 For example, patients, clinicians, and those rendering outcome judgments are "double-blinded" to which treatment the patient is receiving. Researchers are not insulted by the imposition of these methods in research. Why, then, are they so insulted by the suggestion that similar influences might have affected their beliefs in other settings? Bias is not a crime, is not necessarily intentional, and is not a sign of a lack of integrity; rather, it is a natural human phenomenon. Like the research participant with the split brain, everyone is likely capable of rationalizing beliefs and denying the influences that bias them. The most important action physicians can take as a profession is to recognize this.

Article Information

Corresponding Author: Allan S. Detsky, MD, PhD, Mount Sinai Hospital, 427-600 University Ave, Toronto, Ontario, Canada M5G 1X5 (adetsky@mtsinai.on.ca).

Financial Disclosures: None reported.

Additional Contributions: We acknowledge Andreas Laupacis, MD, Donald Redelmeier, MD (both at the University of Toronto), and George Loewenstein, PhD (Carnegie Mellon University), for their reviews of the manuscript. These individuals were not compensated.

References

1. Brennan TA, Rothman DJ, Blank L, et al. Health industry practices that create conflicts of interest: a policy for academic medical centers. JAMA. 2006;295(4):429-433.
2. Gazzaniga MS, LeDoux JE. The Integrated Mind. New York, NY: Plenum Press; 1978.
3. Haidt J. The emotional dog and its rational tail: a social intuitionist approach to moral judgment. Psychol Rev. 2001;108(4):814-834.
4. Budiansky S, Gest T, Fischer D. How lawyers abuse the law. US News World Rep. January 30, 1995:52.
5. Moore DA, Loewenstein G, Tanlu L, Bazerman MH. Psychological Dimensions of Holding Conflicting Roles: Is It Possible to Play Advocate and Judge at the Same Time? Pittsburgh, PA: Carnegie Mellon Tepper School of Business; 2004. Tepper Working Paper 2004-E40.
6. Dawson E, Gilovich T, Regan DT. Motivated reasoning and performance on the Wason Selection Task. Pers Soc Psychol Bull. 2002;28(10):1379-1387.
7. Moore DA, Cain DM, Loewenstein G, Bazerman M, eds. Conflicts of Interest: Problems and Solutions From Law, Medicine and Organizational Settings. Cambridge, UK: Cambridge University Press; 2005.
8. Tversky A, Kahneman D. Judgment under uncertainty: heuristics and biases. Science. 1974;185(4157):1124-1131.
9. McCormick BB, Tomlinson G, Brill-Edwards P, Detsky AS. Effect of restricting contact between pharmaceutical company representatives and internal medicine residents on posttraining attitudes and behaviour. JAMA. 2001;286(16):1994-1999.
10. Tatsioni A, Bonitsis NG, Ioannidis JPA. Persistence of contradicted claims in the literature. JAMA. 2007;298(21):2517-2526.
11. Detsky AS. Sources of bias for authors of clinical practice guidelines. CMAJ. 2006;175(9):1033.
12. Cain DM, Loewenstein G, Moore DA. The dirt on coming clean: perverse effects of disclosing conflicts of interest. J Legal Stud. 2005;34:1-25.
13. Surowiecki J. The talking cure. New Yorker. December 9, 2002:38.
14. Sackett DL, Haynes RB, Tugwell P. Clinical Epidemiology: A Basic Science for Clinical Medicine. New York, NY: Little Brown & Co; 1985.

JAMA, Volume 299 (24), June 25, 2008

Publisher
American Medical Association
Copyright
Copyright © 2008 American Medical Association. All Rights Reserved.
ISSN
0098-7484
eISSN
1538-3598
DOI
10.1001/jama.299.24.2893
