Looking for Medical Injuries Where the Light Is Bright

Health care quality improvement experts often argue that "you can't manage what you can't measure." Suitable yardsticks are essential to judge the magnitude of potential quality problems and track whether interventions improve care. However, this aphorism needs one critical addendum: "You can't measure what you can't define." Measurement and definitional issues loom large when discussing patient safety. The bellwether 1999 Institute of Medicine report To Err Is Human provided compelling evidence that medical errors pose daily risks throughout the US health care system but failed to quash controversy about the magnitude of that risk.1 The best-known estimates of the extent of medical error rely on extrapolations from medical record review studies,2,3 although these numbers have generated heated debate.4-6

Delineating definitions, though, should precede measurement. Another Institute of Medicine report defined safety as "avoiding injuries to patients from the care that is intended to help them."7(p39) Producing useful patient safety measures (ie, measures that can assist in managing and improving care) requires honing this definition to the subset of events that are amenable to improvement. While most observers agree that iatrogenic injuries occur in virtually all practice settings, attribution of injuries to error is complicated. Medical harm can result from myriad and sometimes intertwined factors, including the natural history of patients' diseases, coexisting medical conditions and risk factors, access to and availability of care, recognized toxic effects of appropriate therapies, clinical judgments and misjudgments, flaws in executing medical interventions, and bad luck. Although injuries sometimes trace back clearly to actions of individual practitioners, experts believe that multiple deficiencies latent in complex care delivery systems contribute importantly to most preventable iatrogenic injuries.8

To be maximally useful, patient safety measurement tools should therefore focus on preventable injuries. Preventable injuries, in contrast with complications resulting from recognized risky therapies administered correctly, offer actionable targets for quality improvement. Unfortunately, preventable injuries are technically difficult and expensive to capture. Chart reviews, although rich in clinical detail, are expensive, fail to identify undocumented events and causes, and often produce unreliable judgments about preventability.5 Incident reporting systems, required by many state health departments and accreditation bodies, are rarely used by physicians and undercount incidents by at least 1 order of magnitude.9 Sophisticated computer algorithms that monitor clinical databases and electronic medical records offer promise, but existing algorithms are typically insensitive screens, and few hospitals have adequate resources to develop or implement these high-end applications.10

Given this situation, exploring the utility of existing administrative data, such as computerized hospital discharge abstracts, for measuring patient safety seems reasonable. Administrative data offer significant attractions, including low cost, ready availability, and coverage of large populations. Creative combinations of administrative data elements, including diagnosis and procedure codes (along with procedure dates), could yield insight into clinical events or conditions that might represent safety problems.
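To make that idea concrete, a minimal sketch of such a screen appears below. The diagnosis and procedure codes, field names, and flagging rule are all hypothetical illustrations of the general technique, not the AHRQ algorithm:

```python
# Minimal sketch of an administrative-data screen over a discharge abstract.
# Codes and field names are hypothetical, not the AHRQ PSI algorithm.

from datetime import date

# Hypothetical ICD-9-CM codes suggesting an in-hospital complication.
COMPLICATION_DX = {"998.59"}   # e.g., other postoperative infection
REOPERATION_PX = {"54.61"}     # e.g., reclosure of abdominal wall disruption

def flag_discharge(abstract: dict) -> bool:
    """Flag an abstract if a complication diagnosis is coded AND a
    related procedure occurred after the index operation."""
    has_complication_dx = bool(COMPLICATION_DX & set(abstract["dx_codes"]))
    index_date = abstract["procedure_dates"][abstract["index_procedure"]]
    later_reoperation = any(
        px in REOPERATION_PX and px_date > index_date
        for px, px_date in abstract["procedure_dates"].items()
    )
    return has_complication_dx and later_reoperation

example = {
    "dx_codes": ["540.9", "998.59"],
    "index_procedure": "47.01",            # appendectomy
    "procedure_dates": {
        "47.01": date(2003, 5, 1),
        "54.61": date(2003, 5, 4),         # reclosure 3 days later
    },
}
print(flag_discharge(example))  # True: possible safety-related event
```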
Recognizing this opportunity, the US Agency for Healthcare Research and Quality (AHRQ) commissioned researchers at the University of California and Stanford University to update and expand a set of measures to identify potential safety problems. Informed by literature reviews, expert opinion, analyses of administrative databases,10 and the Complications Screening Program (CSP) that we developed in the early 1990s,11,12 the investigators created 20 Patient Safety Indicators (PSIs) by linking discharge diagnoses and procedure codes with other information from computerized hospital discharge abstracts.13,14 The AHRQ has usefully posted this computerized algorithm on its Internet site, freely available to all, calling the PSIs a "quick checkup" that "can help hospitals enhance their patient safety performance by quickly detecting potential medical errors in patients who have undergone medical or surgical care."15

In this issue of THE JOURNAL, Zhan and Miller16 analyze excess mortality, length of stay, and charges among inpatients who had 1 of 18 PSI-defined incidents identified using administrative data from a large, representative sample of US hospitalizations. Using sophisticated analytic methods, the authors estimate that these 18 types of medical events may account for 2.4 million extra hospital days, $9.3 billion in excess charges, and almost 32 600 attributable deaths in the United States annually. They argue that the results represent only the "tip of the iceberg," given the limited number of conditions included in the PSI algorithm and problems inherent in the use of administrative data. The authors carefully note the limitations of their work and recommend that the results be used advisedly. Little is known about the validity of PSIs to compare hospitals and communities, to track changes over time, and to measure preventable complications.13(p145) Nonetheless, given their staggering magnitude, these estimates are clearly sobering and provide a more granular view of specific types of complications than typically presented.
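The general logic of such attributable-outcome estimates can be sketched simply: compare outcomes among flagged cases with outcomes among unflagged controls matched on characteristics such as diagnosis related group and age. The sketch below uses invented numbers and a single matching stratum; Zhan and Miller's actual multivariable analysis is far more elaborate:

```python
# Sketch of estimating excess outcomes attributable to flagged events by
# comparing flagged cases with unflagged controls in the same matching
# stratum (here, one DRG and age band). All numbers are hypothetical.

from statistics import mean

cases = [  # discharges flagged by a PSI
    {"drg": 148, "age_band": "65-74", "los": 14, "charges": 41000, "died": 1},
    {"drg": 148, "age_band": "65-74", "los": 11, "charges": 35000, "died": 0},
]
controls = [  # unflagged discharges matched to the cases
    {"drg": 148, "age_band": "65-74", "los": 6, "charges": 19000, "died": 0},
    {"drg": 148, "age_band": "65-74", "los": 7, "charges": 21000, "died": 0},
]

def excess(outcome: str) -> float:
    """Mean outcome among flagged cases minus mean among matched controls."""
    return mean(c[outcome] for c in cases) - mean(c[outcome] for c in controls)

print(f"excess length of stay: {excess('los'):.1f} days")   # 6.0 days
print(f"excess charges: ${excess('charges'):,.0f}")          # $18,000
print(f"excess mortality: {excess('died'):.2f}")             # 0.50
```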
Today's urgent social need to understand patient safety makes it sensible to use existing administrative data sources to assess the problem. Like the man searching for his lost keys under the lamppost, clinicians and quality improvement professionals might reasonably start looking for potential answers where the light is best. But what do the PSIs actually mean? Specifically, do the PSI measures capture actionable patient safety problems (ie, preventable iatrogenic events) amenable to improvement, or complications resulting from other causes? How sensitive are the PSIs for identifying substandard care? Findings from a multiyear, AHRQ-funded project involving 1298 in-depth medical record reviews to validate the CSP—a progenitor of the PSIs—give some pause. The results of the CSP validation study, described in detail elsewhere,17-21 suggest that administrative data–based algorithms provide questionable insight into substandard hospital care.

Briefly, the study failed to find objective clinical evidence in the medical record to support hospital-assigned discharge diagnosis codes used to identify complications for 19% of surgical and 30% of medical admissions.17 Discharge diagnoses used by the CSP to flag complications represented conditions that were present on admission for 13% of surgical and 58% of medical cases (ie, these conditions had not occurred during the hospitalization from iatrogenic causes).18 Although physician reviewers confirmed the presence of CSP-flagged complications among 68% of surgical and 27% of medical patients, they found quality problems in only 30% of surgical and 16% of medical cases.19 Nurse reviewers using structured chart review instruments and physicians conducting reviews based on their implicit clinical judgment frequently disagreed about the presence of quality problems in cases flagged as complications by the CSP.20 Indeed, an expert panel found it extraordinarily difficult to construct review instruments to identify specific process-of-care problems (ie, actions by clinicians) that might contribute to the occurrence of many complications. The conclusion was that the CSP was a useful screening tool for selecting certain surgical (but not medical) inpatient cases for further, detailed chart review but that it offered little information about quality-of-care deficiencies.

Straightforward strategies exist to enhance the ability of administrative data to identify in-hospital complications of care. Attaching a flag to each discharge diagnosis indicating whether it was present on admission is an obvious example, already implemented in New York and California. However, widespread use of methods like the PSIs could have untoward consequences. If PSI results are used to evaluate individual hospitals, shifts in discharge diagnosis coding may occur (ie, hospitals could stop coding certain conditions that arise in the hospital). Monitoring and redressing such changes in coding would be cumbersome and expensive, as Medicare has found in overseeing coding related to assignment of diagnosis related groups for hospital payment.
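A present-on-admission flag lends itself to a simple screen: discard any candidate complication code marked as present on admission. The sketch below assumes hypothetical field names and a simplified "Y"/"N" POA convention; actual state coding rules differ:

```python
# Sketch of filtering candidate complication diagnoses using
# present-on-admission (POA) flags, as in the New York and California
# discharge data sets. Field names and codes are hypothetical.

discharge = {
    "dx_codes": [
        {"code": "428.0", "poa": "Y"},   # heart failure, present on admission
        {"code": "998.59", "poa": "N"},  # postoperative infection, arose in hospital
    ],
}

def in_hospital_complications(abstract: dict, watch_codes: set) -> list:
    """Return watched diagnosis codes that were NOT present on admission."""
    return [
        dx["code"]
        for dx in abstract["dx_codes"]
        if dx["code"] in watch_codes and dx["poa"] == "N"
    ]

print(in_hospital_complications(discharge, {"428.0", "998.59"}))
# ['998.59'] -- only the diagnosis that arose during the stay is flagged
```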
Ultimately, as the Institute of Medicine noted,1 measuring preventable harm will require a comprehensive and far-reaching approach. Many inpatient injuries and errors that appear most prevalent based on chart review studies, such as adverse drug events, diagnostic errors, and problems with adequate monitoring and follow-up, require sources other than administrative data. Capturing safety information reliably and efficiently may become easier with electronic monitoring algorithms of clinical databases and computerized text searches of electronic medical records.22 In addition, deploying enhanced incident reporting systems based on the aviation model and developing intensive surveillance of high-risk processes and practice settings may be worthwhile investments.

The PSIs are a step in the right direction, making transparent and accessible information that may someday help to prevent errors and injuries. However, the precise meaning of the PSIs and other administrative data–based measures remains elusive. Developing and validating a robust set of measurement tools is essential to move patient safety information out of the shadows and into the light.

References

1. Kohn LT, Corrigan JM, Donaldson MS. To Err Is Human: Building a Safer Health System. Washington, DC: National Academy Press; 1999.
2. Brennan TA, Leape LL, Laird NM, et al. Incidence of adverse events and negligence in hospitalized patients. N Engl J Med. 1991;324:370-376.
3. Thomas EJ, Studdert DM, Burstin HR, et al. Incidence and types of adverse events and negligent care in Utah and Colorado. Med Care. 2000;38:261-271.
4. McDonald CJ, Weiner M, Hui SL. Deaths due to medical errors are exaggerated in Institute of Medicine report. JAMA. 2000;284:93-95.
5. Brennan TA. The Institute of Medicine report on medical errors: could it do harm? N Engl J Med. 2000;342:1123-1125.
6. Leape LL. Institute of Medicine medical error figures are not exaggerated. JAMA. 2000;284:95-97.
7. Committee on the Quality of Health Care in America, Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press; 2001.
8. Leape LL. Error in medicine. JAMA. 1994;272:1851-1857.
9. Cullen DJ, Bates DW, Small SD, Cooper JB, Nemeskal AR, Leape LL. The incident reporting system does not detect adverse drug events: a problem for quality improvement. Jt Comm J Qual Improv. 1995;21:541-548.
10. Miller M, Elixhauser A, Zhan C, Meyer G. Patient Safety Indicators: using administrative data to identify potential patient safety concerns. Health Serv Res. 2001;36:110-132.
11. Iezzoni LI, Daley J, Heeren T, et al. Using administrative data to screen hospitals for high complication rates. Inquiry. 1994;31:40-55.
12. Iezzoni LI, Daley J, Heeren T, et al. Identifying complications of care using administrative data. Med Care. 1994;32:700-715.
13. McDonald KM, Romano PS, Geppert J, et al. Measures of Patient Safety Based on Hospital Administrative Data—The Patient Safety Indicators. Rockville, Md: Agency for Healthcare Research and Quality; 2002. Technical Review 5.
14. Romano PS, Geppert JJ, Davies S, Miller MR, Elixhauser A, McDonald K. A national profile of patient safety in US hospitals. Health Aff (Millwood). 2003;22:154-166.
15. New AHRQ Web-based tool offers hospital quick checkup on patient safety [press release]. March 13, 2003. Available at: http://www.ahrq.gov/news/press/pr2003/psipr.htm. Accessed September 1, 2003.
16. Zhan C, Miller MR. Excess length of stay, charges, and mortality attributable to medical injuries during hospitalization. JAMA. 2003;290:1868-1874.
17. McCarthy EP, Iezzoni LI, Davis RB, et al. Does clinical evidence support ICD-9-CM diagnosis coding of complications? Med Care. 2000;38:868-876.
18. Lawthers AG, McCarthy EP, Davis RB, Peterson LE, Palmer RH, Iezzoni LI. Identifying in-hospital complications from claims data: is it valid? Med Care. 2000;38:785-795.
19. Weingart SN, Iezzoni LI, Davis RB, et al. Using administrative data to find substandard care: validation of the Complications Screening Program. Med Care. 2000;38:796-806.
20. Weingart SN, Davis RB, Palmer RH, et al. Discrepancies between explicit and implicit review: physician and nurse assessments of complications and quality. Health Serv Res. 2002;37:483-498.
21. Iezzoni LI, Davis RB, Palmer RH, et al. Does the Complications Screening Program flag cases with process of care problems? Using explicit criteria to judge processes. Int J Qual Health Care. 1999;11:107-118.
22. Bates DW, Evans RS, Murff H, Stetson PD, Pizziferri L, Hripcsak G. Detecting adverse events using information technology. J Am Med Inform Assoc. 2003;10:115-128.


JAMA, Volume 290 (14) – Oct 8, 2003


Publisher
American Medical Association
Copyright
Copyright © 2003 American Medical Association. All Rights Reserved.
ISSN
0098-7484
eISSN
1538-3598
DOI
10.1001/jama.290.14.1917

