TY - JOUR
AU - Parker, Joseph
AB - More than a decade has passed since the Institute of Medicine released the Crossing the Quality Chasm report, highlighting the challenges of health care–quality improvement.1 The proposals outlined in this report have extensive data needs. Given the ubiquitous nature of administrative databases, which are already generated from hospital billing, these databases have formed the backbone of many current health care policies, such as the transparency initiatives by the Centers for Medicare and Medicaid Services through the Hospital Compare website, the development of Patient Safety Indicators by the Agency for Healthcare Research and Quality,2 and the pay-for-performance policy related to hospital readmissions.3 In addition to patient demographic data, these databases capture the diagnosis and procedure codes associated with patient care and some basic outcome information at discharge. However, critics often argue that because these databases are created by coders who have no clinical experience and no direct contact with the care process, the data are often inaccurate and thus unsuitable for quality-improvement initiatives.4,5 Therefore, many have instead recommended building more specialized clinical data registries maintained by clinical personnel. Well-known examples in surgery include the National Surgical Quality Improvement Program, the Society of Thoracic Surgeons Database, the National Cancer Data Base, and the National Trauma Data Bank.

Supporters of administrative databases have noted that extensive processes are already in place to ensure the accuracy of administrative coding. Coders are required to undergo a rigorous training and certification process in medical coding, sometimes referred to as nosology (these professionals are thus sometimes called nosologists). Such certification programs are often accredited by the American Academy of Professional Coders6 or the American Health Information Management Association.7 The programs vary in length but are commonly 1 year in duration. Additionally, the inaccuracies in administrative databases are likely limited to the diagnosis codes, whereas the coding of procedures is considered fairly accurate because any inaccuracy there could be construed as fraud. Lastly, the inaccuracies that do occur in diagnosis codes are likely to be randomly distributed and thus should not bias any finding.

In this article, we propose a compromise. We believe that the debate about administrative databases vs clinical data registries is ultimately irrelevant because the two are simply meant to serve different, but complementary, functions. We propose that administrative databases should be viewed as public health screening tools, whereas clinical data registries should be viewed as confirmatory diagnostic tools. Administrative databases are therefore not meant to replace, nor should they be replaced by, clinical data registries; both data systems should be used together in the pursuit of health care–quality improvement.

Concept of Screening vs Diagnostic Tests

The use of screening tests in combination with diagnostic tests is a common practice in clinical medicine. Examples include mammography and breast biopsy, fecal occult blood tests and colonoscopy, and Pap smear and colposcopy.
The rationale behind this 2-step approach to diagnostic workup is the unavoidable trade-off between accuracy and cost, not just financial cost but also time, patient comfort, and the risks associated with invasive procedures. For screening tests, the primary focus is to reduce false-negatives, which in turn entails some tolerance for false-positives. Therefore, as counterintuitive as it sounds, some inaccuracy in a screening test is acceptable because the errors are largely false-positives that are necessary to minimize the risk for false-negatives. In other words, we accept some loss in specificity in exchange for increased sensitivity. Take, for example, the diagnostic approach to breast cancer. Initial screening is performed via mammography. If the result is positive, a follow-up diagnostic test, needle biopsy, is performed. Mammography alone would not be sufficient to make a definitive diagnosis but, conversely, it would be difficult to subject every patient to the risks and costs of needle biopsy. Additionally, dismissing a positive mammography finding entirely, without any follow-up, simply because mammography is often inaccurate would likely be considered medical malpractice.

Applying the Concept of Screening Tests to Administrative Databases

In many ways, administrative databases are analogous to mammography, detecting problems for which further investigation should be pursued. Just as with mammography, some false-positives will surface (ie, a signal, later found to be false, that something bad may be happening); however, these inaccuracies are not a reason to dismiss the test findings. Similarly, replacing screening tests with a universal policy of diagnostic tests would be cost prohibitive. For example, the National Surgical Quality Improvement Program requires a dedicated, trained medical record abstractor, plus an annual fee for data analysis and reports. A combined approach is thus appropriate, with inexpensive but ubiquitous administrative databases used as first-line screening tools, complemented by efforts to build more detailed clinical registries only in problem areas to make the ultimate diagnosis. In fact, with this conceptual framework in mind, the value of administrative databases becomes even more apparent: no other data source in our health care system today is as ubiquitous. In contrast, there are several potential alternatives to the diagnostic function served by clinical registries, such as medical record audits or peer-review panels.

Implications

The conceptualization of administrative databases as screening tools has several important implications. In the same sense that an invasive operation, such as a colectomy, should not be recommended solely on the basis of a positive fecal occult blood test result, policymakers should refrain from recommending any intervention based on findings from administrative databases alone. Instead, more detailed diagnostic tests, such as data from clinical registries, if they exist, should be considered before any decisive intervention, such as pay for performance, is implemented. (Medical record audits or peer-review panels have also been suggested as alternatives, but they are unlikely to be feasible for population-level quality initiatives.)
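
To make the screening logic concrete, consider a brief hypothetical calculation (the sensitivity, specificity, and prevalence values below are assumed purely for illustration and are not drawn from this article or its references). Suppose an administrative-data signal for a postoperative complication has a sensitivity of 0.90 and a specificity of 0.85, and the true complication prevalence is 5%. By Bayes' rule, the positive predictive value of the signal is

\[
\mathrm{PPV} \;=\; \frac{\mathrm{sens}\times p}{\mathrm{sens}\times p+(1-\mathrm{spec})\,(1-p)}
\;=\; \frac{0.90\times 0.05}{0.90\times 0.05+0.15\times 0.95}
\;=\; 0.24 .
\]

Under these assumed numbers, only about 1 in 4 flagged cases reflects a true problem, which is exactly why a confirmatory source such as a clinical registry should precede any decisive intervention; at the same time, the corresponding negative predictive value exceeds 0.99, which is why a negative screen remains informative.
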
For health care providers increasingly under the quality microscope, we suggest that dismissing the findings from administrative databases would also be misguided; it would be as inappropriate as dismissing a positive mammography finding without any follow-up. Lastly, health system researchers should determine the sensitivity and specificity of administrative databases in different settings, or develop analytic methods to quantify the (likely inconsequential) impact of random inaccuracies in these databases.

Conclusions

When interpreted correctly as a screening tool, administrative databases allow for broad snapshots of our health care delivery system and serve to complement detailed clinical registries or medical record audits in our pursuit of health care–quality improvement.

Article Information

Corresponding Author: David C. Chang, PhD, MPH, MBA, Department of Surgery, Massachusetts General Hospital, Harvard Medical School, 165 Cambridge St, Ste 403, Boston, MA 02114 (dchang8@mgh.harvard.edu).

Published Online: November 19, 2014. doi:10.1001/jamasurg.2014.1352.

Conflict of Interest Disclosures: None reported.

References

1. Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press; 2001.

2. Agency for Healthcare Research and Quality. Patient Safety Indicators overview. http://www.qualityindicators.ahrq.gov/modules/psi_overview.aspx. Updated 2014. Accessed January 17, 2014.

3. Centers for Medicare and Medicaid Services. Readmissions Reduction Program. CMS.gov website. http://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/AcuteInpatientPPS/Readmissions-Reduction-Program.html. Updated 2013. Accessed January 1, 2014.

4. Koch CG, Li L, Hixson E, Tang A, Phillips S, Henderson JM. What are the real rates of postoperative complications: elucidating inconsistencies between administrative and clinical data sources. J Am Coll Surg. 2012;214(5):798-805.

5. Lawson EH, Louie R, Zingmond DS, et al. A comparison of clinical registry versus administrative claims data for reporting of 30-day surgical complications. Ann Surg. 2012;256(6):973-981.

6. American Academy of Professional Coders. About AAPC. http://www.aapc.com/AboutUs/. Updated 2014. Accessed January 1, 2014.

7. American Health Information Management Association. CCHIIM: about the Commission. http://www.ahima.org/certification/cchiim. Updated 2014. Accessed January 17, 2014.

TI - Conceptualizing Administrative Databases as Screening Tools for Health System Quality: Rethinking the Issue of Data Accuracy
JF - JAMA Surgery
DO - 10.1001/jamasurg.2014.1352
DA - 2015-01-01
UR - https://www.deepdyve.com/lp/american-medical-association/conceptualizing-administrative-databases-as-screening-tools-for-health-ATZS6J0K5S
SP - 5
EP - 6
VL - 150
IS - 1
DP - DeepDyve
ER -