The Quality of Quality Research

Last month's release of the second child health scorecard by the Commonwealth Fund reminded us of the enduring challenge of suboptimal care for too many children in this country.1 This is just the most recent example of the growing literature documenting the many shortfalls in quality in every aspect of pediatric care. This evidence base has already contributed to significant new federal investments in strategies to measure and improve pediatric quality through the Children's Health Insurance Program Reauthorization Act of 2009 and the national quality strategy called for in the Affordable Care Act.2 Eighteen states are now working to develop and test new ways to improve the quality of care for children, and 7 Centers of Excellence in pediatric quality measures have recently been established to improve existing measures and develop new measures for the numerous gaps that currently exist.3 This issue's articles on quality of care are an important contribution to this field, first in helping us understand the challenges and begin to address them, but also in reminding us of the current limitations in the methods used in many quality-related studies.

Moving beyond description to action

Understanding the nature, scope, and effect of quality problems is a critical first step to action. The finding by Raphael et al4 of an association between low scores on parent reports of family centeredness and higher rates of nonurgent emergency department visits among children with special health care needs provides a starting point for designing an intervention to address those dimensions. That being said, our field is replete with the equivalent of “me too drug” studies.
Each examination of patterns of care or disparities is slightly different (enough to get funded and published), but, in the end, all of these studies leave one asking the question: “so what do I do about this?” Secondary data set studies are clearly easier, cheaper, and quicker, but how do we move from description to action more often and sooner? Actually changing care is really hard. Changing care and studying whether that change was actually effective in improving quality and outcomes is even harder and not for the faint-hearted. It requires working in the real world of care delivery, which is messy and uncontrollable. It also often requires the use of multiple sites to recruit sufficient numbers of patients and the use of newer and complex methods, and it subjects the researcher to the vagaries of often highly unpredictable institutional review board judgments. Sullivan and Goldmann5 recently summarized some of these methods in calling for comparative studies of implementation and improvement strategies. Three of the articles in this issue do report on the effect of interventions intended to improve care; however, few make it all the way to demonstrating outcomes. The study by Gordon et al6 reports on physician and nurse self-reports of communication practices. Although improved communication was reported, we providers are notoriously bad at assessing our own performance, and no attempts were made to capture any independent measures of improvement. The report by Byczkowski et al7 goes a little further and actually measures the participation and characteristics of users of a patient portal designed to assist parents of chronically ill children, and thus begins to take us part of the way toward measuring actual change. It provides initial information on the reach of a particular intervention but cannot yet tell us what, if any, effect on outcomes the portal is having. This is necessary but not sufficient.
The study by Shapiro et al8 takes us a little further down the process-outcomes continuum by assessing documentation of guideline adherence in medical records but does not correlate these improvements with any changes in measurable outcomes such as rates of emergency department or hospital use. We need studies that can go further down this continuum and measure the full range of outcomes that decision makers in health systems and government are concerned about.

Methods matter

The articles by Casey et al9 and Joffe et al10 address timely issues in delivery systems and policy circles (the role of care coordination in achieving cost savings and the role of pediatric medical emergency teams in reducing in-hospital mortality, respectively). Casey et al9 address an issue too often ignored by child health researchers, namely, cost-reduction strategies. Most of the pediatric literature emphasizes the improvement of care or the underuse of effective interventions, not the costs associated with pediatric care. The potential savings found by the researchers are significant when projected to a theoretical cohort and, if they were to be extrapolated statewide or nationally, could garner the attention of policy makers. Although health care costs have always been on the agenda in Washington, the $10 billion investment in the Center for Medicare and Medicaid Innovation is unprecedented, and the urgency to tackle costs is real. In addition, providers everywhere are scrambling to position themselves to become or join an Accountable Care Organization, even as the meaning of that term is being defined. However, the question the reader is left asking is “compared to what?” As in any pre-post study, one cannot know what else happened in the care system or in state policies during the time of the study that could have led in whole or in part to the positive findings.
The study by Shapiro et al8 also uses a pre-post design to assess the effect of the implementation of an asthma toolbox on adherence to the National Asthma Education and Prevention Program guidelines, as measured by documentation in the medical record. The “compared to what” question is somewhat attenuated by conducting the study in 2 quite different settings, although the authors acknowledge the limitations on generalizability. This limitation of pre-post designs without any comparisons is squarely taken on by Joffe et al10 when they conclude that reductions in mortality were achieved in a hospital without a pediatric medical emergency team that were comparable to reductions reported in single-site studies of hospitals that had deployed this strategy. Many other methods exist to generate more robust findings from intervention or improvement research, including time series analyses, pragmatic clinical trials, adaptive clinical trials, cluster randomized trials, and factorial designs.11-13 However, many researchers are not adequately trained in these methods, and it is not clear to what extent new investments in comparative effectiveness research training will help address these gaps. The article by Lee et al14 is a profound testament to the critical importance of methods in drawing conclusions about performance. As the country rushes to implement performance-based accountability strategies, Lee et al14 remind us how different the conclusion would be based on the analytic choices made by those reporting on the performance of neonatal intensive care units. It is a great cautionary tale for public reporting advocates.

Conclusions

The quality of quality of care research is commensurate with its developmental stage and the degree of investment to date. In other words, we have taken our first wobbly steps and continue to learn with each misstep, skip, and jump.
Much can and should be done to accelerate the robustness and relevance of this type of research, given the imperative to improve quality and reduce costs and the opportunity provided by the Affordable Care Act. Federal agencies and the newly established Patient-Centered Outcomes Research Institute can prioritize these types of studies and fund the training and methods development so desperately needed. Journals such as Archives need to continue to publish these studies while demanding the utmost rigor. Only then will we truly be able to know how to improve care for children.

Correspondence: Dr Simpson, AcademyHealth, 1150 17th St, NW, Suite 600, Washington, DC 20036 (lisa.simpson@academyhealth.org).

Financial Disclosure: None reported.

References

1. How SKH, Fryer AK, McCarthy D, Schoen C, Schor EL. Securing a Healthy Future: The Commonwealth Fund State Scorecard on Child Health System Performance, 2011. The Commonwealth Fund Web site. http://mobile.commonwealthfund.org/Content/Publications/Fund-Reports/2011/Feb/State-Scorecard-Child-Health.aspx. Published February 2, 2011. Accessed March 10, 2011.
2. Children's Health Insurance Program Reauthorization Act of 2009, Pub L 111-3, 123 Stat 8.
3. Chief Medical Information Officer (CMIO). AHRQ awards quality grants to pediatric centers. CMIO Web site. http://www.cmio.net/index.php?option=com_articles&article=26624. Accessed March 14, 2011.
4. Raphael JL, Mei M, Brousseau DC, Giordano TP. Associations between quality of primary care and health care use among children with special health care needs. Arch Pediatr Adolesc Med. 2011;165(5):399-404.
5. Sullivan P, Goldmann D. The promise of comparative effectiveness research. JAMA. 2011;305(4):400-401.
6. Gordon MB, Melvin P, Graham D, et al. Unit-based care teams and the frequency and quality of physician-nurse communications. Arch Pediatr Adolesc Med. 2011;165(5):424-428.
7. Byczkowski TL, Munafo JK, Britto MT. Variation in use of Internet-based patient portals by parents of children with chronic disease. Arch Pediatr Adolesc Med. 2011;165(5):405-411.
8. Shapiro A, Gracy D, Quinones W, Applebaum J, Sarmiento A. Putting guidelines into practice: improving documentation of pediatric asthma management using a decision-making tool. Arch Pediatr Adolesc Med. 2011;165(5):412-418.
9. Casey PH, Lyle RE, Bird TM, et al. Effect of hospital-based comprehensive care clinic on health costs for Medicaid-insured medically complex children. Arch Pediatr Adolesc Med. 2011;165(5):392-398.
10. Joffe AR, Anton NR, Burkholder SC. Reduction in hospital mortality over time in a hospital without a pediatric medical emergency team: limitations of before-and-after study designs. Arch Pediatr Adolesc Med. 2011;165(5):419-423.
11. Shojania KG, Grimshaw JM. Evidence-based quality improvement: the state of the science. Health Aff (Millwood). 2005;24(1):138-150.
12. Davidoff F, Batalden P, Stevens D, Ogrinc G, Mooney SE; SQUIRE development group. Publication guidelines for quality improvement studies in health care: evolution of the SQUIRE project. BMJ. 2009;338:a3152. doi:10.1136/bmj.a3152.
13. Chow SC, Chang M. Adaptive design methods in clinical trials: a review. Orphanet J Rare Dis. 2008;3:11.
14. Lee HC, Chien AT, Bardach NS, Clay T, Gould JB, Dudley RA. The impact of statistical choices on neonatal intensive care unit quality ratings based on nosocomial infection rates. Arch Pediatr Adolesc Med. 2011;165(5):429-434.


Publisher
American Medical Association
Copyright
Copyright © 2011 American Medical Association. All Rights Reserved.
ISSN
1072-4710
eISSN
1538-3628
DOI
10.1001/archpediatrics.2011.53


Journal

Archives of Pediatrics & Adolescent Medicine, American Medical Association

Published: May 2, 2011

Keywords: child, investments, pediatrics, quality improvement, medical records, child health, quality of care, medical emergency team, emergency service (hospital), asthma, chronic disease

