Introduction  In order to achieve the positive outcomes with parents and children demonstrated by many home visiting models, home visiting services must be well implemented. The Michigan Home Visiting Initiative developed a tool and procedure for monitoring implementation quality across models, referred to as Michigan's Home Visiting Quality Assurance System (MHVQAS). This study field tested the MHVQAS. This article focuses on one of the study's evaluation questions: Can the MHVQAS be applied across models?

Methods  Eight local implementing agencies (LIAs) from four home visiting models (Healthy Families America, Early Head Start-Home Based, Parents as Teachers, Maternal Infant Health Program) and five reviewers participated in the study by completing site visits, tracking their time and costs, and completing surveys about the process. LIAs also submitted their most recent review by their model developer. The researchers conducted participant observation of the review process.

Results  Ratings on the MHVQAS were not significantly different between models. There were some differences in interrater reliability and perceived reliability between models. There were no significant differences between models in perceived validity, satisfaction with the review process, or cost to participate. Observational data suggested that cross-model applicability could be improved by assisting sites in relating the requirements of the tool to the specifics of their model.

Discussion  The MHVQAS shows promise as a tool and process to monitor implementation quality of home visiting services across models. The results of the study will be used to make improvements before the MHVQAS is used in practice.

Keywords  Home visiting · Implementation quality · Quality assurance · Quality improvement

Significance

Given the importance of implementation quality and fidelity, states responsible for administering grant funding to home visiting programs have the potential to play a critical role in ensuring that funds are directed toward agencies that implement quality programs and produce positive outcomes. However, few tools exist to assure and improve quality implementation of home visiting models across a system. This field test of Michigan's Home Visiting Quality Assurance System contributes to the limited evidence base on measuring the quality of home visiting program implementation by testing a tool that assesses quality and fidelity across models and informs decision making.

Correspondence: Julia Heany, jheany@mphi.org. Author affiliations: Michigan Public Health Institute, Center for Healthy Communities, 2342 Woodlake Drive, Okemos, MI 48864, USA; Michigan Public Health Institute, Public Health Services, 2364 Woodlake Drive, Suite 180, Okemos, MI 48864, USA; Michigan Department of Health and Human Services, 109 W. Michigan Ave, 7th FL, Lansing, MI 48933, USA.
Introduction

The positive impact of evidence-based home visiting models on family wellbeing has become more broadly understood since federal Maternal Infant Early Childhood Home Visiting (MIECHV) funding was authorized in 2010. MIECHV and other efforts to take evidence-based models to scale are driven by the assumption that evidence-based models are more likely to achieve intended outcomes than models that are untested. However, as evidence-based models transition from the well-controlled context of a research study to a more dynamic real world context, fidelity to the original model can be challenging.

The variability observed in the outcomes of home visiting evaluation studies has been attributed, in part, to lack of model adherence (Howard and Brooks-Gunn 2009). Effective implementation and maintaining model fidelity drive better outcomes (Carroll et al. 2007; Donelan-McCall et al. 2009; Russell et al. 2007), and attention to implementation is critical to ensuring effective and efficient use of programming dollars (Fixsen et al. 2005).

Each home visiting model establishes, reviews, and updates its own criteria for fidelity based on the experience and expertise of developers and implementers. In contrast, efforts to identify criteria critical for achieving outcomes across different models are limited. However, as part of the Supporting Evidence-Based Home Visiting to Prevent Child Maltreatment (EBHV) initiative, Daro (2010) and Mathematica Policy Research developed a concise set of quantitative measures of home visiting fidelity that they applied across five models and 46 implementing agencies. The study found significant variability in model fidelity, and the authors concluded that it was both possible and important for state administrators of home visiting funds to monitor implementation across models. Additionally, Korfmacher et al. (2012) developed and are testing a Home Visiting Program Quality Rating Tool (now titled Prevention Initiative Quality Rating Instrument), designed to work across home visiting models, that is grounded in the literature regarding what works in home visiting. Although reliability and validity testing were still in process and the authors cautioned against using the tool until more testing and development was completed, initial testing suggested it was possible to establish useful quality indicators across home visiting models to inform improvement efforts.

Both studies suggested that states responsible for administering grant funding to home visiting programs are in a position to play a critical role in monitoring implementation and assuring program quality. States can establish expectations for quality and fidelity, and they can provide direct support for quality improvement. Through the Home Visiting Collaborative Improvement and Innovation Network (HV CoIIN) and state-driven efforts, home visiting as a field is building its capacity in using quality improvement methods across models. However, there have been relatively few federal or state-level efforts to establish cross-model quality standards and quality assurance processes, and there has been little research focusing on building practical systems for monitoring and improving quality in home visiting model implementation. Systems for assessing the quality of similar services, such as those provided by community health workers, have been developed and tested nationally and internationally, and found to support quality service delivery (Crigler et al. 2013). However, these systems are not directly applicable to cross-model quality assurance in a home visiting context.

Building on early work to develop cross-model quality indicators and monitoring processes, the current study tested a comprehensive quality assurance tool designed to assess quality and fidelity across home visiting models and support quality improvement. Michigan's Home Visiting Quality Assurance System (MHVQAS) tool was developed as part of Michigan's MIECHV initiative. In Michigan, MIECHV funds 17 Local Implementing Agencies (LIAs), which served 1,963 families in fiscal year 2016. More broadly, state funded home visiting programs served over 34,000 families in that same year. Following validation of the tool, Michigan plans to require that all home visiting programs funded with MIECHV funds, or those that receive specifically-identified home visiting funding from the Michigan Department of Health and Human Services, are reviewed with the MHVQAS at least once every 3 years. Home visiting models will include Healthy Families America, Early Head Start, Parents as Teachers, and Nurse Family Partnership.
The study contributes to the limited but growing evidence base on measuring the quality of home visiting program model implementation by testing the validity and reliability of the tool, and examining the ability of the tool to assess quality and fidelity across models and inform decision making. In addition to this contribution to the literature, the MHVQAS has implications for policy, in that the tool provides a framework for a system-wide definition of quality and a mechanism for monitoring quality across home visiting models. A system-wide definition and tool could help states identify statewide support and technical assistance policies and practices that would aid the entire system, and identify opportunities for state-level quality improvement projects. The study aimed to: (1) determine if the MHVQAS tool produces valid and reliable results across models, and (2) determine if the MHVQAS procedure can be feasibly implemented and used to support quality improvement. This article focuses on one of the evaluation questions from the study: Can the MHVQAS be applied across models implemented in Michigan?

Methods

Participants

This mixed methods study conducted a field test of the MHVQAS tool and process, examining validity, reliability, costs, and utility. Five reviewers were enrolled into the study and trained to conduct reviews of local home visiting programs using the MHVQAS tool and process. Each reviewer had expertise in a different home visiting model: Healthy Families America (HFA), Parents as Teachers (PAT), Early Head Start (EHS), Nurse Family Partnership (NFP), and Maternal Infant Health Program (MIHP). Eight home visiting LIAs, with a total of 91 staff, were enrolled into the study to participate in the MHVQAS review process (two each of HFA, PAT, EHS, and MIHP). The researchers recruited NFP LIAs, but were unable to enroll any LIAs from that model. LIAs were chosen through model national office recommendations, based on the following inclusion criteria: success in implementing their model's fidelity indicators, a review by their model developer within the past two years or within the coming year, and use of a data system that captured basic information about implementation.
Data Collection and Analysis

MHVQAS Tool and Site Visits

The MHVQAS tool was developed through review of model requirements from evidence-based home visiting models, the research literature, MIECHV benchmarks and constructs, and existing instruments for monitoring quality (Daro 2010; Korfmacher et al. 2012), along with discussion with experts in the field. The tool was organized into eight domains, 19 standards, and 72 measures. Domains are broad categories that relate to an element of implementation quality (e.g., home visit content). Standards are expectations for quality under each domain (e.g., use of evidence-informed content). Measures define how performance will be assessed under each standard (e.g., policy that describes use of evidence-informed content). Measures within each standard require home visiting LIAs to demonstrate that they have put in place written policy or procedures (for Standards 1-17) and can demonstrate practice that aligns with model requirements, if applicable. Each measure is rated as fully met (i.e., meets the expectation), partially met (i.e., opportunity for improvement), or unmet (i.e., improvement plan needed). The criteria for ratings are specific to each measure. A list of domains and standards can be found in Table 1. An example of a measure can be found in Table 2.

Table 1  MHVQAS domains and standards

Recruitment and enrollment
  Standard 1: Home visiting implementing sites recruit and enroll families that meet eligibility criteria.
Home visitor and supervisor caseloads
  Standard 2: Home visiting implementing sites maintain appropriate home visitor caseloads.
  Standard 3: Home visiting implementing sites maintain appropriate supervisor caseloads.
Assessment of family needs and referral to services
  Standard 4: Home visiting implementing sites assess family needs and provide referrals when appropriate.
  Standard 5: Home visiting implementing sites conduct developmental screenings and provide referrals when appropriate.
Dosage and duration
  Standard 6: Home visiting implementing sites provide home visits with the frequency and duration necessary to achieve intended outcomes for families.
  Standard 7: Home visiting implementing sites retain families until they complete services and support families as they exit the program.
Home visit content
  Standard 8: Home visiting implementing sites individualize program delivery to family risks and needs, as well as family strengths and protective factors.
  Standard 9: Home visiting implementing sites use evidence-informed content/curriculum/curricula.
  Standard 10: Home visiting implementing sites build positive and productive relationships between home visitors and families.
Staff qualifications and supervision
  Standard 11: Home visiting implementing sites are staffed by qualified supervisors.
  Standard 12: Home visiting implementing sites are staffed by qualified home visitors.
  Standard 13: Home visiting implementing sites provide home visitors with supervision that reduces the emotional stress of home visiting, reduces burnout and turnover, and improves performance.
  Standard 14: Home visiting implementing sites provide supervisors with supervision that improves their skill and effectiveness.
Professional development
  Standard 15: Home visiting implementing sites provide staff with the training necessary to deliver the program as designed.
Organizational structure and support
  Standard 16: Home visiting implementing sites receive guidance and support from partners.
  Standard 17: Home visiting implementing sites have the infrastructure necessary to support high quality implementation.
  Standard 18: Home visiting implementing sites assure and improve program quality.
  Standard 19: Home visiting sites are integrated within the broader service system for children and families in their communities.

Table 2  Example of MHVQAS measure (Standard 9: Home visiting implementing sites use evidence-informed content/curriculum/curricula)

Measure: The home visiting program has a policy that describes the use of evidence-informed content/curriculum/curricula used by home visitors and how it will be incorporated into visit plans. Policy shall reflect model expectations, if applicable.

Expectation and required components: The site will provide documentation that describes (1) the content that is delivered during home visits, which must align with model requirements, if applicable (see model guidance); (2) procedures for incorporating content into visit plans, including procedures for modifying the order, eliminating components, and adding components; and (3) expectations for covering the content over the course of service.

Review procedure: The site will provide documentation that describes the evidence-informed content covered during home visits. The site reviewer will assess the written documentation for each required component and the degree to which the policy aligns with model expectations. If necessary, the site reviewer will ask the site for clarification during the site visit.

Rating scale: 3 (fully met) — all three components are clear, complete, and aligned with model expectations; 2 (partially met) — fewer than three components are clear, complete, and aligned with model expectations; 1 (unmet) — policy does not exist, does not meet model expectations, or does not describe the use of evidence-informed content that is delivered during home visits.
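To make the structure in Tables 1 and 2 concrete, the sketch below shows one way measure-level ratings could be organized and rolled up into the standard-level mean scores referred to later in the analysis. It is a minimal illustration, not part of the MHVQAS itself; the measure identifiers and ratings are hypothetical.

```python
# Illustrative sketch only (not part of the MHVQAS): measure-level ratings
# (3 = fully met, 2 = partially met, 1 = unmet) nested under standards,
# rolled up into standard-level mean scores for one LIA.
from statistics import mean

# Hypothetical ratings for one LIA, keyed by standard, then by measure id.
lia_ratings = {
    "Standard 9":  {"9.1": 3, "9.2": 2, "9.3": 3},
    "Standard 13": {"13.1": 2, "13.2": 2},
}

def standard_scores(ratings_by_standard):
    """Mean measure rating for each standard (one LIA)."""
    return {std: mean(measures.values())
            for std, measures in ratings_by_standard.items()}

print(standard_scores(lia_ratings))
# -> Standard 9: ~2.67, Standard 13: 2.0
```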
The tool was designed to be completed by trained reviewers. For the field study, trained reviewers completed review of documentation and data prior to and during a daylong site visit. During the site visit, reviewers examined client files, supervision documentation, and meeting minutes. They also received a tour of the facility and interviewed program administrators, supervisors, and home visitors. Reliability of review procedures was supported through training and use of standardized materials such as interview questions and worksheets.

Two reviewers scored the LIA independently and did not discuss ratings. After each site visit, both reviewers independently completed a Quality Report that summarized their ratings. Reviewers then met to compare their ratings and come to consensus by discussing notes they had made, referencing back to the MHVQAS tool, and seeking understanding of how their counterpart made their decision. Consensus was consistently met through these means. After consensus, the final Quality Report was sent to the LIA, with suggestions for improvement on measures that were 'partially met' or 'unmet,' and the reviewers discussed the Quality Report with the LIA by phone conference. All LIAs received a copy of the MHVQAS before the site review, with detailed descriptions of the requirements for each measure and the criteria used to determine the ratings.

MHVQAS tool ratings were compared across models by computing a mean score for each LIA for each standard. Given the small sample size of four models, and the fact that ratings were not normally distributed, scores were compared across models using a Kruskal-Wallis H test. Interrater reliability was analyzed by computing Cohen's Kappa between the two reviewers' ratings on each measure (n = 72) for each site visit. To examine differences in interrater reliability between models, the Cohen's Kappa table was combined for the two LIAs in a model, and compared with the combined Cohen's Kappa tables of the LIAs from the other models.
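As an illustration of the two comparisons described above, the hedged sketch below runs a Kruskal-Wallis H test and computes Cohen's Kappa using scipy and scikit-learn on made-up data; the variable names and example ratings are assumptions, not the study data.

```python
# Illustrative sketch of the two analyses described above, on made-up data.
from scipy.stats import kruskal
from sklearn.metrics import cohen_kappa_score

# Hypothetical standard-level mean scores for the eight LIAs, grouped by model.
scores_by_model = {
    "HFA":  [2.8, 2.6],
    "PAT":  [2.5, 2.9],
    "EHS":  [2.7, 2.4],
    "MIHP": [2.6, 2.8],
}

# Kruskal-Wallis H test: do score distributions differ across models?
h_stat, p_value = kruskal(*scores_by_model.values())
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.3f}")

# Interrater reliability for one site visit: Cohen's Kappa between the two
# reviewers' ratings on the measures (here, a short hypothetical list).
reviewer_a = [3, 3, 2, 1, 3, 2, 2, 3]
reviewer_b = [3, 2, 2, 1, 3, 2, 3, 3]
print(f"Cohen's kappa = {cohen_kappa_score(reviewer_a, reviewer_b):.3f}")
```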
Observation

The researchers observed key parts of the review process, including site visits, consensus meetings between reviewers, and calls with LIAs to review the Quality Report. During these observations, fieldnotes were taken on aspects of the process that went well, and aspects that created challenges. Fieldnotes were analyzed by two members of the research team, who independently reviewed fieldnotes and identified common themes related to reliability, validity, usefulness of the MHVQAS process, and cross-model applicability. They then met to compare themes, and worked together to come to consensus.

Surveys

A 48-item Reviewer Satisfaction Survey was sent after each of the eight site visits to the two reviewers for each LIA (n = 16, response rate 100%). It assessed factors such as the ease of making decisions about whether indicators were met, strengths and limitations of the tool, and the efficiency of the process. A 50-item LIA Staff Satisfaction Survey was sent to home visiting staff after each site visit (n = 69, response rate 83.1%). The survey assessed factors such as the efficiency of the process, strengths and limitations of the tool, and the likelihood that the LIA would use findings to inform improvement.

Face validity of the tool was analyzed using scales of three items from the LIA Staff Satisfaction Survey (Cronbach's α = 0.833) and two items from the Reviewer Satisfaction Survey (α = 0.836) that asked about perceptions of the tool related to validity (e.g., "The standards reflect key drivers of quality in home visiting") using a Likert scale ranging from 1 (strongly disagree) to 6 (strongly agree). Perceived reliability was analyzed using scales of 13 items from the LIA Staff Satisfaction Survey (α = 0.978) and 13 items from the Reviewer Satisfaction Survey (α = 0.979) that asked about perceptions of the tool and guidance document related to reliability (e.g., "The measures to be assessed are clear") using a Likert scale ranging from 1 (strongly disagree) to 6 (strongly agree). Mean scale scores for face validity and perceived reliability were compared between the four models using analysis of variance.
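The sketch below illustrates, on hypothetical Likert responses, the two steps used for these survey scales: estimating internal consistency with Cronbach's alpha and comparing mean scale scores across models with a one-way analysis of variance. The data, array shape, and helper function are assumptions for illustration only.

```python
# Illustrative sketch: Cronbach's alpha for a survey scale, then a one-way
# ANOVA comparing mean scale scores across models. Data are hypothetical.
import numpy as np
from scipy.stats import f_oneway

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of Likert responses (1-6)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical 3-item face validity scale answered by six respondents.
responses = np.array([
    [5, 4, 5],
    [4, 4, 4],
    [6, 5, 6],
    [3, 4, 3],
    [5, 5, 4],
    [4, 3, 4],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.3f}")

# Hypothetical mean scale scores grouped by home visiting model.
hfa, pat, ehs, mihp = [4.7, 5.0, 4.5], [4.2, 4.8], [4.6, 4.4, 4.9], [3.9, 4.3]
f_stat, p_value = f_oneway(hfa, pat, ehs, mihp)
print(f"ANOVA F = {f_stat:.2f}, p = {p_value:.3f}")
```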
Each LIA was invited to complete a LIA Staff 6 Month Follow-Up Survey (n = 22, response rate 100% of LIAs) that focused on how results were communicated and to whom, which findings were helpful and unhelpful, and if/how findings were used to inform improvement activities. Usefulness of the process was examined using a scale of three items (α = 0.739) that asked whether the Quality Report had informed decision making (e.g., "The 'opportunities for improvement' described in the report were actionable") using a Likert scale ranging from 1 (strongly disagree) to 6 (strongly agree). Mean scale scores for usefulness of the process were compared between the four models using analysis of variance.

Open ended questions from the surveys were analyzed using content analysis. Two members of the research team independently reviewed answers and identified explicit and implicit content related to reliability, validity, usefulness, and cross-model applicability. They then used an inductive analysis to identify key themes for each content area. The two researchers then met to compare themes, and worked together to come to consensus by discussing similarities and differences between themes, and returning to the data to refine themes as needed.

One important aspect of the feasibility of implementing a quality assurance process is cost. Therefore, a Cost Tracking Survey was sent after each of the eight site visits to the two reviewers for each LIA (n = 13, response rate 81.3%) and to all home visiting staff (n = 68, response rate 81.9%). Cost Tracking Surveys were used to calculate the total cost for each site, including staff time and additional costs, and the mean number of hours spent for reviewers. Mean total time and mean total cost of time were compared between the four models using analysis of variance. All surveys were administered through Qualtrics, an online survey platform.

Model Review Reports

Each model has a process for reviewing fidelity of implementation, and produces a report that describes its findings. These reports were collected from all eight participating LIAs to assess criterion validity after the site visit and Quality Report were completed. Because reports differ by model, the first step in the analysis of model reviews was to cross-walk the MHVQAS with each model review to identify the underlying concepts measured by each and organize them into categories. Because the MHVQAS is organized into domains, standards, and measures, a similar structure was used to organize model reviews, creating one set of domains and standards that were common across models and could be used for comparison. Once each measure was categorized, overall performance in that area on the model review was compared to overall performance in that area on the MHVQAS review by determining whether both reviews identified strengths or gaps in each category. The researchers then tallied the number of domains where the MHVQAS findings aligned with the model findings for each LIA. Figure 1 shows an overview of the MHVQAS procedures and study procedures.

Fig. 1  Michigan's home visiting quality assurance system field study procedures (flow diagram not reproduced here).

This study was reviewed and approved by the Michigan Public Health Institute's Institutional Review Board (IRB) and the Michigan Department of Health and Human Services IRB.

Results

Summary scores for each MHVQAS standard were compared by model. Kruskal-Wallis H tests showed that there was not a statistically significant difference in ratings on any of the standards between the different home visiting models (χ²(3) ranged from 0.81 to 7.0; p ranged from 0.07 to 0.85).

Interrater reliability was compared across models as well. The Cohen's Kappa (κ) agreement scores for each model ranged from 0.240 (fair agreement) to 0.452 (moderate agreement). The combined κ of the HFA LIAs had higher agreement than the other models (95% CI [0.048, 0.330]). The combined κ of the PAT LIAs had the lowest agreement compared with the other models (see Fig. 2).

Fig. 2  Single-model κ scores compared with the combined κ of the other models: the difference between a single model's κ and the combined κ of the remaining models, with error bars from the confidence interval of the difference. *Confidence interval of the difference does not contain 0. (Chart not reproduced here; x-axis: Maternal Infant Health Program, Healthy Families America, Early Head Start, Parents as Teachers; y-axis: difference in Kappa.)
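The comparison plotted in Fig. 2 (a single model's κ against the combined κ of the remaining models, with a confidence interval on the difference) could be approximated as in the sketch below, which bootstraps the difference over measures. This is an illustrative reconstruction on simulated ratings, not the authors' exact procedure.

```python
# Illustrative sketch (hypothetical data): difference between one model's
# combined kappa and the combined kappa of the remaining models, with a
# bootstrap percentile CI over measures, as plotted in Fig. 2.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

def simulated_reviewer_pair(n_measures, agreement):
    """Two reviewers' ratings (1-3) that agree with the given probability."""
    r1 = rng.integers(1, 4, n_measures)
    r2 = np.where(rng.random(n_measures) < agreement, r1, rng.integers(1, 4, n_measures))
    return r1, r2

model_r1, model_r2 = simulated_reviewer_pair(144, 0.70)  # model of interest (2 LIAs x 72)
rest_r1, rest_r2 = simulated_reviewer_pair(432, 0.60)    # remaining models combined

def kappa_diff(i, j):
    return (cohen_kappa_score(model_r1[i], model_r2[i])
            - cohen_kappa_score(rest_r1[j], rest_r2[j]))

all_model = np.arange(len(model_r1))
all_rest = np.arange(len(rest_r1))
observed = kappa_diff(all_model, all_rest)

# Resample measures with replacement and recompute the difference each time.
boot = [kappa_diff(rng.choice(all_model, len(all_model)),
                   rng.choice(all_rest, len(all_rest)))
        for _ in range(2000)]
low, high = np.percentile(boot, [2.5, 97.5])
print(f"kappa difference = {observed:.3f}, 95% bootstrap CI [{low:.3f}, {high:.3f}]")
```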
Table 3  Differences in reliability, validity, satisfaction, and usefulness of the MHVQAS across home visiting models (possible scale scores ranged from 1 to 6)

  Reliability of MHVQAS
    LIA Staff (n = 49): Mean 4.49, SD 0.81; ANOVA d.f. 3, F = 0.004, p = 1.000
    Reviewers (n = 16): Mean 4.44, SD 1.14; ANOVA d.f. 3, F = 4.84, p = 0.020
  Validity of MHVQAS
    LIA Staff (n = 44): Mean 4.44, SD 0.82; ANOVA d.f. 3, F = 0.81, p = 0.496
    Reviewers (n = 16): Mean 4.91, SD 0.90; ANOVA d.f. 3, F = 1.58, p = 0.247
  Satisfaction with MHVQAS
    LIA Staff (n = 35): Mean 4.81, SD 0.62; ANOVA d.f. 3, F = 1.68, p = 0.192
    Reviewers (n = 16): Mean 4.95, SD 0.46; ANOVA d.f. 3, F = 0.36, p = 0.782
  Usefulness of MHVQAS
    LIA Staff (n = 21): Mean 4.65, SD 0.60; ANOVA d.f. 3, F = 0.88, p = 0.471

Analysis of variance of the LIA Staff Survey Reliability scale showed that there was not a statistically significant difference between the four models (see Table 3). Analysis of variance of the Reviewer Survey Reliability scale showed that there was a statistically significant difference between the four models. A Tukey post-hoc test revealed that reliability for MIHP site reviews (M = 3.10, SD = 1.033) was significantly lower than HFA (M = 5.12, SD = 0.263) and PAT (M = 5.08, SD = 0.548). Analysis of variance of the LIA Staff and Reviewer Validity scales showed that there was not a statistically significant difference between the four models. Analysis of variance of the LIA Staff and Reviewer Satisfaction scales showed that there was not a statistically significant difference between the four models. Analysis of variance of the 6 Month Follow-Up Survey questions about whether the Quality Report informed decision making showed that there was not a statistically significant difference between the four models.
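For the one comparison that was significant (the Reviewer Survey Reliability scale), the follow-up Tukey HSD test could be run as in the sketch below; the scores listed are hypothetical stand-ins, not the study data.

```python
# Illustrative sketch: one-way ANOVA followed by a Tukey HSD post-hoc test
# on reviewer-rated reliability scale scores grouped by model (hypothetical data).
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

scores = {
    "HFA":  [5.2, 5.0, 5.1, 5.2],
    "PAT":  [5.3, 4.6, 5.4, 5.0],
    "EHS":  [4.5, 4.0, 4.8, 4.3],
    "MIHP": [3.1, 2.3, 4.2, 2.8],
}

groups = list(scores.values())
f_stat, p_value = f_oneway(*groups)
print(f"ANOVA F = {f_stat:.2f}, p = {p_value:.3f}")

# Tukey HSD compares every pair of models while controlling familywise error.
values = np.concatenate(groups)
labels = np.repeat(list(scores.keys()), [len(g) for g in groups])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```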
The cross-walk of the MHVQAS and LIA model reviews identified that, for each of the four models, the MHVQAS included standards that were not included in the model's own standards. However, each of the MHVQAS standards was included in at least one model. One domain that MIHP and HFA included was not explicitly included in the MHVQAS, which was family rights. While the MHVQAS had a standard around family feedback, MIHP and HFA had additional standards around accommodating families based on specific characteristics, such as language, race/ethnicity, and disability. EHS had additional standards around environmental health and safety, but most of these were specific to center-based services.

Findings from the MHVQAS and model reviews were then compared for those standards that were similar across both systems to assess criterion validity. The MHVQAS review and the model reviews had some differences in ratings in areas where the MHVQAS and models shared similar standards. On average, the MHVQAS findings aligned with the model findings on 87% of the standards. However, there did not appear to be any standards that were systematically rated differently by the MHVQAS.

Observational data and answers to open-ended questions on the Satisfaction Surveys indicated that, while the domains, standards, and measures in the MHVQAS seemed relevant across models, LIAs sometimes had difficulty relating the requirements of the tool to the specifics of their model, particularly in regards to the types of documentation to provide. Reviewers said it would be helpful to have more guidance for scoring when a MHVQAS requirement is not part of a model. Additionally, participants commented that the MHVQAS included measures that were not part of their model, which was seen as both a positive and a negative. As a positive, it could provide opportunities for improvement outside of the model standards. As a negative, it could feel like greater scrutiny, and it could be very time consuming to meet the requirements in addition to the model requirements.

Analysis of variance showed that there was not a statistically significant difference between the four models on time spent, the cost of time spent, or the total cost of preparing for and participating in the site review (see Table 4; Fig. 3).

Table 4  Differences in cost of the MHVQAS across home visiting models
  Hours spent per person (n = 68): Mean 13.09, Min 0, Max 58; ANOVA d.f. 3, F = 0.865, p = 0.464
  Cost of time per person (n = 62): Mean $346.57, Min $0.00, Max $2154.73; ANOVA d.f. 3, F = 0.581, p = 0.630
  Total LIA cost (N = 8): Mean $2722, Min $1166, Max $4204; ANOVA d.f. 3, F = 2.160, p = 0.235

Fig. 3  Mean hours staff spent preparing for and participating in the review process (n = 68). (Bar chart not reproduced here; bars for Total, Maternal Infant Health Program, Healthy Families America, Early Head Start, and Parents as Teachers, with separate series for mean hours preparing per staff and mean hours on the day of review.)

Altogether, reviewers spent an average of 28.90 total hours on preparing for and conducting site reviews, and preparing the Quality Report. Analysis of variance showed that there was not a statistically significant difference between the four models on total time spent for reviewers, F(3, 8) = 0.512, p = 0.685.

Discussion

The findings suggest that the MHVQAS can be applied across models, but would benefit from clarification of expectations for models that do not specify requirements in key areas of the tool. While the results showed quite a large degree of variation in performance across LIAs, no one model outperformed or underperformed others. Additionally, validity was not different by model, suggesting that the MHVQAS measures key components of quality in home visiting regardless of model. Ratings of satisfaction with the process by both reviewers and LIAs were also consistent across models, as were estimates of time and cost by both LIAs and reviewers.

Measures of reliability were different across some models. Interrater reliability was somewhat higher for HFA LIAs and somewhat lower for PAT LIAs. This finding may reflect differences in the extensiveness of these two models' expectations for documentation and existing review procedures. While the study found no differences by model in LIA staff perceptions of reliability, reviewers felt that reliability was lower for MIHP as compared with HFA and PAT. As a Michigan-developed home visiting model, MIHP's standards do not include expectations in some key areas of the MHVQAS tool, such as caseloads or dosage, and the MHVQAS tool does not set expectations for Michigan LIAs in these areas outside of what models require. Reviewers noted that this created ambiguity in the review process that could lead to inconsistent findings.
This difference in perceived reliability can be addressed by offering clarification across measures regarding state expectations for models that do not specify requirements in key areas of the tool.

When compared with model developer reviews, the MHVQAS included a very comprehensive set of standards. The comprehensive nature of the MHVQAS was considered both a strength and a limitation of the tool. It provided an opportunity to learn about strategies for assuring quality home visiting programs beyond the constraints of a model, and it highlighted new opportunities for improvement. However, it felt burdensome for some LIAs, and both reviewers and LIAs noted that it was challenging to know how to meet a standard when it is not part of a particular model. This challenge could be addressed by providing clearer guidance on state expectations when a specific model does not have expectations in a particular area. One gap in MHVQAS standards that was highlighted by the cross-model standard comparison was family rights, which could be addressed by building in a standard related to this aspect of quality.

Both reviewers and LIA staff noted that one way to support all models in completing the MHVQAS efficiently would be to develop guidance that aligns model-specific documentation with specific measures on the MHVQAS. LIAs sometimes found it challenging to recognize how the requirements of their model fit with MHVQAS standards and measures. Developing such a resource could improve the efficiency of the process.

Implementation quality drives outcomes in home visiting programs (Carroll et al. 2007; Donelan-McCall et al. 2009; Russell et al. 2007), and ensures effective and efficient use of programming dollars (Fixsen et al. 2005).
States responsible for administering grant funding to home visiting programs are well-positioned to ensure that funds are directed toward agencies that implement quality programs and produce positive outcomes. States can also establish system-wide expectations for quality and fidelity, and support for quality improvement. However, few tools exist to support states in developing quality assurance and quality improvement tools and procedures that work across a home visiting system and the different models within that system. Research on building practical systems for this level of monitoring and improvement is also lacking.

The results of this study indicate that the MHVQAS shows promise as a tool and process to monitor implementation quality of home visiting services and inform decision making across models. These findings were used to inform revisions to the MHVQAS, which were put into practice with a pilot set of MIECHV funded LIAs in the spring of 2018. The pilot will be used to inform improvements to the tool, and to identify opportunities for individualized technical assistance, common professional development needs across LIAs, model implementation challenges, and best practices. While NFP did not participate in the study, most NFP programs in Michigan receive MIECHV funding, and will be required to participate in MHVQAS reviews. Based on the study finding that the tool performed comparably across four different models, we anticipate a similar result with NFP. Michigan is also considering using the MHVQAS across all evidence-based home visiting models, regardless of funding source, to facilitate development of statewide supports and technical assistance.

MIECHV funded states will benefit from tools and procedures that support their efforts to monitor implementation across evidence-based home visiting models. The MHVQAS offers a validated tool that will support states in meeting MIECHV's expectation that states provide quality home visiting services to vulnerable families. The Michigan Home Visiting Initiative (MHVI) will be ready to share copies of the MHVQAS for use by other states beginning in fall 2018. Please contact the MHVI (MDHHS-HVInitiative@michigan.org) for a copy of the tool and to receive training on implementing the MHVQAS.

Limitations

This study has several limitations. Using model reviews to measure criterion validity was challenged by the differences between each review's specific measures and rating systems. The reviews did not have a one-to-one alignment at the measure level or between rating systems, which required looking more loosely at alignment at the standard level. Also, differences in ratings could have been attributable to a validity problem with the MHVQAS, or to either improvement based on model review findings or drift from model expectations following the model review.

The study also had a very limited sample size that included LIAs selected for their experience implementing their model. A larger or more representative sample may have different perceptions of reliability and validity, different costs to participate, and potentially more variation between models. As such, it will be important to continue to collect data on key aspects of MHVQAS performance as it is implemented in practice to assure that the findings of this study hold true when a broader set of LIAs is assessed using the tool.

Finally, while an NFP representative participated in the design of the MHVQAS, the study cannot speak to how the MHVQAS performs in the context of an NFP program, because no NFP LIAs agreed to participate.

Acknowledgements  This research was supported by a Health Resources and Services Administration (HRSA) Maternal, Infant and Early Childhood Home Visiting (MIECHV) Program Expansion Grant.

Open Access  This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
References

Carroll, C., Patterson, M., Wood, S., Booth, A., Rick, J., & Balain, S. (2007). A conceptual framework for implementation fidelity. Implementation Science, 2(1), 40.

Crigler, L., Hill, K., Furth, R., & Bjerregaard, D. (2013). Community health worker assessment and improvement matrix (CHW AIM): A toolkit for improving community health worker programs and services. Revised version. Bethesda, MD: University Research Co., USAID Health Care Improvement Project.

Daro, D. (2010). Replicating evidence-based home visiting models: A framework for assessing fidelity. Brief 3. Washington, DC: Children's Bureau, Administration for Children and Families, U.S. Department of Health and Human Services.

Donelan-McCall, N., Eckenrode, J., & Olds, D. L. (2009). Home visiting for the prevention of child maltreatment: Lessons learned during the past 20 years. Pediatric Clinics of North America, 56(2), 389-403. https://doi.org/10.1016/j.pcl.2009.01.002.

Fixsen, D. L., Naoom, S. F., Blase, K. A., Friedman, R. M., & Wallace, F. (2005). Implementation research: A synthesis of the literature (FMHI Publication #231). Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute, The National Implementation Research Network.

Howard, K. S., & Brooks-Gunn, J. (2009). The role of home-visiting programs in preventing child abuse and neglect. The Future of Children, 19, 119-146.

Korfmacher, J., Laszewski, A., Sparr, M., & Hammel, J. (2012). Assessing home visiting program quality: A final report to the Pew Center on the States. Philadelphia: The Pew Charitable Trusts.

Russell, B. S., Britner, P. A., & Woolard, J. L. (2007). The promise of primary prevention home visiting programs: A review of potential outcomes. Journal of Prevention & Intervention in the Community, 34(1-2), 129-147.