Standardizing an approach to the evaluation of implementation science proposals

Abstract

Background: The fields of implementation and improvement sciences have experienced rapid growth in recent years. However, research that seeks to inform health care change may have difficulty translating core components of implementation and improvement sciences within the traditional paradigms used to evaluate efficacy and effectiveness research. A review of implementation and improvement sciences grant proposals within an academic medical center using a traditional National Institutes of Health framework highlighted the need for tools that could assist investigators and reviewers in describing and evaluating proposed implementation and improvement sciences research.

Methods: We operationalized existing recommendations for writing implementation science proposals as the ImplemeNtation and Improvement Science Proposals Evaluation CriTeria (INSPECT) scoring system. The resulting system was applied to pilot grants submitted to a call for implementation and improvement science proposals at an academic medical center. We evaluated the reliability of the INSPECT system using Krippendorff’s alpha coefficients and explored the utility of the INSPECT system to characterize common deficiencies in implementation research proposals.

Results: We scored 30 research proposals using the INSPECT system. Proposals received a median cumulative score of 7 out of a possible score of 30. Across individual elements of INSPECT, proposals scored highest for criteria rating evidence of a care or quality gap. Proposals generally performed poorly on all other criteria. Most proposals received scores of 0 for criteria identifying an evidence-based practice or treatment (50%), conceptual model and theoretical justification (70%), setting’s readiness to adopt new services/treatment/programs (53%), implementation strategy/process (67%), and measurement and analysis (70%). Inter-coder reliability testing showed excellent reliability (Krippendorff’s alpha coefficient 0.88) for the application of the scoring system overall and demonstrated reliability scores ranging from 0.77 to 0.99 for individual elements.

Conclusions: The INSPECT scoring system presents new scoring criteria with a high degree of inter-rater reliability and utility for evaluating the quality of implementation and improvement sciences grant proposals.

Keywords: Implementation research, Improvement research, Grant writing, Grant scoring

* Correspondence: ecrable@bu.edu
Evans Center for Implementation and Improvement Sciences, Boston University School of Medicine, 88 East Newton Street, Vose 216, Boston, MA 02118, USA; Department of Health Law, Policy & Management, Boston University School of Public Health, Boston, MA, USA. Full list of author information is available at the end of the article.

© The Author(s). 2018 Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
Background

The recognition that experimental efficacy studies alone are insufficient to improve public health [1] has led to the rapid expansion of the fields of implementation and improvement sciences [2–5]. However, studies that aim to identify strategies that facilitate adoption, sustainability, and scalability of evidence may not translate well within traditional efficacy and effectiveness research paradigms [6].

The need for new tools to aid investigators and research stakeholders in implementation science became clear during evaluation of grant submissions to the Evans Center for Implementation and Improvement Sciences (CIIS) at Boston University. CIIS was established in 2016 to promote scientific rigor in new and ongoing projects aimed at increasing the use of evidence and improving patient outcomes within an urban, academic, safety net medical center. As part of CIIS’s goal to foster rigorous implementation and improvement methods, CIIS established a call for pilot grant applications for implementation and improvement sciences [7]. Proposals were peer-reviewed using traditional National Institutes of Health (NIH) scoring criteria [8]. Through two cycles of grant applications, proposal reviewers identified a need for improved evaluation criteria capable of identifying specific strengths and weaknesses in order to rate the potential impact of implementation and/or improvement study designs.

We describe the development and evaluation of the ImplemeNtation and Improvement Science Proposal Evaluation CriTeria (INSPECT): a tool for the standardized evaluation of implementation and improvement research proposals. The INSPECT tool seeks to operationalize criteria proposed by Proctor et al. as “key ingredients” that constitute a well-crafted implementation science proposal, which operate within the NIH proposal scoring framework [6].

Methods

Assessment of need
CIIS released requests for pilot grant applications focused on implementation and improvement sciences in April 2016 and April 2017 [7]. The request for applications described an opportunity for investigators to receive up to $15,000 for innovative implementation and improvement sciences research on any topic related to improving the processes and outcomes of health care delivery in safety net settings. CIIS funds pilot grants with the goal of providing investigators with the opportunity to obtain preliminary data for further research. Proposals were required to include a specific aims page and a three-page research plan structured within the traditional NIH framework, with subheadings for significance, innovation, approach, environment, and research team. The NIH framework was required because it mirrors the proposal structure investigators must use for NIH grant applications. A study budget and justification, as well as research team biographical sketches, were required with no page limit restrictions. CIIS received 30 pilot grant applications covering a broad array of content areas, such as smoking cessation, hepatitis C, diabetes, cancer, and neonatal abstinence syndrome.

Six researchers with experience in implementation and improvement sciences served as grant reviewers. Four reviewers scored each proposal. Reviewers evaluated the quality of pilot study proposals, assigning numerical scores from 1 to 9 (1 = exceptional, 9 = poor) for each of the NIH criteria (significance, innovation, investigators, approach, environment, overall impact) [8]. CIIS elected to use the NIH criteria to evaluate the pilot grant applications because these are the criteria used by the NIH peer review system to evaluate the scientific and technical merit of grant proposals. The CIIS grant review team held a “study section” to review and discuss the proposals. However, during that meeting, reviewers provided feedback that the NIH evaluation criteria, based in the traditional efficacy and effectiveness research paradigm, did not offer sufficient guidance for evaluating implementation and improvement science proposals, nor did they provide enough specificity for proposal writers who are less experienced in implementation research. Grant reviewers requested new proposal evaluation criteria that would better inform score decisions and feedback to proposal writers on specific aspects of implementation science, including measuring the strength of implementation study design, strategy, feasibility, and relevance.

Despite the challenges of using the traditional NIH evaluation criteria, the review panel used those criteria to score all of the grants received during the first 2 years of proposal requests. CIIS pilot grant funding was awarded to applications that received the lowest (best) scores under the NIH criteria and received positive feedback from the review panel.

The request for more explicit implementation science evaluation criteria prompted the CIIS research team to conduct a qualitative needs assessment of all 30 pilot study applications to determine how the proposals described study designs, implementation strategies, and other aspects of proposed implementation and improvement research. Three members of the CIIS research team (MLD, AJW, DB) independently open-coded pilot proposals to identify properties related to core implementation science concepts or efficacy and effectiveness research [9]. The team identified common themes in the proposals, including an emphasis on efficacy hypotheses, descriptions of untested interventions, and the absence of implementation strategies and conceptual frameworks. The consistent lack of features identified as important aspects of implementation science reinforced the need for criteria that specifically addressed implementation science approaches to guide both proposal preparation and evaluation.

Operationalizing scoring criteria
We identified Proctor et al.’s “ten key ingredients” for writing implementation research proposals [6] as an appropriate framework to guide and evaluate proposals. We operationalized the “ingredients” into a scoring system. To construct the scoring system, a four-point scale (0–3) was created for each element. In general, a score of 3 was given for an element if all of the criteria requirements for the element were fully met; a score of 2 was given if the criteria were somewhat, but not fully, addressed; a score of 1 was given if the ingredient was mentioned but not operationalized in the proposal or linked to the rest of the study; and a score of 0 was given if the element was not addressed at all in the proposal. Table 1 illustrates the INSPECT scoring system for the 10 items, in which proposals receive one score for each of the 10 ingredients, for a cumulative score between 0 and 30.
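As a concrete illustration of the scale described above, the following minimal R sketch encodes the 10 INSPECT elements (labels abbreviated from Table 2) and sums one proposal’s element scores into a cumulative score. It is illustrative only; the element ratings are invented and this is not the scoring code used in the study.

```r
# Illustrative sketch (not the authors' code): the 10 INSPECT elements,
# each scored 0-3, summed to a cumulative score between 0 and 30.
inspect_items <- c(
  "care_or_quality_gap", "evidence_based_treatment", "conceptual_model",
  "stakeholder_priorities", "setting_readiness", "implementation_strategy",
  "team_experience", "feasibility", "measurement_analysis", "policy_funding"
)

# Hypothetical ratings for a single proposal (one 0-3 score per element).
example_scores <- c(3, 1, 0, 1, 0, 0, 2, 1, 0, 1)
names(example_scores) <- inspect_items

stopifnot(all(example_scores %in% 0:3))  # each element must be rated 0-3
cumulative <- sum(example_scores)        # possible range: 0-30
cumulative
#> [1] 9
```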
Testing INSPECT
We used the pilot study proposals submitted to CIIS to develop and evaluate the utility and reliability of the INSPECT scoring system. Initially, two research team members (ELC, DB) independently applied the 10-element criteria to 7 of the 30 pilot grant proposals. Four team members (MLD, AJW, ELC, DB) then met to discuss these initial results and achieve consensus on the scoring criteria. Two team members (ELC, DB) then independently scored the remaining 23 pilot study applications using the revised scoring system. Both reviewers recorded brief justifications for each of the ten scores assigned to individual study proposals. The two coders (ELC, DB) then met to compare scores, share scoring justifications, and determine the final item-specific scores for each proposal using group consensus.

Inter-coder reliability with the scoring protocol was measured using Krippendorff’s alpha to assess observed and expected disagreement between the two coders’ initial individual item scores [10, 11]. An alpha coefficient of 0.70 was deemed a priori as the lowest acceptable level of agreement to establish reliability of the new scoring protocol [10, 11].
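As an illustration of this reliability check, the following R sketch computes Krippendorff’s alpha for two coders’ item-level ratings using the kripp.alpha() function from the irr package. The ratings are invented, and the use of the irr package is an assumption; the article does not state which R implementation the authors used.

```r
# Minimal sketch of the inter-coder reliability check (invented data).
# Assumes the 'irr' package; install.packages("irr") if needed.
library(irr)

# Rows = the two coders (e.g., ELC and DB); columns = item-level ratings (0-3)
# pooled across proposals. Real data would hold 10 ratings per proposal.
ratings <- rbind(
  coder1 = c(3, 1, 0, 2, 0, 1, 2, 0, 0, 1, 3, 2),
  coder2 = c(3, 1, 0, 1, 0, 1, 2, 0, 1, 1, 3, 2)
)

# Ordinal alpha, since the 0-3 ratings are ordered categories.
alpha <- kripp.alpha(ratings, method = "ordinal")
alpha$value  # compare against the a priori 0.70 threshold
```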
Frequency analyses were conducted to determine the distribution of final element-specific scores (0–3) across all proposals. We calculated a correlation coefficient to assess the association between proposal scores assigned using the NIH framework and scores assigned using INSPECT. All calculations were performed in R version 3.3.2 [12].
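A sketch of how these descriptive and correlation analyses could be run in R is shown below. The data are simulated purely to show the shape of the calculation, and because the article does not name the correlation estimator, the default Pearson test shown here is an assumption.

```r
# Illustrative sketch of the descriptive and correlation analyses (invented data).
set.seed(1)
n_proposals <- 30

# Simulated element-level scores: 30 proposals x 10 INSPECT elements, each 0-3.
scores <- matrix(sample(0:3, n_proposals * 10, replace = TRUE,
                        prob = c(0.5, 0.25, 0.15, 0.1)),
                 nrow = n_proposals, ncol = 10)

# Frequency distribution of final element-specific scores (0-3) per element.
apply(scores, 2, function(x) table(factor(x, levels = 0:3)))

# Cumulative INSPECT scores (0-30) and their distribution, as in Fig. 1.
cumulative <- rowSums(scores)
summary(cumulative)
hist(cumulative, breaks = seq(0, 30, by = 2),
     xlab = "Cumulative Proposal Scores", ylab = "Number of Study Proposals",
     main = "Distribution of cumulative INSPECT scores")

# Association between NIH-framework scores (1 = best, 9 = worst) and INSPECT
# cumulative scores; a negative coefficient indicates that better (lower) NIH
# scores accompany higher INSPECT scores.
nih_scores <- sample(1:9, n_proposals, replace = TRUE)
cor.test(nih_scores, cumulative)
```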
Results

Iterative review of the 30 research proposals using Proctor et al.’s “ten key ingredients” resulted in the development and testing of the INSPECT system for assessing implementation and improvement science proposals. Figure 1 displays the right-skewed distribution of cumulative proposal scores, with most proposals receiving low overall scores. Out of a possible cumulative score of 30, proposals had a median score of 7 (IQR 3.3–11.8). Table 2 presents the distribution of cumulative and item-specific scores assigned to proposals using the INSPECT criteria. Across individual elements, proposals scored highest for criteria describing care/quality gaps in health services. Thirty-six percent of proposals received the maximum score of 3 for meeting all care or quality gap element requirements, including using local setting data to support the existence of a gap, including an explicit description of the potential for improvement, and linking the proposed research to funding priorities (i.e., safety net setting).

Proposals generally scored poorly for other criteria. As shown in Table 2, most study proposals received scores of 0 in the categories of evidence-based treatment to be implemented (50%), conceptual model and theoretical justification (70%), setting’s readiness to adopt new services/treatment/programs (53%), implementation strategy/process (67%), and measurement and analysis (70%). For example, reviewers gave scores of 0 for the “evidence-based intervention to be implemented” element because the intervention was not evidence-based and the project sought to establish efficacy, rather than to examine uptake of an established evidence-based practice. Similarly, proposals that only sought to study effectiveness and did not assess any implementation outcomes [13] (e.g., adoption, fidelity) received scores of 0 for “measurement and analysis.” None of the study proposals primarily aiming to assess effectiveness outcomes expressed the dual research intent of a hybrid design. Scores of 0 for other categories were given when applications lacked any description relevant to the category, such as no conceptual model, no implementation strategy, or no research team skills relevant to implementation or improvement science.

Table 2 displays the assessed rates of inter-coder reliability in applying INSPECT to the 30 pilot study proposals. An overall alpha coefficient of 0.88 was observed between the coders. Rates of inter-coder reliability in applying each of the 10 items to the proposals ranged from 0.77 to 0.99, all above the 0.70 reliability threshold. Additionally, we observed a moderate inverse correlation (r = −0.62, p < 0.01) between the proposal scores initially assigned using the NIH framework and the scores assigned using INSPECT.

Fig. 1 Distribution of cumulative proposal scores assigned using ImplemeNtation and Improvement Science Proposal Evaluation CriTeria (INSPECT). (Histogram: x-axis, cumulative proposal scores from 0 to 30; y-axis, number of study proposals.)

Table 1 Implementation and Improvement Science Proposal Evaluation Criteria. Each of the 10 criteria is rated on a 0–3 scale; the anchors for each score are listed below.

1. The care or quality gap
Score 0: No care/quality gap is defined; an issue may be presented, but it is not described as a gap in quality or care. No information, or lack of clarity in the information cited, about the potential for improvement or the impact of the proposed implementation and/or improvement science study. The proposed study is not linked to a safety net setting.
Score 1: Unclearly defined care/quality gap that is poorly supported with inappropriate, inadequate, or irrelevant local setting data (i.e., evidence of chart review or other preliminary data) or citations from the literature. Insufficient information about the potential for improvement or the impact of the proposed study. The proposed study does not explicitly link to a safety net setting.
Score 2: Defined care/quality gap supported by either local setting data (i.e., evidence of chart review or other preliminary data) or citations from the literature. Adequate information about the potential for improvement, but would benefit from further specification. The proposed study links to a safety net setting but may need further clarification.
Score 3: Clearly defined quality gap supported by local setting data (i.e., evidence of chart review or other preliminary data) and appropriate citations from the literature. Explicit, well thought out description of the potential for improvement. The proposed study is clearly linked to a safety net setting.

2. The evidence-based treatment to be implemented
Score 0: No evidence-based or evidence-informed intervention is identified, or the intervention justification/background is based on zero, inappropriate, or inadequate citations from the literature. Lack of clarity about why the intervention was chosen for the study setting. Unclear what effect the intervention will have on the selected safety net setting (or the study is not based in a safety net setting).
Score 1: Some literature is cited to provide evidence of limited prior efficacy studies concerning the planned intervention, meeting “evidence-informed” rather than “evidence-based” criteria. Limited justification about why the intervention was chosen for the study setting, and/or the justification is based on a desire to document efficacy of the proposed evidence-informed practice. Insufficient information describing what effect the intervention will have on the selected safety net setting.
Score 2: Sufficient literature is cited demonstrating evidence of prior efficacy studies using the intervention to meet either “evidence-informed” or “evidence-based” criteria. If the intervention is “evidence-informed,” its innovative use in the study setting is compelling enough to consider, or there is appropriate justification for why the evidence-based intervention was chosen for the study setting and the goal is not to develop efficacy of the evidence-informed practice. Adequate information describing what effect the intervention will have on the selected safety net setting, but may need further clarification.
Score 3: Clearly discusses evidence of prior efficacy studies concerning the planned intervention, meeting “evidence-based” rather than “evidence-informed” criteria. Explicit, well thought-out rationale for implementing the intervention in the selected safety net setting, including the potential effect it will have on that setting.

3. Conceptual model and theoretical justification
Score 0: No conceptual model, framework, or other theoretical grounding is discussed, or some conceptual model is cited but its basis and constructs are irrelevant to the study objectives and/or the study setting.
Score 1: A conceptual model, framework, or other theoretical grounding is mentioned, but not linked to the study objectives, hypotheses, and measures. The chosen model may be appropriate for the intervention, but the rationale is not clearly supported with citations from the literature.
Score 2: A conceptual model, framework, or other theoretical grounding is linked in some capacity to the study objectives, hypotheses, and measures, but may need additional clarification. The chosen model is appropriate for the intervention/implementation strategies, as evidenced by a well-defined rationale with adequate citations from the literature, but would still benefit from further specificity.
Score 3: An implementation and/or improvement science-specific conceptual model or framework is clearly described, with theoretical constructs explicitly described within the proposed setting, population, and intervention contexts. The model or framework is used to frame the proposed study in all aspects, including the study questions, aims/objectives, hypotheses, process, and outcome measures. Some discussion may refer to and describe how study findings would build upon or otherwise contribute to theory or the larger implementation and/or improvement science fields.

4. Stakeholder priorities, engagement in change
Score 0: Zero or extremely limited description of who the stakeholders are or what their preferences and priorities are around the proposed intervention. No evidence of stakeholder analysis planning, or of basic information gathering, is discussed in relation to how the applicant developed the implementation strategies. Zero or very limited mention of involving stakeholders in the conceptual design of the intervention and/or consideration of the implementation strategies, process, or outcomes. No clear agreement or collaboration between the stakeholders and the applicant is explained.
Score 1: Limited description of who the stakeholders are, with some key players missing from consideration. Limited understanding of stakeholder priorities and concerns related to the intervention is demonstrated by easily identified potential issues that are not discussed in the application, or no evidence of stakeholder analysis planning is discussed. Somewhat unclear description of how stakeholders were involved in the conceptual design of the intervention and/or consideration of the implementation strategies, process, or outcomes. Some type of agreement or collaboration between the stakeholders and the applicant is explained, but supporting evidence is limited.
Score 2: Sufficient description of who all of the identifiable stakeholders are. Clear understanding of stakeholder concerns related to the intervention, as evidenced by a stakeholder analysis plan that describes how the applicant will collect at least some information on stakeholders’ interests, interrelations, influences, preferences, and/or priorities.
Score 3: Comprehensive description of who all of the identifiable stakeholders are. Clear understanding of stakeholder concerns related to the intervention, as evidenced by a stakeholder analysis plan that describes how the applicant will collect comprehensive information on stakeholders’ interests, interrelations, influences, preferences, and priorities. Detailed description of how stakeholders were involved in the conceptual design of the intervention and in considering the implementation strategies, process, and outcomes. An explicit agreement (such as a memorandum of understanding) or evidence of collaboration between the stakeholders and the applicant is explained with relevance to the proposed study process and how findings will be communicated.

5. Setting’s readiness to adopt new services/treatment/programs
Score 0: Zero or very limited rationale/interest for implementing the proposed intervention is discussed. No information on the study setting’s capacity or readiness for implementation. No information on how those in the study setting who are opposed to change will be involved with or have their concerns addressed by study processes or components.
Score 1: Some description of the setting’s interest in the proposed intervention. Incomplete or unclear description of how the setting will be assessed for capacity and/or readiness for implementation, including which methods and tools will be used, or a limited description of organizational/political culture and potential contextual barriers or facilitators. May include a brief discussion of how those opposed to change in the study setting will be involved with or have their concerns addressed by study processes or components. May not include evidence of support (e.g., letters) from the study setting that addresses how the proposed study aligns with the organization’s priorities/policies.
Score 2: Clearly describes the setting’s interest in and rationale for the proposed intervention. Clearly describes how the setting will be assessed for capacity and readiness for implementation, including which methods, scales, or other tools will be used. Thoroughly describes the potential influence of organizational/political culture and potential contextual barriers or facilitators. May include strategies for how those opposed to change in the study setting will be involved with or have their concerns addressed by study processes or components. May not include evidence of support (e.g., letters) from the study setting that addresses how the proposed study aligns with the organization’s priorities/policies.
Score 3: Explicitly describes preliminary data on the assessed organizational and political capacity and readiness for implementation (assessment completed prior to the application/pilot). Preliminary capacity and readiness assessments were completed using a scale with established validity and reliability, or a scale that has undergone some validity and reliability testing. May include strategies for how those opposed to change in the study setting will be involved with or have their concerns addressed by study processes or components. Evidence of support (e.g., letters) from the study setting that addresses how the proposed study aligns with the organization’s priorities/policies.

6. Implementation strategy/process
Score 0: No implementation strategies are identified. The intervention may be incorrectly described as an implementation strategy.
Score 1: Implementation strategies are not clearly distinguished from the intervention. Unclear implementation strategies are not theoretically justified and/or do not match the stated aims/setting/outcome measures of the proposed study. Limited description linking the implementation strategies to the stated aims/setting/outcome measures of the proposed study, with no plan for how strategies will be observed or tested. Implementation strategies may be unrealistic given the pilot timeline and/or budget constraints.
Score 2: Implementation strategies are clearly distinguished from the intervention. Some theoretical justification of the implementation strategies. Clearly describes how implementation strategies link to the stated aims/setting/outcome measures of the proposed study. More description is needed to clearly understand how implementation strategies will be observed or empirically tested. Implementation strategies are mostly feasible given the pilot study timeline and budget constraints.
Score 3: Explicitly describes and theoretically justifies the implementation strategies. Explicitly describes how implementation strategies link to the stated aims/setting/outcome measures of the proposed study. Explicitly describes how implementation strategies will be observed or empirically tested. Implementation strategies are feasible given the pilot study timeline and budget constraints.

7. Team experience with setting, treatment, and implementation process
Score 0: Only the principal investigator’s skills are described. No additional information, biographical sketches, or resumes/CVs are provided beyond the principal investigator. Team experience is uniform and does not offer multidisciplinary skills or perspective to the proposed study.
Score 1: It is unclear how the team experience relates to the study setting, treatment, and/or processes. The staffing plan may not facilitate successful study completion without significant support from CIIS. No description of the research environment strengths, including resources and/or infrastructure. If the principal investigator is considered junior, early career, or novice to implementation science, it is unclear what senior leadership outside of CIIS will be available for mentoring and/or consultation.
Score 2: Team description, biographical sketches, and resumes/CVs depict a multidisciplinary skillset relevant to the proposed study setting, treatment, processes, and other needs. The staffing plan facilitates successful study completion, with some support from CIIS likely necessary.
Score 3: Clearly describes how team experience relates to the study setting, treatment, and processes. Team description, biographical sketches, and resumes/CVs depict a multidisciplinary skillset relevant to the proposed study setting, treatment, processes, and other needs. The staffing plan facilitates successful study completion without necessitating CIIS support. Clearly describes strengths of the research environment, including resources and infrastructure. If the principal investigator is considered junior, early career, or novice to implementation science, senior leadership outside of CIIS has been identified to support study completion with mentoring and/or consultation.

8. Feasibility of proposed research design and methods
Score 0: The proposed study includes methods, interventions, and other components that are beyond the scope of a pilot study and/or inappropriate for a pilot study. A budget and/or timeline are not included or are unrealistic. Potential barriers to implementation are not described or are insurmountable.
Score 1: The proposed study includes methods, interventions, and other components that may be challenging to accomplish. The budget and/or timeline are not included or are unrealistic. Potential barriers to implementation are not clearly described or are insurmountable.
Score 2: The proposed study includes appropriate methods, interventions, and other components that are likely achievable as a pilot study. The budget and/or timeline may need some revising. Potential barriers to implementation are clearly described but may lack a clear description of how those barriers will be overcome.
Score 3: The proposed study includes appropriate methods, interventions, and other components that are achievable as a pilot study and are justified against potential alternatives. The budget and timeline are appropriate. Potential barriers to implementation are clearly identified, with potential plans to overcome those barriers.

9. Measurement and analysis section
Score 0: Outcomes described are not implementation or improvement science-related. Outcomes are not linked to the proposed study aims. The unit of analysis is inappropriate for the proposed study. No measurement and/or data analysis plan is included to describe how variables and outcomes will be measured.
Score 1: Outcomes described are implementation and/or improvement science-related. Outcomes are unclearly linked to the proposed study aims. The unit of analysis is appropriate for the proposed study. Measurement and/or data analysis plans do not clearly describe how all variables and outcomes will be measured, or the plans are inappropriate for the proposed study.
Score 2: Outcomes described are implementation and/or improvement science-related. Outcomes are clearly linked to the proposed study aims. The unit of analysis is appropriate for the proposed study. Measurement and/or data analytic plans describe how all variables and outcomes will be measured and are appropriate for the proposed study, but linkage to the theoretical model is unclear.
Score 3: Outcomes described are implementation and/or improvement science-related. Outcomes are clearly linked to the proposed study aims. The unit of analysis is appropriate for the proposed study. Measurement and data analytic plans robustly describe how all variables and outcomes will be measured and are appropriate for the proposed study through a clear theoretical justification.

10. Policy/funding environment; leverage or support for sustaining change
Score 0: No acknowledgement of the internal/external policy trends and/or funding environment for the proposed study is included. Zero or limited discussion of the potential impact of the intervention is included. Zero or limited discussion of disseminating study findings is included.
Score 1: The internal/external policy trends and/or funding environment are discussed, but additional clarification is needed. The potential impact of the intervention is not linked to the policy and/or funding context and may not be relevant to a safety net setting. The dissemination plan for study findings does not clearly indicate that a contribution will be made to the broader policy level and safety net setting.
Score 2: The internal/external policy trends and/or funding environment are clearly described. The potential impact of the intervention is linked to relevant policies and funding issues associated with a safety net setting but may need further explanation. The dissemination plan for study findings indicates that a contribution will be made to the broader policy level and safety net setting, but what that contribution is and how it will be achieved is unclear.
Score 3: The internal/external policy trends and/or funding environment are clearly described. The potential impact of the intervention is explicitly linked to relevant policies and funding issues associated with a safety net setting. The dissemination plan for study findings indicates what contribution will be made to the broader policy level and safety net setting and how it will be achieved.
Table 2 Distribution of ImplemeNtation and Improvement Science Proposal Evaluation CriTeria (INSPECT) scores. Cumulative proposal scores: 30 proposals evaluated; median score 7 (IQR 3.3–11.8). Individual item scores are shown as the number (%) of proposals receiving each rating, with Krippendorff’s alpha per item (overall alpha 0.88; ratings pooled across items: score 0, N = 150; score 1, N = 74; score 2, N = 47; score 3, N = 29).
- The care gap or quality gap: 0 = 7 (23%); 1 = 6 (20%); 2 = 6 (20%); 3 = 11 (36%); alpha 0.84
- The evidence-based treatment to be implemented: 0 = 15 (50%); 1 = 9 (30%); 2 = 2 (7%); 3 = 4 (13%); alpha 0.77
- Conceptual model and theoretical justification: 0 = 21 (70%); 1 = 4 (13%); 2 = 3 (10%); 3 = 2 (7%); alpha 0.99
- Stakeholder priorities, engagement in change: 0 = 13 (43%); 1 = 9 (30%); 2 = 7 (23%); 3 = 1 (3%); alpha 0.88
- Setting’s readiness to adopt new services/treatment/programs: 0 = 16 (53%); 1 = 7 (23%); 2 = 6 (20%); 3 = 1 (3%); alpha 0.96
- Implementation strategy/process: 0 = 20 (67%); 1 = 7 (23%); 2 = 1 (3%); 3 = 2 (7%); alpha 0.84
- Team experience with setting, treatment, and implementation process: 0 = 13 (43%); 1 = 5 (17%); 2 = 8 (27%); 3 = 4 (13%); alpha 0.96
- Feasibility of proposed research design and methods: 0 = 13 (43%); 1 = 11 (37%); 2 = 6 (20%); 3 = 0 (0%); alpha 0.84
- Measurement and analysis: 0 = 21 (70%); 1 = 4 (13%); 2 = 3 (10%); 3 = 2 (7%); alpha 0.78
- Policy/funding environment; leverage or support for sustaining change: 0 = 11 (37%); 1 = 12 (40%); 2 = 5 (17%); 3 = 2 (7%); alpha 0.77

Discussion

We developed a reliable proposal scoring system that operationalizes Proctor et al.’s “ten key ingredients” for writing an implementation research grant [6]. Previous research analyzing peer-review grant processes has highlighted a need to improve scoring agreement between peer reviewers [14]. High levels of disagreement in assessors’ interpretation of grant scoring criteria result in unreliable peer-review processes and funding decisions based more on chance than scientific merit [14]. Measuring rates of inter-rater reliability is a standard approach for evaluating the utility of existing proposal scoring criteria and assessing efforts to improve the criteria [15, 16]. Application of the INSPECT system demonstrated high inter-rater reliability overall and within each of the 10 items. The high degree of reliability measured for INSPECT may be related to the specificity of its design as implementation and improvement science-specific scoring criteria. A review of scoring rubrics reported in the scientific literature suggests that topic-focused criteria contribute to increased scoring reliability [17]. Additionally, the moderate correlation between scores assigned using the NIH framework and scores assigned using INSPECT suggests validity of the INSPECT criteria in evaluating proposal quality. Proctor et al.’s “ten key ingredients” for grant writers were developed to map onto the existing NIH criteria. Our operationalized version of the ingredients as scoring criteria demonstrated that proposals that scored poorly under NIH criteria also scored poorly under INSPECT.

Applying the INSPECT system to proposed implementation and improvement science research at an academic medical center improved proposal reviewers’ ability to identify specific strengths and weaknesses in implementation approach. Overall, proposals only received high scores for identifying the care gap or quality gap. Since efficacy and implementation or improvement research may use similar techniques to establish the significance of the study questions [18], proposals may score well on describing the quality gap even if they later described efficacy hypotheses that received overall low scores from the INSPECT system. Further studies should explore techniques for describing care and quality gaps that highlight implementation or improvement research questions.

Consistently low scores in four areas (defining the evidence-based treatment to be implemented, conceptual model and theoretical justification, setting’s readiness to adopt new programs, and measurement and analysis) suggest that many investigators seeking to conduct implementation research may have misconceptions about the fundamental goals of this field. One misconception may relate to a sole focus on evaluating an intervention’s effectiveness rather than studying the processes and outcomes of implementation strategies. The majority of study proposals evaluated using INSPECT neither aimed to improve uptake of any evidence-based practice nor included any implementation measures such as acceptability, adoption, feasibility, fidelity, penetration, or sustainability [19]. Inadequate and inconsistent descriptions of implementation strategies and outcomes represent major challenges to overall implementation study success [20]. In addition to guidance provided by the INSPECT criteria, recent efforts to develop implementation study reporting standards [21] may assist proposal writers in describing planned research.

Several proposals addressed treatments or practices with low evidence for the potential to improve healthcare. Although hybrid studies, which study both effectiveness and implementation outcomes, are practical approaches to establishing the effectiveness of evidence-informed practices while measuring implementation efforts [18], none of the study proposals expressed this dual research intent or were conceived as hybrid designs.

Our findings also suggest low familiarity with and use of resources to evaluate the strength of evidence (such as the Grading Quality of Evidence and Strength of Recommendations system [22] and the Strength of Recommendation Taxonomy grading scale [23]) for implementation science research. A more systematic evaluation of the strength of evidence [24–27] necessary to warrant implementation efforts may help to differentiate implementation science from efficacy or effectiveness research and improve understanding of the utility hybrid studies offer [28].

Expanding access to implementation science training in universities as part of the core health services research curriculum, and enhancing access to professional development opportunities that focus on conceptual and methodological implementation skills in a content-agnostic way, would aid in building capacity for the next generation of implementation science researchers. Additionally, training programs offer an opportunity to provide guidance on both writing and evaluating the quality of implementation science grant applications.

Strengths of our results include the application of INSPECT to study proposals submitted by investigators with a wide range of implementation and improvement science-specific experience and covering a variety of content areas. However, our results are limited in that they characterize one academic institution’s familiarity with implementation and improvement science research, and the INSPECT system requires validation in other settings and over a broader range of proposal ratings. Additionally, we measured a high degree of inter-rater reliability for INSPECT when it was applied to a sample of low-scoring proposals. INSPECT’s inter-rater reliability may decrease when applied to a sample of higher quality proposals, where reviewers are required to discriminate between gradations of quality (i.e., scores of 1–3) rather than mostly scoring the absence of key items (i.e., scores of 0).

Future research should test the validity of INSPECT by comparing INSPECT-assigned scores to ratings assigned to approved proposals by the NIH Dissemination and Implementation Research in Health study section. Future research should also assess the relationship between INSPECT score assignments and successful study completion to determine the utility of INSPECT as a mechanism for ensuring the quality and impact of funded research. To aid in these prospective research efforts, forthcoming proposal calls from CIIS will specifically use INSPECT as the proposal evaluation criteria.

Although multiple tools exist to aid researchers in writing implementation science proposals [6, 29, 30], few resources exist to support grant reviewers. This study identified additional functionality of Proctor et al.’s “ten key ingredients” as a guide for writers by developing it into a detailed checklist for proposal reviewers. The current research makes a substantive contribution to implementation and improvement sciences by demonstrating the utility and reliability of a new tool designed to aid grant reviewers in identifying high-quality research.

Conclusion

In conclusion, we operationalized an implementation and improvement research-specific scoring system to provide guidance for proposal writers and grant reviewers. We demonstrated the utility and reliability of the new INSPECT scoring system in evaluating the quality of implementation and improvement sciences research proposed at one academic medical center. The prevalence of low scores across the majority of INSPECT criteria suggests a need to promote education about the goals of implementation and improvement science, including the conceptual and methodological distinctions from efficacy and effectiveness research.
Abbreviations
CIIS: Center for Implementation and Improvement Sciences; INSPECT: ImplemeNtation and Improvement Science Proposal Evaluation CriTeria; NIH: National Institutes of Health

Acknowledgements
We would like to thank the investigators who submitted pilot grant applications to the Center for Implementation and Improvement Sciences Pilot Grant Program in 2016 and 2017. Creating the ImplemeNtation and Improvement Science Proposals Evaluation CriTeria would not have been possible without their submissions. We also appreciate the feedback from CIIS grant application reviewers, which was instrumental in identifying the need for new scoring criteria. The CIIS team appreciates the ongoing guidance, interest, and support from David Coleman. Thanks also to Kevin Griffith for his feedback on measures of reliability.

Funding
This research was supported with funding from the Evans Medical Foundation Inc.

Availability of data and materials
The datasets generated and/or analyzed during the current study are not publicly available because they represent study proposals prepared by individual investigators. Proposal scoring data are available from the corresponding author on reasonable request.
Authors’ contributions
DB, CGA, AJW, and MD conducted the initial thematic analysis. MD and DB created the original scoring criteria. ELC revised the scoring criteria. ELC, DB, AJW, and MD reviewed and finalized the scoring criteria. ELC and DB piloted the use of the scoring criteria and analyzed the score data. ELC drafted and revised the manuscript based on comments from coauthors. AJW, MD, DB, and EKP provided manuscript comments and revisions. All authors read and approved the final manuscript.

Ethics approval and consent to participate
This study was reviewed and determined to not qualify as human subjects research by the Boston University Medical Campus Institutional Review Board (reference number H-37709).

Competing interests
The authors declare that they have no competing interests.

Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Author details
Evans Center for Implementation and Improvement Sciences, Boston University School of Medicine, 88 East Newton Street, Vose 216, Boston, MA 02118, USA. Department of Health Law, Policy & Management, Boston University School of Public Health, Boston, MA, USA. Section of Pulmonary, Allergy, and Critical Care Medicine, Department of Medicine, Boston University School of Medicine, Boston, MA, USA. Behavioral Sciences and Health Education Department, Rollins School of Public Health, Emory University, Atlanta, GA, USA. Center for Mental Health Services Research, The Brown School at Washington University in St. Louis, St. Louis, MO, USA. Section of Infectious Diseases, Department of Medicine, Boston University School of Medicine, Boston, MA, USA. Center for Healthcare Organization and Implementation Research, Edith Nourse Rogers Memorial VA Hospital, Bedford, MA, USA.

Received: 10 April 2018. Accepted: 22 May 2018.
References
1. Glasgow RE, Lichtenstein E, Marcus AC. Why don’t we see more translation of health promotion research to practice? Rethinking the efficacy-to-effectiveness transition. Am J Public Health. 2003;93:1261–7.
2. Neta G, Sanchez MA, Chambers DA, Phillips SM, Leyva B, Cynkin L, et al. Implementation science in cancer prevention and control: a decade of grant funding by the National Cancer Institute and future directions. Implement Sci. 2015;10:4. https://doi.org/10.1186/s13012-014-0200-2.
3. Purtle J, Peters R, Brownson RC. A review of policy dissemination and implementation research funded by the National Institutes of Health, 2007–2014. Implement Sci. 2015;11(1). https://doi.org/10.1186/s13012-015-0367-1.
4. Tinkle M, Kimball R, Haozous EA, Shuster G, Meize-Grochowski R. Dissemination and implementation research funded by the US National Institutes of Health, 2005–2012. Nurs Res Pract. 2013;2013:909606. https://doi.org/10.1155/2013/909606.
5. Smits PA, Denis J-L. How research funding agencies support science integration into policy and practice: an international overview. Implement Sci. 2014;9:28. https://doi.org/10.1186/1748-5908-9-28.
6. Proctor EK, Powell BJ, Baumann AA, Hamilton AM, Santens RL. Writing implementation research grant proposals: ten key ingredients. Implement Sci. 2012;7:96. https://doi.org/10.1186/1748-5908-7-96.
7. Center for Implementation and Improvement Sciences. Pilot Grant Program. 2016. http://sites.bu.edu/ciis/pilotgrants/. Accessed 2 Nov 2017.
8. National Institutes of Health. Definitions of criteria and considerations for research project grant (RPG/X01/R01/R03/R21/R33/R34) critiques. 2016. https://archives.nih.gov/asites/grants/11-14-2016/Grants/peer/critiques/rpg_D.htm. Accessed 26 Oct 2017.
9. Green J, Thorogood N. Qualitative methods for health research. 3rd ed. Thousand Oaks: SAGE Publications Ltd; 2013.
10. Hayes AF, Krippendorff K. Answering the call for a standard reliability measure for coding data. Commun Methods Meas. 2007;1:77–89. https://doi.org/10.1080/19312450709336664.
11. Krippendorff K. Content analysis: an introduction to its methodology. Thousand Oaks, CA: SAGE Publications; 2004.
12. R Core Team. R: a language and environment for statistical computing. 2016. https://www.r-project.org.
13. Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Admin Pol Ment Health. 2011;38:65–76. https://doi.org/10.1007/s10488-010-0319-7.
14. Marsh HW, Jayasinghe UW, Bond NW. Improving the peer-review process for grant applications: reliability, validity, bias, and generalizability. Am Psychol. 2008;63:160–8. https://doi.org/10.1037/0003-066X.63.3.160.
15. Sattler DN, McKnight PE, Naney L, Mathis R. Grant peer review: improving inter-rater reliability with training. PLoS One. 2015;10:e0130450. https://doi.org/10.1371/journal.pone.0130450.
16. Demicheli V, Di Pietrantonj C. Peer review for improving the quality of grant applications. Cochrane Database Syst Rev. 2007:MR000003. https://doi.org/10.1002/14651858.MR000003.pub2.
17. Jonsson A, Svingby G. The use of scoring rubrics: reliability, validity and educational consequences. Educ Res Rev. 2007;2:130–44. https://doi.org/10.1016/j.edurev.2007.05.002.
18. Inouye SK, Fiellin DA. An evidence-based guide to writing grant proposals for clinical research. Ann Intern Med. 2005;142:274–82. http://www.ncbi.nlm.nih.gov/pubmed/15710960. Accessed 28 Feb 2018.
19. Proctor EK, Landsverk J, Aarons G, Chambers D, Glisson C, Mittman B. Implementation research in mental health services: an emerging science with conceptual, methodological, and training challenges. Adm Policy Ment Health. 2009;36:24–34. https://doi.org/10.1007/s10488-008-0197-4.
20. Proctor EK, Powell BJ, McMillen JC. Implementation strategies: recommendations for specifying and reporting. Implement Sci. 2013;8:139. https://doi.org/10.1186/1748-5908-8-139.
21. Pinnock H, Barwick M, Carpenter CR, Eldridge S, Grandes G, Griffiths CJ, et al. Standards for reporting implementation studies (StaRI): explanation and elaboration document. BMJ Open. 2017;7:e013318. https://doi.org/10.1136/bmjopen-2016-013318.
22. Oxman AD. Grading quality of evidence and strength of recommendations. BMJ. 2004;328:1490. http://www.bmj.com/content/bmj/328/7454/1490.abridgement.pdf. Accessed 28 Feb 2018.
23. Ebell MH, Siwek J, Weiss BD, Woolf SH, Susman J, Ewigman B, et al. Strength of recommendation taxonomy (SORT): a patient-centered approach to grading evidence in the medical literature. Am Fam Physician. 2004;69:548–56. https://www.ncbi.nlm.nih.gov/pubmed/14971837. Accessed 28 Feb 2018.
24. Rycroft-Malone J, Seers K, Titchen A, Harvey G, Kitson A, McCormack B. What counts as evidence in evidence-based practice? J Adv Nurs. 2004;47:81–90. https://doi.org/10.1111/j.1365-2648.2004.03068.x.
25. Rycroft-Malone J, Seers K, Chandler J, Hawkes CA, Crichton N, Allen C, et al. The role of evidence, context, and facilitation in an implementation trial: implications for the development of the PARIHS framework. Implement Sci. 2013;8:28. https://doi.org/10.1186/1748-5908-8-28.
26. Prasad V, Ioannidis JP. Evidence-based de-implementation for contradicted, unproven, and aspiring healthcare practices. Implement Sci. 2014;9(1). https://doi.org/10.1186/1748-5908-9-1.
27. McCaughey D, Bruning NS. Rationality versus reality: the challenges of evidence-based decision making for health policy makers. Implement Sci. 2010;5:39. https://doi.org/10.1186/1748-5908-5-39.
28. Bernet AC, Willens DE, Bauer MS. Effectiveness-implementation hybrid designs: implications for quality improvement science. Implement Sci. 2013;8(Suppl 1):S2. https://doi.org/10.1186/1748-5908-8-S1-S2.
29. Brownson RC, Colditz GA, Dobbins M, Emmons KM, Kerner JF, Padek M, et al. Concocting that magic elixir: successful grant application writing in dissemination and implementation research. Clin Transl Sci. 2015;8:710–6. https://doi.org/10.1111/cts.12356.
30. University of Colorado Implementation Science Program. Ten key ingredients to writing successful D&I research proposals. 2018. http://www.crispebooks.org/DIFundingTips. Accessed 4 Apr 2018.

Standardizing an approach to the evaluation of implementation science proposals

Free
11 pages

Loading next page...
 
/lp/springer_journal/standardizing-an-approach-to-the-evaluation-of-implementation-science-ErE4gI7dc1
Publisher
BioMed Central
Copyright
Copyright © 2018 by The Author(s).
Subject
Medicine & Public Health; Health Promotion and Disease Prevention; Health Administration; Health Informatics; Public Health
eISSN
1748-5908
D.O.I.
10.1186/s13012-018-0770-5
Publisher site
See Article on Publisher Site

Abstract

Background: The fields of implementation and improvement sciences have experienced rapid growth in recent years. However, research that seeks to inform health care change may have difficulty translating core components of implementation and improvement sciences within the traditional paradigms used to evaluate efficacy and effectiveness research. A review of implementation and improvement sciences grant proposals within an academic medical center using a traditional National Institutes of Health framework highlighted the need for tools that could assist investigators and reviewers in describing and evaluating proposed implementation and improvement sciences research. Methods: We operationalized existing recommendations for writing implementation science proposals as the ImplemeNtation and Improvement Science Proposals Evaluation CriTeria (INSPECT) scoring system. The resulting system was applied to pilot grants submitted to a call for implementation and improvement science proposals at an academic medical center. We evaluated the reliability of the INSPECT system using Krippendorff’s alpha coefficients and explored the utility of the INSPECT system to characterize common deficiencies in implementation research proposals. Results: We scored 30 research proposals using the INSPECT system. Proposals received a median cumulative score of 7 out of a possible score of 30. Across individual elements of INSPECT, proposals scored highest for criteria rating evidence of a care or quality gap. Proposals generally performed poorly on all other criteria. Most proposals received scores of 0 for criteria identifying an evidence-based practice or treatment (50%), conceptual model and theoretical justification (70%), setting’s readiness to adopt new services/treatment/programs (54%), implementation strategy/process (67%), and measurement and analysis (70%). Inter-coder reliability testing showed excellent reliability (Krippendorff’s alpha coefficient 0.88) for the application of the scoring system overall and demonstrated reliability scores ranging from 0.77 to 0.99 for individual elements. Conclusions: The INSPECT scoring system presents a new scoring criteria with a high degree of inter-rater reliability and utility for evaluating the quality of implementation and improvement sciences grant proposals. Keywords: Implementation research, Improvement research, Grant writing, Grant scoring * Correspondence: ecrable@bu.edu Evans Center for Implementation and Improvement Sciences, Boston University School of Medicine, 88 East Newton Street, Vose 216, Boston, MA 02118, USA Department of Health Law, Policy & Management, Boston University School of Public Health, Boston, MA, USA Full list of author information is available at the end of the article © The Author(s). 2018 Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated. Crable et al. Implementation Science (2018) 13:71 Page 2 of 11 Background team. 
The recognition that experimental efficacy studies alone are insufficient to improve public health [1] has led to the rapid expansion of the fields of implementation and improvement sciences [2–5]. However, studies that aim to identify strategies that facilitate adoption, sustainability, and scalability of evidence may not translate well within traditional efficacy and effectiveness research paradigms [6].

The need for new tools to aid investigators and research stakeholders in implementation science became clear during evaluation of grant submissions to the Evans Center for Implementation and Improvement Sciences (CIIS) at Boston University. CIIS was established in 2016 to promote scientific rigor in new and ongoing projects aimed at increasing the use of evidence and improving patient outcomes within an urban, academic, safety net medical center. As part of CIIS’s goal to foster rigorous implementation and improvement methods, CIIS established a call for pilot grant applications for implementation and improvement sciences [7]. Proposals were peer-reviewed using traditional National Institutes of Health (NIH) scoring criteria [8]. Through two cycles of grant applications, proposal reviewers identified a need for improved evaluation criteria capable of identifying specific strengths and weaknesses in order to rate the potential impact of implementation and/or improvement study designs.

We describe the development and evaluation of the ImplemeNtation and Improvement Science Proposal Evaluation CriTeria (INSPECT): a tool for the standardized evaluation of implementation and improvement research proposals. The INSPECT tool seeks to operationalize the criteria proposed by Proctor et al. as the “key ingredients” that constitute a well-crafted implementation science proposal, and it operates within the NIH proposal scoring framework [6].

Methods

Assessment of need
CIIS released requests for pilot grant applications focused on implementation and improvement sciences in April 2016 and April 2017 [7]. The request for applications described an opportunity for investigators to receive up to $15,000 for innovative implementation and improvement sciences research on any topic related to improving the processes and outcomes of health care delivery in safety net settings. CIIS funds pilot grants with the goal of providing investigators with the opportunity to obtain preliminary data for further research. Proposals were required to include a specific aims page and a three-page research plan structured within the traditional NIH framework, with subheadings for significance, innovation, approach, environment, and research team. The NIH framework was required because it corresponds with the grant proposal structure required by the NIH. A study budget and justification, as well as research team biographical sketches, were required with no page limit restrictions. CIIS received 30 pilot grant applications covering a broad array of content areas, such as smoking cessation, hepatitis C, diabetes, cancer, and neonatal abstinence syndrome.

Six researchers with experience in implementation and improvement sciences served as grant reviewers. Four reviewers scored each proposal. Reviewers evaluated the quality of pilot study proposals, assigning numerical scores from 1 to 9 (1 = exceptional, 9 = poor) for each of the NIH criteria (significance, innovation, investigators, approach, environment, overall impact) [8]. CIIS elected to use the NIH criteria to evaluate the pilot grant applications because they are the criteria used by the NIH peer review system to evaluate the scientific and technical merit of grant proposals. The CIIS grant review team held a “study section” to review and discuss the proposals. However, during that meeting, reviewers provided feedback that the NIH evaluation criteria, based in the traditional efficacy and effectiveness research paradigm, did not offer sufficient guidance for evaluating implementation and improvement science proposals, nor did they provide enough specificity for proposal writers who are less experienced in implementation research. Grant reviewers requested new proposal evaluation criteria that would better inform score decisions and feedback to proposal writers on specific aspects of implementation science, including measuring the strength of implementation study design, strategy, feasibility, and relevance.

Despite the challenges of using the traditional NIH evaluation criteria, the review panel used those criteria to score all of the grants received during the first 2 years of proposal requests. CIIS pilot grant funding was awarded to applications that received the lowest (best) scores under the NIH criteria and positive feedback from the review panel.

The request for more explicit implementation science evaluation criteria prompted the CIIS research team to conduct a qualitative needs assessment of all 30 pilot study applications in order to determine how the proposals described study designs, implementation strategies, and other aspects of proposed implementation and improvement research. Three members of the CIIS research team (MLD, AJW, DB) independently open-coded pilot proposals to identify properties related to core implementation science concepts or efficacy and effectiveness research [9]. The team identified common themes in the proposals, including an emphasis on efficacy hypotheses, descriptions of untested interventions, and the absence of implementation strategies and conceptual frameworks. The consistent lack of features identified as important aspects of implementation science reinforced the need for criteria that specifically addressed implementation science approaches to guide both proposal preparation and evaluation.

Operationalizing scoring criteria
We identified Proctor et al.’s “ten key ingredients” for writing implementation research proposals [6] as an appropriate framework to guide and evaluate proposals. We operationalized the “ingredients” into a scoring system. To construct the scoring system, a four-point scale (0–3) was created for each element. In general, a score of 3 was given for an element if all of the criteria requirements for the element were fully met; a score of 2 was given if the criteria were somewhat, but not fully, addressed; a score of 1 was given if the ingredient was mentioned but not operationalized in the proposal or linked to the rest of the study; and a score of 0 was given if the element was not addressed at all in the proposal. Table 1 illustrates the INSPECT scoring system for the 10 items, in which proposals receive one score for each of the 10 ingredients, for a cumulative score between 0 and 30.

Table 1 Implementation and Improvement Science Proposal Evaluation Criteria (INSPECT). For each of the ten criteria, the rubric provides anchor descriptions for scores of 0 through 3, following the general pattern described above. The ten criteria are: the care or quality gap; the evidence-based treatment to be implemented; conceptual model and theoretical justification; stakeholder priorities and engagement in change; the setting’s readiness to adopt new services/treatments/programs; implementation strategy/process; team experience with the setting, treatment, and implementation process; feasibility of the proposed research design and methods; measurement and analysis; and the policy/funding environment and leverage of support for sustaining change. For example, on the care or quality gap criterion, a score of 3 requires a clearly defined gap supported by local setting data and appropriate citations from the literature, an explicit and well-thought-out description of the potential for improvement, and a clear link between the proposed study and a safety net setting, whereas a score of 0 indicates that no care or quality gap is defined.
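As an arithmetic illustration only (this sketch is not part of the published INSPECT tool), the Python snippet below shows one way the ten item ratings could be recorded and rolled up into the 0–30 cumulative score described above. The item labels follow Table 1; the example ratings are hypothetical.

# Minimal sketch: recording INSPECT item scores (0-3) and computing the
# cumulative 0-30 score for one proposal. Item labels follow Table 1;
# the example ratings below are hypothetical, not data from the study.

INSPECT_ITEMS = [
    "care or quality gap",
    "evidence-based treatment to be implemented",
    "conceptual model and theoretical justification",
    "stakeholder priorities, engagement in change",
    "setting's readiness to adopt new services/treatment/programs",
    "implementation strategy/process",
    "team experience with setting, treatment, and implementation process",
    "feasibility of proposed research design and methods",
    "measurement and analysis",
    "policy/funding environment; leverage of support for sustaining change",
]

def cumulative_score(item_scores):
    """Sum the ten 0-3 item ratings into a cumulative 0-30 INSPECT score."""
    missing = set(INSPECT_ITEMS) - set(item_scores)
    if missing:
        raise ValueError(f"missing ratings for: {sorted(missing)}")
    if any(not 0 <= s <= 3 for s in item_scores.values()):
        raise ValueError("each item must be rated on the 0-3 scale")
    return sum(item_scores[item] for item in INSPECT_ITEMS)

# Hypothetical example: a proposal rated 3 on the care/quality gap, 1 on feasibility,
# and 0 on the remaining items.
example = {item: 0 for item in INSPECT_ITEMS}
example["care or quality gap"] = 3
example["feasibility of proposed research design and methods"] = 1
print(cumulative_score(example))  # -> 4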
Testing INSPECT
We used the pilot study proposals submitted to CIIS to develop and evaluate the utility and reliability of the INSPECT scoring system. Initially, two research team members (ELC, DB) independently applied the 10-element criteria to 7 of the 30 pilot grant proposals. Four team members (MLD, AJW, ELC, DB) then met to discuss these initial results and achieve consensus on the scoring criteria. Two team members (ELC, DB) then independently scored the remaining 23 pilot study applications using the revised scoring system. Both reviewers recorded brief justifications for each of the ten scores assigned to individual study proposals. The two coders (ELC, DB) then met to compare scores, share scoring justifications, and determine the final item-specific scores for each proposal using group consensus.

Inter-coder reliability with the scoring protocol was measured using Krippendorff’s alpha to assess observed and expected disagreement between the two coders’ initial individual item scores [10, 11]. An alpha coefficient of 0.70 was deemed a priori as the lowest acceptable level of agreement to establish reliability of the new scoring protocol [10, 11]. Frequency analyses were conducted to determine the distribution of final element-specific scores (0–3) across all proposals. We calculated a correlation coefficient to assess the association between proposal scores assigned using the NIH framework and scores assigned using INSPECT. All calculations were performed in R version 3.3.2 [12].
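The reliability and correlation analyses were performed in R [12]; purely as an illustrative sketch, the snippet below reproduces the same kinds of computations in Python, assuming the third-party krippendorff package together with NumPy and SciPy are available. The coder ratings and the NIH/INSPECT score pairs shown are hypothetical placeholders, not the study data.

# Illustrative re-computation of the reliability and correlation analyses.
# Assumes: pip install krippendorff numpy scipy
import numpy as np
import krippendorff
from scipy.stats import pearsonr

# Hypothetical ratings: rows = the two coders, columns = proposal-item ratings (0-3).
coder_a = [3, 0, 1, 2, 0, 0, 1, 3, 0, 2]
coder_b = [3, 0, 1, 1, 0, 0, 1, 3, 0, 2]
reliability_data = np.array([coder_a, coder_b], dtype=float)

# Krippendorff's alpha on the ordinal 0-3 scale; 0.70 was the a priori threshold.
alpha = krippendorff.alpha(reliability_data=reliability_data,
                           level_of_measurement="ordinal")
print(f"alpha = {alpha:.2f}", "(acceptable)" if alpha >= 0.70 else "(below threshold)")

# Correlation between NIH-framework scores (1 = best, 9 = worst) and cumulative
# INSPECT scores (higher = better); hypothetical values for illustration.
nih_scores = [3, 5, 7, 4, 6, 8, 2, 5]
inspect_scores = [18, 9, 5, 12, 7, 4, 22, 10]
r, p = pearsonr(nih_scores, inspect_scores)
print(f"r = {r:.2f}, p = {p:.3f}")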
Results
Iterative review of the 30 research proposals using Proctor et al.’s “ten key ingredients” resulted in the development and testing of the INSPECT system for assessing implementation and improvement science proposals. Figure 1 displays the right-skewed distribution of cumulative proposal scores, with most proposals receiving low overall scores. Out of a possible cumulative score of 30, proposals had a median score of 7 (IQR 3.3–11.8). Table 2 presents the distribution of cumulative and item-specific scores assigned to proposals using the INSPECT criteria. Across individual elements, proposals scored highest for criteria describing care/quality gaps in health services. Thirty-six percent of proposals received the maximum score of 3 for meeting all care or quality gap element requirements, including using local setting data to support the existence of a gap, including an explicit description of the potential for improvement, and linking the proposed research to funding priorities (i.e., safety net setting).

Proposals generally scored poorly for other criteria. As shown in Table 2, most study proposals received scores of 0 in the categories of evidence-based treatment to be implemented (50%), conceptual model and theoretical justification (70%), setting’s readiness to adopt new services/treatment/programs (53%), implementation strategy/process (67%), and measurement and analysis (70%). For example, reviewers gave scores of 0 for the “evidence-based intervention to be implemented” element because the intervention was not evidence-based and the project sought to establish efficacy, rather than to examine uptake of an established evidence-based practice. Similarly, proposals that only sought to study effectiveness and did not assess any implementation outcomes [13] (e.g., adoption, fidelity) received scores of 0 for “measurement and analysis.” None of the study proposals primarily aiming to assess effectiveness outcomes expressed the dual research intent of a hybrid design. Scores of 0 for other categories were given when applications lacked any description relevant to the category, such as no conceptual model, no implementation strategy, or no research team skills relevant to implementation or improvement science.

Table 2 displays the assessed rates of inter-coder reliability in applying INSPECT to the 30 pilot study proposals. An overall alpha coefficient of 0.88 was observed between the coders. Rates of inter-coder reliability in applying each of the 10 items to the proposals ranged from 0.77 to 0.99, all above the 0.70 reliability threshold.

Additionally, we observed a moderate inverse correlation (r = −0.62, p < 0.01) between the proposal scores initially assigned using the NIH framework and the scores assigned using INSPECT.

Fig. 1 Distribution of cumulative proposal scores assigned using ImplemeNtation and Improvement Science Proposal Evaluation CriTeria (INSPECT). (Histogram; x-axis: cumulative proposal score, 0–30; y-axis: number of study proposals.)

Table 2 Distribution of ImplemeNtation and Improvement Science Proposal Evaluation CriTeria (INSPECT) scores
Cumulative proposal scores: 30 proposals evaluated; median score 7 (IQR 3.3–11.8).
Individual item scores, reported as the number (%) of proposals scoring 0, 1, 2, and 3 on each item (item-level totals N = 150, 74, 47, and 29, respectively), followed by the item’s Krippendorff’s alpha (overall alpha 0.88):
- The care gap or quality gap: 7 (23%), 6 (20%), 6 (20%), 11 (36%); alpha 0.84
- The evidence-based treatment to be implemented: 15 (50%), 9 (30%), 2 (7%), 4 (13%); alpha 0.77
- Conceptual model and theoretical justification: 21 (70%), 4 (13%), 3 (10%), 2 (7%); alpha 0.99
- Stakeholder priorities, engagement in change: 13 (43%), 9 (30%), 7 (23%), 1 (3%); alpha 0.88
- Setting’s readiness to adopt new services/treatment/programs: 16 (53%), 7 (23%), 6 (20%), 1 (3%); alpha 0.96
- Implementation strategy/process: 20 (67%), 7 (23%), 1 (3%), 2 (7%); alpha 0.84
- Team experience with setting, treatment, and implementation process: 13 (43%), 5 (17%), 8 (27%), 4 (13%); alpha 0.96
- Feasibility of proposed research design and methods: 13 (43%), 11 (37%), 6 (20%), 0 (0%); alpha 0.84
- Measurement and analysis: 21 (70%), 4 (13%), 3 (10%), 2 (7%); alpha 0.78
- Policy/funding environment; leverage of support for sustaining change: 11 (37%), 12 (40%), 5 (17%), 2 (7%); alpha 0.77
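The descriptive summaries reported in Figure 1 and Table 2 (item-level score frequencies, and the median and IQR of cumulative scores) can be tabulated directly from the consensus ratings. The sketch below illustrates this with pandas on randomly generated placeholder scores, not the actual study data.

# Sketch: tabulating item-score frequencies (as in Table 2) and the cumulative-score
# distribution (as in Figure 1) from consensus ratings. Uses random placeholder data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
items = [f"item_{i}" for i in range(1, 11)]           # the ten INSPECT elements
# Placeholder consensus scores: 30 proposals x 10 items, each rated 0-3.
scores = pd.DataFrame(rng.integers(0, 4, size=(30, 10)), columns=items)

# Frequency (and %) of each 0-3 rating per item, analogous to Table 2.
freq = scores.apply(lambda col: col.value_counts().reindex(range(4), fill_value=0))
print(freq)
print((freq / len(scores) * 100).round(0))

# Cumulative 0-30 score per proposal; median and IQR as summarized with Figure 1.
cumulative = scores.sum(axis=1)
q1, med, q3 = cumulative.quantile([0.25, 0.5, 0.75])
print(f"median = {med}, IQR = {q1}-{q3}")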
Discussion
We developed a reliable proposal scoring system that operationalizes Proctor et al.’s “ten key ingredients” for writing an implementation research grant [6]. Previous research analyzing peer-review grant processes has highlighted a need to improve scoring agreement between peer reviewers [14]. High levels of disagreement in assessors’ interpretation of grant scoring criteria result in unreliable peer-review processes and funding decisions based more on chance than scientific merit [14]. Measuring inter-rater reliability is a standard approach for evaluating the utility of existing proposal scoring criteria and assessing efforts to improve the criteria [15, 16]. Application of the INSPECT system demonstrated high inter-rater reliability overall and within each of the 10 items. The high degree of reliability measured for INSPECT may be related to the specificity of its design as an implementation and improvement science scoring criteria. A review of scoring rubrics reported in the scientific literature suggests that topic-focused criteria contribute to increased scoring reliability [17]. Additionally, the moderate correlation between scores assigned using the NIH framework and scores assigned using INSPECT suggests validity of the INSPECT criteria in evaluating proposal quality. Proctor et al.’s “ten key ingredients” for grant writers were developed to map onto the existing NIH criteria. Our operationalized version of the ingredients as scoring criteria demonstrated that proposals that scored poorly under NIH criteria also scored poorly under INSPECT.

Applying the INSPECT system to proposed implementation and improvement science research at an academic medical center improved proposal reviewers’ ability to identify specific strengths and weaknesses in implementation approach. Overall, proposals only received high scores for identifying the care gap or quality gap. Since efficacy and implementation or improvement research may use similar techniques to establish the significance of the study questions [18], proposals may score well on describing the quality gap even if they later describe efficacy hypotheses that receive overall low scores from the INSPECT system. Further studies should explore techniques for describing care and quality gaps that highlight implementation or improvement research questions.

Consistently low scores in four areas (defining the evidence-based treatment to be implemented, conceptual model and theoretical justification, setting’s readiness to adopt new programs, and measurement and analysis) suggest that many investigators seeking to conduct implementation research may have misconceptions about the fundamental goals of this field. One misconception may relate to a sole focus on evaluating an intervention’s effectiveness rather than studying the processes and outcomes of implementation strategies. The majority of study proposals evaluated using INSPECT neither aimed to improve uptake of any evidence-based practice nor included any implementation measures such as acceptability, adoption, feasibility, fidelity, penetration, or sustainability [19]. Inadequate and inconsistent descriptions of implementation strategies and outcomes represent major challenges to overall implementation study success [20]. In addition to the guidance provided by the INSPECT criteria, recent efforts to develop implementation study reporting standards [21] may assist proposal writers in describing planned research.

Several proposals addressed treatments or practices with low evidence for the potential to improve healthcare. Although hybrid studies, which examine both effectiveness and implementation outcomes, are practical approaches to establishing the effectiveness of evidence-informed practices while measuring implementation efforts [18], none of the study proposals expressed this dual research intent or were conceived as hybrid designs.

Our findings also suggest low familiarity with, and use of, resources for evaluating the strength of evidence (such as the Grading Quality of Evidence and Strength of Recommendations system [22] and the Strength of Recommendation Taxonomy grading scale [23]) for implementation science research. A more systematic evaluation of the strength of evidence [24–27] necessary to warrant implementation efforts may help to differentiate implementation science from efficacy or effectiveness research and improve understanding of the utility hybrid studies offer [28]. Expanding access to implementation science training in universities as part of the core health services research curriculum, and enhancing access to professional development opportunities that focus on conceptual and methodological implementation skills in a content-agnostic way, would aid in building capacity for the next generation of implementation science researchers. Additionally, training programs provide an opportunity to offer guidance on both writing and evaluating the quality of implementation science grant applications.

Strengths of our results include the application of INSPECT to study proposals submitted by investigators with a wide range of implementation and improvement science-specific experience and covering a variety of content areas. However, our results are limited in that they characterize one academic institution’s familiarity with implementation and improvement science research, and the INSPECT system requires validation in other settings and over a broader range of proposal ratings. Additionally, we measured a high degree of inter-rater reliability for INSPECT when it was applied to a sample of low-scoring proposals. INSPECT’s inter-rater reliability may decrease when applied to a sample of higher quality proposals, in which reviewers are required to discriminate between gradations of quality (i.e., scores of 1–3) rather than mostly scoring the absence of key items (i.e., scores of 0). Future research should test the validity of INSPECT by comparing INSPECT-assigned scores to ratings assigned to approved proposals by the NIH Dissemination and Implementation Research in Health study section. Future research should also assess the relationship between INSPECT score assignments and successful study completion to determine the utility of INSPECT as a mechanism for ensuring the quality and impact of funded research. To aid in these prospective research efforts, forthcoming proposal calls from CIIS will specifically use INSPECT as the proposal evaluation criteria.

Although multiple tools exist to aid researchers in writing implementation science proposals [6, 29, 30], few resources exist to support grant reviewers. This study identified additional functionality of Proctor et al.’s “ten key ingredients” as a guide for writers by developing it into a detailed checklist for proposal reviewers. The current research makes a substantive contribution to implementation and improvement sciences by demonstrating the utility and reliability of a new tool designed to aid grant reviewers in identifying high-quality research.
Conclusion
In conclusion, we operationalized an implementation and improvement research-specific scoring system to provide guidance for proposal writers and grant reviewers. We demonstrated the utility and reliability of the new INSPECT scoring system in evaluating the quality of implementation and improvement sciences research proposed at one academic medical center. The prevalence of low scores across the majority of INSPECT criteria suggests a need to promote education about the goals of implementation and improvement science, including the conceptual and methodological distinctions from efficacy and effectiveness research.

Abbreviations
CIIS: Center for Implementation and Improvement Sciences; INSPECT: Implementation and Improvement Science Proposal Evaluation Criteria; NIH: National Institutes of Health

Acknowledgements
We would like to thank the investigators who submitted pilot grant applications to the Center for Implementation and Improvement Sciences Pilot Grant Program in 2016 and 2017. Creating the Implementation and Improvement Science Proposals Evaluation Criteria would not have been possible without their submissions. We also appreciate the feedback from CIIS grant application reviewers, which was instrumental in identifying the need for new scoring criteria. The CIIS team appreciates the ongoing guidance, interest, and support from David Coleman. Thanks also to Kevin Griffith for his feedback on measures of reliability.

Funding
This research was supported with funding from the Evans Medical Foundation Inc.

Availability of data and materials
The datasets generated and/or analyzed during the current study are not publicly available because they represent study proposals prepared by individual investigators. Proposal scoring data are available from the corresponding author on reasonable request.

Authors' contributions
DB, CGA, AJW, and MD conducted the initial thematic analysis. MD and DB created the original scoring criteria. ELC revised the scoring criteria. ELC, DB, AJW, and MD reviewed and finalized the scoring criteria. ELC and DB piloted the use of the scoring criteria and analyzed the score data. ELC drafted and revised the manuscript based on comments from coauthors. AJW, MD, DB, and EKP provided manuscript comments and revisions. All authors read and approved the final manuscript.

Ethics approval and consent to participate
This study was reviewed and determined to not qualify as human subjects research by the Boston University Medical Campus Institutional Review Board (reference number H-37709).

Competing interests
The authors declare that they have no competing interests.

Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Author details
Evans Center for Implementation and Improvement Sciences, Boston University School of Medicine, 88 East Newton Street, Vose 216, Boston, MA 02118, USA. Department of Health Law, Policy & Management, Boston University School of Public Health, Boston, MA, USA. Section of Pulmonary, Allergy, and Critical Care Medicine, Department of Medicine, Boston University School of Medicine, Boston, MA, USA. Behavioral Sciences and Health Education Department, Rollins School of Public Health, Emory University, Atlanta, GA, USA. Center for Mental Health Services Research, The Brown School at Washington University in St. Louis, St. Louis, MO, USA. Section of Infectious Diseases, Department of Medicine, Boston University School of Medicine, Boston, MA, USA. Center for Healthcare Organization and Implementation Research, Edith Nourse Rogers Memorial VA Hospital, Bedford, MA, USA.

Received: 10 April 2018 Accepted: 22 May 2018

References
1. Glasgow RE, Lichtenstein E, Marcus AC. Why don't we see more translation of health promotion research to practice? Rethinking the efficacy-to-effectiveness transition. Am J Public Health. 2003;93:1261–7.
2. Neta G, Sanchez MA, Chambers DA, Phillips SM, Leyva B, Cynkin L, et al. Implementation science in cancer prevention and control: a decade of grant funding by the National Cancer Institute and future directions. Implement Sci. 2015;10:4. https://doi.org/10.1186/s13012-014-0200-2.
3. Purtle J, Peters R, Brownson RC. A review of policy dissemination and implementation research funded by the National Institutes of Health, 2007–2014. Implement Sci. 2015;11(1). https://doi.org/10.1186/s13012-015-0367-1.
4. Tinkle M, Kimball R, Haozous EA, Shuster G, Meize-Grochowski R. Dissemination and implementation research funded by the US National Institutes of Health, 2005–2012. Nurs Res Pract. 2013;2013:909606. https://doi.org/10.1155/2013/909606.
5. Smits PA, Denis J-L. How research funding agencies support science integration into policy and practice: an international overview. Implement Sci. 2014;9:28. https://doi.org/10.1186/1748-5908-9-28.
6. Proctor EK, Powell BJ, Baumann AA, Hamilton AM, Santens RL. Writing implementation research grant proposals: ten key ingredients. Implement Sci. 2012;7:96. https://doi.org/10.1186/1748-5908-7-96.
7. Center for Implementation and Improvement Sciences. Pilot Grant Program. 2016. http://sites.bu.edu/ciis/pilotgrants/. Accessed 2 Nov 2017.
8. National Institutes of Health. Definitions of criteria and considerations for research project grant (RPG/X01/R01/R03/R21/R33/R34) critiques. 2016. https://archives.nih.gov/asites/grants/11-14-2016/Grants/peer/critiques/rpg_D.htm. Accessed 26 Oct 2017.
9. Green J, Thorogood N. Qualitative methods for health research. 3rd ed. Thousand Oaks: SAGE Publications Ltd; 2013.
10. Hayes AF, Krippendorff K. Answering the call for a standard reliability measure for coding data. Commun Methods Meas. 2007;1:77–89. https://doi.org/10.1080/19312450709336664.
11. Krippendorff K. Content analysis: an introduction to its methodology. Thousand Oaks, CA: SAGE Publications; 2004.
12. R Core Team. R: a language and environment for statistical computing. 2016. https://www.r-project.org.
13. Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Admin Pol Ment Health. 2011;38:65–76. https://doi.org/10.1007/s10488-010-0319-7.
14. Marsh HW, Jayasinghe UW, Bond NW. Improving the peer-review process for grant applications: reliability, validity, bias, and generalizability. Am Psychol. 2008;63:160–8. https://doi.org/10.1037/0003-066X.63.3.160.
15. Sattler DN, McKnight PE, Naney L, Mathis R. Grant peer review: improving inter-rater reliability with training. PLoS One. 2015;10:e0130450. https://doi.org/10.1371/journal.pone.0130450.
16. Demicheli V, Di Pietrantonj C. Peer review for improving the quality of grant applications. Cochrane Database Syst Rev. 2007:MR000003. https://doi.org/10.1002/14651858.MR000003.pub2.
17. Jonsson A, Svingby G. The use of scoring rubrics: reliability, validity and educational consequences. Educ Res Rev. 2007;2:130–44. https://doi.org/10.1016/j.edurev.2007.05.002.
18. Inouye SK, Fiellin DA. An evidence-based guide to writing grant proposals for clinical research. Ann Intern Med. 2005;142:274–82. http://www.ncbi.nlm.nih.gov/pubmed/15710960. Accessed 28 Feb 2018.
19. Proctor EK, Landsverk J, Aarons G, Chambers D, Glisson C, Mittman B. Implementation research in mental health services: an emerging science with conceptual, methodological, and training challenges. Adm Policy Ment Health. 2009;36:24–34. https://doi.org/10.1007/s10488-008-0197-4.
20. Proctor EK, Powell BJ, McMillen JC. Implementation strategies: recommendations for specifying and reporting. Implement Sci. 2013;8:139. https://doi.org/10.1186/1748-5908-8-139.
21. Pinnock H, Barwick M, Carpenter CR, Eldridge S, Grandes G, Griffiths CJ, et al. Standards for reporting implementation studies (StaRI): explanation and elaboration document. BMJ Open. 2017;7:e013318. https://doi.org/10.1136/bmjopen-2016-013318.
22. Oxman AD. Grading quality of evidence and strength of recommendations. BMJ. 2004;328:1490. http://www.bmj.com/content/bmj/328/7454/1490.abridgement.pdf. Accessed 28 Feb 2018.
23. Ebell MH, Siwek J, Weiss BD, Woolf SH, Susman J, Ewigman B, et al. Strength of recommendation taxonomy (SORT): a patient-centered approach to grading evidence in the medical literature. Am Fam Physician. 2004;69:548–56. https://www.ncbi.nlm.nih.gov/pubmed/14971837. Accessed 28 Feb 2018.
24. Rycroft-Malone J, Seers K, Titchen A, Harvey G, Kitson A, McCormack B. What counts as evidence in evidence-based practice? J Adv Nurs. 2004;47:81–90. https://doi.org/10.1111/j.1365-2648.2004.03068.x.
25. Rycroft-Malone J, Seers K, Chandler J, Hawkes CA, Crichton N, Allen C, et al. The role of evidence, context, and facilitation in an implementation trial: implications for the development of the PARIHS framework. Implement Sci. 2013;8:28. https://doi.org/10.1186/1748-5908-8-28.
26. Prasad V, Ioannidis JP. Evidence-based de-implementation for contradicted, unproven, and aspiring healthcare practices. Implement Sci. 2014;9(1). https://doi.org/10.1186/1748-5908-9-1.
27. McCaughey D, Bruning NS. Rationality versus reality: the challenges of evidence-based decision making for health policy makers. Implement Sci. 2010;5:39. https://doi.org/10.1186/1748-5908-5-39.
28. Bernet AC, Willens DE, Bauer MS. Effectiveness-implementation hybrid designs: implications for quality improvement science. Implement Sci. 2013;8(Suppl 1):S2. https://doi.org/10.1186/1748-5908-8-S1-S2.
29. Brownson RC, Colditz GA, Dobbins M, Emmons KM, Kerner JF, Padek M, et al. Concocting that magic elixir: successful grant application writing in dissemination and implementation research. Clin Transl Sci. 2015;8:710–6. https://doi.org/10.1111/cts.12356.
30. University of Colorado Implementation Science Program. Ten key ingredients to writing successful D&I research proposals. 2018. http://www.crispebooks.org/DIFundingTips. Accessed 4 Apr 2018.
